July 30th, 2013, 06:02 AM
Multithreading and scipy arrays
I am fairly new to Python and have been using it recently for scientific computing; I particularly like the scipy and numpy libraries.
One thing I do quite often is resample n-dimensional data on a uniform grid using scipy's ndimage map_coordinates function. An example usage of this function is:
ndimage.map_coordinates(input_3D_image, input_deformation_grid, output_3D_image, order=self.order, prefilter=False)
Here I resample the input 3D data at the coordinates of the specified input grid, and the interpolated result is written to the output_3D_image variable.
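For context, a minimal self-contained sketch of such a call (the array shapes, the half-voxel shift, and order=1 are illustrative assumptions, not the poster's actual setup):

```python
import numpy as np
from scipy import ndimage

# Made-up 3D input volume.
input_3D_image = np.random.rand(10, 10, 10)

# Coordinates at which to sample: shape (3, ...) for 3-D input.
# Here: the regular grid shifted by half a voxel along each axis.
zz, yy, xx = np.meshgrid(np.arange(10), np.arange(10), np.arange(10),
                         indexing='ij')
input_deformation_grid = np.array([zz + 0.5, yy + 0.5, xx + 0.5])

# The interpolated result is written into the preallocated output array.
output_3D_image = np.empty((10, 10, 10))
ndimage.map_coordinates(input_3D_image, input_deformation_grid,
                        output=output_3D_image, order=1, prefilter=False)
```

The output array has the same shape as the coordinate grid's trailing dimensions, one interpolated value per sample point.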
Now, in theory each pixel/voxel can be processed independently and I am wondering what would be the right way to make this operation parallel.
Any suggestions would be greatly appreciated.
July 30th, 2013, 08:00 PM
I suppose you'd use the threading module, giving each thread a slice of the coordinate grid, roughly:
n = input_deformation_grid.shape[1]  # number of sample points (grid flattened to shape (3, n))
m = n // NUMBER_OF_THREADS           # points per thread
then reassemble the output with numpy.concatenate.
[code]...[/code] tags are essential for Python code and Makefiles!