I am trying to bin and group a large dataset in Python. It is a list of measured electrons with the properties (positionX, positionY, energy, time). I need to group it by positionX and positionY and bin it into energy classes.
So far I have been able to do this with pandas, but I would like to run it in parallel, so I am trying to use dask.
The groupby method works very well, but unfortunately I run into difficulties when trying to bin the data in energy. I found a solution using pandas.cut(), but it requires calling compute() on the raw dataset (essentially turning it into non-parallel code). Is there an equivalent to pandas.cut() in dask, or is there another (elegant) way to achieve the same functionality?
import dask.dataframe
import pandas

# Create a dask dataframe from the array
dd = dask.dataframe.from_array(mainArray, chunksize=100000,
                               columns=('posX', 'posY', 'time', 'energy'))
# Set the bins to bin along energy
bins = range(0, 10000, 500)
# Pull the data into memory so pandas.cut can be applied (non-parallel pandas code...)
data = dd.compute()
energyBinner = pandas.cut(data['energy'], bins)
# Group the data according to the energy bin, posX and posY
grouped = data.groupby([energyBinner, 'posX', 'posY'])
# Count the number of events in each group
numberOfEvents = grouped['time'].count()
Thanks a lot!
You should be able to do dd['energy'].map_partitions(pd.cut, bins).
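For example, here is a minimal sketch. The random mainArray is just a hypothetical stand-in for your measured data, and I assign the binned energies as an extra column before grouping; whether you can instead pass the binned series directly into groupby may depend on your dask version.

import numpy as np
import pandas as pd
import dask.dataframe

# Hypothetical stand-in for the measured electrons (posX, posY, time, energy)
mainArray = np.random.rand(1_000_000, 4) * [100.0, 100.0, 1.0, 10000.0]

dd = dask.dataframe.from_array(mainArray, chunksize=100000,
                               columns=('posX', 'posY', 'time', 'energy'))
bins = range(0, 10000, 500)

# Apply pandas.cut lazily, one partition at a time
energyBins = dd['energy'].map_partitions(pd.cut, bins)

# Attach the binned energy as a column and group by plain column names
dd = dd.assign(energyBin=energyBins)
numberOfEvents = dd.groupby(['energyBin', 'posX', 'posY'])['time'].count()

# Nothing runs in parallel until compute() is called on the (small) result
result = numberOfEvents.compute()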