Binning demonstration on locally generated fake data#
In this example, we generate a table with random data simulating a single-event dataset. We showcase the binning method, first on a single pandas table using bin_partition, and then with its distributed counterpart bin_dataframe, which operates on dask dataframes. bin_partition is rarely called directly: it is the function that bin_dataframe applies to each partition of the dask dataframe.
[1]:
import dask
import numpy as np
import pandas as pd
import dask.dataframe
import matplotlib.pyplot as plt
from sed.binning import bin_partition, bin_dataframe
%matplotlib widget
Generate Fake Data#
[2]:
n_pts = 100000
cols = ["posx", "posy", "energy"]
df = pd.DataFrame(np.random.randn(n_pts, len(cols)), columns=cols)
df
[2]:
|       | posx      | posy      | energy    |
|-------|-----------|-----------|-----------|
| 0     | -0.061631 | -1.013243 | 0.545225  |
| 1     | -0.917284 | -0.142692 | -1.479003 |
| 2     | -1.093951 | -0.714302 | 0.812720  |
| 3     | 2.573280  | -0.091704 | -0.902088 |
| 4     | 0.597694  | 1.678061  | -0.232981 |
| ...   | ...       | ...       | ...       |
| 99995 | 0.203458  | 0.261983  | -0.196509 |
| 99996 | -1.480062 | 1.779615  | 0.494100  |
| 99997 | -0.390310 | 0.930799  | -0.043930 |
| 99998 | -0.031436 | 0.425172  | 0.500675  |
| 99999 | 0.670388  | -0.147122 | 0.109378  |

100000 rows × 3 columns
Define the binning range#
[3]:
binAxes = ["posx", "posy", "energy"]
nBins = [120, 120, 120]
binRanges = [(-2, 2), (-2, 2), (-2, 2)]
# coordinate vectors used to label the binned output along each axis
coords = {ax: np.linspace(r[0], r[1], n) for ax, r, n in zip(binAxes, binRanges, nBins)}
Compute the binning along the pandas dataframe#
[4]:
%%time
res = bin_partition(
part=df,
bins=nBins,
axes=binAxes,
ranges=binRanges,
hist_mode="numba",
)
CPU times: user 1.18 s, sys: 16.7 ms, total: 1.19 s
Wall time: 1.19 s
[5]:
fig, axs = plt.subplots(1, 3, figsize=(6, 1.875), constrained_layout=True)
# show the three 2D projections obtained by summing over each axis in turn
for i in range(3):
    axs[i].imshow(res.sum(i))
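The raw NumPy array returned by bin_partition carries no axis labels. As a small optional sketch (not part of the original example), the coords dictionary defined above can be used to wrap the result in a labeled xarray.DataArray, making the projections self-describing:

import xarray as xr

# Wrap the raw histogram in a DataArray, labeling each axis with the
# coordinate vectors computed earlier (treated here as approximate bin centers).
res_xr = xr.DataArray(res, dims=binAxes, coords=coords)
res_xr.sum("energy").plot()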
Transform to dask dataframe#
[6]:
ddf = dask.dataframe.from_pandas(df, npartitions=50)
ddf
[6]:
Dask DataFrame Structure:

|                | posx    | posy    | energy  |
|----------------|---------|---------|---------|
| npartitions=50 |         |         |         |
| 0              | float64 | float64 | float64 |
| 2000           | ...     | ...     | ...     |
| ...            | ...     | ...     | ...     |
| 98000          | ...     | ...     | ...     |
| 99999          | ...     | ...     | ...     |

Dask Name: from_pandas, 1 graph layer
Compute distributed binning on the partitioned dask dataframe#
For this small dataset, the distributed method gives no significant improvement over the pandas implementation, at least with this number of partitions. A single partition would actually be faster (a sketch for trying this yourself follows the timing cell below), but we use multiple partitions for demonstration purposes.
[7]:
%%time
res = bin_dataframe(
df=ddf,
bins=nBins,
axes=binAxes,
ranges=binRanges,
hist_mode="numba",
)
CPU times: user 640 ms, sys: 176 ms, total: 817 ms
Wall time: 715 ms
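To check the partition-count claim above, here is a quick sketch (timings are machine-dependent) that rebuilds the dask dataframe with different numbers of partitions and times the same binning call:

import time

# On a dataset this small, scheduling overhead dominates, so fewer
# partitions are typically faster.
for n in [1, 10, 50]:
    ddf_n = dask.dataframe.from_pandas(df, npartitions=n)
    t0 = time.perf_counter()
    bin_dataframe(df=ddf_n, bins=nBins, axes=binAxes, ranges=binRanges, hist_mode="numba")
    print(f"npartitions={n}: {time.perf_counter() - t0:.2f} s")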
[8]:
fig, axs = plt.subplots(1, 3, figsize=(6, 1.875), constrained_layout=True)
# the labeled xarray result lets us sum over named dimensions directly
for dim, ax in zip(binAxes, axs):
    res.sum(dim).plot(ax=ax)
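Because bin_dataframe returns a labeled xarray.DataArray (which is what makes the dimension-wise res.sum(dim) calls above possible), you can also select coordinate windows before projecting. A small sketch, assuming the result carries bin-center coordinates along each axis:

# Project only the central energy window onto the (posx, posy) plane.
res.sel(energy=slice(-1, 1)).sum("energy").plot()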