Binning demonstration on locally generated fake data#
In this example, we generate a table of random data simulating a single-event dataset. We showcase the binning method, first on a single pandas table using the bin_partition function, and then with the distributed bin_dataframe function, which operates on dask dataframes. In practice, bin_partition is rarely called directly: it is the function that bin_dataframe applies to each partition of the dask dataframe.
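Conceptually, the relationship between the two can be sketched as follows. This is a simplified sketch, not the actual sed implementation (the real bin_dataframe handles more options and manages the computation itself): it maps bin_partition over the partitions of the dask dataframe and sums the partial histograms.

import dask
import numpy as np
from sed.binning import bin_partition

def bin_dataframe_sketch(ddf, bins, axes, ranges):
    # one delayed histogram per dask partition
    delayed_hists = [
        dask.delayed(bin_partition)(part, bins=bins, axes=axes, ranges=ranges)
        for part in ddf.to_delayed()
    ]
    # compute the partial histograms (in parallel) and sum them into the total
    return np.sum(dask.compute(*delayed_hists), axis=0)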
[1]:
import dask
import numpy as np
import pandas as pd
import dask.dataframe
import matplotlib.pyplot as plt
from sed.binning import bin_partition, bin_dataframe
%matplotlib widget
Generate Fake Data#
[2]:
n_pts = 100000
cols = ["posx", "posy", "energy"]
df = pd.DataFrame(np.random.randn(n_pts, len(cols)), columns=cols)
df
[2]:
|       | posx      | posy      | energy    |
|-------|-----------|-----------|-----------|
| 0     | 1.503362  | 0.407499  | -0.051490 |
| 1     | -0.464478 | 0.046401  | 0.873805  |
| 2     | 0.264292  | -1.302149 | -1.327216 |
| 3     | -1.762067 | 0.840551  | 1.654777  |
| 4     | -1.056144 | 0.918105  | 0.400613  |
| ...   | ...       | ...       | ...       |
| 99995 | 0.671038  | 0.703163  | 0.361904  |
| 99996 | -1.720362 | -0.105392 | 0.969213  |
| 99997 | 0.723874  | -0.308329 | -1.854372 |
| 99998 | -0.568478 | -0.366917 | 1.368988  |
| 99999 | 0.313023  | 0.366284  | 0.331397  |
100000 rows × 3 columns
Define the binning range#
[3]:
# Axes to bin, number of bins per axis, and (min, max) range for each axis
binAxes = ["posx", "posy", "energy"]
nBins = [120, 120, 120]
binRanges = [(-2, 2), (-2, 2), (-2, 2)]
# Axis coordinates for each binned dimension, e.g. for labeling the output
coords = {ax: np.linspace(r[0], r[1], n) for ax, r, n in zip(binAxes, binRanges, nBins)}
Compute the binning on the pandas dataframe#
[4]:
%%time
res = bin_partition(
    part=df,
    bins=nBins,
    axes=binAxes,
    ranges=binRanges,
    hist_mode="numba",
)
CPU times: user 1.21 s, sys: 19.9 ms, total: 1.23 s
Wall time: 1.23 s
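bin_partition returns a plain numpy array. The coords dictionary defined above is not needed for the binning itself, but it can be used to attach axis labels to the result; a minimal sketch, assuming xarray is available and taking the linspace values as nominal bin positions:

import xarray as xr

# wrap the raw histogram in a labeled DataArray (sketch)
res_xr = xr.DataArray(res, dims=binAxes, coords=coords)
# example: project onto the posx/posy plane and plot
res_xr.sum("energy").plot()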
[5]:
fig, axs = plt.subplots(1, 3, figsize=(8, 2.5), constrained_layout=True)
for i in range(3):
    # project the 3D histogram onto a 2D plane by summing over axis i
    axs[i].imshow(res.sum(i))
Transform to dask dataframe#
[6]:
ddf = dask.dataframe.from_pandas(df, npartitions=50)
ddf
[6]:
Dask DataFrame Structure:
|                | posx    | posy    | energy  |
|----------------|---------|---------|---------|
| npartitions=50 |         |         |         |
| 0              | float64 | float64 | float64 |
| 2000           | ...     | ...     | ...     |
| ...            | ...     | ...     | ...     |
| 98000          | ...     | ...     | ...     |
| 99999          | ...     | ...     | ...     |
Dask Name: from_pandas, 1 graph layer
Compute distributed binning on the partitioned dask dataframe#
For this small dataset, the distributed binning gives no significant speedup over the pandas implementation, at least with this number of partitions, since the per-partition overhead dominates. A single partition would be faster (you can try…), but we use multiple partitions for demonstration purposes.
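To see how the partitioning overhead behaves, one could repartition the dataframe and compare wall times; a quick sketch using the same binning arguments as below (repartition is a standard dask.dataframe method):

import time

# compare wall time for different numbers of partitions (sketch)
for npart in (1, 10, 50):
    ddf_n = ddf.repartition(npartitions=npart)
    t0 = time.perf_counter()
    bin_dataframe(df=ddf_n, bins=nBins, axes=binAxes, ranges=binRanges, hist_mode="numba")
    print(f"{npart:3d} partitions: {time.perf_counter() - t0:.2f} s")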
[7]:
%%time
res = bin_dataframe(
    df=ddf,
    bins=nBins,
    axes=binAxes,
    ranges=binRanges,
    hist_mode="numba",
)
CPU times: user 389 ms, sys: 557 ms, total: 946 ms
Wall time: 509 ms
[8]:
fig, axs = plt.subplots(1, 3, figsize=(8, 2.5), constrained_layout=True)
for dim, ax in zip(binAxes, axs):
    # sum the binned xarray over one dimension and plot the remaining 2D projection
    res.sum(dim).plot(ax=ax)