Demonstration of the conversion pipeline using time-resolved ARPES data stored on Zenodo#

In this example, we pull time-resolved ARPES data from Zenodo and load it into the sed package using functions of the mpes package. We then run a conversion pipeline on it, with steps for visualizing the channels, correcting image distortions, calibrating the momentum space, correcting for energy distortions, and calibrating the energy axis. Finally, the data are binned along the calibrated axes. For performance reasons, it is best to store the data on locally attached storage (not a network drive). This can also be achieved transparently using the included MirrorUtil class.
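
For performance-critical cases, a minimal plain-Python sketch of copying the data to local scratch storage before processing could look like the following (both paths are hypothetical; the MirrorUtil class mentioned above provides such mirroring transparently within sed):

import shutil
from pathlib import Path

remote_data_dir = Path("/path/to/network/share/WSe2")   # hypothetical network location
local_scratch = Path("/tmp/sed_scratch/WSe2")            # hypothetical fast local storage
if remote_data_dir.exists():                              # copy once, then load from local_scratch
    shutil.copytree(remote_data_dir, local_scratch, dirs_exist_ok=True)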

[1]:
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import sed
from sed.dataset import dataset

%matplotlib widget

Load Data#

[2]:
dataset.get("WSe2") # Provide a path to a storage location with at least 20 GByte of free space.
data_path = dataset.dir # This is the path to the data
scandir, caldir = dataset.subdirs # scandir contains the data, caldir contains the calibration files
INFO - Not downloading WSe2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2".
Set 'use_existing' to False if you want to download to a new location.
INFO - Using existing data path for "WSe2": "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2"
INFO - WSe2 data is already present.
[3]:
# create sed processor using the config file:
sp = sed.SedProcessor(folder=scandir, config="../src/sed/config/mpes_example_config.yaml", system_config={}, verbose=True)
INFO - Configuration loaded from: [/home/runner/work/sed/sed/docs/src/sed/config/mpes_example_config.yaml]
INFO - Folder config loaded from: [/home/runner/work/sed/sed/docs/tutorial/sed_config.yaml]
INFO - Default config loaded from: [/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/site-packages/sed/config/default.yaml]
WARNING - Entry "KTOF:Lens:Sample:V" for channel "sampleBias" not found. Skipping the channel.
[4]:
# Apply jittering to X, Y, t, ADC columns.
# Columns are defined in the config, or can be provided as a list.
sp.add_jitter()
INFO - add_jitter: Added jitter to columns ['X', 'Y', 't', 'ADC'].
[5]:
# Plot of the count rate through the scan
rate, secs = sp.loader.get_count_rate(range(100))
plt.plot(secs, rate)
[5]:
[<matplotlib.lines.Line2D at 0x7f1020a01f10>]
[6]:
# The time elapsed in the scan
sp.loader.get_elapsed_time()
[6]:
2588.4949999999994
[7]:
# Inspect data in the dataframe columns:
# axes = ['X', 'Y', 't', 'ADC']
# bins = [100, 100, 100, 100]
# ranges = [(0, 1800), (0, 1800), (130000, 140000), (0, 9000)]
# sp.view_event_histogram(dfpid=1, axes=axes, bins=bins, ranges=ranges)
sp.view_event_histogram(dfpid=2)

Distortion correction and Momentum Calibration workflow#

Distortion correction#

1. Step:#

Bin and load part of the dataframe in detector coordinates, and choose an energy plane where the high-symmetry points can be clearly identified. Either use the interactive tool, or pre-select the range:

[8]:
#sp.bin_and_load_momentum_calibration(df_partitions=20, plane=170)
sp.bin_and_load_momentum_calibration(df_partitions=100, plane=33, width=10, apply=True)

2. Step:#

Next, we select a number of features corresponding to the rotational symmetry of the material, plus the center. These can either be auto-detected (for well-isolated points), or provided as a list (the coordinates can be read off the graph in the cell above). The features are then symmetrized according to the rotational symmetry, and a spline-warping correction for the x/y coordinates is calculated, which corrects for any geometric distortions from the perfect n-fold rotational symmetry.
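
As an illustration of what the symmetrization means geometrically, the following sketch constructs ideal 6-fold-symmetric target positions as a regular hexagon around the center (the center and radius here are hypothetical round values; sed computes the actual symmetrized targets internally):

# Illustration only: ideal target positions for 6-fold rotational symmetry.
center = np.array([249.0, 249.0])        # assumed center position (hypothetical round value)
radius = 100.0                            # assumed mean landmark distance from the center
angles = np.deg2rad(np.arange(6) * 60)    # six equally spaced angles
ideal_targets = center + radius * np.column_stack([np.cos(angles), np.sin(angles)])
print(ideal_targets)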

[9]:
#features = np.array([[203.2, 341.96], [299.16, 345.32], [350.25, 243.70], [304.38, 149.88], [199.52, 152.48], [154.28, 242.27], [248.29, 248.62]])
#sp.define_features(features=features, rotation_symmetry=6, include_center=True, apply=True)
# Manual selection: Use a GUI tool to select peaks:
#sp.define_features(rotation_symmetry=6, include_center=True)
# Autodetect: Uses the DAOStarFinder routine to locate maxima.
# Parameters are:
#   fwhm: Full-width at half maximum of peaks.
#   sigma: Number of standard deviations above the mean value of the image that peaks must have.
#   sigma_radius: Truncation radius (in standard deviations) of the Gaussian kernel used to fit the peaks.
sp.define_features(rotation_symmetry=6, auto_detect=True, include_center=True, fwhm=10, sigma=12, sigma_radius=4, apply=True)

3. Step:#

Generate the nonlinear correction using the splinewarp algorithm. If no landmarks have been defined in the previous step, default parameters from the config are used.

[10]:
# Option whether a central point shall be fixed in the determination of the correction
sp.generate_splinewarp(include_center=True)
INFO - Calculated thin spline correction based on the following landmarks:
pouter_ord: [[203.00661088 342.9867013 ]
 [299.88627446 346.19746514]
 [350.94724605 244.78343147]
 [305.63355431 150.20914559]
 [199.53736623 152.78496404]
 [153.40607743 243.05425187]]
pcent: (249.2332537730513, 249.26135953131777)

Optional (Step 3a):#

Save distortion correction parameters to configuration file in current data folder:

[11]:
# Save generated distortion correction parameters for later reuse
sp.save_splinewarp()
INFO - Saved momentum correction parameters to "sed_config.yaml".

4. Step:#

To adjust scaling, position and orientation of the corrected momentum space image, you can apply further affine transformations to the distortion correction field. Here, first a potential scaling is applied, next a translation, and finally a rotation around the center of the image (defined via the config). One can either use an interactive tool, or provide the adjusted values and apply them directly.
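
For illustration, the sketch below composes such a transformation explicitly as 3x3 homogeneous matrices (scaling, then translation, then rotation about an assumed image center); sed builds and applies the actual transformation internally in pose_adjustment:

# Illustration only: compose scale -> translate -> rotate-about-center.
def compose_affine(scale=1.0, xtrans=0.0, ytrans=0.0, angle=0.0, center=(256.0, 256.0)):
    cx, cy = center
    S = np.array([[scale, 0, 0], [0, scale, 0], [0, 0, 1]])
    T = np.array([[1, 0, xtrans], [0, 1, ytrans], [0, 0, 1]])
    a = np.deg2rad(angle)
    R0 = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
    # Rotation about (cx, cy): shift the center to the origin, rotate, shift back.
    R = np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1]]) @ R0 @ np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]])
    return R @ T @ S

print(compose_affine(xtrans=8, ytrans=7, angle=-4))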

[12]:
#sp.pose_adjustment(xtrans=14, ytrans=18, angle=2)
sp.pose_adjustment(xtrans=8, ytrans=7, angle=-4, apply=True)
INFO - Applied translation with (xtrans=8.0, ytrans=7.0).
INFO - Applied rotation with angle=-4.0.

5. Step:#

Finally, the momentum correction is applied to the dataframe, and the corresponding metadata are stored.

[13]:
sp.apply_momentum_correction()
INFO - Adding corrected X/Y columns to dataframe:
Calculating inverse deformation field, this might take a moment...
INFO - Dask DataFrame Structure:
                       X        Y        t      ADC       Xm       Ym
npartitions=100
                 float64  float64  float64  float64  float64  float64
                     ...      ...      ...      ...      ...      ...
...                  ...      ...      ...      ...      ...      ...
                     ...      ...      ...      ...      ...      ...
                     ...      ...      ...      ...      ...      ...
Dask Name: apply_dfield, 206 graph layers

Momentum calibration workflow#

1. Step:#

First, the momentum scaling needs to be calibrated. One can either provide the coordinates of one point outside the center together with its distance to the Brillouin zone center (which is assumed to be located at the center of the image), specify two points on the image and their distance (where the 2nd point marks the BZ center), or provide absolute k-coordinates of two distinct momentum points.

If no points are provided, an interactive tool is created. Here, a left mouse click selects the off-center point (brillouin_zone_centered=True), or toggle-selects the off-center and the center point.
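
As a quick sanity check, the momentum scale implied by a single-point calibration can be estimated by hand from the pixel distance between the selected feature and the BZ center (coordinates taken from the cell below; this is an illustration only, sed performs the calculation inside calibrate_momentum_axes):

# Illustration only: single-point momentum calibration arithmetic.
feature = np.array([308.0, 345.0])       # off-center feature (point_a in the cell below)
bz_center = np.array([247.0, 249.0])     # approximate BZ center (cf. the commented point_b below)
k_dist = 2 / np.sqrt(3) * np.pi / 3.28   # same k-distance as used below
print(f"momentum scale: {k_dist / np.linalg.norm(feature - bz_center):.4f} 1/Angstrom per pixel")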

[14]:
k_distance = 2/np.sqrt(3)*np.pi/3.28 # k-distance of the K-point in a hexagonal Brillouin zone
#sp.calibrate_momentum_axes(k_distance = k_distance)
point_a = [308, 345]
sp.calibrate_momentum_axes(point_a=point_a, k_distance = k_distance, apply=True)
#point_b = [247, 249]
#sp.calibrate_momentum_axes(point_a=point_a, point_b = point_b, k_coord_a = [.5, 1.1], k_coord_b = [0, 0], equiscale=False)

Optional (Step 1a):#

Save momentum calibration parameters to configuration file in current data folder:

[15]:
# Save generated momentum calibration parameters for later reuse
sp.save_momentum_calibration()
INFO - Saved momentum calibration parameters to sed_config.yaml

2. Step:#

Now, the distortion correction and momentum calibration need to be applied to the dataframe.

[16]:
sp.apply_momentum_calibration()
INFO - Adding kx/ky columns to dataframe:
INFO - Using momentum calibration parameters generated on 11/04/2025, 21:52:02
INFO - Dask DataFrame Structure:
                       X        Y        t      ADC       Xm       Ym       kx       ky
npartitions=100
                 float64  float64  float64  float64  float64  float64  float64  float64
                     ...      ...      ...      ...      ...      ...      ...      ...
...                  ...      ...      ...      ...      ...      ...      ...      ...
                     ...      ...      ...      ...      ...      ...      ...      ...
                     ...      ...      ...      ...      ...      ...      ...      ...
Dask Name: assign, 216 graph layers

Energy Correction and Calibration workflow#

Energy Correction (optional)#

The purpose of the energy correction is to correct for any momentum-dependent distortion of the energy axis, e.g. from geometric effects in the flight tube or from space charge.

1. Step:#

Here, one can select the functional form to be used, and adjust its parameters. The binned data used for the momentum calibration are plotted around the Fermi energy (defined by tof_fermi), and the correction function is plotted on top. Possible correction functions are: “spherical” (parameter: diameter), “Lorentzian” (parameter: gamma), “Gaussian” (parameter: sigma), and “Lorentzian_asymmetric” (parameters: gamma, amplitude2, gamma2).

One can either use an interactive alignment tool, or provide parameters directly.
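
To get a feeling for the shape of such a correction, the sketch below plots a Lorentzian-like, momentum-dependent shift along one detector axis, using the parameter values from the next cell. The exact parameterization used internally by sed may differ; this is purely illustrative:

# Illustration only: one plausible Lorentzian-shaped correction profile.
amplitude, gamma, center_x = 2.5, 920, 730
x = np.linspace(0, 1800, 500)                                         # detector coordinate
shift = amplitude * (gamma**2 / (gamma**2 + (x - center_x)**2) - 1)   # assumed functional form
plt.figure()
plt.plot(x, shift)
plt.xlabel("detector X (pixel)")
plt.ylabel("assumed TOF shift (arb. units)")
plt.show()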

[17]:
#sp.adjust_energy_correction(amplitude=2.5, center=(730, 730), gamma=920, tof_fermi = 66200)
sp.adjust_energy_correction(amplitude=2.5, center=(730, 730), gamma=920, tof_fermi = 66200, apply=True)

Optional (Step 1a):#

Save energy correction parameters to configuration file in current data folder:

[18]:
# Save generated energy correction parameters for later reuse
sp.save_energy_correction()
INFO - Saved energy correction parameters to sed_config.yaml

2. Step:#

After adjustment, the energy correction is directly applied to the TOF axis.

[19]:
sp.apply_energy_correction()
INFO - Applying energy correction to dataframe...
INFO - Using energy correction parameters generated on 11/04/2025, 21:52:03
INFO - Dask DataFrame Structure:
                       X        Y        t      ADC       Xm       Ym       kx       ky       tm
npartitions=100
                 float64  float64  float64  float64  float64  float64  float64  float64  float64
                     ...      ...      ...      ...      ...      ...      ...      ...      ...
...                  ...      ...      ...      ...      ...      ...      ...      ...      ...
                     ...      ...      ...      ...      ...      ...      ...      ...      ...
                     ...      ...      ...      ...      ...      ...      ...      ...      ...
Dask Name: assign, 230 graph layers

Energy calibration#

For calibrating the energy axis, a set of data taken at different bias voltages around the value where the measurement was taken is required.

1. Step:#

In a first step, the data are loaded, binned along the TOF dimension, and normalized. The bias voltages used can either be provided, or read from attributes in the source files, if present.

[20]:
# Load energy calibration EDCs
energycalfolder = caldir
scans = np.arange(1,12)
voltages = np.arange(12,23,1)
files = [energycalfolder + r'/Scan' + str(num).zfill(3) + '_' + str(num+11) + '.h5' for num in scans]
sp.load_bias_series(data_files=files, normalize=True, biases=voltages, ranges=[(64000, 75000)])
WARNING - Entry "KTOF:Lens:Sample:V" for channel "sampleBias" not found. Skipping the channel.

2. Step:#

Next, the same peak or feature needs to be selected in each curve. For this, one needs to define “ranges” for each curve, within which the peak of interest is located. One can either provide these ranges manually, or provide one range for a “reference” curve, and infer the ranges for the other curves using a dynamic time warping algorithm.

[21]:
# Option 1 = specify the ranges containing a common feature (e.g. an equivalent peak) for all bias scans
# rg = [(129031.03103103103, 129621.62162162163), (129541.54154154155, 130142.14214214214), (130062.06206206206, 130662.66266266267), (130612.61261261262, 131213.21321321322), (131203.20320320321, 131803.8038038038), (131793.7937937938, 132384.38438438438), (132434.43443443443, 133045.04504504506), (133105.10510510512, 133715.71571571572), (133805.8058058058, 134436.43643643643), (134546.54654654654, 135197.1971971972)]
# sp.find_bias_peaks(ranges=rg, infer_others=False)
# Option 2 = specify the range for one curve and infer the others
# This will open an interactive tool to select the correct ranges for the curves.
# IMPORTANT: Don't choose the range too narrowly around a peak, and choose a ref_id
# somewhere in the middle or towards larger biases!
rg = (66100, 67000)
sp.find_bias_peaks(ranges=rg, ref_id=5, infer_others=True, apply=True)
INFO - Use feature ranges: [(np.float64(64638.0), np.float64(65386.0)), (np.float64(64913.0), np.float64(65683.0)), (np.float64(65188.0), np.float64(65991.0)), (np.float64(65474.0), np.float64(66310.0)), (np.float64(65782.0), np.float64(66651.0)), (np.float64(66101.0), np.float64(67003.0)), (np.float64(66442.0), np.float64(67388.0)), (np.float64(66794.0), np.float64(67795.0)), (np.float64(67190.0), np.float64(68213.0)), (np.float64(67575.0), np.float64(68664.0)), (np.float64(67993.0), np.float64(69148.0))].
INFO - Extracted energy features: [[6.51330000e+04 9.43293095e-01]
 [6.54080000e+04 9.52672958e-01]
 [6.57050000e+04 9.47981834e-01]
 [6.60130000e+04 9.46402431e-01]
 [6.63430000e+04 9.50330198e-01]
 [6.66730000e+04 9.63564813e-01]
 [6.70360000e+04 9.59838033e-01]
 [6.73990000e+04 9.67203319e-01]
 [6.78060000e+04 9.55975950e-01]
 [6.82130000e+04 9.56439197e-01]
 [6.86750000e+04 9.70683038e-01]].

3. Step:#

Next, the detected peak positions and bias voltages are used to determine the calibration function. Essentially, the function Energy(TOF) is determined either by least-squares fitting of the functional form d^2/(t-t0)^2 via lmfit (method: “lmfit”), or by analytically obtaining a polynomial approximation (method: “lstsq” or “lsqr”). The parameter ref_energy is used to define the absolute energy position of the feature used for calibration on the calibrated energy scale. energy_scale can be either “kinetic” (decreasing energy with increasing TOF), or “binding” (increasing energy with increasing TOF).

After calculating the calibration, all traces corrected with the calibration are plotted on top of each other, and the calibration function (Energy(TOF)) together with the extracted features is plotted.
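
The d^2/(t-t0)^2 form follows from free-drift time-of-flight kinematics: an electron with kinetic energy E crosses a field-free drift length d in a time t - t0 = d*sqrt(m_e/(2E)), so inverting gives E(t) = m_e*d^2/(2*(t - t0)^2), up to the offset E0. A short numerical illustration (the drift length is a hypothetical value, not read from the instrument configuration):

import scipy.constants as const

drift_length = 0.9                                     # assumed drift length in m (hypothetical)
E_kin = 10.0                                           # kinetic energy in eV
velocity = np.sqrt(2 * E_kin * const.e / const.m_e)    # electron velocity in m/s
print(f"drift time for {E_kin} eV over {drift_length} m: {drift_length / velocity * 1e9:.1f} ns")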

[22]:
# Eref can be used to set the absolute energy (kinetic energy, E-EF, etc.) of the feature used for energy calibration (if known)
Eref=-1.3
# the lmfit method uses a fit of (d/(t-t0))**2 to determine the energy calibration
# limits and starting values for the fitting parameters can be provided as dictionaries
sp.calibrate_energy_axis(
    ref_energy=Eref,
    method="lmfit",
    energy_scale='kinetic',
    d={'value':1.0,'min': .7, 'max':1.2, 'vary':True},
    t0={'value':8e-7, 'min': 1e-7, 'max': 1e-6, 'vary':True},
    E0={'value': 0., 'min': -100, 'max': 0, 'vary': True},
)
INFO - [[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 43
    # data points      = 11
    # variables        = 3
    chi-square         = 0.00218781
    reduced chi-square = 2.7348e-04
    Akaike info crit   = -87.7502612
    Bayesian info crit = -86.5565754
[[Variables]]
    d:   1.09544523 +/- 0.03646409 (3.33%) (init = 1)
    t0:  7.6073e-07 +/- 7.5361e-09 (0.99%) (init = 8e-07)
    E0: -46.6158341 +/- 0.79487877 (1.71%) (init = 0)
[[Correlations]] (unreported correlations are < 0.100)
    C(d, t0)  = -0.9997
    C(d, E0)  = -0.9988
    C(t0, E0) = +0.9974

Optional (Step 3a):#

Save energy calibration parameters to configuration file in current data folder:

[23]:
# Save generated energy calibration parameters for later reuse
sp.save_energy_calibration()
INFO - Saved energy calibration parameters to "sed_config.yaml".

4. Step:#

Finally, the energy axis is added to the dataframe. Here, the applied bias voltage of the measurement is taken into account to provide the correct energy offset. If the bias cannot be read from the file, it can be provided manually.

[24]:
sp.append_energy_axis(bias_voltage=16.8)
INFO - Adding energy column to dataframe:
INFO - Using energy calibration parameters generated on 11/04/2025, 21:52:11
INFO - Dask DataFrame Structure:
                       X        Y        t      ADC       Xm       Ym       kx       ky       tm   energy
npartitions=100
                 float64  float64  float64  float64  float64  float64  float64  float64  float64  float64
                     ...      ...      ...      ...      ...      ...      ...      ...      ...      ...
...                  ...      ...      ...      ...      ...      ...      ...      ...      ...      ...
                     ...      ...      ...      ...      ...      ...      ...      ...      ...      ...
                     ...      ...      ...      ...      ...      ...      ...      ...      ...      ...
Dask Name: assign, 243 graph layers

Delay calibration#

The delay axis is calculated from the ADC input column based on the provided delay range. Alternatively, the delay scan range can also be extracted from attributes inside a source file, if present.
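
In essence, the delay value is a linear map of the ADC reading onto the provided delay range. The short check below reproduces this mapping by hand, using the adc_range reported in the log output of calibrate_delay_axis further down (illustration only):

# Illustration only: linear ADC -> delay mapping.
adc_range = (475.0, 6400.0)      # taken from the log output below
delay_range = (-500, 1500)
adc_value = 6317.0               # a typical ADC value from the dataframe
frac = (adc_value - adc_range[0]) / (adc_range[1] - adc_range[0])
print(f"ADC {adc_value} -> delay {delay_range[0] + frac * (delay_range[1] - delay_range[0]):.1f}")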

[25]:
sp.dataframe.head()
[25]:
X Y t ADC Xm Ym kx ky tm energy
0 -0.381883 -0.381883 -0.381883 -0.381883 0.000000 0.000000 -2.060071 -2.060071 -48.615043 -25.224026
1 365.025813 1002.025813 70101.025813 6317.025813 355.588632 1032.230050 -1.106246 0.708766 70084.015066 -9.315094
2 761.129089 818.129089 75615.129089 6316.129089 791.536119 839.726038 0.063133 0.192397 75614.245464 -16.717363
3 691.742054 970.742054 66454.742054 6316.742054 713.275575 984.780351 -0.146792 0.581488 66449.067969 -0.832515
4 671.164583 712.164583 73026.164583 6317.164583 697.147835 741.523214 -0.190053 -0.071021 73025.780036 -13.817320
[26]:
#from pathlib import Path
#datafile = "file.h5"
#print(datafile)
#sp.calibrate_delay_axis(datafile=datafile)
delay_range = (-500, 1500)
sp.calibrate_delay_axis(delay_range=delay_range, preview=True)
INFO - Adding delay column to dataframe:
INFO - Append delay axis using delay_range = [-500, 1500] and adc_range = [475.0, 6400.0]
INFO -              X            Y             t          ADC           Xm  \
0    -0.074160    -0.074160     -0.074160    -0.074160     0.000000
1   364.613131  1001.613131  70100.613131  6316.613131   355.149087
2   760.707760   817.707760  75614.707760  6315.707760   791.099491
3   691.569902   970.569902  66454.569902  6316.569902   713.094516
4   670.844952   711.844952  73025.844952  6316.844952   696.808126
5   298.801329  1163.801329  68458.801329  6315.801329   282.053744
6   571.059429   665.059429  73903.059429  6316.059429   590.071345
7   821.689111   544.689111  72631.689111  6317.689111   847.772900
8   818.393259   416.393259  72422.393259  6317.393259   838.937705
9  1005.614553   666.614553  72801.614553  6316.614553  1039.636260

            Ym        kx        ky            tm     energy        delay
0     0.000000 -2.060071 -2.060071    -48.289337 -25.223943  -660.362586
1  1031.868657 -1.107425  0.707797  70083.597303  -9.314335  1471.852534
2   839.330187  0.061962  0.191335  75613.834162 -16.716962  1471.546923
3   984.624642 -0.147278  0.581070  66448.902033  -0.832021  1471.837942
4   741.232360 -0.190964 -0.071801  73025.455420 -13.816903  1471.930785
5  1185.050088 -1.303494  1.118688  68432.287954  -5.971735  1471.578508
6   701.323632 -0.477273 -0.178852  73900.148200 -14.887946  1471.665630
7   586.617472  0.213982 -0.486538  72627.530632 -13.294219  1472.215734
8   467.002176  0.190282 -0.807392  72412.753189 -13.002197  1472.115868
9   708.876303  0.728633 -0.158592  72794.146627 -13.515920  1471.853014

5. Visualization of calibrated histograms#

With all calibrated axes present in the dataframe, we can visualize the corresponding histograms and determine the respective binning ranges.

[27]:
axes = ['kx', 'ky', 'energy', 'delay']
ranges = [[-3, 3], [-3, 3], [-6, 2], [-600, 1600]]
sp.view_event_histogram(dfpid=1, axes=axes, ranges=ranges)

Define the binning ranges and compute calibrated data volume#

[28]:
axes = ['kx', 'ky', 'energy', 'delay']
bins = [100, 100, 200, 50]
ranges = [[-2, 2], [-2, 2], [-4, 2], [-600, 1600]]
res = sp.compute(bins=bins, axes=axes, ranges=ranges, normalize_to_acquisition_time="delay")
INFO - Calculate normalization histogram for axis 'delay'...

Some visualization:#

[29]:
fig, axs = plt.subplots(4, 1, figsize=(6, 18), constrained_layout=True)
res.loc[{'energy':slice(-.1, 0)}].sum(axis=(2,3)).T.plot(ax=axs[0])
res.loc[{'kx':slice(-.8, -.5)}].sum(axis=(0,3)).T.plot(ax=axs[1])
res.loc[{'ky':slice(-.2, .2)}].sum(axis=(1,3)).T.plot(ax=axs[2])
res.loc[{'kx':slice(-.8, -.5), 'energy':slice(.5, 2)}].sum(axis=(0,1)).plot(ax=axs[3])
[29]:
<matplotlib.collections.QuadMesh at 0x7f1010e681a0>
[30]:
fig, ax = plt.subplots(1,1)
(sp._normalization_histogram*90000).plot(ax=ax)
sp._binned.sum(axis=(0,1,2)).plot(ax=ax)
plt.show()
[ ]: