Analysis Results

As part of the shifter procedure, you’re expected to detect any issues that are not automatically caught by the DAQ/Analysis scripts. Remember, there should be 32 plots (one for each PMT) for a given quantity.

There may also be warnings issued by the analysis scripts that you should inspect. These are not always serious - so please compare them against the example plots here.

Here are examples of good and some “questionable” results we’ve seen. Some of these plots are also useful for the End of Shift Presentation.

This should also not substitute for inspecting the monitoring plots.

You can find analysis outputs on Grappa: /disk20/fat/software/degg_measurements/degg_measurements/analysis/. See the previous section on “Analyses - Remote”.

Gain Analysis

The gain analysis is run frequently throughout the FAT and is perhaps the most important measurement. While there are a few controls already, please check the gain_vs_hv plots for each PMT.

Things to look for:

  1. The PMT reaches 1e7 gain between 1000 ~ 2000 V.

  2. Data points have reasonable error bars - if error bars are quite large, this can indicate issues with the charge histogram fit (please investigate!).

  3. The fit describes the data.
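The first check above can be automated with a quick fit. Below is a minimal sketch, assuming the usual power-law parameterisation of PMT gain, G(V) = a·V^b; the function names, the initial-guess values, and the synthetic data points are all illustrative, not the actual FAT analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gain_curve(hv, a, b):
    """Power-law gain model G(V) = a * V**b (a common PMT parameterisation)."""
    return a * hv**b

def hv_for_gain(hv_points, gain_points, target=1e7):
    """Fit the gain curve and solve for the HV that yields the target gain.

    The initial guess p0 is an assumption for this sketch; a real analysis
    would seed it from previous fits.
    """
    popt, _ = curve_fit(gain_curve, hv_points, gain_points, p0=[1e-18, 8])
    a, b = popt
    return (target / a) ** (1.0 / b)

# Synthetic points roughly following G = 1e-18 * V^8 (illustrative only)
hv = np.array([1100.0, 1200.0, 1300.0, 1400.0, 1500.0])
gain = 1e-18 * hv**8.0
v_1e7 = hv_for_gain(hv, gain)
print(1000 < v_1e7 < 2000)  # the PMT should reach 1e7 gain in this range
```

Large fit uncertainties on `a` and `b` here would mirror the large error bars mentioned in point 2 and likewise point back to the charge-histogram fits.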

These are examples of good/acceptable fits:

gain curve SQ0650 gain curve SQ0704

This is an example of a poor (but not disastrous) fit:

bad gain curve - see first point

If you see something like this, look into the charge histograms. It’s possible that the fits are not very good.

Dark Rate Analysis (Scaler)

The scaler dark rate measurement has two configurations: the ADC and the FIR trigger. In general, as the FIR trigger imposes a stricter requirement, the dark rate for the FIR-triggered data should be lower. Because of anomalous dark rates (of unknown origin), the FIR-triggered rates should also be a more consistent depiction of the module’s true dark rate. In other words, temporary spikes in the ADC-triggered dark rate (even up to 100s of kHz) are not unheard of. However, such spikes are unusual for the FIR trigger setting and are worth investigating.

Similar to the monitoring section, we’re looking for a few specific things:

  1. The dark rate is not 0 and not over ~10 MHz.

  2. The dark rate is not consistently over ~10 kHz (isolated instances can be OK).

  3. The dark rate from the FIR trigger is not over ~5 kHz.

If you see cases where the dark rate is spiking, please record this. It will be useful for tracking specific modules which appear to have issues with higher frequency than others.
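The three thresholds above can be expressed as a simple classifier. This is a sketch using the values quoted in this guide (0 Hz and ~10 MHz as hard limits, ~10 kHz for ADC, ~5 kHz for FIR); the function name and the label strings are hypothetical.

```python
def classify_dark_rate(rates_hz, trigger="adc"):
    """Flag scaler dark-rate readings against the shifter thresholds.

    Thresholds taken from this guide: a reading of 0 or above ~10 MHz is
    always bad; ADC-triggered rates above ~10 kHz are spikes (isolated
    instances can be OK); FIR-triggered rates above ~5 kHz warrant
    investigation.
    """
    flags = []
    for r in rates_hz:
        if r == 0 or r > 10e6:
            flags.append("hard_fail")
        elif trigger == "fir" and r > 5e3:
            flags.append("investigate")
        elif trigger == "adc" and r > 10e3:
            flags.append("spike")  # isolated spikes can be OK for ADC
        else:
            flags.append("ok")
    return flags

print(classify_dark_rate([1200, 0, 25e3, 2e7], trigger="adc"))
# → ['ok', 'hard_fail', 'spike', 'hard_fail']
```

A consistent run of `"spike"` entries for the same module, rather than an isolated one, is exactly the pattern worth recording.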

To get an idea of what a typical range of dark rates looks like for the FIR trigger:

good dark rate distribution

And this plot shows an example where the rate is much too high. This would be something to report via the Google form.

bad dark rate distribution

Linearity Analysis

The linearity analysis involves light injection, which provides another possible source of issues. While the frequency of the function generator is regularly checked (see #chiba-daq), monitoring changes in individual fiber outputs can be challenging.

In general the analysis should be independent of small changes in incident light levels, but it is not immune. If the light level for the 100% filter drops below 200 PE, the FAT goalpost can only be evaluated via extrapolation, since the metric is #pe @ 200 (observed) / #pe @ 200 (injected). This value should be above 60%.
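The goalpost metric above is just a ratio; a minimal sketch follows, where the function name is hypothetical and only the 200 PE reference and 60% threshold come from this guide.

```python
def linearity_metric(observed_pe_at_200, injected_pe=200.0):
    """FAT linearity goalpost: observed PE at 200 injected PE, over 200.

    Should exceed 0.6 (60%). If the 100% filter delivers fewer than
    200 PE, the observed value must come from extrapolating the
    linearity fit rather than from a direct measurement.
    """
    return observed_pe_at_200 / injected_pe

ratio = linearity_metric(150.0)
print(f"{ratio:.0%}, passes: {ratio > 0.6}")  # → 75%, passes: True
```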

There are two key plots to examine per PMT. The first is the 100% filter setting, from which you can see the peak charge (hopefully above 200 PE). The second is the overall linearity distribution for that PMT. The fit should describe the data across all filter settings, with the observed charge increasing as the filter transmission increases.

good pmt 100%

Lastly, you should check the efficiency values. For the 9 settings from 2.5% to 100%, check that the efficiency is above 80%. If this number is significantly lower, investigate the charge histograms for that PMT. A low efficiency can indicate a number of errors:

  1. Issue during data-taking (mainboard trigger settings)

  2. Issue during data-taking (light injection, fiber damaged)

  3. Issue during data-taking (function generator frequency)

Most of these cases warrant retaking the data, so it is important to catch these errors in a timely fashion.
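The per-setting efficiency check can be sketched as below. The 80% threshold and the 9 settings spanning 2.5% to 100% are from this guide; the exact filter values in the example and the function name are assumptions for illustration.

```python
def check_filter_efficiencies(efficiencies, threshold=0.8):
    """Return the (filter setting, efficiency) pairs below threshold.

    efficiencies: mapping of filter setting (percent) -> efficiency,
    one entry per setting from 2.5% to 100%. Any flagged setting means
    that PMT's charge histograms should be inspected.
    """
    return [(setting, eff) for setting, eff in efficiencies.items()
            if eff < threshold]

# Illustrative settings and values (the real filter list may differ)
filters = [2.5, 5, 10, 20, 30, 50, 70, 90, 100]
effs = dict(zip(filters, [0.95, 0.93, 0.92, 0.90, 0.88, 0.86, 0.55, 0.84, 0.83]))
print(check_filter_efficiencies(effs))  # → [(70, 0.55)]
```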

Example of a good linearity plot:

good pmt

Example of a bad linearity plot, where the light was not correctly hitting the PMT:

bad pmt

Double Pulse Analysis

The double pulse measurement also uses the light injection system. It is vulnerable to similar issues, such as problems with the function generator or optical fiber. Similar to the linearity analysis, we can calculate an efficiency value to indicate how many of the triggers appear to be from the laser. We can get a good idea of this by dividing the number of double-pulse waveforms by the total number of waveforms. If this value is too low (below ~80%), this may indicate a problem during data-taking.
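The efficiency estimator described above is a simple ratio; a one-line sketch (function name assumed):

```python
def double_pulse_efficiency(n_double_pulse, n_total):
    """Fraction of recorded waveforms identified as double pulses.

    Values below ~0.8 may indicate a problem during data-taking
    (function generator, optical fiber, trigger settings).
    """
    return n_double_pulse / n_total

eff = double_pulse_efficiency(870, 1000)
print(eff, eff >= 0.8)  # → 0.87 True
```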

Three analysis-level quantities are calculated in the analysis. These values depend heavily on extracting good double-pulse waveforms (see below). For example, you can get the efficiency estimator from this plot, as well as the timing separation.

good pmt

The time between the two peaks is calculated for each waveform identified as a double pulse and then averaged. This should be between about 18 and 22 ns, depending on some uncertainty in the trigger time due to the ~4 ns waveform ADC digitisation period (240 MHz). Additionally, the peak-to-valley ratios are calculated for the two peaks. The value for the first peak is more stable, with a range between 1.5 and 2.0. The second peak is less strict, with larger variations; a limit of 1.9 - 2.8 is placed on it.
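The three bounds above can be collected into one check. This is a sketch; the function name is hypothetical, and the numeric limits are the ones quoted in this guide.

```python
def check_double_pulse(peak_sep_ns, pv1, pv2):
    """Check double-pulse summary values against the guide's bounds.

    peak_sep_ns: averaged time between the two peaks (~18-22 ns).
    pv1: first peak-to-valley ratio (1.5-2.0, the more stable one).
    pv2: second peak-to-valley ratio (1.9-2.8, with larger variations).
    """
    return {
        "separation_ok": 18.0 <= peak_sep_ns <= 22.0,
        "pv1_ok": 1.5 <= pv1 <= 2.0,
        "pv2_ok": 1.9 <= pv2 <= 2.8,
    }

print(check_double_pulse(20.1, 1.7, 2.3))
# → {'separation_ok': True, 'pv1_ok': True, 'pv2_ok': True}
```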

Check that the peak separation is within bounds and that the peak to valley distributions follow this rough description:

good pmt peak to valleys

With a good summary of the events like this:

good run

Timing (TTS/SPTR)

The timing measurement is the most complicated of the group of tests and analyses run during FAT. This is also a light injection test, which has the possibility of the same issues covered above. As part of the analysis script, summary information is sent to #chiba-daq, along with a warning for any out-of-bounds parameters.

These parameters are:

  1. The transit-time spread (TTS).

  2. The fit chi2 to the transit time (TT) distribution.

  3. The measurement efficiency.

The TTS is determined from the fit to the TT distribution by taking the 1 sigma width of the fit. This value should typically be below 3 ns, with a requirement of less than 5 ns. In order to populate the TT distribution, triggers from the PMTs are matched to the triggers from the table-top mainboard (which are a proxy for the laser sync signal). Some uncertainties arise from the procedure of running RapCal, which are believed to generate artifacts in the TT distribution.
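Extracting the TTS as the 1-sigma width of a fit to the TT distribution can be sketched as below. This assumes a single-Gaussian fit to the histogrammed transit times; the function names and the synthetic sample are illustrative, not the actual FAT analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, amp, mu, sigma):
    """Gaussian peak model for the transit-time (TT) distribution."""
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def tts_from_tt(tt_ns, bins=100):
    """Histogram the transit times, fit a Gaussian, return the 1-sigma
    width as the TTS. Typically < 3 ns; the requirement is < 5 ns."""
    counts, edges = np.histogram(tt_ns, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), tt_ns.mean(), tt_ns.std()]  # simple starting guess
    popt, _ = curve_fit(gauss, centers, counts, p0=p0)
    return abs(popt[2])

# Synthetic TT sample: 2 ns spread around a 150 ns transit time
rng = np.random.default_rng(0)
tt = rng.normal(loc=150.0, scale=2.0, size=20000)
print(round(tts_from_tt(tt), 1))  # ≈ 2.0
```

RapCal artifacts of the kind discussed below would show up here as a poor chi2 even when the fitted sigma looks acceptable.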

Here is an example of a good distribution:

good TT

with a chi2 value of ~1.

The chi2 is meant to pick up distributions which would otherwise have a good TTS, but poor fit agreement. In this case, the shape is believed to be due to a synchronisation issue between the D-Egg and the MFH ICM.

There are two main reasons for this:

  1. This is observed for both PMTs in the same D-Egg (same ICM).

  2. When the peaks are aligned in time, the distribution appears normal.

TT distribution with no corrections applied, chi2 ~ 8. Notice how the peaks do not clearly follow the expected Gaussian-like distribution.

questionable TT

But when each readout block (between which RapCal is run) is artificially aligned, the distribution no longer shows any clear artifacts.

questionable TT
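The artificial alignment described above amounts to removing a per-block offset. One way to sketch it, assuming each trigger carries a readout-block ID, is to subtract each block's median transit time (the function name and the choice of the median are assumptions; the actual analysis may align differently):

```python
import numpy as np

def align_blocks(tt_ns, block_ids):
    """Diagnostic alignment of readout blocks (between RapCal runs).

    Subtracts each block's median transit time so that per-block offsets
    from RapCal synchronisation drop out. Residual structure after this
    alignment would point to an effect beyond TT synchronisation.
    """
    tt = np.asarray(tt_ns, dtype=float)
    ids = np.asarray(block_ids)
    aligned = np.empty_like(tt)
    for b in np.unique(ids):
        mask = ids == b
        aligned[mask] = tt[mask] - np.median(tt[mask])
    return aligned

# Two blocks offset by 10 ns collapse onto a common distribution
print(align_blocks([10, 11, 12, 20, 21, 22], [0, 0, 0, 1, 1, 1]))
# → [-1.  0.  1. -1.  0.  1.]
```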

If we still saw poor agreement in this space, it would indicate some effect beyond the TT synchronisation.

Lastly, the summary plots give a good indication of whether you need to inspect anything further:

TTS summary Efficiency summary chi2 summary

The warnings sent to #chiba-daq are largely based on the summary plots here.