Existing tools

A set of standard tools and analyzers is already available to the user, providing support from detector experts directly in the form of working code or simply convenience tools. Here is a list of categories:

  • MCTesters: meant to analyse NA62MC output files for debugging and comparison between versions
  • Time alignment: an integral part of the automatic time calibration procedure applied to the data
  • Pre-analyzers: typically applying energy and momentum corrections to raw candidates provided by the reconstruction
  • Association tools: developed by detector experts to combine information across more than one system, taking into account understood instrumental effects
  • Geometry tools: applying acceptance cuts or vertexing algorithms
  • Containers: collecting information in one place, to allow quicker implementation of selections

All the standard cuts applied by these analyzers should be documented in Doxygen and be customizable by the user. A more detailed description can be found here.


All existing analyzers should be documented in Doxygen. When a new empty analyzer is created, it is fully documented inline to guide the user through the implementation.


NA62Analysis is designed to handle most of the technicalities involved in reading and writing ROOT files, allowing the user to have direct access to the objects stored on disk and apply arbitrary algorithms to them. Algorithms are implemented within objects, called analyzers, that can work independently or in complex chains; the main engine resolves the dependencies at run-time and executes the analyzers in the appropriate sequence.

In this framework it is potentially very convenient to break an analysis down into several analyzers; with the proper structure, a fraction of them can be reused across multiple analyses, as would be the case for an analyzer that identifies electrons, for example.

It is also possible to use any analyzer to filter the input data, speeding up further reprocessing during the development and refinement of any analysis focused on a small fraction of the whole data sample, e.g. rare decays.

A set of analyzers is under preparation for quasi-online data quality monitoring during the data taking, which is going to be integrated with the processing chain.

Useful Details

Quick tutorial

  • /afs/ -v rcurrent -w /chosen/path/Analysis
  • cd /chosen/path/Analysis
  • source
  • prepare MyAnalysis
  • cd MyAnalysis
  • source scripts/
  • Add a set of standard analyzers to the file config (don't forget the name of the executable), or create your own via new MyAnalyzer (an empty template will be created in Analyzers/src)
  • build config
  • Choose a list of files (reconstructed or MC, depending on your choice of analyzer(s)) from MC productions or processed data
  • ./YourExecutable -l ChosenList -o Test.root
  • The -h option lists all available options

A more detailed tutorial is available here.


Reconstructed data is flagged by a trigger word that can be interpreted by cross-referencing the trigger masks used for a specific run (available here) and the data format (see also TDQ).

For Developers

Requirements for analyzers to be included in Postprocessing

Analyzers to be included in the Postprocessing are required to be 2-pass:

  • pass 1: read reconstructed files and produce ROOT files with a set of histograms (typically over 10 input files)
  • pass 2: read the previously produced output (--histo command line option) and combine the results over a full run, producing ROOT and/or PDF files with histograms/plots
  • no a priori assumption should be made on the number of input files processed; statistics should be accumulated dynamically until the required threshold is reached