pipeline.Evaluation package#

This package allows performing matching and track-finding evaluation after training the pipeline. It utilises the MonteTracko library.

pipeline.Evaluation.matching module#

pipeline.Evaluation.matching.perform_matching(df_tracks, df_hits_particles, df_particles, min_track_length=3, matching_fraction=0.7, cure_clones=False)[source]#

Perform matching and return the TrackEvaluator object for evaluation.

Parameters:
  • df_tracks (DataFrame) – dataframe of tracks

  • df_hits_particles (DataFrame) – dataframe of hits-particles

  • df_particles (DataFrame) – dataframe of particles

  • min_track_length (int) – Minimum number of hits for a track to be kept

  • matching_fraction (float) – Minimum matching fraction required for a track to be matched to a particle

  • cure_clones (bool) – whether to attempt to remove clones that are matched to more than one particle

Return type:

TrackEvaluator

Returns:

TrackEvaluator object that contains the matched candidates.
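
Example (a minimal sketch; the three input dataframes are placeholders assumed to come from an upstream pipeline stage):

    from pipeline.Evaluation.matching import perform_matching

    # df_tracks, df_hits_particles and df_particles are pandas DataFrames
    # produced by earlier pipeline steps; their construction is not shown.
    track_evaluator = perform_matching(
        df_tracks,
        df_hits_particles,
        df_particles,
        min_track_length=3,     # discard tracks with fewer than 3 hits
        matching_fraction=0.7,  # minimum matching fraction for a track-particle match
        cure_clones=False,
    )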

pipeline.Evaluation.plotting module#

pipeline.Evaluation.plotting.plot_evaluation(trackEvaluator, category, plotted_groups=['basic'], detector=None, output_dir=None, suffix=None)[source]#

Generate and display histograms of track evaluation metrics in specified particle-related columns.

Parameters:
  • trackEvaluator (TrackEvaluator) – A TrackEvaluator instance containing the results of the track matching

  • category (Category) – Truth category for the plot

  • plotted_groups (List[str]) – Pre-configured metrics and columns to plot. Each group corresponds to one plot that shows the distributions of various metrics as a function of various truth variables, as hard-coded in this function. There are 3 groups: basic, geometry and challenging.

  • detector (Optional[str]) – name of the detector (velo or scifi)

  • output_dir (Optional[str]) – Directory where the figures are saved

  • suffix (Optional[str]) – Suffix to add at the end of the figure names
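
Example (a sketch continuing from the matching step above; long_category is a hypothetical placeholder, as the available Category instances are defined by the MonteTracko library):

    from pipeline.Evaluation.plotting import plot_evaluation

    # `long_category` is a placeholder for a montetracko Category.
    plot_evaluation(
        track_evaluator,
        category=long_category,
        plotted_groups=["basic", "geometry"],
        detector="scifi",
        output_dir="plots",
        suffix="test",
    )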

pipeline.Evaluation.plotting.plot_evaluation_categories(trackEvaluator, detector=None, categories=None, plotted_groups=['basic'], output_dir=None, suffix=None)[source]#

Generate and display histograms of track evaluation metrics in specified particle-related columns, for various categories.

Parameters:
  • trackEvaluator (TrackEvaluator) – A TrackEvaluator instance containing the results of the track matching

  • detector (Optional[str]) – name of the detector (velo or scifi)

  • categories (Optional[Iterable[Category]]) – Truth categories to plot, one after the other

  • plotted_groups (Optional[List[str]]) – Pre-configured metrics and columns to plot. Each group corresponds to one plot that shows the distributions of various metrics as a function of various truth variables, as hard-coded in this function. There are 3 groups: basic, geometry and challenging.

  • output_dir (Optional[str]) – Directory where the figures are saved

  • suffix (Optional[str]) – Suffix to add at the end of the figure names
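
Example (a sketch; the categories iterable is a placeholder for montetracko Category instances):

    from pipeline.Evaluation.plotting import plot_evaluation_categories

    # `categories` is a placeholder iterable of montetracko Category objects.
    plot_evaluation_categories(
        track_evaluator,
        detector="velo",
        categories=categories,
        plotted_groups=["basic"],
        output_dir="plots",
        suffix="test",
    )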

pipeline.Evaluation.plotting.plot_histograms_trackevaluator(trackEvaluator, columns, metric_names, color=None, label=None, column_labels=None, bins=None, column_ranges=None, category=None, same_fig=True, lhcb=False, with_err=True, **kwargs)[source]#

Plot multiple histograms of metrics.

Parameters:
  • trackEvaluator (TrackEvaluator | List[TrackEvaluator]) – one or more MonteTracko track evaluators to plot. They should share the same data distributions.

  • columns (List[str]) – list of columns over which the metrics are histogrammed

  • metric_names (List[str]) – list of metric names to plot

  • color (str | List[str] | None) – colors for each track evaluator

  • label (str | List[str] | None) – label(s) for each track evaluator

  • column_labels (Dict[str, str] | None) – Associates a column name with its label

  • bins (int | Sequence[float] | str | Dict[str, Any] | None) – Number of bins, a sequence of bin edges, or a dictionary that associates a metric name with the bin edges

  • column_ranges (Dict[str, Tuple[float, float]] | None) – Associates a column name with a (lower, upper) tuple giving the histogram range for that column

  • category (Category | None) – Particle category to plot

  • same_fig (bool) – whether to put all the axes in the same figure

  • lhcb (bool) – whether to add “LHCb Simulation” at the top of every matplotlib ax

  • with_err (bool) – whether to draw error bars on the histograms

Returns:

the figure(s), the axes and the histogram axes.

Return type:

A tuple of 3 elements
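
Example (a sketch comparing two evaluators; the metric and column names are illustrative and depend on the MonteTracko configuration):

    from pipeline.Evaluation.plotting import plot_histograms_trackevaluator

    # `evaluator_baseline` and `evaluator_new` are placeholders for two
    # TrackEvaluator objects built on the same data distributions.
    figs, axes, hist_axes = plot_histograms_trackevaluator(
        [evaluator_baseline, evaluator_new],
        columns=["pt", "eta"],        # illustrative column names
        metric_names=["efficiency"],  # illustrative metric name
        color=["tab:blue", "tab:orange"],
        label=["baseline", "new model"],
        bins=25,
        column_ranges={"pt": (0.0, 10.0), "eta": (2.0, 5.0)},
        same_fig=True,
        lhcb=True,
    )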

pipeline.Evaluation.reporting module#

pipeline.Evaluation.reporting.report_evaluation(trackEvaluator, allen_report=True, table_report=True, output_path=None, detector=None)[source]#

Perform the evaluation and produce reports.

Parameters:
  • trackEvaluator (TrackEvaluator) – montetracko.TrackEvaluator object, output of the matching

  • allen_report (bool) – whether to generate the Allen report

  • table_report (bool) – whether to generate the table reports

  • output_path (Optional[str]) – Output path where the report is saved

  • detector (Optional[str]) – name of the detector (velo or scifi)

Return type:

Optional[str]
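
Example (a sketch producing both reports from the evaluator returned by perform_matching; the output path is a placeholder):

    from pipeline.Evaluation.reporting import report_evaluation

    # `track_evaluator` is the TrackEvaluator returned by perform_matching.
    report = report_evaluation(
        track_evaluator,
        allen_report=True,
        table_report=True,
        output_path="reports",  # placeholder output location
        detector="scifi",
    )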