pipeline.Evaluation package#
This package allows performing matching and track-finding evaluation after training the pipeline. It utilises the MonteTracko library.
pipeline.Evaluation.matching module#
- pipeline.Evaluation.matching.perform_matching(df_tracks, df_hits_particles, df_particles, min_track_length=3, matching_fraction=0.7, cure_clones=False)[source]#
Perform matching and return the TrackEvaluator object for evaluation.
- Parameters:
  - df_tracks (DataFrame) – dataframe of tracks
  - df_hits_particles (DataFrame) – dataframe of hits-particles
  - df_particles (DataFrame) – dataframe of particles
  - min_track_length (int) – Minimum number of hits for a track to be kept
  - matching_fraction (float) – Minimal matching fraction for the matching
  - cure_clones (bool) – Whether to attempt to remove clone tracks that are matched to more than one particle
- Return type:
TrackEvaluator
- Returns:
TrackEvaluator object that contains the matched candidates.
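Example (illustrative sketch): the file names and the way the dataframes are obtained below are assumptions; in practice the dataframes are produced by earlier pipeline stages.

```python
import pandas as pd

from pipeline.Evaluation.matching import perform_matching

# Hypothetical inputs: the file names and their column layout are placeholders,
# not the actual format expected by the pipeline.
df_tracks = pd.read_parquet("tracks.parquet")
df_hits_particles = pd.read_parquet("hits_particles.parquet")
df_particles = pd.read_parquet("particles.parquet")

track_evaluator = perform_matching(
    df_tracks,
    df_hits_particles,
    df_particles,
    min_track_length=3,      # discard tracks with fewer than 3 hits
    matching_fraction=0.7,   # minimal matching fraction required for a match
    cure_clones=False,       # optionally remove clones matched to more than one particle
)
```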
pipeline.Evaluation.plotting module#
- pipeline.Evaluation.plotting.plot_evaluation(trackEvaluator, category, plotted_groups=['basic'], detector=None, output_dir=None, suffix=None)[source]#
Generate and display histograms of track evaluation metrics in specified particle-related columns.
- Parameters:
  - trackEvaluator (TrackEvaluator) – A TrackEvaluator instance containing the results of the track matching
  - category (Category) – Truth category for the plot
  - plotted_groups (List[str]) – Pre-configured metrics and columns to plot. Each group corresponds to one plot that shows the distributions of various metrics as a function of various truth variables, as hard-coded in this function. There are 3 groups: basic, geometry and challenging.
  - detector (Optional[str]) – name of the detector (velo or scifi)
  - suffix (Optional[str]) – Suffix to add at the end of the figure names
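Example (illustrative sketch): `track_evaluator` is assumed to be the object returned by `perform_matching`, and `category` a MonteTracko `Category` built elsewhere; its construction depends on the MonteTracko API and is not shown here.

```python
from pipeline.Evaluation.plotting import plot_evaluation

# `category` is a placeholder for a MonteTracko Category defined elsewhere.
plot_evaluation(
    track_evaluator,                       # output of perform_matching
    category,                              # truth category for the plot
    plotted_groups=["basic", "geometry"],  # one figure per group
    detector="velo",                       # or "scifi"
    output_dir="plots",                    # hypothetical output directory
    suffix="test",                         # appended to the figure names
)
```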
- pipeline.Evaluation.plotting.plot_evaluation_categories(trackEvaluator, detector=None, categories=None, plotted_groups=['basic'], output_dir=None, suffix=None)[source]#
Generate and display histograms of track evaluation metrics in specified particle-related columns, for various categories.
- Parameters:
  - trackEvaluator (TrackEvaluator) – A TrackEvaluator instance containing the results of the track matching
  - plotted_groups (Optional[List[str]]) – Pre-configured metrics and columns to plot. Each group corresponds to one plot that shows the distributions of various metrics as a function of various truth variables, as hard-coded in this function. There are 3 groups: basic, geometry and challenging.
  - categories (Optional[Iterable[Category]]) – Truth categories for the plots
  - suffix (Optional[str]) – Suffix to add at the end of the figure names
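Example (illustrative sketch): `cat_long` and `cat_electron` stand for MonteTracko `Category` objects defined elsewhere; the names are placeholders.

```python
from pipeline.Evaluation.plotting import plot_evaluation_categories

plot_evaluation_categories(
    track_evaluator,                       # output of perform_matching
    detector="scifi",
    categories=[cat_long, cat_electron],   # placeholder Category objects
    plotted_groups=["basic"],
    output_dir="plots",                    # hypothetical output directory
    suffix="test",
)
```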
- pipeline.Evaluation.plotting.plot_histograms_trackevaluator(trackEvaluator, columns, metric_names, color=None, label=None, column_labels=None, bins=None, column_ranges=None, category=None, same_fig=True, lhcb=False, with_err=True, **kwargs)[source]#
Plot multiple histograms of metrics.
- Parameters:
trackEvaluator (TrackEvaluator | List[TrackEvaluator]) – one or more montetracko track evaluators to plot. They should share the same data distributions.
columns (List[str]) – list of columns over which to histogram the metrics
metric_names (List[str]) – list of metric names to plot
color (str | List[str] | None) – colors for each track evaluator
label – labels for each track evaluator
column_labels (Dict[str, str] | None) – Associates a column name with its label
bins (int | Sequence[float] | str | Dict[str, Any] | None) – Number of bins, or a dictionary that associates a metric name with the bin edges
column_ranges (Dict[str, Tuple[float, float]] | None) – Associates a column name with a tuple of the lower and upper bounds of the bins
category (Category | None) – Particle category to plot
same_fig (bool) – whether to put all the axes in the same figure
lhcb (bool) – whether to add “LHCb Simulation” at the top of every matplotlib axis
- Returns:
the figure(s), the axes and the histogram axes.
- Return type:
A tuple of 3 elements
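Example (illustrative sketch) of the lower-level histogram plotting: the column and metric names below are assumptions, since the valid values depend on the MonteTracko evaluator and the matched dataframes.

```python
from pipeline.Evaluation.plotting import plot_histograms_trackevaluator

figs, axes, hist_axes = plot_histograms_trackevaluator(
    track_evaluator,                                 # output of perform_matching
    columns=["pt", "eta"],                           # hypothetical truth columns
    metric_names=["efficiency"],                     # hypothetical metric name
    column_labels={"pt": r"$p_T$ [MeV]", "eta": r"$\eta$"},
    bins=25,
    column_ranges={"pt": (0.0, 5000.0), "eta": (2.0, 5.0)},
    category=category,                               # optional Category restriction
    same_fig=True,                                   # all axes in the same figure
    lhcb=True,                                       # add "LHCb Simulation" to each axis
    with_err=True,
)
```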
pipeline.Evaluation.reporting module#
- pipeline.Evaluation.reporting.report_evaluation(trackEvaluator, allen_report=True, table_report=True, output_path=None, detector=None)[source]#
Perform the evaluation and produce reports.
- Parameters:
  - trackEvaluator (TrackEvaluator) – montetracko.TrackEvaluator object, output of the matching
  - allen_report (bool) – whether to generate the Allen report
  - table_report (bool) – whether to generate the table reports
  - output_path (Optional[str]) – Output path where to save the report
- Return type:
  Optional[str]
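Example (illustrative sketch) of report generation, assuming `track_evaluator` is the output of `perform_matching`; the output path is a placeholder.

```python
from pipeline.Evaluation.reporting import report_evaluation

report = report_evaluation(
    track_evaluator,
    allen_report=True,        # generate the Allen report
    table_report=True,        # generate the table reports
    output_path="reports",    # hypothetical path where the report is saved
    detector="velo",
)
```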