tempo_eval.Metric
- class tempo_eval.Metric(name, formatted_name=None, description=None, eval_function=None, extract_function=None, suitability_function=<function Metric.<lambda>>, significant_difference_function=None, best_value=1.0, signed=False, unit='%')[source]
Structured collection of logic and injected functions for different metrics.
- Example
>>> from tempo_eval import OE1, read_reference_annotations, read_estimate_annotations
>>> gt_ref = read_reference_annotations('giantsteps_tempo', validate=False)
>>> gt_est = read_estimate_annotations('giantsteps_tempo', validate=False)
>>> # evaluate estimates using the reference values and Metric OE1:
>>> res = OE1.eval_annotations(gt_ref['tempo'], gt_est['tempo'])
>>> # show result of ref '1.0' and est 'davies2009/mirex_qm_tempotracker'
>>> # for file '3453642.LOFI.jams':
>>> res['1.0']['davies2009/mirex_qm_tempotracker']['3453642.LOFI.jams']
[-0.02271693594286862]
- __init__(name, formatted_name=None, description=None, eval_function=None, extract_function=None, suitability_function=<function Metric.<lambda>>, significant_difference_function=None, best_value=1.0, signed=False, unit='%')[source]
Create a metric using the given functions.
- Parameters
- name (str) – name
- description (Optional[str]) – HTML-formatted, high-level description
- eval_function (Optional[Callable[[Union[Any, Tuple[float, float, float], float], Union[Any, Tuple[float, float, float], float]], Any]]) – function that compares two tempi; may also accept a tolerance parameter, e.g. equal1
- extract_function (Optional[Callable[[Annotation], Union[Any, Tuple[float, float, float], float]]]) – function to extract tempo values from an annotation, e.g. extract_tempo
- significant_difference_function (Optional[Callable[[Dict[str, List[Any]], Dict[str, List[Any]], str, str], List[float]]]) – function to determine significant differences, e.g. mcnemar
- suitability_function (Callable[[Union[Any, Tuple[float, float, float], float]], bool]) – function to determine whether a tempo value is suitable for this metric, e.g. is_single_bpm
- best_value (float) – best value (shown if no values are available)
- signed (bool) – is the metric signed (e.g. percentage error) or absolute (e.g. Accuracy1)?
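As a sketch of how these injected functions fit together, the following constructs a hypothetical signed metric. Only the Metric constructor and its parameters are taken from the signature above; signed_error and the suitability lambda are hypothetical stand-ins for the kinds of callables one might inject, not functions shipped with tempo_eval.

from tempo_eval import Metric

def signed_error(true_tempo, estimated_tempo):
    # hypothetical eval_function: signed estimation error in percent,
    # relative to the reference tempo
    return (estimated_tempo - true_tempo) / true_tempo * 100.0

PCT_ERROR = Metric(
    name='pct_error',
    formatted_name='Percentage Error',
    description='Signed tempo estimation error relative to the reference '
                '(hypothetical example metric).',
    eval_function=signed_error,
    # hypothetical suitability check: accept single positive BPM values
    suitability_function=lambda tempo: isinstance(tempo, float) and tempo > 0,
    best_value=0.0,  # a signed error is best at zero, not 1.0
    signed=True,     # signed, like percentage error (see the signed parameter above)
    unit='%',
)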
Methods
__init__(name[, formatted_name, ...]) – Create a metric using the given functions.
are_tempi_suitable(tempi) – Is at least one tempo meant for this metric?
averages(eval_results[, item_id_filter, ...]) – Calculate means and standard deviations for the given evaluation results.
eval_annotations(reference_annotation_set, ...) – Evaluates annotations.
eval_tempi(reference_tempi, estimated_tempi) – Evaluates all provided tempi for each track.
extract_tempi(annotation_set) – Extracts tempi from the given annotation set.
is_tempo_suitable(tempo) – Indicates whether a given tempo is suitable for this metric.
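Putting the methods together, a typical flow mirrors the example above: evaluate an annotation set, then aggregate per-item results. This is a minimal sketch assuming only the method signatures listed here; the exact structure of the value returned by averages is not documented on this page.

from tempo_eval import OE1, read_reference_annotations, read_estimate_annotations

gt_ref = read_reference_annotations('giantsteps_tempo', validate=False)
gt_est = read_estimate_annotations('giantsteps_tempo', validate=False)

# per-item OE1 values, nested by reference version, estimator name, and file
results = OE1.eval_annotations(gt_ref['tempo'], gt_est['tempo'])

# aggregate the per-item values into means and standard deviations
stats = OE1.averages(results)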