tempo_eval.Metric

class tempo_eval.Metric(name, formatted_name=None, description=None, eval_function=None, extract_function=None, suitability_function=<function Metric.<lambda>>, significant_difference_function=None, best_value=1.0, signed=False, unit='%')[source]

Structured collection of logic and injected functions for different metrics.

Example

>>> from tempo_eval import OE1, read_reference_annotations, read_estimate_annotations
>>> gt_ref = read_reference_annotations('giantsteps_tempo', validate=False)
>>> gt_est = read_estimate_annotations('giantsteps_tempo', validate=False)
>>> # evaluate estimates against the reference values using the metric OE1:
>>> res = OE1.eval_annotations(gt_ref['tempo'], gt_est['tempo'])
>>> # show the result for reference version '1.0' and estimator
>>> # 'davies2009/mirex_qm_tempotracker' on file '3453642.LOFI.jams':
>>> res['1.0']['davies2009/mirex_qm_tempotracker']['3453642.LOFI.jams']
[-0.02271693594286862]
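Per-file results like these can be aggregated with averages. A minimal sketch, reusing the res dictionary from above; the structure of the returned aggregate is an assumption and not shown here:

>>> # hedged sketch: aggregate the per-file OE1 values computed above
>>> # into means and standard deviations; the exact structure of the
>>> # returned value is left unspecified
>>> avg = OE1.averages(res)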
__init__(name, formatted_name=None, description=None, eval_function=None, extract_function=None, suitability_function=<function Metric.<lambda>>, significant_difference_function=None, best_value=1.0, signed=False, unit='%')[source]

Create a metric using the given functions.

Parameters

    name – short name of the metric
    formatted_name – formatted name, e.g. for display purposes
    description – textual description of the metric
    eval_function – function that computes the metric value for a reference/estimate pair
    extract_function – function that extracts tempi from an annotation set
    suitability_function – function that decides whether a tempo is suitable for this metric
    significant_difference_function – function that tests whether differences between results are significant
    best_value – best achievable value of the metric (default 1.0)
    signed – whether metric values are signed (default False)
    unit – unit of metric values (default '%')
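As a minimal sketch of how the injected functions fit together, a custom metric could be constructed as follows. The absolute-error metric and the (reference, estimate) calling convention of eval_function are illustrative assumptions, not part of tempo_eval:

>>> from tempo_eval import Metric
>>> # hypothetical metric: absolute tempo error in BPM; the
>>> # (reference, estimate) signature of eval_function is an assumption
>>> abs_err = Metric(name='ABS_ERR',
...                  formatted_name='Absolute Error',
...                  description='Absolute difference between estimate and reference.',
...                  eval_function=lambda reference, estimate: abs(estimate - reference),
...                  best_value=0.0,
...                  unit='BPM')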

Methods

__init__(name[, formatted_name, ...])
    Create a metric using the given functions.

are_tempi_suitable(tempi)
    Indicates whether at least one of the given tempi is suitable for this metric.

averages(eval_results[, item_id_filter, ...])
    Calculate means and standard deviations for the given evaluation results.

eval_annotations(reference_annotation_set, ...)
    Evaluates estimate annotations against reference annotations.

eval_tempi(reference_tempi, estimated_tempi)
    Evaluates all provided estimated tempi against the reference tempi for each track (see the sketch below).

extract_tempi(annotation_set)
    Extracts tempi from the given annotation set.

is_tempo_suitable(tempo)
    Indicates whether a given tempo is suitable for this metric.
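A minimal sketch of the tempo-level methods, assuming eval_tempi accepts flat lists of reference and estimated tempi (an assumption based on the signature alone):

>>> from tempo_eval import OE1
>>> # hedged sketch: OE1 for a 120 BPM estimate against a 60 BPM
>>> # reference; the flat-list input shape is an assumption
>>> errors = OE1.eval_tempi([60.0], [120.0])
>>> # individual tempo values can be checked for suitability:
>>> suitable = OE1.is_tempo_suitable(120.0)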