rwc_mdb_j
This is the tempo_eval report for the ‘rwc_mdb_j’ corpus.
Reports for other corpora may be found here.
Table of Contents
- References for ‘rwc_mdb_j’
- Estimates for ‘rwc_mdb_j’
- Estimators
- Basic Statistics
- Smoothed Tempo Distribution
- Accuracy
- Accuracy Results for 1.0
- Accuracy1 for 1.0
- Accuracy2 for 1.0
- Differing Items
- Significance of Differences
- Accuracy1 on cvar-Subsets
- Accuracy2 on cvar-Subsets
- Accuracy1 on Tempo-Subsets
- Accuracy2 on Tempo-Subsets
- Estimated Accuracy1 for Tempo
- Estimated Accuracy2 for Tempo
- Accuracy1 for ‘tag_open’ Tags
- Accuracy2 for ‘tag_open’ Tags
- OE1 and OE2
- AOE1 and AOE2
References for ‘rwc_mdb_j’
References
1.0
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 1.0 |
Curator | Masataka Goto |
Data Source | manual annotation |
Annotation Tools | derived from beat annotations |
Annotation Rules | median of corresponding inter-beat intervals |
Annotator, name | Masataka Goto |
Annotator, bibtex | Goto2006 |
Annotator, ref_url | https://staff.aist.go.jp/m.goto/RWC-MDB/AIST-Annotation/ |
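The tempo references are thus derived from beat annotations. A minimal sketch of such a derivation in Python (the function name and details are illustrative, not the AIST annotation tooling):

```python
import numpy as np

def tempo_from_beats(beat_times):
    """Derive a global tempo (BPM) from beat timestamps in seconds:
    60 divided by the median inter-beat interval."""
    ibis = np.diff(np.sort(np.asarray(beat_times, dtype=float)))
    return 60.0 / np.median(ibis)

# Beats roughly 0.5 s apart -> ~120 BPM
print(tempo_from_beats([0.0, 0.5, 1.01, 1.49, 2.0]))
```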
Basic Statistics
Reference | Size | Min | Max | Avg | Stdev | Sweet Oct. Start | Sweet Oct. Coverage |
---|---|---|---|---|---|---|---|
1.0 | 50 | 37.27 | 157.89 | 90.71 | 27.53 | 61.00 | 0.76 |
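‘Sweet Oct. Start’ and ‘Sweet Oct. Coverage’ presumably describe the octave-wide tempo interval [s, 2s) that covers the largest fraction of reference tempi, and that fraction itself. A hedged sketch of how such values could be computed (the 1-BPM candidate grid is an assumption):

```python
import numpy as np

def sweet_octave(tempi):
    """Search for the octave-wide interval [s, 2s) covering the largest
    fraction of tempo values; returns (start, coverage).
    The 1-BPM candidate grid is an illustrative assumption."""
    tempi = np.asarray(tempi, dtype=float)
    starts = np.arange(np.floor(tempi.min()), tempi.max() + 1.0)
    coverage = [np.mean((tempi >= s) & (tempi < 2 * s)) for s in starts]
    best = int(np.argmax(coverage))
    return starts[best], coverage[best]
```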
Smoothed Tempo Distribution
Figure 1: Percentage of values in tempo interval.
Tag Distribution for ‘tag_open’
Figure 2: Percentage of tracks tagged with tags from namespace ‘tag_open’. Annotations are from reference 1.0.
Beat-Based Tempo Variation
Figure 3: Fraction of the dataset’s beat-annotated tracks with cvar < τ.
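Here, cvar is presumably the coefficient of variation of a track’s inter-beat intervals (standard deviation divided by mean), so lower values indicate a steadier beat. A minimal sketch under that assumption:

```python
import numpy as np

def cvar(beat_times):
    """Coefficient of variation of inter-beat intervals:
    stdev(IBI) / mean(IBI); lower values mean a steadier beat."""
    ibis = np.diff(np.sort(np.asarray(beat_times, dtype=float)))
    return np.std(ibis) / np.mean(ibis)
```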
Estimates for ‘rwc_mdb_j’
Estimators
boeck2015/tempodetector2016_default
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 0.17.dev0 |
Annotation Tools | TempoDetector.2016, madmom, https://github.com/CPJKU/madmom |
Annotator, bibtex | Boeck2015 |
davies2009/mirex_qm_tempotracker
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 1.0 |
Annotation Tools | QM Tempotracker, Sonic Annotator plugin, https://code.soundsoftware.ac.uk/projects/mirex2013/repository/show/audio_tempo_estimation/qm-tempotracker. Note that the current macOS build of ‘qm-vamp-plugins’ was used. |
Annotator, bibtex | Davies2009, Davies2007 |
percival2014/stem
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 1.0 |
Annotation Tools | percival 2014, ‘tempo’ implementation from Marsyas, http://marsyas.info, git checkout tempo-stem |
Annotator, bibtex | Percival2014 |
schreiber2014/default
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 0.0.1 |
Annotation Tools | schreiber 2014, http://www.tagtraum.com/tempo_estimation.html |
Annotator, bibtex | Schreiber2014 |
schreiber2017/ismir2017
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 0.0.4 |
Annotation Tools | schreiber 2017, model=ismir2017, http://www.tagtraum.com/tempo_estimation.html |
Annotator, bibtex | Schreiber2017 |
schreiber2017/mirex2017
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 0.0.4 |
Annotation Tools | schreiber 2017, model=mirex2017, http://www.tagtraum.com/tempo_estimation.html |
Annotator, bibtex | Schreiber2017 |
schreiber2018/cnn
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 0.0.3 |
Data Source | Hendrik Schreiber, Meinard Müller. A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018. |
Annotation Tools | schreiber tempo-cnn (model=cnn), https://github.com/hendriks73/tempo-cnn |
schreiber2018/fcn
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 0.0.3 |
Data Source | Hendrik Schreiber, Meinard Müller. A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018. |
Annotation Tools | schreiber tempo-cnn (model=fcn), https://github.com/hendriks73/tempo-cnn |
schreiber2018/ismir2018
Attribute | Value |
---|---|
Corpus | rwc_mdb_j |
Version | 0.0.3 |
Data Source | Hendrik Schreiber, Meinard Müller. A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018. |
Annotation Tools | schreiber tempo-cnn (model=ismir2018), https://github.com/hendriks73/tempo-cnn |
Basic Statistics
Estimator | Size | Min | Max | Avg | Stdev | Sweet Oct. Start | Sweet Oct. Coverage |
---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 50 | 54.05 | 162.16 | 99.10 | 28.77 | 69.00 | 0.76 |
davies2009/mirex_qm_tempotracker | 50 | 89.10 | 191.41 | 128.66 | 23.63 | 84.00 | 0.96 |
percival2014/stem | 50 | 54.11 | 143.56 | 96.14 | 24.18 | 68.00 | 0.86 |
schreiber2014/default | 50 | 55.75 | 141.82 | 91.30 | 21.66 | 62.00 | 0.88 |
schreiber2017/ismir2017 | 50 | 58.83 | 196.67 | 101.39 | 25.54 | 67.00 | 0.92 |
schreiber2017/mirex2017 | 50 | 41.72 | 206.76 | 98.04 | 27.70 | 66.00 | 0.88 |
schreiber2018/cnn | 50 | 54.00 | 216.00 | 114.68 | 43.19 | 65.00 | 0.64 |
schreiber2018/fcn | 50 | 43.00 | 208.00 | 104.68 | 35.74 | 73.00 | 0.78 |
schreiber2018/ismir2018 | 50 | 56.00 | 208.00 | 109.52 | 33.08 | 73.00 | 0.78 |
Smoothed Tempo Distribution
Figure 4: Percentage of values in tempo interval.
Accuracy
Accuracy1 is defined as the percentage of correct estimates, allowing a 4% tolerance for individual BPM values.
Accuracy2 additionally permits estimates to be wrong by a factor of 2, 3, 1/2 or 1/3 (so-called octave errors).
See [Gouyon2006].
Note: When comparing accuracy values for different algorithms, keep in mind that an algorithm may have been trained on the test set or that the test set may have even been created using one of the tested algorithms.
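A minimal sketch of the two metrics defined above for a single track (names and structure are illustrative, not tempo_eval’s API):

```python
def accuracy1(estimate, reference, tolerance=0.04):
    """True if the estimated BPM is within +/-4% of the reference BPM."""
    return abs(estimate - reference) <= tolerance * reference

def accuracy2(estimate, reference, tolerance=0.04):
    """Like Accuracy1, but octave errors 2, 3, 1/2, and 1/3 also count."""
    factors = (1.0, 2.0, 3.0, 1.0 / 2.0, 1.0 / 3.0)
    return any(accuracy1(estimate * f, reference, tolerance) for f in factors)

# 60 BPM against a 120 BPM reference fails Accuracy1 but passes Accuracy2
# (an octave error by a factor of 2).
assert not accuracy1(60, 120) and accuracy2(60, 120)
```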
Accuracy Results for 1.0
Estimator | Accuracy1 | Accuracy2 |
---|---|---|
percival2014/stem | 0.7800 | 0.9800 |
boeck2015/tempodetector2016_default | 0.7400 | 0.9800 |
schreiber2017/mirex2017 | 0.6200 | 0.8800 |
schreiber2017/ismir2017 | 0.6200 | 0.8800 |
schreiber2018/cnn | 0.6000 | 0.8600 |
schreiber2018/fcn | 0.5800 | 0.8600 |
schreiber2018/ismir2018 | 0.5800 | 0.9000 |
schreiber2014/default | 0.5600 | 0.7400 |
davies2009/mirex_qm_tempotracker | 0.4800 | 0.9000 |
Table 3: Mean accuracy of estimates compared to version 1.0 with 4% tolerance, ordered by Accuracy1.
Accuracy1 for 1.0
Figure 5: Mean Accuracy1 for estimates compared to version 1.0 depending on tolerance.
Accuracy2 for 1.0
Figure 6: Mean Accuracy2 for estimates compared to version 1.0 depending on tolerance.
Differing Items
For which items did a given estimator not estimate a correct value with respect to a given ground truth? Are there items that are very difficult, unsuitable for the task, or incorrectly annotated, and are therefore never estimated correctly, regardless of which estimator is used?
Differing Items Accuracy1
Items with different tempo annotations (Accuracy1, 4% tolerance) in different versions:
1.0 compared with boeck2015/tempodetector2016_default (13 differences): ‘RM-J001’ ‘RM-J006’ ‘RM-J008’ ‘RM-J009’ ‘RM-J018’ ‘RM-J028’ ‘RM-J031’ ‘RM-J038’ ‘RM-J040’ ‘RM-J044’ ‘RM-J045’ …
1.0 compared with davies2009/mirex_qm_tempotracker (26 differences): ‘RM-J001’ ‘RM-J002’ ‘RM-J003’ ‘RM-J004’ ‘RM-J006’ ‘RM-J007’ ‘RM-J008’ ‘RM-J009’ ‘RM-J012’ ‘RM-J013’ ‘RM-J014’ …
1.0 compared with percival2014/stem (11 differences): ‘RM-J004’ ‘RM-J009’ ‘RM-J010’ ‘RM-J012’ ‘RM-J031’ ‘RM-J032’ ‘RM-J038’ ‘RM-J040’ ‘RM-J044’ ‘RM-J046’ ‘RM-J050’ …
1.0 compared with schreiber2014/default (22 differences): ‘RM-J005’ ‘RM-J008’ ‘RM-J009’ ‘RM-J010’ ‘RM-J011’ ‘RM-J013’ ‘RM-J015’ ‘RM-J019’ ‘RM-J020’ ‘RM-J022’ ‘RM-J023’ …
1.0 compared with schreiber2017/ismir2017 (19 differences): ‘RM-J002’ ‘RM-J005’ ‘RM-J008’ ‘RM-J009’ ‘RM-J011’ ‘RM-J012’ ‘RM-J017’ ‘RM-J022’ ‘RM-J025’ ‘RM-J027’ ‘RM-J030’ …
1.0 compared with schreiber2017/mirex2017 (19 differences): ‘RM-J002’ ‘RM-J005’ ‘RM-J008’ ‘RM-J009’ ‘RM-J015’ ‘RM-J016’ ‘RM-J017’ ‘RM-J022’ ‘RM-J025’ ‘RM-J027’ ‘RM-J030’ …
1.0 compared with schreiber2018/cnn (20 differences): ‘RM-J001’ ‘RM-J003’ ‘RM-J004’ ‘RM-J006’ ‘RM-J009’ ‘RM-J015’ ‘RM-J016’ ‘RM-J017’ ‘RM-J021’ ‘RM-J024’ ‘RM-J026’ …
1.0 compared with schreiber2018/fcn (21 differences): ‘RM-J001’ ‘RM-J003’ ‘RM-J006’ ‘RM-J008’ ‘RM-J009’ ‘RM-J015’ ‘RM-J016’ ‘RM-J017’ ‘RM-J020’ ‘RM-J021’ ‘RM-J024’ …
1.0 compared with schreiber2018/ismir2018 (21 differences): ‘RM-J001’ ‘RM-J002’ ‘RM-J004’ ‘RM-J006’ ‘RM-J008’ ‘RM-J009’ ‘RM-J021’ ‘RM-J022’ ‘RM-J024’ ‘RM-J026’ ‘RM-J028’ …
None of the estimators estimated the following 5 items ‘correctly’ using Accuracy1: ‘RM-J009’ ‘RM-J031’ ‘RM-J038’ ‘RM-J040’ ‘RM-J044’
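The ‘never correct’ list above is simply the intersection of the per-estimator error sets. A sketch of that bookkeeping with illustrative data:

```python
# Per-estimator sets of item IDs that failed Accuracy1
# (illustrative subsets of the lists above).
errors = {
    "boeck2015/tempodetector2016_default":
        {"RM-J001", "RM-J009", "RM-J031", "RM-J038", "RM-J040", "RM-J044"},
    "percival2014/stem":
        {"RM-J004", "RM-J009", "RM-J031", "RM-J038", "RM-J040", "RM-J044"},
}

# Items no estimator got right: the intersection of all error sets.
never_correct = set.intersection(*errors.values())
print(sorted(never_correct))
```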
Differing Items Accuracy2
Items with different tempo annotations (Accuracy2, 4% tolerance) in different versions:
1.0 compared with boeck2015/tempodetector2016_default (1 difference): ‘RM-J044’
1.0 compared with davies2009/mirex_qm_tempotracker (5 differences): ‘RM-J013’ ‘RM-J014’ ‘RM-J018’ ‘RM-J031’ ‘RM-J044’
1.0 compared with percival2014/stem (1 difference): ‘RM-J044’
1.0 compared with schreiber2014/default (13 differences): ‘RM-J005’ ‘RM-J009’ ‘RM-J011’ ‘RM-J013’ ‘RM-J015’ ‘RM-J019’ ‘RM-J020’ ‘RM-J022’ ‘RM-J023’ ‘RM-J025’ ‘RM-J030’ …
1.0 compared with schreiber2017/ismir2017 (6 differences): ‘RM-J005’ ‘RM-J025’ ‘RM-J030’ ‘RM-J035’ ‘RM-J036’ ‘RM-J044’
1.0 compared with schreiber2017/mirex2017 (6 differences): ‘RM-J005’ ‘RM-J025’ ‘RM-J030’ ‘RM-J035’ ‘RM-J036’ ‘RM-J044’
1.0 compared with schreiber2018/cnn (7 differences): ‘RM-J001’ ‘RM-J015’ ‘RM-J030’ ‘RM-J031’ ‘RM-J035’ ‘RM-J036’ ‘RM-J044’
1.0 compared with schreiber2018/fcn (7 differences): ‘RM-J003’ ‘RM-J020’ ‘RM-J030’ ‘RM-J031’ ‘RM-J035’ ‘RM-J036’ ‘RM-J044’
1.0 compared with schreiber2018/ismir2018 (5 differences): ‘RM-J002’ ‘RM-J030’ ‘RM-J035’ ‘RM-J036’ ‘RM-J044’
All tracks were estimated ‘correctly’ by at least one system.
Significance of Differences
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0044 | 0.7539 | 0.0636 | 0.2379 | 0.2379 | 0.1671 | 0.0386 | 0.0574 |
davies2009/mirex_qm_tempotracker | 0.0044 | 1.0000 | 0.0015 | 0.5716 | 0.1892 | 0.2100 | 0.2863 | 0.3833 | 0.3323 |
percival2014/stem | 0.7539 | 0.0015 | 1.0000 | 0.0127 | 0.0574 | 0.0768 | 0.0636 | 0.0309 | 0.0213 |
schreiber2014/default | 0.0636 | 0.5716 | 0.0127 | 1.0000 | 0.5811 | 0.5811 | 0.8318 | 1.0000 | 1.0000 |
schreiber2017/ismir2017 | 0.2379 | 0.1892 | 0.0574 | 0.5811 | 1.0000 | 1.0000 | 1.0000 | 0.8145 | 0.7905 |
schreiber2017/mirex2017 | 0.2379 | 0.2100 | 0.0768 | 0.5811 | 1.0000 | 1.0000 | 1.0000 | 0.7905 | 0.7905 |
schreiber2018/cnn | 0.1671 | 0.2863 | 0.0636 | 0.8318 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/fcn | 0.0386 | 0.3833 | 0.0309 | 1.0000 | 0.8145 | 0.7905 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/ismir2018 | 0.0574 | 0.3323 | 0.0213 | 1.0000 | 0.7905 | 0.7905 | 1.0000 | 1.0000 | 1.0000 |
Table 4: McNemar p-values, using reference annotations 1.0 as ground truth with Accuracy1 [Gouyon2006]. H0: both estimators disagree with the ground truth to the same extent. If p ≤ α, reject H0, i.e., the two estimators differ significantly in how much they disagree with the ground truth. In the table, p-values < 0.05 are set in bold.
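A hedged sketch of the test behind Tables 4 and 5, using statsmodels (whether tempo_eval uses the exact binomial variant shown here is an assumption):

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def mcnemar_p(correct_a, correct_b):
    """McNemar p-value for two estimators' per-track correctness
    (boolean arrays) against the same ground truth."""
    a, b = np.asarray(correct_a, bool), np.asarray(correct_b, bool)
    table = [[np.sum(a & b), np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    return mcnemar(table, exact=True).pvalue  # exact binomial test
```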
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.1250 | 1.0000 | 0.0018 | 0.0625 | 0.0625 | 0.0312 | 0.0312 | 0.1250 |
davies2009/mirex_qm_tempotracker | 0.1250 | 1.0000 | 0.1250 | 0.0768 | 1.0000 | 1.0000 | 0.7266 | 0.7266 | 1.0000 |
percival2014/stem | 1.0000 | 0.1250 | 1.0000 | 0.0018 | 0.0625 | 0.0625 | 0.0312 | 0.0312 | 0.1250 |
schreiber2014/default | 0.0018 | 0.0768 | 0.0018 | 1.0000 | 0.0391 | 0.0391 | 0.1460 | 0.1460 | 0.0386 |
schreiber2017/ismir2017 | 0.0625 | 1.0000 | 0.0625 | 0.0391 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2017/mirex2017 | 0.0625 | 1.0000 | 0.0625 | 0.0391 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/cnn | 0.0312 | 0.7266 | 0.0312 | 0.1460 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.6250 |
schreiber2018/fcn | 0.0312 | 0.7266 | 0.0312 | 0.1460 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.6250 |
schreiber2018/ismir2018 | 0.1250 | 1.0000 | 0.1250 | 0.0386 | 1.0000 | 1.0000 | 0.6250 | 0.6250 | 1.0000 |
Table 5: McNemar p-values, using reference annotations 1.0 as ground truth with Accuracy2 [Gouyon2006]. H0: both estimators disagree with the ground truth to the same extent. If p ≤ α, reject H0, i.e., the two estimators differ significantly in how much they disagree with the ground truth. In the table, p-values < 0.05 are set in bold.
Accuracy1 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value below τ, i.e., tracks with a more or less stable beat?
Accuracy1 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 7: Mean Accuracy1 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
Accuracy2 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value below τ, i.e., tracks with a more or less stable beat?
Accuracy2 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 8: Mean Accuracy2 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
Accuracy1 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show the mean Accuracy1 for reference subsets with tempi in [T−10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
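A sketch of such a windowed mean (the function and array names are mine):

```python
import numpy as np

def windowed_mean(ref_tempi, scores, T, width=10.0):
    """Mean score over tracks whose reference tempo lies in
    [T - width, T + width]; NaN if the window is empty."""
    ref_tempi = np.asarray(ref_tempi, dtype=float)
    scores = np.asarray(scores, dtype=float)
    mask = (ref_tempi >= T - width) & (ref_tempi <= T + width)
    return float(scores[mask].mean()) if mask.any() else float("nan")
```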
Accuracy1 on Tempo-Subsets for 1.0
Figure 9: Mean Accuracy1 for estimates compared to version 1.0 for tempo intervals around T.
Accuracy2 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show the mean Accuracy2 for reference subsets with tempi in [T−10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
Accuracy2 on Tempo-Subsets for 1.0
Figure 10: Mean Accuracy2 for estimates compared to version 1.0 for tempo intervals around T.
Estimated Accuracy1 for Tempo
When fitting a generalized additive model (GAM) to Accuracy1 values against the ground-truth tempo, what Accuracy1 can we expect at a given tempo, and with what confidence?
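A hedged sketch of such a fit on synthetic data using pyGAM (whether tempo_eval actually uses pyGAM or these settings is an assumption):

```python
import numpy as np
from pygam import LogisticGAM, s

rng = np.random.default_rng(0)
X = rng.uniform(40, 160, size=(200, 1))          # synthetic reference tempi
y = (rng.random(200) < 0.7).astype(int)          # synthetic 0/1 Accuracy1

gam = LogisticGAM(s(0)).fit(X, y)                # one smooth term over tempo
grid = np.linspace(40, 160, 100).reshape(-1, 1)
expected = gam.predict_mu(grid)                  # expected accuracy per tempo
ci = gam.confidence_intervals(grid, width=0.95)  # 95% band, as in the figure
```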
Estimated Accuracy1 for Tempo for 1.0
Predictions of GAMs trained on Accuracy1 for estimates for reference 1.0.
Figure 11: Accuracy1 predictions of a generalized additive model (GAM) fit to Accuracy1 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Estimated Accuracy2 for Tempo
When fitting a generalized additive model (GAM) to Accuracy2 values against the ground-truth tempo, what Accuracy2 can we expect at a given tempo, and with what confidence?
Estimated Accuracy2 for Tempo for 1.0
Predictions of GAMs trained on Accuracy2 for estimates for reference 1.0.
Figure 12: Accuracy2 predictions of a generalized additive model (GAM) fit to Accuracy2 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Accuracy1 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks tagged with a given label? Note that some values may be based on very few estimates.
Accuracy1 for ‘tag_open’ Tags for 1.0
Figure 13: Mean Accuracy1 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
Accuracy2 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks tagged with a given label? Note that some values may be based on very few estimates.
Accuracy2 for ‘tag_open’ Tags for 1.0
Figure 14: Mean Accuracy2 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
OE1 and OE2
OE1 is defined as the octave error between an estimate E and a reference value R: OE1(E) = log2(E/R). This means that the most common errors (by a factor of 2 or ½) have the same magnitude, namely 1.
OE2 is the signed OE1 corresponding to the minimum absolute OE1, allowing the octave errors 2, 3, 1/2, and 1/3: OE2(E) = arg min_x |x| with x ∈ {OE1(E), OE1(2E), OE1(3E), OE1(½E), OE1(⅓E)}.
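A minimal sketch of both definitions (illustrative code, not tempo_eval’s API):

```python
import math

def oe1(estimate, reference):
    """Signed octave error: log2(estimate / reference)."""
    return math.log2(estimate / reference)

def oe2(estimate, reference):
    """The candidate OE1 among {E, 2E, 3E, E/2, E/3} with the
    smallest magnitude, keeping its sign."""
    return min((oe1(estimate * f, reference)
                for f in (1.0, 2.0, 3.0, 1.0 / 2.0, 1.0 / 3.0)), key=abs)

print(oe1(60, 120), oe2(60, 120))  # -1.0 0.0
```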
Mean OE1/OE2 Results for 1.0
Estimator | OE1_MEAN | OE1_STDEV | OE2_MEAN | OE2_STDEV |
---|---|---|---|---|
percival2014/stem | 0.1045 | 0.4670 | 0.0045 | 0.0341 |
boeck2015/tempodetector2016_default | 0.1331 | 0.4829 | -0.0069 | 0.0590 |
schreiber2014/default | 0.0360 | 0.5049 | 0.0243 | 0.1471 |
schreiber2018/ismir2018 | 0.2768 | 0.5208 | 0.0168 | 0.0846 |
schreiber2017/ismir2017 | 0.1856 | 0.5323 | 0.0456 | 0.1256 |
davies2009/mirex_qm_tempotracker | 0.5487 | 0.5478 | 0.0253 | 0.1185 |
schreiber2017/mirex2017 | 0.1256 | 0.5861 | 0.0373 | 0.1096 |
schreiber2018/cnn | 0.3095 | 0.5991 | 0.0461 | 0.1351 |
schreiber2018/fcn | 0.1967 | 0.6355 | 0.0250 | 0.1027 |
Table 6: Mean OE1/OE2 for estimates compared to version 1.0, ordered by OE1 standard deviation.
OE1 distribution for 1.0
Figure 15: OE1 for estimates compared to version 1.0. Shown are the mean OE1 and an empirical distribution of the sample, using kernel density estimation (KDE).
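The smoothing in Figures 15 and 16 can be approximated with a Gaussian KDE; a minimal sketch with SciPy on placeholder data (the bandwidth rule actually used is an assumption):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
oe_values = rng.normal(0.1, 0.5, size=50)  # placeholder per-track OE1 values

kde = gaussian_kde(oe_values)   # Gaussian kernels, Scott's-rule bandwidth
grid = np.linspace(-2.0, 2.0, 200)
density = kde(grid)             # smoothed empirical distribution
```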
OE2 distribution for 1.0
Figure 16: OE2 for estimates compared to version 1.0. Shown are the mean OE2 and an empirical distribution of the sample, using kernel density estimation (KDE).
Significance of Differences
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.6639 | 0.1468 | 0.5038 | 0.9287 | 0.0522 | 0.3398 | 0.0277 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0049 | 0.0000 | 0.0003 |
percival2014/stem | 0.6639 | 0.0000 | 1.0000 | 0.2219 | 0.2178 | 0.7865 | 0.0115 | 0.2567 | 0.0147 |
schreiber2014/default | 0.1468 | 0.0000 | 0.2219 | 1.0000 | 0.0395 | 0.2117 | 0.0004 | 0.0126 | 0.0008 |
schreiber2017/ismir2017 | 0.5038 | 0.0000 | 0.2178 | 0.0395 | 1.0000 | 0.1824 | 0.1760 | 0.8964 | 0.2289 |
schreiber2017/mirex2017 | 0.9287 | 0.0000 | 0.7865 | 0.2117 | 0.1824 | 1.0000 | 0.0365 | 0.3548 | 0.0603 |
schreiber2018/cnn | 0.0522 | 0.0049 | 0.0115 | 0.0004 | 0.1760 | 0.0365 | 1.0000 | 0.0989 | 0.6604 |
schreiber2018/fcn | 0.3398 | 0.0000 | 0.2567 | 0.0126 | 0.8964 | 0.3548 | 0.0989 | 1.0000 | 0.2508 |
schreiber2018/ismir2018 | 0.0277 | 0.0003 | 0.0147 | 0.0008 | 0.2289 | 0.0603 | 0.6604 | 0.2508 | 1.0000 |
Table 7: Paired t-test p-values, using reference annotations 1.0 as ground truth with OE1. H0: the true mean difference between the paired samples is zero. If p ≤ α, reject H0, i.e., the estimates of the two algorithms differ significantly. In the table, p-values < 0.05 are set in bold.
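A sketch of the paired test behind Tables 7 and 8, using SciPy on synthetic per-track OE1 values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
oe1_a = rng.normal(0.10, 0.5, size=50)  # per-track OE1, estimator A
oe1_b = rng.normal(0.30, 0.5, size=50)  # per-track OE1, estimator B

t_stat, p_value = stats.ttest_rel(oe1_a, oe1_b)  # paired t-test
print(p_value)
```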
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.1200 | 0.3852 | 0.1713 | 0.0114 | 0.0161 | 0.0364 | 0.0293 | 0.0620 |
davies2009/mirex_qm_tempotracker | 0.1200 | 1.0000 | 0.2209 | 0.9690 | 0.4165 | 0.6080 | 0.3352 | 0.9879 | 0.6908 |
percival2014/stem | 0.3852 | 0.2209 | 1.0000 | 0.3629 | 0.0288 | 0.0456 | 0.0207 | 0.2299 | 0.3893 |
schreiber2014/default | 0.1713 | 0.9690 | 0.3629 | 1.0000 | 0.1588 | 0.4025 | 0.2925 | 0.9732 | 0.6831 |
schreiber2017/ismir2017 | 0.0114 | 0.4165 | 0.0288 | 0.1588 | 1.0000 | 0.3222 | 0.9815 | 0.2546 | 0.0523 |
schreiber2017/mirex2017 | 0.0161 | 0.6080 | 0.0456 | 0.4025 | 0.3222 | 1.0000 | 0.6300 | 0.4082 | 0.0798 |
schreiber2018/cnn | 0.0364 | 0.3352 | 0.0207 | 0.2925 | 0.9815 | 0.6300 | 1.0000 | 0.2277 | 0.1144 |
schreiber2018/fcn | 0.0293 | 0.9879 | 0.2299 | 0.9732 | 0.2546 | 0.4082 | 0.2277 | 1.0000 | 0.3672 |
schreiber2018/ismir2018 | 0.0620 | 0.6908 | 0.3893 | 0.6831 | 0.0523 | 0.0798 | 0.1144 | 0.3672 | 1.0000 |
Table 8: Paired t-test p-values, using reference annotations 1.0 as ground truth with OE2. H0: the true mean difference between the paired samples is zero. If p ≤ α, reject H0, i.e., the estimates of the two algorithms differ significantly. In the table, p-values < 0.05 are set in bold.
OE1 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value below τ, i.e., tracks with a more or less stable beat?
OE1 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 17: Mean OE1 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
OE2 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value below τ, i.e., tracks with a more or less stable beat?
OE2 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 18: Mean OE2 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
OE1 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show the mean OE1 for reference subsets with tempi in [T−10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
OE1 on Tempo-Subsets for 1.0
Figure 19: Mean OE1 for estimates compared to version 1.0 for tempo intervals around T.
OE2 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show the mean OE2 for reference subsets with tempi in [T−10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
OE2 on Tempo-Subsets for 1.0
Figure 20: Mean OE2 for estimates compared to version 1.0 for tempo intervals around T.
Estimated OE1 for Tempo
When fitting a generalized additive model (GAM) to OE1 values against the ground-truth tempo, what OE1 can we expect at a given tempo, and with what confidence?
Estimated OE1 for Tempo for 1.0
Predictions of GAMs trained on OE1 for estimates for reference 1.0.
Figure 21: OE1 predictions of a generalized additive model (GAM) fit to OE1 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Estimated OE2 for Tempo
When fitting a generalized additive model (GAM) to OE2 values against the ground-truth tempo, what OE2 can we expect at a given tempo, and with what confidence?
Estimated OE2 for Tempo for 1.0
Predictions of GAMs trained on OE2 for estimates for reference 1.0.
Figure 22: OE2 predictions of a generalized additive model (GAM) fit to OE2 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
OE1 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks tagged with a given label? Note that some values may be based on very few estimates.
OE1 for ‘tag_open’ Tags for 1.0
Figure 23: OE1 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
OE2 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks tagged with a given label? Note that some values may be based on very few estimates.
OE2 for ‘tag_open’ Tags for 1.0
Figure 24: OE2 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
AOE1 and AOE2
AOE1 is defined as the absolute octave error between an estimate E and a reference value R: AOE1(E) = |log2(E/R)|.
AOE2 is the minimum AOE1 allowing the octave errors 2, 3, 1/2, and 1/3: AOE2(E) = min(AOE1(E), AOE1(2E), AOE1(3E), AOE1(½E), AOE1(⅓E)).
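A minimal sketch mirroring the OE example above (illustrative code, not tempo_eval’s API):

```python
import math

def aoe1(estimate, reference):
    """Absolute octave error: |log2(estimate / reference)|."""
    return abs(math.log2(estimate / reference))

def aoe2(estimate, reference):
    """Minimum AOE1 over the octave corrections 2, 3, 1/2, and 1/3."""
    return min(aoe1(estimate * f, reference)
               for f in (1.0, 2.0, 3.0, 1.0 / 2.0, 1.0 / 3.0))
```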
Mean AOE1/AOE2 Results for 1.0
Estimator | AOE1_MEAN | AOE1_STDEV | AOE2_MEAN | AOE2_STDEV |
---|---|---|---|---|
percival2014/stem | 0.2284 | 0.4206 | 0.0109 | 0.0326 |
boeck2015/tempodetector2016_default | 0.2604 | 0.4278 | 0.0168 | 0.0569 |
schreiber2014/default | 0.2930 | 0.4127 | 0.0807 | 0.1254 |
schreiber2017/ismir2017 | 0.3472 | 0.4442 | 0.0538 | 0.1223 |
schreiber2017/mirex2017 | 0.3672 | 0.4738 | 0.0455 | 0.1064 |
schreiber2018/ismir2018 | 0.3698 | 0.4594 | 0.0319 | 0.0801 |
schreiber2018/cnn | 0.3871 | 0.5521 | 0.0551 | 0.1316 |
schreiber2018/fcn | 0.3931 | 0.5366 | 0.0416 | 0.0972 |
davies2009/mirex_qm_tempotracker | 0.5488 | 0.5476 | 0.0549 | 0.1081 |
Table 9: Mean AOE1/AOE2 for estimates compared to version 1.0, ordered by AOE1 mean.
AOE1 distribution for 1.0
Figure 25: AOE1 for estimates compared to version 1.0. Shown are the mean AOE1 and an empirical distribution of the sample, using kernel density estimation (KDE).
AOE2 distribution for 1.0
Figure 26: AOE2 for estimates compared to version 1.0. Shown are the mean AOE2 and an empirical distribution of the sample, using kernel density estimation (KDE).
Significance of Differences
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0008 | 0.6234 | 0.6199 | 0.2621 | 0.1943 | 0.1608 | 0.0412 | 0.0916 |
davies2009/mirex_qm_tempotracker | 0.0008 | 1.0000 | 0.0002 | 0.0051 | 0.0128 | 0.0460 | 0.0580 | 0.0810 | 0.0207 |
percival2014/stem | 0.6234 | 0.0002 | 1.0000 | 0.2451 | 0.0637 | 0.0669 | 0.0520 | 0.0382 | 0.0466 |
schreiber2014/default | 0.6199 | 0.0051 | 0.2451 | 1.0000 | 0.4173 | 0.2525 | 0.2317 | 0.1130 | 0.2889 |
schreiber2017/ismir2017 | 0.2621 | 0.0128 | 0.0637 | 0.4173 | 1.0000 | 0.6547 | 0.6601 | 0.5846 | 0.7612 |
schreiber2017/mirex2017 | 0.1943 | 0.0460 | 0.0669 | 0.2525 | 0.6547 | 1.0000 | 0.8215 | 0.7341 | 0.9741 |
schreiber2018/cnn | 0.1608 | 0.0580 | 0.0520 | 0.2317 | 0.6601 | 0.8215 | 1.0000 | 0.9294 | 0.8122 |
schreiber2018/fcn | 0.0412 | 0.0810 | 0.0382 | 0.1130 | 0.5846 | 0.7341 | 0.9294 | 1.0000 | 0.7332 |
schreiber2018/ismir2018 | 0.0916 | 0.0207 | 0.0466 | 0.2889 | 0.7612 | 0.9741 | 0.8122 | 0.7332 | 1.0000 |
Table 10: Paired t-test p-values, using reference annotations 1.0 as ground truth with AOE1. H0: the true mean difference between the paired samples is zero. If p ≤ α, reject H0, i.e., the estimates of the two algorithms differ significantly. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0250 | 0.1183 | 0.0021 | 0.0546 | 0.0944 | 0.0213 | 0.0927 | 0.2361 |
davies2009/mirex_qm_tempotracker | 0.0250 | 1.0000 | 0.0068 | 0.3016 | 0.9655 | 0.6796 | 0.9899 | 0.4637 | 0.2464 |
percival2014/stem | 0.1183 | 0.0068 | 1.0000 | 0.0004 | 0.0198 | 0.0302 | 0.0110 | 0.0271 | 0.0667 |
schreiber2014/default | 0.0021 | 0.3016 | 0.0004 | 1.0000 | 0.0396 | 0.0079 | 0.2075 | 0.0287 | 0.0024 |
schreiber2017/ismir2017 | 0.0546 | 0.9655 | 0.0198 | 0.0396 | 1.0000 | 0.3222 | 0.9479 | 0.4465 | 0.1194 |
schreiber2017/mirex2017 | 0.0944 | 0.6796 | 0.0302 | 0.0079 | 0.3222 | 1.0000 | 0.5951 | 0.7728 | 0.2012 |
schreiber2018/cnn | 0.0213 | 0.9899 | 0.0110 | 0.2075 | 0.9479 | 0.5951 | 1.0000 | 0.2044 | 0.0955 |
schreiber2018/fcn | 0.0927 | 0.4637 | 0.0271 | 0.0287 | 0.4465 | 0.7728 | 0.2044 | 1.0000 | 0.2437 |
schreiber2018/ismir2018 | 0.2361 | 0.2464 | 0.0667 | 0.0024 | 0.1194 | 0.2012 | 0.0955 | 0.2437 | 1.0000 |
Table 11: Paired t-test p-values, using reference annotations 1.0 as ground truth with AOE2. H0: the true mean difference between the paired samples is zero. If p ≤ α, reject H0, i.e., the estimates of the two algorithms differ significantly. In the table, p-values < 0.05 are set in bold.
AOE1 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value below τ, i.e., tracks with a more or less stable beat?
AOE1 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 27: Mean AOE1 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
AOE2 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value below τ, i.e., tracks with a more or less stable beat?
AOE2 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 28: Mean AOE2 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
AOE1 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show the mean AOE1 for reference subsets with tempi in [T−10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
AOE1 on Tempo-Subsets for 1.0
Figure 29: Mean AOE1 for estimates compared to version 1.0 for tempo intervals around T.
AOE2 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show the mean AOE2 for reference subsets with tempi in [T−10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
AOE2 on Tempo-Subsets for 1.0
Figure 30: Mean AOE2 for estimates compared to version 1.0 for tempo intervals around T.
Estimated AOE1 for Tempo
When fitting a generalized additive model (GAM) to AOE1 values against the ground-truth tempo, what AOE1 can we expect at a given tempo, and with what confidence?
Estimated AOE1 for Tempo for 1.0
Predictions of GAMs trained on AOE1 for estimates for reference 1.0.
Figure 31: AOE1 predictions of a generalized additive model (GAM) fit to AOE1 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Estimated AOE2 for Tempo
When fitting a generalized additive model (GAM) to AOE2 values against the ground-truth tempo, what AOE2 can we expect at a given tempo, and with what confidence?
Estimated AOE2 for Tempo for 1.0
Predictions of GAMs trained on AOE2 for estimates for reference 1.0.
Figure 32: AOE2 predictions of a generalized additive model (GAM) fit to AOE2 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
AOE1 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks tagged with a given label? Note that some values may be based on very few estimates.
AOE1 for ‘tag_open’ Tags for 1.0
Figure 33: AOE1 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
AOE2 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks tagged with a given label? Note that some values may be based on very few estimates.
AOE2 for ‘tag_open’ Tags for 1.0
Figure 34: AOE2 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
Generated by tempo_eval 0.1.1 on 2022-06-29 18:52.