lmd_tempo
This is the tempo_eval report for the ‘lmd_tempo’ corpus.
Table of Contents
- References for ‘lmd_tempo’
- Estimates for ‘lmd_tempo’
References for ‘lmd_tempo’
References
1.0
Attribute | Value |
---|---|
Corpus | lmd_tempo |
Version | 1.0 |
Curator | Hendrik Schreiber |
Data Source | LMD 0.1, https://colinraffel.com/projects/lmd/ |
Annotation Tools | Schreiber2017, MIDI parser |
Annotation Rules | Consensus between Schreiber2017 and MIDI messages (2% tolerance). |
Annotator, bibtex | Schreiber2018a |
Annotator, ref_url | http://www.tagtraum.com/tempo_estimation.html |
Basic Statistics
Reference | Size | Min | Max | Avg | Stdev | Sweet Oct. Start | Sweet Oct. Coverage |
---|---|---|---|---|---|---|---|
1.0 | 3611 | 44.44 | 161.42 | 113.59 | 21.25 | 77.00 | 0.94 |
Smoothed Tempo Distribution
Figure 1: Percentage of values in tempo interval.
Tag Distribution for ‘tag_open’
Figure 2: Percentage of tracks tagged with tags from namespace ‘tag_open’. Annotations are from reference 1.0.
Estimates for ‘lmd_tempo’
Estimators
boeck2015/tempodetector2016_default
Attribute | Value |
---|---|
Corpus | lmd_tempo |
Version | 0.17.dev0 |
Annotation Tools | TempoDetector.2016, madmom, https://github.com/CPJKU/madmom |
Annotator, bibtex | Boeck2015 |
davies2009/mirex_qm_tempotracker
Attribute | Value |
---|---|
Corpus | lmd_tempo |
Version | 1.0 |
Annotation Tools | QM Tempotracker, Sonic Annotator plugin, https://code.soundsoftware.ac.uk/projects/mirex2013/repository/show/audio_tempo_estimation/qm-tempotracker. Note that the current macOS build of ‘qm-vamp-plugins’ was used. |
Annotator, bibtex | Davies2009, Davies2007 |
percival2014/stem
Attribute | Value |
---|---|
Corpus | lmd_tempo |
Version | 1.0 |
Annotation Tools | percival 2014, ‘tempo’ implementation from Marsyas, http://marsyas.info, git checkout tempo-stem |
Annotator, bibtex | Percival2014 |
schreiber2017/ismir2017
Attribute | Value |
---|---|
Corpus | lmd_tempo |
Version | 0.0.4 |
Annotation Tools | schreiber 2017, model=ismir2017, http://www.tagtraum.com/tempo_estimation.html |
Annotator, bibtex | Schreiber2017 |
schreiber2017/mirex2017
Attribute | Value |
---|---|
Corpus | lmd_tempo |
Version | 0.0.4 |
Annotation Tools | schreiber 2017, model=mirex2017, http://www.tagtraum.com/tempo_estimation.html |
Annotator, bibtex | Schreiber2017 |
schreiber2018/cnn
Attribute | Value |
---|---|
Corpus | lmd_tempo |
Version | 0.0.2 |
Data Source | Hendrik Schreiber, Meinard Müller. A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018. |
Annotation Tools | schreiber tempo-cnn (model=cnn), https://github.com/hendriks73/tempo-cnn |
schreiber2018/fcn
Attribute | Value |
---|---|
Corpus | lmd_tempo |
Version | 0.0.2 |
Data Source | Hendrik Schreiber, Meinard Müller. A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018. |
Annotation Tools | schreiber tempo-cnn (model=fcn), https://github.com/hendriks73/tempo-cnn |
schreiber2018/ismir2018
Attribute | Value |
---|---|
Corpus | lmd_tempo |
Version | 0.0.2 |
Data Source | Hendrik Schreiber, Meinard Müller. A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018. |
Annotation Tools | schreiber tempo-cnn (model=ismir2018), https://github.com/hendriks73/tempo-cnn |
Basic Statistics
Estimator | Size | Min | Max | Avg | Stdev | Sweet Oct. Start | Sweet Oct. Coverage |
---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 3611 | 41.96 | 240.00 | 113.00 | 24.72 | 74.00 | 0.89 |
davies2009/mirex_qm_tempotracker | 3611 | 64.60 | 234.91 | 121.64 | 21.04 | 79.00 | 0.95 |
percival2014/stem | 3611 | 50.54 | 20671.90 | 115.52 | 342.85 | 72.00 | 0.93 |
schreiber2017/ismir2017 | 3611 | 41.40 | 201.95 | 113.42 | 22.23 | 75.00 | 0.92 |
schreiber2017/mirex2017 | 3611 | 53.82 | 204.08 | 114.27 | 22.56 | 75.00 | 0.91 |
schreiber2018/cnn | 3611 | 41.00 | 232.00 | 114.41 | 22.77 | 77.00 | 0.92 |
schreiber2018/fcn | 3611 | 40.00 | 224.00 | 113.99 | 24.66 | 76.00 | 0.89 |
schreiber2018/ismir2018 | 3611 | 56.00 | 224.00 | 114.86 | 21.56 | 77.00 | 0.94 |
Smoothed Tempo Distribution
Figure 3: Percentage of values in tempo interval.
Accuracy
Accuracy1 is defined as the percentage of correct estimates, allowing a 4% tolerance for individual BPM values.
Accuracy2 additionally permits estimates to be wrong by a factor of 2, 3, 1/2 or 1/3 (so-called octave errors).
See [Gouyon2006].
Note: When comparing accuracy values for different algorithms, keep in mind that an algorithm may have been trained on the test set or that the test set may have even been created using one of the tested algorithms.
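For reference, the two metrics can be sketched in a few lines of Python (the helper names are illustrative; tempo_eval's own implementation may differ in detail):

```python
def accuracy1(estimate, reference, tolerance=0.04):
    """True if the estimated BPM lies within the tolerance of the reference BPM."""
    return abs(estimate - reference) <= tolerance * reference


def accuracy2(estimate, reference, tolerance=0.04):
    """Like Accuracy1, but also accepts so-called octave errors:
    estimates off by a factor of 2, 3, 1/2 or 1/3."""
    return any(accuracy1(estimate * factor, reference, tolerance)
               for factor in (1.0, 2.0, 3.0, 0.5, 1.0 / 3.0))


# A half-tempo estimate fails Accuracy1 but passes Accuracy2:
print(accuracy1(60.0, 120.0))  # False
print(accuracy2(60.0, 120.0))  # True
```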
Accuracy Results for 1.0
Estimator | Accuracy1 | Accuracy2 |
---|---|---|
schreiber2018/ismir2018 | 0.9574 | 0.9914 |
schreiber2018/cnn | 0.9554 | 0.9900 |
schreiber2017/mirex2017 | 0.9490 | 0.9983 |
schreiber2017/ismir2017 | 0.9438 | 0.9981 |
schreiber2018/fcn | 0.9308 | 0.9889 |
boeck2015/tempodetector2016_default | 0.9241 | 0.9925 |
davies2009/mirex_qm_tempotracker | 0.8892 | 0.9693 |
percival2014/stem | 0.8887 | 0.9809 |
Table 3: Mean accuracy of estimates compared to version 1.0 with 4% tolerance ordered by Accuracy1.
Accuracy1 for 1.0
Figure 4: Mean Accuracy1 for estimates compared to version 1.0 depending on tolerance.
Accuracy2 for 1.0
Figure 5: Mean Accuracy2 for estimates compared to version 1.0 depending on tolerance.
Differing Items
For which items did a given estimator fail to estimate a correct value with respect to a given ground truth? Are there items that are very difficult, unsuitable for the task, or incorrectly annotated, and are therefore never estimated correctly, regardless of which estimator is used?
Differing Items Accuracy1
Items with different tempo annotations (Accuracy1, 4% tolerance) in different versions:
1.0 compared with boeck2015/tempodetector2016_default (274 differences): ‘TRABYWC128F9307C66’ ‘TRABZZD128F9334C0B’ ‘TRAEMIU128F14641D6’ ‘TRAFOZW128F92F1594’ ‘TRAHQST128F425CBE6’ ‘TRALWYN128F429887F’ ‘TRAQKCT128F426C28F’ ‘TRARQNM128F92E1E52’ ‘TRATFNI12903CEF67A’ ‘TRAWBPY128F425512E’ ‘TRAZQSW128F9308F88’ … CSV
1.0 compared with davies2009/mirex_qm_tempotracker (400 differences): ‘TRABYWC128F9307C66’ ‘TRADFPY128F92D2B4E’ ‘TRAEMIU128F14641D6’ ‘TRAEOOG128F148B494’ ‘TRAGHBO128F92D8C9E’ ‘TRAHOOM128F427F234’ ‘TRAHQST128F425CBE6’ ‘TRAPVQY128F9333E52’ ‘TRAQKCT128F426C28F’ ‘TRASUWA128F933E6F7’ ‘TRATMIZ128F428E152’ … CSV
1.0 compared with percival2014/stem (402 differences): ‘TRACYOR128F427FB1D’ ‘TRADFPY128F92D2B4E’ ‘TRAFOZW128F92F1594’ ‘TRAHQST128F425CBE6’ ‘TRAJPPO128F428AEB1’ ‘TRALWYN128F429887F’ ‘TRANKTK128E07921D9’ ‘TRAOMMX128F92E4E60’ ‘TRAOVNR128F4275781’ ‘TRAPPUJ128F931B3D1’ ‘TRAPVQY128F9333E52’ … CSV
1.0 compared with schreiber2017/ismir2017 (203 differences): ‘TRAEMIU128F14641D6’ ‘TRALWYN128F429887F’ ‘TRANKTK128E07921D9’ ‘TRANPKC128F93486B9’ ‘TRARQNM128F92E1E52’ ‘TRATMIZ128F428E152’ ‘TRAUYGJ128E0798505’ ‘TRAWBPY128F425512E’ ‘TRBBQPP128F92EE611’ ‘TRBKSUX128EF342BD6’ ‘TRBRSTH128F4296255’ … CSV
1.0 compared with schreiber2017/mirex2017 (184 differences): ‘TRADFPY128F92D2B4E’ ‘TRAEMIU128F14641D6’ ‘TRAHQST128F425CBE6’ ‘TRAKCER128F932BE98’ ‘TRALWYN128F429887F’ ‘TRAMJWO128EF343C19’ ‘TRANPKC128F93486B9’ ‘TRARQNM128F92E1E52’ ‘TRAWBPY128F425512E’ ‘TRBBQPP128F92EE611’ ‘TRBFNOR128F933B0F5’ … CSV
1.0 compared with schreiber2018/cnn (161 differences): ‘TRAEMIU128F14641D6’ ‘TRAJZLQ128F424E12B’ ‘TRARQNM128F92E1E52’ ‘TRATMIZ128F428E152’ ‘TRAWBPY128F425512E’ ‘TRBBQPP128F92EE611’ ‘TRBISBN128F4267AB6’ ‘TRBRSTH128F4296255’ ‘TRCFKSY128F14B0FA5’ ‘TRCJRZW128F930113B’ ‘TRCKPMP128F423984C’ … CSV
1.0 compared with schreiber2018/fcn (250 differences): ‘TRAEMIU128F14641D6’ ‘TRAFOZW128F92F1594’ ‘TRAKGGU128F428D83C’ ‘TRALWYN128F429887F’ ‘TRANKTK128E07921D9’ ‘TRAOCHX128F42A00AE’ ‘TRAPPUJ128F931B3D1’ ‘TRAQKCT128F426C28F’ ‘TRARQNM128F92E1E52’ ‘TRATFNI12903CEF67A’ ‘TRAVULK128F4232387’ … CSV
1.0 compared with schreiber2018/ismir2018 (154 differences): ‘TRABYWC128F9307C66’ ‘TRAEMIU128F14641D6’ ‘TRAHQST128F425CBE6’ ‘TRANPKC128F93486B9’ ‘TRARQNM128F92E1E52’ ‘TRASUWA128F933E6F7’ ‘TRBQWGI128E0793D38’ ‘TRCABMS128F42A0EFC’ ‘TRCJETO128F425FE38’ ‘TRCJRZW128F930113B’ ‘TRCKPMP128F423984C’ … CSV
None of the estimators estimated the following 11 items ‘correctly’ using Accuracy1: ‘TRDPTCF128F93002CE’ ‘TRDWVQX128F9335CED’ ‘TRGBTIM128F427FFF0’ ‘TRIRCFN128F4280243’ ‘TRLLLKU128F425FF8B’ ‘TRUMGSR128F428F2E2’ ‘TRVKVHJ12903CF3934’ ‘TRWDVQL128F423605D’ ‘TRWDYGV128F92EDBF6’ ‘TRYHIIS128F4234DAF’ ‘TRZJFFH128F14647FC’ … CSV
Differing Items Accuracy2
Items with different tempo annotations (Accuracy2, 4% tolerance) in different versions:
1.0 compared with boeck2015/tempodetector2016_default (27 differences): ‘TRBISBN128F4267AB6’ ‘TRCSBQX12903CFA040’ ‘TRDPAGI12903CA69AD’ ‘TRFTLJV128F92EEDBD’ ‘TRFYUKH128F42B0BDA’ ‘TRGBHFD128F422A9D3’ ‘TRGKGAA128F147706C’ ‘TRHTLCM128E0783A19’ ‘TRIRCFN128F4280243’ ‘TRKTRGF128F42B323B’ ‘TRLLLKU128F425FF8B’ … CSV
1.0 compared with davies2009/mirex_qm_tempotracker (111 differences): ‘TRAEOOG128F148B494’ ‘TRAHOOM128F427F234’ ‘TRAPVQY128F9333E52’ ‘TRAQKCT128F426C28F’ ‘TRAWASU12903D07A25’ ‘TRAZASM128F932FBEE’ ‘TRAZQSW128F9308F88’ ‘TRBDNEG128F93562C0’ ‘TRBISBN128F4267AB6’ ‘TRBKSUX128EF342BD6’ ‘TRCSBQX12903CFA040’ … CSV
1.0 compared with percival2014/stem (69 differences): ‘TRAHQST128F425CBE6’ ‘TRAPVQY128F9333E52’ ‘TRAQKCT128F426C28F’ ‘TRBISBN128F4267AB6’ ‘TRCILWH128F4288159’ ‘TRCSBQX12903CFA040’ ‘TRCYHJI128F147B1DC’ ‘TRDPAGI12903CA69AD’ ‘TRDPTCF128F93002CE’ ‘TRDWVQX128F9335CED’ ‘TRDYZYY128F4284F78’ … CSV
1.0 compared with schreiber2017/ismir2017 (7 differences): ‘TRDPTCF128F93002CE’ ‘TRDWVQX128F9335CED’ ‘TRHTLCM128E0783A19’ ‘TRIRCFN128F4280243’ ‘TRTUCHS128F1459CF2’ ‘TRVKVHJ12903CF3934’ ‘TRZJFFH128F14647FC’ CSV
1.0 compared with schreiber2017/mirex2017 (6 differences): ‘TRDPTCF128F93002CE’ ‘TRDWVQX128F9335CED’ ‘TRHTLCM128E0783A19’ ‘TRIRCFN128F4280243’ ‘TRVKVHJ12903CF3934’ ‘TRZJFFH128F14647FC’ CSV
1.0 compared with schreiber2018/cnn (36 differences): ‘TRBISBN128F4267AB6’ ‘TRBRSTH128F4296255’ ‘TRCSBQX12903CFA040’ ‘TRDPTCF128F93002CE’ ‘TRDWVQX128F9335CED’ ‘TRETAHF128F425309E’ ‘TRFPDPQ12903CA9803’ ‘TRGKGAA128F147706C’ ‘TRINNMP12903CC4C64’ ‘TRIRCFN128F4280243’ ‘TRJUJAH128F425A752’ … CSV
1.0 compared with schreiber2018/fcn (40 differences): ‘TRAQKCT128F426C28F’ ‘TRAZQSW128F9308F88’ ‘TRBISBN128F4267AB6’ ‘TRBRSTH128F4296255’ ‘TRCSBQX12903CFA040’ ‘TRDPTCF128F93002CE’ ‘TRDWVQX128F9335CED’ ‘TRFPDPQ12903CA9803’ ‘TRGASIV128F429230D’ ‘TRINNMP12903CC4C64’ ‘TRIRCFN128F4280243’ … CSV
1.0 compared with schreiber2018/ismir2018 (31 differences): ‘TRCSBQX12903CFA040’ ‘TRDPAGI12903CA69AD’ ‘TRDPTCF128F93002CE’ ‘TRDWVQX128F9335CED’ ‘TRFWOOG128F1485DA5’ ‘TRGKGAA128F147706C’ ‘TRINNMP12903CC4C64’ ‘TRIRCFN128F4280243’ ‘TRKTRGF128F42B323B’ ‘TRKUWVV128F4268367’ ‘TRLLLKU128F425FF8B’ … CSV
None of the estimators estimated the following 2 items ‘correctly’ using Accuracy2: ‘TRIRCFN128F4280243’ ‘TRZJFFH128F14647FC’ CSV
Significance of Differences
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0000 | 0.0000 | 0.1886 | 0.0000 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.9651 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
percival2014/stem | 0.0000 | 0.9651 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
schreiber2017/ismir2017 | 0.0001 | 0.0000 | 0.0000 | 1.0000 | 0.1241 | 0.0057 | 0.0054 | 0.0012 |
schreiber2017/mirex2017 | 0.0000 | 0.0000 | 0.0000 | 0.1241 | 1.0000 | 0.1299 | 0.0001 | 0.0423 |
schreiber2018/cnn | 0.0000 | 0.0000 | 0.0000 | 0.0057 | 0.1299 | 1.0000 | 0.0000 | 0.6207 |
schreiber2018/fcn | 0.1886 | 0.0000 | 0.0000 | 0.0054 | 0.0001 | 0.0000 | 1.0000 | 0.0000 |
schreiber2018/ismir2018 | 0.0000 | 0.0000 | 0.0000 | 0.0012 | 0.0423 | 0.6207 | 0.0000 | 1.0000 |
Table 4: McNemar p-values, using reference annotations 1.0 as ground truth with Accuracy1 [Gouyon2006]. H0: both estimators disagree with the ground truth to the same extent. If p ≤ α, we reject H0, i.e., there is a significant difference in how the two estimators disagree with the ground truth. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.0000 | 0.0001 | 0.0000 | 0.1628 | 0.0470 | 0.5572 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
percival2014/stem | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0002 | 0.0000 |
schreiber2017/ismir2017 | 0.0001 | 0.0000 | 0.0000 | 1.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 |
schreiber2017/mirex2017 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 |
schreiber2018/cnn | 0.1628 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.5716 | 0.4731 |
schreiber2018/fcn | 0.0470 | 0.0000 | 0.0002 | 0.0000 | 0.0000 | 0.5716 | 1.0000 | 0.1221 |
schreiber2018/ismir2018 | 0.5572 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.4731 | 0.1221 | 1.0000 |
Table 5: McNemar p-values, using reference annotations 1.0 as ground truth with Accuracy2 [Gouyon2006]. H0: both estimators disagree with the ground truth to the same extent. If p ≤ α, we reject H0, i.e., there is a significant difference in how the two estimators disagree with the ground truth. In the table, p-values < 0.05 are set in bold.
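The test behind these tables compares, for each estimator pair, how often exactly one of the two is correct. A stdlib-only sketch using the common continuity-corrected chi-squared approximation (tempo_eval may use an exact variant):

```python
import math


def mcnemar_p(b, c):
    """Two-sided McNemar p-value (chi-squared approximation).

    b: items estimator A gets right but estimator B gets wrong,
    c: items estimator B gets right but estimator A gets wrong.
    Uses Edwards' continuity correction; the survival function of a
    chi-squared variable with 1 degree of freedom is erfc(sqrt(x / 2)).
    """
    if b + c == 0:
        return 1.0
    chi2 = (abs(b - c) - 1.0) ** 2 / (b + c)
    return math.erfc(math.sqrt(chi2 / 2.0))


# Strongly asymmetric disagreement counts yield a small p-value:
print(mcnemar_p(40, 10))  # well below 0.05
```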
Accuracy1 on Tempo-Subsets
How well does an estimator perform when only a subset of the reference annotations is taken into account? The graphs show mean Accuracy1 for reference subsets with tempi in [T-10,T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
Accuracy1 on Tempo-Subsets for 1.0
Figure 6: Mean Accuracy1 for estimates compared to version 1.0 for tempo intervals around T.
Accuracy2 on Tempo-Subsets
How well does an estimator perform when only a subset of the reference annotations is taken into account? The graphs show mean Accuracy2 for reference subsets with tempi in [T-10,T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
Accuracy2 on Tempo-Subsets for 1.0
Figure 7: Mean Accuracy2 for estimates compared to version 1.0 for tempo intervals around T.
Estimated Accuracy1 for Tempo
When fitting a generalized additive model (GAM) to Accuracy1 values for a given ground truth, what Accuracy1 can we expect with confidence?
Estimated Accuracy1 for Tempo for 1.0
Predictions of GAMs trained on Accuracy1 for estimates for reference 1.0.
Figure 8: Accuracy1 predictions of a generalized additive model (GAM) fit to Accuracy1 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Estimated Accuracy2 for Tempo
When fitting a generalized additive model (GAM) to Accuracy2 values for a given ground truth, what Accuracy2 can we expect with confidence?
Estimated Accuracy2 for Tempo for 1.0
Predictions of GAMs trained on Accuracy2 for estimates for reference 1.0.
Figure 9: Accuracy2 predictions of a generalized additive model (GAM) fit to Accuracy2 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Accuracy1 for ‘tag_open’ Tags
How well does an estimator perform when only tracks carrying a particular tag are taken into account? Note that some values may be based on very few estimates.
Accuracy1 for ‘tag_open’ Tags for 1.0
Figure 10: Mean Accuracy1 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
Accuracy2 for ‘tag_open’ Tags
How well does an estimator perform when only tracks carrying a particular tag are taken into account? Note that some values may be based on very few estimates.
Accuracy2 for ‘tag_open’ Tags for 1.0
Figure 11: Mean Accuracy2 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
OE1 and OE2
OE1 is defined as the octave error between an estimate E and a reference value R: OE1(E) = log2(E/R). This means that the most common errors, those by a factor of 2 or ½, have the same magnitude, namely 1.
OE2 is the signed OE1 corresponding to the minimum absolute OE1, allowing the octave errors 2, 3, 1/2, and 1/3: OE2(E) = arg min_x |x| with x ∈ {OE1(E), OE1(2E), OE1(3E), OE1(½E), OE1(⅓E)}.
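A minimal sketch of both definitions in Python (function names are illustrative):

```python
import math


def oe1(estimate, reference):
    """Signed octave error: log2 of the estimate/reference ratio."""
    return math.log2(estimate / reference)


def oe2(estimate, reference):
    """Signed OE1 with the smallest magnitude after allowing the
    octave-error factors 2, 3, 1/2 and 1/3."""
    return min((oe1(estimate * factor, reference)
                for factor in (1.0, 2.0, 3.0, 0.5, 1.0 / 3.0)),
               key=abs)


# A half-tempo estimate has OE1 = -1 but OE2 = 0:
print(oe1(60.0, 120.0), oe2(60.0, 120.0))  # -1.0 0.0
```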
Mean OE1/OE2 Results for 1.0
Estimator | OE1_MEAN | OE1_STDEV | OE2_MEAN | OE2_STDEV |
---|---|---|---|---|
schreiber2018/ismir2018 | 0.0169 | 0.1918 | -0.0000 | 0.0353 |
schreiber2018/cnn | 0.0079 | 0.1995 | 0.0011 | 0.0359 |
schreiber2017/mirex2017 | 0.0066 | 0.2228 | -0.0003 | 0.0145 |
schreiber2017/ismir2017 | -0.0043 | 0.2360 | -0.0002 | 0.0161 |
schreiber2018/fcn | -0.0029 | 0.2484 | 0.0009 | 0.0361 |
boeck2015/tempodetector2016_default | -0.0159 | 0.2717 | 0.0001 | 0.0338 |
davies2009/mirex_qm_tempotracker | 0.1048 | 0.2972 | 0.0246 | 0.0611 |
percival2014/stem | -0.0488 | 0.3323 | 0.0042 | 0.1044 |
Table 6: Mean OE1/OE2 for estimates compared to version 1.0 ordered by standard deviation.
OE1 distribution for 1.0
Figure 12: OE1 for estimates compared to version 1.0. Shown are the mean OE1 and an empirical distribution of the sample, using kernel density estimation (KDE).
OE2 distribution for 1.0
Figure 13: OE2 for estimates compared to version 1.0. Shown are the mean OE2 and an empirical distribution of the sample, using kernel density estimation (KDE).
Significance of Differences
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.0000 | 0.0191 | 0.0000 | 0.0000 | 0.0084 | 0.0000 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
percival2014/stem | 0.0000 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
schreiber2017/ismir2017 | 0.0191 | 0.0000 | 0.0000 | 1.0000 | 0.0009 | 0.0025 | 0.7552 | 0.0000 |
schreiber2017/mirex2017 | 0.0000 | 0.0000 | 0.0000 | 0.0009 | 1.0000 | 0.7332 | 0.0360 | 0.0077 |
schreiber2018/cnn | 0.0000 | 0.0000 | 0.0000 | 0.0025 | 0.7332 | 1.0000 | 0.0047 | 0.0058 |
schreiber2018/fcn | 0.0084 | 0.0000 | 0.0000 | 0.7552 | 0.0360 | 0.0047 | 1.0000 | 0.0000 |
schreiber2018/ismir2018 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0077 | 0.0058 | 0.0000 | 1.0000 |
Table 7: Paired t-test p-values, using reference annotations 1.0 as ground truth with OE1. H0: the true mean difference between paired samples is zero. If p ≤ α, we reject H0, i.e., there is a significant difference between the estimates from the two algorithms. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.0190 | 0.5729 | 0.4297 | 0.0814 | 0.1946 | 0.8685 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
percival2014/stem | 0.0190 | 0.0000 | 1.0000 | 0.0119 | 0.0097 | 0.0865 | 0.0670 | 0.0185 |
schreiber2017/ismir2017 | 0.5729 | 0.0000 | 0.0119 | 1.0000 | 0.3174 | 0.0401 | 0.0552 | 0.7329 |
schreiber2017/mirex2017 | 0.4297 | 0.0000 | 0.0097 | 0.3174 | 1.0000 | 0.0238 | 0.0312 | 0.5901 |
schreiber2018/cnn | 0.0814 | 0.0000 | 0.0865 | 0.0401 | 0.0238 | 1.0000 | 0.6973 | 0.0639 |
schreiber2018/fcn | 0.1946 | 0.0000 | 0.0670 | 0.0552 | 0.0312 | 0.6973 | 1.0000 | 0.1804 |
schreiber2018/ismir2018 | 0.8685 | 0.0000 | 0.0185 | 0.7329 | 0.5901 | 0.0639 | 0.1804 | 1.0000 |
Table 8: Paired t-test p-values, using reference annotations 1.0 as ground truth with OE2. H0: the true mean difference between paired samples is zero. If p ≤ α, we reject H0, i.e., there is a significant difference between the estimates from the two algorithms. In the table, p-values < 0.05 are set in bold.
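The paired test can be sketched on the per-item error differences. For samples as large as the 3611 items here, the t distribution is practically normal, which is what this stdlib-only sketch assumes (tempo_eval presumably uses the exact t distribution):

```python
import math


def paired_t_p(diffs):
    """Approximate two-sided p-value of a paired t-test.

    diffs: per-item differences, e.g. OE1 of estimator A minus OE1 of
    estimator B for each track. With thousands of items the t
    distribution is close to normal, so the p-value is computed from
    the normal survival function via erfc.
    """
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    t = mean / math.sqrt(var / n)
    return math.erfc(abs(t) / math.sqrt(2.0))


# Differences centred on zero give a large p-value:
print(paired_t_p([0.01, -0.02, 0.015, -0.005] * 100))
```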
OE1 on Tempo-Subsets
How well does an estimator perform when only a subset of the reference annotations is taken into account? The graphs show mean OE1 for reference subsets with tempi in [T-10,T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
OE1 on Tempo-Subsets for 1.0
Figure 14: Mean OE1 for estimates compared to version 1.0 for tempo intervals around T.
OE2 on Tempo-Subsets
How well does an estimator perform when only a subset of the reference annotations is taken into account? The graphs show mean OE2 for reference subsets with tempi in [T-10,T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
OE2 on Tempo-Subsets for 1.0
Figure 15: Mean OE2 for estimates compared to version 1.0 for tempo intervals around T.
Estimated OE1 for Tempo
When fitting a generalized additive model (GAM) to OE1 values for a given ground truth, what OE1 can we expect with confidence?
Estimated OE1 for Tempo for 1.0
Predictions of GAMs trained on OE1 for estimates for reference 1.0.
Figure 16: OE1 predictions of a generalized additive model (GAM) fit to OE1 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Estimated OE2 for Tempo
When fitting a generalized additive model (GAM) to OE2 values for a given ground truth, what OE2 can we expect with confidence?
Estimated OE2 for Tempo for 1.0
Predictions of GAMs trained on OE2 for estimates for reference 1.0.
Figure 17: OE2 predictions of a generalized additive model (GAM) fit to OE2 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
OE1 for ‘tag_open’ Tags
How well does an estimator perform when only tracks carrying a particular tag are taken into account? Note that some values may be based on very few estimates.
OE1 for ‘tag_open’ Tags for 1.0
Figure 18: OE1 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
OE2 for ‘tag_open’ Tags
How well does an estimator perform when only tracks carrying a particular tag are taken into account? Note that some values may be based on very few estimates.
OE2 for ‘tag_open’ Tags for 1.0
Figure 19: OE2 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
AOE1 and AOE2
AOE1 is defined as the absolute octave error between an estimate E and a reference value R: AOE1(E) = |log2(E/R)|.
AOE2 is the minimum AOE1 allowing the octave errors 2, 3, 1/2, and 1/3: AOE2(E) = min(AOE1(E), AOE1(2E), AOE1(3E), AOE1(½E), AOE1(⅓E)).
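The same definitions as a short Python sketch (illustrative names):

```python
import math


def aoe1(estimate, reference):
    """Absolute octave error |log2(estimate/reference)|."""
    return abs(math.log2(estimate / reference))


def aoe2(estimate, reference):
    """Minimum AOE1 over the octave-error factors 2, 3, 1/2 and 1/3."""
    return min(aoe1(estimate * factor, reference)
               for factor in (1.0, 2.0, 3.0, 0.5, 1.0 / 3.0))


print(aoe1(60.0, 120.0), aoe2(60.0, 120.0))  # 1.0 0.0
```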
Mean AOE1/AOE2 Results for 1.0
Estimator | AOE1_MEAN | AOE1_STDEV | AOE2_MEAN | AOE2_STDEV |
---|---|---|---|---|
schreiber2018/ismir2018 | 0.0414 | 0.1880 | 0.0059 | 0.0348 |
schreiber2018/cnn | 0.0429 | 0.1950 | 0.0059 | 0.0354 |
schreiber2017/mirex2017 | 0.0503 | 0.2171 | 0.0009 | 0.0145 |
schreiber2017/ismir2017 | 0.0560 | 0.2293 | 0.0010 | 0.0161 |
schreiber2018/fcn | 0.0663 | 0.2394 | 0.0066 | 0.0355 |
boeck2015/tempodetector2016_default | 0.0795 | 0.2603 | 0.0091 | 0.0326 |
percival2014/stem | 0.1062 | 0.3186 | 0.0099 | 0.1040 |
davies2009/mirex_qm_tempotracker | 0.1178 | 0.2923 | 0.0295 | 0.0589 |
Table 9: Mean AOE1/AOE2 for estimates compared to version 1.0 ordered by mean.
AOE1 distribution for 1.0
Figure 20: AOE1 for estimates compared to version 1.0. Shown are the mean AOE1 and an empirical distribution of the sample, using kernel density estimation (KDE).
AOE2 distribution for 1.0
Figure 21: AOE2 for estimates compared to version 1.0. Shown are the mean AOE2 and an empirical distribution of the sample, using kernel density estimation (KDE).
Significance of Differences
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0060 | 0.0000 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0703 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
percival2014/stem | 0.0000 | 0.0703 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
schreiber2017/ismir2017 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.0787 | 0.0009 | 0.0202 | 0.0002 |
schreiber2017/mirex2017 | 0.0000 | 0.0000 | 0.0000 | 0.0787 | 1.0000 | 0.0565 | 0.0003 | 0.0202 |
schreiber2018/cnn | 0.0000 | 0.0000 | 0.0000 | 0.0009 | 0.0565 | 1.0000 | 0.0000 | 0.6290 |
schreiber2018/fcn | 0.0060 | 0.0000 | 0.0000 | 0.0202 | 0.0003 | 0.0000 | 1.0000 | 0.0000 |
schreiber2018/ismir2018 | 0.0000 | 0.0000 | 0.0000 | 0.0002 | 0.0202 | 0.6290 | 0.0000 | 1.0000 |
Table 10: Paired t-test p-values, using reference annotations 1.0 as ground truth with AOE1. H0: the true mean difference between paired samples is zero. If p ≤ α, we reject H0, i.e., there is a significant difference between the estimates from the two algorithms. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.6354 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
percival2014/stem | 0.6354 | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0212 | 0.0528 | 0.0186 |
schreiber2017/ismir2017 | 0.0000 | 0.0000 | 0.0000 | 1.0000 | 0.3174 | 0.0000 | 0.0000 | 0.0000 |
schreiber2017/mirex2017 | 0.0000 | 0.0000 | 0.0000 | 0.3174 | 1.0000 | 0.0000 | 0.0000 | 0.0000 |
schreiber2018/cnn | 0.0000 | 0.0000 | 0.0212 | 0.0000 | 0.0000 | 1.0000 | 0.1893 | 0.9332 |
schreiber2018/fcn | 0.0000 | 0.0000 | 0.0528 | 0.0000 | 0.0000 | 0.1893 | 1.0000 | 0.2001 |
schreiber2018/ismir2018 | 0.0000 | 0.0000 | 0.0186 | 0.0000 | 0.0000 | 0.9332 | 0.2001 | 1.0000 |
Table 11: Paired t-test p-values, using reference annotations 1.0 as ground truth with AOE2. H0: the true mean difference between paired samples is zero. If p ≤ α, we reject H0, i.e., there is a significant difference between the estimates from the two algorithms. In the table, p-values < 0.05 are set in bold.
AOE1 on Tempo-Subsets
How well does an estimator perform when only a subset of the reference annotations is taken into account? The graphs show mean AOE1 for reference subsets with tempi in [T-10,T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
AOE1 on Tempo-Subsets for 1.0
Figure 22: Mean AOE1 for estimates compared to version 1.0 for tempo intervals around T.
AOE2 on Tempo-Subsets
How well does an estimator perform when only a subset of the reference annotations is taken into account? The graphs show mean AOE2 for reference subsets with tempi in [T-10,T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
AOE2 on Tempo-Subsets for 1.0
Figure 23: Mean AOE2 for estimates compared to version 1.0 for tempo intervals around T.
Estimated AOE1 for Tempo
When fitting a generalized additive model (GAM) to AOE1 values for a given ground truth, what AOE1 can we expect with confidence?
Estimated AOE1 for Tempo for 1.0
Predictions of GAMs trained on AOE1 for estimates for reference 1.0.
Figure 24: AOE1 predictions of a generalized additive model (GAM) fit to AOE1 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Estimated AOE2 for Tempo
When fitting a generalized additive model (GAM) to AOE2 values for a given ground truth, what AOE2 can we expect with confidence?
Estimated AOE2 for Tempo for 1.0
Predictions of GAMs trained on AOE2 for estimates for reference 1.0.
Figure 25: AOE2 predictions of a generalized additive model (GAM) fit to AOE2 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
AOE1 for ‘tag_open’ Tags
How well does an estimator perform when only tracks carrying a particular tag are taken into account? Note that some values may be based on very few estimates.
AOE1 for ‘tag_open’ Tags for 1.0
Figure 26: AOE1 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
AOE2 for ‘tag_open’ Tags
How well does an estimator perform when only tracks carrying a particular tag are taken into account? Note that some values may be based on very few estimates.
AOE2 for ‘tag_open’ Tags for 1.0
Figure 27: AOE2 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
Generated by tempo_eval 0.1.1 on 2022-06-29 18:49.