rwc_mdb_r
This is the tempo_eval report for the ‘rwc_mdb_r’ corpus.
Reports for other corpora may be found here.
Table of Contents
- References for ‘rwc_mdb_r’
- Estimates for ‘rwc_mdb_r’
- Estimators
- Basic Statistics
- Smoothed Tempo Distribution
- Accuracy
- Accuracy Results for 0.1
- Accuracy1 for 0.1
- Accuracy2 for 0.1
- Accuracy Results for 1.0
- Accuracy1 for 1.0
- Accuracy2 for 1.0
- Differing Items
- Significance of Differences
- Accuracy1 on cvar-Subsets
- Accuracy2 on cvar-Subsets
- Accuracy1 on Tempo-Subsets
- Accuracy2 on Tempo-Subsets
- Estimated Accuracy1 for Tempo
- Estimated Accuracy2 for Tempo
- Accuracy1 for ‘tag_open’ Tags
- Accuracy2 for ‘tag_open’ Tags
- OE1 and OE2
- Mean OE1/OE2 Results for 0.1
- OE1 distribution for 0.1
- OE2 distribution for 0.1
- Mean OE1/OE2 Results for 1.0
- OE1 distribution for 1.0
- OE2 distribution for 1.0
- Significance of Differences
- OE1 on cvar-Subsets
- OE2 on cvar-Subsets
- OE1 on Tempo-Subsets
- OE2 on Tempo-Subsets
- Estimated OE1 for Tempo
- Estimated OE2 for Tempo
- OE1 for ‘tag_open’ Tags
- OE2 for ‘tag_open’ Tags
- AOE1 and AOE2
- Mean AOE1/AOE2 Results for 0.1
- AOE1 distribution for 0.1
- AOE2 distribution for 0.1
- Mean AOE1/AOE2 Results for 1.0
- AOE1 distribution for 1.0
- AOE2 distribution for 1.0
- Significance of Differences
- AOE1 on cvar-Subsets
- AOE2 on cvar-Subsets
- AOE1 on Tempo-Subsets
- AOE2 on Tempo-Subsets
- Estimated AOE1 for Tempo
- Estimated AOE2 for Tempo
- AOE1 for ‘tag_open’ Tags
- AOE2 for ‘tag_open’ Tags
References for ‘rwc_mdb_r’
References
0.1
Attribute | Value |
---|---|
Corpus | rwc_mdb_r |
Version | 0.1 |
Curator | Masataka Goto |
Data Source | AIST website. Tempo values are rough estimates and should not be used as data for research purposes. |
Annotation Tools | unknown |
Annotation Rules | unknown |
Annotator, name | Masataka Goto |
Annotator, bibtex | Goto2006 |
Annotator, ref_url | https://staff.aist.go.jp/m.goto/RWC-MDB/AIST-Annotation/ |
1.0
Attribute | Value |
---|---|
Corpus | rwc_mdb_r |
Version | 1.0 |
Curator | Masataka Goto |
Data Source | manual annotation |
Annotation Tools | derived from beat annotations |
Annotation Rules | median of corresponding inter beat intervals |
Annotator, name | Masataka Goto |
Annotator, bibtex | Goto2006 |
Annotator, ref_url | https://staff.aist.go.jp/m.goto/RWC-MDB/AIST-Annotation/ |
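According to the attributes above, version 1.0's tempo values are derived from beat annotations as the median of the corresponding inter-beat intervals. That rule can be sketched as follows (a minimal illustration; the function name is mine, not part of tempo_eval):

```python
import numpy as np

def tempo_from_beats(beat_times):
    """Global tempo (BPM) as the median of the inter-beat intervals."""
    ibis = np.diff(beat_times)            # inter-beat intervals in seconds
    return 60.0 / float(np.median(ibis))  # convert the median interval to BPM
```

For beats spaced 0.5 s apart this yields 120 BPM; the median makes the estimate robust against a few irregular beats.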
Basic Statistics
Reference | Size | Min | Max | Avg | Stdev | Sweet Oct. Start | Sweet Oct. Coverage |
---|---|---|---|---|---|---|---|
0.1 | 15 | 70.00 | 200.00 | 114.00 | 37.21 | 66.00 | 0.87 |
1.0 | 15 | 69.97 | 200.00 | 113.97 | 37.25 | 65.00 | 0.87 |
Smoothed Tempo Distribution
Figure 1: Percentage of values in tempo interval.
Tag Distribution for ‘tag_open’
Figure 2: Percentage of tracks tagged with tags from namespace ‘tag_open’. Annotations are from reference 1.0.
Beat-Based Tempo Variation
Figure 3: Fraction of the dataset's beat-annotated tracks with cvar < τ.
Estimates for ‘rwc_mdb_r’
Estimators
boeck2015/tempodetector2016_default
Attribute | Value |
---|---|
Corpus | rwc_mdb_r |
Version | 0.17.dev0 |
Annotation Tools | TempoDetector.2016, madmom, https://github.com/CPJKU/madmom |
Annotator, bibtex | Boeck2015 |
davies2009/mirex_qm_tempotracker
Attribute | Value |
---|---|
Corpus | rwc_mdb_r |
Version | 1.0 |
Annotation Tools | QM Tempotracker, Sonic Annotator plugin. https://code.soundsoftware.ac.uk/projects/mirex2013/repository/show/audio_tempo_estimation/qm-tempotracker Note that the current macOS build of ‘qm-vamp-plugins’ was used. |
Annotator, bibtex | Davies2009, Davies2007 |
percival2014/stem
Attribute | Value |
---|---|
Corpus | rwc_mdb_r |
Version | 1.0 |
Annotation Tools | percival 2014, ‘tempo’ implementation from Marsyas, http://marsyas.info, git checkout tempo-stem |
Annotator, bibtex | Percival2014 |
schreiber2014/default
Attribute | Value |
---|---|
Corpus | rwc_mdb_r |
Version | 0.0.1 |
Annotation Tools | schreiber 2014, http://www.tagtraum.com/tempo_estimation.html |
Annotator, bibtex | Schreiber2014 |
schreiber2017/ismir2017
Attribute | Value |
---|---|
Corpus | rwc_mdb_r |
Version | 0.0.4 |
Annotation Tools | schreiber 2017, model=ismir2017, http://www.tagtraum.com/tempo_estimation.html |
Annotator, bibtex | Schreiber2017 |
schreiber2017/mirex2017
Attribute | Value |
---|---|
Corpus | rwc_mdb_r |
Version | 0.0.4 |
Annotation Tools | schreiber 2017, model=mirex2017, http://www.tagtraum.com/tempo_estimation.html |
Annotator, bibtex | Schreiber2017 |
schreiber2018/cnn
Attribute | Value |
---|---|
Corpus | |
Version | 0.0.3 |
Data Source | Hendrik Schreiber, Meinard Müller. A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018. |
Annotation Tools | schreiber tempo-cnn (model=cnn), https://github.com/hendriks73/tempo-cnn |
schreiber2018/fcn
Attribute | Value |
---|---|
Corpus | |
Version | 0.0.3 |
Data Source | Hendrik Schreiber, Meinard Müller. A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018. |
Annotation Tools | schreiber tempo-cnn (model=fcn), https://github.com/hendriks73/tempo-cnn |
schreiber2018/ismir2018
Attribute | Value |
---|---|
Corpus | |
Version | 0.0.3 |
Data Source | Hendrik Schreiber, Meinard Müller. A Single-Step Approach to Musical Tempo Estimation Using a Convolutional Neural Network. In Proceedings of the 19th International Society for Music Information Retrieval Conference (ISMIR), Paris, France, Sept. 2018. |
Annotation Tools | schreiber tempo-cnn (model=ismir2018), https://github.com/hendriks73/tempo-cnn |
Basic Statistics
Estimator | Size | Min | Max | Avg | Stdev | Sweet Oct. Start | Sweet Oct. Coverage |
---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 15 | 69.77 | 130.43 | 101.28 | 20.83 | 66.00 | 1.00 |
davies2009/mirex_qm_tempotracker | 15 | 70.79 | 152.00 | 112.50 | 22.58 | 76.00 | 0.93 |
percival2014/stem | 15 | 70.07 | 130.01 | 101.28 | 20.66 | 66.00 | 1.00 |
schreiber2014/default | 15 | 74.84 | 139.97 | 107.55 | 18.93 | 70.00 | 1.00 |
schreiber2017/ismir2017 | 15 | 70.02 | 199.98 | 122.22 | 41.44 | 65.00 | 0.80 |
schreiber2017/mirex2017 | 15 | 46.68 | 199.98 | 112.44 | 39.58 | 65.00 | 0.80 |
schreiber2018/cnn | 15 | 70.00 | 200.00 | 108.07 | 32.72 | 66.00 | 0.93 |
schreiber2018/fcn | 15 | 70.00 | 200.00 | 108.13 | 32.72 | 66.00 | 0.93 |
schreiber2018/ismir2018 | 15 | 70.00 | 200.00 | 108.00 | 32.80 | 66.00 | 0.93 |
Smoothed Tempo Distribution
Figure 4: Percentage of values in tempo interval.
Accuracy
Accuracy1 is defined as the percentage of correct estimates, allowing a 4% tolerance for individual BPM values.
Accuracy2 additionally permits estimates to be wrong by a factor of 2, 3, 1/2 or 1/3 (so-called octave errors).
See [Gouyon2006].
Note: When comparing accuracy values for different algorithms, keep in mind that an algorithm may have been trained on the test set or that the test set may have even been created using one of the tested algorithms.
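The two metrics defined above can be sketched in a few lines (hypothetical helper names; tempo_eval's actual implementation may differ in detail):

```python
def accuracy1(est_bpm, ref_bpm, tol=0.04):
    """True if the estimate lies within the relative tolerance of the reference."""
    return abs(est_bpm - ref_bpm) <= tol * ref_bpm

def accuracy2(est_bpm, ref_bpm, tol=0.04):
    """Like Accuracy1, but octave errors (factors 2, 3, 1/2, 1/3) also count."""
    return any(accuracy1(est_bpm * f, ref_bpm, tol)
               for f in (1.0, 2.0, 3.0, 0.5, 1.0 / 3.0))
```

For example, an estimate of 50 BPM against a 100 BPM reference fails Accuracy1 but passes Accuracy2.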
Accuracy Results for 0.1
Estimator | Accuracy1 | Accuracy2 |
---|---|---|
schreiber2018/fcn | 0.9333 | 1.0000 |
schreiber2018/ismir2018 | 0.9333 | 1.0000 |
schreiber2018/cnn | 0.9333 | 1.0000 |
schreiber2017/mirex2017 | 0.9333 | 0.9333 |
percival2014/stem | 0.8667 | 1.0000 |
schreiber2017/ismir2017 | 0.8667 | 0.9333 |
boeck2015/tempodetector2016_default | 0.8667 | 1.0000 |
davies2009/mirex_qm_tempotracker | 0.7333 | 1.0000 |
schreiber2014/default | 0.7333 | 0.9333 |
Table 3: Mean accuracy of estimates compared to version 0.1, with 4% tolerance, ordered by Accuracy1.
Accuracy1 for 0.1
Figure 5: Mean Accuracy1 for estimates compared to version 0.1 depending on tolerance.
Accuracy2 for 0.1
Figure 6: Mean Accuracy2 for estimates compared to version 0.1 depending on tolerance.
Accuracy Results for 1.0
Estimator | Accuracy1 | Accuracy2 |
---|---|---|
schreiber2018/fcn | 0.9333 | 1.0000 |
schreiber2018/ismir2018 | 0.9333 | 1.0000 |
schreiber2018/cnn | 0.9333 | 1.0000 |
schreiber2017/mirex2017 | 0.9333 | 0.9333 |
percival2014/stem | 0.8667 | 1.0000 |
schreiber2017/ismir2017 | 0.8667 | 0.9333 |
boeck2015/tempodetector2016_default | 0.8667 | 1.0000 |
davies2009/mirex_qm_tempotracker | 0.7333 | 1.0000 |
schreiber2014/default | 0.7333 | 0.9333 |
Table 4: Mean accuracy of estimates compared to version 1.0, with 4% tolerance, ordered by Accuracy1.
Accuracy1 for 1.0
Figure 7: Mean Accuracy1 for estimates compared to version 1.0 depending on tolerance.
Accuracy2 for 1.0
Figure 8: Mean Accuracy2 for estimates compared to version 1.0 depending on tolerance.
Differing Items
For which items did a given estimator not estimate a correct value with respect to a given ground truth? Are there items that are very difficult, unsuitable for the task, or incorrectly annotated, and are therefore never estimated correctly, regardless of which estimator is used?
Differing Items Accuracy1
Items with different tempo annotations (Accuracy1, 4% tolerance) in different versions:
0.1 compared with boeck2015/tempodetector2016_default (2 differences): ‘RM-R012’ ‘RM-R014’
0.1 compared with davies2009/mirex_qm_tempotracker (4 differences): ‘RM-R003’ ‘RM-R004’ ‘RM-R012’ ‘RM-R014’
0.1 compared with percival2014/stem (2 differences): ‘RM-R012’ ‘RM-R014’
0.1 compared with schreiber2014/default (4 differences): ‘RM-R004’ ‘RM-R012’ ‘RM-R013’ ‘RM-R014’
0.1 compared with schreiber2017/ismir2017 (2 differences): ‘RM-R007’ ‘RM-R013’
0.1 compared with schreiber2017/mirex2017 (1 difference): ‘RM-R013’
0.1 compared with schreiber2018/cnn (1 difference): ‘RM-R014’
0.1 compared with schreiber2018/fcn (1 difference): ‘RM-R014’
0.1 compared with schreiber2018/ismir2018 (1 difference): ‘RM-R014’
1.0 compared with boeck2015/tempodetector2016_default (2 differences): ‘RM-R012’ ‘RM-R014’
1.0 compared with davies2009/mirex_qm_tempotracker (4 differences): ‘RM-R003’ ‘RM-R004’ ‘RM-R012’ ‘RM-R014’
1.0 compared with percival2014/stem (2 differences): ‘RM-R012’ ‘RM-R014’
1.0 compared with schreiber2014/default (4 differences): ‘RM-R004’ ‘RM-R012’ ‘RM-R013’ ‘RM-R014’
1.0 compared with schreiber2017/ismir2017 (2 differences): ‘RM-R007’ ‘RM-R013’
1.0 compared with schreiber2017/mirex2017 (1 difference): ‘RM-R013’
1.0 compared with schreiber2018/cnn (1 difference): ‘RM-R014’
1.0 compared with schreiber2018/fcn (1 difference): ‘RM-R014’
1.0 compared with schreiber2018/ismir2018 (1 difference): ‘RM-R014’
All tracks were estimated ‘correctly’ by at least one system.
Differing Items Accuracy2
Items with different tempo annotations (Accuracy2, 4% tolerance) in different versions:
0.1 compared with boeck2015/tempodetector2016_default: No differences.
0.1 compared with davies2009/mirex_qm_tempotracker: No differences.
0.1 compared with percival2014/stem: No differences.
0.1 compared with schreiber2014/default (1 difference): ‘RM-R013’
0.1 compared with schreiber2017/ismir2017 (1 difference): ‘RM-R013’
0.1 compared with schreiber2017/mirex2017 (1 difference): ‘RM-R013’
0.1 compared with schreiber2018/cnn: No differences.
0.1 compared with schreiber2018/fcn: No differences.
0.1 compared with schreiber2018/ismir2018: No differences.
1.0 compared with boeck2015/tempodetector2016_default: No differences.
1.0 compared with davies2009/mirex_qm_tempotracker: No differences.
1.0 compared with percival2014/stem: No differences.
1.0 compared with schreiber2014/default (1 difference): ‘RM-R013’
1.0 compared with schreiber2017/ismir2017 (1 difference): ‘RM-R013’
1.0 compared with schreiber2017/mirex2017 (1 difference): ‘RM-R013’
1.0 compared with schreiber2018/cnn: No differences.
1.0 compared with schreiber2018/fcn: No differences.
1.0 compared with schreiber2018/ismir2018: No differences.
All tracks were estimated ‘correctly’ by at least one system.
Significance of Differences
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.5000 | 1.0000 | 0.5000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
davies2009/mirex_qm_tempotracker | 0.5000 | 1.0000 | 0.5000 | 1.0000 | 0.6875 | 0.3750 | 0.2500 | 0.2500 | 0.2500 |
percival2014/stem | 1.0000 | 0.5000 | 1.0000 | 0.5000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2014/default | 0.5000 | 1.0000 | 0.5000 | 1.0000 | 0.6250 | 0.2500 | 0.2500 | 0.2500 | 0.2500 |
schreiber2017/ismir2017 | 1.0000 | 0.6875 | 1.0000 | 0.6250 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2017/mirex2017 | 1.0000 | 0.3750 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/cnn | 1.0000 | 0.2500 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/fcn | 1.0000 | 0.2500 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/ismir2018 | 1.0000 | 0.2500 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
Table 5: McNemar p-values, using reference annotations 1.0 as ground truth with Accuracy1 [Gouyon2006]. H0: both estimators disagree with the ground truth to the same extent. If p ≤ α, we reject H0, i.e., there is a significant difference in the estimators' disagreement with the ground truth. In the table, p-values < 0.05 are set in bold.
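The p-values above are consistent with an exact two-sided McNemar test on the discordant pairs, i.e., tracks that one estimator gets right and the other gets wrong. A stdlib-only sketch (the function name is mine):

```python
from math import comb

def mcnemar_exact(correct_a, correct_b):
    """Exact two-sided McNemar test on two per-track correctness vectors.
    Only discordant pairs (one estimator right, the other wrong) count."""
    b = sum(1 for ca, cb in zip(correct_a, correct_b) if ca and not cb)
    c = sum(1 for ca, cb in zip(correct_a, correct_b) if cb and not ca)
    n = b + c
    if n == 0:
        return 1.0  # identical error patterns: nothing to distinguish
    # two-sided exact binomial tail probability with success probability 0.5
    tail = sum(comb(n, i) for i in range(min(b, c) + 1))
    return min(2.0 * tail / 2 ** n, 1.0)
```

For instance, two discordant tracks (all on one side) give p = 0.5 and three give p = 0.25, matching entries in the table above. With only 15 tracks, very small p-values are essentially unreachable.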
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.5000 | 1.0000 | 0.5000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
davies2009/mirex_qm_tempotracker | 0.5000 | 1.0000 | 0.5000 | 1.0000 | 0.6875 | 0.3750 | 0.2500 | 0.2500 | 0.2500 |
percival2014/stem | 1.0000 | 0.5000 | 1.0000 | 0.5000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2014/default | 0.5000 | 1.0000 | 0.5000 | 1.0000 | 0.6250 | 0.2500 | 0.2500 | 0.2500 | 0.2500 |
schreiber2017/ismir2017 | 1.0000 | 0.6875 | 1.0000 | 0.6250 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2017/mirex2017 | 1.0000 | 0.3750 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/cnn | 1.0000 | 0.2500 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/fcn | 1.0000 | 0.2500 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/ismir2018 | 1.0000 | 0.2500 | 1.0000 | 0.2500 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
Table 6: McNemar p-values, using reference annotations 0.1 as ground truth with Accuracy1 [Gouyon2006]. H0: both estimators disagree with the ground truth to the same extent. If p ≤ α, we reject H0, i.e., there is a significant difference in the estimators' disagreement with the ground truth. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
davies2009/mirex_qm_tempotracker | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
percival2014/stem | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2014/default | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2017/ismir2017 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2017/mirex2017 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/cnn | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/fcn | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/ismir2018 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
Table 7: McNemar p-values, using reference annotations 1.0 as ground truth with Accuracy2 [Gouyon2006]. H0: both estimators disagree with the ground truth to the same extent. If p ≤ α, we reject H0, i.e., there is a significant difference in the estimators' disagreement with the ground truth. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
davies2009/mirex_qm_tempotracker | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
percival2014/stem | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2014/default | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2017/ismir2017 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2017/mirex2017 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/cnn | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/fcn | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
schreiber2018/ismir2018 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
Table 8: McNemar p-values, using reference annotations 0.1 as ground truth with Accuracy2 [Gouyon2006]. H0: both estimators disagree with the ground truth to the same extent. If p ≤ α, we reject H0, i.e., there is a significant difference in the estimators' disagreement with the ground truth. In the table, p-values < 0.05 are set in bold.
Accuracy1 on cvar-Subsets
How well does an estimator perform when taking into account only tracks with a cvar value below τ, i.e., tracks with a more or less stable beat?
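Here cvar is the coefficient of variation of a track's inter-beat intervals (standard deviation divided by mean), so a perfectly steady beat has cvar = 0. A minimal sketch under that assumption (the function name is mine):

```python
import numpy as np

def cvar(beat_times):
    """Coefficient of variation of the inter-beat intervals:
    std(IBI) / mean(IBI). 0 indicates a perfectly stable beat."""
    ibis = np.diff(beat_times)                # inter-beat intervals
    return float(np.std(ibis) / np.mean(ibis))
```

Thresholding on cvar < τ then selects the subset of rhythmically stable tracks used in the figures below.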
Accuracy1 on cvar-Subsets for 0.1 based on cvar-Values from 1.0
Figure 9: Mean Accuracy1 compared to version 0.1 for tracks with cvar < τ based on beat annotations from 1.0.
Accuracy1 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 10: Mean Accuracy1 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
Accuracy2 on cvar-Subsets
How well does an estimator perform when taking into account only tracks with a cvar value below τ, i.e., tracks with a more or less stable beat?
Accuracy2 on cvar-Subsets for 0.1 based on cvar-Values from 1.0
Figure 11: Mean Accuracy2 compared to version 0.1 for tracks with cvar < τ based on beat annotations from 1.0.
Accuracy2 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 12: Mean Accuracy2 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
Accuracy1 on Tempo-Subsets
How well does an estimator perform when taking into account only a subset of the reference annotations? The graphs show mean Accuracy1 for reference subsets with tempi in [T-10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
Accuracy1 on Tempo-Subsets for 0.1
Figure 13: Mean Accuracy1 for estimates compared to version 0.1 for tempo intervals around T.
Accuracy1 on Tempo-Subsets for 1.0
Figure 14: Mean Accuracy1 for estimates compared to version 1.0 for tempo intervals around T.
Accuracy2 on Tempo-Subsets
How well does an estimator perform when taking into account only a subset of the reference annotations? The graphs show mean Accuracy2 for reference subsets with tempi in [T-10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
Accuracy2 on Tempo-Subsets for 0.1
Figure 15: Mean Accuracy2 for estimates compared to version 0.1 for tempo intervals around T.
Accuracy2 on Tempo-Subsets for 1.0
Figure 16: Mean Accuracy2 for estimates compared to version 1.0 for tempo intervals around T.
Estimated Accuracy1 for Tempo
When fitting a generalized additive model (GAM) to Accuracy1-values and a ground truth, what Accuracy1 can we expect with confidence?
Estimated Accuracy1 for Tempo for 0.1
Predictions of GAMs trained on Accuracy1 for estimates for reference 0.1.
Figure 17: Accuracy1 predictions of a generalized additive model (GAM) fit to Accuracy1 results for 0.1. The 95% confidence interval around the prediction is shaded in gray.
Estimated Accuracy1 for Tempo for 1.0
Predictions of GAMs trained on Accuracy1 for estimates for reference 1.0.
Figure 18: Accuracy1 predictions of a generalized additive model (GAM) fit to Accuracy1 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Estimated Accuracy2 for Tempo
When fitting a generalized additive model (GAM) to Accuracy2-values and a ground truth, what Accuracy2 can we expect with confidence?
Estimated Accuracy2 for Tempo for 0.1
Predictions of GAMs trained on Accuracy2 for estimates for reference 0.1.
Figure 19: Accuracy2 predictions of a generalized additive model (GAM) fit to Accuracy2 results for 0.1. The 95% confidence interval around the prediction is shaded in gray.
Estimated Accuracy2 for Tempo for 1.0
Predictions of GAMs trained on Accuracy2 for estimates for reference 1.0.
Figure 20: Accuracy2 predictions of a generalized additive model (GAM) fit to Accuracy2 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
Accuracy1 for ‘tag_open’ Tags
How well does an estimator perform when taking into account only tracks tagged with a given label? Note that some values may be based on very few estimates.
Accuracy1 for ‘tag_open’ Tags for 0.1
Figure 21: Mean Accuracy1 of estimates compared to version 0.1 depending on tag from namespace ‘tag_open’.
Accuracy1 for ‘tag_open’ Tags for 1.0
Figure 22: Mean Accuracy1 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
Accuracy2 for ‘tag_open’ Tags
How well does an estimator perform when taking into account only tracks tagged with a given label? Note that some values may be based on very few estimates.
Accuracy2 for ‘tag_open’ Tags for 0.1
Figure 23: Mean Accuracy2 of estimates compared to version 0.1 depending on tag from namespace ‘tag_open’.
Accuracy2 for ‘tag_open’ Tags for 1.0
Figure 24: Mean Accuracy2 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
OE1 and OE2
OE1 is defined as the octave error between an estimate E and a reference value R: OE1(E) = log2(E/R). This means that the most common errors, estimates that are off by a factor of 2 or 1/2, have the same magnitude, namely 1.
OE2 is the signed OE1 corresponding to the minimum absolute OE1, allowing the octave errors 2, 3, 1/2, and 1/3: OE2(E) = arg min_x |x| with x ∈ {OE1(E), OE1(2E), OE1(3E), OE1(½E), OE1(⅓E)}.
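Both error measures fit in a few lines (a sketch; the function names are mine):

```python
from math import log2

def oe1(est_bpm, ref_bpm):
    """Octave error: base-2 logarithm of the ratio estimate/reference."""
    return log2(est_bpm / ref_bpm)

def oe2(est_bpm, ref_bpm):
    """Signed OE1 of smallest magnitude after allowing the octave
    errors 2, 3, 1/2 and 1/3."""
    return min((oe1(est_bpm * f, ref_bpm)
                for f in (1.0, 2.0, 3.0, 0.5, 1.0 / 3.0)), key=abs)
```

Halving the tempo thus gives OE1 = -1 but OE2 = 0, which is why the OE2 columns below are so much closer to zero than the OE1 columns.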
Mean OE1/OE2 Results for 0.1
Estimator | OE1_MEAN | OE1_STDEV | OE2_MEAN | OE2_STDEV |
---|---|---|---|---|
schreiber2017/mirex2017 | -0.0390 | 0.1458 | 0.0277 | 0.1037 |
schreiber2018/ismir2018 | -0.0667 | 0.2494 | 0.0000 | 0.0000 |
schreiber2018/cnn | -0.0653 | 0.2499 | 0.0014 | 0.0051 |
schreiber2018/fcn | -0.0645 | 0.2501 | 0.0022 | 0.0055 |
schreiber2017/ismir2017 | 0.0943 | 0.2632 | 0.0277 | 0.1037 |
percival2014/stem | -0.1340 | 0.3404 | -0.0007 | 0.0016 |
boeck2015/tempodetector2016_default | -0.1345 | 0.3409 | -0.0012 | 0.0042 |
schreiber2014/default | -0.0391 | 0.4581 | 0.0276 | 0.1037 |
davies2009/mirex_qm_tempotracker | 0.0188 | 0.5198 | 0.0188 | 0.0091 |
Table 9: Mean OE1/OE2 for estimates compared to version 0.1, ordered by standard deviation.
OE1 distribution for 0.1
Figure 25: OE1 for estimates compared to version 0.1. Shown are the mean OE1 and an empirical distribution of the sample, using kernel density estimation (KDE).
OE2 distribution for 0.1
Figure 26: OE2 for estimates compared to version 0.1. Shown are the mean OE2 and an empirical distribution of the sample, using kernel density estimation (KDE).
Mean OE1/OE2 Results for 1.0
Estimator | OE1_MEAN | OE1_STDEV | OE2_MEAN | OE2_STDEV |
---|---|---|---|---|
schreiber2017/mirex2017 | -0.0386 | 0.1461 | 0.0281 | 0.1034 |
schreiber2018/ismir2018 | -0.0662 | 0.2505 | 0.0004 | 0.0016 |
schreiber2018/cnn | -0.0649 | 0.2509 | 0.0018 | 0.0050 |
schreiber2018/fcn | -0.0641 | 0.2512 | 0.0026 | 0.0056 |
schreiber2017/ismir2017 | 0.0948 | 0.2629 | 0.0281 | 0.1034 |
percival2014/stem | -0.1336 | 0.3413 | -0.0002 | 0.0024 |
boeck2015/tempodetector2016_default | -0.1341 | 0.3418 | -0.0008 | 0.0052 |
schreiber2014/default | -0.0387 | 0.4587 | 0.0280 | 0.1034 |
davies2009/mirex_qm_tempotracker | 0.0192 | 0.5203 | 0.0192 | 0.0101 |
Table 10: Mean OE1/OE2 for estimates compared to version 1.0, ordered by standard deviation.
OE1 distribution for 1.0
Figure 27: OE1 for estimates compared to version 1.0. Shown are the mean OE1 and an empirical distribution of the sample, using kernel density estimation (KDE).
OE2 distribution for 1.0
Figure 28: OE2 for estimates compared to version 1.0. Shown are the mean OE2 and an empirical distribution of the sample, using kernel density estimation (KDE).
Significance of Differences
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.1177 | 0.6951 | 0.1984 | 0.0506 | 0.3679 | 0.3157 | 0.3100 | 0.3255 |
davies2009/mirex_qm_tempotracker | 0.1177 | 1.0000 | 0.1182 | 0.4494 | 0.6349 | 0.6948 | 0.4915 | 0.4956 | 0.4844 |
percival2014/stem | 0.6951 | 0.1182 | 1.0000 | 0.1973 | 0.0508 | 0.3706 | 0.3202 | 0.3146 | 0.3302 |
schreiber2014/default | 0.1984 | 0.4494 | 0.1973 | 1.0000 | 0.3340 | 0.9996 | 0.7991 | 0.8058 | 0.7897 |
schreiber2017/ismir2017 | 0.0506 | 0.6349 | 0.0508 | 0.3340 | 1.0000 | 0.1643 | 0.1043 | 0.1077 | 0.1025 |
schreiber2017/mirex2017 | 0.3679 | 0.6948 | 0.3706 | 0.9996 | 0.1643 | 1.0000 | 0.7482 | 0.7534 | 0.7333 |
schreiber2018/cnn | 0.3157 | 0.4915 | 0.3202 | 0.7991 | 0.1043 | 0.7482 | 1.0000 | 0.7148 | 0.3343 |
schreiber2018/fcn | 0.3100 | 0.4956 | 0.3146 | 0.8058 | 0.1077 | 0.7534 | 0.7148 | 1.0000 | 0.1671 |
schreiber2018/ismir2018 | 0.3255 | 0.4844 | 0.3302 | 0.7897 | 0.1025 | 0.7333 | 0.3343 | 0.1671 | 1.0000 |
Table 11: Paired t-test p-values, using reference annotations 1.0 as ground truth with OE1. H0: the true mean difference between paired samples is zero. If p ≤ α, we reject H0, i.e., there is a significant difference between the estimates of the two algorithms. In the table, p-values < 0.05 are set in bold.
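The paired t-test treats each track's two OE values as a matched pair, which amounts to a one-sample t-test on the per-track differences. A stdlib sketch of the statistic (the function name is mine; the p-value then follows from the t distribution with n-1 degrees of freedom, e.g. via scipy.stats.ttest_rel, which computes both in one call):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(oe_a, oe_b):
    """t statistic of the paired t-test: one-sample t on the differences."""
    d = [a - b for a, b in zip(oe_a, oe_b)]       # per-track differences
    return mean(d) / (stdev(d) / sqrt(len(d)))    # mean over standard error
```

With only n = 15 tracks per comparison, the test has little power, which helps explain the uniformly large p-values in the tables.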
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.1177 | 0.6951 | 0.1984 | 0.0506 | 0.3679 | 0.3157 | 0.3100 | 0.3255 |
davies2009/mirex_qm_tempotracker | 0.1177 | 1.0000 | 0.1182 | 0.4494 | 0.6349 | 0.6948 | 0.4915 | 0.4956 | 0.4844 |
percival2014/stem | 0.6951 | 0.1182 | 1.0000 | 0.1973 | 0.0508 | 0.3706 | 0.3202 | 0.3146 | 0.3302 |
schreiber2014/default | 0.1984 | 0.4494 | 0.1973 | 1.0000 | 0.3340 | 0.9996 | 0.7991 | 0.8058 | 0.7897 |
schreiber2017/ismir2017 | 0.0506 | 0.6349 | 0.0508 | 0.3340 | 1.0000 | 0.1643 | 0.1043 | 0.1077 | 0.1025 |
schreiber2017/mirex2017 | 0.3679 | 0.6948 | 0.3706 | 0.9996 | 0.1643 | 1.0000 | 0.7482 | 0.7534 | 0.7333 |
schreiber2018/cnn | 0.3157 | 0.4915 | 0.3202 | 0.7991 | 0.1043 | 0.7482 | 1.0000 | 0.7148 | 0.3343 |
schreiber2018/fcn | 0.3100 | 0.4956 | 0.3146 | 0.8058 | 0.1077 | 0.7534 | 0.7148 | 1.0000 | 0.1671 |
schreiber2018/ismir2018 | 0.3255 | 0.4844 | 0.3302 | 0.7897 | 0.1025 | 0.7333 | 0.3343 | 0.1671 | 1.0000 |
Table 12: Paired t-test p-values, using reference annotations 0.1 as ground truth with OE1. H0: the true mean difference between paired samples is zero. If p ≤ α, we reject H0, i.e., there is a significant difference between the estimates of the two algorithms. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.6951 | 0.3215 | 0.3201 | 0.3201 | 0.2118 | 0.0772 | 0.3060 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0000 | 0.7572 | 0.7550 | 0.7550 | 0.0000 | 0.0001 | 0.0000 |
percival2014/stem | 0.6951 | 0.0000 | 1.0000 | 0.3226 | 0.3212 | 0.3212 | 0.1308 | 0.0680 | 0.1392 |
schreiber2014/default | 0.3215 | 0.7572 | 0.3226 | 1.0000 | 0.8146 | 0.8146 | 0.3366 | 0.3775 | 0.3365 |
schreiber2017/ismir2017 | 0.3201 | 0.7550 | 0.3212 | 0.8146 | 1.0000 | 1.0000 | 0.3351 | 0.3760 | 0.3351 |
schreiber2017/mirex2017 | 0.3201 | 0.7550 | 0.3212 | 0.8146 | 1.0000 | 1.0000 | 0.3351 | 0.3760 | 0.3351 |
schreiber2018/cnn | 0.2118 | 0.0000 | 0.1308 | 0.3366 | 0.3351 | 0.3351 | 1.0000 | 0.7148 | 0.3343 |
schreiber2018/fcn | 0.0772 | 0.0001 | 0.0680 | 0.3775 | 0.3760 | 0.3760 | 0.7148 | 1.0000 | 0.1671 |
schreiber2018/ismir2018 | 0.3060 | 0.0000 | 0.1392 | 0.3365 | 0.3351 | 0.3351 | 0.3343 | 0.1671 | 1.0000 |
Table 13: Paired t-test p-values, using reference annotations 1.0 as ground truth with OE2. H0: the true mean difference between paired samples is zero. If p ≤ α, we reject H0, i.e., there is a significant difference between the estimates of the two algorithms. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.6951 | 0.3215 | 0.3201 | 0.3201 | 0.2118 | 0.0772 | 0.3060 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0000 | 0.7572 | 0.7550 | 0.7550 | 0.0000 | 0.0001 | 0.0000 |
percival2014/stem | 0.6951 | 0.0000 | 1.0000 | 0.3226 | 0.3212 | 0.3212 | 0.1308 | 0.0680 | 0.1392 |
schreiber2014/default | 0.3215 | 0.7572 | 0.3226 | 1.0000 | 0.8146 | 0.8146 | 0.3366 | 0.3775 | 0.3365 |
schreiber2017/ismir2017 | 0.3201 | 0.7550 | 0.3212 | 0.8146 | 1.0000 | 1.0000 | 0.3351 | 0.3760 | 0.3351 |
schreiber2017/mirex2017 | 0.3201 | 0.7550 | 0.3212 | 0.8146 | 1.0000 | 1.0000 | 0.3351 | 0.3760 | 0.3351 |
schreiber2018/cnn | 0.2118 | 0.0000 | 0.1308 | 0.3366 | 0.3351 | 0.3351 | 1.0000 | 0.7148 | 0.3343 |
schreiber2018/fcn | 0.0772 | 0.0001 | 0.0680 | 0.3775 | 0.3760 | 0.3760 | 0.7148 | 1.0000 | 0.1671 |
schreiber2018/ismir2018 | 0.3060 | 0.0000 | 0.1392 | 0.3365 | 0.3351 | 0.3351 | 0.3343 | 0.1671 | 1.0000 |
Table 14: Paired t-test p-values, using reference annotations 0.1 as ground truth with OE2. H0: the true mean difference between paired samples is zero. If p ≤ α, reject H0, i.e., there is a significant difference between the estimates of the two algorithms. In the table, p-values < 0.05 are set in bold.
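The significance tests in these tables are standard paired t-tests. A minimal sketch using SciPy's `ttest_rel`; the per-track OE1 values below are made up for illustration and are not taken from this report:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-track OE1 values for two estimators on the same tracks.
# Estimator B is consistently about one octave off on most tracks.
oe_a = np.array([0.02, -0.01, 0.00, 0.03, -0.02, 0.01])
oe_b = np.array([1.00, 0.98, 1.02, 0.01, 0.99, 1.01])

# H0: the true mean of the paired differences is zero.
t_stat, p_value = ttest_rel(oe_a, oe_b)
if p_value <= 0.05:
    print("significant difference between the two estimators")
```

With these toy values the paired differences are large and consistent, so the test rejects H0 at α = 0.05.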
OE1 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value of less than τ, i.e., tracks with a more or less stable beat?
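Here cvar denotes the coefficient of variation of the inter-beat intervals derived from the beat annotations. A minimal sketch assuming that definition; the function and track names are illustrative, not tempo_eval's API:

```python
import numpy as np

def cvar(beat_times):
    """Coefficient of variation of inter-beat intervals:
    std(IBI) / mean(IBI). Low values indicate a stable beat."""
    ibi = np.diff(np.asarray(beat_times, dtype=float))
    return float(np.std(ibi) / np.mean(ibi))

# A perfectly steady 120 BPM click has cvar 0.
steady = np.arange(0.0, 10.0, 0.5)

# Keep only tracks with cvar < tau for the subset evaluation.
tau = 0.05
tracks = {"a": steady, "b": [0.0, 0.4, 1.1, 1.5, 2.4]}
subset = [name for name, beats in tracks.items() if cvar(beats) < tau]
```

Sweeping τ from strict to lenient values yields the curves shown in the figures below.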
OE1 on cvar-Subsets for 0.1 based on cvar-Values from 1.0
Figure 29: Mean OE1 compared to version 0.1 for tracks with cvar < τ based on beat annotations from 1.0.
CSV JSON LATEX PICKLE SVG PDF PNG
OE1 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 30: Mean OE1 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
CSV JSON LATEX PICKLE SVG PDF PNG
OE2 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value of less than τ, i.e., tracks with a more or less stable beat?
OE2 on cvar-Subsets for 0.1 based on cvar-Values from 1.0
Figure 31: Mean OE2 compared to version 0.1 for tracks with cvar < τ based on beat annotations from 1.0.
CSV JSON LATEX PICKLE SVG PDF PNG
OE2 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 32: Mean OE2 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
CSV JSON LATEX PICKLE SVG PDF PNG
OE1 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show mean OE1 for reference subsets with tempi in [T-10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
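The windowing can be sketched as follows, with OE1(E) = log2(E/R) and made-up tempo pairs:

```python
import numpy as np

def oe1(estimate, reference):
    # Octave error: log2 ratio of estimated to reference tempo.
    return np.log2(estimate / reference)

# Hypothetical (reference BPM, estimated BPM) pairs.
refs = np.array([62.0, 95.0, 100.0, 104.0, 140.0])
ests = np.array([124.0, 95.0, 98.0, 104.0, 138.0])

# Mean OE1 over the subset with reference tempo in [T-10, T+10].
T = 100.0
mask = (refs >= T - 10) & (refs <= T + 10)
mean_oe1 = float(np.mean(oe1(ests[mask], refs[mask])))
```

Evaluating this for a range of T values produces one curve per estimator, as in the figures below.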
OE1 on Tempo-Subsets for 0.1
Figure 33: Mean OE1 for estimates compared to version 0.1 for tempo intervals around T.
CSV JSON LATEX PICKLE SVG PDF PNG
OE1 on Tempo-Subsets for 1.0
Figure 34: Mean OE1 for estimates compared to version 1.0 for tempo intervals around T.
CSV JSON LATEX PICKLE SVG PDF PNG
OE2 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show mean OE2 for reference subsets with tempi in [T-10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
OE2 on Tempo-Subsets for 0.1
Figure 35: Mean OE2 for estimates compared to version 0.1 for tempo intervals around T.
CSV JSON LATEX PICKLE SVG PDF PNG
OE2 on Tempo-Subsets for 1.0
Figure 36: Mean OE2 for estimates compared to version 1.0 for tempo intervals around T.
CSV JSON LATEX PICKLE SVG PDF PNG
Estimated OE1 for Tempo
When fitting a generalized additive model (GAM) to OE1 values and a ground truth, what OE1 can we expect, and with what confidence?
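The report fits GAMs for these plots. As a library-free stand-in for the same idea, namely a smooth estimate of the tempo-conditional mean OE1 with a confidence band, here is a sliding-window sketch on synthetic data (a GAM would instead fit penalized splines):

```python
import numpy as np

# Synthetic (reference tempo, OE1) pairs standing in for real results.
rng = np.random.default_rng(0)
tempi = rng.uniform(60, 180, 200)
oe1 = 0.001 * (tempi - 120) + rng.normal(0.0, 0.05, 200)

# Sliding-window mean with a ~95% normal-approximation CI.
grid = np.arange(70.0, 171.0, 10.0)
means, half_widths = [], []
for t in grid:
    window = oe1[np.abs(tempi - t) <= 10.0]
    means.append(window.mean())
    half_widths.append(1.96 * window.std(ddof=1) / np.sqrt(len(window)))
```

Plotting `means` against `grid` with a band of ± `half_widths` gives a rough analogue of the shaded GAM predictions in the figures below.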
Estimated OE1 for Tempo for 0.1
Predictions of GAMs trained on the OE1 of estimates against reference 0.1.
Figure 37: OE1 predictions of a generalized additive model (GAM) fit to OE1 results for 0.1. The 95% confidence interval around the prediction is shaded in gray.
CSV JSON LATEX PICKLE SVG PDF PNG
Estimated OE1 for Tempo for 1.0
Predictions of GAMs trained on the OE1 of estimates against reference 1.0.
Figure 38: OE1 predictions of a generalized additive model (GAM) fit to OE1 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
CSV JSON LATEX PICKLE SVG PDF PNG
Estimated OE2 for Tempo
When fitting a generalized additive model (GAM) to OE2 values and a ground truth, what OE2 can we expect, and with what confidence?
Estimated OE2 for Tempo for 0.1
Predictions of GAMs trained on the OE2 of estimates against reference 0.1.
Figure 39: OE2 predictions of a generalized additive model (GAM) fit to OE2 results for 0.1. The 95% confidence interval around the prediction is shaded in gray.
CSV JSON LATEX PICKLE SVG PDF PNG
Estimated OE2 for Tempo for 1.0
Predictions of GAMs trained on the OE2 of estimates against reference 1.0.
Figure 40: OE2 predictions of a generalized additive model (GAM) fit to OE2 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
CSV JSON LATEX PICKLE SVG PDF PNG
OE1 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks that are tagged with a given label? Note that some values may be based on very few estimates.
OE1 for ‘tag_open’ Tags for 0.1
Figure 41: OE1 of estimates compared to version 0.1 depending on tag from namespace ‘tag_open’.
OE1 for ‘tag_open’ Tags for 1.0
Figure 42: OE1 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
OE2 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks that are tagged with a given label? Note that some values may be based on very few estimates.
OE2 for ‘tag_open’ Tags for 0.1
Figure 43: OE2 of estimates compared to version 0.1 depending on tag from namespace ‘tag_open’.
OE2 for ‘tag_open’ Tags for 1.0
Figure 44: OE2 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
AOE1 and AOE2
AOE1 is defined as the absolute octave error between an estimate E and a reference value R: AOE1(E) = |log2(E/R)|.
AOE2 is the minimum of AOE1 when additionally allowing the octave errors 2, 3, 1/2, and 1/3: AOE2(E) = min(AOE1(E), AOE1(2E), AOE1(3E), AOE1(½E), AOE1(⅓E)).
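A direct translation of these definitions (function names are illustrative):

```python
import numpy as np

def aoe1(estimate, reference):
    # Absolute octave error: |log2(E/R)|.
    return abs(np.log2(estimate / reference))

def aoe2(estimate, reference):
    # Minimum AOE1 over the allowed octave errors 1, 2, 3, 1/2, 1/3.
    factors = (1.0, 2.0, 3.0, 0.5, 1.0 / 3.0)
    return min(aoe1(estimate * f, reference) for f in factors)

# A half-tempo estimate is a full octave off for AOE1,
# but perfect under AOE2 (factor 2 maps it back onto the reference).
assert abs(aoe1(60.0, 120.0) - 1.0) < 1e-12
assert aoe2(60.0, 120.0) < 1e-12
```

Unlike the accuracy measures, both values are continuous, so mean and standard deviation as reported below are meaningful summaries.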
Mean AOE1/AOE2 Results for 0.1
Estimator | AOE1_MEAN | AOE1_STDEV | AOE2_MEAN | AOE2_STDEV |
---|---|---|---|---|
schreiber2017/mirex2017 | 0.0391 | 0.1457 | 0.0279 | 0.1036 |
schreiber2018/ismir2018 | 0.0667 | 0.2494 | 0.0000 | 0.0000 |
schreiber2018/cnn | 0.0680 | 0.2491 | 0.0014 | 0.0051 |
schreiber2018/fcn | 0.0688 | 0.2489 | 0.0022 | 0.0055 |
schreiber2017/ismir2017 | 0.0945 | 0.2631 | 0.0279 | 0.1036 |
percival2014/stem | 0.1350 | 0.3401 | 0.0016 | 0.0006 |
boeck2015/tempodetector2016_default | 0.1364 | 0.3402 | 0.0031 | 0.0031 |
schreiber2014/default | 0.2284 | 0.3990 | 0.0285 | 0.1034 |
davies2009/mirex_qm_tempotracker | 0.2815 | 0.4374 | 0.0188 | 0.0091 |
Table 15: Mean AOE1/AOE2 for estimates compared to version 0.1, ordered by mean AOE1.
Raw data AOE1: CSV JSON LATEX PICKLE
Raw data AOE2: CSV JSON LATEX PICKLE
AOE1 distribution for 0.1
Figure 45: AOE1 for estimates compared to version 0.1. Shown are the mean AOE1 and an empirical distribution of the sample, using kernel density estimation (KDE).
CSV JSON LATEX PICKLE SVG PDF PNG
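The distribution curves in these figures come from kernel density estimation. A minimal sketch with SciPy's `gaussian_kde`; the AOE1 values are made up:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical per-track AOE1 values for one estimator:
# mostly correct (near 0) with two octave errors (near 1).
aoe1_values = np.array([0.0, 0.01, 0.02, 0.03, 1.0, 1.02, 0.0, 0.05])

kde = gaussian_kde(aoe1_values)
xs = np.linspace(-1.5, 2.5, 400)
density = kde(xs)

# The estimated density integrates to ~1 over a wide enough range.
area = float(density.sum() * (xs[1] - xs[0]))
```

Plotting `density` against `xs` yields the kind of smoothed, bimodal shape shown for estimators that occasionally make octave errors.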
AOE2 distribution for 0.1
Figure 46: AOE2 for estimates compared to version 0.1. Shown are the mean AOE2 and an empirical distribution of the sample, using kernel density estimation (KDE).
CSV JSON LATEX PICKLE SVG PDF PNG
Mean AOE1/AOE2 Results for 1.0
Estimator | AOE1_MEAN | AOE1_STDEV | AOE2_MEAN | AOE2_STDEV |
---|---|---|---|---|
schreiber2017/mirex2017 | 0.0401 | 0.1457 | 0.0288 | 0.1032 |
schreiber2018/ismir2018 | 0.0677 | 0.2501 | 0.0010 | 0.0013 |
schreiber2018/cnn | 0.0689 | 0.2498 | 0.0023 | 0.0048 |
schreiber2018/fcn | 0.0698 | 0.2496 | 0.0032 | 0.0053 |
schreiber2017/ismir2017 | 0.0954 | 0.2627 | 0.0288 | 0.1032 |
percival2014/stem | 0.1354 | 0.3406 | 0.0021 | 0.0013 |
boeck2015/tempodetector2016_default | 0.1371 | 0.3406 | 0.0037 | 0.0037 |
schreiber2014/default | 0.2292 | 0.3992 | 0.0292 | 0.1031 |
davies2009/mirex_qm_tempotracker | 0.2824 | 0.4375 | 0.0192 | 0.0101 |
Table 16: Mean AOE1/AOE2 for estimates compared to version 1.0, ordered by mean AOE1.
Raw data AOE1: CSV JSON LATEX PICKLE
Raw data AOE2: CSV JSON LATEX PICKLE
AOE1 distribution for 1.0
Figure 47: AOE1 for estimates compared to version 1.0. Shown are the mean AOE1 and an empirical distribution of the sample, using kernel density estimation (KDE).
CSV JSON LATEX PICKLE SVG PDF PNG
AOE2 distribution for 1.0
Figure 48: AOE2 for estimates compared to version 1.0. Shown are the mean AOE2 and an empirical distribution of the sample, using kernel density estimation (KDE).
CSV JSON LATEX PICKLE SVG PDF PNG
Significance of Differences
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.1381 | 0.1035 | 0.2102 | 0.7389 | 0.3605 | 0.3235 | 0.3302 | 0.3142 |
davies2009/mirex_qm_tempotracker | 0.1381 | 1.0000 | 0.1334 | 0.4850 | 0.2296 | 0.0816 | 0.0655 | 0.0668 | 0.0637 |
percival2014/stem | 0.1035 | 0.1334 | 1.0000 | 0.2033 | 0.7487 | 0.3688 | 0.3367 | 0.3435 | 0.3272 |
schreiber2014/default | 0.2102 | 0.4850 | 0.2033 | 1.0000 | 0.3321 | 0.1044 | 0.1029 | 0.1064 | 0.1013 |
schreiber2017/ismir2017 | 0.7389 | 0.2296 | 0.7487 | 0.3321 | 1.0000 | 0.4325 | 0.7972 | 0.8047 | 0.7885 |
schreiber2017/mirex2017 | 0.3605 | 0.0816 | 0.3688 | 0.1044 | 0.4325 | 1.0000 | 0.7203 | 0.7143 | 0.7342 |
schreiber2018/cnn | 0.3235 | 0.0655 | 0.3367 | 0.1029 | 0.7972 | 0.7203 | 1.0000 | 0.6672 | 0.3343 |
schreiber2018/fcn | 0.3302 | 0.0668 | 0.3435 | 0.1064 | 0.8047 | 0.7143 | 0.6672 | 1.0000 | 0.1671 |
schreiber2018/ismir2018 | 0.3142 | 0.0637 | 0.3272 | 0.1013 | 0.7885 | 0.7342 | 0.3343 | 0.1671 | 1.0000 |
Table 17: Paired t-test p-values, using reference annotations 1.0 as ground truth with AOE1. H0: the true mean difference between paired samples is zero. If p ≤ α, reject H0, i.e., there is a significant difference between the estimates of the two algorithms. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.1386 | 0.1263 | 0.2108 | 0.7375 | 0.3587 | 0.3216 | 0.3275 | 0.3116 |
davies2009/mirex_qm_tempotracker | 0.1386 | 1.0000 | 0.1347 | 0.4859 | 0.2296 | 0.0814 | 0.0655 | 0.0666 | 0.0636 |
percival2014/stem | 0.1263 | 0.1347 | 1.0000 | 0.2050 | 0.7460 | 0.3658 | 0.3333 | 0.3392 | 0.3231 |
schreiber2014/default | 0.2108 | 0.4859 | 0.2050 | 1.0000 | 0.3317 | 0.1040 | 0.1026 | 0.1060 | 0.1009 |
schreiber2017/ismir2017 | 0.7375 | 0.2296 | 0.7460 | 0.3317 | 1.0000 | 0.4315 | 0.7971 | 0.8038 | 0.7877 |
schreiber2017/mirex2017 | 0.3587 | 0.0814 | 0.3658 | 0.1040 | 0.4315 | 1.0000 | 0.7193 | 0.7144 | 0.7343 |
schreiber2018/cnn | 0.3216 | 0.0655 | 0.3333 | 0.1026 | 0.7971 | 0.7193 | 1.0000 | 0.7148 | 0.3343 |
schreiber2018/fcn | 0.3275 | 0.0666 | 0.3392 | 0.1060 | 0.8038 | 0.7144 | 0.7148 | 1.0000 | 0.1671 |
schreiber2018/ismir2018 | 0.3116 | 0.0636 | 0.3231 | 0.1009 | 0.7877 | 0.7343 | 0.3343 | 0.1671 | 1.0000 |
Table 18: Paired t-test p-values, using reference annotations 0.1 as ground truth with AOE1. H0: the true mean difference between paired samples is zero. If p ≤ α, reject H0, i.e., there is a significant difference between the estimates of the two algorithms. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.1035 | 0.3696 | 0.3776 | 0.3776 | 0.2826 | 0.7648 | 0.0018 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0000 | 0.7260 | 0.7371 | 0.7371 | 0.0001 | 0.0001 | 0.0000 |
percival2014/stem | 0.1035 | 0.0000 | 1.0000 | 0.3440 | 0.3517 | 0.3517 | 0.8988 | 0.4585 | 0.0073 |
schreiber2014/default | 0.3696 | 0.7260 | 0.3440 | 1.0000 | 0.1994 | 0.1994 | 0.3239 | 0.3647 | 0.3243 |
schreiber2017/ismir2017 | 0.3776 | 0.7371 | 0.3517 | 0.1994 | 1.0000 | 1.0000 | 0.3316 | 0.3726 | 0.3317 |
schreiber2017/mirex2017 | 0.3776 | 0.7371 | 0.3517 | 0.1994 | 1.0000 | 1.0000 | 0.3316 | 0.3726 | 0.3317 |
schreiber2018/cnn | 0.2826 | 0.0001 | 0.8988 | 0.3239 | 0.3316 | 0.3316 | 1.0000 | 0.6672 | 0.3343 |
schreiber2018/fcn | 0.7648 | 0.0001 | 0.4585 | 0.3647 | 0.3726 | 0.3726 | 0.6672 | 1.0000 | 0.1671 |
schreiber2018/ismir2018 | 0.0018 | 0.0000 | 0.0073 | 0.3243 | 0.3317 | 0.3317 | 0.3343 | 0.1671 | 1.0000 |
Table 19: Paired t-test p-values, using reference annotations 1.0 as ground truth with AOE2. H0: the true mean difference between paired samples is zero. If p ≤ α, reject H0, i.e., there is a significant difference between the estimates of the two algorithms. In the table, p-values < 0.05 are set in bold.
Estimator | boeck2015/tempodetector2016_default | davies2009/mirex_qm_tempotracker | percival2014/stem | schreiber2014/default | schreiber2017/ismir2017 | schreiber2017/mirex2017 | schreiber2018/cnn | schreiber2018/fcn | schreiber2018/ismir2018 |
---|---|---|---|---|---|---|---|---|---|
boeck2015/tempodetector2016_default | 1.0000 | 0.0000 | 0.1263 | 0.3713 | 0.3845 | 0.3845 | 0.2639 | 0.6366 | 0.0020 |
davies2009/mirex_qm_tempotracker | 0.0000 | 1.0000 | 0.0000 | 0.7314 | 0.7496 | 0.7496 | 0.0000 | 0.0001 | 0.0000 |
percival2014/stem | 0.1263 | 0.0000 | 1.0000 | 0.3473 | 0.3599 | 0.3599 | 0.8454 | 0.7263 | 0.0000 |
schreiber2014/default | 0.3713 | 0.7314 | 0.3473 | 1.0000 | 0.0169 | 0.0169 | 0.3188 | 0.3594 | 0.3196 |
schreiber2017/ismir2017 | 0.3845 | 0.7496 | 0.3599 | 0.0169 | 1.0000 | 1.0000 | 0.3313 | 0.3722 | 0.3314 |
schreiber2017/mirex2017 | 0.3845 | 0.7496 | 0.3599 | 0.0169 | 1.0000 | 1.0000 | 0.3313 | 0.3722 | 0.3314 |
schreiber2018/cnn | 0.2639 | 0.0000 | 0.8454 | 0.3188 | 0.3313 | 0.3313 | 1.0000 | 0.7148 | 0.3343 |
schreiber2018/fcn | 0.6366 | 0.0001 | 0.7263 | 0.3594 | 0.3722 | 0.3722 | 0.7148 | 1.0000 | 0.1671 |
schreiber2018/ismir2018 | 0.0020 | 0.0000 | 0.0000 | 0.3196 | 0.3314 | 0.3314 | 0.3343 | 0.1671 | 1.0000 |
Table 20: Paired t-test p-values, using reference annotations 0.1 as ground truth with AOE2. H0: the true mean difference between paired samples is zero. If p ≤ α, reject H0, i.e., there is a significant difference between the estimates of the two algorithms. In the table, p-values < 0.05 are set in bold.
AOE1 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value of less than τ, i.e., tracks with a more or less stable beat?
AOE1 on cvar-Subsets for 0.1 based on cvar-Values from 1.0
Figure 49: Mean AOE1 compared to version 0.1 for tracks with cvar < τ based on beat annotations from 1.0.
CSV JSON LATEX PICKLE SVG PDF PNG
AOE1 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 50: Mean AOE1 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
CSV JSON LATEX PICKLE SVG PDF PNG
AOE2 on cvar-Subsets
How well does an estimator perform when considering only tracks with a cvar value of less than τ, i.e., tracks with a more or less stable beat?
AOE2 on cvar-Subsets for 0.1 based on cvar-Values from 1.0
Figure 51: Mean AOE2 compared to version 0.1 for tracks with cvar < τ based on beat annotations from 1.0.
CSV JSON LATEX PICKLE SVG PDF PNG
AOE2 on cvar-Subsets for 1.0 based on cvar-Values from 1.0
Figure 52: Mean AOE2 compared to version 1.0 for tracks with cvar < τ based on beat annotations from 1.0.
CSV JSON LATEX PICKLE SVG PDF PNG
AOE1 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show mean AOE1 for reference subsets with tempi in [T-10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
AOE1 on Tempo-Subsets for 0.1
Figure 53: Mean AOE1 for estimates compared to version 0.1 for tempo intervals around T.
CSV JSON LATEX PICKLE SVG PDF PNG
AOE1 on Tempo-Subsets for 1.0
Figure 54: Mean AOE1 for estimates compared to version 1.0 for tempo intervals around T.
CSV JSON LATEX PICKLE SVG PDF PNG
AOE2 on Tempo-Subsets
How well does an estimator perform when considering only a subset of the reference annotations? The graphs show mean AOE2 for reference subsets with tempi in [T-10, T+10] BPM. Note that the graphs do not show confidence intervals and that some values may be based on very few estimates.
AOE2 on Tempo-Subsets for 0.1
Figure 55: Mean AOE2 for estimates compared to version 0.1 for tempo intervals around T.
CSV JSON LATEX PICKLE SVG PDF PNG
AOE2 on Tempo-Subsets for 1.0
Figure 56: Mean AOE2 for estimates compared to version 1.0 for tempo intervals around T.
CSV JSON LATEX PICKLE SVG PDF PNG
Estimated AOE1 for Tempo
When fitting a generalized additive model (GAM) to AOE1 values and a ground truth, what AOE1 can we expect, and with what confidence?
Estimated AOE1 for Tempo for 0.1
Predictions of GAMs trained on the AOE1 of estimates against reference 0.1.
Figure 57: AOE1 predictions of a generalized additive model (GAM) fit to AOE1 results for 0.1. The 95% confidence interval around the prediction is shaded in gray.
CSV JSON LATEX PICKLE SVG PDF PNG
Estimated AOE1 for Tempo for 1.0
Predictions of GAMs trained on the AOE1 of estimates against reference 1.0.
Figure 58: AOE1 predictions of a generalized additive model (GAM) fit to AOE1 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
CSV JSON LATEX PICKLE SVG PDF PNG
Estimated AOE2 for Tempo
When fitting a generalized additive model (GAM) to AOE2 values and a ground truth, what AOE2 can we expect, and with what confidence?
Estimated AOE2 for Tempo for 0.1
Predictions of GAMs trained on the AOE2 of estimates against reference 0.1.
Figure 59: AOE2 predictions of a generalized additive model (GAM) fit to AOE2 results for 0.1. The 95% confidence interval around the prediction is shaded in gray.
CSV JSON LATEX PICKLE SVG PDF PNG
Estimated AOE2 for Tempo for 1.0
Predictions of GAMs trained on the AOE2 of estimates against reference 1.0.
Figure 60: AOE2 predictions of a generalized additive model (GAM) fit to AOE2 results for 1.0. The 95% confidence interval around the prediction is shaded in gray.
CSV JSON LATEX PICKLE SVG PDF PNG
AOE1 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks that are tagged with a given label? Note that some values may be based on very few estimates.
AOE1 for ‘tag_open’ Tags for 0.1
Figure 61: AOE1 of estimates compared to version 0.1 depending on tag from namespace ‘tag_open’.
AOE1 for ‘tag_open’ Tags for 1.0
Figure 62: AOE1 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
AOE2 for ‘tag_open’ Tags
How well does an estimator perform when considering only tracks that are tagged with a given label? Note that some values may be based on very few estimates.
AOE2 for ‘tag_open’ Tags for 0.1
Figure 63: AOE2 of estimates compared to version 0.1 depending on tag from namespace ‘tag_open’.
AOE2 for ‘tag_open’ Tags for 1.0
Figure 64: AOE2 of estimates compared to version 1.0 depending on tag from namespace ‘tag_open’.
Generated by tempo_eval 0.1.1 on 2022-06-29 18:55. Size L.