
The Objective Ear: Assessing the Progress of a Music Task

  • Joel Burrows
  • Vivekanandan Kumar
Conference paper
Part of the Lecture Notes in Educational Technology book series (LNET)

Abstract

Music educators assess the progress their students make between lessons. This assessment is error prone, as it relies on the educator's skill and memory. An objective ear is a tool that takes as input a pair of performances of a piece of music and returns an accurate, reliable assessment of the progress between them. The tool evaluates each performance using domain knowledge to generate a vector of metrics. The metric vectors of the two performances are subtracted from one another, and the differences serve as input to a machine-learning classifier that maps them to an assessment. The implementation demonstrates that an objective ear is a feasible and practical solution to the assessment problem.
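
The abstract describes the pipeline in enough detail to sketch it: each performance is reduced to a vector of domain-knowledge metrics, the vectors of two performances are subtracted, and the difference is mapped to an assessment by a classifier. The following Python sketch illustrates that flow; the metric names (pitch accuracy, rhythmic accuracy, tempo stability), the toy training data, and the choice of a decision-tree classifier are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the objective-ear pipeline described in the abstract.
    # Metric names, training data, and classifier are illustrative assumptions.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def extract_metrics(performance: dict) -> np.ndarray:
        """Reduce one performance to a vector of hypothetical domain metrics in [0, 1]."""
        return np.array([
            performance["pitch_accuracy"],
            performance["rhythmic_accuracy"],
            performance["tempo_stability"],
        ])

    # Toy training data: metric-vector differences between performance pairs,
    # labelled with a progress assessment (0 = none, 1 = some, 2 = strong).
    X_train = np.array([
        [-0.03, 0.00, -0.02],
        [0.05, 0.02, 0.01],
        [0.20, 0.15, 0.10],
    ])
    y_train = np.array([0, 1, 2])
    classifier = DecisionTreeClassifier().fit(X_train, y_train)

    # Assess progress between an earlier and a later performance of the same piece.
    earlier = {"pitch_accuracy": 0.70, "rhythmic_accuracy": 0.65, "tempo_stability": 0.60}
    later = {"pitch_accuracy": 0.85, "rhythmic_accuracy": 0.80, "tempo_stability": 0.72}
    difference = extract_metrics(later) - extract_metrics(earlier)
    print(classifier.predict(difference.reshape(1, -1)))  # predicted progress class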

Keywords

Music education · Assessment · Learning analytics · Machine learning



Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Athabasca University, Athabasca, Canada
