
Detecting and Adapting to Users’ Cognitive and Affective State to Develop Intelligent Musical Interfaces

  • Beste F. Yuksel (email author)
  • Kurt B. Oleson
  • Remco Chang
  • Robert J. K. Jacob
Chapter
Part of the Springer Series on Cultural Computing (SSCC) book series

Abstract

In musical instrument interfaces, such as piano keyboards, the player’s communication channels may be limited by the expressivity and resolution of input devices, the expressivity of the relevant body parts, and human attention bottlenecks. In this chapter, we consider intelligent musical interfaces that measure cognitive or affective state implicitly and in real time, allowing the system to make musically appropriate adaptations without conscious effort on the part of the user. The chapter focuses on two areas in which the detection of cognitive and affective states has been applied to musical interaction design: musical learning (including learning instruments or pieces of music) and musical creativity (including composing and improvisation). We discuss the motivation, theory, and technological basis for work of this kind and review relevant existing work. Finally, we present and critique the design and evaluation of two such systems implemented by the authors, one for musical learning and one for musical creativity.
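To make the closed-loop idea described above concrete, the following minimal sketch (a hypothetical Python example, not the authors’ implementation) shows how a stream of real-time workload estimates might be smoothed and used to trigger a musically appropriate adaptation. The sensor reading, thresholds, window size, and adaptation function are illustrative assumptions standing in for a real classifier pipeline and musical interface.

    import random
    from collections import deque
    from statistics import mean

    LOW, HIGH = 0.35, 0.65   # assumed workload thresholds on a 0..1 scale
    WINDOW = 20              # number of recent samples in the moving average

    def make_simulated_sensor():
        """Stand-in for a real-time workload estimator (e.g. driven by fNIRS or EEG).
        Returns a function producing a slowly drifting value in [0, 1]."""
        level = 0.5
        def read():
            nonlocal level
            level = min(1.0, max(0.0, level + random.uniform(-0.05, 0.05)))
            return level
        return read

    def adapt_task(direction: str) -> None:
        """Placeholder for a musically appropriate adaptation, e.g. changing task
        difficulty or adding/removing an accompaniment layer."""
        print(f"adapting: {direction}")

    def run_adaptive_loop(read_workload, n_samples: int = 500) -> None:
        window = deque(maxlen=WINDOW)
        for _ in range(n_samples):
            window.append(read_workload())
            if len(window) < WINDOW:
                continue                           # wait for a full averaging window
            avg = mean(window)
            if avg < LOW:
                adapt_task("increase challenge")   # user appears to have spare capacity
                window.clear()                     # require fresh evidence before re-triggering
            elif avg > HIGH:
                adapt_task("reduce challenge")     # user appears overloaded
                window.clear()

    if __name__ == "__main__":
        run_adaptive_loop(make_simulated_sensor())

In a deployed system, the simulated sensor would be replaced by the output of a trained classifier over physiological signals, and the adaptation function would act on the musical task itself; the moving average and re-triggering guard illustrate why such systems adapt on sustained trends rather than single noisy readings.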

Notes

Acknowledgements

The authors would like to thank Evan M. Peck from Bucknell University, Daniel Afergan from Google Inc., and Paul Lehrman and Kathleen Kuo from Tufts University for discussions on this topic. We thank the National Science Foundation (grant nos. IIS-1065154, IIS-1218170) and Google Inc. for their support of this work.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Beste F. Yuksel (1, email author)
  • Kurt B. Oleson (2)
  • Remco Chang (3)
  • Robert J. K. Jacob (4)

  1. Department of Computer Science, University of San Francisco, San Francisco, USA
  2. Computer Science Department, Tufts University, Medford, USA
  3. Computer Science Department, Tufts University, Medford, USA
  4. Computer Science Department, Tufts University, Medford, USA
