Pulsed Melodic Processing – The Use of Melodies in Affective Computations for Increased Processing Transparency

Chapter
Part of the Springer Series on Cultural Computing (SSCC)

Abstract

Pulsed Melodic Processing (PMP) is a computation protocol that uses musically based pulse sets ("melodies") for processing, and is capable of representing the arousal and valence of affective states. Affective processing and affective input/output are key tools in artificial intelligence and computing. In designing processing elements (e.g. bits, bytes, floats), engineers have primarily focused on processing efficiency and power, and only afterwards investigated ways of making the results perceivable by the user/engineer. However, Human-Computer Interaction research – and the increasing pervasiveness of computation in our daily lives – supports a complementary approach in which computational efficiency and power are balanced against understandability to the user/engineer. PMP allows a user to tap into the processing path and hear a sample of what is going on in an affective computation, and it provides a simpler way to interface with affective input/output systems. This requires the development of new approaches to processing with, and interfacing, PMP-based modules. In this chapter we introduce PMP and examine the approach using three examples: a military robot team simulation with an affective subsystem, a text affective-content estimation system, and a stock market tool.
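The abstract describes melodies as pulse streams that encode an affective state's arousal and valence. As a minimal sketch of that idea – assuming, purely for illustration, that arousal is read from pulse rate and valence from major- versus minor-mode pitch content (the chapter's actual encoding is not given here, and the `Melody` class and its thresholds are hypothetical):

```python
from dataclasses import dataclass
from typing import List, Tuple

# Pitch classes of the C major and C natural-minor scales; pitches unique
# to one mode are used as (assumed) positive/negative valence evidence.
MAJOR_DEGREES = {0, 2, 4, 5, 7, 9, 11}
MINOR_DEGREES = {0, 2, 3, 5, 7, 8, 10}

@dataclass
class Melody:
    """A PMP-style 'melody': a stream of (midi_pitch, onset_seconds) pulses."""
    pulses: List[Tuple[int, float]]

    def arousal(self, max_rate: float = 8.0) -> float:
        """Arousal in [0, 1], taken as pulse rate normalised by max_rate."""
        if len(self.pulses) < 2:
            return 0.0
        duration = self.pulses[-1][1] - self.pulses[0][1]
        if duration <= 0:
            return 1.0
        rate = (len(self.pulses) - 1) / duration  # pulses per second
        return min(rate / max_rate, 1.0)

    def valence(self) -> float:
        """Valence in [-1, 1]: positive for major-only pitch classes,
        negative for minor-only ones; shared pitch classes are neutral."""
        if not self.pulses:
            return 0.0
        score = 0
        for pitch, _ in self.pulses:
            pc = pitch % 12
            if pc in MAJOR_DEGREES and pc not in MINOR_DEGREES:
                score += 1
            elif pc in MINOR_DEGREES and pc not in MAJOR_DEGREES:
                score -= 1
        return score / len(self.pulses)

# A brisk major-arpeggio melody reads as moderately aroused, positive valence.
happy = Melody([(60, 0.0), (64, 0.25), (67, 0.5), (72, 0.75)])
```

A listener tapping this stream would literally hear the state: faster, major-sounding melodies signal high-arousal/positive affect, which is the transparency property the abstract motivates.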

Keywords

Virtual Machine · Affective State · News Story · Spiking Neural Network · Algorithmic Trading

Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  1. Interdisciplinary Centre for Computer Music Research, School of Humanities, Music and Performing Arts, Plymouth University, Plymouth, UK