[self.]: Realization / Art Installation / Artificial Intelligence: A Demonstration

  • Axel Tidemann
  • Øyvind Brandtsegg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9353)


This interactive installation paper describes [self.], an open source art installation whose auditory and visual vocabulary is determined by the people who interact with it. When the system starts it knows nothing: the authors deliberately designed it without any built-in bias. The robot is, however, equipped with biologically inspired models that allow it to learn from, and be creative with, what it has internalized. Physically, the robot consists of a moving head mounted with a camera, projector, microphone and speaker. As an art installation it has a clearly robotic appearance, yet it is designed to exhibit life-like behaviour. It begins in a "tabula rasa" state and forms categories and concepts as it learns through interaction, linking sounds, faces, video and their corresponding temporal information to form novel sentences. Using neural networks, the robot also projects its learned association between sound and image, providing an immediate, visual way of seeing how the internal representations come to capture a concept.
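The "tabula rasa" behaviour described above (no predefined categories; concepts emerge only from what is observed) can be illustrated with a minimal incremental clustering sketch. This is a hypothetical toy, not the installation's actual models: the class name, the Euclidean distance threshold, and the running-mean prototype update are all assumptions made for illustration.

```python
import math


class TabulaRasaLearner:
    """Starts with zero categories and forms them from experience alone:
    an observation joins the nearest existing category if it is close
    enough, otherwise it founds a new one. (Illustrative sketch only.)"""

    def __init__(self, threshold):
        self.threshold = threshold  # max distance to join an existing category
        self.prototypes = []        # one mean feature vector per category
        self.counts = []            # number of observations per category

    def observe(self, x):
        """Assign feature vector x to a category; return its index."""
        if self.prototypes:
            # Find the nearest existing category prototype.
            i, d = min(
                ((i, math.dist(x, p)) for i, p in enumerate(self.prototypes)),
                key=lambda pair: pair[1],
            )
            if d < self.threshold:
                # Close enough: fold x into the category's running mean.
                self.counts[i] += 1
                n = self.counts[i]
                self.prototypes[i] = [
                    p + (xj - p) / n for p, xj in zip(self.prototypes[i], x)
                ]
                return i
        # Nothing close enough (or nothing learned yet): new category.
        self.prototypes.append(list(x))
        self.counts.append(1)
        return len(self.prototypes) - 1


if __name__ == "__main__":
    learner = TabulaRasaLearner(threshold=0.5)
    print(learner.observe([0.0, 0.0]))  # first input founds category 0
    print(learner.observe([0.1, 0.0]))  # near category 0, so joins it
    print(learner.observe([5.0, 5.0]))  # far from everything: category 1
```

The same idea generalizes to the installation's multimodal setting: sound, face and video features each feed such emergent categories, and cross-modal links between co-occurring categories would form the learned sound-image associations.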


Keywords: artificial intelligence · robot · interaction · art





Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
  2. Department of Music, Norwegian University of Science and Technology, Trondheim, Norway
