Non-representational Interaction Design

  • Marco Gillies
  • Andrea Kleinsmith
Part of the Studies in Applied Philosophy, Epistemology and Rational Ethics book series (SAPERE, volume 15)


This paper presents how non-representational views of cognition can inform interaction design as it moves from traditional graphical user interfaces to more bodily forms of interaction such as gesture or movement tracking. We argue that the true value of these “bodily” interfaces is that they can tap our prior skills for interacting in the world. However, these skills are highly non-representational and so traditional representational approaches to interaction design will fail to capture them effectively. We propose interactive machine learning as an alternative approach to interaction design that is able to capture non-representational sensori-motor couplings by allowing people to design by performing actions rather than by representing them. We present an example of this approach applied to designing interactions with video game characters.
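The abstract's core idea, designing by performing actions rather than representing them, can be sketched as a minimal interactive machine learning loop: the designer demonstrates a movement, labels it, and the system learns the sensori-motor mapping directly from the examples. The sketch below is illustrative only (the `GestureTrainer` class and its feature vectors are hypothetical, not the authors' implementation), using a simple 1-nearest-neighbour classifier over demonstrated examples.

```python
import math

# A minimal sketch of interactive machine learning for bodily interaction,
# assuming each performed gesture arrives as a fixed-length feature vector
# (e.g. joint angles from a movement tracker). No explicit representation
# of the gesture is ever authored: the mapping is learned from performances.

class GestureTrainer:
    def __init__(self):
        self.examples = []  # list of (feature_vector, label) pairs

    def demonstrate(self, features, label):
        """Record one performed example together with the designer's label."""
        self.examples.append((list(features), label))

    def classify(self, features):
        """Return the label of the nearest demonstrated example (1-NN)."""
        best_label, best_dist = None, float("inf")
        for vec, label in self.examples:
            dist = math.dist(vec, features)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

# Example: the designer performs two gestures, then tests a new movement.
trainer = GestureTrainer()
trainer.demonstrate([0.9, 0.1], "wave")
trainer.demonstrate([0.1, 0.8], "bow")
print(trainer.classify([0.85, 0.2]))  # → wave
```

In practice, systems of this kind (e.g. Fiebrink's Wekinator line of work) let the designer iterate rapidly: perform, label, test, and re-demonstrate until the interaction feels right, without ever writing an explicit rule for what the gesture "is".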


Keywords: interaction design, bodily interaction, interactive machine learning





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Embodied Audio-Visual Interaction, Department of Computing, Goldsmiths, University of London, London, UK
  2. Virtual Experiences Research Group, Department of Computer and Information Science and Engineering, University of Florida, Gainesville, USA
