Strong Concepts for Designing Non-verbal Interactions in Mixed Reality Narratives

  • Joshua A. Fisher
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10045)


As next-generation augmented reality (AR) devices, marketed by companies such as Microsoft as mixed reality (MR) headsets, enter the market, non-verbal interaction paradigms for storytelling should be designed to take advantage of their new technical affordances. Strong concepts, a model used in the HCI community to describe intermediate knowledge that sits between theory and practice for generating new interaction designs, provide a foundation for future work in MR grounded in the existing technical affordances and interaction behaviors of AR and virtual reality (VR). Strong concepts free interaction designers to explore potential interaction gestalts based on current behaviors and technologies. This framework is useful for speculating on effective non-verbal interactions in MR narratives while the platform is still being established. Interaction designers can use these foundational strong concepts as a starting point for developing a toolset of non-verbal interactions in future MR interactive digital stories.


Theoretical foundations · Mixed reality · Augmented reality · Interactive narratives · Non-verbal interactions · Design concepts



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Georgia Institute of Technology, Atlanta, USA
