Ecological Interfaces: Extending the Pointing Paradigm by Visual Context

  • Conference paper
  • First Online:
Modeling and Using Context (CONTEXT 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1688)

Abstract

Following the ecological approach to visual perception, this paper presents an innovative framework for the design of multimodal systems. The proposal emphasises the role of visual context in gestural communication and extends the concept of affordances to explain the variability of referring gestures. The validity of the approach is supported by the results of a simulation experiment. Practical implications of our findings for software architecture design are discussed.

Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

De Angeli, A., Romary, L., Wolff, F. (1999). Ecological Interfaces: Extending the Pointing Paradigm by Visual Context. In: Bouquet, P., Benerecetti, M., Serafini, L., Brézillon, P., Castellani, F. (eds) Modeling and Using Context. CONTEXT 1999. Lecture Notes in Computer Science (LNAI), vol 1688. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48315-2_8

  • DOI: https://doi.org/10.1007/3-540-48315-2_8

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66432-1

  • Online ISBN: 978-3-540-48315-1

  • eBook Packages: Springer Book Archive
