Speech and Multimodal Interaction in Mobile GIS Search: A Case of Study

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNCS, volume 7236)

Abstract

In this short paper we present an Android mobile application that uses a multimodal interface. The application is built on a proprietary architecture that follows the W3C Multimodal Interaction Framework recommendation. Users can speak and perform natural gestures on the device's touch screen to formulate complex queries to a geo-referenced web service. The interaction produces a query from a semantically incomplete spoken sentence completed by a deictic gesture (e.g., "please, find all bus stops in this area", uttered while tapping or drawing a circle on the on-screen map).
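The fusion step the abstract describes, where the "what" comes from speech and the "where" from a touch gesture, can be sketched as follows. This is a minimal illustration, not the authors' implementation; all class, field, and method names (`SpokenIntent`, `GestureRegion`, `fuse`, and the sample coordinates) are hypothetical.

```java
public class FusionSketch {
    // Output of speech recognition + parsing: the "what", with a spatial gap.
    record SpokenIntent(String featureType) {}

    // Output of touch-gesture interpretation: the "where" (a circled map region).
    record GestureRegion(double centerLat, double centerLon, double radiusKm) {}

    // A completed query, ready to send to the geo-referenced web service.
    record GeoQuery(String featureType, double lat, double lon, double radiusKm) {}

    // Late fusion: merge the two partial inputs into one complete query.
    static GeoQuery fuse(SpokenIntent speech, GestureRegion gesture) {
        return new GeoQuery(speech.featureType(),
                gesture.centerLat(), gesture.centerLon(), gesture.radiusKm());
    }

    public static void main(String[] args) {
        // "please, find all bus stops..." -> incomplete spoken intent
        SpokenIntent speech = new SpokenIntent("bus_stop");
        // "...in this area" -> circle drawn on the map (illustrative coordinates)
        GestureRegion circle = new GestureRegion(40.85, 14.27, 0.5);
        GeoQuery q = fuse(speech, circle);
        System.out.println(q.featureType() + " within " + q.radiusKm()
                + " km of (" + q.lat() + ", " + q.lon() + ")");
        // prints: bus_stop within 0.5 km of (40.85, 14.27)
    }
}
```

In a real system each record would be produced asynchronously by its own recognizer, and the fusion engine would also have to align the two inputs in time; the sketch shows only the final merge.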

Keywords

  • mobile GIS
  • multimodal mobile interaction
  • mobile public transport GIS


References

  1. Larson, J.A., Raman, T.V., Raggett, D., Bodell, M., Johnston, M., Kumar, S., Potter, S., Waters, K.: W3C multimodal interaction framework. W3C Note (May 2003), http://www.w3.org/TR/mmi-framework/

  2. Coutaz, J., Nigay, L., Salber, D., Blandford, A., May, J., Young, R.M.: Four easy pieces for assessing the usability of multimodal interaction: The CARE properties. In: Proceedings of INTERACT 1995, Lillehammer, pp. 115–120 (June 1995)

  3. Bourguet, M.-L.: Designing and Prototyping Multimodal Commands. In: Human-Computer Interaction 2003, pp. 717–720 (2003)

  4. Dumas, B., Lalanne, D., Ingold, R.: Description languages for multimodal interaction: a set of guidelines and its illustration with SMUIML. Journal on Multimodal User Interfaces 3, 237–247 (2010)

  5. http://code.google.com/intl/it/android/add-ons/google-apis/reference/index.html

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cutugno, F., Leano, V.A., Mignini, G., Rinaldi, R. (2012). Speech and Multimodal Interaction in Mobile GIS Search: A Case of Study. In: Di Martino, S., Peron, A., Tezuka, T. (eds) Web and Wireless Geographical Information Systems. W2GIS 2012. Lecture Notes in Computer Science, vol 7236. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29247-7_3

  • DOI: https://doi.org/10.1007/978-3-642-29247-7_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29246-0

  • Online ISBN: 978-3-642-29247-7

  • eBook Packages: Computer Science (R0)