Discovery and Registration: Finding and Integrating Components into Dynamic Systems


Abstract

One of the major gaps in the current HTML5 web platform is the lack of an interoperable means for a multimodal application to discover the services and applications available in a given space and network, for example, in a smart house with a network of connected objects. To address this gap, the Multimodal Interaction Working Group has produced a draft specification, based on distributed services, that aims to support the Discovery and Registration of multimodal components. In this approach, components are described and virtualized in a Resources Manager that communicates with them bidirectionally through dedicated events. To facilitate fine-grained management of concurrent multimodal interactions, the Resources Manager registers the distributed components and provides the Interaction Manager with the means to control them. In this way, multimodal applications can perform interoperable search, discovery, and selection of heterogeneous and dynamic features on the Web of Things, producing natural interaction and a semantically rich user experience.
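As a rough illustration of the flow described above, the following sketch shows how a Resources Manager might register virtualized modality components and let an Interaction Manager discover and control them through dedicated life-cycle events. This is a minimal, hypothetical Python model: the class names, fields, and event vocabulary (ResourcesManager, ModalityComponent, LifeCycleEvent, "StartRequest") are illustrative assumptions and do not reproduce the normative interfaces defined by the W3C specification.

```python
# Illustrative sketch only: simplified stand-ins for the registration and
# event-routing roles described in the abstract, not the W3C MMI vocabulary.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class LifeCycleEvent:
    """Simplified stand-in for an MMI-style life-cycle event."""
    name: str                              # e.g. "StartRequest"
    source: str                            # URI of the sender
    target: str                            # URI of the receiving component
    context: str                           # interaction context identifier
    data: Dict[str, str] = field(default_factory=dict)


@dataclass
class ModalityComponent:
    """A virtualized component, e.g. a speech recognizer or a connected lamp."""
    uri: str
    capabilities: List[str]
    handler: Callable[[LifeCycleEvent], None]


class ResourcesManager:
    """Registers components and routes events between them and the Interaction Manager."""

    def __init__(self) -> None:
        self._registry: Dict[str, ModalityComponent] = {}

    def register(self, component: ModalityComponent) -> None:
        """Record a component so it can later be discovered and controlled."""
        self._registry[component.uri] = component

    def discover(self, capability: str) -> List[ModalityComponent]:
        """Return every registered component advertising the requested capability."""
        return [c for c in self._registry.values() if capability in c.capabilities]

    def dispatch(self, event: LifeCycleEvent) -> None:
        """Forward a life-cycle event to the registered target component."""
        self._registry[event.target].handler(event)


if __name__ == "__main__":
    rm = ResourcesManager()
    rm.register(ModalityComponent(
        uri="urn:lamp:livingroom",
        capabilities=["light-control"],
        handler=lambda e: print(f"{e.target} received {e.name} in context {e.context}"),
    ))
    # The Interaction Manager asks the Resources Manager for matching components,
    # then controls the selected one through a dedicated event.
    for component in rm.discover("light-control"):
        rm.dispatch(LifeCycleEvent(
            name="StartRequest",
            source="urn:interaction-manager",
            target=component.uri,
            context="ctx-1",
        ))
```

The design point illustrated here is that the Resources Manager acts as the single registration and lookup point, so the Interaction Manager can select and address heterogeneous components without knowing how each one is connected to the network.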


Acknowledgments

The authors wish to thank the W3C MMI Working Group for its collaboration, with special thanks to Deborah Dahl and Kazuyuki Ashimura, and to Cyril Concolato, Jean Le Feuvre, and Jean-Claude Dufour for their very helpful insights and support during this work.


Copyright information

© Springer International Publishing Switzerland 2017

Authors and Affiliations

  1. W3C’s MMI Working Group, Editor for the Discovery and Registration Activity, Paris, France
  2. Institut Mines-Télécom, Télécom ParisTech, CNRS, Paris Cedex 13, France
