
Reinforcement Learning of Context Models for a Ubiquitous Personal Assistant

  • Conference paper

Part of the book series: Advances in Soft Computing (AINSC, volume 51)

Summary

Ubiquitous environments may become a reality in the foreseeable future, and research aims to make them increasingly adapted and comfortable for users. Our work applies reinforcement learning techniques to adapt the services provided by a ubiquitous assistant to its user. The learning produces a context model that associates actions with perceived situations of the user. These associations are based on feedback given by the user as a reaction to the assistant's behavior. Our method addresses some of the problems encountered when applying reinforcement learning to systems with the user in the loop. For instance, the system's behavior is completely incoherent at the beginning and needs time to converge, and users are not willing to wait that long to train the system. Moreover, the user's habits may change over time, and the assistant needs to integrate these changes quickly. We study methods to accelerate the reinforcement learning process.
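To make the approach concrete, the sketch below shows how such a context model could be learned with tabular Q-learning: states are perceived user situations, actions are assistant behaviors, and the reward is explicit user feedback. This is a minimal illustration under assumptions of our own; the context names, action names, reward values, and the simulated feedback function are hypothetical, and the paper's actual state representation and acceleration techniques are not reproduced here.

```python
# Minimal Q-learning sketch of a context-aware assistant (illustrative only).
# Contexts, actions, rewards, and the simulated user are hypothetical.
import random
from collections import defaultdict

CONTEXTS = ["alone_in_office", "in_meeting", "on_the_move"]  # perceived situations (assumed)
ACTIONS = ["forward_call", "send_to_voicemail", "show_notification", "do_nothing"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

q = defaultdict(float)  # Q(context, action), initialised to 0


def choose_action(context):
    """Epsilon-greedy selection: mostly exploit the learned model, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(context, a)])


def update(context, action, reward, next_context):
    """Standard Q-learning update driven by explicit user feedback."""
    best_next = max(q[(next_context, a)] for a in ACTIONS)
    q[(context, action)] += ALPHA * (reward + GAMMA * best_next - q[(context, action)])


def simulated_user_feedback(context, action):
    """Stand-in for real user reactions: +1 if accepted, -1 if rejected (assumption)."""
    preferred = {"alone_in_office": "forward_call",
                 "in_meeting": "send_to_voicemail",
                 "on_the_move": "show_notification"}
    return 1.0 if action == preferred[context] else -1.0


# Interaction loop: the assistant acts, the user reacts, the context model adapts.
for step in range(500):
    context = random.choice(CONTEXTS)          # current perceived situation
    action = choose_action(context)
    reward = simulated_user_feedback(context, action)
    update(context, action, reward, context)   # next context assumed unchanged here

for c in CONTEXTS:
    print(c, "->", max(ACTIONS, key=lambda a: q[(c, a)]))
```

After a few hundred simulated interactions the greedy policy typically matches the simulated preferences; in the real assistant, each update would be triggered by an actual user reaction rather than by a simulator, which is why the paper studies ways to accelerate convergence.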





Editor information

Juan M. Corchado, Dante I. Tapia, José Bravo


Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zaidenberg, S., Reignier, P., Crowley, J.L. (2009). Reinforcement Learning of Context Models for a Ubiquitous Personal Assistant. In: Corchado, J.M., Tapia, D.I., Bravo, J. (eds) 3rd Symposium of Ubiquitous Computing and Ambient Intelligence 2008. Advances in Soft Computing, vol 51. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85867-6_30


  • DOI: https://doi.org/10.1007/978-3-540-85867-6_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-85866-9

  • Online ISBN: 978-3-540-85867-6

  • eBook Packages: Engineering, Engineering (R0)
