
Principles of Lifelong Learning for Predictive User Modeling

  • Ashish Kapoor
  • Eric Horvitz
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4511)

Abstract

Predictive user models often require a phase of effortful supervised training where cases are tagged with labels that represent the status of unobservable variables. We formulate and study principles of lifelong learning where training is ongoing over a prolonged period. In lifelong learning, decisions about extending a case library are made continuously by balancing the cost of acquiring values of hidden states with the long-term benefits of acquiring new labels. We highlight key principles by extending BusyBody, an application that learns to predict the cost of interrupting a user. We transform the prior BusyBody system into a lifelong learner and then review experiments that demonstrate the promise of the methods.
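
The tradeoff described in the abstract, deciding whether to ask the user for the value of a hidden state by weighing the cost of probing against the long-term benefit of the resulting label, can be illustrated with a toy decision gate. The Python sketch below is not the authors' method; the benefit proxy, the function names, and the numeric values are illustrative assumptions only.

    # Minimal sketch of a probe-or-not decision gate.
    # Illustrative assumptions only; not the paper's decision-theoretic machinery.

    def expected_label_benefit(model_uncertainty: float,
                               expected_future_cases: int,
                               misclassification_cost: float) -> float:
        """Rough proxy for the long-term value of acquiring a label now:
        higher model uncertainty and more future decisions affected imply
        a more valuable label."""
        return model_uncertainty * expected_future_cases * misclassification_cost

    def should_probe_user(model_uncertainty: float,
                          probe_cost: float,
                          expected_future_cases: int = 100,
                          misclassification_cost: float = 0.5) -> bool:
        """Ask the user for the hidden state (e.g., busy vs. not busy) only when
        the estimated long-term benefit of the label exceeds the immediate cost
        of interrupting the user to ask."""
        benefit = expected_label_benefit(model_uncertainty,
                                         expected_future_cases,
                                         misclassification_cost)
        return benefit > probe_cost

    if __name__ == "__main__":
        # Example: the model is highly uncertain about the current case,
        # so probing is worthwhile despite the cost of asking.
        print(should_probe_user(model_uncertainty=0.45, probe_cost=10.0))

In a lifelong-learning setting such a gate would be evaluated continuously as new cases arrive, so the case library grows only when a label is expected to pay for itself over future use of the model.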

Keywords

User Model, Lifelong Learning, Hidden State, Incoming Message, Case Library



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Ashish Kapoor (1)
  • Eric Horvitz (1)
  1. Microsoft Research, Redmond, WA 98052, USA
