User Interaction with User-Adaptive Information Filters

  • Henriette Cramer
  • Vanessa Evers
  • Maarten van Someren
  • Bob Wielinga
  • Sam Besselink
  • Lloyd Rutledge
  • Natalia Stash
  • Lora Aroyo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4560)

Abstract

User-adaptive information filters can be a tool for delivering the right information to the right person at the right time, a capability critical in crisis management. This paper explores interaction issues that need to be taken into account when designing a user-adaptive information filter. Two case studies illustrate which factors affect trust in and acceptance of user-adaptive filters, as a starting point for further research. The first study deals with user interaction with user-adaptive spam filters. The second explores the user experience of an art recommender system, focusing on transparency. It appears that while participants appreciate filter functionality, they do not accept fully automated filtering. Transparency appears to be a promising way to increase trust and acceptance, but implementing it successfully is challenging. Additional observations indicate that careful design of the training mechanisms and the interface will be crucial to successful filter implementation.
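
For illustration only (this sketch is not the system evaluated in the paper), a user-adaptive filter of the kind studied here can be reduced to two ingredients: a classifier that keeps retraining on explicit user feedback, and a transparency mechanism that explains its decisions. The minimal Python sketch below shows a naive Bayes text filter with both; all names (`AdaptiveFilter`, `train`, `classify`, `explain`) are invented for this example.

```python
import math
from collections import Counter

class AdaptiveFilter:
    """Minimal naive Bayes spam filter that adapts to user feedback
    and can report which words drove a decision (transparency)."""

    LABELS = ("spam", "ham")

    def __init__(self):
        self.word_counts = {lbl: Counter() for lbl in self.LABELS}
        self.doc_counts = {lbl: 0 for lbl in self.LABELS}

    def _tokens(self, text):
        return text.lower().split()

    def train(self, text, label):
        # Called both for initial training and for later user corrections,
        # so the filter keeps adapting as feedback arrives.
        self.doc_counts[label] += 1
        self.word_counts[label].update(self._tokens(text))

    def _log_posterior(self, tokens, label):
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        total = sum(self.doc_counts.values())
        prior = math.log((self.doc_counts[label] + 1) / (total + 2))  # Laplace-smoothed
        denom = sum(self.word_counts[label].values()) + len(vocab) + 1
        return prior + sum(
            math.log((self.word_counts[label][t] + 1) / denom) for t in tokens
        )

    def classify(self, text):
        tokens = self._tokens(text)
        return max(self.LABELS, key=lambda lbl: self._log_posterior(tokens, lbl))

    def explain(self, text, top_n=3):
        # A crude transparency mechanism: the words whose spam/ham
        # log-odds push the decision hardest in either direction.
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        denom = {lbl: sum(self.word_counts[lbl].values()) + len(vocab) + 1
                 for lbl in self.LABELS}

        def log_odds(t):
            p_spam = (self.word_counts["spam"][t] + 1) / denom["spam"]
            p_ham = (self.word_counts["ham"][t] + 1) / denom["ham"]
            return math.log(p_spam / p_ham)

        scored = [(t, log_odds(t)) for t in set(self._tokens(text))]
        return sorted(scored, key=lambda kv: abs(kv[1]), reverse=True)[:top_n]

if __name__ == "__main__":
    f = AdaptiveFilter()
    f.train("cheap pills buy now", "spam")
    f.train("meeting agenda attached", "ham")
    print(f.classify("buy cheap pills now"))   # -> spam
    print(f.explain("buy cheap pills now"))    # words behind the verdict
    # A user correction is just another training step: the filter adapts.
    f.train("buy tickets for the meeting", "ham")
```

Listing the top-weighted words, as `explain` does here, is among the simplest possible transparency mechanisms; the case studies suggest that even such explanations must be designed carefully to actually increase trust.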

Keywords

User-adaptive systems · Information filtering · Transparency · Trust · Acceptance · Recommenders

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Henriette Cramer (1)
  • Vanessa Evers (1)
  • Maarten van Someren (1)
  • Bob Wielinga (1)
  • Sam Besselink (1)
  • Lloyd Rutledge (2)
  • Natalia Stash (3)
  • Lora Aroyo (4)

  1. University of Amsterdam, Human Computer Studies Lab, Kruislaan 419, 1089 VA Amsterdam, The Netherlands
  2. Telematica Instituut, P.O. Box 589, Enschede, The Netherlands
  3. Vrije Universiteit Amsterdam, De Boelelaan 1083a, Amsterdam, The Netherlands
  4. Technische Universiteit Eindhoven, P.O. Box 513, Eindhoven, The Netherlands
