Recommendation Delivery

Getting the User Interface Just Right
  • Emerson Murphy-Hill
  • Gail C. Murphy
Chapter

Abstract

Generating a useful recommendation is only the first step in creating a recommendation system. For the system to have value, the recommendations must be delivered through a user interface that allows the user to become aware that recommendations are available, to determine whether any of the recommendations have value for them, and to act upon a recommendation. By synthesizing previous results from general recommendation system research and from software engineering recommendation system research, we discuss the factors that affect whether a user considers and accepts recommendations generated by a system. These factors include the ease with which a recommendation can be understood and the level of trust a user assigns to a recommendation. In this chapter, we describe these factors and the opportunities for future research toward getting the user interface of a recommendation system just right.

Keywords

User Interface · Recommendation System · Textual Description · Cognitive Effort · Heuristic Evaluation


Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. North Carolina State University, Raleigh, USA
  2. University of British Columbia, Vancouver, Canada
