How to Build a Recommendation System for Software Engineering

  • Sebastian Proksch
  • Veronika Bauer
  • Gail C. Murphy
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8987)


Software developers must interact with large amounts of different types of information and perform many different activities to build a software system. To ease the finding of information and to streamline workflows, there has been growing interest in building recommenders intended to help software developers work more effectively. Building an effective recommender requires a deep understanding of the problem the recommender targets, an analysis of the different aspects of the approach taken to produce recommendations, and the design and evaluation of the mechanisms used to present recommendations to a developer. In this chapter, we outline the steps that must be taken to develop an effective recommender system to aid software development.


Keywords: Source Code · Software Engineering · Recommender System · User Study · Association Rule Mining
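The keywords above name association rule mining as one technique for producing recommendations. As an illustration only (not the chapter's own implementation), the sketch below mines single-antecedent rules from a toy corpus of method-call sets and uses them to recommend calls for a given context; the function names (`mine_rules`, `recommend`) and the example data are hypothetical:

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=2, min_confidence=0.5):
    """Mine simple association rules of the form (a -> b) from call sets.

    transactions: list of method-call lists, one per code example.
    Returns a dict mapping a call to [(recommended call, confidence), ...].
    """
    pair_counts = Counter()   # directed co-occurrence counts (a, b)
    item_counts = Counter()   # how many transactions contain each call
    for calls in transactions:
        unique = set(calls)
        item_counts.update(unique)
        for a, b in combinations(sorted(unique), 2):
            pair_counts[(a, b)] += 1  # count both directions so rules
            pair_counts[(b, a)] += 1  # can fire from either call
    rules = {}
    for (a, b), support in pair_counts.items():
        if support >= min_support:
            confidence = support / item_counts[a]
            if confidence >= min_confidence:
                rules.setdefault(a, []).append((b, confidence))
    return rules

def recommend(rules, context):
    """Rank candidate calls absent from the context by best confidence."""
    scores = {}
    for call in context:
        for candidate, conf in rules.get(call, []):
            if candidate not in context:
                scores[candidate] = max(scores.get(candidate, 0.0), conf)
    return sorted(scores, key=scores.get, reverse=True)

rules = mine_rules([
    ["open", "read", "close"],
    ["open", "read", "close"],
    ["open", "write", "close"],
])
print(recommend(rules, ["open"]))  # close is recommended before read
```

A production recommender would of course mine far larger corpora, restrict antecedents to a meaningful context (e.g., the enclosing type), and prune rules more aggressively, but the support/confidence core is the same.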



We thank the organizers of the LASER 2014 Summer School for an invigorating week of discussion, which has carried through the writing of this chapter. We also thank the anonymous reviewer for valuable comments and suggestions that improved the quality of this chapter.

The work presented in this chapter was partially funded by NSERC and by the German Federal Ministry of Education and Research (BMBF) within the Software Campus projects KaVE (grant no. 01IS12054) and IndRe (grant no. 01IS12057).



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Sebastian Proksch, TU Darmstadt, Darmstadt, Germany
  • Veronika Bauer, TU München, Munich, Germany
  • Gail C. Murphy, UBC, Vancouver, Canada
