Toward Theory-Based End-User Software Engineering

Abstract

One area of research in end-user development is known as end-user software engineering (EUSE). Research in EUSE aims to invent new kinds of technologies that collaborate with end users to improve the quality of their software. EUSE has been an active research area since its emergence in the early 2000s, producing a large body of literature upon which EUSE researchers can build. However, building upon these works can be difficult when projects lack connections to one another because no cross-cutting foundations tie them together. In this chapter, we advocate for stronger theory foundations and illustrate the advantages through three theory-oriented projects: (1) the Explanatory Debugging approach, which helps end users debug their intelligent assistants; (2) the GenderMag method, which identifies gender-inclusiveness problems in EUSE tools and other software; and (3) the Idea Garden approach, which helps end users help themselves in overcoming programming barriers. In each of these examples, we show how having a theoretical foundation facilitated generalizing beyond individual tools to general methods and principles that other researchers can draw upon directly in their own work.

Keywords

End-user software engineering, theory foundations, theory-oriented products, EUD research

Acknowledgements

We thank our students and collaborators who contributed to our work, all the participants of our empirical studies, and the reviewers for their helpful suggestions. Our work in developing this chapter was supported in part by the National Science Foundation under grants CNS-1240957, IIS-1314384, and IIS-1528061, and by the DARPA Explainable AI (XAI) program under grant DARPA-16-53-XAI-FP-043. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Oregon State University, Corvallis, United States
  2. Microsoft, Redmond, United States
  3. Configit, Copenhagen, Denmark
  4. comScore, Portland, United States
  5. GE, Manhattan, United States