Testing in the Wild: The Social and Organisational Dimensions of Real World Practice

Article

Abstract

Testing is a key part of any systems engineering project. There is an extensive literature on testing, but very little that focuses on how testing is carried out in real-world circumstances. This is partly because current practices are often seen as unsophisticated and ineffective. We believe that by investigating and characterising the real-world work of testing we can help question why such ‘bad practices’ occur and how improvements might be made. We also argue that the testing literature is too focused on technological issues when many of the problems, and indeed strengths, have as much to do with work and organisation. In this paper we use empirical examples from four systems engineering projects to demonstrate how and in what ways testing is a cooperative activity. In particular we demonstrate the ways in which testing is situated within organisational work and satisfices organisational and marketplace demands.

Keywords

dependability · ethnography · ethnomethodology · organisational issues · software development · systems testing · work practices

Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  • John Rooksby — School of Computer Science, University of St Andrews, St Andrews, UK
  • Mark Rouncefield — Computing Department, Lancaster University, Lancaster, UK
  • Ian Sommerville — School of Computer Science, University of St Andrews, St Andrews, UK