Personal Opinion Surveys

  • Barbara A. Kitchenham
  • Shari L. Pfleeger

Although surveys are an extremely common research method, survey-based research is not an easy option. In this chapter, we use three software engineering surveys to illustrate the advantages and pitfalls of the method. We discuss the six most important stages in survey-based research: setting the survey's objectives; selecting the most appropriate survey design; constructing the survey instrument (concentrating on self-administered questionnaires); assessing the reliability and validity of the instrument; administering the instrument; and, finally, analysing the collected data. This chapter provides only an introduction to survey-based research; readers should consult the referenced literature for more detailed advice.
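The instrument-reliability stage mentioned above is commonly assessed with Cronbach's alpha [3]. As an illustration only, here is a minimal sketch of the coefficient in plain Python; the function name and the questionnaire data are hypothetical, not taken from the chapter:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    `items` is a list of k lists, each holding one item's scores
    across the same n respondents. Alpha = (k/(k-1)) * (1 - sum of
    item variances / variance of respondents' total scores).
    """
    k = len(items)
    item_var_sum = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 4-item questionnaire answered by 5 respondents
# (each inner list is one item's scores on a 1-5 ordinal scale).
responses = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [3, 5, 4, 4, 1],
    [5, 4, 3, 4, 2],
]
alpha = cronbach_alpha(responses)
```

Values close to 1 indicate that the items measure the same underlying construct; a common rule of thumb treats alpha above roughly 0.7 as acceptable internal consistency.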

Keywords

Target population · Software engineering · Survey instrument · Ordinal scale · Cronbach alpha


References

  1. Bourque, L. and Fielder, E. How to Conduct Self-Administered and Mail Surveys, Sage Publications, Thousand Oaks, CA, 1995.
  2. Baruch, Y. Response rate in academic studies – a comparative analysis. Human Relations, 52(4), 1999, pp. 421–438.
  3. Cronbach, L.J. Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 1951, pp. 297–334.
  4. Dybå, T. An empirical investigation of the key factors for success in software process improvement. IEEE Transactions on Software Engineering, 31(5), 2005, pp. 410–424.
  5. Dybå, T. An instrument for measuring the key factors of success in software process improvement. Empirical Software Engineering, 5(4), 2000, pp. 357–390.
  6. El Emam, K., Goldenson, D., Briand, L., and Marshall, P. Interrater Agreement in SPICE-Based Assessments. Proceedings 4th International Software Metrics Conference, IEEE Computer Society Press, 1996, pp. 149–156.
  7. El Emam, K., Simon, J.-M., Rousseau, S., and Jacquet, E. Cost Implications of Interrater Agreement for Software Process Assessments. Proceedings 5th International Software Metrics Conference, IEEE Computer Society Press, 1998, pp. 38–51.
  8. Fowler, F.J. Jr. Survey Research Methods, Third Edition, Sage Publications, Thousand Oaks, CA, 2002.
  9. Fink, A. The Survey Handbook, Sage Publications, Thousand Oaks, CA, 1995.
  10. Humphrey, W. and Curtis, B. Comments on 'a critical look'. IEEE Software, 8(4), July 1991, pp. 42–46.
  11. Krosnick, J.A. Survey research. Annual Review of Psychology, 50, 1999, pp. 537–567.
  12. Lethbridge, T. A Survey of the Relevance of Computer Science and Software Engineering Education. Proceedings of the 11th International Conference on Software Engineering Education, IEEE Computer Society Press, 1998.
  13. Levy, P.S. and Lemeshow, S. Sampling of Populations: Methods and Applications, Third Edition, Wiley Series in Probability and Statistics, Wiley, New York, 1999.
  14. Lethbridge, T. What knowledge is important to a software professional? IEEE Computer, 33(5), 2000, pp. 44–50.
  15. Little, R.J.A. and Rubin, D.B. Statistical Analysis with Missing Data, Wiley, New York, 1987.
  16. Litwin, M. How to Measure Survey Reliability and Validity, Sage Publications, Thousand Oaks, CA, 1995.
  17. Moses, J. Bayesian probability distributions for assessing measurement of subjective software attributes. Information and Software Technology, 42(8), 2000, pp. 533–546.
  18. Moløkken-Østvold, K., Jørgensen, M., Tanilkan, S.S., Gallis, H., Lien, A., and Hove, S. A Survey on Software Estimation in the Norwegian Industry. Proceedings 10th International Symposium on Software Metrics (METRICS 2004), IEEE Computer Society, 2004, pp. 208–219.
  19. Ropponen, J. and Lyytinen, K. Components of software development risk: how to address them. A project manager survey. IEEE Transactions on Software Engineering, 26(2), 2000, pp. 98–112.
  20. Shadish, W.R., Cook, T.D., and Campbell, D.T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference, Houghton Mifflin Company, Boston, MA, 2002.
  21. Siegel, S. and Castellan, N.J. Nonparametric Statistics for the Behavioral Sciences, Second Edition, McGraw-Hill Book Company, New York, 1988.
  22. Spector, P.E. Summated Rating Scale Construction: An Introduction, Sage Publications, Thousand Oaks, CA, 1992.
  23. Standish Group. Chaos Chronicles, Version 3.0, West Yarmouth, MA, 2003.
  24. Straub, D.W. Validating instruments in MIS research. MIS Quarterly, 13(2), 1989, pp. 147–169.
  25. Zelkowitz, M.V., Wallace, D.R., and Binkley, D. Understanding the culture clash in software engineering technology transfer. University of Maryland technical report, 2 June 1998.

Copyright information

© Springer-Verlag London Limited 2008

Authors and Affiliations

  • Barbara A. Kitchenham (1)
  • Shari L. Pfleeger (2)
  1. School of Computing and Mathematics, Keele University, Staffordshire, UK
  2. Rand Corporation, Arlington, USA
