An Exploration of Code Quality in FOSS Projects

  • Iftekhar Ahmed
  • Soroush Ghorashi
  • Carlos Jensen
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 427)

Abstract

It is a widely held belief that Free/Open Source Software (FOSS) development produces software of the same, if not higher, quality than that created using proprietary development models. However, there is little research evaluating the quality of FOSS code or the impact of project characteristics such as age, number of core developers, and code-base size. In this exploratory study, we examined 110 FOSS projects, measuring the quality of their code and architectural design using code smells. We found that, contrary to our expectations, the overall quality of the code is not affected by the size of the code base, but that it is negatively impacted by growth in the number of code contributors. Our results also show that projects with more core developers do not necessarily have better code quality.
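The abstract refers to measuring code and design quality via code smells. As a rough illustration of how metrics-based smell detection works in general, the sketch below flags two classic smells ("God Class" and "Long Method") using simple size thresholds. This is a hypothetical example, not the paper's actual tooling; the thresholds, class names, and metrics are illustrative assumptions.

```python
# Minimal, illustrative metrics-based code-smell detector.
# Thresholds below are assumptions for demonstration only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Method:
    name: str
    lines_of_code: int

@dataclass
class ClassMetrics:
    name: str
    methods: List[Method] = field(default_factory=list)

    @property
    def lines_of_code(self) -> int:
        # Total size of the class, approximated as the sum of its methods.
        return sum(m.lines_of_code for m in self.methods)

def detect_smells(cls: ClassMetrics,
                  long_method_loc: int = 50,
                  god_class_loc: int = 500,
                  god_class_methods: int = 30) -> List[str]:
    """Return labels for the smells found in a class (threshold-based)."""
    smells = []
    # God Class: a class that is far larger than its peers.
    if cls.lines_of_code > god_class_loc or len(cls.methods) > god_class_methods:
        smells.append(f"GodClass:{cls.name}")
    # Long Method: a single method doing too much.
    for m in cls.methods:
        if m.lines_of_code > long_method_loc:
            smells.append(f"LongMethod:{cls.name}.{m.name}")
    return smells

if __name__ == "__main__":
    big = ClassMetrics(
        "OrderManager",
        [Method("process", 120)] + [Method(f"m{i}", 20) for i in range(25)],
    )
    print(detect_smells(big))
```

Real smell detectors (e.g., the tools surveyed in the related literature) use richer metrics such as cohesion and coupling, but the structure is the same: compute per-entity metrics, then compare them against thresholds.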

Keywords

Code Quality • Success Metrics • FOSS • Open Source Software



Copyright information

© IFIP International Federation for Information Processing 2014

Authors and Affiliations

  • Iftekhar Ahmed (1)
  • Soroush Ghorashi (1)
  • Carlos Jensen (1)
  1. School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, USA
