Are web applications more defect-prone than desktop applications?

  • Marco Torchiano
  • Filippo Ricca
  • Alessandro Marchetto
WSE 2009

Abstract

Considerable effort in the literature has been devoted to defining and validating fault taxonomies and models for different domains (e.g., service-oriented and Web systems) and properties (e.g., software quality and security). Nevertheless, few attempts have been made to understand the specific nature of Web bugs and their distribution among the layers of a typical application architecture: presentation, business logic, and data logic. In this paper, we present an experimental investigation of the distribution of bugs among the different layers of Web and desktop applications. The experiment follows a well-defined procedure executed by six bachelor students. Overall, the analysis considers 1,472 bugs belonging to 20 different applications. The study provides strong evidence that the presentation layer in Web applications is more defect-prone than the analogous layer in desktop applications. The application domain is an additional factor influencing the distribution of defects.

Keywords

Web and desktop applications · Defect location · Distribution of defects · Empirical evaluation · Well-defined experimental procedure

Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  • Marco Torchiano (1)
  • Filippo Ricca (2)
  • Alessandro Marchetto (3)
  1. Politecnico di Torino, Turin, Italy
  2. DISI, Università di Genova, Genoa, Italy
  3. Fondazione Bruno Kessler–IRST, Trento, Italy