Dealing with Comprehension and Bugs in Native and Cross-Platform Apps: A Controlled Experiment

  • Maria Caulo
  • Rita Francese
  • Giuseppe Scanniello
  • Antonio Spera
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11915)

Abstract

In this paper, we present the results of a controlled experiment that investigates whether there is a difference in comprehending apps implemented with either cross-platform (Ionic-Cordova-Angular) or native (Android) technologies. We divided the participants into two groups. The participants in each group were asked to comprehend the source code of either the app implemented with the Ionic-Cordova-Angular technology or its Android version. We also asked the participants to identify and fix faults in the source code. The goal was to verify whether the technology plays a role in the execution of these two kinds of tasks. We also investigated the participants' affective reactions and the difficulty they perceived when accomplishing these tasks. The most important take-away is that there is no statistically significant difference in comprehension, or in the identification and fixing of bugs, between native and cross-platform apps.
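Concretely, a between-groups comparison like the one described above is commonly analysed with a non-parametric test such as the Mann-Whitney U test. The sketch below is a minimal, standard-library-only illustration of that test; the function names and the comprehension scores are invented for the example and are not the authors' actual analysis or data.

```python
import math


def _ranks(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # positions are 0-based, ranks 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks


def mann_whitney_u(sample_a, sample_b):
    """U statistic (the smaller of U1, U2) for two independent samples."""
    ranks = _ranks(list(sample_a) + list(sample_b))
    r1 = sum(ranks[:len(sample_a)])
    u1 = r1 - len(sample_a) * (len(sample_a) + 1) / 2
    u2 = len(sample_a) * len(sample_b) - u1
    return min(u1, u2)


def p_value(u, n1, n2):
    """Two-sided p-value via the normal approximation (no tie correction)."""
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))


# Invented comprehension scores for the two groups (illustration only).
native_scores = [6, 7, 5, 8, 6, 7]
cross_scores = [7, 6, 6, 7, 5, 6]
u = mann_whitney_u(native_scores, cross_scores)
print("U =", u, "p =", round(p_value(u, len(native_scores), len(cross_scores)), 3))
```

A p-value above the chosen significance level (conventionally 0.05) would, as in the experiment's finding, give no grounds to reject the hypothesis that the two technologies yield the same task outcomes.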

Keywords

Android · Cross-platform · Ionic · Sentiment analysis

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Maria Caulo (1, corresponding author)
  • Rita Francese (2)
  • Giuseppe Scanniello (1)
  • Antonio Spera (2)

  1. University of Basilicata, Potenza, Italy
  2. University of Salerno, Fisciano, Italy