
Smart Measurements and Analysis for Software Quality Enhancement

  • Sarah Dahab
  • Stephane Maag
  • Wissam Mallouli
  • Ana Cavalli
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1077)

Abstract

Requests to improve the quality of software are increasing due to competition in the software industry and the complexity of software development, which now integrates multiple technology domains (e.g., IoT, Big Data, Cloud, Artificial Intelligence, Security Technologies). Measurement collection and analysis is a key activity for assessing software quality throughout the development life cycle. To optimize this activity, our main idea is to periodically select the relevant measures to be executed (among a set of possible measures) and to automate their analysis using dedicated tools. The proposed solution is integrated into a complete PaaS platform called MEASURE. The tools supporting this activity are the Software Metric Suggester, which recommends metrics of interest according to several software development constraints and relies on artificial intelligence, and the MINT tool, which correlates collected measurements and provides near real-time recommendations to software development stakeholders (e.g., DevOps team, project manager, human resources manager) to improve the quality of the development process. To illustrate the efficiency of both tools, we created different scenarios on which both approaches are applied. The results show that the two tools are complementary and can be used to improve the software development process and thus the quality of the final software.
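As a rough illustration of the correlation-based analysis described above (a minimal sketch, not the MINT implementation), the snippet below computes pairwise Pearson correlations over a few invented metric series and prints a simple recommendation when a pair is strongly correlated; the metric names, sample values, and threshold are all assumptions made for the example.

    # Illustrative sketch only: flag strongly correlated metric pairs in a
    # batch of collected measurements and emit a simple recommendation.
    # The metric names, values, and threshold below are hypothetical.
    from itertools import combinations
    from statistics import correlation  # Pearson correlation, Python 3.10+

    measurements = {
        "code_complexity":    [12, 15, 14, 19, 22, 25],
        "test_failures":      [1, 2, 2, 4, 5, 6],
        "build_duration_min": [7, 8, 8, 9, 11, 12],
    }

    THRESHOLD = 0.9  # arbitrary cut-off for "strongly correlated"

    for (name_a, xs), (name_b, ys) in combinations(measurements.items(), 2):
        r = correlation(xs, ys)
        if abs(r) >= THRESHOLD:
            print(f"{name_a} and {name_b} are strongly correlated (r = {r:.2f}); "
                  f"consider reviewing them together in the measurement plan.")

In the actual platform, such correlation results would feed near real-time recommendations to the development stakeholders rather than being printed to a console.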

Keywords

Software engineering · DevOps team · Metrics combination · Metrics reuse · Metrics suggestion · Metrics correlation · Software quality

Notes

Acknowledgment

This work is partially funded by the ongoing European project ITEA3-MEASURE, started on 1 December 2015, and the EU HubLinked project, started on 1 January 2017.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. SAMOVAR, Telecom SudParis, Université Paris-Saclay, Saint-Aubin, France
  2. Montimage Research and Development, Paris, France
