Examples of Measures in Measurement Systems

  • Miroslaw Staron
  • Wilhelm Meding
Chapter

Abstract

Never in history have we collected so much data as we have today; we have even coined an expression for this: “big data.” Never have we measured and analyzed data as much as we do today. Data is easy to collect and store: statistical methods and tools, business intelligence (BI) tools, and machine learning, together with cheap data storage and processing, make this possible. Everybody (well, almost) claims to be an expert in measuring. What we see, though, is evidence to the contrary. Companies and organizations are drowning in data and measures, while at the same time those measures are incomplete, misused, or not trusted. If there is one question we have heard over and over again, it is “What should we measure?” It is a question asked by everyone, regardless of title, role, and position in the organization’s hierarchy. In this chapter, we present a number of measures, how they “came to be,” and how to develop and visualize them. We also present a structured way to categorize measures into five measurement areas.
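
For readers who want a concrete picture of what such a measure looks like inside a measurement system, the sketch below shows the base-measure → derived-measure → indicator chain in the spirit of the ISO/IEC 15939 measurement information model, using Python. It is a minimal illustration only; the measure (weekly code churn), the sample data, and the green/yellow/red thresholds are assumptions made for this example, not values taken from the chapter.

    # A minimal measurement-system sketch: base measures -> derived
    # measure -> indicator with decision criteria. All names, sample
    # data, and thresholds below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class WeeklyChurn:
        """Base measures collected per week: lines added and deleted."""
        week: str
        lines_added: int
        lines_deleted: int

    def code_churn(sample: WeeklyChurn) -> int:
        """Derived measure: total lines changed in one week."""
        return sample.lines_added + sample.lines_deleted

    def churn_indicator(churn: int, yellow: int = 500, red: int = 1500) -> str:
        """Indicator: maps the derived measure onto decision criteria.
        The thresholds are hypothetical and would be calibrated per product."""
        if churn >= red:
            return "red"
        if churn >= yellow:
            return "yellow"
        return "green"

    history = [
        WeeklyChurn("2018-W01", 320, 110),
        WeeklyChurn("2018-W02", 900, 700),
        WeeklyChurn("2018-W03", 150, 40),
    ]
    for sample in history:
        churn = code_churn(sample)
        print(f"{sample.week}: churn={churn} -> {churn_indicator(churn)}")

The point of the chain is that stakeholders act on the indicator (the color), not on the raw numbers; the decision criteria encode the analysis so it is done consistently every time the measure is updated.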

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Miroslaw Staron, Department of Computer Science and Engineering, University of Gothenburg, Gothenburg, Sweden
  • Wilhelm Meding, Ericsson AB, Gothenburg, Sweden
