Abstract
Measuring software process attributes early increases the chances of the software being cost-effective and energy-efficient. However, some crucial metrics are accessible only during the later stages. Therefore, a set of measurements spanning the whole SDLC should be considered to evaluate the attributes of the software development process and lead the project to success. This chapter presents a division of SDLC phases into early and late ones, different software quality evaluation methodologies, and a corresponding set of measurements.
The Software Development Life Cycle (SDLC) is a framework that covers all the stages of software development, including requirements elicitation, development, testing, and maintenance (Ruparelia 2010). Nowadays, for a software system, being sustainable is not enough. For example, the number of applications for mobile devices is constantly growing: in the first quarter of 2022, Android users had about 3.48 million applications to choose from. With such a great variety of applications, sophisticated users expect a longer battery lifespan, which leads us to the area of energy-efficient computing.

However, the quality and reliability of the software, as well as the energy efficiency of the system being developed, depend on how software process metrics are defined and operated throughout the SDLC. In turn, the efficiency of metrics investigation depends on the software process phases: measuring software process quality early increases the chances of the software being cost-effective and energy-efficient and of meeting the schedule and the budget (Sultan et al. 2008). However, some crucial metrics, such as module complexity or the energy consumed on different devices, are accessible only during the later stages. Therefore, a set of measurements spanning the whole SDLC should be considered to evaluate the quality and efficiency of the software development process and lead the project to success (Ergasheva et al. 2020).

The definition of early and late phases varies from one company to another depending on the methodologies used. Consequently, company preferences influence the choice of tools or approaches for assessing and evaluating the quality of the software process or the energy efficiency of the final product, and the set of measurements to collect varies as well. This chapter demonstrates the division of SDLC phases into early and late ones, different software quality evaluation methodologies, and a set of measurements.
2.1 Early-Phase Metrics
To monitor project development, one needs to measure software process attributes as early as possible. However, before identifying the early-phase metrics, one should define what the early phases are. From an analysis of the literature, we derived a general set of phases and metrics to analyze software quality; these can be metrics of code, design, or the whole system (Ergasheva et al. 2020). Most studies consider requirements, design, and occasionally code as the early phases of the software life cycle (Ergasheva et al. 2020). Davis, however, included User Needs Analysis, Definition of the Solution Space, External Behavior Definition, and Preliminary Design in the early stage of the SDLC (Davis 1988). The Requirements Analysis and Definition phase includes feasibility study, requirements elicitation, analysis, validation, and documentation. Moreover, the researchers established the set of stages that the SDLC early phases should go through:
- Initial planning phase—constructing the technical and economic basis for the project
- Analysis—defining the requirements for the software configuration
- Design—mapping the requirements to the software components
As discussed earlier, late defect detection increases the chances of not meeting budget and time expectations. The cost of removing defects grows with time (Phillips et al. 2018): the earlier we detect the errors, the lower the financial and time expenses of the project. For example, errors found in the Acceptance phase are 4–15 times more costly to remove than those found in the Design phase, while errors found in the Maintenance phase are 1000 times more costly than those found in the Requirements phase (Phillips et al. 2018).
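The cost multipliers quoted above can be turned into a small illustrative model. Only the relative factors come from Phillips et al. (2018); the base cost, the design-phase factor, and the choice of 10× as a midpoint of the 4–15× range are our assumptions for the sketch:

```python
# An illustrative sketch of the relative defect-removal costs quoted above.
# Only the multipliers come from Phillips et al. (2018); the base cost, the
# design-phase factor, and the 10x midpoint of the 4-15x range are assumed.

DESIGN_FACTOR = 5  # assumed: design vs. requirements ratio is not given in the text
PHASE_COST_FACTOR = {
    "requirements": 1,                 # baseline: cheapest place to remove a defect
    "design": DESIGN_FACTOR,
    "acceptance": DESIGN_FACTOR * 10,  # 4-15x the design cost; midpoint taken
    "maintenance": 1000,               # 1000x the requirements cost
}

def removal_cost(phase: str, base_cost: float = 100.0) -> float:
    """Estimated cost of removing one defect discovered in `phase`."""
    return base_cost * PHASE_COST_FACTOR[phase]

for phase, factor in PHASE_COST_FACTOR.items():
    print(f"{phase:>12}: factor {factor:>5} -> {removal_cost(phase):>9.0f}")
```

Even with these assumed intermediate factors, the model makes the central point visible: a defect that slips from requirements to maintenance becomes three orders of magnitude more expensive to remove.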
Researchers have noticed that 70% of defects are injected during the early phases, while 30% are injected in the late phases (Phillips et al. 2018). Several studies investigate approaches to assess or evaluate software quality in the early phases. For example, Aversano et al. investigated the quality of documentation (Aversano et al. 2017). They highlight that documentation consists of documents of different kinds, including code comments; therefore, considering the design and code phases among the early stages is also important. Many researchers have empirically shown the usefulness of metrics collected in the Design or Code phase for defect prediction (Bharathi et al. 2015; Kumar et al. 2017). For example, Basili et al. showed that five out of the six Chidamber and Kemerer object-oriented metrics are effective predictors of class fault-proneness (Basili et al. 1996). Similarly, Kumar et al. suggest using complexity, coupling, and cohesion (CCC) metrics for defect identification (Kumar et al. 2017). Moreover, the Requirements phase highly influences user interest (Davis 1988). However, the activities conducted during the early phase depend on the quality assessment and evaluation methods chosen by a company. Many studies focus on deriving metrics from the following viewpoints (Ergasheva et al. 2020):
- Module complexity
- Module maintainability
- Module functionality
Tables 2.1 and 2.2 show the metrics for the Design and Requirements phases derived from a systematic literature review (Ergasheva et al. 2020). Of the studies found through this SLR, 75.7% considered the Requirements and Design phases as early phases of the software development process. The base metrics are the following: Chidamber and Kemerer’s object-oriented metrics, cyclomatic complexity, Lines of Code, and the Halstead complexity metrics. However, SDLC software metrics are usually uncertain; the uncertainty, vagueness, and imprecision in software metrics can be captured by fuzzy set theory (Yadav et al. 2013).
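As an illustration of one of these base metrics, cyclomatic complexity can be approximated by counting the decision points in a function and adding one. The sketch below does this for Python code using the standard `ast` module; it is deliberately simplified compared to dedicated tools such as radon or lizard, which handle more constructs:

```python
# A minimal sketch of McCabe's cyclomatic complexity for Python source:
# count decision points in the AST and add 1. Simplified on purpose; real
# metric tools cover more node types (comprehension ifs, match arms, etc.).
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Complexity = number of decision points + 1."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, BRANCH_NODES):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # each extra operand of `and`/`or` adds one more branch
            decisions += len(node.values) - 1
    return decisions + 1

example = """
def classify(x):
    if x < 0 and x != -1:
        return "negative"
    for i in range(3):
        if i == x:
            return "small"
    return "other"
"""
print(cyclomatic_complexity(example))  # 5: two ifs, one for, one `and`, plus 1
```

A straight-line function with no branches scores the minimum of 1; each additional branch adds one independent path that a test suite would need to cover.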
2.2 Late-Phase Metrics
Almost all the studies investigating the whole SDLC divide the late phase into Development, Testing, and Deployment.
There are two different approaches to quality evaluation in the development phase: static analysis and dynamic analysis. Static analysis examines the code without executing it, usually with the help of dedicated tools, while dynamic analysis observes the behavior of the software at runtime. Software fault prediction helps to improve the quality of software during the development phase (Kumar et al. 2016). However, after every change one needs to run different kinds of tests to be sure that the integration was successful, and rerunning all tests each time increases the time, cost, and resources spent on a project. Some researchers suggested approaches for prioritizing and selecting test cases based on relevant data from experiments or using specialized algorithms (Bajaj et al. 2019; Silva et al. 2016), while another study introduced a framework for evaluating quality metrics of the test execution and test review phases (Machado et al. 2016).
Development phase metrics are the most popular measurements defined in existing publications. For example, Hota et al. (2019) analyzed a set of source code metrics for aging-related bug prediction and showed that coupling metrics achieve better predictive performance than size, cohesion, and inheritance metrics. Nevertheless, it is also important to investigate test metrics when considering software quality. The Test phase metrics are based on the following test case prioritization steps (Silva et al. 2016):
- Inferring the relevance of classes, which requires the following set of metrics:
  - Features relevance
  - Correlations among features and classes
  - Class relevance
- Calculating class criticality from:
  - Coupling
  - Complexity
  - Relevance
- Computing test criticality
Although the Deployment phase is a critical stage of the SDLC nowadays, only a few publications are dedicated to Deployment phase metrics. Tables 2.3, 2.4, and 2.5 show the metrics related to the Development, Testing, and Deployment phases.
The metrics presented in Tables 2.3, 2.4, and 2.5 can be tracked with DevOps tools. One of the most popular DevOps tools nowadays is SonarQube (Guaman et al. 2017). Several researchers investigated existing tools like SonarQube for assessing metrics in the code (Guaman et al. 2017). They evaluated technical debt as an indicator of quality attributes such as security, changeability, reliability, and testability. SonarQube can help developers understand how to avoid increasing technical debt.
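For instance, the technical debt indicator mentioned above (SonarQube's `sqale_index`) can be read programmatically through SonarQube's Web API. In the sketch below, the server URL, project key, and metric list are placeholders, and the authentication scheme assumes a standard user token:

```python
# A sketch of reading quality metrics from SonarQube's Web API endpoint
# /api/measures/component. Server URL, project key, and metric keys are
# placeholders; a user token is passed as the Basic-auth username.
import base64
import json
import urllib.request

def parse_measures(payload: dict) -> dict:
    """Flatten an /api/measures/component response into {metric: value}."""
    return {m["metric"]: float(m["value"])
            for m in payload["component"]["measures"]}

def fetch_measures(base_url: str, component: str, metric_keys, token: str = ""):
    """Query a SonarQube server for the given metrics of one project."""
    url = (f"{base_url}/api/measures/component"
           f"?component={component}&metricKeys={','.join(metric_keys)}")
    req = urllib.request.Request(url)
    if token:
        cred = base64.b64encode(f"{token}:".encode()).decode()
        req.add_header("Authorization", f"Basic {cred}")
    with urllib.request.urlopen(req) as resp:
        return parse_measures(json.load(resp))

# Parsed offline from a sample response of the documented shape:
sample = {"component": {"key": "my-project", "measures": [
    {"metric": "sqale_index", "value": "120"},  # technical debt, in minutes
    {"metric": "complexity", "value": "57"},
]}}
print(parse_measures(sample))  # {'sqale_index': 120.0, 'complexity': 57.0}
```

Polling such an endpoint from a CI job makes it possible to fail a build when technical debt grows past a threshold, rather than discovering the trend in retrospect.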
Evaluation of the late phases differs from that of the early phases: the late phases require separate systems such as Jenkins or SonarQube to check quality (Armenise 2015).
With the increased demand for high-quality software and continuous integration, it is important to track software quality and monitor the software development process. We addressed the important aspects of the early and late phases of the SDLC and the existing software quality models, together with the DevOps tools for tracking software process quality metrics during the whole SDLC.
2.3 Metrics of Energy Consumption
With the high demand for information technologies, energy consumption has become a vital concern. One way to address it is to control the energy consumed by software. For example, a group of researchers showed that refactoring code smells reduces energy consumption by up to 87% (Palomba et al. 2019). Moreover, it was found that choosing the wrong collection type in the Java language can increase the energy consumed by software by up to 300%.
Ergasheva et al. (2020) systematized metrics of energy consumption in software systems through a systematic literature review. They classified the metrics found into the following categories:
- Hard metrics, which can be obtained through physical measurement
- Code metrics, which can be derived by analyzing the code
- Runtime metrics, which are related to the dynamic analysis of applications
- Indirect metrics, which refer to specific energy models
- Process metrics, which can be assessed through the analysis of the software development process
- Others, which are mostly related to specific system operations
Since we are interested in software metrics that can be assessed throughout the whole SDLC without depending on any specific energy consumption model, we focus on code and process metrics. Table 2.6 shows examples of such metrics used to evaluate the energy consumed by software (Ergasheva et al. 2020). Moreover, tools already exist to derive these metrics, such as MEMT (Liu et al. 2020), PETrA (Di Nucci et al. 2017), PUPiL (Zhang et al. 2016), and many others.
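To make the "indirect metrics" category concrete, the sketch below uses the common linear utilization model P(u) = P_idle + (P_max − P_idle)·u to convert sampled CPU utilization into an energy estimate. The power figures and the utilization trace are hypothetical; tools such as PETrA or PUPiL calibrate such models per device:

```python
# A minimal sketch of an indirect energy metric based on the common linear
# utilization model P(u) = P_idle + (P_max - P_idle) * u. The power figures
# and the sampled trace are hypothetical, not measurements of a real device.

def power_watts(utilization: float, p_idle: float = 2.0,
                p_max: float = 6.5) -> float:
    """Instantaneous power draw at a CPU utilization in [0, 1]."""
    return p_idle + (p_max - p_idle) * utilization

def energy_joules(samples, interval_s: float = 1.0) -> float:
    """Integrate sampled utilization into energy (power x time per sample)."""
    return sum(power_watts(u) * interval_s for u in samples)

# Five one-second utilization samples for a hypothetical application run:
trace = [0.1, 0.8, 0.9, 0.4, 0.0]
print(f"estimated energy: {energy_joules(trace):.2f} J")
```

The value of such a model is that it needs only utilization counters, which are available on any platform, rather than instrumented hardware; its accuracy, of course, depends on how well the idle and peak power parameters are calibrated.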
2.4 Conclusion
As one can notice, a project’s success requires monitoring and analysis of the software development process throughout the whole SDLC. To meet time and cost expectations, one should start tracking software engineering metrics as early as possible. The early phases of the SDLC include the requirements, design, and sometimes code phases. However, not all important factors can be monitored during this period; therefore, late-phase metrics should also be involved. The late phases usually consist of development, testing, and deployment.
Nowadays, the efficiency of software in terms of consumed energy also impacts the quality of the overall project. To create energy-efficient solutions, developers have to track energy consumption during the software development process. From the tables given within this chapter, one can notice that there is still no overall framework for tracking both software quality and energy consumption. One of the main reasons is that such analysis requires the participation of developers, which increases time costs. Therefore, as an alternative, we suggest using noninvasive systems to model the energy consumption of the systems being developed.
References
Armenise, Valentina. 2015. Continuous delivery with Jenkins: Jenkins solutions to implement continuous delivery. In 2015 IEEE/ACM 3rd International Workshop on Release Engineering, 24–27. Piscataway: IEEE.
Aversano, Lerina et al. 2017. Analysis of the documentation of ERP software projects. Procedia Computer Science 121: 423–430.
Bajaj, Anu et al. 2019. A systematic literature review of test case prioritization using genetic algorithms. IEEE Access 7: 126355–126375.
Basili, Victor R et al. 1996. A validation of object-oriented design metrics as quality indicators. IEEE Transactions on Software Engineering 22(10): 751–761.
Bharathi, R. et al. 2015. A framework for the estimation of OO software reliability using design complexity metrics. In 2015 International Conference on Trends in Automation, Communications and Computing Technology (I-TACT-15), 1–7. Piscataway: IEEE.
Davis, Alan M. 1988. A taxonomy for the early stages of the software development life cycle. Journal of Systems and Software 8(4): 297–311.
Di Nucci, Dario et al. 2017. Software-based energy profiling of android apps: Simple, efficient and reliable? In 2017 IEEE 24th International Conference on Software Analysis, Evolution and Reengineering (SANER), 103–114. Piscataway: IEEE.
Ergasheva, Shokhista et al. 2020. Metrics of energy consumption in software systems: A systematic literature review. In IOP Conference Series: Earth and Environmental Science. Vol. 431, 012051. Bristol: IOP Publishing.
Guaman, Daniel et al. 2017. SonarQube as a tool to identify software metrics and technical debt in the source code through static analysis. In Proceedings of 2017 the 7th International Workshop on Computer Science and Engineering. WCSE.
Hota, Chinmay et al. 2019. An empirical analysis on effectiveness of source code metrics for aging related bug prediction. In Proceedings of the 25th International Conference on Distributed Multimedia Systems. KSI Research Inc. and Knowledge Systems Institute Graduate School.
Kumar, Lov et al. 2016. Empirical validation for effectiveness of fault prediction technique based on cost analysis framework. International Journal of System Assurance Engineering and Management 8(S2): 1055–1068.
Kumar, Prathipati Ratna et al. 2017. A novel probabilistic-ABC based boosting model for software defect detection. In 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), 1–6. Piscataway: IEEE.
Liu, Weifeng et al. 2020. Improving the energy efficiency of data-intensive applications running on clusters. International Journal of Parallel, Emergent and Distributed Systems 35(3): 246–259.
Machado, Bruno N. et al. 2016. SBSTFrame: A framework to search-based software testing. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). Piscataway: IEEE.
Palomba, Fabio et al. 2019. On the impact of code smells on the energy consumption of mobile applications. Information and Software Technology 105: 43–55.
Phillips, Dewanne M. et al. 2018. An architecture, system engineering, and acquisition approach for space system software resiliency. Information and Software Technology 94: 150–164.
Ruparelia, Nayan B. 2010. Software development lifecycle models. ACM SIGSOFT Software Engineering Notes 35(3): 8–13.
Silva, Dennis et al. 2016. A hybrid approach for test case prioritization and selection. In 2016 IEEE Congress on Evolutionary Computation (CEC). Piscataway: IEEE.
Sultan, Khalid et al. 2008. Catalog of metrics for assessing security risks of software throughout the software development life cycle. In 2008 International Conference on Information Security and Assurance (ISA 2008), 461–465. Piscataway: IEEE.
Yadav, Harikesh Bahadur et al. 2013. Defects prediction of early phases of software development life cycle using fuzzy logic. In Confluence 2013: The Next Generation Information Technology Summit (4th International Conference), 2–6. IET.
Zhang, Huazhe et al. 2016. Maximizing performance under a power cap: A comparison of hardware, software, and hybrid techniques. ACM SIGPLAN Notices 51(4): 545–559.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
© 2023 The Author(s)
Kruglov, A., Succi, G., Kholmatova, Z. (2023). Metrics of Sustainability and Energy Efficiency of Software Products and Process. In: Developing Sustainable and Energy-Efficient Software Systems. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-031-11658-2_2