Introduction of static quality analysis in small- and medium-sized software enterprises: experiences from technology transfer
Today, small- and medium-sized enterprises (SMEs) in the software industry face major challenges. Their resource constraints demand highly efficient development. Furthermore, quality assurance (QA) measures need to be taken to mitigate the risk of additional, expensive effort for bug fixes or compensations. Automated static analysis (ASA) can reduce this risk because it promises low application effort. Yet SMEs seem to take little advantage of this opportunity; instead, they still rely mainly on the dynamic analysis approach of software testing. In this article, we report on our experiences from a technology transfer project. Our aim was to evaluate the results static analysis can provide for SMEs, as well as the problems that occur when introducing and using static analysis in SMEs. We analysed five software projects from five collaborating SMEs using three different ASA techniques: code clone detection, bug pattern detection and architecture conformance analysis. Following the analysis, we applied a quality model to aggregate and evaluate the results. Our study shows that the effort required to introduce ASA techniques in SMEs is small (mostly below one person-hour each). Furthermore, we encountered only a few technical problems. By means of the analyses, we detected multiple defects in production code. The participating companies perceived the analysis results to be a helpful addition to their current QA and will include the analyses in their QA process. With the help of the Quamoco quality model, we could efficiently aggregate and rate static analysis results, although the ratings partially mismatched the opinions of the SMEs. We conclude that ASA and quality models can be a valuable and affordable addition to the QA process of SMEs.
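To give a flavour of the first technique mentioned above, a token- or line-based clone detector normalizes source lines and reports windows of lines that recur verbatim elsewhere in the code base. The sketch below is a minimal illustration under our own simplifications, not the ConQAT detector used in the study; the function names and the three-line window threshold are hypothetical choices.

```python
# Minimal sketch of line-based clone detection (illustrative only,
# not the ConQAT implementation used in the study): normalize lines
# and report every window of MIN_LINES lines that occurs more than once.
from collections import defaultdict

MIN_LINES = 3  # minimal clone length; an illustrative threshold


def normalize(line: str) -> str:
    """Collapse whitespace so formatting differences do not hide clones."""
    return " ".join(line.split())


def find_clones(source: str, min_lines: int = MIN_LINES):
    lines = [normalize(l) for l in source.splitlines() if normalize(l)]
    windows = defaultdict(list)
    for i in range(len(lines) - min_lines + 1):
        windows[tuple(lines[i:i + min_lines])].append(i)
    # A clone class is a window that occurs at two or more positions.
    return [positions for positions in windows.values() if len(positions) > 1]


code = """
a = read()
b = a + 1
print(b)
x = read()
a = read()
b = a + 1
print(b)
"""
print(find_clones(code))  # one clone class: the repeated 3-line block at lines 0 and 4
```

Industrial detectors additionally normalize identifiers and literals to find near-miss clones, which is what makes the technique effective on production code.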
Keywords: Software quality · Small- and medium-sized software enterprises · Static analysis · Code clone detection · Bug pattern detection · Architecture conformance analysis · Quality models
We would like to thank Christian Pfaller, Bernhard Schätz and Elmar Jürgens for their technical and organisational support throughout the project. The authors owe sincere gratitude to Klaus Lochmann for his advice and support in issues related to quality models. We thank all involved companies as well as the OpenMRS lead developers for their excellent collaboration and assistance. Last but not least, we thank Veronika Bauer, Georg Hackenberg, Maximilian Junker and Kornelia Kuhle as well as our anonymous peer reviewers for many helpful remarks.