TORC: test plan optimization by requirements clustering
Acceptance testing is a time-consuming task for complex software systems that have to fulfill a large number of requirements. To reduce this effort, we have developed a largely automated method for deriving test plans from requirements expressed in natural language. It consists of three stages: annotation, clustering, and test plan specification. The general idea is to exploit redundancies and implicit relationships in requirements specifications. Multi-viewpoint techniques based on RM-ODP (Reference Model for Open Distributed Processing) are employed for specifying the requirements. We then use linguistic analysis techniques, requirements clustering algorithms, and pattern-based requirements collection to reduce the total effort of testing against the requirements specification. In particular, we use linguistic analysis to extract and annotate the actor, process, and object of a requirements statement. During clustering, a similarity function is computed as a measure of the overlap between requirements. In the test plan specification stage, our approach provides capabilities for semi-automatically deriving test plans and acceptance criteria from the clustered informal textual requirements. Two patterns are applied to compute a suitable order of test activities. The generated test plans consist of a sequence of test steps and assertions that are executed or checked in the given order. We also present the supporting prototype tool TORC, which is available as open source. To evaluate the approach, we have conducted a case study in the field of acceptance testing of a national electronic identification system. In summary, we report on lessons learned about how linguistic analysis and clustering techniques can help testers understand the relations between requirements and improve test planning.
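To make the clustering stage concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes each requirement has already been annotated with its actor, process, and object, defines a simple overlap-based similarity over those three slots (the equal-weight scheme and the threshold are assumptions for illustration), and greedily groups requirements whose annotations overlap. The example requirement texts and IDs are hypothetical.

```python
# Illustrative sketch (assumed names and weights, not the TORC method itself):
# requirements annotated with actor/process/object are grouped by a simple
# overlap-based similarity function.

def similarity(a: dict, b: dict) -> float:
    """Fraction of the annotation slots (actor, process, object) that match.
    Equal weighting of slots is an assumption for this sketch."""
    slots = ("actor", "process", "object")
    matches = sum(1 for s in slots if a.get(s) == b.get(s))
    return matches / len(slots)

def cluster(reqs: list, threshold: float = 0.5) -> list:
    """Greedy single-pass clustering: add each requirement to the first
    cluster whose representative is similar enough, else open a new one."""
    clusters: list = []
    for r in reqs:
        for c in clusters:
            if similarity(r, c[0]) >= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

# Hypothetical annotated requirements (IDs and wording invented for the sketch):
reqs = [
    {"id": "R1", "actor": "terminal", "process": "read", "object": "chip"},
    {"id": "R2", "actor": "terminal", "process": "authenticate", "object": "chip"},
    {"id": "R3", "actor": "inspector", "process": "verify", "object": "photo"},
]
groups = cluster(reqs)
# R1 and R2 share actor and object (similarity 2/3), so they fall into one
# cluster; R3 shares nothing with them and forms its own cluster.
```

A more faithful realization would replace exact string matching with the linguistic similarity obtained from parsing, and the greedy pass with a proper clustering algorithm, but the overlap-then-group structure is the same.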
Keywords: Linguistic analysis · Requirements clustering · Acceptance testing · Test planning · Acceptance criteria
We thank Dr. Michael Jahnich for contributing to the earlier publications and Peter Winkelhane for contributing to the TORC tool support.