Combining usage-based and model-based testing for service-oriented architectures in industrial practice


Abstract

Usage-based testing focuses quality assurance on the most heavily used parts of the software. It is based on usage profiles, from which test cases are generated. There are two fundamental approaches to deriving usage profiles in usage-based testing: either the system under test (SUT) is observed during operation and a usage profile is automatically inferred from the obtained usage data, or a usage profile is modeled by hand as part of a model-based testing (MBT) approach. In this article, we propose a third, combined approach, in which we automatically infer a usage profile and create a test data repository from usage data. We then represent the generated tests and test data in the test model of an MBT approach. The test model enables us to generate executable Testing and Test Control Notation version 3 (TTCN-3) code and thereby allows us to automate test execution. Together with industrial partners, we applied this approach in two pilot studies. Our findings show that usage-based testing can be applied in practice and greatly helps with the automation of tests. Moreover, we found that even when usage-based testing itself is not of interest, the incorporation of usage data can ease the application of MBT.
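To make the inference step concrete, the following is a minimal sketch, not taken from the article's actual tool chain (which, per the abstract, feeds a test model and generates executable TTCN-3). It infers a first-order Markov usage profile from monitored sessions and draws a test case from it by a weighted random walk, so that highly used paths are tested most often. All function names and the example event sequences are illustrative assumptions.

```python
from collections import defaultdict
import random

START, END = "<start>", "<end>"

def infer_usage_profile(sessions):
    """Count observed event-to-event transitions and normalise each row
    to a probability distribution (a first-order Markov usage profile)."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        events = [START] + list(session) + [END]
        for current, nxt in zip(events, events[1:]):
            counts[current][nxt] += 1
    return {
        state: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
        for state, followers in counts.items()
    }

def generate_test_case(profile, rng, max_length=50):
    """Random walk through the profile: highly used paths are drawn most often."""
    state, test_case = START, []
    while state != END and len(test_case) < max_length:
        followers = profile[state]
        state = rng.choices(list(followers), weights=list(followers.values()))[0]
        if state != END:
            test_case.append(state)
    return test_case

# Hypothetical monitored sessions of a web-service SUT (illustration only).
sessions = [
    ["login", "queryPatient", "updateRecord", "logout"],
    ["login", "queryPatient", "logout"],
    ["login", "queryPatient", "queryPatient", "logout"],
]
profile = infer_usage_profile(sessions)
print(generate_test_case(profile, random.Random(42)))
```

In a real deployment, the in-memory sessions would be replaced by monitored service-call logs, and the generated event sequences would be mapped onto test-model elements from which TTCN-3 test cases are produced.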

Keywords

Usage-based testing · Model-based testing · Usage monitoring · Web service testing · TTCN-3


Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  • Steffen Herbold (1)
  • Patrick Harms (1)
  • Jens Grabowski (1)

  1. Institute of Computer Science, Georg-August-Universität Göttingen, Göttingen, Germany
