BPEL Integration Testing

  • Seema Jehan
  • Ingo Pill
  • Franz Wotawa
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9033)


Service-oriented architectures, and developments such as clouds, provide a promising infrastructure for future computing. They encapsulate an IP core’s functionality for easy access via well-defined business and web interfaces, and thus allow us to flexibly realize complex software by drawing on available expertise. In this paper, we examine some of the challenges faced when testing such systems for verification purposes. In particular, we delve into the task of test suite generation and compare the performance of two corresponding algorithms. We also report on experiments with a collection of BPEL processes taken from the literature, in order to identify performance trends with respect to fault coverage metrics. Our results suggest that structural reasoning can outperform a completely random approach.
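The two algorithms compared in the paper are not reproduced here, but the core contrast between a purely random and a structure-guided approach to test path generation can be sketched on a toy example. In the sketch below, everything is hypothetical: the BPEL-like process is a made-up control-flow graph, and the function names are illustrative, not the paper's API. A random walk samples execution paths uniformly, while the structure-guided variant greedily steers each walk toward branches not yet covered.

```python
import random

# A toy BPEL-like process as a control-flow graph: each activity maps to
# its possible successors; nodes with several successors are branch points.
# (Hypothetical example process, not taken from the paper.)
CFG = {
    "receive":  ["check"],
    "check":    ["approve", "reject", "escalate"],
    "approve":  ["reply"],
    "reject":   ["reply"],
    "escalate": ["manual", "reply"],
    "manual":   ["reply"],
    "reply":    [],
}
START = "receive"

# All branch decisions that a test suite should exercise.
ALL_BRANCHES = {(n, s) for n, succs in CFG.items() if len(succs) > 1 for s in succs}

def branches(path):
    """Branch decisions (node, successor) taken along a path."""
    return {(a, b) for a, b in zip(path, path[1:]) if len(CFG[a]) > 1}

def pick_random(rng, covered):
    """Uniform random walk from START to a terminal activity."""
    node, path = START, [START]
    while CFG[node]:
        node = rng.choice(CFG[node])
        path.append(node)
    return path

def pick_structural(rng, covered):
    """Greedy walk that prefers successors whose branch is not yet covered."""
    node, path = START, [START]
    while CFG[node]:
        succs = CFG[node]
        fresh = [s for s in succs if (node, s) not in covered] if len(succs) > 1 else succs
        node = rng.choice(fresh or succs)
        path.append(node)
    return path

def paths_until_branch_coverage(pick, rng, limit=1000):
    """Count generated test paths until every branch decision is covered."""
    covered, count = set(), 0
    while covered < ALL_BRANCHES and count < limit:
        covered |= branches(pick(rng, covered))
        count += 1
    return count

if __name__ == "__main__":
    print("random:    ", paths_until_branch_coverage(pick_random, random.Random(0)))
    print("structural:", paths_until_branch_coverage(pick_structural, random.Random(0)))
```

On this toy graph, the structure-guided picker tends to reach full branch coverage with fewer paths, since it only falls back to random choice once all decisions at a branch point have been exercised; this mirrors, in miniature, the trend the abstract reports.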


Keywords: Test Suite · Symbolic Execution · Test Case Generation · Satisfying Assignment · Mutation Score



Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Institute for Software Technology, Graz University of Technology, Graz, Austria
