Heterogeneous Systems Testing Techniques: An Exploratory Survey

  • Ahmad Nauman Ghazi (corresponding author)
  • Kai Petersen
  • Jürgen Börstler
Conference paper
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 200)


Abstract

Heterogeneous systems comprising sets of inherent subsystems are challenging to integrate. In particular, testing for interoperability and conformance is a challenge, and the complexity of such systems amplifies traditional testing challenges. We explore (1) which techniques frequently discussed in the literature in the context of heterogeneous system testing are used by practitioners to test their heterogeneous systems, and (2) how practitioners perceive the usefulness of these techniques with respect to a defined set of outcome variables. To that end, we conducted an exploratory survey and received a total of 27 complete answers. Search-based testing has been used by 14 of the 27 respondents, indicating the practical relevance of this approach for testing heterogeneous systems, even though it is relatively new and has only recently been studied extensively. The most frequently used technique is exploratory manual testing, followed by combinatorial testing. With respect to the perceived performance of the testing techniques, the practitioners were undecided on many of the studied variables; manual exploratory testing, however, received very positive ratings across the outcome variables.


Keywords: Heterogeneous System · Testing Technique · Generate Test Case · Combinatorial Testing · Exploratory Testing



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Ahmad Nauman Ghazi, Blekinge Institute of Technology, Karlskrona, Sweden (corresponding author)
  • Kai Petersen, Blekinge Institute of Technology, Karlskrona, Sweden
  • Jürgen Börstler, Blekinge Institute of Technology, Karlskrona, Sweden
