
Towards Reliability Estimation of Large Systems-of-Systems with the Palladio Component Model

  • Fouad ben Nasr Omri
  • Ralf Reussner
Chapter

Abstract

The component paradigm aims at accelerating the construction of large-scale systems-of-systems by facilitating the integration of third-party components. The size and complexity of such large software systems make the reliability assessment process challenging. The components building such systems may be developed and maintained by multiple parties, so the reliability testing process should focus on the integration logic connecting them. The standard approach is to perform integration testing to uncover interaction faults between the components. However, integration testing of large systems-of-systems is in most cases intractable: the multitude of potential interactions between the subsystems can hardly be tested systematically. Moreover, standard integration test cases cannot be used to estimate the reliability of the tested system, because they are not necessarily representative of the software usage model and therefore cannot yield a sound reliability estimate. We propose a novel testing approach that supports both sound reliability estimation and high interaction coverage for reliable, interaction-intensive software.
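The link between usage representativeness and reliability estimation can be made concrete with a small sketch. The following Java program illustrates statistical usage-based testing in general, not the approach proposed in this chapter: test paths are sampled from a hypothetical Markov chain usage model, and the observed failure rate over the sampled runs yields a Nelson-style reliability point estimate R = 1 - f/n. The usage model, its state names, and the runTestCase stub are assumptions made purely for the example.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Minimal sketch of statistical usage-based testing (illustrative only,
// not the chapter's approach): draw test paths from a Markov chain usage
// model, then estimate reliability from the observed failure rate.
// The usage model, its states, and runTestCase() are hypothetical.
public class UsageBasedReliability {

    record Transition(String target, double probability) {}

    // Hypothetical usage profile of a small web shop.
    static final Map<String, List<Transition>> USAGE_MODEL = Map.of(
            "Start",    List.of(new Transition("Browse", 0.7),
                                new Transition("Login", 0.3)),
            "Browse",   List.of(new Transition("Checkout", 0.2),
                                new Transition("Exit", 0.8)),
            "Login",    List.of(new Transition("Checkout", 0.6),
                                new Transition("Exit", 0.4)),
            "Checkout", List.of(new Transition("Exit", 1.0)));

    static final Random RNG = new Random(42);

    // Walks the chain from "Start" until "Exit", returning the visited states.
    static List<String> samplePath() {
        List<String> path = new ArrayList<>(List.of("Start"));
        String state = "Start";
        while (!state.equals("Exit")) {
            List<Transition> out = USAGE_MODEL.get(state);
            double u = RNG.nextDouble(), acc = 0.0;
            String next = out.get(out.size() - 1).target(); // fallback for rounding
            for (Transition t : out) {
                acc += t.probability();
                if (u <= acc) { next = t.target(); break; }
            }
            state = next;
            path.add(state);
        }
        return path;
    }

    // Stub: a real harness would drive the system under test along the path.
    static boolean runTestCase(List<String> path) {
        return true;
    }

    public static void main(String[] args) {
        int runs = 1000, failures = 0;
        for (int i = 0; i < runs; i++) {
            if (!runTestCase(samplePath())) failures++;
        }
        // Because the sampled paths follow the usage profile, this point
        // estimate reflects reliability as a user would experience it.
        double reliability = 1.0 - (double) failures / runs;
        System.out.printf("Estimated reliability: %.4f (%d of %d runs failed)%n",
                reliability, failures, runs);
    }
}

Integration test suites, by contrast, are typically chosen to cover interactions rather than to follow the usage profile, which is precisely why their pass/fail statistics do not translate into such an estimate.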

Keywords

Software reliability testing · Statistical usage-based testing · Symbolic execution



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Karlsruhe Institute of Technology, Karlsruhe, Germany
