
Executable Interface Specifications for Testing Asynchronous Creol Components

  • Immo Grabe
  • Marcel Kyas
  • Martin Steffen
  • Arild B. Torjusen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5961)

Abstract

We propose and explore a formal approach to black-box testing of asynchronously communicating components in open environments. Asynchrony poses a challenge for validating and testing components. We use Creol, a high-level, object-oriented language for distributed systems, and present an interface specification language that describes components in terms of traces of their observable behavior.
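
For readers unfamiliar with Creol, the following minimal sketch illustrates the asynchronous communication model the abstract refers to. It is our own illustration, not code from the paper: the travel-agent scenario is only suggested by the keywords, and all names (Agent, TravelAgent, Broker, book, find) are hypothetical. A caller issues an asynchronous method call tagged with a label and may later suspend until the reply is available.

    // Hypothetical Creol sketch, not code from the paper: an interface
    // with a cointerface (Client) and a class whose method calls a
    // broker asynchronously, suspending until the reply arrives.
    interface Agent
    begin
      with Client
        op book(in trip : Data; out ref : Data)
    end

    class TravelAgent(broker : Broker) implements Agent
    begin
      with Client
        op book(in trip : Data; out ref : Data) ==
          var l : Label[Data];
          l!broker.find(trip);  // asynchronous call: the caller is not blocked
          await l?;             // release control until the reply has arrived
          l?(ref)               // fetch the reply into the out-parameter
    end

The label l decouples the call from its reply; between the call and the await, the object may interleave other activity. This decoupling is precisely what makes the component's externally visible behavior nondeterministic and motivates a trace-based specification.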

The language enables a concise description of a component's behavior; it is executable in rewriting logic, and we use it to give test specifications for Creol components. Central to a specification is a clean separation between interactions under the control of the component and those coming from the environment, which leads to an assumption-commitment style description of a component's behavior. The assumptions schedule the inputs, whereas the outputs, as commitments, are tested for conformance with the specification. The asynchronous nature of communication in Creol is respected by testing only up to a notion of observability. The existing Creol interpreter is combined with our implementation of the specification language to obtain a specification-driven interpreter for testing.
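
Because the specification language is executable in rewriting logic, a conformance check can be run in Maude. The sketch below is a minimal illustration under our own assumptions; the module TRACE-SPEC, its operators, and the event names are hypothetical and not the authors' implementation. Traces are sequences of directed events: inputs are scheduled by the assumption part, outputs are checked as commitments.

    *** Hypothetical Maude sketch, not the authors' modules: a trace
    *** conforms iff each 'book input is immediately answered by a
    *** 'confirm output.
    fmod TRACE-SPEC is
      protecting QID .
      sorts Event Trace .
      op inp : Qid -> Event [ctor] .          *** input, scheduled by the assumption
      op outp : Qid -> Event [ctor] .         *** output, tested as a commitment
      op eps : -> Trace [ctor] .              *** the empty trace
      op _;_ : Event Trace -> Trace [ctor] .  *** prepend an event to a trace
      op ok : Trace -> Bool .
      var t : Trace .
      eq ok(eps) = true .
      eq ok(inp('book) ; outp('confirm) ; t) = ok(t) .
      eq ok(t) = false [owise] .              *** any other trace shape fails
    endfm

For example, reduce ok(inp('book) ; outp('confirm) ; eps) . yields true, while reduce ok(inp('book) ; outp('cancel) ; eps) . yields false.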

Keywords

Operational Semantics, Abstract Syntax, Travel Agent, Interface Behavior, Incoming Call

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Immo Grabe (1)
  • Marcel Kyas (2)
  • Martin Steffen (3)
  • Arild B. Torjusen (3)

  1. Christian-Albrechts University Kiel, Germany
  2. Department of Computer Science, Freie Universität Berlin, Germany
  3. Department of Informatics, University of Oslo, Norway
