Symbolic Production Grammars in LSCs Testing

  • Hai-Feng Guo
  • Mahadevan Subramaniam
Part of the Studies in Computational Intelligence book series (SCI, volume 429)


We present LCT_SG, an LSC (Live Sequence Chart) consistency testing system that takes LSCs and symbolic grammars as inputs and performs automated LSC simulation for consistency testing. A symbolic context-free grammar is used to systematically enumerate continuous inputs for LSCs, where symbolic terminals and their domains hide the complexity of distinct inputs that share common syntactic structures and similar expected system behaviors. Our symbolic grammars allow a symbolic terminal to be passed as a parameter of a production rule, thus extending context-free grammars with context-sensitivity on symbolic terminals. Constraints on symbolic terminals may be collected and processed dynamically during simulation to decompose their symbolic domains for branched testing. The LCT_SG system further provides either a state transition graph or a failure trace to justify the consistency testing results; this justification may then be used to evolve the symbolic grammar for refined test generation.
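To make the grammar mechanism concrete, the following is a minimal, hypothetical sketch (the names `SymTerminal`, `decompose`, and `inputs` are illustrative, not the LCT_SG implementation): a symbolic terminal carries a finite domain, is threaded as a parameter through a recursive production rule, and a constraint collected during simulation splits its domain into sub-domains for branched testing.

```python
class SymTerminal:
    """A symbolic terminal: a name plus a finite domain of concrete values."""

    def __init__(self, name, domain):
        self.name = name
        self.domain = set(domain)

    def decompose(self, constraint):
        """Split the domain by a constraint collected during simulation,
        yielding one sub-terminal per branch (satisfying vs. violating)."""
        sat = {v for v in self.domain if constraint(v)}
        unsat = self.domain - sat
        branches = []
        if sat:
            branches.append(SymTerminal(self.name + "_sat", sat))
        if unsat:
            branches.append(SymTerminal(self.name + "_unsat", unsat))
        return branches


def inputs(n, x):
    """Production rule  Inputs(x) -> eps | event(x) Inputs(x),
    bounded to depth n. The symbolic terminal x is passed as a rule
    parameter, adding context-sensitivity on x while the rule skeleton
    stays context-free; each yielded sequence is one enumerated input."""
    yield []                          # the eps alternative
    if n > 0:
        for rest in inputs(n - 1, x):
            yield [("event", x.name)] + rest
```

For example, `SymTerminal("x", range(10)).decompose(lambda v: v < 5)` produces two sub-terminals whose domains partition 0..9, and `inputs(2, x)` enumerates the three input sequences of length 0, 1, and 2.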


Keywords: Domain Decomposition, Production Rule, Consistency Testing, State Transition Diagram, Symbolic Terminal





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. Department of Computer Science, University of Nebraska at Omaha, Omaha, USA
