Unit Testing for Domain-Specific Languages

  • Hui Wu
  • Jeff Gray
  • Marjan Mernik
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5658)


Domain-specific languages (DSLs) offer several advantages by providing idioms that are similar to the abstractions found in a specific problem domain. However, a challenge is that tool support for DSLs is lacking when compared to the capabilities offered in general-purpose languages (GPLs), such as Java and C++. For example, support for unit testing a DSL program is absent and debuggers for DSLs are rare. This limits the ability of a developer to discover the existence of software errors and to locate them in a DSL program. Currently, software developers using a DSL are generally forced to test and debug their DSL programs using available GPL tools, rather than tools that are informed by the domain abstractions at the DSL level. This reduces the utility of DSL adoption and minimizes the benefits of working with higher abstractions, which can bring into question the suitability of using DSLs in the development process. This paper introduces our initial investigation into a unit testing framework that can be customized for specific DSLs through a reusable mapping of GPL testing tool functionality. We provide examples from two different DSL categories that serve as case studies demonstrating the possibilities of a unit testing engine for DSLs.
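The idea of reusing GPL testing tool functionality for DSL-level tests can be illustrated with a minimal, hypothetical sketch. Here Python's `unittest` stands in for a GPL testing framework such as JUnit; the toy "let"-statement DSL, its interpreter (`run_dsl`), and the mapping base class (`DslTestCase`) are illustrative assumptions, not the framework described in the paper:

```python
# Hypothetical sketch: a unit test written against DSL-level abstractions
# is mapped onto a GPL testing framework (Python's unittest here).
import unittest


def run_dsl(program: str) -> dict:
    """Interpret a toy DSL of 'let NAME = EXPR' statements."""
    env = {}
    for line in program.strip().splitlines():
        _, name, _, expr = line.split(maxsplit=3)  # e.g. "let x = 2"
        env[name] = eval(expr, {}, env)  # toy interpreter only
    return env


class DslTestCase(unittest.TestCase):
    """Base class mapping DSL-level assertions onto unittest assertions."""

    def assert_var_equals(self, program: str, name: str, expected) -> None:
        env = run_dsl(program)
        # Failure messages refer to domain abstractions (DSL variables),
        # not to GPL-level internals -- the point of the mapping.
        self.assertIn(name, env, f"DSL variable '{name}' was never defined")
        self.assertEqual(env[name], expected,
                         f"DSL variable '{name}' has value {env[name]}")


class TestCalculatorProgram(DslTestCase):
    def test_sum(self):
        self.assert_var_equals("let x = 2\nlet y = x + 3", "y", 5)


if __name__ == "__main__":
    unittest.main()
```

A developer using such a framework would write only subclasses like `TestCalculatorProgram`, staying at the level of DSL abstractions, while test execution and reporting are delegated unchanged to the underlying GPL tool.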


Keywords: domain-specific languages, unit testing, tool generation



Copyright information

© IFIP International Federation for Information Processing 2009

Authors and Affiliations

  • Hui Wu (1)
  • Jeff Gray (1)
  • Marjan Mernik (2)
  1. Department of Computer and Information Sciences, University of Alabama at Birmingham, Birmingham, Alabama, USA
  2. Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia
