Program comprehension of domain-specific and general-purpose languages: comparison using a family of experiments

Abstract

Domain-specific languages (DSLs) are often argued to have a simpler notation than general-purpose languages (GPLs), since the notation is adapted to the specific problem domain. Consequently, the domain relevance of the notation, and its effect on how programmers form a representation of the problem, is believed to improve programmers’ efficiency and accuracy when using DSLs rather than comparable solutions, such as application libraries, in GPLs. Most of these common beliefs rest on qualitative conclusions drawn by developers. Rather than implementing the same problem in both a DSL and a GPL and comparing the efficiency and accuracy of each approach, developers often compare the implementation of a new program in a DSL with their previous experience implementing similar programs in GPLs. Such a conclusion may or may not be valid. This paper takes a more skeptical approach to accepting those beliefs by reporting on a family of three empirical studies that compare DSLs and GPLs in different domains. The results of the studies show that developers are more accurate and more efficient in program comprehension when using a DSL than when using a GPL. These results validate some of the long-held beliefs of the DSL community that until now were supported only by anecdotal evidence.
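
To illustrate the kind of notational difference the studies examine (the third experiment contrasted XAML with C# forms for graphical user interfaces), the sketch below shows the same button expressed declaratively in a XAML-like fragment and built imperatively with the Windows Forms application library in C#. This is a minimal illustration only, not taken from the study materials; the class and handler names (ExampleForm, OkButton_Click) are hypothetical.

    // DSL side (XAML-like markup), roughly:
    //   <Button Name="okButton" Width="75" Height="23" Content="OK" Click="OkButton_Click"/>
    //
    // GPL side: the same button assembled with the Windows Forms application library.
    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class ExampleForm : Form            // hypothetical class name
    {
        private readonly Button okButton;

        public ExampleForm()
        {
            // Construct the control and set the same properties the markup declares.
            okButton = new Button();
            okButton.Name = "okButton";
            okButton.Size = new Size(75, 23);
            okButton.Text = "OK";
            okButton.Click += OkButton_Click;  // wire the event handler explicitly
            Controls.Add(okButton);
        }

        private void OkButton_Click(object sender, EventArgs e)
        {
            MessageBox.Show("OK clicked");     // placeholder behaviour
        }

        [STAThread]
        public static void Main()
        {
            Application.Run(new ExampleForm());
        }
    }

The contrast is that the DSL fragment states only domain-level properties of the interface, while the GPL version interleaves them with library and language mechanics such as object construction, event wiring, and application start-up.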

Notes

  1. http://lpm.uni-mb.si/index.php?page=ProjectPC

Acknowledgements

Special thanks to Matej Črepinšek and Daniela da Cruz, who helped us prepare the third experiment on graphical user interfaces. Thanks also to our Portuguese partners (Pedro Rangel Henriques, Maria João Varanda Pereira, and Nuno Oliveira) for their constructive comments on the questionnaires, which helped us improve the quality of this work.

Author information

Corresponding author

Correspondence to Tomaž Kosar.

Additional information

This work was partially sponsored by the bilateral project “Program Comprehension for Domain-Specific Languages” (code BI-PT/08-09-008) between Slovenia and Portugal.

Editor: Giulio Antoniol

Appendices

Appendix A: Background Questionnaires

  1. B1: How would you rate your programming skill level?

  2. B2: How would you rate your programming skill level in Java/C/C#?

  3. B3: How would you rate your experience with domain-specific languages before the experiment?

  4. B4: Are you familiar with domain1/domain2/domain3?

  5. B5: Are you familiar with the DSL of domain1/domain2/domain3?

  6. B6: Are you familiar with the application library of domain1/domain2/domain3?

  7. B7: Are you familiar with DSL application 1?

  8. B8: Are you familiar with GPL application 1?

  9. B9: Are you familiar with DSL application 2?

  10. B10: Are you familiar with GPL application 2?

Appendix B: Feedback Questionnaires

  1. F1: How simple to use does DSL 1/2/3 seem to you?

  2. F2: How simple to use does GPL 1/2/3 seem to you?

  3. F3: How would you grade the complexity of the DSL 1/2/3 questionnaire?

  4. F4: How would you grade the complexity of the GPL 1/2/3 questionnaire?

  5. F5: How well did you understand the programs in DSL application 1?

  6. F6: How well did you understand the programs in GPL application 1?

  7. F7: How well did you understand the programs in DSL application 2?

  8. F8: How well did you understand the programs in GPL application 2?

About this article

Cite this article

Kosar, T., Mernik, M. & Carver, J.C. Program comprehension of domain-specific and general-purpose languages: comparison using a family of experiments. Empir Software Eng 17, 276–304 (2012). https://doi.org/10.1007/s10664-011-9172-x

Keywords

  • Domain-specific languages
  • General-purpose languages
  • Program understanding
  • Program comprehension
  • Controlled experiments
  • Language evaluations