Applying static code analysis for domain-specific languages

  • Iván Ruiz-Rube
  • Tatiana Person
  • Juan Manuel Dodero
  • José Miguel Mota
  • Javier Merchán Sánchez-Jara
Regular Paper

Abstract

The use of code quality control platforms for analysing source code is gaining increasing attention in the developer community. These platforms can parse and check source code written in a variety of general-purpose programming languages. The emergence of domain-specific languages (DSLs) allows professionals from different areas to describe and solve problems within their own disciplines, so source code quality analysis methods and tools can also be applied to software artefacts developed with a DSL. However, to evaluate the quality of DSL code, every software component that the quality platform requires to parse and query the source code must first be developed, which is a time-consuming and error-prone task. This paper describes a model-driven interoperability strategy that bridges the gap between the grammar formats of source code quality parsers and domain-specific text languages. The approach has been tested on the most widespread platforms for designing text-based languages and for analysing source code, and it has been evaluated in a number of contexts from different domain areas.
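
To make the grammar-format gap concrete, the sketch below is a minimal, hypothetical illustration (not the tool chain developed in this paper): it models a production rule once and prints it in two notations, the EBNF-style unordered alternatives used by language workbenches such as Xtext, and the PEG-style ordered choice expected by parsing-expression-based analysers such as SonarQube's SSLR toolkit. All class and method names (GrammarBridge, Rule, toEbnf, toPeg) are assumptions made for the example.

    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical sketch: one grammar metamodel, two serialisations.
    public class GrammarBridge {

        // One production rule: a name plus its alternative bodies.
        record Rule(String name, List<String> alternatives) {

            // EBNF view: alternatives separated by '|' (order-independent).
            String toEbnf() {
                return name + " = " + String.join(" | ", alternatives) + " ;";
            }

            // PEG view: the same alternatives as an ordered choice '/'.
            // Longer alternatives are emitted first so that a shorter
            // alternative that is a prefix of a longer one cannot shadow it.
            String toPeg() {
                List<String> ordered = alternatives.stream()
                        .sorted((a, b) -> Integer.compare(b.length(), a.length()))
                        .collect(Collectors.toList());
                return name + " <- " + String.join(" / ", ordered);
            }
        }

        public static void main(String[] args) {
            Rule statement = new Rule("statement",
                    List.of("assignment", "assignment ';' statement"));
            System.out.println(statement.toEbnf());
            // statement = assignment | assignment ';' statement ;
            System.out.println(statement.toPeg());
            // statement <- assignment ';' statement / assignment
        }
    }

The longest-first reordering in toPeg hints at why a hand-written conversion is error-prone: in a PEG, an earlier alternative that matches a prefix of the input silently hides any later alternative, a hazard that a systematic model-driven bridge can handle in one place.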

Keywords

Text-based languages · Static analysis · Model-driven interoperability · Xtext · SonarQube

Acknowledgements

This work was developed within the VISAIGLE project, funded by the Spanish National Research Agency (AEI) with ERDF funds under grant ref. TIN2017-85797-R.

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Engineering, University of Cádiz, Cádiz, Spain
  2. E-LECTRA Research Group, University of Salamanca, Salamanca, Spain