Special issue on quality in model-driven engineering

Model-driven engineering (MDE) refers to a range of approaches in which models play a central role in software development. Modeling promotes reasoning at a higher level of abstraction, thereby reducing the complexity of software development: it hides unnecessary low-level details at the appropriate stages and fosters communication among the many stakeholders in the development process. MDE initiatives claim increased quality and productivity by separating business and application logic from the underlying platform technology, transforming models into other models, and automating code generation. However, while quality assurance is a well-established topic in traditional software engineering, far less is known about how to assess quality across the MDE lifecycle. We need to understand not only how to measure the quality of the MDE process (and determine whether it improves on other approaches), but also how to assess the quality of the models themselves (defining metrics for models and metamodels, design patterns, and anti-patterns).

This special issue aimed to be the natural venue for sharing mature research, ground-breaking ideas, and experience reports on quality in MDE (QMDE). From 26 submissions, a number that attests to the interest in the topic, 7 papers were finally selected after a strict peer-review process involving at least 3 reviewers per paper (and 5 or 6 different ones in some cases). The contributions cover topics across the MDD process: starting with the quality of requirements, moving on to model evolution and reuse, addressing the quality of languages (textual vs. graphical notations, the complexity of developed DSLs, DSL environments for agents), considering quality in model transformations, and concluding with semantics and timing concerns in behavioral models.

The contributions can be summarized in the following way:

  • Quality in Model-Driven Engineering: A tertiary study by Miguel Goulão, Marjan Mernik, and Vasco Amaral. This work aggregates consolidated findings on quality in MDE so that researchers and practitioners in the field can learn from them and identify relatively unexplored research niches.

  • Ontology-based Automated Support for Goal–Use case Model Analysis by Tuong Huan Nguyen, John Grundy, and Mohamed Almorsy. The authors present a framework that combines goal-oriented and use case modeling in requirements engineering to automate the analysis of consistency, correctness, and completeness. The framework relies on domain ontologies and a use case metamodel to obtain OWL models, through a semi-automated process, for automated reasoning. The evaluation of the tool shows positive results in soundness, completeness, and problem detection rates on benchmark applications.

  • Staged Model Evolution and Proactive Quality Guidance for Model Libraries by Andreas Ganser, Horst Lichter, Alexander Roth, and Bernhard Rumpe. The authors start from the observation that current model evolution approaches do not consider reuse. They leverage the reuse purpose of model repositories, turning them into model libraries, and propose an approach for staged model evolution in UML.

  • Comparison of a textual versus a graphical notation for the maintainability of MDE domain models: an empirical pilot study by Santiago Meliá, Cristina Cachero, Jesus Hermida, and Enrique Aparicio. The authors describe an empirical pilot study to compare a textual and a graphical notation with respect to the efficiency, effectiveness, and satisfaction of junior software developers while performing analysability and modifiability tasks on domain models. Subjects in the experiment performed significantly better for analysability coverage and modifiability efficiency with a textual notation. On the other hand, subjects showed a slight preference toward the graphical notation.

  • Measuring the complexity of domain-specific languages developed using MDD by Boštjan Slivnik. The author describes a metric for measuring the appropriateness of domain-specific languages that are developed using model-driven development. The metric measures the depth of the deepest domain-specific command within abstract syntax trees. The approach is demonstrated using examples from a few real-world domain-specific languages.

  • A Systematic Approach on Evaluating Domain-specific Modeling Language Environments for Multi-agent Systems by Moharram Challenger, Geylani Kardas, and Bedir Tekinerdogan. The authors present an evaluation framework and systematic approach for assessing domain-specific modeling languages and their corresponding tools for multi-agent systems. The authors report on the lessons learned using their qualitative and quantitative evaluation approach on SEA_ML for supporting the generation of agent-based systems.

  • Assessing and Improving Quality of QVTo Model Transformations by Christine M. Gerpheide, Ramon R.H. Schiffelers, and Alexander Serebrenik. In this work, the authors address two main research questions: first, how to assess the quality of QVTo model transformations and, second, how to develop higher-quality QVTo transformations. To address the first question, the authors built a quality model for QVTo through a broad exploratory study including interviews with QVTo experts, a review of existing material, and introspection. To address the second question, they used the quality model to identify developer support tooling for QVTo, and implemented and evaluated a code test coverage tool.

  • Timing Consistency Checking for UML/MARTE Behavioral Models by Jinho Cho, Eunkyoung Jee, and Doo-Hwan Bae. In this work, the authors propose a systematic approach to checking the timing consistency of state machine, sequence, and timing diagrams with MARTE annotations for real-time embedded systems.

To conclude, we hope that this issue, with its selection of insightful papers, will shed light on relevant future research directions, paving the way for what we believe to be an interesting, relevant, and challenging research thread in model-driven engineering, where much remains to be done.

We would like to thank the reviewers for their essential contribution to the reviewing process. Special thanks also go to the Software Quality Journal's Editor-in-Chief, Rachel Harrison, for her strong support, without which this special issue could not have become a reality.

Corresponding author

Correspondence to Vasco Amaral.

Cite this article

Amaral, V., Mernik, M. Special issue on quality in model-driven engineering. Software Qual J 24, 597–599 (2016). https://doi.org/10.1007/s11219-016-9327-5
