In this section, we provide a summary of the synthesized data as well as an analysis of the demographics and quality of the publications. The summary is given in narrative form, supported by plots and graphs as suggested by Booth, Sutton and Papaioannou
[14]. Before describing our findings with regard to the research questions from Sect. 3.1, we first offer statistics and information about the demographic data of the collected literature as well as an overview of their quality, which we assessed using the quality criteria from Sect. 3.4.
Demographics
Figure 4 provides an overview of the number of included publications per year. It is interesting to note that only two years passed between the introduction of the Model-Driven Architecture in 2001 and the first mentions of advantages of model transformation languages. One of the most cited papers on model transformations in our literature review, P63, was also published in that year. Its title shapes the introductions of publications in the community even today: Model transformation: The heart and soul of model-driven software development.
Scrutinizing claims about MTLs, however, has only recently become a focus of research; the first study dedicated to evaluating advantages of MTLs (P59) was published in 2018. To us, this suggests that research is slowly catching on to the fact that specific properties of MTLs need to be evaluated rather than relying on broad claims. Simply relying on the fact that model transformation languages are DSLs and that DSLs in general fare better than non-domain-specific languages
[12, 28, 40] is not enough.
Industrial case studies on the adoption of MDSE were performed long before 2018, but such studies mainly focus on the complete MDSE workbench and do not analyse the impact of the MTLs used in great detail. The case study P670, for example, while stating that “The technology used in the company should provide advanced features for developing and executing model transformations”, goes into detail about neither current shortcomings nor any other specifics of the model transformation languages used during the development process.
Overall, there are 32 publications that mention advantages and 36 publications that mention disadvantages. Moreover, four publications provide empirical evidence for either advantages or disadvantages, while 12 publications use citations to support their claims and 14 publications use other means such as examples and experience (more on this in Sect. 4.4).
Lastly, Table 3 shows which transformation languages were directly involved in the publications used in our data extraction. We counted a transformation language as involved if it was used, analysed or introduced in the publication. Simply being mentioned in an enumeration of example MTLs was not sufficient.
The table paints an interesting picture. ATL far exceeds all other model transformation languages in involvement, and most languages are only discussed in a single publication.
Table 3 Number of publications that mention specific MTLs
Quality of publications
The results from the quality assessment, summarized in Fig. 5, show that both the problem context and definition as well as the overall contributions are well defined in a majority of publications. Insights drawn from the work described in these publications are also reported in most cases, although often less comprehensively. However, thorough descriptions of the research design, the methods used or the steps taken are less common, a trend that is even more prominent for the presentation and discussion of the limitations affecting the studies. Similar observations have already been made by other literature reviews in different domains
[26, 57].
RQ1: Advantages and disadvantages of model transformation languages
We used data items D4 and D5 to answer our first research question, namely which advantages and disadvantages of dedicated model transformation languages are claimed in the literature. The resulting statements were sorted into 15 different categories (see Fig. 6) which arose naturally from the collected statements. An overview of all claims, sorted into the different categories, is given in Table 4. The table assigns each claim a unique ID (Cxx) for reference throughout this work. It also lists the evidence used to support a claim (if any), to which we will come back in Sect. 4.4. For almost all categories, there are publications that describe model transformation languages as advantageous as well as publications that describe them as disadvantageous in that category. In the following, we discuss the statements made in the publications for each category.
Analysability
Throughout our gathered literature, there is only one publication, P45, that mentions analysability. According to its authors, a declarative transformation language comes with the added advantage of being automatically analysable, which enables optimizations and specialized tool support (C1). While the publication does not discuss this claim in detail, the authors provide examples of how static analysis allows the engine to implicitly construct an execution order. And while our literature review found only a single publication that explicitly mentions analysability as an advantage of model transformation languages, there do exist multiple publications
[2, 3, 63] that contain analysis procedures for model transformations.
Comprehensibility
Comprehensibility is a much disputed and multifaceted issue for model transformation languages. A total of eleven publications touch on several different aspects of how the use of MTLs influences the understandability of written transformations.
The first aspect is the use of a graphical syntax compared to the textual one typically used in general-purpose programming languages. In P63, the authors talk about “perceived cognitive gains” of graphical representations of models compared to textual ones (C6). This pronouncement is echoed in P43, which states that a graphical syntax for transformations is more intuitive and beneficial when reading transformation programs (C2).
While all these claims about graphical notation increasing the comprehensibility of transformations stand undisputed in our gathered literature, there are other facets in which graphical notation is said to be disadvantageous. We will come back to them later on in Sect. 4.3.5.
Declarative textual syntax is another commonly used syntax for defining model transformations. The authors of P45 contend that a declarative syntax makes it easy to understand transformation rules in isolation and in combination (C3). However, declarative transformation languages are typically based on graph transformation approaches, which can become complex and hard to read according to P70 (C13). The authors additionally assert that the use of abstract syntax hampers the comprehensibility of transformation rules (C12). Furthermore, P22 insists that the use of graph patterns results in only parts of a meta-model being revealed in the transformation rules and that current transformation languages exhibit a general lack of facilities for understanding transformations (C8). P22 also reports that understanding transformations in current model transformation languages is hampered, especially by the fact that many of the involved artefacts such as meta-models, models and transformation rules are scattered across multiple views (C9). P29 brings forward the concern that large models also hamper comprehensibility since no language concepts exist to master this complexity (C11). Adding to this point, P27 describes that, for non-experts (e.g. stakeholders), transformations written in a traditional model transformation language are “very complex to understand” because they lack the necessary skills (C10). The authors of P95, on the other hand, claim that the usage of dedicated MTLs, which incorporate high-level abstractions, produces transformations that are more concise and more understandable (C7). This sentiment is shared by P44, which expresses the belief that using GPLs for defining synchronizations brings disadvantages in comprehensibility compared to model transformation languages (C3).
Understanding a transformation requires, among other things, understanding which elements are affected by it and in which context the transformation is placed. Using a model transformation language is beneficial in this regard, as shown in the study described in P59 (C5).
Conciseness
Interestingly, there seems to be a consensus on the conciseness of model transformation languages compared to GPLs.
In general, dedicated model transformation languages are seen as more concise (P63 C17, P95 C21), a claim that is made not only for textual but also for graphical languages in P75 (C18).
That MTLs are more abstract, making them more concise and thus better, is claimed multiple times in P80 (C19), P52 (C15), P3 (C14) and P95 (C20), while P673 claims that the abstraction in MTLs helps to reduce their overall complexity (C22).
The SLOC metric has also been used as a way to compare MTLs with other MTLs and even with GPLs. According to an experiment described in P59, using a rule-based model transformation language reduces the amount of transformation code by up to 48% (C16). Whether or not this is any indication of superiority is a disputed subject
[9].
Debugging
Debugging support is much less disputed than comprehensibility. Of the five publications that talk about debugging in model transformation languages, none praise the current state of debugging support.
P22 (C24, C25) and P90 (C27) both describe that currently no sufficient debugging support exists for MTLs. And while P95 states that debugging transformations in a dedicated language is likely easier than when the transformation is written in a general-purpose language (C23), the authors fail to provide a single example to back this assertion.
Lastly, P45 lauds declarative syntax for its comprehension benefits but also notes that imperative syntax is generally easier to debug (C26).
Ease of writing a transformation
The main purpose of model transformation languages is to improve the ease with which developers are able to define transformations. Hence, this should also be a main benefit when compared to general-purpose languages. However, the authors of the study described in P59 found “no sufficient (statistically significant evidence) of general advantage of the specialized model transformation language QVTO over the modern GPL Xtend” (C39). This is not to say that no such advantages exist, as the authors admit their conclusions were “made under narrow conditions”, but it is still a concerning finding. All the more so because claims about such benefits of using MTLs persist throughout the literature. Claims such as those described in P29 (C29), P672 (C32) and P50 (C30) state that the simpler syntax of MTLs makes it easier to handle and transform models. These claims draw on statements about expressiveness, to which we come in the next section, and reason that better expressiveness must lead to an easier time writing transformations. A potential reason why model transformation languages have not been demonstrated to be better for writing transformations is cited in P27 (C34) and P28 (C35): both state that using a model transformation language requires skill, experience and a deep knowledge of the meta-models involved (P56 C38). In our opinion, however, this holds true regardless of the language used to transform models.
Moreover, many model transformation languages use a declarative syntax which, according to P45 (C37) and P63 (C40), can be unfamiliar to many programmers, who are much more accustomed to the status quo, i.e. imperative languages. The authors of P22, on the other hand, state that imperative MTLs often require additional code since many tasks have to be handled explicitly that declarative languages handle implicitly (C33).
Lastly, graphical syntax is said to make writing model transformations easier, as it is purported in P3 to be more intuitive for this task than a textual one. In P43 (C36) and P672 (C41), however, the authors claim that graphical syntax can be complicated to use and that textual syntax is more compact and does not force users to spend time beautifying the layout of diagrams.
Expressiveness
As described in Sect. 2.2, the idea behind domain-specific languages is to design languages around a specific domain, thus making them more expressive for tasks within that domain
[50]. Since model transformation languages are DSLs, it should not be a surprise that their expressiveness in the domain of model transformations is mentioned in an almost exclusively positive way by a total of 19 different publications found in our literature review.
A large portion of the publications that refer to expressiveness (P95, P80, P94, P63, P15, P40, P52, P70) state that the higher level of abstraction resulting from specific language constructs for model manipulation increases the conciseness and expressiveness of MTLs. P80 additionally asserts that model transformation languages are just easier to use (C61).
Another portion (P2, P15, P45, P677, P27, P63, P95) explains that the expressiveness is increased by the fact that model transformation engines can hide complexity from the developer. One such complex task is pattern matching and source model traversal, as mentioned in P2 (C42), P15 (C43) and P45 (C53). According to them, not having to write the matching algorithms increases the expressiveness and the ease of writing transformations in MTLs. Implicit rule ordering and rule triggering are another aspect that P15 (C46), P45 (C51) and P677 (C65) claim increases the expressiveness of a transformation language. Related to rule ordering is the internal management and resolution of trace information, which P15 (C44), P45 (C50), P677 (C65) and P95 (C64) state to be a major advantage of model transformation languages. Furthermore, P45 asserts that implicit target creation is another expressiveness advantage that MTLs can have over general-purpose languages (C52). Lastly, the study described in P59 observed that copying complex structures can be done more effectively in MTLs (C56).
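To make concrete what “hiding complexity” means here, the following minimal Java sketch spells out the services that a rule-based MTL engine typically provides implicitly: traversal of the source model, creation of target elements and trace-based resolution of references. It is our own illustration; SourceClass and TargetTable are hypothetical stand-ins and do not correspond to any particular meta-model or transformation framework.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hand-written model-to-model transformation that makes explicit what a rule-based
// MTL engine usually does implicitly: traversal, target creation and trace resolution.
public class ManualClass2Table {

    // Hypothetical source meta-model element
    static class SourceClass {
        final String name;
        SourceClass superClass; // cross-reference that must be resolved in the target model
        SourceClass(String name) { this.name = name; }
    }

    // Hypothetical target meta-model element
    static class TargetTable {
        final String name;
        TargetTable parent;
        TargetTable(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        SourceClass base = new SourceClass("Base");
        SourceClass derived = new SourceClass("Derived");
        derived.superClass = base;
        List<SourceClass> sourceModel = List.of(base, derived);

        // Explicit trace map: rule-based MTLs maintain and resolve this automatically.
        Map<SourceClass, TargetTable> trace = new HashMap<>();

        // Pass 1: explicit source traversal and target creation.
        for (SourceClass c : sourceModel) {
            trace.put(c, new TargetTable(c.name));
        }

        // Pass 2: explicit trace resolution to re-establish references between targets.
        for (SourceClass c : sourceModel) {
            if (c.superClass != null) {
                trace.get(c).parent = trace.get(c.superClass);
            }
        }

        for (SourceClass c : sourceModel) {
            TargetTable t = trace.get(c);
            System.out.println(t.name + (t.parent != null ? " -> parent: " + t.parent.name : ""));
        }
    }
}
```

In a rule-based MTL, a single declarative rule mapping SourceClass to TargetTable would typically replace both passes: the engine performs the traversal, creates the target elements and resolves the superClass reference through its internally managed trace links.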
However, we also uncovered some shortcomings in current syntaxes. P10 argues that the lack of expressions for transforming a single element into fragments of multiple targets is a detriment to the expressiveness of transformation languages, going as far as to allege that without such constructs model transformation languages are not expressive enough (C68). P32 implies that MTLs are unable to transform OCL constraints on source model elements to target model elements (C69). Lastly, P33 criticizes that model transformation languages lack mechanisms for describing and storing information about the properties of transformations (C70).
Extendability
Being able to extend the capabilities of a model transformation language seems to be less of a concern in the community, as evidenced by the fact that only P50 touches on this issue. Its authors explain that external MTLs can only be extended (“if at all”) with a specific general-purpose language (C71). Internal model transformation languages, of course, do not suffer from this problem since they can be extended using the host language
[21, 32, 46].
Just better
Apart from specific aspects in which the literature ascribes advantages or disadvantages to model transformation languages, there are also several instances where a much broader claim is made.
P86, for example, states that there exists a consensus that MTLs are most suitable for defining model transformations (C78). This claim is reiterated in several other publications using statements such as “the only sensible way” or “most potential due to being tailored to the purpose” (P9, P23, P63, P64, P66). However, one publication claims that both GPLs and MTLs are not well suited for model migrations and that dedicated migration languages are required instead (P34 C80).
Learnability
Learnability issues of tools have been shown to correlate positively with usability defects [1] and thus to affect their general acceptance.
However, the learnability of model transformation languages is rarely discussed in detail. P30 (C81), P58 (C83) and P81 (C84) all express concerns about the steep learning curve of model transformation languages, and P52 explains that transformation developers are often required to learn multiple languages, which requires both time and effort (C82).
Performance
Execution performance is an important aspect of model transformations. Oftentimes, the goal is to trigger a chain of multiple transformations with each change to a model. Hence, good transformation performance is paramount to the success of model transformation languages.
Opinion on performance in the literature is divided. On the one hand, there are publications such as P52 (C88) and P80 (C89) which describe the performance of dedicated MTLs as worse than that of compiled general-purpose programming languages. On the other hand, P95 states that some of the introduced transformation languages are more performant (C85), citing articles from the Transformation Tool Contest (TTC), and P675 shows a performance comparison of transformations written in Java and GrGen in which GrGen performs better than Java (C86). There are also more nuanced views on the subject. P45 describes that practitioners sometimes perceive the performance as worse and that there are factors that hamper performance (C87): transformation languages are often interpreted, there is a mismatch with the underlying hardware, and developers have less control over the algorithms that are used. However, the authors also describe that specialized optimizations can bridge the performance gap.
Productivity
Increased productivity through the use of DSLs is a much cited advantage
[50] (C6D). Unsurprisingly, it resurfaces in various forms in the context of model transformation languages as well. P45, for instance, describes that the use of declarative MTLs improves the productivity of developers (C91). P29 goes even further, claiming that the use of any model transformation language results in higher productivity (C90).
This is contrasted by the hypothesis, put forward in P59, that productivity in general-purpose programming languages might be higher because it is easier to hire expert users (C93). Lastly, P32 raises the concern that some of the interviewed subjects perceive model transformation languages as not effective, i.e. not helpful for the productivity of developers (C92).
Reuse and maintainability
In our gathered literature, maintainability is used as a motivation for modularization and reuse concepts. P29, P60 and P95 all claim that reuse mechanisms are necessary to keep model transformations maintainable. Combined with a total of eight publications (P4, P10, P29, P33, P41, P60, P95, P78) that state that reuse is hardly, if at all, established in current model transformation languages, this paints a bleak picture for both maintainability and reuse. The need for reuse mechanisms has already been recognized in the research community, as stated by P77, in which the authors explain that a plethora of mechanisms have been introduced (C95) but are hindered by several barriers such as insufficient abstraction from meta-models and platforms or missing repositories of reusable artefacts (C103).
There exists only a single claim that directly addresses maintainability. P44 states that bidirectional model transformation languages have an advantage when it comes to maintenance (C94).
Apart from the maintainability of written code, there is also the maintainability of languages and their ecosystems. Surprisingly, this is hardly discussed in the literature at all. Only P52 explains that evolving and maintaining a model transformation language is difficult and time-consuming (C101).
Semantics and verification
Three publications (P39, P23, P58) suggest that most model transformation languages do not have well-defined semantics, which in turn makes verification and verification support difficult (P22 C109). P44, however, explains that bidirectional transformations are advantageous with regard to verification (C107).
Tool support
Tools are another important aspect in the MDE life cycle according to Hailpern and Tarr
[28]. They are essential for efficient transformation development. Regrettably, MTLs lack good tool support according to P23, P45, P52 and P80, and where tools exist, they are nowhere near as mature as those of general-purpose languages, as stated in P74 (C119). Additionally, the authors of P94 explain that developers of MTLs need to put extra effort into the creation of tool support for the language (C121). This might, however, be worthwhile, because P44 presumes that dedicated tools for model transformation languages have the potential to be more powerful than tools for GPLs in the context of transformations (C114). And due to the high analysability of MTLs, P45 explains that tool support could potentially thrive (C115). Internal MTLs, on the other hand, are able to inherit tool support from their host languages, as reported by P23 (C113). This helps to mitigate the overall lack of tool support, at least for internal MTLs.
An interesting discussion to be held is how important tool support actually is for the acceptance of MTLs. Whittle et al.
[65] describe that organizational effects are far more impactful on the adoption of MDE, while the results of Cabot and Gérard
[16] contradict this observation, citing interviewees from commercial tool vendors who stopped the development of tools due to a lack of customer interest.
Versatility
It should be self-evident that languages designed for a special purpose do not possess the same level of versatility and the same area of applicability as general-purpose languages. Hence, it is not surprising that all mentions of the versatility of model transformation languages in our gathered literature paint MTLs as less versatile than GPLs (P52 (C124), P80 (C125), P94 (C127)).
RQ2: Supporting evidence for advantages and disadvantages of MTLs
We found a number of different ways used by authors of our gathered literature to support their assertions. The largest portion of “supporting evidence” is made up of cited literature, i.e. a claim is followed by a citation that supposedly supports the claim.
The second way claims are supported is by example, i.e. authors implemented transformations in MTLs and/or GPLs and reported on their findings. A related approach is relying on experience, i.e. authors state that from experience it is clear that some pronouncement is true or that it is a well-established fact within the community.
Third, there is empirical evidence, i.e. studies designed to measure specific effects of model transformation languages or case studies designed to gather the state of MTL usage in industry.
Last, there are those assertions that are not supported by any means; authors simply state that an advantage or disadvantage exists. We assume that some claims made in this way implicitly rely on experience but do not state so. Nevertheless, since there is no way of testing this assumption, we have to record such claims exactly the way they are made, i.e. without any evidence.
In the following sections, we discuss in detail how each group of evidence is used in the literature to support claims about advantages or disadvantages of model transformation languages. As mentioned previously, Table 4 contains a complete overview of each claim and the evidence through which it is supported.
Citation as evidence
Using citations to support statements is a core principle of research. It should therefore come as no surprise that citations are used to support claims about model transformation languages. An interesting aspect for us to explore was how the cited literature supports these claims. For that purpose, as stated in Sect. 3, we created a graphical representation to trace citations used as evidence through the literature. The graph is shown in Fig. 7. It is inspired by the UML syntax for object diagrams. The head of an “object” contains a publication ID, while the body contains the categories for which advantages (+) or disadvantages (–) are claimed in the publication. Each category within the body is accompanied by an ID which can be used to find the corresponding claim in Table 4. We use different borders around publications to denote the type of evidence provided by a publication, and arrows from a category within one publication to another publication denote the use of a citation to support a claim. Lastly, if the content of a publication does not concern itself with model transformation languages but with DSLs in general, the publication ID is followed by “(DSL)”.
Our graph allows the reader to easily gauge the following:
- What publication claims an advantage or disadvantage of MTLs in which category?
- What type of evidence (if any) is used to support claims in a publication?
- Which exact claims are supported through the citation of what publication?
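To make the structure of this representation more tangible, the following short Java sketch models, in memory, the information encoded in Fig. 7: publications with their claims, the type of evidence they provide and the citation arrows between them. It is a hypothetical illustration of our own, not the tooling used to create the figure; the example values are taken from the discussion later in this subsection.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One possible in-memory representation of the citation graph shown in Fig. 7.
public class CitationGraphSketch {

    // Border styles in the figure correspond to the kind of evidence a publication provides.
    enum Evidence { NONE, CITATION, EXAMPLE_OR_EXPERIENCE, EMPIRICAL }

    // A claim: its ID from Table 4, its category and whether it is positive (+) or negative (-).
    record Claim(String id, String category, boolean positive) {}

    static class Publication {
        final String id;
        final boolean aboutDslsOnly;   // corresponds to the "(DSL)" suffix in the figure
        final Evidence evidenceType;   // corresponds to the border style
        final List<Claim> claims = new ArrayList<>();
        // Arrows in the figure: claim ID -> publications cited to support that claim.
        final Map<String, List<Publication>> citedSupport = new HashMap<>();

        Publication(String id, boolean aboutDslsOnly, Evidence evidenceType) {
            this.id = id;
            this.aboutDslsOnly = aboutDslsOnly;
            this.evidenceType = evidenceType;
        }
    }

    public static void main(String[] args) {
        // Example relation discussed below: P80 supports C61 by citing the DSL publication P675.
        Publication p675 = new Publication("P675", true, Evidence.CITATION);
        Publication p80 = new Publication("P80", false, Evidence.CITATION);
        p80.claims.add(new Claim("C61", "Expressiveness", true));
        p80.citedSupport.put("C61", List.of(p675));

        p80.citedSupport.forEach((claimId, cited) ->
            cited.forEach(target -> System.out.println(p80.id + ":" + claimId + " -> " + target.id)));
    }
}
```

Following the citedSupport edges until a publication with empirical evidence, or with no evidence at all, is reached mirrors how the citation chains are traced in the remainder of this section.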
In the following, we discuss observations about citations as evidence that can be made with help from the citation graphs.
First, only a total of 25 citations, split among 12 of the 58 gathered publications, are used to support claims. This constitutes less than ten percent of all assertions found during our literature review. Seven of the 25 citations cite a publication that itself only states claims without any evidence (P63, P94, P673, P674, P800). A further 11 end in a publication that uses examples or experience (see also Sect. 4.4.3) (P664, P665, P667, P671, P672, P676, P77, P64, P804, P801). Next, there are 3 citations that cite publications which in turn cite further publications to support their claims (P677, P675), leaving only 4 citations that cite empirical studies (P669, P670, P803) (see also Sect. 4.4.2). To us, this is worrying because the practice of citing literature that only restates an assertion erodes the confidence readers can have in citations as supporting evidence.
From the graph, it is clearly evident that there is no single authoritative source that is cited for claims about model transformation languages. This is indicated by the fact that only five publications (P63, P77, P673, P675, P803) are cited more than once, and each of these exactly twice. Moreover, of those five publications, P675 and P803 are each cited by only a single publication: P675 is cited twice by P80, and P803 twice by P675. Relatedly, nearly every claim, even within the same category, is supported through different citations.
Furthermore, only claims about conciseness, expressiveness, reuse & maintainability, tool support, performance and statements that MTLs are just better are supported using citations. It is interesting to note that the claims within these categories that are supported by citations are either all positive or all negative. This is not to say that there are no contrasting claims (see, for example, C113 and C116 in P23), only that, if citations are used for a category, the supported claims are either all positive or all negative.
Another thing to note is that, in some instances, claims about model transformation languages are supported by citing publications on domain-specific languages in general. This can be seen in P80: the claims C60 and C61 are both supported by a citation of P675, which is a publication that concerns itself with DSLs. Interestingly, P675 itself then cites both publications about DSLs (P800, P801, P803) and a publication about model transformation languages (P804) to support claims stated within it.
Coming back to citations of empirical studies, we have to report that, while there are 4 citations of empirical studies, only a single claim about model transformation languages (C116 in P23) is actually supported by them. This is due to P803 being an empirical study about DSLs and P669 and P670 both being cited as evidence for C116.
Lastly, apart from those publications that make only a single claim, no publication supports all of its claims with citations. Extreme cases of this can be seen in P45 and P52, which together make a total of 16 claims but support only three of them with citations, leaving the other 13 unsubstantiated.
Empirical evidence
To our disappointment, we have to report an overall lack of empirical evidence for properties of model transformation languages. Only four publications (P32, P59, P669, P670) in our gathered literature assess characteristics of model transformations by empirical means (see Fig. 7 and Table 4). Of those four, only P59 focuses on MTLs as its central research object, while the other three are studies about MDA that happen to contain results about transformation languages. P803 is also an empirical study but, as mentioned in Sect. 4.4.1, it focuses on domain-specific languages in general, not on MTLs. In order to provide the necessary context for scrutinizing the claims extracted from these publications, we provide a short overview of the central aspects of P32, P59, P669 and P670 in the following.
The study described in P59 comprised a large-scale controlled experiment with over 78 subjects from two universities as well as a preliminary study with a single individual. Subjects had to solve 231 tasks using three different languages (ATL, QVT-O and Xtend). The tasks focused on one of three aspects of transformation development, namely comprehending an existing transformation, changing a transformation and creating a transformation from scratch. After analysing the results, the authors come to the disillusioning conclusion that there is “no statistically significant benefit of using a dedicated transformation language over a modern general-purpose language”.
The authors of P32 report on an empirical study of the efficiency and effectiveness of MDA. A total of 38 subjects, selected from a model-driven engineering course, were asked to implement the book-purchasing functionality of an e-book store system. Afterwards, the subjects evaluated the perceived efficiency and effectiveness of the methodology used. This also included questions about the QVT language used, which was perceived as only marginally efficient.
Both P669 and P670 are reports of industrial case studies. The objective of the study in P669 was to investigate the state of practice of applying MDSE in industry. To achieve this, the authors collected data from tool evaluations, interviews and a survey. Four different companies were consulted to collect the data. Again, while some of the reported results concern transformations, model transformation languages were not explicitly discussed. Similarly, P670 reports on an industrial case study involving two companies, aiming to collect factors that influence the decision to adopt MDE. For that purpose, multiple preselected individuals at both companies were interviewed. Just as with P669, the study did not focus directly on transformations or transformation languages.
As evident from Fig. 7, the results from P32 and P59 have yet to be used in the literature for supporting claims about MTLs. Since both of them have only been published recently, we are, however, optimistic about this prospect.
Evidence by example/experience
Using examples to demonstrate shortcomings of any kind has a long-standing tradition, not only in informatics. Using examples to demonstrate an advantage, however, can result in less robust claims (especially toy or textbook examples, as noted by Shaw
[56]). As such, it is important to differentiate whether a claim is made by demonstrating a shortcoming or a benefit.
In our gathered literature, ten publications use examples to support a claim. Interestingly, examples are mainly used to support broad claims about model transformation languages. This can be observed in the fact that P34 and P64 use examples to try to demonstrate that GPLs are not well suited for transforming models, while P664, P665, P667, P672, P804 and P676 try to demonstrate the general superiority of MTLs by showing examples of transformations written in MTLs. Other claims supported through examples are the demonstration in P59 of the reduction in code size when using rule-based MTLs, and the statements in P77 about the extensive number of reuse mechanisms for MTLs, supported by listing gathered publications about the proposed mechanisms.
Long-time practitioners of model transformation languages, or of programming languages in general, often rely on their experience to make assertions about aspects of a language. And while the experience of long-term users can yield valuable insights, it is still subjective and can therefore vary in accuracy. In our case, six publications directly state that their assertions stem from experience. The authors of P3 report on their experiences using different languages to implement transformations, coming to the conclusion that graphical rule definition is more intuitive, an experience shared by P40. P43 names user feedback as grounds for claiming that visual syntax has advantages in comprehension but makes writing transformations more difficult. And the authors of P672 share that they are under the impression that graph transformations are the superior method for defining refactorings.
Since experience is subjective, contradicting experiences are bound to occur. While the authors of P10 believe from experience that current MTLs are not abstract enough for expressing transformations, the authors of P671 feel that the difficulty of writing transformations in an MTL stems from the chosen MDD method rather than from the syntax of the language.
No evidence
Figure 7 and especially Table 4 make it clear that a large portion of both positive and negative claims about model transformation languages are never substantiated. In fact, of the 127 claims, ~69% are unsubstantiated. Adding those that are supported by a citation which in the end turns out to be unsupported as well brings the number up to ~77%. In particular, the categories concerning the usability of MTLs, such as comprehensibility, ease of writing a transformation and productivity, lack meaningful evidence. The fact that all three of these are cornerstones of language engineers' arguments for the superiority of model transformation languages makes this especially worrisome.
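As a rough back-of-the-envelope check (our own approximation, assuming both percentages are computed over all 127 claims and then rounded), these proportions correspond to

\[
0.69 \times 127 \approx 88 \qquad\text{and}\qquad 0.77 \times 127 \approx 98,
\]

i.e. roughly ten additional claims whose only “support” is a citation that itself turns out to be unsubstantiated.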
We believe that the community needs to recognize this fact. The necessity or superiority of model transformation languages has to be properly motivated. This means that it is not sufficient to claim advantages or disadvantages without providing at least some form of explanation of why the claim is valid (more on this in Sect. 5.3).