Introduction

Inconsistencies can cause catastrophic events: e.g., NASA's unmanned Mars Climate Orbiter [101] was destroyed in 1999 because its design teams used inconsistent units of measurement, and Airbus incurred a 6 billion dollar loss in 2006 because different versions of its design tools used inconsistent specifications [114].

Inconsistencies can be found in several stages of the system development life cycle. In the early stages, when engineers are eliciting requirements, they might misunderstand the stakeholders' needs. The stakeholders' needs might then be modeled wrongly, resulting in a product that does not match their expectations. Another inconsistency can arise when the models (e.g., class diagram, activity diagram) are correct but the software developers misunderstand them, resulting in source code that does not represent the design intention. The crucial point here is that the earlier an inconsistency is found, the less it costs to fix [64]. In the previous examples, only one domain was involved, i.e., the software engineering domain. Even in these scenarios, identifying and managing inconsistencies is already difficult.

Furthermore, systems are becoming increasingly complex to develop, especially when they are heterogeneous and there is a need to combine models created by engineers with different expertise and from different domains [6, 7, 40, 125, 144]. One example of such a complex system is a mechatronic component: to develop it, one might need to combine expertise from different engineering domains such as mechanics, electronics, and software [124].

Formally, we say that models are from the same domain if they are created by engineers from the same engineering discipline, e.g., software engineering or mechanical engineering. Models of the same domain can be created using different modeling tools: e.g., one UML model can be created using Lucidchart and another using StarUML. We say that models are from different domains if they are created by engineers from different engineering disciplines, who might be using the same or different modeling tools. For example, engineers can use Simulink to design both the electrical and mechanical components of the six degree-of-freedom robotic arm in Fig. 1. The models of the electrical and mechanical components are shown in Fig. 2.

Fig. 1 Six degree-of-freedom robotic arm. Reproduced with permission of MathWorks. Copyright (2020) [46]

Due to the sheer complexity of modern systems and the presence of multiple authors, inconsistencies between the models might be inadvertently introduced: e.g., one model might assume the presence of a certain feature, while another might assume its absence. This problem can be further amplified by differences in terminology across domains: e.g., for a software engineer, a feature is a functionality provided by the system, but for a systems engineer, a feature is an aspect of the system, such as its color. This kind of misunderstanding can affect the consistency of the models. Therefore, the terms have to be well described in order to simplify the process of maintaining consistency between models.

Fig. 2 A Simulink model of the electrical and mechanical components of the robotic arm. Reproduced with permission of MathWorks. Copyright (2020) [46]

Maintaining consistency between models is known to be a challenging task, especially because it is difficult to predict the effects that changes introduced in one model have on other models [113]. While maintaining consistency between models is imperative [117], in practice it can never be fully ensured [63], and the system engineer is responsible for defining what has to be consistent and when. The process of managing these models can be expensive. Thus, we believe that consistency should be managed only when the cost of maintaining it is lower than the cost that an eventual inconsistency can cause.

In order to understand industrial practices and academic approaches aimed at checking and maintaining consistency between models from different domains, we defined four Research Questions (RQs):

  • RQ1: How do model life cycle management tools address consistency between models from different domains?

    • Motivation: Maintaining the consistency between models of different domains is a challenging task. Thus, we investigate how tools support this task.

    • Answer: We identify 80 tools, but the majority of them do not check consistency between models from different domains.

  • RQ2: What inconsistency types are addressed by the model life cycle management tools?

    • Motivation: In order to indicate gaps related to the inconsistency types addressed by the tools, we investigate which inconsistency types are captured and which are not.

    • Answer: The inconsistency types addressed by the tools are: Behavioral, Information, Interaction, Interface, Refinement, and Requirement. Interface is the most popular; Behavioral and Refinement are the least popular. We believe that the latter inconsistency types, previously identified in the research literature, are rarely addressed by the tools due to the complexity of capturing them. Surprisingly, the tools do not advertise that they can capture the Name inconsistency type. We conjecture that since a Name inconsistency can be easily captured, and it is also an Interface inconsistency (but not vice versa), the tool builders prefer to advertise the latter.

  • RQ3: Which strategies have been used to keep the consistency between models of different domains?

    • Motivation: To identify the drawbacks of the technologies and approaches used to keep consistency between models of different domains.

    • Answer: The following strategies have been used to keep the consistency between models from different domains: Interoperability, Inconsistency Patterns, Modeling dependencies explicitly, Parameters or constraints management, Ontology, STEP, and KCModel. Some of these strategies are based on prototypes or approaches having the following main drawbacks: they are time-consuming, they cause data loss, and they are tool dependent.

  • RQ4: What are the challenges to manage models of different domains?

    • Motivation: To identify the main challenges and thereby derive directions for future work.

    • Answer: Due to the heterogeneous environment to which this topic belongs, the most frequently cited challenges are interoperability, maintaining consistency, dependency management, and traceability.

To achieve this goal, we conducted a systematic literature review [77]. Taking into consideration that scientific publications do not always reflect industrial practices, we decided to include white papers, such as technical reports. Thus, we have covered both industrial and academic sources.

Answering RQ1–RQ3 is useful both for researchers willing to develop new approaches and for tool vendors willing to add new features or improve existing ones. Answering RQ4 is useful for researchers willing to study model management, allowing them to organize their studies knowing the challenges they will face.

This study is an extension and revision of our previous work [139]. In this extension, we updated the set of papers, causing minor alterations to the answers of the first three research questions, and added the fourth research question. The main contributions of this study are:

  • List of model management tools;

  • Classification of the inconsistency management approaches;

  • Identification of gaps, such as the need to improve current tools to address more kinds of consistency checks, and directions for future work, indicating that further research should be done on Interoperability, Maintaining Consistency, Dependency Management, and on capturing the Behavioral, Refinement, and Requirement inconsistency types.

Related work

To the best of our knowledge, no systematic literature review has considered model management tools focusing on cross-domain model consistency. However, a number of studies consider consistency checking of models and model management within the software engineering domain. All the studies reviewed below focus on models from the same domain, usually software engineering; in contrast, we consider cross-domain consistency checking.

Cicchetti et al. [33] conducted a systematic literature review on existing solutions for multi-view modeling of software and systems. The authors further investigated the support for consistency management provided by multi-view modeling solutions. They identified consistency management as one of the most common limitations and highlighted the lack of support for semantic consistency management. Franzago et al. [51] conducted a systematic mapping study of collaborative model-driven software engineering approaches from a researcher's viewpoint. The authors decomposed the collaborative Model-Driven Software Engineering (MDSE) approaches into three main dimensions, with model management being one of them. Franzago et al. presented characteristics of the model management infrastructure, focusing on the supported artifacts, modeling languages, multi-view support, editors, and application domains.

Bharadwaj et al. [21] conducted a survey of the model management literature within the mathematical modeling domain. The authors identified three approaches to supporting model management in this domain and categorized various modeling systems based on the features they provide.

Santos et al. [122] conducted a systematic mapping study to investigate existing inconsistency management approaches within Software Product Lines. The authors conclude that existing approaches should provide faster feedback, support co-evolution of the artifacts, and handle the inconsistencies.

Muran et al. [98] conducted a systematic literature review on software behavior model consistency checking. In conclusion, the authors suggested that future research should focus on tool support for consistency checking, tool integration, and better strategies for inconsistency handling.

Spanoudakis and Zisman [130] conducted a literature review to investigate techniques and methods that support the management of inconsistencies in models within the software engineering domain. Usman et al. [141] conducted an informal literature review on consistency checking techniques for UML models, focusing on five consistency types (inter-/intra-model, evolution, semantic, and syntactic). They concluded that almost all techniques provide consistency rules to validate consistency between UML models. It is worth noting that these studies did not follow a strict literature review protocol.

Lucas et al. [90] conducted a systematic literature review to identify and evaluate current approaches for model consistency management between UML models. They also briefly proposed a solution to overcome the limitations they found. Ahmad and Nadeem [3] conducted a survey to evaluate Description Logics-based approaches to consistency checking, also focusing on UML models.

Torre et al. [138] conducted a systematic mapping study on UML consistency rules and observed that there is limited tool support for checking these rules. Later, Torre et al. [137] conducted a survey in order to understand how model consistency between UML models is addressed in academia and industry.

Background: model management

INCOSE defines model-based systems engineering (MBSE) as "the formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later life cycle phases" [70]. According to Friedenthal et al. [52], MBSE was proposed to facilitate systems engineering activities: following MBSE, system engineers use models instead of documents, which is expected to improve the quality of system specification and design, as well as the communication among the development team.

A model is a representation of reality, an abstraction of something relevant to the stakeholder described using well-defined and unambiguous languages [48].

Model management emerged from the need to organize and maintain models while ensuring consistency. Franzago et al. [51] stated that the infrastructure for model management may include a model repository and modeling tools. This infrastructure is responsible for managing the life cycle of the models, i.e., creating, editing, and deleting them. The focus of this study lies on one aspect of model management: consistency checking of models from different domains.

Product lifecycle management (PLM) [56, 57, 118, 132, 134] is an environment and infrastructure: a system of methods, processes, and practices that covers the entire product lifecycle, from requirements definition and design to late stages such as maintenance and recycling of the product. While model management is focused on the models of the product, PLM includes every artifact related to the product. Teamcenter is an example of PLM software that can also provide model management capabilities. Since PLM can include model management features, we have decided to include it in our literature review.

Methodology

Selecting the literature review technique

In order to answer the Research Questions (RQ1–RQ4), we conducted a literature review. Several literature review techniques have been proposed in the scientific literature, e.g., snowballing [145, 148], systematic literature review (SLR) [77], and systematic mapping review (SMR) [107].

Table 1 Keywords

We opted for an SLR because an SLR identifies, analyzes, and interprets data related to specific RQs. In contrast, an SMR aims to answer general research questions, and snowballing can be labor intensive. Thus, we believe an SLR is the most appropriate approach to answer our RQs. To circumvent the inherent SLR limitation implied by the choice of the search strings, we combined different keywords, obtaining 600 different search strings. This process is explained in more detail in the next section.

An SLR consists of the creation of research questions (RQs), queries on electronic sources guided by the RQs, and the use of predetermined criteria for eligibility and relevance to form the set of accepted papers to be used in the study. As the data source, Kitchenham and Charters [77] recommend a search engine that offers wide coverage of sources. Thus, we have chosen Google Scholar: it offers wide coverage of electronic sources across different research areas, and it has been used in multiple software engineering studies [53, 71, 83, 94, 100, 148].

Data extraction

Since we are mainly interested in tools (product life cycle management tools and model management tools), we created search strings to query Google Scholar based on PICO [79]. We selected and organized keywords into four categories: process supported by tools, model, consistency, and multiple domains. Figure 3 presents an overview of the selection of the keywords. For each category, we selected keywords related to the Research Questions, as presented in Table 1.

The first two categories (process supported by tools and model) are the base for answering all RQs, and the remaining two categories (consistency and multiple domains) are more specific for answering RQ2 and RQ3. For example, the reasoning behind choosing the keyword “Dependency” in the category “Consistency” was that in our initial research we found that dependency modeling has been used to maintain consistency. Thus, this keyword could help find more results that could answer RQ3.

We combined the keywords from different categories to create the queries executed in Google Scholar. For instance, for the first query we used the following keywords: "Model Management, MBSE, Consistency, Multidomain Model Integration." For the second query we used "Model Management, MBSE, Consistency, Multi Domains," and so on. In total we have \(600 = 4\times 10\times 3\times 5\) combinations.
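The combination of keyword categories can be sketched as follows (a minimal illustration; the keyword lists here are placeholders sized like the four categories, not the actual entries of Table 1):

```python
from itertools import product

# Illustrative placeholder keywords; the actual entries are listed in Table 1.
# The category sizes match the paper: 4 x 10 x 3 x 5 = 600 combinations.
process = [f"process{i}" for i in range(4)]
model = [f"model{i}" for i in range(10)]
consistency = [f"consistency{i}" for i in range(3)]
domains = [f"domain{i}" for i in range(5)]

# Each query is one keyword per category, joined into a search string.
queries = [", ".join(combo) for combo in product(process, model, consistency, domains)]
print(len(queries))  # 600
```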

Due to the similarity of the queries, some papers were retrieved multiple times. We automatically excluded these duplicates prior to the manual inspection. In total, we obtained 4293 hits, but only 618 of them were unique.

Fig. 3 Overview of the selection of the keywords

Manual inspection

The selection criteria were defined in order to avoid bias and to reduce subjectivity. The inclusion (I) and exclusion (E) criteria were designed to answer the RQs, as proposed by Kuhrmann et al. [81]. The following are the inclusion and exclusion criteria we used:

  • I1: Studies written in English and available in full text.

  • I2: Studies reviewing or proposing a new technique, approach, method, or tool (including prototypes) that supports model management.

  • I3: Studies mentioning tools related to Product Lifecycle Management (PLM) and model lifecycle management.

  • E1: Studies that do not mention model (in)consistency.

  • E2: CVs, PhD and Master's theses, and books or book chapters. Although we excluded all PhD theses, we considered the publications derived from them and applied the inclusion and exclusion criteria to those publications. We decided to check for derived papers because we chose to be as conservative as possible and did not want to exclude PhD theses without checking for derived papers. At the end of this process, we included 32 derived papers.

In order to identify the relevance of each paper, we read the title, abstract, and conclusion of the 618 papers.

Relevance assessment was performed iteratively. In each iteration, we used 15 papers randomly selected from the list of papers we had downloaded. The first and the last authors of this paper individually read the title, abstract, and conclusion to label the relevance of each paper. Both raters were software engineering researchers with at least a Master's degree in Computer Science. At the end of each iteration, we computed Cohen's \(\kappa \) [34] to measure the agreement between the raters and discussed the disagreements. According to Cohen [34] and Landis et al. [82], a Kappa coefficient in the range of 0.61–0.80 is interpreted as substantial agreement; this range was used in previous studies [16, 22, 69].
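The agreement measure can be sketched as follows (a minimal implementation of Cohen's \(\kappa \) for two raters; the relevance labels in the example are hypothetical):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from the raters' marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e)

# Two raters labeling four papers as relevant (1) or not relevant (0):
print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
```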

Table 2 The \(\kappa \) value obtained in each iteration

As presented in Table 2, four review rounds were needed to reach a \(\kappa \) value greater than 0.6. In the first review round, we obtained the lowest \(\kappa \) value, because the interpretation of the inclusion and exclusion criteria was not yet clear among the raters. We improved the agreement level in every subsequent review round: in the second and third rounds, we obtained 0.29 and 0.33, respectively. We finalized the process in the fourth round by reading 20 papers instead of 15 and obtained an agreement level of 0.61.

Once we reached an acceptable agreement level, the first author continued the selection procedure independently. After reading the title, abstract, and conclusion of the 618 papers, we labeled 193 as possibly relevant. Finally, the first author read these papers in full and selected 96 papers to answer the Research Questions. Figure 4 presents a summary of the paper selection process.

Fig. 4 Search and selection process. We obtained 4293 hits in the initial search. After removing all duplicated hits, we obtained 618 unique hits. Applying the selection criteria left 193 papers, and we concluded the process with 96 papers

Fig. 5 Selected publications organized by year

Fig. 6 Distribution of selected publications organized by type and year

Gathering information about tools

The previous section described how we identified the relevant papers supporting our answers to the RQs. Although the papers provide a list of tools, the information regarding the tools is not necessarily well presented or detailed. As a consequence, we decided to use additional data sources, for instance the website of each tool. One option to gather information about the tools would be to install and try all of them. However, this option was not feasible, mainly because of the need to learn how to use them, but also because most of the tools are commercial, requiring a license to try them.

We selected the closed card sorting technique [131] to categorize the type of consistency the selected tools address. Taylor et al. [133] describe consistency as “an internal property of an architectural model, which is intended to ensure that different elements of that model do not contradict one another” and distinguish the following five inconsistency types.

Name inconsistencies happen when components, connectors, or services have the same name. In most programming languages, this kind of inconsistency is trivial to capture. However, there are cases in which capturing it is not trivial: Taylor et al. note that large systems may have two or more similarly named GUI-rendering components, and identifying the misuse of these components can be a difficult task.

Interface inconsistencies happen when connected interface elements have mismatching values, terminologies, or schemes [66]. Name inconsistencies are interface inconsistencies, but not vice versa. Taylor et al. [133] explain that "A component's required service may have the same name as another component's provided service, but their parameter lists, as well as parameter and return types, may differ." They exemplify that this inconsistency can arise when there are methods with the same name but different parameters and the connector between the client and server components is a direct procedure call.

Behavioral inconsistencies Taylor et al. [133] explain that these inconsistencies "occur between components that request and provide services whose names and interfaces match, but whose behaviors do not." This kind of inconsistency can happen when the behavior of an element is not the expected one. An example of a behavioral inconsistency would be a service provider that assumes a distance is expressed in kilometers while the requester assumes it to be in miles.
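The kilometers-versus-miles example can be sketched as follows (hypothetical components; the names and interfaces match, but the assumed units do not):

```python
# Hypothetical components: the service name and interface match,
# but the provider and requester assume different units.
class RouteProvider:
    def distance(self, origin, destination):
        # The provider computes the distance in kilometers.
        return 100.0  # km

class FuelPlanner:
    MILES_PER_GALLON = 25.0

    def gallons_needed(self, provider, origin, destination):
        # The requester silently assumes the distance is in miles:
        # a behavioral inconsistency invisible to interface checks.
        distance_miles = provider.distance(origin, destination)
        return distance_miles / self.MILES_PER_GALLON

planner = FuelPlanner()
# 100 km is ~62.1 miles, so the correct answer would be ~2.49 gallons,
# but the planner computes 4.0 because of the unit mismatch.
print(planner.gallons_needed(RouteProvider(), "A", "B"))  # 4.0
```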

Interaction inconsistencies this kind of inconsistency can "occur when a component's provided operations are accessed in a manner that violates certain interaction constraints, such as the order in which the component's operations are to be accessed" [133]. To exemplify, assume there is a Queue component (server) that stores a list of elements. This component requires that it not be empty before an attempt to remove an element. If a client component does not respect this constraint, an interaction inconsistency occurs.
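The Queue example can be sketched as follows (a hypothetical component that enforces its interaction constraint at runtime):

```python
# Hypothetical Queue component with an interaction constraint:
# remove() may only be called when the queue is not empty.
class Queue:
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def remove(self):
        if not self._items:
            # The client violated the interaction constraint.
            raise RuntimeError("interaction inconsistency: remove() on empty Queue")
        return self._items.pop(0)

q = Queue()
q.add("job-1")
q.remove()        # respects the constraint: the queue was not empty
try:
    q.remove()    # violates it: the queue is now empty
except RuntimeError as e:
    print(e)
```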

Refinement inconsistencies occur between models of different abstraction levels, because some elements are suppressed or inserted to fit the corresponding abstraction level. Taylor et al. [133] explain that "a very high-level model of the architecture may only represent the major subsystems and their dependencies, while a lower-level model may elaborate on many details of those subsystem and dependencies."

We used the consistency types identified by Taylor et al. [133] to label the selected tools. In those cases in which it was not possible to match the consistency types identified by Taylor et al. with the description provided by the tools, we used the consistency types provided by the tools themselves. We labeled as DNF (data not found) those tools for which we could not find information about the consistency types they address.

Additionally, we conducted a survey designed following the recommendations of Kitchenham and Pfleeger [78]. We contacted the party responsible for each tool, inquiring whether the tool can check consistency between models from different domains. In case of a positive answer, we asked which consistency types the tool can address. The survey was conducted either via email, via a question-and-answer form on the official website, or via the official forum of each tool. Not all tools provide contact information, such as an email address; consequently, we could not contact all of them. We sent 58 messages (15 emails failed to deliver) and received 24 replies. Our response rate is \(\approx \)50%, much higher than response rates commonly reported in the software engineering literature [17, 116].

Categorizing challenges and future work

During the full reading of the selected papers, we collected key sentences summarizing the challenges faced by the authors. Then, we applied the open card sorting technique [131] to categorize these challenges.

Complementary to the list of challenges faced by the authors, we also compiled a list indicating directions for future work. To organize this list, we followed the same methodology described above. The categories are essentially the same; the only category not present is Simulation. Studies that do not explicitly state future work are grouped in the category "Not Applicable."

Data description

We organize the selected publications based on type (Symposium, Conference, Journal, Congress, Workshop, and Others), venue, and year. It is important to make clear that these publications do not represent all publications about model management, but only those that are relevant according to our selection criteria.

Figure 5 presents the distribution of the publications on model management. The first study was published in 1999, and only after a hiatus of 4 years did the number of publications increase. More precisely, the average number of publications between 1999 and 2011 is less than 3 studies per year, whereas between 2012 and 2018 it is \(\approx \)10 studies per year.

Table 3 Publication venues that have more than two selected publications

Table 3 presents the publication venues that hosted more than two publications. There are publications in 56 different venues spread across different research areas such as software and systems engineering, aerospace engineering, and information engineering. This indicates that model management is not specific to a single research area but spans different research domains.

The venue with the most publications is INCOSE, with six publications spread across 2012 (one publication), 2015 (two publications), and 2016 (three publications). Journal papers are the majority, representing 43.75% of the selected publications, followed by conference papers with 32.29%. Figure 6 presents the distribution of papers per type of venue and per year. The type "Others" represents the white papers.

Results

RQ1: How do model life cycle management tools address consistency between models from different domains?

We analyzed the descriptions of the tools mentioned in the selected papers and organized them into three categories, as described in Table 4.

  1. Provide consistency checking on models of different domains: we identified 32 tools that claim to perform consistency checking on models of different domains. This number represents 40% of the total number of tools we found.

  2. Provide consistency checking but only on models of the same domain: we identified 20 tools that fit into this category, representing 25% of the total.

  3. Do not provide any consistency checking: we assume that tools that do not explicitly claim to provide consistency checking do not have this functionality. We identified 28 tools (35%) that fit into this category. We did not expect to find this many tools in this category, since we used keywords related to multiple domains to restrict the results.

Table 4 Tools organized into three categories (RQ1)
Table 5 Tools and kind of consistency (RQ2)

RQ2: What consistency types are addressed by the model life cycle management tools?

In order to answer RQ2, we classified the consistency types addressed by the tools identified in the previous subsection.

We focus on those tools that provide model consistency checking at some level, and more specifically on the type of consistency they address. However, for \(\approx \)50% of these tools it was not possible to find a description of the kind and level of consistency checking. For the tools that do provide such a description, we observe that half of them address only one kind of consistency check. Table 5 presents the list of tools and the consistency types they address.

In Subsection 4.4 we present a list of consistency types identified by Taylor et al. [133]. Additional consistency types are presented below:

  • Requirement Consistency—Checks whether the requirements from a requirement list are related to some model element and whether this relationship is valid.

  • Information Consistency—Checks whether data that can be presented on different media remain the same regardless of how they are presented [36]. An example of an Information inconsistency would be a distance presented in different units without respecting the conversion calculation.
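The unit example can be sketched as a simple check (hypothetical values; the conversion factor 1 mile = 1.609344 km is exact):

```python
# Hypothetical check: the same distance presented on two media, in two
# units, must agree up to the conversion factor; otherwise it is an
# Information inconsistency.
KM_PER_MILE = 1.609344

def information_consistent(distance_km, distance_miles, tolerance=1e-6):
    return abs(distance_km - distance_miles * KM_PER_MILE) <= tolerance

print(information_consistent(160.9344, 100.0))  # True: respects the conversion
print(information_consistent(160.0, 100.0))     # False: inconsistent presentations
```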

Interface and Interaction are the two most popular consistency types addressed by the tools, while Behavioral and Refinement are the least popular. The complexity of capturing Behavioral and Refinement inconsistencies might be the main reason for the small number of tools addressing them. The Name inconsistency type can be easily identified; however, we observed that tool builders do not advertise that their tools address it. We conjecture one possible reason: since capturing a Name inconsistency is trivial and it is also an Interface inconsistency (but not vice versa), the tool builders advertise the latter option.


RQ3: Which strategies have been used to keep the consistency between models of different domains?

We selected papers that cited tools managing consistency between models of different domains. We selected 56 papers; however, only half of them described how they check and keep the consistency between models. We organized the papers into categories according to the approach they use to keep consistency between models of different domains.

Interoperability This approach is defined as "the ability of two or more software components to cooperate despite differences in language, interface, and execution platform" [146]. Qamar et al. [110] present the need to manage inconsistency through interoperability between tools such as MagicDraw, Teamcenter, and Simulink. On the one hand, standard file formats such as Mcad-ecad [32] and XML [37, 89] are used to maintain interoperability across engineering and software domains. On the other hand, using these standard files to maintain consistency can be problematic due to data loss [12], since the data transit between different tools and domains.

Inconsistency Patterns This approach recommends selecting the appropriate technique from an extensible catalogue of inconsistency patterns and applying it to turn an unmanaged process into a managed one [38, 40].

Modeling dependencies explicitly In order to manage inconsistency, some researchers [86, 111, 124, 136] believe that making inter- and intra-model dependencies explicit will facilitate model management. The main drawback of this approach is that, like any other modeling task, it can be time-consuming. Such dependencies can be identified between properties or between structural elements of two models, in such a way that the properties or elements can affect each other. This dependency modeling can be done using any technology that explicitly maps dependencies [110, 111, 113, 115, 136, 140]. The Design Structure Matrix (DSM) is an example of such a technology. A DSM is a representation of the components and their relations that makes the shared information more precise and less ambiguous [86, 124, 128]. It consists of a matrix with the properties mapped horizontally and vertically; each marked cell of the DSM indicates a dependency between the corresponding properties. A dependency loop occurs when a dependency is marked above the main diagonal of the DSM. In order to avoid such loops, a reorganization of the DSM is needed [111].
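The DSM description can be sketched as follows (a minimal sketch with hypothetical properties of a robotic arm; with the properties suitably ordered, a mark above the main diagonal signals a dependency loop, as described in [111]):

```python
# DSM as a boolean matrix: dsm[i][j] == True means property i depends on
# property j. With the properties ordered along both axes, any mark above
# the main diagonal (j > i) indicates a dependency loop.
def loop_marks(dsm):
    """Return the (row, col) positions above the main diagonal."""
    n = len(dsm)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if dsm[i][j]]

properties = ["arm_length", "motor_torque", "mass"]  # hypothetical properties
dsm = [
    # arm_length torque  mass   (columns: what each row depends on)
    [False,      False,  False],  # arm_length depends on nothing here
    [True,       False,  True ],  # motor_torque depends on arm_length AND mass
    [False,      True,   False],  # mass depends on motor_torque
]
print(loop_marks(dsm))  # [(1, 2)]: torque and mass depend on each other
```

Here motor_torque depends on mass while mass depends on motor_torque, so the mark at (1, 2) sits above the diagonal and exposes the loop.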

Parameters or constraints management This approach proposes using parameters or constraints to check model consistency within a multi-disciplinary development team. If a parameter or constraint is violated, the inconsistency can be detected and managed. According to Weingartner et al. [147], implementing this approach requires a well-designed data model of the models one wants to manage [10, 119, 120, 135, 147, 151].
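A minimal sketch of such a check, assuming each discipline exposes its shared parameters as a plain name-to-value dictionary; the parameter names and the tolerance are hypothetical:

```python
# Hypothetical parameter-based consistency check between two discipline models.

def check_shared_parameters(model_a, model_b, tol=1e-6):
    """Return the shared parameters whose values disagree across the models."""
    violations = []
    for name in sorted(model_a.keys() & model_b.keys()):
        if abs(model_a[name] - model_b[name]) > tol:
            violations.append((name, model_a[name], model_b[name]))
    return violations

mechanical = {"shaft_diameter_mm": 12.0, "max_torque_nm": 3.5}
electrical = {"max_torque_nm": 3.0, "rated_voltage_v": 24.0}
print(check_shared_parameters(mechanical, electrical))
# → [('max_torque_nm', 3.5, 3.0)]
```

Detected violations can then be fed into whatever inconsistency-management process the team uses.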

Ontology An ontology is an explicit specification of a conceptualization of the properties and relations of one or more domains. A conceptualization is the set of objects, concepts, and other entities that are assumed to exist in some area of interest, together with the relationships that hold among them; it is an abstract, simplified view of the world to be represented for some purpose [59]. This approach allows engineers to independently develop partial descriptions of the same product and to check consistency when the descriptions are combined [24, 96, 105]. However, creating an ontology can be a time-consuming task [59].
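The combination step can be illustrated with a toy example in which each partial description is a list of (subject, property, value) triples; the vocabulary and helper are our own illustration, not a real ontology language:

```python
# Toy illustration: merge partial product descriptions, flag conflicting values.

def combine_and_check(*descriptions):
    """Merge (subject, property, value) triples; report conflicting assertions."""
    merged, conflicts = {}, []
    for description in descriptions:
        for subject, prop, value in description:
            key = (subject, prop)
            if key in merged and merged[key] != value:
                conflicts.append((key, merged[key], value))
            else:
                merged[key] = value
    return merged, conflicts

mechanical = [("motor", "mass_kg", 1.2), ("motor", "mount", "flange")]
electrical = [("motor", "mass_kg", 1.5), ("motor", "voltage_v", 24)]
merged, conflicts = combine_and_check(mechanical, electrical)
print(conflicts)  # → [(('motor', 'mass_kg'), 1.2, 1.5)]
```

A real ontology additionally constrains which properties and relations may exist, so richer inconsistencies than plain value clashes can be detected at combination time.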

STEP The Standard for the Exchange of Product model data (ISO 10303) [109] consists of a number of components, called application protocols (APs), which define the data models on which translators for CAD data exchange are based. The International Organization for Standardization (ISO) developed STEP to cover a wide range of application areas, such as automotive, aerospace, and architecture [32]. In our systematic literature review, we have not found papers that use STEP alone to check consistency between models of different domains; instead, they use an extension of STEP or a combination of STEP and other technologies [13, 31, 32, 80].

KCModel This approach is organized into an “Information Core Entity” (ICE) and a “Configuration Entity” (CE). The former is the smallest information entity used; it stores parameters and rules and represents a generic multi-domain baseline. To use the parameters and rules in a specific context (3D, thermal calculations, Excel files, etc.), a Configuration Entity instantiating the ICE must be created. This approach allows engineers to create their own models, trace parameters and rules, and check consistency [8, 9, 18, 106].
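The ICE/CE organization can be sketched roughly as follows; the class and field names are our own illustration, not the actual KCModel API:

```python
# Rough sketch of the ICE/CE organization; names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class ICE:
    """Information Core Entity: generic multi-domain parameters and rules."""
    parameters: dict
    rules: list  # predicates over the effective parameters

@dataclass
class CE:
    """Configuration Entity: an ICE instantiated for a specific context."""
    context: str              # e.g. "3D", "thermal", "Excel"
    ice: ICE
    overrides: dict = field(default_factory=dict)

    def effective(self):
        """Context-specific parameter values, falling back to the baseline."""
        return {**self.ice.parameters, **self.overrides}

    def violated_rules(self):
        """Check the shared rules against this context's parameter values."""
        params = self.effective()
        return [rule.__doc__ for rule in self.ice.rules if not rule(params)]
```

Because every CE references the same ICE, a rule violation in one context can be traced back to the shared baseline and hence to the other contexts that instantiate it.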

We identified seven strategies to keep consistency between models of different domains. Although some of these strategies are commonly used in industry, we believe they are not mature enough: they might cause data loss, they are tool-dependent and time-consuming, and they do not (individually) fully support co-evolution of the models.


RQ4: What are the challenges to manage models of different domains?

We identified nine main challenges encountered by the authors of the selected studies (Table 6). The challenges of a study may belong to more than one category; in that case, the study is listed under each applicable category. The rightmost column gives examples of each category. Some studies do not explicitly state the challenges faced; these studies are placed in the category “Not Applicable.” The most cited challenges relate to Interoperability, Maintaining Consistency, Dependency Management, and Traceability, which appear in 29, 23, 16, and 10 selected studies, respectively.

Interoperability and maintaining consistency are cited as the main challenges. A heterogeneous setting, in which engineers with different areas of expertise use different design tools, can easily cause synchronization issues such as problems with data exchange. Thus, interoperability and maintaining consistency represent important challenges to be faced.

As an extension of interoperability, dependency management also represents a challenge: models are created using different technologies that are not always known to all engineers, making it difficult for them to identify the dependencies and relations between these models by themselves. Once the relations are defined, traceability is needed in order to track the models affected by changes. Questions that arise from this challenge include “How can the traces be created automatically?” and “How can the impact of a change be traced?”

In addition to the list of challenges faced by the authors, we also compiled a list indicating the directions for future work (Table 7). As expected, the directions for future work follow the challenges faced by the authors: Interoperability, Maintaining Consistency, and Dependency Management are also the topics for further research. One additional finding is that almost 40% of the selected studies do not explicitly state a direction for future work, a number we did not expect to be this high.


Discussion and future work

The results described in this paper can serve as a starting point for future research on model management topics. We provide a list of available tools used to support model management, grouped according to the model consistency checking functionality they offer for models of different or the same domains. We observe that 40% of the tools we found provide consistency checking on models from different domains, 25% on models of the same domain, and 35% do not provide any consistency checking.

Regarding commercial tools, we have found that they do not fully describe the kinds of inconsistency they can address (Sect. 6.2). We conducted a survey to overcome this problem. While a response rate of ca. 50% is better than one is accustomed to in software engineering surveys, it also means that half of the tool builders did not respond. This lack of information makes it difficult to map the inconsistencies these tools can handle, since the tools are commercial and we would need both the licenses and the expertise to use them. Further evaluation of commercial tools is necessary; it should be done with the help of specialists in each tool, or at least a full description of all features should be provided. For the tools that do describe the consistency types they address, we have found that the majority can perform the Interface consistency check, i.e., checking whether connected interface elements have mismatching values. We expected these tools to address a broader range of consistency types; however, we observed that most of them address at most two.

Because Name inconsistency can be easily captured, we expected all tools to address this consistency type; however, this was not what we found. We conjecture two possible reasons. First, the functionality may indeed be implemented, but the tool builders do not advertise it because it is trivial. Second, a Name inconsistency is also an Interface inconsistency (but not vice versa), so they advertise the latter. Behavioral and Refinement are the least addressed consistency types, possibly due to the complexity of capturing them. Thus, for future work, we believe researchers should investigate how to capture these inconsistency types, and tool builders should improve their tools to address more kinds of inconsistency.

Our study (Sect. 6.3) reveals seven strategies to keep consistency between models of different domains. These strategies are based on prototypes or approaches with the following main drawbacks: they are time-consuming, they are tool-dependent, and they can cause data loss.

According to Qamar et al. [111], explicit dependency modeling between models is not commonly used in industry; however, academic research regards it as a requirement for managing inconsistency between models from different domains. They claim that “Capturing dependencies formally and explicitly is currently not supported by available methods and tools in MBSE, and having no explicit knowledge of dependencies is a main cause of inconsistencies and potential failures”. Explicit dependency modeling between models can be done using a design structure matrix or an ontology, and it can be followed by the use of standards such as STEP.

Reichwein et al. [117] state that “Due to the wide variety of disciplines and modeling tools that are used in mechatronic design, there is currently no established solution that allows engineers to efficiently and formally define dependencies between different models. Therefore, maintaining consistency between different models is often a manual, time-consuming, and error-prone process.” Interoperability between tools was also used as a strategy to keep consistency between models of different domains, especially because engineers would not need to stop using the tools they are familiar with.

As stated in Sect. 6.4, there is still room for more research. The main challenges faced by the authors, as well as the proposed directions for future work, belong to the same research topics. The most cited research topics are interoperability, maintaining consistency, and dependency management. Thus, for future work, we strongly believe researchers should focus not only on ways of modeling dependencies between models of different domains, but also on how to manage these dependencies. This management can be done using a tool-agnostic infrastructure that stores all relationships between models in a database, notifies the owners of models affected by a change, and infers new relationships by analyzing the stored relations. Facilitating the capture of all inconsistency types is another direction for future work.

Threats to validity

Wohlin et al. [149] provide a list of possible threats that researchers can face during scientific research. In this section, we describe the actions we took in order to increase validity and mitigate these threats.

External validity concerns how the results and findings can be generalized. We only accepted studies written in English, which can represent a threat despite the fact that English is the most widely used language for scientific papers. As one of the goals of this study is to understand what the industrial practices are, we decided to accept gray literature (white papers and technical reports).

The fact that we could not try all the tools and could not find a full description of the kinds of inconsistency they address can represent a threat. We believe this threat can be minimized with the help of specialists in each tool, or at least by a full description of all features being provided.

Table 6 The main challenges encountered by authors of the selected papers to manage models of different domains (RQ4)
Table 7 The direction of the future work organized by categories (RQ4)

Internal validity Google Scholar continuously indexes new papers; hence, running the queries at different moments in time might lead to different results. However, it is not possible to run all queries simultaneously due to limitations of Google Scholar. We do not think that a considerable number of papers was missed, since all queries were similar to each other and more than half of our query results were duplicated hits.

Construct validity concerns how well the selected studies represent the real population of studies able to answer the research questions. To mitigate this concern, in the construction of the search string we performed an informal literature review that helped us select the appropriate keywords, and we used different variations of the same keyword. Thus, we are confident that our queries are broad enough that all relevant papers were found in our automatic search. To mitigate possible bias in the manual inspection, we strictly followed the inclusion and exclusion criteria to select the relevant papers. In addition, the relevance assessment was performed iteratively: at the end of each iteration, we measured the inter-researcher agreement level and obtained a Cohen’s \(\kappa \) coefficient of 0.61, which is interpreted as substantial agreement.
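For reference, Cohen’s \(\kappa \) compares the observed agreement \(p_o\) with the agreement expected by chance \(p_e\): \(\kappa = (p_o - p_e)/(1 - p_e)\). A small sketch of the computation, using made-up labels rather than the actual relevance-assessment data:

```python
# Illustrative computation of Cohen's kappa for two raters; the labels
# below are made up, not the actual relevance-assessment data.

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n       # observed agreement
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)     # chance agreement
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

rater_1 = ["include", "include", "include", "exclude", "exclude", "exclude"]
rater_2 = ["include", "include", "exclude", "exclude", "exclude", "include"]
print(round(cohens_kappa(rater_1, rater_2), 3))  # → 0.333
```

Values between 0.61 and 0.80 are conventionally interpreted as substantial agreement, which is how we read our coefficient of 0.61.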

Conclusion validity concerns the relations between the conclusions we draw and the analyzed data. To mitigate this concern, we followed well-known systematic research methods and described all the decisions we made; thus, this study can be replicated by other researchers. Of course, the exact number of papers can change, because new papers can be published or some papers might no longer be available online, but we believe that the final conclusions would not deviate from ours.

Conclusions

We presented a systematic literature review intended to give an overview of industrial practices and academic approaches to cross-domain model management. We started with 618 potentially relevant studies and, after applying rigorous selection criteria, concluded the process with 96 papers.

We provide a list of available tools used to support model management. We observed that 40% of the tools provide consistency checking on models of different domains, 25% on models of the same domain, and 35% do not provide any consistency checking.

Our study reveals that the strategies to keep consistency between models of different domains are not mature enough: they might cause data loss, they are tool-dependent, and they do not (individually) fully support co-evolution of the models. Moreover, the majority of the tools address no more than two kinds of consistency.

Due to the lack of detail about the kinds of inconsistency that commercial tools address, we suggest that further evaluation of commercial tools is needed. This should be done with the help of specialists in each tool, or at least a full description of all features should be provided. We believe that future work should be directed towards the creation of a tool-agnostic infrastructure to manage the relationships between models of different domains.

To conclude, we observe that more research has to be done to improve the quality of the approaches and tools used to ensure consistency. There is no silver bullet, but at least we have a set of strategies that together can provide consistency.