Applying Model-Driven Engineering to Stimulate the Adoption of DevOps Processes in Small and Medium-Sized Development Organizations

Purpose: Microservice Architecture (MSA) denotes an increasingly popular architectural style in which business capabilities are wrapped into autonomously developable and deployable software components called microservices. Microservice applications are developed by multiple DevOps teams, each owning one or more services. In this article, we explore how DevOps teams in small and medium-sized organizations (SMOs) cope with MSA and how they can be supported. Methods: Through a secondary analysis of an exploratory interview study comprising six cases, we show that the organizational and technological complexity resulting from MSA poses particular challenges for SMOs. We apply Model-Driven Engineering (MDE) to address these challenges. Results: From the secondary analysis, we identify the challenge areas of (i) building and maintaining a common architectural understanding and (ii) dealing with deployment technologies. To support DevOps teams of SMOs in coping with these challenges, we present a model-driven workflow based on LEMMA, the Language Ecosystem for Modeling Microservice Architecture. To implement the workflow, we extend LEMMA with the functionality to (i) generate models from API documentation; (ii) reference remote models owned by other teams; (iii) generate deployment specifications; and (iv) generate a visual representation of the overall architecture. Conclusion: We validate the model-driven workflow and our extensions to LEMMA through a case study, showing that the added functionality can bring efficiency gains for DevOps teams. To develop best practices for applying our workflow to maximize efficiency in SMOs, we plan to conduct further empirical research in the field.


Introduction
Microservice Architecture (MSA) is a novel architectural style for service-based software systems with a strong focus on loose functional, technical, and organizational coupling of services [53]. In a microservice architecture, services are tailored to distinct business capabilities and executed as independent processes. The adoption of MSA is expected to increase an application's scalability, maintainability, and reliability [53]. It is frequently employed to decompose monolithic applications for which such quality attributes are of critical importance [8].
MSA fosters the adoption of DevOps practices because it promotes (i) bundling microservices in self-contained deployment units for continuous delivery; and (ii) delegating responsibility for a microservice to a single team composed of members with heterogeneous professional backgrounds [3,52]. Conway's Law [14] is a determining factor in DevOps-based MSA engineering. It states that the communication structure of a system reflects the structure of its development organization. Thus, in order to achieve loose coupling and autonomy of microservices, it is also crucial to divide the responsibility for microservices' development and deployment between autonomous DevOps teams [52]. As a result, MSA engineering leads to a distributed development process in which several teams create coherent services of the same software system in parallel.
While various larger enterprises like Netflix, Spotify, or Zalando regularly report on their successful adoption of MSA, there are only a small number of experience reports (e.g., [11]) about how microservices combined with DevOps can be successfully implemented in small and medium-sized development organizations (SMOs) with fewer than 100 developers involved. Such SMOs typically do not have sufficient resources in terms of employees, knowledge, and experience to directly apply large-scale process models such as Scrum at Scale [13,76].
To support SMOs in bridging the gap between available resources and required effort for a successful adoption of DevOps-based MSA engineering, we (i) investigate the characteristics of small- and medium-scale microservice development processes; and (ii) propose means to reduce complexity and increase productivity in DevOps-based MSA engineering within SMOs. More precisely, the contributions of our article are threefold. First, we identify challenges of SMOs in DevOps-based MSA engineering by analyzing a data set of an exploratory qualitative study and linking it with existing empirical knowledge. Second, we employ Model-driven Engineering (MDE) [12] to introduce a workflow for coping with the previously identified challenges in DevOps-based MSA engineering for SMOs. Third, we present and validate extensions to LEMMA (Language Ecosystem for Modeling Microservice Architecture), a set of Eclipse-based modeling languages and model transformations for MSA engineering [62], enabling sophisticated modeling support for the workflow.
The remainder of this article is organized as follows. In Sect. 2, we describe the microservice architecture style in detail, particularly with regard to the design, development, and operation stages. In addition, we explain organizational aspects that result from the use of microservices. Section 3 illustrates LEMMA as a set of modeling languages and tools that address the MDE of MSA. In Sect. 4, we analyze a dataset based on an exploratory interview study in SMOs to identify challenging areas in engineering MSA for DevOps teams in SMOs. Section 5 introduces the case study that we use to illustrate and validate our approach. Based on the identified challenge areas, we present a model-driven workflow in Sect. 6 and describe the extensions of LEMMA that support the workflow. In this regard, Subsect. 6.2 presents means to derive LEMMA models from API documentation, Subsect. 6.3 presents extensions to the LEMMA languages in order to assemble individual microservice models, Subsect. 6.4 describes additions to create a visual representation of microservice models, Subsect. 6.5 presents means to specify deployment infrastructure, and Subsect. 6.6 elaborates on the ability to generate infrastructure code. We validate our contributions to LEMMA in Sect. 7. Section 8 discusses the model-driven workflow and LEMMA components with regard to their application in DevOps teams of SMOs. We present related research in Sect. 9. The article ends with a conclusion and an outlook on future work in Sect. 10.

Background
This section provides background on the MSA approach and its relation to the DevOps paradigm. It details special characteristics in the design, development, operation, and organization of microservice architectures and their realization.

General
MSA is a novel approach to the design, development, and operation of service-based software systems [53]. To this end, MSA promotes decomposing the architecture of complex software systems into services, i.e., loosely coupled software components that interact by means of predefined interfaces and are composable to synergistically realize coarse-grained business logic [24].
Compared to other approaches for architecting service-based software systems, e.g., SOA [24], MSA puts a strong emphasis on service-specific independence. This independence distinguishes MSA from other approaches w.r.t. the following features [53,52,63]:
- Each microservice in a microservice architecture focuses on the provisioning of a single distinct capability for functional or infrastructure purposes.
- A microservice is independent from all other architecture components regarding its implementation, data management, testing, deployment, and operation.
- A microservice is fully responsible for all aspects related to its interaction with other architecture components, ranging from the determination of communication protocols over data and format conversions to failure handling.
- Exactly one team is responsible for a microservice and has full accountability for the service's design, development, and deployment.
Starting from the above features, the adoption of MSA may improve quality attributes [38] such as (i) scalability, as it is possible to purposefully run new instances of microservices covering strongly demanded functionality; (ii) maintainability, as microservices are seamlessly replaceable with alternative implementations; and (iii) reliability, as MSA delegates responsibility for robustness and resilience to individual microservices [53,18,19]. Additionally, MSA fosters DevOps and agile development because its single-team ownership calls for heterogeneous team composition and microservices' constrained scope fosters their evolvability [79,16].
Despite its potential for positively impacting the aforementioned features of a software architecture and its implementation, MSA also introduces complexity both to development processes and operation [16,78,72]. Consequently, practitioners in SMOs perceive the successful adoption of MSA as complex [8]. Challenges that must be addressed in MSA adoption are spread across all stages in the engineering process, and thus concern the design of the architecture, its development and operation. Furthermore, MSA imposes additional demands on the organization of the engineering process.

Design Stage
A frequent design challenge in MSA engineering concerns the decomposition of an application domain into microservices that each have a suitable functional granularity [31,72]. Too coarse-grained microservices forfeit the aforementioned benefits of MSA in terms of service-specific independence. Too fine-grained microservices, on the other hand, may require an inefficiently high amount of communication and thus network traffic at runtime [45]. Although approaches such as Domain-driven Design (DDD) [25] exist to support the systematic decomposition and granularity determination of a microservice architecture [53], their perceived complexity hampers widespread adoption in practice [27,8].
An additional particularity of microservice design stems from MSA's omission of explicit service contracts [56]. In contrast to SOA, MSA considers the API of a microservice its implicit contract [84], thereby delegating concerns of API management, e.g., API versioning, to microservices [72]. Consequently, microservices must ensure their compatibility with possible consumers and also inform them about possible interaction requirements. Furthermore, implicit microservice contracts foster ad hoc communication, which increases runtime complexity and the occurrence of cyclic interaction relationships [77].

Development Stage
In contrast to monolithic applications, which rely on a holistic, yet vendor-dependent technology stack [18], microservice architectures foster technology heterogeneity [53]. Specifically, due to the increase in service-specific independence, each microservice may employ those technologies that best fit a certain capability. Typical technology variation points [59] comprise programming languages, databases, communication protocols, and data formats. However, technology heterogeneity imposes a greater risk of technical debt, additional maintainability costs, and steeper learning curves, particularly for new members of a microservice team [77].

Operation Stage
MSA usually requires a sophisticated deployment and operation infrastructure, consisting of, e.g., continuous delivery systems, a basic container technology, and an orchestration platform, to cope with MSA's emphasis on maintainability and reliability [79]. In addition, microservices often rely on further infrastructure components such as service discoveries, API gateways, or monitoring solutions [4], which lead to additional administration and maintenance effort. Consequently, microservice operation involves a variety of different technical components, thereby resulting in a significant complexity increase compared to monolithic applications [72].
Furthermore, technology heterogeneity also concerns microservice operation w.r.t. technology variation points like deployment and infrastructure technologies [59]. Particularly the latter also involve independent decision-making by microservice teams. For example, there exist infrastructure technologies, e.g., to increase performance or resilience, that directly focus on a single microservice [4]. Hence, teams are basically free to decide on suitable solutions based on different criteria such as compatibility with existing microservice implementations or available experience.

Organizational Aspects
The use of MSA requires a compatible organizational structure, i.e., following Conway's Law, a structure that corresponds to the communication principle of microservices. This results in the necessity of using separate teams, each of which is fully responsible for one or more services (cf. Subsect. 2.1). The requirement that a team should cover the entire software lifecycle of its microservices automatically leads to the need for cross-functional teams. In order to ensure collaboration between teams, large companies such as Netflix or Spotify usually use established large-scale agile process models [17], e.g., the Scaled Agile Framework (SAFe) [65], Scrum at Scale [76], or the Spotify Model [70]. Establishing such a form of organization and achieving organizational alignment may require upfront efforts [53].
Thus, MSA fosters DevOps practices, which can result in lowered costs and an accelerated pace of product increments [52]. To this end, it is critical to foster a collaborative culture within and across teams to promote integration and collaboration among team members with different professional backgrounds [48].
A key enabler of a collaborative culture is the extensive automation of manual tasks to prevent the manifestation of intra-team and inter-team silos [48]. Specifically, it relieves people from personal accountability for a task and may thus help in reducing existing animosities of team members with different professional backgrounds [44].
Another pillar of a collaborative culture is knowledge sharing following established formats and guidelines [48]. It aims to mitigate the occurrence of insufficient communication, which can be an impediment in both MSA and DevOps [16,64].

Language Ecosystem for Modeling Microservice Architecture
In our previous works we developed LEMMA [59,62]. LEMMA is a set of Eclipse-based modeling languages and model transformations that aims to mitigate the challenges in MSA engineering (cf. Sect. 2) by means of Model-driven Engineering (MDE) [12].
To this end, LEMMA refers to the notion of architecture viewpoint [39] to support stakeholders in MSA engineering in organizing and expressing their concerns towards a microservice architecture under development. More specifically, LEMMA clusters four viewpoints on microservice architectures. Each viewpoint targets at least one stakeholder group in MSA engineering, and comprises one or more stakeholder-oriented modeling languages.
The modeling languages enable the construction of microservice architecture models and their composition by means of an import mechanism. As a result, LEMMA allows reasoning about coherent parts of a microservice architecture [39], e.g., to assess quality attributes and technical debt of microservices [61] or perform DevOps-oriented code generation [60].
The following paragraphs summarize LEMMA's approach to microservice architecture model construction and processing. LEMMA's Domain Data Modeling Language [62] allows model construction in the context of the domain viewpoint on a microservice architecture. Therefore, it addresses the concerns of domain experts and microservice developers. First, the language aims to mitigate the complexity of DDD (cf. Sect. 2) by defining a minimal set of modeling concepts for the construction of domain concepts, i.e., data structures and list types, and the assignment of DDD patterns, e.g., Entity or Value Object [25]. Additionally, it integrates validations to ensure the semantically correct usage of the patterns. Second, the language supports underspecification in DDD-based domain model construction [58], thereby facilitating model construction for domain experts. Microservice developers may later resolve underspecification to enable automated model processing [60]. All other LEMMA modeling languages depend on the Domain Data Modeling Language (cf. Fig. 1) because it provides them with a Java-aligned type system [62], given Java's predominance in service programming [66,8].
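The DDD patterns that the Domain Data Modeling Language makes explicit can be illustrated in plain Java (this is not LEMMA syntax, and all type names are our own illustrative choices): an Entity is identified by a dedicated identity attribute, whereas a Value Object is identified by the entirety of its attribute values.

```java
import java.util.Objects;

// Value Object: identity is defined by all attribute values [25].
// Java records provide exactly this equality semantics out of the box.
record Address(String street, String city, String zipCode) { }

// Entity: identity is defined by a dedicated identifier,
// independent of the remaining, mutable attributes.
class Customer {
    private final long id;   // identity attribute
    private Address address; // mutable state

    Customer(long id, Address address) { this.id = id; this.address = address; }

    void moveTo(Address newAddress) { this.address = newAddress; }

    Address address() { return address; }

    @Override public boolean equals(Object o) {
        return o instanceof Customer c && c.id == id;
    }
    @Override public int hashCode() { return Objects.hash(id); }
}

public class DddPatternsDemo {
    public static void main(String[] args) {
        Address a1 = new Address("Main St 1", "Dortmund", "44135");
        Address a2 = new Address("Main St 1", "Dortmund", "44135");
        // Two value objects with equal attributes are the same value.
        System.out.println(a1.equals(a2)); // true

        Customer c1 = new Customer(42, a1);
        Customer c2 = new Customer(42, a2);
        c2.moveTo(new Address("Side St 2", "Dortmund", "44137"));
        // Two entities with the same identity stay equal despite state changes.
        System.out.println(c1.equals(c2)); // true
    }
}
```

LEMMA's validations enforce the semantically correct usage of such patterns at the model level, before any code like the above is generated.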

Microservice Architecture Model Construction
LEMMA's Service Modeling Language [62] addresses the concerns of microservice developers (cf. Fig. 1) in the service viewpoint on a microservice architecture. One goal of the Service Modeling Language is to make the APIs of microservices explicit (cf. Sect. 2) while keeping their definition as concise as possible based on built-in language primitives. That is, the language provides developers with targeted modeling concepts for the definition of microservices, their interfaces, operations, and endpoints. LEMMA service models may import LEMMA domain models to identify the responsibility of a microservice for a certain portion of the application domain [53] and to type operation parameters with domain concepts.

Figure 1 Overview of LEMMA's modeling languages, their compositional dependencies, and addressed stakeholders. Arrow semantics follow those of UML for dependency specifications [54].
LEMMA's Technology Modeling Language [59] considers technology to constitute a dedicated architecture viewpoint [37] that frames the concerns of technology-savvy stakeholders in MSA engineering, i.e., microservice developers and operators (cf. Fig. 1). The Technology Modeling Language enables those stakeholder groups to construct and apply technology models. A LEMMA technology model modularizes information targeting a certain technology relevant to microservice development and operation, e.g., programming languages, software frameworks, or deployment technologies. Furthermore, it integrates a generic metadata mechanism based on technology aspects [59]. Technology aspects may, for example, cover annotations of software frameworks. LEMMA service and operation models depend on LEMMA technology models (cf. Fig. 1) and import them to apply the contained technology information to, e.g., modeled microservices and containers. In particular, LEMMA's Technology Modeling Language aims to cope with technology heterogeneity in MSA engineering (cf. Sect. 2) by making technology decisions explicit [73].
LEMMA's Operation Modeling Language [62] addresses the concerns of microservice operators (cf. Fig. 1) w.r.t. the operation viewpoint in MSA engineering. To this end, the language integrates primitives for the concise modeling of microservice containers, infrastructure nodes, and technology-specific configuration. To model the deployment of microservices, LEMMA operation models import LEMMA service models and assign modeled microservices to containers. Additionally, it is possible to express the dependency of containers on infrastructure nodes such as service discoveries or API gateways [4]. By providing microservice operators with a dedicated modeling language we aim to cope with operation challenges in MSA engineering (cf. Sect. 2). First, the Operation Modeling Language defines a unified syntax for the modeling of heterogeneous operation nodes of a microservice architecture. Second, it is flexibly extensible with support for operation technologies, e.g., for microservice monitoring or security, leveraging LEMMA technology models (cf. Fig. 1). Third, operation models may import other operation models, e.g., to compose the models of different microservice teams to centralize specification and maintenance of shared infrastructure components such as service discoveries and API gateways.
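To give an impression of what a model processor may derive from such operation models, the following self-contained Java sketch emits a minimal Docker Compose fragment for modeled containers and their infrastructure dependencies. The input representation (the Container record) and all names are our own deliberate simplification, not LEMMA's operation metamodel.

```java
import java.util.List;

public class ComposeSketch {
    // Simplified stand-in for a modeled container: a name, an image,
    // an exposed port, and dependencies on infrastructure nodes.
    record Container(String name, String image, int port, List<String> dependsOn) { }

    // Maps each modeled container to a Docker Compose service entry.
    static String toCompose(List<Container> containers) {
        StringBuilder sb = new StringBuilder("services:\n");
        for (Container c : containers) {
            sb.append("  ").append(c.name()).append(":\n")
              .append("    image: ").append(c.image()).append("\n")
              .append("    ports:\n      - \"")
              .append(c.port()).append(":").append(c.port()).append("\"\n");
            if (!c.dependsOn().isEmpty()) {
                sb.append("    depends_on:\n");
                for (String d : c.dependsOn())
                    sb.append("      - ").append(d).append("\n");
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(toCompose(List.of(
            new Container("customer-service", "lakeside/customer:latest", 8080,
                List.of("service-discovery")),
            new Container("service-discovery", "lakeside/eureka:latest", 8761, List.of())
        )));
    }
}
```

A real generator would additionally consume the technology information applied to containers, e.g., to select base images or configuration formats.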

Microservice Architecture Model Processing
LEMMA relies on the notion of intermediate model representation [40] to facilitate the processing of constructed models. Next to intermediate model representations, LEMMA also provides a model processing framework, which facilitates the implementation of Java-based model processors, e.g., for microservice developers without a strong background in MDE. To this end, the framework leverages the Inversion of Control (IoC) design approach [41], and its realization based on the Abstract Class pattern [71] and Java annotations [29]. In addition, the framework implements the Phased Construction model transformation design pattern [46]. That is, the framework consists of several phases including phases for model validation and code generation. To implement a phase as part of a model processor, developers need to provide an implementation of a corresponding abstract framework class, e.g., AbstractCodeGenerationModule, and augment the implementation with a phase-specific annotation, e.g., @CodeGenerationModule. At runtime, model processors pass control over the program flow to the framework. The framework will then (i) parse all given intermediate LEMMA models; (ii) transform them into object graphs, which abstract from a concrete modeling technology; and (iii) invoke the processor-specific phase implementations with the object graphs. As a result, the added complexities of MDE w.r.t. model parsing and the construction of Abstract Syntax Trees as instantiations of language metamodels [12] remain opaque for model processor developers. Moreover, LEMMA's model processing framework provides means to develop model processors as standalone executable Java applications. This characteristic is crucial for the integration of model processors into continuous integration pipelines [42], which constitute a component in DevOps-based MSA engineering [5,8].
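The interplay of abstract framework classes, phase annotations, and IoC described above can be sketched with a minimal, self-contained Java program. All names (ProcessingPhase, AbstractPhaseModule, MiniFramework) are illustrative simplifications in the spirit of the framework, not LEMMA's actual API.

```java
import java.lang.annotation.*;
import java.util.*;

// Illustrative phase annotation, analogous in spirit to LEMMA's
// phase-specific annotations such as @CodeGenerationModule.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ProcessingPhase { String value(); }

// Abstract Class pattern: processors implement phase behavior by
// subclassing; the framework keeps control over the program flow (IoC).
abstract class AbstractPhaseModule {
    abstract String execute(String model);
}

@ProcessingPhase("validation")
class ValidationModule extends AbstractPhaseModule {
    @Override String execute(String model) {
        return model.isBlank() ? "invalid" : "valid";
    }
}

@ProcessingPhase("code_generation")
class CodeGenerationModule extends AbstractPhaseModule {
    @Override String execute(String model) {
        return "// generated from model: " + model;
    }
}

public class MiniFramework {
    // Phased Construction: the framework invokes registered modules in a
    // fixed phase order and collects their results.
    static List<String> process(String model, AbstractPhaseModule... modules) {
        List<String> results = new ArrayList<>();
        for (String phase : List.of("validation", "code_generation")) {
            for (AbstractPhaseModule m : modules) {
                ProcessingPhase p = m.getClass().getAnnotation(ProcessingPhase.class);
                if (p != null && p.value().equals(phase)) {
                    results.add(phase + ": " + m.execute(model));
                }
            }
        }
        return results;
    }

    public static void main(String[] args) {
        process("CustomerService", new ValidationModule(), new CodeGenerationModule())
            .forEach(System.out::println);
    }
}
```

The actual framework additionally parses intermediate LEMMA models and hands phase implementations technology-agnostic object graphs instead of plain strings.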
Figure 2 illustrates the interplay of intermediate model transformations and the implementation and execution of model processors with LEMMA. Figure 2 comprises two compartments. The first compartment concerns technology modeling: LEMMA treats technology models and model processors as conceptual unities, i.e., a model processor for a certain technology must be aware of the semantics of the elements in its technology model and be capable of interpreting their application, e.g., within service models.
The second compartment of Fig. 2 concerns model processing. A LEMMA model processor constitutes an implementation conforming to LEMMA's model processing framework, which thus provides the processor with capabilities for model parsing and phase-oriented model processing. Typical results from processing service models comprise (i) executable microservice code; (ii) shareable API specifications, e.g., based on OpenAPI; (iii) event schemata, e.g., for Apache Avro; and (iv) measures of static complexity and cohesion metrics applicable to MSA [36,2,34,23,7].
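The second kind of processing result, shareable API specifications, can be sketched with a self-contained Java program that emits a minimal OpenAPI 3 skeleton. The mapping shown here (service title and path names only) is a deliberate simplification of what a real generator produces from modeled interfaces, operations, and domain concepts; the class and method names are our own.

```java
public class OpenApiSketch {
    // Emits a minimal OpenAPI 3 document skeleton for a service with the
    // given title and paths. A full generator would also derive operation
    // objects and schemas from the service and domain models.
    static String toOpenApi(String title, String... paths) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"openapi\":\"3.0.3\",")
          .append("\"info\":{\"title\":\"").append(title)
          .append("\",\"version\":\"1.0.0\"},")
          .append("\"paths\":{");
        for (int i = 0; i < paths.length; i++) {
            if (i > 0) sb.append(",");
            // Empty path item objects; operations would be filled in here.
            sb.append("\"").append(paths[i]).append("\":{}");
        }
        return sb.append("}}").toString();
    }

    public static void main(String[] args) {
        System.out.println(toOpenApi("CustomerService", "/customers", "/customers/{id}"));
    }
}
```

Because such processors run as standalone Java applications, a step like this can be wired directly into a team's continuous integration pipeline.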

DevOps-Related Challenges in Microservice Architecture Engineering of SMOs
In this section, we present an empirical analysis of microservice development processes (cf. Subsect. 2.5) in SMOs with the goal of identifying SMO-specific challenges in microservice engineering. For this purpose, we perform a secondary analysis [35] of transcribed qualitative interviews from one of our previous works [74]. While the initial analysis of the data was conducted through inductive open coding, our secondary analytical procedure specifically aims to identify challenges and obstacles during the development process.

Study Design
The study from which the dataset emerged was designed as a comparative multi-case study [83]. The aim of the study was to gain exploratory insights into the development processes of SMOs. To this end, in-depth interviews were conducted on-site in 2019 with five software architects, each from a different company, and afterwards transcribed. The interviews were conducted in a semi-structured manner and covered the areas of (i) applied development process; (ii) daily routines; (iii) meeting formats; (iv) tools; (v) documentation; and (vi) knowledge management. Participants were recruited from existing contacts of our research group to SMOs. Furthermore, we constrained participant selection to senior software architects at the professional level, and to SMOs that develop microservice systems with 100 or fewer people.

Dataset
As depicted in Table 1, the dataset includes transcripts and derived paraphrases covering six different cases (Column C) of microservice development processes in SMOs. In total, we conducted five in-depth interviews (Column I) with software architects whereby I4 covered two cases.
As shown in Table 1, we distinguish the cases into greenfield (new development from scratch), templated greenfield (new development based on a legacy system), and migration (transformation of a monolithic legacy system into an MSA-based system) (Column Type). We further categorize each development process by the domain of the microservice application under development (Column Domain). The number of microservices present in the application at the time of the interview (Column #Services), the number of people (Column #Ppl), and the number of teams (Column #Teams) involved vary depending on the case. Case C3 is a special case: although there are formally only two teams, sub-teams are formed depending on the customizations to be performed to the microservice application, so that at certain points in time up to five teams work simultaneously on the application. In all cases, the interviewees stated that they apply the Scrum framework [67] for internal team organization. By contrast, the collaboration across teams did not follow a particular formal methodology or model in any of the cases (cf. Subsect. 2.5). In addition, all interviewees reported that they strive for a DevOps culture [20] in their SMOs. A detailed description of the cases can be found in our previous work [74].

Analytical Procedure
For the analysis of the dataset, we used the Constant Comparison method [69]. That is, we rescreened existing paraphrases and marked challenges and/or solutions that our interviewees told us about with corresponding codes for challenges, obstacles, and solutions. We then used the coded statements across all cases to combine similar statements into higher-level challenges.

Study Results and Challenges
Our analysis of the dataset resulted in the discovery of several common challenges across all cases. Comparable to other empirical studies, e.g., [80] or [33], our participants reported on the high technical complexity and high training effort of a microservice development process compared to a monolithic approach. Other discovered challenges in line with existing literature, e.g., [28], concern the slicing of the business domain into individual microservices and the most suitable granularity of a microservice (cf. Sect. 2). In the following, we elaborate in more detail on two challenge areas (CAs) which we found to be of particular concern for SMOs adopting a DevOps culture.

CA1: Developing, Communicating, and Stabilizing a Common Architectural Understanding
Developing a common understanding of the architecture components of an application (in the case of MSA, the microservices), especially of the goals and communication relationships of these components, is essential for developing software in an organization that follows the DevOps paradigm [5]. The interviewees also consider the development of a general understanding of the architecture among those involved in development an important prerequisite for granting teams autonomy and trust.
For cases C2, C4, and C5 (cf. Table 1), which each comprise approx. ten people and two to three teams, the practices to achieve this understanding are Scrum Dailies [67] and regular developer meetings about the current status of the architecture. However, when more people are involved, achieving a common understanding is reported to be very challenging. For cases C1, C3, and C6, the system development initially started with fewer people, and as the software product became successful, more people and teams were added. Regarding this development and the common architectural understanding, I1 states: "From one agile team to multiple agile teams is a huge leap, you have to regularly adapt and question the organization. [...] you need a common understanding of the architecture and a shared vision of where we want to go [...], we are working on that every day and I don't think we'll ever be done."

A strategy that we observed to create this common architectural understanding in C1, C3, and C6 is the creation of new meeting formats. However, a contradicting key aspect of the DevOps culture is to minimize coordination across teams as much as possible [5]. The arising conflict is also experienced by our interviewees: the more people and teams are involved in exchanging knowledge to develop an architectural understanding, the more time-consuming the exchange becomes. In the case of C6, this has led to the discontinuation of comprehensive knowledge exchanges due to the excessive time involved. The teams now only meet on the cross-team level to discuss technologies, e.g., a particular authentication framework or a new programming language. We interpret this development as a step towards the introduction of horizontal knowledge exchange formats such as Guilds in the Spotify Model [70]. As a result, C6 is currently challenged with building a common understanding of the architecture only through these technology-focused discussions.
This is a problem area that is also evident in the data of other empirical studies. For example, Bogner et al. [8] report on the creation of numerous development guidelines by a large development organization to enforce a common architectural understanding. However, the development of guidelines requires that architecture decisions, technology choices, and use cases are documented [32], a practice we encountered only at C4 and C5.
In terms of technical documentation, the teams in all six cases use Swagger to document the microservices' APIs. Other documentation, such as a wiki system or UML diagrams, either is not used or is not kept up to date. In almost all cases, access to the API documentation is not regulated centrally, but is instead provided by the respective team upon explicit request, e.g., by e-mail. Only C3 has extensive and organization-wide technical documentation, as described by I3: "Swagger is a good tool, but of course this is not completely sufficient, which is why we have an area where the entire concept of the IT platform [...] is explained. We also have a few tutorials."

Summarizing CA1, we suspect that due to a mostly volatile organization, where the number of developers and software features often grows as development progresses, as well as the reported hard transition from a single to multiple agile teams, SMOs are particularly affected by the challenge of implementing a common architectural understanding as part of a successful DevOps culture. Documenting architecture decisions, deriving appropriate guidelines, and providing accessible technical documentation are key factors for an efficient development process that become more relevant the more teams and developers are involved [49] and are therefore often not considered by SMOs early in the development process.

CA2: Complexity of Deployment Techniques and Tools
A recurrent challenge we identified is how to deal with the operation of microservice applications within the development process. While cross-functional teams following the DevOps paradigm are recommended in the literature, e.g., [52], for the implementation of microservice architectures, in each of the researched cases we found specialized units for operating microservices instead of operators included as part of a microservice team. In C1, C2, and C6 we encountered entire teams solely dedicated to operational aspects. In all cases, the development process included a handover of developed services to those specialized units for operating the microservice application. Although most interviewees were aware that this contradicts the ownership principle of microservices (cf. Subsect. 2.1), and although they all stated to try to establish a pure DevOps culture without specialized teams, the effort to learn the basics of the necessary operational aspects is perceived as high. In this regard, I2 comments: "The complexity (note: of cloud-based deployment platforms) is already very, very high, you know. I would say that each of these functions in such a platform is a technology in itself that you have to learn." In contrast to operations, the SMOs are successful in including other professions, such as UI/UX, as parts of their cross-functional teams. Our data indicates that the exclusion of operations is due to two main reasons: first, the inherently high complexity of operational technologies and the associated high hurdle of learning and integrating them into the microservice development process; second, the difficulty of transferring this knowledge not only into specialized units but into the individual microservice teams, as a DevOps approach would require.
Summarizing CA2, deployment and operation in the SMOs studied is not the responsibility of the teams to which the respective microservices belong. This seems to be due to the complexity of operation technologies and the associated learning effort. This might particularly be an issue for SMOs due to their challenging environment, where there are few resources to substitute, e.g., for a colleague who needs to learn an operation technology.

Case Study
In this section, we present a case study that we will use in the following sections to illustrate and validate our model-driven workflow (cf. Sect. 6) to address the challenges in Sect. 4. We decided to use a case study to show the applicability of our approach because non-disclosure agreements prevent us from presenting it in the context of the explored SMO cases (cf. Subsect. 4.2). Therefore, we selected an open source microservice architecture as case study, which matches the design and implementation of the explored SMO cases w.r.t. the scope of our approach. More precisely, the case study (i) employs Swagger for API documentation (cf. Subsect. 4.4); (ii) uses synchronous and asynchronous communication means; (iii) is mainly based on the Java programming language; and (iv) comprises a number of software components that matches the smaller SMOs in our qualitative analysis (cf. Subsect. 4.2).
The case study is based on a fictional insurance company called Lakeside Mutual [75]. The application serves to exemplify different API patterns and DDD for MSA. It comprises several micro-frontends [57], i.e., semi-independent frontends that invoke backend functionality, and microservices centered around the insurance sector, e.g., customer administration, risk management, and customer self-administration functions. The application's source code as well as its documentation is publicly available on GitHub 7. Figure 3 depicts the architectural design of the Lakeside Mutual application. Overall, it consists of five functional backend microservices, each aligned with a micro-frontend, plus backend microservices offering infrastructural functions for, e.g., service discovery or messaging. In Figure 3, a microservice provides business functionality via REST interfaces, while a micro-frontend provides UI components, e.g., service-specific views.

Except for the Risk Management Server, all microservices are implemented in Java using the Spring framework. A micro-frontend communicates with its aligned microservice using RESTful HTTP [26]. Additionally, the Risk Management Client and Risk Management Server communicate via gRPC. For internal service-to-service communication, the software system relies on synchronous RESTful HTTP as well as on asynchronous AMQP messaging over an ActiveMQ message broker. The Customer Management Backend and the Customer Core services also provide generated API documentation based on Swagger.
Besides the functional microservices, the Lakeside Mutual application also uses infrastructural microservices. The Eureka Server implements a Service Registry [63] to enable loose coupling between microservices and their different instances. For monitoring purposes, the Spring Boot Admin service provides a monitoring interface for the health status of individual services and the overall application.

A Model-Driven Workflow for Coping with DevOps-Related Challenges in Microservice Architecture Engineering
This section proposes a model-driven workflow based on LEMMA (cf. Sect. 3) to cope with the challenges identified in Sect. 4. More precisely, the workflow provides a common architectural understanding of a microservice application (cf. Challenge CA1 in Subsect. 4.4), and reduces the complexity in deploying and operating microservice architectures (Challenge CA2).
In the following subsections, we present the design of the workflow (cf. Subsect. 6.1). Next, we describe the components we have added to LEMMA to support the workflow. These components include (i) interoperability bridges between OpenAPI and LEMMA models (CA1; cf. Subsect. 6.2); (ii) an extension to the Service Modeling Language to allow the import of remote models (CA1; cf. Subsect. 6.3); (iii) a model processor to visualize microservice architectures (CA1; cf. Subsect. 6.4); (iv) enhancements to the Operation Modeling Language (CA2; cf. Subsect. 6.5); and (v) code generators for microservice deployment and operation (CA2; cf. Subsect. 6.6).
Furthermore, we present in detail prototypical components that we have added to the LEMMA ecosystem to support the workflow. These include deriving models from API documentation (cf. Subsect. 6.2) and assembling microservice models into an architecture model (cf. Subsect. 6.3) as a means to build a common architectural understanding (CA1), and enriching microservice models with deployment infrastructure models (cf. Subsect. 6.5) as a means to more easily handle operational aspects for SMOs (CA2).
To ensure replicability of our results, we provide a GitHub repository 11 which contains documentation on how to set up LEMMA and the extensions to it contributed in this article. It further contains all generated artifacts as well as the sources and scripts to rerun the generations. Finally, it contains a manually created set of LEMMA models representing all Java-based microservices of the Lakeside Mutual case study (cf. Sect. 5).

As depicted in Fig. 4, an Organization includes multiple DevOps Teams, each responsible for one or more Microservices (cf. Subsect. 2.5). The sum of all microservices forms the Microservice Application that is developed by the organization. Associated with a microservice is a corresponding documentation of its interfaces (API Documentation). For each microservice owned by it, a team constructs a Set of LEMMA Views as a model representation (cf. Sect. 3). The sum of all LEMMA models forms an Architecture Model which describes the system's architecture. This model can be used by the organization, e.g., to gain insight into existing dependencies between the microservices involved.

LEMMA-Based Workflow for Coping with DevOps Challenges
Based on the conceptual elements and their relationships, Fig. 5 shows our model-driven workflow for DevOps-based microservice development in SMOs as a UML activity diagram [54].
We depict the workflow from the perspective of a single DevOps team including all steps required for the development of a new microservice. When incremental changes are made to individual aspects of a microservice, only the steps affected by the changes need to be performed.
The process starts with the planning of the development. The team decides whether to follow a code-first or a model-first approach. We support both variants to allow the teams autonomy according to the DevOps paradigm [5]. Code-First Approach Here, the team first implements the microservice, consisting of structure and behavior. Based on the finished implementation, the team creates an API Documentation, which can be done manually or automatically with tools such as Swagger 12. Using the API documentation, a LEMMA domain model and a LEMMA service model are automatically derived (cf. Subsect. 6.2) and, if necessary, refined by the team. In parallel, the team creates a LEMMA operation model, since the information required for this kind of model cannot be derived from the API documentation (cf. Subsect. 6.2).
Model-First Approach Alternatively, the team can decide to first model the structure and operation of the microservice using LEMMA. In the subsequent implementation activity, the structural aspects can be generated based on the previously constructed models and only the manual implementation of the behavior is necessary (cf. Sect. 3).
Regardless of which of the two approaches was chosen, at the end LEMMA domain, service, and operation models are available that describe the Dev and Ops aspects of the microservice under development.
The operation model is then used to Generate a Deployment Specification for a container-based environment which mitigates the complexity of the operation (cf. Subsect. 6.6). The team refines this specification as needed and then deploys the microservice. In parallel, the models generated during the workflow are sent to a central model repository and made available to the entire organization where they can be used by other teams to gain insight and a common understanding of the application's architecture, e.g., by visualizing its structure.
Based on the use of model transformations and code generation steps in the code-first as well as the model-first approach, we argue that the workflow can be applied with almost the same resources as the current development processes in the individual DevOps teams that we explored as cases in the empirical study (cf. Sect. 4). At its core, the code-first approach relies on the same development steps, i.e., implementing structure and behavior of a microservice, as non-model-based processes in the individual teams, so that even teams without experience in MDE can adopt the workflow in a non-invasive way. Besides the actual implementation, the workflow provides a service's description in the form of LEMMA viewpoint models, which can be used as a communication basis and for knowledge transfer in order to create a common architectural understanding (CA1; cf. Subsect. 4.4) in the organization. This can be used to, e.g., accelerate verbal coordination processes between teams, improve the documentation, or identify microservice bad smells [77]. In addition, by using LEMMA operation models and generating deployment specifications, it is easier for teams of an SMO to address the Ops aspects themselves without passing on the responsibility for deployment to another unit (CA2; cf. Subsect. 4.4). This enables teams to foster the ownership principle of MSA (cf. Subsect. 2.1).

Figure 4 Overview of the concepts within the workflow and their interrelationships represented as a UML class diagram [54].

Derivation of Microservice Models from API Documentations
To enable the model-driven workflow with sophisticated modeling support by LEMMA, we extended the ecosystem with the ability to derive data and service models from API documentation that conforms to the OpenAPI Specification 13 (OAS) [55]. OAS defines a standardized interface description format for RESTful APIs. One of the most popular tools implementing OAS is Swagger, which was used by all SMOs in the qualitative study (cf. Sect. 4).
The transformation of OAS files into LEMMA files can be classified as an interoperability issue in which OAS models are to be converted into LEMMA models. We therefore applied the interoperability bridge process proposed by Brambilla et al. [9], which is shown in Fig. 6.
To be able to serialize the in-memory LEMMA models as files, we extended LEMMA with extractors [9] for technology, service, and data models.
Listing 1 and Listing 2 illustrate the application of the process.
Listing 1 shows an excerpt of the API documentation file of the Customer Core microservice from the case study (cf. Sect. 5). In detail, the listing presents the OAS description (cf. https://spec.openapis.org/oas/v3.0.3#openapi-object) for an HTTP GET request on the path cities/{postCode} (Lines 2 and 3). This includes, e.g., the unique id getCitiesForPostalCodeUsingGET (Line 6) of the operation, the incoming parameters (Lines 8 to 14), and the information that a response returns an object based on the CitiesResponseDto schema (Lines 15 to 19). The excerpt shows only the response for HTTP status code 200 (Line 16). OAS also offers the possibility to define responses for other status codes, e.g., HTTP status code 404, but these are currently not considered in the transformation to LEMMA in our prototypical implementation.
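For illustration, an OAS description of the kind discussed above might look roughly as follows (a hedged sketch in OAS/YAML notation; the summary text and parameter details are assumptions, only the path, operation id, and response schema follow the description):

```yaml
# Hypothetical reconstruction of an OAS path item as described above.
paths:
  /cities/{postCode}:
    get:
      operationId: getCitiesForPostalCodeUsingGET
      summary: Get the cities for a particular postal code.  # assumed text
      parameters:
        - name: postCode
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: OK
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/CitiesResponseDto"
```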
Listing 2 shows the LEMMA service model automatically transformed from the CustomerCore OAS model in Listing 1. First, the results of the other transformations are imported into the service model. This includes the previously transformed LEMMA domain data model customerCore.data resulting from the OAS schemas (Lines 1 and 2), which contains all data structures such as CitiesResponseDto, and the technology model OpenApi.technology (Line 3), which contains, e.g., the OpenAPI-specific primitive data types and the media types used in the CustomerCore OAS model. Line 4 enables the OpenApi technology for the com.lakesidemutual.customercore.CustomerCore microservice whose definition starts in Lines 6 and 7. The microservice comprises an interface named cityReferenceDataHolder which was derived from the associated tags in the OAS model (Line 8). The interface consists of the operation getCitiesForPostalCodeUsingGET, named after the OAS operationId (Lines 18 to 22). The operation's commentary section (Lines 9 to 14) is populated using the summary information from OAS. The OAS path is added as an endpoint (Line 15) and the operation is classified as an HTTP GET request (Line 17). The OAS response associated with the HTTP status code 200 is modeled as an OUT parameter named returnValue (Lines 20 and 21).
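A sketch of the resulting service model may help to illustrate the transformation result (LEMMA syntax approximated; the comment text, aspect names, and the exact type reference are assumptions):

```
// Hypothetical sketch of the generated LEMMA service model.
import datatypes from "customerCore.data" as customerCore
import technology from "OpenApi.technology" as OpenApi

@technology(OpenApi)
public functional microservice
    com.lakesidemutual.customercore.CustomerCore {
    // Interface derived from the OAS tags.
    interface cityReferenceDataHolder {
        ---
        Get the cities for a particular postal code.  // from OAS summary
        ---
        @endpoints(OpenApi::_protocols.rest: "/cities/{postCode}";)
        @OpenApi::_aspects.GetMapping  // classified as HTTP GET
        getCitiesForPostalCodeUsingGET(
            sync in postCode : string,
            // OAS 200 response modeled as OUT parameter
            sync out returnValue : customerCore::CitiesResponseDto
        );
    }
}
```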

Assembling a Common Architecture Model from Distributed Microservice Models
Microservices need to interact with each other to realize coarse-grained functionality [53]. Thus, a microservice can depend on other services. In the case study (cf. Sect. 5), such a relationship is found between the microservices Customer Management Backend and Customer Core. Such dependencies cannot be derived from an API documentation, since its purpose is to describe the provided interface of a service and not the invocation of functionality provided by other architecture components. However, these dependencies are essential in order to be able to assemble and assess an architecture model and to raise a common architectural understanding across the whole organization (cf. Subsect. 4.4). Therefore, within the workflow (cf. Subsect. 6.1), the dependencies should be added manually by the teams in the LEMMA models. This can be done during the Model Services activity when using the model-first approach and during the Refine Generated Models activity when using the code-first approach.
However, LEMMA service models originally were only able to depend on other LEMMA service models accessible in the local file system. Therefore, to allow teams to express interaction dependencies with the microservices of other teams, we have extended LEMMA to allow external service imports. Listing 3 shows the service model of the Customer Management Backend microservice from the case study:

    1  import datatypes from "customerManagementBackend.data"
    2    as customerManagementBackend
    3  import microservices from "../customer-core/customerCore
    4    .services" as customerCoreServices
    5  // External import as alternative to Lines 3 to 4
    6  import microservices from "https://repo.lakeside.com/
    7    teamB/customercore.json" to "../customer-core/
    8    customerCore.services" as customerCoreServices
    9
    10 public functional microservice
    11   com.lakesidemutual.customerManagementBackend
    12   ...

The microservice imports show the two alternatives. The syntax for importing locally accessible service files is shown in Lines 3 to 4. Alternatively, the import in Lines 6 to 8 exemplifies the mechanism for external imports.
As soon as the Eclipse IDE detects such an external import in the model, it offers a quickfix that automatically downloads the referenced file and, if it is OAS-compliant API documentation, starts a corresponding transformation to LEMMA (cf. Subsect. 6.2). This also makes it possible to model a dependency on a service of another team, even if this team does not yet provide its own model but only API documentation.
Since LEMMA models are textual [62] and, with the extension, external sources can be imported, the model files of the different teams can be managed centrally as an architecture model by a version management system such as Git and thus integrated into CI/CD pipelines, e.g., by a Git hook 16 that copies the models to a central model repository with each release of the microservice.
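As an illustration of such a Git-hook-based integration, the following sketch copies a team's LEMMA model files to a central model repository after each commit. The target path, the MODEL_REPO variable, and the model file extensions are assumptions for illustration, not part of LEMMA:

```shell
#!/bin/sh
# Hypothetical post-commit hook (.git/hooks/post-commit): publish the
# team's LEMMA models to an assumed central model repository path.
MODEL_REPO="${MODEL_REPO:-./model-repository/teamB}"

mkdir -p "$MODEL_REPO"
# Copy all LEMMA model kinds the team maintains, if present.
for model in *.data *.services *.operation; do
  [ -e "$model" ] && cp "$model" "$MODEL_REPO/"
done
exit 0
```

In a real setup, the copy target would more likely be a `git push` to a dedicated models repository; the local copy merely sketches the publishing step.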

Visualization of Microservice Architecture Models
To enable visualization of the architecture using LEMMA (cf. Subsect. 6.1), we have developed the LEMMA Visualizer 17. It is able to transform several LEMMA intermediate service models (cf. Subsect. 3.2) into a single graphical representation using a model-to-text transformation [12]. The steps of the transformation are depicted in Fig. 7.

Enhancing the Operation Modeling Language
In order to enable DevOps teams in SMOs to take full ownership of their respective services, which obviates the need for specialized teams dedicated to operating the whole microservice application (cf. Sect. 4), we have extended the OML with means to import other operation models as nodes and, therefore, to nest operation specifications within each other. That is, teams do not have to maintain individual models for infrastructure microservices, but can use the new mechanism to import the operation model, e.g., for a Eureka service discovery (cf. Sect. 5), from a central model repository (cf. Subsect. 6.3).
OML now enables a DevOps team to describe the deployment of a microservice and all necessary dependencies. Listing 4 shows an excerpt of the operation model for the deployment of the CustomerCore microservice. Lines 1 and 2 of the listing import the customerCore.services model derived from the service's OpenAPI specification (cf. Subsect. 6.2). The following Lines 3 to 6 deal with the import of the technology for service deployment. The container_base technology model uses Docker (https://www.docker.com/) and Kubernetes (https://kubernetes.io/) for service deployment. Lines 7 and 8 illustrate the new possibility to import other operation models as nodes by importing the eureka.operation model that describes the deployment of a service discovery by the Eureka technology (https://github.com/Netflix/eureka).
Listing 4 (excerpt):

    1  import microservices from "customerCore.services"
    2    as customerCoreServices
    3  import technology from "../technology/
    4    container_base.technology" as container_base
    5  import technology from "../technology/
    6    javaWithSpring.technology" as protocolTechnology
    7  import nodes from "../eureka-server/
    8    eurekaServer.operation" as eureka
    9
    10 @technology(container_base)
    11 @technology(protocolTechnology)
    12 container CustomerCoreContainer deployment technology
       ...

Lines 10 and 11 assign the technologies to the CustomerCoreContainer (Line 12). The container acts as a vessel for the deployed microservices and clusters deployment-relevant information, e.g., dependencies to infrastructural components such as databases, service-specific configurations, and protocol-specific endpoints. For this purpose, Lines 12 and 13 create the CustomerCoreContainer and assign the Kubernetes deployment technology which is imported from the container_base technology model. The deployment of the CustomerCore microservice into the container is shown in Lines 14 and 15. The following Lines 15 to 24 show the dependency on the ServiceDiscovery imported from the eureka.operation model. In detail, Line 19 includes the service-specific configuration of the CustomerCore microservice by specifying the eurekaUri responsible for configuring the dependency on the ServiceDiscovery. The CustomerCore microservice exposes its functionality via a REST endpoint as defined in Lines 20 to 22. Besides modeling the deployment of microservice-specific configurations, OML also enables the DevOps team to specify the deployment of infrastructural components, e.g., service discoveries and databases. Listing 5 describes the deployment of the ServiceDiscovery. Lines 1 to 3 are responsible for importing the containerbase.technology and eureka.technology models. The models include the specification of the technology used for the deployment of the ServiceDiscovery.
Lines 4 and 5 import the CustomerCore service which uses the service discovery. Lines 7 and 8 assign the imported technology to the ServiceDiscovery.
Line 9 starts the actual specification of the ServiceDiscovery, which uses the imported Eureka technology. The following Line 10 contains the dependency on the CustomerCoreContainer, specified in Listing 4. The service-specific configuration of the ServiceDiscovery is set via the assignment of default values in Lines 12 to 16. Lines 13 and 14 set the actual hostname and port of the service.
Overall, LEMMA's OML enables the DevOps team to construct operation models which specify the deployment of microservices and their dependencies on the microservice application's infrastructural components.

Listing 5 (excerpt):

    1  import technology from "docker.technology"
    2    as containerTechnology
    3  import technology from "eureka.technology" as Eureka
    4  import nodes from "customerCore.operation"
    5    as customerCore
    6
    7  @technology(containerTechnology)
    8  @technology(Eureka)
    9  ServiceDiscovery is Eureka::_infrastructure.Eureka
    10   used by nodes customerCore::CustomerCoreContainer {
    11   ...

The operation models consist of the concepts of containers and infrastructure nodes. Containers (cf. Listing 4) specify the deployment of microservices, whereas infrastructure nodes contain the configuration for infrastructural components, e.g., API gateways, databases, and service discoveries (cf. Listing 5).

Generating Code from Distributed Deployment Infrastructure Models
In Subsect. 6.5 we introduced OML as a means to describe the deployment of a service-based software system. In this subsection, we contribute a code generation pipeline for creating deployment-related artifacts from the operation models using LEMMA's Model Processor (cf. Subsect. 3.2). As depicted in Fig. 9, the code generation pipeline consists of two consecutive stages.
The first stage of the code generation pipeline consists of a model-to-model transformation [12] that derives intermediate operation models from the source operation models. The second stage of the code generation pipeline deals with the creation of the deployment-relevant artifacts. Based on an intermediate operation model, the code generators already included in LEMMA (cf. Sect. 3) provide a variety of different functionalities that are usually bound to a specific technology model. As already shown in Listing 4 and Listing 5, the described operation models both use the container_base technology model.
The container_base model clusters a technology stack suited for a service-based software system with a focus on container technologies [43] such as Docker, Docker-Compose, and Kubernetes. Listing 6 shows an excerpt of this specific technology model. Line 1 specifies the actual name of the model. Lines 2 to 6 describe the deployment technologies of the model, in this particular case Kubernetes. Additionally, Kubernetes supports the operation environments golang, python3, and openjdk by default.
The second part of Listing 6, Lines 8 to 13, contains the definition of operation aspects for further service deployment specification. Lines 10 and 11 define the Dockerfile aspect, which can be applied to containers in operation models. The aspect consists of a single attribute named content containing the actual content of the Dockerfile. Furthermore, the content attribute is marked as mandatory and can only be configured a single time per container.
The containerbase code generator 22, compatible with the eponymous technology model, creates a set of different deployment-related artifacts such as build scripts, Dockerfiles, and Kubernetes files, and extends existing service configuration files. Based on the operation and technology models from Listing 4 and Listing 6, the code generation pipeline creates executable configurations from the start without any additional configuration needed.
Listing 7 shows a Dockerfile created by the containerbase generator from the described operation models. The Dockerfile contains a basic configuration consisting of a Docker image deduced from the operation environment configured in the operation model. In Lines 2 to 15, several artifacts are copied into the image. Line 17 exposes the port 8110 on which the microservice is started. Finally, Lines 18 to 20 define the entrypoint of the Docker image to compile and run the microservice.
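A generated Dockerfile of the kind described might look roughly as follows (a hedged sketch; the base image, copied paths, and build command are assumptions derived from the description, only the exposed port follows the text):

```dockerfile
# Hypothetical sketch of a generated Dockerfile; image, paths, and
# entrypoint are assumptions for illustration.
FROM openjdk:11

# Copy build files and sources into the image (cf. Lines 2 to 15).
COPY . /app
WORKDIR /app

# Port on which the microservice is started (cf. Line 17).
EXPOSE 8110

# Compile and run the microservice (cf. Lines 18 to 20).
ENTRYPOINT ["./mvnw", "spring-boot:run"]
```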
The containerbase generator creates a basic Dockerfile configuration. To create a more advanced Dockerfile configuration, OML provides the mechanism of operation aspects for specifying custom Dockerfile content. The operation aspect mechanism also applies to Docker-Compose and Kubernetes configurations.
In addition to the Dockerfiles, the containerbase generator also creates Kubernetes deployment files. Generally, a Kubernetes file consists of a deployment and a service part. The deployment part described in Listing 8 contains the configuration of the Kubernetes pod 23 the microservice gets deployed to. Line 1 defines the apiVersion the Kubernetes file uses. The following Line 2 contains the definition of the configuration type kind: Deployment. Lines 5 to 7 assign an overarching name to the deployment, in this case customercorecontainer. Line 8 indicates the configuration of the Kubernetes deployment, specifically the number of replicas that should be created for the deployment.
Listing 9 contains the service part of the Kubernetes deployment with the configuration of how the microservice application is exposed. As before, Lines 1 and 2 contain the information about the apiVersion and the configuration type of the Kubernetes file, kind: Service. Lines 3 to 7 assign the name of the deployment. The listing defines the exposure of the microservice via port 8110 in Lines 9 to 12.
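Taken together, the described deployment and service parts correspond roughly to the following Kubernetes configuration (a hedged sketch; selector labels, the image name, and the replica count are assumptions, while the resource names, kinds, and port follow the description):

```yaml
# Hypothetical sketch of the generated Kubernetes configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customercorecontainer
spec:
  replicas: 1              # replica count assumed
  selector:
    matchLabels:
      app: customercorecontainer
  template:
    metadata:
      labels:
        app: customercorecontainer
    spec:
      containers:
        - name: customercorecontainer
          image: customercorecontainer:latest   # image name assumed
          ports:
            - containerPort: 8110
---
apiVersion: v1
kind: Service
metadata:
  name: customercorecontainer
spec:
  selector:
    app: customercorecontainer
  ports:
    - port: 8110
      targetPort: 8110
```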
Supplementary to the generated deployment-related artifacts, LEMMA's code generation pipeline also supports the extension of existing service configurations. For this purpose, we implemented additional code generators for technologies, e.g., MongoDB 24, MariaDB 25, Zuul, and Eureka. Listing 4 shows in Line 13 a property for specifying the eurekaUri. Based on this property, the Spring Eureka code generator extends the service's configuration in the specified CustomerCoreContainer.
Listing 10 contains a variety of configurations for Spring-based microservice implementations. The spring.application.name and server.port in Lines 1 and 2 are derived from the modeled microservice's name and its specified endpoint in the LEMMA models. Lines 3 to 7 are deduced from the Eureka configuration shown in Listing 4. They configure the endpoints for connecting to the Eureka service discovery.
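The described configuration corresponds roughly to the following Spring properties (a hedged sketch; the application name, the discovery URI, and the client flags are assumptions, while the port and the property categories follow the description):

```properties
# Derived from the modeled microservice's name and endpoint (Lines 1 and 2).
spring.application.name=customercore
server.port=8110
# Deduced from the eurekaUri in the operation model (Lines 3 to 7);
# the concrete URI and flags are assumptions.
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka/
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
```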

Validation
In this section, we validate the present LEMMA extensions that implement the workflow (cf. Sect. 6). To enable replicability of our results, we provide a validation package on GitHub 26 .
In order to make the validation feasible, we first reconstructed the functional backend and infrastructure microservices of Lakeside Mutual (cf. Sect. 5) using a systematic process [61]. This step was necessary because the backend and infrastructure microservices of Lakeside Mutual are implemented in Java and not modeled with LEMMA. In detail, our reconstruction includes all four Java-based functional microservices and the infrastructural microservices Eureka Server and Spring Boot Admin (cf. Sect. 5).
In addition, we retrieved the current API documentation of Lakeside Mutual by putting the architecture into operation and triggering the generation of the documentation using prepared REST requests. At the end of this process, we could refer to the current API documentation of Lakeside Mutual's Customer Core and Customer Management Backend (cf. Fig. 3 in Sect. 5), which are the two components for which the application provides API documentation.
We then performed the individual generation steps of our workflow (cf. Sect. 6) based on our reconstructed LEMMA models and the case study's API documentation. We illustrate the results of applying our workflow in Table 3 using the Lines of Code (LoC) metric.
As Table 3 shows, using the OAS-conform API documentation, we were able to generate 171 and 174 LoC of LEMMA domain data and service model files for the Customer Core and the Customer Management Backend microservices, respectively. Although the same operations and parameters for interfaces are present in the models generated by our workflow and in the reconstructed LEMMA models, the LoC are higher in our reconstructed models. This is due to the fact that, e.g., the operation-related portion of LoC or technology-related annotations for databases are present in the manual models, but not in the generated ones, since no information on this is available from the API documentation.
Regarding the generation of deployment specifications, we were able to generate 285 lines of infrastructure code for Docker and Kubernetes from the reconstructed operation models of the functional microservices. Teams can abstract from technology-specific infrastructure code and, in combination with LEMMA's source code generators such as the Java Base Generator [60], generate directly executable and deployable stubs of their services.

Discussion
The model-based workflow presented in Sect. 6 addresses the previously identified challenge areas (cf. Sect. 4). In detail, the workflow provides means to establish a common architectural understanding in an organization scaling to the level of multiple teams for the first time (CA1) and to cope with the complexity of operational aspects in microservice engineering (CA2).
We argue that by documenting the architecture in a centralized manner (cf. Subsect. 6.2 and Subsect. 6.3), combined with the ability to visualize it (cf. Subsect. 6.4), teams and higher-level stakeholders, such as project sponsors, have a good basis for sharing knowledge and gaining insight into each other's development artifacts through the inherent abstraction property of the models [47]. Box-and-line diagrams, in particular, have the advantage that people can more easily grasp relations between concepts [12].
Another added value of our approach is the ability to seamlessly integrate deployment specifications into architecture models as LEMMA operation models, with the possibility to derive deployment configurations for heterogeneous deployment technologies, i.e., to generate them for Docker and Kubernetes (cf. Subsect. 6.6). In this regard, Combemale et al. [12] underline the added value of models to abstract complexity in the deployment process, making the process more manageable. However, while the deployment technologies supported by our workflow constitute de-facto standards [68], LEMMA does limited justice to the heterogeneous technology landscape concerning cloud providers. In particular, we do not specifically address cloud-based deployment platforms such as AWS 27 or Azure 28. Presumably, LEMMA is able to support such technologies through specific technology models (cf. Subsect. 3.1). In the future, we plan to address this limitation by providing LEMMA technology models and code generators for languages targeting the Infrastructure as Code [51] paradigm, e.g., Terraform [10]. As a result, LEMMA would support model-based deployment to a variety of cloud-based deployment platforms.
In order to implement and take advantage of the LEMMA-based workflow, team members need to learn and use a new technology in the form of LEMMA. As the validation (cf. Sect. 7) shows, teams can significantly increase efficiency through the available generation facilities of LEMMA. However, we need further empirical evaluation in practice (cf. Sect. 10) to more accurately assess in which cases the efficiency gains from better documentation, accessible architectural understanding, and generation of deployment specifications outweigh the effort required to learn LEMMA and in which cases they do not.
An important aspect on which the efficiency of the workflow depends is the organization-wide agreement on the level of detail of the models shared between teams. For example, if a very high level of detail is agreed upon, i.e., including as much information as possible from the source code in the models, as we did in the reconstruction of the case study (cf. Table 3), generated artifacts must be refined more extensively by the DevOps teams, i.e., a higher effort is necessary. This can be seen, for example, when looking at the Customer Core service (cf. Sect. 7). The reconstructed model contains considerably more LoC, e.g., regarding technologies, than the generated model. In contrast, if the organization agrees on a low level of detail, e.g., abstracting from technology-related information and thus only considering technology-agnostic domain, service, and operation models (cf. Sect. 3), very few adjustments to the generated models are necessary.
A technical limitation within the LEMMA-based workflow is the unidirectional artifact creation. Changes to the models currently have to be made by the team owning the corresponding microservice. However, in order to further extend a shared understanding of the architecture, and to follow DevOps's characteristic of minimizing communication efforts [20], it would be beneficial if other teams or stakeholders could request changes to another team's services directly via the shared models, e.g., to add an attribute to an interface operation.

Related Work
In the following, we describe related work from the areas of service and operation modeling, comparable qualitative studies, and workflows for DevOps-oriented development of microservice architectures in the context of model-driven software engineering.
MSA Service Modeling
Terzić et al. [81] present MicroBuilder, a tool that enables the modeling and generation of microservices. Like LEMMA, MicroBuilder's MicroDSL is based on the Eclipse Modeling Framework. Unlike LEMMA, however, MicroBuilder is closely tied to Java and Spring as specific technologies, so that the MicroDSL metamodel would have to be adapted for new technologies. MicroBuilder also addresses only the role of the developer and neglects stakeholders such as domain experts or operators. In addition, MicroBuilder does not address MSA's characteristic of having multiple teams involved in the development process. Another model-based approach, MicroART [30], is provided by Granchelli et al. MicroART contains a DSL called MicroARTDSL, which aims to capture architecture information. The purpose of MicroART is to recover microservice architectures through static and dynamic analysis. As such, MicroART can support organizations in building a common architectural understanding, similar to the visualization we proposed in Subsect. 6.4. However, MicroART does not provide a model-based workflow for the teams and lacks LEMMA's rich ecosystem, which also comprises means to model and generate domain data, operational aspects, and different technologies.
Qualitative Study
Bogner et al. [8] describe a study related to our qualitative empirical analysis (cf. Sect. 4) that includes 14 interviews with software architects. In contrast to our analysis, Bogner et al. do not focus on the challenges in the organizations' workflows, but on the technologies used and on software quality aspects. Another interview study was conducted by Haselböck et al. [33], focusing on software design aspects such as the sizing of microservices. A questionnaire-based study on bad smells in MSA was conducted by Taibi et al. [77]. The study touches on organizational aspects and is included in our argumentation of the challenges (cf. Subsect. 4.4), but due to its questionnaire-based design, the development process as a whole was not considered.

Development Workflows
In the context of our proposed workflow (cf. Subsect. 6.1), there are several large-scale agile process models or methodologies that can foster the development of MSA by multiple DevOps teams. Examples include Scrum at Scale [76], the Spotify Model [70], and SAFe [65] (cf. Subsect. 2.5). However, these approaches generally only become viable when an organization involves at least 50 developers [17], and are therefore not suitable for SMOs facing the challenge of initially scaling from one to two or three teams. In addition, the aforementioned approaches operate at an organizational level and do not prescribe concrete development practices. Therefore, we expect our proposed workflow (cf. Subsect. 6.1) to integrate well with these large-scale approaches.

MSA Operation Modeling
The Essential Deployment Metamodel (EDMM) [82] is an approach that consolidates the essential components of a software system's deployment in a metamodel, taking into account concepts such as configuration management [15] and infrastructure as code [51]. Based on the metamodel, EDMM maps deployment models to the specific technologies used in the software system's provisioning process. For deploying microservice applications, EDMM supports technologies like Puppet (https://puppet.com/), Terraform (https://www.terraform.io/), AWS CloudFormation (https://aws.amazon.com/), and Cloudify (https://cloudify.co/). Unlike EDMM, LEMMA addresses not only the deployment of service-based systems but also their data structures and service composition. Moreover, while EDMM provides mappings to specific cloud providers, LEMMA provides technology-specific provisioning artifacts that can be used with different cloud providers. DICER [1] is an approach based on technology-independent models for the generation of infrastructure as code to deploy a software system. DICER models encapsulate monitoring, self-adaptation, configuration management, server deployment, and software system deployment. DICER also fosters the transformation of models into artifacts for service deployment using TOSCA (https://cloudify.co/tosca/) and other technologies. The functional scope of DICER relates exclusively to the provisioning and operation of the software system; in particular, DICER does not support the modeling of data structures or service composition. Like LEMMA, DICER provides technology-specific artifacts that can be used in the deployment process. Additionally, DICER provides a graphical representation in the form of UML deployment diagrams, which LEMMA does not offer for the operational viewpoint.

Conclusion and Future Work
In this paper, we have identified two key challenge areas for SMOs through an empirical analysis of an interview study (cf. Sect. 4). First, it is challenging for SMOs to develop and maintain a common architectural understanding in an organization that is scaling to multiple teams for the first time through the adoption of MSA. Second, deployment in particular appears challenging due to its complexity, so SMOs tend to establish dedicated operations teams, contrary to the microservice ownership principle (cf. Subsect. 2.5). This is detrimental to the implementation of DevOps practices and to the benefits the teams hope to gain from them.

To address these two challenge areas, we have presented a model-driven workflow based on LEMMA (cf. Sect. 3) for developing microservice architectures (cf. Sect. 6) and elaborated on the components we have added to LEMMA to support this workflow. The components comprise (i) interoperability bridges between OpenAPI and LEMMA models (cf. Subsect. 6.2); (ii) an extension of the Service Modeling Language to allow the import of remote models (cf. Subsect. 6.3); (iii) a model processor to visualize microservice architectures (cf. Subsect. 6.4); (iv) an enhancement of the Operation Modeling Language to allow the import of infrastructure nodes (cf. Subsect. 6.5); and (v) code generators for microservice deployment and operation (cf. Subsect. 6.6).
For future work, we plan to conduct a qualitative observation and interview study that aims to evaluate the proposed workflow in practice. Regarding the presented LEMMA extensions, we are going to mature the prototypical implementations and improve their accessibility for users, e.g., by providing a dashboard. Furthermore, we would like to extend LEMMA's means to support a common architectural understanding in an organization not only through the presented visualization but also through analyses such as code metrics.