1 Introduction

Modern organizations are in a continuous state of change and evolution and are therefore considered dynamic systems [1]. They encounter challenges that are usually associated with perpetual changes arising from the environments in which they operate, changes whose direction and speed are usually difficult to predict and anticipate [2]. Naturally, when facing environmental threats and opportunities, any organization has to change, either to improve its effectiveness in achieving a business goal [3] or to ensure its survival [2]. This dynamic state is an outcome of the high level of dynamism in business environments, both external and internal, which demands a pace of change that exceeds the pace organizations can comfortably sustain [4]. This pace is further increased by specific factors such as the digital transformation of contemporary society [5] and newly emerging strategies and technologies [2].

On a conceptual level, change and strategy are strongly linked to each other, but also to the concept of capability [6]. In practice, organizations respond to change by adjusting the business offerings they are capable of delivering, which makes the concept of capability highly relevant. Capabilities can be considered an essential aspect of organizations because they encompass many important notions of organizational design that are also relevant to change, such as decision, goal, process, context, and service [7, 8].

This situation leads to an increased need for adaptability of organizations and their capabilities. Any academic or industrial discussion of organizational adaptability also needs to include highly adaptive Information Systems (IS), because IS need to be continuously available and able to adapt to changing conditions derived from the organizational environment and other requirements [9]. Organizations need support when encountering the phenomenon of capability change, a task that can be facilitated by Enterprise Modeling (EM), a discipline that supports the development of a multi-perspective view of an organization's design. The concept of capability offers EM a context-dependent extension for handling business variability and change. Accordingly, various capability modeling methods and approaches have been developed, as identified and reported in earlier steps of this research project [10]. These approaches differ both in whether they are generic or domain-specific and in the level of complexity involved.

To the best of our knowledge, no existing approach has been specifically designed to tackle the domain of changing organizational capabilities. This is the research gap and problem addressed through the development of KYKLOS [11] and Compass [12], two interlinked approaches of different complexity. The goal of the research project that produced KYKLOS and Compass is to provide methodological and tool support for organizations whose capabilities are changing or need to change. Compass, in particular, was motivated by the above-mentioned research problem and by earlier research, notably [13], the conference paper that this article extends.

As far as the development of modeling approaches is concerned, regardless of the design and development framework, every approach is ultimately intended to be used by people to solve practical problems [14]. A further commonality of every design and development project is the need for the designed artifact to be demonstrated and evaluated by its intended users, as suggested by the Design Science Research (DSR) principles [14] that have driven this project. The aim of the present article is to (i) demonstrate KYKLOS and Compass by showing their use in the solution of a specific problem instance, thereby establishing their feasibility as problem-solving creations, and (ii) evaluate KYKLOS and Compass by determining their problem-solving capabilities in general, based on the degree to which the requirements that drove their development are fulfilled. Fulfilling these goals will answer the following research question: “What is the value of KYKLOS and Compass as problem-solving artifacts in the domain of changing capabilities in organizations, especially in regard to their main stakeholder groups?”.

As a basis for the demonstration and evaluation, a case study has been conducted in a Swedish company, referred to as Digital Intelligence (DI). The company operates within the ERP system implementation and consulting domain and is undergoing a challenging shift in its customers’ preferences, which requires adaptation of its business capabilities.

The rest of the article is structured as follows. Section 2 provides the required background and related literature, Sect. 3 presents the methodological decisions, Sect. 4 describes the conducted case study and the analyses of the case using the different approaches, and Sect. 5 presents the results of the evaluations. Sections 6 and 7 discuss the results and provide concluding remarks respectively.

2 Background

This section explains the origins of this article and provides an overview of the evaluated approaches, the theoretical background regarding modeling methods and method evaluation, and a presentation of similar studies.

The present article is an extension of [13], where the evaluation of the KYKLOS method was initially presented. The aim of the initial article was to have the method demonstrated and evaluated by its potential users, consisting mainly of business and modeling experts. Including both groups enabled a comparison of the user types and an identification of the differences between them, along with an understanding of the knowledge required for using the method and of potential means to bridge the gap, as input for further development of the method. A case study was conducted using the DI case and its ERP implementation and consulting business.

The evaluation in the initial article motivated the development of Compass [12], which in turn called for an additional evaluation cycle to assess the new artifact both as a complementary solution to the identified problem and as a stand-alone one. This new evaluation cycle is the main part of this extension. More specifically, the Introduction has been extended and updated to reflect the new and updated content and goals of the paper. The Background section now also includes an overview of Compass, along with a clarification of the unique and common characteristics of each approach. The extension of the Methodology consists of an explanation of the requirement areas, analyses of the previously published KYKLOS evaluation, and the methodological decisions about the new evaluation cycle. Section 4 is extended with the analysis of the case using Compass. Finally, the Discussion and Conclusions sections have been updated to reflect all the new content.

2.1 Overview of KYKLOS and Compass

This subsection provides an overview of KYKLOS and Compass, their commonalities, and their specifics.

2.1.1 Commonalities

First, both KYKLOS and Compass are domain-specific approaches; their domain [15] is that of changing business capabilities. Additionally, they share Design Science (DS) [16] as the research framework used for their design. The development framework followed is the one suggested in [14], according to which every project is initiated with a phase in which the problem to be addressed is explicated.

The two approaches were developed as a response to the same problem, namely, “even if change methods, as-is and to-be models exist, there is a lack of methodological guidance for describing the transition of capabilities.” [17]. Their shared goal is to provide support for managing changing capabilities.

KYKLOS and Compass also share the next phase of DS. Since they were developed as a response to the same problem, they also share their requirements, which were published as a set of 28 goals [18, 19]. For the evaluation purposes reported in this study, the goals needed to be condensed into areas that improve evaluation feasibility. This was achieved by grouping the goals into requirement areas, which are mapped to concepts of KYKLOS and Compass. No goals remained unmapped in this task; thus, evaluating the abstracted requirement areas means that the entire initial set is evaluated. The identified requirement areas are (i) Context, (ii) Intentions, (iii) Capability architecture (related capabilities), (iv) Decision-making (motivation to change), (v) Capability transition, (vi) Ownership, and (vii) Configuration components (component management). The result of the grouping and abstraction is shown in Table 1, which includes the goal numbers, the identified requirement areas, and the relevant elements of KYKLOS and Compass associated with the evaluation of each area. The elements are described in detail in the following sections of this article.

Table 1 Requirement areas

Both approaches rely on a common definition of a capability as “a set of resources and behaviors, whose configuration bears the ability and capacity to enable the potential to create value by fulfilling a goal within a context” [17]. The need to change is identified through unfulfilled conditions in the external context of the capability or in the internal intentions of the organization that owns it. The actual phenomenon of capability change is perceived as a transition from the “before” to the “after” configuration of the capability, in terms of changes in the component sets required in each change case.
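To make this shared view concrete, the following minimal Python sketch expresses it in code. The class and attribute names are illustrative assumptions for this article, not part of either toolkit’s specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Configuration:
    """One version of a capability: the set of components it requires."""
    name: str
    components: frozenset  # configured resources and behaviors

@dataclass
class Capability:
    """A capability, its current configuration, and its change motivators."""
    name: str
    current: Configuration
    context_fulfilled: bool      # conditions from the external context
    intentions_fulfilled: bool   # internal intentions of the owner organization

    def needs_change(self) -> bool:
        # A need to change is signaled by unfulfilled context or intentions
        return not (self.context_fulfilled and self.intentions_fulfilled)

def transition_delta(before: Configuration, after: Configuration) -> frozenset:
    """Capability change as a shift between 'before' and 'after' component sets."""
    return after.components - before.components
```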

Additionally, both approaches are iterative, reflecting the iterative nature of changing capabilities. Thus, a capability change can have an impact on the capability itself, on capabilities related to the changing one, or on the context of the capability. This is reflected in the modeling procedures of KYKLOS and Compass.

2.1.2 KYKLOS specifics

KYKLOS in its entirety was introduced in [11]. It is a modeling toolkit, that is, a modeling method supported by a dedicated modeling tool. The mechanisms and algorithms of KYKLOS are integrated as dynamic visualization elements in the KYKLOS tool, so what is mainly evaluated in this research is the modeling technique. The KYKLOS tool has been developed via the ADOxx meta-modeling platform [20], as reported in [21]. The concepts and notation of KYKLOS are presented in Table 2, which can be used as a legend for the figures included in Sect. 4.1. For the semantic explanation of the concepts, we refer to [11].

Table 2 The KYKLOS notation [21]

The KYKLOS toolkit, which includes both the methodological guidance and the complementing tool, can facilitate the modeling of several capabilities and transitions in parallel, which significantly increases the complexity of the created models and provides a more detailed overview of the capability architecture around the change.

The KYKLOS modeling procedure consists of four phases: (i) Foundation, (ii) Observation, (iii) Decision, and (iv) Delivery, as shown in Fig. 1, which also depicts the input and outcomes of every phase.

Fig. 1 A visualization of the modeling procedure with the input and outcomes of every phase

The Foundation phase uses the identified problem as input and establishes the basis of the analysis. The drivers of this phase are to identify which capabilities’ potential changes are to be analyzed further and what value every capability produces, including the ownership aspect. There are several approaches for identifying the capabilities, as published in [22].

The next phase, Observation of context and intentions, uses as input the intentions of the owner organization, the context of the capability, and the given instance of capability change. The drivers of this phase are to identify the external factors and internal intentions that should be fulfilled by the capability. The output of this phase is the capability’s context, connected to the capability using KPIs, together with a set of intention elements.

The third phase, Decision alternatives, explores the existing configuration and examines all the potential configurations that can be formulated. Its input is the changing capability and the captured need to change, in terms of unfulfilled context and intentions. This phase identifies the capability’s components, focusing on resources since they are quantifiable, and the alternative configurations that also fulfill the need to change. Missing components are identified to facilitate planning and to support deciding on the optimal transition between configurations.

The fourth phase, Delivery of change, uses as input the decision to change and the set of capability configurations produced during the previous phase. The aim is to document the transitions that can be performed between capability configurations and the properties of the actual changes, and the output is a set of at least one transition between configurations of the modeled capability. Directed transition relationships are established between configurations, and for each transition, the properties of change are identified and documented. The properties are dichotomies that provide insight into how the change can be realized. The properties used for every transition in a KYKLOS model are (i) Control, (ii) Scope, (iii) Frequency, (iv) Stride, (v) Time, (vi) Tempo, (vii) Desire, and (viii) Intention.

2.1.3 Compass specifics

Compass is a canvas-based approach developed in response to the results of KYKLOS’s evaluation. The Compass canvas consists of six areas [12], namely (i) Capability, in light orange, (ii) Motivation, in light blue, (iii) Components, in light yellow, (iv) Transition, in light green, (v) Change properties, in white, and (vi) Impact, in light gray. The outline of the Compass canvas is shown in Fig. 2.

Fig. 2 The outline of the Compass canvas

Each area includes the concepts deemed relevant for the desired functionality. The concepts borrow their definitions from KYKLOS’s concepts, and those that do not exist in KYKLOS have equivalents in its concept set. Table 3 provides an overview with descriptions of the areas, the questions they answer, and the associated concepts; c.f. [12] for details.

Table 3 The areas of Compass, their descriptions, and the included concepts per area. The first three rows explain the structure of the table. Adapted from [12]

The Compass modeling procedure starts from the Capability area; the user then proceeds sequentially to Motivation, Components, Transition and Change properties, and finally the Impact area. After filling in the Impact area, a user who wishes to continue the analysis checks whether the impact has changed the outcomes, related capabilities, and fulfillment status of motivation elements to a point where the capability needs to change again. In contrast to KYKLOS, Compass is designed so that there can be only one capability and only one transition per canvas. That is, the focus of the canvas is on one transition at a time, and any analysis of multiple capabilities and/or transitions requires multiple canvases.
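The sketch below encodes this structure in Python to make the fixed fill order and the one-capability, one-transition rule explicit. The names are hypothetical and purely illustrative.

```python
from dataclasses import dataclass, field

# The fixed fill order of the six Compass areas
AREA_ORDER = ("Capability", "Motivation", "Components",
              "Transition", "Change properties", "Impact")

@dataclass
class CompassCanvas:
    """One canvas: exactly one capability and one transition."""
    capability: str
    transition: tuple                      # (before version, after version)
    areas: dict = field(default_factory=dict)

    def fill(self, area: str, content) -> None:
        # Areas are filled sequentially, in the order defined above
        expected = AREA_ORDER[len(self.areas)]
        assert area == expected, f"fill {expected!r} next"
        self.areas[area] = content

# Analyzing several capabilities or transitions requires several canvases
canvases = [CompassCanvas("Capability A", ("Current", "Future")),
            CompassCanvas("Capability B", ("Current", "Future"))]
```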

2.2 Conceptual and procedural integration of the two approaches

KYKLOS facilitates the analysis of simple or complex capability changes, while Compass is optimal for simple cases only. Compass captures the basic information of the changing capability without all the details needed for KYKLOS; thus, it can serve as a pre-modeling activity. It can also capture the phenomenon’s basic information to enhance understanding and facilitate a high-level analysis, thereby fulfilling its second goal: to act as a high-level stand-alone tool. While KYKLOS enables both qualitative and quantitative analysis of a given change, Compass has no quantitative aspect, so its analysis is performed at a higher level of abstraction. While KYKLOS can involve a multitude of changing capabilities and transitions, Compass is restricted to one transition per canvas.

The two approaches are conceptually aligned, as shown in a common meta-model that includes the concepts of both KYKLOS and Compass. The meta-model, see Fig. 3, is color coded to capture the association between the concepts and the approach that uses them. It extends the KYKLOS meta-model, originally published in [11], which matches the combined set of the blue and green concepts in Fig. 3.

Fig. 3 The conceptual integration of KYKLOS and Compass. Blue depicts KYKLOS concepts, yellow depicts Compass concepts, and green depicts concepts that the two approaches have in common

KYKLOS and Compass are also procedurally aligned: every Compass area can provide input for specific phases of KYKLOS. The only exception is the Impact area, which corresponds to the analysis part of a complete KYKLOS model, realized in a partially automatic way in the tool. Since the canvas lacks this automation, the impact of a changing capability had to be captured in the canvas as well, which justifies the introduction of this additional area. Table 4 presents how the areas of Compass correspond to the KYKLOS phases. In this way, Compass satisfies the requirement of being usable as a pre-modeling step for a KYKLOS model.

Table 4 The procedural correspondence between KYKLOS and Compass

Using a Higher Educational Institution (HEI) as a brief illustrative example, KYKLOS and Compass can be applied to manage and improve capabilities, such as teaching a UML course. In KYKLOS, the initial phase identifies the HEI’s capability to educate, any supporting capabilities, and all their outcomes; the Capability area of Compass serves the same purpose. The Observation phase of KYKLOS and the Motivation area of Compass involve monitoring context, like the number of students graduating from the course, using KPIs to assess outcomes, and goals, like providing state-of-the-art education.

If these are not fulfilled, the Decision phase identifies potential changes in the components of the capability’s configuration, like updating the course content. Resources (like educators, classrooms, and modeling expertise), processes, and configurations (like Current and Future) are managed to ensure that the capability remains active and meets context and intentions. This is performed in the Components area of Compass. During the Delivery phase of KYKLOS, the attributes of the transition are also captured, for example, whether the change is intended, planned, and discrete, an activity reflected in the Transition and Change properties areas of Compass.
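Continuing the HEI example, the feasibility of a transition reduces to a set difference between the components the target configuration requires and those already available. The component names below follow the example; the mapping of the example values “intended” and “discrete” to specific KYKLOS property names is an assumption for illustration.

```python
def missing_components(required: frozenset, available: frozenset) -> frozenset:
    """Components the target configuration needs but the organization lacks."""
    return required - available

# Component sets of the Current and Future configurations of the teaching capability
current = frozenset({"educators", "classrooms", "modeling expertise"})
future = current | {"updated course content"}

gap = missing_components(future, available=current)
print(gap)  # frozenset({'updated course content'}): blocks the transition until acquired

# Dichotomous change properties documented for the transition (assumed mapping)
properties = {"Intention": "intended", "Tempo": "discrete"}
```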

Compass mirrors these phases but emphasizes documenting resources, transitions, and impacts in a more static, manual way, contrasting with KYKLOS’s dynamic visual analysis. For this reason, the Impact area is included in the canvas to compensate for the missing dynamics. Both approaches involve collaboration between modelers and domain experts to navigate changes and enhance the capability’s offerings; this collaboration can take various forms, like an analyst-driven approach or participatory modeling [23].

2.3 Theoretical background

This subsection provides an overview of the theoretical background.

2.3.1 Capabilities

Diverse capability definitions exist in the literature, as shown, for example, in [17]. In this project, capability has been defined as a potential to create value by fulfilling an intention within a context, using a configured set of resources and behaviors. Employing capability management is an efficient way to tackle the complexity that stems from the turbulence of business contexts [24], because capability management provides the methodological guidance needed for improving organizational flexibility and productivity, especially in digital organizations [7].

Literature examples that involve the concept of capability include stand-alone approaches like Capability-Driven Development (CDD) [7], Capability-Oriented Designs with Knowledge (CODEK) [25], and the Value Delivery Modeling Language (VDML) [26], EA frameworks like the NATO Architecture Framework (NAF) [27] and the Ministry of Defence Architecture Framework (MODAF) [28], along with extensions of existing approaches, for example, i* [29] and Capability Mapping [30,31,32].

Capability bears a significant association with the dynamic environment in which it exists, which has led to the concept of dynamic capability [33, 34]. Even though dynamic capability is relevant to change, it has not been used in this project because it is considered relatively imprecise, as expressed in [35]. This decision is based on the diverse definitions in the literature, which result in inconsistencies. For example, dynamic capability has been defined as an ability [33], orientation [36], process [37], capacity [38], creation/design [39], or mechanism [40]. This diversity can be partly explained by the term “dynamic” itself, which has a double meaning when referring to change: it refers both to something that causes change to something else and to something that is being changed [41]. This is a potential source of confusion and diverse definitions. Extending the discussion on the definition of dynamic capabilities is outside the scope of this research, so the term has been avoided, even if the existing literature has been taken into consideration toward a deeper understanding of capabilities.

2.3.2 Modeling methods and approaches

The aim of modeling activities is to describe specific aspects of the world by applying abstraction. For specific domains, specific attributes of the entities that comprise the domain are represented in conceptualizations like a meta-model, and a specific state of affairs of the domain that is expressed using a conceptualization is called a model [42]. The approaches, methods, and languages that are developed to support the modeling of specific domains are called domain-specific [22, 43], in contrast to other approaches which are called generic.

Modeling requires guidance in the form of a modeling method. A method’s components are a modeling technique and the mechanisms and algorithms that operate on the created model [44]. The latter refer to the functionalities of a method that are implemented in a tool. The modeling technique consists of a modeling language and a modeling procedure. The modeling procedure describes the steps for applying the method; the modeling language comprises syntax, semantics, and notation. The syntax describes the rules and elements for developing a model, the semantics describe the meaning of the language, and the notation concerns its visualization [44].

Structuring information during conceptual modeling can be perceived as enrichment or elaboration. This is justified by considering that a user commonly starts by creating a highly abstract model before proceeding to more complex and detailed models, reducing the degree of abstraction and capturing more complex aspects [45]. The complexity of a model is associated with the modeler’s experience level: a modeler with extensive experience can manifest a deeper understanding, apply complex methods to complex cases, and deliver more efficient models. However, modeling method developers do not target only modeling experts. For non-experts, specific approaches are selected that aim to reduce the risks derived from the above-mentioned complexity. Such approaches often involve post-its and text notes, e.g., the canvas approach [46], which has been employed for the development of Compass. The canvas approach aims to document and visualize all the necessary elements of a case without the complexity of the relationships among the elements.

A widely known canvas application is the Business Model Canvas (BMC) [47], which provides an overview of an enterprise and has inspired a variety of approaches in Business Informatics. Examples are the Operating Model Canvas [48], specialized for the operational level of organizations, the Business Process Canvas [49], specialized for business processes, and the Co-creation Canvas [50], specialized for co-creation activities. Applications of the canvas approach exist in other areas as well, for example, the Design Science canvas [14], which supports capturing and guiding the essentials of a DSR project.

2.3.3 Method evaluation approaches

Modeling methods are often developed within design frameworks like Design Science Research [16] and Design Thinking [51]. What is common is that every developed method must be evaluated. Evaluation can take place in various formats [14]. An evaluation can be summative or formative, referring to the assessment of completed or still-developing artifacts, respectively. An evaluation can also be naturalistic, when conducted in the case’s natural environment with a real case, real users, and a real problem, or artificial, when it is not bound to reality. Finally, evaluations that require an artifact to be used beforehand are called ex post, while those that evaluate artifacts on a conceptual level, without use, are called ex ante [14].

From a method engineering perspective, the evaluation of a method is its quality assessment aspect, namely enactment [52]. Enactment is complementary to generation: generation is the act of defining and describing the method based on a defined foundation, often a meta-model, while enactment is the act of validating the method through application. This is in line with the ISO/IEC 24744:2014 standard [53]. The standard also defines roles such as the method engineer, the person who designs, builds, extends, and maintains the method, and the developer, the person who applies the method during enactment. The two activities and roles are interlinked, so both roles can participate in both generation and enactment in an interactive way.

The diversity of design projects results in diverse evaluation approaches and strategies. The Framework for Evaluation in Design Science (FEDS) [54] includes four evaluation strategies, namely (i) Quick and Simple, (ii) Technical Risk and Efficacy, (iii) Purely Technical, and (iv) Human Risk and Effectiveness, the strategy used in this study. It is selected when the main risk in a project is user-oriented, when naturalistic evaluation is cheap, and when the focus is on evaluating whether the benefits of the artifact will still accrue once it is placed in operation, even amid social and human difficulties in its adoption and use. The strategy emphasizes initial formative and artificial evaluations that gradually progress to summative naturalistic ones [54].

Another relevant approach for the evaluation of methods in Information Systems (IS) research is the Method Evaluation Model (MEM) [55]. MEM was developed by combining two theoretical areas: (i) the Technology Acceptance Model (TAM) [56], originating from the IS literature, and (ii) Methodological Pragmatism [57], originating from the Philosophy of Science. It consists of six constructs, which reflect specific aspects of the evaluated method when applying the MEM approach. The first two constructs below are derived from Methodological Pragmatism, the next three from TAM, and the model is completed with Actual Usage.

  • Actual Efficiency, which concerns the effort that the user must make to apply the evaluated method.

  • Actual Effectiveness, which assesses whether the method produces its intended outcomes, i.e., whether it fulfills its requirements.

  • Perceived Ease of Use, which concerns the degree to which a user perceives that the method is easy to apply.

  • Perceived Usefulness, concerning the user’s perception of the usefulness and effectiveness of the method.

  • Intention to Use, the extent to which a user has the intention to use the evaluated method.

  • Actual Usage, the extent to which the evaluated method is already used in practical cases.

2.4 Similar studies

The evaluation of EM approaches is commonly reported in the literature. Employing FEDS [54] with one of its strategies, and using MEM [55] as guidance for developing EM evaluation protocols, are likewise common practices, with numerous examples. In this subsection, we present a small set of studies similar to the current one, as additional justification for selecting FEDS and MEM for the evaluation of KYKLOS and Compass.

One relevant study concerns eCoM [58], an EM method designed for context modeling within the framework of capability management, and includes a report of its evaluation. The evaluation took place over several cycles, employing a variety of strategies and methods, like action research and case study. The FEDS framework was used, without mention of a specific strategy. MEM is not mentioned explicitly; however, its efficiency and effectiveness aspects were used in the evaluation.

Another evaluation of an EM method is reported in [59]. The evaluated method is the Domain-based Business Process Architecture (dBPA), and the study reports on two evaluation cycles. There is no explicit mention of FEDS or a specific strategy, yet its usage is implied, since the study follows the FEDS principles and moves from artificial to more naturalistic evaluation cycles. MEM is also mentioned as a future step of the research.

In [60], the GOBIS (Goal and Business Process Perspectives for Information Systems Analysis) framework is evaluated using the aspects of the MEM. One significant adjustment in this study is that the effectiveness aspect of the MEM has been decomposed into two evaluation aspects, the completeness and the validity of the model. This adjustment is based on the model quality framework [61] and relies on the fact that the evaluated method is a modeling method, which means that such adjustments can only be case-specific.

Another study [62] reports the design of an experimental evaluation of an EM language which has been developed using the Resource Event Agent and the Unified Foundational Ontology as the theoretical base. The evaluation uses MEM as the basis of evaluation, providing an extensive and thorough analysis of the framework and its aspects. In a similar way to this study, Actual Usage has been omitted, since the evaluation concerned a freshly developed method. FEDS has not been part of the evaluation.

3 Research methodology

This article is part of a project aiming to provide methodological and tool support for organizations whose capabilities are changing or need to change. Driven by DSR [16], and in particular the framework of [14], the project consists of five steps: (i) Explicate problem, (ii) Define requirements, (iii) Develop artifact, (iv) Demonstrate artifact, and (v) Evaluate artifact. The first step focuses on the thorough investigation and analysis of the research problem; for the current project, this has been performed and published in [10]. The second step concerns the transformation of the explicated problem into a set of elicited requirements and has been published in [18, 19]. The third step, the actual development of the artifact, has been conducted iteratively and published in [11, 21, 63,64,65,66]. The fourth step is about using the artifact on a specific problem instance to prove that it is a feasible problem-solving creation, and the fifth step concerns determining the problem-solving capabilities of the created solution, also by comparing it to the requirements elicited during the second step [14]. This article concerns the last two steps of DSR, or the enactment phase, as it is referred to in method engineering.

A case study, described in Sect. 4, has been employed, based on the Human Risk and Effectiveness strategy of FEDS, since the developed artifacts, KYKLOS and Compass, involve user-oriented and social challenges. This strategy has driven the selection of MEM as the evaluation method; alternative methods, like experiments, were taken into consideration but discarded. The reason behind this decision is that the strategy targets user-oriented and social challenges, which calls for in-depth investigation to gain insight into the issues perceived by the users of the method. In addition, the progress level of the research project indicated a summative, naturalistic exploration with real users, which pointed to the case study. The case study was initially used to demonstrate and evaluate the KYKLOS method. From the method engineering perspective, the authors have acted both as method engineers and as developers. The results have been used to evaluate the method, including all its constituent parts, like the language and procedure, along with the complementary tool. The evaluation results motivated the development of Compass, initially as a pre-modeling step for KYKLOS, which has also been applied to the same case study as an independent analysis approach. Compass can be applied independently, providing a higher-level analysis; however, it is not meant to replace KYKLOS. The application has also served as a demonstration of Compass and as an opportunity to evaluate the canvas.

3.1 Demonstration methodology

Both approaches were demonstrated using the case study strategy [67], by collecting data in an organization and applying the approaches for the analysis. Regarding the data collection, four initial online individual guided interviews [68] were conducted to frame the problem and the current and desired states of the company. In every interview, the participant’s role in the company and that role’s association with change initiatives guided the discussion. The participants were employees of DI with various roles associated with the given change, in particular, the director of the company, the head of customer success, the person responsible for DI’s strategic initiatives, and a data scientist. The participants were selected using purposive and convenience sampling. The company’s specialization is directly related to change, which is an essential aspect of its services to customers. Moreover, the company’s understanding of changing capabilities is not limited to its own capabilities; its understanding of its customers’ organizations is also valuable and relevant to KYKLOS and Compass.

3.1.1 Methodological decisions regarding the demonstration of KYKLOS

The initial interviews were followed by three four-hour modeling sessions with an analyst-driven approach, in which the main author acted as the analyst and method expert, applying and guiding the KYKLOS modeling procedure, while a representative from DI participated and validated the modeling decisions. A two-hour tutorial session on the method and tool preceded the modeling sessions, so that the company representative could be actively involved in the task. In other words, the model was developed iteratively and collaboratively, while in parallel, the analysis of the case provided opportunities for utilizing the method to its full extent.

3.1.2 Methodological decisions regarding the demonstration of Compass

For the demonstration of Compass, the data collected for the case study were reused; in practice, the case was analyzed a second time using the canvas. Since several capabilities needed to be analyzed for the case study, several canvases were developed by the main author and validated by the same company representative who had participated in the KYKLOS modeling sessions.

3.2 Evaluation methodology

The evaluation of KYKLOS and Compass entails assessing the degree to which their requirements are fulfilled. The requirement areas served as the basis for the empirical evaluation, which is efficient because the resulting question set is smaller and more concrete.

3.2.1 Methodological decisions regarding the evaluation of KYKLOS

The evaluation is summative, naturalistic, and ex post. This follows the previous artificial, ex ante, formative evaluation of KYKLOS [64]. Nine workshops were held to evaluate the results of applying the method in the given case study, with a total of 21 respondents participating, selected through convenience and purposive sampling. The evaluators were classified into two categories: the business group and the modeling experts. The former consisted of 10 DI employees and the latter of 11 modeling experts. The business group was involved because the case concerns their own organization; the director, the head of customer success, a solution architect, and seven consultants participated. The expert group is suitable due to their extensive familiarity with and knowledge of modeling methods. It consisted of 10 expert researchers and lecturers in enterprise and conceptual modeling, affiliated with universities in Sweden and Latvia, and one expert modeler from a private organization in the United States.

Four online workshops were held for the business group, with one to five participants each, and five for the expert group, two online and three physical, with one to six evaluators per workshop. The workshops consisted of a presentation of the method’s aims, semantics, syntax, and tool; a thorough explanation of the goals, model, and analysis of the case study; and the evaluation itself, consisting of discussion, comments, and an evaluation questionnaire. For the experts, a tool demonstration was also included, along with hands-on tool use during the physical workshops.

The questionnaire was based on the MEM [55]. It consisted of 15 Likert-scale questions (Q1-15), inspired by and reflecting MEM’s aspects: the method’s Perceived Ease of Use (Q1-3), Efficiency (Q4), Actual Effectiveness (Q5-11), Perceived Usefulness (Q12-13), and Intention to Use (Q14-15). MEM also includes the Actual Usage aspect; however, since KYKLOS was only recently developed, there is no fruitful ground for assessing it. Likert-scale questions are an appropriate means of evaluating the MEM aspects, as suggested by the MEM developers [55]. The questionnaire was checked for internal consistency using Cronbach’s alpha [69].
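For reference, Cronbach’s alpha for a respondents-by-items matrix of Likert scores can be computed as in the following generic sketch; this is a standard formulation, not the script used in the study.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = responses.shape[1]                         # number of items
    item_vars = responses.var(axis=0, ddof=1)      # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```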

The analysis was quantitative, using descriptive statistics and correlations. Even though Likert-scale questions are considered to produce ordinal data, for which non-parametric tests are suggested, the literature includes a plethora of method evaluation and other studies in which Likert-scale data are subjected to parametric tests, because parametric tests “are generally considered more robust” [70] than non-parametric ones. According to [71], analyzing Likert-scale responses with parametric tests is sufficiently robust to produce largely unbiased answers that are accurate to an acceptable level. This practice is often recommended, especially when the measured concepts are less concrete [70], bearing no risk of “coming to the wrong conclusion” [71]. For this reason, we decided to subject the collected data to both parametric and non-parametric tests, exploiting both the descriptive robustness of parametric tests and the validity of non-parametric ones, using means and medians respectively. The source paper presented the parametric viewpoint alone because of page limitations; in this article, both approaches are presented, in particular, t tests and Mann–Whitney–Wilcoxon tests [72]. For the secondary qualitative analysis of comments documented during the workshops, descriptive coding [73] and deductive thematic analysis [74] were employed.
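As an illustration of this dual analysis, the sketch below runs both tests on invented Likert responses for a single question; the data are hypothetical and serve only to show the procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 Likert responses to one question, per evaluator group
business = np.array([3, 2, 3, 4, 3, 2, 3, 3, 4, 3])
experts = np.array([4, 5, 4, 4, 5, 4, 4, 3, 5, 4, 4])

# Parametric view: compares the group means (Welch's t test)
t_stat, t_p = stats.ttest_ind(business, experts, equal_var=False)

# Non-parametric view: compares rank distributions (Mann-Whitney U)
u_stat, u_p = stats.mannwhitneyu(business, experts, alternative="two-sided")

print(f"t test p = {t_p:.3f}, Mann-Whitney p = {u_p:.3f}")
```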

3.2.2 Methodological decisions regarding the evaluation of Compass

Similarly to KYKLOS, the evaluation of Compass is naturalistic and ex post. The main difference is that this evaluation is formative, because the canvas is in the early steps of its development. Compass requires no technical expertise; thus, there was no need for a diverse group of evaluators. The group consisted of individuals with business and managerial expertise, who are Compass’s intended users and stakeholders.

The data collection method was workshops combined with semi-structured expert interviews. The structure of the workshops was identical to the one used for KYKLOS; the only addition was an introduction to KYKLOS as the motivating factor for the development of Compass. There were both individual and group interviews. We opted for expert participants and more in-depth investigations, which resulted in workshops of longer duration, in particular, 1.5–2 h per session. In total, the group of 10 Compass evaluators participated in five workshops: three workshops with one participant (two online and one in person), one with two participants (conducted online), and one with five (conducted in a hybrid manner, with three participants attending physically and two online). Eight of the participants are based in Sweden and two in the United Kingdom. All the interviewees chose to provide their answers in written form, so a questionnaire was developed in Google Forms and distributed during the workshops. The work experience of the participants ranged between 5 and 41 years. The group consists of individuals in managerial positions: a head of department, a business analyst, a director, two project managers, a process manager, a head of strategy execution, a country manager, a team leader and manager, and a compliance manager. There is no overlap between the evaluators of KYKLOS and Compass.

The questions posed were interrogative, open-ended versions of those used for the KYKLOS evaluation, adjusted to Compass’s aspects; for example, the statement about the phases of KYKLOS was transformed into a question about the areas of Compass. Thus, Q1-Q15 addressed the same areas as in the evaluation of KYKLOS, with two additional questions addressing the relationship between Compass and KYKLOS (Q16-17). The collected data were analyzed using deductive thematic analysis, with the MEM areas as the guiding framework.

3.3 Threats to research quality

This research involved qualitative and quantitative aspects, whose quality has been discussed in the respective sections; here, we provide a summary of the involved validity threats.

First, the fact that only one case study has been conducted is a threat to the generalizability of the reported findings. The selection of the specific case has been aligned with the evaluation scope, since the case fulfills the criteria of the KYKLOS project, involving a set of changing capabilities that enabled demonstrating and evaluating the method. However, employing multiple case studies would have produced more reliable results, since this is generally considered a more robust approach [75, 76].

The small sample size is another issue. On the one hand, we believe that the interviews and workshops, combined with the participants’ level of expertise, have minimized this threat; on the other hand, we acknowledge that a higher number of participants could have contributed to greater validity of the results. Additionally, we consider that the participants’ expertise has contributed toward a degree of saturation in the findings that compensates for the small number of evaluators.

Regarding the data collection, the different collection formats pose an additional threat to the consistency of the dataset; in other words, combining physical and remote data collection may have introduced inconsistencies into the data. For this reason, the internal consistency of the data has been tested. Furthermore, regarding the analysis of the collected data from the quantitative perspective, both the validity of the research instrument, in particular the questionnaire, and the internal consistency and reliability of the results are relevant. The validity of the questionnaire has been tested using bivariate correlations, and the internal consistency has been tested using Cronbach’s alpha. One of the questionnaire items did not pass the test and was excluded from the analysis for improved quality. Knowing that statistically analyzing Likert-scale questions using means can provide only limited insight, we combined mean and median analysis using the respective tests, complemented by a qualitative aspect using textual analysis, to elicit results that enrich the value of the findings.

From the qualitative perspective, a systematic deductive thematic analysis has been conducted, which enabled mapping and associating the data to the framework of the question set, in this case the MEM areas. One potential threat in this respect is the possible bias introduced by the small number of analysts, two of the authors; involving more analysts could have lent higher validity to the results.

4 The case study

The case study concerns an ERP and IT consulting company in Sweden. In compliance with the company’s desire to remain anonymous, we refer to it as DI. It is an SME, established over 20 years ago, with several offices countrywide, specialized in selling ERP products and consulting its clients on these products, in particular their operational aspects and the purchase of specific customizations. The clients are supported during their digital transformations via software systems and other IT solutions. Various change initiatives have been implemented in DI over the years, aiming to retain DI’s market share, with a focus not only on the services provided to customers but also on the company’s structure.

The initial challenge was to identify and frame the problem. The motivation was an observed shift in the requirements of DI’s customer base. Customers used to request consulting services limited to implementing a specific solution, but recently they have tended to request a wider range of services; specifically, they request an assessment across a wide spectrum of dimensions to support their decision on the optimal IT solution. Consequently, DI has identified an emerging need to monitor and assess its provided services, which has led to a needed adaptation of DI’s capabilities, a fact that indicates the case’s suitability for applying KYKLOS.

4.1 Case analysis using KYKLOS

The gap between the value delivered by DI’s capabilities and the clients’ requests was clear. The initial focal point of the analysis was to identify the company’s capabilities associated with the provision of insightful ERP sales, whether the gap could be bridged, and how this could be achieved. DI’s work procedures were thoroughly explored, revealing that the required change is the evolution of ERP sales to include deep insight into the customers’ needs. In essence, this means extending the operational consulting with strategic consulting.

During the Foundation phase of KYKLOS, ERP sales is identified as the main capability affected by this change. The supporting capabilities related to this change are Consulting and Customer assessment, along with Product acquisition and Company role clarification, which bear an indirect relevance. Completing the Foundation phase, the outcomes of the capabilities have also been modeled, as shown in Fig. 4.

Fig. 4 The capabilities and their outcomes, captured during Foundation

The Observation phase identifies the change motivators, which can be both intention and context elements. Initially, high-level intention elements are captured, like the main goal “To run a successful business,” which is not directly connected to any of the capabilities but is decomposed into lower-level goals and other specific intention elements, like problems and/or requirements. The context and intention elements connected to the ERP sales capability do not indicate a need to change, since they are fulfilled. This implies that the issue lies either in a poor definition of the KPIs or in a change motivator in the form of an unfulfilled status of one of the related and dependent capabilities that affects ERP sales. Since all the relevant capabilities have been captured, this question can be answered. By decomposing the main goal “To run a successful business” and associating all the intention elements in the model with the capabilities expected to fulfill them, the identification of the need to change is completed. The problem that customers lack strategic guidance, and the need for DI to gain insight into the customers’ needs and to inform the customers about them, are identified as change motivators, since they are not fulfilled by their respective capabilities, as shown in Fig. 5.

Fig. 5 The capabilities and intention elements, captured during Observation

In a similar way, Context elements like Industry trends and Automation of sales processes have been captured, and the Satisfaction of customers has also been identified as a change motivator, as shown in Fig. 6.

Fig. 6 The capabilities and relevant context elements

During the Decision phase, different configurations and their required components were identified along with the available resources. The configuration of ERP sales with limited insight requires salespersons, offices, salaries, and specific human resource roles like Key account manager, along with established communication processes between the company and clients. Insightful sales require insight into the customers’ needs.

Rather than concluding the analysis there, the missing component provided an opportunity to explore how the insight could be acquired. The answer was found during the analysis of the Customer assessment capability, which can provide the Insight knowledge as its outcome. Three configurations were identified: (1) the Reactive one, reflecting the version of Customer assessment in which the company only reacts to customers’ requests, (2) the Proactive one, in which the company plans ahead but still lacks the proper level of insight into the customers’ needs, and (3) the Improved proactive one, in which the desired depth of customer insight is produced.

Improved proactive is planned as a replacement for Proactive, which has, in turn, replaced the Reactive configuration (Fig. 7). This transition was not possible because, among the components of the Improved proactive configuration, a Facilitation working method (a Knowledge resource) and a training process for the method were required but missing.

Fig. 7 The capability, its configurations, outcomes, and allocated resources

Based on the information provided by the experts, these components can supply the required resource. With the term Facilitation, DI describes the practical interaction between consultants and customers and the associated data collection. As shown in Fig. 8, which depicts the capability’s required resources for each configuration, analyzing why the transition is not possible leads to a knowledge resource, the Facilitation working method.

Fig. 8 The transition between configurations and their required resources, including the identified missing component

This required component is considered missing because the facilitation was identified as lacking structure, which resulted in the inability to gain deep insight into the customers’ needs, an insight required to improve the ERP sales capability. Additional exploration revealed that this resource can be acquired, as an outcome, via the introduction of a new capability, Employee facilitation training, which fulfills a new goal element introduced in the model, along with all the configurations and transitions of the new capability, as shown in Fig. 9.

Fig. 9 Step-wise introduction of a new capability in the case study

A noteworthy fact is that the required components already exist in the company, and temporary reallocation can result in the desired strategic consulting and insightful sales. Finally, all the transitions and their attributes were captured in the Delivery phase of the modeling procedure. In this way, all the above-mentioned elements were captured and the KYKLOS model was completed; the resulting model is too large to present here.

4.2 Case analysis using Compass

The gap that drove the analysis with KYKLOS remains valid for the application of Compass: the difference between the customers’ needs and the value delivered by the company’s capabilities. A point of emphasis here is that only one transition can be documented per canvas, a fact that shapes the whole analysis of the case.

The first activity is filling in the Capability area, where the Changing capability (ERP sales), its Outcomes (Financial revenue and Sold product), and the Related capabilities are documented.

Afterward, the Motivation area is filled in. For the ERP sales capability, there are both Context and Intention elements that justify the capability and its outcomes. The documented information concerns Company sales and Industry trends as Context elements, with Expected revenue and Percentage of cloud versions per sold product as the respective KPIs. Regarding the intentions, the goal “to sell quality products” has been documented. All the motivation elements are fulfilled.

The next part focuses on the Components relevant to ERP sales. The list of the most relevant resources and processes was identified with DI’s domain experts and is documented in the Components area. A few examples are human resources, like the Key Account Manager, the Customer Success Manager, and the Chief Digital Officer, and infrastructure, like the Customer Relationship Management (CRM) system and the Online platform used for communication and collaboration. For every resource, its type, ownership status, and allocation status are documented, and an ID is assigned to each. Relevant processes are also documented, like Communication with customers and ERP training.

The next area is the Transition. Initially, the resource IDs that are currently allocated to the capability are documented, along with the processes. Naming the version is useful in this step. The current version has been named “Limited Insight.” In this case, no need to change is identified, so there is no change happening at the moment. DI knows that the desired version requires Increased Insight and can document this as a planned configuration, while also filling in the Impact area with potential and planned impact; however, this is not a priority at the moment. The canvas is shown in Fig. 12.

At this point, since no transition is happening, there is no way to continue apart from documenting the desired version, as understood by the domain experts. In particular, Increased insight is documented as a required knowledge resource, without any further information about it. Since the analysis has not identified the motivation to change, the related capabilities should be explored. Gaining the Increased insight knowledge resource should be the outcome of the Customer assessment capability; thus, a new canvas is created for it.

The exploration resulted in the identification of three different versions of the capability, as mentioned in the KYKLOS demonstration. However, Compass cannot capture all these versions simultaneously, so two separate canvases were created. The three versions, Reactive, Proactive, and Improved proactive, represent the version without customer insight, the version with limited insight, and the desired insightful version, respectively. The transition toward the desired version is the more important of the two; thus, it is the one reported here.

The Customer assessment capability is motivated by the customers’ satisfaction and by the goals of gaining insight into the customers’ needs and communicating it to the customers. These motivation elements are not fulfilled. The capability’s components are identical. Analyzing the current and required components for realizing the transition revealed a condition similar to that of the ERP sales capability: the transition cannot be completed because components are missing. The analysis pointed toward identifying the missing components, but the canvas on its own provides no assistance on where to obtain them. The properties and impact of the planned change can be documented, and the Compass canvas for this transition is shown in Fig. 10.

Fig. 10 The compass for the transition from proactive to improved proactive customer assessment

The next step included the introduction of the Employee facilitation training capability and all its relevant information, as discussed in the previous section. The result of the documentation of the capability is shown in Fig. 11. The interesting part here is that the transition is a change of type introduction, which means that there is no set of required components allocated to any version of the capability, because no version currently exists. In this case, the right-side part of the Transition area could have been left empty; however, we decided to use a «Missing» version name tag to facilitate the comprehensibility of introduction as a type of capability change.

Fig. 11 The compass for the introduction of the employee facilitation training capability

The introduction of facilitation training to the employees of DI results in the existence of facilitators and, most importantly, of the facilitation training method, which is a required knowledge resource for the Customer assessment capability. In this way, the transition to the improved proactive version becomes possible, and the increased insight into the customers' needs is gained and can be used in ERP sales. The final canvas regarding the transition to insightful ERP sales is shown in Fig. 12.
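The cross-canvas reasoning applied here, matching a capability's missing components against the outcomes of other capabilities, can be illustrated with a small sketch. The function and data below are hypothetical and only mirror the manual analysis performed by the analysts; Compass itself provides no such automation.

def resolve_missing_components(missing, outcomes_by_capability):
    # Map each missing component to a capability whose outcomes provide it, if any
    resolution = {}
    for component in missing:
        for capability, outcomes in outcomes_by_capability.items():
            if component in outcomes:
                resolution[component] = capability
                break
    return resolution

# Mirroring the demonstrated chain of transitions:
outcomes = {
    "Employee facilitation training": {"Facilitation training method", "Facilitators"},
    "Customer assessment": {"Increased insight"},
}
resolution = resolve_missing_components({"Increased insight"}, outcomes)
print(resolution)  # e.g., {'Increased insight': 'Customer assessment'}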

Fig. 12 The compass for the transition to insightful ERP sales

5 Evaluation results

This section provides the results of the evaluations conducted for KYKLOS and Compass.

5.1 Evaluation of KYKLOS

This section presents the results of the evaluation of KYKLOS, per group, per evaluation aspect, and overall. The evaluators responded to Likert-scale questions, formed as statements about the method, with the values 1 to 5 assigned to the labels "Strongly disagree" to "Strongly agree," respectively. Initially, the collected dataset's internal consistency and reliability were analyzed using Cronbach's alpha, ensuring the quality of the results with α = 0.95. The construct validity of the questionnaire was tested using bivariate correlations; all items were validated with significance values < 0.05, with one exception (Q11), which was tested but not validated and was therefore excluded from the analysis. The description of the findings follows, per evaluation aspect, along with a visualization of the results in a set of diagrams that depict the overall evaluation results (Fig. 13a), the results of the group of modeling experts (Fig. 13b), the results of the business group (Fig. 13c), and the means (Fig. 13d) and medians (Fig. 13e) of the groups, for each part of the questionnaire.
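For reference, the reliability coefficient reported above can be computed as in the following sketch, where rows are respondents and columns are questionnaire items. The response matrix shown is illustrative, not the study's dataset.

import numpy as np

def cronbach_alpha(scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

responses = np.array([[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]])  # 4 respondents, 3 items
print(round(cronbach_alpha(responses), 2))  # 0.9 for this illustrative matrix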

Fig. 13 The evaluation results overall (a), for the group of modeling experts (b), the business group (c), and the compared means (d) and medians (e) of the two groups

5.1.1 Perceived ease of use

The initial three questions of the questionnaire assessed the method's Perceived Ease of Use according to MEM, and were specified for KYKLOS by asking about the clarity of the method's phases (Q1), the clarity of its modeling procedure (Q2), and the overall ease of use (Q3). For the business group, the means of the responses ranged from 2.7 to 3.0/5, resulting in negative to neutral scores for KYKLOS's ease of use. On the contrary, the modeling experts evaluated this aspect of the method with 4.0–4.5/5, which results in a positive score for the given aspect. This is also reflected in the medians, where the business group scored 3.0/5 and the experts' group 4.0/5 for all three questions of this aspect. Combining the results of the entire group of evaluators yields mean scores between 3.4 and 3.8/5 and median scores between 3.0 and 4.0/5, an overall positive response regarding KYKLOS's ease of use. The comments of the two groups are indicative of their responses. To illustrate, one business evaluator mentioned "for me, that do not have any experience in this type of modeling I found this method hard to understand," while a comment from the expert group was "the method itself is not hard to follow if you have some experience in capability modelling."

5.1.2 Efficiency

To evaluate the efficiency of a method, MEM suggests assessing the quantitative aspects of applying it, in terms of time, cost and effort. However, taking into consideration that there are no other methods specifically designed for the domain of changing capabilities, this aspect was evaluated by comparing KYKLOS with the methods each evaluator already knows that may be used for tackling the given problem. Therefore, the evaluators were asked whether the method reduces the effort required for modeling changing capabilities (Q4). For the business group, the responses produced a mean score of 3.2/5, a result slightly higher than neutral, and a median of 3.0/5. The reservations behind this near-neutral result are reflected in statements like "I think it would take quite a while to model a complex organization" and "I think this model is too time consuming." The group of modeling experts produced a value of 4.0/5 for the same aspect, both for mean and median, a result that shows a positive viewpoint. An interesting comment about the effort came from an expert stating "effort reduction is not an issue; the issue is getting a better result and KYKLOS could be useful here." In total, the evaluation results in a mean score of 3.6/5 and a median of 4.0/5 for the reduced effort required when using KYKLOS.

5.1.3 Actual effectiveness

MEM defines the actual effectiveness of a method as the degree to which it achieves its objectives. For this aspect, the requirement areas of KYKLOS (cf. [18] for details) were used, asking the evaluators whether KYKLOS is effective for modeling the following areas associated with changing capabilities: (i) Context (Q5), (ii) Intentions (Q6), (iii) Decision-making (Q7), (iv) Configuration components (Q8), (v) Transitions (Q9), (vi) Ownership (Q10), and (vii) Capability dependencies (Q11).

Q11 was identified as an item with low validity and was removed from the statistical analyses. The results from the business group of evaluators ranged between 3.0 and 3.8/5 in mean values and between 3.0 and 4.0/5 in median values; in other words, the scores for the requirements of KYKLOS ranged from neutral to different grades of positive, as shown in Fig. 13b. The mean values also allow identifying the lowest score, received by Ownership, and the highest, received by Configuration components. A comment indicating the group's response was "some of the components were a little bit confusing." Regarding the expert group, the mean values were higher, ranging from 4.1/5, assigned to Context, to 4.6/5, the score of Configuration components. The experts' medians ranged between 4.0 and 5.0/5. The expert group focused on specific model elements, as this comment illustrates: "the pool concept and notation is interesting and probably useful in many situations." The combined results of the two groups provided a range of mean values between 3.6 and 4.2/5 and medians of 4.0/5 for all the questions, which shows an overall positive response to the effectiveness of the method.

Q11, which is not part of the quantitative analyses, received the third-highest score among the questionnaire items. Even though Q11 was removed from the statistical analyses for reasons of research rigor, there were comments supporting the high scores that capability associations and dependencies received. Thus, even if this aspect cannot be evaluated quantitatively, it was approved qualitatively, a fact supported by comments like "the capability relationships can be used for capability mapping in Enterprise Architecture projects" and "I agree with the idea that only one main capability and the supporting ones exist per model because it helps avoid complexity."

A notable observation associated with the perceived ease of use and actual effectiveness is that both groups expressed difficulties regarding sources of potential confusion; however, the business group expressed generic comments about the method overall, while the group of experts expressed difficulties regarding specific components of KYKLOS. For example, comments from the expert group concerned the notation ("the hardest part, at least for me, would be to follow the expected notation that is not a standard modelling notation"), the Resource pool ("more clarification on the resource pool can be beneficial"), and the ownership ("was not sure about ownership, there are resource elements in modelling notation, but resources are meant to describe the ownership for capabilities?"). An example of the business group's generic comments states "if I were to look at the finished model I think it is quite complex and there are several other modelling techniques that are easier to grasp." There were also neutral statements from the business group, like "I think that compared to a lot of other models, it quite detailed focused. Which could be both a positive thing and a negative one."

5.1.4 Perceived usefulness

For evaluating the perceived usefulness of KYKLOS, the evaluators were asked to assess whether they perceive the concepts included in the method as adequate for modeling the phenomenon of capability change (Q12), and whether, as a whole, they perceive the method to be useful for the given domain (Q13). The business group's mean values were 2.9 and 3.3/5, respectively, while the expert modeler group's scores were higher for these questions too: 4.4 and 4.3/5, respectively. Regarding the median, the business group's values were 3.0 and 3.5/5, while the expert group's were both 4.0/5. The comments coming from the two groups were as diverse as the scores, for example, "I'm not sure how we could use this model" from the business group, and "it fills a necessary niche in the modeling world" or "it is very useful for large companies with so many projects running at the same time" from the expert group. The combined mean values of the entire group of evaluators were 3.7/5 for the concept set and 3.8/5 for the overall usefulness, and the combined median values were 4.0/5 for both questions, resulting in a positive response in general.

5.1.5 Intention to use

As suggested by MEM, the items used for evaluating the intention to use the method consisted of a statement about the overall intention to use KYKLOS for modeling changing capabilities (Q14), and another about the preference to use KYKLOS in comparison to other available methods known to the evaluators (Q15). The business group responded negatively to both questions, with mean values of 2.3 and 2.1/5, respectively, and medians of 2.0/5 for both. This was also supported by statements like "I will probably choose to use a simpler model." On the contrary, the expert modelers responded positively, evaluating Q14 with 3.7/5 and Q15 with 3.9/5 mean values, and 3.0/5 and 4.0/5 median values, respectively, supported by statements like "for any future capability modeling, I would utilize the KYKLOS method." The overall results regarding the intention to use KYKLOS show a neutral response of 3.0/5 for both questions, in both mean and median. A specific aspect that was discussed was the tool, which was mentioned as a motivational factor for using the method, in comments like "I'd use KYKLOS, since it has a responsive UI and does allow to create a clean model." Given the significant group differences in this aspect, it should be considered that a new method is still part of a familiar area for the experts, whereas the business group's interest in using the method could only stem from a wish to extend their personal methodological toolbox, based on their specific roles in the company.

5.1.6 Overall

Assigning weights of − 2 to 2 to the responses "Strongly disagree" to "Strongly agree," respectively, enabled a different perspective on the analysis: the highest-rated question was Configuration components, with a score of 25, while Intention to Use and Preference shared the lowest position with a score of 1. Regarding the responses per group, the business group's weighted scores ranged between − 9 and 8 and the expert group's between 8 and 18. This is also reflected in the means of the responses; Configuration components had a score of 4.2/5, and Intention to Use and Preference shared a 3.0/5. The means of the responses per group ranged between 2.1 and 3.8/5 for the business group and between 3.7 and 4.6/5 for the group of expert modelers. The differences between the two groups are also reflected in the means, shown in Fig. 13d. The median values provide a less detailed depiction of the differences, ranging between 2.0 and 4.0/5 for the business group and between 3.0 and 5.0/5 for the expert modelers, as shown in Fig. 13e. The group results were checked with Mann–Whitney–Wilcoxon tests over all the questionnaire items, without taking into consideration Capability dependencies, which had already been excluded; all group differences were identified as significant (sig. two-tailed < 0.05), except for Context (Q5). T-tests were also performed and identified significant differences for all the questions; however, we opted for the nonparametric tests due to their higher validity for this type of data.
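The weighting and the nonparametric group comparison described above can be reproduced as in this sketch. The two response vectors are illustrative stand-ins for the per-item Likert responses of the two groups, not the study's data.

from scipy.stats import mannwhitneyu

business = [3, 2, 3, 3, 2, 3, 4, 2]  # illustrative responses of the business group to one item
experts = [4, 5, 4, 4, 5, 4, 4, 5]   # illustrative responses of the expert modelers to the same item

# Weighted score: -2..2 assigned to "Strongly disagree".."Strongly agree"
weights = {1: -2, 2: -1, 3: 0, 4: 1, 5: 2}
weighted_score = sum(weights[r] for r in business + experts)
print("weighted item score:", weighted_score)

# Mann-Whitney-Wilcoxon test of the group difference
stat, p = mannwhitneyu(business, experts, alternative="two-sided")
print(f"U = {stat}, two-tailed sig. = {p:.4f}")  # sig. < 0.05 indicates a significant difference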

5.2 Evaluation of Compass

This subsection reports on the findings derived from expert interviews, according to the respective MEM aspects.

5.2.1 Perceived ease of use

This aspect has been assessed by asking how clear and understandable the interviewees perceive the areas of the canvas (Q1), how easy they perceive the procedure of filling in the canvas (Q2), and the overall perceived ease of use (Q3). Apart from two evaluators, who considered the canvas areas "not very clear," the rest considered them clear and easy to understand. This has been expressed with quotes like "The sections provided allow an easier visualization of the parameters used. My perception is that the areas are well defined" and "These seem to be clear and cover the relevant areas." Regarding the procedure for filling in the canvas, the results are similar. Three of the participants expressed negative opinions regarding its ease, stating, for example, "I believe it requires a trained modeler to fill the canvas." The other participants expressed positive opinions, with statements like "The use of arrows motivates the user to follow the trail outlined by the model." Regarding an overall estimation of the artifact's ease of use, the participants expressed mixed opinions, with a slight majority, six out of ten, considering the canvas easy to use. There were statements like "The canvas is easy to use. It is color coded, clearly labeled and structured, elements that make it easily accessible." and "It doesn't require much from the user, only the information, so overall it is easy to use."

During the semi-structured interviews, the managers deviated from the questions to focus on one point that they considered essential to discuss. The theme that emerged while discussing the ease of using the Compass was "training." This is one of the most important findings, and perceived ease of use is the MEM area it fits best. In particular, most of the interviewees, regardless of their positive or negative opinion on the canvas's ease of use, mentioned that training should be provided for users. However, even among like-minded participants, different degrees of emphasis on training as a requirement emerged, ranging from comments referring to minimal training, like "As long as an introduction to the model has been performed, I see no challenges in the ease of use of compass" and "Compass is a straightforward tool, and the training required could be limited to written documentation, meant as a reference," to comments that treated training as a hard requirement, for example, "A consultant team should always guide the audience to fill in the canvas, they cannot do it themselves alone" and "I believe it requires a trained modeler to fill the canvas."

5.2.2 Efficiency

For evaluating the efficiency of Compass, we asked for the participants' opinions regarding Compass as a means to reduce the effort required for documenting changing capabilities (Q4). In addition, as in every other part of the interviews, they were encouraged to express freely any other thoughts regarding the efficiency of the artifact.

The responses regarding the efficiency of Compass are diverse. Most of the interviewees consider the canvas an efficient solution when encountering the phenomenon of changing capabilities. Only one participant considers it inefficient, based on the level of complexity; as the participant mentioned, "My honest opinion is that this is complicated even in the canvas version," referring to the complexity of KYKLOS as well. On the contrary, the other participants expressed positive opinions, for example, "I think the canvas captures a lot of information, and offers a shorthand that a modeler can use to absorb a lot of information quickly," "I believe that Compass offers much efficient way to illustrate the changing capability effort in organizations," and "It takes the user's time off of thinking and organizing it, so it reduces the effort."

The “training” theme appeared in this aspect too, with most of the participants mentioning the importance of the presence of expert modelers while using the canvas. For example, one participant acknowledged the value of Compass, but only “Provided they have an in-house modelling expert that is trained in using this tool appropriately.” Another participant directly associated the efficiency with training on the usage of the tool, by stating “The actual efficiency would be dependent on the training of the user.”

5.2.3 Actual effectiveness

The evaluation of the canvas's actual effectiveness was performed by comparing it to the summarized requirements that motivated its development, as suggested in MEM. Therefore, the questions asked the interviewees to express their opinions on the degree to which the artifact is useful for describing the context (Q5) and intentions (Q6) that affect capabilities, supporting decisions on capability change (Q7), describing the components (Q8), transitions (Q9) and ownership (Q10) of capabilities, and describing the association between a changing capability and other capabilities (Q11). The participants also expressed comments and opinions that they deemed relevant.

Regarding the canvas’s effectiveness in describing the context of a capability, there was unanimous agreement that it successfully captures the needed information. One illustrative quote stated “I believe that Compass is useful to describe the external aspects that influence the changing capability by adopting the PESTEL approach.”

Regarding intentions, one interviewee expressed concerns about the clarity and comprehensibility of this aspect, but the rest were positive about its effectiveness for describing the organization's intentions that are deemed relevant to the changing capability. Statements like "good in mapping and discovering aspects related to SWOT analysis and mirroring the company strategies" support this fact. An interesting point that emerged only in this part of the evaluation was that, for two of the participants, the intentions part of the canvas is visually weak and does not properly reflect the importance of the concept. Illustratively, "I would make Intentions more prominent, to make them stand out more within the Motivation box."

Regarding the effectiveness of Compass in supporting decision-making, the evaluator group expressed positive opinions. Statements like “It provides enough information for upper management to decide if a change is required” support this fact. Yet, three participants also expressed hesitation regarding the current format of Compass as decision-making support. For example, the participants stated that “it depends a lot on the "ground" work,” “Changes, however, do not occur in an isolated manner, and the decisions may be biased by the lack of information on other capabilities that are undergoing a change.”, and “I trust that Compass is useful to support changing capability decision although allow only one analyzed capability per canvas.”

The question about the capability components received nine positive responses and one negative. The positive ones were expressed in statements like "Compass is very useful for describing the resources and processes that are needed for a capability, due to the simple and clear way it displays them." and "The compass list and categorizes them well." The negative one concerned the limited canvas space for listing components. It was stated that "Compass is lacking the overall information of the available resource pools and resources present in the company. This could cause issues, in which a resource is already planned to be used in a change on another capability, but it is requested by the capability that is undergoing a change." and "…I don't know if there is a limitation on the amount of listed items." Another point is the possibility of double allocation, since no common resource pool exists across the canvases developed in a case study. As stated by the participant, "I perceive that there is missing functionality in Compass, mainly with the resource pools, and I see a possible risk of double-booking resources."

Regarding the support toward the transition of capabilities, six responses were deemed positive and four negative. The positive ones were expressed with statements like "Very efficient tool for capturing and visualizing the needs for the transition of capabilities," sometimes also expressing an appreciation of the design of the canvas, as in "The color representation is good to showcase a transition of capabilities." The negative ones highlighted elements that were considered poorly emphasized, for example, "…it is very theoretical. I would suggest that you present the usefulness… …and then conclude with the changes necessary and how to implement them", or concerned the overall clarity of this aspect, like "This is a little bit unclear."

The ownership aspect of the effectiveness was evaluated as good and useful by most of the participants, in particular, nine out of ten. The positive evaluations were supported by explanations about the usage of labels, for example, “Compass shows adequate ownership of the relevant resources by providing the IN and EX flag,” and “The column representing the External or Internal ownership may provide a quick glance of which resources are already available.” The negative response was based on the fact that the actual owners of the capability and components are not captured in the canvas. This was also mentioned by three more participants, even if they did not deem the omission as grave enough to produce a negative perception, as in “Good, but it is a little bit crude. For instance, which internal and external parties possess the capabilities and the respective components”.

Regarding the associations between a given capability and other capabilities, the opinions were equally split. Half of the evaluators considered that the canvas described these to a high degree, but the other half had the opposite view. The former expressed statements like “This is highlighted in Capabilities and the Impact, so yes the canvas is useful in highlighting dependencies or impact to other aspects of the business” and “Compass can describe effectively the association of a changing capability with other capabilities,” while the latter focused on the limitations, stating “On its own, a single compass canvas may not be enough to fully visualize the association of capabilities” and “This is somewhat limited naturally since the canvas-based approach inherently allow only one analyzed capability,” which is based on the fact that a canvas can only document one transition at a time.

Since evaluating the effectiveness of the canvas concerns a more detailed assessment of its technical aspects, as reflected in the questions, it comes as no surprise that the main theme that emerged from this part was "improvement." The participants observed and highlighted a series of potential improvement points in the current version of Compass. For example, one participant emphasized the canvas's limited space in the Components area, which means that the number of resources to be captured may pose a problem: "I'm curious as how the capabilities would be shown in Compass when the Components present are more than the space available in the table."

5.2.4 Perceived usefulness

This aspect of MEM was used to evaluate the usefulness of Compass, as perceived by the interviewees. This has been performed by asking whether the concepts comprising Compass are adequate for describing the phenomenon of capability change (Q12) and by getting the participants' overall assessment of the usefulness of Compass (Q13).

The concept set was evaluated as adequate for describing changing capabilities, resulting in a unanimous agreement that the canvas's usefulness in terms of the concept set is not questionable. There were strong statements like "Compass with my experience contains everything that it needs to describe capability change" and "Compass seems adequate to capture the necessary concept of a capability change in organizations."

The same applied to the overall usefulness, regardless of the concept set comprising the canvas. A few comments that have been expressed regarding the overall usefulness of Compass include “Compass is highly effective in general as it clearly and in the simplest possible way describes the steps taken in the capability change process,” “I believe that canvas-based approach in Compass is useful to depict one analyzed capability change in organizations,” and “a solid ground to start from.”

5.2.5 Intention to use

This aspect is expected to assess the users’ intention to use the evaluated method. This was achieved by asking the participants if they would use the Compass canvas in future domain-specific tasks (Q14) and if they would also prefer to use Compass over other methods for such projects (Q15).

Seven out of ten participants expressed a positive response toward using Compass for future capability change projects. On the one hand, the positive responses were expressed as an interest in testing the canvas in different contexts, dealing with changing capabilities in each participant's own professional area. This interest was expressed in statements like "Yes, because it seems to be clear and easy to use for those with basic or no knowledge of modeling," "Yes. It will help with procedures and it will save time.", and "Yes, because the model can be easily accessed by non-specialists and used in a business environment." On the other hand, the ones who expressed a negative response did not fully exclude it as an option but would prefer to see revised future versions of Compass before applying it. For example, two participants stated "Not as it is because I would not be able to use it as a change tool for my organization." and "…I would feel limited…"

Choosing to use Compass over other modeling methods is a more specific question, which excludes a mere interest in testing it. Half of the participants responded positively, with statements like "Compass is a nice starting modeling method for someone who doesn't have the experience of using more complicated modeling methods. It's practical, easy, and fast to use." The ones who rejected using the tool over the methods they already know prioritized simplicity over descriptive power, emphasizing the communication of the need to change to the end user, which they deemed unfulfilled by Compass: "This does not yet support this yet."

5.2.6 Overall

The last part of the assessment does not directly assess Compass; its aim is to gain insight into the relationship between the two methods. This is a way to better understand the overall performance of both approaches from a holistic perspective, that is, an attempt to collect data about "the big picture" that includes both of them. The questions used for this purpose asked about the participants' thoughts regarding the consistency between the two approaches (Q16) and the potential to use Compass as a pre-modeling step for KYKLOS (Q17).

Regarding the consistency between the two interlinked approaches, the respondents unanimously agreed that KYKLOS and Compass are consistent with each other. Certain participants provided straightforward responses, for example, they stated that “…both approaches are consistent and complement each other…” and “There is a consistency between the two.” Additionally, regarding the way the two approaches are interlinked, there were responses supporting the use of Compass as a pre-modeling step for KYKLOS, for example, “Compass will make it easier to collect descriptive inputs from casual users per proposed capability change. It can be a basis for the architect to develop the more comprehensive change capability model in organizations.”

Nevertheless, the majority emphasized a comparison of the two approaches in terms of potential and complexity. One participant considers that Compass is not as robust as KYKLOS and therefore would not prefer the canvas over it, stating "I perceive that there is missing functionality in Compass." Another participant had a similar perspective while comparing the two methods in terms of functionality, transparency and comprehensibility, stating more precisely, "Compass is less opaque, and more transparent. Compass focuses on modelling one transition at a time. It sacrifices the dynamic nature of KYKLOS to present information in a more digestible manner." The two approaches were also discussed in terms of understandability: "I believe that the Compass Canvas will be easier to explain to the end user and use for change management" is a quote that emphasizes the end user perspective, while another one complements it with "because the compass makes it easier to see the overall picture in detail with less complexity. KYKLOS might be a little intimidating for those with little experience in models." Another point mentioned was the potential to use the canvas for explaining a KYKLOS model; the participant stated that "the canvas might also help people with little modeling knowledge to understand models depicting capability changes," which was in line with others, who also consider that "The Canvas is good, but perhaps not as a standalone tool."

6 Discussion

This section discusses the outcomes of demonstrating and evaluating the two interlinked approaches.

6.1 Regarding KYKLOS

Initially, the demonstration of KYKLOS provided the opportunity to show that it has the potential not only to be effective for capturing and documenting one or more changing capabilities, but also to facilitate the analysis of the capabilities via their configurations, to a degree that enables making suggestions for improvements; thus, it also supports decision-making. In particular, in the DI case, the thorough exploration of the configurations of the supporting capabilities indicated weaknesses that could be mitigated by reallocating resources, a fact that led to the suggestion to introduce a new capability, the Employee facilitation training. In this way, applying KYKLOS to the case study proved its feasibility, which is the aim of the demonstration step in DSR [14].

Regarding the evaluation, KYKLOS has been deemed useful, but, for the business group, there are difficulties concerning its adoption and use, which is in line with the Human Risk and Effectiveness strategy, whose activities aim to ensure that the method is still beneficial in the long run, despite the difficulties, according to FEDS [54]. This relates to an important aspect of the results: the significant differences identified between the two groups of evaluators, which were not expected during the design phase of the research. All the evaluation aspects received a higher score from the expert group, a fact indicating that previous modeling experience is a desired attribute for every potential KYKLOS user. This has also been mentioned in comments from the business group, for example, "I am not sure everyone is able to apply this model, I think it does require previous knowledge of modelling.", "I think the model is difficult to communicate to outsiders.", and "I think it is important to have in mind that training and education will be an important part when companies are going to use this model.", while there were no similar comments from the expert group.

Overall, both groups are positive about the effectiveness, efficiency, and usefulness of the method, but disagree on the ease of use and the intention to use it, as indicated by the larger differences in the respective scores. This raises the issue of the complexity of the method. The evaluation identified that the current version of KYKLOS should be communicated to users with modeling experience rather than to non-experts, since the response from the business group of evaluators indicates several difficulties in understanding, applying and benefiting from the method. In this way, the complexity of the method, which reflects the actual complexity of the domain of capability change, will remain intact, without sacrificing any of the descriptive power of KYKLOS. This also means that the target group of KYKLOS users will be significantly delimited, and efforts to bridge the gap between user categories are needed.

Potential solutions to this issue include, to name a few, the development of structured method training; a KYKLOS "light" version with reduced complexity and fewer available modeling elements, which can be used for higher-level models of changing capabilities; or a canvas-based approach that hides the details of the modeling language and can be used as a pre-modeling step for KYKLOS. The first solution would not really bridge the gap methodologically, because it aims to convert non-experts to experts; the other two could provide valid bridging solutions, but the fact that they will result in reduced descriptive power or increased analyst workload should not be ignored. The canvas-based approach has been selected for the development of Compass because of its higher level of simplicity.

6.2 Regarding Compass

The demonstration of Compass has facilitated the identification of several weaknesses that need to be addressed in future versions of the canvas. This was expected, since Compass has only undergone one complete design cycle, a fact that indicates a preliminary level of maturity and implies the need for improvement in future iterations.

Most of the issues that emerged during the demonstration were also mentioned by the participants during the second evaluation cycle. One confusing point that emerged during the demonstration was the weak depiction of the capability architecture of the organization: the decision to simplify the canvas by allowing only one transition per canvas significantly limits its descriptive power. In particular, the impact became evident in the demonstrated case when the initial transition was captured in a capability whose need to change was not captured in the canvas. On the one hand, common sense and previous knowledge from the application of KYKLOS on the case provided an easy solution for this specific case; on the other hand, this cannot be the standard procedure. One potential structural solution to this issue is to include the entire set of Intention elements in every canvas, in other words, to add the motivators of the supporting capabilities to the main one, so that any unfulfilled motivator depicts the need to change. An alternative methodological solution is to include additional process steps in the procedure for filling in the canvas, for exceptions like the one encountered in this case.

Other beneficial structural updates include the potential to capture the actual owners of the capability and its components, not just the ownership type as is currently the case. Another important point raised by participants is that complex cases requiring more than one canvas call for canvas IDs, for easier reference.

Indicated procedural updates include instructions on how to continue with unidentified needs to change, since during the demonstration of Compass a point was reached where a half-filled canvas existed without clear instructions on how to proceed. The updated instructions should also highlight what to look for during the analysis of cases with multiple transitions. For example, in the DI case, a capability can produce outcomes that can be reused as components by another capability, but without clear instructions, it is hard for the analysis to proceed beyond assumptions. The potential double allocation of resources, which the current version of the canvas does not control, is one more point to be taken into consideration in a procedural update.

These procedural updates cannot be addressed by updating the canvas structure, because that would imply cross-canvas associations; while possible, such a design would exceed the canvas-based approach and be perceived as a step toward a new modeling language. Naturally, as identified in both evaluation cycles in this article, this would significantly augment the level of complexity, producing the opposite of the initial intention. Therefore, these updates need to be implemented in the form of procedural instructions.

A pattern that needs to be discussed is that all the weaknesses of Compass relate to cases where multiple capabilities and transitions are involved. As several participants of the Compass evaluation also noticed, the canvas can be a clear and efficient solution for capturing single transitions; however, for complex change cases, there are weaknesses that need to be addressed in future research and in the development of future versions of Compass.

6.3 Overall

The expression "the whole is greater than the sum of its parts," attributed to the ancient Greek philosopher Aristotle, accurately describes the evaluation activity reported in this article. The two evaluation cycles provided insight not only into each of the two approaches separately, but also into the two as a whole.

One part of this is the comparison that the evaluation activities enabled. The first cycle revealed that KYKLOS is well received by modeling experts but is hard for business experts to use, thus making it hard to adopt. Its complexity was the motivating factor for developing Compass; however, according to the second evaluation cycle, the issue of complexity is not resolved in its entirety. Complexity derives from the actual complexity of the phenomenon of capability change and is a way to preserve a method's descriptive power. However, since it was identified that this complexity makes the method inaccessible to users without modeling experience, an attempt to bridge the gap via Compass was evaluated in the second cycle. According to the authors' experience, which the evaluators unanimously confirmed, Compass has successfully simplified KYKLOS, yet it is questionable whether the achieved simplicity is at an acceptable level. One of the most important findings is that the canvas is still not considered simple enough to be usable by users without any modeling experience. However, interpreting the results of the two cycles provides a strong indication that progress has been achieved: the gap may not be bridged, but it is narrower. Regarding the two aspects with the lowest KYKLOS evaluation scores, ease of use and intention to use, Compass shows an essential difference.

A question that arises from the evaluation of the two approaches is whether the benefits gained from developing Compass as a simplified version of KYKLOS surpass the losses in descriptive power. According to the findings of this evaluation, there is no objective answer; rather, there is a strong indication that the answer is subjective, that is, it depends on the role of the respondent. From the perspective of a modeling expert, KYKLOS is an efficient toolkit and does not need the support of Compass, while from the perspective of a business expert, simplicity is the most important and prioritized attribute of an approach, as stated during the interviews: "I have used multiple change models and change management tools and the best ones are defined by the simplicity of use based on the end user understanding of what we need to change and why." Therefore, taking into consideration that most of the Compass evaluators consider the canvas a successfully simplified version of KYKLOS, the loss in descriptive power is worthwhile for the business experts.

7 Conclusions

This article reports two demonstration and evaluation cycles of KYKLOS and Compass, two interlinked approaches designed for managing changing organizational capabilities, which were applied to a Swedish ERP sales and consulting company. Initially, KYKLOS was evaluated for its modeling and decision-support capabilities. The results highlighted difficulties in the ease and intention to use for business experts, in contrast to modeling experts. Thus, Compass was developed as a simplified, canvas-based version of KYKLOS and reassessed. While Compass has not bridged the gap between the different user types, it has successfully narrowed it.

The first evaluation motivated future research directions, notably the exploration of the differentiation between expert and non-expert users, the potential for incorporating existing enterprise models into KYKLOS to enhance functionality, and the extension of KYKLOS's flexibility in scope, especially regarding organizational versus individual capabilities. The second evaluation produced suggestions for integrating Compass more closely with KYKLOS, such as through a preliminary questionnaire to aid modeling. This direction indicates an expansion of the KYKLOS toolkit to improve accessibility and comprehensibility.