Automaton-based comparison of Declare process models

The Declare process modeling language has been established within the research community for modeling so-called flexible processes. Declare follows the declarative modeling paradigm and therefore guarantees flexible process execution. For several reasons, declarative process models have turned out to be hard to read and comprehend. Thus, it is also hard to decide whether two process models are equal with respect to their semantic meaning, whether one model is completely contained in another one, or how far two models overlap. In this paper, we follow an automaton-based approach: we transform Declare process models into finite state automatons and apply automata theory to solve these issues.


Introduction
In business process management (BPM), two opposing classes of business processes can be identified: routine processes and flexible processes (also called knowledge-intensive, decision-intensive or declarative processes) [1,2]. For the latter, a number of different process modeling languages such as Declare [3], Multi-Perspective-Declare (MP-Declare) [4], DCR graphs [5] and the Declarative Process Intermediate Language (DPIL) [6,7] have emerged in recent years. These languages are called declarative modeling languages. They describe a process by restrictions (so-called constraints) over the behavior, which must be satisfied throughout process execution. Especially Declare has become a widespread and frequently used modeling language. The declarative paradigm guarantees more flexibility than the imperative one, which is the modeling standard for routine processes. On the other hand, it has turned out that declarative process models are for several reasons hard to read and understand, which affects the execution, modeling and maintenance of declarative process models in a negative way: the large degree of flexibility offers the modeler a multitude of options to express the same fact. Hence, the same process can be described by very different declarative process models (cf. Sect. 2). In general, declarative process models bear a high risk of over- or underspecification, i.e., the process model forbids valid process executions or allows process executions that do not correspond to reality, respectively. Such a wrong specification is often caused by hidden dependencies [8], i.e., implicit dependencies between activities that are not explicitly modeled but occur through the interaction of other dependencies. The Declare modeling language relies on linear temporal logic (LTL) [3]. Hence, constraints and process models, respectively, are represented as LTL formulas.
Although there is a set of common Declare templates, this set is not exhaustive in the sense that sometimes plain LTL formulas are necessary to complete a process specification. Also, for defining customized templates for reuse (e.g., if a dependency between more than two activities should be expressed), modelers cannot avoid working with plain LTL. This problem is aggravated by the fact that a canonical standard form for LTL formulas does not exist, so in general these formulas are not unique. Enriching the predefined constraints with plain LTL exacerbates the problem of understanding such models.
Therefore, there is a high interest in keeping a process model as simple as possible without deteriorating its conformance with reality. However, changing or simplifying such a process model bears the risks described above, i.e., over- and underspecification. Hence, model checking, especially comparing models for equality, becomes an important task for modeling and verifying declarative process models. Most of the time, this is achieved by simulating process executions of different lengths (so-called trace lengths) and checking their validity. However, this is a very time-consuming and tedious task that can only be done for a limited number of traces and gives no guarantee that the considered process models are equal. Also, when the process models differ, it might be interesting to work out their common properties and differences and to quantify them.
This paper is a continuation of our previous work [9], which proves an upper bound for the trace length needed for comparing two Declare process models for equality based on traces. This approach is mainly simulation-based: it simulates all traces of length lower than or equal to the upper bound and compares them. In this paper, we propose an alternative to this simulation-based approach that relies completely on automata theory. The latter approach has the advantage that the computational effort for simulating traces is avoided. This is a decisive advantage, since this effort might be rather high when complex process models have to be treated. In Sect. 4.5, we recommend how to combine both approaches, the simulation-based and the theory-based one, in order to enhance the applicability of our work. We show that both approaches complement each other ideally. Furthermore, in this paper we propose some measures to quantify the differences of non-equal Declare process models.
The remainder of the paper is structured as follows: Section 2 recalls basic terminology, explains the necessary foundations of automata theory and introduces a running example. In Sect. 3, we give an overview of related work and show how our work differs from it. In Sect. 4, we recall the simulation-based approach from [9] and propose its advanced version. Additionally, we introduce some measures that help to quantify the similarity of Declare models. Section 5 presents the implementation, discusses the asymptotic behavior of the proposed algorithms and presents a practical application of our approach. Finally, Sect. 6 concludes the work and gives an outlook on future work.

Basic terminology and running example
In this section, we recall basic terminology and the foundations of automata theory and introduce a running example. Events, traces and event logs are introduced to provide a common basis for the contents of both process models and process traces. Afterward, we give a short introduction of the Declare modeling language, since we focus on this modeling language in the rest of the paper. We also introduce the foundations of automata theory, since our approach is founded on this theory.

Events, traces and event logs
We briefly recall the standard definitions of events, traces and (process) event logs as defined in [10]. We start with the definition of activities and events:

Definition 1 An activity is a well-defined step in a business process. An event is the occurrence of an activity in a particular process instance.
This definition enables the definition of a trace, which is a time-ordered sequence of events:

Definition 2 Let E be the universe of all events, i.e., the set of all possible events. A trace is a finite sequence σ = ⟨e₁, ..., eₙ⟩ of events eᵢ ∈ E such that all events belong to the same process instance and are ordered by their execution time, where n := |σ| denotes the trace length of σ.
We say that a trace is completed if the process instance was successfully closed, i.e., the trace does not violate a constraint of the process model and no additional events related to this process instance will occur in the future. Note that in case of declarative process modeling languages like Declare, the user must stop working on the process instance in order to close it, whereas in imperative process models this is achieved automatically by reaching an end event [3]. However, a process instance can be closed if and only if no constraint of the underlying process model is violated [3]. From the definitions above, we can derive the definition of an event log:

Definition 3 An event log is a collection (multi-set) of traces that belong to the same process.

Declare and Declare constraints
Declare is a single-perspective declarative process modeling language that was introduced in [3]. Instead of modeling all viable paths explicitly, Declare describes a set of constraints applied to activities that must be satisfied throughout the whole process execution. Hereby, the control flow and the ordering of the activities are implicitly specified. Each process execution which does not violate any of the constraints is a valid execution. Declare constraints are instances of templates, i.e., patterns that define parameterized classes of properties [4]. Each template corresponds to a graphical representation in order to make the model more understandable to the user. Table 1 summarizes the common Declare templates (e.g., absence(A, n) = G(¬A ∨ X(absence(A, n − 1))) with absence(A, 0) = G(¬A)). Although Declare provides a broad repertoire of different templates, which covers the most common scenarios, this set is non-exhaustive and can be arbitrarily extended by the modeler by defining new templates. When doing so, the modeler has to deal with the underlying logic-based formalization that defines the semantics of the templates (respectively, constraints). Declare relies on linear temporal logic (LTL) over finite traces (LTL_f) [3]. Hence, we can define a Declare process model formally as follows:

Definition 4 A Declare process model is a pair P = (A, T), where A is a finite set of activities and T is a finite set of LTL constraints over A (i.e., instances of the predefined templates or plain LTL formulas).
LTL makes it possible to define conditions or rules about the future of a system. In addition to the common logical connectors (¬, ∧, ∨, →, ↔) and atomic propositions, LTL provides a set of temporal (future) operators. Let φ₁ and φ₂ be LTL formulas. The future operators F, X, G, U and W have the following meaning: Fφ₁ means that φ₁ holds sometime in the future, Xφ₁ means that φ₁ holds in the next position, Gφ₁ means that φ₁ holds forever in the future, and φ₁ U φ₂ means that φ₂ will hold sometime in the future and until that moment φ₁ holds. The weak until φ₁ W φ₂ has the same meaning as the until operator, except that φ₂ is not required to ever hold; in that case, φ₁ must hold forever.
For a more convenient specification, LTL is often extended to past linear temporal logic (PLTL) [11] by introducing so-called past operators, which make it possible to define conditions or rules about the past but do not increase the expressiveness of the formalism [12]. The past operators O, Y and S have the following meaning: Oφ₁ means that φ₁ held sometime in the past, Yφ₁ means that φ₁ held in the previous position, and φ₁ S φ₂ means that φ₂ held sometime in the past and since that moment φ₁ holds.
For a better understanding, we exemplarily consider the response constraint G(A → FB). This constraint means that if A occurs, B must eventually follow sometime in the future. We consider, for example, four traces T₁, T₂, T₃ and T₄ = ⟨A, B, A, C⟩. The traces T₁, T₂ and T₃ satisfy the response constraint, as each occurrence of activity A is followed by an occurrence of activity B. Note that T₂ fulfills this constraint trivially because activity A does not occur at all (so-called vacuous satisfaction). However, T₄ violates the constraint because after the second occurrence of A no execution of B follows.
We say that an event activates a constraint in a trace if its occurrence imposes some obligations on other events in the same trace. Such an activation leads either to a fulfillment or to a violation of a constraint. Consider, for example, the response constraint response(A, B). This constraint is activated by the execution of activity A. In T 4 , for instance, the response constraint is activated twice. In case of the first activation, this leads to a fulfillment because B occurs. However, the second activation leads to a violation because B does not occur subsequently.
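The activation semantics just described can be sketched programmatically. The following Python fragment is our own illustration (not part of any Declare tooling): it checks response(A, B) on a finite trace and counts activations and violations, mirroring the discussion of T₄ above.

```python
# Sketch (our own illustration): checking the Declare response(A, B)
# constraint G(A -> F B) on a finite trace, counting activations.
def check_response(trace, a="A", b="B"):
    """Return (satisfied, activations, violations) for response(a, b)."""
    activations, violations = 0, 0
    for i, event in enumerate(trace):
        if event == a:
            activations += 1
            # an activation is fulfilled iff b occurs later in the trace
            if b not in trace[i + 1:]:
                violations += 1
    return violations == 0, activations, violations

# T4 from the example: two activations, the second one is violated
print(check_response(["A", "B", "A", "C"]))  # (False, 2, 1)
```

A trace without any A (like T₂ above) yields zero activations and is vacuously satisfied.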
In our research, we use Declare as a representative of declarative process modeling languages. Declare is rather prominent in the process modeling community and has been investigated thoroughly, which supports our decision. In principle, our approach allows exchanging the declarative process modeling language. In order to do so, the language constructs of that language would have to be transformed into finite state automatons, as will be shown in Sect. 4.1. Having settled this transformation, our methods can be applied as shown in this paper.

Automata theory
Our approach is mainly based on deterministic finite state automatons (FSA). We aim to map the underlying Declare process models to finite state automatons in order to extract information, which can be used to make statements about the process models. Therefore, we briefly introduce the basic concepts and algorithms of automata theory. For further details cf. [13]. We start with the formal definition of a deterministic finite state automaton:

Definition 5 A deterministic finite state automaton (FSA) is a quintuple M = (Σ, S, s₀, δ, F), where Σ is a finite (non-empty) set of symbols, S is a finite (non-empty) set of states, s₀ ∈ S is the initial state, δ : S × Σ → S is the state-transition function and F ⊆ S is the set of final states.
As we want to deal with words and not only single symbols, we have to expand the definition:

Definition 6 Let Σ be a finite (non-empty) set of symbols. Then Σ* := {a₁a₂ . . . aₙ | n ∈ ℕ₀, aᵢ ∈ Σ} is the set of all words over symbols in Σ. For each word ω = a₁ . . . aₙ ∈ Σ*, we define the length of ω as |ω| := n.

In the following, for the sake of simplicity, δ always denotes the extended state-transition function δ̂ for words ω ∈ Σ*. The set of words that are accepted by an FSA M is called the language of M: L(M) := {ω ∈ Σ* | δ(s₀, ω) ∈ F}. Words that can be constructed from the same alphabet, but are not accepted by the FSA, form the complement of the language: L^C := Σ* \ L(M). Consider, for example, the language L of all words that start with the symbol A. The complement L^C of L consists of all words of Σ* which do not start with A (including the empty word): L^C = {ω₁ω₂ | ω₁ ∈ Σ\{A}, ω₂ ∈ Σ*} ∪ {ε}. The corresponding automaton is illustrated on the right side of Fig. 1.
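To make these definitions concrete, here is a minimal Python sketch (class layout and state names are our own) of an FSA for the example language L of words starting with A, together with word acceptance and complementation by swapping final and non-final states:

```python
# A minimal DFA sketch (our own encoding, not from the paper) for the
# example language L of words starting with A over Sigma = {A, B}.
class DFA:
    def __init__(self, sigma, states, start, delta, finals):
        self.sigma, self.states = sigma, states
        self.start, self.delta, self.finals = start, delta, finals

    def accepts(self, word):
        # extended transition function: apply delta symbol by symbol
        state = self.start
        for symbol in word:
            state = self.delta[(state, symbol)]
        return state in self.finals

    def complement(self):
        # swap final and non-final states to accept Sigma* \ L
        return DFA(self.sigma, self.states, self.start,
                   self.delta, self.states - self.finals)

SIGMA = {"A", "B"}
# states: 0 = start, 1 = first symbol was A (accepting), 2 = trap
delta = {(0, "A"): 1, (0, "B"): 2,
         (1, "A"): 1, (1, "B"): 1,
         (2, "A"): 2, (2, "B"): 2}
starts_with_a = DFA(SIGMA, {0, 1, 2}, 0, delta, {1})

print(starts_with_a.accepts("AB"))               # True
print(starts_with_a.complement().accepts("AB"))  # False
```

Note that the complement automaton accepts the empty word, since the empty word does not start with A.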
As we have to handle automatons that consist of a large number of states, it is desirable to decrease the number of states in order to improve the performance. In general, there exists a minimal automaton which accepts the same language:

Theorem 1 For each FSA M, there exists a minimal FSA M' (i.e., an FSA with a minimal number of states) with L(M') = L(M).

Proof cf. [13].

Remark 2
This theorem is trivially fulfilled if M is already minimal. If M is not minimal, we use the Hopcroft algorithm [14] to construct an equivalent minimal finite state automaton.¹

Given two FSAs, we are interested in the intersection of their corresponding languages, i.e., the set of all words that are accepted by both. Therefore, we can use the construction of the product automaton:

Definition 7 Let M₁ = (Σ, S₁, s₀¹, δ₁, F₁) and M₂ = (Σ, S₂, s₀², δ₂, F₂) be two deterministic finite state automatons over the same set of symbols Σ. The product automaton is defined as M₁ × M₂ := (Σ, S₁ × S₂, (s₀¹, s₀²), δ, F₁ × F₂) with δ((s₁, s₂), a) := (δ₁(s₁, a), δ₂(s₂, a)).

From the definition of the product automaton M = M₁ × M₂ of two deterministic finite state automatons M₁ and M₂, it follows that M accepts exactly the intersection of L(M₁) and L(M₂). Furthermore, an automaton for the difference of two automatons can be calculated: given two finite state automatons M₁ and M₂, an automaton that accepts exactly the words accepted by M₁ (and not by M₂) can be constructed by calculating the automaton M₁ × M₂^C, where M₂^C denotes the complement automaton of M₂; applying this construction in both directions yields the symmetric difference. We will use this construction later in our approach.
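The product construction can be sketched as follows. The Python fragment below uses our own data layout (a DFA as a tuple of start state, transition dict, final states and alphabet) and builds only the reachable part of S₁ × S₂; varying the rule for final states yields intersection and difference automatons:

```python
# Sketch of the product construction (data layout is our own; only the
# reachable part of S1 x S2 is built).
def product(d1, d2, finals_rule):
    start1, delta1, finals1, sigma = d1
    start2, delta2, finals2, _ = d2
    start = (start1, start2)
    delta, seen, stack = {}, {start}, [start]
    while stack:
        s1, s2 = stack.pop()
        for a in sigma:
            t = (delta1[(s1, a)], delta2[(s2, a)])
            delta[((s1, s2), a)] = t
            if t not in seen:
                seen.add(t)
                stack.append(t)
    finals = {(s1, s2) for (s1, s2) in seen
              if finals_rule(s1 in finals1, s2 in finals2)}
    return start, delta, finals, sigma

def intersection(d1, d2):   # accepts L(M1) ∩ L(M2)
    return product(d1, d2, lambda f1, f2: f1 and f2)

def difference(d1, d2):     # accepts L(M1) \ L(M2), i.e., M1 x M2^C
    return product(d1, d2, lambda f1, f2: f1 and not f2)

def accepts(dfa, word):
    start, delta, finals, _ = dfa
    state = start
    for a in word:
        state = delta[(state, a)]
    return state in finals

# toy DFAs over {A, B}: "word contains A" and "word has even length"
contains_a = (0, {(0, "A"): 1, (0, "B"): 0,
                  (1, "A"): 1, (1, "B"): 1}, {1}, {"A", "B"})
even_len = (0, {(0, "A"): 1, (0, "B"): 1,
                (1, "A"): 0, (1, "B"): 0}, {0}, {"A", "B"})
print(accepts(intersection(contains_a, even_len), "AB"))  # True
```

The complement trick from above (swapping final and non-final states of M₂) is folded directly into the `finals_rule` of the difference construction.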

Running example
In the following, we will refer extensively to two examples, which reflect the different application scenarios of our approach.
Example 1 The first sample process P consists of a set A of three activities A, B and C with the following control flow: either the three activities are executed in sequence (i.e., ABC) or, alternatively, C is executed arbitrarily often but at least once. After each execution of the sequence ABC, also the sequence BC can be executed arbitrarily often. The Declare language offers manifold ways for modeling this process. For example, we can describe this process by two different process models: a model P₁ = (A, T₁) containing seven constraints, among them t₁: response(A, B) and t₇: G(B → X(¬A)), and a model P₂ = (A, T₂) containing four constraints. For a better illustration, the process models are depicted in graphical Declare notation in Fig. 2a, b. Apart from the respondedExistence template that occurs in both process models, P₁ and P₂ seem to be completely different. Hence, it is difficult to assess whether the two process models really describe the same process. We will show throughout the paper how our approach can be used to validate this claim.

Example 2 The second example consists of two process models Q₁ and Q₂. Obviously, these process models describe different processes, since activity A has to be executed at least once in Q₁, whereas in Q₂ it does not have to be executed. Hence, Q₂ accepts the trace ⟨B, B⟩ (as the only constraint of Q₂ demands a double execution of activity B), and Q₁ does not, because an execution of activity A is missing. We will use this example in order to demonstrate how our approach can be used for analyzing differences of Declare process models. Both process models are depicted in graphical Declare notation in Fig. 3a, b.

Related work on process model similarity
Determining similarity and common properties of process models is a very important issue in industry and research [15,16]. Among other things, it is necessary to identify duplicate models [17] and different model variants [18], which might be produced when process models are changed or merged. This work relates to the stream of research on modeling and checking declarative process models. Difficulties in understanding and modeling declarative processes are a well-known problem in current research. Nevertheless, there are only a handful of experimental studies that deal with the understandability of declarative process models. In [19], a study reveals that single constraints can be handled well by most individuals, whereas sets of constraints pose a serious challenge, i.e., Declare models consisting of more than a handful of constraints might become very hard or even impossible for humans to understand. Furthermore, it has been stated that individuals use the composition of the notation elements for interpreting Declare models. Similar studies [8,20] investigated the understandability of hybrid process representations, which consist of graphical- and text-based specifications.
For different model checking tasks of both multi-perspective and single-perspective declarative process models, there are different approaches. In [21], an automaton-based approach is presented for the detection of redundant constraints and of contradictions between constraints; it does not, however, address checking different process models for equality or analyzing their differences. In [22,23], the problem of the detection of hidden dependencies is addressed. Hidden dependencies are dependencies between activities which are not modeled explicitly but result from the combination of different constraints. In [22], the extracted hidden dependencies are added to the Declare models through visual and textual annotations to improve the understandability of the models. In [24], the authors transform the common Declare templates into a standardized form called positive normal form, with the aim of simplifying model comparisons. But also this approach reaches its limits because the positive normal form is not unique, and hence different positive normal forms can describe the same model. The authors in [25] investigate the single elements of process models in order to detect corresponding or equivalent elements in different process models. Hence, equivalent elements, e.g., activities or actors, can be identified, but there remains the need to combine all these elements, as only together they represent a whole process model.
There is also some effort in transforming Declare process models into different representations for deeper analysis. In [26], formulas of linear temporal logic over finite traces are translated into both nondeterministic and deterministic finite automatons, but these automatons have not yet been used for comparing the underlying process models. In [27], Büchi automatons are generated from LTL formulas. In [28], Declare templates are translated into deterministic finite automatons, which are used for implementing a discovery algorithm for the Declare language. However, these efforts do not provide a means of comparing process models either.
The standard procedure for comparing the desired behavior with the expected behavior provided in a process model includes the generation of exemplary process executions [29], which are afterward analyzed in detail with regard to undesired behavior such as contradictions, deadlocks or deviations from the behavior in reality. To this end, process execution traces up to a certain length are simulated and investigated. This procedure has the weakness that the calculation has to be stopped at some trace length due to computing power and storage requirements. Hence, a definite statement about equality or inequality cannot be made, as there might be undetected inconsistencies in traces of larger lengths. In [9], we addressed this issue by computing a theoretical upper bound for the trace length in order to make it possible to decide about equality with certainty. The present paper extends this method by presenting an alternative approach based on automaton comparison and by providing measures that help to make statements about the differences of non-equal process models.
In [30], the authors define the equality between two process models (regardless of whether they are imperative or declarative) on the basis of all viable process execution paths. Often, for a better understanding of a model, counterexamples are also explicitly constructed to verify whether a model prevents a particular behavior [31]. For generating exemplary process executions, it is necessary to execute declarative process models. In [32], both MP-Declare templates and Declare templates are translated into the logic language Alloy² and the corresponding Alloy framework is used for the execution. An approach for generating traces directly from a declarative process model (i.e., MP-Declare as well as Declare) is presented by the authors in [33].
In [31], based on a given process execution trace (which can also be empty), possible continuations of the process execution are simulated up to an a priori defined length. The authors emphasize the usefulness of model checking of (multi-perspective) declarative processes by simulating different behavior. However, the length of the look-ahead is chosen arbitrarily and, hence, can only guarantee the correctness of a model up to a certain trace length. In summary, a generally applicable algorithm to determine the minimum trace length required to decide whether process models are equivalent is still missing.

Comparing Declare process models
In this introductory part, we want to give a brief overview on our approach. Details about the single steps for comparing Declare process models will be explained in the corresponding subsections. The overall concept is illustrated in Fig. 4.
The input of our approach consists of two Declare models P₁ = (A₁, T₁) and P₂ = (A₂, T₂). In a preparation phase (cf. Sect. 4.1), we first transform each template of the Declare models into a deterministic finite state automaton (step 1 in Fig. 4). Afterward, we construct (minimal) FSAs D₁ and D₂ for each process model by intersecting the automatons of the corresponding templates (step 2 in Fig. 4, cf. Sect. 4.2).
After calculating the two product automatons D₁ and D₂, we can apply our comparison algorithms (step 3 in Fig. 4), i.e., we compare the two automatons with respect to equality. Firstly, this is done by comparing all words of an automaton up to a particular length (the so-called simulation-based approach) that guarantees a decision on whether D₁ and D₂ are equal (cf. Sect. 4.3.1). Secondly, as an alternative (the so-called theory-based approach³), the comparison takes place exclusively by directly investigating the automatons themselves (cf. Sect. 4.3.2). The simulation-based approach and the theory-based approach complement each other. Thus, we provide a short recommendation in Sect. 4.5 on when to apply which algorithm.
If the process models are equivalent, there is no further work to do; otherwise, we analyze their differences in detail (cf. Sect. 4.4). This encompasses checking the models for mutual containment (step 4 in Fig. 4) and calculating the intersection and differences of the process models (step 5 in Fig. 4).
The resulting automatons of the intersection and the differences are often difficult to interpret and compare. A common approach is therefore to generate traces of certain lengths that are accepted by the automatons and to analyze and compare those sets. To this end, we propose and apply some measures to quantify the differences of the process models (cf. Sect. 4.6).

Transformation of Declare templates to finite state automatons
The first step of our approach is to transform Declare templates into deterministic finite state automatons (step 1 in Fig. 4). For the most common Declare templates, this transformation was already done in [28]. However, the Declare templates notRespondedExistence and notResponse are not dealt with in that paper; their representations as FSAs are shown in Figs. 5 and 6. Traces that fulfill a Declare template are exactly the elements of the accepted language of the corresponding FSA. For example, trace σ₁ = ⟨A, A⟩ fulfills the notResponse template, whereas trace σ₂ = ⟨A, A, B⟩ does not. The same holds for the corresponding automaton: σ₁ is accepted and σ₂ is not accepted (cf. Fig. 6). In a Declare model, multiple activities are involved in multiple templates. One concrete template normally comprises one or two activities. From the viewpoint of such a template, we also have to consider those activities that appear in other templates. Since their executions must not have an impact on the state of the template under consideration, we add transitions of type otherwise to the corresponding automaton. These transitions represent all executions of activities that do not occur in the respective template. For example, in Fig. 6 the Declare template defines a dependency between activities A and B. When A or B occurs, the respective state transitions are initiated. Nevertheless, this template might be part of a comprehensive process model that also contains the activities C, D, and E. Referring to the template from Fig. 6, whenever these three activities occur, they are "swallowed" by the otherwise transitions, i.e., they do not change the state of the FSA.
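As an illustration of such a template automaton, the following Python sketch (state numbering and encoding are our own, mirroring Fig. 6) implements notResponse(A, B), including the otherwise behavior that swallows unrelated activities:

```python
# Sketch of the notResponse(A, B) template as a DFA; state names and the
# encoding of the "otherwise" transitions are our own assumptions.
def not_response_automaton():
    # states: 0 = A not yet seen, 1 = A seen, 2 = violated (trap)
    def delta(state, activity):
        if state == 0:
            return 1 if activity == "A" else 0   # otherwise-loop on 0
        if state == 1:
            return 2 if activity == "B" else 1   # otherwise-loop on 1
        return 2                                  # trap state
    return delta, 0, {0, 1}

def accepts(trace):
    delta, state, finals = not_response_automaton()
    for activity in trace:
        state = delta(state, activity)
    return state in finals

print(accepts(["A", "A"]))       # sigma_1: True
print(accepts(["A", "A", "B"]))  # sigma_2: False
print(accepts(["A", "C", "D"]))  # C and D are swallowed: True
```

Activities such as C and D simply loop on the current state, which is exactly the role of the otherwise transitions described above.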

Transformation of Declare models to finite state automatons
Next, Declare process models have to be transformed into finite state automatons (step 2 in Fig. 4). This procedure is described in Algorithm 2. Hence, the process model is represented by a deterministic finite state automaton, and the words of the automaton correspond to the valid traces of the process model (i.e., the language of the automaton is the set of all valid traces).
Since a Declare process model P consists of a set of different Declare templates T = {t₁, . . . , tₙ}, a trace σ satisfies P if and only if it satisfies all templates, i.e., σ satisfies P ⇔ σ satisfies tᵢ for all i ∈ {1, . . . , n}. By using the concept of the product automaton (cf. Sect. 2.3), the following conclusion can be derived:

Remark 3
The resulting product automaton D₁ × · · · × Dₙ consists of |S₁| · . . . · |Sₙ| states, where Sᵢ denotes the state set of the template automaton Dᵢ. In order to decrease the number of states, we can use the Hopcroft minimization algorithm [14] after each intersection of two automatons Dᵢ and Dⱼ. This means that the minimization algorithm will be called n − 1 times during the calculation.
Note that the minimization algorithm does not affect the correctness of our approach. Minimization just helps to decrease the number of states in order to reduce the storage space needed for our computations and to speed up the algorithms.
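For illustration, such a minimization can be sketched by partition refinement in the style of Moore's algorithm, a simpler relative of the Hopcroft algorithm [14] with the same result; the sketch and its data layout are our own:

```python
# Sketch of DFA minimization by partition refinement (Moore's algorithm);
# a DFA is given as (start, delta-dict, finals, sigma), our own layout.
def minimize(start, delta, finals, sigma):
    states = {start} | set(delta.values()) | {s for s, _ in delta}
    partition = [p for p in (states & finals, states - finals) if p]

    def block_of(s):
        return next(i for i, b in enumerate(partition) if s in b)

    while True:
        new_partition = []
        for block in partition:
            # split a block by where its states' successors go
            groups = {}
            for s in block:
                sig = tuple(block_of(delta[(s, a)]) for a in sorted(sigma))
                groups.setdefault(sig, set()).add(s)
            new_partition.extend(groups.values())
        if len(new_partition) == len(partition):
            break       # partition is stable -> blocks are the new states
        partition = new_partition
    # pick one representative state per block and rewire transitions
    rep = {s: min(b) for b in partition for s in b}
    new_delta = {(rep[s], a): rep[t] for (s, a), t in delta.items()}
    return rep[start], new_delta, {rep[s] for s in finals}, sigma

# non-minimal DFA over {A}: states 1 and 2 are equivalent (both accepting)
delta = {(0, "A"): 1, (1, "A"): 2, (2, "A"): 1}
start, new_delta, finals, _ = minimize(0, delta, {1, 2}, {"A"})
print(len({start} | set(new_delta.values())))  # 2 states remain
```

Applying such a step after each product keeps the intermediate automatons small, as described in Remark 3.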

Checking Declare models for equality
Based on the previous results, it is now possible to construct algorithms for checking the equality of two Declare process models P₁ = (A₁, T₁) and P₂ = (A₂, T₂) (step 3 in Fig. 4). To this end, we check the corresponding finite state automatons for equality, i.e., we check whether they accept the same language. This can either be achieved by considering the words up to a particular length accepted by the automatons, which is guaranteed to decide the equality of the automatons (simulation-based approach); this approach was already proposed in our previous work [9]. Alternatively, we can check the equality by directly investigating the automatons themselves (theory-based approach), which is one of the new contributions of this article. In the following, we describe the two approaches. Note that both approaches are also applicable for checking more than two models for equality: the models can be compared pairwise in order to obtain information about more than two models.

Simulation-based approach
This approach constructs traces of a particular length and compares them. The essential part of the simulation-based approach is to determine an upper bound, i.e., a maximal trace length up to which the traces must be simulated in order to decide with certainty whether two Declare models are equal. Therefore, we formulate and prove a theorem that determines this upper bound.

Theorem 2 Let D₁ and D₂ be two deterministic finite state automatons with m and n states, respectively. If L(D₁) ≠ L(D₂), then there exists a word a with |a| < m · n that is accepted by exactly one of the two automatons.

Proof Let D be the product automaton (with state set S_D, |S_D| = mn, and initial state q₀) that accepts the symmetric difference of L(D₁) and L(D₂), and let a be a shortest word accepted by D. We assume by contradiction that |a| ≥ mn. We define X := {δ_D(q₀, b) | b prefix of a}. Since a has at least mn + 1 prefixes (including the empty word and a itself) and |S_D| = mn, there exist two distinct prefixes u and u′ of a with δ_D(q₀, u) = δ_D(q₀, u′). We assume without loss of generality that u is a proper prefix of u′. So there are two words v and z such that uv = u′ and u′z = a; it follows that uvz = a.

As u ≠ u′, v is not empty. The equation δ_D(δ_D(q₀, u), v) = δ_D(q₀, u) says that v leads D through a loop from state δ_D(q₀, u) into itself. So we have found a word uz with δ_D(q₀, uz) = δ_D(q₀, a) and |uz| < |a|. Since a is accepted by D, uz is accepted by D as well. This is a contradiction to the minimality of a.
The interpretation of this theorem for our purposes is the following: in order to check the underlying Declare models for equality, we have to calculate the upper bound b := |D₁| · |D₂|, where |Dᵢ| denotes the number of states of automaton Dᵢ. Afterward, all words up to length b have to be simulated and checked whether D₁ and/or D₂ accept them (this can be done via a trace generator for Declare process models [32,33] or by deriving all words directly from the automaton). If both accept the same set of words, i.e., {ω | |ω| ≤ b and D₁ accepts ω} = {ω | |ω| ≤ b and D₂ accepts ω}, the automatons are equal. Otherwise, they are not equal.

Algorithm 2: Transformation of a Declare model P = (A, T) into a minimal FSA D
1  i ← 1
2  foreach template t ∈ T do
3      t_i ← toAutomaton(t)
4      t_i ← minimization(t_i)
5      U ← U ∪ {t_i}
6      i ← i + 1 /* Calculating set U of automatons for T */
7  end
8  for j = 1, . . . , i − 2 do
9      t_{j+1} ← minimization(product(t_j, t_{j+1})) /* Calculating minimal product automaton of P */
10 end
11 D ← t_{i−1}
12 return D
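A simulation-based equality check along these lines can be sketched in Python as follows (toy DFAs and data layout are our own; in practice, the words can come from a trace generator [32,33] or be derived from the automatons):

```python
# Simulation-based equality sketch: enumerate every word up to the bound
# b = |D1| * |D2| and compare acceptance; DFA layout is our own.
from itertools import product as cartesian

def accepts(dfa, word):
    start, delta, finals = dfa
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in finals

def equal_by_simulation(d1, d2, sigma, n_states1, n_states2):
    bound = n_states1 * n_states2           # upper bound from Theorem 2
    for length in range(bound + 1):
        for word in cartesian(sigma, repeat=length):
            if accepts(d1, word) != accepts(d2, word):
                return False, word          # counterexample found
    return True, None

# two identical 2-state DFAs over {A} accepting words of even length
d1 = (0, {(0, "A"): 1, (1, "A"): 0}, {0})
d2 = (0, {(0, "A"): 1, (1, "A"): 0}, {0})
print(equal_by_simulation(d1, d2, ["A"], 2, 2))  # (True, None)
```

The exhaustive enumeration grows exponentially with the bound, which is exactly the computational effort the theory-based approach avoids.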

Example
We now apply the two approaches for checking equality to the process models P₁ and P₂ of the running example. First, we transform each constraint of the process models into a finite state automaton. Afterward, the automatons of the constraints of each process model are intersected and minimized in order to represent each process model as a single automaton. Calculating the automatons D₁ and D₂ leads to the same automaton, which is depicted in Fig. 7. Note that the minimization step has an impressive effect: without minimization, the product automatons have |S_t₁| · |S_t₂| · |S_t₃| · |S_t₄| · |S_t₅| · |S_t₆| · |S_t₇| = 1296 states (process model P₁) and |S_t₁| · |S_t₂| · |S_t₃| · |S_t₄| = 108 states (process model P₂), respectively, since the number of states of a product automaton is the product of the numbers of states of the single template automatons. The minimized automatons contain only 5 states each (cf. Fig. 7). In case of the simulation-based approach, Theorem 2 tells us that we must simulate all traces up to length |S₁| · |S₂| = 5 · 5 = 25 (without minimization, it would be necessary to consider all traces up to a length of 1296 · 108 = 139,968). Generating all traces for both process models up to the determined upper bound and comparing them reveals that the two process models are indeed equal. In the theory-based approach, it is neither necessary to determine an upper bound nor to simulate traces. It is sufficient to create the (minimal) symmetric difference product of both automatons. The result of this construction is depicted in Fig. 8, which shows that this automaton does not contain an accepting state. Hence, the automatons, and thus the process models, are equal.
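The theory-based check amounts to an emptiness test of the symmetric difference. The following Python sketch (our own encoding, with toy DFAs) builds the symmetric-difference product on the fly and searches for a reachable accepting state:

```python
# Theory-based equality sketch: BFS over the symmetric-difference product;
# DFA layout (start, delta-dict, finals) is our own.
from collections import deque

def equal_by_theory(d1, d2, sigma):
    (s1, delta1, f1), (s2, delta2, f2) = d1, d2
    start = (s1, s2)
    queue, seen = deque([start]), {start}
    while queue:
        p, q = queue.popleft()
        # accepting in the symmetric difference = exactly one side accepts
        if (p in f1) != (q in f2):
            return False
        for a in sigma:
            t = (delta1[(p, a)], delta2[(q, a)])
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True  # no accepting state reachable -> languages are equal

even = (0, {(0, "A"): 1, (1, "A"): 0}, {0})
odd  = (0, {(0, "A"): 1, (1, "A"): 0}, {1})
print(equal_by_theory(even, even, ["A"]))  # True
print(equal_by_theory(even, odd, ["A"]))   # False
```

In contrast to the simulation-based check, this visits each of the at most |S₁| · |S₂| product states once instead of enumerating exponentially many traces.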

Analyzing mutual containment and differences between Declare models
In case two process models are not equal, it is interesting to identify and also to interpret their differences. The following two questions arise:

Q1 Is one model contained within the other one, i.e., {σ | σ satisfies P_1} ⊂ {σ | σ satisfies P_2} (i.e., all traces accepted by one model are accepted by the other one) or vice versa (mutual containment)?

Q2 What are the common properties of the models, i.e., which traces are accepted by both models, and where are the differences between the models, i.e., which traces are accepted by P_1 but not by P_2 and vice versa?

Answer to question 1
Let D_1 be the corresponding finite state automaton of P_1 and D_2 the corresponding finite state automaton of P_2. For checking mutual containment (step 4 in Fig. 4), we first check whether {ω | D_1 accepts ω} ⊂ {ω | D_2 accepts ω}, i.e., whether the first Declare model is completely contained in the second one. If the result is true, we know that L(D_1) is contained in L(D_2) and thus P_1 is contained in P_2. Otherwise, we check the opposite containment relation, i.e., {ω | D_2 accepts ω} ⊂ {ω | D_1 accepts ω}. If neither model is contained in the other one, they describe substantially different processes.
For checking the containment of L(D_i) in L(D_j), we calculate the product automaton A of D_i and D_j. This automaton accepts exactly the intersection of L(D_i) and L(D_j). We then check whether A is equal to D_i; if so, L(D_i) is a subset of L(D_j).
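A minimal sketch of this containment check, under the same illustrative DFA encoding as before. Instead of comparing the product automaton with D_i as the text describes, we use the equivalent criterion that the difference automaton D_i × complement(D_j) accepts nothing; this is an alternative formulation of ours, not the paper's exact procedure:

```python
from itertools import product as cartesian

def complement(dfa):
    """Complement of a complete DFA: swap accepting and non-accepting states."""
    states, start, accepting, delta = dfa
    return (states, start, states - accepting, delta)

def product_dfa(d1, d2):
    """Product automaton accepting L(d1) ∩ L(d2)."""
    s1, q1, f1, t1 = d1
    s2, q2, f2, t2 = d2
    states = set(cartesian(s1, s2))
    symbols = {a for (_, a) in t1}
    delta = {((p, q), a): (t1[(p, a)], t2[(q, a)])
             for (p, q) in states for a in symbols}
    accepting = {(p, q) for (p, q) in states if p in f1 and q in f2}
    return (states, (q1, q2), accepting, delta)

def is_empty(dfa):
    """True iff no accepting state is reachable from the start state."""
    states, start, accepting, delta = dfa
    symbols = {a for (_, a) in delta}
    seen, stack = {start}, [start]
    while stack:
        p = stack.pop()
        if p in accepting:
            return False
        for a in symbols:
            q = delta[(p, a)]
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return True

def contained(di, dj):
    """True iff L(di) ⊆ L(dj)."""
    return is_empty(product_dfa(di, complement(dj)))

# 'at least one a' is contained in 'any word', but not vice versa.
any_word = ({0}, 0, {0}, {(0, 'a'): 0, (0, 'b'): 0})
one_a = ({0, 1}, 0, {1}, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 1})
print(contained(one_a, any_word), contained(any_word, one_a))  # True False
```

The same product-with-complement automaton also accepts the difference language L(D_i)\L(D_j), which is exactly the construction used for answering question 2 below.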

Answer to question 2
For answering the second question, we construct automatons describing the intersection L(D_1) ∩ L(D_2) and the differences L(D_1)\L(D_2) and L(D_2)\L(D_1) (step 5 in Fig. 4). We calculate the product automaton of D_1 and D_2 in order to obtain an automaton for the intersection. For calculating the difference L(D_i)\L(D_j), we calculate the product automaton of D_i and the complement of D_j (cf. Sect. 2.3). The resulting automaton accepts exactly the set {σ | σ satisfies P_i and σ does not satisfy P_j}.
In case the results of the above case analysis (automatons that reflect common or differing parts of the languages) are not illustrative enough, we provide a practical approach to illustrate these partial languages: we simulate traces up to different lengths. These traces represent either common or differing parts of the two Declare process models. By producing traces of different lengths, domain experts get an impression of the commonalities and differences of the Declare process models to be compared. Besides, the generated traces can afterward be analyzed by applying various measurements (cf. Sect. 4.6).

Simulation-based versus theory-based approach
The simulation-based approach and the theory-based approach lead to equal results from a qualitative perspective: both decide whether two process models are equal or not. Nevertheless, the calculation of the results differs considerably, and the intermediate results can be used for quite different considerations of the process models. The effort to reach results also differs substantially between the two approaches. The simulation-based approach is rather time- and cost-consuming. However, its results are very illustrative, since it delivers concrete process traces that are produced by one or both of the process models to be compared. In contrast, comparing two process models, i.e., comparing their corresponding automatons, is quite economical with the theory-based approach. Similarities and dissimilarities of the process models to be compared can be determined without extensive calculations. However, the results produced by this approach are rather abstract, since only automatons reflecting the similarities and dissimilarities are produced.
Based on the observations from above, we recommend the following procedure. First, apply the theory-based approach in order to obtain a general overview of the equality of the two process models. The main advantage of this step is the low effort it requires and the clear results concerning similarities and dissimilarities. Depending on further user interest, the simulation-based approach can be applied afterward. This adds concrete results, i.e., process traces, to the previously performed theory-based analysis and thus illustrates its abstract results. Nevertheless, this procedure is just a recommendation. Ultimately, users of our algorithm have to find their preferred usage, which heavily depends on whether they need more or less illustrative feedback and on how much computing time they can spend. Although we give no more than a recommendation on how to apply the simulation-based and the theory-based approach, our experience shows that both approaches complement each other and together provide promising insights into the similarity of Declare process models. As stated in the related work, there are no alternative approaches in the literature that deliver comparable results.

Measuring the similarity of declarative process models
As mentioned at the beginning of Sect. 4, we want to measure the similarity of two non-equal Declare process models. There are two general approaches: (i) considering the automatons as graphs and applying metrics from graph theory and (ii) comparing the automatons on the word level. Since the second strategy is still neglected in our research domain, we fill this gap by offering a number of measurements that are based on the length of traces (Sect. 4.6.1). In Sect. 4.6.2, we propose an additional measure based on the Damerau-Levenshtein distance, which does not focus on trace lengths but on the structure of the traces, i.e., it regards the traces as strings and computes their edit distances.

Density and similarity based on trace length
In automata theory, the length of words is not limited at all. However, in business process management we only consider traces of limited length, because the number of steps or activities executed for a process is of limited size. Hence, the following measures for comparing process models are based on trace length. We first note that for traces of length n over m activities, there are m^n possible traces. Now we can define the n-density of a process model:

Definition 11
For a Declare process model M over m activities, we call

λ_n(M) := |{σ of length n | σ satisfies M}| / m^n

the n-density of M.
As {σ of length n | σ satisfies M} is a subset of all traces of length n, λ_n(M) takes a value between 0 and 1. In other words, λ_n(M) describes the percentage of traces of length n that satisfy a process model M, compared to all potential traces. This measure yields an estimation of how many process traces (of a certain length n) are covered by a process model. The bigger this number, the more flexible the process model; vice versa, the smaller this number, the more restrictive the process model. The density measure thus puts the coverage of a process model into perspective.
For two Declare process models M_1, M_2 and n ∈ N, the corresponding n-densities λ_n(M_1) and λ_n(M_2) can be calculated by simulating all traces up to length n and checking whether they satisfy M_1, M_2 or neither of them. The elements of the respective sets are then counted, which determines the n-densities. These values can also be used to get a rough feeling for how far the models differ from each other: if the values differ extremely, e.g., λ_n(M_1) = 0.1 and λ_n(M_2) = 0.7, M_1 and M_2 cannot have many properties in common (they overlap in at most 10% of the traces and differ in at least 60%).
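Computing the n-density by brute-force simulation can be sketched as follows. The DFA encoding and the toy model (representing a single "at least one A" constraint over two activities) are illustrative assumptions of ours:

```python
from itertools import product as cartesian

def accepts(dfa, word):
    """Run a complete DFA (states, start, accepting, delta) on a word."""
    _, start, accepting, delta = dfa
    state = start
    for a in word:
        state = delta[(state, a)]
    return state in accepting

def n_density(dfa, activities, n):
    """λ_n(M): fraction of length-n traces over the activities accepted by M."""
    accepted = sum(1 for w in cartesian(activities, repeat=n) if accepts(dfa, w))
    return accepted / len(activities) ** n

# DFA for "at least one A" over the activities {A, B}.
at_least_one_a = ({0, 1}, 0, {1},
                  {(0, 'A'): 1, (0, 'B'): 0, (1, 'A'): 1, (1, 'B'): 1})
print(n_density(at_least_one_a, ['A', 'B'], 3))  # 7/8 = 0.875
```

Of the 2^3 = 8 traces of length 3, only BBB contains no A, so the density is 7/8; a smaller value would indicate a more restrictive model.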
Note that a similar n-density does not necessarily mean that the models are similar. Even in the case λ_n(M_1) = 0.5 = λ_n(M_2), the sets of traces covered by the two models could be completely disjoint. Figure 9 depicts all possible cases. The upper row shows the case where the coverages of the two process models sum to at most 100%. Here, the two process models can be completely disjoint, can overlap, or one model can fully encompass the other one. The lower row of Fig. 9 shows the case where the two process models together cover more than "100% of process traces", i.e., simply summing up the coverages of the two models yields a value greater than 100%. Then the two models must overlap or one must completely encompass the other one; they cannot be disjoint anymore.
We can extend the definition of the n-density to the min-max-density, which considers all traces with length between min and max. The explanatory power of this measure is similar to the n-density; however, it broadens the scope of observation to a range of trace lengths. The principal proposition of this measure is the same as for the n-density: it unveils the flexibility of a process model. Another measure, which compares two process models directly, is the n-similarity

sim_n(M_1, M_2) := min{ |L_n(M_1) ∩ L_n(M_2)| / |L_n(M_1)|, |L_n(M_1) ∩ L_n(M_2)| / |L_n(M_2)| },

where L_n(M_i) denotes the set of traces of length n that satisfy M_i. The main difference between similarity measures and density measures is that the latter compare the coverage of a process model to the whole space of potential process traces, whereas the former take into account the percentage of traces accepted by both models and compare it with the coverage of these models. Consider, for example, that M_1 accepts 100 traces of length n, M_2 accepts 200 traces of length n, and the set of traces of length n accepted by both models consists of 50 traces. Then sim_n(M_1, M_2) = min{50/100, 50/200} = min{0.5, 0.25} = 0.25. This means that 25% of the traces of the "larger" model (i.e., the model which accepts more traces) are accepted by both models, whereas 50% of the traces of the more restrictive process model are covered by both models.
The measure n-similarity provides insight into the overlap of the process models to be compared. The bigger this number, the more the two models overlap. A value of 0 means that the two models are disjoint; a value of 1 means that they are equal. All values between 0 and 1 describe the percentage of common process traces relative to the less restrictive process model.
Analogously to the min-max-density, we define the min-max-similarity, which describes the similarity of two process models with regard to a range of trace lengths; it simply broadens the scope of comparison to a range of process traces. Altogether, the two measures density and similarity provide an estimation of how flexible the process models are and how much they overlap. This information helps a process modeler to assess whether to apply one or the other process model. For example, when both process models have about the same density (coverage) and the similarity is pretty high, (s)he "arbitrarily" chooses one of the two process models to be employed, i.e., (s)he chooses the model that looks more familiar or clearer.
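The n-similarity can be sketched by enumerating the length-n traces accepted by each model and taking the minimum of the two coverage ratios. The DFA encoding and the two toy models ("at least one A" and "at least one B") are illustrative assumptions of ours:

```python
from itertools import product as cartesian

def accepts(dfa, word):
    """Run a complete DFA (states, start, accepting, delta) on a word."""
    _, start, accepting, delta = dfa
    state = start
    for a in word:
        state = delta[(state, a)]
    return state in accepting

def n_similarity(d1, d2, activities, n):
    """sim_n: shared length-n traces relative to the larger (less
    restrictive) model, i.e. the minimum of the two coverage ratios."""
    traces = list(cartesian(activities, repeat=n))
    l1 = {w for w in traces if accepts(d1, w)}
    l2 = {w for w in traces if accepts(d2, w)}
    if not l1 or not l2:
        return 0.0
    common = len(l1 & l2)
    return min(common / len(l1), common / len(l2))

# Toy models: "at least one A" vs. "at least one B" over {A, B}.
at_least_one_a = ({0, 1}, 0, {1}, {(0, 'A'): 1, (0, 'B'): 0, (1, 'A'): 1, (1, 'B'): 1})
at_least_one_b = ({0, 1}, 0, {1}, {(0, 'A'): 0, (0, 'B'): 1, (1, 'A'): 1, (1, 'B'): 1})
print(n_similarity(at_least_one_a, at_least_one_b, ['A', 'B'], 3))  # 6/7 ≈ 0.857
```

Each model accepts 7 of the 8 length-3 traces and they share 6 of them (all but AAA and BBB), so the similarity is min(6/7, 6/7) = 6/7; with the worked numbers from the text (100, 200 and 50 traces), the same function would yield 0.25.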

Similarity based on Damerau-Levenshtein distance
While in the last subsection we focused on trace lengths, we now analyze the structure of traces in order to discuss the similarity of process models. Comparing or determining the similarity of traces can be done by using metrics that calculate the edit distance between them, i.e., metrics that count the (minimal) number of operations such as insertions, deletions, substitutions and transpositions needed to transform one trace into another. In the context of process traces, however, it is important to take into account that some activities in the process may occur in parallel. Assume, for instance, that in a trace σ_1 = ⟨A, C, D, B⟩ the activities C and D were executed in parallel (i.e., potentially at the same time). Then, σ_1 could also be written as ⟨A, D, C, B⟩. This transposed trace does, in principle, represent "the same" execution. Hence, we use the Damerau-Levenshtein distance metric [34], since it does not penalize transpositions as harshly as other metrics based on edit distance. However, in general a transposition cannot be entirely free of penalization, since in many cases the order of execution is crucial. The Damerau-Levenshtein distance is thus a good compromise that takes parallel executions into account without neglecting violations of an execution order.
We use the inverse of the scaled measure (for scaling we use the maximum length of the two traces that are compared), so that a higher value implies a higher similarity of the traces, i.e., sim(σ_1, σ_2) := 1 − DL(σ_1, σ_2) / max{|σ_1|, |σ_2|}, where DL denotes the Damerau-Levenshtein distance.
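This similarity can be sketched as follows. We use the common optimal-string-alignment variant of the Damerau-Levenshtein distance (an assumption of ours; the paper does not fix a variant), which counts insertions, deletions, substitutions and adjacent transpositions:

```python
def dl_distance(s, t):
    """Optimal-string-alignment variant of the Damerau-Levenshtein distance."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

def dl_similarity(s, t):
    """Scaled inverse distance: 1 - DL(s, t) / max(|s|, |t|)."""
    if not s and not t:
        return 1.0
    return 1 - dl_distance(s, t) / max(len(s), len(t))

# The parallel execution from the text: swapping C and D costs one
# transposition, so the traces remain 75% similar.
print(dl_distance("ACDB", "ADCB"), dl_similarity("ACDB", "ADCB"))  # 1 0.75
```

A plain Levenshtein distance would charge two substitutions for the same swap, yielding a similarity of only 0.5, which illustrates why the transposition operation suits parallel activity executions.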
For determining the similarity between the process models based on a set of traces, we first generate, for each process model, all traces of a length within an a priori defined range [n, m]. In the following, we denote the set of such traces p_i, i.e., the traces with length ≥ n and ≤ m of a process model P, by S_P^{n,m} = {p_1, ..., p_{|S_P^{n,m}|}}. This set can be considered as a process event log.
Afterward, we pair each generated trace σ ∈ S_{M_1}^{n,m} of the process model M_1 with the most similar trace (with respect to the Damerau-Levenshtein distance) μ ∈ S_{M_2}^{n,m} of the process model M_2 [35]. Once the pairs are formed, we calculate the mean of the scaled inverse Damerau-Levenshtein distances over all pairs [34] and call the resulting value the n-m-process event-log-similarity of M_1 and M_2.
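The pairing and averaging step can be sketched as follows; the two toy event logs are illustrative assumptions of ours, chosen to make the asymmetry of the measure visible:

```python
def dl_distance(s, t):
    """Optimal-string-alignment variant of the Damerau-Levenshtein distance."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]

def dl_similarity(s, t):
    """Scaled inverse distance: 1 - DL(s, t) / max(|s|, |t|)."""
    if not s and not t:
        return 1.0
    return 1 - dl_distance(s, t) / max(len(s), len(t))

def event_log_similarity(log1, log2):
    """Pair each trace of log1 with its most similar trace of log2 and
    average the similarities (mean best-match similarity)."""
    return sum(max(dl_similarity(s, t) for t in log2) for s in log1) / len(log1)

log1 = ["AA", "AAA"]          # illustrative event log of M1
log2 = ["BB", "AB", "AAA"]    # illustrative event log of M2
print(event_log_similarity(log1, log2))  # (2/3 + 1) / 2 ≈ 0.833
print(event_log_similarity(log2, log1))  # (0 + 0.5 + 1) / 3 = 0.5
```

Every trace of log1 has a close partner in log2, but not vice versa ("BB" matches nothing in log1), so the two directions yield different values; this is exactly the asymmetry the next paragraph points out.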
That means that we measure the similarity of all traces of a particular length range between two process models. It is important to mention that the min-max-event-log-similarity is in general not symmetric, i.e., the similarity from M_1 towards M_2 may differ from the similarity from M_2 towards M_1. The Damerau-Levenshtein distance directly exhibits, on the level of text strings, i.e., traces, how similar two process models are. As with the various density measures introduced in Sect. 4.6.1, the Damerau-Levenshtein distance is more an indicator than a concrete marker for a statement about similarity. It provides a first impression of how similar two process models are by comparing traces of these models. Similarity on this level cannot automatically lead to statements about similarity on a conceptual and logical level, since small differences on the trace level can lead to great differences on a logical level, and vice versa. A domain expert therefore has to assess whether these similarities show the same tendency or are just accidentally similar. We recommend applying this measure as an indicator for potential similarities of process models.

Example
We now apply the previously defined measures to the process models Q_1 and Q_2 of the running example (Sect. 2.4). All values are depicted in Table 2.
For n = 0, there is only the empty trace, which is accepted by neither Q_1 nor Q_2. That is why the densities of both process models are 0 for n = 0. Nevertheless, we count the empty trace as one potential trace; thus, there is a 1 in the denominator of the fraction in the upper four rows of the column for n = 0. As Q_2 requires at least two executions of B, it does not accept any trace of length 1 and accepts only one trace of length 2, namely ⟨B, B⟩. This restriction causes the density of Q_1 to be larger than the density of Q_2 for n = 1, 2. For n = 3, Q_1 and Q_2 accept the same number of traces and hence have the same density λ_n. We observe that for n ≥ 4 the values of the density measures (λ_n, λ_0^n) for Q_1 are lower than for Q_2. This can be interpreted as a significant difference between the process models: process model Q_2 describes a more flexible process, since it offers more process execution variants. The similarity measures sim_n(Q_1, Q_2) and sim_0^n(Q_1, Q_2) are 0 for all n ∈ N, which implies that Q_1 and Q_2 do not accept common traces. This is caused by the fact that the existence(A, 1) and exclusiveChoice(A, B) constraints of Q_1 prohibit the execution of activity B, whereas the existence(B, 2) constraint of Q_2 implies an at least double execution of activity B. Hence, activity B may not occur in an execution of Q_1, whereas every execution of Q_2 requires the execution of B. Taking the observations regarding densities and similarities together, we can conclude that we are in the upper left case of Fig. 9, i.e., the two processes are disjoint with respect to execution traces and differ broadly with respect to their flexibility. Table 2 also reveals that the measure 0-n-process event-log-similarity is indeed asymmetric. The graphical curves depicted in Fig. 10 confirm this observation.
We can also observe that with increasing trace length the similarity values of the 0-n-process event-log-similarity increase and that for all n ≥ 3, sim_0^n(Q_1, Q_2) > sim_0^n(Q_2, Q_1) holds. This is again caused by the two constraints existence(A, 1) and exclusiveChoice(A, B) of Q_1, which prohibit the execution of activity B. From each accepted trace t_1 of Q_1, an accepted trace t_2 of Q_2 can be derived by replacing any two symbols of t_1 by Bs (as two Bs are mandatory for Q_2), i.e., two edit operations suffice. The other way around, for transforming a trace t_2 of Q_2 into a trace t_1 of Q_1, all Bs have to be exchanged by As. As t_2 includes at least two Bs, at least two operations are needed; as some traces of Q_2 include more than two Bs, more than two operations are needed, which implies a higher Damerau-Levenshtein distance and hence a lower 0-n-process event-log-similarity.

Implementation and evaluation
In this section, we give an introduction to our implementation and evaluate our approach from two angles: First, we determine the time complexity of our approach (cf. Sect. 5.1).
Analyzing the asymptotic behavior has the advantage that the results are independent of the deployed hardware and more general than calculating particular example processes. Afterward, we conduct a small comparative study of declarative mining approaches on real-life event logs to demonstrate the applicability of our approach in a practical scenario (cf. Sect. 5.3).

Implementation
As a proof of concept, the algorithm visualized in Fig. 4 has been implemented in Java and can be used through a command-line interface. All steps are arranged in a configurable and automated pipeline. Besides the sources and a pre-compiled runnable JAR file, some sample models are publicly available.4 As mandatory input, the user has to provide two models as text files with the following structure: the activities are encoded as a comma-separated list of characters (the alphabet) at the beginning of each text file, followed by an arbitrary number of Declare constraints in the commonly used textual representation: <constraintTemplateName>(<activity1>[,<activity2>]). All activities need to be part of the alphabet.
In order to run the application, providing the model files is sufficient. However, it is also possible to output the automatons.

Since the last terms are the dominating ones, the time complexity of the simulation-based algorithm is exponential. However, if we are only interested in the minimal upper bound, the time complexity is R.

Theory-based: For two finite state automatons, the theory-based algorithm has time complexity R + O(m · n), where m and n are the numbers of states of the minimal product automatons of M_1 and M_2. The first term again describes the construction of the minimal product automatons, and the second term describes the time complexity of the symmetric difference construction. We observe that the theory-based algorithm is much faster than the simulation-based algorithm, as the dominating term m · n is quadratic and not exponential.

Analyzing differences: If the two process models are not equal and we are interested in analyzing the differences (cf. Sect. 4.6), it is necessary to calculate all traces up to a desired length l. As mentioned above, this task requires an exponential time complexity of γ(l, M_1) + γ(l, M_2).

Practical application
For evaluating how our approach performs on real-life data, we conduct a small comparative study of declarative mining algorithms. We apply two Declare miners (the UnconstrainedMiner [28] and the DeclareMapsMiner [37]) to extract declarative process models from real-life event logs. Afterward, we use our approach (and metrics) to compare the mined models. We performed our study on 3 real-life event logs from different domains with diverse characteristics (cf. Table 3), extracted from the 4TU Center for Research Data.5 We configured both Declare miners such that all Declare templates supported by our approach (cf. Table 1) were mined. Additionally, we set the threshold for confidence and support that a constraint must satisfy to ≥ 0.9. Setting the support lower than 1.0 is necessary, since all real-life event logs contain noise. On the other hand, a smaller threshold leads to a significant increase in the number of constraints (several thousand constraints), which resulted, for all used event logs, in a broken model, i.e., a model so restrictive that it prevents any process execution. We could directly observe that the mined process models differ with regard to the number of constraints (cf. Table 7). In all cases, the UnconstrainedMiner detects a larger number of constraints; hence, the UnconstrainedMiner bears a higher risk of a broken model. In all cases, the models were different and neither was included in the other. We therefore calculated the metrics proposed in the previous section to quantify the difference between the models. The measurements are listed in Tables 4, 5 and 6. Since the metrics require a simulation of all traces up to a given length, their calculation faces the same problem as any simulation-based approach: the scalability wall prevents a (fast) simulation of long traces. Hence, we set an upper bound of 5 for calculating the simulation-based metrics.
We argue that this limit is already sufficient, since longer traces would be very cumbersome for a manual investigation by a domain expert, and even these short traces allow us to derive a tendency. Note that this scalability problem only concerns the calculation of the simulation-based metrics, while the equality check based on automatons is not affected. Hence, we can conclude that our approach enables the comparison of large declarative models as they occur in real data. The analysis of the output traces up to length 5 reveals that the models are not broken but very restrictive, due to the small number of allowed process executions. Nevertheless, even these small numbers provide useful insights for a domain expert. In all cases, we could observe that, although the process models are neither identical nor one a subset of the other, the intersection of allowed traces between the models was not empty. Hence, the mined models possess similarities and in part allow the same behavior. We could also derive from the measures n-density and 0-n-density that the models of the DeclareMapsMiner are more restrictive. Finally, it could also be determined that the UnconstrainedMiner allows the empty trace in all cases, while the DeclareMapsMiner prevents this behavior.

Conclusion and future work
In this paper, we presented two different approaches for comparing two Declare process models for equality, using finite state automaton constructions and minimization. The first approach, the simulation-based approach, makes use of an upper bound, for which we provided a proof. The corresponding algorithm shows an exponential time complexity, whereas the second approach, the theory-based algorithm, performs quadratically and hence surpasses the simulation-based approach. On the other hand, the simulation-based approach is needed in order to make statements about common properties and differences of models that are not completely identical.
In future work, our approach will be extended to other process modeling languages, especially declarative multi-perspective languages like MP-Declare [4]. Whereas Declare mainly considers the control-flow perspective [10], MP-Declare can also deal with human and technical resources such as performing actors or used artifacts (e.g., computer programs, tools). Furthermore, we aim to make our approach applicable to so-called imperative process modeling languages like the Business Process Model and Notation (BPMN) [38] in order to construct a tool that can compare a plethora of different process modeling languages. Finally, the approach will be integrated into a user-friendly graphical interface.
Another important point of our future work will be to elaborate and discuss the applicability of our approach to other domains, e.g., organizational models or sequence diagrams. As organizational models are "static" in a sense, there might be no direct application of our approach, whereas investigating sequence diagrams might lead to promising results, as there are already efforts to transform them into finite state automatons [39].
Funding Open Access funding enabled and organized by Projekt DEAL.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecomm ons.org/licenses/by/4.0/.