“The practicality of any security policy depends on whether that policy is enforceable and at what cost [182].” What Schneider stated in his seminal work on enforceable security policies also applies to the conflict between security and business goals in business processes. In classic IT security, enforcement mechanisms work by monitoring execution steps of some system, often called the target, and terminating the target’s execution if it is about to violate the security policy being enforced. In order to operate a Process-Aware Information System (PAIS) according to given policies, mechanisms that implement the policies and control their enforcement are equally necessary. However, if such mechanisms are used in a PAIS, they can work against the achievement of business goals, as enforcement may “cost” the completion of the process execution, such that the process does not generate the expected value for the company. Technically, it is this enforceability that enables the process execution to become obstructed. Hence, an obstruction arises when enforceable policies conflict with the goal of process completion, or, put differently, enforceability implies obstructability. The first chapter identified that detecting, preventing and handling the obstructability resulting from the implementation of an automated regulation is an important problem. It showed that a solution must provide an indicator-based view on security that enables obstructed executions to be completed. In this respect, the concepts of governance, risk and compliance allow for greater room for maneuver than classic IT security and should be considered when solving obstructions. The actual costs of enforcement may therefore take into account not only the financial loss due to a process stop but also the risks if the process is nevertheless completed, for example if tasks are delegated, executed by unauthorized users, or if SoD rules are violated. Based on such considerations, a rational decision has to be made whether and how to continue the process or to stop, or, in relation to Schneider, how to balance enforcement and its costs.

This chapter aims to relate this conflict to the existing state of the art. It will therefore first introduce concepts related to security properties and their enforcement to better grasp the problem of security-related obstructability in PAISs. Based on this and the BPM lifecycle, a structure will be derived along which the analysis of related work will then be conducted.

Figure 2.1: Obstructive sequence of activities

To begin with, it can be observed that the identified conflict between security and business goals occurs on the level of a single process execution that eventually becomes obstructed or runs through to completion. Figure 2.1 highlights an example activity sequence of such a single execution, based on the executions encoded in the representation of the behavioral scope of a process in a PAIS. Based on the behavioral areas of a process introduced in Chapter 1, the sequence encompasses activities in the scope of secure behavior (the first three activities) and in the scope of compliant behavior (the last three activities). Moreover, there is an activity (the fourth activity) that escapes the obstructed state (indicated by a barricade sign). Hence, the representation of the obstructive activity execution provided in Figure 2.1 embodies secure behavior, which would actually not allow the execution of the process to be completed securely and would therefore cause the PAIS to block further execution. However, it also contains the complete successful execution that is enabled by escaping the obstruction in a still compliant way. In relation to the BPM lifecycle, such an activity sequence can be determined on the basis of the process specification as a task execution sequence. It occurs during runtime in a case, or can also be found in logs as a so-called trace.

The conflict between security and continuity in operation is not new, but it now manifests itself no longer at the infrastructure or application level but at the business level, which involves the business processes, regulatory rules implemented in PAISs and the process stakeholders. The basic security concept that can be applied to these architectural layers was first introduced in the context of language-based security (LBS [183]) through so-called security property classes focusing on the application layer. Because a process can also be seen as a small application—in fact, a workflow can be understood as a business process representation on the application layer—this view is adaptable to the business layer as well, which manifests itself in the common notion of an execution sequence or, equivalently, a trace for both layers. In order to be able to reason on whether system states satisfy specific security requirements, the notion of “properties” is used. The focus on these properties is meant to help reason about their impact on the process execution and the desired outcomes for the detection and handling of obstructions throughout this chapter.

The notion of trace properties can be used to specify basic types of security classes. Because it will be sufficient for the subsequent analysis of the state of the art to informally show the general concepts and intuitions, the reader is referred to the seminal works of Lamport, Alpern and Schneider in this field [13, 132, 182] for a more formal definition of properties and the related property classes. Informally, a property itself is encoded by a set of execution sequences. Such a trace represents an execution of an abstracted system as a sequence of events. The observation of these execution sequences allows reasoning about their properties, such that traces can be identified for which a specific property holds. In other words, the set of sequences for which this property holds forms a subset of the set of all sequences. As introduced in Chapter 1, there are two basic property classes, namely safety and liveness. A safety property stipulates that no “bad thing”, i.e., the violation of a property, happens during the execution [13]. It describes states that should never be reached. This means, if a property can be violated by a finite prefix of an execution sequence in such a way that no appended continuation can make it hold again, it is a safety property. There is a finite prefix in the execution sequence at which the violation can be detected (discreteness)Footnote 1. Typical safety properties are confidentiality or integrity (keeping a private key secret or mutual exclusion). If a violation of the property happens in an execution sequence, this execution can then not be remedied for this property or, in other words, there is no chance to fix it. Hence, for safety considerations, partial sequences are sufficient and, if a safety property is violated, this happens within a finite interval of time. For example, suppose the safety property that the two activities "compute market value" and "control market value" shall be executed by different people. If there is a sequence in which both activities are performed by the same person, this sequence violates the safety property, and no continuation can repair it. On the other hand, a liveness property, as introduced by Alpern et al. [13], stipulates that a “good thing”, i.e., the fulfillment of a property, eventually happens during the execution. Typical liveness properties relate to availability, for example, guaranteed service, starvation freedom (i.e., a process makes continuous progress) or process termination. This means, if every finite execution sequence can be extended by a certain appended sequence such that the property holds, it is a liveness property. A liveness property cannot stipulate that a “good thing” always happens, only that it eventually happensFootnote 2. Unlike safety, liveness does not require the “good thing” to be discrete. It refers to states that must be reached at some time in the future, and it means that something good will happen sometime. In other words, it is always possible that the “good thing” will still take place and it is not sufficient to consider partial sequences to assess liveness. Given, for example, a partial execution of the collateral evaluation process where a user has not (yet) executed the activity “approved acquisition”, it is always possible to extend the partial sequence in a way that a user executes the activity, by appending a sequence with this activity to the existing sequence.
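The prefix-based intuition for safety can be made concrete with a minimal sketch (illustrative Python; the event shape as task-user pairs and the names are assumptions taken from the example above): a violation of the SoD property is already detectable on a finite prefix of the trace.

```python
# Minimal sketch: detecting a violation of the SoD safety property
# ("compute market value" and "control market value" by different people)
# on a finite trace prefix. Event shape and names are illustrative assumptions.

def violates_sod(prefix, task_a="compute market value",
                 task_b="control market value"):
    """True as soon as a finite prefix shows both tasks done by the same user."""
    performers_a = {user for task, user in prefix if task == task_a}
    performers_b = {user for task, user in prefix if task == task_b}
    return bool(performers_a & performers_b)   # same person did both: "bad thing"

trace = [("compute market value", "alice"), ("control market value", "alice")]
print(violates_sod(trace))      # True: the violation is discrete and irremediable
print(violates_sod(trace[:1]))  # False: this prefix is still safe
```

Liveness, in contrast, cannot be falsified this way: no such predicate over a finite prefix can rule out that the “good thing” still happens in some extension.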

In relation to the identified activity sequence in Figure 2.1, Figure 2.2 sketches a liveness property, in particular the reaching of the end activity, and a safety property, in particular an existing SoD constraint between two events (or activity executions, respectively). It depicts that, for a partial trace (here, the first three events), the safety property holds. In contrast, the liveness property holds for the whole execution sequence. In this way, it is now possible to capture secure and compliant behavior with the respective property classes.

Figure 2.2: Intuitive example of property classification for an obstructive sequence

The differentiation between safety and liveness has further implications. Whether a property can be enforced depends on the property class to which it belongs: enforcement prevents the violation of the property by stopping the execution of the target system to which it applies [14]. As it is this enforcement and its implications that cause the occurrence of obstructions, a closer look at the respective enforcement mechanisms follows. This step will enable better insight into the causes, points in time and forms in which obstructions can manifest themselves.

Schneider showed that only safety properties are enforceable whereas liveness properties are not [182]. Mechanisms to enforce safety properties are classically divided into static analysis, execution monitoring and program rewriting [103]. Schneider describes static analysis as an enforcement mechanism that strictly operates prior to running the untrusted program. The idea of static analysis is that, after the analysis, accepted programs are permitted to run unhindered while rejected programs are not allowed to run at all. Reference monitors [15, 210] and other enforcement mechanisms that operate alongside an untrusted program are termed execution monitors [182]. Program rewriting refers to any enforcement mechanism that, in a finite interval of time, modifies an untrusted program prior to execution. This chronological differentiation applies equally well to the business layer. Instead of programs, the objects under investigation are processes. Analogously, depending on the time of enforcement, the policies that can actually be enforced differ. For instance, a separation of duties constraint, which depends on the actual execution history of a process instance (also known as “dynamic SoD”), can only be enforced at runtime. If the set of executions for a security policy is not a safety property, an enforcement mechanism for the class of properties an execution monitor can enforce does not exist. On the other hand, the converse—that all safety properties have enforcement mechanisms for execution monitoring—does not hold (for example, for a Maximum Waiting Time (MWT) over a time interval) [182]. In this respect, Schneider already identified that his conditions for enforceability are necessary but not sufficient. To address this, based on similar monitors that observe system actions and terminate systems to prevent policy violations, Basin et al. [26] distinguish between actions that are only observable and those that are also controllable. Their enforcement mechanism cannot terminate the target system when considering an only observable action. In contrast, it can prevent the execution of a controllable action by terminating the system [26]. However, because the only-observable cases rather encompass policies related to time, for example meeting deadlines, or administrative changes, such only-observable policies will not be in the focus of this thesis. In fact, it is the cases of controllable policies that may obstruct executions. Therefore, it is sufficient to stay with the basic notion of enforceability given by Schneider. The terms of Basin et al., however, are occasionally used for clarification. In this respect, the enforcement of any enforceable safety property can lead to an obstruction, because, in order to avoid a violation, a reference monitor per se is always able to block an execution. In this thesis, this obstructability is considered for those enforceable safety properties that are relevant with regard to the security requirements for business processes identified in Chapter 1.
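How such a monitor obstructs an execution can be sketched as follows (a minimal, illustrative Python skeleton, not a definitive implementation; the policy interface, the action shape and the dynamic SoD example are assumptions):

```python
# A minimal sketch of an execution monitor in the spirit of Schneider's
# security automata: it observes each step of the target and stops the run
# before a controllable action would violate the (safety) policy. The policy
# interface, action shape, and the dynamic SoD example are assumptions.

class ObstructedError(Exception):
    """Raised when the monitor blocks execution to prevent a violation."""

def monitored_run(actions, policy_allows):
    history = []
    for action in actions:
        if not policy_allows(history, action):
            # Blocking here is exactly what makes the workflow "obstructed".
            raise ObstructedError(f"blocked before executing {action!r}")
        history.append(action)  # action is permitted; let it happen
    return history              # run completed without a violation

# Illustrative dynamic SoD policy: no user may perform two different tasks.
def sod_policy(history, action):
    task, user = action
    return not any(u == user and t != task for t, u in history)

try:
    monitored_run([("t1", "bob"), ("t2", "bob")], sod_policy)
except ObstructedError as err:
    print(err)  # the monitor stops the run instead of letting the "bad thing" happen
```

The essential point is that the monitor decides on the basis of the observed history prefix alone, which is exactly why enforceable safety properties, and only those, can obstruct an execution.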

From a purely chronological point of view, security properties can thus only be enforced by means of a preceding preventive analysis before the execution, or by monitoring during the execution. After the execution, the result must be accepted as it is; properties can no longer be actively enforced, nor can the status quo be changed. However, narrowing the focus only on the phases relevant to enforcement would neglect a significant portion of the possibilities offered by process logs. A log allows checking for safety and liveness properties, and this for the actually “lived”, thus practically relevant, processes. More specifically, the log is also the only way to check whether processes have really been completed in practice, in other words, whether the possibility of completion, the desired liveness property, was actually realized in the process being performed. Hence, for a holistic detection and handling of obstructions, all three phases of the process execution are important. Then, depending on the regarded process entity, there are different possibilities to treat or avoid potential damage resulting from obstructions: Before the execution, the design of involved processes can be addressed. During the execution, runtime monitors are in control and obstructive situations can be detected and handled. After the execution, damage may already have occurred but can be determined, for example, by auditing, such that its effects can be mitigated subsequently. Further, the log can also be used to learn for the handling of obstructions based on the history of the process.

Figure 2.3: Properties encoding the cause of obstruction (safety) and the desired outcome (liveness)

In conclusion, before, during and after the process execution, different aspects of safety and liveness manifest themselves. Figure 2.3 abstracts from specific obstructive activity sequences and depicts how these complementary property classes set the frame to analyze and handle obstructions. In a sense, these classes capture the cause and the cure for obstructions at the same time. On the one hand, enforceable safety properties cause obstructions. Their consideration allows reasoning about the detection and avoidance of obstructions and how policies may be improved. On the other hand, liveness encodes the property of process completion, which describes the aim to escape and resolve an obstruction. Its consideration allows finding out when obstructions occur, that is, when the liveness property is not fulfilled (falsification), or when there are no obstructions, which can give hints on how obstructions can be fixed.

Figure 2.4: Terminology of processes, event logs, and models [40]

Based on Figure 2.4, Table 2.1 relates the identified kinds of enforcement and security perspectives to the BPM lifecycle and its process entities. Preventive and detective mechanisms [33] work on the process model or the log, respectively. Execution monitoring observes the process instance at runtime. Based on this structure, the subsequent systematic analysis of the state of the art determines deficits and development potentials for the detection and treatment of obstructions. These are then reformulated as requirements that a solution should consider with regard to the goal of this thesis: to escape an obstructive situation that was caused by a safety property and to find a solution that restores liveness, that is, allows for process completion. Throughout, the common terminology and policies, problem cases and developments in the related fields will be captured, such that they can be applied and adapted, or can inspire solutions to obstructability. The requirements for the three approaches of this thesis will be derived from the three individual areas arranged according to design, execution and audit time. Firstly, the analysis of the state of the art for preventive analysis will determine the requirements for a specification of a model that covers all relevant process aspects and can be of added value in practice. Based on this, the requirements from execution monitoring will then be identified for the approach to resolve obstructions. Finally, with the analysis of log-based approaches and their potential regarding obstructability, the basis is laid to beneficially involve logs to identify and resolve obstructions.

Table 2.1: Process terminology from Figure 2.4 [40] related to BPM lifecycle phases and enforcement mechanisms

2.1 Preventive Process Analysis: Obstructability by Design

The preventive analysis takes place prior to the actual process execution and is related to the “Design” and “Analysis” phases of the BPM lifecycle. The idea here is to analyze the effects of policy enforcement prior to execution to find out if the process is obstructable. To this end, this subchapter will first consider the process specification, which embodies the different process aspects. Based on the process model and the process policy, it will be shown that obstructability analysis foremost targets possible conflicts between the organizational and the functional and behavioral aspects of the process, namely whether there is a user-task assignment that is able to obstruct the execution of the control flow. This question relates to other questions that arise with the enforcement of policies in security-aware workflows, more specifically, questions regarding their satisfiability and resilience. The so-called workflow satisfiability problem (WSP) asks whether there is a mapping of users to tasks in a workflow so that each task can be executed and the policy can be followed. Moreover, obstructive situations may also arise for reasons beyond the policy, for instance, in exceptional situations due to illness of staff or further unexpected unavailabilities. The notion of resilience (or resiliency) is used to describe such scenarios, asking how many users can be absent (or are likely to be absent) such that the policy can still be followed and each task is still executable. To some extent, WSP and resilience analysis imply obstructability analyses. For example, an unsatisfiable workflow is obstructable as well. However, a satisfiable workflow can still contain obstructions. The particular differences of these notions will be elaborated in more detail in the subsequent sections. Due to this interrelation, and to allow for comparability, this subchapter will mainly elaborate on satisfiability and resilience problems, which will sometimes implicitly relate to obstructability analysis as well. However, it will also reveal that, so far, explicitly analyzing obstructability has only marginally been considered in related work. The sections on satisfiability and resilience will therefore highlight important findings and deficits from related research and extract criteria that will be key for the preventive analysis of obstructability.

2.1.1 Process Specification

The design of the process manifests itself in its overall specification, because a PAIS is steered by the process specification. The specification forms the basic reference for preventively analyzing policy enforcement in a PAIS. However, strictly speaking, because the term “process specification” is often intended to describe only the control flow, it does not represent the whole process. Rather, there are different specifications that encode different parts of the different process aspects, which were briefly introduced in Chapter 1.

The process model is typically used to capture the functional and behavioral aspects of the process. In particular, it specifies the control flow to capture the behavior and includes the functional components of the process, for example in the Business Process Model and Notation (BPMN) language (which is used for the example in Figure 1.3). This overall composition encodes the business goal, which is a central part of the functional aspect.

For the other aspects, in particular for the organizational and informational aspect, further specifications are typically used. They describe policies concerning who is involved in the process and how data may be accessed or how it may flow, for example in the form of security models such as ACL, RBAC, Bell-La-Padula or Chinese Wall. These policies will be subsumed under the overall policy.

2.1.1.1 The Process Model

As outlined in the right-hand part of Figure 2.4, a process model consists of tasks, where each task represents an activity of the process and its respective execution dependencies; that is, an activity literally is a task that is actively executed by a user. Activities themselves are conducted sequentially or concurrently. The execution of activities can depend on decisions, such that there are branching situations that cause the process execution to follow different paths. The decision on the direction to take can be based on process-related or contextual properties, such that process activities can be linked to conditions and obligations in different ways. Further, parts of a process may be conducted multiple times. A process model can also be instantiated. This means that an execution sequence of a process model consists of task executions, which enables a design-time representation of a case of the process.

A process model primarily stands for the business process and the business goal to be achieved. Security requirements of the functional and behavioral aspects mainly focus on the relationship between the activities. For example, they require that for the evaluation of a collateral, a market value computation must always be carried out and that in any case a check of this computation must only happen afterwards. Many such requirements can be covered by the explicit definition of the control flow in the form of process models. They can be checked with so-called patterns to facilitate re-use [169]. Patterns also build the bridge to other security policies, or in general safety or liveness properties, for instance expressed by means of Linear Temporal Logic (LTL). Such patterns can check if a given control flow supports a required behavior. For secure process execution, it is important to strictly comply with the requirements on the control flow layer.
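For illustration, the two requirements just stated can be phrased as LTL patterns (a sketch, assuming atomic propositions \(\textit{compute}\) and \(\textit{control}\) for the two activities): the liveness pattern \(\Diamond\, \textit{compute}\) requires that a market value computation is eventually carried out, while the precedence pattern \(\neg\, \textit{control}\ \mathcal{W}\ \textit{compute}\) (with the weak until \(\mathcal{W}\)) forbids any check before the computation, without itself forcing the computation to happen.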

Figure 2.5: Highlighted sub-process to determine the market value of a collateral

The control flow requirements can be specified in various ways. Rule-based modeling follows an Event-Condition-Action approach (ECA) to define requirements along business rules. However, business processes are usually defined with the help of graphical models such as UML [116, 117], Event-driven Process Chains (EPCs), BPMN [118] (as in Figure 2.5) or Petri nets. These process instructions explicitly define valid process activities and the order in which they must be executed. Within PAISs, such models can be used to automatically assign users to activities and monitor execution. In BPMN, the de facto standard in process modeling, tasks are represented by rectangles. Events are visualized by circles, for example those that start or end the process, as shown in Figure 2.5. Execution dependencies are modeled by control flow arcs and diamond-shaped nodes, which are called gateways. The gateway semantics determines the exact process behavior. For example, gateways determine whether or not the incoming arcs are synchronized (AND or XOR gateway, with a plus or cross symbol, respectively). Further, for outgoing arcs, they determine whether these are activated concurrently or mutually exclusively (AND or XOR gateway) [40]. For further illustration, the CEW example is simplified and only considers the subprocess of determining the market value of a collateral, namely its computation and control. This sub-process will be called the “Determine Market Value” (DMV) process, depicted on the left in Figure 2.6b. A subject computes the market value (t1) and afterwards, the computation has to be controlled (t2).
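The described gateway semantics can be summarized in a minimal sketch (illustrative Python, not engine code; the branch names and the decision function are assumptions):

```python
# A minimal sketch (illustrative, not engine code) of the gateway semantics
# described above: an AND-split activates all outgoing arcs concurrently,
# an XOR-split activates exactly one, chosen by a decision function.
# Branch names and the decision function are assumptions.

def activated_successors(gateway_type, outgoing, choose_one=None):
    if gateway_type == "AND":
        return list(outgoing)          # all branches are activated concurrently
    if gateway_type == "XOR":
        return [choose_one(outgoing)]  # exactly one branch, mutually exclusive
    raise ValueError(f"unsupported gateway type: {gateway_type}")

# Example: an XOR decision after the market value computation.
print(activated_successors("XOR", ["control ok", "recompute"], lambda arcs: arcs[0]))
print(activated_successors("AND", ["notify", "archive"]))
```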

2.1.1.2 The Enforceable Policy

A security policy is used to specify, either directly or indirectly, which interactions are authorized in the process. The first chapter introduced the policy classes that are typically used to specify security requirements in business processes. These four policy classes are authorization, separation of duties, usage control and isolation, and they can be incorporated into the specification in different ways. As mentioned at the beginning of this chapter, only enforceable safety properties can cause a reference monitor to block the execution of a workflow and thus cause an obstruction. However, not every security policy is a property in the sense described at the beginning of this chapter. In fact, some security policies cannot be defined as criteria that individual executions must each satisfy in isolation. If the set of executions for a security policy is not a controllable safety property, then an enforcement mechanism for execution monitoring does not exist. Therefore, the different policy types need to be examined for their enforceability in the sense of the introduced trace properties.

In general, “every property which is formalizable as a set of traces can be written as the intersection (or conjunction) of a safety and a liveness property” [148]. This means a sequence or trace can fulfill liveness and safety properties and encode different policies. At the same time, execution monitoring mechanisms for enforceable safety properties compose in a natural way as well. In the case where several such mechanisms are used simultaneously, the policy enforced by the aggregate is the combination of the policies enforced by each mechanism in isolation. This is of interest because it enables complex policies to be decomposed into conjunctions, with a separate mechanism used to enforce each of the component policies [182]. Hence, because policy composition does not affect enforceability, it is possible to examine each of the different policy classes separately to identify the ones that are relevant for obstructability. The policy classes will therefore be elaborated in greater detail to identify to what extent they are enforceable. Moreover, examples of typical specifications are given, which often represent models well-established in practice. In combination with the policy classes, this section will examine the related process aspects in greater detail in order to comprehensibly identify relevant policies. To the best of our knowledge, this is the first systematic attempt to comprehensively analyze the enforceability of all given classes of business process security requirements in terms of trace properties.

Authorization:

Access control regulates the authorization of individual subjects, namely users or IT systems, within processes, as well as the granting of access to corresponding resources or data. It centers on the organizational process aspect. As introduced in Chapter 1, authorization means enforcing access control to ensure that only authorized individuals are allowed to execute activities within a process. Because this allows authorized as well as unauthorized behavior to be defined as trace properties, authorization represents an enforceable safety property [182]. Authorization forms the core of access control along with authentication. The latter is usually part of the supporting IT infrastructure. The information system that guides the process execution usually gets the information about authorized users from the implemented access control model. For this, Access Control Lists (ACL) or Role-Based Access Control (RBAC) are used. Corresponding roles are created for different tasks or functions, which can also be structured in hierarchies and allow for the delegation of rights. Hence, PAISs can use authorization concepts to enforce user-activity assignments (e.g., an SAP system with authorizations or similar), i.e., the mapping of process tasks to stakeholders with respective permissions and enough capacity. The informational aspect complements authorizations that purely assign process participants to activities with requirements related to properties of data elements, for instance how they can be used or produced. In particular, through the consideration of data elements, such as databases, documents or variables, certain subjects can be involved in the process or excluded, for example, on the basis of a loan amount. Such data-oriented conditions are, however, usually encoded or annotated in the control flow specification as refinements of branching conditions. The focus here, in contrast, is on policies that are not specified in the control flow but in a separate policy, such that situations arise in which the policy enforcement monitor can block the control flow execution.
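Such an RBAC-based user-activity assignment can be sketched minimally as follows (illustrative Python; role names, permissions and users are assumptions, not from a particular standard implementation):

```python
# A minimal sketch of an RBAC-style authorization check as a PAIS might
# perform it; role names, permissions and users are illustrative assumptions.

role_assignment = {"alice": {"controller"}, "bob": {"clerk"}}
role_permissions = {"clerk": {"t1"}, "controller": {"t1", "t2"}}

def authorized(user, task):
    """A user may execute a task iff one of the user's roles permits it."""
    return any(task in role_permissions.get(role, set())
               for role in role_assignment.get(user, set()))

print(authorized("bob", "t2"))    # False: Bob's role does not permit the control task
print(authorized("alice", "t2"))  # True
```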

Separation of Duties:

A further concept that embodies the organizational aspect is the separation, or, its dual, the binding of duties to avoid conflicts of interest or reduce the risk of fraud (cf. Chapter 1). It generally states that some activities in the processes cannot (SoD) or must (BoD) be executed by the same subject or by the same role. The specification of the authorization already allows for a structural separation or binding of duties by using appropriate access control concepts. For example, separations are taken into account when designing the organizational structure of a company. Suitable role and authorization concepts are defined to ensure that processes meet the requirements of functional separation, for instance, that different departments are involved in a process. However, role hierarchies can be very complex, with the result that individuals can avoid the intended separation by acting in different roles. Thus, the separation or binding of duties needs to be implemented on an individual level as well, for example with the Four-Eye Principle, which requires that two people are always involved in a process. Such policies represent constraints that act on top of authorization. Here, based on the execution of specific activities by a user, or in other words, the execution history of a process case, the user is forced (binding) or not allowed (separation) to execute other process activities. This is how the assignment of actually authorized users is further constrained. Because such an assignment can also be encoded as a trace property, allowing or denying a respective access can be enforced during execution. Hence, similarly to authorization, the separation of duties and the binding of duties encode enforceable safety properties. Regarding the specification of authorization and SoD policies, the NIST RBAC standard provides three levels of RBAC: basic RBAC; hierarchical RBAC, which also supports inheritance between roles; and RBAC Level 2, also called “constrained” RBAC, which supports the definition of separation of duties as well. Further, using formal descriptions of security properties, model checking techniques may also be applied, such that BoD and SoD can be verified by specifying appropriate LTL formulas as patterns as well. These specifications can be used with the help of reference monitors that are able to enforce both authorization and SoD-related policies.
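The difference between role-level and individual-level separation can be sketched as follows (a minimal, illustrative Python sketch; the names and the execution history are assumptions):

```python
# A minimal sketch contrasting role-level and user-level SoD checks
# (constrained-RBAC style); names and the execution history are assumptions.
# Role-level separation alone can be bypassed by one person acting in two
# roles; the user-level check (Four-Eye Principle) catches this.

executions = [("t1", "carol", "clerk"), ("t2", "carol", "controller")]

def role_level_sod_ok(execs, task_a, task_b):
    roles_a = {role for task, _, role in execs if task == task_a}
    roles_b = {role for task, _, role in execs if task == task_b}
    return not (roles_a & roles_b)   # different roles were used: passes

def user_level_sod_ok(execs, task_a, task_b):
    users_a = {user for task, user, _ in execs if task == task_a}
    users_b = {user for task, user, _ in execs if task == task_b}
    return not (users_a & users_b)   # requires two distinct individuals

print(role_level_sod_ok(executions, "t1", "t2"))  # True: looks separated by role
print(user_level_sod_ok(executions, "t1", "t2"))  # False: same person did both
```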

Usage Control:

Usage control is primarily associated with the functional and behavioral process aspect and expresses conditions that must hold after the access to a resource [179]. It is often used to capture regulatory compliance and especially privacy and data protection requirements. More specifically, it requires the specification of pre- and post-conditions for the execution of process activities or the access to resources or data. Conditions not only describe the maximal number of accesses but also temporal relationships of activities independent of specific execution paths. They may also constrain data retention, for example requiring local copies of a data item to be deleted after its access and usage. Depending on their observability, however, usage control requirements are not enforceable by a reference monitor and they encode liveness properties. Nevertheless, it is possible to reformulate certain usage control policies in a way that they become controllable. Such enforceable usage control policies consider the interplay of different activities and are encoded in the control flow, typically in the course of the specification of conditions and obligations. These control flow constructs can be captured as patterns as well. Usage control, analogously to authorization, can capture the informational process aspect as well, and it can consider data elements used in the process and their influence on the control flow of the process. For example, the annotation of branching conditions with data-related constraints determines the choice of subsequent paths. The consideration of given usage control policies in the control flow is also achieved by means of process rewriting, which faces the challenge of preserving the functional and behavioral aspects of the initial control flow. By this, it is possible to enrich a process model with activities derived from usage control policies, for example policies in which the condition determines the obligation, such as “whoever uses a service has to pay for it” or “when the data is accessed, the user must be informed”. Thus, usage control policies can use the same model specification language as the process model. Hence, a big part of usage control can be taken into account in the process model. Such model-based policies, however, do not lie in the regarded area of conflict between the workflow and the execution monitor. Still, there is a minor portion of usage control policies that are formalizable as enforceable safety properties, for instance, the maximum number of accesses to resources.
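The enforceable slice just mentioned can be sketched minimally (illustrative Python; the access limit, the event shape and the names are assumptions):

```python
# A minimal sketch of the enforceable slice of usage control discussed above:
# a cardinality policy bounding the number of accesses to a resource. The
# limit, event shape, and names are assumptions for illustration.

MAX_ACCESSES = 2

def access_allowed(history, user, resource):
    """Permit an access only while the user stays below the per-resource limit."""
    count = sum(1 for u, r in history if u == user and r == resource)
    return count < MAX_ACCESSES

history = [("alice", "dossier"), ("alice", "dossier")]
print(access_allowed(history, "alice", "dossier"))  # False: third access blocked
print(access_allowed(history, "bob", "dossier"))    # True
```

Such a counting policy is a safety property: its violation is detectable on a finite prefix of the access history, so a monitor can block the offending access.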

Isolation:

Isolation policies stem from the informational process aspect. Isolation generally says that confidentiality and also integrity of information must be preserved during the execution of a process. Isolation policies can be subsumed under the class of information flow policies, which define the way information moves throughout a system. Any confidentiality and integrity policy embodies an information flow policy. Such policies either preserve the confidentiality of data, preventing information from flowing to an unauthorized user, or the integrity of data, such that information may only flow to processes that are not more trustworthy than the data [33]. In this respect, authorization and SoD represent information flow policies as well, and access control can be seen as a component of information flow, in which the accessed object is the information. The informational aspect constitutes a generalization of the organizational aspect. However, its focus is on information and its distribution, not on the organization of the company in the sense of connecting resources and users to processes to eventually run a business. Isolation policies, on the one hand, address potential conflicts of interest. On the other hand, isolation policies restrict the direct and indirect flow of information in a process and the PAIS:

  • As a rather organizational policy, isolation can be used to avoid conflicts of interest. Here, the aim is to prevent the flow of sensitive information between competing companies or departments involved in the execution of a process. The history of activities that have already been executed is key for the decision as to whether a person is authorized to carry out certain activities. For this, the Chinese Wall security model is able to describe business policies that separate conflicting domains in a history-dependent, dynamic way. It is a model of a security policy that equally refers to confidentiality and integrity. In particular, by defining so-called conflict classes, business areas that are in conflict with each other can be prevented from interfering, so that conflicts of interest do not arise (e.g., if competing enterprises work with the same external consultant).

  • Isolation is further necessary for business processes of different clients, such that information must not flow between the different client domains (for instance in cloud technologies), but also within business processes when sensitive information is only allowed to flow to certain subjects involved in the process execution and not to unauthorized people. Here, information flow policies can specify security levels modeling clearance levels and allowed or unwanted flows between parts of the process, for example users or activities of different security levels. Further, they describe that subjects involved in a process are fully isolated from each other or that they can only communicate over a so-called declassification channel. For example, the Bell-La-Padula security model can be used to preserve the confidentiality of data elements. It associates users and workflow tasks with a security label [28]. Thereby, it can be stated that, for instance, two tasks can only be performed by users with the same security clearance. In analogy to usage control, there are patterns that are able to directly encode the isolation policies into the control flow model (e.g., the so-called “no-read-up”, “no-write-down”) and that can be used to check if the policies are fulfilled in a given process model.

  • Besides explicit information flows due to direct access operations, covert channels (or implicit flows) may allow an indirect information transfer and inferences about secret information by analyzing the process behavior (structure-based implicit flows and timing channels) or its data handling (storage channels); the requirement is then that executions of the process do not interfere with one another. The consideration of covert channels mainly originates from contexts with high security standards, such as the military. Such information flow policies restrict what information subjects can infer about objects from observing system behavior [182]. Associated security requirements are characterized by the term non-interference. A program (or process) is typically said to be non-interfering if the values of its public outputs do not depend on the values of its secret inputs [97]. In fact, what is often meant by the informational aspect is to consider the implicit flows, most prominently non-interference properties. Non-interference is a very restrictive security notion that can reflect not only the confidentiality of data elements but also the confidentiality (in the sense of non-observability) of process activities.

    Regarding the enforceability of isolation policies, the Chinese Wall security model, for instance, can be enforced, because it may limit the number of tasks that any single user can perform (a minimal sketch of such a history-dependent check follows below). This also applies to Bell-La-Padula, for example when users involved in two tasks must have the same security clearance. Both examples represent enforceable safety properties whose violation can be checked on the trace level. However, it is important to note that not all isolation-related properties can be characterized as trace properties such that they are enforceable by a monitor. Such monitors, which see executions as a sequence of performed actions, are not sufficient to enforce strong information flow properties. Whereas a trace property is a set of traces, these isolation requirements are only specifiable as properties over sets of sets of traces, so-called hyperproperties [46]. They can be used to define strong information flow policies which specify to what extent information can be learned by users of a system, or respectively, the actors that participate in a process. Hence, strong indirect information flow policies do not define sets of traces in the sense of a trace property, so they do not define safety properties. This particularity is highlighted in the works of McLean and Schneider. McLean [148] proved that non-interference information flow policies are not trace properties. Because they are not safety properties, there are no enforcement mechanisms to enforce them during execution [182]. Hence, they cannot cause an obstruction in the sense of this work.

    This difference can be illustrated with a short excursion into communication theory. If one understands the obstructive situation communication-theoretically, the trace focuses on what is said, that is, which activities are actually carried out. Based only on what is said, on the given trace prefix, an enforcement monitor can decide whether it is permissible or not. In contrast, strong information flow policies do not only depend on what is said (in terms of a trace prefix) but also on what is not said. In a sense, this can be related to Watzlawick's first axiom “One cannot not communicate” [211]: What is actually said does not only relate to itself but can be seen in relation to what else could have been said, based on a given set of possible pieces of information or messages. If this is transferred to a process, one must not only consider how a process is executed, but also what information can be inferred from what could have been executed on the basis of the model (which is what was not executed). Nevertheless, although an obstruction results from a focus on “what is said”, the question “what else could have been said” may be relevant for the resolution of an obstructive situation.
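As announced above, the enforceable, history-dependent slice of isolation can be sketched as follows (a minimal, illustrative Python sketch of a Chinese Wall check; the conflict classes and the access history are assumptions):

```python
# A minimal sketch of the history-dependent Chinese Wall check announced
# above: a user must not access two business areas within the same conflict
# class. The conflict classes and the access history are assumptions.

conflict_classes = [{"bank_a", "bank_b"}]  # competing enterprises

def chinese_wall_allows(history, user, company):
    touched = {c for u, c in history if u == user}
    return not any(company in cc and (touched & cc) - {company}
                   for cc in conflict_classes)

history = [("eve", "bank_a")]
print(chinese_wall_allows(history, "eve", "bank_b"))  # False: conflicting domain
print(chinese_wall_allows(history, "eve", "bank_a"))  # True: same domain again
```

Like the SoD check, this decision depends only on the finite history prefix, which is why this slice of isolation is enforceable and thus able to obstruct an execution.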

The focus of this thesis is on controllable safety trace properties, whose enforcement causes an execution monitor to obstruct the process execution. Hence, in conclusion, only authorization and SoD policies in the sense of execution monitoring are fully enforceable as safety properties. Only few usage control and isolation policies are enforceable in such a way that they can be of interest in causing obstructions. With regard to the different process aspects, an obstruction therefore primarily shows the conflict between the functional and behavioral aspect and the organizational aspect. Besides, the informational aspect is not enforceable in its essential characteristics as far as trace properties are concerned. Strong information flow policies are therefore not considered any further. However, some of the enforceable isolation policies also relate to users, so that, similarly to the organizational aspect, the assignment of users to activities is controlled, as in the Chinese Wall and Bell-La-Padula examples. Analogously to the isolation policies, only a part of the usage control policies is enforceable, for example the maximal number of resources a user can access (cardinality). Equally, for usage control, the enforceable part can be related to the organizational aspect. As a further observation, policies can also be mapped into the control flow, for instance usage control policies. Thereby, they eventually also form execution sequences or traces, which can be grasped as properties. However, because they are not encodable as an enforceable policy, they cannot cause an obstruction and are therefore neglected.

Based on these findings, the considered policy shall typically involve the authorization policy that defines which users are authorized to perform which workflow tasks. On top of that, further policies can be involved that stem from SoD, but also usage control or isolation requirements, and further restrict which of the authorized subjects can perform specific tasks. These are often called “authorization constraints”, which means they are constraints on top of the basic authorization but are crucial in finally determining the user-task assignment during execution. Hence, it is possible that a user, while authorized by the authorization policy to perform a particular task, is prevented (by one or more constraints) from executing this task in a specific workflow instance because particular users have performed other tasks in the workflow before. As an example of such a policy, an execution of the workflow of determining the market value shall now be constrained by the authorizations illustrated as assignments from subjects to tasks in Figure 2.6a. For example, Alice can control the computation (t2) whereas Bob is not authorized to do so. Moreover, the workflow system that executes the DMV workflow has the SoD constraint that t1 must be executed by a different subject than the one performing t2. SecureBPMN [36], SecBPMN [177] and further BPMN extensions [39]Footnote 3 are suggestions for modeling security policies, which have, however, not been considered in the standard so far. For this work, a variation of these approaches is used to model authorization constraints. The affected tasks are connected with dashed lines, whose label represents the type of the individual constraint, e.g., “\(\ne \)” for SoD in Figure 2.6b and “\(=\)” for BoD.

Figure 2.6: DMV model and policy based on the example in Burri [39]

In conclusion, all of the identified enforceable policies can have a negative impact on the process execution, because, depending on the contextual situation and the existing execution, they ultimately may prevent available users from being assigned to the execution of a task, thus causing a workflow to become obstructed. Such a scenario may not only depend on the constraints, but may also happen when authorized users are unavailable. Existing research relates this to the general notion of workflow satisfiability and workflow resilience.

2.1.2 Satisfiability

The notion of obstruction is related to the so-called satisfiability problem of workflows. The basic version of the Workflow Satisfiability Problem (WSP) assumes the existence of a process model specification, an authorization policy, and a number of authorization constraints. An instance is given by a set of users U, a set T of tasks, and a policy, which consists of an authorization list for each user \(u \in U\), determining the tasks for which u is authorized, and a set of constraints on T. The WSP then asks whether it is possible to find a valid plan \(\pi : T\rightarrow U\), which assigns a user to every task such that the policy, namely both authorizations and constraints, is satisfied. If a valid plan exists for an instance of the WSP, then the instance is called satisfiable. The authorization list itself can be seen as a set of specific, rather simple constraints that encode the authorization policy. However, it makes sense to assume that for every task, there is some user who is authorized to perform it; otherwise, it is trivial that the workflow is unsatisfiable. Because it builds the basis for further constraints, the authorization list, or the authorization policy, respectively, is handled separately.

Given the introduced policies of the market value computation, the workflow is in general satisfiable because Bob can execute the first task and Alice is authorized to perform the second, which is illustrated in Figure 2.7. In other words, there exists a valid plan to execute the workflow.
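This existence check can be made concrete with a minimal brute-force sketch (illustrative Python, not an optimized solver; that Alice is also authorized for t1 is an assumption for illustration, since Figure 2.6a is not reproduced here):

```python
# A minimal brute-force check of the WSP for the DMV example: tasks t1, t2,
# the SoD constraint t1 != t2, and the authorizations from the running text.
# That Alice is also authorized for t1 is an assumption for illustration.

from itertools import product

tasks = ["t1", "t2"]
users = ["alice", "bob"]
authorized = {"t1": {"alice", "bob"}, "t2": {"alice"}}
constraints = [lambda plan: plan["t1"] != plan["t2"]]  # SoD between t1 and t2

def find_valid_plan():
    """Return a plan satisfying authorizations and constraints, or None."""
    for assignment in product(users, repeat=len(tasks)):
        plan = dict(zip(tasks, assignment))
        if all(plan[t] in authorized[t] for t in tasks) and \
           all(c(plan) for c in constraints):
            return plan
    return None

print(find_valid_plan())  # {'t1': 'bob', 't2': 'alice'}: the instance is satisfiable
```

The exhaustive search over all \(|U|^{|T|}\) assignments also hints at why the WSP is computationally hard in general, which Section 2.1.2.5 takes up.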

The practical motivation for the WSP is based on three main reasons, which are of particular interest to policy-designers as well: The first one is that it can be used in a form of static analysis before deployment to ensure that the workflow specification is useful in the sense that there is at least one possible “execution path” throughout the workflow. Secondly, the WSP can be used to synthesize plans for workflow instances, which assign the tasks to the users for each instance of the workflow so that they can be used when instantiating the specification. Thirdly, the WSP can also be used in a more dynamic way if tasks in a workflow instance are not assigned to users in advance, which is related to obstruction-free enforcement. This section will consider the satisfiability aspects at design time, such that satisfiability can be analyzed before the execution of a process. These aspects are also connected to workflow resilience, which will be considered afterwards on the basis of this section.

Figure 2.7: A satisfiable assignment of users to tasks

2.1.2.1 Initial Publications

Workflow satisfiability and the associated problems are an active area of research. Its beginnings date back to the turn of the millennium. The work of Bertino et al. [30] from 1999 considers the completion of security-aware workflow instances. Their exponential algorithm assigns users and roles to tasks in sequential workflows by keeping an authorized user from performing a task when this implies that subsequent tasks would not have authorized users who satisfy the workflow constraints. Thereby, the algorithm generates an actual valid user-task assignment rather than deciding whether a workflow is satisfiable or not. Afterwards, “unfortunately, solutions to [the satisfiability] problem have largely been overlooked in the literature” [194]. The problem was tackled from another direction by Wainer et al. [204] in 2003. Their idea is to assign priority levels to each constraint and different override levels to users. In case a workflow is not satisfiable, some constraints may be overridden by unauthorized users who, however, have an override level that is higher than or equal to the priority level of the affected constraints. Inconsistencies within the specification of constraints were analyzed by Tan et al. [194] in 2004. Such inconsistencies can cause an unsuccessful workflow execution. They define a model for constrained workflow systems, which includes constraints such as cardinality, SoD and BoD. The authors define a workflow as a partially ordered set of tasks and explicitly specify workflow and task instances. Authorization constraints are given for pairs of tasks in terms of relations over users that must be satisfied when executed. Eventually, in 2005, Crampton was the first to establish the term “workflow satisfiability”. He refined the existing ideas by looking at workflows as partial orders, defining simple SoD and BoD constraints, and developing an algorithm to determine if there is a mapping of users to tasks that meets the constraints. This represents the basic problem formulation of the WSP. In 2006, Solworth used the notion of “approvability” and allowed SoD constraints in the presence of loops if the first allocating task is always executed by the same person. For this, an approvability graph is designed to describe sequences of actions defining the termination of workflows with an RBAC policy and sequential or conditional executions. The approaches so far had all been based on the so-called ordered version of the WSP, which means they always refer to the task order and check, task by task, whether there is still a valid assignment. In 2007, Wang and Li [208] were the first to not consider the order and introduced the unordered version of the WSP. In their work, they introduce an extension to the common role-based access control (RBAC [178]) to be able to define authorization constraints and capture common workflow system security requirements (e.g., SoD). They use plans to study the time complexity of the WSP, prove that the WSP is NP-complete in their access control setting, and reduce the problem to the Boolean satisfiability problem (SAT), which allows the use of off-the-shelf SAT solvers. Moreover, they prove a workflow system supporting subject-task authorization and either so-called subject-inequality (SoD) or existence-equality (BoD) constraints (or both) to be NP-hard, yet efficiently solvable if the parameter is the number of tasks of a workflow, which is usually smaller than the number of involved users.
Wang and Li’s FPT proof motivated many later works by Crampton et al., all of which then consider the unordered version of the WSP for workflows specified as partial orders. Wang and Li were also the first to use the term resilience in this context. With these works, foremost the works of Bertino et al., Crampton et al., and Wang and Li, the basis has been laid for the subsequently strongly growing research interest in the field.

Since 2007, research related to the WSP has branched out in different directions. From the initial publications, essential criteria already emerge, which have then been deepened in the course of further research. Besides, it can already be observed that it is common practice in the analysis of workflow satisfiability and also resilience to abstract from parts of a workflow specification. For example, it is common practice to limit the allowed control flow constructs or supported authorization constraints and to neglect data-oriented elements. For instance, there is only one work, from 2011, that actually at least considers the data flow; however, it does not model the data flow but overapproximates data-oriented gateway decisions with an internal choice operator [24]. Further, despite the field's strong relations to and origins in access control, the term “subject” (which accesses an object) is hardly used. The subjects, which can represent human users but also IT agents or systems, are often simplified to “users”, which may better reflect the business process context and the direct connection to the problem of potentially missing process participants. In this thesis, both terms are used interchangeably. Today, more than 100 publications can be counted and, to date, about 10 to 15 new papers on the subject have been published every year in renowned conferences and journals. Covering every publication in detail would therefore be too exhaustive and would diffuse the focus. Therefore, the spotlight will be on the key findings and deficits that are of relevance when considering obstructability. To this end, important aspects of satisfiability will be considered first, followed by resilience and then their runtime versions (in the section on execution monitoring).

2.1.2.2 Process Structure

The structure of the process models upon which the different instances of the WSP in the literature are based differs. As the initial publications already indicated, a differentiation can mainly be made between workflows that only allow a sequential or linear execution of tasks, and partial orders that also enable concurrent executions. There are only a few other approaches that allow for choice branches and looping tasks (e.g., using Hoare's process algebra CSP [24]). In the latter case, the tasks executed in a workflow vary from one instance to another, which further implies that there may be constraints that only apply to certain sequences of tasks. In this regard, the example process in Figure 2.5 requires full support of all of the mentioned structural possibilities.

2.1.2.3 Order of Assignment

Solutions to the WSP also differ in how the order of the tasks is considered when the users are assigned. As one possible solution, the unordered WSP offers a plan that assigns users to tasks in such a way that all tasks have an assigned user and all constraints are satisfied. In contrast, the ordered version offers a plan with an execution sequence, such that the assignment must respect the ordering of tasks defined by the control flow. The ordered and unordered versions of the WSP are only equivalent for workflows with tasks that can be executed in any order [61]. For the other cases, assuming either the ordered or the unordered version can make a big difference. This can be illustrated by a slight modification of the DMV authorization policy in Figure 2.6a, such that Alice and Bob would now both be authorized only for the second activity, and only Alice would be authorized for the first one. In the unordered case, assigning Alice to the second task before assigning her to the first would not allow the assignment of the first task anymore due to the SoD constraint. The ordered version would, however, first consider the assignment of Alice to the first task, such that for the second task, Bob would still be assignable. In the literature, consideration and non-consideration of the order are roughly in balance, with the tendency that the more complex the policy under consideration, the less the order is taken into account.

2.1.2.4 Constraint Types

The subsequent analysis of the constraints considered in WSP research will show that the WSP considers constraints that relate to the types of enforceable and controllable policies from process security that were determined in Section 2.1.1.2. The terminology for these policies in the WSP context differs, however, from the terminology of process security policies. The latter rather stems from business or regulatory rules. For instance, the SoD constraint against fraud can be related to the requirements of an Internal Control System (cf. Chapter 1). In contrast, the terminology of the constraint types in WSP research manifests further references, such as the connection to constraint satisfaction problems (CSP) and their complexity-theoretical considerations, as well as the entwinement with access control research, combined with the ambition to generalize findings on the instances of the WSP and the constraints considered therein.

Therefore, in the course of the different publications on the WSP, a growing number of constraint types, at increasing levels of differentiation and abstraction, has developed—constraint types that are in fact described very differently in the individual publications, albeit often meaning the very same thing. Although some initial works followed other attempts to specify constraints, for example, Bertino et al.'s constraint specification language [30] and Li and Wang's Separation of Duties Algebra [208], the following classes of authorization constraints for workflows have meanwhile crystallized [63, 181].

  • Counting (or also named cardinality) constraints specify that a user is allowed to perform either zero tasks or a number of tasks within a certain range (e.g., to bound the maximal number of accesses). One example of a counting constraint is \((1, 2, \{t_1, t_2, t_3\})\), which means that a user can execute 0, 1 or 2 of the tasks in \(\{t_1, t_2, t_3\}\).

  • Entailment constraints describe properties for the assignment of users to two disjoint sets of activities that entail each other, typically SoD or BoD constraints. They differ in the allowed cardinalities of the sets, such that either both sets are singletons, at least one set must be a singleton, or there are no restrictions on the cardinality of the sets. Respective examples are, following this enumeration, \((\{t_1\}, \{t_2\}, \ne)\), \((\{t_1, t_2\}, \{t_3\}, \ne)\) and \((\{t_1, t_2\}, \{t_3, t_4\}, \ne)\). The first constraint is satisfied if a user \(u_1\) executes \(t_1\) and a different user \(u_2\) executes \(t_2\) (because \(u_1 \ne u_2\)). The second and third constraints are satisfied if \(u_1\) executes \(t_1\) and \(u_2\) executes \(t_3\). These are examples of SoD. BoD constraints can be defined similarly by using \(=\) instead of \(\ne\). A special class of singleton entailment constraints, which is of interest for different security models, considers equivalence-based constraints. For example, a constraint \((t_{1}, t_{2}, \sim)\), where \(\sim\) is an equivalence relation on the set of users, means that the user who executes \(t_{1}\) and the user who executes \(t_{2}\) belong to the same equivalence class, for example the same role (cf. RBAC) or the same security clearance (Bell-LaPadula) (or \((t_{1}, t_{2}, \not \sim)\) for different classes).

  • There are two generalizations of these classes, namely user-independent constraints and class-independent constraints. User-independent constraints are those whose satisfaction does not depend on the individual identities of users: if a plan satisfies a user-independent constraint, the users in that plan can be replaced by arbitrary other users, as long as the replacement users remain pairwise distinct. Class-independent constraints are those whose satisfaction depends only on the equivalence classes that users belong to. Given a class-independent constraint, it does not matter to which specific classes the assigned users belong, only whether the classes are different (\(\not \sim\)) or the same (\(\sim\)). Every equivalence constraint \((t_{1}, t_{2}, \sim)\) or \((t_{1}, t_{2}, \not \sim)\) is class-independent, and every user-independent constraint is in turn class-independent. Many constraints of practical interest, such as separation-of-duty, binding-of-duty, and cardinality or counting constraints, are user-independent [60]; a small sketch of such constraint checks follows below.
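To make these classes tangible, the following minimal sketch encodes counting and entailment constraints as predicates over a complete plan; all task, user and role names are illustrative assumptions, not taken from the cited works:

```python
# Constraint classes as predicates over a complete plan (a dict task -> user).
def counting(l, r, scope):
    """(l, r, scope): every involved user performs between l and r tasks of scope."""
    def check(plan):
        per_user = {}
        for t in scope:
            per_user[plan[t]] = per_user.get(plan[t], 0) + 1
        return all(l <= n <= r for n in per_user.values())
    return check

def entailment(ts1, ts2, rel):
    """(T1, T2, rel): some pair (t, t') in T1 x T2 relates the executing users."""
    def check(plan):
        return any(rel(plan[a], plan[b]) for a in ts1 for b in ts2)
    return check

neq = lambda u, v: u != v                      # SoD (user-independent)
eq = lambda u, v: u == v                       # BoD (user-independent)
role = {"Alice": "clerk", "Bob": "clerk", "Carol": "manager"}
same_class = lambda u, v: role[u] == role[v]   # ~ over roles (class-independent)

plan = {"t1": "Alice", "t2": "Bob", "t3": "Carol"}
print(entailment({"t1"}, {"t2"}, neq)(plan))          # True: SoD holds
print(entailment({"t1"}, {"t2"}, same_class)(plan))   # True: same role
print(counting(1, 2, {"t1", "t2", "t3"})(plan))       # True: one task per user
```

Note that neq, eq and the counting check never mention user identities, so injectively renaming users preserves satisfaction—precisely the user-independence property—whereas same_class only inspects the equivalence classes of the users involved.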

2.1.2.5 Complexity and Fixed-Parameter Tractability

At the core of parameterized complexity lies the idea of identifying an aspect of the problem that makes it intractable, namely NP-hard. Then, depending on the application under consideration, a parameter k is introduced that measures this aspect in such a way that k is relatively small for those problem instances that arise in practice. The aim is to design efficient algorithms, called fixed-parameter tractable (FPT), which run in time \(O(f(k) \cdot n^c)\), where f is an arbitrary computable function in k only, n is the size of the problem, and c is an absolute constant [71]. Being polynomial for every fixed value of k, with the degree of the polynomial independent of k, such algorithms extend the reach of polynomial algorithms to NP-hard problems. For example, a formula of a satisfiability problem can be parameterized by the number of its variables, such that a formula of size n with k variables can be checked by brute force in time \(O(2^{k}n)\), as sketched below. In this respect, to make the WSP fixed-parameter tractable, Wang and Li [206] introduce the number of tasks |T| as the parameter k, arguing that in practice it is often much smaller than the number of users |U|.
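The brute-force argument is short enough to be spelled out. The following sketch, with a made-up example formula, checks a CNF formula with k variables by enumerating all \(2^k\) assignments, each check being linear in the formula size:

```python
from itertools import product

def sat(clauses, k):
    """clauses: lists of signed variable indices, e.g. -2 stands for 'not x2'."""
    for bits in product([False, True], repeat=k):   # 2^k candidate assignments
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):              # each check is linear in n
            return bits
    return None

# (x1 or not x2) and (x2 or x3), parameterized by k = 3 variables
print(sat([[1, -2], [2, 3]], 3))   # (False, False, True)
```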

At the beginning of WSP research, it was already well-known that the WSP is NP-complete. Wang and Li [206] prove that the WSP is also W[1]-hard, which means, first of all, that it is highly unlikely that there is an FPT algorithm for the general problem. However, in their work from 2007, they also show that the WSP is FPT if the constraint set is limited to certain simple constraints, an assumption that appears very natural in practice. Crampton et al. [65] extend the classes of workflow specifications from Wang and Li [207] for which the satisfiability problem is known to be FPT. Those classes include counting constraints, entailment constraints and constraints based on equivalence classes. In this way, organizational hierarchies or constraints that, for instance, implement the Bell-LaPadula security model or other business rules can be defined. The authors establish the circumstances under which an instance of the WSP has a polynomial kernel; by applying kernelization, they are able to solve the original problem more quickly and show that it remains FPT for counting and equivalence constraints [64]. Crampton et al. [61] pick up the concept of constraint expressions [128] (logical combinations of constraints) to translate an instance of the WSP for entailment constraints with unrestricted cardinality into multiple instances for singleton entailment constraints. The underlying idea is that an instance of the WSP for a conditional workflow can be solved as many instances of parallel workflows. In this way, it is uniformly possible to support conditional workflows and entailment constraints with unrestricted cardinality while keeping fixed-parameter tractability. They further develop a first solution [59] to the WSP regarding seniority constraints and specify special cases that are fixed-parameter tractable and still able to represent common subject hierarchies from practice. Cohen et al. [49] solve the WSP using techniques for the Constraint Satisfaction Problem, which allow the authors to present a generic algorithm for the WSP with the general constraint types: cardinality, so-called regular (e.g., SoD) and user-independent constraints. Their solution builds executions incrementally, discarding partial executions that can never satisfy the constraints. The authors show that their algorithm is optimal for user-independent constraints. As an important landmark, Cohen et al. [49] further introduce the aforementioned notion of user-independent constraints, which constitute a natural generalization of the simple constraints Wang and Li considered. Crampton et al. [60] then extended the notion of user-independent constraints to that of class-independent constraints; they show that the WSP remains FPT for such constraints and propose an algorithm to solve it. Overall, user-independent constraints provide a good trade-off between expressive power and tractability: on the one hand, they include many workflow constraints of practical interest; on the other hand, the WSP remains FPT if only user-independent constraints are considered. This focus on user-independent constraints enabled the development of highly efficient algorithms and tool support [47, 63, 125]. For example, Crampton et al. [60] and Cohen et al. [47] show the superiority of their FPT algorithms in comparison with the classical SAT reduction of the problem.
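The key trick behind these algorithms can be sketched compactly: for user-independent constraints, only the pattern of a plan matters, that is, which tasks are performed by the same user. The sketch below is a simplification with illustrative data, not a reproduction of any cited algorithm; it enumerates the partitions of the task set (their number depends on |T| only, not on |U|) and then tries to match blocks to distinct authorized users:

```python
tasks = ["t1", "t2", "t3"]
auth = {"t1": {"Alice", "Bob"}, "t2": {"Alice", "Carol"}, "t3": {"Carol", "Dave"}}
sod = [("t1", "t2")]   # user-independent: only the equality pattern matters
bod = [("t2", "t3")]

def partitions(items):
    """Yield all set partitions; their number depends on len(items) only."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def pattern_ok(blocks):
    block_of = {t: i for i, b in enumerate(blocks) for t in b}
    return (all(block_of[a] != block_of[b] for a, b in sod)
            and all(block_of[a] == block_of[b] for a, b in bod))

def match(blocks, used=frozenset()):
    """Backtracking search for distinct, fully authorized users per block."""
    if not blocks:
        return {}
    head, tail = blocks[0], blocks[1:]
    for u in sorted(set.intersection(*(auth[t] for t in head)) - used):
        rest = match(tail, used | {u})
        if rest is not None:
            return {**{t: u for t in head}, **rest}
    return None

for blocks in partitions(tasks):
    if pattern_ok(blocks):
        plan = match(blocks)
        if plan is not None:
            print(blocks, plan)
            break
# [['t1'], ['t2', 't3']] {'t1': 'Alice', 't2': 'Carol', 't3': 'Carol'}
```

Real FPT solvers organize this search far more cleverly [47, 63], but the separation into a pattern phase, whose cost depends only on k = |T|, and a matching phase, polynomial in |U|, is the essence of the approach.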

2.1.3 Resilience

Resilience asks to what extent a workflow remains satisfiable in the absence of users, for example due to vacation or illness. It is thus a question in the context of the WSP that conditions the WSP on circumstances that are independent of the workflow itself. As a preliminary study, Li et al. [135] introduce the concept of resilience policies for access control systems. These policies specify a minimum number of users who must have certain privileges, thereby ensuring an appropriate level of redundancy so that the system is resilient to the absence of users. The related resilience checking problem (RCP) asks whether an access control state satisfies a particular resilience policy. Wang and Li [206] then examine resilience in the workflow system context and its relationship to the WSP. Based on the observation that there are different types of workflows whose execution time frames vary between hours, days and weeks, Wang and Li propose a three-layered view on resilience in workflow systems [207]: (1) static resilience—some subjects are absent before the execution, while the remaining subjects will not become absent during the execution; (2) decremental resilience—subjects are absent before or during the execution, and absent subjects will not become available again; (3) dynamic resilience—subjects may be absent before or during the execution, and absent subjects may become available again. A workflow is said to be (statically) k-resilient if it remains satisfiable even after any k users are removed from the current state.

Regarding the market value example, suppose that Alice becomes ill and is not available to participate in the execution of the workflow: although Bob is, based on the authorization policy, authorized to execute all tasks of the workflow, he will not be allowed to perform \(t_2\) after executing \(t_1\) due to the SoD constraint. Similar problems would arise if Bob were not available. Hence, the absence of either Alice or Bob would result in an unsatisfiable workflow, which is why the example workflow specification represented in Figure 2.6 is not resilient for \(k>0\) absent users.
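For such small instances, static k-resilience can be checked by brute force. The following sketch, assuming the two-task market value example as just described, simply re-runs a naive satisfiability check for every possible set of at most k removed users:

```python
from itertools import combinations, product

# Market value example: both users authorized for both tasks, plus SoD.
tasks = ["t1", "t2"]
auth = {"t1": {"Alice", "Bob"}, "t2": {"Alice", "Bob"}}
sod = [("t1", "t2")]
users = {"Alice", "Bob"}

def satisfiable(available):
    for combo in product(sorted(available), repeat=len(tasks)):
        plan = dict(zip(tasks, combo))
        if (all(plan[t] in auth[t] for t in tasks)
                and all(plan[a] != plan[b] for a, b in sod)):
            return True
    return False

def k_resilient(k):
    """Satisfiable no matter which set of at most k users is removed."""
    return all(satisfiable(users - set(gone))
               for i in range(k + 1)
               for gone in combinations(sorted(users), i))

print(k_resilient(0))   # True: Alice and Bob split t1 and t2
print(k_resilient(1))   # False: a single remaining user cannot satisfy SoD
```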

Solving resilience questions can be motivated for policy designers by aspects similar to those of the WSP. Moreover, it is of particular interest for emergency planning, which is important for many companies today. As identified in Chapter 1, a policy should follow the principle of least privilege, such that access is only granted if it is absolutely necessary. Resilience requirements follow a completely contrary conception. The mentioned resilience policies, for example, represent the attempt to capture these opposing goals by means of a policy as well. Looking for ways to reason on and resolve the contradictions between conventional access control policies and resilience requirements is an active research area, and a challenging task for policy designers.

In essence, resilience is associated with ensuring the achievement of business goals even if some users are not available for the tasks that contribute to the achievement of those goals. Although this is similar to the general aim of this thesis to ensure the achievement of business goals, an obstruction does not necessarily happen due to an unplanned absence of users, but is rather caused by the policy. Nevertheless, based on these similarities, resilience approaches offer interesting concepts, so that the following aspects are also of interest when considering how to detect and handle obstructions.

2.1.3.1 Synthesizing Execution Plans

The generation of execution plans is an important component in determining resilience because the existence of multiple valid plans makes it possible for a workflow to be completed even if a number of users are absent. Literature about the synthesis and verification of such plans for security-aware workflows addresses different levels of resilience. Crampton et al. [57, 66] use bounded model checking to generate authorized plans and derive measures to evaluate the resilience of solutions by obtaining the sizes of minimal subject bases [68]. Thus, before the execution of a workflow, it can be analyzed what the minimal subject base is to complete a given workflow during execution (static resilience). As Lowalekar et al. [138] point out, several assignments with the same degree of k-resilience may exist, such that it may be necessary to pick the most favorable one. As an approach related to decremental resilience, Paci et al. [158] generate a set of valid plans that are stored in order to decide which user to assign to a task in case the previously assigned user is unavailable at runtime. For this, they introduce resilience constraints that state the minimum number of users for a satisfiable execution. They describe a process as user-failure-resilient if a user-task assignment can be found that meets the resilience as well as the security constraints. Massacci et al. [145] propose an approach to analyze, prior to execution, whether a given subject assignment is resilient against the dynamic absence of subjects.

2.1.3.2 Quantification

Instead of investigating the maximum number of absent users (k-resilience) or the minimum number of available users (minimal subject base), the so-called quantitative resilience, proposed by Mace et al. [141], examines the probability of the availability of the users. Based on this, the path that offers the highest degree of resilience with regard to the maximum probability of a valid process completion is determined. Quantitative resilience therefore seeks to measure the extent to which a workflow is resilient, rather than simply deciding whether it is resilient or not. Thus, with the introduction of quantitative resilience, the risk of user absence, or its cost respectively, is quantified for the first time.
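The following sketch conveys the flavor of this quantitative view without reproducing the cited model: for an illustrative two-task specification, all valid plans are ranked by the probability that every involved user is available, assuming independent, made-up availability probabilities:

```python
from itertools import product

tasks = ["t1", "t2"]
auth = {"t1": {"Alice", "Bob"}, "t2": {"Alice", "Carol"}}
sod = [("t1", "t2")]
availability = {"Alice": 0.7, "Bob": 0.9, "Carol": 0.5}  # assumed probabilities

def valid_plans():
    for combo in product(*(sorted(auth[t]) for t in tasks)):
        plan = dict(zip(tasks, combo))
        if all(plan[a] != plan[b] for a, b in sod):
            yield plan

def completion_probability(plan):
    """Probability that all involved users are available (independence assumed)."""
    p = 1.0
    for u in set(plan.values()):
        p *= availability[u]
    return p

best = max(valid_plans(), key=completion_probability)
print(best, completion_probability(best))
# {'t1': 'Bob', 't2': 'Alice'} with probability ~0.63: the most resilient plan
```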

2.1.3.3 Feasibility and Change of Policies

Regarding the risk one is willing to take to achieve resilience, the literature goes further still. It can be considered which policy changes are needed in order to make an unsatisfiable process specification satisfiable. These changes can be connected to risks or quantified as costs. The idea of editing the policy was initially associated with the concept of feasibility [128]. The authors describe workflow feasibility as the dual of workflow resilience. Hence, their intention is not to "boil down" a given workflow specification to a critical state to assess its resilience, thereby determining the minimal subject base or the maximal number of absent users. Instead, they consider a workflow that is not satisfiable, such that the question is whether it is feasible at all to somehow complete the workflow. They use relationship-based access control, which allows them to repair the policy by edge addition or removal in a social network. With regard to the constraints needed to address the workflow satisfiability problem, relationship-based access control models, however, only offer a limited way of specifying authorization policies. Nevertheless, feasibility represents the basic idea of repairing and changing the policy to allow for satisfiability, which is then explored in further works. For example, the idea of changing the policy is used by Basin et al. [25], who take a cost-based approach to increase the resilience of a workflow. Given an unsatisfiable workflow, a new user-role assignment is determined, whereby the costs of change regarding the risk of a role substitution, maintenance and administration are minimized. Further, Mace et al. [142] allow policy designers to automatically evaluate workflow resilience and compute optimal security constraint changes to ensure a certain resilience threshold.
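A toy version of such cost-based repair can be written down directly. The sketch below is in the spirit of these approaches but with invented costs and direct user-task authorizations instead of roles; it searches for the cheapest set of additional authorizations that makes an unsatisfiable specification satisfiable:

```python
from itertools import combinations, product

tasks = ["t1", "t2"]
auth = {"t1": {"Alice"}, "t2": {"Alice"}}      # unsatisfiable under SoD
sod = [("t1", "t2")]
change_cost = {("Bob", "t1"): 3, ("Bob", "t2"): 1, ("Carol", "t2"): 2}

def satisfiable(a):
    for combo in product(*(sorted(a[t]) for t in tasks)):
        plan = dict(zip(tasks, combo))
        if all(plan[x] != plan[y] for x, y in sod):
            return True
    return False

best = None
candidates = list(change_cost)
for r in range(len(candidates) + 1):           # try ever larger change sets
    for added in combinations(candidates, r):
        new_auth = {t: set(auth[t]) for t in tasks}
        for u, t in added:
            new_auth[t].add(u)
        cost = sum(change_cost[c] for c in added)
        if satisfiable(new_auth) and (best is None or cost < best[0]):
            best = (cost, added)
print(best)   # (1, (('Bob', 't2'),)): granting Bob t2 is the cheapest repair
```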

2.1.3.4 Policy Violation

A further, in a sense more drastic, step towards reaching resilience of an unsatisfiable workflow is the violation of the policy. Here, the risks associated with a policy violation are considered. With the so-called Valued WSP [62] and its refinement, the Bi-Objective WSP [63], Crampton et al. define approaches to optimize the user-task assignment for a workflow that is unsatisfiable, for instance due to user unavailability, such that the "least risky" assignment is found in order to achieve resilience. They distinguish between the authorization constraints and the authorization policy, both of which can be violated in order to complete a workflow. The risk associated with a violation is expressed as a cost. The algorithm that finds a user-task assignment completing the workflow with minimum cost is shown to be fixed-parameter tractable for user-independent constraints. The Bi-Objective WSP aims to minimize two weight functions associated with a plan, one representing the violation of constraints and one representing the violation of the authorization policy, which results in a set of incomparable solutions (a Pareto front), allowing the user to choose the most suitable one. This may help policy designers to find a plan that, for example, ensures that the cost of constraint violations is zero and that the cost of policy violations is minimized. If the overall cost is zero, the workflow is definitely satisfiable. In relation to Mace et al. [128], Crampton et al. [63] further show how their approach can be used to consider user availability as well. On the one hand, there are the costs of assigning an unauthorized or unavailable user. On the other hand, there are the costs of assigning an authorized user, which correspond to the probability that the user is not available. The work by Crampton et al. also touches on works related to risk-aware access control [43, 44, 80, 156], which aims to quantify the risk of letting a user execute an action instead of just making a decision to approve or deny such an action. It also ensures that the accumulated risk remains within certain thresholds. However, unlike that line of work, their focus is on calculating user-task assignments at minimal cost instead of access control decisions [62].
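In its simplest form, the bi-objective idea reduces to scoring every complete assignment with two costs and keeping the non-dominated ones. The following sketch does exactly that for a two-task toy instance; all weights and names are illustrative assumptions, not values from [62, 63]:

```python
from itertools import product

tasks = ["t1", "t2"]
auth = {"t1": {"Alice"}, "t2": {"Alice"}}   # only Alice is authorized
users = ["Alice", "Bob"]
sod_weight = {("t1", "t2"): 5}              # cost of violating the SoD constraint
auth_weight = 2                             # cost per unauthorized assignment

def costs(plan):
    c_constraint = sum(w for (a, b), w in sod_weight.items() if plan[a] == plan[b])
    c_policy = sum(auth_weight for t in tasks if plan[t] not in auth[t])
    return (c_constraint, c_policy)

scored = [(costs(dict(zip(tasks, combo))), dict(zip(tasks, combo)))
          for combo in product(users, repeat=len(tasks))]

def dominated(c, others):
    return any(o != c and o[0] <= c[0] and o[1] <= c[1] for o in others)

for c, p in scored:
    if not dominated(c, [o for o, _ in scored]):
        print(c, p)
# Pareto front: (5, 0) breaks SoD with Alice on both tasks, while the two
# (0, 2) plans let unauthorized Bob execute one task instead.
```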

It can be concluded that, if a workflow is found to be unsatisfiable, it can never be completed without changing or violating the security policy. The areas of costs, policy changes and violations, and user availability can also be considered together. For example, the approach of Crampton et al. allows weighing whether the risk of a violation is greater than the risk that the actually authorized user is not available [63]. All these metrics and costs ultimately represent indicators, which, as shown above, can be used to minimize violations, to stay within thresholds, or to indicate options for action among which the user has to choose.

2.1.4 Obstructability: The New Factor

As a general observation, checking the resilience of a satisfiable workflow is comparable to putting a healthy patient through his paces at the doctor's in order to compare his vital values with the critical thresholds that indicate illness. In contrast, feasibility, or more generally the change or violation of the policy, assumes a sick patient who should become healthy again through targeted measures; in other words, it checks whether the restoration of health is feasible at all. These two poles of approaching resilience and related problems are reflected in the identified aspects of existing research.

The same metaphor can be used to illustrate the difference between obstructability and obstruction. Checking obstructability assumes an actually "healthy" process specification, comparable to checking satisfiability or resilience. In contrast, the assumption that an obstruction is given can be compared to the case of having a sick patient, so to say, a "sick process instance", which must be treated in such a way that it can recover, i.e., that the process can be completed. Because the latter happens during the execution time of the process, it will be covered separately in the next section. For the preventive analysis, in analogy to satisfiability and resilience, the notion of obstructability of a security-aware workflow is introduced. Given a security-aware workflow, obstructability asks: Is there a partial assignment of users to tasks of the workflow (or a partial plan) that obstructs the execution of the complete workflow, such that no other authorized user can be assigned to the remaining tasks? In terms of its effects, obstructability, as opposed to satisfiability or resilience, describes a danger rather than a desirable ability. Nevertheless, it appears reasonable to examine security-aware workflows from this point of view because it reveals whether and to what extent obstructions are present therein. The mere analysis of satisfiability or resilience tends to neglect or overlook the cases of possibly still existing obstructions, as will be examined in the following.

Regarding satisfiability, if a workflow is unsatisfiable (and not trivially unsatisfiable), this definitely means that only permutations of user-task assignments are possible that cannot fully execute the workflow, which in an ordered assignment represent obstructions. However, a satisfiable workflow may still be obstructable. Given the DMV example in Figure 2.6, for which satisfiability and 0-resilience were attested, and assuming that all users are available, if Alice executes the first task, the execution of the second task will be obstructed by the policy. This obstruction is depicted in Figure 2.8. On the one hand, this is because Bob is not authorized to execute the second task. On the other hand, Alice would in fact be authorized to do so based on the authorization policy; however, due to her execution of the first task, this would conflict with the SoD constraint. This means that the satisfiable DMV workflow contains an obstruction. This simple example clarifies that satisfiability does not imply obstruction-freedom, and that obstructability is not the same as unsatisfiability. Obstructability rather looks at satisfiability from the opposite angle: every unsatisfiable workflow is obstructable, but obstructable workflows are often satisfiable. Consequently, an obstruction-free process specification is not the same as a satisfiable one either.

Figure 2.8

An obstructed execution after Alice is assigned to the first task
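The obstruction just described can also be found mechanically. The following sketch, assuming the DMV policy as given in the text, enumerates valid prefixes along the control flow and reports those that no assignment of the remaining tasks can complete:

```python
from itertools import product

tasks = ["t1", "t2"]
auth = {"t1": {"Alice", "Bob"}, "t2": {"Alice"}}
sod = [("t1", "t2")]

def valid(plan):
    return (all(u in auth[t] for t, u in plan.items())
            and all(not (a in plan and b in plan and plan[a] == plan[b])
                    for a, b in sod))

def completable(plan):
    rest = [t for t in tasks if t not in plan]
    return any(valid({**plan, **dict(zip(rest, combo))})
               for combo in product(*(sorted(auth[t]) for t in rest)))

# Enumerate prefixes along the control flow and report the obstructive ones.
for i in range(1, len(tasks)):
    prefix_tasks = tasks[:i]
    for combo in product(*(sorted(auth[t]) for t in prefix_tasks)):
        prefix = dict(zip(prefix_tasks, combo))
        if valid(prefix) and not completable(prefix):
            print("obstruction:", prefix)   # obstruction: {'t1': 'Alice'}
```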

Regarding resilience, a resilient workflow is equally not necessarily obstruction-free. Suppose that, in the example, there were Claire and Dave as a third and a fourth user. Claire has the same access rights as Bob, and Dave has the same rights as Alice. Now, if any one of the four were absent, the workflow would still be satisfiable, such that it would be 1-resilient. However, if either Alice or Dave were absent, the aforementioned obstruction would still occur if the respectively remaining one executed the first task. Hence, resilient executions can still obstruct. Nevertheless, obstructability can be compared to resilience to a certain extent because, in both cases, it is the organizational aspect that blocks further execution. The relationship between obstructability and resilience can therefore be stated this way: the foremost reason for an obstruction is a policy problem because it is the policy that blocks further process execution. This policy problem can, however, be amplified by a lack of resource availability.

In conclusion, it can be observed that satisfiability and resilience only indicate the fact that there is the possibility of the workflow being able to be completed, potentially despite absent users. They concern the whole process specification, that is, the workflow and its policy. In contrast, obstructability means that the process specification harbors the danger of obstructing but may in general still be satisfiable or resilient. As introduced at the beginning of this chapter, an obstruction happens in a single case, which resulted in the chosen sequence, case and trace perspective to describe security properties. Hence, the difference between these concepts lies in the model-based versus the case-based perspective, which clarifies why a satisfiable or resilient workflow may still contain obstructions.

Obstructability analysis before execution can clearly not handle an occurring obstruction. However, it is able to identify weak spots of the policy and build the basis for considering how to improve the policy or how to handle such situations at runtime. Similarly to satisfiability research in its first years, it seems that obstructability has been overlooked as well, although capturing and handling obstructability is gradually becoming more relevant in research and industry, as Chapter 1 has shown. To the best of the author's knowledge, this is the first work to explicitly introduce the "obstructability" of security-aware workflows as a notion.

2.1.4.1 Requirements for Obstructability Analysis (ROA)

For the analysis of obstructability, executions that represent a blockage between the policy and the workflow must be found. Analogously to satisfiability, the trivial case of such an obstruction would be that the policy simply does not provide authorizations for every task of the workflow. Only when this trivial case is excluded can one speak of an obstruction in the sense of blocking users who are actually authorized but not authorized in the obstructive situation. Hence, the interesting elements of the policy, those that actually cause an obstruction, are the constraints put on top of the basic authorization policy, whose access control decisions depend on the execution history of the workflow instance. As shown before, obstructability contains elements of both satisfiability and resilience. Therefore, the findings and deficits of the different aspects of existing works on the preventive analysis of resilience and satisfiability are drawn upon in order to formulate requirements that enable an adequate analysis of obstructability as well.

Ordered Assignment and Pre-Assignments (ROA-1):

Constraint satisfaction in related work is often assumed to be independent of the ordering of tasks because the policy is defined in terms of sets and functions, and plans are also defined as functions. In fact, the ordering of tasks is only relevant to the sequence in which the tasks of a workflow instance are executed, not to determining whether there is a valid plan, which is the basic application of WSP and resilience approaches. From the perspective of obstructability, however, it is exactly this sequence that encodes the execution history of interest. For example, the obstruction described in Figure 2.8 would have no connection to reality if the control of the computation were performed by a user before it is even clear who actually computes the market value. Hence, because obstructions concern the individual process case, it is natural that the assignment of users to tasks must take the ordering of tasks into account, which is defined by the control flow. Because an obstruction can only occur with respect to this task execution order, the ordering of tasks is actually implicitly assumed. Based on this, however, a further distinction can be introduced, namely between the actual task execution and the bare assignment of users, which will be termed pre-assignment. In fact, situations may arise in which a specific task at hand cannot be executed because succeeding "later" tasks, which should not have been executed yet, have already been pre-assigned to users. It may be the case that such pre-assignments must be preserved and may not be canceled, for example, because it must be ensured that a task is only performed by a specially qualified user. Indeed, it is not untypical in the conduct of a PAIS to first reserve users for specific tasks before these tasks are actually ready to be executed, for example to ensure that the process execution runs through without user bottlenecks. Based on this observation, a differentiation between the ordered version and the unordered version of an obstruction is introduced. While the ordered version of an obstruction assumes tasks to be assigned and executed at the same time, the unordered version differentiates between these time points, such that tasks that are to be executed later may already have been assigned to users. This can be compared to the computation of a plan, which is used to prepare its actual execution. Such an "unordered" version of an obstruction means that an obstructive situation may be caused by such pre-assignments as well (or at least takes pre-assignments into account). It still relates, however, to the task execution order. More specifically, it encompasses an "unordered" (in the sense of not sequential) pre-assignment of users to tasks, and an ordered assignment with respect to the users that actually execute the tasks.

Comprehensive Structure (ROA-2):

As indicated by the CEW example, the structure of a business process or workflow need not be only sequential or concurrent. Currently, more comprehensive models hardly exist in the WSP literature at all. Few approaches consider control flows that are more complex than partial orders. Moreover, because these approaches do not focus on obstructability, they implicitly also analyze obstructions but neither explicitly find them nor are they able to adequately capture and represent them. Because this dissertation aims for a practical solution, it tries to capture the most realistic and comprehensive workflow representation for the definition of the problem of obstruction. This means that sequential, concurrent, parallel, conflicting and looping activities shall be allowed. In this way, it is also possible to further consider our example, which represents a process template from practice as well.

Allow for Efficient Techniques (ROA-3):

Literature discussing the WSP and resilience in workflows mainly offers theoretical approaches that focus on understanding the complexity of the problem and finding efficient solutions to it. In particular, research related to the WSP shows that (under the assumption that P\(\ne\)NP) the NP-complete WSP is efficiently solvable for a growing number of constraint types. The relatively efficient algorithms developed for this purpose assume that the number of tasks is significantly smaller than the number of users who are authorized to perform tasks in the workflow, and that all constraints are user-independent. The efforts to stay in the class of FPT problems in order to solve the problems efficiently should also be considered for the analysis of obstructability. Hence, the envisaged representation to reason on obstructions should allow for efficient, lightweight computation as well. This also means that preferably only constraints that support this goal should be used.

Common Constraints (ROA-4):

Because the aim of this work is to consider obstructions that result from a conflict between the enforcement of security properties and the workflow, the enforceable policies, namely the authorization policy and further constraints, are considered. Although there are different representations of how authorizations are encoded (e.g., RBAC), they ultimately encode the possible assignments of users to tasks. Therefore, it is sufficient to consider the basic authorization, that is, the basic user-task assignment as it would be represented in an ACL. Given the identified manifold facets of constraints "on top" of the authorization policy, it is not surprising that rules such as SoD or BoD, which initially appear intuitively plausible, end up seeming rather complicated in the course of the development of constraint terminologies in WSP research. Here, SoD and BoD constraints will be considered because they are typical user-independent entailment constraints in related work and in practice. Due to the practically oriented focus of this work, the terms "SoD" and "BoD" will continue to be used. Nevertheless, the existence of further constraints and constraint classifications needs to be taken into account, such that it remains possible to integrate respective extensions.

Synthesizing Partial Plans (ROA-5):

The approaches to synthesize execution plans seem promising candidates for adaptation to synthesize partial plans, which are needed to indicate obstructive sequences. For instance, the number of such obstructed executions, when related to the number of satisfiable ones, may allow policy designers to assess the given extent of obstructability. The synthesis of partial plans may also inspire solutions for finding the partial plan that completes an obstructed state (in the sense of liveness).

Considering Costs (ROA-6):

To address the identified need to take indicators into account (cf. Chapter 1), obstructability analysis should allow costs to be considered. A key question in analyzing and especially handling obstructions is what needs to be changed, or which parts of the policy need to be violated, in order to allow for process completion. The investigation of satisfiability and resilience revealed comparable questions, in which an unsatisfiable workflow is assumed whose policy needs to be changed or violated in order to reach satisfiability or resilience. Because the basic idea of this thesis is to also take a certain degree of violation into account, while still being policy-compliant in order to allow for more (compliant) behavior, especially the assessment of a violation with a cost is adaptable to obstructability analysis. In the literature, the possibility to assign costs to violations is rather coarse-grained; for instance, it is not possible to weight individual user-task assignments with different costs. More fine-grained approaches would allow for more security-sensitivity, which could be realized by not differentiating between constraint and policy violations, but rather by allowing each violation to be assessed separately. Clearly, this also stresses the aforementioned requirement to consider the order because violations can be history-dependent and order-dependent as well. What the literature on satisfiability and resilience has in common with the handling of obstructability is that it does not consider changing the order of tasks, namely the control flow, because this would change the business goals encoded therein.

Capture the State of Obstruction (ROA-7):

For an adequate analysis of obstructability, not only the requirements to perform a comprehensive analysis, but also how the analysis results can be captured and presented, should be taken into account. What all approaches on satisfiability and resilience have in common is that they only sparsely allow an obstruction to be actually specified in relation to the information provided by the process specification. Basin et al., for instance, basically regard an unsatisfiable sequence as a partial execution sequence. As a more promising, yet still deficient example, the Bi-Objective WSP models such a partially completed workflow instance by adjusting the policy such that the set of authorized users for each previously executed task is reduced to the user who executed the task. This is clearly problematic when loops are supposed to be supported as a structural element because a recurring activity could then only be performed by the ever-same person. Further, an obstruction foremost represents a conflict between the organizational and the functional and behavioral aspects, a conflict that also manifests itself in the usual and often reasonable practice of separating the process specification into its different aspects. Hence, related work does not provide a big picture that comprehensively represents the overall state of the system regarding an obstructed process. Therefore, obstructability analysis should be able to depict its analysis result, namely identified obstructions, in a comprehensive representation. For this, not only the workflow and the policy need to be specified, but also the obstructed state, such that the execution (and (pre-)assignment) history can be encoded and all existing information is provided. Thereby, it should be possible to capture the overall PAIS system state regarding the considered workflow and its policies in one comprehensive representation. Because an obstruction identifies a possible weak spot of a policy and its workflow, such a representation can also help the policy designer to better visualize, comprehend, and finally improve the policies.

In conclusion, an approach for the analysis of obstructability is supposed to consider the order in the explained sense, a comprehensive process structure and costs, to allow for efficient computation, and to capture and visualize the overall state of obstruction. A representation that meets these requirements could then also be extended to investigate and capture satisfiability or resilience as well.

2.2 Process Execution Monitoring: The Case of Obstruction at Runtime

The general notion of execution monitoring "includes security kernels, reference monitors, firewalls, and most other operating system and hardware-based enforcement mechanisms that have appeared in the literature. The targets may be objects, modules, processes, subsystems, or entire systems; the execution steps monitored may range from fine-grained actions (such as memory accesses) to higher-level operations (such as method calls) to operations that change the security-configuration and thus restrict subsequent execution" [182]. This thesis considers execution monitors in the sense of reference monitors that are implemented in a PAIS, allowing or denying access from users to the process tasks (as depicted in the AAA-Figure 1.6 in Chapter 1). Such execution monitoring enforces security requirements during process enactment (cf. BPM lifecycle) and encompasses passive observation and active interception [33] (which relates to the only-observable and the controllable security properties [26]). Preventive monitors, on the one hand, actively control the access of users to tasks during execution, thereby enforcing safety properties. Detective monitors, on the other hand, observe the execution, can detect violating behavior and possibly trigger some sort of mitigation action. The latter, which consider safety properties as well, but may also be able to assess liveness, will be covered in the section on detective analysis (Section 2.3). The core assumption in execution monitoring (in the preventive sense) is that if something unwanted happens, it shall be stopped. While this may be true in a classic access control context, based on the observations in Chapter 1 on the conflicting interests between security and business goals, this may not be desirable on the business layer with respect to business processes in a PAIS. The obstructability of security-aware workflows becomes a real danger if a PAIS actually blocks at runtime, which makes runtime the decisive phase for identifying, avoiding or handling obstructions. Hence, during execution, it is not about the analysis of obstructability; the question is rather how to deal with actually occurring obstructions at runtime.

This section discusses which existing approaches there are to actually handle runtime obstructions or related problems. After a closer look at the different process elements and notions during runtime, it will be investigated how PAISs actually enforce the execution. Afterwards, the ways in which preventive monitors are able to handle obstructions will be considered in more detail. Thereby, analogously to the previous section on preventive analysis, deficits will be identified such that requirements can be deduced on how to allow for a more adequate handling of obstructions. Because an obstruction at runtime results from the process specification, this section strongly builds upon the findings on preventive analysis in the previous section. The selected literature in this section is therefore examined with a focus on the avoidance and handling of obstructions. It does not go into detail again on the different aspects found in Section 2.1 and the already deduced requirements, for example regarding structural process components or computational complexity.

2.2.1 Process Enactment

The focus of this subsection is on the execution phase of the process and related entities, such that the PAIS, which in fact steers the process execution, will also be regarded in more detail. Examining the information system in which an obstruction of the process execution happens at runtime then allows existing approaches to be better related to the overall setting given by a PAIS.

2.2.1.1 Process Execution

The central part of Figure 2.4 depicts the terminology in the phase of enactment (as related in Table 2.1), that is, the point in time of the actual execution of the process. A process is enacted by instantiating activities and executing them in a coordinated manner. This coordination of the activity executions takes place within a certain scope, which is called a case. A case represents an instance of the process. It encompasses all activity executions that refer to a particular trigger, such as a so-called start event (in BPMN), or a particular input to the system for which the behavior is described by the process [40].

2.2.1.2 PAIS Steering Execution

Based on the configuration phase of the BPM lifecycle, different types of information systems can come into use to support the execution of a process. An important criterion here is whether the system is process-aware. Even if this is not the case and the execution of a process is completely manual, information that is created or consumed during the execution of the process (e.g., valuations of collateral and corresponding values) is often stored in a database or a document management system. Recalling the Anglo investigation from Chapter 1, for example the email material that revealed some of the fraudulent behavior, there was probably a database system, an e-mail program, a spreadsheet program, or a text editor involved. Even when such systems do not support automation or a coordination of activities, the execution of a process can manifest itself in such information systems either way. For instance, the Anglo emails may reflect the triggering of certain business activities. In this sense, such systems or tools may be used to execute tasks in some business process. However, these tools are not "aware" of the processes they are used in. Therefore, they cannot be actively involved in the management and orchestration of the processes they are used for [3].

Apart from this indirect support of a process, the already introduced PAIS represents a specialized type of information system to support process automation. There are many manifestations of such systems, for instance BPMSs (Business Process Management Systems), WfMSs (Workflow Management Systems), ERP (Enterprise Resource Planning) systems, CRM (Customer Relationship Management) systems, rule-based systems, call center software or high-end middleware, such as WebSphere. These systems all have in common that a process notion is present, that they are aware of the processes they support, and that they can be configured in some way (through an explicit process specification, via predefined settings, or using customization) [3]. As briefly explained in Chapter 1, a specific class of PAISs is formed by generic systems that are driven by explicit process models. Examples are BPMSs and WfMSs. WfMSs primarily focus on the automation of business processes [119]. A WfMS directly implements the behavioral and functional aspects defined by the model, creating cases according to the provided blueprint. The basic problems of satisfiability and resilience assume such systems as well, which in practice usually provide basic authorization enforcement, whereas support for authorization constraints in such systems is rare. BPMSs have a broader scope: from process automation and process analysis to process management and the organization of work [81]. What BPMSs and WfMSs have in common is that they both support the coordination of activity executions based on the process specification and allow for process automation.

This thesis assumes a PAIS in the sense of a BPMS because it provides holistic support for the specification, execution, monitoring and auditing of intra- as well as cross-organizational workflows, which also entails the consideration of the different process phases of the BPM lifecycle. The use of such a PAIS does not necessarily mean that all activities are automated; they can still be performed manually. However, the PAIS supports the coordination of the execution. On the one hand, activities to be executed can be selected and assigned to possible users based on the policy. On the other hand, outstanding activities can also be made available to users for selection. Thereby, and in contrast to traditional WfMSs, the users are commonly in control of which activity to execute. As a consequence, such systems may also allow the users to deviate from the process specification given by the underlying process model, thereby providing a degree of flexibility, which is crucial in many application domains to keep a certain room for maneuver [40]. This could, for example, mean that in the collateral evaluation process depicted in Figure 2.5, an employee could deny or accept the acquisition before the respective collateral market value has been controlled. Although this would not be in line with the process as defined in its model, such a deviation can make sense based on contextual factors that are not captured in the process model. In the Anglo case, however, it would indicate suspicious behavior because the control activity was skipped. Such flexibility does not exclude the case of obstructions because, despite all flexibility, process tasks in the normal case still need to be executed to reach the business goal of the process. Hence, processes have to be completed, and the provided flexibility needs to support this. Process-aware information systems with suitable steering mechanisms can be used, for example, to conduct certain paths defined in the specification and thus meet requirements with regard to the control flow, that is, the interplay of different activities in the process. Hence, a PAIS is able to enforce a plan (a user-task assignment), such that, in case the execution obstructs, it can also enforce a plan to resolve the obstruction.

2.2.2 Avoiding Obstructions

If an obstructable process specification is enacted with such a PAIS, there is the danger of an obstruction factually occurring at runtime. To address this, one strand of the literature does not focus on the obstruction itself but on how to avoid it. This has resulted in the development of avoidance strategies, namely preventive monitors that prevent an execution from becoming obstructed by enforcing only execution plans that are obstruction-free. In particular, this has a strong relation to the synthesis of execution plans as elaborated in Section 2.1 because these plans can be seen as a guide to enforce an assignment of users to tasks that allows for a satisfiable or likely satisfiable (in the case of quantitative resilience) execution. Put differently, obstruction-free enforcement aims to suppress the potential obstructability of a process specification.

2.2.2.1 Enforcing Obstruction-Free Workflows

In analogy to Schneider’s enforcement classes, Bertino et al. [30] provide a categorization of authorization constraints in workflow systems into static constraints (enforced at design time), dynamic constraints (enforced at runtime), and hybrid constraints (enforced at design and runtime) [134], which is also useful for obstruction-free policy enforcement. Although obstruction-free policy enforcement may also be enforced by static constraints, the subsequently observed literature is mostly concerned with obstruction-free enforcement during runtime. Basin et al. [24] introduce an algorithm realizing this goal. Thereby, they provide a mechanism to enforce only processes that are obstruction-free (based on a trace-based notation). Crampton et al. [67] present two mechanisms to analyze the realizability of a workflow instance under given access control constraints, which can support authorization enforcement before the execution (static) or during the execution (dynamic) of a workflow. Bertolissi et al. [31] and dos Santos et al.[52], similarly to Basin et al. [24], provide approaches for the automated synthesis of run-time monitors to enforce authorization policies in business processes. In particular, they develop enforcement mechanisms that try to prevent the reaching of an obstructed state. As elaborated before, the field of obstruction-free enforcement has strong interrelations with the aspect of synthesizing execution plans. In essence, it means that the plans that were computed before process execution find their application to enact the process during execution. Therefore, the regarded literature can also be seen as an extension to the approaches presented in Section 2.1.

2.2.3 Handling Obstructions

Even if the satisfiability of a workflow has been analyzed beforehand, if the minimal number of subjects for a resilient execution is known, or if the workflow has been analyzed to be obstruction-free, and even if, based on all these findings, preventive monitors aim for an obstruction-free execution, exceptional situations in which the execution of a workflow becomes obstructed anyway can still suddenly occur. It is not enough to try to avoid obstructions; they also need to be handled. Therefore, another direction in the literature lies in actually handling obstructions that occur during runtime. In Section 2.1, the analysis of the literature already identified approaches that seem useful for actually completing obstructed process executions at runtime as well. For example, the approaches of changing or violating the policy in the preventive analysis can serve as a basis for finding partial execution plans that take a certain degree of risk into account and that can complete runtime obstructions. Such a handling of obstructions is similar to the aforementioned obstruction-free monitoring because it provides execution plans too. These approaches, however, require, for example, the change of policies or imply violations, which means that obstruction-freedom has a certain cost, a certain "price to pay" or risk. Further, while obstruction-free enforcement mechanisms focus on valid plans, the handling of obstructions focuses on the executed obstructive partial plan: in order to complete the workflow execution, the aim is to eventually find and append a partial plan to the obstructive partial plan. Beyond that, there are further approaches to handle obstructive situations that originate from the area of access control. There are mainly two approaches to access a certain object when no subject is available [70]: the concept of "Break-Glass", which often relates to the clinical context in case of emergency, and delegation. Thus, a look at these concepts is taken first, in order to set them in relation to business processes and the associated approaches.

2.2.3.1 “Breaking” the Policy

In a Break-Glass scenario, if the enforcement mechanism blocks an attempted access to an object, the user is explicitly asked whether the access should nevertheless be carried out. For example, the user who is blocked by the execution monitor must confirm that she or he has exceptional privileges and can be held responsible for access misuse. It is the user who must balance potential use against harm. If the user "breaks the glass", the policy violations are recorded along with the further process execution, such that after the execution a so-called post-access evaluation can take place, and traceability and accountability are given. The costs of Break-Glass are therefore strongly influenced by (manual) audit costs, which is why there are automation approaches as well [37, 163]. However, this does not significantly address the problem that Break-Glass approaches override initial constraints, disregarding their security-related intention or the responsibilities of subjects for certain tasks. In a way, Break-Glass defines alternative constraints that take effect in the case of an obstruction. The assessment of the cost of overriding depends exclusively on the user, who is in full control of the access. The system is not in charge, which involves the risk of user abuse.

Regarding business processes, overriding security constraints to enable a workflow to complete has been considered in other works in the literature as well. The approach of Wainer et al. [204], introduced in the context of the WSP, also allows policies to be overridden in exceptional situations. This is, however, only possible if the overriding user possesses the necessary predefined override level and the affected task does not surpass it, which allows the possible transgressions to be contained to a certain extent. Brunel et al. [38] directly incorporate the possibility of violations into the policy. Their approach introduces a violation management, such that a security constraint is allowed to be overridden; its overriding, however, implies pre- or post-conditions that need to be fulfilled such that the security policy is eventually met. In a sense, it represents a combination of Wainer's pre-assigned override levels (as a pre-condition) and Break-Glass post-access evaluation (as a post-condition).

2.2.3.2 Delegation

The other concept of access control to handle missing users is delegation, such that another subject is empowered to access the object (cf. the delegation required by the BaFin in Chapter 1). With regard to overriding initial constraints while disregarding their security-related intention or the responsibilities of subjects for certain tasks, delegation seems less harmful because the initial constraints of a workflow are mostly retained, except for the right that is delegated to another subject to be able to execute a task. Nevertheless, delegation involves the danger of collusion of subjects and misuse [209] (as seen, for example, in the Anglo case). It further requires the delegator to be available to perform the delegation. This raises the question of what happens if a user who is able to delegate her or his right regarding a critical task unexpectedly becomes unavailable and is thus unable to delegate it. The approach of Crampton et al. [70] considers these deficits and suggests the concept of auto-delegation, in which qualifications that indicate a potential delegatee are introduced. Examples are given of how the so-called qualification hierarchy may be computed based on an RBAC model. In this way, a mechanism automatically resolves user unavailability by delegating a task to the most qualified available user. Because this work is located in the context of access control, the satisfiability and completion of processes is not taken into account. However, the basic idea of the auto-delegation mechanism seems promising for use in the process context, more specifically, in a PAIS.
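The mechanism can be pictured with a small sketch; simple qualification scores stand in for the qualification hierarchy of [70], and all names and numbers are illustrative assumptions:

```python
auth = {"approve": {"Alice"}}
qualification = {"approve": {"Alice": 3, "Bob": 2, "Carol": 1}}

def assign(task, available):
    """Prefer an authorized user; otherwise auto-delegate to the most qualified."""
    authorized = [u for u in sorted(auth[task]) if u in available]
    if authorized:
        return authorized[0], "regular"
    candidates = [u for u in qualification[task] if u in available]
    if not candidates:
        return None, "unassignable"
    return max(candidates, key=qualification[task].get), "delegated"

print(assign("approve", {"Alice", "Bob"}))   # ('Alice', 'regular')
print(assign("approve", {"Bob", "Carol"}))   # ('Bob', 'delegated')
```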

With regard to business processes, Crampton et al. were the first to examine the satisfiability of workflow systems under the delegation of tasks [69]. Bakkali et al. [21] present an approach to bypass situations where obstructions occur by applying a specific delegation process, which requires a manual definition of potential delegates for the respective tasks. These delegates are selected according to their suitability, but may not have the necessary competence or expertise. The focus of the delegation by Bakkali et al., however, is on the task level. This leaves open to what extent a (secure) completion of the entire process is still possible.

2.2.3.3 Changing the Policy

The approach of Basin et al. [25], which was only briefly mentioned before, is now examined in more detail as an example of the type of approaches pursuing a change of policy when an unsatisfiable case is at hand. Based on the distinction between administrable (e.g., RBAC) and non-administrable (e.g., SoD/BoD) authorization policies, it tries to solve an unsatisfiable workflow by changing only the administrable policies. The Enforcement Process Existence Problem asks whether there is an obstruction-free enforcement mechanism that overcomes unsatisfiable workflows by reallocating roles to users at runtime such that the non-administrable policies are satisfied. Based on manually predefined costs regarding the risk of assigning a new user to a role, as well as maintenance and administration costs, it is possible to determine the cheapest change of the authorization policy. Comparing this approach with the requirements from Section 2.1, the obstructed state is represented only by the obstructed execution trace, which is the basis for finding the alternative policy. This alternative policy, however, only concerns the authorization policy, not the constraints. Thereby, it might happen that an actually well-qualified user is not even considered for a pending task because she or he is blocked by a "non-administrable" SoD constraint, while a rather unqualified user is added to the role that allows the task to be executed, no matter how "costly" this may eventually be. This is comparable to the risk in the approach of Bakkali et al., where suitable, manually predefined delegates are selected that may not represent the best options regarding competence or expertise.

2.2.3.4 Violating the Policy

Crampton et al. [63] draw the same line of separation between policy types as Basin et al., namely between authorization policies and constraints. Their approach is so far the most sophisticated example of the approaches that allow for policy violation. As elaborated in Section 2.1, they aim to find the "least risky" user-task assignment for a workflow that is unsatisfiable. They assume that security constraints and user-task permissions can be violated, or overridden, in order to complete a workflow. However, they allow violating both, which results in a bi-objective optimization. A further approach adds the idea of a budget limiting what violations may cost, although the costs remain rather coarse-grained. With respect to the requirement of a comprehensive workflow structure, neither of the approaches considers loops or conditional branches. In contrast to Basin et al., they do not consider only the obstructed execution trace as input, but to some extent actually model the obstructed state by shrinking the user-task authorizations to singletons containing the user who executed each task, as depicted in Section 2.1. Encoding the obstruction by simplifying the authorization policy in this way not only limits the representation of the obstructability analysis result, because not all available information on the obstructed state is captured, but also limits the set of possible solutions for resolving the obstruction, especially when loops are considered.
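The cost idea behind a least-risky assignment can be sketched as follows (hypothetical weights and a brute-force search, not the algorithm of Crampton et al. [63]): every complete user-task assignment is priced by summing the costs of its authorization and constraint violations, and the plan with minimal total cost is selected, or rejected if a given budget is exceeded:

# Sketch: enumerate assignments, price violations, pick the least risky plan.

from itertools import product

users = ["alice", "bob"]
tasks = ["t1", "t2", "t3"]
authorized = {("alice", "t1"), ("alice", "t2"), ("bob", "t3")}
sod = {("t1", "t2")}                     # t1 and t2 need different users
COST_AUTH, COST_SOD = 3.0, 5.0           # hypothetical violation prices

def violation_cost(plan: dict[str, str]) -> float:
    cost = sum(COST_AUTH for t, u in plan.items() if (u, t) not in authorized)
    cost += sum(COST_SOD for (a, b) in sod if plan[a] == plan[b])
    return cost

def least_risky_plan(budget: float):
    best = min((dict(zip(tasks, a)) for a in product(users, repeat=len(tasks))),
               key=violation_cost)
    cost = violation_cost(best)
    return (best, cost) if cost <= budget else None

# An authorization override (cost 3) here beats an SoD violation (cost 5).
print(least_risky_plan(budget=6.0))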

2.2.4 Completability

In summary, if it is not possible to avoid an obstruction, and given that an early process termination is not an option, ways to still complete an obstructed execution must be provided. To give this idea a name, the term completability is introduced. Completability can be seen as the consequence of obstructability: while obstructability is concerned with the detection of obstructions, completability focuses on the handling of obstructions in order to find solutions to complete obstructed executions. Formulated as a question, completability asks: Given an obstructed workflow execution that resulted from the enforcement of a partial plan, is there a partial plan to complete the workflow that meets certain (security) requirements? In order not to allow such a solution to become arbitrary (one could trivially allow for completability by neglecting all security requirements), it is important that the question of completability depends on the requirements that such a solution should consider. The basic input for such a solution is given by the process specification that is used by the PAIS.
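Stated operationally, completability can be checked by searching for an admissible continuation. The following purely illustrative sketch (simplified to a linear task sequence, with the requirements passed in as a predicate) makes explicit that a trivial predicate would render completability arbitrary:

# Brute-force completability check: is there an assignment of the remaining
# tasks such that the combined plan meets the given (security) requirements?

from itertools import product

def completable(executed: dict[str, str], remaining: list[str],
                users: list[str], meets_requirements) -> dict[str, str] | None:
    """Return a partial plan completing the case, or None if none exists.
    `meets_requirements(full_plan)` encodes the requirements a solution must
    satisfy; `lambda p: True` would trivially accept everything, which is
    exactly what must be avoided."""
    for assignment in product(users, repeat=len(remaining)):
        plan = dict(zip(remaining, assignment))
        if meets_requirements({**executed, **plan}):
            return plan
    return None

# Example requirement: no user may execute more than two tasks overall.
ok = lambda p: max(list(p.values()).count(u) for u in set(p.values())) <= 2
print(completable({"t1": "alice"}, ["t2", "t3"], ["alice", "bob"], ok))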

2.2.4.1 Requirements for Specification-Based Completability (RSC)

Based on the aspects and deficits identified so far, requirements for a specification-based approach to resolve and complete obstructions are derived. These requirements build upon the requirements stipulated in Section 2.1. In particular, a comprehensive process model structure should be considered for realistic obstructions at runtime. Further, the fact that runtime obstructions usually need to be handled immediately underlines the necessity of efficient techniques. The following requirements for the completion of obstructions make further references to the requirements of the preventive analysis.

Obstruction-resolving enforcement (RSC-1):

An important research direction is the avoidance of obstructions, which builds strongly on the findings from preventive analysis. Several approaches aim to provide obstruction-free enforcement mechanisms in which satisfiable plans ensure an obstruction-free enforcement, for example through the synthesis of runtime monitors. Despite their focus on prevention, such monitors are helpful in case of an obstruction as well, namely to enforce its resolution. If an obstruction needs to be fixed, obstruction-free enforcement mechanisms could focus on partial plans that complete the process, starting from the obstruction. In this respect, plans that involve the least risky assignment regarding violations can also be considered. The capability to enact such plans is also reflected in the aforementioned possibilities given by a PAIS.

Security-sensitive Overrides (RSC-2):

Despite the identified deficits of Break-Glass approaches, foremost their disregard of the existing policy, the general idea of breaking out of an obstructed state while taking violations into account is helpful for the handling of obstructions as well. Further, the Break-Glass idea of requiring subsequent inspection of the affected case can still be implemented in more security-sensitive approaches as a means to further improve security; a PAIS can provide such additional mitigating techniques to prioritize the audit of the affected case. Hence, there is a need for security-sensitive overrides that take violations of the policy into account. Based on the findings from the requirements of Section 2.1, costs can be used for such an assessment of security-sensitivity.

Automatable Delegation (RSC-3):

Although classic delegation is more security-sensitive than Break-Glass approaches, because it at least involves a responsible delegator with enough expertise to choose an appropriate delegatee, it requires a high administrative effort, for example, the availability of the delegator, and involves fraud risks as well. Therefore, there is a need for automated delegation. Technically, if a security-sensitive alternative assignment is computed, for instance with cost-based approaches, its enforcement can be regarded as the enforcement of an exceptional temporal security policy, which is comparable to a temporal delegation of rights as well.

Obstruction-Aware Completability (RSC-4):

The question of the completability of an obstructed workflow depends on the information provided to describe the obstructed situation at hand. In this respect, the existing cost-related approaches that allow for the completion of an unsatisfiable case can take the obstructive situation into account only to a limited extent. For example, Basin et al. consider only the obstructed execution sequence, while Crampton et al. partly shrink their policy to capture the obstructed state, whereby information for computing a solution is lost. Because the existing approaches do not take in the "full picture" of how the obstructive situation arose and assume only a limited representation of an obstruction, they in turn cannot offer a solution that takes full account of the course of the process execution up to the point where the obstruction occurs. The requirements identified in Section 2.1 already attested that the overall state of obstruction in a PAIS must be captured in a better way, including all available information. This, in turn, builds the foundation for handling and completing an obstruction adequately, because the more information is provided, the "better" and more security-sensitive the solutions can be. Hence, to resolve an obstruction, an approach is needed that is fully aware of the obstruction.

In conclusion, obstructions at runtime need to be handled: the conflict between the workflow and policy enforcement is to be captured and resolved. Based on the enumerated requirements, and to address the deficits of overriding, delegating or aborting the process, this thesis aims to find optimal partial plans such that, eventually, the obstructed case can be completed in a security-sensitive way. To allow for completability, a PAIS can then enforce such an obstruction-resolving plan to steer the process towards its completion. This requires an efficient, automatable approach that allows a violation of the policy while taking the requirements from the preventive analysis into account, in particular, an extensive representation of the state of obstruction and a comprehensive process structure.

2.3 Detective Process Analysis: The Case of Obstructability by Incompleteness

The detective analysis is related to the "evaluation" phase of the BPM lifecycle and focuses on the recorded process executions. As examined in Chapter 1, process automation goes along with a magnitude of data generated in the course of digitization. More specifically, the enterprise information systems (regardless of whether process-aware or not) generate data while the processes are executed, such that, in some form, the execution of the process is recorded. According to Bishop [33], logging is "the recording of events or statistics to provide information about system use and performance". Given a PAIS, process executions can be recorded and captured in a process log. Each execution of a task of a process is recorded as an event that can be assigned to a process instance, or case. Such recorded cases, namely the recorded sequences of events, are represented as traces, which constitute the overall process log. Analogously to the process model, a log captures different dimensions as well, for example the control flow or the organizational perspective. The research discipline that builds upon this process entity is captured under the notion of process mining [3]. Detective analysis subsumes not only detective monitoring but, more generally, auditing, which, from a general computer security perspective, is "the analysis of log records to present information about the system in a clear and understandable manner" [33]. Hence, it can be used as an a posteriori technique for determining security violations. Whereas detective monitoring implies that a potential action is taken in a timely manner, the reaction time for auditing can be significantly longer. In both cases, however, the execution traces of the process are regarded. Based on the given trace properties, such process executions can be analyzed as to whether (and how) the designated business goals are achieved (liveness) and whether policies were adhered to (safety). Hence, whereas the literature regarded so far mainly covers safety properties and the implications of their enforcement, logs additionally provide the possibility of checking for liveness properties. In particular, they allow considering completed, "closed" cases, which makes it possible to assess whether liveness properties were indeed "eventually" fulfilled. Regarding the observation of obstructions, this is mainly interesting for the property of completion, for example whether an end activity could be reached. If this is not the case, the obstructability of the process involved is indicated by incompleteness, i.e., by traces that do not represent completed executions due to obstructions.

In general, the main difference to the modeling and specification of a process is that the log represents how the process was actually executed, or "how it was lived". In contrast, a process model rather aims at a comprehensive view of a process and generalizes the individual cases of the real-world process [40]. The process model and the policy specification build the basis for the WSP. Nevertheless, the use of logs for satisfiability, resilience and obstructability research can be well motivated because there are manifold reasons to use their so far untapped potential. Regarding the WSP, the log can help to assess to what extent specific satisfiability problems are actually relevant, as it allows investigating the importance of individual paths of a workflow. Regarding resilience, besides assessing the importance and relevance of specific paths, the log can reveal the probability of a user being available. Regarding computational complexity, the log offers the possibility to simplify computations because it represents a finite set of events and traces: if the log is replayable on a model, it can be checked whether the model is still satisfiable for this finite set of traces. Loops would thereby be restricted to the finite set of realistic events given by the log. This may, for instance, give insights into how changes to the policy design impact the conduct of the lived process.

Regarding obstructability, at first sight, logs in fact "limp a step behind" in handling obstructions during runtime. However, they provide a basis to learn from completing obstructions. On the one hand, a log contains obstructed executions, which manifest themselves in incomplete or aborted traces that can result from an obstructive policy design or exceptional user absence situations. On the other hand, the log can contain complete and successful traces, or is even able to document how an obstructive situation has been resolved. Regarding the latter, depending on the flexibility a PAIS allows, or, in other words, how strictly it insists on adherence to the control flow, the log reveals important and practically relevant insights, which may differ from the control flow of the model, for example because further contextual factors beyond the model and specification were taken into account. That way, the log may encompass completed traces that represent compliant behavior deviating from given security policies and properties. Further, a trace of a completed execution that involves violations may also result from a Break-Glass scenario, which, however, may have been checked by audit and subsequently assessed and marked as being without concern. In this case, too, the log would capture the behavior of completed, compliant executions that deviate from the initial process specification. The log may further be useful if an information system does not support the modeling and enforcement of authorization constraints. Indeed, the lack of controls during runtime, as shown in Chapter 1 (cf. ACFE results), is often reflected in the fact that preventive controls are not always used, due to the associated costs and the sometimes negative effects on process execution. The risk of deviating execution paths is accepted and the compliance check is shifted to audit analyses.
Although there are attempts to integrate monitor synthesis techniques, such constraints are often specified separately and handled by auditing software (cf. CSI tools), whose main goal is to detect problems a posteriori. However, even if no controls and authorizations are enforced during execution, logs are of beneficial use. An actual obstruction during process execution would then not block the process. The involved users may, however, be aware of an actually obstructive situation based on contextual information (e.g., known regulatory rules). After extracting the data of such a system into a log file, a subsequent analysis of the respective log can reveal whether such a situation was at hand and how it was handled, namely whether the process was blocked or aborted, or whether there was some way to overcome the obstruction. The log would then give insights into weak spots of the process, for example risky or failing workarounds. On the other hand, it could also reveal successful workarounds, which may even help to resolve and guide other obstructive situations. Thereby, a user who is aware of an obstructive situation could, for example, be assisted by a system that recommends how to proceed, based on the insights provided by the log.

These manifold reasons further reveal that it seems natural and beneficial to differentiate between successful and obstructed executions in the log, which can be done, for example, by analyzing the liveness property of process completion. This allows reasoning on the causes of successful executions and even guiding a log-based handling of obstructions. Moreover, the log allows deriving further indicators, for example, Resource Behavior Indicators (RBIs) [164] or indicators used in predictive monitoring [143], which help in assessing the risk of violation in case of an obstruction and can enrich the process model. Such information improves the overall basis of information on how to handle and complete obstructions.

While most of the related work regarded so far is aimed at theoretical, preventive observations, the use of logs for a practical approach to (un)satisfiability is hardly ever considered. As far as is known, there is only one further approach that relates logs and process mining to the extended context of the workflow satisfiability problem [52]. It uses them to preprocess and reconstruct a control flow model on top of which a user subsequently defines the policy, which, however, represents a different focus from that of this work. The present work is the first to use logs for the analysis and handling of obstructions. Taking "real" runtime obstructions and "real" solutions into account stresses its practically oriented focus. To this end, this chapter will systematically identify the potential of using logs with regard to obstructability. Based on the possibilities of process mining, general ways in which logs can be used beneficially to analyze and resolve obstructions will subsequently be identified. After a short look at event logs, these possibilities will be identified and elaborated further along the three disciplines of process mining, namely process discovery, conformance checking and enhancement. Finally, the potentials and requirements for a log-based approach will be deduced.

2.3.1 Process Logs

When a process is supported by information systems, details of its execution are generally available in the form of event data. Although PAISs directly provide event logs, as mentioned before, many information systems store such information in a less structured form (for example, in databases), such that event data can be distributed over many tables or need to be retrieved from subsystems that exchange messages. In such cases, event data exist, but some effort is required to extract them, which makes data extraction an essential part of any process mining endeavor [3]. After locating and extracting the data, possibly from different sources, these data take the form of an event log. Event logs represent the footprints left by process executions that were stored by an information system [3] (e.g., a PAIS). As shown in Figure 2.4, they consist of a collection of events that record which activity was executed for which case. Thus, an event log depicts the recorded behavior of a process. Events can be distinguished by the cases in which the respective activities were executed. This results in event sequences, designated as traces, which represent the recorded behavior of the individual cases of the process. Table 2.2 displays an example of such a trace. Accordingly, a trace is a recorded representation of a case of the process, analogously to an execution sequence of a process model, which is a modeled representation of a case [40]. This section takes a closer look at process logs, in particular at what they are able to capture and which ways they offer to detect obstructions.

Table 2.2 Example DMV trace

2.3.1.1 Formats

Event logs are the core ingredient for process mining algorithms. They exist in different formats. Following the Mining eXtensible Markup Language (MXML), its successor, the XES format, was established. XES is an XML-based format for the interchange of event log data between tools and application domains, approved by the IEEE as the Standard for eXtensible Event Stream (XES) for Achieving Interoperability in Event Logs and Event Streams (1849-2016) [115].

Figure 2.9
figure 9

Standard transactional life-cycle model [3]

Analogously to the described basic structure, an XES document is an XML file containing a log that consists of any number of traces. Each trace describes a sequential list of events that are assigned to a particular case. The log, its traces, and its events can have any number of attributes, which can be nested within each other. No fixed set of mandatory attributes is required for each element (log, trace, and event). However, to provide semantics for such attributes, the log refers to so-called extensions. Each extension can define attributes that are considered standard when the extension is used. XES can declare certain attributes as mandatory fields; for example, it can be specified that each trace should have a name. Thus, not every possible attribute must be contained in a log [3]. In logs of higher granularity, information on the state of an activity execution can be captured as well. This transactional information on activity instances can carry different state attributes according to the standard transactional model, displayed as a state machine in Figure 2.9. These attributes are of considerable interest when filtering the log for potentially obstructed or completed traces, in the sense of unsuccessful or successful traces respectively.

Listing 2.1 XES log excerpt of a further example case of the DMV process (case 22)

Listing 2.1 shows an XES version of a further example case of the DMV process. Here, the organizational extension defines a resource attribute of type xs:string (e.g., for "Alice"). Further, this log also provides the lifecycle extension, which allows representing the status of each activity execution. In this way, the process abortion that occurred in case 22 can be documented (see line 41). Hence, depending on the information recorded during process enactment, a log file is able to capture obstructed and successful traces at different levels of detail.
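As a minimal illustration of this structure (an invented two-event excerpt in the spirit of Listing 2.1, not the listing itself), the following Python sketch parses an XES-like document with the standard library and reports the lifecycle transitions per case, which is exactly the information needed to spot an aborted case such as case 22:

# Parse an XES-like excerpt and print the lifecycle transition of each event.

import xml.etree.ElementTree as ET

XES = """
<log>
  <trace>
    <string key="concept:name" value="case 22"/>
    <event>
      <string key="concept:name" value="Compute market value"/>
      <string key="org:resource" value="Alice"/>
      <string key="lifecycle:transition" value="complete"/>
    </event>
    <event>
      <string key="concept:name" value="Control market value"/>
      <string key="lifecycle:transition" value="pi_abort"/>
    </event>
  </trace>
</log>
"""

def attr(elem, key):
    """Read a string attribute of a trace/event by its XES key."""
    node = elem.find(f"string[@key='{key}']")
    return node.get("value") if node is not None else None

log = ET.fromstring(XES)
for trace in log.findall("trace"):
    case = attr(trace, "concept:name")
    for event in trace.findall("event"):
        print(case, attr(event, "concept:name"), attr(event, "lifecycle:transition"))
# The trace ends with lifecycle transition 'pi_abort', i.e., the case aborted.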

2.3.1.2 Filtering

It is common practice to filter logs before applying process mining techniques. Filtering is basically used first to clean up the log so that it does not contain erroneous traces, for instance, by minimizing noise [40]. There are systematic approaches to identify noise patterns [192] as well as to filter and clean event data [53, 205]. In this respect, the term "incomplete logging" also relates to errors that happen during the recording of process data, for example missing activities distributed along the entire process execution. It is important to note that filtering traces that are incomplete, in the sense of not completely executed, involves a notion of incompleteness that should not be confused with the term "incomplete logging". In that context, "fragmentary" or "gappy logging" would be a more differentiated description of what is often meant by "incomplete logging". In contrast, in the sense of an obstructed execution, incomplete means "not completed" with respect to the flow of activities, and refers to an error-free recording of a partial trace.

Endpoints Filter:

A log usually represents an excerpt from the system log over a certain period of time. Due to this selected time period, there may be incomplete traces whose start or end activity lies outside the selected scope. It must therefore be ensured that traces whose beginning or end was "cut off" are filtered out as well. In this respect, the so-called "endpoints filter" allows determining what should be the first and the last event of a process case. With regard to the identification of obstructions, the endpoint filter remains of interest even for a log that no longer contains traces cut off by the selection of the log interval. It can also be used to filter traces that fulfill the liveness property of process completion, by scanning the traces for the occurrence of an end event. That way, if the end activity that distinctively characterizes a completed process is known, complete and incomplete traces can be separated. However, this does not necessarily mean that incomplete traces are also obstructed traces. Therefore, in order to increase the likelihood of actually filtering obstructed traces, further and more fine-grained filtering can be conducted if a log provides more detailed information.
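A minimal endpoint-filter sketch (traces represented as plain activity lists, with a hypothetical known end activity) separates traces that reach the designated end activity from those that do not:

# Endpoint filter: a trace counts as complete only if its last recorded
# activity is the known end activity of the process.

def split_by_endpoint(log: list[list[str]], end_activity: str):
    complete = [t for t in log if t and t[-1] == end_activity]
    incomplete = [t for t in log if not t or t[-1] != end_activity]
    return complete, incomplete

log = [["register", "compute", "control", "archive"],
       ["register", "compute"]]               # stops early: possibly obstructed
done, open_or_obstructed = split_by_endpoint(log, end_activity="archive")
print(len(done), len(open_or_obstructed))     # -> 1 1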

Attribute-Based Filtering:

Based on the attributes given by the XES standard, further fine-grained filtering can be done in order to split the log into aborted and completed cases. As indicated in Listing 2.1, the transactional model provides state attributes that allow indicating that a case or process instance was aborted (cf. abort_case or, in XES, pi_abort (see line 41)). Further, an activity that was only "scheduled" (see line 36 in Listing 2.1) but has not been assigned yet can indicate an obstruction as well, namely when the assignment was not allowed due to the policy or not possible due to unavailable resources. Filtering traces in this way provides a first indicator that incomplete traces were actually aborted due to an obstruction. Conformance checking, which will be highlighted as a process mining method in the next section, is able to go one step further. For example, based on the given policy, it can indicate whether SoD conflicts were involved in an aborted or only scheduled case. This would exclude cases that were aborted for other reasons, for example system failure, and would substantiate the suspicion that the regarded traces were indeed obstructed due to the enforcement of safety properties.
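Building on the endpoint filter, attribute-based filtering can be sketched as follows (events as dictionaries with the XES lifecycle attribute; the field names mirror the standard, the data is invented): traces containing an abort transition, or an activity that was scheduled but never started or completed, are flagged as obstruction candidates:

# Attribute-based filtering on the lifecycle perspective.

def is_obstruction_candidate(trace: list[dict]) -> bool:
    """Flag traces that were aborted or contain an activity that was
    scheduled but never started or completed."""
    started = {e["concept:name"] for e in trace
               if e.get("lifecycle:transition") in ("start", "complete")}
    for event in trace:
        if event.get("lifecycle:transition") == "pi_abort":
            return True  # case explicitly aborted
        if (event.get("lifecycle:transition") == "schedule"
                and event["concept:name"] not in started):
            return True  # activity scheduled but never assigned/executed
    return False

trace = [{"concept:name": "compute", "lifecycle:transition": "complete"},
         {"concept:name": "control", "lifecycle:transition": "schedule"}]
print(is_obstruction_candidate(trace))  # -> True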

To conclude, filtering in its common application is often used, for example, to select only the most frequent traces of an event log in order to simplify the application of subsequent process mining methods, to reduce their computational effort, and to strengthen the significance of the results in the light of a specific question. This whole process has an iterative nature because process mining results most likely trigger new questions, and these questions may lead to the exploration of new data sources or more detailed data extractions. Typically, several iterations of the extraction, filtering, and mining phases are needed. In this context, the underlying questions of process mining can also be seen from a business intelligence perspective [3]: a concrete question is addressed to the logs, for example, how the process performs, whereupon the log is filtered against this question and analyzed. The same applies to the use of logs in the security context, or, in the sense of this thesis, in identifying and dealing with obstructions. The following methods of process mining will be introduced briefly and then specifically considered with regard to their possible advantages for the detection and handling of obstructions in security-aware workflows.

2.3.2 Process Mining

Process mining addresses both processes and data, which are fundamental elements of digitization. It is a fairly young research discipline that can be located between machine learning and data mining on the one side and process modeling and analysis on the other. The basic idea of process mining is to discover, monitor and improve real processes (i.e., not assumed processes) by extracting knowledge from event records that are readily available in today's systems [3]. It can essentially be subdivided into three disciplines, as shown in Figure 2.10. This section presents a brief overview of the process mining methods and relates them to process security and obstructability.

Figure 2.10
figure 10

Positioning of the three main types of process mining: discovery, conformance, and enhancement [3]

2.3.2.1 Process Discovery

Process discovery has the goal of discovering a process model based on logs without the use of any a priori information. If the event log provides further information, for instance about resources, it is possible to discover resource-related models, such as a social network that reveals how people work together in an organization [3]. Discovery approaches related to security aim to reconstruct models that are as precise as possible, in order not to rule out possibly rare but important deviations that involve suspicious facts and indicate violations [189]. Clear challenges here are the aforementioned incompleteness and noise, i.e., recorded behavior that did not actually happen in this form. Regarding obstructions, this challenge also applies to the distinction between erroneous or incompletely recorded executions and uncompleted (in the sense of unsuccessful) executions.

Hence, given that noise and errors are eliminated, analyzing a discovered model allows revealing unsuccessful executions, for instance, remarkable paths in the model that noticeably skip the usual activities and come to an abrupt end. On the other hand, based on previous filtering, the discovery technique can focus on successful executions, such that the discovered model of successful traces is used to identify obstructions or to show which paths exist to avoid or even handle them. Such comparisons of log and model already belong to the next type of process mining.

2.3.2.2 Conformance Checking

Conformance checking is based on a discovered or a manually defined model. It compares a process model with an event log of the same process and can be used to confirm that the reality recorded in the log is consistent with the model, and vice versa. For instance, the SoD constraint states that the computation of the market value and its control need to be done by two different people. By scanning the event log using a model that defines this requirement, a violation, and thereby potential fraud by the actors involved in the found traces, can be detected. Therefore, conformance checking can be performed to identify, detect and refine anomalies and measure their severity [3]. Checking for security properties may thus be performed by means of conformance checking. Clearly, as a basis for a precise analysis of security requirements with conformance checking, process discovery also needs to be precise, so as to capture important security aspects such as deviations as well, and not to rule out possibly rare but suspicious paths that indicate violations. More specifically, based on the comparison of model and log, there are three general so-called conformance checking artifacts, which indicate consistent and deviating parts: rule checking, token replay, and alignments. The indicated SoD example represents such a behavioral rule, which is defined by the model and violated by some traces of the event log. As to the replay, events of traces can either be replayed by task executions in the process model, or the replay fails. An alignment tries to "align" events from a trace of the event log with the task executions of an execution sequence from the model [40].
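A rule-checking sketch for the mentioned SoD constraint (traces as lists of (activity, resource) events, activity names invented) scans each trace for two conflicting activities executed by the same person:

# Rule checking as a simple conformance artifact: detect SoD violations,
# i.e., the same resource executed both activities of a conflicting pair.

def sod_violations(trace: list[tuple[str, str]], sod: set[tuple[str, str]]):
    performer = {activity: resource for activity, resource in trace}
    return [(a, b, performer[a]) for (a, b) in sod
            if a in performer and b in performer and performer[a] == performer[b]]

trace = [("compute_market_value", "alice"), ("control_market_value", "alice")]
print(sod_violations(trace, {("compute_market_value", "control_market_value")}))
# -> [('compute_market_value', 'control_market_value', 'alice')]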

Identifying Obstructability ex-post:

Because the traces in the log represent the actual behavior of how the process was lived, the process log can be depicted similarly to the overall behavioral scope in Figure 2.14. Conformance checking can then be illustrated as drawing the lines in an unclassified behavioral scope given by the log according to given requirements, such that, for example, the secure, compliant and non-compliant areas are identified. The separation of consistent and deviating parts with regard to certain requirements can be used to separate successful and obstructed traces as well, which allows reasoning on the obstructability of the process and how it is handled. For example, rule checking can reveal traces in which a liveness property, for instance a finalizing activity, is missing; replaying traces on a model can indicate non-replayable and potentially obstructed traces; and, in a more fine-grained way, alignments can be used to indicate successful and obstructed traces. Alignments can be assessed against different metrics, most prominently fitness and precision: fitness investigates how much behavior of the log is captured by the model, while precision asks how accurately the model describes the log. Depending on the granularity of the log, successful traces can also be traces that only come close to the model but still represent a complete execution. Here, execution sequences based on a model that addresses the requirements stipulated for the preventive analysis in Section 2.1 may increase the significance of the alignments computed for a given obstructed trace.

The identified categorizations and separations in the process mining context can form the basis for further analysis. For example, by identifying traces that do not reach the end of an execution, indicators of possibly related problems can be investigated by comparing these traces to successful ones. Clearly, based on such a separation, discovery techniques can derive a successful and an obstructed model in order to identify problems in the process or policy specification as well.

2.3.2.3 Enhancement

Enhancement represents the third type of process mining. Its overall idea is to extend or improve a process by using information about the actual process recorded in some event log. On the one hand, the model or the log can be repaired (i.e., improved) based on the findings of discovery or conformance checking. On the other hand, the model or the log can be extended or enriched with further information based on such findings. Repairs and extensions can also be intertwined and work in both directions: the log improves and repairs the model, and vice versa. Log enhancement can therefore enrich the events of a log with additional information, which can then be used in further techniques for a log-driven analysis of the regarded process. Such information originates from a process model or from an analysis conducted on the basis of a process model, for example, the labeling of activities with responsible roles or the probability of completing the process. The enhancement of the model enriches the model based on the log, which enables further types of model-driven analysis. A typical example: if two activities are modeled sequentially but can happen in any order in reality, the model is corrected to reflect this [40].

Extension can mean adding a new perspective related to the log. A prominent example is the extension of a model with performance data. For instance, by using timestamps in the event log, one can extend the model to show bottlenecks, service levels, throughput times, and frequencies. In particular, the model can be enriched with the duration of the activity executions, such that a distribution is fitted to the execution times recorded in the log per activity. This information enhances the process model and, for instance, enables performance simulation and prediction. In this way, logs can be used to tackle the gap of not knowing what is going to happen next in the process execution and they can help in better defining probabilities that certain events occur. Further, a model can be extended with information about resources, decision rules, quality metrics, etc. [5].
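As a small illustration of such a performance extension (purely a sketch with invented event fields, not a specific tool's API), the following snippet aggregates the durations observed in the log into a mean per activity, which could then annotate the corresponding model elements:

# Enhance the model with per-activity mean durations observed in the log.
# Each event carries hypothetical 'start' and 'end' timestamps (in seconds).

from collections import defaultdict
from statistics import mean

def activity_durations(log: list[list[dict]]) -> dict[str, float]:
    samples = defaultdict(list)
    for trace in log:
        for e in trace:
            samples[e["activity"]].append(e["end"] - e["start"])
    return {activity: mean(ds) for activity, ds in samples.items()}

log = [[{"activity": "compute", "start": 0, "end": 40},
        {"activity": "control", "start": 50, "end": 65}],
       [{"activity": "compute", "start": 0, "end": 60}]]
print(activity_durations(log))  # -> mean durations, e.g., compute: 50.0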

Hence, regarding this thesis, on the one hand, “extensions” seem useful to enrich models or logs, thus enabling the required indicators to be taken into account. On the other hand, “repairing” also offers interesting approaches, which are beneficial for the completion (or “repairing”) of obstructed executions.

Extensions to Capture Indicators:

Extensions, in particular the extension of the model, are of interest for the consideration of an indicator-based security. Here, mainly security-relevant indicators are to be considered. There is a wide range of indicators that can be derived from logs [17, 164, 165]. Besides the prominent Key Performance Indicators (KPIs), mining resource profiles and related indicators has raised a lot of interest in recent years. Regarding this work, the Resource Behavior Indicators (RBIs) are subsequently sketched as a particularly suitable example, because resource behavior is also relevant for security and completability. Different taxonomies cover the different behavioral aspects of resources, which are exemplified in Figure 2.11. When it comes to resilience, satisfiability and obstruction resolution, user reliability can, for example, be considered an important factor in assigning the most reliable users to an already "ailing" execution in order to ensure its completion. Such a reliability indicator lessens the risk of assigning an employee who is likely to be involved in security violations. That way, the process model can be enriched with indicators that preferably assess policy violations and the associated risks.

Ultimately, the computation and the weighting of different indicators can result in a final number expressing the overall risk. This can be done not only with regard to the users but also to the tasks to be performed. A model which is able to capture these indicators would create a framework for an indicator-based security and would enable a security-sensitive and differentiated view on violations.
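The following sketch illustrates this idea of indicator computation and weighting (invented case fields and weights, not the RBI definitions of [164]): a per-user reliability indicator is derived from the log as the fraction of the user's cases that completed without violation, and several indicators are then folded into one weighted risk number:

# Derive a reliability indicator per user from the log and combine
# indicators into a single weighted risk number (weights are hypothetical).

def reliability(log: list[dict], user: str) -> float:
    """Fraction of the user's recorded cases that completed without violation."""
    cases = [c for c in log if user in c["participants"]]
    if not cases:
        return 0.0
    good = [c for c in cases if c["completed"] and not c["violations"]]
    return len(good) / len(cases)

def risk_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted aggregation of (normalized) indicators into one risk number."""
    return sum(weights[name] * value for name, value in indicators.items())

log = [{"participants": {"alice"}, "completed": True, "violations": []},
       {"participants": {"alice"}, "completed": False, "violations": ["SoD"]}]
r = reliability(log, "alice")                        # -> 0.5
print(risk_score({"unreliability": 1 - r, "task_criticality": 0.8},
                 {"unreliability": 0.6, "task_criticality": 0.4}))  # ~0.62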

Figure 2.11
figure 11

Using logs to deduce resource indicators [17]

Repair: Fixing the process specification:

In general, repairing can be understood in the sense of repairing the model such that it better reflects reality. On the one hand, repairing can be useful for policy designers who want to fix obstructions on the basis of insights from process mining. Based on the separation into successful and obstructed logs and further conformance checking, weaknesses in the specification, for example in the policy design, can be uncovered. Further, if certain risky paths that allow obstructions do not occur in the log, the process designer may consider changing or adapting the model. For example, it is possible to repair an SoD policy in a way that makes the overall policy more restrictive and thus prevents obstructions during execution by design. To do this, the two activities of an SoD conflict can be assigned to disjoint sets of users, so that no user is authorized for both activities at all (which, however, negatively impedes the flexibility of process execution). In a broader sense, repairing may also be understood as the fixing of an obstructed path in the model, such that, based on the log, either the path is changed or it is extended so that it can still be completed.
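A minimal sketch of this restrictive repair (hypothetical user sets; one of several possible ways to split the overlap) makes the authorized user sets of two SoD-conflicting activities disjoint, so that no single user is authorized for both by design:

# Restrictive SoD repair: make the authorized user sets of two conflicting
# activities disjoint, so no single user can ever execute both.

def repair_sod(auth: dict[str, set[str]], a: str, b: str) -> dict[str, set[str]]:
    overlap = auth[a] & auth[b]
    repaired = dict(auth)
    # Keep overlapping users on activity `a` only (one possible split).
    repaired[b] = auth[b] - overlap
    return repaired

auth = {"compute": {"alice", "bob"}, "control": {"alice", "carol"}}
print(repair_sod(auth, "compute", "control"))
# -> compute: {'alice', 'bob'}, control: {'carol'}; note the reduced flexibility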

Based on the log that contains successful executions and an obstructed trace, the question is how this trace can be repaired such that it is completed with minimal violation. Repairing can also be understood as a repair at runtime, where the question would be how an obstructed execution can be completed. It is then not a question of repairing the whole model, but only an execution sequence or a case. In this respect, a further distinction of process mining is introduced, regarding the point of time of its application, namely online and offline process mining.

2.3.2.4 Offline and Online Process Mining

The traditional way of using process mining is offline. This means that only closed cases are taken into account, that is, the event log contains complete traces corresponding to the cases completely processed in the past. However, for operational support, for example, the handling of obstructions during execution, it is necessary to consider “live” event data and to provide an online response to these data. The basic idea here is to only consider cases that are currently still running, because these can also be influenced and still generate events. They are described by partial traces [3].

The basic setting of these online process mining approaches is that there is some partial trace of a running execution. Based on that partial trace, an operational support system considers insights from the log to find violations or to make predictions and recommendations (cf. Figure 2.12). In particular, the log is used to learn a normative, predictive or recommendation model, which then builds the basis for the operational support system to put the given partial trace into the perspective of past executions. Interestingly, this basic setting is comparable to the basic situation of this thesis in handling obstructions: on the one hand, there is a partial trace that is obstructed, and on the other hand, the approach is supposed to provide, in a sense "predict" or "recommend", a security-sensitive partial trace to still complete the execution. Various techniques can be used to generate predictions, for example, techniques from supervised learning. On the basis of the information contained in the partial trace and a prediction model, predictions can be derived, for instance regarding a KPI such as the remaining flow time or the expected total costs. The prediction model is based on historic event data but can be used to make predictions for cases that are still running. Recommendations are based on predictions as well [3].

Figure 2.12
figure 12

Recommendation [3]

Instead of focusing on the model and the policies for execution monitoring, as done before, logs can also be used to check executions in the manner of a monitor. Thus, not only the preventive analysis but also the detective view can be used as a basis for monitoring. Here, the previously identified detective monitoring comes into play. It is related to process enhancement in the sense that the process is extended with additional information, but discovery and conformance checking can be involved as well. Therefore, predictive monitoring will be introduced in the following.

2.3.2.5 Predictive Monitoring

A sub-discipline of online process mining that is particularly noteworthy against the background of this work is predictive monitoring. Predictive business process monitoring techniques go beyond traditional ones by predicting quantifiable metrics about the future state of the running instances of a business process (i.e., the cases) [143]. Different questions can be addressed by prediction: What will probably be the next activity to be executed? Will there be violations in the execution? How long will the overall process take, and will the remaining time stay below a certain bound (e.g., for a loan application)? Or, what will be the result of the process (e.g., will a client purchase an item or not, or, more generally, will the business goal be achieved)?

Figure 2.13
figure 13

Predictive Process Monitoring [203]

More specifically, as depicted in Figure 2.13, predictive monitoring assumes a partial trace for which predictions about the future can be made on the basis of the process log. The event log is the input of these methods and provides the necessary characteristics that define the process for the prediction. The predicted value represents the output of these methods and applies either to the current process instance or to a collection of instances. Depending on the target of the prediction, this value belongs to a particular domain and can be numeric (e.g., the remaining time of a process), Boolean (i.e., regarding an outcome, e.g., the fulfillment of a particular goal), or categorical (e.g., regarding a user) [144]. The related literature aims to predict a wide range of values, among them time-related values, foremost the remaining execution time; the next event of a given case [91, 166, 195]; an estimation of the value of a single indicator or an aggregate attribute; LTL formulas that determine the occurrence of a certain situation in the process; the risk probability (e.g., of the violation of a constraint or an abnormal termination); or, in general, the final outcome of a case with respect to a possible set of business outcomes [81, 84, 85, 143, 149, 150]. Regarding the latter, each running case of a process can be classified according to a given set of possible categorical outcomes [202]. For instance, a possible outcome of a case may be that a collateral evaluation is finally completed (e.g., the acquisition was finally approved). On the other hand, the case could also be closed unsatisfactorily (e.g., the evaluation was aborted and the process goal was not reached). A further outcome that could be considered is whether the collateral evaluation was performed within a specific time (with respect to a maximum acceptable waiting time for the overall process). Predictions can be used to alert process users to problematic cases or to support the assignment of resources, for example, assigning additional resources to risky instances [144, 201]. Hence, such an outcome may be the fulfillment of a compliance rule, a performance objective (e.g., a maximum allowed cycle time) or business goal, or any other characteristic of a case that can be determined upon its completion.

Predicting Obstructions:

The questions of predictive monitoring can be related to the question of obstructability: Will the process be successfully executed or will it become obstructed? Will there be enough users to execute the process? Will the current partial trace be obstructed in the course of its continuation due to missing authorizations, or can the case even come to an end? Or, will a positive outcome of an obstructed execution be achieved? The similarity of these questions makes predictive monitoring particularly interesting, especially its outcome-oriented variant. In particular, predictions on the outcome in terms of the fulfillment or violation of security properties can be performed. Further, the prediction of process termination can determine the probability of the fulfillment of a liveness property. In the sense of operational support, such online, on-the-fly conformance checking cannot enforce liveness or safety properties, but allows checking them very promptly. Thus, even if, as previously explained, liveness cannot be enforced in the classical sense, one can at least increase the probability of the fulfillment of liveness properties with corresponding prediction or recommendation methods. In this respect, the log may give insights into which sequences, involving which actors and which activities, are most likely to succeed. Therefore, it can be estimated which allocation of users to tasks will probably also "enforce" process termination. In turn, problematic execution paths, for example due to unsatisfiable policies, could be identified as well, for example by relating the possible paths to their rate of completion.
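A frequency-based sketch of such a prediction (traces as plain activity sequences, purely illustrative and far simpler than the learning techniques surveyed above) estimates the completion probability of a running, possibly obstructed case from the completion rate of historical traces sharing its prefix:

# Estimate the completion probability of a running case from the
# completion rate of historical traces that share its current prefix.

def completion_probability(history: list[tuple[list[str], bool]],
                           prefix: list[str]) -> float | None:
    """`history` holds (trace, completed) pairs; None if no trace matches."""
    matching = [done for trace, done in history if trace[:len(prefix)] == prefix]
    return sum(matching) / len(matching) if matching else None

history = [(["register", "compute", "control", "archive"], True),
           (["register", "compute"], False),
           (["register", "compute", "control"], False)]
print(completion_probability(history, ["register", "compute"]))  # -> 1/3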

Recommending Obstruction Resolution:

From the view of the handling of obstructions, similarly to previous observations in this chapter, the fundamental deficit of the current approaches to predictive monitoring is that they are basically "avoidance approaches" as well. Prediction techniques are used to avoid undesired states of the process, e.g., security violations or obstructed process executions. For example, there are techniques to maximize the probability that a process execution satisfies all constraints, or to identify whether a possible execution path is likely to take too long, such that another path with a better prediction can be chosen. Hence, this can be compared to the goal of the preventive analysis, which tries to avoid unsatisfiable workflows, or to the enforcement of obstruction-free authorizations during runtime. Indeed, to some extent, predictive monitoring can be compared to the runtime approach of using synthesized plans to guide and predict the execution of a process along an obstruction-free path. The question is to what extent a predictive monitor really enforces its predictions, which would be comparable to the obstruction-free enforcement mechanisms presented before, or whether obstruction-free predictions are only recommendations, such that the user may eventually choose which path to follow. Because predictions also allow for proactive and corrective actions to improve process performance and mitigate risks, the so-called "prescriptive process monitoring" goes one step further. Prescriptive process monitoring not only predicts that process executions may result in an undesirable process state, such as a security violation, but also seeks to prevent this. It extends the scope of purely predictive systems by not only generating predictions but also advising users, in case an instance is likely to lead to an undesired outcome, if and how they should intervene in the ongoing case. Such an undesired outcome can be prevented or mitigated, for example, by optimizing a given utility function [201]. However, prescriptive process monitoring still represents a rather observational procedure. It is still an avoidance strategy, albeit a more proactive one. The ideas of correction and intervention are rather meant to influence the development of the process execution on the basis of predictions in such a way that the defined goals can be achieved as far as possible, which can only happen within the scope of action set by the process model. Further, as the name suggests, prescriptive monitoring additionally "prescribes" in advance how to react to possible undesirable developments and defines key indicators or thresholds that should alert the responsible users when exceeded, so that they can then take corrective action according to the correction prescribed in advance. It is not considered to "touch" (or even violate) the process specification. The questions of what to do in the event of an actual process obstruction, which, as explained, can occur despite all predicted probabilities, and of how obstructed traces can be fixed, are not dealt with. Nevertheless, due to the comparability of the questions posed regarding obstructability, the methods of predictive monitoring seem promising for resolving obstructions as well. Due to the abundance of such methods, which is underlined by two recent literature reviews [144, 202], it is foremost a matter of showing the fundamental feasibility of adapting these methods to the aims of this work.
Predictive monitoring is therefore of considerable interest for the handling of obstructions: on the one hand, regarding the indicators that are considered, and on the other hand, regarding the approaches used to generate predictions, such that they may inspire solutions for completing an obstructed trace based on the log.

2.3.3 Log-Based Completability Requirements (RLC)

In conclusion, it has been identified that the log offers the potential for a different approach to satisfiability, resilience and obstructability. Due to the lack of log-based techniques in the given context, this section on detective analysis has not identified deficits of existing techniques but rather revealed potentials for the application of logs. Based on these observations, log-based approaches should consider the following possibilities and requirements for the analysis and handling of obstructions:

Detect and Separate Obstructed and Successful Traces (RLC-1):

In order to be able to use logs in the context of this thesis, methods are necessary to separate obstructed traces from successful ones. This can be done by attribute-based trace filtering, by process discovery, in which an obstructed trace is captured in a path that bypasses other essential activities, or, in a more precise way, with the help of conformance checking. Thereby, logs can be separated into obstructed and successful executions. Based on this separation, an obstruction can be put into perspective. It becomes possible to assess the obstructability of the process, even if the process was not controlled by a PAIS during execution.

Obstructed traces can be used to identify deficits in the process specification: reasoning about why traces were obstructed helps to improve the process. Further, obstructed traces can be used to assess the probability that a preventively detected obstruction occurs in reality. Successful traces can be used to guide how the policy is to be improved, or for the completion of a running instance, for example, via success rates regarding an outcome in predictive monitoring (e.g., process completion), or by reasoning on how they might be used to complete partial traces. Further, a model discovered from the successful traces can guide the completion in case of an obstruction as well.

Identifying and quantifying indicators: Assign Costs Based on Log (RLC-2):

As identified, the log can be used to derive manifold indicators. Methods of filtering and conformance checking, but also process enhancement and predictive monitoring, can be used for this. These indicators can then be considered and used to determine an execution sequence that completes an obstructed execution as security-sensitively as possible. This, in turn, underlines that the model must be able to consider costs, such that quantifiable indicators can actually enhance it. In the light of the requirements for a representation, as specified in Sections 2.1 and 2.2, the extension of the model with further information is similar to the previously identified requirement to consider costs, for example in finding execution scenarios for satisfiable processes. What would be new here, in contrast to the cost-based approaches shown in the previous sections, is that the model would directly be extended with the costs. Hence, the need to capture costs has now been established as a requirement for all approaches before, during and after execution.

Proposing measures: Finding paths to complete obstructions (RLC-3):

How does a log-based approach need to be designed in order to not only detect obstructed executions but also provide measures to complete them? As identified, logs can also be used to recommend actions based on the behavior they reveal. This generally matches the approach of this thesis on how to deal with obstructions, so that the basis for the required rational decision is extended with data. Although online process mining bears a similarity to the handling of obstructions during execution monitoring (cf. Section 2.2), the log-based approach has not yet been considered for the handling and completion of obstructions. Therefore, methods already used in predictive monitoring are meant to inspire solutions to resolve obstructions. In particular, among other goals, predictive monitoring uses logs to predict the completion of processes (as a positive outcome). This, and the log-based techniques involved, represent a starting point for resolving obstructions as well. Here, first and foremost, the basic practicability of using logs to resolve obstructions needs to be shown.

To conclude, this work also considers logs in order to unleash their potential in detecting and handling obstructions. There are manifold ways to separate completed traces from obstructed ones, for example by analyzing safety and liveness properties. Based on this, it is possible to derive meaningful indicators, which can be considered as costs when solutions are computed for an indicator-based security. These costs can then be incorporated into the representation required in Sections 2.1 and 2.2. For completing obstructed traces, in particular the adaptation of methods from predictive monitoring seems promising to actually tackle obstructions based on logs. Such a completed trace would possibly violate a safety property such as an SoD requirement, but would eventually allow liveness to be "enforced". Depending on the information system used to execute a process, the policy or process model is not necessarily enforced, and it may happen that only logs are available. Therefore, the log-based approach should stand on its own, but at the same time it should be usable to complement the specification-based approach.

Figure 2.14
figure 14

Process behavior sketching completed satisfiable executions reaching end state (no outgoing arc) and k-resilience (with k=1, i.e., one absent user)

2.4 Security-Sensitive Detection and Handling of Obstructions

The previous sections of this chapter identified that the engineering of methods that can handle unexpected process "obstructions" is a hardly touched research field. However, it is very relevant in practice because it contributes to the construction of reliable, secure enterprise information systems [8].

There are different research directions, all of which ultimately try to avoid an obstructive state. Besides the extensive research conducted on the analysis of satisfiability and resilience before process execution, mainly research topics concerning the enforcement of obstruction-free executions or other forms of obstruction avoidance were identified. Interestingly, the previously identified nature of classic IT security comes to light here: an obstructive state is rather avoided than handled. This can be illustrated by an exemplary allocation of existing approaches to the behavioral space. Obstructive situations mainly originate from applying classic IT security to workflows. Figure 2.14 indicates the restricted behavioral space of satisfiable workflows, and the even more restricted scope of resilience (for example, for k=1). Obstruction-free behavior corresponds to the secure behavior or restricts it even further. Hence, similarly to these two indicated areas, a large part of existing research tends to restrict the behavioral scope even further. Research on related notions focuses on keeping the secure state across all process entities. As for the use of logs, predictive monitoring equally aims at avoiding unwanted outcomes (e.g., the avoidance of non-completing paths). All such approaches act in the frame set by classic security. Before execution, these avoidance techniques definitely make sense, because the obstructive situation has not yet arisen, and they can be used to improve the process or policy design and to detect design flaws. Taken together, classic WSP research operates in the frame set by IT security as well, and only a few other approaches consider a security violation. Interestingly, those that do actually assign costs to policy changes or violations, which is somewhat in line with the required indicator-based approach to process security.

2.4.1 Main Deficits in Obstructability Research

Figure 2.15 shows the different entrepreneurial options for avoiding or for confronting and handling obstructions. To actually handle obstructions, the behavioral scope needs to be extended in the sense of the paradigm shift towards an indicator-based security, i.e., handling obstructions must be allowed within the frame set by compliance (see the cross-hatched area in Figure 2.15). Since each subchapter has identified requirements for preventive, runtime and detective approaches, the main deficits regarding the detection and handling of obstructions will now be highlighted and summarized. This allows the solutions of this thesis to be placed in a more comprehensive framework.

Figure 2.15

Existing WSP and resilience approaches and the potential behavioral gain in handling obstructions

Despite the need to automate regulation, as identified in Chapter 1, this chapter has shown that there is no automated or semi-automated security-sensitive solution to obstructions. Although there are approaches with this intention, they often exist only as first concepts and aim to change policies, or even override them without taking them into account (e.g., Break-Glass).

Although delegation does not totally override policies, it involves the risk of collusion and requires the availability of a delegator. Here, automating delegation seems promising for achieving a higher degree of process automation. There are only a few approaches that allow working with indicators, and they consider rather coarse-grained policy violations. Moreover, these promising cost-based approaches, which allow for policy violations, are not able to adequately capture the state of obstruction and do not consider a comprehensive process structure. The approaches considered do not start from an explicitly specified obstruction from which it could be reconstructed how the obstruction occurred in the first place. They rather assume that such a situation exists and then search for solutions without actually fully taking the obstructive situation into account. In particular, there is no representation of the problem of obstruction that captures all inputs involved. Such a representation, however, would set the frame for further steps to handle the obstructive situation. Although logs constitute a further input, the literature has so far not considered their use for handling an obstructive situation. In essence, the following main deficits in the detection and handling of obstructions can be observed, namely there is

  • no comprehensive representation of the problem of obstruction,

  • no security-sensitive solution, and

  • no use of logs.

Figure 2.16

Main deficits regarding completability

2.4.2 New Spheres of Action: Expanding the Behavioral Space

How can the scope of action be widened to allow for more obstruction-handling behavior that is still compliant (see the cross-hatched area)? In order to understand the general approach to obstruction handling and its requirements, the big picture, which illustrates the general effects of obstruction, is broken down again to the consideration of an individual obstruction. The arrow towards the finish flag in Figure 2.16 indicates an obstruction arc. The aim of the approaches of this thesis is then to find a partial trace or activity sequence that completes the obstructed process execution (i.e., to find the missing path to reach the finish flag). This again corresponds to the literal meaning of the term obstruction. In contrast to a deadlock, which in its literal sense leaves no possibility of still finding a good end, an obstruction means that something is blocked but the goal is still in sight. Therefore, the aim is to clear away or bypass whatever is causing the obstruction in order to still reach the goal that has so far been blocked. Clearly, if a process obstructs in practice due to the lack of practical solutions, process participants often choose pragmatic ways beyond the identified approaches. They figure out how to work around an obstruction, for example by making phone calls to find out why a process is blocked, or they get their business done in a way completely unrelated to the process. This is hardly observable, let alone controllable, from a PAIS and involves many security risks. Therefore, a security-sensitive approach to tackling obstructions needs to set a frame that allows a PAIS to stay in control and aware of the obstructive situation, and it has to address the deficits of existing approaches.
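The idea of finding the missing path to the finish flag can be sketched as a search over the states reachable from the obstructed state. In the following Python fragment the state graph is a purely hypothetical stand-in for what a PAIS would derive from the process specification (e.g., a reachability graph); the search returns a completing activity sequence if one exists and None in the case of a genuine deadlock.

    from collections import deque

    # Hypothetical state graph: state -> list of (activity, next_state).
    # "s2" plays the role of the obstructed state.
    graph = {
        "s0": [("receive", "s1")],
        "s1": [("check", "s2")],
        "s2": [("delegate", "s3"), ("escalate", "s4")],
        "s3": [("approve", "end")],
        "s4": [],
    }

    def completion_path(start, goal="end"):
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, path = queue.popleft()
            if state == goal:
                return path
            for activity, nxt in graph.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [activity]))
        return None  # no completing sequence: a genuine deadlock

    print(completion_path("s2"))  # ['delegate', 'approve']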

2.4.2.1 General Framework for Requirements

Based on the identified deficits and against the background of Figure 2.16, three main requirements are identified for the approaches to detect and handle obstructions:

  • Capturing the state of obstruction (GR-1): Obstructions can currently be represented only tediously and not comprehensibly, based on separate specifications of the process model, the policy, and further constraints. Hence, there is no intuitive representation that captures the obstructed state. If such a representation were graphical, it would additionally allow obstructions to be visualized more clearly, for example when an SoD rule conflicts with the progression of the process. Furthermore, it would facilitate the handling of obstructions because it forms a more comprehensive basis for applying analysis techniques that make it possible to find a solution.

  • Detecting and solving obstructions in a security-sensitive way based on indicators (GR-2): Given the identified deficits of existing approaches, the “tackling” of an obstructed state has to be done in a more security-sensitive way. Along all three phases of execution, the need to foster an indicator-based security can be underlined. Here, a cost-based approach constitutes a frame for considering and capturing indicators. Given that the least cost represents the highest degree of security-sensitivity, the minimization of these costs then constitutes an optimization problem (a minimal sketch of such a least-cost selection follows this list).

  • Considering all inputs (model, policy, log) throughout all approaches (GR-3): Based on the identified phases of process execution, this thesis strives for a holistic approach that tackles model-based as well as log-based situations. The problem shall be solved by taking all relevant inputs into account and deducing an optimal solution based on indicators, thereby incurring the least risk or violation. Logs have so far not been used for obstruction analysis and handling, although they are useful for the indicator-based view on security. Such a solution helps to better implement regulation because it also allows risks to be considered more thoroughly. Here, one can imagine ex ante as well as ex post approaches helping to steer the execution towards completion. Hence, the notion of obstructability in this work aims at handling obstructions at runtime in a way that also benefits from the approaches of the other phases. By addressing these requirements, the risk of damage from blocked process executions as well as from security policy violations shall be minimized.
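As a minimal illustration of the optimization problem behind GR-2, the following Python sketch selects the escape from an obstruction with the lowest total indicator cost. The candidate escapes and their costs are purely hypothetical; in the approaches of this thesis such costs would be derived from the model, the policy and the log (GR-3).

    candidates = [
        # (description, list of (violation, cost)); all values hypothetical
        ("delegate approval to deputy", [("delegation", 2.0)]),
        ("same user checks and approves", [("SoD violation", 5.0)]),
        ("delegate and skip recheck", [("delegation", 2.0),
                                       ("skipped control", 4.0)]),
    ]

    def total_cost(candidate):
        _, violations = candidate
        return sum(cost for _, cost in violations)

    # Least cost = highest degree of security-sensitivity.
    best = min(candidates, key=total_cost)
    print("least-cost escape:", best[0], "at cost", total_cost(best))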

Figure 2.17

Contribution of work to analyze, detect and resolve obstructions

In conclusion, as depicted in Figure 2.15, the goal of this thesis is to develop an approach that can detect obstructions and resolve them in a policy-aware way. The general approach is located between security and business goals, compliance and indicators. In the following chapters, the identified deficits will be addressed with a holistic approach that takes the design, execution and audit phases into account. Figure 2.17 illustrates the problem setting and places the contributions of this work into the identified gaps. The SecANet approach (Chapter 3) addresses the lack of a representation capturing the state of obstruction and lays the foundation for an indicator-based solution. In order to handle obstructed executions, a hybrid approach will be presented, depending on the existence of historical information. Both parts will be able to consider indicators such that obstructive situations become resolvable in a security-sensitive way. The OLive-M approach (Chapter 4) will show how, based on the SecANet, model-based solutions to obstructions can be computed using lightweight analysis methods. The OLive-L approach (Chapter 5) primarily uses the log to find solutions for handling an obstruction. That way, this work represents a holistic approach that takes models and logs into account and is applicable at design, run and audit time. Beyond the general requirements (GR) identified in this section, these approaches will address the specific requirements that apply to their respective execution phase (i.e., ROA, RSC, RLC).

Altogether, by analyzing, detecting and handling obstructions, the goal is to improve security in business processes. Conflicts caused by security policies are to be captured and resolved in a security-sensitive way, so that processes are allowed to complete in a policy-aware manner and still meet compliance. The overarching goal is to relax the tension between overly strict security controls and the business goals of processes in practical settings. By providing mechanisms that predict (or at least detect) obstructions and, based on the existing process controls and data, propose workarounds that, albeit not fully policy-compliant, allow enactment with the nearest security-sensitive match feasible, it will become possible to engineer enterprise systems with a high degree of both flexibility and security.