Metrics for Business Process Models

Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 6)


Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.

In this instance, measurement refers to a process that assigns numbers or symbols to attributes of entities in the real world [124] in order to represent the amount or degree of those attributes possessed by the entities [432, p.19]. In this way, measurement opens abstract concepts to empirical evaluation and is, therefore, a cornerstone of the natural, social and engineering sciences. In this definition, an attribute refers to a property or a feature of an entity, while an entity may be an object or an event in the real world. Measurement serves at least the following three purposes: understanding, control and improvement.

The classical statement attributed to Galilei, “What is not measurable make measurable”, stresses the ability of measurement to deliver understanding. The principal idea behind this phrase is that measurement makes concepts more visible. In effect, entities and their relationships can be tracked more precisely, bringing forth a better understanding. In an emerging discipline like complexity of business process models, it might not be clear what to measure in the first place. Proposing and discussing measures opens a debate that ultimately leads to a greater understanding [124, p.7].

Measurement then enables control in order to meet goals. According to DeMarco, “you cannot control what you cannot measure” [103]. Based on an understanding of the relationships between different attributes, one can make predictions such as whether goals will be met and what actions need to be taken. For business process modeling projects, it is important to establish suitable measurements since, as Gilb points out, projects without clear goals will not achieve their goals clearly [141]. The lack of measurements that can be automatically calculated from a given process model is a central problem of several quality frameworks. Examples include the Guidelines of Modeling (GoM) [50, 388, 51], SEQUAL [250, 307, 228] and the work of Güceglioglu and Demirörs [157, 156].
While considerable empirical research has been conducted on quality aspects of data models (see [304, 135, 305, 136, 137]), such work is mostly missing for business process models [306]. Defining quality concepts in a measurable way would be a major step towards understanding bad process design in general.

Measurement is also crucial for the improvement of both business process models as products and business process modeling processes. In business science, it is an agreed-upon insight, from Taylor’s scientific management [421] to Kaplan and Norton’s balanced scorecard [205, 206], that measurement directs human behavior towards a goal. In organizational theory, this phenomenon was first recognized in the Hawthorne plant of Western Electric and is referred to as the Hawthorne Effect: what you measure will be improved (see [234, p.21]). Business process modeling has not yet established a generally accepted suite of measurements. The potential for improvements in current modeling practices is, therefore, difficult to assess. This chapter aims to contribute to a more quantitatively oriented approach to business process modeling by proposing a set of potential error determinants for EPC business process models. This is also a step towards establishing business process modeling as an engineering discipline since “to predict and control effectively you must be able to measure. To understand and evaluate quality, you must be able to measure.” [234, p.4].

The remainder of this chapter is structured as follows: Section 4.1 presents the theoretical background of measurement with a focus on scale types and issues related to validity and reliability. Section 4.2 discusses which concepts are measured in the neighboring discipline of network analysis. We focus on degree, density, centrality and connectivity metrics since they appear promising for business process models. Section 4.3 gives an overview of complexity metrics in software engineering. We highlight the most prominent metrics and discuss their relationship to more abstract quality concepts for software products. In Section 4.4, we present related work on metrics for business process models. In Section 4.5, we identify the complexity of a process model’s structure and its state space as the key determinants of error probability. Related to these two aspects, we define a set of metrics and discuss their impact on error probability. Section 4.7 gives a summary before the metrics are tested in the subsequent chapter.
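To give a first flavor of the kinds of structural metrics discussed later, the following sketch computes two of them, node degree and graph density, for a toy process graph. The graph, its node names, and the helper functions are invented for illustration only; they do not reproduce the chapter's formal metric definitions.

```python
# Toy process graph as a list of directed arcs (invented example:
# a start event, two tasks on alternative XOR branches, and an end event).
toy_edges = [
    ("start", "task_a"),
    ("task_a", "xor_split"),
    ("xor_split", "task_b"),
    ("xor_split", "task_c"),
    ("task_b", "xor_join"),
    ("task_c", "xor_join"),
    ("xor_join", "end"),
]

def degrees(edges):
    """Total degree (incoming plus outgoing arcs) of every node."""
    deg = {}
    for src, dst in edges:
        deg[src] = deg.get(src, 0) + 1
        deg[dst] = deg.get(dst, 0) + 1
    return deg

def density(edges):
    """Ratio of arcs present to arcs possible in a simple directed graph."""
    nodes = {n for edge in edges for n in edge}
    n = len(nodes)
    return len(edges) / (n * (n - 1)) if n > 1 else 0.0

deg = degrees(toy_edges)
print(deg["xor_split"])               # degree of the XOR split connector: 3
print(round(density(toy_edges), 3))   # 7 arcs out of 7*6 possible: 0.167
```

A connector with a high degree, such as the XOR split above, bundles many alternative paths and is exactly the kind of structural hot spot the later metrics are designed to quantify.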





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

