The Structured Process Modeling Theory (SPMT): a cognitive view on why and how modelers benefit from structuring the process of process modeling
After observing various inexperienced modelers constructing a business process model based on the same textual case description, it was noted that great differences existed in the quality of the produced models. The impression arose that certain quality issues originated from cognitive failures during the modeling process. Therefore, we developed an explanatory theory that describes the cognitive mechanisms that affect the effectiveness and efficiency of process model construction: the Structured Process Modeling Theory (SPMT). This theory states that modeling accuracy and speed are higher when the modeler adopts an (i) individually fitting, (ii) structured, and (iii) serialized process modeling approach. The SPMT is evaluated against six theory quality criteria.
Keywords: Business process modeling · Process of process modeling · Explanatory theory · Structured process modeling · Cognitive fit
For the design and analysis of information systems for organizations, analysts typically deal with the complexity of the organization by using conceptual models. These models abstract from specific instances and represent the generic properties of the modeled system. The focus in this paper is on process models, which are considered to be a specific kind of conceptual model. A process model is a mostly graphical representation that documents the different steps that are or that have to be performed in the execution of a particular process under study, together with their execution constraints such as the allowed sequence or the potential responsible actors for these steps (Dumas et al. 2013; Weske 2007).
The recent developments in research about process models can be classified into three research streams. One stream studies the application of process models. For example, the construction of process models has been shown to be a key success factor in process redesign (Kock et al. 2009; Xiao and Zheng 2012), software development (Krishnan et al. 1999), and communication (Abecker et al. 2000; Davies et al. 2006). Therefore it is important that the quality of process models is high.
A second research stream is thus investigating the quality of process models. Traditionally, it is believed that the quality of the model has to be evaluated relative to the purpose of the model (Juran and Gryna 1988; Lindland et al. 1994). An abundance of process model quality dimensions and metrics, targeted at various purposes, has thus been examined (Nelson et al. 2012; Vanderfeesten et al. 2007). For example, if the process model is created as a tool for communication about a particular process, the comprehensibility of the model by its intended readers can be regarded as an important quality dimension. In case the process model has to serve as input for a process-aware information system, syntactic correctness and semantic completeness may be considered to be more crucial. An extensive overview of quality dimensions and related metrics is presented by Sánchez-González et al. (2013) in their systematic literature review on process model quality research.
Recently, a third stream of process model research originated that shifts the focus from investigating what the characteristics of a good process model are towards studying how good process models are constructed. For instance, Brown et al. (2011) investigated how the use of virtual world technology increases modeler empowerment and consensual development during modeling in collaborative settings. Collaborative process modeling and how technology supports this activity is also the subject of Recker et al. (2013). Further, Pinggera et al. (2013) identified three process modeling styles relating to variations in modeling speed and model reconciliation. Lastly, Claes et al. (2015) developed a visualization that represents how process models are created in terms of consecutive operations on the model in a modeling tool.
Similar trends of research shift existed already in the broad field of conceptual modeling (e.g., Hoppenbrouwers et al. 2005) and the even more general area of system analysis and development (e.g., Chakraborty et al. 2010; Nunamaker and Chen 1990). The underlying assumption in all of these studies is that the quality of the product depends on the quality of the process that creates the product, at least to some extent. Based on the observations described in this paper, we subscribe to this assumption and presume that certain quality concerns are caused during the modeling process. Therefore, we abstract from the different process model quality dimensions and study the cognitive mechanisms during the process of process modeling in which these quality issues originate.
We define the process of process modeling (PPM) as the sequence of steps a modeler performs in order to translate his mental image of the process into a formal, explicit and mostly graphical process specification: the process model. The modeler forms a mental representation of the process based on direct observation and/or various descriptions of the real or intended process such as interview transcripts, whiteboard notes and requirements documents (Chakraborty et al. 2010). It should also be noted that mental models are rarely stable; they keep evolving as more information is processed (Rogers and Rutherford 1992). Hence, the transformation of the (individual and dynamic) mental model into an explicit process model is a complex cognitive task. During this task the modeler iterates between shaping the mental model, evaluating the mental model, converting the mental model into a formal model, evaluating the formal model, adapting the mental model, etc.
Throughout this complex task the modeler is hindered by his cognitive limits, which results in cognitive ineffectiveness that manifests itself as a decrease in accuracy and speed (Rockwell and Bajaj 2005; Sweller 1988). Therefore, the end goal of our research is to help the modeler reduce these negative effects by developing a method for process modeling that warrants the optimal use of a modeler's cognitive functions. As advocated by Avgerou (2000) and Naumann (1986), a first, fundamental step towards this goal is the collection and description of the knowledge needed to understand why, how and when cognitive failures occur during process modeling. Because such knowledge is not currently readily available, this paper is devoted entirely to its development. The presented contribution is the Structured Process Modeling Theory (SPMT), which explains why modelers who adopt an optimal structuring approach towards modeling may deal with the complexity of the task in a more cognitively effective and efficient way.
As an explanatory theory, the SPMT will serve as a foundation for the development of a method that prescribes how to create process models in a cognitively optimal way. Furthermore, the SPMT brings together cognitive theories about learning and problem solving in a fundamentally new way and has the potential to "give back" to the cognition research field because of its novel view on (the combined application of) these theories. This paper also has practical significance by providing knowledge that can be used for process modeling training, for differentiated tool development, etc.
In Section 2, the methodology that was used to build and to test the SPMT is discussed. The theory was developed by adapting and combining cognitive theories to the context of process modeling in order to explain the varying success of different observed modeling approaches. The way these observations were collected is described in Section 3. Subsequently, Section 4 provides the theoretical background for the developed SPMT, which itself is presented in Section 5. Next, the SPMT is evaluated in Section 6. The context of the research is outlined in Section 7, which summarizes related work. Finally, Section 8 contains an extensive discussion and a brief conclusion is provided in Section 9.
2 Research methodology
RQ. Why do people struggle with the complexity of constructing a process model?
The above research question asks for explanations of human behavior, and thus an explanatory theory was developed that describes the cognitive mechanisms that play a role while constructing a process model. An explanatory theory "provides explanations but does not aim to predict with any precision. There are no testable propositions" (Gregor 2006, p. 620). Other types of theories exist as well. A predictive theory, for example, does not provide explanations, but it does include testable propositions with predictable effects. Besides descriptive theories, such as explanatory and predictive theories, prescriptive theories also exist. Instead of only describing, explaining or predicting relations between constructs, they offer concrete prescriptions and relate the proposed actions to certain consequences (Gregor 2006).
2.1 Theory building
The input for theory development may include (objective) observations (Godfrey-Smith 2009; Nagel 1979), as well as (subjective) impressions (Popper 2005). New theory can then be developed by searching for explanations for these observations and impressions (Weick 1989). In order to collect observations and impressions about how modelers construct process models, exploratory modeling sessions were performed (see Section 3). An explanation for the observed relations between modeling approach and cognitive failures was sought in the cognitive literature. Section 4 explains how cognitive theories propose that the human brain is limited in handling complex tasks and that, when the brain gets overloaded, modelers tend to work more slowly and make more mistakes. These theories can explain the observed behavior and varying success of the modelers while constructing process models. We compiled and synthesized these theories into the central contribution of this paper: the Structured Process Modeling Theory (SPMT), presented in Section 5.
2.2 Theory testing
For most theories, the actual value can only be measured in the long term, by evaluating their actual use by others (Weick 1989). Nevertheless, the literature on theory in the information systems domain offers six assessable criteria for good (explanatory) theories: novelty, parsimony, consistency, plausibility, credibility, and transferability (Gregor 2006; Grover et al. 2008; Weber 2012; Weick 1989). Section 6 elaborates on the assessment of the SPMT against these criteria. For the evaluation of consistency, a second series of observational modeling sessions was examined in order to assess to what extent the described theory can be used to explain the additional observations.
3 Problem exploration
In order to explore how people construct process models, structured sessions were performed in which participants were asked to construct a business process model based on a given textual case description. These observational sessions yielded the data from which the observations and impressions that served as input for the development of the Structured Process Modeling Theory (SPMT) were derived.
3.1 Data collection method: Exploratory observational modeling sessions
During the exploratory modeling sessions it was observed how the modelers constructed a process model from a textual case description. The participants were instructed to aim for a high quality model. It was, however, not defined what was meant by ‘high quality model’.
The case to be modeled described the steps in the request handling of mortgages by a bank.1 A textual description was handed over to the participants and comprised two A4 format sheets excluding instructions. The process models that were built by the participants contained on average 27 activities, and construction took on average 276 recorded modeling operations in the tool (see below). This size indicates the complexity of the case and the modeling task according to Mendling (2008), which will be further discussed in the following sections.
In order to gain knowledge about how inexperienced modelers deal with the complexity of a case throughout a process-modeling endeavor, master students who attended a course in Business Process Management were selected as the primary target group. The sessions were strategically planned after the lectures in which the students were introduced to process modeling, but before the training of specific modeling techniques or guidelines. This way a group of participants was formed who had sufficient maturity and knowledge about process modeling without possessing an abundance of modeling experience. The focus was on inexperienced modelers because they had not yet consciously learned any technique to cope with the complexity of a modeling task, which we expected to result in more variety in the observations and a more open search for potentially interesting modeling approaches. The observational modeling sessions took place in December 2012 at Eindhoven University of Technology. The group of participants was composed of 118 master students in total, distributed over three different educational programs (i.e., Operations Management & Logistics, Innovation Management, and Business Information Systems). The mixture of educational profiles, from technically oriented to business-oriented students, has the advantage of increasing the likelihood that a heterogeneous set of observations is obtained. Participation was voluntary and the students could stop at any time without handing in a solution.
A simplified modeling language was used for the modeling sessions. It contained constructs representing the main concepts of a control flow model2: start node, end node, activity, sequence flow, parallel branch (split and join), and optional branch (split and join). These constructs were chosen because they are found in the majority of currently used process modeling languages (e.g., BPMN, EPC, Petri-Net, UML Activity Diagrams, Workflow Net, YAWL, etc.). Moreover, they are considered the most used constructs for process modeling (Zur Muehlen and Recker 2008). The advantage of this approach is that the results can be transposed to existing and perhaps also future process model notations, while the modeler would not be hindered by an abundance of model language constructs. The BPMN symbols for the constructs were used in order to be easily understood by the participants, who were familiar with the BPMN notation. This notation was used in a number of lectures of the BPM course in which the participants were enrolled.
The Cheetah Experimental Platform3 (Pinggera et al. 2010a) was used to support the data collection. This program was developed at the University of Innsbruck as an open source research platform to support experiments investigating the process of process modeling. The modeling sessions were entirely supported by this tool and consisted of three consecutive tasks. The tool tutorial task presented short videos together with a brief explanation to exemplify each feature of the modeling editor. To ensure that the tool features were sufficiently understood, the user had to mimic the actions of the video in the modeling editor correctly before the next feature was presented. Next, in the process-modeling task the participants had to construct a process model for the given case description. Finally, the survey task had to be completed by answering a questionnaire.
The experimental tool recorded each modeling operation automatically in an event log. A list of the different types of operations that were recorded is presented in Appendix A. Besides the name of the recorded operation, the event records contained additional information such as the time of its occurrence, position on the canvas, source and target activities of edges, etc. These data can be used for a step-by-step replay of the model construction process or to feed mining algorithms that support analyses of this process (such as the PPMChart visualization described in Section 3.2.1 below). Furthermore, the tool captured the constructed process models, which allows for inspecting different properties of the produced models. Finally, the questionnaire (see Appendix B) was used to collect data about the demographics of the respondents, as well as domain knowledge, modeling language and method knowledge, and general tool and language issues.
3.2 Data analysis
Demographic information of participants

Age: 20 (1), 21 (7), 22 (36), 23 (38), 24 (22), 25 (11), 26 (2), 28 (1)
Native language (1 each): Danish, French, German, Indonesian, Macedonian, Persian, Polish, Portuguese, Romanian
Student status (1 each): Part-time student, PhD student
Educational program: Operations Management & Logistics (OML) (86), Business Information Systems (BIS) (25), Innovation Management (IM) (4), Human-Technology Interaction (HTI) (3), Professional Doctorate (1), Doctorate (PhD) (1)
3.2.1 PPMChart visualization
Each dot in a PPMChart represents one operation on the process model; four visual properties encode the details of that operation:
- The line of the dot represents the model element on which the operation was performed (the identifier of the model element is displayed at the beginning of the line).
- The position of the dot on the line represents the time when the operation occurred (the default width of a PPMChart is 1 h).
- The color of the dot represents the type of operation (i.e., green for creation, blue for movement, red for deletion, orange for (re)naming, and grey for reconnection of edges).
- The shape of the dot represents the type of model element of the operation (i.e., circle for events, square for activities, diamond for gateways, triangle for edges).
For example, in the annotated highlight of Fig. 1 it can be observed that the first created element (i.e., the leftmost green dot) was the start event (i.e., a circular dot on the first line). Next, an activity was put on the canvas somewhat later (i.e., a green square dot on another line, slightly more to the right). After the creation of some elements (i.e., left vertical zone of green dots), an almost simultaneous movement of all existing elements can be observed (i.e., vertical blue line of dots). Only much later, the edges that connect these elements were created (i.e., line of green triangular dots at the right).
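The dot encoding described above can be sketched in a few lines of code. The following is an illustrative sketch only, not the actual Cheetah/PPMChart implementation; the event-record field names and the operation and element vocabularies are our assumptions:

```python
# Illustrative mapping from logged modeling operations to PPMChart dots.
# Field names ('element', 'kind', 'op', 't') are assumed, not Cheetah's.

COLOR = {"create": "green", "move": "blue", "delete": "red",
         "rename": "orange", "reconnect": "grey"}
SHAPE = {"event": "circle", "activity": "square",
         "gateway": "diamond", "edge": "triangle"}

def ppmchart_dots(events, chart_width_minutes=60):
    """Turn an event log into (line, x, color, shape) dots.

    events: dicts with keys 'element', 'kind', 'op', 't' (minutes since
    modeling started). Each model element gets its own line, ordered by
    first appearance; x is the horizontal position within the chart
    (0.0 .. 1.0 over the default 1 h width).
    """
    lines = {}   # element id -> line number (order of first use)
    dots = []
    for ev in sorted(events, key=lambda e: e["t"]):
        line = lines.setdefault(ev["element"], len(lines))
        x = ev["t"] / chart_width_minutes
        dots.append((line, x, COLOR[ev["op"]], SHAPE[ev["kind"]]))
    return dots

log = [
    {"element": "start", "kind": "event",    "op": "create", "t": 0.5},
    {"element": "a1",    "kind": "activity", "op": "create", "t": 2.0},
    {"element": "a1",    "kind": "activity", "op": "move",   "t": 5.0},
    {"element": "e1",    "kind": "edge",     "op": "create", "t": 30.0},
]
dots = ppmchart_dots(log)
# The first dot is the start event: line 0, early in the chart, green circle.
```

Reading the resulting tuples left to right mirrors reading a PPMChart: the example log reproduces the pattern of Fig. 1 in miniature, with creations (green) preceding a move (blue) and a late edge creation (triangle).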
3.2.2 Process model quality
Lindland et al. (1994) define three main quality dimensions of conceptual models: (i) syntactic quality indicates to which degree the symbols of the modeling language were used according to the rules of the language, (ii) semantic quality indicates how adequate the model represents the modeled phenomenon in terms of correctness and completeness, and (iii) pragmatic quality indicates the extent to which the users of the model understand the model as intended by the modeler.
In the study of the observational modeling sessions, only syntactic quality was evaluated to form impressions of the quality of the produced models. Rather arbitrarily, this dimension was selected because a measurement can be determined easily and objectively on the basis of the modeling language specification. It was assumed that syntactic quality provides sufficient insight at this stage of the research. Furthermore, a distinction was made between errors that originate in a lack of knowledge of the modeling language and errors that originate in cognitive failure. Especially the latter type of error is interesting for investigating our research question. In the remainder of the paper, the term 'mistake' is used to identify those syntactic errors in the process models that did not clearly arise from a lack of knowledge of the process modeling language. A list of the observed syntax errors is included in Appendix D, together with their classification into 'mistakes' and other syntactic errors.
3.2.3 Observations and impressions about the modeling process
PPMCharts allow for zooming in on specific operations (i.e., on individual dots in the charts), as well as on aggregated modeling phases and patterns (i.e., combinations of dots in the charts). Different PPMCharts were compared to extract patterns that reflect identifiable modeling approaches. This section presents a selection of such observations together with our impressions about the relation between these approaches and the properties of the resulting process models.
Serializing the modeling process
Observation 1: All but one of the modelers paused frequently during the modeling process.
Impression 1: Modelers are in need of serializing the modeling process to deal with its complexity.
Structuring the modeling process
Whereas serialization is defined as splitting up a task into sequentially executed subtasks, structuring can be defined as the extent to which a consistent strategy is applied for defining those subtasks. The way of (not) structuring the modeling process can be recognized in the PPMChart by the patterns that can(not) be clearly discovered in the arrangement of the dots in the chart.
Observation 2: A large group of the sessions can be categorized as “flow-oriented process modeling”.
Observation 3: A smaller group of the sessions can be categorized as “aspect-oriented process modeling”.
Observation 4: Another large group of the sessions used a combination of “flow-oriented process modeling” and “aspect-oriented process modeling”.
Observation 5: Another small group of the sessions can be categorized as “undirected process modeling”.
So far, three structuring strategies for serialization were observed: flow-oriented process modeling, aspect-oriented process modeling, and a combination of both approaches. We also observed undirected process modeling.
The remaining 30 sessions (25 %) could not clearly be categorized as structured or undirected. They were labeled “uncategorized” and were left out of scope for further analysis.
Impression 2: Structured serializing of the modeling process helps to avoid 'mistakes'.
Impression 3: Structured serializing does not support every modeler to avoid ‘mistakes’ to the same extent.
Speed of the modeling process
Observation 6: The sessions labeled “undirected process modeling” lasted longer than the other approaches.
Observed serialization strategies and their measured properties

Strategy | Number of cases | Number of serialized approaches | Mean modeling time
Flow-oriented process modeling | 33/118 (28 %) | 33/33 (100 %) | 42.80 ± 9.56 min.
Aspect-oriented process modeling | 10/118 (8 %) | 10/10 (100 %) | 40.32 ± 9.44 min.
Combined process modeling | 33/118 (28 %) | 33/33 (100 %) | 37.37 ± 7.92 min.
Undirected process modeling | 12/118 (10 %) | 11/12 (92 %) | 49.87 ± 6.63 min.
Uncategorized | 30/118 (25 %) | 30/30 (100 %) | n/a
Observations and impressions about how people deal with complexity during process modeling

Observation 1: All but one of the modelers paused frequently during the modeling process.
Observation 2: A large group of the modeling sessions can be categorized as "flow-oriented process modeling".
Observation 3: A smaller group of the sessions can be categorized as "aspect-oriented process modeling".
Observation 4: Another large group of the sessions used a combination of "flow-oriented process modeling" and "aspect-oriented process modeling".
Observation 5: Another small group of the sessions can be categorized as "undirected process modeling".
Observation 6: The sessions labeled "undirected process modeling" lasted longer than the other approaches.
Impression 1: Modelers are in need of serializing the modeling process to deal with its complexity.
Impression 2: Structured serializing of the modeling process helps to avoid 'mistakes'.
Impression 3: Structured serializing does not support every modeler to avoid 'mistakes' to the same extent.
4 Theoretical background
Different cognitive theories can be combined to provide explanations for the observed modeling approaches and their relation with modeling accuracy and speed. In the next section, the theoretical background presented here is used to formulate a theory for explaining modelers’ cognitive strategies for dealing with complexity.
4.1 Kinds of human memory
The literature on cognition describes three main kinds of human memory. Sensory memory is very fast memory where the stimuli of our senses are stored for a short period (Sperling 1963). During this instant the information that is unconsciously considered relevant is handed over to working memory (Sperling 1963). Next, the information in working memory is complemented with existing knowledge that is retrieved from long-term memory (Sweller et al. 1998). This latter kind of memory is slow but virtually unlimited (Sweller et al. 1998). Information is stored in long-term memory as cognitive schemas composed of patterns of connected elementary facts (Sweller et al. 1998). Relevant information for process modeling that is retrieved from long-term memory includes domain knowledge, and modeling language and modeling method knowledge. In working memory the information is organized and processed in order to initiate certain performances (e.g., to put an activity on the modeling canvas with a mouse click) or to complement the knowledge in long-term memory (e.g., to complement the mental model of the case with new insights from a line of text that was read) (Atkinson and Shiffrin 1968). Because working memory has a limited capacity (Cowan 2010; Miller 1956) and information can only be stored in this memory for a short period (Van Merriënboer and Sweller 2005), it is important to use it effectively when dealing with highly complex tasks, such as process modeling.
4.2 Types of cognitive load
Process modeling requires input information to be absorbed and complemented with knowledge from long-term memory such as domain knowledge, in order to be processed in working memory leading to the actions of constructing the process model. At the center of this complex task are the operations in working memory (Atkinson and Shiffrin 1968; Sweller et al. 1998). The necessary information fills up working memory and is subdivided into three types of cognitive load (Sweller and Chandler 1994). Intrinsic cognitive load is the amount of information that needs to be loaded in working memory for deciding how to conduct a particular task. It mainly depends on the properties of the task and the amount of relevant prior knowledge of the performer of the task (i.e., knowledge about the domain, about the modeling language and about the modeling method). Extraneous cognitive load is the load that is raised for processing and interpreting the input material of the task, such as descriptions or direct observations of the process to be modeled. This type of cognitive load depends on the representation of the input material as well as the fit of this representation with the task it has to support and with the characteristics of the interpreter of the material (Vessey and Galletta 1991) (see also Section 4.4). Finally, during the execution of a task humans usually are able to reserve some load in working memory for building, restructuring and completing cognitive schemas to be stored in long-term memory. This helps reduce cognitive load for performing similar tasks in the future. This activity is called learning and the associated load is the germane cognitive load. Furthermore, a distinction can be made between the overall cognitive load (i.e., the total amount of information sequentially loaded in working memory for performing a specific task) and instantaneous cognitive load (i.e., the amount of information that is loaded in working memory at a certain point in time) (Paas et al. 2003b).
4.3 Cognitive load theory
The capacity of working memory is limited. In the past, researchers have tried to define how much information can be loaded at the same time in this kind of memory. Miller estimated the amount of information that can be remembered in short-term memory at about 7 units (Miller 1956). More recent research concludes that only 3 to 4 units of information can be activated and processed in working memory at the same time (Sweller et al. 1998; Van Merriënboer and Sweller 2005). Although there appears to be a limit on the number of units that can be loaded simultaneously in working memory, there seems to be no constraint on the size and complexity of these units of information (Sweller et al. 1998). More specifically, it is believed that one unit of information loaded in working memory (often referred to as 'information chunk') corresponds to one cognitive schema in long-term memory (Sweller et al. 1998). This can explain why a person seems to be able to store more information in working memory for tasks in which he is experienced, because for such tasks he was able to build up larger and stronger cognitive schemas in the past.
Therefore, for complex tasks or tasks in which a person is not adequately experienced, it is imaginable that the limited capacity of the working memory is not sufficient for the (maximum instantaneous) load that is needed to accomplish the task. This is called cognitive overload (Sweller 1988). The Cognitive Load Theory states that when working memory is overloaded, there is no room for learning (i.e., schema building) and accuracy and speed of information processing decrease (Rockwell and Bajaj 2005; Sweller 1988). In other words, cognitive overload has a negative impact on the effectiveness and efficiency of the modeling performance.
4.4 Cognitive fit theory
The Cognitive Fit Theory states that humans are able to solve problems more effectively and efficiently if the representation of the input material of a certain task 'fits' with the task itself (Vessey 1991). For example, when a task involves exploring relationships between data, a visual representation such as a diagram is preferred. For more statistical purposes, such as determining the average of a series of numbers, a textual representation in the form of a list or table is more cognitively efficient (Vessey 1991). Whereas the focus of Cognitive Fit Theory is on the match between problem representation and task, a secondary effect is described as the match between the task and its performer. For example, most people excel in either graphical or logical tasks (Pithers 2002). The former type of people probably needs less effort to work on the layout of the model, whereas the latter may find it easy to warrant the semantic correctness of the model. For the development of our theory, we focused mainly on this secondary relation between task and performer. Since the initial publication of the theory in 1991, the work has been refined, and concepts of domain knowledge, method knowledge and problem-solving tools have been taken into account as well (Khatri et al. 2006a, b; Shaft and Vessey 2006; Sinha and Vessey 1992; Vessey 1991).
Extraneous cognitive load mainly depends on the fit of the input material representation with the task and the modeler. A higher fit implies a lower cognitive load. For process modeling the input material includes any descriptive process information, such as transcripts of interviews with process managers/workers and existing documents describing the process.
The intrinsic cognitive load increases for more complex tasks and decreases in case the modeler possesses more relevant prior knowledge. Differences between various process modeling tasks are mainly related to the complexity of the case to be modeled. Prior knowledge incorporates domain knowledge, and modeling language and method knowledge.
Germane cognitive load is caused by loading information in working memory for the construction of cognitive schemas, which is not a prerequisite for the task, but rather the result of learning. This can only occur if during previous processing of information the working memory was not overloaded.
If the sum of these three types of cognitive load at a certain point in time exceeds working memory capacity, cognitive overload occurs. This has a negative effect on process model quality (i.e., more 'mistakes' are made), speed of modeling, and learning. Note that learning means that the set of cognitive schemas of the modeler is broadened and strengthened, which gradually improves the useful knowledge of the modeler for future similar tasks.
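The overload condition above can be made concrete with a toy model. This is our own illustrative formalization, not one proposed by the cited authors; the numeric loads are invented and expressed in information chunks:

```python
# Toy formalization of the overload condition: instantaneous cognitive load
# is the sum of the three load types, and overload occurs when that sum
# exceeds working-memory capacity (roughly 3 to 4 information chunks).
# The chunk numbers below are invented for illustration.

def instantaneous_load(intrinsic, extraneous, germane):
    return intrinsic + extraneous + germane

def is_overloaded(intrinsic, extraneous, germane, capacity=4):
    """Overload: the combined load exceeds working-memory capacity."""
    return instantaneous_load(intrinsic, extraneous, germane) > capacity

# A complex case (high intrinsic load) with poorly fitting input material
# (high extraneous load) leaves no room for learning and overloads memory:
print(is_overloaded(3, 2, 0))  # True: 3 + 2 + 0 = 5 chunks > 4
print(is_overloaded(2, 1, 1))  # False: 4 chunks fills, but does not exceed, capacity
```

The second call also illustrates why germane load disappears first under pressure: with intrinsic and extraneous load at 4 chunks there is no spare capacity for schema building, so no learning takes place.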
5 The structured process modeling theory (SPMT)
In order to explain the observations and impressions presented in Section 3, the cognitive theories listed in Section 4 were integrated and transformed into the newly developed Structured Process Modeling Theory (SPMT). Three key concepts were extracted from the observations and impressions: serialization, structuring, and individual differences. Therefore, the SPMT consists of three parts, each targeting one of these concepts.
5.1 Part 1: Serialization of the process modeling task can reduce cognitive overload
We observed that inexperienced modelers use a serialization approach to construct the process model (i.e., Observation 1). Our impression was that this serialization appears to help these modelers deal with the complexity of the modeling task (i.e., Impression 1). Cognitive theories also recognize the concept of cognitive serialization as a means to deal with cognitive overload. If a task requires too much information to be stored in working memory simultaneously, it is advised to load the information sequentially (Bannert 2002; De Jong 2010; Gerjets et al. 2004; Paas et al. 2003a; Pithers 2002; Pollock et al. 2002; Van Merriënboer et al. 2003). This means that the intrinsic cognitive load can be spread out over a longer period, which reduces the probability of instantaneous cognitive overload (De Jong 2010).
On the other hand, serialization causes additional intrinsic cognitive load for the integration and administration of the sequentially processed and produced information (Gerjets et al. 2004). In other words, extra load is created to aggregate the information of the separate parts of a solution and to build the modeling strategy (Gerjets et al. 2004; Van Merriënboer et al. 2003). The modeling strategy determines how to divide the modeling task into subtasks, in which order to proceed, how to execute each subtask, how to aggregate the different partial results, etc. Because of this extra load for aggregation and strategy building, the total overall intrinsic cognitive load can be higher in case of serialization. But if the intrinsic load for aggregation and strategy building can be kept low, the maximum instantaneous load decreases, and with it the probability of cognitive overload.
We conclude that serialization of the process of process modeling helps to reduce the probability of instantaneous cognitive overload, on the important condition that the aggregation of the partial solutions (and the building of the modeling strategy) does not consume the freed resources in working memory.
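This trade-off can be sketched informally in our own notation (the SPMT itself is stated qualitatively). Suppose, as an idealization, that the task is serialized into $k$ equally demanding subtasks; the instantaneous intrinsic load during subtask $i$ is then roughly:

```latex
% Hedged sketch: CL_task is the intrinsic load of solving the task at once;
% CL_agg and CL_strat are the extra loads for aggregating partial solutions
% and for building/maintaining the modeling strategy during subtask i.
\[
  CL_{int}(i) \approx \frac{CL_{task}}{k} + CL_{agg}(i) + CL_{strat}(i)
\]
% Serialization pays off only if the peak stays below the unserialized load:
\[
  \max_i \, CL_{int}(i) < CL_{task}
  \quad\Longleftrightarrow\quad
  CL_{agg}(i) + CL_{strat}(i) < \tfrac{k-1}{k}\, CL_{task}
  \ \ \text{for all } i
\]
```

The even split over $k$ subtasks is a simplification; in practice both the division of the task and the extra load for aggregation and strategy building vary per subtask.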
5.2 Part 2: Structured process modeling reduces cognitive overload
A more structured serialization approach towards process modeling makes it easier to keep track of the progress of the modeling endeavor (Van Merriënboer et al. 2003). This in turn lowers the effort to evaluate and adjust the modeling strategy (Van Merriënboer 1997). Structuring the serialization process also makes the outcome of this process (i.e., the process model) likely to be more structured, which facilitates the aggregation of the separately developed parts of the process model (Kim et al. 2000).
Therefore, part 2 of the SPMT states that structuring the (serialized) approach towards process modeling lowers the intrinsic cognitive load for aggregation of the partial solutions and modeling strategy building and thus reduces the probability of instantaneous cognitive overload.
5.3 Part 3: Serialization style fit is a prerequisite for cognitive overload reduction
Cognitive literature suggests that each human being has a specific intrinsic learning style (Felder and Silverman 1988). One of the defined dimensions of learning style is called ‘global/sequential understanding’ (Felder and Silverman 1988). It specifies to what extent a learner needs the material to be processed sequentially. For example, we hypothesize that the flow-oriented modeling method is better suited to a sequential learner, because it builds up the model in a sequential manner. The aspect-oriented approach starts from a global view of the content of the model and drills down into the different details (aspects) of the process model to be constructed. Therefore, it is matched with the global learning style.
Similarly, the Field Dependence-Field Independence Theory (Pithers 2002; Witkin and Goodenough 1981) states that some people are better at abstract reasoning than others (i.e., they do not need to load a lot of contextual information in memory). Field-dependent modelers find it harder to break up the model into smaller parts that they construct separately and without considering the context (Pithers 2002). They may therefore prefer the aspect-oriented style for structuring the modeling process, because for each aspect that is targeted sequentially the whole process model is considered before turning to the next aspect.
The Need for Structure scale defines to what extent the performance of a person depends on the structuredness of the adopted solution method (Neuberg and Newsom 1993). Therefore, it is hypothesized that modelers with a high need for structure will benefit most from structuring the modeling process according to one structuring style or the other.
In summary, based on cognitive theories, we suggest that the load for aggregation of the partial solutions and for modeling strategy building can be kept low if the serialization of the process-modeling task is conducted in a structured way that fits with the characteristics of the modeler.
5.4 Summary of the SPMT
The SPMT is summarized below in (i) a theoretical model, which graphically represents the included constructs and their relation; (ii) the propositions, which describe the produced knowledge in a textual format; and (iii) a brief description of the boundaries of the theory.
5.4.1 Theoretical model
- Proposition 1:
- Proposition 2:
- Proposition 3: Individually fitting structured serialization. If the structured serialization approach (e.g., aspect-oriented or flow-oriented process modeling) fits with the characteristics of the modeler (i.e., learning style, need for structure, and field dependency), the increase in instantaneous intrinsic cognitive load for aggregating and for strategy building can be further reduced.
The SPMT was based on observations and impressions in a specific setting. The observed subjects were master students, who served as a proxy for inexperienced modelers. The observed task was the construction of a control flow model in a simplified modeling language. Therefore, the SPMT applies at least to control flow modeling by inexperienced modelers. However, the SPMT is composed of constructs and relations that were found in the literature. The only boundary of these existing theories is that they describe cognitive properties, processes, or relations of human beings. The SPMT may thus apply to more generic situations.
6 Evaluation of the structured process modeling theory (SPMT)
In this section the six criteria for evaluating an explanatory theory mentioned in Section 2.2 are applied to the SPMT: novelty, parsimony, consistency, plausibility, credibility, and transferability. These criteria were found in various academic articles about theory testing (Gregor 2006; Grover et al. 2008; Weber 2012; Weick 1989). Nevertheless, we found no concrete guidelines on how to assess the SPMT against these criteria. In this paper, where the emphasis is on theory building, logical arguments rather than empirical data are used to evaluate the criteria (Whetten 1989). The section concludes with a brief discussion of two other important theory testing criteria that we currently consider infeasible to evaluate: falsifiability and utility (Bacharach 1989).
There are different ways in which a theory can be novel: (i) it describes constructs or associations that were not established before, (ii) it describes well-known constructs or associations in a fundamentally new way, or (iii) it makes important changes to existing theory (Weber 2012). The SPMT is novel because it combines several existing cognitive theories in a fundamentally new way. The first part of the SPMT - describing how serialization of the modeling effort helps to reduce intrinsic cognitive load - has been touched upon before (Rockwell and Bajaj 2005; Soffer et al. 2012). Yet, the idea of structuring the construction process of the process model (i.e., the second part of the SPMT) seems more original, although the structuredness of the outcome of such a construction process is well studied (Laue and Mendling 2010; Zugal et al. 2013). Also in software engineering, for example, there are many studies about the structuredness of program code (e.g., procedural versus object-oriented code (Wiedenbeck and Ramalingam 1999)).
The real novelty of the SPMT lies in the third part. The technique of serialization is described in cognitive literature as “cognitive sequencing” (De Jong 2010). Different structured sequencing strategies are defined: e.g., simple-to-complex sequencing, part-whole sequencing (similar to flow-oriented modeling), simplified whole tasks or whole-task sequencing (similar to aspect-oriented modeling), and modular presentation (Gerjets et al. 2004; Van Merriënboer et al. 2003). However, although the notions of cognitive fit were already published in 1986 (Vessey and Weber 1986), the principle of cognitive fit is not considered in the literature when advising which of these sequencing strategies to use. For example, whole-task sequencing is considered to always outperform part-whole sequencing (Van Merriënboer et al. 2003), and modular presentation in turn was presented as an improvement of whole-task sequencing (Gerjets et al. 2004). Nevertheless, we propose that cognitive fit should be considered when selecting the appropriate sequencing technique, as is stated in part 3 of the SPMT.
Number of constructs and associations in the SPMT
The artificial distinction between the three types of intrinsic cognitive load (i.e., load for process modeling, for aggregating, and for strategy building) and between the two attributes of the selected serialization style (i.e., degree and structuredness) could have been omitted. This would reduce the total number of constructs to 7 and the number of associations to 8 (i.e., still distinguishing between a positive and a negative effect of the adopted serialization style on intrinsic cognitive load). Nevertheless, this would - in our opinion - also significantly diminish the explanatory power and the understandability of the theory.
A theory is consistent if various observations can be explained with the same theory. Therefore, other available datasets with recorded data about the process of process modeling were examined for supplementary observations about complexity handling during the modeling activities. Another set of observational modeling sessions in 2013 contained such additional observations. Participants were master students of Business Engineering at Ghent University. They had a similar background and were enrolled in a similar educational program as the students from the exploratory modeling sessions in Eindhoven. 143 additional modeling sessions were recorded. The case to be modeled described a process about collecting fines.5
The SPMT was developed to describe and explain the observations of Section 3.2.3, but it was also intended to be applicable in a broader sense. As a consequence, it can also be used to explain this additional observation about a previously undiscovered way of structuring the modeling process. The modelers clearly serialized the modeling process and used a structured approach (i.e., happy path first modeling). Potentially, the use of this particular structuring style also fitted the modeler better. For example, a sequential learner who can be classified as field independent would prefer the flow-oriented approach towards modeling the happy path, but may like to abstract from exceptional behavior at first. According to the SPMT, this can explain why these modelers appeared to have made fewer ‘mistakes’ and were faster than the modelers from the “undirected process modeling” subset. Structuring their approach to modeling in a way that fitted with their characteristics has helped them avoid cognitive overload, which has increased their modeling accuracy and speed.
In other words, the SPMT can be used in a consistent way to explain this observation. There was no need to adapt or complement the SPMT in order to explain why the happy path first modeling approach helped these particular modelers. Moreover, a retrospective examination of the modeling sessions in the dataset described in Section 3 showed that 15 of the 118 sessions (13 %) could have been labeled “happy path first modeling”. It was also noticed that all but one of these instances were currently labeled ‘uncategorized’.
6.4 Plausibility & credibility
The real observed behavior, expressed in the observations and impressions in Section 3.2.3, was explained based on established theory. Existing cognitive theories were used to provide all the constructs and associations that make up the theory. This theory building methodology warrants both plausibility and credibility. The SPMT is plausible, because it accurately and thoroughly explains the effects that were observed in reality. It is also credible, because it uses only constructs and associations from established existing theories to explain those effects.
A good theory is transferable to other research contexts. The SPMT was developed as a mid-range theory (Weber 2012) with the observations and impressions in the context of process modeling in mind. Nevertheless, the constructs and associations that constitute the theory were taken from general cognitive literature. Therefore, the SPMT has the potential to be transferred beyond the process-modeling domain. It may also apply in other domains such as conceptual modeling in general, programming, text writing, etc., which would make it a macro-level theory (Weber 2012). In order to establish the real rather than the potential theory level and transferability, the theory needs to be applied and tested in various domains, which is addressed as future work in Section 8.3.
6.6 Falsifiability and utility
According to Bacharach (1989), a theory should be evaluated against two other primary criteria: falsifiability and utility. We acknowledge this point of view, but evaluating our theory against these criteria is considered infeasible at this point and therefore out of the scope of this paper. The evaluation of the falsifiability of the SPMT is explicitly addressed as future research in Section 8.3, because this requires the propositions of the theory to be operationalized into testable hypotheses. The best way of evaluating the utility of a theory is to measure how much it is actually used for practical and academic purposes, which is of course only possible in the longer term.
7 Related work
Although the constructs of serialization, structuredness, and cognitive fit - the three parts of the Structured Process Modeling Theory (SPMT) - were not considered together before, they have been studied separately in various contexts. In this section, related work is presented that takes a cognitive view on general conceptual modeling, or on process modeling in particular, with a focus on serialization, structuredness, or cognitive fit.
Rockwell and Bajaj (2005) propose the COGEVAL framework, which consists of a collection of 8 propositions about modeling complexity and model readability based on cognitive theories. One of the propositions presents chunking as a technique in conceptual modeling to improve modeling effectiveness and efficiency. It is not clear whether the term ‘chunking’ refers to splitting up the model in smaller subparts, or splitting up the modeling process in smaller subparts.6 If the latter applies, this is similar to part 1 of the SPMT, but without considering the increase in cognitive load for aggregation and strategy building. Next, the process of constructing process models is described by Soffer et al. (2012) as a sequence of two phases. A modeler first builds a mental model of the process to be represented in the diagram, and then the mental model is mapped onto the constructs of a formal process modeling language in order to build the process model. The focus of the paper is on optimizing the formation of the mental model as a prerequisite to increase the semantic quality of the process model. It is advised to lower cognitive load by building this mental model chunk by chunk. Furthermore, the paper suggests examining the impact of model structuredness on domain understanding. It does not, however, consider a structured approach towards the chunking.
Most cognition-inspired literature on structuredness in conceptual modeling describes a relation between the structuredness of the model and some other characteristic of the model. A model is considered well-structured if every branch of a split of a certain type is joined in a single join construct of the same type. For example, well-structuredness is proposed to have an impact on correctness because it makes it easier for the modeler to navigate through the model that was built so far, which reduces the chance of introducing errors (Laue and Mendling 2010). Further, the nesting depth of split and join constructs is also an aspect of structuredness, and a greater nesting depth is proposed to imply greater model complexity (Gruhn and Laue 2006). Finally, Zugal et al. (2013) describe the effect of hierarchical structuring (i.e., decomposing the model into sub-models) on expressiveness and understandability. It is proposed that hierarchical models suffer from two opposing effects: (i) abstraction decreases mental effort7 by hiding information and supporting pattern recognition, but (ii) fragmentation increases mental effort because of attention switching and integration effort. The opposing effects of abstraction and fragmentation are described in part 1 of the SPMT. Serialization of the modeling process allows focusing on one part of the model at a time (abstracting from the other parts), but there is a cost of aggregating the different parts (integration effort).
Besides the structuredness of the model, there is also literature about the structuredness of the input (e.g., a textual case description). Pinggera et al. (2010b) propose that a breadth-first ordering of text is best suited to yield good results. Breadth-first ordering was defined as “begins with the start activity and then explains the entire process by taking all branches into account” (p. 448). It corresponds with the flow-oriented approach to modeling described in this paper (whereas depth-first can be matched with the happy path first modeling style). Cognitive fit, however, was not considered in their work.
In their summarizing framework of cognition variables for conceptual modeling, Stark and Esswein (2012) propose that the problem-solving skills of the modeler have to match with the task of modeling and that this (mis)match can affect the resulting conceptual model. Regrettably, this was not further investigated or tested. Further, Agarwal et al. (1996a, b, 2000) propose that an object-oriented representation is not universally more or less usable than other representations. Cognitive fit and prior method knowledge should be considered to evaluate the usability of object-oriented representations. This is fully in line with part 3 of the SPMT, but the focus is not on object-oriented modeling as a process (which would be similar to structured modeling), but on object-oriented representations. Therefore, that research centers on extraneous load, rather than intrinsic load (as is the case for the SPMT). Lastly, the understandability of a process model is proposed to be impacted more by personal factors than by model factors (Reijers and Mendling 2011). This work also recognizes the need for studying cognitive fit, albeit in the context of model reading.
Guidelines for modeling
Most of the work mentioned above describes causal effects between various variables. The emphasis is on predicting, rather than explaining. Gregor (2006) states that both theories for explaining and theories for predicting can be used as input for a theory for design and action. The ambition of the SPMT is also to describe the knowledge necessary to build a prescriptive theory for process modeling. Two such prescriptive theories were already found in the literature. Mendling et al. (2010) propose seven process-modeling guidelines (7PMG) that are based on strong empirical evidence and are simple enough to be used by practitioners. Guideline 4 proposes to model as structured as possible. The guidelines of modeling (GOM) presented in Becker et al. (2000) are less concrete guidelines that claim to assure the quality of process models beyond syntactical aspects. Both prescriptive theories, however, provide recommendations about desired process model properties that can be guarded during modeling, without considering the cognitive fit of the recommendation with the characteristics of the modeler.
The research described in this paper is limited in several ways. Nevertheless, the SPMT can be valuable in practice and for research. The limitations and implications of the presented research are discussed below. In order to address the limitations and to increase the usefulness of the theory, future research is also described in this section.
8.1.1 Limited ecological validity
The observations and impressions that were used as input for building the Structured Process Modeling Theory (SPMT) stem from modeling sessions with master students. Furthermore, they were given an artificial case description. In real-life modeling sessions the modelers seldom start from a structured case description such as the one that was used for the observations. They rather use direct observation, interview transcripts, notes and pictures from whiteboard sessions, etc. Finally, only syntactic quality was considered when evaluating the produced models. Because the SPMT was only inspired by these observations and impressions, but was compiled from existing cognitive theories that apply widely, there is no reason to suspect that the SPMT does not apply in a more realistic setting. However, the limited ecological validity of the observations and impressions may have hindered the disclosure of all relevant effects of serialization on cognitive load.
8.1.2 Limited content and construct validity
The SPMT and its constructs and associations may have limited content validity. First, only (structured) serialization was investigated (in accordance with the observations); no other general problem solving techniques were considered. Second, the assessment of syntactic quality that formed the basis of Impressions 2 and 3 was partly subjective. It is possible that the impressions are not entirely accurate, which may have hindered the disclosure of certain relevant effects of serialization on cognitive load. The credibility of the theory, however, is guaranteed by the deductive approach, which builds on existing, established theories. Additional observations in several different settings can help to assess the content validity of the SPMT in the future. Third, although the constructs are clearly described in the SPMT, some of them may be hard to transform into a variable that can be measured properly (i.e., with high construct validity). For example, to date there are no known metrics that measure intrinsic cognitive load separately from extraneous or germane cognitive load, not to mention metrics for the artificially separated constructs of intrinsic cognitive load for modeling, for aggregating, and for strategy building of the SPMT.
8.2.1 Implications for practice
We have experienced that in practice many modelers (experienced and inexperienced) often struggle with the complexity of the case at hand. Although it was observed how some inexperienced modelers automatically turned to a structuring approach, and although the structuring techniques are not particularly hard to apply, other modelers do not seem to structure their modeling processes. A more slowly constructed and lower-quality process model was observed, which - according to the SPMT - can be a consequence of applying an undirected process modeling strategy. The SPMT will help building the knowledge that is necessary to (i) be aware of suboptimal modeling conditions (e.g., when modelers apply a structuring technique that does not fit with the task and with their characteristics as a problem solver), (ii) train the modelers to use an individually fitting structured serialization technique for process modeling in order to raise effectiveness and efficiency, and (iii) provide the means to better support the modelers in handling complexity (e.g., differentiated or adaptive tools that support structured process modeling in accordance with different modeling approaches, or with changing features for consecutive modeling phases).
8.2.2 Implications for research
The SPMT is novel in its recognition that, for optimal effectiveness and efficiency, the proposed modeling structuring technique has to fit with the modeling task and with the characteristics of the modeler. This fundamental focus point can inspire researchers in other research domains to develop adaptive techniques as well. The SPMT can be applied in a broader context and can add to the existing cognitive theories about serialization as a generic problem solving technique. Furthermore, within the domain of process modeling, the (descriptive) SPMT is considered a first, necessary step towards the development of a prescriptive theory that will further extend our knowledge about the effect and applicability of structuring and individual fit during process modeling.
8.3 Future work
8.3.1 More extensive evaluation of the SPMT
The SPMT needs to be tested more profoundly. The propositions will be converted into empirically testable hypotheses, accurate metrics need to be developed for each of the involved variables of these hypotheses, and new series of observational modeling sessions will be performed in which these variables are measured and correlations are calculated.
Because of the limited ecological validity of the observations and impressions that were used as input for the development of the SPMT, the external validity of the SPMT itself needs to be examined further. The current observations were made on master student behavior, where the prior knowledge of existing modeling techniques is assumed to be very low. Therefore, one of the factors to examine is how much the prior knowledge of experienced modelers influences the observed effects. Cognitive theories suggest that retraining an experienced modeler to use a different technique than the ones he or she is used to consumes a lot of germane load, which is expressed in an initial decrease of performance (this is called the expertise reversal effect, Kalyuga et al. 2003).
8.3.2 Development of prescriptive theory and a method for cognitive effective and efficient process modeling
Furthermore, in order to convert the SPMT, which is a (descriptive) theory for explaining, into a (prescriptive) theory for design and action, the following actions still need to be undertaken.
First, it should be examined whether modelers can be trained to apply the three aspects of the SPMT. This requires the development of a method (i.e., prescribing how to construct the process model according to the individually fitting structured serialized process modeling principle of the SPMT) and a treatment (i.e., describing how to train modelers to use that method). Subsequently, the degree of treatment adoption in an experimental context can be measured.
Second, it should be examined whether the positive effect on load, overload, and by consequence accuracy and speed indeed manifests itself when modelers are trained to apply the developed method based on the three aspects of the SPMT (i.e., testing causality). This requires reformulating the hypotheses into causal relations between the variables and setting up a controlled comparative experiment to isolate the effect of the treatment in the measurements of these causal relations.
In experimental modeling sessions with master students who were instructed to construct a process model based on the same textual description, we noted various differences in the produced process models. For example, different syntactical errors were found in the models. Some errors were made consistently and may be caused by a lack of knowledge, whereas other errors seem to be the result of cognitive failures during process modeling. For the development of tools that help modelers reduce the latter type of errors, knowledge is needed about why, how, and when these failures occur and impact the accuracy and speed of the modeling process. This knowledge was not readily available and is therefore provided in this article in the form of an explanatory theory.
The developed theory is called the Structured Process Modeling Theory (SPMT) and consists of three parts. Based on observations and impressions, and on explanations from cognitive literature, it describes how the probability of cognitive overload can be reduced by (i) serializing the modeling process, and (ii) structuring that serialization (iii) in a way that fits with the characteristics of the modeler. The research methodology of theory building based on observations and impressions and using components of existing theories should warrant the utility of the newly developed theory. However, a brief evaluation of the theory and a discussion on the limitations are described in Sections 6 and 8.
This work is important on three levels. Firstly, it provides new knowledge on the relation between serializing, structuring and fit of the process modeling approach on the one hand and cognitive effectiveness and efficiency on the other hand. It explains why some modelers struggle (more than others) with the complexity of constructing a process model. This knowledge in itself is useful because it facilitates the selection of suitable modelers or modeling approaches for concrete projects.
Secondly, it is a step towards the development of a method that aims at supporting modelers to select and implement an optimized process modeling strategy that fits with the task at hand and with the characteristics of the modeler. If the theory is true, if a modeler can be trained to modify his modeling technique and if this change of approach preserves the described effects, the SPMT has the potential to significantly and positively impact the quality of future process modeling projects.
Lastly, the knowledge and the method can be used to develop tool support for process modeling that is differentiated (i.e., the features of the tool can differ according to the use(r) of the tool) or adaptive (i.e., the features of the tool change during the modeling process, for example to support consecutive phases of modeling). Tools can ease a modeler’s transition to an improved process-modeling technique and can aid the application of such a technique.
Case description can be downloaded from http://bpm.q-e.at/experiment/MortgageEindhoven.
A control flow model is a process model that mainly represents the sequential order of process steps (i.e., the control flow).
More information about the tool can be found at http://www.cheetahplatform.org.
The term aspect-oriented process modeling must not be confused with aspect-oriented modeling. The former is our description of splitting up the modeling process according to the various aspects that are targeted sequentially. The latter is a way of splitting up the model itself in sub-models that each represent another aspect of the system to be modeled. Both terms are derived from aspect-oriented programming, a technique for splitting up the programming process as well as the program code according to the different aspects to be programmed.
The case description can be downloaded at http://www.janclaes.info/papers/PPMISF.
In the literature, the term ‘chunk’ has different meanings: a part of a process, a part of an artifact, or a collection of information in memory. Therefore, the term was used sparingly in this paper. Splitting up a process into parts is called ‘serialization’, and a collection of information stored in memory is called a ‘cognitive schema’.
Whereas mental load is defined as the amount of information that needs to be stored in working memory at a certain time to perform a task, mental effort can be regarded as the amount of information that is actually stored in working memory during the execution of the task.
We wish to express our gratitude to the developers of Cheetah Experimental Platform (CEP), to all the students and administrators involved in the observational modeling sessions, and to Martin Valcke who pointed out some relevant cognitive theories that apply for the observed behavior.
- Agarwal, R., Sinha, A. P., & Tanniru, M. (1996a). Cognitive fit in requirements modeling: a study of object and process methodologies. Journal of Management Information Systems, 13(2), 137–162.
- Bacharach, S. (1989). Organizational theories: some criteria for evaluation. Academy of Management Review, 14(4), 496–515.
- Becker, J., Rosemann, M., & Von Uthmann, C. (2000). Guidelines of business process modeling. In W. Van der Aalst, J. Desel, & A. Oberweis (Eds.), Business process management. Models, techniques, and empirical studies. Part I (Vol. LNCS 1806, pp. 30–49). Springer Berlin Heidelberg.
- Chakraborty, S., Sarker, S., & Sarker, S. (2010). An exploration into the process of requirements elicitation: a grounded approach. Journal of the Association for Information Systems, 11(4), 212–249.
- Dumas, M., La Rosa, M., Mendling, J., & Reijers, H. A. (2013). Fundamentals of business process management. Springer Berlin Heidelberg.
- Felder, R., & Silverman, L. (1988). Learning and teaching styles in engineering education. Engineering Education, 78(June), 674–681.
- Godfrey-Smith, P. (2009). Theory and reality: An introduction to the philosophy of science. Chicago: University of Chicago Press.
- Gregor, S. (2006). The nature of theory in information systems. MIS Quarterly, 30(3), 611–642.
- Grover, V., Lyytinen, K., Srinivasan, A., & Tan, B. C. Y. (2008). Contributing to rigorous and forward thinking explanatory theory. Journal of the Association for Information Systems, 9(2), 40–47.
- Gruhn, V., & Laue, R. (2006). Complexity metrics for business process models. In W. Abramowicz & H. C. Mayr (Eds.), 9th International Conference on Business Information Systems (BIS 2006) (Vol. LNI 85, pp. 1–12). Klagenfurt: Gesellschaft für Informatik (GI).
- Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105.
- Hoppenbrouwers, S. J. B. A., Proper, H. A., & Van der Weide, T. P. (2005). A fundamental view on the process of conceptual modeling. In L. Delcambre, C. Kop, H. C. Mayr, J. Mylopoulos, & O. Pastor (Eds.), Conceptual Modeling – ER 2005. 24th International Conference on Conceptual Modeling, Klagenfurt, Austria, October 24–28, 2005. Proceedings (Vol. LNCS 3716, pp. 128–143). Springer Berlin Heidelberg.
- Juran, J. M., & Gryna, F. M. (Eds.). (1988). Juran's quality control handbook (4th ed.). McGraw-Hill.
- Mendling, J. (2008). Metrics for process models: Empirical foundations of verification, error prediction and guidelines for correctness (LNBIP 6). Berlin Heidelberg: Springer.
- Nagel, E. (1979). The structure of science. Indianapolis: Hackett Publishing Company.
- Naumann, J. D. (1986). The role of frameworks in MIS research. In Proceedings of the 1986 DSI National Meeting, Honolulu, Hawaii (pp. 569–571).
- Nunamaker, J. F., & Chen, M. (1990). Systems development in information systems research. In Proceedings of the Twenty-Third Annual Hawaii International Conference on System Sciences (Vol. 3, pp. 631–640). Kailua-Kona, HI: IEEE. doi: 10.1109/HICSS.1990.205401.
- Pinggera, J., Zugal, S., & Weber, B. (2010a). Investigating the process of process modeling with Cheetah Experimental Platform. In B. Mutschler, J. Recker, R. Wieringa, J. Ralyte, & P. Plebani (Eds.), ER-POIS 2010. Proceedings of the 1st International Workshop on Empirical Research in Process-Oriented Information Systems (Vol. CEUR-WS 6, pp. 13–18).
- Pinggera, J., Zugal, S., Weber, B., Fahland, D., Weidlich, M., Mendling, J., & Reijers, H. A. (2010b). How the structuring of domain knowledge helps casual process modelers. In J. Parsons, M. Saeki, P. Shoval, C. Woo, & Y. Wand (Eds.), Conceptual Modeling – ER 2010. 29th International Conference on Conceptual Modeling, Vancouver, BC, Canada, November 1–4, 2010. Proceedings (Vol. LNCS 6412, pp. 445–451). Springer Berlin Heidelberg.
- Pinggera, J., Soffer, P., Fahland, D., Weidlich, M., Zugal, S., Weber, B., … Mendling, J. (2013). Styles in business process modeling: an exploration and a model. Software & Systems Modeling (published online, May 2013).
- Popper, K. (2005). The logic of scientific discovery. Electronic version: Taylor & Francis.
- Rogers, Y., & Rutherford, A. (1992). Models in the mind: theory, perspective, and application. London: Academic Press.
- Sánchez-González, L., García, F., Ruiz, F., & Piattini, M. (2013). Toward a quality framework for business process models. International Journal of Cooperative Information Systems, 22(01), 1350003.
- Shaft, T. M., & Vessey, I. (2006). The role of cognitive fit in the relationship between software comprehension and modification. MIS Quarterly, 30(1), 29–55.
- Simon, H. A. (1996). The sciences of the artificial. MIT Press.
- Soffer, P., Kaner, M., & Wand, Y. (2012). Towards understanding the process of process modeling: theoretical and empirical considerations. In F. Daniel, K. Barkaoui, & S. Dustdar (Eds.), Business Process Management Workshops. BPM 2011 International Workshops, Clermont-Ferrand, France, August 29, 2011, Revised Selected Papers, Part I (Vol. LNBIP 99, pp. 357–369). Springer Berlin Heidelberg.
- Sperling, G. (1963). A model for visual memory tasks. Human Factors, 5(1), 19–31.
- Stark, J., & Esswein, W. (2012). Rules from cognition for conceptual modelling. In P. Atzeni, D. Cheung, & S. Ram (Eds.), Conceptual Modeling. 31st International Conference ER 2012, Florence, Italy, October 15–18, 2012. Proceedings (Vol. LNCS 7532, pp. 78–87). Springer Berlin Heidelberg.
- Van Merriënboer, J. J. G. (1997). Training complex cognitive skills: A four-component instructional design model for technical training. Englewood Cliffs: Educational Technology.
- Vanderfeesten, I., Cardoso, J., Mendling, J., Reijers, H. A., & Van der Aalst, W. M. P. (2007). Quality metrics for business process models. In BPM and Workflow Handbook (pp. 179–190). Future Strategies Inc.
- Weber, R. (2012). Evaluating and developing theories in the information systems discipline. Journal of the Association for Information Systems, 13(1), 1–30.
- Weick, K. E. (1989). Theory construction as disciplined imagination. Academy of Management Review, 14(4), 516–531.
- Weske, M. (2007). Business process management: Concepts, languages, architectures (1st ed.). Springer Berlin Heidelberg.
- Witkin, H. A., & Goodenough, D. R. (1981). Cognitive styles: Essence and origins, field dependence and field independence. Michigan: International Universities Press.
- Zugal, S., Soffer, P., Haisjackl, C., Pinggera, J., Reichert, M., & Weber, B. (2013). Investigating expressiveness and understandability of hierarchy in declarative business process models. Software & Systems Modeling (published online, June 2013).
- Zur Muehlen, M., & Recker, J. C. (2008). How much language is enough? Theoretical and practical use of the business process modeling notation. In Z. Bellahsène & M. Léonard (Eds.), Advanced Information Systems Engineering. 20th International Conference, CAiSE 2008, Montpellier, France, June 16–20, 2008. Proceedings (Vol. LNCS 5074, pp. 465–479). Springer Berlin Heidelberg.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.