Reference Work Entry

Encyclopedia of Complexity and Systems Science

pp 1845-1864

Delay and Disruption in Complex Projects

  • Susan Howick, Strathclyde Business School, University of Strathclyde
  • Fran Ackermann, Strathclyde Business School, University of Strathclyde
  • Colin Eden, Strathclyde Business School, University of Strathclyde
  • Terry Williams, School of Management, Southampton University


Cause map

A cause map is similar to a cognitive map; however, it is not composed of an individual's perceptions but rather the views/statements of a number of participants. It follows the same formalisms as cognitive mapping but, being composite, does not reflect cognition.

Cognitive map

A cognitive map is a representation of an individual's perception (cognition) of an issue. It is depicted graphically, with concepts/statements connected by arrows representing causality. Cognitive maps are created using a set of established formalisms.

Complex project

A complex project is a project in which the project behaviors and outcomes are difficult to predict and difficult to explain post-hoc.

Disruption and delay

Disruption and delay (D&D) is primarily the consequence of interactions which feed on themselves as a result of an initial disruption or delay or portfolio of disruptions and delays.


Project

A project is a temporary endeavor undertaken to create a unique product or service [1].

Definition of the Subject

There are many examples of complex projects suffering massive time and cost overruns. If a project has suffered such an overrun there may be a need to understand why it behaved the way it did. Two main reasons for this are (i) to gain learning for future projects or (ii) because one party to the project wishes to claim compensation from another party and thus is trying to explain what occurred during the project. In the latter case, system dynamics has been used for the last 30 years to help understand why projects behave the way they do. Its success in this arena stems from its ability to model and unravel the complex dynamic behavior that can result in project overruns. Starting from the first use of system dynamics in a claim situation in the late 1970s [2], it has directly influenced claim results worth millions of dollars. However, the number of claims in which system dynamics has been involved is still small, as it is not perceived by project management practitioners as a standard tool for analyzing projects. System dynamics has a lot to offer in understanding complex projects, not only in a post-mortem situation: it could also add value at the pre-project analysis stage and during the operational stage of a project.


In this chapter we discuss the role of system dynamics (SD) modeling in understanding, and planning, a complex project. In particular we are interested in understanding how and why projects can go awry in a manner that seems surprising and often very difficult to unravel.

When we refer to projects we mean “a temporary endeavor undertaken to create a unique product or service” [1]. Projects are a specific undertaking, which implies that they are “one‐shot”, non‐repetitive, time‐limited, and, when complex, frequently bring about revolutionary (rather than evolutionary) improvements, start (to some extent) without precedent, and are risky with respect to customer, product, and project. If physical products are being created in a project, then the product is in some way significantly different to previous occasions of manufacturing (for example, in its engineering principles, or the expected operating conditions of the product, etc.), and it is this feature that means there is a need to take a project orientation.

Complex projects often suffer massive cost overruns. In recent decades those that have been publicized relate to large public construction projects, for example airports, bridges, and public buildings. Some examples include Denver's US$5 billion airport that was 200% overspent [3], the 800 million Danish Kroner Oresund bridge that was 68% overspent [4], and the UK's Scottish Parliament, which cost 10 times the first budget [5]. The Major Projects Association [6] talks of a calamitous history of cost overruns of very large projects in the public sector. Flyvbjerg et al. [7] describe 258 major transportation infrastructure projects, 90% of which were overspent. Morris and Hough [8] conclude that “the track record of projects is fundamentally poor, particularly for the larger and more difficult ones. … Projects are often completed late or over budget, do not perform in the way expected, involve severe strain on participating institutions or are canceled prior to their completion after the expenditure of considerable sums of money.” (p. 7).

“Complex” projects are ones in which the project behaviors and outcomes are difficult to predict and difficult to explain post-hoc. Complex projects, by their nature, comprise multiple interdependencies and involve nonlinear relationships (which are themselves dynamic). For example, choices to accelerate might involve the use of additional overtime, which can affect both learning curves and productivity as a result of fatigue – each of which is a nonlinear relationship. In addition, many of the important features of complex projects are manifested through ‘soft’ relationships – for example, managers will recognize deteriorating morale as projects become messy and look like a failure, but assessing the impact of morale on levels of mistakes and rate of working has to be a matter of qualitative judgment. These characteristics are particularly amenable to SD modeling, which specializes in working with qualitative relationships that are nonlinear [9,10,11].
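Such ‘soft’, nonlinear relationships are typically captured in SD models as graphical lookup (table) functions. The short Python sketch below shows the idea; the morale-to-productivity table is a purely hypothetical qualitative judgment, not data from any of the projects discussed:

```python
def lookup(x, points):
    """Piecewise-linear interpolation over sorted (x, y) pairs, the SD
    'table function' used to encode qualitative nonlinear judgments."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical table: effect of morale (0 = collapsed, 1 = normal) on
# relative productivity. The shape (flat near normal morale, steep once
# morale collapses) is a qualitative judgment, not measured data.
MORALE_EFFECT = [(0.0, 0.5), (0.4, 0.7), (0.7, 0.95), (1.0, 1.0)]

print(lookup(0.9, MORALE_EFFECT))  # near-normal productivity
print(lookup(0.2, MORALE_EFFECT))  # sharply degraded productivity
```

The modeler's judgment lives entirely in the table points; the interpolation merely makes that judgment usable inside a simulation.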

It is therefore surprising that simulation modeling has not been used more extensively to construct post-mortem analyses of failed projects, and even more surprising given SD's aptitude for dealing with feedback. Nevertheless, the authors have been involved in the analysis of 10 projects that have incurred time and cost overruns, and PA Consulting Group have claimed to have used SD to explain time and cost overruns in over 30 litigation cases [12]. Although in the mid-1990s attempts to integrate SD modeling with more typical approaches to project management were emerging, their use has never become established within the project management literature or practice [13,14,15]. In addition, it was recognized that the trend towards tighter project delivery and accelerated development times meant that parallelism in project tasks was becoming endemic, and that increasing parallelism could result in complex feedback dynamics where vicious cycles exist [16]. These vicious cycles are often the consequence of actions taken to enforce feedback control designed to bring a project back on track.

As project managers describe their experiences of projects going wrong they will often talk of these “vicious cycles” occurring, particularly with respect to the way in which customer changes seem to generate much more rework than might be expected, and that the rework itself then generates even more rework. Consider a small part of a manager's description of what he sees going on around him:

“For some time now we've been short of some really important information the customer was supposed to provide us. As a consequence we've been forced to progress the contract by making engineering assumptions, which, I fear, have led to more mistakes being made than usual. This started giving us more rework than we'd planned for. But, of course, rework on some parts of the project has meant reopening work that we thought we'd completed, and that, in turn has reopened even more past work. Engineering rework has led to the need for production work‐arounds and so our labour in both engineering and production have been suffering stop/starts and interruptions – and each time this happens they take time to get back up to speed again. This has led to productivity dropping because of unnecessary tasks, let alone productivity losses from the workforce getting fed-up with redoing things over and over again and so just becoming demoralized and so working slower. Inevitably all the rework and consequential productivity losses have put pressure on us to accelerate the project forcing us to have to make more engineering assumptions and do work‐arounds.”

Figure 1 shows a ‘cause map’ of the arguments presented by this project manager – the words used in the map are those used by the project manager and the arrows represent the causality described by the manager. This description is full of vicious cycles (indeed there are 35 vicious cycles discussed – see Fig. 1), all triggered by a shortage of customer furnished information and resulting in the rework cycle [17,18,19,20,21] and the need to accelerate in order to keep the project on schedule. Traditional project management models such as Critical Path Method/Network Analysis cannot capture any of the dynamics depicted in Fig. 1, but SD simulation modeling is absolutely appropriate [22].
Figure 1

Cause map showing the interactions described by a project manager and illustrating the feedback loops resulting from the complex dynamics behavior of a project under duress. The arrows represent causality
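Formally, the vicious cycles in a cause map are the simple cycles of a directed graph whose arrows are the causal links, and they can be enumerated mechanically. The sketch below uses an invented five-concept map (not the Fig. 1 map, whose labels are the manager's own) and a depth-first search that is only suitable for small maps:

```python
def simple_cycles(graph):
    """Enumerate the simple cycles (feedback loops) of a directed graph
    given as {node: [successors]}. Duplicates are avoided by requiring
    each cycle to start at its lexicographically smallest node."""
    cycles = []
    def dfs(start, node, path, visited):
        for nxt in graph.get(node, []):
            if nxt == start:
                cycles.append(path[:])          # closed a loop back to start
            elif nxt > start and nxt not in visited:
                visited.add(nxt)
                dfs(start, nxt, path + [nxt], visited)
                visited.remove(nxt)
    for start in sorted(graph):
        dfs(start, start, [start], {start})
    return cycles

# Toy cause map (hypothetical labels, for illustration only):
cause_map = {
    "rework": ["schedule pressure"],
    "schedule pressure": ["accelerate", "morale drop"],
    "accelerate": ["engineering assumptions"],
    "engineering assumptions": ["rework"],
    "morale drop": ["rework"],
}
print(len(simple_cycles(cause_map)))  # two vicious cycles in this toy map
```

Even this five-concept map hides two interlocking loops; a realistic map like Fig. 1 hides 35, which is why manual inspection so easily underestimates them.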

So, why has SD modeling been so little used? Partly it is because, in taking apart a failed project, the purpose is usually associated with a contractor wishing to make a claim for cost overruns. In these circumstances the traditions of successful claims and typical attitudes of courts tend to determine the approach used. A ‘measured-mile’ approach is common, where numerical simplicity replaces the need for a proper understanding [23].

It was not until the early 1980s that the use of simulation modeling became apparent from publications in the public domain. The settlement of a shipbuilding claim [2] prompted interest in SD modeling, and [24], in the same year, reported on the use of management science modeling for the same purpose. It was not surprising that this modeling for litigation generated interest in modeling where the purpose was oriented to learning about failed projects (indeed such learning can follow from litigation modeling [25], although it rarely does).

As Fig. 1 demonstrates, it is not easy to fully understand the complex dynamic behavior of a project under duress. Few would realize that 35 feedback loops are encompassed in the description that led to Fig. 1. Indeed, one of the significant features of complex projects is the likelihood of underestimating the complexity due to the dynamics generated by disruptions. The more specific difficulty of understanding feedback behavior has been reported elsewhere [26], and research in the field of managerial judgment reinforces the difficulties of biases unduly influencing judgment [27].

In the work presented here we presume that there is a customer and a contractor, and there is a bidding process usually involving considerations of liquidated damages for delays and possibly strategic reputational consequences for late delivery. Thus, we expect the project to have a clear beginning and an end when the customer (internal or external) signs off a contract. Finally, we do not explore the whole project business life cycle, but that part where major cost overruns occur: thus, we start our consideration when a bid is to be prepared, consider development and manufacturing or construction, but stop when the product of the project is handed over to the customer.

Thus, in this chapter we shall be concerned specifically with the use of SD to model the consequences of disruptions and delays. Often these disruptions are small changes to the project, for example design changes [28]. The work discussed here is the consequence of 12 years of constructing detailed SD simulation models of failed complex projects. The first significant case was reported in Ackermann et al. [10] and Bennett et al. [29]. In each case the prompt for the work was the reasonable prospect of the contractor making a successful claim for damages. In all the cases the claim was settled out of court and the simulation model played a key role in settling the dispute.

The chapter will first consider why modeling disruption and delay (D&D) is so difficult. It will discuss what is meant by the term D&D and the typical consequences of D&D, examined using examples from real projects that have suffered D&D. The contribution of SD modeling to the analysis of D&D, and thus to the explanation of project behavior, will then be discussed. A modeling process that has been developed over the last 12 years, and that provides a means of modeling and explaining project behavior, will be introduced. This process involves constructing both qualitative cause maps and quantitative system dynamics models. The chapter will conclude by considering potential future developments for the use of SD in modeling complex projects.

Disruption and Delay

(The following contains excerpts from Eden et al. [22], which provides a full discussion of the nature of D&D.)

The idea that small disruptions can cause serious consequences to the life of a major project, resulting in massive time and cost overruns, is well established. The terms ‘disruption and delay’ or ‘delay and disruption’ are also often used to describe what has happened on such projects. However, although justifying the direct impact of disruptions and delays is relatively easy, there has been considerable difficulty in justifying and quantifying the claim for the indirect consequences. Our experience from working on a series of such claims is that some of the difficulty derives from ambiguity about the nature of disruption and delay (D&D). We now consider what we mean by D&D before moving onto considering the types of consequences that can result from the impact of D&D.

What is a Disruption?

Disruptions are events that prevent the contractor completing the work as planned. Many disruptions to complex projects are planned for at the bid stage because they may be expected to unfold during the project. For example, some level of rework is usually expected, even when everything goes well, because there will always be ‘normal’ errors and mistakes made by both the contractor and client. The disruption and delay that follows would typically be taken to be part of a risk factor encompassed in the base estimate, although this can be significantly underestimated [30]. However, our experience suggests that there are other types of disruptions that can be significant in their impact and are rarely thought about during original estimating. When these types of disruptions do occur, their consequences can be underestimated as they are often seen by the contractor as aberrations, with an expectation that their consequences can be controlled and managed. The linkage between risk assessment and the risks as potential triggers of D&D is often missed [31]. Interference with the flow of work in the project is a common disruption. For example, when a larger number of design comments than expected are made by the client, an increased number of drawings need rework. However, it also needs to be recognized that these comments could have been made by the contractor's own methods engineering staff. In either case, the additional work needed to respond to these comments increases the contractor's workload and thus requires management to take mitigating actions if they still want to deliver on time. These mitigating actions are usually regarded as routine and capable of easily bringing the contract back to plan, even though they can have complex feedback ramifications.

Probably one of the most common disruptions to a project comes when a customer or contractor causes changes to the product (a Variation or Change Order). For example, the contractor may wish to alter the product after engineering work has commenced and so request a direct change. However, sometimes changes may be made unwittingly. For example, a significant part of cost overruns may arise where there have been what might be called ‘giveaways’. These may occur because the contractor's engineers get excited about a unique and creative solution and, rather than sticking to the original design, produce something better but with additional costs. Alternatively, when the contractor and customer have different interpretations of the contract requirements, unanticipated changes can occur. For example, suppose the contract asks for a door to open and let out 50 passengers in 2 minutes, but the customer insists on this being assessed with respect to the unlikely event of dominantly large, slow passengers rather than the contractor's design assumptions of an average person. This is often known as ‘preferential engineering’. In both instances there are contractor and/or customer requested changes that result in the final product being more extensive than originally intended.

The following example, taken from a real project and originally cited in Eden et al. [30], illustrates the impact of a client induced change to the product:

Project 1: The contract for a ‘state of the art’ train had just been awarded. Using well‐established design principles – adopted from similar train systems – the contractor believed that the project was on track. However within a few months problems were beginning to emerge. The client team was behaving very differently from previous experience and using the contract specification to demand performance levels beyond that envisioned by the estimating team. One example of these performance levels emerged during initial testing, 6 months into the contract, and related to water tightness. It was discovered that the passenger doors were not sufficiently watertight. Under extreme test conditions a small (tiny puddle) amount of water appeared. The customer demanded that there must be no ingress of water, despite acknowledging that passengers experiencing such weather would bring in more water on themselves than the leakage.

The contractor argued that no train had ever met these demands, citing that most manufacturers and operators recognized that a small amount of water would always ingress, and that all operators accepted this. Nevertheless the customer interpreted the contract such that new methods and materials had to be considered for sealing the openings. The dialog became extremely combative and the contractor was forced to redesign. An option was presented to the customer for their approval, one that would have ramifications for the production process. The customer, after many tests and after the verdict of many external experts in the field, agreed to the solution after several weeks. Not only were many designs revisited and changed, with an impact on other designs, but also the delays in resolution impacted the schedule well beyond any direct consequences that could be tracked by the schedule system (www.Primavera.com) or costs forecasting system.

What is a Delay?

Delays are any events that will have an impact on the final date for completion of the project. Delays in projects come from a variety of sources. One common source is that of the client‐induced delay. Where there are contractual obligations to comment upon documents, make approvals, supply information or supply equipment, and the client is late in these contractually‐defined duties, then there may be a client‐induced delay to the expected delivery date (although in many instances the delay is presumed to be absorbed by slack). But also a delay could be self‐inflicted: if the sub‐assembly designed and built did not work, a delay might be expected.

The different types of client-induced delays (approvals, information, etc.) have different effects and implications. Delays in client approval, in particular, are often ambiguous contractually. A time to respond to approvals may not have been properly set, or the expectations of what was required within a set time may be ambiguous (for example, in one project analyzed by the authors the clients had to respond within n weeks – but this simply meant that they sent back a drawing after n weeks with comments, then after the drawing was modified, they sent back the same drawing after a further m weeks with more comments). Furthermore, excessive comments, or delays in comments, can cause chains of problems, impacting, for example, on the document approval process with sub-contractors, or causing overload to the client's document approval process.

If a delay occurs in a project, it is generally considered relatively straightforward to cost. However, ramifications resulting from delays are often not trivial either to understand or to evaluate. Let us consider a delay only in terms of the CPM (Critical Path Method), the standard approach for considering the effects of delays on a project [32]. The consequences of the delay depend on whether the activities delayed are on the Critical Path. If they are on the Critical Path, or the delays are sufficient to cause the activities to move onto the critical path, it is conceptually easy to compute the effect as an Extension Of Time (EOT) [33]. However, even in this case there are complicating issues. For example: what is the effect on other projects being undertaken by the contractor? When this is not the first delay, to which schedule does the term “critical path” refer? To the original, planned programme, which has already been changed or disrupted, or to the “as built”, actual schedule? Opinions differ here. It is interesting to note that “the established procedure in the USA [of using as-built CPM schedules for claims] is almost unheard of in the UK” [33].

If the delay is not on the Critical Path then, still thinking in CPM terms, there are only indirect costs. For example, the activities on the Critical Path are likely to be resource dependent, and it is rarely easy to hire and fire at will – so if non-critical activities are delayed, the project may need to work on tasks in a non-optimal sequence to keep the workforce occupied; this will usually imply making guesses in engineering or production, requiring later rework, less productive work, stop/starts, workforce overcrowding, and so on.
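The CPM logic referred to above can be made concrete with a short sketch: a forward pass computes earliest start/finish times, a backward pass computes latest times, and tasks with zero float form the critical path. The four-task network below is invented for illustration:

```python
def cpm(activities):
    """Forward/backward pass over {task: (duration, [predecessors])}.
    Returns (project_duration, set_of_critical_tasks)."""
    es, ef, done = {}, {}, set()
    while len(done) < len(activities):          # forward pass
        for t, (d, preds) in activities.items():
            if t in done or not all(p in done for p in preds):
                continue
            es[t] = max((ef[p] for p in preds), default=0)
            ef[t] = es[t] + d
            done.add(t)
    duration = max(ef.values())
    succs = {t: [s for s, (_, ps) in activities.items() if t in ps]
             for t in activities}
    lf, ls = {}, {}
    for t in sorted(activities, key=lambda t: ef[t], reverse=True):  # backward pass
        lf[t] = min((ls[s] for s in succs[t]), default=duration)
        ls[t] = lf[t] - activities[t][0]
    critical = {t for t in activities if es[t] == ls[t]}  # zero float
    return duration, critical

plan = {"A": (5, []), "B": (3, []), "C": (4, ["A"]), "D": (2, ["B", "C"])}
base, crit = cpm(plan)
print(base, sorted(crit))            # 11 weeks, critical path A-C-D
delayed = dict(plan, B=(3 + 7, []))  # a 7-week delay on non-critical B
print(cpm(delayed)[0] - base)        # EOT of 1: the delay exceeded B's float
```

Note how the sketch reproduces the point made in the text: the first 6 weeks of B's delay are absorbed by float and produce no EOT at all, yet in reality they may still generate the indirect costs (resequencing, guesses, stop/starts) that CPM cannot see.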

The following example, taken from a real project, illustrates the impact of a delay in client furnished information to the project:

Project 2: A state-of-the-art vessels project had been commissioned which demanded not only that the contractor meet a challenging design but additionally that new sophisticated equipment be incorporated. This equipment was being developed in another country by a third party. The client had originally guaranteed that the information on the equipment would be provided within the first few months of the contract – time enough for the information to be integrated within the entire design. However time passed and no detailed specifications were provided by the third party – despite continual requests from the contractor to the client.

As the project had an aggressive time penalty, the contractor was forced to make a number of assumptions in order to keep the design process going. Further difficulties emerged as information from the third party trickled in, demanding changes to the emerging design. Finally, manufacturing, which had been geared up according to the schedule, was forced to use whatever designs it could access in order to start building the vessel.

Portfolio Effect of many Disruptions

It is not just the extent of the disruption or delay but the number of them which may be of relevance. This is particularly the case when a large number of the disruptions and/or delays impact immediately upon one another, thus causing a portfolio of changes. These portfolios of D&D impacts result in effects that would probably not occur if only one or two impacts had occurred. For example, the combination of a large number of impacts might result in overcrowding or having to work in poor weather conditions (see example below). In these instances it is possible to identify each individual item as a contributory cause of extra work and delay, but not easy to identify the combined effect.
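The superadditive character of a portfolio of impacts can be illustrated with a toy calculation. Everything below is an assumption made for illustration only: each extra disrupting task adds some direct hours, but an overcrowding penalty grows nonlinearly with the number of simultaneous impacts, so six disruptions arriving together cost far more than six assessed one at a time:

```python
def hours_with_overcrowding(extra_tasks, base_hours=1000.0):
    """Hypothetical cost model: direct hours grow linearly with extra
    tasks, but overcrowding multiplies ALL hours by a penalty that grows
    with the square of the number of simultaneous impacts (assumed)."""
    direct = 50.0 * extra_tasks
    crowding_penalty = 1.0 + 0.02 * extra_tasks ** 2
    return (base_hours + direct) * crowding_penalty

# Six disruptions costed individually against the undisrupted baseline...
one_at_a_time = 6 * (hours_with_overcrowding(1) - hours_with_overcrowding(0))
# ...versus the same six disruptions landing on the project together.
together = hours_with_overcrowding(6) - hours_with_overcrowding(0)
print(round(one_at_a_time), round(together))
```

Under these assumed coefficients the combined impact is roughly three times the sum of the individual impacts, which is exactly why the text argues that each item can be identified as a contributory cause while the combined effect resists item-by-item costing.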

The following example, taken from another real project, illustrates the impact of a series of disruptions to the project:

Project 3: A large paper mill was to be extended and modernized. The extension was given extra urgency by new anti‐pollution laws imposing a limit on emissions being enacted with a strict deadline.

Although the project had started well, costs seemed to be growing beyond anything that made sense given the apparent minor nature of the disruptions. Documents issued to the customer for ‘information only’ were changed late in the process. The customer insisted on benchmarking proven systems, involving visits to sites working with experimental installations or installations operating under different conditions in various different countries. In addition there were many changes of mind about where equipment should be positioned and how certain systems should work. Exacerbating these events was the circumstance of both the customer's and contractor's engineers being co‐located, leading to ‘endless’ discussions and meetings slowing the rate of both design and (later) commissioning.

Relations with the customer, who was seen by the contractor to be continually interfering with progress of the project, were steadily deteriorating. In addition, and in order to keep the construction work going, drawings were released to the construction team before being fully agreed. This meant that construction was done in a piecemeal fashion, often inefficiently (for example, scaffolding would be put up for a job, then taken down so other work could proceed, then put up in the same place to do another task for which drawings subsequently had been produced). As the construction timescale got tighter and tighter, many more men were put on the site than was efficient (considerable overcrowding ensued) and so each task took longer than estimated.

As a result the project was behind schedule, and, as it involved a considerable amount of external construction work, was vulnerable to being affected by the weather. In the original project plan (as used for the estimate) the outer shell (walls and roof) was due to be completed by mid-Autumn. However, the project manager now found himself undertaking the initial construction of the walls and roofing in the middle of winter – and, as chance would have it, the coldest winter for decades, which resulted in many days being lost while it was too cold to work. The combination of the particularly vicious winter and many interferences resulted in an unexpectedly huge increase in both labour hours and overall delay. Overtime payments (for design and construction workers) escalated. The final cost was over 40% more than the original budget.

Consequences of Disruptions and Delays

Disruption and delay (D&D) is primarily the consequence of interactions which feed on themselves as a result of an initial disruption or delay or portfolio of disruptions and delays. If an unexpected variation (or disruption) occurs in a project then, if no intervention was to take place, a delivery delay would occur. In an attempt to avoid this situation, management may choose to take actions to prevent the delay (and possible penalties). In implementing these actions, side‐effects can occur which cause further disruptions. These disruptions then cause further delays to the project. In order to avoid this situation, additional managerial action is required. Thus, an initial disruption has led to a delay, which has led to a disruption, which has led to a further delay. A positive feedback loop has been formed, where both disruption and delay feed back on themselves causing further disruptions and delays. Due to the nature of feedback loops, a powerful vicious cycle has been created which, if there is no alternative intervention, can escalate with the potential of getting ‘out of control’. It is the dynamic behavior caused by these vicious cycles which can cause severe disruption and consequential delay in a project.
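The escalation described above behaves like any positive feedback loop: whether a disturbance dies away or runs ‘out of control’ depends on the loop gain. The following stylized simulation (all parameters are assumptions for illustration, not estimates from any project) makes the point:

```python
def simulate(initial_disruption, loop_gain, weeks=20):
    """Stylized disruption-delay loop: each period management mitigates
    the whole backlog of disrupted work, and the side-effects of that
    mitigation return a fraction 'loop_gain' of it as fresh disruption.
    'loop_gain' is an assumed, not measured, parameter."""
    backlog, history = initial_disruption, []
    for _ in range(weeks):
        side_effects = loop_gain * backlog   # mitigation creates new disruption
        backlog = side_effects               # next period's disrupted work
        history.append(backlog)
    return history

damped = simulate(100, 0.7)    # loop gain < 1: the disturbance dies away
runaway = simulate(100, 1.3)   # loop gain > 1: 'out of control' escalation
print(round(damped[-1], 1), round(runaway[-1]))
```

The qualitative lesson survives the crude model: a modest change in how strongly mitigating actions feed back as new disruption separates a self-correcting project from an escalating one.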

The dynamic behavior of the vicious cycles which are responsible for much of the D&D in a project make the costing of D&D very difficult. It is extremely difficult to separate each of the vicious cycles and evaluate their individual cost. Due to the dynamic behavior of the interactions between vicious cycles, the cost of two individual cycles will escalate when they interact with one another, thus disruptions have to be costed as part of a portfolio of disruptions.

Returning to Project 2, the vessel case: as can be seen in Fig. 2, the client caused both disruptions (continuous changes of mind) and delays (late permission to use a particular product). Both of these caused the contractor to undertake rework and to struggle to achieve a frozen (fixed) design. These consequences in turn impacted upon staff morale and also, as noted above, developed dynamic behavior – where rework resulted in more submissions of designs, which led to further comments, some of which were inconsistent and therefore led to further rework. As mentioned in the introduction, the rework cycle [17,18,19,20,21] can be a major driver of escalating feedback within a complex project.
Figure 2

Excerpt from a cause map showing some of the consequences of disruption and delay in Project 2. Boxed statements are specific illustrations, with underlined statements representing generic categories (e.g. changes of mind). Statements in bold text represent the SD variables, with the remainder providing additional context. All links are causal; those in bold illustrate sections of a feedback loop. The numbers at the beginning of each concept are used as reference numbers in the model
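The rework cycle itself can be sketched as a minimal stock-and-flow model: flawed work accumulates unseen as ‘undiscovered rework’, is discovered gradually, and re-enters the work to do, so total hours can substantially exceed the original scope. All parameter values below are illustrative assumptions, not calibrated to any of the projects discussed:

```python
def rework_cycle(scope=1000.0, rate=40.0, quality=0.8,
                 discovery_frac=0.25, weeks=60):
    """Minimal version of the classic SD rework cycle. 'quality' is the
    fraction of accomplished work that is actually correct; the rest
    becomes undiscovered rework, surfacing at 'discovery_frac' per week."""
    to_do, done, undiscovered = scope, 0.0, 0.0
    total_hours = 0.0
    for _ in range(weeks):
        accomplished = min(rate, to_do)       # capacity-limited progress
        total_hours += accomplished
        to_do -= accomplished
        done += quality * accomplished
        undiscovered += (1 - quality) * accomplished
        found = discovery_frac * undiscovered  # rework surfaces gradually
        undiscovered -= found
        to_do += found                         # and re-enters work to do
    return done, total_hours

done, hours = rework_cycle()
print(round(done), round(hours))  # far more hours than the 1000 scoped
```

With 80% first-time quality, the project eventually burns close to 1250 hours to complete 1000 hours of scope, and the tail of late-discovered rework is exactly what drives the further submissions and comments visible in Fig. 2.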

Managerial Actions and the Consequences of D&D

The acceleration of disrupted projects to avoid overall project delays is common practice by managers who are under pressure from the client and/or their own senior management to deliver on time. However, the belief that this action will always help avoid delays is naive, as it does not take into account the future consequences that can follow. For example, one typical action designed to accelerate a project is to hire new staff. In doing so, some of the difficulties which may follow are:

  • New staff take time to become acquainted with the project, and thus their productivity is lower than that of an existing skilled worker.

  • New staff require training on the project and this will have an impact on the productivity of existing staff.

  • Rather than hiring new staff to the organization, staff may be moved from other parts of the organization. This action results in costs to other projects, which are then short of staff and so may have to hire workers from elsewhere, thereby suffering many of the problems discussed above.

Many of the outcomes of this and other similar actions can lead to a reduction in expected productivity levels. Low productivity is a further disruption to the project through a lack of expected progress. If management identifies this lack of progress, then further managerial actions may be taken in an attempt to avoid a further delay in delivery. These actions often lead to more disruptions, reinforcing the feedback loop that had been set up by the first actions.

Two other common managerial actions taken to avoid the impact of a disruption on delivery are (i) the use of overtime and (ii) placing pressure on staff in an attempt to increase work rate. Both of these actions can have detrimental effects on staff productivity once they exceed particular levels. Although these actions are intended to increase productivity, effects on fatigue and morale can actually lead to a lowering of productivity via a slower rate of work and/or additional work to be completed due to increased levels of rework [21,34]. This lowering of productivity causes a lack of expected progress on the project, and hence a further delay to delivery. Management may then attempt to avoid this by taking other actions, which in turn cause a disruption, again reinforcing the feedback loop that has been set up.
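The nonlinear, self-defeating character of sustained overtime can be sketched with a hypothetical calculation in which fatigue erodes productivity and the rework share rises as overtime grows. The coefficients are invented for illustration only; in a real SD model they would be elicited as qualitative lookup relationships:

```python
def effective_output(overtime_hours):
    """Hypothetical illustration: overtime adds paid hours, but fatigue
    cuts productivity and raises the error (rework) share, so effective
    output per week peaks and then falls. All coefficients are assumed."""
    hours = 40.0 + overtime_hours
    fatigue = 1.0 / (1.0 + 0.002 * overtime_hours ** 2)  # productivity loss
    error_rate = 0.05 + 0.005 * overtime_hours           # share lost to rework
    return hours * fatigue * (1.0 - error_rate)

for ot in (0, 5, 10, 20):
    print(ot, round(effective_output(ot), 1))
```

Under these assumptions a little overtime does raise weekly output, but by 10 hours the project is getting less useful work than with none at all, and by 20 hours substantially less: precisely the pattern in which an action taken to accelerate the project becomes a further disruption.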

Analyzing D&D and Project Behavior

The above discussion has shown that whilst D&D is a serious aspect of project management, it is a complicated phenomenon to understand. A single or a series of disruptions or delays can lead to significant impacts on a project which cannot be easily thought through due to human difficulties in identifying and thinking through feedback loops [26,35]. This makes the analysis of D&D and the resulting project behavior particularly difficult to explain.

SD modeling has made a significant contribution to increasing our understanding of why projects behave in the way they do and to quantifying the effects. There are two situations in which this is valuable: the claim situation, where one party is trying to explain the project's behavior to the other (and, usually, why the actions of the other party have caused the project to behave in the way it has), and the post‐project situation, where an organization is trying to learn lessons from the experience of a project. In the case of a claim situation, although it has been shown that SD modeling can meet criteria for admissibility to court [36], there are a number of objectives which SD, or any modeling method, would need to address [37]. These include the following:

  1. Prove causality – show what events triggered the D&D and how the triggers of D&D caused time and cost overruns on the project.

  2. Prove the ‘quantum’ – show that the events that caused D&D created a specific time and cost overrun in the project. There is therefore a need to replicate over time the hours of work due to D&D that were over and above those contracted, but were required to carry out the project.

  3. Prove responsibility – show that the defendant was responsible for the outcomes of the project, and demonstrate the extent to which the plaintiff's management of the project was reasonable and the extent to which overruns could not reasonably have been avoided.

  4. Prove all of the above in a way which will be convincing to the several stakeholders in a litigation audience.


Over the last 12 years the authors have developed a model building process that aims to meet each of these purposes. This process involves constructing qualitative models to aid the process of building the ‘case’ and thus help to prove causality and responsibility (purposes 1 and 3). In addition, quantitative system dynamics models are involved in order to help to prove the quantum (purpose 2). However, most importantly, the process provides a structured, transparent, formalized process from “real world” interviews to resulting output which enables multiple audiences, including multiple non‐experts as well as scientific/expert audiences to appreciate the validity of the models and thus gain confidence in these models and the consulting process in which they are embedded (purpose 4). The process is called the ‘Cascade Model Building Process’. The next section describes the different stages of the model building process and some of the advantages of using the process.

Cascade Model Building Process

(The following contains excerpts from Howick et al. [38], which contains a full description of the Cascade Model Building process).

The ‘Cascade Model Building Process’ involves four stages (see Fig. 3), each of which is described below.
Figure 3

The Cascade Model Building Process

Stage 1: Qualitative Cognitive and Cause Map

The qualitative cognitive maps and/or project cause map aim to capture the key events that occurred on the project, for example a delay such as that noted in the vessel example in Project 2. The initial elicitation of these events can be achieved in two ways. One option is to interview each participant and construct a cognitive map [39,40,41] of their views. Here the aim is to gain a deep and rich understanding that taps the wealth of knowledge of each individual. These maps act as a preface to getting the group together to review and assess the total content, represented as a merged cause map [42], in a workshop setting. The second option is to undertake group workshops where participants can contribute directly, anonymously and simultaneously, to the construction of a cause map. The participants are able to ‘piggy back’ off one another, triggering new memories, challenging views and together developing a comprehensive overview [43]. As contributions from one participant are captured and structured to form a causal chain, thoughts are triggered in others and a comprehensive view begins to unfold. In Project 1, this allowed the relevant design engineers (not just those responsible for the watertight doors, but also those affected who were dealing with car-body structure, ventilation, etc.), methods personnel and construction managers to surface a comprehensive view of the different events and consequences that emerged.

The continual development of the qualitative model, sometimes over a number of group workshops, engenders clarity of thought, predominantly through its adherence to the coding formalisms used for cause mapping [44]. Members of the group are able to debate and consider the impact of contributions on one another. Bringing the different views together also makes it possible to check for coherency – do all the views fit together or are there inconsistencies? Inconsistencies are not uncommon, as different parts of the organization (including different discipline groups within a division, e.g. engineering) encounter particular effects. For example, during an engineering project, manufacturing can often find themselves bewildered by engineering processes – why are designs so late? However, the first stage of the cascade process enables the views from engineering, methods, manufacturing, commissioning etc. to be integrated. Arguments are tightened as a result, inconsistencies are identified and resolved, and detailed audits (through analysis and features in the modeling software) are undertaken to ensure consistency between the modeling team and the model audience. In some instances, documents such as reports about the organizational situation can be coded into a cause map and merged with the interview and workshop material [45].

The cause map developed at this stage is usually large – containing up to 1000 nodes. Computer-supported analysis of the cause map can inform further discussion. For example, it can reveal those aspects of causality that are central to understanding what happened. Events that have multiple consequences for important outcomes can be detected. Feedback loops can be identified and examined. The use of software facilitates the identification of sometimes complex but important feedback loops that follow from the holistic view arising from the merging of expertise and experience across many disciplines within the organization.
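As a sketch of the kind of computer-supported loop analysis just described, a cause map can be held as a directed graph and its feedback loops enumerated. The map fragment below is a hypothetical illustration of the hiring/overtime dynamics discussed earlier in this chapter, not an excerpt from the project models.

```python
# Hypothetical sketch: enumerate feedback loops in a cause map held as an
# adjacency mapping {statement: [consequences]}. Adequate for small maps;
# very large maps would need a dedicated cycle-enumeration algorithm.

def find_feedback_loops(cause_map):
    """Return each elementary cycle once, starting from its smallest node."""
    loops = []

    def dfs(start, node, path, on_path):
        for nxt in cause_map.get(node, []):
            if nxt == start:
                loops.append(path[:])          # closed a loop back to start
            # only visit nodes 'greater' than start so each cycle is found once
            elif nxt > start and nxt not in on_path:
                on_path.add(nxt)
                dfs(start, nxt, path + [nxt], on_path)
                on_path.remove(nxt)

    for start in cause_map:
        dfs(start, start, [start], {start})
    return loops

cause_map = {
    "behind schedule": ["hire new staff", "use overtime"],
    "hire new staff": ["training load on existing staff"],
    "training load on existing staff": ["lower productivity"],
    "use overtime": ["fatigue"],
    "fatigue": ["lower productivity"],
    "lower productivity": ["behind schedule"],
}
print(len(find_feedback_loops(cause_map)))   # 2 reinforcing loops
```

On this fragment the analysis surfaces the two reinforcing loops (via hiring and via overtime) that a reader could easily miss in a 1000-node map.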

The resulting cause map from stage 1 can be of particular use in proving causality. For example, Fig. 4 represents some of the discussion regarding the water ingress situation described in the above case. In this figure, consequences such as additional engineering effort and engineering delays can be traced back to events such as ‘client found water seeping out of door’.
Figure 4

Excerpt from a cause map showing some of the discussion regarding the water ingress situation in Project 1. As with Fig. 2, statements that have borders are illustrations, those in bold font represent variables, and the remainder provide context. Dotted arrows denote the existence of further material which can be revealed at any time

Stage 2: Cause Map to Influence Diagram

The cause map produced in stage 1 is typically very extensive. This extensiveness requires a process of ‘filtering’ or ‘reducing’ the content – leading to the development of an Influence Diagram (ID), the second stage of the cascade process. This is partly because many of the statements captured, whilst enabling a detailed and thorough understanding of the project, are not relevant when building the SD model in stage 4 (being commentary rather than discrete variables). Another reason is that, for the most part, SD models comprise fewer variables/auxiliaries in order to manage complexity (necessary for good modeling as well as comprehension).

The steps involved in moving from a cause map to an ID are as follows:

Step 1: Determining the core/endogenous variables of the ID
  (i) Identification of feedback loops: It is not uncommon to find over 100 of these (many containing a large proportion of common variables) when working on large projects with contributions from all phases of the project.

  (ii) Analysis of feedback loops: Once the feedback loops have been detected they are scrutinized to determine (a) whether there are nested feedback ‘bundles’ and (b) whether they traverse more than one stage of the project. Nested feedback loops comprise a number of feedback loops around a particular topic, where a large number of the variables/statements are common but the formulation of the loop varies. Once detected, the statements that appear in the greatest number of nested feedback loops are identified, as they provide core variables for the ID model.


Where feedback loops straddle different stages of the process, for example from engineering to manufacturing, note is taken. Particularly interesting is where a feedback loop appears in one of the later stages of the project, e.g. commissioning, and links back to engineering. Here care must be taken to avoid chronological inconsistencies – it is easy to link extra engineering hours into the existing engineering variable; however, by the time commissioning discovers problems in engineering, most if not all of the engineering effort has been completed.
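One way to operationalize the nested-loop analysis in step 1 is to count how often each statement appears across the detected loops; the statements shared by the most loops in a bundle are the candidate core variables for the ID. The loop contents below are hypothetical illustrations, not taken from the project models.

```python
# Hypothetical sketch: statements common to the most feedback loops in a
# nested bundle become candidate core (endogenous) variables for the ID.
from collections import Counter

def core_variables(feedback_loops, top_n=2):
    counts = Counter(v for loop in feedback_loops
                     for v in dict.fromkeys(loop))   # dedupe, keep order
    return [var for var, _ in counts.most_common(top_n)]

nested_bundle = [
    ["rework", "schedule pressure", "overtime", "fatigue"],
    ["rework", "schedule pressure", "hiring", "dilution of skills"],
    ["rework", "schedule pressure", "out-of-sequence working"],
]
print(core_variables(nested_bundle))   # ['rework', 'schedule pressure']
```

Here ‘rework’ and ‘schedule pressure’ appear in all three variations of the loop, so they would carry forward into the ID while the loop-specific statements are filtered out.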

Step 2: Identifying the triggers/exogenous variables for the ID

The next stage of the analysis is to look for triggers – those statements that form the exogenous variables in the ID. Two forms of analysis provide clues which can subsequently be confirmed by the group:

  (i) The first analysis starts at the end of the chains of argument (the tails) and ladders up (follows the chain of argument) until a branch point appears (two or more consequences). Statements at the bottom of a chain of argument are often examples which, when explored further, lead to a particular behavior, e.g. delay in information, which provides insights into the triggers.

  (ii) The initial set of triggers created by (i) can be confirmed through a second type of analysis – one which uses two different means of examining the model structure for statements that are central or busy. Once identified, these can be examined in more detail by creating hierarchical sets based upon them and thus “tear drops” of their content. Each of these teardrops is examined as a possible trigger.
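The tail-laddering analysis in (i) can be sketched as a simple graph walk: find statements with no incoming arrows (the tails) and follow each single chain of argument upward until it branches into two or more consequences. The cause map fragment below is a hypothetical illustration.

```python
# Hypothetical sketch of analysis (i): ladder up from the tails of a cause
# map until a branch point (two or more consequences) appears.

def find_triggers(cause_map):
    has_cause = {c for outs in cause_map.values() for c in outs}
    tails = [n for n in cause_map if n not in has_cause]   # no incoming arrows
    triggers = set()
    for node in tails:
        while len(cause_map.get(node, [])) == 1:   # follow the single chain up
            node = cause_map[node][0]
        if len(cause_map.get(node, [])) >= 2:      # branch point reached
            triggers.add(node)
    return triggers

cause_map = {
    "vendor data arrived late": ["delay in design information"],
    "delay in design information": ["out-of-sequence working",
                                    "idle drawing office time"],
    "out-of-sequence working": [],
    "idle drawing office time": [],
}
print(find_triggers(cause_map))   # {'delay in design information'}
```

The concrete example at the tail (‘vendor data arrived late’) ladders up to the statement with multiple consequences, which is the candidate exogenous trigger for the ID.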

Step 3: Checking the ID

Once the triggers and the feedback loops have been identified, care is taken to avoid double counting – where one trigger has multiple consequences, care must be exercised in case the multiple consequences are simply replications of one another.

The resulting ID is comparable to a ‘causal loop diagram’ [46], which is often used as a precursor to a SD model. From the ID structure it is possible to create “stories” in which a particular example triggers an endogenous variable, illustrating the dynamic behavior experienced.
Figure 5

A small section of an ID from Project 2 showing mitigating actions (italics), triggers (underline) and some of the feedback cycles

Stage 3: Influence Diagram to System Dynamics Influence Diagram (SDID)

Typically, when a SD model is constructed after producing a qualitative model such as an ID (or causal loop diagram), the modeler determines which of the variables in the ID should form the stocks and flows in the SD model, and then uses the rest of the ID to determine the main relationships that should be included. However, when building the SD model, additional variables/constants that were not needed to capture the main dynamic relationships in the ID will have to be included in order to make the model ‘work’. The SDID is an influence diagram that includes all stocks, flows and variables that will appear in the SD model and is therefore a qualitative version of the SD model. It provides a clear link between the ID and the SD model.

The SDID is therefore far more detailed than the ID and other qualitative models normally used as a pre‐cursor to a SD model.

Methods have been proposed to automate the formulation of a SD model from a qualitative model such as a causal loop diagram [47,48,49] and for understanding the underlying structure of a SD model [50]. However, these methods do not allow for the degree of transparency required to enable the range of audiences involved in a claim situation, or indeed as part of an organizational learning experience, to follow the transition from one model to the next. The SDID provides an intermediary step between an ID and a SD model to enhance the transparency of the transition from one model to another for the audiences. This supports an auditable trail from one model to the next.

The approach used to construct the SDID is as follows:

The SDID is initially created in parallel with the SD model. As a modeler considers how to translate an ID into a SD model, the SDID provides an intermediary step. For each variable in the ID, the modeler can do either of the following:

  (i) Create one variable in the SD model and SDID: If the modeler wishes to include the variable as a single variable in the SD model, then the variable is simply recorded in both the SDID and the SD model as it appears in the ID.

  (ii) Create multiple variables in the SD model and SDID: To enable proper quantification of the variable, additional variables need to be created in the SD model. These variables are then recorded in both the SD model and the SDID, with appropriate links in the SDID reflecting the structure created in the SD model.


The SDID forces all qualitative ideas into a format ready for quantification. However, if the ideas are not amenable to quantification, or contradict one another, this step is not possible. As a result of this process a number of issues typically emerge, including the need to add links and statements, and the ability to assess the overall profile of the model through examining the impact of particular categories on the overall model structure. The increased understanding gained can also be translated back into the cause map or the ID.
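The two options above amount to a per-variable expansion rule when translating the ID into the SDID. The sketch below is hypothetical: option (i) copies an ID variable across unchanged, while option (ii) expands it into the extra variables the SD model needs for quantification (linked here as a simple chain purely for illustration; all variable names are invented).

```python
# Hypothetical sketch of stage 3: translate ID variables into SDID nodes,
# expanding a variable where quantification needs extra SD-model variables.

def build_sdid(id_variables, expansions):
    nodes, links = [], []
    for var in id_variables:
        if var in expansions:
            # option (ii): one ID variable becomes several SDID variables,
            # linked as a simple chain purely for illustration
            parts = expansions[var]
            nodes.extend(parts)
            links.extend(zip(parts, parts[1:]))
        else:
            nodes.append(var)   # option (i): one-to-one copy
    return nodes, links

id_vars = ["staff on project", "productivity"]
expansions = {"staff on project":
              ["hiring rate", "staff on project", "attrition rate"]}
nodes, links = build_sdid(id_vars, expansions)
print(nodes)   # ['hiring rate', 'staff on project', 'attrition rate', 'productivity']
```

Recording the expansion explicitly is what gives the auditable trail from ID to SD model that the text emphasizes: every SDID node is traceable either to an ID variable or to an expansion of one.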

Stage 4: The System Dynamics Simulation Model

The process of quantifying SD model variables can be a challenge, particularly as it is difficult to justify subjective estimates of higher‐level concepts such as “productivity” [51]. However, moving up the cascade reveals the causal structure behind such concepts and allows quantification at a level that is appropriate to the data‐collection opportunities available. Figure 6, taken from the ID for Project 1, provides an example. The quantitative model will require a variable “productivity” or “morale”, and the analyst will require estimation of the relationship between it and its exogenous and (particularly) endogenous causal factors. But while the higher‐level concept is essential to the quantitative model, simply presenting it to the project team for estimation would not facilitate justifiable estimates of these relationships.
Figure 6

Section of an ID from Project 1 showing the factors affecting productivity

Reversing the Cascade

The approach of moving from stage 1 through to stage 4 can increase understanding and stimulate learning for all parties. However, the process of moving back up the cascade can also facilitate understanding between the parties. For example, in Fig. 7 the idea that a company was forced to use subcontractors and thus lost productivity might be a key part of a case for lawyers. The lawyers and the project team might have arrived at Fig. 7 as part of their construction of the case. Moving back up from the ID to the cause map (i.e. Fig. 7 to Fig. 8) as part of a facilitated discussion not only helps the parties come to an agreed definition of the (often quite ill‐defined) terms involved, it also helps the lawyers understand how the project team arrived at the estimate of the degree of the relationship. Having established the relationship, moving through the SDID (ensuring well‐defined variables etc.) to the SD model enables the analysts to test the relationships to see whether any contradictions arise, or whether model behaviors are significantly different from actuality, and it enables comparison of the variables with data that might be collected by (say) cost accountants. Where there are differences or contradictions, the ID can be re‐inspected and, if necessary, the effect of the relationship within the SD model can be presented to the team and explained using the ID, so that the ID and the supporting cause maps can be re‐examined to identify the flaws or gaps in the reasoning. Thus, in this example, as simulation modelers, cost accountants, lawyers and engineers approach the different levels of abstraction, the cascade process provides a unifying structure within which they can communicate, understand each other, and equate terms in each other's discourse.
Figure 7

Section of an ID from Project 1 indicating the influence of the use of subcontractors on productivity
Figure 8

Section of a Cause Map from Project 1 explaining the relationship between the use of subcontractors and productivity

Advantages of the Cascade

The Cascade integrates a well‐established method, cause mapping, with SD. This integration results in a number of important advantages for modeling to explain project behavior:

Achieving Comprehensiveness

Our experience suggests that one of the principal benefits of using the cascade process derives from the added value gained through developing a rich and elaborated qualitative model that provides the structure (in a formalized manner) for the quantitative modeling. The cascade process immerses users in the richness and subtlety that surrounds their view of the projects and ensures involvement and ownership of all of the qualitative and quantitative models. The comprehensiveness leads to a better understanding of what occurred, which is important due to the complex nature of D&D, and enables effective conversations to take place across different organizational disciplines.

The process triggers new contributions as memories are stimulated and both new material and new connections are revealed. The resultant models thus act as organizational memories providing useful insights into future project management (both in relation to bids and implementation). These models provide more richness and therefore an increased organizational memory when compared to the traditional methods used in group model building for system dynamics models (for example [52]). However this outcome is not untypical of other problem structuring methods [53].

Testing the Veracity of Multiple Perspectives

The cascade's bi‐directionality enables the project team's understanding to be tested both numerically and in terms of the coherency of the systemic portrayal of the logic. By populating the initial quantitative model with data [10], rigorous checks of the validity of assertions are possible.

In a claim situation, fear of blame often restricts the contributions of those asked to account for history [44]. When initiating the cascade process, the use of either interviews or group workshops increases the probability that the modeling team will uncover the rich story rather than partial explanations or, as is often the case in highly politicized situations, ‘sanitized’ explanations. By starting with ‘concrete’ events that can be verified, and exploring their multiple consequences, the resultant model provides the means to reveal and explore the different experiences of the various stakeholders in the project.

Modeling Transparency

By concentrating the qualitative modeling effort on the capture and structuring of multiple experiences and viewpoints, the cascade process initially uses natural language and rich description as its medium, which facilitates the generation of views and enables a more transparent record to be attained.

There are often insightful moments as participants, viewing the whole picture, realize that the project is more complex than they thought. This realization brings two advantages. The first is a sense of relief that they did not act incompetently given the circumstances, i.e. the consequences of D&D took over – which in turn instills an atmosphere more conducive to openness and comprehensiveness (see [44]). The second is learning – understanding the whole, the myriad interacting consequences and in particular the dynamic effects that occurred on the project (which often act in a counter‐intuitive manner) provides lessons for future projects.

Common Understanding Across many Audiences

Claim situations involve numerous stakeholders with varying backgrounds. The cascade process promotes ownership of the models by this mixed audience. For example, lawyers are more convinced by the detailed qualitative argument presented in the cause map (stage 1), find this part of greatest utility and hence engage with this element of the cascade, whereas engineers get more involved in the construction of the quantitative model and in evaluating the data encompassed within it.

A large, detailed system dynamics model can be extremely difficult to understand for many of the stakeholders in a claim process [54]. However, the rich qualitative maps developed as part of the cascade method are presented in terms which are easier for people with no modeling experience to understand. In addition, by moving back up the cascade, the dynamic results that are output by the simulation model are given a grounding in the key events of the project, enabling the audience to be given fuller explanations and reasons for the D&D that occurred on the project.

Using the cascade method, any structure or parameters contained in the simulation model can be easily and quickly traced back to information gathered as part of creating the cognitive maps or cause maps. Each contribution in these maps can then normally be traced to an individual witness who could defend that detail in the model. This auditable trail can aid the process of explaining the model and refuting any attacks made on it.


The step-by-step process forces the modeler to be clear about what statements mean. Any illogical or inconsistent statements that are highlighted require the previous stage to be revisited so that meanings can be clarified or inconsistencies resolved. This results in clear, logical models.

Confidence Building

As a part of gaining overall confidence in a model, any audience for the model will wish to have confidence in the structure of the model (for example [55,56,57,58]). When assessing confidence levels in a part of the structure of a SD model, the cascade process enables any member of the ‘client’ audience to clearly trace the structure of the SD model directly to the initial natural language views and beliefs provided from individual interviews or group sessions.

Scenarios are also an important test through which the project team's confidence in the model can be considerably strengthened. The simulation is expected to reproduce scenarios that are recognizable to the managers, capturing a portfolio of meaningful circumstances that occur at the same time, including many qualitative aspects such as morale levels. For example, if a particular time-point during the quantitative simulation is selected, the simulated values of all the variables, and in particular the relative contributions of the factors in each relationship, can be output from the model. If we consider Fig. 6, the simulation might show that at a particular point in a project the loss of productivity is 26%, and that the loss due to:

  • “Use of subcontractors” is 5%.

  • “Difficulty due to lack of basic design freeze” is 9%.

  • “Performing design work out of order” is 3%.

  • “Loss of morale” is 5%.

  • “Overtime” is 4%.

Asking the project team for their estimates of the loss of productivity at this point in time, and of the relative contribution of these five factors, will help to validate the model. In most cases this loss level is best captured by plotting the relative levels of productivity against the times of critical incidents during the life of the project. Discussion around this estimation might reveal unease with the simple model described in Fig. 6, enabling discussion around the ID and the underlying cause map, either to validate the agreed model or to modify it and return up the cascade to further refine the model. In this way, validation within the cascade process provides a unifying structure within which the various audiences can communicate and understand each other.
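The figures quoted above can be cross-checked with simple arithmetic. The sketch below assumes the factor contributions combine additively, as the worked example implies; a real SD model might instead combine some of them multiplicatively.

```python
# The five factor contributions quoted above, combined additively
# (an assumption made for this cross-check only).
contributions = {
    "use of subcontractors": 5,
    "difficulty due to lack of basic design freeze": 9,
    "performing design work out of order": 3,
    "loss of morale": 5,
    "overtime": 4,
}
total_loss = sum(contributions.values())
print(total_loss)   # 26 -> matches the simulated 26% loss of productivity

# relative shares, the quantities the project team is asked to estimate
relative = {k: v / total_loss for k, v in contributions.items()}
```

Presenting the team with the relative shares, rather than only the aggregate 26%, is what allows their estimates to be compared with the simulation factor by factor.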

The Cascade Model Building Process provides a rigorous approach to explaining why a project has behaved in a certain way. The cascade uses rich, qualitative stories to give a grounding in the key events that drive the behavior of the project. In addition, it provides a quantifiable structure that allows the over-time dynamics of the project to be described. The Cascade has therefore contributed significantly to understanding why projects behave in the way they do.

This chapter has focused on the role of SD modeling in explaining the behavior of complex projects. The final two sections will consider the implications of this work and will explore potential future directions for the use of SD modeling of projects.

Implications for Development

So what is the current status of SD modeling of projects? What is the research agenda for studying projects using SD? Below we consider each aspect of the project life-cycle in turn, to suggest areas where SD modeling may be applied, and to consider where further work is needed.

The first area is pre‐project risk analysis. Risk analysis traditionally looks at risks individually, but looking at the systemicity in risks has clear advantages [59]. Firstly, the use of cause mapping techniques by an experienced facilitator, aided by software tools, is a powerful means of drawing out knowledge of project risk from an individual manager (or group of managers), enhancing clarity of thought, allowing investigation of the interactions between risks, and enhancing creativity. It is particularly valuable when used with groups, bringing out interactions between the managers and helping to surface cultural differences. And it clearly enables analysis of the systemicity, in particular the identification of feedback dynamics, which can help explicate project dynamics in the ways discussed above. The influence of such work has led to the ideas of cause maps, influence diagrams and SD being included in standard risk practice advice (the UK “PRAM” Guide, edition 2 [60] – absent from edition 1). In one key example [31], the work described above enabled the team to develop a ‘Risk Filter’ in a large multi‐national project‐based organization, for identifying areas of risk exposure on future projects and creating a framework for their investigation. When the team reviewed the system after a few years, it had been used by 9 divisions, on over 60 major projects, and completed by 450 respondents, and it was being used at several stages during the life of a project to aid risk assessment and contribute to a project database. The system allowed investigation of the interactions between risks, and so encouraged the management of the causal relationships between risks, rather than just the risks themselves, thus focusing attention on those risks and causal links that create the most frightening ramifications for clusters of risks, as a system, rather than on single items. This also encouraged conversations about risk mitigation across disciplines within the organization.
Clearly cause mapping is useful in risk analysis, but there are a number of research questions that follow, for example:

  • In looking at possible risk scenarios, what are appropriate methodologies to organize and facilitate heterogeneous groups of managers? And how, technically, can knowledge of systemicity and scenarios be gathered into one integrated SD model to enhance understanding? [61]

  • How can SD models of possible scenarios be populated to identify key risks? How does the modeling cascade help in forward‐looking analysis?

  • There have been many attempts to use Monte-Carlo simulation to model projects without taking the systemic issues into account – leading to models which can be seriously misleading [62]. SD models can give a much more realistic account of the effect of risks – but how can the essentially deterministic SD models described above be integrated into a stochastic framework to undertake probabilistic risk analyses of projects which acknowledge the systemicity between the risks and the systemic effects of each risk?

  • The use of SD can identify structures which give projects a propensity for the catastrophic systemic effects discussed in the Introduction. In particular, the three dimensions of structural complexity, uncertainty, and severe time‐limitation in projects can combine to cause significant positive feedback. However, defining metrics for these dimensions remains an important open question. While a little work has been undertaken to give operational measures to the first of them (for example [63,64]), and de Meyer et al. [65] and Shenhar and Dvir [66] suggest selecting the management strategy based on such parameters, there has been little success so far in quantifying these attributes. The SD models discussed above need to be developed to a point where a project can be parametrized to give, quantitatively, its propensity for positive feedback.

  • Finally, SD modeling shows that the effects of individual risks can be considerably greater than intuition would indicate, and the effects of clusters of risks particularly so. How can this be quantified so that risks or groups of risks can be ranked in importance to provide prioritization to managers? Again, Howick et al. [61] gives some initial indications here, but more work is needed.
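As a sketch of how the stochastic-framework question above might be approached, a deterministic SD-style project model can be wrapped in a Monte-Carlo loop: each draw samples the risk triggers, the deterministic model is run, and the resulting distribution of outcomes is inspected. Both the rework-cycle model and the sampling distribution below are hypothetical placeholders, not a validated project model.

```python
# Hypothetical sketch: embedding a deterministic SD-style project model in
# a stochastic (Monte-Carlo) framework. Model and distribution are
# illustrative placeholders only.
import random

def project_duration(scope=1000.0, rate=50.0, rework_fraction=0.1):
    """Deterministic rework cycle: a share of each week's completed work
    re-enters the backlog as rework (the systemic effect)."""
    remaining, weeks = scope, 0
    while remaining >= 1.0 and weeks < 200:
        done = min(rate, remaining)
        remaining = remaining - done + rework_fraction * done
        weeks += 1
    return weeks

def monte_carlo(runs=1000, seed=42):
    random.seed(seed)
    # each draw samples the severity of the 'rework' risk trigger
    return [project_duration(rework_fraction=random.uniform(0.05, 0.30))
            for _ in range(runs)]

durations = monte_carlo()
print(min(durations), max(durations))   # a spread of outcomes, not one answer
```

Because each run keeps the feedback structure intact, the output distribution reflects the systemic amplification of each sampled risk, unlike a conventional Monte-Carlo model that perturbs task durations independently.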

The use of SD in operational control of projects has been less prevalent (Lyneis et al. [12] refer to and discuss examples of where it has been used). For a variety of reasons, SD and the traditional project management approach do not fit well together. Traditional project‐management tools look at the project in its decomposed pieces in a structured way (networks, work breakdown structures, etc.) and address operational management problems at a detailed level; SD models aggregate to a higher, strategic level and look at the underlying structure and feedback. Rodrigues and Williams [67] describe one attempt at an integrated methodology, but there is scope for research into how work within the SD paradigm can contribute to the operational management of projects, and Williams [68] provides some suggestions for hybrid methods.

There is also a more fundamental reason why SD models do not fit in easily into conventional project management. Current project management practice and discourse is dominated by the “Bodies of Knowledge” or BoKs [69], which professional project management bodies consider to be the core knowledge of managing projects [1,70], presenting sets of normative procedures which appear to be self‐evidently correct. However, there are three underlying assumptions to this discourse [71].

  • Project Management is self‐evidently correct: it is rationalist [72] and normative [73].

  • The ontological stance is effectively positivist [74].

  • Project management is particularly concerned with managing scope in individual parts [75].

These three assumptions lead to three particular emphases in current project management discourse and thus in the BoKs [71]:

  • A heavy emphasis on planning [73,76].

  • An implication of a very conventional control model [77].

  • Project management is generally decoupled from the environment [78].

The SD modeling work provided explanations for why some projects severely overrun, and these explanations clash with the assumptions of the current dominant project‐management discourse.

  • Contrary to the third assumption, the SD models show behavior arising from the complex interactions of the various parts of the project, behavior which would not be predicted from an analysis of the individual parts of the project [79].

  • Against the first assumption, the SD models show project behavior that is complex and non‐intuitive, with feedback exacerbated by management responses to project perturbations; in these circumstances conventional methods provide unhelpful or even counterproductive advice, and so are not necessarily self‐evidently correct.

  • The second assumption is also challenged. Firstly, the models differ from the BoKs in their emphasis on, or inclusion of, “soft” factors, which are often important links in the chains of causality. Secondly, they show that models need to incorporate not only “real” data but also management perceptions of data, and to capture the socially constructed nature of “reality” in a project.

The SD models tell us why failures occur in projects which exhibit complexity [63]; that is, when they combine structural complexity [80] (many parts in complex combinations) with uncertainty, both in project goals and in the means to achieve those goals [81]. Goal uncertainty in particular is lacking in the conventional project‐management discourse [74,82], and it is when uncertainty affects a structurally complex, traditionally‐managed project that the systemic effects discussed above start to occur. But there is a third factor identified in the SD modeling. Frequently, events arise that compromise the plan at a faster rate than that at which it is practical to re-plan. When the project is heavily time‐constrained, the project manager feels forced to take acceleration actions. A structurally complex project, when perturbed by external uncertainties, can become unstable and difficult to manage; under time‐constraints that dictate acceleration actions, when management has to make very fast and sometimes very many decisions, the catastrophic over-runs described above can occur. Work from other directions seeking to establish the characteristics that make projects complex has arrived at similar characteristics (for example [66]). But the SD modeling explains how the tightness of the time‐constraints strengthens the power of the feedback loops, which means that small problems or uncertainties can cause unexpectedly large effects; it also shows how the type of under‐specification identified by Flyvbjerg et al. [4] brings what is sometimes called “double jeopardy”: under‐estimation (when the estimate is elevated to the status of a project control‐budget) leads to acceleration actions, which then cause feedback that produces an over-spend much greater than the degree of under‐estimation.
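
The “double jeopardy” mechanism can be made concrete with a small sketch. This is a toy model with hypothetical parameter values, not one of the claim models described earlier: the true work content is under-estimated, the estimate is frozen as the control budget, the resulting schedule pressure triggers acceleration, and the rework feedback produces an overspend much larger than the under-estimate itself.

```python
# Toy "double jeopardy" sketch: the work content is really 110 units, but the
# frozen control budget assumes 100 weeks of nominal effort (a 10% squeeze).
# Schedule pressure drives overtime (acceleration) and raises the error rate,
# and errors feed rework. All parameter values are hypothetical.
def run_project(true_scope, budget_weeks, error_base=0.05,
                pressure_gain=0.3, dt=1.0):
    done = cost = t = 0.0
    while done < true_scope and t < 10 * budget_weeks:
        time_left = max(budget_weeks - t, 1.0)
        pressure = max((true_scope - done) / time_left - 1.0, 0.0)
        effort = 1.0 + pressure                        # acceleration action
        error_rate = min(error_base + pressure_gain * pressure, 0.9)
        done += effort * dt * (1.0 - error_rate)       # only sound work counts
        cost += effort * dt                            # rework is paid for twice
        t += dt
    return cost

cost_honest = run_project(true_scope=110.0, budget_weeks=110.0)
cost_squeezed = run_project(true_scope=110.0, budget_weeks=100.0)

# The under-estimate was 10 units of work, but the pressure feedback makes
# the overspend against the 100-week control budget disproportionately larger.
overspend = cost_squeezed - 100.0
```

The design point of the sketch is the loop between `pressure`, `error_rate` and `done`: the tighter the budget, the stronger that loop, which is how a modest under-estimate is amplified into a large overrun.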

Because of this, the greatest contribution that SD has made – and perhaps can make – is to increase our understanding of why projects behave in the way they do. There are two situations in which this is valuable: the claim situation, where one party is trying to explain the project's behavior to the other (and, usually, why the actions of the other party have caused the project to behave in the way it has), and the post‐project situation, where an organization is trying to learn lessons from the experience of a project.

The bulk of the work referred to in this chapter comes from the first of these, the claim situation. However, while such analyses have proved popular amongst SD modelers, they have not necessarily found universal acceptance amongst the practicing project‐management community. Work is therefore needed in a number of directions; these are discussed in the next section.

Future Directions

We have already discussed the difficulty that various audiences can have in comprehending a large, detailed system dynamics model [54], and the gradual explanations that can be given by working down (and back up) the cascade to bring understanding to a heterogeneous group (which might include jurors, lawyers, engineers and so on), thereby linking the SD model to key events in the project. While this is clearly effective, more work is needed to investigate the use of the cascade: in particular, the ways in which the cascade can be most effective in promoting understanding, how to formalize the methodology and the various techniques mentioned above so as to make them replicable, and how best to use SD here (Howick [54] outlines nine particular challenges the SD modeler faces in such situations). Having said this, it is still the case that many forums in which claims are made are very set in conventional project‐management thinking, and we need to investigate further how the SD methods can be combined synergistically with more traditional methods, so that each supports the other (see for example [83]).

Significant unrealized potential of these methodologies is to be found in the post‐project “lessons learned” situation. Research has shown many problems in learning generic lessons that can be extrapolated to other projects, such as getting to the root causes of problems in projects, seeing the underlying systemicity, and understanding the narratives around project events (see Williams [84], which gives an extensive bibliography in the area). Clearly, the modeling cascade, working from the messiness of individual perceptions of the situation to an SD model, can help in these areas. The first part of the process (Fig. 3), working through to the cause map, has been shown to enhance understanding in many cases; for example, Robertson and Williams [85] describe a case in an insurance firm, and Williams [62] gives an example of a project in an electronics firm, where the methodology was applied in a “quick and dirty” fashion but still gave increased understanding of why a project (in that case a successful one) turned out as it did, with some pointers to lessons learned about the process. However, as well as formalization of this part of the methodology and research into the most effective ways of bringing groups together to form cause maps, more clarity is required as to how far down the cascade to go and what additional benefits the SD modeling brings. “Stage 4” describes the need to look at quantification at a level that is appropriate to the data‐collection opportunities available, and there might be scope for SD models of parts of the process, explaining particular aspects of the outcomes. Attempts to describe the behavior of the whole project at a detailed level may only be suitable for the claims situation; research is needed into what is required in terms of Stages 3 and 4 for gaining lessons from projects (or, if these Stages are not carried out, how benefits such as the enhanced clarity and validity of the cause maps can be gained).

One idea for learning lessons from projects, used by the authors and following the idea of simulation “learning labs”, was to incorporate learning from a number of projects undertaken by one particular large manufacturer into a simulation learning “game” [25]. Over a period of 7 years, several hundred Presidents, Vice‐Presidents, Directors and Project Managers from around the company used the simulation tool as part of a series of senior management seminars, where it promoted discussion around the experiences and effects encountered and encouraged consideration of the potential long-term consequences of decisions, enabling cause-and-effect relationships and feedback loops to be formed from participants' experiences. More research is required here into how such learning can be made most effective.

SD modeling has brought a new view to project management, enabling understanding of the behavior of complex projects that was not accessible with other methods. This chapter has described the methodology by which SD has been used in this domain, and this last part of the chapter has looked forward to a research agenda for how the SD work needs to be developed to bring greater benefits to the project‐management community.

Copyright information

© Springer-Verlag 2009