1. Introduction

The study of simulation projects leads to the question of what project success and failure mean. The notion of project success or failure, however, is a multi-faceted and multi-perspective topic. For instance, the perception of project success may differ from one stakeholder to another, notably between simulation customers and providers. Any combination of success and failure may be experienced by these two major stakeholders in a single project; the ideal, of course, is success from both parties’ perspectives. Success must also be seen from a time-based perspective: in the short term it may appear that few benefits accrue from a specific project, but in the longer term the full impact of a project may be much greater.

Critical Success Factors (CSFs) represent those areas of an organisation or a project that are vital to its success; management therefore needs to focus on these areas in order to create high levels of performance (McIvor et al, 2010). The CSF method has proved valuable for linking qualitative and quantitative aspects of processes and organisations. While the CSF methodology was originally introduced with organisational perspectives in mind, its application in the context of projects is well established (eg, see Belassi and Tukel, 1996; Cooke-Davies, 2002).

Computer simulation projects, which can be categorised as service-type projects, have been carried out in many sectors to improve the performance of systems. Their level of success, however, has been questioned.

The use of CSFs as a key component of a wider framework, one that takes a multi-faceted, multi-perspective approach to developing quantitative performance indicators for simulation projects, has yet to be explored. This paper aims to fill that gap. The contribution of this work is the identification of a set of CSFs and associated Key Performance Indicators (KPIs) for simulation projects, which in turn leads to the development of an instrument for measuring and predicting simulation project success.

The paper is organised as follows. The next section reviews the background literature. Section 3 presents our framework and its underlying concepts and methods. Section 4 outlines 9 exemplar cases in healthcare that were used to test the proposed framework. Section 5 presents and analyses the results of a survey of these 9 exemplar cases on the basis of the proposed framework. The findings of our study are discussed in Section 6, mainly from two perspectives, namely the reliability of our framework and any meaningful patterns observed in the results. Finally, Section 7 brings the paper to a close by outlining the main contributions of the research, its limitations, and some directions for further study.

2. Previous work

The study of challenges, success factors and failure factors in simulation projects has received much attention from researchers and practitioners. Melão and Pidd (2003), Murphy and Perera (2002), McHaney et al (2002), Robinson and Pidd (1998), Robinson and Bhatia (1995) and Robinson (1994) are examples with no particular sector orientation. Others have examined the subject within a specific sector. For example, Van Lent et al (2012), Jahangirian et al (2012), Brailsford and Vissers (2011), Eldabi (2009), Brailsford et al (2009), Brailsford (2007), Brailsford (2005), Harper and Pitt (2004) and Lowery et al (1994) have investigated the implementation challenges of simulation projects in the health-care setting.

Similar studies have been carried out in other contexts, such as the construction industry (eg, Chan et al, 2004; Al-Tmeemy et al, 2011) and the information systems domain (eg, Reel, 1999; Agarwal and Rathod, 2006; Chow and Cao, 2008). Work on project success in non-simulation domains has gone further, to the stage where frameworks have been proposed to measure success, mostly using KPIs. Examples are Luu et al (2008), Lam et al (2007), Cheung et al (2004) and Chan and Chan (2004) in the context of construction projects. The topic has been considered so important that an industry KPI standard was established in 2000 through collaboration between the UK government and the construction industry (Raynsford, 2000), on the basis of which annual reports are produced. The standard has served as a key component of construction organisations’ move towards achieving best practice.

Differences between types of projects, however, undermine the transferability of such frameworks. The nature of a project matters when studying success factors: while health and safety factors may be critical in construction projects, they are likely to be much less important in others, such as computer simulation projects. A context-specific investigation of project success is therefore required.

The situation in the simulation projects domain is much less advanced. While there have been many efforts to identify and discuss success factors, there has been little effort to produce instruments that measure success. Robinson (1998) is one of very few articles that propose a measure of simulation project quality. However, the proposed instrument, SimQual, appeared daunting for customers, perhaps mainly because of the length of its survey questionnaire, which asks respondents more than 130 questions about 62 indicators. This highlighted the need for a more pragmatic view of the issue. In a later publication, Robinson presents the notion of a simulation quality trilogy in which the quality of a simulation project is characterised by three concepts: content, process and outcome (Robinson, 2002). That research, however, remains at the conceptual level and does not present a quantitative framework for assessment.

The majority of studies that present KPIs to measure project success, in both simulation contexts (eg, Robinson, 1998) and non-simulation ones (eg, Lam et al, 2007; Luu et al, 2008), use a judgemental method of data collection: survey respondents are asked to express their opinions on qualitative measures such as ‘the level of agreement with a specific statement’. This leaves the research exposed to judgemental biases, whereas quantitative measures could mitigate this weakness. Moreover, each of these studies draws its conclusions from a small number of projects: Robinson (1998) (3 projects), Chan and Chan (2004) (3 projects), Chan et al (2004) (3 projects), Cheung et al (2004) (1 project) and Luu et al (2008) (15 projects).

The literature review thus reveals a gap: the field has not advanced to the stage where quantitative measures can guide improvement. This paper therefore focuses on the quantitative side of success in the context of simulation projects. More specifically, we adopt a top-down approach to developing KPIs whereby a simulation project’s success can be measured and compared in a more pragmatic way.

3. Methods

A top-down framework is presented that links CSFs to a set of KPIs. The principal idea is to start from a rather vague and ambitious objective, namely project success, move through the CSFs, and arrive at concrete, measurable outcomes (KPIs). Such an approach provides a top-down connection between strategic and operational activities, where CSFs represent strategic focus areas and KPIs represent operational performance. Two interim steps, namely the development of Statements of Success and of Common Features, are proposed in order to provide an informed path from CSFs to KPIs. Figure 1 illustrates this top-down, hierarchical framework. Finally, an exemplar study using data from 9 simulation projects to which we had access is carried out to derive conclusions. A questionnaire is developed to collect the projects’ performance data based on the KPIs.

Figure 1 Proposed top-down CSF/KPI framework.

3.1. Critical success factors (CSFs)

A simulation project’s success, the ultimate objective, has a number of dimensions that should be covered by the CSFs: a limited number of success factors that constitute the building blocks of this framework. Our study uses the success factors developed by Robinson and Pidd (1998), arguably the most prominent set in the context of simulation projects, which has also been adopted by other researchers (eg, Altsitsiadis, 2011). Robinson and Pidd (1998) present 19 dimensions of success in simulation projects, cited by 10 providers and 10 customers of such projects. Each dimension can be seen as a success factor. The list is claimed to cover the achievement of the project’s objectives, as well as the customer acceptance stages that are under the simulation provider’s control.

Robinson and Pidd (1998) identify 19 dimensions of success. According to the recommendations made originally by Daniel (1961), however, three to six key factors determine success. We therefore used a ranking method based on the total frequency of citations in Robinson and Pidd (1998) to identify the CSFs, selecting the top five dimensions by the number of citations from those Robinson and Pidd interviewed. Such a selective approach is key to the pragmatic objective of the study, in the sense that the assessment instrument should be usable effectively and efficiently by key stakeholders. The selected CSFs are shown in Table 1, along with their coverage of the stakeholder and time-based perspectives explained in Section 1; between them, the five factors cover both perspectives well. For example, communication and interaction between the provider and the customer could be a success factor from both providers’ and customers’ perspectives, in that both groups need to share information with each other effectively throughout the project as a key element of success. Competence of the provider, however, is seen as a CSF only from the customers’ point of view, while providers see it mostly as a costly factor.

Table 1 Five selected CSFs and their coverage in terms of stakeholder and time-based perspectives

From the time-based perspective, the communication and interaction factor can only help towards the success of the particular project about which the provider and the customer are communicating, whereas the competence of the provider could attract further contracts and potential profits for the provider organisation in the future, even though it might be costly in the short term.
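For readers who prefer a computational view, a minimal Python sketch of the ranking-and-selection step follows. Apart from the 41 citations for the communication factor, which are stated in Section 3.2, the citation counts and the abbreviated dimension names below are illustrative placeholders, not figures from Robinson and Pidd (1998).

```python
# A minimal sketch of the CSF selection step: rank the 19 success dimensions
# from Robinson and Pidd (1998) by total citation frequency and keep the top
# five, following Daniel's (1961) guideline of three to six key factors.
# Only the 41 citations for the communication factor are stated in this
# paper; every other count below is an illustrative placeholder.

from collections import Counter

citations = Counter({
    "Communication and interaction":  41,  # from Section 3.2
    "Responsiveness to the customer": 35,  # placeholder
    "Involvement":                    32,  # placeholder
    "Competence of the provider":     30,  # placeholder
    "Customer organisation":          26,  # placeholder
    "Other dimension (one of 14)":    10,  # placeholder for the remaining 14
})

# Keep the five most frequently cited dimensions as the CSFs.
csfs = [name for name, _ in citations.most_common(5)]
print(csfs)
```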

3.2. Key performance indicators (KPIs)

A KPI is a quantifiable measure that is used to gauge or compare performance in terms of meeting strategic and operational goals. Therefore, KPIs must be aligned with the objectives. An appropriate approach to secure such alignment would be through the CSFs (Parmenter, 2010).

A fundamental element of our framework concerns the identification of KPIs and their connection with the CSFs, an area where evidence is lacking in the context of simulation project success. We suggest up to three KPIs to cover the different aspects of each CSF as well as possible. The whole set of KPIs then serves to quantify a project’s success from various perspectives.

In order to do so, we propose two interim steps, the main purpose of which is to enable a sensible, informed path from CSFs to KPIs. Our approach is to examine the expert opinion survey conducted by Robinson and Pidd (1998) for each CSF in order to identify its key characteristics, which are then used to inform the development of KPIs.

The first step involves a notion we call a Statement of Success (the ‘cited factor’ in the Robinson and Pidd (1998) study), which is a provider’s or customer’s perception of a success factor in their own language. Each CSF is associated with a set of statements of success. For instance, the CSF ‘communication and interaction between the provider and the customer’ represents 41 statements of success, such as ‘There will be regular communication between the provider and customer’ or ‘The customer will be constantly informed about progress on the project’. These statements provide more detailed information about each CSF, so they can be used as part of our top-down approach to reach the KPIs. We use a total of 164 statements of success associated with the five selected CSFs, as presented in the Robinson and Pidd (1998) study.

In the second step, we propose another notion called a Common Feature, which characterises a limited number of features (at most three) that are perceived to be common among the set of statements of success associated with an individual CSF. These common features encapsulate a set of statements of success in a manageable set of criteria. For instance, one common theme in the two statements quoted in the previous paragraph is the regularity and continuity of communication, for which we propose a common feature called ‘Frequency of Communications’. Table 2 presents the 15 proposed common features for the five selected CSFs. Tables A1-A5 in the Appendix give a detailed account of how these common features are associated with the 164 statements of success.

Table 2 Proposed Common Features in association with each selected CSF

Finally, we propose one KPI for each common feature. In order to avoid judgemental biases, quantitative data-based indicators are suggested for 14 of the 15 KPIs. For example, we suggest ‘average no. of communications per month throughout the project’ as the KPI for Frequency of Communications. The only KPI that requires a qualitative response is communication effectiveness, for which we could not find an appropriate quantitative indicator. The use of quantitative data-based indicators for the KPIs gives the proposed framework the following advantages:

  • Low exposure to judgemental bias

  • Smaller sample size needed for survey data collection, for two reasons: less bias is involved, and the survey asks for verifiable quantitative data for each KPI.

  • Easier for stakeholders to complete the questionnaire: respondents are asked for factual data, which is easier to report than subjective opinions.

Table 3 presents the association between common features and proposed KPIs, along with the description of associations.

Table 3 Proposed KPIs and their association with the selected CSFs
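The hierarchy behind Tables 2 and 3 can be pictured as a nested mapping, as in the short Python sketch below for the communication CSF only. Apart from the frequency-of-communications feature, its KPI and the qualitative communication-effectiveness KPI mentioned in the text, the entries are hypothetical stand-ins for the corresponding table rows.

```python
# A sketch of the top-down hierarchy (CSF -> common features -> KPIs) as a
# nested mapping, for the one CSF whose details appear in the text. The
# third entry is a placeholder for the remaining row of Tables 2 and 3.

framework = {
    "Communication and interaction between the provider and the customer": {
        "Frequency of communications":
            "Average no. of communications per month throughout the project",
        "Communication effectiveness":
            "Perceived effectiveness of communications (qualitative, 5-point)",
        "<third common feature from Table 2>":
            "<quantitative indicator from Table 3>",  # placeholder
    },
}

for csf, features in framework.items():
    print(csf)
    for feature, kpi in features.items():
        print(f"  {feature}: {kpi}")
```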

3.3. Data collection instrument

A questionnaire was developed to collect data on each of the 15 KPIs for a given project. Each question in the questionnaire corresponds to one KPI. A five-point multiple-choice format was used for all the KPIs to allow for consistency. Responses are coded from 5 (most successful) to 1 (least successful); these codes are the KPI values.

The questionnaire was tested and modified through a pilot survey with three members of the provider organisations. See the appendix for the questionnaire used in this study.

3.4. Overall success measures

Two measures based on the KPIs are suggested to facilitate the assessment of a project’s performance. The first is the Project’s Success Measure (PSM), calculated as a percentage:

$$\mathrm{PSM} = \left(\sum_{i} w_i \, \frac{\sum_{j} \mathrm{KPI}_{ij}}{5\, n_i}\right) \times 100\%$$

where i and j refer to the CSF index and its related KPI index respectively, $\mathrm{KPI}_{ij}$ is the coded value (1 to 5) of the jth KPI under the ith CSF, $n_i$ is the number of KPIs associated with the ith CSF (three for each CSF in our framework), and $w_i$ is the weight of the ith CSF, calculated as:

$$w_i = \frac{c_i}{\sum_{k} c_k}$$

where $c_i$ is the number of citations associated with the ith CSF, as reported in Robinson and Pidd (1998) and shown in Table 4. PSM can be used to represent each project’s level of overall success relative to other projects, as well as against a target.

Table 4 CSFs and their associated weight values ($w_i$)

The other measure, the Success Factor Measure ($\mathrm{SFM}_i$), represents the project’s performance in the area of success factor i, again calculated as a percentage:

$$\mathrm{SFM}_i = \frac{\sum_{j} \mathrm{KPI}_{ij}}{5\, n_i} \times 100\%$$

where j runs over the KPIs related to the ith CSF and the value 5 in the denominator is the number of choices associated with each KPI in the questionnaire, so that $\mathrm{PSM} = \sum_i w_i\,\mathrm{SFM}_i$. This measure enables the management team to conduct a drill-down assessment of one CSF in order to explore areas of a project’s weakness or strength.
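Under the formulas above, the two measures reduce to a few lines of code. The following Python sketch assumes the coded responses and the split of the 164 citations among the CSFs shown in the comments; only the communication count (41) and the 164 total are given in the text, so the other counts are illustrative.

```python
# A minimal sketch of the SFM and PSM calculations as reconstructed above:
# KPI responses are coded 1-5, SFM_i normalises a CSF's KPI scores by their
# maximum (5 per KPI), and PSM is the citation-weighted average of the SFMs
# with w_i = c_i / sum(c). Citation counts other than 41 are placeholders.

citations = {
    "Communication and interaction": 41,  # from Section 3.2
    "Responsiveness": 35,                 # placeholder
    "Involvement": 32,                    # placeholder
    "Competence of the provider": 30,     # placeholder
    "Customer organisation": 26,          # placeholder; totals 164
}
total = sum(citations.values())
weights = {csf: c / total for csf, c in citations.items()}  # w_i

# Coded questionnaire responses (1-5), three KPIs per CSF, for one project.
kpi_scores = {
    "Communication and interaction": [4, 5, 4],
    "Responsiveness": [5, 4, 4],
    "Involvement": [2, 3, 2],
    "Competence of the provider": [5, 5, 4],
    "Customer organisation": [2, 2, 1],
}

def sfm(scores):
    """Success Factor Measure: achieved score over the maximum, in %."""
    return 100.0 * sum(scores) / (5 * len(scores))

sfms = {csf: sfm(scores) for csf, scores in kpi_scores.items()}
psm = sum(weights[csf] * sfms[csf] for csf in sfms)  # Project's Success Measure

for csf, value in sfms.items():
    print(f"SFM[{csf}] = {value:.0f}%")
print(f"PSM = {psm:.0f}%")
```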

4. Exemplar studies

The framework was put to the test with data from 9 exemplar cases of modelling and simulation with a wide range of progress and achievement levels. This allowed a better assessment of the framework’s ability to distinguish between projects with widely differing success levels. The exemplar cases were carried out by a team of modelling experts from five organisations (mostly academic) in health-care settings during 2007–2008. Table 5 summarises the 9 cases and the progress made, from only defining the problem (Stage 1) to implementing the findings (Stage 6).

Table 5 Simulation exemplar cases

We contacted 14 people from the provider organisations and collected their assessments of the 9 exemplars, based on our KPI framework, using the questionnaire (see the Appendix). The completed questionnaires were received within 3 weeks. Where more than one person was involved in an exemplar case, a consensus was reached by sharing the responses among those involved and asking for a single set of responses.

Figure 2 Projects assessment based on the PSM measure.

5. Results and analysis

The results from the questionnaire for the 9 exemplar cases are shown in Table 6.

Table 6 KPI values of 9 exemplar cases obtained via the questionnaire

5.1. Project’s success measure (PSM)

Figure 2 presents the results of a comparative analysis on the basis of the PSM values, with the projects sorted in descending order of PSM. All of the top four cases, namely cases 1, 4, 8 and 9, went through a full-scale simulation project and produced results; cases 1 and 9 also reached the implementation stage. The four cases differed, however, in ways that influenced their overall success, as explained below.

At the other end of the spectrum, none of the bottom three cases, namely cases 7, 2 and 3, went through a full-scale simulation project or produced any hard results. Case 3 was distinctive in that the project came to a halt at a very early stage after the key contact person in the customer organisation fell ill.

5.2. Comparative analysis by the SFM measures

Figure 3 presents the results of a comparative analysis based on the projects’ performance on each critical success factor, that is, the SFM measures. As seen in the figure, some exemplar cases perform rather well on some factors but rather poorly on others. Case 7, for example, achieves an SFM of 87% on the Communication and Interaction factor but only 27% on the Customer Organisation factor. Similarly, cases 4 and 8 show relatively poor scores on Customer Organisation in particular, in contrast to their good performance on all other SFMs. This implies that these cases did not experience a high level of organisational support from the customers, possibly because of a lack of simulation knowledge within the UK health-care system. Organisational support is essential if results are to be implemented and real impacts realised. Case 9 is characterised by two SFMs below 50%, namely Involvement and Customer Organisation; its low score on the Involvement factor might reflect the disengagement of the customers from the project.

Figure 3 Assessment and comparison of the exemplar cases based on the SFM measure.

Almost all the cases reported a relatively high performance with regard to the Provider’s Competence. This is unsurprising, because the provider teams belonged either to the academic community or to experienced consultancy companies. The only exception was the team involved in case 7, which specialised mostly in other types of modelling rather than simulation modelling. This is the main reason why that case did not reach the stage of developing a simulation model.

The relatively high performance of the less successful case 2 with regard to Customer Organisation is interesting. It was mainly attributed to the fact that the customer’s first contacts were mostly working at the strategic level, so there was a high level of familiarity with and acceptance of the simulation project among this group. However, other key stakeholder groups were not actively involved, which explains the low score on the Involvement factor.

Calculating average scores for each SFM across the 9 exemplar cases, as presented in Figure 4, shows that two of the five CSFs, namely Involvement and Customer Organisation, perform below 50%, with Customer Organisation the poorest. This result may point to areas that generally need more attention in practice.

Figure 4 Average SFM values across the 9 exemplar cases.

5.3. Correlation analyses

5.3.1. Between the progress score and the overall success of the projects

A Pearson correlation analysis between the progress scores and the overall success of the projects (based on the PSM values) produced a correlation coefficient of 0.763, indicating a strong positive association.

5.3.2. Between the CSFs and the overall success of the projects

A Pearson correlation analysis between the CSFs and the overall success of the exemplar cases (based on the SFM and PSM values) revealed that four of the five factors, namely Responsiveness, Involvement, Communication and Interaction, and Customer Organisation, hold strong associations with the overall success level of the projects, with Responsiveness showing the strongest association (Figure 5). The competence of the provider factor, on the other hand, exhibits a very low correlation. Table 7 presents the correlation coefficients.

Table 7 Correlation Coefficients between the CSFs and the overall success of the project based on the 9 exemplars’ data
Figure 5 Correlation between Responsiveness and PSM values.
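A minimal Python sketch of these correlation analyses follows, using hypothetical data for the 9 cases; the coefficients reported above (0.763 and 0.955) come from the authors’ survey data, not from this illustrative input.

```python
# A sketch of the Pearson correlation analyses in Section 5.3. The progress
# stages (1-6), PSM values and Responsiveness SFM values below are
# placeholders for the 9 exemplar cases, not the paper's data.

from scipy.stats import pearsonr

progress       = [6, 2, 1, 5, 3, 3, 2, 5, 6]           # placeholder stages
psm            = [81, 42, 30, 68, 51, 55, 38, 66, 56]  # placeholder PSMs (%)
responsiveness = [90, 45, 25, 70, 50, 60, 35, 72, 60]  # placeholder SFMs (%)

r, p = pearsonr(progress, psm)
print(f"Progress vs PSM: r = {r:.3f} (p = {p:.3f})")

r, p = pearsonr(responsiveness, psm)
print(f"Responsiveness SFM vs PSM: r = {r:.3f} (p = {p:.3f})")
```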

5.4. Analysis based on individual KPIs

A further analysis is possible by drilling down one more level to investigate each individual KPI. For example, case 1, the most successful in terms of the PSM measure, shows a poor score on organisational knowledge of simulation. Knowing this could help the provider improve organisational knowledge in future projects; in this particular case, the provider organisation could arrange simulation training for key stakeholders during the early phase of the next project.
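As a simple illustration of such a drill-down, the sketch below ranks one project’s coded KPI values to surface its weakest areas; the scores are illustrative rather than case 1’s actual data.

```python
# A sketch of the drill-down described above: rank one project's coded KPI
# values (1-5) to surface its weakest areas. The KPI names come from the
# paper's discussion; the scores are illustrative placeholders.

kpis = {
    "Frequency of communications": 4,
    "Organisational knowledge of simulation": 1,  # weak area, as in case 1
    "Benefits for customers": 3,
    "Organisational support and commitment": 2,
}

# Lowest-scoring KPIs first: candidate areas for improvement.
for name, score in sorted(kpis.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score}/5 ({score / 5:.0%})")
```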

Furthermore, a bird’s-eye view of the KPI scores across projects can help identify general problem areas in simulation practice. For this purpose, Figure 6 presents average KPI values across all 9 exemplar cases in our study. The average score for organisational knowledge of simulation is 29%, the lowest among the 15 KPIs. This highlights the importance of simulation awareness and training programmes in our context, healthcare.

Figure 6 Average KPI values across the 9 exemplar cases.

The second lowest average score, 31%, is for the benefits for customers indicator. This indicates that the simulation cases largely fell short of producing the expected benefits for customers, which are vital to the success of any project. Case 4 is an example where simulation results were generated but not implemented, so fewer actual benefits were realised for the customer; this is reflected in that case scoring lowest on this particular KPI.

Organisational support and commitment, with a 40% average score, presents the third lowest performance. Management support plays a key role in several respects: apart from financial support for the project, its moral support motivates other stakeholders in the client’s organisation to commit strongly.

6. Discussion

The results of our exemplar studies can be discussed from two perspectives. One is to assess the reliability of our proposed framework, and the other is to identify some meaningful patterns in the data, which could in turn inform practice.

On the reliability aspect, the results on the PSM measure show an overall consistency with the comparative level of progress and achievement made across the exemplar cases. The analysis suggests that the proposed framework can distinguish between projects in terms of the level of success achieved from different perspectives. For example, even though cases 1, 4, 8 and 9 all reached the final simulation stage and produced results, their PSM values show a clear superiority of case 1, which reached the full implementation phase, over the other three, which either did not (cases 4 and 8) or did so only partially (case 9). This is also interesting because simulation projects normally end before their results are implemented. In fact, beyond providing a multi-faceted view of success, the proposed framework, if used in the middle of a simulation project, appears to offer an anticipatory view as well, acting as a predictor of likely implementation and ultimate success, since an interim success is likely to lead to an ultimate success. The primary objective of the proposed framework, however, is to evaluate a project’s current success on the basis of its past performance.

Another interesting result concerns case 9. Although the impression might be that this was a highly successful project, since it went through all the stages of a simulation project and produced results that were used by the customers, its PSM score was only moderate (56%). In fact, this case can be seen as an incomplete simulation project that failed to secure full engagement and customer support, resulting in a premature halt. It can therefore be inferred that our proposed framework was able to take account of different facets of success in a simulation project and to avoid possible short-sighted biases.

Also, as the correlation analysis presented in Section 5.3.2 showed, four of the five CSFs have a strong capacity to differentiate between successful and unsuccessful projects. This finding suggests that the proposed framework could be used with some confidence to evaluate the success of simulation projects. Interestingly, responsiveness to the clients’ expectations shows the strongest correlation, with r=0.955, which might mean that the Responsiveness factor could represent the overall success of a project reasonably well even if used individually. What really matters is how much a simulation study has helped the customers to achieve their objectives. This point reminds providers who believe the customers are wrong about the problem and how to tackle it to reconsider and be more responsive; the outcome will ultimately translate into success for both the customers and the providers, the ideal situation in a success study. Furthermore, Involvement, the second strongest factor with a correlation coefficient of r=0.879, shows the importance of such subjective criteria in securing the overall success of projects. This finding confirms the widespread evidence on the significance of stakeholder engagement (eg, Fildes and Ranyard, 1997; Melão and Pidd, 2003; Brailsford et al, 2009; Taylor et al, 2009). Another interesting finding is the weak correlation between the competence of the provider factor and overall success, which is very similar to the finding by Robinson (2002) that both modellers and customers place greater emphasis on the process of a simulation study than on its content when making quality judgements.

Overall, looking at the results of the 9 exemplar cases as a sample, a general pattern can be observed. The customer’s organisational capacity to support the project showed the poorest result within the set of 9 exemplar cases. More specifically, there were clear indications of poor management support and limited simulation knowledge in the customers’ organisations to back the projects internally. A lack of familiarity with, and awareness of, simulation’s capabilities and benefits appears to play a key role here (Murphy and Perera, 2002). Little effort has been made to generate and collect evidence on the cost/benefit assessment of simulation projects (Jahangirian et al, 2010) and to present it effectively to the community of management practitioners. A need to integrate simulation training into the management curriculum, especially in European countries, is also evident (Murphy and Perera, 2002).

The selection of CSFs in our study, based on the work by Robinson and Pidd (1998), was informed by views from both the provider and the customer groups. Our exemplar study, however, used evaluations of the projects’ performance by providers only, mainly because we only had access to the providers. While some indicators, such as those in the Competence of the provider category, can be better evaluated by the providers, others, such as those in the Responsiveness category, would be better addressed to the customers. There are also areas, such as the Communications and interactions category, that would benefit from responses from both groups. However, we believe that the quantitative data-based approach adopted in our framework can mitigate possible biases in the evaluation process.

7. Conclusions

Traditional project management techniques take the cost, time and end-product quality factors (the so-called Iron Triangle) into account and provide project managers with information only on these three attributes in order to plan and control the project. While they have been useful to some extent, research studies suggest that it is difficult to manage projects using these traditional techniques alone (eg, Baccarini, 1996; Williams, 1999; Bryde, 2005; Pundir et al, 2007). It is even more difficult in consulting projects, such as simulation studies, where the immediate products are rather intangible; for example, Robinson (2002) highlights the greater importance of process over content. It is therefore important to recognise that the success of such projects cannot be linked only to the end products. A more process-oriented view, with some attention to intangible benefits, needs to be taken on board. For example, a project that does not reach the final stage might still score on such intangible criteria as ‘increased understanding of the system and potential system improvements in the future’.

The study of CSFs, which brings a wide range of perspectives alongside the Iron Triangle, has become a popular area of research for addressing the complexity of projects, and this paper shows its applicability to simulation projects. However, the mainstream research has not gone beyond the identification of success factors; there has been a clear gap in advancing the topic to the stage where a multi-faceted view of a simulation project can be quantified and then used to manage the project towards success. To address this, we present a top-down framework, based on KPIs linked to CSFs, whereby the concept of simulation project success can be quantified. Such a multi-faceted approach allows wide application.

The results of our survey of 9 exemplar cases, and the correlation analyses on those results, provided some support for the reliability of our proposed framework. Further analysis of the results highlighted some areas that might represent a general pattern. For example, the 9 cases produced consistently lower scores on the customer’s organisational capacity to support the simulation project, which is crucial in securing the implementation of results. Simulation providers could address the gap in simulation awareness within customer organisations through short training programmes for key stakeholders during the early phase of a project.

The results suggest that our proposed framework and questionnaire could be used with some confidence to measure performance and to monitor and benchmark simulation projects, although further testing is needed. Performance measurements using the questionnaire could be taken either during the course of a project or after its completion. The analysis of the measurements, by drilling down to the individual KPIs, can facilitate the identification of issues in a simulation project. A complementary research direction could be to study how the CSFs could be embedded in emerging simulation methodologies and tools such as enterprise business process simulation (Liu and Iijima, 2015) and construction engineering (AbouRizk et al, forthcoming).

This work is by no means complete and has its limitations. The selection of CSFs in our study, based on the work by Robinson and Pidd (1998), was informed by views from both the provider and the customer groups, whereas our exemplar study used evaluations of the projects’ performance by providers only. This limitation, we believe, could be removed by involving customers in future surveys. Further work might also be devoted to fine-tuning our questionnaire constructs based on future surveys. New survey studies measuring simulation projects’ performance using our proposed framework could also provide useful general insights into the areas of concern in simulation practice. These could be further complemented by studies of the impact of causal factors on the success of simulation projects (Jahangirian et al, 2015).

While the scale of our exemplar study (9 exemplars) is similar in magnitude to existing published research, the methodology would benefit from reflections on further exemplar studies. This research used data from the health-care sector that were available to the authors, so there may be a potential sector bias; similar studies using data from other sectors could show whether the findings are comparable. Further research could also be dedicated to confirming the weights used for each CSF.