Systemic Practice and Action Research, Volume 20, Issue 6, pp 441–453

The Pattern-matching Role of Systems Thinking in Improving Research Trustworthiness

Author

    • G. Cao, School of Business Organisation and Management, University of Ulster

Original Paper

DOI: 10.1007/s11213-007-9069-1

Cite this article as:
Cao, G. Syst Pract Act Res (2007) 20: 441. doi:10.1007/s11213-007-9069-1

Abstract

This paper seeks to identify a new and useful role for systems thinking in improving qualitative research quality. Firstly, it reflects upon the main characteristics and difficulties of qualitative research. Secondly, it critically reviews some of the techniques suggested to improve research credibility, transferability, dependability and confirmability. Thirdly, it explains the concept of pattern-matching and its role in designing case studies. On this basis, the paper examines the pattern-matching role of systems thinking in enhancing the trustworthiness of qualitative research. Finally, a case study is described to illustrate how pattern-matching is embedded in a systems perspective to improve research trustworthiness. The paper highlights that when systems thinking is properly combined with pattern-matching in qualitative research, it can help investigators to develop a fuller and richer understanding of a phenomenon, and can enable others interested in the research to recover how conclusions have been reached and to make informed judgements about the research findings.

Keywords

Systems thinking · Qualitative research · Research trustworthiness · Pattern-matching

Introduction and outline

This paper attempts to identify a pattern-matching role for systems thinking in improving the quality of qualitative research, which can be immensely untidy and difficult, sometimes even irreproducible and incommunicable. While systems thinking is recognised as providing comprehensive answers and insightful guidelines for solving many complex problems, the relationship between systems thinking and research quality has received scant attention in the literature. This is where this article endeavours to make a contribution. It argues that when systems thinking is properly blended with pattern-matching in qualitative research, it can help investigators to develop a fuller and richer understanding of a phenomenon, can enable others interested in the research to recover how conclusions have been reached, and can allow others to make informed judgements about the research findings.

The paper first reviews the main characteristics of qualitative research and the concept of research quality or research trustworthiness, helping to understand the main difficulties of qualitative research. The article then critically examines several activities, suggested by Lincoln and Guba (1985), to enhance research quality. The result of this critique highlights the need for new and/or supplementary ways of improving research quality. Next, the article explains the concept of pattern-matching and examines how systems thinking might be mixed with pattern-matching and employed in qualitative research to help to make transparent the conceptual underpinnings of the research and the process of analysing and reasoning, to allow anyone interested in the research to re-evaluate or to recover the whole research, and to judge whether or not the research findings are trustworthy. Finally, a case study of applying critical systems thinking (CST) to a change management project is briefly described to illustrate the usefulness of the pattern-matching role of systems thinking.

Main characteristics and research trustworthiness of qualitative research

Qualitative research is often understood differently by different groups of researchers. In this article, qualitative research refers to an interpretative approach to collecting and analysing data in order to explore and explain a phenomenon. It often takes the form of a case study of a particular instance, or a small number of instances. Research data might include transcripts of in-depth interviews, observations or documents. In analysis, the goal of qualitative data analysis is to identify related themes and patterns, to discover relationships among the themes and patterns, and to provide fruitful explanations for these relationships (Walsh 2003). “Good qualitative data are more likely to lead to serendipitous findings and to new integrations; they help researchers to get beyond initial conceptions and to generate or revise conceptual frameworks” (Miles and Huberman 1994). They help researchers to reach a “fundamental understanding of the structure, process and driving forces” of a particular problem situation and to generalise even from a single case (Norman 1970, cited in Gummesson 1991). In short, qualitative research is very powerful in understanding what is happening, looking at the totality of each problem situation, and developing ideas through induction from data (Easterby-Smith et al. 1991; Gummesson 1991; Creswell 1994; Hussey and Hussey 1997), or in theory-testing through deductive processes (Hyde 2000).

Qualitative research may indeed be valuable and indispensable to developing our understanding of a phenomenon. Analysing qualitative data is, however, often immensely complex, arduous, and messy, because such materials are relatively unstructured, analysis procedures are not well developed, and criteria for interpreting findings are imprecise (Gummesson 1991; Miles and Huberman 1994; Yin 1994). For instance, researchers often fail to justify why and how their qualitative approaches are sound (Decrop 1999). “We are left with the researcher’s telling us of classifications and patterns drawn from the welter of field data, in ways that are irreducible or even incommunicable” (Miles and Huberman 1994, p 2). Because of the interpretative nature of qualitative research, traditional concepts such as “validity” and “reliability” do not seem to be very helpful or relevant in many situations.

Consequently, one of the major issues for qualitative researchers is to have clear and appropriate criteria that will help to justify their research approaches and to allow their research to be recovered and communicated. For this purpose, new criteria have been suggested by different researchers, for example Lincoln and Denzin (1994) and Seale (1999). Lincoln and Guba (1985) suggest that qualitative research should have trustworthiness in terms of credibility—the truthfulness of particular findings; transferability—the applicability of the research findings to another setting or group; dependability—the consistency and reproducibility of the research results; and confirmability—the research findings being reflective of the inquiry and not the researcher’s biases. Henceforth in this paper, these four criteria will be the basis on which research trustworthiness is understood and activities to enhance research trustworthiness are critically examined.

Critiquing activities to increase research trustworthiness

To increase research trustworthiness, Lincoln and Guba (1985, pp 301–316) suggest using triangulation, “thick description”, and inquiry audit. As far as triangulation is concerned, they recommend that multiple and different data sources, methods and investigators be used. The use of multiple theories, however, they argue is fallacious and meaningless. The main argument is that if a given fact, which is often theory-determined, is consistent with two theories, that finding may be more a function of the similarity or interrelatedness of the theories. For instance, many “facts” within Newtonian theory are also facts within relativity theory, because Newtonian theory can be taken as a “special case” of relativity theory. Accordingly, they argue, a fact that is consistent with two theories is neither more believable nor more meaningful than if it were consistent with only one of them. On this view, theoretical triangulation is epistemologically unsound and empirically empty.

This may be true for certain types of research. However, serious questions arise if this line of reasoning is followed in qualitative research. First, if one is to accept that a given finding can only be meaningfully consistent with one theory, is this not in stark contrast with the subjective nature of qualitative research? In natural science there are immutable causal relations given certain conditions. This is not the case in social research, where many problem situations, it can be argued, are multifaceted and can be understood differently by different groups of people based on diverse perspectives. For instance, an organisation can be meaningfully understood from processual, structural, cultural and political perspectives (Flood 1995). While each of the four views is distinctly dissimilar to the others and effectively focuses on one aspect of a complex phenomenon, together the four can help develop a much fuller comprehension of an organisation. Gregory (1996) views critical practice as having four interdependent studies—empirical-analytic, historical-hermeneutic, self-reflection, and ideology-critique. This also suggests the multi-dimensional nature of a phenomenon. Habermas (1976, 1984a, b), on the other hand, argues that four validity claims are raised in any statement intended for communication: intelligibility, truth, rightness, and sincerity. Intelligibility is a precondition for effective communication. The other three refer to three interrelated worlds: the natural world, the social world and the internal world, which Midgley (1992) refers to as the “ontological complexity”. Basically, statements of truth are about the natural world: objective and independent of human beings. Statements of rightness are about the social world: inter-subjectivity, normative social practices and rules. Statements of sincerity are about the internal world: an individual’s thoughts, emotions, experiences and beliefs. In a nutshell, these suffice to suggest that a fuller understanding of a phenomenon can only be effectively developed from different perspectives. Understandings based on one theory can be very incomplete and insufficient. Essentially, it has to be recognised that it is this “ontological complexity” that necessitates the use of different perspectives (Midgley 1992). Theoretical triangulation is therefore not only epistemologically sound but also empirically meaningful in qualitative research.

Second, is consistency the only thing to be desired in qualitative research? Having said that qualitative research is essentially complex and that multiple perspectives are necessary to understand a phenomenon, it has to be recognised that similarities, differences, or even contradictions may coexist in qualitative research. For example, from the processual and structural views (Cao 2001), organisational change can be perceived as logical and orderly to the extent that it can be usefully characterised and effectively managed. Approaches to change are also perceived as logical and orderly to the extent that they can be usefully distinguished and employed in suitable change contexts. In this sense, it is certainly possible to conceive of this understanding being developed in a modernist direction. However, from the cultural and political views, organisational change is a phenomenon depending on individuals’ understandings and definitions of problem situations. It accepts multiple beliefs, values, and thus “multicultural organisations”. Furthermore, it recognises the importance of understanding an organisation in terms of a particular balance of various forces. This makes it possible to challenge the ideas, beliefs and values underpinning the processes and the design of an organisation, and to reveal possible coercive influences and effects, thereby improving the positions of the disadvantaged. In this sense, it is certainly possible to conceive of this comprehension being developed in a postmodernist direction. It is clear that the understandings based on different perspectives may differ greatly. Nevertheless, these differences, “just like a constellation of stars in the sky, changes over time and can be seen from different angles, yet supplementing one another no matter how incompatible positions are taken” (Gregory 1996, p 49). In conclusion, it might be argued that not only similarities but also differences should be searched for and appreciated in qualitative research. Only in such a way can richer and fuller interpretations be developed.

To improve the second key criterion of trustworthiness, transferability, Lincoln and Guba (1985, pp 301–316) suggest that “thick description” should be provided to enable others interested in the research to judge whether the research findings can be applied to another setting or group. In this “thick description”, an account of the working hypotheses and the general context in which they hold should be given. This is undoubtedly informative and fundamental. Without an inferable underlying theory, research findings are just an ad hoc solution to one specific instance. Without theory, it is difficult for people to understand why any finding works in the way it works. Further, without this theory-informed understanding, it is difficult for people to learn from the findings and to communicate them to others. It is therefore essential that research findings are underpinned conceptually, so that well-informed reasons are provided for people to judge whether or not the findings are transferable to other, similar, problem situations.

Finally, inquiry audit is suggested as a way to examine research dependability and confirmability, just as in a fiscal audit the records and accounting processes of a business are authenticated. Basically, dependability can be assessed by conducting several auditing activities, including, for example, evaluating the appropriateness of the inquiry decisions and methodological considerations, examining to what extent the findings may have been influenced by the researcher’s bias, and evaluating the overall research design. Confirmability can be assessed on successfully completing several activities. The first is to judge whether the findings are grounded in data rather than the researcher’s personal constructions. The next is to ascertain whether inferences are logical by using appropriate analytic techniques. Then it is to evaluate the appropriateness of the category structure used. Finally, it is to find out whether methodological efforts have been made by the researcher to ensure the findings are reflective of the inquiry rather than the researcher’s biases. Once these assessment activities are done, an overall judgement about the research’s confirmability can be reached.

These audit activities may take a week or more to complete, involving five different stages and requiring six classes of very detailed raw records stemming from the research. There is no doubt that this is highly desirable and useful in assessing research trustworthiness for all research projects. However, it can be argued, it is not always feasible to call in an auditor to examine the inquiry process and the findings in such a complex way. For large and major research programmes, it might be necessary and practicable to fully evaluate their findings through inquiry audit. For smaller research projects, however, it may not be viable to do the same in terms of the resources required. Then, as an alternative, would it be more achievable to report research findings in ways that allow readers to make their own judgements about research dependability and confirmability? If, for example, the methodological considerations, the way data are collected, and the analysis process on which conclusions are based can be made more transparent and more reproducible by others interested in the research, then others will surely be in a better position to judge whether the research results are consistent, recoverable, reflective of the inquiry and not biased by the researcher. Ideally, all research reports would provide enough evidence to enable such a reader-oriented audit; however, many papers are published in ways that are “irreducible or even incommunicable” (Miles and Huberman 1994, p 2), probably for lack of an adequate way to organise the report. Therefore, it might be useful to find an organised way of reporting that can be audited by readers interested in the research.

To recap, while recognising the importance and usefulness of these activities in improving research trustworthiness, it might be desirable and helpful to have alternative or supplementary ways of improving qualitative research. For instance, ways are needed to incorporate theory triangulation and to search not only for convergence but also for differences, or even contradictions, thereby improving the chances of developing richer and fuller interpretations of a phenomenon. Ways are needed that emphasise theoretically supporting the findings, so that others interested in the research can make an informed decision about the applicability of those findings. In addition, ways of reporting that make a reader-oriented audit possible are also needed, to operationalise good ideas such as inquiry audit for research dependability and confirmability.

This article argues that when systems thinking is blended with pattern-matching it can make a significant contribution to supplementing these activities to increase research trustworthiness. This argument is developed in the following sections, which discuss the concept of a pattern and pattern-matching, and the pattern-matching role of systems thinking in improving research trustworthiness.

Blending systems thinking and pattern-matching for research trustworthiness

First, what are a pattern and pattern-matching? Consider a familiar case in which the effects of implementing a total quality management (TQM) programme in an organisation are studied. Based on the concepts of TQM and organisations, it can be predicted that after a successful implementation of the TQM programme certain types of organisational change (or patterns) could be expected, such as improvement in organisational processes, quality of products, customer satisfaction, and team building. If the actual results could be matched with the prediction (pattern-matching), then solid conclusions about the effects of the TQM programme might be drawn. Or, if the actual results failed to show the patterns as predicted, then the theorising of TQM effects might have to be questioned.

Pattern-matching was first used by Campbell (1969) to discuss the effect of the passage of a traffic law in the United States limiting the speed to 55 miles per hour. There were two potential propositions, or patterns. If the law had an effect, the traffic accident rate should show a downward trend after the legal change; if it had no effect, the accident rate would show no systematic change before and after the passage of the law. By examining the traffic accident rate over a few years before and after the legal change, the conclusion was that the law had no effect, because the data matched this proposition better. There were thus two rival patterns, “effects” and “no effects”, regarding the impact of the traffic law, and pattern-matching was the process used to relate the actual accident data to the propositions and to decide which proposition fitted the data better. Campbell (1975) further described pattern-matching as a case study strategy for comparative social studies, where actual data in a case could be related to theoretical propositions.
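To make the logic of this example concrete, the following minimal sketch (in Python) matches invented accident-rate figures against the two rival propositions. The figures and the 5% decision threshold are illustrative assumptions only, not Campbell's data or criteria.

```python
# Illustrative sketch only: the accident figures below are invented, not Campbell's data.
# It shows the bare logic of pattern-matching: two rival propositions ("effects" vs
# "no effects") are stated in advance, a decision criterion is fixed before looking at
# the data, and the empirical pattern is then compared against each proposition.

# Annual accident rates (hypothetical), three years before and after the law.
before = [310, 305, 312]
after = [308, 311, 306]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
drop = mean_before - mean_after

# Criterion, defined in advance: the "effects" proposition predicts a clear downward
# shift (here, a drop of at least 5% of the pre-law mean); the "no effects"
# proposition predicts no systematic change.
threshold = 0.05 * mean_before

if drop >= threshold:
    matched = "effects"      # data fit the predicted downward pattern
else:
    matched = "no effects"   # data show no systematic change

print(f"Mean before: {mean_before:.1f}, mean after: {mean_after:.1f}")
print(f"Proposition best matched by the data: {matched}")
```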

Pattern-matching has been portrayed as having an essential role in case study research. According to Yin (1984, 1994, pp 20–26), there are five very important components of a research design for a case study: its research questions, its propositions, the units of analysis, the logic linking the data to the propositions, and the criteria for interpreting the findings. Pattern-matching is associated with three of them. First, a theory (or rival theories) is specified as the basis of propositions or predicted patterns of events. These propositions can act as a series of benchmarks against which actual data are compared. Then, in the case study, information on all events is collected and organised into an empirical pattern. As a final step, the empirical pattern and the predicted patterns are matched by analysing whether they are in line with each other based on predefined criteria. If the predicted pattern matches the empirical pattern, the theory is seen to be tested and to be able to predict the situation in the case study. While recognising that there can be other ways of linking data to propositions, pattern-matching is seen as the most precisely defined and promising approach to case studies. However, Yin (1994, p 110) also points out that linking data to the propositions and defining criteria still lack precision and need to be further developed to provide detailed guidance.

Pattern-matching is also seen to be conducive to internal validity (Campbell 1975; Yin 1993, p 40), which refers to “establishing a causal relationship, whereby certain conditions are shown to lead to other conditions, as distinguished from spurious relationships” in explanatory or causal case studies (Yin 1994, pp 32–35). Yin (1994, p 35) focuses on two of the factors that may threaten the internal validity of a case study. The first concern over internal validity is the cause-effect relationship between variables. Whenever this type of relationship is to be established, for example in studies that assess the effects of social programmes or interventions, the primary consideration is whether the observed changes can be attributed to the programme or intervention and not to other possible causes or alternative explanations. Another concern over internal validity is that a case study makes inferences based on the evidence collected whenever an event cannot be observed directly. In order to have trustworthy findings it is very important to make sure that the inference is correct. Yin (1994, pp 106–108) suggests that pattern-matching is one effective way to address these two concerns over internal validity. For instance, strong causal inferences can be made in a case that has a variety of outcomes or predicted patterns when the predicted values are found and, simultaneously, alternative patterns are not found. The internal validity of a case study can also be increased if rival explanation patterns are developed and matched.

As a qualitative approach, pattern-matching needs clearly stated criteria to make meaningful comparisons (Yin 1994). Without measurable standards to operationalise concepts such as TQM, organisational processes, quality, customer satisfaction, and team, for example, it would be almost impossible to study the effects of TQM programmes, not to mention communicate the study to others. It is thus essential to make transparent which criteria are used in pattern-matching and why.

In short, pattern-matching often involves a series of steps: stating theoretical propositions before gathering data, defining the criteria used for pattern-matching before collecting data, and comparing patterns in terms of the predetermined criteria (Yin 1994). Seen by many as one of the best strategies for case studies, pattern-matching has been widely used by researchers in many different areas. Since it is not possible to provide an exhaustive list of relevant publications, the following references are offered to demonstrate the usefulness and diversity of pattern-matching applications in the areas of health care (Thomas et al. 2005), supply chain management (Ogden 2006), e-government (Ke and Wei 2006), marketing (Pauwels and Matthyssens 1999; Kohn 2005; Heffernan and Farrell 2005; Thompson and Perry 2004), human resource management (Manz et al. 1991; Lee et al. 1996; Anderson and Boocock 2002; Lo and Lamm 2005), information systems management (Al-Busaidi and Olfman 2005), action research (Mårtensson and Lee 2004), organisational studies (Ross and Staw 1993; Langley 1999; Chiles et al. 2004), and collaborative relationships (Paul and McDaniel 2004).

In this article, a pattern is any consistent and characteristic form that is by definition non-random and potentially describable. Pattern-matching means comparing an empirically based pattern (the “pragmatic reality”) with theoretical patterns (the “theoretical ideals” or “systemic patterns”). The former is derived from accurate observation or inferred from collected data on what actually happened in practice, whilst the latter refers to the predicted outcomes: what might have happened based on specific systems perspectives. The matching of the two patterns, based on clearly defined criteria, will help either strengthen or question the credibility of the theoretical considerations or systems perspectives for the given problem situation. If the patterns match, one can conclude that the theory receives support. If they do not match, the theory might be incorrect or inadequately developed, and should thus be re-examined and reformulated.
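As a rough illustration of this matching step, the sketch below (in Python) expresses a predicted pattern and an empirical pattern as criterion-by-criterion features and compares them against criteria fixed before data collection. The TQM-style criteria and the observations are invented for illustration and are not drawn from any actual study.

```python
# A minimal, generic sketch of the matching step described above (all names and
# values are illustrative). Each pattern is expressed as an expected or observed
# feature per criterion, and the match is decided criterion by criterion.

from typing import Dict


def match_patterns(theoretical: Dict[str, str],
                   empirical: Dict[str, str]) -> Dict[str, bool]:
    """Compare the two patterns criterion by criterion.

    Returns, for each predefined criterion, whether the empirical observation
    is consistent with the theoretical prediction.
    """
    return {criterion: empirical.get(criterion) == predicted
            for criterion, predicted in theoretical.items()}


# Hypothetical predicted organisational changes after a TQM programme.
theoretical_pattern = {
    "organisational processes": "improved",
    "product quality": "improved",
    "customer satisfaction": "improved",
    "team building": "strengthened",
}

# Invented empirical observations.
empirical_pattern = {
    "organisational processes": "improved",
    "product quality": "unchanged",
    "customer satisfaction": "improved",
    "team building": "strengthened",
}

for criterion, matched in match_patterns(theoretical_pattern, empirical_pattern).items():
    print(f"{criterion}: {'match' if matched else 'mismatch'}")

# If every criterion matches, the theory receives support; mismatches point to
# aspects of the theorising that may need to be re-examined and reformulated.
```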

Having explained what pattern-matching involves and how it might be used, the next question is how systems thinking can be associated with pattern-matching and, ultimately, with research trustworthiness. This will be discussed in relation to Churchman’s (1971, 1979) “dialectical debate” and Ulrich’s (1983) “critical systems heuristics”.

For Churchman, the social world is the creative construction of human beings according to different perspectives, which, as discussed previously, is exactly the nature of qualitative research. Therefore, systems thinking needs to take into consideration different perceptions about systems, which can be achieved through a process of dialectical debate. The prevailing worldview (thesis) should be unearthed and understood. This “thesis” should then be challenged by an “antithesis”—a different worldview, giving rise to alternative proposals. In the light of both worldviews, a synthesis can be produced, bringing about a richer picture of the problem situation.

It can be argued that this dialectical debate may contain pattern-matching in its process if the debate is properly organised. First, the thesis and antithesis, in other words the theoretical propositions, need to be clearly described before the analysis. Next, the criteria used for judging what constitutes the thesis and the antithesis must be defined to enable a meaningful and informed debate. Then, contrasting the “thesis” with the “antithesis” in terms of the predetermined criteria should encourage one to collect detailed data and search for cases that challenge and develop one’s theory. In such a way, Churchman’s dialectical debate can be understood as a critical process in which rival theoretical patterns and empirical patterns can be contrasted or matched, thereby improving the chance of fully synthesising and developing a richer understanding of the complex features of a problem situation.

In like manner, Ulrich’s (1983) “critical systems heuristics” (CSH) can be related to pattern-matching. In evaluating what counts as knowledge and what counts as ethically defensible improvement, Ulrich suggests using two modes: the “is”—what the case actually is, and the “ought”—what the case ought to be, to unfold the normative and empirical content of the evaluation. The “ought” mode will usually provide a standard against which the “is” mode is subsequently evaluated. This provides a systematic way to evaluate the value content of judgements about knowledge and improvement, whilst at the same time exposing the value basis of the evaluation itself, preventing us from submitting to any illusion of objectivity. When used for the purpose of debate, the contrast of the two modes will greatly enhance critique rather than allow any judgement to be accepted dogmatically. For example, if “who is involved” is challenged in terms of “who ought to be involved”, then whoever challenges the normative assumptions must also make clear his or her own normative assumptions, because the critique of the “is” mode depends on the “ought” mode. In short, the contrast of the two modes provides a systematic critical approach to evaluating knowledge and improvement.

Similarly, CSH can also be understood to include pattern-matching in its critical process, just as Churchman’s dialectical debate can be related to pattern-matching. Before the critical discussion, the “is” mode and the “ought” mode, together with their criteria, have to be clearly described and defined. The matching of the “is” mode and the “ought” mode, or of the “pragmatic reality” and the “theoretical ideals”, can then be used to critique what the actual problem situation is and what it might have been if the theoretical propositions had been adequately followed. In this way, CSH can be seen to embed pattern-matching in a critical approach that helps to contrast different conceptual assumptions and to make transparent the process by which conclusions are reached in qualitative analysis.

To sum up, Churchman’s dialectical debate and CSH can be seen to dovetail with pattern-matching in their analysis, and the same can be said of many other systems perspectives. This provides the basis for further discussing how systems thinking, when blended with pattern-matching, might be able to enhance research trustworthiness.

The first contribution of systems thinking is its value in increasing research credibility, which comes from its prerequisite of making the theoretical propositions transparent. As discussed earlier, qualitative research can be characterised by “ontological complexity”; thus different perspectives must be considered to comprehensively interpret a phenomenon. Systems thinking is in complete agreement with this. By bringing clearly stated theoretical, especially contrasting, propositions into the research, systems thinking can be understood to embed theory triangulation in the research design. It provides an organised process that facilitates exploring a phenomenon from diverse positions. Based on different conceptual patterns, systems thinking encourages searching for not only similarities but also differences in a problem situation, leading to a richer and fuller understanding. This will eventually increase the credibility of qualitative research.

The second contribution of systems thinking comes from its ability to improve research transferability. In order that research findings are not an ad hoc solution to one specific instance but are transferable to other, similar, problem situations, the findings must be underpinned conceptually. With theoretical patterns defined in advance, systems thinking provides the theoretical basis for understanding why the findings would or would not work in certain problem situations, for learning from the research, and for communicating with others effectively. With the underpinning theories made transparent, systems thinking helps others to make an informed decision about the applicability of research findings.

The next contribution of systems thinking comes from the way in which it can be used to increase research dependability and confirmability. It has been argued in the previous critique that inquiry audit is indeed important and desirable but not always viable, simply because of the resources required. It was therefore suggested that an organised way is needed to enable readers to conduct their own audit. Systems thinking is seen to be able to facilitate such a reader-oriented audit through pattern-matching. In the first place, systems thinking makes transparent its theoretical positions and criteria for comparison before data are collected. This provides a conceptual framework for readers to make sense of the overall research design and consequently to judge research trustworthiness. In the next place, defining theoretical, especially rival, propositions in research is an organised way of encouraging one to search for negative cases in order to develop one’s theory by a method of constant comparison. This limits personal biases in research. “The effect of articulating them (rival propositions) at the outset of a case study is to influence the design and data collection of the study. Ideally, the case study will fairly collect the data needed to give each rival an opportunity to be proven correct or incorrect. If these procedures are carried out properly, the study will not only contribute to the substantive literature but also have incorporated key procedures to avoid pitfalls regarding biased results” (Yin 1993, p 113). In the third place, systems thinking helps to compare theoretical patterns with empirical patterns based on criteria clearly defined in advance. It enables theory and practice to be deliberately separated, allowing the gap between the “pragmatic reality” and the “systemic patterns” to appear clearly and to be examined. In this way, any conclusion drawn from the analysis process is clearer and more reproducible by others interested in the research; any assumptions made can be examined and re-examined through the critical reflection mechanism built into pattern-matching, to surface possible shortcomings or the author’s biases. Therefore readers, based on the patterns and criteria defined and the matching process, can make their own judgement about whether the findings are consistent, representative of the inquiry and not a product of the researcher’s biases. In this way, a study’s generalisability can be corroborated, elaborated, or questioned. This is also consistent with Checkland and Holwell’s (1998) research: they argue that to enhance the validity of qualitative research, it is essential to state in advance the methodology encompassing a particular framework of ideas, through which the research will be understood and interpreted, so that the process of analysis can be “recoverable by anyone interested in subjecting the research to critical scrutiny”.

Having explained the pattern-matching role of systems thinking in improving research trustworthiness, a case study is next described briefly in terms of its research questions, its propositions, the units of analysis, the logic linking the data to the propositions, and the criteria for interpreting the findings, to illustrate how a systems perspective combined with pattern-matching was applied to improve the quality of the case study.

A case study of change management from a systemic perspective

Pattern-matching was combined with a systems perspective and used to conduct a case study of managing change in a Higher Education Institution (Cao et al. 2004). The research question was whether change management could have been more effective if a critical systems perspective had been applied. This question was developed from an understanding of the importance of change management in a competitive business environment and of the high failure rate of change management programmes. To understand and manage organisational change more effectively, a critical systems perspective had been suggested as an improved way forward. It was argued that within many organisations change was composed of elements that were not simple and clearly definable, but were interrelated and interacting. The implications were several. First, different types of organisational change needed to be managed by different approaches. Each existing type of approach to change was primarily focused on specific dimensions of organisational change, to which it could be pertinent and effective, but there was no one best way to manage change. Second, different types of organisational change were interrelated; they needed to be managed together as a whole. Current approaches seemed unable to address situations where more than one type of organisational change surfaced, or to understand and manage organisational change holistically. Third, if the interaction of different types of change was to be successfully managed, a mixed use of methods and methodologies was necessary. However, current approaches seemed to have little to offer beyond a single method focusing on a specific problem. Finally, how was it to be known that the most appropriate decisions had been made? Current change approaches provided little explicit guidance on how to reflect critically on the decisions made. To help address these problems, CST was seen to have great potential for the following reasons. Its commitment to critical awareness was seen to be helpful in reflecting on the shortcomings of any proposed change project. Its commitment to methodological pluralism could help provide guidance on how to employ multiple methods in order to address the whole. Finally, its commitment to emancipation could be used to guide intervention toward “local improvement”. Since CST could not provide specific change suggestions, it had to be integrated with knowledge of change management to form a systemic view of change management. After all, the essential question was whether change management could have been more effective if a CST perspective had been applied. Given the complexity of any organisational change programme, a case study was seen to be pertinent to answer this research question and to arrive at a normative position that would help in understanding change management and provide implications for future change management practice.

The theoretical propositions were thus derived from an understanding of change management based on CST. Essentially, it was proposed that organisational change could have been more effectively managed by using systemic approaches. The main elements of this proposition included: improvements through the change programme would have been judged by multiple objectives and multiple views from all organisational members; the boundary of the change would have been critically challenged and defined by considering the views of all organisational members; multiple methods would have been creatively designed and employed to manage the change; and participation would have been emphasised and facilitated in various ways.

The unit of analysis was the whole organisation in question. The logic linking the data to the propositions was the process of pattern-matching: comparing the empirically based pattern, the actual change undertaken in the case, with a theoretical pattern of predicted outcomes, that is, what might have happened based on theoretical considerations grounded in CST.

The final step of the research design, completed before collecting data, was to define the criteria for interpreting the findings, which were decided to include improvement, boundary judgement, multiple methods and participation. Data were then collected from several sources, such as a historical analysis of the case, a review of documentation, and in-depth interviews with members of the organisation. Based on the collected data, the empirical pattern was established. The actual case was then reinterpreted from the theoretical perspective based on CST to develop a more detailed theoretical pattern. The empirical pattern and the theoretical pattern were then compared in terms of the four systemic criteria. For example, “what the improvement was” in the empirical pattern was matched against “what the improvement should have been” in the theoretical pattern, “what the boundaries were” against “what the boundaries should have been”, “what methods had been used and in what ways” against “what methods should have been used and in what ways”, and “who had been involved” against “who should have been involved”. As a result of this critique, a more comprehensive normative understanding of change management grounded in CST was developed.
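For readers who find a schematic helpful, the small sketch below (in Python) simply lays out the four-criteria comparison frame described above. The question wording paraphrases the text, and the structure is an illustrative assumption rather than material from the original study.

```python
# Illustrative tabulation of the four systemic criteria used to match the
# empirical pattern ("is") against the theoretical pattern ("ought").

comparison_frame = [
    # (criterion, empirical pattern question, theoretical pattern question)
    ("improvement",   "What was the improvement?",        "What should the improvement have been?"),
    ("boundary",      "What were the boundaries?",        "What should the boundaries have been?"),
    ("methods",       "What methods were used, and how?", "What methods should have been used, and how?"),
    ("participation", "Who was involved?",                "Who should have been involved?"),
]

for criterion, is_question, ought_question in comparison_frame:
    print(f"{criterion:>13}: '{is_question}' vs '{ought_question}'")
```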

Although the case study was conducted retrospectively, it was seen to be pertinent to provide trustworthy answers to the research question. Pattern-matching was seen to have made a largely unstructured and immensely complex case study more reproducible and communicable. It also allowed others who might be interested in the case study to recover how conclusions had been reached and to make informed judgements about the findings.

Conclusions

This paper has attempted to show how systems thinking can be blended with pattern-matching to increase research credibility, transferability, dependability and confirmability in qualitative research. It has argued that supplementary ways are needed to augment current activities for improving research trustworthiness. It has also argued that theory triangulation is epistemologically necessary and empirically meaningful because of the “ontological complexity” of qualitative research. It has emphasised that making transparent the conceptual underpinnings of qualitative research is fundamental for judging the transferability of the findings. Finally, a reader-oriented audit is suggested as useful for research dependability and confirmability.

To supplement existing techniques for research trustworthiness, systems thinking is seen to be able to contribute greatly to research credibility, transferability, dependability and confirmability. By embedding theory triangulation in the process, systems thinking can significantly increase research credibility and help develop a fuller and richer understanding of a phenomenon. With its clearly defined theoretical frameworks, systems thinking allows others interested in the research to make informed judgements about the applicability of the findings. Finally, matching systemic and empirical patterns according to clearly defined criteria makes transparent the process of analysing and reasoning, allowing others to recover how conclusions have been reached and to judge the dependability and confirmability of the research.

The pattern-matching role of systems thinking is intended as a new application of systems perspectives to improving research trustworthiness. The role developed and illustrated in this paper is exploratory and supplementary to other qualitative methods. It is likely that systems thinking can be used successfully in other ways to increase research trustworthiness.

Copyright information

© Springer Science+Business Media, LLC 2007