Journal of the Operational Research Society, Volume 57, Issue 2, pp 202–219

A critique of statistical modelling in management science from a critical realist perspective: its role within multimethodology

Theoretical Paper


Management science was historically dominated by an empiricist philosophy that regarded quantitative modelling and statistical analysis as the only legitimate research methods. More recently, interpretive or constructivist philosophies have also developed, employing a range of non-quantitative methods, and this has sometimes led to divisive debates. 'Critical realism' has been proposed as a philosophy of science that can potentially provide a synthesis, recognizing both the value and the limitations of these approaches. This paper explores the critical realist critique of quantitative modelling, as exemplified by multivariate statistics, and argues that its grounds must be re-conceptualized within a multimethodological framework.


Keywords: critical realism; critique; mathematical modelling; multimethodology; multiple regression; philosophy of OR; statistical modelling

Copyright information

© Palgrave Macmillan Ltd 2005

Authors and Affiliations

University of Kent, Canterbury, UK