We propose a simple checklist that users of policy-supporting research can apply to decide whether a piece of research merits further study or can be dismissed right away. The checklist focusses on the quality of the research question (is it a research question, and is it answerable); the kind of knowledge, along with the order, level and quality of data, needed to answer the RQ; the methods of analysis used; the degree to which the research results support the conclusions; and whether the conclusions provide an answer to the research question.
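Read as a decision procedure, the checklist screens a report item by item. The following minimal Python sketch illustrates this; the exact item wording and the stop-at-first-failure rule are our illustrative assumptions, not the authors' published protocol:

```python
# A minimal sketch of the checklist as a sequential screen. The item wording
# paraphrases the abstract; the decision rule (dismiss at the first failed
# check) is an illustrative assumption.
CHECKLIST = [
    "Is there an identifiable research question (RQ)?",
    "Is the RQ answerable?",
    "Are the kind of knowledge and the order, level and quality of the data adequate for the RQ?",
    "Are the methods of analysis appropriate?",
    "Do the results support the conclusions?",
    "Do the conclusions answer the RQ?",
]

def screen_report(answers):
    """answers: one bool per checklist item, in order.
    Returns ('dismiss', failed_item) at the first failed check,
    or ('merits further study', None) if every check passes."""
    for item, passed in zip(CHECKLIST, answers):
        if not passed:
            return ("dismiss", item)
    return ("merits further study", None)
```

The sequential form reflects the point of the checklist: a single failed check suffices to set a report aside, so a reader need not work through the remaining items.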
For instance, the argument in Haverland and Yanow (2012, p. 402) panders to funders. Rather than arguing that there are conditions in which the applied study of social objects cannot and should not be expected to produce valid and reliable results, they advance their own version of qualitative inquiry as empirical and scientific. Readers unaware of this dilution of meaning can then easily be led either (1) to recognize, erroneously, research that cannot aspire to produce reliable or valid knowledge as being of the same kind, or (2) on the contrary, to expect that all empirical research (both inside and outside the social sciences) has contestable validity and reliability.
A number of different terms are used to discuss fundamental differences in how humans make sense of themselves and their worlds. Of these, the word ‘paradigm’ is the most familiar and the most abused. This makes it an appropriate term to use when the intent is to signal an interest in the fundamentals of human understanding without taking up the argument for any given way of understanding those fundamentals. One part of these ‘paradigms’ is ontology, helpfully discussed by Stout (2012).
There are three parts to any measurement in the natural sciences: the measurement itself, an assessment of the accuracy of that measurement, and a discussion of the analytically relevant conditions under which the measurement was taken. By convention, measurements are accurate to within half a unit of measure and conditions are ‘standard’ unless otherwise noted. There are no such conventions in the social sciences, so reports must be explicit about both measurement error and analytically relevant conditions.
The most basic tests for research findings in the natural sciences are validity and reliability. In the social sciences these tests are appropriate only in cases where the question is appropriate, the context amenable and researchers have declared their interest in valid and reliable findings. In conditions where it does not make sense to try for valid and reliable findings researchers can and do specify alternative standards such as those of plausibility and vividness. These are in no way equivalent to validity and reliability.
There is one circumstance under which cause need not precede effect: human conduct can be informed by the anticipation of a future event. In this case, while the anticipated causal event is subsequent to the effect, the mental act of anticipation does precede the observed effect. Under these circumstances, causal studies must account for how the mental act of anticipation determines physical conduct.
There are reasons to doubt the assumption that the organizations that commission and consume research are rational (Albiek 1995). A reasonable condition for predicting the impact of a given recommendation is an understanding of how organizations make use of recommendations. A selective review of the literature suggests that there is limited consensus on a framework for understanding how organizations use recommendations (cf. Beyer 1982; Haynes et al. 2008; Landry et al. 2001). While reviews of empirical studies on the use of research in shaping organizational conduct do not arrive at consensus at the level of specifics, they do agree that the link between the submission of a report and the conduct of an organization is not simple.
In addition to a framework for understanding how organizations make use of recommendations, predicting the effect of a given recommendation would require an understanding of the factors that shape their uptake by organizations. Findings on this question are, as the following few examples suggest, conceptually and substantively diverse. A regression on nearly 1,000 responses to a questionnaire found that linkage mechanisms, research experience, unit size, and research relevance for the users interacted with individual and organizational variables in explaining reported utilization (Belkhodja et al. 2007). A case study on a significant policy change found that ‘beliefs, reflecting ideas exogenous to the policy process’ were able to shape a significant policy initiative (Hook 2008). A case study on the uptake of research in shaping health policy found organizational motivation, organizational capacity to acquire research findings, organizational capacity to transform research findings and moderating organizational factors all relevant to understanding research uptake (Hamel and Schrecker 2011). Finally, another case study in the health sector noted the relevance to research uptake of the ‘taste’ that policymakers have for research, where this was operationalized as the ability and inclination to seek out and select high quality relevant research (Jewell and Bero 2008).
Discussion of how research shapes organizations’ conduct, and of the factors that condition this shaping, presumes the relevance of research. Recent review articles have noted that “consistent evidence shows that health systems fail optimally to use evidence” (Straus et al. 2011) and that “the literature on knowledge utilization generally reveals limited use of social science research in policy-making” (Hird 2009). The source studies for these reviews tended to focus on instrumental use of research findings: precisely the form that seems to be presumed in the recommendation of clinical trials as an adequate foundation for prescriptive claims. Limiting interest to this form is argued elsewhere to lead inevitably “to a dismal view of research impact, because it ignores the variety of ways in which research uptake and adoption can occur” (Cherney and McGee 2011). If knowledge use is expanded to include the more indirect forms of influence noted above, studies suggest a more hopeful (Cherney and McGee 2011; Paris 2011) but certainly less predictable picture.
In summary, there is limited consensus on both an appropriate framework for understanding the use organizations make of research and the factors that shape that use. Even granting agreement on this point, studies that focus on the more predictable forms of use tend to produce rather dismal findings. As such, researchers cannot at this time predict the fate of their recommendations. Taking the next step, forecasting the effects of an organization’s conduct requires a causal understanding of what produces the desired effect and the ability to forecast that all analytically relevant conditions in the future will be arranged such that the desired effect can be reproduced. In the framework of social science research modeled on the natural sciences, it is possible neither to establish causal relationships nor to make predictions of future social states with any degree of confidence.
In preparing this essay we debated whether to include discussion of the role of other factors such as the researcher’s own cognition in shaping their hearing, understanding, memory and reporting of the narrative. We decided that this and others of its ilk were, for the limited purposes of this essay, a bridge too far.
Each research paradigm has its own notions regarding what constitutes legitimate claims. These apply both to findings and inferences based on those findings. If the researcher, for example, is conducting empirical research in a paradigm that supports validity and reliability then the inferential claims made must be based on valid and reliable empirical measures.
In the natural sciences uncertainties, by default, make findings meaningless.
In their combined 35 years of experience, the authors have never read a piece of directly commissioned policy-relevant research that concluded that policy is irrelevant or the funder insignificant.
This protocol, importantly, does not support decisions as to how to understand or how seriously to take a report.
Albiek, E.: Between knowledge and power: Utilization of social science in public policy making. Policy Sci. 28, 79–100 (1995)
Belkhodja, O., Amara, N., Landry, R., Ouimet, M.: The extent and organizational determinants of research utilization in Canadian health services organizations. Sci. Commun. 28(3), 377–417 (2007)
Beyer, J.: The utilization process: a conceptual framework and synthesis of empirical findings. Adm. Sci. Q. 27(4), 591–622 (1982)
Cherney, A., McGee, T.R.: Utilization of social science research: results of a pilot study among Australian sociologists and criminologists. J. Sociol. 47(2), 144–162 (2011)
Frankfurt, H.G.: On Bullshit. Princeton University Press, Princeton (2005)
Hamel, N., Schrecker, T.: Unpacking capacity to utilize research: a tale of the Burkina Faso public health association. Soc. Sci. Med. 72(1), 31–38 (2011)
Haverland, M., Yanow, D.: A Hitchhiker’s guide to the public administration research universe: surviving conversations on methodologies and methods. Public Adm. Rev. 72(3), 401–408 (2012)
Haynes, A.S., Gillespie, J.A., Derrick, G.E., Hall, W.D., Redman, S., Chapman, S., Sturk, H.: Galvanizers, guides, champions, and shields: the many ways that policymakers use public health researchers. Milbank Q. 86(4), 529–532 (2008)
Hird, J.A.: The study and use of policy research in state legislatures. Int. Reg. Sci. Rev. 32(4), 523–535 (2009)
Holsapple, C.W., Joshi, K.D.: Knowledge management ontology. In: Holsapple, C.W. (ed.) Handbook on knowledge management 1: knowledge matters, vol. 1, pp. 89–128. Springer (2002)
Hook, S.W.: Ideas and change in U.S. foreign aid: inventing the millennium challenge corporation. Foreign Policy Anal. 4(2), 147–167 (2008)
Jewell, C.J., Bero, L.A.: “Developing good taste in evidence”: facilitators of and hindrances to evidence-informed health policymaking in state government. Milbank Q. 86(2), 177–208 (2008)
Kampen, J.K.: A methodological note on the making of causal statements in the debate on anthropogenic global warming. Theor. Appl. Clim. 104(3), 423–427 (2011)
Kieser, A., Leiner, L.: Why the rigour–relevance gap in management research is unbridgeable. J. Manag. Stud. 46(3), 516–533 (2009)
Kieser, A., Leiner, L.: On the social construction of relevance: a rejoinder. J. Manag. Stud. 48(4), 891–898 (2011)
Landry, R., Amara, N., Lamari, M.: Climbing the ladder of research utilization: evidence from social science research. Sci. Commun. 22(4), 396–422 (2001)
Paris, R.: Ordering the world: academic research and policymaking on Fragile States 1. Int. Stud. Rev. 13, 58–71 (2011)
Pencil, M.: Salt passage research: the state of the art. J. Commun. 26(4), 31–36 (1976)
Skyrme, D.: Commercialization: the next phase of knowledge management. Handbook on Knowledge Management. Springer, Lexington (2003)
Spicker, P.: Generalisation and phronesis: rethinking the methodology of social policy. J. Soc. Policy 40(1), 1–19 (2011)
Stout, M.: Competing ontologies: a primer for public administration. Public Adm. Rev. 72(3), 388–398 (2012)
Straus, S.E., Tetroe, J.M., Graham, I.D.: Knowledge translation is the use of knowledge in health care decision making. J. Clin. Epidemiol. 64(1), 6–10 (2011)
We would like to thank Prof. Dr. M. Paspalanova (Expert on Human Rights Indicators, OHCHR, Mexico), Prof. Dr. I. Tanasescu (Political Science Department, Free University of Brussels, Belgium) and Andy Tamás (Director, Tamás Consultants) for their helpful comments on an earlier draft of this article.
Cite this article
Kampen, J.K., Tamás, P. Should I take this seriously? A simple checklist for calling bullshit on policy supporting research. Qual Quant 48, 1213–1223 (2014). https://doi.org/10.1007/s11135-013-9830-8
- Research methods
- Methodological issues
- Decision support
- Knowledge assessment