Abstract
One important strategy for dealing with error in our methods is triangulation, or the use of multiple methods to investigate the same object. Current accounts of triangulation assume that its primary function is to provide a confirmatory boost to hypotheses beyond what any single method could produce. Yet researchers often use multiple methods to examine new constructs about which they are uncertain. For example, social psychologists use multiple indirect measures to provide convergent evidence about implicit attitudes, but how to characterize implicit attitudes is an open question. To make sense of triangulation under uncertainty about constructs, I suggest two changes: first, triangulation can serve multiple epistemic functions, including some that are non-confirmatory, and second, researchers should assess the epistemic risk in claims about evidence and in the acceptance or rejection of hypotheses.
Notes
This argument relies on Kuorikoski et al.’s (2010) analysis of confirmation in robustness analysis, on which multiple but distinct models providing the same result increase the confirmation of that result (relative to the confirmation provided by any single model’s result). However, Harris (2021) provides reasons to doubt the soundness of their argument.
Here I am not especially interested in debates about the use of the term ‘triangulation.’ The term matters insofar as scientists broadly use it to refer to multi-method research, and I tend to use it in this broad sense. However, my points would stand even if ‘triangulation’ were reserved only for cases of multi-method research that result in a confirmatory boost. My claim is simply that the practices of multi-method research are similar enough to warrant treating them together, even if they contribute to diverse scientific functions.
Feest (2020) discusses implicit measures and implicit attitudes in the context of construct validity but does not consider triangulation. However, my contention fits best with her analysis of psychologists approaching implicit attitudes with a ‘wide’ conception of the construct. By not restricting their characterization of implicit attitudes too early, researchers can treat the construct as an epistemically blurry object. My claim is then that triangulation can be used to narrow this characterization: knowledge about the methods can provide information about what features ‘implicit attitudes’ must have if the methods are measuring them.
Wimsatt’s “illusory robustness” would also count as a relevant kind of epistemic risk, but I set it aside here.
I think an argument for the role of socio-political, ethical, and practical values is possible, but I will not defend that account here.
References
Banaji, M. R. (2001). Implicit attitudes can be measured. In H. L. Roediger & J. S. Nairne (Eds.), The nature of remembering: Essays in honor of Robert G. Crowder (pp. 117–150). American Psychological Association.
Bar-Anan, Y., & Nosek, B. A. (2014). A comparative investigation of seven indirect attitude measures. Behavior Research Methods, 46(3), 668–688.
Bar-Anan, Y., & Vianello, M. (2018). A multi-method multi-trait test of the dual-attitude perspective. Journal of Experimental Psychology: General, 147(8), 1264.
Basso, A. (2017). The appeal to robustness in measurement practice. Studies in History and Philosophy of Science Part A, 65–66, 57–66.
Bechtel, W. (2002). Aligning multiple research techniques in cognitive neuroscience: Why is it important? Philosophy of Science, 69(S3), S48–S58.
Biddle, J. B., & Kukla, R. (2017). The geography of epistemic risk. In K. C. Elliott & T. Richards (Eds.), Exploring inductive risk: Case studies of values in science (pp. 215–238). Oxford University Press.
Bogen, J., & Woodward, J. F. (1988). Saving the phenomena. The Philosophical Review, 97(3), 303–352.
Bosson, J. K., Swann, W. B., & Pennebaker, J. W. (2000). Stalking the perfect measure of implicit self-esteem: The blind men and the elephant revisited? Journal of Personality and Social Psychology, 79, 631–643.
Bradburn, N. M., Cartwright, N., & Fuller, J. (2017). A theory of measurement. In Measurement in medicine: Philosophical essays on assessment and evaluation (pp.73–88). Rowman & Littlefield.
Brownstein, M., Madva, A., & Gawronski, B. (2019). What do implicit measures measure? Wiley Interdisciplinary Reviews: Cognitive Science, 10(5), e1501.
Calcott, B. (2011). Wimsatt and the robustness family: Review of Wimsatt’s Re-engineering Philosophy for Limited Beings. Biology & Philosophy, 26, 281–293.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56(2), 81–105.
Cartwright, N. (1991). Replicability, reproducibility, and robustness: Comments on Harry Collins. History of Political Economy, 23(1), 143–155.
Cartwright, N., & Runhardt, R. (2014). Measurement. In N. Cartwright & E. Montuschi (Eds.), Philosophy of social science: A new introduction. Oxford University Press.
Coko, K. (2020). The multiple dimensions of multiple determination. Perspectives on Science, 28(4), 505–541.
Culp, S. (1994). Defending robustness: The bacterial mesosome as a test case. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1994(1), 46–57.
Douglas, H. (2016). Values in science. In P. Humphreys (Ed.), The Oxford handbook of philosophy of science. Oxford University Press.
Dovidio, J. F., & Gaertner, S. L. (2000). Aversive racism and selection decisions: 1989 and 1999. Psychological Science, 11(4), 315–319.
Fazio, R. H., & Olson, M. A. (2003). Attitudes: Foundations, functions, and consequences. The SAGE Handbook of Social Psychology, 1, 123–145.
Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, F. R. (1986). On the automatic activation of attitudes. Journal of Personality and Social Psychology, 50(2), 229–238.
Feest, U. (2011a). Remembering (short-term) memory: Oscillations of an epistemic thing. Erkenntnis, 75, 391–411.
Feest, U. (2011b). What exactly is stabilized when phenomena are stabilized? Synthese, 182, 57–71.
Feest, U. (2017). Phenomena and objects of research in the cognitive and behavioral sciences. Philosophy of Science, 84(5), 1165–1176.
Feest, U. (2020). Construct validity in psychological tests: The case of implicit social cognition. European Journal for Philosophy of Science, 10(1), 1–24.
Gawronski, B., Hofmann, W., & Wilbur, C. J. (2006). Are ‘Implicit’ attitudes unconscious? Consciousness and Cognition, 15, 485–499.
Gawronski, B., Deutsch, R., Lebel, E. P., & Peters, K. R. (2008). Some traps and gaps in the assessment of mental associations with experimental paradigms. European Journal of Psychological Assessment, 24(4), 218–225.
Gawronski, B. (2019). Six lessons for a cogent science of implicit bias and its criticism. Perspectives on Psychological Science, 14(4), 574–595.
Greenwald, A. G., & Lai, C. K. (2020). Implicit social cognition. Annual Review of Psychology, 71, 419–445.
Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102(1), 4–27.
Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74, 1464–1480.
Hahn, A., Judd, C. M., Hirsh, H. K., & Blair, I. V. (2014). Awareness of implicit attitudes. Journal of Experimental Psychology: General, 143(3), 1369–1392.
Hammerton, G., & Munafò, M. R. (2021). Causal inference with observational data: The need for triangulation of evidence. Psychological Medicine, 51(4), 563–578.
Harris, M. (2021). The epistemic value of independent lies: False analogies and equivocations. Synthese, 199, 14577–14597.
Harnois, C. E., Bastos, J. L., & Shariff-Marco, S. (2020). Intersectionality, contextual specificity, and everyday discrimination: Assessing the difficulty associated with identifying a main reason for discrimination among racial/ethnic minority respondents. Sociological Methods & Research, 15, 0049124120914929.
Heesen, R., Bright, L. K., & Zucker, A. (2019). Vindicating methodological triangulation. Synthese, 196(8), 3067–3081.
De Houwer, J., Teige-Mocigemba, S., Spruyt, A., & Moors, A. (2009). Implicit measures: A normative analysis and review. Psychological Bulletin, 135(3), 347–368.
Hudson, R. G. (2014). Seeing things: The philosophy of reliable observation. Oxford University Press.
Jones, E. E., & Sigall, H. (1971). The bogus pipeline: A new paradigm for measuring affect and attitude. Psychological Bulletin, 76(5), 349–364.
Jost, J. T. (2019). The IAT is dead, long live the IAT: Context-sensitive measures of implicit attitudes are indispensable to social and political psychology. Current Directions in Psychological Science, 28(1), 10–19.
Krieger, N., Smith, K., Naishadham, D., Hartman, C., & Barbeau, E. M. (2005). Experiences of discrimination: Validity and reliability of a self-report measure for population health research on racism and health. Social Science & Medicine, 61(7), 1576–1596.
Kuorikoski, J., & Marchionni, C. (2016a). Evidential diversity and the triangulation of phenomena. Philosophy of Science, 83(2), 227–247.
Kuorikoski, J., & Marchionni, C. (2016b). Triangulation across the lab, the scanner and the field: The case of social preferences. European Journal for Philosophy of Science, 6(3), 361–376.
Kuorikoski, J., Lehtinen, A., & Marchionni, C. (2010). Economic modelling as robustness analysis. The British Journal for the Philosophy of Science, 61(3), 541–567.
Kurdi, B., & Dunham, Y. (2021). Sensitivity of implicit evaluations to accurate and erroneous propositional inferences. Cognition, 214, 104792.
Lloyd, E. A. (2015). Model robustness as a confirmatory virtue: The case of climate science. Studies in History and Philosophy of Science, 49, 58–68.
Mitchell, J. P., Nosek, B. A., & Banaji, M. R. (2003). Contextual variations in implicit evaluation. Journal of Experimental Psychology: General, 132, 455–469.
Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A methodological and conceptual review. In J. A. Bargh (Ed.), Automatic processes in social thinking and behavior. Psychology Press.
Olson, M. A., & Fazio, R. H. (2003). Relations between implicit measures of prejudice: What are we measuring? Psychological Science, 14, 636–639.
Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776–783.
Salmon, W. (1984). Scientific explanation and the causal structure of the world. Princeton University Press.
Schickore, J., & Coko, K. (2013). Using multiple means of determination. International Studies in the Philosophy of Science, 27(3), 295–313.
Schimmack, U. (2021). The implicit association test: A method in search of a construct. Perspectives on Psychological Science, 16(2), 396–414.
Schupbach, J. N. (2018). Robustness analysis as explanatory reasoning. British Journal for the Philosophy of Science, 69(1), 275–300.
Sober, E. (1989). Independent evidence about a common cause. Philosophy of Science, 56, 275–287.
Stegenga, J. (2009). Robustness, discordance, and relevance. Philosophy of Science, 76, 650–661.
Strack, F., & Deutsch, R. (2004). Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review, 8(3), 220–247.
Sue, D. W. (2010). Microaggressions, marginality, and oppression: An introduction. In D. W. Sue (Ed.), Microaggressions and marginality: Manifestation, dynamics, and impact (pp. 3–22). John Wiley & Sons, Inc.
Teige-Mocigemba, S., & Klauer, K. C. (2013). On the controllability of evaluative-priming effects: Some limits that are none. Cognition & Emotion, 27(4), 632–657.
Trizio, E. (2012). Achieving robustness to confirm controversial hypotheses: A case study in cell biology. In L. Soler, E. Trizio, T. Nickles, & W. Wimsatt (Eds.), Characterizing the robustness of science: After the practice turn in philosophy of science (pp. 105–121). Springer.
Wilholt, T. (2009). Bias and values in scientific research. Studies in History and Philosophy of Science Part A, 40(1), 92–101.
Wimsatt, W. (1981). Robustness, reliability, and overdetermination. In M. Brewer & B. Collins (Eds.), Scientific inquiry in the social sciences (pp. 123–162). Jossey-Bass.
Woodward, J. (2006). Some varieties of robustness. Journal of Economic Methodology, 13(2), 219–240.
Acknowledgements
For helpful feedback on this paper, I would like to thank: Edouard Machery, David Danks, Kevin Zollman, Jim Woodward, Mazviita Chirimuuta, Mahi Hardalupas, Annika Froese, Nedah Nemati, Liam Kofi Bright, Willy Penn, Siska de Baerdemaeker, Zina Ward, Michael Brownstein, Marie Kaiser, Alkistis Elliott-Graves, Rose Trappes, Philipp Haueis, Robert Frühstückl, and David Lambert.
Funding
Morgan Thompson was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project 254954344/GRK2073/2.
Ethics declarations
Conflict of interest
The author reports no conflicts of interest.
About this article
Cite this article
Thompson, M. Epistemic risk in methodological triangulation: the case of implicit attitudes. Synthese 201, 1 (2023). https://doi.org/10.1007/s11229-022-03943-0