Let’s not agree to disagree: the role of strategic disagreement in science

Abstract

Supposedly, stubbornness on the part of scientists—an unwillingness to change one’s position on a scientific issue even in the face of countervailing evidence—helps efficiently divide scientific labor. Maintaining disagreement is important because it keeps scientists pursuing a diversity of leads rather than all working on the most promising one, and stubbornness helps preserve this disagreement. Planck’s observation that “Science progresses one funeral at a time” might therefore be an insight into epistemically beneficial stubbornness on the part of researchers. In conversation with extant formal models, recent empirical research, and a novel agent-based model of my own, I explore whether the epistemic goods which stubbornness can secure—disagreement and diversity—are attainable through less costly methods. I make the case that they are, at least in part, and also use my modeling results to show that if stubbornness is scientifically valuable, it still involves a willingness to change one’s mind.


Notes

  1.

    Indeed, the idea that adherence to norms of epistemic rationality on the part of individual scientists explains the success of science is often taken to be the default, traditional view. For example, Zamora Bonilla (2002) pushes against the “long tradition of understanding the scientific method as a matter of careful application of logical rules” of inference (see also Foley 1987; Thagard 2004).

  2.

    But see Alexander et al. (2015) for a technical critique of their model. Thoma (2015) presents a similar model which avoids that critique but still supports the value of a division of labor including some mavericks.

  3.

    Mayo-Wilson et al. (2011) call this sort of dilemma an Independence Thesis and liken Independence Theses to similar cases where individual and group rationality diverge, such as the prisoner’s dilemma.

  4.

    The fact that credit-seeking is a grubby motive when it comes to choosing which project to pursue or theory to endorse doesn’t entail that credit-seeking is even prima facie problematic as a motive for other scientific decisions. As Zollman (2018) argues, many scientific decisions are more practical than evidential, including “To which journal should I send my paper?” “Can I afford to hire another postdoc?” and “Should I apply for this grant?” That expected credit may be the decisive factor in these decisions doesn’t seem grubby, because credit-seeking isn’t competing directly with a responsiveness to evidence. In other words, I’m not arguing that scientists shouldn’t be credit seekers; my aim is merely to interrogate the idea that resistance to evidence should be one outcome of the pursuit of credit.

  5.

    Heesen (2018) provides evidence that credit-seeking stubbornness might not divide labor as efficiently as authors like Strevens have thought, but for present purposes we’ll grant my opponents that stubbornness does produce a desirable epistemic diversity.

  6.

    I owe this objection to a reviewer.

  7.

    I suspect that there are situations in which misleading testimony and self-deception are harmful to the community’s investigation as well, but a convincing argument to that effect is beyond what I have space for here.

  8.

    Bright (2017) argues that in certain situations credit-seekers are less likely to commit fraud than scientists motivated only by concern for the truth. But he tempers this result by showing that the reason for it is something similar to noble hypocrisy, and if we eliminate that noble lying, truth-seekers have less motive to be fraudulent than credit-seekers. Since my argument suggests that we should eliminate noble hypocrisy, noble lying, and their relatives, I don’t take Bright’s results to undermine my worries about credit-seeking and fraud.

  9.

    To be fair, there are other benefits of credit-seeking, such as incentivizing scientists to share intermediate results (Heesen 2017) and encouraging epistemically-beneficial collaborations (Boyer-Kassem and Imbert 2015). Someone could argue that these additional benefits tilt the scale back in favor of grubby motives, to which I would respond by making the same sort of argument I’m making against using grubby motives to diversify science: there are probably less costly alternative means of securing these benefits as well.

  10.

    I’m not saying anything new here. For example, Du Bois defended pure truth seeking by scientists in part on these grounds, as documented by Bright (2018).

  11.

    While this sort of fracturing of the social network and belief polarization is often seen as a bad thing, it is a real means of securing epistemic diversity. It may have some epistemic costs of its own to consider, such as the phenomenon of epistemic factionalization (Weatherall and O’Connor 2018). However, polarization and factionalization are central phenomena in human sociality, and would not be easy to remove from the scientific network. Since we’re stuck with them, it makes sense to ask if they already provide enough epistemic diversity to obviate the need for consciously strategic disagreement.

  12.

    Implemented in NetLogo. Code available on request or at www.carlosgraysantana.com/research.html (as of 29 March 2019).

  13.

    If it seems unrealistic to you to assume that the extant theories in a field exhaust the possibilities, think of it as follows: Let the set of theories be \(\{theory_0, theory_1, \ldots, theory_n\}\). Consider every theory but \(theory_n\) to have a specific positive content, and the content of \(theory_n\) to be \(\neg(theory_0 \vee theory_1 \vee \cdots \vee theory_{n-1})\). The set of theories is thus exhaustive, but doesn’t require that present science be aware of every possibility.

  14.

    This is done by initially assigning each theory a probability of \(\frac{1}{\textit{number of theories}}\), adding to each probability a random number drawn from a uniform distribution on the interval [0, 2), and then normalizing the set so it sums to one.
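As a concrete illustration, this initialization step can be sketched in Python (the model itself is written in NetLogo; the function and parameter names here are mine, not the model’s):

```python
import random

def initial_credences(n_theories, noise_max=2.0):
    """Assign each theory a prior of 1/n, perturb each prior with
    uniform noise drawn from [0, noise_max), then renormalize so
    the credences again sum to one."""
    credences = [1 / n_theories + random.uniform(0, noise_max)
                 for _ in range(n_theories)]
    total = sum(credences)
    return [c / total for c in credences]
```

The noise term guarantees heterogeneous starting credences across agents while the renormalization keeps each agent’s credences a proper probability distribution.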

  15.

    Each mule multiplies its credence in each theory by the inverse of a logarithmic transformation of the number of researchers already working on that theory, then endorses the theory with the highest of these modified values. In the code the base of the logarithm is an adjustable parameter; the experiment below used the natural logarithm.
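A Python sketch of this decision rule (my reconstruction, not the model’s NetLogo code; using log(count + e) to keep the transform well defined when a theory has no researchers yet is my assumption):

```python
import math

def mule_choice(credences, researcher_counts):
    """Endorse the theory with the highest congestion-discounted
    credence: each credence is multiplied by the inverse of a
    logarithmic transform of how many researchers already work
    on that theory."""
    # log(count + e) >= 1, so the discount is well defined at count = 0
    scores = [c / math.log(count + math.e)
              for c, count in zip(credences, researcher_counts)]
    return scores.index(max(scores))
```

A mule with equal credence in two theories will pick the less crowded one: `mule_choice([0.5, 0.5], [10, 0])` returns `1`.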

  16.

    An experiment on the world-state succeeds with probability \(\frac{1 + \textit{baseline}[\textit{world-state}]}{2}\). Note that this doesn’t guarantee that experiments on the world-state are the most reliably successful, but most of the time they will be. In any case, real-world experiments on the right theory are the most visibly successful only most of the time, if that.
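In Python, one way to realize this setup (the handling of the non-world-state theories is my assumption; the footnote only specifies the world-state’s success chance):

```python
import random

def success_probabilities(n_theories, world_state):
    """Draw a random baseline success rate for each theory, then
    boost the true theory (the world-state) to (1 + baseline) / 2.
    The boosted rate is at least 0.5, so the world-state is usually,
    but not necessarily, the most reliably successful theory."""
    baseline = [random.random() for _ in range(n_theories)]
    probs = list(baseline)
    probs[world_state] = (1 + baseline[world_state]) / 2
    return probs
```

Because each non-world-state theory keeps its raw baseline, an unlucky draw can still make a false theory look more experimentally successful than the true one, matching the caveat in the footnote.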

  17.

    Agents thus aren’t epistemically autonomous in the sense of Dellsén (2018). Their epistemic attitudes are directly influenced by the epistemic attitudes of other agents, and only indirectly by the experimental results of other researchers.

  18.

    experiment-power is set to 0.15.

  19.

    While checking the model’s robustness by varying parameters not varied in this experiment, I found that on easy research problems (fewer theories, higher experiment-power) neither stubbornness nor social structure matters much: the community almost always proves the world-state, and does so quickly.

  20.

    Perhaps a cat in my position should also convert to mule-hood, but establishing that fact would require further argument.

  21.

    An anonymous reviewer argues that the agents in my model have in-group bias, a major human foible. Certainly the model could represent the situation where individual researchers have complete knowledge of the field, but due to in-group or confirmation bias discount evidence from researchers who disagree with them. But the model as I’m intending it can also represent a situation where individual researchers have no such biases. Instead they have detailed knowledge of the evidence gathered by researchers who have similar approaches to their own, but imperfect knowledge of what’s going on in other research programs in their field. They read more papers published by people they know, go to conferences featuring people mostly working in their research program, etc. The in-group bias in the model thus represents a biased structure of the social and communication network, and not biases on the part of individual researchers.

  22.

    Another reviewer points out that my model, unlike some of the other models in the area (e.g. Weisberg and Muldoon 2009), doesn’t involve pure exploration of an epistemic landscape, unbiased by the social factors represented in my model. Perhaps a model where agents occasionally experimented on a theory besides the one they currently endorse out of pure curiosity, and not strategic mule-hood, could turn out better for cats. Or it could turn out even worse. There’s certainly a lot of room to do more modeling work in this area.

  23.

    The model includes a visualization where agents gather around the theory they currently endorse, which allows observation of this sort of dynamic behavior.

References

  1. Alexander, J. M., Himmelreich, J., & Thompson, C. (2015). Epistemic landscapes, optimal search, and the division of cognitive labor. Philosophy of Science, 82(3), 424–453.

  2. Bohman, J. (2006). Deliberative democracy and the epistemic benefits of diversity. Episteme, 3(3), 175–191.

  3. Boyer-Kassem, T., & Imbert, C. (2015). Scientific collaboration: do two heads need to be more than twice better than one? Philosophy of Science, 82(4), 667–688.

  4. Bright, L. K. (2017). On fraud. Philosophical Studies, 174(2), 291–310.

  5. Bright, L. K. (2018). Du Bois’ democratic defence of the value free ideal. Synthese, 195, 1–19.

  6. Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-analytic review of the truth effect. Personality and Social Psychology Review, 14(2), 238–257.

  7. Dellsén, F. (2018). The epistemic value of expert autonomy. Philosophy and Phenomenological Research, 63, 85.

  8. Deszo, C., & Ross, D. (2008). When women rank high, firms profit. New York: Columbia Business School Ideas at Work.

  9. Eagly, A. H., & Chin, J. L. (2010). Diversity and leadership in a changing world. American Psychologist, 65(3), 216.

  10. Fazio, L. K., Brashier, N. M., Payne, B. K., & Marsh, E. J. (2015). Knowledge does not protect against illusory truth. Journal of Experimental Psychology: General, 144(5), 993.

  11. Fleisher, W. (2017). Rational endorsement. Philosophical Studies, 175, 1–27.

  12. Foley, R. (1987). Epistemic rationality and scientific rationality. International Studies in the Philosophy of Science, 1(2), 233–250.

  13. Freeman, R. B., & Huang, W. (2014). Collaboration: Strength in diversity. Nature News, 513(7518), 305.

  14. Frey, D., & Šešelja, D. (2018). Robustness and idealizations in agent-based models of scientific interaction. The British Journal for the Philosophy of Science, 6, 31.

  15. Grim, P., Singer, D. J., Bramson, A., Holman, B., McGeehan, S., & Berger, W. J. (2019). Diversity, ability, and expertise in epistemic communities. Philosophy of Science, 86(1), 98–123.

  16. Grim, P., Singer, D. J., Fisher, S., Bramson, A., Berger, W. J., Reade, C., et al. (2013). Scientific networks on data landscapes: Question difficulty, epistemic success, and convergence. Episteme, 10(4), 441–464.

  17. Heesen, R. (2017). Communism and the incentive to share in science. Philosophy of Science, 84(4), 698–716.

  18. Heesen, R. (2018). The credit incentive to be a maverick. Studies in History and Philosophy of Science Part A, 115, 661.

  19. Hong, L., & Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46), 16385–16389.

  20. Hull, D. L. (1990). Science as a process: An evolutionary account of the social and conceptual development of science. Chicago: University of Chicago Press.

  21. Keller, E. F. (1984). A feeling for the organism: The life and work of Barbara McClintock (10th anniversary edition). London: Macmillan.

  22. Kitcher, P. (1993). The advancement of science: Science without legend, objectivity without illusion. Oxford: Oxford University Press.

  23. Mayo-Wilson, C., Zollman, K. J., & Danks, D. (2011). The independence thesis: When individual and social epistemology diverge. Philosophy of Science, 78(4), 653–677.

  24. Page, S. E. (2007). Making the difference: Applying a logic of diversity. Academy of Management Perspectives, 21(4), 6–20.

  25. Pennisi, E. (2007). Jumping genes hop into the evolutionary limelight. Science, 317(5840), 894–895.

  26. Phillips, K. W., Northcraft, G. B., & Neale, M. A. (2006). Surface-level diversity and decision-making in groups: When does deep-level similarity help? Group Processes and Intergroup Relations, 9(4), 467–482.

  27. Planck, M. (1949). Scientific autobiography and other papers, trans. F. Gaynor. New York: Philosophical Library.

  28. Reaves, M. L., Sinha, S., Rabinowitz, J. D., Kruglyak, L., & Redfield, R. J. (2012). Absence of detectable arsenate in DNA from arsenate-grown GFAJ-1 cells. Science, 337(6093), 470–473.

  29. Rosenstock, S., Bruner, J., & O’Connor, C. (2017). In epistemic networks, is less really more? Philosophy of Science, 84(2), 234–252.

  30. Santana, C. (2018). Why not all evidence is scientific evidence. Episteme, 15(2), 209–227.

  31. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.

  32. Singer, D. J. (2019). Diversity, not randomness, trumps ability. Philosophy of Science, 86(1), 178–191.

  33. Smith, M. K., Trivers, R., & von Hippel, W. (2017). Self-deception facilitates interpersonal persuasion. Journal of Economic Psychology, 63, 93–101.

  34. Solomon, M. (2001). Social empiricism. Cambridge: MIT Press.

  35. Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100(2), 55–79.

  36. Taubes, G. (1993). Bad science: The short life and weird times of cold fusion. New York: Random House.

  37. Thagard, P. (2004). Rationality and science. In A. R. Mele & P. Rawling (Eds.), The Oxford handbook of rationality (pp. 363–379). Oxford: Oxford University Press.

  38. Thoma, J. (2015). The epistemic division of labor revisited. Philosophy of Science, 82(3), 454–472.

  39. Von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34(1), 1–16. https://doi.org/10.1017/S0140525X10001354.

  40. Weatherall, J. O., & O’Connor, C. (2018). Endogenous epistemic factionalization: A network epistemology approach. arXiv preprint arXiv:1812.08131.

  41. Weisberg, M., & Muldoon, R. (2009). Epistemic landscapes and the division of cognitive labor. Philosophy of Science, 76(2), 225–252.

  42. Zamora Bonilla, J. P. (2002). Scientific inference and the pursuit of fame: A contractarian approach. Philosophy of Science, 69(2), 300–323.

  43. Zollman, K. J. (2010). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17.

  44. Zollman, K. J. (2018). The credit economy and the economic rationality of science. The Journal of Philosophy, 115(1), 5–33.

Author information

Corresponding author

Correspondence to Carlos Santana.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Santana, C. Let’s not agree to disagree: the role of strategic disagreement in science. Synthese (2019). https://doi.org/10.1007/s11229-019-02202-z

Keywords

  • Social structure of science
  • Epistemic diversity
  • Social epistemology
  • Agent-based model