
Strategy and the pursuit of truth


Abstract

Science is a social epistemic enterprise. The complexity of research requires the division of cognitive labor. As a consequence, scientists have to present results and incorporate the results of others into their body of knowledge. This creates the possibility of strategic behavior, leading to phenomena such as publication bias. To analyze the dynamics of strategic behavior in epistemic communities, agent-based modeling suggests itself as a method. The phenomena generated by the developed agent-based simulation model reveal a diverse set of possible dynamics in strategically heterogeneous groups and support the claim that there is a trade-off between a behavioral rule’s efficacy in generating accurate beliefs under optimal conditions and its robustness to variation in the composition of the epistemic environment.


Notes

  1. Gigerenzer (2015) provides another example of, at least allegedly, selective reception and reporting of results.

  2. For a formal analysis of the impact of biased agents in epistemic networks, cf. Holman and Bruner (2015).

  3. For alternative, but related models of strategic behavior in opinion dynamics, cf. Hegselmann et al. (2014) and Eger (2016).

  4. Any other probability distribution will serve the same purpose; the beta distribution has simply been chosen for technical reasons.

  5. It is an important feature of this model that it allows for multiple levels of higher-order evidence: agents can form a hierarchy of second-order evidence.

  6. One might notice that the model is structurally very similar to linear averaging models and wonder whether an agent’s opinion is represented by \(b_i\) or \(s_i\); put otherwise, whether there is an actual dynamic of evolving opinions or merely an adaptation of stated claims. For an application to science, the more appropriate interpretation is that of an actual evolution of opinions, the possible importance of an agent’s initial signal notwithstanding. The model is applicable to other target systems, such as political discourse, where an evolving compromise is often better understood as mere convergence of expressed opinions, with no underlying change of opinion. (A schematic sketch of the statement rules follows after these notes.)

  7. For clarification: assigning credit long before the predicted event takes place is often necessary in fields like climate science or high energy physics, where predictions extend far into the future or theoretical progress precedes technical realizability by decades.

  8. This is the crucial difference between models explicitly incorporating agent motivation and standard models in opinion dynamics such as Lehrer and Wagner (1981). In applications of such deliberation processes, it has been acknowledged that strategic behavior poses a problem (Regan et al. 2006).

  9. Accuracy stands in for a number of plausible epistemic values discussed e.g. by Kuhn (1977). The reason is both ease of quantification and the almost universal acceptance of this virtue.

  10. As Douven (2010) points out, beliefs in this kind of model can either be interpreted as degrees of belief or estimates of a certain variable of interest. For the purpose of the analysis of accuracy, it is unnecessary to distinguish between the two interpretations.

  11. The strategy profile where all agents play this strategy is a Nash equilibrium. However, the strategy is not dominant: if the other agents play, for example, by uniform randomization, the agent should stick to their own signal.

  12. The notion of peerhood assumed here is best expressed in Feldman (2009): it does not require perfect epistemic equivalence, but comparable evidence and similar cognitive ability. Within the model, this is represented by the equal quality of initial signals and potentially threatened by differing agent motivations.

  13. A recently discussed example of this substitution is the priority of Sakoda (1971) over Schelling (1971), described by Hegselmann and Flache (1998) and in more detail in Hegselmann (2017): the suggested models are similar, differing only in subtle details, yet the community allocated all its acknowledgment to Schelling and coordinated on his results.

  14. Note that this model has itself come under severe criticism; cf. Thoma (2015) and Alexander et al. (2015). However, the basic structure of the model and the problems it addresses still provide a valid point of reference.

  15. As before, other relevant components have to be set aside, in particular the impact of material resources on the actual outcomes.

  16. These assumptions block the game-theoretic solutions partially discussed before, since they conflict with perfect information and common knowledge of rationality.

  17. In terms of the peer disagreement debate, the agent employs an equal weight view on graded beliefs.

  18. The factor n before the signal \(b_i\) might strike the reader as odd; it is a consequence of optimizing the payoff scheme the social influencer is based on, and it effectively corrects the weights such that the agent compares their signal to the average opinion in the community (see the sketch following these notes).

  19. This particular strategy actually lies on a continuum: by introducing additional weight parameters, one could adjust the exact degree to which the agent responds to the socially available information. Having no explicit weights results naturally from the payoff scheme, but this should not be taken as a fundamental limitation on the details of a cautious learning strategy.

  20. Cf. Elkin and Wheeler (2018) for a discussion of steadfastness as a response to disagreement.

  21. See the Appendix for additional robustness tests of the results.

  22. Whether there is oscillation depends on the initial distribution of signals and the type of social learners; cautious learners create less oscillation, since they are less prone to overadaptation.

  23. This is not to say that the scientist is not to some degree epistemically blameworthy, but their claims are still grounded in actual empirical results rather than, for example, makeshift data.

  24. Even defenders of a very strong version of this individualism can of course accept the possibility that such explanations are pragmatically useful.

  25. The model also allows one to distinguish cleanly between an agent’s cognitive capability, represented by the variance of their initial signal, and the motivational factors limiting their epistemic purity, represented in their behavioral rules.

  26. A similar argument has been put forward in a different formal framework investigating epistemic norms (Mayo-Wilson 2014).

  27. Muldoon and Weisberg (2011) provide an analysis of the limitations of decentralized, market-like allocation of resources to epistemic projects.

  28. Note that this problem can be circumvented if both an agent’s initial reliability estimates and their signal are constrained to be accurate to a minimum degree. For models of expectation-based updating of reliabilities, see the competing accounts of Bovens and Hartmann (2003, ch. 3) and Olsson and Vallinder (2013). However, neither of these approaches helps under unfavorable conditions.
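
For concreteness, the behavioral rules referenced in the notes above (the beta-distributed signals of note 4, the belief/statement distinction of note 6, the equal weight learner of note 17, the factor-n influencer of note 18, and the cautious learner of note 19) can be summarized in a short sketch. The following Python code is only a minimal illustration: the function names, the beta parameters, and the cautious learner’s fifty-fifty weighting are assumptions of this sketch, not the paper’s actual implementation.

```python
import random

# A minimal sketch of the statement rules discussed in the notes.
# All names, weights and parameter values are illustrative assumptions.

def initial_signals(n, rng, a=7.0, b=3.0):
    # Note 4: signals drawn from a beta distribution; any other
    # distribution would serve the same purpose.
    return [rng.betavariate(a, b) for _ in range(n)]

def naive_statement(i, signal, stated):
    # Note 17: an equal weight view, i.e. the plain average of all
    # currently expressed opinions.
    return sum(stated) / len(stated)

def cautious_statement(i, signal, stated):
    # Note 19: a cautious learner keeps weight on their own signal; the
    # fifty-fifty split is assumed here, not derived from the payoffs.
    return 0.5 * signal + 0.5 * sum(stated) / len(stated)

def influencer_statement(i, signal, stated):
    # One plausible reading of note 18: announcing n * b_i minus the
    # others' statements makes the community average land on b_i;
    # clipping to the opinion interval produces the oversteering that
    # the appendix links to oscillation.
    n = len(stated)
    target = n * signal - (sum(stated) - stated[i])
    return max(0.0, min(1.0, target))

# A synchronous run: statements start by revealing the signals (note 6),
# and every agent applies their rule to the same snapshot each step.
rng = random.Random(0)
signals = initial_signals(5, rng)
stated = list(signals)
for t in range(10):
    stated = [cautious_statement(i, signals[i], stated) for i in range(5)]
```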

References

  • Alexander, J. M. K., Himmelreich, J., & Thompson, C. (2015). Epistemic landscapes, optimal search, and the division of cognitive labor. Philosophy of Science, 82(3), 424–453.

  • Auspurg, K., Hinz, T., & Schneck, A. (2014). Ausmaß und Risikofaktoren des Publication Bias in der deutschen Soziologie [Extent and risk factors of publication bias in German sociology]. KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie, 66(4), 549–573.

  • Bovens, L., & Hartmann, S. (2003). Bayesian epistemology. Oxford: Oxford University Press.

  • Brier, G. W. (1950). Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1), 1–3.

  • Douven, I. (2010). Simulating peer disagreements. Studies in History and Philosophy of Science Part A, 41(2), 148–157.

  • Eger, S. (2016). Opinion dynamics and wisdom under out-group discrimination. Mathematical Social Sciences, 80, 97–107.

  • Elkin, L., & Wheeler, G. (2018). Resolving peer disagreements through imprecise probabilities. Noûs, 52(2), 260–278.

  • Feldman, R. (2006). Epistemological puzzles about disagreement. In S. Hetherington (Ed.), Epistemology futures (pp. 216–236). Oxford: Oxford University Press.

  • Feldman, R. (2009). Evidentialism, higher-order evidence, and disagreement. Episteme, 6(3), 294–312.

  • Gigerenzer, G. (2015). On the supposed evidence for libertarian paternalism. Review of Philosophy and Psychology, 6(3), 361–383.

  • Hegselmann, R. (2017). Thomas C. Schelling and James M. Sakoda: The intellectual, technical, and social history of a model. Journal of Artificial Societies and Social Simulation, 20(3), 15.

  • Hegselmann, R., & Flache, A. (1998). Understanding complex social dynamics: A plea for cellular automata based modelling. Journal of Artificial Societies and Social Simulation, 1(3), 1.

  • Hegselmann, R., & Krause, U. (2015). Opinion dynamics under the influence of radical groups, charismatic leaders, and other constant signals: A simple unifying model. Networks & Heterogeneous Media, 10(3), 477–509.

  • Hegselmann, R., König, S., Kurz, S., Niemann, C., & Rambau, J. (2014). Optimal opinion control: The campaign problem. arXiv preprint arXiv:1410.8419.

  • Holman, B., & Bruner, J. P. (2015). The problem of intransigently biased agents. Philosophy of Science, 82(5), 956–968.

  • Joas, H., & Knöbl, W. (2004). Sozialtheorie [Social theory]. Frankfurt am Main: Suhrkamp.

  • Kelly, T. (2011). Peer disagreement and higher order evidence. In A. Goldman & D. Whitcomb (Eds.), Social epistemology: Essential readings (pp. 183–217). Oxford: Oxford University Press.

  • Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87(1), 5–22.

  • Kuhn, T. (1977). Objectivity, value judgment, and theory choice. In The essential tension. Chicago: University of Chicago Press.

  • Lehrer, K., & Wagner, C. (1981). Rational consensus in science and society. A philosophical and mathematical study. Dordrecht: D. Reidel Publishing Company.

  • Leitgeb, H., & Pettigrew, R. (2010a). An objective justification of Bayesianism I: Measuring inaccuracy. Philosophy of Science, 77(2), 201–235.

  • Leitgeb, H., & Pettigrew, R. (2010b). An objective justification of Bayesianism II: The consequences of minimizing inaccuracy. Philosophy of Science, 77(2), 236–272.

  • Mayo-Wilson, C. (2014). Reliability of testimonial norms in scientific communities. Synthese, 191(1), 55–78.

  • Muldoon, R., & Weisberg, M. (2011). Robustness and idealization in models of cognitive labor. Synthese, 183(2), 161–174.

  • Olsson, E. J., & Vallinder, A. (2013). Norms of assertion and communication in social networks. Synthese, 190(13), 2557–2571.

  • Oreskes, N., & Conway, E. M. (2011). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. London: Bloomsbury Publishing.

  • Regan, H. M., Colyvan, M., & Markovchick-Nicholls, L. (2006). A formal model for consensus and negotiation in environmental management. Journal of Environmental Management, 80(2), 167–176.

  • Sakoda, J. M. (1971). The checkerboard model of social interaction. The Journal of Mathematical Sociology, 1(1), 119–132.

  • Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143–186.

  • Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100(2), 55–79.

  • Thoma, J. (2015). The epistemic division of labor revisited. Philosophy of Science, 82(3), 454–472.

  • Weisberg, M., & Muldoon, R. (2009). Epistemic landscapes and the division of cognitive labor. Philosophy of Science, 76(2), 225–252.

  • Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17–35.

Acknowledgements

I want to thank the participants of the workshops “Formal Models of Scientific Inquiry” in Bochum and the “Interdisciplinary Workshop on Opinion Dynamics and Collective Decision” in Bremen in 2017 for discussions of earlier versions of this paper. I also want to thank Kevin Zollman, Stephan Hartmann, and Simon Scheller, who provided valuable feedback, and my anonymous reviewers, one of whom in particular suggested the inclusion of further robustness data. The research was carried out at the Munich Center for Mathematical Philosophy.

Author information

Correspondence to Christoph Merdes.

Appendix: Sensitivity Analysis

The deliberation game model has, in particular due to the vast combinatorial space of agent populations, a parameter space too large for a full parameter sweep. However, it is worth looking at some important alternatives to the configurations used to support the arguments in this paper. The sensitivity checks focus on three questions:

  1. How does variation in the communication schedule change model dynamics?

  2. How do the accuracy results scale with increasing numbers of participating agents?

  3. What is the accuracy impact of an increasing number of strategic exaggerators?

1.1 Sequential dynamics

Figures 3 and 4 depict runs configured analogously to those chosen in the exploration of model dynamics in Sect. 3, with one key difference: communication is not synchronous; instead, agents update their expressed opinions in randomized sequential order.
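
The difference between the two schedules is easiest to state in code. The following sketch contrasts them for an arbitrary update rule; the plain-averaging rule and all names are stand-ins, not the simulation’s actual code.

```python
import random

# Sketch of the two communication schedules compared here.

def step_synchronous(stated, rule):
    # Every agent reacts to the same snapshot of expressed opinions.
    return [rule(i, stated) for i in range(len(stated))]

def step_sequential(stated, rule, rng):
    # Agents update one at a time in a freshly randomized order; each
    # agent sees the statements already revised earlier in the round,
    # which dampens oversteering and hence the oscillations noted below.
    stated = list(stated)
    order = list(range(len(stated)))
    rng.shuffle(order)
    for i in order:
        stated[i] = rule(i, stated)
    return stated

rule = lambda i, stated: sum(stated) / len(stated)  # plain averaging
rng = random.Random(1)
opinions = [0.2, 0.9, 0.4]
print(step_synchronous(opinions, rule))
print(step_sequential(opinions, rule, rng))
```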

Fig. 3: Group of naive learners without an exaggerator (left) and with a single exaggerator (right); updating is performed in randomized sequential order

Fig. 4: Group of cautious learners without an exaggerator (left) and with a single exaggerator (right); updating is performed in randomized sequential order

The key results inferred from the synchronous case still hold, though with some variation:

  • Exaggerators are still able to exert their influence, fully steering naive learners and partially influencing cautious learners.

  • Cautious learners still fail to fully merge.

  • Oscillations caused by oversteering from the exaggerator can still be observed, but they are substantially diminished.

With respect to the last result in particular, this scheduling algorithm may produce results that are more realistic for certain domains of application. While contributions to a scientific conference are often written in parallel, with little to no knowledge of what beliefs the other participants will express, political discussions, to name just an obvious example, often have a more sequential, reactive structure.

This partial sensitivity result is supported by the model’s game-theoretic background: sequential and synchronous game structures can differ substantially in their behavioral predictions, and the bounded rationality of the agents only increases the difference.

1.2 Scaling, accuracy and communities of exaggeration

To explore both scaling up the model and the impact of an increased percentage of exaggerators, the following simulation experiment was performed: populations of 20 and 50 agents, respectively, played the deliberation game. The groups were populated with strategic exaggerators and either naive or cautious learners, and every possible proportion of exaggerators to learners was run. Updating of beliefs was synchronous, and the initial step again featured a revelation of the received signal, again unknown to the participating agents. Each resulting model configuration was run for 80 timesteps to remain comparable with the results presented in the main analysis, though convergence generally occurs substantially earlier. Each configuration was run 100 times, and the results are averaged over those runs. Mean squared error, squared error of mean belief, and the differences between naive and cautious learners are depicted in Figs. 5, 6, 7 and 8. (Note that the viewpoints for the plots of accuracy differences differ slightly to make the graphs more readable. In the same vein, the accuracy scales of the various plots differ in range, as a common range would have rendered them harder to read.)
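
The two accuracy measures used in these plots can be transcribed directly from their names; in the following sketch, only the variable names are added.

```python
# Mean squared error of belief vs. squared error of mean belief,
# as plotted in Figs. 5-8.

def mean_squared_error(beliefs, truth):
    # Mean of the individual squared errors: average individual accuracy.
    return sum((b - truth) ** 2 for b in beliefs) / len(beliefs)

def squared_error_of_mean(beliefs, truth):
    # Squared error of the group's mean belief: collective accuracy.
    mean = sum(beliefs) / len(beliefs)
    return (mean - truth) ** 2

beliefs = [0.6, 0.7, 0.9]
print(mean_squared_error(beliefs, 0.7))     # (0.01 + 0.0 + 0.04) / 3
print(squared_error_of_mean(beliefs, 0.7))  # ((2.2 / 3) - 0.7) ** 2
```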

Fig. 5: 20 agents total; mean of squared error of belief for naive learners (left), cautious learners (middle), and the difference between the two population types (right). The plane in the difference plot marks 0; values above 0 indicate an advantage for cautious learners

Fig. 6: 50 agents total; mean of squared error of belief for naive learners (left), cautious learners (middle), and the difference between the two population types (right)

Fig. 7: 20 agents total; squared error of mean belief for naive learners (left), cautious learners (middle), and the difference between the two population types (right)

Fig. 8: 50 agents total; squared error of mean belief for naive learners (left), cautious learners (middle), and the difference between the two population types (right)

These plots contain an immense amount of information, but a few key observations suffice to understand their general implication. First, the differential advantage of cautious over naive learners appears most pronounced in a comparison of populations with zero and one exaggerator, respectively. As the number of exaggerators increases, accuracy first improves and then worsens substantially. This is in line with the trade-off between cautious and naive social learning argued for before.

There is also a pattern that may seem strange at first: populations with a particularly large proportion of exaggerators suddenly perform far worse, especially in terms of the accuracy of the mean belief. One can also observe, most visibly in the graphs depicting the mean of belief inaccuracy, that populations seem to split between a few distinct levels of accuracy and jump between these levels in very short time spans, without fully converging at any point during the simulation. The underlying dynamic is a population-wide oscillation between 0 and 1: without a sufficient number of learners mediating between the exaggerators by averaging their expressed beliefs, subgroups of exaggerators emerge that jump between the two extremes, forcing each other to reverse that jump on the next time step. With respect to the intended application of scientific exchange, this behavior can only be interpreted as a model artifact resulting from a move out of the proper parameter space (where not almost everyone is a strategic exaggerator). In cases where such population compositions are plausible, a more subtle update mechanism may be required to avoid this extreme pattern.

The figures also show that the sheer number of agents has little impact on the core results; what is relevant is whether any exaggerators are present and how much of the population they actually comprise.

Finally, it is worth noting that this additional data suggests a limit to the advantage of competing exaggerators, as speculated in the model analysis. Moving from a single exaggerator to a competitive environment improves accuracy, but further additions quickly begin to worsen the results again, though not beyond the negative impact of a single exaggerator, except for the extreme population-wide oscillation patterns mentioned above. This insight does not contradict the speculation on the advantages of competition, since those advantages are contingent on the presence of a substantial population of learners and do not necessarily imply good accuracy for the competing opinion leaders themselves.
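
For completeness, a harness in the spirit of this experiment (a fixed population size, every possible number of exaggerators, repeated runs averaged) might look as follows. It reuses the hypothetical rules from the sketch after the notes and is again an illustration under stated assumptions, not the original setup.

```python
import random

# Illustrative sweep over the proportion of exaggerators: for each
# count k, run the deliberation repeatedly and average the squared
# error of the mean belief. All rules, weights and parameters are the
# hypothetical ones from the earlier sketches.

def run_once(n_agents, n_exaggerators, truth, rng, steps=80):
    # Note 4: beta-distributed initial signals; beta(7, 3) has mean 0.7.
    signals = [rng.betavariate(7, 3) for _ in range(n_agents)]
    stated = list(signals)
    for _ in range(steps):
        new = []
        for i in range(n_agents):
            if i < n_exaggerators:
                # hypothetical exaggerator rule (cf. note 18)
                target = n_agents * signals[i] - (sum(stated) - stated[i])
                new.append(max(0.0, min(1.0, target)))
            else:
                # naive learner: plain average of expressed opinions
                new.append(sum(stated) / n_agents)
        stated = new
    mean = sum(stated) / n_agents
    return (mean - truth) ** 2

rng = random.Random(42)
n_agents, truth, reps = 20, 0.7, 100
for k in range(n_agents + 1):
    err = sum(run_once(n_agents, k, truth, rng) for _ in range(reps)) / reps
    print(f"{k:2d} exaggerators: squared error of mean belief = {err:.4f}")
```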


Cite this article

Merdes, C. Strategy and the pursuit of truth. Synthese 198, 117–138 (2021). https://doi.org/10.1007/s11229-018-01985-x
