Science is a social epistemic enterprise. The complexity of research requires the division of cognitive labor. As a consequence, scientists have to present results and incorporate the results of others into their body of knowledge. This creates the possibility of strategic behavior, leading to phenomena such as publication bias. To analyze the dynamics of strategic behavior in epistemic communities, agent-based modeling suggests itself as a method. The phenomena generated by the developed agent-based simulation model reveal a diverse set of possible dynamics in strategically heterogeneous groups and support the claim that there is a trade-off between a behavioral rule’s efficacy to generate accurate beliefs under optimal conditions and its robustness to variation in the composition of the epistemic environment.
Gigerenzer (2015) provides another example of, at least allegedly, selective reception and reporting of results.
For a formal analysis of the impact of biased agents in epistemic networks, cf. Holman and Bruner (2015).
Any other probability distribution will serve the same purpose; the beta distribution has simply been chosen for technical reasons.
It is an important feature of this model that it allows for multiple levels of higher-order evidence: agents can form a hierarchy of second-order evidence.
One might notice that the model is structurally very similar to linear averaging models, and wonder whether the opinion of an agent is represented by \(b_i\) or \(s_i\); put otherwise, whether there is an actual dynamic of evolving opinions or merely an adaptation of stated claims. For an application to science, the more appropriate interpretation is that of an actual evolution of opinions, the possible importance of their initial signal to an agent notwithstanding. The model is applicable to other target systems, such as political discourse, where an evolving compromise is often better understood as mere convergence of expressed opinion, with no underlying change of belief.
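The linear-averaging core alluded to here can be sketched as follows. The population size, the equal averaging weights, and the clamping of initial signals to the unit interval are illustrative assumptions, not the paper's exact specification; the point is only that private beliefs \(b_i\) and expressed opinions \(s_i\) co-evolve.

```python
import random

# Minimal sketch of a linear-averaging opinion dynamic, assuming
# truthful agents whose private beliefs b and statements s coincide.
def run_averaging(n_agents=10, n_steps=20, truth=0.7, noise=0.1, seed=0):
    rng = random.Random(seed)
    # b: private beliefs, seeded by noisy initial signals around the truth
    b = [min(1.0, max(0.0, rng.gauss(truth, noise))) for _ in range(n_agents)]
    s = list(b)  # s: expressed opinions, initially the signals themselves
    for _ in range(n_steps):
        avg = sum(s) / n_agents
        # Beliefs genuinely move toward the community average, so this is
        # an evolution of opinion, not just an adaptation of stated claims.
        b = [(bi + avg) / 2 for bi in b]
        s = list(b)
    return b

beliefs = run_averaging()
```

Because the community average is preserved under this update while individual deviations halve each step, the population converges geometrically on the mean of the initial signals.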
For clarification: the need to assign credit long before the predicted event takes place arises frequently in fields like climate science or high-energy physics, where predictions extend far into the future or theoretical progress precedes technical realizability by decades.
Accuracy stands in for a number of plausible epistemic values, discussed, e.g., by Kuhn (1977). It was chosen both for its ease of quantification and for the almost universal acceptance of this virtue.
As Douven (2010) points out, beliefs in this kind of model can either be interpreted as degrees of belief or estimates of a certain variable of interest. For the purpose of the analysis of accuracy, it is unnecessary to distinguish between the two interpretations.
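On either interpretation, accuracy can be scored the same way. A minimal sketch of the two squared-error measures reported in the appendix, assuming beliefs are real numbers in the unit interval scored against a known truth (Brier-style scoring):

```python
# Squared-error accuracy measures, assuming beliefs in [0, 1] and a
# known true value. Function names are illustrative.

def mean_squared_error(beliefs, truth):
    """Average individual inaccuracy across the community."""
    return sum((b - truth) ** 2 for b in beliefs) / len(beliefs)

def squared_error_of_mean(beliefs, truth):
    """Inaccuracy of the community's mean belief (collective accuracy)."""
    mean_belief = sum(beliefs) / len(beliefs)
    return (mean_belief - truth) ** 2
```

By convexity of the squared error, the error of the mean belief never exceeds the mean of the individual errors, which is why the two measures can come apart in the plots.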
The strategy profile where all agents play this strategy is a Nash equilibrium. However, the strategy is not dominant: if the other agents play, for example, by uniform randomization, the agent should stick to their own signal.
The notion of peerhood assumed is best expressed in Feldman (2009), which does not require perfect epistemic equivalence, but instead comparable evidence and similar cognitive ability; within the model, this is represented by the equal quality of initial signals and potentially threatened by different agent motivations.
A recently discussed example of this substitution is the priority of Sakoda (1971) over Schelling (1971), described by Hegselmann and Flache (1998) and in more detail in Hegselmann (2017): the proposed models are similar, differing only in subtle details, yet the community allocated virtually all of its acknowledgment to Schelling and coordinated on his results.
As before, other relevant components have to be set aside, in particular the impact of material resources on the actual outcomes.
These assumptions block the game theoretic solutions partially discussed before, since they conflict with perfect information and common knowledge of rationality.
In terms of the peer disagreement debate, the agent employs an equal weight view on graded beliefs.
The factor n before the signal \(b_i\) might strike the reader as odd; it is a consequence of optimizing the payoff scheme on which the social influencer is based, and it effectively corrects the weights such that the agent compares their signal to the average opinion in the community.
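A hedged reconstruction of where the factor n arises: the influencer states whatever value would move the community average onto their own signal \(b_i\), which requires multiplying the signal by the community size n. The function name and the clipping to the unit interval are assumptions for illustration; the paper's payoff scheme may differ in detail.

```python
# Hypothetical reconstruction of the average-targeting statement rule.
def influencer_statement(b_i, others_statements):
    n = len(others_statements) + 1  # community size including this agent
    # Solving (target + sum(others)) / n == b_i for the target statement
    # yields the factor n in front of the signal b_i.
    target = n * b_i - sum(others_statements)
    return min(1.0, max(0.0, target))  # beliefs live in [0, 1]
```

For example, with two others stating 0.5 each and a signal of 0.8, the unclipped target is 3 · 0.8 − 1.0 = 1.4, so the agent states 1.0, illustrating the exaggeration tendency.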
This particular strategy actually lies on a continuum: by introducing additional weight parameters, one could adjust the exact degree to which the agent responds to socially available information. The absence of explicit weights results naturally from the payoff scheme, but it should not be taken as a fundamental limitation on the details of a cautious learning strategy.
Cf. Elkin and Wheeler (2018) for a discussion of steadfastness as a response to disagreement.
See the “Appendix” for additional robustness tests of the results.
Whether there is oscillation depends on the initial distribution of signals and on the type of social learners; cautious learners create less oscillation, since they are less prone to overadaptation.
This is not to say that the scientist is not to some degree epistemically blameworthy, but their claims are still based on actual empirical results and not, for example, on makeshift data.
Even defenders of a very strong version of this individualism can of course accept the possibility that such explanations are pragmatically useful.
The model also makes it possible to cleanly distinguish between an agent’s cognitive capability, represented by the variance of their initial signal, and motivational factors limiting their epistemic purity, represented by their behavioral rules.
A similar argument has been put forward in a different formal framework investigating epistemic norms (Mayo-Wilson 2014).
Muldoon and Weisberg (2011) provides an analysis of the limitations of decentralized, market-like allocation of resources to epistemic projects.
Note that this problem can be circumvented if both an agent’s initial reliability estimates and signal are constrained to be accurate to a minimum degree. For a model of expectation-based updating of reliabilities, see the competing accounts of Bovens and Hartmann (2003, ch. 3) and Olsson and Vallinder (2013). However, neither of these approaches helps under unfavorable conditions.
Alexander, J. M. K., Himmelreich, J., & Thompson, C. (2015). Epistemic landscapes, optimal search, and the division of cognitive labor. Philosophy of Science, 82(3), 424–453.
Auspurg, K., Hinz, T., & Schneck, A. (2014). Ausmaß und Risikofaktoren des Publication Bias in der Deutschen Soziologie. Kzfss Kölner Zeitschrift für Soziologie und Sozialpsychologie, 66(4), 549–573.
Bovens, L., & Hartmann, S. (2003). Bayesian epistemology. Oxford: Oxford University Press.
Brier, G. W. (1950). Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1), 1–3.
Douven, I. (2010). Simulating peer disagreements. Studies in History and Philosophy of Science Part A, 41(2), 148–157.
Eger, S. (2016). Opinion dynamics and wisdom under out-group discrimination. Mathematical Social Sciences, 80, 97–107.
Elkin, L., & Wheeler, G. (2018). Resolving peer disagreements through imprecise probabilities. Noûs, 52(2), 260–278.
Feldman, R. (2006). Epistemological puzzles about disagreement. In S. Hetherington (Ed.), Epistemology futures (pp. 216–36). Oxford: Oxford University Press.
Feldman, R. (2009). Evidentialism, higher-order evidence, and disagreement. Episteme, 6(3), 294–312.
Gigerenzer, G. (2015). On the supposed evidence for libertarian paternalism. Review of Philosophy and Psychology, 6(3), 361–383.
Hegselmann, R. (2017). Thomas C. Schelling and James M. Sakoda: The intellectual, technical, and social history of a model. Journal of Artificial Societies and Social Simulation, 20(3), 15.
Hegselmann, R., & Flache, A. (1998). Understanding complex social dynamics: A plea for cellular automata based modelling. Journal of Artificial Societies and Social Simulation, 1(3), 1.
Hegselmann, R., & Krause, U. (2015). Opinion dynamics under the influence of radical groups, charismatic leaders, and other constant signals: A simple unifying model. Networks & Heterogeneous Media, 10(3), 477–509.
Hegselmann, R., König, S., Kurz, S., Niemann, C., & Rambau, J. (2014). Optimal opinion control: The campaign problem. Arxiv preprint: arXiv:1410.8419.
Holman, B., & Bruner, J. P. (2015). The problem of intransigently biased agents. Philosophy of Science, 82(5), 956–968.
Joas, H., & Knöbl, W. (2004). Sozialtheorie. Frankfurt am Main: Suhrkamp.
Kelly, T. (2011). Peer disagreement and higher order evidence. In A. Goldman & D. Whitcomb (Eds.), Social epistemology: Essential readings (pp. 183–217). Oxford: Oxford University Press.
Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87(1), 5–22.
Kuhn, T. (1977). Objectivity, value judgment, and theory choice. In The essential tension. Chicago: University of Chicago Press.
Lehrer, K., & Wagner, C. (1981). Rational consensus in science and society. A philosophical and mathematical study. Dordrecht: D. Reidel Publishing Company.
Leitgeb, H., & Pettigrew, R. (2010a). An objective justification of Bayesianism I: Measuring inaccuracy. Philosophy of Science, 77(2), 201–235.
Leitgeb, H., & Pettigrew, R. (2010b). An objective justification of Bayesianism II: The consequences of minimizing inaccuracy. Philosophy of Science, 77(2), 236–272.
Mayo-Wilson, C. (2014). Reliability of testimonial norms in scientific communities. Synthese, 191(1), 55–78.
Muldoon, R., & Weisberg, M. (2011). Robustness and idealization in models of cognitive labor. Synthese, 183(2), 161–174.
Olsson, E. J., & Vallinder, A. (2013). Norms of assertion and communication in social networks. Synthese, 190(13), 2557–2571.
Oreskes, N., & Conway, E. M. (2011). Merchants of doubt: How a handful of scientists obscured the truth on issues from tobacco smoke to global warming. London: Bloomsbury Publishing.
Regan, H. M., Colyvan, M., & Markovchick-Nicholls, L. (2006). A formal model for consensus and negotiation in environmental management. Journal of Environmental Management, 80(2), 167–176.
Sakoda, J. M. (1971). The checkerboard model of social interaction. The Journal of Mathematical Sociology, 1(1), 119–132.
Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143–186.
Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100(2), 55–79.
Thoma, J. (2015). The epistemic division of labor revisited. Philosophy of Science, 82(3), 454–472.
Weisberg, M., & Muldoon, R. (2009). Epistemic landscapes and the division of cognitive labor. Philosophy of Science, 76(2), 225–252.
Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17–35.
I want to thank the participants of the workshops “Formal Models of Scientific Inquiry” in Bochum and the “Interdisciplinary Workshop on Opinion Dynamics and Collective Decision” in Bremen in 2017 for discussions of earlier versions of this paper. I also want to thank Kevin Zollman, Stephan Hartmann and Simon Scheller, who provided valuable feedback, and my anonymous reviewers, one of whom in particular suggested the inclusion of further robustness data. The research was carried out at the Munich Center for Mathematical Philosophy.
Appendix: Sensitivity Analysis
The deliberation game model has too large a parameter space for a full parameter sweep, in particular due to the vast combinatorial space of agent populations. However, it is worth looking at some important alternatives to the configurations used to support the arguments in this paper. The sensitivity checks focus on three questions:
How does variation in the communication schedule change model dynamics?
How do the accuracy results scale with increasing numbers of participating agents?
What is the accuracy impact of an increasing number of strategic exaggerators?
Figures 3 and 4 depict runs that are configured analogously to the runs chosen in the exploration of model dynamics in Sect. 3, with one key difference: Communication is not synchronous, but agents update their expressed opinion in randomized sequential order.
The key results inferred from the synchronous case still hold, though with some variation:
Exaggerators are still able to exert their influence, fully steering naive learners and partially influencing cautious learners.
Cautious learners still fail to fully merge.
Oscillations caused by oversteering from the exaggerator can still be observed, but they are substantially diminished.
In particular with respect to the last result, this scheduling algorithm may produce results that are more realistic for particular domains of application. While contributions to a scientific conference are often written in parallel and with little to no knowledge of what beliefs the other participants will express, political discussions, to name just an obvious example, often have a more sequential, reactive structure.
This partial sensitivity result is supported by the theoretical background of the model in game theory: Sequential and synchronous game structures can differ substantially in their behavioral predictions, and the bounded rationality of agents only increases the difference.
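The two schedules can be sketched for a population of simple averagers. The function names and the averaging rule are illustrative stand-ins for the model's actual update, chosen only to make the structural difference explicit: a synchronous step reads one shared snapshot, while a sequential step lets later movers react to earlier ones.

```python
import random

def step_synchronous(s):
    """All agents read the same snapshot and update at once."""
    avg = sum(s) / len(s)
    return [(si + avg) / 2 for si in s]

def step_sequential(s, rng):
    """Agents update one at a time in random order, each seeing the
    already-updated statements of earlier movers."""
    s = list(s)
    order = list(range(len(s)))
    rng.shuffle(order)
    for i in order:
        avg = sum(s) / len(s)  # recomputed after every individual move
        s[i] = (s[i] + avg) / 2
    return s
```

Because sequential movers partially absorb each other's adjustments within a single round, collective overshooting is dampened, which matches the diminished oscillation reported above.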
Scaling, accuracy and communities of exaggeration
To explore both scaling up the model and the impact of an increased percentage of exaggerators, the following simulation experiment was performed: populations of 20 and 50 agents, respectively, were simulated playing the deliberation game. The groups were populated with strategic exaggerators and either naive or cautious learners, and all possible proportions of exaggerators and learners were run. Updating of beliefs was synchronous, and the initial step again featured a revelation of the received signals, again unknown to the participating agents. Each of the resulting model configurations was run for 80 timesteps to remain comparable with the results presented in the main analysis, though convergence generally occurs substantially earlier. Each configuration was run 100 times, and the results are averaged over those 100 runs. Mean squared error, squared error of mean belief, and differences between naive and cautious learners are depicted in Figs. 5, 6, 7 and 8. (Note that the viewpoints for the plots of accuracy differences vary slightly to make the graphs more readable. In the same vein, the accuracy scales in the various plots differ in range, as a common range for all plots would have rendered them harder to read.)
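A schematic version of this sweep can be written as follows. The agent rules are simplified stand-ins (truthful averagers versus average-targeting exaggerators), and the names `deliberate` and `sweep` are illustrative, not the paper's implementation; the sketch only shows the experimental structure of varying the exaggerator proportion and averaging accuracy over repeated runs.

```python
import random

# Stand-in deliberation: exaggerators keep their private belief fixed and
# state the value that would pull the average onto it; learners average.
def deliberate(n_exaggerators, n_agents=20, n_steps=80, truth=0.7, seed=0):
    rng = random.Random(seed)
    b = [min(1.0, max(0.0, rng.gauss(truth, 0.1))) for _ in range(n_agents)]
    s = list(b)
    for _ in range(n_steps):
        avg = sum(s) / n_agents
        new_s = []
        for i in range(n_agents):
            if i < n_exaggerators:
                # exaggerator: target statement that moves the average to b[i]
                new_s.append(min(1.0, max(0.0, n_agents * b[i] - (sum(s) - s[i]))))
            else:
                # learner: average own belief with the community average
                b[i] = (b[i] + avg) / 2
                new_s.append(b[i])
        s = new_s
    # mean squared error of final beliefs against the truth
    return sum((bi - truth) ** 2 for bi in b) / n_agents

def sweep(n_agents=20, n_runs=10):
    """Average accuracy for every possible number of exaggerators."""
    return {k: sum(deliberate(k, n_agents, seed=r) for r in range(n_runs)) / n_runs
            for k in range(n_agents + 1)}
```

Running `sweep` for both population sizes and both learner types, and averaging over many seeds, reproduces the structure of the experiment described above.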
These plots contain an immense amount of information, but a few key observations are sufficient to understand their general implication. First, the differential advantage of cautious and naive learners appears most pronounced in a comparison of the populations with 0 and 1 exaggerator respectively. When the number of exaggerators increases, accuracy first improves and then worsens substantially. This is in line with the trade-off between cautious and naive social learning argued before.
There is also a pattern that may seem strange at first: populations with a particularly large proportion of exaggerators suddenly perform far worse, especially in terms of the accuracy of the mean belief. One can also observe, most visibly in the graphs depicting the inaccuracy of the mean belief, that populations seem to split between a few distinct levels of accuracy and jump between these levels within very short time spans, without fully converging at any point during the simulation. The underlying dynamic is a population-wide oscillation between 0 and 1; without a sufficient number of learners mediating between the exaggerators by averaging their expressed beliefs, subgroups of exaggerators emerge that jump between the two extremes, forcing each other to reverse that jump on the next time step. With respect to the intended application of scientific exchange, this behavior can only be interpreted as a model artifact resulting from a move out of the proper parameter space (where not almost everyone is a strategic exaggerator). In cases where such population compositions are plausible, a more subtle update mechanism may be required to avoid this extreme pattern.
The figures also show that the sheer number of agents has little impact on the core results; what is relevant is whether any exaggerators are present and how much of the population they actually comprise.
Finally, it is worth noting that this additional data suggests a limitation to the advantage of competing exaggerators, as speculated in the model analysis. Moving from a single exaggerator to a competitive environment improves accuracy, but further additions quickly begin to worsen the results again, though not beyond the negative impact of a single exaggerator, except for the extreme population-wide oscillation patterns mentioned above. This does not contradict the speculation on the advantages of competition, as those advantages are contingent on the presence of a substantial population of learners and do not necessarily imply good accuracy for the competing opinion leaders themselves.
Merdes, C. Strategy and the pursuit of truth. Synthese 198, 117–138 (2021). https://doi.org/10.1007/s11229-018-01985-x
- Social epistemology
- Philosophy of science
- Agent-based modeling and simulation
- Peer disagreement
- Opinion dynamics
- Social influence