Abstract
This paper examines lessons that simulations in the form of agent-based models (ABMs) teach us about the norms that should guide disagreeing scientists. I focus on two types of epistemic and methodological norms: (i) norms that guide one’s attitude towards one’s own theory, and (ii) norms that guide one’s attitude towards the opponent’s theory. Concerning (i), I look into ABMs designed to examine the context of peer disagreement. Here I challenge the conclusion that these ABMs provide support for the so-called Steadfast Norm, according to which one is epistemically justified in remaining steadfast in one’s beliefs in the face of disagreeing peers. I argue that the proposed models at best provide evidence for a weaker norm, one concerning methodological steadfastness. Concerning (ii), I look into ABMs aimed at examining the epistemic effects of scientific interaction. Here I argue that the models provide diverging suggestions and that the link between each ABM and the type of inquiry it represents is still missing. Moreover, I examine alternative strategies for arguing in favor of the benefits of scientific interaction, relevant for contemporary discussions of scientific pluralism.
Notes
Besides the case-study approach and computational methods, empirical studies of scientific disagreements are increasingly employed by the philosophical community (see e.g. Beebe et al. 2018). Such studies are important not only for understanding the descriptive state of affairs; they may also serve as a basis for formulating normative conclusions, e.g. when combined with computer simulations that examine counterfactual scenarios.
For arguments in favor of CN see e.g. Christensen (2010), Elga (2007), Feldman (2007) and Feldman (2006); for arguments in favor of SN see e.g. De Cruz and De Smedt (2013) and Kelp and Douven (2012); for reasons why norms are context-dependent see e.g. Christensen (2010), Douven (2010), Kelly (2010a) and Konigsberg (2012).
In the literature on (scientific) rationality we often find the third type of assessment: the axiological one. For instance, Rescher (1988) distinguishes between epistemic, practical/instrumental and evaluative rationality. The latter concerns the assessment of our goals and their conduciveness to some more general ends. A similar point is made by Laudan (1984) in his ‘reticulated model’ of scientific rationality, according to which assessing scientific goals can be done in terms of their feasibility and the overall fit with the existing scientific practice. An important point made by both Rescher and Laudan is that all three types of assessments are interrelated since, e.g. what we believe (or accept) depends on what we do to obtain evidence, which depends on the goals we have; similarly, what we do and praise will depend on our beliefs, etc.
I will henceforth use these notions—theory, hypothesis, model—interchangeably since the discussion applies to all of them.
This is a so-called bounded-confidence model: when adjusting their opinions, agents take into account only those opinions that are sufficiently close to their own.
The update of information is modeled in terms of the following function:
$$\begin{aligned} x_i (u+1) = \alpha \frac{1}{|X_i(u)|} \sum _{j \in X_i(u)} x_j(u) + (1 - \alpha )\tau \end{aligned}$$

where \(x_i(u)\) is the opinion of agent \(x_i\) after the \(u\)-th update, \(\alpha \in \, ]0,1]\) is the weighting factor determining how much the opinions of others and one’s own research influence the change of one’s belief, \(\tau \in \, ]0,1]\) is the objective value of the parameter, \(X_i(u):= \{j : |x_i(u) - x_j(u) | \le \varepsilon \}\) with \(\varepsilon \in [0,1]\) being the confidence interval determining the agents whose opinions are taken into account, and \(|X_i(u)|\) the cardinality of \(X_i(u)\).
This is done by slightly adjusting the process of updating:
$$\begin{aligned} x_i (u+1) = \alpha \frac{1}{|X_i(u)|} \sum _{j \in X_i(u)} x_j(u) + (1 - \alpha )(\tau + \mathrm {rnd}(\zeta )) \end{aligned}$$

where \(\mathrm {rnd}(\zeta )\) is a function that returns a uniformly distributed random real number in the interval \([-\zeta , +\zeta ]\), with \(\zeta \in [0,1]\).
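The two update rules can be sketched in a few lines of Python (a toy reimplementation for illustration, not Hegselmann and Krause’s own code; setting \(\zeta = 0\) recovers the noise-free rule):

```python
import random

def hk_update(opinions, alpha, tau, eps, zeta=0.0):
    """One synchronous bounded-confidence update.

    opinions : list of floats in [0, 1], one per agent
    alpha    : weight on social information vs. one's own research
    tau      : objective value approximated by individual research
    eps      : confidence interval (epsilon in the text)
    zeta     : noise amplitude; zeta = 0 gives the noise-free model
    """
    new = []
    for x_i in opinions:
        # X_i(u): agents whose opinions lie within the confidence
        # interval (each agent always counts herself, so non-empty).
        peers = [x_j for x_j in opinions if abs(x_i - x_j) <= eps]
        social = sum(peers) / len(peers)
        research = tau + random.uniform(-zeta, zeta)
        new.append(alpha * social + (1 - alpha) * research)
    return new
```

With a narrow confidence interval, agents are pulled only by their own research term; with a wide one, the social term dominates and opinions contract toward the mean.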
De Langhe employs Goldman’s (2010) idea that even though disagreeing peers may share the evidence concerning the given issue in question, they may not share the evidence for the epistemic system within which they evaluate the former (object-level) evidence.
This is, however, not surprising, since difference-splitting with agents from other epistemic systems carries information about their respective \(\tau \), while the success of each agent is measured by how close she gets to the \(\tau \) of her own system. Note that this issue may pose a more general conceptual problem for De Langhe’s model, since it is not clear which epistemic benefits are included in the representation of interaction between agents belonging to different epistemic systems.
Not to be confused with Hugh Lacey’s notion of endorsement which refers to the acceptance of hypotheses in the context of application, see Lacey (2015).
Even though philosophers have sometimes conjectured what such a relationship may look like (e.g. Magnus 2014 suggests that “scientists who cultivate agnosticism might not pursue their chosen research program with the necessary vigor. The community would then do better if those individuals fully embraced the presuppositions of their approach”, p. 132), as explained above, the situation is not so simple, and only via a proper empirical study can we obtain reliable information about this relationship.
Given that the ABMs in question were originally developed to address issues discussed in the literature on peer disagreement around 2005, this is not surprising. The whole setup of the peer disagreement debate revolved around doxastic attitudes, and the importance of alternative cognitive attitudes has only recently entered the discussion (see e.g. Fleisher 2018a). At the same time, ABMs of scientific inquiry are still typically based on the assumption that an agent’s beliefs and pursuit-related attitudes are mutually correlated. While sometimes this may be a harmless idealization, in case differentiating between the two could have an impact on conclusions we draw from the model, it is important to keep this distinction in mind.
For a recent discussion on different types of scientific pluralism see Šešelja (2017).
Every round an agent makes 1,000 pulls, each of which can be a success or a failure, where the probability of success is given by the objective probability of success of the respective hypothesis. Agents then update their beliefs via Bayesian reasoning (modeled by means of beta distributions). Note that the model described here is Zollman’s (2010) model, which is a generalized version of his (2007) one.
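A single agent’s evidence-gathering and update can be sketched as follows (an illustrative Python fragment, not Zollman’s implementation; the beta distribution is the conjugate prior for Bernoulli pulls, so updating reduces to counting successes and failures):

```python
import random

def pull_and_update(a, b, p_true, n_pulls=1000):
    """Pull one bandit arm n_pulls times and update a beta prior.

    (a, b)  : parameters of the agent's beta distribution over the
              arm's success probability; the posterior mean a/(a+b)
              is her current credence that a pull succeeds
    p_true  : the objective probability of success of the hypothesis
    """
    successes = sum(random.random() < p_true for _ in range(n_pulls))
    # Conjugate beta-binomial update: add successes and failures.
    return a + successes, b + (n_pulls - successes)
```

Each round every agent applies this update not only to her own results but also to those of her neighbors in the communication network.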
Our enhancement of Zollman’s ABM (Frey and Šešelja 2018a) is based on two observations: on the one hand, Zollman’s result hinges on the parameter choices for the objective probability of success of the two hypotheses, namely 0.499 and 0.5; on the other hand, if scientists are considered successful only when they converge on the better hypothesis, then the difference between the two hypotheses should gradually increase. In other words, as scientists improve their methodology, they should obtain a better grasp of the difference between the rival approaches. The corollary of implementing this assumption is that scientists always converge on the better hypothesis; the only open question is how long it takes them to do so. Consequently, efficiency in this model is measured in terms of time (needed for the successful convergence) rather than in terms of the percentage of successful runs.
Each agent moves with a speed of 0.5, approaching her target halfway each round with a probability of 0.5 (the latter represents the agent’s inertia). Each time an agent moves, she jumps to a random point within four units on either side of the target spot. This represents the ‘shaking hand’ phenomenon: an attempt at replicating the target hypothesis may give slightly different results (Grim 2009, Section 3).
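This movement rule can be sketched in Python (an illustrative reconstruction of the step just described, not Grim’s code; positions are treated as points on a one-dimensional landscape):

```python
import random

def move(position, target, inertia=0.5, jitter=4.0):
    """One movement step: with probability `inertia` the agent stays
    put; otherwise she moves halfway toward her target and lands at a
    random point within `jitter` units of it (the 'shaking hand')."""
    if random.random() < inertia:
        return position  # inertia: no move this round
    halfway = position + (target - position) / 2
    return halfway + random.uniform(-jitter, jitter)
```

The jitter term is what makes replication imperfect: repeated attempts at the same target hypothesis land on slightly different spots of the epistemic landscape.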
Grim (2009) shows that there is a threshold below which less connected networks begin to perform worse.
More precisely, a subset of arguments \(A\) of a given theory \(T\) is admissible iff for each attacker \(b\) of some \(a\) in \(A\) there is an \(a'\) in \(A\) that attacks \(b\). An argument \(a\) in \(T\) is said to be defended in \(T\) iff it is a member of the maximally admissible subset of \(T\) (note that each theory in the model is conflict-free in the sense that no two arguments in it attack one another).
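This admissibility check can be sketched in Python (illustrative only; following standard Dung (1995) semantics the check also includes conflict-freeness, which holds trivially for the theories of the model):

```python
def is_admissible(subset, attacks):
    """Check Dung-style admissibility.

    subset  : a set of arguments
    attacks : a set of (attacker, target) pairs
    The set must be conflict-free and must defend each member:
    every attacker of a member is itself attacked by some member.
    """
    conflict_free = not any(
        (a, b) in attacks for a in subset for b in subset
    )
    defended = all(
        any((c, b) in attacks for c in subset)
        for (b, a) in attacks if a in subset
    )
    return conflict_free and defended
```

For instance, with attacks {(b, a), (c, b)}, the set {a, c} is admissible (c defends a against b), while {a} alone is not, since nothing in it counter-attacks b.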
In Borg et al. (2018) we also present an alternative, pluralist criterion of success, according to which a community is successful if, at the end of the run, the best theory does not have fewer agents than either of the rival theories (p. 295; the results of ArgABM simulations are usually obtained by employing a landscape consisting of three theories, though they are similar if the number of theories is reduced to two).
This is important even in case of models that aim to provide a how-possibly explanation of the given target since not all possibilities are interesting in the sense that we can derive from them relevant information about real-world phenomena (see Frey and Šešelja 2018b) and may instead amount only to ‘just so stories’ (Verreault-Julien 2019) or ‘model based story telling’ (Arnold 2006).
Clearly, besides a descriptively adequate representation, a model can incorporate counterfactual assumptions if examining them is interesting from a normative perspective (e.g. we could use simulations to compare different types of information sharing and their relative impact on the efficiency of the given community).
Experiments may however represent a different target phenomenon than the given model, and moreover, further studies may give conflicting results. For instance, the study by Mason et al. (2008) suggesting that less connected networks have a better problem-solving performance than the fully connected ones was subsequently challenged by Mason and Watts (2012) who found the opposite to be the case.
In Straßer et al. (2015) we define epistemic toleration as a conditional norm, which is triggered if “the tolerated stance is considered objectionable and in an important sense epistemically problematic” and if “there are reasons—namely, the indices of [rational disagreement]—in view of which it would be wrong not to tolerate an objectionable stance.” Epistemic toleration, however, has its limits: it is not triggered if one has “reasons to consider the stance of the opponent as futile”, for instance in case of “empirically backed up reasons to suppose bias or fraud on the side of the opposition, the refusal to take part in argumentative exchange, a systematic reluctance to put hypotheses under critical empirical tests, systematic self-immunization from empirical and argumentative scrutiny, etc.” (pp. 128–129).
Historical case studies may also be helpful, especially if combined with the analysis of bibliometric data, in order to generate a sufficiently broad evidence base. Such results could also be used for the empirical embedding of ABMs, as suggested by Frey and Šešelja (2018b) and recently employed by Harnagel (2018).
As we write in Straßer et al. (2015): “To caricature it a bit ...do we rather want a science where the scientists are individually rational but the scientific machinery may sometimes move a bit slowlier than optimal, or do we want a scientific machinery that performs most efficiently but where the scientists may sometimes put on blinkers which make them suboptimal viz. slightly dogmatic epistemic agents?” (p. 145).
References
Arnold, E. (2006). The dark side of the force: When computer simulations lead us astray and “model think” narrows our imagination—Pre-conference draft for the models and simulation conference, Paris, June 12–14. Accessed on October 31, 2018. https://eckhartarnold.de/papers/2006_simulations/node10.html.
Arnold, E. (2013). Simulation models of the evolution of cooperation as proofs of logical possibilities. How useful are they? Ethics and Politics, XV(2), 101–138.
Barnes, T. J., & Sheppard, E. (2010). ‘Nothing includes everything’: Towards engaged pluralism in Anglophone economic geography. Progress in Human Geography, 34(2), 193–214.
Beebe, J. R., Baghramian, M., O’C Drury, L., & Dellsén, F. (2018). Divergent perspectives on expert disagreement: Preliminary evidence from climate science, climate policy, astrophysics, and public opinion. arXiv preprint arXiv:1802.01889.
Boero, R., & Squazzoni, F. (2005). Does empirical embeddedness matter? Methodological issues on agent-based models for analytical social science. Journal of Artificial Societies and Social Simulation, 8(4).
Borg, A., Frey, D., Šešelja, D., & Straßer, C. (2017). An argumentative agent-based model of scientific inquiry. In S. Benferhat, K. Tabia, & M. Ali (Eds.), Proceedings of the advances in artificial intelligence: From theory to practice—30th international conference on industrial engineering and other applications of applied intelligent systems, IEA/AIE 2017, Arras, France, 27–30 June 2017. Part I (pp. 507–510). Cham: Springer.
Borg, A., Frey, D., Šešelja, D., & Straßer, C. (2017). Examining network effects in an argumentative agent-based model of scientific inquiry. In A. Baltag, J. Seligman, & T. Yamada (Eds.), Proceedings of the logic, rationality, and interaction: 6th international workshop, LORI 2017, Sapporo, Japan, 11–14 September 2017 (pp. 391–406). Berlin: Springer.
Borg, A., Frey, D., Šešelja, D., & Straßer, C. (2018). Epistemic effects of scientific interaction: Approaching the question with an argumentative agent-based model. Historical Social Research, 43(1), 285–309.
Borg, A., Frey, D., Šešelja, D., & Straßer, C. (2019). Theory-choice, transient diversity and the efficiency of scientific inquiry. European Journal for Philosophy of Science. https://doi.org/10.1007/s13194-019-0249-5.
Casini, L. & Manzo, G. (2016). Agent-based models and causality: A methodological appraisal. In The IAS working paper series. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133332. Accessed 1 Dec 2018
Chang, H. (2012). Is water H2O? Evidence, realism and pluralism. Dordrecht: Springer.
Christensen, D. (2010). Higher-order evidence. Philosophy and Phenomenological Research, 81(1), 185–215.
De Cruz, H., & De Smedt, J. (2013). The value of epistemic disagreement in scientific practice. The case of Homo floresiensis. Studies in History and Philosophy of Science Part A, 44(2), 169–177.
De Langhe, R. (2013). Peer disagreement under multiple epistemic systems. Synthese, 190, 2547–2556.
Douglas, H. E. (2009). Science, policy, and the value-free ideal. Pittsburgh: University of Pittsburgh Press.
Douven, I. (2010). Simulating peer disagreements. Studies in History and Philosophy of Science Part A, 41(2), 148–157.
Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77, 321–358.
Elga, A. (2007). Reflection and disagreement. Noûs, 41(3), 478–502.
Elgin, C. Z. (2010). Persistent disagreement. In R. Feldman & T. A. Warfield (Eds.), Disagreement (pp. 53–68). Oxford: Oxford University Press.
Elliott, K. C., & Willmes, D. (2013). Cognitive attitudes and values in science. Philosophy of Science, 80(5), 807–817.
Feldman, R. (2005). Respecting the evidence. Philosophical Perspectives, 19(1), 95–119.
Feldman, R. (2006). Epistemological puzzles about disagreement. In S. Hetherington (Ed.), Epistemology futures (pp. 216–236). Oxford: Oxford University Press.
Feldman, R. (2007). Reasonable religious disagreements. In L. M. Antony (Ed.), Philosophers without gods (pp. 194–214). Oxford: OUP.
Fleisher, W. (2018a). How to endorse conciliationism. https://doi.org/10.7282/t3-z234-rj23.
Fleisher, W. (2018b). Rational endorsement. Philosophical Studies, 175(10), 2649–2675.
Frey, D., & Šešelja, D. (2018a). Robustness and idealization in agent-based models of scientific interaction. British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy039.
Frey, D., & Šešelja, D. (2018b). What is the epistemic function of highly idealized agent-based models of scientific inquiry? Philosophy of the Social Sciences. https://doi.org/10.1177/0048393118767085.
Goldman, A. (2010). Epistemic relativism and reasonable disagreement. In R. Feldman & T. Warfield (Eds.), Disagreement (pp. 187–215). Oxford: Oxford University Press.
Grim, P. (2009). Threshold phenomena in epistemic networks. In AAAI fall symposium: complex adaptive systems and the threshold effect (pp. 53–60).
Grim, P., Singer, D. J., Fisher, S., Bramson, A., Berger, W. J., Reade, C., et al. (2013). Scientific networks on data landscapes: Question difficulty, epistemic success, and convergence. Episteme, 10(04), 441–464.
Harnagel, A. (2018). A mid-level approach to modeling scientific communities. Studies in History and Philosophy of Science. https://doi.org/10.1016/j.shpsa.2018.12.010.
Hegselmann, R., & Krause, U. (2002). Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5, 3.
Hegselmann, R., & Krause, U. (2005). Opinion dynamics driven by various ways of averaging. Computational Economics, 25(4), 381–405.
Hegselmann, R., & Krause, U. (2006). Truth and cognitive division of labor: First steps towards a computer aided social epistemology. Journal of Artificial Societies and Social Simulation, 9(3), 10.
Kelly, T. (2010a). Peer disagreement and higher-order evidence. In R. Feldman & T. A. Warfield (Eds.), Disagreement (pp. 111–174). Oxford: Oxford University Press.
Kelly, T. (2010b). Peer disagreement and higher order evidence. Social Epistemology: Essential Readings, 24, 183–217.
Kelp, C., & Douven, I. (2012). Sustaining a rational disagreement. In H. de Regt, S. Hartmann & S. Okasha (Eds.), EPSA philosophy of science: Amsterdam 2009. The European philosophy of science association proceedings (Vol. 1). Dordrecht: Springer.
Konigsberg, A. (2012). The problem with uniform solutions to peer disagreement. Theoria, 79(2), 96–118.
Kuhn, T. (1977). The essential tension: Selected studies in scientific tradition and change. Chicago: University of Chicago Press.
Lacey, H. (2009). The interplay of scientific activity, worldviews and value outlooks. Science and Education, 18, 839–860.
Lacey, H. (2013). Rehabilitating neutrality. Philosophical studies, 163(1), 77–83.
Lacey, H. (2014). Science, respect for nature, and human well-being: Democratic values and the responsibilities of scientists today. Foundations of Science, 21, 1–17.
Lacey, H. (2015). ‘Holding’ and ‘endorsing’ claims in the course of scientific activities. Studies in History and Philosophy of Science Part A, 53, 89–95.
Laudan, L. (1984). Science and values. Berkeley: University of California Press.
Longino, H. (2002). The fate of knowledge. Princeton: Princeton University Press.
Longino, H. E. (2013). Studying human behavior: How scientists investigate aggression and sexuality. Chicago: University of Chicago Press.
Magnus, P. D. (2014). Science and rationality for one and all. Ergo. https://doi.org/10.3998/ergo.12405314.0001.005.
Mason, W., & Watts, D. J. (2012). Collaborative learning in networks. Proceedings of the National Academy of Sciences, 109(3), 764–769.
Mason, W. A., Jones, A., & Goldstone, R. L. (2008). Propagation of innovations in networked groups. Journal of Experimental Psychology: General, 137(3), 422.
Merdes, C. (2018). Strategy and the pursuit of truth. Synthese. https://doi.org/10.1007/s11229-018-01985-x.
Nickles, T. (2006). Heuristic appraisal: Context of discovery or justification? In J. Schickore & F. Steinle (Eds.), Revisiting discovery and justification: Historical and philosophical perspectives on the context distinction (pp. 159–182). Amsterdam: Springer.
Rescher, N. (1988). Rationality: A philosophical inquiry into the nature and the rationale of reason. Oxford: Oxford University Press.
Rolin, K. (2011). Diversity and dissent in the social sciences: The case of organization studies. Philosophy of the Social Sciences, 41(4), 470–494.
Rosenstock, S., O’Connor, C., & Bruner, J. (2017). In epistemic networks, is less really more? Philosophy of Science, 84(2), 234–252.
Šešelja, D. (2017). Scientific pluralism and inconsistency toleration. Humana.Mente Journal of Philosophical Studies, 32, 1–29.
Šešelja, D. (2018). Exploring scientific inquiry via agent-based modeling (Forthcoming). http://philsci-archive.pitt.edu/15120/1/Exploratory_ABMs.pdf. Accessed 1 Dec 2018.
Šešelja, D., Kosolosky, L., & Straßer, C. (2012). Rationality of scientific reasoning in the context of pursuit: Drawing appropriate distinctions. Philosophica, 86, 51–82.
Šešelja, D., & Straßer, C. (2013). Abstract argumentation and explanation applied to scientific debates. Synthese, 190, 2195–2217.
Šešelja, D., & Straßer, C. (2014). Epistemic justification in the context of pursuit: A coherentist approach. Synthese, 191(13), 3111–3141.
Šešelja, D., & Weber, E. (2012). Rationality and irrationality in the history of continental drift: Was the hypothesis of continental drift worthy of pursuit? Studies in History and Philosophy of Science, 43, 147–159.
Solomon, M. (2006). Groupthink versus the wisdom of crowds: The social epistemology of deliberation and dissent. The Southern Journal of Philosophy, 44, 28–42.
Straßer, C., Šešelja, D., & Wieland, J. W. (2015). Withstanding tensions: Scientific disagreement and epistemic tolerance. In E. Ippoliti (Ed.), Heuristic reasoning. Studies in applied philosophy, epistemology and rational ethics (pp. 113–146). Berlin: Springer.
Thicke, M. (2018). Evaluating formal models of science. Journal for General Philosophy of Science, 49(2), 315–335.
Verreault-Julien, P. (2019). How could models possibly provide how-possibly explanations? Studies in History and Philosophy of Science Part A, 73, 22–33.
Whitt, L. A. (1990). Theory pursuit: Between discovery and acceptance. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1, 467–483.
Whitt, L. A. (1992). Indices of theory promise. Philosophy of Science, 59, 612–634.
Zollman, K. J. S. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.
Zollman, K. J. S. (2010). The epistemic benefit of transient diversity. Erkenntnis, 72(1), 17–35.
Acknowledgements
I would like to thank Andrea Robitzsch for valuable discussions on epistemic and methodological norms, which inspired parts of this paper. I am also grateful to two anonymous referees, to Borut Trpin and to the audience of the MAP MCMP (Minorities and Philosophy at the Munich Center for Mathematical Philosophy) seminar where I first presented this paper, for valuable comments. Research for this paper was funded by the DFG (Research Grant HA 3000/9-1).
Cite this article
Šešelja, D. Some lessons from simulations of scientific disagreements. Synthese 198 (Suppl 25), 6143–6158 (2021). https://doi.org/10.1007/s11229-019-02182-0