We motivate a picture of social epistemology that sees forgetting as subject to epistemic evaluation. Using computer simulations of a simple agent-based model, we show that how agents forget can have as large an impact on group epistemic outcomes as how they share information. But, how we forget, unlike how we form beliefs, isn’t typically taken to be the sort of thing that can be epistemically rational or justified. We consider what we take to be the most promising argument for this claim and find it lacking. We conclude that understanding how agents forget should be as central to social epistemology as understanding how agents form beliefs and share information with others.
Outside of epistemology, it’s more common to see forgetting as the sort of thing that can be subject to rational criticism. Angela Smith’s (2005) paper about responsibility for attitudes, for example, has as its central example a case in which someone is blameworthy for forgetting a close friend’s birthday. It’s most natural to conceive of that case as one of practical or moral responsibility, rather than epistemic. Here we’ll restrict ourselves to epistemic evaluations of forgetting.
This suggests that we’re committed to a consequentialist picture of epistemic normativity. While the primary author does explicitly endorse such a view, one need not be a consequentialist to feel the pull of this idea. One only needs to see the epistemic impacts as a symptom of the rationality of the forgetting method, not its (constituent) cause.
This might not be right though, as the example of forgetting yesterday’s pain (mentioned above) shows. We’re inclined to see that reason as non-epistemic, but the point here is just that limited agents have some reasons that unlimited agents don’t have.
We’ll also drop talk of justification here, simply so that we don’t have to repeat “rational or justified” many times, but nothing we say below hinges on whatever difference there may be between rational and justified beliefs/things forgotten.
Since the sets of propositions that constitute arguments in this model can be arbitrary sets, a set can support some conclusion to a certain degree even while a superset of it supports the negation of that conclusion to a greater degree.
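To illustrate this non-monotonicity of support, here is a minimal sketch (our own illustrative construction, not the paper's actual model code): degree of support for a conclusion C is taken to be the signed sum of each proposition's weight, so a superset can support not-C even though one of its subsets supports C. The proposition names and weights are invented.

```python
def support(props):
    """Net support for conclusion C: positive favors C, negative favors not-C."""
    return sum(props.values())

# Hypothetical weights: p1 and p2 weakly favor C, p3 strongly favors not-C.
subset = {"p1": 0.4, "p2": 0.3}
superset = {"p1": 0.4, "p2": 0.3, "p3": -1.0}

print(support(subset))    # positive: the subset supports C
print(support(superset))  # negative: the superset supports not-C
```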
The ways of forgetting we use were also used in our earlier work (Singer et al. 2019). As in that work, we keep fixed and uniform how many propositions the agents can remember, but the results we discuss here do not depend on this. In other work studying agents with limited memories, the norm has been to model memory limitations by limiting the number of states of a finite automaton that represents the agent. Those models clearly differ greatly from the model we discuss here. See Singer et al. (2019), note 11, for more information.
Of course, this idealizing assumption is not realistic, but since the role of the model is to help us understand the effect of memory limitations on social epistemic outcomes for groups, the complexities of a more realistic memory model would likely detract from the more general usefulness of the model.
For each forgetting method, agents will reassess what they believe after forgetting a proposition. One might think it’s more realistic for agents to continue believing what was supported by their information even after they forget the supporting information. Agents like that would be much more doxastically complicated than the kinds of agents we consider, since our agents can be represented by just the supporting information they have, not any additional belief state that would itself need updating methods, etc. One reason to think that forgetting methods like ours might be realistic in some cases is their ability to explain phenomena witnessed in the real world, like group polarization (see Singer et al. 2019). Moreover, modeling limited agents as fully removing forgotten information from their beliefs is standard. See Halpern et al. (2014) for a survey.
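The reassess-after-forgetting idea can be sketched as follows. This is our own simplified illustration, not the paper's actual simulation code: an agent is represented by just the propositions it remembers (with hypothetical weights), and its belief is re-derived from that remaining evidence after each forgetting step, so no separate belief state needs patching.

```python
import random

def belief(memory):
    """Re-derive the belief from remembered evidence: sign of the net weight."""
    net = sum(memory.values())
    return "C" if net > 0 else "not-C"

def learn(memory, prop, weight, capacity, rng):
    """Add a proposition; if over capacity, forget one and reassess."""
    memory = dict(memory)
    memory[prop] = weight
    while len(memory) > capacity:
        # One possible forgetting method: drop a remembered item at random.
        victim = rng.choice(sorted(memory))
        del memory[victim]
    return memory

rng = random.Random(0)
mem = {"p1": 0.5, "p2": 0.4}
mem = learn(mem, "p3", -2.0, capacity=2, rng=rng)
print(belief(mem))  # the belief is recomputed from whatever survived
```

The design point is that agents like these carry no doxastic state beyond their remembered evidence, matching the simplicity described above.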
When influential and biased speakers share information, they measure the informativeness of their information based on all of the information that has been shared, not just what they remember of it. We can think of this like the agent doing a literature review before publishing: the expectation is that they will add to the discussion that is being had by everyone, not just the aspects of it they happen to be familiar with.
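A rough sketch of this sharing rule, with our own simplifications (the informativeness measure here just counts novelty against everything publicly shared, and the function and variable names are invented for illustration):

```python
def marginal_informativeness(prop, shared):
    """A proposition already shared adds nothing; otherwise it is novel."""
    return 0.0 if prop in shared else 1.0

def biased_share(remembered, weights, shared, my_view_sign):
    """Pick a novel proposition favoring the speaker's view, if any;
    otherwise fall back to a novel one that at least doesn't oppose it."""
    supporting = [p for p in remembered
                  if weights[p] * my_view_sign > 0
                  and marginal_informativeness(p, shared) > 0]
    if supporting:
        return supporting[0]
    neutral_or_better = [p for p in remembered
                         if weights[p] * my_view_sign >= 0
                         and marginal_informativeness(p, shared) > 0]
    return neutral_or_better[0] if neutral_or_better else None

weights = {"p1": 0.6, "p2": -0.2, "p3": 0.0}
shared = {"p1"}  # p1 is already public, so repeating it is uninformative
print(biased_share(["p1", "p2", "p3"], weights, shared, my_view_sign=+1))  # → p3
```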
The strengths were assigned in this way to mimic the idea that few arguments are “clincher” arguments—ones which totally win the day—while there are many arguments that lend some support to one view or another.
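One way to realize "few clinchers, many weak arguments" is a heavy-tailed strength distribution. The paper does not specify this exact scheme; the truncated exponential below is purely our illustrative assumption.

```python
import random

rng = random.Random(42)
# Sample 1000 argument strengths in [0, 1], skewed toward weak arguments.
strengths = [min(rng.expovariate(3.0), 1.0) for _ in range(1000)]

weak = sum(s < 0.3 for s in strengths)       # arguments lending mild support
clinchers = sum(s > 0.9 for s in strengths)  # near-decisive arguments
print(weak, clinchers)  # many weak arguments, few near-decisive ones
```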
When they’re appropriate, we use parametric tests, like t-tests, and non-parametric tests, like the Wilcoxon rank-sum test (Mann–Whitney U test) and the Kolmogorov–Smirnov test (“KS-test”). Cohen’s d is a measure of effect size; values below 0.2 are generally recognized as small and values above 0.8 as large. When it’s reported with an inequality here, we’re summarizing more than one test.
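For readers unfamiliar with the effect-size measure, here is a self-contained sketch of Cohen's d using the standard pooled-standard-deviation formula; the two sample groups are made-up numbers for illustration, not our simulation data.

```python
import statistics

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation: |d| < 0.2 is conventionally
    'small', |d| > 0.8 'large'."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

group_a = [0.62, 0.71, 0.58, 0.66, 0.69, 0.64]
group_b = [0.41, 0.45, 0.39, 0.48, 0.44, 0.42]
d = cohens_d(group_a, group_b)
print(round(d, 2))  # well above 0.8, i.e. a large effect
```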
It’s worth noting that the distributions are very different. At step 1000, the runs with biased agents all have either 100% of the agents on the right side or 100% on the wrong side. For random agents, roughly the same number of runs have 100% on the right side, but the remaining runs are roughly evenly spread through the rest of the space. The difference in mean is significant though (p < 3.2 × 10⁻¹³ in a t-test).
One might expect biased sharers to perform equally as well as random sharers in the long run. This turns out not to be true because biased sharers end up hiding information: when they have nothing to support their own view, they share a proposition at random, so long as that proposition doesn’t go against their view in light of what has been shared.
This was computed using partial eta-squared in a type III ANOVA, where both of these factors only explained a tiny amount of the variance in the model. We believe the extreme overall variance is explained by the wildly different initial conditions that are possible in the model.
As above, this was measured by the partial eta-squared in a type III ANOVA. Convergence was tested only every 10 steps from 0 to 100 steps and then every 100 steps from 100 to 1000.
Moreover, the methodology we use here would not translate naturally to testing forgetting methods for individuals, since incoming propositions only come from other agents in the model as it was used here.
The percentages given here come from all of the data we have on limited groups, but the same pattern can be seen in the tables above for those particular parameters.
Alston, W. P. (1988). The deontological conception of epistemic justification. Philosophical Perspectives, 2, 257–299.
Barrett, J., & Zollman, K. J. S. (2009). The role of forgetting in the evolution and learning of language. Journal of Experimental & Theoretical Artificial Intelligence, 21(4), 293–309.
Bernecker, S. (2018). On the blameworthiness of forgetting. In K. Michaelian, D. Debus, & D. Perrin (Eds.), New directions in the philosophy of memory. Abingdon: Routledge.
Bernecker, S., & Grundmann, T. (2019). Knowledge from forgetting. Philosophy and Phenomenological Research, 98(3), 525–540.
Betz, G. (2012). Debate dynamics: How controversy improves our beliefs (Vol. 357). Berlin: Springer.
Cruz, M. G., Boster, F. J., & Rodriguez, J. I. (1997). The impact of group size and proportion of shared information on the exchange and integration of information in groups. Communication Research, 24, 291–313.
Fagin, R., & Halpern, J. (1988). Belief, awareness, and limited reasoning. Artificial Intelligence, 34(1), 39–76.
Feldman, R. (2000). The ethics of belief. Philosophy and Phenomenological Research, 60(3), 667–695.
Fischer, J. M., & Ravizza, M. (1998). Responsibility and control: A theory of moral responsibility. Cambridge: Cambridge University Press.
Ginet, C. (2001). Deciding to believe. In M. Steup (Ed.), Knowledge, truth and duty (pp. 63–76). New York, NY: Oxford University Press.
Goldman, A. I. (1999). Knowledge in a social world. Oxford: Oxford University Press.
Halpern, J. Y. & Pass, R. (2010). I don’t want to think about it now: Decision theory with costly computation. In Twelfth international conference on the principles of knowledge representation and reasoning.
Halpern, J. Y., Pass, R., & Seeman, L. (2014). Decision theory with resource-bounded agents. Topics In Cognitive Science, 6(2), 245–257.
Hieronymi, P. (2006). Controlling attitudes. Pacific Philosophical Quarterly, 87, 45–74.
Hill, B. (2010). Awareness dynamics. Journal of Philosophical Logic, 39, 113–137.
Lumet, S. (Director), Fonda, H., Rose, R. (Producers), Rose, R., & Hopkins, K. (Writers). (1956). 12 angry men [Motion picture]. United States: United Artists Corp.
Mayo-Wilson, C., Zollman, K. J. S., & Danks, D. (2011). The independence thesis: When individual and social epistemology diverge. Philosophy of Science, 78(4), 653–677.
McHugh, C. (2012). Epistemic deontology and voluntariness. Erkenntnis, 77(1), 65–94.
Moss, S. (2015). Time-slice epistemology and action under indeterminacy. In T. S. Gendler & J. Hawthorne (Eds.), Oxford studies in epistemology (pp. 172–194). Oxford: Oxford University Press.
Peels, R. (2017). Responsible belief: A theory in ethics and epistemology. Oxford: Oxford University Press.
Ryan, S. (2003). Doxastic compatibilism and the ethics of belief. Philosophical Studies, 114, 47–79.
Schwarz, W. (2017). Evidentialism and conservatism in Bayesian epistemology. Unpublished. https://www.umsu.de/papers/conservatism.pdf. Accessed 31 Oct 2018.
Singer, D. J. (2018). How to be an epistemic consequentialist. Philosophical Quarterly, 68(272), 580–602.
Singer, D. J., Bramson, A., Grim, P., Holman, B., Jung, J., Kovaka, K., et al. (2019). Rational social and political polarization. Philosophical Studies, 176, 2243.
Smith, A. M. (2005). Responsibility for attitudes: Activity and passivity in mental life. Ethics, 115, 236–271.
Stasser, G., & Titus, W. (1985). Pooling of unshared information in group decision making: Biased information sampling during discussions. Journal of Personality and Social Psychology, 48, 1467–1478.
Stasser, G., & Titus, W. (1987). Effects of information load and percentage of shared information on the dissemination of unshared information during group discussion. Journal of Personality and Social Psychology, 53, 81–93.
Steup, M. (2008). Doxastic freedom. Synthese, 161, 375–392.
Steup, M. (2011). Belief, voluntariness and intentionality. Dialectica, 65(4), 537–559.
Sunstein, C. (2002). The law of group polarization. Journal of Political Philosophy, 10(2), 175–195.
Sunstein, C., & Hastie, R. (2014). Making dumb groups smarter. Harvard Business Review. https://hbr.org/2014/12/making-dumb-groups-smarter. Accessed 15 Apr 2018.
Vahid, H. (1998). Deontic vs. nondeontic conceptions of epistemic justification. Erkenntnis, 49(3), 285–301.
van Benthem, J. (2011). Logical dynamics of information and interaction. Cambridge: Cambridge University Press.
van Benthem, J., & Velázquez-Quesada, F. R. (2010). The dynamics of awareness. Synthese, 177(1), 5–27.
van Ditmarsch, H., Herzig, A., Lang, J., & Marquis, P. (2009). Introspective forgetting. Synthese, 169(2), 405–423.
Vineberg, S. (Spring 2016 Edition). Dutch book arguments. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/spr2016/entries/dutch-book/. Accessed 30 Apr 2018.
Weatherson, B. (2008). Deontology and Descartes’ demon. Journal of Philosophy, 105, 540–569.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Wilson, A. (2014). Bounded memory and biases in information processing. Econometrica, 82(6), 2257–2294.
Wittenbaum, G. M., Hollingshead, A. B., & Botero, I. C. (2004). From cooperative to motivated information sharing in groups: Moving beyond the hidden profile paradigm. Communication Monographs, 71(3), 286–310.
Zollman, K. J. (2007). The communication structure of epistemic communities. Philosophy of Science, 74(5), 574–587.
We’re very thankful for the feedback on aspects of this research from Hélène Landemore, Joe Halpern, and audiences at the University of Michigan, Dartmouth College, and the Wharton School at the University of Pennsylvania.
Singer, D.J., Bramson, A., Grim, P. et al. Don’t forget forgetting: the social epistemic importance of how we forget. Synthese (2019). https://doi.org/10.1007/s11229-019-02409-0
- Agent-based modelling
- Group deliberation
- Social epistemology