The Cognitive Basis of the Conditional Probability Solution to the Value Problem for Reliabilism

The value problem for knowledge is the problem of explaining why knowledge is more valuable than mere true belief. The problem arises for reliabilism in particular, i.e., the externalist view that knowledge amounts to reliably acquired true belief. Goldman and Olsson argue that knowledge, in this sense, is more valuable than mere true belief due to the higher likelihood of future true beliefs (produced by the same reliable process) in the case of knowledge. They maintain that their solution works given four empirical assumptions, which they claim hold “normally.” However, they do not show that their assumptions are externalistically acceptable; nor do they provide detailed evidence for their normality claim. We address these remaining gaps in Goldman and Olsson’s solution in a constructive spirit. In particular, we suggest an externalist interpretation of the assumptions such that they essentially spell out what it means for a broad range of organisms capable of belief-like representations to be epistemically adapted to their environment. Our investigation also sheds light on the circumstances in which the assumptions fail to hold and knowledge therefore fails to have extra value in Goldman and Olsson’s sense. The upshot is a deeper understanding of their solution as a contribution to naturalized epistemology and a strengthened case for its empirical validity.


Introduction
Knowledge is traditionally construed as true belief that has a special quality. Yet, epistemologists have been unable to converge on the precise nature of this quality. Internalists think of it as something "internal" in the sense of being accessible to the knower herself, such as her being rationally justified in her belief. Externalists, by contrast, conceive of it in a way that does not require such internal access but rather can be completely external to the knower herself, such as her belief having been produced by a reliable process, i.e., one that tends to produce beliefs that are true, whether the knower is aware of this fact or not. The latter view is called reliabilism, and we will return to it shortly. 1 Now, given an account of knowledge in the traditional sense, we may ask the further question whether true belief with this extra "something," whatever its more precise nature, is more valuable than true belief itself. If it is not, why should we care about knowledge over and above true belief? If it is, why is it so? Starting with Plato, this question has been debated over the centuries, with varying intensity, culminating in a very lively debate in the last two decades, which includes influential work by, for example, Kvanvig (2003), Sosa (2003), and Zagzebski (2003). Yet, it has been surprisingly difficult to pinpoint exactly what knowledge adds if one already has a belief that is true. If we know that player A will win over player B, we can place our bet and collect the winnings. But if we have a mere belief that player A will win, place our bet, and the belief turns out, by a happy coincidence, to be true, the practical result will be the same. So at least from a practical perspective, knowledge does not seem to have an extra value over mere true belief.
The problem of accounting for the extra value knowledge has over mere true belief has been called the primary value problem, to be distinguished from the secondary value problem (Pritchard, 2007). The latter pertains to why knowledge is more valuable than any proper subset of its parts, which include, but are not restricted to, mere true belief. For instance, why is knowledge, on a standard internalist construal, better than mere justified belief? In this article, we will focus on the primary value problem as it presents itself for reliabilism, according to which, again, the special quality that knowledge has in relation to true belief consists in the latter having been produced by a reliable belief-forming process. 2 Henceforth, "knowledge" means "reliabilist knowledge," and "value problem" means "primary value problem," unless otherwise indicated. In connection with reliabilism, the value problem is sometimes referred to as the swamping problem, since the value of true belief seems to "swamp" the value of the belief having been reliably formed. In other words, once the true belief is in place, the further fact that the belief was reliably formed does not seem to add value. For instance, it certainly does not make the already true belief "any more true." Various reliabilist responses to the value problem have been proposed in the literature. Our focus will be on one of the most discussed ones, namely, the conditional probability (CP) solution offered by Goldman and Olsson (2009). According to the CP solution, knowledge is, on a first approximation, more valuable due to the higher likelihood of future true belief produced by the same reliable method. 3

If your true belief was reliably produced, you will be in a better position to obtain more true beliefs (and knowledge) in the future than you would if your belief was unreliably produced. Goldman and Olsson argue that their solution works given that four empirical assumptions hold. They claim, moreover, that the assumptions hold "normally" (ibid., p. 30). However, Goldman and Olsson do not show that their assumptions are externalistically acceptable in the sense of not presupposing conscious operations on the part of the knower; nor do they provide detailed evidence for their claim that the four assumptions hold normally, given that an externalist interpretation can be found. In this paper, we address these remaining gaps in Goldman and Olsson's solution in a constructive spirit. The upshot, we believe, is a deeper understanding of the CP solution as a contribution to naturalized epistemology and a strengthened case for its empirical validity.
In Sect. 2, we introduce the CP solution, including the assumptions upon which it rests. In Sect. 3, we suggest an externalist interpretation of the assumptions. In Sects. 4, 5, 6, and 7, we examine the support for the assumptions, thus construed, drawing on an extensive body of established work in cognitive science. Section 8 concludes the paper.

The Conditional Probability Solution
Goldman and Olsson's CP solution essentially states that knowledge is a value-adding indicator of future true beliefs: having knowledge raises the probability of obtaining more true beliefs in the future due to the same reliable process than does having mere true belief. As Goldman and Olsson (2009, p. 28) put it, "under reliabilism, the probability of having more true belief (of a similar kind) in the future is greater conditional on S's knowing that p than conditional on S's merely truly believing that p." 4 As part of their proposal, Goldman and Olsson state four empirical assumptions: 5

I. Non-uniqueness: once you encounter a problem of a certain type, you are likely to face other problems of the same type in the future.
II. Cross-temporal access: a method that was used once is often available when similar problems arise in the future.
III. Learning: a method that was unproblematically employed once will tend to be employed again on similar problems in the future.
IV. Generality: a method that is (un)reliable in one situation is likely to be (un)reliable in other similar situations in the future.

4 There are various hints in the same direction in earlier literature. For instance, David Armstrong (1973, p. 173, italics in original) wrote: "There is a sense in which knowledge is a pragmatic concept. Why are we interested in the distinction between knowledge and mere true belief? Because the man who has mere true belief is unreliable. He was right this time, but if the same sort of situation crops up again he is likely to get it wrong."
5 See Goldman and Olsson (2009, p. 29). Here, we have adopted the compact formulation in Olsson (2011a, 2011b).

Goldman and Olsson argue that the fact that these assumptions normally hold is what makes the central probabilistic claim of the CP solution true.
Hence, given that the empirical conditions are satisfied, the probability of having future true beliefs is higher conditional on reliabilist knowledge than it is conditional on mere true belief. 6 Goldman and Olsson are careful to point out that their solution depends on the four empirical assumptions being satisfied. In particular cases when they do not hold, "knowledge fails to have an extra value in the present sense" (Goldman & Olsson, 2009, p. 30). However, Goldman and Olsson do not further support their contention that the assumptions in question are normally satisfied. While the assumptions have a common-sensical quality to them, it is ultimately an empirical question whether they hold or not. Yet, before we investigate the truth of the assumptions, we will consider the possibility of giving them an externalist-friendly interpretation.
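The probabilistic claim at the heart of the CP solution can be illustrated with a toy Monte Carlo simulation. Everything below is our own illustrative construction, not Goldman and Olsson's model: the accuracy figures (0.9 for a reliable process, 0.5 for lucky guessing), the agent counts, and the function names are arbitrary stand-ins chosen only to make the comparison vivid.

```python
import random

def simulate(knower: bool, n_future: int = 10, n_agents: int = 1000,
             reliable_acc: float = 0.9, lucky_acc: float = 0.5,
             seed: int = 0) -> float:
    """Fraction of future beliefs that turn out true, averaged over agents.

    knower=True: the agent's original true belief came from a reliable
    process which, per cross-temporal access, learning, and generality,
    is reused on the recurring problems posited by non-uniqueness.
    knower=False: the original belief was true by luck, and future
    beliefs are formed by an unreliable (chance-level) process.
    """
    rng = random.Random(seed)
    acc = reliable_acc if knower else lucky_acc
    hits = sum(rng.random() < acc
               for _ in range(n_agents) for _ in range(n_future))
    return hits / (n_agents * n_future)

p_true_given_knowledge = simulate(knower=True)
p_true_given_mere_true_belief = simulate(knower=False)
print(p_true_given_knowledge, p_true_given_mere_true_belief)
```

Because the four assumptions are built in (problems recur, the same process is reused, and its reliability carries over), the knowledge condition yields a markedly higher fraction of future true beliefs; weaken any one assumption, for instance by drawing a fresh random accuracy for each new problem, and the gap closes.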

An Externalist Interpretation of Goldman and Olsson's Assumptions
For an externalist, knowledge does not require higher-order conscious states on the part of the knower. To the extent that organisms can have belief-like representations, they can also have knowledge. We assume that an externalist would like less sophisticated organisms not only to be in a position to know but also to benefit from the distinctive value of knowing (in relation to merely believing truly). This assumption, however, requires that the four assumptions underlying the CP solution can be satisfied for such organisms. Thus, we need to get clearer on the precise meaning of the Goldman-Olsson assumptions and, crucially, the extent to which they can be given an externalistically acceptable interpretation. A useful starting point is criticism raised by Christoph Jäger (2011a) to the effect that the CP solution "implicitly invokes higher-level epistemic conditions that run against the spirit of externalism" (p. 201). Jäger argues that the learning assumption as discussed by Goldman and Olsson themselves invokes internalist concepts. For example, he quotes Goldman and Olsson saying that "if you have used a given method before and the result has been unobjectionable, you are likely to use it again on a similar occasion, if it is available. Having invoked the navigation system once without apparent problems, you have reason to believe that it should work again. Hence, you decide to rely on it also at the second crossroads" (Goldman & Olsson, 2009, p. 29). Jäger's point is that Goldman and Olsson themselves describe their condition as involving a "decision" to rely on the reliable mechanism in question and as having "reasons to believe" that it should work again, etc. Jäger is quite right in observing that these original conditions were couched in internalistic language,

as is also acknowledged by Olsson in his response to Jäger (co-written with Martin Jönsson) (Olsson & Jönsson, 2011, p. 218).
Jäger proceeds to observe that the crucial point is "whether Goldman and Olsson could not abandon their conditions and replace them by certain externalist-friendly 'empirical regularities'" (Jäger, 2011a, p. 209). Focusing on the learning assumption, he considers the possibility that "people are simply hardwired to reuse epistemic processes that have proven to be epistemically 'unobjectionable'" (ibid.). However, Jäger dismisses this possibility on the ground that "a mechanism M of the desired kind would have to be able to distinguish reliable epistemic processes from unreliable ones" (ibid.), which in turn would require, he believes, relying "on a favourable track record of employment of that process or method" (2011a, p. 210). Such track-keeping would in turn require, he thinks, "some kind of unconscious inductive mechanism that would generate a tendency or disposition in the subject to reuse epistemic processes which over a representative series of trials it records are successful, while at the same time blocking tendencies to reuse processes with an unfavourable track record" (ibid.). Jäger judges, for a number of reasons, that "the prospects for developing a convincing proposal along such lines do not appear rosy" (ibid.).
In an attempt to undermine the second part of Jäger's critique, Olsson and Jönsson (2011) argue that the learning assumption, despite Jäger's claim to the contrary, can be reformulated in a way that avoids any reference to track records. In their words (p. 220), "[i]n order to satisfy learning, it is sufficient that an organism is hard-wired in accordance with the law of least resistance, the most economical way of reacting to stimuli being to continue responding in the old way rather than trying something new." They add that "[s]tubbornly repeating the same pattern of actions is not always the best approach to life's challenges, which is why it seems reasonable to assume also that many creatures are equipped with sensors and other mechanisms needed for signalling to the brain when the organism's desires have been frustrated" (ibid.). In other words, an unproblematic or successful employment of a method could be understood as one that was not followed by negative "surprise." 7 As Ramstead (2021, p. 546) explains, "[t]he concept of surprise does not refer to the psychological phenomenon of being surprised. It is an information-theoretic notion that measures how uncharacteristic or unexpected a particular sensory state is, where sensory states can be caused by external worldly (and bodily) states." We will return to this point in Sect. 5.2. Jäger (2011a, p. 211) notes that in another paper Olsson explicitly writes that the realization of the value of knowledge "requires the satisfaction of a modest internalist condition" (Olsson, 2007, p. 352). It might therefore be thought that both from a systematic point of view and as indicated by Goldman and Olsson themselves an internalist construal especially of the learning condition suggests itself. 
However, on closer examination, Olsson makes this remark in the context of a different, though compatible and arguably complementary, solution to the value problem, to the effect that a reliably acquired true belief is objectively more likely to stay put than is an unreliably acquired true belief. The argument for this "stability solution" requires, Olsson argues, that the cognizer keep an internal record of the sources of its beliefs (pp. 348-349). In the case of the CP solution, we will argue that an internalist construal of learning, or indeed any of the other Goldman-Olsson assumptions, is not needed. This is important because it means that the assumptions secure an extra value of knowledge while being at the same time externalistically acceptable. It may still be true that, as Olsson writes, "the full realization of its value requires the satisfaction of a modest internalist condition" (Olsson, 2007, p. 352, our emphasis).
A further point of clarification concerns the use of the term "method" itself. Methods, as we construe them, are not just any behavioral sequences prompted by some stimulus but, primarily, instances of epistemic behavior, i.e., explorative behavior, conditioned on the animal perceiving itself to be safe, that terminates in a belief-like representation. For instance, an ape may react to the sensation of hunger by looking for a banana in the nearest tree. The method terminates when a banana is discovered or no banana can be found. Behavior can include "mental behavior" such as simulating outcomes, or transformations that induce insight. As we use the term, observation unpreceded by any particular behavior also counts as a method.
The non-uniqueness assumption, too, contains terms whose externalist status is in need of some clarification. The assumption states, to repeat, that once you encounter a problem of a certain type, you are likely to face other problems of the same type in the future. What do we mean by a "problem"? A few examples will be sufficient for our purposes. In the animal kingdom, typical "problems" are finding food to satisfy an internal need for nutrition or dealing with some sort of environmental challenge, such as escaping the deadly claws of a predator. Surely, neither requires any conscious thought on the part of the organism. Low-level detection of some feature of the situation can simply trigger certain behaviors that put the organism in a position of greater chance of survival.
Finally, we need to be more explicit about what we mean by two problems or situations being "similar," a term that occurs in all four assumptions. 9 Obviously, we wish to avoid an interpretation of "similar" which requires, for two things to be similar, that the organism has consciously contemplated their similarity. Is there such an externalistic construal of "similar"? We believe there is. An organism can be more or less adapted to its environment in the sense of being more or less able to generalize from past successes to future successful behavior. Such generalization involves tracking similarities in the environment and using past successes as a guide to future successful behavior. This ability need not be based on conscious reasoning but can occur at lower levels of the brain. We leave a fuller discussion of this point for Sect. 7.

It is worth noting that, as Bruineberg et al. (2018) observe, an organism does not necessarily care about truth per se. Rather, it cares about survival and maintaining homeostasis. What Bruineberg et al. refer to as the crooked scientist hypothesis states that biological brains do not primarily operate by posing and testing hypotheses like good scientists to arrive at an objective truth about the world. This is so because "a perfect hypothesis that precisely represents the state of the environment is worthless if it does not specify what action minimizes surprisal, or improves grip" (ibid., p. 2430). Here, the term "grip" alludes to an organism's access to its environment's affordances through skillful behavior. 10 The implications of this may seem counterintuitive at first glance, since an organism that does not care about truth might appear ill-equipped to thrive. In practice, though, what minimizes surprisal and improves grip normally overlaps, at least approximately, with an environment's ground truth. 11 On this note, the empirical content of the generality assumption is effectively that a method that is successful, i.e., does not give rise to a surprising result, in one situation is likely to be successful in the future as well.
We will now proceed to investigate the empirical support for the four Goldman-Olsson assumptions, externalistically construed along the lines indicated above. This will require covering a lot of empirical ground in cognitive science. However, the main points are simple and straightforward and can be summarized in a few sentences: Organisms (capable of belief-like representations) are adapted to their environment or ecological niche, or they would not be alive. This entails that problems that arise for the organism typically recur and, if the previous response did not involve any surprise, future problems are handled in the same way as before; and when this happens, the response will continue to be reliable, if it was reliable to begin with. The need to find nutrition, for instance, does not arise just once in an organism's lifetime, but repeatedly. If, on a previous occasion, the solution (in the sense of a triggered behavioral sequence) was to search for food in the nearby bushes, and this solution did not give rise to surprise, then the same solution is likely to be available and employed the next time around. If it was reliable before (food was actually found), it is likely to continue to be reliable upon reemployment. In other words, the Goldman-Olsson assumptions spell out part of what it means for an organism to be epistemically well-adapted to its environment. Since organisms normally are thus adapted, the assumptions hold normally, individually as well as collectively.

Non-uniqueness (I.)
The non-uniqueness assumption states that once you encounter a problem of a certain type, you are likely to face other problems of the same type in the future. The case for this assumption begins with the observation that the world is not pure randomness but is spatiotemporally structured, such that physical processes seem to occur according to certain detectable patterns (see, e.g., Dennett, 1991). By patterns, we mean any non-random physical process occurring in space-time. Organisms such as ourselves are, from this highly abstract perspective, examples of spatiotemporal patterns.
Otherwise put, organisms find themselves in the presence of the world's various physical fields, which can convey information pertinent to survival. Such physical fields include the various electromagnetic fields, such as photon fields, heat fields, or the magnetic field of the earth. Other fields are concentrations of chemical compounds, and matter fields making up solid objects. Given such fields, organisms possess sensory apparatuses suitable for converting the gradients relevant to their environmental niche into a form useful for internal processing and hence for associating environmental conditions with behavior.
Such sensory-behavior association becomes more effective if there exists some means by which particular patterns in the environment can be adapted to and stored within the organism. Thus, if there are some means for the organism to learn, it can more effectively handle environmental contingencies and thus increase its probability of survival. A more sophisticated version of such a learning system could also be capable of matching situation-obstacle pairings to behavioral sequences that successfully negotiated the obstacle. This would free the organism from repeated trial and error and excessive energy expenditure. We return to learning in Sect. 6.
In other words, these gradients, or changes, exist both across space and in time, and neural systems have evolved to map gradients of their particular environments and, on a shorter time span, adapt to them. The mechanism whereby this is accomplished can be simplified as excitation, inhibition, and change in neurotransmitter levels in presynaptic boutons. For organisms like mammals and fish, the ability to sense and process information from the internal environment evolved as an adaptation to manage the higher complexity of a multi-cellular and multi-organ system.
The mechanisms of excitation and inhibition can be utilized in a top-down fashion to modulate signals from the sensory apparatus. By increasing the amplitude of one signal and decreasing that of another, the difference, or contrast, between the two is increased. This can in turn be used in the process of identifying sub-categories and increasing the dynamic range 12 of the neural networks representing some stimuli, which again allows associating behaviors more appropriately and with higher precision. By modifying the rate of change of a sensory neuron, temporal compression can be achieved. This can be used to increase the sharpness and distinctiveness of a stimulus, which again can be used to increase dynamic range. With the evolution of structures supporting a working memory, such temporal compression became more feasible.

The spatiotemporal context of an organism can be viewed as displaying a certain scale. This means that a certain dynamic range must be covered by that organism's information-processing apparatus. Hence, cognitive processes are also adapted to negotiate obstacles in this context, but will break down outside it. If an obstacle occurs which is much slower or quicker than that of the niche, or exists on a smaller or larger spatial scale, it will become impossible to negotiate. For example, an oak tree cannot easily negotiate a bulldozer operating at human speed, but it might be able to do so if the bulldozer moved at a speed closer to that of the oak, perhaps a few centimeters a year (see, e.g., Chalmers, 2010).
From an empirical point of view, these are not controversial statements. Rather, to exist in patterned space-time is what it means to be an organism. Organisms are thus exposed to similar patterns during their lifetimes, such that individual problems they encounter share underlying patterns with other problems likely to be encountered.
Another observation in support of the non-uniqueness assumption, which offers a somewhat less abstract viewpoint, is the fact that organisms have ecological niches or econiches (see Sect. 7). These are sets of behaviors and functions in ecosystems that sets of organisms tend to follow (Laland et al., 2015; Stotz, 2017). An ecological niche could range from anything such as certain adapted bacteria that convert abundant carbon dioxide and hydrogen emitted from vents on the ocean floor into methane (Sleep & Bird, 2007) to the vast variety of niches found in human societies, where resource gathering includes everything from hunting and gathering to bureaucratic work rewarded in paychecks that can be converted into products and services.
The validity of invoking the concept of an ecological niche comes from the fact that it is possible to predict future organism behavior by learning what kinds of behaviors such organisms typically perform. Organisms, since they exist, must have an ability to maintain themselves over some time, meaning that they are specialized in solving a tiny subset of possible problems to resist death, specializations that have emerged throughout evolutionary history. In conclusion, due to the patterned nature of the world, it is likely that organisms, once they encounter a certain problem, will face problems of the same type over time, as the non-uniqueness assumption states.

Cross-Temporal Access (II.)
The assumption that a method that was used once is likely to be available again when a problem reoccurs has both an evolutionary and a neurological predictive processing dimension. Below, we discuss both and identify the constraints they impose on the validity of the assumption.

Evolutionary Dimension
The process of evolution shapes organisms in such a way that their bodies afford harvesting resources sufficient to maintain homeostasis with a high probability. As mentioned above, evolutionary adaptation typically creates species that mirror the regularities in the ecological niche adequately for individuals of that species to approach and consume those resources. That is, organisms internally model the characteristic patterns of their niche such that they may accurately anticipate what surrounds them. Different species may predict with varying resolution, but natural selection will balance energy requirements with resource availability and niche complexity, such that organisms tend to have precisely the required abilities of foresight. Excessive as well as insufficient predictive abilities will tend to be penalized over evolutionary time: the former because of their excessive energy demands, the latter because of inadequate resource-gathering abilities. Having said this, niches are not equally punishing, with selection processes, for instance, being less harsh in temperate and energy-abundant climates than in arctic ones.
Depending on the variability of niches, or the frequency of variation, organisms may require different levels of flexibility. If variation is slower than the lifetime of an organism, less flexibility is required, and behavioral patterns can be stored genetically. If variation happens within the organism's lifetime, the organism requires more flexible ways of dealing with change. Any organism that endures more than a single season, be it spring, summer, autumn, or winter, must be able to adapt its behavior by learning. This implies retaining successful behavior that rewards the organism with needed resources. Another implication is that the organism is internally stable, such that stored behavioral patterns can be accessed and made use of across time.
Given that an organism is capable of learning and updating its model of the world, and that this updating happens in particular by means of reinforcement, so that successful behavior becomes more prominent than unsuccessful behavior, perception must, for this memory to be utilized, trigger the updated behavioral sequence when a similar-enough situation occurs again. However, there is still the problem of how to adapt the stored context and behavioral patterns to new situations, since no organism will encounter identical situations twice. According to Passingham and Wise (2012), one way of achieving this is by storing and employing goal-sites instead of raw percepts and behavioral sequences. This means that categories of targets for approach may be stored, along with categories of targets for grabbing, manipulation sequences, and consummation sequences. In the context of obstacle negotiation, or problem solving, this could be interpreted as, for example, tool-objects being targets for approach. Importantly, though, this way of using goal-sites allows for flexibility in approach, since how exactly the organism gets to the target can be improvised. Similarly for grabbing and manipulation, there is flexibility in exactly how it is done, as long as a target is reached and the reward ends up consumed.
The notion that behavioral patterns that have led to reward in the past are stored and can be retrieved is supported on a basic level by classical experiments involving conditioning, as carried out by Pavlov (1927). Somewhat more recent support comes from experiments by Rescorla (1988) on stimulus-response behavior. Work by McClelland (1995) supports the notion that organisms can use pattern completion in the process of recognizing objects and situations that are similar to previously encountered ones. As shown by, for example, Schacter et al. (2011), memory is not perfectly accurate, and this approximate quality can be thought of as a feature, rather than an imperfection, of the memory process. The reason is that it allows matching on similarity rather than identity. The precise way that memories are stored is not yet fully understood, but it appears that sensory traces from all modalities, including somatosensory and proprioceptive traces, are all associated (Johnson & Chalfonte, 1994; Metcalfe, 1990; Moscovitch, 1992; Schacter, 1989; Schacter et al., 1998). This associative nature of memory, further supported by Aminoff et al. (2008), allows memory traces to converge and hence become general (Dewhurst et al., 2011). This is also called "gist-based processing" (Brainerd & Reyna, 2005).
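The conditioning results cited above are standardly modeled by the Rescorla-Wagner learning rule, on which the change in associative strength is proportional to the discrepancy between the obtained outcome and the outcome predicted by the current association. The following minimal sketch is our own illustration, with arbitrary illustrative parameter values (learning rate 0.3, maximal associative strength 1.0):

```python
def rescorla_wagner(trials: int, alpha: float = 0.3,
                    beta: float = 1.0, lam: float = 1.0) -> list:
    """Associative strength V of a cue over repeated reinforced trials.

    Update rule: Delta V = alpha * beta * (lam - V). The update is
    driven by the discrepancy between the obtained outcome (lam) and
    the predicted outcome (V), so learning is fast while the outcome
    is surprising and slows as it becomes expected.
    """
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * beta * (lam - v)
        history.append(v)
    return history

vs = rescorla_wagner(20)
```

The strength rises steeply at first and then plateaus near the maximum, which is the same error-driven logic that reappears below in the guise of surprise minimization.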
"Priming" is a term in psychology which is used when certain patterns are prepotentiated, increasing the likelihood of those patterns being retrieved. "Transfer" is a related term, having to do with learned problem-solving behavior for tackling problems that are similar, but not identical, to ones that have been encountered in the past. Kokinov (1990) showed that priming increases the chance that transfer behavior will occur in human subjects. For animals, Vale et al. (2016) showed that functional overlap in a task can be used by chimpanzees to recall and apply previously learned behavior. Clayton et al. (2001) showed similar results for scrub jays and Osvath and Osvath (2008) for great apes.
A memory system based on similarity and associations will have limitations. For example, context associations can yield inaccurate retrieval, since the context can match even if the sought-after pattern does not (Gallo, 2006). There may also be problems with pattern separation. This happens when two traces should be stored as separate traces but have sufficient overlap to be stored as the same trace (McClelland, 1995). The so-called schema-driven recall errors similarly imply that a typical pattern is recalled, rather than a distinct one (Nystrom & McClelland, 1992). Finally, as Bobrowicz and colleagues (2019) have shown in the context of great apes and cockatoos, a conflict may arise when similar cues point to different behavior. This implies that the organism must be able to resolve cue ambiguity to take advantage of learned behavior. On a more fundamental neurological level, we obtain support for the cross-temporal access assumption by relying on the most recent computational neuroscience theories. These theories will also turn out to be relevant for assumptions (III.) and (IV.).

Predictive Processing Dimension
In recent years, a new paradigm in the theoretical and cognitive neurosciences has been proposed that is sometimes referred to as the "predictive processing (PP) theory of the brain," standing in contrast to the traditional "representational theory of the brain." Unlike the representational view, PP does not assume that the perceptual system passively detects the features of our environment by means of hard-wired feature-detection mechanisms. Instead, it is commonly held under the PP paradigm that the brain is actively engaged in creating its own internal representation of the causal structure of the world, including a representation of all the features relevant for detecting these structures. The neocortex is thought to be composed of a hierarchy of layers that at the lowest level receives the raw data provided by the sensory organs. At higher cortical levels, the data is represented in more abstract forms, including neural activation that encodes very high-level features (e.g., the visual detection of a square).
PP is an abstract theory about the general functioning of the brain, suggesting that the perceptual system is constantly engaged in trying to predict the present state of the world (Clark, 2013, 2015). The PP theory of the brain suggests that higher levels in a cortical hierarchy of the brain predict the activity of the lower levels. When a mismatch in prediction is identified, there is a high level of surprise, and typically synaptic strengths are modulated (this is referred to as synaptic plasticity), resulting in an update of the cortical model (Clark, 2016; Friston et al., 2017; Hohwy, 2013). Hence, PP models describe ways to minimize the so-called informational free energy of the system, which then minimizes long-term average prediction error.
A computational model of the predictive processing scheme applied to the visual cortex was first presented by Rao and Ballard (1999), and more sophisticated models have been developed ever since, taking more neurophysiological features into account. Any PP model contains the following core features (adapted from Wiese & Metzinger, 2017):

1. Top-down processing: lower-level cortical regions, closer to the sensory input, propagate processed information to higher cortical regions, but information also flows in the opposite direction, and continuously so.
2. Statistical estimation: from the data sampled over the sensory system, the brain builds a model that allows it to statistically estimate the probability of the current state of the world.
3. Hierarchical processing: the estimates of the brain are hierarchically organized. Typically, regions in higher cortical areas will encode more complex and larger-scale feature estimations. Such feature detectors must not, however, be thought of as existing a priori; they are learned over time.
4. Prediction: higher cortical regions predict the activity of the lower regions. At the lowest level in the cortical hierarchy, the sensory input itself is predicted, whereas in higher regions neurons predict complex features.
5. Prediction error minimization and control: neurons predict the activity of neurons in lower cortical regions. If there is a mismatch in the prediction, the model is updated, typically through modulation of the synaptic strengths.
6. Bayesian inference: the updating described above takes place in accordance with the principles of Bayesian inference, at least approximately (see also Sect. 7 below).
Typically, the top-down information flow is realized by a generative process that models the probability distribution generating the input signal, whereas the bottom-up information flow corresponds to a discriminative model that tries to correctly classify the input data. The generative process is used at each neural network level to generate a prediction of the layer's activity.
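As a toy illustration of prediction-error minimization, consider the following sketch. It is a deliberately minimal caricature, not the Rao and Ballard model: the single-layer setup, the variable names, and the learning rates are all our own simplifying assumptions. A higher level holds an estimate, predicts the lower-level signal, and both the estimate and a synaptic weight are nudged so as to shrink the prediction error.

```python
def predictive_coding_step(x, r, w, lr_r=0.1, lr_w=0.05):
    """One update of a minimal one-layer predictive-coding loop."""
    prediction = w * r            # top-down prediction of the input
    error = x - prediction        # bottom-up prediction error ("surprise")
    r_new = r + lr_r * w * error  # fast revision of the current estimate
    w_new = w + lr_w * r * error  # slower synaptic update (plasticity)
    return r_new, w_new, error

x = 2.0          # a constant sensory signal (illustrative value)
r, w = 0.0, 1.0  # initial estimate and synaptic weight
for _ in range(200):
    r, w, err = predictive_coding_step(x, r, w)

# After settling, the top-down prediction matches the input, so the
# prediction error, and hence "surprise," is close to zero.
assert abs(w * r - x) < 1e-3
```

The point of the sketch is merely that once the internal model predicts the input well, nothing changes anymore: updates are driven entirely by residual error, which mirrors the claim above that the cortical model is revised only upon surprise.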
In the representational view of the brain due to, for example, Fodor (1975, 1981), sense data are represented by neural processes in a way that affords symbolic manipulation, akin to that of a Turing machine. However, if the human brain were to operate in accordance with the representational view, the number of possible causal structures and problems our brain could make sense of would be fairly limited. This is because each possible feature would have to be hard-wired into our brains over the course of evolution. As we saw, PP tells a different story, namely, that these features come about by means of the process of discrimination, as mentioned above, whereby a population of neurons successively refines the differences in the sense data to which they are sensitive (see, e.g., Schwaninger, 2019). At the primary level, these features will typically correspond to simple gradients of light and dark on the retina, but at higher levels, such simple building blocks will be composed to represent meaningful larger structures like trees and animals. Therefore, an organism equipped with the possibility of creating its own features for learning about the world is most likely to have an appropriate cortical process available for the specific problem that arises, and so the cross-temporal access condition, which states that a method that was used once is often available when similar problems arise in the future, will in most cases be satisfied.
There is, however, a complication to the story. In a study on the behavior of rodents, Garcia and Koelling (1966) demonstrated that not everything can be learned, and thus there are cases where an organism will never be able to have an adequate cognitive process for a certain problem. In a variety of experiments, rats in cages were given electric shocks or exposed to loud noise or a bad odor whenever they were eating a specific food in the cage. What one would ordinarily expect is that the rats would eventually be conditioned to learn the correlation between stimulus and food and come to avoid one food source in favor of another. In this experiment, however, the associative learning process did not take place. The rats could not be conditioned on the presence of food while being exposed to an unpleasant stimulus.
These results are typically interpreted as showing that a species, over the course of evolution, becomes equipped with prior knowledge about its environment in a way that biases the learning mechanism to the point that the organism may be unable to learn a method for a given problem. This observation is reflected in the PP scheme by a prior probability distribution that depends on hyper-parameters that cannot be altered by the individual organism itself but only through genetic variations in the species' evolutionary development. Hyper-parameters in the context of biological nervous systems are distinct from model parameters in that they do not concern what is learned but rather affect the process of learning itself. Maximum learning rate and neurotransmitter synthesis capacity are examples of hyper-parameters. Williamson (2000), who like Goldman and Olsson explicates the value of knowledge in terms of conditional probability, emphasizes that knowledge has its extra value if the cognitive faculties of the subject are in good order (p. 79). While this is certainly correct in general, the rat example shows that from an ecological perspective, an organism and its niche are coupled such that if the niche changes too much and too quickly, the organism cannot adequately adapt and will go extinct; similarly, if the organism changes significantly over too short a period of time, it is possible that a method for solving a problem is no longer accessible. The point can be generalized even further by referring to results in learning theory, according to which any learning algorithm will fail to learn certain data. The no-free-lunch theorem dictates that there cannot be a universal learning algorithm (Wolpert & Macready, 1995). Therefore, given some highly unusual problem in the environment, even an organism whose cognitive faculties are in good order will fail to access an adequate method.
We conclude that the cross-temporal access condition depends on certain normality constraints on both the organism and the environment, and that it therefore indeed holds normally.

Learning (III.)
As we have seen, there are certain theoretical limits to the extent to which methods are learnable and thus accessible to the organism. If, however, a solution to a problem in terms of sensory data in a given context has once been learned successfully, PP guarantees that a healthy organism will again rely on this solution at a later point in time, if no better solution has been found in the meantime. This guarantee stems from the fact that the cognitive system updates or modifies its internal model only when absolutely necessary, making it robust to noise. The brain is constantly trying to build an internal representation of the world's causal structure to predict future events and act successfully. This internal model, however, is maintained when the brain is confronted with sensory data in a context that is familiar. Only in the presence of an unfamiliar event does the brain experience a moment of surprise that causes an update of the model, such as when a routinely used solution fails unexpectedly. On the behavioral level, this mechanism can be observed as associative learning and conditioning. Typical examples are Pavlov's aforementioned experiments, conditioning dogs to salivate by associating food with sound stimuli. Conditioning of behavior typically also implies that the behavior in question is prioritized in the process of behavioral selection. A winner-takes-all mechanism in the brain ensures that a single behavior is chosen for the organism to engage in, and priority is typically assigned on the basis of the reward associated with each behavior.
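The interplay between associative strengthening and winner-takes-all selection can be sketched as follows. The update rule is a Rescorla-Wagner-style value update chosen by us for illustration; the behaviors, reward values, and learning rate are hypothetical and not drawn from the experiments discussed in the text.

```python
def rw_update(v, reward, alpha=0.3):
    """Rescorla-Wagner-style update: the associative strength V moves
    a fraction alpha of the way toward the obtained reward."""
    return v + alpha * (reward - v)

# Associative strengths of two candidate behaviors, initially neutral.
strengths = {"peck": 0.0, "flap": 0.0}

# Pecking is repeatedly rewarded; flapping never is.
for _ in range(10):
    strengths["peck"] = rw_update(strengths["peck"], reward=1.0)
    strengths["flap"] = rw_update(strengths["flap"], reward=0.0)

# Winner-takes-all: exactly one behavior is engaged, namely the one
# most strongly associated with reward.
selected = max(strengths, key=strengths.get)
assert selected == "peck"
```

The sketch shows why a previously rewarded method tends to be redeployed: as long as its associative strength remains highest, the selection mechanism keeps picking it.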
Learning, in the sense of applying a previously successful solution ("successful" in the sense of not occasioning surprise) to a similar problem on a different occasion, does not imply that the organism has correctly grasped the causal structure of the world in a deeper sense. The psychologist Skinner (1948) famously carried out an experiment to study the formation of "superstition" in pigeons. He placed hungry pigeons in a cage that automatically served them food at constant time intervals, independent of their actual behavior. Skinner observed that whatever action the pigeons happened to be performing when they received their first portion of food was memorized and later repeated over and over again as a form of "ritual." Consequently, the new habit was further reinforced when the automated mechanism of the cage presented the birds with new food. In this case, wrong causal structures were inferred, and a bird that was by coincidence flapping its wings at the time when the food was served would continue to flap its wings in the "hope" that it could cause the food to appear again.
Thus, PP provides a theory of the brain that is adaptive to new, previously unencountered situations in the world, while at the same time ensuring a high likelihood of repeating a response that has previously been successful. On the one hand, the brain is highly adaptive and creates its own features that allow for an appropriate prediction of the incoming data (validating non-uniqueness). On the other hand, the brain routinely generates the same response to a certain situation to which it is exposed. Relating this to obstacle negotiation and problem solving, once a situation is recognized as being similar to one for which successful negotiation behavior has been learned, that behavior will be preferentially selected since it is most strongly associated with reward (validating learning). Hence, a method that was unproblematically employed once will tend to be employed again on similar problems in the future, which is exactly what the learning assumption entails.

Generality (IV.)
To provide empirical support for assumption (IV.), we need to show that a method M that is successful in one situation is more likely than not to be successful in similar situations in the future. This entails providing evidence that, in the population of similar future situations, M succeeds more often than not.
We will first consider the problem from the perspective of Bayesian probability theory and argue that it can help elucidate how method reliability is learned over time. Methods are associated with situations that display similarity. Since environments differ in how variable they are, organisms require different degrees of flexibility and ability to generalize in order to judge which situations are sufficiently similar for methods to stay reliable. Organisms use internal models of their environment to determine similarity, and so, lastly in this section, we will consider model selection and the balancing of informativeness and robustness to best fit the challenges of particular environments.
To demonstrate our claim about reliability, we begin by turning to Bayes' theorem (Joyce, 2019). The theorem yields the probability of an event given new information. We choose to introduce the theorem by way of an example. If the probability of finding the way to Larissa by any means at all is 0.6 (P(Larissa)); the probability of using a GPS is 0.8 (P(GPS)); and the probability that the GPS was used if Larissa was found is 0.7 (P(GPS | Larissa)), then the probability of finding Larissa using a GPS can be calculated as: P(Larissa | GPS) = P(GPS | Larissa) P(Larissa) / P(GPS) = 0.7 * 0.6 / 0.8 = 0.525. Crucially, these probabilities can be updated based on experience. If the success in reaching the goal increases, so that P(Larissa) increases to 0.8, say, the probability of succeeding using the GPS also increases, in this case to 0.7. Now consider an agent having two methods for reaching a goal in a given situation but no knowledge of which method is most reliable. Testing both methods gives information about their reliability, based on the success of reaching the goal. According to Bayes' theorem, each time the methods are attempted, the agent learns about those methods' reliability. As we noted, there is evidence that biological brains use Bayesian-like mechanisms for perception and inference (see, e.g., Noppeney, 2021; Rohe et al., 2019; but also Bowers & Davis, 2012) such that, e.g., visual shape discrimination, audiovisual rate determination, and audiovisual spatial discrimination are close to statistically optimal from a Bayesian perspective. It could be objected that it is difficult or impossible to determine how one situation in the past is "similar" to a situation in the present such that a given reliable method can be reapplied. However, from a scientific perspective, this is a misconception given that there are objectively measurable features of the world that can be compared for similarity.
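The Larissa calculation can be checked directly. The probabilities below are the ones given in the example above; only the function name is our own.

```python
def bayes(p_e_given_h, p_h, p_e):
    """Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

# P(GPS | Larissa) = 0.7, P(Larissa) = 0.6, P(GPS) = 0.8
p = bayes(0.7, 0.6, 0.8)
assert abs(p - 0.525) < 1e-9

# Updating on experience: if P(Larissa) rises to 0.8, the probability
# of succeeding using the GPS rises accordingly, here to 0.7.
p_updated = bayes(0.7, 0.8, 0.8)
assert abs(p_updated - 0.7) < 1e-9
```

Each fresh attempt with a method supplies new values for these probabilities, so repeated application of the theorem is precisely how an agent's estimate of a method's reliability is refined over time.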
Particularly relevant for this discussion is the perspective of ecology, according to which the similarity of the situations an organism is likely to encounter is correlated with the stability of the ecological niche during the organism's lifetime. Bruineberg and Rietveld (2014) define an econiche as the unique "landscape of affordances" available to a given organism. Depending on the organism's body and information-processing system, the econiche provides various opportunities for interaction for that organism. It also allows the organism and its environment to change each other mutually, as when beavers change the flow of a river, which in turn may change the availability of dietary plants for the beaver. An example of a very stable niche is the environment around hydrothermal vents, where bacterial organisms can survive for many generations without changing their method for harvesting energy (Sleep & Bird, 2007). This situation stands in contrast to that of a crow living in a human city, in which case the environment is likely to change during the lifetime of the crow (new food sources appearing, buildings being torn down, etc.).
From the above remarks, we may deduce that gradients of similarity exist for different organisms, where some species can develop methods for energy acquisition that will be very reliable since their ecological niche does not change much. The affordances of the environment then allow the body of the organism to specialize in a way that optimizes the amount of energy that can be harvested. On the other hand, some environments are on the surface much more variable in terms of affordances, and methods may not be directly applicable across time. In such cases, the environment may pressure organisms to compensate by developing ways to generalize. One avenue for generalization is relaxing perceptual classifications of what is edible. Put another way, the sensory impressions that trigger feeding behavior become less specific. In a sense, a grub and the leftovers of a hamburger become similar to a crow, and the method of pecking in the earth and tossing away twigs can be applied to trash cans and tossing away pieces of human garbage with equal success.
To illustrate, consider how urban and rural finches differ in problem-solving ability. According to Audet et al. (2016), birds living in urban areas were better at reversal learning and problem-solving than rural birds. Reversal learning is the ability to update a learned rule and inhibit the associated behavior to accommodate a changing reward structure (Lissek et al., 2002). As mentioned, environments vary in their stability, and rural environments are typically more stable than urban ones (Lowry et al., 2013). To thrive in a more frequently changing environment, organisms must display cognitive and behavioral flexibility. They must adapt their food-gathering skills as well as their diets. Another way of putting this is that rural birds are stricter in their judgments of similarity, while urban birds are more relaxed. Being less flexible, rural birds require more sensory overlap between situations than urban birds do before a food-gathering method is reapplied.
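The behavioral contrast can be caricatured with a simple value-update agent facing a reversal. The learning rates, trial counts, and the labeling of the two agents as "urban" and "rural" are our own illustrative stand-ins for the differences reported by Audet et al.; the sketch is not a model of their data.

```python
def train(alpha, schedule):
    """Track the learned value of a single cue across a reward schedule,
    moving a fraction alpha toward each observed reward."""
    v = 0.0
    for reward in schedule:
        v += alpha * (reward - v)
    return v

# Phase 1: the cue reliably predicts reward for 20 trials.
# Phase 2 (reversal): the cue stops predicting reward for 5 trials.
schedule = [1.0] * 20 + [0.0] * 5

flexible_v = train(alpha=0.5, schedule=schedule)   # "urban" learner
rigid_v = train(alpha=0.05, schedule=schedule)     # "rural" learner

# Five trials after the reversal, the flexible learner has largely
# abandoned the outdated rule, while the rigid one still clings to it.
assert flexible_v < 0.1
assert rigid_v > 0.4
```

The design choice here mirrors the text: a high learning rate corresponds to relaxed similarity judgments and quick accommodation of a changed reward structure, while a low rate corresponds to stricter, slower-to-revise associations.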
We will next consider the implications of model selection in the context of organisms, where the model in question is that which an organism has of itself and its environment.
Learning theory tells us that there is always a tradeoff between informativeness and robustness when selecting models (Buhmann, 2010). Therefore, the cognitive system of an organism must ideally strike the right balance between the two in order to maximize usefulness for purposes of survival. A completely flexible system is unable to learn anything, as it lacks the ability to entertain a generalized model of the world. A completely robust system, on the other hand, is unable to cope with variations. Such systems cannot persist over time, as they cannot adapt to changes in the environment. As situational complexity grows, a single behavioral sequence might thus not be sufficient to negotiate an obstacle. In cases like this, it is necessary for the organism to be able to compose simpler behavioral sequences into more complex ones (Cromwell et al., 1998; Nemec et al., 2009). However, we argue along with, for example, Niemelä et al. (2013) that it is the necessities of the environment that to a large degree will determine whether a particular species develops abilities for generalization and flexible behavior. Bobrowicz (2019, p. 25) is clear on this point: [S]everal cognitive processes that support an efficient and flexible use of memory, are sometimes overtly sensitive to irrelevant similarities between otherwise unrelated experiences. In unrelated experiences, perceptual and/or conceptual similarities between two situations cannot inform our behaviour: a solution that we applied in a past situation will not help us resolve the present one. But most of the time, being sensitive to, matching and exploiting such similarities is arguably the most adaptive way of dealing with rapid and unpredictable environments.
An organism would gain further flexibility and broaden its environmental niche if it possessed the ability not only to associate sensory patterns with behavior but also to inhibit learned behavior in response to situational dynamics. Therefore, any living system must build a sufficiently robust representation of the world, so that small variations within the input data do not alter the outcome of the method employed, while still being able to respond with sufficient flexibility to changes that do occur. Together, these considerations imply the generality assumption: a method that is (un)reliable or (un)successful on one occasion is likely to be (un)reliable or (un)successful in similar situations in the future.
In summary, problems tend to recur (by assumption I.), and organisms able to find a solution most likely will make use of that same solution on another occasion (by assumptions II. and III.). Moreover, a method that is (un)reliable in one situation is likely to be (un)reliable in other similar future situations as well (assumption IV.). Taken together, all four assumptions of the CP solution to the value problem have adequate support from empirical science.

Conclusion
In effect, the Goldman-Olsson assumptions spell out part of what it means for an organism capable of having belief-like representations to be epistemically adapted to its environment or ecological niche. They are therefore normally satisfied, not only individually but also collectively. A consequence is that the extra value of knowledge, in the sense of reliably acquired true belief, is to be enjoyed not only by us humans but by organisms in general to the extent that they can be said to have reliably produced belief-like representations about their environment. For such organisms, the likelihood of having more true beliefs in the future is higher conditional on knowledge now than it is conditional on mere true belief now. In a word, for them, knowledge is more valuable than mere true belief. Thus, our investigation shows that the conditional probability solution, in addition to being founded on scientifically respectable assumptions about animal cognition, is also a deeply externalist response to the value problem. This is so because the kind of adaptation it relies on is grounded in low-level operations of the brain that need not be consciously available to the organism.
Summing up, Goldman and Olsson (2009) claim that reliabilist knowledge has an extra value over mere true belief due to the greater likelihood of future true beliefs in the case of knowledge. They note that their account relies on four assumptions of an empirical nature which they, moreover, claim hold "normally." We registered the absence of empirical support for their assumptions as a remaining gap in their theory, together with the open question of how to provide the assumptions with a rendering that is externalistically legitimate. Having suggested externalistic interpretations of all crucial terms, we investigated the empirical basis of the assumptions through a survey of relevant literature in cognitive science. We found that they are indeed very much in line with established work in the field, so that the claim that they are normally satisfied for creatures capable of belief-like representations is highly plausible. Our investigation also added perspective to the debate by identifying circumstances in which the assumptions behind Goldman and Olsson's approach fail to hold and hence knowledge fails to have extra value in their sense. The effect of our endeavors is a strengthened case for Goldman and Olsson's proposal as an empirically well-founded solution to the value problem for reliabilism, a solution that externalists can cheerfully embrace without sacrificing any deep-seated principles of naturalized epistemology.
Funding Open access funding provided by Lund University.

Conflict of Interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.