Getting the world right: Cognitive maps and pictures of universals
- Raftopoulos, A. Metascience (2013) 22: 115. doi:10.1007/s11016-012-9704-z
Classical cognitive theories, that is, the family of theories that take the brain to be a syntactic machine processing symbols (constant, context-independent, freely repeatable elements that figure constitutively in propositional contents), have failed to capture how the brain conducts its abductive business. Semantic holism, the trait par excellence of the sort of abductive reasoning the brain performs, and the inability of the classical theories to explain adequately how the brain could behave in real time the way it does, pose a severe problem for the classical conception of the brain.
In his new book, Paul Churchland continues, quite successfully, his bid to persuade the reader that the classical conception of the workings of the brain should be replaced by a construal of the brain as a dynamic neural network. The thrust of Churchland’s argument is that the new conception sheds light on capacities of the brain that the classical view has repeatedly failed to explain adequately. Abductive reasoning and its concomitant frame problem, how the brain learns the basic perceptual concepts, how the brain is able to apply new conceptual frameworks to a domain and conceive a new theory for that domain (the problem of conceptual redeployment or conceptual change), and how the brain developed language and social institutions in order, essentially, to promote its own business are such capacities, and Churchland devotes a section to each of them. Churchland addresses several problems both in epistemology, such as what constitutes a representation and how representations acquire their semantic content/reference, and in the philosophy of science, such as scientific realism, scientific methodology, inter-theoretic reduction, the theory-ladenness of perception, and the criteria for theory evaluation and choice.
In the first chapter of the book, Churchland introduces the reader to connectionism and, specifically, to dynamic, recurrent neural networks. Neural networks consist of interconnected units that simulate the brain’s neurons and are stratified into several hierarchically organized layers (or rungs). The first layer receives the network’s sensory input, the top-most layer is the network’s output, and the intervening layers are thought to be the sites of the representations that the network builds gradually as it confronts the input. Signals can be transmitted in a purely feed-forward, bottom-up fashion from the first layer to the higher-level layers, or they can be transmitted both bottom-up and top-down, that is, signals from the higher levels are transmitted back to, or reenter, the earlier layers. When the network receives some input, its input units are activated and spread this activation up to the units in the other layers; eventually, all the units get activated. Each piece of information in the network is represented by the activation pattern of the relevant neurons in some layer. This activation pattern is a vector, whose tip is a point in a wider representational space whose dimensions equal the number of units in the layer.
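The layered architecture just described can be sketched in a few lines of Python. This is a toy illustration of my own, not Churchland’s formalism: each layer’s activation vector is obtained by multiplying the previous layer’s vector by a matrix of synaptic weights and squashing the result, so that every representation is literally a point in the receiving layer’s activation space.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def propagate(activation, weight_matrices):
    """Feed an input activation vector forward through successive layers.

    Each weight matrix maps one layer's activation vector to the net input
    of the next; the sigmoid squashes it into a new activation pattern,
    i.e. a point in that layer's activation space."""
    for W in weight_matrices:
        activation = [sigmoid(sum(w * a for w, a in zip(row, activation)))
                      for row in W]
    return activation

# A toy three-layer network: 4 sensory units -> 3 hidden -> 2 output.
random.seed(0)
weights = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)],
           [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]]
output = propagate([1.0, 0.0, 0.5, 0.2], weights)
print(output)  # a 2-dimensional activation vector, each value in (0, 1)
```

The weight values here are random placeholders; in the picture the book develops, it is training that sets them, and with them the shape of the network’s representational space.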
It follows that the fundamental unit of cognition is not some linguistic or language-like entity but the activation pattern across a proprietary population of neurons. Representations are activation patterns. This means that the cornerstone of the classical view of cognition, namely, the symbol, is tossed out of the picture as a fundamental unit of cognition, because activation patterns do not behave like symbols: unlike symbols, which are context independent, activation patterns are highly contextual. This means, in turn, that the processing at work in the brain, that is, the transformation of representational units into other representational units, does not consist in the transformation of complex or simple symbols by means of a set of syntactic rules, as in the algorithms that the brain is supposed to run. Instead, it is the algebraic transformation of activation patterns (the transformation of one multi-dimensional matrix or tensor into another) that is effected by the synaptic connections among the neurons as the signal passes from one layer to another. The synaptic connections are, thus, the brain’s elemental information processors. At the same time, they are the repositories of the information stored in the brain, that is, the carriers of its representations.
Churchland distinguishes between the ephemeral vehicles of knowledge of the fleeting here-and-now and the enduring vehicles of background knowledge of the world’s-general-structure-in-space-and-time. The former are the activation patterns across a population of neurons, while the latter is the entire space of possible activations for the relevant population of neurons. The vehicle of representation in the latter case is the entire conceptual framework or map that the brain has built through its interaction with the environment and which encompasses all the possible instances of which the creature currently has any conception. This is Churchland’s “sculpted activation space.” This map is realized by the current values of the weights of the connections, as they have been fine-tuned through learning.
Within this sculpted activation-space framework, a theory or conceptual map is neither a set of sentences, as in the syntactic account of theories, nor a family of models, as in the semantic view of theories, but, rather, “a conceptual framework or high-dimensional cognitive map that purports to be, and sometimes succeeds in being, an accurate or faithful map of some domain of objective features and of enduring relations that hold between them” (215). The structure of the map, and most specifically the metrical relations of similarity and “distality” between its elements, provides its semantics as well. Three major consequences follow from this conception of theories, which Churchland exploits in various places in the book. First, the proper semantic theory for a state-space is holistic. The elements of a map gain their semantic significance and have their reference fixed collectively, that is, as a function of all of the relations within the map that the elements bear to the other elements in the map. “The representational significance or semantic content of a given element of a map is determined by the unique profile of its many proximity and distance relations to all of the other map-elements within the same space” (105). Second, scientific understanding of a domain is the possession of sundry maps of the enduring categories, symmetries, and invariants displayed by the objective universe (216). Third, the virtue of a theory “does not consist in its having any such set of sentences [observation sentences] among its specifically logical consequences… Rather, its virtue consists in its being a coherent, stable, and successful vehicle for interpreting and anticipating the perceptual/instrumental experience of the creature who is deploying it” (262).
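This holistic, proximity-based semantics can be made concrete with a toy computation (my own sketch, with invented prototype vectors, not an example from the book): an element’s “semantic profile” is nothing over and above its distances to every other element in the same activation space.

```python
import math

def distance_profile(prototypes):
    """For each named activation vector, return its distances to all others.

    On the state-space semantics sketched above, it is this whole profile
    of proximity and distance relations, not the vector in isolation,
    that fixes an element's representational significance."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return {name: {other: round(dist(vec, prototypes[other]), 3)
                   for other in prototypes if other != name}
            for name, vec in prototypes.items()}

# Three hypothetical prototype points in a 2-D activation space.
protos = {"dog": [0.9, 0.8], "wolf": [0.8, 0.9], "fish": [0.1, 0.2]}
profiles = distance_profile(protos)
print(profiles["dog"])  # 'dog' lies near 'wolf' and far from 'fish'
```

The point of the toy example is only that the “meaning” of the dog prototype is exhausted by its position relative to every other prototype; move the other points and the profile, hence the content, changes with them.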
In the second chapter, Churchland discusses the learning processes of the brain qua neural network. The most fundamental sort of learning is Hebbian learning where neurons that are simultaneously activated have their synaptic connections strengthened. Thus, neurons that react similarly to some environmental input tend to be activated as a whole whenever the same type of input is encountered. Synapses that are not systematically activated are weakened and eventually die out. This is the sort of learning that causes structural changes in the brain by strengthening synapses or creating new ones (constructivism) or by pruning.
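The Hebbian rule just described (strengthen synapses between co-active neurons; let synapses that are not systematically activated decay toward pruning) can be sketched as follows. This is a minimal illustration; the learning-rate and decay constants are my own assumptions, not values from the book.

```python
def hebbian_update(weights, pre, post, lr=0.1, decay=0.01):
    """One Hebbian step: strengthen each synapse in proportion to the joint
    activity of its pre- and post-synaptic neurons, and let every synapse
    decay slightly, so connections that are not systematically co-activated
    weaken and eventually die out (pruning)."""
    return [[w + lr * post[i] * pre[j] - decay * w
             for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

# Two presynaptic and two postsynaptic neurons, all synapses initially weak.
W = [[0.1, 0.1], [0.1, 0.1]]
for _ in range(50):
    # pre[0] and post[0] are repeatedly co-active; the other neurons stay silent.
    W = hebbian_update(W, pre=[1.0, 0.0], post=[1.0, 0.0])
print(W)  # W[0][0] has grown; the other three synapses have decayed toward 0
```

After repeated exposure to the same pattern of co-activation, the weight matrix itself encodes the regularity, which is the sense in which the synaptic weights are the repository of what the network has learned.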
After its successful training, the brain represents the domain in which it was trained by building the representations necessary to accommodate this domain (i.e., to produce the appropriate outputs for it). Learning in the brain consists in changing the synaptic weights until they acquire the values that enable the network to produce the appropriate output to a given input. To do this, the brain must extract from its training material the deep regularities that govern the behavior of the entities in the domain by abstracting away from their surface similarities. “[T]he learning brain very slowly constructs, or ‘takes a picture,’ of the landscape or configuration of the abstract universals, the temporal invariants, and the enduring symmetries that structure the objective universe of its experience” (vii).
Churchland discusses the sense in which the brain represents the environment and distances himself both from the notion of representation as a first-order resemblance and from the indicator semantics theories that locate the essence of representation in the nomic or causal relation that a representation bears to some specific feature or object in the environment. Finally, Churchland discusses the ways the conceptual frameworks that the brain builds can be compared across distinct individuals.
In chapter three, Churchland concentrates on the ways the frameworks/theories/maps can be evaluated for their accuracy in depicting the intended objective domain. This is in essence the problem of scientific realism, and Churchland examines, qualifies, and argues against both the Kantian idea that we can know nothing of the thing-in-itself and the Pragmatist rejection of the correspondence theory of truth.
I think that this is the most interesting chapter in Churchland’s book, not only because it lays the ground for the discussions that follow in the next chapters, but also because it vividly portrays the evolution of Churchland’s ideas on perception, its relation to concepts, its theory-ladenness, and its role in securing some sort of realism. Churchland points out right from the beginning that he will examine the ways one can compare cognitive maps not with one another but with the enduring features of the world. Given the maps’ semantic holism, this comparison concerns the existence and extent of a global homomorphism (an onto rather than an into relation between two maps) between the internal structure of the map and the independent structure of the external feature-space that the map purports to portray. Thus, one should forget the classical interpretation of the correspondence theory of truth, in which the syntactic structures of certain sets of interpreted sentences correspond to the set-theoretic structures realized in the world. It is global correspondences that give the brain a grip of the world (132).
If one wishes to test the accuracy of a map, one had better put it to work, using it to navigate the environment and assessing its efficiency. Churchland (129) argues, against Kant and the Pragmatists, that, given our success in navigating the environment, this test suggests that the brain’s maps are at least partially and approximately homomorphic with some of the objective feature-spaces of worldly things; they get something right about the world. Even though Kant was right that one cannot get a map-independent grip of the world, and the Pragmatists were right that there is no Archimedean vantage point outside our maps from which one could compare them with the world and assess their accuracy, still a map-dependent grip remains. As the maps that allow one successfully to navigate the environment increase in number, this family of maps collectively affords a very firm grip of the world. Churchland insists (134) that the role of navigational success in establishing homomorphism should not be taken to mean that the conception of the representational virtues of a map or theory lies in its pragmatic success. On the contrary, it is independent of any measure of pragmatic success, because it explains pragmatic successes or failures in terms of the virtues and vices of the various representational maps.
How does Hebbian learning create maps in the first place? Churchland argues that the brain learns by extracting deep regularities from its input as it explores the raw statistical regularities of the agent’s sensory experience. This exploration can make the agent differentially sensitive to certain activation patterns, that is, make her construct a map, a structured family of prototypical categories that as a whole is homomorphic to some objective feature-domain. In this way, the agent acquires a wealth of general background knowledge (a conceptual map) in the process of learning the categorical structure of the domain to which she is exposed.
This unconscious and preconceptual Hebbian learning process yields conceptual frameworks that can correspond accurately to some of the objective structures of various external feature-domains (179).
“[I]t does not require a conceptual framework already in place, a framework fit for expressing propositions, some of which serve as hypotheses about the world, and some of which serve as evidence for or against those hypotheses… The Hebbian story explored above…offers an account of how a structured family of prototypical categories can slowly emerge and form the background conceptual framework in which subsequent sensory inputs are preferentially interpreted. It offers an account of learning that needs no antecedent conceptual framework in order to get an opening grip on the world” (164–165).
Since these maps are used to evaluate perceptual judgments, the raw sensory nonconceptual information is somehow involved in this evaluation too, although Churchland is right to point out that there is no logical relation between perceptual judgments and the nonconceptual content of perception. However, this means only that epistemic, evidential relations need not be logical. Moreover, there is a substantial part of perceptual processing that is preconceptual, that is, nonconceptual. If a child in the course of her development builds her conceptual maps by means of such a conceptually encapsulated process, this means that there is a stage of perceptual processing that is cognitively impenetrable. The repercussions of this thesis cannot be overstated.
Conceptualists think that perception is through and through conceptual. Churchland somewhat distances himself from this group. However, this is not the end of the story. If there is a stage of perceptual processing that is not affected by concepts and, thus, is cognitively impenetrable, what are we to make of Churchland’s claim in his new book that, at any point in time, all processing in the brain is modulated in a top-down manner by the background conceptual frameworks, and that this constant top-down modulation entails that each sensory encounter is always interpreted by the relevant conceptual framework?
However, it is one thing to say that the early vision processes that steer the formation of the basic conceptual frameworks are conceptually encapsulated/cognitively impenetrable, and it is another thing to say that these processes are not affected by any sort of background information. As the brain learns some feature-domain, even if this learning is not cognitively driven, the brain certainly acquires information about its environment that it stores in the visual system. Churchland is correct to point out several times that each processing cycle, even in early perception, is guided by the knowledge about the environment that the brain accumulates during its interactions with the environment and stores in the synaptic connections that link both the input layer to the first representational layer and one representation to the next. If the background information at work is construed as some theory about the world, the early perceptual processes, their conceptual encapsulation notwithstanding, are theory-laden. Perhaps this is how one should interpret Churchland’s claim that every step in perceptual processing is theory-laden, and put aside the claim that all perceptual processes are modulated in a top-down manner by conceptual maps.
Is the knowledge stored in the perceptual circuits some form of a theory? Consider the circuits in early vision that are disposed to make a transition from states representing a light-contrast map to states representing where the edges of the objects in a visual scene lie. To make this transition, the system takes advantage of the information/rule that the edges of objects tend to occur at the spatial locations where discontinuities in contrast levels are found. However, this information/rule is not represented anywhere in the system so as to be able to function as an inferential rule that the system applies to make the state transition. It is hardwired in the visual circuits, constituting their modus operandi rather than a representation that can be used by them. Its only effect is to produce the representations of edges, and it cannot be used elsewhere, unlike inferential rules, which can be used in different contexts. Thus, this transition is not an inference, and it is not an ingredient of a theory either, because the parts of a theory are supposed to apply to various conditions in order, say, to explain a multitude of phenomena of different types (the resilience of theories). There are many other pieces of information or rules that guide perception and are used to solve the many underdetermination problems that plague perception. I have called them (Raftopoulos 2009) operational constraints, and I have claimed that they are hardwired in the perceptual system and are not contents of some representational vehicles. As such, they cannot be construed as theories or conceptual maps, because maps concern high-dimensional geometrical representational contents (132).
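What such a hardwired operational constraint amounts to can be illustrated with a toy one-dimensional case (my own sketch, with invented intensity values): the rule that edges occur at contrast discontinuities is nowhere written down as a premise the system consults; it simply is the operation the circuit performs.

```python
def edge_map(intensities, threshold=0.3):
    """Mark an edge wherever adjacent intensity values differ sharply.

    The 'rule' that edges occur at contrast discontinuities is not
    represented here as data the system could redeploy elsewhere; it is
    hardwired into the operation itself, the circuit's modus operandi.
    Its only effect is to produce representations of edges."""
    return [1 if abs(b - a) > threshold else 0
            for a, b in zip(intensities, intensities[1:])]

# A 1-D 'light-contrast map': a dark region followed by a bright one.
scene = [0.1, 0.1, 0.12, 0.9, 0.92, 0.9]
print(edge_map(scene))  # edge marked at the dark-to-bright transition
```

Nothing in this function could serve as an inferential rule applied in other contexts, which is the contrast with theory parts that the passage above draws.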
Let us also examine the sort of information that is stored in perceptual circuits before any conceptual intervention. Evidence from studies showing early object-classification effects seems to suggest that, to the extent that object classification presupposes object knowledge, this knowledge affects early vision in a top-down manner. For example, familiarity, including repetition memory, may affect object classification (whether an image portrays an animal or a face), a process that occurs at short latencies (95–100 ms and 85–95 ms, respectively). These sorts of effects seem to entail the theory-ladenness of perception, because one could claim that familiarity entails storage of information that constitutes some form of a theory about the world, and this theory affects perceptual processing by determining the synaptic weights of perceptual circuits.
The early effects of familiarity may be explained by invoking contextual associations (target-context spatial relationships) that are stored in early sensory areas to form unconscious perceptual memories. They may also be explained by appealing to configurations of properties of objects or scenes. Current research suggests that what is stored in early visual areas are implicit associations representing fragments of objects and shapes, or “edge complexes”, as opposed to whole objects and shapes. The associations built, through learning, into early visual circuits reflect, in essence, the statistical distribution of properties in environmental scenes. The statistical differences in the physical properties of different subsets of images are detected very early by the visual system, before any top-down conceptual involvement. It follows that the classification of an object that may occur very early during the fast feedforward sweep, at about 85–100 ms, is due to associations regarding shape and object fragments stored in early visual areas and does not reflect any top-down cognitive effects on early vision. Thus, early object classification is not a sign of the theory-ladenness of early vision, since our theories and knowledge about the world do not affect early vision in a top-down manner.
To conclude, even if knowledge is not equated with concept application/possession and even if this “nonconceptual” knowledge is representational, it is not a theory as the term is usually meant. Thus, early nonconceptual perceptual processing is guided by some sort of “knowledge” but is not theory-laden.
Conceptual frameworks are meant to provide an understanding of the relevant domain, which is mainly an explanatory understanding. As knowledge accumulates, one has to change one’s beliefs about a certain subject. Sometimes this is just the addition of some new pieces of information to an existing body of knowledge. Other times the conceptual change is radical in that it requires an ontology shift. All these are cases of conceptual redeployment, that is, cases in which some conceptual map is used to understand a domain that the map was not initially constructed to explain. In science, one frequently comes across cases in which one theory succeeds an older one, and the latter is thought to be reduced to the new theory (intertheoretic reduction). Both conceptual redeployment and intertheoretic reduction are discussed in chapter four, and Churchland masterfully shows how these processes can be properly understood from the perspective of the succession of one conceptual map by another. With respect to intertheoretic reductions, Churchland rejects several accounts of reduction and proposes what he calls the “Map-Subsumption Account”. According to it, a more general framework G reduces a less general framework T iff the conceptual map G, or some part of it, subsumes the conceptual map T, where “subsumption” is a homomorphic relation between the activational state-space or landscape that constitutes T and some substructure or lower-dimensional projection of the activational state-space that constitutes G.
Finally, Churchland discusses the problem of the underdetermination of theories by empirical evidence and its repercussions for realism. As far as realism is concerned, we have seen that Churchland rejects Pragmatism and adopts a view of realism according to which science provides increasingly successful representations of the world. Even though we are forced to infer that even our best current theories are false, still, “our conceptual maps of enduring reality have improved dramatically in both their breadth and accuracy.” Despite the fact that we know they are imperfect and have failures, “they have given us progressively better portrayals of the reality that embeds us, in all of the evaluatory dimensions germane to any map” (217). The evaluatory criteria of conceptual maps/theories are the internal consistency of the maps and their comparative strengths in representational success. The latter substitutes, in the new framework, for the defunct correspondence theory of truth and introduces the notion of representational success as the appropriate locus in which to search for the theory’s point of contact with its intended domain. Representational success, however, must be construed within the framework of the ways conceptual maps face the world. Maps do not provide a map-independent grip of the world, but they do provide a map-dependent grip, which is a genuine grip (126–127), in that it guides our successful navigation of our environment. Furthermore, a large family of independent but overlapping maps collectively affords a firmer grip and reinforces the grip of each map individually.
In the last chapter of the book, Churchland turns to explaining, from the perspective of the prelinguistic/sublinguistic account of cognition, the emergence of linguistic capabilities and of symbols and algorithms. He discusses, first, the emergence and significance of regulatory mechanisms (schools, universities, scientific and professional societies) that would not exist save for language. Then, he examines the way social institutions steer second-level learning, that is, conceptual change and theory construction. Churchland argues (270–271) that the linguistic formulations of theories are logico-linguistic reconstructions of the cognitive situation in the mind of the scientist, which, the reader should remember, can be properly and adequately described only by means of the state-space landscape and its kinematics and dynamics. These reconstructions portray this complex multi-dimensional cognitive reality in terms of discrete categories and binary (true/false) evaluations. In other words, they provide a digital representation of analog cognitive processes and states. The digital renderings should not be taken to portray adequately the true cognitive kinematics and dynamics because, as Churchland (1981) has argued in the past, the process that produces propositional beliefs and other linguistic entities is one in which the multi-dimensional complexities of the underlying processes are projected through linguistic behavior, which creates an appearance of definiteness and precision thanks to the discreteness of words. Thus, a person’s propositional belief is a one-dimensional projection, through the linguistic centers of the brain, of a multi-dimensional solid that is an element in that person’s true kinematic and dynamic cognitive reality.
If a book’s success is judged both by the scope of its material and by the amount of novelty it brings, then Churchland’s new book is an unqualified success. The novelty, it should be emphatically noted, does not consist only in the application of a radically new framework of the mind; after all, Churchland has presented his views on the subject in both The Engine of Reason, the Seat of the Soul and Neurophilosophy at Work. It lies in two other aspects of the new book. First, in the discussion of the repercussions of the new view of cognition for traditional philosophical problems, such as the underdetermination of theories and of observational reports by raw sensory data, the problem of realism, the correspondence theory of truth, and the criteria of theory evaluation. Second, in the discussion of the role of Hebbian learning and the construction of converging conceptual maps in individuals with the same sensory apparatus who live in the same environment, and in the examination of the significance of this for the traditional philosophical problems mentioned above. As I have argued, this realization on Churchland’s part creates some tensions with his previously held beliefs on the theory-ladenness of perception and its reverberations for theory evaluation and theory underdetermination, all of which center on the extent of conceptual influences on perceptual processes. However, this only adds to the merits of the book and makes it even more fascinating and challenging to read.