Abstract
We discuss the potential of applying category theory to the study of consciousness. We first review a recent proposal from the neurosciences of consciousness to illustrate the “correlational project”, using the integrated information theory of consciousness as an example. We then discuss some technical preliminaries related to categories and in particular to the notion of a functor, which carries the bulk of conceptual weight in many current discussions. We then look at possible payoffs of this project—getting to grips with the hard problem, theory integration, and exploiting explanatory dualities—and discuss possible avenues for further research, stressing the need to better develop the categorical representation of consciousness, in particular its phenomenological structure. A better understanding of consciousness cannot be achieved by merely studying the physical brain. Indeed, the categorical treatment even suggests application beyond the domain of neuroscience, for example in computer science and artificial intelligence research, while also emphasizing the primacy of (phenomenal) experience.
1 Introduction
Throughout this article, we use the “correlational project” to refer to a class of approaches in consciousness science that try to correlate the workings of a physical system (particularly the brain) with consciousness. Importantly, many advocates of the correlational project do not think that this entails the specification of any supposedly underlying causal relation. Correlation is not causation. Neither does the correlational project inform us about metaphysics per se. The correlational project is also not bound to one specific approach in the science of consciousness, for example, to integrated information theory (IIT, discussed in this article) or to the global neuronal workspace theory (Mashour et al., 2020). Rather, it expresses a methodology, namely to focus on the neuronal substrates of consciousness,Footnote 1 thereby leaving questions of metaphysics open.
In this article, we argue that previous approaches in consciousness studies need to be transcended in order to make real progress on the actual problem of consciousness: relating the mental and the physical (Atmanspacher and Prentner, 2022). Due to its metaphysical neutrality, the mathematical framework of category theory appears to be a fruitful tool to do this. Unlike the correlational project, which focuses on the neural substrates of consciousness, category theory would focus on patterns and relations. We thus outline potential avenues in applying this kind of thinking to the study of consciousness. In particular, we argue that applying category theory has several implications for the study of consciousness: truly understanding—not just listing—mind-matter correlations, theory integration and exploiting explanatory dualities.
A very early instance of the correlational project was proposed by Chalmers (1996) and later refined in Chalmers (2004). To make progress toward a new fundamental theory of consciousness, Chalmers advocated starting by establishing, and then formalizing, the “structural coherence” between awareness (understood in functional, third-personal terms) and consciousness (understood in phenomenal, first-personal terms). Only then, in the final step, should we try to make metaphysically-informed statements about their relation. Originally, “information” was speculated to be the underlying metaphysical principle, and the proposal expresses a kind of “non-reductive supervenience” relation between brain states and consciousness (Velmans, 2009). An even more metaphysically-neutral approach defined the search for the neural correlates of consciousness (NCC: Chalmers, 2000; Fink, 2016; Koch et al., 2016; Lepauvre and Melloni, 2021): what is the minimal neural system whose state is sufficient (but not necessary) for any conscious experience? One reason that the NCC-project has been so successful is that it could be approached from very different metaphysical angles. Unlike in Chalmers’ earlier proposal about structural coherence, one could focus purely on the search for the NCC while being agnostic about the underlying relation between consciousness and the brain (e.g. their non-reducibility). This had the great advantage that researchers could agree on a common methodology despite philosophical differences. Remnants of this are still noticeable, for example, when considering the different metaphysical positions associated with the authors who contributed to the recent publications of the adversarial collaboration to accelerate consciousness science (e.g., Cogitate Consortium et al., 2023)—materialism, idealism, and panpsychism (and maybe even more).
Yet, despite this laudable effort, the actual “hard problem” (Chalmers, 1996) remains untouched. Perhaps a more rigorous formalization using mathematics, specifically: category theory (Lawvere and Schanuel, 2009; Mac Lane, 1998), could be of help? But how can we precisely (mathematically) relate consciousness to the workings of a physical system, such as the brain? A basic recipe has been proposed by Tsuchiya and Saigo (2021), according to which one should best divide the problem and first construct two categorical representations—one “category of the brain”Footnote 2 and one “category of consciousness”—to then, in a subsequent step, think about possible ways of relating them. While it is quite undisputed that the project of constructing a category of the brain, although difficult, is possible (e.g., by relying on existing mathematical theories specifying the NCCs, cf. section 2), it is still doubtful whether constructing the category of consciousness is at all a meaningful project. The first conceptual steps that need to be taken are outlined in section 3. But even if one succeeded with these first steps, it is not clear how to conceive of the relation between these categories. Worse, it is not obvious that this would really help to make any progress beyond what could in principle be achieved with more conventional methods. For example, to what extent would our efforts at formalization help to make progress beyond the empirical study of the neural correlates of consciousness? We thus speculate on three possible benefits and avenues for further research in section 4: getting a better grip on the hard problem of consciousness, the integration of (seemingly unrelated) theories, and exploiting explanatory dualities.
In Sect. 5, we then conclude by pointing, on the one hand, to the need of further pursuing the categorical approach (including giving more details about how existing theories can be understood categorically) and, on the other, by explicitly requiring that the categorical treatment should stay adequate to (phenomenal) consciousness. Finally, while consciousness science would likely profit from a categorical turn, this seems to entail that it has to leave the narrow confines of neuroscience. This is particularly true if one is interested in research on artificial consciousness (Kanai et al., 2019; Bach, 2019; Blum and Blum, 2021; Butlin et al., 2023).Footnote 3
2 The case of integrated information theory
2.1 A categorical reconstruction
Tsuchiya et al. (2016) invoked category theory to study the relation of consciousness and the brain, using the example of integrated information theory (more specifically, “IIT3.0”; Oizumi et al. (2014); Tononi et al. (2016)). The reason to rely on IIT is not so much that IIT is the theory of (phenomenal) consciousness, but that IIT is the most advanced theory in the neuroscience of consciousness when it comes to the specification of the mathematical structure of the relevant physical elements. If IIT appears to be the only game in town for a mathematical science of consciousness, this has perhaps less to do with the theory itself than with a deficiency of competing theories in the neuroscience of consciousness. Indeed, the categorical approach might generalize to any sufficiently well-developed theory that could be used to derive mathematical structures from brain activity correlated to consciousness.
Other categorical treatments of IIT have been proposed by Kleiner and Tull (2021) and Tull and Kleiner (2021). In particular, in (Tull and Kleiner, 2021) the mathematical focus has been shifted from standard category theory to the framework of applied category theory (Abramsky and Coecke, 2008; Bradley, 2018; Fong and Spivak, 2019). A suggestive reason to use the latter framework, applied category theory, is the fact that category theory often appears to be very abstract and beyond any useful application to “real-world problems”. Indeed, category theory has often been described as “abstract nonsense” (as reported ironically by one of its inventors, Saunders Mac Lane (1997)).
Generally, applied category theory can be characterized as applying tools from category theory outside the context of pure mathematics. A reason for using applied category theory in this particular context is that it allows for a better (more concise) understanding of the mathematical structure of IIT itself. For example, a focus on “compositionality” (Coecke, 2021), accounting for the different ways of composing sub-processes, which lies at the core of the IIT-algorithm, would allow for a better understanding of the theory. Following this approach, applied category theory identifies two different ways of composing complex systems, sequentially \(\circ \) and in parallel \(\otimes \). In (Tull and Kleiner, 2021), compositionality thus plays an important part in understanding IIT’s decomposition of systems into parts (e.g. the “minimum information partition”) in a more principled way.
There exist certain commonalities between the two approaches but also differences, the main one being that, whereas Tsuchiya et al. believe that IIT expresses a well-defined relation between consciousness and physical systems (in the form of them being “categorically equivalent”—already quite a strong assumption short of categorical isomorphism), Tull and Kleiner (2021) do not find any necessary relation between integrated informational structures and consciousness, although various (contingent) points of contact can perhaps be established. Viewed this way, IIT merely specifies an algorithm that proposes links between structured spaces. Why and how these spaces are structured the way they are (and why consciousness should at all be conceived of in this way) is not really part of the IIT-formalism itself.Footnote 4
Traditional category theorists are, however, often less interested in questions of compositionality than in questions regarding universality (e.g. does a certain universal construction exist?). Since the main focus in Tsuchiya et al. (2016) lies not on the evaluation (or generalization) of IIT’s core algorithm but on the relation between structures defined on physical and mental states, the concept of compositionality plays only a minor role for the ensuing discussion in their (and this) paper. Hence, for understanding the mind-matter relation within a mathematical framework, IIT appears as a representative (but not the only) approach.
2.2 The central identity revisited
We continue by looking again at IIT’s “central identity” as presented in the canonical publications by Oizumi et al. (2014) and Tononi et al. (2016). The central identity says that (i) consciousness, in its quality, is identical to a distinguished “cause-effect structure”Footnote 5 within a physical system, represented by a collection of mechanisms that have non-zero \(\varphi \), and (ii) consciousness, in its quantity, can be measured by the integrated information \(\Phi \) associated with this structure insofar as it defines the maximum integrated information in the system.
However, IIT-proponents usually conceive of this identity not as a scientific proposal in the conventional sense, for example as a hypothesis directly targeting the regularities between two empirical events (i.e., consciousness and brain activity). Instead, they start by listing “phenomenological axioms” that capture the essential properties of consciousness. In the next step, axioms are translated into postulates about physical systems, which could, in turn, be studied empirically (e.g., whether a system that satisfies the postulates also correlates to experience). Yet, on the canonical view, the theory’s axiomatic basis is unassailable and irrefutably true (it expresses necessary and sufficient conceptual truths about consciousness). Moreover, any empirical evidence for the central identity has to be indirect (so not via targeting the axioms directly, but via physical systems that satisfy the postulates), and naturally much criticism of IIT has been directed at exactly the question of whether or not IIT is falsifiable,Footnote 6 or whether or not its axiomatic basis is really that “axiomatic” in the classical sense (namely, specifying self-evident truths; Bayne (2018); Negro (2022)). In the latest iteration of the theory, “IIT4.0”, the central identity has been replaced by an “explanatory identity” (Haun and Tononi, 2019; Ellia et al., 2021; Tononi et al., 2022; Albantakis et al., 2023a, b) that supposedly carries less ontological weight, yet some of the arguments against the previous iteration of the theory seem also to be relevant for the newest iteration (Signorelli et al., 2022).
Here lies a crucial methodological difference between the approach of Tsuchiya et al. (2016) and the canonical IIT literature. While Tsuchiya et al. (2016) are sympathetic to IIT in general, they do not assume from the outset, motivated by philosophical reasoning, that the central identity is true (or weaker: that it expresses a principled explanatory device), nor do they subscribe to IIT’s methodology of progressing from (self-evident or irrefutable) axioms to postulates about physical systems. By contrast, they believe that IIT merely specifies a mathematical structure (implemented by a neural system) that needs to be related to consciousness empirically. IIT then appears primarily not to be about consciousness itself but about the correlating,Footnote 7 physical systems. Still, there could be a systematic relation between those structures and consciousness—and category theory might deliver the right tools to assess this relation. More generally, the authors ask how one could understand an identity (or any other relation) between some “mathematical formalism and consciousness” (Tsuchiya et al., 2016).
A first step to assess this relation is to assume that consciousness could be represented in terms of a structured entity (ontological questions aside). So, on this view, IIT specifies the relation between the cause-effect structure of a physical system and the phenomenal structure of the system’s conscious experience. While canonical IIT reasons from an indubitably given conscious experience to the physical structure that supports it, Tsuchiya et al. (2016) ask how two (formalizable) domains—physical systems on the one hand, and consciousness on the other—could be related.
3 Category theory
3.1 Basics of category theory
At this point, it becomes suggestive to invoke the framework of category theory. A category \({\mathcal {C}}\) is a very general mathematical entity that can be defined as a collection of objects (denoted with capital letters) together with, for each pair of objects, a collection of morphisms between them (denoted by small letters, sometimes also called “arrows”) that satisfy the following conditions:
1. Composition. If there exist morphisms between A and B as well as between B and C, then there exists a morphism from A to C: \(f: A \rightarrow B,\ g: B \rightarrow C \Rightarrow g \circ f: A \rightarrow C.\)
2. Associativity. If we look at the composition of morphisms, it does not matter whether we first compose f and g and then h, or whether we first compose g and h and then f. Formally: \(h \circ (g \circ f) = (h \circ g) \circ f.\)
3. Identity. For each object X, there exists an identity morphism from X to itself, \(1_X: X \rightarrow X\). The existence of identity morphisms is very useful when we want to replace statements about objects with statements about relations. Identity morphisms must also compose with the other morphisms of the category in a special way, serving as left and right units of composition: \(f: X \rightarrow Y \Rightarrow f \circ 1_X = 1_Y \circ f = f.\)
A classical example is the category of sets, Set. Here, the objects are sets and the morphisms are functions between those sets. Another example is the category of topological spaces, Top, where the objects are topological spaces and the morphisms are continuous mappings between them. In applied category theory, an illustrative example is the process of preparing meringue pie (Fong and Spivak, 2019). Ingredients (objects) are transformed in various steps (morphisms) according to a recipe. These processes in turn composeFootnote 8 to give the larger process of preparing meringue pie.
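The three conditions can be checked mechanically on a small example. The following Python sketch (our own illustration, not drawn from the cited literature) encodes a finite fragment of Set, with finite sets as objects and dictionary-encoded functions as morphisms; the assertions verify composition, associativity, and the identity laws directly:

```python
def compose(g, f):
    """Sequential composition g . f: apply f first, then g."""
    return {x: g[f[x]] for x in f}

def identity(obj):
    """The identity morphism 1_X on a finite set X."""
    return {x: x for x in obj}

# Three objects (finite sets) and some morphisms between them.
A = {1, 2}
B = {'a', 'b', 'c'}
C = {True, False}

f = {1: 'a', 2: 'c'}                    # f: A -> B
g = {'a': True, 'b': False, 'c': True}  # g: B -> C
h = {True: 'yes', False: 'no'}          # h: C -> {'yes', 'no'}

# 1. Composition: g . f is a morphism from A to C.
assert compose(g, f) == {1: True, 2: True}
# 2. Associativity: h . (g . f) == (h . g) . f.
assert compose(h, compose(g, f)) == compose(compose(h, g), f)
# 3. Identity: 1_X serves as a left and right unit of composition.
assert compose(f, identity(A)) == f == compose(identity(B), f)
```

This encoding only captures a finite fragment, of course, but it makes vivid that the axioms are constraints on how morphisms fit together, not on what the objects "are".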
Here, we would like to focus our attention specifically on examples that are taken from the consciousness science literature. But before we distinguish between “categories of the brain” and “categories of consciousness”, let us first give a categorical re-description of a particularly prominent example from the analytic philosophy of mind: machine state functionalism (Putnam, 1967).
In functionalism, mental states are defined in terms of relations between inputs, outputs and other states. An illustration is given by a vending machine. A coke costs fifty cents (those were the days!), and the machine accepts only quarters. In one state (call it the “zero state”), inputting two quarters into the machine makes the machine output a coke and stay in the zero state. However, if we input only one quarter, the machine won’t output a coke but move to the “quarter state”. Inputting another quarter then suffices to output a coke and move the machine back to the zero state. Inputting two quarters when the machine is in the quarter state would output a coke and make the machine stay in the quarter state.Footnote 9 Graphically this can be represented as follows:
![](http://media.springernature.com/lw157/springer-static/image/art%3A10.1007%2Fs11229-024-04718-5/MediaObjects/11229_2024_4718_Equ1_HTML.png)
Observe that this defines a category (we might call it Fun) with two objects, corresponding to the zero and quarter states, with morphisms between them that satisfy composition, associativity and identities. For example, with respect to composition, inputting two quarters at once is equivalent to first inputting a single quarter, then another one. One immediately also sees from the graph how this would satisfy the associativity and identity relations. As in machine state functionalism, where it is customary to define mental states purely in terms of their relations, we can also think about objects in category theory this way.
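The vending machine can also be simulated directly. In the following hedged Python sketch (our own; the names ZERO, QUARTER, step, and run do not come from the original example), morphisms are sequences of coin inputs and composition is running one sequence after another; the assertions check the composition and identity facts stated above:

```python
ZERO, QUARTER = "zero", "quarter"

def step(state, coins):
    """One transition on inserting `coins` quarters; returns (new_state, cokes_output)."""
    if state == ZERO:
        return (ZERO, 1) if coins == 2 else (QUARTER, 0)
    else:  # quarter state: one more quarter completes the purchase;
           # two quarters buy a coke and leave a quarter of credit.
        return (ZERO, 1) if coins == 1 else (QUARTER, 1)

def run(state, inputs):
    """Compose transitions along a sequence of inputs (morphism composition)."""
    cokes = 0
    for c in inputs:
        state, out = step(state, c)
        cokes += out
    return state, cokes

# Composition: two single quarters are equivalent to two quarters at once.
assert run(ZERO, [1, 1]) == run(ZERO, [2]) == (ZERO, 1)
# Identity: the empty input sequence leaves every state unchanged.
assert run(ZERO, []) == (ZERO, 0) and run(QUARTER, []) == (QUARTER, 0)
```

Nothing in the simulation refers to what the states intrinsically are, only to how inputs move between them, which is exactly the functionalist point.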
But could mental states really be defined this way? One way to understand the difference between machine state functionalism and more demanding versions of functionalism can be cashed out in terms of further constraints that are put on the objects/morphisms. For example, if one endorses psychofunctionalism (Levin, 2021), one would require the morphisms to refer to psychologically relevant functions. An interesting question is whether the distinction between “role” and “realizer” functionalism (McLaughlin, 2006), can adequately be captured, and perhaps even reconciled, via (higher) category theory.
However, one important critique of functionalism of all varieties is its apparent inability to account for the qualitative (as opposed to the functional) nature of mental states (Van Gulick, 2022). In functionalism, mental states are defined purely relationally and thus lack any supposed qualitative “intrinsic” component. Perhaps we can do better if we distinguish between two relevant categories: one for the brain, one for consciousness, incl. the notion of qualia—and think about their relation.
Category of the brain There is no single right approach to categorifying what we know about the brain. But there have been several proposals in the literature. Following the “memory evolutive neural systems” by Ehresmann and Vanbremeersch (2007), one could first look at a neural graph (as could nowadays be reconstructed from imaging data) representing a neural system at a single instance in time and from there construct the category NEUR, which satisfies the above-mentioned constraints.Footnote 10 Specifically, a neural graph defines paths through a neural system, which satisfy composition and associativity. While the facts that neuron A is connected to neuron B and neuron B is connected to neuron C do not imply that there is a direct link between neuron A and neuron C, there exists a synaptic path from A to C. In this picture, morphisms would therefore correspond to synaptic paths (including delay) that change the activation states of the connected neurons. The proposed composition is such that delay times have to be respected. Identity morphisms correspond to no changes in state (hence a “synaptic path” with zero delay).
Another example from consciousness studies that was already mentioned is IIT. In IIT3.0 (Oizumi et al. 2014), most examples were not given in terms of neural systems but in terms of logic gates. A set of logic gates, appropriately wired together, then realizes a certain cause-effect-structure. In a classical theory, unlike a quantum one, there arises an ambiguity here: both the states as well as the operation on these states are realized by the same objects, i.e. the logic gates (McQueen et al., 2023). In a quantum theory, prima facie, a logic gate would have no state on its own, but only its inputs and outputs are associated with a state.Footnote 11 The basic algorithm of IIT, intended to quantify the amount of integrated information, is premised on comparing a system to a decomposed version of the system. Only if the system processes information (causally) over and above the decomposed version, can it be associated with integrated information. Tull and Kleiner (2021) propose to use category theory to make the setup more precise and even generalize it to systems beyond neural-like systems, a claim that is prominently but also often critically associated with IIT. By contrast, Tsuchiya et al. (2016) propose to use category theory to analyze relations (e.g. similarities) between cause-effect structures, a claim we will further focus on in the remainder.
Category of consciousness Whereas we already found various ambiguities in setting up the category of the brain, these difficulties become exacerbated in the case of consciousness. One reason that category theory is a suggestive framework for studying consciousness is the fact that in category theory objects are fully specified by the relations to other objects. This resonates with various intuitions we hold about qualia. For example, Tsuchiya et al. (2016) assume that the contents of consciousness are either category theoretical objects themselves (“qualia in the narrow sense”; Kanai and Tsuchiya (2012), e.g., the redness of the cup in front of me), relations between objects (e.g., the experienced closeness of the two cups in front of me) or more complicated structures (e.g., faces as collections of objects and relations between objects or “qualia in the broad sense”; Kanai and Tsuchiya (2012), i.e., the total contents of any experience).
More generally, an abstract way to understand the role of objects in category theory is to see them as the interfaces along which morphisms compose.Footnote 12 This also defines a potential point of contact with phenomenology. The contents of our conscious minds can be understood as mediators between processes of experience (Taguchi, 2019). According to a phenomenologically-inspired proposal, this is tantamount to an act of “constitution” (Gallagher and Zahavi, 2008; Tsuchiya et al., 2016). But, unlike in the brain example, what makes this proposal more difficult to understand is that, at least according to traditional phenomenology (Husserl, 2012), “constitution” typically refers to the idea that the contents of experience are not pre-given objects but results of a mental activity. This points to an important constraint on the category of the brain: morphisms need to reflect functional (dynamical), rather than purely structural (static), data. The existence of synaptic connections alone is not sufficient here.
3.2 The power of category theory
Category theory is a very general framework. This is both a strength and a weakness of the formalism. On the one hand, it appears that almost anything could be interpreted in terms of a category (consciousness being an interesting and non-trivial case!).Footnote 13 But on the other hand, if we could find a general scheme that fits almost anything, the danger is that nothing specifically worthwhile would have thereby been said.
One thing that could justify the use of category theory has to do with the relationship of domains. Specifically, category theory offers different kinds of systematic relations between categories that exhibit, informally speaking, various degrees of similarity. The weakest is the existence of a “functor”: a transformation from the objects and morphisms in one category to the other, \(F: {\mathcal {C}} \rightarrow {\mathcal {D}}\). Functors have to satisfy the following conditions:
1. They transform all objects in one category into objects of the other, \(F(X) = X'\), such that
2. if a morphism between objects exists in one category, \(f: X_1 \rightarrow X_2\), then a morphism between the corresponding objects in the second category also exists, \(F(f) = f': F(X_1) \rightarrow F(X_2)\).
3. The functorial mapping preserves identity and composition.
Revisiting the previous examples, F could be a functor from Top to Set, \(F: \textrm{Top} \rightarrow \textrm{Set}\). A philosophical example, to which we have already appealed in the last subsection, concerns the distinction of role and realizer functionalism. Whereas the former is specified by a category Fun, the latter could perhaps be equivalent to the range of a functor from Fun to physiological states.
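As a worked toy example (our own, not from the cited papers), the covariant powerset construction on finite sets can be checked against the three conditions above: it sends each set to its set of subsets and each function to its direct-image map.

```python
from itertools import chain, combinations

def compose(g, f):
    """Ordinary composition of dict-encoded functions."""
    return {x: g[f[x]] for x in f}

def powerset(X):
    """All subsets of a finite set X, as frozensets."""
    xs = sorted(X, key=repr)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))}

def P_mor(f):
    """Morphism part of the functor: P(f)(S) = {f(x) : x in S}."""
    return {S: frozenset(f[x] for x in S) for S in powerset(set(f))}

A, B = {1, 2}, {'a', 'b'}
f = {1: 'a', 2: 'b'}      # f: A -> B
g = {'a': 'u', 'b': 'u'}  # g: B -> {'u'}

# Condition 2: P(f) is a morphism between the transformed objects P(A) and P(B).
assert set(P_mor(f)) == powerset(A)
assert set(P_mor(f).values()) <= powerset(B)
# Condition 3: P preserves composition and identities.
assert P_mor(compose(g, f)) == compose(P_mor(g), P_mor(f))
id_A = {x: x for x in A}
assert P_mor(id_A) == {S: S for S in powerset(A)}
```

The point of the exercise is that functoriality is a checkable structural property of a mapping between domains, not a vague analogy.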
One way to relate this to the example of IIT is to focus on IIT’s notion of identity between cause-effect-structures (on the brain side) and qualia (on the consciousness side). In general, a functorial relationship between two domains is much weaker than an identity relation (or isomorphism). If a functor \(F: {\mathcal {C}} \rightarrow {\mathcal {D}}\) existed, and also a functor \(G: {\mathcal {D}} \rightarrow {\mathcal {C}}\), then (categorical) isomorphism requires that their composite amounts to the identity functor, \(G \circ F = id_{{\mathcal {C}}}\) (and the other way around). This is quite a strong relation. Category theory offers several “intermediate” degrees of similarity, such as strong or weak categorical equivalence (nLab authors, 2022a), that could be invoked instead.
Without further constraints, the presence of a functor between categories is rather weak. Still, already functorial relationships between domains can be highly useful since they allow one to go from one domain to another, make an inference in the new domain, and then “reason back” to the original domain. A functorial relationship between the structure of the brain and phenomenal structures would, for example, allow one to solve difficult problems in one domain and transfer the solutions (mutatis mutandis) to the other.
Tsuchiya et al. (2016) mention the example of Brouwer’s fixed-point theorem. Recall that the fixed-point theorem says that any continuous function from a disk in \({\mathbb {R}}^2\) to itself, \(f: D^2 \rightarrow D^2 \), has at least one fixed point, \(f(x_0) = x_0\). This can be illustrated with a map of a country: if the map is laid out anywhere within that country, there will be at least one point on the map that lies exactly on the “real” location it represents.
The fixed-point theorem is notoriously difficult to prove within geometry itself. However, geometrical objects are functorially related to algebraic objects known as fundamental groups. Of course, the geometrical objects are not identical to these algebraic objects, and the structures defined by such geometric objects are not equivalent to the structures on their fundamental groups. Still, there exists a systematic mapping between those two domains. This allows for a much easier proof of the fixed-point theorem via deriving an impossibility result in the category of these algebraic objects.
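In outline, the algebraic argument runs as follows (standard textbook material, sketched here only for illustration; \(i: S^1 \rightarrow D^2\) denotes the inclusion of the boundary circle):

```latex
% Suppose f: D^2 -> D^2 had no fixed point. Then drawing the ray from
% f(x) through x defines a retraction r: D^2 -> S^1 with r \circ i = id.
% Applying the fundamental-group functor \pi_1 and using functoriality:
\pi_1(r) \circ \pi_1(i) \;=\; \pi_1(r \circ i)
  \;=\; \mathrm{id}_{\pi_1(S^1)} \;=\; \mathrm{id}_{\mathbb{Z}}.
% But this map factors through \pi_1(D^2) \cong 0, so the identity on
% \mathbb{Z} would factor through the trivial group -- a contradiction.
% Hence every continuous f: D^2 -> D^2 has a fixed point.
```

The hard geometric fact is obtained by a two-line computation on the algebraic side, precisely because the functor preserves composition and identities.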
Similarly, one could speculate that already the (still quite loose) analogy expressed by a functorial relationship between consciousness and the brain suffices to derive interesting statements about the nature of conscious experience (the domain where statements supposedly are difficult). Consciousness is rich in difficult questions such as: “What are other persons conscious of at the moment?” or “What entities in nature are in fact conscious of anything?”. One would perhaps expect that we need a full theory of consciousness to answer these questions, but it may be the case that a much weaker theoretical construction suffices. The concept of a functor alone already presents many benefits, and we shall thus mostly limit our discussion to this idea.Footnote 14
4 Possible payoffs
In this section, we mention some potential payoffs of using category theory. The first is quite uncontested and can be viewed as being part (or a natural continuation) of finding the neural correlates of consciousness. But we need research that goes beyond the correlational project. This will lead us to two potential avenues of further research, where category theory seems to be particularly promising. The latter subsections are thus more speculative.
4.1 Category theory and the neural correlates of consciousness
The NCC have been defined as “the minimum neural mechanisms jointly sufficient for any one specific conscious experience” (Koch et al., 2016). Finding the NCC has been the single most useful and successful approach in consciousness studies over the last 30 years. Yet, despite its remarkable progress, a true understanding of the NCC is still not in sight. What do these correlates express? Why do these correlates hold?
How should we think about the neural correlates of consciousness with respect to category theory? If one could cast both mathematical structures derived from 3rd-person methods (e.g., from brain scans) as well as from 1st-person methods (e.g., via mathematized phenomenology) into the form of a category, one might look for a functorial relation between these categories. Presumably, the brain is not identical to consciousness, nor is the abstract mathematical structure itself, which has been derived from brain activity (how should an abstract mathematical structure be identical to consciousness?). Instead, the abstract mathematical structure of the brain could be (categorically) related to a structure that describes important aspects of conscious experience (e.g., its “phenomenal structure” Van Gulick (2022); Kleiner (2024)).
A first question might be whether similarity/difference relations could be preserved across categories, i.e. whether there exists a functor \(F: {\mathcal {C}}_{\text {brain}} \rightarrow {\mathcal {C}}_{\text {cons}}\) such that \(X \sim Y \Rightarrow F(X) \sim F(Y)\) (also satisfying the other requirements listed in the previous section). Recall a nice property of functors: if there exists a functor between categories, \(F: {\mathcal {C}} \rightarrow {\mathcal {D}}\), and if there exists an (invertible) morphism between the objects in \({\mathcal {C}}\), there will exist an invertible morphism between the respective objects in the category \({\mathcal {D}}\). Thus, if we interpret similarity in terms of the existence of invertible morphisms within a category, we will find that the similarity structure between categories has to be preserved.
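This preservation property can again be checked mechanically. In the Python sketch below (our own illustration), the direct-image construction on finite sets plays the role of the functor: the inverse of a bijection is sent to the inverse of its image map, so isomorphic objects remain isomorphic after the transformation.

```python
from itertools import chain, combinations

def compose(g, f):
    """Composition of dict-encoded functions: apply f, then g."""
    return {x: g[f[x]] for x in f}

def powerset(X):
    """All subsets of a finite set X, as frozensets."""
    xs = sorted(X, key=repr)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))}

def P_mor(f):
    """Direct-image functor on morphisms: P(f)(S) = {f(x) : x in S}."""
    return {S: frozenset(f[x] for x in S) for S in powerset(set(f))}

A, B = {1, 2, 3}, {'x', 'y', 'z'}
f = {1: 'x', 2: 'y', 3: 'z'}          # an invertible morphism A -> B
f_inv = {v: k for k, v in f.items()}  # its inverse B -> A

# X ~ Y implies F(X) ~ F(Y): P(f_inv) is inverse to P(f), so P(A) ~ P(B).
assert compose(P_mor(f_inv), P_mor(f)) == {S: S for S in powerset(A)}
assert compose(P_mor(f), P_mor(f_inv)) == {S: S for S in powerset(B)}
```

The converse direction, discussed next, is exactly what such a functor does not guarantee by itself.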
The converse, \(F(X) \sim F(Y) \Rightarrow X \sim Y\), would hold, for example, if there also existed an inverse functor. The existence of such functors is a necessary requirement for a categorical isomorphism and for weaker notions of categorical equivalence (e.g. \(G \circ F \cong \text {id}_{\mathcal {C}}\), and conversely). If no such functors exist, the categories cannot be isomorphic or equivalent. But are there empirical ways to determine whether or not such functors exist, and if so, what they would look like?
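Spelled out, an equivalence of categories requires a pair of functors whose composites agree with the identity functors up to natural isomorphism:

```latex
F : \mathcal{C} \to \mathcal{D}, \qquad
G : \mathcal{D} \to \mathcal{C}, \qquad
G \circ F \cong \mathrm{id}_{\mathcal{C}}, \qquad
F \circ G \cong \mathrm{id}_{\mathcal{D}}
```

For the stronger notion of a categorical isomorphism, \(\cong\) is replaced by strict equality \(=\) (cf. nLab authors, 2022a).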
Luckily, there exists (in principle) a way to assess this question that is tractable with standard practices in the field of neuropsychology and psychophysics.
1. Same stimuli, different experience. The “contrastive method” in consciousness studies (Baars, 2005; Lepauvre and Melloni, 2021) is to hold a stimulus fixed and observe the neural changes while conscious perception changes from “seen” to “unseen”. This arguably allows one, in principle, to identify neural correlates for specific contents, although the situation is not quite as straightforward due to several confounding factors and misguided localization approaches (Signorelli et al., 2021a; Lepauvre and Melloni, 2021). If one is able to cast the recording of brain activity into mathematical form (for example, by deriving IIT’s cause-effect structures), then one could determine whether or not identical (or very similar) objects in the “category of the brain” map to different (or very dissimilar) conscious experiences.
2. Different stimuli, same experience. The converse is to see whether very dissimilar (different) conscious experiences map to similar (identical) structures of the brain. This could be achieved, for example, using the phenomenon of perceptual metamers (Freeman and Simoncelli, 2011), where different physical stimuli correlate with the same perceptual experience.
In both cases, one of the two categories (brain or consciousness) might render some of its objects identical (or similar) whereas the corresponding objects in the other category are not. This would falsify the claim that a functor exists, and hence the claim of identity between mathematical representations of brain activity (e.g., cause-effect structures of IIT) and conscious experience, at least in this specific case. Moreover, this need not have anything to do with the stimulus itself (e.g., one could hold the stimulus constant but observe changes in brain activity/experience, or one could vary the stimulus but find roughly the same brain activity/experience).
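Both empirical tests can be sketched as a screening procedure over paired observations. The trial data, the one-dimensional stand-in for a mathematized brain structure, and the similarity threshold below are illustrative assumptions only, not measured values:

```python
# Flag pairs of observations (brain_repr, experience_label) that would block
# a similarity-preserving functor in either direction.

def brain_similar(x, y, eps=0.1):
    # Similarity of mathematized brain structures; a 1-D stand-in for
    # a distance between, e.g., cause-effect structures.
    return abs(x - y) <= eps

def violations(trials):
    same_brain_diff_exp = []   # test 1: same stimuli, different experience
    diff_brain_same_exp = []   # test 2: different stimuli, same experience
    for i, (bx, ex) in enumerate(trials):
        for by, ey in trials[i + 1:]:
            if brain_similar(bx, by) and ex != ey:
                same_brain_diff_exp.append(((bx, ex), (by, ey)))
            if not brain_similar(bx, by) and ex == ey:
                diff_brain_same_exp.append(((bx, ex), (by, ey)))
    return same_brain_diff_exp, diff_brain_same_exp

trials = [(0.50, "seen"), (0.52, "unseen"),   # contrastive-method style pair
          (0.10, "seen"), (0.90, "seen")]     # metamer-style pair
forward, backward = violations(trials)
print(len(forward), len(backward))   # counts of violations in each direction
```

A non-empty first list would count against a similarity-preserving functor from brain to consciousness; a non-empty second list against its converse.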
A similar question has long been asked in the psychophysics literature (and has been elaborated philosophically in the literature on “quality spaces” (Clark, 1993; Rosenthal, 2010; Lee, 2021; Lyre, 2022)), one difference being that the categorical treatment could be applied to all sufficiently formalizable relations that hold for conscious experiences, not limited to psychophysics, for example, the intentional self-world structure of phenomenology (Smith, 2018; Prentner, 2024).
Whereas quality spaces limit themselves mainly to the discussion of features of external objects as they are represented (e.g. by the sensory system), the descriptive scope of category theory is larger and includes the idea of studying interactions between experiences or even the very processes that lead to the formation of a certain experience in the first place. To give one contrasting example: In the quality spaces of olfaction (Young et al., 2014), the properties and relations that certain (unconscious) smell experiences bear to each other are represented based on whether they can be discriminated, whereas possible interactions between experiences are not part of the formalism. While this is also not explicitly the case in the work of Tsuchiya et al. (2016), there seems to be no fundamental reason that rules it out. One of the powers of categories is to speak of relations of relations, for example, in terms of “higher category theory” (nLab authors, 2022b).
But say we did find a functorial relationship on the most basic (“category-1”) level, as would be predicted by IIT. Would this also provide evidence for their identity? No. We have not proven that inverse functors always exist with the property \(G \circ F = id_{{\mathcal {C}}}\). Nor have we established that nothing beyond mathematical structure is relevant to the question of consciousness—relating to an argument made by Chalmers (2002) known as the “structure and dynamics argument”. Hence, checking whether a functor between domains exists is still part of the NCC-project (or a natural continuation thereof), namely to discover correlates in a more systematic way. Yet, it entails the possibility to transform certain problems in one domain into problems in the other one. This has proven to be a successful strategy in science (as illustrated by the proof of the fixed-point theorem, see above).
Yet, a more important point can be inferred from the approach of Tsuchiya et al. (2016). Correlations are the empirical material of a science of consciousness; they are something that needs to be explained eventually (Atmanspacher and Prentner, 2022). Neural correlates of consciousness are not themselves theories, but it is helpful to have a rigorous formalization of them. This is a precondition of actually testing whether a particular theory is right or wrong. We thus endorse the claim made by Tsuchiya et al. (2016) that IIT’s central identity should be taken as neither self-evidently true nor as something whose truth could be justified by a (unique) transition from (self-evidently true) axioms to postulates, but as something that needs to be (and can be!) scrutinized empirically.Footnote 15 But would we have needed to invoke the formal apparatus of category theory for this result? Perhaps not, but one strength of using category theory is certainly that it makes these questions explicit.
4.2 Theory-integration
One could try to apply the categorical treatment to a range of theories in the study of consciousness with the hope of systematically integrating them (for example, besides IIT, the “temporo-spatial theory of consciousness” (Northoff et al., 2019, 2020; Northoff and Zilio, 2022) or “conscious agent theory” (Hoffman and Prakash, 2014; Fields et al., 2018; Prentner, 2021; Hoffman et al., 2023)).
Category theory could serve the purpose of establishing a shared (formal) vocabulary for theories that might otherwise talk past each other. Especially in an early (immature) stage of a scientific field, such as the study of consciousness, this seems to be a better idea than (prematurely) sealing off the boundaries between approaches.
A different, though perhaps related strategy would be to start anew with a framework for studying consciousness on its own, in terms of category theory. This should perhaps not be seen as yet another theory of consciousness (Wiese, 2020), but more as an “integration-hub” for current theories in the field.
The above-mentioned theories arguably represent proponents of quite different “-isms” in the philosophy of mind: physicalism, idealism, and various non-reductive approaches. A formal treatment such as category theory could help to overcome metaphysical trenches. This would be in line with Husserl’s early emphasis on “bracketing” metaphysical questions as far as possible (Yoshimi, 2014). It would furthermore allow one to better make the distinction between a transcendental approachFootnote 16 and an anti-realist ontological thesis (e.g. empirical idealism). This way, the use of category theory in the study of consciousness could help to establish a metaphysically-neutral research program beyond the dogmatism of traditional ontologies.
4.3 Exploiting explanatory dualities
What could be a further benefit of such a metaphysically-neutral (but formal and perhaps heterodox) research program? A physics-inspired example would be an explanation that proceeds in terms of “dualities”—systematic transformations from one domain of inquiry to the other, often followed by back-translation. It is an interesting question whether one could specify a duality between consciousness and brain activity.
In previous papers, it has been proposed that a major benefit of the categorical treatment is the possibility to solve hard questions about consciousness in the arguably more tractable domain of neuroscience. But what’s more, we speculate that—in accordance with a “constraint-based methodology” to the study of consciousness (Signorelli et al., 2022)—we might even be able to solve some very difficult problems in neuroscience by translating them into the language of (mathematized) phenomenology.
Yet, terminology might also be misleading. The concept of “duality” is invoked in very different contexts—in philosophy, mathematics, and physics. In philosophy, a duality is typically a (quite general) statement about complementary modes of acquiring knowledge. A paradigmatic duality in this sense is embodied in the old Chinese idea of Yin and Yang: reality as a well-balanced set of contradictory opposites.
In mathematics, one knows of several “dualities”. The mathematician’s use of the term is usually much more stringent. An example pertains to the mapping \(f: A \times B \rightarrow C\). A representational duality would enable us to make statements about A by using maps from B to C, and vice versa (Corfield, 2017). The perhaps most general formulation of a duality in category theory involves the existence of a pair of “adjoint functors” (nLab authors, 2024).
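A standard instance of such a representational duality is currying, which trades a map out of the product \(A \times B\) for a family of maps \(B \rightarrow C\) indexed by \(A\). A minimal sketch (the function \(f\) is an arbitrary stand-in):

```python
# Currying: f : A x B -> C corresponds to g : A -> (B -> C).
def curry(f):
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    return lambda a, b: g(a)(b)

def f(a, b):              # f : A x B -> C
    return a * 10 + b

g = curry(f)              # g(a) is a "view" of a as a map B -> C
assert g(3)(4) == f(3, 4) == 34
# Round trip: the two representations carry the same information.
assert uncurry(curry(f))(3, 4) == f(3, 4)
```

The round-trip property is what licenses transferring statements about one side to the other, which is the pattern the categorical notion of adjunction generalizes.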
In physics, the concept of a duality is still quite loosely defined compared to its use in mathematics, so it is even more important to clarify what sense of duality one has in mind. “Duality” can thereby stand for quite different things. For example, in the early days of quantum mechanics it was common to allude to a “wave-particle duality”. But dualities of sorts had been known even prior to quantum mechanics. For example, the flux of an electric field through a closed surface is proportional to the net electric charge inside the respective volume of space (irrespective of its exact spatial distribution). This fundamental regularity is known as “Gauss’s law”. A more modern form of duality in physics is the “AdS/CFT duality” that links string theory and conformal quantum field theory (Maldacena, 1999; Rickles, 2013).
Our inspirational example here is the related conjecture that distances between regions in spacetime are inversely proportional to the entanglement entropy of the associated systems on their boundaries (Ryu and Takayanagi, 2006; Van Raamsdonk, 2010; Ney, 2021). We conceive of a similar move in consciousness science. Recall, for example, that IIT features plenty of computationally almost intractable problems, such as choosing the minimum information partition by evaluating distances between systems and their partitions. If one were able to systematically transform these questions into (simpler) questions about phenomenal relations, this would likely lead to great progress throughout the field. Pointing to a functorial relationship alone is hardly sufficient, though. We need to be more specific: how exactly does an entity in one domain (e.g., a distance) relate to an entity in the other (e.g., to a relation in consciousness)?
5 Conclusions and outlook
The use of category theory only provides a high-level story. The low-level story has to be told using specific theories and should be informed by existing architectures. There seems no way to get past the nitty-gritty details that arise when actually working with a particular model. At the same time, it should be kept in mind that a good description at a higher level should illuminate the essential features relevant to one’s problem, thereby abstracting from a lot of unnecessary detail. Striking the right balance between providing enough detail and abstraction is a difficult task. Category theory promises to be a great tool, but one that should be used thoughtfully. As a representative example previously discussed in the literature, we considered the case of IIT in Sect. 2. It is worth repeating that this line of reasoning is not specifically tied to IIT, but any mathematically sophisticated theory of consciousness could be embedded in a categorical framework.
Conversely, this might also imply that the current state of theorizing in the science of consciousness is underdeveloped. Most of the theories—except for some that at least seem to be amenable to a categorical treatment (we have discussed IIT and briefly mentioned the temporo-spatial theory of consciousness and the conscious agent theory)—are not yet in a state where applying category theory is really that suggestive. Yet, this must not be understood as a problem with category theory (à la “abstract nonsense!”) but as a shortcoming of these theories. If one believes that there is something about consciousness that requires a relational approach, for example for giving a scientific account of qualia, then invoking category theory feels natural.
More generally, the targeted theories could come from quite different starting points and need not even focus exclusively on the brain. Using category theory to study mind-matter correlations might not even be specifically tied to the neuroscience of consciousness, but category theory could be applied to any sufficiently mathematizable domain drawn from biology or computer science. While we are certain that some types of neural systems (namely those that resemble human brains) correlate to conscious experience, our certainty decreases as we move to different systems. This is particularly acute when we think about AI systems. What is the right measure for AI-consciousness? Is it subjective report? This could be easily faked. Is it behavior? This, too, is easily engineered, provided we endow the AI systems with the relevant sensors and effectors to construct artificial perception-action loops. What about the right kind of functioning, as proposed e.g. by Butlin et al. (2023)? One problem here is that “functioning” is often underspecified and tentative. Category theory can render these ideas much more precise (see Sect. 3). In addition, it might enable one to make specific predictions as outlined in Sect. 4.
But is this at all a good way to think about consciousness? One critique of IIT pertains to their treatment of phenomenology itself (Merker et al., 2022; Singhal et al., 2022). After all, our best systematic knowledge about phenomenal experience comes from the philosophical discipline of (e.g., Husserlian) phenomenology (Smith, 2018; Husserl, 2012; Gallagher and Zahavi, 2008). Constructing a category of consciousness would likely heavily draw on research into “mathematizing phenomenology” (Petitot et al., 1999; Yoshimi, 2007; Marbach, 2009; Prentner, 2024; Taguchi and Saigo, 2023), and it could be conjectured that related approaches (Yoshimi, 2016; Rudrauf et al., 2017; Prentner, 2019; Signorelli et al., 2021b; List, 2023; Ramstead et al., 2023; Kleiner, 2024) as well as research on the mathematical representation of qualia (Stanley, 1999; Kleiner, 2020; Tsuchiya and Saigo, 2021; Resende, 2022) could be unified into a single framework.
But as of today, whether a translation of these ideas into a formal language is at all possible seems to be under-researched in the science of consciousness. If we had a categorical representation from first (personal) principles, this could substantially constrain our models of the neural basis of consciousness. We encountered one such idea when thinking about the construction of the category of consciousness in Sect. 3: if the morphisms in the category of consciousness are to capture the phenomenological idea of “constitution”, purely structural (anatomical) relations in the category of the brain are ruled out.
Given our emphasis on its integrating character—and indeed consciousness science, most of all, needs the integration of individual models (Wiese, 2020; Signorelli et al., 2021a; Northoff and Zilio, 2022)—we envision that category theory is well suited to help bring together mathematical theories with philosophically or psychologically informed treatments. So, while we think that the project of constructing categories to represent conscious experience is far from trivial, we are also confident that (at least some aspects of) consciousness could be formalized this way.
Specifically, if one is also interested in applications of consciousness science to artificial intelligence, then a first step would be to develop a basic mathematical representation of consciousness that could be translated to computational architectures. This article has discussed some first steps in that direction while also acknowledging the pervasive role of the first-personal perspective in setting up such a project. Now it’s time for implementation.
Notes
Or more specifically: finding the “neural correlates of consciousness" or NCC; see definition below.
We use this as shorthand for the category of “neural information processing” for purely mnemonic reasons. Indeed, notice the shift in meaning. We are primarily intending to talk about categories of certain types of information processing or computation. The fact that these, according to mainstream opinion in the field, happen to be realized in neural systems is not essential and might even distract from the larger message: the category of the brain is but one possible instantiation of a more general, formal structure.
As stressed here, category theory is primarily about patterns and relations—not about the specific material that realizes those things. From this it automatically follows that a less neurocentric viewpoint needs to be taken.
At least if one does not follow IIT-proponents in basing the theory on so-called “phenomenological axioms” and regarding the “postulates” as unique realizers (or physical models) of these axioms.
Originally termed a “maximally irreducible conceptual structure”, MICS, but later referred to as the “cause-effect” or “\(\Phi \)-structure” of a physical system.
In the language preferred by IIT-proponents, “correlating physical system” should be replaced by “physical substrate of consciousness” (PSC). Correlation, according to canonical readings of IIT, is a much too weak notion. Since the central identity proposes an identity, IIT supposedly specifies both necessary and sufficient conditions, and thus not merely a correlate. The acceptance of axioms and their unique specification in terms of postulates decides whether one should speak of correlation or identity. Logically, identity implies correlation, but not the other way around. For this reason, it seems better to use the weaker notion of “correlation” throughout this paper. To be even more precise, there is still a difference between necessary and sufficient conditions, on the one hand, and identity on the other—something that (higher) category theory could illustrate using different notions of categorical equivalence, as applied outside of pure mathematics.
Some of the morphisms are composed in sequence, some in parallel.
Trivially, upon not inserting anything into the machine, nothing happens and the machine stays in its state. Also, the vendor is “coke-only”.
More precisely, NEUR also includes the states of the neural system at different times (hence evolutive) as well as all the co-limits (hence hierarchical) defined at each step. Evolution + Hierarchy \(\rightsquigarrow \) Memory.
In the quantum setting, it would thus be much more suggestive to treat gates as morphisms. Also a perspective based on applied category theory would suggest as much, identifying a gate with a process that turns inputs into outputs. The same is true for neurons.
Thanks to an anonymous reviewer for pointing this out to me.
At least if we do not want to prove that our formalization indeed satisfies all requirements of a category, which is sometimes highly non-trivial.
There exist further concepts of category theory such as (co-)limits, fibrations, indexed categories, or higher category theory. But we will not go into further details about these at this point and instead deepen the discussion of functorial relations. One of the dangers of applying category theory to the domain of consciousness science lies exactly in a premature advancement without making sure that even very basic concepts make sense. Some concepts such as natural transformations are crucial components when we want to find the “right” structure (and have already been appealed to in the literature, see e.g. discussions in (Northoff et al., 2019) or (Tsuchiya and Saigo, 2021)), but we will not go into details in this paper.
IIT can give you information about whether a physical system that satisfies the postulates correlates with experience. From this, it does not follow that such systems “really” are conscious—in a more philosophical sense (expressing necessity), and not an empirical one (expressing contingency).
References
Abramsky, S., & Coecke, B. (2008). Categorical quantum mechanics. In Engesser, K., Gabbay, D. M., & Lehmann, D. (Eds.) Handbook of quantum logic and quantum structures (pp. 261–323). Elsevier.
Albantakis, L., Barbosa, L., Findlay, G., Grasso, M., Haun, A. M., Marshall, W., ... Tononi, G. (2023). Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms. PLoS Computational Biology, 19(10), e1011465. https://doi.org/10.1371/journal.pcbi.1011465
Albantakis, L., Prentner, R., & Durham, I. T. (2023). Computing the integrated information of a quantum mechanism. Entropy, 25(3), 449. https://doi.org/10.3390/e25030449
Atmanspacher, H., & Prentner, R. (2022). Desiderata for a viable account of psychophysical correlations. Mind and Matter, 20(1), 63–86.
Baars, B. J. (2005). Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Progress in Brain Research, 150, 45–53. https://doi.org/10.1016/s0079-6123(05)50004-9
Bach, J. (2019). The cortical conductor theory: Towards addressing consciousness in ai models. In Samsonovich, A. V. (Ed.) Biologically inspired cognitive architectures 2018 (pp. 16–26).
Bayne, T. (2018). On the axiomatic foundations of the integrated information theory of consciousness. Neuroscience of Consciousness, 4(1), niy007. https://doi.org/10.1093/nc/niy007
Blum, L., & Blum, M. (2021). A theoretical computer science perspective on consciousness. Journal on Artificial Intelligence and Consciousness, 8(1), 1–42. https://doi.org/10.1142/S2705078521500028
Bradley, T.-D. (2018). What is applied category theory? https://doi.org/10.48550/arXiv.1809.05923
Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., & Deane, G.e. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. http://arxiv.org/abs/2308.08708
Chalmers, D. J. (1996). The conscious mind. In search of a fundamental theory. Oxford University Press.
Chalmers, D. J. (2000). What is a neural correlate of consciousness? In Metzinger, T. (Ed.) Neural correlates of consciousness: Empirical and conceptual questions (pp. 17–40). MIT Press.
Chalmers, D. J. (2002). Consciousness and its place in nature. In D. J. Chalmers (Ed.), Philosophy of mind. Classical and contemporary readings (pp. 247–272). Oxford University Press.
Chalmers, D. J. (2004). How can we construct a science of consciousness? In M. Gazzaniga (Ed.), The Cognitive Neurosciences III (pp. 1111–1120). MIT Press.
Chis-Ciure, R. (2022). The transcendental deduction of Integrated Information Theory: Connecting the axioms, postulates, and identity through categories. Synthese. https://doi.org/10.1007/s11229-022-03704-z
Clark, A. (1993). Sensory qualities. Oxford University Press.
Coecke, B. (2021). Compositionality as we see it, everywhere around us. https://doi.org/10.48550/arXiv.2110.05327
Cogitate Consortium, Ferrante, O., Gorska-Klimowska, U., Henin, S., Hirschhorn, R., Khalaf, A., ... Melloni, L. (2023). An adversarial collaboration to critically evaluate theories of consciousness. https://doi.org/10.1101/2023.06.23.546249
Corfield, D. (2017). Duality as a category-theoretic concept. Studies in History and Philosophy of Modern Physics, 59, 55–61. https://doi.org/10.1016/j.shpsb.2015.07.004
Doerig, A., Schurger, A., Hess, K., & Herzog, M. H. (2019). The unfolding argument: Why iit and other causal structure theories cannot explain consciousness. Consciousness & Cognition, 72, 49–59. https://doi.org/10.1016/j.concog.2019.04.002
Ehresmann, A. C., & Vanbremeersch, J.-P. (2007). Memory evolutive systems: Hierarchy, emergence, cognition. Elsevier.
Ellia, F., Hendren, J., Grasso, M., Kozma, C., Mindt, G., Lang, J. P., ... Tononi, G. (2021). Consciousness and the fallacy of misplaced objectivity. Neuroscience of Consciousness, 2021(2), niab032. https://doi.org/10.1093/nc/niab032
Fields, C., Hoffman, D. D., Prakash, C., & Singh, M. (2018). Conscious agent networks: Formal analysis and application to cognition. Cognitive Systems Research, 47, 186–213. https://doi.org/10.1016/j.cogsys.2017.10.003
Fink, S. B. (2016). A deeper look at the “neural correlate of consciousness’’. Frontiers in Psychology, 7, 1044. https://doi.org/10.3389/fpsyg.2016.01044
Fong, B., & Spivak, D. I. (2019). An invitation to applied category theory. seven sketches in compositionality. Cambridge University Press.
Freeman, J., & Simoncelli, E. P. (2011). Metamers of the ventral stream. Nature Neuroscience, 14, 1195–1201. https://doi.org/10.1038/nn.2889
Gallagher, S., & Zahavi, D. (2008). The phenomenological mind: An introduction to the philosophy of mind and cognitive science. Routledge.
Haun, A., & Tononi, G. (2019). Why does space feel the way it does? Towards a principled account of spatial experience. Entropy, 21(12), 1160. https://doi.org/10.3390/e21121160
Hoffman, D. D., & Prakash, C. (2014). Objects of consciousness. Frontiers in Psychology, 5, 577. https://doi.org/10.3389/fpsyg.2014.00577
Hoffman, D. D., Prakash, C., & Prentner, R. (2023). Fusions of consciousness. Entropy, 25(1), 129. https://doi.org/10.3390/e25010129
Husserl, E. (2012). Ideas. General introduction to pure phenomenology. Routledge.
Kanai, R., Chang, A., Yu, Y., Magrans de Abril, I., Biehl, M., & Guttenberg, N. (2019). Information generation as a functional basis of consciousness. Neuroscience of Consciousness, 5(1), 016.
Kanai, R., & Tsuchiya, N. (2012). Qualia. Current Biology, 22(10), R392–R396. https://doi.org/10.1016/j.cub.2012.03.033
Kleiner, J. (2020). Mathematical models of consciousness. Entropy, 22(6), 609. https://doi.org/10.3390/e22060609
Kleiner, J. (2024). Towards a structural turn in consciousness science. Consciousness & Cognition, 119, 103653. https://doi.org/10.1016/j.concog.2024.103653
Kleiner, J., & Hoel, E. (2021). Falsification and consciousness. Neuroscience of Consciousness, 1, niab001. https://doi.org/10.1093/nc/niab001
Kleiner, J., & Tull, S. (2021). The mathematical structure of integrated information theory. Frontiers in Applied Mathematics and Statistics, 6, 602973. https://doi.org/10.3389/fams.2020.602973
Koch, C., Massimini, M., Boly, M., & Tononi, G. (2016). Neural correlates of consciousness: Progress and problems. Nature Reviews Neuroscience, 17, 307–321. https://doi.org/10.1038/nrn.2016.22
Lawvere, F. W., & Schanuel, S. H. (2009). Conceptual mathematics: A first introduction to categories (2nd ed.). Cambridge University Press.
Lee, A. Y. (2021). Modeling mental qualities. The Philosophical Review, 130, 263–298. https://doi.org/10.1215/00318108-8809919
Lepauvre, A., & Melloni, L. (2021). The search for the neural correlate of consciousness: Progress and challenges. Philosophy and the Mind Sciences. https://doi.org/10.33735/phimisci.2021.87
Levin, J. (2021). Functionalism. In: E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2021 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2021/entries/functionalism/
List, C. (2023). The many-worlds theory of consciousness. Noûs, 57, 316–340. https://doi.org/10.1111/nous.12408
Lyre, H. (2022). Neurophenomenal structuralism. A philosophical agenda for a structuralist neuroscience of consciousness. Neuroscience of Consciousness, 2022(1), niac012. https://doi.org/10.1093/nc/niac012
Mac Lane, S. (1997). The PNAS way back then. Proceedings of the National Academy of Sciences of the United States of America, 94, 5983–5985. https://doi.org/10.1073/pnas.94.12.5983
Mac Lane, S. (1998). Category theory for the working mathematician. Springer.
Maldacena, J. M. (1999). The large N Limit of superconformal field theories and supergravity. International Journal of Theoretical Physics, 38, 1113–1133. https://doi.org/10.1023/A:1026654312961
Marbach, E. (2009). Towards a formalism for expressing structures of consciousness. In S. Gallagher & D. Schmicking (Eds.), Handbook of phenomenology and cognitive science (pp. 57–81). Springer.
Mashour, G. A., Roelfsema, P., Changeux, J.-P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105(5), 776–798. https://doi.org/10.1016/j.neuron.2020.01.026
McLaughlin, B. (2006). Is role-functionalism committed to epiphenomenalism? Journal of Consciousness Studies, 13(1–2), 39–66.
McQueen, K. J., Durham, I. T., & Müller, M. P. (2023). Building a quantum superposition of conscious states with integrated information theory. arXiv:2309.13826
Merker, B., Williford, K., & Rudrauf, D. (2022). The integrated information theory of consciousness: A case of mistaken identity. Behavioral and Brain Sciences, 45, e41. https://doi.org/10.1017/S0140525X21000881
Negro, N. (2022). Axioms and postulates: Finding the right match through logical inference. Behavioral and Brain Sciences, 45, e41. https://doi.org/10.1017/S0140525X2100193X
Ney, A. (2021). From quantum entanglement to spatiotemporal distance. In: C. Wüthrich, B. Le Bihan, & N. Hugget (Eds.), Philosophy beyond spacetime. implications from quantum gravity (pp. 78–102). Oxford University Press.
nLab authors (2022a). Equivalence of categories. https://ncatlab.org/nlab/show/equivalence+of+categories. (Revision 43)
nLab authors (2022b). Higher category theory. http://ncatlab.org/nlab/show/higher+category+theory. (Revision 78)
nLab authors (2024, April). duality. https://ncatlab.org/nlab/show/duality. (Revision 50)
Northoff, G., Tsuchiya, N., & Saigo, H. (2019). Mathematics and the brain: A category theoretical approach to go beyond the neural correlates of consciousness. Entropy, 21(12), 1234. https://doi.org/10.3390/e21121234
Northoff, G., Wainio-Theberge, S., & Evers, K. (2020). Is temporo-spatial dynamics the “common currency’’ of brain and mind? In Quest of “Spatiotemporal Neuroscience’’. Physics of Life Reviews, 33, 34–54. https://doi.org/10.1016/j.plrev.2019.05.002
Northoff, G., & Zilio, F. (2022). From shorter to longer timescales: Converging integrated information theory (IIT) with the temporo-spatial theory of consciousness (TTC). Entropy, 24(2), 270. https://doi.org/10.3390/e24020270
Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the phenomenology to the mechanisms of consciousness: Integrated Information Theory 3.0. PLoS Computational Biology, 10(5), e1003588. https://doi.org/10.1371/journal.pcbi.1003588
Petitot, J., Varela, F. J., Pachoud, B., & Roy, J.-M. (Eds.). (1999). Naturalizing phenomenology. Issues in contemporary phenomenology and cognitive science. Stanford University Press.
Prentner, R. (2019). Consciousness and topologically structured phenomenal spaces. Consciousness and Cognition, 12(1), 93–118. https://doi.org/10.1016/j.concog.2019.02.002
Prentner, R. (2021). Dr Goff, tear down this wall! the interface theory of perception and the science of consciousness. Journal of Consciousness Studies, 28(9–10), 91–103. https://doi.org/10.53765/20512201.28.9.091
Prentner, R. (2024). Mathematized phenomenology and the science of consciousness. (https://osf.io/preprints/psyarxiv/8d2mf)
Putnam, H. (1967). Psychological predicates. In W. H. Capitan & D. D. Merrill (Eds.), Art, mind and religion (pp. 158–167). University of Pittsburgh Press.
Ramstead, M. J. D., Albarracin, M., Kiefer, A., Klein, B., Fields, C., Friston, K., & Safron, A. (2023). The inner screen model of consciousness: Applying the free energy principle directly to the study of conscious experience. https://doi.org/10.48550/arXiv.2305.02205
Resende, P. (2022). Qualia as physical measurements: A mathematical model of qualia and pure concepts. https://doi.org/10.48550/arXiv.2203.10602
Rickles, D. (2013). AdS/CFT duality and the Emergence of Spacetime. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 44(3), 312–320. https://doi.org/10.1016/j.shpsb.2012.06.001
Rosenthal, D. (2010). How to think about mental qualities. Philosophical Issues, 20, 368–393. https://doi.org/10.2307/41413557
Rudrauf, D., Bennequin, D., Granic, I., Landini, G., Friston, K., & Williford, K. (2017). A mathematical model of embodied consciousness. Journal of Theoretical Biology, 428, 106–131. https://doi.org/10.1016/j.jtbi.2017.05.032
Ryu, S., & Takayanagi, T. (2006). Aspects of holographic entanglement entropy. Journal of High Energy Physics, 2006(8), 045. https://doi.org/10.1088/1126-6708/2006/08/045
Signorelli, C. M., Cea, I., & Prentner, R. (2022). We need to explain subjective experience, but its explanation may not be mechanistic. https://psyarxiv.com/e6kdg/
Signorelli, C. M., Szczotka, J., & Prentner, R. (2021). Explanatory profiles of models of consciousness - towards a systematic classification. Neuroscience of Consciousness, 2021(2), niab021. https://doi.org/10.1093/nc/niab021
Signorelli, C. M., Wang, Q., & Coecke, B. (2021). Reasoning about conscious experience with axiomatic and graphical models. Consciousness and Cognition, 95, 103168. https://doi.org/10.1016/j.concog.2021.103168
Singhal, I., Mudumba, R., & Srinivasan, N. (2022). In search of lost time: Integrated information theory needs constraints from temporal phenomenology. Philosophy and the Mind Sciences. https://doi.org/10.33735/phimisci.2022.9438
Smith, D. W. (2018). Phenomenology. In Zalta, E. N. (Ed.), The Stanford encyclopedia of philosophy (Summer 2018 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2018/entries/phenomenology/
Stanley, R. P. (1999). Qualia space. Journal of Consciousness Studies, 6(1), 49–60.
Taguchi, S. (2019). Mediation-based phenomenology: Neither subjective nor objective. Metodo, 7(2), 17–44.
Taguchi, S., & Saigo, H. (2023). The monoid-now: A category theoretic approach to the structure of phenomenological time-consciousness. Frontiers in Psychology, 14, 1237984. https://doi.org/10.3389/fpsyg.2023.1237984
Tononi, G., Albantakis, L., Boly, M., Cirelli, C., & Koch, C. (2022). Only what exists can cause: An intrinsic view of free will. https://doi.org/10.48550/arXiv.2206.02069
Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: From consciousness to its physical substrate. Nature Reviews Neuroscience, 17(7), 450–461. https://doi.org/10.1038/nrn.2016.44
Tsuchiya, N., & Saigo, H. (2021). A relational approach to consciousness: Categories of level and contents of consciousness. Neuroscience of Consciousness, 2021(2), niab034. https://doi.org/10.1093/nc/niab034
Tsuchiya, N., Taguchi, S., & Saigo, H. (2016). Using category theory to assess the relationship between consciousness and integrated information theory. Neuroscience Research, 107, 1–7. https://doi.org/10.1016/j.neures.2015.12.007
Tull, S., & Kleiner, J. (2021). Integrated information in process theory. Journal of Cognitive Science, 22(2), 92–123.
Van Gulick, R. (2022). Consciousness. In E. N. Zalta & U. Nodelman (Eds.), The Stanford encyclopedia of philosophy (Winter 2022 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2022/entries/consciousness/
Van Raamsdonk, M. (2010). Building up spacetime with quantum entanglement. General Relativity and Gravitation, 42, 2323–2329. https://doi.org/10.1007/s10714-010-1034-0
Velmans, M. (2009). Understanding consciousness. Routledge.
Wiese, W. (2020). The science of consciousness does not need another theory, it needs a minimal unifying model. Neuroscience of Consciousness, 2020(1), niaa013. https://doi.org/10.1093/nc/niaa013
Yoshimi, J. (2007). Mathematizing phenomenology. Phenomenology and the Cognitive Sciences, 6(3), 271–291. https://doi.org/10.1007/s11097-007-9052-4
Yoshimi, J. (2014). The metaphysical neutrality of Husserlian phenomenology. Husserl Studies, 31(1), 1–15. https://doi.org/10.1007/s10743-014-9163-z
Yoshimi, J. (2016). Husserlian phenomenology: A unifying interpretation. Springer.
Young, B. D., Keller, A., & Rosenthal, D. (2014). Quality-space theory of olfaction. Frontiers in Psychology, 5, 1. https://doi.org/10.3389/fpsyg.2014.00001
Zahavi, D. (2017). Husserl’s legacy. Oxford University Press.
Acknowledgements
We report no competing interests. We thank LMU Munich and ShanghaiTech University for institutional support and two anonymous referees for their comments.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Prentner, R. Category theory in consciousness science: going beyond the correlational project. Synthese 204, 69 (2024). https://doi.org/10.1007/s11229-024-04718-5