
Panpsychism and AI consciousness

  • Original Research
  • Published in: Synthese


Abstract

This article argues that if panpsychism is true, then there are grounds for thinking that digitally-based artificial intelligence (AI) may be incapable of having coherent macrophenomenal conscious experiences. Section 1 briefly surveys research indicating that neural function and phenomenal consciousness may both be analog in nature. We show that physical and phenomenal magnitudes—such as rates of neural firing and the phenomenally experienced loudness of sounds—appear to covary monotonically with the physical stimuli they represent, forming the basis for an analog relationship among the three. Section 2 then argues that if this is true and micropsychism—the panpsychist view that phenomenal consciousness or its precursors exist at a microphysical level of reality—is also true, then human brains must somehow manipulate fundamental microphysical-phenomenal magnitudes in an analog manner that renders them phenomenally coherent at a macro level. However, Sect. 3 argues that because digital computation abstracts away from microphysical-phenomenal magnitudes—representing cognitive functions non-monotonically in terms of digits (such as ones and zeros)—digital computation may be inherently incapable of realizing coherent macroconscious experience. Thus, if panpsychism is true, digital AI may be incapable of achieving phenomenal coherence. Finally, Sect. 4 briefly examines our argument’s implications for Tononi’s Integrated Information Theory (IIT) of consciousness, which we contend may need to be supplanted by a theory of macroconsciousness as analog microphysical-phenomenal information integration.
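The abstract's contrast between analog and digital representation can be illustrated with a toy sketch (ours, not the article's): in an analog code, a single physical magnitude covaries monotonically with the stimulus it represents, whereas in a binary code no individual digit does.

```python
# Illustrative sketch only: a toy "analog" and a toy "digital" encoding
# of a stimulus intensity ranging over 0..7.

def analog_code(stimulus: float) -> float:
    """A toy analog representation: a single magnitude that rises
    monotonically with the stimulus (bigger stimulus, bigger magnitude)."""
    return 0.5 * stimulus

def binary_code(stimulus: int, width: int = 3) -> str:
    """A toy digital representation: a fixed-width binary string."""
    return format(stimulus, f"0{width}b")

for s in range(8):
    print(s, analog_code(s), binary_code(s))

# The analog magnitudes increase step by step with the stimulus, but the
# individual bits of the binary code flip up and down (the last bit goes
# 0,1,0,1,...), so no single physical digit covaries monotonically with
# the stimulus.
```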



Notes

  1. Our argument differs substantially from another recent line of argument that (some) digital computers may be incapable of phenomenal consciousness (Tononi & Koch, 2015). Based on Tononi’s Integrated Information Theory (IIT) of consciousness, which holds that phenomenal consciousness is identical to maximally integrated information, Tononi and Koch argue that two systems can share the same input-output function while only one of them integrates information. On IIT, the system that does not—even if it otherwise functioned like a human brain—would be a “zombie” system with no conscious experience (see also Oizumi et al., 2014, pp. 19–22). Further, as Tononi & Koch (2015) and Koch (2019) elaborate, because current digital computers cannot integrate information in anything like the fine-grained way that human brains do (Koch, 2019, pp. 142–144), if IIT is true, then it may take neuromorphic electronic hardware “built according to the brain’s design principles” for AI to “amass sufficient intrinsic cause-effect power to feel like something” (ibid., p. 150; see also Tononi & Koch 2015, p. 16, fn. 15). Our argument is more radical than this in at least two respects. First, our argument implies that even a “neuromorphic” digital machine could fail to realize coherent macrophenomenal consciousness—for such a machine might still fail to manipulate fundamental microphysics in a way necessary for combining fundamental phenomenal qualities into a coherent macrophenomenal manifold. Second, as we explain in Sect. 4, our argument entails that IIT itself may be false. If we are correct, then the only way for an AI to have coherent macrophenomenal consciousness may be for it to be an analog machine that integrates fundamental microphysical-phenomenal magnitudes in the right way.

  2. There are other ways of naming numbers, such as using words in a natural language (e.g. “four”), using different kinds of numerical conventions (e.g. Roman numerals), or using purely conventional single symbols (e.g. “π”). Digital representations (and other numerical conventions) are a special kind of name that specifies the value of the number named as a function of the individual numerals. Chrisomalis (2020) discusses these systems—and many others—in fascinating detail.
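As a minimal illustration of this point (our sketch, not the article's), a positional numeral system names a number by making its value a function of the individual numerals and their places:

```python
def positional_value(digits: list[int], base: int) -> int:
    """Value named by a positional numeral: each digit is weighted by a
    power of the base determined by its position in the sequence."""
    value = 0
    for d in digits:
        value = value * base + d  # shift left one place, then add the digit
    return value

print(positional_value([4], 10))       # decimal "4" -> 4
print(positional_value([1, 0, 0], 2))  # binary "100" -> also 4
```

The same number can thus be named by different digit sequences under different conventions, which is the sense in which a digital representation is a special kind of name.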

  3. Analog computers are virtually never used for these purposes now, although they were once the dominant computing paradigm before the 1970s.

  4. “Non-analog” is typically taken to mean “digital”, but there are other schemes that are neither (e.g. the symbol “π”, mentioned in footnote 2 above).

  5. Although some readers may doubt whether hues are magnitudes—on the grounds that hue itself does not come in degrees—particular hues clearly do: something can be more or less red, as well as more or less of a combination of one hue with another (different orange hues, for example, are different graded combinations of red and yellow).

  6. We want to distinguish here between having a phenomenal experience of a banana and having an experience as of a banana, where the difference is as follows. As Fisher (2007) contends, perhaps all that a physical, functional, or phenomenal state must do in order to represent (or be of) a banana is to be causally or functionally related to banana(s) in the external environment in some way. That may well be the case. Still, whether a phenomenal experience resembles an actual banana in a coherent fashion (viz. first-personally looking like or seeming as of a small yellow fruit) also seems relevant to representation: namely, for qualitatively representing the banana in consciousness as it really is (Summers & Arvan, forthcoming, §3). Our point is that even if digital AI could represent bananas in Fisher’s externalist sense—visually “tracking” bananas in their environment—digital AI cannot do so in a manner that produces a coherent first-personal phenomenal experience as of yellow bananas.

  7. We thank an anonymous reviewer for pressing this concern.

  8. In a binary scheme, by contrast, two digits would be needed to represent four different values.
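To make the counting explicit (again a sketch of ours, not the article's): the number of digits a scheme needs grows logarithmically in the number of values to be distinguished, so binary needs two digits where a base-4 scheme needs only one.

```python
import math

def digits_needed(n_values: int, base: int) -> int:
    """Minimum number of base-`base` digits required to give each of
    n_values distinct values its own digit string: ceil(log_base(n))."""
    return max(1, math.ceil(math.log(n_values, base)))

print(digits_needed(4, 2))  # binary: 2 digits for four values
print(digits_needed(4, 4))  # base-4: a single digit suffices
```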


Acknowledgements


We are grateful to several sets of anonymous reviewers, Philippe Chuard, Gerardo Viera, and audience members at the 2021 Meeting of the Pacific Division of the APA for helpful feedback on earlier drafts of this paper.

Author information

Authors and Affiliations


Corresponding author

Correspondence to Marcus Arvan PhD.

Ethics declarations

Conflict of interest

The authors have no competing financial or non-financial interests directly or indirectly related to the work submitted for publication.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Arvan, M., Maley, C. Panpsychism and AI consciousness. Synthese 200, 244 (2022).

