
Could robots be phenomenally conscious?


Abstract

In a recent book (Tye 2017), Michael Tye argues that we have reason to attribute phenomenal consciousness to robots that are functionally similar to us, such as Commander Data of Star Trek. He relies on a kind of inference to the best explanation, which he calls ‘Newton’s Rule’. I will argue that Tye’s liberal view of consciousness attribution fails for two reasons. First, it leads to an inconsistency in consciousness attributions. Second, and even more importantly, it fails because ceteris is not paribus. The big, categorical difference in history between Data-like robots on the one hand and human beings on the other defeats the ceteris paribus assumption, as various considerations show. So the inference rule cannot be applied. We should not attribute phenomenal consciousness to robots like Data.


Notes

  1. Apart from the phenomenal version of the problem of other minds (‘How do we know that other human beings have phenomenal consciousness?’), a further prominent epistemological problem is what Ned Block has called the ‘harder problem of consciousness’. Cf. Block (2002).

  2. Tye (2017). All page and chapter numbers in the text will refer to this text.

  3. What about robots that do have an evolutionary history, somehow of the same kind as ours? For such robots things might be different, but here I will focus exclusively on robots that have no such evolutionary history and have been constructed (directly) by human beings as their designers.

  4. Sameness talk here is always to be understood as talk of sufficient similarity. No strict identity is necessary.

  5. The attribute ‘superficial’ here is intended to indicate that we are talking about common-sense functional roles, not functional roles specified by a scientific psychological theory. Cf. Block (1978), who distinguishes between ‘Functionalism’ (the common-sense version) and ‘Psychofunctionalism’ (the version appealing to a scientific psychological theory). Tye’s qualification ‘superficial’ indicates Functionalism in this sense, as he tells us in a footnote (fn. 5, 180).

  6. Any pragmatic or moral reasons, if there could be any, will be ignored here.

  7. In my view, this epistemological question could also be formulated in terms of (epistemic) justification, without mentioning reasons. We would then ask: Is the belief that Data has experiences (prima facie and/or ultima facie) justified? – Tye officially puts the epistemological question in a comparative way: is it rational to prefer the hypothesis that Data has experiences to the hypothesis that Data lacks experiences? I think it does not matter for present purposes whether we use this comparative notion of epistemic standing or the more absolute notions of having evidence or justification for a belief.

  8. The argument is similar to what David Lewis acknowledged in his paper on mad pain and Martian pain (Lewis 1980): a mad Martian (being both physically and functionally very different from humans) would have to be attributed pain, too. Lewis’s solution is to introduce relativization to populations. The upshot is the following: sometimes it is the (standard) functional role that grounds being in pain, entirely independent of any population facts [as with Data]; sometimes it is belonging to a (human or nonhuman) population and being in an internal state, i, such that i standardly occupies the standard pain role in that population (though i does not occupy it in this being) [as with an Alzheimer human]. So sometimes being in pain is totally independent of one’s population and purely internal; sometimes it depends on belonging to a certain population and thus is highly relational and external. Since Alzheimer Data has no population and shares neither our brain state nor the functional role, he will not be assigned pain. In my view, the main shortcoming of this proposal is that it is too flexible: it allows systems that have nothing interesting in common to count as being in pain. This could never be a plausible account of the nature of pain. It sounds like causal-role functionalism, but actually it is not, since sometimes a highly relational, external fact (belonging to a population such that …) bestows pain on a system. – Tye is well advised not to adopt Lewis’s relativization strategy, since it works by way of a problematic theoretical element (and also belongs to the top-down approach that Tye refuses to adopt in the book, cf. xiv).

  9. There is one other way out, namely, to reject the analogous reasoning concerning the move from Data to Alzheimer Data. But what could be wrong with this? I do not see any reason to reject it if the corresponding move from ordinary human beings to Alzheimer humans is accepted. What remains, then, is the ceteris paribus condition.

  10. We can either take the facts that constitute this historical difference to be the defeater or our knowledge of this historical difference. Whether we go for the ‘psychologistic’ or the ‘non-psychologistic’ conception of defeaters (and reasons) does not matter for the present purposes. The argument can be stated in either way.

  11. Accepting this hypothesis need not mean full, outright belief, but can be understood as a weaker doxastic attitude, such as favoring the hypothesis over its negation.

  12. Cf. Dretske (1995), for example. Tye himself discusses historical versions of representationalism in Tye (2009), sc. 14.5, as a solution to the problems of the inverted spectrum and Inverted Earth.

  13. Physicalism is perhaps the most important naturalistic alternative, but it has its problems, most importantly, Ned Block’s ‘harder problem of consciousness’ and the problem of how to give an account of the intentionality involved in experiences. The harder problem of consciousness is basically an epistemic problem: the physicalist has no way of rationally deciding between two unattractive options when considering beings that are more or less functionally equivalent to us but physically very different: (i) their physical state realizing (more or less) the functional role of pain, e.g., counts as an alternative physical state that essentially brings pain with it (thus making pain physically very disjunctive), or (ii) it does not (thus being chauvinistic). The second big problem for (narrow) physicalism is intentionality. How could an internal physical state be about anything else at all? Prima facie, this question is unanswerable for an internalist physicalist. Intuitively, however, many phenomenal states are about something, for example, about one’s foot. The state somehow refers to or is about one’s foot when one feels a pain in the foot (also in cases of phantom limbs); the location is felt. To my knowledge, this problem for physicalists is not spelled out fully in the literature. But at least to some extent it is presented in Tye’s Ten Problems of Consciousness (1995), sc. 1.10, “The problem of felt location and phenomenal vocabulary”. – Note that a physicalist could not opt for ‘ceteris is not paribus’ in the same way as the proponents of the historical approach suggested here can. So physicalists will have a hard(er) time escaping Tye’s argument.

  14. The kind of representation relevant for our discussion is, of course, natural, non-conventional, mental representation.

  15. There are many theories of etiological functions. Among the most important ones are Millikan (1984), Sterelny (1990), Neander (1991), Papineau (1993), and Dretske (1995).

  16. What if phenomenal consciousness has no function? (Thanks to an anonymous referee for pressing me on this point.) – After all, one could doubt that phenomenal consciousness has a function, perhaps relying on Ned Block’s distinction between access consciousness and phenomenal consciousness (cf. Block 1995) or David Chalmers’ distinction between psychological consciousness and phenomenal consciousness (cf. Chalmers 1996, ch. 1). On the other hand, many empirical considerations have been advanced in favor of the thesis that phenomenal consciousness has a function (or several functions). Enabling flexible responses, unifying and/or integrating information, and making information available to central cognitive processing are among the most important ideas. (For a good summary with much empirical evidence, cf. Earl 2014.) Admittedly, if any kind of strict separation of phenomenal consciousness from the causal-functional worked, the case would be lost at this point.

  17. If representationalism about consciousness is right, the answer is more or less simple and obvious. Firstly, the primary place of consciousness is in genuine (conscious) perception, where the environment and/or body is veridically represented; the job of perceptual consciousness is something like providing information to certain ‘central’ cognitive systems. Secondly, other conscious states, such as imagery, are somehow derivative from perception, and they may have other purposes. (Note that multiple functions are possible as well.) – The proposal is of course (pretty much) the same as the one that Michael Tye has put forward in an earlier paper (Tye 1996).

  18. Cf. Dretske (1995), e.g.

  19. Cf., e.g., Fantl and McGrath (2002).

  20. For further discussions of swamp beings see Papineau (2001) and Lycan (2001). – Thanks to an anonymous referee for bringing up this topic.

  21. Actually, Tye speaks of different descriptions of bodily movements. I have rephrased everything in terms of different kinds of behavior in order to mark the distinction more sharply.

  22. Tye does appeal to behavior_3 in the case of animals when he responds to the criticism that attributing the ‘same behavior’ to animals is question-begging or anthropomorphizing (77). I have transferred his response to the similar question about robots.

  23. Cf., for example, Sosa (2015), Hyman (2015).

  24. Note that the ‘functional state’ here is merely a certain causal-dispositional state, not a functional state in the etiological sense (a teleofunctional state). One might add that in virtue of the designer’s intentions the functional state also has a certain job or purpose, but it will be merely a derivative, intention-dependent purpose – something very different from a natural or intrinsic purpose or teleofunction.

  25. I am grateful to Christian Loew, Hannes Fraissler, and Susanne Mantel for helpful discussions.

References

  • Block, N. (1978). “Troubles with functionalism”. Minnesota Studies in the Philosophy of Science, 9, 261–325.

  • Block, N. (1995). “On a confusion about a function of consciousness”. Behavioral and Brain Sciences, 18(2), 227–247.

  • Block, N. (2002). “The harder problem of consciousness”. Journal of Philosophy, 99(8), 391–425.

  • Chalmers, D. (1996). The conscious mind. OUP.

  • Dretske, F. (1995). Naturalizing the mind. MIT Press.

  • Earl, B. (2014). “The biological function of consciousness”. Frontiers in Psychology, 5, Article 697. doi:10.3389/fpsyg.2014.00697.

  • Fantl, J., & McGrath, M. (2002). “Evidence, pragmatics, and justification”. Philosophical Review, 111(1), 67–94.

  • Hyman, J. (2015). Action, knowledge, and will. OUP.

  • Lewis, D. (1980). “Mad pain and Martian pain”. In D. Lewis, Philosophical Papers, Vol. 1 (1983), 122–132. OUP.

  • Lycan, W. (2001). “The case for phenomenal externalism”. Philosophical Perspectives, 15, 17–35.

  • McLaughlin, B. (2003). “A naturalist phenomenal realist response to Block's harder problem”. Philosophical Issues, 13, 163–204.

  • Millikan, R. G. (1984). Language, thought, and other biological categories. MIT Press.

  • Neander, K. (1991). “The teleological notion of ‘function’”. Australasian Journal of Philosophy, 69(4), 454–468.

  • Papineau, D. (1993). Philosophical naturalism. Blackwell.

  • Papineau, D. (2001). “The status of teleosemantics, or how to stop worrying about swampman”. Australasian Journal of Philosophy, 79(2), 279–289.

  • Sosa, E. (2015). Judgment and agency. OUP.

  • Sterelny, K. (1990). The representational theory of mind. Blackwell.

  • Tye, M. (1995). Ten problems of consciousness. MIT Press.

  • Tye, M. (1996). “The function of consciousness”. Noûs, 30(3), 287–305.

  • Tye, M. (2009). “Representationalist theories of consciousness”. In B. McLaughlin, A. Beckermann, & S. Walter (Eds.), The Oxford handbook of philosophy of mind (253–268). OUP.

  • Tye, M. (2017). Tense bees and shell-shocked crabs. OUP.


Author information


Correspondence to Frank Hofmann.


Cite this article

Hofmann, F. Could robots be phenomenally conscious? Phenomenology and the Cognitive Sciences 17, 579–590 (2018). https://doi.org/10.1007/s11097-017-9528-9
