
Other Minds, Empathy, and Interstellar Communication


Part of the book series: The Frontiers Collection (FRONTCOLL)

Abstract

If an extraterrestrial intelligence should have the technological capacity to decode an interstellar message, or at least to receive our signal, then it is highly probable that its society would be based on a reasonably high degree of cooperation among its members. Cooperation, in turn, is hardly conceivable without an ability to understand and express emotions and intentions—an ability indispensable for initiating a communication process, even in the absence of a common code. This is the role of empathy—the affective understanding of other minds. As a psychological mechanism underlying complex types of cooperative behavior, empathy might thus be a psychological universal—a fairly widespread characteristic of intelligent life. In standard communicative situations on Earth, empathy is essential to both participants in the communication process. To optimize this process with respect to the resources employed, the sender is typically required to foresee what the receiver already knows. That is, one usually wants to structure a message so that only the necessary information gets explicitly encoded, leaving everything else—the potentially redundant part of the information content—implicit. In the case of interstellar communication, however, even an impoverished message, leaning heavily on the common context, might fail to get across. Overestimating the decoders’ capacities—being too optimistic about the aliens’ cognitive abilities or about the commensurability of their representational system with ours—may prove fatal for our project. To forestall this risk, I propose, and try to justify, the following guideline: even if our communicants are incapable of understanding the informative intention behind our message, they might still be able to understand our communicative intention—the intention simply to reveal our presence as intentional beings. For it is much more likely that they will be able to empathically recognize such an intention than to interpret a signal embodying an explicit representational content.


Notes

  1.

    Of course, if we were to encounter an alien creature exhibiting behavioral plasticity of such a degree, it would be irresistible to assume that it does possess a human-like mind. A more cautious attitude, however, would demand that we suspend this assumption and remain neutral on the question of the nature of our creature’s “inner life”—if it has one, that is. (Why not also leave open the possibility that our communicant is a zombie?) This is why, in my initial determination of alien mind, I want to remain as economical as possible, i.e., avoid any reference to potentially anthropomorphic notions, including “consciousness,” “experience,” “feelings,” etc. One way to do this is to refrain from mentalistic vocabulary altogether and speak of “mindlike behavior.” Another way is to make use of Daniel Dennett’s notion of an “intentional system.” By intentional system Dennett (1996, 34) means any entity “whose behavior is predictable/explicable from the intentional stance”—by treating it as an agent, i.e., by “attributing to it beliefs and desires on the basis of its perception of the situation and its goals and needs.” The point is that an entity, in order to be an intentional system, need not literally possess any beliefs and desires, i.e., be aware of its goals and needs as humans typically are; it need not, in fact, possess any distinctively mental features in the sense of our folk-psychological attributions. It need not even be a sentient being. After all, no one would explicitly attribute an agency-level mind to a jellyfish or a vacuum cleaner, notwithstanding the fact that we routinely understand and predict the behavioral reactions of such pseudo-agents by implicitly treating them as real (i.e., minded) agents. (Think of the usefulness of predicting the next move of a chess-playing computer by “reading” its “mind.”) This is not contradicted by the fact that, when prompted to weigh in on a creature’s mental status, we are often unable to make up our minds. (Do mice have minds? What about parrots? Industrial robots or chess-playing computers certainly do not. Or do they?) The advantage of Dennett’s instrumentalist notion is that it spares us from taking a stance on such issues—from formulating conjectures about the (unknowable) mental underpinnings of someone’s (or something’s) behavioral characteristics. In the case of our interstellar communicants, however, we do have a good reason to take such a stance.

  2.

    Types of mental representation are here given in quotation marks in order to avoid their literal, i.e., anthropomorphic reading.

  3.

    The term was invented by Stewart and Cohen (1999).

  4.

    Think of a Platonic, totalitarian society ruled by an intellectual elite, or by a single individual, responsible for strategic decision-making, innovations, and technological progress. Imagine that this ruling class or individual is served by all other members of the society, each obediently fulfilling its limited task (like workers in an ant or bee colony) to the benefit of the population as a whole, but without any noteworthy interaction with other members of the society. In such a case, we would have a high level of behavioral coordination and social division of labor without real cooperation between individual members. See Barkow (2013, in this volume) for this kind of scenario. Cohen and Stewart (2002, 284) warn that “if an alien had sufficiently great intelligence, then the relevant store of know-how would not be beyond the capacity of any individual, and extelligence would be unnecessary.”

  5.

    Thanks to the Kepler mission, a space-based telescope, more exoplanets were discovered in 2011 alone than in all previous years combined. Data gathered during the Kepler mission from March 2009 to May 2013 yielded more than 3500 planet candidates, of which more than 130 have already been confirmed as planets (NASA Exoplanet Archive 2013).

  6.

    They speak of a “phase space of imaginative possibilities,” borrowing the idea from Henri Poincaré (Cohen and Stewart 2002, 18).

  7.

    Extelligence is a case in point. See Cohen and Stewart (2002), Chap. 12.

  8.

    See Lem (1974), Chap. III (“Cosmic Civilizations”).

  9.

    This definition is adopted from Janović et al. (2003, 811).

  10.

    Some authors take phenomenal properties to be ontological primitives. According to David Chalmers (1996), even the smallest ingredients of the physical world (elementary particles) can have non-physical properties that are proto-constituents of consciousness (“protophenomenal properties”). When elementary particles combine into larger structures, their protophenomenal properties also combine, producing fully structured conscious experience.

  11.

    They are also causally connected, as exemplified by at least one lineage of terrestrial evolution.

  12.

    For an elucidation of the relation between consciousness, empathy, and its rudimentary social aspects (intersubjectivity), see Thompson (2001). In this volume, David Dunér (2013) makes an interesting attempt at extending Thompson’s arguments to extraterrestrial conditions.

  13.

    For an overview of some of these theories and their historical roots see Janović et al. (2003).

  14.

    Searle (1990, 414–415; cited in Tomasello 2008, 73) speaks of a “background sense of the other as a candidate for cooperative agency.” He takes this special kind of sense—functionally equivalent to empathy—to be a “necessary condition of all collective behavior,” including communication.

  15.

    Even when used in this-worldly contexts—i.e., when applied to known types of minds—“empathy” is a vague term referring to an unspecified variety of mental phenomena. This is why there are so many definitions of empathy. Batson (2009) tries to bring some order into this chaotic field by identifying eight basic uses of the term. He sees each of these uses as motivated by researchers’ need to answer at least one of two crucial questions: “How can one know what another person is thinking and feeling? What leads one person to respond with sensitivity and care to the suffering of another?” (Batson 2009, 3). My use of the term is closest to Batson’s first and most general definition of empathy: “knowing another person’s internal state, including his or her thoughts and feelings.” Some authors call this type of empathy “cognitive empathy,” while others speak of “mindreading.” But the term itself is irrelevant for the point I am trying to make. What is important is that empathic recognition of intentions (and other mental states) need not function as an inference from a hidden knowledge store (implicit “theory of mind”), requiring special cognitive abilities. It needn’t be a conceptually mediated process at all. The alternative way to achieve the same goal is to simulate (“mirror”) another mind when prompted by relevant behavioral/situational cues. In fact, it is this simulation model that best fits recent empirical findings (Gallese 2001; Gallese and Goldman 1998).

  16.

    This is the main idea of the “code model” or the “information-processing” approach to communication. According to Shannon and Weaver’s (1949) seminal account, all communication is a kind of encoding–decoding activity governed by a system of rules (“code”) shared by the participants in the process. The rules enable “messages” (internal representations of objects or states of affairs) to be paired with “signals” (modifications of the external environment) in a systematic way. As a consequence, the same message (mental representation) occurs at both ends of the communication process. This mediating procedure is necessary for obvious reasons: the messages themselves, as they are defined, cannot travel through space–time, and therefore cannot be directly conveyed. No telepathy is possible.
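    Illustratively, here is a minimal sketch of the code model in Python. It is a toy example of my own, not anything from Shannon and Weaver: the two-entry codebook, the message labels, and the function names are all invented for the illustration.

```python
from typing import Dict, Optional

# Hypothetical shared "code": systematic pairings of internal messages
# (mental representations) with external signals. Communication in the
# code model works only because both parties hold this table.
SHARED_CODE: Dict[str, str] = {
    "greeting": "101",
    "warning": "110",
}
INVERSE_CODE = {signal: message for message, signal in SHARED_CODE.items()}

def encode(message: str) -> str:
    """Sender: pair an internal representation with a transmissible signal."""
    return SHARED_CODE[message]

def decode(signal: str, inverse_code: Dict[str, str]) -> Optional[str]:
    """Receiver: recover the message, but only via a shared inverse code.

    Returns None when no shared code exists."""
    return inverse_code.get(signal)

# The same message occurs at both ends of the process...
assert decode(encode("greeting"), INVERSE_CODE) == "greeting"
# ...but a receiver without the sender's code recovers nothing, however
# faithfully the signal itself arrives (the interstellar worry in miniature).
assert decode(encode("greeting"), {}) is None
```

    The last line is the point of the sketch: on the code model, decoding presupposes the inverse mapping, so a receiver that does not share our code cannot recover the message no matter how well the signal travels.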

  17.

    See Wittgenstein (1953) and Tomasello (2008, 58–59).

  18.

    These two steps need not be psychologically distinct: neither the communicator nor the communicant need experience them as distinct.

  19.

    It is thereby assumed that this very premise (the explicit part of the message) is simply decoded. In other words, encoding/decoding processes cannot be entirely replaced by inference.

  20.

    Contexts thus relate not only to wider stretches of discourse, or to the physical contingencies of the communication process (time, place, properties of the natural or social environment, etc.), but also to everything else involved in this process. Sperber and Wilson (1986, 15) highlight this broader meaning of context:

    [A] context is a psychological construct, a subset of the hearer’s assumptions about the world. It is these assumptions, of course, rather than the actual state of the world, that affect the interpretation of an utterance. The context in this sense is not limited to information about the immediate physical environment or the immediately preceding utterances: expectations about the future, scientific hypotheses or religious beliefs, anecdotal memories, general cultural assumptions, beliefs about the mental state of the speaker, may all play a role in interpretation.

  21.

    One might also call it—following Sperber and Wilson (1986)—the “inferential model.” The difference is just one of emphasis. In their idiom, it is the means (inference) that is pointed out. In the expression I favor, it is the goal (recognition of intention) that is given priority (bearing in mind that inference is not the only means for achieving this goal, as some recent findings strongly suggest).

  22.

    Grice (1957) calls the “meaning” conveyed in this way “natural meaning.” Black clouds “mean” rain in this, and only in this sense. A being capable of intentions can also exhibit the non-intentional, “natural” kind of meaning: a bruise on my forehead “means” injury (as long as I am not showing it to somebody deliberately).

  23.

    David Dunér (2013, in this volume) has proposed a very interesting solution for at least some of these hard problems. His solution is based on Tomasello’s concept of “joint attention.” In many respects, the conceptual foundations of Dunér’s proposal are the same as mine, as is his choice of the issues relevant (but often neglected as such) for thinking about interstellar communication and its principal constraints.

  24.

    As Sperber (1995, 198) put it, “[y]ou have to be doubly intelligent to see the intelligence in others. You need the ability to represent in your own mind the mental representations of other creatures. You need, that is, the ability to entertain representations of representations, what, in our jargon, we call ‘meta-representations’.”

  25.

    As Cohen and Stewart (2002, 285–291) show, even mathematical concepts might not be universal—they might be invented, not discovered.

References

  • Barkow, Jerome H. 2013. “Eliciting Altruism While Avoiding Xenophobia: A Thought Experiment.” In Extraterrestrial Altruism: Evolution and Ethics in the Cosmos, edited by Douglas A. Vakoch, 37–48. Heidelberg: Springer.

  • Baron-Cohen, Simon. 1995. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.

  • Basalla, George. 2006. Civilized Life in the Universe: Scientists on Intelligent Extraterrestrials. New York: Oxford University Press.

  • Batson, C. Daniel. 2009. “These Things Called Empathy: Eight Related But Distinct Phenomena.” In The Social Neuroscience of Empathy, edited by Jean Decety and William Ickes, 3–16. Cambridge, MA: MIT Press.

  • Chalmers, David. 1996. The Conscious Mind. New York: Oxford University Press.

  • Cohen, Jack, and Ian Stewart. 2002. What Does a Martian Look Like?: The Science of Extraterrestrial Life. Hoboken, NJ: John Wiley and Sons.

  • Dennett, Daniel. 1996. Kinds of Minds: Towards an Understanding of Consciousness. London: Weidenfeld and Nicolson.

  • Dunér, David. 2013. “Interstellar Intersubjectivity: The Significance of Shared Cognition for Communication, Empathy, and Altruism in Space.” In Extraterrestrial Altruism: Evolution and Ethics in the Cosmos, edited by Douglas A. Vakoch, 141–167. Heidelberg: Springer.

  • Gallese, Vittorio, and Alvin Goldman. 1998. “Mirror Neurons and the Simulation Theory of Mindreading.” Trends in Cognitive Sciences 2:493–501.

  • Gallese, Vittorio. 1999. “From Grasping to Language: Mirror Neurons and the Origin of Social Communication.” In Towards a Science of Consciousness, edited by Stuart R. Hameroff, Alfred W. Kaszniak, and David J. Chalmers, 165–178. Cambridge, MA: MIT Press.

  • Gallese, Vittorio. 2001. “The ‘Shared Manifold’ Hypothesis: From Mirror Neurons to Empathy.” Journal of Consciousness Studies 8:33–50.

  • Grice, Paul. 1957. “Meaning.” Philosophical Review 66:377–388.

  • Janović, Tomislav, Vladimir Ivković, Damir Nazor, Karl Grammer, and Veljko Jovanović. 2003. “Empathy, Communication, Deception.” Collegium Antropologicum 27:809–822.

  • Lem, Stanisław. 1974. Summa Technologiae. Krakow: Wydawnictwo Literackie.

  • Minsky, Marvin. 1985. “Why Intelligent Aliens Will Be Intelligible.” In Extraterrestrials: Science and Alien Intelligence, edited by Edward Regis, Jr., 117–128. Cambridge, UK: Cambridge University Press.

  • NASA Exoplanet Archive. 2013. Planets Count Page. Last modified July 31. http://exoplanetarchive.ipac.caltech.edu/docs/counts_detail.html.

  • Rescher, Nicholas. 1985. “Extraterrestrial Science.” In Extraterrestrials: Science and Alien Intelligence, edited by Edward Regis, Jr., 83–116. Cambridge, UK: Cambridge University Press.

  • Rescher, Nicholas. 1987. Scientific Realism: A Critical Appraisal. Dordrecht: Reidel.

  • Rizzolatti, Giacomo, and Michael A. Arbib. 1998. “Language Within Our Grasp.” Trends in Neurosciences 21:188–194.

  • Searle, John. 1978. “Literal Meaning.” Erkenntnis 13:207–224.

  • Searle, John. 1979. “Intentionality and the Use of Language.” In Meaning and Use, edited by Avishai Margalit, 181–197. Dordrecht: Kluwer.

  • Searle, John. 1990. “Collective Intentions and Actions.” In Intentions in Communication, edited by Philip R. Cohen, Jerry Morgan, and Martha E. Pollack, 401–416. Cambridge, MA: MIT Press.

  • Shannon, Claude, and Warren Weaver. 1949. The Mathematical Theory of Communication. Urbana: University of Illinois Press.

  • Sperber, Dan. 1995. “How Do We Communicate?” In How Things Are: A Science Toolkit for the Mind, edited by John Brockman and Katinka Matson, 191–199. New York: Morrow.

  • Sperber, Dan, and Deirdre Wilson. 1986. Relevance: Communication and Cognition. Cambridge, MA: Blackwell.

  • Stewart, Ian, and Jack Cohen. 1999. Figments of Reality. Cambridge, UK: Cambridge University Press.

  • Thompson, Evan. 2001. “Empathy and Consciousness.” Journal of Consciousness Studies 8:1–32.

  • Tomasello, Michael. 2008. Origins of Human Communication. Cambridge, MA: MIT Press.

  • Tooby, John, and Leda Cosmides. 1992. “The Psychological Foundations of Culture.” In The Adapted Mind: Evolutionary Psychology and the Generation of Culture, edited by Jerome Barkow, Leda Cosmides, and John Tooby, 19–136. New York: Oxford University Press.

  • Vakoch, Douglas. 1999. “The View from a Distant Star: Challenges of Interstellar Message Making.” Mercury 28(2):26–32.

  • Wittgenstein, Ludwig. 1953. Philosophical Investigations. Oxford: Basil Blackwell.


Author information


Correspondence to Tomislav Janović.


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Janović, T. (2014). Other Minds, Empathy, and Interstellar Communication. In: Vakoch, D. (eds) Extraterrestrial Altruism. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37750-1_11


  • DOI: https://doi.org/10.1007/978-3-642-37750-1_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-37749-5

  • Online ISBN: 978-3-642-37750-1

  • eBook Packages: Physics and Astronomy (R0)
