CONSCIOUSNESS AND THE PROBLEM OF OTHER MINDS

Because a conception of consciousness based on direct self-awareness does not generalise easily to other cases, a fully general theory of consciousness under this conception may not be possible, as Ned Block (2003) has pointed out. In this paper, I discuss the possibility of doing away with the underlying idea, common in philosophical and scientific circles, that consciousness is a first-person concept in the first instance, from which we infer outwards to others. I suggest that notable Phenomenological descriptions of the centrality and automaticity of our experience of others are broadly in line with the evidence from psychology and the neurosciences; recent work on possible “mirror” and related mechanisms for engaging with the mental states of others, together with behavioural and developmental studies, all undermine the inferential approach to other minds. I defend the possibility that the conception of mentality that underlies our engagement with others from infancy does not arise from self-awareness, and that, if so, the way we theorise about consciousness in its most general sense may have to change.


INTRODUCTION
What do we believe when we believe that others are conscious? When are those beliefs justified? Attempts to answer these questions in the Analytic philosophical tradition have focussed on the idea that we arrive at a conception of mentality via direct self-awareness of some kind, and then attribute mentality under that conception to others through a process of inference, either analogical (other people are externally like me, so it is very likely that they are like me internally as well) or abductive (being conscious is the best way of explaining why other people move as they do). However, because a conception of consciousness based on direct self-awareness does not generalise easily to other cases, a fully general theory of consciousness under this conception may not be possible, as has been well argued by Ned Block (2003). In this paper, I discuss the possibility of doing away with the idea that consciousness is a first-person concept in the first instance, from which we infer outwards to others. Key thinkers in the Phenomenological tradition, notably Husserl and Merleau-Ponty, defended a non-inferential approach to other minds, as have some notable Analytic philosophers. Recent findings in the cognitive and brain sciences add significant weight to this idea. If it is true that our judgements about other minds are meaningfully "direct", then the idea that our conception of consciousness is derived solely from self-awareness will need to be discarded. I argue that this possibility has potentially far-reaching consequences for the search for a truly general theory of consciousness.

THE HARDER PROBLEM OF CONSCIOUSNESS
In the last few decades, significant progress has been made in understanding the brain processes underlying many aspects of conscious experience. There is a general optimism that we are within reach of being able to simulate, intervene in, and improve our conscious lives. In contrast to this justified optimism, however, there is a view, influential in some philosophical and scientific circles, that we are actually no closer to a general theory of consciousness. To understand this point of view it is important to distinguish between a theory of how the brain works and a fully general theory of consciousness. A general theory of consciousness must take seriously the possibility that a functioning human brain along primate lines is not the only way to achieve consciousness. In the same way that there are radically different ways of achieving self-powered flight (flapping feathered wings, propellers, jet engines), there may be radically different ways of achieving consciousness. And just as a thorough knowledge of jet engines does not entail an understanding of how sparrows fly, a thorough knowledge of the human brain may not entail an understanding of consciousness generally. This does not create a special difficulty in itself. Whether or not there is such a thing as a general theory of flight, there is no essential problem, because each individual case (bird, propeller, jet) can be understood on its own merits. To understand the added (some say insuperable) difficulty in the case of consciousness, it needs to be noted that in the case of flight there is no trouble in identifying individual cases as examples of flight.

ACTIVITAS NERVOSA SUPERIOR - Review Article. *Correspondence to: John O'Dea, email: john@jodea.net
Whether something can fly under its own power can straightforwardly be observed. How it achieves this can then be studied. The case of consciousness is different because whether something is conscious cannot be observed in this way, at least according to the prevailing view of consciousness as an essentially first-person observable phenomenon. If this view is correct, then picking out for study all and only the conscious beings can only be done indirectly. Ned Block (2003) has argued, persuasively in my view, that the best available way of sorting out the conscious from the non-conscious beings (i.e., indirectly) does not provide rational grounds for believing either way in cases of creatures which are sufficiently physically different from ourselves. Block argues that a general theory of consciousness is currently out of our reach because we have no way of knowing which creatures apart from ourselves are conscious. Because we lack criteria for judging when something is or isn't conscious apart from introspective observation, we don't have a starting point from which to theorise. He employs the possibility of a creature exactly like us in all everyday behavioural ways but entirely different physically (he uses the hypothetical example of a robot similar to the fictional character 'Commander Data' in the Star Trek television series). Such a creature, he argues, would in the first instance reasonably be judged to be conscious through a kind of analogical inference. But that inference, he says, would essentially be an inference from the presence of surface similarity to the presence of an underlying similarity (or an inference to a "common material basis", p.38). When we discover that there is no underlying physical similarity, the inference is undermined, but in a way that leaves the question of the creature's consciousness entirely open.
Describing this problem as an epistemic version, or aspect, of the famous "Hard Problem" of consciousness (Chalmers 1996), Block writes that: "The root of the epistemic problem is that the example of a conscious creature on which the science of consciousness is inevitably based is us… But how can science based on us generalise to a creature that doesn't share our physical properties? It would seem that a form of physicalism that could embrace other creatures would have to be based on them at least in part in the first place, but that cannot be done unless we already know whether they are conscious." (p.25) The reason that the science of consciousness is inevitably based on us is that the only known example of consciousness is ourselves and, by extension, creatures exactly like ourselves. We know about our own case because we are introspectively aware of being conscious, and we can infer that creatures similar enough physically are also conscious. But beyond that we have no way of telling.

THE INFERENTIAL ACCOUNT OF KNOWLEDGE OF THE EXISTENCE OF OTHER MINDS
The "harder" problem of consciousness, as Block defines it, arises because the way we have chosen the starting point (direct introspection) does not seem to allow the investigation to expand towards a truly general approach to consciousness. Although the prevailing view is that direct introspection is how we know in the first place that we ourselves are conscious, and is the basis for our very conception of consciousness, there are good reasons for thinking that this view may be mistaken. Challenging introspection as the right way of choosing a starting point for theorising about consciousness may be the only way of avoiding the problem that Block points to. The modern version of the problem of our knowledge of other minds is a quite recent one. John Stuart Mill is generally thought to have been the first to clearly raise it, in his Examination of Sir William Hamilton's Philosophy (1865). Mill is credited with the origin of the argument from analogy, according to which we know that others have minds because they are similar in many other observable ways. The problem for which the argument from analogy is a purported solution is: how do I know that others have an internal life like mine? Here is Mill on the problem: "By what evidence do I know, or by what considerations am I led to believe, that there exist other sentient creatures; that the walking and speaking figures which I see and hear, have sensations and thoughts, or in other words, possess Minds?... I conclude that other human beings have feelings like me, because, first, they have bodies like me, which I know in my own case to be the antecedent condition of feelings; and because, secondly, they exhibit the acts, and other outward signs, which in my own case I know by experience to be caused by feelings." (1865, Chap. 12) Within Analytic philosophy there has been considerable discussion about the validity of this kind of inference.
Though for some time it was regarded as the best response to the problem, increasingly from the middle of last century it came to be seen that an analogy from a single case (i.e. ourselves) to all others is too weak to provide sufficient grounds for believing in other minds (see Hyslop 2005). Today it is more commonly held that an inference to the best explanation is the best approach, but there is no real consensus. Wittgenstein is perhaps the best known for rejecting all inferential approaches to the other minds problem as, effectively, arguments to the wrong conclusion. The argument from analogy would have it that when I attribute a mind to another person, I attribute to them some mental quality that is available to me on introspection, except for the quality of being my own. But this is incoherent, he argued, since I cannot separate the qualities that I introspect from their being felt to be qualities of me (Wittgenstein 1953; §302). However, many early phenomenologists rejected the argument from analogy as a wholly inadequate account of how we actually come to believe in the existence of minds other than our own, and of what we believe when we do. Writing in 1912, Max Scheler objected to the argument from analogy as being too intellectual while ignoring the many disanalogies between my relation to my own body and the bodies of others, not to mention between humans and animals. For Husserl (1931/1969), my awareness of others is not mere awareness of others as special kinds of objects out there in the world. Rather, being aware of others changes the way we perceive the world. To be aware that an object I see is also seen by someone else forces me to be aware that the object I see can be seen in a different way, from a different perspective, which is one way of understanding the difference between appearance and reality.
In this way, according to Husserl, our awareness of other minds is a component of our basic experience of the world around us, and as such the kind of inferential justification proposed by Mill and his successors is obviated. Sartre (1958) extends one of Husserl's ideas, namely that there is something quite special about the experience of being looked at. Being aware of another in this way involves being aware of myself as an object and, according to Sartre, this is the only way I could come to be so aware. Awareness of myself as an object is one way of understanding self-awareness. My own self-awareness is therefore, for Sartre, bound up with my awareness of others. Merleau-Ponty (1962) argued that in order to perceive others as experiencing the same sort of things that I experience, I need to see my own body movements not as mere causes and results of inner experience but rather as partly constitutive of it. If this were the case, then seeing the bodily movements of others as similar to my own bodily movements would be to automatically see them in partly mental terms. Moreover, Merleau-Ponty points out, even infants are able to connect the perceived bodily movement of an adult with a "matching" action of their own. This kind of mimicry suggests that infants have an innate tendency to connect the visual perception of an action with proprioception of their own bodies. The combination of these two elements (the experience of bodily movement on the part of the self as partly constitutive of one's mental life, together with the close connection between proprioception and the perception of body movements on the part of others) suggests a relation to other minds more bound up with self-awareness than the inferential approaches to other minds allow for.
Danish philosopher Dan Zahavi nicely summarizes Merleau-Ponty's thinking: "...Merleau-Ponty suggests that the infant is able to cross the gap between the visual appearance of the other's body and the proprioceptive appearance of its own body exactly because its lived body has an outside and contains an anticipation of the other. The infant does not need to carry out any process of inference. Its body schema is characterized by a transmodal openness that immediately allows it to understand and imitate others." (Merleau-Ponty, 1945, pp. 165, 404-05; 1960, pp. 213, 221). In his entry on Other Minds in the Stanford Encyclopedia of Philosophy, Alec Hyslop (2005) contrasts the "Analytic" and "Continental" approaches to the Problem of Other Minds thus: "Broadly, Continental philosophy often sees human beings as essentially social beings. We are thought to exist at our deepest level in and as a community. We depend on others not merely for our existence, but for our very sense of ourselves, and our awareness of others is claimed to be at the heart of our awareness of ourselves. "Opposed to this view are those who see each of us as aware of ourselves and our experience in a way that we can never be with respect to any other human being. Self enclosed, we are seen as needing to reach an understanding of the inner lives of others, somehow, on the basis of our own unique awareness of our inner lives. However, this denies us the comfort of a more direct closeness. We live forever with a gap between ourselves and others. "To have one or the other of these two diametrically opposed views is to differ profoundly on fundamental human experience. Each can lead to very different conceptions of human existence and interpersonal relationships, and, indeed, to different ways of living, and different relationships." This characterization, which seems to me correct in general terms, is somewhat misleading in a certain respect.
Certainly the phenomenologists mentioned above all consider our experience of others as central to our awareness of ourselves, but by no means do they all consider that the result of this is the comfort of a "more direct closeness". We become aware of others when we become aware that we are the object of another's attention. This can be quite uncomfortable, not least because it forces us to be aware that we are objects of attention.
Hyslop's broad-brush characterisation of the two approaches to the problem of other minds (individualistic versus essentially social) suggests the availability of two corresponding approaches to the problem of mentality in general. The "harder" problem of consciousness as Block describes it arises from a conception of consciousness consistent with the idea that "we live forever with a gap between ourselves and others". Because the "Continental" approach to the problem of other minds denies precisely this, proposing as it does that "our awareness of others is ... at the heart of our awareness of ourselves", it suggests the possibility that the right conception of consciousness is correspondingly inclusive, and therefore that the best general theory of consciousness is essentially social. The explosion of recent research into human social cognition has suggested that our awareness of ourselves and of others is linked on a surprisingly intimate level. Some of this research may well be relevant to discovering whether the individualistic or social conceptions of other minds, and therefore perhaps of consciousness itself, are closer to the truth. In the next section, I describe some of this research.

RECENT EVIDENCE FOR A NON-INFERENTIAL APPROACH TO OTHER MINDS
The "Mirror" Mind-reading System
The surprising discovery by Rizzolatti et al. (1996) of a so-called "Mirror Neuron System" (MNS) in Macaque monkeys, in which the pre-motor cortex activates during perception of corresponding movements in others, with the likelihood of an analog in humans (see Rizzolatti 2004), has perhaps been the most dramatic apparent confirmation of some of Merleau-Ponty's claims. The idea that the motor system is used in the perception of others' movements qua intentional actions (see Gallese 2004) resonates well with Merleau-Ponty's argument that awareness of our own actions "contains an anticipation of the other". Gallese and Goldman have argued that the human analog of the monkey mirror system facilitates a "mind reading" capacity that uses awareness of one's own actions and emotions to simulate and thereby understand the actions and emotions of others. Gallese (2004) found that the experience of disgust and the ability to recognise that emotion in others appears to be subserved by overlapping neural mechanisms. On the basis of this and other related studies, he proposed that we understand the emotional lives (inter alia) of other people non-inferentially in terms of our own experiences, "...side by side with the sensory description of the observed stimuli" (p.400). Gallagher (2007) has pointed out that simulation may not be the best way of understanding how this would work, since simulation suggests that self-understanding precedes understanding of others. He argues for the possibility that the purported human analog of the MNS is involved in "perceiving" the actions of others in an agent-neutral fashion (i.e. without assigning an agent, either the self or the other) and that an agent is separately assigned through a "Who system", as suggested by Georgieff and Jeannerod (1998; see also Jeannerod and Anquetil 2008).
If this is how the mirror neuron system works, then understanding the intentions of others is not predicated on an initial awareness of the self. We do not necessarily introspect our own minds and then work "outwards" from there.

The Co-development of Self and Other
Awareness of gaze is important in the Phenomenological tradition. According to Sartre, there is something quite special, and certainly non-inferential, about the experience of being looked at. Being aware of another in this way involves being aware of myself as an object, and this is the only way I could come to be so aware. My own self-awareness is therefore, according to Sartre, bound up with my awareness of others. Work on the possibility of a "mirror" system underlying our understanding of others is one strand of research that appears to justify the non-inferential approach to our knowledge of other minds. Another strand attempts to trace the development of self-knowledge in relation to knowledge of others. Eye gaze features strongly in this research. Newborn infants are known to mimic facial expressions of visible others (Meltzoff and Moore, 1983). Though some have speculated this to be a reflex-driven "contagion" effect, Gallagher and Meltzoff (1996) have argued that a cognitive differentiation between self and other is implicated, based inter alia on the finding that the mimicry can be delayed. Rochat (1997) has also shown that newborns respond differently to self-touch compared with being touched by others, suggesting that there may be more sophisticated cognitive machinery at work. It is known that from about two months of age, infants become aware of, and very interested in, the direct gaze of others. Importantly, even at this very young age they respond to gazes with what appears to be a limited range of social emotions (Reddy 2000, 2003). Initially, it is mainly the direct eye-to-eye gaze of others that infants react to; looking at other parts of the body does not elicit the same response. Over the course of the first year, infants begin to become aware of the gaze of others being directed towards their body, then their actions, and lastly third-person objects.
Based on this very early emotional engagement with the attention of others, and other evidence, Reddy (2003) proposed an "affective-engagement view" according to which the infant's sense of self develops after, and from, its experience as of being the object of attention by others (see also Trevarthen 1993 for an earlier view along similar lines). Reddy argues that the hypothesis that infants' experience of others' attention is their first experience of mentality in any form, and provides the building blocks of later conceptions of mentality, is better able to explain data regarding infants' reactions to attention, and also some important developmental continuities. Until fairly recently, it had been thought that infants cannot feel self-conscious until they become aware, in some cognitively meaningful sense, of having a self, which was established to be around the 18-month mark (when they are able to pass the so-called "mirror test"; see Rochat 2003). But this relatively sophisticated sense of a continuing "I" may well be the end product of a series of cognitive developments that begin with the child's first awareness of itself as an object of another's attention.
The Social Nature of the "Resting" Brain
Recent findings concerning the brain's default "resting" state which emphasize the centrality of the social realm to human cognition provide further, albeit circumstantial, evidence for the non-inferential approach to knowledge of other minds. Standard neurocognitive imaging studies, mostly using functional MRI technologies, focus on apparent increases in brain activity above a baseline. But this raises the question, what is the baseline? Are there particular areas of the brain that are active when a person is not engaged in any particular task? Raichle and colleagues (2001; see also Raichle 2007) found evidence for a "default network" which typically decreases in activity when subjects are asked to engage in any particular cognitive task. Schilbach et al (2008) found that this network has large overlaps with areas known to be associated with social cognition, leading them to conclude that human beings have a "predisposition to understand the world first and foremost as a social environment" (p.464). The idea that there is a social "resting state" network in the human brain does not directly entail the falsity of the idea that beliefs about others derive ultimately from introspective awareness. However, it does imply that believing in other minds is all but inevitable, given the sort of creatures we are. The inferential approach to other minds suggests that, were things different, we might reasonably fail to make the necessary inference. If I were to believe, for example, that every other person besides me is physically different (is a version of Commander Data, for example), it would be reasonable, on this view, to adopt Solipsism (the belief that my mind is the only mind in existence). It is very doubtful, given the apparent pervasive sociality of the "default network", that our brains would really give us the chance.
It would be like, only much harder than, believing that nothing in the world is really coloured. The different lines of research described in this section all point to the conclusion that awareness of mentality in other creatures probably does not involve an initial self-awareness followed by extension to others, and certainly does not involve an actual inference, in Block's terms, to an internal similarity from an external similarity. Block's harder problem of consciousness, which must arise if that sort of inference is involved in our justified attribution of consciousness to others, dissolves if no such inference is needed. However, given the importance currently accorded the "first-person perspective" in accounts of how the concept of consciousness is properly conceived (and therefore theorized), it is necessary to outline a plausible alternative. If "consciousness" is not in its most general sense essentially a first-person phenomenon, what could it be? If the consciousness of other people is something that I could be directly aware of, without the need to infer from my own case, then what kind of property is it, and how could I come to be so aware? In the following section, I describe a plausible direction for future research.

A Neutral, Perception-driven Conception of Mind
Uncontroversially, when we observe another person what most clearly brings out the sense that they do indeed possess a mind is, broadly speaking, their behaviour. Although "behaviour" can be used in a purely physical sense (as in, the behaviour of gases, etc) there is a more common sense in which ascribing behaviour is to intimate the involvement of a mind. When I kick the chair I am behaving; when my leg reflexively moves in an identical way following a tap to the tendon below the knee, I am not behaving in this sense. Noting Armstrong's (1968, p. 84) observation that "behaviour proper" implies a relationship to a mind, Avramides (2003) has developed an account of a conception of mentality in general that draws heavily on the idea of "behaviour" as a notion that comes imbued with mentality and does not itself arise from introspective awareness. Drawing on the work of P.F. Strawson and Donald Davidson, Avramides argues that no conception of having a mind that takes the first-person perspective as its basis can really make sense of mentality from any other perspective, and therefore that if this were our actual conception of mentality we would all of conceptual necessity be Solipsists. In contrast, she writes: "[T]he concept of behaviour proper (in Armstrong's sense) is a concept which straddles the divide between the subject and her world. To acknowledge a close relationship between our concept of behaviour proper and our concept of mind is to acknowledge that our concept of mind is such as to leave no room for a sharp divide between the subject and her world." (p. 283) The idea that "behaviour proper" implies a relationship to a mind raises the question of how this implication comes about. A plausible answer is that this is simply how we perceive the world. This is one way of interpreting Gallese's proposal, cited above, that we become aware of aspects of the mental lives of others "...side by side with the sensory description of the observed stimuli".
We directly perceive certain stimuli as "behaviour proper", in that a sense of mentality is a literal part of the contents of the relevant perceptual state. Though an outlandish idea from the perspective of traditional philosophy of perception, which tends to be very conservative in what it allows as literally perceivable properties, it seems to me something along these lines must be true if we are to make any sense of the idea, plausible from every other perspective, that our knowledge of other minds is direct or non-inferential. For many years, philosophers restricted the domain of literally perceptible properties to the so-called "sensible qualities", such as colours, sounds, shapes, and so on. Everything else was taken to be not literally perceived but rather judged, or inferred, on the basis of the perceived presence of shapes, colours, and so forth. According to Berkeley's (1709/1732) influential account of vision, we do not even perceive distance directly; rather we infer it. But this very restricted stance could not be maintained. Recently there has been considerable debate about what sorts of properties can really be perceived. For example, Susanna Siegel (2008) argues that we perceive such things as causation and kind (or type) properties. Historically, the starting point for theorising about what is immediately perceived has been the site of transduction, where energy from the world is converted into neural impulses. In the case of vision, that site is the retina. Information that is not more or less obviously present on the surface of the retina was not considered a candidate for immediate perception. But we now know that the brain is structured so as to extract information at every stage of processing, including at the retina itself. Any property that we become aware of automatically ("side-by-side with perception" as it were), non-conceptually, and inescapably, is a plausible candidate for a literally perceived property.
In the case of having a mind, or being conscious, these three criteria seem easily met. When we see other people in normal situations, we automatically see that they are conscious, we need not have any particular concept to see them in this light (as witnessed by the fact that infants do it), and no argument can cause us to fail to do so (even if, as with other perceptual illusions, we subsequently reject the evidence of the senses). It might be objected that because our perceptions as of other minds have as their main causal basis the physical movements of others, what we are really perceiving is just those physical movements. But this objection is wrong-headed, because the same inference does not work even in the case of colour. We know fairly precisely what causes colour perception, but no merely causal story about wavelengths of light is sufficient to replace the intentional story. We say that we see the colour of the object, not the wavelengths of light that cause the perception. It is important to note that, like any perceptible property, the fact that a person looks conscious does not make it so. Perception is fallible. This does not entail, though it may allow, that there is an ultimate non-perceptual arbiter of whether something is conscious, or has a mind. Take again the case of colour. The fact that something looks red does not make it so. Colour perception can misrepresent. However, if the light is normal and I am perceiving normally, the fact that something looks red plausibly does make it red (see Lewis 1997). That is because the way things look is my only real touchstone to what colour things really are. The only way for it to be false that grass looks green under normal circumstances is if there is no such property as colour; that is to say, if an error theory about colour perception is true. What are we to say about the apparent perception of mentality?
The link between the look of something and its colour is necessitated by the apparent conceptual link between the way things look in respect of colour and the colour of things. But is it plausible that there is a conceptual link between the way things look in respect of other minds and others having minds? Certainly, the orthodox view allows no such link. But this is based on what I have suggested is the mistaken idea that our general concept of consciousness is based on our own case. This idea, well expressed by Ned Block in the quoted passage earlier in this essay, and undoubtedly intuitive, may well be false, for all of the reasons given above. If so, then our most basic access to whether a creature has a mind, or is conscious, may be no "deeper" than the ordinary evidence of our senses. This would render it an empirical question whether there was anything interesting in common underlying all and only those creatures which, according to how things perceptually seem, are conscious and do have a mind. As Avramides notes (Ch.X), it also renders non-mysterious why certain objects, such as trees and automobiles, are for all intents and purposes ruled out as plausible seats of conscious awareness, despite the same lack of introspective access to these objects as for other people, which is for Block the only reliable way of telling: inanimate objects cannot be seen by us as engaging in behaviour. According to the view I am suggesting here, this is enough reason to disqualify them. It also means that objects which we cannot but see as engaging in behaviour, such as Block's example of Commander Data, count as conscious in a fully-fledged sense. Differences in underlying physical constitution, to the extent that they have no effect on our inclination to experience a thing as engaging in behaviour, have no special inferential force in the opposite direction.
If this is true, and if Commander Data-type cases illustrate the difficulty for a general theory of consciousness by virtue of an initial undecidability as to whether a science of consciousness should be based partly on them, then the approach suggested here seems to dissolve the problem. By hypothesis, Commander Data is behaviourally indistinguishable from you or me, and therefore is perfectly suited to engage with us in all of the ways relevant to full social participation. Insofar as our conception of mentality depends on our social experience of others, as I have argued the evidence strongly suggests, there is good reason to think that Commander Data is as much the object of that conception as you or I. Any fully general science of consciousness must therefore include him, and any underlying physical differences would be no more relevant than the physical differences between a bird and a helicopter are relevant to the possibility of a fully general theory of self-powered flight.

CONCLUSION
To sum up, Block argues that a fully general theory of consciousness is epistemically unavailable at present because before we can have such a thing we need some means of identifying in general which things are conscious. The only means available to us is introspection, which is restricted to our own case; we are therefore limited to attempting an inference from our own case, while knowing that a creature physically quite different from us might nevertheless be quite conscious. Because consciousness is something we originally know only through introspection, we are justified neither in making nor in withholding its attribution to physically very different creatures, and so, Block argues, we cannot know the range of creatures on which a general science of consciousness should be based. I have argued that the idea that we infer that others are conscious from our own case appears at odds with various facts about our awareness of other minds. The phenomenologists' description of the centrality and automaticity of our experience of others is broadly in line with the evidence from psychology and the neurosciences. Recent work on possible "mirror" and related mechanisms for engaging with the mental states of others, together with behavioural and developmental studies which suggest a co-development of infant conceptions of self and other, and research on the putative "default state network" which appears to underline the absolute centrality of the social realm for human cognition, all point away from the idea of an "unbridgeable gap between ourselves and others", and thus undermine an assumption, common in philosophical and scientific circles, that leads to the apparent "harder problem of consciousness". The problem of consciousness is certainly difficult, but it may not be as insoluble as many fear.