In 1980, John Searle published his seminal paper ‘Minds, Brains, and Programs’, distinguishing for the first time ‘strong’ AI from ‘weak’ or ‘cautious’ AI: weak AI serves as a powerful tool for studying the mind, whereas strong AI is the possibility of developing a programmed computer that is in itself a mind that can understand and have actual cognitive states (Searle 1980, 417). To refute the argument that what machines can or could do may constitute an explanation of how minds work, Searle offered his thought experiment that is now known as the Chinese Room (ibid., 417–418). In the same year, the 85-year-old mystic and thinker Jiddu Krishnamurti (1895–1986),Footnote 1 having been introduced to the subject of artificial intelligence by computer scientists, became engrossed in the challenge posed to the human mind by the prospect of the machine taking over its processes and faculties (Jayakar 1986, 407–408). This question would quickly turn into one of Krishnamurti’s main preoccupations during the last decade of his life, driving him to engage his audiences and discussants on numerous occasions in what he deemed an urgent crisis (ibid., 408; Krishnamurti 2004, 168–220).

Why Krishnamurti became so enthralled by the emergence of machines that can imitate or simulate aspects of the human mind is a matter of speculation. It is easy to understand that for someone like Krishnamurti, whose life work commencing in the 1930s and extending to the 1980s was entirely devoted to the mind’s mechanical condition, a field that implied the mind’s reproducibility would echo his primary concerns as a thinker. In a way that somewhat resembled Searle’s view of weak AI as a ‘tool in the study of the mind’ (Searle 1980, 417), the growing presence of artificial intelligence could supplement Krishnamurti’s insight into the mind with an effective thought experiment that enhanced his audiences’ introspection. Furthermore, the urgency of this thought experiment, in light of AI’s actual advancement at that time, could justify Krishnamurti’s insistence on the exigency of one’s psychological transformation. Nevertheless, countless occasions on which Krishnamurti addressed the topic indicate that it was more than a mere rhetorical device or a means to illustrate the mind’s intricate illusions. In effect, what we find in his writings and in recordings of his lectures and discussions may strike us as a series of predictions or even prophecies that forewarn of an age in which humans would become obsolete, pronounced with the same vehemence with which Friedrich Nietzsche’s madman proclaimed the imminent passage to a world that has disposed of its God (Nietzsche 1974, 125).

Nowadays, the internet is brimming over with media and texts that present Krishnamurti’s predictions about AI, marveling at how accurate they were.Footnote 2 However, if anything, these texts indicate the degree to which these prophecies have remained mostly unfulfilled. In public discourses and dialogs with students and scientists, Krishnamurti asserted, with his characteristic passion, that ‘in about ten, fifteen years’, artificial intelligence would far surpass natural intelligence, leaving humans as unemployed zombies, helplessly obsessed with leisure and entertainment (Krishnamurti 2023, 4, 12, 17). The machine would tell us how to live, diagnose better than any doctor can, translate books, compose poems and Beethoven-like symphonies, produce unparalleled philosophies, theories, and knowledge, and even invent new gurus and gods (ibid., 13–14, 16, 29, 30).Footnote 3 Now, some 44 years later, human culture seems to be alive and kicking; there seems to be widespread agreement among philosophers that the likelihood of the emergence of a general or strong AI is either faint or non-existent, and that AI is an artificial agency mimicking or simulating certain capacities of the human mind while relying on human–machine interaction (Krämer 2022, 17–18, 22). Nonetheless, recent advances in machine learning, particularly those based on deep learning methodologies, certainly echo some of Krishnamurti’s projections: from large language models (LLMs), publicly available since 2022, that are able to produce humanlike text in response to human prompts, including essays and poems, to AI-generated art and music and drug design and discovery.Footnote 4 This progress has been enough to provoke an outcry among some intellectuals that bears a resemblance to Krishnamurti’s worries.
Historian Yuval Noah Harari (2023), for instance, writes that with AI’s current ‘abilities to manipulate and generate language, whether with words, sounds or images’, it has ‘hacked the operating system of our civilisation’, since language is what nearly all human culture consists of. Consequently, this alternative and better-than-the-average-human storyteller would be able to invent myths and craft scriptures that could give rise to cults and religions, thereby bringing human history to an end and ushering in an altogether different culture.

Despite the noticeable commonality in Harari’s and Krishnamurti’s apocalyptic visions, what makes Krishnamurti’s angle worthy of separate discussion is that it accentuates not social and cultural ramifications, such as the hazard of disinformation, but a mostly overlooked potential philosophical and psychological crisis. In effect, Krishnamurti did not think of the intelligent machine as an ‘other’ that might overthrow humans; the dim future he foresaw was one in which humans would destroy themselves as a direct result of the degeneration of their minds, while the machine, as the mind’s extension, would accelerate the pace of humanity’s deterioration (Krishnamurti 2004, 182–183, 189).Footnote 5 As a thinker and mystic whose life work entirely centered on the transformation of the human mind, he worried that an insufficiently cultivated mind that had been employed merely for material and mechanical purposes would be perfectly imitable and thus replaceable by computers and other machines (Jayakar 1986, 407). Thus, our main concern should not be machines attaining humanlike minds—in Selmer Bringsjord and Naveen Sundar Govindarajulu’s (2018) words, machines reaching the ‘full heights of human intelligence’—but people having machinelike minds.

1 A crisis of meaning

Of course, thinking of intelligent machines as a mirror through which we see our own reflection is an unavoidable and common theme in the field of philosophy of AI. Mark Coeckelbergh (2020, 32, 36–38) makes this point eloquently when he writes that ultimately, the discussions are never only about ‘what AI is and should be’ but also about ‘what the human is and should be’. The fact that these themes are tightly interwoven is first and foremost related to the conditions for the emergence of artificial intelligence rather than the results and implications of this emergence. In other words, the ways in which developers of AI attempt to construct a humanlike mind uncover presuppositions about human minds, human intelligence, and human creativity.Footnote 6 This observation has been made, albeit in an overextended manner, by Daniel Dennett, who contends that AI is in itself philosophy, in that it constitutes an attempt to explain intelligence: ‘a most abstract inquiry into the possibility of intelligence or knowledge’ (Dennett 1979, 64).Footnote 7

However, in this paper, I argue that the particular dimension of the AI–mind encounter elucidated by Krishnamurti—a dimension which has been absent from discussions thus far—can significantly broaden the field of the philosophy of artificial intelligence. First, while his approach does not seem helpful in attempting to resolve the field’s major questions, Krishnamurti’s investigation further problematizes the field by handing us the question: ‘If the machine can take over everything man can do, and do it still better than us, then what is a human being, what are you?’ (Krishnamurti 2023, 5). Accordingly, he demonstrates, deploying phenomenological observations, various ways in which human thought is, after all, worryingly computational and machinelike. Second, since Krishnamurti’s approach is transformative and arguably soteriological, he offers intriguing ways in which we could consider the inherently non-mechanical yet unactualized faculties of the mind. These aspects, which Krishnamurti urges us to cultivate as a kind of humanistic (but fundamentally religious) response to the rise of artificial intelligence, may be key to transcending the mind’s programmed condition. For reasons that will be explicated in the following section, Krishnamurti’s answer to the question he himself poses is abstruse and mainly involves self-reflection on the part of the individual.

Since Krishnamurti’s observations involve an insight into the present condition of the mind and a potential active response to this condition, they have some pertinence to both the primary discourse in the field—the interrelations between artificial intelligence and aspects of human cognition—and the emerging discussion of meaningfulness in life in the face of AI. The former may benefit from the fact that Krishnamurti’s thought exists on the border between philosophy of religion and philosophy of artificial intelligence (and, for that matter, philosophy of mind), thus imbuing concepts arising from theories such as computational functionalism with the immediacy of introspection. The latter stands to gain even more from this transformational angle, given that Krishnamurti addresses a crisis of meaning that penetrates the core of human existence as a whole.

As a subcategory in AI ethics, the discourse on the effects of AI on meaningfulness in life is still in its infancy (Nyholm and Rüther 2023, 4). In itself a response to the limited nature of discussions in AI ethics—which are mostly occupied with either valuable and morally acceptable or wrong and unjust uses of AI technologies—it conveys the need of some ethicists to develop a broader approach encompassing the question of AI as a hindrance to or an enhancement of the good life and personal sense of meaning (e.g., Danaher 2019; Chalmers 2022; Tasioulas 2022; Nyholm 2023). This, as Nyholm and Rüther (2023, 4) demonstrate in their comprehensive overview of the emergent trend, is in line with the expanding debate on meaning in life that has taken place in normative and applied ethics during the past two decades.

Nonetheless, while Krishnamurti’s contribution certainly falls within the scope of these discussions, it is difficult to relate it to existing literature.Footnote 8 First, ethicists aim to tackle the problem of meaning not in cosmic or universal terms but in a more limited sense, as a ‘non-instrumental value that makes an individual’s life more desirable’, thus addressing meaning in life (Nyholm and Rüther 2023, 5–6). Krishnamurti, on the other hand, seems to speak about meaning in the vast context of the significance of human existence as a contributing species in the world, wondering, in his words, ‘what is a human being?’ (Krishnamurti 2023, 5). Second, Krishnamurti is not engaged with the question of whether some meaningful human activities, such as work, might be rendered obsolete in an AI-managed environment,Footnote 9 or whether new meaningful activities, such as virtual realities, might come into being.Footnote 10 His interest lies in the mind and its place in a civilization governed by super-intelligent machines.

Having laid out the foundations of the discussion, I now intend to carefully examine Krishnamurti’s perspective on AI, centering on its two principal aspects: the problem and the solution. In the first two sections, I shall devote a great deal of attention to Krishnamurti’s observations on the mechanical and programmed condition of the mind, against the backdrop of related discussions in the philosophy of AI that have a bearing on the philosophy of mind. In the third section, I will focus on Krishnamurti’s illumination of non-mechanical aspects of the mind and the ways in which they may be cultivated, while placing this response in the ethical context of the debate on meaningfulness. This section will include an assessment of the usefulness of Krishnamurti’s suggested response in the face of the potential crisis of meaning. Thus, even though the limited scope of this article prevents me from covering the full implications of Krishnamurti’s views on the topic, it will suffice to prepare the ground for further study.

2 The machinelike nature of thought

Before attempting to grasp Krishnamurti’s insight into the nature of thought, we ought to briefly consider his unorthodox method of inquiry. This quasi-Socratic interrogative process, which is foreign to both theory-building philosophy and dogma-defending religious thought, is interested in formulating and posing questions more than in supplying constructive answers (Tubali 2023, 123–142). By acting as a persistent questioner in a dialog, Krishnamurti makes it clear to the discussant that facing the questions is their exclusive struggle (ibid., 128). Thus, this type of rhetoric is designed as a way to throw ‘man back on himself and the way the structure of thought operates’ (Jayakar 1986, 298).Footnote 11 This sense of absolute and urgent self-responsibility, aided by the questions’ mirroring effect, could give rise to what Krishnamurti termed the ‘awakening of intelligence’ in the minds of his interlocutors (Krishnamurti 1996b, 15). As we shall see in the final section, in Krishnamurti’s view, this awakened intelligence constitutes a condition for our ability to distinguish our minds from artificial intelligences. Krishnamurti insists that ‘the computers are challenging you’ and that ‘you have to meet the challenge’ (Krishnamurti 2023, 22), but for him, unwaveringly facing the question of the future of a mind pitted against machines that rob it of all meaning is, in a sense, the most potent response to this emerging reality.

By holding this pressing question and deploying it to mirror the mind’s reality, Krishnamurti argues, one is compelled to admit that the thought processes that one has trusted and given ‘such tremendous importance to’ are, in the end, mechanical in ways that are comparable to the workings of programmed computers (Krishnamurti and Bohm 1999, 67–68). We should, however, proceed with caution since this kind of statement may falsely lead us to classify Krishnamurti as a computational functionalist: one who maintains that ‘having a mind is nothing more than implementing a particular computer’ and that consequently, ‘building minds involves nothing more than building computers’ (Abramson 2011, 216). This false conclusion may draw strength from the fact that Krishnamurti also strikes us as a physicalist when he endorses the claim that all ‘thought is physical and chemical’ (Krishnamurti and Bohm 1999, 74) and that ‘the psyche is a material process’ (Krishnamurti 2004, 171). Furthermore, he does not omit features of the mind such as creativity and consciousness—or any qualia, for that matter Footnote 12Footnote 13—from the umbrella term ‘thought’, since ‘consciousness is thought’ (Krishnamurti and Bohm 1999, 69).Footnote 14 This approach would inevitably imply that the ambitious enterprise of strong AI is realizable. Such misguided deductions are dispelled in the face of three crucial dissimilarities between Krishnamurti’s perspective and the computational theory of mind. First, Krishnamurti distinguishes mind from thought, considering thinking a wrongly cultivated activity of the mind that should be replaced with what he terms ‘perception’ or ‘insight’ (ibid., 69, 74). Second, his statement that the mind is computer-like is not ontological but phenomenological, addressing the mind’s current programmed predicament rather than its nature or potential (Krishnamurti 2004, 168).
Third, Krishnamurti does not assume that only material reality exists; rather, he holds that thought, being itself material, can capture and process only material reality, while there are transcendent forms of perception and intelligence that cannot be computerized (ibid., 189, 192, 196–197). All these dissimilarities will become increasingly clear and intelligible in the remainder of this paper.

Notwithstanding the dissimilitude between Krishnamurti’s premise and computational functionalism, Krishnamurti certainly maintains that thought as a whole is computational, and that since the mind presently consists of thought (Krishnamurti 2004, 188), humans have become perfectly imitable. In fact, he goes as far as claiming that humans have turned into ‘poor imitations of the extraordinary machines called computers’ (Krishnamurti n.d.). This calls for a re-evaluation of the ways in which we think about the challenges posed by AI. In a sense, Krishnamurti proposes what may be thought of as a reversal of René Descartes’s (2017, 316) ‘language test’ and Alan Turing’s (1950) ‘imitation game’.Footnote 15 Both Descartes and Turing, each for their own reasons, suggest thought experiments that test whether or not a machine can reproduce the purely verbal behavior of a person by responding meaningfully to whatever is said (Abramson 2011, 204). Krishnamurti, however, is interested not in the machines’ potential ability to converse like humans but rather in the problem of humans that converse like machines. In other words, if machines could indeed pass the Turing test, that would evidence, according to Krishnamurti, the mechanical nature of human thought.

We should take note that Krishnamurti is content with machines behaving like humans as an attestation of the machinelike nature of human thought. This implies that his observations—whose concern is that machines might outdo humans, that is, gradually take over all known human activities (Krishnamurti 2023, 4–5)—do not lean on strong AI’s speculated development. As a result, his analysis of the mechanisms of human minds compared to those of machines does not involve intricate and subtle distinctions of the type which can be found among philosophers of artificial intelligence. Krishnamurti’s position would, thus, remain unwavering in the face of Hubert Dreyfus’s argument that the unconscious background of common sense knowledge and meaning derived from experience, on the basis of which the minds of embodied and existential beings respond to situations in the world, cannot be formalized (Coeckelbergh 2020, 32–34).Footnote 16 He would also be uninterested in John Searle’s convincing argument that computers ‘have only a syntax but no semantics’ (Searle 1980, 422), or Mortimer Adler’s assertion that machines, unlike the human mind, cannot abstract from particulars to generate and comprehend concepts or universals (Stephan and Klima 2021, 11–14).Footnote 17 In addition, Krishnamurti would be disinclined to differentiate between mental states that can be represented and, thus, could be candidates for being computational states, imitable by computers, and some potentially non-representational activities of human minds (Crane 2016, 92; Coeckelbergh 2020, 43).

Krishnamurti’s disinterest in finer distinctions concerning the ability to understand, have intentionality, and conceptualize (let alone experience subjective and conscious states) stems from his unitary—some may say reductionist—perception of human thought. In his view, thinking, the computational activity of the mind, ‘starts from experience which becomes knowledge stored up in the cells of the brain as memory, then from memory there is thought’ (Krishnamurti, quoted in Kumar 2015, 87). This chain reaction of experience, memory, and knowledge is then reinforced through repetition: one reacts and acts on the basis of these amassed inner contents, learns from one’s reaction and action, and acquires further experience, memory, and knowledge (Krishnamurti 2004, 171). Krishnamurti perceives this process as purely mechanical, in the sense that it is repetitive, and avers that this inner programming, which is further shaped by cultural, religious, and economic conditions, is perfectly comparable to that which computers undergo: as brain-like storehouses of knowledge, computers too learn more and more while continuously correcting themselves (ibid., 168, 171). Sybille Krämer (2022, 18), who investigates the dividing line between ‘natural’ and ‘artificial’ intelligences, echoes Krishnamurti’s observation when she writes that ‘human intelligence is not “natural” or “unprocessed”: it is a collective and social commodity, bound to linguistic and non-linguistic signs, to cultural techniques, instruments and tacit routines.’

Krishnamurti does not distinguish human expertise and academic and religious knowledge from the psychological patterns woven as a result of past experiences, since ‘all knowledge is a mechanical process of acquisition’ (Krishnamurti 2004, 171). This perception of knowledge as a material and evolutionary process seems to be congruent with the dictionary definition of ‘knowledge’, as the ‘information, understanding and skills that you gain through education or experience’ (Oxford Learners Dictionaries 2024). Nonetheless, Krishnamurti does not assert that this process as a whole is erroneous; in effect, he accepts that thought has some place when it comes to practical affairs (Krishnamurti and Bohm 1999, 72–73). The problem begins when thought takes over the psychological, cerebral, and spiritual faculties of the mind, pretending to have something to do with human intelligence while it is nothing more than ‘memory responses’ (Krishnamurti 2004, 169).

In this sense, Krishnamurti maintains, human beings, as they presently are, are ‘a mass of accumulated knowledge and reactions according to that knowledge’ (Krishnamurti 2004, 180). Psychologically speaking, this reality is demonstrated in the psyche’s cause-and-effect relations such as ‘you hate me and I hate you back’ (ibid., 215–216) or ‘I like him because he flattered me’ (Krishnamurti 2023, 26). Intellectually, accumulated knowledge crystallizes into strong convictions and opinions (ibid., 182–183), and also develops as an ever-complexifying net of associations and comparisons. For this reason, Krishnamurti remained unimpressed when philosopher Iris Murdoch told him at the conclusion of their dialog in 1984 that ‘by thinking about Plato, I have come to some understanding of what you have been saying’ (Krishnamurti 1996a, 128). As far as Krishnamurti is concerned, comments like this are nothing but thought’s memory response, perfectly comparable to what he terms the ‘intelligence of knowledge’ of artificial intelligence, hence their dangerously imitable nature (Krishnamurti 2023, 26). This challenge to knowledge-based human intelligence is strikingly exemplified in the following anecdote offered by Sven Nyholm (2024, 2). Nyholm depicts how, while experimenting with ChatGPT, he entered the prompt, ‘what would Martin Heidegger think about the ethics of AI?’ in response to which the software instantly generated a ‘fairly impressive short essay on the topic’. He candidly admits that ‘ChatGPT did a better job than at least some—or perhaps even many—of the students in my classes would do’, and finally goes as far as writing that the essay was better than what he would be able to come up with if he were asked to write an essay on the subject there and then (ibid.). 
If, already at this early stage of LLMs, machines can rapidly juxtapose complex notions from diverse schools of thought to derive conceptual interrelationships, would this type of academic activity still be deemed intelligent?Footnote 18

3 A brain in our image

Krishnamurti’s point about the reproducibility of knowledge-based human intelligence—or, at least, the potential of its continuation in the form of computers and other machines—is enhanced by the fact that he does not perceive thought as originating from the individual brain. In his view, thought is the unceasing momentum of humanity’s triad of experience, memory, and knowledge, a momentum that encompasses even what religious people have considered to be transcendent and immaterial (Krishnamurti 2004, 169, 171). As such, the individual brain is but a host for this evolutionary process of thought, to which the human race has given the utmost significance (ibid., 169). In many respects, Krishnamurti’s conception of thought is reminiscent of Richard Dawkins’s (2016) notion of memes as Darwinian and gene-like forms of information—from ideas and behaviors to styles and usages—that spread from one person to another, sculpting minds and cultures as they go.Footnote 19 Similarly, Krishnamurti speaks of thought’s wish to transcend its constantly changing nature and to attain a permanent and imperishable status, making it sound as if thought were an actual organism or entity striving for self-perpetuation (Krishnamurti 1999, 67–68, 76, 78). Since thought can only be humanity’s tirelessly accumulative process of experience and knowledge, it is, in effect, irredeemable: any attempt at modifying and ameliorating it, including making it less or even not mechanical, would be yet another offshoot of the very same movement, hence the acute need for non-thought intelligence (Krishnamurti 2004, 173, 192).

The only ingredients of our known human experience that are excluded from Krishnamurti’s broad conception of copyable mechanical thought are the biological and sensory dimensions. These dimensions do not seem to be accessible to a form of intelligence whose foundation is syntax without semantics and intentionality (Searle 1980, 423) and whose ‘information-bearing structures’, or representations, are mental objects disengaged from the phenomenal content of sensory experience, that is, devoid of the external objects that they aim to represent (Pitt 2020). Aware of the distinction between linguistic imitability and embodiment, Krishnamurti points out time and again that while computers will eventually outdo ‘everything that man has done or will do’, including taking over the domain of religious thought, they will remain inherently unable to engage in experiences, from having sex to bearing witness to the ‘beauty of the constellation Orion, or the evening star that is by itself, lonely in the sky’ (Krishnamurti 2023, 16; see also ibid., 10, 22, 29). Even if they were able to simulate such experiences, they would lack the perception of the human eye, with which they could look up at the sky and exclaim, ‘what a marvellous night this is’ (Krishnamurti 2004, 209), or, for that matter, any other sense.Footnote 20

Nevertheless, if we hope that at least the aspect of our embodied consciousness endows us with non-mechanical superiority, we will disappointedly realize that Krishnamurti also insists that our experiences are tainted by reactions rooted in knowledge gathered from past experiences. For instance, one’s reaction to the moment in which one bears witness to the beauty of the evening star is marred by a mental image on whose basis one compares and automatically labels one’s present experience. In Krishnamurti’s words,

I saw a sunset yesterday, it was a great pleasure, and it has left a mark, and this evening I look at the light on the hill with the eyes of yesterday, with the memories of yesterday. I’m doing this all the time. Why am I doing it? Why is thought building the image? Why, when I look at a sunset today, the past sunset comes into my mind, and when I look at you—husband, wife, children, brother, whoever it is—I look at you through the image which I have about you—about clothes, about food, about every thing (Krishnamurti 1967).

The problem of image-making that Krishnamurti raises—reminiscent of questions dealt with in the theory of representationism (Britannica 2016)—can be applied to many of our experiences, which have become nothing more than programmed behaviors, that is, persistent and conditioned reactions to certain stimuli. This involves not only the predictable ways in which one responds to world events according to one’s unwavering political or religious certitude (Krishnamurti 2023, 12), but also one’s obsession with food, sex, and forms of entertainment. Consider pornography’s image-making activity: one may be drawn to pornographic images on the basis of pleasurable mental images that have been amassed as a consequence of past experiences; however, one’s exposure to pornographic images produces in turn further stimulating mental images, which one must ceaselessly return to in order to evoke the very same conditioned response. Thus, Krishnamurti advises us not to take comfort in this inferiority of sense-deprived machines, since, in the end, the danger of the mechanical mind is all-pervasive.

Having invested much of the brain’s capacity in memory-based knowledge and expertise, and having discovered that this kind of acquired knowledge can be transferred into computers, we are now compelled to acknowledge the computer-like and replicable nature of all that we have achieved (Krishnamurti 2023, 12).Footnote 21 As it is perfectly mechanical and transferable, Krishnamurti argues, we ought to call into question our assumption that this mental activity can be perceived as intelligent (Krishnamurti n.d.). But before we proceed to examine this argument of Krishnamurti’s in light of current discussions, it should be noted that the hazard of AI raised by Krishnamurti is not rooted in a dualistic notion of human minds versus intelligent machines. When he speaks of machines taking over, this is nothing like Nick Bostrom’s (2014) near-apocalyptic vision of AI, in which humankind’s fate would depend on the actions of the machine superintelligence, ‘just as the fate of the gorillas now depends more on humans than on the species itself’ (ibid., vii). In Krishnamurti’s view, there is no division between the human mind and the machines to which it gives rise. We have reduced our minds to mechanical knowledge, and employing our knowledge-based brains, we have produced a brain in our image, limited to the activity of thought (Krishnamurti 2004, 188, 198).Footnote 22 Nevertheless, both creator and creature are products of knowledge (ibid., 187); thus, artificial intelligence is but a perpetuation of a played-out instrument, one that has failed to solve, and has even aggravated, humanity’s non-technological problems, and that has already reached its limit (214–215). We must, therefore, determine that the true danger lies not in the machine but rather in the ‘machine of the mind’ (187, 189)—the machine is simply ‘going to help us deteriorate faster’ by accelerating the brain’s atrophy (182–183).

Krishnamurti’s insistence that artificial intelligence must be thought of not as an ‘other’—a kind of human-made competitive species—but rather as an extension of the human mind is echoed by several philosophers (e.g., Clark and Chalmers 1998; Coeckelbergh 2020; Krämer 2022; Nyholm 2024). Sybille Krämer (2022, 17–23) points out that when we cease focusing on the ‘visionary AI’ propagated by Bostrom and others and start delving into the potential and threats of weak or ‘prosaic AI’, we come to realize that artificial intelligence should be defined as artificial agency consisting of sustained human–machine co-performance. As such, it is but a toolbox that is incapable of performing autonomous machine learning and entirely relies on human activation and the insertion of human-gathered data (ibid., 24–25). She, thus, concludes that our main concern should be not so much the almightiness of the machine but rather the ‘enlightened reason and maturity of people in dealing with their machines’ (28–29). Nyholm (2024, 2–5), who contemplates the question of whether handing over too many intelligence-requiring tasks to AI systems might result in either enhancement or enfeeblement of human intelligence, suggests that the answer depends on whether we view ourselves as separate from the technologies that we create. Grounding his argument in Andy Clark and David Chalmers’s 1998 thesis, according to which external enhancements may be viewed as parts of our extended minds, Nyholm determines that one’s mind can be considered enhanced through technology so long as the machine is sufficiently dependent, that is, requiring one’s input and engagement (ibid., 6–7). Otherwise, machines merely enable us to act as if we had an impressive level of natural human intelligence (ibid., 9); in other words, they make it possible for us to attain exactly what Luciano Floridi defines as AI’s unique status: highly efficient agents without intelligence (Floridi 2023, 5–6).Footnote 23

Nyholm (2024, 3, 9) proposes that while individuals may not always be cognitively enhanced through AI technologies, these technologies could entail a group-level enhancement. From an evolutionary viewpoint (which is not considered by Nyholm), this may be plausible since humanity’s ability to bring forth such extensions of its own intelligence is in itself an expression of genius. Krishnamurti, however, further problematizes the non-dualistic perception of AI as an expansion of natural intelligence by provocatively stating that AI is an extension of human unintelligence and self-destructiveness in the form of thought. Consequently, AI neither enhances nor weakens human intelligence since it is merely an efficient continuation of thought’s mechanical activity.

The last point that should be accentuated with regard to the problem put forward by Krishnamurti is that it remains pertinent regardless of whether AI is able to develop a human-like mind or not. We do not need to have strong AI possessing consciousness to be troubled by the potential replaceability of the human mind.Footnote 24 As Floridi (2023, 1–2, 5) puts it, even non-sci-fi machines that ‘can do statistically what we can do semantically’ are enough to raise questions of uniqueness, originality, and replaceability. Bear in mind that Krishnamurti is concerned with the prospect of having our unexercised brains deteriorate in a machine-governed world, of our thus turning robot-like and ultimately resorting to pleasure and entertainment (Krishnamurti 2004, 182–183). Since, according to Krishnamurti, humans have not activated the inimitable faculties of their minds and instead reduced themselves to mechanical thinking, machines that perfectly imitate or simulate thought would suffice to pose a substantial threat. The challenge begins as soon as machines are able to operate as agents that effectively achieve the same goals that are achievable by thought, even without the need for intelligence (Floridi 2023, 5–6; Nyholm 2024, 5).

It should be remembered that neither Turing nor the researchers who coined the term ‘artificial intelligence’ in 1955 strived toward the development of an artificial mind. Turing conceived of machines that would behave as if they could think and, accordingly, his principal question was whether a machine could be ‘linguistically indistinguishable’ from a human (Bringsjord and Govindarajulu 2018).Footnote 25 The researchers in 1955, on the other hand, spoke not about imitating human intelligence but instead about simulating it (Nyholm 2024, 4). The existence of this kind of weak AI, be it an imitator or a simulator, has become extremely difficult for philosophers to deny (Bringsjord and Govindarajulu 2018). While presently AI is utterly unable to read, possess self-induced knowledge, construct new and abstract representations, become creative, or grow subjective consciousness (ibid.), the various ways in which it mimics or simulates some activities of the human mind certainly call into question what we understand by human intelligence—for instance, whether intelligence is something that humans have or whether it is a potential that can be more or less realized (Nyholm 2024, 3–4).Footnote 26 As far as Krishnamurti is concerned, the differentiation between the knowledge humanity has gathered as a result of conscious experience and the knowledge that is now stored in the computer, borrowed from human experience of the world, is beside the point. What truly matters is that machines are able to imitate thinking to such a degree that the machine-like nature of human thinking is finally exposed. Moreover, Krishnamurti asserts that intelligence is beyond AI’s reach for the simple reason that it is also beyond thought’s reach (Krishnamurti 2004, 196).

What Krishnamurti would nevertheless accept is that whereas machines cannot reflect on their own programmed condition, humans can grow conscious of their predicament. This leads us to our final discussion of Krishnamurti’s responses to his proposed challenge.

4 Intelligence is not the product of thought

Understandably, discussions of subjective or conscious states in the philosophy of artificial intelligence revolve around the question of what machines can or cannot achieve (Bringsjord and Govindarajulu 2018). In this respect, there seems to be widespread agreement among philosophers that forms of intelligence which can be uncoupled from conscious states are realizable for machines—in reality, some of them have already been realized—while the goal of some AI developers to create artificial agents that constitute a complete model of the mind is unachievable (Coeckelbergh 2020, 15). Even scientists such as Margaret Boden (2016), who are confident that science will sooner or later fathom the mystery of consciousness, conclude that, in practice, general artificial intelligence is highly unlikely.Footnote 27 As things stand, although Searle’s ‘Chinese Room’ argument has been rejected by some philosophers on various grounds (e.g., Rapaport 1988; Schneider 2019, 148–149), his foundational argument that computers are inherently incapable of understanding, having intentionality, and being conscious (Searle 1980, 422) stands the test of time.

Nonetheless, whereas Searle’s argument and many other philosophical defenses of the same inference seem to safeguard the human mind’s unparalleled ontological status, Krishnamurti has little interest in consciousness as an inherent property. On the contrary, for him, a committed response to the challenge posed by AI would entail thinking of consciousness as an unpracticed and reducible human capacity. As such, our concern should not be the preservation of the status quo but the urgent need to exercise certain faculties of the mind that might ultimately degenerate in a world guided by AI (Krishnamurti 2023, 18; Krishnamurti 2004, 182). This requires an ethical shift in the discussion from the question of whether machines will ever become conscious to the question of whether humans will ever become sufficiently conscious to transcend the so-called artificial intelligence both within them and in the mirror-like reflection of the machine.

This important distinction between consciousness as an irreplicable property of the mind and consciousness as a capacity that requires cultivation—if one ever hopes to distinguish oneself even from weak AI—is one of Krishnamurti’s contributions to discussions in the field.Footnote 28 In Krishnamurti’s view, the fact that humans experience subjective and conscious states matters very little so long as the mind has been reduced to thought’s computational activity, that is, the ‘intelligence of knowledge’ and its material purposes (Krishnamurti 2023, 26; Jayakar 1986, 407). In actuality, the barriers separating brains from computers have become, and will continue to become, increasingly blurry. Moreover, the irreproducible human capacity to experience—from having sex to looking at the beauty of the stars, to use Krishnamurti’s persistent examples—will also erode, as intelligent machines such as computers and robots will take over all the activity of thought, leaving the brain unstimulated, inactive, and empty (Krishnamurti 2023, 13–14). In this world without work, devoid of ‘outward physical struggle’, experience will be reduced to seeking out different forms of entertainment and pleasure (ibid., 19), thus becoming mechanical in its own way, as it will be nothing more than reaction to stimuli.Footnote 29 Note that Krishnamurti does not differentiate between so-called higher and lesser leisure activities: for him, the religious or the philosophical are signifiers of this decay, just as sports and cinema are (ibid.). At this crossroads, Krishnamurti asserts, humans will only have two available pathways: they will either succumb to entertainment and become zombielike or choose to keep their brains active by enhancing their consciousness. What Krishnamurti’s enhancement of consciousness actually involves will be elucidated in the remainder of this article. At this point, it will suffice to say that he envisions a dedicated turning inwards to inquire into oneself and ‘explore the recesses of the mind’, which, for Krishnamurti, are infinite (ibid., 13–14, 17, 19).

Thus, Krishnamurti’s preliminary response to the challenge posed by the rise of AI is a choice to resist the entertained mind and insist on keeping one’s mind active by exploring the ‘vast recesses of one’s being’ (Krishnamurti 2023, 19). From an evolutionary point of view, one would perhaps expect that as soon as the brain became freed from its mechanical occupations, it would naturally seize the opportunity and turn its attention to higher mental activities. Krishnamurti, however, is reluctant to assume that an unexercised brain, which has been exclusively accustomed to the activity of thought, would be disposed to retain its active condition. Rather, the individual would have to put their mind’s property of intentionality into practice—to a degree that would already set them apart from machines—and strive to tap into activities and capacities of the mind that transcend thinking.

Note that when Krishnamurti speaks of exercising the brain’s muscle, he does not think of activities that may prevent cognitive decline, such as playing sudoku. Similarly, by maintaining that our turning inwards should involve looking at the psychological structure of ourselves (Krishnamurti n.d.), he does not recommend forms of psychological self-analysis. He indicates instead that the human mind has immense and even infinite capacity that cannot be known as long as it is laden with knowledge, specialized, and materialistically preoccupied (Krishnamurti 2023, 17; Krishnamurti 2004, 213–214). Krishnamurti justifiably assumes that computer scientists would classify this notion as purely mystical (Krishnamurti 2004, 196, 199), since most of them are convinced that the mind is reducible to thought. He is, however, adamant that the transcendence of thought, which has been thus far the domain of the mystic, has become in this age a pressing task for all.

This is not to say that Krishnamurti prescribes meditation techniques either. The enhancement of consciousness that he delineates consists of four stages, which build on one another. As mentioned at the conclusion of the previous section, besides exercising intentionality, another feature of the mind that seems to distinguish it from intelligent machines is its second-order cognition of reflective self-consciousness (Gallagher and Zahavi 2023) that enables it to grow aware of its programmed condition. Since the mind and mechanical thought are not one and the same (Krishnamurti 2004, 213–214), one can become conscious of the mind’s present reality—purely consisting of knowledge—and of the nature of thought as a dangerously ‘mechanistic process’ of experience–memory–knowledge repetition (ibid., 188, 202). According to Krishnamurti, this insight into the machinelike character of thought is the ‘very source of intelligence’ (197, 202), or, in other words, the arising of inimitable awareness.

Nonetheless, we should exercise caution at this point since this act of seeing, which inevitably arouses a sense of dissatisfaction with what is going on in one’s brain (Krishnamurti 2023, 27), must not involve an active denial of thought. Any resistance to thought and any attempt to control or suppress it would merely be another branch of the ever-ramifying thought itself, resulting in an illusory split into thought and thinker (Krishnamurti 2004, 193). Ultimately, this type of reaction would only reinforce one’s mechanical thinking, whose nature is perpetual action and reaction. This is the second stage: one comes to realize that, on the one hand, one’s thought is no different from that of an intelligent machine, and that, on the other hand, one’s mind is out of touch with non-mechanical intelligence, yet any further internal movement would be nothing but a continuation of the same mechanical activity. For thought, this implies a condition of unbearable stuckness, but Krishnamurti deems this a healthy state of undissipated or concentrated consciousness that can generate an intense activation of the brain (ibid., 177–178). If sustained, this unusual condition, in which one’s brain has been taken to its limits, the familiar mental activity is no longer deployed, and awareness is at its peak, may eventually yield what Krishnamurti terms perception or direct insight (ibid.).Footnote 30

Perception or insight, concepts central to Krishnamurti’s teachings generally (Rodrigues 2001), represent an activity of the mind that takes place outside the evolution and accumulation of knowledge and is as such utterly untethered from thought. This quality, Krishnamurti argues, renders it uncomputerizable, since whatever is inaccessible to thought would inevitably be inaccessible to the intelligent machine (Krishnamurti 2004, 189). We could think of insight as the mind’s faculty of consciousness in a state that is completely disengaged from the familiar activities of thought as they arise in an instance of perception, such as labeling one’s experience, verbalizing it, comparing it to previous occasions, and relating to it on the basis of one’s gathered knowledge (ibid., 219–220). Moreover, a perception untouched by thought does not lean on linear, bit-by-bit investigation and understanding, which remains forever incomplete by nature; rather, it is instantly comprehensive and also transformative, neither drawing on amassed knowledge nor forming into further knowledge (ibid., 197, 200, 205–206). The emergence of this kind of perception is the third and principal stage of Krishnamurti’s enhancement of consciousness, which entirely relies on the former stages of non-reactive awareness of the mind’s programmed condition.

One such major insight, naturally unattainable by thought’s inquiry, is seeing that, since thought is machinelike and can only give rise to machinelike forms of intelligence, it is unable to produce non-mechanical forms of intelligence (Krishnamurti 2004, 195, 199). This line of reasoning leads Krishnamurti to wonder whether anything that is inherently mechanical and programmed should be considered intelligent at all (ibid., 192; Krishnamurti 2023, 25). This intriguing question, which has bearing on both the human mind and so-called intelligent machines, demonstrates the degree to which the philosophy of AI is an investigation of the nature of intelligence, just as it is a discussion of the nature of animal- or human-like machines (Russell 1997; Nyholm 2024, 3). For Krishnamurti, the advent of AI calls for an urgent reconceptualization of intelligence. If we accept that intelligence cannot be mechanical, and is, therefore, unrealizable by both thought and its mechanized extension, we could start by defining intelligence negatively as that which cannot be imitated or simulated by computers and other machines. This as-yet-unactualized intelligence cannot be known by so-called natural intelligence, whose limits have been exposed by its artificial mirrors, but by human minds that have grown aware of the programming of natural intelligence and, thus, transcended it. However, Krishnamurti makes it clear that the very act of contemplating the question of intelligence’s true nature results in the awakening of this intelligence (Krishnamurti 2004, 202).

Krishnamurti’s inference that thought could never produce intelligence leads him to reject the attempts of computer scientists to reproduce intelligence and creativity (Krishnamurti 2004, 192, 208). Given that their premise is that intelligence could be replicated if only one fathomed and captured the thinking process in full, he maintains that the true essence of intelligence will forever elude their expert minds (ibid., 196). Furthermore, Krishnamurti forewarns that a superintelligence rooted in human thought—the same thought that has given rise to war and atomic bombs—would carry the seeds of world destruction (193, 198).

The cultivation of a genuinely intelligent mind is thus Krishnamurti’s fourth and final stage of the enhancement of consciousness. Such a mind is not just innately distinct from mind-like machines, in the way in which it is conventionally discussed in the philosophy of AI, but rather it has transcended both natural and artificial intelligences, since the two are cut from the same cloth. Krishnamurti portrays the intelligent mind as one that is no longer rooted in memory (Krishnamurti 2023, 8). Whereas most minds look at and interact with the world through their gathered experiences and knowledge, the intelligent mind employs knowledge ‘only when necessary’, while for the most part, it does not ‘function from the known’ (ibid.). Ascribing very little significance to the mechanistic process of memory—the psychological and intellectual content of yesterday and tomorrow—it keeps itself as knowledge-free and empty as possible (ibid.; Krishnamurti 2004, 196–197). Except for recording necessary functional and technical knowledge, the intelligent mind consistently unburdens itself of all past recording and storing, so that every day, when one goes to bed, one wipes out everything that one has collected, as if dying at the end of the day (Krishnamurti 2023, 8). This renders the mind genuinely spontaneous: the non-mechanical individual would see themselves as ‘something that is changing all the time’, each day discovering themselves anew (ibid.; Krishnamurti 2004, 196–197).

5 Conclusion

In this article, I have confined my discussion to an introduction of Krishnamurti’s conception of the computational mind in light of current discourses in the philosophy of AI. Due to this limited scope, I have mainly been content with instigating deliberations on this subject, while avoiding a critical examination of Krishnamurti’s judgement on the nature of thought and the pathways he offers for its transcendence.

When considering Krishnamurti’s problematization of the concept of thought, one could raise substantial arguments against his assertion that thinking as a whole is purely mechanical, inherently unintelligent, and destructive. Other arguments can be made against his contention that all cognitive processes known to us, including all forms of intelligence and creativity, can be reduced to thought. On the other hand, it would be difficult to reject the challenge to thought’s imitability put forward by Krishnamurti, given that at least at the level of goal achievement, even in the case of so-called weak AI, we are witnessing an unfolding reality in which vast regions and numerous activities of human thought are becoming mechanically discernible and, thus, transferable to computers and other machines. The borders separating the mind from the machine are becoming increasingly blurry, to such a degree that products created by AI are assumed to be human-made, and products created by humans are thought to be computer-made. In effect, AI technologies move forward so rapidly that at least some of the assumptions in this paper face the danger of becoming outdated while they are being written.

When it comes to Krishnamurti’s response to this challenge, what at first appears to be an approach rooted in physicalism and computational functionalism is revealed as essentially mystical. Maintaining that mind and thought, though intertwined, are not one and the same, Krishnamurti points out that while thought is entirely a material process, the mind at large is irreducible to matter. Owing to the mind’s immaterial nature, it is capable of employing its faculty of awareness to gain insight into and ultimately transcend the limits of thought. Such a world view also excludes the possibility of thinking of Krishnamurti as a property physicalist: since thought can only give rise to further thought (and material processes can only produce further material processes), machines would be able to perfectly imitate and simulate the material processes of thought. No machines could possibly develop immaterial minds, and therefore, according to Krishnamurti, even the most complex computers and other machines that will appear in the inconceivable future will be inherently prevented from ascending to the heights of the human mind.

This, of course, implies that Krishnamurti’s response to the challenge involves accepting that there is indeed an incorporeal aspect to the human mind. Isolated from this path of transcendence, the line of questioning he proposes would unavoidably lead us to conclude that we are nothing but human computers—precisely the calamity he warns us of. After all, when Krishnamurti states that humans are either robots or computers, this is for him a form of rhetoric, designed to push his interlocutors and readers to and beyond the edge of thought.

Presently, the field of the philosophy of artificial intelligence has very few methods for handling Krishnamurti’s propositions and solutions. If we wish to think of these notions, we are compelled to turn to the philosophy of religion. However, I suggest that the magnitude of the challenge necessitates a broadening of the field’s boundaries so as to include in it important voices such as Krishnamurti’s and engage in transformative questions of human potential as a part of its ethical commitments. Thinking about human–machine tensions through the lens of non-Western, non-scientific models of the mind enables us to come upon otherwise inaccessible philosophical responses to the reality of AI and to re-examine limited perspectives on human intelligence. It may be that at least some of our responses to this complex problem will need to derive from thinkers who have indirectly tackled problems in the philosophy of mind using phenomenological and transformative approaches.