Abstract
The dream of making conscious humanoid robots is one that has long tantalized humanity, yet today it seems closer than ever before. Assuming that science can make it happen, the question becomes: should we make it happen? Is it morally permissible to create synthetic beings with consciousness? While a consequentialist approach may seem logical, attempting to assess the potential positive and negative consequences of such a revolutionary technology is highly speculative and raises more questions than it answers. Accordingly, some turn to ancient and not-so-ancient stories of “automata” for direction. Of the many automata conjured throughout history, if not in matter then in mind, the Golem stands out as one of the most persistent paradigms employed to discuss technology in general and technologically engendered life forms in particular. In this essay, I introduce a novel reading of the Golem paradigm to argue not from consequentialism, but from a deep-seated two-thousand-year-old tradition, the ethical implications of which are wholly deontological.
1 Introduction
While it may sound like science fiction, real people in the real world of science and engineering are working away at developing humanoid robots that will not only be intelligent but conscious.Footnote 1 Ever since the Dartmouth Workshop in 1956, for which the stated goal was “to find how to make machines use language, form abstractions and concepts, solve the kinds of problems now reserved for humans” ([105], p. 2), the race has been on to make conscious machines – “the holy grail of artificial intelligence” (e.g., [21], p. 86, [61], p. 227). In 1985, philosophy professor John Haugeland explained that “the fundamental goal [of AI research] is not merely to mimic intelligence or produce some clever fake. Not at all. AI wants only the genuine article: machines with minds, in the full and literal sense” ([69], p. 2; also, e.g., [158], fn. 8). To be clear, the term “mind” when used by philosophers refers very specifically to “consciousness,” and so, by the mid-1990s the new field of “Machine Consciousness” arose (see, e.g., [137], p. 258), bringing technologist Ray Kurzweil to write in 2006: “I do believe that we humans will come to accept that nonbiological entities are conscious” ([91], p. 385). And if one might balk at nonbiological entities attaining consciousness, there are people working on growing biological neural networks to power conscious humanoid robots (e.g., [1, 21], [108], Ch. 25, [136, 146, 154, 167]).
Given this goal—achievable or notFootnote 2—it behooves us, even at this relatively early stage of the scientific endeavor, to discuss the ethical implications of such conscious entities. Indeed, the philosophic community began the discussion long ago, with Hilary Putnam making the call already in 1964:
Given the ever-accelerating rate of both technological and social change, it is entirely possible that robots will one day exist, and argue “we are alive; we are conscious!” In that event, what are today only philosophical prejudices of a traditional anthropocentric and mentalistic kind would all too likely develop into conservative political attitudes. But fortunately, we today have the advantage of being able to discuss this problem disinterestedly, and a little more chance, therefore, of arriving at the correct answer ([127], p. 678).
Commenting on this fifty years later, Matthias Scheutz wrote that “with all the recent successes in artificial intelligence and autonomous robotics, and with robots already being disseminated into society … it is high time for AI researchers and philosophers to reflect together on the potential of emotional and conscious machines” ([137], p. 262).Footnote 3 To this end, I seek to reflect on the ethical implications of conscious machines through the looking glass of Jewish philosophy. In particular, I wish to investigate the very propriety of creating a conscious machine – i.e., is it morally permissible to create synthetic beings with consciousness? Let us begin at the beginning:
“And God said: Let us make man in our image …” (Gen 1:26).
At the very core of Jewish philosophy is the belief that humanity was created in the image of God (b’tzelem Elokim). It is this image that distinguishes humans from all other creatures, for it is this image that entails, among other things, the capacity—indeed, the challenge—to be creative like God ([147], p. 64). The great twentieth-century leader of modern orthodoxy in America, R. Joseph Soloveitchik, puts it like this:
“Man’s likeness to God expresses itself in man’s striving and ability to become a creator. … He engages in creative work, trying to imitate his Maker (imitatio Dei).”Footnote 4
This aspiration to imitate God in all His creativity has driven man in every field of endeavor and has tantalized him to that ultimate creation: an artificial human being, known in Jewish literature as: a Golem (see, e.g., [75], pp. 28, 345, 357). Beginning in ancient Greece, when Homer described metallic maidens (Iliad, Book 18) and Aristotle dreamed of autonomous weavers (Politics I:IV), and reaching to the present, when scientists like Stephen Younger [10] proclaim that “the creation of an artificial consciousness will be the greatest technological achievement of our species,” humanity ever dreams of imitating God’s creation. Yet, of all the automata that make up the history of man’s creative efforts, it is my thesis that the ancient Golem offers the most compelling ethical paradigm to address modern science’s call: “Let us make man in our image.”
2 The Golem
The term Golem is used to refer to a humanoid—a synthetic human, biologically based, with the capacity for autonomous action. And while some argue that the Golem was not biological (e.g., [132], p. 62; [100], p. 9), there is significant evidence to the contrary. First, both the Golem and Adam were made using the same initial material—i.e., “the dust of the earth.”Footnote 5 Dust that, the Talmud (San. 91a) teaches, is as worthy of the task as “sperm” (i.e., biological). Second, both the Golem and Adam were made using the same process (i.e., letter permutations) found in the very book that God is said to have used to create (i.e., Sefer Yetzirah).Footnote 6 Third, given that the Bible holds blood to be essential for a soul—“for the blood is the soul”Footnote 7—it seems obvious that anyone seeking to imbue a being with a soul (based on Jewish Thought), would seek to do so in a biological body.Footnote 8
3 Consciousness
Be that as it may, what is crucial in applying the Golem as an ethical paradigm for modern science’s synthetic human is not substrate but consciousness. For it is consciousness that is the defining ontological feature of human beings.Footnote 9 Indeed, while there is a long and venerable list of features describing the ontology of human beings, it is consciousness that stands out as the one upon which the rest depend.Footnote 10 But what, then, is consciousness? In his seminal article on consciousness, Thomas Nagel explains as follows:
“Conscious experience is a widespread phenomenon... But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism … We may call this the subjective character of experience” ([117], p. 436).
So consciousness, or to be more precise, phenomenal consciousness, is that which allows an organism to experience all the phenomena that the world has to offer. Importantly, this consciousness is discussed in “orders of consciousness”—primarily, first and second orders of consciousness. First-order phenomenal consciousness (1OPC) consists of the capacity to think about things. This level is generally associated with animals—e.g., a dog thinks about a bone. It is referred to as first-order in that the content of the thought at hand is that which is first perceived—e.g., the bone. Then there is second-order phenomenal consciousness (2OPC), which is a more sophisticated mental capacity whereby the content of what one has perceived is represented to oneself. It allows thinking about thinking. It allows speaking about what one is thinking—be it orally to others or silently to oneself (i.e., “inner-speech”).Footnote 11 This level is generally associated with human beings. For example, a human being not only thinks about the steak being eaten but can also entertain and express thoughts like: why am I eating a steak, what are the implications of eating steak for me, for the cow, for the environment, etc.
In consonance with this dichotomy, Jewish thought also distinguishes between orders of phenomenal consciousness, where 1OPC is associated with the “animal soul” (nefesh behamit) and 2OPC with the “human soul” (nefesh adam or neshamah). These associations should come as no surprise as the religious term “soul” is interchangeable with the secular term “mind.”Footnote 12 In fact, “consciousness” is a relatively new term, attributed to John Locke, who defined it as “the perception of what passes in a man’s own mind” ([98], II:1:19). Prior to the Enlightenment, the world used the term “soul.”
4 Should We Create Them
And thus, we arrive at the ethical question that has been called an “ethical question at the same level or even more intractable than cloning animal issues” ([144], p. 17): Ought machines be built that have not simply functional consciousness (i.e., cognition without experience), as they do today, but phenomenal consciousness, as is the goal for tomorrow? Should we be building machines with souls?
To be clear, I am not asking if we should make conscious slaves, conscious servants, or even conscious bots designed with some useful tendency or telos.Footnote 13 Rather, I am asking quite simply, should we be making conscious humanoids—i.e., free-willed autonomous beings with intelligence/superintelligence?Footnote 14 On the one hand, proponents of conscious humanoids argue that such beings could answer very real human needs—e.g., providing the infertile a child [116],Footnote 15 the lonely a friend ([166], p. 5), the lovesick a partner ([94], p. 209), the world its sages [25, 26, 58]. On the other hand, noble though these causes are, such beings raise significant ethical concerns, for example:
- speciesism—being a different species,Footnote 16 they will be discriminated against as “other” ([64], p. 207, [86], p. 313, [115], p. 1093, [137], p. 249).
- psychology—human-like beings created outside the natural family will be psychologically burdened ([23], p. 72).
- eternity—it is inappropriate to make an eternal being [10].
- competition—they will take human resources ([160], p. 124) and make humans irrelevant ([173], p. 392).
- control—such beings will reach “superintelligence” and we will lose control to the point that they rebel against humanity ([26], Hawking, Musk, Gates in [56], [59], p. 906, [144], p. 16, [173]).Footnote 17
These concerns are so significant that many call for banning the creation of conscious humanoids.Footnote 18
The arguments, for and against, are largely made on the basis of consequentialism (e.g., it would be good that an infertile couple has a child, it would be bad if such a child were discriminated against), which could also be argued from a virtue ethics position (e.g., it would be virtuous to help an infertile couple fulfill their dreams, it would be vicious to bring a child to the world destined for discrimination).Footnote 19 And, here, special mention needs to be made of the debate over the “control” problem which shifts the discussion from local consequences to global ones—proponents arguing that superintelligence will lead to utopia, opponents, that it will lead to the extinction of humanity. David Chalmers summarizes the debate on superintelligence with a simple, yet crucial, remark: “it is nontrivial to assess its value” ([35], p. 31).
As with all consequentialist arguments that reach into the great unknown to make their determination, they do not admit of categorical resolve. Accordingly, some turn to ancient and not-so-ancient stories of “automata” for direction. Of the many automata conjured throughout history (see, e.g., [92, 104]), if not in matter then in mind, the Golem stands out as one of the most persistent paradigms employed to discuss technology in general and technologically engendered life forms in particular.Footnote 20 My contribution to this discussion is in introducing a novel reading of the Golem paradigm to argue neither from consequence nor virtue, but from a deep-seated two-thousand-year-old tradition, the ethical implications of which, as will be explained, are wholly deontological.
4.1 Jeremiah’s Golem
As it turns out, not all Golems are created equal. In reviewing Moshe Idel’s monumental survey “The Golem,” three types of Golems can be discerned, each distinguished by a greater level of consciousness:
- The simplest Golem is understood to be animated by, what R. Moshe Cordovero (Pardes Rimonim 24:10) calls, a “vitality” (hiyut)—i.e., a power to allow for mobility but no phenomenal experience (nefesh).Footnote 21 Consequently, it is what we might refer to as a “philosophical zombie” or “mindless (biological) machine.”Footnote 22
- Then there are the most ubiquitous Golems in the mystical literature, described as having an animal soul (nefesh behemit)—i.e., possessing 1OPC and lacking the capacity for speech.
- And finally, there is the rare, indeed singular, appearance in all Golem literature: a Golem with a human-level soul (nefesh adam, neshamah)—i.e., possessing 2OPC.Footnote 23 This Golem, as will be shown presently, not only speaks but admonishes.
Given that we are looking for a paradigm for making a 2OPC artificial person, this last Golem, complete with human soul, is of primary interest to our inquiry. This unique Golem is presented in a mystical Midrash,Footnote 24 dating to as early as the ninth century ([169], p. 31), as follows:
Ben Sira wanted to study [Sefer Yezirah] alone. A heavenly voice came out and said, “Two are better than one.” He went to [the prophet] Jeremiah his father and they studied it for three years following which a man was created for them and upon whose forehead it was written “Hashem Elokim Emet” / “The Lord God is True” (Jer. 10:10). Now, in his hand was a knife and he was erasing the “aleph” of “Emet” [such that the word “True” becomes the word “Dead (Met)”]. Jeremiah cried, “Why are you doing this?” He answered them, “I will tell you a parable: There was a man who was an architect and wise such that when the people recognized this, they coronated him king over them. In time, others came and learned the art such that the people left the former and followed the latter. Similarly, God looked into the Book of Creation (Sefer Yetzirah) and created the world such that all the creations coronated him king. Now, when you came and made [a man] as He did, what will be in the end? They will leave Him and follow you. And He who created you, what will become of Him? They said to him, “If so, what can we do?” He answered them, “Reverse the sequence [of the letters used to create me].” [They reversed the sequence] and the man became dust and ashes.Footnote 25
If the upshot of this story is not clear enough, another version includes the following conclusion:
Then, Jeremiah said, “Indeed it is worthwhile to study these matters for the sake of knowing the power and dynamis of the creator of the world, but not in order to do [them]. You shall study them in order to comprehend and teach.”Footnote 26
Clearly, this Midrash condemns, in no uncertain terms, the synthetic creation of a human-level conscious being. That said, the reason for such condemnation is also quite clear: theology.Footnote 27 Indeed, Gershom Scholem ([138], [139], p. 181) notes that it was this mystical Midrash that first proclaimed—“God is Dead”—when the Golem, having erased the aleph on his forehead, was left with Nietzsche’s famous proclamation: Hashem Elokim Met (Nietzsche [1887] 2001, p. 119). Accordingly, this Midrash expresses the very real concern that if human beings can do what God does, this will lead to a collapse of faith—a conclusion plainly established ever since the Scientific Revolution (see, e.g., [93, 157]).
Now, prima facie, such a concern cannot be enough to stop the progress of science and technology. Accordingly, R. J. David Bleich, commenting on the above mystical Midrash, explains that, while faith may be weakened in some and hubris may be strengthened in others, there is no halachic (i.e., legal) prohibition to be found here ([23], p. 60; similarly, [38], p. 150). Indeed, Jewish thought strongly endorses humanity’s license to better the world, as derived from the divine command “to conquer the earth” (Gen 1:28).Footnote 28
But there are limits.
On the verse (Lev. 19:19) prohibiting the mixing of species (kilayim), the medieval Spanish philosopher and biblical commentator Nachmanides, writes that, “one who combines two different species, thereby changes and defies the work of Creation, as if he is thinking that the Holy One, blessed be He, has not completely perfected the world and he desires to help along in the creation of the world by adding to it new kinds of creatures. … Thus he who mixes different kinds of seeds, denies and throws into disorder the work of Creation.” On this Bleich ([23], p. 55; also [129], p. 109) learns that, as long as we are not creating a new species, the prohibition of kilayim does not, in and of itself, preclude our mandate to “conquer the earth” (Gen 1:28). But even this constraint is too much for Avraham Steinberg and John Loike [153], who argue that the prohibition of kilayim is a “statute” (hok), a “decree of the King,” for which we do not apply reason nor expand to cases not explicitly covered in the decree. Accordingly, they write, “it would seem that the prohibition of interbreeding (and thereby creating new species) should not be expanded to include other situations which are halachically different—even if in such cases the possibility of creating new species arises.”
That said, Nachmanides elsewhere (Deut. 18:9) expands on the prohibition of kilayim, relating it to the illicit use of “powers” that would tamper with nature and the natural course of the world. R. Yuval Cherlow ([38], p. 149) applies the ethical notions underpinning Nachmanides’ comments, writing that “Nachmanides’ words seem to indicate that mitzvot [i.e., commandments] prohibiting the use of other-worldly forces [including, e.g., modern bio-technologies] stems from ethical and spiritual motivations. Although a person may achieve much by employing such forces, he is limited by these prohibitions. Halakha requires him to remain within the framework in which he was created; he may not become a creator himself.” While Cherlow assuredly recognizes the value of modern technologies, he nevertheless understands Nachmanides’ words to counsel restraint (e.g., prohibiting genetic testing for gender selection).
But even this is not enough to provide a bulwark against developing and deploying technologies that would, prima facie, benefit humanity’s estate (to use Bacon’s phrase). For, while Cherlow advances the ethic that one “may not become a creator himself”—essentially echoing the voice of the Golem in our mystical Midrash—one cannot ignore the fact that “many medieval and premodern sources did not ‘listen’ to the voice of the Golem” ([75], p. 391). Accordingly, starting just around the time of the Scientific Revolution (16c.), stories of creating actual Golems began to proliferate within Jewish literature (see, e.g., [139], p. 198, [75], Pt. 4).
On the one hand, the mere discussion of making actual humanoids, notwithstanding the fact that no one succeeded in making one with human-level consciousness, certainly seems to sanction the attempts. On the other hand, the fact that these Golems largely ran amok seems to call into question the propriety of such endeavors. As a result, these Golem stories are invoked in ethical discussions as cautionary tales, neither condoning nor condemning similar (albeit technologically based) undertakings (see e.g., [5, 62, 133, 143, 165, 174]).
4.2 Rava’s Gavra
There is, however, a more significant source text that can provide categorical direction. That is to say, the Golem stories mentioned so far—whether of the post-Scientific Revolution genre or of the mystical Midrash genre from centuries prior—all fall into the category of Kabbalah (Jewish Mysticism). Accordingly, though mystical texts are considered to be important works within the corpus of Jewish literature, at times even taken into account in both ethical and legal discussions (see, e.g., Tzitz Eliezer 21:5), their weight is limited in the face of the foundational texts of the Bible, Midrash and Talmud that ground Jewish Thought.Footnote 29
And here it is critical to understand the distinctions between these texts and literary genres in order to judge the significance of the ideas being considered according to their source. For, while in general an idea should be judged on its merit, in Jewish thought there is a hierarchy of sources. This is because the Bible is considered divine in origin and, though open to interpretation, cannot be easily overridden by human reason.Footnote 30 Following the Bible in import, the Talmud is considered to be the repository of divinely inspired teachings and, though filled with discussions and debates, is not easily contested.Footnote 31
And this brings us to Midrash. First, we must distinguish between the “loose” versus “strict” use of the term “Midrash.” In its loose sense, the term applies to any extra-Biblical exposition on Biblical themes—the story of Jeremiah and Ben Sira being a case in point. In its strict sense, the term refers quite specifically to the extra-Biblical expositions on Biblical themes written from the time of Ezra through the eleventh century ([51], p. xviii) and restricted to specific rabbinic compilations, e.g., Midrash Rabbah, Midrash Tanchuma, Pirkei DeRebbi Eliezer, etc. It is these rabbinic compilations, as opposed to other “Midrashim,” that carry the most significant weight in Jewish Thought.
But even here there is another distinction to be made, and that is between Midrashim found within the Talmud and those found without. The Talmudic Midrashim are generally referred to as “Aggadot” and held to imply greater import than extra-Talmudic Midrashim [19]. Accordingly, R. Asher Ben Yehiel (medieval Talmudic scholar) writes that anything found in the Talmud has legal status (Ned. 9:2). That said, R. Yehezkel Landau (eighteenth century Talmudic scholar) makes no such distinction, writing: “When it comes to Midrash and Aggadot, their primary intention was to impart ethical lessons, through allusion and allegory, and indeed all of these constitute fundamentals of our religion” (Noda BeYehuda, Tinyana YD 161). As such, while all Midrashim carry ethical import, Aggadot may carry legal import—making the ethical values they express, enforceable.Footnote 32
With this introduction understood, we can now turn to the seminal Aggadah describing, what Idel ([75], p. 27) calls, “The most influential passage treating the possibility to create an artificial human being”:
Rava said: If the righteous wished, they could create a world, for it is written, “But your iniquities have separated between you and your God” (Is. 59:2). Rava created a man (Rava Bara Gavra) and sent him to R. Zeira. He [i.e., R. Zeira] spoke to him but received no answer. Then [R. Zeira] said: “You are from my pietist friends.Footnote 33 Return to your dust” (San. 65b).
What are we to make of this text?
To begin, Rava’s claim that “the righteous could create a world” is wholly novel and without precedent, in short: a hypothesis. To support, or perhaps attempt to prove, the veracity of his hypothesis, he brings an axiom in the form of a biblical text which teaches that what separates humans from God, the Creator, are iniquities. Rava’s logic is that if one were free of iniquity then one would be like God, the Creator, and hence also able to create (see, e.g., Maharal, Hidushei Aggada, ad loc., s.v. ee ba’u).
Having proven his hypothesis, at least theoretically, Rava is now ready to demonstrate its truth, empirically, by attempting to create a human being.Footnote 34 In this, he apparently succeeds, as the text attests: “Rava Bara Gavra.” But Rava is not yet satisfied and seeks to put his creation to the test. He thus sends the Gavra to his longtime friend R. Zeira, who, in a test prefiguring the now famous “Turing Test” [161],Footnote 35 attempts to engage Rava’s Gavra in conversation. He does so because speech has been considered, long before Turing made it the centerpiece of his test, the distinguishing feature of human intelligence (see, e.g., Aristotle, Politics I:II; Descartes, Discourse V; Hobbes, Leviathan 1:4). Indeed, as noted above, it is the potential to speak, what modern thinkers refer to as “inner speech,” that is understood to be the expression of second-order phenomenal consciousness, or in religious terminology, the soul (neshama).Footnote 36
Today, following Artificial Intelligence’s success with Large Language Models (see, e.g., [16, 171]), it is widely acknowledged that such a test is not enough to determine second-order phenomenal consciousness (see, e.g., [28], p. 213, [40], p. 37, [63], p. 458, [66], p. 194, [102], p. 163). Consequently, many a modified “Turing Test” has been devised to determine consciousness (see catalogs in, e.g., [50], [66], pp. 194–200, [67], ch. 12). Yet even these are found wanting, leaving many to maintain that there is simply no conclusive test to determine consciousness (see, e.g., [29], p. 309, [91], p. 78, [117], p. 436, [127], p. 690).
But if this is true, how can we explain R. Zeira’s conversation-based test? I suggest there are two possibilities. One is that R. Zeira was somehow, supernaturally, able to use the notion of conversation to determine not simply if it could speak (for killing a mute is murder),Footnote 37 but if it had “the potential to speak” (koach hadibbur). Alternatively, R. Zeira already knew, as maintains R. Gershon Chanoch Leiner (nineteenth century Chasidic Rebbe, in his Sidrei Taharot, Ohalot 5), that the Gavra was conjured by his friends—the conversation attempt being a simple verification before casting his spell “back to your dust” (which, by the way, would have done nothing to a real human—[168], Noah 12:2).
Either way, Rava’s Gavra failed. It was not, after all, the 2OPC being that Rava had set out to create but a 1OPC being, “an animal in the form of a man.”Footnote 38 There are two prominent possibilities for the failure:
(1) It could be that Rava was not the righteous tzaddik he had hoped he was but rather still had some iniquity separating him from God. This is the position of the Bahir (#196) as well as commentators like Leiner (Sidrei Taharot, Ohalot 5) who writes that while “there was no one so great as Rava, nevertheless he did not reach the level … of being utterly free of sin.”Footnote 39 It is this reading that provided the impetus for later Golem attempts recorded in the mystical literature (see, e.g., [75], pp. 106–107).
(2) Alternatively, it could be that the hypothesis is simply false—i.e., the righteous cannot, even if they want, create a human being. Such is the position of R. Moshe Cordovero (sixteenth century kabbalist), who writes incredulously, “How could one even imagine that it is possible to bring down a soul (i.e., “neshamah, nefesh and ruach”) into such a body?!” (Pardes Rimonim 24:10).Footnote 40 Accordingly, Rava’s support verse—i.e., what separates man from God is iniquity—does not come to teach that man could become Godlike, a creator, but only that man could connect to God in a spiritual sense (deveikut). This is the position of R. Judah Loew ben Bezalel (sixteenth century Talmudic scholar, a.k.a., “Maharal”) who explains:
Rava … purified himself, performed the procedures of the Book of Creation (Sefer Yetzirah), connected [spiritually] to God and created a Gavra. But it had not speech for [Rava] had not the power to bring a speaking soul into a man to make a being like himself. And this is obvious, since how can a man create a being like himself when it is impossible for God, Who is above all, to create a being like Himself!Footnote 41
Now, while one may question the soundness of the statement that “a being cannot create one like himself,” indeed Bleich calls it “curious” ([23], p. 81 fn. 58), nevertheless, the claim that man “has not the power to bring a speaking soul into a man,” does have broad consensus.Footnote 42 R. Chaim Yosef David Azulai (eighteenth century Talmudic scholar), for example, justifies this claim based on the creation verse (Gen. 2:7) as follows: “The potential (koach) for intelligence and speech is from God alone, as it says, ‘and [God] breathed into his nostrils the breath of life (nishmat hayim)’.”Footnote 43 Accepting this, R. Meir Abulafia (medieval Talmudic scholar) defends Rava’s original claim by suggesting that, while man cannot draw the soul, God will grant the soul to the Golem provided its creator is perfectly righteous.Footnote 44
And that brings us back to Rava’s claim. Here, against the understanding of a number of modern readers,Footnote 45 I suggest that this Aggadah was canonized in order to lay to rest the hypothesis that a human can create a 2OPC humanoid.Footnote 46 Indeed, its refutation is inherent in its very claim. The hypothesis, “if the righteous wanted …,” is explained by the great medieval biblical commentator, R. Shlomo Yitzhaki (Rashi), to mean that said “righteous” individual is not simply “righteous,” but “completely free of sin” (Rashi, ad loc.). Yet we know from Ecclesiastes that “there is not a righteous man upon earth, that doeth good, and sinneth not” (7:20).Footnote 47 And this is borne out by all the failed attempts to make a human-level Golem brought in the mystical literature,Footnote 48 the singular success of Jeremiah and Ben Sira coming not to affirm the possibility but to admonish against it—i.e., even if you could do it, you shouldn’t.Footnote 49
And it is precisely this understanding that brings R. Zeira to send it back to its origins. I propose that R. Zeira, in administering his language test to the Gavra, is not checking for second-order phenomenal consciousness to then accept it into the family of persons. Rather, he wants to know whether, if it fails the test, he can eliminate it with impunity like any dangerous animal; or, if it passes the test, he will have to express his indignation, once again, at the “shenanigans” of his old friend Rava who had once killed him and brought him back to life (Meg. 7b), much like the life he has now given to the Gavra.
R. Zeira’s position, then, is clear: it is categorically forbidden to make a synthetic sentient being in the form of a human, whether it has first-order or second-order consciousness. Furthermore, I suggest that this is precisely the position of the Gemara itself. For, while interpreting a narrative is more of an art than a science, the evidence presented here more than reasonably supports reading the Aggadah as favoring R. Zeira over Rava. Indeed, if Rava’s position was to be considered ascendant, it would be reasonable to expect that he should have been cited as resurrecting his Gavra (just as he had done with R. Zeira [Meg. 7b]). The very fact that Rava’s actions are defeated and never returned to again—not here, nor anywhere else in the Gemara—surely bodes ill for his position.
Some may argue that R. Zeira’s position is not the last word on the issue, as there is a continuation of the Aggadah:
R. Hanina and R. Oshaia spent every Sabbath eve in studying the ‘Book of Creation’, by means of which they created a third-grown calf and ate it.
Here, in contrast to Rava’s Gavra, which is eliminated with impunity, the calf created by R. Hanina and R. Oshaia is used for the purpose, a holy purpose I might add, for which it was created. The Gemara clearly approves of the creation here, for two reasons: first, the creation of the calf was not a one-off event but was repeated “every Sabbath eve”; second, if the calf were forbidden food it could not rightly be used for the religious purpose of a mitzvah meal with the recitation of blessings upon its consumption (Shulchan Aruch, OH 196:1). Interestingly, it is R. Oshaia himself who makes this ruling (JM Challah 1:5)! And if one might think that the calf itself was kosher but not its creation, the Talmud goes on to explain, explicitly, that the actions of R. Hanina and R. Oshaia were “categorically permissible” (mutar le’chatchila).
Now, while some see this explicit permit to create a calf as approval-by-association to create a Gavra,Footnote 50 I would argue precisely the opposite.Footnote 51 For, whereas the Talmud tells the two stories together in succession (65b), it is only after four pages (67b) that it reintroduces the calf story to give it its hechsher (stamp of religious approval).Footnote 52 The separation is telling. It graphically demonstrates the fact that permitting the synthetic creation of an animal is very far from permitting the synthetic creation of a human. Indeed, to realize this one need look no further than to the gravity of the issues surrounding the cloning of animals versus those of the cloning of humans (see, e.g., [85]). The potential ethical dilemmas inherent in the former pale, radically, in comparison to those inherent in the latter.
5 Conclusion
In conclusion, I have argued herein that if technology could achieve the creation of a 2OPC being, regardless of substrate (i.e., silicon, carbon, other), such technology should be banned. And, more restrictively, given the great epistemological concern that, while many an enhanced Turing Test has been proposed, no one has come up with a “R. Zeira (Supernatural) Test” that would ensure we know whether a being is conscious or not, I herein endorse the calls to ban even the attempt to develop sentient robots.Footnote 53 This is in opposition to some who say that we should work to make such a being but then desist (e.g., Atlan in [74], p. 28).
To support this position, I brought the Golem, in its various forms, to serve as a moral paradigm. Beginning with Jeremiah’s Golem, who condemned the creation of a synthetic human for fear that such a creation would lead humanity to declare that “God is dead,” I set this source aside because, in our era awash in atheism, the apparent demise of God is old news and, as such, creating synthetic humans today will not significantly change humanity’s faith in God.
But here it is important to note that the theological concern voiced by the Golem, in fact, remains relevant. For, if “God is dead,” then, as Dostoyevsky wrote, “all is permitted.”Footnote 54 That is to say, the Golem’s concern—that humanity will leave God and follow man—was not merely a petty concern over power in a physical sense (i.e., man will create just like God), but a deep concern over power in a metaphysical sense (i.e., man will assume moral authority instead of God). It is stunning just how prescient this ninth-century Golem was, for indeed, both concerns were realized as the Scientific Revolution gave birth to what might be called the moral revolution wherein Kant, for the first time in history, shifted moral authority from God to man.Footnote 55 And it was this shift that ultimately brought about the observation that “all is permitted,” that the world has lost its absolute moral ground.Footnote 56 The Golem’s admonition, then, while mute as a call to stop science and technology, remains in all its vigor as a call to return to the original Architect; for even if humans can build the same building, they cannot provide it the same ground.
This point was made contemporary by Bill Joy, former chief scientist of Sun Microsystems, who noted that Nietzsche followed his famous declaration with the warning that there is a danger in substituting science for God. This is because science pushes forward despite the “disutility”—i.e., despite the detriments it may entail for humanity (Nietzsche [1887] 2001, p. 201). Accordingly, Joy expresses great concern over the “control” problem: “The truth that science seeks can certainly be considered a dangerous substitute for God if it is likely to lead to our extinction” [81]. Joy calls for a ban on AI technologies—like conscious robots—that have the potential to drive humanity to extinction. In this he is like the prophet Jeremiah, who forewarns of the destruction to come as a result of shirking divine morality and “worshipping the work of their own hands” (Jer. 1:16). Perhaps it is for this very reason that the mystical Midrash is told in Jeremiah’s name. And how apt for us today are Jeremiah’s words at the end of that Midrash: “it is worthwhile to study these matters for the sake of knowing the power and dynamis of the creator of the world, but not in order to do [them]. You shall study them in order to comprehend and teach.”
Following the analysis of Jeremiah and his Golem, I demonstrated that the position they voiced took on more normative significance in the Aggadah of Rava’s Gavra. Here I argued that this narrative, when looked at as an organic whole, comes to prohibit the making of a synthetic conscious being.Footnote 57 But perhaps I should be more exact and claim that the Aggadah comes to voice the notion that, while it is actually impossible to make a second-order phenomenally conscious being (i.e., ensouled with neshamah), making even a first-order phenomenally conscious being (i.e., ensouled with nefesh behemit) that looks like a human, although possible, is forbidden. Now, irrespective of whether such creations (1OPC or 2OPC) are possible, the ethical directive is unambiguous: it is forbidden to create sentient beings (1OPC or 2OPC) in human form.Footnote 58 This is, of course, in contradistinction to the explicit permit, made by the Talmud, to create a 1OPC being in the form of an animal. Though even here some are hesitant to allow it (see, e.g., Shach YD 179:18).
Now, if my reading of this Talmudic Aggadah is correct, then the prohibition against making a sentient being in human form is of deontological value, it being adduced neither from consequences nor virtues but from the actions of great teachers—in rabbinic parlance, “maaseh rav.”Footnote 59 And it is precisely the fact that this teaching is sourced in rabbinic action that gives it not only ethical value but legal value. For, according to the principle of “halacha al pi maaseh rav,” the actions of a rabbi “are recounted specifically for purposes of learning halacha from them” ([19], p. 53).Footnote 60 And indeed, this narrative is brought in numerous legal discussions.Footnote 61
Accordingly, this deontological prohibition presents us with a legally binding ethical duty. It is a duty, however, squarely grounded in Jewish law and lore, thus making it incumbent upon those who accept Jewish law and lore. That said, I would argue that it could and, indeed, should apply to all peoples. For the maaseh rav is neither contingent on Jewish beliefs (i.e., there is nothing explicitly “Jewish” in the act of R. Zeira), nor is the issue itself a parochial one (i.e., the propriety of creating a sentient humanoid is of concern to all). Accordingly, given that the question is universal, and the response is universal, the duty, then, is universal ([106], pp. 34–36).
Nonetheless, coming as it does from within the corpus of Jewish Thought, the duty could be said to hold little significance for non-Jews. Yet nothing could be further from the truth. Indeed, the Torah itself commands that it be written in seventy languages (Sotah 7:5) in order to be made relevant to all the typological seventy nations (see, e.g., Rashi, Sotah 35b, s.v. he’ach). And so it has, as noted most emphatically by the sixth American President, John Quincy Adams: “The law given from Sinai was a civil and municipal as well as a moral and religious code; … [its] laws are essential to the existence of men in society, and most which have been enacted by every nation which ever professed any code of laws.”Footnote 62 Now, the “law given from Sinai” includes both written and oral, both Torah and Talmud (Ber. 5a), such that, if the written Torah is of value to the world at large, then the Talmud is no less, for it contains the values of the written Torah as applied over time (see, e.g., [17], p. 83, [49], p. 85).Footnote 63 Accordingly, I suggest that the Talmudic-based deontological ban on the development of sentient humanoids applies universally to all of humanity.
That said, many argue that imposing a ban on such a tantalizing technology—“the greatest technological achievement of our species”—will be difficult, if not impossible, to enforce.Footnote 64 Indeed, Bleich laments, “It is a truism that, in the usual course of human events, that which can be done will be done” (1998, p. 48). This attitude, known as “technological determinism” (see, e.g., [77], p. 13), has caused great consternation among all who realize that technology, while holding the great promise of creating a perfect world, also holds the great peril of bringing about its demise (see, e.g., [80]). As a result, ever since the atomic bomb was dropped in the mid-twentieth century, the field of Technology Ethics has been growing with ever greater urgency to guide, and if need be, limit, the development of all our tantalizing technologies (see, e.g., [111], [162], pp. 28–32). Interestingly, the celebrated first Chief Rabbi of pre-state Israel, Abraham Isaac Hacohen Kook, made this point already at the beginning of the twentieth century:
Behold, if human abilities will increase mightilyFootnote 65 and [yet] human goodwill will not develop according to pure ethics,Footnote 66 will not such powers, then, benefit only man’s material being … selfish concern overriding moral concern … leading, perforce, to an onerous life for all?! … And in contrast, by applying ethics to guide these powers [of ability and will], great good and blessing will be achieved by both the individual and the world. For truly, the complete good will come specifically from the perfect joining of these two forces—the ability and the will—[with the intent] toward the good end. … And as humanity continues to increase in knowledge, so will it recognize the unity in all the various forces of nature [to the point that]: “If the righteous wanted they could create a world” (ee ba’u tzadikei baru alma—San. 65b). This is the ultimate approach to human endeavor, wherein “ability and will” are unified toward the fulfillment of that sublime goal: the perfection of man ([89], p. 24).
Stunningly, Kook encapsulates his approach on how to perfect our world via science and technology in Rava’s great claim. How are we to create our world? How are we to reach “our greatest achievement”? Only by joining our “will” (ba’u) and our creative “ability” (baru) under the guidance of “pure ethics” (tzadikei).Footnote 67 And as ethics is measured not only by what one does but by what one does not do, in the case of a conscious humanoid (gavra), our greatest achievement will be in the ethical act of refraining from that achievement.Footnote 68
In the words of R. Zeira, “Return to your dust!”
Notes
As of 2020 there were 72 AGI projects around the world [53]. And here it is important to review some terminology (see, e.g., [54], p. 2, [95], pp. 2, 19–21). To begin, Artificial Intelligence seeks, as noted by McCarthy, to build machines that can do everything that humans can do. To this end there is a spectrum from weak AI to strong AI, the former exhibiting cognitive consciousness, the latter, phenomenal consciousness (e.g., [141]). This spectrum parallels, generally but not necessarily, the spectrum from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI). The former refers to technology that is focused on a particular task, the latter to technology capable of a broad range of tasks. Ultimately, AGI is a machine that has the intelligence to perform all human tasks and is thus sometimes referred to as “Human-Level AI.” Here naming conventions clash with technical and philosophical positions, as there is significant disagreement regarding the role of phenomenal consciousness in the performance of all human tasks. Some hold AGI can be achieved with weak AI, some hold it demands strong AI. In either case, there is a belief that AGI, in being able to do all human tasks, including creating AGI, will thus give birth to superintelligence, i.e., the ability to think on a scale and at speeds that dwarf human abilities ([26], Ch. 2).
There is a spectrum of opinions on the prospect of building conscious machines. Some hold it will happen (e.g., [110], [112], p. 39, [40], p. 37, [91], p. 375, [142]); some hold that, while it may happen, it is a long way off (e.g., [159], p. 499, [164], p. 349, [126], p. 146, Boden 2016 and Dennett 1997 cited in [42], p. 38); others are more conservative and hold that it is most unlikely to happen (e.g., [8], p. 164, [151], p. 467, [90], p. 444, [20], p. 210, [42], p. 66, [116], p. 2); and finally, others are emphatic that it simply won’t happen (at least not computationally—see, e.g., [48, 54, 56], [124], p. 447, [156], p. 197, [170]). For a survey of when “High-Level Machine Intelligence” is expected to happen, see [27].
Of Adam: “the Lord God formed man of the dust (afar) of the ground” (Gen. 2:7) and “for dust (afar) thou art, and unto dust (afar) shalt thou return” (Gen. 3:19). Of Golem: “return to your dust (afar)” (San. 65b).
Deut. 12:23. For more on this, as relates to machine consciousness, see my forthcoming essay “To Make a Mind.”
See, e.g., [124], p. 9, [34], p. 26, [12], [166], p. 3, [82], p. 229, [159], p. 507, [94], p. 214, [63], pp. 457–8, [164], p. 253, [7, 14, 52, 102, 119, 126], [103], p. 6, [144], [3], p. 270, [87], p. 329, Liao [96], p. 497, [108], Ch. 43, [121], p. 199, [140], pp. 464, 473, [9, 88]. Dissenters do not deny that consciousness is paramount but that (a) it is not defined well enough to be of practical use, and (b) it is not discernable beyond behavior (e.g., [65], pp. 98–100).
See my forthcoming essay, “To Make a Mind.”
For more on the link between speech and 2OPC see my forthcoming essay, “To Make A Mind.”
For a discussion on this subject see my forthcoming essay, “Eudemonia of a Machine.”
The arguments herein, regarding whether a conscious being should or should not be built, apply whether said being is intelligent or “superintelligent.”
See also sources in [113], p. 1.
Calls to ban can be found in [81], [166], p. 5, [30], p. 10, [63], p. 458, [31], [173], p. 392, [115], p. 1095, [140], p. 463, [114], Intro. Strikingly, AI founder John McCarthy also believes humanoids should be designed to be “appliances rather than as people” (in [68]). Regarding the “control” problem, it should be noted that it applies especially to super-intelligent AI, which some believe may not require consciousness whereas others believe it will or must have consciousness (see fn. 1 above).
In addition, one might apply anti-cloning arguments such as: (a) the commodification argument – making artificial life leads to relating to such beings as a mere commodity ([15], p. 565, [85], p. 84), (b) the natural law argument – i.e., nature intended that the species be perpetuated solely by the natural method of conjugal procreation (see, e.g., [23], esp. pp. 51–55; [128], p. 6).
Similarly, Hesed LeAvraham (Ein Yaakov, Mayan 4, Nahar 30). While the exact nature of this “hiyut” is unclear, from Cordovero’s own words, it is plainly not one of the standard spiritual entities (nefesh, ruach, neshama). For discussions, see [139], pp. 194–195, [75], p. 133 fn. 24, pp. 197–198, 201, 276, 250 fn. 8.
For Cordovero, and his followers like Avraham Azulai, “the Golem is no more than an automaton” ([75], p. 201, see also pp. 250, 276).
For the sake of completeness, Idel ([75], p. 175) brings Alemanno who writes of creating a speaking Golem. But Idel explains that this is outside the normal Jewish Golem tradition, being influenced by the Hermetic magical tradition.
As will be elaborated below, I use the term “Midrash” here descriptively (in that it is a religious narrative) and not formally (in that it is not part of rabbinic literature).
This text is found in various sources (see, e.g., [75], pp. 67, 98).
Interestingly, Geoffrey Jefferson ([76], p. 1107) expressed precisely this concern over thinking machines at the beginning of the computing revolution.
Indeed, Bleich ([23], p. 60) notes that the Jeremiah Midrash is not, in and of itself, to be regarded halachically.
This is not to say that the Bible contradicts human reason, but that human reason is limited in comparison to that of the divine (see, e.g., [118]).
See, e.g., R. Avraham Yeshaya Karelitz (Kovetz Igrot Chazon Ish, Vol. II, Ch. 24).
Note: while the discussion surrounding the relationship between ethics and law, in both secular and Jewish approaches, is beyond the scope here, it is important to note that legislation of a moral value makes the value enforceable – entailing a right if positive, a penalty if negative. On the Jewish approach to law and ethics see my forthcoming essay, “Polemics on Perfection.”
Lit. havraya, variously translated as “friends” (see, e.g., Rashi; Yad Rama ad loc.) but indicating the group of sages that could be referred to as “pietists” (see, e.g., Maharsha ad loc.; Steinsaltz ad loc.; [75], p. 27). The suggestion that the term refers to “magicians” is rejected by Idel (ibid.).
See, e.g., Sidrei Taharot (Ohalot 5), Aruch LeNer (San. 65b) who explain that Rava made a Gavra to demonstrate his claim. The word “world” is ambiguous and can be understood literally as “world” (see, e.g., [75], p. 31) or figuratively as “human” (see, e.g., [75], p. 110). Indeed, there is a notion that a human comprises or reflects the whole of creation ([75], p. 353).
For the sake of completeness, Turing proposed that if a machine could hold a conversation with a human without being discerned as a machine it could be said that the machine is intelligent. The “intelligence” he aimed at was clearly intelligence supported by 2OPC (see, e.g., [124], p. 9, [122], 2.4). Nevertheless, he realized his test could not verify subjective experience ([161], p. 447) and was thus expressing a kind of behaviorist approach – i.e., the most we can hope to observe from the test is behavior consistent with human-like intelligence, not the underlying causes of the behavior (see, e.g., [4], p. 254, [156], p. 196, [123], p. 12, [145], [55], p. 189, [150], p. 34, [60], p. 99, [36], [9], fn. 7).
For a discussion on the relationship between speech and soul, see my forthcoming “To Make a Mind.”
See, e.g., Mishna Berura (Biur Halacha 329, s.v. ela); Sheilat Yavetz (2:82); Sidrei Taharot (Ohalot 5).
R. Yaakov Emden (Sheilat Yavetz 2:82); R. Chaim Azulai (Marit HaAyin, San. 65b, s.v. Rava); R. Leiner (Sidrei Taharot, Ohalot 5). Interestingly, the deficiency of Rava’s Gavra is noted by R. Aryeh Kaplan who explains, in his commentary to Sefer Yetzirah, that the numerical value of “Rava Bara Gavra” is 612, one less than the 613 typological limbs and veins of a full human ([84], p. xxi). For completeness, the anomalous position of R. Tzadok MiLublin (Divrei Halomot 6) should be noted: the Gavra is above an animal, having 2OPC, but without a neshamah.
R. Cordovero’s position may be viewed as extreme only in the sense that he does not believe any spiritual essence can be imbued, as opposed to others in this camp who hold that only the highest-level human soul (neshamah) cannot be imbued.
Maharal (ad loc., s.v. rava). It is important to note a seeming contradiction in the Maharal’s commentary. On Rava’s hypothesis (ibid, s.v. ee’bau), he explains that it is entirely possible for the righteous to create a human being, yet here on the test of the hypothesis (ibid, s.v. rava bara) he writes that it is entirely impossible. The contradiction can be reconciled by understanding that the first comment explains the hypothesis, as is; while the second comment explains why the hypothesis failed—precisely as I am suggesting in my reading of the passage.
In defense of the Maharal, his point is raised as a possibility by a modern robotics professor: “there could be a fundamental theorem of the universe … that no creature is smart enough to build a copy of itself” ([130], p. 55).
Marit HaAyin (San. 65b, s.v. rava). Similarly, Maharsha (Hidushei Aggada, ad loc., s.v. v’lo hava), R. Tzadok MiLublin (Divrei Halomot 6, s.v. af shekatav), R. Joseph Ashkenazi ([75], p. 71), R. Pinhas Eliyahu Horwitz (ibid, p. 238), R. Elazar of Worms (ibid., p. 55). R. Bleich ([23], p. 81 fn. 58) understands the Hesed le-Avraham (Ein Yaakov, Mayan 4, Nahar 30) to agree that man cannot draw a soul to a Golem, but this only because of the limited power of Sefer Yetzirah, thus perhaps leaving open modern attempts. Rosenfeld ([132], p. 61) also makes this conjecture.
Yad Rama (San. 65b, s.v. amar). So too R. Hannanel (San. 67a), R. Weiss ([168], Vaera 9:3). In this vein, it is important to note a rabbinic Midrash that teaches, “if all the creatures in the world gathered together to make a single gnat and put a soul (neshamah) into it, they would not succeed” (Gen. R. 39:14). R. Kaplan ([84], p. xx) explains that this statement is not denying the theoretical possibility but only the practical one due to the loss of knowledge. R. Weiss ([168], Vaera 9:3) understands this to mean, quite simply, that man cannot create synthetic life (yesh mi’ayin) without divine intervention. Interestingly, Turing makes this very claim in arguing for the possibility of a conscious machine: “We are … instruments of His will providing mansions for the souls that He creates” ([161], p. 443).
It should be noted that Peter Schafer ([135], p. 253) makes this claim but brings no support to defend it. Idel ([75], pp. 31, 413) argues, from a literary perspective, that the passage is a Talmudic polemic against creating a sentient being (albeit by magic not science) and teaching that endowing a body with a soul (neshamah) is impossible (ibid., p. 17).
R. Leiner (Sidrei Taharot, Ohalot 5) makes precisely this same argument using Ecclesiastes (7:20), but nevertheless leaves open the possibility, given that there were four saintly individuals recorded in the Gemara (BB 17a). This, however, begs the question (noted by Aruch LeNer, San. 65b, s.v. ee bau): if the possibility truly existed for these saintly four, why did they not indeed actualize this ultimate potential and create a world?!
See, e.g., R. Yaakov Emden (Sheilat Yavetz 2:82); Scholem ([139], pp. 202–203).
It should be mentioned here that R. Isaac ben Samuel of Acre, as part of the school of Ecstatic Kabbalah, reread the Jeremiah midrash as not only affirming the possibility of creating a full-fledged human-level Golem but permitting it ([75], pp. 346–352). That said, such sentiments must be understood in their mystical context—i.e., the endeavor to cleave to God by mystically imitating God’s creative act. Accordingly, as R. Isaac never writes that he succeeded in making an actual Golem, we too should not be moved to include his account in our normative analysis. In addition, Abraham Abulafia, founder of the Ecstatic school of thought, absolutely forbade creating a Golem ([75], p. 345), seeing it as purely a mystical endeavor (ibid., p. 102).
R. Shem Tov ibn Shaprut (Pardes Rimmonim 13a) also sees the two stories as contrasts between permitted and prohibited.
For the sake of completeness, the technical reason for the separation is that the discussion of permitted versus forbidden creations does not arise until four pages later. That said, if the creation of a Gavra (i.e., human) was permissible, it should have been cited explicitly alongside the calf, just as calf and Gavra were cited together originally. For one cannot infer a permit to create a Gavra from a permit to create a calf, the difference between human and animal being too vast, indeed, as vast as the difference between 2OPC and 1OPC.
See, e.g., [18], p. 106.
Noteworthy is the fact that even R. Bleich ([23], p. 75), who didn’t read the Aggadah as prohibitive, did acknowledge that neither does it “encourage” such creation.
Note that it is entirely irrelevant whether this “maaseh” (act) ever actually took place or not, for it is brought as if it did.
See also, Encyc. Talmudit (“halacha” s.v. al pi maaseh rav). Noteworthy is the fact that even without invoking the “maaseh rav” principle, the narrative would still have legal weight according to those, like R. Asher Ben Yehiel (Ned. 9:2), who consider any Aggadah to be halachically significant.
On the status of clones, see, e.g., [23, 99, 100, 129, 153]. On the status of humanoids, see, e.g., Hacham Tzvi (93), Sheilat Yaavetz (2:82), Lehorot Natan (7:11), Sidrei Taharot (Ohalot 5), Divrei Halomot (6), Hashukei Hemed (San. 65b), Marit HaAyin (San. 65b), Tzafnat Paneach (2:7), Darkei Teshuva (7:11), Birkei Yosef (OH 55:4 s.v. u'lmai), Machazik Bracha (ad loc.), Ikarei HaDat (OH 3:15), Kaf HaChaim (OH 55:12), Rivevot Efraim (7:385), [168], Noah 12:2 and Vaera 9:3–5, Shu”t Yehuda Ya’aleh (1:26 s.v. v’da), Gilonei Hashas (San. 19b, s.v. sham maale).
[2], p. 61 (emphasis added). Similarly: “The Bible is the rock on which our republic rests” (Andrew Jackson). “The teachings of the Bible are so interwoven and so entwined with our civic and our social life, it would be impossible for us to figure what life would be if these teachings were removed” (Teddy Roosevelt). “The existence of the Bible, as a book for the people, is the greatest benefit which the human race has ever experienced” (Immanuel Kant, unbound pages in the Konigsberg library [Convolut G. i., ii]). See also [79], p. 585.
It should be noted that, while there is a lot of discussion regarding the propriety of teaching non-Jews the oral Torah (see, e.g., [22]), R. Bleich writes that “Jews should certainly not hesitate to make the teachings of Judaism, as they bear upon contemporary mores, more readily accessible to fellow citizens” (ibid., p. 339).
Literally, “sevenfold,” which is a biblical expression for “greatly” (see Lev. 26, esp. Rashbam, Lev. 26:18).
Note that there can be misguided goodwill! For humanity can believe it is doing great good, yet without reference to a “pure ethic,” do great damage.
See [129], p. 117.
References
Adamatzky, A.: Twenty five uses of slime mould in electronics and computing. Int. J. Unconvent. Comput. 11, 449–471 (2016). https://www.researchgate.net/publication/299462623_Twenty_five_uses_of_slime_mould_in_electronics_and_computing_Survey
Adams, J.Q.: Letters of John Quincy Adams to His Son, on the Bible and Its Teachings. James M. Alden, Auburn (1850). https://archive.org/download/lettersofjohnqui00adam/lettersofjohnqui00adam.pdf
Agar, N.: How to treat machines that might have minds. Philos. Technol. 33(2), 269–282 (2019). https://doi.org/10.1007/s13347-019-00357-8
Allen, C., Varner, G., Zinser, J.: Prolegomena to any future artificial moral agent. J. Exp. Theor. Artif. Intell. 12(3), 251–261 (2000). https://doi.org/10.1080/09528130050111428
Ambrus, G.: Image, servitude, partnership. In: Gocke, B.P., Rosenthal-Von der Putten, A. (eds.) Artificial Intelligence: Reflections in Philosophy, Theology, and the Social Sciences. Brill, Boston (2020)
Amital, Y.: Perfecting nature. In: VBM. Yeshivat Har Etzion, Gush Etzion (2002). https://www.etzion.org.il/en/tanakh/torah/sefer-vayikra/parashat-tazria/perfecting-nature
Anderson, D.L.: Machine intentionality, the moral status of machines, and the composition problem. In: Muller, V.C. (ed.) Philosophy and Theory of Artificial Intelligence. Springer, Berlin (2013). https://doi.org/10.1007/978-3-642-31674-6
Anderson, S.L.: Philosophical concerns with machine ethics. In: Anderson, M., Anderson, S.L. (eds.) Machine Ethics. Cambridge University Press, Cambridge (2011)
Andreotta, A.J.: The hard problem of AI rights. AI Soc. (2021). https://doi.org/10.1007/s00146-020-00997-x
Anthes, G.: Computer consciousness. In: Computerworld (2001). https://www.computerworld.com/article/2584729/computer-consciousness.html
Aristotle (350 BC): Politics. Translated by Carnes Lord, 2nd edn. University of Chicago Press, Chicago (2013)
Asaro, P.M.: What should we want from a robot ethic? Int. Rev. Inf. Ethics 6, 9–16 (2006). https://peterasaro.org/writing/Asaro%20IRIE.pdf
Barresi, J., Martin, R.: Naturalization of the Soul. Routledge, Abingdon (2012)
Basl, J.: What to do about artificial consciousness. In: Sandler, R.L. (ed.) Ethics and Emerging Technologies. Palgrave Macmillan, London (2014). https://doi.org/10.1057/9781137349088
Bedau, M., Triant, M.: Social and ethical implications of creating artificial cells. In: Sandler, R.L. (ed.) Ethics and Emerging Technologies. Palgrave Macmillan, London (2014). https://doi.org/10.1057/9781137349088
Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dangers of stochastic parrots. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610–623. Virtual Event. Association for Computing Machinery, Canada (2021). https://doi.org/10.1145/3442188.3445922
Berkovits, E.: Not in Heaven: The Nature and Function of Jewish Law. KTAV, New York (1983)
Berkovits, E.: God, Man and History. Shalem Press, Jerusalem (2004)
Bernstein, I.: Learning Halacha from Aggadah. J. Halacha Contemp. Soc. LXX (2015)
Birhane, A., van Dijk, J.: Robot rights? Let’s talk about human welfare instead. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 207–213 (2020). https://doi.org/10.1145/3375627.3375855
Bishop, J.M., Nasuto, S.: Of (zombie) mice and animats. In: Muller, V.C. (ed.) Philosophy and Theory of Artificial Intelligence. Springer, Berlin (2013). https://doi.org/10.1007/978-3-642-31674-6
Bleich, J.D.: Teaching Torah to Non-Jews. In: Contemporary Halakhic Problems, vol. 2. KTAV, New York (1983)
Bleich, J.D.: Survey of recent Halakhic periodical literature: cloning: homologous reproduction and Jewish law. Tradition 32(3) (1998). https://www.jstor.org/stable/23261122
Boden, M.A.: Wonder and understanding. Zygon 20(4), 391–400 (1985). https://doi.org/10.1111/j.1467-9744.1985.tb00605.x
Bostrom, N.: Ethical Issues in Advanced Artificial Intelligence. Nickbostrom.com (2003). https://nickbostrom.com/ethics/ai
Bostrom, N.: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford (2014)
Bostrom, N., Muller, V.C.: Future progress in artificial intelligence: a survey of expert opinion. In: Muller, V.C. (ed.) Fundamental Issues of Artificial Intelligence. Springer, Berlin (2016)
Brand, L.: Why machines that talk still do not think, and why they might nevertheless be able to solve moral problems. In: Benedikt, P.G., Rosenthal-Von der Putten, A. (eds.) Artificial Intelligence: Reflections in Philosophy, Theology, and the Social Sciences, pp. 203–217. Brill, Boston (2020)
Bringsjord, S.: Meeting Floridi’s challenge to artificial intelligence from the knowledge-game test for self-consciousness. Metaphilosophy 41(3), 292–312 (2010). http://www.jstor.org/stable/24439827
Bryson, J.: Robots should be slaves. In: Wilk, Y. (ed.) Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues (Natural Language Processing), vol. 8, pp. 63–74. John Benjamins Publishing Company (2010). https://researchportal.bath.ac.uk/en/publications/robots-should-be-slaves
Bryson, J.: Patiency is not a virtue. In: Gunkel, D., Bryson, J., Torrance, S. (eds.) The Machine Question: AI, Ethics and Moral Responsibility, pp. 73–77. Society for the Study of Artificial Intelligence and the Simulation of Behaviour (2012). http://events.cs.bham.ac.uk/turing12/
Bryson, J.: The artificial intelligence of the ethics of artificial intelligence. In: Dubber, M.D., Pasquale, F., Das, S. (eds.) The Oxford Handbook of Ethics of AI, pp. 3–25. Oxford University Press, Oxford (2020). https://doi.org/10.1093/oxfordhb/9780190067397.013.1
Calverley, D.J.: Legal rights for machines. In: Anderson, M., Anderson, S.L. (eds.) Machine Ethics. Cambridge University Press, Cambridge (2011)
Chalmers, D.: The conscious mind: in search of a fundamental theory. Oxford University Press, New York (1996)
Chalmers, D.: The singularity: A philosophical analysis. J. Conscious. Stud. 17 (2010). https://consc.net/papers/singularityjcs.pdf
Chalmers, D.: Episode 25: Sean Carroll Interviews David Chalmers on Consciousness. Sean Carroll’s Mindscape (2018). https://www.preposterousuniverse.com/podcast/2018/12/03/episode-25-david-chalmers-on-consciousness-the-hard-problem-and-living-in-a-simulation/
Charpa, U.: Synthetic biology and the Golem of Prague: philosophical reflections on a suggestive metaphor. Perspect. Biol. Med. 55(4), 554–570 (2012). https://doi.org/10.1353/pbm.2012.0036
Cherlow, Y.: In His Image. Maggid, Jerusalem (2016)
Chomanski, B.: What’s wrong with designing people to serve? Ethic. Theory Moral Pract. 22(4), 993–1015 (2019). https://doi.org/10.1007/s10677-019-10029-3
Churchland, P.M., Churchland, P.S.: Could a machine think? Sci. Am. 262 (1990). https://www.jstor.org/stable/24996642
Coeckelbergh, M.: Robotic appearances and forms of life. A phenomenological-hermeneutical approach to the relation between robotics and culture. In: Funk, M., Irrgang, B. (eds.) Robotics in Germany and Japan. Philosophical and Technical Perspectives. Peter Lang Edition, Frankfurt Am Main (2014)
Coeckelbergh, M.: AI Ethics. MIT Press, Cambridge (2020)
Danaher, J.: Why we should create artificial offspring: meaning and the collective afterlife. Sci. Eng. Ethics 24(4), 1097–1118 (2018). https://doi.org/10.1007/s11948-017-9932-0
Danaher, J.: Welcoming robots into the moral circle: a defence of ethical behaviourism. Sci. Eng. Ethics 26(4), 2023–2049 (2019). https://doi.org/10.1007/s11948-019-00119-x
Descartes, R.: Discourse on the Method of Rightly Conducting One’s Reason and Seeking Truth in the Sciences (1637). Translated by Jonathan Bennett. Early Modern Texts (2017). https://www.earlymoderntexts.com/assets/pdfs/descartes1637.pdf
Descartes, R.: Meditations on First Philosophy (1641). Translated by Jonathan Bennett. Early Modern Texts (2017). https://www.earlymoderntexts.com/assets/pdfs/descartes1641.pdf
Dostoyevsky, F.: The Brothers Karamazov (1880). Translated by Constance Garnett. Om Books International, New Delhi (2019)
Dreyfus, H.L.: What Computers Can’t Do. Harper & Row, New York (1972)
Eisen, C.: Mosheh Rabbeinu and Rabbi Akiva. Jewish Thought 1(2) (1991)
Elamrani, A., Yampolskiy, R.: Reviewing tests for machine consciousness. J. Conscious. Stud. 26(5–6) (2019). https://philpapers.org/rec/ELARTF
Epstein, I.: Foreword. In: Freedman, H., Simon, M. (eds.) Midrash Rabbah, vol. 1. Genesis. Soncino, New York (1983)
Eskens, R.: Is sex with robots rape? J. Pract. Ethics 5, 62–76 (2017)
Fitzgerald, M., Boddy, A., Baum, S.D.: 2020 survey of artificial general intelligence projects for ethics, risk, and policy. In: Global Catastrophic Risk Institute Technical Report 20-1 (2020). https://gcrinstitute.org/2020-survey-of-artificial-general-intelligence-projects-for-ethics-risk-and-policy/
Fjelland, R.: Why general artificial intelligence will not be realized. Hum. Soc. Sci. Commun. 7(1) (2020). https://doi.org/10.1057/s41599-020-0494-4
Floridi, L.: Artificial agents and their moral nature. In: Kroes, P., Verbeek, P.-P. (eds.) Moral Status of Technical Artefacts. Springer, New York (2014)
Floridi, L.: Should we be afraid of AI? Aeon (2016). https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible
Foerst, A.: Robots and theology. EWE 20(2), 181–193 (2009). https://www.researchgate.net/publication/273886034_Robots_and_Theology
Fossa, F.: Artificial moral agents: moral mentors or sensible tools? Ethics Inf. Technol. 20(2), 115–126 (2018). https://doi.org/10.1007/s10676-018-9451-y
Gamez, D.: Progress in machine consciousness. Conscious. Cogn. 17(3), 887–910 (2008). https://doi.org/10.1016/j.concog.2007.04.005
Gerdes, A., Ohrstrom, P.: Issues in robot ethics seen through the lens of a moral Turing test. J. Inf. Commun. Ethics Soc. 13(2), 98–109 (2015). https://doi.org/10.1108/jices-09-2014-0038
Gocke, B.P.: Could artificial general intelligence be an end-in-itself? In: Gocke, B.P., Rosenthal-Von der Putten, A. (eds.) Artificial Intelligence: Reflections in Philosophy, Theology, and the Social Sciences. Brill, Boston (2020)
Goltz, N., Zeleznikow, J., Dowdeswell, T.: From the tree of knowledge and the Golem of Prague to Kosher autonomous cars: the ethics of artificial intelligence through Jewish eyes. Oxf. J. Law Religion 9(February), 132–156 (2020). https://doi.org/10.1093/ojlr/rwaa015
Grau, C.: There is no ‘I’ in ‘robot.’ In: Anderson, M., Anderson, S.L. (eds.) Machine Ethics. Cambridge University Press, Cambridge (2011)
Gunkel, D.J.: The Machine Question: Critical Perspectives on AI, Robots, and Ethics. MIT Press, Cambridge (2012)
Gunkel, D.J.: Robot Rights. MIT Press, Cambridge (2018)
Haikonen, P.O.: Consciousness and Robot Sentience, 2nd edn. World Scientific Publishing, Singapore (2019)
Hales, C.G.: The Revolutions of Scientific Structure. World Scientific Pub. Co., Singapore (2014)
Harbron, P.: The future of humanoid robots. In: Discover Magazine (2000). https://www.discovermagazine.com/technology/the-future-of-humanoid-robots
Haugeland, J.: Artificial Intelligence: The Very Idea. MIT Press, Cambridge (1985)
Hauskeller, M.: Automatic sweethearts for transhumanists. In: Danaher, J., McArthur, N. (eds.) Robot Sex. MIT Press, Cambridge (2017). https://doi.org/10.7551/mitpress/9780262036689.001.0001
Heil, J.: Philosophy of Mind: A Guide and Anthology. Oxford University Press, Oxford (2004)
Hernandez-Orallo, J.: The Measure of All Minds. Cambridge University Press, Cambridge (2017)
Hobbes, T.: Leviathan (1651). Translated by Jonathan Bennett. Early Modern Texts (2017). https://www.earlymoderntexts.com/assets/pdfs/hobbes1651part1.pdf
Idel, M.: Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid (HEBREW). Schocken, Jerusalem (1996)
Idel, M.: Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. KTAV, New York (2019)
Jefferson, G.: The mind of mechanical man. Br. Med. J. 1(4616), 1105–1110 (1949). https://www.jstor.org/stable/25372573
Johnson, D.G., Miller, K.: Computer Ethics: Analyzing Information Technology. Prentice Hall, Upper Saddle River (2008)
Johnson, D.G., Verdicchio, M.: Why robots should not be treated like animals. Ethics Inf. Technol. 20(4), 291–301 (2018). https://doi.org/10.1007/s10676-018-9481-5
Johnson, P.: A History of the Jews. Harper, New York (1987)
Jonas, H.: The Imperative of Responsibility: In Search of an Ethics for the Technological Age. The University of Chicago Press, Chicago (1984)
Joy, B.: Why the Future Doesn’t Need Us. Wired (2000). https://www.wired.com/2000/04/joy-2/
Kamm, F.: Intricate Ethics: Rights, Responsibilities, and Permissible Harm. Oxford University Press, Oxford (2007). https://doi.org/10.1093/acprof:oso/9780195189698.001.0001
Kaplan, A.: Bahir. Weiser, San Francisco (1989)
Kaplan, A.: Sefer Yetzirah—Book of Creation. Weiser, San Francisco (1997)
Kass, L.: Preventing a brave new world. In: Sandler, R.L. (ed.) Ethics and Emerging Technologies. Palgrave Macmillan, London (2014). https://doi.org/10.1057/9781137349088
Kim, M.-S., Kim, E.-J.: Humanoid robots as ‘the cultural other’: are we able to love our creations? AI Soc. 28(3), 309–318 (2012). https://doi.org/10.1007/s00146-012-0397-z
Kingwell, M.: Are sentient AIs persons? In: Dubber, M.D., Pasquale, F., Das, S. (eds.) The Oxford Handbook of Ethics of AI. Oxford University Press, New York (2020)
Kohler, S.: Can we have moral status for robots on the cheap? J. Ethics Soc. Philos. 24(1) (2023)
Kook, A.I.: Eder Hayakar (1906). http://www.daat.ac.il/daat/vl/tohen.asp?id=334
Koplin, J., Wilkinson, D.: Moral uncertainty and the farming of human-pig chimeras. J. Med. Ethics 45(7), 440–446 (2019). https://doi.org/10.1136/medethics-2018-105227
Kurzweil, R.: The Singularity Is Near: When Humans Transcend Biology. Penguin Books (2006)
LaGrandeur, K.: Androids and Intelligent Networks in Early Modern Literature and Culture: Artificial Slaves. Routledge, New York (2013)
Lamm, N.: The religious implications of extraterrestrial life. Tradit. Online 7(4) (1965). https://traditiononline.org/the-religious-implications-of-extraterrestrial-life/
Levy, D.: The ethical treatment of artificially conscious robots. Int. J. Soc. Robot. 1(3), 209–216 (2009). https://doi.org/10.1007/s12369-009-0022-6
Liao, S.M.: A short introduction to the ethics of AI. In: Liao, S.M. (ed.) Ethics of Artificial Intelligence. Oxford University Press, Oxford (2020). https://doi.org/10.1093/oso/9780190905033.001.0001
Liao, S.M.: The moral status and rights of artificial intelligence. In: Liao, S.M. (ed.) Ethics of Artificial Intelligence. Oxford University Press, Oxford (2020). https://doi.org/10.1093/oso/9780190905033.001.0001
Liebes, Y.: Golem is numerically Hochmah. Kiryat Sefer 63 (1991). https://liebes.huji.ac.il/yehudaliebes/files/golem.pdf
Locke, J.: An Essay Concerning Human Understanding. The Project Gutenberg eBook (1690). https://www.gutenberg.org/files/10615/10615-h/10615-h.htm
Loike, J.D.: Is a human clone a Golem? Torah U-Madda J. 9, 236–244 (2000). https://www.jstor.org/stable/40914661
Loike, J.D., Tendler, M.D.: Ma Adam Va-Teda-Ehu: Halakhic criteria for defining human beings. Tradition 37(2) (2003). https://traditiononline.org/ma-adam-va-teda-ehu-halakhic-criteria-for-defining-human-beings/
Loike, J.D., Tendler, M.D.: Tampering with the genetic code of life: comparing secular and Halakhic ethical concerns. Hakira 18 (2014). https://hakirah.org/Vol18LoikeTendler.pdf
Lumbreras, S.: Strong artificial intelligence and imago hominis. In: Fuller, M., Evers, D., Runehov, A., Sæther, K.-W. (eds.) Issues in Science and Theology: Are We Special? Springer, New York (2018)
Mackenzie, R.: Sexbots: customizing them to suit us versus an ethical duty to created sentient beings to minimize suffering. Robotics 7(4) (2018). https://doi.org/10.3390/robotics7040070
Mayor, A.: Gods and Robots: The Ancient Quest for Artificial Life. Princeton University Press, Princeton (2018)
McCarthy, J., Minsky, M., Rochester, N., Shannon, C.: A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (1955). http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
Mendelssohn, M.: Jerusalem (1782). Translated by Jonathan Bennett. Early Modern Texts (2017). https://www.earlymoderntexts.com/assets/pdfs/mendelssohn1782.pdf
Metzinger, T.: Being No One. MIT Press, Cambridge (2003)
Miller, A.I.: Artist in the Machine: The World of AI-Powered Creativity. MIT Press, Cambridge (2020)
Miller, L.F.: Responsible research for the construction of maximally humanlike automata: the paradox of unattainable informed consent. Ethics Inf. Technol. 22(4), 297–305 (2017). https://doi.org/10.1007/s10676-017-9427-3
Minsky, M.: The Society of Mind. Simon and Schuster, New York (1985)
Mitcham, C., Nissenbaum, H.: Technology and ethics. In: Routledge Encyclopedia of Philosophy. Routledge, London (1998). https://doi.org/10.4324/9780415249126-L102-1
Moravec, H.: The Future of Robot and Human Intelligence. Harvard University Press, Cambridge (1988)
Muehlhauser, L., Helm, L.: The singularity and machine ethics. In: Eden, A.H., Moor, J.H. (eds.) Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer, Berlin (2012). https://www.researchgate.net/publication/233935716_The_Singularity_and_Machine_Ethics
Muller, V.C.: Is it time for robot rights? Moral status in artificial entities. Ethics Inf. Technol. (2021). https://doi.org/10.1007/s10676-021-09596-w
Musial, M.: Designing (artificial) people to serve—the other side of the coin. J. Exp. Theory Artif. Intell. 29(5), 1087–1097 (2017). https://doi.org/10.1080/0952813x.2017.1309691
Musial, M.: Can we design artificial persons without being manipulative? AI Soc. (2022). https://doi.org/10.1007/s00146-022-01575-z
Nagel, T.: What is it like to be a bat? Philos. Rev. 83(4), 435–450 (1974). https://doi.org/10.2307/2183914
Navon, M.: The binding of Isaac. Hakirah Flatbush J. Jewish Law Thought 17, 233–256 (2014). https://hakirah.org/Vol17Navon.pdf
Neely, E.L.: Machines and the moral community. Philos. Technol. 27(1), 97–111 (2014). https://doi.org/10.1007/s13347-013-0114-y
Nietzsche, F.: The Gay Science (1887). Edited by Bernard Williams, translated by Josefine Nauckhoff. Cambridge University Press, Cambridge (2001)
Nyholm, S.: Humans and Robots: Ethics, Agency, and Anthropomorphism. Rowman & Littlefield Publishing Group, New York (2020)
Oppy, G., Dowe, D.: The turing test. In: Zalta, E.N. (ed.) Stanford Encyclopedia of Philosophy (2021). https://plato.stanford.edu/entries/turing-test/
Parthemore, J., Whitby, B.: What makes any agent a moral agent? Int. J. Mach. Conscious. 05(02), 105–129 (2013). https://doi.org/10.1142/S1793843013500017
Penrose, R.: The Emperor’s New Mind: Concerning Computers, Minds and the Laws of Physics. Penguin, New York (1991)
Peters, T.: The soul of trans-humanism. Dialog. J. Theol. 44(4), 381–395 (2005)
Prescott, T.J.: Robots are not just tools. Connect. Sci. 29(2), 142–149 (2017). https://doi.org/10.1080/09540091.2017.1279125
Putnam, H.: Robots: machines or artificially created life? J. Philos. 61(21) (1964)
Rachels, S., Rachels, J. (eds.): The Right Thing to Do: Basic Readings in Moral Philosophy. Mcgraw-Hill Education, New York (2015)
Rakover, N.: Cloning—competition with god? (HEBREW). Shana BeShana 105–117 (2002)
Richardson, K.: An Anthropology of Robots and AI: Annihilation Anxiety and Machines. Routledge, New York (2015)
Rosenfeld, A.: Religion and the robot. Tradit. Online 8(3) (1966). https://traditiononline.org/religion-and-the-robot/
Rosenfeld, A.: Human identity: Halakhic issues. Tradition 16(3), 58–74 (1977). https://www.jstor.org/stable/23258438
Rubin, C.: The Golem and the limits of artifice. New Atlantis (2013). https://www.thenewatlantis.com/publications/the-golem-and-the-limits-of-artifice
Sartre, J.-P.: Existentialism Is a Humanism (1947). Yale University Press, New Haven (2007)
Schafer, P.: The magic of the Golem: the early development of the Golem legend. J. Jew. Stud. 46(1–2), 249–261 (1995)
Scheler, G.: Sketch of a novel approach to a neural model. Preprint (2023). https://doi.org/10.13140/RG.2.2.25578.39368
Scheutz, M.: Artificial emotions and machine consciousness. In: Frankish, K., Ramsey, W.M. (eds.) The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, Cambridge (2014)
Scholem, G.: The Golem of Prague & the Golem of Rehovoth. Comment. Mag (1966). https://www.commentary.org/articles/gershom-scholem/the-golem-of-prague-the-golem-of-rehovoth/
Scholem, G.: On the Kabbalah and Its Symbolism. Translated by Ralph Manheim. Schocken Books, New York (1969)
Schwitzgebel, E., Garza, M.: Designing AI with rights, consciousness, self-respect, and freedom. In: Liao, S.M. (ed.) Ethics of Artificial Intelligence. Oxford University Press, Oxford (2020). https://doi.org/10.1093/oso/9780190905033.001.0001
Searle, J.: Minds, brains, and programs. Behav. Brain Sci. 3(03), 417–457 (1980). https://doi.org/10.1017/s0140525x00005756
Shanahan, M.: Beyond humans, what other kinds of minds might be out there? Aeon (2016). https://aeon.co/essays/beyond-humans-what-other-kinds-of-minds-might-be-out-there
Sherwin, B.L.: Golems in the biotech century. Zygon 42(1), 133–144 (2007). https://doi.org/10.1111/j.1467-9744.2006.00810.x
Signorelli, C.M.: Can computers become conscious and overcome humans? Front. Robot. AI 5 (2018). https://doi.org/10.3389/frobt.2018.00121
Sloman, A.: Aaron Sloman absolves Turing of the mythical Turing test. In: Cooper, S.B., Van Leeuwen, J. (eds.) Alan Turing: His Work and Impact. Elsevier, Waltham (2013). https://www.cs.bham.ac.uk/research/projects/cogaff/sloman-turing-test.pdf
Smirnova, L., Caffo, B., Gracias, D., Huang, Q., Pantoja, I.M., Tang, B., Zack, D., et al.: Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish. Front. Sci. (2023). https://doi.org/10.3389/fsci.2023.1017235
Soloveitchik, J.: Redemption, prayer, Talmud Torah. Tradition 17(2) (1978). https://traditiononline.org/redemption-prayer-talmud-torah/
Soloveitchik, J.: Halakhic Man. Jewish Publication Society, Philadelphia (1983)
Soloveitchik, J.: The Lonely Man of Faith (1965). Maggid, Jerusalem (2012). https://traditiononline.org/wp-content/uploads/2019/09/LMOF.pdf
Soraker, J.H.: Continuities and discontinuities between humans, intelligent machines, and other entities. Philos. Technol. 27(1), 31–46 (2014). https://doi.org/10.1007/s13347-013-0132-9
Sparrow, R.: Robots, rape, and representation. Int. J. Soc. Robot. 9(4), 465–477 (2017). https://doi.org/10.1007/s12369-017-0413-z
Steinberg, A.: Human cloning—scientific, moral and Jewish perspectives. Torah U-Madda J. 9, 199–206 (2000). https://www.jstor.org/stable/40914655
Steinberg, A., Loike, J.D.: Human cloning: scientific, ethical and Jewish perspectives. Assia-Jewish Med. Ethics 3(2), 11–19 (1998). https://pubmed.ncbi.nlm.nih.gov/11657947/
Straiton, J.: Grow your own brain. Biotechniques 66(3), 108–112 (2019). https://doi.org/10.2144/btn-2019-0019
Swinburne, R.: Are We Bodies or Souls? Oxford University Press, Oxford (2019)
Tallis, R.: Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. Acumen, Durham (2012)
Thorstensen, E.: Creating Golems: uses of Golem stories in the ethics of technologies. NanoEthics 11(2), 153–168 (2017). https://doi.org/10.1007/s11569-016-0279-9
Tonkens, R.: Out of character: on the creation of virtuous machines. Ethics Inf. Technol. 14(2), 137–149 (2012). https://doi.org/10.1007/s10676-012-9290-1
Torrance, S.: Ethics and consciousness in artificial agents. AI Soc. 22(4), 495–521 (2008). https://doi.org/10.1007/s00146-007-0091-8
Torrance, S.: Machine ethics and the idea of a more-than-human moral world. In: Anderson, M., Anderson, S.L. (eds.) Machine Ethics. Cambridge University Press, New York (2011)
Turing, A.M.: Computing machinery and intelligence. Mind LIX(236), 433–460 (1950). https://doi.org/10.1093/mind/lix.236.433
Vallor, S.: Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press, Oxford (2016)
Verbeek, P.-P.: Moralizing Technology: Understanding and Designing the Morality of Things. University of Chicago Press, Chicago (2011)
Veruggio, G., Abney, K.: Roboethics: the applied ethics for a new science. In: Lin, P., Abney, K., Bekey, G.A. (eds.) Robot Ethics: The Ethical and Social Implications of Robotics, pp. 347–363. MIT Press, Cambridge (2012)
Vudka, A.: The Golem in the age of artificial intelligence. NECSUS 9(1) (2020)
Walker, M.: A moral paradox in the creation of artificial intelligence: Mary Poppins 3000s of the World Unite! In: Metzler, T. (ed.) Human Implications of Human-Robot Interaction. AAAI (2006)
Warwick, K.: Robots with biological brains. In: Lin, P., Abney, K., Bekey, G. (eds.) Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press, New York (2012)
Weiss, A.: Minhat Asher (HEBREW). Machon Minhat Asher, Jerusalem (2003)
Weiss, T.: The reception of Sefer Yetsirah and Jewish mysticism in the Early Middle Ages. Jewish Q. Rev. 103(1), 26–46 (2013). https://www.jstor.org/stable/43298679
Weizenbaum, J.: Computer Power and Human Reason. W.H. Freeman, San Francisco (1976)
Wiggers, K.: The emerging types of language models and why they matter. TechCrunch (2022). https://techcrunch.com/2022/04/28/the-emerging-types-of-language-models-and-why-they-matter/
Wurzburger, W.S.: The centrality of creativity in the thought of Rabbi Joseph B. Soloveitchik. Tradit. J. Orthodox Jewish Thought 30(4), 219–228 (1996). https://www.jstor.org/stable/23261246
Yampolskiy, R.V.: Artificial intelligence safety engineering: why machine ethics is a wrong approach. In: Muller, V.C. (ed.) Philosophy and Theory of Artificial Intelligence. Springer, Berlin (2013). https://doi.org/10.1007/978-3-642-31674-6
Zoloth, L.: Go and tend the Earth: a Jewish view on an enhanced world. J. Law Med. Ethics 36(1), 10–25 (2008). https://doi.org/10.1111/j.1748-720x.2008.00233.x
Ethics declarations
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Navon, M. Let us make man in our image-a Jewish ethical perspective on creating conscious robots. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00328-y