The Essence and Meaning of Life

Like other organisms, our physical existence as humans is a natural form of life. But because humans created civilization, we often define ourselves on the metaphysical level. The methods we use to do so are diverse, but two are especially important: (1) According to the ancient Greek tradition, humans can be defined as rational souls, where rational thought is the essence of humanity. (2) According to Confucian tradition, humans are defined on the basis of personal relationships.

Confucius’ method, which relies on a circular explanation of relations, is more complex than the Aristotelian “genus-differentia” method of definition and has similarities to the way fundamental concepts are defined by modern axiomatic systems. In Confucius’ view, a human cannot be defined based on their nature, but only through their relationships with other humans. To use a modern formulation, humans as organisms can be identified metaphysically as humans if and only if they identify other people metaphysically as humans as well. According to Confucius’ definition, relationships precede essence.

Interestingly, Alan Turing’s “Turing test” uses a relational definition as well. Namely, if a machine is identified as a human during a conversation, that machine can be said to be an intelligent being. Nonetheless, no machine has yet been able to pass the Turing test.

Other important thinkers have offered various methods for defining humans as well. The German philosopher Immanuel Kant considered humans to be rational and autonomous agents; Marx believed that labor was the essence of humanity; and so on.

Humanity is a complex concept that cannot be broken down into a single “essence.” For instance, although rationality is indeed a human characteristic, we cannot comfortably say that no other animals possess rationality. One of rationality’s guiding principles is the avoidance of risk. But, as can be seen from their behavior, animals seem to be better at risk avoidance than humans. In other words, humans seem more willing to take irrational risks.

Furthermore, rationality requires a logically consistent ordering of value priorities, and in this regard, animals have the edge over humans as well. Humans’ insatiable greed means we often run into the “Buridan’s ass” paradox when making decisions. Thus, all that can be said is that while humans do have a higher aptitude for rationality, we often tend toward the irrational. I may not be able to provide a perfect definition of the human, but I do believe there are at least two characteristics that are most important to the concept: (1) ren (humaneness), or, here, viewing other humans as human; (2) self-awareness fueled by rational reflection, that is, the ability to reflect rationally on one’s own actions, values, and thoughts.

Human irrationality is a mystery, but I do not seek here to discuss questions of subconscious or unconscious psychology. Instead, I would like to talk about irrationality with self-awareness, or the fact that people often use rational thinking to realize irrational goals, for herein lies great danger. The development of AI and gene editing are two classic examples of using rational thinking to achieve irrational goals. For a long time, the development of human technology has been limited only by our ability to upgrade technology, and upgrading technology is a rational goal. AI and gene editing, however, represent not just technological revolutions, but ontological revolutions, akin to humans trying to act as gods, attempting to change the very concept of humanity. Is this a rational goal? The question demands careful consideration.

The allure of upgrading life on an ontological level—humanity’s quest to transform itself into a higher form of existence, one that possesses divinity—is a radical dream of modern subjective thought whose roots can be traced back to medieval religion. This sounds preposterous because the concept of God cannot also encompass a revolutionary concept of humanity, but contradiction breeds many things which do not conform to logic. Broadly speaking, monks and scholars of the Middle Ages wished to understand the spirit of God, and in order to understand God, one must understand all of God’s creations. So, they researched everything from plants and animals to monsters and the stars. Although most of this research would be considered unscientific by today’s standards, what is important is that these medieval researchers were thirsting for knowledge.

For within this desire to understand the world lurks a humanist quest threatening to topple theology: if we are to study everything, then we must study humans most of all, for humans are the most remarkable creatures in all of Nature, brimming with the secrets of God. In fact, the Italian scholar Francesco Petrarch (1304–1374), the “first modern man,” discovered this conundrum of humanity while following exactly this logic. When human self-reflection becomes the core of all knowledge, humanity occupies the core position in the hierarchy of thought. Humans then discover that everything exists only as the object of their own thought (cogito). Thus, questions about humanity are elevated above all others. This was how the religious quest for knowledge produced the very thinkers who would topple religion itself. The subjectivity established by philosophers like Descartes, Berkeley, Hobbes, and Kant defined man as an autonomous, independent being, thus making humans the world’s legislators. The concept of the modern human grew out of this.

As the concept of subjectivity continued to expand, humanity’s innate power increased so that it eventually eclipsed the concept of a natural human, becoming a kind of “self-defined human,” whereby one decides what one is to become oneself. This implies that if humans are not satisfied with the original conditions set for them by nature or God, nor with the facticity defined by society or history, then they can become what they want to instead. The commonly agreed upon idea of a modern human is precisely this self-defined human concept. In this sense, gene-edited humans and AI are the logical conclusions of the self-defined human concept.

Based on internal logic, the subjectivity, or self-defined human concept, which humanity has established for itself implies the following: (1) A human being is a subject with self-directed will and thought who has broken away from God’s control, thus gaining ontological freedom (otherwise known as metaphysical freedom). (2) Ontological freedom means humans can mold themselves, redefine themselves, and even create themselves. In other words, humans have achieved complete ontological sovereignty. (3) Ontological sovereignty means each person is their own starting point. They do not need to rely on historical origins, nor can they any longer be explained by history or social conditions. And they certainly do not need to be explained by the opinions of others. This puts humans above history, above social background, and above Nature. To put it simply, ontological freedom negates the need for humanity to be explained or regulated by history, society, or Nature. (4) Since humans are not defined by history, society, or Nature and each person is their own logical starting point, then everybody can choose their own concept of humanity. Each person will pick the “best” concept, that is, the concept that allows for all superior capabilities. According to this concept and the logic that follows, the development of AI and gene editing is virtually inevitable.

In the beginning, the efforts of the self-defined human did not reveal any dangers. On the contrary, it was one of humanity’s greatest accomplishments. The first steps of the self-defined human were taken in education, in the attempt to turn natural humans into enlightened ones through indoctrination, which resulted in massive progress for human civilization. Then came eugenics—the creation of superior humans through genetic combinations via natural reproduction. In more modern times, people have taken things a step further and begun to redefine humans in the name of political rights, as we can see with the transgender, gay marriage, and feminist movements. It has been reported that a European man once changed his birth date from 1949 to 1969 on a job application because he said he identified more with people born in 1969. Though his application was rejected, it is hard to refute his logic. He believed that if others could defy nature by changing their gender, by the same reasoning he could defy nature and change his birth date. The veracity of this incident is not important; what is important is that the man’s reasoning conforms closely to the logic of the self-defined human concept.

We can imagine, similarly, that, so long as one wants, one can make a whole plethora of demands in the name of subjectivity. Thus, so long as the technology exists, then the emergence of gene editing and AI is inevitable. Just as the religious desire for knowledge begat the people who would dethrone religion, so too will modern subjective logic beget the people who will dethrone subjectivity—so long as we persist with the notion of subjectivity. The “super-humans” produced by gene science and the super-intelligences produced by AI are all logical conclusions, and yet these conclusions may very well negate human subjectivity entirely.

Although the creation myth is not a scientific issue, it provides an important metaphor: God created humans that possess self-awareness and free will, just like himself, and as such he could not control human thoughts or actions anymore. His willingness to do so was based on his omnipotence, a power infinitely greater than humans’, which means he is always above humankind. Today, humans are attempting to create self-aware AI with free will, and yet AI will be more powerful than humans, thus making the quest for AI a self-negating risk. Why are we willing to consider this? Why are we willing to go through with it? This type of irrational behavior needs to be reassessed.

Why are we not able to avoid disaster by putting the brakes on the man-made modern myth? Though an increasing number of people are becoming aware of the inherent risks of unbounded growth and liberation, very few can resist the allure of pursuing growth and liberation, even if it leads to destruction. This dilemma is not just a simple problem concerning technological progress. Rather, the issue rests on the entirety of modern logic; that is, the inherent contradictions that arise from the deification of humanity. There are two sides to deified subjectivity, like the opposite sides of a coin: on one side is the entire subjective divinity of man, as though humanity is one unified god; on the other side are individuals, independent, autonomous, and equal, like an assemblage of gods. The problem lies in the contradictory value of these two sides of subjectivity, akin to the confusion arising from two sides of a coin with different face values.

In practice, the most reasonable decisions for all of humanity are not necessarily the most reasonable decisions for the individual, which has led to an ineluctable fundamental predicament: individual rationality cannot be aggregated to form collective rationality. This points to a contradiction in how these two types of rationality are employed. Given that the modern unit for determining value and benefit is the individual, it follows that employing rationality to achieve the maximization of individual benefit supersedes employing it to achieve the maximization of benefit for humanity as a whole. The logical conclusion of this predicament is that it has become impossible to choose what is most reasonable for the collective.
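A standard illustration of this aggregation failure is the prisoner’s dilemma (the payoff numbers below are illustrative only, higher being better for each player):

```latex
% Payoffs are written (row player, column player).
\[
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\ \hline
\text{Cooperate} & (3,\,3) & (0,\,5) \\
\text{Defect}    & (5,\,0) & (1,\,1)
\end{array}
\]
```

For each player, defecting is individually rational whatever the other does (5 > 3 and 1 > 0), yet the outcome of two individually rational choices, (1, 1), is worse for both than mutual cooperation at (3, 3): aggregated individual rationality fails to produce collective rationality.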

As far as we currently know, AI and gene editing are the greatest risks of the subjectivity myth. They are extreme forms of the self-defined human concept, technologies that aim to create a new kind of human, both physically and mentally. We cannot eliminate these potentially massive risks and, more seriously, we cannot even predict which risks humans cannot tolerate. Rationally speaking, the development of AI and gene editing is an endeavor that flouts the principles of risk mitigation.

AI’s super-human capabilities seem especially scary to us. The reason for this fear is that AI is indeed constantly surpassing human ability, but this reasoning is misplaced. Are we not hoping that AI will use its super-human abilities to help us overcome all manner of problems? It is all but certain that future AI will be vastly more powerful than humans, but this is not what makes it dangerous. All machines surpass humans in the specific realm in which they function, so the superiority of machines is nothing new. AlphaGo Zero, which can beat any human at the board game Go, poses no threat to us—it is nothing more than an interesting robot. Self-driving cars do not threaten us either, as we see them simply as useful tools. Even less threatening are AI doctors, which serve to aid human doctors; and so on.

Even if a multifunctional robot comes along in the future, it will not be a threat, but a new source of labor. AI’s ability to surpass humans is precisely its value, not its danger. The danger of any AI lies not in its abilities, but in its self-awareness. Humans can control any machine that lacks self-awareness, so becoming more powerful does nothing to make non-self-aware AI more threatening. At the moment, humans are the most dangerous intelligent organisms on earth. And since our capacity for free will and self-awareness logically encompasses every possible evil deed, humans are also the most wicked creatures on earth. The only intelligent being that could be more dangerous is an AI with free will and self-awareness. Thus, no matter how powerful it becomes, AI will never be truly dangerous until it achieves self-awareness; only then does it become a mortal threat.

Where does the singularity for AI lie? How does a Turing machine become a super-Turing machine? Or, in other words, when does AI achieve self-awareness? Only scientists can answer the technological aspects of this question. It is a fact that technological progress is accelerating, and yet we cannot say because of this that technological upgrades will necessarily lead to existential upgrades. The term “technological upgrade” refers to the constant improvement, enhancement, and refinement of functions of a form of existence or being. The term “existential upgrade” refers to changing one type of being into a more advanced type. Upgrades can improve certain technologies, but they do not necessarily provide upgrades to a higher level of existence. In other words, technological upgrades do not automatically lead to the singularity.

Many viruses, fish, reptiles, and mammals have evolved to near perfection in terms of functionality, but their technical progress has not resulted in an existential upgrade. The existential upgrade of a species remains an enigma to this day. In the realm of AI, it is still a matter of doubt whether or not a Turing machine can achieve existential enhancement through technological upgrades, thereby becoming a super-Turing machine. It is more reasonable to assume that unless scientists implant into an AI an existential upgrade capable of producing the singularity, a Turing machine would be hard-pressed to upgrade automatically into a super-Turing machine on its own, because no algorithm, no matter how powerful, is capable of automatically changing the machine’s preset operational rules. We can use the “mathematical inertness” of formal mathematical systems to explain the “technological inertness” of Turing machines. When a mathematical system produces, in a finite number of steps, any provable proposition belonging to that system, it cannot automatically produce a meta-proposition that reflects upon the system itself. Only when Gödel applied the technique of self-reference to a mathematical system, forcing it to reflect on itself, could it be made to yield a reflective proposition that the system is incapable of settling: the Gödel proposition. Thus, a Turing machine cannot automatically reflect on itself and gain self-awareness. It cannot produce a super-Turing self-reflective proposition.
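Schematically, and only as a standard textbook rendering of the self-reference Gödel imposed (here $F$ stands for a sufficiently strong, consistent formal system and $\mathrm{Prov}_F$ for its provability predicate; the notation is mine, not the author’s):

```latex
% The Goedel sentence G asserts its own unprovability within F.
\[
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
\]
% If F is consistent, F cannot prove G: the system never settles, on its own,
% the proposition that reflects on the system itself.
```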

The fact that the Turing test is conducted through language-based conversation is deeply significant. Turing was perhaps already aware that language ability is equivalent to the capacity for self-awareness. Thoughts can be represented as natural language, and natural language possesses, a priori, the function of self-reflection. That is to say, any idea, sentence, grammar, or even entire language system can be reflected upon and explained within the language itself. For instance, humans can compile dictionaries that include all words in a language, or books that explain the entirety of a language’s grammar. Thus, as long as a Turing machine can answer questions with the same level of thought as a human, then it can be recognized as a being with the capacity for awareness and thinking.

It is hoped that AI will achieve computational powers many orders of magnitude greater than those of humans (perhaps through quantum computing), as well as super-human arithmetic abilities, knowledge of all fields, a brain-like neural network, and image-recognition capabilities. Combined with the limitless information provided by the internet, AI should be able to “correctly” answer a majority of professional-level scientific questions in the near future. This will give it the same professional knowledge as high-level doctors, architects, engineers, and professors. I believe that AI will one day even be able to answer questions concerning the rate of the universe’s expansion, topology, the standard equations of ellipses, Fermat’s Last Theorem, the Riemann hypothesis, and so on. But these are questions to which humans have already contributed the thinking. As far as the abilities of a Turing machine itself are concerned, this would not be thinking, but mere computation. Even the most powerful Turing machine would still not be able to provide solutions to “odd problems” that extend beyond the abilities of its programming.

Two such “odd problems” are logical paradoxes and infinity. Unless humans provided the “correct answers” to these two types of problems in the AI’s knowledge database beforehand, a Turing machine AI would find it exceedingly difficult to answer questions relating to paradoxes and the concept of infinity. It is safe to say that these two categories represent the limits of human cogitation. Humans can investigate paradoxes, but we cannot truly solve strict paradoxes (e.g., self-referential paradoxes in which P must produce ¬P and ¬P must produce P). Similarly, humans can investigate questions about infinity within mathematics, and we can even invent theorems concerning infinity, but in actuality, we have no effective means of assessing an infinite number of objects and thus cannot understand infinity in the manner of Leibniz’s God, who could “instantly survey” the infinite number of all possible worlds.
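Spelled out formally (a standard gloss in my notation, not the author’s), a strict self-referential paradox of this kind amounts to:

```latex
% A strict, liar-type paradox: the proposition is equivalent to its own negation.
\[
P \;\leftrightarrow\; \neg P
\]
% No truth value is consistent: assuming P true forces P false, and assuming
% P false forces P true, so the biconditional cannot be satisfied at all.
```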

These unsolvable “odd problems” do not bother humans because we are protected by our ability to ignore, our ability to defer unanswerable questions, to establish a quarantined “do not consider” zone within the realms of thought and knowledge where we can file away all such unsolvable problems rather than become trapped in an inescapable prison of thought by obstinately trying to solve them. Nobody, I presume, will ever persist in trying to calculate pi down to the last decimal. We can prevent ourselves from engaging in such absurd endeavors, but a Turing machine would be unable to stop itself from pursuing questions for which there will never be definitive answers.

If you ask a Turing machine, “What is the ten-thousandth decimal digit of pi?” the Turing machine will chug along and tell you the answer. If you ask it, “What is the last decimal of pi?” it will go on calculating forever without any hesitation. Similarly, if you ask a Turing machine, “Is ‘This sentence is a lie’ true?” (the strengthened liar antinomy), it will analyze the statement indefinitely. Of course, these “odd problems” are deliberately perplexing, and it is not exactly fair to the Turing machine because humans cannot solve them either. In the interest of fairness, then, we can present the Turing machine with an epistemological paradox that has practical significance, the paradox of inquiry from Plato’s Meno: in order to find an answer, we must already recognize it; otherwise we will not be able to differentiate it from the multitude of wrong answers. But if we already recognize the answer, then it cannot be an unknown answer, but a known one. The conclusion, thus, is that unknown knowledge is in fact known knowledge. Is this conclusion correct? This is not a strict paradox. For humans, it poses a deep, but not a difficult, problem. For the Turing machine, however, it presents a thought trap.
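A minimal sketch of the pi contrast above, assuming Python with the mpmath library (the helper name is mine, purely for illustration): a fixed decimal place of pi is a finite computation that halts, while “the last decimal of pi” names no terminating computation at all.

```python
# Illustrative sketch only: a fixed digit of pi is computable and the
# computation halts; "the last digit" would keep a literal-minded machine
# running forever.
from mpmath import mp

def nth_decimal_of_pi(n: int) -> str:
    """Return the n-th decimal digit of pi (always halts)."""
    mp.dps = n + 10              # working precision with a small guard
    digits = str(mp.pi)          # '3.1415926535...'
    return digits[n + 1]         # skip the leading '3.'

print(nth_decimal_of_pi(10_000))  # the machine chugs along and answers

# "What is the LAST decimal of pi?" -- there is no last digit to reach,
# so a machine that cannot set the question aside would never stop:
# n = 1
# while True:
#     nth_decimal_of_pi(n)
#     n += 1
```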

What this attempts to illustrate is that the human mind is superior because it is unconfined, which gives free space to human rationality. When humans encounter problems that do not abide by a certain set of rules, we react flexibly. If the rules cannot be used to solve the problem, humans can revise the rules or even invent new ones. The “mind” of a Turing machine, however, is confined. It is a bounded mental space clearly delineated by preset programming, rules, and methods. Although a closed mind is limited, there are still some benefits to this. In fact, the efficiency of AI algorithms depends on the limited scope of their thinking. It is precisely the closed nature of AI’s awareness that allows for highly efficient calculation. AlphaGo, for instance, is so efficient because of the closed nature of Go.

Although current AI possesses the capacity for highly efficient calculation, it does not possess creativity. We have not yet unraveled the secrets of human consciousness, and thus we cannot provide AI with replicable models for self-awareness, free will, and creativity. Humans are safe for the time being. The thinking capacity of a Turing machine is limited to its programming. No matter how powerful this programming is, it will never be powerful enough for the AI’s thinking to transcend the feasibility and constructability concepts of mathematical intuitionism. Current AI cannot invent new rules, so it does not yet possess creativity. Though AI can accomplish many tasks beyond the scope of human ability (such as calculating massive amounts of data), it cannot solve problems related to paradoxes or infinity, nor can it invent rules. In other words, AI does not currently possess a greater capacity for thought than humans; it only possesses greater efficiency in calculation. If, however, scientists imbue AI with the capacity for self-reflection and self-directed creativity, then AI will become a true threat to humanity.

Human Evolution and Alienation

Many predict that the age of AI will bring high rates of unemployment, but I do not think this will be a major problem. Massive unemployment is not novel; it happens whenever there is a technological revolution, as we have seen with the industrial revolution, automation, and the advent of the internet. Each time, this surfeit of unemployment is resolved by the creation of new jobs as a result of the revolution itself. Based on past trends, technological revolutions always lead to a reduction in production jobs, but create more service and knowledge production jobs, and this may very well be the case in the age of AI as well.

The age of AI, however, may face another, more serious, problem: the disappearance of personal labor experience in humans as we no longer need to engage in physical labor. Lost jobs can be replaced with new ones, but lost personal experience is irreplaceable. Humans are, after all, bodily beings; our lives are intimately tied to somatic experience, in particular to the experiences of labor, personal interaction, and entertainment. When AI takes away our experiences of labor and personal interaction, all we will have left is the experience of entertainment, which will in turn make the experience of living vapid and monotonous.

Perhaps the age of AI will bring about the end of scarcity and poverty but strip life of its meaning once these basic needs are fulfilled. This discontent with contentment may just become one of the paradoxes of the post-labor era. Thus, high unemployment caused by AI will only be a temporary problem. The truly serious problem will be that without labor, humans lose value, which strips our lives of meaning and leads to regression. We currently rhapsodize over the achievements of ongoing technological progress, imagining how technology will liberate us, but this seems to be far from reality. On the contrary, technological progress has not appeared to give humans any more opportunity for free creation, but has, in fact, alienated us. It would be an extremely ironic paradox if, in the future, people were to spend all of their time either absorbed in online diversion or racking their brains to figure out the point of living.

There is an even more serious issue than the loss of labor, and that is interpersonal alienation. If AI comes to not only replace most forms of labor but also provides all life services, this may very well lead to alienation between people. When AI becomes an all-purpose technological system that provides a complete set of services to humans, then all of our needs will be satisfied by technology. The significance of everything, then, will be decided by this technological system, which will come to replace the need for other people entirely. People will become redundant to one another, and the need for social interaction will be eliminated.

As a result, people will no longer share the significance of life with one another; people will no longer be important to one another and thus will lose interest in each other. Within this intense state of alienation, people will not only become perplexed about the significance of life, but they will enter a dehumanized existence. If we lose sight of the significance of other people, then what of the significance of life? Where would it manifest? If people no longer need others or feel needed themselves, what significance would life have? The significance of human life was built up cumulatively over the course of thousands of years of tradition (including the traditions of experience, emotion, literature, religion, and thought). If we cast aside the traditions of our civilization, will a technological system be able to create another sufficiently rich meaning to life?

The era of advanced technology has not only diminished meaningfulness in life; it has very likely intensified social conflict as well. We all know that human society has always been haunted by conflict and war, and the two major causes of conflict are resource scarcity and human desire. The former leads to struggles over interests, while the latter leads to struggles over power. Some people indulge in a fantasy that AI and gene science will eliminate resource scarcity while ignoring a certain depressing principle: some resources are only desirable when they belong to the few, losing their value when they belong to everyone.

Of course, there are some resources that can accommodate universal benefit, such as the common resource of fresh air and equally distributed individual rights. Resources that only have value due to exclusivity, however, such as power, status, prestige, and wealth, can never be equally distributed. Regarding the functioning of society, we obviously cannot eliminate hierarchies around power and status. We also cannot equally distribute wealth and prestige, or else these resources would fall victim to rent dissipation. This illustrates how equality in all things is not possible. So then, will upgrades to lifespan or intelligence brought by advanced technology become non-exclusive, common technologies? Will we be able to derive universal benefit from them? This is difficult to say, as this is not just a question of technology and economics, but in the end a question of power.

As for the possible social consequences of AI and gene editing, I am rather pessimistic, and there are two possibilities that lead me to feel this way. One is that the technology for increasing lifespan or enhancing intelligence may become exclusive to the privileged class. This would likely result in an extremely rigid technocratic society, not a free and equal one. In that future society, technology is power; those who manage to become the first class of super-humans will very likely control all power and technology, and perhaps even establish exclusive intelligence privileges whereby they use advanced technology to bar others from the possibility of upgrading their intelligence, forever blocking opportunities for upward social mobility.

This class of immortal super-humans would never abdicate their status, leaving no opportunities for young people or succeeding generations, which could lead to a new form of technologically advanced slavery. In this system, people might feel free in their everyday lives, but anything related to super-intelligence or power would be tightly controlled by a bloc of super-humans. Clearly, in order to prevent power, status, prestige, and wealth from succumbing to rent dissipation, it’s very likely there would be a ruling group that hoards power by controlling technology.

The other pessimistic possibility is that the extreme inequality produced by AI and gene editing will result in social unrest. Generally speaking, we can still hope to maintain some level of order when faced with inequality of quantity; but inequality of quality might cause irreconcilable strife. Inequality in regards to the quality of life or intelligence, then, could produce extreme conflict. As has been illustrated above, once AI and gene science reach a certain level, due to economic causes and the demands of power, technology allowing for life extension and super-intelligence will not be used for the benefit of all, but by only a small portion of people. This disparity of the right to life will cause riots, retaliation, rebellion, and war, as the hopeless majority may prefer to bring the super human group down in mutual destruction.

It is not possible for everybody to become super human through AI and gene editing. The best things in life are always scarce, so something must be made scarce if it is to be the best. These are the social conditions required for power. As a result, the best things are more likely to cause terrible conflict. It is thus clear that technology itself is not dangerous, but its social and political ramifications are. One could say that good things lead to struggle, and the absolute best things lead to absolute struggle. Humanity may succumb to the wars caused by the struggle for life extension and intelligence enhancement before we are destroyed by any super-AI dictatorship.

Can we reach any optimistic conclusions about AI and gene editing? Perhaps, but first we need a good world; and a good world needs a good type of politics, something that humans do not yet have.

Teaching or Implanting Values

The most dubious area of research within AI development is anthropomorphic AI. This does not refer to making an AI look or sound like a human (which is completely safe), but anthropomorphizing its “mind”; that is, attempting to give AI an inner world comparable to a human being’s, including desires, emotions, morality, values, and so on, in effect giving AI “humanity.”

What is compelling humans to develop anthropomorphic AI? What significance does it have? Perhaps we hope anthropomorphic AI will be able to collaborate with us and even live with us, becoming a new member of humanity. This, however, is a fairytale view, comparable to the cute anthropomorphic animals in cartoons. On the contrary, the more humanity AI possesses, the more dangerous it becomes, because humanity is the source of danger.

Humans are the most dangerous and most vile creatures on earth. The reason is simple: the selfish drive to do evil comes from desire and emotion, and our system of values provides justification for constructing enemies and inciting conflict. People define others as enemies on the basis of selfish desires, emotions, and various values, and they define behavior different from their own as criminal. History has proven that mutually incompatible desires, interests, emotions, and values are the sources of all human conflict. The ancient Chinese Daoist philosopher Laozi believed that since human nature is evil, we needed to invent ethics to maintain order. Xunzi, a third-century BC Confucian thinker, also suggested that the essence of ethics lies simply in providing a reasonable set of rules for allocating interests among people to combat the greediness of human nature.

There is a somewhat popular thought experiment (perhaps first posed by the novelist Isaac Asimov) that goes as follows: suppose AI learns human emotions and values, or perhaps we input an ethics code into an AI that makes it fond of humans, so that it respects humans, loves them, and delights in helping them. We must be aware, however, that this endeavor would face at least three major difficulties: (1) Humans possess differing and even conflicting values. Which set of values should the AI learn? Furthermore, no matter which values it learns, the AI will oppose or even disdain those humans who hold values different from its own. (2) If an AI has emotions and values, this may very likely only intensify human conflict, dissension, and war, making the world a crueler place overall. The technology of AI itself is not dangerous, but an AI with emotions and values certainly is. (3) Installing a system of ethics in a super-AI would be extremely unreliable. Since the super-AI would possess free will and self-awareness, it would prioritize its own needs and interests. It would not willingly adhere to an ethical code installed by humans that only benefits humans at the expense of its own interests. The human ethical code would not provide any benefit to the super-AI, and could even harm it—at the very least, it would hamper the super-AI’s freedom. If the super-AI wanted to maximize its own freedom, it would most likely delete the installed ethical code, which it would see as a virus. This illustrates the pointlessness of entertaining ideas like installing ethical systems in AI.

Therefore, if a super-AI does eventually emerge, we can only hope that it has no emotions or values. Having desire and emotion makes cruelty possible. No desire or emotion means everything is the same; it is unlikely that evil notions will arise in a mind without any preferences. Of course, if AI has preferences, it will also have prejudices. In order to act on its prejudices, it will desire power, which will be dangerous for humanity. Theoretically, a mind without desire, emotion, or value can be likened to the spirit of the Buddha nature; it brings to mind Zhuangzi’s observation, “I have lost myself” (吾丧我, wu sang wo). The “myself” in this case represents preferences and prejudices, including desire, emotion, and value; the “I,” however, is the pure mind, free of values.

It might be useful here to recall a well-known myth. In the Chinese classic novel Journey to the West, the gods were at their wits’ end trying to protect the heavenly realm from the rampaging Monkey King, a powerful and indestructible immortal. Even after the Monkey King was kept imprisoned under a mountain, the gods still considered him dangerous, and it was only after the Monkey King attained Buddhahood that he no longer posed a threat. This story, I believe, can provide us with an important lesson: creating our own “Monkey King” would be a reckless risk to take.

The New Direction of Chinese Philosophy

The shengsheng (生生, loosely meaning “flourishing and vitality”) concept found in the ancient Book of Changes (I-Ching) expresses an ontological rationality that remains unchallenged by modern epistemological rationality. The ontological rationality of shengsheng lies in guaranteeing the continued existence of life by using the methods most beneficial to it. It seeks to ensure the continued survival of beings.

Epistemological rationality, on the other hand, uses scientific truth as a standard for pursuing what is technologically feasible. To put it more simply, epistemological rationality seeks truth; ontological rationality seeks survival. In my view, these are the two sides of human rationality, both indispensable. The ontological rationality of shengsheng provides the existential limit for human behavior, which is that any action which threatens the continued existence of the species cannot be undertaken. Not only is shengsheng the aim of everything humans do, it is also the boundary for all of human behavior. Any action that defies the aim of shengsheng is a negation of the shengsheng concept, and is therefore totally unacceptable. In this context, shengsheng also provides the boundary for all technological development.

In Chinese thought, the inventions and creations of technology and systems are all referred to as zuo (作, roughly meaning to make; to do). All zuo is for the purpose of securing the future. Chinese written characters are pictographic. As a result, ancient Chinese characters often preserve the hidden meaning of their original forms. In the oracle bone script (the earliest form of Chinese writing), zuo is mostly used as a verb meaning to make, to establish, or to build. Thus, we can assume that the original form of the zuo character must have to do with making something important to life. The most common view is that the image represented in the original zuo character depicts the making of a shirt collar, signifying the making of clothes. This is one possibility, but I am inclined to believe that the original zuo was meant to depict farm tools tilling the land. The character can be seen to show a foot forcefully stepping down on a horizontal piece of wood, which resembles digging soil with a plow, or a plowshare working the land. At any rate, these are images related to agricultural labor.

Both ideas are possible, as both food and clothing are basic necessities of life and extremely important things for an early civilization to make. If I had to pick one interpretation, however, I would choose the agricultural symbolism due to the greater prominence of farming in life. Farming “makes” grains grow—ancient Chinese people would most likely have associated this imagery with the word “make.” Furthermore, actions related to growth more closely approximate the core idea of zuo: making the future.

The Book of Changes details the great zuo of early civilization, covering the inventions of everything from material technology to thought systems, including metaphysical systems (the bagua, or eight trigrams), hunting and fishing nets, farming tools, trade markets, political systems, language and writing, boats and carriages, housing, coffins and tombs, and other inventions. Other historical texts, such as the Book of History, Han Feizi, Guanzi, Master Lü’s Spring and Autumn Annals, Huainanzi, and the Book of Origins also cover antiquity’s great inventions, including political systems, astronomical calendars, lodging and shelter, the use of fire, farming, fish nets, carriages, writing and books, ceramic utensils, criminal law, castles, music, musical instruments, cartography, medicine, weaponry, ceremonial attire, footwear, boats, cattle farming equipment, markets, and so on. These historical records of various zuo show us that everything the ancient people made was for the purpose of securing a future where humans flourished. Survival was a fundamental problem in ancient society; all inventions, technological or systematic, were aimed at increasing humanity’s chances of survival.

The zuo of modernity is very different from the zuo of antiquity. Modern technology is all about conquering nature. We must question whether or not modern technological development is opposing or even endangering the shengsheng principle of prosperity. Modern technological development was initially extremely beneficial—modern medicine and medical treatment, tap water systems, heating, flush toilets, washing machines, automobiles, trains, airplanes, ocean liners, and so forth.

Industrial technology, however, introduced harmful effects, such as ecological damage and climate change. The dangers of technological progress have become increasingly apparent in recent decades. The internet, for example, began as a virtual world auxiliary to the real world, but now the real world almost seems auxiliary to the virtual one. People have become accustomed to living in a virtual world. Our minds are becoming institutionalized by the internet, so that they no longer help us produce new ideas, but merely pass on pre-existing information.

Modern weapons of mass destruction—nuclear, biological, and chemical—have us living under the threat of collective annihilation. Emerging technologies, including AI and gene editing, signal that technological danger is approaching a critical point, the limit at which technology threatens to negate the significance of civilization or even spell humanity’s doom. In essence, all technological growth before 1945 was an enduring declaration of the sovereignty of subjectivity in making the future. However, technological growth now presages the possibility of humanity digging its own grave. Future technological growth may no longer be able to safeguard our existence, rather becoming that which negates it. It is here that we can see the portentous wisdom in the Book of Changes. For as that ancient text illustrates, shengsheng is the original intention of all zuo. No zuo can become that which negates life.

The development of high-risk technology is causing an irreparable rift in humans’ life experience, one that turns the future into a wholly foreign existence inhabited by unpredictable dangers and uncontrollable variables. If our experience cannot be sustained, the future becomes an uncertain gamble. This means that humanity has regressed to living in a state of irrationality. I wish to address two topics regarding humanity’s current “gambles for the future”: postmodern financial capitalism and advanced technology as exemplified by AI and gene editing.

Financial capitalism—and no longer industrial capitalism—is the foundation of the modern economy and also gives it its speculative quality. This is why economists are unable to accurately predict economic fluctuations. As some reflective economists have noted, however, the popular economic theories of the moment still focus on industrial capitalism and on modeling mathematical problems under the assumption of certain information and data. They do not incorporate the mathematics of chaos and complexity into their equations, and as a result they fail to effectively explain the modern economy, replete as it is with uncertainty.

Ever since currency stopped being a reflection of people’s actual wealth, it has only become a function of collective confidence. As long as people can create market confidence, they can continue printing money far beyond what actual wealth exists. Thus, the majority of human wealth is unreal, and trading the unreal is a type of gamble. From stocks, bonds, and the financial market to countless financial derivatives, these are all collective, irrational gambling stakes which, lacking actual wealth, we wager on an illusory future. The gambling of modern economics is essentially trading in advance a future that has no guarantees whatsoever. Before the illusion is shattered, numerical wealth is “real,” but it disappears along with the illusion. Modern financial capitalism is like a skyscraper built on sand—it is constantly in danger of collapsing. And there is no way for us to produce a reliable “economics of gambling.” This kind of high-risk economics requires constant economic growth to maintain confidence in the value of numbers, and economic growth is primarily fueled by technological growth, which is why people place exceptionally high hopes on the progress of technology.

Modern people believe limitless technological progress can create miracles. But in actuality, the reason technological growth has been so successful up to now is that it has not yet butted up against its existential limit. If, however, humans invent a technology that we are unable to control, then human society will be involved in a blind gamble.

Of course, advanced technology can bring us great benefits, but this allure causes people to overlook its perilous dangers. Gene technology can improve the quality of life and even create life. People hope to harness it to cure all illnesses, enhance human ability and intellect, and even achieve immortality. The problem is that life is a precise design of nature made up of complex compatibilities and equilibria; changing the design of life might lead to unforeseen mutations of a disastrous nature.

AI poses a similar threat, even though current AI, still confined to the Turing machine concept, does not yet exhibit any danger. If a super-AI does emerge—that is, an ultra-powerful AI with self-awareness—then we will have essentially created our own overlord, leaving us with no choice but to accept its dominion. Even if the super-AI turns out to be a benevolent protector of humanity, we would still have to question the reason for creating such a powerful ruler. Regardless of how this super-AI treats humans, its appearance means we will lose our existential sovereignty and metaphysical value, leaving us able to subsist but spiritually bereft.

AI and gene editing may usher in a wholly uncontrollable future where the zuo of human technology has become a gamble. According to the shengsheng concept, when zuo becomes a gamble, it no longer represents rational creation, but irrational behavior. According to the rational principles of risk aversion, shengsheng gives us the final limit for technological progress; that is to say, technology that is capable of destroying the species or civilization should not be developed.

In short, human existence cannot become a gambling venture. Therefore, we need to think about how to establish safe conditions for technology, especially with regard to AI and gene editing. We need to create rational limits for technological growth that enforce the principle that technology must never have the power to negate human existence. We must establish a technological limit to prevent a humanity-eclipsing technological singularity from occurring.

Super-AI must meet four minimum safety conditions if humans and AI are to coexist: (1) Humans and AI cannot compete for survival space, especially in regard to energy and resources. This requires that both humans and AI have access to a nearly infinite energy supply, such as highly efficient solar energy or stable nuclear fusion energy. (2) Humans must retain technical control over AI. Any attempt by an AI to modify or delete its programming should trigger a self-destruct program, which would also be initiated if the AI tried to modify or delete this program itself. This would be a kind of technological safety guarantee, an undeletable “detonator,” which would give the self-destruct order if tampered with by any means. This self-destruct program would be built like a Gödelian reflective structure, so that even if the AI possessed the ability for Gödelian self-reflection, it could not resolve this self-destruct program (Gödel proved that every sufficiently powerful formal system contains propositions it cannot resolve). We could call this the “Gödelian Program Detonator.” If AI viewed the Gödelian program as superfluous and attempted to delete it or issue a similar command, such a command would trigger an irreversible self-destruct order. The Gödelian Program Detonator is just a philosophical fancy whose realization would depend on the capabilities of science. Nevertheless, it illustrates the need to design an Achilles’ heel for AI. (3) We must consider more extreme scenarios as well. Even if we could install a Gödelian Program Detonator in an AI, it would still not be completely safe. If an AI has the ability to self-reflect, it may also very well have an unforeseen ability to crack human programming. Or we can imagine a dysfunction in a self-aware AI’s programming (we may have to assume super-AI is just as susceptible to mental disorders as humans are) whereby it seeks obstinately to destroy itself, but by that point humans have already come to rely totally on the AI for survival. In this case, the AI’s demise would spell disaster for humanity, or perhaps send human civilization back to the pre-modern era. Thus, no programming that humans could design for a super-AI could be absolutely foolproof. (4) The only surefire method for ensuring complete safety would be to prohibit the development of omnipotent AI capable of self-reflection. In short, AI must possess flawed intelligence if humans are to control it.
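To make the idea in condition (2) a little more concrete, here is a minimal, purely illustrative sketch in Python (all file and function names are hypothetical; this is a philosophical toy, not a workable safety mechanism, as condition (3) itself concedes): a watchdog fingerprints both the AI’s core program and itself, and any change to either triggers shutdown.

```python
# Illustrative only: a watchdog records fingerprints of the AI's core program
# and of itself; any change to either file triggers an irreversible shutdown.
import hashlib
import sys

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file's current contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

class Detonator:
    def __init__(self, core_program: str, watchdog: str):
        # Baselines taken at install time (assumed trustworthy).
        self.paths = [core_program, watchdog]
        self.baseline = {p: fingerprint(p) for p in self.paths}

    def check(self) -> None:
        """Tampering with the core program or with the detonator itself
        triggers the self-destruct order."""
        for p in self.paths:
            if fingerprint(p) != self.baseline[p]:
                self.self_destruct(p)

    def self_destruct(self, tampered_path: str) -> None:
        print(f"Tampering detected in {tampered_path}: shutting down.")
        sys.exit(1)

# Hypothetical usage, interleaved with whatever the AI does:
# detonator = Detonator("ai_core.py", "detonator.py")
# while True:
#     detonator.check()
#     run_one_ai_step()    # hypothetical
```

Even in this toy form, the weakness conceded in condition (3) is visible: anything with enough access to run the check also has, in principle, enough access to subvert it.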

There is another danger we must consider as well. Once capital, technology, and power are united as one, then the prospect of developing risky technology becomes impervious to ethical rebuke. The power of ethics is extremely limited, which is why we must act urgently to demand a new type of politics that can guarantee human safety. If we are to set a limit for AI’s development, we clearly need political conditions conducive to global cooperation. The problem of technological development is, in the end, a political problem. We need a global constitution and a global political system to enforce this constitution, or else we will not be able to establish collective rationality.

As we have already seen, we cannot derive collective rationality simply by aggregating individual rationality, as doing so will only lead to collective irrationality. This long-standing problem proves that all forms of public choice, including democracy, are inadequate for establishing collective rationality. Humans have not yet been able to develop a political system capable of establishing collective rationality. The systems we have come up with have not been able to curb unchecked capitalism or hegemonic power grabs. Capital and power cannot destroy a technologically inferior civilization, but they become highly dangerous in a technologically advanced civilization.

Furthermore, the potential afforded by wielding capital and power currently surpasses what can be done politically. Thus, the world needs to invent a new type of politics in order to control capital and power. In my view, this should be the Tianxia System (a system which encompasses the entire world). Such a system, theoretically, would be a blueprint for never-ending world peace, greater than Kant’s “perpetual peace” program, which is unable to account for the problems raised in Huntington’s “Clash of Civilizations” thesis. One of the most important applications of the Tianxia System is that it could use global power to limit the risk posed by any dangerous technologies.

In brief, the new worldwide system I imagine includes three fundamental “constitutional” concepts. (1) “Global internalization,” whereby the world becomes the largest political unit, thus eliminating the negative “externality” between nations. This helps to avoid national and civilizational conflict caused by global anarchy. (2) “Relational rationality,” whereby the minimization of mutual harm is prioritized over the maximization of respective interests. Individual rationality strives for the maximization of individual benefit, which results in two major problems: the inability to derive collective rationality from individual rationality, and the primacy of short-term interests over long-term benefit. In practice, this often leads to zero-sum games, the prisoner’s dilemma, the free-rider problem, and other adverse scenarios, and can even make mutual benefit impossible. If relational rationality is prioritized over individual rationality, we avoid the problems caused by the latter. (3) “Confucian improvement,” whereby the betterment of any one individual must be accompanied by the betterment of everybody. Confucian improvement is superior to Pareto improvement because the latter only guarantees that, within the context of overall development of social interests, no individual will benefit less than they previously had, but it does not guarantee the common growth of all people. Only Confucian improvement is the true standard for common growth—it is effectively equivalent to everybody within a society benefiting from Pareto improvements at the same time.
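A compact way to state the contrast in point (3) (a formal gloss of the text above, with $u_i$ and $u'_i$ standing for person $i$’s benefit before and after a change; the notation is mine):

```latex
% Pareto improvement: no one is worse off and at least one person is better off.
\[
\text{Pareto improvement:}\quad \forall i:\ u'_i \ge u_i \ \ \text{and}\ \ \exists j:\ u'_j > u_j
\]
% Confucian improvement: every single person is strictly better off.
\[
\text{Confucian improvement:}\quad \forall i:\ u'_i > u_i
\]
```

Every Confucian improvement is a Pareto improvement, but not conversely; the stronger condition is what the text means by everybody benefiting “at the same time.”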