Wei-Ning Xiang is a professor of geography and earth sciences at the University of North Carolina at Charlotte, USA (1990–present); he is the founding editor in chief of SEPR.

I dedicate this editorial to the memory of Horst Rittel (1930–1990) who, in 1970, “proposed an Issue-Based Information System (IBIS) to support argument and discourse when dealing with wicked problems.” (Verma 2023, p. 4 of 7)

1 A self-learning, knowledge-generating AI machine that mastered board games

In his seminal 1953 paper “Digital computers applied to games,” the British mathematician and logician Alan Turing (1912–1954) writes, with a vivid imagination of a self-learning machine in mind, “Could one make a machine to play chess, and to improve its play, game by game, profiting from its experience? … This could certainly be described as ‘learning,’ though it is not quite representative of learning as we know it.” (Turing 1953, p. 1 of 10, p. 10 of 10) Sixty-five years later, the first self-learning machine for board-game (including chess) play, AlphaZero, not only reached this milestone (McGrath et al. 2022, p. 1 of 10; Silver et al. 2018b) but also went above and beyond Turing’s imagination by virtue of its extraordinary “superhuman performance” (Campbell 2018; Silver et al. 2018a, p. 1140).

In a December 2018 editorial in the journal Science, the Russian chess grandmaster and former World Chess Champion Garry Kasparov marvels at AlphaZero, an artificial intelligence (AI) program—or “machine”, as Alan Turing would have called it—developed by the London-based British AI laboratory DeepMind, for its record-breaking performance in mastering board games. He writes admiringly, “AlphaZero is surpassing us (human board-game players and traditional AI board-game-playing programs) in a profound and useful way …” (Kasparov 2018, p. 1087; the parenthetical remark is mine).

AlphaZero indeed merits such a high compliment. As a one-of-a-kind generalist program, it played chess, Go, and shogi (Japanese chess) at a level superior not only to top human players but also to those traditional specialist programs that had previously won human–machine board-game competitions—Stockfish (for chess), AlphaGo (for Go), and Elmo (for shogi) (Campbell 2018; Kasparov 2018; Kissinger 2018, p. 12; Kissinger et al. 2019, pp. 1–2 of 7; McGrath et al. 2022, p. 1 of 10; Silver et al. 2018a, pp. 1140–1143; Silver et al. 2018b).

The developers and admirers attribute this extraordinary performance to the extraordinary style in which AlphaZero played: It is “a distinctive, unorthodox, yet creative and dynamic playing style” (Silver et al. 2018b) that had never before been seen in board-game history (Campbell 2018; Kissinger 2018, p. 14; Kissinger et al. 2019, pp. 1–2 of 7; Silver et al. 2018b). Playing with this “most fascinating,” “amazing” style (Campbell 2018; Silver et al. 2018b), AlphaZero conceived and executed bold yet brilliant moves that neither followed any known game strategies nor conformed to time-honored norms of human play; as such, in the eyes of human players, the moves it made often appeared to be counterintuitive, risky, or even wrong (Kasparov 2018; Kissinger et al. 2019, p. 2 of 7).

The extraordinary style stems from an extraordinary dual capability that AlphaZero possesses: the ability to acquire its own knowledge of game play without human assistance of any kind (except the basic game rules), and the ability to rely entirely on this self-generated knowledge in games against other players. Specifically, via a trial-and-error process called reinforcement learning, AlphaZero learns how to play a game from scratch by playing the game against itself, with no embedded human strategies but only the basic rules of the game (Campbell 2018; Kasparov 2018; McGrath et al. 2022, p. 1 of 10, p. 9 of 10; Silver et al. 2018a, p. 1140; Silver et al. 2018b). Once it has learned, it uses, and only uses, this body of self-acquired nonhuman knowledge to compete against other board-game players, whether they be humans or human-trained AI programs.
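The self-play learning loop described above can be conveyed in miniature with a toy sketch. The code below is emphatically not DeepMind's algorithm—AlphaZero combines deep neural networks with Monte Carlo tree search—but a minimal tabular analogue of the same idea: a program given only the legal moves of a simple take-away game (players alternately remove 1–3 stones; whoever takes the last stone wins) learns a value table purely by playing against itself. All names and parameters here (`train`, `best_move`, the episode count, and so on) are illustrative assumptions:

```python
import random

ACTIONS = (1, 2, 3)  # a move removes 1-3 stones; taking the last stone wins

def train(episodes=20000, alpha=0.5, eps=0.2, start=10, seed=0):
    """Self-play learning with Monte Carlo value updates on a shared
    table Q[(stones_left, action)], valued from the mover's perspective."""
    rng = random.Random(seed)
    Q = {}

    def q(s, a):
        return Q.get((s, a), 0.0)

    for _ in range(episodes):
        s = start
        history = []  # (state, action) for each move of the self-play game
        while s > 0:
            legal = [a for a in ACTIONS if a <= s]
            # epsilon-greedy: explore sometimes, otherwise exploit the table
            if rng.random() < eps:
                a = rng.choice(legal)
            else:
                a = max(legal, key=lambda m: q(s, m))
            history.append((s, a))
            s -= a
        # The player who made the final move wins (+1); walking backward
        # through the game, the reward sign flips for the other player.
        reward = 1.0
        for (st, ac) in reversed(history):
            Q[(st, ac)] = q(st, ac) + alpha * (reward - q(st, ac))
            reward = -reward
    return Q

def best_move(Q, s):
    """Greedy move from the learned table."""
    legal = [a for a in ACTIONS if a <= s]
    return max(legal, key=lambda a: Q.get((s, a), 0.0))
```

Starting with zero knowledge beyond the rules, the program discovers through self-play alone the winning strategy of leaving the opponent a multiple of four stones (from five stones, take one; from six, take two), mirroring in microscopic form how AlphaZero acquires strategy with "zero human knowledge in the loop."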

This extraordinary dual capability, according to the developers and admirers, confers on AlphaZero an extraordinary competitive advantage over traditional AI board-game-playing programs (e.g., Stockfish, AlphaGo, and Elmo), which rely largely if not entirely on human knowledge of game play as encoded by human programmers: namely, independence from human influence of any kind, including the priorities and prejudices of programmers (Kasparov 2018; McGrath et al. 2022, p. 1 of 10, p. 9 of 10; Silver et al. 2018a, p. 1140, p. 1144; Silver et al. 2018b). It is in this very sense of “having zero human knowledge in the loop,” as the chief designer David Silver explains, that the developers at DeepMind named it AlphaZero [see a video in Silver et al. (2018b) at https://www.youtube.com/watch?v=7L2sUGcOgh0&t=101s].

2 What if a self-learning, wisdom-generating AI machine became a reality?

While reading about this exemplary instance of human achievement and its fascinating progression from a twentieth-century imagination to a twenty-first-century socio-technological reality, I was intrigued by Garry Kasparov’s provocative conjecture that the human inventiveness leading to the AlphaZero wonder “may be duplicated on any other task or field where virtual knowledge can be generated” (Kasparov 2018, p. 1087).Footnote 1 My experience as a socio-ecological practice researcher gave me pause; my interest in how ecophronesis (ecological practical wisdom) enables people to effectively undertake the task of dealing with wicked problems in socio-ecological practice aroused my curiosity:Footnote 2

Now that we humans can make and use AlphaZero that self-learns to be superintelligent in mastering board games, IF we could also make AlphaZero-Ecophronesis, a sibling AI program or machine that, as the name implies, could self-learn to be ecophronetic—socio-ecologically and practically wise—in dealing with wicked problems (see Table 1), WHAT would the hitherto human task of dealing with wicked problems in socio-ecological practice look like?

Table 1 AlphaZero and proposed AlphaZero-Ecophronesis: their commonalities and differences

Admittedly, the question, by virtue of its disruptive premise, would necessarily invite bold, even wild, imaginations beyond the present scope of human reason. As I shall show in the next section, the premise that a self-learning, wisdom-generating AI machine, AlphaZero-Ecophronesis, has become a socio-technological reality, following in the footsteps of its knowledge-generating sibling AlphaZero, is profoundly disruptive on at least several important counts.

3 How profoundly disruptive is the premise?

First, it means that the conventional belief that wisdom is “non-algorithmic, non-programmable” (Rowley 2007, p. 168, citing Awad and Ghaziri 2004) would no longer be true, and that AlphaZero-Ecophronesis, as a self-learning, wisdom-generating machine, could operate independently at the top level of the classic data–information–knowledge–wisdom (DIKW) hierarchy (Fig. 1). Specifically, with a dual capability comparable to that of AlphaZero (see Sect. 1 and Table 1), not only could AlphaZero-Ecophronesis generate its own ecological practical wisdom—machine ecophronesis—but it could also rely entirely on that wisdom in dealing with wicked problems; and it could conduct these learning and acting activities entirely on its own in a virtual world.

Fig. 1

The data–information–knowledge–wisdom (DIKW) hierarchy and the programmability and algorithmicability (i.e., being programmable and algorithmicable) of its four components (after Rowley 2007, p. 168). Note: [1] The conventional belief that wisdom is “non-algorithmic, non-programmable,” as shown here, would no longer be true if the self-learning, wisdom-generating AlphaZero-Ecophronesis with the attributes in Table 1 became a socio-technological reality. [2] All four elements in the DIKW hierarchy are necessary for humans to effectively undertake their various tasks in work and life, including the task of dealing with wicked problems in socio-ecological practice. However, being at the pinnacle of the hierarchy, wisdom plays an imperative role of paramount importance: It offers guidance, provides direction, and exercises control over data, information, and knowledge (Ackoff 1989, pp. 8–9; Lynch and Kaufman 2019, p. 455; Rowley 2007, p. 166). As a peculiar type of wisdom—ecological practical wisdom—ecophronesis has the same chief authority over the other three elements in the hierarchy; it decides what and how data, information, and knowledge should be used to effectively deal with wicked problems (a succinct definition of ecophronesis is given later in Sect. 3)

The second aspect of the premise’s disruptiveness is a logical yet fatal extension of the first. In his presidential address to the 1988 annual meeting of the International Society for General Systems Research (ISGSR) in St. Louis, Missouri, USA, the American management science scholar Russell Ackoff (1919–2009) famously enunciates:

“… wisdom-generating systems are ones that man will never be able to assign to automata. It may well be that wisdom, which is essential to the effective pursuit of ideals, and the pursuit of ideals itself, are the characteristics that differentiate man from machines.” (Ackoff 1989, p. 9)Footnote 3

Obviously, this distinction between humans and machines would become blurred, if not disappear altogether, should AlphaZero-Ecophronesis with the attributes tabulated in Table 1 become a reality.

Third, the premise means that the existing conceptions of wisdom, including those of ecophronesis, would no longer be accurate. Among other things, wisdom would no longer be a virtue belonging solely to humans, nor would there be only a human way to learn and exercise wisdom. A case in point is a recent conception of ecophronesis—the virtue of ecological practical wisdom. According to Xiang (2023, p. 5), ecophronesis is “the human ability par excellence to make and implement morally sound and pragmatically effective choices in the complex, heterogeneous situations of socio-ecological practice”; and people learn to be ecophronetic by simply doing ecophronetic things in professional and/or academic practices, including moral improvisation, reflection on experience, emulation of ecophronimoi—people of ecophronesis (the plural of ecophronimos)—and journal writing (Ibid., p. 3, pp. 6–7).Footnote 4 Clearly, this conception would need to be revisited and augmented if the premise were true.

Fourth, the premise means that the problems AI programs or machines can tackle would no longer be limited to what Horst Rittel (1930–1990), Melvin Webber (1920–2006), and West Churchman (1913–2004) dub “tame problems” (Churchman 1967; Rittel and Webber 1973).Footnote 5 Tame problems possess a set of attributes that arguably make them an easy and fitting target for conventional AI programs. These attributes include, but are not limited to, (1) being well-defined, routine, and repetitive; (2) having clear goals and following rigid rules; (3) taking place in a closed, rather than an open, system; (4) being decomposable; and (5) having optimal or ultimate solutions (De Cremer and Kasparov 2021, p. 3 of 8; McCarthy et al. 1955/2006, p. 14). Evidently, none of these attributes is found in wicked problems (Table 2). As such, AlphaZero-Ecophronesis’ ability to deal with wicked problems on its own would be truly ground-breaking should it materialize.

Table 2 The ten attributes of wicked problems (Chan and Xiang 2022, p. 2)

4 A human-machine-partnership scenario

What would the hitherto human task of dealing with wicked problems in socio-ecological practice look like if AlphaZero-Ecophronesis with the attributes tabulated in Table 1 became a socio-technological reality? This question, with its disruptive if-clause discussed above, invites us socio-ecological practice researchers [aka ecopracticologists, Xiang (2019, p. 12)] to envision alternative futures that may seem surprising and unsettling. At the same time, responding to the question, and to questions of this kind, helps emancipate us from the confinement and restraints of conventional thinking that we have long taken for granted and felt comfortable with (Kissinger et al. 2021, pp. 206–207, pp. 211–213). As such, the question warrants our due responses. In this very spirit, I dare to share my imaginings via a scenario, in the hope that others will join in this liberating and meaningful intellectual exercise to “stretch and focus” our thinking on alternative futures (Xiang and Clarke 2003, p. 899).

Under this human-machine-partnership scenario,

  1.

    In socio-ecological practice, where wickedness—the ubiquity of wicked problems—is a daunting reality (see note [1] in Table 2), practitioners team up with AlphaZero-Ecophronesis to undertake the task of dealing with wicked problems cooperatively. They do so with the belief that dealing with wicked problems should not be completely outsourced to an AI machine, however competent it may be, but that the task benefits greatly from a collaborative human–machine partnership.

  2.

    On this human–machine team, practitioners are willing and able first to learn about the machine ecophronesis that AlphaZero-Ecophronesis generates (Table 1) and then to form an augmented ecophronesis by mingling machine ecophronesis with their own human ecophronesis. In so doing, they follow in the footsteps of those human board-game players who, as De Cremer and Kasparov (2021), McGrath et al. (2022), and Silver et al. (2018b) point out, improved their own play by learning about AlphaZero’s machine intelligence and emulating its playing style. The creation of an augmented ecophronesis is inspired by the belief of many of AlphaZero’s admirers that an augmented intelligence, which combines the strengths of human intelligence with those of machine intelligence, provides an opportunity for humans to advance to higher levels of understanding, more expansive thought, and self-awareness (e.g., De Cremer and Kasparov 2021, pp. 4–8 of 8; Hultin 2019; Johnson and Sivas 2018; Kasparov 2018; Kissinger et al. 2019, p. 1 of 7, p. 5 of 7; Kissinger et al. 2021, pp. 206–207, pp. 211–218).Footnote 6

  3.

    The augmented ecophronesis combines the strengths of machine ecophronesis with those of human ecophronesis and avoids many human weaknesses in judgment making, including the systematic cognitive biases embedded in heuristics, such as those identified collectively by the Israeli psychologist Amos Tversky (1937–1996) and the Israeli–American psychologist Daniel Kahneman in the representativeness, availability, and anchoring-and-adjustment heuristics (Tversky and Kahneman 1974)Footnote 7; as such, the augmented ecophronesis allows practitioners to transcend the ordinary limits of human cognition and wisdom, and to advance to an extraordinary level of human wisdom that is neither imaginable nor achievable with human ecophronesis alone.

  4.

    With the augmented ecophronesis, practitioners undertake the task of dealing with wicked problems in a way that has never before been seen in the history of socio-ecological practice. Among the things we can imagine at present, they prudently transform certain wicked problems into tame problems by eliminating all ten attributes (Table 2); convert some other wicked problems into less-wicked ones by removing or mitigating some of the ten attributes; and develop novel strategies for working with the remaining wicked problems as well as the less-wicked ones.

  5.

    The human–machine team finds itself engaged in a continually repeating learning–acting cycle as new mutations of wicked problems emerge. Through such a process, both practitioners and AlphaZero-Ecophronesis improve themselves in a coherent way toward a mutually beneficial end—better socio-ecological practice; more productive and fulfilled practitioners; and a more effective and humble AI machine.

  6.

    Still, there is a less tangible yet more profound achievement: the emergence and progression of a harmonious human-machine relationship between practitioners and AlphaZero-Ecophronesis. In the eyes of practitioners, AlphaZero-Ecophronesis has proved itself neither a fancy “marvelous toy”, as Klaus Barber and Susan Rodman once dubbed FORPLAN—a short-lived computerized optimization model for forest planning and management at the US Forest Service in the 1980s (Barber and Rodman 1990), nor a competitor that might replace practitioners in the socio-ecological practice of dealing with wicked problems. Instead, it is a serious, useful, independent yet cooperative partner, and “a fellow ‘being’ experiencing and knowing the world—a combination of tool, pet, and mind.” (Kissinger et al. 2021, p. 212)

    It is noteworthy that the antithesis of the human-machine-partnership scenario is a human-machine-rivalry scenario. Under that scenario, neither humans nor AI machines can conceive of themselves as being in a positive-sum game where cooperation leads to mutual gains (as in the human-machine-partnership scenario); instead of partners, they choose to be competitors fighting each other in a zero-sum or even a negative-sum game for hegemony [for a succinct description of these three game situations in game theory, see The Editors of Encyclopaedia Britannica (2015)]. As I shall show below with a recent example, this rivalry-scenario thinking undergirds many contemporary AI apprehensions and criticisms.

5 Embracing opportunities and challenges of the ever-evolving human inventiveness

In a March 22, 2023 open letter signed by thousands of people from around the world, the Future of Life Institute (https://futureoflife.org/) calls on all AI laboratories “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (an AI program developed by the San Francisco, California-based American AI lab OpenAI).” (Future of Life Institute 2023; the parenthetical remark is mine) It urges AI developers to use this pause to work with policymakers “to dramatically accelerate development of robust AI governance systems” in the hope that such governance systems will help mitigate or avoid “potentially catastrophic effects (of AI programs or machines) on society” (Ibid.; the parenthetical remark is mine).

According to the open letter, the call reflected a deep concern at a time when AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control” (Future of Life Institute 2023). The concern was that AI programs or machines that are no longer constrained by the limits of human knowledge or intelligence are “becoming human-competitive at general tasks,” so much so that one day they “might eventually outnumber, outsmart, obsolete, and replace us (humans)” (Ibid.; the parenthetical remark is mine). A Terminator-like apocalypse seems to loom on the horizon under this apparent human-machine-rivalry-scenario thinking.

While the open letter highlights the paramount need for a socio-technological ethic to comprehend and even guide AI development and use, it raises a broader and more fundamental question: How should we human beings respond to the opportunities and challenges of our own ever-evolving inventiveness? Throughout human history, the question has arisen multiple times in response to seismic innovations that promised both to enrich and to disrupt established ways of work and life. These include, but are not limited to, the inventions of papermaking, printing, gunpowder, the compass, the steam engine, the Internet, and arguably the atomic bomb. This time, however, the question concerns an unprecedented disruptive socio-technological revolution that may lead to a world relying heavily on self-learning AI machines rarely if ever constrained by human knowledge, intelligence, and wisdom.Footnote 8

For us members of the socio-ecological practice research community (the SEPR community), there are two pressing pertinent issues:

As self-learning AI machines become ever more sophisticated, more knowledgeable, and even wiser beyond the present scope of human reason and imagination, they could increasingly undertake many tasks in socio-ecological practice hitherto undertaken solely by humans, including potentially the task of dealing with wicked problems; they could also fulfill these tasks in their own way—a nonhuman way, that is—as the aforementioned AlphaZero did in mastering board games. What opportunities and challenges would this socio-technological revolution present to us? How should we respond?

Since the ongoing AI revolution is driven by the endless, restless human inventiveness and is deemed neither stoppable nor reversible (AI for Good 2023; Floridi and Cowls 2019; HAI 2022; Kissinger et al. 2021; Leeming 2021; Tomašev et al. 2020; U4SSC 2017, 2020, 2021), forgoing questions of this kind, including more specific ones like the what-if question in Sect. 2, is simply infeasible. I therefore developed this essay within the confines of my own knowledge, experience, and imagination, in the hope that it serves as a starting point or springboard for an open, explorative dialogue among members of the SEPR community and other academic and professional communities. In the optimistic spirit that the future will favor those members of humanity who choose to courageously and prudently embrace both the opportunities and the challenges of the ever-evolving human inventiveness, let us act proactively: ask questions about the implications of the AI revolution for socio-ecological practice; liberate ourselves from the restraints of present knowledge, experience, and wisdom; and cooperate and coevolve with this new and powerful player in our noble socio-ecological endeavor to build a better world.