![figure a](http://media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs42532-023-00153-x/MediaObjects/42532_2023_153_Figa_HTML.jpg)
Wei-Ning Xiang is a professor of geography and earth sciences at the University of North Carolina at Charlotte, USA (1990–present); he is the founding editor in chief of SEPR.
I dedicate this editorial to the memory of Horst Rittel (1930–1990) who, in 1970, “proposed an Issue-Based Information System (IBIS) to support argument and discourse when dealing with wicked problems.” (Verma 2023, p. 4 of 7)
1 A self-learning, knowledge-generating AI machine that mastered board games
In his seminal 1953 paper “Digital computers applied to games,” the British mathematician and logician Alan Turing (1912–1954) writes, with a vivid imagining of a self-learning machine in mind, “Could one make a machine to play chess, and to improve its play, game by game, profiting from its experience? … This could certainly be described as ‘learning,’ though it is not quite representative of learning as we know it.” (Turing 1953, p. 1 of 10, p. 10 of 10) Sixty-five years later, the first self-learning machine for board-game (including chess) play, AlphaZero, not only reached this milestone (McGrath et al. 2022, p. 1 of 10; Silver et al. 2018b) but also went above and beyond Turing’s imagination by virtue of its extraordinary “superhuman performance” (Campbell 2018; Silver et al. 2018a, p. 1140).
In a December 2018 editorial in the journal Science, the Russian chess grandmaster and former World Chess Champion Garry Kasparov marvels at AlphaZero, an artificial intelligence (AI) program—or “machine”, as Alan Turing would have called it—developed by the London-based British AI laboratory DeepMind, for its record-breaking performance in mastering board games. He writes approvingly, “AlphaZero is surpassing us (human board-game players and traditional AI board-game-playing programs) in a profound and useful way …” (Kasparov 2018, p. 1087; the parenthetical note is mine).
AlphaZero indeed merits such a high compliment. As a one-of-a-kind generalist program, it played chess, Go, and shogi (Japanese chess) at a level superior to not only top human players but also those traditional specialist programs that had previously won human–machine board-game competitions—Stockfish (for chess), AlphaGo (for Go), and Elmo (for shogi) (Campbell 2018; Kasparov 2018; Kissinger 2018, p. 12; Kissinger et al. 2019, pp. 1–2 of 7; McGrath et al. 2022, p. 1 of 10; Silver et al. 2018a, pp. 1140–1143; Silver et al. 2018b).
The developers and admirers attribute this extraordinary performance to the extraordinary style in which AlphaZero played: It is “a distinctive, unorthodox, yet creative and dynamic playing style” (Silver et al. 2018b) that has not been previously seen in board-game history (Campbell 2018; Kissinger 2018, p. 14; Kissinger et al. 2019, pp. 1–2 of 7; Silver et al. 2018b). Playing with this “most fascinating,” “amazing” style (Campbell 2018; Silver et al. 2018b), AlphaZero conceived and executed bold yet brilliant moves that neither follow any known game strategies nor conform to time-honored norms of human play; as such, in the eyes of human players, the moves it made often appeared to be counterintuitive, risky, or even wrong (Kasparov 2018; Kissinger et al. 2019, p. 2 of 7).
The extraordinary style stems from an extraordinary dual capability that AlphaZero possesses: the ability to acquire its own knowledge of game play without human assistance of any kind (except the basic game rules); and the ability to rely entirely on this self-generated knowledge in games against other players. Specifically, via a trial-and-error process called reinforcement learning, AlphaZero learns how to play a game from scratch by playing the game against itself with no embedded human strategies but only the basic rules of the game (Campbell 2018; Kasparov 2018; McGrath et al. 2022, p. 1 of 10, p. 9 of 10; Silver et al. 2018a, p. 1140; Silver et al. 2018b). Once learned, it uses and only uses this body of self-acquired nonhuman knowledge to compete against other board-game players, whether they be humans or human-trained AI programs.
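For readers who wish to see the mechanics in miniature, the self-play learning process described above can be sketched in a few lines of code. The sketch below is my own drastic simplification, not DeepMind’s method: it replaces AlphaZero’s deep neural networks and Monte Carlo tree search with a tabular Monte Carlo average, and replaces chess with a toy “race to 10” game; but it preserves the essential idea of starting from nothing except the rules and improving purely through self-play.

```python
import random
from collections import defaultdict

# Toy "race to 10" game: two players alternately add 1 or 2 to a running
# total; whoever reaches exactly 10 wins.  The agent knows only these rules
# and learns by playing against itself, sharing one value table for both
# sides (a tabular stand-in for AlphaZero's tabula-rasa self-play).

TARGET = 10
ACTIONS = (1, 2)

def legal_actions(total):
    return [a for a in ACTIONS if total + a <= TARGET]

def train(episodes=20000, epsilon=0.3, seed=0):
    rng = random.Random(seed)
    value_sum = defaultdict(float)   # (total, action) -> summed returns
    visits = defaultdict(int)        # (total, action) -> visit count

    def q(s, a):
        return value_sum[(s, a)] / visits[(s, a)] if visits[(s, a)] else 0.0

    for _ in range(episodes):
        total, history = 0, []       # history holds each mover's (state, action)
        while total < TARGET:
            acts = legal_actions(total)
            if rng.random() < epsilon:          # explore
                a = rng.choice(acts)
            else:                               # exploit current estimates
                a = max(acts, key=lambda x: q(total, x))
            history.append((total, a))
            total += a
        # The player who made the last move won; credit alternates backward.
        reward = 1.0
        for state, action in reversed(history):
            value_sum[(state, action)] += reward
            visits[(state, action)] += 1
            reward = -reward
    return lambda s: max(legal_actions(s), key=lambda a: q(s, a))

policy = train()
```

After training, the greedy policy reliably finds immediate wins (e.g., from a total of 8 it adds 2 to reach 10), and with enough episodes it also discovers the game’s known winning openings entirely on its own, without any human strategy having been programmed in.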
This extraordinary dual capability, according to the developers and admirers, confers on AlphaZero an extraordinary competitive advantage over traditional AI board-game-playing programs (e.g., Stockfish, AlphaGo, and Elmo), which rely largely if not entirely on human knowledge of game play as encoded by human programmers: namely, independence from human influence of any kind, including the priorities and prejudices of programmers (Kasparov 2018; McGrath et al. 2022, p. 1 of 10, p. 9 of 10; Silver et al. 2018a, p. 1140, p. 1144; Silver et al. 2018b). It is in the very sense of “having zero human knowledge in the loop,” as the chief designer David Silver explains, that the developers at DeepMind named it AlphaZero [see a video in Silver et al. (2018b) at https://www.youtube.com/watch?v=7L2sUGcOgh0&t=101s].
2 What if a self-learning, wisdom-generating AI machine became a reality?
While reading about this exemplary instance of human achievement and its fascinating progression from a twentieth-century imagination to a twenty-first-century socio-technological reality, I was intrigued by Garry Kasparov’s provocative conjecture that the human inventiveness leading to the AlphaZero wonder “may be duplicated on any other task or field where virtual knowledge can be generated” (Kasparov 2018, p. 1087) (see Footnote 1). My experience as a socio-ecological practice researcher gave me pause; my interest in how ecophronesis (ecological practical wisdom) enables people to effectively undertake the task of dealing with wicked problems in socio-ecological practice aroused my curiosity (see Footnote 2):
Now that we humans can make and use AlphaZero that self-learns to be superintelligent in mastering board games, IF we could also make AlphaZero-Ecophronesis, a sibling AI program or machine that, as the name implies, could self-learn to be ecophronetic—socio-ecologically and practically wise—in dealing with wicked problems (see Table 1), WHAT would the hitherto human task of dealing with wicked problems in socio-ecological practice look like?
Admittedly, the question, by virtue of its disruptive premise, would necessarily invite bold, even wild, imaginings beyond the present scope of human reason. As I shall show in the next section, the premise that a self-learning, wisdom-generating AI machine, AlphaZero-Ecophronesis, became a socio-technological reality, following in the footsteps of its knowledge-generating sibling AlphaZero, is profoundly disruptive on at least several important counts.
3 How profoundly disruptive is the premise?
First, it means that the conventional belief that wisdom is “non-algorithmic, non-programmable” (Rowley 2007, p. 168, citing Awad and Ghaziri 2004) would no longer be true; and that AlphaZero-Ecophronesis, as a self-learning, wisdom-generating machine, could operate independently at the top level of the classic data–information–knowledge–wisdom (DIKW) hierarchy (Fig. 1). Specifically, with a dual capability comparable to that of AlphaZero (see Sect. 1 and Table 1), not only could AlphaZero-Ecophronesis generate its own ecological practical wisdom—machine ecophronesis—but it could also rely entirely on it in dealing with wicked problems; and it could conduct these learning and acting activities entirely on its own in a virtual world.
Fig. 1 The data–information–knowledge–wisdom (DIKW) hierarchy and the programmability and algorithmicability (i.e., being programmable and algorithmicable) of its four components (after Rowley 2007, p. 168). Note: [1] The conventional belief that wisdom is “non-algorithmic, non-programmable,” as shown here, would no longer be true if the self-learning, wisdom-generating AlphaZero-Ecophronesis with the attributes in Table 1 became a socio-technological reality. [2] All four elements in the DIKW hierarchy are necessary for humans to effectively undertake their various tasks in work and life, including the task of dealing with wicked problems in socio-ecological practice. However, being at the pinnacle of the hierarchy, wisdom plays an imperative role of paramount importance: It offers guidance, provides direction, and exercises control over data, information, and knowledge (Ackoff 1989, pp. 8–9; Lynch and Kaufman 2019, p. 455; Rowley 2007, p. 166). As a particular type of wisdom—ecological practical wisdom—ecophronesis has the same chief authority over the other three elements in the hierarchy; it decides what and how data, information, and knowledge should be used to effectively deal with wicked problems (a succinct definition of ecophronesis is given later in Sect. 3)
The second aspect of the premise’s disruptiveness is a logical yet fatal extension of the first. In his presidential address to the 1988 annual meeting of the International Society for General Systems Research (ISGSR) in St. Louis, Missouri, USA, the American management science scholar Russell Ackoff (1919–2009) famously enunciates:
“… wisdom-generating systems are ones that man will never be able to assign to automata. It may well be that wisdom, which is essential to the effective pursuit of ideals, and the pursuit of ideals itself, are the characteristics that differentiate man from machines.” (Ackoff 1989, p. 9) (see Footnote 3)
Obviously, this distinction between humans and machines would become blurred, if not disappear altogether, should AlphaZero-Ecophronesis with the attributes tabulated in Table 1 become a reality.
Third, the premise means that the existent conceptions of wisdom, including those of ecophronesis, would no longer be accurate. Among other things, wisdom would no longer be a virtue belonging solely to humans, nor would there be only a human way to learn and exercise wisdom. A case in point is a recent conception of ecophronesis—the virtue of ecological practical wisdom. According to Xiang (2023, p. 5), ecophronesis is “the human ability par excellence to make and implement morally sound and pragmatically effective choices in the complex, heterogeneous situations of socio-ecological practice”; and people learn to be ecophronetic by simply doing ecophronetic things in professional and/or academic practices, including moral improvisation, reflection on experience, emulation of ecophronimoi—people of ecophronesis (the plural of ecophronimos)—and journal writing (Ibid., p. 3, pp. 6–7) (see Footnote 4). Clearly, this conception would need to be revisited and augmented if the premise were true.
Fourth, the premise means that the problems AI programs or machines can tackle would no longer be limited to what Horst Rittel (1930–1990), Melvin Webber (1920–2006), and West Churchman (1913–2004) dubbed “tame problems” (Churchman 1967; Rittel and Webber 1973) (see Footnote 5). Tame problems possess a set of attributes that arguably make them an easy and fitting target for conventional AI programs. These attributes include, but are not limited to, (1) being well-defined, routine, and repetitive; (2) having clear goals and following rigid rules; (3) taking place in a closed, rather than an open, system; (4) being decomposable; and (5) having optimal or ultimate solutions (De Cremer and Kasparov 2021, p. 3 of 8; McCarthy et al. 1955/2006, p. 14). Evidently, none of these attributes is found in wicked problems (Table 2). As such, AlphaZero-Ecophronesis’ ability to deal with wicked problems on its own would be truly ground-breaking should it materialize.
4 A human-machine-partnership scenario
What would the hitherto human task of dealing with wicked problems in socio-ecological practice look like if AlphaZero-Ecophronesis with the attributes tabulated in Table 1 became a socio-technological reality? This question, with its above-discussed disruptive if-clause, invites us socio-ecological practice researchers [aka ecopracticologists, Xiang (2019, p. 12)] to envision alternative futures that may seem surprising and unsettling. At the same time, responding to the question, and to questions of this kind, helps emancipate us from the confines of conventional thinking that we have long taken for granted and felt comfortable with (Kissinger et al. 2021, pp. 206–207, pp. 211–213). As such, the question warrants our due responses. In this very spirit, I dare to share my imaginings via a scenario, in the hope that others will join in this liberating and meaningful intellectual exercise to “stretch and focus” our thinking on alternative futures (Xiang and Clarke 2003, p. 899).
Under this human-machine-partnership scenario,
1. In socio-ecological practice, where wickedness—the ubiquity of wicked problems—is a daunting reality (see note [1] in Table 2), practitioners team up with AlphaZero-Ecophronesis to undertake the task of dealing with wicked problems cooperatively. They do so with the belief that dealing with wicked problems should not be completely outsourced to an AI machine, however competent it may be, but will benefit greatly from a collaborative human–machine partnership.
2. On this human–machine team, practitioners are willing and able to first learn about the machine ecophronesis that AlphaZero-Ecophronesis generates (Table 1) and then form an augmented ecophronesis by mingling machine ecophronesis with their own human ecophronesis. In so doing, they follow in the footsteps of those human board-game players who, as De Cremer and Kasparov (2021), McGrath et al. (2022), and Silver et al. (2018b) point out, improved their own play by learning about AlphaZero’s machine intelligence and emulating its playing style. The creation of an augmented ecophronesis is inspired by the belief of many of AlphaZero’s admirers that an augmented intelligence, which combines the strengths of human intelligence with those of machine intelligence, provides an opportunity for humans to advance to higher levels of understanding, expansive thought, and self-awareness (e.g., De Cremer and Kasparov 2021, pp. 4–8 of 8; Hultin 2019; Johnson and Sivas 2018; Kasparov 2018; Kissinger et al. 2019, p. 1 of 7, p. 5 of 7; Kissinger et al. 2021, pp. 206–207, pp. 211–218) (see Footnote 6).
3. The augmented ecophronesis combines the strengths of machine ecophronesis with those of human ecophronesis and avoids many human weaknesses in judgment-making, including the systematic cognitive biases embedded in heuristics, such as those identified collectively by the Israeli psychologist Amos Tversky (1937–1996) and the Israeli–American psychologist Daniel Kahneman in the representativeness, availability, and anchoring-and-adjustment heuristics (Tversky and Kahneman 1974) (see Footnote 7). As such, the augmented ecophronesis allows practitioners to transcend the ordinary limits of human cognition and wisdom, and advance to an extraordinary level of human wisdom that is neither imaginable nor achievable with human ecophronesis alone.
4. With the augmented ecophronesis, practitioners undertake the task of dealing with wicked problems in a way that has not been previously seen in the history of socio-ecological practice. Among the things we can imagine at present, they prudently transform certain wicked problems into tame problems by eliminating all ten attributes (Table 2); convert some other wicked problems into less-wicked ones by removing or mitigating some of the ten attributes; and develop novel strategies for working with the remaining wicked problems as well as those less-wicked ones.
5. The human–machine team finds itself engaged in a continuously repeated learning–acting cycle as new mutations of wicked problems emerge. Through such a process, both practitioners and AlphaZero-Ecophronesis improve themselves in a coherent way toward a mutually beneficial end—better socio-ecological practice; more productive and fulfilled practitioners; and a more effective and humble AI machine.
6. Still, there is a less tangible yet more profound achievement: the emergence and progression of a harmonious human–machine relationship between practitioners and AlphaZero-Ecophronesis. In the eyes of practitioners, AlphaZero-Ecophronesis has proved itself to be neither a fancy “marvelous toy”, as Klaus Barber and Susan Rodman once dubbed FORPLAN—a short-lived computerized optimization model for forest planning and management at the US Forest Service in the 1980s (Barber and Rodman 1990)—nor a competitor that might replace practitioners in the socio-ecological practice of dealing with wicked problems. Instead, it is a serious, useful, independent yet cooperative partner, and “a fellow ‘being’ experiencing and knowing the world—a combination of tool, pet, and mind.” (Kissinger et al. 2021, p. 212)
It is noteworthy that the antithesis of the human-machine-partnership scenario is a human-machine-rivalry scenario. Under that scenario, neither humans nor AI machines can conceive of themselves as being in a positive-sum game where cooperation leads to mutual gains (as in the human-machine-partnership scenario); instead of partners, they choose to be competitors fighting each other in a zero-sum or even a negative-sum game for hegemony [for a succinct description of these three game situations in game theory, see The Editors of Encyclopaedia Britannica (2015)]. As I shall show below with a recent example, this rivalry-scenario thinking undergirds many contemporary AI apprehensions and criticisms.
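The game-theoretic vocabulary invoked here—zero-sum versus positive-sum—can be made concrete with two toy payoff tables. The payoff numbers below are my own illustrative assumptions, not drawn from the cited source; they merely show that in a rivalry (zero-sum) game the joint payoff is fixed, so one side’s gain is necessarily the other’s loss, whereas in a partnership (positive-sum) game mutual cooperation enlarges the joint payoff.

```python
# Two stylized 2x2 games; rows are the human's choice, columns the machine's.
# Payoffs are (human, machine) pairs; the numbers are illustrative only.

# Rivalry scenario: strictly zero-sum -- every outcome's payoffs sum to 0.
rivalry = {("compete", "compete"): (1, -1),
           ("compete", "yield"):   (2, -2),
           ("yield",   "compete"): (-2, 2),
           ("yield",   "yield"):   (0, 0)}

# Partnership scenario: positive-sum -- cooperation grows the joint payoff.
partnership = {("cooperate", "cooperate"): (3, 3),
               ("cooperate", "defect"):    (0, 2),
               ("defect",    "cooperate"): (2, 0),
               ("defect",    "defect"):    (1, 1)}

def is_zero_sum(game):
    """True if every outcome's payoffs cancel out exactly."""
    return all(h + m == 0 for h, m in game.values())

def max_joint_payoff(game):
    """Largest combined payoff any outcome of the game can produce."""
    return max(h + m for h, m in game.values())
```

In the rivalry table no choice of strategies can raise the combined payoff above zero, while in the partnership table mutual cooperation yields the largest joint payoff—the formal sense in which the partnership scenario, unlike the rivalry scenario, admits mutual gains.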
5 Embracing opportunities and challenges of the ever-evolving human inventiveness
In a March 22, 2023 open letter signed by thousands of people from around the world, the Future of Life Institute (https://futureoflife.org/) calls on all AI laboratories “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (an AI program developed by the San Francisco, California-based American AI lab OpenAI).” (Future of Life Institute 2023; the parenthetical note is mine) It urges AI developers to use this pause to work with policymakers “to dramatically accelerate development of robust AI governance systems” in the hope that such governance systems help mitigate or avoid “potentially catastrophic effects (of AI programs or machines) on society” (Ibid.; the parenthetical note is mine).
According to the open letter, the call reflected a deep concern at a time when “AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” (Future of Life Institute 2023) The concern was that AI programs or machines that are no longer constrained by the limits of human knowledge or intelligence are “becoming human-competitive at general tasks”, so much so that one day they “might eventually outnumber, outsmart, obsolete, and replace us (humans)” (Ibid.; the parenthetical note is mine). A Terminator-like apocalypse seems to loom on the horizon under this apparent human-machine-rivalry-scenario thinking.
While the open letter highlights the paramount need for a socio-technological ethic to comprehend and even guide AI development and use, it raises a broader and more fundamental question: How should we human beings respond to the opportunities and challenges of our own ever-evolving inventiveness? Throughout human history, the question has arisen multiple times in response to seismic innovations that promised to both enrich and disrupt established ways of work and life. These include, but are not limited to, the inventions of papermaking, printing, gunpowder, the compass, the steam engine, the Internet, and arguably the atomic bomb. This time, however, the question concerns an unprecedented disruptive socio-technological revolution which may lead to a world relying heavily on self-learning AI machines rarely if ever constrained by human knowledge, intelligence, and wisdom (see Footnote 8).
For us members of the socio-ecological practice research community (the SEPR community), there are two pressing pertinent issues:
As self-learning AI machines become ever more sophisticated, more knowledgeable, and even wiser beyond the present scope of human reason and imagination, they could increasingly undertake many tasks in socio-ecological practice hitherto undertaken solely by humans, including, potentially, the task of dealing with wicked problems; they could also fulfill these tasks in their own, nonhuman, way, as the aforementioned AlphaZero did in mastering board games. What opportunities and challenges would this socio-technological revolution present to us? How should we respond?
Since the ongoing AI revolution is driven by the endless, restless human inventiveness and is deemed neither stoppable nor reversible (AI for Good 2023; Floridi and Cowls 2019; HAI 2022; Kissinger et al. 2021; Leeming 2021; Tomašev et al. 2020; U4SSC 2017, 2020, 2021), forgoing questions of this kind, including more specific ones like the what-if question in Sect. 2, is simply infeasible. I therefore developed this essay, within the confines of my own knowledge, experience, and imagination, in the hope that it serves as a starting point or springboard for an open, explorative dialogue among members of the SEPR community and other academic and professional communities. In the optimistic spirit that the future will favor those members of humanity who choose to courageously and prudently embrace both the opportunities and the challenges of the ever-evolving human inventiveness, let us act proactively: asking questions about the implications of the AI revolution for socio-ecological practice; liberating ourselves from the restraints of present knowledge, experience, and wisdom; and cooperating and coevolving with this new and powerful player in our noble socio-ecological endeavor to build a better world.
Notes
1. Garry Kasparov is neither alone nor the first. For example, two years after Alan Turing presented his seminal idea (Turing 1953), the American computer scientist John McCarthy (1927–2011) expressed the hope “to program a machine to learn to play (board) games well and do other tasks.” (McCarthy 1955; the parenthetical note and italics are mine)
2. [1] The concepts of ecophronesis and wicked problems will be discussed in Sect. 3. [2] “Socio-ecological practice is the human action and social process that take place in specific socio-ecological context to bring about a secure, harmonious, and sustainable socio-ecological condition serving human beings need for survival, development, and flourishing. It is the most fundamental and arguably primordial social practice Homo sapiens has been involuntarily engaging in over thousands of years of co-evolution with nature. Socio-ecological practice includes six distinct yet intertwining classes of human action and social process—planning, design, construction, restoration, conservation, and management.” (Xiang 2019, p. 7)
3. As one of the pioneers in operations research, systems thinking, and management science, Russell Ackoff was actively involved in ISGSR. The society began in the early 1950s as the Society for the Advancement of General Systems Theory and changed its name to the Society for General Systems Research in the fall of 1955. The name was changed again in 1986 to the International Society for General Systems Research (ISGSR), and then finally, in 1988, to the International Society for the Systems Sciences (ISSS, https://www.isss.org) [https://www.isss.org/history/ (accessed March 30, 2023)].
4. For various conceptions of ecophronesis, see Austin (2018), Lu and Wang (2022, in Chinese), and Xiang (2016; 2023, pp. 3–5); for the usefulness of ecophronesis to socio-ecological practice, see Achal and Mukherjee (2019), Grose et al. (2019), Heavers (2023), Jiang et al. (2022), Li et al. (2021), Wang (2019), and Yang and Young (2019), among others.
5. All three were erstwhile faculty members at the University of California at Berkeley, USA. On the contributions of their wicked-problems work to planning and design, more recent publications include Chan (2023), Chan and Xiang (2022), and Verma (2023); and on contributions to public policy, Head (2022).
6. Although wisdom and intelligence are widely regarded as distinct yet related human capabilities (Lynch and Kaufman 2019; Song 2021; Staudinger and Glück 2011), the great majority of AI literature draws no distinction between them. Among the few exceptions are two recent articles tellingly entitled “From artificial intelligence to artificial wisdom: what Socrates teaches us” (Kim and Mejia 2019) and “Morality in the AI world” (Lekka-Kowalik 2021), respectively. In the latter article, the author exclaims in delight that “AI ethics finally aims at artificial wisdom, or at wise artificial intelligence.” (Lekka-Kowalik 2021, p. 48)
7. [1] The use of heuristics is a double-edged sword. “Evidently, people have a natural tendency to conduct their cognitive activities under ease-based heuristics. These are cognitive procedures or judgmental strategies that are simple, easy, and useful, on the one hand (Nisbett and Ross 1980, pp. 254–255), but are narrow, shallow, and often biased or even misleading, on the other (Bazerman 2002, pp. 11–40; Heath et al. 1998, p. 2; Nisbett and Ross 1980, pp. 17–23, pp. 41–42; Tversky and Kahneman 1974, pp. 1130–1131).” (Xiang and Clarke 2003, p. 889) [2] According to the Polish philosopher Agnieszka Lekka-Kowalik, machine practical wisdom (i.e., machine ecophronesis in our case), which she calls “artificial wisdom” (Lekka-Kowalik 2021, p. 44, pp. 47–48), has some distinctive advantages over human practical wisdom (i.e., human ecophronesis in our case). She writes, “Some thinkers even claim that AI would be ‘morally better’ than human beings, for human moral judgments are disturbed by emotions, partiality, and individual vices. AI is immune to such disturbances. It is also better at collecting data, analyzing facts, discovering alternatives, and so on, so its judgments should be better. Moreover, AI may face moral dilemmas more often; there could be moral dilemmas that humans have never faced because of our limited capacities. So, it seems that human experience would not be sufficient for equipping AI with morality.” (Lekka-Kowalik 2021, p. 48)
8. For more systematic explorations of challenges as well as opportunities AI brings to human beings, see Kissinger et al. (2021) and Russell (2019); critical examinations can also be found in, among others, Bender et al. (2021), Berendt (2018), Brundage et al. (2018), Candelon et al. (2021), Cellan-Jones (2014), Chisnall (2020), Hammershøj (2019), Kissinger (2018), Koch (2019), Manyika et al. (2017), Oluwaniyi (2023), Ossewaarde (2019), and Ossewaarde and Gülenç (2020).
References
Achal V, Mukherjee A (2019) Ecological wisdom inspired restoration engineering. Springer Nature, Singapore
Ackoff RL (1989) From data to wisdom: presidential address to ISGSR, June 1988. J Appl Syst Anal 16:3–9
AI for Good (2023) AI for Good. The International Telecommunication Union (ITU), https://aiforgood.itu.int/ (accessed multiple times since February 18, 2023; specific dates are noted as part of the in-text citations)
Austin H (2018) The virtue of Ecophronesis: an ecological adaptation of practical wisdom. Heythrop J 59(6):1009–1021
Awad EM, Ghaziri HM (2004) Knowledge management. Prentice-Hall, Upper Saddle River, New Jersey
Barber KH, Rodman SA (1990) FORPLAN: the marvelous toy. J For 88(5):26–30
Bazerman M (2002) Judgment in managerial decision making, 5th edn. Wiley, New York
Bender EM, Gebru T, McMillan-Major A, Shmitchell S (2021) On the dangers of stochastic parrots: can language models be too big?. Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp 610–623. https://doi.org/10.1145/3442188.3445922
Berendt B (2018) AI for the common good?! Pitfalls, challenges, and ethics pen-testing, Paladyn. J Behav Robot 10(1):44–65. https://doi.org/10.1515/pjbr-2019-0004
Brundage M, Avin S, Clark J et al (2018) The malicious use of artificial intelligence: forecasting, prevention, and mitigation, Preprint at https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf. Accessed 21 Feb 2023
Buchanan R (1992) Wicked problems in design thinking. Des Issues 8(2):5–21
Campbell M (2018) Mastering board games: a single algorithm can learn to play three hard board games. Science 362(6419):1118. https://doi.org/10.1126/science.aav1175
Candelon F, di Carlo CR, De Bondt M et al (2021) AI regulation is coming: how to prepare for the inevitable. Harvard Business Review, September-October, 2021. https://hbr.org/2021/09/ai-regulation-is-coming. Accessed 23 Feb 2023
Cellan-Jones R (2014) Stephen Hawking warns artificial intelligence could end mankind. BBC News, December 2, 2014. https://www.bbc.com/news/technology-30290540. Accessed 17 Feb 2023
Chan JKH (2014) Planning ethics in the age of wicked problems. Int J E-Plan Res 3(2):18–37
Chan JKH (2023) The ethics of wicked problems: an exegesis. Socio Ecol Pract Res 5(1):35–47. https://doi.org/10.1007/s42532-022-00137-3
Chan JKH, Xiang W-N (2022) Fifty years after the wicked-problems conception: its practical and theoretical impacts on planning and design. Socio Ecol Pract Res 4(1):1–6. https://doi.org/10.1007/s42532-022-00106-w
Chisnall N (2020) Digital slavery, time for abolition? Policy Stud 41(5):488–506. https://doi.org/10.1080/01442872.2020.1724926
Churchman CW (1967) Wicked problems. Manage Sci 14(4):B141–B142
Couclelis H (1986) Artificial intelligence in geography: conjectures on the shape of things to come. Prof Geogr 38(1):1–11. https://doi.org/10.1111/j.0033-0124.1986.00001.x
De Cremer D, Kasparov G (2021) AI should augment human intelligence, not replace it. Harvard Business Review Digital Articles, March 18, 2021, pp 1–8
Floridi L, Cowls J (2019) A unified framework of five principles for AI in society. Harvard Data Sci Review 1(1):14. https://doi.org/10.1162/99608f92.8cd550d1
Future of Life Institute (2023) Pause giant AI experiments: an open letter. We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. Published March 22, 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed 30 March 2023
Grose MJ, Wang Y, Cheng Y et al (2019) Ecological practical wisdom: common stances across design and planning. J Urban Ecol 5(1):1–3. https://doi.org/10.1093/jue/juz004
Head B (2022) Wicked problems in public policy. Palgrave Macmillan, Cham, Switzerland. https://doi.org/10.1007/978-3-030-94580-0
Heath C, Larrick RP, Klayman J (1998) Cognitive repairs: how organizational practices can compensate for individual shortcomings. Res Organ Behav 20:1–37
HAI (Human-Centered Artificial Intelligence) (2022) Artificial intelligence index report 2022. Stanford University Human-Centered Artificial Intelligence https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf. Accessed 20 Feb 2023
Hammershøj LG (2019) The new division of labor between human and machine and its educational implications. Technol Soc 59:101142
Heavers N (2023) Dwelling drawing: seeking ecological wisdom in situ. Socio Ecol Pract Res. https://doi.org/10.1007/s42532-023-00150-0
Hultin J (2019) One of science and technology’s great challenges for the 21st century: how can humans compete with artificial intelligence? The New York Academy of Sciences Magazine, Fall 2019: 30
Jiang H, Xie W, Xiang W-N (2022) When the natural pendulum swings between drought and flood, a bifunctional natural drainage system safeguards a mountain village’s water security incessantly for centuries. Socio Ecol Pract Res 4(2):117–129. https://doi.org/10.1007/s42532-022-00109-7
Johnson B, Sivas DA (2018) How the enlightenment ends. Atlantic 322(2):9
Kasparov G (2018) Chess, a Drosophila of reasoning. Science 362(6419):1087. https://doi.org/10.1126/science.aaw2221
Kim TW, Mejia S (2019) From artificial intelligence to artificial wisdom: what Socrates teaches us. Computer 52(10):70–74
Kissinger HA (2018) How the enlightenment ends: philosophically, intellectually—in every way—human society is unprepared for the rise of artificial intelligence. The Atlantic, June 2018, 11–14.
Kissinger HA, Schmidt E, Huttenlocher D (2019) The Metamorphosis. The Atlantic, August 1, 2019
Kissinger HA, Schmidt E, Huttenlocher D (2021) The age of AI: and our human future. Little, Brown and Company, New York
Koch C (2019) Will machines ever become conscious? AI may equal human intelligence without matching the true nature of our experiences. Scientific American. https://www.scientificamerican.com/article/will-machines-ever-become-conscious/. Accessed 26 Feb 2023. Originally published with the title “Proust among the Machines” in Scientific American 321(6):46–49 (December 2019). https://doi.org/10.1038/scientificamerican1219-46
La Rosa D, Pauleit S, Xiang W-N (2021) Unearthing time-honored examples of nature-based solutions. Socio Ecol Pract Res 3(4):329–335. https://doi.org/10.1007/s42532-021-00099-y
Leeming J (2021) How AI is helping the natural sciences: collaborations across disciplines are growing, and artificial intelligence is helping to make joint working more effective. Nature 598:S5–S7
Lekka-Kowalik A (2021) Morality in the AI world. Law Bus 1(1):44–49
Li Y, Gao W, Xiang W-N (2021) “The trouble”, its maker, and Yang Gui’s confidence in “taming the troublemaker” with a 1962 bilateral agreement. Socio Ecol Pract Res 3(4):375–395. https://doi.org/10.1007/s42532-021-00095-2
Lu F, Wang YZ (2022) Ecological practice and ecological wisdom. In: Lu F, Wang YZ, Ecological civilization and ecological philosophy. China Social Sciences Press, Beijing, pp 403–433 [卢风, 王远哲 (2022) 生态实践与生态智慧。载于: 《生态文明与生态哲学》, 403–433页。中国社会科学出版社, 北京]
Lynch SF, Kaufman J (2019) Creativity, intelligence, and wisdom: could versus should. In: Sternberg RJ, Glück J (eds) The Cambridge handbook of wisdom. Cambridge University Press, Cambridge, UK, pp 455–464
Manyika J, Lund S, Chui M et al (2017) Jobs lost, jobs gained: workforce transitions in a time of automation. McKinsey Global Institute, Brussels
McCarthy J (1955) Proposal for research by John McCarthy. In: McCarthy J, Minsky ML, Rochester N, Shannon CE (1955) A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955, p 10. Complete 17-page typescript reproduced on John McCarthy’s Stanford University website http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf. Accessed 29 March 2023
McCarthy J, Minsky ML, Rochester N, Shannon CE (1955/2006) A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4): 12–14. https://doi.org/10.1609/aimag.v27i4.1904. A reproduction of the complete typescript of 17 pages which contains the quotation (page 10) can be found on John McCarthy’s Stanford University website http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf. Accessed 29 March 2023
McGrath T, Kapishnikov A, Kramnik V et al (2022) Acquisition of chess knowledge in AlphaZero. PNAS 119(47):e2206625119. https://doi.org/10.1073/pnas.2206625119
Nisbett R, Ross L (1980) Human inference: strategies and shortcomings of social judgment. Prentice-Hall, Englewood Cliffs, New Jersey, USA
Oluwaniyi R (2023) 7 Reasons why artificial intelligence can’t replace humans at work. Make Use Of, March 15, 2023. https://www.makeuseof.com/reasons-artificial-intelligence-cant-replace-humans/. Accessed 18 March 2023
Ossewaarde M (2019) Digital transformation and the renewal of social theory: unpacking the new fraudulent myths and misplaced metaphors. Technol Forecast Soc Chang 146:24–30
Ossewaarde M, Gülenç E (2020) National varieties of artificial intelligence discourses: myth, utopianism, and solutionism in West European policy expectations. Computer 53(11):53–61
Peters BG (2017) What is so wicked about wicked problems? A conceptual analysis and a research program. Policy Society 36(3):385–396. https://doi.org/10.1080/14494035.2017.1361633
Rittel HWJ, Webber MM (1973) Dilemmas in a general theory of planning. Policy Sci 4:155–169
Rowley J (2007) The wisdom hierarchy: representations of the DIKW hierarchy. J Inf Sci 33(2):163–180
Russell SJ (2019) Human compatible: artificial intelligence and the problem of control. Viking, New York
Silver D, Hubert T, Schrittwieser J et al (2018a) A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362:1140–1144
Silver D, Hubert T, Schrittwieser J et al (2018b) AlphaZero: shedding new light on chess, shogi, and Go. DeepMind, December 6, 2018. https://www.deepmind.com/blog/alphazero-shedding-new-light-on-chess-shogi-and-go. Accessed 11 April 2023
Song B (ed) (2021) Intelligence and wisdom: artificial intelligence meets Chinese philosophers. Springer, Singapore
Staudinger UM, Glück J (2011) Intelligence and wisdom. In: Sternberg RJ, Kaufman SB (eds) The Cambridge handbook of intelligence. Cambridge University Press, Cambridge, UK, pp 827–846
The Editors of Encyclopaedia Britannica (2015) Positive-sum game. Encyclopedia Britannica. https://www.britannica.com/topic/positive-sum-game
Tomašev N, Cornebise J, Hutter F et al (2020) AI for social good: unlocking the opportunity for positive impact. Nat Commun 11(1):2468
Turing A (1953) Digital computers applied to games. AMT/B/7, The Turing Digital Archive. https://turingarchive.kings.cam.ac.uk/publications-lectures-and-talks-amtb/amt-b-7. Accessed 18 April 2023. Also in: Bowden BV (ed) (1953) Faster than thought: a symposium on digital computing machines. Pitman Publishing, London, pp 286–310
Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases: biases in judgments reveal some heuristics of thinking under uncertainty. Science 185(4157):1124–1131
U4SSC (2017) Implementing SDG11 by connecting sustainability policies and urban planning practices through ICTs. The United for Smart Sustainable Cities (U4SSC), September 2017. https://www.itu.int/en/publications/Documents/tsb/2017-U4SSC-Implementing-sustainable-devt/files/downloads/Brochure_U4SSC%20Implementing%20sustainable%20development%20goal%2011_422012.pdf. Accessed 21 March 2023
U4SSC (2020) U4SSC brochure: a UN initiative. The United for Smart Sustainable Cities (U4SSC), June 2020. https://www.itu.int/en/ITU-T/ssc/united/Documents/U4SSC%20Publications/U4SSC_Brochure-June%202020pdf.pdf. Accessed 21 March 2023
U4SSC (2021) Simple ways to be smart. The United for Smart Sustainable Cities (U4SSC), March 2021. https://www.itu.int/en/ITU-T/ssc/united/Documents/U4SSC%20Publications/Deliverables/Simple-ways-to-be-smart/U4SSC_Simple-ways-to-be-smart.pdf?csf=1&e=FgaZDb. Accessed 21 March 2023
Verma N (2023) The disarming simplicity of wicked problems: the biography of an idea. Socio Ecol Pract Res. https://doi.org/10.1007/s42532-023-00143-z
Wang X (2019) Ecological wisdom as a guide for implementing the precautionary principle. Socio Ecol Pract Res 1(1):25–32. https://doi.org/10.1007/s42532-018-00003-1
Xiang W-N, Clarke KC (2003) The use of scenarios in land use planning. Environ Plann B Plann Des 30(6):885–909
Xiang W-N (2013) Working with wicked problems in socio-ecological systems: awareness, acceptance, and adaptation. Landsc Urban Plan 110(1):1–4
Xiang W-N (2016) Ecophronesis: the ecological practical wisdom for and from ecological practice. Landsc Urban Plan 155:53–60
Xiang W-N (2019) Ecopracticology: the study of socio-ecological practice. Socio Ecol Pract Res 1(1):7–14. https://doi.org/10.1007/s42532-019-00006-6
Xiang W-N (2021a) Seven approaches to research in socio-ecological practice & five insights from the RWC-Schön-Stokes model. Socio Ecol Pract Res 3(1):71–88. https://doi.org/10.1007/s42532-021-00073-8
Xiang W-N (2021b) Re-examination research via the COVID glasses: an intellectual movement emerging for the better. Socio Ecol Pract Res 3(1):1–7. https://doi.org/10.1007/s42532-020-00071-2
Xiang W-N (2023) When the process socio-ecological practice meets the virtue ecophronesis, the SEPR community receives benefits. Socio Ecol Pract Res 5(1):1–10. https://doi.org/10.1007/s42532-023-00144-y
Yang B, Young RF (2019) Ecological wisdom: theory and practice. Springer Nature, Singapore
Acknowledgements
I thank the following individuals for their help during the preparation of this editorial (in alphabetical order): Cristina Bueti (International Telecommunication Union, Geneva, Switzerland), Taylor Jones (Future of Life Institute, Cambridge, Massachusetts, USA), David Silver (DeepMind, London, UK), and Wenwu Tang and Tianyang Chen (University of North Carolina at Charlotte, USA). My interest in artificial intelligence began in the 1980s, when the field was commonly called “expert systems” or “knowledge-based systems.” Over the past four decades, and especially in the last 10 years, I have witnessed tremendous growth in the field. I thank the individuals who introduced me to it: Julius Gyula Fábos, Bruce MacDougall, and Meir Gross (University of Massachusetts at Amherst, USA), and Helen Couclelis (University of California at Santa Barbara, USA), in particular through her 1986 article (Couclelis 1986).
Funding
The writing of this editorial was not supported by any funding.
Contributions
The author was fully responsible for the conception and writing of the editorial.
Ethics declarations
Conflict of interest
The author confirms that there is no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Xiang, WN. A 2023 socio-ecological imagination: What if a self-learning, wisdom-generating AI machine became a reality? Socio Ecol Pract Res 5, 125–133 (2023). https://doi.org/10.1007/s42532-023-00153-x