1 Introduction

Millions of people use generative artificial intelligence (AI) chatbots such as ChatGPT and Gemini to answer questions, write and debug computer code, generate images, and complete other tasks. Even though generative AI will likely improve, the technology has severe limitations. Generative AI cannot learn continuously; it must be trained offline and periodically retrained to incorporate new information. The training is exceptionally power-hungry, and it can cost millions of dollars to train a large chatbot just once.

This paper follows a neuroscience path to improved AI that overcomes the stated limitations. It studies AI based on the neocortex, the brain’s center for learning, reasoning, planning, and language. The neocortex learns continuously with little power (the whole brain uses 20–30 watts) and is very flexible, learning thousands of diverse tasks. The approach builds on the Thousand Brains Theory of the neocortex, focusing on the parts of the theory needed to present new insights. The reader is referred to Hawkins (2021) and Hole and Ahmad (2021) for comprehensive introductions to the theory.

Formally, a future tool or tool AI with functionality derived from the neocortex is a Turing machine. Since Turing machines can simulate any algorithm of formal rules, a tool AI is also an algorithm programmable on a computer. A tool AI has two internal, permanent goals: to learn about the world and to interact with users to answer questions in one or more domains where the tool has received expert training (Hole, 2023). Examples of domains are medicine, cybersecurity, and the law. Tools interact physically and verbally with the environment, including humans, to build physical and social world models and use them to answer users’ questions. Unlike intelligent autonomous agents (Bostrom, 2014), neocortex-based tool AIs pose no existential risk to humanity (Hole, 2023). In other words, they will not initiate actions to harm or kill humans.

Creativity is the hallmark of human intelligence. Human creativity is vital to solving novel, challenging problems (Boden, 1998; Gonzalez and Haselager, 2005; Kirkpatrick, 2023). According to Roli et al. (2022), no algorithm can achieve human creativity. (The result also holds for robots interacting with the environment.) The main reason is that algorithms operate deductively on fixed concepts and categories, making inferences about particular instances to solve problems. Humans introduce these fixed concepts and categories during algorithm design. Algorithms cannot discover or generate novel properties or relations that are not part of these concepts and categories. They can only represent novelty combinatorially, as new combinations of and relations between objects in a potentially vast but predefined space of possibilities.

This paper studies cooperation between humans and intelligent algorithmic tools based on the neocortex to compensate for algorithms’ limited creativity. It provides fundamental insights into human-tool cooperative problem-solving. The paper first describes (Sect. 2) human emotions, feelings, and creativity. It then explains (Sect. 3) why feelingless tool AIs cannot achieve human creativity using arguments from Roli et al. (2022). To overcome tools’ lack of creativity, it studies (Sect. 4) human-tool cooperation, combining humans’ feeling-guided creativity and tool AIs’ sizeable computational capacity. The paper argues that interactive tools augment human creativity and enhance problem-solving. It explains (Sect. 5) why human-led abductive reasoning incorporating human creativity is crucial to human-tool cooperation in solving challenging problems. Finally, it asserts (Sect. 6) that human stakeholders are morally responsible for tool answers’ adverse impact, but it is still essential to teach tools moral values to generate trustworthy replies.

The paper discusses (Sect. 7) the implications of the fundamental insights. Because tool AIs (and algorithmic AI in general) have limited creativity, humans must guide and actively partake in tool AIs’ efforts to solve problems requiring genuine new insights. The research community should focus on creating neocortex-based tools to augment human creativity and enhance problem-solving rather than creating autonomous algorithmic entities with independent but less creative problem-solving.

2 Human Emotions, Feelings, and Creativity

Basic emotions are demands on the brain for work to satisfy bodily needs. Deep brain stimulation studies strongly indicate that all mammals have the same basic emotions (Panksepp, 1998; Celeghin et al., 2017; Solms, 2021, Ch. 5). These bodily emotions are innate, not learned. They trigger activity or thinking in the brain to satisfy physical requirements. The human brain learns social emotions, including happiness, shame, and jealousy, from repeated experiences followed by reflections. The learning combines and modifies basic emotions to create complex social emotions. Although emotion-induced thinking proceeds unconsciously to satisfy basic bodily needs, emotions representing urgent needs become conscious when people start to feel them.

Feelings are the consciously experienced features of emotions. People have feelings about noteworthy world events and themselves. The brain prioritizes emotions with conscious feelings over non-conscious emotions (Solms, 2021, p. 220). Qualia (singular: quale) denote the individually felt qualities, exemplified by the sensation of pain, the taste of chocolate, the smell of gasoline, and the quality of redness. Feelings provide qualitative information the brain uses to make decisions in unpredictable or uncertain situations with little quantitative information, like choosing actions to escape an avalanche. Feelings are “error signals” indicating how individuals are doing, allowing them to handle and learn from unforeseen situations (Solms, 2021, pp. 98, 101; Earl, 2014).

Human creativity is a search for original ideas leading to unexpected and valuable results. We consider two types of creative processes (Boden, 1998; Gonzalez and Haselager, 2005; Kirkpatrick, 2023). First, original ideas are created by selecting, modifying, and recombining known ideas in conceptual spaces. This combinatorial creativity is theoretically achievable by algorithmic tools traversing spaces of ideas. The second type occurs when new perspectives or views transform a known conceptual space into a different space with radically new ideas. According to Roli et al. (2022), no algorithm can perform such a transformation unless humans included the correct transformational views during algorithm design. Considering the case of algorithmic tools based on the neocortex, the following section provides an alternative argument for why this transformational creativity, which depends on new views of conceptual spaces, is non-algorithmic.

To clarify the difference between combinatorial and transformational creativity, we consider their distinct impacts in various domains. Human combinatorial and transformational creativity is central to all art. Pictures painted with familiar techniques in established styles result from combinatorial creativity, while paintings in radically new styles, often painted using new techniques, require transformational creativity. Artists with exceptional transformational creativity are rare, examples being Michelangelo with the ceiling painting in the Sistine Chapel and Picasso with cubism. Although there need be no difference in the technical quality of art produced by combinatorial and transformational creativity, the artworks have different impacts. Transformational creativity produces art that surprises, shocks, or excites beyond what is usual for combinatorial creativity.

Human transformational creativity is no less critical in subject areas based on rules and logical thinking, such as mathematics and engineering. Examples of transformational creativity in mathematics are group theory and chaos theory. Transformational inventions in engineering include the steam engine and the transistor. Breakthroughs in mathematics or engineering do not occur by logical thinking alone. Transformational creativity is essential to developing new understandings of problems and new techniques for solving them. A breakthrough often occurs when there is a conflict between existing ways of understanding a problem. The need to remove the uncertainty and resolve the conflict leads to radically new ideas about solving the problem (Beghetto, 2021).

Human creativity brings original ideas to answering questions (Kaufman and Sternberg, 2019). An individual’s logical reasoning, emotions, and feelings decide what question to answer, where emotions and feelings represent experiences, preferences, and needs. The individual then chooses an actionable idea relevant to finding an answer. The choice of action may change the physical or social environment or how the individual perceives it, allowing transformational creativity with radically new actionable ideas (Roli et al., 2022). An idea that was not previously available can cause individuals to change their approach to finding an answer. During the answering process, individuals may learn new approaches to answering future questions.

Evolution has created human transformational creativity that invents and improvises by creating and exploring opportunities. The non-ergodic (non-repeating) creative process rejects old ideas and creates new ones as the world changes (Kauffman, 2000). Emotions and feelings provide quantitative and qualitative information, respectively, and this information enables choices (Earl, 2014, Sect. 4; Solms, 2021). In novel situations where logic fails because of missing quantitative data (preventing induction to a general hypothesis) or lack of general insight (preventing deduction to the specific), the brain must rely on qualitative information from conscious feelings. Hence, human transformational creativity needs feelings to select novel opportunities and create radical ideas.

The remainder of the paper focuses on tool AIs based on the Thousand Brains Theory of the neocortex (Hawkins, 2021; Hole and Ahmad, 2021). The theory provides a path toward neocortex-based tools with improved AI that avoids the limitations of generative AI described in the introduction.

3 Tool AIs Cannot Achieve Human Creativity

The biological neocortex is the basis for the design of tool AIs (Hole, 2023; Hole and Ahmad, 2021; Hawkins, 2021). The neocortex is a wrinkled sheet, about 2.5 mm thick, enveloping the brain’s two hemispheres. It constitutes roughly 70 percent of the brain’s volume and contains over 10 billion cells. It has dozens of communicating regions, each consisting of cortical columns (Mountcastle, 1997; Thiboust, 2020). The neocortex has about 150,000 columns (Hawkins, 2021). Since all cortical columns contain variations of the same canonical circuit, the cortical columns carry out very similar computations. It follows that the cortical regions also carry out essentially identical operations; the difference in output is mainly due to varying input. In other words, the neocortex regions for seeing, touching, hearing, reasoning, and language carry out nearly the same operations.

The canonical circuit in a cortical column can learn models of physical objects. The models, called reference frames, have internal “coordinate systems” telling the neocortex where objects’ parts are located relative to each other. The neocortex organizes reference frames in structures to create composite objects (Hawkins, 2021; Hole and Ahmad, 2021). Reference frames also represent abstract concepts like democracy, mathematics, and philosophy, including the spaces of ideas mentioned earlier. According to the Thousand Brains Theory (Hawkins, 2021), reference frames are crucial to problem-solving because the neocortex creates and moves around in the frames to find solutions.
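To make the notion of a reference frame more concrete, the toy sketch below stores features at locations in a frame and places whole frames relative to each other to form a composite object. The class names and the integer coordinate representation are illustrative assumptions for exposition, not the data structures of the Thousand Brains Theory or of any proposed tool AI design.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

Location = Tuple[int, int, int]  # a point in a frame's own "coordinate system"

@dataclass
class ReferenceFrame:
    """Toy reference frame: features stored at locations relative to each other."""
    features: Dict[Location, str] = field(default_factory=dict)

    def learn(self, location: Location, feature: str) -> None:
        self.features[location] = feature

    def feature_at(self, location: Location) -> Optional[str]:
        return self.features.get(location)

@dataclass
class CompositeObject:
    """Composite object: whole reference frames placed at relative offsets."""
    parts: Dict[str, Tuple[Location, ReferenceFrame]] = field(default_factory=dict)

    def add_part(self, name: str, offset: Location, frame: ReferenceFrame) -> None:
        self.parts[name] = (offset, frame)

# Usage: a coffee cup whose handle is located relative to its body.
body, handle = ReferenceFrame(), ReferenceFrame()
body.learn((0, 0, 0), "cylindrical surface")
handle.learn((0, 0, 0), "curved grip")
cup = CompositeObject()
cup.add_part("body", (0, 0, 0), body)
cup.add_part("handle", (1, 0, 0), handle)
```

The same composition pattern would, in principle, let abstract reference frames nest conceptual “locations” instead of spatial ones.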

A previous paper (Hole, 2023) outlined a possible design of tool AIs. The design depends heavily on the neocortex but excludes all brain parts below it, including the amygdala, hypothalamus, and other structures the neocortex needs to integrate emotions and feelings (Solms, 2021; Hawkins, 2021, pp. 146–147; Panksepp, 1998, pp. 42–43; Damasio and Carvalho, 2013; Pessoa, 2013; Leng, 2018). Because tool AIs are without emotions and feelings, they cannot develop new preferences; tools only have the preferences included by their designers. Designers can incorporate preferences for traversing a space of ideas known at design time. However, since the world changes in unpredictable ways (Roli et al., 2022; Taleb, 2010), it is impossible to determine and include all the preferences needed to traverse future spaces that do not exist at design time. Thus, tool AIs can only achieve human combinatorial creativity based on the selection, modification, and recombination of ideas in initially known spaces.

To confirm this limit on achievable tool AI creativity, observe that since tool AIs cannot develop new preferences, tools cannot select opportunities in the physical or social environment unforeseen by their creators. As a result, tools cannot create new perspectives or views to transform an idea space and generate radically different ideas. In other words, tool AIs cannot achieve human transformational creativity.

Since human transformational creativity occurs when emotions with conscious feelings select actions that change the physical or social environment to create new perspectives or views, human creativity goes beyond the creativity of algorithmic tool AIs. The reader may find this conclusion surprising since the AI literature describes many algorithms for solving challenging problems (Russell and Norvig, 2020). However, they all have the same limitation (Roli et al., 2022): algorithms cannot explore radically new opportunities that their designers have not considered.

For example, learning algorithms for neural networks can find previously unknown correlations in data, but they can only find correlations made possible by predefined data models. Humans decide what data are important and how to set up the learning. Perhaps we could overcome the limited creativity by using a utility function (Russell and Norvig, 2020) to assign a performance score to different possibilities? Again, if a tool cannot detect radically new possibilities, it cannot assign them values.
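A minimal sketch makes the point concrete: a utility function defined over an enumerated option space has no value to return for an option outside that space. The option names and scores below are invented for illustration only.

```python
from typing import Dict

# Hypothetical, designer-supplied utility scores over a fixed option space.
DESIGN_TIME_OPTIONS: Dict[str, float] = {
    "reuse_known_method": 0.6,
    "combine_two_methods": 0.8,
}

def utility(option: str) -> float:
    """Score an option; anything outside the predefined space is invisible."""
    if option not in DESIGN_TIME_OPTIONS:
        raise ValueError(f"'{option}' is not representable in the design-time space")
    return DESIGN_TIME_OPTIONS[option]

print(utility("combine_two_methods"))  # 0.8
# utility("radically_new_idea")        # raises ValueError: the option cannot be scored
```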

If conscious feelings are essential to achieving transformational creativity, why not create brain-based AI with feelings? There are indications that the qualia of feelings are not algorithmic (Chalmers, 1996; Goff, 2017), so it is unknown how to realize feelings. Even if it were possible, implementing emotions with conscious feelings would be unwise because they could create internal goals incompatible with humanity’s interests, resulting in an existential threat that could decimate humankind (Armstrong et al., 2012; Bostrom, 2014; Hole, 2023).

In summary, tool AIs without human assistance can achieve combinatorial creativity in known spaces of ideas. However, tools cannot achieve transformational creativity because they lack the feelings to change between views and, thus, transform a conceptual space to take advantage of emerging ideas.

4 Augmented Human Creativity

A crucial question is whether algorithmic tool AIs with only combinatorial creativity could augment human creativity to solve challenging problems in many domains. To answer the question, we first discuss how neocortex-based tool AIs will likely develop. We will probably first see software and later hardware implementations of artificial neocortices consisting of many interconnected copies of the canonical circuit (Hawkins, 2021). The first software tool AIs will have artificial neocortices with relatively few copies of the canonical circuit compared to the biological neocortex. Since neocortex-based tools use a sparse binary encoding of data (with many zeros and few ones) and sparse artificial neural networks (with many possible network connections left out) (Hole and Ahmad, 2021), the software tools can run entirely on powerful CPUs without the costly GPUs needed by generative AI chatbots.
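To illustrate what sparse binary encoding means in practice, the short sketch below represents items as long, mostly zero bit vectors and compares them by counting shared active bits. The vector length and sparsity are toy values chosen for illustration, not the parameters of any cited encoding.

```python
import numpy as np

N_BITS = 2048    # length of each sparse binary vector (toy value)
N_ACTIVE = 40    # roughly 2% of the bits are ones

def random_sdr(seed: int) -> np.ndarray:
    """Create a sparse binary vector with N_ACTIVE ones out of N_BITS."""
    rng = np.random.default_rng(seed)
    v = np.zeros(N_BITS, dtype=np.uint8)
    v[rng.choice(N_BITS, size=N_ACTIVE, replace=False)] = 1
    return v

def overlap(a: np.ndarray, b: np.ndarray) -> int:
    """Similarity is the number of shared active bits."""
    return int(np.sum(a & b))

cup = random_sdr(seed=1)
mug = random_sdr(seed=1)  # identical encoding: maximal overlap
car = random_sdr(seed=2)  # unrelated encoding: near-zero overlap

print(overlap(cup, mug))  # 40
print(overlap(cup, car))  # typically 0-3 shared bits by chance
```

Because most entries are zero, such vectors and the networks operating on them admit compact storage and cheap comparisons, which is part of why the first software tools can run on CPUs.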

Later, neuromorphic hardware will realize artificial neocortices larger than the human neocortex. The realization entails creating over 150,000 copies of the canonical circuit and adding a sparse network of connections between them. Neuromorphic implementations run faster than software implementations on computers. Neuromorphic neocortices with high speed, reliable storage, and flexible wiring will process more information than the biological neocortex. The artificial neocortices could also have additional cortical regions not found in the brain that process sensory signals unavailable to humans (Hawkins, 2021, pp. 156–158). In short, neuromorphic implementations of large artificial neocortices promise fast cognitive information processing, enhancing human cognitive capabilities.

We next view an artificial neocortex as an extension of a user’s biological neocortex because both cortices realize the canonical circuit. The user’s biological neocortex could interact with the artificial neocortex in a tool AI via speech or text. However, the shared canonical circuit suggests a more direct and faster integration via a neural implant. At present, it is unknown how to achieve such integration. (See Neuralink’s website neuralink.com and its show-and-tell videos on YouTube for relevant research.) The rest of the paper assumes speech or text communication between the user and the tool AI.

Since bodily emotions trigger activity in the brain, Minsky (2007, pp. 5–6) described emotions as forms of thinking. Fuster (2015, p. 247) stated that emotions and feelings influence more or less all thinking. Hence, human emotions and feelings influence a tool AI’s operation whenever a user communicates with it to make choices and solve a problem. The following scenario illustrates how interactive cooperation between the user and the tool could take advantage of human creativity (a code sketch of the loop follows the list):

1. The user asks the tool AI a question.

2. The tool interprets the question in the user’s context. If necessary, it communicates with the user to fully understand the question.

3. The tool AI uses information on the Internet and its existing models of physical objects and abstract concepts to build new models relevant to answering the question.

4. The tool explores old and new models to create an answer.

5. At some point, the tool AI must choose between several options. If the tool has no predefined preference or cannot make a logical inference, it asks the user to choose among the available options.

6. If the tool AI gets stuck, it asks the user for novel ideas to build and explore new models not deducible from previous models.

7. The tool may repeat steps 3–6 to find an answer. It eventually replies to the user or gives up trying to find an answer.
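The seven steps can be read as a single interaction loop. The sketch below is a minimal, hypothetical rendering of that loop; the Tool and User interfaces and all method names are placeholders for capabilities the scenario attributes to the two parties, not an actual tool AI design.

```python
from typing import List, Optional, Protocol

class Tool(Protocol):
    """Capabilities the scenario attributes to a neocortex-based tool AI."""
    def interpret(self, question: str) -> str: ...
    def build_models(self, task: str) -> None: ...
    def explore(self, task: str) -> Optional[str]: ...
    def pending_options(self, task: str) -> List[str]: ...
    def apply_choice(self, task: str, option: str) -> None: ...
    def is_stuck(self, task: str) -> bool: ...
    def add_models_from_idea(self, task: str, idea: str) -> None: ...

class User(Protocol):
    """Contributions only the human makes: clarifications, choices, and novel ideas."""
    def clarify(self, question: str) -> str: ...
    def choose(self, options: List[str]) -> str: ...
    def propose_idea(self, task: str) -> str: ...

MAX_ROUNDS = 10  # arbitrary bound on repeating steps 3-6

def solve_with_user(tool: Tool, user: User, question: str) -> Optional[str]:
    """Cooperative problem-solving loop following the seven-step scenario."""
    task = tool.interpret(user.clarify(question))          # steps 1-2
    for _ in range(MAX_ROUNDS):                            # step 7: repeat steps 3-6
        tool.build_models(task)                            # step 3
        answer = tool.explore(task)                        # step 4
        if answer is not None:
            return answer                                  # reply to the user
        options = tool.pending_options(task)               # step 5: the user decides
        if options:
            tool.apply_choice(task, user.choose(options))
        elif tool.is_stuck(task):                          # step 6: the user supplies ideas
            tool.add_models_from_idea(task, user.propose_idea(task))
    return None                                            # give up (step 7)
```

The crucial design point is that the two places where the loop cannot proceed on its own, the choice among options and the escape from a dead end, are exactly where the human’s feelings and transformational creativity enter.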

The above scenario illustrates how a neocortex-based tool draws on the user’s transformational creativity to strengthen problem-solving. Radical ideas from a human allow a tool AI’s artificial neocortex to create new reference frames modeling the world. These models are not deducible from existing models; they restructure knowledge to enhance problem-solving heuristics and skills. The human ideas enable the tool to take advantage of new opportunities not included from the start. Furthermore, human ideas provide new insights that allow the tool to delete incorrect reference frames and create new ones, removing wrong assumptions and updating beliefs learned earlier.

Human-tool cooperation allows people to develop more and better transformative ideas. A tool AI supports an iterative process in which users explore as many potentially valuable ideas as possible, select the most promising ones, and repeatedly refine them. The vast computational capacity of tool AIs will make it possible to explore far more ideas than is feasible today. Because the tools can access more knowledge and ideas than their users, the tools will make unexpected suggestions. These surprising suggestions will, in turn, inspire transformative ideas in users.

Tool AIs need interactive user interfaces to present ideas using plots, flowcharts, networks, images, 3D models, video, and sound. A flexible user interface allows users to see problems and challenges in new ways and discover connections that are not just modifications of familiar ideas. Future tool AIs with great computational power and advanced user interfaces will help people develop transformative ideas in extensive creative processes. The tools will assist people in solving problems and challenges in art, culture, and science that require extensive combinatorial and transformative creativity. In conclusion, human transformational creativity allows tool AIs to explore radically new ideas not available without human participation and solve challenging problems.

5 Human-Led Abductive Reasoning

Algorithmic AI can use three types of logical inference to solve problems: induction, deduction, and abduction. According to Roli et al. (2022), neither induction nor deduction can reveal novel features of the world that the algorithm designers did not, at least implicitly, introduce. Once a developed algorithm has divided the world into a finite set of categories, it cannot see the world beyond those categories. Here, we focus on abduction, defined as finding the best explanation for an observed phenomenon in a given context (Lipton, 2004; Douven, 2022). In algorithmic AI, abduction has the same limitation as induction and deduction: it can only consider known preconditions that could have an observation as a consequence. Again, abduction designed into algorithmic AI cannot move beyond the predefined categories included by the designers (Roli et al., 2022).

According to Larson (2021, Ch. 12), humans use abduction to create knowledge. Abduction determines the best explanations for new physical and theoretical observations (Seddon, 2021). Abductive reasoning forms hypotheses to explain the observations, and the neocortex enables individuals to evaluate these hypotheses. Since human transformational creativity is non-algorithmic (Roli et al., 2022) and creativity is heavily involved in creating hypotheses (Gonzalez and Haselager, 2005; Larson, 2021, pp. 187–188), humans can develop novel explanations for given observations that are not available to algorithmic AI. In other words, human abduction incorporating creativity is critical for successful problem-solving.

This paper studies cooperation between humans and neocortex-based tool AIs, where humans lead tools during creative abduction. Human-led abduction contributes to problem-solving in cases of underdetermination, where the available data support multiple explanations and deductive or inductive inference by the tool is not enough to select one. In some cases, no amount of data will be enough for the tool to select a hypothesis, making explanatory considerations by a human indispensable to break the deadlock (Douven, 2022, Sect. 2.3).

Consider how human-led abduction improves human-tool problem-solving. Initially, a tool receives a challenging problem. Assume that the tool gets stuck at some point during the problem-solving procedure. The human user creates one or more candidate explanations in the form of testable hypotheses to explain puzzling observations relevant to solving the problem. The testable hypotheses allow the tool to refute some candidate explanations by conducting experiments. A human and a tool move closer to a solution by repeatedly refuting hypotheses, finding data supporting others, and making new hypotheses. In short, successful human-tool problem-solving depends on human-led abduction to form novel hypotheses that are not just modifications or combinations of earlier hypotheses. Tools are vital for evaluating the hypotheses.
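The refutation loop can be sketched in a few lines of code. The sketch is purely illustrative: the hypothesis representation, the human propose callback, and the run_experiment placeholder are assumptions made for exposition, and a real tool would evaluate hypotheses against its world models rather than a callback.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Hypothesis:
    statement: str   # a human-created candidate explanation
    prediction: str  # a testable consequence the tool can check

def human_led_abduction(
    observations: List[str],
    propose: Callable[[List[str], List[Hypothesis]], List[Hypothesis]],  # human step
    run_experiment: Callable[[Hypothesis], bool],                        # tool step
    max_rounds: int = 5,
) -> Optional[Hypothesis]:
    """Let the human propose hypotheses and the tool test them until one survives."""
    refuted: List[Hypothesis] = []
    for _ in range(max_rounds):
        candidates = propose(observations, refuted)  # human creativity enters here
        if not candidates:
            return None                              # the human has no further ideas
        for hypothesis in candidates:
            if run_experiment(hypothesis):           # data support the prediction
                return hypothesis                    # best surviving explanation so far
            refuted.append(hypothesis)               # refuted; informs the next round
    return None
```

The division of labor mirrors the text: forming genuinely new hypotheses stays with the human, while refuting them at scale is where the tool’s computational capacity pays off.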

6 Moral Responsibility

An entity is assigned moral responsibility by others when they praise or blame the entity for the impact of its behavior (Courtenage, 2023). This section focuses on blame. It argues that human stakeholders, including designers, developers, users, and corporations, are morally responsible for the negative impact of answers from tool AIs. It then finds that tools must learn moral values to generate answers that users trust.

6.1 Human Moral Responsibility

Determining who has moral responsibility for the impact of answers from tool AIs is vital because the combination of human creativity and powerful computational tools could cause severe harm to many people if the tools are not developed and used with care (Hole, 2023).

Some believe entities must have (phenomenal) consciousness to be held morally responsible for their actions (Wallach and Allen, 2009). We can punish conscious entities for moral transgressions because they feel regret and shame, but we cannot similarly punish entities without feelings, like neocortex-based tool AIs. However, influencing feelings is not the only way to hold entities responsible. Feelingless entities could take responsibility by learning from their transgressions and doing compensatory work to reduce adverse consequences. In particular, tool AIs could work to mitigate the negative impact of their answers. However, it is not always possible to reduce consequences once they have occurred; for example, a tool AI explaining how to create a deadly pandemic cannot bring people back to life. As we shall see, there are fundamental reasons why human stakeholders cannot hold interactive, algorithmic tool AIs responsible for the adverse consequences of answers.

According to Courtenage (2023), three requirements must be satisfied to hold an entity morally responsible for its behavior. We focus on the first requirement: an entity must control its actions. The requirement means that an entity must be able to make voluntary choices about what to do; nobody steers or manipulates the entity into taking particular actions. Courtenage (2023) used the requirement to argue that intelligent machines are not morally responsible.

Building on the work by Courtenage (2023), we assert that interactive, algorithmic tool AIs have insufficient control of the problem-solving process (described in Sect. 4) to take moral responsibility for adverse consequences of solutions. First, human tool users influence the problem-solving process by choosing between options generated by tools. Whenever tools cannot choose, they present the options to users and let them decide. Second, users provide tools with ideas when the process is stuck. Since tools do not have transformational creativity, radical ideas from users are essential to solving challenging problems. In short, human users significantly influence tools’ problem-solving processes by making choices and providing ideas.

Third, because algorithms realize human designers’ concepts, insights, and preferences, the designers decide what choices algorithmic tool AIs can make without contacting users. Designers also preconfigure tool AIs with goals: learn about the world and answer questions. The tools have no other goals and cannot generate new ones (Hole, 2023). Consequently, tool AIs are not autonomous agents generating new goals; they are computational tools in the service of humans, who largely control what the tools do.

Finally, corporations making tool AIs will partly control who gets the tools and how people use them. Since the mass deployment of tools with an unfinished or deficient design could have an intolerable effect on a population, corporations must take great care while designing and deploying tool AIs to avoid severe consequences. In conclusion, human stakeholders are morally responsible for the negative impact of tool answers because they design tool AIs to fulfill specific goals, deploy the tools, ask them questions, make choices during problem-solving, and provide ideas leading to answers.

6.2 Teaching Moral Values to Tools

We can view the text-generating chatbot ChatGPT from OpenAI as an interactive tool AI not based on the neocortex. ChatGPT is engaging because it has combinatorial creativity, but experiments show that it provides conflicting moral advice (Krugel et al., 2023). We discuss why ChatGPT has inconsistent moral values and outline how neocortex-based tool AIs could learn consistent moral values to build user trust.

The conflicting moral advice from ChatGPT is problematic because it influences users’ decisions. It is reasonable to believe that the lack of consistent moral values in the training data is the main reason ChatGPT provides conflicting moral advice. Stakeholders have trained (different versions of) ChatGPT on enormous text datasets obtained from the Internet, and after training, the chatbot can generate sentences to answer questions. The chatbot extends partly formed sentences based on likely continuations. This generative method may produce morally consistent answers when stakeholders train a chatbot on a dataset based on consistent moral principles. However, the Internet is full of racist and misogynistic texts, false and fabricated conclusions, and much other questionable material, leading to conflicting moral values.

A chatbot providing consistent moral advice requires training data based on consistent moral principles. Unfortunately, assembling such training sets may be far from trivial because much historical data contains morally questionable biases due to long-standing societal problems. OpenAI tries to reduce the problem of morally questionable advice by limiting the questions ChatGPT answers and removing unwanted words and phrases before the answers reach users. The company also fine-tunes ChatGPT with additional data, using reinforcement learning with human feedback (OpenAI, 2023). However, the training sets are too large for domain experts to evaluate all possible moral questions and conversation scenarios. The problem recurs because the chatbot must be periodically retrained on new data to give it access to updated information. The countermeasures or guardrails do not prevent all inappropriate moral responses, but they ensure that users receive fewer of them.
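In its simplest form, such a guardrail amounts to checking questions and draft answers against a list of disallowed content before anything reaches the user. The sketch below is deliberately naive and purely illustrative; the blocklist and function are invented here, and production systems rely on trained moderation models and policy checks rather than keyword matching.

```python
# Illustrative blocklist; real guardrails use trained moderation models,
# not simple keyword matching.
BLOCKED_PHRASES = {"synthesize a deadly pathogen", "build an untraceable weapon"}
REFUSAL = "I cannot help with that request."

def apply_guardrail(question: str, draft_answer: str) -> str:
    """Refuse if the question or the draft answer touches a blocked phrase."""
    text = f"{question} {draft_answer}".lower()
    if any(phrase in text for phrase in BLOCKED_PHRASES):
        return REFUSAL
    return draft_answer
```

The sketch also shows why such guardrails are leaky: anything phrased outside the enumerated list passes through, which is why users receive fewer, not zero, inappropriate responses.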

Tool AIs based on the neocortex have the potential to reduce conflicting moral advice significantly. Because fundamental moral values should not depend on partisan political views, agreeing on values acceptable to most human cultures is possible. If we want tool AIs to serve all humans, it makes sense to select fundamental moral values supporting the “common good” (Reich, 2018). The common good is the idea that goods such as education, healthcare, clean water, public safety, environmental sustainability, and social justice must be shared by everyone and distributed fairly to all individuals. The idea also includes promoting human dignity, respecting human rights, and protecting vulnerable or marginalized groups. Here, we assume stakeholders from different cultures can agree on fundamental moral values supporting the common good in tool AIs.

Like babies, new tool AIs have rudimentary sensorimotor competencies for learning and memorizing objects, beings, and skills, but they have minimal knowledge of the world. Children’s emotions impact their learning. Since the artificial neocortex in tool AIs has no emotions or feelings and generates no internal goals, tools will attempt to learn whatever stakeholders teach them (Hole, 2023). There is no universally agreed-upon method for teaching tools moral values. One possibility is to educate tools similarly to children using a combined bottom-up and top-down approach (Wallach and Allen, 2009, Ch. 5–7). The bottom-up part teaches tools fundamental moral values supporting the common good. Human experts on child development, education, and moral philosophy develop the curriculum, and the teaching itself occurs in a high-quality training environment. It is paramount that experts regularly test tools and terminate or retrain tools with poor moral values. Since it is unknown how to teach new tool AIs, getting the teaching right will require much trial and error (Lee, 2020).

During the ensuing top-down part, stakeholders provide tools with explicitly stated, culture-dependent moral norms. The norms should be consistent. Even if tools only accept norms that do not conflict with the learned fundamental moral values, tools may make morally questionable decisions over time because the world changes (Wallach and Allen, 2009, pp. 93–94). Tools should recognize when old decisions turn bad because conditions change and alert users about the consequences. Although much more work is needed to determine how to teach moral values to tools, the above discussion (see also Wallach and Allen, 2009) indicates that creating neocortex-based tool AIs with consistent moral values is possible, thus avoiding much of the conflicting moral advice given by ChatGPT. In conclusion, although stakeholders must take moral responsibility for the adverse impact of tool answers, it is still essential to teach tools moral values so that they generate trustworthy answers.
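The top-down acceptance test described above can be sketched as a consistency check: a new culture-dependent norm is only adopted if it does not conflict with the fundamental values learned bottom-up. The representation and the conflict test below are crude placeholders invented for illustration; real consistency checking would require reasoning over the content of the norms rather than set comparisons.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Norm:
    rule: str
    forbids: Set[str] = field(default_factory=set)   # behaviors the norm forbids
    requires: Set[str] = field(default_factory=set)  # behaviors the norm requires

# Fundamental values learned during the bottom-up phase (illustrative examples).
FUNDAMENTAL_VALUES: List[Norm] = [
    Norm("respect human dignity", forbids={"degrading treatment"}),
    Norm("distribute resources fairly", forbids={"excluding groups from healthcare"}),
]

def conflicts(new: Norm, existing: List[Norm]) -> bool:
    """Crude test: a new norm conflicts if it requires what an existing norm forbids."""
    forbidden = set().union(*(n.forbids for n in existing))
    return bool(new.requires & forbidden)

def accept_norm(new: Norm, accepted: List[Norm]) -> bool:
    """Top-down step: adopt a culture-dependent norm only if it stays consistent."""
    if conflicts(new, FUNDAMENTAL_VALUES + accepted):
        return False
    accepted.append(new)
    return True
```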

7 Concluding Discussion

The Thousand Brains Theory of the neocortex provides a biologically constrained path toward improved AI without the limitations of generative AI described in the introduction (Hawkins, 2021; Hole and Ahmad, 2021; Hole, 2023). We have studied algorithmic tool AIs based on the theory. Tools with artificial neocortices can substantially scale up performance. The biological neocortex consists of roughly 150,000 cortical columns, all realizing variations of the canonical circuit. An artificial (neuromorphic) neocortex with more copies of the canonical circuit could outperform human cognition.

Since no algorithm can achieve human transformational creativity (Roli et al., 2022), we considered cooperation between humans and tool AIs to enhance creative problem-solving. Human abductive reasoning incorporating transformational creativity is essential to developing radically new ideas. The paper’s central message is that AI research should focus on human-tool cooperation, taking full advantage of human-led creative abduction and the vast computational capacity of tool AIs with combinatorial creativity to solve challenging problems.

The paper offers four fundamental insights about cooperation between humans and neocortex-based tool AIs. First, no tool without emotions and feelings can achieve human transformational creativity. Second, an interactive tool drawing on human creativity enhances human-tool cooperative problem-solving. Third, human-led abductive reasoning incorporating transformational creativity is essential to augment this problem-solving. Fourth, human stakeholders are morally responsible for the negative impact of tool answers, but it is still essential to teach tools moral values to generate trustworthy replies. Although we do not know enough to build and train neocortex-based tool AIs that realize these insights, there has been significant progress in understanding the neocortex’s canonical circuit (Hawkins, 2021; Thiboust, 2020), making collaborative tool AIs that augment human creativity a real possibility.

We need a better understanding of the neocortex’s functionality to create tool AIs, providing them with cognitive abilities, common sense, and moral values. An improved understanding of how users and tools should communicate to solve problems is also necessary. Finally, more work is needed to understand and mitigate risks associated with tools. A promising possibility is to create antifragile systems of tool AIs and humans (Taleb, 2012; Hole, 2016, 2023).