Intelligent agents perceive conditions and problems in the world, gather and process information, then generate solutions and action plans. And insofar as outcomes add to knowledge and associated procedures, agents learn. In this sense, much learning is responsive and adaptive, the result of cycles of problem-solving, action generation, performance evaluation, and updates. For human beings, inter-cyclical performance feedback is the primary source of such updates, unfolding as patterns of experience through time. For artificial agents, by contrast, much learning is intensely intra-cyclical, meaning it occurs during action cycles, often in real time, mediated by rapid feedforward mechanisms. This is possible because artificial agents cycle far more rapidly and precisely than most behavioral and mechanical processes. Indeed, artificial agents achieve unprecedented complexity and learning rates. Owing to this capability, artificial agents are increasingly important in practical problem-solving and process control, especially in environments where real-time adjustments are beneficial. The same capabilities will empower learning by digitally augmented agents.

Yet supervision is challenging here too. Human and artificial agents have noticeably different capabilities and potentialities in learning. On the one hand, as stated above, artificial agents learn at high rates and levels of complexity and precision. They are hyperactive and hypersensitive in learning. For example, it only takes hours or days to train advanced artificial agents to high levels of expertise. On the other hand, human agents exhibit relatively sluggish learning rates and low levels of complexity. As any schoolteacher can attest, it takes years of incremental learning to educate a human being, and many never achieve expertise.

Clearly, these distinctions are like those in Chap. 7, regarding performance evaluation, which is no surprise because much learning is driven by performance feedback. Therefore, in both areas of functioning—the evaluation of performance and learning—human and artificial agents differ in terms of their processing rates, the complexity of processing, and primary mechanisms of updating. As noted, human learning is relatively sluggish, accrues in simpler increments, and derives mostly from inter-cyclical feedback, whereas artificial agents are hyperactive and hypersensitive in learning, rapidly acquiring complex knowledge, including through feedforward mechanisms. It also follows that learning by augmented agents will exhibit the same potential distortions as the evaluation of performance. When combined in augmented agency, human and artificial learning can become divergent, dyssynchronous, and discontinuous, and thus ambiactive in terms of learning rates and levels of complexity.

Hence, learning by augmented agents can also skew, like the evaluation of performance. Three patterns of distortion are possible. First, highly ambiactive learning will combine rapid, complex artificial updates with far slower, simpler human updates. For example, digitalized learning systems cycle rapidly, shortening attention spans and compressing content, yet behavioral aspects of education and training require attentive dedication over long periods of time. In consequence, augmented learning could be overly divergent and hence dyssynchronous, discontinuous, and ambiactive. Second, in other situations, learning by augmented agents could be overly convergent and dominated by artificial processes which overwhelm or suppress human inputs. People would become increasingly reliant on digitalized procedures. Third, the opposite is also possible, in which learning is dominated by human myopia, bias, and idiosyncratic noise. Augmented learning would then reinforce and amplify erroneous priors. If any of these distortions occur, learning by augmented agents will be dysfunctional and often highly ambiguous and ambivalent.

8.1 Theories of Learning

Modern theories of learning emphasize the development of autonomous capabilities and reasoned problem-solving, rather than replication and rote memorization. Modern learning also highlights the role of experience and evaluative feedback. Via such mechanisms, agents develop capabilities, knowledge, and self-efficacy. For the same reasons, modern scholarship accords learning a major role in social development and human flourishing. Writing over a century ago, the founders of modern educational psychology espoused very similar principles (e.g., James, 1983; Pestalozzi, 1830). The enduring challenge is to explain and manage the deeper mechanisms of learning. For example, many continue to ask which aspects of learning are predetermined as natural priors, rather than resulting from experience and evaluative feedback. In other words, what is owing to nature, and what to nurture? And further, in which ways, and to what extent, can natural learning capabilities be enhanced, especially through structured experience and training? More recently, scholars also investigate how human and artificial agents best collaborate in learning (Holzinger et al., 2019).

Historical Debates

Once again, there is an impressive intellectual history. The ancient Greeks made important contributions which remain relevant today. For example, Plato argued that much knowledge was innate, bestowed by nature and inheritance. The challenge was then to release it. Aristotle, by contrast, put more emphasis on nurture and learning through experience, and highlighted the role of memory in the absorption of such lessons (Bloch, 2007). Two thousand years later, Enlightenment scholars explored similar problems and solutions. John Locke (1979) explained learning in terms of the progressive encoding of new knowledge, initially onto a child’s blank mind or tabula rasa. This complemented Rousseau’s (1979) stronger emphasis on learning through the liberation of natural human curiosity, intuition, and inherent capability. Despite their differences, however, both Locke and Rousseau shared the modern view that all persons can learn and grow. Both elevated the status and potential of the autonomous, reasoning mind.

Later psychologists continued exploring the mechanisms of learning. In the mid-twentieth century, Skinner (1953) proposed a radical form of behaviorism, based on operant conditioning and reinforcement learning, driven by stimulus and response. However, then and now, critics view this approach as overly reductionist and materialistic. Most now agree that human beings are more intentional and agentic in learning. Not surprisingly, Bandura (1997) is prominent in this community. He argues that humans learn much through experience and the modeling of behavior, which strengthen self-efficacy. Jerome Bruner (2004), another leading figure in educational and cognitive psychology, argues that human beings are embedded in culture and learn within it, as they compose and interpret narrative meaning. For Gardner (1983), learning involves multiple intelligences, including rational and emotional, which engage a range of cognitive and affective functions. These processes engage different senses, including visual, auditory, and kinesthetic systems. To summarize, modern theories of learning emphasize the development of intelligent capabilities, the wider role of agentic functioning, the contextual nature of learning, and the importance of performance feedback.

Levels of Learning

At the individual level, contemporary theories of learning prioritize cognitive and affective factors, sensitivity to context, the value of experience, and the need for engagement, although scholars have long disagreed about the mechanisms which explain these aspects of learning. For example, Chomsky (1957) argued for genetically encoded structures which scaffold grammar and the learning of language. Modern cognitive science was also emerging at the time, along with early computer science and neuroscience. Scientists started to explore the neurological systems which underpin human learning, conceived as akin to a software architecture. Chomsky’s work can be seen in this context. However, his critics saw innate knowledge structures as a throwback to scholastic conceptions, rather than the liberation of the autonomous mind (Tomalin, 2003). Many therefore resisted any notion of innateness, favoring fully developmental processes instead. For example, also around the mid-twentieth century, Piaget (1972) argued there are progressive stages of learning through childhood, corresponding to the development of the neurophysiological system, layering more complex concepts, relations, and logical structures. He argued that these structures are cumulative and contingent on early, albeit predictable, developmental processes. Chomsky (Chomsky & Piatelli-Palmarini, 1980) disagreed, arguing in response that deep semantic structures emerge holistically, irrespective of context.

Recent research is more complex and nuanced, including Chomsky’s (2014) own. No single psychological, behavioral, or neurophysiological model is fully explanatory. Like the agentic self generally, learning involves culture and context, physical embodiment, experience, and functional complexity. Evidence therefore points to more complex processes of development and procedures in learning (Osher et al., 2018). Reflecting this view, contemporary researchers focus on the variability of contexts and the way in which cognitive and neurophysiological processes interact in learning. They also investigate cognitive plasticity, with some arguing for relatively high levels of flexibility across the lifespan, while others are more conservative in this regard. Notably, recent studies show that the brain remains more plastic than previously thought (Magee & Grienberger, 2020). Many also highlight the importance of model-based learning, by which people master relatively complex patterns of thought and action in more holistic ways (Bandura, 2017). Formal education leverages model-based learning through experiential methods and problem-based instruction (Kolb & Kolb, 2009). The general trend is toward more complex models of learning which engage multiple components of the agentic system, including cognition, affect, different modalities, types of performance, feedback, and feedforward mechanisms (Bandura, 2007).

Similar ideas inform scholarship about learning at the group and collective levels. Theories emphasize functional complexity, integrating cognitive, affective, and behavioral factors, plus sensitivity to social, economic, and cultural contexts, and the potential to develop and grow over time (Argote et al., 2003). However, theories of collective learning also recognize strict limitations. In fact, they share many of the same concerns as individual level theories: cognitive boundedness, attentional deficits, poor absorptive capacity, persistent superstitious tendencies, plus myopias and biases (Denrell & March, 2001; Levinthal & March, 1993). Proposed solutions relate to the development of dynamic capabilities, flexible organizational design, the use of information technologies, and transactive memory systems, in which the storage and retrieval of knowledge are distributed among groups, allowing them to learn more efficiently (Wegner, 1995).

Procedural Learning

Another important strategy is procedural learning, which leverages individual habit and collective routine to reduce the processing load (Argote & Guo, 2016; Cohen et al., 1996). All agents benefit from acquiring less effortful learning procedures. The reader will recall that similar topics are discussed in Chap. 3, regarding agentic modality. In the earlier discussion, I review the debate about aggregation: whether collective routine emerges bottom-up, from the combination of individual habits, or functions top-down, from the devolution of social forms. The same questions arise for learning, namely, do collective learning routines emerge from the aggregation of individual learning habits, or vice versa? Some scholars privilege learning at the individual level and argue that routine learning is an aggregation of habit (Winter, 2013). In contrast, other scholars privilege the holistic origins of collective learning. From this alternative perspective, collective mind and action are the primitives of organizational learning, not the result of bottom-up aggregation. Questions of agentic modality therefore come to the fore once again. Do the limitations of collective learning reflect the aggregation of individual constraints, or do social and organizational factors impose limits on individual learning? Or perhaps all learning combines both types of constraint?

The solution to the aggregation question presented in Chap. 3 also applies to procedural learning. In this type of learning, many individual differences, such as personal encodings, beliefs, and goals, are downregulated and effectively latent, whereas shared characteristics, such as collective encodings, beliefs, and goals, are upregulated and active. Hence, learning habit and routine coevolve and are stored in individual and collective memory, integrated via common storage and retrieval processes. Collectives thus learn without activating significant differences, and without needing to aggregate such differences. Mischel and Shoda (1998) explain that this is how cultural norms evolve, as common, recurrent psychological processes. Hence, routine learning is neither simply bottom-up nor top-down.

That said, collective agents also learn in nonroutine ways, from deliberate experimentation and risk-taking (March, 1991). Increasing environmental uncertainty and dynamism favor these approaches. In organizational life, this has led to an emphasis on continuous learning and on methodologies which highlight feedforward processes, such as design thinking, lean startup methods, and agile software development (Contigiani & Levinthal, 2019). All provide ways for teams and organizations to learn in complex, dynamic environments, using intra-cyclical, adaptive means. Via such methods, agents learn despite high uncertainty, ambiguity, and accelerating rates of change. These methods contrast with earlier, linear approaches to learning, which emphasize slower, inter-cyclical feedback loops (e.g., Argyris, 2002).

Learning is therefore central to modern theories of agency. Apart from anything else, agents develop self-efficacy and capabilities through learning, by trying and sometimes failing, but succeeding often enough. In these respects, developmental learning exemplifies the vision of modernity, which is to nurture autonomous, intelligent agency, and thereby to increase the potential for human flourishing. Modern theories of learning prioritize the capability of agents to learn and grow, especially from the evaluation of performance. Digital augmentation promises major advances in all these functions.

8.2 Digitally Augmented Learning

Artificial agents are quintessentially problem-solving, learning systems. Some are fully unsupervised and self-generative, such as artificial neural networks and evolutionary machine learning. These agents are becoming genuinely creative, speculative, and empathic (Ventura, 2019). They also compose their own metamodels of learning, based on iterative, exploratory analysis, to identify the type of learning which fits best in any context. Artificial reinforcement learning is one recent development, which enables expert systems to learn rapidly from the ground up (Hao, 2019). Other agents are semi-supervised or even fully supervised. In these systems, models are encoded to guide learning and development. In all cases, artificial agents will employ a metamodel of learning, which is defined by its hyperparameters. Among other properties, hyperparameters will specify the potential categories and layers of learning, plus major mechanisms and cycle rates. These will include dynamic, intra-cyclical feedforward mechanisms, as well as longer inter-cyclical feedback processes.
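In machine-learning practice, such a metamodel is typically expressed as a configuration of hyperparameters, fixed before training, which specify the architecture, mechanisms, and cycle rates of learning. The following Python sketch is purely illustrative; the field names and values are assumptions rather than any standard schema.

```python
from dataclasses import dataclass

@dataclass
class LearningMetamodel:
    """Illustrative hyperparameters for a hypothetical metamodel of learning."""
    categories: tuple = ("perception", "action", "evaluation")  # categories of learning
    hidden_layers: tuple = (128, 64, 32)   # layers of learning (network depth and width)
    learning_rate: float = 1e-3            # rate of rapid, intra-cyclical updates
    feedback_interval: int = 100           # steps between slower, inter-cyclical feedback updates
    supervision: str = "semi-supervised"   # fully supervised, semi-supervised, or unsupervised
    early_stopping_patience: int = 10      # a guard against overlearning

# An augmented agent might tune such a configuration through iterative, exploratory search.
config = LearningMetamodel(learning_rate=5e-4, supervision="unsupervised")
print(config)
```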

Notably, artificial neural networks reflect the fundamental architecture of the human brain. This deep similarity between artificial and human agents facilitates close collaboration between them in augmented systems of learning (Schulz & Gershman, 2019). Both share similar features of neural architecture, and when joined, they can self-generate their own, augmented metamodels of learning. Some will be semi-supervised—such as Semi-Supervised Generative Adversarial Networks or SGANs—which might incorporate human values, beliefs, and commitments into metamodels of learning. Important applications already include artificial empathy and personality (Kozma et al., 2018). However, as in other areas of augmented functioning, the key challenge is to ensure effective supervision and metamodel fit.
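The semi-supervised principle is that a small set of human-provided labels, which might encode human judgments, values, or commitments, guides learning across a much larger body of unlabeled data. The sketch below illustrates that principle with scikit-learn's self-training classifier rather than a full SGAN; the data, labeling fraction, and threshold are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Toy data standing in for a large body of experience.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Only about 5% of examples carry a human label; the rest are marked unlabeled (-1).
rng = np.random.default_rng(0)
y_semi = y.copy()
y_semi[rng.random(len(y)) > 0.05] = -1

# The artificial agent propagates the sparse human labels across the unlabeled data.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.8)
model.fit(X, y_semi)
print("accuracy on all data:", model.score(X, y))
```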

Ambiactive Learning

Not surprisingly, the supervisory challenges of learning by augmented agents are like those of augmented evaluation of performance. To begin with, artificial agents will tend toward hypersensitive and hyperactive learning, meaning they quickly detect small degrees of variance, which trigger rapid, precise updates. As previously explained, this includes intra-cyclical feedforward mechanisms and entrogenous mediators, such as performative action generation. However, these mechanisms can lead to excessive updates and overlearning. Digitally augmented agents might learn too much and too often, thereby wasting time and resources. This could also produce ambiopic problem-solving, by encouraging sampling and searching too widely for problems and solutions, further and faster than required. For similar reasons, computer scientists research how to avoid unnecessary overlearning (Gendreau et al., 2013).
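In machine learning, the analogous risk is usually framed as overfitting, and a standard safeguard is early stopping: halt updates once performance on held-out data stops improving. Below is a minimal sketch of that safeguard, assuming a simple incremental learner; the data, patience value, and epoch budget are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

model = SGDClassifier(random_state=1)
best_score, patience, stall = 0.0, 5, 0

for epoch in range(200):
    model.partial_fit(X_train, y_train, classes=np.unique(y))  # one pass of rapid updates
    score = model.score(X_val, y_val)                          # check against held-out data
    if score > best_score:
        best_score, stall = score, 0
    else:
        stall += 1
    if stall >= patience:  # stop before the agent "overlearns"
        print(f"stopped at epoch {epoch}, validation accuracy {best_score:.3f}")
        break
```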

By comparison, human agents are frequently insensitive to outcome variance and sluggish in learning (Fiedler, 2012). Human learning is relatively simple and slow, compared to artificial agents. Significant problems therefore arise for augmented agents because they will combine sluggish, insensitive human updates with hyperactive, hypersensitive artificial updates. As noted earlier, such learning could easily become dyssynchronous and discontinuous, and therefore ambiactive, meaning it simultaneously stimulates and dampens different learning rates and levels of complexity and precision. For example, when people search online about political matters, they trigger rapid, precise cycles of artificial processing, iterating rapidly to guide search and learning. But at the same time, the human agent may input inflexible, myopic priors as search terms. As a result, the learning process uses digitally augmented means to reinforce political bias. In fact, such learning is an expression of confirmation bias at digital scale and speed. And in most cases, it is highly ambiactive and dysfunctional. These scaling effects help to explain the rapid spread of fakery and falsehood on social networks.

As stated earlier, ambiactive learning also increases the risk of ambiguity and ambivalence, owing to its poorly synchronized, discontinuous nature. This is because ambiactive learning easily produces contradictory or incompatible beliefs, interpretations, and preferences. Granted, moderate degrees of ambiguity and ambivalence can be beneficial (Kelly et al., 2015; Rothman et al., 2017). They support creativity and enhance the robustness and flexibility of learning (March, 2010). But digitalization greatly amplifies these effects and extremes become more likely. Metamodel fit will be harder to achieve and sustain. Some people may become overly reliant on digitalized processes and incapable of autonomous, self-regulated learning.

Furthermore, excessive ambiguity and ambivalence can lead to cognitive dissonance and confusion, even triggering psychological and behavioral disorder, especially when they impact core beliefs and commitments (van Harreveld et al., 2015). Indeed, when ambiactive learning is extreme and widespread, people could lose a shared sense of reality, truth, and ethical norms (Dobbin et al., 2015; Hinojosa et al., 2016). Major consequences follow for digitally augmented communities and collectives. For without a shared sense of reality, truth, and right behavior, people are more vulnerable to deception and superstitious learning. Unable to discriminate real from fake, truth from falsehood, or right from wrong, they are more likely to be docile and rely on stereotypes.

Summary of Learning by Augmented Agents

Based on the foregoing discussion, we can now summarize the main features of learning by augmented agents. To begin with, it is important to recognize there are many potential benefits. Augmented agents acquire unprecedented capabilities to explore, analyze, generate, and exploit new knowledge and procedures. In many domains, significant benefits are already apparent. However, at the same time, the speed and scale of digitalization pose new risks. First, augmented agents risk discontinuous updating, because they might skew toward hypersensitive, complex artificial processing, while being relatively insensitive and simplified in human respects. Second, augmented agents risk dyssynchronous updating, because they might skew toward hyperactive, fast learning rates in artificial terms, while being sluggish in human terms. When combined, these divergent tendencies will produce ambiactive, dysfunctional learning, heightening the risks of ambiguity and ambivalence, and in extreme cases, superstitious learning and cognitive dissonance. The corresponding pattern of supervision is shown by segment 9 in Fig. 2.6. Alternatively, artificial agents might dominate learning and relegate human agency to the sidelines, like segment 7 in Fig. 2.6, or human agents could distort semi-supervised augmented learning, by importing myopia and bias, as in segment 3 of Fig. 2.6.

8.3 Illustrative Metamodels of Learning

This section develops illustrations of learning, showing premodern, modern, and digitalized metamodels. Like the earlier discussions of self-regulation in Chap. 6 and evaluation of performance in Chap. 7, the following illustrations highlight internal dynamics, especially the interaction of human and artificial agents in augmented systems. Also like the preceding two chapters, the following illustrations focus on contrasting processing rates and degrees of complexity, but this time in relation to the precision and rate of learning. Once again, the analysis highlights critical similarities and differences between human and artificial agents.

Lowly Ambiactive Modern Learning

Figure 8.1 illustrates the core features of a lowly ambiactive, modern system of learning. That is, learning in which agents are moderately assisted by technologies, and in which updates derive mainly from performance feedback and are relatively synchronous and continuous. The horizontal axis depicts two major cycles of learning labeled 1 and 2, which are further subdivided. These cycles encompass processes of feedback generation and subsequent updates to knowledge and procedures. The vertical axis illustrates the complexity and precision of learning updates, which range from low in the center to high in the upper and lower regions. The figure then depicts cycles for two metamodels of learning, shown by the curved lines labeled L1 and L2. Each depicts a full cycle of processing and updates. Metamodel L2 cycles during each period at moderate complexity. However, metamodel L1 cycles only once over periods 1 and 2, at a lower level of complexity. In this respect, L1 represents a slower, simpler metamodel of learning, such as learning in premodern contexts. L2, on the other hand, represents a faster, more complex metamodel of learning, which is typical of modernity. That said, every two cycles of L2 are fully synchronized with one cycle of L1, making the two metamodels moderately synchronous overall. In fact, L2 is intra-cyclical relative to L1, but assuming slower rates for both, synchronization is feasible. Both are of comparable complexity, and hence moderately continuous as well. As a combined system of learning, therefore, these two metamodels are moderately synchronous and continuous. Indeed, modern learning can be exactly like this. Traditional and cultural systems of learning, represented by L1, are often moderately continuous and synchronized with technically assisted adaptive learning, represented by L2. Overall, the scenario illustrated in Fig. 8.1 is therefore lowly ambiactive, often functional, and neither excessively ambiguous nor ambivalent.

Fig. 8.1
A wave graph depicts 2 waves for the complexity of update processing versus cycles of learning. The graph has sinusoidal curves.

Synchronous and continuous modern learning

Highly Ambiactive Augmented Learning

Next, Fig. 8.2 depicts highly dyssynchronous and discontinuous, ambiactive learning by augmented agents. Once more, the horizontal axis depicts two temporal cycles of learning, labeled 1.1 and 1.2, while the vertical axis illustrates the complexity and precision of learning updates, from low to high. The figure again depicts two metamodels of learning, this time labeled L2 and L3. The curved line L2 is a modern metamodel of learning which assumes moderate technological assistance. L3 depicts a fully digitalized, generative metamodel with a higher learning rate and greater complexity. Importantly, the two metamodels of learning are now poorly synchronized and connected. As the figure shows, they intersect in an irregular fashion. Updates are therefore dyssynchronous. Furthermore, the two metamodels exhibit different levels of complexity, which means updates will be discontinuous as well. If we now assume that both metamodels combine in one augmented agent—that is, the agent combines modern adaptive learning L2 and digitalized generative learning L3—overall learning will be dyssynchronous, discontinuous, ambiactive, probably dysfunctional, and highly ambiguous and ambivalent as well.

Fig. 8.2
A wave graph depicts 2 waves for the complexity of update processing versus cycles of learning. The graph has sinusoidal curves.

Dyssynchronous and discontinuous augmented learning
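The contrast between Figs. 8.1 and 8.2 can also be reproduced numerically. When the faster metamodel completes a whole number of cycles per slow cycle and the amplitudes (complexity) are similar, the curves realign at cycle boundaries; when the rates stand in a non-integer ratio and the amplitudes diverge, updates drift apart. The following Python sketch is illustrative only; the rates and amplitudes are arbitrary assumptions chosen to display the two patterns.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2, 1000)  # two major cycles of learning

L1 = 1.0 * np.sin(2 * np.pi * 0.5 * t)  # slow, simple metamodel: one cycle per two periods
L2 = 1.2 * np.sin(2 * np.pi * 1.0 * t)  # faster modern metamodel, similar complexity
L3 = 2.5 * np.sin(2 * np.pi * 2.7 * t)  # digitalized metamodel: higher rate and complexity

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(t, L1, label="L1"); ax1.plot(t, L2, label="L2")
ax1.set_title("Lowly ambiactive: commensurate rates, similar complexity (cf. Fig. 8.1)")
ax2.plot(t, L2, label="L2"); ax2.plot(t, L3, label="L3")
ax2.set_title("Highly ambiactive: incommensurate rates, divergent complexity (cf. Fig. 8.2)")
for ax in (ax1, ax2):
    ax.legend(); ax.set_ylabel("complexity of update")
ax2.set_xlabel("cycles of learning")
plt.tight_layout()
plt.show()
```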

Consider the following example. Assume that L2 in Fig. 8.2 represents a modern metamodel of adaptive learning, in which an aircraft pilot learns from practical performance feedback. Next, assume that L3 represents the generative learning of an artificial avionic control system. Now assume that the pilot and avionic agent collaborate in flying an aircraft. Given their different modes of learning, they will update knowledge and procedures in a dyssynchronous and discontinuous fashion, reflecting the pattern in Fig. 8.2. Overall learning will be highly ambiactive, ambiguous, and ambivalent. Learning will likely be dysfunctional and, in this case, potentially disastrous. Indeed, aircraft have crashed for this reason (Clarke, 2019). Pilot training and artificial systems were poorly coordinated and synchronized, and human and artificial agents failed to collaborate effectively. Pilots could not interpret or respond to the rapid, complex signals of the digitalized, flight control system. And the flight control system was insensitive to the needs and limitations of the pilots. The resulting accidents are tragic illustrations of ambiactive dysfunction. Similar risks are emerging in other expert domains, and many more instances are likely.

Lowly Ambiactive Augmented Learning

In contrast, Fig. 8.3 illustrates lowly ambiactive learning by an augmented agent. Digitalized learning is labeled L4, and modern adaptive learning is again labeled L2. As in the previous figures, the horizontal axis depicts temporal cycles, and the vertical axis again illustrates levels of complexity and precision. In contrast to the preceding figure, however, the two metamodels in Fig. 8.3 are now moderately synchronous and continuous. Cycles are better aligned, intersecting at the completion of major learning cycles, despite their different rates. Their levels of complexity are similar as well. By implication, collaborative supervision is strong. If we now assume that these two metamodels are combined in one augmented agent—that is, the agent combines modern adaptive learning L2 and digitalized generative learning L4—then overall learning will be lowly ambiactive, functional, and not significantly ambiguous or ambivalent. For the same reasons, entrogenous mediators will be adequately aligned, synchronous, and continuous as well. In summary, Fig. 8.3 illustrates a well-supervised system of learning by augmented agents, which is what engineers aspire to build (Chen et al., 2018; Pfeifer & Verschure, 2018). The supervision of learning achieves strong metamodel fit, in this case, with appropriately balanced learning rates and levels of complexity.

Fig. 8.3
A wave graph depicts 2 waves for the complexity of update processing versus cycles of learning. The graph has sinusoidal curves.

Synchronous and continuous augmented learning

8.4 Wider Implications

Highly ambiactive learning therefore poses major risks for augmented agents: extreme ambiguity and ambivalence, incoherent and inconsistent updates, functional losses, and cognitive dissonance. Moving forward, researchers must develop methods of supervision which mitigate these risks and maximize metamodel fit. Fortunately, research has already begun. For example, dissonance engineering explicitly addresses these risks (Vanderhaegen & Carsten, 2017). It seeks to manage and supervise human-machine interaction in learning, and especially the risks of learning conflict within augmented systems. Other researchers are working to develop more empathic interfaces, to facilitate better human-machine communication in learning (Schaefer et al., 2017). However, we have yet to see comparable research efforts in the social and behavioral sciences. The following sections highlight some of the major challenges and opportunities in this regard.

Divergent Capabilities

Many of these problems arise because the speed and scale of digital innovation are outpacing human absorptive capacities and traditional methods of learning. Most human learning is gradual, cycling relatively slowly and responding to inter-cyclical performance feedback. Knowledge is absorbed incrementally, often vicariously. Theories therefore assume broadly adaptive processes, driven by experience and performance feedback, iterating in punctuated gradualism over time. Moreover, owing to limited capabilities and behavioral contingency, human learning is often incomplete. In contrast, artificial learning is increasingly powerful, fast, and self-generative. Indeed, for today’s most advanced artificial agents, it may take only minutes or hours to perform complex learning tasks which no human agent could ever complete. Compounding the challenge, artificial feedforward mechanisms are largely inaccessible to consciousness, given the relatively sluggish, insensitive nature of human monitoring. When combined, these divergent capabilities produce novel risks for augmented agents. Artificial processes could race ahead, while humans constantly struggle to keep up. Human attention spans may continue to shrink, while digitalized content is compressed and commoditized (Govindarajan & Srivastava, 2020). Recall that similar effects drive entrogenous divergence in Fig. 2.4. Learning could be simultaneously adaptive and generative, fast and slow, over-complete and incomplete. At the same time, human supervision could import erroneous myopia, bias, and noise, and digital augmentation will reinforce and amplify these limitations. In these situations, learning will be dysfunctional and lead to functional losses.

Existing theories of learning are ill-equipped to conceptualize and explain these risks. Theories typically assume the gradual, progressive absorption of knowledge and skills (Lewin et al., 2011). Myopic learning, insensitivity to feedback, and sluggishness are the main foci of scholarly attention, which makes perfect sense in a pre-digital world. For the same reasons, the risks of overlearning and over-absorption receive little attention. It is no surprise, therefore, that existing theories struggle to explain the effects of digitally augmented hyperopia, hyperactivity, and hypersensitivity, and the resulting risks of dyssynchronous, discontinuous updating and ambiactive learning. In fact, to date, most of these risks are not even conceptualized in theories of learning.

The opposite is true for research in computer science and artificial intelligence. In these domains, overlearning, hypersensitivity, and hyperactivity already receive significant attention (Panchal et al., 2011), which is fully explicable because the risks are clear and growing. They interrupt and confuse artificial learning, leading to functional losses. Consider the control of autonomous vehicles once again. In an emergency, the artificial components of the augmented system should prioritize rapid approximate learning over slow exact learning. For example, there is no need to distinguish a pedestrian’s age or gender before stopping to avoid a collision (Riaz et al., 2018). Overlearning would delay action and be disastrous. Comparable trade-offs will arise in other augmented domains, including the piloting of aircraft, as discussed earlier. Supervised trade-offs will be critical, balancing the complexity of updates versus learning rates, given the context and goals. If well supervised, augmented agents can enjoy significant benefits. They will incorporate the best of human experience and commitment, and the best of artificial computation and discovery.
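A toy sketch of this trade-off follows: the control loop acts on a fast, approximate detection as soon as confidence is high enough, rather than waiting for slower, exact classification to finish. Every function, timing, and threshold here is invented for illustration.

```python
import random

def coarse_confidence(step):
    """Fast, approximate signal: confidence that an obstacle is present, available every step."""
    return min(1.0, 0.2 * step + random.uniform(-0.05, 0.05))

def fine_classification_ready(step):
    """Slow, exact learning: detailed classification of the obstacle takes many more steps."""
    return step >= 50

BRAKE_THRESHOLD = 0.9  # assumed confidence at which braking is triggered

for step in range(100):
    if coarse_confidence(step) >= BRAKE_THRESHOLD:
        print(f"step {step}: brake now; fine classification ready: {fine_classification_ready(step)}")
        break
```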

Furthermore, digitally augmented agents will recompose metamodels of learning in real time, to maintain metamodel fit as contexts change. To do so, they will rely heavily on entrogenous mediators, namely intelligent sensory perception, performative action generation, and contextual learning. Every phase of learning will be intelligent and generative, rather than procedural and incremental. In this fashion, digitally augmented learning enables adaptive learning by design. But this raises additional questions. How much of augmented learning will be accessible to human consciousness and supervision, or will it be an opaque product of artificial intelligence? And if the latter proves true, could this lead to a new type of superstitious learning, in which humans absorb outcomes without understanding how or why they came about? In fact, we already see some evidence of this, in the opacity of deep learning and artificial neural networks. These systems are widely applied in augmented systems, but important features can remain hidden and not explainable, even to the developers (Pan et al., 2019).
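A minimal sketch of this idea, under strong simplifying assumptions: an online learner monitors its own recent error, and when sustained error signals a loss of fit (a change of context), it recomposes its metamodel by adjusting its own step size. All values are arbitrary and purely illustrative.

```python
import random

step_size, estimate = 0.05, 0.0   # the "metamodel" here is reduced to a single step size
recent_errors = []

def signal(t):
    """A drifting environment: the context shifts abruptly at t = 200."""
    return (5.0 if t < 200 else 15.0) + random.gauss(0, 0.5)

for t in range(400):
    error = signal(t) - estimate
    estimate += step_size * error                 # rapid, feedforward update
    recent_errors.append(abs(error))
    if len(recent_errors) > 20:
        recent_errors.pop(0)
    avg_error = sum(recent_errors) / len(recent_errors)
    if len(recent_errors) == 20 and avg_error > 3.0 and step_size < 0.5:
        step_size = 0.5                           # loss of fit: learn faster in the new context
        print(f"t={t}: drift detected, step size raised to {step_size}")
    elif len(recent_errors) == 20 and avg_error < 1.0 and step_size > 0.05:
        step_size = 0.05                          # refit achieved: restore slower, precise learning
```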

Problematics of Learning

When viewed collectively, these developments signal a major change in the problematics of learning. Recall that modernity problematizes the following: how, and to what degree, can human beings transcend their natural limits through learning, to be more fully rational, empathic, and fulfilled? Digitalization prompts additional questions. To begin with, how can human beings collaborate closely with artificial agents in learning while remaining genuinely autonomous in reasoning, belief, and choice? Relatedly, how can humans absorb digitally augmented learning while preserving their natural intuitions, instincts, and commitments? And how can digitally augmented institutions and organizations, conceived as collective agents, fully exploit artificial learning while avoiding extremes of digitalized determinism? Also, how can humanity ensure fair access to the benefits of digitally augmented learning and not allow it to perpetuate inequality, discrimination, and injustice? Finally, how will human and artificial agents trust and respect each other in collaborative learning, despite their different levels of capability and potentiality? The metamodels and mechanisms described here can help guide research into these questions.