From a modern perspective, people should manage their own thoughts, feelings, and actions and thereby exercise autonomous self-regulation. Here again, Bandura (2001) is a leading scholar on the topic. In his social cognitive theory, he explains how individuals, groups, and collectives achieve such autonomy by developing self-efficacy, that is, the confidence to perform in specific task domains through self-regulated action. Self-efficacy thereby strengthens self-regulation, and self-regulation strengthens self-efficacy; the two mechanisms are complementary and reciprocal. He also notes that effective performance is contingent on access to appropriate resources and opportunities, and on the development of capabilities. This helps to explain why Bandura (2007) and other scholars of self-regulation pay special attention to learning, and to the factors which limit the development of self-regulatory capabilities (Cervone et al., 2006; Ryan & Deci, 2006). By focusing on these questions, research into self-regulation reflects the ambition of enlightened modernity: to liberate and empower autonomous human agency.

Bandura and others fueled a blossoming of research on this topic during the late twentieth century. Not by coincidence, their efforts paralleled the rise of cognitive science, neuroscience, cybernetics, and computer science (Mischel, 2004). In many fields, scientists were exposing the deeper mechanisms of intelligent processing. Their discoveries inspired new understanding of human agency, self-regulation, and its functional companions, metacognition and self-supervision. From a systems perspective in particular, human agents can be viewed as complex, open, adaptive, situated, and responsive, and as agents which self-monitor, self-regulate, and self-supervise to significant degrees. Human personality can be understood in this way too, not simply as an expression of fixed traits or conditioned responses. Chapter 1 explains this ecological perspective as viewing “persons in context.” Chapter 2 also relies heavily on this perspective, to develop historical metamodels of agency.

Social Cognitive Perspectives

Reflecting the contextual nature of self-regulation, many leading scholars on the subject are social cognitive psychologists, including Bandura. Social cognitive self-regulation allows agents to monitor and adapt to changing contexts and develop domain-specific self-efficacies. Theories therefore integrate major cognitive-affective processes into self-regulatory functioning: encodings and beliefs about the self, affective states, goals and values, motivations, competencies, and self-regulatory schemes (Shoda et al., 2002). Most theories of self-regulation combine these factors, although they do so in different ways. For example, Baumeister (2014) places more emphasis on attention and affective states as primary sources of self-regulatory strength and capability. In contrast, Carver and Scheier (1998) emphasize the role of goals and control mechanisms, while Higgins (2012) and his collaborators put special weight on agents’ core motivations and the experience of value. Yet, irrespective of emphasis, all agree that self-regulatory capabilities are critical and rarely develop unassisted. They require effective parenting, education, and social modeling, as well as natural capability. Without such support, self-regulation must rely on instinct and chance, which are rarely efficacious in the modern world.

Each theorist therefore integrates social, cognitive, and affective factors, views self-regulation as fundamental to agentic functioning, and highlights contextual variance. Most also recognize and seek to mitigate the limitations of self-regulatory capability. Indeed, one of the main functions of self-regulation is to manage such limitations, for people can only achieve so much, whichever mechanism is operative. Actual self-regulation regularly falls short of aspirations, and almost never matches ideals (Higgins, 1987). There are consistent trade-offs and compromises: between short- and long-term goals, ideal and actual outcomes, individual and collective priorities and commitments, and schematic complexity and processing rates. Processing rate, in this context, is defined as the number of full cycles of self-regulation which can be completed per unit time, and schematic complexity is defined as the number of distinct steps and interactions required for a self-regulatory process to complete. Importantly, these definitions of processing rate and schematic complexity apply to artificial self-regulation as well (see Den Hartigh et al., 2017). In fact, the rest of this chapter will focus primarily on these two aspects of self-regulation: processing rate and schematic complexity. My analysis is selective in this respect, because these characteristics are fundamental to both human and artificial self-regulation and capture important similarities and differences between the two types of agents. That said, I acknowledge that other features of augmented self-regulation will require future investigation.
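Expressed formally (a minimal formalization of the two definitions above; the symbols r, c, N, and Δt are introduced here for illustration and do not appear in the cited sources):

\[ r = \frac{N_{\mathrm{cycles}}}{\Delta t}, \qquad c = N_{\mathrm{steps\;per\;cycle}}, \]

where N_cycles is the number of full self-regulatory cycles completed in the interval Δt, and N_steps per cycle is the number of distinct steps and interactions required for one cycle to complete.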

Rate and Complexity

Owing to their limited capabilities, human agents need to balance self-regulatory processing rate and schematic complexity, because both consume limited resources. Put simply, the higher the processing rate, the lower the schematic complexity, and vice versa (Fiedler et al., 2020). This results in two major options. Both entail trade-offs, which parallel those in complex problem-solving. First, people can try to optimize self-regulatory processing rates, responding quickly to signals and stimuli. To be sure, fast self-regulation is often advantageous for survival, especially in competitive or threatening situations. Evolution favors this characteristic. When immediate threats erupt, a fast response is typically more important than schematic complexity. But given limited capabilities and other inputs, agents must then simplify the self-regulatory scheme. They may need to rely on simple heuristics when fleeing from danger: for example, leave everything behind and run. Second, people can seek to optimize the complexity of self-regulatory schemes, to ensure outcomes are precise and complete, often by employing careful, calculative procedures. This type of self-regulation is important in extended, complex goal pursuit, or when potential gains and losses lie far in the future. But in consequence, agents must be content with a slower processing rate. For example, scientific research and vocational training often require complex self-regulatory schemes which take time to complete.
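One simple way to express this trade-off (an illustrative constraint, not drawn from the cited sources): if an agent can execute at most K self-regulatory steps per unit time, then the feasible combinations of rate and complexity satisfy

\[ r \cdot c \le K, \]

so raising the processing rate r forces a simpler scheme (lower c), while raising schematic complexity c forces a slower rate (lower r). The two options described above correspond to pushing against this capability limit from opposite directions.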

In fact, both scenarios stretch human agents to the limits of their capabilities, whether they seek to optimize the self-regulatory processing rate and adopt a simpler scheme, or seek to optimize schematic complexity and then self-regulate at a slower rate. Both scenarios can be very demanding. Choosing which to employ will depend on the type of agentic modality, the urgency and complexity of the task, its relation to values, goals, and commitments, its potential impact, and the agent’s self-regulatory capabilities. Additional important factors include self-efficacy, goal orientation, and temporal frame, which reflect the desire for development and future gains, and/or to maintain existing conditions and prevent short-term losses (Higgins, 1998).

Impact of Digital Augmentation

Even though humans are limited in self-regulatory capabilities, some become experts in specific task domains, and hence very skilled self-regulators. They are highly self-efficacious experts. In the contemporary world, this often entails technological assistance. To illustrate, consider contemporary clinical medicine, in which doctors work with artificial agents to diagnose and treat disease. Granted, doctors also rely on their personal experience and intuition, but human insights are increasingly complemented by artificial agents. As a result, digitally augmented medicine increases overall self-regulatory processing rates and schematic complexity. Clinical practice is more timely, precise, personalized, and efficacious. In this fashion, digital augmentation is transforming clinicians’ self-regulation. As Bandura (2012, p. 12) observes:

Revolutionary advances in electronic technologies have transformed the nature, reach, speed, and loci of human influence. People now spend much of their lives in the cyberworld. Social cognitive theory addresses the growing primacy of the symbolic environment and the expanded opportunities it affords people to exercise greater influence in how they communicate, educate themselves, carry out their work, relate to each other, and conduct their business and daily affairs.

Central to this transformation are digitalized, intra-cyclical feedforward mechanisms of self-regulation. Via these mechanisms, augmented agents will rapidly update self-regulatory schemes within processing cycles, not only between them, and often in real time. Figure 2.3 illustrates this type of feedforward process, depicting intra-cyclical updating in digitally augmented agency. Chapter 2 further explains that these mechanisms involve novel entrogenous mediators: intelligent sensory perception, performative action generation, and contextual learning. Figure 2.4 illustrates the core principles of such entrogenous mediation. In relation to self-regulation, the main mediator of this kind will be performative action generation, whereby augmented agents dynamically update action plans during performances. Feedforward is therefore an important source of self-regulation for augmented agents, complementing inter-cyclical performance feedback.
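The contrast between inter-cyclical feedback and intra-cyclical feedforward can be sketched in code (a minimal sketch; the function names and update rules are hypothetical, not drawn from the sources cited):

```python
# Minimal sketch: inter-cyclical feedback versus intra-cyclical feedforward.
# All function names and update rules are illustrative assumptions.

def feedback_regulation(plan, perform, evaluate, cycles):
    """Classic self-regulation: the plan is revised only BETWEEN cycles,
    using feedback from each completed performance."""
    for _ in range(cycles):
        outcome = perform(plan)         # execute a full cycle unchanged
        plan = evaluate(plan, outcome)  # inter-cyclical feedback update
    return plan

def feedforward_regulation(plan, steps, predict, revise):
    """Digitally augmented self-regulation: the plan is revised WITHIN a
    cycle, using predicted outcomes of the steps still to come."""
    for step in steps:
        forecast = predict(plan, step)  # anticipate rather than wait
        plan = revise(plan, forecast)   # intra-cyclical feedforward update
        step(plan)                      # perform the already-updated step
    return plan
```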

A major consequence of this shift is that self-regulation will become more prospective, forward-looking, proactive, and intelligent (Bandura, 2006). Processing rates and schemes will be subjects of self-regulation as well, adjusting in real time during the generation and performance of action. Autonomous artificial agents already function in this way, especially those which are fully self-generative and self-supervising. Moving forward, augmented self-regulatory processes will be equally intelligent and dynamic. Early evidence of this shift can already be seen in the everyday use of smartphones and digital assistants, which augment the self-regulation of human relationships, preferential choice, goal pursuits, and more.

Self-Regulatory Dilemmas

However, as artificial capabilities expand, and self-regulation becomes more complex and rapid, augmented agents will encounter new tensions and conflicts. Poorly supervised self-regulation could become dysfunctional, especially at the level of intra-cyclical entrogenous mediators. First, artificial and human agents often exhibit different processing rates. In many contexts, humans are relatively sluggish in self-regulation, processing more slowly over cultural and organizational cycles (Shipp & Jansen, 2021). By comparison, artificial agents are increasingly hyperactive, cycling quickly. When combined, these divergent rates could lead to dyssynchronous processing, meaning that different aspects of self-regulation proceed at different rates and therefore lack synchronization (see van Deursen et al., 2013). For example, in the self-regulation of problem-solving or cognitive empathizing, relatively sluggish human self-regulation of sampling and search could combine with hyperactive artificial self-regulation of the same functions. Compounding this divergence, the fast intra-cyclical mechanisms of artificial self-regulation will often be inaccessible to human consciousness, further impeding coordination. The overall result is dyssynchronous processing, with artificial systems self-regulating rapidly and humans relatively slowly. Comparable problems occur in automated control systems, and also in artificial neural networks, in which some updates lag for various reasons (Zhang et al., 2017).
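To make the rate divergence concrete, here is a small simulation sketch (the cycle periods are illustrative assumptions, not empirical estimates):

```python
# Minimal sketch of dyssynchronous processing: two collaborating agents
# complete self-regulatory cycles at very different rates. The periods
# are illustrative assumptions, not empirical values.

HUMAN_PERIOD = 60.0      # seconds per human self-regulatory cycle (assumed)
ARTIFICIAL_PERIOD = 0.5  # seconds per artificial cycle (assumed)

def count_cycles(period, horizon):
    """Full cycles completed within a time horizon, in seconds."""
    return int(horizon // period)

horizon = 3600.0  # one hour of joint activity
human = count_cycles(HUMAN_PERIOD, horizon)
artificial = count_cycles(ARTIFICIAL_PERIOD, horizon)

# Joint coordination is only possible when a human cycle completes, so
# every artificial update in between runs ahead of human monitoring.
unsupervised = artificial - human

print(f"human cycles: {human}, artificial cycles: {artificial}")
print(f"artificial updates between human checkpoints: {unsupervised}")
# Output: 60 human cycles versus 7200 artificial cycles, so 7140 artificial
# updates occur beyond human reach: dyssynchronous processing.
```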

Second, artificial and human agents exhibit different levels of schematic complexity. Human self-regulatory schemes are frequently simplified and heuristic, often for good reasons. Simple schemes facilitate effective functioning in everyday life. By comparison, artificial self-regulatory schemes are increasingly complex and expansive, supervising and regulating massive networks and processes. When these different characteristics combine in augmented agents, self-regulation may be discontinuous, meaning there are gaps between the processes operating at different layers and levels of detail. Human self-regulatory processes will tend toward simpler schemes, while artificial schemes are precise and complex. To illustrate, consider the self-regulation of augmented problem-solving once again. Human self-regulatory heuristics in problem sampling might combine with complex, algorithmic self-supervision of solution search. The outcome will be discontinuous self-regulation of problem-solving, with gaps and possible conflicts in sampling and search.
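The schematic gap can be sketched similarly (the step names are hypothetical, chosen only to illustrate the mismatch between simple and complex schemes):

```python
# Minimal sketch: schematic discontinuity as the mismatch between a simple
# human scheme and a complex artificial scheme. Step names are hypothetical.

human_scheme = {"sample problem", "pick heuristic", "act"}

artificial_scheme = {
    "sample problem", "encode features", "estimate priors",
    "enumerate candidates", "score candidates", "prune search space",
    "rank solutions", "act",
}

# Steps the artificial agent regulates with no human counterpart: the gaps
# where human monitoring cannot follow the artificial scheme.
gaps = artificial_scheme - human_scheme

# Steps the human regulates heuristically, outside the artificial scheme.
conflicts = human_scheme - artificial_scheme

print("unmonitored artificial steps:", sorted(gaps))
print("human-only heuristic steps:  ", sorted(conflicts))
# The two schemes share only a thin interface ('sample problem', 'act'),
# with gaps and possible conflicts in between: discontinuous self-regulation.
```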

In summary, depending on the quality of their collaborative supervision, augmented agents may combine relatively sluggish human self-regulatory processing with hyperactive artificial processes, resulting in highly dyssynchronous self-regulation. They may also combine simplified human self-regulatory schemes with complex artificial schemes, resulting in highly discontinuous self-regulation. Moreover, these patterns are another example of poorly supervised entrogenous mediation, and especially of performative action generation. Indeed, it is difficult to integrate the rapid, intra-cyclical feedforward updates generated by artificial processing with the slower, inter-cyclical feedback updates generated by human processes. Overall self-regulation becomes dysfunctional. And as earlier examples show, this would compound the ambiopic distortions discussed in Chaps. 4 and 5, because poor self-regulation increases the risks of extreme myopia and hyperopia in sampling and search.

Ambiactive Self-Regulation

To conceptualize this self-regulatory dilemma, I import another new term, this time from biology: “ambiactive,” which refers to processes that simultaneously stimulate and suppress a property or characteristic. For example, microbiologists use this term for processes which simultaneously stimulate and suppress aspects of gene expression (Zukowski, 2012). In this chapter, “ambiactive” refers to processes which simultaneously dampen and stimulate the same feature of self-regulation, specifically, processing rates and schematic complexity. Hence, self-regulation by augmented agents will often be ambiactive, because it simultaneously suppresses and stimulates processing rates, and/or suppresses and stimulates levels of schematic complexity, among human and artificial collaborators.

However, it must be noted that ambiactive self-regulation is not inherently dysfunctional. As with ambimodality and ambiopia, a moderate level of ambiactivity is often advantageous in dynamic contexts. This is because, when environments are uncertain and unpredictable, ambiactive self-regulation will increase the diversity of potential responses. The agentic system is less tightly integrated, in both temporal and schematic terms, making it more flexible and adaptive (e.g., Fiedler et al., 2012). For the same reasons, moderately ambiactive self-regulation helps to stimulate novelty and creativity (March, 2006). The problem is that digital augmentation greatly amplifies these effects and the potential for ambiactivity. If supervision is strong and appropriate, this will be an advantage and enable more dynamic, effective self-regulation. Otherwise, extremely dyssynchronous and discontinuous self-regulation will become more likely, and in some contexts probable.

Notably, these potential risks and benefits are like the conditions identified in earlier chapters: ambimodal agency in Chap. 3, ambiopic problem-solving in Chap. 4, ambiopic cognitive empathy in Chap. 5, and now ambiactive self-regulation in this chapter. The reader will quickly notice the common prefix “ambi,” meaning “both,” which captures the fundamental combinatorics of human-artificial augmentation. In each area of functioning, digital augmentation presents comparable opportunities and risks. There are opportunities to improve form and function and maximize metamodel fit, by adjusting ambimodal, ambiopic, and ambiactive settings. But there are also new risks of extreme divergence or convergence, if supervision is poor. Regarding self-regulation, the major risks stem from ambiactive rates and schemes.

Metamodels of Self-Regulation

Figure 6.1 summarizes the resulting metamodels of self-regulation, in terms of their hyperparameters for processing rates and schematic complexity. The first dimension distinguishes hyperactive from sluggish rates, where artificial agents tend to be hyperactive, and humans are typically sluggish by comparison. The second dimension distinguishes complex from simplified self-regulatory schemes, where artificial agents are increasingly complex, and humans tend toward simpler schemes. Given these hyperparameters, Fig. 6.1 shows four resulting metamodels of self-regulation, restated in tabular form below. Quadrant 1 shows hyperactive processing of complex self-regulation, forming an ideal, optimizing metamodel of self-regulation. Artificial agents are more likely to attempt this option, given their greater capabilities, whereas humans are unlikely to do so, owing to lesser capabilities. Quadrant 2 shows sluggish processing of complex self-regulation, which results in a scheme-maximizing metamodel, meaning it prioritizes schematic complexity over faster processing rates. Next, quadrant 3 depicts hyperactive processing of simplified self-regulatory schemes, which results in a rate-maximizing metamodel, which prioritizes faster processing rates over schematic complexity. Both types of agent are likely to attempt these maximizing options, whether acting independently or together. Finally, quadrant 4 shows sluggish processing of simplified self-regulatory schemes, which results in a practical metamodel of self-regulation, which is neither fast nor complex, but adequate for the situation at hand. Humans often exhibit this approach in everyday life.

Fig. 6.1
A matrix diagram presents 4 combinations of hyperactive and sluggish self-regulatory cycle rates with complex and simple self-regulatory schema.

Metamodels of self-regulation
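In tabular form, the four quadrants of Fig. 6.1 can be restated as follows:

                      Complex scheme             Simplified scheme
  Hyperactive rate    Q1: optimizing (ideal)     Q3: rate-maximizing
  Sluggish rate       Q2: scheme-maximizing      Q4: practical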

Not surprisingly, given limits and choices, and the need for trade-offs, theories of self-regulation focus on the maximizing options depicted in quadrants 2 and 3 of Fig. 6.1. Optimal self-regulation is a rare achievement in human activity. It is reserved for experts in specialist domains, for example, modern empirical science. Human agents are more likely to exhibit practical self-regulation in everyday situations, often as habit and routine.

Furthermore, we can map these metamodels of self-regulation to the historical patterns of agency discussed in Chap. 2. To begin with, in premodern times, when agency was replicative (see Fig. 2.1 in Chap. 2), ideal optimizing metamodels were more feasible (quadrant 1 of Fig. 6.1). This is because, given the relative stability, simplicity, and regularity of agentic life, it was possible to self-regulate in a timely, complete fashion, by the criteria of the time. Because rates were sluggish, by contemporary standards, and schemes relatively simple, optimizing self-regulation was at least feasible in such a world. Technological assistance was also minimal, meaning no more rapid or complex options were possible. Of course, not all self-regulation was, or is, optimal in a replicative metamodel of agency. Most of the time, people self-regulate using scheme-maximizing or rate-maximizing options, and especially practical self-regulation.

As modernity unfolded, the replicative metamodel gave way to enlightened, developmental ambitions. From a modern perspective, self-regulation is an adaptive process (see Fig. 2.2 in Chap. 2). Human agents should monitor and manage their own goals and choices, develop their capabilities, seek opportunities and learn, all the while becoming more self-efficacious and autonomous in self-regulation. Indeed, as noted earlier, self-regulatory challenges are central to modernity: how can autonomous individual self-regulation coexist with collective self-regulation and responsibility (Giddens, 2013; Sen, 2017)? Two notable solutions to this question are Adam Smith’s (1950) invisible hand of market self-regulation and Thomas Hobbes’ (1968) leviathan of sovereign self-regulation. Smith’s conception is more rate maximizing, as he seeks to explain market dynamism and efficiency assuming a simplified self-regulatory scheme. Hobbes’, by contrast, is more scheme maximizing, given his interest in the complex functioning of the state over time. Importantly, both conceptions eschewed divine intervention and made simplifying trade-offs.

In a digitalized world, by contrast, the extra power of augmented capabilities means that self-regulation is potentially fast and complex. In fact, optimality is again within reach, not because of relative stability and simplicity, as in the premodern period, but thanks to the speed and scale of digitalized capabilities. Mediated by entrogenous intra-cyclical mechanisms, augmented agents will be capable of composing and recomposing their self-regulatory rates and schemes, in real time, to maintain and maximize fit. Self-regulatory potential greatly expands. In this regard, digital augmentation will enable consistent self-transformation and regeneration. This contrasts with modernity, in which incremental adaptation is typical, but self-transformation is harder to attain.

However, potentiality is one thing, and actuality is another. Digitalized self-regulation will require very sophisticated supervision. Augmented agents will have to marry relatively sluggish, simpler, human self-regulation, with the increasingly hyperactive, complex self-regulation of artificial agents. The challenge of managing ambiactivity is therefore daunting and already evident. Studies show that many people are poor managers of digitalized self-regulation (Kearns & Roth, 2019). They resist, flounder, or float on a rising tide of digital innovation, unable or unwilling to take responsibility for augmented being and becoming. I will return to this question in Chap. 9, which examines the implications of digital augmentation for self-generation.

6.1 Dilemmas of Self-Regulation

Digital augmentation therefore expands self-regulatory capabilities and potentialities. By collaborating with artificial agents, humans can self-regulate more rapidly, with higher levels of schematic complexity. However, major supervisory challenges need to be resolved. First, divergent rates might lead to extremely dyssynchronous processing: sluggish human self-regulatory mechanisms, combined with hyperactive artificial rates. Second, divergent degrees of schematic complexity could lead to extreme discontinuity: simpler human self-regulatory schemes, combined with more complex artificial ones. When processes diverge in this way, digitally augmented self-regulation will become highly ambiactive and dysfunctional. Third, one agent might dominate the other, making self-regulation overly convergent and skewed toward human or artificial control. The following sections discuss these dilemmas in greater depth.

Self-Regulatory Processing Rates

As noted earlier, human self-regulation is often relatively sluggish, and for good reasons. Many situations neither benefit from nor deserve rapid self-regulation. For instance, much of everyday life moves at behavioral or cultural speed. Thinking and acting more slowly are appropriate. Slower processing is also advantageous in exploratory learning, where speed can lead to premature, less creative outcomes. The opposite is true, though, in competitive, risky situations, where fast self-regulation is often better. Human agents are therefore trained to accelerate self-regulation in some task domains, while keeping it slow in others. When such training is successful, it becomes deeply encoded as self-regulatory habit and routine. However, these procedures tend to persist, even when digitally augmented capabilities transcend prior limits, partly because humans are ill-equipped to monitor and manage this type of adjustment. Therefore, people may continue trying to accelerate self-regulation, even as artificial agents do exactly this. But such striving will be misplaced and easily overshoot. Humans will remain inherently sluggish, while encouraging artificial acceleration. The result will be dyssynchronous self-regulation.

In contrast, artificial self-regulation is inherently hyperactive, again for good reasons. As noted earlier, one of the great strengths of artificial agency is its capability for rapid self-regulatory processing. However, this becomes a potential source of tension as well, especially if artificial agents cannot accommodate relatively sluggish humans. Hence, it is necessary to moderate artificial processing rates, so they are more attuned to slower human processes. For example, consider travel by autonomous vehicles. In these contexts, artificial and human agents will collaborate as augmented agents for the shared purpose of efficient, safe, and enjoyable travel. In order to do so, agents will need to align their self-regulatory processing rates, to ensure adequate synchronization (Favaro et al., 2019). If collaborative supervision is poor, however, human processes may operate beyond the reach of artificial monitoring, or vice versa. Artificial agents may continue accelerating self-regulation, while humans remain inherently sluggish, and overall self-regulation will be even more dyssynchronous. In the case of autonomous travel, human response times could contradict or fail to coordinate with artificial controls, risking the safety and security of both vehicle and passenger.
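One way to picture such rate alignment (a hypothetical sketch; real vehicle architectures are far more sophisticated, and all rates here are assumed for illustration):

```python
# Minimal sketch of rate moderation: an artificial controller cycles fast
# internally but throttles human-facing updates to a rate the human can
# actually track. All rates are illustrative assumptions.

CONTROL_HZ = 100.0        # internal artificial control loop (assumed)
HUMAN_BANDWIDTH_HZ = 1.0  # updates per second a person can absorb (assumed)

def notify_human(batch):
    """Surface a digest of recent artificial activity to the passenger."""
    print(f"human checkpoint: {len(batch)} artificial updates summarized")

def run_loop(duration_s, sense_events):
    """Cycle the fast artificial loop; batch events between the slower,
    human-compatible synchronization points."""
    steps = int(duration_s * CONTROL_HZ)
    human_interval = int(CONTROL_HZ / HUMAN_BANDWIDTH_HZ)
    pending = []
    for step in range(steps):
        pending.extend(sense_events(step))  # fast intra-cyclical updates
        if step % human_interval == 0 and pending:
            notify_human(pending)           # synchronized, sluggish channel
            pending = []

# Example: a lane-keeping event every 0.3 seconds during a 10-second drive.
run_loop(10.0, lambda step: ["lane correction"] if step % 30 == 0 else [])
```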

Self-Regulatory Schemes

Artificial agents are equally capable of complex self-regulatory schemes. Indeed, this is another distinguishing strength of artificial agents. They can monitor and regulate many variables, across multiple levels, with great precision. By comparison, humans often adopt simpler self-regulatory schemes. They are far less capable in these respects, and rely on heuristic and imitative schemes, more suited to behavioral and cultural situations. These opposing tendencies can easily exacerbate each other too, especially if human and artificial agents are incapable of monitoring each other’s schemes and functions. Self-regulation would be simultaneously simple and complex, and hence discontinuous. Augmented agents must therefore learn how to integrate human and artificial self-regulatory schemes, especially in the entrogenous mediation of performative action generation. Often, this will entail the deliberate simplification of some artificial components, while increasing the complexity of human elements.

The example of autonomous vehicles is instructive once again. If supervision is poor, the automated system could adopt a complex self-regulatory scheme, monitoring and managing multiple parameters, and perhaps presenting too many of these to human passengers. At the same time, passengers may adopt simple, heuristic schemes, as they come to rely on the automated system. As a result, the overall self-regulation of vehicles could be discontinuous, with significant gaps emerging between the artificial and human schemes. This will increase both technical and human risks. Automotive engineers already recognize this problem and are working to resolve it. Many of these efforts also address the complementary problem of synchronization. When both problems combine, dyssynchronous and discontinuous processing will lead to ambiactive self-regulation, that is, augmented self-regulation which simultaneously dampens and stimulates processing rates and schematic complexity.

Figure 6.2 illustrates the dilemmas just described. The horizontal dimension shows sequential cycles of self-regulatory processing. Two longer cycles are labeled 1 and 2. Each is further divided into two subperiods, labeled 1.1 through 2.2. Next, the vertical dimension of the figure shows schematic complexity, ranging from low in the center to high in the upper and lower sections. The figure also depicts three levels of processing capability and associated cycles, labeled L1, L2, and L3. As in previous chapters, these levels represent agentic processing capabilities in the premodern, modern, and digitalized periods, respectively.

Fig. 6.2
A graph explains the variation in the complexity of self-regulatory schema with self-regulatory cycles. The graph has sinusoidal curves.

Dilemmas of self-regulation

Now consider the two curved lines labeled L1 and L2. First, the unbroken line labeled L1 exhibits a relatively slow processing rate and low schematic complexity. This corresponds to premodern, replicative metamodels of agency, with relatively sluggish, simplified self-regulatory schemes. Many culturally based forms of self-regulation continue to exhibit such patterns. Given these characteristics, optimal self-regulation is at least feasible in premodern and cultural contexts. Second, the dashed and dotted line labeled L2 illustrates a modern, adaptive metamodel of self-regulation, which iterates fully during each of the major cycles, and with a moderate degree of schematic complexity. Notably, the pattern depicted by L2 (the modern adaptive metamodel) is not fully synchronized or continuous with the pattern depicted by L1 (the premodern replicative metamodel). Processing rates and levels of schematic complexity both diverge, at least within the major temporal periods, because L2 cycles at twice the rate of L1. It therefore requires effortful supervision to ensure that replicative and adaptive self-regulation are adequately synchronized and continuous. Reflecting this challenge, critiques of modernity often highlight the potential for self-regulatory alienation, owing to the intrusion of technological and other external forces into the ordinary rhythms of cultural life (Ryan & Deci, 2006).

Next, the fully dashed line L3 depicts self-regulation within the digitalized, generative metamodel of augmented agency, illustrated by Fig. 2.3 in Chap. 2. As Fig. 6.2 shows, this type of self-regulation cycles more rapidly and intra-cyclically, relative to the longer cycles of L1 and L2, and with a higher level of complexity. Hence, L3 is only partially synchronized and continuous in relation to L2, and even less so in relation to L1. Partly for this reason, much of L3 is not accessible to ordinary human monitoring. There are higher risks of dyssynchronous and discontinuous processing, and hence of overly ambiactive self-regulation. In these respects, Fig. 6.2 illustrates the self-regulatory challenge for augmented agents: how to synchronize, integrate, and adapt different human and artificial self-regulatory processes?
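The relationships among the three curves can also be sketched numerically (a minimal sketch; the frequencies and amplitudes are illustrative assumptions, since the published figure is schematic):

```python
# Minimal numerical sketch of the three curves in Fig. 6.2. Frequencies and
# amplitudes are illustrative assumptions: L2 cycles at twice the rate of
# L1, and L3 cycles faster still, with greater schematic complexity.
import math

def curve(t, rate, complexity):
    """Schematic complexity oscillating over self-regulatory cycles."""
    return complexity * math.sin(2 * math.pi * rate * t)

for t in [i / 20 for i in range(21)]:         # one full L1 cycle, t in [0, 1]
    l1 = curve(t, rate=1.0, complexity=1.0)   # premodern: slow, simple
    l2 = curve(t, rate=2.0, complexity=2.0)   # modern: twice L1's rate
    l3 = curve(t, rate=8.0, complexity=3.0)   # digitalized: intra-cyclical
    print(f"t={t:.2f}  L1={l1:+.2f}  L2={l2:+.2f}  L3={l3:+.2f}")
# L3 completes several cycles within each L1 cycle, so most of its updates
# fall between the points where L1 and L2 can synchronize with it.
```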

6.2 Illustrations of Augmented Self-Regulation

This section illustrates highly ambiactive and non-ambiactive metamodels of self-regulation by augmented agents. First, recall that in relation to self-regulation, ambiactivity refers to processes which simultaneously dampen and stimulate processing rates and/or schematic complexity. In highly ambiactive systems, processes will be extremely divergent and often lack coordination. In systems with low ambiactivity, by contrast, processes will be highly convergent and suppress one agent or the other. In each following illustration, the vertical axes show the complexity of self-regulatory schemes, and the horizontal axes show the processing rates of self-regulation, both ranging from low to high. The figures also depict the limits of self-regulatory processing capability, maintaining the labeling convention of earlier chapters. Moderately assisted, modern capability is labeled L2, and digitally augmented capability is labeled L3. As in earlier figures, capabilities reach limiting asymptotes, which in the case of self-regulation represent high schematic complexity and processing rates.

Highly Ambiactive Self-Regulation

Figure 6.3 illustrates highly ambiactive metamodels of self-regulation. Each exhibits an alternative combination of schematic complexity and processing rate. To begin with, the segments labeled D2, P2, and N2 illustrate modern metamodels of self-regulation with moderately assisted capabilities at level L2, while the segments labeled D3, P3, and N3 illustrate metamodels at the higher level of digitalized capability, L3. Note that the symbols are consistent with earlier chapters, for reasons I explain below.

Fig. 6.3
A graph of the complexity of self-regulatory schema versus self-regulatory cycle rate depicts 2 decreasing curves which depict the limits of processing capability.

Highly ambiactive self-regulation

Segments D2 and D3 both depict relatively low processing rates, plus higher schematic complexity. In fact, these are examples of the scheme-maximizing scenario, depicted in quadrant 2 of Fig. 6.1. The symbol D is employed because these metamodels prioritize the descriptive complexity of self-regulatory schemes, rather than processing rates. Both metamodels are therefore ambiactive, because they increase schematic complexity while suppressing processing rates. Notably, D3 is even more ambiactive than D2, because D3 increases complexity while holding the processing rate constant. That is, the metamodel at L3 retains the prior self-regulatory processing rate at L2, even with enhanced, digitalized capabilities. This illustrates the dysfunction explained earlier, in which human and artificial agents reinforce each other’s opposing dispositions.

As an example, consider the self-regulatory schemes and processing rates of some expert professions, such as legal practice. In these contexts, patterns of action are frequently regulated and have mandated procedures and processing rates. Nevertheless, this domain is being digitally augmented, gradually shifting to level L3. Consequently, schematic complexity is increasing, but often with no major change to overall processing rates, owing to the persistence of professional regulation and institutional factors. Hence, there is an ambiactive challenge for legal professionals and firms: to ensure that self-regulation remains synchronized and continuous during the process of digitalization.

Next, N2 and N3 reference high processing rates, but low schematic complexity. The symbol N is employed because these metamodels prioritize normative rates and efficiency, rather than the complexity of self-regulatory schemes. In this case, N3 cycles even more rapidly, owing to higher capabilities at level L3. These are examples of the rate-maximizing scenario, depicted in quadrant 3 of Fig. 6.1. Both N2 and N3 are therefore ambiactive, because they suppress self-regulatory complexity while increasing the processing rate. Moreover, N3 is more ambiactive than N2, because it increases the processing rate significantly while not increasing the level of complexity. Prior self-regulatory schemes persist. As an example of N3, consider the self-regulatory schemes required of students in digitalized examinations. Processing rates may rapidly increase, allowing for real-time testing, evaluation, and feedback. At the same time, however, schematic complexity may be unchanged, owing to the nature of what is being examined and students’ natural capabilities. For example, students may still be asked to reason and write about the same problems. This poses a challenge for digitalized education and training: to ensure that self-regulation remains synchronized and continuous during the digitalization of evaluation. The overall goal is to maximize metamodel fit, best suited to the context.

Combined Ambiactive Metamodels

Considered together, the metamodels in Fig. 6.3 constitute self-regulatory dualisms. First, D2 and N2 illustrate the ambiactive self-regulation which is typical of modernity, assuming moderate technological assistance. D2 represents humanistic self-regulation of personal, social, and cultural domains, which is relatively holistic, heuristic, and sluggish, while N2 represents the self-regulation of mechanized, industrialized domains, which are more focused, automated, and rapid. In summary, therefore, effective self-regulation in a modern context often requires agents to combine D2 and N2. They must be capable of integrating the detailed human thought and action depicted by D2, as well as the automated domains depicted by N2, for example, self-regulating both behavioral and normative patterns of choice in social and economic life. In organized collectives, this implies a type of ambidextrous capability, meaning agents can adopt and exercise different agentic metamodels at the same time and, specifically, exploratory risk-taking along with exploitative risk aversion (O’Reilly & Tushman, 2013). However, as Fig. 6.3 suggests, ambidexterity is challenging and coordination is difficult to achieve.

Second, D3 and N3 illustrate highly ambiactive self-regulation in digitalized contexts. Together they form an extreme type of dualism. D3 represents highly ambiactive self-regulation of digitalized human domains. In this type of self-regulation, schemes will be overly complex, owing to digital augmentation, but persistently sluggish, owing to human factors. N3, meanwhile, represents highly ambiactive self-regulation of digitalized, technical domains, in which artificial processing rates are increasingly rapid, but schemes are persistently simplified. The overall consequence is dualistic, ambiactive self-regulation, combining extremes of artificial complexity and human simplification, with artificial hyperactivity and human sluggishness. As noted in Chap. 3, many contemporary organizations are struggling with this problem owing to rapid digitalization (Lanzolla et al., 2020).

The third pair of segments in Fig. 6.3, labeled P2 and P3, are equivalent. They both refer to relatively low rates of processing, plus low levels of self-regulatory complexity. These are examples of practical self-regulation, depicted in quadrant 4 of Fig. 6.1. Such metamodels will be non-ambiactive overall, because they simultaneously suppress both self-regulatory complexity and processing rates. However, the scope of practical self-regulation does not increase, despite the extra capabilities at L3. The segment P3 does not expand but remains bounded by the commitments and procedures of P2. This means that augmented processes remain anchored in human priors. As an example, consider the self-regulatory schemes of everyday habit and routine. Self-regulation in these domains could remain almost unchanged, even as humans collaborate with artificial agents. Anchoring commitments at level L2 persist and might escalate at level L3. Such persistence prevents the expansion of P3, and everyday habit and routine remain the same, although this response could be appropriate and effective, depending on the context (see Geiger et al., 2021).

Non-ambiactive Self-Regulation

It is equally possible that augmented self-regulation will be non-ambiactive, that is, relatively synchronous with respect to processing rates, and continuous regarding schematic complexity. Figure 6.4 depicts non-ambiactive metamodels of this kind, labeled D4, P4, and N4. They reference combinations of complexity and rate at a higher level of digitalized capability, L4, labeled thus to distinguish it from L3 in the preceding figure. The new Fig. 6.4 also includes the same modern metamodels as the preceding figure, D2, P2, and N2, which do not require repeated description.

Fig. 6.4
A graph of the complexity of self-regulatory schema versus self-regulatory cycle rate depicts 2 decreasing curves which depict the limits of processing capability.

Non-ambiactive self-regulation

The most notable feature of the metamodels represented by D4, P4, and N4 is that they are all equivalent. In stark contrast to Fig. 6.3, these metamodels fully overlap. This means that self-regulation has been completely digitalized. The distinctions between human and artificial functioning have been erased. Therefore, the human commitments which anchored self-regulation at level L2, and which drove high ambiactivity in Fig. 6.3, are now fully relaxed and variable. There is no significant divergence in self-regulatory processing rates and levels of schematic complexity between the metamodels D4, P4, and N4. They are non-ambiactive. As an example, consider the following scenario of autonomous vehicles. It is possible that human needs and interests will be fully known to the system, and artificial agency will be fully humanized and empathic. Likewise, artificial processes will be made fully clear and meaningful to the human passenger. Hence, overall self-regulation will be highly synchronous and continuous, ensuring safety and efficiency. In fact, this is exactly what automotive engineers aim to achieve, even going further to integrate autonomous transport systems with personal experience in the home, office, and community (Chen & Barnes, 2014; Zhang et al., 2018).

Now assume that the metamodels in Fig. 6.4 are the properties of an augmented agent, as in the autonomous vehicle scenario. This conflation also poses major risks. Most importantly, the complete digital augmentation of self-regulation could eliminate aspects of human autonomy and diversity. This may be appropriate in some very technical environments, where ordinary intuition could be dysfunctional—such as the control of autonomous vehicles—but not in other domains. For example, consider the role of self-regulation in many social, creative, and innovative domains. In these contexts, self-regulation benefits from the diversity of human behavior and commitments. Indeed, techniques for creativity and innovation deliberately upregulate such factors, encouraging team members to self-regulate differently from each other, some being fast, others slow, or analytical versus intuitive. If such diversity is lost, then valuable aspects of human experience will be lost as well. Augmented agents must therefore learn to be empathic and know when and how to incorporate purely human self-regulatory processes, as well as purely artificial processes, to avoid over-synchronization and over-integration, for example, admitting some intuitive human self-regulatory processes into the control of autonomous vehicles (Favaro et al., 2019). However, this will pose a further challenge for collective self-regulation and oversight. Societies will have to determine the level of acceptable risk posed by human involvement in collaborative supervision, monitoring overly convergent and divergent approaches.

6.3 Wider Implications

Throughout modernity, scholars have rightly assumed that human freedom and potentiality are enhanced by strengthening self-regulatory capabilities, often through the introduction of technological innovations. Digitalization promises to enhance these effects. Major gains are certainly possible. By collaborating with artificial agents, humans may enjoy greater self-regulatory freedom and control. Contemporary digital innovations, such as smartphones, wearable devices, and expert systems, are only the beginning in this regard. However, as this chapter explains, digitalization problematizes this optimistic prediction, because the opposite scenario is now equally possible. In fact, if digitally augmented self-regulation is poorly supervised, it could reduce human freedom and potentiality. This will happen if artificial processes become too complex and too fast, thereby overwhelming human inputs. Other losses will occur if persistent human processes are too sluggish and simplified. Both scenarios will tend toward very dyssynchronous and discontinuous processing, resulting in highly ambiactive self-regulation. The risks are clear and already topics of research (e.g., Camerer, 2017; Helbing et al., 2019). Other questions also warrant further study, as the following sections explain.

Engagement and Responsibility

People experience engagement and a sense of value if their means of self-regulation align with goals and outcome orientation (Higgins, 2006). When seeking to ensure safety and prevent losses, people should use vigilant avoidance means, whereas when hoping for positive gains, they should employ eager approach means; and the stronger the alignment of means and goals, the stronger the engagement and experience of value. Task engagement also depends on the experience of effortful striving, on a sense of overcoming external obstacles and one’s internal resistance. Humans derive satisfaction and self-efficacy from such accomplishments (Bandura, 1997). Indeed, as earlier chapters explain, a central feature of modernity has been human striving to overcome obstacles and limitations. However, digitalization significantly reduces some traditional obstacles and sources of resistance. People will experience fewer struggles, compared to the past, or at least distinctly different challenges. Ironically, therefore, digitalization could result in less engagement and satisfaction from self-regulated goal pursuit.

Moreover, these changes are occurring rapidly, within relatively short cycles, and certainly within human generations. People experience the pace of change more intensely, with each cycle of digital innovation being more rapid and impactful than the last. Indeed, some aspects of augmented experience are already dyssynchronous and discontinuous. In such digitalized domains, the locus of self-regulatory control is shifting, from human to artificial agents. Artificial agents are taking more responsibility for self-regulatory persistence and outcomes. Similarly, human agents will exert less control over the entrogenous mediators of augmented agency: intelligent sensory perception, performative action generation, and contextual learning. These mediators will be central to augmented agency, yet less accessible to human consciousness and self-regulation.

As these effects become more pronounced, it could be more difficult for people to sense self-efficacy and meaning over time. The risk is that human beings will feel less engaged, less autonomous, and ultimately less fulfilled, even as efficiency and efficacy increase. Moreover, as Bandura (2016) argues, when agentic responsibility is diffused and distant, individuals and communities become disengaged from each other, and they lose a sense of moral obligation and responsibility. The locus of ethical agency shifts away from the self, spread out across the network or buried in an algorithm (Nath & Sahu, 2017). It becomes too easy, even normal, to avoid responsibility, passing it off to artificial intelligence or the system. Illustrating this effect, concern is growing that highly automated warfare will dull human sensitivity to its ethical and human implications (Hasselberger, 2019). Such effects will have major consequences for civility and good governance, not to mention international relations. It will be important, therefore, to maintain a strong sense of human striving and commitment in augmented self-regulation, and thus a sense of personal engagement and moral responsibility. As noted earlier, societies will have to determine the level of acceptable risk posed by human involvement and exclusion, in collaborative self-regulation.

Procedural Action

Additional implications follow for procedural action. In Chap. 3, I proposed a way to resolve nagging questions about the aggregation of procedural action, as a mediator of collective routine and modality. The solution requires that we treat human agents as complex, open, and adaptive systems, which respond to variable contexts. From this perspective, humans experience the downregulation of individual differences, in the recurrent, predictable pursuit of shared goals. At the same time, they experience the upregulation of shared norms and control procedures. In this way, it is possible to explain the origin and functioning of individual habit and collective routine, without aggregating personalities and individual differences. Importantly, this implies the downregulation and upregulation of self-regulatory plans and competencies as well. When routines form, personal self-regulatory orientations are effectively latent, and people adopt the shared goals and orientations of the collective, at least within routine contexts (Wood & Rünger, 2016). When routine procedures need adjustment, therefore, self-regulatory processes will require upregulation or downregulation, and perhaps deletion or creation. Via such means, augmented agents will supervise the rate and complexity of self-regulatory processing, to maximize metamodel fit.

However, if supervision falters, and self-regulation is overly ambiactive or non-ambiactive, the management of collective routine will quickly go awry. Self-regulation could become highly dyssynchronous and discontinuous, meaning the augmented agent is fast and complex in some respects, but slow and simple in other ways. These distortions will complicate the development and adaptation of procedural routine. If human action remains sluggish and simplified, while artificial self-regulation becomes fast and complex, the resulting routine will be dyssynchronous, discontinuous, and potentially dysfunctional, whereas fully non-ambiactive self-regulation will exacerbate human docility and dependence, because human self-regulation will tend to downregulate. In both scenarios, collective routine encodes ambiactive distortion and dysfunction. Moreover, by doing so, it will also exacerbate ambimodal distortion. That is because the collective agent will be highly compressed in some respects, but layered and hierarchical in other ways. This follows because, as Chap. 3 explains, collective modality relies on routine, and the ambiactive distortion of routine flows through to cause collective ambimodality. Examples already exist in organizations which attempt digital transformation. They introduce highly dyssynchronous and discontinuous procedures powered by artificial intelligence, but in doing so, trigger stress and conflict with preexisting relationships and hierarchies.

The Regulating Self

Other implications follow at the individual level. To begin with, augmented self-regulation could lead to a false sense of autonomous self-efficacy. People might attribute too much to themselves by mistaking artificial capabilities for their own. They would experience a version of what Daniel Wegner (2002) called “the illusion of conscious will,” in which consciousness follows, rather than precedes, the neurological triggering of thought and action. But now the illusion of autonomy and control will follow, rather than precede, digitalized triggering which humans neither perceive nor understand. Indeed, as noted previously, the rapid, entrogenous mediation of augmented self-regulation will be largely inaccessible to ordinary consciousness. People could easily experience a digitalized illusion of conscious will. In fact, some powerful actors already understand this trend and see augmented self-regulation as a new means of social manipulation and control, by engineering an illusory sense of self-regulation.

Digitally augmented self-regulation therefore signals a potential shift in agentic locus. In fact, just as autonomous self-regulation was problematic relative to the gods of premodernity, and then problematic relative to collectivity during modernity, so autonomous self-regulation will be problematic relative to artificial agency in the period of digitalization. As artificial agents grow in power and become more deeply integrated into all areas of human experience, the primary locus of self-regulation may shift toward artificial agency and away from human sources. Whether by design or default, humans could become increasingly dependent on artificial forms of supervision and regulation. This prompts additional questions regarding the future role of human intuition, instinct, and commitment in self-regulation. In fact, such questions are not new. They often arise when considering the limits of self-regulatory capability in a social world. In contexts of digitalization, the same topics become pressing for a different reason. Uniquely human sources of self-regulation, such as intuition, instinct, and commitment will require deliberate preservation, to prevent artificial agents from becoming too intrusive and dominant.

That said, most people will enjoy the self-regulatory benefits of digital augmentation, but many will not realize the price they pay. A self-reinforcing process of diminishing autonomy could occur. The result would be digital docility and dependence (Pfeifer & Verschure, 2018). For Bandura et al. (2003), this raises the question of which type of agent—human, artificial, or both—will be truly efficacious and self-regulating in digitalized domains. Likewise, which agent will set goals and regulate attention, even if persons experience an internal locus of control? And for Higgins and his collaborators (1999), which type of agent will guide self-regulatory orientation, whether toward achieving positive gains or avoiding negative losses? And if digitalization reduces self-regulatory obstacles and resistance, will augmented agency weaken the human sense of task engagement and value experience? The supervision of digitally augmented self-regulation poses urgent questions for theory and practice.