Modeling Augmented Humanity

This multidisciplinary work analyzes the impact of digitalization on civilized humanity, conceived in terms of purposive, goal-directed agency. More particularly, it examines the close collaboration of human and artificial agents as augmented agents, viewing them both as complex, open adaptive systems, which vary dynamically in context. This first chapter explains general metamodels of agentic form and function, where metamodels are defined as families or related sets of models. From this perspective, augmented agency presents a new class of agentic metamodel, for individuals, groups, and collectives. At the same time, however, new risks and dilemmas emerge, which reflect the challenge of combining and supervising different human and artificial capabilities and potentialities. Problems of this kind are a recurrent topic throughout the book.

scarce for many, severely restricting their potential to develop and flourish. At the same time, capability and potentiality are unevenly distributed. Humans are limited by nature and nurture, especially regarding the capabilities required for intelligent thought and action. People gather and process information imperfectly, often in myopic or biased ways, then reason and act poorly, falling short of preferred outcomes and failing to learn. Not surprisingly, therefore, it takes time and effort to grow capabilities and potentialities, especially for purposive, goal-directed action. It is the work of a lifetime, to be fulfilled as an autonomous, intelligent, and efficacious human being. And the work of history, to achieve such fulfillment on a social scale.

Historical Patterns
Notwithstanding these challenges, capabilities and potentialities develop over time, owing to improved nurture, resources, opportunities, and learning. Major drivers also include social and technological innovation (Lenski, 2015). In fact, since the earliest periods of civilization, humans have crafted tools to complement their natural capabilities. They also pondered the stars and seasons and developed explanatory models which made sense of the world and life within it, where models, in this context, are defined as simplified representations of states or processes, showing their core components and relations (see Johnson-Laird, 2010; Simon, 1979). Granted, in the premodern period, models of the world and being human often relied on myth and superstition, but they captured broad patterns nonetheless and codified the rhythms of nature and fortunes of fate. Where, following others, I define premodern as before the modern period of European Enlightenment and industrialization (e.g., Crone, 2015; Smith, 2008). And importantly, the technological assistance of humanity began in premodernity, albeit in a primitive fashion. Over time, capabilities and technologies continued to evolve and diffuse. Despite episodic disruption and setbacks, civilized humanity has developed, typically in a path-dependent fashion (Castaldi & Dosi, 2006). For Western civilization, this path traces back to ancient Greece and Rome, which in turn drew deeply from earlier, Eastern civilizations. Their cumulative legacy survives today, in many of the languages, concepts, and models which still enrich culture and thought.
Ancient learning enjoyed a renaissance in parts of the Mediterranean world during the fifteenth century CE. Artists, scholars, and architects drew insight and inspiration from the ancients. Another important inflection point was the European Enlightenment of the seventeenth and eighteenth centuries. Over time, intelligent capabilities grew, initially among privileged members of society. After much historic struggle and social change, these capabilities diffused and deepened, to become the shared endowment of modernity. Here again, technological innovation was crucial. From the first telescopes and microscopes to the printing press and early adding machines, then to the steam, electronic, and computer ages, technological innovation has expanded agentic capability and potentiality. Adam Smith (1950, p. 17) noted this type of impact when he wrote of "the invention of a great number of machines which facilitate and abridge labour, and enable one man to do the work of many." In parallel, new psychological and social models emerged, which assume that human beings have the potential to learn and develop as intelligent agents (Pinker, 2018). The modern challenge thus became how to grow agentic capabilities and potentialities, so that more persons can enjoy these benefits and flourish. Political and cultural struggles also ensued, as groups fought to control the future and either to defend or to dismantle the vestiges of premodernity.
While the preceding historical account is reasonably grounded, it clearly simplifies. Almost by definition, periods of civilization span widely in time and culture. Any detailed history will be notoriously complex and irregular. There are few consistent patterns, and even those which can be observed should be treated as contingent (Geertz, 2001). For the same reasons, totalizing conceptions often over-simplify. As Bruno Latour (2017) explains, modern concepts of the globe and humanity itself assume unified categories which obscure fundamental distinctions. Hence, we must ask, is it possible to identify patterns of civilized humanity over time? Previous attempts have often been misguided and lacked validity. Most failed transparently, because they sought to generalize from one or other historical context, and then extrapolated from temporal contingency to universality. Arguably, the model of history offered by Karl Marx exhibits this flaw. Noting this common failing, it can be argued that all knowledge of such phenomena is contextual. Few, if any, patterns transcend historical contingency. To assume otherwise could be misleading and potentially dangerous, especially if it supports ideologies which deny the inherent diversity of human aspiration and experience. Nevertheless, if we respect caution and openly acknowledge contextual contingency, it is still possible to generalize, at least at a high level.
Given these caveats, scholars observe that civilized humanity exhibits broad patterns of behavior and striving over successive historical periods (Bandura, 2007). Many of these patterns are anthropological and ecological, rather than historical, in a detailed narrative sense. Evidence shows that civilized humanity has always been purposive and self-generating, creative and inventive, hierarchical and communal, settled as well as exploratory, competitive and cooperative. In short, civilized humanity is deeply agentic. Granted, these patterns are broad, but they are consistent, nonetheless. Scholars in numerous fields recognize them (e.g., Braudel & Mayne, 1995; Markus & Kitayama, 2010; Wilson, 2012). In any case, like all theoretical modeling, it is necessary to simplify, to focus on the main topics of interest. All models and theories must be selective. Debate is then about what to select and simplify, how, and why. Whether such models are illuminating and explanatory is determined by application and testing. Science always progresses in this fashion. The current work will focus on the broad effects of digital augmentation on humanity, viewed as cultural communities of purposive agents.

Dilemmas of Technological Assistance
Technologies assist and complement human capabilities, compensating for weaknesses and helping to overcome limits. More specifically, technological assistance addresses the following needs. First, humans are limited by their physiological dependencies, whereas technologies can function independently of such constraints, for example, by operating in extreme, hostile environments. Second, humans are frequently proximal and nearsighted, whereas technologies are distal and farsighted.
Technologies therefore extend the range and scope of functioning, as when telescopes gather information from distant galaxies. Third, humans are often relatively slow and sluggish, compared to technologies which can be fast and hyperactive. Technologies therefore accelerate functioning. Fourth, humans are frequently insensitive to variance and detail, whereas technologies can be very precise and hypersensitive. In this fashion, technologies improve the accuracy and detail of functioning. Fifth, humans are irregular bundles of sensory, emotional, and cognitive functions, whereas most technologies are highly focused and coordinated. Hence, technologies enhance the reliability and accuracy of specific functions, for example, in robot-controlled manufacturing. And sixth, humans are distinguished as separate persons, groups, and collectives, while technologies can be tightly compressed, without significant boundaries or layers between them. Technologies thereby enhance functional coordination and control, exemplified by automated warehouses and factories.
All six types of extension reflect the fundamental combinatorics of technologically assisted humanity, that is, the combination of human and technological capabilities in agentic functioning. Over longer periods, history exhibits a process of punctuated equilibrium in these respects. During these punctuations, the technological assistance of agency achieves significantly greater scale, speed, and sophistication. Not surprisingly, transformations of this kind are consistent foci of study (Spar, 2020). For instance, studies investigate how modern mechanization combines technologies and humans in social and economic activity (Leonardi & Barley, 2010). Early sites were in cotton mills and steam-powered railways. Other technologies infused social and domestic life, combining humans and machines in systems of communication and entertainment. More recently, human-machine combinatorics reach into everyday thought and action, through smartphones, digital assistants, and the ubiquitous internet. Once again, the technological assistance of agency is transitioning to a new level of capability and potentiality. Major benefits include far greater productivity and connectivity.
Digitalization therefore continues the historic narrative of modernity, for good and ill, where digitalization is defined as the transformation of goal-directed processes which lead to action, that is, the transformation of agentic processes, through the application of digital technologies (Bandura, 2006). Thus defined, digitalization embraces a wide range of digital technologies and affects a wide range of agentic modalities and functional domains. Most notably, advanced digital technologies enable close collaboration between human and artificial agents as digitally augmented agents, also known as human-agent systems in computer science. New challenges thus emerge. On the one hand, advanced artificial agents are increasingly farsighted, fast, compressed, and sensitive to variation. On the other hand, humans are comparatively nearsighted, sluggish, layered, and insensitive to variance. Clearly, both agents possess complementary but different capabilities, and combining them will not be easy.
Digitalization therefore entails new opportunities, risks, and dilemmas for human-machine collaboration. One possible scenario is that artificial agents will overwhelm human beings and dominate their collaboration. The overall system would be convergent in artificial terms. Alternatively, persistent human myopia and bias could infect artificial agents, and digitalization would then amplify human limitations. Now the system would be convergent in human terms. In other situations, both types of agent may lack appropriate supervision and go to divergent extremes, where supervision in this context means to observe and monitor, then direct a process or action. In fact, we already see evidence of each type of distortion. Digitalization therefore constitutes a historic shift in agentic capability, potentiality, and risk. As in earlier periods of technological transformation, humanity will need new methods to supervise human-machine collaboration in a digitally augmented world. Analysis of these developments is a major purpose of this book. For this reason, it is also important to distinguish the types of agency which are central to the argument, noting that all capabilities reach asymptotes of maximum complexity for human and technological functioning. Figure 1.1 depicts a minor gap between supervisory capabilities L1 and L2, meaning that technological assistance at L2 does not add much to the baseline at L1. The figure then depicts two systems of functioning. The first is defined by human functioning HA and technological functioning TA. Technological complexity is less in this case, while human functioning is more complex. For example, it could be a deliberate, intentional type of action which is modestly supported by technology, such as writing a letter using a pen.
Assuming L1 as the natural human baseline, an agent requires a small increase in supervisory capabilities to complete this activity, and hence capabilities at level L2 are sufficient. Put simply, pens are simple tools, even if the written thoughts are complex. The second system is defined by human functioning HB and technological functioning TB. Now technological functioning is more complex, such as routine procedures which rely heavily on technologies, for example, riding in a carriage. Indeed, most people easily ride as passengers in carriages, although the carriage itself requires active supervision to maintain and control it. Once again, agents require modest supervision to complete this activity, and hence capabilities at level L2 are sufficient. In addition, Fig. 1.1 shows two segments labeled A and B, which are the functions beyond baseline supervisory capability L1. Both segments are relatively small, owing to the modest increase in supervisory complexity between L1 and L2. Put simply, it is relatively easy to supervise the use of pens and riding in carriages. In fact, many premodern activity systems were like this, owing to the relative simplicity of their technologies.
Next, Fig. 1.2 depicts a major gap between limits L1 and L3, meaning that technological assistance at L3 adds significantly to the baseline at L1, especially if L3 includes digital technologies. The model again depicts two systems of functioning. The first is defined by more complex human functioning HC and less complex technological functioning TC. For example, it could be a deliberate, intentional form of action which is supported by digital technology. Perhaps the writer now uses a word processor to compose a news article. The tool may be fairly easy to use, while the thoughts are intellectually complex. Hence, the activity requires a greater level of overall supervisory capability at level L3. Furthermore, segment C in the model in Fig. 1.2 is much larger than segment A in Fig. 1.1. This means that more functionality lies beyond baseline capabilities, and the overall system requires more sophisticated supervision, as is true for writing with a computer, compared to a pen.
The second system in the model in Fig. 1.2 is defined by less complex human functioning HD and more complex technological functioning TD. For example, it could be a routine activity which is automated by advanced digital technology. Perhaps the passenger now rides in an autonomous vehicle, rather than sitting in a carriage. In fact, we could map the spectrum of mobility systems along the horizontal axis, from less complex systems to the most advanced artificial agents. In all cases, the overall activity system requires greater supervisory capabilities at L3. In addition, segment D is much larger than segment B in Fig. 1.1. Far more functionality lies beyond baseline capabilities. The supervisory challenges are high. In terms of the example just given, it requires a significant advance in capabilities to supervise human engagement with autonomous vehicles.
Given these examples, Fig. 1.2 illustrates the supervisory challenge in highly digitalized contexts. Segments C and D show the scale of the challenge, as significant functions are beyond baseline supervisory capabilities. These segments also illustrate the functional losses which may occur if supervisory capabilities fall below level L3. Put simply, inadequate supervision will lead to poorly coordinated action. For example, technological processes might outrun and overtake human inputs, and thereby relegate humans to a minor role in some activity. Automated vehicles could override or ignore human wishes. Alternatively, human processes may import myopias and biases, and artificial agents then reinforce and amplify human limitations. Perfectly written news articles can be racially biased and discriminatory. In both scenarios, poor supervision skews collaborative processing and leads to functional losses. Strong collaborative supervision will therefore be required, involving human and artificial agents, to ensure that both types of agent work effectively together with mutual empathy and trust.
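Although the figures are qualitative, the relation they depict can be summarized in a simple formal sketch. The notation below is my own illustrative gloss on the figures, not part of the original model:

```latex
% s_X   : size of segment X (A, B, C, or D) in Figs. 1.1 and 1.2
% L_req : supervisory capability demanded by the combined human-technology system
% L_1   : baseline (natural human) supervisory capability
\[
  s_X = L_{\mathrm{req}}(H_X, T_X) - L_1,
  \qquad
  \text{functional losses arise when available capability } L < L_{\mathrm{req}}.
\]
% Premodern systems (segments A, B): L_req is roughly L_2, so s_X is small.
% Digitalized systems (segments C, D): L_req rises to L_3, so s_X is large.
```

On this reading, the supervisory challenge of digitalization is simply that the gap between required and baseline capability grows much faster than in earlier technological eras.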

Period of Digitalization
Digitalization therefore continues the historical narrative of technologically assisted human agency. Moreover, advanced digital systems are intelligent, self-generative agents in their own right (Norvig & Russell, 2010). Where self-generation in this context means to produce or reproduce oneself without external guidance and support. Like human beings, artificial agents are situated in the world, sensory, perceptive, calculating, and self-regulating in goal pursuit. Artificial agents also gather and process information to identify and solve problems, thereby generating knowledge and action plans. Also like humans, artificial agents are autonomous to variable degrees. In fact, the most advanced artificial agents are fully self-generating and self-supervising, meaning they generate and supervise themselves without external guidance or support. Finally, artificial and human agents are equally connected in collaborative relationships and networks.
Given these developments, human and artificial agents increasingly collaborate with each other as augmented agents. Digitalization connects them, and their combinatorics are deepening. Human and artificial agents are becoming jointly agentic, at behavioral, organizational, and even neurological levels (Kozma et al., 2018; Murray et al., 2020). So much so, that artificial and human agents will soon be indistinguishable in significant ways, approaching what some refer to as the singularity of human and artificial intelligence (Eden et al., 2015). If well supervised, the collaboration is reciprocal and productive: artificial agents digitalize collaborative functioning, and human agents civilize their joint functioning (Yuste et al., 2017). In these respects, digitalization penetrates far deeper into human experience, compared to earlier phases of technological innovation. As Bandura (2006, p. 175) writes about the digital revolution, "These transformative changes are placing a premium on the exercise of agency to shape personal destinies and the national life of societies." More specifically, digitalization is augmenting the sensory-perceptive, cognitive-affective, behavioral-performative, and evaluative-adaptive processes, which mediate human agency and personality (Mischel, 2004). In fact, artificial agents are being developed which imitate these features of human functioning. Enabling technologies will include artificial neural networks, quantum and cognitive computing, wearable computers, brain-machine engineering, intelligent sensors, and robotics. Smart digital assistants will also proliferate, reaching beyond smartphones to a wide range of digitally augmented interactions. These agents will deploy additional innovations, such as artificial personality and empathy (Kozma et al., 2018). Powered by such technologies, augmented agents will learn and act in far more expansive and effective ways.
Consequently, a new type of agentic modality is emerging from digitalized human-machine collaboration.
Furthermore, assisted by digital technologies, people can more rapidly shift their attentional and calculative resources, updating memory, cognitive schema, and models of reasoning. In these respects, digital augmentation disrupts some traditional beliefs about the natural and human worlds. Particularly, given the massive growth of artificial intelligence and machine learning, the classic distinction between conscious mind and material nature appears unsustainable, as artificial agents become functionally sentient and empathic. Similarly, human collaboration with artificial personalities will challenge assumptions about privacy and the opacity of the self, because augmented agents will interpret and imitate empathy and other expressions of personality (Bandura, 2015). Therefore, a number of widely assumed distinctions appear increasingly contingent, and better viewed as options along a continuum, rather than as invariant categories (Klein et al., 2020). As Herbert Simon (1996), one of the founders of modern computer science and behavioral theory, observed, scientific insight often transforms assumed states into dynamic processes. In this case, insight transforms assumed material and conscious states into dynamic processes.
At the same time, there are grounds for concern and caution. To begin with, digitalization might enable new forms of oppression, superstition, and discrimination. Indeed, we already see evidence of these negative effects. For example, some institutional actors leverage digital technologies to dominate and oppress populations, for ideological, political, or commercial gain (Levy, 2018). Others use digital systems to restrict and distort information, spreading deliberate falsehood, superstition, and bias, again to serve self-interest. In addition, digitalization could be used to prolong the unsustainable, overexploitation of the natural world. Its benefits may also be unfairly distributed, privileging those who already possess capabilities and resources. Digitalization would then reinforce meritocratic privilege and undermine commitment to the common good (see Sandel, 2020). If this happens, the "digital divide" will continue to widen, exacerbating inequality across a range of social indicators, from mobility to education, health, political influence, and income. This book examines some of the underlying mechanisms which drive these effects.

Adaptive Challenges
Technological transitions of this scale are often fraught. They demand changes to fundamental beliefs and behaviors which are firmly encoded in culture and collective mind. Amending them is not easy. Nor should it be. Such beliefs and behaviors are typically contested and tested before encoding occurs, and the results are worthy of respect. Adding to the overall resilience of these systems, mental plasticity often declines with age, and most people adapt more slowly over time. Youthful curiosity and questioning give way to adult certainty and habit. Older institutions and organizations exhibit comparable tendencies. Although, here too, sluggish adaptation is sometimes advantageous. It may preserve evolutionary fitness in the face of temporary perturbation. In fact, without adequately stable ecologies, populations, and behaviors, biological and social order would neither evolve nor persist (Mayr, 2002). For this reason, incessant adaptation can be self-defeating or an early sign of impending ecological collapse.
Digital augmentation simultaneously compounds and disrupts this dynamic. Compounding occurs, because digital augmentation might lead to excessive adaptation and the unintended erosion of ecological stability and agentic fitness. In fact, without adequate supervision and constraint, psychosocial coherence could be at risk. At the same time, the sheer speed and power of these technologies can be disruptive. Many human systems are not designed for rapid change and might fracture under pressure. Furthermore, even if digitalization improves adaptive fitness, in doing so, it might shift the locus of control away from human agents, toward artificial sources. Hence, as artificial agents become more capable and ubiquitous, humanity must learn how to supervise its participation in augmented agency, while artificial agents must learn to incorporate human values, interests, and commitments, where commitment, in this context, means being dedicated, feeling obligated and bound to some value, belief, or pattern of action (Sen, 1985). Put simply, human agents need to digitalize, and artificial agents need to humanize. Many benefits are possible if digital augmentation enriches agentic capability and potentiality. If poorly supervised, however, artificial and human agents might diverge and conflict, even as they seek to collaborate. Or one agent may dominate the other and they will overly converge. Augmented humanity needs to understand and manage the resulting dilemmas.

New Problematics
In fact, digital augmentation problematizes modern assumptions about human capability and potentiality, where problematization is defined as raising new questions about the fundamental concepts, beliefs, and models of a field of enquiry (Alvesson & Sandberg, 2011). Thus defined, problematization looks beyond the refinement of existing theory. It is more than critique. It questions deeply held assumptions and invites the reformation of enquiry. For the same reason, problematization does not entail a detailed review of all prior work. Rather, we need to identify key concepts, assumptions, and models and then apply fresh thinking, all the while, reflecting on the novel phenomena and puzzles which prompt this process. My argument adopts such an approach. It problematizes modernity's core assumptions about human agentic capability and potentiality and examines the emerging problematics of digitally augmented humanity.
In short, modernity assumes that human agents are capable but limited, and need to overcome numerous constraints, to develop and flourish. As the preface to this work also states, modernity therefore focuses on the following questions: how can human agents collaborate with each other, in collective thought and action, while developing as autonomous persons; how can humans absorb change, while preserving value commitments and psychosocial coherence; how can societies develop stronger institutions and organizations, while avoiding the risks of excessive docility, determinism, and domination; how can humanity ensure fair access to the benefits of modernity and not allow growth to perpetuate discrimination, deprivation, and injustice; and finally, a defining challenge of modernity: to what degree can and should human beings overcome their natural limits, to be more fully rational, empathic, and fulfilled (Giddens, 2013). As Kant (1964, p. 131) wrote regarding moral imperatives, we strive "to comprehend the limits of comprehensibility." Continuing this tradition, contemporary scholars investigate the limits of human understanding and how to transcend them, hoping to increase agentic capability and potential while balancing individual and collective priorities.
However, owing to digitalization, capabilities are expanding rapidly. Humans are potentially less limited, in many respects. Digitalization therefore leads us to problematize the modern assumption that human agency is inherently limited. New questions and problematics emerge instead. I also list these in the preface and repeat them here: how can human beings collaborate closely with artificial agents, while remaining genuinely autonomous in reasoning, belief, and choice; relatedly, how can humans integrate digital augmentation into their subjective and inter-subjective lives, while preserving personal identities, commitments, and psychosocial coherence; how can digitally augmented institutions and organizations, conceived as collective agents, fully exploit artificial capabilities, while avoiding extremes of digitalized docility, dependence, and determinism; how can humanity ensure fair access to the benefits of digital augmentation and not allow them to perpetuate systemic discrimination, deprivation, and injustice; and finally, the most novel and controversial challenge, which is how will human and artificial agents learn to understand, trust, and respect each other, despite their different levels of capability and potentiality. This is controversial because it implies that artificial agents will exhibit autonomous judgment and empathy. It assumes that sometime soon, we will attribute intentional agency to artificial agents (Ventura, 2019;Windridge, 2017).

Theories of Agency
Albert Bandura (2001) is a towering figure in the psychology of human agency, both individual and collective. His social cognitive theories explain how the capacity for self-regulated, self-efficacious action is the hallmark of human agency, as well as a prerequisite for human self-generation and flourishing. In this respect, Bandura epitomizes the modern perspective on human agency: despite their natural limitations, people are capable of self-regulated, efficacious thought and action. They sense conditions in the world, identify and resolve problems, and pursue purposive goals. Human potential is thereby realized as people develop, engage in purposive action, and learn. They also mature as reflexive beings, acquiring the capability to monitor and manage their own thoughts and actions, and ultimately to self-generate a life course. Thus empowered and confident, people find fulfillment and flourish. In modernity, being truly human is to be freely and fully agentic.

Persons in Context
For comparable reasons, Bandura (2015) is among the psychologists who advocate situated models of human agency, personality, and rationality, often termed the "persons in context" and "ecological" perspectives. Like other scholars in this community, Bandura views human agency in naturalistic terms, assuming agents are sensitive to context, inherently variable, adaptive, and self-generative (Bandura, 2015; Bar, 2021; Cervone, 2004; Fiedler, 2014; Kruglanski & Gigerenzer, 2011; Mischel & Shoda, 2010). Consequently, he and others reject static models of human personality and agency, for example, fixed personality states and traits (e.g., McCrae & Costa, 1997), and argue instead that persons are situated and adaptive. In the context of digitalization, conceiving of human agents in this way is important for two main reasons. First, if humans are complex, open, adaptive systems, situated, and self-generative, they are well suited for collaboration with artificial agents which share the same characteristics. Second, digitalization amplifies the impact of contextual dynamics because contexts change rapidly and penetrate more deeply into human experience. Being human in a digitalized world is to be human in augmented contexts.
For similar reasons, some psychologists explicitly compare human beings to artificial agents. They note that both types of agent can be modeled in terms of inputs, processes, and outputs (Shoda et al., 2002). In addition, both human and artificial agents sense the environment and gather information which they process using intelligent capabilities, leading to goal-directed action, and subsequent learning from performance. Humans and advanced artificial agents are both potentially self-generative as well. Therefore, human and artificial agents are deeply compatible, because both possess the same fundamental characteristics: (a) they are situated and responsive to context; (b) they use sensory perception of various kinds to sample the world and represent its problems; (c) both then apply intelligent processes to solve problems and develop action plans; (d) they self-regulate performances, including goal-directed action; (e) both evaluate performance processing and outputs, which results in learning, depending on sensitivity to variance; (f) both are self-generative and can direct their own becoming; and (g) they do all this as separate individual agents or within larger cooperative groups and networks.
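The input-process-output comparison above can be made concrete with a small sketch. The toy agent loop below is my own illustration of characteristics (b) through (f), not a model proposed in the works cited; the class name, the scalar "environment," and the error-driven update rule are all hypothetical simplifications.

```python
class ToyAgent:
    """Minimal input-process-output agent with error-driven learning."""

    def __init__(self, goal: float, learning_rate: float = 0.5):
        self.goal = goal                  # (d) goal-directed self-regulation
        self.estimate = 0.0               # internal model of the situation
        self.learning_rate = learning_rate

    def sense(self, environment: float) -> float:
        # (b) sample the world via (toy, noise-free) sensory input
        return environment

    def process(self, observation: float) -> float:
        # (c) apply an intelligent process: plan an action that reduces
        # the discrepancy between observation and goal
        return self.goal - observation

    def act_and_learn(self, environment: float) -> float:
        # (d)-(f) perform the action, evaluate the outcome, and update
        # the internal model, thereby directing the agent's own becoming
        observation = self.sense(environment)
        action = self.process(observation)
        self.estimate += self.learning_rate * action
        return self.estimate


agent = ToyAgent(goal=10.0)
state = 0.0
for _ in range(20):
    # (e) each cycle evaluates performance and learns from it
    state = agent.act_and_learn(state)

print(round(state, 2))  # the estimate converges toward the goal, 10.0
```

The point of the sketch is structural, not substantive: the same sense-process-act-learn cycle can describe either a human or an artificial agent, which is why the two are held to be deeply compatible.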
Two of these characteristics are especially notable, namely generativity and contextuality. First, self-generation reflects a wider interest in generative processes broadly conceived. In numerous fields, scholars research how different kinds of systems originate, produce, and procreate form and function without external direction. Chomsky's (1957) theory of generative grammar is a classic example. In it, he argues that syntactic principles are genetically encoded, then embodied in neurological structures, and subsequently generate linguistic systems. In this way, the first principles of grammar help to generate language. Others propose generative models of social science, using agent-based and neurocognitive modeling (e.g., Epstein, 2014). Some economists exploit the same methods to explain the origins and dynamics of markets (e.g., Chen, 2017;Dieci et al., 2018). In personal life, meanwhile, generativity embraces the parenting of children and mentoring of the young, as well as curating identities and life stories (McAdams et al., 1997).
Second, contextuality is not limited to theories of human personality and agency. For example, philosophers also debate the role of context when considering the content of an agent's thoughts and actions. Not surprisingly, naturalistic and pragmatic philosophers are highly skeptical of ideal objectivity free from contextual influence. As Amartya Sen (1993) argues, perception, observation, belief, and value all arise in some context and position within it. From this perspective, claims of objectivity, whether ontological, epistemological, or ethical, must be positioned within context. There is no view from nowhere (see Nagel, 1989) and no godlike position or point of view, sub specie aeternitatis, which John Rawls (2001) hoped for. Commitments of every kind imply context and position. Human agents are forever situated, embedded in social, cultural, and historical contexts, although each context and position can be well lit, by focused attention, sound reasoning, and gracious empathy.
In fact, across many fields of enquiry, scholars are adopting similar approaches. Context and position matter. Examples are found in other areas of psychology and social theory (Giddens, 1984;Gifford & Hayes, 1999), in economics (Sen, 2004), as well as in linguistics and discourse analysis (Lasersohn, 2012;Silk, 2016). They all share a common motivation. Within each field of enquiry, there is growing awareness of contextual variance and complexity, plus skepticism about static methods and models. These concerns are amplified by the obvious increase in phenomenal novelty and dynamism, especially owing to digitalization and related global forces. At the same time, most scholars who embrace context and position also reject unfettered subjectivity and relativism. Rather, they problematize assumptions about universals and ideals and investigate systematic processes of variation and adaptation instead. All agentic states are then conceived as processes in context unless there is compelling evidence to the contrary. Debate then shifts to what common, underlying systems or structures might exist among different expressions of agency. To cite Simon (1996) once again, scientific insight often transforms the understanding of assumed states into dynamic processes.

Capability and Potentiality
However, individual human agency is not simply an expression of personality in context. While agency assumes personality, it goes further (Bandura, 2006). The two constructs are not fully correlated. First, agency is forward looking, prospective, and aspirational, whereas personalities need not be. Second, agency is self-reactive, allowing agents to evaluate and respond to their own processes and performances. This function exploits outcome sensitivity and various feedback and feedforward mechanisms. Third, human agents are self-reflective, whereby they process information about their own states and performances and form reflexive beliefs and affects. Fourth, agency is potentially self-generative, meaning agents curate their own life path and way of becoming, although not all persons do so. To summarize, individual agency is an affordance of personality. The two are integrated, interdependent systems of human functioning. Personality and agency together allow individuals to be intentional, prospective, aspirational, self-reactive, self-reflective, and self-generative.
Collective agents exhibit comparable characteristics. Yet collective agency is not simply the aggregation of personalities (Bandura, 2006). Granted, collectives connect and combine different individuals, but at the same time, collective agency is more holistic and qualitatively different. It relies heavily on networks and culture, for example, which also help to define collective modality and action (DiMaggio, 1997;Markus & Kitayama, 2003). Nevertheless, collectives share many of the same functional qualities as individuals. Collective agency is also intentional, prospective, aspirational, self-reactive, self-reflective, and self-generative. But these are now properties of communities, organizations, institutions, and networks, rather than individuals or aggregations of them (March & Simon, 1993;Scott & Davis, 2007). In summary, collective agency is an affordance of cultural community. In this respect, cultural communities and collective agency are integrated, interdependent systems of human functioning as well, but at a more complex level of organization and modality.

Limits of Capabilities
Irrespective of agentic modality and context, however, purely human capabilities are limited. Theories of agency therefore allow for approximate outcomes, trade-offs, and heuristics. They explain how individuals and collectives simplify and compromise, in order to reason and act within their limits (Gigerenzer, 2000;March & Simon, 1993). Sometimes, simplifying heuristics and trade-offs work well. But at other times, agents fall prey to noise, bias, and myopia, owing to the fallibility of such strategies (Fiedler & Wanke, 2009;Kahneman et al., 2016). Each major area of agentic processing is affected. First, sensory perception is constrained by limited attentional and observational capabilities, and agents easily misperceive the world and themselves, becoming myopic or clouded by noise. Second, cognitive-affective processes are limited by bounded calculative capabilities, which allow biases and myopias to distort problem-solving, decision-making, and preferential choice. Empathic capabilities are limited as well, meaning agents often struggle to interpret and understand other people and themselves. Third, behavioral-performative outputs are constrained by limited self-efficacy and self-regulatory capabilities. Hence, humans often perform poorly or inappropriately. And fourth, updates from feedforward and feedback are limited by insensitivity to variance, memory capacity, and procedural controls, meaning humans often fail to learn adequately and correctly. Feedforward updating is especially vulnerable, owing to its complexity and speed.
Importantly, these limitations suggest the contingency of many assumed criteria of reality, rationality, and justice (Bandura, 2006). For if purely human capabilities are inherently limited, then whatever is grounded in such capabilities will be limited as well. This is especially problematic, because ordinary categories and beliefs often acquire ideal status, as fundamental realities, necessary truths, and mandatory self-guides. They are idealized, meaning they are extrapolated to apply universally and forever, when in fact they do not (Appiah, 2017). Once again, each area of agentic processing is affected. First, the ordinary limits of sensory-perceptive capabilities often determine agents' fundamental ontological commitments and the core categories of reality. For this reason, most naturalistic and behavioral theorists argue that ontologies are contextual and variable to some degree, and hence open to revision (Gifford & Hayes, 1999;Quine, 1995). In contemporary philosophy, this approach supports "conceptual engineering," in which fundamental concepts of reality and value are constructed and reconstructed to fit the context (Burgess et al., 2020;Floridi, 2011).
Second, agents regularly hold idealized epistemological commitments-criteria of true belief and models of reasoning-which reflect the limits of their cognitive capabilities. Most naturalistic and behavioral theories view epistemic commitments as inherently adaptive and ecological (Kruglanski & Gigerenzer, 2011). One notable advocate of this position was the later Wittgenstein (2009), who illuminated how contingent "language games" become idealized in axiomatic models of reasoning. In fact, Wittgenstein exposed axiomatic models as a type of meta-game, which foreshadowed recent thinking about the evolution of logics (e.g., Foss et al., 2012;Thornton et al., 2012). Third, agents adopt ethical commitments. They form ideals of goodness and justice which reflect their limited relational and empathic capabilities, where empathic limits constrain how much people can appreciate about each other's values and commitments. Philosophers then debate the origin of such limits and the degree to which they might be overcome. Some view empathic incompleteness as intractable and humanizing, and central to sociality and culture (e.g., Sen, 2009); while others argue for empathic universals, at least regarding fundamental principles (e.g., Rawls, 2001).

Impact of Digitalization
By transcending ordinary human capabilities, digital augmentation problematizes these questions and assumptions. First, digital innovations are rapidly improving the capacity to sense the environment, thereby heightening the perception of contextual variation and problems. Enabling technologies include the internet of things, intelligent sensing technologies, and fully autonomous agents. Second, digital augmentation massively increases information processing capabilities, transcending the assumed limits of human intelligence. For example, anyone with a contemporary smartphone can access enormous processing power at the touch of an icon. Third, digital augmentation enables new modes of action, which augment human performances. Digital innovations are transforming sophisticated domains of expert action, such as clinical medicine. Fourth, augmented agents can learn at unprecedented rates and degrees of precision, through rapid performance feedback, coupled with intense feedforward mechanisms (Pan et al., 2016;Pan & Yu, 2017).
Altogether, therefore, digitalization is radically augmenting agentic capabilities and potentialities, regarding sensory perception, cognitive-affective processing, behavioral performance, evaluation of performance, and learning. In consequence, many traditional assumptions appear increasingly contingent and contextual, among them, conceptions of cognitive boundedness, distinctions between conscious mind and material nature, interpretive versus causal explanation, and abstract necessity versus practical contingency. Digitalization thus problematizes the conceptual architecture of modernity.

Metamodels of Agency
Fully to conceptualize and analyze this shift, we need to work at a higher level of metamodels. By way of definition, metamodels capture the common features of a related set of potential models within a field (Behe et al., 2014;Caro et al., 2014). Put simply, metamodels define families of models. They are specified by hyperparameters which define the core categories, relations, and mechanisms shared by a set of models (Feurer & Hutter, 2019). Thus defined, metamodels are studied in numerous fields, even if they are not labeled as such, for example, in decision-making (He et al., 2020;Puranam et al., 2015) and Chomsky's (2014) work on linguistics. The concept is very well established in computer science: "Metamodels determine the set of valid models that can be defined with models' language and behavior in a particular domain" (Sangiovanni-Vincentelli et al., 2009, p. 55). In this book, the term refers to families, or related sets, of models of agency. Regarding agentic metamodels, hyperparameters define levels of organization or modality, activation mechanisms, and processing rates, such as the speed of self-regulation and learning, where rates, in this context, are defined as the number of processing cycles performed per unit of time. The reader will therefore encounter the terms "metamodel of agency" and "agentic metamodel" throughout this book. However, I will not offer an alternative model of agency at the detailed level. The book does not present an alternative theory of human psychology or agency as such. Nor will it propose a formal model of augmented agency or humanity based on a specific theory. Rather, my argument will focus at a higher level, on the features of agentic metamodels.
To illustrate, consider the field of psychological science. In this field, a popular metamodel assumes that human beings perceive the world and themselves, then process information and perform action, with varying degrees of intelligence and autonomy. Human beings are therefore a type of input-process-output system (Mischel, 2004). Given this broad metamodel, scholars then formulate specific models of psychological functioning, such as behaviorist, social cognitive, state, and trait models. Importantly, each type of model exemplifies the principles of the broad metamodel, though they vary in terms of the specific parameters for inputs, the internal mechanisms of processing, and performance outputs. Moreover, in fields like psychology, domain-specific metamodels are often predetermined, typically from the analysis of practice and experience. Indeed, whole industries evolve this way. Prescriptive metamodels guide pedagogical and clinical practice (Bandura, 2017). Most psychologists therefore assume fairly stable agentic metamodels which are deeply encoded in culture and community (Soria-Alcaraz et al., 2017). This means that metamodels adapt incrementally, if at all, under normal conditions. Indeed, institutional fields are labeled "fields" for this reason; and similarly, personality types are labeled "types." Both labels reflect stable metamodels in these fields of study (Mischel, 2004;Scott, 2014). Moreover, few practitioners question the normative metamodels of a field. Most are encoded during training, or imposed by regulation, and remain fixed.
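The relation between a metamodel and its member models can be made concrete in code. The following is a minimal sketch, not drawn from the book: all names and parameters (`attention_gain`, `bias`, the three processing stages) are invented for illustration. The metamodel fixes the shared input-process-output architecture; hyperparameters then select one specific model from the family.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentMetamodel:
    """Family of input-process-output models of agency (illustrative)."""
    perceive: Callable[[float], float]   # sensory-perceptive stage
    process: Callable[[float], float]    # cognitive-affective stage
    act: Callable[[float], float]        # behavioral-performative stage

    def run(self, stimulus: float) -> float:
        return self.act(self.process(self.perceive(stimulus)))

def make_model(attention_gain: float, bias: float) -> AgentMetamodel:
    """Hyperparameters (attention_gain, bias) pick out one model
    from the family defined by the metamodel."""
    return AgentMetamodel(
        perceive=lambda s: attention_gain * s,  # imperfect sampling
        process=lambda p: p + bias,             # biased processing
        act=lambda d: max(0.0, d),              # thresholded output
    )

# Two distinct models, same metamodel:
cautious = make_model(attention_gain=0.5, bias=-1.0)
eager = make_model(attention_gain=1.5, bias=0.5)
print(cautious.run(2.0))  # 0.0
print(eager.run(2.0))     # 3.5
```

Behaviorist, social cognitive, state, and trait models would differ in which stages they elaborate and which parameters they fix, while still instantiating the same broad architecture.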
The question remains, however, whether metamodels will help in the analysis of human-artificial augmented agency. Even if metamodeling is a suitable way to analyze both human and artificial agents, at a high level, can the two be integrated in this way? Perhaps the fundamental features of mind and consciousness are incommensurable with artificial agency and intelligence. Arguably, this was the case until recently. However, as noted earlier, recent technical advances suggest that metamodeling is now feasible in this regard. For example, advanced systems of artificial intelligence are increasingly capable of higher forms of cognitive functioning, including self-generation and self-supervision, associative and speculative reasoning, heuristic problem-solving and decision-making, as well as interpreting affect and empathy (Asada, 2015;Caro et al., 2014). Human and artificial agents are increasingly similar and thus amenable to integrative metamodeling, especially when they combine as augmented agents.

Compositive Methods
Digitalized ecologies will be increasingly dynamic and responsive. Agency will be less reliant on stable metamodels and encoded templates. New metamodels, or families of models, will continually emerge. In this way, augmented agents will be capable of rapid transformation. Humans and artificial agents will take on different, complementary roles, self-generating dynamically to fit changing contexts. Their metamodels will compose and recompose in real time, to fit changing conditions. In this respect, digital augmentation supports a more dynamic method, which can be described as "compositive" (cf. Latour, 2010), meaning that methods and models will compose, decompose, or recompose, to fit different contexts. From a design perspective, therefore, augmented agents will be nearly composable, as well as nearly decomposable, modular, and hierarchical systems. Moreover, compositive methods are systematic and rigorous, the result of processing vast quantities of data. These methods are neither ad hoc nor idiosyncratic (e.g., Pappa et al., 2014;Wang et al., 2015).
Compositive methods are already employed in contemporary artificial intelligence. Systems maintain databases of processing modules and methods, and then select and combine these to fit the problem context. Metamodels and models are developed rapidly, contextually, in response to problems and situations. As noted previously, the most advanced software algorithms now compose their own metamodels-they are fully self-generative-requiring minimal (if any) supervision. Evolutionary deep learning systems and Generative Adversarial Networks (GANs) function in exactly this way (Shwartz-Ziv & Tishby, 2017). Via rapid inductive and abductive learning, these systems process massive volumes of information, identifying hitherto undetectable patterns, to compose new metamodels and models, often without any external supervision. Augmented agents will do likewise. They will leverage the power of digitalization to select and combine different techniques and procedures, and thereby compose metamodels and methods which best fit the context. Development of, and investigation by, augmented agents will invoke compositive methods.
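The select-and-combine pattern described above can be sketched in a few lines. This is a toy illustration only: the module registry, the module names (`denoise`, `normalize`, `aggregate`), and the context-to-pipeline selection rule are all invented for the example, standing in for the far richer module libraries real systems maintain.

```python
from typing import Callable, Dict, List

# Registry of processing modules (toy stand-ins for real components).
MODULES: Dict[str, Callable[[list], list]] = {
    "denoise": lambda xs: [x for x in xs if abs(x) < 100],
    "normalize": lambda xs: [x / max(map(abs, xs)) for x in xs] if xs else xs,
    "aggregate": lambda xs: [sum(xs) / len(xs)] if xs else xs,
}

def compose(context: str) -> List[str]:
    """Select a module sequence to fit the problem context (toy rule)."""
    if context == "noisy-stream":
        return ["denoise", "normalize", "aggregate"]
    return ["normalize", "aggregate"]

def run(context: str, data: list) -> list:
    """Compose the pipeline for this context, then apply it."""
    for name in compose(context):
        data = MODULES[name](data)
    return data

# A noisy context triggers a different composition than a clean one.
print(run("noisy-stream", [1.0, 2.0, 3.0, 500.0]))
```

The key point is that the pipeline is not fixed in advance: the same registry composes differently as the context changes.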
Notably, the great economist Friedrich Hayek (1952) argued for compositive methods in the social sciences, as an antidote to naïve reductionism, developing models and methods which best fit the problem at hand (Lewis, 2017). In these respects, Hayek's conception of "compositive" is comparable to recent technical developments. Going beyond Hayek's conception, however, digitalized metamodeling is agentic and ecological, more similar to Latour's (2010) concept of composition. It synthesizes top-down and bottom-up processing, detailed and holistic, iterating rapidly through prospective metamodeling and testing until metamodel fit is maximized, often in a fully unsupervised, self-generative fashion. In these respects, digitalized composition also problematizes traditional methodological distinctions: between qualitative and quantitative, methodological individualism and collectivism, and reductionism and holism. Instead, compositive methods will blend these options and treat such polarities as the extremities of continua (Latour, 2011). I will return to these topics in later chapters, and especially in the final chapter, which discusses the future science of digitally augmented agency.

Dimensions of Metamodeling
Nevertheless, given the complexity of many problems and the processing they require, artificial agents must also simplify and approximate. This is done using algorithmic heuristics, which are shortcut means of specifying models and methods (Boussaid et al., 2013). At the most general level, hyperheuristics provide simplified means of specifying the broad hyperparameters of metamodels. Recall that metamodels are defined as related sets of potential models, and hyperparameters specify the broad features or attributes of metamodels, including their core categories, mechanisms, and processing rates (Feurer & Hutter, 2019). Hyperheuristics are shortcut means of specifying these properties. Metamodels are further distinguished by the supervision applied in their development. As noted earlier, they can be fully supervised, semi-supervised, or unsupervised, from artificial and human sources.
In supervised metamodeling, hyperparameters are fully determined by prior experience and learning (e.g., Amir-Ahmadi et al., 2018), whereas in semi-supervised systems, the initial hyperparameters are partially given, but provisional. Additional processing is required to tune and optimize them. Among the benefits of a semi-supervised approach is that metamodeling can exploit prior learning while responding to novelty, although semi-supervised metamodeling also poses risks if it imports distorting biases and myopias (Horzyk, 2016). Alternatively, some artificial agents are fully unsupervised. Hyperparameters are developed by the agent itself, in a self-generative fashion. Metamodels are composed, rather than retrieved. Advanced artificial agents do this through rapid, iterative hyperparameter pruning, tuning, and optimization (Song et al., 2019).
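The contrast between taking hyperparameters as given and tuning them can be shown with a minimal, illustrative example: a "supervised" exponential-smoothing model whose smoothing hyperparameter is fixed in advance, versus one that searches for its own value against the data. The series, the fixed value 0.5, and the grid search are all invented for the sketch; real systems use far more sophisticated tuning and pruning strategies.

```python
import random

def one_step_error(alpha: float, series: list) -> float:
    """Mean squared one-step-ahead error of exponential smoothing."""
    estimate, loss = series[0], 0.0
    for x in series[1:]:
        loss += (x - estimate) ** 2
        estimate = alpha * x + (1 - alpha) * estimate
    return loss / (len(series) - 1)

random.seed(0)
# Synthetic trending series with noise.
series = [10 + 0.1 * t + random.gauss(0, 0.5) for t in range(200)]

# Supervised: the hyperparameter is predetermined by prior experience.
supervised_alpha = 0.5

# Self-tuned: the agent searches for the hyperparameter itself.
candidates = [i / 20 for i in range(1, 20)]
tuned_alpha = min(candidates, key=lambda a: one_step_error(a, series))

print(one_step_error(supervised_alpha, series))
print(one_step_error(tuned_alpha, series))
```

By construction, the tuned value does at least as well as the predetermined one on this series; the cost, as the text notes, is the extra processing the search consumes.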
As noted above, GANs are a recent innovation of this kind (Wang et al., 2017). In these systems, artificial agents compete in a collaborative game. A generator produces fake examples of some phenomenon, derived from pure noise. In parallel, a discriminator is trained on real examples of phenomena, such as photographs of human faces. If the system is fully unsupervised, these training data are unlabeled and unstructured. Using such data, the discriminator learns via multiple cycles of induction. Then the artificially generated, fake examples are passed to the discriminator, along with unclassified real examples, and the discriminator tries to distinguish real from fake. The competition ends in a Nash equilibrium, being the state in which neither the generator nor discriminator can do better against the other; but they rely on each other to achieve this maximal state, and both therefore benefit from stabilizing the system (Pan et al., 2019). In this fashion, the GAN produces a maximizing solution to the focal problem, for example, developing an artificial agent which can distinguish real from synthetic human faces, without needing any external supervision (Liong et al., 2020). And the metamodel is fully unsupervised and self-generative.
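The adversarial dynamic can be illustrated with a deliberately simplified toy, not a real GAN: there are no neural networks or gradients here. A "generator" draws from a shifted distribution, a "discriminator" classifies each sample by its nearest distribution mean, and the generator nudges its distribution toward the real one whenever it is caught. All numbers (means, step size, round count) are invented for the sketch.

```python
import random

random.seed(1)
REAL_MEAN = 4.0   # "real" data: samples from N(4, 1)
gen_mean = 0.0    # generator starts far from the real distribution

for _ in range(2000):
    fake = random.gauss(gen_mean, 1.0)
    # Discriminator rule: call a sample "real" if it lies nearer
    # the real mean than the generator's mean.
    fooled = abs(fake - REAL_MEAN) < abs(fake - gen_mean)
    # Generator update: when caught, move toward the real distribution.
    if not fooled:
        gen_mean += 0.01 * (REAL_MEAN - gen_mean)

# Near equilibrium the two distributions coincide, so the
# discriminator's rule does no better than chance.
print(round(gen_mean, 2))
```

The equilibrium mirrors the Nash condition described above: once the generator's distribution matches the real one, neither side can improve against the other, and the system stabilizes.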

Parameters and Variables
Given initial hyperparameters and the metamodel they define, the next phase applies metaheuristics to select a model from the choice set (Feurer & Hutter, 2019). First, the agent will select specific parameters, about what counts as real versus fake, and what is exposed or hidden. Second, it will select activation functions, such as the type of action generation, or the outcome variance which triggers adaptive feedback. Third, there will be specific processing cycles and learning rates-for example, whether a particular type of feedback is slow and sluggish, or fast and hyperactive-as well as the level and intensity of feedforward processing. In purely human processes, such parameters tend to be encoded in memory and mental models, and supervised by metacognition (Bandura, 2017). Even ecological models of rationality are significantly supervised, by prescribing criteria of adaptation and association (Kruglanski & Gigerenzer, 2011). For example, the metaheuristic may encode "fast and frugal heuristics" as the most ecologically appropriate model for problem-solving (Gigerenzer & Goldstein, 1996). The system then employs this specific model to resolve a focal problem.
Next, given a specific model and its parameters, the process specifies the variables or expected patterns of variance. For example, in a naturalistic model of agency, variables might capture the expected degree and rate of variation in self-regulated behavior (the dependent variable), conditional on the strength or weakness of self-efficacy (the independent variable) (Bandura, 1997). In this case, the variables are supervised and predetermined. Though, if supervision is poor, the specification of variables can easily import distorting myopias and biases. Indeed, such biases often infect human and machine learning, resulting in poor choices and decision-making (Noble, 2018). Human and artificial agents must therefore learn how better to supervise the selection of variables, mindful of these risks. Otherwise, models could be underfitting (excluding too much genuine variance) or overfitting (admitting too much noise). Both scenarios will increase functional losses (Kahneman et al., 2016).
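The two failure modes can be demonstrated with synthetic data. In this illustrative sketch, polynomials of increasing degree are fit to noisy samples of a smooth function; a degree-1 model underfits (it excludes genuine variance), while very high degrees begin to absorb noise. The function, noise level, and degrees are chosen purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 20)
x_test = np.linspace(-1, 1, 200)

def true_signal(x):
    return np.sin(3 * x)

# Noisy training observations of the underlying signal.
y_train = true_signal(x_train) + rng.normal(0, 0.2, x_train.size)
y_test = true_signal(x_test)

def holdout_error(degree: int) -> float:
    """Mean squared error on held-out points for a polynomial fit."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))

# Degree 1 underfits; degree 4 tracks the signal; degree 15 risks
# fitting the noise in only 20 training points.
for degree in (1, 4, 15):
    print(degree, round(holdout_error(degree), 4))
```

The losses mentioned in the text correspond to the holdout error: the underfit model pays in missed signal, the overfit one in absorbed noise.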
Importantly, at each level of artificial processing, algorithmic heuristics help to manage the otherwise overwhelming complexity of data and processes. Indeed, much research into artificial intelligence and machine learning focuses on optimizing such hierarchies: using hyperheuristics to select the hyperparameters which define metamodel choice sets; then using metaheuristics to select the detailed model which fits best; and finally, the chosen model provides specific heuristics to solve a focal problem. The earlier example cited (a) the metamodel of associative, heuristic problem-solving, then (b) the model of "fast and frugal" heuristics, and (c) applying a specific heuristic, such as a simple stopping rule (Gigerenzer, 2000). These methods will be critical for the effectiveness and efficiency of digitalized problem-solving, especially for more complex problems. For the same reasons, these methods will be employed by digitally augmented agents.
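The three-level hierarchy just described can be sketched schematically. Every rule below is an invented toy stand-in: a hyperheuristic picks the metamodel family from coarse problem features, a metaheuristic picks a model within that family, and the chosen model supplies a concrete heuristic, here a simple satisficing stopping rule.

```python
def hyperheuristic(problem: dict) -> str:
    """Level 1: pick a metamodel family from coarse features (toy rule)."""
    return "heuristic-search" if problem["time_budget"] < 10 else "exhaustive"

def metaheuristic(family: str) -> str:
    """Level 2: pick a specific model within the family (toy rule)."""
    return "take-the-best" if family == "heuristic-search" else "full-scan"

def solve(problem: dict) -> str:
    """Level 3: apply the chosen model's concrete heuristic."""
    model = metaheuristic(hyperheuristic(problem))
    options = problem["options"]  # list of (name, score) pairs
    if model == "take-the-best":
        # Simple stopping rule: accept the first option that clears
        # the aspiration level, ending search early (satisficing).
        for name, score in options:
            if score >= problem["aspiration"]:
                return name
        return options[-1][0]
    # Exhaustive alternative: examine everything, pick the maximum.
    return max(options, key=lambda o: o[1])[0]

problem = {"time_budget": 5, "aspiration": 0.7,
           "options": [("a", 0.4), ("b", 0.8), ("c", 0.9)]}
print(solve(problem))  # stops at "b": good enough, search ends early
```

With a larger time budget the hyperheuristic selects the exhaustive family instead, and the same problem yields "c": the hierarchy trades solution quality against processing cost.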
Furthermore, depending on the type and level of supervision, hyperparameters are more, or less, visible. Recall that some are predetermined, given by supervision, and hence immediately visible. Others may be hidden and unsupervised, and therefore wait to be discovered by further processing. In computer science, these questions loom large for the efficiency of artificial agents (Yao et al., 2017). On the one hand, the greater the prior supervision of hyperparameters, the less is hidden, and the more efficient and predictable metamodeling becomes. For example, in fully supervised machine learning, hyperparameters are predetermined and thus fully visible. However, as a result, there are fewer degrees of freedom: the greater the supervision, the less freedom in metamodeling. On the other hand, with less or no supervision, more is hidden. This entails greater degrees of freedom, to explore and self-generate. This is the case in unsupervised GANs, in which hyperparametric values are largely hidden and await discovery. However, the process of discovery consumes time and resources. To compensate, unsupervised systems also employ hyperheuristics in hyperparameter tuning and pruning, to optimize metamodel discovery and design. That is, they self-supervise their own objective function to maximize fit while also minimizing the processing load (Burke et al., 2019). Similar dynamics occur in the development and functioning of agentic metamodels. Human systems also need to balance metamodel fit and efficiency. But in these contexts, components can be hidden for other reasons, and especially owing to the limitations of human perception and consciousness.

The Role of Consciousness
In premodern cultures, it was assumed that most fundamental principles are accessible to ordinary consciousness, even if they depended on divine revelation and ritual. This included the core categories of reality, truth, and value, about persons, the polis, and the cosmos (Rochat, 2009). However, hyperparameters of this kind are inevitably anthropomorphic, owing to their origins in ordinary experience. This was certainly true for premodernity. Fundamental categories of reality and truth were defined in human terms, that is, in terms which reflected ordinary consciousness. Hence, the gods were superhuman characters and the cosmos emerged through anthropomorphic or animistic stories of creation. By implication, premodern cultures offered few degrees of self-generative freedom in agentic form and function.
By contrast, during post-Enlightenment modernity, the fundamental properties of nature are largely inaccessible to ordinary consciousness. To discover them, one requires specialized technological assistance, or in other words, the methods of modern empirical science. Nevertheless, many continued to believe that the fundamental properties of mind and self are directly accessible to consciousness. Descartes (1998) exemplified this belief when he introspected and famously concluded, "I think, therefore I am." The modern mind-body problem was born, and over time modernity bifurcated the sciences. On the one hand, the natural sciences demoted ordinary consciousness and relied on technological assistance to access the hidden, fundamental realities of nature. On the other hand, many human sciences continued relying on ordinary consciousness to access the fundamental properties of mind and self, with or without technological assistance (Thiel, 2011). Some disciplines continue to do so, believing that the hyperparameters of cognitive form and function are directly accessible to ordinary consciousness. Arguably, this is anthropomorphic and erroneous (see Chomsky, 2014).
In fact, owing to digitalization and neurophysiological discoveries, it is becoming abundantly clear that the fundamental realities of mind and self are opaque to ordinary consciousness (Carruthers, 2011). Specialized technologies are required here too. Introspection is a functional approximation, at best. From the perspective of digital augmentation, therefore, no fundamental categories and mechanisms-whether of physical nature or mental phenomena-are directly accessible to ordinary consciousness. Both require technological assistance to observe and analyze them. However, this does not entail the reduction of mind and self to material cause or the digital dissolution of consciousness. Rather, as I will explain more fully in later chapters, it entails rethinking classic concepts of mind and self in terms of digitally augmented agency and self-generative systems.
Significant implications follow for the supervision of technologically assisted agency, and especially the supervision of digitally augmented agents. Most importantly, if ordinary consciousness is demoted and no longer a reliable source of fundamental reality and truth, then it will require deliberate supervision to ensure that ordinary human inputs are acknowledged and respected. They cannot, and should not, be either foundational or taken for granted. In fact, this problem is already a topic of research in computer science. Artificial agents are designed to recognize and accommodate the ordinary experience of mind and self, when they need to collaborate with humans in behavioral settings (Abbass, 2019), for example, when humans travel in autonomous vehicles. These situations require the systematic incorporation of human perceptions, values, and interests, despite their lack of precision and reliability. In this way, the supervision of augmented agency is humanized.
The earlier Figs. 1.1 and 1.2 illustrate these effects. Recall that these figures depict the core supervisory challenge of technologically assisted humanity, namely, how to combine and coordinate divergent levels of human and technological functionality. In fact, the same factors explain the shifting role of ordinary consciousness in explanatory thought. To illustrate, instead of interpreting these figures as general models of supervision, now assume they depict the supervision of explanatory thought. Next, recall that the small gap between levels of capability L1 and L2 in Fig. 1.1 illustrates modest technological assistance. We can therefore reinterpret this figure to depict forms of science with modest technological tools and techniques. Also, note that segments A and B are both relatively small. Much supervision is achievable using baseline capability L1 and hence accessible to consciousness. In fact, this was the dominant pattern in premodern science (Sorabji, 2006). It persists in some fields of human study, which still derive fundamental categories and mechanisms from ordinary consciousness and introspection.
By contrast, in Fig. 1.2, there is a larger gap between baseline capability L1 and the more technologically advanced capability L3. This figure therefore illustrates forms of science with significant technological input. Segments C and D are large, implying that much is inaccessible to ordinary consciousness and requires supervision at level L3. Modern natural science is certainly like this, as are the human sciences which no longer rely on ordinary consciousness and perception but employ specialized technologies instead. The science of digitally augmented agency will adopt the same approach. However, for this reason, future science confronts a major challenge. It will require strong collaborative supervision to avoid scenarios in which artificial agents overwhelm and ignore human inputs, and/or human supervision imports distorting myopias and bias into artificial intelligence and science.

Critical Dilemmas
Metamodeling therefore plays an important role in agentic thought and action. In practical, behavioral domains, most metamodeling is automatic, implicit, encoded in memory, and heavily supervised by procedural routine and custom. Data are labeled and principles are clear. In fact, in ordinary human experience, metamodeling is only self-generative in the most creative and speculative domains. By contrast, metamodeling by artificial agents is increasingly self-generative and unsupervised. This distinction between human and artificial supervision of metamodeling has profound implications for their collaboration. Digitally augmented humanity must integrate both types of agent and accommodate their different capabilities and potentialities. On the one hand, human inputs will be strongly supervised and replicative, and ordinary human intuitions and priors will often persist. Humans also tend to be comparatively myopic, sluggish, layered, and insensitive to variance. On the other hand, artificial agents will tend toward increasingly unsupervised, self-generated inputs, independent of human intervention. In addition, artificial agents are comparatively farsighted, fast, compressed, and hypersensitive to variance.
Overall, collaborative supervision will therefore be daunting, even for the most developed augmented agents (Cheng et al., 2020). If supervision is poor, the result could be extremely convergent or divergent forms and functions. Regarding over-convergence, one type of agent might dominate the other, resulting in systems which are too digitalized or too humanized. Regarding over-divergence, human and artificial inputs will both be significant but conflicting. A number of divergent dilemmas are possible. First, the human and artificial components of augmented agents could diverge in terms of range, being both farsighted and nearsighted at the same time, looking too far and too near in sampling and search. Second, their processing rates might diverge, being rapid in some respects and sluggish in others, thus cycling both too fast and too slow. Third, artificial processes could be hypersensitive to variance, while human processes are relatively insensitive, thereby admitting both too much and too little noise. And fourth, augmented agents might combine overly complex and overly simplified components, leading to poor integration and coordination. In all these scenarios, outcomes will easily become dysfunctional. The following chapters examine the origins and consequences of these dilemmas for key domains of agentic form and functioning. The final chapter looks forward to the future science of digitally augmented agency.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.