Throughout the modern period, scientific discovery, widespread education, and industrial and economic development have encouraged commitment to rationality and humanistic values as progressive forces (Pinker, 2018). Assuming such commitments, systematic reasoning and the scientific method promise resolution of increasingly complex and consequential problems. The proper ambition of problem-solving is then to transcend the limits of ordinary capability, even if rational ideals are forever unreachable. Not surprisingly, the advocates of rationality and scientific method are impatient with constraints on problem-solving and view them as challenges to be overcome. And their impatience is reasonable, given the dramatic growth of knowledge and technology during the modern period. Capabilities have greatly expanded, and many assumed limits have receded. Digital augmentation promises radically to enhance and accelerate this trend. Problem-solving continues to advance.

Even so, human capabilities remain limited. People still need to reduce potential complexity and manage cognitive load. They often do this by simplifying problem representation and/or solution search, depending on the relative significance of each activity in any problem context. This entails a series of trade-offs between accuracy and efficiency, which carry potential costs and risks (Brusoni et al., 2007). Most commonly, the simplification of sampling and search admits distorting biases, myopia, and noise into problem-solving (Kahneman et al., 2016; Kahneman & Tversky, 2000). Granted, in some situations, such simplification is warranted and satisfactory. For example, fast and frugal heuristics often work best in uncertain or urgent situations (Gigerenzer, 1996). And when they do, the practical challenge is not to mitigate the distortions of simplification, but to maximize its effectiveness. Either way, people naturally simplify sampling and/or search, resulting in less complex problems and solutions, respectively. They do so for a range of reasons: to maximize limited resources and capabilities; because prior commitments obviate the need for comparative processing (Sen, 2005); in order to maintain cultural norms and controls (Scott & Davis, 2007); or because heuristics are most appropriate for the problem at hand (Marengo, 2015).

Herbert Simon (1979) was among the first to expose these patterns. He argues that to solve problems with bounded or limited rationality, people simplify different aspects of problem-solving and satisfice at lower levels of aspiration, rather than fully satisfying criteria of optimality. Simon (ibid., p. 498) identifies two broad types of satisficing in problem-solving: “either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world.” On the one hand, that is, agents simplify the representation of problems, to reach optimal solutions. In this case, the processing required stems mainly from the complexity of solution search. The major risks are myopic sampling and problem representation. Following common naming conventions, I call this type of problem-solving normative satisficing (see Simon, 1959). On the other hand, agents simplify solutions and address more realistic, better described problems. In this case, the processing required stems mainly from the complexity of the problem representation itself. Now the major risks are myopic solution search and selection. I call this type of problem-solving descriptive satisficing, again following convention.

Notably, the latter approach—accepting satisfactory solutions to more realistic problems, or what I call descriptive satisficing—is the type of problem-solving found in behavioral theories of decision-making, economics, and organizations. Stated in more formal language, it seeks no worse solutions to the best representation of problems. The former approach—seeking optimal solutions to simplified problems, which I call normative satisficing—is, by contrast, typical of classical microeconomics and formal decision-making (March, 2014). Put more formally, it seeks the best solutions to a no worse representation of problems. Hence, as Sen (1997b) explains, satisficing can be conceived as a type of formal maximizing, meaning problems and/or solutions are partially ordered, and agents accept some no worse option as good enough, assuming an aspiration level.
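
To make this distinction concrete, the two senses of satisficing can be stated in the language of partial orders and aspiration levels. The following formulation is a minimal sketch of the maximizing-optimizing distinction as summarized here, not a reproduction of Sen's own notation. For a choice set $X$ ordered by a (possibly incomplete) preference relation $\succsim$, an option $x$ is optimal if $x \succsim y$ for every $y \in X$; it is maximal, or “no worse,” if there is no $y \in X$ with $y \succ x$; and it satisfices at aspiration level $a$ if $u(x) \ge a$ for some evaluation $u$. In these terms, normative satisficing selects an optimal solution over a merely maximal (simplified) problem representation, while descriptive satisficing selects a merely maximal solution over an optimal (fully specified) representation.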

However, while descriptive satisficing is widely studied, normative satisficing is not. Even though Simon explained this important distinction decades ago—that classical theory also satisfices in the normative sense, by seeking optimal solutions for a simplified world—few studies investigate this phenomenon. Levinthal (2011, p. 1517) also observes this oversight, when he writes that all “but the most trivial problems require a behavioral act of representation prior to invoking a deductive, ‘rational’ approach.” Yet despite his astute observation, with a few notable exceptions (e.g., Denrell & March, 2001; Fiedler & Juslin, 2006), most behavioral researchers focus on descriptive satisficing, that is, finding satisfactory solutions for a more realistic world (e.g., Kahneman et al., 2016; Luan et al., 2019). Granted, this is an important topic. However, as a consequence, we still await a full treatment of bounded realism, representational heuristics, and normative satisficing, especially in classical theory (Thaler, 2016). This is another large project, but I will not attempt to fill the gap here.

Historical Developments

These questions have a history worth recounting. For over two centuries, classically inspired economists have idealized Adam Smith’s (1950) notion of the invisible hand to explain collective, calculative self-interest (Appiah, 2017). Equally, they idealize his characterization of Homo economicus, as a rational egoist bent on optimizing utility. Here are the roots of normative satisficing in microeconomics: seeking optimal, calculative solutions to simplified problems of economic utility. However, Smith (2010) also understood that rational egoism is a fictional, albeit functional, ideal. In parallel, he recognized the complexity of human sentiments and commitments. From this more realistic perspective, Homo economicus defers to Homo sapiens, meaning a richer conception of human agency and psychology (Thaler, 2000). Therefore, Smith also set the agenda for descriptive satisficing: accepting satisfactory solutions to well described, realistic problems, including problems of preferential and collective choice. In recent years, more scholars have embraced this broader conception of economic agency, responding to the increasing complexity and variety of choice (e.g., Bazerman & Sezer, 2016; Higgins & Scholer, 2009), although, as noted above, most research is still framed in terms of descriptive satisficing and largely overlooks the puzzles of normative satisficing, including myopic sampling and representational heuristics. Notable exceptions exist, but they remain rare (e.g., Fiedler, 2012; Ocasio, 2012).

By contrast, the problems of both normative and descriptive satisficing are central to modern scientific method. Experimental researchers have consistently refined their methods of attention, observation, sampling, and problem representation. Indeed, the technological enhancement of attentional focus and observation is central to scientific method, along with the enhancement of data analysis and solution search. For example, in the early modern period, the telescope and microscope revolutionized observation in astronomy and biology, respectively. With these tools and techniques, novel problems emerged which rendered prior explanations obsolete. In parallel, new mathematical and statistical methods enabled deeper analysis. Fast forward to the present, and observational tools include satellites, particle accelerators, and quantum microscopy. At the same time, computing technologies massively enhance the compilation and analysis of observational data. Using these techniques, today’s scientists represent and solve increasingly novel, complex, highly specified problems. Natural science continues to transcend the limits of human capabilities and consciousness, especially in the sampling and representation of problems.

Not surprisingly, social and behavioral scientists attempt to do the same (Camerer, 2019). In these fields, however, selective sampling and experimental techniques prompt concerns about oversimplification and validity. Many caution that social and behavioral phenomena are too variable and situated to be reduced to measurable constructs and mechanisms (e.g., Beach & Connolly, 2005; Geertz, 2001). Regarding problem-solving, particularly, some argue that this activity is best explained in terms of narrative interpretation and sense-making, rather than rational expectations, preference ordering, and reasoned choice (e.g., Bruner, 2004; Smith, 2008). By implication, determinate models of problem-solving will be overly simplified and mired in assumptions of normality and stability. Others are somewhere in between. They still present formal models and methods, but embrace a broader psychology of commitments, including empathy and altruism (e.g., Ostrom, 2000; Sen, 2000). As a further example, Stiglitz et al. (2009) argue for a richer description of human needs and wants, shifting toward Homo sapiens, and demonstrate how these could be measured and analyzed. Their ambition is an economics of human flourishing and well-being, with public policies to match.

Nevertheless, these scholars, like most, agree that something must be simplified to develop useful theories and actionable knowledge. Debate then focuses on what to sample, simplify, and conceptualize, when and how, and with what consequences for problem-solving. As stated above, those who endorse classical theory tend to simplify problems and psychology, seeking to optimize calculative solutions, whereas behavioral approaches seek richer problem representation, and then accept approximating heuristic solutions. The debate exemplifies the modern problematic noted in earlier chapters: to what degree can and should human beings overcome their limits, to be more fully rational, empathic, and fulfilled?

Contemporary Digitalization

Digitalization now brings the advanced capabilities of empirical science and computer engineering to everyday human problem representation and solution. For example, consider personal digital devices, such as smartphones and tablet computers. They grant individuals access to increasingly powerful and intelligent sampling, search, and computation, far beyond traditionally bounded capabilities. Using such devices, humans become collaborators in digitally augmented problem-solving. Importantly, these capabilities also reduce the need for trade-offs. Less must be simplified. Artificial agents can process the enormous amount of information required to analyze highly complex problems and choices, and at all levels of agentic modality including collectives (Chen, 2017). In augmented collaboration, therefore, humans will have the potential to behave fully as Homo sapiens. In fact, it becomes feasible to pursue highly discriminated problems and solutions in many ordinary contexts, not just in the laboratory (Kitsantas et al., 2019). Thanks to digital augmentation, much human problem-solving will approach scientific levels of detail, precision, and rigor, in both sampling and search.

Yet at the same time, natural human capabilities remain limited and parochial values and commitments will likely persist. Given these enduring features of human problem-solving, digital augmentation may compound rather than ameliorate behavioral dilemmas. For example, if racial and gender biases are encoded into training data and algorithmic processing, machine learning leads to even greater discrimination. Digitally augmented capability amplifies biased beliefs about gender and race (Osoba & Welser, 2017). As another illustration, consider classically inspired economics, in which problem-solving is often assessed in terms of the rational optimization of self-interested utility. Here too, digital augmentation could lead to increasingly dysfunctional problem-solving, if augmented agents simply reinforce narrow assumptions about self-interest and expectation, and overlook wider ecological, social, and behavioral factors (Camerer, 2019; Mullainathan & Obermeyer, 2017). Digitally augmented capability would thus amplify the idiosyncratic noise which often clouds decision making (Kahneman et al., 2021). It is therefore appropriate to ask under which conditions digital augmentation will enable more effective problem-solving, rather than perpetuate the limiting myopias and models of the past, and which additional procedures might help to minimize the downside risks of digital augmentation while maximizing the upside. Moreover, these questions are urgent. Already, the speed and scale of digital innovation are transforming much problem-solving. Organizations, institutions, and citizens are struggling to keep up, trying to remain active and relevant in the supervision of these digitally augmented processes.

4.1 Metamodels of Problem-Solving

To analyze the digital augmentation of problem-solving more deeply, we first need to review dominant metamodels of problem-solving, that is, the major problem-solving choice sets. As the preceding discussion explains, modern approaches combine two main functions: sampling of various kinds, which results in problem representation, followed by solution search and selection. Both functions—problem sampling and representation, and solution search and selection—can be more, or less, specified and complex. In ideal, optimal problem-solving, each should be fully specified and result in the best possible option, although this is rarely achieved and often impossible in practical contexts. In this regard, ideal problem-solving is truly an ideal, whereas people function with limited resources and capabilities. Given these constraints, a few metamodels of problem-solving are possible.

First, as Simon (1979) explains, agents can seek optimal solutions to simplified problems, that is, the normative satisficing of classical theory. Often, such solutions are axiomatic and formalized, while problems are represented in clear, but simplified terms. Hence, from a critical behavioral perspective, “utility maximization” is a simplified representation of the problem of economic choice. And highly calculative solutions to such problems—such as rational or adaptive expectations—can aspire to optimality only because of normative satisficing. If people choose this metamodel, more processing is required, owing to the complexity of optimizing the solution. As noted earlier, the major risks of doing so are the distortions which arise from myopic sampling and simplified problem representation.

Second, agents can seek satisfactory solutions to more fully described, realistic problems, that is, descriptive satisficing. Solutions are frequently heuristic and approximate, while problems are represented in a more detailed fashion. In this type of satisficing, solutions are partially ordered, while problem representation is highly discriminated, striving for completeness. Hence, problem representation is optimized, meaning the chosen representation is the best alternative, while the chosen solution is maximal, that is, no worse than the alternatives. If people choose this metamodel, more processing is required, owing to the complexity of the problem itself. Major risks arise from myopic solution search and simplified selection criteria.

Third, it is at least conceivable to seek optimal solutions to realistically represented problems, that is, ideal problem-solving which is not satisficing in either sense. However, as noted earlier, this type of problem-solving is rarely observed in human contexts, and arguably impossible, in practical terms. Nonetheless, ideal metamodels are conceivable and play significant roles in abstract thought and formal approaches (March, 2006). Both problem representations and solutions are, in principle, fully ordered, and the best options are selected. Hence, I describe this metamodel as ideal optimizing. However, if people try to apply it in practice, they typically fall short owing to high complexity and limited capabilities, which is not to say they should not try. As March and Weil (2009) argue, pursuing unreachable ideals has its place, by helping to inspire and engage agents in the face of uncertainty and resistance.

Fourth, agents can seek satisfactory solutions to simplified problems, which is not satisficing either, because agents do not seek to optimize either problem representation or solution. Instead, both are no worse, at best, given some aspiration levels. In fact, this is the most frequent and feasible metamodel of problem-solving in practical terms (Sen, 2005). Much of the time, people solve imprecise problems in imprecise ways, which is good enough. Hence, we can expand Simon’s original analysis. As he correctly explains, there are situations in which agents rightly pursue optimal problem representation or optimal solutions, and one or the other might be attainable, or at least approachable. However, these satisficing options are an important subclass of problem-solving, not the full universe. Much practical problem-solving is not optimizing in either respect. It is fully maximizing instead, often owing to high complexity, or because optimizing is simply unwarranted. Such problem-solving is therefore doubly myopic, in both problem sampling and representation, and solution search and selection. I label this metamodel of problem-solving practical maximizing: choosing incompletely ordered, satisfactory solutions to incompletely ordered, simplified problems. From this perspective, many “fast and frugal heuristics” are instances of practical maximizing (see Gigerenzer, 2000).

It is also important to note that much practical maximizing is procedural, performed as individual habit or collective routine. The everyday world presents many ordinary problems which are appropriately solved in this way. Moreover, as the preceding chapter explains, this type of problem-solving is central to the coherence and organization of agentic modalities. In fact, many modalities come into being as systems of practical maximizing in problem-solving. Fortunately, enough problems are recurrent and easily recognized, requiring little analysis to resolve. Habitual and routine problem-solving are sufficient. Not surprisingly, agentic modalities cohere around patterns of such procedures. Bundles of habits then mediate personalities, and bundles of routines mediate organizations. There are fewer processing trade-offs in both scenarios because less effort is required. Procedural, practical maximizing is efficient and sufficient. That said, failures still occur and are impactful, because habit and routine often fulfill important control functions. Maximal does not mean minimal or trivial, but rather less than optimal.

Figure 4.1 summarizes the four major metamodels of problem-solving just described. The figure’s dimensions are the complexity of problems and the complexity of solutions. Both range from high to low. The figure also assumes that complexity is proportional to the degree of variation, and hence to the processing required for rank ordering. That is, the more varied the choice set, the more complex it is, and the more processing required to discriminate between options. Given these assumptions, quadrant 1 depicts the ideal optimizing metamodel or the best solution for the best representation of problems. Quadrant 2 shows descriptive satisficing, which is seeking satisfactory solutions to the best representation of problems. Next, quadrant 3 shows normative satisficing, or seeking the best solutions to simplified problems. Finally, quadrant 4 shows practical maximizing, which is seeking satisfactory solutions to simplified problems. In this final metamodel, problem representation and solution search are both no worse than the alternatives, and hence maximizing on both dimensions.

Fig. 4.1
A matrix diagram depicts the 4 combinations of high and low complexity of solutions with high and low complexity of problems.

Metamodels of problem-solving
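
To make the matrix operational, the following sketch encodes the four quadrants as a simple classification. The boolean encoding (whether each dimension is optimized or merely “no worse”) is an illustrative assumption for exposition, not a formalism taken from the figure itself.

```python
def metamodel(representation_optimized: bool, solution_optimized: bool) -> str:
    """Map a problem-solving episode onto the four metamodels of Fig. 4.1.

    Each flag records whether that dimension is fully ordered and the best
    option chosen (optimized), or only no worse than alternatives (maximal).
    """
    if representation_optimized and solution_optimized:
        return "ideal optimizing"         # quadrant 1: best solution, best representation
    if representation_optimized:
        return "descriptive satisficing"  # quadrant 2: satisfactory solution, best representation
    if solution_optimized:
        return "normative satisficing"    # quadrant 3: best solution, simplified problem
    return "practical maximizing"         # quadrant 4: satisfactory solution, simplified problem
```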

4.2 Dilemmas of Digital Augmentation

Many digital processes need to complete as quickly and accurately as possible and must avoid unnecessary processing. For example, consider the artificial agents which manage high reliability operations, monitor human safety, and mediate online transactions. They must function rapidly, with high accuracy, yet at the same time, gather and analyze massive volumes of data. Computer scientists therefore research how to maximize the efficiency of their processing. Adding to the challenge, artificial agents can easily over-sample problems, over-compute solutions, and over-complete rank ordering (e.g., Lee & Ro, 2015). Granted, overprocessing is sometimes beneficial. It can enhance robustness, by generating a richer set of options and thus slack. However, the risk is that processing becomes overly complex and less efficient. These are central issues for the design and supervision of artificial agents.

To mitigate these risks, artificial agents also simplify problem-solving. They accomplish this using algorithmic hyperheuristics and metaheuristics, defined as shortcut means of specifying metamodel hyperparameters and model parameters, respectively (Boussaid et al., 2013). Hyperheuristics are first used to compose choice sets of potential models of problem-solving and thereby to define metamodels of problem-solving, for example, composing sets of calculative or associative approaches (Burke et al., 2013). Next, given the resulting metamodel, metaheuristics are employed to select the appropriate model for solving a particular problem (Amodeo et al., 2018). The chosen model is then applied to resolve the focal problem, for example, using specific heuristic procedures. Importantly, at each level of processing, simplifying heuristics help to manage complexity. As Chap. 1 also explains, research in artificial intelligence focuses on optimizing such hierarchies of heuristics. They are critical for the efficiency and effectiveness of problem-solving.
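
To illustrate this layering, the sketch below separates the three levels just described: a hyperheuristic composes the metamodel (a choice set of candidate models), a metaheuristic selects one model from that set, and the chosen model's own heuristic resolves the focal problem. All names, signatures, and selection rules are hypothetical placeholders for exposition, not an existing library API.

```python
from typing import Callable, Dict, List

Problem = Dict[str, object]            # e.g., {"context": "formal", "data": [...]}
Model = Callable[[Problem], object]    # a solver mapping a problem to a solution

def hyperheuristic(problem: Problem,
                   library: Dict[str, List[Model]]) -> List[Model]:
    """Level 1: compose the metamodel, i.e., a choice set of candidate models,
    using a coarse contextual cue (calculative versus associative families)."""
    family = "calculative" if problem.get("context") == "formal" else "associative"
    return library[family]

def metaheuristic(problem: Problem, candidates: List[Model]) -> Model:
    """Level 2: select one model from the metamodel's choice set; here a crude
    stand-in that prefers simpler models for smaller problems."""
    size = len(problem.get("data", []))
    return candidates[0] if size < 100 else candidates[-1]

def solve(problem: Problem, library: Dict[str, List[Model]]) -> object:
    """Level 3: apply the chosen model's heuristic procedure to the focal problem."""
    candidates = hyperheuristic(problem, library)
    model = metaheuristic(problem, candidates)
    return model(problem)
```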

Human agents do likewise, although often unconsciously and automatically. They use simple, often routine hyperheuristics and metaheuristics. When faced with a new problem, a person may unconsciously deploy an encoded hyperheuristic to specify the appropriate metamodel of problem-solving (Fiedler & Wanke, 2009). For example, the context may be familiar and uncomplicated, suggesting a simplified, heuristic approach. Next, the person will apply a metaheuristic to choose one specific model. Perhaps the focal problem reflects prior experience and can be solved using limited sampling. Fast and frugal heuristic procedures could work very well. Studies of “gut feel” in decision-making exhibit this pattern (Gigerenzer, 2008).

Digital augmentation is now transforming this domain. For example, many experts use real-time, decision support systems powered by artificial intelligence (McGrath et al., 2018). Human intuition and calculation are being digitally augmented, and it may no longer be necessary or appropriate to rely solely on human inputs. In fact, Herbert Simon (1979, p. 499) predicted this shift many years ago: “As new mathematical tools for computing optimal and satisfactory decisions are discovered, and as computers become more and more powerful, the recommendations of normative decision theory will change.”

The challenge for human agents is learning how to integrate these additional sources of information and insight into problem-solving. However, given the complexity and speed of artificial processes, they are often opaque (Jenna, 2016). In fact, the inner workings of complex algorithms mirror the opacity of the human brain. It may be impossible to know exactly what artificial agents are doing, especially in real-time processing. In these respects, natural and artificial neural networks are deeply alike. Both employ extremely complex, dynamic connections, which are difficult to monitor, supervise, and predict (Fiedler, 2014; He & Xu, 2010). That said, the similarity of both agents increases the likelihood of developing effective methods of integration and supervision. Human and artificial agents are feasible collaborators in augmented agency.

Risk of Farsighted Processing

Furthermore, as collaborative capabilities become more powerful, augmented agents might err toward overly farsighted sampling and search, the opposite of myopia. They could easily over-sample the problem environment, search too extensively, and then over-compute solutions. Farsighted, in this context, refers not simply to spatial distance, but to any sampling or search vector. To conceptualize this effect, I borrow a term from ophthalmology, “hyperopia,” which means farsighted vision, the opposite of nearsighted myopia (Remeseiro et al., 2018). When hyperopia occurs in problem-solving, agents sample in a farsighted fashion to represent problems in rich detail, or they search for solutions in a farsighted, extensive way. In both cases, hyperopia increases the overall complexity of problem-solving (Boussaid et al., 2013). Computer scientists already research ways to avoid these risks. Better supervision is key.

In contrast, with a few notable exceptions once again (see Denrell et al., 2017; Fiedler & Juslin, 2006; Liu et al., 2017), studies of human problem-solving largely ignore hyperopic risks. This neglect is partly owing to the priorities discussed earlier, namely, that scholars traditionally focus on myopia and limited capabilities. Moreover, when myopia is the primary focus of concern, farsighted sampling and search (that is, hyperopia) may be a welcome antidote. If so, then a modest degree of hyperopia is not a problem, but a potential advantage (e.g., Csaszar & Siggelkow, 2010). Further, when farsighted sampling and search do occur, they are typically viewed as natural characteristics relating to perceived temporal, spatial, and social distance (Trope & Liberman, 2011). For all these reasons, hyperopia is rarely included in studies of human problem-solving, and almost never viewed as problematic. However, in a world of digitally augmented capabilities, hyperopia is likely and potentially extreme. Going too far becomes a significant risk.

That said, neither myopia nor hyperopia is inherently erroneous. Indeed, depending on the problem context, if commitments, values, and interests are well served, and if satisfactory controls are maintained, then myopic or hyperopic sampling and/or search can be appropriate and highly effective (e.g., Gavetti et al., 2005). This is true for human and artificial agents alike. For example, when problems are stable and recurrent, myopic sampling and search may be fully suited to the task (Cohen, 2006). This is often the case in habitual and routine problem-solving. Alternatively, when problems are complex, multidimensional, and not urgent, then hyperopic processes could be more appropriate (Bandura, 1991; Forster et al., 2004). This is often the case in technical problem-solving and replication studies.

Dilemmas of Hyperopia

Notwithstanding these exceptions, humans tend to be myopic in sampling the problem environment and searching for solutions (Fiedler & Wanke, 2009). People must be trained, therefore, to overcome their natural myopia. Many areas of education and training focus on doing this, developing capabilities and teaching students to sample and search more widely in specific domains. Moreover, if this training is successful, lessons are deeply encoded. Yet such learning is problematic in digitalized contexts. This is because digitally augmented capabilities increasingly transcend human limitations. Consequently, hyperopic efforts may be redundant because this is what digitalized systems are good at. But people may continue striving to overcome their limits and myopias—to extend problem sampling and solution search—irrespective of the extra capabilities acquired through digital augmentation. Trained to overcome limits, they continue reaching for hyperopia, trying to be more farsighted despite the fact that digitally augmented processes already do so. The overall result is likely to be excessive sampling and search, or extreme hyperopia. What was corrective in previously myopic contexts is now a source of hyperopic distortion.

Artificial agents face complementary challenges in this regard. In contrast to humans, however, artificial agents are built to be hyperopic: to sample and search widely, gather massive volumes of information, and then process inputs with great speed and precision. Hence, artificial agents also tend toward hyperopic sampling and search, but by design. For this reason, they can also go too far and be overly hyperopic, which increases complexity and reduces efficiency. When these dispositions are imported into augmented agency, artificial and human agents easily compound each other. Human agents are trained to go further, and artificial agents go further by design. Hence, computer scientists research how to prevent over-sampling and over-computation, and to limit hyperopia (Chen & Barnes, 2014). This has led to a range of technical solutions, including hyperheuristics and metaheuristics, and algorithmic constraint satisfaction (Amodeo et al., 2018; Lauriere, 1978). In fact, a recent study conceptualizes “constraint satisficing,” mimicking Simon’s work in behavioral theory (Jaillet et al., 2016). Problems still arise, however, when simplifying heuristics are infected by human myopias and other priors (Osoba & Welser, 2017).

Myopia with Hyperopia

For complementary reasons, therefore, both human and artificial agents are trained to supervise the upper and lower bounds of complexity in problem-solving. On the one hand, human agents are naturally myopic and trained to do more; on the other hand, artificial agents are naturally hyperopic and trained to do less, in specific contexts. The two agents thus move in opposite directions and are trained to correct in opposing ways, especially in complex problem-solving. Given these divergent characteristics and strategies, if collaborative supervision is poor, they will easily undermine each other and reinforce problematic tendencies. Artificial agents could be overly hyperopic, while their corrective procedures reinforce human myopia (Balasubramanian et al., 2020). At the same time, humans could remain myopic, while their corrective procedures reinforce artificial hyperopia. In this fashion, inadequate supervision of augmented agency could lead to problem-solving which is highly myopic in some human respects and highly hyperopic in artificial ways. Stated otherwise, augmented agents could be extreme satisficers, either seeking overly optimized solutions to overly simplified problems (extreme normative satisficing) or accepting overly simplified solutions to overly detailed representations of problems (extreme descriptive satisficing).

Once again, we see the effects of poorly supervised, entrogenous mediation. Ideally, augmented agents will use these dynamic capabilities to maximize metamodel fit, adjusting sampling and search to suit the problem context. As noted earlier, however, the supervision of such capabilities will be challenging, given the speed and precision of digitalized updates. Each agent will struggle to monitor the other. Distorted outcomes are likely if the supervision of entrogenous mediation is poor. First, intelligent sensory perception might simply reinforce human myopia. For example, when racial biases guide hyperopic sampling in machine learning, algorithms quickly become discriminatory (Hasselberger, 2019). Second, if the supervision of performative action generation is equally poor, it could reinforce existing procedures, for example, by escalating racially biased behaviors. And when both effects occur, human myopia and artificial hyperopia will compound to produce dysfunctional, highly discriminatory problem-solving.

To conceptualize these effects, I adopt another ophthalmic term, “ambiopia,” which means double vision and is also referred to as “diplopia” (Glisson, 2019). In these conditions, the same object is perceived at different distances by each eye—one being nearsighted and myopic, the other farsighted and hyperopic—causing the agent to perceive the image with distorted, double vision (Smolarz-Dudarewicz et al., 1980). Moreover, like other novel terms in this book, “ambiopia” includes the prefix “ambi” which is Latin for “both.” In the diagnosis of vision, ambiopia refers to the compounding of visual distortions. Analogously, in problem-solving, it can refer to the compounding of nearsighted myopia with farsighted hyperopia in problem representation and/or solution search.

Summary of Digitalized Problem-Solving

Based on the preceding discussion, we can summarize digitally augmented problem-solving. First, like human agents, artificial agents perform two key processes: sampling and data gathering, leading to problem representation; then searching for and selecting solutions to such problems. Humans are limited in these respects and tend to be myopic, while artificial agents are capable of increasingly complex, hyperopic sampling and search. In fact, digital augmentation imports the hyperopic methods of experimental science into ordinary problem-solving. Second, human and artificial agents both use heuristics to choose between alternative logics, models, and procedures of problem-solving (Boussaid et al., 2013). Heuristics are often layered, in a hierarchy of increasing specificity, from hyperheuristics in the specification of metamodels to metaheuristics about models and then heuristics for specific solutions. Third, human agents often encode ontological, epistemological, and normative commitments into artificial agents, which then guide sampling and search and help to establish the threshold of sensitivity to variance. Such priors are frequently distorting, however, when myopia and bias are amplified by artificial means. Fourth, both types of agents need to manage the risks of myopic and hyperopic sampling and search, balancing the demands of speed, accuracy, efficiency, and appropriateness. Otherwise, human myopia and artificial hyperopia will combine to produce dysfunctional, ambiopic problem-solving.

The goal for augmented agents, therefore, is to maximize metamodel fit in any problem context. This will entail adjusting the relative myopia and hyperopia of sampling and/or search, combining both human and artificial capabilities, although, as noted, such supervision will be challenging. Instead, agents’ inherent tendencies will often lead to extreme patterns, either combining persistent human myopia with unfettered artificial hyperopia (extreme divergence and high ambiopia) or allowing one agent fully to dominate the other (extreme convergence and low ambiopia). Notably, these risks are already topics of research in artificial intelligence, and computer scientists are actively working on them (Amodeo et al., 2018; Burke et al., 2013). What is not yet adequately understood is how these processes impact problem-solving by human-artificial augmented agents.

4.3 Illustrative Metamodels

Building on the preceding discussion, the following sections illustrate two major metamodels of digitally augmented problem-solving, that is, metamodels which are highly digitalized, similar to the generative metamodel of agency shown in Fig. 2.3. The first scenario illustrated below is a highly ambiopic metamodel of problem-solving, with very divergent levels of complexity and simplification, in problem representation and solution search. The second is a non-ambiopic metamodel with very convergent levels of complexity and simplification. While these two scenarios are not exhaustive, they highlight ambiopic risks, associated mechanisms, and their consequences.

Highly Ambiopic Metamodels

Figure 4.2 illustrates highly ambiopic metamodels of problem-solving. The vertical axis shows the level of complexity of problem representation, and the horizontal axis shows the complexity of solution search. Both range from high complexity to relative simplicity, where high complexity implies a hyperopic, farsighted process, and low complexity (or simplification) implies a myopic, nearsighted process. In addition, the figure shows two levels of processing capability, depicted by curved lines. One is labeled L2 and represents the processing capability of modernity, which assumes relatively moderate levels of technological assistance. The second is labeled L3, representing the greater processing capabilities of digital augmentation, now assuming high levels of technological assistance.

Fig. 4.2
A graph of complexity of problem representation versus complexity of solution search depicts 2 decreasing curves which depict the limits of processing capability.

Highly ambiopic problem-solving

As the figure shows, the greater the processing capabilities, the greater the complexity of problem representation and solution search. Capabilities and achievable complexity are positively associated. Nevertheless, even digitalized capabilities remain limited to some degree, meaning capability must be distributed between problem representation and solution search. Figure 4.2 depicts this type of distribution. It also shows that combined complexities approach limiting asymptotes, for both problem representation and solution search. These upper limits are almost never reached, in practical terms. But they do play a significant role in formal modeling, and in setting the upper bounds of problem representation and solution search.
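
One simple way to express such a frontier, offered purely as an illustrative assumption rather than the functional form behind Fig. 4.2, is as a capability constraint on the two complexities. Writing $c_r$ for the complexity of problem representation and $c_s$ for the complexity of solution search, each capability level $L$ admits only combinations satisfying $f(c_r, c_s) \le K_L$, with $K_{L3} > K_{L2}$, while the asymptotes $\bar{c}_r$ and $\bar{c}_s$ mark upper bounds that are approached but, in practice, never reached. On this reading, a metamodel occupies a segment of the frontier: descriptive satisficing sits where $c_r$ is high and $c_s$ is low, normative satisficing the reverse, and practical maximizing where both remain low.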

Descriptive Satisficing

The next features of Fig. 4.2 to note are the segments within it. To begin with, recall that descriptive satisficing is defined as seeking satisfactory solutions to the best representation of problems (see quadrant 2 in Fig. 4.1). Now consider D2 in Fig. 4.2, which depicts such a metamodel, assuming modern processing capabilities L2. Problem representation is moderately complex, and the solution is simplified. Overall, therefore, D2 is moderately divergent and ambiopic. It also suggests that solution search is anchored (and hence semi-supervised) by human myopia. This type of problem-solving is common in behavioral and informal approaches. Also note that D2 intersects with a small, curved section of L2. This feature illustrates a degree of possible variance in problem-solving, or in other words, the satisficing nature of such problem-solving.

By contrast, the segment labeled P2 depicts practical maximizing, given capabilities L2. This type of problem-solving was previously defined as the simplified representation of problems, combined with the search for satisfactory, simpler solutions (see quadrant 4 in Fig. 4.1). Therefore, the levels of complexity are relatively low and roughly equal in P2, meaning this type of problem-solving is non-ambiopic. Neither sampling nor search is particularly hyperopic; rather, both are relatively myopic. In fact, P2 illustrates the actual problem-solving of most agents in a modern, behavioral world. People are not optimizing in either sense, but rather solve simplified problems in efficient ways, often using heuristic and intuitive means. Practical problem-solving is often like this.

Next, consider the segment labeled D3 in Fig. 4.2, which assumes stronger digitalized capabilities at level L3. Here too, the segment denotes descriptive satisficing. However, D3 shows greater divergence between the complexity of problem representation and simplification of solution search, and D3 is therefore highly divergent and ambiopic overall. Agents are now digitally augmented and capable of more complex problem representation and satisfactory solution search. They sample in a farsighted, hyperopic fashion, but solution search remains anchored in the same myopias as D2, for example, when racially biased priors are encoded into machine learning algorithms. In consequence, D3 constitutes an extreme form of descriptive satisficing.

Finally, the segment labeled P3 depicts practical maximizing, assuming digitally augmented capabilities L3. That is, P3 depicts incompletely ordered, simpler solutions, to incompletely ordered, simpler problems. As in P2, the levels of complexity in P3 are roughly equal, meaning this type of problem-solving is non-ambiopic, but it is highly myopic overall. In fact, P3 almost equals P2. This is because prior anchoring commitments have not shifted but are carried over from modernity to digitalization. This scenario illustrates the persistence of ordinary commitments and procedures. Even with digitalized capabilities, people do not seek to optimize, in either sense, but continue to rely on heuristic and intuitive means. They are persistently human, notwithstanding digital augmentation.

Normative Satisficing

Now consider the other set of segments in Fig. 4.2. To begin with, recall that normative satisficing is defined as optimal solutions to simplified problems. Segment N2 depicts such a metamodel, assuming modern processing capabilities L2. Within N2, the figure shows moderate divergence between the two dimensions of complexity—problem representation and solution—and therefore N2 is moderately divergent and ambiopic overall. As noted earlier, this type of problem-solving is often axiomatic and calculative, as in classical economics: seeking optimal, calculative solutions to simplified problems of utility. The segment labeled P2, by contrast, again depicts practical maximizing, given modern capabilities L2. It illustrates non-ambiopic, actual problem-solving in the modern world of consumption and exchange. In such a world, most people do not optimize, nor do they try to. Rather, they solve the ordinary problems of transactional life using habitual or routine, heuristic, and intuitive means.

Next, consider the segment labeled N3, which depicts extreme normative satisficing, assuming stronger, digitalized capabilities L3. In this kind of problem-solving, artificial processes enable hyperopic search, but problem representation remains anchored in the same myopias as N2. Therefore, N3 shows even greater divergence between the two levels of complexity, and N3 is highly divergent and ambiopic overall. In fact, this type of distorted problem-solving is observed in semi-supervised machine learning, when hyperopic artificial intelligence amplifies human myopia and bias (Osoba & Welser, 2017). The segment labeled P3, by contrast, depicts practical maximizing, given capabilities L3. As in P2, the levels of complexity in P3 are roughly equal, meaning this problem-solving is non-ambiopic. In these respects, P3 illustrates the actual problem-solving of augmented agents in the behavioral world. Consider, for example, how many people search the internet or shop online, saving favorites and encoding habits.

Descriptive and Normative Satisficing

In poorly supervised augmented agents, both types of extreme satisficing (descriptive D3 and normative N3) are likely and will often occur together. Persistent human priors will be myopic and artificial hyperopia will be largely unchecked. Both descriptive and normative satisficing will then be ambiopic. Hence, the overall problem-solving system is ambiopic as well. This is what Fig. 4.2 depicts. The augmented agent combines both types of extreme satisficing at L3. In consequence, overall problem-solving by this augmented agent is highly divergent and skewed, poorly fitted, and most likely dysfunctional. Digitally augmented individuals, groups, and collectives will be equally vulnerable in this way, if collaborative supervision is poor.

Non-ambiopic Augmented Metamodels

In contrast, Fig. 4.3 illustrates non-ambiopic metamodels of problem-solving. Once again, the vertical axis shows the level of complexity of problem representation, and the horizontal axis shows the complexity of solution search, both again ranging from low to high. The figure also shows two levels of processing capability. L2 represents the processing capability of modernity, as in Fig. 4.2, while the greater processing capabilities of digital augmentation are here labeled L4, to distinguish them from L3 in Fig. 4.2. Apart from this distinction, Fig. 4.3 shares its core features with Fig. 4.2. In fact, the segments D2, N2, and P2 are equivalent in both figures. They again illustrate modern, moderately assisted problem-solving and, therefore, do not require repeated explanation.

Fig. 4.3
A graph of complexity of problem representation versus complexity of solution search depicts 2 decreasing curves which depict the limits of processing capability.

Non-ambiopic problem-solving

But now consider the segment labeled D4 in Fig. 4.3. It depicts descriptive problem-solving, given digitally augmented processing capabilities L4. That is, seeking solutions to richly described representations of problems. However, in contrast to D3 in Fig. 4.2, segment D4 shows no significant divergence between the two levels of complexity. This implies the relaxation of human priors and limited artificial hyperopia. Hence, D4 is neither ambiopic nor satisficing, because it does not trade off simplification for optimization. This scenario is non-hyperopic and non-myopic, in problem sampling and solution search. Furthermore, D4 is shown to equal N4. In other words, descriptive and normative methods are conflated. Neither is ambiopic nor satisficing. Instead, digitally augmented capabilities allow agents to heighten both problem representation and solution, to equal levels of complexity. By doing so, the agent mitigates myopia and hyperopia. In essence, description becomes highly computational, and normative computation is richly descriptive (Yan, 2019).

For similar reasons, the segment labeled P4, which depicts practical maximizing, is equivalent to D4 and N4 as well. In fact, all three segments overlap. What this illustrates is that the agent fully relaxes prior commitments and forgoes optimization altogether. The result is practical maximizing that is highly contextual and generative. Moreover, owing to digitally augmented capabilities, such maximizing may achieve a high level of completeness. Also note that all three metamodels intersect with a curved section of L4. This feature illustrates a degree of possible variance, or in other words, the maximizing nature of such problem-solving. In this fashion, P4 overcomes the traditional polarity between descriptive and normative problem-solving. All problem-solving at level L4 is highly augmented and non-ambiopic in this scenario, although, by the same token, P4 shrinks the role of ordinary human intuition, values, and commitments.

Therefore, human priors are relaxed and artificial hyperopia is controlled. Both problem representation and solution search will be largely free of human myopias and excessive artificial hyperopia. This is what Fig. 4.3 depicts. Problem-solving is non-ambiopic and fully maximizing, from a practical perspective. However, as explained above, this type of problem-solving reduces the role of ordinary human intuition, values, and commitments. Granted, the system achieves greater precision and integration, but it also depletes problem-solving of important human qualities. This approach is also dysfunctional, therefore, when problem-solving warrants the inclusion of humanistic factors and commitments.

Moderately Ambiopic Augmented Metamodels

Other digitalized metamodels will be less extreme, better supervised, and moderately ambiopic. Augmented problem-solving of this kind is more balanced. It includes some human supervision of descriptive and normative satisficing, while also exploiting the benefits of augmented, practical maximizing. Agents accept a modest degree of myopia and hyperopia in sampling and search, often using both structured and unstructured data, given agreed criteria of supervision. In this kind of metamodel, the segments D4, N4, and P4 will be partially distinct and not fully equivalent. The overall system of problem-solving will admit more human inputs, referencing personal and cultural values, goals, and commitments, while avoiding excessive myopia; at the same time, it will allow some artificial agents to operate fully independently of human supervision, while avoiding excessive hyperopia. In this fashion, augmented agents exploit digitalized capabilities, while preserving valued features of human and artificial problem-solving, thereby achieving strong metamodel fit. For this reason, many behavioral and social contexts will favor moderately ambiopic, augmented problem-solving.

4.4 Implications for Problem-Solving

Digital augmentation promises great advances for problem-solving, assuming human and artificial agents learn to function effectively as augmented agents, working together with mutual trust and empathic supervision. To some, it may seem strange to describe artificial agents in this way, almost as if they were human. Some may reject the description as fanciful, and even as dangerous. However, recent technical innovations are compelling. Artificial agents already surpass humans in many calculative functions, and recent developments enable associative and creative intelligence (Horzyk, 2016). In addition, artificial agents are rapidly acquiring empathic capability, which allows them to interpret and imitate personality, emotion, and mood. Many agents also function in a fully autonomous, self-generative fashion. When combined, these capabilities are approaching human levels in significant respects (Goertzel, 2014), at least to the degree required for meaningful collaboration in augmented problem-solving.

At the same time, significant challenges lie ahead, as humans respond to the rapid growth of artificial capabilities. The combinatorics are challenging. On the one hand, human absorptive capacities are limited, people habitually simplify, biases and myopias easily intrude, and learning is often truncated. On the other hand, artificial intelligence and machine learning race ahead at unprecedented speed and scale. Indeed, we constantly see more powerful examples. However, owing to lagging skills of collaborative supervision, these technical innovations could amplify (rather than mitigate) the weaknesses of human problem-solving. Hence, humanity faces a growing challenge: to ensure that augmented problem-solving exploits the power of digitalization, while managing human needs and potential costs.

As this chapter reports, many are working on these questions. Some are optimistic (Harley et al., 2018; Woetzel et al., 2018). They point to positive developments, such as the diffusion of knowledge, greater variety of choice, and the delivery of highly intelligent services, not to mention advances in complex problem-solving. Others are more pessimistic. They highlight the contagion of digitalized falsehood, bias, and social discrimination in problem-solving, plus intrusive surveillance and manipulation, whereby elites seek to control the flow of online information and analysis (Osoba & Welser, 2017; Zuboff, 2019). The mechanisms exposed here help to explain what is occurring in these situations, namely, the deliberate use of myopia, hyperopia, and ambiopia, in digitally augmented problem-solving. Whether one is optimistic or pessimistic about the future, these mechanisms warrant urgent attention.

Myopia, Hyperopia, and Ambiopia

Among the most important topics for further research, therefore, are the risks of doing too much and too little, that is, of poorly supervised myopia plus hyperopia in sampling and search, leading to extremely divergent, ambiopic problem-solving (Baer & Kamalnath, 2017). As noted earlier, computer scientists already research similar risks. Many mitigating strategies focus on semi-supervised learning (Jordan & Mitchell, 2015). To date, however, these higher order procedures are not major topics for behavioral and organizational research (Gigerenzer & Gaissmaier, 2011). They should be. Augmented agents will confront these risks as well. Their goal will be to maximize metamodel fit in any problem context. Otherwise, augmented agents face the prospect of dysfunctional problem-solving.

The argument also highlights the role of human commitments, and especially those which serve as reference criteria about what is realistic, reasonable, and ethical, in problem-solving. Such criteria often emerge over time, are culturally embedded, and have institutional expression (Scott & Davis, 2007). Such commitments are deeply imprinted in thought and identity (Sen, 1985). For this reason, they are, and often should be, difficult to change and adapt. Indeed, resilient commitments play an important role in sustaining institutions, social relations, and personalities. The risk is that, absent appropriate supervision, inflexible commitments and their escalation can lead to excessive myopia and hyperopia in sampling and search. Overall problem-solving then becomes highly ambiopic for no good reason, and therefore dysfunctional.

Bounded and Unbounded

Myopic risks reflect the natural limits of human capabilities, which are widely assumed in modern thought. Whether in theories of perception, reasoning, empathy, memory, agency, or reflexive functioning, scholars assume limited human capabilities. In relation to problem-solving, Simon (1979) explains the bounded nature of human calculative rationality, and why agents satisfice against relevant performance criteria, rather than fully optimizing. As noted earlier, he formulated two broad methods of satisficing, which I label normative and descriptive. The former seeks optimum solutions for a simplified world, and the latter, satisfactory solutions for a detailed, realistic world. Simon’s insights have influenced numerous fields of enquiry, including behavioral theories of problem-solving and decision-making, the management and design of organizations, and branches of economics (Gavetti et al., 2007).

However, cognitive boundedness is significantly mitigated by digital augmentation. Digital technologies massively enhance everyday processing capabilities, especially in complex problem-solving. Humans can perceive, reason, and memorize with far greater precision, speed, and collaborative reach. At least, these extensions are now feasible. In these respects, augmented agents can be bounded and unbounded, at the same time. This occurs because human agents will likely retain their natural boundedness, especially in everyday cognitive functioning. At the same time, artificial agents will be increasingly unbounded. When both agents join in collaborative problem-solving, therefore, the resulting augmented agents could be simultaneously bounded and unbounded. In other words, they will exhibit functional ambimodality, as distinct from the organizational types of ambimodality discussed in the preceding chapter.

Satisficing then becomes more complicated, but also more important, because it can help to limit overprocessing, including the tendency toward overly hyperopic sampling and search. The role of satisficing will therefore expand and deepen. Instead of satisficing because of limited capabilities, augmented agents will satisfice because of extra capabilities. Deliberate satisficing will help to avoid unnecessary optimization. Put another way, digitally augmented agents will satisfice, not only in response to limits, but to impose limits. They will choose descriptive or normative satisficing, even when ideal optimization is feasible, or at least approachable. People sometimes do this already when they employ heuristics (Gigerenzer & Gaissmaier, 2011). Artificial agents do as well when they limit their own processing to improve speed and efficiency. Augmented agents will do the same, by managing myopia and hyperopia to maximize metamodel fit in problem-solving, forgoing possible optimization for good reasons.
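
A minimal sketch of this deliberate, limit-imposing satisficing is an aspiration-level stopping rule with an explicit search budget. The function below is illustrative only; the budget parameter stands in for whatever supervisory constraint an augmented agent might place on hyperopic search, and is an assumption rather than a formalism from the text.

```python
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def satisfice(options: Iterable[T],
              value: Callable[[T], float],
              aspiration: float,
              budget: int = 100) -> Optional[T]:
    """Return the first option meeting the aspiration level, stopping early
    even when further search is feasible; fall back to the best option seen
    if the budget is exhausted without reaching the aspiration."""
    best, best_value = None, float("-inf")
    for examined, option in enumerate(options):
        if examined >= budget:        # imposed limit: curbs hyperopic over-search
            break
        v = value(option)
        if v >= aspiration:           # good enough: stop deliberately
            return option
        if v > best_value:
            best, best_value = option, v
    return best
```

For example, satisfice(range(1000), lambda x: -abs(x - 37), aspiration=-2.0) stops at 35, the first candidate within two units of the target, rather than searching on for the exact optimum at 37.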

This analysis has major implications for the fields mentioned earlier, which assume Simon’s analysis of boundedness, including behavioral theories of problem-solving and decision-making, the management and design of organizations, and related fields of behavioral economics and choice theory. Each field will need to revisit its core assumptions, to accommodate less bounded capabilities and intentional satisficing. And when this happens, all of economics starts to look behavioral, as Thaler (2016) predicts. In similar fashion, scholars may need to rethink the assumed opacity of preference ordering, interpersonal comparison, and collective choice (Sen, 1997a). Given the expanded capabilities brought by digital augmentation, it will become feasible to seek comparative transparency and almost complete ordering, and to approach optimization in some digitalized contexts. Granted, this may not be desirable. It could erode human diversity and creativity. But this type of choice will be feasible, nonetheless. Mindful of these risks, augmented humanity will need to monitor and manage the risks of over-completion in preference ordering and collective choice, and often choose to be better rather than perfect (see Bazerman, 2021).

Extended, Ecological Rationality

Another notable implication of digitalization is the extension of systematic intelligence to problem sampling and representation. In the past, everyday problems were taken as given, the intuited products of experience and sensory perception, whereas rigorous problem sampling and representation have been the preserve of empirical science. For this reason, most theories of behavioral problem-solving assume that systematic intelligence relates to solution search and selection, but rarely to problem sampling and representation. Rationality has been about finding solutions and making decisions, not about the specification of problems as such. However, digital augmentation upends these assumptions too. New tools and techniques allow augmented agents to reason systematically during problem sampling and representation. In this regard, recall the discussion of feedforward mechanisms and entrogenous mediation in Chaps. 1 and 2. Problem sampling and representation will be updated in a rapid, intra-cyclical fashion, through intelligent sensory-perception. Sampling and representation become reasoned activities. Ecological theory should therefore expand to embrace realism as well as rationality. Both aspects of problem solving will be contextual and dynamic.

Augmented agents will therefore apply intelligence to problem sampling and representation, not only to solution search and selection. For example, important problems regarding personal health, finances, and consumer preferences will be identified and curated by artificial agents, often in real time. In fact, this already happens, via smartphone applications. In the background, systems analyze and update problems in real time. However, this also entails that many processes will not be fully accessible to consciousness. In fact, as in other augmented domains, ordinary consciousness will play a different role in problem-solving. It will be an important source of humanistic guidance, but less significant as a window onto fundamental reality and truth. In all these ways, augmented problem-solving calls for an extended, ecological understanding of realism and rationality (Todd & Brighton, 2016).

This shift has another, profound implication. Important social-psychological distinctions relate to proximal versus distal processing. Construal Level Theory, for example, assumes that humans treat phenomena and problems differently, depending on their perceived spatial, temporal, social, and hypothetical distance (Trope & Liberman, 2011). If close or proximal, they are treated as more practical, short term, parochial, and risky, whereas if distal, they are more exploratory, long term, and expansive. Higgins’ (1998) Regulatory Focus Theory assumes comparable distinctions. However, if digitally enabled hyperopia draws everything closer on these dimensions, then what is distal with respect to human experience and capabilities could be proximal in artificial terms. When combined, augmented agents could perceive problems as proximal and distal at the same time. This would likely lead to ambiguous or conflicting construals, and misguided sampling and search. Once again, augmented agents must learn to manage these ambiopic risks.

Culture and Collectivity

Digitally augmented problem-solving has cultural implications as well. To begin with, communities share problems which they represent and resolve at a collective level. Such problem-solving is often divided between the domains of science and technology, on the one hand, and human value and meaning, on the other. In fact, some observers of modernity refer to two dominant cultures (March, 2006; Nisbett et al., 2001). Digitalization problematizes these distinctions. Already, digitalization is transforming the creative arts and entertainment. Artificial agents make meaning and create aesthetic value. In consequence, the two cultures are blending, at least in these domains. Numerous potential benefits accrue, in terms of cultural interaction and understanding. However, it is equally possible that these trends could amplify ambiopic problem-solving and exacerbate cultural divergence and division (Kearns & Roth, 2019). Here, too, the challenges of digital augmentation are far from understood, let alone effectively supervised. It remains an open question whether augmented agency will evolve quickly enough to manage these growing risks.

In the modern period, systematic reason and experimental science support unprecedented problem-solving capabilities. Digitalization extends this historic narrative to everyday problem representation and solution search. Yet in doing so, digital augmentation alters the dynamics of problem-solving itself. Every aspect of problem-solving becomes more intelligent and agile. However, human beings remain limited by nature and nurture, and these human factors will persist. Moving forward, therefore, research should focus on the interaction of human and artificial agents in augmented problem-solving, the novel risks of hyperopia and ambiopia, and how collaborative supervision can mitigate these risks and maximize metamodel fit. Many of these questions already loom large in computer science. They deserve equal attention from scholars in the human and decision sciences.