8.1 Introduction: Failure Is Due to Brittle Systems

Failure is due to brittle systems, not erratic components, subsystems, or human beings. Of course, such entities are limited by constraints on resources, a world that continues to change and grow, and the necessity of navigating trade-offs between critical but conflicting purposes [1, 2]. But a new science has emerged, drawing on results from diverse fields that study how human systems adapt to complexity. This new science highlights how real systems can build and sustain their adaptive capacities to overcome the risk of brittle collapse.

One classic finding is that increasing pressure for compliance with plans, standards, and procedures inevitably increases brittleness and degrades the ability of the system and organization to adapt to the challenges ahead. This is a fundamental property of how the world we occupy functions and malfunctions, as revealed empirically in studies, experientially in practice, and formally in proven science such as the Robust yet Fragile Theorem (e.g., [3]).

What is brittleness? Descriptively, brittleness is how rapidly a system’s performance declines when it nears and reaches its boundary. Brittle systems experience rapid performance collapses, or failures, when events challenge system boundaries. Due to the universal constraints of (a) finite resources and (b) the inherent variability of its environment in a continuously changing world, each system has an envelope within which it is capable of competent performance. Because competence envelopes are bounded, a core question for all systems is: how does the system perform when events push it near or beyond the edge of its envelope? When a system is unable to “stretch” as challenges arise in the boundary regions—when it is slow and stale to adapt—the risk of brittle collapse rises [4]. With the right forms of adaptive capacity, systems can anticipate bottlenecks ahead, synchronize activities across roles and layers for mutual assistance as stress grows, and maintain the readiness-to-respond needed to reconfigure and reprioritize activities to fit the challenges [5].
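
To make the descriptive definition concrete, the following toy sketch contrasts a brittle system, whose performance collapses abruptly once demand passes its competence envelope, with an extensible one that stretches in the boundary region. The model form, function names, and numbers are illustrative assumptions for exposition, not from the chapter:

```python
# Toy model (illustrative only): performance as a function of demand
# relative to a system's competence envelope.

def brittle_performance(demand, envelope=100.0):
    """Full performance inside the envelope, sudden collapse beyond it."""
    return 1.0 if demand <= envelope else 0.0

def extensible_performance(demand, envelope=100.0, stretch=30.0):
    """Performance degrades gracefully as adaptive capacity is deployed
    in the boundary region, buying time to reconfigure and reprioritize."""
    if demand <= envelope:
        return 1.0
    return max(0.0, 1.0 - (demand - envelope) / stretch)

for demand in (90, 105, 120):
    print(demand, brittle_performance(demand), extensible_performance(demand))
# brittle:     1.0, 0.0,  0.0   -> rapid collapse at the boundary
# extensible:  1.0, 0.83, 0.33  -> stretched performance near the boundary
```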

The risk of brittle collapse is evident all around us in today’s worlds, which operate at scale with extensive interdependencies across layers, roles, and organizations. The list of events is extensive and expands almost weekly (e.g., as this chapter was being written, the SpaceX Starship launch failure of April 20, 2023 provided a vivid illustration of many dynamic patterns related to brittleness). The events are potentially “viability-crushing” for the organizations involved, whether the event in question leads to injuries/deaths, as in the two Boeing 737 Max accidents in 2018/2019 (346 fatalities), or to sudden large financial losses, such as the now classic example of the failure of the Knight Capital financial institution in 2012. Other examples include repeated IT infrastructure failures in multiple airlines leading to losses measured in hundreds of millions of US dollars, most recently Southwest Airlines’ service meltdown during the Christmas holiday in 2022. A notable brittle collapse with both many deaths (estimates range from a minimum of about 120 to as many as 600) and large financial losses was the February 2021 collapse of the Texas energy system, with losses spreading beyond Texas to ratepayers in other states not connected to the Texas grid (estimates of losses start at $200 billion).

8.2 The Command–Adapt Paradox

The Command–Adapt Paradox has persisted across time, societies, organizations, and industrial sectors. The paradox arises in the tension between two perspectives originating at different layers of a network organized to carry out valued, critical, and risky activities. One perspective is located at the upper echelons/layers of this network looking down toward sharp end layers where units-of-action confront the dynamics, uncertainty, and challenge of real operations.

With this perspective, the tendency is to adopt a centralized command orientation in which the upper echelons/layers closely direct activities of the sharp end through plans that specify actions and contingencies in detail. On this side of the paradox, operations are pressured to follow rules, procedures, and automation with the expectation that success will follow as long as the sharp end personnel work-to-rule, work-to-role, and work-to-plan. Challenges that disrupt plans-in-progress are handled by engaging roles at upper layers who revise plans to accommodate the disruptions and transmit the updates to the sharp end. Incidents and failures generally are diagnosed as failures of operational personnel to work-to-rule/role/plan, which then leads to new pressures to conform. This is the systems architecture that underlies an emphasis on rule compliance in safety management.

The central theme of the centralized control perspective is “plan and conform”. It assumes that challenges to plans (a) can be identified/modeled clearly enough to be handled with contingency plans, (b) are relatively infrequent, and (c) when they do go beyond the limits of plans, develop clearly and slowly enough for upper echelons to devise and implement new plans.

The second perspective is located at the sharp end layers where units-of-action are looking, first, at the dynamics, uncertainties, and challenges they confront when trying to manage the complexities of real operations. In order to cope with or tame these complexities, sharp end roles, then, look around (horizontally) or upward (vertically) for support. The emphasis here is on adaptive capacity [6]:

Adaptive capacity is a system’s readiness or potential to change how it currently works—its models, plans, processes, behaviors, relationships—to continue to fit changing situations, anomalies, and surprises.

The concern is how to keep pace with changing situations to mitigate the risk of brittle collapse. The key question is: how do other roles, connected both horizontally and vertically, support sharp end roles under stress as challenges occur, tempo accelerates, and disruptions spread over lines of interdependency? Rules, standards, and plans function differently from this perspective. They serve as resources for action and as a baseline for adapting to achieve goals when events disrupt plans-in-progress. From this perspective, safety staff support sharp end roles by putting in place organizational features that allow mutual assistance, or reciprocity, as situations deteriorate in the face of challenges [7]. In this perspective, upper echelons guide adaptability at the sharp end to achieve mission goals despite goal conflicts, trade-offs, and uncertainty [8].

The central theme of the guided adaptability perspective is “plan and revise”—being poised to adapt. This perspective recognizes that disrupting events will challenge plans-in-progress, requiring adaptation, reprioritization, and reconfiguration in order to meet key goals given the effects of disturbances and changes. It is based on findings that (a) challenges will inevitably arise that surprise plans, (b) plans can never be complete and up to date, (c) surprises occur regularly and demand highly responsive interventions, and (d) surprises are handled by the adaptive behavior of human roles [6, 9].

The two perspectives appear to conflict, which gives rise to an apparent paradox: organizations must choose one or the other perspective in safety management. Empirical studies, experience, and science all reveal that the paradox is only apparent: “good” systems embedded in this universe need to plan and revise—to do both. And the necessity of both is evident in the need to manage the risk of brittleness while coping with the side effects of growth and change.

The paradox dissolves, in part, when one realizes that guided adaptability itself depends on plans. The difficulty arises when organizations over-rely on plans [7]. Over-reliance undermines adaptive capacity when beyond-plan challenges arise, and beyond-plan challenges occur regularly for complex systems. The catch: pressure to comply focuses only on plans and degrades adaptive capacity.

To do both, organizations face a fundamental, two-part surprise:

  • first, one has to build and sustain adaptive capacities, which takes resources; then,

  • second, one develops the capability to guide the deployment of these capacities when needed, as situations arise that demand movement beyond the baseline of planful, highly automated activities operating across roles, echelons, and scales [10].

8.3 Classic Findings on the Limits of Plans, Procedures, Automata

The classic findings had become clear by the mid-1980s despite having roots that stretch much further back.

8.3.1 Can Plans Completely Specify Actions?

The point of departure is the belief that plans, however embodied, are nearly complete specifications of actions. If plans can fully specify actions, or nearly so, then work-to-rule/role/plan is sufficient for productive and safe systems. Assuming this is possible leads to safety management by centralized command.

  • Finding 1. The potential for surprise when plans are deployed is always higher than assumed or projected.

  • Finding 2. Plans are underspecified and incomplete relative to the variability of the world [11]. This means gaps will arise that require local adaptations for systems to function smoothly [12].

  • Finding 3. Plans inevitably become stale in the face of new information, new capabilities, new relationships, change, and growth.

  • Finding 4. Plans have difficulty coping with changing tempos of operation at multiple parallel scales (even plans embodied in automation). Keeping pace with events invokes skills, forms of cognition, and coordinated activity over multiple roles that cannot be specified in procedures.

The potential for surprise is determined by the answer to the question: how will the next anomaly or event that a system experiences challenge predeveloped plans and algorithms? To assess how plans survive, or fail to survive, contact with events, one searches for the kinds of situations and factors that challenge the competence envelope for a field of practice [13, 14]. This finding dates back at least to 1832 and Clausewitz’s treatise “On War”, which highlighted (a) the potential for surprise, (b) the role of friction in putting plans into action, (c) how plans become stale quickly, and (d) the necessity of organizing around guided adaptation. The lessons have been relearned repeatedly in the history of military operations; see the contrasting cases in Finkel’s book “On Flexibility: Recovery from Technological and Doctrinal Surprise” (2011).

Generating and updating plans takes effort and time in the face of limited resources. Beyond-plan challenges drive up the tempo of operations so that there is more to monitor, more to do, more to analyze, and more to consider ahead. As surprises occur that challenge plans, assessing and revising plans/procedures takes time and resources. As a result, (a) plans will miss the potential for bottlenecks, overload, and oversubscription of key assets and contingency backups (this is the risk of saturation), and (b) plans will always tend to lag change in the real world, since modifying plans lags the changes already underway. These are ubiquitous risks that demand investing in and sustaining adaptive capacities, since all operations at all scales are limited by time: all activities play out over time and over multiple time scales.
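
The lag dynamic can be made concrete with a small worked sketch. The numbers and function name here are hypothetical, not from the chapter: when disruptions that invalidate parts of a plan arrive faster than revisions can be completed, the backlog of unabsorbed change grows without bound:

```python
# Toy sketch of plan-revision lag (illustrative numbers only).
# Disruptions arrive every `arrival_interval` days; absorbing one
# disruption into a revised plan takes `revision_time` days of effort.

def revision_backlog(days, arrival_interval=3.0, revision_time=5.0):
    """Track the number of disruptions not yet absorbed into the plan."""
    backlog = 0.0
    for _ in range(days):
        backlog += 1.0 / arrival_interval             # new disruptions
        backlog -= min(backlog, 1.0 / revision_time)  # revisions finished
    return backlog

# Arrivals outpace revisions (every 3 days vs. 5 days each), so plans are
# permanently stale: ~4 unabsorbed disruptions accumulate in a month.
print(round(revision_backlog(30), 1))  # 4.0
```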

There is a third risk; ironically, it arises from organizational and technological efforts to expand capabilities—from “success” (see, for example, the rise of high-frequency trading in financial markets). Growth expands the network of interdependencies that accompany improved subcapabilities, productivity, and efficiencies. As a result of the extensive interdependencies, challenge events arise from or expand over these lines of connection so that hidden interdependencies come to the fore. Hidden interdependencies are a potent source of saturation and lag as problems in one area push saturation onto others, diagnostic work has to track effects at a distance from the originating disruption, and an expanding set of roles and players have to coordinate and synchronize their activities, often across organizational boundaries, to resolve losses of valued services [10, 29, 31]. This is seen vividly in incidents where valued services are lost due to breakdowns in critical digital infrastructure [15, 16], since this resource underpins the growth of capabilities, productivity, profitability—and safety—in every sector.

Safety does not lie separate from the processes of growth and change. Efforts to improve capabilities, productivity, and efficiencies are intimately connected to safety, especially through the complexities that result. Threats to safety and threats to economic viability both ultimately derive from the risk of brittleness, and the same adaptive capacities are needed to overcome both risks.

8.3.2 Rationalizations

The above factors and dynamics are particularly true for plans intended to support operations under degraded conditions—yet operation under degraded conditions occurs much more regularly than planners, technologists, and managers expect. When beyond-plan challenges occur, after-action reviews often find the contingency plans were disconnected from the real dilemmas and difficulties, providing little or no support. For examples, see studies of beyond-surge-capacity events in emergency medicine [17].

When beyond-plan challenges are handled poorly, with negative consequences, after-action reviews often focus on identifying a flaw in a subsystem, component, or operator to repair. But all incidents that threaten failure will reveal component weaknesses. When an incident occurs, the limits of some components have to be part of the story, (a) given the trade-offs that were necessary since resources are limited and goals conflict, and (b) given that the system and its environment continue to change.

But just seeking component weaknesses narrows focus and blocks the ability to see emergent system properties and risks such as brittleness (see The Stella Report [15]). In complex worlds with extensive interdependencies, emergent system properties drive system behavior and performance. Architectures that sustain adaptive capacities produce systems that operate reliably, robustly, and resiliently as a dynamic whole as the system and the world around it continue to change [16]. New science has shown how this can happen despite the fact that all systems are composed of individual components/subsystems that are highly constrained by tangible performance trade-offs in a changing, open world [3, 18].

The usual response from organizations to these classic findings is simple: my world is stable and not like space operations, military operations, or emergency and critical care medicine. In my world, variability can be blocked or suppressed, minimizing the need for adaptation, since work-to-plan/role/rule will reliably produce desired outcomes. This rationalization rests on several assumptions that are erroneous everywhere, given the complexities of the modern world:

  • Believes surprises occur only rarely (“corner” cases); whereas, actually, challenge events regularly occur at the boundaries of plans, automata, and procedures (the definition of surprise is events that break plans in smaller or larger ways).

  • Believes it is easy to recognize conditions which would signal the need to modify plans and procedures; whereas, actually, this recognition is difficult and challenges arise from unexpected directions.

  • Believes the time required to put modified plans into action is short (or shorter than the pace of change in the world); whereas, actually, disruptions accelerate tempo, increasing the risk of being slow and stale to revise (disruptions also increase pace, data overload, workload peaks/bottlenecks, cognitive load, and coordination demands).

  • Believes interdependencies can be limited as new capabilities are deployed, and additional ones that arise can be identified and treated by expanding analysis and modeling efforts.

  • Believes the effects of surprise can be compartmentalized, whereas actually, surprises compound and spread over the extensive interdependencies in all modern systems.

These beliefs are rarely true—and even if true for the moment, processes of change and growth will make them moot. Yet, these beliefs lead organizations to over-rely on compliance. In the aftermath of incidents and breakdowns, the assumptions lead to increased pressure for compliance rather than learning the importance of guided adaptability. Pressure for compliance undermines the adaptive capacities needed to mitigate risks that arise from brittleness.

Again, these are classic findings that have been rediscovered repeatedly through both experience and research from many scientists from many fields and perspectives (too many to reference all here).

8.4 Reconceptualization

The result of the classic findings was a conceptual reframing in four parts:

  1. Plans are Resources for Action

    The finding that plans function only as resources for action, not specifications, is generally traced to Suchman [11]. She highlights work documenting how plans can never be complete given the variability of the world. This is reflected in definitions of skill: the ability to adapt behavior in changing circumstances to pursue goals despite trade-offs.

    Plans and planning contribute to but cannot fully encompass the skills, forms of expertise, and coordinated activities needed for high performance given the characteristics of this world: dynamism, variability, messiness, change, growth, uncertainty, scale, extensive and hidden interdependencies.

  2. Plans are Necessary to Recognize Anomalies

    About the same time as Suchman, my work on how people diagnose and respond to surprise revealed the importance and difficulties of anomaly recognition. Recognizing the unexpected is hard [19, 31]. To see events and changes as unexpected requires a strong appreciation of what is typical, standard, or even “normally” abnormal (especially to see the absence of an expected change as an unexpected “event”). Seeing what doesn’t fit your model of what has been going on, what should be going on, or what usually happens is a form of insight (see Chap. 8 of Woods and Hollnagel [20] for a synthesis). This form of insight is built on the foundation of plans, procedures, and standards that represent a plan-in-progress:

    • how a plan has played out over time (how the progression fits the plan and the plan’s intent despite variability),

    • how it is playing out now, as incoming information updates the picture while context changes,

    • how to look ahead as context changes, as new events occur that block or facilitate progress, and as trade-offs arise.

  3. Plans (and Automata) are Competent but Brittle

    In the 1980s, there was a wave of hype over the potential for deploying advances in artificial intelligence (AI). Studies of joint systems of people and AI, or of operators and advanced automation, revealed the fundamental brittleness of automata regardless of the underlying technology [13]. The finding actually dates back at least to 1950 and the warnings from Norbert Wiener about the new technology for automation he was pioneering. Basically, the problem identified was that the way the new capability was deployed produced competent but brittle systems.

    Critically, the technology advance enabled new competencies, but developers failed to see its limits, how success would create new interdependencies at new scales, and how these follow-on changes would produce new forms of challenge and new vulnerabilities to breakdown (the story has reappeared across multiple waves of new technology, e.g., Woods [2]). The new capabilities could have been designed to support adaptive capacities that offset brittleness. However, those who developed and deployed the new capabilities discounted findings of brittleness from previous waves of technology, insisting that newer algorithms, in and of themselves, escaped risks of brittleness at their boundaries. They were incorrect. The risk of brittleness is universal. Ironically, studies of complex adaptive systems looked into biology and found that biology abhors competent but brittle systems. Biology makes provisions for systems to sustain future adaptive capacity in the firm knowledge that the future will produce changes and stressors that threaten viability despite past successes.

  4. People (with the Right Help) Provide the Extra Adaptive Capacity to Mitigate Brittleness

    The initial work on resilience studied biological, ecological, human, and human–technology systems that are well adapted to flourish despite the risk of brittle collapse. These lines of inquiry shared a common starting point, shifting the focus away from “why did a failure occur?” Instead, the question posed was: “how do systems handle (or even flourish from) challenges to their basic competencies so that the organization can function in the face of changing sources, forms, and levels of stress?” The studies revealed adaptive capacity coming into action by looking at the interplay between the challenges that occurred and how people were able to handle the difficulties so that overt consequences were averted.

The studies used methods to deal with the consequences of the Fluency Law: “well”-adapted activities occur with a facility that hides the difficulties resolved and the dilemmas balanced [20]. The results revealed these difficulties and dilemmas, showing that (a) challenges occurred much more often than stakeholders realized, and (b) people in some roles were the critical source of resilient performance despite the stresses, risks, uncertainties, threat of overload, and bottlenecks (e.g., see studies of emergency medicine adapting to handle beyond-surge-capacity events successfully).

Synthesizing across these diverse lines of inquiry produced general lessons. Systems possess varieties of adaptive capacity, which can be built and sustained, or degraded and lost. Adaptive capacity is the potential for adjusting patterns of activity to handle future changes in the kinds of events, opportunities, and disruptions experienced; therefore, adaptive capacities exist before changes and disruptions call upon them.

The studies reinforced and explained the previous findings about the limits of plans and automata. All systems are developed and operate with finite resources and live in a changing environment. As a result, plans, procedures, automation, agents, and roles are inherently limited and unable to completely cover the complexity of activities, events, and demands. All systems operate under pressures and in degraded modes [21]. As a result, all systems are subject to the risk of “brittle collapse”, and people adapt, stretch, and extend operations to meet the inevitable challenges, pressures, trade-offs, resource scarcity, uncertainties, and surprises [12, 22, 23]. This is resilience-as-extensibility, and this capability is universally necessary for resilient performance [1].

These findings generalize across system scales. All adaptive systems possess this capacity to stretch or extend performance when events challenge their normal competence for handling situations, through a variety of properties such as initiative and reciprocity. Without this capability for extensibility, brittle collapse would occur much more often than is observed [6]. Pressure for compliance undermines these properties, and they trade off against pressure for near-term efficiency gains [6].

To be poised to adapt, a system develops a readiness to revise how it currently works—its models, plans, processes, and behaviors [6]. Adaptation is not about always changing the plan, model, or previous approaches, but about the potential to modify plans to continue to fit changing situations. Building a system that is poised to adapt requires investment in a readiness-to-revise and a readiness-to-respond to events and contexts that challenge the boundaries of normal work as specified in rules, policies, standard practices, and contingency plans. This readiness is the dynamic capacity to reprioritize over multiple interacting goals and to reconfigure and resynchronize activities across roles and echelons.

Space mission control is the definitive exemplar of this capability, especially in how space shuttle mission control developed its skill at handling anomalies even while expecting that the next anomaly to be handled would not match any of the ones they had planned and practiced for [24]. Another highly productive natural laboratory for learning the basic patterns and laws of adaptation has been emergency and critical care medicine (e.g., [17, 25, 26]).

8.4.1 Plan and Revise: Guided Adaptability

How can organizations shift to function and operate in the mode of “plan and revise” rather than “plan and conform”? Compliance cultures focus on how behavior fails to fit plans. Under guided adaptability, organizations monitor how plans no longer fit the world the organization operates in, because change continues.

Checking (and pressuring) behavior to fit plans makes a strong assumption: the plan is a good fit for the world the organization operates in. The new science shows that this assumption is guaranteed to be wrong in the future, regardless of how well the plan has guided performance in the past. The timing on this guarantee is linked to the pace of change within and around the organization and how those changes expand the tangle of interdependencies it exists within.

The first step toward guided adaptability is remarkably simple to state: adopt practices to recognize when events challenge the assumption that planful behavior fits the world. But an organization cannot begin to take this step as long as, through the pressures it exerts on behavior, it is committed to the assumption that problems arise from failures to work-to-plan/role/rule. Instead, the organization has to be able to ask and re-ask: do our plans/competencies/automata still fit the world the organization operates in as change continues within and around us?

The irony is that you can only monitor how well plans fit the world by understanding how people have to adapt to fill the gaps and holes that inevitably arise as variability in the world exceeds the capability of plans and the competencies built into any system [12]. Recall that this is necessary because of the Fluency Law, and the same law makes this monitoring harder to carry out in practice.
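
One hedged sketch of what such monitoring could look like (the record format, names, and numbers are hypothetical, not a method prescribed by the chapter): treat the rate of gap-bridging adaptations observed around a given plan as a signal of growing plan misfit rather than as a compliance problem:

```python
# Hypothetical sketch: read workaround reports as plan-misfit signals.
from collections import Counter

# Each record: (month, plan_id) for one observed gap-bridging adaptation.
observed = [
    (1, "procedure-A"),
    (2, "procedure-A"),
    (3, "procedure-A"), (3, "procedure-A"),
    (4, "procedure-A"), (4, "procedure-A"), (4, "procedure-A"),
]

def misfit_trend(records, plan_id):
    """Adaptations per month around one plan; a rising trend suggests the
    plan is drifting out of fit with the world, not that operators are
    becoming less compliant."""
    by_month = Counter(month for month, plan in records if plan == plan_id)
    return [by_month.get(m, 0) for m in range(1, max(by_month) + 1)]

print(misfit_trend(observed, "procedure-A"))  # [1, 1, 2, 3] -> rising misfit
```

The design point is that such a signal only exists if adaptations are safe to report; compliance pressure that drives them underground (next paragraph) destroys the data source.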

Being able to question the fit of plans to a changing world prepares an organization to be poised to adapt when the guarantee about the future comes to pass and plans become mis-fit as change continues. Actually, the goal is to see misfits before the evidence is definitive, so that adaptation occurs before visible incidents threaten or accidents create costly brittle failures.

Monitoring how people adapt to make the system work does not constitute approval that these adaptations are the “best” given the trade-offs faced in different situations. What is “best” is itself a dynamic judgment that can and should change as challenges vary—reprioritization rebalances goals in the trade space to fit the situation. One long-noted problem with compliance pressure is that it drives adaptation underground into covert work systems at the sharp end, because these adaptations are seen as deviations from plan [27]; see also Perry and Wears (2015). The covert adaptations may “work” to cope with complexities locally, but the adjustment may have limited impact more broadly or may steer too close to a different vulnerability in the trade-off space. Driving gap-bridging adaptations underground also makes it harder to recognize how plans do not fit the changing patterns of variability in the world. Thus, compliance pressure drives a “vicious” adaptive cycle, blinding organizations to evidence that contradicts the assumption of high plan fitness until a major visible failure punctures their overconfidence [27], or to put it more colloquially, “everyone has a plan until they get punched in the mouth”.

Once an organization is able to start looking at the adaptations that bridge gaps, what does it look for? The second step toward guided adaptability is to learn and track over time how your system adapts. Recognizing what adaptations are going on allows one to see how your competencies/plans/automata are partial and incomplete for the worlds they operate in. It also allows one to see the resources—physical, cognitive, collaborative, and others—that people draw on to produce resilient performance in the face of challenges small and large.

How can organizations monitor their adaptive capacities? First, the world provides a steady stream of smaller and larger incidents that reveal the gaps as people adapt to fill them. The information is there for the taking! But only if one invests in building the ability to see the information flowing by. This process is the Learning from Incidents (LFI) movement, particularly active in critical software infrastructure.

LFI reframes the standard perspective on what an incident is and what is important to learn from analysis and synthesis across sets of incidents (e.g., The Stella Report [15]). The standard view focuses on events with high severity and, for each one at a time, uses one of a variety of methods to identify one or a couple of “causes” for remediation. Guided adaptability turns this around: examine how incidents are handled so that severe consequences are mitigated or avoided. Study how people in various roles adapt to handle challenges so that you can extract patterns (a) about challenges that recur in general even though the specifics vary in individual events and (b) about the ways people work and coordinate to handle challenges. The combination will reveal the adaptive capacities needed for beyond-plan events and generate information about what supports and what hinders these capacities [29, 31].
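
A minimal sketch of this reframing (the schema and example tags are hypothetical assumptions, not an established LFI tool): rather than ranking events by severity and assigning single causes, group incidents by recurring challenge pattern and collect the adaptations that kept consequences in check:

```python
# Hypothetical LFI-style synthesis: group incidents by recurring challenge
# pattern and collect the adaptations that mitigated or averted consequences.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Incident:
    challenge: str   # recurring pattern; specifics vary per event
    adaptations: list = field(default_factory=list)  # how roles responded

def synthesize(incidents):
    """Map each challenge pattern to the adaptive responses seen across
    the set of incidents, revealing capacities needed for beyond-plan events."""
    patterns = defaultdict(list)
    for inc in incidents:
        patterns[inc.challenge].extend(inc.adaptations)
    return dict(patterns)

incidents = [
    Incident("hidden interdependency", ["cross-team bridge call"]),
    Incident("saturation of on-call staff", ["reprioritized workload"]),
    Incident("hidden interdependency", ["rolled back dependent service"]),
]
print(synthesize(incidents))
# {'hidden interdependency': ['cross-team bridge call',
#                             'rolled back dependent service'],
#  'saturation of on-call staff': ['reprioritized workload']}
```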

This process is a way to put into action Hollnagel’s call for studying what goes right instead of what goes wrong [30], which is at the heart of adopting the Safety II approach. But notice: Studying how plans fall short and how people adapt to cope with complexity generates knowledge about all of the functions an organization carries out for all of the goals the organization pursues across all of the external relationships that affect these functions. It goes far beyond just attempting a couple of repairs after events occur that harm people. If safety is about “repair after something goes wrong”, no organization can keep up with the pace of change, growth, and scale of modern systems and activities [32]. If safety is reframed as guided adaptability, then organizations become more dynamic and poised to adapt in a world with extensive and hidden interdependencies operating at new scales which reveal new forms of systemic vulnerabilities in our turbulent world.

How can organizations learn about the limits of their competencies/plans/automata? This form of learning has a built-in difficulty. After an incident highlights a limit to plans or automata, a common response from people in the organization can be paraphrased as follows:

If we had understood that this limit to or gap in our plans was important, we would have modified the plans to mitigate the issue. But we did look for and find limits to our original plans, and we did build in mechanisms we had reason to believe would mitigate the limits we identified. Yes, there can be other issues, but we made reasonable choices about priorities given the real and pressing trade-offs we had to navigate, the information we had, and our perspectives on priorities.

At one level this is a legitimate position, but it misses the fundamental point about how to operate in changing seas of complexity revealed by the emergence of Resilience Engineering. However effective your organization has become, however you have developed and deployed new capabilities to grow, whatever your record of past improvement in reliability/productivity/efficiency, and whatever the promises of capabilities yet to be deployed, the world, in the near future, will produce challenges that go beyond the competencies embodied in your systems and require adaptive capacity to stretch.

Actually, this guarantee about the future holds because the developments and deployments are valuable, and other people in various roles/entities will adapt to take advantage of those valued services in ways that increase the interdependencies, scale, and complexity the organization faces (see the Law of Stretched Systems [20]). Will you be poised to adapt when what has worked well, or better than before, no longer fits the world you operate in? The question is poignant because the processes of change are omnipresent, and brittleness leads to breakdowns in unexpected areas from unexpected directions. This is vivid all around us in the regional, national, and global reverberations of external events (extreme weather, migrations, pandemics, conflicts) and in the far-reaching, unexpected consequences of deploying new technological powers as people adapt those powers in hard-to-anticipate ways (e.g., ransomware, disinformation campaigns, drone warfare, AI chatbots).

If you can track how people adapt as the world finds the limits of your plans/competencies, then you can take on the mission of re-architecting your organization for guided adaptability. You will have to establish a continuous feedback/learning loop in order to adapt how you adapt. In other words, you will be able to steer or guide how sharp end layers adapt to beyond-plan challenges on demand. You will empower and participate in dynamic reprioritization/sacrifices over conflicting goals and provision fluent reconfiguration of activities and relationships at or ahead of the pace the world imposes. Even more importantly, you yourself at the upper echelons will demonstrate adaptive capacities—continuously reprioritizing and reconfiguring your organization within the larger network of interdependencies at new scales. Guided adaptability requires skillful, coordinated adaptive activities going on at multiple scales, synchronized in parallel (as studies of how biological systems sustain future adaptive capacities have revealed).

The pragmatic guidance for architecting guided adaptability in workaday organizations is substantial and growing (e.g., [8] and the LFI movement in critical digital services), in part because some of the techniques have been discovered before. Nevertheless, the methods are not as mature as past approaches to organization design. The issue isn’t a shortage of pragmatics derived from the new science (the opportunities are widespread and are detailed in other papers). The difficulty is the ability of organizations to adapt to the new scale and new outside pressures as complexities grow with capabilities. For organizations to re-architect themselves is a high bar, even in times of regular and vivid turbulence. New architectures and transition paths are a topic for another day as we grow the body of knowledge about the basic rules that govern all adaptive systems in the human sphere.