1 Introduction

Traditionally, scholars of creativity located the qualities enabling creativity – intelligence, resourcefulness, efficiency, and entrepreneurial alertness – within the individual agent. Creativity so modeled implies a system-to-agent relationship, where creative evolutionary processes operate exogenously to the agent, altering the system or the agent's context, to which the agent then adapts.

This paper takes a different perspective on creative evolution in economics. Individuals not only observe and comprehend changes in state space, they create changes in state space. We firmly embed the economic observer in the creatively evolving systems their choices help create. This modeling choice renders standard epistemological frameworks unable either to establish an optimal set of choices or to predict the outcome of a choice. Economic observers embedded in creatively evolving systems construct their paths through the possible at the same time they construct their idea of what is possible. Neither process can ever end.

Logic is generative and creative, as Emil Post realized in 1941. Tumbling into the unknown, with all its unforeseeable risk, is incentivized by the immense rewards of discovering a new dimension. New dimensions require new theories of explanation. Tumbling is exit from Flatland into 3D (and beyond). If you follow the rules of Flatland, you will never exit. Fully rational search on Flatland results in the eventual arbitrage of all possibilities, an equilibrium that is an artifact of not being able to model beyond Flatland's fixed constraints. To exit Flatland, agents must act irrationally – they must believe a statement that cannot be proven true within Flatland.

We derive an observer-embedded representation of creative evolution from the theory of the adjacent possible (TAP). Though the theory of the adjacent possible was first described in terms of biological function (Kauffman 1993), it generalizes across disciplines. The theory of the adjacent possible implies a nonergodic movement through the adjacent possible (Koppl et al. 2015) and a nonergodic knowledge acquisition process by agents embedded in the system (Devereaux and Koppl 2023). As such, TAP requires a mathematically constructive (or intuitionist) formalism that disallows the assumption of the law of the excluded middle by observers embedded in creatively evolving systems (cf., for multiagent models, Borrill and Tesfatsion 2011; Tesfatsion 2017).

By embedding observers in their systems and allowing their choices to move them into adjacent possibles, we draw various conclusions about how knowledge is constructed and used in creatively evolving systems. These results are summarized in four major propositions and two corollaries, which then provide a foundation for theorizing about individual decision-making, innovation, and the emergence of social institutions and commons. Namely, we demonstrate that in creatively evolving systems, local knowledge is itself a mechanism of movement through the adjacent possible; all action is entrepreneurial action; the epistemological is indistinguishable from the ontological, i.e., causality is ambiguous; and individuals can agree to disagree. Throughout, we compare and contrast our approach and results with those of traditional evolutionary economics.

Section 2 develops a representation of creative evolution based on insights from the theory of the adjacent possible, including several major implications of the formalism. Section 3 builds on the framework developed in Section 2 to describe how individuals use knowledge in creatively evolving systems – it is in Section 3 that we develop our four major propositions and their corollaries. Section 4 applies the results from the previous sections to theorize about decision-making, innovation, and the emergence of institutions and commons in creatively evolving systems. Section 5 concludes, and includes a brief discussion about governance under creative evolution.

2 Creative evolution and knowledge

Our representation of creative evolution in observer-embedded systems focuses on observer-embedded knowledge acquisition and use in systems whose open-ended evolutionary nature is informed by the theory of the adjacent possible. We begin by discussing formalizations of open-ended evolution to get a handle on which type of open-ended evolution is most relevant to the economic and social systems we wish to describe, then introduce TAP as a particularly salient framework in which to represent this type of open-ended evolution. We develop our representation of creative evolution after that.

Open-endedness in the social sciences typically pertains either to an embedded observer encountering epistemologically novel states or to a theorist updating their methodology to take into account novel social phenomena that are not specifiable in the current framework. While each case seems different, both require the person in question to revise their understanding of the world, by adding new states, by reclassifying, or by revising transformations between states. This revised understanding, to an embedded observer, looks and behaves like a new theory of their known world. We recommend the categorization of different types of novelty in Banzhaf et al. (2016) as a way of conditioning thought about open-endedness, as applied to a description of the world as a classifiable and comprehensible state space. In that typology, type 2 novelty requires theory revision in order to understand discovered or created novel states.

While the true universe in which an individual is embedded has not changed when they encounter type 2 novelty, their understanding of it has, and in an ambiguous direction – theoretical revisions are not simple extensions of previous theories in the formal mathematical sense of Davis (2004). Theory revision is not a simple search that arbitrages information until some common optimum is obtained. Theory revision is often undertaken to resolve questions that are undecidable in the current theoretical framework (Kuhn 1962). There are an infinite number of undecidable questions, with no patterned relationship to each other or to the resulting theory. Progression towards a more perfect theory in open-ended evolutionary systems is not computable (Doria, Devereaux and Koppl 2023). There will always be more undecidable questions, as Gödel (1931) proved.

TAP theory originated with Kauffman (1993) and has since been applied to biological, ecological, and social systems (Felin et al. 2014; Koppl et al. 2015; Koppl et al. 2021; Koppl et al. 2023), biochemistry (Hordijk 2017), artificial intelligence (Roli et al. 2021), physics (Cortês et al. 2022a, b; Kauffman 2022), mathematics (Devereaux et al. 2021), and epistemology (Devereaux and Koppl 2023). TAP comes into the picture by offering a framework built around sets of affordances of current possibilities and enabling conditions for realizing unrealized states in the adjacent possible state space. Affordances are the alternative uses of a thing, and are generally unlistable and unorderable (Kauffman 1993; Kauffman and Roli 2021). Affordances are revealed to an embedded observer as the observer’s known-world and system evolve. The archetypical way to think of affordances is through the uses of a screwdriver, which are unlistable and possibly infinite – a screwdriver screws screws but is also a lever, a pick, a chisel, a paint stirrer, a coffee stirrer, a baton with which to lead a marching band, a wedge to jam a machine that is running out of control, a pen in the sand, a window smasher, a table leg leveler, and so forth. It is obvious that how a screwdriver is used depends sensitively on who – and in what context, time, state of mind, and so forth – is using it. These dependencies – the enabling conditions for realizing a particular use of the screwdriver – also constitute an unlistable and unorderable set. There is no algorithmic way of listing the uses of a screwdriver without destroying some essential screwdriver-ness imaginable by someone in a context we cannot pre-state. If someone had been clever enough to fashion a broken screwdriver into a horseshoe nail, a kingdom would not have been lost.

The adjacent possible contains affordances, the relevant and as-yet-unrealized possibilities that enabling conditions and agent-specific actions help realize in the future. TAP presents a discrete, non-continuous concept of time. Time is “real” in that it is not abstracted away as the passive argument of some algorithm or trajectory function through possibility space. Time represents a growth in realizable possibilities, as discussed in Koppl et al. (2023), which applies TAP to explain technology and technology’s contribution to economic growth.

We can use the TAP framework to explain how embedded observers move through their adjacent possibles. Movement through one’s adjacent possible is essentially epistemological, as ontological reality is by definition elusive to the embedded observer. Possibilities include newly discovered affordances, but discovery is comprehended in an agent-specific way, meaning two agents can discover the same affordance differently.

Creative evolutionary processes are defined as processes that undergo ceaseless, unpredictable change. Henri Bergson (1907; 2014[1889]) discussed creative evolution in his eponymous book using the concepts of duration and qualitative multiplicity. Qualitative multiplicities are sets of things that are generally unenumerable, unlistable, and unknowable in their entirety – there is a quality to the set that would be destroyed if the set were transformed into an enumerable, listable set. Duration is best understood as the quality that makes a process truly open-ended, in that it is both unbounded and innovative (Banzhaf et al. 2016; Adams et al. 2017).

Individuals embedded in socioeconomic systems occupy a qualitative multiplicity composed of what they know, their environments, their current possibilities, and the affordances of those possibilities stretching into the adjacent possible. We call this observer-specific qualitative multiplicity, in the terminology of Uexküll (1934 [2010], 1957), an individual’s “Umwelt”. Individuals also generate a conception of how all of those elements taken together describe their world. This understanding is, in Uexküll’s terminology, an individual’s “Suchbild,” a semistable pattern of beliefs about cause-and-effect relationships – both explicitly realized and tacit – used to figure out what they can and should do as they progress within their Umwelt through the adjacent possible. Taken together, an Umwelt and Suchbild pair completely define an individual’s known world, which we call \(\Omega _{i,t}\) for an individual i at time t. Known worlds are epistemologically constructed, yet indistinguishable from individual-specific ontological reality. In positing the primacy of \(\Omega _{i,t}\) for each individual, we center theory and theorizing at the core of individual decision-making, rather than methodology (cf. Popper 1963; Kuhn 1962). Theorizing drives the derivation of \(\Omega _{i,t}\), which itself is an expression of epistemology and from which is derived a variety of models of phenomena and methodologies for associating observations, actions, and outcomes.

An individual’s known world \(\Omega _{i,t}\) is not fixed, but rather exists as one entry in a sequence of known worlds that have passed, and known worlds yet to come: \(\{\Omega _{i,0},\Omega _{i,1},...,\Omega _{i,t-1},\Omega _{i,t},\Omega _{i,t+1},...\}\). Individuals revise their known worlds as they encounter the adjacent possible, resulting in an agent-specific sequence of many {Umwelt, Suchbild} pairs through time. Naturally, observer-embedded creative evolution within the theory of the adjacent possible is path dependent and nonergodic, as it is open-ended (Birkhoff 1931; North 1999; David 2007; Koppl et al. 2015; Banzhaf et al. 2016; Adams et al. 2017; Devereaux and Koppl 2023). A broken screwdriver does not have the use “horseshoe nail” in its set of affordances until that use lights up in the adjacent possible of an individual’s Umwelt, an event dependent on the enabling conditions in the individual’s Suchbild.

Creative evolution, in the TAP framework, lies outside the bounds of both ergodic, timeless economic theory (Peters 2019) and evolutionary theories with ergodic (patterned) cyclicity or chaos (as discussed in Rosser 2021). As such, it lies outside the bounds of much of the theoretical infrastructure developed by mathematical economists since the “marginal revolution” of the late 19th century. The implications of creative evolution, unsurprisingly, deviate from those of standard mathematical economics and from evolutionary economics with ergodic dynamics.

As above, call a person i’s known-world \(\Omega _{i}\) (we abstract at first from time dependence for the sake of familiarity). Traditional epistemology deals with closed knowledge and possibility sets, where \({\Omega _{i}=\Omega }\) for each agent i and where i can always completely categorize \(\Omega \) via a knowledge partition (Aumann 1999a). Game theory and Bayesian learning share these same assumptions (Aumann 1999b). However, for embedded observers in creatively evolving systems, an individual’s known-world is always a proper subset of ontological reality (\(\Omega _{i}\subsetneq \Omega \)), a simple implication of the nonergodic sequential evolution of an embedded observer’s known-world (Devereaux and Koppl 2023). Therefore, embedded observers in creatively evolving systems cannot completely partition \(\Omega \).

The strongest assumption we can make about the information function of an embedded observer in a creatively evolving system is that it is complete and consistent with respect to the observer’s own known-world \(\Omega _{i}\), if we were to freeze the system’s evolution at some point in time. Given this assumption, individuals can completely partition their known-worlds and generate a model for how observed states relate to events in their known-world.

Suppose some individual i does just that. Then we can define a local information function \(P_{i}\) such that

  1. \(\omega \in P_{i}(\omega )\ \forall \,\omega \in \Omega _{i}\)

  2. If \(\omega '\in P_{i}(\omega )\), then \(P_{i}(\omega ')=P_{i}(\omega )\).

An event \(E_{i}\) is a subset of \(\Omega _{i}\). Then, an agent i’s local knowledge function can be defined as

$$\begin{aligned} K_{i}(E_{i})=\{\omega \in \Omega _{i}:P_{i}(\omega )\subseteq E_{i}\} \end{aligned}$$
(1)
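
To fix ideas, the following minimal sketch (in Python; the states, partition cells, and names are illustrative placeholders, not part of the formalism) implements a partition-based local information function satisfying properties 1 and 2 above, together with the local knowledge operator of Eq. 1.

```python
# Toy known-world for individual i; the states and partition cells are
# invented for illustration only.
OMEGA_I = {"w1", "w2", "w3", "w4", "w5"}

# Cells of i's local information function P_i. Properties 1 and 2 hold
# automatically for any genuine partition of OMEGA_I.
PARTITION = [frozenset({"w1", "w2"}), frozenset({"w3"}), frozenset({"w4", "w5"})]

def P_i(w):
    """Return the partition cell containing state w."""
    for cell in PARTITION:
        if w in cell:
            return cell
    raise ValueError(f"{w} lies outside i's known-world")

def K_i(event):
    """Eq. 1: the states at which i knows that event E_i has occurred."""
    return {w for w in OMEGA_I if P_i(w) <= event}

print(K_i({"w1", "w2", "w3"}))  # {'w1', 'w2', 'w3'}: both cells fit inside E_i
print(K_i({"w1", "w3"}))        # {'w3'}: the cell {w1, w2} is not inside E_i
```

The second call illustrates that knowledge is coarser than truth: i knows an event only at states whose entire information cell lies inside the event.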

An important note before we continue is that our definition of local knowledge is local neither because the knowledge cannot be extracted by a third party, nor because a part of the knowledge is tacit and therefore intransmissible from the observer to a third party. Our definition of local knowledge is local because the knowledge of an observer i is not in the ken of any other individual j. Even if i were to transmit the information of their known-world to j, it would not be comprehensible to j within j’s understanding of their own known-world. Individual j would have to revise their understanding of reality by both altering the state space of their known-world and its partition to accommodate the information from i’s known-world with respect to their own known-world.

Equation 1 is consistent with other ways agents are theorized to make decisions when embedded in open-ended processes. Kirzner (1996: 18) approaches “economic theory as the extensively worked out logic of acts of individual choice.” Theory depends entirely on the theorist’s “ability to relate back the process to the individual acts of choice of which the process is made up.” Rather than neoclassical theory fulfilling this role – as Kirzner claims it can – a different approach is needed, one better represented by TAP and the type of local knowledge captured in Eq. 1. Individual decision-making is inherently local to the individual and incomplete with respect to the individual’s world, obviating several of the core underlying assumptions of neoclassical decision theory. Individuals do not rely on pre-listed prices to make choices: experiments show that prices are not known ahead of the price discovery process (Inoua and Smith 2022). This result is intuitive. If agents had complete information, they would not be incentivized to engage in the buy/sell price discovery process, since they could identify optimal equilibrium states a priori, which eliminates the need to create equilibrium states. The equilibria we observe cannot emerge in a system where agents have complete knowledge and a fully worked out, consistent logic of the system.

If there is a “logic of acts of individual choice,” then this logic itself must be local. Information is essentially a categorization of the possible. How someone understands their world differs from person to person, and changes as the system evolves. The knowledge of the embedded observer is, therefore, dependent not just on the observer themselves, but on time. We can easily modify (1) to have a time component:

$$\begin{aligned} K_{i,t}(E_{i,t})=\{\omega \in \Omega _{i,t}:P_{i,t}(\omega )\subseteq E_{i,t}\} \end{aligned}$$
(2)

Creative evolution for individual i is a movement forward in time: a progression of known-worlds \(\Omega _{i,t}\rightarrow \Omega _{i,t+1}\). How i understands their world evolves as i updates their known events, \(E_{i,t}\rightarrow E_{i,t+1}\), their information function, \(P_{i,t}\rightarrow P_{i,t+1}\), and finally their knowledge function, \(K_{i,t}\rightarrow K_{i,t+1}\). Such updates in a creatively evolving system require embedded observers to update their theories of their known-worlds and the models of specific phenomena based on those theories (Aumann 1999a; Doria 2017; Devereaux and Koppl 2023). We agree with Kirzner that “economic theory [is] the extensively worked out logic of acts of individual choice” if we understand logic to be creative in the sense of Post (1941), such that it can accommodate the embedded observer’s radically local, incomplete, and continually updating theory of their creatively evolving system.

Define the local possible for an individual as a triplet of their world \(\Omega _{i,t}\), their information function \(P_{i,t}\) which partitions the world into events, and their local knowledge \(K_{i,t}\). Call each triplet \(\Pi _{i,t}=\langle \Omega _{i,t},P_{i,t},K_{i,t}\rangle \). Individuals experience evolution within their creatively evolving systems as a sequence of triplets \(\Pi _{i,t}\), as t updates from period to period.

$$\begin{aligned} \Pi _{i}=\{\Pi _{i,1},\Pi _{i,2},...,\Pi _{i,t},...\}=\{\langle \Omega _{i,1},P_{i,1},K_{i,1}\rangle ,...,\langle \Omega _{i,t},P_{i,t},K_{i,t}\rangle ,...\} \end{aligned}$$
(3)

The local adjacent possible is the result of subtracting the possible at time t from the possible at time \(t+1\). The local adjacent possible at time t is what agent i comes to know as they progress from t to \(t+1\). Define the local adjacent possible as \(A_{i,t}\) where

$$\begin{aligned} A_{i,t}=\langle \Omega _{i,t+1},P_{i,t+1},K_{i,t+1}\rangle \setminus \langle \Omega _{i,t},P_{i,t},K_{i,t}\rangle \end{aligned}$$
(4)

If we further require that \(A_{i,t}\ne \emptyset \) – which is a natural assumption in a creatively evolving system due to its open-ended nature (Devereaux and Koppl 2023) – then we see that time periods t are defined as symmetry-breaking movements into a local adjacent possible where the next known-world \(\Omega _{i,t+1}\) is a (perhaps, complex) combination of the previous known-world \(\Omega _{i,t}\) and the adjacent possible. The local adjacent possible \(A_{i,t}\) is not a trajectory definable at time t, as it contains inaccessible information about the future. \(A_{i,t}\) is a retrospective description of what comes to be known by the agent as they move from known-worlds \(\Omega _{i,t}\) to \(\Omega _{i,t+1}\). It does not describe the process of how the agent comes to know what is in their adjacent possible.
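
The following bookkeeping sketch (in Python; the screwdriver states are illustrative placeholders) represents the triplet of Eq. 3 as a small data structure and computes the retrospective, componentwise set difference of Eq. 4.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LocalPossible:
    """The triplet Pi_{i,t} = <Omega_{i,t}, P_{i,t}, K_{i,t}> of Eq. 3.
    P is stored as a set of partition cells, K as a set of known events."""
    omega: frozenset
    partition: frozenset
    knowledge: frozenset

def adjacent_possible(pi_next, pi_now):
    """Eq. 4: componentwise set difference between consecutive triplets.
    Retrospective only: computable once Pi_{i,t+1} is realized, never
    predictable from Pi_{i,t} alone."""
    return (pi_next.omega - pi_now.omega,
            pi_next.partition - pi_now.partition,
            pi_next.knowledge - pi_now.knowledge)

# The screwdriver example: the use "horseshoe nail" enters the known-world
# only at t+1, so it shows up in A_{i,t} after the fact.
cell_old = frozenset({"screw", "lever"})
cell_new = frozenset({"horseshoe nail"})
pi_t = LocalPossible(cell_old, frozenset({cell_old}), frozenset({cell_old}))
pi_t1 = LocalPossible(cell_old | cell_new,
                      frozenset({cell_old, cell_new}),
                      frozenset({cell_old, cell_new}))
print(adjacent_possible(pi_t1, pi_t))
```

The asymmetry matters: nothing in the time-t triplet determines the time-t+1 triplet, so the function can only describe movement into the adjacent possible after it has happened.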

We discussed above our local knowledge concept, which acknowledges that information is not the same as knowledge, and that knowledge is essentially local because known-worlds are proper subsets of reality. We can combine this conception of local knowledge with the combinatorial complexity of social systems and their open-ended movement through time to derive another implication of observer-embedded knowledge in creatively evolving systems. Namely, we can say in general that embedded observers can “agree to disagree,” i.e., that we cannot generally expect observers to coordinate with each other using common knowledge, as it is logically defined.

Creative evolution in social systems produces a vast amount of combinatorial richness and computational complexity (Cortês et al. 2022a). Chess, while solvable, is solvable only in exponential time, making it very computationally expensive. Wordle, a popular word game of guessing words of length k in l total guesses, is NP-hard – even for words of length \(k=5\) (Lokshtanov and Subercaseaux 2022). Chess and Wordle are simple compared to the complexity of the average business plan, or the entangled complexity of an individual’s many competing ongoing plans to maintain health, increase happiness, minimize harm, find inspiration, impress one’s neighbors, and so forth.

Local knowledge combined with combinatorial complexity combined with a creatively evolving system implies not only that \(\Omega _{i}\subsetneq \Omega \) and \(\Omega _{i,t}\ne \Omega _{i,\tau \ne t}\), but that \(\Omega _{i,t}\ne \Omega _{j,\tau }\) for all \(j\ne i\) and all \(t,\tau \). For a system of N individuals, we can abstract away the temporal element by considering any given cross section of time and write this implication as:

$$\begin{aligned} \Omega _{1}\ne \Omega _{2}\ne \cdots \ne \Omega _{N}\ne \Omega \end{aligned}$$
(5)

Since \(K_{i}\) is defined in terms of \(\Omega _{i}\), Eq. 5 implies that

$$\begin{aligned} K_{1}\ne K_{2}\ne \cdots \ne K_{i}\ne \cdots \ne K_{N} \end{aligned}$$
(6)

The (system-level) possible at any time t is a union of individual local possibles over the N agents in the system:

$$ \Pi _{t}=\bigcup _{i=1}^{N}\Pi _{i,t} $$

The evolution of the system-level possible is the sequence

$$\begin{aligned} \Pi =\{\Pi _{1},\Pi _{2},\ldots ,\Pi _{t},\ldots \} \end{aligned}$$
(7)

The (system-level) adjacent possible at any time t is a union of the local adjacent possibles over the N agents in the system:

$$\begin{aligned} A_{t}=\bigcup _{i=1}^{N}A_{i,t} \end{aligned}$$
(8)

In direct parallel to how we derived (5), we can conclude that in a creatively evolving social system with N agents, not only is \(A_{i,t}\subsetneq A_{t}\) in general at any given cross section of time t, but in that same cross section,

$$\begin{aligned} A_{1}\ne A_{2}\ne \cdots \ne A_{i}\ne \cdots \ne A_{N} \end{aligned}$$
(9)

No individual’s local adjacent possible is the entire adjacent possible, nor do the local adjacent possibles of individuals in the system intersect entirely. In creatively evolving systems, embedded observers experience different adjacent possibles in general.
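
To make Eqs. 8 and 9 concrete, the following minimal sketch (in Python; the agent labels and contents are invented placeholders) computes a system-level adjacent possible as the union of local adjacent possibles and checks that each local set falls strictly short of the union, with no two coinciding.

```python
# Toy local adjacent possibles for three agents at one cross section of time.
local_adjacent = {
    "agent_1": {"screwdriver as chisel"},
    "agent_2": {"screwdriver as paint stirrer", "new word: grok"},
    "agent_3": {"spill the tea = gossip"},
}

# Eq. 8: the system-level adjacent possible A_t is the union over agents.
A_t = set().union(*local_adjacent.values())

# Eq. 9 in miniature: each A_{i,t} is a proper subset of A_t, and no two
# local adjacent possibles coincide.
assert all(a < A_t for a in local_adjacent.values())
assert len(set(map(frozenset, local_adjacent.values()))) == len(local_adjacent)
print(A_t)
```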

The adjacent possible uniquely represents the creation of new possibilities. Creative evolution is the discovery/generation of these new possibilities. Hence, creative evolution of a system is a progression through the adjacent possible. Creative evolution of a system can be modeled as the open-ended sequence

$$\begin{aligned} \mathfrak {A}=\{\ldots ,A_{t-2},A_{t-1},A_{t},A_{t+1},A_{t+2},\ldots \} \end{aligned}$$
(10)

where \(A_{t}\) is the system-level adjacent possible. The creative evolution of an individual is, therefore, their sequence of local adjacent possibles.

3 The use of knowledge in creatively evolving systems

The hard work of social science is not in theorizing how people coordinate when they all know the same things and plan the same way, but in theorizing how people coordinate when they know so very little about each other and possess such different and conflicting plans.

Untethered from common knowledge, individuals have both more and less agency than in closed systems and simple evolutionary systems with listable state spaces. Think of moving through the adjacent possible like tumbling into new and unforeseeable dimensions. Which new dimensions individuals tumble into are enabled by the constraints of the previous step, but in a manner unformalizable within the previous Umwelt. Take, for example, the development of Feynman diagrams. Feynman diagrams are not entailed by quantum theory, yet they are predictive of quantum interactions. In Richard Feynman’s words, “[M]ost of [developing Feynman diagrams] was first worked out by guessing” (Feynman 1965:172). More recently, a new duality in particle scattering amplitudes not entailed from quantum theory was discovered accidentally, when both equations were written down side-by-side (Dixon et al. 2022).

Evolutionary social science trucks in (predictable) trajectories, derived from or theorized about observed dynamical processes. In creatively evolving systems, deriving trajectories involves defining a generalizable transformation between any two sets \(A_{t+j},A_{t+k}\) in the sequence \(\mathfrak {A}\) that is simpler than including all the underlying processes, decisions, relationships, history, and decision contexts. It is not obvious how to define such a trajectory, as creative evolution is a nonergodic recursive process. As the system evolves from time t to time \(t+1\), what structures, information, relationships, worldviews, rules and so on are kept and reinforced from time t and which are disrupted or deleted cannot be predicted by anyone at time t.

Still, people do coordinate with each other: prices form, norms and useful institutions emerge, art styles alter the tastes of a generation. Nonergodic choice processes are not scientifically indescribable, but they are unsuited to analysis using standard logics. Choice processes that hinge so much on particulars cannot be divorced from particulars, if we are to understand them in a reasonably scientific manner. Understanding how individuals use knowledge in creatively evolving systems informs us as to how people innovate, prices form, and institutions emerge.

3.1 Local knowledge as a mechanism for creative evolution

Local knowledge is uniquely defined in creatively evolving systems. It is plausible that the use of local knowledge might have different and even unintuitive implications in such systems. Namely, we demonstrate that local knowledge itself can be a mechanism for creative evolution – that the inherently fragmented nature of knowledge in creatively evolving systems can, when individuals interact with others, spur movement into the adjacent possible.

We focus on language as an arena for imaginative and sometimes unintentional innovation. Gödel’s Incompleteness Theorem limits the applicability of all mathematical systems and discrete symbolic systems (Doria 2017; Davis 2004). Hamming (1997: 310-11) claims that Gödel’s theorem does not apply to language, because there are an indeterminate number of possible meanings for each word (symbol). However, it is not language as an entity that somehow gets around Gödel’s restrictions; it is language’s purpose and the way it is used which uniquely advantage it as a way to explore the adjacent possible.

We can understand why by applying the theory from Section 2. Suppose we have two internally-consistent languages \(L_{i},L_{j}\) associated with the known-worlds \(\Omega _{i}\ne \Omega _{j}\). \(\Omega _{i}\ne \Omega _{j}\Rightarrow L_{i}\ne L_{j}\) (Doria 2017). Words may have multiple meanings in each language; nuance, affect, context, characteristics of the speaker, time, place, and other factors, once taken into consideration, render unique meanings to words however they are used. We assume the individual employing their known-world language, at the very least, knows the unique meanings of their words and uses them consistently.

Thus, known-world languages \(L_{i}, L_{j}\) are each individually modellable as discrete symbolic systems. Does this contradict Hamming? No: Hamming was referring not to individual languages but to language as a social structure evolving over time. Fix the time aspect, then consider known-worlds \(\Omega _i, \Omega _{j}\) at some time t. Then, if \(\Omega _{i}\ne \Omega _{j}\), an interaction between i and j requires each to project meanings from their language onto the language of the other. This might work up until a symbol combination for which there is no assigned meaning in the other language. This could be an entirely new word, like the word “grok” before Robert Heinlein invented it in 1961, or it could be a known word or phrase coupled with a new combination of factors, like how “spill the tea” meant “let’s gossip” when used in memespeak in the 2010s, spreading to normal speech by the 2020s.

If we suppose the listener recognizes that they cannot classify a new symbol or novel combination in their own language, then the listener is forced to invent a meaning. Take “money laundering”. A teenager who has never been exposed to real-life or fictional crime may immediately imagine someone dumping a basket of dirty money into a washing machine. No one literally launders money, but this teenager can imagine it. They may believe for an unreasonably long time that this somehow renders money untraceable. The teenage listener has essentially expanded their possibility space by entertaining the idea that you can render money untraceable (by some means). It is not the means intended by the speaker, so the possibility that the listener entertains exists in neither \(\Omega _{i}\) nor \(\Omega _{j}\) but is created by the fact that \(\Omega _{i}\ne \Omega _{j}\).

We might counter that this belief is in error, but it does not matter if the possibility that this teenager i entertains is true or false in \(\Omega _{i,t}\). For the existence of this possibility may open up possibilities in i’s adjacent possible that are realizable in some future \(\Omega _{i,t+k}\). Consider the following sequence of ideas, which represent a movement through adjacent possibles: (money can be literally laundered to be untraceable) \(\rightarrow \) (smart washing machines and appliances are internet-connected) \(\rightarrow \) (it might be possible to hijack bank accounts harvested from emails and passwords stored insecurely by smart appliances) \(\rightarrow \) (it is possible to then render digital money untraceable by fragmenting large chunks into stolen accounts first then retrieving the lump sum incrementally, at a later date.)

A new process to launder money has revealed itself by being first “lit up” in the adjacent possible of an incorrect belief about the laundering of money – a belief itself created by the use of inherently fragmented local knowledge, which led same-language speakers to comprehend a single phrase in their shared language differently.

That is, local knowledge by itself is a mechanism for creative evolution. Differential comprehension of the same observation enabled by the use of local knowledge in creatively evolving systems propels people into different and novel adjacent possibles. In creatively evolving systems, creative evolution can still occur through the intentional expansion (or search) of the possibility space, but it can also occur through the sheer fragmentation of knowledge due to its inherently local quality. We summarize this result as a Proposition.

Proposition 1

Local knowledge in creatively evolving systems is a mechanism of exploration of the adjacent possible.

We discuss at more length below the TAP equation of recombination at the system level, where individuals are envisioned as recombining new and existing tangible (things, people) and intangible (ideas, styles) system components to produce novel possibilities. The micro- and meso-level processes that underlie the system-level TAP process are numerous and yet to be specified. One such process could be a type of exaptation that produces novel possibilities through a genetic analogue of recombination (McClintock 1984; Ben-Jacob 1998), but it is only one of many potential micro- and meso-describable processes.

3.2 Causality in creatively evolving systems is ambiguous

Causality at both the system and individual level is complex. Individuals move through the adjacent possible by means of imagination, interaction, and reflection. Proposition 1 implies that reflecting upon interactions can alter an individual’s theory of reality, and therefore change what they imagine to be possible, causing them to strike out along different paths than they would have absent the interaction.

Therefore, we cannot in general characterize the particulars of creative evolution at the individual level as a trajectory or algorithm updating based on fixed rules over time. Populations are not split between entrepreneurial and non-entrepreneurial agents, with entrepreneurial agents having some additional choice infrastructure built around being “alert” to new possibilities. Anyone can – and everyone does – strike out into the unknown by exploring their adjacent possibles. Individual j making some minor and mundane discovery at time t, and then interacting with i at time \(t+1\), could instigate some major and unintuitive discovery by i at time \(t+2\). We summarize this as the following proposition:

Proposition 2

In creatively evolving systems, action is entrepreneurial action.

This result is in line with the work of Eric von Hippel, who found that users are responsible for developing the majority of important innovations to the instruments and products they use (von Hippel 1988, 2005, 2017). Furthermore, the direction of causality between the individual and the system is ambiguous. Representing creative evolution as a system-level movement through the adjacent possible links the evolution of the system-level possible \(\Pi \) (see Eq. 7) with the creatively evolving worlds \(\Omega _{i}\) of embedded observers.

We will demonstrate (i) that without a complete representation of the environment, no observer embedded in a creatively evolving system – including theorists – can infer a causal direction to agent-environment adaptation through time, and (ii) that individual-to-environment adaptation under creative evolution is formally indistinguishable from environment-to-individual adaptation.

Proposition 3

Any predictive model \(M_{i,t}\) constructed at time t of the alteration of an agent i’s local knowledge from \(K_{i,t}\rightarrow K_{i,t+1}\) is indistinguishable from a process whereby the agent’s environment changes in reaction to the agent’s interaction with it. That is, from a causal perspective, we cannot discern between an agent causing the change they see in their local universe and an agent modifying their understanding of the universe through insight or imagination.

Proof

(Sketch) If agents and theorists – observers within the system – had complete representations of the environment, then they would be capable of inferring the causal directions between decision contexts and behaviors that enact certain outcomes. In such a case, classifiable features of the environment alter the efficacy of a given behavior in enacting a particular outcome, implying a causal direction of environment-to-agent.

Agent local knowledge sets do not in general entirely intersect by Eq. 6, nor do agent adjacent possibles by Eq. 9. Observers do not in general have complete representations of the environment. Suppose observers could distinguish between how agent behavior changes as a result of perceived changes in the environment and how the environment seems to change as a result of agent movement into the adjacent possible. Then the observer must have access to a set of elements \(\mathcal {K}\subset K_{i,t+1}\) to which it can predict agent i must adapt. For agent i, this is true when \(\mathcal {K}\subset K_{i,t}\), and for some observer j, if \(K_{i,t}=K_{j,t}\) or if \(\mathcal {K}\subset K_{j,t}\) where \(K_{i,t}\ne K_{j,t}\). By Eq. 6, we can rule out the former for systems under creative evolution. Suppose now that our system and its agent-observers move into their adjacent possibles. We see immediately that causality can only be pinned down for a portion of the possible, under very special conditions, namely, what we know at time t that remains true at time \(t+1\). Recall that movement into the adjacent possible is the componentwise set difference (Eq. 4) between how agent i’s world seems to change, \(\Omega _{i,t}\rightarrow \Omega _{i,t+1}\), how their partition function changes, \(P_{i,t}\rightarrow P_{i,t+1}\), and how their local knowledge changes, \(K_{i,t}\rightarrow K_{i,t+1}\).

By Eq. 4, \(k\in A_{i,t}\implies k\notin K_{i,t}\). We defined creative evolution as movement into the adjacent possible, where \(A_{i,t}\ne \emptyset \) (Devereaux and Koppl 2023). Then, in general, a set of elements \(\mathcal {K}\subset K_{i,t+1}\) has neither the property that \(\mathcal {K}\subset K_{i,t}\) nor that \(\mathcal {K}\subset K_{j,t}\). This demonstrates claim (i). For claim (ii), suppose we could formally distinguish for any general decision situation between agent-to-environment adaptation and environment-to-agent adaptation. But we determined in our informal proof of (i) that system observers cannot in general determine causality under creative evolution, specifically when creating predictive models that make claims about elements within or entangled with the adjacent possible. This demonstrates claim (ii). Claims (i) and (ii) imply Proposition 3. \(\square \)

In creatively evolving systems, the comprehensible looks to an embedded observer as if it is being created as it is understood. This causal ambiguity between observer and data, while a generally unacceptable theoretical position in the social sciences, has been an acceptable if controversial theoretical position in quantum mechanics and cosmology for some time (Smolin 2001).

We can construct the following Corollary from Proposition 3:

Corollary 1

Trajectories through the adjacent possible are not a priori definable ahead of their realization.

Axiomatic standard logics, which are used to derive trajectories through time, are not suited to understanding how individuals use knowledge in creatively evolving systems. A kind of logic compatible with movement into an unlistably diverse unknowable unknown would be more in line with the intuitionistic logic of constructive mathematics and computable analysis (Rosser 2012). Constructive mathematics proves the existence of possibilities through their explicit construction, not by using a version of the law of the excluded middle, as in standard logic.

Creative evolution pushes agents into an unknowable unknown ungraspable by current methods. Unknown unknowns in creatively evolving systems are uncountably infinite, and impossible to fully classify (Devereaux and Koppl 2023). Standard theories of unknown unknowns assume that all unknowable unknowns are completely describable and countable, as in Niven (2019), or are built on imprecise probability methods that rely on assumptions like an asymptotic distribution for large sample sizes, as in Kriegler (2009).

In creatively evolving systems, the sample space is not known, and as a direct corollary neither is there a concept of randomness with respect to some distribution. Creative adaptation is essentially different from standard models of adaptation in that embedded observer i seems to create new possibilities as they adapt. This subtle point turns on the embeddedness of observers in creatively evolving systems, wherein the knowable (epistemological) cannot be distinguished from what is physically possible (ontological). In creatively evolving systems, neither the embedded observer i nor the theorist can know what i adapts to before they have adapted to it.

Intuitionistic logic suggests that the modeling environment most suited to describing the production of novelty in creatively evolving systems is one that can represent systems with novelty-generating dynamics. In this we follow the Wolfram–Chomsky schema of dynamical system types, where type I systems are static, type II are characterized by limit cycles, type III by chaos, and type IV by self-organized complexity (Prokopenko et al. 2019; Markose 2019; Wolfram 2002). While there is some disagreement as to how to categorize types (what looks like type III behavior can include type IV behavior – see Letourneau 2010), supposing a perfect categorization method, any model of a creatively evolving system must be type IV.
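
For concreteness, the sketch below (in Python) iterates elementary cellular automaton Rule 110, a canonical type IV system that has been proven computationally universal; the grid width, step count, and seed are arbitrary choices for display, not part of the schema.

```python
# Rule 110: a canonical type IV elementary cellular automaton, known to be
# computationally universal. Width, step count, and seed are arbitrary.
WIDTH, STEPS = 64, 24
RULE = 110  # bit k of RULE encodes the successor of 3-cell neighborhood k

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # seed: a single live cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (4 * cells[(i - 1) % WIDTH]
                  + 2 * cells[i]
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Type IV dynamics of this kind produce persistent, interacting localized structures whose long-run behavior is undecidable in general, which is what makes the class a plausible minimal home for novelty-generating systems.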

Computational complexity due to combinatorial richness permeates all levels of analysis. The individual agent i is hampered by it, but so is any third party j standing outside the agent’s decision context, looking in. Computational complexity is a problem inherent in the state space that cannot be compressed away, and it implies that local knowledge and adjacent possibles are inherently fragmented. A direct implication of the fragmentation of local knowledge sets and local adjacent possibles is that j is irreconcilably at an epistemological disadvantage relative to i when considering a problem faced by i in i’s decision context.

3.3 Embedded observers can “agree to disagree”

Creative evolution from the perspective of the embedded observer is not simply an alteration in environment or observables that, if public, can be used as a correlating device to theoretically standardize knowledge among all people as “common knowledge.” Demonstrating common knowledge is essential to the mathematical tractability of certain strategic decision-making problems. Aumann (1976) tamed the infinite regress problem of common knowledge, and Milgrom (1981) demonstrated the need for a public correlating device for a well-defined notion of common knowledge. Samet (1990) and Aumann (1999a, 1999b) demonstrate that optimization in strategic decision-making implies that individuals in the system are able to list all possible states of the system, so that they can apply a well-defined partition function – a classification scheme that associates outcomes to actions to strategies – over all possibilities in the entire system \(\Omega \).

However, unlistability in creatively evolving systems prevents well-defined partitions over the entire system \(\Omega \) (Devereaux and Koppl 2023). Individuals are limited to their known-worlds, \(\Omega _{i}\), which are never completely overlapping. Knowledge is not just local in creatively evolving systems – it is fragmented, and therefore not generally resolvable into common knowledge. As all observers are embedded in the system, there is no third-party or external observer that can construct or serve as a public correlation device. Neither would somehow aggregating all extant knowledge, even tacit knowledge, result in a true conception of reality, as that would imply that the system-level adjacent possible is empty and that the system is stationary.

We summarize these results in the following Proposition:

Proposition 4

In creatively evolving systems, individuals can “agree to disagree.”

Corollary 2

Creatively evolving systems are multi-agent systems wherein embedded observers have their own logic and temporality.

Corollary 2 may seem obvious to evolutionary economists, who in general reject representative agent constructs, but it is important to note that Corollary 2 is stronger than a mere rejection of aggregation over a single agent’s characteristics. Because it derives from Proposition 4, it suggests that it is not possible to approximate away specific agent behavior without losing a good deal of explanatory power. If individuals can agree to disagree, then omitting viewpoints means (unjustifiably) taking a side. Since modelers are also embedded in the system, it follows that we can agree to disagree with modeled aspects of other agents without being able to claim (onto)logical supremacy. Humility and curiosity are valuable characteristics of theorists of creatively evolving systems, who should spend as much or more time describing and cataloging aspects of observed systems as they spend modeling future states of the system.

4 Decision-making and innovation in creatively evolving systems

In this section, we expand upon the results in the previous section to derive implications for how individuals plan, interact and innovate, and how institutions and governance structures emerge from plans, interactions, and innovations.

4.1 Planning and problem-solving in creatively evolving systems

The planning process in creatively evolving systems must cope with new parts of reality being revealed in the process of plan-realization. Since possibilities are created in the process of interacting with the system and other individuals, embedded observers are aware that they do not know everything they need to know in order to plan optimally (or often even satisfactorily), but they are also aware they might fill in missing steps of their plan in the process of executing it. Decision-making in creatively evolving systems encourages individuals to expend fewer computational resources at the outset of planning: missing steps, ill-formed plan-components, and vague or contradictory goals are fine, and will be filled in and sorted out down the road.

Pragmatic decision-making in creatively evolving systems tolerates the use of a basket of methods for planning and solving problems that do not necessarily need to be consistent with each other. Inconsistency is not a fatal flaw in creatively evolving systems, because one’s understanding of reality changes through time. What seems consistent now may become inconsistent later, and vice versa.

Decision methods in creatively evolving systems diverge from standard approaches for navigating opportunity landscapes (e.g., Kauffman and Levin 1987; Kauffman and Weinberger 1989). Modeling innovation and the discovery of opportunities as a boundedly rational search over a certain topological space constrains the modeler to a deterministic set of possible searches generated by the topological space and specified decision procedures. As noted in recent controversies surrounding the use of opportunity landscapes for modeling discovery and innovation, it is not obvious how to adequately specify an opportunity landscape in the first place (Bryce and Winter 2009; Felin et al. 2014), nor how to specify the decision procedure(s) for searching the landscape (Winter 2011, 2012), nor, in particular, what it means to create opportunities on a precomputed landscape (Alvarez and Barney 2007).

Felin et al. (2014: 274) note that existing opportunity landscape methods “require every observable in a given environment (i.e., the possible “space” or landscape) to somehow be listed and classified, and assigned its proper uses and functionalities.” In creatively evolving systems, innovation and discovery are movements into an adjacent possible. Full listability of all salient observables and the correct theory of their uses and functionality is not available to the deciding individual at any point in their decision process. Decision-making individuals do not just observe and compute over opportunities and manage constraints vis-à-vis their computational resources; they also create new opportunities that change their perceived landscapes. In creatively evolving systems, there is no map; there be dragons in all directions (Searle 2018).

Decision-making in creatively evolving systems goes beyond coping with cognitive and computational limitations. As individuals explore their adjacent possibles and interact with other individuals and evolving emergent social structures, their set of enabling constraints changes and opens up new portions of the adjacent possible that may be salient to ongoing problems and plans. There were a great many steps between the invention of the telegraph in 1774, Morse’s 1837 invention of the Victorian text-messaging system known as Morse code, and Morse code’s ubiquity by the 1890s (Hargadon 2003). Morse code and its ubiquity were not foreseeable in 1774, though its development and emergence were enabled by the constraints of the telegraph. Morse code’s recombinations have not yet been exhausted: a Morse code-to-text generation software could allow people with motor-neuron diseases like ALS to communicate (Niu et al. 2019), and a Morse code system with haptic feedback could enable people with both deafness and blindness to access the Internet (Norberg et al. 2014). Neither Morse nor anyone else can list the full set of possible uses of Morse code.

Individuals in creatively evolving systems cannot cope with the novelty they encounter in their adjacent possible using axiomatic logic, including Bayesian probabilistic methods (Devereaux and Koppl 2023). Planning under intuitionistic logic has different implications than planning under standard logic. A third-party assessment of the planning process (as in Camerer et al. 2003; Thaler and Sunstein 2009) becomes harder to justify, as do claims of biased – or, more accurately, the possibility of unbiased – behavior. By Corollary 1, trajectories through the adjacent possible are not a priori definable ahead of their realization. How, then, does one define the trajectories agents fail to adhere to when bias keeps them from maximizing their utility? Even cognitively perfect agents cannot lift the indelible veil between themselves and the adjacent possible.

Apparent biases might, however, be indicative of decision-making behavior that is more attuned to a creatively evolving system than to an ontologically static system. Time compression, where agents see temporal distances farther in the future as closer together than temporal distances nearer to the present (Zauberman et al. 2009), could be explained by agents not seeing the farther distances as salient to their decision-making now. Their future selves have the benefit of the intervening possibles to address decisions as time progresses, and in a combinatorially rich world the size of the possibles grows at super-exponential rates. Heuristics have been found in laboratory settings to perform better for agents than brute-force optimization with respect to inter-temporal discounting (Marzilli Ericson et al. 2015). Apparent inter-temporal intransitivity falls out of cross-sectional incompleteness and theory updating under creative evolution. Behavior that is logically rational in \(\Omega _{i,t}\) and behavior that is logically rational in \(\Omega _{i,t+1}\) may be at odds with each other or with the agent’s theory underlying \(\Omega _{i,t+1}\). Creative evolution makes pricing how current decisions affect the future far trickier. Present bias and decision-context bias or “framing” are also implied by inter-temporal incompleteness. The individual under (nonergodic) creative evolution is not one self, but many, over time (Koppl et al. 2015; Devereaux and Koppl 2023; Devereaux 2024, forthcoming).

Herbert Simon explained that adaptive change was the heart of the problem to be solved in describing biological and human realms, since adaptive change was “as much governed by a system’s environment as by its internal constitution” (Simon 1990b: 2). Simon’s solution is, “to describe, predict and explain the behavior of a system of bounded rationality, we must both construct a theory of the system’s processes and describe the environments to which it is adapting” (Simon 1990b: 6–7) which, in his estimation, is mostly a problem of coping with the computational limitations of modeling hugely combinatorially rich systems.

Ecological rationality, built upon the idea of Simonian bounded rationality, is equivalent to adaptive change in the Simonian sense, but focuses primarily on computability problems and how agents use computationally cheap decision procedures instead of computationally costly ones, all other things held equal (Gigerenzer 2000) – what theorists in this literature call “fast and frugal heuristics” (Gigerenzer and Todd 1999). Ecological rationality concerns itself with cataloguing environmental and choice contexts to create a language of cues and signals that compresses and reduces the complexity and combinatorial richness of the choice context. Sussing out which actions to associate with which signals in order to acquire which results becomes the sum total of human rationality. The human becomes an “intuitive statistician” who notices signals and cues if they are strong, frequent, and/or salient enough (Felin and Koenderink 2022).

Ecological rationality recognizes that individual environments can be subjectively understood (Hertwig et al. 2022), but its assumptions are too strong to support decision-making in creatively evolving systems. Ecological rationality assumes that environments are classifiable in terms of which context supports which heuristic behavior, and that observers inside and outside the decision context can reason about individual and system behavior using the same classification (Gigerenzer and Gaissmaier 2011). In this framework, individuals can notice pre-existing salient cues, but cannot alter the salience of cues in their environment by moving into their adjacent possible.

“Noticing” in the ecologically rational sense means interpreting one’s world \(\Omega _{i}\) and states \(\omega \in \Omega _{i}\) as knowledge \(K_{i}\) that reflects the actual environment \(\Omega \) as closely as possible. Alertness in this framework means, essentially, having a priori a better or more salient knowledge set \(K_{i}\) for the problem under consideration, which implies an interpretation of one’s world \(\Omega _{i}\) that is closer to \(\Omega \) in relevant ways. There are no accidents or surprises involving unknowable unknown signals and cues – all salient cues have been catalogued. There is no causal ambiguity in ecologically rational action. Decision-making in creatively evolving systems is fundamentally underspecified by theories of ecological rationality.

4.2 Innovating in creatively evolving systems

By Proposition 2, all action is entrepreneurial action in creatively evolving systems, but the question remains as to how purposive innovators use knowledge in creatively evolving systems.

Schumpeter (1911 [1934]: 15) characterized the evolution of the technosphere as combining “things and forces within our reach.” “Within reach” means graspable, though not necessarily predictable. “Things” can be tangible, but they need not be – actions, plans, and organizational schemes can be things. Since local knowledge is a mechanism for exploration of the adjacent possible via interaction with others by Proposition 1, innovators benefit from systems that bring them into contact with the works of other people relevant to their ongoing project. We discuss the emergence of knowledge commons and explain open-source software movements in the next section, but it is enough here to note that innovation in creatively evolving systems is a social act, due to local knowledge and the steep cost of duplicating whole-cloth the hard-won innovations of others.

Entrepreneurial alertness, under creative evolution, is closer to the concept of “insight” than to some inherent individual talent for noticing. The predictable search analogy relies on some stock or quality of “alertness” that lets the entrepreneur better notice cues and opportunities in some mappable territory via some definable algorithm; in creatively evolving systems, such alertness is not only scarce, it is unquantifiably scarce, contra McCaffrey (2014).

Suppose an entrepreneur has to combine four things to produce some innovation, but does not have access to the recipe ahead of time. The entrepreneur might end up discovering an order of operations, but they do not know this order ahead of discovering it. Insight is a hunch or idea that cannot be explained using a logical or algorithmic process on known opportunities. Insight initiates a tumbling into the adjacent possible. As the agent moves into their adjacent possible, more opportunities make themselves known, like fires at the perimeter of the known lighting up the not-yet-known. Noticing is a passive act. Insight – lighting fires – is an active act.

Where do insights or ideas come from? To answer this for creatively evolving systems, as per Proposition 1, we must place the entrepreneur in society. While innovations can be generated through imagination and introspection, additional paths through the adjacent possible are only traversable via interaction, such as market and product testing. See, for instance, the “innovation communities” in von Hippel (2005: 72), consumer-driven innovation in von Hippel (2017), and the “lessons” for successful open-source innovation and development in Raymond (1999).

Extending our knowledge discussion from the previous section, analogies of alertness in creatively evolving systems involve the innovator being surprised. If an innovator were not surprised, then they could have deduced the novel possibility from within their existing Umwelt. Exploring the entire scope of one’s adjacent possible is a social act – only through repeated interaction can some elements of possibility or imagination “light up” in one’s adjacent possible (Felin and Koenderink 2022). A theory of innovation in creatively evolving systems must be a theory of surprise, the kind of surprise that cannot be assumed away or reduced to some probability calculus. Its dynamics are undecidable and require paradigms that can cope with undecidability, like the constructive and computable mathematics wherein creatively evolving systems are computationally universal (Prokopenko et al. 2019; Bennett 1990; Markose 2019).

Particulars are essential to the success of an innovation, not just in the entrepreneur’s adjacent possible, but in the adjacent possibles of adopters. Given the fragmentation of knowledge in creatively evolving systems, it is impossible for entrepreneurs to know whether their innovation can take hold within the system. Even excellent ideas that survive the testing stages of development among smaller, specific groups might not be mass-adopted. Take the example of the “picturephone” project undertaken by Bell Labs in the mid-20th century. The picturephone was conceptualized in 1910; the first workable prototype appeared in 1927, and the project, delayed by World War II, was explored further in the 1960s (Noll 1992). Predictions of uptake were around 50,000 subscribers by 1975 and 1 million by 1980, with the picturephone overtaking ordinary phones by the turn of the century. The user base peaked in 1973 at around 100 subscribers, and by 1977, there were only nine picturephone subscribers in the entire system (ibid: 309-10). In the 2020s, picturephones are ubiquitous, but the sort developed by Bell Labs never caught on.

Bell Labs could not make a viable picturephone because massive network adoption and device affordability were not in their adjacent possible. Thomas Edison did not invent the pacemaker because transistors, invented later by researchers at Bell Labs, were not in his adjacent possible. Xerox’s PARC developed the Xerox Alto, one of the earliest personal computers (O’Regan 2015). Xerox never became a major player in the personal computing world because they had the wrong categorical framework – their knowledge partition \(P_{PARC,t}\) of their known-world \(\Omega _{PARC,t}\) – to recognize the potential in combining their graphical interface software with personal computers (Smith and Alexander 1999).

At the system level, innovation in a creatively evolving society expands the possibility space at a super-exponential rate. An abstract process in TAP for describing the expansion of the possibility space is the TAP equation (Eq. 11), introduced in Koppl et al. (2023).

$$\begin{aligned} M_{t}=M_{t-1}+P\sum _{i=1}^{M_{t-1}}\alpha _{i}\left( \begin{array}{c} M_{t-1}\\ i \end{array}\right) \end{aligned}$$
(11)

The economic intuition for this process recurs to Adam Smith’s notion of “tinkering and trade.” \(M_{t}\) is the number of combinations at time t, P is the master probability that a combination is successful, and \(\alpha _{i}\) is the relative probability that a combination of i things is successful. We assume that \(\alpha _{i}\) decreases with increasing i – that it becomes increasingly difficult to combine more things together successfully. The process in Eq. 11 is a candidate process for modeling technological growth. Valverde (forthcoming) constructs a patent citation network, where nodes are patents and edges link a parent patent to its descendants, and finds a power law distribution in the set of all direct and indirect descendants. Steel et al. (2020) link patent descent distributions to the TAP process detailed in Koppl et al. (2023), altering the TAP process in order to get a better sense of which combinations entail which other combinations.
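A minimal numerical sketch of Eq. 11 illustrates its dynamics. We assume a geometric decay \(\alpha _{i}=\alpha ^{i}\) purely for convenience – the text requires only that \(\alpha _{i}\) decrease in i – and the values of P and \(M_{0}\) are arbitrary toy choices:

```python
# A minimal sketch of the TAP recursion in Eq. 11, assuming the
# geometric decay alpha_i = alpha**i (our illustrative choice; the text
# only requires that alpha_i decrease in i).
from math import comb

def tap_step(m_prev: int, p: float, alpha: float) -> int:
    # Eq. 11: M_t = M_{t-1} + P * sum_{i=1}^{M_{t-1}} alpha^i * C(M_{t-1}, i)
    growth = sum(alpha**i * comb(m_prev, i) for i in range(1, m_prev + 1))
    return m_prev + round(p * growth)

m, p, alpha = 4, 0.5, 0.5
for t in range(1, 20):
    m = tap_step(m, p, alpha)
    print(f"t={t}: M_t = {m}")
    if m > 10**6:   # the hallmark TAP blow-up: 4 -> 6 -> 11 -> 54 -> ~1.6e9
        break
```

Under the geometric assumption the sum collapses, by the binomial theorem, to \((1+\alpha )^{M_{t-1}}-1\), which makes the super-exponential character of the process transparent: each increment is exponential in the current stock, so the stock is quiescent for a few periods and then explodes past any finite bound.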

4.3 Explaining institutions and other commons in creatively evolving systems

Sociotechnologies like social institutions enable individuals to coordinate with each other despite deeply fragmented knowledge and conflicting worldviews (Hodgson and Knudsen 2010: 171). Social institutions evolve from a complex interaction between individual and social practice and pre-existing sociotechnological and social artifacts (Bourdieu 1977). Institutions fundamentally extend the knowledge of the individual (Peltokorpi 2008; Rowlands 2010). We reason about the emergence of institutions as an extension of our discussion on the use of knowledge in creatively evolving systems. Individuals differ not only in the information they have at hand, but in their known-worlds, \(\Omega _{i,t}\). In some literatures, known-worlds are called “frames” – we will adopt this language temporarily as it is more evocative of the point we wish to make in this section.

Differing frames mean individuals can have the same core information but interpret it differently, leading to all sorts of potential coordination problems. A specific frame becomes more useful the more people adopt it, since a shared frame enables interaction and greater exploration of the adjacent possible, as discussed in the literature on coordinative institutions (Hodgson 2006). A single frame for everyone might seem like the most coordinative type of institution, but a single frame means some people would be forced into worldviews with which they disagree, reducing their overall utility. A single frame also reduces exploration of the adjacent possible through the mechanism of surprise. Standardization is valuable insofar as it reflects individual beliefs; diversity breeds innovation, which breeds further diversity. The dual character of institutions, to be both coordinative and diverse, is well known in the literature on institutional emergence (Ostrom 2009) and cultural evolution (Herrmann-Pillath 2024).

Social institutions emerge in creatively evolving systems as islands of agreement in a sea of disagreement. Institutional variation both results from and enables movement into the adjacent possible in a system wherein the frames of individuals never fully intersect. Agreement in creatively evolving systems is not abundant or automatic – it is scarce and difficult to construct. Understood in this fashion, wherein the infinite regress of common knowledge is unattainable, we can explain why people undertake the costly establishment of firms, families, and governments. Such institutions are not solutions to grand optimization problems; they arise because grand optimization problems are unsolvable. Individuals need not completely agree on why a social institution is valuable to benefit from its existence.

The explanation of how particular institutions arise in creatively evolving systems, like the explanation of individual decision-making, hinges on particulars. The TAP process in Eq. 11 can express the growth of the system, but not which plans will succeed and fail, the composition of the technological landscape, how people live and work together through time, or their form of government. There are certainly aspects of creative evolution that can be analogized to biological evolution, in that biological evolution is also creative (Felin et al. 2014). But models of evolution have veered away from the particulars of an evolutionary path toward formal abstractions of inheritance and variation. These abstractions are not adequate to explain any particular evolutionary path in biology (Montévil 2022) or in economics, as they abstract away from the facts that determine a particular evolutionary path. As in how disparate communities solve common-pool resource problems, the specifics of how institutions self-organize carry much of the explanatory power (Ostrom 1990, 2005). Institutional persistence in creatively evolving systems has analogies with (co)evolutionary niche construction (Odling-Smee et al. 2003).

We would especially like to address the emergence of knowledge commons as an example of the applied economics of creatively evolving systems. The emergence of knowledge commons, like open-source code libraries and other knowledge repositories, is predicted by how people use knowledge in creatively evolving systems. Knowledge repositories increase agreement and standardization in a general user base, and the value innovators receive from a thriving commons may exceed the expected return from monetizing their innovation. This implication of the use of local knowledge in creatively evolving systems is in line with the work of von Hippel (2005: 17).

Knowledge commons for freely revealed innovations arise in creatively evolving systems as dynamic repositories for beneficial aspects of the local knowledge of others. Organized around certain standards, such repositories are powerful attractors to innovation, even innovation provided for free, as in open-source software development (Raymond 1999; von Hippel 2005: 71) and the gongkai open hardware community (Huang 2014; Hsing 2018). The advantages of standardization and of access to local knowledge across communities, particularly communities with niche skills and interests, are incentive enough in a creatively evolving system to explain why individuals freely contribute to and manage such repositories. In academia, research is valuable not just in its initial published state, but in how it spurs and motivates the research of others. While academics earn prestige from publication, there are only a few prestigious journals in each field; most publications occupy a middle tier that filters for basic quality but does not lend much prestige to contributors. There is no remuneration for this middle tier and little prestige – so, why contribute? Because journals serve as an academic knowledge commons, one that is explicitly about innovating upon and recombining the ideas of others.

5 Summary of ideas and further work

We have presented a condensed version of what could be a much longer and deeper treatise, meant to demonstrate the promise of portraying economic systems as what they are: creatively evolving systems. Rather than dancing around difficult topics like innovation, surprise, and disagreement, economic theory can, with some alterations to how its logic is approached, investigate these topics directly – indeed, they form the primary core of the study of creatively evolving economies.

As we have shown, our results derive nearly entirely from reworking the epistemological basis of economic theory in a way that explicitly embraces the theory of the adjacent possible. We then derived four Propositions and two Corollaries from explicitly considering how individuals would use knowledge in creatively evolving systems. Upon this new foundation we addressed decision-making and problem solving, innovation, and sociotechnological emergence, as exemplified by innovators – open-source software developers and academics, in particular – freely revealing their innovations in a knowledge commons.

Dopfer et al. (2024: 1) discuss the areas of evolutionary economic analysis, and while we touch upon each of these areas in the main body of the text, we address evolutionary political economy (EPE, not to be confused with entangled political economy) the least. This is not a judgment on the relative importance of EPE, but a concession to length constraints, as treating creatively evolving political economy (CEPE, perhaps) could be its own paper.

However, we can put up some signposts for CEPE in this section as a “future direction” for applying the economics of creative evolution. CEPE would address policy-making and governance under creative evolution. Crafting policy requires having a good account of why an observed behavior or effect deviates from a better, attainable behavior or effect, with “better” meaning welfare-enhancing by some metric. As discussed above, what look like biases in a simple static or evolutionary system may look like reasonable behavior in a creatively evolving system.

If what look like biases in simple economic systems look like reasonable behavior in creatively evolving systems, then it follows that other policies which seem reasonable in simple economic systems may not be warranted, or may even be outright pernicious. Consider a cousin of the “nudge” idea: the “nuzzle,” a concept hatched by complexity economists to explain how individuals manage uncertainty in complex social systems. Nuzzling is when individuals stay close to big players like religious and cultural institutions in order to “reduce uncertainty.” Individuals stay close enough to adopt the rules of the big players but far enough away to occasionally strike out on their own, thus incorporating innovations with minimal disruption (Room 2016: 118). In this theory, the State is “the big actor par excellence” and the only one providing a sufficient “degree of stability and certainty within which capitalist entrepreneurs and their ‘animal spirits’ could flourish” (Room 2016: 119, referencing Wagener and Drukker 1986: 38-9).

Institutions emerge as islands of agreement in a sea of disagreement; they are useful insofar as they provide the “best” available alternative, even when institutions better aligned with the worldviews of their participants are conceivable. Institutions that are slow to change, or brittle when they do change, may drift farther and faster from the changing worldviews of their participants. Governance is a sociotechnology whose functions, usefulness, and scope may serve a purpose in one form that could be better served in a different form (Trist 1978). TAP implies that sociotechnologies, like other technologies, are developed as social systems traverse their adjacent possibles. Bigness can help standardize behavior and expectations across large and diverse societies, but it can also lock in suboptimal behaviors and expectations, both by crowding out or banning alternative forms of governance and by enshrining pernicious incentives created by the governance mechanism itself.

Governance is a (socio)technological problem to be solved, and the modern State is certainly not the only way to standardize behavior and expectations across large, diverse, and fast-changing societies. It is not clear what size, content, or governance mechanism is necessary to deliver a minimal set of rules, nor what these rules entail. In creatively evolving systems, we would expect governance ideas to have their own knowledge commons, where individuals and groups attempt to learn about, contribute to, and test better forms of governance. In open-source shell scripting and other open-source projects, there are a variety of governance forms that span the liberal-to-authoritarian spectrum (De Laat 2007; Nyman and Lindman 2013). Decentralized autonomous organizations have become explicit testing grounds for different forms of governance (Jentzsch 2016). The governance of knowledge commons themselves varies greatly, from the many crowdsourced wikis to the walled gardens of OpenAI’s ChatGPT and Google’s Bard (Forte et al. 2009).