# Optimal modularity: a demonstration of the evolutionary advantage of modular architectures

## Abstract

Modularity is an important concept in evolutionary theorizing but lack of a consistent definition renders study difficult. Using the generalized NK-model of fitness landscapes, we differentiate modularity from decomposability. Modular and decomposable systems are both composed of subsystems, but in the former, these subsystems are connected via interface standards, while in the latter, subsystems are completely isolated. We derive the *optimal level of modularity*, which minimizes the time required to globally optimize a system, both for the case of two-layered systems and for the general case of multi-layered hierarchical systems containing modules within modules. This derivation supports the hypothesis of modularity as a mechanism to increase the speed of evolution. Our formal definition clarifies the concept of modularity and provides a framework and an analytical baseline for further research.

### Keywords

Modularity · Decomposability · Near-decomposability · Complexity · NK-model · Search · Hierarchy

### JEL Classifications

D20 · D83 · L23 · O31 · O32

## 1 Introduction

Simon’s (1962) seminal work on complex systems emphasized the modular and hierarchical structure of most complex systems, both natural and artificial. The modular nature of complex systems refers to the nearly decomposable architecture of the interaction between elements. In modular systems, the great majority of interactions occur within modules and only a few interactions occur between modules.

Modular architectures offer evolutionary advantages because, in most instances, the effect of a change in a given module is confined to that module. Due to this localization of the effects of changes, the probability of a successful change is greatly enhanced. Each module can be improved more or less independently of other modules. For example, modular technologies allow for innovation in each module without the risk of creating malfunctions in other modules. Similarly, modular organizational designs allow different departments to change their operating routines without creating problematic side effects in other departments. More generally, the typical feature of modular systems is that they can be improved by random mutation and natural selection more easily than other complex systems.

The NK-model, originally developed by Kauffman (1993) and generalized by Altenberg (1994), is a common tool to analyze the evolutionary dynamics of complex systems, including organizations and technologies (Levinthal 1997). In the economics and management literatures, several simulation studies have been carried out to analyze the conditions under which modular systems favor adaptation compared to other complex systems (Frenken et al. 1999, Marengo et al. 2000; Ethiraj and Levinthal 2004; Dosi and Marengo 2005; Brusoni et al. 2007; Rivkin and Siggelkow 2007; Ciarli et al. 2008; Geisendorf 2010; McNerney et al. 2011; cf Bradshaw 1992; Baldwin and Clark 2000). These studies tend to confirm the central idea that modular systems are improved by random mutation and natural selection at a faster rate than other complex systems. Yet, the exact results of the simulation exercises differ across these studies, as they utilize different assumptions regarding search behaviour and memory constraints, as well as differing definitions of modularity.

In the following, we propose a formal definition of modularity that distinguishes it from decomposability. Though many use the terms decomposability and modularity interchangeably, we argue that modular systems differ from decomposable systems; while decomposability requires a full decomposition of a complex system into subsystems, modularity requires a system architecture in which subsystems are still connected via interface standards. Conceptually, the problem with the decomposability concept is that a decomposable system is no longer one system, but simply a collection of several smaller systems. As a representation of a technology, or an organization, it falls short in conceptualizing the fact that elements in a technology or organization always act together and are collectively subject to selection. The idea of a decomposable system is thus better understood as an analytical construct or as an approximation of reality rather than a precise representation of a real-world system. The concept of modularity overcomes these conceptual issues. A modular system cannot be partitioned into completely independent subsystems, but rather contains nearly independent subsystems (modules) which are connected via interfaces. These interfaces are elements of a system that connect subsystems such that the only interdependencies between the subsystems are via the interface standards. This definition corresponds quite closely to the concept of near decomposability introduced by Simon (1962, 1969, 2002), as well as the more recent notion of modularity in complex networks (Newman 2006).

The applied literature on modularity has drawn similar distinctions between modularity and decomposability. For example, Baldwin (2008) compares perfect modularity (similar to our definition of decomposability) with near decomposability (similar to our definition of modularity). Langlois and Garzarelli (2008, p.128) differentiate between decomposable systems and modular systems which are “nearly decomposable system that preserves the possibility of cooperation by adopting a common interface”. This paper, then, is best seen not as creating a novel distinction, but as adopting an existing distinction and expressing it formally.

We will argue below, using a generalized NK framework developed by Altenberg (1994), that modular systems, defined in this way, can be optimized globally given the right sequence of problem-solving. Though a decomposition strategy is not feasible, modules can be optimized independently as long as interface standards between modules are left unchanged. This means that, contrary to decomposable systems, optimization of modular systems requires *hierarchical* problem-solving, where interface standards are defined first, followed by module design within the constraints of the standards.

Following this definition, we will proceed to derive the optimal level of modularity for systems of a given size, where the optimum is defined by the search time required for global optimization. This result is shown to be extendable to multi-layered hierarchical complex systems, where modules are defined recursively. We find this extension important since hierarchical complex systems are ubiquitous in technological artefacts and organizational design, yet have not been analyzed thus far in the NK-modelling framework.

The reader will note that the model we propose is quite simple. For example, it adopts a global search strategy. This is done purposely to create a framework and a baseline. The framework offers comparability of results derived from different assumptions; the baseline of a simple model provides an anchor for comparison with more complex models. This approach of using a simplified model as a baseline is common in NK modelling. The purpose of this model, then, is not to make the empirical claim that it reflects actual behaviour; rather, it should be interpreted as a tool for integrating and reconciling various models of modularity. The importance of creating common frameworks is discussed in terms of the ongoing debate as to whether over-modularity has evolutionary advantages.

## 2 Decomposability and modularity in a generalized NK-model

Consider a system composed of *N* elements (*n* = 1,...,*N*). For each element *n*, there exist A_{n} possible states. The number of possible system designs that make up the system's *design space* (Bradshaw 1992) is given by the product of the number of possible states for each element:

$$D = \prod_{n=1}^{N} A_{n}$$

For simplicity, we assume A_{n} = A for all *n*, which implies that the size of the design space equals A^{N}.

We assume that each pair of elements is either interdependent or not. Interdependence between a pair of elements means that, if a mutation is carried out in one element, the functioning of the other element is also affected. Decomposability means that a system can be partitioned into non-overlapping subsystems such that no interdependencies exist between subsystems. This implies that subsystems can be optimized independently and in parallel. The time required to globally optimize a system is then bounded by the size of the largest subsystem.

For example, consider a system with *N* = 5 and a binary design space (A = 2). The number of possible designs is 2^{5} = 32. If the functioning of all elements is dependent on the state of all other elements, global optimization requires exhaustive search: one has to evaluate the fitness of all 32 possible designs to determine which design has the highest fitness. Assuming one evaluation per time period, the search time is 32 periods. Now, consider the case in which the functioning of the first and second elements are interdependent, and the functioning of the third, fourth and fifth elements are interdependent. In this case, the subsystem containing the first and second elements can be optimized independently from the subsystem containing the third, fourth and fifth elements. Since search can proceed in parallel, the search time required to globally optimize the system is bounded by the size of the largest subsystem, in this case 2^{3} = 8 periods. The computational complexity of a system, as defined by the search time required to globally optimize a system, can then be expressed as a function of the number of elements of this largest subsystem (three in this example), also known as the cover size of a system (Page 1996).
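The arithmetic of this example can be checked in a few lines; the following sketch (our own illustration, not code from the paper) computes the two search times:

```python
from itertools import product

A, N = 2, 5

# Non-decomposable case: global optimization requires exhaustive search
designs = list(product(range(A), repeat=N))
exhaustive_time = len(designs)            # 2**5 = 32 periods

# Decomposable case: subsystems {1, 2} and {3, 4, 5} optimized in parallel;
# search time is bounded by the size of the largest subsystem (cover size 3)
subsystem_sizes = [2, 3]
parallel_time = max(A ** s for s in subsystem_sizes)

print(exhaustive_time, parallel_time)     # 32 8
```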

### 2.1 Altenberg’s generalized NK-model

To formally model modular systems, the original NK-model as developed by Kauffman (1993) has to be generalized to allow for interface standards. The distinguishing feature of modular systems is that some elements of the system (the interface standards) have no direct contribution to the system’s fitness, but solely mediate the interdependencies between modules. However, in the original formulation of the NK-model by Kauffman, elements in a complex system by definition have a fitness value. As such, the Kauffman-type of NK-model is ill suited to deal with modular systems. The generalized NK-model developed by Altenberg (1994) allows a more general treatment in which elements are not required to have inherent fitness values, which allows the inclusion of mediating elements.

In Altenberg's generalized NK-model, a system consists of *N* elements (*n* = 1,...,*N*) and *F* fitness functions (*f* = 1,...,*F*). In biological systems, for which this generalized NK-model was conceived, an organism's *N* genes are the system's elements and an organism's *F* traits are the selection criteria. The string of genes is collectively referred to as an organism's genotype, while the set of traits is collectively referred to as an organism's phenotype. A single gene affects one or several traits in the phenotype, and a single trait is affected by one or several genes in the genotype. The vector of genes affecting a trait is called a polygeny vector, while the vector of traits affected by a gene is called a pleiotropy vector. The structure of epistatic relations between genes and traits is represented in a "genotype-phenotype map", a matrix of size *F* · *N* with:

$$m_{fn} = \begin{cases} 1 & \text{if element } n \text{ affects function } f \\ 0 & \text{otherwise} \end{cases}$$

In technological systems, one can think of a product consisting of *N* elements and the *F* functions it performs, i.e. the quality attributes taken into account by users (Frenken and Nuvolari 2004). The string of alleles of elements describes the "genotype" of a product, and the list of functions describes the "phenotype" of the product (e.g., speed, weight, efficiency, comfort, safety, etc.). The genotype-phenotype map of a product is generally called a product's architecture (cf. Henderson and Clark 1990).

The original NK-model can now be understood as a special case of the generalized genotype–phenotype matrices. Three restrictive assumptions are operative in the original NK-model, namely N-F symmetry, N-F reflexivity, and polygeny symmetry. N-F symmetry is the condition that the number of functions *F* equals the number of elements *N*. This assumption is necessary in order to enforce N-F reflexivity, which is that each element (n_{x}) affects its counterpart function (f_{x}); in terms of the genotype–phenotype matrix, this implies that the diagonal is always characterized by the presence of a relation between element and function. Polygeny symmetry is the requirement that each function is affected by the same number of elements. In the NK-model the polygeny of each function is assumed to be exactly *K* + 1, with pleiotropy of each element being determined randomly (with pleiotropy being on average equal to *K* + 1). Dropping these restrictions (i.e. allowing N \(\ne \) F, not enforcing n_{x}→f_{x} interdependencies, and allowing polygeny to differ from *K* + 1 for individual functions) provides a generalized NK-model of complex systems.
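The three restrictions can be made concrete by constructing an original-style genotype–phenotype matrix; a minimal sketch (the function name and construction are our own illustration, not from the paper):

```python
import random

def kauffman_matrix(N, K, seed=0):
    """N x N genotype-phenotype matrix with N-F symmetry (F = N),
    N-F reflexivity (element n always affects function n), and
    polygeny of exactly K + 1 for every function."""
    rng = random.Random(seed)
    m = [[0] * N for _ in range(N)]
    for f in range(N):
        m[f][f] = 1                                    # reflexivity: diagonal entry
        for n in rng.sample([n for n in range(N) if n != f], K):
            m[f][n] = 1                                # K further elements per function
    return m

m = kauffman_matrix(N=5, K=2)
assert all(m[f][f] == 1 for f in range(5))             # diagonal always present
assert all(sum(row) == 3 for row in m)                 # polygeny = K + 1
```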

Assume for simplicity that *A* = 2. Given the matrix specifying the system's architecture and the design space of all possible designs, the fitness landscape of a system can be simulated as in Fig. 1(b). A fitness landscape is a mapping of fitness values onto all possible designs. As with Kauffman's original NK-model, the fitness landscape is generated by randomly drawing a fitness value w_{f} from a uniform distribution between 0 and 1 for each possible setting of the alleles of the elements affecting a function *f*. Total fitness is then derived as the normalized sum of the fitness values of all functions:

$$W = \frac{1}{F}\sum_{f=1}^{F} w_{f}$$
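This construction can be sketched directly; the function names below are our own, and the random tables stand in for the uniform draws just described:

```python
import random
from itertools import product

def make_fitness(matrix, A=2, seed=0):
    """Generalized NK fitness for an F x N genotype-phenotype matrix:
    each function f gets a random table over the alleles of the elements
    affecting it; total fitness is the normalized sum over functions."""
    rng = random.Random(seed)
    affecting = [[n for n, x in enumerate(row) if x] for row in matrix]
    tables = [{s: rng.random() for s in product(range(A), repeat=len(a))}
              for a in affecting]

    def fitness(design):
        return sum(t[tuple(design[n] for n in a)]
                   for t, a in zip(tables, affecting)) / len(matrix)

    return fitness

# A Fig. 1-style architecture: element 2 is an interface standard affecting
# both functions; elements 1 and 3 each affect one function.
matrix = [[1, 1, 0],
          [0, 1, 1]]
w = make_fitness(matrix)
best = max(product(range(2), repeat=3), key=w)   # global optimum by exhaustive search
```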

### 2.2 Non-decomposable, decomposable and modular systems

Using Altenberg’s generalized NK approach, one can conceptualize interface standards as elements that do not have an intrinsic function, but solely affect functions that are associated with other elements (Frenken 2006). Figure 1 provides an example of a modular system, albeit the most elementary one. The second element affects both functions, each of which is associated with one of the other two elements. Once the choice of the second element is made (i.e. the interface standard), each function can be optimized independently by tuning the element affecting it. Depending on whether the standard is 0 or 1, the designer ends up in either 000 or 110 (circled in the figure).

Figure 2 provides three example systems with *N* = 9. System (a) is a non-decomposable system with *N* = 9 and *K* = 8 (maximum polygeny). Since the system is not decomposable, the time required to globally optimize the system equals the size of the design space. Assuming again that A = 2, the size of the design space is 2^{9} = 512. System (b) is a decomposable *N* = 9 system that can be decomposed into three equally sized subsystems with a polygeny of three (*K* = 2). The time required to globally optimize the system equals the size of the design space of each subsystem (2^{3} = 8), because search can proceed in parallel (Frenken et al. 1999). Of course, a decomposable system with subsystems of size one, which corresponds to minimum polygeny (*K* = 0), would only require two periods to globally optimize (in fact, there are no local optima in such systems). The optimal level of decomposability with regard to the search time required to globally optimize the system is thus a fully decomposable system with subsystems of size one.

System (c) is a modular system according to our previous definition, with three subsystems of size three mediated by three interface standards, yielding a polygeny of six (each function is affected by three elements in the subsystem and the three interface standards). The total number of elements in the new system, denoted by *N*′, is 12. Though the number of elements has been increased from nine to 12, the number of trials required to globally optimize the system *hierarchically* is much less than in case (a). For each set of interface standards, there exists an optimal setting of subsystems that can be found in 2^{3} = 8 periods (as for system b). As there are three standards, and thus 2^{3} = 8 settings of interface standards, the total time required adds up to 8 · 8 = 64 periods. Thus, comparing system (a) with system (c), an increase in the number of design dimensions in a system actually simplifies the search for its optimal solution. A modular system can thus be constructed by *increasing* the number of elements in the system such that the elements become organized in modules, thereby *decreasing* the complexity of a system in terms of the search time required for global optimization. Contrary to decomposable systems, the optimal level of modularity with regard to the search time required to globally optimize the system is non-trivial.
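The three search times can be verified in a few lines (a sketch, not code from the paper):

```python
A, N, S, M = 2, 9, 3, 3   # module size S, number of interface standards M

t_a = A ** N              # (a) non-decomposable: exhaustive search
t_b = A ** S              # (b) decomposable: largest subsystem, searched in parallel
t_c = A ** M * A ** S     # (c) modular: re-optimize modules per interface setting

print(t_a, t_b, t_c)      # 512 8 64
```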

## 3 Optimal modularity in two-layered hierarchies

We first investigate the case of two-layered hierarchies (precluding modules within modules) before proceeding with the generalized case of multi-layered hierarchies (allowing modules within modules) in the next section. The number of modules in which a system can be modularized varies between a single module (absence of modularity) and *N* modules (maximum modularity). The question becomes how many modules should be created as a function of the original size *N* of a non-decomposable system. Following our example of Fig. 2(c), we make three assumptions.

**Assumption 1**

*The number of interface standards in a modular system equals the number of modules in a modular system (given that modular systems contain two or more modules).*

**Assumption 2**

*An interface standard affects all functions, i.e., the pleiotropy of a standard equals F.*

**Assumption 3**

*All modules are of equal size, the possible sizes ranging from one module of size N (absence of modularity) to N modules of size one (maximum modularity).*

The first assumption is not crucial to our argument, and can be relaxed. The reasoning behind this assumption is that more modules require more interface standards. The second assumption defines a standard as an interface between all elements. As an interface standard affects all functions, all the fitness values of the non-modular system are redrawn to obtain the fitness values of the modular system. Note that this implies that the fitness values of a modular system are uncorrelated with the fitness values of the original non-modular system. The third assumption follows from the principle of cover size (Page 1996), which states that the search time to globally optimize a system is bounded by the size of the largest subsystem. Thus, optimal modularity requires partitioning the system into equally sized modules.

To minimize the number of trials required to solve the system, one needs to compute the optimal level of modularity. Let *N* stand for the size of the original non-decomposable system, as in the original NK-model. Let *N*′ stand for the size of the original non-decomposable system plus the number of interface standards. Let *M* stand for the number of interface standards. Finally, let *S* stand for module size. It follows from assumption 1 that *N*^{′} = *N* + *M* and from assumption 3 that *S* = *N*/*M*.

Consider first the case of maximum modularity. For the *N* = 9 system, with nine modules of size one, nine interface standards and *A* = 2, there are 2^{9} = 512 unique settings of the interfaces and two unique options for each module. Thus, optimization requires 512 × 2 = 1024 periods, compared to the 512 periods required to optimize the original NK-system as depicted in Fig. 2(a). It holds that, for any N, polygeny in a maximally modular system is N + 1 (N interface standards plus the function itself), compared to a polygeny of N for non-decomposable systems. So maximum modularity can never be the optimal solution. It will be shown that optimal modularity is determined by minimization of polygeny, which stands in a nonlinear relationship with the level of modularity.

The time required to globally optimize a single module of size *N*/*M* is A^{N/M} periods. As modules can be searched in parallel, the time required to optimize all modules is equal to the time required to optimize a single module. The number of possible sets of standards equals A^{M}. Optimal modularity, i.e. the optimal number of modules, can now be derived as the number of modules that minimizes the search time required to globally optimize the system. The time required to globally optimize a modular system, *C*_{time}, is given by the product of the time required to solve a module (A^{N/M}) and the time required to design all possible architectures (A^{M}):

$$C_{time} = A^{N/M} \cdot A^{M} = A^{N/M + M}$$

Minimizing *C*_{time} with respect to *M* (the exponent *N*/*M* + *M* is minimized where its derivative −*N*/*M*^{2} + 1 vanishes) yields that the optimal number of modules for a system of *N* elements equals the square root of *N* (a result independent of A):

$$M^{*} = \sqrt{N}$$

Given *S* = *N*/*M*, it follows that the optimal module size also equals the square root of *N*. The resulting time required to globally optimize the optimal modular system equals:

$$C_{time}^{*} = A^{2\sqrt{N}}$$
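The minimization can be checked numerically; a sketch (our own illustration, with N chosen as a perfect square so the optimum is attainable with integer module counts):

```python
import math

def c_time(N, M, A=2):
    """Two-layered search time: A**(N/M) * A**M = A**(N/M + M)."""
    return A ** (N / M + M)

N = 64
best_M = min(range(1, N + 1), key=lambda M: c_time(N, M))
assert best_M == round(math.sqrt(N))                  # optimal number of modules = sqrt(N)
assert c_time(N, best_M) == 2 ** (2 * math.sqrt(N))   # C*_time = A**(2*sqrt(N))
# the optimal number of modules is independent of A:
assert best_M == min(range(1, N + 1), key=lambda M: c_time(N, M, A=3))
```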

## 4 Optimal modularity in multi-layered hierarchies

The analysis has thus far only considered modular systems with two layers: a layer of interface standards and a layer of modules. Our reasoning can be generalized to modular systems with more than two layers by considering an iterated modularization process. Iterated modularization allows for the formation of more than two levels (i.e. for the creation of a hierarchy of modules within modules). In order to derive the optimal modularity for a hierarchy of modules, we introduce the variable *L*, which stands for the number of levels of modularization. Under this notation, *L* = 1 stands for no modularization and *L* = 2 describes the single level of interfaces considered in the previous section. We now consider the general case of L ≥ 2.

Denoting the original elements at level *n* by El_{n}, the set of modules at level *n* by Mod_{n}, and the interface standards at level *n* by IS_{n}, a hierarchical modular system can be written recursively: the system at level *n* consists of the interface standards IS_{n} together with the set of modules Mod_{n}, each member of which is itself a hierarchical modular system at level *n* − 1, down to the leaf modules of elements at the bottom level.

**Assumption 4**

*The number of interface standards at any level of the hierarchy equals the size of the leaf modules, i.e. ∣IS_{n}∣ = S.*

**Assumption 5**

*A standard affects all functions in the level at and below the standard in the hierarchy (i.e. top level standards affect all functions).*

**Assumption 6**

*Division into modules is symmetrical across levels, i.e. ∣Mod_{n}∣ = S.*

All functions are thus affected by at least one standard in the multi-layered case. As for the single-layered case, this implies that the fitness values of a multi-layered modular system are uncorrelated with the fitness values of the original non-modular system.

To globally optimize this multi-layered modular system, one has to search *hierarchically* via multiple cycles: fixing the standards at the top level, then fixing the standards at the middle level, and then optimizing each leaf module. Consider the example of *N* = 27, *L* = 3 and *S* = 3. Since the top level consists of three interfaces, there are 2^{3} possible standard settings at the first layer, requiring 2^{3} cycles of exploration. For each of the settings at the first layer, testing the middle-layer interfaces also involves 2^{3} cycles, because each subsystem can be searched in parallel. Finally, optimizing each individual module takes 2^{3} periods. Thus, total search time is 2^{3}·2^{3}·2^{3} = 2^{9} = 512 periods, which is a small fraction of the time required to globally optimize a non-decomposable system of *N* = 27 (which requires 2^{27} periods).
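The arithmetic of this three-layered example can be checked directly (a sketch under the symmetric-branching assumptions above):

```python
A, S, L = 2, 3, 3                 # branching factor S = 3 at each of L = 3 levels
N = S ** L                        # 27 original elements

t_hierarchical = (A ** S) ** L    # 2**3 cycles per layer, multiplied across layers
t_exhaustive = A ** N             # non-decomposable benchmark

print(t_hierarchical, t_exhaustive)   # 512 134217728
```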

The search time required to globally optimize a multi-layered modular system can be expressed as a function of *N* and *L*. This relationship is (as shown by detailed proof in Appendix A):

$$C_{time} = A^{L \cdot N^{1/L}}$$

which reduces to the two-layered result for *L* = 2. Minimizing this expression with respect to *L* gives:

$$L^{*} = \ln N$$

Note again that the optimal level of modularity is independent of *A*.
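The continuous optimum can be verified numerically; a sketch assuming the exponent L · N^{1/L} derived above:

```python
import math

def exponent(N, L):
    """log_A of the multi-layered search time: L * N**(1/L)."""
    return L * N ** (1 / L)

N = 1000
L_star = math.log(N)   # continuous optimum L* = ln N (about 6.91 here)
# the continuous minimizer is at least as good as any integer layer count
assert all(exponent(N, L_star) <= exponent(N, L) for L in range(1, 21))
```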

One can also derive the threshold values of *N* at which optimal modularity requires the introduction of a new layer. One should introduce a new layer of modules, moving from *L* = *x* to *L* = *x* + 1, if:

$$(x + 1) \cdot N^{1/(x+1)} < x \cdot N^{1/x}$$

that is, if global optimization is faster with *x* + 1 layers with modules of optimal size.

It follows from Eq. 13 that, as *N* increases exponentially, *L* increases linearly; a corollary is that, as *N* increases linearly, *L* increases logarithmically. Modularity thus represents a mechanism for coping with exponential system growth. It also suggests the hypothesis that early in linear growth processes of a complex system, modularity structure will change regularly, while later in the process, changes in modularity structure will become increasingly rare. Introduction of a modular structure slows the growth of polygeny relative to system size.

To illustrate, we plot log_{A}(*C*_{time}) for different values of *L* and *N* in Fig. 6. Note that we express the search time required for global optimization as a logarithm with base *A*, which renders the values of search time independent of *A*. The figure shows that computational complexity sharply decreases with the addition of layers of modules, reaches the minimum, and then slowly increases. This suggests that it is a more robust strategy to over-modularize than to under-modularize.
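The asymmetry around the minimum can be seen without a plot; a sketch tabulating log_{A}(C_{time}) = L · N^{1/L} (our own illustration):

```python
costs = {L: L * 10_000 ** (1 / L) for L in range(1, 16)}   # N = 10,000
best_L = min(costs, key=costs.get)

# having too few layers is far more costly than having too many
assert costs[1] - costs[best_L] > 100 * (costs[15] - costs[best_L])
```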

The question of whether under- or over-modularity is to be preferred has important general implications. It suggests, for example, heuristic strategies for product design under conditions of uncertainty. However, different models within the NK tradition exhibit conflicting results on this question. Geisendorf (2010) summarizes the debate as between those (Dosi and Marengo 2005; Brusoni et al. 2007) who find speed-of-evolution advantages to over-modularization and those (Levinthal 1997; Ethiraj and Levinthal 2004; Geisendorf 2010) who do not. There are also papers that address the question without explicitly framing their results in terms of over-modularization (e.g. Frenken et al. 1999). These papers vary significantly in their assumptions, and these differences in assumptions (or model specifications) are important in explaining the divergent results. As an example of the understanding that can be gained from detailed comparison of specifications, we compare our model to the model by Ethiraj and Levinthal (2004), which did not find benefits to over-modularization. Here we present only the conclusions of this comparison, the full comparison being available in Appendix B. Our model focuses only on search time and thus only looks at the theorized advantages of modularity through reduced polygeny. The Ethiraj–Levinthal model features parallel search with no co-ordination regarding mutations in interface standards, meaning that increasing modularity leads to increasingly chaotic fitness dynamics. Given the many differences between the models, it is difficult to decide which model is likely to exhibit the more robust results.^{1} This comparison highlights the value of developing common frameworks in which differences between specifications can be tested more explicitly.

## 5 Discussion

This paper has focused on the implications of modular structure from an evolutionary time savings perspective. The question has been what kind of modular architecture is optimal with respect to the speed at which trial-and-error search can find the global optimum. In line with recent ideas on evolvability (Ethiraj and Levinthal 2004; Rivkin and Siggelkow 2007), creating an architecture which allows efficient search may be as important as the search strategy applied to a given problem.^{2} The interaction between the processes of architectural search and search within the current architecture is an interesting, though non-trivial, problem. Our analysis indeed shows that the choice of the right modular architecture creates strong advantages in the subsequent evolutionary search process towards the global optimum. If a designer is able to create a modular design with modules of optimal size, she realizes huge savings in the time required to find the global optimum by trial-and-error.

Our approach has been based on two important simplifications. First, we assumed that the creation of modular architectures did not itself involve time. The time devoted to creating a modular architecture will generally increase with the degree of modularity of that architecture (as more interface standards need to be introduced to separate elements into distinct modules). Once the problem of minimizing search time is translated into a cost minimization problem using a monetary value of time, the minimization problem can be extended to include the cost of the construction of an architecture, with such construction costs increasing with the degree of modularity as indicated previously by M. The cost perspective may have an impact on the desirability of over-modularization, as it would tend to attenuate the benefits of modularization.

A second simplification in our analysis is that our derivation of optimal modularity only takes into account the search time required to globally optimize the system and ignores the effects of modularization on the fitness value of the global optimum obtained. In our model, optimal modularity is achieved by minimizing the search time required for global optimization. Put differently, in designing a system with optimal modularity, one aims at minimizing the number of times the fitness values are redrawn. In the case of a non-modular system, for example, fitness values are redrawn A^{N} times, while for a system with optimal modularity, fitness values for each module are redrawn only A^{(S + M)} times. Since fitness values are redrawn less often for a system with optimal modularity compared to other systems, the global optimum of a non-modular system has a higher fitness than the global optimum of a system with optimal modularity, since N is greater than S + M (Kaul and Jacobson 2006). Thus, the advantage of modular systems in terms of search time may be offset by lower fitness, depending on how much weight is given to fitness obtained compared to search time required.

Note that the fitness of the global optimum of a non-modular system and the fitness of the global optimum of a system with optimal modularity both approach 1 asymptotically as system size N goes to infinity. Thus, the difference in their fitness values will start decreasing at some point as system size N increases. For sufficiently large systems, the negative effect of modularization on the fitness of the global optimum is only marginal and can be neglected. This conclusion is tied to the global search strategy employed here. Whether fitness effects can similarly be taken as minimal when alternative search strategies are employed is an open question.
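The asymptotic claim follows from a standard order-statistic fact: the expected maximum of m independent U(0,1) draws is m/(m+1). Treating the fitness of each design as if it were a single uniform draw (a simplification of the model, for illustration only):

```python
def expected_max_uniform(m):
    """Expected maximum of m i.i.d. U(0,1) draws: m / (m + 1)."""
    return m / (m + 1)

A = 2
gaps = []
for N in (9, 16, 64):
    non_modular = expected_max_uniform(A ** N)                 # A**N redraws
    modular = expected_max_uniform(A ** round(2 * N ** 0.5))   # A**(2*sqrt(N)) redraws
    assert non_modular > modular         # the modular global optimum is lower...
    gaps.append(non_modular - modular)
assert gaps == sorted(gaps, reverse=True)   # ...but the gap shrinks as N grows
```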

A final area for future research is to consider different search strategies within the framework we have discussed. Similar to our argument about the status of decomposable systems, a global optimization strategy is an analytical construct rarely seen in reality. In reality, shifts in the underlying landscape, cognitive and power limitations of agents, and opportunity costs (i.e. the tension between exploration and exploitation) mean that search processes are in reality satisficing as opposed to optimizing (Simon 1969). This then poses the question as to what hierarchical search might look like in a satisficing context. Two possibilities are discussed which suggest the utility of this framework in terms of future research potential.

Gavetti et al. (2005) explore the idea of analogy as a search strategy within an NK context. Their conceptualization of search is the resolution of “high level” choices through analogical knowledge flowing from experience followed by resolution of “low level” choices through local search. In this case, analogy is a tool that leverages past experience in order to suggest promising segments of the landscape within which to search using local search. In the original paper, the idea was to explore knowledge derived from different regions of the fitness landscape. In the case of the modular structure proposed here, it would be interesting to explore experiential knowledge of architectures. This would mean setting the interfaces on the basis of analogy, followed by local search within these interfaces. A first step in this setup might be to set standards randomly and proceed with low-level search. This would provide a baseline for assessing the impact of architectural knowledge.

A second possibility is to model recursive problem solving (a concept inspired by Arthur 2007). The departure point of this is to frame invention as a recursive problem-solving process, in which work on a solution proceeds between levels and focuses on the most problematic component. This could be abstracted as a hierarchical extremal search wherein the lowest-functioning module is the focus of search. If a satisfactory solution can be found at the level of the module, then it is resolved at that level. If this is not the case, search proceeds down the hierarchy in a recursive manner. After sufficient exploration of sub-modules, if a satisfactory solution is still not found, then the problem is elevated to the level of the hierarchy above which the problem occurred. In our terms, search begins at level *n*, moves down the hierarchy through lower levels, and is then elevated to exploration of the interfaces at level *n* + 1.

These discussion points indicate that, though the model we have presented rests on rather restrictive assumptions, it offers interesting possibilities for further research that relaxes them. It represents the crucial first step of offering a formally consistent framework within which an analytical baseline can be defined. Further, it confirms the primary hypothesis of the modularity literature: that modularity increases the speed of evolution (Simon 2002). It does so by formally linking modular structure to a decrease in interdependencies between elements.

## 6 Concluding remarks

We aimed to define modularity formally and to explore the hypothesis that it represents a mechanism for increasing the speed of evolution. We have derived the optimal level of modularity with respect to the time required to globally optimize a system, both for two-layered and for multi-layered hierarchies. Our approach has exploited rather restrictive assumptions in order to generate analytically tractable results, and we have discussed several logical routes to relax these assumptions in future work.

A second line of research is to conduct empirical research on the levels of modularity of systems varying in size, so as to provide an empirical basis for the formal theory. For example, further work might examine the suggestion that modular problem decomposition is observable among entrepreneurs involved in rapidly expanding enterprises (Sarasvathy and Simon 2000).

In the longer run, we hope our approach to modular systems contributes to a consistent formal approach to modularity in the fields of economics, innovation studies and organization science in a way that renders the results from different modelling exercises mutually comparable.

## Footnotes

1. Robustness refers to a result that survives changes in specification.
2. A similar argument has been made by Wagner and Altenberg (1996) in the context of biology.
3. Under one variant of the Ethiraj and Levinthal (2004, p. 170) specification, over-modularity is in fact preferable. Thus, even within the confines of a particular approach, small variances in specification can be important.

## References

- Altenberg L (1994) Evolving better representations through selective genome growth. In: Proceedings of the IEEE world congress on computational intelligence, pp 182–187
- Arthur WB (2007) The structure of invention. Res Policy 36(2):274–287
- Baldwin CY (2008) Where do transactions come from? Modularity, transactions, and the boundaries of firms. Ind Corp Change 17(1):155–195
- Baldwin CY, Clark KB (2000) Design rules, vol 1: the power of modularity. MIT Press, Cambridge
- Bradshaw G (1992) The airplane and the logic of invention. In: Giere RN (ed) Cognitive models of science. University of Minnesota Press, Minneapolis, pp 239–250
- Brusoni S, Marengo L, Prencipe A, Valente M (2007) The value and costs of modularity: a problem-solving perspective. Eur Manage Rev 4:121–132
- Ciarli T, Leoncini R, Montresor S, Valente M (2008) Technological change and the vertical organization of industries. J Evol Econ 18:367–387
- Dosi G, Marengo L (2005) Division of labor, organizational coordination and market mechanisms in collective problem-solving. J Econ Behav Organ 58:303–326
- Ethiraj SK, Levinthal DA (2004) Modularity and innovation in complex systems. Manage Sci 50:159–173
- Frenken K (2006) A fitness landscape approach to technological complexity, modularity, and vertical disintegration. Struct Change Econ Dynam 17(3):288–305
- Frenken K, Nuvolari A (2004) The early development of the steam engine: an evolutionary interpretation using complexity theory. Ind Corp Change 13(2):419–450
- Frenken K, Marengo L, Valente M (1999) Interdependencies, near-decomposability and adaptation. In: Brenner T (ed) Computational techniques for modelling learning in economics. Kluwer, Boston, pp 145–165
- Gavetti G, Levinthal DA, Rivkin JW (2005) Strategy making in novel and complex worlds: the power of analogy. Strateg Manage J 26(8):691–712
- Geisendorf S (2010) Searching NK fitness landscapes: on the trade off between speed and quality in complex problem solving. Comput Econ 35:395–406
- Henderson RM, Clark KB (1990) Architectural innovation. Admin Sci Q 35:9–30
- Kauffman SA (1993) The origins of order: self-organization and selection in evolution. Oxford University Press, New York
- Kaul H, Jacobson SH (2006) Global optima results for the Kauffman NK model. Math Program 106(2):319–338
- Langlois R, Garzarelli G (2008) Of hackers and hairdressers: modularity and the organizational economics of open-source collaboration. Ind Innov 15(2):125–143
- Levinthal DA (1997) Adaptation on rugged landscapes. Manage Sci 43:934–950
- Marengo L, Dosi G, Legrenzi P, Pasquali C (2000) The structure of problem-solving knowledge and the structure of organizations. Ind Corp Change 9:757–788
- McNerney J, Farmer JD, Redner S, Trancik JE (2011) The role of design complexity in technology improvement. Proc Natl Acad Sci USA 108(22):9008–9013
- Newman MEJ (2006) Modularity and community structure in networks. Proc Natl Acad Sci USA 103:8577–8582
- Page SE (1996) Two measures of difficulty. Econ Theory 8:321–346
- Rivkin JW, Siggelkow N (2007) Patterned interactions in complex systems: implications for exploration. Manage Sci 53:1068–1085
- Sarasvathy S, Simon HA (2000) Effectuation, near-decomposability, and the creation and growth of entrepreneurial firms. Presented at the first research policy technology entrepreneurship conference, University of Maryland. Available online at http://www.effectuation.org/ftp/Neardeco.doc, accessed 18 September 2009
- Simon HA (1962) The architecture of complexity: hierarchic systems. Proc Amer Phil Soc 106:467–482
- Simon HA (1969) The sciences of the artificial. MIT Press, Cambridge (3rd edn, 1996)
- Simon HA (2002) Near decomposability and the speed of evolution. Ind Corp Change 11:587–599
- Wagner GP, Altenberg L (1996) Complex adaptations and the evolution of evolvability. Evolution 50:967–976