1 Introduction

The study of how organizations are structured has a rich history that goes back to early examinations of bureaucracies and organizations with multiple divisions (Chandler 1969). Following this, the concept of contingency theory evolved, suggesting that organizations should be designed differently depending on various external and internal factors, highlighting the need for a well-aligned internal structure (Miller et al. 1984). Research in this field has covered many aspects of organizational design, including goals, technology, people, the social setup, and how the organization interacts with its environment (Scott 1998). For example, Baligh et al. (1996) differentiate between factors like technology, strategy, and the environment (contingency factors) and the elements of organizational design itself, which include the structure of the organization and features like its complexity, rules, and procedures.

Recently, studies on organizational structure have looked into “modern” ways of organizing. This includes new organizational types like holacracies (Robertson 2015), self-managing organizations (Lee and Edmondson 2017), and organizations without traditional bosses (Puranam and Håkonsson 2015; Burton et al. 2017). These models often feature decentralized decision making, which frequently extends to the question of who performs which tasks and thereby poses a challenge for traditional top-down design approaches. As a result, finding new ways to manage how tasks are assigned becomes crucial for an organization to function effectively. Recent research addresses these issues: Blanco-Fernández et al. (2023a), for example, analyzed how group structures emerge within organizations when they form autonomously. Borenich et al. (2020) study cost estimation in the automotive industry; they emphasize that these processes are currently often decentralized in practice, and they examine how to efficiently coordinate such decentralized approaches. Leitner and Behrens (2015) focus on decentralized decision making authority over capital allocation and whether coordination mechanisms are robust to prediction errors.

Good et al. (2019) reviewed the discussions around what makes up organizational design and found that many studies overlap in identifying these components. They argue that several key elements are widely recognized as critical. At the heart of these elements is the organizational goal, which is about setting clear objectives. From there, they outline three more design elements: (i) The tasks the organization must do to reach its goal, (ii) the supporting framework that makes it possible to do these tasks, and (iii) the people in the organization and how they act, including incentives to encourage certain behaviors and the organization’s culture. This paper extends the work in Leitner (2023) and zeroes in on the issue of how to control decentralized authority over task allocation, especially focusing on (ii) the structure that deals with dividing work among people in the organization, and (iii) incentive systems to direct how people act. It considers the (i) tasks needed to achieve the organization’s goals as given, meaning these tasks are set and the organization does not plan to change what it does. However, it does take into account that these tasks can vary in complexity.

Guiding autonomous task allocation toward efficiency poses a complex managerial challenge due to various factors. Organizations must contend with interdependent tasks and potential conflicts of interest between decentralized decision makers and organizational management. This paper aims to support organizational management in addressing these challenges. More specifically, the research presented here aims to deepen the understanding of how organizational design can emerge from the ground up by examining how emergent task allocation, incentives, and performance are interconnected. This paper focuses on the following questions:

  1. In what ways do the interactions between emergent task allocation and incentive mechanisms affect organizational performance?

  2. How modular is the structure that develops from a bottom-up approach, and what impact does this have on organizational performance?

  3. What is the effect of task complexity on the outcomes related to the dynamics between emergent task allocation, incentive mechanisms, and modularity?

To investigate these questions, an agent-based model is introduced. The model simulates how tasks are assigned within an organization, incorporating different incentive systems to shape agent behavior. Agent-based modeling and simulation have been effective for examining organizational dynamics, especially in areas like organizational design (Blanco-Fernández et al. 2023a, b, 2024; Leitner and Behrens 2015), consumer behavior (Sonderegger-Wakolbinger and Stummer 2015; Ghanem et al. 2022), finance (Czupryna 2022; Mastroeni et al. 2023), innovation management (Stummer and Kiesling 2021; Haurand and Stummer 2018), and procurement and supply chain management (Strmenik et al. 2021; Colon et al. 2021). This approach has gained attention for its ability to offer a controlled environment for experiments, enabling researchers to tweak parameters and decision making rules. This flexibility facilitates the collection of extensive data and helps in understanding how various elements within the model affect outcomes. Therefore, this paper bridges operations research and business administration, demonstrating how agent-based modeling and simulation, a technique originating from operations research, can address pressing issues in business administration. The analysis presented in this paper reveals that emergent task allocation, driven by short-term behavior on the decision maker’s side and matched with fitting incentive systems, enhances organizational performance, surpassing that of traditionally designed organizations. Furthermore, it is observed that specific incentive systems can diminish the need for a direct reflection of a task's technical characteristics (in terms of interdependencies) in the organizational structure.

The structure of this paper is as follows: In Sec. 2, the concept of mirroring in task allocation is discussed. Section 3 introduces the agent-based model, detailing the simulation setup and data analysis. The results of the simulation are presented and discussed in Sec. 4. Finally, a summary and conclusion are provided in Sec. 5.

2 The mirroring hypothesis in organizational design

Research in the field of organizational design frequently endorses the “mirroring hypothesis,” which asserts that the formal structure of an organization, particularly in terms of task allocation, should mirror the technical characteristics (in particular the interdependencies) of the tasks it undertakes (Sanchez and Mahoney 1996; Colfer and Baldwin 2016). In alignment with this hypothesis, it is suggested that organizations might benefit from adopting a modular structure that aims to minimize dependencies among modules (Lawrence and Lorsch 1967). This hypothesis is rooted in the theory of complex systems (Simon 1991), characterized by a multitude of components engaging in often nonlinear interactions (Langlois 2002). The concept of modularity, which involves decomposing a system into interconnected modules with distinct interfaces (Ulrich 1995), emerges as a strategic means to navigate complexity. This principle of modularity is extendable to the domain of organizational design (Agrež and Damij 2015). In particular, organizations can be represented as a system of many interacting departments. In this context, the modularity of such a system is a continuum from complete autonomy of modules to full integration (Chen 2017). Organizational archetypes placed on the rather autonomous end of the continuum are characterized by independent units, as is the case in divisional structures. In contrast, functional structures can be found at the other end of the continuum (Hax and Majluf 1981).

For the domain of product design, Ulrich (1995) offers a more fine-grained definition of modular design that encompasses modularity at both the functional component level and the interfaces between modules (see also Sanchez and Mahoney 1996; Sanchez et al. 2013). Peng and Jifeng (2018) differentiate between two degrees of modularity: component modularity (the self-sufficiency of a module within a complex system) and product modularity (the overall system’s modularity). They argue that a system cannot be regarded as modular if any interdependencies exist among its modules, regardless of how independent some individual modules are. Modular product architecture introduces an information structure that remains confined within modules. From an organizational design viewpoint, such a structure permits the creation of organizational units that parallel the product architecture, thereby facilitating coordination among autonomous units. This model can diminish both the necessity and costs associated with coordination (Colfer and Baldwin 2016), suggesting modularity as an advantageous organizational design strategy for the long term, assuming the underlying product or task features do not undergo significant changes. Moreover, Dawid et al. (2017) highlight that modular techniques enable the realization of economies of scale, thus maximizing investment returns, which could also be transferred to the realm of organization design.

Past studies have corroborated the mirroring hypothesis across various sectors. For instance, Cabigiosu and Camuffo (2012) observed a positive link between the alignment of product and organizational structures in the air-conditioning sector. Tee et al. (2019) demonstrated that while modularization could help overcome coordination challenges, it might hinder collaborative efforts in project environments. Wei et al. (2021) explored the application of mirroring in Chinese corporations, pinpointing the limitations and performance implications of diverse organizational models. Alochet et al. (2022) and Chen et al. (2019) presented evidence of partial mirroring (“misting”) in firms engaged in electric vehicle production. Additionally, misting has been endorsed as a viable approach within sectors experiencing shifts in product architecture (Kosaka 2021; Burton et al. 2020).

Conversely, the mirroring hypothesis does not always find support. Colfer and Baldwin (2016), after reviewing 142 empirical studies, found that while \(70\%\) of descriptive analyses supported the hypothesis, \(22\%\) offered limited support, and \(8\%\) refuted it. Their analysis of normative studies concluded that partial mirroring holds advantages in fields characterized by evolving technologies. Yet, \(56\%\) of studies focusing on collaborative projects found evidence contrary to the mirroring hypothesis. Colfer and Baldwin (2016) posited that emerging coordination mechanisms enabled by technological advancements might account for these findings. Similarly, Sanchez and Mahoney (2013) suggest that discrepancies in adopting the mirroring hypothesis could be attributed to factors such as cognitive, risk and capability issues, along with commitment and discipline concerns.

3 The agent-based model

3.1 Model overview

The main aim of this paper is to study organizations with bottom-up allocation of decision making tasks (e.g., operational or procurement decisions) and to analyze the effects of the incentive mechanism in place in the organization as well as task complexity on performance. The notion of decision making tasks is broad, extending its relevance to various fields, including social dynamics and team resilience (Massari et al. 2023), healthcare (Kapun et al. 2023), as well as product development processes (Ma and Nakamori 2005).

The model considers agents who represent organizational departments composed of human decision makers; together, the agents represent an organization with decentralized decision making. The model facilitates coordination among these decision making entities exclusively through an incentive-based mechanism. Drawing from organizational information processing theory, it posits that due to physical limitations, direct communication is impractically expensive (Marschak and Radner 1972). Despite these constraints, the model assumes that agents are inclined to act in their self-interest, which is why a mechanism is required to align actions across departments, and coordination through incentives is a feasible way to achieve this (Fischer and Huddart 2008).

The agents face constraints such as limited time and cognitive capacity, which hinder their ability to individually address complex decision making problems. Consequently, they collaborate as a group to collectively approach the problem they face. These agents possess complete autonomy over task allocation, allowing them to make independent allocation decisions and to adjust the allocation over time. While they recognize that there might be interdependencies among decision making tasks, their understanding of the precise nature of these interdependencies is incomplete. Nonetheless, they are capable of gradually acquiring the information needed to fill in these gaps over time. The model differentiates between two kinds of agents: those who prioritize immediate gains, making decisions based on short-term utility maximization without regard for future consequences, and those who take into account the long-term implications of their actions during task allocation. The latter group of agents strives to reduce the interdependencies among sub-tasks distributed across different agents while enhancing the interdependencies within their assigned responsibilities, aligning with the mirroring hypothesis. It is anticipated that focusing on optimizing these interdependencies will yield advantages over time.

Figure 1 gives information on the model structure and sequence of events during the simulations. In the first step, the performance landscape (Sect. 3.2) and agents are initialized and the initial task decomposition is performed, i.e., the tasks are allocated to organizational departments, meaning that the departments’ areas of responsibility are defined (Sect. 3.3). After the initialization, agents begin a hill-climbing search for ways to solve the tasks assigned to them (Sect. 3.4). This means, for example, that departments make operative decisions within their areas of responsibility, such as procurement decisions or decisions about marketing activities. In addition, the agents learn about the complexity of the task (Sect. 3.5). In particular, agents observe the consequences of their actions, and from their observations, they deduce the existence of interdependencies between tasks. Every \(\tau \in \mathbb {N}\) periods, agents are given the possibility to autonomously adapt the current task allocation using a signalling mechanism (Sect. 3.6). The model keeps track of the overall task performance and the task allocation resulting from the re-allocation process for \(t\in \{1,\dots ,T\} \subset \mathbb {N}\) periods. The simulation model is implemented in Matlab® R2022a.

Fig. 1: Model architecture and sequence of events
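The timing logic described above (search and learning in most periods, task re-allocation in every \(\tau\)-th period) can be sketched as a small scheduling helper. This is a minimal Python sketch; the paper's model is implemented in Matlab, and the function name and labels are illustrative:

```python
def event_schedule(T, tau):
    """Label each period t = 1..T: agents search (Sub-model A) and learn
    (Sub-model B), except in every tau-th period, when the task allocation
    is revisited (Sub-model C)."""
    return ["reallocate" if t % tau == 0 else "search+learn"
            for t in range(1, T + 1)]
```

For instance, with \(T=6\) and \(\tau=3\), periods 3 and 6 are re-allocation periods, while all other periods are devoted to search and learning.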

3.2 Task environment

The conceptual framework for a stylized organization is based on the \(N\!K\)-model, which is widely used for examining organizational dynamics (Levinthal and March 1993; Wall and Leitner 2021; Blanco-Fernández et al. 2023a). The model depicts an organization as comprising \(M\in \mathbb {N}\) agents, and the organization is confronted with a multifaceted decision-making problem. Let us denote the decision problem by the N-dimensional vector

$$\begin{aligned} \textbf{d}=\left( d_1, \dots , d_N \right) ~, \end{aligned}$$
(1)

where \(d_n \in \{0,1\}\) for \(n \in \{1, \dots , N\}\). The number of solutions to the overall problem is \(2^N\) and each solution is an N-digit bit-string. There are at most \(K \le N-1\) interdependencies between the decisions \(d_n\), which means that the contribution of a decision \(d_n\) to the task performance is affected by at most K other decisions. This relationship can be formalized in the payoff function

$$\begin{aligned} c_n = f\left( d_n, d_{i_1}, \dots , d_{i_K}\right) ~, \end{aligned}$$
(2)

where \(\{i_1, \dots , i_K\} \subseteq \{1, \dots , n-1, n+1, \dots , N\}\). The performance contributions are independently drawn from a uniform distribution, \(c_n \sim U\left( 0,1\right)\). The overall task performance for a solution \(\textbf{d}\) is the mean of the individual performance contributions \(c_n\):

$$\begin{aligned} c(\textbf{d}) = \frac{1}{|\textbf{d}|} \sum _{n=1}^{|\textbf{d}|} c_n~, \end{aligned}$$
(3)

where the function \(|\cdot |\) returns the length of a vector.

The creation of performance landscapes is achieved by associating solutions to the decision problem \(\textbf{d}\) with their respective performances, as specified in Eqs. 2 and 3. The complexity of the decision problem is influenced by the interdependencies between decisions, which is reflected in the ruggedness of the landscapes produced. As the degree of interdependence, denoted by K, rises, so does the number of local maxima (peaks). For example, if \(K=0\), the landscape is smooth and the global maximum is relatively easy to find. In contrast, if \(K=N-1\), the landscape is maximally rugged with numerous local maxima, and it becomes more difficult to find the global optimum.
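A performance landscape in the sense of Eqs. 1-3 can be generated in a few lines. The sketch below (Python; the paper's implementation is in Matlab) assumes a circular interaction pattern in which each decision depends on its K successors; the paper only fixes the number K of interdependencies per decision, so this pattern is one illustrative choice:

```python
import itertools
import random

def make_nk_landscape(N, K, seed=0):
    """Build an NK landscape: decision d_n's contribution c_n is a uniform
    draw that depends on d_n and K other decisions (Eq. 2); overall
    performance is the mean contribution (Eq. 3)."""
    rng = random.Random(seed)
    # one U(0,1) draw per decision and per configuration of its K+1 relevant bits
    tables = [{bits: rng.random()
               for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]

    def performance(d):
        # contribution of d_n depends on d_n and the K decisions following it
        contribs = [tables[n][tuple(d[(n + k) % N] for k in range(K + 1))]
                    for n in range(N)]
        return sum(contribs) / N

    return performance

perf = make_nk_landscape(N=6, K=2)
score = perf((0, 1, 1, 0, 1, 0))  # a value in [0, 1]
```

With \(K=0\) each contribution depends on its own decision only (a smooth landscape); increasing K couples the contributions and produces the ruggedness described above.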

3.3 Agents and task decomposition

Agents in the organization have limited capabilities and/or resources, such as limited cognitive capacities, time, or other resources to solve the entire N-dimensional decision problem alone. Therefore, they need to collaborate and work together to find solutions to the decision problem the entire organization faces. Let us denote the maximum number of decisions that an agent can handle at a time by \(Q\in \mathbb {N}\). To prevent agents from dropping out of the group, every agent must be responsible for at least one decision at a time. This means that \(1\le Q < N\).

Agents decompose the decision problem \(\textbf{d}\) into M disjoint sub-problems. Let us denote the decisions in agent m’s area of responsibility at time t by

$$\begin{aligned} \textbf{d}_{mt} = [d_{j_1},\dots , d_{j_Q}]~, \end{aligned}$$
(4)

where \(\{j_1, \dots , j_Q\} \subset \{1, \dots , N\}\) and \(m\in \{1,\dots ,M\}\subset \mathbb {N}\). The complement of \(\textbf{d}_{mt}\) in \(\textbf{d}\) is referred to as agent m’s residual decisions in period t:

$$\begin{aligned} \textbf{d}_{-mt} = \textbf{d} \setminus \textbf{d}_{mt} \end{aligned}$$
(5)

Initially, tasks are distributed among agents in a sequential and equal manner, ensuring that at time \(t=0\), each agent is responsible for an equal share of decisions, specifically, \(|\textbf{d}_{m0}| = N/M\). Throughout the simulation, agents have the option to adjust the allocation of tasks as outlined in Sect. 3.6. The model incorporates the concept of hidden action within the decision making process, indicating that while agents are fully aware of the solutions to their specific sub-tasks \(\textbf{d}_{mt}\), the decisions made by others (namely, the solutions to the remaining tasks \(\textbf{d}_{-mt}\)) become visible only in the subsequent period \(t+1\), following their implementation.

Agents gain utility from the solutions implemented for the decision problem, as detailed in Sect. 3.4. To compensate agents for their contributions, the organization employs a linear incentive mechanism. This incentive system differentiates between the performance contribution resulting from the decision made within the agent’s own area of responsibility and the performance from the remaining decisions. The utility function for each agent m is defined as follows:

$$\begin{aligned} U(\textbf{d}_{mt},\textbf{d}_{-mt}) = a \cdot c\left( \textbf{d}_{mt} \right) + (1-a) \cdot c\left( \textbf{d}_{-mt} \right) ~, \end{aligned}$$
(6)

where \(c\left( \textbf{d}_{mt} \right)\) and \(c\left( \textbf{d}_{-mt} \right)\) are agent m’s own and residual performances in period t, respectively (see Eq. 3). The incentive parameter \(a \in [0,1] \subset \mathbb {R}\) defines to what extent the two performances contribute to the agent’s compensation.
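Equation 6 is straightforward to express in code. The sketch below (Python, illustrative names) computes an agent's utility from lists of own and residual performance contributions:

```python
def utility(own_contribs, residual_contribs, a):
    """Agent utility per Eq. 6: a weighted combination of the mean performance
    of the agent's own decisions and that of the residual decisions,
    with incentive parameter a in [0, 1]."""
    c_own = sum(own_contribs) / len(own_contribs)
    c_res = sum(residual_contribs) / len(residual_contribs)
    return a * c_own + (1 - a) * c_res
```

With \(a=1\) an agent is compensated only for its own area of responsibility, while \(a<1\) ties part of the compensation to decisions controlled by others, which is what creates the incentive to coordinate.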

3.4 Sub-model A: Hill climbing search

In periods where \(t \ \textrm{mod}\ \tau \ne 0\), agents can enhance their utility by employing a hill climbing algorithm. This method involves identifying and executing actions within the neighbourhood of the last implemented action, \(\textbf{d}_{mt-1}\), that promise to yield greater utility. The neighbourhood is determined by a Hamming distance of 1. When an agent discovers a potential action \(\textbf{d}^{*}_{mt}\) within this neighbourhood, it assesses this action in comparison to the most recently implemented action, also referred to as the status quo.

In this phase, direct communication among agents is not allowed, so agent m must rely on the other agents’ decisions from the previous period, \(\textbf{d}_{-mt-1}\), when evaluating a candidate action. The agent makes its decisions about which action \(\textbf{d}_{mt}\) to take in period t according to the following rule:

$$\begin{aligned} \textbf{d}_{mt} = {\left\{ \begin{array}{ll} \textbf{d}_{mt-1} &{} \text {if } U(\textbf{d}_{mt-1},\textbf{d}_{-mt-1}) \ge U(\textbf{d}^{*}_{mt},\textbf{d}_{-mt-1})~, \\ \textbf{d}^{*}_{mt} &{} \text {otherwise .} \end{array}\right. } \end{aligned}$$
(7)

In the first scenario, the proposed action fails to provide greater utility compared to the existing condition, leading the agent to maintain \(\textbf{d}_{mt-1}\). Conversely, in the second scenario, the proposed action presents an increase in utility, prompting the agent to adopt it during period t.

The behavior of the entire organization in period t is the combination of the individual actions taken by all M agents:

$$\begin{aligned} \textbf{d}_t = \left[ {\textbf{d}_{1t}}, \dots , \textbf{d}_{Mt}\right] ~, \end{aligned}$$
(8)

and the performance achieved by the organization in that period is \(c(\textbf{d}_t)\) (see Eq. 3).
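The decision rule in Eq. 7 amounts to a one-bit hill-climbing step. A minimal Python sketch (the candidate is drawn at random from the Hamming-1 neighbourhood; the function names are illustrative):

```python
import random

def hill_climb_step(own, residual_prev, utility_fn, rng=random):
    """One step of Sub-model A: flip one randomly chosen own decision
    (Hamming distance 1) and adopt the candidate only if it yields strictly
    greater utility than the status quo (Eq. 7), holding the other agents'
    previous-period decisions fixed."""
    i = rng.randrange(len(own))
    candidate = list(own)
    candidate[i] = 1 - candidate[i]
    if utility_fn(candidate, residual_prev) > utility_fn(own, residual_prev):
        return candidate
    return list(own)  # status quo retained (ties favour the status quo)
```

Note that the comparison uses the other agents' decisions from period \(t-1\), mirroring the hidden-action assumption above.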

3.5 Sub-model B: Learning interdependencies

While agents recognize that decisions may be interdependent, they lack precise knowledge of their structure. Instead, agents form beliefs about these interdependencies through observing the outcomes of their decisions within their designated responsibilities. The instances where agent m has observed or not observed an interdependency between decisions \(d_i\) and \(d_j\) up to period t are recorded as \(\alpha _{mt}^{ij} \in \mathbb {N}\) and \(\beta _{mt}^{ij} \in \mathbb {N}\), respectively. To quantify agent m’s beliefs about the interdependencies between decisions \(d_i\) and \(d_j\) based on these observations, a Beta distribution is employed:

$$\begin{aligned} \mu _{mt}^{ij}= E(X)=\frac{\alpha _{mt}^{ij}}{\alpha _{mt}^{ij}+\beta _{mt}^{ij}}~, \end{aligned}$$
(9)

where \(X \sim B(\alpha _{mt}^{ij}, \beta _{mt}^{ij})\).

At the start of the simulation, all observations are set to one, which means that the initial values of \(\alpha _{m0}^{ij}\) and \(\beta _{m0}^{ij}\) are equal to one for all m, i, and j such that \(i \ne j\). This results in initial beliefs of 0.5, indicating that agents initially assume that there is a \(50\%\) chance of interdependencies. Then, in every period with \(t \ \textrm{mod}\ \tau \ne 0\), agents perform the search procedure introduced in Sect. 3.4 and also update their beliefs in line with the following steps:

  1.

    Recall that the action that agent m takes to solve their partial decision problem in period t is \(\textbf{d}_{mt}\). If the agent decides to flip a decision (i.e., the second case in Eq. 7), this decision is indicated by i, where \(d_{it} \in \textbf{d}_{mt}\). After implementing \(\textbf{d}_{mt}\), agent m observes the performance contributions \(c_{jt}\) of all other decisions \(d_{jt} \in \textbf{d}_{mt}\) within their area of responsibility, with \(i\ne j\).

  2.

    Next, agent m updates the observations for all decisions \(j\ne i\) in the area of responsibility as follows:

    $$\begin{aligned} \left( \alpha _{mt}^{ij}, \beta _{mt}^{ij}\right) = {\left\{ \begin{array}{ll} \left( \alpha _{mt-1}^{ij}, \beta _{mt-1}^{ij}\right) &{} \text {if } \textbf{d}_{mt}= \textbf{d}_{mt-1} ,\\ \left( \alpha _{mt-1}^{ij}+1, \beta _{mt-1}^{ij}\right) &{} \text {if } c_{jt} \ne c_{jt-1}~ \text { and } \textbf{d}_{mt}= \textbf{d}_{mt}^{*},\\ \left( \alpha _{mt-1}^{ij}, \beta _{mt-1}^{ij}+1 \right) &{} \text {if } c_{jt} = c_{jt-1}~ \text { and } \textbf{d}_{mt}= \textbf{d}_{mt}^{*},\\ \end{array}\right. } \end{aligned}$$
    (10)

    whereby \(\forall i: d_{it}\in \textbf{d}_{mt}\), \(\forall j: d_{jt}\in \textbf{d}_{mt}\), and \(j\ne i\). Whenever agent m notices a variation in the contribution to performance of decision j between period \(t-1\) and period t, potentially triggered by altering decision i, the variable \(\alpha _{mt}^{ij}\) is incremented by one, as illustrated in the second scenario of Eq. 10. In contrast, if there is no such change, \(\beta _{mt}^{ij}\) is incremented by one, reflecting the scenario depicted in the third case of Eq. 10. If agent m does not alter any decisions during the current period, then the observations remain consistent with the previous period, as outlined in the first scenario of Eq. 10. These learning trajectories are depicted in Fig. 2.

  3.

    Finally, agents recompute their beliefs in period t according to Eq. 9.

Fig. 2: Interrelations between sub-models A and B: Learning paths

It is important to understand that agents have visibility only into the performance contributions within their own areas of responsibility. Should the decision problem be structured in a way that it includes interdependencies with decisions external to an agent’s domain, this might lead to unseen external effects on performance contributions that the agent is unable to detect. Consequently, this setup opens the door to potential learning inaccuracies, as agents might incorrectly infer the presence of interdependencies based on their observations.
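The belief dynamics of Eqs. 9 and 10 reduce to a simple counter update per decision pair. A Python sketch for a single pair \((i, j)\) (illustrative names; the two flags encode the three cases of Eq. 10):

```python
def update_belief(alpha, beta, flipped_i, contrib_j_changed):
    """Sub-model B update for one pair (i, j): if agent m flipped decision i,
    a changed contribution of decision j counts as evidence for an
    interdependency (alpha + 1, second case of Eq. 10), an unchanged one as
    evidence against (beta + 1, third case). If nothing was flipped, the
    counters stay put (first case). The belief is the Beta mean (Eq. 9)."""
    if flipped_i:
        if contrib_j_changed:
            alpha += 1
        else:
            beta += 1
    belief = alpha / (alpha + beta)  # E[X] with X ~ Beta(alpha, beta)
    return alpha, beta, belief
```

Starting from \(\alpha = \beta = 1\) (belief 0.5), a single observed change shifts the belief to \(2/3\); as noted above, unobserved external interdependencies can make these updates inaccurate.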

3.6 Sub-model C: Task allocation

At intervals of every \(\tau\) periods, agents have the opportunity to reassess and rearrange their task assignments. During these specific periods, agents refrain from modifying their current actions or gathering data to refine their understanding of task interdependencies. Their attention is solely on the redistribution of tasks. It is important to recognize that this process of task reallocation has the potential to shift the scope of the agents’ responsibilities, thereby impacting the tasks that contribute to their utility.

Initially, agent m proposes a decision task from their area of responsibility to other agents. Let us refer to the decision offered by agent m as \(i_m\in \{1,\dots ,N\}\). Following this, all agents, with the exception of the one making the offer, express their willingness to assume the task by sending indicative signals. In this scenario, agents have two potential strategies: they may adopt a short-term view, focusing solely on immediate performance gains without considering future implications (as outlined in Sect. 3.6.1), or they may take a long-term perspective, aiming to optimize the internal interdependencies within their own area of responsibility while reducing dependencies on decisions managed by others (as detailed in Sect. 3.6.2). After collecting all responses, the task is reassigned to the agent who submitted the highest signal. In return, the agent who made the initial offer is compensated with an amount equivalent to the second-highest signal received.

3.6.1 Short-sighted re-allocation: Performance-based approach

Agents adopting this strategy exhibit short-sightedness, concentrating solely on the immediate benefits of the decisions under their control. They propose decisions that yield lower performance outcomes to their peers, and express interest in acquiring tasks by signaling for decisions that they believe will enhance their performance contributions beyond the compensation they must provide to the agent offering the task. This approach directly impacts their utility in the short term, as their actions are driven by the pursuit of immediate gains. The task allocation process is organized as follows:

  1.

    Agent m selects the decision \(i_m\) which they are willing to exchange in period t and informs the other agents \(r \in \{1,\dots ,m-1, m+1, \dots , M\}\) about the offer. Selecting the decision \(i_m\) is based on the previous period’s performances. It is specifically the decision in agent m’s area of responsibility that is associated with the minimum performance contribution in \(t-1\):

    $$\begin{aligned} {i}_m \in \mathop {\mathrm {arg\,min}}\limits _{ i': d_{i't-1} \in \textbf{d}_{mt-1}} c_{i't-1}~. \end{aligned}$$
    (11)
  2.

    In addition, agent m fixes a threshold \(p_{i_{m}t}\) for re-allocating this decision in t. The threshold is the performance contribution of the offered decision, so

    $$\begin{aligned} {p}_{i_{m}t}= c_{i_{m}t-1}~. \end{aligned}$$
    (12)

    The offered decision will only be re-allocated to another agent if the signal sent by that agent is greater than \({p}_{i_{m}t}\).

  3.

    Once all agents have selected the tasks they want to offer, they can submit their signals. However, only agents with available resources can participate in the allocation process. This means that an agent m will proceed to the next step only if \(|\textbf{d}_{mt-1}| < Q\).

  4.

    If agents have available resources, they will compute their signals for all offers except their own. The signal \(\tilde{p}_{{i}_mt}^{r}\) submitted by an agent r for a given offered decision \(i_m\) is the performance contribution that the agent expects from this decision in period t. However, since the offered decision \(i_m\) falls outside of agent r’s area of responsibility, they can only estimate the related performance contribution using the following formula:

    $$\begin{aligned} \tilde{p}_{{i}_mt}^{r}= c_{i_{m}t-1} + \epsilon ~, \end{aligned}$$
    (13)

    where \(\epsilon \sim N(0,\sigma )\) indicates an error term that accounts for the uncertainty in the estimate.
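The short-sighted offer and signalling rules (Eqs. 11-13) can be sketched as follows (Python; the contribution map and function names are illustrative):

```python
import random

def short_sighted_offer(own_contribs):
    """Eqs. 11 and 12: offer the own decision with the lowest last-period
    performance contribution; that contribution is the re-allocation
    threshold. `own_contribs` maps decision index -> contribution in t-1."""
    i_m = min(own_contribs, key=own_contribs.get)
    return i_m, own_contribs[i_m]

def short_sighted_signal(offered_contrib, sigma, rng=random):
    """Eq. 13: a bidding agent's noisy estimate of the offered decision's
    contribution, i.e., the true last-period value plus Gaussian noise."""
    return offered_contrib + rng.gauss(0, sigma)
```

With \(\sigma = 0\) the signal equals the true contribution and the offer can never clear the threshold; the noise term is what makes profitable (and unprofitable) trades possible.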

3.6.2 Long-sighted re-allocation: Interdependence-based approach

When agents adopt this strategy, they look beyond short-term gains to also take into account principles similar to those suggested by the mirroring hypothesis, with the goal of achieving higher utility over time. This means agents will focus on strengthening the interdependencies between decisions within their own areas of responsibility. The process for allocating tasks under this approach is organized in the following manner:

  1.

    Agent m identifies the decision \(i_m\) that is being offered to the other agents in the current round using the following criteria:

    $$\begin{aligned} {i}_m \in \mathop {\mathrm {arg\,min}}\limits _{i': d_{i't-1} \in \textbf{d}_{mt-1}}\left( \frac{1}{|\textbf{d}_{mt-1}|-1} \sum _{\begin{array}{c} j: d_{jt-1} \in \textbf{d}_{mt-1} \\ j \ne i' \end{array}} \mu _{mt}^{i'j} \right) \end{aligned}$$
    (14)

    As a reminder, agents want to maximize internal and minimize external interdependencies in this strategy. Equation 14 returns the decision that is associated with the minimum average belief about interdependencies between decision \(i_m\) and the other decisions in agent m’s area of responsibility.

  2.

    Again, agent m will fix a threshold \(p_{i_{m}t}\) for re-allocating decision \(i_m\) to other agents in period t. For simplicity, the average belief about internal interdependencies is used as the threshold value:

    $$\begin{aligned} p_{i_{m}t} = \frac{1}{|\textbf{d}_{mt-1}|-1} \sum _{\begin{array}{c} j: d_{jt} \in \mathbf {d_{mt-1}} \\ j \ne i_m \end{array}} \mu _{mt}^{i{_m}j} \end{aligned}$$
    (15)
  3.

    Once all agents have prepared their offers, they proceed to compute and send their signals. However, an agent will only move on to the next step if it has sufficient resources, i.e., if \(|\textbf{d}_{mt-1}| < Q\).

  4.

    In period t, agents \(r \in \{1,\dots ,m-1, m+1, \dots , M\}\) send a signal containing the average belief about the interdependencies between the offered decision \(i_m\) and the decisions within their areas of responsibility \(\textbf{d}_{rt-1}\). Agent r’s signal for decision \(i_m\) in period t is computed according to:

    $$\begin{aligned} \tilde{p}_{i_{m}t}^{r}= \frac{1}{|\textbf{d}_{rt-1}|} \sum _{{ j: d_{jt} \in \textbf{d}_{rt-1}}} \mu _{rt}^{i{_m}j} \end{aligned}$$
    (16)
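The offer and signaling steps above can be sketched as follows; this is a minimal illustration assuming the beliefs \(\mu\) are stored in a dictionary keyed by decision pairs (all names are hypothetical):

```python
def avg_belief(beliefs, i, others):
    """Average belief about the interdependencies between decision i and
    a set of other decisions (the inner sums of Eqs. 14-16)."""
    return sum(beliefs[(i, j)] for j in others) / len(others)

def make_offer(own, beliefs):
    """Eq. 14: offer the decision with the minimum average believed
    interdependence with the rest of the agent's area of responsibility;
    Eq. 15: that same average serves as the re-allocation threshold."""
    scores = {i: avg_belief(beliefs, i, [j for j in own if j != i]) for i in own}
    offered = min(scores, key=scores.get)
    return offered, scores[offered]

def send_signal(offered, own, beliefs):
    """Eq. 16: a responding agent's signal is its average belief about the
    interdependence between the offered decision and its own decisions."""
    return avg_belief(beliefs, offered, own)
```

For example, an agent responsible for three decisions offers the one that is, on average, believed to be least interdependent with the other two, and other agents respond with the average believed interdependence to their own areas of responsibility.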

3.6.3 Task allocation

Once all agents have sent their signals, there are exactly \(M-1\) signals for each offered decision \(i_m\). Recall that agent m offered decision \(i_m\) at a threshold of \({p}_{i_{m}t}\) and the other agents sent their signals \(\tilde{p}^{r}_{i_{m}t}\). We denote the set of signals received for decision \(i_m\) in period t by the vector \(\textbf{P}_{i_{m}t}\) and compute the maximum signal by \({p}^{r*}_{i_{m}t}= \max _{p'\in \textbf{P}_{i_{m}t}} (p')\). The agent who sends this signal is denoted by \(r^{*}\). Tasks are (re-)allocated as follows:

  1.

    If the maximum signal \({p}^{r*}_{i_{m}t}\) is equal to or exceeds the threshold \({p}_{i_{m}t}\), decision \(i_m\) is re-allocated from agent m to agent \(r^{*}\) according to

    $$\begin{aligned} \textbf{d}_{mt}= & {} \textbf{d}_{mt-1} \setminus \{ d_{i_{m}t-1} \} ~\text {and}\end{aligned}$$
    (17a)
    $$\begin{aligned} \textbf{d}_{r^{*}t}= & {} \left[ {\textbf{d}_{r^{*}t-1}},d_{i_{m}t-1} \right] ~, \end{aligned}$$
    (17b)

    where \(\setminus\) denotes the set difference. If the second-highest signal exceeds the threshold, agent \(r^{*}\) is charged the second-highest bid; otherwise, agent \(r^{*}\) is charged the threshold.

  2.

    If the maximum signal \({p}^{r*}_{i_{m}t}\) falls below the threshold \({p}_{i_{m}t}\), agent m remains responsible for decision \(i_m\), so

    $$\begin{aligned} \textbf{d}_{mt}:=\textbf{d}_{mt-1}~. \end{aligned}$$
    (18)
  3.

    Finally, agents do not update their beliefs about interdependencies in periods in which tasks are re-allocated. Therefore, the observations are the same as in the previous period, i.e., \((\alpha ^{ij}_{mt}, \beta ^{ij}_{mt}) = (\alpha ^{ij}_{mt-1}, \beta ^{ij}_{mt-1})\).
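The allocation rule, including the second-price payment logic, can be sketched as follows (a simplified illustration; function and variable names are hypothetical):

```python
def reallocate(offer_agent, threshold, signals):
    """Allocation rule of Sect. 3.6.3: the decision moves to the agent with
    the highest signal if that signal meets the threshold (Eq. 17);
    otherwise the offering agent keeps it (Eq. 18).  The winner is charged
    the larger of the second-highest signal and the threshold."""
    r_star = max(signals, key=signals.get)
    if signals[r_star] >= threshold:
        others = [p for r, p in signals.items() if r != r_star]
        price = max([threshold] + others)   # second-price logic
        return r_star, price                # decision re-allocated to r_star
    return offer_agent, None                # offering agent keeps the decision
```

This mirrors a second-price auction: the winning agent pays the highest losing bid, floored at the offering agent's threshold.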

Comparison of the two approaches The strategies for making offers and calculating signals lead to distinct patterns of agent behavior. The strategy based on immediate performance encourages agents to focus on enhancing their short-term performance contributions, neglecting the potential long-term repercussions. Agents employing this approach tend to propose tasks within their domain that contribute the least to performance, anticipating that the compensation received will exceed the utility they would derive from executing these tasks themselves. This approach represents short-sighted utility maximization (Simon 1967).

Conversely, the strategy centered on interdependencies shifts focus away from immediate gains. It embraces the principles of the mirroring hypothesis by striving to reduce the interdependencies between the tasks under an agent’s control and those managed by others (Colfer and Baldwin 2016). Agents adopting this strategy anticipate that by minimizing these interdependencies, they will achieve a higher degree of autonomy and, consequently, greater utility over time. Therefore, the selected strategy significantly influences agents’ actions and their decision making processes.

3.7 Simulation setup and observations

This study examines how four primary factors influence performance and the resulting distribution of tasks. These factors include:

  1.

    The type of information employed in allocating tasks, specifically focusing on performance-based and interdependence-based strategies discussed in previous sections.

  2.

    The coefficient a within the linear incentive model outlined in Eq. 6, which spans from collective to individual rewards. This analysis explores a values ranging from 0.05 to 1 in increments of 0.05.

  3.

    The periodicity of reallocating tasks, denoted by \(\tau\), with examined intervals of 5, 15, 25, and 35, alongside benchmark scenarios where task allocation is predetermined and immutable, represented by \(\tau =\infty\).

  4.

    The eight distinct patterns of task interactions depicted in Fig. 3, which include configurations with large and small diagonal blocks along the main diagonal (Figs. 3a and b), mutual interdependencies among these blocks (Figs. 3c and d), and ring-shaped interdependencies (Figs. 3e and f). Additionally, Figs. 3g and h introduce random interdependencies into the small diagonal block pattern. Benchmark task allocations are highlighted, showcasing a linear and symmetrical distribution of tasks, indicating that agent 1 handles tasks 1 to 3, agent 2 covers tasks 4 to 6, and so on.
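Taken together, the four factors above span the following parameter grid (a sketch; variable names are illustrative and the strategy labels are shorthand for the two approaches of Sect. 3.6):

```python
from itertools import product

# Illustrative enumeration of the experimental design described above:
# 2 allocation strategies x 20 incentive parameters x 5 re-allocation
# intervals (including the fixed benchmark, tau = infinity) x 8 patterns.
strategies = ["performance-based", "interdependence-based"]
incentives = [round(0.05 * k, 2) for k in range(1, 21)]   # a = 0.05, ..., 1.0
intervals = [5, 15, 25, 35, float("inf")]                 # tau
patterns = list("abcdefgh")                               # Fig. 3a-h

scenarios = list(product(strategies, incentives, intervals, patterns))
print(len(scenarios))  # 1600 parameter combinations
```

Each combination would then be repeated across simulation runs on freshly drawn performance landscapes.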

Fig. 3

Interdependence patterns

In the simulations, three key variables are tracked: the collective performance, the pattern of task allocation that emerges, and the count of tasks that are reassigned. The collective decision (that is, the concatenation of all agents’ actions), represented as \(\textbf{d}_{ts}\) (Eq. 8), is monitored in each period t across simulation runs \(s \in \{1,\dots ,S\} \subset \mathbb {N}\). The performance of this collective decision is evaluated using \(c(\textbf{d}_{ts})\) (Eq. 3). To facilitate comparison of performance across different simulation runs, the performance \(c(\textbf{d}_{ts})\) is normalized against the highest possible performance achievable within that landscape, denoted as \(c(\textbf{d}^{*}_s)\), applying the following formula:

$$\begin{aligned} \tilde{c}(\textbf{d}_{ts})=\frac{c(\textbf{d}_{ts})}{c(\textbf{d}^{*}_s)}~. \end{aligned}$$
(19)

In every period, observations extend beyond performance to include how tasks are distributed among agents. Specifically, the responsibility domains \(\textbf{d}_{mts}\) of all agents across all periods and simulation runs are documented. It is crucial to distinguish that while performance metrics derive from the collective solutions to the decision problem (the aggregated actions of all agents), task allocation directly relates to the decisions falling within the purview of individual agents’ responsibilities. Thus, the first type of observation sheds light on overall performance, whereas the second type reveals the development of organizational structures.

Table 1 Simulation parameters

3.8 Data analysis

To examine the functional relationships between the dependent and independent variables listed in Table 1, regression neural networks are trained, and partial dependencies are calculated. This methodology aligns with data analysis techniques advocated by Patel et al. (2018), Law (2015), and Blanco-Fernández et al. (2021, 2023b), which endorse the application of regression analysis for evaluating the significance of parameters and deciphering pattern emergence. Let \(\textbf{X}\) be the set of all independent variables included in Table 1. The subset \(\textbf{X}^s\) includes the independent variable(s) that are in the scope of the analysis, while \(\textbf{X}^c\) consists of the complementary set of \(\textbf{X}^s\) in \(\textbf{X}\). Then, \(f(\textbf{X})=f(\textbf{X}^s,\textbf{X}^c)\) represents the trained regression model. The partial dependence of the performance on the independent variables in scope is defined by the expectation of the performance with respect to the complementary independent variables, as follows:

$$\begin{aligned} f^s(\textbf{X}^s)= E_c(f(\textbf{X}^s,\textbf{X}^c)) \approx \frac{1}{V}\sum _{i=1}^{V} f(\textbf{X}^s,\textbf{X}_{(i)}^c)~, \end{aligned}$$
(20)

where V is the number of sampled values of the independent variables in \(\textbf{X}^c\) and \(\textbf{X}_{(i)}^c\) is the \(i^{th}\) sample. By marginalizing over the independent variables in \(\textbf{X}^c\), we obtain a function that depends only on the independent variables in \(\textbf{X}^s\).
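Equation 20 amounts to averaging the model’s predictions over sampled values of the complementary variables; a minimal sketch, assuming the trained regression model is available as a callable:

```python
def partial_dependence(model, x_scope, complement_samples):
    """Eq. 20: approximate the expectation of the model output over the
    complementary variables by averaging the predictions across V sampled
    values of those variables."""
    V = len(complement_samples)
    return sum(model(x_scope, x_c) for x_c in complement_samples) / V
```

Evaluating this function over a grid of in-scope values traces out the partial dependence curves shown in the figures below.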

To study the modularity of the emergent task allocation, the following metric is employed (Leitner 2023): We already know that agent m’s decision problem in period t and simulation run s covers the decisions included in \(\textbf{d}_{mts}\), and the parameter K describes the interdependencies of a particular decision and all performance contributions. Let \(K^{\text {int}}_{mts}\) be the number of interdependencies within agent m’s sub-problem in period t and simulation run s, and \(K^{\text {all}}_{mts}=|\textbf{d}_{mts}|\cdot K\) be the number of all interdependencies between the decisions in agent m’s area of responsibility and all performance contributions. The modularity metric is then defined as the ratio of interdependencies within agent m’s decision problem (\(K^{\text {int}}_{mts}\), numerator) to the total number of times the decisions assigned to agent m affect all performance contributions (\(K^{\text {all}}_{mts}\), denominator):

$$\begin{aligned} \text {Mod}_{mts} = \frac{K^{\text {int}}_{mts}}{K^{\text {all}}_{mts}}~. \end{aligned}$$
(21)

To demonstrate the functionality of the modularity metric, let us examine a scenario focusing on agent 1, with task allocation in line with the benchmark model of symmetric and sequential distribution, as depicted by the shaded regions in Fig. 3. In this example, agent 1 oversees decisions 1 through 3. For the case of small diagonal blocks (Fig. 3a), agent 1 has \(K^{\text {int}}_{1ts}=6\) internal interdependencies, and the decisions assigned to agent 1 have \(K^{\text {all}}_{1ts}=6\) interdependencies in total. Here, the modularity for the benchmark configuration is \(\text {Mod}_{1ts}=1\). Transitioning to configurations with small blocks and reciprocal interdependencies (Fig. 3d), the count of internal interdependencies for agent 1 remains at \(K^{\text {int}}_{1ts}=6\), but the total number of interdependencies increases to \(K^{\text {all}}_{1ts}=18\) due to the complexity of the decision making scenario, resulting in a benchmark modularity of \(\text {Mod}_{1ts}=0.3\dot{3}\). Also, please note that the modularity analysis employs the task allocation resulting from agents’ decisions (rather than the benchmark allocation) to calculate modularity, aiming to contrast the modularity of the evolved solution against the benchmark.
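A minimal sketch of Eq. 21 reproduces the worked example, assuming the interdependence pattern is given as a mapping from each decision to the set of performance contributions it affects (the matrices below are illustrative stand-ins for Fig. 3a and d):

```python
def modularity(own, affects):
    """Eq. 21: ratio of interdependencies inside an agent's sub-problem
    (K_int) to all interdependencies of the agent's decisions (K_all).
    `affects[i]` is the set of performance contributions that decision i
    influences (K entries per decision, excluding its own contribution)."""
    k_int = sum(1 for i in own for j in affects[i] if j in own)
    k_all = sum(len(affects[i]) for i in own)
    return k_int / k_all

# Worked example: agent 1 owns decisions {0, 1, 2} (0-indexed).
own = {0, 1, 2}

# Small diagonal blocks (cf. Fig. 3a): each decision affects the other
# two in its block, so K = 2.
small_blocks = {i: {j for j in range(3) if j != i} for i in range(3)}
print(modularity(own, small_blocks))   # prints 1.0

# Reciprocally coupled small blocks (cf. Fig. 3d): K = 6 per decision,
# 2 internal plus 4 external interdependencies (illustrative choice).
reciprocal = {i: {j for j in range(3) if j != i} | {3, 4, 5, 6}
              for i in range(3)}
print(modularity(own, reciprocal))     # prints 0.3333333333333333 (6/18)
```

The same function applies unchanged to the emergent allocations, since only the set of owned decisions changes between periods.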

Fig. 4

Partial dependencies of performances on the incentive parameter

4 Results and discussion

4.1 The effects of incentives in emergent task allocation

This section explores how emergent task allocation affects organizational performance and compares it with the traditional top-down approach to assigning tasks. Figure 4 shows the relationship between organizational performance (y-axis) and the incentive parameter (x-axis) for various interdependence patterns. Red lines (with triangles) and black lines (with circles) represent scenarios where agents follow performance-based and interdependence-based strategies, respectively; these scenarios constitute the novel contribution of this study. Benchmark cases, where the organizational structure is determined top-down, are shown with dashed lines. Regarding the characterization of the incentive mechanisms, note that lower values of the incentive parameter (on the left side of the x-axis) correspond to incentive schemes that focus more on collective performance, while higher values (on the right side of the x-axis) indicate incentive schemes that emphasize the performance within an agent’s own area of responsibility (see Eq. 6).

The benchmark cases mirror well-documented patterns in existing literature. For instance, Nalbantian and Schotter (1997) conduct experiments demonstrating that group-based incentives surpass individual incentives in scenarios involving externalities. Similarly, Kato and Kauhanen (2018) utilize panel data to show that group-based incentives enhance productivity and overall performance compared to individual ones. Pizzini (2010) analyzes survey data from medical group practices and finds that group-based incentives boost performance in tasks with high interdependence, as they foster cooperation and increase output. Similarly, Ladley et al. (2015) employ computational methods to reveal that group-based incentives promote more cooperative behavior, especially beneficial in departments with interdependent tasks. Moser and Wodzicki (2007) find that individual incentives are less effective in situations with task interdependence, primarily due to a lack of cooperative behavior among decision-makers whose rewards are not linked. This observation aligns with Rees et al. (2003), who conclude that strong individual incentives make decision-makers less sensitive to interdependencies, thus diminishing performance in interconnected tasks. Shaw et al. (2002) note that individual incentives are effective when task interdependencies are minimal. These patterns are reflected in Fig. 4, which shows that the effect of the incentive mechanism is negligible in scenarios with low cross-departmental interdependencies (e.g., Fig. 4a and b). In contrast, in cases with significant cross-departmental interdependencies, such as depicted in Fig. 4d and f, group-based incentives, indicated by lower incentive parameters, significantly outperform individual incentives.

The effectiveness of the analyzed bottom-up task allocation strategies (interdependence- and performance-based) relies heavily on the nature of an organization’s incentive scheme. When incentives are structured around group performance (on the left side of the x-axis), the performance-based strategy tends to surpass the alternatives. Regarding organizational performance, the benchmark scenario and the interdependence-based strategy yield comparable results. This pattern holds across different interdependence scenarios, becoming most evident in situations involving complex and non-modular tasks (Fig. 4c–h), and is less distinct in cases involving modular and nearly modular tasks (Fig. 4a and b).

Shifting from group-based to individualistic incentive mechanisms (moving to the right on the x-axis) results in a decrease in performance for scenarios that employ emergent task allocation. Notably, the outcomes obtained through the performance-based strategy are particularly sensitive to adjustments in the incentive parameter. Thus, while performance-based task allocation emerges as the optimal strategy under group-based incentives, it ranks as the least effective when incentives lean towards individualism. This transition, where the preferred task allocation strategy changes, occurs at intermediate values of the incentive parameter for all interdependence structures. The effectiveness of the interdependence-based strategy also declines, albeit significantly less than that of the performance-based approach.

In scenarios where incentive mechanisms heavily favor individualism, the benchmark scenario tends to surpass other methods in nearly every instance. However, the findings also reveal that superior performance levels are attainable through either balanced or group-oriented incentives, especially when the tasks are sufficiently complex to generate interdependencies among them, as depicted in Fig. 4c–h.

4.2 Modularity and emergent task allocation

This section explores the impact of the independent variables on agent behavior during task allocation. Figure 5 displays the average number of tasks swapped when agents are permitted to reallocate tasks. Scenarios employing a performance-based strategy are marked by red lines (triangles), whereas those based on task interdependencies are indicated by black lines (circles). The data suggest a more dynamic task allocation process under the performance-based strategy, as reflected by the stabilization of task exchanges at a count of 4 after approximately 4 reallocation periods. In contrast, the strategy focusing on interdependencies leads to fewer task swaps, with the number eventually dropping to zero after about 5 reallocation periods. Additionally, the results indicate that the interdependence pattern (Fig. 5a), the incentive parameter (Fig. 5d), and the number of periods between task allocations (Fig. 5c) have no or only a marginal effect on the average number of exchanged tasks. However, slightly more tasks are re-allocated when agents have more time to learn about interdependencies (Fig. 5c).

Fig. 5

Partial dependence of the number of re-allocated tasks on selected parameters

Figure 6 presents the probability distributions for the modularity metric, which measures the alignment between the organizational structure and the interdependence pattern (as defined in Eq. 21). As discussed in Sect. 2, a better alignment is expected to result in higher organizational performance. Each subplot in the figure represents the distributions for the interdependence patterns introduced in Fig. 3. The red lines represent the strategy where tasks are allocated based on performance, while the grey lines indicate the strategy focusing on interdependence. The shaded regions illustrate the probability ranges for varying levels of incentive parameters. For instance, in the left portion of Fig. 6a, there is about a \(45\%\) chance that the modularity of the resulting structure will be 0.2 or less under an interdependence-focused allocation. In comparison, this probability increases to around \(60\%\) when a performance-focused task allocation is used.

Fig. 6

Cumulative distributions of modularity metrics

Consistent with expectations, the findings suggest that a performance-based allocation strategy tends to produce emergent task allocation patterns characterized by lower modularity. Conversely, when agents progressively learn about interdependencies and adjust their task allocation accordingly, the emergent patterns are more likely to exhibit higher modularity. Surprisingly, the modularity observed in most emergent task allocation patterns is often less than that of the benchmark solution. For instance, as depicted in Fig. 6c, in about \(20\%\) of instances using an interdependence-based strategy and \(10\%\) of instances using a performance-based strategy, the emergent patterns reach or surpass the modularity of the benchmark solution. This trend is even more pronounced in other scenarios, such as those involving small diagonal blocks and ring-like interdependencies, where the benchmark modularity is met or exceeded in fewer than \(10\%\) of cases. Recall from Sect. 2 the discussion suggesting that higher modularity typically results in improved performance. The findings challenge this assumption: according to Fig. 6, the chance of encountering a less modular structure increases when tasks are re-allocated based on performance-driven motivations, yet, as shown in Fig. 4, organizations with incentives leaning towards group achievements often outperform those with a static or interdependence-focused approach to task allocation. Therefore, the relevance of the mirroring principle appears to diminish in organizations employing performance-based strategies for task re-allocation.

4.3 Discussion

The analysis has produced some interesting insights, showing that the model successfully replicates well-known patterns from the literature on organizational design with conventional top-down task allocation, serving as a form of model validation. In particular, the model demonstrates the efficacy of group-based incentive mechanisms in influencing individual behavior, notably in scenarios where the tasks assigned to decision-makers are interconnected (Fischer and Huddart 2008). In addition, the model has provided new perspectives on the behavior of organizations with emergent structures. Specifically, the simulations indicate that adopting performance-based strategies for task re-allocation can lead to organizational structures characterized by lower modularity but potentially higher overall performance, especially when group-based incentives are effectively applied within the organization.

4.3.1 Aligning task allocation and incentives

The results underscore the necessity of evaluating the mirroring hypothesis within an expanded framework of dynamic organizational design, including dynamic organizational forms. The findings suggest that the benefits of adopting a flexible organizational structure are maximized when mechanisms for task allocation are effectively coordinated with organizational incentives. With the rapid advancements in digital technologies, this insight is increasingly pertinent. For example, previous research has indicated that digital technology advancements facilitate flexible organizational designs by enhancing processes and providing vital information about interdependencies (Snow et al. 2017; Ratner and Plotnikof 2022; Balasubramanian et al. 2022; Worren et al. 2020). It has also been noted that the progress in digital technologies not only provides information crucial for organizing but also highlights the need for organizations to be more adaptive in their design approaches. For instance, Verma et al. (2023) argue that to capitalize on the innovations offered by digital technologies, organizations must swiftly adjust their processes, including task allocation. In this context, technology might either substitute human skills and intelligence, such as in identifying interdependencies (Parry et al. 2016), or assume responsibility for task allocation (Gombolay et al. 2015). Moreover, integrating artificial intelligence with organizational learning could enhance the efficiency of organizing, which would likely lead to a dynamic alignment of task allocation with task characteristics in real-time, reflecting scenarios with bottom-up task allocation (Jarrahi et al. 2023; Wijnhoven 2022; Ewertowski et al. 2023).

The findings presented in this paper highlight a crucial aspect relevant to recent developments in organizational theory. Previous research has highlighted the importance of aligning elements of organizational design in traditionally structured, top-down organizations (Donaldson and Joffe 2014; Schlevogt 2002; Jiang et al. 2023; Hwang et al. 2022; Samuel et al. 2023). This study extends these insights to organizations characterized by dynamically emerging structures, suggesting that while alignment remains crucial, the principles governing it may require adaptation. Traditional organization theory contends that task allocations that adhere to the mirroring hypothesis – typified by modular structures – achieve optimal performance when paired with individualistic incentives (Langlois 2002). However, for environments with dynamic task allocation, the results indicate that for complex tasks, implementing group-based incentives and performance-driven, bottom-up strategies for task allocation is more advantageous than what is traditionally endorsed by organizational theory.

4.3.2 “Fluid” organizational structures

The results indicate that in organizations employing emergent methods for task allocation, the motivations of the agents significantly impact the volume of tasks swapped. For instance, in systems that rely on interdependence, most task swapping occurs in the initial five rounds. On the other hand, systems that employ performance-based criteria exhibit a consistently high level of task swapping throughout. Consequently, while task allocation structures based on interdependence tend to stabilize after a few rounds of task reassignment, those driven by performance-based criteria remain fluid and adaptable. This aspect is especially noteworthy considering that performance-based models tend to surpass other methods in specific situations.

The literature on organizational design has explored various related concepts. For instance, Teece et al. (1997) highlight the necessity for organizations to adapt to changes in their surroundings, such as shifts in customer demands and market trends, to capitalize on opportunities. Consequently, it is vital for organizational structures to be adaptable and dynamic. Englmaier et al. (2018) echo this sentiment, arguing that in dynamic organizations, the allocation of individuals to tasks happens internally, and they note that some tasks may not be assigned at all. This issue is deliberately omitted in the model presented in this paper. Similarly, Zohar (2021) introduces the notion of a quantum organization, which is characterized by its multiple, interconnected components, its agility, responsiveness, and adaptability, its emergent and self-organizing nature, and its evolutionary progression through various mutations. The findings presented in this paper demonstrate that such dynamic organizational forms can develop from the bottom up and, particularly notably, that when effectively integrated with other organizational design elements, they can outperform their interdependence-based or top-down counterparts in certain scenarios and match their effectiveness in others.

Deist et al. (2023) explore the role of digital units within innovative contexts, offering a vital perspective. They suggest that the adoption of fluid organizations requires management to adopt a supportive role rather than a directive one, facilitating dynamic task allocation rather than imposing tasks from the top down. This raises the question of how to cultivate an organizational culture that fosters emergent task allocation and ensures effective control of such organizations. Huettermann et al. (2024) provide insights into this challenge by demonstrating the ongoing need to direct employee behavior in decentralized settings. Furthermore, the transition to emergent structures in organizations introduces challenges related to psychological safety. Unlike in organizations with static task allocation, whether designed top-down or based on interdependent task reallocation, fluid organizations face increased uncertainty due to the changing roles and responsibilities, potentially impacting psychological safety (Edmondson and Bransby 2023). This shift towards decentralizing task allocation may lead to negative outcomes, such as knowledge hiding (Jeong et al. 2023) and reduced team productivity (Tannenbaum et al. 2023).

5 Summary and conclusion

This paper introduced an agent-based model to explore the dynamics of emergent task allocation within stylized organizations, focusing on the interplay between task allocation strategies, incentive mechanisms, and task complexity. The findings highlight the efficacy of emergent, performance-based task allocation strategies under conditions of group-based incentives and complex task interdependencies, surpassing traditional top-down and interdependence-based approaches in such scenarios. However, the advantage of bottom-up strategies diminishes with the shift towards individualistic incentives, where traditional top-down allocation emerges as more effective. This nuanced understanding extends current knowledge on organizational incentives and task allocation (e.g., Fischer and Huddart 2008), challenging conventional beliefs about modularity in organizational design. Thus, the research suggests that the pursuit of modularity may not be universally beneficial, particularly in environments without individualistic incentives. These insights might be valuable for managers who aim for adaptable, efficient organizational structures in an era of increasing complexity.

This study has several important limitations that must be kept in mind when interpreting the results. First, the model assumes that agents (or parts of the organization) interact only through the incentive mechanism and does not consider direct communication between them. This setup may be plausible for remote-work settings (Bloom et al. 2015), but future studies could examine how direct communication between agents changes the dynamics. Second, prior research shows that decentralized decision making can lead to problems such as frustration, the formation of decision making “islands”, or the emergence of informal leaders (Holck 2018). These issues are not covered by the model, but including them could increase its realism in future work. Third, the effects of culture on organizational functioning were not considered. National cultures differ in how they handle uncertainty and in their expectations about workplace hierarchy (Hofstede 2011); future research could examine whether decentralization faces greater hurdles in some cultures than the model suggests. Fourth, the study focuses on a single organization and abstracts from external forces. It would be valuable to study how different organizations that use less traditional structures (such as self-organizing teams) grow and interact with each other (Volberda and Lewin 2003). Fifth, the environment, or landscape, in which the organization operates is assumed to be static over time. Examining changing environments and how sudden shifts affect the organization could be an interesting direction for future research (Leitner 2024). Lastly, the analysis assumes that all interdependencies are of equal strength, whereas in reality some connections might be stronger or more consequential than others, which could change the dynamics. This was not considered here but could be an interesting area for future studies.