1 Introduction

In managerial science, simulation is a frequently used methodological approach for developing theory in terms of gaining structural insights into and understanding of fundamental processes within a certain area of interest (Davis et al. 2007; Harrison et al. 2007). However, in the domain of management accounting and control (MAC), simulation-based approaches are relatively rarely employed, particularly when compared to the use of analytical modeling, the development of “frameworks”, experiments, and surveys, which, according to Hesford et al. (2007), appear to predominate (see Footnote 1). Simulation is regarded as particularly beneficial when the underlying research question contains a fundamental tension (e.g., short-run vs. long-run), captures a variety of highly interrelated variables (which lead to a high level of complexity), or is of a procedural nature (e.g., adaptation towards higher levels of performance) (Davis et al. 2007; Wall 2014; Harrison et al. 2007). Against this background, we seek to provide an illustrative overview of simulation-based research efforts in the domain of MAC. Based on this overview, we aim at establishing a sound basis for deriving potentially fruitful future options for simulation-based research in MAC.

A relevant question appears to be how to delimit the domain of MAC. This is a challenging task, since MAC spans a wide range from rather “technical” aspects of cost accounting systems through the use of several forms of organizational controls, up to supporting the development and implementation of a firm’s strategy (Berry et al. 2009; Otley 1999, 2003; Ferreira and Otley 2009; Simons 1995; Chenhall 2003). Correspondingly, the term management control system (MCS) is used in quite different ways (see Footnote 2), ranging from the systematic use of management accounting in combination with various forms of organizational controls for achieving an overall objective (Chenhall 2003) to broader conceptualizations of MCS, which also include strategic development, the implementation of strategies, and learning processes (Simons 1995; Merchant and Otley 2006). Moreover, it has been argued that MCSs have to be studied together with other mechanisms of control in organizations (Abernethy and Brownell 1997), and that “performance management system” might be a more appropriate term to capture the comprehensive and emergent set of “evolving formal and informal mechanisms, processes, systems, and networks used by organizations for conveying the key objectives and goals elicited by management, for assisting the strategic process and ongoing management through analysis, planning, measurement, control, rewarding, and broadly managing performance, and for supporting and facilitating organizational learning and change” (Ferreira and Otley 2009, p. 264).

For the aim of our paper, we find it useful to rely on a rather broad understanding of MAC. The reason why we do so is two-fold: (1) a notable body of simulation-based research is directed towards strategic management and organizational design and, thus, has some overlap with control mechanisms, which are also discussed in the context of MCS; (2) as mentioned above, simulation-based research is particularly useful for studying complex and emergent phenomena. Therefore, it might be helpful to investigate mechanisms employed in MCS in a broader organizational context, as claimed by some researchers (e.g., Abernethy and Brownell 1997).

Doing this requires deciding which literature to attend to within the selected domains. Obviously, when seeking to provide an overview of research in a certain field, this is a rather critical aspect in order to avoid a selection bias (Petticrew and Roberts 2006). We decided to mainly focus on articles which are published in academic journals in the area of managerial science and in journals dedicated to computational economics and, of course, to follow citations within these articles. It is worth mentioning that we also tried to employ a more structured approach (Petticrew and Roberts 2006) involving the search in literature databases for relevant terms (a recent example of this approach is given in Hauschild and Knyphausen-Aufseß 2013). However, we found that a considerable number of rather influential articles which apparently employ simulation and which are published in well-known scientific management journals do not use relevant terms in the title, keywords or abstract (see Footnote 3). Of course, relying on a less structured approach for the selection of articles involves the risk of a selection bias (Petticrew and Roberts 2006). However, given that we aim to provide an illustrative overview of the applications and contributions of simulation in the domain of MAC and that a database search on our topic appears problematic, we decided to follow the approach described above.

With this in mind and after giving a brief overview of simulation as a research method in Sect. 2, we structure our illustrative survey on simulation-based research in the domain of MAC as follows: in Sect. 3, we start by reviewing the use of simulation-based research related to management accounting systems. By doing so, we reflect a rather narrow perspective on the field of MAC, which is mainly directed towards management accounting information. In Sect. 4, we broaden the perspective by reviewing simulation-based research on information and control mechanisms (as, for example, captured by the delegation of decisional competencies, different approaches to performance evaluation, and reward systems) in the context of organizational structure and managerial decision-making. In Sect. 5, we provide an overview of a stream of simulation-based research that focuses on strategic planning and the implementation of strategies within organizations. In this section, we reflect a broader conceptualization of MCS. Finally, Sect. 6 concludes our illustrative overview.

2 Simulation as research method

The use of simulation as a way to do management research can be considered a young but fast-developing field (Meyer et al. 2009; Gilbert 1998; Axelrod 1997a). The same applies to the application of simulation techniques in the field of MAC (see Footnote 4). Amongst others, Davis et al. (2007) reason that, in particular for research in the context of organizations, the impact of simulation techniques is steadily increasing. One can regard the rise of simulation as a research methodology as the natural consequence of the fact that organizational and managerial behavior typically results from complex and interdependent processes (Harrison et al. 2007). The application of simulation as a research methodology brings several advantages. For example, simulation allows for modeling nonlinear interactions within and also across organizations (Anderson 1999). Moreover, simulation techniques allow for addressing the complexities and interdependencies of (entire) organizations, and for overcoming difficulties that arise from distributed decision-making or multiple interrelated layers of decision-making authority within organizations (Jahangirian et al. 2010). It is particularly these different layers that make simulation an appealing methodology, because simulation allows for investigating the impact of micro-level actions, of interrelations among these actions, and of interrelations among the system’s components on performance (Ma and Nakamori 2005). By micro-level we refer to the “individual” level mapped by the simulation model. Depending on the particular model, these can, for example, be individual agents or decision-makers, (management) accounting systems, or reward structures. The macro-level, in contrast, represents the aggregate level which, depending on the research question, can be the MCS or the organizational performance.

Traditional approaches (i.e., theoretical analysis or deduction, and empirical analysis or induction) reach their limits when incorporating the extent of real-world organizational complexity (Berg 2004). For deduction, on the one hand, social processes can incorporate an extent of complexity that makes the derivation mathematically intractable. With respect to induction in the context of management research, on the other hand, there are always the problems of data availability and measurement (Harrison et al. 2007). Simulation methodologies allow for facing this particular complexity (Axelrod 1997a; Waldrop 1992). Simulation can be regarded as a promising methodology that combines the advantages of both induction and deduction. When building simulation models, one typically starts with specifying a set of explicit assumptions regarding the system of concern (e.g., an organization). Classical deduction also starts with specifying assumptions, but aims at proving their consequences, while simulation methodologies allow for incorporating more realistic assumptions and for handling (intractable) mathematical relations by numerical methods (Harrison et al. 2007). By doing so, simulation generates a set of data on the basis of deductively specified rules (and, thus, also overcomes problems of data availability and measurement), which is then analyzed by induction (Berg 2004; Harrison et al. 2007).

However, despite the increasing use of simulation as a research methodology, giving a proper definition of “simulation” is still a difficult task (Nelson 2004; Harrison et al. 2007). From a meta-perspective, one can regard simulation experiments as the construction of a model that represents a specific entity, which can, e.g., be organizations or groups of individuals (Dawson 1962). By experimenting with this descriptive computer model (e.g., by varying variables and their interrelationships), researchers aim at investigating the effect on the system’s behavior, e.g., with respect to system (in)stability or a particular notion of performance (Smith 2003; Berends and Romme 1999; Wall 2014). However, having this quite general definition in mind, when reading research articles that employ simulation methodologies, one typically encounters a wide variety of different approaches (Nelson 2004; Davis et al. 2007; Wall 2014). In their seminal paper, Davis et al. (2007) provide five clusters of simulation approaches, namely (i) system dynamics, (ii) NK fitness landscapes, (iii) genetic algorithms, (iv) cellular automata, and (v) stochastic processes (see Footnote 5). Deckert and Klein (2010) provide a rather general classification and distinguish between continuous, dynamic, and Monte-Carlo simulation methodologies (cf. also Domschke and Drexel 2005). One might further group simulation methodologies according to Harrison et al. (2007). They basically cluster into system dynamics models, agent-based models, and cellular automata, whereby the class of agent-based models can be regarded as comprising the classes (ii) NK fitness landscapes and (iii) genetic algorithms, as classified by Davis et al. (2007). Harrison et al. (2007), however, do not discuss the class of stochastic processes. Subsequently, we follow the categorization of simulation approaches provided by Davis et al. (2007).

The approach of (i) system dynamics can be regarded as a modeling methodology for learning about the behavior of complex systems, which is grounded in the ideas of nonlinear dynamics, causality, and feedback (Sterman 2001; Forrester 1971; Sterman 1994). Typically, with system dynamics the system is modeled by a set of simple processes with circular causalities (causal loops) among them, and the behavior of the system under investigation results from these interrelated processes. Basically, system dynamics-based simulation experiments are very close to the continuous simulation methodology (Deckert and Klein 2010). Typical research questions investigated with this class of models pertain to the effect of causality and system instability on a specific notion of performance.
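To convey the flavor of this model class, the following minimal sketch shows a stock-and-flow model with a balancing feedback loop, numerically integrated with Euler steps. The model (a service backlog whose processing capacity adjusts towards the backlog with a delay) and all parameter values are an invented illustration, not taken from any of the cited studies.

```python
# Minimal system dynamics sketch (hypothetical model and parameters):
# a backlog (stock) is fed by incoming orders and drained by completions;
# capacity adjusts towards the backlog with a delay (balancing feedback loop).

DT = 0.25           # Euler integration step
HORIZON = 40.0      # simulated periods
ORDER_RATE = 100.0  # constant inflow of work per period
ADJ_TIME = 4.0      # delay with which capacity follows the backlog

backlog, capacity, t = 200.0, 80.0, 0.0
while t < HORIZON:
    completion = min(backlog / DT, capacity)                      # outflow, limited by capacity
    capacity_change = (backlog / ADJ_TIME - capacity) / ADJ_TIME  # balancing loop
    backlog += DT * (ORDER_RATE - completion)                     # stock accumulation
    capacity += DT * capacity_change
    t += DT
    if abs(t - round(t)) < 1e-9:                                  # report once per period
        print(f"t={t:5.1f}  backlog={backlog:8.2f}  capacity={capacity:7.2f}")
```

Running the sketch shows how the interplay of the accumulation and the delayed adjustment, rather than any single equation, generates the system's trajectory, which is the defining feature of the approach.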

The cluster of (ii) models grounded in the idea of NK fitness landscapes is mainly utilized to test the efficiency of different adaptive search and optimization processes in modular systems. The main questions of concern are the speed and efficiency of system adaptation (e.g., Rivkin and Siggelkow 2007). The model class originally stems from evolutionary biology (Kauffman and Levin 1987; Kauffman 1995), but was successfully transferred to the domain of management science (Levinthal 1997), so that it can be applied to questions of MAC (e.g., Rivkin and Siggelkow 2007; Leitner and Wall 2011, 2014; Behrens et al. 2014). The core feature of NK fitness landscapes is the explicit modeling of interactions among attributes, which explains its value for research in managerial science: by controlling the parameter K, the approach allows studying systems of variable complexity in terms of interdependencies among N subsystems (e.g., among subunits or decisions in an organization) with respect to overall fitness. Simulation models based on (iii) genetic algorithms do not focus on system adaptation, as NK fitness landscape-grounded models do, but are utilized to investigate the learning of a population of heterogeneous agents (like organizations or individuals) and their adaptation—in terms of mutation and retention—towards a near-optimal agent form (Davis et al. 2007; Goldberg 1989; Reeves and Rowe 2003).
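As a rough illustration of the NK logic, the following sketch (a toy implementation with invented parameter values, not a reproduction of any cited model) builds a small landscape with N binary decisions, each of which interacts with K others, and lets an agent adapt via one-bit local search.

```python
# Minimal NK-landscape sketch (illustrative only): N binary decisions, each
# decision's contribution depends on itself and K randomly chosen other
# decisions; an agent adapts by one-bit local (hill-climbing) search.
import itertools, random

N, K = 6, 2
random.seed(42)

# For each decision i, pick the K other decisions it interacts with.
neighbors = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
# Random contribution table: one draw per decision and per state of (self + K) bits.
contrib = [{bits: random.random() for bits in itertools.product((0, 1), repeat=K + 1)}
           for _ in range(N)]

def fitness(config):
    """Average contribution over all N decisions (the usual NK fitness)."""
    return sum(contrib[i][tuple([config[i]] + [config[j] for j in neighbors[i]])]
               for i in range(N)) / N

config = [random.randint(0, 1) for _ in range(N)]
for step in range(50):                        # simple local search
    i = random.randrange(N)
    candidate = config.copy()
    candidate[i] ^= 1                         # flip one decision
    if fitness(candidate) > fitness(config):  # accept only improvements
        config = candidate
print("final configuration:", config, "fitness:", round(fitness(config), 3))
```

Raising K densifies the interactions, which makes the landscape more rugged and local search more likely to end on an inferior local peak; this is precisely the complexity lever exploited in the studies discussed below.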

The idea of (iv) cellular automata considers a population of spatially related agents (such as organizations or individuals) that behave according to relatively simple rules (Wolfram 1986b). In particular, cellular automata (Wolfram 1986a) consist of a grid where agents (e.g., firms) “reside” in a cell and, hence, the lattice reflects a spatial distribution of the agents. The cells can take various states, and the state of each cell depends on its own state in the previous period and the previous state of the neighboring cells. The research focus is to investigate macro-level phenomena, which originate from interactions at the agent (micro-)level. In the context of management research, this class of models is frequently used for research on diffusion processes (see, e.g., Goldenberg and Efroni 2001; Kiesling et al. 2012). Since classes (ii)–(iv) usually consider the adaptation of a system or of individuals over time, they can be classified as dynamic simulation methodologies (Deckert and Klein 2010).
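The following sketch illustrates the cellular-automaton logic with a hypothetical threshold rule for the diffusion of an innovation on a grid of firms; grid size, seeding probability, and the adoption rule are invented for illustration and do not replicate the cited diffusion studies.

```python
# Minimal cellular-automaton sketch of a diffusion process (hypothetical rule):
# firms on a grid adopt an innovation once at least two of their four direct
# neighbours have adopted; the grid wraps around at the edges (torus).
import random

SIZE, STEPS = 20, 15
random.seed(1)
grid = [[1 if random.random() < 0.05 else 0 for _ in range(SIZE)] for _ in range(SIZE)]

def step(g):
    new = [row[:] for row in g]
    for x in range(SIZE):
        for y in range(SIZE):
            if g[x][y] == 1:
                continue
            adopted_neighbours = sum(g[(x + dx) % SIZE][(y + dy) % SIZE]
                                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            if adopted_neighbours >= 2:       # simple threshold (adoption) rule
                new[x][y] = 1
    return new

for t in range(STEPS):
    grid = step(grid)
    print(f"t={t:2d}  adopters={sum(map(sum, grid))}/{SIZE * SIZE}")
```

The aggregate adoption curve printed at each step is the macro-level phenomenon; it is not coded anywhere explicitly but emerges from the purely local micro-level rule.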

Finally, under (v) stochastic processes, one can subsume simulation models that exhibit less structured patterns than approaches (i)–(iv). Typically, the class of stochastic processes is characterized by custom-made algorithms and their combinations (Davis et al. 2007). A prominent methodology, which is frequently applied to questions of MAC and which falls into the class of stochastic models, is Monte-Carlo simulation. Monte-Carlo methods are basically grounded in the idea of stochastic values as inputs for model computations and, consequently, are often used for sensitivity analyses (Harrison et al. 2007). With respect to the classification of Deckert and Klein (2010), one can add that Monte-Carlo models are typically static-stochastic models. While classes (i)–(iv) usually incorporate changes in the underlying system’s state, Monte-Carlo models are usually characterized by steady systems and focus on generating a large number of samples for subsequent inductive analyses.
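A minimal Monte-Carlo sketch of this kind of analysis might look as follows; the profit model, the input distributions, and all numbers are invented purely for illustration.

```python
# Minimal Monte-Carlo sketch (inputs and distributions are invented): a static
# profit model is evaluated for many random draws of its uncertain inputs, and
# the resulting sample is analysed inductively.
import random, statistics

random.seed(7)
SAMPLES = 10_000
profits = []
for _ in range(SAMPLES):
    demand = random.gauss(1000, 150)        # uncertain sales volume
    unit_cost = random.uniform(6.0, 8.0)    # uncertain variable cost per unit
    price = 12.0
    fixed_cost = 3000.0
    profits.append(demand * (price - unit_cost) - fixed_cost)

print("mean profit:", round(statistics.mean(profits), 1))
print("std dev    :", round(statistics.stdev(profits), 1))
print("P(loss)    :", sum(p < 0 for p in profits) / SAMPLES)
```

Note that the model itself does not change over time; the insight comes entirely from the inductive analysis of the generated sample, which is what distinguishes this static-stochastic class from classes (i)–(iv).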

One question that naturally arises is: what makes “good” simulation experiments? Davis et al. (2007) propose a roadmap for developing theory by using simulation as the primary research method. Starting with a research question and a simple theory, choosing a proper simulation approach [out of simulation model classes (i)–(v)] and creating and verifying the computational model appear to be crucial for the success of the simulation experiments. The final phases of the roadmap proposed by Davis et al. (2007) cover experimenting with the model as well as validating the results with empirical data, in order to strengthen the validity of the results. However, it is beyond dispute that developing the simulation model is the most difficult part of this roadmap (Rybacki et al. 2014; Robinson 2008a). The process of modeling requires a set of diverse abilities and experience. The research problem needs to be properly analyzed and condensed into its most essential features, and crucial assumptions need to be identified and modified, before the modeler can extensively elaborate on the model. One can argue that building a proper model significantly contributes to “good” simulation-based research. Legitimately, modeling is frequently interpreted as “science” or “art” (Shannon 1975, 1998; Rybacki et al. 2014), which is often learnt only by experience (Robinson 2008a). However, in order to structure this very important step of doing simulation-based research, some scholars have elaborated on various life-cycles of simulation model development (cf., e.g., Rybacki et al. 2014; Kreutzer 1986; Robinson 2008b). What, from a meta-perspective, most of these models have in common [and this also applies to the roadmap proposed by Davis et al. (2007)] is the following sequence: (i) conceptual and (ii) formal modeling, (iii) building the simulator, (iv) model validation, (v) experimenting, and (vi) analyzing. It is, however, notable that all of these phases are highly iterative and interwoven (Gilbert and Troitzsch 2005; Sargent 2005; Balci 1998).

While simulation as a research method has various appealing properties, there are also limitations and pitfalls. First of all, research based on simulation is often regarded as suffering from what is called a “black box” problem: models and results often suffer from not being comprehensible and transparent to other researchers (Lorscheid et al. 2012; Harrison et al. 2007; Reiss 2011; Barth 2012). A major virtue of simulation models is that they allow analyzing rather complex phenomena (particularly compared to “traditional” formal models); however, making simulation models fully comprehensible and transparent to other researchers would often require extensive descriptions of data, rules, parameter settings, etc. [which often remain undone to avoid overloading the reader or to meet the publisher’s space limitations (Lorscheid et al. 2012)]. Moreover, the major findings of a simulation study often depend subtly on parameter settings, making it even more difficult to be clear and concise (Axelrod 1997b). For overcoming these somewhat “built-in” problems of simulation-based research, standardization—be it in the process of simulation modeling, in the data structures, or in the algorithms—is regarded as a promising approach (Richiardi et al. 2006; Lorscheid et al. 2012; Müller et al. 2014).

Subsequently, we provide an overview of the use of simulation-based research related to MAC and—as argued in the introduction—categorized into research efforts concerning management accounting systems (Sect. 3), organizational structure and decision-making (Sect. 4), and strategic and operative planning (Sect. 5). However, it is worth mentioning that simulation-based research appears to be of different relevance within these three fields: with respect to management accounting systems, there is some evidence that simulation as a research method is still only of minor relevance (Hesford et al. 2007; see Footnote 6), while in the domains of organizational design and strategic management, simulation is a rather prominent research method. Several of these research efforts also cover aspects of MAC (see Footnote 7). Of course, it is tempting to speculate on what causes the differing popularity of simulation as a research method. One reason might be some kind of different focus (not to say world view) incorporated in the domain of (management) accounting compared to the one (implicitly) incorporated in the domain of simulation. For this, Christensen’s plea put forward in his seminal paper on accounting errors appears meaningful: “Endogenous errors of accounting are more common than acknowledged. First, the accounting model is linear, whereas the world is nonlinear. Second, accounting is not the only information channel, and accountants must consider the role of accounting when it supplements other information sources... Errors are inherent to accounting, and accountants must address them” (p. 1827). Among the virtues of simulation, in contrast, is the ability to study non-linearities (e.g., tipping points and thresholds) and to figure out boundary conditions for certain regularities. In the domains of organizational design and strategic planning, non-linearities (e.g., those caused by external shocks) are rather well-studied effects, which might be a reason why simulation-based research methods are more relevant there.

3 Management accounting systems

The application of simulation approaches to concerns of management accounting systems primarily focuses on costing systems and procedures (e.g., activity-based costing systems, traditional cost accounting procedures). This section aims at giving an overview of selected influential papers that are placed within this domain and employ simulation methodologies. The first part of this section is concerned with costing systems, their robustness to errors, and their ability to provide accurate product costs. The second part of this section focuses on the question of which type of management accounting system (e.g., activity-based costing systems, traditional costing systems, throughput accounting systems) to implement under specific environmental circumstances so that accurate costing information is produced, which is then further utilized for decision-making purposes.

In particular, there is a body of literature that focuses on the effect of errors in costing systems and procedures on the quality of the provided information. Christensen and Demski (1997) investigate the case of a multiproduct setting and the ability to define (or estimate) separable cost functions. In particular, they ground the ability to define separable cost functions on a particular nested technology (e.g., production function specifications) and test the ability of activity-based costing procedures to produce relatively accurate estimates of marginal costs. In order to do so, they employ stochastic simulation and investigate various characterizations of the accounting procedure for different scenarios. They find that accuracy varies with technology specifications and products. Moreover, the authors draw attention to the question of for which products to tolerate relatively large errors in order to keep errors small for other products. In their seminal paper, Labro and Vanhoucke (2007) investigate the effect of (measurement, aggregation, and specification) errors on the accuracy of costing information in a two-stage activity-based cost allocation procedure. They employ a stochastic simulation approach. Their main findings can be summarized as follows: (1) partial improvement in the costing system usually leads to an increase in information quality. This is not surprising; however, there are some scenarios in which partial improvement decreases information quality due to offsetting effects among errors. They narrowly classify three such cases. In particular, Labro and Vanhoucke (2007) distinguish between stage I errors (measurement errors in the resource cost pools and in the resource drivers, aggregation errors in the resource cost pools, and specification errors in the resource drivers) and stage II errors (aggregation errors in the activity cost pools, measurement and specification errors in the activity drivers). They define three cases in which errors occur simultaneously and partial improvement negatively impacts information quality: (i) when measurement errors in the resource drivers are high, increasing aggregation errors in the activity cost pools increases overall accuracy. They find similar results for scenarios with (ii) very high aggregation errors in the activity cost pools and measurement errors in the activity drivers. Finally, they provide evidence that (iii) high aggregation errors in the resource cost pools offset measurement errors in the resource cost pools when all resource cost pools are characterized by a high measurement error. Moreover, (2) they provide evidence that errors in the second stage of the cost allocation procedure have more impact on information quality than first-stage errors. Consequently, when it comes to refining the costing system, Labro and Vanhoucke (2007) propose to focus on refinements in the second stage of the activity-based costing system. Finally, (3) they find that measurement and aggregation errors have a higher tendency to lead to undercosting than to overcosting, even though the extent of undercosting appears to be typically small. Similarly, by stochastic simulation, Leitner (2012b) investigates the case of errors in traditional costing systems and characterizes the impact of errors (and interactions among errors) on the accuracy of the provided information. The effect of biased raw accounting data on the accuracy of the information provided by traditional costing systems is examined in Leitner (2014).
He also employs a stochastic simulation approach. For the case of activity-based as well as traditional costing systems, Labro and Vanhoucke (2007) and Leitner (2012b, 2014) find that there are specific interactions among errors that lead to offsetting effects, and provide profound guidance on how to consider their findings when designing costing systems (see Footnote 8).
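To give a flavor of how this research stream typically operates, the following sketch compares the product costs reported by a deliberately simplified two-stage allocation against a version whose activity drivers are perturbed by a measurement error; the cost structure, error size, and accuracy measure are invented for illustration and do not reproduce Labro and Vanhoucke's (2007) design.

```python
# Minimal sketch of a two-stage cost allocation with a measurement error on the
# activity drivers (structure and numbers are invented; it only illustrates the
# "true vs. noisy system" comparison used in this research stream).
import random

random.seed(3)
resource_pools = [5000.0, 3000.0, 2000.0]            # stage-I cost pools
res_drivers = [[0.5, 0.3, 0.2],                      # share of each resource pool
               [0.2, 0.5, 0.3],                      # allocated to 3 activities
               [0.4, 0.4, 0.2]]
act_drivers = [[0.6, 0.4],                           # share of each activity pool
               [0.3, 0.7],                           # allocated to 2 products
               [0.5, 0.5]]

def product_costs(act_drv):
    activities = [sum(resource_pools[r] * res_drivers[r][a] for r in range(3))
                  for a in range(3)]
    return [sum(activities[a] * act_drv[a][p] for a in range(3)) for p in range(2)]

true_costs = product_costs(act_drivers)

# Perturb the activity drivers (a stage-II measurement error) and renormalise.
noisy = []
for row in act_drivers:
    perturbed = [max(w + random.gauss(0, 0.05), 0.0) for w in row]
    s = sum(perturbed)
    noisy.append([w / s for w in perturbed])
noisy_costs = product_costs(noisy)

error = sum((t - n) ** 2 for t, n in zip(true_costs, noisy_costs)) ** 0.5
print("true costs :", [round(c, 1) for c in true_costs])
print("noisy costs:", [round(c, 1) for c in noisy_costs])
print("Euclidean distance (accuracy loss):", round(error, 2))
```

Repeating such a comparison over many randomly drawn costing systems and error combinations is, in essence, how the interaction and offsetting effects reported in this literature are detected.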

Labro and Vanhoucke (2008) address the hypothesis that a high diversity in resource consumption patterns leads to a decrease in costing system robustness with respect to errors. They employ a stochastic simulation. As in Labro and Vanhoucke (2007), aggregation, measurement, and specification errors on resource cost pools and drivers as well as on activity cost pools and drivers are investigated. They study three characterizations of diversity in the resource consumption patterns to be considered when designing costing systems, i.e., diversity (i) in the (dollar) size of resource cost pools, (ii) in proportional resource usage by activities and products at a cost pool, and (iii) in the way in which resources are shared among activities and products across the entire costing system. For (i) and (ii), the results presented in Labro and Vanhoucke (2008) are in line with intuition and the general tenor of academic as well as practitioner literature, that is, information quality decreases with increased diversity (Labro and Vanhoucke 2008). The highest level of robustness can be observed when resource cost pools are of equal dollar size and when the percentages allocated by cost drivers at cost pools are equal, too. For (iii), however, the authors find that increased diversity does not generally lead to a higher sensitivity to errors. In particular, they find that decreasing the number of driver links or spreading them more unevenly over the cost pools increases the robustness of the costing system to measurement and specification errors on the drivers. As the number of driver links is decreased, the robustness to aggregation and measurement errors on resource cost pools decreases (but not monotonically). For diversity type (iii), the authors introduce two parameters. First, the number of activity drivers: as the number of activity driver links decreases, the diversity in resource usage of products across activities increases (i.e., each activity only serves a few cost objects, which leads to more heterogeneity in resource usage across activities). Second, the way the activity drivers are distributed over the activity cost pools: the more unevenly they are spread or the lower their number is, the higher is the level of diversity. If the driver links are distributed unevenly or their number is low, the probability that an error occurs on a link to only one cost object increases and, thus, the level of robustness (to measurement and specification errors on the drivers) increases. The finding that robustness to aggregation and measurement errors on resource cost pools does not decrease monotonically with a reduction of the number of driver links can be explained by a trade-off between two effects. Many driver links (i.e., cost objects using many activities) (1) increase the probability of offsetting effects (cf. also Footnote 8) and (2) increase the impact of aggregation and measurement errors (i.e., absolute errors expressed in monetary units) on cost objects. Absolute errors fully translate through to the cost objects. However, a relatively even distribution of links over cost pools appears to increase the probability of error offsetting. Labro and Vanhoucke (2008) narrowly characterize situations in which a decrease/increase in robustness is to be expected when changes in diversity of type (iii) are made.

Most methods utilized to compute product costs assume that production technologies consume inputs in fixed proportions. Dhavale (2007) relaxes this assumption and considers procedures that allow variable proportions of inputs (i.e., variable-proportion technologies). By stochastic simulation, the costing procedures based on fixed- and variable-proportion technologies are compared with respect to errors in product costs and differences in the resulting profit. Dhavale (2007) provides evidence that procedures based on fixed-proportion technologies lead to prices that are around three percent higher as compared to variable-proportion technologies. More severe are the results presented with respect to pricing errors: the simulation experiments show that prices are 50–70 percent higher compared to “correct” prices. Balakrishnan et al. (2011) also focus on the design of costing systems. In particular, they argue that practitioners utilize rules of thumb when grouping resources into cost pools and when selecting drivers for cost allocation. By stochastic simulation, they examine the impact of popular costing system design choices on the accuracy of the finally computed costs. On the basis of their results, they develop a method for forming cost pools, which is a combination of size-based and correlation-based rules, and provide evidence that utilizing composite cost drivers is superior to utilizing cost drivers based on consumption patterns. By an extensive sensitivity analysis they address the generalizability of their results. Kollock (1993) places his research at the intersection of cooperation and accounting systems. In particular, by stochastic simulation he investigates the advantages of different strategies for cooperation given (i) different accounting systems for tracking exchanges and (ii) noise. The strategies for cooperation basically stem from the prisoner’s dilemma. With respect to different types of accounting systems, Kollock (1993) distinguishes between relaxed and restrictive systems. He characterizes relaxed accounting systems by not necessarily balanced accounts where not all records are tracked in much detail, while the rules incorporated into restrictive accounting systems are much narrower. Kollock (1993) argues that implementing flexible accounting systems (at least for internal purposes) can bring advantages. One argument could be that flexible systems do not react to noise as strongly as restrictive accounting systems do.

By stochastic simulation, Lea and Fredendall (2002) focus on how the management accounting system and the methodology to determine the product mix impact (short- and long-term) performance for different shop characterizations, that is, shops with flat and with deep product structures. With respect to the type of management accounting system, they examine the case of traditional costing systems, activity-based costing systems, and throughput accounting. For the case of product mix decisions, they consider two algorithms: one complex algorithm that is basically based on linear programming and considers many constraints (LP), but might be difficult to solve (Markland et al. 1987), and one alternative algorithm that is based on the Theory of Constraints product mix heuristic (TOCh) and only considers dominant bottlenecks as constraints (Goldratt et al. 1992). In order to make product mix decisions, the product mix algorithm interacts with the accounting system if demand exceeds production capacity. As different types of accounting systems compute costs in different ways and different product mix algorithms use product cost information differently, one can assume that different combinations of accounting systems and product mix algorithms crucially impact manufacturing performance. In a nutshell, they consider four input variables for their simulated product mix decisions: the type of the accounting system, the product structure, the product mix algorithm, and the planning horizon. Lea and Fredendall (2002) also consider uncertainty in the environment through stochastic processes. They provide some interesting results: they show that activity-based costing systems are more sensitive to environmental uncertainty than traditional costing systems. Traditional costing systems, thus, can still provide a proper information base for managerial decision-making (given an appropriate overhead allocation rate and updated information from an integrated information system). They also provide results from which one can derive that the complex product mix algorithm (LP) is much more sensitive to environmental uncertainty than the less complex product mix algorithm (TOCh). Moreover, they show that short- and long-term profitability are not necessarily conflicting, i.e., they show that the utilization of management accounting systems that increase short-term profit also appears to be beneficial with respect to long-term profit. This result holds irrespective of whether financial or non-financial measures are deployed for the determination of performance. Lea and Fredendall (2002) narrowly define scenarios in which shop manageability (in terms of reduced bottleneck shiftiness) increases for flat and deep product structures. This is particularly the case for scenarios in which the TOCh product mix algorithm is deployed. One might explain this by the fact that, in contrast to the LP approach, the TOCh algorithm tries to fully utilize the bottleneck (and also allows for idle time at the non-bottlenecks). Ultimately, and having performance enhancement in mind, Lea and Fredendall (2002) provide, on the basis of their results, a comprehensive decision tree which can be utilized by practitioners as a basis for decisions regarding the type of costing system, the product mix algorithm, and the product structure.
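For readers unfamiliar with the TOC product mix heuristic that Lea and Fredendall (2002) contrast with the LP approach, the following sketch shows its core priority rule with invented products and capacity figures; it is only a generic illustration of the heuristic, not their simulation model.

```python
# Minimal sketch of the Theory-of-Constraints product-mix heuristic (numbers
# are invented): products are ranked by contribution margin per minute on the
# bottleneck resource, and the bottleneck's capacity is filled in that order.
products = [  # (name, margin per unit, bottleneck minutes per unit, demand)
    ("A", 60.0, 10.0, 100),
    ("B", 45.0,  5.0, 120),
    ("C", 30.0,  4.0, 150),
]
BOTTLENECK_CAPACITY = 1600.0   # minutes available on the constraint

# Rank by margin per bottleneck minute (the TOC priority rule).
ranked = sorted(products, key=lambda p: p[1] / p[2], reverse=True)

remaining, plan, profit = BOTTLENECK_CAPACITY, {}, 0.0
for name, margin, minutes, demand in ranked:
    qty = min(demand, int(remaining // minutes))   # fill capacity greedily
    plan[name] = qty
    remaining -= qty * minutes
    profit += qty * margin

print("product mix:", plan)
print("throughput contribution:", profit, " unused bottleneck minutes:", remaining)
```

Unlike a full LP formulation, the heuristic ignores all non-bottleneck constraints, which is exactly why it is computationally cheap and, as the simulation results suggest, less sensitive to environmental uncertainty.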

While Lea and Fredendall (2002) examine a rather special case (i.e., the optimal combination of the type of accounting system and product mix algorithm under consideration of the planning horizon and the product structure), Boyd and Cox (2002) investigate a more general case. They employ a stochastic simulation and focus on the impact of management accounting systems in a resource-constrained production environment on the quality of the decision. They consider two types of decisions, namely make-or-buy decisions and pricing decisions. With respect to the type of management accounting system, similarly to Lea and Fredendall (2002), they distinguish between traditional cost accounting systems, activity-based costing, and throughput costing, but add direct costing as a fourth type. Their benchmark solution is computed utilizing a linear programming approach. Boyd and Cox (2002) show that (for their scenarios) the throughput costing approach leads to “good” decisions (i.e., the provided information is in line with the information provided by the linear programming solution), while the remaining three types of management accounting systems appear to lead to suboptimal results (as compared to the linear programming solution).

Lea and Min (2003) also focus on the question of which type of management accounting system leads to the best possible level of performance. In particular, by stochastic simulation they investigate the relation between (i) the type of management accounting system, (ii) the manufacturing control system, and (iii) the long-term competitive advantage in terms of manufacturing superiority for different time horizons. With respect to management accounting systems, they examine traditional costing, activity-based costing, and throughput accounting systems. The information provided by the management accounting system is then utilized by the manufacturing control system in order to make product mix decisions. They consider Just-in-Time and Theory of Constraints-based manufacturing as characterizations of the manufacturing control system. Their simulation model considers uncertainty in terms of short-term fluctuations. They provide evidence that activity-based approaches lead to superior performance in terms of short- and long-term profitability, customer service, and minimized inventory as compared to throughput accounting and traditional costing systems. This particularly applies to situations in which overhead costs are high and labor and material costs are low. In their setting, traditional costing systems outperform throughput accounting. With respect to the manufacturing control system, they provide evidence that Just-in-Time-based approaches lead to higher short- and long-term performance as compared to Theory of Constraints-based control approaches. One can assume that this finding might be due to differences in buffer inventory policies and sequencing rules. With respect to different time horizons, Lea and Min (2003) cannot show a significant interaction of the planning horizon with the types of management accounting system and the manufacturing control system with respect to performance.

The different findings presented in Lea and Fredendall (2002), Boyd and Cox (2002), and Lea and Min (2003) might be explained as follows: all three studies are concerned with the choice of the management accounting system. However, the context (i.e., the environment in which the decision on the type of management accounting system is to be made) is different. While Lea and Fredendall (2002) investigate the choice of which management accounting system to implement given different product structures and different product mix algorithms, namely linear programming and the Theory of Constraints product mix heuristic, Lea and Min (2003) investigate the choice of the management accounting system given Just-in-Time-based and Theory of Constraints-based approaches to determine the product mix. Boyd and Cox (2002) follow another approach and compare the information provided by a set of accounting systems (in order to make pricing and make-or-buy decisions) to a linear programming solution. These fundamental differences in the research questions might explain the different findings.

4 Organizational structure and decision-making

Management accounting systems provide information which, according to the widely accepted distinction of Demski (1998), can perform two roles for decision-making: decision-facilitating information aims at reducing the decision-maker’s pre-decision uncertainty and, thereby, at increasing the probability of making better decisions with respect to the desired objectives; decision-influencing information is intended to affect the behavior of (other) persons, unfolding its effects via the monitoring of behavior, the measurement and evaluation of performance, and the rewarding or penalizing of performance (cf. also Wall and Greiling 2011). Hence, management accounting information is embedded into a set of formal organizational design elements (e.g., delegation of decisional competencies, performance evaluation, and reward systems), and, in this respect, we subsequently seek to provide an overview of simulation-based research efforts which particularly focus on the interrelation between information, organizational structure, and decision-making. Among the various simulation-based research efforts in the domain of organizational design (see Footnote 7 for further references), we put particular emphasis on the relatively new field of agent-based simulation models: in this context, agents are autonomous decision-making entities which pursue certain objectives (e.g., Bonabeau 2002; Safarzyńska and van den Bergh 2010; Tesfatsion 2006). Agents may represent individuals (e.g., business unit managers or board members of a firm) or a group of individuals. The possibility to “aggregate” agents is particularly interesting in managerial science since, for example, it allows mapping hierarchical structures of heterogeneous decision-making agents in terms of business units, departments, and individual managers (Chang and Harrington 2006; Anderson 1999).

In this sense, Dosi et al. (2003), building on the framework of NK fitness landscapes, focus on the decomposition of organizational decision problems into partial problems in terms of a “division of cognitive labour” (p. 413). The way the overall problem is segmented is assumed to affect how the search for new solutions in the organization is configured, while the allocation of decisional rights determines the way solutions are selected. The main concern of Dosi et al.’s study is the relation between decomposition on the one hand and, on the other hand, incentives (related to individual, team, or firm performance) and the power to veto the decisions of other decision-makers as selection mechanisms. It turns out to be of crucial relevance whether the segmentation and delegation “cuts” interactions among decisions, i.e., whether or not the organizational segmentation (perfectly) reflects partitions in the decision problem. If decomposition and delegation do not perfectly follow the interactions between decisions, then, according to Dosi et al. (2003), rewards based on performance information could induce sub-units to mutually perturb each other’s search processes. In this case, the reward structure alone does not lead to sufficient coordination with respect to overall performance, and hierarchical or lateral veto power turns out to be useful in preventing endless perturbations.

In a similar vein, Rivkin and Siggelkow (2003), also using an NK model, analyze the interrelated effects of five organizational components on organizational performance, i.e., (1) the allocation of decisional tasks to sub-units, (2) the decision-making authority of the headquarters, (3) the broadness of the search for new solutions by sub-units, (4) the incentive system, which might reward sub-units for firm performance or for their departmental performance, and (5) the information-processing abilities of the central authority. The authors investigate, for example, the effects of sub-unit managers’ capabilities for a broad search for solutions. It turns out that centralization is more valuable if managers are highly capable of broad search and interactions between sub-units’ decisions are dense. The reason is that, in this case, the central authority stabilizes search and selection, which corresponds to the findings of Dosi et al. (2003) reported above. Furthermore, Rivkin and Siggelkow (2003) find that central authority and firm-wide incentives are complements rather than substitutes—in particular if interactions among sub-units are dense. The intuition behind this result is that rewarding firm performance can direct sub-units to act in the firm’s best interest but does not necessarily lead to coordinated choices: “Capable subordinates can engage in aggressive, well-intentioned search that results in mutually destructive ‘improvement’” (Rivkin and Siggelkow 2003, p. 306).
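The mechanics behind such findings can be illustrated with a deliberately stripped-down sketch (our own simplification, not a reproduction of Dosi et al.'s or Rivkin and Siggelkow's models): two sub-units search locally on a shared NK-style landscape and accept changes according to either departmental or firm-wide rewards.

```python
# Illustrative sketch of decomposed search under different reward structures:
# two sub-units each control half of N binary decisions and accept a one-bit
# change if it improves the measure they are rewarded on - either their
# departmental contribution or overall firm performance.
import itertools, random

N, K = 6, 2
random.seed(5)
neighbors = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
contrib = [{bits: random.random() for bits in itertools.product((0, 1), repeat=K + 1)}
           for _ in range(N)]

def contribution(config, decisions):
    return sum(contrib[i][tuple([config[i]] + [config[j] for j in neighbors[i]])]
               for i in decisions) / len(decisions)

UNITS = [range(0, N // 2), range(N // 2, N)]      # decomposition into two sub-units

def adapt(reward):                                # reward: "firm" or "department"
    config = [random.randint(0, 1) for _ in range(N)]
    for _ in range(200):
        unit = random.choice(UNITS)
        i = random.choice(list(unit))
        cand = config.copy()
        cand[i] ^= 1
        scope = range(N) if reward == "firm" else unit
        if contribution(cand, scope) > contribution(config, scope):
            config = cand
    return contribution(config, range(N))         # report firm-level fitness

for reward in ("firm", "department"):
    results = [adapt(reward) for _ in range(50)]
    print(reward, "reward -> mean firm fitness:", round(sum(results) / 50, 3))
```

Because the interaction pattern is drawn at random, the decomposition typically does not "cut" along the interactions, so departmentally rewarded units can perturb each other's contributions, which is the coordination problem discussed above.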

Siggelkow and Rivkin (2005) employ a simulation based on the idea of NK fitness landscapes to analyze how well different organizational forms can cope with changes in the environment while searching for higher levels of performance. With respect to coordination, not only the reward structure is considered, but also a variety of intermediate coordination mechanisms between centralized and completely decentralized decision-making authority. While keeping the decomposition of decisions fixed (two sub-units of equal decisional scope), the turbulence of the environment and the complexity of the organizational decision problem (in terms of interactions between decisions) are varied under the regime of different coordination modes. Results indicate that in the most demanding case of highly turbulent environments in combination with high decisional complexity, two organizational forms turn out to be most appropriate: first, an organization making use of lateral communication and rewards based on firm performance, and, second, an organization with highly centralized decision-making authority, where in both forms decision-makers are required to have considerable capabilities for evaluating alternatives, i.e., for making use of decision-facilitating information. This information may be provided by a management accounting system or another information system.

This leads to the second focal point of this section, i.e., simulation-based studies on information (and information processing) in decision-making. In the models sketched so far, the decision-makers—while suffering from limited capabilities for search—were engaged in search processes for new solutions; however, in none of these models do the decision-makers have difficulties evaluating the consequences of alternative solutions, once discovered. Hence, until now, decision-facilitating information was regarded as perfect, and the same was assumed for the ex ante evaluation of alternatives by decision-makers. Of course, this does not necessarily reflect actual decision-making. Next, we review some simulation-based research efforts that were made to fill this gap.

As such, Raghunathan (1999) focuses on a central question related to managerial decision-making: how do information quality and decision-maker quality impact decision quality? He employs a stochastic simulation approach and argues that the quality of information can be crucially affected by information technology. In order to express the quality of the information provided to the decision-maker, Raghunathan (1999) uses an accuracy measure as a proxy for data quality dimensions (cf. also Wang and Strong 1996). The quality of the decision-maker refers to the quality of the decision-making process. It is basically expressed as knowledge about (i) interrelations among fragments of the entire decision problem and (ii) interrelations among decision variables. The decision-maker’s knowledge is modeled as conditional beliefs about these interrelations. In order to express the quality of the decision-maker, Raghunathan (1999) uses an accuracy measure, i.e., the decision-maker’s beliefs about (i) and (ii) are set in relation to their conditional probabilities. Finally, decision quality refers to the quality of the decision made by the decision-maker and is measured using the absolute difference between the probability and the decision-maker’s belief about the output value. Raghunathan (1999) shows that, depending on decision-maker quality, an increase in information quality improves or decreases decision quality. In particular, if the decision-maker has knowledge about (i) and (ii), an increase in decision quality can be observed as information quality increases. In contrast, if decision-maker quality is low, an increase in information quality might decrease decision quality. Note that decision-maker quality might also be used as a proxy for distinguishing operational and strategic decisions, i.e., at the operational level, relationships (i) and (ii) can often be described using exact rules and procedures, while at the strategic level, knowledge about (i) and (ii) is usually more prone to errors. The results indicate that focusing only on increasing the quality of the information provided to the decision-maker is not recommendable, as decision quality appears to increase only if decision-maker quality and information quality are increased simultaneously. However, if exact relationships do not exist in the decision-making problem, the level of information quality appears not to have any impact on decision quality.
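The core intuition—that better information only pays off for a sufficiently well-calibrated decision-maker—can be made tangible with the following toy Monte-Carlo sketch; the binary-signal setup and all parameters are our own invention and do not reproduce Raghunathan's (1999) model.

```python
# Toy operationalisation of the interplay of information quality and
# decision-maker quality: a binary state is reported through a noisy signal
# (information quality) and interpreted by a decision-maker whose belief about
# the signal's reliability may itself be wrong (decision-maker quality).
import random

random.seed(17)
TRIALS = 20_000

def decision_quality(info_quality, dm_belief):
    correct = 0
    for _ in range(TRIALS):
        state = random.random() < 0.5
        signal = state if random.random() < info_quality else not state
        # The decision-maker follows the signal only if he/she believes it is
        # informative; dm_belief is the (possibly wrong) reliability estimate.
        decision = signal if dm_belief >= 0.5 else not signal
        correct += decision == state
    return correct / TRIALS

for info_q in (0.6, 0.8, 0.95):
    for dm_belief in (0.9, 0.3):   # well-calibrated vs. badly calibrated manager
        print(f"info quality={info_q:.2f}  dm belief={dm_belief:.1f}  "
              f"decision quality={decision_quality(info_q, dm_belief):.3f}")
```

In this toy setting, improving the signal helps the well-calibrated manager but hurts the badly calibrated one, mirroring the qualitative pattern reported above.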

Knudsen and Levinthal (2007), building on the seminal work of Sah and Stiglitz (1986) on the robustness of organizational structures against Type I errors (accepting inferior options) and Type II errors (rejecting superior options), investigate the impact of path dependence and the complexity of the decision problem: the availability of alternatives depends on the current state of the organization, and, relying on the idea of a local search on NK fitness landscapes, alternative options are “neighbors” of the current practice; complexity captures whether or not changing one attribute of the current state in favor of an alternative also affects the performance contributions of other attributes. At the heart of Knudsen and Levinthal’s study are the imperfect capabilities of decision-makers (evaluators) to assess the consequences of alternatives, and how these affect the performance achieved in search processes under the regime of different organizational forms between hierarchies and polyarchies. The authors find that high-precision evaluators are more likely to be trapped in local peaks (i.e., inferior solutions) than evaluators with lower screening capabilities, and that organizations with hierarchical structures are more likely to stick to local maxima than hybrid forms. In a similar vein, by employing a variant of the NK model, Wall (2010, 2011, 2014) investigates the effects of imperfect decision-facilitating and decision-influencing information for different levels of complexity of the decision problems and under the regime of different reward structures, other coordination mechanisms, and learning capabilities of decision-makers. While the simulation results indicate that these “variables” interfere in a subtle way with respect to firm performance, the general finding is in line with that of Knudsen and Levinthal (2007), suggesting that imperfect information eventually might also yield positive effects on overall performance. Moreover, results reveal that the negative as well as the positive effects of informational imperfections can be nearly leveled out by appropriate coordination mechanisms. Leitner and Behrens (2014b, 2015) and Leitner et al. (2015a, b) are concerned with imperfect information in the context of coordinating distributed investment decisions given different organizational characterizations. By stochastic simulation, they also show that—under certain circumstances—noisy information (and offsetting effects among the resulting errors) can be beneficial with respect to decision-making performance.
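The somewhat counter-intuitive result that imperfect evaluations can improve search performance can be illustrated with a small sketch (a deliberate simplification of this research stream, using an invented one-dimensional rugged landscape rather than an NK model): a searcher who evaluates alternatives with noise can escape local peaks, whereas an error-free evaluator stays on the first peak it reaches, and very large errors destroy guidance altogether.

```python
# Illustrative sketch: hill climbing on a rugged landscape with noisy
# ex-ante evaluations of the current and the candidate option.
import math, random

random.seed(11)

def true_performance(x):
    # Invented rugged landscape with several local peaks on [0, 10].
    return math.sin(x) + 0.6 * math.sin(3 * x) + 0.1 * x

def search(noise_sd, steps=300):
    x = 1.0
    for _ in range(steps):
        cand = min(max(x + random.uniform(-0.3, 0.3), 0.0), 10.0)
        # The evaluator only observes noisy signals of both options.
        noisy_cand = true_performance(cand) + random.gauss(0, noise_sd)
        noisy_current = true_performance(x) + random.gauss(0, noise_sd)
        if noisy_cand > noisy_current:
            x = cand
    return true_performance(x)

for sd in (0.0, 0.1, 0.5, 2.0):
    runs = [search(sd) for _ in range(200)]
    print(f"evaluation noise sd={sd:3.1f}  mean final performance={sum(runs)/len(runs):.3f}")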

While these research efforts reflect unsystematic errors in the ex ante evaluation of alternatives, further simulation-based research activities have investigated the effects of systematic errors (biases) (Tversky and Kahneman 1974) when evaluating alternatives. According to Denrell and March (2001), employing a simulation based on stochastic processes, adaptive processes tend to reproduce success and, thus, lead to a bias against risky and novel alternative options. In a somewhat reverse perspective, Baumann and Martignoni (2011) investigate whether it might be beneficial to induce a “pro-innovation bias” in organizations as a counterpart to several traditional “mechanisms” at the individual as well as the organizational level, which tend to prevent rather than foster innovation and change. Among these innovation-preventing “mechanisms” are, for example, the status quo bias in individual decision-making (Kahneman et al. 1982) and inadequate applications of standard financial tools like the discounted cash flow method (Christensen et al. 2008). Based on the framework of NK fitness landscapes, Baumann and Martignoni (2011) find that a moderate pro-innovation bias tends to enhance performance in the case of complex and stable environments, while, in most cases, an unbiased evaluation of options turns out to be most effective.

Behrens et al. (2014) employ a variant of the NK model and investigate systematic biases resulting from the phrasing of information (in terms of negative or positive framing) in conjunction with the timing of information (with a particular focus on the recency effect). They find that organizational performance is rather robust against these biases if the organization faces a well-decomposable decision problem. However, in the case of intense cross-departmental interactions, the organization’s performance decreases, whereby both negatively phrased information and reliance on recently derived information induce an improvement. The effect of positively phrased information tends to work in the same direction, but is less pronounced.

5 Strategic and operative planning

In the preceding sections we introduced simulation-based research efforts related to (i) mechanisms incorporated in MCS, (ii) organizational performance in the context of imperfect information (which is, e.g., provided by the management accounting system), and (iii) biased managerial decision-making. However, MCSs could also be intended to provide support in the development and implementation of corporate strategies (Berry et al. 2009; Otley 1999, 2003; Ferreira and Otley 2009; Simons 1995). Hence, another focal aspect of our illustrative overview is the contribution of simulation-based research in the field of strategic planning and its implementation via operative planning.

In this spirit, Ghemawat and Levinthal (2008) argue that a strategy is the result of two approaches which complement each other: in the “ex ante design”, major principle and policy choices are made via top-down pre-specification; the “ex post adjustment” captures the emergence of strategic positions and tactical alignments. The relevance of these two complementary ways of strategic specification affects the requirements which strategic planning has to meet. For example, strategic planning is subject to relatively modest requirements if a few higher-level choices more or less determine the subsequent lower-level decisions, while requirements are considerable when the strategic action plan has to be completely specified in advance. Ghemawat and Levinthal (2008) employ an agent-based simulation model which modifies the symmetric structure of the NK model, taking into account that some choices might be more influential (“strategic”) than others. The results suggest focusing ex ante on the more influential choices rather than on a random selection of choices; moreover, the results indicate that tactical adjustments can compensate for mis-specified strategic choices if the latter are highly interrelated with other strategic decisions, but not if interactions with other policy choices are negligible.

Gavetti and Levinthal (2000), employing the framework of NK fitness landscapes, put more focus on the processes of strategic decision-making with respect to long-term performance by analyzing forward-looking search processes in contrast to backward-looking ones, i.e., experience-based search. In forward-looking search, the decision-maker relies on “beliefs about the linkage between the choice of actions and the impact of those actions on outcomes” (Gavetti and Levinthal 2000, p. 113). Experiential search is represented by local search, meaning that only one or a few attributes of the current state (or policy) are changed, and, should this change be productive, the modified policy serves as the basis for a new local search. Gavetti and Levinthal (2000) focus on situations in which the decision-maker’s understanding (cognition) of the decision problem is a simplified version of the true one in terms of fewer dimensions of action. Hence, by applying a forward-looking search, a decision-maker seeks to identify a superior “region” of solutions to the true decision problem, which he/she then tries to exploit via experience-based (local) search. Gavetti and Levinthal (2000) find that even simplified representations of the actual decision problem provide powerful guidance for subsequent experiential searches.

Another “mode” of search and strategic decision-making is investigated by Gavetti et al. (2005). By employing an agent-based simulation approach based on the idea of NK fitness landscapes, the authors investigate analogical reasoning, i.e., applying insights developed in one context to a new setting (p. 693). In particular, they study the effects of managerial characteristics on the contribution of analogical reasoning to firm performance. The presented results indicate that analogical reasoning particularly contributes to firm performance if managers effectively distinguish similar industries from different ones. Moreover, analogical reasoning appears to become less effective with the depth of managerial experience, but tends to become more effective with increasing breadth of experience.

From a long-range perspective, Zott (2003) investigates how differences in firm performance within an industry arise. To this end, he sets up a simulation model in which dynamic capabilities evolve endogenously through experimentation and imitation, that is, through internal and external search for configurations of organizational resources and/or operational routines. In particular, he takes a closer look at the timing of resource deployment to initiate adaptive change, the costs of adaptation through experimentation or imitation, and the ability to learn how to adapt organizational resources. Zott (2003) sets up a formal model and examines its dynamics using a combination of genetic algorithms and stochastic processes. He finds that timing crucially affects firm performance and that even small differences in initial resource configurations lead to large performance differences over time. Under certain circumstances, Zott (2003) provides evidence of path dependence: some firms become skilled in experimentation while other firms become good imitators, which also results in performance differentials within the industry.
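Since the genetic-algorithm component is central to this line of analysis, the following sketch shows a generic selection, crossover, and mutation loop over binary resource configurations. The encoding, the toy fitness function, and all parameters are illustrative assumptions on our part and do not reflect Zott’s (2003) original model, which additionally incorporates stochastic processes for timing, costs, and learning.

```python
import random

random.seed(7)

GENOME_LEN = 10      # binary encoding of a resource/routine configuration (illustrative)
POP_SIZE = 20
GENERATIONS = 30
MUTATION_RATE = 0.05

def fitness(genome):
    # Toy objective: reward configurations matching a hidden "optimal" pattern.
    target = [i % 2 for i in range(GENOME_LEN)]
    return sum(int(g == t) for g, t in zip(genome, target))

def crossover(a, b):
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Tournament selection of parents, then recombination and mutation ("experimentation").
    def pick():
        return max(random.sample(population, 3), key=fitness)
    population = [mutate(crossover(pick(), pick())) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print("best configuration:", best, "fitness:", fitness(best))
```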

The research efforts sketched so far are directed towards rather general issues of searching for (new) strategic options and modes of strategic decision-making. In the following, we go into more detail on specific domains of strategic and operative planning.

Using stochastic simulation, Feng et al. (2010) investigate the efficiency of rolling horizon planning procedures. In particular, they compare rolling horizon models of (i) fully integrated, (ii) partially integrated, and (iii) decoupled sales and operations planning against fixed horizon planning models, and they incorporate demand uncertainty and forecasting inaccuracy. Approach (i) considers solely centralized but cross-functional planning, while approaches (ii) and (iii) consider a mix of centralized and decentralized planning authority. They find that rolling horizon models are required when addressing planning problems in practice (as fixed horizon models lead to inferior performance) and that the fully integrated planning approach (with centralized and cross-functional planning authority) is superior to the other two types of models. They also examine the impact of different forecasting errors and find that, in general, forecast deviations do not have a great impact on performance.
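To illustrate the basic mechanics of rolling horizon planning in this context, the following sketch simulates a simple production setting in which the plan is recomputed each period over a fixed horizon but only the first-period decision is executed, while realized demand deviates from the forecast. The demand process, the capacity limit, and the growth of the forecast error with the forecast distance are illustrative assumptions of ours, not Feng et al.’s (2010) model.

```python
import random

random.seed(3)

HORIZON = 4        # length of the rolling planning horizon in periods (illustrative)
PERIODS = 12       # number of simulated periods
CAPACITY = 120     # production capacity per period (illustrative)
MEAN_DEMAND = 100

def forecast(h):
    """Demand forecast h periods ahead; the forecast error grows with the forecast distance."""
    return MEAN_DEMAND * (1 + random.gauss(0, 0.05 * (h + 1)))

inventory, served, total_demand = 0.0, 0.0, 0.0
for t in range(PERIODS):
    # Re-plan over the rolling horizon; only the first-period decision is executed.
    plan = [min(CAPACITY, forecast(h)) for h in range(HORIZON)]
    production = plan[0]

    demand = max(0.0, random.gauss(MEAN_DEMAND, 10))   # realized demand
    inventory += production
    sold = min(inventory, demand)
    inventory -= sold
    served += sold
    total_demand += demand

print("service level over", PERIODS, "periods:", round(served / total_demand, 3))
```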

Zhang et al. (2009) investigate ways to increase the efficiency of organizational planning procedures in complex product development projects. In particular, they apply an agent-based simulation approach which considers human behavior, task networks, and organizational interdependencies. They generate a number of management insights. For example, they find that job rotation can be a satisfactory strategy for balancing workload. They also provide evidence that the process of team formation needs to consider not only the characteristics of team members but also the features of the product development process, and that, in order to plan efficiently, it is not necessarily required to keep all team members’ capacities at a high level. Furthermore, they show that several factors, such as the structure of task networks and task scheduling, influence planning performance simultaneously.

Cho and Eppinger (2005) are also concerned with complex design projects. In particular, they employ a stochastic simulation approach to investigate sequential, parallel, and overlapped tasks with iterations in a stochastic and resource-constrained environment, considering a multiplicity of factors such as information transfer patterns, uncertainty with respect to task durations, and resource interdependencies. Through extensive experimentation and analysis, they identify leverage points for process improvement and provide a set of results which can be utilized for decision support, leading to better project planning and control decisions.

Van Landeghem and Vanmaele (2002) focus on tactical sales and operations planning. Their stochastic simulation model considers risk in demand and supply chains. They introduce a planning approach (which includes stochastic simulation as a pivotal part) that aims at helping to cope with unforeseen events which lead to re-planning (that is, to reduce re-planning cycles) or imperil performance targets. Quite impressively, they report on the validation of their concept of robust planning within a worldwide enterprise over a period of three years. They argue that their approach is of particular interest for industries which are limited in their flexibility due to rigid cost structures or technical constraints.
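The stochastic core of such project simulations can be conveyed by a brief Monte Carlo sketch: uncertain task durations, an overlapped task, and a probabilistic rework loop are sampled repeatedly to obtain a distribution of project duration. The task structure, the triangular distributions, and the rework probability are our own illustrative assumptions and do not reproduce the models of Cho and Eppinger (2005) or Zhang et al. (2009).

```python
import random

random.seed(11)

RUNS = 10_000

def task_duration(low, mode, high):
    """Uncertain task duration drawn from a triangular distribution (a common modeling choice)."""
    return random.triangular(low, high, mode)   # note: random.triangular takes (low, high, mode)

def simulate_project():
    # Three design tasks; task 3 starts after 70% of task 2 is completed (overlapping, illustrative).
    t1 = task_duration(4, 5, 8)
    t2 = task_duration(6, 8, 12)
    t3 = task_duration(3, 4, 7)
    start_t3 = t1 + 0.7 * t2
    # Iteration: with 20% probability, part of task 2 has to be reworked.
    rework = task_duration(1, 2, 4) if random.random() < 0.2 else 0.0
    return max(t1 + t2, start_t3 + t3) + rework

durations = sorted(simulate_project() for _ in range(RUNS))
print("median project duration:", round(durations[RUNS // 2], 1))
print("90th-percentile duration:", round(durations[int(0.9 * RUNS)], 1))
```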

Balakrishnan and Sivaramakrishnan (2001) are concerned with product pricing and capacity management. For relatively simple settings, they formally investigate the extent of loss when capacity is planned on the basis of limited demand information and prices are subsequently adjusted. For more complex scenarios, they employ a stochastic simulation approach and find that relatively low losses occur. At the same time, they reason that as soon as the corridor for price adjustments is restricted, significant economic losses become likely. Touran and Lopez (2006) address a special problem of (long-term) project planning, namely budgeting for cost escalation. Based on a review of forecasting procedures for project cost escalation and on empirical data, they set up a stochastic simulation that considers uncertainty in terms of delays and different characterizations of escalation. By doing so, Touran and Lopez (2006) provide a simulation model that can be utilized as decision support in project planning.
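In the same vein, budgeting for cost escalation lends itself to a compact Monte Carlo illustration: an uncertain schedule delay and an uncertain annual escalation rate are sampled repeatedly, and budget figures are read off the resulting cost distribution. The base cost, the distributions, and the percentile chosen as a budget are illustrative assumptions of ours and are not taken from Touran and Lopez (2006).

```python
import random

random.seed(5)

RUNS = 10_000
BASE_COST = 50_000_000   # base cost estimate in current prices (illustrative)
PLANNED_YEARS = 3        # planned time until project completion

def simulate_escalated_cost():
    delay = max(0.0, random.gauss(1.0, 0.8))          # uncertain schedule delay in years
    rate = random.triangular(0.02, 0.08, 0.035)       # annual escalation rate (low, high, mode)
    years = PLANNED_YEARS + delay
    return BASE_COST * (1 + rate) ** years

costs = sorted(simulate_escalated_cost() for _ in range(RUNS))
print("median escalated cost (million):", round(costs[RUNS // 2] / 1e6, 1))
print("80th-percentile budget (million):", round(costs[int(0.8 * RUNS)] / 1e6, 1))
```

Reading the budget off a chosen percentile of the simulated cost distribution is one common way to translate such simulation output into a planning decision.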

6 Conclusion and future research opportunities

The preceding sections provide an overview of simulation-based research in the domain of MAC and of simulation studies from related fields of managerial science. The illustrative overview reveals that simulation-based studies contribute to understanding the effects of errors on the quality of accounting information, including interrelations among errors and their propagation in management accounting systems. By simulating (imperfect) information embedded in a broader organizational context that includes several organizational controls, further insights of relevance for the domain of MAC are derived. The reviewed studies put claims for (management accounting) information to be as perfect as possible into perspective, even without taking information costs into account, and might justify doubt about the claim that decision-making should always be as rational as possible. Moreover, the research efforts sketched provide evidence that some organizational structures and some settings of controls, in terms of reward systems, appear to be more robust against imperfect information and imperfect cognition of decision-makers. Furthermore, simulation-based studies contribute to the analysis of various forms of strategic decision-making in comparison to each other and in different contexts of complexity and turbulence. It appears worth mentioning that these issues would be rather demanding, not to say impossible, to investigate by analytical modeling due to intractability, as well as in empirical studies due to problems of data availability and the multitude of variables to be controlled for.

However, we argue that simulation as a research method in MAC offers more research opportunities than have, to the best of our knowledge, been fully exploited up to now.

For example, simulation allows for studying control mechanisms in organizations in which decision-makers do not necessarily have to be as gifted as typically assumed in neoclassical economics (cf., e.g., Leitner and Behrens 2014a). Relaxing these assumptions on agents’ cognitive capabilities makes it possible to study to what extent the principal findings of research conducted on the basis of these assumptions, for example via principal-agent models, hold when applied to settings with agents subject to more severe cognitive limitations (e.g., Axtell 2007; Davis et al. 2007; Leitner and Behrens 2014b, 2015; Guerrero and Axtell 2011).

Simulation also offers the opportunity to bridge the micro- and the macro-level in research on MAC. Recall that by “micro-level” we mean the level of individual decision-makers and the management accounting information and reward structures applied at that level, whereas “macro-level” denotes the overall MCS of an organization and overall organizational performance. Simulation models allow for investigating the aggregate performance (macro-level) of rather complex organizational settings as the result of intertwined decisions at the micro-level under the regime of different MCSs.

Moreover, simulation offers the opportunity to investigate complex and interrelated processes and emerging phenomena. In this sense, it could be interesting to study the emergence of MCSs within an organization, taking, for example, environmental turbulence, organizational change processes, or the learning capabilities of decision-makers into account. Hence, simulation could contribute to investigating the dynamics of MCSs in their environments.