Abstract
The use of simulation methods is not very common in accounting research, even though several authors have pointed to the advantages these methods offer in addressing accounting research questions. In this position paper, I discuss the difficulties encountered when applying simulation methods in accounting research. These roadblocks are: the problem of seeing the forest for the trees, the difficulty of designing the model and deciding which variables to include, the challenge of calibrating simulation models with relevant parameter values to ensure external validity, and the unfamiliarity of the accounting readership with simulation methods. For each obstacle, I offer practical advice, drawn from my experience as both an author and a reviewer, on how to overcome it.
Notes
I do not focus on the class of agent-based simulation models in my discussion. For a recent review of the application of such agent-based simulation models, in which a population of independent agents interacts according to predetermined rules, in the more general area of managerial science, please refer to Wall (2014). Most of the obstacles that I discuss also apply to this type of simulation model (except perhaps the first).
Simulation methods are well positioned for achieving high internal validity, although researchers typically do not report what they do to ensure such internal validity, in particular with respect to validation of the programming code. Since so much of the internal validity of a simulation study depends on the programming code being bug free, and since the proprietary nature of such coding work limits what other researchers can do to check that internal validity, good practice is to have multiple sets of eyes work through the code. Each co-author can also make an independent assessment of the quality of the code by working by hand through many well-chosen numerical examples. Furthermore, we may be able to import the practice of unit testing from computer science. A unit is the smallest testable part of a computer application, and unit tests verify that each such individual part of the code behaves correctly.
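As a minimal sketch of what unit testing might look like for simulation code, consider a hypothetical cost-allocation routine of the kind a costing simulation would call repeatedly. The function, its name, and the test values are invented for illustration and do not come from any published study; the point is that each test encodes a hand-worked numerical example or an invariant (here, that allocated costs sum to total resource costs).

```python
import unittest

def allocate_costs(resource_costs, consumption):
    """Allocate each resource's cost to products in proportion to
    their consumption of that resource (a simple two-stage allocation)."""
    n_products = len(consumption[0])
    product_costs = [0.0] * n_products
    for cost, usage in zip(resource_costs, consumption):
        total_usage = sum(usage)
        if total_usage == 0:
            continue  # an unused resource allocates nothing
        for j, u in enumerate(usage):
            product_costs[j] += cost * u / total_usage
    return product_costs

class TestAllocateCosts(unittest.TestCase):
    def test_no_cost_leakage(self):
        # Invariant: allocated costs must sum to total resource cost.
        costs = allocate_costs([100.0, 50.0], [[1, 1], [3, 1]])
        self.assertAlmostEqual(sum(costs), 150.0)

    def test_hand_worked_example(self):
        # 100 split 1:1 -> 50/50; 50 split 3:1 -> 37.5/12.5.
        costs = allocate_costs([100.0, 50.0], [[1, 1], [3, 1]])
        self.assertAlmostEqual(costs[0], 87.5)
        self.assertAlmostEqual(costs[1], 62.5)

    def test_unused_resource(self):
        # A resource nobody consumes allocates zero cost.
        self.assertEqual(allocate_costs([100.0], [[0, 0]]), [0.0, 0.0])

if __name__ == "__main__":
    unittest.main(exit=False)
```

Running the file executes all three tests; any code change that breaks the allocation invariant is caught immediately, which is exactly the kind of check that strengthens (and can be reported as strengthening) internal validity.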
Here too, we may be able to import techniques from Computer Science that test the brittleness of simulation models to systematic alterations of parameter values (e.g., Miller 1998).
Because numerical experiments are not constrained in the number of observations that can be simulated, even high-order interaction effects may appear significant in the statistical sense. In such large samples, other measures of effect size are more appropriate for assessing economic significance.
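A small numerical sketch of this point, with invented data that reproduce no published analysis: with hundreds of thousands of simulated observations, a mean difference of only 0.02 standard deviations produces a very large t-statistic, while a sample-size-free effect size such as Cohen's d correctly exposes it as economically negligible.

```python
import random
import statistics

random.seed(0)
n = 500_000
# Two simulated conditions differing by a tiny true effect (0.02 sd).
group_a = [random.gauss(0.00, 1.0) for _ in range(n)]
group_b = [random.gauss(0.02, 1.0) for _ in range(n)]

mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
sd_pooled = statistics.pstdev(group_a + group_b)

# The t-statistic grows with sqrt(n): "significant" at huge n.
se = (statistics.pvariance(group_a) / n
      + statistics.pvariance(group_b) / n) ** 0.5
t_stat = (mean_b - mean_a) / se

# Cohen's d does not grow with n and reveals the effect as trivial.
cohens_d = (mean_b - mean_a) / sd_pooled

print(f"t = {t_stat:.1f} (highly 'significant'), "
      f"d = {cohens_d:.3f} (negligible)")
```

The t-statistic here lands around 10, far beyond any conventional significance threshold, while d stays around 0.02, well below even the usual "small effect" benchmark of 0.2.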
As another example of model design choices, Balakrishnan et al. (2011) model a simple Leontief production environment because production technology is not the focus of their study. Christensen and Demski (1997), by contrast, are interested in assessing the biases of costing procedures under a Leontief versus a Cobb–Douglas production function and hence model both.
References
Balakrishnan, R., Hansen, S., & Labro, E. (2011). Evaluating heuristics used when designing product costing systems. Management Science, 57(3), 520–541.
Balakrishnan, R., & Penno, M. (2014). Causality in the context of analytical models and numerical experiments. Accounting, Organizations and Society, 39, 531–534.
Christensen, J., & Demski, J. S. (1997). Product costing in the presence of endogenous subcost functions. Review of Accounting Studies, 2, 65–87.
Drury, C., & Tayles, M. (2005). Explicating the design of overhead absorption procedures in UK organizations. British Accounting Review, 37, 47–84.
Grim, P., Rosenberger, R., Rosenfeld, A., Anderson, B., & Eason, R. E. (2013). How simulations fail. Synthese, 190(12), 2367–2390.
Harrison, J. R., Lin, Z., Carroll, G. R., & Carley, K. M. (2007). Simulation modeling in organizational and management research. Academy of Management Review, 32(4), 1229–1245.
Labro, E., & Vanhoucke, M. (2007). A simulation analysis of interactions among errors in costing systems. The Accounting Review, 82(4), 939–962.
Labro, E., & Vanhoucke, M. (2008). Diversity in resource consumption patterns and robustness of costing systems to errors. Management Science, 54(10), 1715–1730.
Miller, J. H. (1998). Active nonlinear tests (ANTs) of complex simulation models. Management Science, 44(6), 820–830.
Shields, M. D. (1995). An empirical analysis of firms’ implementation experiences with activity-based costing. Journal of Management Accounting Research, 7, 148–166.
Shim, E., & Sudit, E. (1995). How manufacturers price products. Management Accounting, 76(8), 37–39.
Stubben, S. R. (2010). Discretionary revenues as a measure of earnings management. The Accounting Review, 85(2), 695–717.
Wall, F. (2014). Agent-based modeling in managerial science: an illustrative survey and study. Review of Managerial Science, 1–59. doi:10.1007/s11846-014-0139-3
Additional information
Position Paper prepared for the Special Issue on Simulation in Management Accounting and Management Control of the Journal of Management Control.
I gratefully acknowledge financial support of the Kenan–Flagler Business School and the Latané Fund. Comments by Vic Anand, Ramji Balakrishnan and the editors of the special issue are highly appreciated. Opinions expressed are my own.
Cite this article
Labro, E. Using simulation methods in accounting research. J Manag Control 26, 99–104 (2015). https://doi.org/10.1007/s00187-015-0203-4