Abstract
Extensive exploration of simulation models comes at a high computational cost, all the more so when the model involves many parameters. Economists usually rely on random explorations, such as Monte Carlo simulations, and on basic econometric modeling to approximate the properties of computational models. This paper provides guidelines for a much more efficient method that combines a parsimonious sampling of the parameter space, using a specific design of experiments (DoE), with a well-suited metamodeling method first developed in geostatistics: kriging. We illustrate these guidelines by following them in the analysis of two simple and well-known economic models: Nelson and Winter’s industrial dynamics model, and a Cournot oligopoly with learning firms. In each case, we show that our DoE experiments capture the main effects of the parameters on the models’ dynamics with far fewer simulations than Monte Carlo sampling (e.g. 85 simulations instead of 2,000 in the first case). In the analysis of the second model, we also introduce supplementary numerical tools that can be combined with this method to characterize configurations complying with a specific criterion (social optimum, replication of stylized facts, etc.). Our appendix gives an example of the R code that can be used to apply this method to other models, in order to encourage other researchers to quickly test this approach on their own models.
Notes
More precisely, consider that the null hypothesis is rejected if the observed values of the variable of interest exceed a given value at a frequency higher than a given confidence threshold \(\alpha \le 1\). Running simulations until one obtains \(n \cdot \alpha \le n\) observations leading to the rejection of the null hypothesis is shown to have the same power as running all \(n\) simulations and assessing only afterwards whether the null hypothesis has to be rejected. The number of simulations can hence be reduced by a factor of up to \(\frac{1}{\alpha }\). A small amount of power can further be traded for a larger decrease in the number of simulations, and Silva et al. (2009) provide related estimates of the power loss.
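A minimal sketch of this sequential stopping rule in Python (the paper’s appendix uses R; the simulator, threshold, and parameter values here are purely illustrative, not the authors’ implementation):

```python
import random

def sequential_test(simulate, threshold, alpha=0.05, n=2000, seed=1):
    """Reject the null as soon as n * alpha simulated values exceed
    `threshold`, instead of always running all n simulations."""
    rng = random.Random(seed)
    needed = int(n * alpha)          # n * alpha rejecting observations
    exceedances = 0
    for runs in range(1, n + 1):
        if simulate(rng) > threshold:
            exceedances += 1
            if exceedances >= needed:
                return True, runs    # early rejection of the null
    return False, n                  # all n runs used, null not rejected

# Toy simulator: outputs exceed 0.5 about half of the time, so the
# required n * alpha = 100 exceedances arrive long before run 2,000.
reject, runs_used = sequential_test(lambda rng: rng.random(), 0.5)
```

When the null is clearly false, as in this toy case, the test stops after roughly \(n \cdot \alpha / p\) runs, where \(p\) is the exceedance probability, which is the source of the computational saving.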
Kriging models are named after Danie G. Krige, a South African mining engineer who developed them to improve ore evaluation techniques at the Witwatersrand reef complex in South Africa, pioneering the field of geostatistics; see Krige (1951). As for the statistical theory of DoE, it was developed in agriculture in the 1920s, for real, non-simulated experiments; see Fisher (1935).
The estimation of the meta-model is actually done through feasible GLS as the covariance matrix \(C\) is unknown and its parameters have to be estimated, see below.
There are two possible triplets of rows, (bac, cba, acb) and (abc, cab, bca), each of which can be permuted in \(3!=6\) different ways, so that one obtains \(2 \times 6=12\) possible configurations.
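The count of 12 can be checked by brute-force enumeration of the \(3 \times 3\) Latin squares on the symbols a, b, c (an illustrative check in Python, not part of the original design code):

```python
from itertools import permutations

# Enumerate all 3x3 Latin squares over the symbols a, b, c: each symbol
# must appear exactly once in every row and every column.
rows = list(permutations("abc"))
squares = [
    (r1, r2, r3)
    for r1 in rows
    for r2 in rows
    for r3 in rows
    if all(len({r1[j], r2[j], r3[j]}) == 3 for j in range(3))
]
print(len(squares))  # 12
```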
In that case, the meta-model refers to ordinary kriging, or to simple kriging if the mean is known, as opposed to universal kriging in the more general case, which is presented above.
Recall that kriging is an exact interpolator, so that the \(R^{2}\) coefficient cannot be computed.
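To see why the \(R^{2}\) is unavailable, a toy simple-kriging predictor (Gaussian kernel, known zero mean; the design points and values below are illustrative, not taken from the paper) reproduces each observed output exactly at the design points, so the in-sample residuals are identically zero:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel(x, y, theta=1.0):
    """Gaussian covariance kernel."""
    return math.exp(-((x - y) ** 2) / (2 * theta ** 2))

def simple_krige(xs, ys, x0):
    """Simple kriging predictor with a known zero mean: weights solve
    K w = k(x0), and at a design point k(x0) is a column of K, so the
    weights pick out the observed value exactly."""
    K = [[kernel(xi, xj) for xj in xs] for xi in xs]
    w = solve(K, [kernel(xi, x0) for xi in xs])
    return sum(wi * yi for wi, yi in zip(w, ys))

xs, ys = [0.0, 1.0, 2.5], [1.3, -0.2, 0.7]
# The predictor passes exactly through the observation at x = 1.0.
print(abs(simple_krige(xs, ys, 1.0) - (-0.2)) < 1e-9)  # True
```

With zero residuals at every design point, the usual residual-based \(R^{2}\) is degenerate, which is why predictive quality has to be assessed out of sample instead.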
In practice, either the factors take a finite set of values, and the ANOVA is performed using the standard formulae of multivariate analysis of variance with discrete factors [see, for instance, Frey and Patil (2002)]; or the factors are defined over a continuous domain, and the experimental domain has to be discretized in order to apply these formulae. In that case, predictions of the response through the meta-model are evaluated over a \(k\)-dimensional grid [see Welch et al. (1992); Saltelli et al. (1999)].
See also Nelson and Winter (1978) for an extensive presentation and discussion of the model. In this paper, we only use this model as a simple example, in order to apply the method previously developed. We adopt values used in the original model for the parameters that we do not include in our experiments.
We also consider a \(10,000\)-simulation Monte Carlo sample for robustness checks.
The R Development Core Team (2013) software can also be used, but the package effects, which computes ANOVA marginal effects, is not directly connected to the DiceKriging package, which performs the kriging estimation; the modeler then has to use the package sensitivity, which delivers less detailed results [see Roustant et al. (2010)].
These figures are built using the principles given in Sub-subsection 2.3.5.
We consider that we capture these effects in a robust way if they appear as significant in each of the \(100\) regressions obtained from \(100\) random sets of \(2,000\) simulations.
The complete code used in this section is provided in Appendix 4.
Higher-order polynomials would involve too many parameters to be estimated with only 33 observations.
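A rough coefficient count makes this constraint concrete; the number of factors \(k = 6\) used below is a hypothetical value for illustration, not taken from the paper:

```python
from math import comb

# A full polynomial of degree d in k factors has C(k + d, d) coefficients
# (all monomials of total degree <= d, intercept included). With k = 6
# factors, a quadratic already needs 28 coefficients and a cubic 84,
# which 33 observations cannot identify.
k = 6
terms = {d: comb(k + d, d) for d in (1, 2, 3)}
print(terms)  # {1: 7, 2: 28, 3: 84}
```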
It should be noted that applying sensitivity analysis to the other forms of kriging identifies the same determinants, which indicates that the overall picture of the meta-model is not sensitive to the specification.
See also Salle et al. (2012) for an application of this function to the minimization of a Central Bank’s loss function in a macroeconomic agent-based model.
References
Besag, J., & Clifford, P. (1991). Sequential Monte Carlo p-values. Biometrika, 78(2), 301–304.
Booth, J., & Butler, R. (1999). An importance sampling algorithm for exact conditional tests in log-linear models. Biometrika, 86(2), 321–332.
Box, G., & Draper, N. (1987). Empirical model building and response surfaces. New York: Wiley.
SAS Institute Inc. (2010). JMP® 9 modeling and multivariate methods. Cary, NC: SAS Institute Inc.
Cioppa, T. (2002). Efficient nearly orthogonal and space-filling experimental designs for high-dimensional complex models. Doctoral dissertation in operations research, Naval Postgraduate School.
Durrande, N., Ginsbourger, D., & Roustant, O. (2012). Additive covariance kernels for high-dimensional Gaussian process modeling. Annales de la Faculté des Sciences de Toulouse, 21(3), 481–499.
Fang, K., Lin, D., Winker, P., & Zhang, Y. (2000). Uniform design: theory and application. Technometrics, 42(3), 237–248.
Fisher, R. A. (1935). The design of experiments (9th ed.). New York: Macmillan.
Frey, C., & Patil, S. (2002). Identification and review of sensitivity analysis methods. Risk Analysis, 22(3), 553.
Goupy, J., & Creighton, L. (2007). Introduction to design of experiments with JMP examples (3rd ed.). Cary: SAS Institute Inc.
Herbst, E., & Schorfheide, F. (2013). Sequential Monte Carlo sampling for DSGE models. Working Paper 19152, National Bureau of Economic Research.
Iman, R., & Helton, J. (1988). An investigation of uncertainty and sensitivity analysis techniques for computer models. Risk Analysis, 8, 71–90.
Jeong, S., Murayama, M., & Yamamoto, K. (2005). Efficient optimization design method using kriging model. Journal of Aircraft, 42, 413–420.
Jourdan, A. (2005). Planification d’expériences numériques. Revue MODULAD, 33, 63–73.
Krige, D. G. (1951). A statistical approach to some basic mine valuation problems on the Witwatersrand. Journal of the Chemical, Metallurgical and Mining Society of South Africa, 52(6), 119–139.
Masters, T. (1993). Practical neural network recipes in C++. New York: Academic Press.
Matheron, G. (1963). Principles of geostatistics. Economic Geology, 58, 1246.
Mebane, W. J., & Sekhon, J. (2011). Genetic optimization using derivatives: the rgenoud package for R. Journal of Statistical Software, 42(11), 1–26.
Miller, J., & Page, S. (2007). Complex adaptive systems. Princeton: Princeton University Press.
Nelson, R. R., & Winter, S. G. (1978). Forces generating and limiting concentration under Schumpeterian competition. Bell Journal of Economics, 9(2), 524–548.
Nelson, R. R., & Winter, S. G. (1982). The Schumpeterian tradeoff revisited. American Economic Review, 72(1), 114–132.
Oeffner, M. (2008). Agent-based Keynesian macroeconomics—an evolutionary model embedded in an agent-based computer simulation. Doctoral dissertation, Bayerische Julius-Maximilians-Universität, Würzburg.
R Development Core Team. (2013). R: A language and environment for statistical computing, R Foundation for statistical computing, Vienna. ISBN 3-900051-07-0. http://www.R-project.org
Roustant, O., Ginsbourger, D., & Deville, Y. (2010). DiceKriging, DiceOptim: Two R packages for the analysis of computer experiments by kriging-based metamodeling and optimization. Journal of Statistical Software, 55(2), 100.
Sacks, J., Welch, W., Mitchell, T., & Wynn, H. (1989). Design and analysis of computer experiments. Statistical Science, 4(4), 409.
Salle, I., Sénégas, M., & Yıldızoğlu, M. (2012). How transparent should a Central Bank be? An ABM assessment. Mimeo, Bordeaux University, April.
Saltelli, A., Tarantola, S., & Chan, K. (1999). A quantitative model-independent method for global sensitivity analysis of model output. Technometrics, 41(1), 39–56.
Sanchez, S. M. (2005). Work smarter, not harder: Guidelines for designing simulation experiments. In M. E. Kuhl, N. M. Steiger, F. B. Armstrong, & J. A. Joines (Eds.), Proceedings of the 2005 Winter Simulation Conference. Software available at http://harvest.nps.edu/linkedfiles/nolhdesigns_v4.xls
Silva, I., Assuncao, R., & Costa, M. (2009). Power of the sequential Monte Carlo test. Sequential Analysis, 28(2), 163–174.
Tesfatsion, L., & Judd, K. L. (Eds.). (2006). Handbook of computational economics. Agent-based computational economics (Vol. 2). Amsterdam: North-Holland.
Vallée, T., & Yıldızoğlu, M. (2009). Convergence in the finite Cournot oligopoly with social and individual learning. Journal of Economic Behavior & Organization, 72(2), 670–690.
van Beers, W., & Kleijnen, J. (2004). Kriging interpolation in simulation: a survey. In R. G. Ingalls, M. D. Rossetti, J. S. Smith, & B. A. Peters (Eds.), Proceedings of the 2004 Winter Simulation Conference.
Wang, G., & Shan, S. (2007). Review of metamodeling techniques in support of engineering design optimization. Journal of Mechanical Design, 129, 370.
Welch, W. J., Buck, R. J., Sacks, J., Wynn, H. P., Mitchell, T. J., & Morris, M. D. (1992). Screening, predicting, and computer experiments. Technometrics, 34(1), 15–25.
Ye, K. (1998). Orthogonal column Latin hypercubes and their application in computer experiments. Journal of the American Statistical Association, 93(444), 1430–1439.
Yıldızoğlu, M. (2001). Connecting adaptive behaviour and expectations in models of innovation: The potential role of artificial neural networks. European Journal of Economic and Social Systems, 15(3), 51–65.
Yıldızoğlu, M., Sénégas, M.-A., & Zumpe, M. (2012). Learning the optimal buffer-stock consumption rule of Carroll. Macroeconomic Dynamics, 5, 255.
Acknowledgments
We are grateful to two anonymous referees for their comments and suggestions, and to the participants of the Lipari summer school on “Data mining and modelling of complex techno-socio-economic systems” (July 2012, Italy) for useful comments and discussions. We are responsible for all remaining errors.
Cite this article
Salle, I., Yıldızoğlu, M. Efficient Sampling and Meta-Modeling for Computational Economic Models. Comput Econ 44, 507–536 (2014). https://doi.org/10.1007/s10614-013-9406-7