Validation Metrics: A Case for Pattern-Based Methods

  • Robert E. Marks
Part of the Simulation Foundations, Methods and Applications book series (SFMA)


Abstract

This chapter discusses how to choose the best computer model for simulating a real-world phenomenon by validating the model’s output against historical, real-world data. Four families of techniques used in validation are discussed. The first is based on comparing statistical summaries of the historical data and of the model output. The second applies where the models and data are stochastic, so that distributions of variables must be compared, with a metric used to measure their closeness. After exploring the desirable properties of such a measure, the chapter compares the third and fourth methods, drawn from information theory, for measuring the closeness of patterns, using an example from strategic market competition. The techniques can, however, be used to validate computer models in any domain.
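As a minimal sketch of the second family of techniques (not code from the chapter itself): when model and data are stochastic, one can bin both into discrete distributions over a shared support and score their closeness with a metric such as the Kullback–Leibler divergence. The bin probabilities below are hypothetical.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits, for discrete
    distributions given as probability lists over the same bins.
    Terms with p_i = 0 contribute nothing; q_i must be > 0 wherever
    p_i > 0, or the divergence is undefined (infinite)."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0.0:
            total += pi * math.log2(pi / qi)
    return total

# Hypothetical binned frequencies: historical data vs. two candidate models.
historical = [0.10, 0.40, 0.30, 0.20]
model_a    = [0.15, 0.35, 0.30, 0.20]
model_b    = [0.40, 0.10, 0.10, 0.40]

# The candidate whose output distribution diverges least from the
# historical distribution is preferred.
print(kl_divergence(historical, model_a))  # small: close fit
print(kl_divergence(historical, model_b))  # larger: poor fit
```

Note that the K-L divergence is asymmetric and so not a true metric; the chapter's discussion of desirable properties of a closeness measure bears directly on such choices.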


Keywords: Model validation · State Similarity Measure · Area Validation Metric · Generalized Hartley metric



Acknowledgments

I should like to thank Dan MacKinlay for his mention of the K-L information loss measure, Arthur Ramer for his mention of the Hartley or U-uncertainty metric and his suggestions, and Vessela Daskalova for her mention of the “cityblock” metric. The efforts of the editors of this volume and anonymous referees were very constructive, and have greatly improved this chapter’s presentation.
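Two of the measures mentioned above can be illustrated in a few lines (a sketch under my own assumptions, not code from the chapter): the “cityblock” (L1 or taxicab) distance between two binned distributions, and the classical Hartley measure of the uncertainty of a finite set of equally possible outcomes, log2 of the set's size.

```python
import math

def cityblock(p, q):
    """L1 ('cityblock' or taxicab) distance between two vectors,
    e.g. binned probability distributions over the same support."""
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

def hartley(n_outcomes):
    """Hartley measure of a set of n equally possible outcomes,
    in bits: log2(n)."""
    return math.log2(n_outcomes)

# Hypothetical binned frequencies, as before.
print(cityblock([0.10, 0.40, 0.30, 0.20], [0.15, 0.35, 0.30, 0.20]))  # ~0.1
print(hartley(8))  # 3.0 bits
```

Unlike the K-L divergence, the cityblock distance is symmetric and satisfies the triangle inequality; the generalized Hartley metric discussed in the chapter extends the Hartley measure beyond the equiprobable case.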


References

  1. Akaike, H. (1973). Information theory as an extension of the maximum likelihood principle. In B. N. Petrov & F. Csaki (Eds.), Second International Symposium on Information Theory (pp. 267–281). Budapest: Akademiai Kiado.
  2. Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach (2nd ed.). New York: Springer.
  3. Chen, S.-H., Chang, C.-L., & Du, Y.-R. (2012). Agent-based economic models and econometrics. The Knowledge Engineering Review, 27(2), 187–219.
  4. Fagiolo, G., Moneta, A., & Windrum, P. (2007). A critical guide to empirical validation of agent-based models in economics: Methodologies, procedures, and open problems. Computational Economics, 30(3), 195–226.
  5. Fagiolo, G., Guerini, M., Lamperti, F., Moneta, A., & Roventini, A. (2019). Validation of agent-based models in economics and finance (pp. 763–787).
  6. Ferson, S., Oberkampf, W. L., & Ginzburg, L. (2008). Model validation and predictive capability for the thermal challenge problem. Computer Methods in Applied Mechanics and Engineering, 197, 2408–2430.
  7. Gilbert, N., & Troitzsch, K. G. (2005). Simulation for the social scientist (2nd ed.). Open University Press.
  8. Guerini, M., & Moneta, A. (2017). A method for agent-based models validation. Journal of Economic Dynamics & Control, 82, 125–141.
  9. Hartley, R. V. L. (1928). Transmission of information. The Bell System Technical Journal, 7(3), 535–563.
  10. Klir, G. J. (2006). Uncertainty and information: Foundations of generalized information theory. New York: Wiley.
  11. Krause, E. F. (1986). Taxicab geometry: An adventure in non-Euclidean geometry. New York: Dover. (First published by Addison-Wesley in 1975.)
  12. Kullback, S., & Leibler, R. A. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22, 79–86.
  13. Lamperti, F. (2018a). An information theoretic criterion for empirical validation of simulation models. Econometrics and Statistics, 5, 83–106.
  14. Lamperti, F. (2018b). Empirical validation of simulated models through the GSL-div: An illustrative application. Journal of Economic Interaction and Coordination, 13, 143–171.
  15. Lin, J. (1991). Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1), 145–151.
  16. Liu, Y., Chen, W., Arendt, P., & Huang, H.-Z. (2010). Towards a better understanding of model validation metrics. In 13th AIAA/ISSMO Multidisciplinary Analysis Optimization Conference.
  17. Mankin, J. B., O’Neill, R. V., Shugart, H. H., & Rust, B. W. (1977). The importance of validation in ecosystem analysis. In G. S. Innis (Ed.), New directions in the analysis of ecological systems, Part 1, Simulation Council Proceedings Series (Vol. 5, pp. 63–71). La Jolla, California: Simulation Councils. Reprinted in H. H. Shugart & R. V. O’Neill (Eds.), Systems ecology (pp. 309–317). Stroudsburg, Pennsylvania: Dowden, Hutchinson and Ross.
  18. Marks, R. E. (1992). Breeding hybrid strategies: Optimal behaviour for oligopolists. Journal of Evolutionary Economics, 2, 17–38.
  19. Marks, R. E. (2007). Validating simulation models: A general framework and four applied examples. Computational Economics, 30(3), 265–290.
  20. Marks, R. E. (2010). Comparing two sets of time-series: The state similarity measure. In 2010 Joint Statistical Meetings Proceedings-Statistics: A Key to Innovation in a Data-centric World, Statistical Computing Section (pp. 539–551). Alexandria, Virginia: American Statistical Association.
  21. Marks, R. E. (2013). Validation and model selection: Three similarity measures compared. Complexity Economics, 2(1), 41–61.
  22. Marks, R. E. (2016). Monte Carlo. In D. Teece & M. Augier (Eds.), The Palgrave encyclopedia of strategic management. London: Palgrave.
  23. Marks, R. E., Midgley, D. F., & Cooper, L. G. (1995). Adaptive behavior in an oligopoly. In J. Biethahn & V. Nissen (Eds.), Evolutionary algorithms in management applications (pp. 225–239). Berlin: Springer.
  24. Midgley, D. F., Marks, R. E., & Cooper, L. G. (1997). Breeding competitive strategies. Management Science, 43(3), 257–275.
  25. Midgley, D. F., Marks, R. E., & Kunchamwar, D. (2007). The building and assurance of agent-based models: An example and challenge to the field. Journal of Business Research, 60(8), 884–893. (Special issue: Complexities in Markets.)
  26. Oberkampf, W. L., & Roy, C. J. (2010). Model accuracy assessment (Chapter 12). In Verification and validation in scientific computing (pp. 469–554). Cambridge: Cambridge University Press.
  27. Ramer, A. (1989). Conditional possibility measures. International Journal of Cybernetics and Systems, 20, 233–247. Reprinted in D. Dubois, H. Prade, & R. R. Yager (Eds.) (1993), Readings in fuzzy sets for intelligent systems (pp. 233–240). San Mateo, California: Morgan Kaufmann.
  28. Rényi, A. (1970). Probability theory. Amsterdam: North-Holland. (Chapter 9, Introduction to information theory, pp. 540–616.)
  29. Roy, C. J., & Oberkampf, W. L. (2011). A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Computer Methods in Applied Mechanics and Engineering, 200, 2131–2144.
  30. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423, 623–656.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Economics, University of New South Wales, Sydney, Australia