Monte Carlo Simulation

  • Surendra P. Verma
Chapter

Abstract

Monte Carlo simulation (named after the Monte Carlo casino in Monaco) is an important computational technique for improving statistical tests and many other applications, and it has progressed steadily with advances in computer hardware and software. This chapter covers those aspects that have helped us improve numerous statistical tests (Chaps. 5 and 6) for the handling of experimental data; Law and Kelton (Simulation modeling and analysis. McGraw-Hill, Boston, 2000) is an excellent source of further information on simulation and modelling. A highly precise and accurate Monte Carlo procedure has been developed, which is unique in the sense that the precision and accuracy of the simulated results can themselves be calculated. Up to 190 independent chains of identically and independently distributed normal random variates could thus be generated. Guidance is also provided for carrying out experiments in a random sequence, a fundamental statistical requirement for the optimisation of experiments. The new Monte Carlo procedure has been used to simulate precise and accurate critical values for discordancy and significance tests. The evaluation of test performance is another subject dealt with in this chapter. All earlier Monte Carlo simulations of test performance, and of central-tendency and dispersion parameters, are shown to be in error because they do not closely represent any actual experiment. Our correct, precise and accurate simulation procedure shows that the best estimators of central tendency and dispersion are the outlier-based mean and standard deviation, provided they are calculated from data arrays that have been freed of discordant outliers. This finding is contrary to the general belief in the superiority of robust parameters.
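The chapter's validated procedure is not reproduced in this abstract, but the central idea, drawing large numbers of normal samples and reading critical values off the empirical distribution of a test statistic, can be illustrated with a minimal sketch. The sketch below assumes NumPy's MT19937 generator (the Mersenne Twister of reference 24) and a hypothetical function name simulate_grubbs_critical_values; the single-outlier Grubbs statistic is used only as an example and the code is not the author's procedure.

```python
import numpy as np

def simulate_grubbs_critical_values(n, replications=100_000,
                                    confidence_levels=(0.95, 0.99),
                                    seed=12345):
    """Estimate critical values of the single-outlier statistic
    G = max|x_i - xbar| / s for samples of size n drawn from a standard
    normal population (illustrative sketch, not the chapter's procedure)."""
    # Mersenne Twister (MT19937) uniform stream, converted to normal variates
    rng = np.random.Generator(np.random.MT19937(seed))
    x = rng.standard_normal((replications, n))
    g = (np.max(np.abs(x - x.mean(axis=1, keepdims=True)), axis=1)
         / x.std(axis=1, ddof=1))
    # critical values are upper quantiles of the simulated distribution of G
    return {cl: float(np.quantile(g, cl)) for cl in confidence_levels}

if __name__ == "__main__":
    # approximate 95% and 99% critical values for samples of size 10
    print(simulate_grubbs_critical_values(n=10))
```

The same generator can also randomise the run order of laboratory measurements, e.g. with rng.permutation(number_of_runs), which is one way to meet the random-sequence requirement mentioned above.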
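The comparison of central-tendency estimators can be sketched in the same spirit: contaminate simulated normal samples with a few shifted values, screen them with a discordancy test, and compare the outlier-free mean against a robust alternative such as the median. The function name compare_location_estimators, the contamination model and the 95% screening level below are assumptions for illustration only, not the evaluation design used in the chapter.

```python
import numpy as np

def compare_location_estimators(n=20, n_outliers=1, shift=4.0,
                                replications=20_000, seed=2020):
    """RMSE (about the true mean 0) of two location estimators on
    contaminated normal samples: the mean after iterative removal of
    values flagged by the single-outlier Grubbs statistic, and the
    sample median. Illustrative sketch only."""
    rng = np.random.Generator(np.random.MT19937(seed))

    def g_crit(m, reps=50_000):
        # 95% critical value of G for sample size m, simulated on the fly
        z = rng.standard_normal((reps, m))
        g = (np.max(np.abs(z - z.mean(axis=1, keepdims=True)), axis=1)
             / z.std(axis=1, ddof=1))
        return np.quantile(g, 0.95)

    crit = {m: g_crit(m) for m in range(4, n + 1)}

    err_trimmed, err_median = [], []
    for _ in range(replications):
        x = rng.standard_normal(n)
        x[:n_outliers] += shift                 # contaminate a few observations
        y = x.copy()
        while y.size > 3:                       # iterative Grubbs screening
            g = np.abs(y - y.mean()) / y.std(ddof=1)
            i = int(g.argmax())
            if g[i] > crit[y.size]:
                y = np.delete(y, i)
            else:
                break
        err_trimmed.append(y.mean())
        err_median.append(np.median(x))

    def rmse(errors):
        return float(np.sqrt(np.mean(np.square(errors))))

    return {"outlier-free mean": rmse(err_trimmed), "median": rmse(err_median)}

if __name__ == "__main__":
    print(compare_location_estimators())
```

Because the screened mean retains nearly all of the clean observations while the median carries a larger sampling variance, a sketch of this kind tends to favour the outlier-free mean, in the direction of the chapter's conclusion; the abstract's claim itself rests on the author's far more detailed simulations.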

References

  1. Barnett, V., & Lewis, T. (1994). Outliers in statistical data. Chichester: Wiley.Google Scholar
  2. Bevington, P. R. (1969). Data reduction and error analysis for the physical sciences. New York: Mc-Graw Hill Book Company.Google Scholar
  3. Bevington, P. R., & Robinson, D. K. (2003). Data reduction and error analysis for the physical sciences. Boston: McGraw Hill.Google Scholar
  4. Box, G. E. P., & Muller, M. E. (1958). A note on the generation of random normal deviates. Annales Mathematics Statistica, 29, 610–611.CrossRefGoogle Scholar
  5. Butler, J. C. (1979). Trends in ternary petrologic variation diagrams—Fact or fantasy? American Mineralogist, 64, 1115–1121.
  6. Cruz-Huicochea, R., & Verma, S. P. (2013). New critical values for F and their use in the ANOVA and Fisher’s F tests for evaluating geochemical reference material granite G-2 (U.S.A.) and igneous rocks from the Eastern Alkaline Province (Mexico). Journal of Iberian Geology, 39, 13–30.
  7. Davies, P. L. (1988). Statistical evaluation of interlaboratory tests. Fresenius Zeitschrift für Analytische Chemie, 331, 513–519.
  8. Dixon, W. J. (1950). Analysis of extreme values. The Annals of Mathematical Statistics, 21, 488–506.
  9. Dixon, W. J. (1951). Ratios involving extreme values. The Annals of Mathematical Statistics, 22, 68–78.
  10. Dixon, W. J. (1953). Processing data for outliers. Biometrics, 9, 74–89.
  11. Doornik, J. A. (2005). An improved ziggurat method to generate normal random samples. University of Oxford.
  12. Grubbs, F. E. (1950). Sample criteria for testing outlying observations. The Annals of Mathematical Statistics, 21, 27–58.
  13. Grubbs, F. E., & Beck, G. (1972). Extension of sample sizes and percentage points for significance tests of outlying observations. Technometrics, 14, 847–854.
  14. Hayes, K., & Kinsella, T. (2003). Spurious and non-spurious power in performance criteria for tests of discordancy. The Statistician, 52, 69–82.
  15. Jain, R. B. (1981a). Detecting outliers: Power and some other considerations. Communications in Statistics—Theory and Methods, 10, 2299–2314.
  16. Jain, R. B. (1981b). Percentage points of many-outlier detection procedures. Technometrics, 23, 71–75.
  17. Kendall, M. G., & Babington Smith, B. (1938). Randomness and random sampling numbers. Journal of the Royal Statistical Society, 101, 147–166.
  18. Kinderman, A. J., & Ramage, J. G. (1976). Computer generation of normal random variables. Journal of the American Statistical Association, 71, 893–896.
  19. Law, A. M., & Kelton, W. D. (2000). Simulation modeling and analysis. Boston: McGraw-Hill.
  20. Maronna, R. A., & Zamar, R. H. (2002). Robust estimates of location and dispersion for high-dimensional datasets. Technometrics, 44, 307–317.
  21. Marsaglia, G. (1968). Random numbers fall mainly in the planes. Proceedings of the National Academy of Sciences USA, 61, 25–28.
  22. Marsaglia, G., & Bray, T. A. (1964). A convenient method for generating normal variables. SIAM Review, 6, 260–264.CrossRefGoogle Scholar
  23. Marsaglia, G., & Tsang, W. W. (2000). The ziggurat method for generating random variables. Journal of the Geological Society of London, 5, 1–7.Google Scholar
  24. Matsumoto, M., & Nishimura, T. (1998). Mersenne twister; A 623-dimensionally equidistributed uniform pseudorandom number generator. Association for Computing Machinery, ACM Transactions of Modelling and Computer Simulations, 8, 3–30.CrossRefGoogle Scholar
  25. Rosales Rivera, M. (2018). Desarrollo de herramientas estadísticas computacionales con nuevos valores críticos generados por simulación computacional. Instituto de Investigación en Ciencias Básicas y Aplicadas, Centro de Investigación en Ciencias (pp. 105). Cuernavaca, Morelos, Mexico: Universidad Autónoma del Estado de Morelos.
  26. Rosales-Rivera, M., Díaz-González, L., & Verma, S. P. (2014). Comparative performance of thirteen single outlier discordancy tests from Monte Carlo simulations. In IAMG16: Geostatistical and Geospatial Approaches for the Characterization of Natural Resources in the Environment: Challenges, Processes and Strategies (pp. 4). New Delhi: International Association of Mathematical Geology.
  27. Rosales-Rivera, M., Díaz-González, L., & Verma, S. P. (2018). A new online computer program (BiDASys) for ordinary and uncertainty weighted least-squares linear regressions: Case studies from food chemistry. Revista Mexicana de Ingeniería Química, 17, 507–522.
  28. Rosales-Rivera, M., Díaz-González, L., & Verma, S. P. (2019). Evaluation of nine USGS reference materials for quality control through Univariate Data Analysis System, UDASys3. Arabian Journal of Geosciences, 12, 40. https://doi.org/10.1007/s12517-018-4220-0.
  29. Rosner, B. (1975). On the detection of many outliers. Technometrics, 17, 221–227.
  30. Verma, S. P. (2005). Estadística básica para el manejo de datos experimentales: aplicación en la Geoquímica (Geoquimiometría). México, D.F.: UNAM.
  31. Verma, S. P. (2012). Geochemometrics. Revista Mexicana de Ciencias Geológicas, 29, 276–298.
  32. Verma, S. P. (2015a). Present state of knowledge and new geochemical constraints on the central part of the Mexican Volcanic Belt and comparison with the Central American Volcanic Arc in terms of near and far trench magmas. Turkish Journal of Earth Sciences, 24, 399–460.
  33. Verma, S. P. (2015b). Monte Carlo comparison of conventional ternary diagrams with new log-ratio bivariate diagrams and an example of tectonic discrimination. Geochemical Journal, 49, 393–412.
  34. Verma, S. P. (2016). Análisis estadístico de datos composicionales. CDMX: Universidad Nacional Autónoma de México.
  35. Verma, S. P., & Cruz-Huicochea, R. (2013). Alternative approach for precise and accurate Student’st critical values and application in geosciences. Journal of Iberian Geology, 39, 31–56.Google Scholar
  36. Verma, S. P., & Quiroz-Ruiz, A. (2006a). Critical values for six Dixon tests for outliers in normal samples up to sizes 100, and applications in science and engineering. Revista Mexicana de Ciencias Geológicas, 23, 133–161.Google Scholar
  37. Verma, S. P., & Quiroz-Ruiz, A. (2006b). Critical values for 22 discordancy test variants for outliers in normal samples up to sizes 100, and applications in science and engineering. Revista Mexicana de Ciencias Geológicas, 23, 302–319.Google Scholar
  38. Verma, S. P., & Quiroz-Ruiz, A. (2008). Critical values for 33 discordancy test variants for outliers in normal samples of very large sizes from 1,000 to 30,000 and evaluation of different regression models for the interpolation of critical values. Revista Mexicana de Ciencias Geológicas, 25, 369–381.Google Scholar
  39. Verma, S. P., & Quiroz-Ruiz, A. (2011). Corrigendum to critical values for 22 discordancy test variants for outliers in normal samples up to sizes 100, and applications in science and engineering [Revista Mexicana de Ciencias Geológicas, 23, 302–319 (2006)]. Revista Mexicana de Ciencias Geológicas, 28, 202.Google Scholar
  40. Verma, S. P., Quiroz-Ruiz, A., & Díaz-González, L. (2008). Critical values for 33 discordancy test variants for outliers in normal samples up to sizes 1000, and applications in quality control in Earth Sciences. Revista Mexicana de Ciencias Geológicas, 25, 82–96.Google Scholar
  41. Verma, S. P., Díaz-González, L., Rosales-Rivera, M., & Quiroz-Ruiz, A. (2014). Comparative performance of four single extreme outlier discordancy tests from Monte Carlo simulations. Scientific World Journal, 2014, 27. Article ID 746451.  https://doi.org/10.1155/2014/746451.Google Scholar
  42. Verma, S. P., Díaz-González, L., Pérez-Garza, J. A., & Rosales-Rivera, M. (2016). Quality control in geochemistry from a comparison of four central tendency and five dispersion estimators and example of a geochemical reference material. Arabian Journal of Geosciences, 9, 740.CrossRefGoogle Scholar
  43. Verma, S. P., Rosales-Rivera, M., Díaz-González, L., & Quiroz-Ruiz, A. (2017a). Improved composition of Hawaiian basalt BHVO-1 from the application of two new and three conventional recursive discordancy tests. Turkish Journal of Earth Sciences, 26, 331–353.CrossRefGoogle Scholar
  44. Verma, S. P., Díaz-González, L., Pérez-Garza, J. A., & Rosales-Rivera, M. (2017b). Erratum to: Quality control in geochemistry from a comparison of four central tendency and five dispersion estimators and example of a geochemical reference material. Arabian Journal of Geosciences, 10, 24.CrossRefGoogle Scholar
  45. Verma, S. P., Verma, S. K., Rivera-Gómez, M. A., Torres-Sánchez, D., Díaz-González, L., Amezcua-Valdez, A. … Pandarinath, K. (2018). Statistically coherent calibration of X-ray fluorescence spectrometry for major elements in rocks and minerals. Journal of Spectroscopy, 2018, 13. Article ID 5837214.  https://doi.org/10.1155/2018/5837214.CrossRefGoogle Scholar
  46. Verma, S. P., Rosales-Rivera, M., Rivera-Gómez, M. A., & Verma, S. K. (2019). Comparison of matrix-effect corrections for ordinary and uncertainty weighted linear regressions and determination of major element mean concentrations and total uncertainties of 62 international geochemical reference materials from wavelength-dispersive X-ray fluorescence spectrometry. In Colloquium Spectroscopicum Internationale XLI (CSI XLI) and I Latin-American Meeting on Laser Induced Breakdown Spectroscopy (I LAMLIBS). Mexico City.Google Scholar

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Instituto de Energías Renovables, Universidad Nacional Autónoma de México, Temixco, Mexico
