Summary
Scientific learning is seen as an iterative process employing Criticism and Estimation. The sampling-theory use of predictive distributions for model criticism is examined, along with its implications for significance tests and the theory of precise measurement. Normal-theory examples and ridge estimates are considered. Predictive checking functions for transformation, serial correlation, and bad values are reviewed, as is their relation to Bayesian options. Robustness is viewed from a Bayesian standpoint and examples are given. The bad-value problem is also considered, and a comparison with M-estimators is made.
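The predictive checking idea summarized above can be illustrated with a minimal sketch (not from the paper): under an entertained model, a checking function of the observed data is referred to its predictive distribution, and a small tail probability signals model inadequacy. Here the assumed model is y<sub>i</sub> ~ N(θ, 1) with prior θ ~ N(0, τ²), and the checking function is the sample variance; all names and numbers are illustrative assumptions.

```python
# Hedged sketch of a predictive model check in the spirit of Box's
# model criticism (illustrative; not the paper's notation or example).
# Model entertained: y_i ~ N(theta, 1), prior theta ~ N(0, tau^2).
# Checking function: the sample variance of y, referred to its
# prior predictive distribution by simulation.
import numpy as np

rng = np.random.default_rng(0)

def predictive_p_value(y, tau=2.0, n_sim=20_000):
    """Tail probability of the observed sample variance under the
    prior predictive distribution of the assumed model."""
    n = len(y)
    obs = np.var(y, ddof=1)
    # Simulate replicated data sets from the prior predictive:
    theta = rng.normal(0.0, tau, size=n_sim)             # parameter draws
    y_rep = rng.normal(theta[:, None], 1.0, (n_sim, n))  # data given theta
    sim = np.var(y_rep, axis=1, ddof=1)
    # Small tail probability => the data discredit the entertained model.
    return np.mean(sim >= obs)

# Data whose spread greatly exceeds the model's sampling s.d. of 1,
# so the check should flag inadequacy (p near 0):
y_bad = rng.normal(0.0, 4.0, size=30)
p = predictive_p_value(y_bad)
```

The same mechanism accommodates other checking functions (e.g. statistics sensitive to transformation, serial correlation, or bad values), each referred to its own predictive distribution.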
References in the Discussion
BOX, G.E.P. and TIAO, G.C. (1962). A further look at robustness via Bayes's theorem. Biometrika 49, 419–432.
GOOD, I.J. and GASKINS, R.A. (1980). Density estimation and bump-hunting by the penalized likelihood method exemplified by scattering and meteorite data. J. Amer. Statist. Assoc. 75, 42–73 (with discussion).
O'HAGAN, A. (1979). On outlier rejection phenomena in Bayes inference. J. Roy. Statist. Soc. B 41, 358–367.
Additional information
This is an extended abstract of the paper “Sampling and Bayes' Inference in Scientific Modelling and Robustness”, later read before the Royal Statistical Society at Cardiff, Wales, on May 15th, 1980, and published in J. Roy. Statist. Soc. A, 143.
Cite this article
Box, G.E.P. (1980). Sampling inference, Bayes' inference, and robustness in the advancement of learning. Trabajos de Estadistica y de Investigacion Operativa 31, 366–381. https://doi.org/10.1007/BF02888360