Finding Robust Models Using a Stratified Design
Predictive performance in model selection is usually estimated on out-of-sample validation and test datasets, under the assumption that these datasets are drawn from the same population as the training dataset. This assumption often fails in the common application context where the model scores future data. This paper proposes a sample design that can lead to better model performance and more robust estimates of model generalization error. The design is demonstrated on a collection scoring application.
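The core idea of a stratified sample design can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the helper `stratified_split` and the stratum key (e.g. an origination period) are hypothetical names chosen for the example. Each stratum contributes the same fraction to the held-out set, so the generalization-error estimate is not dominated by any one subpopulation.

```python
import random
from collections import defaultdict

def stratified_split(records, stratum_key, test_frac=0.3, seed=0):
    """Split records into train/test sets, preserving stratum proportions.

    Hypothetical helper: groups records by the value returned from
    stratum_key (e.g. a time period or risk band), then samples the
    same fraction from every group for the test set.
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for r in records:
        by_stratum[stratum_key(r)].append(r)
    train, test = [], []
    for members in by_stratum.values():
        rng.shuffle(members)  # random order within each stratum
        n_test = int(round(test_frac * len(members)))
        test.extend(members[:n_test])
        train.extend(members[n_test:])
    return train, test
```

For example, stratifying a scoring dataset by origination quarter guarantees that each quarter is represented in the test set in proportion to its size, rather than leaving the test-set composition to chance.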
Keywords: Test Dataset, Credit Risk, Validation Dataset, Challenger Design, Data Mining Application