Learning from automatically labeled data: case study on click fraud prediction
In the era of big data, both class labels and covariates may result from proprietary algorithms or ground models. The predictions of these ground models, however, are not the same as the unknown ground truth. The automatically generated class labels are therefore inherently uncertain, which makes subsequent supervised learning from such data challenging. Fine-tuning a new classifier could mean that, at the extreme, the new classifier simply tries to replicate the decision heuristics of the ground model; yet few new insights can be expected from a model that merely emulates another one. Here, we study this problem in the context of click fraud prediction from highly skewed data that were automatically labeled by a proprietary detection algorithm. We propose a new approach to generating click profiles for publishers of online advertisements. In a blinded test, our ensemble of random forests achieved an average precision of only 36.2%, meaning that our predictions do not agree well with those of the ground model. We investigated this discrepancy and made several interesting observations. Our results suggest that supervised learning from automatically labeled data should be complemented by an interpretation of conflicting predictions between the new classifier and the ground model. If the ground truth is not known, then elucidating such disagreements may be more relevant than improving the performance of the new classifier.
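The headline metric, average precision, is the mean of the precision values at the ranks where true positives occur in the score-ranked list. As a generic illustration (not the evaluation code used in the study), it can be computed as follows:

```python
def average_precision(labels, scores):
    """Average precision: mean of precision@k over the ranks k at which
    a true positive (label == 1) appears, with items ranked by score."""
    # Rank items by descending score.
    ranked = [label for _, label in sorted(zip(scores, labels),
                                           key=lambda pair: -pair[0])]
    hits, ap_sum = 0, 0.0
    for k, label in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            ap_sum += hits / k  # precision at rank k
    return ap_sum / max(hits, 1)

# Example: positives ranked 1st and 3rd -> (1/1 + 2/3) / 2
print(average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1]))
```

With a highly skewed class distribution, average precision is far more informative than accuracy, since it focuses on how well the rare positive class is ranked.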
Keywords: Classification · Click fraud prediction · Big data · Random forest · Ensemble learning
I thank the anonymous reviewers for their constructive comments, which greatly helped to improve this manuscript.