
Statistical outlier detection using direct density ratio estimation

  • Regular Paper
  • Published in Knowledge and Information Systems

Abstract

We propose a new statistical approach to the problem of inlier-based outlier detection, i.e., finding outliers in a test set based on a training set that consists only of inliers. Our key idea is to use the ratio of the training and test data densities as an outlier score. This approach is expected to perform well even in high-dimensional problems, since methods are available for estimating the density ratio directly, without going through density estimation. Among the various density-ratio estimation methods, we employ unconstrained least-squares importance fitting (uLSIF), since it is equipped with a natural cross-validation procedure that allows us to objectively optimize tuning parameters such as the regularization parameter and the kernel width. Furthermore, uLSIF offers a closed-form solution as well as a closed-form formula for the leave-one-out error, so it is computationally very efficient and scales to massive datasets. Simulations with benchmark and real-world datasets illustrate the usefulness of the proposed approach.
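To make the scoring recipe concrete, below is a minimal numerical sketch of uLSIF-based outlier scoring — our illustration, not the authors' implementation. A Gaussian-kernel model of the density ratio w(x) = p_tr(x)/p_te(x) is fitted by the closed-form regularized least-squares solution, and a small estimated ratio at a test point flags it as a likely outlier. The kernel width `sigma` and regularization parameter `lam` are fixed here for brevity; in the paper they would be selected by the closed-form leave-one-out cross-validation mentioned in the abstract. All function and variable names are ours.

```python
import numpy as np

def gaussian_kernel(X, C, sigma):
    """Gaussian kernel matrix between the rows of X and the centers C."""
    sq_dists = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def ulsif_outlier_scores(X_tr, X_te, sigma=1.0, lam=0.1, n_centers=100, seed=0):
    """Estimate w(x) = p_tr(x) / p_te(x) by uLSIF and return the estimated
    ratio at the test points; small scores mark likely outliers."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_tr), size=min(n_centers, len(X_tr)), replace=False)
    C = X_tr[idx]                              # kernel centers drawn from the inlier set
    Phi_tr = gaussian_kernel(X_tr, C, sigma)   # basis values on training samples
    Phi_te = gaussian_kernel(X_te, C, sigma)   # basis values on test samples
    # Closed-form regularized least-squares fit: solve (H + lam I) alpha = h.
    H = Phi_te.T @ Phi_te / len(X_te)          # second moment under the test density
    h = Phi_tr.mean(axis=0)                    # mean basis value under the training density
    alpha = np.linalg.solve(H + lam * np.eye(len(C)), h)
    alpha = np.maximum(alpha, 0.0)             # density ratios are non-negative
    return Phi_te @ alpha                      # estimated w(x) at each test point
```

For instance, drawing X_tr from a clean Gaussian and X_te from the same Gaussian plus a few shifted points, the shifted points receive the smallest scores. The closed-form solve is what makes the method scale: fitting costs one linear system in the number of kernel centers, independent of repeated optimization.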



Author information

Correspondence to Shohei Hido.


About this article

Cite this article

Hido, S., Tsuboi, Y., Kashima, H. et al. Statistical outlier detection using direct density ratio estimation. Knowl Inf Syst 26, 309–336 (2011). https://doi.org/10.1007/s10115-010-0283-2
