Logistic Regression Learning Model for Handling Concept Drift with Unbalanced Data in Credit Card Fraud Detection System

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 380)

Abstract

The credit card is a widely accepted payment instrument in the financial sector. As the number of users grows worldwide, so do the risks associated with credit card usage, including the theft of card details and the commission of fraud. Traditionally, machine learning algorithms have rested on assumptions about the underlying data distribution, such as a predetermined and fixed distribution. Real-world applications rarely satisfy this constraint: they often face imbalanced class distributions, and data drawn from non-stationary environments frequently exhibit sudden concept drift. These issues have largely been addressed separately by researchers. This paper proposes a unified incremental-learning framework, based on a logistic regression model, that tackles both issues jointly for the assessment of credit risk.

Keywords

Logistic regression learning · Concept drift · Class imbalance · Credit card fraud detection


Copyright information

© Springer India 2016

Authors and Affiliations

  1. Dr. D.Y. Patil School of Engineering and Technology, Savitribai Phule Pune University, Pune, India