
Hypertension Type Classification Using Hierarchical Ensemble of One-Class Classifiers for Imbalanced Data

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 311)

Abstract

The paper presents research on a computer decision support system able to recognize the type of hypertension. This diagnostic problem is highly imbalanced, because only about 5% of patients suffering from hypertension are diagnosed with secondary hypertension. Additionally, secondary hypertension can be caused by several disorders (in our work we recognize the five most common causes), which require strikingly different therapies. Thus, classification methods that take the nature of the decision task into consideration should be applied to this problem. We decided to employ original classification methods developed by our team, which have their origin in one-class classification and ensemble learning; their quality was confirmed in our previous works. The accuracy of the chosen classifiers was evaluated in computer experiments carried out on a real data set obtained from a hypertension clinic. The results of the experimental investigation confirmed the usefulness of the proposed hierarchical one-class classifier ensemble, which could be applied in real medical decision support systems.
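For illustration only, the sketch below shows one plausible way such a hierarchical ensemble of one-class classifiers could be organized: a first level separating primary from (rare) secondary hypertension, and a second level choosing among the five secondary causes. The use of scikit-learn's OneClassSVM, the class labels, and the two-level routing are assumptions for the sketch, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's method): a two-level hierarchy
# of one-class classifiers for an imbalanced hypertension-type task.
import numpy as np
from sklearn.svm import OneClassSVM


class OneClassEnsembleLevel:
    """Trains one one-class model per class and predicts the class whose
    model assigns the highest decision score to a sample."""

    def __init__(self, nu=0.1, gamma="scale"):
        self.nu, self.gamma = nu, gamma
        self.models = {}

    def fit(self, X, y):
        for label in np.unique(y):
            model = OneClassSVM(nu=self.nu, gamma=self.gamma)
            model.fit(X[y == label])  # each model sees only its own class
            self.models[label] = model
        return self

    def predict(self, X):
        labels = list(self.models)
        scores = np.column_stack(
            [self.models[l].decision_function(X) for l in labels]
        )
        return np.array(labels)[scores.argmax(axis=1)]


# Hypothetical usage: X_train, y_coarse ("primary"/"secondary") and y_cause
# (five secondary causes) are illustrative data, not from the paper.
# level1 = OneClassEnsembleLevel().fit(X_train, y_coarse)
# level2 = OneClassEnsembleLevel().fit(X_train[y_coarse == "secondary"], y_cause)
# coarse_pred = level1.predict(X_test)
# final_pred = np.where(coarse_pred == "primary", "primary",
#                       level2.predict(X_test))
```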

Keywords

classifier ensemble · pattern classification · one-class classifier · imbalanced data · hypertension



Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

Department of Systems and Computer Networks, Wroclaw University of Technology, Wroclaw, Poland
