Diversity Measures in Classifier Ensembles Used for Rotating Machinery Fault Diagnosis

Conference paper
Part of the Applied Condition Monitoring book series (ACM, volume 4)

Abstract

Recent progress in computational intelligence, sensor technology and soft computing methods permits the use of complex systems to achieve the goals of the diagnostic process. Among many techniques, machine learning and pattern recognition are often applied. When dealing with complex machinery, the use of a single classifier is often insufficient. Classifier ensembles, which combine the predictions of several classifiers, are known to be capable of outperforming a single classifier, because ensemble results are less dependent on the peculiarities of a single training set. Additionally, a combination of multiple classifiers may learn a more expressive class of decision functions. This paper presents a comparative study of different diversity measures for the detection and isolation of common faults in rotating machinery. The main premise was to investigate whether there is a link between a diversity measure and classification accuracy. Although in several cases a connection between diversity and fault detection and isolation performance was revealed, a generalization of the diversity measuring concept cannot be clearly formulated.
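To make the notion of ensemble diversity concrete, the following sketch (not taken from the paper) computes two widely used pairwise diversity measures, Yule's Q-statistic and the disagreement measure, from the 0/1 correctness vectors of two base classifiers evaluated on the same test set; the data vectors are purely hypothetical.

```python
def pairwise_diversity(correct_i, correct_j):
    """Return (Q-statistic, disagreement) for two classifiers.

    correct_i, correct_j: sequences of 0/1 flags over the same test
    samples, where 1 means the classifier predicted correctly.
    """
    # Contingency counts over the test samples:
    n11 = sum(1 for a, b in zip(correct_i, correct_j) if a and b)          # both right
    n00 = sum(1 for a, b in zip(correct_i, correct_j) if not a and not b)  # both wrong
    n10 = sum(1 for a, b in zip(correct_i, correct_j) if a and not b)      # only i right
    n01 = sum(1 for a, b in zip(correct_i, correct_j) if not a and b)      # only j right
    n = n11 + n00 + n10 + n01

    # Yule's Q-statistic: 1 for perfectly correlated classifiers,
    # negative when they tend to err on different samples.
    q = (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

    # Disagreement measure: fraction of samples where exactly one is right.
    dis = (n01 + n10) / n
    return q, dis


# Two hypothetical classifiers scored on 10 test samples:
ci = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
cj = [1, 0, 1, 1, 1, 0, 0, 1, 1, 1]
q, dis = pairwise_diversity(ci, cj)
```

Low Q and high disagreement indicate diverse classifiers; the study's question is whether such measures correlate with the accuracy of the combined ensemble.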

Keywords

Classifier fusion · Information diversity · Dempster-Shafer theory · Rotating machinery diagnosis

Acknowledgments

This scientific work was financed from resources assigned to the statutory activity of the Institute of Fundamentals of Machinery Design, Silesian University of Technology, Gliwice.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Institute of Fundamentals of Machine Design, Silesian University of Technology, Gliwice, Poland