
Intelligent bearing fault diagnosis using swarm decomposition method and new hybrid particle swarm optimization algorithm


The quality of the information extracted from vibration signals, and hence the accuracy of bearing status detection, depends on the methods used to process the signal and to select informative features. In this paper, a new hybrid approach is introduced in which the relatively new swarm decomposition (SWD) method and the optimized compensation distance evaluation technique (OCDET) are used to enhance the signal processing stage and to improve the optimal feature selection process, respectively. First, the vibration signals are decomposed into their oscillatory components (OCs) using SWD. The feature matrix is constructed by computing time-domain features for the OCs. The CDET method is then utilized to select the features most sensitive to the bearing status. The CDET approach, however, contains a threshold parameter that affects the number of selected features. Therefore, a hybrid optimization algorithm, which combines the Particle Swarm Optimization (PSO) algorithm with the Sine–Cosine Algorithm (SCA) and the Levy flight distribution, is used to select the optimal CDET threshold and to improve the support vector machine (SVM) classifier. The ability of the proposed technique is evaluated on vibration signals corresponding to different bearing defects and various speeds. The results indicate that the proposed fault diagnosis method can identify very small defects under various bearing conditions. Finally, the presented method shows better performance than other well-known methods in most of the case studies.






This study was financially supported by the University of Guilan to SNC. The funder had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations



SNC and PA designed and coordinated the study. SNC and PA wrote the manuscript. SNC, AB, BA, and IA reviewed the manuscript and contributed to its revision. All the authors discussed the results and gave their final approval for publication.

Corresponding author

Correspondence to Saeed Nezamivand Chegini.

Ethics declarations

Conflict of interest

The authors declare that there are no conflicts of interest regarding this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Appendix A: the implementation steps of the CDET Method

Suppose that the feature space of C conditions is as follows:

$$\left\{{q}_{m,c,j}\right\}, m=\mathrm{1,2},\dots {M}_{c}, c=\mathrm{1,2},\dots ,C , j=\mathrm{1,2},\dots ,J$$

where \({q}_{m,c,j}\) is the jth feature value of the mth sample under the cth condition, \({M}_{c}\) is the number of samples in the cth condition, and J is the number of features per sample. The process of calculating the feature weights using CDET is as follows:

  1. 1.

    Calculate the average distance of the same condition samples:

    $${d}_{c,j}=\frac{1}{{M}_{c}\times ({M}_{c}-1)}\sum_{l,m=1}^{{M}_{c}}\left|{q}_{m,c,j}-{q}_{l,c,j}\right|, l,m=\mathrm{1,2},\dots ,{M}_{c}, l\ne m$$

    Then compute the average of these distances over the \(C\) conditions:

    $${d}_{j}^{(w)}=\frac{1}{C}\sum_{c=1}^{C}{d}_{c,j}$$
  2. 2.

    Define and determine the variance factor of \({d}_{j}^{(w)}\) as follows:

    $${v}_{j}^{(w)}=\frac{\mathrm{max}\left(\left|{q}_{m,c,j}-{q}_{l,c,j}\right|\right)}{\mathrm{min}\left(\left|{q}_{m,c,j}-{q}_{l,c,j}\right|\right)}, l,m=\mathrm{1,2},\dots ,{M}_{c}, l\ne m, c=\mathrm{1,2},\dots ,C$$

  3. 3.

    Calculate the average feature values for all samples under the same condition:

    $${u}_{c,j}=\frac{1}{{M}_{c}}\sum_{m=1}^{{M}_{c}}{q}_{m,c,j}, c=\mathrm{1,2},\dots ,C, j=\mathrm{1,2},\dots ,J$$


    Then, evaluate the average distance between various condition samples:

    $${d}_{j}^{(b)}=\frac{1}{C\times (C-1)}\sum_{c,e=1}^{C}\left|{u}_{e,j}-{u}_{c,j}\right|, c,e=\mathrm{1,2},\dots ,C,c\ne e.$$
  4. 4.

    Define and calculate the variance factor of \({d}_{j}^{(b)}\) as follows:

    $${v}_{j}^{(b)}=\frac{\mathrm{max}(\left|{u}_{e,j}-{u}_{c,j}\right|)}{\mathrm{min}(\left|{u}_{e,j}-{u}_{c,j}\right|)}, c,e=\mathrm{1,2},\dots ,C,c\ne e$$
  5. 5.

    Determine the compensation factor as follows:

    $${\lambda }_{j}=\frac{1}{\frac{{v}_{j}^{(w)}}{\mathrm{max}({v}_{j}^{(w)})}+ \frac{{v}_{j}^{(b)}}{\mathrm{max}({v}_{j}^{(b)})}}$$
  6. 6.

    The criterion \({\overline{\alpha }}_{j}\), which is an index for evaluating the jth feature, is defined as follows:

    $${\alpha }_{j}={\lambda }_{j}\left({d}_{j}^{\left(b\right)}/{d}_{j}^{\left(w\right)}\right)$$

    $${\overline{\alpha }}_{j}=\frac{{\alpha }_{j}}{\mathrm{max}({\alpha }_{j})}$$
  7. 7.

    A predetermined threshold value, \(\xi \), is compared with the distance evaluation criterion of every feature. The features with \({\overline{\alpha }}_{j}>\xi \) are selected as useful attributes; otherwise, they are eliminated.
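The seven steps above can be sketched in NumPy as follows. The function name `cdet_select` and the array layout (conditions × samples × features) are illustrative choices, and the within-class variance factor of step 2 is computed as the max/min ratio of within-class distances, by analogy with the between-class factor of step 4:

```python
import numpy as np

def cdet_select(q, threshold):
    """CDET feature selection sketch.

    q: array of shape (C, M, J) -- C conditions, M samples per condition,
    J features per sample. Returns the indices of features whose normalized
    evaluation criterion alpha_bar exceeds the threshold xi."""
    C, M, J = q.shape
    # Step 1: average within-class distance per condition, then over conditions
    d_w_c = np.zeros((C, J))
    for c in range(C):
        diffs = np.abs(q[c][:, None, :] - q[c][None, :, :])  # (M, M, J)
        d_w_c[c] = diffs.sum(axis=(0, 1)) / (M * (M - 1))    # diagonal is zero
    d_w = d_w_c.mean(axis=0)
    # Step 2: within-class variance factor (max/min of within-class distances)
    pair_mask = ~np.eye(M, dtype=bool)
    within = np.stack([np.abs(q[c][:, None, :] - q[c][None, :, :])[pair_mask]
                       for c in range(C)])                   # (C, M*(M-1), J)
    v_w = within.max(axis=(0, 1)) / within.min(axis=(0, 1))
    # Step 3: class-mean features and average between-class distance
    u = q.mean(axis=1)                                       # (C, J)
    bdiffs = np.abs(u[:, None, :] - u[None, :, :])           # (C, C, J)
    d_b = bdiffs.sum(axis=(0, 1)) / (C * (C - 1))
    # Step 4: between-class variance factor
    cmask = ~np.eye(C, dtype=bool)
    v_b = bdiffs[cmask].max(axis=0) / bdiffs[cmask].min(axis=0)
    # Step 5: compensation factor lambda_j
    lam = 1.0 / (v_w / v_w.max() + v_b / v_b.max())
    # Step 6: normalized evaluation criterion alpha_bar_j
    alpha = lam * d_b / d_w
    alpha_bar = alpha / alpha.max()
    # Step 7: keep features whose criterion exceeds the threshold
    return np.where(alpha_bar > threshold)[0]
```

Because \({\overline{\alpha }}_{j}\) is normalized into (0, 1], a threshold of 0 keeps every feature and a threshold of 1 keeps none; useful values lie strictly between.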

Appendix B: the equations of the PSOSCALF optimization algorithm

Suppose \({\overrightarrow{X}}_{i}\left(t\right)\) and \({\overrightarrow{V}}_{i}\left(t\right)\) are the position vector and velocity vector of ith particle, respectively. The updating equations of the position and velocity of each particle are as follows (Shi 2001):
$${\overrightarrow{V}}_{i}\left(t+1\right)=w{\overrightarrow{V}}_{i}\left(t\right)+{c}_{1}{r}_{1}\left({\overrightarrow{X}}_{pBest}-{\overrightarrow{X}}_{i}\left(t\right)\right)+{c}_{2}{r}_{2}\left({\overrightarrow{X}}_{gBest}-{\overrightarrow{X}}_{i}\left(t\right)\right)$$

$${\overrightarrow{X}}_{i}\left(t+1\right)={\overrightarrow{X}}_{i}\left(t\right)+{\overrightarrow{V}}_{i}\left(t+1\right)$$


In the above relationships, \({\overrightarrow{V}}_{i}\left(t+1\right)\) and \({\overrightarrow{X}}_{i}\left(t+1\right)\) are the velocity and position vectors at iteration t + 1, respectively. \({\overrightarrow{X}}_{pBest}\) and \({\overrightarrow{X}}_{gBest}\) are the best position found by the particle itself and the best position found by the whole swarm, respectively. The random variables r1 and r2 are drawn from the interval [0, 1]. The coefficients c1 and c2 are the personal and global learning factors, respectively, and w is the inertia weight that controls the influence of the velocity at iteration t on the new velocity at iteration t + 1.
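As a minimal sketch, one PSO iteration as described above can be written in NumPy as follows; the default values of w, c1, and c2 are common choices from the PSO literature, not values prescribed by this paper:

```python
import numpy as np

def pso_update(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO velocity/position update.

    x, v: (n_particles, dim) current positions and velocities.
    p_best: (n_particles, dim) best position found by each particle.
    g_best: (dim,) best position found by the whole swarm.
    Returns the new positions and velocities."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)          # random factors in [0, 1)
    r2 = rng.random(x.shape)
    v_new = (w * v
             + c1 * r1 * (p_best - x)   # pull toward personal best
             + c2 * r2 * (g_best - x))  # pull toward global best
    return x + v_new, v_new
```

A particle that already sits at both its personal best and the swarm best with zero velocity stays put, as expected from the update equations.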

In the SCA algorithm, the following relationships are used as the solution updating equations (Mirjalili 2016):

$${\overrightarrow{X}}_{i}\left(t+1\right)=\left\{\begin{array}{c}{\overrightarrow{X}}_{i}\left(t\right)+{z}_{1}\mathrm{sin}\left({z}_{2}\right)\left|{z}_{3}{P}^{t}-{\overrightarrow{X}}_{i}\left(t\right)\right|, {z}_{4}<0.5\\ {\overrightarrow{X}}_{i}\left(t\right)+{z}_{1}\mathrm{cos}\left({z}_{2}\right)\left|{z}_{3}{P}^{t}-{\overrightarrow{X}}_{i}\left(t\right)\right|, {z}_{4}\ge 0.5\end{array}\right.$$

where the parameters z1, z2, z3, and z4 represent the key parameters of the SCA method and are set according to Mirjalili (2016). \({P}^{t}\) is the destination point or the best solution obtained at time t. In PSOSCALF, the following combinational equations that benefit from Levy flight and SCA algorithms are used for producing new responses (Nezamivand Chegini et al. 2018):

$${\overrightarrow{X}}_{i}\left(t+1\right)=\left\{\begin{array}{c}{{Levy}_{walk}(\overrightarrow{X}}_{i}\left(t\right))+{z}_{1}\mathrm{sin}\left({z}_{2}\right)\left|{z}_{3}{\overrightarrow{X}}_{gBest}-{\overrightarrow{X}}_{i}\left(t\right)\right|, {z}_{4}<0.5\\ {{Levy}_{walk}(\overrightarrow{X}}_{i}\left(t\right))+{z}_{1}\mathrm{cos}\left({z}_{2}\right)\left|{z}_{3}{\overrightarrow{X}}_{gBest}-{\overrightarrow{X}}_{i}\left(t\right)\right|, {z}_{4}\ge 0.5\end{array}\right.$$

The expression \({{Levy}_{walk}(\overrightarrow{X}}_{i}\left(t\right))\) is the only difference between Eq. (B.4) in PSOSCALF and Eq. (B.3) in SCA; it is calculated as follows:

$$ {\text{Levy}}_{{{\text{walk}}}} \left( {\vec{X}_{i} \left( t \right)} \right) = \vec{X}_{i} \left( t \right) + \overrightarrow {{{\text{step}}}} \oplus \overrightarrow {{{\text{random}}}} \left( {{\text{size}}\left( {\vec{X}_{i} \left( t \right)} \right)} \right) $$
$$ \overrightarrow {{{\text{step}}}} = {\text{stepsize}} \oplus \vec{X}_{i} \left( t \right) $$

In the PSOSCALF approach, the relationships presented in Eqs. (B.4a) and (B.4b) are added to the process of producing the new particle positions. Moreover, the number of consecutive iterations in which a particle's position does not change is stored in a parameter called the limit value. If this value reaches or exceeds a predetermined value, the particle's position is recalculated using the hybrid relationships in Eq. (B.4).

Details of setting the PSOSCALF parameters are presented in Nezamivand Chegini et al. (2018).
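A minimal NumPy sketch of the hybrid update in Eq. (B.4) is given below. The Mantegna-style Levy step, its index beta = 1.5, and the 0.01 scale factor are common implementation choices rather than values specified in this appendix; \(\oplus \) in Eqs. (B.4a) and (B.4b) is interpreted as an elementwise product, and z1 is taken as an externally supplied decaying parameter as in SCA:

```python
from math import gamma, pi, sin

import numpy as np

def levy_walk(x, beta=1.5, rng=None):
    """Levy-flight perturbation of position vectors, Eqs. (B.4a)-(B.4b).

    Uses the Mantegna algorithm to draw heavy-tailed step sizes."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size=x.shape)
    v = rng.normal(0, 1, size=x.shape)
    stepsize = 0.01 * u / np.abs(v) ** (1 / beta)   # heavy-tailed step sizes
    step = stepsize * x                              # Eq. (B.4b), elementwise
    return x + step * rng.random(size=x.shape)       # Eq. (B.4a)

def psoscalf_update(x, g_best, z1, rng=None):
    """Hybrid Eq. (B.4): Levy walk of the current positions plus an
    SCA-style sine/cosine move toward the swarm best position."""
    rng = np.random.default_rng() if rng is None else rng
    z2 = rng.uniform(0, 2 * np.pi, size=x.shape)
    z3 = rng.uniform(0, 2, size=x.shape)
    z4 = rng.random(size=x.shape)
    lw = levy_walk(x, rng=rng)
    move = z1 * np.abs(z3 * g_best - x)
    # sine branch when z4 < 0.5, cosine branch otherwise
    return np.where(z4 < 0.5, lw + np.sin(z2) * move, lw + np.cos(z2) * move)
```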


About this article


Cite this article

Nezamivand Chegini, S., Amini, P., Ahmadi, B. et al. Intelligent bearing fault diagnosis using swarm decomposition method and new hybrid particle swarm optimization algorithm. Soft Comput 26, 1475–1497 (2022).



  • Bearing fault diagnosis
  • Swarm decomposition
  • Optimized compensation distance evaluation
  • Hybrid particle swarm optimization algorithm
  • Support vector machine