
Improving Prediction Accuracy for Debonding Quantification in Stiffened Plate by Meta-Learning Model

  • Conference paper
  • Published in: Proceedings of International Conference on Big Data, Machine Learning and their Applications

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 150)

Abstract

Improving prediction accuracy has been a major challenge for damage-diagnosis systems in the field of structural health monitoring, and several machine learning algorithms have been applied to the problem. This study demonstrates the effectiveness of a meta-learning model in improving prediction accuracy over a range of individual machine learning algorithms for damage diagnosis. The learning algorithms chosen in this paper are support vector machine, random forest, a vote method, gradient boosting regression, and stacked regression as the meta-model. The algorithms are employed for debonding quantification in a metallic stiffened plate, and are trained and tested on numerically simulated first-mode-shape vibration data. To check the robustness of the algorithms, artificial noise is added to the numerically simulated data. The results show that the prediction accuracy of the stacked-regression meta-model is better than that of the individual models.
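The stacking scheme the abstract describes can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it uses scikit-learn's `StackingRegressor` (the paper's tooling is not specified here) and synthetic regression data with added Gaussian noise standing in for the simulated mode-shape features; all model and parameter choices below are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the simulated vibration features (hypothetical data).
X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=0)

# Add artificial noise to the inputs, mirroring the robustness check in the study.
rng = np.random.default_rng(0)
X_noisy = X + rng.normal(0.0, 0.02 * X.std(), size=X.shape)

X_tr, X_te, y_tr, y_te = train_test_split(X_noisy, y, random_state=0)

# Heterogeneous base learners, echoing the paper's SVM / random forest /
# gradient boosting choices; SVR gets feature scaling via a pipeline.
base_learners = [
    ("svr", make_pipeline(StandardScaler(), SVR())),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("gbr", GradientBoostingRegressor(random_state=0)),
]

# Stacked regression: a meta-model (ridge here, an assumption) is trained on
# cross-validated predictions of the base learners.
stack = StackingRegressor(estimators=base_learners, final_estimator=RidgeCV())
stack.fit(X_tr, y_tr)

score = r2_score(y_te, stack.predict(X_te))
print(f"stacked R^2 on held-out data: {score:.3f}")
```

The key design point of stacking is that the meta-model sees out-of-fold predictions of the base learners, not their training-set fits, which is what lets it weight the base models without simply rewarding overfitting.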



Author information

Correspondence to Abhijeet Kumar.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Kumar, A., Guha, A., Banerjee, S. (2021). Improving Prediction Accuracy for Debonding Quantification in Stiffened Plate by Meta-Learning Model. In: Tiwari, S., Suryani, E., Ng, A.K., Mishra, K.K., Singh, N. (eds) Proceedings of International Conference on Big Data, Machine Learning and their Applications. Lecture Notes in Networks and Systems, vol 150. Springer, Singapore. https://doi.org/10.1007/978-981-15-8377-3_5
