Abstract
Improving prediction accuracy has been a major challenge for damage diagnosis systems in the field of structural health monitoring, and several machine learning algorithms have been applied to tackle it. This study demonstrates the effectiveness of a meta-learning model in improving prediction accuracy over a range of individual machine learning algorithms for damage diagnosis. The learning algorithms chosen in this paper are support vector machine, random forest, a voting method, gradient boosting regression, and stacked regression as the meta-model. These algorithms are employed for debonding quantification in a metallic stiffened plate and are trained and tested on numerically simulated first-mode-shape vibration data. To check the robustness of the algorithms, artificial noise is added to the simulated data. The results show that the prediction accuracy of the stacked-regression meta-model is better than that of the individual models.
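The stacking scheme the abstract describes can be sketched with scikit-learn's StackingRegressor, which combines the same family of base learners (support vector machine, random forest, gradient boosting) under a meta-model. This is a minimal illustration, not the paper's implementation: the synthetic noisy features below are a hypothetical stand-in for the simulated mode-shape data and debonding-size target, and the linear meta-model is one common choice for the stacking layer.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Hypothetical stand-in for mode-shape features vs. debonding size,
# with artificial noise added as in the paper's robustness check.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(300, 5))
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(0.0, 0.1, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners mirror the individual models named in the abstract.
base_learners = [
    ("svr", SVR(kernel="rbf", C=10.0)),
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("gbr", GradientBoostingRegressor(random_state=0)),
]

# The meta-model is trained on out-of-fold base-learner predictions (cv=5),
# which is the stacked-regression idea of Wolpert/Breiman.
stack = StackingRegressor(estimators=base_learners,
                          final_estimator=LinearRegression(), cv=5)
stack.fit(X_tr, y_tr)
print(round(r2_score(y_te, stack.predict(X_te)), 3))
```

The key design point is that the meta-model sees only cross-validated predictions of the base learners, so it learns how to weight them without overfitting to their training-set performance.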
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
Cite this paper
Kumar, A., Guha, A., Banerjee, S. (2021). Improving Prediction Accuracy for Debonding Quantification in Stiffened Plate by Meta-Learning Model. In: Tiwari, S., Suryani, E., Ng, A.K., Mishra, K.K., Singh, N. (eds) Proceedings of International Conference on Big Data, Machine Learning and their Applications. Lecture Notes in Networks and Systems, vol 150. Springer, Singapore. https://doi.org/10.1007/978-981-15-8377-3_5
DOI: https://doi.org/10.1007/978-981-15-8377-3_5
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-8376-6
Online ISBN: 978-981-15-8377-3
eBook Packages: Intelligent Technologies and Robotics (R0)