Background Information of Deep Learning for Structural Engineering
Since the first journal article on structural engineering applications of neural networks (NNs) was published, a large number of articles have addressed structural analysis and design problems using machine learning techniques. However, owing to fundamental limitations of the traditional methods, attempts to apply the artificial NN concept to structural analysis problems have declined significantly over the last decade. Recent advances in deep learning techniques can provide a more suitable solution to those problems. In this study, versatile background information is presented, such as methods for alleviating overfitting through the choice of hyper-parameters. A well-known ten-bar truss example is presented to illustrate the conditions under which neural networks perform well and the role of hyper-parameters in structural problems.
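Among the overfitting-alleviation methods the abstract refers to, dropout (Hinton et al. 2012) is one of the most common. The following NumPy sketch of inverted dropout is illustrative only and is not taken from the paper; the function name and the fixed random seed are assumptions made for the example.

```python
import numpy as np

def dropout_forward(x, rate=0.5, training=True, rng=None):
    """Inverted dropout (illustrative helper, not from the paper).

    During training, each activation is zeroed with probability `rate`,
    and the survivors are scaled by 1/(1 - rate) so the expected
    activation is unchanged; at inference the layer is a no-op.
    """
    if not training or rate == 0.0:
        return x  # inference: dropout does nothing
    rng = rng or np.random.default_rng(0)  # assumed seed for reproducibility
    mask = rng.random(x.shape) >= rate     # keep each unit with prob 1 - rate
    return x * mask / (1.0 - rate)         # rescale surviving activations

# Example: a layer of 8 unit activations
a = np.ones(8)
a_train = dropout_forward(a, rate=0.5, training=True)   # some units zeroed, rest scaled to 2.0
a_test = dropout_forward(a, rate=0.5, training=False)   # unchanged
```

The dropout rate itself is one of the hyper-parameters the study discusses: too low and overfitting persists, too high and the network underfits.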
This research was funded by a grant (NRF-2017R1A4A1015660) from the National Research Foundation of Korea (NRF), funded by the Ministry of Education and Science Technology (MEST) of the Korean government.
Compliance with Ethical Standards
Conflict of interest
The authors declare that they have no conflict of interest.