Abstract
Deep learning is widely accepted as a major driving force of the current artificial intelligence era. In this article, we first briefly describe the milestones of deep learning. Then, based on a mini-review of the literature, we point out some common misconceptions about deep neural networks and deep learning. We also point out ways in which the deep learning approach is presently being misused.
Acknowledgment
The author would like to thank Dr. M.N.S. Swamy for insightful discussions.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Du, K.-L. (2023). Several Misconceptions and Misuses of Deep Neural Networks and Deep Learning. In: Chen, H., Fan, P., Wang, L. (eds.) Communications, Networking, and Information Systems. CNIS 2023. Communications in Computer and Information Science, vol. 1839. Springer, Singapore. https://doi.org/10.1007/978-981-99-3581-9_10
Print ISBN: 978-981-99-3580-2
Online ISBN: 978-981-99-3581-9