Abstract
In Chap. 11, we considered a model f(x) = ϕ(x)′w, where the original input vector x is replaced by the feature vector ϕ(x) = [ϕ0(x), …, ϕM(x)]′. Since ideal basis functions ϕ(x) should be localized or adaptive with respect to x, we cluster the input dataset {xi | 1 ≤ i ≤ N} ⊂ R^D into M clusters and let {μj, 0 ≤ j ≤ M−1} be the centers of the clusters.
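The construction sketched above — cluster the inputs, place a localized (e.g., Gaussian) basis function at each cluster center, then fit the linear weights w — can be illustrated as follows. This is a minimal sketch, not the chapter's own code: the toy data, the width parameter s, and the plain k-means loop are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (hypothetical; stands in for {x_i | 1 <= i <= N}).
N, M = 200, 10
X = rng.uniform(-3, 3, size=(N, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)

# Step 1: cluster the inputs into M clusters (plain k-means, written out).
centers = X[rng.choice(N, M, replace=False)]
for _ in range(50):
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    for j in range(M):
        if np.any(labels == j):
            centers[j] = X[labels == j].mean(axis=0)

# Step 2: Gaussian basis functions centered at the cluster centers mu_j,
# plus a constant bias term phi_0(x) = 1.
s = 0.5  # width parameter (an assumption; often set from the cluster spread)
Phi = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * s**2))
Phi = np.hstack([np.ones((N, 1)), Phi])  # N x (M + 1) design matrix

# Step 3: fit the linear weights w in f(x) = phi(x)' w by least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(w.shape)  # prints (11,)
```

Because the Gaussian bumps are centered where the data actually lie, each basis function responds only to inputs near its cluster, which is exactly the locality the abstract asks of ϕ(x).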
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Lee, J., Chang, JR., Kao, LJ., Lee, CF. (2023). Neural Networks and Deep Learning Algorithm. In: Essentials of Excel VBA, Python, and R. Springer, Cham. https://doi.org/10.1007/978-3-031-14283-3_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-14282-6
Online ISBN: 978-3-031-14283-3
eBook Packages: Mathematics and Statistics (R0)