Abstract
This study explores the potential and application of the recently proposed Forward-Forward algorithm (FFA). Its primary aim is to analyze the results achieved with FFA and to compare them against those of existing training algorithms. Specifically, we investigate the extent to which FFA can be effectively deployed in a neural network and whether it can produce results comparable to those of the conventional backpropagation method, in order to gain a deeper understanding of the new algorithm's benefits and limitations in the context of neural network training. Four datasets are used in the experiments: MNIST, COVID-19 chest X-ray, Brain MRI, and Cats vs. Dogs. Our findings suggest that FFA shows promise on certain computer vision tasks, but it is still far from replacing backpropagation for common tasks. The paper describes the experimental setup and procedure used to evaluate the efficacy of the FFA, and presents the obtained results together with a comparative analysis.
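For readers unfamiliar with the algorithm under study: in the Forward-Forward scheme as originally proposed by Hinton (2022), each layer is trained locally, with no backward pass through the network. A layer's "goodness" is the sum of its squared activations, and the layer is optimized so that goodness is high for positive (real) data and low for negative (corrupted or mislabeled) data. The following is a minimal PyTorch sketch of one such layer; the class name `FFLayer`, the threshold value, and the optimizer settings are illustrative assumptions, not the exact configuration used in this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FFLayer(nn.Module):
    """One Forward-Forward layer, trained locally without inter-layer backprop.

    Goodness = sum of squared activations. The layer is trained so that
    goodness exceeds a threshold for positive data and falls below it
    for negative data. (Sketch; hyperparameters are illustrative.)
    """

    def __init__(self, in_dim: int, out_dim: int, threshold: float = 2.0, lr: float = 0.03):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize the input vector's length so only the *direction* of the
        # previous layer's activity is passed on, as in Hinton's formulation.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos: torch.Tensor, x_neg: torch.Tensor):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness on positive data
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness on negative data
        # Logistic loss pushes g_pos above the threshold and g_neg below it.
        loss = F.softplus(torch.cat([
            self.threshold - g_pos,  # penalize low goodness on positives
            g_neg - self.threshold,  # penalize high goodness on negatives
        ])).mean()
        self.opt.zero_grad()
        loss.backward()  # gradient stays inside this layer only
        self.opt.step()
        # Detach outputs so no gradient ever flows to earlier layers.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```

Layers built this way can be stacked and trained greedily: each layer's detached outputs become the next layer's positive and negative inputs, and at inference time a label can be chosen by embedding each candidate label in the input and accumulating goodness across layers.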
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Thakur, S., Dhawan, R., Bhargava, P., Tripathi, K., Walambe, R., Kotecha, K. (2024). The Forward-Forward Algorithm: Analysis and Discussion. In: Garg, D., Rodrigues, J.J.P.C., Gupta, S.K., Cheng, X., Sarao, P., Patel, G.S. (eds) Advanced Computing. IACC 2023. Communications in Computer and Information Science, vol 2053. Springer, Cham. https://doi.org/10.1007/978-3-031-56700-1_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-56699-8
Online ISBN: 978-3-031-56700-1