
The Forward-Forward Algorithm: Analysis and Discussion

  • Conference paper
Advanced Computing (IACC 2023)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 2053))


Abstract

This study explores the potential and application of the newly proposed Forward-Forward algorithm (FFA). The primary aim is to analyze the results achieved by the algorithm and compare them with those of existing training algorithms. Specifically, we examine the extent to which FFA can be effectively deployed in a neural network and whether it can produce results comparable to those obtained with conventional backpropagation, in order to gain a deeper understanding of the new algorithm’s benefits and limitations in the context of neural network training. The experiments use four datasets: MNIST, COVID-19 X-ray, Brain MRI, and Cat vs. Dog. Our findings suggest that FFA shows promise for certain computer vision (CV) tasks; however, it is still far from replacing backpropagation for common tasks. The paper describes the experimental setup and process carried out to assess the efficacy of FFA, and presents the obtained results together with a comparative analysis.
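For context, the sketch below illustrates the layer-local training rule that defines the Forward-Forward algorithm as introduced by Hinton: each layer is optimised on its own “goodness” objective (the sum of squared activities), pushed above a threshold for positive (real) data and below it for negative data, so no error gradients propagate between layers. This is a minimal illustrative sketch, not the implementation evaluated in this paper; the layer sizes, threshold, optimiser, and random input batches are hypothetical.

    # Minimal sketch of one Forward-Forward layer (standard FF formulation,
    # not the authors' implementation). All hyperparameters are hypothetical.
    import torch
    import torch.nn as nn

    class FFLayer(nn.Module):
        def __init__(self, in_dim, out_dim, threshold=2.0, lr=0.03):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)
            self.threshold = threshold
            self.opt = torch.optim.Adam(self.parameters(), lr=lr)

        def forward(self, x):
            # Normalise the input so only the direction of the activity
            # vector (not its length, i.e. its goodness) reaches this layer.
            x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
            return torch.relu(self.linear(x))

        def train_step(self, x_pos, x_neg):
            # Local objective: goodness above threshold for positive data,
            # below threshold for negative data.
            g_pos = self.forward(x_pos).pow(2).sum(dim=1)
            g_neg = self.forward(x_neg).pow(2).sum(dim=1)
            loss = torch.log1p(torch.exp(torch.cat([
                self.threshold - g_pos,   # push positive goodness up
                g_neg - self.threshold,   # push negative goodness down
            ]))).mean()
            self.opt.zero_grad()
            loss.backward()               # updates only this layer's weights
            self.opt.step()
            # Detach so the next layer trains on activations, not gradients.
            return self.forward(x_pos).detach(), self.forward(x_neg).detach()

    # Usage sketch: train two layers greedily on (hypothetical) batches.
    layers = [FFLayer(784, 500), FFLayer(500, 500)]
    x_pos, x_neg = torch.randn(64, 784), torch.randn(64, 784)
    for layer in layers:
        x_pos, x_neg = layer.train_step(x_pos, x_neg)

The key design choice is that the backward call in this sketch only updates the current layer’s weights, since the activations passed forward are detached; this layer-local credit assignment is what distinguishes FFA from end-to-end backpropagation.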



Author information


Corresponding author

Correspondence to Rahee Walambe.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Thakur, S., Dhawan, R., Bhargava, P., Tripathi, K., Walambe, R., Kotecha, K. (2024). The Forward-Forward Algorithm: Analysis and Discussion. In: Garg, D., Rodrigues, J.J.P.C., Gupta, S.K., Cheng, X., Sarao, P., Patel, G.S. (eds) Advanced Computing. IACC 2023. Communications in Computer and Information Science, vol 2053. Springer, Cham. https://doi.org/10.1007/978-3-031-56700-1_31


  • DOI: https://doi.org/10.1007/978-3-031-56700-1_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-56699-8

  • Online ISBN: 978-3-031-56700-1

  • eBook Packages: Computer Science, Computer Science (R0)
