Fast Implementation of Tunable ARN Nodes

  • Shilpa Mayannavar
  • Uday Wali
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 941)

Abstract

Auto Resonance Network (ARN) is a general-purpose Artificial Neural Network (ANN) capable of non-linear data classification. Each node in an ARN resonates when it receives a specific set of input values. The coverage of an ARN node indicates the spread of input values within which its gain is guaranteed to stay above the half-power point; tuning an ARN node therefore refers to adjusting its coverage. The tuning equations of ARN nodes are computationally complex, and hence a fast hardware accelerator is needed to remove performance bottlenecks. This paper discusses issues related to the speed of computation of a resonating ARN node and its numerical accuracy.
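
To make the notions of resonance, coverage, and tuning concrete, the following minimal sketch models a single node with an assumed Gaussian-shaped gain curve. This is an illustration only: the paper's actual ARN tuning equations are more involved and are not reproduced here, and the parameter name rho, the class interface, and the tuning rule are assumptions introduced for this example.

    import numpy as np

    class ResonanceNode:
        """Illustrative tunable resonance node (assumed Gaussian gain)."""

        def __init__(self, target, rho=1.0):
            self.target = np.asarray(target, dtype=float)  # pattern the node resonates to
            self.rho = rho  # assumed tuning parameter controlling coverage

        def gain(self, x):
            # Peak gain of 1.0 when the input matches the stored target exactly.
            d = np.linalg.norm(np.asarray(x, dtype=float) - self.target)
            return np.exp(-self.rho * d * d)

        def coverage(self):
            # Distance from the target within which the gain stays above the
            # half-power point: solve exp(-rho * d^2) = 0.5 for d.
            return np.sqrt(np.log(2.0) / self.rho)

        def tune(self, factor):
            # Widen (factor > 1) or narrow (factor < 1) the coverage by
            # rescaling rho; coverage scales as 1/sqrt(rho).
            self.rho /= factor * factor

    node = ResonanceNode(target=[0.2, 0.7], rho=4.0)
    print(node.gain([0.2, 0.7]))   # 1.0 at resonance
    print(node.coverage())         # ~0.416
    node.tune(2.0)                 # double the coverage
    print(node.coverage())         # ~0.832

Under this assumed gain curve, tuning reduces to rescaling a single parameter; the hardware-acceleration question raised in the paper concerns evaluating such non-linear tuning expressions quickly and with adequate numerical precision.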

Keywords

Artificial Intelligence · Auto Resonance Network · Deep learning · Graphic Processor Unit · Hardware accelerators · Tuned Neural Networks

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. C-Quad Research, Belagavi, India
  2. KLE Dr. MSS CET, Belagavi, India