
Deep Learning-Driven Structured Energy Efficient Affordable Ecosystem for Computational Learning Theory

  • Conference paper
  • First Online:
Recent Trends in Communication and Intelligent Systems

Part of the book series: Algorithms for Intelligent Systems ((AIS))


Abstract

Advances in the field of machine learning (ML) are raising the computational demands of deep learning (DL) application development. Analyzing the performance of a learning algorithm places a premium on the statistical, theoretical, and computational aspects of learning and graphical models, combined with sparse analysis, tensor and topological methods, and numerical simulation at various levels of abstraction. Training DNN models requires specialized, compute-intensive, deep-learning-oriented accelerated hardware together with performance libraries. Making such a computing ecosystem available for research and development at reasonable cost is challenging because of limiting factors such as financial viability, lack of hardware and software expertise during procurement, and limited experience in determining and installing the required computing environment, performance libraries, frameworks, and tools. With the availability of big data, innovation in chip design, and significant improvements in neural network design and in the speed and accuracy of deep learning models, it has become possible to provide the necessary DL development ecosystem to the wider community at affordable cost. The aim of this paper is to present a study of a ready-to-use, end-to-end deep learning solution suitable for simulation, modeling, and analysis, equipped with the latest generation of processors and an accelerated software development environment for DNN application development, video analytics, linear algebra, sparse matrix operations, multi-GPU communication, and training/inference engine development. This enables users at the grassroots level to carry out R&D irrespective of their geographical location and environmental conditions while building the technical capability to solve India-specific problem statements.
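Among the workloads the ecosystem targets, sparse matrix operations are a representative accelerated kernel. As a minimal illustrative sketch (not the paper's code), the following pure-Python example shows the CSR (compressed sparse row) storage format and the sparse matrix–vector product that GPU libraries such as cuSPARSE accelerate:

```python
# Illustrative CSR (compressed sparse row) sketch of a sparse
# matrix-vector product, the kernel that GPU sparse-math libraries
# accelerate. Pure Python, for exposition only.
#
# Example 3x3 matrix:
#   [[1, 0, 0],
#    [0, 0, 2],
#    [0, 3, 0]]
values  = [1.0, 2.0, 3.0]   # nonzero entries, stored row by row
col_idx = [0, 2, 1]         # column index of each nonzero
row_ptr = [0, 1, 2, 3]      # offset where each row starts in `values`

def spmv(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x for a CSR matrix A."""
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            y[row] += values[k] * x[col_idx[k]]
    return y

print(spmv(values, col_idx, row_ptr, [1.0, 2.0, 3.0]))  # [1.0, 6.0, 6.0]
```

Storing only the three nonzeros instead of all nine entries is what makes such kernels memory-efficient; accelerated libraries apply the same format at far larger scale.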



Acknowledgements

The research work is supported by C-DAC, Pune. We would like to thank the members of the HPC Technologies Group, C-DAC Pune.

Author information


Corresponding author

Correspondence to Krishan Gopal Gupta.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Gupta, K.G., Maity, S.K., Das, A., Wandhekar, S. (2022). Deep Learning-Driven Structured Energy Efficient Affordable Ecosystem for Computational Learning Theory. In: Pundir, A.K.S., Yadav, N., Sharma, H., Das, S. (eds) Recent Trends in Communication and Intelligent Systems. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-19-1324-2_21
