Abstract
Advances in machine learning (ML) are raising the computational demands of deep learning (DL) application development. Analyzing the performance of a learning algorithm requires attention to the statistical, theoretical, and computational aspects of learning and graphical models, together with sparse analysis, tensor and topological methods, and numerical simulation at various levels of abstraction. Training DNN models demands specialized, compute-intensive, deep learning-oriented accelerated hardware along with high-performance libraries. Making such a computation ecosystem available for research and development at reasonable cost is challenging because of limiting factors such as financial viability, lack of hardware and software knowledge during procurement, and limited expertise in selecting and installing the required computing environment, performance libraries, frameworks, and tools. The availability of big data, innovation in chip design technology, and significant improvements in neural network design and in the speed and accuracy of deep learning models have made it possible to offer the necessary DL development ecosystem to the common masses at affordable cost. The aim of this paper is to present a study of a ready-to-use, end-to-end deep learning solution suitable for simulation, modeling, and analysis, built on the latest generation of processors and an accelerated software development environment for DNN application development, video analytics, linear algebra, sparse matrix operations, multi-GPU communication, and training/inference engine development. This enables users at the grassroots level to carry out R&D irrespective of their geographical location and environmental conditions, while building technical capability to solve India-specific problem statements.
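Among the workloads the abstract lists, sparse matrix operations are a representative kernel that accelerated libraries (e.g., cuSPARSE) optimize. As a minimal illustration, not taken from the paper itself, a sparse matrix–vector product over the standard CSR (compressed sparse row) layout can be sketched in plain Python:

```python
def csr_matvec(data, indices, indptr, x):
    """Multiply a CSR-encoded sparse matrix by a dense vector x.

    data    -- non-zero values, row by row
    indices -- column index of each value in data
    indptr  -- indptr[r]..indptr[r+1] delimits row r's entries
    """
    y = []
    for row in range(len(indptr) - 1):
        acc = 0
        # Iterate only over the stored (non-zero) entries of this row.
        for k in range(indptr[row], indptr[row + 1]):
            acc += data[k] * x[indices[k]]
        y.append(acc)
    return y

# A = [[1, 0, 2],
#      [0, 0, 3],
#      [4, 5, 0]] in CSR form:
data = [1, 2, 3, 4, 5]
indices = [0, 2, 2, 0, 1]
indptr = [0, 2, 3, 5]

print(csr_matvec(data, indices, indptr, [1, 1, 1]))  # [3, 3, 9]
```

GPU libraries implement this same access pattern with parallel per-row (or per-block) kernels, which is why the skewed, memory-bound nature of sparse workloads benefits from the accelerated hardware discussed in the paper.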
Acknowledgements
This research work is supported by C-DAC, Pune. We would like to thank the members of the HPC Technologies Group, C-DAC Pune.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Gupta, K.G., Maity, S.K., Das, A., Wandhekar, S. (2022). Deep Learning-Driven Structured Energy Efficient Affordable Ecosystem for Computational Learning Theory. In: Pundir, A.K.S., Yadav, N., Sharma, H., Das, S. (eds) Recent Trends in Communication and Intelligent Systems. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-19-1324-2_21
DOI: https://doi.org/10.1007/978-981-19-1324-2_21
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-1323-5
Online ISBN: 978-981-19-1324-2
eBook Packages: Intelligent Technologies and Robotics (R0)