Abstract
In recent years, convolutional neural networks have gained popularity in areas such as image processing, machine translation, speech recognition, and object detection. The data generated in these areas are very large, with both a large number of samples and a large number of attributes. Other areas, such as bioinformatics, also produce large amounts of data but face the opposite problem of a small number of samples with a large number of attributes. In all these applications, weight initialization plays an important role in the generalization of the network. In this work, we propose a novel approach for kernel initialization in which the weights learned by each autoencoder hidden layer act as the initial kernel (filter) weights of the corresponding convolutional neural network layer. The proposed approach is compared with random initialization of the kernel weights, and the results show that it performs comparably to random initialization.
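To make the initialization scheme concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: a single-hidden-layer autoencoder is trained on image patches, and its encoder weights are reshaped into the kernels of the first convolutional layer. The patch size (5×5), number of filters (8), single-channel input, and the random toy patches are illustrative assumptions.

```python
# Sketch: autoencoder-based kernel initialization for the first conv layer.
import torch
import torch.nn as nn

patch_size, n_filters = 5, 8  # assumed kernel size and filter count

# 1. Autoencoder whose hidden layer size equals the number of conv filters.
encoder = nn.Linear(patch_size * patch_size, n_filters)
decoder = nn.Linear(n_filters, patch_size * patch_size)
autoencoder = nn.Sequential(encoder, nn.Sigmoid(), decoder)

# 2. Train it to reconstruct patches (random tensors here stand in for
#    patches sampled from the actual training images).
patches = torch.rand(1024, patch_size * patch_size)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(autoencoder(patches), patches)
    loss.backward()
    opt.step()

# 3. Copy the learned encoder weights into the conv layer: each hidden
#    unit's weight vector becomes one kernel.
conv1 = nn.Conv2d(in_channels=1, out_channels=n_filters, kernel_size=patch_size)
with torch.no_grad():
    conv1.weight.copy_(encoder.weight.view(n_filters, 1, patch_size, patch_size))
    conv1.bias.copy_(encoder.bias)

# conv1 now starts from autoencoder-learned kernels instead of random ones;
# the CNN is then trained as usual from this initialization.
```

The same procedure can, in principle, be repeated for deeper layers by training a stacked autoencoder and mapping each hidden layer's weights onto the corresponding convolutional layer before supervised training begins.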
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Singh, V., Swaminathan, A., Verma, N.K. (2019). Convolutional Neural Network with Stacked Autoencoder for Kernel Initialization. In: Verma, N., Ghosh, A. (eds) Computational Intelligence: Theories, Applications and Future Directions - Volume II. Advances in Intelligent Systems and Computing, vol 799. Springer, Singapore. https://doi.org/10.1007/978-981-13-1135-2_5
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-1134-5
Online ISBN: 978-981-13-1135-2