Abstract
Deep learning now dominates image classification and computer vision, and many algorithms rely on deep architectures to improve results. Shepard Interpolation Neural Networks (SINNs) take a different approach: instead of a deep architecture, they use a shallow, wide one. SINNs, however, fall short in extracting meaningful features from raw input. Deep learning excels at exactly this task; in particular, a deep convolutional neural network (CNN) naturally learns discriminative features from raw input data. In this paper, we combine the representational power of deep CNN features with the speed and efficiency of the shallow SINN framework in a single architecture that produces competitive results at a fraction of the computational cost. We first use CNNs to extract features from three popular image classification data sets (MNIST, CIFAR-10, and CIFAR-100), and then use those features to efficiently and effectively train a shallow SINN to classify the images. The method not only achieves results competitive with the state of the art in image classification, but also runs nearly ten times faster.
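At the core of a SINN is Shepard's classical inverse-distance-weighted interpolation. As a rough illustration of the classification stage described in the abstract, the sketch below applies plain Shepard interpolation to pre-extracted feature vectors: each query is scored by a distance-weighted vote over the training features. This is a minimal stand-in under simplifying assumptions, not the paper's trained SINN architecture, and the function name and parameters are hypothetical.

```python
import numpy as np

def shepard_classify(train_feats, train_labels, query, p=2, eps=1e-8):
    """Classify `query` by Shepard (inverse-distance-weighted) voting
    over training feature vectors (e.g. CNN features)."""
    # Euclidean distance from the query to every training feature vector
    d = np.linalg.norm(train_feats - query, axis=1)
    # If the query coincides with a training point, return that point's label
    if np.any(d < eps):
        return train_labels[np.argmin(d)]
    # Shepard weights: inverse distance raised to power p
    w = 1.0 / d**p
    # Sum the weights per class and pick the class with the largest total
    classes = np.unique(train_labels)
    scores = np.array([w[train_labels == c].sum() for c in classes])
    return classes[np.argmax(scores)]

# Toy usage: 2-D "features", two classes
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
labels = np.array([0, 0, 1])
print(shepard_classify(feats, labels, np.array([0.2, 0.0])))  # -> 0
```

In the full pipeline, `train_feats` would be activations taken from a CNN's penultimate layer rather than raw pixels; the SINN itself learns its interpolation nodes rather than using every training point directly.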
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Smith, K.E., Williams, P., Chaiya, T., Ble, M. (2018). Deep Convolutional-Shepard Interpolation Neural Networks for Image Classification Tasks. In: Campilho, A., Karray, F., ter Haar Romeny, B. (eds) Image Analysis and Recognition. ICIAR 2018. Lecture Notes in Computer Science, vol 10882. Springer, Cham. https://doi.org/10.1007/978-3-319-93000-8_21
Print ISBN: 978-3-319-92999-6
Online ISBN: 978-3-319-93000-8