Abstract
As a supervised learning algorithm, the extreme learning machine (ELM) was proposed for single-hidden-layer feedforward neural networks (SLFNs) and has shown strong generalization performance. ELM randomly assigns the weights and biases between the input and hidden layers and trains only the weights between the hidden and output layers. Physiological research has shown that neurons in the same layer laterally inhibit one another, so that the output of each layer is a sparse code. However, it is difficult to accommodate lateral inhibition by directly using random feature mapping as in ELM. This paper therefore proposes a sparse coding ELM (ScELM) algorithm, which maps the input feature vector into a sparse representation. In the proposed ScELM algorithm, the sparse coding is unsupervised in the sense that the dictionary is randomly assigned rather than learned, and a gradient projection (GP) based method is used to compute the sparse codes. The output weights are trained in the same supervised way as in ELM. Experimental results on benchmark databases show that the proposed ScELM algorithm outperforms other state-of-the-art methods in terms of classification accuracy.
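The pipeline described in the abstract can be sketched in a few lines: draw a random (unlearned) dictionary, sparse-code each input with an l1-regularized solver, and fit the output weights by regularized least squares as in standard ELM. The sketch below uses ISTA-style soft thresholding as a stand-in for the paper's gradient projection solver; all names, dimensions, and parameter values (`lam`, the ridge term, the number of atoms) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def sparse_code(X, D, lam=0.1, steps=300):
    """Approximately solve min_S 0.5*||X - S D||^2 + lam*||S||_1
    by proximal gradient descent (ISTA), a simple surrogate for
    the gradient projection method referenced in the paper."""
    lr = 1.0 / np.linalg.norm(D, 2) ** 2          # step size from the Lipschitz constant
    S = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(steps):
        grad = (S @ D - X) @ D.T                   # gradient of the quadratic term
        S = S - lr * grad
        S = np.sign(S) * np.maximum(np.abs(S) - lr * lam, 0.0)  # soft threshold -> sparsity
    return S

rng = np.random.default_rng(0)
n, d, n_atoms, n_classes = 120, 5, 40, 3
X = rng.standard_normal((n, d))                    # toy inputs
y = rng.integers(0, n_classes, n)                  # toy labels
D = rng.standard_normal((n_atoms, d))              # random dictionary (not learned)

H = sparse_code(X, D)                              # sparse hidden-layer representation
T = np.eye(n_classes)[y]                           # one-hot targets
# Supervised step, as in ELM: ridge-regularized least squares for output weights.
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_atoms), H.T @ T)
pred = (H @ beta).argmax(axis=1)
sparsity = np.mean(H == 0)                         # fraction of exactly-zero codes
```

In this sketch only `beta` is trained from labels; the hidden representation comes entirely from the random dictionary plus the l1 penalty, which is what replaces ELM's random sigmoid feature map.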
This work is supported by the National Natural Science Foundation of China (NSFC) under Grant 61473089.
© 2016 Springer International Publishing Switzerland
Sun, Z., Yu, Y. (2016). Sparse Coding Extreme Learning Machine for Classification. In: Cao, J., Mao, K., Wu, J., Lendasse, A. (eds) Proceedings of ELM-2015 Volume 2. Proceedings in Adaptation, Learning and Optimization, vol 7. Springer, Cham. https://doi.org/10.1007/978-3-319-28373-9_12
Print ISBN: 978-3-319-28372-2
Online ISBN: 978-3-319-28373-9