
Sparse Coding Extreme Learning Machine for Classification

  • Conference paper
Proceedings of ELM-2015 Volume 2

Part of the book series: Proceedings in Adaptation, Learning and Optimization ((PALO,volume 7))

Abstract

As a supervised learning algorithm, the extreme learning machine (ELM) was proposed for single-hidden-layer feedforward neural networks (SLFNs) and has shown strong generalization performance. ELM randomly assigns the weights and biases between the input and hidden layers and trains only the weights between the hidden and output layers. Physiological research has shown that neurons within the same layer laterally inhibit one another, so that the output of each layer is a form of sparse code. However, such lateral inhibition is difficult to accommodate with the random feature mapping used in ELM. This paper therefore proposes a sparse coding ELM (ScELM) algorithm, which maps the input feature vector into a sparse representation. In the proposed ScELM algorithm, sparse coding is performed in an unsupervised way, in the sense that the dictionary is randomly assigned rather than learned, and a gradient projection (GP) based method is used to solve the sparse coding problem. The output weights are then trained in the same supervised way as in ELM. Experimental results on benchmark databases show that the proposed ScELM algorithm outperforms other state-of-the-art methods in terms of classification accuracy.

This work is supported by National Natural Science Foundation of China (NSFC) under grant 61473089.
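The pipeline described in the abstract — a fixed random dictionary, sparse coding of each input, and ELM-style least-squares training of the output weights — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it substitutes ISTA (iterative soft thresholding) for the paper's gradient-projection solver, and the dimensions, the regularization weight `lam`, the ridge term, and the synthetic data are all arbitrary assumptions.

```python
import numpy as np

def ista_sparse_code(X, D, lam=0.1, n_iter=100):
    """Sparse-code each row of X against dictionary D (atoms in columns)
    by minimizing 0.5*||x - D s||^2 + lam*||s||_1 with ISTA, used here
    as a stand-in for the paper's gradient-projection solver."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    S = np.zeros((X.shape[0], D.shape[1]))
    for _ in range(n_iter):
        S = S - (S @ D.T - X) @ D / L          # gradient step on the quadratic term
        S = np.sign(S) * np.maximum(np.abs(S) - lam / L, 0.0)  # soft threshold
    return S

# ScELM sketch: a random, fixed dictionary replaces ELM's random feature
# mapping; the output weights are trained by regularized least squares,
# exactly as in standard ELM.
rng = np.random.default_rng(0)
n_samples, n_features, n_atoms, n_classes = 200, 10, 50, 3
X = rng.standard_normal((n_samples, n_features))
T = np.eye(n_classes)[rng.integers(0, n_classes, n_samples)]  # one-hot targets

D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
H = ista_sparse_code(X, D)                     # sparse hidden representation
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_atoms), H.T @ T)
pred = np.argmax(H @ beta, axis=1)             # predicted class per sample
```

The soft-thresholding step is what produces exact zeros in the hidden representation, which is the "sparse code" the paper contrasts with ELM's dense random feature mapping.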




Author information


Corresponding author

Correspondence to Yuanlong Yu.



Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Sun, Z., Yu, Y. (2016). Sparse Coding Extreme Learning Machine for Classification. In: Cao, J., Mao, K., Wu, J., Lendasse, A. (eds) Proceedings of ELM-2015 Volume 2. Proceedings in Adaptation, Learning and Optimization, vol 7. Springer, Cham. https://doi.org/10.1007/978-3-319-28373-9_12


  • DOI: https://doi.org/10.1007/978-3-319-28373-9_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-28372-2

  • Online ISBN: 978-3-319-28373-9

  • eBook Packages: Engineering, Engineering (R0)
