Efficient Sparse Representation and Modeling

  • Hong Cheng
Part of the Advances in Computer Vision and Pattern Recognition book series (ACVPR)


This chapter describes several efficient sparse representation algorithms: the feature-sign search algorithm, graphical modeling methods, efficient sparse Bayesian learning, sparse quantization, hashed sparse representation, and compressive feature methods. The feature-sign search algorithm solves the constrained sparse coding model in a greedy fashion. Graphical models can speed up inference and can be applied efficiently to Bayesian compressive sensing. Efficient sparse Bayesian learning likewise solves the sparse Bayesian learning model greedily. Sparse quantization quantizes features efficiently for use in classification while reducing memory consumption. Hashed sparse representation uses hashing to search efficiently for the nonzero positions of a sparse representation. Compressive features exploit compressed sensing theory to compress features, reducing the computational cost of downstream algorithms.
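As a rough illustration of the sparse quantization idea, the sketch below binarizes a feature vector by keeping only its k largest-magnitude entries. This is a minimal toy version for intuition, not the chapter's exact formulation; the function name and parameters are illustrative.

```python
import numpy as np

def sparse_quantize(x, k):
    """Toy binary sparse quantization: mark the k largest-magnitude
    entries of x with 1 and zero out everything else."""
    q = np.zeros_like(x, dtype=np.int8)
    # argpartition finds the indices of the k largest |x_i| in O(n)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    q[idx] = 1
    return q

x = np.array([0.1, -2.0, 0.05, 1.5, -0.3])
print(sparse_quantize(x, 2))  # → [0 1 0 1 0]
```

Because the result is a k-sparse binary code, it can be stored as k indices rather than a dense vector, which is where the memory saving comes from.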
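The compressive feature idea can likewise be sketched in a few lines: a random measurement matrix projects a high-dimensional feature into a much lower dimension, and by compressed sensing theory the projection approximately preserves the information of sparse signals. The dimensions and the Gaussian measurement matrix below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 1000, 50                                  # original and compressed dimensions
phi = rng.standard_normal((m, d)) / np.sqrt(m)   # random Gaussian measurement matrix
feature = rng.standard_normal(d)                 # a high-dimensional feature vector

compressed = phi @ feature                       # m-dimensional compressive feature
print(compressed.shape)                          # → (50,)
```

Downstream algorithms then operate on the 50-dimensional projection instead of the original 1000-dimensional feature, which is the source of the computational saving.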


Sparse Representation · Sparse Code · Marginal Likelihood · Variable Node · Factor Node
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag London 2015

Authors and Affiliations

  1. University of Electronic Science and Technology of China, Chengdu, China
