Signal, Image and Video Processing

Volume 8, Issue 8, pp 1613–1624

Sparsity aware consistent and high precision variable selection

  • T. Yousefi Rezaii
  • M. A. Tinati
  • S. Beheshti
Original Paper

Abstract

Variable selection is fundamental when dealing with sparse signals, which contain only a few nonzero elements. This situation arises in many signal processing areas, from high-dimensional statistical modeling to sparse signal estimation. This paper explores a new and efficient approach to modeling a system with underlying sparse parameters. The idea is to take noisy observations and estimate the minimum number of underlying parameters with acceptable estimation accuracy. The main challenge stems from the non-convex optimization problem that must be solved. The reconstruction stage employs a suitable objective function to estimate the original sparse signal by performing a variable selection procedure. This paper introduces an objective function that simultaneously recovers the true support of the underlying sparse signal while still achieving an acceptable estimation error. It is shown that the proposed method performs better variable selection than the other algorithms while approaching the lowest mean squared error in almost all cases.
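The sparsity-aware variable selection described in the abstract can be illustrated with a small numerical sketch. The sketch below is not the paper's proposed method; it solves the standard Lasso objective with iterative soft-thresholding (ISTA), and the problem sizes, noise level, regularization weight, and support threshold are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_lasso(A, y, lam, n_iter=500):
    """ISTA for min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1 (illustrative solver)."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the quadratic term, then shrinkage on the L1 term.
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, p, k = 50, 100, 5                       # observations, parameters, true nonzeros (assumed sizes)
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = 3.0 + rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)

x_hat = ista_lasso(A, y, lam=0.05)
support = np.flatnonzero(np.abs(x_hat) > 1e-3)  # estimated support (threshold is a heuristic)
```

Variable selection here amounts to reading off `support`: the L1 penalty drives most coefficients exactly to zero, so the surviving indices form the selected model.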

Keywords

Sparse signal reconstruction · Lasso · Variable selection · Estimation

Copyright information

© Springer-Verlag London 2012

Authors and Affiliations

  • T. Yousefi Rezaii (1, 2)
  • M. A. Tinati (1)
  • S. Beheshti (2)
  1. Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
  2. Department of Electrical and Computer Engineering, Ryerson University, Toronto, Canada