Compressive System Identification

  • Chapter
Compressed Sensing & Sparse Filtering

Part of the book series: Signals and Communication Technology (SCT)

Abstract

The first part of this chapter presents a novel Kalman filtering-based method for estimating the coefficients of sparse, or more broadly, compressible autoregressive models from fewer observations than are normally required. By virtue of its (unscented) Kalman filter mechanism, the derived method addresses the main difficulties of the underlying estimation problem. In particular, it processes observations sequentially and is shown to attain good recovery performance, particularly under substantial deviations from the ideal conditions assumed by the theory of compressive sensing. In the remaining part of the chapter we derive several information-theoretic bounds pertaining to the problem at hand. The obtained bounds relate the complexity of the autoregressive process to the attainable estimation accuracy through a novel measure of complexity, which is suggested here as a substitute for the generally incomputable restricted isometry property.
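
To make the filtering mechanism concrete, here is a minimal Python/NumPy sketch of the general idea: a Kalman filter estimates the AR coefficients from the observed series, while an l1-norm pseudo-measurement nudges the estimate toward a sparse solution. This is an illustration under stated assumptions, not the chapter's exact algorithm: the function name sparse_ar_kf, the noise variances, and the sign-based linearisation of the pseudo-measurement are illustrative choices, and the chapter's own method employs an unscented filter rather than an explicit linearisation.

```python
import numpy as np

def sparse_ar_kf(y, p, q0=100.0, r=1e-2, r_pm=1.0):
    """Sketch: recover sparse AR(p) coefficients a from a scalar series y.

    Measurement model:    y[k] = phi[k].T @ a + v[k],  phi[k] = [y[k-1], ..., y[k-p]]
    Sparsity enforcement: pseudo-measurement 0 = ||a||_1 + e, linearised via sign(a)
    """
    y = np.asarray(y, dtype=float)
    a = np.zeros(p)          # state estimate: AR coefficients
    P = q0 * np.eye(p)       # state covariance (diffuse prior)
    for k in range(p, len(y)):
        phi = y[k - p:k][::-1]           # p most recent samples, newest first
        # --- standard Kalman update against the observed sample y[k] ---
        S = phi @ P @ phi + r            # innovation variance
        K = P @ phi / S                  # Kalman gain
        a = a + K * (y[k] - phi @ a)
        P = P - np.outer(K, phi @ P)
        # --- pseudo-measurement update: treat 0 = ||a||_1 + e as an extra observation ---
        h = np.sign(a)                   # (sub)gradient of ||a||_1
        S = h @ P @ h + r_pm
        K = P @ h / S
        a = a + K * (0.0 - np.abs(a).sum())
        P = P - np.outer(K, h @ P)
    return a

# Toy usage (hypothetical data): a sparse AR(20) process with three active lags.
rng = np.random.default_rng(0)
a_true = np.zeros(20)
a_true[[0, 4, 11]] = [0.4, -0.25, 0.2]   # lags 1, 5 and 12
y = np.zeros(500)
for k in range(20, 500):
    y[k] = a_true @ y[k - 20:k][::-1] + 0.1 * rng.standard_normal()
print(np.round(sparse_ar_kf(y, p=20), 2))
```

The pseudo-measurement 0 = ||a||_1 + e acts as a soft sparsity constraint: its gain shrinks as the coefficient covariance contracts, so the shrinkage fades once the data have identified the active lags, while r_pm is the tuning knob that trades sparsity against bias.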

Author information

Corresponding author

Correspondence to Avishy Y. Carmi.

Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Carmi, A.Y. (2014). Compressive System Identification. In: Carmi, A., Mihaylova, L., Godsill, S. (eds) Compressed Sensing & Sparse Filtering. Signals and Communication Technology. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38398-4_9

  • DOI: https://doi.org/10.1007/978-3-642-38398-4_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-38397-7

  • Online ISBN: 978-3-642-38398-4

  • eBook Packages: Engineering, Engineering (R0)
