DUK-SVD: dynamic dictionary updating for sparse representation of a long-time remote sensing image sequence
Sparse representations of data and signals have drawn considerable attention in the past decade. In this paper, we focus on the problem of training high-efficacy dictionaries for massive long-time sequences of remote sensing images. By extending the classical K-SVD, we propose a new dictionary learning algorithm. Unlike K-SVD, the proposed incremental algorithm selectively trains a certain number of atoms whenever a new batch of sample data is added to the training process; the current dictionary is then replenished with the selected and enhanced atoms. New atoms are initialized using information entropy, and an uncertainty metric is introduced to determine whether new atoms should be added to the current dictionary. To represent the long-time sequence data set efficiently and sparsely, we also de-correlate the dictionary with respect to the new atoms by introducing a mutual coherence constraint into the atom-updating stage. The method presented in this paper aims to train the dictionary adaptively and dynamically from big data. Two other state-of-the-art dictionary learning methods that can also train dictionaries on relatively large data, online dictionary learning (ODL) and the recursive least squares dictionary learning algorithm (RLS-DLA), are comprehensively compared with the proposed algorithm under both the sparsity model and the error model. Under the sparsity model, the reconstruction error of the DUK-SVD dictionary was smaller than that of ODL and RLS-DLA; under the error model, the sparsity of DUK-SVD was higher than that of ODL and RLS-DLA. We also observe that, under the sparsity model, the proposed DUK-SVD often consumes less computing time than ODL.
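To make the two ingredients named above concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of the standard K-SVD rank-1 atom update that DUK-SVD extends, together with the mutual coherence measure that the proposed constraint is designed to keep small; all function names and the random test data are assumptions for illustration.

```python
import numpy as np

def mutual_coherence(D):
    # Maximum absolute inner product between distinct, normalized atoms.
    # A de-correlation constraint aims to keep this value small.
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

def ksvd_atom_update(Y, D, X, k):
    # Classical K-SVD update of atom k: restrict to the signals that use
    # atom k, form the residual without its contribution, and replace the
    # atom/coefficients with the best rank-1 approximation of that residual.
    omega = np.nonzero(X[k, :])[0]
    if omega.size == 0:
        return D, X                      # atom unused in this batch
    E = Y[:, omega] - D @ X[:, omega] + np.outer(D[:, k], X[k, omega])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                    # updated atom (unit norm)
    X[k, omega] = s[0] * Vt[0, :]        # updated coefficients
    return D, X
```

The incremental variant described in the abstract would apply such updates only to a selected subset of atoms per incoming batch, rejecting candidate atoms whose addition raises the dictionary's mutual coherence too much.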
Keywords: Long-time sequence · Sparse representation · Dictionary learning · Remote sensing
This work is supported by the National Natural Science Foundation of China (Nos. 41571413 and 41471368).
Compliance with ethical standards
Conflict of interest
All authors declare that they have no conflict of interest.
This article does not contain any studies with human participants performed by any of the authors.
- Bottou L, Bousquet O (2008) The tradeoffs of large scale learning. In: Platt JC, Koller D, Singer Y, Roweis S (eds) Advances in neural information processing systems, Vancouver, British Columbia, Canada, pp 161–168
- Chen D, Hu Y, Wang L, Zomaya A, Li X (2016) H-PARAFAC: hierarchical parallel factor analysis of multidimensional big data. IEEE Trans Parallel Distrib Syst PP(99):1–1
- Davis G, Mallat S, Avellaneda M (1994) Adaptive nonlinear approximations. Technical report, New York University
- Engan K, Aase SO, Husoy JH (1999) Method of optimal directions for frame design. In: ICASSP, vol 05, pp 2443–2446
- Jiang Z, Zhang G, Davis LS (2012b) Submodular dictionary learning for sparse coding. In: 2012 IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 3418–3425
- Mailhé B, Barchiesi D, Plumbley MD (2012) INK-SVD: learning incoherent dictionaries for sparse representations. In: Proc. IEEE int. conf. acoust., speech signal process. (ICASSP), pp 3573–3576
- Mairal J, Bach F, Ponce J, Sapiro G (2009) Online dictionary learning for sparse coding. In: Proceedings of the 26th annual international conference on machine learning, ICML ’09. ACM, New York, pp 689–696
- Needell D, Tropp JA (2008) CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Technical report, California Institute of Technology, Pasadena
- Palm G, Schwenker F, Sommer FT, Strey A (1993) Neural associative memories. Biol Cybern 36:36–19
- Pati YC, Rezaiifar R, Krishnaprasad PS (1993) Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In: Proceedings of the 27th annual Asilomar conference on signals, systems, and computers, pp 40–44
- Ramírez I, Lecumberry F, Sapiro G (2009) Sparse modeling with universal priors and learned incoherent dictionaries. Technical report, University of Minnesota