Clustering K-SVD for sparse representation of images
Abstract
K-singular value decomposition (K-SVD) is a frequently used dictionary learning (DL) algorithm that iterates between sparse coding and dictionary updating. The sparse coding process generates sparse coefficients for each training sample, and these coefficients induce clustering features. In applications such as image processing, the features of different clusters vary dramatically. However, all the atoms of the dictionary jointly represent the features, regardless of cluster, which reduces the accuracy of the sparse representation. To address this problem, we develop the clustering K-SVD (CK-SVD) algorithm for DL and a corresponding greedy algorithm for sparse representation. The atoms are divided into a set of groups, and each group of atoms is employed to represent the image features of a specific cluster. Hence, the features of all clusters can be utilized and the number of redundant atoms is reduced. Additionally, two practical extensions of the CK-SVD are provided. Experimental results demonstrate that the proposed methods provide more accurate sparse representations of images than the conventional K-SVD and its existing extensions. The proposed clustering DL model also has the potential to be applied to online DL.
Keywords
Dictionary learning · Sparse representation · Image processing

Abbreviations
CK-SVD: Clustering K-singular value decomposition
CS: Compressive sensing
DCK-SVD: Dynamic CK-SVD
DL: Dictionary learning
K-SVD: K-singular value decomposition
LS: Least squares
MOD: Method of optimal directions
OMP: Orthogonal matching pursuit
PSNR: Peak signal-to-noise ratio
SwCK-SVD: Sparsity-wise CK-SVD
1 Introduction
where t∈{1,2,⋯,ξ}, i∈{1,2,⋯,q}, and ∥·∥_{F} denotes the Frobenius norm. The notation d_{i} denotes the ith column of the dictionary D, also referred to as the ith atom. S is the matrix of sparse coefficients with respect to Z and D, obtained together with D in the DL process; s_{t} is the tth column of S.
To date, researchers have proposed various DL algorithms. In [5], Engan et al. proposed the well-known method of optimal directions (MOD). The MOD iterates between two processes: sparse coefficient computation and dictionary updating. The dictionary update is realized globally by a least squares (LS) computation in terms of the training samples and the sparse coefficients. In [6], Aharon et al. proposed another LS-based DL algorithm, referred to as the K-SVD. In contrast to the global update strategy of the MOD, the K-SVD updates the atoms of the dictionary separately. The MOD, the K-SVD, and their extensions are batch DL methods, i.e., all training samples are input simultaneously. However, when the training samples cannot all be obtained at once, online learning is required. In [7], Mairal et al. proposed the online DL (ODL) algorithm, which updates the atoms using only the newly input samples. This algorithm allows training samples to be input successively and thereby realizes online learning. Additionally, a set of DL methods extended from the MOD, K-SVD, and ODL have been proposed to improve the sparse representation accuracy or reduce the computational complexity [8, 9, 10, 11, 12, 13].
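The MOD's global LS update can be sketched in a few lines of NumPy; the function name and the unit-norm renormalization step are our own choices for illustration, not code from [5]:

```python
import numpy as np

def mod_dictionary_update(Z, S):
    """One MOD dictionary update: D = argmin_D ||Z - D S||_F^2,
    solved by least squares, followed by renormalizing each atom."""
    # D = Z S^T (S S^T)^{-1}, computed via a least-squares solve for stability
    D = np.linalg.lstsq(S.T, Z.T, rcond=None)[0].T
    D /= np.linalg.norm(D, axis=0, keepdims=True)  # unit l2-norm atoms
    return D
```

After this global update, the sparse coefficients S are recomputed with the new D, and the two steps alternate until convergence.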
Among these algorithms, the K-SVD is frequently used in image processing due to its generality and low complexity. The K-SVD algorithm consists of two processes, sparse coding and dictionary updating, which are executed alternately. In the sparse coding process, at most k sparse coefficients are computed for each training sample via greedy algorithms, inducing clustering features [14, 15, 16]. In the K-SVD, all the atoms of the dictionary jointly represent the training images, regardless of cluster. While representing different training samples, an atom may thus be employed by different clusters of features. In image processing applications, the features of different clusters vary dramatically, so this phenomenon may reduce the accuracy of the sparse representation. In [9], Nazzal et al. utilize the residual of the training samples to train a set of sub-dictionaries; however, the sub-dictionaries are not distinguished by cluster. In [11], Smith and Elad improve the K-SVD by considering only the used atoms in the dictionary updating process. In [31], Tariyal et al. propose deep DL, combining the concepts of DL and deep learning in a framework with multiple levels of dictionaries. In [30], Yi et al. build a hierarchical sparse representation framework consisting of a local histogram-based model, a weighted alignment pooling model, and a sparsity-based discriminative model. In [28], Rubinstein et al. propose the approximate K-SVD to reduce the computational complexity, which can be regarded as another implementation of the K-SVD. In [29], Mairal et al. develop a multiscale DL framework based on an efficient quadtree decomposition of the learned dictionary.
In this study, we aim to utilize the clusters of features of the training samples. We divide the atoms of the learned dictionary into a set of groups, and each group serves a specific cluster of features. This strategy improves the DL process in two respects. First, besides the image features of the original training samples, we also consider the features of the residuals of different clusters, which reduces the number of redundant atoms. Second, we develop a strategy to ensure that any given atom of the dictionary is utilized for only a specific cluster of features; hence, the atom is not influenced by the features of other clusters. Based on this strategy, we propose the CK-SVD algorithm, as well as a corresponding greedy recovery algorithm for computing sparse representations. Compared to the conventional K-SVD, the CK-SVD improves the sparse reconstruction accuracy without increasing the requirements or computational complexity of the DL process. Based on the clustering DL model, we also provide two practical extensions of the CK-SVD, which achieve adaptive sparsity and dynamic refinement of the atoms, respectively.
The remainder of this paper is organized as follows. Section 2 describes the aim of this study and introduces the proposed method. Section 3 provides the extensions of the CK-SVD. Section 4 presents the experimental results. Section 5 discusses the proposed clustering model and its potential application to online learning. Section 6 concludes the paper.
2 Proposed method
In this section, we first review the conventional K-SVD algorithm and describe the problem that needs to be addressed. We then introduce the proposed CK-SVD algorithm.
2.1 Problem formulation
where the vector \(\boldsymbol {z}_{t}\in \mathbb {R}^{n}\) denotes an arbitrary training sample. The above problem is commonly solved by greedy algorithms such as orthogonal matching pursuit (OMP) [17]. Specifically, in each iteration of the process for solving (3), the atom that yields the largest inner product with the residual of z_{t} is selected. Thus, k atoms are selected successively, inducing k clusters of features. For image samples, the objective features of different clusters vary greatly, yet the atoms jointly represent the features without considering the clusters. Hence, an arbitrary atom of the dictionary may be interfered with by features from different clusters.
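The greedy selection just described can be sketched as a minimal OMP in NumPy; the function name and the stopping tolerance are our own choices, and production implementations add further safeguards:

```python
import numpy as np

def omp(D, z, k):
    """Orthogonal matching pursuit: greedily select at most k atoms of D
    to approximate z, re-fitting the coefficients by least squares at
    each step so the residual stays orthogonal to the chosen atoms."""
    residual = z.copy()
    support = []
    s = np.zeros(D.shape[1])
    for _ in range(k):
        # select the atom with the largest inner product with the residual
        i = int(np.argmax(np.abs(D.T @ residual)))
        support.append(i)
        # project z onto the span of the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], z, rcond=None)
        residual = z - D[:, support] @ coef
        if np.linalg.norm(residual) < 1e-10:
            break
    s[support] = coef
    return s
```

Because the residual is re-orthogonalized against the selected atoms, no atom is picked twice, and each of the k selections corresponds to one cluster of features in the sense used above.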
Table 1 Number of times each atom is invoked by different clusters of features

Atom  H_1   H_2   H_3  | Atom  H_1   H_2   H_3  | Atom  H_1   H_2   H_3
1st      2   416    76 | 17th    44    35    40 | 33rd     3   420  4398
2nd      9    24    70 | 18th     4   332    20 | 34th     0  1349   163
3rd     17   114   416 | 19th     1   375   704 | 35th    65   418    73
4th    195     1     1 | 20th    20     5     2 | 36th   294    10     9
5th     16    38   102 | 21st     6   156   215 | 37th     7    57    64
6th      2   433    39 | 22nd  1339     0     1 | 38th   410     1     1
7th    108   119   190 | 23rd   331    39     8 | 39th   271     8    20
8th      6    96   282 | 24th   475     1     0 | 40th   445     2     0
9th    545     0     0 | 25th   152     7     5 | 41st   231    26    14
10th     0   193    72 | 26th    48   197    76 | 42nd    27    20    61
11th     0   779   965 | 27th     0  2729    26 | 43rd    30    35   108
12th     3   184   152 | 28th    12   505    78 | 44th    40    66   310
13th    58    35    42 | 29th    25     9    20 | 45th    12    39   139
14th  3079     2     0 | 30th   697     3     1 | 46th   258     3     0
15th   249     3     0 | 31st     0   150   205 | 47th    20    56   323
16th     0    93    58 | 32nd    17    10     6 | 48th    27    17    45
To address this problem, we propose the CK-SVD algorithm for DL and a corresponding greedy algorithm for sparse recovery, which are introduced in the following section.
2.2 CK-SVD for sparse representation of images
The above process is executed until the maximum number of coefficients is reached or the residual is small enough.
We update the atom d_{i} to be the first column of U, denoted u_{1}, and update the row of sparse coefficients \(\tilde {\boldsymbol {s}}_{\gamma _{i}}^{j}\) to be the product of Δ_{1,1} and the first column of V, denoted v_{1}.
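This atom update can be sketched in NumPy as follows, where ω collects the samples whose codes use atom i; the function layout is our own illustration, not Algorithm 2 verbatim:

```python
import numpy as np

def ksvd_atom_update(Z, D, S, i):
    """Update atom d_i and its coefficient row via a rank-1 SVD of the
    representation residual restricted to the samples that use atom i."""
    omega = np.nonzero(S[i, :])[0]   # samples whose codes use atom i
    if omega.size == 0:
        return D, S
    # residual of those samples with atom i's own contribution removed
    E = Z[:, omega] - D @ S[:, omega] + np.outer(D[:, i], S[i, omega])
    U, delta, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, i] = U[:, 0]                 # first left singular vector u_1
    S[i, omega] = delta[0] * Vt[0, :] # Delta_{1,1} times first column of V
    return D, S
```

Since the rank-1 SVD term is the optimal rank-1 approximation of the restricted residual E, the representation error over those samples can never increase after the update.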
3 Extensions of the CK-SVD
The proposed idea not only leads to the CK-SVD method but also builds a framework for DL. In other words, the CK-SVD can be further extended for better performance. Next, we introduce two practical extensions.
3.1 Sparsity-wise CK-SVD
For the standard CK-SVD, we fix the sparsity level, i.e., the number of sparse coefficients, for each training sample. However, this may lead to underfitting or overfitting of the sparse representation. To address this problem, we develop the sparsity-wise CK-SVD (SwCK-SVD), which employs multiple atoms to represent a cluster of features instead of a single atom. To obtain the SwCK-SVD from the CK-SVD, we set termination conditions that determine the number of atoms used for each training sample. The sparse recovery strategy is summarized in Algorithm 3. For each cluster, the sparse coding is realized via an iterative process, and the parameter a_{max} controls the maximum expected number of used atoms. In step 9, whether more atoms are required is determined by examining whether the residual is still sufficiently correlated with the remaining atoms. This operation allows the sparsity to adapt to each training sample, aiming to achieve satisfactory representation accuracy with as few atoms as possible. The parameter ρ controls the threshold of the termination condition, and we empirically suggest ρ=0.4∼0.6.
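One plausible form of the termination test in step 9 is sketched below; the exact rule is given in Algorithm 3, so this correlation check, the function name, and the tolerance are our own interpretation:

```python
import numpy as np

def needs_more_atoms(D_remaining, residual, rho=0.5):
    """Request another atom only while some remaining atom is still
    sufficiently correlated with the (normalized) residual.
    rho plays the role of the threshold parameter, suggested 0.4-0.6."""
    r_norm = np.linalg.norm(residual)
    if r_norm < 1e-12:          # residual already negligible: stop
        return False
    corr = np.abs(D_remaining.T @ residual) / r_norm
    return bool(np.max(corr) >= rho)
```

When every remaining atom is nearly orthogonal to the residual, adding further atoms would mostly fit noise, so the per-sample sparsity stops growing there.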
3.2 Dynamic CK-SVD
Although the sparse coding strategies of the K-SVD, the CK-SVD, and the SwCK-SVD differ, their dictionary updating strategies are the same: they all use the first principal component of the SVD result to update the dictionary and sparse coefficients (see steps 9 and 10 in Algorithm 2), while ignoring the other components. Under the CK-SVD framework, the first cluster contributes most to the representation, and later clusters contribute less. For instance, the second principal component of the SVD of a residual \(\boldsymbol {R}_{\gamma _{i}}\) in the first cluster, expressed as \(\boldsymbol {u}_{2}\boldsymbol {\Delta }\boldsymbol {v}^{*}_{2}\), may be more significant than the first principal component of the SVD of a residual in the second cluster. Based on this consideration, we extend the CK-SVD to the dynamic CK-SVD (DCK-SVD), in which the atoms of different clusters are refined after each iterative cycle. The dictionary updating strategy is given in Algorithm 4. In each iterative cycle, we use the second component of the SVD result with respect to the most used atom in D_{l} to replace the least used atom in the next cluster of the dictionary. This makes the dictionaries dynamic: the atoms are refined after each iterative cycle, and those contributing little to the representation are abandoned.
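The core replacement step can be sketched as follows; the surrounding bookkeeping (usage counting, cluster indexing) is specified in Algorithm 4, so the argument names here are hypothetical:

```python
import numpy as np

def refine_next_cluster(E_most_used, D_next, usage_next):
    """Replace the least-used atom of the next cluster's sub-dictionary
    with the second principal component (second left singular vector) of
    the residual SVD belonging to the current cluster's most-used atom."""
    U, delta, Vt = np.linalg.svd(E_most_used, full_matrices=False)
    j = int(np.argmin(usage_next))   # least-used atom in the next cluster
    D_next[:, j] = U[:, 1]           # second left singular vector u_2
    return D_next
```

The new atom is a unit vector by construction (a column of U), so no extra normalization is needed before the next sparse coding pass.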
4 Results
In this section, we provide the experimental results and analysis. The Berkeley dataset is employed for the experiments [18]. The experiments are organized as follows. First, we conducted the standard CK-SVD-based DL process and the conventional K-SVD-based DL process on the training dataset, in order to verify the improvement of the CK-SVD over the conventional K-SVD. Second, we applied the different dictionaries to compressive sensing, a typical application in the field of image processing. Besides the K-SVD and the standard CK-SVD, we also employed the SwCK-SVD, the DCK-SVD, and two existing extensions of the K-SVD, proposed in [28] and [29], respectively.
4.1 Experiments on sparse representation
It can be noted that, with the same parameters, the dictionaries trained by the CK-SVD provide more accurate sparse representations of the test images. As the number of iterations increases, the performance of the dictionaries improves; when the number of iterations exceeds 10, the improvement slows. Similarly, the accuracy of the sparse representations benefits from a larger q_{0}. However, the gain gradually diminishes, and a too large q_{0} would increase the computational complexity of sparse coding and dictionary updating. A larger number of sparse coefficients also improves the performance of the dictionaries, but it is not advisable to set k too large, as this would reduce the sparsity of the images.
4.2 Applied to compressive sensing
This problem can be directly solved by greedy algorithms [17, 24, 25, 26], and then the original signal x can be obtained by (9).
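A minimal sketch of this recovery step: with measurements y = Φx and a k-sparse synthesis x = Ds, OMP is run on the equivalent dictionary A = ΦD, and x is then synthesized from the recovered coefficients. Names and structure are our own simplification:

```python
import numpy as np

def cs_reconstruct(Phi, D, y, k):
    """Recover x from y = Phi x, assuming x = D s with s k-sparse:
    run OMP over the equivalent dictionary A = Phi D, then x = D s."""
    A = Phi @ D
    residual = y.copy()
    support, s = [], np.zeros(D.shape[1])
    for _ in range(k):
        # pick the equivalent-dictionary column best matching the residual
        i = int(np.argmax(np.abs(A.T @ residual)))
        support.append(i)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < 1e-12:
            break
    s[support] = coef
    return D @ s                     # synthesize the signal, as in (9)
```

The same routine applies per image patch, with Φ the per-patch measurement matrix and D any of the learned dictionaries compared below.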
Table 2 Comparison of PSNR (dB) of the reconstructed image “Elephant” for different numbers of measurements and different patch sizes

Methods         | Patch size 6×6                  | Patch size 8×8
                | m = 9  m = 12  m = 15  m = 18   | m = 14  m = 20  m = 26  m = 32
K-SVD           | 20.04  21.97   23.16   23.70    | 19.85   21.83   23.14   23.92
Method in [28]  | 19.89  21.90   22.82   23.53    | 19.78   21.66   22.91   23.77
Method in [29]  | 26.06  26.83   27.24   27.69    | 25.73   26.80   27.32   27.75
CK-SVD          | 27.23  27.89   28.38   28.60    | 26.95   27.74   28.40   28.81
SwCK-SVD        | 27.65  28.82   28.79   28.97    | 27.21   28.24   28.70   29.03
DCK-SVD         | 27.44  27.92   28.51   28.78    | 27.07   27.93   28.45   28.90
Table 3 Comparison of PSNR (dB) of the reconstructed image “Horse” for different numbers of measurements

Methods         | Patch size 6×6                  | Patch size 8×8
                | m = 9  m = 12  m = 15  m = 18   | m = 14  m = 20  m = 26  m = 32
K-SVD           | 17.90  19.54   20.51   21.22    | 17.32   19.27   20.60   21.39
Method in [28]  | 17.95  19.53   20.44   21.08    | 17.40   19.43   20.55   21.36
Method in [29]  | 23.77  24.69   25.12   25.58    | 23.41   24.56   25.20   25.63
CK-SVD          | 24.79  25.26   26.01   26.34    | 24.33   25.02   25.94   26.30
SwCK-SVD        | 25.21  25.64   26.37   26.58    | 24.74   25.30   26.32   26.55
DCK-SVD         | 24.93  25.37   26.18   26.39    | 24.51   25.20   26.17   26.42
Table 4 Comparison of PSNR (dB) of the reconstructed image “Penguin” for different numbers of measurements

Methods         | Patch size 6×6                  | Patch size 8×8
                | m = 9  m = 12  m = 15  m = 18   | m = 14  m = 20  m = 26  m = 32
K-SVD           | 17.85  19.42   20.40   21.16    | 17.02   19.21   20.27   21.08
Method in [28]  | 17.53  19.37   20.34   21.12    | 17.06   19.20   20.21   21.10
Method in [29]  | 25.45  26.69   27.40   28.02    | 25.13   26.50   27.22   27.97
CK-SVD          | 27.48  28.79   29.61   30.34    | 27.19   28.60   29.65   30.53
SwCK-SVD        | 27.87  29.22   29.94   30.63    | 27.63   29.00   29.98   30.86
DCK-SVD         | 27.60  29.01   29.75   30.42    | 27.44   28.80   29.72   30.55
Table 5 Comparison of PSNR (dB) of the reconstructed image “Horse” using the DCT basis for initialization

Methods         | Patch size 6×6                  | Patch size 8×8
                | m = 9  m = 12  m = 15  m = 18   | m = 14  m = 20  m = 26  m = 32
K-SVD           | 19.31  20.98   22.25   23.76    | 19.24   21.16   22.50   24.07
Method in [28]  | 19.42  21.13   22.30   23.82    | 19.49   21.32   22.67   24.28
Method in [29]  | 24.67  25.83   26.34   26.60    | 24.22   25.37   26.18   26.49
CK-SVD          | 25.55  26.16   26.79   27.20    | 25.41   25.98   26.62   27.03
SwCK-SVD        | 25.87  26.49   27.07   27.36    | 25.70   26.28   26.91   27.34
DCK-SVD         | 25.69  26.27   26.95   27.12    | 25.60   26.04   26.75   27.11
Table 6 Comparison of PSNR (dB) of the reconstructed image “Horse” using pre-learned dictionaries for initialization

Methods         | Patch size 6×6                  | Patch size 8×8
                | m = 9  m = 12  m = 15  m = 18   | m = 14  m = 20  m = 26  m = 32
Pre-learned     | 18.72  19.89   21.36   22.53    | 18.66   19.90   21.51   22.74
K-SVD           | 19.23  20.79   22.07   23.15    | 19.10   20.99   22.28   23.87
Method in [28]  | 19.26  20.94   22.01   23.04    | 19.20   21.18   22.49   23.35
Method in [29]  | 22.45  23.62   24.17   24.43    | 22.02   23.08   23.96   24.30
CK-SVD          | 23.32  23.90   24.48   24.85    | 23.07   23.69   24.31   24.78
SwCK-SVD        | 23.61  24.14   24.73   25.12    | 23.39   23.98   24.65   25.03
DCK-SVD         | 23.47  24.02   24.60   24.99    | 23.22   23.86   24.54   24.95
The results in Table 5 demonstrate that the overcomplete DCT basis can also be used as the initial dictionary for the proposed DL methods. Compared to the Gaussian random initialization, the DCT-initialized dictionary provides better accuracy, regardless of the DL method. Table 6 shows that pre-learned dictionaries can still be trained with new samples, with or without online DL methods, and the performance improves noticeably after the new training process. Similarly, the proposed methods outperform the conventional methods.
4.3 Applied to image denoising

- We first added zero-mean Gaussian white noise with variance σ to the original images.
- We selected 20,000 patches of the noisy “Koala” image and 40,000 patches of the noisy “Pepper” image, respectively. The size of all patches was 8×8.
- We used the vectorizations of these patches to conduct the DL process for the two test images.
- We employed the learned dictionaries for denoising, using the strategy given in [32].
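The per-patch sparse-coding step of the pipeline above can be sketched as follows. This is a simplification of the strategy in [32], which additionally averages overlapping patches and balances a noise-dependent fidelity term; the function name and sparsity parameter are our own:

```python
import numpy as np

def denoise_patches(P_noisy, D, k):
    """Denoise a matrix of vectorized noisy patches (one per column) by
    sparse-coding each patch over the learned dictionary D via OMP and
    reconstructing it from its (at most) k-sparse code."""
    P_hat = np.empty_like(P_noisy)
    for t in range(P_noisy.shape[1]):
        z = P_noisy[:, t]
        residual, support = z.copy(), []
        for _ in range(k):
            i = int(np.argmax(np.abs(D.T @ residual)))
            support.append(i)
            coef, *_ = np.linalg.lstsq(D[:, support], z, rcond=None)
            residual = z - D[:, support] @ coef
            if np.linalg.norm(residual) < 1e-12:
                break
        # keep only the sparse part of the patch; the rest is treated as noise
        P_hat[:, t] = D[:, support] @ coef
    return P_hat
```

The denoised image is then reassembled from the patches; with overlapping patches, each pixel is averaged over all patches that cover it, as in [32].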
5 Discussion
As mentioned in Section 3, this study not only develops a K-SVD-based method but also provides a clustering DL model. The potential and advantages of the clustering model come mainly from two aspects. First, the different clusters of the dictionary are isolated from each other. Thus, an atom of the learned dictionary can concentrate on a specific type of feature, leading to greater utilization of the atoms. In other words, a common phenomenon of the conventional DL model can be avoided, namely that some atoms are widely employed by the training samples whereas others are seldom used. Second, the clustering DL model makes it possible to adjust the sparsity for different training samples and therefore to reduce the underfitting or overfitting of the sparse representation. We provide the SwCK-SVD, which adaptively selects the number of atoms used for each cluster. We believe the adaptive strategy could also be implemented by adjusting the number of clusters. This potential is supported by the fact that the SwCK-SVD performs noticeably better than the standard CK-SVD.
Future work could extend the clustering DL model to online learning. In this study, we focus on batch DL, and the dictionary updating strategy is based on the SVD. We believe the proposed clustering DL model is not limited to batch DL but can be extended to online DL problems. In [7], the standard ODL method is proposed, in which information-storing variables are updated whenever a new training sample arrives; these variables are then used to update the learned dictionary through an optimization approach. For clustering DL, we may utilize a set of information-storing variables for the different clusters of the dictionary. When a new sample arrives, we could employ Algorithm 1 or Algorithm 3 to solve the sparse coefficients with respect to the different clusters of sub-dictionaries. Then, the sparse coefficients are used to update the information-storing variables of each cluster. Finally, we could update the sub-dictionaries based on the information-storing variables, so that clustering ODL is achieved. We believe clustering ODL has the potential to be applied in cases where the training samples cannot be obtained simultaneously.
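The envisioned per-cluster bookkeeping might look like the following sketch, which borrows the information-storing variables A and B and the block-coordinate atom update from the ODL of [7]; the class layout is hypothetical and only illustrates how one such state could be kept per cluster:

```python
import numpy as np

class ClusterODLState:
    """Per-cluster information-storing variables in the style of [7]:
    A accumulates s s^T and B accumulates z s^T over arriving samples.
    One such state would be kept for each cluster of atoms."""

    def __init__(self, n, q):
        self.A = np.zeros((q, q))
        self.B = np.zeros((n, q))

    def absorb(self, z, s):
        # update the sufficient statistics with a newly arrived sample
        self.A += np.outer(s, s)
        self.B += np.outer(z, s)

    def update_dictionary(self, D):
        # one pass of block-coordinate descent over this cluster's atoms
        for j in range(D.shape[1]):
            if self.A[j, j] < 1e-12:   # atom never used yet: skip
                continue
            u = (self.B[:, j] - D @ self.A[:, j]) / self.A[j, j] + D[:, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)
        return D
```

Each arriving sample would be coded against the cluster's sub-dictionary (via Algorithm 1 or 3), absorbed into that cluster's state, and the sub-dictionary refreshed from the accumulated statistics.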
6 Conclusions
We proposed a DL method named CK-SVD for the sparse representation of images. In the CK-SVD, the atoms of the dictionary are divided into a set of groups, and each group of atoms serves the image features of a specific cluster. Hence, the features of all clusters can be utilized and redundant atoms are avoided. Based on this strategy, we introduced the CK-SVD and two practical extensions. Experimental results demonstrated that the proposed methods provide more accurate sparse representations of images than the conventional K-SVD algorithm and its extended methods.
Acknowledgements
The authors would like to thank the National Key R&D Program of China and the National Natural Science Foundation of China for the financial support.
Authors’ contributions
JF provided the methodology. RZ wrote the original manuscript. HY reviewed and edited the manuscript. JF and LR funded this study. All authors read and approved the final manuscript.
Funding
This work was supported by the National Key R&D Program of China under Grant 2017YFD0700302 and by the National Natural Science Foundation of China under Grant 51705193.
Consent for publication
This manuscript does not contain any individual person’s data in any form.
Competing interests
The authors declare that they have no competing interests.
References
1. R. Rubinstein, A. M. Bruckstein, M. Elad, Dictionaries for sparse representation modeling. Proc. IEEE 98(6), 1045–1057 (2010).
2. X. Lu, D. Wang, W. Shi, D. Deng, Group-based single image super-resolution with online dictionary learning. EURASIP J. Adv. Signal Process. 2016(84), 1–12 (2016).
3. V. Naumova, K. Schnass, Fast dictionary learning from incomplete data. EURASIP J. Adv. Signal Process. 2018(12), 1–21 (2018).
4. L. Zhang, W. Zuo, D. Zhang, LSDT: latent sparse domain transfer learning for visual adaptation. IEEE Trans. Image Process. 25(3), 1177–1191 (2016).
5. K. Engan, S. O. Aase, J. H. Husøy, Multi-frame compression: theory and design. EURASIP Signal Process. 90(2), 2121–2140 (2000).
6. M. Aharon, M. Elad, A. Bruckstein, K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54(11), 4311–4322 (2006).
7. J. Mairal, F. Bach, J. Ponce, G. Sapiro, Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res. 11, 19–60 (2010).
8. B. Dumitrescu, P. Irofti, Regularized K-SVD. IEEE Signal Process. Lett. 24(3), 309–313 (2017).
9. M. Nazzal, F. Yeganli, H. Ozkaramanli, A strategy for residual component-based multiple structured dictionary learning. IEEE Signal Process. Lett. 22(11), 2059–2063 (2015).
10. J. K. Pant, S. Krishnan, Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning. IEEE Trans. Biomed. Circuits Syst. 8(2), 293–302 (2014).
11. L. N. Smith, M. Elad, Improving dictionary learning: multiple dictionary updates and coefficient reuse. IEEE Signal Process. Lett. 20(1), 79–82 (2013).
12. R. Zhao, Q. Wang, Y. Shen, J. Li, Multidimensional dictionary learning algorithm for compressive sensing-based hyperspectral imaging. J. Electron. Imaging 25(6), 063013 (2016).
13. K. Skretting, K. Engan, Recursive least squares dictionary learning algorithm. IEEE Trans. Signal Process. 58(4), 2121–2130 (2010).
14. J. A. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50(10), 2231–2242 (2004).
15. E. J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
16. E. J. Candès, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005).
17. J. A. Tropp, A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007).
18. D. Martin, C. Fowlkes, D. Tal, J. Malik, A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics, in Proc. IEEE Int. Conf. Comput. Vis. (IEEE, Vancouver, 2001), pp. 416–423.
19. D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
20. E. J. Candès, Compressive sampling, in Proc. Int. Congress of Mathematicians, Madrid, Spain, 3, 1433–1452 (2006).
21. A. Massa, P. Rocca, G. Oliveri, Compressive sensing in electromagnetics: a review. IEEE Antennas Propag. Mag. 57(1), 224–238 (2015).
22. D. Craven, B. McGinley, L. Kilmartin, M. Glavin, E. Jones, Compressed sensing for bioelectric signals: a review. IEEE J. Biomed. Health Inf. 19(2), 539–540 (2015).
23. Y. Zhang, L. Y. Zhang, et al., A review of compressive sensing in information security field. IEEE Access 4, 2507–2519 (2016).
24. D. Nion, N. D. Sidiropoulos, Tensor algebra and multidimensional harmonic retrieval in signal processing for MIMO radar. IEEE Trans. Signal Process. 58(11), 5693–5705 (2010).
25. W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55(5), 2230–2249 (2009).
26. D. L. Donoho, Y. Tsaig, I. Drori, J. L. Starck, Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory 58(2), 1094–1121 (2012).
27. L. Gan, Block compressed sensing of natural images, in Proc. IEEE Int. Conf. Digit. Signal Process. (IEEE, Wales, 2007), pp. 403–406.
28. R. Rubinstein, M. Zibulevsky, M. Elad, Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit. Technical Report CS-2008-08 (Technion, Haifa, 2008).
29. J. Mairal, G. Sapiro, M. Elad, Learning multiscale sparse representations for image restoration. Multiscale Model. Simul. 7(1), 214–241 (2008).
30. Y. Yi, Y. Cheng, C. Xu, Visual tracking based on hierarchical framework and sparse representation. Multimed. Tools Appl. 77(13), 16267–16289 (2018).
31. S. Tariyal, A. Majumdar, R. Singh, M. Vatsa, Deep dictionary learning. IEEE Access 4, 10096–10109 (2016).
32. M. Elad, M. Aharon, Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15(12), 3736–3745 (2006).
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.