Regularization and Incoherence
A dictionary should be faithful to the signals it represents, in the sense that the sparse representation error in learning is small, but it must also be reliable when recovering sparse representations. A direct way to obtain good recovery guarantees is to modify the objective of the DL optimization such that the resulting dictionary is incoherent, meaning that the atoms are generally far from one another. Alternatively, we can explicitly impose bounds on the mutual coherence. A related modification of the objective is regularization, in the usual form encountered in least squares problems. Regularization has the additional benefit of steering the DL process away from the bottlenecks created by ill-conditioned dictionaries. We present several algorithms for regularization and for promoting incoherence and illustrate their benefits. The K-SVD family can be adapted to such approaches, with good results in the case of regularization. Other methods for obtaining incoherence rely on the gradient of a combined objective that includes the frame potential, or insert a decorrelation step into the standard algorithms. Such a step alternates projections onto two sets whose properties the Gram matrix of the dictionary should satisfy, which reduces the mutual coherence, with rotations of the dictionary, which restore its fidelity to the training signals. We also give a glimpse of the currently most efficient methods that minimize the mutual coherence of a frame regardless of the training signals.
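The ingredients of the decorrelation step described above can be sketched in a few lines of NumPy. This is an illustrative fragment only, not the chapter's algorithms: the function names are ours, the Gram projection is a simple clipping of the off-diagonal entries followed by a rank reduction (one plausible choice, in the spirit of alternating projections), and the rotation is the orthogonal Procrustes solution that best realigns the decorrelated dictionary with the training signals.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute off-diagonal entry of the Gram matrix of D,
    where D is an m x n dictionary with unit-norm columns."""
    G = D.T @ D
    return np.max(np.abs(G - np.diag(np.diag(G))))

def decorrelation_step(D, mu_target):
    """One illustrative projection step: bound the off-diagonal Gram
    entries by mu_target, then return to a rank-m dictionary."""
    m, n = D.shape
    G = D.T @ D
    # Projection 1: clip off-diagonal correlations, keep unit diagonal.
    H = np.clip(G, -mu_target, mu_target)
    np.fill_diagonal(H, 1.0)
    # Projection 2: nearest rank-m PSD Gram matrix via eigendecomposition.
    w, V = np.linalg.eigh(H)
    w = np.maximum(w, 0.0)
    idx = np.argsort(w)[::-1][:m]          # m largest eigenvalues
    D_new = np.sqrt(w[idx])[:, None] * V[:, idx].T
    # Renormalize the atoms to unit norm.
    return D_new / np.linalg.norm(D_new, axis=0, keepdims=True)

def rotation_step(D, Y, X):
    """Orthogonal Procrustes rotation: W = argmin ||Y - W D X||_F over
    orthogonal W, restoring fidelity to the training signals Y (with
    sparse representations X) without changing the Gram matrix of D."""
    U, _, Vt = np.linalg.svd(Y @ (D @ X).T)
    return (U @ Vt) @ D
```

Note that the rotation leaves the mutual coherence unchanged, since the Gram matrix of `W @ D` equals that of `D` for any orthogonal `W`; this is why the projection and rotation stages can be alternated without undoing each other's work.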