
Cluster Sparsity Field: An Internal Hyperspectral Imagery Prior for Reconstruction

Published in: International Journal of Computer Vision

Abstract

Hyperspectral images (HSIs) have significant advantages over more traditional image types for a variety of computer vision applications due to the extra information available. The practical realities of capturing and transmitting HSIs, however, mean that they often exhibit large amounts of noise, or are undersampled to reduce the data volume. Methods for combating such image corruption are thus critical to many HSI applications. Here we devise a novel cluster sparsity field (CSF) based HSI reconstruction framework which explicitly models both the intrinsic correlation between measurements within the spectrum of a particular pixel, and the similarity between pixels due to the spatial structure of the HSI. These two priors have previously been shown to be effective, but have always been considered separately. By dividing the pixels of the HSI into a group of spatial clusters on the basis of their spectral characteristics, we define the CSF, a Markov random field based prior. In the CSF, a structured sparsity potential models the correlation between measurements within each spectrum, and a graph structure potential models the similarity between pixels in each spatial cluster. We then integrate CSF prior learning and image reconstruction into a unified variational framework for optimization, which makes the CSF prior image-specific and robust to noise. It also yields more accurate image reconstruction than existing HSI reconstruction methods, thus combating the effects of noise corruption or undersampling. Extensive experiments on HSI denoising and HSI compressive sensing demonstrate the effectiveness of the proposed method.


Notes

  1. It should be noted that other clustering methods could also be used instead of K-means.

  2. The derivation can be found in Appendix.

  3. http://www.cs.columbia.edu/CAVE/databases/multispectral/.

  4. http://cobweb.ecn.purdue.edu/~biehl/MultiSpec/.

  5. http://www.tec.army.mil/hypercube.

  6. http://www.ehu.eus/ccwintco/index.php?title=Hyperspectra_Remote_Sensing_Scenes.

  7. http://personalpages.manchester.ac.uk/staff/d.h.foster/Hyperspectral_images_of_natural_scenes_04.html.
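Note 1 above refers to the clustering step that groups HSI pixels into spatial clusters by their spectra. Purely as an illustration, here is a minimal NumPy sketch of that step on a toy cube; the `cluster_spectra` helper, the farthest-point initialization, and all shapes are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np

def cluster_spectra(hsi, k, iters=10, seed=0):
    """Cluster HSI pixels by spectrum with Lloyd's K-means (farthest-point init).

    hsi: (height, width, bands) cube; returns a (height, width) label map.
    """
    h, w, b = hsi.shape
    X = hsi.reshape(-1, b)                      # one row per pixel spectrum
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]         # greedy farthest-point seeding
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                    # nearest-centre assignment
        for j in range(k):                      # recompute centres as cluster means
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels.reshape(h, w)

# toy cube with two spectrally distinct halves
cube = np.zeros((8, 8, 5))
cube[:, :4, :] = 1.0
cube[:, 4:, :] = 5.0
labels = cluster_spectra(cube, k=2)
assert len(np.unique(labels)) == 2
assert labels[0, 0] != labels[0, 7]  # the two halves fall in different clusters
```

The resulting label map defines the spatial clusters over which the graph structure potential of the CSF is built.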


Acknowledgements

This work is supported in part by the National Natural Science Foundation of China (Nos. 61671385, 61231016, 61571354), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2017JM6021), the Innovation Foundation for Doctoral Dissertation of Northwestern Polytechnical University (No. CX201521) and an Australian Research Council Grant (No. FT120100969). Lei Zhang's contribution was made when he was a visiting student at the University of Adelaide.

Author information


Corresponding author

Correspondence to Wei Wei.

Additional information

Communicated by Srinivasa Narasimhan.

Lei Zhang and Wei Wei have contributed equally to this work.

Appendices

Appendix A \({\varvec{\Theta }}\)-subproblem

In this appendix, we give the detailed derivation for solving the \({\varvec{\Theta }}\)-subproblem with an alternating minimization scheme, which reduces the \({\varvec{\Theta }}\)-subproblem into four simpler subproblems over \({\varvec{\gamma }}_k\), \({\varvec{\varpi }}_k\), \({\varvec{\eta }}_k\), and \({\varvec{\nu }}_k\), respectively.

1.1 A.1 Optimization for \({\varvec{\gamma }}_k\)

Removing the irrelevant variables, we can obtain the subproblem over \({\varvec{\gamma }}_k\) as

$$\begin{aligned} \begin{aligned}&\min \limits _{{\varvec{\gamma }}_k} \Vert \mathbf{{Y}}_k\Vert ^2_{{\varvec{\Gamma }}_k} - n_k\log |{\varvec{\Lambda }}_k| + n_k\log |{\varvec{\Gamma }}_k| + \sum \limits _{j} \varpi _{jk}\gamma _{jk},\\&\quad = \min \limits _{{\varvec{\gamma }}_k} \Vert \mathbf{{Y}}_k\Vert ^2_{{\varvec{\Gamma }}_k} + n_k\log |{\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n{\mathbf{A}{\varvec{\Phi }}}|\\&\qquad + n_k\log |{\varvec{\Gamma }}_k| + \sum \limits _{j} \varpi _{jk}\gamma _{jk}. \end{aligned} \end{aligned}$$
(28)

Since the term \(f({\varvec{\gamma }}^{-1}_k) = \log |{\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n{\mathbf{A}{\varvec{\Phi }}}|\), viewed as a function of \({\varvec{\gamma }}^{-1}_k = [\gamma ^{-1}_{1k},\ldots ,\gamma ^{-1}_{n_dk}]^T\), renders Eq. (28) nonconvex, we instead work with a strict upper bound of \(f({\varvec{\gamma }}^{-1}_k)\):

$$\begin{aligned} \begin{aligned} f({\varvec{\gamma }}^{-1}_k) \le \mathbf{z}^T{\varvec{\gamma }}^{-1}_k - f^*(\mathbf{z}), \end{aligned} \end{aligned}$$
(29)

where \(f^*(\mathbf{z})\) is the concave conjugate function of \(f({\varvec{\gamma }}^{-1}_k)\) and \(\mathbf{z} = [z_1,\ldots ,z_{n_d}]^T\). The equality of Eq. (29) holds iff

$$\begin{aligned} \begin{aligned} \mathbf{z}&= \nabla _{{\varvec{\gamma }}^{-1}_k}\log |{\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n{\mathbf{A}{\varvec{\Phi }}}| \\&= \mathbf {diag}[({\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n{\mathbf{A} {\varvec{\Phi }}})^{-1}]. \end{aligned} \end{aligned}$$
(30)
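Equations (29)–(30) are the standard tangent-plane (concave conjugate) bound on a log-determinant. As a sanity check, the following self-contained sketch verifies numerically that the bound majorizes \(f\) everywhere and is tight at the touching point; a random positive-definite matrix `C` stands in for \({{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n{\mathbf{A}{\varvec{\Phi }}}\):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# C stands in for Sigma_k^{-1} + Phi^T A^T Sigma_n^{-1} A Phi (any PD matrix works)
B = rng.standard_normal((n, n))
C = B @ B.T + np.eye(n)

def f(x):
    # f(gamma_k^{-1}) = log|Gamma_k^{-1} + C| with Gamma_k^{-1} = diag(x)
    return np.linalg.slogdet(np.diag(x) + C)[1]

x0 = rng.uniform(0.5, 2.0, n)                 # the current gamma_k^{-1}
z = np.diag(np.linalg.inv(np.diag(x0) + C))   # Eq. (30): gradient of f at x0
f_star = z @ x0 - f(x0)                       # conjugate value at the touching point

# concavity: the tangent plane z^T x - f_star majorizes f, with equality at x0
for _ in range(200):
    x = rng.uniform(0.1, 5.0, n)
    assert f(x) <= z @ x - f_star + 1e-9
assert abs(f(x0) - (z @ x0 - f_star)) < 1e-9
```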

Substituting the upper bound in Eq. (29) into Eq. (28) and removing the irrelevant variables, the subproblem over \({\varvec{\gamma }}_k\) can be simplified as

$$\begin{aligned} \begin{aligned}&\min \limits _{{\varvec{\gamma }}_k} \Vert \mathbf{{Y}}_k\Vert ^2_{{\varvec{\Gamma }}_k} + n_k\mathbf{z}^T{\varvec{\gamma }}^{-1}_k + n_k\log |{\varvec{\Gamma }}_k| + \sum \limits _{j} \varpi _{jk}\gamma _{jk},\\&\quad =\min \limits _{{\varvec{\gamma }}_k} \sum \limits _j \left( \frac{\overline{y}_j + n_kz_j}{\gamma _{jk}} + n_k\log \gamma _{jk} + \varpi _{jk}\gamma _{jk} \right) , \end{aligned} \end{aligned}$$
(31)

where \(\overline{y}_j\) is the j-th entry of \(\overline{\mathbf{y}} = \mathbf {diag}(\mathbf{{Y}}_k\mathbf{{Y}}^T_k) = [\overline{y}_1,\ldots ,\overline{y}_{n_d}]^T\). Since the variance \({\varvec{\gamma }}_k \ge 0\), this convex optimization over \({\varvec{\gamma }}_k\) gives a closed form solution over \(\gamma _{jk}\) as

$$\begin{aligned} \begin{aligned} \gamma _{jk} = \left( \sqrt{4\varpi _{jk}(\overline{y}_j + n_kz_j) + n^2_k} - n_k\right) /\left( 2\varpi _{jk}\right) . \end{aligned} \end{aligned}$$
(32)
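The closed form in Eq. (32) is simply the positive root of the stationarity condition \(\varpi _{jk}\gamma ^2 + n_k\gamma - (\overline{y}_j + n_kz_j) = 0\). A quick numerical check of one summand of Eq. (31), with arbitrary scalar values standing in for \(\overline{y}_j\), \(n_k\), \(z_j\) and \(\varpi _{jk}\):

```python
import numpy as np

# arbitrary stand-in values for ybar_j, n_k, z_j and varpi_jk
ybar, n_k, z, varpi = 3.0, 5, 0.7, 2.0
a = ybar + n_k * z
g = lambda t: a / t + n_k * np.log(t) + varpi * t               # summand of Eq. (31)
t_star = (np.sqrt(4 * varpi * a + n_k**2) - n_k) / (2 * varpi)  # Eq. (32)

# stationarity: -a/t^2 + n_k/t + varpi vanishes at t_star,
# and no point on a fine grid does better
assert abs(-a / t_star**2 + n_k / t_star + varpi) < 1e-9
grid = np.linspace(0.01, 10.0, 2000)
assert g(t_star) <= g(grid).min() + 1e-9
```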

1.2 A.2 Optimization for \({\varvec{\varpi }}_k\)

Given \({\varvec{\gamma }}_k\), \({\varvec{\varpi }}_k\) can be estimated by solving the problem

$$\begin{aligned} \begin{aligned} \min \limits _{{\varvec{\varpi }}_k} \sum \limits _{j}\left( \varpi _{jk}\gamma _{jk} - 2\log \varpi _{jk}\right) . \end{aligned} \end{aligned}$$
(33)

This problem yields a closed form solution for \(\varpi _{jk}\) as

$$\begin{aligned} \begin{aligned} \varpi _{jk} = 2/{\gamma _{jk}}. \end{aligned} \end{aligned}$$
(34)
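Eq. (34) follows from setting the derivative \(\gamma _{jk} - 2/\varpi _{jk}\) of each summand in Eq. (33) to zero; a one-line numerical check with an arbitrary value standing in for \(\gamma _{jk}\):

```python
import numpy as np

gamma = 0.75                             # arbitrary stand-in for gamma_jk
h = lambda w: w * gamma - 2 * np.log(w)  # one summand of Eq. (33)
w_star = 2 / gamma                       # Eq. (34)

assert abs(gamma - 2 / w_star) < 1e-12   # derivative vanishes at w_star
assert h(w_star) < h(0.9 * w_star) and h(w_star) < h(1.1 * w_star)
```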

1.3 A.3 Optimization for \({\varvec{\eta }}_k\)

Similar to the optimization of \({\varvec{\gamma }}_k\), the subproblem over \({\varvec{\eta }}_k\) can be formulated as

$$\begin{aligned} \begin{aligned}&\min \limits _{{\varvec{\eta }}_k} \Vert \mathbf{{Y}}_k - \mathbf{{M}}_k\Vert ^2_{{{\varvec{\Sigma }}}_k} - n_k\log |{\varvec{\Lambda }}_k| + n_k\log |{{\varvec{\Sigma }}}_k|\\&\qquad + \sum \limits _{j}\left( \nu _{jk}\eta _{jk} - 2\log \nu _{jk}\right) \\&\quad = \min \limits _{{\varvec{\eta }}_k} \Vert \mathbf{{Y}}_k - \mathbf{{M}}_k\Vert ^2_{{{\varvec{\Sigma }}}_k} + n_k\log |{{\varvec{\Sigma }}}_{k}| \\&\qquad + \sum \limits _{j}\left( \nu _{jk}\eta _{jk} - 2\log \nu _{jk}\right) \\&\qquad + n_k\log |{\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n\mathbf{{A}}{\varvec{\Phi }}|. \end{aligned} \end{aligned}$$
(35)

Let \(\phi ({\varvec{\eta }}^{-1}_k) = \log | {\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n{\mathbf{A}{\varvec{\Phi }}}|\) with \({\varvec{\eta }}^{-1}_k = [\eta ^{-1}_{1k},\ldots ,\eta ^{-1}_{n_dk}]^T\). Then \(\phi ({\varvec{\eta }}^{-1}_k)\) admits the upper bound

$$\begin{aligned} \begin{aligned} \phi ({\varvec{\eta }}^{-1}_k) \le {\varvec{\alpha }}^T{\varvec{\eta }}^{-1}_k - \phi ^*({\varvec{\alpha }}), \end{aligned} \end{aligned}$$
(36)

where \(\phi ^*({\varvec{\alpha }})\) is the concave conjugate function with \({\varvec{\alpha }} = [\alpha _1,\ldots ,\alpha _{n_d}]^T\). The equality of this upper bound holds iff

$$\begin{aligned} \begin{aligned} {\varvec{\alpha }}&= \nabla _{{\varvec{\eta }}^{-1}_k}\log |{\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n{\mathbf{A}{\varvec{\Phi }}}|\\&= \mathbf {diag}\left[ ({\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n{\mathbf{A}{\varvec{\Phi }}})^{-1}\right] . \end{aligned} \end{aligned}$$
(37)

Substituting this upper bound into Eq. (35), we have a simpler subproblem over \({\varvec{\eta }}_k\) as

$$\begin{aligned} \begin{aligned}&\min \limits _{{\varvec{\eta }}_k} \Vert \mathbf{{Y}}_k - \mathbf{{M}}_k\Vert ^2_{{{\varvec{\Sigma }}}_k} + n_k{\varvec{\alpha }}^T{\varvec{\eta }}^{-1}_k + n_k\log |{{\varvec{\Sigma }}}_k|\\&\qquad + \sum \limits _{j}\left( \nu _{jk}\eta _{jk} - 2\log \nu _{jk}\right) \\&\quad = \min \limits _{{\varvec{\eta }}_k} \sum \limits _j \left( \frac{\widehat{y}_j + n_k\alpha _j}{\eta _{jk}} + n_k\log \eta _{jk} + \nu _{jk}\eta _{jk} \right) , \end{aligned} \end{aligned}$$
(38)

where \(\widehat{y}_j\) is the j-th entry of \(\widehat{\mathbf{y}} =\mathbf {diag}[(\mathbf{{Y}}_k - \mathbf{{M}}_k)(\mathbf{{Y}}_k - \mathbf{{M}}_k)^T] = [\widehat{y}_1,\ldots ,\widehat{y}_{n_d}]^T\). This convex optimization problem gives a closed form solution over \(\eta _{jk}\) as

$$\begin{aligned} \begin{aligned} \eta _{jk} = \frac{\sqrt{4\nu _{jk}(\widehat{y}_j + n_k\alpha _j) + n^2_k} - n_k}{2\nu _{jk}}. \end{aligned} \end{aligned}$$
(39)

1.4 A.4 Optimization for \({\varvec{\nu }}_k\)

Given \({\varvec{\eta }}_k\), we can estimate \({\varvec{\nu }}_k\) by solving

$$\begin{aligned} \begin{aligned} \min \limits _{{\varvec{\nu }}_k} \sum \limits _{j}\left( \nu _{jk}\eta _{jk} - 2\log \nu _{jk}\right) . \end{aligned} \end{aligned}$$
(40)

This convex optimization problem yields a closed form solution as

$$\begin{aligned} \begin{aligned} \nu _{jk} = \frac{2}{\eta _{jk}}. \end{aligned} \end{aligned}$$
(41)

Appendix B \({\varvec{\lambda }}\)-subproblem

Finally, the \({\varvec{\lambda }}\)-subproblem can be formulated as

$$\begin{aligned} \begin{aligned}&\min \limits _{{\varvec{\lambda }}} \sum \limits _k (\Vert {\mathbf{A}{\varvec{\Phi }}}{} \mathbf{{Y}}_k - \mathbf{{F}}_k\Vert ^2_{{{\varvec{\Sigma }}}_n} - n_k\log |{\varvec{\Lambda }}_k| + n_k\log |{{\varvec{\Sigma }}}_n|)\\&\quad = \min \limits _{{\varvec{\lambda }}} \sum \limits _k (\Vert {\mathbf{A}{\varvec{\Phi }}}{} \mathbf{{Y}}_k - \mathbf{{F}}_k\Vert ^2_{{{\varvec{\Sigma }}}_n} + n_k\log |{{\varvec{\Sigma }}}_n|) \\&\qquad + \sum \limits _k n_k\log |{\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n{\mathbf{A}{\varvec{\Phi }}}| \end{aligned} \end{aligned}$$
(42)

Using the determinant identity

$$\begin{aligned} \begin{aligned}&\log |{\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k + {\varvec{\Phi }}^T\mathbf{{A}}^T{{\varvec{\Sigma }}}^{-1}_n\mathbf{{A}}{\varvec{\Phi }}| + \log |{{\varvec{\Sigma }}}_n|\\&\quad =\log |{{\varvec{\Sigma }}}_n + {\mathbf{A}{\varvec{\Phi }}}({\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k)^{-1}{\varvec{\Phi }}^T\mathbf{{A}}^T| \\&\qquad +\log |{\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k|, \end{aligned} \end{aligned}$$
(43)

the subproblem in Eq. (42) simplifies to

$$\begin{aligned} \begin{aligned} \min \limits _{{\varvec{\lambda }}}&\sum \limits _k n_k\log |{{\varvec{\Sigma }}}_n + {\mathbf{A}{\varvec{\Phi }}}({\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k)^{-1}{\varvec{\Phi }}^T\mathbf{{A}}^T| \\&+ \sum \limits _k \Vert {\mathbf{A}{\varvec{\Phi }}}{} \mathbf{{Y}}_k - \mathbf{{F}}_k\Vert ^2_{{{\varvec{\Sigma }}}_n}. \end{aligned} \end{aligned}$$
(44)
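The step from Eq. (42) to Eq. (44) rests on the determinant identity \(\log |P + M^TS^{-1}M| + \log |S| = \log |S + MP^{-1}M^T| + \log |P|\) for positive definite \(P\) and \(S\). A numerical spot-check, with random matrices standing in for \(\mathbf{A}{\varvec{\Phi }}\), \({\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k\), and \({{\varvec{\Sigma }}}_n\):

```python
import numpy as np

rng = np.random.default_rng(2)
nb, nd = 4, 6
M = rng.standard_normal((nb, nd))       # stands in for A @ Phi
P = np.diag(rng.uniform(0.5, 2.0, nd))  # stands in for Gamma_k^{-1} + Sigma_k^{-1}
Sn = np.diag(rng.uniform(0.5, 2.0, nb)) # stands in for Sigma_n

logdet = lambda X: np.linalg.slogdet(X)[1]
lhs = logdet(P + M.T @ np.linalg.inv(Sn) @ M) + logdet(Sn)
rhs = logdet(Sn + M @ np.linalg.inv(P) @ M.T) + logdet(P)
assert abs(lhs - rhs) < 1e-9            # both sides of the identity agree
```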

Let \(g({\varvec{\lambda }}) = \log |{{\varvec{\Sigma }}}_n + {\mathbf{A}{\varvec{\Phi }}}({\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k)^{-1}{\varvec{\Phi }}^T\mathbf{{A}}^T|\). Its upper bound is

$$\begin{aligned} \begin{aligned} g({\varvec{\lambda }}) \le {\varvec{\beta }}_k^T{\varvec{\lambda }} - g^*({\varvec{\beta }}_k), \end{aligned} \end{aligned}$$
(45)

where \(g^*({\varvec{\beta }}_k)\) is the corresponding concave conjugate function with \({\varvec{\beta }}_k = [\beta _{1k},\ldots ,\beta _{n_bk}]^T\). The equality in Eq. (45) holds only when

$$\begin{aligned} \begin{aligned} {\varvec{\beta }}_k&= \nabla _{{\varvec{\lambda }}}\log |{{\varvec{\Sigma }}}_n + {\mathbf{A}{\varvec{\Phi }}}({\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k)^{-1}{\varvec{\Phi }}^T\mathbf{{A}}^T|\\&= \mathbf {diag}[({{\varvec{\Sigma }}}_n + {\mathbf{A}{\varvec{\Phi }}}({\varvec{\Gamma }}^{-1}_k + {{\varvec{\Sigma }}}^{-1}_k)^{-1}{\varvec{\Phi }}^T\mathbf{{A}}^T)^{-1}]. \end{aligned} \end{aligned}$$
(46)

Substituting this upper bound into Eq. (44), the subproblem over \({\varvec{\lambda }}\) further simplifies to

$$\begin{aligned} \begin{aligned}&\min \limits _{{\varvec{\lambda }}} \sum \limits _k (\Vert {\mathbf{A}{\varvec{\Phi }}}{} \mathbf{{Y}}_k - \mathbf{{F}}_k\Vert ^2_{{{\varvec{\Sigma }}}_n} + n_k{\varvec{\beta }}_k^T{\varvec{\lambda }}). \end{aligned} \end{aligned}$$
(47)

This amounts to the following optimization over each component \(\lambda _j\) of \({\varvec{\lambda }}\) as

$$\begin{aligned} \begin{aligned} \min \limits _{\lambda _j} \frac{\overline{q}_j}{\lambda _j} + \lambda _j\sum \limits _k n_k\beta _{jk}, \end{aligned} \end{aligned}$$
(48)

where \(\overline{q}_j\) is the j-th component of \({\overline{\mathbf{q}}} = \sum \nolimits _k \mathbf {diag}[({\mathbf{A}{\varvec{\Phi }}}{} \mathbf{{Y}}_k - \mathbf{{F}}_k)({\mathbf{A}{\varvec{\Phi }}}{} \mathbf{{Y}}_k - \mathbf{{F}}_k)^T] = [\overline{q}_1,\ldots ,\overline{q}_{n_b}]^T\). This convex optimization gives a closed form solution as

$$\begin{aligned} \begin{aligned} \lambda _j = \sqrt{\frac{\overline{q}_j}{\sum \limits _k n_k\beta _{jk}}}. \end{aligned} \end{aligned}$$
(49)
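Eq. (49) is the familiar minimizer of \(a/\lambda + b\lambda \) over \(\lambda > 0\) (the AM-GM balance point). A quick check of one component of Eq. (48), with arbitrary stand-in values for \(\overline{q}_j\) and \(\sum _k n_k\beta _{jk}\):

```python
import numpy as np

q, b = 3.2, 1.7                      # stand-ins for qbar_j and sum_k n_k beta_jk
obj = lambda lam: q / lam + b * lam  # Eq. (48) for one component
lam_star = np.sqrt(q / b)            # Eq. (49)

assert abs(-q / lam_star**2 + b) < 1e-9          # stationarity holds exactly
grid = np.linspace(0.01, 10.0, 2000)
assert obj(lam_star) <= obj(grid).min() + 1e-9   # no grid point does better
```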


Cite this article

Zhang, L., Wei, W., Zhang, Y. et al. Cluster Sparsity Field: An Internal Hyperspectral Imagery Prior for Reconstruction. Int J Comput Vis 126, 797–821 (2018). https://doi.org/10.1007/s11263-018-1080-8
