Abstract
Hyperspectral unmixing identifies the materials present in each mixed pixel of a hyperspectral image and quantifies their proportions; such mixed pixels arise from the sensor's low spatial resolution. However, most unmixing methods rely on spectral information alone and disregard the rich spatial information. At the same time, the few methods that do use spatial information struggle to represent joint spatial-spectral features because of the high dimensionality of hyperspectral images, so deeper architectures are employed, but such deep architectures are difficult to train to convergence. This article proposes a new method for hyperspectral unmixing that combines a capsule network with a 3D convolutional neural network. The capsule network encodes rich features such as spatial information, spectral signatures, and possible affine transformations of the spectra using vectors rather than scalars. Assuming the linear mixing model and enforcing the appropriate abundance constraints on the proposed network, blind hyperspectral unmixing is performed. The performance of the proposed method was evaluated using the Spectral Angle Distance (SAD) and Mean Square Error (MSE) on three datasets: Jasper Ridge, Samson, and Urban. The mean SAD and MSE values were, respectively, 0.04259 and 0.01289 for Jasper Ridge, 0.02599 and 0.00370 for Samson, and 0.04954 and 0.02084 for Urban. These results show that the proposed hybrid capsule network, which exploits spatial-spectral features, performs well: its low SAD and MSE values indicate that it estimates the endmember reflectances and fractional abundances accurately.
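For context, the linear mixing model referred to in the abstract treats each pixel spectrum as a non-negative, sum-to-one combination of endmember spectra, and SAD and MSE measure how well the endmembers and fractional abundances are recovered. The snippet below is a minimal NumPy sketch of these quantities, not the authors' implementation; the array names and shapes (E for endmembers, A for abundances) are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of the linear mixing model and the
# two evaluation metrics used in the paper. Assumed shapes:
#   E_hat, E_ref : (bands, endmembers) estimated / reference endmember spectra
#   A_hat, A_ref : (endmembers, pixels) estimated / reference fractional abundances
import numpy as np

def spectral_angle_distance(e_hat, e_ref):
    """Angle (radians) between an estimated and a reference spectrum."""
    cos = np.dot(e_hat, e_ref) / (np.linalg.norm(e_hat) * np.linalg.norm(e_ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def mean_sad(E_hat, E_ref):
    """Mean SAD over all endmembers (columns)."""
    return np.mean([spectral_angle_distance(E_hat[:, k], E_ref[:, k])
                    for k in range(E_ref.shape[1])])

def mse(A_hat, A_ref):
    """Mean square error between estimated and reference abundance maps."""
    return np.mean((A_hat - A_ref) ** 2)

def lmm_reconstruct(E, A):
    """Linear mixing model: each pixel is E @ a, with a >= 0 and sum(a) = 1."""
    return E @ A
```

With reference and estimated endmember matrices of shape (bands, endmembers), mean_sad(E_hat, E_ref) and mse(A_hat, A_ref) yield the kind of per-dataset mean SAD and MSE figures reported above.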
Keywords
- Hybrid capsule network
- Hyperspectral image unmixing
- Spectral-spatial feature
- 3D CNN
- Abundance map