Improving Autoencoder Training Performance for Hyperspectral Unmixing with Network Reinitialisation

  • Conference paper
  • First Online:
Image Analysis and Processing – ICIAP 2022 (ICIAP 2022)

Abstract

Neural networks, in particular autoencoders, are one of the most promising solutions for unmixing hyperspectral data, i.e. reconstructing the spectra of the observed substances (endmembers) and their relative mixing fractions (abundances), which is needed for effective hyperspectral analysis and classification. However, as we show in this paper, the training of autoencoders for unmixing is highly dependent on weight initialisation; some sets of weights lead to degenerate or low-performance solutions, introducing a negative bias into the expected performance. In this work, we experimentally investigate autoencoder stability as well as network reinitialisation methods based on coefficients of neurons' dead activations. We demonstrate that the proposed techniques have a positive effect on autoencoder training in terms of reconstruction, abundance and endmember errors.
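The reinitialisation criterion described in the abstract can be sketched roughly as follows. This is a minimal illustration only, assuming a one-layer ReLU encoder and He-style weights; the layer sizes, the 0.9 threshold, and all function names are hypothetical and not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-layer ReLU encoder: spectrum x -> ReLU(W x + b).
n_bands, n_endmembers = 198, 4
W = rng.normal(0.0, np.sqrt(2.0 / n_bands), size=(n_endmembers, n_bands))
b = np.zeros(n_endmembers)

def dead_activation_fraction(W, b, X):
    """Coefficient of dead activations: fraction of ReLU outputs
    that are exactly zero over a batch of pixel spectra X."""
    Z = np.maximum(X @ W.T + b, 0.0)
    return float((Z == 0).mean())

def reinitialise_if_dead(W, b, X, threshold=0.9):
    """Draw fresh weights when too many activations are dead,
    instead of continuing from a degenerate starting point."""
    if dead_activation_fraction(W, b, X) > threshold:
        W = rng.normal(0.0, np.sqrt(2.0 / W.shape[1]), size=W.shape)
        b = np.zeros_like(b)
    return W, b

X = rng.random((64, n_bands))       # batch of pixel spectra in [0, 1]
frac = dead_activation_fraction(W, b, X)
W, b = reinitialise_if_dead(W, b, X)
```

In this sketch the check would run once after initialisation (or periodically during early training), discarding weight draws whose dead-activation coefficient exceeds the threshold.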


Change history

  • 04 August 2022

    In the originally published version of chapter 33, Table 2 contained an error. This has been corrected.

Notes

  1. With the method of [12] using uniform initialisation, we used the version from the PyTorch library, which differs from the paper in two respects: 1) biases are not initialised to 0; 2) the bounds of the uniform distribution are constant and not dependent on the number of connections.
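For contrast, the initialisation of [12] as described in that paper can be sketched as follows. A minimal NumPy sketch under the usual reading of the He uniform variant (fan-in-dependent bounds with ReLU gain, biases at zero); the function name and layer sizes are illustrative, not from either paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def he_uniform_init(fan_in, fan_out):
    """He et al. [12] uniform initialisation as described in the paper:
    the weight bounds depend on the number of input connections (fan_in),
    via bound = sqrt(6 / fan_in), and biases start at zero."""
    bound = np.sqrt(6.0 / fan_in)
    W = rng.uniform(-bound, bound, size=(fan_out, fan_in))
    b = np.zeros(fan_out)
    return W, b

# Illustrative layer: 198 spectral bands mapped to 4 endmembers.
W, b = he_uniform_init(198, 4)
```

Both points of difference in the note are visible here: the bound shrinks as fan_in grows, and the bias vector is exactly zero.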

References

  1. Alabdulmohsin, I., Maennel, H., Keysers, D.: The impact of reinitialization on generalization in convolutional neural networks (2021)

  2. Bingham, G., Miikkulainen, R.: AutoInit: analytic signal-preserving weight initialization for neural networks (2021)

  3. Bioucas-Dias, J.M.: A variable splitting augmented Lagrangian approach to linear spectral unmixing. In: 2009 First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, pp. 1–4 (2009). https://doi.org/10.1109/WHISPERS.2009.5289072

  4. Bioucas-Dias, J.M., et al.: Hyperspectral unmixing overview: geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 5(2), 354–379 (2012). https://doi.org/10.1109/JSTARS.2012.2194696

  5. Boardman, J., Kruse, F.A., Green, R.: Mapping target signatures via partial unmixing of AVIRIS data. In: Summaries of the Fifth Annual JPL Airborne Earth Science Workshop. Volume 1: AVIRIS Workshop (1995)

  6. Borsoi, R.A., Imbiriba, T., Bermudez, J.C.M.: Deep generative endmember modeling: an application to unsupervised spectral unmixing. IEEE Trans. Comput. Imaging 6, 374–384 (2020). https://doi.org/10.1109/TCI.2019.2948726

  7. Conover, W.J.: Practical Nonparametric Statistics, vol. 350, 3rd edn. Wiley, Hoboken (1998)

  8. Conover, W.J., Iman, R.L.: On multiple-comparisons procedures (1979)

  9. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of AISTATS, vol. 9, pp. 249–256 (2010)

  10. Guo, A.J., Zhu, F.: Improving deep hyperspectral image classification performance with spectral unmixing. Signal Process. 183, 107949 (2021). https://doi.org/10.1016/j.sigpro.2020.107949

  11. Guo, R., Wang, W., Qi, H.: Hyperspectral image unmixing using autoencoder cascade. In: 2015 7th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), pp. 1–4 (2015). https://doi.org/10.1109/WHISPERS.2015.8075378

  12. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of ICCV, pp. 1026–1034 (2015). https://doi.org/10.1109/ICCV.2015.123

  13. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006). https://doi.org/10.1126/science.1127647

  14. Keshava, N., Mustard, J.F.: Spectral unmixing. IEEE Signal Process. Mag. 19(1), 44–57 (2002). https://doi.org/10.1109/79.974727

  15. Krizhevsky, A., Hinton, G.E.: Using very deep autoencoders for content-based image retrieval. In: ESANN (2011)

  16. Kruskal, W.H., Wallis, W.A.: Use of ranks in one-criterion variance analysis. J. Am. Stat. Assoc. 47(260), 583–621 (1952). https://doi.org/10.2307/2280779

  17. LeCun, Y.A., Bottou, L., Orr, G.B., Müller, K.-R.: Efficient BackProp. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade. LNCS, vol. 7700, pp. 9–48. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35289-8_3

  18. Liaw, R., Liang, E., Nishihara, R., Moritz, P., Gonzalez, J.E., Stoica, I.: Tune: a research platform for distributed model selection and training (2018)

  19. Lu, L., Shin, Y., Su, Y., Em Karniadakis, G.: Dying ReLU and initialization: theory and numerical examples. Commun. Comput. Phys. 28(5), 1671–1706 (2020). https://doi.org/10.4208/cicp.OA-2020-0165

  20. Lv, J., Shao, X., Xing, J., Cheng, C., Zhou, X.: A deep regression architecture with two-stage re-initialization for high performance facial landmark detection. In: Proceedings of CVPR 2017, pp. 3691–3700 (2017). https://doi.org/10.1109/CVPR.2017.393

  21. Ozkan, S., Kaya, B., Akar, G.B.: EndNet: sparse autoencoder network for endmember extraction and hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 57(1), 482–496 (2019). https://doi.org/10.1109/TGRS.2018.2856929

  22. Palsson, B., Ulfarsson, M.O., Sveinsson, J.R.: Convolutional autoencoder for spatial-spectral hyperspectral unmixing. In: Proceedings of IGARSS 2019, pp. 357–360 (2019). https://doi.org/10.1109/IGARSS.2019.8900297

  23. Palsson, B., Sigurdsson, J., Sveinsson, J.R., Ulfarsson, M.O.: Hyperspectral unmixing using a neural network autoencoder. IEEE Access 6, 25646–25656 (2018). https://doi.org/10.1109/ACCESS.2018.2818280

  24. Palsson, B., Ulfarsson, M.O., Sveinsson, J.R.: Convolutional autoencoder for spectral-spatial hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 59(1), 535–549 (2021)

  25. Plaza, A., Chang, C.: Impact of initialization on design of endmember extraction algorithms. IEEE Trans. Geosci. Remote Sens. 44(11), 3397–3407 (2006). https://doi.org/10.1109/TGRS.2006.879538

  26. Ranasinghe, Y., et al.: Convolutional autoencoder for blind hyperspectral image unmixing (2020)

  27. Rister, B., Rubin, D.L.: Probabilistic bounds on neuron death in deep rectifier networks (2021)

  28. Su, Y., Li, J., Plaza, A., Marinoni, A., Gamba, P., Chakravortty, S.: DAEN: deep autoencoder networks for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 57(7), 4309–4321 (2019). https://doi.org/10.1109/TGRS.2018.2890633

  29. Winter, M.E.: N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data. In: Descour, M.R., Shen, S.S. (eds.) Imaging Spectrometry V, vol. 3753, pp. 266–275. International Society for Optics and Photonics, SPIE (1999). https://doi.org/10.1117/12.366289

  30. Zhao, M., Wang, M., Chen, J., Rahardja, S.: Hyperspectral unmixing via deep autoencoder networks for a generalized linear-mixture/nonlinear-fluctuation model (2019)

  31. Zhu, F.: Hyperspectral unmixing: ground truth labeling, datasets, benchmark performances and survey (2017)

Acknowledgements

K.K. acknowledges funding from the European Union through the European Social Fund (grant POWR.03.02.00-00-I029). B.G. acknowledges funding from the budget funds for science in the years 2018-2022, as a scientific project “Application of transfer learning methods in the problem of hyperspectral images classification using convolutional neural networks” under the “Diamond Grant” program, no. DI2017 013847.

Author information

Corresponding author

Correspondence to Kamil Książek.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1712 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Książek, K., Głomb, P., Romaszewski, M., Cholewa, M., Grabowski, B., Búza, K. (2022). Improving Autoencoder Training Performance for Hyperspectral Unmixing with Network Reinitialisation. In: Sclaroff, S., Distante, C., Leo, M., Farinella, G.M., Tombari, F. (eds) Image Analysis and Processing – ICIAP 2022. ICIAP 2022. Lecture Notes in Computer Science, vol 13231. Springer, Cham. https://doi.org/10.1007/978-3-031-06427-2_33

  • DOI: https://doi.org/10.1007/978-3-031-06427-2_33

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06426-5

  • Online ISBN: 978-3-031-06427-2

  • eBook Packages: Computer Science (R0)
