Sub-network modeling and integration for low-light enhancement of aerial images

Published in: Optical and Quantum Electronics

Abstract

Low intensity and poor contrast in images captured by imaging devices under low-light conditions pose a significant barrier to downstream machine-learning tasks, so advancing low-light image-enhancement techniques is essential for the reliable performance of other visual tasks. This research introduces a recognition-oriented neural network that generates high-quality enhanced low-light images from raw sensor data. We first employ a convolutional neural network (CNN) to suppress unwanted chromatic distortion and noise. A spatial attention module exploits the non-local correlations present in the image to focus on denoising, while a channel attention module guides the network to refine redundant color features. In addition, we propose a novel pooling layer, termed the reverse shuffle layer, which adaptively selects meaningful information from earlier features. Extensive experiments demonstrate the proposed system's effectiveness in reducing chromatic distortion and noise artifacts during enhancement, particularly when the original low-light image is heavily corrupted by noise.
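For readers who want a concrete picture of how such a pipeline might be organised, the sketch below shows one plausible PyTorch-style arrangement of the components named in the abstract: a spatial attention module for denoising, a channel attention gate for color features, and a space-to-depth pooling stage standing in for the reverse shuffle layer. All module names, channel sizes, and the reverse-shuffle realisation are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the sub-network layout described in the abstract.
# Module names, channel counts, and the reverse-shuffle realisation are
# assumptions for illustration; they are not taken from the paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate that re-weights color channels."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)


class SpatialAttention(nn.Module):
    """Single-channel spatial mask that emphasises regions needing denoising."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)        # per-pixel mean over channels
        mx, _ = x.max(dim=1, keepdim=True)       # per-pixel max over channels
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask


class ReverseShufflePool(nn.Module):
    """One plausible reading of the reverse shuffle layer: space-to-depth
    rearrangement followed by a learned 1x1 selection, so earlier features
    are down-sampled without simply discarding pixels."""
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(scale)
        self.select = nn.Conv2d(channels * scale * scale, channels, 1)

    def forward(self, x):
        return self.select(self.unshuffle(x))


class EnhancementBlock(nn.Module):
    """Denoising and color-refinement branches combined, as sketched from the abstract."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            SpatialAttention(),
            ChannelAttention(channels),
        )
        self.pool = ReverseShufflePool(channels)

    def forward(self, x):
        return self.pool(self.body(x) + x)       # residual connection, then pooling


if __name__ == "__main__":
    feats = torch.randn(1, 32, 128, 128)         # dummy feature map
    print(EnhancementBlock(32)(feats).shape)     # torch.Size([1, 32, 64, 64])
```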


Data availability

This manuscript has no associated data.


Funding

This research received no specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Authors

Contributions

UG: Conceptualization, methodology, data curation, investigation, writing—original draft, visualization. CHSD: Supervision, methodology, validation, reviewing, and editing. Abhay Chaturvedi: Methodology, software, formal analysis, visualization. Dr. Shankar B B: Resources, methodology, reviewing, and editing. Janjhyam Venkata Naga Ramesh: Methodology, software, formal analysis, visualization. Dr. Ajmeera Kiran: Conceptualization, supervision, reviewing, and editing. Each author contributed significantly to the research and preparation of the manuscript. All authors have read and approved the final version of the manuscript.

Corresponding author

Correspondence to G. Uganya.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Uganya, G., Devi, C.H.S., Chaturvedi, A. et al. Sub-network modeling and integration for low-light enhancement of aerial images. Opt Quant Electron 55, 984 (2023). https://doi.org/10.1007/s11082-023-05224-7
