Abstract
Deep neural networks are typically trained on images captured under controlled illumination. Such networks perform well on similar images but degrade on images with severe illumination changes, which frequently occur in practice. To improve the robustness of networks to illumination changes, we propose applying local brightness normalization (LBN) as a pre-processing step to the input images and training the network on the normalized images. LBN maps images affected by various types of illumination change to similar appearances, so the training and testing inputs become better aligned. Experimental comparisons on image classification and object detection show that the proposed LBN-based approach improves accuracy on images with both uniform and non-uniform illumination changes.
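The abstract does not give the paper's exact LBN formula, but the general idea of local brightness normalization can be sketched as dividing each pixel by the mean brightness of its local neighborhood, which flattens slowly varying illumination while preserving local structure. The window size, the box-mean choice, and the `eps` stabilizer below are illustrative assumptions, not the authors' published method:

```python
import numpy as np

def local_brightness_normalization(img, window=15, eps=1e-6):
    """Divide each pixel by its local box-mean brightness.

    A generic sketch of local brightness normalization for a single-channel
    image (2-D float array); the paper's actual LBN may differ.
    """
    img = np.asarray(img, dtype=np.float64)
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")

    # Integral image for a fast box-mean over each window x window patch.
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))  # zero row/col so sums index cleanly

    h, w = img.shape
    box_sum = (ii[window:window + h, window:window + w]
               - ii[:h, window:window + w]
               - ii[window:window + h, :w]
               + ii[:h, :w])
    local_mean = box_sum / (window * window)

    return img / (local_mean + eps)
```

On an image darkened by a smooth illumination gradient, each pixel and its neighborhood are scaled together, so the ratio above is largely unchanged; this is one simple way images under different illuminations can be made to "have similar appearances" before training.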
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Lu, Y., Tanaka, M., Kawakami, R., Okutomi, M. (2024). Local Brightness Normalization for Image Classification and Object Detection Robust to Illumination Changes. In: Yan, W.Q., Nguyen, M., Nand, P., Li, X. (eds) Image and Video Technology. PSIVT 2023. Lecture Notes in Computer Science, vol 14403. Springer, Singapore. https://doi.org/10.1007/978-981-97-0376-0_5
DOI: https://doi.org/10.1007/978-981-97-0376-0_5
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-0375-3
Online ISBN: 978-981-97-0376-0
eBook Packages: Computer Science (R0)