Dual-Branch Enhanced Network for Change Detection

  • Research Article: Electrical Engineering
  • Published in: Arabian Journal for Science and Engineering

Abstract

Change detection is an essential task in intelligent monitoring, and its accuracy is of central importance for subsequent target tracking and recognition. However, challenges such as illumination change, severe weather, shadows, and camera jitter severely degrade detection performance. To mitigate these factors, we propose a novel model, called the dual-branch enhanced network (DBEN), which simultaneously extracts rich spatial features and contextual information. Specifically, we design a recurrent gated bottleneck module to extract high-level features, and build a global attention module as an auxiliary branch to capture fine-resolution details. Moreover, we propose a gated residual dense module that enhances feature expression by reconstructing the combined information, and we design a weighted loss function to optimize the network. The proposed DBEN is evaluated on three large-scale change detection datasets: CDnet2014, DAVIS, and AICD. Experimental results show that the proposed model is competitive in overall performance.
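
The abstract names the modules but not their internals, so the PyTorch sketch below is only a hedged reconstruction of the overall dual-branch layout: a downsampling context branch standing in for the recurrent gated bottleneck module, a channel-attention detail branch standing in for the global attention branch, a plain fusion convolution in place of the gated residual dense module, and a weighted binary cross-entropy as one plausible form of the weighted loss. All names (ContextBranch, DetailBranch, DualBranchSketch, weighted_bce) and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a dual-branch change detection network, assuming the
# high-level structure described in the abstract. Module internals are
# illustrative placeholders, NOT the paper's actual design.
import torch
import torch.nn as nn

class ContextBranch(nn.Module):
    """Placeholder for the recurrent gated bottleneck branch (high-level features)."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.encode(x)
        return f * self.gate(f)  # gated context features at 1/4 resolution

class DetailBranch(nn.Module):
    """Placeholder for the global attention branch (fine-resolution details)."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.pool = nn.AdaptiveAvgPool2d(1)                     # global descriptor
        self.attn = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        f = self.conv(x)
        return f * self.attn(self.pool(f))  # channel-wise global attention

class DualBranchSketch(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.context = ContextBranch(ch=ch)
        self.detail = DetailBranch(ch=ch)
        # Simple fusion conv, standing in for the gated residual dense module.
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.head = nn.Conv2d(ch, 1, 1)  # per-pixel change logits

    def forward(self, x):
        c = self.context(x)
        c = nn.functional.interpolate(c, size=x.shape[-2:],
                                      mode='bilinear', align_corners=False)
        d = self.detail(x)
        f = torch.relu(self.fuse(torch.cat([c, d], dim=1)))
        return self.head(f)

def weighted_bce(logits, target, pos_weight=5.0):
    """One common weighted loss for imbalanced change masks; the paper's
    exact weighting scheme is not given in the abstract."""
    w = torch.as_tensor(pos_weight, device=logits.device)
    return nn.functional.binary_cross_entropy_with_logits(logits, target, pos_weight=w)
```

For example, `DualBranchSketch()(torch.randn(1, 3, 240, 320))` returns a (1, 1, 240, 320) logit map. The two-branch split mirrors a common design pattern (e.g., BiSeNet) that pairs a low-resolution context path with a high-resolution detail path before fusing them.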


Acknowledgements

This work was supported in part by the Self-Determined Research Funds of Central China Normal University (CCNU) from the Colleges’ Basic Research and Operation of the Ministry of Education (MOE) under Grant CCNU18TS042.

Author information

Corresponding author: Shaocheng Qu.

About this article

Cite this article

Zhang, H., Qu, S. & Li, H. Dual-Branch Enhanced Network for Change Detection. Arab J Sci Eng 47, 3459–3471 (2022). https://doi.org/10.1007/s13369-021-06306-y
