
Multimodal background subtraction for high-performance embedded systems

  • Original Research Paper
  • Published in: Journal of Real-Time Image Processing

Abstract

In many computer vision systems, background subtraction algorithms are crucial for extracting information about moving objects. Although color features have been used extensively in several background subtraction algorithms and have demonstrated high efficiency and performance, in real applications the accuracy of background subtraction remains a challenge because background types are dynamic, diverse and complex. In this paper, a novel background subtraction method is proposed to achieve low computational cost and high accuracy in real-time applications. The proposed approach computes the background model using a limited number of historical frames, which makes it suitable for a real-time embedded implementation. To compute the background model as proposed here, pixel grayscale information and the color invariant H are jointly exploited. Unlike state-of-the-art competitors, the background model is updated by analyzing the percentage changes of current pixels with respect to the corresponding pixels within the modeled background and the historical frames. A comparison with several traditional and real-time state-of-the-art background subtraction algorithms demonstrates that the proposed approach can manage several challenges, such as the presence of dynamic background and the absence of frames free from foreground objects, without compromising accuracy. Different hardware designs have been implemented, for several image resolutions, on an Avnet ZedBoard containing an xc7z020 Zynq FPGA device. Post-place-and-route characterization results demonstrate that the proposed approach is suitable for integration in low-cost high-definition embedded video systems and smart cameras. In fact, the presented system uses 32 MB of external memory, 6 internal Block RAMs, fewer than 16,000 slice flip-flops and slightly more than 20,000 slice LUTs, and it processes Full HD RGB video sequences at a frame rate of about 74 fps.
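The update policy summarized above can be illustrated with a minimal, self-contained sketch. The snippet below is not the paper's implementation: the simplified hue-like color invariant in gray_and_hue, the history length N_HISTORY, the thresholds TH_GRAY and TH_HUE, and the running-average update are illustrative assumptions, used only to show how a pixel might be classified by its percentage change and how the background model can be refreshed from a small set of historical frames.

```python
import numpy as np

# Minimal sketch of a percentage-change background update on grayscale and a
# hue-like color invariant. N_HISTORY, TH_GRAY and TH_HUE are illustrative
# assumptions, not the values or the exact rule used in the paper.
N_HISTORY = 4    # number of historical frames kept
TH_GRAY = 0.15   # relative (percentage) change threshold on grayscale
TH_HUE = 0.20    # relative change threshold on the hue-like invariant
EPS = 1e-6       # avoids division by zero


def gray_and_hue(rgb):
    """Return grayscale and a simplified hue-like invariant for an RGB frame."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    hue = np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)
    return gray, hue


def segment_and_update(frame, bg_gray, bg_hue, history):
    """Classify pixels as foreground and refresh the background model.

    A pixel is marked as foreground when its percentage change with respect
    to the modeled background exceeds both thresholds; background pixels are
    blended into the model using the recent history as support (a simple
    running average stands in for the paper's update policy).
    """
    gray, hue = gray_and_hue(frame)
    d_gray = np.abs(gray - bg_gray) / (bg_gray + EPS)
    # Angular wrap-around of the hue invariant is ignored for simplicity.
    d_hue = np.abs(hue - bg_hue) / (np.abs(bg_hue) + EPS)
    foreground = (d_gray > TH_GRAY) & (d_hue > TH_HUE)

    alpha = 1.0 / (len(history) + 1)
    bg_gray = np.where(foreground, bg_gray, (1 - alpha) * bg_gray + alpha * gray)
    bg_hue = np.where(foreground, bg_hue, (1 - alpha) * bg_hue + alpha * hue)

    history.append(gray)
    if len(history) > N_HISTORY:
        history.pop(0)
    return foreground, bg_gray, bg_hue
```

A caller would seed bg_gray and bg_hue from an initial frame and invoke segment_and_update once per incoming frame; in the paper the decision and update are instead mapped to a streaming FPGA datapath, with the limited frame history presumably buffered in the external memory.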



Author information

Correspondence to Pasquale Corsonello.


About this article


Cite this article

Cocorullo, G., Corsonello, P., Frustaci, F. et al. Multimodal background subtraction for high-performance embedded systems. J Real-Time Image Proc 16, 1407–1423 (2019). https://doi.org/10.1007/s11554-016-0651-6

