Abstract
A fast and effective moving object detection method for a moving camera is proposed in this paper. Global motion is estimated by tracking grid-based key points with optical flow. After motion compensation, a background model, a candidate background model, and a candidate age are maintained for background modelling. Background subtraction then uses the local pixel difference and the consistency of local changes between the current frame and the background model. To reduce lighting influences, a lighting-influence threshold and the local pixel differences between the current frame and the two previous aligned frames are applied. Finally, a Gaussian filter, connected-component analysis, erosion, and dilation refine the results. The performance evaluation shows that the proposed method runs in real time and achieves competitive results on a public dataset.
Author information
Additional information
Recommended by Associate Editor Dong-Joong Kang under the direction of Editor Euntai Kim. This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICT Consilience Creative program (IITP-2019-2016-0-00318) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Yang Yu received the B.S. degree in automation from Northeastern University, China, in 2002. He received the M.S. degree in control theory and control engineering from Liaoning University of Technology, China, in 2006. He is currently with Liaoning University of Technology as an associate professor. He is currently a Ph.D. candidate at the Graduate School of Electrical Engineering, University of Ulsan, Ulsan, Korea. His research interests include intelligent control and computer vision.
Laksono Kurnianggoro received his bachelor of engineering from the University of Gadjah Mada, Indonesia, in 2010. He is currently a Ph.D. student at the Graduate School of Electrical Engineering, University of Ulsan, Ulsan, Korea. He actively participates in professional societies such as IEEE. His research interests include stereo vision, 3D image processing, computer vision, and machine learning. He has published with IEEE, Springer, and Elsevier. He has also been involved in several projects, including the development of an autonomous vehicle system, an advanced car washer system, a low-cost 3D scanner, and an autonomous robot, among others. He is also an active contributor to the popular computer vision library OpenCV.
Kang-Hyun Jo received the Ph.D. degree in Computer Controlled Machinery from Osaka University, Japan, in 1997. After a year of experience at ETRI as a postdoctoral research fellow, he joined the School of Electrical Engineering, University of Ulsan, Ulsan, Korea. He has served as a director or an AdCom member of Institute of Control, Robotics and Systems, the Society of Instrument and Control Engineers, and IEEE IES Technical Committee on Human Factors Chair. Currently, he is serving as AdCom member, and from 2018, as the Secretary, of the IEEE IES. He has also been involved in organizing many international conferences such as International Workshop on Frontiers of Computer Vision, International Conference on Intelligent Computation, International Conference on Industrial Technology, International Conference on Human System Interactions, and Annual Conference of the IEEE IES. At present, he is an Editorial Board Member for international journals, such as the International Journal of Control, Automation, and Systems and the Transactions on Computational Collective Intelligence. His research interests include computer vision, robotics, autonomous vehicle, and ambient intelligence.
About this article
Cite this article
Yu, Y., Kurnianggoro, L. & Jo, KH. Moving Object Detection for a Moving Camera Based on Global Motion Compensation and Adaptive Background Model. Int. J. Control Autom. Syst. 17, 1866–1874 (2019). https://doi.org/10.1007/s12555-018-0234-3