3D Object Global Localization Using Particle Filter with Back Projection-based Sampling on Saliency

  • Alongkorn Pirayawaraporn
  • Nachaya Chindakham
  • Ji-Yong Lee
  • Mun-Ho Jeong


Estimating the 3D pose of a target object with a particle filter suffers from a high-dimensional search space: because the object may appear anywhere along the camera ray in 3D world space, a huge number of samples is needed to cover the whole search space, which demands expensive computation and many iterations until convergence. For this reason, we propose a particle filter based on back projection sampling over saliency. We obtain object boundaries as foreground regions using saliency segmentation based on color and depth information, which is robust to complex environments. We then apply a particle filter whose sampling step exploits the back projection technique, using the relationship between 3D world space and the 2D image plane. The sampling dimension along the camera ray can be omitted by generating samples on saliencies in the 2D image plane and then back-projecting them into 3D world space using depth information. As a result, the required number of samples and iterations decreases drastically. In addition, our method perceives salient regions that are likely to contain the target object; most samples are predicted into these promising regions, so the algorithm converges rapidly.


3D pose tracking · back projection-based sampling · particle filter · RGB-D · saliency segmentation
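The sampling idea in the abstract — draw particles on salient pixels in the 2D image plane, then back-project each one into 3D camera coordinates via the depth map — can be sketched as follows, assuming a standard pinhole camera model. The function name `back_project_samples`, the intrinsics `fx, fy, cx, cy`, and the boolean saliency-mask format are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def back_project_samples(saliency_mask, depth, fx, fy, cx, cy,
                         n_samples=100, rng=None):
    """Draw pixel samples inside the salient region and back-project
    them into 3D camera coordinates using the depth map."""
    rng = np.random.default_rng() if rng is None else rng
    # Candidate pixels: only those flagged salient by segmentation.
    ys, xs = np.nonzero(saliency_mask)
    idx = rng.choice(len(xs), size=n_samples, replace=True)
    u, v = xs[idx], ys[idx]
    # Depth lookup fixes the position along the camera ray, so the
    # filter never has to sample that dimension explicitly.
    z = depth[v, u]
    # Pinhole back projection: pixel (u, v) at depth z -> camera XYZ.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # shape (n_samples, 3)
```

Because every sample starts on a salient pixel with a known depth, all particles land in the promising 3D regions directly, which is why the abstract reports far fewer samples and iterations than sampling the full 3D volume.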






Copyright information

© ICROS, KIEE and Springer 2019

Authors and Affiliations

  • Alongkorn Pirayawaraporn (1)
  • Nachaya Chindakham (1)
  • Ji-Yong Lee (2)
  • Mun-Ho Jeong (3), email author
  1. Department of Control and Instrumentation Engineering, Kwangwoon University, Seoul, Korea
  2. Bitwin Media Lab, Seobudaeseong-ro, Chuncheon-si, Gangwon-do, Korea
  3. Division of Robotics, Kwangwoon University, Seoul, Korea
