A Point and Line Features Based Method for Disturbed Surface Motion Estimation

  • Xiang Li
  • Yue Zhou
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10636)


Estimating the motion of a disturbed surface, such as a reflective monochromatic one, is often difficult, especially with a single-feature-based method. Errors introduced during feature extraction and matching gradually accumulate into a larger final error. For a texture-less surface, the scarcity of features makes the situation even more challenging. In this paper, point and line features from stereo sequences are combined to estimate the 3D motion of disturbed surfaces. By taking advantage of feature combination through two-stage iterative optimization and multiple filtering, the motion of such surfaces can be estimated accurately, even under slight motion blur. This paper also explores the relationship between measurement accuracy and object motion mode, which may serve as a reference for the design of vision-based motion measurement systems.


Keywords: Motion estimation · Combined features · Iterative optimization



This work is supported by the National High Technology Research and Development Program of China (863 Program) under Grant 2015AA016402.
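As a rough illustration of the point-based stage described in the abstract, the sketch below estimates a rigid 3D motion from matched 3D points (as would be triangulated from a stereo pair) and iteratively filters out the worst residuals. This is a generic Kabsch alignment with a simple residual-based filter, a minimal sketch only, not the authors' two-stage point-and-line optimizer; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Least-squares rigid motion Q ≈ R @ P + t via the Kabsch algorithm.

    P, Q: (N, 3) arrays of corresponding 3D points.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def estimate_with_filtering(P, Q, iters=3, keep=0.8):
    """Re-estimate while discarding the largest residuals each round
    (a crude stand-in for the paper's multiple-filtering idea)."""
    idx = np.arange(len(P))
    for _ in range(iters):
        R, t = estimate_rigid_motion(P[idx], Q[idx])
        # Residual norms over ALL matches, then keep the best fraction.
        r = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        idx = np.argsort(r)[: int(len(P) * keep)]
    return R, t
```

In a real system the line-feature residuals would enter a second optimization stage; here only the point stage is sketched.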



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China
