Ground-Truth Tracking Data Generation Using Rotating Real-World Objects

  • Zoltán Pusztai
  • Levente Hajder
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 693)

Abstract

Quantitative comparison of feature matchers and trackers is essential in 3D computer vision, as the accuracy of spatial algorithms depends mainly on the quality of feature matching. This paper shows how a turntable-based evaluation system applying structured light can be developed. The key problem is the highly accurate calibration of the scanner components. Ground-truth (GT) tracking data are generated for seven test objects. It is shown how the OpenCV3 feature matchers can be compared on our GT data, and the obtained quantitative results are discussed in detail.
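The core of the evaluation idea — scoring a feature matcher's output against ground-truth correspondences — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the point coordinates, feature ids, and the 2-pixel inlier threshold are assumptions chosen for the example.

```python
import math

def evaluate_matches(predicted, ground_truth, inlier_thresh=2.0):
    """Score matcher output against ground-truth (GT) correspondences.

    predicted, ground_truth: dicts mapping a feature id to its (x, y)
    position in the second image. Returns the mean Euclidean error over
    features present in both dicts, and the fraction of those features
    whose error is at most inlier_thresh pixels.
    """
    common = predicted.keys() & ground_truth.keys()
    if not common:
        return float('nan'), 0.0
    errors = [math.dist(predicted[k], ground_truth[k]) for k in common]
    mean_err = sum(errors) / len(errors)
    inlier_ratio = sum(e <= inlier_thresh for e in errors) / len(errors)
    return mean_err, inlier_ratio

# Illustrative data: GT positions vs. a hypothetical matcher's predictions.
gt   = {1: (10.0, 20.0), 2: (30.0, 40.0), 3: (55.0, 60.0)}
pred = {1: (10.5, 20.0), 2: (33.0, 44.0), 3: (55.0, 61.0)}
mean_err, ratio = evaluate_matches(pred, gt)
```

Running several matchers through the same scoring function on identical GT data is what makes the comparison quantitative: each matcher reduces to a mean error and an inlier ratio that can be tabulated per test object.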

Acknowledgement

This work was partially supported by the Hungarian National Research, Development and Innovation Office under the grant VKSZ_14-1-2015-0072.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Distributed Events Analysis Research Laboratory, MTA SZTAKI, Budapest, Hungary
  2. Eötvös Loránd University, Budapest, Hungary
