Reconstruction of Deformation from Depth and Color Video with Explicit Noise Models

  • Andreas Jordt
  • Reinhard Koch
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8200)


Depth sensors like ToF cameras and structured light devices provide valuable scene information, but they do not provide a stable basis for optical flow or feature movement calculation, because the lack of texture information makes depth image registration very challenging. Approaches that associate depth values with optical flow or feature movement from color images try to circumvent this problem, but they suffer from the fact that color features are often generated at edges and depth discontinuities, areas in which depth sensors inherently deliver unstable data. Using deformation tracking as an application, this article discusses the benefits of Analysis by Synthesis (AbS) when approaching the tracking problem and shows how it can be used to:

  • exploit the complete image information of depth and color images in the tracking process,

  • avoid feature calculation and, hence, the need for outlier handling,

  • regard every measurement with respect to its accuracy and expected deviation.
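The AbS loop behind these points can be sketched in a few lines: render a synthetic image from the current model parameters, compare it densely against the measured image, and let a black-box optimizer update the parameters. This is a minimal sketch under strong simplifying assumptions: the planar model `z = a + b*x + c*y`, the function names, and the crude coordinate search are all illustrative, whereas the article uses NURBS deformation models and CMA-ES.

```python
import numpy as np

def synthesize_depth(params, grid):
    """Render a synthetic depth image from model parameters
    (toy planar model z = a + b*x + c*y)."""
    a, b, c = params
    return a + b * grid[0] + c * grid[1]

def abs_error(params, observed_depth, grid):
    """Dense per-pixel comparison of synthesized vs. observed depth:
    no feature extraction, hence no outlier handling is needed."""
    return np.sum((synthesize_depth(params, grid) - observed_depth) ** 2)

# Toy data: an "observed" depth image generated from known parameters.
grid = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
true_params = np.array([1.0, 0.2, -0.1])
observed = synthesize_depth(true_params, grid)

# Any derivative-free optimizer can close the AbS loop; a crude
# coordinate search illustrates synthesize -> compare -> update.
params = np.zeros(3)
for _ in range(200):
    for i in range(3):
        for step in (0.01, -0.01):
            candidate = params.copy()
            candidate[i] += step
            if abs_error(candidate, observed, grid) < abs_error(params, observed, grid):
                params = candidate
```

Because the error is evaluated over every pixel of the synthesized image, the complete image information enters the optimization, which is the first benefit listed above.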

In addition to an introduction to AbS-based tracking, a novel approach to handling noise and inaccuracy is proposed, regarding every input measurement according to its accuracy and noise characteristics. The method is especially useful for time-of-flight cameras, since it allows the correlation between pixel noise and the measured amplitude to be taken into account. A set of generic and specialized deformation models is discussed, as well as an efficient way to synthesize and optimize high-dimensional models. The resulting applications range from real-time deformation reconstruction to very accurate deformation retrieval using models with 100 dimensions and more.
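The amplitude-dependent weighting can be sketched as follows. The inverse relation `k / amplitude` and its constants are illustrative assumptions for the sketch, not the calibrated noise model from the article:

```python
import numpy as np

def tof_depth_std(amplitude, k=0.05, eps=1e-6):
    """Illustrative ToF noise model: the expected depth standard
    deviation grows as the measured amplitude shrinks (weak signal
    -> noisy depth). k and eps are made-up constants."""
    return k / (amplitude + eps)

def weighted_abs_error(synth_depth, meas_depth, amplitude):
    """Regard every measurement according to its expected deviation:
    each per-pixel residual is divided by its predicted standard
    deviation, so unreliable low-amplitude pixels contribute less."""
    sigma = tof_depth_std(amplitude)
    return np.sum(((synth_depth - meas_depth) / sigma) ** 2)

# The same 0.1 m depth error weighs less when the amplitude (and
# thus the measurement confidence) of the first pixel is low.
err_confident = weighted_abs_error(np.zeros(2), np.array([0.1, 0.0]), np.array([1.0, 1.0]))
err_uncertain = weighted_abs_error(np.zeros(2), np.array([0.1, 0.0]), np.array([0.1, 1.0]))
```

Plugging this weighted error into the AbS loop means noisy pixels never have to be detected and rejected as outliers; they are simply down-weighted according to the sensor's noise characteristics.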


Keywords: Noise Model · Depth Image · Deformation Function · Depth Sensor · NURBS Surface





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Andreas Jordt
  • Reinhard Koch
  1. Multimedia Information Processing Group, University of Kiel, Kiel, Germany
