Depth-Aware Motion Magnification

  • Julian F. P. Kooij
  • Jan C. van Gemert
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9912)


This paper adds depth to motion magnification. With the rise of cheap RGB+D cameras, depth information is readily available, and we use it to make motion magnification robust to occlusion and large motions. Current approaches require a manually drawn pixel mask over the area of interest in every frame, which is cumbersome and error-prone. By including depth, we avoid manual annotation: we magnify motions at similar depth levels while ignoring occluding pixels at distant depths. To achieve this, we propose an extension of the bilateral filter to non-Gaussian filters, which allows us to treat pixels at very different depth layers as missing values. As our experiments show, these missing values should be ignored rather than inferred with inpainting. We present results for a medical application (tremors), where we improve on current baselines for motion magnification and motion measurement.
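The core idea above is to drop, rather than inpaint, filter contributions from pixels whose depth differs strongly from the centre pixel, renormalising the kernel over the remaining same-depth-layer pixels. The paper's actual method extends this to the non-Gaussian steerable filters used in phase-based magnification; the sketch below illustrates only the masking-and-renormalising step on a plain spatial Gaussian. All function and parameter names here are illustrative, not taken from the paper.

```python
import numpy as np

def depth_masked_filter(intensity, depth, sigma_s=3.0, depth_thresh=0.2, radius=5):
    """Spatially smooth `intensity`, treating pixels whose depth differs from
    the centre pixel by more than `depth_thresh` as missing values: their
    weights are zeroed and the kernel is renormalised over the remaining
    same-depth-layer pixels (illustrative sketch, not the paper's filter)."""
    h, w = intensity.shape
    out = np.zeros((h, w), dtype=float)
    # Precompute the spatial Gaussian kernel once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    # Edge-replicate padding so every pixel has a full neighbourhood.
    I = np.pad(intensity.astype(float), radius, mode='edge')
    D = np.pad(depth.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            patch_I = I[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            patch_D = D[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Binary "same depth layer" mask: distant-depth pixels are missing.
            mask = np.abs(patch_D - depth[y, x]) < depth_thresh
            wts = spatial * mask  # centre pixel is always kept, so wts.sum() > 0
            out[y, x] = (wts * patch_I).sum() / wts.sum()
    return out
```

On a scene with two depth layers, the filter never mixes intensities across the depth boundary, which is what keeps an occluder from bleeding into the magnified region.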


Motion magnification · Bilateral filter · RGB+D



This work is part of the research programme Technology in Motion (TIM [628.004.001]), financed by the Netherlands Organisation for Scientific Research (NWO).

Supplementary material

Supplementary material 1 (mp4 40116 KB)

Supplementary material 2 (mp4 46260 KB)

Supplementary material 3 (pdf 1925 KB)



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Delft University of Technology, Delft, The Netherlands
  2. Leiden University Medical Center, Leiden, The Netherlands
