International Journal of Computer Vision, Volume 120, Issue 3, pp 300–323

DeepMatching: Hierarchical Deformable Dense Matching

  • Jerome Revaud
  • Philippe Weinzaepfel
  • Zaid Harchaoui
  • Cordelia Schmid


We introduce a novel matching algorithm, called DeepMatching, to compute dense correspondences between images. DeepMatching relies on a hierarchical, multi-layer, correlational architecture designed for matching images, inspired by deep convolutional approaches. The proposed algorithm handles non-rigid deformations and repetitive textures, and efficiently determines dense correspondences in the presence of significant changes between images. We evaluate the performance of DeepMatching, in comparison with state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al. A comparison of affine region detectors, 2005), the MPI-Sintel (Butler et al. A naturalistic open source movie for optical flow evaluation, 2012) and the KITTI (Geiger et al. Vision meets robotics: The KITTI dataset, 2013) datasets. DeepMatching outperforms the state-of-the-art algorithms and shows excellent results in particular for repetitive textures. We also apply DeepMatching to optical flow estimation, yielding an approach called DeepFlow, by integrating it in the large displacement optical flow (LDOF) method of Brox and Malik (Large displacement optical flow: descriptor matching in variational motion estimation, 2011). Our matching approach provides additional robustness to large displacements and complex motion. DeepFlow obtains competitive performance on public benchmarks for optical flow estimation.
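The abstract stays high-level, so here is a toy sketch, not the authors' implementation, of the bottom-level building block such a correlational architecture rests on: normalized cross-correlation of a small reference patch against every location of a target image. All function names are illustrative; the actual DeepMatching pipeline aggregates such responses through multiple layers.

```python
import numpy as np

def correlation_map(patch, image):
    """Normalized cross-correlation of one grayscale patch against
    every valid position of a target image. Returns a response map
    with values in [-1, 1]; 1 means a perfect match."""
    ph, pw = patch.shape
    ih, iw = image.shape
    p = patch - patch.mean()
    pn = np.linalg.norm(p) + 1e-8
    out = np.zeros((ih - ph + 1, iw - pw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = image[y:y + ph, x:x + pw]
            wc = w - w.mean()
            out[y, x] = (p * wc).sum() / (pn * (np.linalg.norm(wc) + 1e-8))
    return out

def best_match(patch, image):
    """Coordinates (y, x) of the highest correlation response."""
    r = correlation_map(patch, image)
    return np.unravel_index(np.argmax(r), r.shape)
```

In the paper's architecture, response maps of small patches like these are (roughly speaking) aggregated bottom-up across a hierarchy, which is what allows the method to tolerate non-rigid deformations that a single rigid patch comparison cannot.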


Keywords: Non-rigid dense matching · Optical flow · Deep convolutional neural networks



This work was supported by the European Integrated Project AXES, the MSR/INRIA Joint Project, the LabEx PERSYVAL-Lab (ANR-11-LABX-0025), and the ERC Advanced Grant ALLEGRO.


  1. Bailer, C., Taetz, B., & Stricker, D. (2015). Flow fields: Dense correspondence fields for highly accurate large displacement optical flow estimation. In ICCV.
  2. Baker, S., Scharstein, D., Lewis, J. P., Roth, S., Black, M. J., & Szeliski, R. (2011). A database and evaluation methodology for optical flow. IJCV, 92, 1–31.
  3. Barnes, C., Shechtman, E., Goldman, D. B., & Finkelstein, A. (2010). The generalized PatchMatch correspondence algorithm. In ECCV.
  4. Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127.
  5. Black, M. J., & Anandan, P. (1996). The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields. Computer Vision and Image Understanding, 63, 75–104.
  6. Braux-Zin, J., Dupont, R., & Bartoli, A. (2013). A general dense image matching framework combining direct and feature-based costs. In ICCV.
  7. Brox, T., & Malik, J. (2011). Large displacement optical flow: Descriptor matching in variational motion estimation. IEEE Transactions on PAMI, 33, 500–513.
  8. Brox, T., Bruhn, A., Papenberg, N., & Weickert, J. (2004). High accuracy optical flow estimation based on a theory for warping. In ECCV.
  9. Bruhn, A., Weickert, J., Feddern, C., Kohlberger, T., & Schnörr, C. (2005). Variational optical flow computation in real time. IEEE Transactions on Image Processing, 15, 608–615.
  10. Butler, D. J., Wulff, J., Stanley, G. B., & Black, M. J. (2012). A naturalistic open source movie for optical flow evaluation. In ECCV.
  11. Chen, Z., Jin, H., Lin, Z., Cohen, S., & Wu, Y. (2013). Large displacement optical flow from nearest neighbor fields. In CVPR.
  12. Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. In CVPR.
  13. Demetz, O., Stoll, M., Volz, S., Weickert, J., & Bruhn, A. (2014). Learning brightness transfer functions for the joint recovery of illumination changes and optical flow. In ECCV.
  14. Ecker, A., & Ullman, S. (2009). A hierarchical non-parametric method for capturing non-rigid deformations. Image and Vision Computing, 27, 87–98.
  15. Forsyth, D., & Ponce, J. (2011). Computer vision: A modern approach. New York: Pearson Education.
  16. Furukawa, Y., Curless, B., Seitz, S. M., & Szeliski, R. (2010). Towards internet-scale multi-view stereo. In CVPR.
  17. Geiger, A., Lenz, P., Stiller, C., & Urtasun, R. (2013). Vision meets robotics: The KITTI dataset. IJRR, 32, 1231–1237.
  18. HaCohen, Y., Shechtman, E., Goldman, D. B., & Lischinski, D. (2011). Non-rigid dense correspondence with applications for image enhancement. In SIGGRAPH.
  19. Hassner, T., Mayzels, V., & Zelnik-Manor, L. (2012). On SIFTs and their scales. In CVPR.
  20. Horn, B. K. P., & Schunck, B. G. (1981). Determining optical flow. Artificial Intelligence, 17, 185–203.
  21. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., & Darrell, T. (2014). Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093.
  22. Kennedy, R., & Taylor, C. J. (2015). Optical flow with geometric occlusion estimation and fusion of multiple frames. In EMMCVPR.
  23. Keysers, D., Deselaers, T., Gollan, C., & Ney, H. (2007). Deformation models for image recognition. IEEE Transactions on PAMI, 29, 1422–1435.
  24. Kim, J., Liu, C., Sha, F., & Grauman, K. (2013). Deformable spatial pyramid matching for fast dense correspondences. In CVPR.
  25. Korman, S., & Avidan, S. (2011). Coherency sensitive hashing. In ICCV.
  26. LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998a). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86, 2278–2324.
  27. LeCun, Y., Bottou, L., Orr, G., & Muller, K. (1998b). Efficient backprop. In Neural networks: Tricks of the trade.
  28. Leordeanu, M., Zanfir, A., & Sminchisescu, C. (2013). Locally affine sparse-to-dense matching for motion and occlusion estimation. In ICCV.
  29. Liu, C., Yuen, J., & Torralba, A. (2011). SIFT flow: Dense correspondence across scenes and its applications. IEEE Transactions on PAMI, 33, 978–994.
  30. Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. IJCV, 60, 91–110.
  31. Malik, J., & Perona, P. (1990). Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America A, 7, 923–932.
  32. Menze, M., Heipke, C., & Geiger, A. (2015). Discrete optimization for optical flow. In GCPR.
  33. Mikolajczyk, K., Tuytelaars, T., Schmid, C., Zisserman, A., Matas, J., Schaffalitzky, F., et al. (2005). A comparison of affine region detectors. IJCV, 65, 43–72.
  34. Muja, M., & Lowe, D. G. (2009). Fast approximate nearest neighbors with automatic algorithm configuration. In International Conference on Computer Vision Theory and Applications (VISAPP'09). INSTICC Press.
  35. Papenberg, N., Bruhn, A., Brox, T., Didas, S., & Weickert, J. (2006). Highly accurate optic flow computation with theoretically justified warping. IJCV, 67, 141–158.
  36. Philbin, J., Isard, M., Sivic, J., & Zisserman, A. (2010). Descriptor learning for efficient retrieval. In ECCV.
  37. Ranftl, R., Bredies, K., & Pock, T. (2014). Non-local total generalized variation for optical flow estimation. In ECCV.
  38. Revaud, J., Weinzaepfel, P., Harchaoui, Z., & Schmid, C. (2015). EpicFlow: Edge-preserving interpolation of correspondences for optical flow. In CVPR.
  39. Stoll, M., Volz, S., & Bruhn, A. (2012). Adaptive integration of feature matches into variational optical flow methods. In ACCV.
  40. Sun, D., Liu, C., & Pfister, H. (2014a). Local layering for joint motion estimation and occlusion detection. In CVPR.
  41. Sun, D., Roth, S., & Black, M. (2014b). A quantitative analysis of current practices in optical flow estimation and the principles behind them. IJCV, 106, 115–137.
  42. Sun, J. (2012). Computing nearest-neighbor fields via propagation-assisted kd-trees. In CVPR.
  43. Szeliski, R. (2010). Computer vision: Algorithms and applications. New York: Springer.
  44. Timofte, R., & Van Gool, L. (2015). Sparse flow: Sparse matching for small to large displacement optical flow. In WACV.
  45. Tola, E., Lepetit, V., & Fua, P. (2008). A fast local descriptor for dense matching. In CVPR.
  46. Tola, E., Lepetit, V., & Fua, P. (2010). DAISY: An efficient dense descriptor applied to wide baseline stereo. IEEE Transactions on PAMI, 32, 815–830.
  47. Uchida, S., & Sakoe, H. (1998). A monotonic and continuous two-dimensional warping based on dynamic programming. In ICPR.
  48. Vogel, C., Roth, S., & Schindler, K. (2013a). An evaluation of data costs for optical flow. In GCPR.
  49. Vogel, C., Schindler, K., & Roth, S. (2013b). Piecewise rigid scene flow. In ICCV.
  50. Wedel, A., Cremers, D., Pock, T., & Bischof, H. (2009). Structure- and motion-adaptive regularization for high accuracy optic flow. In ICCV.
  51. Weinzaepfel, P., Revaud, J., Harchaoui, Z., & Schmid, C. (2013). DeepFlow: Large displacement optical flow with deep matching. In ICCV.
  52. Werlberger, M., Trobin, W., Pock, T., Wedel, A., Cremers, D., & Bischof, H. (2009). Anisotropic Huber-L1 optical flow. In BMVC.
  53. Wills, J., Agarwal, S., & Belongie, S. (2006). A feature-based approach for dense segmentation and estimation of large disparity motion. IJCV.
  54. Xu, L., Jia, J., & Matsushita, Y. (2012). Motion detail preserving optical flow estimation. IEEE Transactions on PAMI, 34, 1744–1757.
  55. Yang, H., Lin, W., & Lu, J. (2014). DAISY filter flow: A generalized discrete approach to dense correspondences. In CVPR.
  56. Young, D. M., & Rheinboldt, W. (1971). Iterative solution of large linear systems. New York: Academic Press.
  57. Zimmer, H., Bruhn, A., & Weickert, J. (2011). Optic flow in harmony. IJCV.

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  • Jerome Revaud (1)
  • Philippe Weinzaepfel (1)
  • Zaid Harchaoui (1)
  • Cordelia Schmid (1)

  1. Thoth team, Inria Grenoble Rhône-Alpes, Laboratoire Jean Kuntzmann, Grenoble, France
