LiteFlowNet3: Resolving Correspondence Ambiguity for More Accurate Optical Flow Estimation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12365)

Abstract

Deep learning approaches have achieved great success in addressing the problem of optical flow estimation. The keys to their success lie in the use of a cost volume and coarse-to-fine flow inference. However, the matching problem becomes ill-posed when partially occluded or homogeneous regions exist in the images: the cost volume then contains outliers, which disrupt the flow decoded from it. Moreover, coarse-to-fine flow inference demands an accurate flow initialization; ambiguous correspondence yields an erroneous flow field that corrupts the flow inference at subsequent levels. In this paper, we introduce LiteFlowNet3, a deep network consisting of two specialized modules, to address these challenges. (1) We ameliorate the issue of outliers in the cost volume by amending each cost vector through an adaptive modulation prior to flow decoding. (2) We further improve flow accuracy by exploiting local flow consistency: each inaccurate optical flow vector is replaced with an accurate one from a nearby position through a novel warping of the flow field. LiteFlowNet3 not only achieves promising results on public benchmarks but also has a small model size and a fast runtime.

Supplementary material

Supplementary material 1: 504476_1_En_11_MOESM1_ESM.pdf (22.4 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. The Chinese University of Hong Kong, Hong Kong, China
  2. Nanyang Technological University, Singapore