
GyroFlow+: Gyroscope-Guided Unsupervised Deep Homography and Optical Flow Learning

Published in: International Journal of Computer Vision

Abstract

Existing homography and optical flow methods produce erroneous results in challenging scenes such as fog, rain, night, and snow, because basic assumptions such as brightness and gradient constancy are violated. To address this issue, we present an unsupervised learning approach that fuses gyroscope data into homography and optical flow learning. Specifically, we first convert gyroscope readings into motion fields, named gyro fields. Second, we design a self-guided fusion module (SGF) that fuses the background motion extracted from the gyro field with the optical flow and guides the network to focus on motion details. Meanwhile, we propose a homography decoder module (HD) that combines the gyro field with intermediate results of the SGF to produce the homography. To the best of our knowledge, this is the first deep learning framework that fuses gyroscope data and image content for both deep homography and optical flow learning. To validate our method, we propose a new dataset that covers regular and challenging scenes. Experiments show that our method outperforms state-of-the-art methods in both regular and challenging scenes. The code and dataset are available at https://github.com/lhaippp/GyroFlowPlus.
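The gyro-field construction described above follows a standard geometric recipe: integrate the gyroscope rates into a frame-to-frame rotation R, form the rotation-induced homography H = K R K^(-1) (with K the camera intrinsic matrix), and take the displacement of the pixel grid warped by H. The following is a minimal NumPy/SciPy sketch under those assumptions; the function name gyro_field, the first-order integration, and the omission of gyro-camera time synchronization and rolling-shutter handling are illustrative simplifications, not the authors' implementation.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def gyro_field(gyro_rates, dts, K, height, width):
        """Sketch: turn gyroscope readings for one frame interval into a
        dense 2D motion field (a "gyro field").

        gyro_rates: (N, 3) angular velocities [rad/s] sampled between frames
        dts:        (N,)   sample durations [s]
        K:          (3, 3) camera intrinsic matrix
        """
        # First-order integration of angular velocity into a single
        # frame-to-frame rotation R (real pipelines also need gyro/camera
        # alignment and rolling-shutter correction).
        rot = Rotation.identity()
        for w, dt in zip(gyro_rates, dts):
            rot = Rotation.from_rotvec(w * dt) * rot
        R = rot.as_matrix()

        # A pure camera rotation induces the homography H = K R K^(-1).
        H = K @ R @ np.linalg.inv(K)

        # Warp the homogeneous pixel grid with H; the per-pixel
        # displacement is the gyro field.
        xs, ys = np.meshgrid(np.arange(width), np.arange(height))
        pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
        warped = H @ pts
        warped = warped[:2] / warped[2]
        return (warped - pts[:2]).T.reshape(height, width, 2)

In the paper's pipeline, this field supplies the background (camera-rotation) motion that the SGF module then fuses with image-based optical flow.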


Data availability

The GHOF dataset and pre-trained models will be available for download on our webpage at https://github.com/lhaippp/GyroFlowPlus under the MIT License.


Funding

Funding was provided by the National Natural Science Foundation of China (Grant Nos. 62372091 and 62031009) and the Sichuan Province Science and Technology Support Program (Grant No. 2023NSFSC0462).

Author information

Corresponding authors

Correspondence to Bing Zeng or Shuaicheng Liu.

Additional information

Communicated by Jifeng Dai.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, H., Luo, K., Zeng, B. et al. GyroFlow+: Gyroscope-Guided Unsupervised Deep Homography and Optical Flow Learning. Int J Comput Vis (2024). https://doi.org/10.1007/s11263-023-01978-5

