
Video Inpainting Using Advanced Homography-based Registration Method

Published in: Journal of Mathematical Imaging and Vision

Abstract

A new video inpainting technique for videos captured by freely moving cameras is proposed in this paper. Effective video inpainting requires maintaining spatiotemporal coherence while filling the holes in the target frames, which is possible only with proper registration of the source frames to the target frame; image registration therefore plays a vital role in obtaining good results. An advanced homography-based image registration method is introduced, built on HALF-SIFT (high-accuracy localized features for SIFT) to extract feature points without localization error, and the covariance matrix is used to estimate the localization error. Further, a new inlier selection method based on CW MLESAC is applied, and the homography matrix is refined with CW L-M; this iterative process improves the accuracy of the image registration. After the source frames are registered to the target frame, the hole is inpainted by globally minimizing an energy cost function. The proposed method is applied to several complex video sequences, and the experimental results outperform state-of-the-art methods in visual quality. Performance metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are computed and compared with those of existing methods for different video sequences.
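
The pipeline described above can be illustrated with a minimal registration sketch. The code below is not the authors' implementation: it substitutes standard OpenCV SIFT for HALF-SIFT and RANSAC-based homography estimation for the CW MLESAC inlier selection and CW L-M refinement, and all function names and parameters are illustrative assumptions.

# Hedged sketch of homography-based frame registration, assuming OpenCV >= 4.4.
# Plain SIFT and RANSAC stand in for the paper's HALF-SIFT and CW MLESAC / CW L-M.
import cv2
import numpy as np

def register_source_to_target(source, target, ratio=0.75):
    """Estimate a homography mapping `source` onto `target` and warp it."""
    sift = cv2.SIFT_create()
    kp_s, des_s = sift.detectAndCompute(source, None)
    kp_t, des_t = sift.detectAndCompute(target, None)

    # Lowe's ratio test on k-NN matches to discard ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_s, des_t, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    pts_s = np.float32([kp_s[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    pts_t = np.float32([kp_t[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robust inlier selection and homography estimation (RANSAC here, as a
    # stand-in for the covariance-weighted selection and refinement steps).
    H, inlier_mask = cv2.findHomography(pts_s, pts_t, cv2.RANSAC, 3.0)

    h, w = target.shape[:2]
    warped = cv2.warpPerspective(source, H, (w, h))
    return H, warped, inlier_mask

# The warped source frames then supply candidate pixels for the hole region of
# the target frame, with the final labeling chosen by a global energy
# minimization (e.g., graph cuts).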



Author information


Corresponding author

Correspondence to B. Janardhana Rao.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Janardhana Rao, B., Chakrapani, Y. & Srinivas Kumar, S. Video Inpainting Using Advanced Homography-based Registration Method. J Math Imaging Vis 64, 1029–1039 (2022). https://doi.org/10.1007/s10851-022-01111-0
