Improving Spatiotemporal Inpainting with Layer Appearance Models

  • Thommen Korah
  • Christopher Rasmussen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4292)


The problem of removing blemishes in mosaics of building facades caused by foreground objects such as trees may be framed in terms of inpainting. Affected regions are first automatically segmented and then inpainted away using a combination of cues from unoccluded, temporally adjacent views of the same building patch, as well as surrounding unoccluded patches in the same frame. Discriminating the building layer from those containing foreground features is most directly accomplished through parallax due to camera motion over the sequence. However, the intricacy of tree silhouettes often complicates accurate motion-based segmentation, especially along their narrower branches. In this work we describe methods for automatically training appearance-based classifiers from a coarse motion-based segmentation to recognize foreground patches in static imagery and thereby improve the quality of the final mosaic. A local technique for photometric adjustment of inpainted patches which compensates for exposure variations between frames is also discussed.
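The central step, bootstrapping an appearance classifier from the coarse motion-based segmentation, can be illustrated with a minimal sketch. This is not the authors' implementation: the patch_features descriptor and the scikit-learn SVC settings below are assumptions standing in for the appearance features and SVM training the paper describes, and photometric_gain is a simplified, median-based stand-in for the local exposure correction mentioned in the abstract.

```python
import numpy as np
from sklearn.svm import SVC


def patch_features(patch):
    """Illustrative appearance descriptor for an RGB image patch:
    per-channel means and standard deviations plus mean gradient
    magnitude (a crude stand-in for richer texture features)."""
    pixels = patch.reshape(-1, 3).astype(np.float64)
    means = pixels.mean(axis=0)
    stds = pixels.std(axis=0)
    gray = patch.astype(np.float64).mean(axis=2)
    gy, gx = np.gradient(gray)
    grad_energy = np.hypot(gx, gy).mean()
    return np.concatenate([means, stds, [grad_energy]])


def train_foreground_classifier(patches, motion_labels):
    """Fit an SVM on patches whose foreground/background labels come
    from the coarse motion-based (parallax) segmentation."""
    X = np.array([patch_features(p) for p in patches])
    y = np.array(motion_labels)  # 1 = foreground (e.g. tree), 0 = facade
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, y)
    return clf


def classify_static_patches(clf, patches):
    """Apply the learned appearance model to patches of a single frame,
    where no motion cue is available."""
    X = np.array([patch_features(p) for p in patches])
    return clf.predict(X)


def photometric_gain(inpainted_patch, mosaic_overlap, source_overlap):
    """Simplified local exposure correction: scale the inpainted patch so
    the median intensity of its source overlap matches the mosaic."""
    gain = np.median(mosaic_overlap) / max(np.median(source_overlap), 1e-6)
    return np.clip(inpainted_patch.astype(np.float64) * gain, 0, 255)
```

In this sketch, patches with reliable parallax labels would train the classifier, which then flags foreground patches (e.g., thin branches where motion segmentation fails) in static imagery before inpainting and exposure adjustment.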


Keywords: Support Vector Machine · Image Patch · Appearance Model · Foreground Object · Median Absolute Deviation




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Thommen Korah¹
  • Christopher Rasmussen¹

  1. Dept. of Computer and Information Sciences, University of Delaware, Newark, USA
