View Synthesis by Appearance Flow

  • Tinghui Zhou
  • Shubham Tulsiani
  • Weilun Sun
  • Jitendra Malik
  • Alexei A. Efros
Conference paper

DOI: 10.1007/978-3-319-46493-0_18

Part of the Lecture Notes in Computer Science book series (LNCS, volume 9908)
Cite this paper as:
Zhou T., Tulsiani S., Sun W., Malik J., Efros A.A. (2016) View Synthesis by Appearance Flow. In: Leibe B., Matas J., Sebe N., Welling M. (eds) Computer Vision – ECCV 2016. ECCV 2016. Lecture Notes in Computer Science, vol 9908. Springer, Cham

Abstract

We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints. We approach this as a learning task but, critically, instead of learning to synthesize pixels from scratch, we learn to copy them from the input image. Our approach exploits the observation that the visual appearance of different views of the same instance is highly correlated, and such correlation can be explicitly learned by training a convolutional neural network (CNN) to predict appearance flows – 2-D coordinate vectors specifying which pixels in the input view could be used to reconstruct the target view. Furthermore, the proposed framework easily generalizes to multiple input views by learning how to optimally combine single-view predictions. We show that for both objects and scenes, our approach is able to synthesize novel views of higher perceptual quality than previous CNN-based techniques.
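The sampling step described in the abstract – reconstructing each target pixel by bilinearly copying from the source coordinates given by an appearance flow field – can be sketched as follows. This is a hypothetical illustration of the warping operation only; the CNN that predicts the flow, and the function and variable names here, are not from the paper.

```python
import numpy as np

def warp_by_appearance_flow(src, flow):
    """Reconstruct a target view by copying pixels from the source view.

    src:  (H, W, C) source image.
    flow: (H, W, 2) appearance flow; flow[i, j] = (x, y) gives, for the
          target pixel at row i, column j, the source-image coordinates
          from which to sample (bilinearly interpolated).
    """
    H, W, _ = src.shape
    x, y = flow[..., 0], flow[..., 1]
    # Integer corner coordinates of the sampling location, clipped to the image.
    x0 = np.clip(np.floor(x).astype(int), 0, W - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    # Fractional offsets used as bilinear interpolation weights.
    wx = (x - np.floor(x))[..., None]
    wy = (y - np.floor(y))[..., None]
    return (src[y0, x0] * (1 - wx) * (1 - wy)
            + src[y0, x1] * wx * (1 - wy)
            + src[y1, x0] * (1 - wx) * wy
            + src[y1, x1] * wx * wy)

# An identity flow (each pixel samples its own location) reproduces the input.
H, W = 4, 4
xs, ys = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
identity_flow = np.stack([xs, ys], axis=-1)
img = np.random.rand(H, W, 3)
out = warp_by_appearance_flow(img, identity_flow)
```

In the paper's framework, a CNN regresses `flow` from the input image and the desired viewpoint transformation; because this bilinear sampling is differentiable in the flow values, the whole pipeline can be trained end to end with a pixel reconstruction loss.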

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Tinghui Zhou (1)
  • Shubham Tulsiani (1)
  • Weilun Sun (1)
  • Jitendra Malik (1)
  • Alexei A. Efros (1)

  1. University of California, Berkeley, USA