Introduction

  • Zhengming Ding
  • Handong Zhao
  • Yun Fu
Chapter
Part of the Advanced Information and Knowledge Processing book series (AI&KP)

Abstract

Multi-view data generated from various viewpoints or multiple sensors are commonly seen in real-world applications. For example, the popular commercial depth sensor Kinect uses both visible-light and near-infrared sensors for depth estimation; autonomous driving systems use both visual and radar sensors to produce real-time 3D information on the road; and face analysis algorithms prefer face images from different views for high-fidelity reconstruction and recognition. However, such data pose an enormous challenge: the large divergence across views prevents a fair comparison between them. Generally, different views tend to be treated as different domains drawn from different distributions. Thus, there is an urgent need to mitigate the view divergence for specific problems, either by fusing knowledge across multiple views or by adapting knowledge from some views to others. Since the term “multi-view” and its aliases are used in several senses in data analysis, we first give a formal definition and narrow down our research focus to differentiate it from related works that follow different lines of research.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Indiana University-Purdue University Indianapolis, Indianapolis, USA
  2. Adobe Research, San Jose, USA
  3. Northeastern University, Boston, USA