Augmented and Virtual Reality

Chapter
Part of The Frontiers Collection book series (FRONTCOLL)

Abstract

Wearable computing, i.e. virtual and augmented reality, is a new medium that provides unprecedented user experiences. Eventually, wearable computing systems will redefine communication, entertainment, education, collaborative work, simulation, training, telesurgery, and basic vision research. Before these systems become practical for consumers, two major challenges have to be solved: (i) the hardware has to be miniaturized into a socially acceptable, i.e. eyeglasses-like, form factor while providing sufficient battery life, and (ii) the user experiences offered by these systems have to surpass those of existing systems, such as phones, tablets, and televisions. This chapter outlines the technical challenges to achieving these goals and summarizes the state of the art in related algorithms, optics, and electronics.

References

  1. 1.
    R. Aggarwal, A. Vohra, A.M. Namboodiri, Panoramic stereo videos with a single camera, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016, pp. 3755–3763Google Scholar
  2. 2.
    K. Akeley, S. Watt, A. Girshick, M. Banks, A stereo display prototype with multiple focal distances. ACM Trans. Graph. (SIGGRAPH) 23(3), 804–813 (2004)CrossRefGoogle Scholar
  3. 3.
    R. Anderson, D. Gallup, J.T. Barron, J. Kont-kanen, N. Snavely, C. Hernandez, S. Agarwal, S.M. Seitz, Jump: virtual reality video. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 35(6), 198:1–13 (2016)Google Scholar
  4. 4.
    G. Avveduto, F. Tecchia, H. Fuchs, Real-world occlusion in optical see-through ar displays, in Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (ACM, 2017), p. 29Google Scholar
  5. 5.
    A. Ballestad, R. Boitard, G. Damberg, G. Stojmenovik, Advances in HDR display technology for cinema applications, including light steering projection. Inf. Disp. 35(3), 16–19 (2019)Google Scholar
  6. 6.
    M.S. Banks, D.M. Hoffman, J. Kim, G. Wetzstein, 3d displays. Annu. Rev. Vis. Sci. 2(1), 397–435 (2016)CrossRefGoogle Scholar
  7. 7.
    F. Banterle, A. Artusi, T.O. Aydin, P. Didyk, E. Eisemann, D. Gutierrez, R. Mantiuk, K. Myszkowski, Multidimensional image retargeting, in SIGGRAPH Asia 2011 Courses (ACM, 2011), p. 15Google Scholar
  8. 8.
    M. Ben-Chorin, D. Eliav, Multi-primary design of spectrally accurate displays. J. Soc. Inf. Disp. 15(9), 667–677 (2007)CrossRefGoogle Scholar
  9. 9.
    T. Bertel, N.D.F. Campbell, C. Richardt, MegaParallax: casual 360° panoramas with motion parallax. IEEE Trans. Vis. Comput. Graph. 25(5), 1828–1835 (2019)CrossRefGoogle Scholar
  10. 10.
    F. Berthouzoz, R. Fattal, Resolution enhancement by vibrating displays. ACM Trans. Graph. (TOG) 31(2), 15 (2012)CrossRefGoogle Scholar
  11. 11.
    O. Bimber, B. Fröhlich, Occlusion shadows: Using projected light to generate realistic occlusion effects for view-dependent optical see-through displays, in Proceedings of IEEE ISMAR (2002)Google Scholar
  12. 12.
    O. Bimber, A. Grundhöfer, G. Wetzstein, S. Knödel, Consistent illumination within optical see-through augmented environments, in Proceedings of IEEE ISMAR (2003), pp. 198–207Google Scholar
  13. 13.
    O. Bimber, D. Iwai, G. Wetzstein, A. Grundhoefer, The visual computing of projector-camera systems, in Computer Graphics Forum (2008)Google Scholar
  14. 14.
    M. Brown, D.G. Lowe, Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis. 74(1), 59–73 (2007)CrossRefGoogle Scholar
  15. 15.
    B. Cabral, VR capture: designing and building an open source 3D-360 video camera, in SIGGRAPH Asia Keynote, December 2016Google Scholar
  16. 16.
    O. Cakmakci, Y. Ha, J. Rolland, Design of a compact optical see-through head-worn display with mutual occlusion capability, in Proceedings of SPIE, vol. 5875 (2005)Google Scholar
  17. 17.
    O. Cakmakci, Y. Ha, J.P. Rolland, A compact optical see-through head-worn display with occlusion support, in Proceedings of IEEE ISMAR (2004), pp. 16–25Google Scholar
  18. 18.
    P. Chakravarthula, D. Dunn, K. AkÅŸit, H. Fuchs, Focusar: auto-focus augmented reality eyeglasses for both real world and virtual imagery. IEEE Trans. Vis. Comput. Graph. 24(11), 2906–2916 (2018)CrossRefGoogle Scholar
  19. 19.
    J.-H.R. Chang, B.V.K.V. Kumar, A.C. Sankaranarayanan, 216 shades of gray: high bit-depth projection using light intensity control. Opt. Express 24(24), 27937–27950 (2016)ADSCrossRefGoogle Scholar
  20. 20.
    J.-H.R. Chang, B.V.K.V. Kumar, A.C. Sankaranarayanan, Towards multifocal displays with dense focal stacks. ACM Trans. Graph. (SIGGRAPH Asia) 37(6), 198:1–198:13 (2018)Google Scholar
  21. 21.
    G. Chaurasia, S. Duchene, O. Sorkine-Hornung, G. Drettakis, Depth synthesis and local warps for plausible image-based navigation. ACM Trans. Graph. 32(3):30, 1–12 (2013)Google Scholar
  22. 22.
    G. Chaurasia, O. Sorkine-Hornung, G. Drettakis, Silhouette-aware warping for image-based rendering, in Computer Graphics Forum (Proceedings of Eurographics Symposium on Rendering), vol. 30, no. 4, June 2011, pp. 1223–1232Google Scholar
  23. 23.
    J.-S. Chen, D.P. Chu, Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications. Opt. Express 23(14), 18143–18155 (2015)ADSCrossRefGoogle Scholar
  24. 24.
    S.A. Cholewiak, G.D. Love, P.P. Srinivasan, R. Ng, M.S. Banks, Chromablur: rendering chromatic eye aberration improves accommodation and realism. ACM Trans. Graph. (SIGGRAPH Asia) 36(6), 210:1–210:12 (2017)Google Scholar
  25. 25.
    A. Collet, M. Chuang, P. Sweeney, D. Gillett, D. Evseev, D. Calabrese, H. Hoppe, A. Kirk, S. Sullivan, High-quality streamable free-viewpoint video. ACM Trans. Graph. (Proc. SIGGRAPH) 34(4), 69:1–13 (2015)Google Scholar
  26. 26.
    N. Corporation. VRWorks—Lens Matched Shading (2016)Google Scholar
  27. 27.
    N. Corporation. VRWorks—Multi-Res Shading (2016)Google Scholar
  28. 28.
    C.A. Curcio, K.A. Allen, Topography of ganglion cells in human retina. J. Comp. Neurol. 300(1), 5–25 (1990)CrossRefGoogle Scholar
  29. 29.
    C.A. Curcio, K.R. Sloan, R.E. Kalina, A.E. Hendrickson, Human photoreceptor topography. J. Comp. Neurol. 292(4), 497–523 (1990)CrossRefGoogle Scholar
  30. 30.
    B. Curless, S. Seitz, J.-Y. Bouguet, P. Debevec, M. Levoy, S.K. Nayar, 3D photography, in SIGGRAPH Courses (2000)Google Scholar
  31. 31.
    J. Cutting, P. Vishton, Perceiving layout and knowing distances: the interaction, relative potency, and contextual use of different information about depth, in Perception of Space and Motion, Chap. 3, ed. by W. Epstein, S. Rogers (Academic Press, 1995), pp. 69–117Google Scholar
  32. 32.
    A. Dai, M. Nießner, M. Zollhofer, S. Izadi, C. Theobalt, BundleFusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration. ACM Trans. Graph. 36(3), 24:1–18 (2017)Google Scholar
  33. 33.
    G. Damberg, H. Seetzen, G. Ward, W. Heidrich, L. Whitehead, High dynamic range projection systems, in SID Symposium Digest of Technical Papers (2007), pp. 4–7Google Scholar
  34. 34.
    N. Damera-Venkata, N.L. Chang, Display supersampling. ACM Trans. Graph. (TOG) 28(1), 9 (2009)CrossRefGoogle Scholar
  35. 35.
    A. Davis, M. Levoy, F. Durand, Unstructured light fields, in Computer Graphics Forum (Proceedings of Eurographics), vol. 31, no. 2, May 2012, pp. 305–314Google Scholar
  36. 36.
    P. Debevec, The light stages and their applications to photoreal digital actors, in SIGGRAPH Asia Technical Briefs (2012)Google Scholar
  37. 37.
    P. Debevec, C. Bregler, M.F. Cohen, L. McMillan, F. Sillion, R. Szeliski, Image-based modeling, rendering, and lighting, in SIGGRAPH Courses (2000)Google Scholar
  38. 38.
    E. Dolgoff, Real-depth imaging: a new 3D imaging technology with inexpensive direct-view (no glasses) video and other applications, in Proceedings of SPIE, vol. 3012 (1997), pp. 282–288Google Scholar
  39. 39.
    A. Duane, Normal values of the accommodation at all ages. J. Am. Med. Assoc. 59(12), 1010–1013 (1912)CrossRefGoogle Scholar
  40. 40.
    A.T. Duchowski, D.H. House, J. Gestring, R.I. Wang, K. Krejtz, I. Krejtz, R. Mantiuk, B. Bazyluk, Reducing visual discomfort of 3d stereoscopic displays with gaze-contingent depth-of-field, in Proceedings of the ACM Symposium on Applied Perception (ACM, 2014), pp. 39–46Google Scholar
  41. 41.
    D. Dunn, C. Tippets, K. Torell, P. Kellnhofer, K. Aksit, P. Didyk, K. Myszkowski, D. Luebke, H. Fuchs, Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. IEEE TVCG 23(4), 1322–1331 (2017)Google Scholar
  42. 42.
    H. Durrant-Whyte, T. Bailey, Simultaneous localization and mapping: part i. IEEE Robot. Autom. Mag. 13(2), 99–110 (2006)CrossRefGoogle Scholar
  43. 43.
    Facebook, Filming the future with RED and Facebook 360, Sept 2018Google Scholar
  44. 44.
    J. Flynn, M. Broxton, P. Debevec, M. DuVall, G. Fyffe, R. Overbeck, N. Snavely, R. Tucker, DeepView: view synthesis with learned gradient descent, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR), June 2019, pp. 2367–2376Google Scholar
  45. 45.
    S. Friston, T. Ritschel, A. Steed, Perceptual rasterization for head-mounted display image synthesis. ACM Trans. Graph. (Proc. SIGGRAPH 2019) 38(4), 1–14 (2019)Google Scholar
  46. 46.
    S. Fuhrmann, F. Langguth, M. Goesele, MVE: a multi-view reconstruction environment, in Proceedings of the Eurographics Workshop on Graphics and Cultural Heritage (2014), pp. 11–18Google Scholar
  47. 47.
    S. Galliani, K. Lasinger, K. Schindler, Massively parallel multiview stereopsis by surface normal diffusion, in Proceedings of the International Conference on Computer Vision (ICCV), Dec 2015, pp. 873–881Google Scholar
  48. 48.
    C. Gao, Y. Lin, H. Hua, Occlusion capable optical see-through head-mounted display using freeform optics, in Proceedings of IEEE ISMAR (2012), pp. 281–282Google Scholar
  49. 49.
    C. Gao, Y. Lin, H. Hua, Optical see-through head-mounted display with occlusion capability, in Proceedings of SPIE, vol. 8735 (2013)Google Scholar
  50. 50.
    Q. Gao, J. Liu, J. Han, X. Li, Monocular 3d see-through head-mounted display via complex amplitude modulation. Opt. Express 24(15), 17372–17383 (2016)ADSCrossRefGoogle Scholar
  51. 51.
    S.J. Gortler, R. Grzeszczuk, R. Szeliski, M.F. Cohen, The lumigraph, in Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), Aug 1996, pp. 43–54Google Scholar
  52. 52.
    B. Guenter, M. Finch, S. Drucker, D. Tan, J. Snyder, Foveated 3d graphics. ACM Trans. Graph. (TOG) 31(6), 164 (2012)CrossRefGoogle Scholar
  53. 53.
    T. Hamasaki, Y. Itoh, Varifocal occlusion for optical see-through head-mounted displays using a slide occlusion mask. IEEE TVCG 25(5), 1961–1969 (2019)Google Scholar
  54. 54.
    T. Hansen, L. Pracejus, K.R. Gegenfurtner, Color perception in the intermediate periphery of the visual field. J. Vis. 9(4), 26 (2009)CrossRefGoogle Scholar
  55. 55.
    N. Hasan, A. Banerjee, H. Kim, C.H. Mastrangelo, Tunable-focus lens for adaptive eyeglasses. Opt. Express 25(2), 1221–1233 (2017)ADSCrossRefGoogle Scholar
  56. 56.
    A. Hasnain, P.-Y. Laffont, S.B.A. Jalil, K. Buyukburc, P.-Y. Guillemet, S. Wirajaya, L. Khoo, T. Deng, J.-C. Bazin, Piezo-actuated varifocal head-mounted displays for virtual and augmented reality, vol. 10942 (2019)Google Scholar
  57. 57.
    P. Hedman, S. Alsisan, R. Szeliski, J. Kopf, Casual 3D photography. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 36(6), 234:1–15 (2017)Google Scholar
  58. 58.
    P. Hedman, J. Kopf, Instant 3D photography. ACM Trans. Graph. (Proc. SIGGRAPH) 37(4), 101:1–12 (2018)Google Scholar
  59. 59.
    P. Hedman, J. Philip, T. Price, J.-M. Frahm, G. Drettakis, Deep blending for free-viewpoint image-based rendering. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 37(6), 257:1–15 (2018)Google Scholar
  60. 60.
    P. Hedman, T. Ritschel, G. Drettakis, G. Brostow, Scalable inside-out image-based rendering. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 35(6), 231:1–11 (2016)Google Scholar
  61. 61.
    F. Heide, J. Gregson, G. Wetzstein, R. Raskar, W. Heidrich, Compressive multi-mode superresolution display. Opt. Express 22(12), 14981–14992 (2014)ADSCrossRefGoogle Scholar
  62. 62.
    F. Heide, D. Lanman, D. Reddy, J. Kautz, K. Pulli, D. Luebke, Cascaded displays: spatiotemporal superresolution using offset pixel layers. ACM Trans. Graph. (TOG) 33(4), 60 (2014)zbMATHCrossRefGoogle Scholar
  63. 63.
    R. Held, E. Cooper, J. O’Brien, M. Banks, Using blur to affect perceived distance and size. ACM Trans. Graph. 29(2), 1–16 (2010)CrossRefGoogle Scholar
  64. 64.
    S. Hillaire, A. Lecuyer, R. Cozot, G. Casiez, Using an eye-tracking system to improve camera motions and depth-of-field blur effects in virtual environments, in 2008 IEEE Virtual Reality Conference (2008), pp. 47–50Google Scholar
  65. 65.
    M. Hirsch, G. Wetzstein, R. Raskar, A compressive light field projection system. ACM Trans. Graph. (TOG) 33(4), 58 (2014)CrossRefGoogle Scholar
  66. 66.
    D. Hoffman, A. Girshick, K. Akeley, M. Banks, Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. J. Vis. 8(3) (2008)Google Scholar
  67. 67.
    B.A. Holden, T.R. Fricke, S.M. Ho, R. Wong, G. Schlenther, S. Cronjé, A. Burnett, E. Papas, K.S. Naidoo, K.D. Frick, Global vision impairment due to uncorrected presbyopia. Arch. Ophthalmol. 126(12), 1731–1739 (2008)CrossRefGoogle Scholar
  68. 68.
    I.P. Howard, B.J. Rogers, Seeing in Depth (Oxford University Press, New York, 2002)Google Scholar
  69. 69.
    I.D. Howlett, Q. Smithwick, Perspective correct occlusion-capable augmented reality displays using cloaking optics constraints. J. Soc. Inf. Display 25(3), 185–193 (2017)CrossRefGoogle Scholar
  70. 70.
    X. Hu, H. Hua, Design and assessment of a depth-fused multi-focal-plane display prototype. J. Disp. Technol. 10(4), 308–316 (2014)ADSCrossRefGoogle Scholar
  71. 71.
    H. Hua, Enabling focus cues in head-mounted displays. Proc. IEEE 105(5), 805–824 (2017)CrossRefGoogle Scholar
  72. 72.
    H. Hua, B. Javidi, A 3D integral imaging optical see-through head-mounted display. Opt. Express 22(11), 13484–13491 (2014)ADSCrossRefGoogle Scholar
  73. 73.
    F.-C. Huang, K. Chen, G. Wetzstein, The light field stereoscope: immersive computer graphics via factored near-eye light field display with focus cues. ACM Trans. Graph. (SIGGRAPH) 34(4) (2015)Google Scholar
  74. 74.
    F.-C. Huang, D. Pajak, J. Kim, J. Kautz, D. Luebke, Mixed-primary factorization for dual-frame computational displays. ACM Trans. Graph. (SIGGRAPH) 36(4), 149–1 (2017)Google Scholar
  75. 75.
    F.-C. Huang, G. Wetzstein, B.A. Barsky, R. Raskar, Eyeglasses-free display: towards correcting visual aberrations with computational light field displays. ACM Trans. Graph. (SIGGRAPH) 33(4), 59 (2014)Google Scholar
  76. 76.
    J. Huang, Z. Chen, D. Ceylan, H. Jin, 6-DOF VR videos with a single 360-camera, in Proceedings of IEEE Virtual Reality (VR), Mar 2017, pp. 37–44Google Scholar
  77. 77.
    P.-H. Huang, K. Matzen, J. Kopf, N. Ahuja, J.-B. Huang, DeepMVS: learning multi-view stereopsis, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR) (2018)Google Scholar
  78. 78.
  79. 79.
    H. Ishiguro, M. Yamamoto, S. Tsuji, Omni-directional stereo. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 257–262 (1992)CrossRefGoogle Scholar
  80. 80.
    Y. Itoh, T. Hamasaki, M. Sugimoto, Occlusion leak compensation for optical see-through displays using a single-layer transmissive spatial light modulator. IEEE TVCG 23(11), 2463–2473 (2017)Google Scholar
  81. 81.
    Y. Itoh, T. Langlotz, D. Iwai, K. Kiyokawa, T. Amano, Light attenuation display: subtractive see-through near-eye display via spatial color filtering. IEEE TVCG 25(5), 1951–1960 (2019)Google Scholar
  82. 82.
    M. Jancosek, T. Pajdla, Multi-view reconstruction preserving weakly-supported surfaces, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR), June 2011, pp. 3121–3128Google Scholar
  83. 83.
    P.V. Johnson, J.A. Parnell, J. Kim, C.D. Saunter, G.D. Love, M.S. Banks, Dynamic lens and monovision 3d displays to improve viewer comfort. OSA Opt. Express 24(11), 11808–11827 (2016)ADSCrossRefGoogle Scholar
  84. 84.
    P.M.S. Julian, P. Brooker, Operator performance evaluation of controlled depth of field in a stereographically displayed virtual environment, vol. 4297 (2001)Google Scholar
  85. 85.
    H. Kato, M. Billinghurst, Marker tracking and HMD calibration for a video-based augmented reality conferencing system, in Proceedings of International Workshop on Augmented Reality (1999), pp. 85–94Google Scholar
  86. 86.
    I. Kauvar, S.J. Yang, L. Shi, I. McDowall, G. Wetzstein, Adaptive color display via perceptually-driven factored spectral projection. ACM Trans. Graph. (SIGGRAPH Asia) 34(6), 165–1 (2015)Google Scholar
  87. 87.
    C. Kim, H. Zimmer, Y. Pritch, A. Sorkine-Hornung, M. Gross, Scene reconstruction from high spatio-angular resolution light fields. ACM Trans. Graph. (Proc. SIGGRAPH) 32(4), 73:1–12 (2013)Google Scholar
  88. 88.
    H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Pérez, C. Richardt, M. Zollhofer, C. Theobalt, Deep video portraits. ACM Trans. Graph. (Proc. SIGGRAPH) 37(4), 163:1–14 (2018)Google Scholar
  89. 89.
    J. Kim, Y. Jeong, M. Stengel, K. Akşit, R. Albert, B. Boudaoud, T. Greer, J. Kim, W. Lopes, Z. Majercik, P. Shirley, J. Spjut, M. McGuire, D. Luebke, Foveated AR: dynamically-foveated augmented reality display. ACM Trans. Graph. 38(4), 99:1–99:15 (2019)Google Scholar
  90. 90.
    K. Kiyokawa, M. Billinghurst, B. Campbell, E. Woods, An occlusion-capable optical see-through head mount display for supporting co-located collaboration, in Proceedings of IEEE ISMAR (2003)Google Scholar
  91. 91.
    K. Kiyokawa, Y. Kurata, H. Ohno, An optical see-through display for mutual occlusion of real and virtual environments, in Proceedings of ISAR (2000), pp. 60–67Google Scholar
  92. 92.
    K. Kiyokawa, Y. Kurata, H. Ohno, An optical see-through display for mutual occlusion with a real-time stereovision system. Comput. Graph. 25(5), 765–779 (2001)CrossRefGoogle Scholar
  93. 93.
    R. Konrad, A. Angelopoulos, G. Wetzstein, Gaze-contingent ocular parallax rendering for virtual reality, ACM Trans. Graph. 39(2) (2020)Google Scholar
  94. 94.
    R. Konrad, E.A. Cooper, G. Wetzstein, Novel optical configurations for virtual reality: evaluating user preference and performance with focus-tunable and monovision near-eye displays, in Proceedings of SIGCHI (2016)Google Scholar
  95. 95.
    R. Konrad, D.G. Dansereau, A. Masood, G. Wetzstein, SpinVR: towards live-streaming 3D virtual reality video. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 36(6), 209:1–12 (2017)Google Scholar
  96. 96.
    R. Konrad, N. Padmanaban, K. Molner, E.A. Cooper, G. Wetzstein, Accommodation-invariant computational near-eye displays. ACM Trans. Graph. (SIGGRAPH) 36(4), 88:1–88:12 (2017)Google Scholar
  97. 97.
    F.L. Kooi, A. Toet, Visual comfort of binocular and 3d displays. Displays 25(2–3), 99–108 (2004)CrossRefGoogle Scholar
  98. 98.
    J. Kopf, S. Alsisan, F. Ge, Y. Chong, K. Matzen, O. Quigley, J. Patterson, J. Tirado, S. Wu, M.F. Cohen, Practical 3D photography, in Proceedings of CVPR Workshops (2019)Google Scholar
  99. 99.
    G.A. Koulieris, K. AkÅŸit, M. Stengel, R.K. Mantiuk, K. Mania, C. Richardt, Near-eye display and tracking technologies for virtual and augmented reality. Comput. Graph. Forum 38(2), 493–519 (2019)CrossRefGoogle Scholar
  100. 100.
    G.-A. Koulieris, B. Bui, M.S. Banks, G. Drettakis, Accommodation and comfort in head-mounted displays. ACM Trans. Graph. (SIGGRAPH) 36(4), 87:1–87:11 (2017)Google Scholar
  101. 101.
    G. Kramida, Resolving the vergence-accommodation conflict in head-mounted displays. IEEE TVCG 22, 1912–1931 (2015)Google Scholar
  102. 102.
    M. Lambooij, M. Fortuin, I. Heynderickx, W. IJsselsteijn, Visual discomfort and visual fatigue of stereoscopic displays: a review. J. Imaging Sci. Technol. 53(3):30201–1 (2009)Google Scholar
  103. 103.
    T. Langlotz, M. Cook, H. Regenbrecht, Real-time radiometric compensation for optical see-through head-mounted displays. IEEE TVCG 22(11), 2385–2394 (2016)Google Scholar
  104. 104.
    T. Langlotz, J. Sutton, S. Zollmann, Y. Itoh, H. Regenbrecht, Chromaglasses: computational glasses for compensating colour blindness, in Proceedings of SIGCHI (2018), pp. 390:1–390:12Google Scholar
  105. 105.
    D. Lanman, M. Hirsch, Y. Kim, R. Raskar, Content-adaptive parallax barriers: optimizing dual-layer 3d displays using low-rank light field factorization, in ACM Transactions on Graphics (SIGGRAPH Asia), vol. 29 (ACM, 2010), p. 163Google Scholar
  106. 106.
    D. Lanman, D. Luebke, Near-eye light field displays. ACM Trans. Graph. (SIGGRAPH Asia) 32(6), 220:1–220:10 (2013)Google Scholar
  107. 107.
    D. Lanman, G. Wetzstein, M. Hirsch, W. Heidrich, R. Raskar, Polarization fields: dynamic light field display using multi-layer LCDs, in ACM Transactions on Graphics (SIGGRAPH Asia), vol. 30, p. 186 (2011)Google Scholar
  108. 108.
    S.M. LaValle, A. Yershova, M. Katsev, M. Antonov, Head tracking for the oculus rift, in IEEE International Conference on Robotics and Automation (ICRA) (2014), pp. 187–194Google Scholar
  109. 109.
    J. Lee, B. Kim, K. Kim, Y. Kim, J. Noh, Rich360: Optimized spherical representation from structured panoramic camera arrays. ACM Trans. Graph. (Proc. SIGGRAPH) 35(4), 63:1–11 (2016)Google Scholar
  110. 110.
    S. Lee, C. Jang, S. Moon, J. Cho, B. Lee, Additive light field displays: realization of augmented reality with holographic optical elements. ACM Trans. Graph. (SIGGRAPH Asia) 35(4), 60:1–60:13 (2016)Google Scholar
  111. 111.
    T. Lee, T. Hollerer, Multithreaded hybrid feature tracking for markerless augmented reality. IEEE Trans. Vis. Comput. Graph. 15(3), 355–368 (2009)CrossRefGoogle Scholar
  112. 112.
    M. Levoy, P. Hanrahan, Light field rendering, in Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), Aug 1996, pp. 31–42Google Scholar
  113. 113.
    G. Li, D. Lee, Y. Jeong, J. Cho, B. Lee, Holographic display for see-through augmented reality using mirror-lens holographic optical element. Opt. Lett. 41(11), 2486–2489 (2016)ADSCrossRefGoogle Scholar
  114. 114.
    G. Li, D.L. Mathine, P. Valley, P. Äyräs, J.N. Haddock, M.S. Giridhar, G. Williby, J. Schwiegerling, G.R. Meredith, B. Kippelen, S. Honkanen, N. Peyghambarian, Switchable electro-optic diffractive lens with high efficiency for ophthalmic applications. Proc. Natl. Acad. Sci. 103(16), 6100–6104 (2006)ADSCrossRefGoogle Scholar
  115. 115.
    Y. Li, A. Majumder, D. Lu, M. Gopi, Content-independent multi-spectral display using superimposed projections, in Computer Graphics Forum, vol. 34 (Wiley Online Library, 2015), pp. 337–348Google Scholar
  116. 116.
    C. Lipski, C. Linz, K. Berger, A. Sellent, M. Magnor, Virtual video camera: image-based viewpoint navigation through space and time. Comput. Graph. Forum 29(8), 2555–2568 (2010)CrossRefGoogle Scholar
  117. 117.
    S. Liu, D. Cheng, and H. Hua. An optical see-through head mounted display with addressable focal planes. In Proc. ISMAR, pages 33–42, 2008Google Scholar
  118. 118.
    P. Llull, N. Bedard, W. Wu, I. Tosic, K. Berkner, N. Balram, Design and optimization of a near-eye multifocal display system for augmented reality, in OSA Imaging and Applied Optics (2015)Google Scholar
  119. 119.
    S. Lombardi, T. Simon, J. Saragih, G. Schwartz, A. Lehrmann, Y. Sheikh, Neural volumes: learning dynamic renderable volumes from images. ACM Trans. Graph. (Proc. SIGGRAPH) (2019)Google Scholar
  120. 120.
    D. Long, M.D. Fairchild, Optimizing spectral color reproduction in multiprimary digital projection, in Color and Imaging Conference, vol. 2011 (Society for Imaging Science and Technology, 2011), pp. 290–297Google Scholar
  121. 121.
    G.D. Love, D.M. Hoffman, P.J.W. Hands, J. Gao, A.K. Kirby, M.S. Banks, High-speed switchable lens enables the development of a volumetric stereoscopic display. Opt. Express 17(18), 15716–15725 (2009)ADSCrossRefGoogle Scholar
  122. 122.
    B. Luo, F. Xu, C. Richardt, J.-H. Yong, Parallax360: stereoscopic 360° scene representation for head-motion parallax. IEEE Trans. Vis. Comput. Graph. 24(4), 1545–1553 (2018)CrossRefGoogle Scholar
  123. 123.
    G. Maiello, M. Chessa, F. Solari, P.J. Bex, Simulated disparity and peripheral blur interact during binocular fusion. J. Vis. 14(8), 13 (2014)CrossRefGoogle Scholar
  124. 124.
    A. Maimone, H. Fuchs, Computational augmented reality eyeglasses, in Proceedings of IEEE ISMAR (2013), pp. 29–38Google Scholar
  125. 125.
    A. Maimone, A. Georgiou, J.S. Kollin, Holographic near-eye displays for virtual and augmented reality. ACM Trans. Graph. (SIGGRAPH) 36(4), 85:1–85:16 (2017)Google Scholar
  126. 126.
    A. Maimone, G. Wetzstein, M. Hirsch, D. Lanman, R. Raskar, H. Fuchs, Focus 3D: compressive accommodation display. ACM Trans. Graph. 32(5), 153–1 (2013)CrossRefGoogle Scholar
  127. 127.
    A. Maimone, X. Yang, N. Dierk, A. State, M. Dou, H. Fuchs, General-purpose telepresence with head-worn optical see-through displays and projector-based lighting, in 2013 IEEE Virtual Reality (VR) (IEEE, 2013), pp. 23–26Google Scholar
  128. 128.
    R. Martin-Brualla, R. Pandey, S. Yang, P. Pidlypenskyi, J. Taylor, J. Valentin, S. Khamis, P. Davidson, A. Tkach, P. Lincoln, A. Kowdle, C. Rhemann, D.B. Goldman, C. Keskin, S. Seitz, S. Izadi, S. Fanello, LookinGood: enhancing performance capture with real-time neural re-rendering. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 37(6), 255:1–14 (2018)Google Scholar
  129. 129.
    B. Masia, G. Wetzstein, P. Didyk, D. Gutierrez, A survey on computational displays: pushing the boundaries of optics, computation, and perception. Comput. Graph. 37(8), 1012–1038 (2013)CrossRefGoogle Scholar
  130. 130.
    N. Matsuda, A. Fix, D. Lanman, Focal surface displays. ACM Trans. Graph. (SIGGRAPH) 36(4), 86:1–86:14 (2017)Google Scholar
  131. 131.
    M. Mauderer, S. Conte, M.A. Nacenta, D. Vishwanath, Depth perception with gaze-contingent depth of field, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM, 2014), pp. 217–226Google Scholar
  132. 132.
    T. Mazuryk, M. Gervautz, Virtual reality—history, applications, technology and future, 12 (1999)Google Scholar
  133. 133.
    X. Meng, R. Du, M. Zwicker, A. Varshney, Kernel foveated rendering. Proc. ACM Comput. Graph. Interact. Tech. (I3D) 1(5), 1–20 (2018)Google Scholar
  134. 134.
    O. Mercier, Y. Sulai, K. Mackenzie, M. Zannoli, J. Hillis, D. Nowrouzezahrai, D. Lanman, Fast gaze-contingent optimal decompositions for multifocal displays. ACM Trans. Graph. (SIGGRAPH Asia) 36(6) (2017)Google Scholar
  135. 135.
    M. Meshry, D.B. Goldman, S. Khamis, H. Hoppe, R. Pandey, N. Snavely, R. Martin-Brualla, Neural rerendering in the wild, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR) (2019)Google Scholar
  136. 136.
    B. Mildenhall, P.P. Srinivasan, R. Ortiz-Cayon, N.K. Kalantari, R. Ramamoorthi, R. Ng, A. Kar, Local light field fusion: practical view synthesis with prescriptive sampling guidelines. ACM Trans. Graph. (Proc. SIGGRAPH) (2019)Google Scholar
  137. 137.
    A. Mohan, R. Raskar, J. Tumblin, Agile spectrum imaging: programmable wavelength modulation for cameras and projectors, in Computer Graphics Forum, vol. 27 (Wiley Online Library, 2008), pp. 709–717Google Scholar
  138. 138.
    E. Moon, M. Kim, J. Roh, H. Kim, J. Hahn, Holographic head-mounted display with RGB light emitting diode light source. Opt. Express 22(6), 6526–6534 (2014)ADSCrossRefGoogle Scholar
  139. 139.
    S. Mori, S. Ikeda, A. Plopski, C. Sandor, Brightview: increasing perceived brightness of optical see-through head-mounted displays through unnoticeable incident light reduction, in Proceedings of IEEE VR (2018), pp. 251–258Google Scholar
  140. 140.
    P. Moulon, P. Monasse, R. Marlet, Adaptive structure from motion with a Contrario model estimation, in Proceedings of the Asian Conference on Computer Vision (ACCV) (2012), pp. 257–270Google Scholar
  141. 141.
    F. Mueller, F. Bernard, O. Sotnychenko, D. Mehta, S. Sridhar, D. Casas, C. Theobalt, Generated hands for real-time 3d hand tracking from monocular RGB, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 49–59Google Scholar
  142. 142.
    R. Narain, R.A. Albert, A. Bulbul, G.J. Ward, M.S. Banks, J.F. O’Brien, Optimal presentation of imagery with focus cues on multi-plane displays. ACM Trans. Graph. (SIGGRAPH) 34(4) (2015)Google Scholar
  143. 143.
    R.A. Newcombe, A.J. Davison, S. Izadi, P. Kohli, O. Hilliges, J. Shotton, D. Molyneaux, S. Hodges, D. Kim, A. Fitzgibbon, KinectFusion: real-time dense surface mapping and tracking, in Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR), Oct 2011, pp. 127–136Google Scholar
  144. 144.
    T. Nguyen-Phuoc, C. Li, L. Theis, C. Richardt, Y.-L. Yang, HoloGAN: unsupervised learning of 3D representations from natural images, in Proceedings of the International Conference on Computer Vision (ICCV) (2019)Google Scholar
  145. 145.
    M. Nießner, M. Zollhofer, S. Izadi, M. Stamminger, Real-time 3D reconstruction at scale using voxel hashing. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 32(6), 169:1–11 (2013)Google Scholar
  146. 146.
    D. Nister, O. Naroditsky, J. Bergen, Visual odometry, in Proceedings of Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1 (2004)Google Scholar
  147. 147.
    C. Noorlander, J.J. Koenderink, R.J. Den Olden, B.W. Edens, Sensitivity to spatiotemporal colour contrast in the peripheral visual field. Vis. Res. 23(1), 1–11 (1983)CrossRefGoogle Scholar
  148. R.S. Overbeck, D. Erickson, D. Evangelakos, M. Pharr, P. Debevec, A system for acquiring, compressing, and rendering panoramic light field stills for virtual reality. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 37(6), 197:1–15 (2018)
  149. N. Padmanaban, R. Konrad, T. Stramer, E.A. Cooper, G. Wetzstein, Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays. Proc. Natl. Acad. Sci. U.S.A. 114, 2183–2188 (2017)
  150. N. Padmanaban, R. Konrad, G. Wetzstein, Autofocals: evaluating gaze-contingent eyeglasses for presbyopes. Sci. Adv. 5(6) (2019)
  151. N. Padmanaban, Y. Peng, G. Wetzstein, Holographic near-eye displays based on overlap-add stereograms. ACM Trans. Graph. (SIGGRAPH Asia) 38(6) (2019)
  152. S.E. Palmer, Vision Science—Photons to Phenomenology (MIT Press, 1999)
  153. V.F. Pamplona, M.M. Oliveira, D.G. Aliaga, R. Raskar, Tailored displays to compensate for visual aberrations. ACM Trans. Graph. (SIGGRAPH) 31(4), 81:1–81:12 (2012)
  154. J.J. Park, P. Florence, J. Straub, R. Newcombe, S. Lovegrove, DeepSDF: learning continuous signed distance functions for shape representation, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR) (2019)
  155. A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke, A. Lefohn, Towards foveated rendering for gaze-tracked virtual reality. ACM Trans. Graph. (TOG) 35(6), 179 (2016)
  156. S. Peleg, M. Ben-Ezra, Y. Pritch, Omnistereo: panoramic stereo imaging. IEEE Trans. Pattern Anal. Mach. Intell. 23(3), 279–290 (2001)
  157. E. Penner, L. Zhang, Soft 3D reconstruction for view synthesis. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 36(6), 235:1–11 (2017)
  158. F. Perazzi, A. Sorkine-Hornung, H. Zimmer, P. Kaufmann, O. Wang, S. Watson, M. Gross, Panoramic video from unstructured camera arrays. Comput. Graph. Forum (Proc. Eurographics) 34(2), 57–68 (2015)
  159. R. Raskar, H. Nii, B. de Decker, Y. Hashimoto, J. Summet, D. Moore, Y. Zhao, J. Westhues, P. Dietz, J. Barnwell, S. Nayar, M. Inami, P. Bekaert, M. Noland, V. Branzoi, E. Bruns, Prakash: lighting aware motion capture using photosensing markers and multiplexed illuminators. ACM Trans. Graph. (SIGGRAPH) 26(3) (2007)
  160. K. Rathinavel, H. Wang, A. Blate, H. Fuchs, An extended depth-of-field volumetric near-eye augmented reality display. IEEE Trans. Vis. Comput. Graph. 24(11), 2857–2866 (2018)
  161. K. Rathinavel, G. Wetzstein, H. Fuchs, Varifocal occlusion-capable optical see-through augmented reality display based on focus-tunable optics. IEEE TVCG (Proc. ISMAR) (2019)
  162. J. Rekimoto, Matrix: a realtime object identification and registration method for augmented reality, in Proceedings of Asia Pacific Computer Human Interaction (1998), pp. 63–68
  163. J.P. Rice, S.W. Brown, J.E. Neira, R.R. Bousquet, A hyperspectral image projector for hyperspectral imagers, in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, vol. 6565 (International Society for Optics and Photonics, 2007), p. 65650C
  164. C. Richardt, P. Hedman, R.S. Overbeck, B. Cabral, R. Konrad, S. Sullivan, Capture4VR: from VR photography to VR video, in SIGGRAPH Courses (2019)
  165. C. Richardt, Y. Pritch, H. Zimmer, A. Sorkine-Hornung, Megastereo: constructing high-resolution stereo panoramas, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR), June 2013, pp. 1256–1263
  166. J.P. Rolland, M.W. Krueger, A. Goon, Multifocal planes head-mounted displays. Appl. Opt. 39(19), 3209–3215 (2000)
  167. J. Rovamo, V. Virsu, P. Laurinen, L. Hyvärinen, Resolution of gratings oriented along and across meridians in peripheral vision. Invest. Ophthalmol. Vis. Sci. 23(5), 666–670 (1982)
  168. B. Sajadi, M. Gopi, A. Majumder, Edge-guided resolution enhancement in projectors via optical pixel sharing. ACM Trans. Graph. (TOG) 31(4), 79 (2012)
  169. B. Sajadi, D.-Q. Lai, A.H. Ihler, M. Gopi, A. Majumder, Image enhancement in projectors via optical pixel shift and overlay, in IEEE International Conference on Computational Photography (ICCP) (IEEE, 2013), pp. 1–10
  170. J.L. Schönberger, J.-M. Frahm, Structure-from-motion revisited, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 4104–4113
  171. J.L. Schönberger, E. Zheng, J.-M. Frahm, M. Pollefeys, Pixelwise view selection for unstructured multi-view stereo, in Proceedings of the European Conference on Computer Vision (ECCV), ed. by B. Leibe, J. Matas, N. Sebe, M. Welling (2016), pp. 501–518
  172. C. Schroers, J.-C. Bazin, A. Sorkine-Hornung, An omnistereoscopic video pipeline for capture and display of real-world VR. ACM Trans. Graph. 37(3), 37:1–13 (2018)
  173. H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A. Ghosh, A. Vorozcovs, High dynamic range display systems. ACM Trans. Graph. 23(3), 760–768 (2004)
  174. S. Seitz, B. Curless, J. Diebel, D. Scharstein, R. Szeliski, A comparison and evaluation of multi-view stereo reconstruction algorithms, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, June 2006, pp. 519–528
  175. A. Serrano, I. Kim, Z. Chen, S. DiVerdi, D. Gutierrez, A. Hertzmann, B. Masia, Motion parallax for 360° RGBD video. IEEE Trans. Vis. Comput. Graph. 25(5), 1817–1827 (2019)
  176. L. Shi, F.-C. Huang, W. Lopes, W. Matusik, D. Luebke, Near-eye light field holographic rendering with spherical waves for wide field of view interactive 3D computer graphics. ACM Trans. Graph. (SIGGRAPH Asia) 36(6), 236:1–236:17 (2017)
  177. T. Shibata, J. Kim, D.M. Hoffman, M.S. Banks, The zone of comfort: predicting visual discomfort with stereo displays. J. Vis. 11(8), 11 (2011)
  178. H. Shum, S.B. Kang, Review of image-based rendering techniques, in Visual Communications and Image Processing, vol. 4067 (2000)
  179. H.-Y. Shum, S.-C. Chan, S.B. Kang, Image-Based Rendering (Springer, Berlin, 2007)
  180. V. Sitzmann, J. Thies, F. Heide, M. Niessner, G. Wetzstein, M. Zollhöfer, DeepVoxels: learning persistent 3D feature embeddings, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 2437–2446
  181. V. Sitzmann, M. Zollhöfer, G. Wetzstein, Scene representation networks: continuous 3D-structure-aware neural scene representations, in Proceedings of the Conference on Neural Information Processing Systems (NeurIPS) (2019). arXiv:1906.01618
  182. N. Snavely, S.M. Seitz, R. Szeliski, Photo tourism: exploring photo collections in 3D. ACM Trans. Graph. (Proc. SIGGRAPH) 25(3), 835–846 (2006)
  183. S. Sridhar, F. Mueller, A. Oulasvirta, C. Theobalt, Fast and robust hand tracking using detection-guided optimization, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015), pp. 3213–3221
  184. S. Sridhar, F. Mueller, M. Zollhöfer, D. Casas, A. Oulasvirta, C. Theobalt, Real-time joint tracking of a hand manipulating an object from RGB-D input, in European Conference on Computer Vision (ECCV) (Springer, Cham, 2016), pp. 294–310
  185. P.P. Srinivasan, R. Tucker, J.T. Barron, R. Ramamoorthi, R. Ng, N. Snavely, Pushing the boundaries of view extrapolation with multiplane images, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR), June 2019, pp. 175–184
  186. M. Stengel, S. Grogorick, M. Eisemann, M. Magnor, Adaptive image-space sampling for gaze-contingent real-time rendering, in Computer Graphics Forum, vol. 35 (Wiley Online Library, 2016), pp. 129–139
  187. R.E. Stevens, T.N. Jacoby, I.Ş. Aricescu, D.P. Rhodes, A review of adjustable lenses for head mounted displays, in Digital Optical Technologies 2017, vol. 10335 (International Society for Optics and Photonics, 2017), p. 103350Q
  188. R.E. Stevens, D.P. Rhodes, A. Hasnain, P.-Y. Laffont, Varifocal technologies providing prescription and VAC mitigation in HMDs using Alvarez lenses, vol. 10676 (2018)
  189. H. Strasburger, I. Rentschler, M. Jüttner, Peripheral vision and pattern recognition: a review. J. Vis. 11(5), 13 (2011)
  190. D.J. Sturman, D. Zeltzer, A survey of glove-based input. IEEE Comput. Graph. Appl. 14(1), 30–39 (1994)
  191. T. Sugihara, T. Miyasato, 32.4: A lightweight 3-D HMD with accommodative compensation. SID Dig. 29(1), 927–930 (1998)
  192. Q. Sun, F.-C. Huang, J. Kim, L.-Y. Wei, D. Luebke, A. Kaufman, Perceptually-guided foveation for light field displays. ACM Trans. Graph. 36(6), 192:1–192:13 (2017)
  193. I.E. Sutherland, A head-mounted three dimensional display, in Proceedings of Fall Joint Computer Conference (1968), pp. 757–764
  194. N.T. Swafford, J.A. Iglesias-Guitian, C. Koniaris, B. Moon, D. Cosker, K. Mitchell, User, metric, and computational evaluation of foveated rendering methods, in Proceedings of the ACM Symposium on Applied Perception (ACM, 2016), pp. 7–14
  195. C. Sweeney, Theia multiview geometry library: tutorial & reference (2016). http://theia-sfm.org
  196. C. Sweeney, A. Holynski, B. Curless, S.M. Seitz, Structure from motion for panorama-style videos (2019). arXiv:1906.03539
  197. R. Szeliski, Image alignment and stitching: a tutorial. Found. Trends Comput. Graph. Vis. 2(1), 1–104 (2006)
  198. M. Teragawa, A. Yoshida, K. Yoshiyama, S. Nakagawa, K. Tomizawa, Y. Yoshida, Multi-primary-color displays: the latest technologies and their benefits. J. Soc. Inf. Disp. 20(1), 1–11 (2012)
  199. L.N. Thibos, D.L. Still, A. Bradley, Characterization of spatial aliasing and contrast sensitivity in peripheral vision. Vis. Res. 36(2), 249–258 (1996)
  200. J. Thies, M. Zollhöfer, M. Niessner, Deferred neural rendering: image synthesis using neural textures. ACM Trans. Graph. (Proc. SIGGRAPH) (2019)
  201. S. Tulsiani, R. Tucker, N. Snavely, Layer-structured 3D scene inference via view synthesis, in Proceedings of the European Conference on Computer Vision (ECCV), Sept 2018
  202. K. Vaidyanathan, M. Salvi, R. Toth, T. Foley, T. Akenine-Möller, J. Nilsson, J. Munkberg, J. Hasselgren, M. Sugihara, P. Clarberg et al., Coarse pixel shading, in Proceedings of High Performance Graphics (Eurographics Association, 2014), pp. 9–18
  203. J. Ventura, Structure from motion on a sphere, in Proceedings of the European Conference on Computer Vision (ECCV), ed. by B. Leibe, J. Matas, N. Sebe, M. Welling (2016), pp. 53–68
  204. M. von Waldkirch, P. Lukowicz, G. Tröster, Multiple imaging technique for extending depth of focus in retinal displays. Opt. Express 12(25) (2004)
  205. R. Wang, S. Paris, J. Popović, 6D hands: markerless hand-tracking for computer aided design, in Proceedings of ACM Symposium on User Interface Software and Technology (UIST) (2011)
  206. S.J. Watt, K. Akeley, M.O. Ernst, M.S. Banks, Focus cues affect perceived depth. J. Vis. 5(10), 834–862 (2005)
  207. S.-E. Wei, J. Saragih, T. Simon, A.W. Harley, S. Lombardi, M. Perdoch, A. Hypes, D. Wang, H. Badino, Y. Sheikh, VR facial animation via multiview image translation. ACM Trans. Graph. (Proc. SIGGRAPH) 38(4), 67:1–16 (2019)
  208. C. Weissig, O. Schreer, P. Eisert, P. Kauff, The ultimate immersive experience: panoramic 3D video acquisition, in Advances in Multimedia Modeling (MMM), ed. by K. Schoeffmann, B. Merialdo, A.G. Hauptmann, C.-W. Ngo, Y. Andreopoulos, C. Breiteneder, vol. 7131 of Lecture Notes in Computer Science (2012), pp. 671–681
  209. G. Westheimer, The Maxwellian view. Vis. Res. 6, 669–682 (1966)
  210. G. Wetzstein, O. Bimber, Radiometric compensation through inverse light transport, in 15th Pacific Conference on Computer Graphics and Applications (PG'07) (2007), pp. 391–399
  211. G. Wetzstein, W. Heidrich, D. Luebke, Optical image processing using light modulation displays. Comput. Graph. Forum 29(6), 1934–1944 (2010)
  212. G. Wetzstein, D. Lanman, Factored displays: improving resolution, dynamic range, color reproduction, and light field characteristics with advanced signal processing. IEEE Sig. Process. Mag. 33(5), 119–129 (2016)
  213. G. Wetzstein, D. Lanman, W. Heidrich, R. Raskar, Layered 3D: tomographic image synthesis for attenuation-based light field and high dynamic range displays, in ACM Transactions on Graphics (SIGGRAPH), vol. 30 (2011), p. 95
  214. G. Wetzstein, D. Lanman, M. Hirsch, R. Raskar, Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting. ACM Trans. Graph. (SIGGRAPH) 31(4), 1–11 (2012)
  215. T. Whelan, S. Leutenegger, R.F. Salas-Moreno, B. Glocker, A.J. Davison, ElasticFusion: dense SLAM without a pose graph, in Proceedings of Robotics: Science and Systems (RSS), July 2015
  216. A. Wilson, H. Hua, Design and prototype of an augmented reality display with per-pixel mutual occlusion capability. OSA Opt. Express 25(24), 30539–30549 (2017)
  217. D.N. Wood, D.I. Azuma, K. Aldinger, B. Curless, T. Duchamp, D.H. Salesin, W. Stuetzle, Surface light fields for 3D photography, in Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) (2000), pp. 287–296
  218. C. Wu, VisualSFM: a visual structure from motion system (2011). http://ccwu.me/vsfm/
  219. W. Wu, P. Llull, I. Tosic, N. Bedard, K. Berkner, N. Balram, Content-adaptive focus configuration for near-eye multi-focal displays, in IEEE International Conference on Multimedia and Expo (ICME) (2016), pp. 1–6
  220. K. Yücer, A. Sorkine-Hornung, O. Wang, O. Sorkine-Hornung, Efficient 3D object segmentation from densely sampled light fields with applications to 3D reconstruction. ACM Trans. Graph. 35(3), 22:1–15 (2016)
  221. H.-J. Yeom, H.-J. Kim, S.-B. Kim, H. Zhang, B. Li, Y.-M. Ji, S.-H. Kim, J.-H. Park, 3D holographic head mounted display using holographic optical elements with astigmatism aberration compensation. Opt. Express 23(25), 32025–32034 (2015)
  222. W. Yifan, F. Serena, S. Wu, C. Öztireli, O. Sorkine-Hornung, Differentiable surface splatting for point-based geometry processing (2019). arXiv:1906.04173
  223. J. Zaragoza, T.-J. Chin, Q.-H. Tran, M.S. Brown, D. Suter, As-projective-as-possible image stitching with moving DLT. IEEE Trans. Pattern Anal. Mach. Intell. 36(7), 1285–1298 (2014)
  224. F. Zhang, F. Liu, Parallax-tolerant image stitching, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR), June 2014, pp. 3262–3269
  225. K.C. Zheng, S.B. Kang, M.F. Cohen, R. Szeliski, Layered depth panoramas, in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR), June 2007
  226. T. Zhou, R. Tucker, J. Flynn, G. Fyffe, N. Snavely, Stereo magnification: learning view synthesis using multiplane images. ACM Trans. Graph. (Proc. SIGGRAPH) 37(4), 65:1–12 (2018)
  227. M. Zollhöfer, J. Thies, P. Garrido, D. Bradley, T. Beeler, P. Pérez, M. Stamminger, M. Niessner, C. Theobalt, State of the art on monocular 3D face reconstruction, tracking, and applications. Comput. Graph. Forum 37(2), 523–550 (2018)
  228. B. Krajancich, N. Padmanaban, G. Wetzstein, Factored occlusion: single spatial light modulator occlusion-capable optical see-through augmented reality display. IEEE TVCG (Proc. VR) (2020)

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Stanford University, Stanford, USA
