
State of the Art in Perceptual VR Displays

Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11900)

Abstract

Wearable computing systems, i.e., virtual and augmented reality (VR/AR), are widely expected to be the next major computing platform. These systems strive to generate perceptually realistic user experiences that seamlessly blend physical and digital content to unlock unprecedented user interfaces and applications. Because the primary interface between a wearable computer and its user is typically a near-eye display, it is crucial that these displays deliver perceptually realistic and visually comfortable experiences. However, current-generation near-eye displays offer limited resolution and color fidelity, suffer from the vergence–accommodation conflict that impairs visual comfort, do not support all of the depth cues the human visual system relies on, and, in the case of AR, typically cannot render mutually consistent occlusions between physical and digital imagery. In this chapter, we review the state of the art of perceptually driven computational near-eye displays that address these and other challenges.
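To make the vergence–accommodation conflict concrete, the short Python sketch below (illustrative only, not code from the chapter; the function name and the example distances are assumptions) computes the dioptric mismatch between the distance at which the eyes converge on virtual content and the fixed focal plane of a conventional near-eye display. Accommodation demand in diopters is the reciprocal of distance in meters, so a fixed focal plane produces a mismatch whenever content is rendered at a different depth.

    # Minimal sketch: dioptric mismatch underlying the vergence-accommodation conflict.
    # Distances are in meters; optical demand in diopters is 1 / distance.

    def dioptric_mismatch(vergence_distance_m: float, focal_plane_m: float) -> float:
        """Absolute difference, in diopters, between vergence and accommodation demand."""
        return abs(1.0 / vergence_distance_m - 1.0 / focal_plane_m)

    if __name__ == "__main__":
        # Hypothetical example: content rendered at 0.5 m on a headset whose
        # optics place the focal plane at a fixed 2 m.
        mismatch = dioptric_mismatch(vergence_distance_m=0.5, focal_plane_m=2.0)
        print(f"Vergence-accommodation mismatch: {mismatch:.2f} D")  # prints 1.50 D

In this example the mismatch is 1.5 diopters, substantially larger than the fraction of a diopter typically associated with comfortable stereoscopic viewing.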

Keywords

Virtual reality · Augmented reality · Visual perception · Displays

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Stanford University, Stanford, USA
  2. Facebook Reality Labs, Redmond, USA
  3. Adobe, San José, USA
