Two Plane Volumetric Display for Simultaneous Independent Images at Multiple Depths
Abstract
We propose a new projection system to visualise different independent images simultaneously on planes placed at different depths within a volume using multiple projectors. This is not possible with traditional systems, and we achieve it by projecting interference patterns rather than simple images. The main research issue is therefore to determine how to compute a distributed interference pattern that recombines into multiple target images when projected by the different projectors. In this paper, we show that while the problem is not solvable exactly, good approximations can be obtained through optimization techniques. We also propose a practical calibration framework and validate our method by demonstrating the technique in action on a prototype system. The system opens up significant new possibilities for extending projection mapping techniques to dynamic environments for artistic purposes, as well as for visual assessment of distances.
Keywords
Multiple Projectors · Depth Plane · Semi-transparent Screen · Intensity Response Curves · Compensation Patterns

1 Introduction
In this paper, we propose a technique to realize such a system in practice. Our proposed system consists of multiple projectors coupled with a novel pattern creation algorithm that generates interference patterns to be projected, which recombine at user-defined depths to generate the desired images. The underlying principle of the algorithm can be intuitively understood by considering the following setup. For simplicity, let us assume a system consisting of two projectors and two planes placed at different depths as shown in Fig. 1, where each projector projects its own individual pattern. The aim is to project a single ‘1’ on the first plane and nothing on the second. The patterns are initially designed to project the same image at the same position on the first depth plane, as shown in Fig. 1a. Since the projected positions of the patterns do not coincide on the second depth plane, a compensation pattern must be projected by either projector to remove the duplicate pattern, as shown in Fig. 1b. However, since the compensation pattern for the second depth plane also intersects the first depth plane, it creates another spurious pattern on the first depth plane, which must in turn be removed with a further compensation pattern, as shown in Fig. 1c. The final pattern pair is then retrieved by iterating this process until convergence, as shown in Fig. 1d. One may wonder whether the process always converges to valid patterns. In this paper, we show that the problem cannot be solved exactly because of the finite field of view of the projectors; at the same time, however, close approximations can be created by distributing the approximation error over the whole projected pattern image.
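The alternating compensation described above can be sketched in a one-dimensional toy model. The model, variable names and displacement geometry below are our own illustration, not the paper's formulation: plane 1 sees both patterns aligned (I1 = P1 + P2), while on plane 2 projector 2's pattern is displaced by d pixels, with zero padding standing in for the finite field of view.

```python
import numpy as np

def shift(p, d):
    # Shift a 1-D pattern right by d pixels, zero-padding at the border
    # (a crude model of the projector's finite field of view).
    out = np.zeros_like(p)
    if d > 0:
        out[d:] = p[:-d]
    elif d < 0:
        out[:d] = p[-d:]
    else:
        out[:] = p
    return out

# Target: a bright blob on plane 1 (T1) and darkness on plane 2.
N, d = 64, 5
T1 = np.zeros(N)
T1[30:34] = 1.0

# Start with both projectors showing the target at half intensity (Fig. 1a).
P1 = T1 / 2.0
P2 = T1 / 2.0
for _ in range(10):
    # Fig. 1b: cancel the duplicate on plane 2 with a compensation on P1.
    r2 = P1 + shift(P2, d)
    P1 = P1 - r2
    # Fig. 1c: that compensation also hits plane 1; cancel it with P2.
    r1 = P1 + P2 - T1
    P2 = P2 - r1

I1 = P1 + P2            # image formed on plane 1
I2 = P1 + shift(P2, d)  # image formed on plane 2
```

In this toy case the compensation terms march towards the border by d pixels per round and eventually leave the frame, so both residuals vanish. Note that the resulting patterns contain negative values, which a real projector cannot emit; this is one reason the paper resorts to an approximate optimization rather than plain iteration.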
We show a functioning system able to simultaneously project two images at two distinct depths using two conventional LCD or laser projectors. We further contribute a practical geometric and photometric calibration procedure for the system, as well as an automatic procedure for generating the distributed interference patterns. The performance of the system is demonstrated both in simulation and on our prototype with natural RGB images. The method can also be applied to videos on a per-frame basis. Since this is a new realm of applications, we discuss the limitations of our current version as well as the implementation steps required for replication.
The paper is structured as follows. First, we review related work on multiple-projector systems in Sect. 2. Then, we give an overview of our proposed method in Sect. 3, followed by detailed techniques for projector calibration and distributed pattern creation in Sect. 4. In Sect. 5, simulation results as well as results on our prototype are discussed. Finally, we provide our concluding remarks in Sect. 6.
2 Related Work
Most projection-based augmented reality techniques assume that each single point on the object is illuminated by a single projector. In this case, the colour and intensity of the point are determined by the value of the originating pixel of the projector. In contrast, when multiple projectors illuminate a common scene, we gain additional degrees of freedom: a desired intensity on the object can be represented by many combinations of pixel values. Since the human visual system concentrates on the centre of the field of view (FOV), Godin et al. proposed a multi-projector system which projects a high-resolution image in the central portion of the FOV, while a low-resolution image is projected to the peripheral areas [5]. Bimber and Emmerling used multiple projectors to improve resolution [4], while Amano and Kato used them to compensate colours [1]. Recently, Nagase et al. [7] used multiple projectors to improve the visual quality of displayed content against defocus, occlusion and stretching artifacts by selecting the best projector for each object point. In this case, binary weights are assigned to the projectors and each projector is not used at full capacity. Similarly, in [10] an array of mirrors with a projector-camera system is used to view around occlusions and selectively re-illuminate portions of the scene.
3 System Overview
The system consists of two LCD projectors stacked vertically, as shown in Fig. 2a, and a matte cardboard plane for projection. The plane was mounted on a motorised rail so as to control its position precisely. In order to show the ability to project two different images simultaneously at two different depths, a semi-transparent screen was also included and placed in front of the matte plane. To calibrate the geometric relationship between the projectors, a camera as well as a standard checkerboard calibration plane is required. The main phases of the algorithm are shown in Fig. 2b: first, together with the geometric calibration, a photometric calibration procedure must be carried out prior to projection in order to compensate for any nonlinearities in the intensity response of the projectors and to fix their white balance. These phases are described in Sects. 4.1 and 4.2.
Once the system is calibrated offline, the homographies from the geometric calibration, the desired images to be shown on each plane, and positional information about where in space the patterns should recombine are given as input to our algorithm, which outputs the distributed interference patterns for each projector. This pattern generation procedure is described in Sect. 4.3. Then, the projectors’ intensity response curves estimated during the photometric calibration are used to linearise the intensity of the calculated patterns. Finally, the resulting patterns are projected simultaneously from each projector onto the scene, recombining into the desired images at the requested positions.
4 Multiple Simultaneous Image Projections at Multiple Depths
4.1 Geometric Calibration
4.2 Photometric Calibration
It is known that the intensity response curve of a projector is nonlinear, owing to the characteristics of its particular light source. More importantly, the intensity response curve is not necessarily the same for all projectors in the system. Since our proposed algorithm relies on the precise compensation of intensity values across the projected patterns, it is crucial for the projected patterns to accurately reflect their nominal intensity. Indeed, we found experimentally that whenever this stage was omitted, large errors were visible in the recombined images.
To confirm our photometric calibration, we horizontally flip the calibration pattern for one of the projectors and display both patterns at the same time. Since the pattern increases linearly, the superposition of the two patterns should be a constant grey value across all bands, as shown in Fig. 4e. Conversely, if photometric compensation is not performed, the superposition shows obvious errors, as in Fig. 4d.
4.3 Interference Pattern Generation
We formulate the problem of creating the distributed interference patterns for projecting different images simultaneously at different depths as a sparse linear system. Figure 5 shows the variable definitions. While for clarity we illustrate the process for two projectors and two images placed at two depth levels, the system extends to a higher number of projectors and depth planes.
The two projected patterns from the projectors are denoted as \(P_j\) where \(j \in \{1,\cdots ,J\}\), and the two images to be shown at the two different depths are denoted as \(I_k\) where \(k \in \{1,\cdots ,K\}\). Let pixels on \(P_j\) be expressed as \(p_{j,1},p_{j,2},\cdots ,p_{j,m},\cdots ,p_{j,M}\) and let pixels on \(I_k\) be \(i_{k,1},i_{k,2},\cdots ,i_{k,n},\cdots ,i_{k,N}\).
The image projection from \(P_j\) to \(I_k\) can be modeled as a homography with the parameters estimated during calibration. Using these parameters, we can define an inverse projection mapping q, where, if \(i_{k,n}\) is illuminated by \(p_{j,m}\), q(k, n, j) is defined as m, and if \(i_{k,n}\) is not illuminated by any pixels of \(P_{j}\), q(k, n, j) is defined as 0. In the example of Fig. 5, \(q(2,2,1)=2\) since \(i_{2,2}\) is illuminated by \(p_{1,2}\), and \(q(2,2,2)=1\) since \(i_{2,2}\) is illuminated by \(p_{2,1}\). \(q(2,1,2)=0\) since \(i_{2,1}\) is not illuminated by \(P_2\).
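A minimal sketch of building the mapping q from a calibrated homography follows. The function name, shapes, and the rounding-to-nearest-pixel convention are our own illustrative choices; the paper only specifies that q returns the illuminating pixel index m, or 0 when the plane pixel is outside the projector's field of view.

```python
import numpy as np

def inverse_map(H, plane_shape, proj_shape):
    # For every pixel n on a depth plane, find the projector pixel m that
    # illuminates it via the homography H (plane coords -> projector coords).
    # Returns 1-based linear indices as in the text; 0 means the pixel is
    # not illuminated by any pixel of this projector.
    ph, pw = plane_shape
    jh, jw = proj_shape
    q = np.zeros(ph * pw, dtype=np.int64)
    n = 0
    for y in range(ph):
        for x in range(pw):
            u, v, w = H @ np.array([x, y, 1.0])
            px, py = int(round(u / w)), int(round(v / w))
            if 0 <= px < jw and 0 <= py < jh:
                q[n] = py * jw + px + 1   # 1-based linear index m
            n += 1
    return q

# With an identity homography on matching 4x4 grids, every plane pixel n
# is lit by projector pixel m = n (1-based).
q = inverse_map(np.eye(3), (4, 4), (4, 4))
```

In a real implementation, the per-pixel loop would be vectorised and H would come from the checkerboard-based geometric calibration of Sect. 4.1.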
4.4 Solving Linear Constraints
Let the number of elements in \(\mathbf{P}\) be Q, and the number of elements in \(\mathbf{I}\) be R. Q is also the number of unknown variables in the system, while R is the number of constraints.
To solve the sparse system, our setup ensures that the equation is either well-posed or over-constrained (\(R \ge Q\)), since under-constrained (\(R<Q\)) configurations may lead to unstable results. In practice, this entails a system consisting of at least as many projectors as depth planes. For the over-constrained configuration, Eq. (5) can be approximately solved by estimating the pseudo-inverse of \(\mathbf{A}\). Since \(\mathbf{A}\) is a large sparse matrix, a sparse linear-algebra package is needed. In this paper, we approximated the solution using the LSQR solver described in [3, 13]. The system can be solved quite efficiently: in our MATLAB implementation, convergence is reached in about 1 second for an input pattern resolution of \(1024 \times 768\) on a standard PC running at 2.66 GHz. While the implementation is not yet real-time, the short runtimes make it possible to precompute patterns individually for each frame of a video as well as for static images. This is in contrast with the formulation of [12], where the different structure of the matrix \(\mathbf{A}\) does not allow the use of sparse solvers, requiring instead a computationally expensive global optimization.
5 Experiments
5.1 Simulations
5.2 Real Data
(a) PSNR and (b) SSIM results for combinations of image and depth pairs.

6 Conclusion
In this paper, we proposed a new pattern projection method which can project different patterns at different depths simultaneously. This novel system is realized by using multiple projectors with an efficient algorithm to create suitable distributed interference patterns. In addition, a practical calibration method for both geometric and photometric parameters was proposed. Experiments were conducted on a working prototype to show the quality of the combined images, as well as to validate the calibration and pattern creation methods on simulated and real data. Future extensions will concentrate on increasing the dynamic range as well as scaling the number of patterns and projectors in the prototype.
Acknowledgments
This work was supported by the Japan Society for the Promotion of Science (JSPS), Grant-in-Aid for JSPS Fellows no. 26.04041.
References
1. Amano, T., Kato, H.: Appearance enhancement using a projector-camera feedback system. In: International Conference on Pattern Recognition (ICPR), pp. 1–4 (2008)
2. Barnum, P.C., Narasimhan, S.G., Kanade, T.: A multi-layered display with water drops. ACM Trans. Graph. 29(4), 76 (2010)
3. Barrett, R., Berry, M., Chan, T.F., Demmel, J., Donato, J., Dongarra, J., Eijkhout, V., Pozo, R., Romine, C., van der Vorst, H.: Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd edn. SIAM, Philadelphia (1994)
4. Bimber, O., Emmerling, A.: Multifocal projection: a multiprojector technique for increasing focal depth. IEEE Trans. Vis. Comput. Graph. 12(4), 658–667 (2006)
5. Godin, G., Massicotte, P., Borgeat, L.: High-resolution insets in projector-based display: principle and techniques. In: SPIE Proceedings: Stereoscopic Displays and Virtual Reality Systems XIII, vol. 6055 (2006)
6. Hirsch, M., Wetzstein, G., Raskar, R.: A compressive light field projection system. ACM Trans. Graph. 33(4), 58 (2014)
7. Iwai, D.: Extended depth-of-field projector by fast focal sweep projection. IEEE Trans. Vis. Comput. Graph. 21, 462–470 (2015)
8. Jurik, J., Jones, A., Bolas, M., Debevec, P.: Prototyping a light field display involving direct observation of a video projector array. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 15–20 (2011)
9. Kagami, S.: Range-finding projectors: visualizing range information without sensors. In: IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 239–240 (2010)
10. Levoy, M., Chen, B., Vaish, V., Horowitz, M., McDowall, I., Bolas, M.: Synthetic aperture confocal imaging. ACM Trans. Graph. 23(3), 825–834 (2004)
11. Nagano, K., Jones, A., Liu, J., Busch, J., Yu, X., Bolas, M., Debevec, P.: An autostereoscopic projector array optimized for 3D facial display. In: ACM SIGGRAPH 2013 Emerging Technologies, p. 3:1 (2013)
12. Nakamura, R., Sakaue, F., Sato, J.: Emphasizing 3D structure visually using coded projection from multiple projectors. In: Kimmel, R., Klette, R., Sugimoto, A. (eds.) ACCV 2010, Part II. LNCS, vol. 6493, pp. 109–122. Springer, Heidelberg (2011)
13. Paige, C.C., Saunders, M.A.: LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 8, 43–71 (1982)
14. Tsao, C.C., Chen, J.S.: Moving screen projection: a new approach for volumetric three-dimensional display. In: SPIE Projection Displays II, vol. 2650, pp. 254–264 (1996)