
Two Plane Volumetric Display for Simultaneous Independent Images at Multiple Depths

  • Marco Visentini-Scarzanella
  • Takuto Hirukawa
  • Hiroshi Kawasaki
  • Ryo Furukawa
  • Shinsaku Hiura
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9555)

Abstract

We propose a new projection system to visualise different independent images simultaneously on planes placed at different depths within a volume using multiple projectors. This is currently not possible with traditional systems, and we achieve it by projecting interference patterns rather than simple images. The main research issue is therefore to determine how to compute a distributed interference pattern that would recombine into multiple target images when projected by the different projectors. In this paper, we show that while the problem is not solvable exactly, good approximations can be obtained through optimization techniques. We also propose a practical calibration framework and validate our method by showing the technique in action with a prototype system. The system opens up significant new possibilities to extend projection mapping techniques to dynamic environments for artistic purposes, as well as visual assessment of distances.

Keywords

Multiple projectors · Depth planes · Semi-transparent screens · Intensity response curves · Compensation patterns

1 Introduction

In Augmented Reality (AR) and Mixed Reality (MR) systems, projectors are commonly used to efficiently present information to users by projecting images onto scene or object surfaces. Apart from AR/MR applications, projector systems have found extensive artistic applications in the form of projection mapping, where the precalculated scene geometry is used to project an appropriately warped image that is mapped onto the scene as an artificial texture. However, in projection mapping the projected pattern is the same along each ray, and what is viewed is therefore spatially invariant up to a projective transformation. Conversely, the potential for practical applications could be significantly broadened if different patterns could be projected at different depths simultaneously. For instance, by considering projection mapping onto multiple semi-transparent screens, a different movie can be projected onto each screen. If, for example, different depth layers of a scene are projected onto each screen, this could effectively enhance the users’ three-dimensional perception; such volume displays are now being actively investigated [2, 6, 8, 11]. Similarly, in a scene exhibiting dynamic, local geometry changes, different patterns could be visualised according to the changing scene depth without the need for explicit 3D reconstruction and/or changes of the projected images. A system able to project different patterns at different predefined depths can also be used as a non-contact three-dimensional measurement device, for manufacturing purposes or to aid the visual assessment of distances to avoid, for example, vehicle collisions.
Fig. 1.

Basic scheme to create patterns for two depths with two projectors.

In this paper, we propose a technique to realize such a system in practice. Our proposed system consists of multiple projectors coupled with a novel pattern creation algorithm that generates interference patterns which recombine at user-defined depths to form the desired images. The underlying principle of the algorithm can be intuitively understood by considering the following setup. For simplicity, let us assume a system consisting of two projectors and two planes placed at different depths as shown in Fig. 1, where each projector projects its own individual pattern. The aim is to project a single ‘1’ on the first plane and nothing on the second. The patterns are initially designed to project the same image at the same position on the first depth plane, as shown in Fig. 1a. Since the patterns’ projected positions will not coincide on the second depth plane, a compensation pattern must be projected by either projector to remove the duplicate pattern, as shown in Fig. 1b. However, since the compensation pattern for the second depth plane will also intersect the first depth plane, this creates another pattern on the first depth plane, which must in turn be removed with a further compensation pattern, as shown in Fig. 1c. The final pattern pair is retrieved by iterating this process until convergence, as shown in Fig. 1d. A natural question is whether the process always converges to valid patterns. In this paper, we show that the problem cannot be solved exactly because of the projectors’ finite fields of view; at the same time, however, close approximations can be created by distributing the approximation error over the whole projected pattern image.
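To make the iterative compensation concrete, here is a minimal 1-D sketch (our simplification, not the actual implementation): projector 1 is assumed aligned with both planes, projector 2's pattern lands with a hypothetical 3-pixel disparity on the second plane, and each pass cancels the residual on alternating planes.

```python
import numpy as np

def shift(p, d):
    """Where pattern p lands on a plane offset by disparity d (zero padded)."""
    out = np.zeros_like(p)
    if d >= 0:
        out[d:] = p[:len(p) - d]
    else:
        out[:d] = p[-d:]
    return out

n = 64
target1 = np.zeros(n); target1[n // 2] = 1.0   # a single '1' on plane 1
target2 = np.zeros(n)                          # nothing on plane 2
d = 3   # hypothetical disparity of projector 2 on plane 2 (aligned on plane 1)

p1, p2 = np.zeros(n), np.zeros(n)
for _ in range(40):
    p1 = target1 - p2              # projector 1 cancels the plane-1 residual
    p2 = shift(target2 - p1, -d)   # projector 2 cancels the plane-2 residual

plane1 = p1 + p2                       # what is seen on plane 1
plane2 = p1 + shift(p2, d)             # what is seen on plane 2
print(np.abs(plane1 - target1).max())  # 0.0: plane 1 is exact
print(np.abs(plane2 - target2).max())  # 1.0: an uncancelled spike survives
                                       # near the array edge, where the chain
                                       # of corrections runs out of FOV
```

Plane 1 recombines exactly, while a single uncancelled spike survives near the array edge, where the chain of corrections runs out of projector field of view: precisely the finite-FOV obstruction discussed above. The negative values appearing in p2 also anticipate the positivity issue handled by normalisation in Sect. 4.3.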

We show a functioning system able to simultaneously project two images at two distinct depths using two conventional LCD or laser projectors. We further contribute by describing a practical geometric and photometric calibration procedure for the system, as well as an automatic procedure for the generation of the distributed interference patterns. The performance of the system is demonstrated both in simulations and on our prototype with natural RGB images. The method can also be applied to videos on a per-frame basis. Since this is a brand-new realm of applications, we discuss the limitations of our current version as well as the implementation steps required for replication.

The paper is structured as follows. First, we review related work on multiple projector systems in Sect. 2. Then, we give an overview of our proposed method in Sect. 3, followed by detailed techniques for projector calibration and distributed pattern creation in Sect. 4. In Sect. 5, simulation results as well as results on our prototype are discussed. Finally, we provide our concluding remarks in Sect. 6.

2 Related Work

Most projection-based augmented reality techniques assume that each single point on the object is illuminated by a single projector. In this case, the colour and intensity of the point is determined by the value of the originating pixel of the projector. In contrast, when multiple projectors illuminate a common scene, we have additional degrees of freedom given by the combination of pixel values used to represent a desired intensity on the object. Since the human visual system concentrates on the centre of the field of view (FOV), Godin et al. proposed a multi-projector system which projects a high resolution image in the central FOV portion, while a low resolution image is projected in the peripheral areas [5]. Bimber and Emmerling used multiple projectors to improve resolution [4], while Amano used them for colour compensation [1]. Recently, Nagase et al. [7] used multiple projectors to improve the visual quality of displayed content against defocus, occlusion and stretching artifacts by selecting the best projector for each object point. In this case, binary values are assigned as weights to the projectors and each projector is not used at full capacity. Similarly, in [10] an array of mirrors with a procam system is used to view around occlusions and selectively re-illuminate portions of the scene.

Concerning projection-based stereoscopic and volumetric displays, physically dynamic screen devices such as droplet-based [2] and moving screens [14] have been proposed, which require special projectors with very high frame rates and are inherently expensive. If the position of the screen is physically moved, the content shown on the screen must be changed electronically by using depth maps measured by a range finder or alternative 3D tracking methods. In light-field displays [8, 11], while multiple projectors are used, each light ray is observed separately from specific viewpoints and never mixed. Overall, although the act of cooperatively combining pixel values from multiple projectors that share a common object point could be optimized for numerous tasks, the corresponding algorithms and applications have not yet been well explored by the community.
Fig. 2.

(a) Configuration of our practical system with two projectors and two planes. (b) Overview of the algorithm.

Two systems propose techniques for highlighting 3D structure according to its depth using multiple projectors. In [9], structure and depth are highlighted by projecting interfering Moiré patterns or complementary colours. In [12], Nakamura et al. use a matrix formulation similar to ours to colorise predetermined volume sections and highlight areas in space. However, the technique only crudely exploits the possibilities of light superposition: it is unable to produce complex, distinct images at discrete points in space, and only highlights a 3D region with a single colour. Conversely, we propose a novel application for the display of detailed images at distinct locations in space by actively exploiting interference patterns from multiple projectors. Furthermore, in Sect. 4 we highlight the differences in the formulation, which allow us to exploit the sparse structure of the problem and to solve it very efficiently despite very large matrix sizes.
Fig. 3.

(a) Calibration board for plane/camera homography. (b), (c) Composite images with calibration board and projected checkerboard pattern for camera/projector homography calculation from projectors 1 and 2, respectively. (d) Required homographies.

3 System Overview

The system consists of two LCD projectors stacked vertically, as shown in Fig. 2a, and a matte cardboard plane for projection. The plane was mounted on a motorised rail so as to control its position precisely. In order to show the ability to project two different images simultaneously at two different depths, a semi-transparent screen was also included and placed in front of the matte plane. To calibrate the geometric relationship between the projectors, a camera as well as a standard checkerboard calibration plane is required. The main phases of the algorithm are shown in Fig. 2b: first, together with the geometric calibration, it is necessary to carry out a photometric calibration procedure prior to projection, in order to compensate for any nonlinearities in the intensity response of the projectors as well as to fix their white balance. These phases are described in Sects. 4.1 and 4.2.

Once the system is calibrated offline, the homographies from the geometric calibration, together with the desired images to be shown on each plane and positional information about where in space the patterns should recombine, are given as input to our algorithm, which outputs the distributed interference patterns for each projector. This pattern generation procedure is described in Sect. 4.3. Then, the projectors’ intensity response curves estimated during the photometric calibration are used to linearise the intensity of the calculated patterns. Finally, the resulting patterns are projected simultaneously from each projector onto the scene, recombining into the desired images at the requested positions.

4 Multiple Simultaneous Image Projections at Multiple Depths

4.1 Geometric Calibration

In our method, the homography parameters between each planar board at depths \(\mathbf D _1, \mathbf D _2\) and each projected pattern \(\mathbf P _1, \mathbf P _2\), as well as the distortion parameters of each projector, are required as shown in Fig. 3d. Similarly to projection mapping, the homographies are calculated so that the patterns can be warped to be projected onto the same area on each plane by both projectors, and to compensate for the fact that the planes are not perfectly frontoparallel to the projector array. In order to estimate the homographies, we use an external camera and place a board with a printed standard checkerboard pattern at the desired positions. Then, for each projector, the same checkerboard pattern is projected onto the board, and the composite image of the printed and projected patterns is captured by the camera. The two patterns are printed and projected using two different colours, as shown in Fig. 3a–c, and simple colour thresholding is used to separate the composite image into its constituent patterns. Homographies are found between the plane and the camera as well as between the camera and the projector through checkerboard calibration, which allows us to calculate the homography between the plane and the projector. The process is repeated for all projectors and depths.
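To illustrate the chaining step, the sketch below (with synthetic corners and hypothetical ground-truth homographies standing in for real detections) estimates the two camera-mediated homographies with OpenCV and composes them into the plane-to-projector mapping:

```python
import cv2
import numpy as np

# Synthetic stand-ins for the detected checkerboard corners (in a real run
# these come from colour thresholding + cv2.findChessboardCorners).
gx, gy = np.meshgrid(np.arange(9), np.arange(6))
board_pts = np.float32(np.stack([gx.ravel() * 30, gy.ravel() * 30], axis=1))          # mm
proj_pts  = np.float32(np.stack([gx.ravel() * 80 + 100, gy.ravel() * 80 + 50], axis=1))  # px

def apply_h(H, pts):
    """Apply a 3x3 homography to an Nx2 point array."""
    return cv2.perspectiveTransform(pts.reshape(-1, 1, 2), H).reshape(-1, 2)

# Hypothetical ground-truth mappings into the camera image.
H_board2cam = np.array([[0.9, 0.05, 200], [-0.03, 0.95, 120], [1e-5, 2e-5, 1]])
H_proj2cam  = np.array([[0.5, 0.02, 350], [0.01, 0.48, 180], [2e-5, 1e-5, 1]])
board_cam = apply_h(H_board2cam, board_pts)   # printed corners seen by camera
proj_cam  = apply_h(H_proj2cam,  proj_pts)    # projected corners seen by camera

# Estimate the two camera-mediated homographies, then chain them:
# projector image -> camera -> board plane.
Hb, _ = cv2.findHomography(board_pts, board_cam)
Hp, _ = cv2.findHomography(proj_pts,  proj_cam)
H_proj2board = np.linalg.inv(Hb) @ Hp   # warps projector pixels onto the board
```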
Fig. 4.

(a),(b) Intensity response curves for projectors 1 and 2 respectively. (c) Projected calibration pattern. (d) Calibration pattern superimposed with its own mirrored version, before and (e) after colour compensation.

4.2 Photometric Calibration

It is known that the intensity response curve of a projector is nonlinear because of the characteristics of the various types of light sources. More importantly, the intensity response curve is not necessarily the same for all projectors in the system. Since our proposed algorithm relies on the precise compensation of intensity values between the projected patterns, it is crucial for the projected patterns to accurately reflect their nominal intensity. Indeed, we found experimentally that whenever this stage was omitted, large errors were visible in the recombined images.

For the photometric calibration, we project from each projector a linearly increasing grayscale pattern covering the full [0, 255] intensity range, as shown in Fig. 4c. The projected pattern is captured by an external camera with a linear response, and the median value of each RGB channel is taken for each intensity bar. The recorded values for both projectors are plotted against their nominal intensity, resulting in the characteristic gamma curves shown in Fig. 4a and b. These are approximated for each channel as \(f(x)= ax^b\), where x is the intensity value and a, b are the parameters found by fitting the observed data. The function is then inverted and kept for compensating the generated patterns prior to projection.
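A minimal sketch of the per-channel fit and its inversion, assuming intensities normalised to [0, 1] and using synthetic measurements in place of real camera readings:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for one channel of one projector: nominal bar
# intensities and their median camera readings, here simulated with a
# gamma of 2.2 plus a little measurement noise.
nominal = np.linspace(0.05, 1.0, 16)
rng = np.random.default_rng(0)
observed = nominal ** 2.2 + rng.normal(0, 0.005, nominal.size)

f = lambda x, a, b: a * x ** b
(a, b), _ = curve_fit(f, nominal, observed, p0=(1.0, 2.0))

def linearise(y):
    """Invert f to pre-distort a desired output intensity y in [0, 1],
    so that the projector actually emits y."""
    return np.clip((y / a) ** (1.0 / b), 0.0, 1.0)
```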
Fig. 5.

Variables of linear constraints.

To confirm our photometric calibration, we horizontally flip the calibration pattern for one of the projectors and display both patterns at the same time from the two projectors. Since the pattern is linearly increasing, the superposition of the two patterns should be a constant grey value across all bars, as shown in Fig. 4e. Conversely, if photometric compensation is not performed, the superposition shows obvious errors, as in Fig. 4d.

4.3 Interference Pattern Generation

We formulate the problem of creating the distributed interference patterns for simultaneously projecting different images at different depths as a sparse linear system. Figure 5 shows the variable definitions. While for clarity we illustrate the process in the case of two projectors and two different images placed at two depth levels, the system can be extended to a higher number of projectors and depth planes.

The projected patterns are denoted as \(P_j\), where \(j \in \{1,\cdots ,J\}\), and the images to be shown at the different depths are denoted as \(I_k\), where \(k \in \{1,\cdots ,K\}\). Let the pixels of \(P_j\) be \(p_{j,1},p_{j,2},\cdots ,p_{j,m},\cdots ,p_{j,M}\) and the pixels of \(I_k\) be \(i_{k,1},i_{k,2},\cdots ,i_{k,n},\cdots ,i_{k,N}\).

The image projection from \(P_j\) to \(I_k\) can be modeled as a homography with the parameters estimated during calibration. Using these parameters, we can define an inverse projection mapping q, where, if \(i_{k,n}\) is illuminated by \(p_{j,m}\), \(q(k,n,j)\) is defined as m, and if \(i_{k,n}\) is not illuminated by any pixel of \(P_{j}\), \(q(k,n,j)\) is defined as 0. In the example of Fig. 5, \(q(2,2,1)=2\) since \(i_{2,2}\) is illuminated by \(p_{1,2}\), and \(q(2,2,2)=1\) since \(i_{2,2}\) is illuminated by \(p_{2,1}\); \(q(2,1,2)=0\) since \(i_{2,1}\) is not illuminated by \(P_2\).
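One possible realisation of q for a single plane/projector pair, derived directly from the calibrated homography with nearest-neighbour rounding (our sketch; the paper does not prescribe an implementation):

```python
import numpy as np

def build_q(H_plane2proj, plane_wh, proj_wh):
    """For a fixed (k, j): warp every plane pixel n into projector j's image
    and record the 1-based flat index of the illuminating pixel, or 0 when
    it falls outside the projected pattern (finite field of view)."""
    (Nw, Nh), (Mw, Mh) = plane_wh, proj_wh
    xs, ys = np.meshgrid(np.arange(Nw), np.arange(Nh))
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(Nw * Nh)])
    u, v, w = H_plane2proj @ pts           # homogeneous warp of all pixels
    u = np.rint(u / w).astype(int)
    v = np.rint(v / w).astype(int)
    inside = (u >= 0) & (u < Mw) & (v >= 0) & (v < Mh)
    return np.where(inside, v * Mw + u + 1, 0)   # 0 marks "not illuminated"
```

For the identity homography and matching resolutions, build_q(np.eye(3), (4, 3), (4, 3)) simply enumerates the plane pixels 1, ..., 12: every plane pixel is lit by the projector pixel directly in front of it.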

Let us define \(p_{1,0}=p_{2,0}=0\). Then, using these definitions, the constraints of the projections are expressed as follows:
$$\begin{aligned} i_{k,n}=p_{1,q(k,n,1)}+p_{2,q(k,n,2)}. \end{aligned}$$
(1)
By collecting these equations, linear equations
$$\begin{aligned} \mathbf{I}_1 = \mathbf{A}_{1,1}\mathbf{P}_1 + \mathbf{A}_{1,2}\mathbf{P}_2 \end{aligned}$$
(2)
$$\begin{aligned} \mathbf{I}_2 = \mathbf{A}_{2,1}\mathbf{P}_1 + \mathbf{A}_{2,2}\mathbf{P}_2 \end{aligned}$$
(3)
follow, where \(\mathbf{P}_j\) is the vector \([p_{j,1},p_{j,2},\cdots ,p_{j,M}]\), \(\mathbf{I}_k\) is the vector \([i_{k,1},i_{k,2},\cdots ,i_{k,N}]\), and the matrix \(\mathbf{A}_{k,j}\) is defined by its (n, m)-elements as
$$\begin{aligned} \mathbf{A}_{k,j}(n,m)= {\left\{ \begin{array}{ll} \frac{d_{k,n,j}^2}{\mathbf{L}_{k,n,j}\cdot \mathbf{N}_{k}} & (q(k,n,j)=m)\\ 0 & (\text {otherwise}) \end{array}\right. }, \end{aligned}$$
(4)
where \(d_{k,n,j}\) is the distance between a pixel on the plane and the projector, which compensates for the light fall-off, and \(\mathbf{L}_{k,n,j}\cdot \mathbf{N}_{k}\) is the cosine of the angle between the normal \(\mathbf{N}_k\) of \(I_k\) and the incoming light vector \(\mathbf{L}\) at pixel n from \(P_j\), which compensates for the Lambertian reflectance of the matte plane. By using \( \mathbf{I} \equiv \left[ \begin{array}{r} \mathbf{I}_1\\ \mathbf{I}_2 \end{array} \right] \), \( \mathbf{P} \equiv \left[ \begin{array}{r} \mathbf{P}_1\\ \mathbf{P}_2 \end{array} \right] \) and \( \mathbf{A} \equiv \left[ \begin{array}{rr} \mathbf{A}_{1,1}&\mathbf{A}_{1,2}\\ \mathbf{A}_{2,1}&\mathbf{A}_{2,2} \end{array} \right] \), we obtain the complete linear system
$$\begin{aligned} \mathbf{I} = \mathbf{A}{} \mathbf{P}. \end{aligned}$$
(5)
The problem to be solved is to obtain \(\mathbf{P}\) given \(\mathbf{I}\) and \(\mathbf{A}\). The length of vector \(\mathbf{P}\) is \(M\cdot J\), while the length of vector \(\mathbf{I}\) is \(N\cdot K\); the matrix \(\mathbf{A}\) is therefore a very large sparse matrix. This simple linear model has one problem when modelling the real system. Since \(\mathbf{I}\) and \(\mathbf{P}\) are images, their elements should be non-negative values with a fixed dynamic range. However, the lack of positivity constraints in the solution of the sparse system means that \(\mathbf{P}\) may include negative elements. To overcome this issue, we normalize \(\mathbf{P}\) by scaling it and adding a constant vector so that its elements are in the range [0, 1], and obtain the final pixel values of the pattern images by multiplying by the maximum representable pixel value (normally 255). The effect of this is a compression of the resulting dynamic range and a lowering of the contrast. We explore this issue in our results, noting that it can be mitigated by using projectors with finer quantisation.
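As an illustration, one block \(\mathbf{A}_{k,j}\) can be assembled in sparse form from q and the per-pixel weights of Eq. (4) as follows (our sketch, not the paper's MATLAB code):

```python
import numpy as np
from scipy import sparse

def assemble_block(q, w, M, N):
    """Build one block A_{k,j} of Eq. (4): row n has a single nonzero at
    column q(k, n, j) - 1 holding the fall-off/Lambertian weight w[n];
    rows with q = 0 stay empty (plane pixel not illuminated)."""
    hit = q > 0
    n_idx = np.nonzero(hit)[0]
    m_idx = q[hit] - 1                 # back to 0-based projector indices
    return sparse.coo_matrix((w[hit], (n_idx, m_idx)), shape=(N, M))

# Toy example: 4 plane pixels, 2 projector pixels, unit weights.
q = np.array([1, 2, 0, 2])
print(assemble_block(q, np.ones(4), M=2, N=4).toarray())

# Stacking the K x J blocks, e.g. sparse.bmat([[A11, A12], [A21, A22]]),
# yields the full system matrix of Eq. (5).
```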

4.4 Solving Linear Constraints

Let the number of elements in \(\mathbf{P}\) be Q, and the number of elements in \(\mathbf{I}\) be R. Q is also the number of unknown variables in the system, while R is the number of constraints.

To solve the sparse system, our setup ensures that the equation is either well-posed or over-constrained (\(R \ge Q\)), as under-constrained (\(R<Q\)) configurations may lead to unstable results. In practice, this entails a system consisting of at least as many projectors as depth planes. For the over-constrained configuration, Eq. (5) can be approximately solved by estimating the pseudo-inverse of \(\mathbf{A}\). Since \(\mathbf{A}\) is a large sparse matrix, a sparse linear algebra package is needed. In this paper, we approximate the solution using the LSQR solver described in [3, 13]. The system can be solved quite efficiently: in our MATLAB implementation, convergence is reached in about 1 s for an input pattern resolution of \(1024 \times 768\) on a standard PC running at 2.66 GHz. While the implementation is not yet real-time, the short runtimes make it possible to precompute patterns individually for each frame of a video as well as for static images. This is in contrast with the formulation of [12], where the different structure of the matrix \(\mathbf{A}\) does not allow the use of sparse solvers, requiring instead a computationally expensive global optimization.
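For reference, a toy sketch of this step using SciPy's implementation of LSQR [13], with synthetic data standing in for the real \(\mathbf{A}\) and \(\mathbf{I}\), including the normalisation of Sect. 4.3:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

# Toy over-constrained stand-in for Eq. (5); in the real setting A comes
# from the calibrated homographies and I from the stacked target images.
rng = np.random.default_rng(0)
A = sparse.random(2000, 1500, density=0.002, format='csr', random_state=0)
I = rng.random(2000)

P = lsqr(A, I, atol=1e-8, btol=1e-8)[0]   # sparse least-squares solution

# P may contain negative entries: rescale into [0, 1] as in Sect. 4.3
# (the step that compresses dynamic range), then quantise to 8 bits.
P01 = (P - P.min()) / (P.max() - P.min())
pattern = np.rint(P01 * 255).astype(np.uint8)
```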

5 Experiments

Our setup consists of two stacked EPSON LCD projectors as in Fig. 2a, with an external Point Grey Grasshopper3 camera for calibration. The patterns were projected on a matte plane placed on a motorised stage for fine distance control. Three depths were tested, at 80 cm, 90 cm and 100 cm, referred to as \(D_1\), \(D_2\) and \(D_3\) respectively.
Fig. 6.

Projected patterns for (a) top and (b) bottom projectors.

Fig. 7.

Simulation results. Top row: original images, from left to right: Lena, Mandrill, Peppers, Fruits. Remaining rows: tests with Lena/Fruits, Lena/Mandrill, Lena/Peppers, Peppers/Fruits and Peppers/Lena. The columns show the simulated recombined images on the two projection planes placed at depths \(D_1/D_2\), \(D_2/D_3\) and \(D_1/D_3\) respectively.

Fig. 8.

Results with our prototype system. Each pair of consecutive rows shows the recombined patterns at depths \(D_1\) and \(D_3\) respectively. The datasets are (a) Cameraman/Jetplane, (b) Lena/Mandrill, (c) Lena/Cameraman, (d) Lena/Peppers, (e) Peppers/House and (f) Peppers/Lena.

5.1 Simulations

To give a quantitative evaluation of the system performance, we use the publicly available test images Lena, Mandrill, Peppers and Fruits, stretched to the \(1024 \times 768\) projector resolution to use all the available pixels, together with the homographies calculated for our real experimental setup. In our simulations, we include the effect of integer rounding to the standard intensity range [0, 255]. In general, we observe that the range of represented intensity values is reduced, thus reducing the overall image contrast. It is important to stress that, when considering the output of the sparse matrix solver without integer rounding and fitting in the [0, 255] range, the PSNR is consistently above 30 dB for all datasets; the major factor affecting the performance is the dynamic range compression needed to fit into the standard 24-bit RGB pixel range. Therefore, together with the PSNR values between original and generated images, which could be misleading due to the changed contrast, we include the SSIM in order to give a higher-level similarity metric between the original and generated images. Results are reported in Table 1a and b, while examples of generated images are shown in Fig. 7. From the table, we can observe that for all image pair combinations the performance is highest for the depth combination \(D_1/D_3\), which is the one with the largest separation between projection planes. For that combination, the PSNR exceeds 20 dB for almost all image pairs considered, while the SSIM exceeds 0.8, with peaks of 0.93. Examples of the generated patterns are shown in Fig. 6, where it can be seen that no discernible figure can be made out from a single projected pattern.
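Both metrics are available off the shelf; the following sketch (our example with synthetic images, using scikit-image) shows how such a comparison can be computed:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical comparison between a target image and its simulated
# recombination (both 8-bit RGB at the projector resolution).
rng = np.random.default_rng(0)
target = rng.integers(0, 256, (768, 1024, 3), dtype=np.uint8)
noise = rng.integers(-10, 11, target.shape)
recombined = np.clip(target.astype(int) + noise, 0, 255).astype(np.uint8)

print(peak_signal_noise_ratio(target, recombined))    # PSNR in dB
print(structural_similarity(target, recombined,
                            channel_axis=-1))         # SSIM (skimage >= 0.19)
```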
Fig. 9.

Numerical evaluation of the proposed system. (a) Original Peppers image. (b) Recombined Peppers image. (c) Recombined pattern outside the predefined depths.

5.2 Real Data

We tested our prototype on a wider range of images from public datasets, including Cameraman, Jetplane and House, with the \(D_1/D_3\) distance pair. Figure 8 shows that the system indeed displays the two images accurately and with good image quality. Numerically, we further tested the system by projecting the original Peppers image, capturing it and comparing it with the capture of our recombined image. For this experiment, we chose grayscale images so as not to incur any white balance issues. Visually, the results are pleasing, as shown in Fig. 9; however, due to noise in the recapturing process and small calibration errors, the numerical results indicate a PSNR of 14.88 dB and an SSIM of 0.690. Despite these values, the images are clearly visible and, importantly, it is striking how suddenly the images recombine at the desired depth: in Fig. 9c, taken 5 cm before the predefined depth, nothing meaningful is visible, reinforcing the case for visual distance assessment applications of the proposed system. The main remaining issue is one of dynamic range, as discussed for the simulations, since the contrast appears reduced in the recombined images. This will be our main focus for future investigations. Finally, we show the possibility of displaying both images simultaneously using a semi-transparent screen followed by a matte screen in Fig. 10. While the materials used do not allow good definition on the semi-transparent screen, the image of Lena is clearly visible and successfully demonstrates our concept.
Table 1.

(a) PSNR and (b) SSIM results for combinations of image and depth pairs.

Fig. 10.

Prototype showing two images simultaneously projected on a matte and semi-transparent screen for (a) Lena/Peppers and (b) Peppers/Lena.

6 Conclusion

In this paper, we proposed a new pattern projection method which can project different patterns at different depths simultaneously. This novel system is realized by using multiple projectors together with an efficient algorithm that creates suitable distributed interference patterns. In addition, a practical calibration method for both geometric and photometric parameters was proposed. Experiments were conducted on a working prototype to evaluate the quality of the recombined images and to validate the calibration and pattern creation methods on simulated and real data. Extensions will concentrate on increasing the dynamic range as well as scaling the number of patterns and projectors in the prototype.


Acknowledgments

This work was supported by the Japan Society for the Promotion of Science, Grant-in-Aid for JSPS Fellows No. 26.04041.

References

  1. Amano, T., Kato, H.: Appearance enhancement using a projector-camera feedback system. In: International Conference on Pattern Recognition (ICPR), pp. 1–4 (2008)
  2. Barnum, P.C., Narasimhan, S.G., Kanade, T.: A multi-layered display with water drops. ACM Trans. Graph. 29(4), 76 (2010)
  3. Barrett, R., Berry, M., Chan, T.F., Demmel, J., Donato, J., Dongarra, J., Eijkhout, V., Pozo, R., Romine, C., van der Vorst, H.: Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, 2nd edn. SIAM, Philadelphia (1994)
  4. Bimber, O., Emmerling, A.: Multifocal projection: a multiprojector technique for increasing focal depth. IEEE Trans. Vis. Comput. Graph. 12(4), 658–667 (2006)
  5. Godin, G., Massicotte, P., Borgeat, L.: High-resolution insets in projector-based display: principle and techniques. In: SPIE Proceedings: Stereoscopic Displays and Virtual Reality Systems XIII, vol. 6055 (2006)
  6. Hirsch, M., Wetzstein, G., Raskar, R.: A compressive light field projection system. ACM Trans. Graph. 33(4), 58 (2014)
  7. Iwai, D.: Extended depth-of-field projector by fast focal sweep projection. IEEE Trans. Vis. Comput. Graph. 21, 462–470 (2015)
  8. Jurik, J., Jones, A., Bolas, M., Debevec, P.: Prototyping a light field display involving direct observation of a video projector array. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 15–20 (2011)
  9. Kagami, S.: Range-finding projectors: visualizing range information without sensors. In: IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 239–240 (2010)
  10. Levoy, M., Chen, B., Vaish, V., Horowitz, M., McDowall, I., Bolas, M.: Synthetic aperture confocal imaging. ACM Trans. Graph. 23(3), 825–834 (2004)
  11. Nagano, K., Jones, A., Liu, J., Busch, J., Yu, X., Bolas, M., Debevec, P.: An autostereoscopic projector array optimized for 3D facial display. In: ACM SIGGRAPH 2013 Emerging Technologies, p. 3:1 (2013)
  12. Nakamura, R., Sakaue, F., Sato, J.: Emphasizing 3D structure visually using coded projection from multiple projectors. In: Kimmel, R., Klette, R., Sugimoto, A. (eds.) ACCV 2010, Part II. LNCS, vol. 6493, pp. 109–122. Springer, Heidelberg (2011)
  13. Paige, C.C., Saunders, M.A.: LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 8(1), 43–71 (1982)
  14. Tsao, C.C., Chen, J.S.: Moving screen projection: a new approach for volumetric three-dimensional display. In: SPIE Projection Displays II, vol. 2650, pp. 254–264 (1996)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Marco Visentini-Scarzanella
    • 1
    Email author
  • Takuto Hirukawa
    • 1
  • Hiroshi Kawasaki
    • 1
  • Ryo Furukawa
    • 2
  • Shinsaku Hiura
    • 2
  1. 1.Computer Vision and Graphics LaboratoryKagoshima UniversityKagoshimaJapan
  2. 2.Graduate School of Information SciencesHiroshima City UniversityHiroshimaJapan
