
Immersive Visualization of Cellular Structures

  • Sinan Kockara
  • Nawab Ali
  • Serhan Dagtas
Chapter
Part of the International Series in Operations Research & Management Science book series (ISOR, volume 132)

Abstract

Bioimaging is an immensely powerful tool in biomedical research that aids the understanding of cellular structures and the molecular events in the cell. Understanding the biological functions within the cell requires an in-depth understanding of all the diverse functions of the microscopic structures and the molecular interactions between macromolecules in their natural environment. Traditionally, cell biologists have used light microscopy techniques to study the topographical characterization of cell surfaces. The optical properties of a light microscope give rise to a blurring phenomenon similar to the one in a conventional microscope, with the result that images are characterized by low resolution and magnification. We address the challenging task of enhancing the image produced by a light microscope by reconstructing a stack of monoscopic images from a light microscope to produce a single image with more inferential and useful information than that obtained at the best focus level. We believe such an approach will enable a wider base of microscope users to take advantage of light microscope imaging in biological research.

Keywords

Virtual Reality · Texture Mapping · Layer Image · Focus Level · Immersive Virtual Reality

17.1 Introduction

Bioimaging is an immensely powerful tool in biomedical research that aids the understanding of cellular structures and the molecular events in the cell. Understanding the biological functions within the cell requires an in-depth understanding of all the diverse functions of the microscopic structures and the molecular interactions between macromolecules in their natural environment. Traditionally, cell biologists have used light microscopy techniques to study the topographical characterization of cell surfaces. The optical properties of a light microscope give rise to a blurring phenomenon similar to the one in a conventional microscope, with the result that images are characterized by low resolution and magnification. We address the challenging task of enhancing the image produced by a light microscope by reconstructing a stack of monoscopic images from a light microscope to produce a single image with more inferential and useful information than that obtained at the best focus level. We believe such an approach will enable a wider base of microscope users to take advantage of light microscope imaging in biological research.

Most of the prior work in this area has focused on reconstruction of the surface model, based on variations of methods such as shape from focus and depth from focus (Nayar and Nakagawa 1990). The shape from focus (SFF) method obtains surface models of objects from multiple monoscopic images (Nayar and Nakagawa 1990; Nayar 1992; Nayar et al. 1995; Jarvis 1983; Darrell and Wohn 1988; Xiong and Shafer 1993; Yeo et al. 1993; Horn 1968), while depth from focus (DFF) is a model-based method in which the depth map of a scene is computed by solving the inverse problem of the blurring process based on camera and edge models (Grossman 1987; Pentland 1987; Subbarao 1988; Lai et al. 1992; Ens and Lawrance 1993). Our work addresses microscopic imaging for the visualization of inner structures using translucent images, reconstructing the optical sections of a specimen from a stack of images.

The inherent imaging properties of a light microscope used in the visualization of cellular structures result in images characterized by low resolution and magnification. The images are obtained by varying the microscope parameters (typically the focal setting) and are taken from a single point of view. We report a novel mutual-information-based image clearance algorithm that refines a set of 2D images taken in a light microscope at different focal settings, each representing a cross-sectional view at its focal plane, and combines the most informative features from each level to produce a single image. The results illustrate that the algorithm produces an image with better visualization of cellular structures than that obtained from the best-focused image, and thus offers a viable alternative for enhancing images produced by a traditional light microscope.

Our goal is to develop a practical, low-cost layer separation method by taking images of cellular structures at multiple focus levels of a light microscope and visualizing the cellular structure in a six-degrees-of-freedom virtual reality environment. We have developed several algorithms that use 2D images obtained at multiple focus levels to construct an image of the object by cleaning the other layers' data from the focused layer as much as possible. To do this we used statistical distance measures: the Kullback-Leibler (KL) divergence and its more intuitive variant, the information radius (IRad). This approach has the potential to greatly enhance the viewing process and expose visual information that is otherwise hidden. The results illustrate that our proposed layer separation from multiple focus levels is effective for translucent objects in a light microscope.

The rest of the chapter is structured as follows. Section 17.2 explains the basics of focus. Section 17.3 introduces flat-field correction to eliminate detector artifacts. Section 17.4 presents the separation of the layers of each focal plane using statistical techniques. Section 17.5 presents the 3D visualization of cellular structures. Section 17.6 presents the conclusions of the study.

17.2 Light Microscopic Cellular Images and Focus: Basics

Fundamental to the method using the statistical layer separation (SLS) algorithm is the relationship between focused and defocused images. Figure 17.1 illustrates the basic image formation system in a typical light microscope with images at different focus levels (an image stack). Light rays from P that pass through the aperture A are refracted by the lens to converge at hk on the image plane at distance m. The relationship among the object distance u, the focal length of the lens f, and the image distances k and m is determined by the lens law (17.1). When an object is placed at distance u from the lens, a sharply focused image of the object is obtained if the distance between the image plane and the lens is k, where u, k, and the focal length f of the lens satisfy the thin lens formula:
$$\frac{1}{f} = \frac{1}{u} + \frac{1}{k}$$
(17.1)
Fig. 17.1.

Image formation and volume from focus
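As a quick numerical check of the thin lens formula (17.1), the sketch below solves it for the in-focus image distance k; the focal length and object distance values are arbitrary illustrations, not values taken from the chapter.

```python
def in_focus_image_distance(f, u):
    """Solve the thin-lens formula 1/f = 1/u + 1/k for the image distance k."""
    return 1.0 / (1.0 / f - 1.0 / u)

# Example with arbitrary values: a lens with f = 10 mm and an object at
# u = 10.5 mm needs the image plane at about k = 210 mm for a sharp image.
print(in_focus_image_distance(f=10.0, u=10.5))  # prints approximately 210.0
```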

A simple microscope model consisting of a lens and an image plane is used to derive some fundamental characteristics of focusing based on geometrical optics. Figure 17.2 depicts the microscope's optical system and an image stack of 240 monoscopic images obtained at different focus levels. Here we denote the thickness of the specimen covered by the 240 images as z0 and the best-focused image as k150. Each image in Figure 17.2 (k1 … k240) corresponds to the image obtained at the focus interval that captures the information from a specific layer of the specimen. z0 is the overall thickness of the specimen; z1 and (z0 − z1) are the partial thicknesses of the specimen beyond and before the best-focused image, respectively. Every image layer has its own unique information that is not present in any other layer. The SLS algorithm first extracts the unique information from each layer and then combines it to produce an enhanced image. The SLS algorithm not only produces a final image that includes the most informative data from each layer but also extracts each layer's unique data.
Fig. 17.2.

Imaging of a volume object; M is the magnification of the objective lens, f is the focal length of the lens, and T is the optical tube length

17.3 Flat-Field Correction

A number indicating the quality of an image is the signal-to-noise ratio (SNR). It can be calculated from two independent images I1 and I2 of a homogeneous, uniformly illuminated area with the same mean value µ. The variance σ² of the difference between the two images depends on photon shot noise, dark current, and readout noise and is calculated as:
$$\sigma^2 = \frac{\operatorname{var}(I_1 - I_2)}{2}$$
(17.2)
Then SNR is calculated as:
$${\rm SNR} = 20\log\left(\frac{\mu}{\sigma}\right)\ {\rm dB}$$
(17.3)
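A minimal sketch of the SNR computation in (17.2) and (17.3), assuming the two images are NumPy arrays of the same homogeneous, uniformly illuminated area; the function name and the averaging of the two image means are our choices, not the chapter's.

```python
import numpy as np

def snr_db(I1, I2):
    """SNR from two images of a uniform area, per (17.2) and (17.3)."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    mu = (I1.mean() + I2.mean()) / 2.0        # common mean of the two images
    sigma = np.sqrt(np.var(I1 - I2) / 2.0)    # noise std. dev. from the difference image
    return 20.0 * np.log10(mu / sigma)
```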
Even with clean specimen preparation and optimal optics, it is possible to get undesired elements in the output image, such as shading or dark current. To eliminate these artifacts, every image taken by the camera (Io) should be corrected by flat-field correction. In addition to the image of the object, an image of the background (Ib) and an image of the camera noise (Idark) are needed (17.4). The dark-field image (Idark) is taken by covering the camera's lens with a black cover, and the background image (Ib) is taken without a specimen to capture possible undesired noise.
$$I_c = \frac{I_o - I_{\rm dark}}{I_b - I_{\rm dark}}$$
(17.4)
The flat-field correction of images generally corrects small-scale artifacts introduced by the detectors, e.g., dust on the camera's lens or on the microscope bulb. This procedure corrects for the variation in sensitivity across the detectors; the idea is that an image of a uniform field records the pixel-to-pixel variation in the sensitivity of the imaging system. Once the raw image of the specimen is corrected for the dark current (Idark), a flat-field correction can be applied; Fig. 17.3 shows the result.
Fig. 17.3.

a) Original image k150, b) flat-field corrected image
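A sketch of the flat-field correction in (17.4), assuming the object, background, and dark-field frames are NumPy arrays of equal size; the small epsilon guard against division by zero is our addition.

```python
import numpy as np

def flat_field_correct(I_o, I_b, I_dark, eps=1e-6):
    """Apply Ic = (Io - Idark) / (Ib - Idark) pixel-wise, per (17.4)."""
    numerator = I_o.astype(np.float64) - I_dark
    denominator = I_b.astype(np.float64) - I_dark
    return numerator / np.maximum(denominator, eps)   # guard against zero denominators
```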

17.4 Separation of Transparent Layers using Focus

A considerable problem is that each layer's image contains some of the other layers' data. Classical image subtraction algorithms are not adequate for removing these data because the changes between layers are minute.

The next step in our approach is to obtain a filtered version of the images captured at each focal plane. Statistical separation algorithms have not been explored widely for layer separation of microscopic translucent image sequences. Schechner et al. (2000) propose a method for the separation of transparent layers. Several measures have been reported to quantify the difference (sometimes called divergence) between two or more probability distributions (Borovkov 1984). Of those measures, the two most commonly used are the KL divergence and the Jensen-Shannon divergence (information radius). The KL divergence uses the following formula:
$${\rm D}(P\parallel Q) = \sum\limits_i p_i \log\left(\frac{p_i}{q_i}\right)$$
(17.5)
This can be considered a measure of how different two probability distributions p and q are. The problem with this method is that it is undefined when qi = 0. In addition, KL divergence is not appropriate as a measure of similarity because it is not symmetric, i.e.,
$${\rm D(P\parallel Q)} \ne {\rm D(Q\parallel P)}$$
(17.6)
If we apply KL divergence to differentiate each specimen-layer’s image from others, then the measure can be formulated as
$${\rm D}(P\parallel Q) = \sum\limits_{x=1}^{W}\sum\limits_{y=1}^{H}\left|\,\frac{1}{3}\sum\limits_{z=1}^{3} P(x,y,z)\log\frac{P(x,y,z)}{Q(x,y,z)}\,\right|$$
(17.7)

This formula provides a quantitative measure of how much image P differs from image Q. P and Q are images taken from different focus levels, W and H are the image width and height, and z indexes the three RGB color channels.
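A sketch of how the image-based KL measure in (17.7) might be computed for two RGB images stored as H x W x 3 NumPy arrays; the normalization of raw intensities and the epsilon guard against log(0) are our assumptions.

```python
import numpy as np

def kl_image_divergence(P, Q, eps=1e-12):
    """Quantify how much image P differs from image Q, per (17.7)."""
    P = P.astype(np.float64) / 255.0 + eps   # treat intensities as probability-like values
    Q = Q.astype(np.float64) / 255.0 + eps   # eps avoids log(0) and division by zero
    per_pixel = np.abs(np.mean(P * np.log(P / Q), axis=2))  # average over the 3 RGB channels
    return per_pixel.sum()                   # sum over all (x, y) positions
```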

Φ(x) is the regularizing function shown in (17.8); it is used to keep the image's intensity values within the valid range.
$$\Phi(x) = \left\{ \begin{array}{ll} 0, & {\rm if}\ x < 0 \\ \beta, & {\rm if}\ x > \beta \\ x, & {\rm otherwise} \end{array} \right.$$
(17.8)

β is the maximum intensity value of the image.
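Since (17.8) is essentially a clamp to the valid intensity range, a one-line sketch suffices; the default β of 255 assumes 8-bit images, which is our assumption.

```python
import numpy as np

def phi(x, beta=255.0):
    """Regularizer of (17.8): clamp values to the range [0, beta]."""
    return np.clip(x, 0.0, beta)
```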

Because of the undefined nature of the KL divergence when Qi(x,y,z) = 0 (where i denotes the ith image), we used another mutual information approach for layer cleaning: the Jensen-Shannon divergence, also known as the information radius or IRad (Haykin 1994).
$${\rm IRad}(P_k, Q_i) = {\rm D}\left(P_k \parallel \frac{P_k + Q_i}{2}\right) + {\rm D}\left(Q_i \parallel \frac{P_k + Q_i}{2}\right)$$
(17.9)
The information radius is a symmetric function, i.e.,
$${\rm IRad(Pk,Qi) = IRad(Qi,Pk)}$$
(17.10)
Suppose image Pk(x,y,z) is an RGB image of our specimen at focus level k. Pk(x,y,z) gives the probability distribution of the image at position x, y, z, which can be thought of as the source of a certain amount of contribution, defined as
$$\log \frac{1}{P_k(x,y,z)}$$
(17.11)
and assume Pk(x,y,z) is the probability function associated with the outcome. Then the entropy of this probability distribution is defined as
$${\rm H}\big(P_k(x,y,z)\big) = -\sum P_k(x,y,z)\log\big(P_k(x,y,z)\big)$$
(17.12)
Also, joint entropy H(Pk(x,y,z), Qi(x,y,z)) is defined as below (Papoulis 1991),
$${\rm H}\big(P_k(x,y,z), Q_i(x,y,z)\big) = -\sum\limits_{k}\sum\limits_{i} P_k(x,y,z)\,Q_i(x,y,z)\log\big(P_k(x,y,z)\,Q_i(x,y,z)\big)$$
(17.13)
If we normalize (17.9) with the joint entropy in (17.13), the normalized quantity of difference, which is the IRad in our case, between two different images (P and Q) becomes
$$\frac{{\rm D}\left(P_k \parallel \frac{P_k + Q_i}{2}\right) + {\rm D}\left(Q_i \parallel \frac{P_k + Q_i}{2}\right)}{{\rm H}\big(P_k(x,y,z), Q_i(x,y,z)\big)}$$
(17.14)
If we plug the normalized IRad (17.14) into the image-based formulation of (17.7) and apply the regularizing function Φ of (17.8), the final form is obtained as follows. Let
$$\kappa = P_k(x,y,z)\log\left(\frac{P_k(x,y,z)}{\frac{P_k(x,y,z) + Q_i(x,y,z)}{2}}\right) \quad{\rm and}\quad \nu = Q_i(x,y,z)\log\left(\frac{Q_i(x,y,z)}{\frac{Q_i(x,y,z) + P_k(x,y,z)}{2}}\right)$$
(17.15)
where P and Q are two different images, (x,y) corresponds to a unique pixel of an image, and z indexes the three RGB channel values of that pixel.
$$\begin{array}{l} \displaystyle\sum\limits_{x=1}^{W}\sum\limits_{y=1}^{H}\Phi\left(\frac{1}{3}\cdot\frac{1}{k\mu}\sum\limits_{z=1}^{3}(\kappa + \nu)\right) \\[2ex] \displaystyle\sum\limits_{x=1}^{W}\sum\limits_{y=1}^{H}\Phi\left(\frac{1}{3}\cdot\frac{1}{k\eta}\sum\limits_{z=1}^{3}(\kappa + \nu)\right) \end{array}$$
(17.16)
where η and µ are constants with \({\rm \eta < \mu}\) and \({\rm \eta + \mu} = 1\). We assume that the upper-layer images affect the current layer image less and the lower-layer images affect it more, in a linear way, because the light source is below the specimen. The indices k and i denote the layer numbers of images P and Q, respectively.
As shown in (17.16), correlated information can be used to measure the mutual similarity between two images. This formula measures the mutual dependency, or the amount of information one image contains about another. As a result, mutual information can also be used to measure the mutual difference between two images. Using this basic formulation, we can differentiate one image layer from the others. Figure 17.4 demonstrates the results of the modified IRad algorithm for image separation in (17.16) as applied to image k150 in the original image stack. The significance of the results was verified by biologists.
Fig. 17.4.

a) Original image k1 (upper left), b) after IRad is applied, c) and d) colored forms
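A sketch of how the IRad terms of (17.15) and the weighted, regularized measure of (17.16) might be assembled for a pair of layers; the intensity normalization, the epsilon guards, and the way the weight (µ or η) is chosen follow our reading of the text rather than code from the chapter.

```python
import numpy as np

def irad_terms(Pk, Qi, eps=1e-12):
    """kappa and nu of (17.15) for two RGB layers Pk and Qi (H x W x 3 arrays)."""
    Pk = Pk.astype(np.float64) / 255.0 + eps   # treat intensities as probability-like values
    Qi = Qi.astype(np.float64) / 255.0 + eps
    M = (Pk + Qi) / 2.0                        # mid-point distribution used by IRad
    kappa = Pk * np.log(Pk / M)
    nu = Qi * np.log(Qi / M)
    return kappa, nu

def weighted_irad_map(Pk, Qi, k, weight, beta=1.0):
    """Per-pixel, regularized term of (17.16) for the layer pair (k, i).

    `weight` stands for mu when Qi lies below the current layer and for eta
    when it lies above (eta < mu, eta + mu = 1), reflecting the assumption
    that lower layers contribute more because the light source is below
    the specimen.
    """
    kappa, nu = irad_terms(Pk, Qi)
    per_pixel = np.mean(kappa + nu, axis=2) / (k * weight)  # (1/3) * 1/(k*weight) * sum over z
    return np.clip(per_pixel, 0.0, beta)                    # regularizer Phi of (17.8)
```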

We summarize below the steps involved in the SLS algorithm (Algorithm 1). First, we crop every image to eliminate unnecessary data and save computation time. Second, the flat-field correction of the images corrects small-scale artifacts introduced by the detectors. Third, the modified IRad mutual information algorithm is applied to separate each of the specimen's layers at its focus level from the other (defocused) layers. The separated images are then combined to form a new image that contains not only the best-focused level of detail but also the informative data from each layer.

Algorithm 1. Image Restoration (SLS algorithm)

Image Restoration
INPUTS:
  W, image width
  H, image height
  N, number of images
  I[W, H, N], input matrix – 3D image stack (3D array)
OUTPUTS:
  IR, restored image stack
METHODS:
  cropImage(), crops unnecessary data from an image
  applyFlatField(), applies the flat-field correction algorithm (17.4)
  applyIRAD(), applies the information radius algorithm to the given image stack (17.16)
VARIABLES:
  I, input matrix
  index, image index
for index in 1 to N
  I[:, :, index] ← cropImage( I[:, :, index] )
  I[:, :, index] ← applyFlatField( I[:, :, index] )
end for
IR ← applyIRAD( I )
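For completeness, a sketch of how the steps of Algorithm 1 could be chained over an image stack, reusing the flat_field_correct and weighted_irad_map helpers sketched earlier; the cropping window, the weight values, and the rule for deciding which layers lie below the current one are illustrative assumptions.

```python
import numpy as np

def sls_restore(stack, I_b, I_dark, crop, mu=0.7, eta=0.3):
    """Chain the Algorithm 1 steps: crop, flat-field correct, then IRad-filter each layer.

    stack   : list of H x W x 3 arrays, one image per focus level (k = 1 ... N)
    crop    : (top, bottom, left, right) window kept after cropping (illustrative choice)
    mu, eta : layer weights with eta < mu and eta + mu = 1
    """
    top, bottom, left, right = crop
    cropped = [img[top:bottom, left:right] for img in stack]
    I_b_c = I_b[top:bottom, left:right]
    I_dark_c = I_dark[top:bottom, left:right]
    # rescale the flat-field output back to the 0-255 range expected by the IRad helpers
    corrected = [255.0 * flat_field_correct(img, I_b_c, I_dark_c) for img in cropped]

    restored = []
    for k, Pk in enumerate(corrected, start=1):
        diff = np.zeros(Pk.shape[:2])          # accumulated IRad difference for layer k
        for i, Qi in enumerate(corrected, start=1):
            if i == k:
                continue
            weight = mu if i > k else eta      # layers below the current one weighted more
            diff += weighted_irad_map(Pk, Qi, k, weight)
        restored.append(diff)
    return restored
```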

17.5 3D Visualization of Cellular Structures

Virtual reality (VR) is a virtual environment that provides the user with a sense of reality. It is usually a three-dimensional, computer-generated representation of the real world in which the user is surrounded by the environment and, ideally, can interact with it. This is realized with a viewer-centered perspective obtained by tracking the position of the user. With VR, complex 3D databases can be rendered and investigated in real time, which enables the exploration of products that are in digital format.

We implemented a user-friendly 3D visualization in an immersive VR environment in which the user can easily interact with a 3D specimen. To that end, the CAVE (CAVE Automatic Virtual Environment) VR system is used. It was first introduced in 1991 by Carolina Cruz-Neira at the University of Illinois at Chicago (Cruz-Neira et al. 1993). The CAVE is composed of three to six projection screens driven by a set of coordinated image-generation systems. It is assisted by a head and hand tracking system that produces a stereo perspective.

17.5.1 Volume Rendering

So far, we have primarily focused on image processing algorithms to clean monoscopic images while preserving any informative data from these images for a viable 3D visualization. Our next step is to build 3D volume data using the stack of filtered 2D images.

Volume data is most often represented as a mesh (also known as a grid). When presenting volume data on a 2D screen, four basic direct volume rendering methods have been used to depict the data in its entirety (interior as well as exterior). A comparison of these volume rendering algorithms can be found in Meißner et al. (2000). The rendering times of these methods depend strongly on the type of dataset and the transfer functions. Shear-warp and 3D texture-mapping algorithms provide high performance, but at the cost of degraded output image quality. However, no single algorithm is the winner under all conditions; each has certain advantages under specific conditions.

Texture mapping is a method of adding realism to images by using textures instead of shading. It was used by Cabral et al. (1994). This approach reduces computing time and makes the algorithm faster. After the volume is loaded into texture memory, polygonal slices are rasterized to the view plane. Tri-linear interpolation is usually used, although quad-linear interpolation is also available, and texture mapping can also be combined with shading. We used this approach to exploit the increasing processing power and flexibility of the graphics processing unit (GPU). One method that exploits graphics hardware is based on 2D texture mapping (Rezk-Salama et al. 2000). This method stores stacks of slices for each viewing axis in memory as two-dimensional textures. The stack most parallel to the current viewing direction is chosen and then mapped onto object-aligned proxy geometry, which is rendered in back-to-front order using alpha blending.

The volume is represented using texture mapping as the fundamental graphics primitive. The basic principle of three-dimensional texture mapping is to apply texture onto the 3D scene. The volumetric data is copied into two-dimensional texture images as a stack of parallel slice planes of the specimen and then displayed back to front. The number of slices determines the quality of the resulting volumetric image. Because the microscopic images were taken at manually adjusted, fairly large intervals, the slice spacing is not sufficiently small and the visualized volume data is of limited quality.

Each slice plane of the chicken embryo is drawn by sampling the data in the volume, mapping the sampled texture coordinates onto the polygon's vertices. Polygons are sliced through the volume perpendicular to the viewing direction. Using a ray casting algorithm, rendering proceeds from back to front, and each voxel (a volume element) is blended with the previously drawn voxel using alpha blending.

The fundamental idea behind alpha blending is that a third channel or image can be used to drive a blending process that combines the image of the object to be blended with the background image. Alpha blending permits two or more images to be composited in such a way that it is difficult to tell whether the result has been composited at all. The alpha blending technique presented here combines two images: the image of the object to be blended, called the cutout image, and the background image. Alpha blending (Porter and Duff 1984) is achieved using the equation below:
$${\rm{B}}_{{\rm ij}} = {\rm{C}}_{{\rm ij}} \,{\rm{A}}_{{\rm ij}} + (1 - {\rm{A}}_{{\rm ij}} )\,{\rm{B}}_{{\rm ij}}$$
(17.17)
where i and j are the image column and row indexes, respectively, and Aij is an alpha factor with a value between 0 and 1. Bij is a pixel in the output image and Cij is a pixel in the cutout image. 3D texture mapping allows the use of tri-linear interpolation supported by the graphics hardware and provides a consistent sampling rate (Meißner et al. 1999). Figure 17.5 shows volumized and cross-sectioned data rendered using the 2D texture mapping technique with alpha blending.
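A sketch of the back-to-front compositing described above, applying (17.17) slice by slice; the constant per-slice alpha map is an illustrative assumption, since the chapter does not specify how the alpha values are chosen.

```python
import numpy as np

def composite_back_to_front(slices, alpha=0.1):
    """Blend a stack of RGB slices, ordered back to front, using (17.17)."""
    B = np.zeros_like(slices[0], dtype=np.float64)        # start from a black background
    for C in slices:                                       # back-most slice first
        A = np.full(C.shape[:2] + (1,), alpha)             # per-pixel alpha map (constant here)
        B = C.astype(np.float64) * A + (1.0 - A) * B       # B_ij = C_ij*A_ij + (1 - A_ij)*B_ij
    return B
```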
Fig. 17.5.

Volumized embryo with different rotations and zooming levels

17.5.2 Immersive Visualization: CAVE Environment

Computer graphics and virtual reality technologies have advanced tremendously in recent years, and many application areas have emerged, from telemedicine to military training. A CAVE (Cave Audio Visual Experience Automatic Virtual Environment) is a room-sized, immersive VR display environment in which the stereoscopic view of the virtual world is generated according to the user's head position and orientation. The CAVE was chosen as the volume visualization environment because it provides an expanded field of view, a larger scale of objects, and suitability for gestural expression and natural interaction. A participant is submerged in a VR experience in which all sensory stimuli are cut out except those the computer sends to the devices worn by the participant. A key advantage of a good VR experience is real-time response. The participant has the ability to move with six degrees of freedom within a virtual environment populated with virtual 3D objects. With this flexibility, users can move and interact in the environment freely and manipulate virtual objects while their movements are tracked and their positions monitored along the x, y, and z axes. This function is provided through electromagnetic sensing devices usually worn on the HMD (head-mounted stereo display) and on the hands or other body areas, so that the participant is properly oriented within the environment and with respect to the objects.

VRML (Virtual Reality Modeling Language) is one of the major platform-independent development tools for 3D graphics and animation. VRML is rapidly becoming the standard file format for transmitting 3D virtual worlds across the Internet. Static and dynamic descriptions of 3D objects, multimedia content, and a variety of hyperlinks can be represented in VRML files. Both VRML browsers and authoring tools for the creation of VRML files are widely available for several different platforms. Our VRML conversion method combines accelerated ray casting with VRML for the design of a CAVE-based interactive system. The algorithm below (Algorithm 2) summarizes the major steps of our conversion method and produces the final VRML surface model of the volume generated by our modified IRad algorithm. A configuration file with the extension ".vrd" is also produced. VRD files are the VRScape server's configuration files; they are simple ASCII (text) files that specify the models comprising a VRScape world. VRScape is a tool used to view VRML files in the CAVE environment (vrco.com 2007). This program also sets preferences on how to rescale and reposition the objects in the immersive world. Figure 17.6 shows the VRML output of the example embryo displayed in the CAVE.

Algorithm 2. VRML conversion

VRML export
INPUTS:
  W, image width
  H, image height
  N, number of images
  I[W, H, N], input matrix – 3D image stack (3D array)
OUTPUTS:
  Export.wrl, VRML file
METHODS:
  importColor(), imports colors from the 3D array
  importCoordinate(), imports coordinates from the 3D array
  importPoints(), imports points from the 3D array
  importColorIndex(), imports color indexes from the 3D array
  insert(), exports the 3D array into the VRML file
  createVRD(), creates the VRD file for the VRScape server to export to the CAVE environment
VARIABLES:
  RGB_value, color value of a specific pixel
  Coordinate_Index, coordinate indexes
  Points, vertex points
  Color_Index, color indexes
for index in 1 to W
  for index2 in 1 to H
    for index3 in 1 to N
      RGB_value ← importColor( I[index, index2, index3] )
      Coordinate_Index ← importCoordinate( I[index, index2, index3] )
      Points ← importPoints( I[index, index2, index3] )
      Color_Index ← importColorIndex( I[index, index2, index3] )
      insert( RGB_value, Coordinate_Index, Points, Color_Index into Export.wrl )
    end for
  end for
end for
createVRD( Export.wrl )
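To illustrate the kind of output Algorithm 2 produces, the sketch below writes a colored point set to Export.wrl using standard VRML 2.0 nodes; the specific node layout and the example data are our illustration, not the authors' exporter.

```python
def write_vrml_pointset(points, colors, path="Export.wrl"):
    """Write (x, y, z) points with per-point RGB colors as a VRML 2.0 PointSet node."""
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape {\n  geometry PointSet {\n")
        f.write("    coord Coordinate { point [\n")
        f.write(",\n".join("      %g %g %g" % tuple(p) for p in points))
        f.write("\n    ] }\n")
        f.write("    color Color { color [\n")
        f.write(",\n".join("      %g %g %g" % tuple(c) for c in colors))
        f.write("\n    ] }\n")
        f.write("  }\n}\n")

# Example: two voxels, one red and one green (coordinates and colors are made up).
write_vrml_pointset([(0, 0, 0), (1, 0, 0)], [(1, 0, 0), (0, 1, 0)])
```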
Fig. 17.6.

Scientists can view and interact with the example 3D embryonic structure in a CAVE

17.6 Conclusions

We report a novel method to produce refined images of cellular structures from a stack of light microscope images using the SLS algorithm. Our approach produces an enhanced image of cellular structures of a kind typically obtained only with advanced equipment and is thus a low-cost alternative. The most critical step in light microscopy is focusing; because our method combines the most informative and unique data from each focal plane, finding the best focus for light microscopic images may no longer be necessary. Areas of improvement include improving the clarity of the output images and using the SLS algorithm to build 3D images by stacking optically sectioned 2D morphologies of living cells and visualizing the results in a virtual environment. Such images would provide sub-cellular structural detail in living cells, with image volume projections creating a sense of depth in the cell interior, taking advantage of the real-time operation of the SLS algorithm.

17.7 Exercises

  1. What does flat-field correction do?
  2. Give some examples of possible small-scale artifacts introduced by the image detectors.
  3. Prove that the KL divergence is not symmetric for two different data sets P and Q with 5 elements each by using equation (17.5).
  4. Repeat Exercise 3 for the information radius by using equation (17.9), and observe the equality when the order of the data sets P and Q is changed.
  5. When is the KL divergence undefined?
  6. When is the information radius more useful?
  7. What is alpha blending?
  8. What are the possible advantages of immersive visualization in a virtual environment?

References

  1. Borovkov AA (1984) Mathematical Statistics. Mir, Moscow.
  2. Cabral B, Cam N, Foran J (1994) Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In: 1994 Symposium on Volume Visualization, pp. 91-98.
  3. Cruz-Neira C, Sandin D, DeFanti T (1993) Surround-screen projection-based virtual reality: the design and implementation of the CAVE. Computer Graphics (ACM SIGGRAPH), pp. 135-142.
  4. Darrell T, Wohn K (1988) Pyramid based depth from focus. In: Proc. CVPR, pp. 504-509.
  5. Ens J, Lawrance P (1993) An investigation of methods for determining depth from focus. IEEE Trans. on Pattern Analysis and Machine Intelligence 15(2):97-108.
  6. Grossman P (1987) Depth from focus. Pattern Recognition Letters 5:63-69.
  7. Haykin S (1994) Communication Systems, Chap. 10, 3rd ed. John Wiley & Sons.
  8. Horn BKP (1968) Focusing. MIT Artificial Intelligence Laboratory, Memo No. 160.
  9. Jarvis R (1983) A perspective on range-finding techniques for computer vision. IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-3, pp. 122-139.
  10. Lai SH, Fu CW, Chang S (1992) A generalized depth estimation algorithm with a single image. IEEE Trans. on Pattern Analysis and Machine Intelligence 14(4):405-411.
  11. Meißner M, Hoffmann U, Straßer W (1999) Enabling classification and shading for 3D texture mapping based volume rendering using OpenGL and extensions. In: Proceedings of Visualization 1999, pp. 207-214.
  12. Meißner M, Huang J, Bartz D, Mueller K, Crawfis R (2000) A practical evaluation of popular volume rendering algorithms. In: Proceedings of the Symposium on Volume Visualization 2000, pp. 81-90.
  13. Nayar SK, Nakagawa Y (1990) Shape from focus: an effective approach for rough surfaces. In: IEEE Intl. Conference on Robotics and Automation, pp. 218-225.
  14. Nayar SK (1992) Shape from focus system. In: Proc. CVPR, pp. 302-308.
  15. Nayar SK, Walanabe M, Nogouchi M (1995) Real time focus range sensor. In: Proc. ICCV, pp. 995-1001.
  16. Nayar SK (1992) Shape from focus system for rough surface. In: Proc. Image Understanding Workshop, pp. 539-606.
  17. Papoulis A (1991) Probability, Random Variables, and Stochastic Processes, Chap. 15, 2nd ed. McGraw-Hill.
  18. Pentland AP (1987) A new sense of depth of field. IEEE Trans. on Pattern Analysis and Machine Intelligence 9(4):523-531.
  19. Porter T, Duff T (1984) Compositing digital images. In: Proceedings of SIGGRAPH.
  20. Rezk-Salama C, Engel K, Bauer M, Greiner G, Ertl T (2000) Interactive volume rendering on standard PC graphics hardware using multi-textures and multi-stage rasterization. In: Proceedings of the Workshop on Graphics Hardware 2000, pp. 109-118.
  21. Schechner YY, Kiryati N, Basri R (2000) Separation of transparent layers using focus. International Journal of Computer Vision 39(1):25-39.
  22. Subbarao M (1988) Parallel depth recovery by changing camera parameters. In: Proc. International Conference on Computer Vision, pp. 149-155.
  23. Xiong Y, Shafer SA (1993) Depth from focusing and defocusing. In: Proc. CVPR, pp. 68-73.
  24. Yeo TTE, Ong SH, Jayasooriah, Sinniah R (1993) Autofocusing for tissue microscopy. Image and Vision Computing 11:629-639.

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Sinan Kockara (1)
  • Nawab Ali (1)
  • Serhan Dagtas (1)

  1. Department of Applied Science, University of Arkansas at Little Rock, Little Rock, USA
