Abstract
Holography is a vital tool in applications ranging from microscopy, solar energy, imaging, and displays to information encryption. With current algorithms, generating a holographic image and reconstructing object/hologram information from a holographic image are time-consuming processes. Versatile, fast, and accurate methodologies are required to compute holograms that perform color imaging at multiple observation planes and to reconstruct object/sample information from a holographic image, enabling wide adoption of optical holograms. Here, we focus on the design of optical holograms that generate holographic images at multiple observation planes and colors via a deep learning model, the CHoloNet. The CHoloNet produces optical holograms that multitask, multiplexing color holographic image planes by tuning holographic structures. Furthermore, our deep learning model retrieves object/hologram information from an intensity holographic image without requiring the phase and amplitude information of the intensity image. We show that the reconstructed objects/holograms agree excellently with the ground-truth images. The CHoloNet does not require iterative reconstruction of object/hologram information, whereas conventional object/hologram recovery methods rely on multiple holographic images at various observation planes along with iterative algorithms. We openly share the fast and efficient framework we developed to contribute to the design and implementation of optical holograms, and we believe that CHoloNet-based object/hologram reconstruction and holographic image generation will speed up the wide-area implementation of optical holography in microscopy, data encryption, and communication technologies.
Introduction
Optical holography is a superior tool to retrieve the phase and amplitude of light from an intensity image, which carry detailed information about the object such as its size, shape, and refractive index. An object/hologram reconstructed from an intensity image plays important roles in high-security encryption1,2, microscopy3, data storage4, 3D object recognition5, and planar solar concentrators6,7,8. Moreover, the information gained from intensity images is reversible: the phase and amplitude information can be used to generate intensity images for numerous applications such as imaging9,10,11, photostimulation12, printing13, optical beam steering14, aberration correction15, displays16,17, and augmented reality18. Optical holography enables image formation at different observation planes or through a sample without requiring any focusing elements or mechanical scanning. Generating a holographic image requires a finely tuned phase/amplitude distribution of a hologram, which increases the design time. Considering the wide-area implementation of optical holography for both retrieving object/hologram information and generating holographic images, a versatile methodology with a short design/optimization duration is in high demand.
A variety of algorithms are frequently used for generating optical holographic images and for object/hologram recovery19,20,21,22,23,24. These algorithms are easy to employ and perform well but require iterative optimization. Unfortunately, they lack full flexibility in design parameters, especially when the number of design wavelengths or observation planes exceeds one. Furthermore, a strong push toward convergence within a tolerable error may cause the algorithms to yield physically infeasible patterns. In contrast, deep learning correlates an intensity distribution to a hologram without reconstructing phase and amplitude information, thanks to its data-driven approach. Deep learning has achieved important results in holography, particularly for imaging25,26,27,28,29,30,31, microscopy32,33,34,35, optical trapping36, and molecular diagnostics37. However, generating optical holograms that provide holographic images at different observation planes and wavelengths requires versatile neural networks38,39,40. This issue is only weakly addressed in the literature, and proper modalities are needed for generating optical holographic images with diverse properties: different observation planes, wavelengths, and figures. A comprehensive neural network model could accelerate hologram design and thereby enable holographic images at different observation planes and wavelengths.
In this study, we first present a detailed method to compute color holographic images at multiple observation planes without cross-talk. Then, for the first time to our knowledge, we share a single deep learning architecture that generates (1) multi-color images at a single observation plane, (2) three single-color images at multiple observation planes, and (3) multi-color images at multiple observation planes. In addition to generating holographic images, the deep learning model retrieves object/hologram information from an intensity image without requiring its phase and amplitude information, thanks to the statistical retrieval behavior of deep learning. Moreover, by eliminating phase and amplitude retrieval, we obtain twin-image-free holographic images. Training the deep learning model with a data set, a one-time process lasting less than an hour, reduces the time for generating holographic images and recovering an object/hologram from an intensity image to 2 s.
Methods
A computer-generated hologram (CGH) can be a phase mask or a three-dimensional object whose thickness varies spatially. The thickness distribution of a CGH can also be interpreted as a phase distribution given the refractive index of the holographic plate and the wavelength of light. With the Fresnel–Kirchhoff diffraction integral (FKDI), the intensity distribution of a holographic image at each wavelength of a light source is computed from the thickness distribution of a CGH. The details of the holographic image calculation are discussed in the supplementary document.
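The full calculation is given in the supplementary document; the sketch below only illustrates the idea, propagating the phase-delayed field from a thickness map to an observation plane by direct summation of spherical wavelets. The pixel pitch, refractive index, and function name are illustrative assumptions, not values taken from this work.

```python
import numpy as np

def hologram_image(thickness, wavelength, distance,
                   pixel_pitch=1e-6, n_index=1.5):
    """Intensity at an observation plane from a CGH thickness map via a
    direct Fresnel-Kirchhoff-style summation (illustrative parameters)."""
    k = 2.0 * np.pi / wavelength
    # Phase imparted by a pixel of thickness t: 2*pi*(n - 1)*t / lambda
    field = np.exp(1j * k * (n_index - 1.0) * thickness)
    n = thickness.shape[0]
    coords = (np.arange(n) - n / 2) * pixel_pitch   # shared x/y axis
    xh, yh = np.meshgrid(coords, coords)            # hologram-plane grid
    out = np.zeros((n, n), dtype=complex)
    for i in range(n):                              # image-plane rows
        for j in range(n):                          # image-plane columns
            r = np.sqrt((coords[j] - xh) ** 2 +
                        (coords[i] - yh) ** 2 + distance ** 2)
            # Sum the spherical wavelets from every hologram pixel
            out[i, j] = np.sum(field * np.exp(1j * k * r) / r)
    return np.abs(out) ** 2                         # intensity image

# 40x40 hologram with 1-8 um thickness, imaged 350 um away at 700 nm
thickness = np.random.choice(np.arange(1, 9) * 1e-6, size=(40, 40))
image = hologram_image(thickness, wavelength=700e-9, distance=350e-6)
```

For the 40-by-40 grids used here the direct double sum is fast enough; for larger holograms an angular-spectrum or FFT-based propagator would be the usual choice.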
We use the correlation coefficient \(\rho\) as a metric to evaluate the similarity between a ground-truth/ideal holographic image \(I_{Ideal}\) and a holographic image \(I_{Designed}\) designed with the deep learning model/FKDI (see Eq. (1)). The same metric evaluates the similarity between a ground-truth hologram \(I_{Ideal}\) and a hologram reconstructed with the deep learning model \(I_{Designed}\). In Eq. (1), N is the number of pixels in the ideal and designed images; \(I_{Ideal, i}\) and \(I_{Designed, i}\) are the intensity values of the ith pixel of the ideal and designed images, respectively. The other terms in Eq. (1) are the mean of the ideal image \(\mu _{{I_{Ideal} }}\), the mean of the designed image \(\mu _{{I_{Designed} }}\), the standard deviation of the ideal image \(\sigma _{I_{Ideal}}\), and the standard deviation of the designed image \(\sigma _{I_{Designed}}\). We compute a correlation coefficient for each ground-truth/ideal holographic image \(I_{Ideal}\) and corresponding designed holographic image \(I_{Designed}\) at each observation plane distance d and each wavelength \(\lambda\).
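Eq. (1) is the standard Pearson correlation between the two images, expressed in percent. A direct implementation (function name hypothetical) is:

```python
import numpy as np

def correlation_coefficient(ideal, designed):
    """Pearson correlation between two images, in percent, per Eq. (1):
    rho = sum_i (I_ideal,i - mu_ideal)(I_designed,i - mu_designed)
          / (N * sigma_ideal * sigma_designed)."""
    a = ideal.ravel() - ideal.mean()
    b = designed.ravel() - designed.mean()
    rho = np.sum(a * b) / (a.size * ideal.std() * designed.std())
    return 100.0 * rho

img = np.random.rand(40, 40)
print(round(correlation_coefficient(img, img)))          # -> 100
print(round(correlation_coefficient(img, 2 * img + 1)))  # linear map -> 100
```

Note that the metric is invariant to a linear rescaling of intensity, so it measures pattern similarity rather than absolute brightness.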
Using a spatially varying 8-level thickness profile of a hologram, we tune the phase of a light source to form a holographic image. The holographic image is designed for linearly polarized continuous light at normal incidence. The resolutions of the hologram and the holographic image are 40-by-40. Each pixel of the hologram has a thickness between 1 and 8 \(\upmu\)m, divided into 8 equal discrete steps. We deliberately chose parameters that are experimentally achievable with large-area fabrication methods41. While organizing the thickness distribution of the hologram with the local search optimization algorithm, we evaluate its performance with the correlation coefficient \(\rho\) in percent using Eq. (1). The local search optimization algorithm generates a random hologram distribution and then sequentially changes the thickness of each pixel to form the desired intensity images at the selected light frequencies and observation plane distances. We used two decision-making criteria in the algorithm, AND and MEAN (also called logic operations), to minimize the difference between an ideal image (a uniform, binary image distribution) \(I_{Ideal}\) and a designed image \(I_{Designed}\). With the AND logic operation, a thickness change is kept only if the correlation coefficient increases at every wavelength and every observation plane simultaneously. With the MEAN logic operation, a thickness change is kept when the average of the correlation coefficients over all wavelengths and observation planes increases. By the end of the hologram design, we had obtained 51,200 optical hologram distributions and the corresponding holographic images at each light frequency/observation plane.
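The pixel-by-pixel search with the two acceptance criteria can be sketched as follows; `evaluate` is a hypothetical callback returning one correlation coefficient per (wavelength, observation plane) target, standing in for the FKDI-based scoring described above.

```python
import numpy as np

def local_search(evaluate, shape=(40, 40), levels=8, passes=4, mode="MEAN"):
    """Sketch of the local search: sweep every pixel, try each discrete
    thickness level, and keep a change only if the chosen criterion improves."""
    holo = np.random.randint(levels, size=shape)       # random initial hologram
    best = evaluate(holo)
    for _ in range(passes):                            # full scans of all pixels
        for i in range(shape[0]):
            for j in range(shape[1]):
                for level in range(levels):
                    old = holo[i, j]
                    if level == old:
                        continue
                    holo[i, j] = level
                    trial = evaluate(holo)
                    if mode == "AND":                  # every target must improve
                        accept = all(t > b for t, b in zip(trial, best))
                    else:                              # MEAN: average must improve
                        accept = np.mean(trial) > np.mean(best)
                    if accept:
                        best = trial
                    else:
                        holo[i, j] = old               # revert the change
    return holo, best
```

With a toy single-target `evaluate` that rewards closeness to a reference pattern, the search converges to that pattern within a pass or two; with the real multi-target scoring, AND only accepts changes that help every wavelength/plane at once, while MEAN trades them off against each other.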
In this manuscript, we employed the deep learning model, the CHoloNet, presented in our previous work42 (see Fig. S1). The CHoloNet reconstructs an object thickness distribution/hologram from holographic intensity images at all wavelengths of light and observation planes. The framework uses holographic intensity images to train the weights of the model and learns how to transform the spectral and spatial information of incident light encoded within holographic images into optical holograms. The CHoloNet receives multiple images at different observation planes/colors as inputs and produces a hologram, in terms of thickness, that reconstructs the input images at the predefined depths/colors. The model takes three-channel intensity distributions: two channels span the spatial size of the intensity distributions, and the third indexes the different wavelengths, observation planes, or both. After training, the network inference time for reconstructing an object/hologram distribution, including reversing the one-hot vector operation, is almost 2 s on average. The details of our neural network are presented in the supplementary material.
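The shapes below are inferred from the text (40-by-40 pixels, 8 discrete thickness levels); the final "reverse one-hot" step, which maps the network's per-pixel class scores back to a thickness map, reduces to an argmax over the class axis:

```python
import numpy as np

# Stand-in for the network's per-pixel output: an 8-way score vector per
# pixel, one class per discrete thickness level (1..8 um).
scores = np.random.rand(40, 40, 8)

level = np.argmax(scores, axis=-1)   # reverse the one-hot encoding
thickness_um = 1.0 + level           # class index 0..7 -> thickness 1..8 um
```

Treating thickness as an 8-way classification rather than a regression keeps the output on the fabricable discrete levels by construction.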
Results and discussion
The selection of design wavelengths is a crucial step to form holographic intensity images at the image plane without cross-talk between images at different light frequencies. To address this, we first inspect the spectral bandwidth that a holographic image presents when the hologram designed for it is illuminated by broadband light. We design a CGH that images an intensity distribution of the letter A at a wavelength of 700 nm, as seen in Fig. 1a. This image yields a correlation coefficient of 96.6% with a uniform, binary image of the letter A according to Eq. (1). As seen in this image, we generated a clear and almost uniform intensity distribution of the letter A with the hologram thickness profile in Fig. 1b. When the same CGH is illuminated by uniform broadband light (650–750 nm), the correlation coefficient varies across illumination wavelengths, as seen in Fig. 1c, and peaks at the design wavelength of the CGH, 700 nm, as expected.
In Fig. 2, we present holographic images of the letter A under illumination wavelengths between 692 and 707 nm. Below and above the design wavelength of the CGH (700 nm), the holographic images lose the clear shape of the letter A, and diffraction orders appear. The diffraction orders are especially dominant above the design wavelength, between 701 and 707 nm. They can be eliminated by selecting a longer distance between the hologram plane and the image plane; in that case, however, the correlation coefficient at the illumination wavelength changes, and the same hologram may no longer form the image of the letter A. These holographic images are acquired for the selected design parameters, and the hologram can be re-designed for updated parameters.
In Fig. 1c, the correlation value drops below 10% when the illumination wavelength shifts 30 nm from the design wavelength of the CGH: the intensity image of the letter A is drastically distorted by a 30 nm wavelength difference. This wavelength sensitivity depends strongly on the distance between the CGH plane and the image plane, the sizes of the CGH and image planes, the pixel size in each plane, and the wavelength of the light source. We therefore conclude that when the design wavelengths of a color holographic image are 30 nm apart, there is no overlap between the images, so a true multi-color hologram should operate over a bandwidth beyond 30 nm.
First, we tuned the thickness distribution of a CGH to generate a holographic color image at a single observation plane. The wavelengths of the color image are 670 nm for the letter A, 700 nm for the letter B, and 730 nm for the letter C. The transmitted images from the hologram plane are projected onto a white screen 350 \(\upmu\)m away from the surface of the CGH. The thickness optimization, four full scans of all hologram pixels with the local search optimization algorithm, lasted 3.6 h. The holographic images of the letters are presented in Fig. 3a–c. The intensity values on the letter pixels are higher than the background, and we observe the formation of three high-contrast holographic images. For these images, we use the correlation coefficient of Eq. (1) as the cost function. With the AND logic operation, the holographic images present a mean correlation of 81.1% (Fig. 4). With the MEAN logic operation, we obtained a higher mean correlation of 93.3%, since this criterion directly increases the mean of the correlation coefficients over all design wavelengths. The images of letters A, B, and C show correlation coefficients of 93.8%, 91.8%, and 94.4% in Fig. 4, respectively.
When the designed hologram is illuminated at 670 nm, we see low correlation coefficients of 36.8% between letters A and B and 15.3% between letters A and C on the image plane of letter A (Fig. 5). Cross-talk in Fig. 3a is nevertheless not visible: we observe only the formation of letter A, without letters B and C. The ideal, uniform, binary holographic image of letter A itself shows correlation coefficients of 34.8% and 12.0% with letters B and C, respectively. These baseline values arise because a large portion of the intensity image pixels has zero intensity, and the spatial similarity of the letters contributes an expected correlation; the coefficients of 36.8% and 15.3% are therefore not surprising. A similar situation occurs at an illumination wavelength of 700 nm: there is no sign of letters A or C in Fig. 3b, even though Fig. 5 shows correlation coefficients of 51.8% between letters B and A and 38.8% between letters B and C, and the ideal image of letter B itself shows a correlation coefficient of 44.7% with letter C. When the same hologram is illuminated at 730 nm, the correlation coefficients between letters C and A and between letters C and B are 54.2% and 15.7% in the same figure, respectively, yet there is no sign of letters A or B on the image plane of letter C in Fig. 3c.
While tuning the thickness profile of the hologram for the holographic images of letters A, B, and C, we stored the intensity images and holograms at each optimization attempt. With the collected data set, we tuned the weights of the CHoloNet to reconstruct object information from an intensity color holographic image. The CHoloNet yields 99.7% accuracy on both the training and validation data sets. The hologram reconstructed with the CHoloNet shows a correlation coefficient of 99.9% with the ground-truth hologram; it is high-fidelity object information obtained within 2 s. When the reconstructed hologram is illuminated with a light source at 670 nm, 700 nm, and 730 nm, we see the three holographic images presented in Fig. 3d–f. These images are obtained from the reconstructed hologram using the FKDI and Eqs. (S1–S3) (see the supplementary material), and they present correlation coefficients of 99.9% with the ground-truth images. Thus, with the CHoloNet, we reconstruct within 2 s a hologram structure that provides a color image at a single observation plane. We verified the generalization ability of the CHoloNet with this data set and discuss the results in the supplementary material. This holographic structure behaves as a color filter, simultaneously achieving a full-color holographic image with no cross-talk. Moreover, the holographic intensity images are twin-image free because recovery of phase and amplitude information is eliminated.
Next, we reconstructed a hologram structure that images three different figures at different observation planes (Fig. 6). The holographic images are ordered from letter A (350 \(\upmu\)m) through letter B (400 \(\upmu\)m) to letter C (450 \(\upmu\)m), so letter A is nearest to the hologram plane and letter C is farthest from it. The CHoloNet provides 99.7% accuracy on the training and validation data sets, and the reconstructed hologram has a correlation coefficient of 98.5% with the ground-truth hologram. The reconstructed hologram is then illuminated with a light source emitting at a single wavelength of 700 nm. With the FKDI, we numerically calculate the holographic intensity images at 700 nm at the three observation planes. As seen in Fig. 6, we obtained clear, high-contrast holographic images of the letters at the intended observation planes: letters A, B, and C form at 350 \(\upmu\)m, 400 \(\upmu\)m, and 450 \(\upmu\)m, respectively. Moreover, holographic images do not form outside the predefined observation planes (Fig. 7). In this figure, we observe only background-level correlation coefficients between image A and images B and C at the observation plane of image A, 350 \(\upmu\)m from the hologram plane, and the same holds at the observation planes of images B and C. These three holographic images are almost identical to the ground-truth images, with correlation coefficients of 99.9%.
Lastly, the CHoloNet provides reconstruction of a hologram that generates a color holographic image at three observation planes (Fig. 8). The letters A, B, and C are seen 390 \(\upmu\)m, 370 \(\upmu\)m, and 350 \(\upmu\)m away from the hologram plane, respectively. Using a training data set of 51,200 color images and holograms, the CHoloNet yields 99.4% accuracy on the training and validation data sets. The hologram designed with the CHoloNet has a correlation coefficient of 98.9% with the ground-truth hologram. When the reconstructed hologram is illuminated with a light source emitting at three wavelengths, 670 nm, 700 nm, and 730 nm, a color holographic image appears at the aforementioned observation planes in Fig. 8. The letter A forms 390 \(\upmu\)m from the hologram plane at 670 nm, owing to the spatial and spectral information encoded into the reconstructed hologram; letter B forms at 700 nm at an observation plane of 370 \(\upmu\)m; and the same hologram images letter C at 730 nm at 350 \(\upmu\)m, as seen in Fig. 8. These holographic images present high correlation coefficients of 99.9% with the ground-truth images. Image formation is highly sensitive to the design parameters, and the images appear only at the predefined observation planes (Fig. 9a): letters A, B, and C are generated only at 390 \(\upmu\)m, 370 \(\upmu\)m, and 350 \(\upmu\)m, respectively. When the same hologram is illuminated by a broadband light source between 630 and 770 nm, holographic images form only at the predefined wavelengths (Fig. 9b), giving cross-talk-free color holographic images at multiple observation planes.
The deep learning model used in this study decodes three-dimensional information about an object from an intensity-only recording. Our results demonstrate, qualitatively and quantitatively via the correlation coefficient, the effectiveness of the CHoloNet framework for forming holographic images with different properties: wavelength, figure, and display plane. The CHoloNet does not expand the solution set beyond what physics allows; instead, it finds the best solution within the data set for the desired intensity distribution, creating holograms far faster than exhaustive search algorithms. The developed neural network architecture may shorten the design time of holographic images produced by a hologram displayed on a spatial light modulator or a digital micro-mirror device. This work can be further extended to holographic images for multiple observers at oblique viewing angles. Also, extending the phase modulation capability of a hologram by increasing the number of thickness levels would improve the clarity of the holographic images. We believe the CHoloNet can be used to retrieve the imaginary (phase) and real (intensity) parts of light from an intensity-only holographic image. The model we develop may also be used for hyperspectral image generation, which would increase data transfer rates. Similarly, the method can be applied to control optical microcavity arrays or quantum dot arrays placed at different longitudinal and lateral positions43.
Conclusion
In this work, we present color holographic image generation with a 30 nm wavelength step to obtain cross-talk-free images. Using a single deep learning architecture, the CHoloNet, we demonstrate holographic image formation not only at different frequencies but also at different observation planes. The CHoloNet enables us to obtain phase-only holographic structures that generate holographic intensity images in 2 s. Moreover, thanks to the data-driven statistical retrieval behavior of deep learning, we reconstruct hologram/object information from holographic images without requiring the phase and amplitude information of the recorded holographic intensity images. Compared with iterative optimization methodologies, our neural network produces holograms several orders of magnitude faster and with up to 99.9% accuracy. The reconstructed holograms yield reproduced intensity patterns of superior quality. We believe our work will inspire a variety of fields, such as biomedical sensing, information encryption, and atomic-level imaging, that benefit from frequent deployment of optical holographic images and object/hologram recovery.
Data availability
We openly share the fast and efficient framework that we developed, the CHoloNet, in the Supplementary Information.
References
Fang, X., Ren, H. & Gu, M. Orbital angular momentum holography for high-security encryption. Nat. Photon. 14, 102. https://doi.org/10.1038/s41566-019-0560-x (2020).
Javidi, B. & Nomura, T. Securing information by use of digital holography. Opt. Lett. 25, 28. https://doi.org/10.1364/ol.25.000028 (2000).
Liebel, M., Pazos-Perez, N., van Hulst, N. F. & Alvarez-Puebla, R. A. Surface-enhanced Raman scattering holography. Nat. Nanotechnol. 15, 1005. https://doi.org/10.1038/s41565-020-0771-9 (2020).
Yoneda, N., Saita, Y. & Nomura, T. Binary computer-generated-hologram-based holographic data storage. Appl. Opt. 58, 3083. https://doi.org/10.1364/ao.58.003083 (2019).
Javidi, B. & Tajahuerce, E. Three-dimensional object recognition by use of digital holography. Opt. Lett. 25, 610. https://doi.org/10.1364/ol.25.000610 (2000).
Yolalmaz, A. & Yüce, E. Effective bandwidth approach for the spectral splitting of solar spectrum using diffractive optical elements. Opt. Express 28, 12911. https://doi.org/10.1364/oe.381822 (2020).
Gün, B. N. & Yüce, E. Wavefront shaping assisted design of spectral splitters and solar concentrators. Sci. Rep. 11, 2825. https://doi.org/10.1038/s41598-021-82110-w (2021).
Yolalmaz, A. & Yüce, E. Spectral splitting and concentration of broadband light using neural networks. APL Photon. 6, 046101. https://doi.org/10.1063/5.0042532 (2021).
Zheng, G. et al. Metasurface holograms reaching 80% efficiency. Nat. Nanotechnol. 10, 308. https://doi.org/10.1038/nnano.2015.2 (2015).
Rivenson, Y., Zhang, Y., Günaydın, H., Teng, D. & Ozcan, A. Phase recovery and holographic image reconstruction using deep learning in neural networks. Light Sci. Appl. 7, 17141. https://doi.org/10.1038/lsa.2017.141 (2017).
Leonardo, R. D. & Bianchi, S. Hologram transmission through multi-mode optical fibers. Opt. Express 19, 247. https://doi.org/10.1364/oe.19.000247 (2010).
Anselmi, F., Ventalon, C., Begue, A., Ogden, D. & Emiliani, V. Three-dimensional imaging and photostimulation by remote-focusing and holographic light patterning. Proc. Natl. Acad. Sci. 108, 19504. https://doi.org/10.1073/pnas.1109111108 (2011).
Kim, Y. et al. Seamless full color holographic printing method based on spatial partitioning of SLM. Opt. Express 23, 172. https://doi.org/10.1364/oe.23.000172 (2015).
Hayasaki, Y., Itoh, M., Yatagai, T. & Nishida, N. Nonmechanical optical manipulation of microparticle using spatial light modulator. Opt. Rev. 6, 24. https://doi.org/10.1007/s10043-999-0024-5 (1999).
Love, G. D. Wave-front correction and production of zernike modes with a liquid-crystal spatial light modulator. Appl. Opt. 36, 1517. https://doi.org/10.1364/ao.36.001517 (1997).
Moon, E., Kim, M., Roh, J., Kim, H. & Hahn, J. Holographic head-mounted display with RGB light emitting diode light source. Opt. Express 22, 6526. https://doi.org/10.1364/oe.22.006526 (2014).
Zhao, R. et al. Multichannel vectorial holographic display and encryption. Light Sci. Appl. 7, 1. https://doi.org/10.1038/s41377-018-0091-0 (2018).
Hong, K., Yeom, J., Jang, C., Hong, J. & Lee, B. Full-color lens-array holographic optical element for three-dimensional optical see-through augmented reality. Opt. Lett. 39, 127. https://doi.org/10.1364/ol.39.000127 (2013).
Zhang, J., Pégard, N., Zhong, J., Adesnik, H. & Waller, L. 3D computer-generated holography by non-convex optimization. Optica 4, 1306. https://doi.org/10.1364/optica.4.001306 (2017).
Kettunen, V. Review of iterative Fourier-transform algorithms for beam shaping applications. Opt. Eng. 43, 2549. https://doi.org/10.1117/1.1804543 (2004).
Gerchberg, R. W. & Saxton, W. O. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35, 237 (1972).
Liu, G. & Scott, P. D. Phase retrieval and twin-image elimination for in-line Fresnel holograms. J. Opt. Soc. Am. A 4, 159. https://doi.org/10.1364/josaa.4.000159 (1987).
Fienup, J. R. Phase retrieval algorithms: a comparison. Appl. Opt. 21, 2758. https://doi.org/10.1364/ao.21.002758 (1982).
Kim, G., Domínguez-Caballero, J. A. & Menon, R. Design and analysis of multi-wavelength diffractive optics. Opt. Express 20, 2814. https://doi.org/10.1364/oe.20.002814 (2012).
Wu, Y. et al. Deep learning enables high-throughput analysis of particle-aggregation-based biosensors imaged using holography. ACS Photon. 6, 294. https://doi.org/10.1021/acsphotonics.8b01479 (2018).
Wu, Y. et al. Bright-field holography: cross-modality deep learning enables snapshot 3d imaging with bright-field contrast using a single hologram. Light Sci. Appl. 8, 1. https://doi.org/10.1038/s41377-019-0139-9 (2019).
Jaferzadeh, K., Hwang, S.-H., Moon, I. & Javidi, B. No-search focus prediction at the single cell level in digital holographic imaging with deep convolutional neural network. Biomed. Opt. Express 10, 4276. https://doi.org/10.1364/boe.10.004276 (2019).
Ning, K. et al. Deep-learning-based whole-brain imaging at single-neuron resolution. Biomed. Opt. Express 11, 3567. https://doi.org/10.1364/boe.393081 (2020).
Schnell, M., Carney, P. S. & Hillenbrand, R. Synthetic optical holography for rapid nanoimaging. Nat. Commun. 5, 3499. https://doi.org/10.1038/ncomms4499 (2014).
Shi, L., Li, B., Kim, C., Kellnhofer, P. & Matusik, W. Towards real-time photorealistic 3d holography with deep neural networks. Nature 591, 234. https://doi.org/10.1038/s41586-020-03152-0 (2021).
Peng, Y., Choi, S., Padmanaban, N. & Wetzstein, G. Neural holography with camera-in-the-loop training. ACM Trans. Graph. 39, 1. https://doi.org/10.1145/3414685.3417802 (2020).
Pitkäaho, T., Manninen, A. & Naughton, T. J. Focus prediction in digital holographic microscopy using deep convolutional neural networks. Appl. Opt. 58, A202. https://doi.org/10.1364/ao.58.00a202 (2019).
Liu, T. et al. Deep learning-based holographic polarization microscopy. ACS Photon. 7, 3023. https://doi.org/10.1021/acsphotonics.0c01051 (2020).
Liu, T. et al. Deep learning-based color holographic microscopy. J. Biophoton. 12, 11. https://doi.org/10.1002/jbio.201900107 (2019).
Luo, Z. et al. Pixel super-resolution for lens-free holographic microscopy using deep learning neural networks. Opt. Express 27, 13581. https://doi.org/10.1364/oe.27.013581 (2019).
David, G., Esat, K., Thanopulos, I. & Signorell, R. Digital holography of optically-trapped aerosol particles. Commun. Chem. 1, 46. https://doi.org/10.1038/s42004-018-0047-6 (2018).
Kim, S.-J. et al. Deep transfer learning-based hologram classification for molecular diagnostics. Sci. Rep. 8, 17003. https://doi.org/10.1038/s41598-018-35274-x (2018).
Horisaki, R., Takagi, R. & Tanida, J. Deep-learning-generated holography. Appl. Opt. 57, 3859. https://doi.org/10.1364/ao.57.003859 (2018).
Eybposh, M. H., Caira, N. W., Atisa, M., Chakravarthula, P. & Pégard, N. C. DeepCGH: 3D computer-generated holography using deep learning. Opt. Express 28, 26636. https://doi.org/10.1364/OE.399624 (2020).
Lee, J. et al. Deep neural network for multi-depth hologram generation and its training strategy. Opt. Express 28, 27137. https://doi.org/10.1364/oe.402317 (2020).
Tokel, O. et al. In-chip microstructures and photonic devices fabricated by nonlinear laser lithography deep inside silicon. Nat. Photon. 11, 639. https://doi.org/10.1038/s41566-017-0004-4 (2017).
Yolalmaz, A. & Yüce, E. Hybrid design of spectral splitters and concentrators of light for solar cells using iterative search and neural networks. Photon. Nanostruct. Fundam. Appl. 48, 100987. https://doi.org/10.1016/j.photonics.2021.100987 (2022).
Yüce, E. et al. Adaptive control of necklace states in a photonic crystal waveguide. ACS Photon. 5, 3984. https://doi.org/10.1021/acsphotonics.8b01038 (2018).
Acknowledgements
This study is financially supported by The Scientific and Technological Research Council of Turkey (TUBITAK), Grant No. 118F075. The PhD study of Alim Yolalmaz is supported by TUBITAK under Grant program 2211-A. We would like to thank the Turkish Academy of Sciences for the support provided via the TÜBA-GEBİP project.
Author information
Contributions
E.Y. planned, directed, and supervised the study. A.Y. planned the study, designed the CHoloNet, performed the calculations and analysis, and prepared the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Yolalmaz, A., Yüce, E. Comprehensive deep learning model for 3D color holography. Sci Rep 12, 2487 (2022). https://doi.org/10.1038/s41598-022-06190-y