Phase Screen Generator
We simulate the turbulence in the Earth’s atmosphere using Kolmogorov phase screens and the split-step beam propagation method [12]. This approach models the distributed turbulence along the propagation path using 15 discrete, infinitely thin phase screens equally spaced along the path. The phase variation across each screen represents the fluctuations induced by changes in the refractive index of the atmosphere over the volume of space extending halfway to the adjacent screens.
The phase screen code [12] takes the initial wavefront, propagates it through vacuum until it reaches a phase screen, applies that screen’s perturbation, and repeats this process along the propagation path until every phase screen has perturbed the wavefront. The propagation between the phase screens also gives rise to amplitude fluctuations in the wavefront. The turbulence in each phase screen is defined by the refractive index structure parameter (\(C_n^2\)), which we model using the Hufnagel-Valley approximation [9]
$$\begin{aligned} C^2_n (z) = a_1\,\exp (-z/H_1) + a_2\,\exp (-z/H_2) + a_3\, z^{10}\, \exp (-z/H_3)~, \end{aligned}$$
(1)
where the \(a_1\) coefficient represents the strength of the ground layer of the atmosphere, and \(H_1\) is the height of its 1/e decay. \(a_2\) and \(H_2\) are similarly defined for the turbulence in the troposphere, and \(a_3\) and \(H_3\) are related to the turbulence peak located at the tropopause. We select the values of the coefficients such that the \(C_n^2\) profile simulates the conditions at Mount Haleakala on Maui. We then make minor adjustments to the coefficients to obtain the desired Fried parameter [11],
$$\begin{aligned} r_0 = \left[ 0.423~\left( \frac{2\pi }{\lambda }\right) ^2 \sec \zeta \int _{0}^L C_n^2(z)~dz\right] ^{-3/5}~. \end{aligned}$$
(2)
The resulting turbulence and physical parameters are listed in Table 1. At the end of the propagation process, the wavefront is in the optical system’s aperture plane, and its phase values are wrapped between \(-\pi\) and \(\pi\). To unwrap the phase, we use the Goldstein branch cut phase unwrapping algorithm [2]. We note that we have ignored the variation of the refractive index (n) with wavelength, i.e., dispersion [8]. The effects of dispersion will be included in future studies.
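The sketch below illustrates this procedure: the Hufnagel-Valley profile of Eq. 1, the Fried parameter integral of Eq. 2, and the split-step loop over equally spaced screens. It is a minimal NumPy sketch under our own assumptions; the coefficient values, grid sizes, and function names are illustrative placeholders and not those of the code in [12], and the Gaussian-noise screens stand in for properly scaled Kolmogorov screens.

```python
import numpy as np

def hufnagel_valley(z, a1, H1, a2, H2, a3, H3):
    """Eq. 1: refractive index structure parameter C_n^2(z), z in meters."""
    return (a1 * np.exp(-z / H1)
            + a2 * np.exp(-z / H2)
            + a3 * z**10 * np.exp(-z / H3))

def fried_parameter(z, cn2, wavelength, zenith_angle):
    """Eq. 2: Fried parameter r_0 (simple Riemann sum on an evenly spaced z grid)."""
    k = 2.0 * np.pi / wavelength
    integral = np.sum(cn2) * (z[1] - z[0])
    return (0.423 * k**2 / np.cos(zenith_angle) * integral) ** (-3.0 / 5.0)

def vacuum_propagate(field, wavelength, dx, dz):
    """Angular-spectrum (Fresnel) propagation of a complex field over distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(fx, fx)
    transfer = np.exp(-1j * np.pi * wavelength * dz * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Illustrative Hufnagel-Valley coefficients (placeholders, not the Table 1 values).
z = np.linspace(1.0, 20e3, 2000)
cn2 = hufnagel_valley(z, a1=1.7e-14, H1=100.0, a2=2.7e-16, H2=1500.0,
                      a3=3.6e-53, H3=1000.0)
r0 = fried_parameter(z, cn2, wavelength=500e-9, zenith_angle=np.radians(30.0))

# Split-step loop over 15 equally spaced, infinitely thin screens.
n_screens, path_length, grid, dx = 15, 20e3, 512, 0.01
dz = path_length / n_screens
field = np.ones((grid, grid), dtype=complex)       # initially flat wavefront

rng = np.random.default_rng(0)
for j in range(n_screens):
    field = vacuum_propagate(field, 500e-9, dx, dz)
    # Placeholder screen: a real screen has Kolmogorov statistics scaled by the
    # C_n^2 integral over the volume halfway to the adjacent screens.
    screen = 0.1 * rng.standard_normal((grid, grid))
    field *= np.exp(1j * screen)

wrapped_phase = np.angle(field)    # wrapped to (-pi, pi]; unwrap afterwards
```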
Table 1 Table of simulation parameters

Image Generation
In the case of incoherent light, the model for the noise-free intensity of an image \(g(x_i,y_i)\) is [3]
$$\begin{aligned} g(x_i,y_i) = \iint f(x_0,y_0)h(x_0,y_0;x_i,y_i)dx_0dy_0 \end{aligned}$$
(3)
where \((x_0,y_0)\) and \((x_i,y_i)\) are the coordinates in the object plane and the image plane, respectively, \(f(x_0,y_0)\) is the object intensity distribution, and \(h(x_0,y_0;x_i,y_i)\) is the space-varying point spread function (PSF) of the imaging system. For our simulations we assume that the field-of-view of our images is smaller than the isoplanatic angle (\(\theta _0\)) of the atmosphere during the observations, where
$$\begin{aligned} \theta _0 = ~\left[ 2.91~\left( \frac{2\pi }{\lambda }\right) ^2 \sec \zeta \int _{0}^L C_n^2(z)~z^{5/3}~dz\right] ^{-3/5}~. \end{aligned}$$
(4)
In this case, the PSF becomes spatially invariant over the field-of-view, and Eq. 3 simplifies to a convolution [8]. Invoking the linear superposition principle for electromagnetic fields [3], which allows us to model a broadband image as a summation of monochromatic images, we generate an observed broadband image using
$$\begin{aligned} g_{x,y} = \sum \limits _{\lambda _1}^{ \lambda _2} \varDelta \lambda _N\left( f_\lambda \circledast h_{\lambda }\right) _{x,y}, \end{aligned}$$
(5)
where \(\circledast\) denotes convolution and \(\lambda _2 - \lambda _1\) is the spectral bandwidth of the observation. The spectral bandwidth is sampled at N discrete wavelengths such that \(\varDelta \lambda _N = (\lambda _2-\lambda _1)/N\). We use \(\lambda _1\) = 400 nm and \(\lambda _2\) = 1000 nm, the wavelength range over which silicon-based cameras are sensitive. We set \(\varDelta \lambda _N\) = 15 nm and generate monochromatic images for \(N=40\) color planes. This sampling corresponds to the change in wavelength needed to produce a subtle but visible difference in the speckle morphology of the PSFs created from the wavefronts generated at two adjacent wavelengths near 400 nm. For the object, we use point sources to simulate two unresolved, closely spaced objects. For simplicity, the sources are given blackbody spectra with different temperatures. We then generate a range of objects for our validation tests by varying the separation and the brightness difference of the objects. We model the monochromatic point spread functions using
$$\begin{aligned} h_{x,y,\lambda } = \left| FT^{-1}\left[ A(u,v)\, \exp \left( -i\phi _\lambda (u,v)\right) \right] \right| ^2 \end{aligned}$$
(6)
where \(FT^{-1}\) denotes the inverse Fourier transform operator, \(A(u,v)\) is the aperture function of the telescope, and \(\phi _\lambda (u,v)\) is the unwrapped wavefront phase at wavelength \(\lambda\). For this work we use wavefronts that represent turbulence conditions of \(D/r_0=33\). This simulates observations with the AEOS telescope on Mount Haleakala through median daytime seeing conditions during the Fall (see Table 1 of [1]).
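As an illustration of Eqs. 5 and 6, the NumPy sketch below forms monochromatic PSFs from a pupil and phase and sums the wavelength-weighted convolutions into a broadband image. The circular aperture, random placeholder phases, and flat source spectrum are our own simplifying assumptions (in practice the phases come from the propagation step above and the sources carry blackbody weights), and all names are illustrative.

```python
import numpy as np

def monochromatic_psf(aperture, phase):
    """Eq. 6: PSF from the pupil amplitude A(u,v) and unwrapped phase."""
    pupil = aperture * np.exp(-1j * phase)
    psf = np.abs(np.fft.ifft2(pupil)) ** 2
    return np.fft.fftshift(psf) / psf.sum()

def broadband_image(objects, psfs, d_lambda):
    """Eq. 5: sum of per-wavelength convolutions weighted by d_lambda."""
    image = np.zeros_like(objects[0])
    for obj, psf in zip(objects, psfs):
        image += d_lambda * np.real(
            np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(psf))))
    return image

# Illustrative use with N = 40 color planes from 400 nm to 1000 nm.
grid = 256
wavelengths = np.linspace(400e-9, 1000e-9, 40)
d_lambda = 15e-9

# Simple circular aperture (a stand-in for the AEOS pupil).
y, x = np.mgrid[-grid // 2:grid // 2, -grid // 2:grid // 2]
aperture = (np.hypot(x, y) < grid // 4).astype(float)

# Binary object: two point sources with a 1e-2 contrast ratio.
obj = np.zeros((grid, grid))
obj[grid // 2, grid // 2] = 1.0
obj[grid // 2, grid // 2 + 10] = 1e-2

rng = np.random.default_rng(1)
psfs, objs = [], []
for lam in wavelengths:
    phase = rng.standard_normal((grid, grid))   # placeholder wavefront phase
    psfs.append(monochromatic_psf(aperture, phase))
    objs.append(obj)      # flat spectrum here; blackbody weights in practice

g = broadband_image(objs, psfs, d_lambda)
```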
Figure 1 shows example images of a turbulence-degraded binary star object at selected monochromatic color planes of 400 nm, 700 nm, and 1000 nm, along with the ultra-broadband image integrated over the 40 color planes covering 400 nm to 1000 nm (left to right), for four different realizations of the atmosphere (top to bottom). The ratio of the secondary component’s brightness to that of the binary’s primary component is \(10^{-2}\). The images’ pixel scale is set so that the data at 400 nm are Nyquist sampled (i.e., 0.014 arcsec/pixel).
Image Reconstruction
For our initial investigation into the feasibility of recovering the original object from a set of broadband images, we assume that we have a high-quality measurement of the PSF, so that we only need to recover the object from a time series of observed broadband images. We then minimize the least-squares cost function
$$\begin{aligned} \epsilon = \sum \limits _k \sum \limits _{x,y} \left( g_{k,x,y}- \hat{g}_{k,x,y}\right) ^2 = \sum \limits _k \sum \limits _{x,y} r_{k,x,y}^2 ~, \end{aligned}$$
(7)
where
$$\begin{aligned} \hat{g}_{k,x,y} = \sum \limits _{\lambda _1}^{\lambda _2} \varDelta \lambda _M \left( \hat{f}_\lambda \circledast \hat{h}_{k,\lambda }\right) _{x,y} ~, \end{aligned}$$
(8)
and \(r\) is the residual difference between the observed and modeled images. Here \(\varDelta \lambda _M=(\lambda _2 - \lambda _1)/M\) with \(M < N\); that is, we estimate the object at fewer color planes than were used to generate the broadband images. The pixels in the M color planes of \(\hat{f}\) are the variables in the minimization. We note that each “monochromatic” PSF in Eq. 8 represents an integral over N/M of the monochromatic PSFs used in the generation of the broadband images.
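As a small illustration of this binning, the sketch below averages groups of N/M simulated monochromatic PSFs into M reconstruction color planes. The function name and the assumption that M divides N evenly are ours.

```python
import numpy as np

def bin_psfs(psfs_n, m):
    """Average groups of N/M monochromatic PSFs into M color planes (for Eq. 8).

    psfs_n: array of shape (N, ny, nx); assumes M divides N evenly.
    """
    n = psfs_n.shape[0]
    assert n % m == 0, "M must divide N for this simple binning"
    return psfs_n.reshape(m, n // m, *psfs_n.shape[1:]).mean(axis=1)

# e.g. N = 40 simulated color planes reduced to M = 8 reconstruction planes
psfs_n = np.random.rand(40, 256, 256)
psfs_m = bin_psfs(psfs_n, 8)     # shape (8, 256, 256)
```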
We minimize the cost function using the variable metric limited memory optimization algorithm. This is a gradient-based algorithm, and we calculate the gradient of the cost function with respect to the variables using
$$\begin{aligned} \frac{d \epsilon }{d\hat{f}_{x,y,\lambda }} = -2 \sum \limits _k \left( r_k \star \hat{h}_{k,\lambda }\right) _{x,y}~, \end{aligned}$$
(9)
where \(\star\) denotes correlation.
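The sketch below evaluates the cost of Eq. 7 and the gradient of Eq. 9 with FFT-based convolution and correlation, and hands them to a limited-memory quasi-Newton optimizer. We use SciPy’s L-BFGS-B as a stand-in for the variable metric limited memory algorithm used in the paper; the array shapes, random test data, and names are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def _conv(a, b):
    """Circular convolution via FFTs."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def _corr(a, b):
    """Circular correlation via FFTs."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def cost_and_grad(f_flat, g, h, d_lambda):
    """Eq. 7 (cost) and Eq. 9 (gradient).

    f_flat : flattened object estimate, shape (M*ny*nx,)
    g      : observed broadband images, shape (K, ny, nx)
    h      : PSF estimates, shape (K, M, ny, nx)
    """
    K, ny, nx = g.shape
    M = h.shape[1]
    f = f_flat.reshape(M, ny, nx)
    grad = np.zeros_like(f)
    eps = 0.0
    for k in range(K):
        # Eq. 8: modeled broadband image for frame k
        g_hat = d_lambda * sum(_conv(f[m], h[k, m]) for m in range(M))
        r = g[k] - g_hat                    # residual image
        eps += np.sum(r**2)                 # Eq. 7
        for m in range(M):
            # Eq. 9, with the d_lambda weight from Eq. 8 carried through
            grad[m] += -2.0 * d_lambda * _corr(r, h[k, m])
    return eps, grad.ravel()

# Illustrative call: K = 11 frames, M = 8 color planes, 1000 iterations.
K, M, ny, nx = 11, 8, 64, 64
g = np.random.rand(K, ny, nx)
h = np.random.rand(K, M, ny, nx)
f0 = np.full(M * ny * nx, g.mean())
res = minimize(cost_and_grad, f0, args=(g, h, 600e-9 / M), jac=True,
               method="L-BFGS-B", options={"maxiter": 1000})
```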
For our validation tests, we generate broadband data sets for two separations between the two unresolved objects (0.69 arcseconds and 0.14 arcseconds) and two contrast ratios for each separation (the brightness of the secondary component is \(10^{-2}\) and \(10^{-3}\) of the brightness of the primary component). For all the reconstructions presented in the paper, we use \(k=11\) broadband images and allow 1,000 iterations in the minimization routine. The number of iterations was chosen to limit the computation time.