1 Introduction

Light-sheet microscopy is a fluorescence microscopy technique that enables volumetric imaging of biological samples at high frame rates, with better sectioning and lower photo-toxicity than other fluorescence techniques. This is achieved by illuminating a thin slice of the sample with a sheet of light and detecting the emitted fluorescence from this plane with a second objective oriented perpendicular to the plane of the sheet. A schematic representation of a light-sheet microscope is shown in Fig. 1. Other microscopy techniques present certain disadvantages. For example, widefield microscopy [1] illuminates the whole sample using a single objective and achieves only very limited sectioning, while confocal microscopy [1] improves sectioning by using a pinhole to discard out-of-focus light, at the cost of higher photo-toxicity and reduced frame rate. Light-sheet microscopy avoids these downsides by selectively illuminating only the slice of the sample being imaged. In this way, less photo-toxic damage is induced and, therefore, living samples can be imaged over longer periods of time. The combination of lower photo-toxicity, better sectioning capabilities and faster image acquisition led to light-sheet microscopy being recognised as “Method of the Year” by Nature Methods in 2014 [2].

Fig. 1

Schematic of a light-sheet microscope, showing the illumination and the detection directions. The interaction of the light-sheet with the detection PSF leads to a spatially varying overall PSF and to decreasing pixel intensities away from the centre in the horizontal direction

The focus of the present manuscript is on deconvolution techniques for light-sheet microscopy data. In this context, deconvolution refers to the computational method of reversing the blurring introduced in the image acquisition process by the point spread function (PSF) of the microscope [3,4,5]. Specifically, the PSF of an imaging system represents its response to a point object. In general, the PSF can be modelled mathematically and calibrated using bead data (samples containing small spheres of known dimensions), and then used in the formulation of a forward model of the image formation process, which can then be inverted, for example using optimisation methods, to reconstruct the original, deblurred object [6].

However, in the case of light-sheet microscopy, simply knowing or estimating the PSF of the detection objective is not sufficient, since the overall response of the system to a point source is also influenced by the excitation light-sheet used to illuminate the slice. The overall PSF could be approximated by the detection PSF in the region where the illumination sheet is focused. However, the detection PSF becomes more distorted and loses intensity away from the focus of the excitation light-sheet, an effect illustrated in Fig. 1, so this approach is not accurate. Therefore, we address this problem as a case of spatially varying deconvolution [7, 8], where the variation of the system’s overall PSF is determined by the interaction between the detection PSF and the light-sheet. We note that, in general, the detection PSF itself can be spatially varying due to optical aberrations in the sample, a problem that is not specific to light-sheet microscopy. We do not address this source of variability in this work, although such a spatially varying detection PSF could in principle be incorporated in our method.

Two examples of acquired data are shown in Fig. 2. We can see in both cases the effect of the spatially varying light-sheet: the image is sharper in the centre and blurry on the sides, with the amount of blur growing with the horizontal distance from the centre. In addition, the fluorescence intensity of imaged beads in Fig. 2a is unevenly distributed despite imaging a homogeneous sample of beads, with the centre of the image being brighter than the left and right sides. The aim of our work is to correct these effects.

Fig. 2

Examples of light-sheet microscopy data of dimensions \(665.6\,\upmu \mathrm{m} \times 665.6\,\upmu \mathrm{m}\): beads in a and Marchantia thallus in b. We show for both samples maximum intensity projections on the \(x-y\) plane in the top row and on the \(x-z\) plane in the bottom row, with the bead intensity shown in log scale for increased contrast. The effect of the light-sheet is visible along the horizontal direction (the x axis), as the image is sharp and has higher intensity in the centre, where the sheet is focused, while the quality of the image decreases away from the centre. The blurring effect of the light-sheet in the z direction is particularly noticeable in the \(x-z\) projections in the bottom row. Another source of blur, especially visible in the bead image (left), is given by optical aberrations due to the sample imaging medium. The Marchantia image has been acquired using samples from Dr. Alessandra Bonfanti and Dr. Sarah Robinson, with the genetic line provided by Prof. Sebastian Schornack and Dr. Giulia Arsuffi at the Sainsbury Laboratory, Cambridge University

1.1 Contribution

We propose a method for deconvolution of 3D light-sheet microscopy data that takes into account the spatially varying nature of the PSF and is scalable to the dimensions typical of biological samples imaged using light-sheet microscopy—4.86 GB per 3D 16-bit stack of \(2048 \times 2048 \times 580\) voxels.

Our approach is based on a new model for image formation that describes the interaction between the light-sheet and the detection PSF and replicates the physics of the microscope. We then formulate an inverse problem in which the forward operator is given by this image formation model and the data fidelity accounts for the degradation of the data by both Gaussian and Poisson noise, expressed as an infimal convolution (a concept that will be defined in Sect. 3) of an \({{\,\mathrm{\mathrm {L}}\,}}^2\) term and a Kullback–Leibler divergence term, following [9]. The proposed variational problem is solved by applying the Primal Dual Hybrid Gradient (PDHG) algorithm [6, 10, 11] in a novel way. Finally, we exploit the noise model to automatically tune the balance between the data fidelity and the regularisation by resorting to a discrepancy principle. We obtain convergence rates in a Bregman distance for the infimal convolution fidelity from [9] under a standard source condition.

In our numerical experiments, we first show how this method performs on simulated data, where the ground truth is known, then we apply our method to two examples of data from experiments: an image of fluorescent beads and a sample of Marchantia. In both cases, we see that the deconvolved images show improved contrast, while outperforming deconvolution using only the constant detection PSF.

1.2 Related Work

Before describing in more detail our approach to the deconvolution problem, we give a brief overview of the literature on spatially varying deconvolution in the context of microscopy and how our work relates to it.

Purely data-driven approaches estimate a spatially varying PSF in a low dimensional space (for scalability reasons) using bead images [7, 8, 12]. This is usually not application specific and can be included in a more general blind deconvolution framework. Similarly, the work in [13] involves writing the spatially varying PSF as a convex combination of spatially invariant PSFs. The algorithm alternates between estimating the image and estimating the PSF. In a similar vein, the authors of [14] approach the problem of blind deconvolution by defining the convolution operator using efficient matrix-vector multiplication operations. This decomposition is similar to the discrete formulation of our image formation model. These methods optimise over the (unknown) operator in addition to the unknown image. Related to these results is [15], where the authors consider the models from [12] and [14] under the assumption that the blurring operator is known and given as a sum of weighted spatially invariant operators. They exploit this structure of the operator and use a Douglas–Rachford-based splitting to solve the optimisation problem efficiently. A different data-driven approach is presented in [16], where a deep artificial neural network is used to learn the spatially varying PSF from simulated data obtained using a forward model of the microscope. While these approaches are more general than our method, we consider that using the knowledge of the image formation process in the forward model is advantageous for the reconstruction of light-sheet microscopy data.

A number of groups consider the problem of reconstruction from multiple views in the context of light-sheet microscopy. In [17], the problem of multi-view reconstruction under a spatially varying blurring operator for 3D light-sheet data is considered. They divide the image into small blocks where they perform deconvolution using spatially invariant PSFs estimated from beads (and interpolated PSFs in regions where there are no beads). In [18], the authors extend the Richardson–Lucy algorithm to the multi-view reconstruction problem in a Bayesian setting. While it allows for different PSFs for each view (estimated using beads), this work does not consider spatial variations of the PSF. While using data from multiple views improves the quality of the reconstruction, these approaches are agnostic to the physics of the microscope.

The approach taken in [19, 20] involves directly measuring the spatially varying PSF in different regions of the field of view using an additional hardware module installed with the microscope, and then deconvolving the image in each region using the measured PSF. In particular, [20] employs a sophisticated tiling-based deconvolution method based on the Richardson–Lucy algorithm and a formulation similar to a convolutional neural network in order to avoid artefacts usually caused by stitching tiles deconvolved with different PSFs.

Taking an approach similar in spirit to ours, the authors of [21] model the effective PSF of a light-sheet microscope, which is then plugged into a regularised version of the Richardson–Lucy algorithm for deconvolution. However, while they model the detection PSF and the light-sheet separately, they assume that the effective PSF of the microscope is spatially invariant and given by the point-wise product of the two PSFs. In contrast, we do not take this simplifying step in our modelling, as we consider that the relationship between the two PSFs plays an important role in the resulting blur of the image.

The work of Guo et al. [22] uses a modified Richardson–Lucy algorithm implemented on GPU to improve the speed of convergence, further improved by the use of a deep neural network, which is a promising approach.

Moreover, in [23] the authors introduce an image formation model similar to the one described in the present manuscript. However, the regions of the resulting PSF where the light-sheet is out of focus are discarded, so the overall PSF is approximated by a constant PSF, and deconvolution is then performed using the ADMM algorithm. In Cueva et al. [24], a mathematical model which takes into account image fusion with two-sided illumination is derived from first principles. However, it is restricted to 2D and the method is not applied to real data.

Lastly, regarding the mixed Gaussian–Poisson noise fidelity, our method follows the infimal convolution variational approach described in [9], with the additional light-sheet blurring operator. The same inverse problem, without the blurring operator, is solved in [25] albeit using an ADMM algorithm for the minimisation.

1.3 Paper Structure

The paper is organised as follows. In Sect. 2, we introduce a mathematical model of the image formation process in a light-sheet microscope. This model describes how the sample is blurred by the excitation illumination together with the detection objective PSF. Optical aberrations of the system are modelled using Zernike polynomials in the detection PSF, which we discuss in Sect. 2.3. In Sect. 3, we define the mathematical setting for the deconvolution problem and state an inverse problem with a data fidelity given by the infimal convolution of the individual Gaussian and Poisson data fidelities. We discuss convergence rates and a discrepancy principle for choosing the regularisation parameter in Sect. 3.2. In Sect. 4, we describe how PDHG is applied to this inverse problem, with details of the implementation of the proximal operator and the convex conjugate of the joint Kullback–Leibler divergence. Finally, we validate our method with numerical experiments on both simulated and real data in Sect. 5, before concluding and giving a few directions for future work in Sect. 6.

2 Forward Model

The first contribution of the current work is a model of the image formation process in light-sheet microscopy. By modelling the excitation light-sheet and the detection PSF separately and their interaction in a way that replicates the physics of the microscope, we are able to accurately simulate the spatially varying PSF of the imaging system. We then incorporate this knowledge as the forward model in an inverse problem, which we solve to remove the noise and blur in light-sheet microscopy data. In this section, we describe the image formation process and the PSF model.

2.1 Image Formation Model

A light-sheet propagated along the x direction is focused by the excitation objective at an axial position \(z=z_0\) and the local light-sheet intensity l is modelled by the incoherent point spread function (PSF) of the excitation objective. The sample with local density of fluorophores u emits photons proportionally to the local intensity l of the light-sheet. These photons are then collected by a detection objective, whose action on the illuminated sample is modelled as a convolution with its PSF h. For clarity, see Fig. 3 for the directions of the axes. Finally, the sensor conjugated with the image plane \(z_0\) collects photons and converts them to digital values for storage. Consequently, the recorded image is corrupted by a combination of Gaussian and Poisson noise. We can see here again how the local variation of the light-sheet will result in a spatially varying blur and spatially varying illumination intensity in the captured image. This process is then repeated for each \(z_0\) to obtain the measured data f.

Fig. 3

Coordinate axes showing the light-sheet beam direction along the x axis and the detection direction along the z axis

More specifically, we model u, f, l and h as functions defined on \(\Omega \subset \mathbb {R}^3\), a rectangular domain of dimensions \(\Omega _x \times \Omega _y \times \Omega _z\) (in \(\upmu \mathrm{m}\)) with \(\Omega = [-\frac{\Omega _x}{2}, \frac{\Omega _x}{2}] \times [-\frac{\Omega _y}{2},\frac{\Omega _y}{2}] \times [-\frac{\Omega _z}{2},\frac{\Omega _z}{2}]\). For the sample u, the light-sheet l and the detection objective PSF h, the measured data f is given by:

$$\begin{aligned} f(x,y,z) = \iiint u(s,t,w - z)\, l(s,t,w)\, h(x-s,y-t,{-w}) \,\mathrm{d}s \,\mathrm{d}t \,\mathrm{d}w. \end{aligned}$$
(2.1)

The detection PSF h is given by

$$\begin{aligned}&h(x,y,z)\nonumber \\&\quad = \left| \iint g_{\sigma } * p_{Z}(\kappa _x,\kappa _y) e^{ 2 i \pi z \sqrt{(n/\lambda _h)^2 - \kappa _x^2 - \kappa _y^2} } e^{ 2 i \pi (\kappa _x x + \kappa _y y) } \mathrm{d}\kappa _x \mathrm{d}\kappa _y \right| ^2 \end{aligned}$$
(2.2)

and the light-sheet l is the y-averaged beam PSF \(l_{{\textit{beam}}}\):

$$\begin{aligned}&l_{{\textit{beam}}}(x,y,z) \nonumber \\&\quad =\left| \iint p_0(\kappa _z,\kappa _y) e^{ 2 i \pi x \sqrt{(n/\lambda _l)^2 - \kappa _z^2 - \kappa _y^2} } e^{ 2 i \pi (\kappa _z z + \kappa _y y) } \mathrm{d}\kappa _z \mathrm{d}\kappa _y \right| ^2, \end{aligned}$$
(2.3)

where n is the refractive index, \(\lambda _h,\lambda _l\) are the wavelengths corresponding to the detection objective and light-sheet beam, respectively, and \(g_{\sigma }\) represents Gaussian blur. Lastly, \(p_Z(\kappa _x,\kappa _y)\) and \(p_0(\kappa _z,\kappa _y)\) are the pupil functions for the detection PSF and the light-sheet beam, respectively, both given by:

$$\begin{aligned} p_i(\kappa _x,\kappa _y) = {\left\{ \begin{array}{ll} e^{i \phi _i(\kappa _x,\kappa _y)}, &{} \text {if } \kappa _x^2 + \kappa _y^2 \leqslant \left( {\textit{NA}}_i / \lambda _i \right) ^2,\\ 0, &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$
(2.4)

for their respective wavelengths, \(\lambda _i = \lambda _h\) or \(\lambda _i = \lambda _l\), and numerical apertures, \({\textit{NA}}_i = {\textit{NA}}_h\) or \({\textit{NA}}_i = {\textit{NA}}_l\), where the phase for the light-sheet pupil \(p_0\) is equal to zero and the phase for the detection PSF pupil \(p_Z\) is an approximation of the optical aberrations, written as an expansion in a Zernike polynomial basis. The Gaussian blur \(g_{\sigma }\) in (2.2) is a technical detail that enables better fitting of the detection PSF h to the optical aberrations seen in bead data, an idea introduced in [26]. More details about the pupil functions, the aberration fitting using Zernike polynomials and the Gaussian blur \(g_{\sigma }\) will be given in Sect. 2.3. In general, the NA of the excitation sheet is much lower than the NA of the detection lens. We note that the overall process is not translation invariant and cannot be modelled by a convolution operator.

Note that both the detection PSF h and the light-sheet PSF have a similar formulation derived from:

$$\begin{aligned}&\text {PSF}(x,y,z)\nonumber \\&\quad =\left| \iint p(\kappa _x,\kappa _y) e^{ 2 i \pi z \sqrt{(n/\lambda _i)^2 - \kappa _x^2 - \kappa _y^2} } e^{ 2 i \pi (\kappa _x x + \kappa _y y) } \mathrm{d}\kappa _x \mathrm{d}\kappa _y \right| ^2, \end{aligned}$$
(2.5)

which includes the pupil function for modelling aberrations and a defocus term before taking the Fourier transform (see, for example [27, 28]). In addition, the actual light-sheet which illuminates a slice of the sample is obtained by rapid scanning of the illumination beam, which we model by y-averaging the illumination PSF \(l_{{\textit{beam}}}\) given in (2.3) and repeating it in the y direction for the full length of the sample.
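To make the PSF model concrete, the defocused-PSF formula (2.5) can be evaluated numerically by sampling the pupil on a discrete frequency grid, multiplying by the defocus phase for each z and applying a 2D inverse FFT. The sketch below is a minimal NumPy illustration with a zero-phase circular pupil; the grid spacing, wavelength, numerical aperture and refractive index in the signature are illustrative assumptions, not the values or code used in our experiments.

```python
import numpy as np

def psf_from_pupil(nxy=64, nz=32, dxy=0.1625, dz=0.5,
                   wavelength=0.525, na=0.8, n_medium=1.33):
    """Evaluate the defocused PSF model (2.5) with a zero-phase circular pupil.

    All lengths are in micrometres; the parameter values are illustrative only.
    Returns a (nxy, nxy, nz) array normalised to unit sum.
    """
    # Spatial frequency grid (cycles per micrometre) matching the xy sampling.
    k = np.fft.fftfreq(nxy, d=dxy)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2

    # Circular pupil: 1 inside the NA-limited disc, 0 outside (zero phase).
    pupil = (k2 <= (na / wavelength) ** 2).astype(complex)

    # Axial wave number; clipping removes evanescent components.
    kz = np.sqrt(np.maximum((n_medium / wavelength) ** 2 - k2, 0.0))

    z = (np.arange(nz) - nz // 2) * dz
    psf = np.empty((nxy, nxy, nz))
    for i, zi in enumerate(z):
        defocused = pupil * np.exp(2j * np.pi * zi * kz)   # defocus phase in (2.5)
        field = np.fft.ifft2(defocused)                    # Fourier integral in (2.5)
        psf[:, :, i] = np.abs(np.fft.fftshift(field)) ** 2

    return psf / psf.sum()

if __name__ == "__main__":
    h = psf_from_pupil()
    print(h.shape, h.sum())
```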

In practice, the image formation process modelled by (2.1) is discretised at the point of recording by the camera sensor in the xy plane and by the step size of the light-sheet in the z direction. If the camera has a resolution of \(N_x \times N_y\) pixels and the light-sheet illuminates the sample at \(N_z\) distinct steps, the model (2.1) becomes:

$$\begin{aligned} \tilde{f}_{i,j,k} = \frac{1}{\tilde{C}} \sum _{i'=1}^{N_x} \sum _{j'=1}^{N_y} \sum _{k'=1}^{N_z} \tilde{l}_{i',j',k'} \tilde{u}_{i',j',k' - k} \tilde{h}_{i-i',j-j',k'}, \end{aligned}$$
(2.6)

for all \(i = 1,\ldots ,N_x, j = 1,\ldots ,N_y,k = 1,\ldots ,N_z\), and a normalisation constant \(\tilde{C}\), where \(\tilde{u},\tilde{f},\tilde{l},\tilde{h} \in \mathbb {R}^{N_x \times N_y \times N_z}\) are the discretised versions of uflh, respectively. Similarly, the sampling performed by the camera sensor leads to a discretisation of the Fourier space and the use of the discrete Fourier transform in the PSF and light-sheet models (2.2) and (2.3). Lastly, in our implementation we normalise \(\tilde{h}\) so that \(\sum _{i=1}^{N_x}\sum _{j=1}^{N_y}\sum _{k=1}^{N_z} \tilde{h}_{i,j,k} = 1\) and choose the normalisation constant \(\tilde{C}\) so that the norm of the resulting operator is equal to one.
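Normalising \(\tilde{h}\) to unit sum is a one-line rescaling; the constant \(\tilde{C}\) that gives the discrete operator unit norm can be estimated, for instance, by power iteration on the composition of the forward operator with its adjoint. The sketch below shows one possible way to do this; the names `forward` and `adjoint` stand for any implementation of (2.6) and its adjoint and are assumptions of this example, as is the toy blur used in the demo.

```python
import numpy as np

def operator_norm(forward, adjoint, shape, n_iter=20, seed=0):
    """Estimate ||L|| (largest singular value) of a linear operator given as a
    forward/adjoint pair, using power iteration on L^T L."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    x /= np.linalg.norm(x)
    sigma = 0.0
    for _ in range(n_iter):
        x = adjoint(forward(x))
        sigma = np.linalg.norm(x) ** 0.5   # ||L^T L x|| ~ ||L||^2 for unit-norm x
        x /= np.linalg.norm(x)
    return sigma

if __name__ == "__main__":
    # Toy circular convolution standing in for the light-sheet forward model.
    shape = (32, 32, 16)
    kernel = np.zeros(shape); kernel[:3, :3, :3] = 1.0 / 27.0
    K = np.fft.fftn(kernel)
    fwd = lambda u: np.real(np.fft.ifftn(np.fft.fftn(u) * K))
    adj = lambda u: np.real(np.fft.ifftn(np.fft.fftn(u) * np.conj(K)))
    C = operator_norm(fwd, adj, shape)
    print("estimated operator norm:", C)   # dividing the output by C gives unit norm
```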

2.2 Derivation of the Model

Let luh be defined as in Sect. 2.1, with h and l centred at the origin and l translation invariant in the y direction and symmetric around the yz plane. For a fixed \(z_0 \in [-\frac{\Omega _z}{2},\frac{\Omega _z}{2}]\), we take the following steps, which replicate the inner workings of a light-sheet microscope:

  1. 1.

    Image the sample at \(z = z_0\): centre the sample u at \(z_0\) and multiply the result with the light-sheet l:

    $$\begin{aligned} F(x,y,z;z_0) = u(x,y,z - z_0) \cdot l(x,y,z), \end{aligned}$$
    (2.7)
  2. 2.

    Convolve with the objective PSF h:

    $$\begin{aligned}&C(x,y,z;z_0) \nonumber \\&\quad = F(x,y,z;z_0) * h(x,y,z) \nonumber \\&\quad =\iiint F(s,t,w;z_0) h(x-s,y-t,z-w) \mathrm{d}s \mathrm{d}t \mathrm{d}w, \end{aligned}$$
    (2.8)
  3. 3.

    Slice at \(z = 0\):

    $$\begin{aligned} f(x,y,z_0) = \left[ C(x,y,z;z_0) \right] _{z=0}, \end{aligned}$$
    (2.9)

which leads to:

$$\begin{aligned} f(x,y,z_0) = \iiint u(s,t,w-z_0)\, l(s,t,w)\, h(x-s,y-t,-w) \,\mathrm{d}s \,\mathrm{d}t \,\mathrm{d}w. \end{aligned}$$
(2.10)

This is the same as model (2.1), where we substitute z for \(z_0\). Note that, if there are no aberrations in h or other sources of asymmetry in the z direction, we could simply write \(h(x-s, y-t, w)\) instead.

For a discretisation of the domain using a 3D grid with \(N_x \times N_y\) pixels and \(N_z\) light-sheet steps, the forward model can be computed by following the three steps above for each \(k=1,\ldots ,N_z\), where we perform the convolutions using the fast Fourier transform (FFT), resulting in a number of \(\mathcal {O}(N_x N_y N_z^2 \log (N_x N_y N_z))\) operations.

Alternatively, we can rewrite the last integral above as:

$$\begin{aligned} f(x,y,z_0) = \int K(x,y,w) * h(x,y,{-w}) \mathrm{d}w, \end{aligned}$$
(2.11)

where

$$\begin{aligned} K(x,y,w) = l(x,y,w) u(x,y,w-z_0), \end{aligned}$$
(2.12)

and the convolution in (2.11) is a 2D convolution in (xy):

$$\begin{aligned} K(x,y,w) * h(x,y,{-w}) = \iint K(s,t,w)\, h(x-s,y-t,{-w}) \,\mathrm{d}s \,\mathrm{d}t. \end{aligned}$$
(2.13)

In terms of numbers of FFTs performed on a discretised \(N_x \times N_y \times N_z\) grid, this alternative formulation requires \(\mathcal {O}(N_x N_y N_z^2 \log (N_x N_y))\) operations.
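As an illustration of (2.11)–(2.13), the following sketch applies the discrete forward model by looping over the light-sheet positions and performing the 2D convolutions in (x, y) with FFTs. It is a minimal NumPy version written for this exposition, assuming periodic boundary conditions, circular shifts in place of the continuous translations and kernels already sampled on the image grid; it is not the implementation used for the experiments, and the normalisation constant \(\tilde{C}\) is omitted.

```python
import numpy as np

def lightsheet_forward(u, l, h):
    """Discrete light-sheet forward model, cf. (2.6) and (2.11)-(2.13).

    u, l, h : arrays of shape (Nx, Ny, Nz) holding the sample, the light-sheet
    and the detection PSF sampled on the image grid.
    """
    nx, ny, nz = u.shape
    f = np.zeros((nx, ny, nz))
    # 2D spectra of the detection PSF planes, reversed in w as a stand-in for h(., ., -w).
    H = np.fft.fft2(h[:, :, ::-1], axes=(0, 1))
    for k in range(nz):                       # loop over light-sheet positions z0
        # K(x, y, w) = l(x, y, w) u(x, y, w - z0); roll gives the circular z-shift.
        K = l * np.roll(u, k, axis=2)
        # 2D circular convolution in (x, y) for every w, cf. (2.13).
        conv = np.real(np.fft.ifft2(np.fft.fft2(K, axes=(0, 1)) * H, axes=(0, 1)))
        f[:, :, k] = conv.sum(axis=2)         # integrate over w, cf. (2.11)
    return f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.random((32, 32, 16))
    z = np.linspace(-2.0, 2.0, 16)
    l = np.broadcast_to(np.exp(-z ** 2), (32, 32, 16))     # toy light-sheet profile
    h = np.zeros((32, 32, 16)); h[0, 0, 8] = 1.0            # delta-like detection PSF
    print(lightsheet_forward(u, l, h).shape)
```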

2.3 Point Spread Function Model

While both the light-sheet profile and the detection PSF are based on the same model of a defocused system (2.5) introduced in [27], note that our definition of h in (2.2) includes an additional convolution operation with a Gaussian \(g_{\sigma }\) and a pupil function \(p_Z\) with a nonzero phase. Let us turn to why this is the case.

It is well known that optical aberrations hamper results based on deconvolution with theoretical PSFs. In light-sheet microscopy, the effect of aberrations is more visible away from the centre, as shown for example in the bead image in Fig. 2, or in the more detailed example beads in Fig. 4. It is, therefore, required that we model the (spatially invariant) aberrations of the detection lens.

Fig. 4

Examples of beads and light-sheet profile. The bead in a is cropped from the centre of Fig. 2a and the bead in b is cropped from the right-hand side of Fig. 2a. The maximum intensity projections are taken in the \(x-y\) plane (top left), the \(z-y\) plane (top right) and the \(x-z\) plane (bottom left)

The general PSF model (2.5), with the phase of the pupil function equal to zero, does not take optical aberrations into account and therefore it is not an accurate representation of the objective PSF h. For example, a PSF calculated using (2.5) with zero phase of the pupil and the parameters of the detection objective, shown in Fig. 5, does not resemble the actual bead images in the data in Fig. 4.

There has been extensive work on the problem of phase reconstruction in the literature [26, 29, 30], but here we take a more straightforward approach using Zernike polynomials to include aberrations in the PSF [31], as follows. Let \(h_z\) be the objective PSF calculated using (2.5) with Zernike polynomials in the phase of the pupil function:

$$\begin{aligned}&h_z(x,y,z; c)\nonumber \\&\quad =\left| \iint p_Z(\kappa _x,\kappa _y; c) e^{ 2 i \pi z \sqrt{(n/\lambda _h)^2 - \kappa _x^2 - \kappa _y^2} } e^{ 2 i \pi (\kappa _x x + \kappa _y y) } \mathrm{d}\kappa _x \mathrm{d}\kappa _y \right| ^2, \end{aligned}$$
(2.14)

where \(p_Z(\kappa _x,\kappa _y; c)\) is the pupil function with \(N_Z\) Zernike polynomials in the phase:

$$\begin{aligned} p_Z(\kappa _x,\kappa _y; c) = {\left\{ \begin{array}{ll} e^{\, i \sum _{m=1}^{N_Z} c_m Z_m(\kappa _x,\kappa _y)}, &{} \text {if } \kappa _x^2 + \kappa _y^2 \leqslant \left( {\textit{NA}}_h / \lambda _h \right) ^2,\\ 0, &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$
(2.15)

and \( c = [c_1, \ldots ,c_{N_Z}]^T \) are the coefficients corresponding to the polynomials for some integer \(N_Z > 0\).

Fig. 5

Objective PSF used in our model, with no aberrations (maximum intensity projections taken in the same way as in Fig. 4)

Moreover, let \(h_{zb}\) be the blurred PSF obtained by convolving \(h_z\) with a Gaussian \(g_{\sigma }\) with width \(\sigma \):

$$\begin{aligned} h_{zb}(x,y,z; c, \sigma ) = h_z(x,y,z; c) * g_{\sigma }. \end{aligned}$$
(2.16)

This allows us to obtain a better approximation of the objective PSF [26]. The parameters c and \(\sigma \) are calculated by solving the least-squares problem

$$\begin{aligned} \min _{c,\sigma } \quad&\Vert h_{zb}(c,\sigma ) * b - h_{{\textit{bead}}} \Vert _2^2 \end{aligned}$$
(2.17)
$$\begin{aligned} \quad \text {subject to} \quad&c \in [-B_Z,B_Z]^{N_Z}, \sigma > 0, \end{aligned}$$
(2.18)

for some \(B_Z > 0\), where \(h_{{\textit{bead}}}\) is a bead image containing the aberrations that one wants to capture in the fitted detection PSF (for example the bead in Fig. 4b) and b is equal to one inside a sphere whose radius equals the radius of the bead (a parameter that is provided) and zero outside it. The convolution with b takes into account the non-negligible size of the beads used to generate the data.

In the implementation of the fitting procedure, we normalise both the bead image \(h_{{\textit{bead}}}\) and the simulated PSF \(h_{zb}\) by their maximum values before calculating their error, and we include two additional parameters, scaling and shift, to ensure a better fit of the intensity values (not shown here for simplicity of the presentation).
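A hedged sketch of the fitting step (2.17)–(2.18): given a measured bead stack, the bead indicator b and a routine implementing (2.14)–(2.16), the coefficients and the Gaussian width can be fitted with a bound-constrained least-squares solver. Here `simulate_psf(c, sigma)`, `h_bead` and `b` are assumed inputs (they are not specified beyond their definitions above), the normalisation follows the description in the previous paragraph, and the extra scaling and shift parameters are omitted.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import fftconvolve

def fit_zernike_psf(h_bead, b, simulate_psf, n_zernike=15, bound=3.0):
    """Fit Zernike coefficients c and Gaussian width sigma, cf. (2.17)-(2.18).

    simulate_psf(c, sigma) -> 3D array h_zb on the same grid as h_bead,
    implementing (2.14)-(2.16); h_bead is the measured bead image and b the
    bead indicator (one inside the bead sphere, zero outside).
    """
    h_bead = h_bead / h_bead.max()                 # normalise as in the text

    def residuals(params):
        c, sigma = params[:-1], params[-1]
        model = fftconvolve(simulate_psf(c, sigma), b, mode="same")
        return (model / model.max() - h_bead).ravel()

    x0 = np.concatenate([np.zeros(n_zernike), [1.0]])           # start: no aberration
    lb = np.concatenate([-bound * np.ones(n_zernike), [1e-3]])  # c in [-B_Z, B_Z], sigma > 0
    ub = np.concatenate([bound * np.ones(n_zernike), [10.0]])
    sol = least_squares(residuals, x0, bounds=(lb, ub))
    return sol.x[:-1], sol.x[-1]                   # fitted coefficients c and width sigma
```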

The best choice of the number of Zernike polynomial basis elements \(N_Z\) and of the bound \(B_Z\) on the coefficients c depends on the data \(h_{{\textit{bead}}}\) and on how accurate the fit needs to be for the deconvolution step. In general, at least the first 15 Zernike polynomials are needed to capture the main optical aberrations, such as spherical aberration and astigmatism. In our experiments, for the bead shown in Fig. 4b, we found that \(N_Z = 15\) and \(B_Z = 3\) are an appropriate choice. The Zernike polynomials used and the resulting coefficients are shown in Table 1 and in Fig. 6. The resulting PSF is the detection PSF model (2.2) and is shown in Fig. 7.

Finally, we note that a more thorough approach to choosing the Zernike basis is possible, by using multiple bead images for fitting the Zernike coefficients and comparing the quality of the fit for different values of \(N_Z\) and \(B_Z\). Alternatively, one could average multiple beads and perform the fitting procedure described above on the averaged bead. In both cases, it is worth mentioning that, since the optical aberrations vary within the sample image, we would only be able to fit the general shape of the PSF rather than the sharper features present in each bead, effectively fitting the low frequency information in the beads. In the end, this would achieve a similar effect to the Gaussian blur \(g_\sigma \) that we use in the fitting process. Moreover, one can employ more advanced techniques such as the ones described in [26, 29, 30] for estimating the pupil function, which can be plugged in to our image formation model. However, such an analysis focused on the pupil function is beyond the scope of the present work.

Table 1 The first 15 Zernike Polynomials (in polar coordinates) and their coefficients used in \(h_z\)
Fig. 6

The Zernike polynomials used in the PSF \(h_z\), with image range \([-1,1]\)

2.4 Convolution with Spatially Varying Kernel

Having introduced the image formation model for a light-sheet microscope (2.1) as well as the models for the individual PSFs, it is worth expanding on the source of spatial variability that we tackle in this work.

Fig. 7

Fitted PSF using Zernike polynomials. In panel c, we can see the benefits of using the Gaussian blur \(g_{\sigma }\) in obtaining an accurate approximation of the bead in a

First, note that with a change of variable \(w \rightarrow w + z\), we can rewrite the model (2.1) as:

$$\begin{aligned} f(x,y,z) = \iiint u(s,t,w)\, l(s,t,w + z)\, h(x-s,y-t,-w-z) \,\mathrm{d}s \,\mathrm{d}t \,\mathrm{d}w, \end{aligned}$$

so f(xyz) is the convolution of u(xyz) with the spatially varying kernel \(\tilde{h}(x,y,z; s,t,w)\):

$$\begin{aligned} f(x,y,z) = \iiint u(s,t,w) \tilde{h}(x,y,z; s,t,w) \mathrm{d}s \mathrm{d}t \mathrm{d}w, \end{aligned}$$
(2.19)

where

$$\begin{aligned} \tilde{h}(x,y,z; s,t,w) = l(s,t,w + z) h(x-s,y-t,-w-z) \end{aligned}$$
(2.20)

gives the expression of a kernel \(\tilde{h}(x,y,z;\cdot ,\cdot ,\cdot )\) which varies with its centre (xyz).

Therefore, the model presented in this section describes the spatial variation of the overall PSF of the system \(\tilde{h}\), as a consequence of the interaction between the light-sheet beam PSF l and the detection PSF h. We highlight that l and h are not themselves spatially varying. By the process described in Sect. 2.2, this interaction (and spatial variability) is modelled explicitly. This is in contrast to approaches such as [16], where the spatial variability of the PSF is learned from the data and encoded in the black-box mechanism of an artificial neural network.
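For illustration, the kernel (2.20) centred at a voxel (x, y, z) can be assembled directly from discretised versions of l and h. The sketch below uses circular shifts and an index reversal as stand-ins for the continuous translations and reflection, so it only approximates (2.20) near the image boundary; it is meant as a reading aid, not as part of the reconstruction algorithm.

```python
import numpy as np

def local_kernel(l, h, center):
    """Sample the spatially varying kernel (2.20) around the voxel `center`.

    l, h : discretised light-sheet and detection PSF, shape (Nx, Ny, Nz).
    Entry (s, t, w) of the result approximates l(s, t, w + z) * h(x - s, y - t, -w - z)
    for the chosen centre (x, y, z), up to circular wrap-around.
    """
    i, j, k = center
    l_shift = np.roll(l, -k, axis=2)                        # l(s, t, w + z)
    h_rev = h[::-1, ::-1, ::-1]                             # h(-s, -t, -w), up to wrap-around
    h_shift = np.roll(h_rev, (i, j, -k), axis=(0, 1, 2))    # h(x - s, y - t, -w - z)
    return l_shift * h_shift

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    l, h = rng.random((32, 32, 16)), rng.random((32, 32, 16))
    print(local_kernel(l, h, center=(16, 16, 4)).shape)
```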

In practice, a second source of spatial variability of the PSF may be the detection PSF h, due to the optical aberrations that can vary within the sample image. As described in Sect. 2.3, in this work we do not account for this potential spatial variability of h, and we fit one pupil function to the bead data.

3 Inverse Problem

3.1 Problem Statement

In this section we formally state the inverse problem of deblurring a light-sheet microscopy image. Let \(\Omega \subset {\mathbb {R}}^3\) be a bounded Lipschitz domain and let \(L :{{\,\mathrm{\mathrm {L}}\,}}^p(\Omega ) \rightarrow {{\,\mathrm{\mathrm {L}}\,}}^2(\Omega )\) be the forward operator defined by (2.1). Here \(1< p < 3/2\) is chosen such that the embedding of the \({{\,\mathrm{BV}\,}}\) space is compact [32]. Clearly, L is linear.

We consider the following inverse problem

$$\begin{aligned} Lu = \bar{f}, \end{aligned}$$
(3.1)

where \({\bar{f}} \in {{\,\mathrm{\mathrm {L}}\,}}^2(\Omega )\) is the exact (noise-free) data. As outlined in Sect. 2.1, the measurements in light microscopy are corrupted by a combination of Poisson and Gaussian noise. More precisely, the measurement is given by \(f = v + w\), where \(v \sim {\textit{Pois}}(\bar{f})\) is a Poisson distributed random variable with mean \({\bar{f}}\) and w represents additive zero-mean Gaussian noise. We do not model the Gaussian noise statistically and instead, in the spirit of (deterministic) variational regularisation, assume that \(w \in {{\,\mathrm{\mathrm {L}}\,}}^2(\Omega )\) is a fixed perturbation with \(\left\Vert w\right\Vert _{{{\,\mathrm{\mathrm {L}}\,}}^2} \leqslant \sigma _G\) for some known \(\sigma _G > 0\). Poisson noise is typically modelled using the Kullback–Leibler divergence as the data fidelity term [33, 34].

Let us give a brief justification of the inverse problem formulation described in this section [9, 35], from a Bayesian perspective. First, by using the Poisson and Gaussian probability density functions, we have that \(p(v|u) = \frac{(Lu)^v e^{-(Lu)}}{v!}\) and \(p(f|v) = \frac{1}{\sqrt{2\pi }\sigma _G}e^{-\frac{1}{2}\left( \frac{f-v}{\sigma _G}\right) ^2}\), and from Bayes’ theorem and conditional probability:

$$\begin{aligned} p(u,v|f) = \frac{p(f|v)p(v|u)p(u)}{p(f)}, \end{aligned}$$
(3.2)

where we used that \(p(f|u,v) = p(f|v)\). Moreover, we assume that the prior is a Gibbs distribution \(p(u) = e^{-\alpha {\mathcal {J}}(u)}\) for a convex functional \({\mathcal {J}}(u)\), which we will introduce later. To obtain a maximum a posteriori estimate of u and v (i.e. to maximise the posterior distribution p(u,v|f)), we minimise the negative log of (3.2) and, after discarding the denominator p(f) and using Stirling's approximation for the factorial, \(\log v! \approx v \log v - v\), we obtain the minimisation problem:

$$\begin{aligned} {{\,\mathrm{arg\,min}\,}}_{u,v} \alpha {\mathcal {J}}(u) + \frac{1}{2\sigma _G^2} \Vert f-v\Vert ^2 + v \log \frac{v}{Lu} + Lu - v, \end{aligned}$$
(3.3)

where the first term is the regularisation term and the remaining terms form the data fidelity term.

We will now describe the formal mathematical setting for (3.3) in the context of variational regularisation. This will allow us to show well-posedness of the model, establish convergence rates of the solution with respect to the noise in the measurements and to introduce a discrepancy principle for choosing the value of the regularisation parameter \(\alpha \).

First, note that in (3.3), we can perform the minimisation over v only on the data fidelity part of the objective, which can be written as an infimal convolution of the two separate Gaussian and Poisson fidelities. The infimal convolution of two functionals \(\varphi _1, \varphi _2\) on \({{\,\mathrm{\mathrm {L}}\,}}^2\) is defined as:

$$\begin{aligned} (\varphi _1 \square \varphi _2)(f) = \inf _{v \in {{\,\mathrm{\mathrm {L}}\,}}^2(\Omega )} \left\{ \varphi _1(f-v) + \varphi _2(v) \right\} , \end{aligned}$$
(3.4)

for \(f \in {{\,\mathrm{\mathrm {L}}\,}}^2(\Omega )\). Therefore, we define the following data fidelity term, as proposed in [9]:

$$\begin{aligned} \Phi ({\bar{f}},f) :=\inf _{v \in {{\,\mathrm{\mathrm {L}}\,}}^2_+(\Omega )} \left\{ \frac{1}{2}\left\Vert f-v\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^2} + D_{{\textit{KL}}}(v, {\bar{f}}) \right\} , \end{aligned}$$
(3.5)

for \(f \in {{\,\mathrm{\mathrm {L}}\,}}^2(\Omega )\) and \({\bar{f}} \in {{\,\mathrm{\mathrm {L}}\,}}^1_+(\Omega )\), where \({{\,\mathrm{\mathrm {L}}\,}}^{1,2}_+(\Omega )\) denotes the positive cone in \({{\,\mathrm{\mathrm {L}}\,}}^{1,2}(\Omega )\) (that is, functions \(f \in {{\,\mathrm{\mathrm {L}}\,}}^{1,2}(\Omega )\) such that \(f \geqslant 0\) a.e.) and \(D_{{\textit{KL}}}\) denotes the Kullback–Leibler divergence, which we define as follows

$$\begin{aligned} D_{{\textit{KL}}}(v, {\bar{f}}) :={\left\{ \begin{array}{ll} \displaystyle \int _\Omega \left( {\bar{f}}(x) - v(x) + v(x) \log \frac{v(x)}{{\bar{f}}(x)} \right) \mathrm{d}x, &{} \text {if } v, {\bar{f}} \geqslant 0 \text { a.e. and } \int _\Omega v \,\mathrm{d}x = \int _\Omega {\bar{f}} \,\mathrm{d}x = 1,\\ +\infty , &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
(3.6)

We note that \({\left|\int _\Omega v(x) \log v(x) \mathrm{d}x\right| < \infty }\) for \(v \in {{\,\mathrm{\mathrm {L}}\,}}^2\), since \({{\,\mathrm{\mathrm {L}}\,}}^2\) is continuously embedded into the Orlicz space \({{\,\mathrm{\mathrm {L}}\,}}\log {{\,\mathrm{\mathrm {L}}\,}}\) of functions of finite entropy [36, 37]

$$\begin{aligned}&{{\,\mathrm{\mathrm {L}}\,}}\log {{\,\mathrm{\mathrm {L}}\,}}(\Omega )\nonumber \\&\quad :=\{f \in {{\,\mathrm{\mathrm {L}}\,}}^1(\Omega ) :\int _\Omega \left|f(x)\right| (\log \left|f(x)\right|)_+ \mathrm{d}x < \infty \}, \end{aligned}$$
(3.7)

where \((\cdot )_+ = \max \{\cdot ,0\}\) denotes the positive part.

A proof of the following result can be found in [9], but we provide it here for readers’ convenience.

Proposition 3.1

(Exactness of the infimal convolution) For any \({\bar{f}} \in {{\,\mathrm{\mathrm {L}}\,}}^1_+\) such that \(\int _\Omega {\bar{f}} \mathrm{d}x = 1\), there exists a unique solution \(v^* = v^*({\bar{f}})\) of (3.5), that is, the infimal convolution is exact. Moreover, the functional \(\Phi ({\bar{f}}, \cdot ) :{{\,\mathrm{\mathrm {L}}\,}}^2 \rightarrow {\mathbb {R}}_+\cup \{+\infty \}\) is proper, convex and lower semicontinuous.

Proof

Fix \({\bar{f}} \in {{\,\mathrm{\mathrm {L}}\,}}^1_+\) such that \(\int _\Omega {\bar{f}} \mathrm{d}x = 1\). Then, (3.5) is the infimal convolution of the following two functionals on \({{\,\mathrm{\mathrm {L}}\,}}^2\)

$$\begin{aligned} \varphi (g)&:=\chi _{{{\,\mathrm{\mathrm {L}}\,}}^2_+}(g) + D_{{\textit{KL}}}(g, {\bar{f}}),\\ \psi (g)&:=\frac{1}{2}\left\Vert g\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^2}, \quad g \in {{\,\mathrm{\mathrm {L}}\,}}^2(\Omega ), \end{aligned}$$

where \(\chi \) denotes the characteristic function. The function \(\varphi \) is proper, convex, non-negative and lower semicontinuous, while \(\psi \) is proper, convex, lower semicontinuous and coercive. Therefore, by [38, Prop. 12.14], the infimal convolution is exact and is itself a proper, convex and lower semicontinuous function. Uniqueness follows from strict convexity of \(\psi \). \(\square \)

Now we turn our attention to the lower semicontinuity of the functional \(\Phi (\cdot ,f)\) in its first argument.

Proposition 3.2

(Lower semicontinuity) For any \(f \in {{\,\mathrm{\mathrm {L}}\,}}^2_+(\Omega )\) such that \(\int _\Omega f \mathrm{d}x = 1\) the functional \(\Phi (\cdot ,f) :L^1(\Omega ) \rightarrow {\mathbb {R}}_+\cup \{+\infty \}\) is lower semicontinuous.

Proof

We have

$$\begin{aligned} \Phi (g,f)&= \inf _{v \in {{\,\mathrm{\mathrm {L}}\,}}^2_+(\Omega )} \left\{ \frac{1}{2}\left\Vert f-v\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^2} + D_{{\textit{KL}}}(v, g) \right\} \\&= \frac{1}{2}\left\Vert f-v^*(g)\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^2} + D_{{\textit{KL}}}(v^*(g), g)\\&= \frac{1}{2}\left\Vert f-v^*(g)\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^2} \\&\quad + \int _{\Omega } v^*(g)(x) \left( \log v^*(g)(x) - \log g(x)\right) \mathrm{d}x + \chi _{{\mathcal {C}}}(g), \end{aligned}$$

where \(g \in {{\,\mathrm{\mathrm {L}}\,}}^1(\Omega )\), \(v^*(g)\) is as defined in Proposition 3.1 and \({\mathcal {C}} := \{g \in {{\,\mathrm{\mathrm {L}}\,}}^1_+(\Omega ) :\int _\Omega g \mathrm{d}x = 1\}\). The characteristic function is lower semicontinuous because \({\mathcal {C}}\) is closed in \({{\,\mathrm{\mathrm {L}}\,}}^1\) and the rest is lower semicontinuous by [9, Thm. 4.1]. \(\square \)

The following fact is easily established.

Proposition 3.3

The operator \(L :{{\,\mathrm{\mathrm {L}}\,}}^p(\Omega ) \rightarrow {{\,\mathrm{\mathrm {L}}\,}}^1(\Omega )\) defined in (2.1) is continuous for any \(p \geqslant 1\). Moreover, if l and h are non-negative and have overlapping support:

$$\begin{aligned} {{\,\mathrm{supp}\,}}(l) \cap {{\,\mathrm{supp}\,}}(h) \ne \varnothing , \end{aligned}$$

then \(\mathbf {1} \notin {\mathcal {N}}(L)\), where \(\mathbf {1}\) is the constant one function and \({\mathcal {N}}(L)\) is the null space of L.

Proof

By (2.1), we have

$$\begin{aligned}&Lu(x,y,z)\\&\quad = \int _{\Omega } l(s,t,w)h(x-s,y-t,w)u(s,t,w-z) \mathrm{d}\mu _{stw}, \end{aligned}$$

where \(\mathrm{d}\mu _{stw} := \mathrm{d}s \mathrm{d}t \mathrm{d}w\). Noting that the light-sheet PSF l and detection PSF h are bounded from above by some \(C_1, C_2 > 0\), we have that:

$$\begin{aligned} \left\Vert Lu\right\Vert _{{{\,\mathrm{\mathrm {L}}\,}}^1} \leqslant C_1 C_2 \int _\Omega \int _{\Omega } \left|u(s,t,w-z)\right| \mathrm{d}\mu _{stw} \,\mathrm{d}\mu _{xyz} \leqslant C(p) \left\Vert u\right\Vert _{{{\,\mathrm{\mathrm {L}}\,}}^p}, \end{aligned}$$

where in the last inequality we applied Hölder’s inequality and C(p) is a constant that depends on p (as well as \(C_{1,2}\) and \(\Omega \)). Hence, we obtain the first claim.

For the second claim, we observe that

$$\begin{aligned} L{\mathbf {1}}(x,y,z) = \int _{\Omega } l(s,t,w)h(x-s,y-t,w) \,\mathrm{d}\mu _{stw} \geqslant 0. \end{aligned}$$

Consider

$$\begin{aligned}&\int _\Omega L{\mathbf {1}}(x,y,z) \mathrm{d}\mu _{xyz}\\&\quad =\int _\Omega \int _{\Omega } l(s,t,w)h(x-s,y-t,w) \mathrm{d}\mu _{stw} \mathrm{d}\mu _{xyz} \end{aligned}$$

and let \(B_{l,h} \subset {{\,\mathrm{supp}\,}}{l} \cap {{\,\mathrm{supp}\,}}{h}\) be a set of positive measure. Then, since both l and h are non-negative on \(\Omega \), from the last equality above we have that:

$$\begin{aligned} \int _\Omega L{\mathbf {1}}(x,y,z) \,\mathrm{d}\mu _{xyz} > 0, \end{aligned}$$

so \(L{\mathbf {1}} \ne 0\), which proves the second claim. \(\square \)

Remark 3.1

Our setting with the measured data \(f \in {{\,\mathrm{\mathrm {L}}\,}}^2(\Omega )\) differs slightly from [9], where \(f \in {{\,\mathrm{\mathrm {L}}\,}}^\infty (\Omega )\) was assumed.

We will consider the following variational regularisation problem

$$\begin{aligned} \min _{u \in {{\,\mathrm{\mathrm {L}}\,}}^p_+(\Omega )} \Phi (f,Lu) + \alpha {\mathcal {J}}(u), \end{aligned}$$
(3.8)

where \(\Phi \) is the infimal convolution fidelity as defined in (3.5), \({\mathcal {J}}:{{\,\mathrm{\mathrm {L}}\,}}^p \rightarrow {\mathbb {R}}_+\cup \{+\infty \}\) is a regularisation functional, \(\alpha \in {\mathbb {R}}_+\) is a regularisation parameter and \(1< p < 3/2\). Without loss of generality, we assume that \(\int _\Omega {\bar{f}} \mathrm{d}x = 1\).

As the regulariser \({\mathcal {J}}\), we choose the total variation [39]

$$\begin{aligned} {{\,\mathrm{TV}\,}}(u) :=\sup \left\{ \int _\Omega u \, {{\,\mathrm{div}\,}}\varphi \, \mathrm{d}x :\varphi \in C_c^\infty (\Omega ;{\mathbb {R}}^3), \ \left\Vert \varphi \right\Vert _\infty \leqslant 1 \right\} . \end{aligned}$$

By the Rellich–Kondrachov theorem, the space

$$\begin{aligned} {{\,\mathrm{BV}\,}}(\Omega )&:=\{u \in {{\,\mathrm{\mathrm {L}}\,}}^1(\Omega ) :{{\,\mathrm{TV}\,}}(u) < \infty \},\\ \left\Vert u\right\Vert _{{{\,\mathrm{BV}\,}}}&:=\left\Vert u\right\Vert _{{{\,\mathrm{\mathrm {L}}\,}}^1} + {{\,\mathrm{TV}\,}}(u), \end{aligned}$$

is compactly embedded into \({{\,\mathrm{\mathrm {L}}\,}}^p(\Omega )\) for \(1 \leqslant p < 3/2\) and continuously embedded into \({{\,\mathrm{\mathrm {L}}\,}}^{3/2}(\Omega )\) since \(\Omega \subset {\mathbb {R}}^3\). Therefore, we consider \({{\,\mathrm{TV}\,}}:{{\,\mathrm{\mathrm {L}}\,}}^p \rightarrow {\mathbb {R}}_+\cup \{+\infty \}\), extended by \(+\infty \) outside \({{\,\mathrm{BV}\,}}(\Omega )\).

We will denote by \(u^\dagger _{{{\,\mathrm{TV}\,}}}\) the \({{\,\mathrm{TV}\,}}\)-minimising solution of (3.1), i.e. a solution that satisfies

$$\begin{aligned} u^\dagger _{{{\,\mathrm{TV}\,}}} \in {{\,\mathrm{arg\,min}\,}}\left\{ {{\,\mathrm{TV}\,}}(u) :u \in {{\,\mathrm{\mathrm {L}}\,}}^p(\Omega ), \ Lu = {\bar{f}} \right\} . \end{aligned}$$

The existence of such a solution is obtained by standard arguments [40]. We will make the reasonable assumption that the \({{\,\mathrm{TV}\,}}\)-minimising solution is positive, i.e. \(u^\dagger _{{{\,\mathrm{TV}\,}}} > 0\) a.e. Due to the positivity of the kernels involved in (2.1), it is clear that \(u^\dagger _{{{\,\mathrm{TV}\,}}} > 0\) implies \({\bar{f}} = L u^\dagger _{{{\,\mathrm{TV}\,}}} > 0\).

Since by Proposition 3.1 the infimal convolution (3.5) is exact, we can equivalently rewrite (3.8) as follows

$$\begin{aligned} \min _{\begin{array}{c} u \in {{\,\mathrm{\mathrm {L}}\,}}^p_+(\Omega ) \\ v \in {{\,\mathrm{\mathrm {L}}\,}}^2_+(\Omega ) \end{array}} \frac{1}{2}\left\Vert f-v\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^2} + D_{{\textit{KL}}}(v, Lu) + \alpha {\mathcal {J}}(u). \end{aligned}$$
(3.9)

Existence of minimisers in (3.8) and (3.9) is obtained by standard arguments [9, Thm. 4.1].

Proposition 3.4

Each of the optimisation problems (3.8) and (3.9) admits a unique minimiser.

We will also need the following coercivity result.

Proposition 3.5

The functional \(\Phi (f,\cdot ) :{{\,\mathrm{\mathrm {L}}\,}}^1(\Omega ) \rightarrow {\mathbb {R}}_+\cup \{+\infty \}\) is strongly coercive with exponent 2, i.e. there exists a constant \(C>0\) such that

$$\begin{aligned} \Phi (f,g) \geqslant C \left\Vert f-g\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^1} \quad \text {for all } g \in {{\,\mathrm{\mathrm {L}}\,}}^1(\Omega ). \end{aligned}$$
Proof

Using Pinsker’s inequality for the Kullback–Leibler divergence, we get

$$\begin{aligned} D_{{\textit{KL}}}(v, g) \geqslant C_P \left\Vert v-g\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^1} \end{aligned}$$

for some \(C_P>0\). Note that Pinsker’s inequality assumes that \(v, g \geqslant 0\) and \(\int _\Omega v \,\mathrm{d}x = \int _\Omega g \,\mathrm{d}x = 1\), which we ensure by definition in (3.6).

Now, using the inequality \((a+b)^2 \leqslant 2a^2 + 2b^2\) that holds for all \(a,b \in {\mathbb {R}}\) and the triangle inequality, we obtain, for any \(v \in {{\,\mathrm{\mathrm {L}}\,}}^2_+(\Omega )\),

$$\begin{aligned} \left\Vert f-g\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^1}&\leqslant 2\left\Vert f-v\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^1} + 2\left\Vert v-g\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^1}\\&\leqslant 2\left|\Omega \right| \left\Vert f-v\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^2} + \frac{2}{C_P} D_{{\textit{KL}}}(v, g)\\&\leqslant \max \left\{ 4\left|\Omega \right|, \frac{2}{C_P}\right\} \left( \frac{1}{2}\left\Vert f-v\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^2} + D_{{\textit{KL}}}(v, g) \right) , \end{aligned}$$

and taking the infimum over v yields the claim.

\(\square \)

3.2 Convergence Rates

Our aim in this section is to establish convergence rates of minimisers of (3.8) as the amount of noise in the data decreases. But first we need to specify what we mean by the amount of noise in our setting.

We argue as follows. Since the noise in the measurement is generated sequentially, i.e. photo-electrons are first counted by the sensor, leading to Poisson noise, and are later collected by the electronic circuit, generating additive Gaussian noise, for any exact data \({\bar{f}}\) there exists \({\bar{z}} \sim {\textit{Pois}}({\bar{f}})\) such that \(D_{{\textit{KL}}}({\bar{z}}, {\bar{f}}) \leqslant \gamma \), where \(\gamma >0\) depends on the exposure time t and vanishes as \(t \rightarrow \infty \) [33]. Further, there exists \(w \in {{\,\mathrm{\mathrm {L}}\,}}^2(\Omega )\) with \(\left\Vert w\right\Vert _{{{\,\mathrm{\mathrm {L}}\,}}^2} \leqslant \sigma _G\) such that \(f = {\bar{z}} + w\). Since \(v = {\bar{z}}\) is feasible in (3.5), we get the following upper bound on the fidelity term (3.5) evaluated at the measurement f and the exact data \({\bar{f}}\)

$$\begin{aligned} \Phi ({\bar{f}}, f) \leqslant \frac{1}{2}\left\Vert f-{\bar{z}}\right\Vert ^2_{{{\,\mathrm{\mathrm {L}}\,}}^2} + D_{{\textit{KL}}}({\bar{z}}, {\bar{f}}) \leqslant \frac{\sigma _G^2}{2} + \gamma . \end{aligned}$$
(3.10)

The standard tool for establishing convergence rates are Bregman distances associated with the regulariser \({\mathcal {J}}\). We briefly recall the necessary definitions.

Definition 3.1

Let X be a Banach space and \({\mathcal {J}}:X \rightarrow {\mathbb {R}}_+\cup \{+\infty \}\) a proper convex functional. The generalised Bregman distance between \(x,y \in X\) corresponding to the subgradient \(q \in \partial {\mathcal {J}}(y)\) is defined as follows

$$\begin{aligned} D_{\mathcal {J}}^q(x,y) :={\mathcal {J}}(x) - {\mathcal {J}}(y) - \langle q,x-y \rangle . \end{aligned}$$

Here \(\partial {\mathcal {J}}(y)\) denotes the subdifferential of \({\mathcal {J}}\) at \(y \in X\). If, in addition, \(p \in \partial {\mathcal {J}}(x)\), the symmetric Bregman distance between \(x,y \in X\) corresponding to the subgradients p, q is defined as follows

$$\begin{aligned} D_{\mathcal {J}}^{p,q}(x,y) :=D_{\mathcal {J}}^q(x,y) + D_{\mathcal {J}}^p(y,x) = \langle p-q,x-y \rangle . \end{aligned}$$

To obtain convergence rates, an additional assumption on the regularity of the \({{\,\mathrm{TV}\,}}\)-minimising solution, called the source condition, needs to be made. We use the following variant [41].

Assumption 3.1

(Source condition) There exists an element \(\mu ^\dagger \in L^\infty (\Omega )\) such that

$$\begin{aligned} q^\dagger := L^*\mu ^\dagger \in \partial {\mathcal {J}}(u^\dagger _{{{\,\mathrm{TV}\,}}}). \end{aligned}$$

3.2.1 Parameter Choice Rules

Let us summarise what we know about the fidelity function \(\Phi \) as defined in (3.5), the regularisation functional \({{\,\mathrm{TV}\,}}\) and the forward operator L:

  • \(\Phi (f,\cdot )\) is proper, convex and coercive (Proposition 3.5) in \({{\,\mathrm{\mathrm {L}}\,}}^1(\Omega )\);

  • \(\Phi (\cdot ,\cdot )\) is jointly convex [42] and lower semicontinuous (Propositions 3.1 and 3.2);

  • \(\Phi (f,g) = 0\) if and only if \(f=g\);

  • \({{\,\mathrm{TV}\,}}:{{\,\mathrm{\mathrm {L}}\,}}^1(\Omega ) \rightarrow {\mathbb {R}}\cup \{+\infty \}\) is proper, convex and lower semicontinuous [32] and its null space is given by \({\mathcal {N}}({{\,\mathrm{TV}\,}}) = {{\,\mathrm{span}\,}}\{{\mathbf {1}}\}\), where \({\mathbf {1}}\) denotes the constant one function;

  • \({{\,\mathrm{TV}\,}}\) is coercive on the complement of its null space in \({{\,\mathrm{\mathrm {L}}\,}}^1(\Omega )\) [32];

  • \(L :{{\,\mathrm{\mathrm {L}}\,}}^p(\Omega ) \rightarrow {{\,\mathrm{\mathrm {L}}\,}}^1(\Omega )\) is continuous and \({\mathcal {N}}({{\,\mathrm{TV}\,}})\cap {\mathcal {N}}(L) = \{0\}\) (Proposition 3.3).

Using these facts and slightly modifying the proofs from [43], we obtain the following

Theorem 3.2

(Convergence rates under a priori parameter choice rules) Let the assumptions made in Sect. 3.1 hold and let the source condition (Assumption 3.1) be satisfied at the \({{\,\mathrm{TV}\,}}\)-minimising solution \(u^\dagger _{{{\,\mathrm{TV}\,}}}\). Let \(u_{\sigma _G,\gamma }\) be a solution of (3.8) and let \(\alpha \) be chosen such that

$$\begin{aligned} \alpha (\sigma _G,\gamma ) = O(\sigma _G+\sqrt{\gamma }). \end{aligned}$$

Then,

$$\begin{aligned} D^{q^\dagger }_{{{\,\mathrm{TV}\,}}} (u_{\sigma _G,\gamma },u^\dagger _{{{\,\mathrm{TV}\,}}}) = O(\sigma _G+\sqrt{\gamma }), \end{aligned}$$

where \(q^\dagger = L^*\mu ^\dagger \) is the subgradient from Assumption 3.1 and \(\sigma _G,\gamma >0\) are as defined in (3.10).

Proof

The proof is similar to [43, Thm. 3.9]. \(\square \)

In a similar manner, we can obtain convergence rates for an a posteriori parameter choice rule known as the discrepancy principle [44,45,46]. Let f be the noisy data and \(\delta >0\) the amount of noise, such that \(\Phi ({\bar{f}}, f) \leqslant \delta \), where \(\Phi \) is as defined in (3.5). In our case, \(\delta = \frac{\sigma _G^2}{2} + \gamma \) by (3.10). The discrepancy principle amounts to selecting \(\alpha =\alpha (f,\delta )\) such that

$$\begin{aligned} \alpha = \sup \{\alpha >0 :\Phi (Lu^\alpha ,f) \leqslant \tau \delta \}, \end{aligned}$$
(3.11)

where \(u^\alpha \) is the regularised solution corresponding to the regularisation parameter \(\alpha \) and \(\tau >1\) is a parameter.
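One simple way to realise (3.11) in practice is to start from a large value of α and decrease it geometrically until the fidelity drops below τδ, keeping the first admissible value as an approximation of the supremum. The sketch below follows this strategy; `solve_variational(alpha)` and `fidelity(Lu, f)`, standing for a solver of (3.8) and an evaluation of Φ, are assumptions of the example rather than routines defined in this paper.

```python
def discrepancy_principle(solve_variational, fidelity, f, delta,
                          tau=1.1, alpha0=1.0, q=0.5, max_steps=20):
    """Select alpha with Phi(L u_alpha, f) <= tau * delta, cf. (3.11).

    solve_variational(alpha) -> (u_alpha, Lu_alpha), a minimiser of (3.8) and its image.
    fidelity(Lu, f)          -> value of the infimal-convolution fidelity Phi in (3.5).
    delta = sigma_G**2 / 2 + gamma is the estimated noise level, cf. (3.10).
    Returns (alpha, u_alpha), or None if no admissible alpha was found.
    """
    alpha = alpha0
    for _ in range(max_steps):
        u_alpha, Lu_alpha = solve_variational(alpha)
        if fidelity(Lu_alpha, f) <= tau * delta:
            # First admissible value on the decreasing grid: approximately the sup in (3.11).
            return alpha, u_alpha
        alpha *= q     # fidelity too large: regularisation too strong, decrease alpha
    return None
```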

Again, slightly modifying the proofs from [43], we obtain the following

Theorem 3.3

(Convergence rates under the discrepancy principle) Let the assumptions made in Sect. 3.1 hold and let the source condition (Assumption 3.1) be satisfied at the \({{\,\mathrm{TV}\,}}\)-minimising solution \(u^\dagger _{{{\,\mathrm{TV}\,}}}\). Let \(u_{\sigma _G,\gamma }\) be a solution of (3.8) with \(\alpha \) chosen according to the discrepancy principle (3.11). Then,

$$\begin{aligned} D^{q^\dagger }_{{{\,\mathrm{TV}\,}}} (u_{\sigma _G,\gamma },u^\dagger _{{{\,\mathrm{TV}\,}}}) = O(\sigma _G+\sqrt{\gamma }), \end{aligned}$$

where \(q^\dagger = L^*\mu ^\dagger \) is the subgradient from Assumption 3.1 and \(\sigma _G,\gamma >0\) are as defined in (3.10).

Proof

The proof is similar to [43, Thm. 4.10]. \(\square \)

4 Solving the Minimisation Problem

4.1 PDHG for Infimal Convolution Model

In practice, due to the joint convexity of the Kullback–Leibler divergence, we solve the minimisation problem (3.9), where we treat the reconstructed sample u and the Gaussian denoised image v jointly and, in addition, we impose lower and upper bound constraints on u and v by including the corresponding characteristic functions in the objective:

$$\begin{aligned}&\min _{u,v} \alpha {{\,\mathrm{TV}\,}}(u) + \frac{1}{2\sigma _G^2} \Vert f-v\Vert _2^2 + D_{{\textit{KL}}}(v,Lu) \nonumber \\&\quad + \chi _{[l_1,l_2]^{2N}}([u, v]^T). \end{aligned}$$
(4.1)

Note that the objective function in (4.1) is a sum of convex functions (the Kullback–Leibler divergence \(D_{{\textit{KL}}}\) is jointly convex [47]), and therefore is itself convex. We then write the problem (4.1) as:

$$\begin{aligned} \min _{w} G(w) + \sum _{i=1}^m H_i(L_i w), \end{aligned}$$
(4.2)

where we solve for \(w = \begin{bmatrix}u \\ v\end{bmatrix}\), \(m=3\) and:

$$\begin{aligned}&G(w) = \chi _{[l_1,l_2]^{2N}}\left( \begin{bmatrix} u\\ v \end{bmatrix}\right) , \end{aligned}$$
(4.3)
$$\begin{aligned}&H_1(\cdot ) = \frac{1}{2\sigma _G^2} \left\| \cdot - f \right\| _2^2,&L_1 = \begin{bmatrix} 0&1 \end{bmatrix}, \end{aligned}$$
(4.4)
$$\begin{aligned}&H_2(w) = D_{{\textit{KL}}}(v, u),&L_2 = \begin{bmatrix} L &{} 0 \\ 0 &{} 1 \end{bmatrix}, \end{aligned}$$
(4.5)
$$\begin{aligned}&H_3(\cdot ) = \alpha \left\| \cdot \right\| _1,&L_3 = \begin{bmatrix} \nabla _x &{} 0 \\ \nabla _y &{} 0 \\ \nabla _z &{} 0 \end{bmatrix}, \end{aligned}$$
(4.6)

where L is the forward operator corresponding to the image formation model from Sect. 2.1.

Rather than solving the problem (4.2) directly, a common approach is to reformulate it as a saddle point problem using the Fenchel conjugate \(G^*(y) = \sup _z \langle z,y \rangle - G(z)\). For proper, convex and lower semicontinuous function G, we have that \(G^{**} = G\), so (4.2) can be written as the saddle point problem

$$\begin{aligned} \min _w \sup _{y_1,\ldots ,y_m} G(w) + \sum _{i=1}^m \langle y_i, L_i w \rangle - H_i^*(y_i), \end{aligned}$$
(4.7)

and by swapping the \(\min \) and the \(\sup \) and applying the definition of the convex conjugate \(G^*\), one obtains the dual of (4.2):

$$\begin{aligned} \max _{y_1,\ldots ,y_m} -G^*\left( -\sum _{i=1}^m L_i^* y_i \right) - \sum _{i=1}^m H_i^*(y_i). \end{aligned}$$
(4.8)

The saddle point problem (4.7) is commonly solved using the primal–dual hybrid gradient (PDHG) algorithm [6, 10, 11], and by doing so, both the primal problem (4.2) and the dual (4.8) are solved. We apply the variant of PDHG from [48], which accounts for the sum of composite terms \(H_i \circ L_i\). Given an initial guess for \((w_0,y_{1,0},\ldots ,y_{m,0})\) and the parameters \(\sigma , \tau > 0\), and \(\rho \in [\epsilon ,2-\epsilon ]\) for some \(\epsilon > 0\), each iteration consists of the following steps:

$$\begin{aligned}&1.\quad \tilde{w}_{k+1} := {{\,\mathrm{prox}\,}}_{\tau G} \left( w_k - \tau \sum _{i=1}^m L_i^* y_{i,k}\right) , \nonumber \\&2.\quad w_{k+1} := \rho \tilde{w}_{k+1} + (1-\rho ) w_k, \nonumber \\&3.\quad \forall i = 1,\ldots ,m: \nonumber \\&\quad \qquad \tilde{y}_{i,k+1} := {{\,\mathrm{prox}\,}}_{\sigma H_i^*} \left( y_{i,k} + \sigma L_i(2 \tilde{w}_{k+1} - w_k) \right) , \nonumber \\&4.\quad \forall i = 1,\ldots ,m: \nonumber \\&\quad \qquad y_{i,k+1} := \rho \tilde{y}_{i,k+1} + (1-\rho )y_{i,k}, \end{aligned}$$
(4.9)

where for a proper, lower semicontinuous, convex function G, \({{\,\mathrm{prox}\,}}_{\tau G}\) is its proximal operator, defined as:

$$\begin{aligned} {{\,\mathrm{prox}\,}}_{\tau G}(y) := {{\,\mathrm{arg\,min}\,}}_x \left\{ \frac{1}{2\tau } \Vert x - y\Vert _2^2 + G(x) \right\} . \end{aligned}$$
(4.10)

The iterates \((w_k)_{k \in \mathbb {N}}\) and \((y_{i,k})_{k \in \mathbb {N}}\) (\(i=1,\ldots ,m\)) are shown to converge if the parameters \(\sigma \) and \(\tau \) are chosen such that \(\sigma \tau \left\| \sum _{i=1}^m L_i^* L_i \right\| \leqslant 1\) (see [48, Theorem 5.3]). In step 3 in (4.9), we use Moreau’s identity to obtain \({{\,\mathrm{prox}\,}}_{\sigma H^*_i}\) from \({{\,\mathrm{prox}\,}}_{H_i/\sigma }\):

$$\begin{aligned} {{\,\mathrm{prox}\,}}_{\sigma H_i^*}(y) + \sigma {{\,\mathrm{prox}\,}}_{H_i/{\sigma }} (y/{\sigma }) = y. \end{aligned}$$
(4.11)
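A generic sketch of the iteration (4.9) is given below; it mirrors the four steps above for an arbitrary number of composite terms. The arguments `prox_G`, `prox_Hconj`, `ops` and `ops_adj` are assumptions of the example (the proximal operator of G, the proximal operators of the \(H_i^*\), e.g. obtained via Moreau's identity (4.11), and the operators \(L_i\) with their adjoints); the step sizes are assumed to satisfy the convergence condition quoted above.

```python
import numpy as np

def pdhg(prox_G, prox_Hconj, ops, ops_adj, w0, sigma, tau, rho=1.0, n_iter=200):
    """Generic PDHG iteration (4.9) for min_w G(w) + sum_i H_i(L_i w).

    prox_G(x, tau)          : proximal operator of tau * G
    prox_Hconj[i](y, sigma) : proximal operator of sigma * H_i^*
    ops[i], ops_adj[i]      : the operator L_i and its adjoint
    """
    w = w0.copy()
    y = [np.zeros_like(op(w0)) for op in ops]          # dual variables y_i
    for _ in range(n_iter):
        # Step 1: primal proximal step.
        w_tilde = prox_G(w - tau * sum(adj(yi) for adj, yi in zip(ops_adj, y)), tau)
        # Step 2: primal relaxation.
        w_new = rho * w_tilde + (1 - rho) * w
        # Steps 3-4: dual proximal steps and relaxation (w is still the old iterate here).
        for i, (op, prox) in enumerate(zip(ops, prox_Hconj)):
            y_tilde = prox(y[i] + sigma * op(2 * w_tilde - w), sigma)
            y[i] = rho * y_tilde + (1 - rho) * y[i]
        w = w_new
    return w
```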

As a stopping criterion, one can use the primal–dual gap, i.e. the difference between the primal objective cost at the current iterate and the dual objective cost at the current (dual) iterate:

$$\begin{aligned}&D_{pd}(w,y_1,\ldots ,y_m) \nonumber \\&\quad =G(w) + \sum _{i=1}^m H_i(L_i w) + G^*(-\sum _{i=1}^m L_i^*y_i) + \sum _{i=1}^m H_i^*(y_i) \end{aligned}$$
(4.12)

Due to strong duality, optimality is reached when the primal–dual gap is zero, so a practical stopping criterion is when the gap reaches a certain threshold set in advance.

Lastly, note that the optimisation is performed jointly over both u and v, which introduces a difficulty for the term \(H_2(L_2 w)\) in Step 3 above, as this requires the proximal operator of the joint Kullback–Leibler divergence \(D_{{\textit{KL}}}(u,v)\). Similarly, the computation of the primal–dual gap in (4.12) requires the convex conjugate of the joint Kullback–Leibler divergence. We describe the details of these computations in Sects. 4.2 and 4.3, respectively.

4.2 Computing the Proximal Operator of the Joint Kullback–Leibler Divergence

When writing the optimisation problem in the form (4.2), it is common that the functions G and \(H_i\) (\(i=1,\ldots ,m\)) are “simple”, meaning that their proximity operators have a closed-form solution or can be easily computed with high precision. This is certainly true for G and \(H_1\), but not obvious for the joint Kullback–Leibler divergence.

First, for discrete images \(u=[u_1,\ldots , u_N]^T, v=[v_1,\ldots ,v_N]^T\), the definition (3.6) becomes:

$$\begin{aligned} D_{{\textit{KL}}}(v,u)= \sum _{j=1}^N u_j - v_j + v_j \log \frac{v_j}{u_j} \end{aligned}$$
(4.13)

and then:

$$\begin{aligned} {{\,\mathrm{prox}\,}}_{\gamma D_{{\textit{KL}}}}(u^*,v^*)&= {{\,\mathrm{arg\,min}\,}}_{u,v} D_{{\textit{KL}}}(v,u) + \frac{1}{2\gamma } \left\| \begin{bmatrix} u\\ v \end{bmatrix} - \begin{bmatrix} u^*\\ v^* \end{bmatrix} \right\| ^2_2 \nonumber \\&= {{\,\mathrm{arg\,min}\,}}_{u,v} \sum _{j=1}^N u_j - v_j + v_j \log \frac{v_j}{u_j} \nonumber \\&\quad + \frac{1}{2\gamma } \left[ (u_j-u_j^*)^2 + (v_j-v_j^*)^2 \right] \nonumber \\&= \sum _{j=1}^N {{\,\mathrm{arg\,min}\,}}_{u_j,v_j} \Phi (u_j,v_j), \end{aligned}$$
(4.14)

where we define the function \(\Phi \) as:

$$\begin{aligned} \Phi (u_j,v_j)&:= u_j - v_j + v_j \log \frac{v_j}{u_j}\nonumber \\&\quad + \frac{1}{2\gamma } [(u_j-u_j^*)^2 + (v_j-v_j^*)^2]. \end{aligned}$$
(4.15)

To find the minimiser of \(\Phi (u_j,v_j)\), we let its gradient be equal to zero:

$$\begin{aligned}&{\left\{ \begin{array}{ll} \partial _{u_j} \Phi (u_j,v_j) = 0 \\ \partial _{v_j} \Phi (u_j,v_j) = 0 \end{array}\right. }\nonumber \\&\quad \iff {\left\{ \begin{array}{ll} 1 - \frac{v_j}{u_j} + \frac{1}{\gamma }(u_j-u_j^*) = 0\\ \log v_j - \log u_j + \frac{1}{\gamma }(v_j-v_j^*) = 0 \end{array}\right. }. \end{aligned}$$
(4.16)

In the second equation, we write \(u_j\) as a function of \(v_j\), which we substitute in the first equation to obtain:

$$\begin{aligned} {\left\{ \begin{array}{ll} 1 - e^{-\frac{1}{\gamma }(v_j-v_j^*)} + \frac{1}{\gamma } \left( v_j e^{\frac{1}{\gamma }(v_j-v_j^*)} - u_j^*\right) = 0 \\ u_j = v_j e^{\frac{1}{\gamma }(v_j-v_j^*)} \end{array}\right. }. \end{aligned}$$
(4.17)

The first equation is then solved using Newton’s method, where the iteration is given by:

$$\begin{aligned} v_j^{(k+1)} = v_j^{(k)} - \frac{ \gamma - \gamma e^{-\frac{1}{\gamma }(v_j^{(k)}-v_j^*)} + v_j^{(k)} e^{\frac{1}{\gamma } (v_j^{(k)} - v_j^*)} - u_j^* }{ e^{-\frac{1}{\gamma }(v_j^{(k)}-v_j^*)} + (1 + \frac{1}{\gamma } v_j^{(k)}) e^{\frac{1}{\gamma }(v_j^{(k)}-v_j^*)} }. \end{aligned}$$
(4.18)
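The Newton solve (4.17)–(4.18) decouples across pixels and vectorises directly; the sketch below is a minimal NumPy version with a fixed number of unsafeguarded Newton steps and a small positive clipping of the iterate, and is not the implementation used for the experiments.

```python
import numpy as np

def prox_joint_kl(u_star, v_star, gamma, n_newton=30):
    """Proximal operator of gamma * D_KL(v, u), evaluated jointly, cf. (4.14)-(4.18).

    u_star, v_star : arrays of equal shape (the point the prox is evaluated at).
    Returns the per-pixel minimisers (u, v) of (4.15).
    """
    v = np.maximum(v_star, 1e-6)                  # positive initial guess
    for _ in range(n_newton):
        e_plus = np.exp((v - v_star) / gamma)
        e_minus = np.exp(-(v - v_star) / gamma)
        # Newton step (4.18): residual of the first equation in (4.17), times gamma.
        num = gamma - gamma * e_minus + v * e_plus - u_star
        den = e_minus + (1.0 + v / gamma) * e_plus
        v = np.maximum(v - num / den, 1e-12)      # keep the iterate positive
    u = v * np.exp((v - v_star) / gamma)          # back-substitution, second eq. of (4.17)
    return u, v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gamma = 0.1
    u0, v0 = rng.random((4, 4)) + 0.5, rng.random((4, 4)) + 0.5
    u, v = prox_joint_kl(u0, v0, gamma)
    # Residual of the first optimality condition in (4.16); should be close to zero.
    print(np.abs(1 - v / u + (u - u0) / gamma).max())
```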
Fig. 8

Ground truth (top row) and measured images (middle row), shown using maximum intensity projections in each axis direction, except for the tissue images, where slices in each axis direction are shown. The axes of the plots are shown in the panel in the bottom row, with the missing axis in each panel being the direction in which the maximum intensity projection (or slice) is taken

4.3 Computing the Convex Conjugate of the Joint Kullback–Leibler Divergence

We compute the convex conjugate of the discrete joint Kullback–Leibler divergence \(D_{{\textit{KL}}}(v,u)\) in (4.13) for \(u,v \in [l_1,l_2]^N\):

$$\begin{aligned} D^*_{{\textit{KL}}}(v^*,u^*)&= \sup _{v,u \in [l_1,l_2]^N} \left\langle \begin{bmatrix} u\\ v \end{bmatrix}, \begin{bmatrix} u^*\\ v^* \end{bmatrix} \right\rangle - D_{{\textit{KL}}}(v,u) \nonumber \\&= \sup _{v,u \in [l_1,l_2]^N } \sum _{j=1}^N u_j u_j^* + v_j v_j^*\nonumber \\&\quad - u_j + v_j - v_j \log \frac{v_j}{u_j} \nonumber \\&= \sum _{j=1}^N \sup _{v_j,u_j \in [l_1,l_2]} \Psi (v_j,u_j), \end{aligned}$$
(4.19)

where \(\Psi \) is defined as:

$$\begin{aligned} \Psi (v_j,u_j) := u_j u_j^* + v_j v_j^* - u_j + v_j - v_j \log \frac{v_j}{u_j}. \end{aligned}$$
(4.20)

To solve the optimisation problem on the last line of (4.19), we write the KKT conditions (where we use \(u, v\) instead of \(u_j, v_j\) to simplify the notation):

$$\begin{aligned} -\nabla \Psi (v,u)&+ \sum _{i=1}^4 \mu _i \nabla g_i(v,u) = 0, \end{aligned}$$
(4.21a)
$$\begin{aligned} g_i(v,u)&\leqslant 0, \quad \forall i=1,\ldots ,4, \end{aligned}$$
(4.21b)
$$\begin{aligned} \mu _i&\geqslant 0, \quad \forall i=1,\ldots ,4, \end{aligned}$$
(4.21c)
$$\begin{aligned} \mu _i g_i(v,u)&= 0, \quad \forall i=1,\ldots ,4. \end{aligned}$$
(4.21d)

where the functions \(g_i\) correspond to the bound constraints:

$$\begin{aligned} g_1(v,u)&= u-l_2; \end{aligned}$$
(4.22a)
$$\begin{aligned} g_2(v,u)&= v-l_2; \end{aligned}$$
(4.22b)
$$\begin{aligned} g_3(v,u)&= -u+l_1; \end{aligned}$$
(4.22c)
$$\begin{aligned} g_4(v,u)&= -v+l_1; \end{aligned}$$
(4.22d)

Noting that (4.21a) is equivalent to:

$$\begin{aligned}&-u^* + 1 - \frac{v}{u} + \mu _1 - \mu _3 = 0, \end{aligned}$$
(4.23a)
$$\begin{aligned}&-v^* + \log v - \log u + \mu _2 - \mu _4 = 0, \end{aligned}$$
(4.23b)

we solve (4.23a)–(4.23b) by a case analysis based on the complementarity conditions (4.21d), distinguishing whether each Lagrange multiplier \(\mu _i\) is zero or nonzero.
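The resulting case-by-case expression is somewhat lengthy and easy to get wrong; one way to sanity-check it is to compare, pixel by pixel, against a brute-force evaluation of the supremum in (4.19) on a fine grid over \([l_1,l_2]^2\). A minimal sketch of such a check follows; the grid resolution and the illustrative bounds, with \(l_1>0\) so that the logarithm is finite, are our own choices.

```python
import numpy as np

def conjugate_bruteforce(u_star, v_star, l1=1e-3, l2=1.0, n=400):
    """Approximate the per-pixel conjugate sup_{u,v in [l1,l2]} Psi(v,u) from (4.20)
    by evaluating Psi on an n-by-n grid. Intended only as a numerical check of the
    closed-form KKT-based expression, not for use inside the algorithm."""
    u = np.linspace(l1, l2, n)[:, None]   # grid in u (rows)
    v = np.linspace(l1, l2, n)[None, :]   # grid in v (columns)
    psi = u * u_star + v * v_star - u + v - v * np.log(v / u)
    return psi.max()

# Example: value of the conjugate at a single dual point (u*, v*) = (0.3, -0.2)
print(conjugate_bruteforce(0.3, -0.2))
```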

5 Numerical Results

In this section, we describe a number of numerical experiments that illustrate the performance of our deconvolution method. We start with four examples of simulated data, where we can quantify the quality of the reconstruction against the known ground truth image. Then, we show how our method performs on microscopy data, where we reconstruct an image of spherical beads and a sample of a Marchantia thallus. In the experiments with microscopy data, we compare our method with two standard approaches to shift-invariant deconvolution: one where the convolution kernel is the detection PSF and one where the convolution kernel is the point-wise multiplication of the detection PSF with the light-sheet.

5.1 Simulated Data

We consider four images of size \(128\times 125\times 64\): a \(5 \times 5 \times 5\) grid of beads, where the effect of the light-sheet in the z coordinate and the shape of the objective PSF are noticeable; a piecewise constant image of “steps”, where the Poisson noise affects each step differently depending on its intensity; and two images that replicate realistic biological tissue samples. These are shown in the top row of Fig. 8.

To obtain the measured data, we proceed as follows. Given the ground truth image \(u_0\), the forward operator described in Sect. 2.1 is applied to obtain the blurred image \(Lu_0\). The parameters for the forward model are those of the microscope used in the experimental setup and are given in Table 2. Then, the image is corrupted with a mixture of Poisson and Gaussian noise. For the vectorised image \(Lu_0\), at each pixel \(i=1,\ldots ,N\), the Poisson noise component follows the Poisson distribution with parameter \((Lu_0)_i\) and the additive Gaussian component has zero mean and standard deviation \(\sigma _G=10\). The original image, which has intensity in [0, 1], is scaled so that the intensity of \(Lu_0\) lies in [0, 2000], replicating a realistic scenario for the Poisson noise intensity. The resulting simulated measured data is shown in the middle row of Fig. 8.
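A minimal sketch of this noise model, assuming the blurred image \(Lu_0\) is available as a NumPy array (here we rescale the blurred image directly to the target intensity range, which is a simplification of the scaling described above):

```python
import numpy as np

def simulate_measurement(Lu0, peak=2000.0, sigma_g=10.0, seed=0):
    """Corrupt the blurred ground truth Lu0 with Poisson noise (parameter (Lu0)_i per
    pixel) plus additive zero-mean Gaussian noise of standard deviation sigma_g.
    The blurred image is rescaled so that its intensities lie in [0, peak]."""
    rng = np.random.default_rng(seed)
    scaled = Lu0 / Lu0.max() * peak                        # rescale intensities to [0, peak]
    noisy = rng.poisson(scaled).astype(float)              # Poisson component
    noisy += rng.normal(0.0, sigma_g, size=scaled.shape)   # additive Gaussian component
    return noisy
```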

Table 2 Forward model parameters used in Sect. 5

We compare the reconstruction obtained using the proposed approach, which we will refer to as LS-IC (light-sheet-infimal convolution), with the reconstructions obtained by using an \(L^2\) data fidelity term instead of the infimal convolution term, or using a convolution operator corresponding to the objective PSF instead of the light-sheet forward model from Sect. 2.1. Specifically, we compare the solution of (4.1) with the solutions to the following problems, all solved using PDHG as described in Sect. 4:

PSF-L2: the analogue of (4.1) with the infimal convolution data fidelity replaced by an \(L^2\) fidelity and the light-sheet forward operator L replaced by H;

PSF-IC: the analogue of (4.1) with the light-sheet forward operator L replaced by H, keeping the infimal convolution data fidelity;

LS-L2: the analogue of (4.1) with the infimal convolution data fidelity replaced by an \(L^2\) fidelity, keeping the light-sheet forward operator L,

where H is the convolution operator with the detection objective PSF \(h_z\) as given in (2.14).

For each test image and each method above, the PDHG parameters \(\rho \) and \(\sigma \) used are given in Table 3, and \(\tau \) is set to \(\tau = 1/\left( \sigma \Vert \sum _{i=1}^m L_i^* L_i \Vert \right) \) to ensure convergence according to Theorem 5.3 in [48]; one way of estimating this operator norm is sketched after this paragraph. As a stopping criterion, we used the primal–dual gap (4.12), normalised by the number of pixels N and the dynamic range of the measured image f:

$$\begin{aligned} \tilde{D}_{pd} = \frac{D_{pd}}{N \cdot \max _{j=1,\ldots ,N} f_j}, \end{aligned}$$
(5.1)

with a threshold of \(10^{-6}\) and a maximum number of 10,000 iterations.
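Since \(\Vert \sum _{i=1}^m L_i^* L_i \Vert \) is rarely available in closed form, one standard way to estimate it is power iteration on the symmetric operator \(\sum _i L_i^* L_i\). A minimal sketch, with the operator supplied as a callable (an interface choice of ours):

```python
import numpy as np

def operator_norm(apply_LtL, shape, iters=50, seed=0):
    """Estimate || sum_i L_i^* L_i || by power iteration.

    apply_LtL : callable applying the symmetric positive semi-definite operator
                sum_i L_i^* L_i to an array of the given shape.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    x /= np.linalg.norm(x)
    norm = 0.0
    for _ in range(iters):
        y = apply_LtL(x)            # apply the operator to the current unit vector
        norm = np.linalg.norm(y)    # Rayleigh-type estimate of the largest eigenvalue
        x = y / max(norm, 1e-30)
    return norm

# tau can then be set as tau = 1 / (sigma * operator_norm(...)), as in the text.
```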

The results of the four methods applied to the test images are given in Fig. 9 and quantitative results are given in Table 4. For each test image and each method, the regularisation parameter has been chosen to optimise the normalised \(l^2\) error and the structural similarity index (SSIM), respectively.
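For reference, both quality measures can be computed with NumPy and scikit-image; the exact normalisation of the \(l^2\) error is not restated here, so the convention \(\Vert u - u_0\Vert _2 / \Vert u_0\Vert _2\) used below is our assumption:

```python
import numpy as np
from skimage.metrics import structural_similarity

def normalised_l2_error(u, u0):
    """Relative l2 error between reconstruction u and ground truth u0 (our convention)."""
    return np.linalg.norm(u - u0) / np.linalg.norm(u0)

def ssim_3d(u, u0):
    """Structural similarity index between two 3D volumes."""
    return structural_similarity(u0, u, data_range=u0.max() - u0.min())
```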

Table 3 Values of the PDHG parameters \(\rho \) and \(\sigma \) used in the numerical experiments with simulated data

We note that PSF-L2 and PSF-IC perform particularly poorly, highlighting the importance of an accurate representation of the image formation model instead of simply using the detection objective PSF as the forward operator. Comparing LS-IC and LS-L2, we see better results when using the infimal convolution data fidelity for the beads and the steps image, both visually and quantitatively. The deblurring is performed better on the beads image, while on the steps image we see a better denoising effect, especially along the edges in the image. For the tissue image, both fidelities give comparable results, but as we see in Fig. 10, when the ground truth is not known, choosing \(\alpha \) using the discrepancy principle gives a better result for the infimal convolution model.

The reconstructions shown in Fig. 10 are obtained by applying the discrepancy principle corresponding to each method. For LS-IC, we choose a value of \(\alpha \) such that it satisfies a variation of the discrepancy principle given in (3.11), where we enforce that the single noise fidelities are bounded by their respective noise bounds, rather than the sum of the fidelities being bounded by the sum of the noise bounds, as stated in (3.11). While both versions give good results, we found the former to give more accurate reconstructions. Here, the bound on the Poisson noise is set to \(\frac{1}{2}\), motivated by the following lemma from [49], which gives the expected value of the Kullback–Leibler divergence:

Lemma 5.1

Let \(Y_{\beta }\) be a Poisson random variable with expected value \(\beta \) and consider the function:

$$\begin{aligned} F(Y_{\beta }) = 2\left\{ Y_{\beta } \log \left( \frac{Y_{\beta }}{\beta }\right) +\beta - Y_{\beta } \right\} . \end{aligned}$$

Then, for large \(\beta \), the following estimate of the expected value of \(F(Y_{\beta })\) holds:

$$\begin{aligned} \mathbb {E}[F(Y_{\beta })] = 1 + \mathcal {O}\left( \frac{1}{\beta }\right) . \end{aligned}$$
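A quick Monte Carlo sanity check of this lemma, with \(\beta = 2000\) matching the intensity scale of the simulated data (the sample size is our choice):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 2000.0                                   # matches the simulated intensity range
Y = rng.poisson(beta, size=1_000_000).astype(float)
# F(Y) = 2*(Y*log(Y/beta) + beta - Y); the term Y*log(Y/beta) is 0 at Y = 0 (limiting value)
log_term = Y * np.log(np.maximum(Y, 1.0) / beta)
F = 2.0 * (log_term + beta - Y)
print(F.mean())                                 # approximately 1 for large beta
```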

A final observation about the results in Figs. 9 and 10 concerns the square shape of the reconstructed beads (first column of both figures). Looking carefully at the ground truth bead image in Fig. 8, one can see that the beads are almost square to begin with, due to the small dimensions of the image. The finer details that make them appear round are lost in the blurring process, which, in combination with the total variation regulariser used in the deconvolution algorithm, means this detail is not recovered in the reconstruction, so the beads remain square. Moreover, the sharpening of their edges is an expected effect of the total variation regularisation, which could be avoided by using a different regularisation technique; however, this is beyond the scope of this article.

The experiments were run using Matlab version R2020b Update 2 (9.9.0.1524771) 64-bit in Scientific Linux 7.9 on a machine with Intel Xeon E5-2680 v4 2.40 GHz CPU, 256 GB memory and Nvidia P100 16 GB GPU. The running times, averaged over 5 runs for each method and each image, are given in Table 5.

Fig. 9
figure 9

Reconstruction on simulated data with regularisation parameter \(\alpha \) such that best MSE is achieved for each method and each image. Shown as maximum intensity projections, except for tissue, where slices in each direction in the centre of the sample are shown. The axes are as shown in the bottom row of Fig. 8. First row: PSF-L2. Second row: PSF-IC. Third row: LS-L2. Fourth row: LS-IC

5.2 Light-Sheet Data

In this section, we show the results of applying LS-IC to a cropped portion of the full resolution images in Fig. 2. Specifically, we select a cropped beads image of \(1127 \times 111 \times 100\) voxels and a cropped Marchantia image of \(1127 \times 156 \times 100\) voxels.

For comparison, we also run PSF-L2 on the same images. In addition, we run an alternative light-sheet deconvolution method, where we perform shift-invariant deconvolution using a PSF \(\bar{h}\) obtained by point-wise multiplication of the detection PSF \(h_z\) in (2.14) and the light-sheet l, effectively clipping \(h_z\) by the width of the light-sheet. Therefore, the problem we solve, which we denote by PSF-L2-clip, is:

the analogue of PSF-L2 with the convolution operator \(\bar{H}\) in place of H,

where \(\bar{H}\) is the convolution operator with the PSF \(\bar{h} = h_z \cdot l\). A justification of this method is given by a simplified image formation model in which the light-sheet is assumed to have constant width (in the z direction) and constant intensity throughout the full sample, or within a region of interest where deconvolution is performed, as is done for example in [21].
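As a sketch of how such a clipped kernel could be assembled, assuming the detection PSF \(h_z\) and a one-dimensional light-sheet profile along z are available as arrays (the renormalisation of the kernel is our own choice):

```python
import numpy as np

def clipped_psf(h_z, light_sheet_z):
    """Point-wise multiplication of the detection PSF with a constant-width light-sheet
    profile along z, i.e. h_bar = h_z * l, which suppresses the tails of h_z outside the
    sheet. h_z has shape (nx, ny, nz); light_sheet_z has shape (nz,)."""
    h_bar = h_z * light_sheet_z[None, None, :]
    return h_bar / h_bar.sum()   # renormalise so the kernel sums to one (our choice)
```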

Table 4 Results of the numerical experiments on simulated data, with the regularisation parameter \(\alpha \) chosen to optimise the normalised \(l_2\) error and the SSIM, respectively
Fig. 10
figure 10

Reconstruction on simulated data with regularisation parameter \(\alpha \) chosen to satisfy the discrepancy principle (3.11). Shown as maximum intensity projections, except for tissue, where slices in each direction in the centre of the sample are shown. The axes are as shown in the bottom row of Fig. 8. First row: PSF-L2. Second row: PSF-IC. Third row: LS-L2. Fourth row: LS-IC

We run each method on both images for up to 6000 iterations, with a normalised primal–dual gap of \(10^{-6}\) as a stopping criterion. The parameters for the image formation model used are the same as in Table 2 and the PDHG parameters are given in Table 6.

The results of the deconvolution are shown in Figs. 11 and 13 for the beads image and the Marchantia image, respectively. In both figures, we first show the position of the light-sheet in the first row (due to the cropping, this is no longer centred) and the measured data in the second row, followed by the PSF-L2, the PSF-L2-clip and the LS-IC reconstructions in the third, fourth and fifth rows, respectively. The regularisation parameter \(\alpha \) was in all cases chosen visually, to balance the amount of regularisation against the residual noise in the reconstruction.

Table 5 Running times for each method and each simulated test image, averaged over 5 runs, in seconds
Table 6 Values of the PDHG parameters \(\rho \) and \(\sigma \) used in the numerical experiments with real data
Fig. 11
figure 11

Reconstruction results for the light-sheet bead image, shown as maximum intensity projections. The axes are as shown in the bottom row of Fig. 8. First row: The fitted light-sheet profile. Second row: The data. Third row: PSF-L2 with \(\alpha =0.1\). Fourth row: PSF-L2-clip with \(\alpha =0.7943\). Fifth row: LS-IC with \(\alpha =0.0046\)

In the beads image in Fig. 11, we note that LS-IC performs better than PSF-L2 and PSF-L2-clip at reversing the effect of the light-sheet. This is most obvious in the zy plane on the right-hand side of the image, where the length of the beads in the z direction has been reduced to a greater extent than in the PSF-L2 and the PSF-L2-clip reconstructions. In addition, the beads appear less blurry in the LS-IC reconstruction in the right-hand side of the xy plane. We also note that PSF-L2-clip fails to properly reverse the effects of the optical aberrations in the beads. This is not unexpected, as the information related to the aberrations is lost when the detection PSF is clipped by setting its upper and lower extremities to zero. The extent to which this happens depends on the width of the light-sheet: as the light-sheet becomes wider, the overall PSF approaches the detection PSF, in which case the deconvolved image will be the same as the reconstruction using PSF-L2. We show the bead images in 3D in Fig. 12, where the effect of the deconvolution in the z direction is more significant in the LS-IC reconstruction than in both the PSF-L2 and the PSF-L2-clip reconstructions, namely the beads are shorter in the z direction.

In the Marchantia reconstruction in Fig. 13, we see a similar effect of better sharpening in the z direction, most easily seen in the right-hand side and bottom projections in each panel (maximum intensity projections in the zy and the xz planes, respectively). In particular, we see additional artefacts in the PSF-L2-clip reconstruction: horizontal lines (parallel with the xy plane), likely due to the clipping of the detection PSF. Moreover, the 3D rendering of the Marchantia sample in Fig. 14 shows smoother cell edges in the LS-IC reconstruction compared to the other methods. Specifically, the PSF-L2 reconstruction contains reconstruction artefacts that are non-existent in the LS-IC reconstruction (indicated by the yellow arrows), while the PSF-L2-clip reconstruction contains areas where the blur has not been fully removed (for example at the same locations indicated by the yellow arrows), where the edges are not as sharp as in the LS-IC reconstruction.

Lastly, we reiterate that the strength of our proposed method lies in the physically accurate modelling of the interaction between the detection PSF and the light-sheet. This allows one to model the optical aberrations as part of the detection PSF (with no restriction on how this is done), as well as the spatial dependence of the width and the intensity of the light-sheet, and to combine them in an image formation model that does not require the approximation of a light-sheet with constant width and intensity. As the numerical experiments in this section show, such an approximation, while faster and computationally less expensive, leads to loss of information and to results that are at best locally accurate.

6 Conclusion

In this paper, we introduced a novel method for performing deconvolution for light-sheet microscopy. We started by modelling the image formation process in a way that replicates the physics of a light-sheet microscope, explicitly modelling the interaction of the illumination light-sheet with the detection objective PSF. Moreover, the optical aberrations in the system were modelled using a linear combination of Zernike polynomials in the pupil function of the detection PSF, fitted to bead data using a least squares procedure. We then formulated a variational model that takes the image formation model as the forward operator and accounts for a combination of Poisson and Gaussian noise in the data. The model combines a total variation regularisation term with a fidelity term that is an infimal convolution between an \({{\,\mathrm{\mathrm {L}}\,}}^2\) term and the Kullback–Leibler divergence, introduced in [9]. In addition, we established convergence rates with respect to the noise and introduced a discrepancy principle for selecting the regularisation parameter \(\alpha \) in the mixed noise setting. We solved the resulting inverse problem by applying the PDHG algorithm in a non-trivial way, including the pixel-wise computation of the proximity operator and the convex conjugate of the joint Kullback–Leibler divergence.

Fig. 12
figure 12

3D rendering of the beads data and reconstruction images using Imaris Viewer 9.7.2. First row: The data. Second row: PSF-L2 with \(\alpha =0.1\). Third row: PSF-L2-clip with \(\alpha =0.7943\). Fourth row: LS-IC with \(\alpha =0.0046\)

Fig. 13
figure 13

Reconstruction results for the Marchantia sample, shown as slices in each direction in the centre of the sample. The axes are as shown in the bottom row of Fig. 8. First row: The fitted light-sheet profile. Second row: The data. Third row: PSF-L2 with \(\alpha =0.1\). Fourth row: PSF-L2-clip with \(\alpha =0.1\). Fifth row: LS-IC with \(\alpha =0.0005\)

Fig. 14
figure 14

3D rendering of the Marchantia data and reconstruction images using Imaris Viewer 9.7.2. First row: The data. Second row: PSF-L2 with \(\alpha =0.1\). Third row: PSF-L2-clip with \(\alpha =0.1\). Fourth row: LS-IC with \(\alpha =0.0005\)

The results in the numerical experiments section show that our method, LS-IC, outperforms simpler approaches to deconvolution of light-sheet microscopy data, where one does not take into account the variability of the overall PSF introduced by the light-sheet excitation, or the combination of Gaussian and Poisson noise. In particular, numerical experiments with simulated data show superior reconstruction quality in terms of the normalised \(l^2\) error and the structural similarity index, not only by optimising over the regularisation parameter \(\alpha \) given the ground truth, but also with an a posteriori choice of \(\alpha \) using the stated discrepancy principle. On bead data, the reconstruction obtained using LS-IC shows a more significant reduction of the blur in the z direction compared to PSF-L2, where the light-sheet variations and the Poisson noise are not taken into account. Moreover, reconstruction of a Marchantia sample with LS-IC shows fewer artefacts than the PSF-L2 reconstruction, as well as sharper cell edges and smoother cell membranes.

Future work includes applying this technique to a broader range of samples and using it to answer questions of biological interest. To this end, we see a number of directions in which this work can be extended:

  1. Adapting the discrepancy principle given in (3.11) for choosing the regularisation parameter \(\alpha \) to real data sets, like the ones in Sect. 5.2.

  2. Improving the running time of the method, potentially by means of randomised approaches.

  3. Investigating other regularisation terms.

  4. Making the technique available to other users as a more user-friendly tool.