# Statistical Multiresolution Estimation for Variational Imaging: With an Application in Poisson-Biophotonics


## Abstract

In this paper we present a spatially-adaptive method for image reconstruction that is based on the concept of *statistical multiresolution estimation* as introduced in Frick et al. (Electron. J. Stat. 6:231–268, 2012). It constitutes a variational regularization technique that uses an *ℓ*_{∞}-type distance measure as data-fidelity combined with a convex cost functional. The resulting convex optimization problem is approached by a combination of an inexact alternating direction method of multipliers and Dykstra’s projection algorithm. We describe a novel method for balancing data-fit and regularity that is fully automatic and allows for a sound statistical interpretation. The performance of our estimation approach is studied for various problems in imaging. Among others, this includes deconvolution problems that arise in Poisson nanoscale fluorescence microscopy.

### Keywords

Statistical multiresolution · Extreme-value statistics · Total-variation regularization · Statistical inverse problems · Statistical imaging · Alternating direction method of multipliers · Poisson regression

## 1 Introduction

We consider the problem of recovering an unknown image *u*^{0}∈L^{2}(*Ω*) with *Ω*=[0,1]^{2} given the data
$$ Y_{ij} = \bigl(Ku^{0}\bigr)_{ij} + \varepsilon_{ij}, \quad 1\leq i\leq m,\ 1\leq j\leq n. \qquad(1) $$
Here we assume that the *ε*_{ij} are independent and identically distributed Gaussian random variables with **E**(*ε*_{11})=0 and \(\mathbf{E} (\varepsilon_{11}^{2} ) = \sigma^{2} > 0\) and that *K*:L^{2}(*Ω*)→ℝ^{m×n} is a linear and bounded operator. The operator *K* is assumed to model image acquisition and sampling at the same time, i.e. (*Ku*)_{ij} is assumed to be a sample at the pixel (*i*/*m*,*j*/*n*) of a smoothed version of *u*. Throughout the paper we will assume that *σ*^{2} is known (for reliable estimation techniques for *σ*^{2} see e.g. [30] and references therein).

A standard approach to estimating *u*^{0} from the data *Y* given in the Gaussian model (1) consists in minimizing a penalized least squares functional, i.e. in computing
$$ \hat{u}(\lambda) \in \operatorname{argmin}_{u\in\text{L}^{2}(\varOmega)}\ \frac{1}{2} \Vert Ku - Y \Vert^{2} + \lambda J(u), \qquad(2) $$
with a convex cost functional *J* and *λ*>0 a suitable multiplier. In the seminal work [33], for example, the authors proposed the *total variation semi-norm*
$$ J(u) = |Du|(\varOmega), \qquad(3) $$
where |*Du*|(*Ω*) denotes the total variation of the (measure-valued) gradient of *u*, which coincides with ∫_{Ω}|∇*u*| if *u* is smooth. Numerous efficient solution methods for (2) [7, 10, 26] and various modifications have been suggested so far (cf. [8, 18, 31, 36] to name but a few). In particular, in order to accelerate numerical algorithms and to prevent oversmoothing, the total variation semi-norm is often augmented by an additional quadratic term with weight *γ*≥0.

The quadratic fidelity in (2) has an essential drawback: The information in the residual is incorporated *globally*, that is each pixel value (*Ku*)_{ij}−*Y*_{ij} contributes equally to the estimator \(\hat{u}(\lambda)\)*independent of its spatial position*. In practical situations this is clearly undesirable, since images usually contain features of different scales and modality, i.e. constant and smooth portions as well as oscillating patterns both of different spatial extent. A solution \(\hat{u}(\lambda)\) of (2) is hence likely to exhibit under- and oversmoothed regions at the same time.

To overcome this drawback, *spatially-adaptive* reconstruction approaches became popular that are based on (2) with a locally varying regularization parameter, i.e.
$$ \hat{u}(\lambda) \in \operatorname{argmin}_{u\in\text{L}^{2}(\varOmega)}\ \frac{1}{2} \sum_{i,j} \lambda_{ij} \bigl((Ku)_{ij} - Y_{ij}\bigr)^{2} + J(u). \qquad(5) $$
The choice of the *λ*_{ij} is subtle and different approaches have been suggested. See for instance [11, 21, 22, 27].

In this paper we take a different point of view: instead of penalizing the residual globally, we minimize *J* over a convex set that is determined by the statistical extreme value behaviour of the residual process. More precisely, we study estimators \(\hat{u}\) of *u*^{0} that are computed as solutions of the convex optimization problem
$$ \inf_{u\in\text{L}^{2}(\varOmega)} J(u) \quad\text{subject to}\quad \sup_{S\in\mathcal{S}}\ \frac{c_{S}}{\sigma^{2}} \sum_{(i,j)\in S} \bigl((Ku)_{ij} - Y_{ij}\bigr)^{2} \leq 1, \qquad(6) $$
where \(\mathcal{S}\) is a system of subsets of the grid *G*={1,…,*m*}×{1,…,*n*} and \(\{ c_{S}: S\in\mathcal{S} \}\) is a set of positive weights that govern the trade-off between data-fit and regularity locally on each set \(S\in\mathcal{S}\). Solutions of (6) are special instances of *statistical multiresolution estimators (SMRE)* as studied in [20]. In this context the statistic *T*:ℝ^{m×n}→ℝ defined by
$$ T(v) = \sup_{S\in\mathcal{S}}\ \frac{c_{S}}{\sigma^{2}} \sum_{(i,j)\in S} v_{ij}^{2} \qquad(7) $$
is called *multiresolution (MR) statistic*. Summarizing, an SMRE \(\hat{u}\) of *u*^{0} is an element with minimal *J* among all candidate estimators *u* that satisfy the condition *T*(*Ku*−*Y*)≤1.
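To make the constraint concrete, the MR-statistic can be evaluated directly from a residual array. The following sketch is our own illustration (toy index sets and weights, not code from the paper): it computes *T* as the maximum over the sets in \(\mathcal{S}\) of the weighted, *σ*^{2}-scaled sums of squared residual entries:

```python
import numpy as np

def mr_statistic(v, systems, weights, sigma):
    """MR statistic T(v): maximum over all sets S of the weighted,
    sigma^2-scaled sum of squared residual entries on S."""
    t_max = -np.inf
    for S, c_S in zip(systems, weights):
        rows, cols = zip(*S)
        t_S = np.sum(v[rows, cols] ** 2) / sigma ** 2
        t_max = max(t_max, c_S * t_S)
    return t_max

# toy example: a 2x2 residual and two overlapping index sets
v = np.array([[0.1, -0.2], [0.3, 0.0]])
systems = [[(0, 0), (0, 1)], [(0, 1), (1, 0)]]
weights = [0.5, 0.5]
T = mr_statistic(v, systems, weights, sigma=1.0)  # the constraint reads T <= 1
```

A candidate *u* is feasible for (6) precisely when `mr_statistic(K(u) - Y, ...) <= 1`.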

Special instances of (6) have been studied recently: For the case when \(\mathcal{S}\) contains the entire domain *G* only, it has been shown in [8] that (6) is equivalent to (2) if *K* satisfies certain conditions. As mentioned above, this approach is likely to oversmooth small-scaled image features (such as texture) and/or underregularize smooth parts of the image. An improved model was proposed in [2] where \(\mathcal{S}\) is chosen to consist of a (data-dependent) *partition* of *G* that is obtained in a preprocessing step (for the numerical simulations in [2], Mumford-Shah segmentation is considered). Under similar conditions on *K* as in [8], it was shown in [2] that (6) is equivalent to (5) where *λ*_{ij} is constant on each \(S\in \mathcal{S}\). This approach was further developed in [1] where a subset *S*⊂*G* is fixed and afterwards \(\mathcal{S}\) is defined to be the collection of all translates of *S* (in fact, the authors study the convolution of the squared residuals with a discrete kernel). The authors propose a proximal point method for the solution of (6). This approach of local constraints w.r.t. a window (or kernel) of *fixed size* was also studied in [15] for irregular sampling and regularization functionals other than the total variation were considered. In particular, it is observed that the difference between results obtained by using the total variation penalty (3) and the Dirichlet-energy (integrated squared norm of the derivative) is not so big when using local constraints. This is in accordance to findings in [19] for one-dimensional signals. In [11] the model of [1] was studied in the continuous function space setting. Moreover the authors in [11] provided a fast algorithm for the solution of the constrained optimization problem based on the hierarchical decomposition scheme [34] combined with the unconstrained problem (5).

In this paper, we propose a novel, automatic selection rule for the weights *c*_{S} based on a statistically sound method that is applicable for any pre-specified, deterministic system of subsets \(\mathcal{S}\). We are particularly interested in the case when \(\mathcal{S}\) constitutes a highly redundant collection of subsets of *G* consisting of overlapping subsets of different scales. This is a substantial extension to the approaches in [1, 11, 15] that only consider one fixed (pre-defined) scale. Our approach will amount to select a single parameter *α*∈[0,1] with the interpretation that the true signal *u*^{0} satisfies the constraint in (6) with probability *α*. From the definition of (6) it is then readily seen that \(\mathbb{P} (J(\hat{u}) \leq J(u^{0}) ) \geq\alpha\) for any solution \(\hat{u}\) of (6). In other words, our method controls the probability that the reconstruction \(\hat{u}\) is at least as smooth (in the sense of *J*) as the true image *u*^{0}. To this aim, it will be necessary to gain stochastic control on the null-distribution *T*(*ε*), where *ε*={*ε*_{ij}} is a lattice of independent \(\mathcal{N}(0,\sigma^{2})\)-distributed random variables.

For the numerical computation of SMREs we combine an *inexact* alternating direction method of multipliers (ADMM) [9, 13] with Dykstra’s projection algorithm [4]. Finally, we indicate how our approach can be applied to image deblurring problems in fluorescence microscopy, where the observed data does not fit into the white noise model (1) but where one usually assumes that, independently,
$$ Y_{ij} \sim \operatorname{Pois}\bigl( \bigl(Ku^{0}\bigr)_{ij} \bigr), \qquad(8) $$
where Pois(*β*) stands for the Poisson distribution with parameter *β*>0. We mention that similar models occur in positron emission tomography (cf. [35]) and large binocular telescopes (cf. [3]), and we claim that our method can be useful there as well. We apply Anscombe’s transform to transform the Poisson data to normality. Furthermore we present a modified version of the ADMM to solve the resulting variant of (6). We finally illustrate the capability of our approach by numerical examples: image denoising, deblurring and inpainting, as well as deconvolution problems that arise in nanoscale fluorescence microscopy.

*Notation.* We denote by |*S*| the cardinality of \(S\in\mathcal{S}\). We often refer to |*S*| as the *scale* of *S*. We assume that *m*,*n*∈ℕ are fixed and denote by 〈⋅,⋅〉 and ∥⋅∥ the Euclidean inner-product and norm on ℝ^{m×n} and by \(\Vert u \Vert _{\text{L}^{2}}\) the L^{2}-norm of *u*. For a convex functional \(J:\text {L}^{2}(\varOmega)\rightarrow\overline{\mathbb{R}}\) the subdifferential *∂J*(*u*) is the set of all *ξ*∈L^{2}(*Ω*) such that *J*(*v*)≥*J*(*u*)+∫_{Ω}*ξ*(*v*−*u*) for all *v*∈L^{2}(*Ω*). The Bregman-distance between *v*,*u*∈L^{2}(*Ω*) w.r.t. *ξ*∈*∂J*(*u*) is defined by
$$ D_{J}^{\xi}(v,u) = J(v) - J(u) - \int_{\varOmega}\xi (v - u). $$
For *η*∈*∂J*(*v*) we define the symmetric Bregman distance by \(D_{J}^{\text{sym}}(u,v) = D_{J}^{\xi}(v,u) + D_{J}^{\eta}(u,v)\). By *J*^{∗} we denote the Legendre-Fenchel transform of *J*, i.e. \(J^{*}(q) = \sup_{u\in\text{L}^{2}(\varOmega)} \int_{\varOmega}u q - J(u)\). We finally note that it would not be restrictive (yet less intuitive) to replace L^{2}(*Ω*) by any other separable Hilbert space.

## 2 Statistical Multiresolution Estimation

We review sufficient conditions that guarantee existence of SMREs, that is, solutions of (6). To this end, we rewrite (6) into an equality constrained problem and study the corresponding augmented Lagrangian function (Sect. 2.1). Moreover, we address the important question of how to choose the *scale weights* *c*_{S} automatically in Sect. 2.2. Finally, we discuss different choices for the system \(\mathcal{S}\) that have proved feasible in practice in Sect. 2.3.

### 2.1 Existence of SMRE

In order to prove existence of solutions of (6), we rewrite (6) into an equality constrained problem by introducing an auxiliary variable *v*∈ℝ^{m×n}. To be more precise, we aim for the solution of
$$ \inf_{u\in\text{L}^{2}(\varOmega),\ v\in\mathbb{R}^{m\times n}} J(u) + H(v) \quad\text{subject to}\quad Ku = v, \qquad(9) $$
where *H* denotes the indicator function on the feasible set \(\mathcal{C}\) of (6), i.e.
$$ H(v) = \begin{cases} 0 & \text{if } v\in\mathcal{C} := \bigl\{ w\in\mathbb{R}^{m\times n} : T(w - Y)\leq 1 \bigr\},\\ +\infty & \text{else.} \end{cases} \qquad(10) $$

In order to compute solutions of (9), we consider the *augmented Lagrangian* of (9):
$$ L_{\lambda}(u,v;p) = J(u) + H(v) + \langle p, Ku - v\rangle + \frac{1}{2\lambda} \Vert Ku - v \Vert^{2}. \qquad(11) $$
Here *p*∈ℝ^{m×n} denotes the Lagrange multiplier for the linear constraint in (9). Note that *L*_{λ} equals the ordinary Lagrangian *L*(*u*,*v*;*p*)=*J*(*u*)+*H*(*v*)+〈*p*,*Ku*−*v*〉 augmented by the quadratic term ∥*Ku*−*v*∥^{2}/(2*λ*) that fosters the fulfillment of the linear constraint in (9).

It is well known that the saddle-points of *L* and *L*_{λ} coincide (cf. [16, Chap. III, Theorem 2.1]) and that existence of a saddle point of *L*_{λ} follows from existence of solutions of (9) together with constraint qualifications of the MR-statistic *T*. One typical example for the latter is given in Proposition 2.1. The result is rather standard and can be deduced e.g. from [12, Chap. III, Proposition 3.1 and Theorem 4.2] (cf. also [16, Chap. III]).

### Proposition 2.1

*Assume that* (9) *has a solution* \((\hat{u}, \hat{v})\in L^{2}(\varOmega)\times\mathbb{R}^{m\times n}\) *and that there exists* \(\bar{u} \in L^{2}(\varOmega)\) *such that* \(J(\bar{u}) < \infty\) *and* \(T(K\bar{u}-Y) < 1\) (*Slater’s constraint qualification*). *Then, there exists* \(\hat{p}\in\mathbb{R}^{m\times n}\) *such that* \((\hat{u}, \hat{v}, \hat{p})\) *is a saddle point of* *L*_{λ}, *i.e.*
$$ L_{\lambda}(\hat{u},\hat{v};p) \leq L_{\lambda}(\hat{u},\hat{v};\hat{p}) \leq L_{\lambda}(u,v;\hat{p}) \quad\textit{for all } u\in L^{2}(\varOmega),\ v,p\in\mathbb{R}^{m\times n}. $$

### Remark 2.2

- (1)If \(\hat{u}\in\text{L}^{2}(\varOmega)\) and \(\hat{v},\hat{p}\in \mathbb{R}^{m\times n}\) are as in Proposition 2.1, then \(\hat{u}\) is an SMRE, i.e. it solves (6). Moreover, the following extremality relations hold:$$ -K^*\hat{p}\in\partial J(\hat{u}),\qquad\hat{p} \in\partial H(\hat{v})\quad\text{and}\quad K\hat{u} = \hat{v}. $$
- (2) Slater’s constraint qualification is for instance satisfied if the set $$ \bigl\{ Ku: u\in\text{L}^{2}(\varOmega)\text{ and } J(u)< \infty \bigr\} $$ is dense in ℝ^{m×n}.
- (3) If *J* is chosen to be the total variation semi-norm (3), then a sufficient condition for the existence of solutions of (9) is that there exists (*i*,*j*)∈*S* for some \(S\in\mathcal{S}\) such that (*K***1**)_{ij}≠0, where **1**∈L^{2}(*Ω*) is the constant 1-function. This is immediate from Poincaré’s inequality for functions in BV(*Ω*) (cf. [37, Theorem 5.11.1]).

### 2.2 An a Priori Parameter Selection Method

The choice of the *scale weights* *c*_{S} in (6) is of utmost importance, for they determine the trade-off between smoothing and data-fit (and hence play the role of spatially local regularization parameters). We propose a statistical method that is based on quantile values of extremes of transformed *χ*^{2} distributions.

We aim for weights *c*_{S} such that, with high probability, the true signal *u*^{0} satisfies the constraint in (6). To this end, observe that for \(S\in\mathcal{S}\) the random variable
$$ t_{S}(\varepsilon) = \frac{1}{\sigma^{2}} \sum_{(i,j)\in S} \varepsilon_{ij}^{2} $$
is *χ*^{2}-distributed with |*S*| degrees of freedom (d.o.f.). With this notation, it follows from (1) that *u*^{0} satisfies the constraints in (6) if \(\sup_{S\in\mathcal{S}} c_{S} t_{S}(\varepsilon) \leq 1\). We call the weights *balanced* in the sense that the probability for *c*_{S}*t*_{S}(*ε*)>1 is equal for each \(S\in\mathcal{S}\).

To achieve such a balancing, we first transform the random variables *t*_{S}(*ε*) to normality. It was shown in [23] that the *fourth root transform* \(\sqrt[4]{t_{S}(\varepsilon)}\) is approximately normal with mean *μ*_{S} and variance \(\sigma_{S}^{2}\) that depend on |*S*| only. This motivates the extreme value statistic
$$ \sup_{S\in\mathcal{S}} \frac{\sqrt[4]{t_{S}(\varepsilon)} - \mu_{S}}{\sigma_{S}}. \qquad(12) $$
After transforming the *t*_{S}(*ε*) to normality, each scale contributes equally to the supremum in (12). Hence a parameter choice strategy based on quantile values of the statistic (12) is likely to balance the different scales occurring in \(\mathcal{S}\). We make this precise in the following

### Proposition 2.3

*Let* *α*∈(0,1) *and denote by* *q*_{α} *the α-quantile of the statistic* (12). *If the scale weights are chosen as* \(c_{S}=(q_{\alpha}\sigma_{S}+\mu_{S})^{-4}\), *then every solution* \(\hat{u}\) *of* (6) *satisfies* \(\mathbb{P} (J(\hat{u}) \leq J(u^{0}) ) \geq\alpha\).

### Proof

By construction, the weights \(c_{S}=(q_{\alpha}\sigma_{S}+\mu_{S})^{-4}\) are chosen such that the true signal *u*^{0} satisfies the constraints in (6) with probability *α*. Since \(\hat{u}\) is a solution of (6), it follows that \(\mathbb{P}(T(Ku^{0} - Y) \leq1) \leq\mathbb{P}(J(\hat{u})\leq J(u^{0}))\). □

### Remark 2.4

With the choice \(c_{S}=(q_{\alpha}\sigma_{S}+\mu_{S})^{-4}\) in Proposition 2.3, the problem of selecting the *set* of scale weights *c*_{S} is reduced to the question of how to choose the *single* value *α*∈(0,1). The probability *α* plays the role of a universal regularization parameter and allows for a precise statistical interpretation: it constitutes a lower bound on the probability that the SMRE \(\hat{u}\) is more regular (in the sense of *J*) than the true object *u*^{0}. Moreover, the quantity 1/(*c*_{S}|*S*|) can be considered as a *relaxation* parameter that takes into account the uncertainty of estimating the variance of the residual on finite scales |*S*|. Put differently, it is expected to be large on small scales and to approach 1 as |*S*| increases. This is illustrated in Fig. 1: here the quantity 1/(*c*_{S}|*S*|) is depicted for the system \(\mathcal {S}_{0}\) of all squares with sidelengths up to 20 (left panel) and the system \(\mathcal{S}_{2}\) of all dyadic squares (middle panel) in a 341×512 image for *α*=0.2 (‘+’) and *α*=0.9 (‘o’). It becomes clear that only on the smallest scales there are non-negligible differences between the scale weights for \(\mathcal{S}_{0}\) and \(\mathcal{S}_{2}\). Our numerical experiments also confirm that the reconstruction results do not differ very much for different choices of *α*.

We note that in [1] and [11] the authors propose relaxation parameters for the case when \(\mathcal{S}\) consists of the translates of a window of fixed size. In [1] the authors fix such a parameter, 1.01 say, and determine the corresponding window size by heuristic reasoning. In [11] the authors give, for a fixed window size |*S*|, a formula for a relaxation parameter that uses moments of the extreme value statistic of independent *χ*^{2} random variables with |*S*| degrees of freedom. These methods cannot be generalized in a straightforward manner to systems \(\mathcal{S}\) that contain sets of different scales. Our selection rule for the weights *c*_{S}, in contrast, is designed such that different scales are balanced appropriately. Hence our approach is a *multi-scale* extension of the (single-scale) methods in [1, 11].

### Remark 2.5

It is important to note that the random variables *t*_{S}(*ε*) and *t*_{S′}(*ε*) are independent if and only if *S*∩*S*′=∅. As we do not assume that \(\mathcal{S}\) consists of pairwise disjoint sets, (12) constitutes an extreme value statistic of *dependent* random variables. Except for special cases, little is known about the distribution of such statistics (see e.g. [28, 29] for asymptotic results). It is an open and interesting problem to investigate the asymptotic properties of the distribution of the statistic in (12).

In practice, the quantile values *q*_{α} in Proposition 2.3 are derived from the empirical distribution of (12). The right panel in Fig. 1 shows the empirical density of the statistic (12) for *m*=341 and *n*=512 and the systems \(\mathcal {S}_{0}\) (solid) and \(\mathcal{S}_{2}\) (dashed) (for our simulations in Fig. 1 we used 5000 trials).
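All quantities entering the selection rule (*μ*_{S}, *σ*_{S} and the quantile *q*_{α}) can be approximated by Monte Carlo simulation. The sketch below is our own illustration: for brevity it draws an independent *χ*^{2} variable per scale, whereas the *t*_{S}(*ε*) in (12) come from one common noise field and are in general dependent, so in practice one would simulate the full field as described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourth_root_moments(scale, n_trials=20_000):
    """Estimate mean and std of the fourth root of a chi^2 variable
    with `scale` degrees of freedom."""
    samples = rng.chisquare(scale, size=n_trials) ** 0.25
    return samples.mean(), samples.std()

def scale_weights(scales, alpha, n_trials=2_000):
    """Monte Carlo version of the rule c_S = (q_alpha*sigma_S + mu_S)^(-4),
    with q_alpha the alpha-quantile of sup_S (t_S^(1/4) - mu_S)/sigma_S.
    Simplification: independent chi^2 draws per scale (the true t_S(eps)
    share one noise field and are dependent)."""
    mus, sds = zip(*(fourth_root_moments(s) for s in scales))
    sups = [max((rng.chisquare(s) ** 0.25 - m) / sd
                for s, m, sd in zip(scales, mus, sds))
            for _ in range(n_trials)]
    q_alpha = np.quantile(sups, alpha)
    return {s: (q_alpha * sd + m) ** -4.0 for s, m, sd in zip(scales, mus, sds)}

weights = scale_weights([4, 16, 64], alpha=0.9)  # weights decrease with scale
```

As expected from Remark 2.4, the resulting weights decrease as the scale |*S*| grows.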

### 2.3 On the Choice of \(\mathcal{S}\)

In the previous section we addressed the question of how to select the scale weights \(\{ c_{S} \}_{S\in\mathcal{S}}\) for a given system of subsets \(\mathcal{S}\) of the grid *G*. Although it is not the primary aim of this paper to advocate a particular system \(\mathcal{S}\), we will now comment on possible determinants for a rational choice of \(\mathcal{S}\).

On the one hand, \(\mathcal{S}\) should be chosen rich enough to resolve local features of the image sufficiently well at various scales. On the other hand, it is desirable to keep the cardinality of \(\mathcal{S}\) small such that the optimization problem in (6) remains solvable within reasonable time. As a consequence of this, *a priori information* on the signal *u*^{0} should be employed in practice in order to delimit a suitable system \(\mathcal{S}\) (e.g. the range of scales to be used). Furthermore we note that for guaranteeing that the extreme value statistic (12) does not degenerate (as *m*,*n* and the cardinality of \(\mathcal{S}\) increase), \(\mathcal{S}\) typically has to satisfy certain entropy conditions (see e.g. [19]). We stress that it is a challenging and interesting task to extend these results to *random* (data-driven) systems \(\mathcal{S}\). It is well known that such methods can yield good results in practice (see e.g. [2]).

In our simulations, the following two systems have proved useful:

- (1) the set of *all discrete squares* in *G*: for computational reasons usually subsystems are considered. We found the subset consisting of all squares with sidelengths up to 20 to be efficient; we denote this subset henceforth by \(\mathcal{S}_{0}\).
- (2) the set \(\mathcal{S}_{2}\) of *dyadic partitions* of *G*: for a quadratic grid *G* with *m*=*n*=2^{r} the system \(\mathcal{S}_{2}\) is obtained by recursively splitting the grid into four equal squares until the lowest level of single pixels is reached. To be more precise,$$ \mathcal{S}_2 = \bigcup_{l=1}^{r} \bigl\{ \bigl\{ k2^{l}+1,\ldots,(k+1)2^{l} \bigr\}^2: k = 0,\ldots ,2^{r-l}-1 \bigr\}. $$For general grids *G* the leftmost and lowermost squares are clipped accordingly.
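Following the displayed union over levels, the dyadic system for a 2^{r}×2^{r} grid can be enumerated level by level; a small sketch (our own illustration, listing each square as a list of pixel indices, with the smallest level here being 2×2 squares):

```python
def dyadic_squares(r):
    """Enumerate the dyadic system S_2 for a 2^r x 2^r grid: for each
    level l = 1..r the grid is partitioned into disjoint squares of side
    2^l (single pixels could be added as a level l = 0)."""
    system = []
    for l in range(1, r + 1):
        side = 2 ** l
        for bi in range(2 ** (r - l)):
            for bj in range(2 ** (r - l)):
                system.append([(i, j)
                               for i in range(bi * side, (bi + 1) * side)
                               for j in range(bj * side, (bj + 1) * side)])
    return system

system = dyadic_squares(2)  # a 4x4 grid: four 2x2 squares and one 4x4 square
```

The cardinality of the system grows only linearly in the number of pixels, which is what makes the dyadic choice attractive computationally.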

Figure 2 shows a test image *u*^{0} (left, at a resolution of *m*×*n*=341×512 pixels and gray values scaled in [0,1]) and data *Y* according to (1) with *K*=Id and *σ*=0.1.

If the penalized estimator (2) with a global parameter *λ* is applied to these data, an estimate of *u*^{0} is computed that exhibits both over- and undersmoothed regions (here, we set *λ*=0.075). This estimator is depicted in Fig. 3. The oversmoothed parts in \(\hat{u}(\lambda)\) can be identified via the MR-statistic *T* in (7) by marking those sets *S* in \(\mathcal{S}\) on which the local constraint is violated, i.e. for which \(c_{S} t_{S}(K\hat{u}(\lambda)-Y) > 1\). Such sets from the systems \(\mathcal{S}_{0}\) (left column) and \(\mathcal{S}_{2}\) (right column) are highlighted in Fig. 4, where we examine the scales |*S*|=4,8,16. The parameters *c*_{S} are chosen as in Sect. 2.2 with *α*=0.9.

## 3 Algorithmic Methodology

In what follows, we present an algorithmic approach to the numerical computation of SMREs in practice that extends the methodology in [20], where we proposed an alternating direction method of multipliers (ADMM). Here, we use an *inexact* version of the ADMM which decomposes the original problem into a series of subproblems that are substantially easier to solve. In particular, an inversion of the operator *K* is no longer necessary. For this reason the inexact ADMM has attracted much attention recently (see e.g. [9, 14, 36]).

### 3.1 Inexact ADMM

In order to compute a saddle point of *L*_{λ} in (11), we use the inexact ADMM, which can be considered as a modified version of the Uzawa algorithm (see e.g. [16, Chap. III]). Starting with some initial *p*_{0}∈ℝ^{m×n}, the original Uzawa algorithm consists in iteratively computing

- (1)
\((u_{k},v_{k}) \in \operatorname{argmin}_{u\in\text{L}^{2}(\varOmega), v\in\mathbb{R}^{m\times n}}L_{\lambda}(u,v;p_{k-1})\)

- (2)
*p*_{k}=*p*_{k−1}+*λ*(*Ku*_{k}−*v*_{k}).

Step (1) amounts to a joint minimization in *u* and *v*, whereas step (2) constitutes an explicit maximization step for the Lagrange multiplier *p*. The algorithm is usually stopped once the constraint in (9) is fulfilled up to a certain tolerance.

A first modification of the Uzawa method is to minimize successively with respect to *u* and *v* instead of minimizing simultaneously, i.e. given (*u*_{k−1},*v*_{k−1},*p*_{k−1}) we compute

- (1)
\(u_{k} \in\operatorname{argmin}_{u\in\text{L}^{2}(\varOmega)} L_{\lambda}(u,v_{k-1};p_{k-1})\)

- (2)
\(v_{k} \in\operatorname{argmin}_{v\in\mathbb{R}^{m\times n}} L_{\lambda}(u_{k},v;p_{k-1})\)

- (3)
*p*_{k}=*p*_{k−1}+*λ*(*Ku*_{k}−*v*_{k}).

This iteration is the *alternating direction method of multipliers (ADMM)* as proposed in [16, Chap. III], where convergence of the algorithm has been studied for the case when *J* satisfies some regularity assumptions. In [20] we extended this result to general functionals *J* (as for example the total variation semi-norm (3)). The resulting two minimization problems usually can be tackled much more efficiently than the original problem.

Still, the minimization with respect to *u* in step (1) involves an inversion of the operator *K*. Thus, a second modification adds in the *k*-th loop of the algorithm the following additional term to *L*_{λ}(*u*,*v*_{k−1};*p*_{k−1}):
$$ \frac{1}{2\lambda} \bigl( \zeta \Vert u - u_{k-1} \Vert_{\text{L}^{2}}^{2} - \Vert K(u - u_{k-1}) \Vert^{2} \bigr), \qquad(14) $$
where *ζ* is chosen such that *ζ*>∥*K*∥^{2} (which guarantees that the added term is nonnegative). After some rearrangements of the terms in *L*_{λ} and (14) it can easily be seen that the quadratic term in *Ku* cancels out and thus the undesirable inversion of *K* is replaced by a single evaluation of *K* at the previous iterate *u*_{k−1}. However, by adding (14) the distance to the previous iterate *u*_{k−1} is additionally penalized and *L*_{λ}(*u*,*v*_{k−1};*p*_{k−1}) is minimized only *inexactly*.

After the aforementioned rearrangements and by keeping in mind that *H* is the indicator function of the convex set \(\mathcal{C}\) in (10), the inexact ADMM can be summarized as follows:
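Algorithm 1 is stated in the paper as a pseudocode box; the following Python sketch is our own schematic rendering of its update structure. The solvers `prox_J` (the denoising step (15)) and `project_C` (the projection step (16)) are placeholders to be supplied by the user, and the multiplier step size follows the convention of the iteration above:

```python
import numpy as np

def inexact_admm(Y, K, Kt, prox_J, project_C, lam, zeta, n_iter, shape):
    """Schematic inexact ADMM for  min J(u)  subject to  Ku in C.

    prox_J(z, tau): solves the denoising subproblem, i.e. approximately
                    argmin_u J(u) + ||u - z||^2 / (2 * tau)
    project_C(v):   projection onto the feasible region C
    """
    u = np.zeros(shape)
    v = np.zeros_like(Y)
    p = np.zeros_like(Y)
    for _ in range(n_iter):
        # inexact u-update: thanks to the extra preconditioning term,
        # K is only applied, never inverted
        z = u - Kt(K(u) - v + lam * p) / zeta
        u = prox_J(z, lam / zeta)
        # v-update: project onto the feasible region
        v = project_C(K(u) + lam * p)
        # explicit multiplier update
        p = p + lam * (K(u) - v)
    return u

# toy check with K = Id, J(u) = ||u||^2/2 and C a ball of radius 0.5 around Y;
# for Y = ones (norm 2) the exact solution of min J s.t. ||u - Y|| <= 0.5
# is u = 0.75 * Y
Y = np.ones((2, 2))
ball = lambda v: Y + (v - Y) * min(1.0, 0.5 / max(np.linalg.norm(v - Y), 1e-12))
u_toy = inexact_admm(Y, lambda x: x, lambda x: x,
                     lambda z, tau: z / (1.0 + tau), ball,
                     lam=1.0, zeta=1.1, n_iter=100, shape=(2, 2))
```

The toy check uses `lam=1.0`, for which the two common scalings of the dual step (factor *λ* versus 1/*λ*) coincide; it is meant only to illustrate the update structure, not to reproduce the paper's experiments.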

### Theorem 3.1

[9, Theorem 1] *Assume that* \((\hat{u}, \hat{v}, \hat{p})\) *is a saddle point of* *L*_{λ}. *Moreover*, *let* {*u*_{k},*v*_{k},*p*_{k}}_{k∈ℕ} *be generated by Algorithm* 1 *with* *ζ*>∥*K*∥^{2} *and define the averaged sequences* \(\bar{u}_{k} = \frac{1}{k}\sum_{l=1}^{k} u_{l}\) *and* \(\bar{p}_{k} = \frac{1}{k}\sum_{l=1}^{k} p_{l}\). *Then*, *each weak cluster point of* \(\{ \bar{u}_{k} \}_{k\in \mathbb{N}}\) *is a solution of* (6) *and there exists a constant* *C*>0 *such that*
$$ D_{J}^{\text{sym}}(\bar{u}_{k}, \hat{u}) + D_{H^{*}}^{\text{sym}}(\bar{p}_{k}, \hat{p}) \leq \frac{C}{k}. \qquad(18) $$

The above result is rather general and in particular situations the assertions may be quite weak. In particular if *J* and *H*^{∗} have *linear growth*, as it is for instance the case for *J* as in (3) and *H* as in (10), the Bregman distances appearing in (18) may vanish although \((\bar{u}_{k}, \bar{p}_{k}) \neq (\hat{u}, \hat{p})\). If at least one of the functionals *J* or *H*^{∗} is uniformly convex, it is possible to come up with accelerated versions of Algorithm 1 that allow for stronger convergence results (see [9]). For the sake of simplicity we restrict our consideration to the basic algorithm.

### 3.2 Subproblems

Closer inspection of Algorithm 1 reveals that the original problem—computing a saddle point of *L*_{λ}—has been replaced by an iterative series of subproblems (15) and (16). We will now examine these two subproblems and propose methods that are suited to solve them. Here we proceed as in [20].

We start with the projection step (16). This subproblem amounts to computing the projection of *v*_{k}:=*Ku*_{k}+*λp*_{k−1} onto the feasible region \(\mathcal{C}\) as defined in (10). Due to the supremum taken in the definition (7) of the statistic *T*, we can decompose \(\mathcal{C}\) into \(\mathcal{C} = \bigcap_{S \in\mathcal {S}} \mathcal{C}_{S}\), where \(\mathcal{C}_{S}\) collects the constraint on the set *S* only. Note that all \(\mathcal{C}_{S}\) are closed and convex sets (in fact, they are circular cylinders in ℝ^{m×n}; see the left panel in Fig. 5). If we fix a set \(\mathcal{C}_{S}\) and consider some \(v\notin\mathcal{C}_{S}\), the projection of *v* onto \(\mathcal{C}_{S}\) can be stated explicitly: the entries of *v* outside of *S* remain unchanged, while on *S* the residual *v*−*Y* is shrunk radially towards *Y* until the constraint on *S* becomes active.

This insight leads us to the conclusion that any method which computes the projection onto the intersection of closed and convex sets using only the projections onto the individual sets is suitable for solving (16). Dykstra’s algorithm [4] works exactly in this way and is hence our method of choice for (16). For a detailed statement of the algorithm and how the total number of sets that enter it may be decreased to speed up runtimes, see [20, Sect. 2.3]. We note that despite these considerations, the predominant part of the computation time of Algorithm 1 is spent on the projection step (16). So far we did not take into account parallelization of the projection algorithm. To some extent this is possible in a straightforward manner, since the projections onto disjoint sets in \(\mathcal{S}\) can be carried out simultaneously (on GPUs for instance). But also inherently parallel projection algorithms (including parallel versions of Dykstra’s method) have received much attention recently and would potentially yield a speed-up of Algorithm 1. See for instance [6] for an overview.
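Dykstra's scheme itself only needs the individual projections. The sketch below is our own illustration: `project_cylinder` implements a radial-shrinkage projection onto a single cylinder-type set (the radius parametrization is our reconstruction, not a formula from the paper), and `dykstra` combines arbitrary projections:

```python
import numpy as np

def project_cylinder(v, Y, rows, cols, radius):
    """Projection onto one set C_S: entries outside S are untouched;
    on S the residual v - Y is shrunk radially towards Y so that its
    norm restricted to S does not exceed `radius`."""
    w = v.astype(float).copy()
    d = w[rows, cols] - Y[rows, cols]
    nrm = np.sqrt(np.sum(d ** 2))
    if nrm > radius:
        w[rows, cols] = Y[rows, cols] + d * (radius / nrm)
    return w

def dykstra(v0, projections, n_iter=100):
    """Dykstra's algorithm: projects v0 onto the intersection of convex
    sets, using only the projections onto the individual sets."""
    x = v0.astype(float).copy()
    increments = [np.zeros_like(x) for _ in projections]
    for _ in range(n_iter):
        for i, proj in enumerate(projections):
            y = proj(x + increments[i])
            increments[i] = x + increments[i] - y
            x = y
    return x
```

Unlike the simpler method of alternating projections, Dykstra's increments guarantee convergence to the *exact* projection of the input point, which is what the v-update (16) requires.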

We finally turn our attention to (15). In contrast to the standard version of the ADMM as proposed in [20], the second subproblem in Algorithm 1 does not involve the inversion of the operator *K*. For this reason, (15) here simply amounts to solving an unconstrained denoising problem with a least-squares data-fit. Numerous methods for a wide range of different choices of *J* are available in order to cope with this problem. If *J* is chosen as the total variation seminorm, for example, the methods introduced in [7, 10, 26] will be suited (we will use the one in [10]).

## 4 Application in Fluorescence Microscopy

For image acquisition techniques that are based on single photon counts of a light emitting sample, such as fluorescence microscopy, the Gaussian error assumption (1) is not realistic. Here, the non-additive model (8) is to be preferred. Still, the estimation paradigm above can be adapted to this scenario by means of *variance stabilizing transformations*. To this end we first recall [5, Lemma 1].

According to this result, choosing *c*=3/8 stabilizes the variance of \(2\sqrt{Y+c}\) at the constant value 1 (in second order) and its mean at \(2\sqrt {\beta}\) (in first order); in other words, it approximately holds that \(2\sqrt{Y_{ij}+3/8} \sim \mathcal{N} (2\sqrt{(Ku^{0})_{ij}},\, 1 )\). The choice *c*=1/4 results in a better reduction of the bias, since the mean is then stabilized at \(2\sqrt{\beta}\) in second order (at the cost of a less stable variance). Numerically, we found the difference to be negligible for our purposes.
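The stabilizing effect of the transform can be checked numerically; a quick Monte Carlo sketch (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def anscombe(y, c=3.0 / 8.0):
    """Anscombe-type transform: for y ~ Pois(beta), 2*sqrt(y + c) is
    approximately N(2*sqrt(beta), 1) once beta is moderately large."""
    return 2.0 * np.sqrt(y + c)

beta = 20.0
samples = anscombe(rng.poisson(beta, size=200_000))
stabilized_var = samples.var()   # close to 1 by variance stabilization
```

For moderate intensities the empirical variance is already very close to 1, which is what justifies plugging the transformed data into the Gaussian framework of Sect. 2.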

Note that for every *c*>0 the function \(t\mapsto(\sqrt{t} - c)^{2}\) is convex on [0,∞) and thus Problem (21) is again a convex optimization problem. Similarly as in Sect. 2.1, we can rewrite (21) into an equality constrained problem (22) by introducing an auxiliary variable.

The right panel in Fig. 5 depicts the sets \(\tilde {\mathcal{C}}\) for the simple case *m*=2 and *n*=1. It is important to note that, in contrast to the Gaussian case (left panel), the feasibility sets \(\tilde{\mathcal{C}}\) are not translation invariant, i.e. their shape and size depend on the data *Y*. In particular, the size increases with ∥*Y*∥, which is due to the fact that the variance of a Poisson random variable with law Pois(*β*) increases linearly with the parameter *β*.

In contrast to the Gaussian case, the projections onto these sets can in general no longer be computed explicitly (except on the smallest scale |*S*|=1) and approximate solutions have to be used. Since during the runtime of Algorithm 1 these projections are to be computed a considerable number of times, this is clearly undesirable, since inevitable numerical errors are likely to accumulate.

We therefore pursue a different strategy: we introduce an auxiliary variable *w* playing the role of \(\sqrt{Ku}\) for *u*∈L^{2}(*Ω*), and in the *k*-th loop of Algorithm 1 we replace the estimator \(\hat{y}\) by \(\sqrt{Ku_{k}}\). To avoid instabilities we rather use \(\sqrt{\max(Ku_{k},\delta)}\) for some small positive parameter *δ*>0. We formalize these ideas in Algorithm 2.

In practice the algorithm has proved to be very stable; a proof of numerical convergence of Algorithm 2, however, appears to be rather involved and is beyond the scope of this paper. If \((\bar{u}, \bar{w}, \bar{p})\) is a limit of the sequence (*u*_{k},*w*_{k},*p*_{k}) in Algorithm 2, it is quite obvious that \((\bar{u}, \bar{w}^{2})\) is a solution of (22) and hence that \(\bar{u}\) solves (21). Moreover, it is not straightforward to incorporate a preconditioner similar to (14) that renders the step (26) semi-implicit.

## 5 Numerical Results

We conclude this paper by demonstrating the performance of SMREs as computed by the methodology introduced in the previous sections. We will treat the denoising problem in Sect. 5.1 as well as deconvolution and inpainting problems in Sect. 5.2. Finally, we will study SMREs for the Poisson model (8) computed by means of Algorithm 2 in Sect. 5.3. Here we will use real data from nanoscale fluorescence microscopy provided by the Department of NanoBiophotonics at the Max Planck Institute for Biophysical Chemistry in Göttingen.^{1}

Throughout this section we consider the discrete setting, i.e. we regard an image *u* as an *m*×*n* array of pixels rather than an element in L^{2}(*Ω*). Accordingly, the operator *K* is realized as an *mn*×*mn* matrix and ∇ denotes the discrete (forward) gradient. In all our experiments we use a step size *λ*=0.001 for the ADMM method (Algorithm 1), and we stop the iteration once the residual *Ku*_{k}−*Y* satisfies the rescaled multiscale constraints on all sets in \(\mathcal{S}\) and the mismatch *Ku*_{k}−*v*_{k} is sufficiently small. Here, \(\mathcal{S}\) is the system of subsets in use and *t*_{S}, *μ*_{S} and *σ*_{S} are defined as in Sect. 2.2.
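The discrete (forward) gradient and the induced discrete total variation used in this section can be sketched as follows (a common forward-difference discretization; the boundary handling here is our assumption):

```python
import numpy as np

def forward_gradient(u):
    """Discrete forward-difference gradient; differences across the last
    row/column are set to zero (Neumann-type boundary handling)."""
    gx = np.zeros_like(u, dtype=float)
    gy = np.zeros_like(u, dtype=float)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def tv(u):
    """Isotropic discrete total variation: sum of |(grad u)_ij|."""
    gx, gy = forward_gradient(u)
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2)))

u_step = np.zeros((3, 3))
u_step[:, 2] = 1.0   # a single vertical jump of height 1 across three rows
```

For the step image `u_step` the discrete TV equals the jump height times the length of the jump set, matching the continuous intuition behind (3).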

For the Poisson modification in Algorithm 2 we use the same criteria, except that *Ku*_{k}−*v*_{k} is replaced by \(\sqrt{Ku_{k}}-w_{k}\) and *Ku*_{k}−*Y* by \(2\sqrt{Ku_{k}}-X\), where *X* is as in Sect. 4.

### 5.1 Denoising

We first consider recovering *u*^{0} from data *Y* given by (1) when *K* is the identity matrix and *u*^{0} is the test image in Fig. 2 (*m*=341 and *n*=512). We study the noise levels *σ*=0.1 (10 % Gaussian noise) and *σ*=0.2 (20 % Gaussian noise). We compute SMREs based on the subsystems \(\mathcal{S}_{0}\) and \(\mathcal{S}_{2}\) as introduced in Sect. 2.3, where we fixed *α*=0.9. To this end we utilize Algorithm 1 with *ζ*=1, i.e. the standard ADMM.

For comparison, we also compute penalized least squares estimators \(\hat{u}(\lambda)\) (for a scalar parameter *λ*>0) as defined in (2). We choose *λ*=*λ*_{2} and *λ*=*λ*_{B} such that the mean squared distance and the mean symmetric Bregman distance to the true signal *u*^{0} are minimized, respectively. For *J* as in (3), \(D^{\text{sym}}_{J}(u,v)\) is small if for sufficiently many pixels (*i*,*j*)∈*G* either both *u* and *v* are constant in a neighborhood of (*i*,*j*) or the level lines of *u* and *v* at (*i*,*j*) are locally parallel. In practice, we rather use a smoothed version of *J* in (3) with some small constant *β*≈10^{−8}; then the above formulae are slightly more complicated. Since the parameters *λ*_{2} and *λ*_{B} are not accessible in practice, as *u*^{0} is unknown, we refer to \(\hat{u}(\lambda_{2})\) and \(\hat{u}(\lambda_{\text{B}})\) as L^{2}- and *Bregman-oracle*, respectively. Our simulations lead to the values *λ*_{2}=0.026, 0.0789 and *λ*_{B}=0.0607, 0.1767 for *σ*=0.1, 0.2, respectively.

In addition, we compare our approach to the *spatially adaptive TV (SA-TV)* method as introduced in [11]. The SA-TV algorithm approximates solutions of (6) for the case where \(\mathcal{S}\) constitutes the set of all translates of a fixed window *S*⊂*G* (cf. also [1]) by computing a solution of (5) with a suitable spatially dependent regularization parameter *λ*. Starting from a (constant) initial parameter *λ*≡*λ*_{0} the SA-TV algorithm iteratively adjusts *λ* by increasing it in regions that were poorly reconstructed in the previous step. For our numerical comparisons, we used the SA-TV-Algorithm considering square windows with side lengths 11 (as suggested in [11]) and 19. All parameters involved in the algorithm were chosen as suggested in [11]. In particular we set *λ*_{0}=0.5 and choose an upper bound for *λ* of *L*=1000 in all our simulations. As a stopping condition, we used the discrepancy principle which ended the reconstruction process after exactly four iteration steps in all of our experiments.

The reconstructions are depicted in Fig. 6 (*σ*=0.1) and Fig. 7 (*σ*=0.2). By visual inspection, we find that the oracles are globally under- (L^{2}) and over-regularized (Bregman), respectively. While the scalar parameter *λ* was chosen optimally w.r.t. the different distance measures, it still cannot cope with the spatially varying smoothness of the true object *u*^{0}.

In contrast, the SMRE adapts the amount of regularization to all scales in \(\mathcal{S}\) *at once*, while SA-TV only adapts the parameter on a single given scale. As a result, SA-TV reconstructions are of varying quality for finer and coarser features of the object, while the SMRE is capable of reconstructing such features equally well. This becomes particularly obvious when zooming into the reconstructions (cf. Fig. 8).

### 5.2 Deconvolution & Inpainting

We next consider examples where the operator *K* in (1) is non-trivial. To be exact, we consider *inpainting* and *deconvolution* problems. For the former we consider an inpainting domain that occludes 15 % of the image with noise level *σ*=0.1 (upper left panel in Fig. 9), and for the latter a Gaussian convolution kernel with variance 2 and noise level *σ*=0.02 (lower left panel in Fig. 9).

For all experiments we use the dyadic system \(\mathcal{S}_{2}\) and *α*=0.9. Note that in both cases we have *K*=*K*^{∗} and ∥*K*∥=1; we therefore set *ζ*=1.01 in (15). The results are depicted in the upper right and lower right images of Fig. 9, respectively.
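When ∥*K*∥ is not known analytically as it is here, the bound *ζ*>∥*K*∥ can be estimated numerically, e.g. by power iteration on *K*^{∗}*K*. The function below is a generic sketch (names and the averaging-blur example are illustrative, not from the paper); it only needs the operator and its adjoint as black boxes.

```python
import numpy as np

def operator_norm(K, Kt, shape, iters=200, seed=0):
    """Estimate ||K|| = sqrt(largest eigenvalue of K*K) by power iteration.
    K, Kt: callables applying the operator and its adjoint."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = Kt(K(x))
        x /= np.linalg.norm(x)
    return float(np.sqrt(np.vdot(x, Kt(K(x))).real))

# Example: a self-adjoint periodic blur averaging the four neighbours.
# Its kernel weights are non-negative and sum to one, hence ||K|| = 1.
def blur(u):
    return 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1))

norm_K = operator_norm(blur, blur, (32, 32))
zeta = 1.01 * norm_K            # safety factor, as for zeta = 1.01 above
print(f"||K|| ~ {norm_K:.4f}")
```

For a self-adjoint *K*, as in both examples here, `Kt` is simply `K` again.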

Again, the results indicate that a reasonable trade-off between data fit and smoothing is found by the proposed a priori parameter choice rule and that the amount of smoothing is adapted according to the image features.
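The role of the dyadic system can be made concrete with a schematic variant of a multiresolution statistic (illustrative only; the statistic *T* in (7) additionally involves scale weights and a penalization not reproduced here): the largest normalized partial sum of the residuals over all dyadic squares. If structure is left in the residual on *some* scale, one of the windows picks it up.

```python
import numpy as np

def dyadic_squares(n):
    """All dyadic squares of an n x n grid (n a power of two):
    side lengths n, n/2, ..., 1, tiling the grid at each scale."""
    s = n
    while s >= 1:
        for i in range(0, n, s):
            for j in range(0, n, s):
                yield i, j, s
        s //= 2

def mr_statistic(residual):
    """Schematic multiresolution statistic: the largest absolute partial
    sum of the residuals over all dyadic squares S, normalised by
    sqrt(|S|) = s so that pure noise contributes O(1) on every scale."""
    n = residual.shape[0]
    return max(abs(residual[i:i + s, j:j + s].sum()) / s
               for i, j, s in dyadic_squares(n))

rng = np.random.default_rng(2)
noise = rng.standard_normal((64, 64))
signal = noise.copy()
signal[16:32, 16:32] += 0.5     # a leftover 16 x 16 feature in the residual
print(mr_statistic(noise), mr_statistic(signal))
```

The residual with the leftover feature yields a markedly larger statistic than pure noise, which is precisely what the constraint in (6) penalizes.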

### 5.3 Examples from Fluorescence Microscopy

We finally study the performance of our approach in a practical application, namely fluorescence microscopy. To be more precise, we consider deconvolution problems for standard confocal microscopy and STED (STimulated Emission Depletion) microscopy. Both examples have in common that the recorded data is a realization of independent Poisson variables where the intensity at each pixel is determined by a blurred version of the true signal. In other words, Model (8) applies. In both cases the blurring can be modelled (to first order) as a convolution with a Gaussian kernel, where the width of the kernel for confocal microscopes is 3–4 times larger than for STED. As standard references we refer to [32] (confocal microscopy) and to [24, 25] (STED).

The specimens under investigation are PtK2 cells of *Potorous tridactylus*, where beforehand the protein *β*-tubulin was tagged with a fluorescent marker. What becomes visible is the microtubule part of the cytoskeleton of the cells. The left panels in Figs. 10 and 12 show the confocal and STED recordings, respectively. Both sample images show an area of 18×18 μm^{2} at a resolution of 798×798 pixels. As a regularization functional we use in both cases a combination of the total variation semi-norm and the L^{2}-norm as in (4) with *γ*=1.

#### 5.3.1 Confocal Microscopy

Figure 10 depicts a confocal recording of a PtK2 cell (left) and the solution of (21) computed by Algorithm 2 (right). We have used the subset of \(\mathcal {S}_{0}\) with maximal side length of 20 pixels, and the scale weights *c*_{S} are chosen as in Proposition 2.3 with *α*=0.9. For the convolution kernel we assume a full width at half maximum of 230 nm, which corresponds to a standard deviation of 4.3422 pixels. Due to this relatively large kernel, the impact of the deconvolution is clearly visible.
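The kernel width follows from the standard FWHM-to-standard-deviation conversion for a Gaussian, σ = FWHM/(2√(2 ln 2)), with the pixel pitch taken from the 18×18 μm^{2}, 798×798 px images. The short computation below yields ≈4.33 px, close to the 4.3422 px quoted above; the small difference presumably stems from the exact pixel pitch used.

```python
import math

fwhm_nm = 230.0                                          # full width at half maximum
sigma_nm = fwhm_nm / (2 * math.sqrt(2 * math.log(2)))    # sigma ~ fwhm / 2.3548

pixel_nm = 18_000 / 798                                  # 18 um over 798 pixels
sigma_px = sigma_nm / pixel_nm
print(f"sigma = {sigma_nm:.1f} nm = {sigma_px:.2f} px")
```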

We finally remark that we proposed a different multi-scale deconvolution method for confocal microscopy in [20]. In contrast to this work, we there used a different MR-statistic than *T* in (7) and the standardization \((Y-\beta)\slash\sqrt{\beta}\) in order to transform the Poisson data *Y* to normality. The performance of the two approaches for confocal recordings is comparable, since the image intensity (= photon count rate) is relatively high throughout the data and hence standardization yields a fair approximation to normality. However, for low-count Poisson data, as we will investigate in the following section, our new approach is clearly preferable, mostly due to the fact that Anscombe’s transform also works well for small intensities (cf. [5]).
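The small-intensity behaviour can be checked with an illustrative simulation (not from the paper): Anscombe's transform 2√(Y+3/8) stabilizes the variance at ≈1 without knowledge of the intensity *μ*, and is markedly less skewed, i.e. closer to normal, than the standardization (Y−μ)/√μ for small counts (the latter has unit variance by construction but inherits the full Poisson skewness 1/√μ).

```python
import numpy as np

def skew(z):
    z = z - z.mean()
    return (z**3).mean() / z.std()**3

rng = np.random.default_rng(3)
n = 200_000

for mu in (2.0, 5.0, 20.0):
    y = rng.poisson(mu, n).astype(float)
    a = 2 * np.sqrt(y + 3 / 8)          # Anscombe transform
    s = (y - mu) / np.sqrt(mu)          # naive standardisation
    print(f"mu = {mu:4.1f}: Var(A) = {a.var():.3f}, "
          f"skew(A) = {skew(a):+.2f}, skew(S) = {skew(s):+.2f}")
```

Already at *μ*=2 the Anscombe-transformed data has variance close to 1 and only a small residual skewness, while the standardized data is as skewed as the raw counts.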

#### 5.3.2 STED Microscopy

Figure 12 shows the STED recording (left panel) and the corresponding SMRE (right panel). Again, the scale weights *c*_{S} are chosen according to Proposition 2.3 with *α*=0.9. Due to the relatively small convolution kernel, the impact of the deconvolution is less striking than e.g. for the confocal recording in Sect. 5.3.1.

For comparison, we also computed a *global* reconstruction \(\hat{u}_{\text{g}}\), i.e. a solution of (21) w.r.t. the trivial system \(\mathcal{S}= \{ G \}\) and the parameter *c*_{G} as in Proposition 2.3 with *α*=0.9. The global reconstruction exhibits typical concentration phenomena (especially in the upper half of the image) that are due to the ill-posedness of the deconvolution. These artefacts are less prominent for the SMRE solution.

At the same time, the global image reconstruction has artefacts due to the ill-posedness of the deconvolution (upper zoom-box). These can only be avoided by invoking stronger regularization. A comparison with the corresponding details of the SMRE (right) shows that our locally adaptive approach lacks this undesirable behaviour.

## 6 Conclusion

In this paper we show how statistical multiresolution estimators, that is, solutions of (6), can be employed for image reconstruction. We stress that our method, combined with a new automatic parameter selection rule, locally adapts the amount of regularization according to the multi-scale nature of the image features. For the solution of the optimization problem (6) we suggest an inexact alternating direction method of multipliers combined with Dykstra’s projection algorithm. We show how this estimation paradigm can be extended to the Poisson model, which opens up a vast field of applications such as Poisson nanoscale fluorescence microscopy. Besides this application, the performance of our method is illustrated for standard problems in imaging such as denoising and inpainting.

## Acknowledgements

K.F. and A.M. are supported by the DFG–SNF Research Group FOR916 *Statistical Regularization and Qualitative Constraints* (Z-Project). P.M. is supported by the BMBF project 03MUPAH6 *INVERS*. A.M. and P.M. are supported by the SFB755 *Nanoscale Photonic Imaging* and the SFB803 *Functionality Controlled by Organization in and between Membranes*. The authors are indebted to S. Hell, A. Egner and A. Schoenle (Department of NanoBiophotonics, Max Planck Institute for Biophysical Chemistry, Göttingen and Laser Laboratorium Göttingen) for providing the microscopy data and for fruitful discussions.

### Open Access

This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.