
Multiview Attenuation Estimation and Correction

Journal of Mathematical Imaging and Vision

Abstract

Measuring attenuation coefficients is a fundamental problem that can be solved with diverse techniques such as X-ray or optical tomography and lidar. We propose a novel approach based on the observation of a sample from a few different angles. This principle can be used in existing devices such as lidar or various types of fluorescence microscopes. It is based on the resolution of a nonlinear inverse problem. We propose a specific computational approach to solve it and show the well-foundedness of the approach on simulated data. Some of the tools developed are of independent interest. In particular, we propose an efficient method to correct attenuation defects, new robust solvers for the lidar equation as well as new efficient algorithms to compute the proximal operator of the logsumexp function in dimension 2.


Notes

  1. Applying the logarithm is important for numerical purposes. When \(y_2-y_1-a\) is very negative, the exponential \(\exp (y_2-y_1-a)\) underflows and cannot be computed accurately in double precision.

References

  1. Ansmann, A., Riebesell, M., Weitkamp, C.: Measurement of atmospheric aerosol extinction profiles with a Raman lidar. Opt. Lett. 15(13), 746–748 (1990)


  2. Boyer, C., Chambolle, A., De Castro, Y., Duval, V., De Gournay, F., Weiss, P.: On representer theorems and convex regularization. arXiv preprint arXiv:1806.09810 (2018)

  3. Can, A., Al-Kofahi, O., Lasek, S., Szarowski, D.H., Turner, J.N., Roysam, B.: Attenuation correction in confocal laser microscopes: a novel two-view approach. J. Microsc. 211(1), 67–79 (2003)


  4. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imag. Vis. 20(1–2), 89–97 (2004)


  5. Chambolle, A., Pock, T.: An introduction to continuous optimization for imaging. Acta Numer. 25, 161–319 (2016)


  6. Chhetri, R.K., Amat, F., Wan, Y., Höckendorf, B., Lemon, W.C., Keller, P.J.: Whole-animal functional and developmental imaging with isotropic spatial resolution. Nat. Methods 12, 1171 (2015)


  7. Combettes, P.L., Pesquet, J.C.: Proximal splitting methods in signal processing. In: Fixed-point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, New York, NY (2011)

  8. Cremer, C., Cremer, T.: Considerations on a laser-scanning-microscope with high resolution and depth of field. Microsc. Acta 81, 31–44 (1978)


  9. Cuesta, J., Flamant, P.H.: Lidar beams in opposite directions for quality assessment of Cloud–Aerosol Lidar with orthogonal polarization spaceborne measurements. Appl. Opt. 49(12), 2232–2243 (2010)


  10. Dupé, F.-X., Fadili, J.M., Starck, J.-L.: A proximal iteration for deconvolving Poisson noisy images using sparse representations. IEEE Trans. Image Process. 18(2), 310–321 (2009)


  11. Fortin, M., Glowinski, R.: Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-value Problems, vol. 15. Elsevier, New York (2000)


  12. Garbarino, S., Sorrentino, A., Massone, A.M., Sannino, A., Boselli, A., Wang, X., Spinelli, N., Piana, M.: Expectation maximization and the retrieval of the atmospheric extinction coefficients by inversion of Raman lidar data. Opt. Express 24(19), 21497–21511 (2016)

  13. Hell, S., Stelzer, E.H.K.: Properties of a 4Pi confocal fluorescence microscope. JOSA A 9(12), 2159–2166 (1992)


  14. Hiriart-Urruty, J.B.: A note on the Legendre-Fenchel transform of convex composite functions. In: Nonsmooth Mechanics and Analysis, pp. 35–46. Springer, Boston, MA (2006)

  15. Hughes, H.G., Paulson, M.R.: Double-ended lidar technique for aerosol studies. Appl. Opt. 27(11), 2273–2278 (1988)

  16. Huisken, J., Stainier, D.Y.R.: Even fluorescence excitation by multidirectional selective plane illumination microscopy (mSPIM). Opt. Lett. 32(17), 2608–2610 (2007)

  17. Kervrann, C., Legland, D., Pardini, L.: Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy. J. Microsc. 214(3), 297–314 (2004)


  18. Klett, J.D.: Stable analytical inversion solution for processing lidar returns. Appl. Opt. 20(2), 211–220 (1981)


  19. Krzic, U., Gunther, S., Saunders, T.E., Streichan, S.J., Hufnagel, L.: Multiview light-sheet microscope for rapid in toto imaging. Nat. Methods 9(7), 730–733 (2012)

  20. Kunz, G.J.: Bipath method as a way to measure the spatial backscatter and extinction coefficients with lidar. Appl. Opt. 26(5), 794–795 (1987)

  21. Mayer, J., Robert-Moreno, A., Danuser, R., Stein, J.V., Sharpe, J., Swoger, J.: OPTiSPIM: integrating optical projection tomography in light sheet microscopy extends specimen characterization to nonfluorescent contrasts. Opt. Lett. 39(4), 1053–1056 (2014)


  22. Natterer, F.: The Mathematics of Computerized Tomography, vol. 32. Siam, Philadelphia (1986)


  23. Ng, M.K., Weiss, P., Yuan, X.: Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods. SIAM J. Sci. Comput. 32(5), 2710–2736 (2010)


  24. Ortega, J.M.: The Newton–Kantorovich theorem. Am. Math. Mon. 75(6), 658–660 (1968)


  25. Polyak, B.T.: Newton’s method and its use in optimization. Eur. J. Oper. Res. 181(3), 1086–1096 (2007)


  26. Pornsawad, P., Böckmann, C., Ritter, C., Rafler, M.: Ill-posed retrieval of aerosol extinction coefficient profiles from Raman lidar data by regularization. Appl. Opt. 47(10), 1649–1661 (2008)


  27. Rigaut, J.P., Vassy, J.: High-resolution three-dimensional images from confocal scanning laser microscopy. Quantitative study and mathematical correction of the effects from bleaching and fluorescence attenuation in depth. Anal. Quant. Cytol. Histol. 13(4), 223–232 (1991)


  28. Roerdink, J.B.T.M., Bakker, M.: An FFT-based method for attenuation correction in fluorescence confocal microscopy. J. Microsc. 169(1), 3–14 (1993)


  29. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 60(1), 259–268 (1992)


  30. Schmidt, T., Dürr, J., Keuper, M., Blein, T., Palme, K., Ronneberger, O.: Variational attenuation correction in two-view confocal microscopy. BMC Bioinformatics 14(1), 366 (2013)


  31. Sharpe, J., Ahlgren, U., Perry, P., Hill, B., Ross, A., Hecksher-Sørensen, J., Baldock, R., Davidson, D.: Optical projection tomography as a tool for 3D microscopy and gene expression studies. Science 296(5567), 541–545 (2002)


  32. Shcherbakov, V.: Regularized algorithm for Raman lidar data processing. Appl. Opt. 46(22), 4879–4889 (2007)


  33. Steidl, G., Teuber, T.: Removing multiplicative noise by Douglas–Rachford splitting methods. J. Math. Imag. Vis. 36(2), 168–184 (2010)


  34. Tomer, R., Khairy, K., Amat, F., Keller, P.J.: Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy. Nat. Methods 9(7), 755–763 (2012)


  35. Vermeer, K.A., Mo, J., Weda, J.J.A., Lemij, H.G., de Boer, J.F.: Depth-resolved model-based reconstruction of attenuation coefficients in optical coherence tomography. Biomed. Opt. Express 5(1), 322–337 (2014)


  36. Weitkamp, C.: Lidar: Range-Resolved Optical Remote Sensing of the Atmosphere, vol. 102. Springer Science & Business Media, New York (2006)

  37. Zhu, C., Byrd, R.H., Lu, P., Nocedal, J.: Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans. Math. Softw. (TOMS) 23(4), 550–560 (1997)


Acknowledgements

This work was supported by the Fondation pour la Recherche Médicale (FRM Grant Number ECO20170637521 to V.D.) and by Plan CANCER, MIMMOSA project. The authors wish to thank Juan Cuesta, Emilio Gualda, Jan Huisken, Philipp Keller, Théo Liu, Jürgen Mayer and Anne Sentenac for interesting discussions and feedbacks on the model. They thank the anonymous reviewers for pointing out reference [30], which is closely related to this paper.

Author information

Corresponding author

Correspondence to Valentin Debarnot.


Appendix


1.1 Proof of Proposition 1

Proof

The first item is obtained by direct inspection:

  • for \(\beta \) fixed, \(\alpha \mapsto \left\langle \exp (-A_j\alpha ), \mathbbm {1}\right\rangle \) is the composition of a linear operator with a convex function; hence, it is convex. In addition, \(\alpha \mapsto \left\langle A_j\alpha , \mathbbm {1}\right\rangle \) is a linear mapping.

  • for \(\alpha \) fixed, the first term in \(\beta \) is linear and \(\beta \mapsto \left\langle - \log (\beta ), \mathbbm {1}\right\rangle \) is convex.

Let us now focus on the second and third points. The function G can be rewritten as a sum of functions:

$$\begin{aligned} G(\alpha ,\beta ) = \sum _{i=1}^n g_{i}\left( ((A_j\alpha )[i])_{1\le j\le m},\beta [i]\right) , \end{aligned}$$

where \(g_{i}:\mathbb {R}^m\times \mathbb {R}_+ \rightarrow \mathbb {R}\) is defined as follows:

$$\begin{aligned} (x,y) \mapsto g_{i}(x,y)&= \sum _{j=1}^m \exp (-x[j]) y \\&\quad +u_j[i]\left( x[j] - \log (y) \right) . \end{aligned}$$

To prove the convexity of G, it suffices to study the convexity of each function \(g_{i}\). From now on, we skip the indices i to lighten the notation.

Let us analyze the eigenvalues of the Hessian \(H_g\):

$$\begin{aligned} H_g(x,y)=\begin{pmatrix} \mathrm {diag}(y\exp (-x)) &\quad - \exp (-x)\\ - \exp (-x)^T &\quad \sum _{j=1}^m u_j/y^2 \end{pmatrix}. \end{aligned}$$

To study the positive semidefiniteness, let \((v,w)\in \mathbb {R}^{m+1}\) denote an arbitrary vector. We have:

$$\begin{aligned}&\left\langle \begin{pmatrix} v \\ w \end{pmatrix}, H_g(x,y)\begin{pmatrix} v \\ w \end{pmatrix} \right\rangle \\&\quad = \sum _{j=1}^m v[j]\exp (-x[j])\left( y v[j] -2w\right) + w^2\frac{\sum _{j=1}^m u_j}{y^2}. \end{aligned}$$

In the case \(y>\frac{\sum _{j=1}^mu_j}{\sum _{j=1}^m\exp (-x[j])}\) and \(w \ne 0\), we get that:

$$\begin{aligned}&\left\langle \begin{pmatrix} v \\ w \end{pmatrix}, H_g(x,y)\begin{pmatrix} v \\ w \end{pmatrix} \right\rangle \\&\quad < \sum _{j=1}^m v[j]\exp (-x[j])\left( y v[j]-2w\right) \\&\qquad + w^2\frac{\sum _{j=1}^m\exp (-x[j])}{y}\\&\quad = \sum _{j=1}^m \exp (-x[j])\left( y v[j]^2 -2wv[j] + \frac{w^2}{y}\right) \\&\quad = \sum _{j=1}^m \exp (-x[j])\left( v[j]\sqrt{y} -\frac{w}{\sqrt{y}}\right) ^2 \end{aligned}$$

where the last expression vanishes for the particular choice \(v[j] y =w\), for all \(1\le j \le m\). This implies \( \left\langle \begin{pmatrix} v \\ w \end{pmatrix}, H_g\begin{pmatrix} v \\ w \end{pmatrix} \right\rangle <0\) for that choice, which shows that the function g is not convex on \(\mathbb {R}^m\times \mathbb {R}_+\) and proves the second item.

The same argument with \(y\le \frac{\sum _{j=1}^mu_j}{\sum _{j=1}^m\exp (-x[j])}\) proves the third item. \(\square \)
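The sign argument in this proof is easy to check numerically. The following sketch (our own illustration; the sample values are arbitrary and use only the standard library) evaluates the Hessian quadratic form of g along the direction \(v[j]y=w\), at a point whose \(y\) lies strictly above the threshold \(\sum _j u_j/\sum _j\exp (-x[j])\):

```python
import math
import random

# Arbitrary sample point (x, u) and a y strictly above the threshold.
random.seed(0)
m = 3
x = [random.uniform(-1.0, 1.0) for _ in range(m)]
u = [random.uniform(0.5, 2.0) for _ in range(m)]
S = sum(math.exp(-xj) for xj in x)   # sum_j exp(-x[j])
U = sum(u)                           # sum_j u_j
y = 1.5 * U / S                      # above the threshold U / S

# Quadratic form <(v, w), H_g(x, y) (v, w)> for the choice v[j] * y = w.
w = 1.0
v = [w / y] * m
quad = sum(vj * math.exp(-xj) * (y * vj - 2.0 * w) for vj, xj in zip(v, x)) \
       + w * w * U / (y * y)
# Analytically quad = -S/y + U/y**2, which is negative whenever y > U/S,
# confirming that g is not convex on this region.
```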

1.2 Proof of Proposition 2

Proof

With this specific choice, it is easy to check that the optimality conditions of problem (10) with respect to variable \(\beta \) yield (14). By replacing this expression in (11), we obtain the optimization problem shown in equation (13).

Checking convexity of this problem can be done by simple inspection. The term \(\langle u_j (A_j \alpha ),\mathbbm {1}\rangle \) is linear, hence convex. The term \(\log \left( \sum _{j=1}^m \exp (-(A_j\alpha )[i])\right) \) is the composition of the convex logsumexp function with a linear operator; hence, it is convex. \(\square \)

1.3 Proximal Operator of logsumexp in Dimension 2

In this section, we propose a fast and accurate numerical algorithm based on Newton’s method to solve the following problem:

$$\begin{aligned} w&=\mathrm {prox}_{\gamma g_1}(z) \\&=\mathop {\mathrm {argmin}}_{x\in \mathbb {R}^{2n}} \frac{1}{2}\Vert x-z\Vert _2^2 \\&\quad +\gamma \sum _{i=1}^n \sum _{j=1}^2 u_j[i] \left[ x_j[i] + \log \left( \sum _{k=1}^2 \exp (-x_k[i])\right) \right] , \end{aligned}$$

where \(z=\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}\) and \(x=\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}\) are vectors in \(\mathbb {R}^{2n}\). This problem may seem innocuous at first sight, but it turns out to be quite a numerical challenge. The first observation is that it can be decomposed into \(n\) independent problems of dimension 2 since:

$$\begin{aligned} w[i]&=\mathop {\mathrm {argmin}}_{(x_1,x_2)\in \mathbb {R}^{2}} \frac{1}{2}\sum _{j=1}^2 (x_j-z_j[i])^2 \nonumber \\&\quad + \gamma \sum _{j=1}^2 u_j[i] \left[ x_j + \log \left( \sum _{k=1}^2 \exp (-x_k)\right) \right] . \end{aligned}$$
(20)

To simplify the notation, we will skip the index i in what follows. The following proposition shows that our problem is equivalent to finding the proximal operator associated with the “logsumexp” function.

Proposition 6

Define the logsumexp function \(\mathrm {lse}(x_1,x_2)=\log \left( \sum _{j=1}^2 \exp (x_j)\right) \). The solution of problem (20) coincides with the opposite of the proximal operator of \(\mathrm {lse}\):

$$\begin{aligned} w[i]&= -\mathop {\mathrm {argmin}}_{(x_1,x_2)\in \mathbb {R}^2} a \mathrm {lse}(x_1,x_2)\nonumber \\&\quad + \frac{1}{2}((x_1-y_1)^2+(x_2-y_2)^2) \end{aligned}$$
(21)
$$\begin{aligned}&= -\mathrm {prox}_{a \mathrm {lse}} (y_1,y_2), \end{aligned}$$
(22)

where \(a=\gamma (u_1+u_2)\) and \(y_j = \gamma u_j - z_j\).

Proof

The first-order optimality conditions for problem (20) read

$$\begin{aligned} \left\{ \begin{array}{l} \gamma u_1 - \frac{\gamma (u_1+u_2)\exp (-x_1)}{\exp (-x_1)+\exp (-x_2)} + x_1 - z_1 = 0 \\ \gamma u_2 - \frac{\gamma (u_1+u_2)\exp (-x_2)}{\exp (-x_1)+\exp (-x_2)} + x_2 - z_2 = 0. \end{array}\right. \end{aligned}$$
(23)

By letting \(a=\gamma (u_1+u_2)\) and \(y_j = \gamma u_j - z_j\), this equation becomes

$$\begin{aligned} \left\{ \begin{array}{l} - \frac{a\exp (-x_1)}{\exp (-x_1)+\exp (-x_2)} + x_1 + y_1 = 0 \\ - \frac{a\exp (-x_2)}{\exp (-x_1)+\exp (-x_2)} + x_2 + y_2 = 0. \end{array}\right. \end{aligned}$$
(24)

It now suffices to make the change of variables \(x'_j=-x_j\) to retrieve the optimality conditions of problem (22)

$$\begin{aligned} \left\{ \begin{array}{l} \frac{a\exp (x'_1)}{\exp (x'_1)+\exp (x'_2)} + x'_1 - y_1 = 0 \\ \frac{a\exp (x'_2)}{\exp (x'_1)+\exp (x'_2)} + x'_2 - y_2 = 0. \end{array}\right. \end{aligned}$$
(25)

\(\square \)

Remark 4

To the best of our knowledge, this is the first attempt to find a fast algorithm to evaluate the prox of logsumexp. This function is important in many regards. In particular, it is a \(C^\infty \) approximation of the maximum value of a vector. In addition, its Fenchel conjugate coincides with the Shannon entropy restricted to the unit simplex. We refer to [14, §3.2] for some details. The algorithm that follows has potential applications outside the scope of this paper.
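The smooth-max property mentioned in the remark can be illustrated directly through the classical bounds \(\max (x_1,x_2)\le \mathrm {lse}(x_1,x_2)\le \max (x_1,x_2)+\log 2\). The sketch below (our own; the helper name and sample values are illustrative) also uses the standard max-shift trick to evaluate lse stably:

```python
import math

def lse(x1, x2):
    """logsumexp in dimension 2, computed stably by factoring out the max."""
    m = max(x1, x2)
    return m + math.log(math.exp(x1 - m) + math.exp(x2 - m))

# max(x) <= lse(x) <= max(x) + log(2); the last pair would overflow
# a naive exp(x1) + exp(x2) evaluation.
samples = [(3.0, -1.0), (0.0, 0.0), (-5.0, -5.1), (700.0, 690.0)]
within_bounds = all(
    max(x1, x2) <= lse(x1, x2) <= max(x1, x2) + math.log(2.0)
    for x1, x2 in samples
)
```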

We now design a fast and accurate minimization algorithm for problem (22) or equivalently, a root-finding algorithm for problem (25). This algorithm differs depending on whether \(y_1\ge y_2\) or \(y_2\ge y_1\). We focus on the case \(y_1\ge y_2\). The case \(y_2\ge y_1\) can be handled by symmetry.

Let \(\lambda = \frac{\exp (x'_1)}{\exp (x'_1)+\exp (x'_2)}\) and notice that

$$\begin{aligned} \frac{\exp (x'_2)}{\exp (x'_1)+\exp (x'_2)} =1-\lambda . \end{aligned}$$

Therefore, (25) becomes:

$$\begin{aligned} \left\{ \begin{array}{l} x'_1 = y_1 -a\lambda \\ x'_2 = y_2 - a(1-\lambda ). \end{array}\right. \end{aligned}$$
(26)

Hence,

$$\begin{aligned} \frac{1-\lambda }{\lambda } = \exp (x'_2-x'_1) = \exp (y_2-y_1-a)\exp (2a\lambda ). \end{aligned}$$
(27)

Taking the logarithm on each side yields (see Note 1):

$$\begin{aligned} \log (1-\lambda )-\log (\lambda ) = y_2-y_1-a + 2a\lambda . \end{aligned}$$
(28)

We are now facing the problem of finding the root \(\lambda ^*\) of the following function:

$$\begin{aligned} f(\lambda ) = y_2-y_1-a + 2a\lambda - \log (1-\lambda )+\log (\lambda ). \end{aligned}$$
(29)

This approach has two important advantages over the direct resolution of (25). First, we solve a 1D problem instead of a 2D one. More importantly, we directly constrain \(x'\) to be of the form \(x'=y - a\delta \), where \(\delta \) lives on the 2D simplex.

Let us collect a few properties of function f. First, we have:

$$\begin{aligned} f'(\lambda ) = 2a + \frac{1}{1-\lambda } + \frac{1}{\lambda } > 0,\quad \forall \lambda \in (0,1). \end{aligned}$$
(30)

Therefore, f is increasing on (0, 1). To use convergence results of Newton’s algorithm, we need to compute \(f''\) as well:

$$\begin{aligned} f''(\lambda ) = -\frac{1}{\lambda ^2}+\frac{1}{(1-\lambda )^2}. \end{aligned}$$
(31)

Proposition 7

If \(y_1\ge y_2\), then \(x_1'\ge x_2'\) and

$$\begin{aligned}&\max \left( \frac{1}{2}, \frac{1}{1+\exp (y_2-y_1+a)}\right) \le \lambda ^* \nonumber \\&\quad \le \frac{1}{1+\exp (y_2-y_1)}. \end{aligned}$$
(32)

Proof

The first statement is proven by contradiction: if \(x_2'> x_1'\), then Eq. (25) implies \(y_2>y_1\), a contradiction.

For the second statement, since \(f'>0\), it suffices to evaluate f at the endpoints. We get \(f(1/2)=y_2-y_1\le 0\), \(f\left( \frac{1}{1+\exp (y_2-y_1+a)}\right) = 2a\left( \frac{1}{1+\exp (y_2-y_1+a)}-1\right) \le 0\) and \(f\left( \frac{1}{1+\exp (y_2-y_1)}\right) = -a + \frac{2a}{1 + \exp (y_2-y_1)} \ge 0\). \(\square \)
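The bracketing of Proposition 7 is easy to verify numerically (a sketch with arbitrary sample values of \(y_1,y_2,a\)): since f is increasing, opposite signs of f at the two endpoints of (32) confirm that the interval contains the root \(\lambda ^*\).

```python
import math

# Arbitrary sample parameters with y1 >= y2, a > 0.
y1, y2, a = 2.0, -1.0, 1.5

def f(lam):
    # f from (29): y2 - y1 - a + 2*a*lam - log(1 - lam) + log(lam)
    return (y2 - y1) - a + 2.0 * a * lam - math.log(1.0 - lam) + math.log(lam)

lo = max(0.5, 1.0 / (1.0 + math.exp(y2 - y1 + a)))  # lower bound in (32)
hi = 1.0 / (1.0 + math.exp(y2 - y1))                # upper bound in (32)
bracketed = f(lo) <= 0.0 <= f(hi)  # sign change: lambda* lies in [lo, hi]
```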

Proposition 8

Set \(\lambda _0 = \frac{1}{1+\exp (y_2-y_1)}\). Then the Newton iteration

$$\begin{aligned} \lambda _{k+1} = \lambda _k - \frac{f(\lambda _k)}{f'(\lambda _k)} \end{aligned}$$
(33)

converges to the root \(\lambda ^*\) of f, with a locally quadratic rate.

Algorithm 2 (displayed as a figure in the published version)

Fig. 9 Performance evaluation for Newton’s algorithm. Left: \(\lambda ^*\) depending on a and \(y_1-y_2\). Right: number of iterations of Newton’s method to reach machine precision

Proof

First notice that \(f''(\lambda )\ge 0\) on the interval \([1/2, 1)\). Hence, \(f''\) is also nonnegative on \(I=[\lambda ^*,\lambda _0]\), since \(I\subset [1/2,1)\) by Proposition 7. This ensures that

$$\begin{aligned} \lambda _0 \ge \lambda _1 \ge \cdots \ge \lambda ^*. \end{aligned}$$
(34)

We prove this assertion by induction. First, \(\lambda _0\ge \lambda ^*\) by Proposition 7. Now, assume that \(\lambda _k\ge \lambda ^*\); then

$$\begin{aligned} f(\lambda _k) = f(\lambda ^*) + \int _{\lambda ^*}^{\lambda _k} f'(t)\,\mathrm{d}t \le f'(\lambda _k) (\lambda _k-\lambda ^*). \end{aligned}$$
(35)

Hence, since \(f(\lambda ^*)=0\) and \(f'\) is nondecreasing on I, \(\lambda _k-\lambda ^* \ge \frac{f(\lambda _k)}{f'(\lambda _k)}\), so that \(\lambda _{k+1}\ge \lambda ^*\). In addition, \(\frac{f(\lambda _k)}{f'(\lambda _k)}\ge 0\) on I, so that \(\lambda _{k+1}\le \lambda _k\).

The sequence \((\lambda _k)_{k\in \mathbb {N}}\) is monotonically decreasing and bounded below; therefore it converges to some value \(\lambda '\ge \lambda ^*\). Necessarily \(\lambda '=\lambda ^*\), since for \(\lambda '>\lambda ^*\), \(\frac{f(\lambda ')}{f'(\lambda ')}>0\).

To prove the locally quadratic convergence rate, we invoke the Newton–Kantorovich theorem [24, 25], which ensures local quadratic convergence when \(f''\) is bounded in a neighborhood of the root. \(\square \)

Finally, let us mention that computing \(\lambda _0\) on a computer is tricky due to underflow problems: in double precision, the expression \(1+\exp (y_2-y_1)\) returns 1 for \(y_2-y_1<-37\simeq \log (10^{-16})\). This may cause the algorithm to fail since f and its derivatives are undefined at \(\lambda =1\). In practice, we therefore set \(\lambda _0=1/(1+\exp (y_2-y_1))-10^{-16}\). Similarly, by bound (32), we get \(\lambda ^*=1\) up to machine precision whenever \(y_2-y_1+a<\log (10^{-16})\). Algorithm 2 summarizes all the ideas described in this paragraph.
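These precision issues are easy to reproduce (a Python sketch; the numerical values are arbitrary, chosen only to trigger the effects):

```python
import math

# 1. Products of exponentials underflow even when the result is
#    representable: exp(-800) underflows to 0, so a factored evaluation
#    of (27) loses the answer, while the log-domain form (28) keeps it.
naive = math.exp(-800.0) * math.exp(100.0)   # underflows to 0.0
stable = math.exp(-800.0 + 100.0)            # ~9.86e-305, representable

# 2. 1 + exp(y2 - y1) rounds to 1.0 once y2 - y1 < -37, which would put
#    lambda_0 exactly on the singularity lambda = 1 without the 1e-16 nudge.
lam0_unsafe = 1.0 / (1.0 + math.exp(-40.0))  # exactly 1.0 in double precision
lam0_safe = lam0_unsafe - 1e-16              # strictly below 1
```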

An attentive reader may have remarked that the convergence of Newton’s algorithm depends only on the difference \(y_1-y_2\) and on a: shifting \(y_1\) and \(y_2\) by the same value does not change Newton’s iteration. In Fig. 9, we show that the algorithm behaves very well for a wide range of parameters. For \(y_1-y_2\) and a varying in the interval \([2^{-10},2^{20}]\), the algorithm never requires more than 18 iterations to reach machine precision and needs 2.8 iterations on average.
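Putting the pieces together, the safeguarded Newton solver of this section can be sketched as follows. This is our own Python rendering of the ideas above, not the published Algorithm 2; the function name, tolerance and iteration cap are illustrative choices:

```python
import math

def prox_lse_2d(y1, y2, a, tol=1e-15, max_iter=50):
    """Newton solver sketch for prox_{a*lse}(y1, y2) in dimension 2.

    Finds the root lambda* of f in (29), then recovers x' through (26):
        x'_1 = y1 - a*lambda,   x'_2 = y2 - a*(1 - lambda).
    Assumes a > 0; the case y1 < y2 is handled by symmetry.
    """
    if y1 < y2:                            # symmetric case: swap coordinates
        x2, x1 = prox_lse_2d(y2, y1, a, tol, max_iter)
        return x1, x2
    d = y2 - y1                            # d <= 0 from here on
    if d + a < math.log(1e-16):            # lambda* = 1 up to machine precision
        return y1 - a, y2
    lam = 1.0 / (1.0 + math.exp(d)) - 1e-16   # lambda_0, nudged below 1
    for _ in range(max_iter):
        f = d - a + 2.0 * a * lam - math.log1p(-lam) + math.log(lam)
        step = f / (2.0 * a + 1.0 / (1.0 - lam) + 1.0 / lam)  # f / f'
        lam -= step
        if abs(step) < tol:
            break
    return y1 - a * lam, y2 - a * (1.0 - lam)
```

The two safeguards mirror the discussion above: an early exit when \(\lambda ^*\) equals 1 up to machine precision, and a starting point nudged strictly below 1 so that f and \(f'\) remain finite.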


Cite this article

Debarnot, V., Kahn, J. & Weiss, P. Multiview Attenuation Estimation and Correction. J Math Imaging Vis 61, 780–797 (2019). https://doi.org/10.1007/s10851-019-00871-6
