
Transport Between RGB Images Motivated by Dynamic Optimal Transport


Abstract

We propose two models for the interpolation between RGB images based on the dynamic optimal transport model of Benamou and Brenier (Numer Math 84:375–393, 2000). While the application of dynamic optimal transport and its extensions to unbalanced transport were examined for gray-value images in various papers, this is the first attempt to generalize the idea to color images. The non-trivial task of incorporating color into the model is tackled by considering RGB images as three-dimensional arrays, where the transport in the RGB direction is performed in a periodic way. Following the approach of Papadakis et al. (SIAM J Imaging Sci 7:212–238, 2014) for gray-value images, we propose two discrete variational models, a constrained and a penalized one, which can also handle unbalanced transport. We show that a minimizer of our discrete model exists, but it is not unique for some special initial/final images. For minimizing the resulting functionals we apply a primal-dual algorithm. One step of this algorithm requires the solution of a four-dimensional discretized Poisson equation with various boundary conditions in each dimension. For instance, for the penalized approach we have simultaneously zero, mirror, and periodic boundary conditions. The solution can be computed efficiently using fast Sine-I, Cosine-II, and Fourier transforms. Numerical examples demonstrate the meaningfulness of our model.


Notes

  1. Images from Wikimedia Commons: AGOModra_aurora.jpg by Comenius University under CC BY-SA 3.0, Aurora-borealis_andoya.jpg by M. Buschmann under CC BY 3.0.

  2. Images from Wikimedia Commons: Europe_satellite_orthographic.jpg and Earthlights_2002.jpg by NASA, Köhlbrandbrücke5478.jpg by G. Ries under CC BY-SA 2.5, Köhlbrandbrücke.jpg by HafenCity1 under CC BY 3.0.

References

  1. Ambrosio, L., Gigli, N., Savaré, G.: Gradient Flows in Metric Spaces and in the Space of Probability Measures. Springer, Heidelberg (2006)


  2. Angenent, S., Haker, S., Tannenbaum, A.: Minimizing flows for the Monge-Kantorovich problem. SIAM J. Math. Anal. 35, 61–97 (2003)


  3. Aravkin, A.Y., Burke, J.V., Friedlander, M.P.: Variational properties of value functions. SIAM J. Optim. 23(3), 1689–1717 (2013)


  4. Auslender, A., Teboulle, M.: Asymptotic Cones and Functions in Optimization and Variational Inequalities. Springer, New York (2003)


  5. Baiocchi, C., Buttazzo, G., Gastaldi, F., Tomarelli, F.: General existence theorems for unilateral problems in continuum mechanics. Arch. Ration. Mech. Anal. 100(2), 149–189 (1988)


  6. Benamou, J.D.: A domain decomposition method for the polar factorization of vector-valued mappings. SIAM J. Numer. Anal. 32(6), 1808–1838 (1995)


  7. Benamou, J.-D.: Numerical resolution of an ‘unbalanced’ mass transport problem. ESAIM Math. Model. Numer. Anal. 37(5), 851–868 (2003)


  8. Benamou, J.-D., Brenier, Y.: A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numer. Math. 84(3), 375–393 (2000)


  9. Benning, M., Calatroni, L., Düring, B., Schönlieb, C.-B.: A primal-dual approach for a total variation Wasserstein flow. Geometric Science of Information. LNCS, pp. 413–421. Springer, Berlin (2013)


  10. Bertalmio, M.: Image Processing for Cinema. CRC Press, Boca Raton (2014)


  11. Björck, A.: Numerical Methods for Least Squares Problems. SIAM, Philadelphia (1996)


  12. Brune, C.: 4D imaging in tomography and optical nanoscopy. PhD Thesis, University of Münster (2010)

  13. Burger, M., Franek, M., Schönlieb, C.-B.: Regularized regression and density estimation based on optimal transport. Appl. Math. Res. eXpress 2012(2), 209–253 (2012)


  14. Burger, M., Sawatzky, A., Steidl, G.: First order algorithms in variational image processing. ArXiv:1412.4237 (2014)

  15. Caffarelli, L.A.: A localization property of viscosity solutions to the Monge–Ampère equation and their strict convexity. Ann. Math. 131(1), 129–134 (1990)


  16. Carlier, G., Oberman, A., Oudet, E.: Numerical methods for matching for teams and Wasserstein barycenters. ArXiv:1411.3602 (2014)

  17. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision 40(1), 120–145 (2011)


  18. Chizat, L., Schmitzer, B., Peyré, G., Vialard, F.-X.: An interpolating distance between optimal transport and Fisher-Rao. ArXiv:1506.06430 (2015)

  19. Cullen, M.J.: Implicit finite difference methods for modelling discontinuous atmospheric flows. J. Comput. Phys. 81, 319–348 (1989)


  20. Cullen, M.J., Purser, R.J.: An extended Lagrangian theory of semigeostrophic frontogenesis. J. Atmos. Sci. 41, 1477–1497 (1984)


  21. Cuturi, M.: Sinkhorn distances: lightspeed computation of optimal transport. Adv. Neural Inf. Process. Syst. 26, 2292–2300 (2013)


  22. Cuturi, M., Doucet, A.: Fast computation of Wasserstein barycenters. In: Proceedings of the 31st International Conference on Machine Learning, pp. 685–693 (2014)

  23. Dacorogna, B., Maréchal, P.: The role of perspective functions in convexity, polyconvexity, rank-one convexity and separate convexity. J. Convex Anal. 15(2), 271–284 (2008)


  24. Dedieu, J.-P.: Cônes asymptotes d’un ensemble non convexe. Application à l’optimisation. C. R. Acad. Sci. 287, 91–103 (1977)


  25. Delon, J.: Midway image equalization. J. Math. Imaging Vision 21(2), 119–134 (2004)


  26. Ferradans, S., Papadakis, N., Peyré, G., Aujol, J.-F.: Regularized discrete optimal transport. SIAM J. Imaging Sci. 7(3), 1853–1882 (2014)


  27. Fitschen, J.H., Laus, F., Steidl, G.: Dynamic optimal transport with mixed boundary condition for color image processing. In: International Conference on Sampling Theory and Applications (SampTA). ArXiv:1501.04840, pp. 558–562 (2015)

  28. Frogner, C., Zhang, C., Mobahi, H., Araya, M., Poggio, T.A.: Learning with a Wasserstein loss. Adv. Neural Inf. Process. Syst. 28, 2044–2052 (2015)


  29. Galerne, B., Gousseau, Y., Morel, J.-M.: Random phase textures: theory and synthesis. IEEE Trans. Image Process. 20(1), 257–267 (2011)


  30. Haber, E., Rehman, T., Tannenbaum, A.: An efficient numerical method for the solution of the \(l_2\) optimal mass transfer problem. SIAM J. Sci. Comput. 32(1), 197–211 (2010)


  31. Jimenez, C.: Dynamic formulation of optimal transport problems. J. Convex Anal. 15(3), 593–622 (2008)


  32. Kochengin, S.A., Oliker, V.I.: Determination of reflector surfaces from near-field scattering data. Inverse Probl. 13(2), 363–373 (1997)


  33. Maas, J., Rumpf, M., Schönlieb, C., Simon, S.: A generalized model for optimal transport of images including dissipation and density modulation. ArXiv:1504.01988 (2015)

  34. Nikolova, M., Steidl, G.: Fast hue and range preserving histogram specification: theory and new algorithms for color image enhancement. IEEE Trans. Image Process. 23(9), 4087–4100 (2014)


  35. Papadakis, N., Peyré, G., Oudet, E.: Optimal transport with proximal splitting. SIAM J. Imaging Sci. 7(1), 212–238 (2014)


  36. Patankar, S.: Numerical Heat Transfer and Fluid Flow. CRC Press, New York (1980)

  37. Peyré, G., Fadili, J., Rabin, J.: Wasserstein active contours. In: 19th IEEE ICIP, pp. 2541–2544 (2012)

  38. Pock, T., Chambolle, A., Cremers, D., Bischof, H.: A convex relaxation approach for computing minimal partitions. In: IEEE Conference Computer Vision and Pattern Recognition, pp. 810–817 (2009)

  39. Potts, D., Steidl, G.: Optimal trigonometric preconditioners for nonsymmetric Toeplitz systems. Linear Algebra Appl. 281, 265–292 (1998)


  40. Rabin, J., Delon, J., Gousseau, Y.: Transportation distances on the circle. J. Math. Imaging Vision 41(1–2), 147–167 (2011)


  41. Rabin, J., Peyré, G., Delon, J., Bernot, M.: Wasserstein barycenter and its application to texture mixing. In: SSVM, pp. 435–446. Springer (2012)

  42. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)


  43. Santambrogio, F.: Optimal Transport for Applied Mathematicians. Springer, New York (2015)


  44. Schmitzer, B.: A sparse multi-scale algorithm for dense optimal transport. ArXiv:1510.05466 (2015)

  45. Schmitzer, B., Schnörr, C.: A hierarchical approach to optimal transport. In: SSVM, pp. 452–464. Springer (2013)

  46. Strang, G., MacNamara, S.: Functions of difference matrices are Toeplitz plus Hankel. SIAM Rev. 56, 525–546 (2014)


  47. Swoboda, P., Schnörr, C.: Convex variational image restoration with histogram priors. SIAM J. Imaging Sci. 6(3), 1719–1735 (2013)


  48. Teuber, T., Steidl, G., Chan, R.H.: Minimization and parameter estimation for seminorm regularization models with I-divergence constraints. Inverse Probl. 29, 1–28 (2013)


  49. Trouvé, A., Younes, L.: Metamorphoses through Lie group action. Found. Comput. Math. 5(2), 173–198 (2005)


  50. Villani, C.: Optimal Transport: Old and New. Springer, Berlin (2008)


Download references

Acknowledgments

Funding by the DFG within the Research Training Group 1932 is gratefully acknowledged.


Corresponding author

Correspondence to Friederike Laus.

Appendices

Appendix 1: Diagonalization of Structured Matrices

In the following we collect known facts on the eigenvalue decomposition of various difference matrices. For further information we refer, e.g., to [39, 46]. The matrix \(F_n\) below is unitary, while \(C_n\) and \(S_n\) are orthogonal. The Fourier matrix

$$\begin{aligned} F_n := \, \sqrt{\tfrac{1}{n}} \left( \mathrm {e}^{\frac{-2\pi \mathrm {i}jk}{n}} \right) _{j,k=0}^{n-1} \end{aligned}$$

diagonalizes circulant matrices, i.e., for \(a := (a_j)_{j=0}^{n-1} \in \mathbb R^n\) we have

$$\begin{aligned} \begin{pmatrix} a_0 & a_{n-1} & \ldots & a_1\\ a_1 & a_0 & \ldots & a_2\\ \vdots & & \ddots & \vdots \\ a_{n-1} & a_{n-2} & \ldots & a_0 \end{pmatrix} &= \bar{F}_n \, \mathrm{diag}(\sqrt{n}\, F_n a)\, F_n \nonumber \\ &= F_n \, \mathrm{diag}(\sqrt{n}\, \bar{F}_n a)\, \bar{F}_n. \end{aligned}$$
(21)

In particular it holds

$$\begin{aligned} \varDelta _n^{\mathrm{per}} &:= \frac{1}{n^2} (D_n^{\mathrm{per}})^{\scriptscriptstyle {\text {T}}}D_n^{\mathrm{per}} = \frac{1}{n^2} D_n^{\mathrm{per}} (D_n^{\mathrm{per}})^{\scriptscriptstyle {\text {T}}} \nonumber \\ &= \begin{pmatrix} 2 & -1 & & & -1\\ -1 & 2 & -1 & & \\ & & \ddots & & \\ & & -1 & 2 & -1\\ -1 & & & -1 & 2 \end{pmatrix} = \bar{F}_n \, \mathrm{diag}(\mathrm{d}^{\mathrm{per}}_n )\, F_n \end{aligned}$$
(22)

with \(\mathrm{d}^{\mathrm{per}}_n := \left( 4 \sin ^2 \frac{k\pi }{n}\right) _{k=0}^{n-1}\). The operator \(\varDelta _n^{\mathrm{per}}\) typically appears when solving the one-dimensional Poisson equation with periodic boundary conditions by finite difference methods.
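As a numerical illustration of (22), the following Python/NumPy sketch (our own illustration, not part of the paper; all names are ours) verifies the diagonalization of the periodic Laplacian by the unitary Fourier matrix and its fast application via FFTs:

```python
import numpy as np

n = 8
# Periodic 1D Laplacian from (22) (without the 1/n^2 scaling): circulant(2, -1, 0, ..., 0, -1)
L = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)

# Unitary DFT matrix F_n and the eigenvalues d^per_n = 4 sin^2(k*pi/n)
F = np.fft.fft(np.eye(n), norm="ortho")
d = 4 * np.sin(np.pi * np.arange(n) / n) ** 2

# (22): L = conj(F_n) diag(d^per_n) F_n
assert np.allclose(F.conj() @ np.diag(d) @ F, L)

# Equivalently, L can be applied in O(n log n) operations by an FFT/inverse-FFT pair
x = np.random.default_rng(0).standard_normal(n)
assert np.allclose(L @ x, np.fft.ifft(d * np.fft.fft(x)).real)
```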

The DST-I matrix

$$\begin{aligned} S_{n-1} \,:=\, \sqrt{\tfrac{2}{n}} \left( \sin \frac{jk\pi }{n} \right) _{j,k=1}^{n-1}, \end{aligned}$$

and the DCT-II matrix

$$\begin{aligned} C_n := \sqrt{\tfrac{2}{n}} \left( \epsilon _j \cos \frac{j(2k+1)\pi }{2n} \right) _{j,k=0}^{n-1} \end{aligned}$$

with \(\epsilon _0 := 1/\sqrt{2}\) and \(\epsilon _j := 1\), \(j=1,\ldots ,n-1\) are related by

$$\begin{aligned} D_n = S_{n-1} \left( 0 \, | \, \mathrm{diag}(\mathrm{d}^{\mathrm{zero}}_{n-1})^\frac{1}{2} \right) C_n, \end{aligned}$$
(23)

where \( \mathrm{d}^{\mathrm{zero}}_{n-1} := \left( 4 \sin ^2 \frac{k\pi }{2n}\right) _{k=1}^{n-1} \). Further they diagonalize sums of certain symmetric Toeplitz and persymmetric Hankel matrices. In particular it holds

$$\begin{aligned} \varDelta _{n-1}^{\mathrm{zero}} &:= \frac{1}{n^2} D_{n} D_{n}^{\scriptscriptstyle {\text {T}}} \nonumber \\ &= \begin{pmatrix} 2 & -1 & & & \\ -1 & 2 & -1 & & \\ & & \ddots & & \\ & & -1 & 2 & -1\\ & & & -1 & 2 \end{pmatrix} = S_{n-1} \, \mathrm{diag}(\mathrm{d}^{\mathrm{zero}}_{n-1})\, S_{n-1} \end{aligned}$$
(24)

and

$$\begin{aligned} \varDelta _n^{\mathrm{mirr}} &:= \frac{1}{n^2} D_{n}^{\scriptscriptstyle {\text {T}}}D_{n} \nonumber \\ &= \begin{pmatrix} 1 & -1 & & & \\ -1 & 2 & -1 & & \\ & & \ddots & & \\ & & -1 & 2 & -1\\ & & & -1 & 1 \end{pmatrix} = C_n^{\scriptscriptstyle {\text {T}}}\, \mathrm{diag}(\mathrm{d}^{\mathrm{mirr}}_n)\, C_n \end{aligned}$$
(25)

with \( \mathrm{d}^{\mathrm{mirr}}_n := \begin{pmatrix} 0\\ \mathrm{d}_{n-1}^{\mathrm{zero}} \end{pmatrix} = \left( 4\sin ^2 \frac{j \pi }{2n} \right) _{j=0}^{n-1} \). The operators \(\varDelta _{n-1}^{\mathrm{zero}}\) and \(\varDelta _{n}^{\mathrm{mirr}}\) are related to the Poisson equation with zero boundary conditions and mirror boundary conditions, respectively.
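In the same spirit, a one-dimensional Poisson problem with zero boundary conditions can be solved by two fast DST-I calls, since \(S_{n-1}\) is symmetric and orthogonal. The following Python/SciPy sketch (our illustration under the notation of (24), not the authors' code) shows the principle:

```python
import numpy as np
from scipy.fft import dst, idst

n = 10                                  # grid parameter; the unknowns sit at j = 1, ..., n-1
# Zero-boundary Laplacian from (24) (without the 1/n^2 factor)
T = 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)

# Eigenvalues d^zero_{n-1} = 4 sin^2(k*pi/(2n)), k = 1, ..., n-1
d = 4 * np.sin(np.pi * np.arange(1, n) / (2 * n)) ** 2

b = np.random.default_rng(1).standard_normal(n - 1)
# T = S_{n-1} diag(d) S_{n-1} with S_{n-1} orthogonal and symmetric, hence
# T^{-1} b = S_{n-1} diag(1/d) S_{n-1} b; the orthonormal DST-I realizes S_{n-1}.
u = idst(dst(b, type=1, norm="ortho") / d, type=1, norm="ortho")
assert np.allclose(T @ u, b)
```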

Appendix 2: Computation with Tensor Products

The tensor product (Kronecker product) of matrices

$$\begin{aligned} A&= \begin{pmatrix} a_{1,1} & \cdots & a_{1,n}\\ \vdots & & \vdots \\ a_{m,1} & \cdots & a_{m,n} \end{pmatrix} \in \mathbb {C}^{m, n} \end{aligned}$$

and

$$\begin{aligned} B&= \begin{pmatrix} b_{1,1} & \cdots & b_{1,t}\\ \vdots & & \vdots \\ b_{s,1} & \cdots & b_{s,t} \end{pmatrix} \in \mathbb {C}^{s, t} \end{aligned}$$

is defined by

$$\begin{aligned} A \otimes B := \begin{pmatrix} a_{1,1} B & \cdots & a_{1,n} B\\ \vdots & \ddots & \vdots \\ a_{m,1} B & \cdots & a_{m,n} B \end{pmatrix} \in \mathbb C^{ms, nt}. \end{aligned}$$

The tensor product is associative and distributive with respect to the addition of matrices.

Lemma 2 (Properties of Tensor Products)

  (i) \(( A \otimes B)^{\scriptscriptstyle {\text {T}}}= A^{\scriptscriptstyle {\text {T}}}\otimes B^{\scriptscriptstyle {\text {T}}}\) for \( A \in \mathbb {C}^{m, n}\), \( B \in \mathbb {C}^{s , t}\).

  (ii) \(( A \otimes B)( C \otimes D) = A C \otimes B D\) for \(A, C \in \mathbb {C}^{m, m}\) and \( B, D \in \mathbb {C}^{n , n}\).

  (iii) If \(A \in \mathbb {C}^{m, m}\) and \(B \in \mathbb {C}^{n, n}\) are invertible, then \( A \otimes B\) is also invertible and

    $$\begin{aligned} ( A \otimes B)^{-1} = A^{-1} \otimes B^{-1} \, . \end{aligned}$$
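These three properties are easy to check numerically; here is a small Python/NumPy sketch (ours, purely illustrative) with randomly chosen matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((2, 5))
C, D = rng.standard_normal((6, 6)), rng.standard_normal((5, 5))
E, G = rng.standard_normal((6, 6)), rng.standard_normal((5, 5))

# (i) transposition distributes over the tensor product
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))
# (ii) mixed-product rule for square factors of matching sizes
assert np.allclose(np.kron(C, D) @ np.kron(E, G), np.kron(C @ E, D @ G))
# (iii) inverse of a tensor product (random Gaussian matrices are invertible almost surely)
assert np.allclose(np.linalg.inv(np.kron(C, D)), np.kron(np.linalg.inv(C), np.linalg.inv(D)))
```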

The tensor product is needed to establish the connection between images and their vectorized versions, i.e., we consider images \(F\in \mathbb {R}^{n_1\times n_2}\) columnwise reshaped as

$$\begin{aligned} f : = \text {vec}(F) \in \mathbb {R}^{n_1 n_2}. \end{aligned}$$

Then the following relation holds true:

$$\begin{aligned} \text {vec}(A F B^{\scriptscriptstyle {\text {T}}}) = (B \otimes A) f. \end{aligned}$$
(26)
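Relation (26) corresponds exactly to column-major ("Fortran-order") reshaping; the following Python/NumPy sketch (ours, with arbitrary sizes) confirms it:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 4, 5
F = rng.standard_normal((n1, n2))      # "image"
A = rng.standard_normal((3, n1))       # acts on the columns of F
B = rng.standard_normal((6, n2))       # acts on the rows of F

vec = lambda M: M.flatten(order="F")   # columnwise reshaping, as in the text
# Relation (26): vec(A F B^T) = (B (x) A) vec(F)
assert np.allclose(vec(A @ F @ B.T), np.kron(B, A) @ vec(F))
```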

Appendix 3: Proofs and Generalization of the Tensor Product Approach to 3D

Proof of Proposition 3

By definition of A and using (25), (22), we obtain for periodic boundary conditions

$$\begin{aligned} A A^{\scriptscriptstyle {\text {T}}}&= I_P \otimes \big (D_N^{\mathrm{per}} \big )^{\scriptscriptstyle {\text {T}}}D_N^{\mathrm{per}} + D_P^{\scriptscriptstyle {\text {T}}}D_P \otimes I_N\\&= I_P \otimes N^2 \varDelta _N^{\mathrm{per}} + P^2 \varDelta _P^{\mathrm{mirr}} \otimes I_N\\&= \Big (C_P^{\scriptscriptstyle {\text {T}}}\otimes \bar{F}_N \Big ) \; \mathrm{diag}\Big ( I_P \otimes N^2 \mathrm{d}_N^{\mathrm{per}} \\&\ \ \ \ + P^2 \mathrm{d}_P^{\mathrm{mirr}} \otimes I_N \Big ) \; \Big (C_P \otimes F_N\Big ). \end{aligned}$$

Similarly we get with (25) for mirror boundary conditions

$$\begin{aligned} A A^{\scriptscriptstyle {\text {T}}}&= I_P \otimes D_{N-1}^{\scriptscriptstyle {\text {T}}}D_{N-1} + D_{P}^{\scriptscriptstyle {\text {T}}}D_{P} \otimes I_{N-1} \\&= I_P \otimes N^2 \varDelta _{N-1}^{\mathrm{mirr}} + P^2 \varDelta _P^{\mathrm{mirr}} \otimes I_{N-1}\\&= \Big (C_P^{\scriptscriptstyle {\text {T}}}\otimes C_{N-1}^{\scriptscriptstyle {\text {T}}}\Big ) \; \mathrm{diag}\Big ( I_P \otimes N^2 \mathrm{d}_{N-1}^{\mathrm{mirr}} + P^2 \mathrm{d}_{P}^{\mathrm{mirr}} \otimes I_{N-1}\Big ) \; \Big (C_P \otimes C_{N-1}\Big ) \end{aligned}$$

which finishes the proof. \(\square \)
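The periodic-boundary factorization of \(AA^{\scriptscriptstyle {\text {T}}}\) can be verified on a small example. The Python sketch below (our own illustration, not the authors' implementation; the sizes N and P are arbitrary) builds the Kronecker-sum operator explicitly and compares it with the transform-based diagonalization; since both one-dimensional Laplacians share the eigenvalue zero, we only check the factorization here and do not invert the matrix.

```python
import numpy as np
from scipy.fft import dct

N, P = 6, 5

# Explicit 1D Laplacians: N^2 * Delta_N^per (periodic) and P^2 * Delta_P^mirr (mirror)
Lper = N**2 * (2 * np.eye(N) - np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0))
Lmirr = 2 * np.eye(P) - np.eye(P, k=1) - np.eye(P, k=-1)
Lmirr[0, 0] = Lmirr[-1, -1] = 1
Lmirr *= P**2

AAt = np.kron(np.eye(P), Lper) + np.kron(Lmirr, np.eye(N))

# Transform matrices: unitary DFT F_N and orthogonal DCT-II C_P
F = np.fft.fft(np.eye(N), norm="ortho")
C = dct(np.eye(P), type=2, norm="ortho", axis=0)

# Diagonal part I_P (x) N^2 d_N^per + P^2 d_P^mirr (x) I_N
dper = 4 * np.sin(np.pi * np.arange(N) / N) ** 2
dmirr = 4 * np.sin(np.pi * np.arange(P) / (2 * P)) ** 2
D = np.kron(np.eye(P), N**2 * np.diag(dper)) + np.kron(P**2 * np.diag(dmirr), np.eye(N))

# A A^T = (C_P^T (x) conj(F_N)) D (C_P (x) F_N)
assert np.allclose(np.kron(C.T, F.conj()) @ D @ np.kron(C, F), AAt)
```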

Proof of Proposition 4

By definition of A we obtain

$$\begin{aligned} \lambda A^{\scriptscriptstyle {\text {T}}}A + \tfrac{1}{\tau } I = \begin{pmatrix} \lambda D_\mathrm{m}^{\scriptscriptstyle {\text {T}}}D_\mathrm{m} + \tfrac{1}{\tau } I & \lambda D_\mathrm{m}^{\scriptscriptstyle {\text {T}}}D_\mathrm{f}\\ \lambda D_\mathrm{f}^{\scriptscriptstyle {\text {T}}}D_\mathrm{m} & \lambda D_\mathrm{f}^{\scriptscriptstyle {\text {T}}}D_\mathrm{f} + \tfrac{1}{\tau } I \end{pmatrix} =: \begin{pmatrix} X & Y\\ Y^{\scriptscriptstyle {\text {T}}} & Z \end{pmatrix} \end{aligned}$$

so that the inverse can be written with the help of the Schur complement

$$\begin{aligned} S := Z - Y^{\scriptscriptstyle {\text {T}}}X^{-1} Y \end{aligned}$$

as

$$\begin{aligned} \begin{pmatrix} X & Y\\ Y^{\scriptscriptstyle {\text {T}}} & Z \end{pmatrix}^{-1} = \begin{pmatrix} I & - X^{-1}Y\\ 0 & I \end{pmatrix} \begin{pmatrix} X^{-1} & 0\\ 0 & S^{-1} \end{pmatrix} \begin{pmatrix} I & 0\\ -Y^{\scriptscriptstyle {\text {T}}}X^{-1} & I \end{pmatrix}. \end{aligned}$$

By (22) and (24) we have with \(D \in \{D_N^{\mathrm{per}},D_N\}\) that

$$\begin{aligned} X^{-1}&= \left( \lambda D_\mathrm{m}^{\scriptscriptstyle {\text {T}}}D_\mathrm{m} + \tfrac{1}{\tau } I\right) ^{-1} = \left( I_P \otimes \lambda N^2 D D^{\scriptscriptstyle {\text {T}}}+\tfrac{1}{\tau } I\right) ^{-1} \\&= I_P \otimes (\lambda N^2 D D^{\scriptscriptstyle {\text {T}}}+\tfrac{1}{\tau } I)^{-1} \\&= {\left\{ \begin{array}{ll} I_P \otimes S_{N-1} \mathrm{diag}(\lambda N^2 \mathrm{d}_{N-1}^{\mathrm{zero}} + \tfrac{1}{\tau }) ^{-1} S_{N-1} &{} \\ \qquad \qquad \qquad \qquad \qquad \qquad \text {mirror boundary},\\ I_P \otimes F_N \mathrm{diag}(\lambda N^2 \mathrm{d}_N^{\mathrm{per}} + \tfrac{1}{\tau }) ^{-1} \bar{F}_N &{} \\ \qquad \qquad \qquad \qquad \qquad \qquad \text {periodic boundary}. \end{array}\right. } \end{aligned}$$

The Schur complement reads as

$$\begin{aligned} S&= \Big (\lambda D_\mathrm{f}^{\scriptscriptstyle {\text {T}}}D_\mathrm{f} + \tfrac{1}{\tau } I \Big ) - \lambda ^2 D_\mathrm{f}^{\scriptscriptstyle {\text {T}}}D_\mathrm{m} X^{-1} D_\mathrm{m}^{\scriptscriptstyle {\text {T}}}D_\mathrm{f}\\&= \Big (\lambda D_P D_P^{\scriptscriptstyle {\text {T}}}\otimes I_N + \tfrac{1}{\tau } I \Big ) \\&\ \ \ \ - \lambda ^2 \big (D_P \otimes D^{\scriptscriptstyle {\text {T}}}\big ) \Big ( I_P \otimes \Big (\lambda N^2 D D^{\scriptscriptstyle {\text {T}}}+\tfrac{1}{\tau } I \Big )^{-1} \Big )\\&\qquad \Big (D_P^{\scriptscriptstyle {\text {T}}}\otimes D \Big )\\&= \Big (\lambda D_P D_P^{\scriptscriptstyle {\text {T}}}\otimes I_N + \tfrac{1}{\tau } I \Big ) \\&\ \ \ \ - \lambda ^2 \Big ( D_P D_P^{\scriptscriptstyle {\text {T}}}\otimes D^{\scriptscriptstyle {\text {T}}}\big (\lambda D D^{\scriptscriptstyle {\text {T}}}+ \tfrac{1}{\tau } I_N \big )^{-1} D \Big )\\&= \lambda D_P D_P^{\scriptscriptstyle {\text {T}}}\otimes \left( I_N - \lambda D^{\scriptscriptstyle {\text {T}}}\big (\lambda D D^{\scriptscriptstyle {\text {T}}}+ \tfrac{1}{\tau } I_N \big )^{-1} D \right) + \frac{1}{\tau } I. \end{aligned}$$

By (21) we have

$$\begin{aligned} \big (D_N^{\mathrm{per}} \big )^{\scriptscriptstyle {\text {T}}}= N F_N \mathrm{diag}\Big (-1 + {\,\mathrm {e}}^{+2\pi \mathrm {i}k/N} \Big )_k \bar{F}_N \end{aligned}$$

and

$$\begin{aligned} D_N ^{\mathrm{per}}= N F_N \mathrm{diag}\Big (-1 + {\,\mathrm {e}}^{-2\pi \mathrm {i}k/N} \Big )_k \bar{F}_N \end{aligned}$$

so that we obtain for periodic boundary conditions

$$\begin{aligned} I_N - \lambda \Big (D_N^{\mathrm{per}}\Big )^{\scriptscriptstyle {\text {T}}}\Big (\lambda D_N^{\mathrm{per}} \Big (D_N^{\mathrm{per}} \Big )^{\scriptscriptstyle {\text {T}}}+ \tfrac{1}{\tau } I_N \Big )^{-1} D_N^{\mathrm{per}} \\ = F_N \mathrm{diag}\left( 1+ \tau \tfrac{1}{\lambda N^2} \mathrm{d}^{\mathrm{per}}_N \right) ^{-1} \bar{F}_N. \end{aligned}$$

Together with (24), it follows that

$$\begin{aligned} S&= S_{P-1} \mathrm{diag}\Big (\lambda P^2 \mathrm{d}^{\mathrm{zero}}_{P-1} \Big ) S_{P-1} \\&\ \ \ \ \otimes F_N \mathrm{diag}\left( 1+ \tau \tfrac{1}{\lambda N^2} \mathrm{d}^{\mathrm{per}}_N \right) ^{-1} \bar{F}_N + \tfrac{1}{\tau } I\\&= \Big (S_{P-1} \otimes F_N\Big ) \mathrm{diag}\Big (\lambda P^2 \mathrm{d}^{\mathrm{zero}}_{P-1} \\&\ \ \ \ \otimes \Big (1+ \tau \tfrac{1}{\lambda N^2} \mathrm{d}^{\mathrm{per}}_N \Big )^{-1} + \tfrac{1}{\tau } \Big ) \Big (S_{P-1} \otimes \bar{F}_N\Big ) \end{aligned}$$

which yields the assertion for \(S^{-1}\) in the periodic case.

For mirror boundary conditions we compute using (23)

$$\begin{aligned} S&= \Big (S_{P-1} \otimes C_N^{\scriptscriptstyle {\text {T}}}\Big ) \mathrm{diag}\Big (\lambda P^2 \mathrm{d}^{\mathrm{zero}}_{P-1} \\&\ \ \ \otimes \Big (1+ \tau \tfrac{1}{\lambda N^2} \mathrm{d}^{\mathrm{mirr}}_N \Big )^{-1} + \tfrac{1}{\tau }\Big ) \Big (S_{P-1} \otimes C_N\Big ) \end{aligned}$$

and inverting this matrix finishes the proof. \(\square \)
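The block factorization of the inverse used above is standard; the following Python/NumPy sketch (ours, with a generic symmetric positive definite matrix in place of \(\lambda A^{\scriptscriptstyle {\text {T}}}A + \tfrac{1}{\tau } I\)) confirms it numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
W = rng.standard_normal((2 * n, 2 * n))
M = W @ W.T + np.eye(2 * n)                 # symmetric positive definite 2x2 block matrix
X, Y, Z = M[:n, :n], M[:n, n:], M[n:, n:]

Xinv = np.linalg.inv(X)
S = Z - Y.T @ Xinv @ Y                      # Schur complement of X
upper = np.block([[np.eye(n), -Xinv @ Y], [np.zeros((n, n)), np.eye(n)]])
middle = np.block([[Xinv, np.zeros((n, n))], [np.zeros((n, n)), np.linalg.inv(S)]])
lower = np.block([[np.eye(n), np.zeros((n, n))], [-Y.T @ Xinv, np.eye(n)]])

# Inverse via the Schur complement factorization used in the proof of Proposition 4
assert np.allclose(upper @ middle @ lower, np.linalg.inv(M))
```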

Discretization for three spatial dimensions + time

For RGB images of size \(N_1 \times N_2 \times N_3\), where \(N_3 = 3\), we have to work in three spatial dimensions. Setting \(N :=(N_1,N_2,N_3)\), \(j :=(j_1,j_2,j_3)\), and defining the quotient \(\tfrac{j}{N}\) componentwise, we obtain

  • \(f_i = \bigg (f_i \Big ( \tfrac{j-1/2}{N}\Big ) \bigg )_{j=(1,1,1)}^{N} \in \mathbb R^{N_1,N_2,N_3}\), \(i= 0,1\),

  • \(f = \bigg ( f \Big ( \tfrac{j-1/2}{N}, \tfrac{k}{P} \Big ) \bigg )_{j={(1,1,1)},k=1}^{N,P-1} \in \mathbb R^{N_1,N_2,3,P-1}\),

  • \(m = (m_1, m_2, m_3)\), with

    $$\begin{aligned}&\bigg (m_1 \Big (\tfrac{j_1}{N_1},\tfrac{j_2-1/2}{N_2},\tfrac{j_3-1/2}{3},\tfrac{k-1/2}{P} \Big )\bigg )_{j_1=1,j_2=1,j_3=1,k=1}^{N_1-1,N_2,3,P}\\&\in \mathbb R^{N_1-1,N_2,3,P},\\&\bigg (m_2\Big (\tfrac{j_1-1/2}{N_1},\tfrac{j_2}{N_2},\tfrac{j_3-1/2}{3},\tfrac{k-1/2}{P}\Big )\bigg )_{j_1=1,j_2=1,j_3=1,k=1}^{N_1,N_2-1,3,P}\\&\in \mathbb R^{N_1,N_2-1,3,P},\\&\bigg (m_3\Big (\tfrac{j_1-1/2}{N_1},\tfrac{j_2-1/2}{N_2},\tfrac{j_3}{3},\tfrac{k-1/2}{P}\Big )\bigg )_{j_1=1,j_2=1,j_3=0,k=1}^{N_1,N_2,2,P}\\&\in \mathbb R^{N_1,N_2,3,P}. \end{aligned}$$

In the definition of m we take the periodic boundary condition in the third spatial direction into account. Analogously to the one-dimensional case, when reshaping m and f into long vectors, the interpolation and differentiation operators can be written using tensor products. For the interpolation operators we have

$$\begin{aligned} S_{\text {m}}m&= \begin{pmatrix} (I_P \otimes I_3\otimes I_{N_2} \otimes S_{N_1}^{\scriptscriptstyle {\text {T}}}) m_1\\ (I_P \otimes I_3\otimes S_{N_2}^{\scriptscriptstyle {\text {T}}}\otimes I_{N_1}) m_2\\ (I_P \otimes S_3^{\scriptscriptstyle {\text {T}}}\otimes I_{N_2} \otimes I_{N_1}) m_3\end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} S_{\text {f}}f = \Big (S_P^{\scriptscriptstyle {\text {T}}}\otimes I_3\otimes I_{N_2} \otimes I_{N_1} \Big ) f, \end{aligned}$$

which means that \(S_{\text {m}}m\) computes the average of \(m_i\) with respect to the i-th coordinate, \(i=1,2,3\), and \(S_{\text {f}}f\) computes the average of f with respect to the time variable. Similarly we generalize the difference operator. Then, reordering f and m into long vectors, the matrix form of the operator A is

$$\begin{aligned} A = \Big ( I_P\otimes I_3\otimes I_{N_2}\otimes D_{N_1}^{\scriptscriptstyle {\text {T}}}\, \Big | \, I_P\otimes I_3\otimes D_{N_2}^{\scriptscriptstyle {\text {T}}}\otimes I_{N_1}\, \Big | \,\\ I_P\otimes (D_3^{\mathrm{per}})^{\scriptscriptstyle {\text {T}}}\otimes I_{N_2}\otimes I_{N_1} \, \Big | \, D_P\otimes I_3\otimes I_{N_2}\otimes I_{N_1} \Big ) \end{aligned}$$

so that \(AA^{\scriptscriptstyle {\text {T}}}\) reads as

$$\begin{aligned} A A^{\scriptscriptstyle {\text {T}}}&= I_P\otimes I_3\otimes I_{N_2}\otimes D_{N_1}^{\scriptscriptstyle {\text {T}}}D_{N_1}\\&\ \ \ \ +I_P\otimes I_3\otimes D_{N_2}^{\scriptscriptstyle {\text {T}}}D_{N_2}\otimes I_{N_1}\\&\ \ \ \ +I_P\otimes (D_3^{\mathrm{per}})^{\scriptscriptstyle {\text {T}}}(D_3^{\mathrm{per}})\otimes I_{N_2}\otimes I_{N_1}\\&\ \ \ \ +D_P^{\scriptscriptstyle {\text {T}}}D_P\otimes I_3\otimes I_{N_2}\otimes I_{N_1}\\&= \left( C^{\scriptscriptstyle {\text {T}}}_P \otimes \bar{F_3} \otimes C_{N_2-1}^{\scriptscriptstyle {\text {T}}}\otimes C_{N_1-1}^{\scriptscriptstyle {\text {T}}}\right) \mathrm{diag}(\mathrm{d})\\&\ \ \ \ \left( C_P \otimes F_3 \otimes C_{N_2-1}\otimes C_{N_1-1}\right) , \end{aligned}$$

where

$$\begin{aligned} \mathrm{d}&:=I_{3 P (N_2-1)} \otimes N_1^2 \mathrm{diag}\Big (\mathrm{d}_{N_1-1}^{\text {mirr}} \Big )\\&\ \ \ \ + I_{3P}\otimes {N_2}^2 \mathrm{diag}\Big (\mathrm{d}_{{N_2}-1}^{\text {mirr}}\Big )\otimes I_{N_1-1}\\&\ \ \ \ + I_P \otimes 3^2 \mathrm{diag}\Big (\mathrm{d}_{3}^{\text {per}}\Big ) \otimes I_{(N_2-1)(N_1-1)}\\&\ \ \ \ + P^2 \mathrm{diag}\Big (\mathrm{d}_{P}^{\text {mirr}} \Big )\otimes I_{3(N_2-1)(N_1-1)}. \end{aligned}$$

For the three-dimensional spatial setting we have to solve a four-dimensional Poisson equation, which can be handled separately in each dimension. For the constrained problem, this can be computed using fast cosine and Fourier transforms with a complexity of \(\mathcal {O}(N_1 N_2 P\log (N_1 N_2 P))\).
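A minimal Python/SciPy sketch of this transform-based solve is given below. It is our own illustration, not the authors' implementation: the grid sizes are arbitrary, the unknowns are arranged as an \((N_1-1)\times (N_2-1)\times 3\times P\) array matching the formula for \(\mathrm{d}\) above, and the single zero eigenvalue of \(AA^{\scriptscriptstyle {\text {T}}}\) is treated by a pseudoinverse (the constant mode is simply dropped), which may differ from the treatment in the actual algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn, fft, ifft

N1, N2, N3, P = 16, 16, 3, 8        # hypothetical sizes

def lap_eigs(n, bc):
    """Eigenvalues 4 sin^2(.) of the 1D Laplacian with mirror or periodic boundary conditions."""
    k = np.arange(n)
    return 4 * np.sin(np.pi * k / (2 * n)) ** 2 if bc == "mirr" else 4 * np.sin(np.pi * k / n) ** 2

# Eigenvalues of A A^T as a 4D array, cf. the expression for d above
d = (N1**2 * lap_eigs(N1 - 1, "mirr")[:, None, None, None]
     + N2**2 * lap_eigs(N2 - 1, "mirr")[None, :, None, None]
     + N3**2 * lap_eigs(N3, "per")[None, None, :, None]
     + P**2 * lap_eigs(P, "mirr")[None, None, None, :])

def solve_poisson(b):
    """Apply the pseudoinverse of A A^T: DCT-II along the mirror axes, FFT along the color axis."""
    bhat = fft(dctn(b, type=2, norm="ortho", axes=(0, 1, 3)), axis=2, norm="ortho")
    xhat = np.zeros_like(bhat)
    np.divide(bhat, d, out=xhat, where=d > 1e-12)      # drop the single zero eigenvalue
    return idctn(ifft(xhat, axis=2, norm="ortho").real, type=2, norm="ortho", axes=(0, 1, 3))

def apply_op(x):
    """Apply A A^T itself through the same transforms (used only as a consistency check)."""
    xhat = fft(dctn(x, type=2, norm="ortho", axes=(0, 1, 3)), axis=2, norm="ortho")
    return idctn(ifft(d * xhat, axis=2, norm="ortho").real, type=2, norm="ortho", axes=(0, 1, 3))

b = np.random.default_rng(4).standard_normal((N1 - 1, N2 - 1, N3, P))
x = solve_poisson(b)
# A A^T x recovers b up to its component in the one-dimensional kernel (the constant mode)
assert np.allclose(apply_op(x), b - b.mean())
```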


Cite this article

Fitschen, J.H., Laus, F. & Steidl, G. Transport Between RGB Images Motivated by Dynamic Optimal Transport. J Math Imaging Vis 56, 409–429 (2016). https://doi.org/10.1007/s10851-016-0644-x
