Isotropic self-consistent equations for mean-field random matrices

Abstract

We present a simple and versatile method for deriving (an)isotropic local laws for general random matrices constructed from independent random variables. Our method is applicable to mean-field random matrices, where all independent variables have comparable variances. It is entirely insensitive to the expectation of the matrix. In this paper we focus on the probabilistic part of the proof—the derivation of the self-consistent equations. As a concrete application, we settle in complete generality the local law for Wigner matrices with arbitrary expectation.


References

  1. Ajanki, O., Erdős, L., Krüger, T.: Local eigenvalue statistics for random matrices with general short range correlations. Preprint arXiv:1604.08188

  2. Ajanki, O., Erdős, L., Krüger, T.: Quadratic vector equations on complex upper half-plane. Preprint arXiv:1506.05095

  3. Ajanki, O., Erdős, L., Krüger, T.: Singularities of solutions to quadratic vector equations on complex upper half-plane. Preprint arXiv:1512.03703

  4. Ajanki, O., Erdős, L., Krüger, T.: Universality for general Wigner-type matrices. Preprint arXiv:1506.05098

  5. Ajanki, O., Erdős, L., Krüger, T.: Local spectral statistics of Gaussian matrices with correlated entries. J. Stat. Phys. 163, 280–302 (2016)


  6. Anderson, G.W., Guionnet, A., Zeitouni, O.: An Introduction to Random Matrices, vol. 118. Cambridge University Press, Cambridge (2010)


  7. Bao, Z., Erdős, L., Schnelli, K.: Local law of addition of random matrices on optimal scale. Preprint arXiv:1509.07080

  8. Bao, Z., Erdős, L., Schnelli, K.: Convergence rate for spectral distribution of addition of random matrices. Preprint arXiv:1606.03076 (2016)

  9. Bauerschmidt, R., Knowles, A., Yau, H.-T.: Local semicircle law for random regular graphs. Preprint arXiv:1503.08702

  10. Benaych-Georges, F., Knowles, A.: Lectures on the local semicircle law for Wigner matrices. Preprint arXiv:1601.04055

  11. Bloemendal, A., Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Isotropic local laws for sample covariance and generalized Wigner matrices. Electron. J. Probab. 19, 1–53 (2014)


  12. Bloemendal, A., Knowles, A., Yau, H.-T., Yin, J.: On the principal components of sample covariance matrices. Probab. Theory Relat. Fields 164, 459–552 (2016)


  13. Bourgade, P., Huang, J., Yau, H.-T.: Eigenvector statistics of sparse random matrices. Preprint arXiv:1609.09022

  14. Bourgade, P., Yau, H.-T.: The eigenvector moment flow and local quantum unique ergodicity. Commun. Math. Phys. 350, 231–278 (2017)


  15. Che, Z.: Universality of random matrices with correlated entries. Preprint arXiv:1604.05709

  16. Boutet de Monvel, A., Khorunzhy, A.: Asymptotic distribution of smoothed eigenvalue density. II. Wigner random matrices. Random Oper. Stoch. Equ. 7, 149–168 (1999)

  17. Erdős, L., Krüger, T. (in preparation)

  18. Erdős, L., Knowles, A., Yau, H.-T.: Averaging fluctuations in resolvents of random band matrices. Ann. H. Poincaré 14, 1837–1926 (2013)


  19. Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Delocalization and diffusion profile for random band matrices. Commun. Math. Phys. 323, 367–416 (2013)


  20. Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: The local semicircle law for a general class of random matrices. Electron. J. Probab. 18, 1–58 (2013)


  21. Erdős, L., Knowles, A., Yau, H.-T., Yin, J.: Spectral statistics of Erdős-Rényi graphs I: local semicircle law. Ann. Probab. 41, 2279–2375 (2013)


  22. Erdős, L., Schlein, B., Yau, H.-T.: Local semicircle law and complete delocalization for Wigner random matrices. Commun. Math. Phys. 287, 641–655 (2009)


  23. Erdős, L., Yau, H.-T., Yin, J.: Universality for generalized Wigner matrices with Bernoulli distribution. J. Comb. 1(2), 15–85 (2011)


  24. Erdős, L., Yau, H.-T., Yin, J.: Bulk universality for generalized Wigner matrices. Probab. Theory Relat. Fields 154, 341–407 (2012)


  25. Erdős, L., Yau, H.-T., Yin, J.: Rigidity of eigenvalues of generalized Wigner matrices. Adv. Math. 229, 1435–1515 (2012)


  26. Girko, V.L.: Theory of Stochastic Canonical Equations, vol. 2. Springer, Berlin (2001)


  27. He, Y., Knowles, A.: Mesoscopic eigenvalue statistics of Wigner matrices. Preprint arXiv:1603.01499

  28. Helton, J.W., Far, R.R., Speicher, R.: Operator-valued semicircular elements: solving a quadratic matrix equation with positivity constraints. Int. Math. Res. Not. (2007). doi:10.1093/imrn/rnm086

  29. Khorunzhy, A.M., Khoruzhenko, B.A., Pastur, L.A.: Asymptotic properties of large random matrices with independent entries. J. Math. Phys. 37, 5033–5060 (1996)


  30. Knowles, A., Yin, J.: Anisotropic local laws for random matrices. Preprint arXiv:1410.3516

  31. Knowles, A., Yin, J.: The isotropic semicircle law and deformation of Wigner matrices. Commun. Pure Appl. Math. 66, 1663–1749 (2013)


  32. Lee, J.O., Schnelli, K.: Edge universality for deformed Wigner matrices. Rev. Math. Phys. 27, 1550018 (2015)


  33. Lee, J.O., Schnelli, K.: Local law and Tracy–Widom limit for sparse random matrices. Preprint arXiv:1605.08767

  34. Lee, J.O., Schnelli, K.: Local deformed semicircle law and complete delocalization for Wigner matrices with random potential. J. Math. Phys. 54, 103504 (2013)


  35. Lee, J.O., Schnelli, K., Stetler, B., Yau, H.-T.: Bulk universality for deformed Wigner matrices. Ann. Probab. 44, 2349–2425 (2016)


  36. Mehta, M.L.: Random Matrices. Academic Press, Cambridge (2004)


  37. Pastur, L.A., Shcherbina, M.: Eigenvalue Distribution of Large Random Matrices. Mathematical Surveys and Monographs, vol. 171. American Mathematical Society, Providence (2011)

  38. Pillai, N.S., Yin, J.: Universality of covariance matrices. Ann. Appl. Probab. 24, 935–1001 (2014)


  39. Voiculescu, D.V., Dykema, K.J., Nica, A.: Free Random Variables: A Noncommutative Probability Approach to Free Products with Applications to Random Matrices, Operator Algebras and Harmonic Analysis on Free Groups, vol. 1. American Mathematical Society, Providence (1992)


  40. Wigner, E.P.: Characteristic vectors of bordered matrices with infinite dimensions. Ann. Math. 62, 548–564 (1955)



Acknowledgements

We thank Torben Krüger for drawing our attention to the importance of the equation (1.3) for general mean-field matrix models. In a private communication, Torben Krüger also informed us that the idea of organizing proofs of local laws by bootstrapping is being developed independently for random matrices with general correlated entries in [17].

Author information

Correspondence to Antti Knowles.

Appendices

Appendix 1: Proof of Lemma 2.4

In this section we prove the cumulant expansion formula with the remainder term (2.3). We start with an elementary inequality.

Lemma 5.1

Let X be a nonnegative random variable with finite moments. Then for any \(a,b,t \geqslant 0\), we have

$$\begin{aligned} \mathbb {E}X^a \mathbb {E}\big [X^b \mathbf {1}_{X>t}\big ] \leqslant \mathbb {E}\big [X^{a+b}\mathbf {1}_{X>t}\big ]\,. \end{aligned}$$

Proof

It suffices to assume \(a>0\). Let us abbreviate \(\Vert X\Vert _{a}:=(\mathbb {E}X^a)^{1/a}\). For \(t \geqslant \Vert X\Vert _a\), we have

$$\begin{aligned} \mathbb {E}X^a \mathbb {E}\big [X^b \mathbf {1}_{X>t}\big ]\leqslant \mathbb {E}\big [ t^aX^b \mathbf {1}_{X>t}\big ]\leqslant \mathbb {E}\big [X^{a+b}\mathbf {1}_{X>t}\big ]\,, \end{aligned}$$
(5.1)

which is the desired result. For \(t < \Vert X\Vert _a\), we have

$$\begin{aligned} \mathbb {E}X^a \mathbb {E}\big [X^b \mathbf {1}_{X\leqslant t}\big ]\geqslant \mathbb {E}\big [t^aX^b \mathbf {1}_{X\leqslant t}\big ] \geqslant \mathbb {E}\big [X^{a+b}\mathbf {1}_{X\leqslant t}\big ]\,. \end{aligned}$$
(5.2)

Jensen’s (or Hölder’s) inequality yields

$$\begin{aligned} \mathbb {E}X^a \, \mathbb {E}X^b \leqslant \mathbb {E}X^{a+b}\,, \end{aligned}$$
(5.3)

and the claim follows from (5.2) and (5.3), using \(1 = \mathbf {1}_{X \leqslant t} + \mathbf {1}_{X > t}\).\(\square \)
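As a quick numerical sanity check of Lemma 5.1 (not part of the proof), one can verify the inequality exactly on a finite discrete law, where all expectations are finite sums. The distribution below is an arbitrary illustrative choice.

```python
import numpy as np

# Arbitrary finite discrete law for the nonnegative variable X.
vals = np.array([0.2, 0.7, 1.3, 2.5, 4.0])
probs = np.array([0.1, 0.3, 0.25, 0.25, 0.1])

def E(f):
    """Exact expectation of f(X) under the discrete law above."""
    return float(np.sum(f(vals) * probs))

def check(a, b, t):
    """Lemma 5.1: E[X^a] E[X^b 1_{X>t}] <= E[X^{a+b} 1_{X>t}]."""
    lhs = E(lambda x: x**a) * E(lambda x: x**b * (x > t))
    rhs = E(lambda x: x**(a + b) * (x > t))
    return lhs <= rhs + 1e-12  # tolerance for floating-point error

for a in [0.5, 1.0, 2.0, 3.0]:
    for b in [0.0, 1.0, 2.5]:
        for t in [0.0, 0.5, 1.0, 2.0, 3.5, 5.0]:
            assert check(a, b, t), (a, b, t)
print("Lemma 5.1 inequality holds on all tested (a, b, t)")
```

Note that the grid of thresholds t deliberately straddles \(\Vert X\Vert _a\), so both cases of the proof are exercised.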

Let \(\chi (t):=\log \mathbb E e^{\mathrm {i}t h}\). For \(n \geqslant 1\), we have

$$\begin{aligned} \partial ^n_t\big (e^{\chi (t)}\big )=\partial ^{n-1}_t\big (\chi '(t)e^{\chi (t)}\big )=\sum _{k=1}^n {n-1 \atopwithdelims ()k-1} \partial ^k_{t}\big (\chi (t)\big ) \partial ^{n-k}_{t} \big (e^{\chi (t)}\big )\,, \end{aligned}$$

hence

$$\begin{aligned} \mathbb E h^n = (-\mathrm {i})^n \partial ^n_{t} e^{\chi (t)}\big |_{t=0}= \sum \limits _{k=1}^n {n-1 \atopwithdelims ()k-1} \mathcal {C}_k(h) \mathbb E h^{n-k}\,. \end{aligned}$$

For \(g(h)=h^\ell \), we have

$$\begin{aligned} \mathbb {E}\big [h\cdot g(h)\big ] =\mathbb {E}h^{\ell +1} = \sum \limits _{k=1}^{\ell +1} {\ell \atopwithdelims ()k-1}\mathcal {C}_k(h) \mathbb {E}h^{\ell +1-k} = \sum \limits _{k=0}^{\ell } \frac{1}{k!} \mathcal {C}_{k+1}(h) \mathbb {E}\bigl [{g^{(k)}(h)}\bigr ]\,, \end{aligned}$$

and by linearity the same relation holds for any polynomial P of degree \(\leqslant \ell \):

$$\begin{aligned} \mathbb {E}\big [h\cdot P(h)\big ]=\sum _{k=0}^{\ell } \frac{1}{k!}\mathcal {C}_{k+1}(h)\mathbb {E}\big [ P^{(k)}(h)\big ]\,. \end{aligned}$$
(5.4)
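The moment–cumulant recursion and the identity (5.4) can both be checked numerically; the sketch below (an illustration, with an arbitrary finite discrete law for h and an arbitrary test polynomial P) computes the cumulants \(\mathcal {C}_k(h)\) from the moments via the recursion displayed above, then verifies (5.4) exactly up to floating-point error.

```python
import numpy as np
from math import comb, factorial

# Arbitrary finite discrete law for h (exact expectations, nontrivial cumulants).
vals = np.array([-1.0, 0.0, 0.5, 2.0])
probs = np.array([0.2, 0.3, 0.3, 0.2])

def E(f):
    return float(np.sum(f(vals) * probs))

ell = 5  # degree of the test polynomial P

# Moments m[n] = E h^n, then cumulants c[n] solved from the recursion
#   m_n = sum_{k=1}^n binom(n-1, k-1) c_k m_{n-k}.
m = [E(lambda x, n=n: x**n) for n in range(ell + 2)]
c = [0.0] * (ell + 2)
for n in range(1, ell + 2):
    c[n] = m[n] - sum(comb(n - 1, k - 1) * c[k] * m[n - k] for k in range(1, n))

# Check (5.4): E[h P(h)] = sum_{k=0}^ell  c_{k+1}/k! * E[P^{(k)}(h)].
P = np.polynomial.Polynomial([1.0, -2.0, 0.5, 3.0, 0.0, 1.0])  # degree 5
lhs = E(lambda x: x * P(x))
rhs = sum(c[k + 1] / factorial(k) * E(P.deriv(k) if k else P)
          for k in range(ell + 1))
assert abs(lhs - rhs) < 1e-9
print("(5.4) verified for a degree-5 polynomial")
```

The same check with a standard Gaussian h (where \(\mathcal {C}_2 = 1\) and all other cumulants vanish) reduces (5.4) to the familiar Stein identity \(\mathbb {E}[hP(h)] = \mathbb {E}[P'(h)]\).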

Next, let f be as in the statement of Lemma 2.4, and fix \(\ell \in \mathbb {N}\). By Taylor expansion we can find a polynomial P of degree at most \(\ell \), such that for any \(0 \leqslant k \leqslant \ell \),

$$\begin{aligned} f^{(k)}(h)=P^{(k)}(h)+\frac{1}{(\ell +1-k)!}f^{(\ell +1)}(\xi _k)h^{\ell +1-k}\,, \end{aligned}$$
(5.5)

where \(\xi _k\equiv \xi _k(h)\) is a random variable taking values between 0 and h.

By (5.4), (5.5), homogeneity of the cumulants, and Jensen’s inequality we find that the error term in (2.2) satisfies

$$\begin{aligned} \mathcal {R}_{\ell +1}= & {} \mathbb {E}\big [h\cdot f(h)\big ]-\sum _{k=0}^{\ell }\frac{1}{k!}\mathcal {C}_{k+1}(h)\mathbb {E}\big [f^{(k)}(h)\big ]\nonumber \\= & {} \mathbb {E}\big [h\cdot (f(h)-P(h))\big ]-\sum _{k=0}^{\ell }\frac{1}{k!}\mathcal {C}_{k+1}(h)\mathbb {E}\big [f^{(k)}(h)-P^{(k)}(h)\big ]\nonumber \\= & {} \frac{1}{(\ell +1)!}\mathbb {E}\big [f^{(\ell +1)}(\xi _0)\cdot h^{\ell +2}\big ]\nonumber \\&-\sum _{k=0}^{\ell }\frac{1}{k!(\ell +1-k)!}\mathcal {C}_{k+1}(h)\mathbb {E}\big [f^{(\ell +1)}{(\xi _k)}\cdot h^{\ell +1-k}\big ]\nonumber \\\leqslant & {} O(1)\cdot \sum _{k=0}^{\ell +1} \mathbb {E}|h|^k \cdot \mathbb {E}\Big [\sup _{|x|\leqslant |h|}\big |f^{(\ell +1)}(x)\big |\cdot |h|^{\ell +2-k} \Big ]\nonumber \\\leqslant & {} O(1)\cdot \mathbb {E}|h|^{\ell +2}\cdot \sup _{|x|\leqslant t}\big |f^{(\ell +1)}(x)\big | + O(1)\cdot \sum _{k=0}^{\ell +1} \mathbb {E}|h|^k \nonumber \\&\cdot \mathbb {E}\Big [\sup _{|x|\leqslant |h|}\big |f^{(\ell +1)}(x)\big |\cdot |h|^{\ell +2-k} \cdot \mathbf {1}_{|h|>t}\Big ]\,. \end{aligned}$$
(5.6)

The desired result then follows by estimating the last term of (5.6) using the Cauchy–Schwarz inequality and Lemma 5.1.

Appendix 2: The complex case

In this appendix we explain how to generalize our results to the complex case. We sketch the proof of Theorem 1.5 (i); Theorem 1.5 (ii) follows in a similar fashion.

If H is complex Hermitian, in general \(\mathbb E H^2_{ij}\) and \(\mathbb E |H_{ij}|^2\) are different. We define, in addition to (3.1),

$$\begin{aligned} t_{ij}:=(1+\delta _{ij})^{-1}N\mathbb E H^{2}_{ij}\,, \end{aligned}$$
(6.1)

and, in addition to (3.2), for a vector \({\mathbf{x}}=(x_i)_{i \in [\![{N}]\!]} \in \mathbb C^N\),

$$\begin{aligned} \underline{{\mathbf{x}}} \!\,^{j} = (\underline{x} \!\,^{j}_i)_{i\in [\![{N}]\!]} \,, \qquad \text { where } \quad \underline{x} \!\,^{j}_i :=x_i {t}_{ij}\,. \end{aligned}$$
(6.2)

Now (3.11) generalizes to

$$\begin{aligned}&(\mathcal {S}(G)G)_{{\mathbf{v}}{\mathbf{w}}} = \mathcal {J}_{{\mathbf{v}} {\mathbf{w}}} + \mathcal {K}_{{\mathbf{v}} {\mathbf{w}}} \,, \qquad \mathcal {J}_{{\mathbf{v}} {\mathbf{w}}} :=\frac{1}{N}\sum _{j} G_{j \overline{{\underline{{\mathbf{v}}} \!\,^{j}}}}G_{j{\mathbf{w}}}\,, \nonumber \\&\quad \mathcal {K}_{{\mathbf{v}} {\mathbf{w}}} :=\frac{1}{N}\sum _{j} G_{jj} G_{ {\mathbf{v}}^{j}{\mathbf{w}}}\,, \end{aligned}$$
(6.3)

and \(\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}\) remains the same as in (3.12).

Arguing as in Sect. 3.1, we see that Proposition 3.1 suffices to conclude Theorem 1.5 (i). As in (3.21), we fix a constant \(p \in \mathbb N\) and write

$$\begin{aligned} \mathbb {E}\left[ |\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}|^{2p}\right] =\mathbb {E}\left[ \mathcal {K}_{{\mathbf{v}}{\mathbf{w}}}\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\right] +\sum _{i,j}{v}_i\mathbb {E}\left[ H_{ij}G_{j{\mathbf{w}}}\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\right] \,. \end{aligned}$$
(6.4)

We compute the second term on the right-hand side of (6.4) using the complex cumulant expansion, given in [27, Lemma 7.1], which replaces the real cumulant expansion from Lemma 2.4. The result is

$$\begin{aligned} \mathbb {E}|\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}|^{2p} =\mathbb {E}\left[ \mathcal {K}_{{\mathbf{v}}{\mathbf{w}}}\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\right] +\sum _{k=1}^{\ell }\tilde{X}_k+\sum _{i,j}{v}_i \tilde{\mathcal {R}}_{\ell +1}^{ij}\,, \end{aligned}$$
(6.5)

where \(\tilde{X}_k\) and \(\tilde{\mathcal {R}}^{ij}_{\ell +1}\) are defined analogously to (3.24) and (3.25), respectively. The same proof extends Lemma 3.4 (ii)–(iii) with \(X_k\) and \(\mathcal {R}^{ij}_{\ell +1}\) replaced by \(\tilde{X}_k\) and \(\tilde{\mathcal {R}}^{ij}_{\ell +1}\), so that it remains to estimate \(\mathbb {E}[\mathcal {K}_{{\mathbf{v}}{\mathbf{w}}}\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p]+\tilde{X}_1\). Applying the complex cumulant expansion again, we find

$$\begin{aligned} \begin{aligned} \tilde{X}_1 =&\sum _{i,j}\Big ({v}_{i}\,\mathbb {E}|H_{ij}|^2\,\mathbb {E}\big [\partial _{ji}\big (G_{j{\mathbf{w}}}\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\big )\big ]\\&+{v}_{i}\,\mathbb {E}[H_{ij}^2]\,\mathbb {E}\big [\partial _{ij}\big (G_{j{\mathbf{w}}}\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\big )\big ]\Big )(1+\delta _{ij})^{-1}\\ =&\sum _{i,j}\Big ({v}_{i}\,\mathbb {E}|H_{ij}|^2\,\mathbb {E}\big [G_{j{\mathbf{w}}}\partial _{ji}\big (\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\big )\big ]\\&+{v}_{i}\,\mathbb {E}[H_{ij}^2]\,\mathbb {E}\big [G_{j{\mathbf{w}}}\partial _{ij}\big (\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\big )\big ]\Big )(1+\delta _{ij})^{-1}\\&\quad -\mathbb {E}\big [(\mathcal {S}(G)G)_{\mathbf {v}\mathbf {w}}\mathcal {D}_{\mathbf {v}\mathbf {w}}^{p-1}\overline{\mathcal {D}}_{\mathbf {v}\mathbf {w}}^{p}\big ]\,, \end{aligned} \end{aligned}$$

where in the second step we used (6.3) and

$$\begin{aligned} \frac{\partial G_{ij}}{\partial H_{kl}}=-G_{ik}G_{lj}\,. \end{aligned}$$
(6.6)

(Here we take the conventions of [27, Section 7] for the derivatives in the complex entries of H.) Thus we have
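The resolvent derivative identity (6.6) can be checked by a finite-difference sketch; here a single entry \(H_{kl}\) is perturbed while its conjugate is held fixed, consistent with treating \(H_{kl}\) and \(\overline{H}_{kl}\) as independent variables. The matrix size, spectral parameter, and indices below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, z = 6, 0.3 + 1.0j  # Im z = 1 keeps H - z invertible
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / 2  # complex Hermitian

def G_of(M):
    """Resolvent G(z) = (M - z)^{-1}."""
    return np.linalg.inv(M - z * np.eye(N))

G = G_of(H)
k, l, i, j = 1, 4, 2, 5
eps = 1e-7

# Perturb the single entry H_kl, holding its conjugate H_lk fixed.
Hp = H.copy()
Hp[k, l] += eps
fd = (G_of(Hp)[i, j] - G[i, j]) / eps  # finite-difference derivative

# Compare with the identity dG_ij/dH_kl = -G_ik G_lj.
assert abs(fd - (-G[i, k] * G[l, j])) < 1e-5
print("dG_ij/dH_kl = -G_ik G_lj verified to finite-difference accuracy")
```

The identity itself is the first-order term of the resolvent expansion \((M + \varepsilon E_{kl} - z)^{-1} = G - \varepsilon\, G E_{kl} G + O(\varepsilon^2)\).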

$$\begin{aligned} \mathbb {E}[\mathcal {K}_{{\mathbf{v}}{\mathbf{w}}}\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p]+\tilde{X}_1= & {} -\mathbb {E}\left[ \mathcal {J}_{{\mathbf{v}}{\mathbf{w}}}\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\right] \nonumber \\&+ \sum _{i,j}\Big ({v}_{i}\,\mathbb {E}|H_{ij}|^2\,\mathbb {E}\big [G_{j{\mathbf{w}}}\partial _{ji}\big (\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\big )\big ]\nonumber \\&+{v}_{i}\,\mathbb {E}[H_{ij}^2]\,\mathbb {E}\big [G_{j{\mathbf{w}}}\partial _{ij}\big (\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\big )\big ]\Big )(1+\delta _{ij})^{-1}.\qquad \qquad \end{aligned}$$
(6.7)

By estimating the right-hand side of (6.7) in the same way as the right-hand side of (3.27), we obtain

$$\begin{aligned} \mathbb {E}\left[ \mathcal {K}_{{\mathbf{v}}{\mathbf{w}}}\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}^{p-1}\overline{\mathcal {D}}_{{\mathbf{v}}{\mathbf{w}}}^p\right] +\tilde{X}_1 = O_\prec (\widetilde{\zeta }) \cdot \mathbb {E}|\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}|^{2p-1} +O_\prec (\widetilde{\zeta }\lambda )\cdot \mathbb {E}|\mathcal {D}_{{\mathbf{v}}{\mathbf{w}}}|^{2p-2}\,, \end{aligned}$$

which completes the proof.


Cite this article

He, Y., Knowles, A. & Rosenthal, R. Isotropic self-consistent equations for mean-field random matrices. Probab. Theory Relat. Fields 171, 203–249 (2018). https://doi.org/10.1007/s00440-017-0776-y


Mathematics Subject Classification

  • 15B52
  • 82B44
  • 82C44