
Embedding surfaces inside small domains with minimal distortion


Abstract

Given two-dimensional Riemannian manifolds \(\mathcal {M},\mathcal {N}\), we prove a lower bound on the distortion of embeddings \(\mathcal {M}\rightarrow \mathcal {N}\), in terms of the areas’ discrepancy \(V_{\mathcal {N}}/V_{\mathcal {M}}\), for a certain class of distortion functionals. For \(V_{\mathcal {N}}/V_{\mathcal {M}} \ge 1/4\), homotheties, provided they exist, are the unique energy minimizing maps attaining the bound, while for \(V_{\mathcal {N}}/V_{\mathcal {M}} \le 1/4\), there are non-homothetic minimizers. We characterize the maps attaining the bound, and construct explicit non-homothetic minimizers between disks. We then prove stability results for the two regimes. We end by analyzing other families of distortion functionals. In particular we characterize a family of functionals where no phase transition in the minimizers occurs; homotheties are the energy minimizers for all values of \(V_{\mathcal {N}}/V_{\mathcal {M}}\), provided they exist.



Acknowledgements

We thank Stefan Müller for suggesting the proof that asymptotically conformal maps converge to a conformal map. We thank Connor Mooney for suggesting the use of a concave function \(\psi \) in Proposition 2.6, and Dmitri Panov for suggesting the area-preserving flow example \(\phi _c\) in (2.6). We thank Fedor Petrov for a suggested proof of a convexity result. We thank Nadav Dym for suggesting a proof that no phase transition occurs for some energy functionals. We thank Cy Maor for providing many helpful insights along the way. Finally, we thank Raz Kupferman for carefully reading this manuscript, and for suggesting various improvements during the research process.

This research was partially supported by the Israel Science Foundation (Grant No. 1035/17), and by a grant from the Ministry of Science, Technology and Space, Israel and the Russian Foundation for Basic Research, the Russian Federation.


Corresponding author

Correspondence to Asaf Shachar.

Additional information

Communicated by J. M. Ball.


Appendices

A Convexity results

Proof

[of Lemma 2.3] The strict convexity of F in \([a,\infty )\) implies that for \(x>a\ge y\),

$$\begin{aligned} F\left( t x + (1-t) y\right) = tF(x) + (1-t)F(y) \Rightarrow t\in \{0,1\}. \end{aligned}$$

Denote \(A = g^{-1}((a,\infty ))\). If \(\mu (A)=0\) or \(\mu (A^c)=0\), we are done: if \(\mu (A)=1\), then \(g>a\) a.e., so we are in the domain where F is strictly convex; if \(\mu (A)=0\), then \(g \le a\) a.e., and the only case we need to check is \(\int _X g \in [a,\infty )\), which forces \(\int _X g =a\), hence \(g=a\) a.e.

We show that \(0<\mu (A)<1\) cannot occur. Denote \(x=\fint _A g\), and \(y=\fint _{A^c} g\); then \(y\le a<x\) and \(\int _X g= \mu (A)x + \mu (A^c)y\). Thus, using the equality assumption and Jensen’s inequality,

$$\begin{aligned} F\left( \int _X g\right) = \int _X F\circ g = \int _A F\circ g + \int _{A^c} F\circ g \ge \mu (A)F(x) + \mu (A^c)F(y)\ge F\left( \int _X g\right) . \end{aligned}$$

Therefore equality holds, so the comment at the beginning of the proof implies that either \(\mu (A)=1\) or \(\mu (A^c)=1\). \(\square \)

The following lemma is a variant of Jensen's inequality for functions that are convex only on part of their domain.

Lemma A.1

Let \(F:(0,\infty ) \rightarrow [0,\infty )\). Assume that \(F|_{(0,1]}\) is strictly decreasing and convex, with \(F^{-1}(0)=\{1\}\).

Let \(g:X \rightarrow (0,\infty )\) be a measurable function defined on a probability space X with \(\int _X g \in (0,1]\). Then \(F(\int _X g) \le \int _X F\circ g\), and if equality occurs then \(g \in (0,1]\) a.e. If \(F|_{(0,1]}\) is strictly convex, equality occurs if and only if g is constant a.e.

Equivalently: F is convex at every point of (0, 1].

Proof

Set \(F^*(x)=F(\min (x,1))\); then \(F^*\le F\) pointwise, and \(F^*(x)=F(x)\) for \(x\in (0,1]\). Since \(\int _X g\in (0,1]\), \(F\left( \int _X g\right) =F^*\left( \int _X g\right) .\) Since \(F^*\) is convex (as we show below),

$$\begin{aligned} F\left( \int _X g\right) =F^*(\int _X g)\le \int _X F^*{\circ } g \le \int _X F{\circ } g \end{aligned}$$

as desired. If equality holds, then \(F^*{\circ } g=F {\circ } g\) a.e., so \(g \in (0,1]\) a.e. If \(F|_{(0,1]}\) is strictly convex, then equality occurs if and only if g is constant a.e.

It remains to prove that \(F^*\) is convex (just draw it!).

We need to show that \(F^*(t x + (1-t) y) \le tF^*(x) + (1-t)F^*(y)\). If \(t x + (1-t) y \ge 1\), then the LHS vanishes, so we are done. Thus, suppose that \(t x + (1-t) y \le 1\). If x, y are both at most 1, then the assertion is just the convexity of \(F|_{(0,1]}\).

So, we may assume without loss of generality that \(0<x\le 1< y\) and \(t x + (1-t) y \le 1\). These assumptions imply \(1\ge t x + (1-t) y \ge t x + (1-t) \cdot 1 \), hence

$$\begin{aligned} F\left( t x + (1-t) y\right) \le F\left( t x + (1-t) 1\right) \le tF(x) + (1-t)F(1) =tF(x), \end{aligned}$$

where the second inequality is due to the convexity of \(F|_{(0,1]}\). Thus

$$\begin{aligned} F^*\left( t x + (1-t) y \right) =F\left( t x + (1-t) y\right) \le tF(x)= tF^*(x)=tF^*(x) + (1-t)F^*(y). \end{aligned}$$

\(\square \)
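As an optional numerical sanity check of Lemma A.1 (a minimal numpy sketch added here; the test function F below is our own choice, not one used in the paper), one can test the inequality on a finite probability space with an F that is strictly decreasing and convex on \((0,1]\), satisfies \(F^{-1}(0)=\{1\}\), but is not convex on all of \((0,\infty )\):

```python
import numpy as np

def F(x):
    # Strictly decreasing and convex on (0,1], F^{-1}(0) = {1},
    # but concave (hence not convex) on [1, infinity).
    x = np.asarray(x, dtype=float)
    return np.where(x <= 1.0, (1.0 - x) ** 2, 1.0 - np.exp(1.0 - x))

rng = np.random.default_rng(0)
checked = 0
while checked < 10000:
    w = rng.dirichlet(np.ones(6))          # probability weights on a 6-point space X
    g = rng.uniform(0.01, 3.0, size=6)     # g may well exceed 1 on part of X
    m = float(w @ g)                       # plays the role of \int_X g
    if m > 1.0:
        continue                           # the lemma assumes \int_X g in (0, 1]
    assert F(m) <= w @ F(g) + 1e-12        # F(\int_X g) <= \int_X (F o g)
    checked += 1
```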

Proof

[of Lemma 5.10] We prove the claim under the assumption that F is convex on \((1-\epsilon ,1]\); the proof for the case of strict convexity is identical. Since \(F|_{[1-\epsilon ,1]}\) is (strictly) convex, we have

$$\begin{aligned} F(x) \ge T_y(x):=F(y)+F_-'(y) (x-y) \, \, \, \text { for every } \, \, x,y \in [1-\epsilon ,1]. \end{aligned}$$
(A.1)

Let \(y \in [1-\epsilon ,1]. \) Since F is decreasing on (0, 1], \(F_-'(y) \le 0\), so \(x \mapsto T_y(x)\) is decreasing. Since \(T_y(1) \le F(1)= 0\), \(T_y(x) \le 0 \le F(x)\) for every \(x \ge 1\), so Inequality (A.1) holds for every \(x \in [1-\epsilon ,\infty )\) and every \(y \in [1-\epsilon ,1]. \)

Let \(x \in (0,1-\epsilon ]\). For every \(y <1\) sufficiently close to 1, we have

$$\begin{aligned} F(x) \ge F(1-\epsilon ) {\mathop {\ge }\limits ^{(1)}} F(y)+|F_-'(y)|\ge F(y)+F_-'(y) (x-y)= T_y(x), \end{aligned}$$

where inequality (1) follows from \(\lim _{y \rightarrow 1^-} F_-'(y)=F_-'(1)=0\) together with \(\lim _{y \rightarrow 1^-}F(y)=F(1)=0\). (We used the fact that the left derivative of a convex function is left-continuous.) Thus, we proved that for \(y <1\) sufficiently close to 1, Inequality (A.1) holds for every \(x \in (0,\infty )\), which means that \(F=F|_{(0,\infty )}\) is convex at all such y. \(\square \)

B Proof of Lemma 3.4

Lemma 3.4 is concerned with the distance of a matrix to the set K. We will therefore need the following claim:

Proposition B.1

Let \(A \in M_2\) with \( \det A \ge 0\), and let \(\sigma _1\le \sigma _2\) be its singular values. Then

$$\begin{aligned} {\text {dist}}^2(A,K)={\left\{ \begin{array}{ll} \frac{1}{2} \left( \sigma _1+\sigma _2-1\right) ^2, &{} \text { if }\, \sigma _2 \le \sigma _1 + 1 \\ \sigma _1^2+\left( \sigma _2-1 \right) ^2 , &{} \text { if }\,\sigma _2 \ge \sigma _1 + 1 \end{array}\right. } \end{aligned}$$
(B.1)

We first use Proposition B.1 to prove Lemma 3.4, and then prove Proposition B.1 itself.

Proof

[Of Lemma 3.4] Let \(\sigma _1,\sigma _2\) be the singular values of A. Then

$$\begin{aligned}&{\text {dist}}^2(A,{\text {SO}}_2)-(1-2\det A) \nonumber \\&\quad = (\sigma _1-1)^2+(\sigma _2-1)^2-(1-2\sigma _1 \sigma _2) \nonumber \\&\quad = \sigma _1^2+\sigma _2^2+1-2\sigma _1-2\sigma _2+2\sigma _1 \sigma _2 \nonumber \\&\quad =\left( \sigma _1+\sigma _2-1\right) ^2. \end{aligned}$$
(B.2)

The presence of the sum \(\sigma _1+\sigma _2\) is no coincidence here! We represented the symmetric polynomial \( P(\sigma _1,\sigma _2)=(\sigma _1-1)^2+(\sigma _2-1)^2\) as a polynomial in \(\sigma _1 \sigma _2\) and \(\sigma _1+\sigma _2\): \((x-1)^2+(y-1)^2=(1-2xy)+\left( x+y-1\right) ^2.\)

Comment: Equation (B.2) implies that \({\text {dist}}^2(A,{\text {SO}}_2) \ge 1-2\det A\) and equality holds exactly when \(\sigma _1+\sigma _2=1\). This gives another proof for Lemma 2.1 in the regime where \(\det A \le 1/4\).

Equation (B.2) and Proposition B.1 imply that if \(\sigma _2 \le \sigma _1 + 1\) then

$$\begin{aligned} {\text {dist}}^2(A,{\text {SO}}_2)=(1-2\det A) +2{\text {dist}}^2(A,K). \end{aligned}$$

Suppose that \(\sigma _2 \ge \sigma _1 + 1\). Setting \(x=\sigma _1\) and \(y=\sigma _2-1\), we have \(x,y \ge 0\), hence

$$\begin{aligned} x^2+y^2 \le (x+y)^2 \le 2(x^2+y^2). \end{aligned}$$

Equations (B.2) and (B.1) imply that \((x+y)^2={\text {dist}}^2(A,{\text {SO}}_2)-(1-2\det A)\) and \(x^2+y^2={\text {dist}}^2(A,K)\) which completes the proof. \(\square \)
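Comment (an added numerical illustration): for \(A={\text {diag}}(0.3,0.9)\) we have \(\sigma _2 \le \sigma _1+1\), \(\det A=0.27\), \({\text {dist}}^2(A,{\text {SO}}_2)=0.49+0.01=0.5\) and \({\text {dist}}^2(A,K)=\frac{1}{2}(0.2)^2=0.02\), and indeed

$$\begin{aligned} {\text {dist}}^2(A,{\text {SO}}_2)=0.5=(1-2\cdot 0.27)+2\cdot 0.02. \end{aligned}$$

For \(A={\text {diag}}(0.1,2)\) we have \(\sigma _2 \ge \sigma _1+1\), and with \(x=0.1\), \(y=1\): \({\text {dist}}^2(A,{\text {SO}}_2)-(1-2\det A)=1.81-0.6=1.21=(x+y)^2\) and \({\text {dist}}^2(A,K)=x^2+y^2=1.01\), so that \(1.01 \le 1.21 \le 2.02\), as claimed.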

B.1 Computing \({\text {dist}}(.,K)\)

We prove Proposition B.1. By Definition 1.2 \(K=\cup _{0\le s\le \frac{1}{4}}K_s\); we use the following lemma (that we prove below):

Lemma B.2

Let \(0\le \sigma _1 \le \sigma _2\), and let \(A \in M_2\) satisfy \(\det A \ge 0\). Then

$$\begin{aligned} {\text {dist}}^2(A,K_{\sigma _1,\sigma _2})=\left( \sigma _1(A)-\sigma _1 \right) ^2+\left( \sigma _2(A)-\sigma _2 \right) ^2. \end{aligned}$$
(B.3)

Proof

[Of Proposition B.1]

Define \(F:[0,\frac{1}{4}] \rightarrow [0,\infty )\) by \(F(s)={\text {dist}}^2(A,K_s)\). Since \(K=\cup _{0\le s\le \frac{1}{4}}K_s\),

$$\begin{aligned} {\text {dist}}^2(A,K)=\min _{0 \le s \le \frac{1}{4}}{\text {dist}}^2(A, K_s)=\min _{0 \le s \le \frac{1}{4}}F(s). \end{aligned}$$

We prove that \(\min _{0 \le s \le \frac{1}{4}} F(s)\) equals the RHS of Equation (B.1).

Given \(0 \le s \le 1/4\), let \(\sigma _1(s) \le \sigma _2(s)\) be the unique numbers satisfying

$$\begin{aligned} \sigma _1(s)\sigma _2(s)=s,\,\,\,\sigma _1(s)+\sigma _2(s)=1. \end{aligned}$$

Since \(K_s=K_{\sigma _1(s),\sigma _2(s)}\), Lemma B.2 implies that

$$\begin{aligned} F(s)=\left( \sigma _1-\sigma _1(s) \right) ^2+\left( \sigma _2-\sigma _2(s) \right) ^2. \end{aligned}$$

Note that \(\sigma _1(s)=\frac{1}{2} - \frac{\sqrt{1-4s}}{2}\) and \(\sigma _2(s)=\frac{1}{2} + \frac{\sqrt{1-4s}}{2}\) (these are the roots of \(t^2-t+s=0\)); thus \(\sigma _i(s), F(s)\) are smooth functions of s on \([0,\frac{1}{4})\) and continuous on \([0,\frac{1}{4}]\). We shall use the following lemma, which we prove after completing the current proof.

Lemma B.3

The function \(F(s)={\text {dist}}^2(A,K_s)\) has a critical point \(0\le s^* < \frac{1}{4}\) if and only if \(\sigma _1<\sigma _2 \le \sigma _1+1\). When such a critical point exists, it is unique and satisfies \(F(s^*)=\frac{1}{2} \left( \sigma _1+\sigma _2-1\right) ^2\).

Next, we claim that if \(\sigma _1<\sigma _2 \le \sigma _1+1\), then \(\min _{0 \le s \le \frac{1}{4}} F(s)=F(s^*)\). (One can prove that F is convex, so any critical point is a global minimum, but we won’t do that.) The possible candidates for minimum points are interior critical points \(s \in (0,1/4)\), and the endpoints \(0,1/4\). Thus, we need to show that

$$\begin{aligned} F(0) \ge \frac{1}{2} \left( \sigma _1+\sigma _2-1\right) ^2, F\left( \frac{1}{4}\right) \ge \frac{1}{2} \left( \sigma _1+\sigma _2-1\right) ^2. \end{aligned}$$

Since \(\sigma _1(\frac{1}{4})=\sigma _2(\frac{1}{4})=\frac{1}{2}\), \(F( \frac{1}{4})=\sigma _1^2+\sigma _2^2+\frac{1}{2}-\sigma _1-\sigma _2\); thus

$$\begin{aligned}&F\left( \frac{1}{4}\right) \ge \frac{1}{2} \left( \sigma _1+\sigma _2-1\right) ^2 \\&\quad \iff \sigma _1^2+\sigma _2^2+1-2\sigma _1-2\sigma _2+2\sigma _1\sigma _2 \le 2\sigma _1^2+2\sigma _2^2+1-2\sigma _1-2\sigma _2 \\&\quad \iff 0 \le \sigma _1^2+\sigma _2^2-2\sigma _1\sigma _2=(\sigma _1-\sigma _2)^2. \end{aligned}$$

Since \(\sigma _1(0)=0,\sigma _2(0)=1\), \(F(0)=\sigma _1^2+\sigma _2^2-2\sigma _2+1.\) Thus,

$$\begin{aligned}&F( 0) \ge \frac{1}{2} \left( \sigma _1+\sigma _2-1\right) ^2 \\&\quad \iff \sigma _1^2+\sigma _2^2+1-2\sigma _1-2\sigma _2+2\sigma _1\sigma _2 \le 2\sigma _1^2+2\sigma _2^2-4\sigma _2+2 \\&\quad \iff -2\sigma _1-2\sigma _2+2\sigma _1\sigma _2 \le \sigma _1^2+\sigma _2^2-4\sigma _2+1 \\&\quad \iff 2(\sigma _2-\sigma _1) \le (\sigma _2-\sigma _1)^2+1, \end{aligned}$$

which always holds since \(2x \le x^2+1\) holds for every real x.

If \(\sigma _1=\sigma _2\) or \(\sigma _2 > \sigma _1 + 1\), then F has no critical points. In these cases, all that is left to do is to compare F(0) and \(F( 1/4)\). A direct computation (spelled out in the comment below) shows that \(F(0) \le F( \frac{1}{4})\) if and only if \(\sigma _2 \ge \sigma _1 + 1/2\), from which the conclusion follows. \(\square \)
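Comment: The direct computation referred to above is short: using \(F(0)=\sigma _1^2+(\sigma _2-1)^2\) and \(F(\frac{1}{4})=(\sigma _1-\frac{1}{2})^2+(\sigma _2-\frac{1}{2})^2\),

$$\begin{aligned} F(0)-F\left( \frac{1}{4}\right) =\left( \sigma _1^2+\sigma _2^2-2\sigma _2+1\right) -\left( \sigma _1^2+\sigma _2^2-\sigma _1-\sigma _2+\tfrac{1}{2}\right) =\sigma _1-\sigma _2+\tfrac{1}{2}, \end{aligned}$$

so \(F(0) \le F(\frac{1}{4})\) if and only if \(\sigma _2 \ge \sigma _1 + \frac{1}{2}\).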

Proof

[Of Lemma B.3]

$$\begin{aligned} \frac{1}{2}F'(s)=-\left( \sigma _1-\sigma _1(s)\right) \sigma _1'(s)-\left( \sigma _2-\sigma _2(s)\right) \sigma _2'(s). \end{aligned}$$

Since \(\sigma _1(s)+\sigma _2(s)=1 \Rightarrow \sigma _1'(s)=-\sigma _2'(s)\), we get

$$\begin{aligned} \frac{1}{2}F'(s)=\sigma _1'(s)\left( \Delta \sigma - \Delta \sigma (s)\right) , \end{aligned}$$

where \(\Delta \sigma :=\sigma _2-\sigma _1, \Delta \sigma (s):=\sigma _2(s)-\sigma _1(s)\).

Since \(\sigma _1'(s)>0\), \(F'(s)=0\) if and only if \(\Delta \sigma = \Delta \sigma (s)\). Since \(s \mapsto \Delta \sigma (s)=\sqrt{1-4s}\) is strictly decreasing, the uniqueness of the critical point is established. We now turn to existence. The condition \(\sigma _1 < \sigma _2 \le \sigma _1+1\) is necessary: \(0 \le \sigma _i(s) \le 1\) implies that if \(F'(s^*)=0\) then \(\sigma _2-\sigma _1= \Delta \sigma (s^*) \le 1\). Furthermore, since \(s^* < \frac{1}{4}\), \( \Delta \sigma (s^*)>0\), so the equality \(\Delta \sigma = \Delta \sigma (s^*)\) implies that \(\sigma _1<\sigma _2\).

To prove sufficiency, note that for every \(0 < r \le 1\), there exists \(s \in [0,\frac{1}{4})\) satisfying \(\Delta \sigma (s)=r\). Now, let \(s^*\) be the critical point. Since \(\sigma _1(s^*)+\sigma _2(s^*)=1\),

$$\begin{aligned} -2\sigma _1(s^*)=-1+\sigma _2(s^*)-\sigma _1(s^*)=\sigma _2-\sigma _1-1, \end{aligned}$$

thus

$$\begin{aligned} \sigma _1-\sigma _1(s^*)=\frac{1}{2} \left( 2\sigma _1-2\sigma _1(s^*)\right) =\frac{1}{2} \left( 2\sigma _1+\sigma _2-\sigma _1-1\right) =\frac{1}{2} \left( \sigma _1+\sigma _2-1\right) , \end{aligned}$$

so

$$\begin{aligned} F(s^*)=2\left( \sigma _1-\sigma _1(s^*) \right) ^2=\frac{1}{2} \left( \sigma _1+\sigma _2-1\right) ^2, \end{aligned}$$

where in the first equality we have used the implication \(F'(s^*)=0 \Rightarrow \sigma _2-\sigma _2(s^*) = \sigma _1-\sigma _1(s^*)\). This completes the proof. \(\square \)
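As an optional numerical sanity check of (B.1) (a minimal numpy sketch added here; the helper names are ours), one can minimize \({\text {dist}}^2(A,K_s)\) over a fine grid in the parameter \(r=\sqrt{1-4s}\in [0,1]\), using the reduction of Lemma B.2 and the formulas for \(\sigma _i(s)\), and compare with the closed form:

```python
import numpy as np

def dist2_K_closed(s1, s2):
    # Closed form (B.1); s1 <= s2 are the singular values of A.
    return 0.5 * (s1 + s2 - 1) ** 2 if s2 <= s1 + 1 else s1 ** 2 + (s2 - 1) ** 2

rng = np.random.default_rng(0)
r = np.linspace(0.0, 1.0, 100001)   # grid over r = sqrt(1 - 4s), i.e. s in [0, 1/4]
for _ in range(500):
    A = rng.normal(size=(2, 2))
    if np.linalg.det(A) < 0:
        A[0] *= -1.0                # flip a row so that det A >= 0
    s2, s1 = np.linalg.svd(A, compute_uv=False)   # numpy returns them in descending order
    # Lemma B.2: dist^2(A, K_s) = (s1 - sigma_1(s))^2 + (s2 - sigma_2(s))^2,
    # with sigma_1(s) = (1 - r)/2 and sigma_2(s) = (1 + r)/2.
    grid_min = np.min((s1 - (1 - r) / 2) ** 2 + (s2 - (1 + r) / 2) ** 2)
    assert abs(grid_min - dist2_K_closed(s1, s2)) < 1e-8
```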

Proof

[Of Lemma B.2]

Given \(X \in K_{\sigma _1,\sigma _2}\), we have

$$\begin{aligned} |A-X|^2=|A|^2 + |X|^2 -2\langle A,X\rangle . \end{aligned}$$

Since |A| and \(|X|^2=\sigma _1^2+\sigma _2^2\) are constant, we need to maximize \(X \mapsto \langle A,X\rangle \) over \(X \in K_{\sigma _1,\sigma _2}\). By Von Neumann’s trace inequality,

$$\begin{aligned} \langle A,X\rangle ={\text {tr}}(A^TX) \le \sigma _1(A)\sigma _1+\sigma _2(A)\sigma _2. \end{aligned}$$

It remains to show that this upper bound is realized by some \(X \in K_{\sigma _1,\sigma _2}\). Using the bi-\({\text {SO}}_2\) invariance, we may assume that \(A={\text {diag}}(\sigma _1(A),\sigma _2(A))\) is positive semidefinite and diagonal; taking \(X={\text {diag}}(\sigma _1,\sigma _2)\) then realizes the bound. Thus

$$\begin{aligned} {\text {dist}}^2(A,K_{\sigma _1,\sigma _2})&=\sigma _1(A)^2+\sigma _2(A)^2+\sigma _1^2+\sigma _2^2-2\sigma _1(A)\sigma _1-2\sigma _2(A)\sigma _2 \\&=\left( \sigma _1(A)-\sigma _1 \right) ^2+\left( \sigma _2(A)-\sigma _2 \right) ^2. \end{aligned}$$

Comment: In the reduction of the problem to the diagonal positive semidefinite case, we explicitly use the assumption that \(\det A \ge 0\). Indeed, let \(A=U\Sigma V^T\) be the SVD of A. If \(\det A>0\), then either both \(U,V \in {\text {SO}}_2\) or both \(U,V \in {\text {O}}^{-}_2\). In the latter case, we can multiply \(\Sigma \) by \({\text {diag}}\left( -1,1\right) \) from both sides (which leaves \(\Sigma \) unchanged) and absorb these factors into U and V, making both of them elements of \({\text {SO}}_2\). A similar argument works when \(\det A=0\). \(\square \)
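As an optional numerical sanity check of this argument (a minimal numpy sketch added here; the helper names are ours), one can verify that random elements of \(K_{\sigma _1,\sigma _2}\) never exceed the Von Neumann bound, and that the maximizer built from the SVD of A attains it, which yields (B.3):

```python
import numpy as np

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(1)
for _ in range(200):
    A = rng.normal(size=(2, 2))
    if np.linalg.det(A) < 0:
        A[0] *= -1.0                               # ensure det A >= 0
    Ua, sA, Vat = np.linalg.svd(A)                 # sA = (sigma_2(A), sigma_1(A)), descending
    t1, t2 = np.sort(rng.uniform(0.1, 2.0, 2))     # target singular values t1 <= t2
    vn_bound = sA[0] * t2 + sA[1] * t1             # sigma_1(A)*t1 + sigma_2(A)*t2
    for _ in range(200):
        # random element of K_{t1,t2}: same singular values, nonnegative determinant
        X = rot(rng.uniform(0, 2 * np.pi)) @ np.diag([t2, t1]) @ rot(rng.uniform(0, 2 * np.pi))
        assert np.sum(A * X) <= vn_bound + 1e-9    # Von Neumann trace inequality
    X_star = Ua @ np.diag([t2, t1]) @ Vat          # the maximizer from the proof
    assert abs(np.sum(A * X_star) - vn_bound) < 1e-9
    d2 = np.sum((A - X_star) ** 2)                 # equals dist^2(A, K_{t1,t2})
    assert abs(d2 - ((sA[1] - t1) ** 2 + (sA[0] - t2) ** 2)) < 1e-9
```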

C Estimating \({\text {dist}}(.,{\text {CO}}_2)\)

Proof

[Of Lemma 3.5]

First, expand

$$\begin{aligned}&{\text {dist}}^2(A,{\text {SO}}_2)-2(\sqrt{\det A}-1)^2 \nonumber \\&\quad = (\sigma _1-1)^2+(\sigma _2-1)^2-2(\sqrt{\sigma _1 \sigma _2}-1)^2 \nonumber \\&\quad = \left( \sigma _1^2+\sigma _2^2-2\sigma _1 \sigma _2 \right) -2\sigma _1-2\sigma _2+4\sqrt{\sigma _1 \sigma _2} \nonumber \\&\quad =\left( \sigma _2-\sigma _1\right) ^2-2\left( \sqrt{\sigma _2}-\sqrt{\sigma _1}\right) ^2 \le \left( \sigma _2-\sigma _1\right) ^2=2{\text {dist}}^2(A,{\text {CO}}_2). \end{aligned}$$
(C.1)

Now,

$$\begin{aligned} \left( \sigma _2-\sigma _1\right) ^2-2\left( \sqrt{\sigma _2}-\sqrt{\sigma _1}\right) ^2 = \left( \sqrt{\sigma _2}-\sqrt{\sigma _1}\right) ^2 \left( \left( \sqrt{\sigma _2}+\sqrt{\sigma _1}\right) ^2-2\right) . \end{aligned}$$
(C.2)

By the AM-GM inequality, if \(\det A \ge 1/4\) then

$$\begin{aligned} \left( \sqrt{\sigma _2}+\sqrt{\sigma _1}\right) ^2 \ge 4\sqrt{ \sigma _1\sigma _2} \ge 2, \end{aligned}$$

which implies

$$\begin{aligned} \left( \sqrt{\sigma _2}+\sqrt{\sigma _1}\right) ^2-2 \ge \left( \sqrt{\sigma _2}+\sqrt{\sigma _1}\right) ^2 - 4\sqrt{ \sigma _1\sigma _2}= \left( \sqrt{\sigma _2}-\sqrt{\sigma _1}\right) ^2. \end{aligned}$$
(C.3)

Equations (C.2), (C.3) complete the proof. \(\square \)
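As an optional numerical sanity check of the chain (C.1)-(C.3) (a minimal numpy sketch added here), one can draw random matrices with \(\det A\ge 0\) and verify the identity, the upper bound by \(2{\text {dist}}^2(A,{\text {CO}}_2)=(\sigma _2-\sigma _1)^2\), and the lower bound \((\sqrt{\sigma _2}-\sqrt{\sigma _1})^4\) in the regime \(\det A \ge 1/4\):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(2000):
    A = rng.normal(size=(2, 2))
    if np.linalg.det(A) < 0:
        A[0] *= -1.0                                   # ensure det A >= 0
    s2, s1 = np.linalg.svd(A, compute_uv=False)        # s1 <= s2
    lhs = (s1 - 1) ** 2 + (s2 - 1) ** 2 - 2 * (np.sqrt(s1 * s2) - 1) ** 2
    d = np.sqrt(s2) - np.sqrt(s1)
    # the identity from (C.1)-(C.2)
    assert abs(lhs - d ** 2 * ((np.sqrt(s2) + np.sqrt(s1)) ** 2 - 2)) < 1e-9
    # upper bound by 2 dist^2(A, CO_2) = (s2 - s1)^2, cf. (C.1)
    assert lhs <= (s2 - s1) ** 2 + 1e-9
    if s1 * s2 >= 0.25:
        # lower bound obtained by combining (C.2) and (C.3) when det A >= 1/4
        assert lhs >= d ** 4 - 1e-9
```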

D The Euler–Lagrange equation of \({\text {dist}}^2(.,{\text {SO}})\)

In this section we prove that the Euler–Lagrange equation of the functional \(E_2\) is

$$\begin{aligned} \delta \left( d\phi -O(d\phi )\right) =0, \end{aligned}$$

where \(O(d\phi )\) is the orthogonal polar factor of \(d\phi \). (The derivation of the EL equations for \(p \ne 2\) follows from the special case of \(p=2\).)

For brevity, we show the derivation only for the Euclidean case where \(\mathcal {M}=\Omega \subseteq \mathbb {R}^n, \mathcal {N}=\mathbb {R}^n\) are endowed with the usual flat metrics; the general Riemannian case follows in a similar fashion.

Let \({\text {GL}}_n^+\) be the group of real \(n \times n\) matrices having positive determinant, and let \(O:{\text {GL}}_n^+\rightarrow {\text {SO}}_n\) map \(A\in {\text {GL}}_n^+\) into its orthogonal polar factor, i.e.

$$\begin{aligned} O(A)=A\left( \sqrt{A^TA}\right) ^{-1}. \end{aligned}$$

\(\sqrt{A^TA}\) denotes the unique symmetric positive-definite square root of \(A^TA\). The map O is smooth. We use the following observation:

Lemma D.1

Let \(A \in {\text {GL}}_n^+\) and \(B\in M_n\), and write \(\dot{O}=dO_A(B)\). Then

$$\begin{aligned} \langle \dot{O},O\rangle =\langle \dot{O},A\rangle =0. \end{aligned}$$

Proof

The equality \(\langle \dot{O},O\rangle =0\) follows from differentiating \(\langle O,O\rangle =n\). Now,

$$\begin{aligned} \dot{O} \in T_{O}{\text {SO}}_n=OT_{{\text {Id}}}{\text {SO}}_n=O\text {skew} \end{aligned}$$

implies that \( \dot{O}=OS\) for some \(S \in \text {skew}\). Writing \(A=OP\) for its polar decomposition, with \(P=\sqrt{A^TA}\) symmetric positive-definite, we obtain

$$\begin{aligned} \langle \dot{O},A\rangle = \langle OS,OP\rangle = \langle S,P\rangle =0, \end{aligned}$$

where the last equality follows from the fact that the spaces of symmetric matrices and skew-symmetric matrices are orthogonal. \(\square \)
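As an optional numerical sanity check of Lemma D.1 (a minimal numpy sketch added here; the function name is ours), one can differentiate the polar factor by central finite differences and verify the two orthogonality relations, as well as the consequence \(\langle \dot{O},A-O(A)\rangle =0\) used in the derivation below:

```python
import numpy as np

def polar_factor(A):
    # Orthogonal polar factor O(A) = A (sqrt(A^T A))^{-1}; via the SVD A = U S V^T
    # this equals U V^T.
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

rng = np.random.default_rng(3)
n, h = 3, 1e-6
for _ in range(200):
    A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # well-conditioned, det A > 0
    B = rng.normal(size=(n, n))
    O = polar_factor(A)
    dO = (polar_factor(A + h * B) - polar_factor(A - h * B)) / (2 * h)   # ~ dO_A(B)
    assert abs(np.sum(dO * O)) < 1e-6         # <dO, O> = 0
    assert abs(np.sum(dO * A)) < 1e-6         # <dO, A> = 0
    assert abs(np.sum(dO * (A - O))) < 1e-6   # hence <dO, A - O(A)> = 0
```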

Derivation of the EL equation. Recall that

$$\begin{aligned} E_2(\phi )=\int _{\Omega } {\text {dist}}^2\left( d\phi ,{\text {SO}}_n\right) =\int _{\Omega }|d\phi -O(d\phi )|^2. \end{aligned}$$

Let \(\phi \in C^2(\Omega ,\mathbb {R}^n)\), and let \(\phi _t=\phi +tV\) for some \(C^2\) vector field \(V:\Omega \rightarrow \mathbb {R}^n\). Then,

$$\begin{aligned} \frac{1}{2}\left. \frac{d}{dt} E_2\left( \phi _t \right) \right| _{t=0}&= \int _{\Omega } \left. \langle \nabla _{\frac{\partial }{\partial t}} d\phi _t - \nabla _{\frac{\partial }{\partial t}} O(d\phi _t) , d\phi _t-O(d\phi _t)\rangle \right| _{t=0} dx \nonumber \\&=\int _{\Omega } \langle \left. \nabla _{ \frac{\partial }{\partial t} } d\phi _t \right| _{t=0} , d\phi -O(d\phi )\rangle dx=\int _{\Omega } \langle \nabla V , d\phi -O(d\phi )\rangle dx \nonumber \\&=-\int _{\Omega } \langle V , {\text {div}}\left( d\phi -O(d\phi )\right) \rangle dx. \end{aligned}$$
(D.1)

The passage from the first to the second line relied upon the fact that \(\nabla _{\frac{\partial }{\partial t}} O(d\phi _t)\) is orthogonal to both \(d\phi _t\) and \(O(d\phi _t)\), hence to \(d\phi _t-O(d\phi _t)\); this essentially follows from Lemma D.1.

E Additional proofs

Lemma E.1

The function F defined in (1.9) is well-defined and continuous; it is strictly decreasing on (0, 1] and strictly increasing on \([1,\infty )\).

Proof

First, suppose that \(s \le 1\); then the minimum is obtained at a point (a, b) where both \(a,b \le 1\). Indeed, if \(a>1\) (and so \(b <s \le 1\)), we can replace a by 1 and b by s to get the same product with both numbers closer to 1. Thus, it suffices to show that the minimum exists when \(0<a,b \le 1\). Now, \(b \le 1 \Rightarrow s=ab \le a\), and similarly \(b \ge s\). So, the problem reduces to proving existence of a minimum over the compact set \(\{ (a,b) \in [s,1]^2 \, | \, ab=s\}\). Since f was assumed continuous, we are done.

Next, we prove that F is strictly decreasing on (0, 1]. Indeed, let \(0< s_1 < s_2 \le 1\), and suppose that \(F(s_1)=f(a)+f(b)\) for some \((a,b) \in [s_1,1]^2\) with \(ab=s_1\). Choose a smooth path (a(t), b(t)) from (a, b) to (1, 1), where a(t), b(t) are both strictly increasing. Then for \(t>0\)

$$\begin{aligned} F\left( a(t)b(t)\right) \le f(a(t))+f(b(t)) < f(a)+f(b)=F(s_1). \end{aligned}$$

Since a(t)b(t) moves continuously from \(s_1\) to 1, it hits \(s_2\) at some time \(t>0\), which establishes the claim. Finally, a symmetric argument shows that if \(s \ge 1\), then the minimum is obtained in \(\{ (a,b) \in [1,s]^2 \, | \, ab=s\}\), and that F is strictly increasing on \([1,\infty )\). Proving F is continuous is routine and we omit it. \(\square \)

Proof

[of Proposition 5.6] Suppose that \((\sqrt{s_n},\sqrt{s_n})\) is a minimizer of (1.9) for some sequence \(s_n \in (0,1)\) which converges to zero. We prove that f diverges to \(\infty \) at zero. Recasting everything in terms of \(g(x)=f(e^x)\), we get

$$\begin{aligned} g\left( \frac{x + y}{2}\right) \le \frac{g(x) + g(y)}{2} \, \,\, \, \text { whenever } \, \, x,y \le 0 \, \, \, \text { and } x+y=\lambda _n, \end{aligned}$$

where \(\lambda _n=\log s_n \rightarrow -\infty \). Choose a subsequence \((\lambda _{n_k})\) such that \( \lambda _{n_k} < 2 \lambda _{n_{k-1}}\) for all k. Choosing \(x=0\) and \(y= \lambda _{n_k}\) in the condition above, and using the monotonicity of g, we get

$$\begin{aligned} g(\lambda _{n_{k-1}}) \le g\left( \frac{1}{2} \lambda _{n_k}\right) \le \frac{1}{2} g(\lambda _{n_k}) \end{aligned}$$

so

$$\begin{aligned} g(\lambda _{n_k}) \ge 2 g(\lambda _{n_{k-1}}) \ge 2^2 g(\lambda _{n_{k-2}}) \ge \cdots \ge 2^{k-1} g(\lambda _{n_{1}}) \, . \end{aligned}$$

\(\square \)

Proof

[Of Lemma 2.1] Since the problem is bi-\({\text {SO}}_2\)-invariant, using SVD we can assume that \(A={\text {diag}}(a,b)\) is diagonal with \(a,b>0\). We need to compute

$$\begin{aligned} F(s)=\min _{a,b \in \mathbb {R}^+,ab=s} (a-1)^2+(b-1)^2. \end{aligned}$$

By the method of Lagrange multipliers, there exists \(\lambda \) such that \(\left( a-1,b-1\right) =\lambda (b,a)\). Thus \(a(1-a)=b(1-b)\), which implies \(a=b\) or \(a=1-b\). In the latter case \(s=ab=b(1-b)\). Since \(a=1-b\), b and s are positive, we must have \(0<b<1\) and \(0<s\le \frac{1}{4}\) (since \(\max _{0<b<1} b(1-b)=\frac{1}{4}\)). We then have

$$\begin{aligned} (a-1)^2+(b-1)^2 =b^2+(b-1)^2=2b(b-1)+1=1-2s. \end{aligned}$$
  • If \(s \ge 1/4\) then there is only one critical point \((a,b)=(\sqrt{s},\sqrt{s})\), so the minimum is obtained exactly when \(a=b\) and \(F(s)=2(\sqrt{s}-1)^2.\)

  • If \(s \le 1/4\), there are up to three critical points: \((\sqrt{s}, \sqrt{s}),(x,1-x),(1-x,x)\), where \(0<x<1\) satisfies \(x(1-x)=s\). (For \(s=1/4\) they all merge into a single point; for \(s<1/4\) these are three distinct points.)

    To decide which point is the global minimizer, we need to compare the values of the objective function at the critical points, which are \(2(\sqrt{s}-1)^2,1-2s.\) Since

    $$\begin{aligned} 1-2s \le 2(\sqrt{s}-1)^2 \iff (2\sqrt{s}-1)^2 \ge 0, \end{aligned}$$

    \(F(s)=1-2s\) for \(s \le \frac{1}{4}\).

\(\square \)
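As an optional numerical sanity check of Lemma 2.1 (a minimal numpy sketch added here; the function names are ours), one can compare the closed form for F(s) with a brute-force minimization over the constraint \(ab=s\), parametrized by a one-dimensional grid in a:

```python
import numpy as np

def F_closed(s):
    # Closed form from Lemma 2.1.
    return 1 - 2 * s if s <= 0.25 else 2 * (np.sqrt(s) - 1) ** 2

a = np.linspace(0.005, 5.0, 500001)       # grid for a; b = s/a is then determined
for s in np.linspace(0.01, 4.0, 80):
    brute = np.min((a - 1) ** 2 + (s / a - 1) ** 2)
    assert abs(brute - F_closed(s)) < 1e-4, (s, brute, F_closed(s))
```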

Proof

[of Lemma 4.2] By Proposition 4.1,

$$\begin{aligned} d\phi _x-O(d\phi _x)=\alpha {\text {Cof}}d\phi _x \end{aligned}$$

if and only if either (\(d\phi _x \in K\) and \(\alpha =-1\)) or (\(d\phi _x\) is conformal and \(\sigma _i(d\phi _x)=\frac{1}{1-\alpha }\)). In both cases \(\sigma _1(d\phi _x)+\sigma _2(d\phi _x)=\frac{2}{1-\alpha }\) is constant. If \(\alpha =-1\), then \(d\phi \in K\), and if \(\alpha \ne -1\) then \(d\phi \in {\text {CO}}_2\) with \(\sigma _i(d\phi )=\frac{1}{1-\alpha }\), i.e. \(\phi \) is a homothety.

Now, define \(H(\phi )={\text {dist}}^{p-2}\left( d\phi ,{\text {SO}}(\mathfrak {g},\phi ^*\mathfrak {h})\right) \). Let \(p \ne 2\) and suppose that

$$\begin{aligned} H(\phi )\left( d\phi -O(d\phi )\right) =\alpha {\text {Cof}}d\phi . \end{aligned}$$

Since we assumed \(J\phi >0\), \({\text {Cof}}d\phi \) is invertible, hence \(H(\phi ) \ne 0\), and

$$\begin{aligned} d\phi -O(d\phi )=\frac{\alpha }{H(\phi )} {\text {Cof}}d\phi . \end{aligned}$$

Proposition 4.1 implies that either \(H(\phi )(x)=-\alpha \) and \(d\phi _x \in K\), or that \(d\phi _x\) is conformal and

$$\begin{aligned} \sigma (d\phi _x)=\frac{1}{1-\alpha /H(\phi )(x)}=\frac{1}{1-\frac{\alpha }{(\sqrt{2}|\sigma (d\phi _x)-1|)^{p-2}}}. \end{aligned}$$

Thus \( \sigma (d\phi _x)\) is a solution for the equation

$$\begin{aligned} \sigma =\frac{1}{1-\frac{\beta }{|\sigma -1|^{p-2}}}, \end{aligned}$$

where \(\beta =\alpha \cdot 2^{-(p-2)/2}\); this equation has a finite number of solutions \(\{\alpha _1,\dots ,\alpha _k\}\).

We showed that for every \(x\in \mathcal {M}\), \(d\phi _x \in K\), or \(d\phi _x\) is conformal with \( \sigma (d\phi _x) \in \{\alpha _1,\dots ,\alpha _k\}\). Thus \(x \mapsto \sigma _1(d\phi _x)+\sigma _2(d\phi _x)\) is a continuous function on \(\mathcal {M}\), which takes values in a finite set \(\{1,2\alpha _1,\dots ,2\alpha _k\}\), hence it must be constant. Thus, \(d\phi \in K\) or \(\phi \) is a homothety with \(\sigma (d\phi ) \in \{\alpha _1,\dots ,\alpha _k\}\). If \(d\phi \in K\), then \(H(\phi )=-\alpha \) is constant. Thus

$$\begin{aligned} \sigma _1(d\phi )+ \sigma _2(d\phi )=1 \quad \text { and } \quad \left( \sigma _1(d\phi )-1\right) ^2+\left( \sigma _2(d\phi )-1\right) ^2 \end{aligned}$$

are constants, which implies that \( \sigma _1(d\phi ), \sigma _2(d\phi )\) are constants as required. \(\square \)


About this article


Cite this article

Shachar, A. Embedding surfaces inside small domains with minimal distortion. Calc. Var. 60, 147 (2021). https://doi.org/10.1007/s00526-021-02014-5

