1 Introduction

A Finsler metric F is called Douglas if there exists an affine connection \(\Gamma =(\Gamma _{jk}^i)\) such that each geodesic of F, after some re-parameterisation, is a geodesic of \(\Gamma \). We assume without loss of generality that \(\Gamma \) is torsion free. Such Finsler metrics were considered by Douglas in [7, 8] and were named Douglas metrics (or metrics of Douglas type) in [2].

Though the results of our paper are local, let us note that a partition-of-unity argument shows that the existence of such a connection locally, in a neighborhood of every point, implies its existence globally.

Prominent examples of Douglas metrics are Riemannian metrics (with \(\Gamma \) being the Levi–Civita connection), Berwald metrics (in this case, as \(\Gamma \), we can take the associated connection), and locally projectively flat metrics (in this case, in the local coordinates such that the geodesics are straight lines, one can take the flat connection \(\Gamma \equiv 0\)).

In the present paper, we study the following question: can two conformally related Finsler metrics F and \(e^{\sigma (x)} F\) both be Douglas? We do not require that the connection \(\Gamma \) is the same for both metrics; in fact, by [3], two conformally equivalent metrics cannot have the same (unparameterized) geodesics unless the conformal coefficient is constant.

Of course two conformally related Riemannian metrics are both Douglas. Another trivial example is as follows: let F be Douglas and \(\sigma \) be a constant. Then \(e^\sigma F\) is also Douglas.

Let us give a less trivial example:

Example 1.1

Consider the Randers metric \(F= \alpha + \beta \), where \(\alpha (x,y)= \sqrt{g_{ij} y^iy^j} \) for a Riemannian metric g and \(\beta \) is a 1-form. Assume in addition that \(\beta \) is closed; locally this is equivalent to the condition that \(\beta = df\) for a function f on the manifold. Then the metric F is Douglas since adding the closed 1-form \(\beta \) does not change the geodesics, so the geodesics of F are (up to a re-parameterisation) geodesics of the Levi–Civita connection of g.

Next, for any function \(\gamma \) of one variable, the conformally related metric \(\tilde{F}= e^{\gamma (f(x))} F= e^{\gamma (f(x))} \alpha + e^{\gamma (f(x))} \beta \) is also Douglas. Indeed, since the 1-form \( e^{\gamma (f(x))} \beta \) is closed, the geodesics of \(\tilde{F}\) are geodesics of \( e^{\gamma (f(x)) } \alpha \), i.e., geodesics of the Levi–Civita connection of the Riemannian metric \(e^{2 \gamma (f(x))} g\).
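
The closedness used in this example can be checked symbolically: since \(d(e^{\gamma (f)}\, df)= \gamma '(f)\, e^{\gamma (f)}\, df\wedge df =0\), the rescaled 1-form is again closed. A minimal sympy sketch (the two coordinates \(x^1, x^2\) and the generic functions f and \(\gamma \) are illustrative assumptions; the computation is the same in any dimension, one pair of coordinates at a time):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Function('f')(x1, x2)             # arbitrary potential, beta = df
gamma = sp.Function('gamma')             # arbitrary function of one variable

# components of the 1-form e^{gamma(f)} df
omega1 = sp.exp(gamma(f)) * sp.diff(f, x1)
omega2 = sp.exp(gamma(f)) * sp.diff(f, x2)

# exterior derivative: d(omega) = (d_{x1} omega2 - d_{x2} omega1) dx1 ^ dx2
d_omega = sp.diff(omega2, x1) - sp.diff(omega1, x2)
assert sp.simplify(d_omega) == 0         # the rescaled 1-form is closed
```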

It is easy to see (see e.g. [4, Theorem 3.1]) that in the class of Randers metrics the above example is the only possible one (among conformally related Douglas metrics with nonconstant conformal coefficient). Indeed, Douglas metrics are geodesically reversible, in the sense that for any geodesic, a certain orientation-reversing reparameterisation of it is also a geodesic. Now, it is known (see e.g. [10, Theorem 1]) that for a geodesically reversible Randers metric \(\alpha + \beta \), the 1-form \(\beta \) is necessarily closed. Thus, if two conformally related Randers metrics \(\alpha + \beta \) and \( e^{\gamma ( x )}\alpha + e^{\gamma (x)}\beta \) are both Douglas, then both 1-forms \(\beta \) and \(e^{\gamma (x)} \beta \) are closed, which locally implies that \(\beta = df\) and that \(\gamma \) is a function of f, as we claimed.

Our main result is that in dimension two only this example is possible:

Theorem 1.2

Let F be a Douglas 2-dimensional metric such that the conformally related \(e^{\sigma (x)}F\) is also Douglas. Assume \(d\sigma \ne 0\) at a point p. Then, in a neighborhood of p, the metrics F and \(e^{\sigma (x)} F\) are as in Example 1.1 above: F is Randers, \(F= \alpha + \beta \), the 1-form \(\beta \) is the differential of a function f, and \(\sigma \) is a function of f.

Statements similar to Theorem 1.2 appeared in the literature before, though in most cases special classes of Finsler metrics were considered. In particular, [4] proves the analogous statement for \( (\alpha ,\beta )\) metrics in dimension \(n\ge 3\). Conformally related Douglas \((\alpha , \beta )\) metrics were also considered in [13]. The question when conformally related Kropina metrics are Douglas was studied in [9, Theorem 9], where it was shown that if the conformal transformation of a Kropina metric \(\alpha ^{2}/\beta \) is Douglas, then \(\beta \wedge d\beta = 0\), and vice versa. Note that, as follows from [5], if a Kropina metric is Douglas, then \(\beta \wedge d\beta = 0\).

Related results are [16, Theorem 5], [15, Theorem 3], and [15, Theorem 5], where it is proved that nontrivially conformally related Berwald metrics are Riemannian, and also [12, Theorem 8.1], where an analogous statement was proved for Minkowski metrics.

All objects in our paper are assumed to be sufficiently smooth; Finsler metrics are assumed to be strictly convex.

2 Proof of Theorem 1.2

2.1 Necessary conditions on \(F_{|T_xM}\) implied by the assumption that F and \(e^{\sigma (x)}F\) are both Douglas.

Recall that (arc-length parameterised) geodesics of a Finsler metric F are solutions of the differential equation

$$\begin{aligned} \ddot{x}^i + 2 G^i = 0. \end{aligned}$$
(2.1)

Here \(G^i= G^i(x,\dot{x})\) are the so-called spray coefficients. They are given by

$$\begin{aligned} {G}^{i}=\dfrac{1}{4}{g}^{il}\left( \left[ F^{2}\right] _{{x}^{k}y^{l}} y^{k}-\left[ {F}^{2}\right] _{{x}^{l}}\right) , \end{aligned}$$
(2.2)

where \(g_{ij}(x,y):=\dfrac{1}{2}[F^{2}]_{y^{i}y^{j}}(x,y)\) and \((g^{ij}):=(g_{ij})^{-1}\). The notation \([ F^{2} ]_{x^k y^l}\) and later \(F_{y^i y^j}\) means the partial derivatives with respect to the indicated variables. The condition that geodesics of F are geodesics of the affine connection \(\Gamma \) is therefore equivalent to the condition

$$\begin{aligned} 2 {G}^{i}(x,y)= \Gamma _{jk}^{i} {y}^{k} {y}^{j} + P(x,y) {y}^{i}, \end{aligned}$$
(2.3)

which should be fulfilled for some function P and for any \(y\in T_xM\).

Next, by replacing F by \(\tilde{F}= e^{\sigma (x)} F\) in (2.2), we obtain the following known relation (see e.g. [6, Equation (9.8)]) between the spray coefficients \(G^{i}\) of F and \(\tilde{G}^{i}\) of \(\tilde{F}\):

$$\begin{aligned} \tilde{G}^{i}=G^{i}+\sigma _{0}y^{i}-\tfrac{F^{2}}{2}\sigma ^{i}, \end{aligned}$$
(2.4)

where \(\sigma ^{i} = g^{i\ell }\sigma _{\ell }\), \(\sigma _i=\tfrac{\partial }{\partial x^i} \sigma \), and \(\sigma _0= \sigma _i y^i\). In view of this, the condition that the metric \(\tilde{F}\) is Douglas ensures the existence of a function \(\tilde{P}\) and of a torsion-free affine connection \(\tilde{\Gamma }\) such that

$$\begin{aligned} 2 G^i(x,y)= \tilde{\Gamma }_{jk}^i y^k y^j + {F^2} \sigma ^i + \tilde{P}(x,y) y^i. \end{aligned}$$
(2.5)
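
Relation (2.4) can be sanity-checked on the simplest example: the flat metric \(F=\sqrt{(y^1)^2+(y^2)^2}\) (so that \(G^i=0\)) with \(\sigma =x^1\). The sketch below, assuming sympy is available, computes both sides directly from the definition (2.2); note that \(\sigma ^i\) is raised with the metric tensor of F, which is the identity here:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
x, y = (x1, x2), (y1, y2)

def spray(F2):
    """Spray coefficients G^i computed from formula (2.2); F2 = F^2."""
    g = sp.Matrix(2, 2, lambda i, j: sp.Rational(1, 2) * sp.diff(F2, y[i], y[j]))
    ginv = g.inv()
    G = []
    for i in range(2):
        Gi = 0
        for l in range(2):
            bracket = sum(sp.diff(F2, x[k], y[l]) * y[k] for k in range(2)) \
                      - sp.diff(F2, x[l])
            Gi += sp.Rational(1, 4) * ginv[i, l] * bracket
        G.append(sp.simplify(Gi))
    return G

F2 = y1**2 + y2**2                       # flat Riemannian metric, so G^i = 0
sigma = x1                               # conformal factor e^{x1}
Gt = spray(sp.exp(2 * sigma) * F2)       # spray coefficients of e^{sigma} F
G = spray(F2)

# right-hand side of (2.4); sigma^i is raised with g of F, the identity here
sigma_l = [sp.diff(sigma, xi) for xi in x]
sigma0 = sum(sigma_l[k] * y[k] for k in range(2))
rhs = [G[i] + sigma0 * y[i] - F2 / 2 * sigma_l[i] for i in range(2)]

assert all(sp.simplify(Gt[i] - rhs[i]) == 0 for i in range(2))
```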

Combining (2.3) and (2.5), we obtain

$$\begin{aligned} T_{jk}^i y^k y^j = {F^2} \sigma ^i + \hat{P}(x,y) y^i, \end{aligned}$$
(2.6)

where \(T= \Gamma - \tilde{\Gamma }\) and \(\hat{P}= (\tilde{P} -P)\). Let us now multiply the equation (2.6) by \(g_{is}\): we obtain

$$\begin{aligned} T_{jk}^i g_{is} y^k y^j = {F^2} \sigma _s + \hat{P}(x,y) y^i g_{is}. \end{aligned}$$

In view of \(g_{is}= \tfrac{1}{2}[F^2]_{y^i y^s}= FF_{y^i y^s}+ F_{y^i}F_{y^s}\), this equation is equivalent to

$$\begin{aligned} T_{jk}^i F F_{y^i y^s} y^k y^j + T_{jk}^i F_{y^i}F_{y^s} y^k y^j = {F^2} \sigma _s + \hat{P}(x,y) y^i F_{y^i} F_{y^s} + \hat{P}(x,y) y^i F F_{y^i y^s}. \end{aligned}$$

Because F is 1-homogeneous, \(y^i F_{y^i y^s}=0\) and \(y^iF_{y^i}= F\). Using these relations and rearranging the terms, we obtain

$$\begin{aligned} T_{jk}^i F F_{y^i y^s} y^k y^j + F_{y^s} \left( T_{jk}^i F_{y^i} y^k y^j - \hat{P}(x,y) F \right) -F^2 \sigma _s=0. \end{aligned}$$
(2.7)

We apply to the above equation the linear operation \(\xi _s \mapsto \xi _s - \tfrac{1}{F} y^i \xi _i F_{y^s}\). This formula defines a linear mapping from covectors to covectors, and the equation (2.7) states that a certain covector is zero. We apply the mapping to the covector on the left-hand side of (2.7); the result must again be the zero covector.

In view of \(F F_{y^s} - y^i F_{y^i} F_{y^s}=0\), after this operation the middle term of the left-hand side of (2.7) disappears. The first term remains unchanged in view of \(y^iF_{y^i y^s}=0\). After dividing the result by F, we obtain

$$\begin{aligned} T_{jk}^i F_{y^i y^s} y^k y^j = F \sigma _s - \sigma _0 F_{y^s}. \end{aligned}$$
(2.8)

Remark 2.1

We see that (2.8) does not contain derivatives with respect to the x-variables. Thus, for a fixed \(x\in M\), it is a system of partial differential equations on F restricted to \(T_xM\).

Remark 2.2

We see that the PDE-system (2.8) is linear in F. In fact, linearity was clear in advance because of the following geometrical argument: if the metrics \(F_1\) and \(F_2\) are Douglas with respect to the same connection \(\Gamma \), and the conformally related metrics \(\tilde{F}_1 = e^\sigma F_1\) and \(\tilde{F}_2= e^\sigma F_2\) (with the same conformal factor \(e^\sigma \)) are also Douglas with respect to the same connection \(\tilde{\Gamma }\), then any linear combination \(\lambda _1 F_1 + \lambda _2 F_2\) and the conformally related metric \((\lambda _1 F_1 + \lambda _2 F_2)e^\sigma \) are also Douglas with respect to \(\Gamma \) and \(\tilde{\Gamma }\), respectively. We assume that \(\lambda _1, \lambda _2 \in \mathbb {R}\) are such that \(\lambda _1 F_1 + \lambda _2 F_2\) is a Finsler metric.

2.2 Proof of Theorem 1.2

We assume that the function \(\sigma (x)\) equals \(x^1\) (so that in these coordinates \(d\sigma = (1,0)\)); one can always achieve this by a coordinate change. Next, we fix a point x and work in the tangent space \(T_xM= \mathbb {R}^2(y^1, y^2)\) at this point. Because of homogeneity, in the polar coordinates \(y^1= r \cos (\theta )\), \(y^2= r\sin (\theta )\), the function \(F(y^1,y^2)\) is given by \(r f(\theta )\). In this setting, the PDE-system (2.8) reduces to a single ODE on the function f; let us find this ODE. By direct calculation, we see that the Hessian (with respect to the coordinates \(y^1, y^2\)) of the function F is given by

$$\begin{aligned} \begin{pmatrix} F_{y^{1}y^{1}} &{} F_{y^{1}y^{2}} \\ F_{y^{2}y^{1}} &{} F_{y^{2}y^{2}} \end{pmatrix} = \frac{f(\theta )+f''(\theta )}{r} \begin{pmatrix} \sin (\theta )^2 &{} {-\cos (\theta ) \sin (\theta )} \\ {-\cos (\theta ) \sin (\theta )} &{} {\cos (\theta )^2} \end{pmatrix}. \end{aligned}$$
(2.9)
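
Formula (2.9) can be verified with sympy on a sample function, say \(f(\theta )= 3+\sin (\theta )\cos (\theta )\) (an illustrative assumption; for this f one has \(f+f''= 3-3\sin (\theta )\cos (\theta )>0\), so F is strictly convex). Then \(F= 3r+y^1y^2/r\), and the Hessian can be compared entry by entry:

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2', positive=True)
r = sp.sqrt(y1**2 + y2**2)

# F = r f(theta) with f(theta) = 3 + sin(theta)cos(theta), i.e. F = 3r + y1*y2/r
F = 3 * r + y1 * y2 / r

# for this f: f'' = -4 sin cos, so the scalar factor is (3 - 3 sin cos)/r
s, c = y2 / r, y1 / r                    # sin(theta), cos(theta)
factor = (3 - 3 * s * c) / r

hess = sp.Matrix(2, 2, lambda i, j: sp.diff(F, (y1, y2)[i], (y1, y2)[j]))
target = factor * sp.Matrix([[s**2, -c * s], [-c * s, c**2]])
assert (hess - target).applyfunc(sp.simplify) == sp.zeros(2, 2)
```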

Combining this with (2.8), we see that (2.8) is equivalent to the ODE

$$\begin{aligned} \left( f''(\theta )+f(\theta )\right) P(\theta )= {f(\theta )\sin (\theta ) +\cos (\theta ) f'(\theta )}, \end{aligned}$$
(2.10)

where

$$\begin{aligned} P(\theta )= K_0 \cos (\theta )^3 + K_1 \cos (\theta )^2 \sin (\theta ) + K_2 \cos (\theta ) \sin (\theta )^2 + K_3 \sin (\theta )^3 \end{aligned}$$

and where the constants \(K_0,\ldots ,K_3\) are given by

$$\begin{aligned} K_{0}=-T ^{2}_{11}, \quad K_{1}=T^{1}_{11}-2T^{2}_{12}, \quad K_{2}=2T^{1}_{12}- T_{22}^{2}, \quad K_{3}=T^{1}_{22}. \end{aligned}$$

This is a linear ODE of second order, so its solution space is at most two-dimensional. Locally, near the points where \(P(\theta )\ne 0\), it is precisely two-dimensional, but not every local solution extends to a global one. Indeed, since the variable \(\theta \) “lives” on the circle, we are only interested in \(2\pi \)-periodic solutions. Besides, since the highest derivative of f comes with the coefficient \(P(\theta )\), which may vanish, a solution can approach infinity when \(\theta \) approaches a value \(\theta _0\) such that \(P(\theta _0)= 0\).

By direct calculations, we see that the function \(\cos (\theta )\) is a solution.

Remark 2.3

Geometrically, the addition of the function \(\cos (\theta )\) to a solution f corresponds to the addition of the closed 1-form \(dx^1\) to \(F = r f\); this operation does not change unparameterized geodesics of the metrics F and of the conformally related metric \(e^\sigma F + e^\sigma dx^1\), see the explanation in Example 1.1.

Our goal is to find all constants \(K_0,\ldots ,K_3\) such that there exists a \(2\pi \)-periodic bounded solution f of (2.10) such that f is positive and \(f'' + f\) is positive (the last condition corresponds to the condition that F is strictly convex, see e.g. (2.9)). We formulate the answer in the following lemma:

Lemma 2.4

For the constants \(K_0,\ldots ,K_3\), there exists a bounded solution \(f(\theta )\) of ODE (2.10) such that it is \(2\pi \)-periodic and such that f and \(f'' + f\) are positive at all \(\theta \) if and only if

$$\begin{aligned} K_{0}= & {} \frac{g_{12}g_{11}}{g_{11}g_{22}-g^{2}_{12}} ,\quad K_{1}=1+\frac{3g^{2}_{12}}{g_{11}g_{22}-g^{2}_{12}}, \end{aligned}$$
(2.11)
$$\begin{aligned} K_{2}= & {} \frac{3g_{22}g_{12}}{g_{11}g_{22}-g^2_{12}}, \quad K_3=\frac{g^2_{22}}{g_{11}g_{22}-g^2_{12}} \end{aligned}$$
(2.12)

for a certain positive definite symmetric (constant) \(2\times 2\)-matrix \(g_{ij}\). In this case, the general solution of (2.10) is given by

$$\begin{aligned} f(\theta )= \text {const}_1 \sqrt{\cos (\theta )^2 g_{11} + 2 \cos (\theta )\sin (\theta )g_{12} + \sin (\theta )^2g_{22}} + \text {const}_2 \cos (\theta ). \end{aligned}$$
(2.13)

Clearly, the solution (2.13) corresponds to \(F= \text {const}_1 \alpha + \text {const}_2 \beta \), with \(\alpha = \sqrt{g_{ij}y^iy^j}\) and \(\beta = d\sigma = dx^1\); so Lemma 2.4 together with the explanation after Example 1.1 imply Theorem 1.2.

Let us prove Lemma 2.4. The direction “\(\Longleftarrow \)” (that for \(K_0,\ldots ,K_3\) given by (2.11), (2.12) the function (2.13) with \(\text {const}_1>0\) and \(\text {const}_2=0\) is a \(2\pi \)-periodic solution of (2.10) satisfying the assumptions of Lemma 2.4) is geometrically clear and can be checked by direct calculation. Let us prove the other (difficult) direction: we need to show that the existence of such a solution f implies that \(K_0,\ldots ,K_3\) are as in (2.11), (2.12). We first replace the solution f by the function \(f_s(\theta ):= f(\theta ) + f(\theta + \pi )\). Since the function \(P(\theta )\) satisfies \(P(\theta + \pi )= -P(\theta )\), and the functions \(\cos (\theta )\) and \(\sin (\theta )\) satisfy \(\cos (\theta + \pi )= -\cos (\theta )\) and \(\sin (\theta + \pi )= -\sin (\theta )\), all coefficients of the equation (2.10) change sign after the addition of \(\pi \) to the coordinate, so the function \( f(\theta + \pi )\), and therefore the function \(f_s(\theta )\), is also a solution. If f is positive, \(f_s\) is positive; if \(f'' +f\) is positive, \(f_s'' + f_s\) is positive. If f is \(2\pi \)-periodic, \(f_s\) is \(\pi \)-periodic, since \(f_s(\theta + \pi )= f(\theta +\pi ) + f(\theta + 2 \pi )=f_s(\theta )\).
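
The direction “\(\Longleftarrow \)” can also be checked numerically: pick a sample positive definite matrix, build \(K_0,\ldots ,K_3\) from (2.11), (2.12), and evaluate the residual of the ODE (2.10) on the candidate solution (2.13). The concrete matrix and sample angles below are illustrative assumptions:

```python
import sympy as sp

th = sp.symbols('theta', real=True)
g11, g12, g22 = 2, 1, 3                  # sample positive definite matrix, det = 5
det = g11 * g22 - g12**2

# the constants from (2.11), (2.12)
K0 = sp.Rational(g11 * g12, det)
K1 = 1 + sp.Rational(3 * g12**2, det)
K2 = sp.Rational(3 * g22 * g12, det)
K3 = sp.Rational(g22**2, det)

c, s = sp.cos(th), sp.sin(th)
P = K0 * c**3 + K1 * c**2 * s + K2 * c * s**2 + K3 * s**3

# candidate solution (2.13) with const_1 = const_2 = 1
f = sp.sqrt(g11 * c**2 + 2 * g12 * c * s + g22 * s**2) + c

# residual of the ODE (2.10); it should vanish identically in theta
res = (sp.diff(f, th, 2) + f) * P - (f * s + c * sp.diff(f, th))
residuals = [abs(res.subs(th, v).evalf()) for v in (0.3, 1.1, 2.5, 4.0)]
assert all(r < 1e-12 for r in residuals)
```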

Without loss of generality, we may and will assume that \(f= f_s\); i.e., in addition to the above assumptions, we also assume that f is \(\pi \)-periodic.

Remark 2.5

The operation \(f \longrightarrow f_s\) corresponds geometrically to the “symmetrisation” \(F(x,y) \longrightarrow F(x,y) + F(x,-y)\). This operation is compatible with the conformal change of the metric and with the property of the metric to be Douglas with respect to a connection \(\Gamma \).

Observe that \((f'\cos (\theta ) +f\sin (\theta ))'=\cos (\theta )(f+f'')\). Denoting \(f'\cos (\theta )+f\sin (\theta )\) by H, we see that the ODE (2.10) takes the form \(H = (f''+f)P(\theta )\), and hence \(\frac{H'(\theta )}{\cos (\theta )} = \frac{H}{P(\theta )}\).

Since \( \frac{H'(\theta )}{\cos (\theta )}=f''(\theta )+f(\theta ) \) and \( f''+f>0, \) we see that

$$\begin{aligned} G(\theta ):= \frac{H'(\theta )}{\cos (\theta )}=\frac{H(\theta )}{P(\theta )} \end{aligned}$$
(2.14)

is a smooth positive \(\pi \)-periodic function. Its derivative satisfies

$$\begin{aligned} G'(\theta )= \frac{H'(\theta ) P(\theta ) - H(\theta ) P'(\theta )}{P(\theta )^2}{\mathop {=}\limits ^{(2.14)}} \frac{G(\theta )\cos (\theta ) -G(\theta )P'(\theta )}{P(\theta )}. \end{aligned}$$

This implies

$$\begin{aligned} \left( \ln (G(\theta ))\right) ' = \frac{\cos (\theta ) - P'(\theta )}{P(\theta )}. \end{aligned}$$
(2.15)

By our assumptions, the function \(\ln (G(\theta ))\) is smooth and \(\pi \)-periodic, so the following two conditions are satisfied:

(A) \( \int _{-\pi /2}^{\pi /2}\frac{\cos (\theta ) -P'(\theta )}{P(\theta )}\,d\theta =0 \), and

(B) \(\frac{\cos (\theta )- P'(\theta )}{P(\theta )}\) is bounded.

Our next goal is to see that the existence of a solution of (2.15) satisfying (A) and (B) is a strong condition on \(K_0,\ldots ,K_3\). In fact, we show that \(K_0,\ldots ,K_3\) are given as in (2.11), (2.12) provided there exists a solution of (2.15) satisfying (A) and (B).

First, by direct calculations, we observe

$$\begin{aligned} \frac{\cos (\theta )-P'(\theta )}{P(\theta )}=\frac{1-K_{1}-2K_{2}\tan ( \theta )-3K_{3 } \tan ^2(\theta )}{(K_{0}+K_{1}\tan (\theta )+K_{2}\tan ^2(\theta )+K_{3}\tan ^3(\theta ))\cos ^{2}(\theta )}+3\tan (\theta ). \end{aligned}$$
(2.16)

We consider this function restricted to the interval \(\left( -\tfrac{\pi }{2}, \tfrac{\pi }{2}\right) \). There, \(\tan (\theta )\) runs over all real values, and the condition (B) implies that the real roots of the polynomial \( K_{0}+K_{1} t+K_{2} t^{2}+K_{3}t^{3} \), counted with their multiplicities, must be roots of the polynomial \(1-K_{1}-2K_{2} t-3K_{3 } t^2\). Since the latter is of degree at most 2, the cubic polynomial \( K_{0}+K_{1} t+K_{2} t^{2}+K_{3}t^{3} \) has precisely one real root.

Thus,

$$\begin{aligned} K_{0}+K_{1}\tan (\theta ) +K_{2}\tan ^{2}(\theta ) +K_{3}\tan ^{3}(\theta )&= (C+D\tan (\theta )+\tan ^{2}(\theta ))(B-\tan (\theta )) E, \\ 1-K_{1}-2K_{2} \tan (\theta )-3K_{3 } \tan ^{2}(\theta )&= 3(A-\tan (\theta ))(B-\tan (\theta ))E \end{aligned}$$

(for some constants A, B, C, D, E). Then the condition (A) reads (we make the substitution \(t= \tan (\theta )\) in the integral and also use that the function \(\tan (\theta )\) is odd, so \(\int _{-r}^{r} 3 \tan (\theta )\, d\theta =0\) for each \(r\in \left( 0, \tfrac{\pi }{2}\right) \)):

$$\begin{aligned} \int \limits _{-\pi /2}^{\pi /2} \frac{(A-\tan (\theta ))}{(C+D\tan (\theta )+\tan ^{2}(\theta ))\cos ^{2}(\theta )}\,d\theta = \int \limits _{-\infty }^{\infty } \frac{(A-t)}{C+Dt+t^2}\,dt=0. \end{aligned}$$
(2.17)

Clearly, the above integral is zero if and only if \(C+Dt+t^{2}=N+(A-t)^{2}\) for some constant N. In addition, in order for \(N+(A-t)^{2}\) to be nowhere zero (which is necessary by the condition (B)), we need \(N=C-A^{2}>0\) (which in particular implies \(C>0\)). Thus,

$$\begin{aligned} K_{0}+K_{1} t+K_{2}t ^{2}+K_{3}t^{3}= & {} ((C-A^{2})+(A- t)^{2})(B- t)E, \end{aligned}$$
(2.18)
$$\begin{aligned} 1-K_{1}-2K_{2}t-3K_{3 } t ^{2}= & {} 3(A- t)(B-t)E. \end{aligned}$$
(2.19)
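
The vanishing criterion for the integral (2.17) can be illustrated with sympy: when the quadratic in the denominator is the completed square \(N+(A-t)^2\), the symmetric limit around \(t=A\), which computes the principal value, is exactly zero, while an off-center quadratic gives a nonzero value. The sample values of A and N are assumptions:

```python
import sympy as sp

t = sp.symbols('t', real=True)
A, N = 2, 5                              # sample values with N > 0

# quadratic completed as N + (A - t)^2: the symmetric integral around t = A,
# whose limit is the principal value, vanishes by oddness of u/(N + u^2)
centered = sp.integrate((A - t) / (N + (A - t)**2), (t, A - 100, A + 100))
assert sp.simplify(centered) == 0

# a quadratic not centered at A gives a nonzero value
shifted = sp.integrate((A - t) / (N + (A - 1 - t)**2), (t, A - 100, A + 100))
assert sp.simplify(shifted) != 0
```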

Analyzing these two equations, we obtain \(A=B\) and \(E=\frac{1}{A^{2}-C}\).

Combining the formulas for E, N, and B as functions of A and C obtained above with (2.18), (2.19), we see that \(K_{0}, K_{1}, K_{2}, K_{3}\) are given by the following formulas:

$$\begin{aligned} K_{0}= & {} \frac{AC}{A^{2}-C}, \end{aligned}$$
(2.20)
$$\begin{aligned} K_{1}= & {} 1-\frac{3A^{2}}{A^{2}-C}, \end{aligned}$$
(2.21)
$$\begin{aligned} K_{2}= & {} \frac{3A}{A^{2}-C}, \end{aligned}$$
(2.22)
$$\begin{aligned} K_{3}= & {} \frac{-1}{A^{2}-C}. \end{aligned}$$
(2.23)

By direct calculations, we see that the set of quadruples \((K_0,\ldots ,K_3)\) given by the formulas (2.20)–(2.23) coincides with the set of quadruples \((K_0,\ldots ,K_3)\) given by the formulas (2.11), (2.12). Indeed, for each A and C, if we substitute in (2.11), (2.12)

$$\begin{aligned} g_{ij}= \begin{pmatrix} C &{} -A \\ -A&{} 1\end{pmatrix}, \end{aligned}$$
(2.24)

we obtain (2.20)–(2.23). Note that the condition \(C-A^2=N>0\) implies that (2.24) is positive definite. Thus, the set of “admissible” quadruples \((K_0,K_1, K_2, K_3)\) (such that (2.15) has a solution satisfying (A) and (B)) is precisely the set of quadruples \((K_0,K_1, K_2, K_3)\) obtained from a symmetric positive definite matrix \(g_{ij}\) via (2.11), (2.12). Lemma 2.4 and Theorem 1.2 are proved. \(\square \)
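
The substitution claim can be verified directly: plugging the matrix (2.24) into the formulas (2.11), (2.12) reproduces (2.20)–(2.23). A short sympy check:

```python
import sympy as sp

A, C = sp.symbols('A C', real=True)
g11, g12, g22 = C, -A, 1                 # the matrix (2.24)
det = g11 * g22 - g12**2                 # = C - A^2

# formulas (2.11), (2.12) for this matrix
K0 = g12 * g11 / det
K1 = 1 + 3 * g12**2 / det
K2 = 3 * g22 * g12 / det
K3 = g22**2 / det

# compare with formulas (2.20)-(2.23)
assert sp.simplify(K0 - A * C / (A**2 - C)) == 0
assert sp.simplify(K1 - (1 - 3 * A**2 / (A**2 - C))) == 0
assert sp.simplify(K2 - 3 * A / (A**2 - C)) == 0
assert sp.simplify(K3 - (-1) / (A**2 - C)) == 0
```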

3 Prolongation of (2.8) in higher dimensions and conclusion

Our initial goal was to describe all nontrivially conformally related Finsler metrics such that both are Douglas. We achieved this goal in dimension 2: Theorem 1.2 gives a complete answer. In higher dimensions, we do not know whether examples other than those constructed in Example 1.1 exist. In this section, we explain a way to approach the general problem (or to tackle the dimension 3 case). We start with the following remark:

Remark 3.1

Suppose a function \(F=F(y)\) on \(\mathbb {R}^n\) satisfies the following conditions: it is positive, positively 1-homogeneous, and strictly convex, and there exists a constant tensor \(T^i_{jk}\), symmetric with respect to the lower indices, and a nonzero constant covector \(\sigma _i\) such that (2.8) is fulfilled. Then we can build a pair of nontrivially conformally related Douglas metrics: the first one is the Minkowski metric given by \(F_M(x,y)= F(y)\), and the conformally related one is \(e^{\sum _i \sigma _i x^i} F_M(x,y)\) (the function \(\sigma (x)= \sum _i \sigma _i x^i\) is chosen so that its differential is the constant covector \(\sigma _i\)). The associated connection of the metric \(F_M\) is the flat one, \(\Gamma \equiv 0\), and that of the metric \(e^{\sigma (x)} F_M\) is given by \(\tilde{\Gamma }^i_{jk}= -T^i_{jk}\).

The proof of the statement formulated in the remark is straightforward and follows from the calculations in §2.1: one simply needs to reverse all the arguments. The only place where reversing the arguments may require additional comment (all the others are in fact algebraic manipulations) is the transition from (2.8) to (2.7). Comparing these two formulas, we see that they are equivalent if \(T_{jk}^i F_{y^i} y^k y^j - \hat{P}(x,y) F = - F\sigma _0 \); one can achieve this by choosing the appropriate function \( \hat{P}(x,y)\).

Note that the Minkowski metric \(F_M\) is clearly Douglas with \(\Gamma \equiv 0\). Now, by the construction of the equations (2.8), we see that the difference between the spray coefficients of F and of \(e^\sigma F\) is equal, up to the addition of an appropriate term of the form \(\check{P}(x,y) y^i\) (which does not change the geodesics), to \(\tfrac{1}{2}T^i_{jk} y^j y^k\), so the metric \(\tilde{F}= e^\sigma F\) is also Douglas with \(\tilde{\Gamma }^i_{jk}= -T^i_{jk}\).

We see that the difficulty of our problem is in fact located in a single tangent space; the dependence of the metric on the position is not important, at least for the existence statement. Note that many previous researchers working on this topic used the curvature of the metrics and in particular involved the dependence of the metric on the point in their calculations; as we explained, this is not necessary.

Let us now study the equations (2.8). The system is clearly overdetermined; let us calculate the first compatibility conditions. This can be done explicitly; the answer is given in the following proposition.

Proposition 3.2

In dimension \(n\ge 3\), equations (2.8) are fulfilled (for a certain 1-homogeneous smooth function F) if and only if the following system of equations is fulfilled

$$\begin{aligned} F_{y^i y^s} T_{jk}^i y^k - F_{y^i y^j} T_{sk}^i y^k = F_{y^j} \sigma _s - \sigma _j F_{y^s}. \end{aligned}$$
(3.1)

Proof

The direction “\(\Longleftarrow \)” is easy: if we contract (3.1) with \(y^j\), we obtain (2.8). Let us prove the statement in the direction “\(\Longrightarrow \)”. Assume that (2.8) are satisfied. We differentiate them with respect to \(y^\ell \) to obtain

$$\begin{aligned} F_{y^i y^s y^\ell } T_{jk}^i y^k y^j + 2 F_{y^i y^s } T_{j\ell }^i y^j = F_{y^\ell } \sigma _s - \sigma _\ell F_{y^s}- \sigma _0 F_{y^sy^\ell }. \end{aligned}$$
(3.2)

Interchanging the indexes \(\ell \) and s in (3.2), we obtain

$$\begin{aligned} F_{y^i y^s y^\ell } T_{jk}^i y^k y^j + 2 F_{y^i y^\ell } T_{js }^i y^j = F_{y^s } \sigma _\ell - \sigma _s F_{y^\ell }- \sigma _0 F_{y^sy^\ell }. \end{aligned}$$

Subtracting this equation from (3.2), we obtain

$$\begin{aligned} 2 F_{y^i y^s } T_{j\ell }^i y^j - 2F_{y^i y^\ell } T_{js }^i y^j = 2 F_{y^\ell } \sigma _s - 2F_{y^s } \sigma _\ell , \end{aligned}$$

which is clearly equivalent to (3.1). Proposition 3.2 is proved. \(\square \)

Note that the number of equations in (3.1), together with the equations

$$\begin{aligned} F_{y^iy^s} y^i= 0 \end{aligned}$$
(3.3)

corresponding to the 1-homogeneity of F, is \(\tfrac{n(n+1)}{2}\), which is precisely the number of second derivatives of F with respect to the y variables. It is easy to see that for generic T, in a neighborhood of almost every point y, one can solve the system for the second derivatives and therefore bring it into Cauchy–Frobenius form, that is, all highest derivatives of the unknown function \(F=F(y)\) are expressed as functions of the lower derivatives and of the coordinates y. Indeed, since the system is linear, it is sufficient to show this for one tensor \(T^i_{jk}\) (because the determinant of a matrix is an algebraic expression in its components), and it is easy to find a \(T^i_{jk}\) such that the system has only one solution; one of the simplest examples is:

$$\begin{aligned} T^i_{jk}=\left\{ \begin{array}{ccl}0 &{} \quad \text { if } &{} \quad i\ne 1, \\ 0 &{} \quad \text { if } &{} \quad j\ne k, \\ 1 &{} \quad \text { if } &{} \quad i=1 \text { and } j =k. \end{array}\right. \end{aligned}$$

(This example corresponds to the flat metric \(g_{ij}=\delta _{ij}\) and to the 1-form \(\sigma _i=(1,0,\ldots ,0)\).) Now, from the general theory, it follows that for \(T^i_{jk}\) such that the solution of the system is unique, the restriction of the Finsler metric to the tangent space \(T_xM\) depends on finitely many parameters (which in our case are the values of the first y-derivatives at one point of \(T_xM\)).
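
For the example tensor above one can verify the equations (3.1) directly, taking the flat metric \(F=|y|\) in dimension \(n=3\) and \(\sigma _i=(1,0,0)\). A sympy sketch (indices are 0-based in the code):

```python
import sympy as sp

n = 3
y = sp.symbols('y1:4', positive=True)
F = sp.sqrt(sum(v**2 for v in y))        # flat metric g_ij = delta_ij, F = |y|
sigma = [1, 0, 0]                        # the 1-form sigma_i = (1, 0, ..., 0)

# the example tensor: T^i_{jk} = 1 iff i = 1 and j = k (i = 0 here, 0-based)
def T(i, j, k):
    return 1 if (i == 0 and j == k) else 0

# residuals of (3.1) for all index pairs (s, j); all should vanish
residuals = []
for s in range(n):
    for j in range(n):
        lhs = sum(sp.diff(F, y[i], y[s]) * T(i, j, k) * y[k]
                  for i in range(n) for k in range(n))
        lhs -= sum(sp.diff(F, y[i], y[j]) * T(i, s, k) * y[k]
                   for i in range(n) for k in range(n))
        rhs = sp.diff(F, y[j]) * sigma[s] - sigma[j] * sp.diff(F, y[s])
        residuals.append(sp.simplify(lhs - rhs))

assert all(r == 0 for r in residuals)
```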

Combining Proposition 3.2 with Remark 3.1, we obtain:

Proposition 3.3

Assume there exists a strictly convex positively 1-homogeneous function \(F:\mathbb {R}^n\rightarrow \mathbb {R}\) satisfying (3.1) such that the level set \(\{y\mid F(y)= 1\}\) is not an ellipsoid. Then there exists a non-Randers Douglas metric F and a nonconstant conformal coefficient such that the conformally related metric is also Douglas.

We do not know whether such functions exist. The system (3.1), (3.3) is a linear overdetermined system of PDEs and, in theory, there exists an algorithmic way to solve it, but we did not manage to overcome the algebraic difficulties in the case of a general \(T^i_{jk}\). Besides, it is not clear how to analyze whether a solution is indeed strictly convex (note that in dimension two, strict convexity corresponds to the condition \(f'' + f>0\) and was essentially used in the proof), and also how to analyze the solutions near the points where the solution of the system (3.1), (3.3) is not unique (in dimension two, the analogs of such points are the points where the coefficient of \(f''\) vanishes; they play an important role in the proof). But for explicitly given “special metrics” (e.g., \((\alpha ,\beta )\) metrics), in order to understand whether one can construct nontrivially conformally equivalent Douglas metrics in their class, one should simply substitute the form of the metric in (3.1) and then analyze the obtained equations.