1 Introduction

This paper is inspired by 15 years of collaboration with Alexander Vasil’ev. It gives some details related to a talk given at the conference “ICAMI 2017 at San Andrés Island, Colombia”, November 26–December 1, 2017, partly in honor of Alexander Vasil’ev.

My collaboration with Alexander Vasil’ev started with some specific questions concerning Hele-Shaw flow and evolved over time into various areas of modern mathematical physics. The governing equation for the Hele-Shaw flow moving boundary problem we were studying is called the Polubarinova–Galin equation, after the two Russian mathematicians Polubarinova-Kochina and Galin, who formulated this equation around 1945. Shortly thereafter, in 1948, Vinogradov and Kufarev were able to prove local existence of solutions of the appropriate initial value problem, under the necessary analyticity conditions.

Much later, around 2000, another group of Russian mathematicians, or mathematical physicists, led by Mineev-Weinstein, Wiegmann, and Zabrodin, considered the Hele-Shaw problem from the point of view of integrable systems, and the corresponding equation then reappeared under the name “string equation”. See for example [11, 12, 14, 25]. The integrable system approach appears as a consequence of the discovery in 1972 by Richardson [15] that the Hele-Shaw problem has a complete set of conserved quantities, namely the harmonic moments. See [24] for the history of the Hele-Shaw problem in general. It is not clear whether the name “string equation” really refers to string theory, but it is known that the subject as a whole has connections to, for example, 2D quantum gravity, and hence is at least indirectly related to string theory. In any case, these matters have been a source of inspiration for Alexander Vasil’ev and myself, and in our first book [8] one of the chapters has the title “Hele-Shaw evolution and strings”.

The string equation is deceptively simple and beautiful. It reads

$$\begin{aligned} \{f,f^*\}=1, \end{aligned}$$
(1)

in terms of a special Poisson bracket referring to harmonic moments and with f any normalized conformal map from some reference domain, in our case the unit disk, to the fluid domain for the Hele-Shaw flow. The main question for this paper now is: if such a beautiful equation as (1) holds for all univalent functions, shouldn’t it also hold for non-univalent functions?

The answer is that the Poisson bracket does not (always) make sense in the non-univalent case, but that one can extend its meaning, actually in several different ways, and after such a step the string equation indeed holds. Thus the problem is not that the string equation is particularly difficult to prove, the problem is that the meaning of the string equation is ambiguous in the non-univalent case. In this paper we focus on polynomial mappings, and show that the string equation has a natural meaning, and holds, in this case. In a companion paper [3] (see also [2]) we treat certain kinds of rational mappings related to quadrature Riemann surfaces.

2 The string equation for univalent conformal maps

We consider analytic functions \(f(\zeta )\) defined in a neighborhood of the closed unit disk and normalized by \(f(0)=0\), \(f'(0)>0\). In addition, we always assume that \(f'\) has no zeros on the unit circle. It will be convenient to write the Taylor expansion around the origin in the form

$$\begin{aligned} f(\zeta )= \sum _{j=0}^{\infty }a_{j}\zeta ^{j+1} \quad (a_0>0). \end{aligned}$$

If f is univalent it maps \({\mathbb D}=\{\zeta \in {\mathbb C}: |\zeta |<1\}\) onto a domain \(\Omega =f({\mathbb D})\). The harmonic moments for this domain are

$$\begin{aligned} M_k = \frac{1}{\pi }\int _\Omega z^k {d}x{d}y, \quad k=0,1,2,\ldots . \end{aligned}$$

The integral here can be pulled back to the unit disk and pushed to the boundary there. This gives

$$\begin{aligned} M_k =\frac{1}{2\pi \mathrm {i}} \int _{ {\mathbb D}} f(\zeta )^k |f'(\zeta )|^2 d\bar{\zeta } d\zeta =\frac{1}{2\pi \mathrm {i}} \int _{\partial {\mathbb D}} f(\zeta )^k f^*(\zeta )f'(\zeta ) d\zeta , \end{aligned}$$
(2)

where

$$\begin{aligned} f^*(\zeta )=\overline{f(1/\bar{\zeta })} \end{aligned}$$
(3)

denotes the holomorphic reflection of f in the unit circle. In the form (2), the moments make sense also when f is not univalent.
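As a quick numerical illustration (ours, not part of the original text): on \(|\zeta |=1\) the reflection \(f^*\) coincides with \(\bar{f}\), the derivative \(f'\) can be obtained by FFT differentiation, and the trapezoid rule is spectrally accurate for analytic integrands, so the boundary form of (2) is easy to evaluate. A minimal numpy sketch, with a helper name `moments` of our own choosing:

```python
import numpy as np

def moments(f, kmax=4, N=512):
    # M_k = (1/2*pi*i) * integral of f^k f* f' dzeta over |zeta| = 1,
    # approximated by the trapezoid rule; on the circle f* = conj(f).
    theta = 2*np.pi*np.arange(N)/N
    zeta = np.exp(1j*theta)
    F = f(zeta)
    wave = np.fft.fftfreq(N, d=1.0/N)                  # integer wave numbers
    Fp = np.fft.ifft(1j*wave*np.fft.fft(F))/(1j*zeta)  # f' = (dF/dtheta)/(i*zeta)
    # (1/2*pi*i) * contour integral of h dzeta = (1/2*pi) * int h(e^{it}) e^{it} dt
    return np.array([np.mean(F**k*np.conj(F)*Fp*zeta) for k in range(kmax)])

# univalent test case f = zeta + 0.2*zeta^2: Richardson's formula (4) below
# gives M_0 = 1.08, M_1 = 0.2 and M_k = 0 for k >= 2
print(np.round(moments(lambda z: z + 0.2*z**2), 10))
```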

Computing the last integral in (2) by residues gives Richardson’s formula [15] for the moments:

$$\begin{aligned} M_k= \sum _{(j_0,\ldots ,j_k)\ge (0,\ldots ,0)} (j_0+1) a_{j_0}\cdots a_{j_{k}}\bar{ a}_{j_0+\ldots +j_{k}+k}. \end{aligned}$$
(4)

This is a highly nonlinear relationship between the coefficients of f and the moments, and even if f is a polynomial of low degree it is virtually impossible to invert it, to obtain \(a_k=a_k(M_0, M_1, \ldots )\), as would be desirable in many situations. Still there is, quite remarkably, an explicit expression for the Jacobi determinant of the change \((a_0,a_1,\ldots )\mapsto (M_0,M_1,\ldots )\) when f is restricted to the class of polynomials of a fixed degree. This formula, which was proved by Kuznetsova and Tkachev [13, 21] after an initial conjecture of Ullemar [22], will be discussed in depth below, and it is the major tool for the main result of this paper, Theorem 1.
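For instance, for \(n=1\), i.e. \(f(\zeta )=a_0\zeta +a_1\zeta ^2\), formula (4) reduces (as is easily checked by hand) to

$$\begin{aligned} M_0=a_0^2+2|a_1|^2, \quad M_1=a_0^2\bar{a}_1, \quad M_k=0 \quad (k\ge 2), \end{aligned}$$

and already here the inversion leads, for real coefficients, to the cubic equation \(2a_1^3-M_0\,a_1+M_1=0\).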

There are examples of different simply connected domains having the same harmonic moments, see for example [17, 18, 26]. Restricting to domains with analytic boundary, the harmonic moments are however sensitive to at least small variations of the domain. This can easily be proved by potential theoretic methods. Indeed, arguing on an intuitive level, an infinitesimal perturbation of the boundary can be represented by a signed measure sitting on the boundary (this measure representing the speed of infinitesimal motion). The logarithmic potential of that measure is a continuous function in the complex plane, and if the harmonic moments were insensitive to the perturbation then the exterior part of this potential would vanish. At the same time the interior potential is a harmonic function, and the only way all these conditions can be satisfied is that the potential vanishes identically, hence also that the measure on the boundary vanishes. On a more rigorous level, in the polynomial case the above-mentioned Jacobi determinant is indeed nonzero. Compare also the discussions in [16].

The conformal map, with its normalization, is uniquely determined by the image domain \(\Omega \) and, as indicated above, the domain is locally encoded in the sequence of moments \(M_0, M_1, M_2,\ldots \). Thus the harmonic moments can be viewed as local coordinates in the space of univalent functions, and we may write

$$\begin{aligned} f(\zeta )= f(\zeta ; M_0, M_1, M_2,\ldots ). \end{aligned}$$

In particular, the derivatives \({\partial f}/{\partial M_k}\) make sense. Now we are in a position to define the Poisson bracket.

Definition 1

For any two functions \(f(\zeta )=f(\zeta ; M_0, M_1, M_2,\ldots )\), \(g(\zeta )=g(\zeta ; M_0, M_1, M_2,\ldots )\) which are analytic in a neighborhood of the unit circle and are parametrized by the moments we define

$$\begin{aligned} \{f,g\}=\zeta \frac{\partial f}{\partial \zeta }\frac{\partial g}{\partial M_0}-\zeta \frac{\partial g}{\partial \zeta }\frac{\partial f}{\partial M_0}. \end{aligned}$$
(5)

This is again a function analytic in a neighborhood of the unit circle and parametrized by the moments.

The Schwarz function [1, 20] of an analytic curve \(\Gamma \) is the unique holomorphic function defined in a neighborhood of \(\Gamma \) and satisfying

$$\begin{aligned} S(z)=\bar{z}, \quad z\in \Gamma . \end{aligned}$$

When \(\Gamma =f(\partial {\mathbb D})\), f analytic in a neighborhood of \(\partial {\mathbb D}\), the defining property of S(z) becomes

$$\begin{aligned} S\circ f = f^*, \end{aligned}$$
(6)

holding identically in a neighborhood of the unit circle. Notice that \(f^*\) and S depend on the moments \(M_0, M_1, M_2,\ldots \), like f. The string equation asserts that

$$\begin{aligned} \{f,f^*\}=1 \end{aligned}$$
(7)

in a neighborhood of the unit circle, provided f is univalent in a neighborhood of the closed unit disk. This result was first formulated and proved in [25] for the case of conformal maps onto an exterior domain (containing the point of infinity). For conformal maps to bounded domains a proof based on somewhat different ideas and involving explicitly the Schwarz function was given in [5]. For convenience we briefly recall the proof below.
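Before recalling the proof, note the simplest sanity check (our own illustration): for the disk of radius \(\sqrt{M_0}\) one has \(f(\zeta ;M_0)=\sqrt{M_0}\,\zeta \) and \(f^*(\zeta ;M_0)=\sqrt{M_0}/\zeta \), so that \(\partial f/\partial M_0=\zeta /(2\sqrt{M_0})\), \(\partial f^*/\partial M_0=1/(2\sqrt{M_0}\,\zeta )\), and (5) gives

$$\begin{aligned} \{f,f^*\}=\zeta \cdot \sqrt{M_0}\cdot \frac{1}{2\sqrt{M_0}\,\zeta }-\zeta \cdot \Big (-\frac{\sqrt{M_0}}{\zeta ^2}\Big )\cdot \frac{\zeta }{2\sqrt{M_0}} =\frac{1}{2}+\frac{1}{2}=1. \end{aligned}$$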

Writing (6) more explicitly as

$$\begin{aligned} f^*(\zeta ; M_0, M_1,\ldots )=S(f(\zeta ; M_0, M_1,\ldots ); M_0,M_1,\ldots ) \end{aligned}$$

and using the chain rule when computing \(\frac{\partial f^*}{\partial M_0}\) gives, after simplification,

$$\begin{aligned} \{f,f^*\}=\zeta \frac{\partial f}{\partial \zeta }\cdot ( \frac{\partial S}{\partial M_0}\circ f). \end{aligned}$$
(8)

Next one notices that the harmonic moments are exactly the coefficients in the expansion of a certain Cauchy integral at infinity:

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}} \int _{\partial \Omega }\frac{\bar{w}dw}{z-w} = \sum _{k=0}^\infty \frac{M_k}{z^{k+1}} \qquad (|z|\gg 1). \end{aligned}$$

Combining this with the fact that the jump of this Cauchy integral across \(\partial \Omega \) is \(\bar{z}\) it follows that S(z) equals the difference between the analytic continuations of the exterior (\(z\in \Omega ^e\)) and interior (\(z\in \Omega \)) functions defined by the Cauchy integral. Therefore

$$\begin{aligned} S(z; M_0,M_1,\ldots )= \sum _{k=0}^\infty \frac{M_k}{z^{k+1}} + \text {function holomorphic in }\Omega , \end{aligned}$$

and so, since \(M_0, M_1,\ldots \) are independent variables,

$$\begin{aligned} \frac{\partial S}{\partial M_0}(z;M_0, M_1,\ldots )=\frac{1}{ z} + \text {function holomorphic in }\Omega . \end{aligned}$$

Inserting this into (8) one finds that \(\{f,f^*\}\) is holomorphic in \({\mathbb D}\). Since the Poisson bracket is invariant under holomorphic reflection in the unit circle it follows that \(\{f,f^*\}\) is holomorphic in the exterior of \({\mathbb D}\) (including the point of infinity) as well, hence it must be constant. And this constant is found to be one, proving (7).
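The string equation can also be verified numerically. The following sketch (ours; it assumes a quadratic map with real coefficients, so that the moment equations reduce to Richardson's relations \(M_0=a_0^2+2a_1^2\), \(M_1=a_0^2a_1\)) recovers the coefficients from the moments by Newton iteration, approximates \(\partial f/\partial M_0\) by a finite difference with \(M_1\) held fixed, and evaluates the bracket (5) on the unit circle:

```python
import numpy as np

def solve_coeffs(M0, M1, guess):
    # Newton iteration for a0^2 + 2*a1^2 = M0, a0^2*a1 = M1 (a0, a1 real),
    # i.e. Richardson's formula for f = a0*zeta + a1*zeta^2
    a = np.array(guess, dtype=float)
    for _ in range(50):
        a0, a1 = a
        F = np.array([a0**2 + 2*a1**2 - M0, a0**2*a1 - M1])
        J = np.array([[2*a0, 4*a1], [2*a0*a1, a0**2]])
        a = a - np.linalg.solve(J, F)
    return a

def string_bracket(M0, M1, zeta, guess=(1.0, 0.1), eps=1e-6):
    # evaluates {f, f*} at the points zeta, with d/dM0 taken at fixed M1
    a0, a1 = solve_coeffs(M0, M1, guess)
    p0, p1 = solve_coeffs(M0 + eps, M1, (a0, a1))
    d0, d1 = (p0 - a0)/eps, (p1 - a1)/eps        # da_j/dM0 by finite differences
    fprime = a0 + 2*a1*zeta                      # f'(zeta)
    fstarp = -a0/zeta**2 - 2*a1/zeta**3          # (f*)'(zeta)
    dfdM   = d0*zeta + d1*zeta**2                # df/dM0
    dfsdM  = d0/zeta + d1/zeta**2                # df*/dM0 (real coefficients)
    return zeta*fprime*dfsdM - zeta*fstarp*dfdM

zeta = np.exp(1j*np.linspace(0, 2*np.pi, 8, endpoint=False))
print(string_bracket(1.2, 0.05, zeta))  # all values equal 1 up to O(eps)
```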

We wish to extend the above to allow non-univalent analytic functions in the string equation. Then the basic ideas in the above proof still work, but what may happen is that f and S are not determined by the moments \(M_0, M_1,\ldots \) alone. Since \(\partial f/\partial M_0\) is a partial derivative one has to specify all other independent variables in order to give a meaning to it. So there may be more variables, say

$$\begin{aligned} f(\zeta )=f(\zeta ; M_0,M_1,\ldots ; B_1,B_2,\ldots ). \end{aligned}$$
(9)

Then the meaning of the string equation depends on the choice of these extra variables. Natural choices turn out to be the locations of branch points, i.e., one takes \(B_j =f(\omega _j)\), where the \(\omega _j\in {\mathbb D}\) denote the zeros of \(f'\) inside \({\mathbb D}\). One advantage of choosing the branch points as additional variables is that keeping them fixed, as is implicit in the notation \(\partial /\partial M_0\), means that f in this case can be viewed as a conformal map into a fixed Riemann surface, which will be a branched covering over the complex plane.

But there are also other possibilities of giving a meaning to the string equation, for example by restricting f to the class of polynomials of a fixed degree, as we shall do in this paper. Then one must allow the branch points to move, so this gives a different meaning to \(\partial /\partial M_0\).

3 Intuition and physical interpretation in the non-univalent case

We shall also consider non-univalent analytic functions as conformal maps, now into Riemann surfaces over \({\mathbb C}\). In general these Riemann surfaces will be branched covering surfaces, and the non-univalence is then absorbed in the covering projection. It is easy to understand that such a Riemann surface, or the corresponding conformal map, will in general not be determined by the moments \(M_0, M_1, M_2, \ldots \) alone.

As a simple example, consider an oriented curve \(\Gamma \) in the complex plane encircling the origin twice (say). In terms of the winding number, or index,

$$\begin{aligned} \nu _\Gamma (z)=\frac{1}{2\pi \mathrm {i}} \oint _{\Gamma } \frac{d\zeta }{\zeta -z} \quad (z\in {\mathbb C}\setminus \Gamma ), \end{aligned}$$
(10)

this means that \(\nu _\Gamma (0)=2\). Points far away from the origin have index zero, and some other points may have index one (for example). Having only the curve \(\Gamma \) available it is natural to define the harmonic moments for the multiply covered (with multiplicities \(\nu _\Gamma \)) set inside \(\Gamma \) as

$$\begin{aligned} M_k=\frac{1}{\pi } \int _{\mathbb C}z^k \nu _\Gamma (z) dxdy= \frac{1}{2\pi \mathrm {i}}\int _\Gamma z^k \bar{z}dz, \quad k=0,1,2,\ldots . \end{aligned}$$

It is tempting to think of this integer weighted set as a Riemann surface over (part of) the complex plane. However, without further information this is not possible. Indeed, since some points have index \(\ge 2\), such a covering surface will have to have branch points, and these have to be specified in order to make the set into a Riemann surface. Only after that is it possible to speak of a conformal map f. Thus f is in general not determined by the moments alone. In the simplest non-univalent cases f will be (locally) determined by the harmonic moments together with the locations of the branch points.

In principle these branch points can be moved freely within the regions where \(\nu _\Gamma \) takes a constant value \(\ge 2\). However, if we restrict f to belong to some restricted class of functions, like polynomials of a fixed degree, it may be that the branch points cannot move that freely. Thus restricting the structure of f can be an alternative to adding new parameters \(B_1, B_2,\ldots \) as in (9). This is a way to understand our main result, Theorem 1 below.

In the following two examples, the first illustrates a completely freely moving branch point, while in the second example the branch point is still free, but moving it forces the boundary curve \(f(\partial {\mathbb D})\) to move as well.

Example 1

Let

$$\begin{aligned} f(\zeta )=-\frac{a}{|a|}\cdot \zeta \cdot \frac{1-\bar{a}\zeta }{\zeta -a}, \end{aligned}$$

where \(|a|>1\). This function maps \({\mathbb D}\) onto \({\mathbb D}\) covered twice, so the above index function is \(\nu =2 \chi _{\mathbb D}\). Thus the corresponding moments are

$$\begin{aligned} M_0=2, \quad M_1=M_2= M_3=\dots =0, \end{aligned}$$

independently of the choice of a; hence a is a free parameter which does not affect the moments. The same is true for the branch point

$$\begin{aligned} B=f(\omega )=a|a|\left( 1- \sqrt{1-\frac{1}{|a|^2}}\,\right) ^2, \end{aligned}$$
(11)

where

$$\begin{aligned} \omega =a\left( 1-\sqrt{1- \frac{1}{|a|^2}}\right) \end{aligned}$$

is the zero of \(f'\) in \({\mathbb D}\). Thus this example confirms the above idea that the branch point can be moved freely without this affecting the image curve \(f({\partial {\mathbb D}})\) or the moments, while the conformal map itself does depend on the choice of branch point.
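Reusing the `moments` helper from the numerical sketch in Section 2, this independence is easy to observe experimentally (our illustration):

```python
a = 1.3 + 0.4j   # any |a| > 1 will do; changing a moves the branch point (11)
f1 = lambda z: -(a/abs(a))*z*(1 - np.conj(a)*z)/(z - a)
print(np.round(moments(f1), 10))   # [2, 0, 0, 0] independently of a
```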

Example 2

A related example is given by

$$\begin{aligned} f(\zeta )=c \zeta \cdot \frac{\zeta -2/\bar{a}+ a/|a|^4}{\zeta -a}, \end{aligned}$$

still with \(|a|>1\). The derivative of this function is

$$\begin{aligned} f'(\zeta )=c\cdot \frac{(\zeta -1/\bar{a})(\zeta -2a+1/\bar{a})}{(\zeta -a)^2}, \end{aligned}$$

which vanishes at \(\zeta =1/\bar{a}\). The branch point is \(B=f(1/\bar{a})=ac/|a|^4\).

Also in this case there is only one nonzero moment, but now for a different reason. What happens in this case is that the zero of \(f'\) in \({\mathbb D}\) coincides with a pole of the holomorphically reflected function \(f^*\), and therefore annihilates that pole in the appropriate residue calculation. (In the previous example the reason was that both poles of \(f^*\) were mapped by f onto the same point, namely the origin.) The calculation goes as follows: for any analytic function g in \({\mathbb D}\), integrable with respect to \(|f'|^2\), we have

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _{\mathbb D}g(\zeta )|f'(\zeta )|^2 d\bar{\zeta }d\zeta =\frac{1}{2\pi \mathrm {i}} \int _{\partial {\mathbb D}} g(\zeta ) f^*(\zeta ) f'(\zeta )d\zeta \end{aligned}$$
$$\begin{aligned} =\mathrm {Res}_{\zeta =0} g(\zeta ) f^*(\zeta ) f'(\zeta )d\zeta +\mathrm {Res}_{\zeta =1/\bar{a}} g(\zeta ) f^*(\zeta ) f'(\zeta )d\zeta \end{aligned}$$
$$\begin{aligned} =A\cdot g(0) + 0\cdot g(1/\bar{a})=Ag(0), \end{aligned}$$

where \(A= \bar{a}^2(2|a|^2-1)B^2\). Applied to the moments, i.e. with \(g(\zeta )=f(\zeta )^k\), this gives

$$\begin{aligned} M_0=A, \quad M_1=M_2=\dots =0. \end{aligned}$$

Clearly we can vary either a or B freely while keeping \(M_0= \bar{a}^2(2|a|^2-1)B^2\) fixed, so there are again two free real parameters in f for a fixed set of moments.

We remark that this example has been considered in a similar context by Sakai [19], and that \(f'(\zeta )\) is a contractive zero divisor in the sense of Hedenmalm [9, 10]. One way to interpret the example is to say that \(f({\mathbb D})\) represents a Hele-Shaw fluid region caused by a unit source at the origin when this has spread on the Riemann surface of \(\sqrt{z-B}\). See Examples 5.2 and 5.3 in [4].
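The same numerical check works here, with parameter values of our own choosing (hypothetical, for illustration only):

```python
a, c = 1.2 - 0.5j, 1.0   # |a| > 1; c real for simplicity
f2 = lambda z: c*z*(z - 2/np.conj(a) + a/abs(a)**4)/(z - a)
A = np.conj(a)**2*(2*abs(a)**2 - 1)*(a*c/abs(a)**4)**2
print(np.round(moments(f2), 10), A.real)   # first entry = A, the rest vanish
```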

The physical interpretation of the string equation is most easily explained with reference to general variations of analytic functions in the unit disk. Consider an arbitrary smooth variation \(f(\zeta )=f(\zeta ,t)\), depending on a real parameter t. We always keep the normalization \(f(0,t)=0\), \(f'(0,t)>0\), and f is assumed to be analytic in a full neighborhood of the closed unit disk, with \(f'\ne 0\) on \(\partial {\mathbb D}\). Then one may define a corresponding Poisson bracket written with a subscript t:

$$\begin{aligned} \{f,g\}_t=\zeta \frac{\partial f}{\partial \zeta }\frac{\partial g}{\partial t}-\zeta \frac{\partial g}{\partial \zeta }\frac{\partial f}{\partial t}. \end{aligned}$$
(12)

This Poisson bracket is itself an analytic function in a neighborhood of \(\partial {\mathbb D}\). It is determined by its values on \(\partial {\mathbb D}\), where we have

$$\begin{aligned} \{f,f^*\}_t= 2\mathrm{Re}[\dot{f} \,\overline{\zeta f'}]. \end{aligned}$$

The classical Hele-Shaw flow moving boundary problem, or Laplacian growth, is a particular evolution, characterized (in the univalent case) by the harmonic moments being conserved, except for the first one which increases linearly with time, say as \(M_0= 2t+\mathrm{constant}\). This means that \(\dot{f}=2\partial f/\partial M_0\), which makes \(\{f,f^*\}_t=2\{f,f^*\}\) and identifies the string equation (7) with the Polubarinova–Galin equation

$$\begin{aligned} \mathrm{Re\,} [\dot{f}(\zeta , t)\,\overline{\zeta f'(\zeta ,t)} ]=1, \quad \zeta \in \partial {\mathbb D}, \end{aligned}$$
(13)

for the Hele-Shaw problem.

Dividing (13) by \(|f'|\) gives

$$\begin{aligned} \mathrm{Re\,} [\dot{f}\cdot \overline{\frac{\zeta f'}{|\zeta f' |}} ]=\frac{1}{|\zeta f'|} \quad \text {on }\partial {\mathbb D}. \end{aligned}$$

Here the left member can be interpreted as the inner product between \(\dot{f}\) and the unit normal vector on \(\partial \Omega =f(\partial {\mathbb D})\), and the right member as the modulus of the gradient of a suitably normalized Green’s function of \(\Omega =f({\mathbb D})\) with pole at the origin.

Thus (13) says that \(\partial \Omega \) moves in the normal direction with velocity \( |\nabla G_\Omega |\), and for the string equation the interpretation becomes

$$\begin{aligned} 2\frac{\partial f}{\partial M_0}\big |_\mathrm{normal} = \frac{\partial G_\Omega }{\partial n} \quad \text {on }\partial \Omega , \end{aligned}$$

the subscript “normal” signifying normal component when considered as a vector on \(\partial \Omega \).
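Reusing `solve_coeffs` from the numerical sketch in Section 2 (our illustration, quadratic map with real coefficients), this evolution is easy to trace: \(M_1=a_0^2a_1\) is conserved while \(M_0=2t+\mathrm{constant}\), so under injection \(a_1\) decays and the domain becomes rounder:

```python
# Hele-Shaw injection for f = a0*zeta + a1*zeta^2: M1 is conserved,
# M0 = 2t + const, and the coefficients follow from Richardson's system
for t in [0.0, 0.5, 1.0, 2.0]:
    a0, a1 = solve_coeffs(1.2 + 2*t, 0.05, (1.0, 0.1))
    print(f"t = {t}: a0 = {a0:.4f}, a1 = {a1:.4f}")   # a1 -> 0, rounder domain
```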

The general Poisson bracket (12) enters when differentiating the formula (2) for the moments \(M_k\) with respect to t for a given evolution. For a more general statement in this respect we may replace the function \(f(\zeta )^k\) appearing in (2) by a function \(g(\zeta ,t)\) which is analytic in \(\zeta \) and depends on t in the same way as \(h(f(\zeta ,t))\) does, where h is analytic, for example \(h(z)=z^k\). This means that \(g=g(\zeta ,t)\) has to satisfy

$$\begin{aligned} \frac{\dot{g}(\zeta ,t)}{ g'(\zeta ,t)}=\frac{\dot{f}(\zeta ,t)}{f'(\zeta ,t)}, \end{aligned}$$
(14)

saying that g “flows with” f and locally can be regarded as a time independent function in the image domain of f.
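For example, \(g=h\circ f\) with h fixed and analytic satisfies (14), since then \(\dot{g}=(h'\circ f)\,\dot{f}\) and \(g'=(h'\circ f)\,f'\); this is exactly the situation in Corollary 1 below.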

We then have (cf. Lemma 4.1 in [4])

Lemma 1

Assume that \(g(\zeta ,t)\) is analytic in \(\zeta \) in a neighborhood of the closed unit disk and depends smoothly on t in such a way that (14) holds. Then

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\frac{d}{d t} \int _{\mathbb D}g(\zeta ,t)|f'(\zeta ,t)|^2 d\bar{\zeta } d\zeta =\frac{1}{2\pi }\int _{0}^{2\pi } g(\zeta ,t) \{f,f^*\}_t \,d\theta , \end{aligned}$$
(15)

the last integrand being evaluated at \(\zeta =e^{\mathrm {i}\theta }\).

As a special case, with \(g(\zeta ,t)=h(f(\zeta ,t))\), we have

Corollary 1

If h(z) is analytic in a fixed domain containing the closure of \(f({{\mathbb D}},t)\) then

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\frac{d}{d t} \int _{\mathbb D}h(f(\zeta ,t))|f'(\zeta ,t)|^2 d\bar{\zeta }d\zeta =\frac{1}{2\pi }\int _{0}^{2\pi } h( f(\zeta ,t)) \{f,f^*\}_t \,d\theta . \end{aligned}$$

Proof

The proof of (15) is straightforward: differentiating under the integral sign and using partial integration we have

$$\begin{aligned} \frac{d}{d t} \int _{\mathbb D}g |f'|^2 d\bar{\zeta }d\zeta =\frac{d}{d t} \int _{\partial {\mathbb D}} g f^* f' d\zeta =\int _{\partial {\mathbb D}} \left( \dot{g} f^* f' + g\dot{f}^* f' + gf^* \dot{f}'\right) d\zeta \end{aligned}$$
$$\begin{aligned} =\int _{\partial {\mathbb D}} \left( \dot{g} f^* f' + g\dot{f}^* f' -g' f^*\dot{f} -g (f^*)' \dot{f} \right) d\zeta \end{aligned}$$
$$\begin{aligned} =\int _{\partial {\mathbb D}} \left( (\dot{g}f'- \dot{f}g') f^*+ g( \dot{f}^* f' - (f^*)' \dot{f} )\right) d\zeta =\int _{\partial {\mathbb D}} g\cdot \{f,f^*\}_t \,\frac{{d}\zeta }{\zeta }, \end{aligned}$$

which is the desired result. \(\square \)

4 The string equation for polynomials

We now focus on polynomials, of a fixed degree \(n+1\):

$$\begin{aligned} f(\zeta )= \sum _{j=0}^{n}a_{j}\zeta ^{j+1}, \quad a_0>0. \end{aligned}$$
(16)

The derivative is of degree n, and we denote its coefficients by \(b_j\):

$$\begin{aligned} f'(\zeta )=\sum _{j=0}^n b_j\zeta ^j =\sum _{j=0}^n (j+1)a_j \zeta ^j. \end{aligned}$$
(17)

It is obvious from Definition 1 that whenever the Poisson bracket (5) makes sense (i.e., whenever \(\partial f/\partial M_0\) makes sense), it will vanish if \(f'\) has zeros at two points which are reflections of each other with respect to the unit circle: if \(f'(\omega )=0\) and \(f'(1/\bar{\omega })=0\), then also \(f'^*(\omega )=\overline{f'(1/\bar{\omega })}=0\), so both terms in (5) vanish at \(\zeta =\omega \). Thus the string equation cannot hold in such cases. The main result, Theorem 1, says that for polynomial maps this is the only exception: the string equation makes sense and holds whenever \(f'\) and \(f'^*\) have no common zeros.

Two polynomials having common zeros is something which can be tested by the classical resultant, which vanishes exactly in this case. Now \(f'^*\) is not really a polynomial, only a rational function, but one may work with the polynomial \(\zeta ^n f'^*(\zeta )\) instead. Alternatively, one may use the meromorphic resultant, which applies to meromorphic functions on a compact Riemann surface, in particular rational functions. Very briefly expressed, the meromorphic resultant \({\mathcal {R}}(g,h)\) between two meromorphic functions g and h is defined as the multiplicative action of one of the functions on the divisor of the other. The second member of (18) below gives an example of the multiplicative action of h on the divisor of g. See [7] for further details.

We shall need the meromorphic resultant only in the case of two rational functions of the form \(g(\zeta )=\sum _{j=0}^n b_j\zeta ^j \) and \(h(\zeta )=\sum _{k=0}^n c_k \zeta ^{-k}\), and in this case it is closely related to the ordinary polynomial resultant \(\mathcal {R}_\mathrm{pol}\) (see [23]) for the two polynomials \(g(\zeta )\) and \(\zeta ^n h(\zeta )\). Indeed, denoting by \(\omega _1,\ldots , \omega _n\) the zeros of g, the divisor of g is the formal sum \(1\cdot (\omega _1)+\dots +1\cdot (\omega _n)-n\cdot (\infty )\), noting that g has a pole of order n at infinity. This gives the meromorphic resultant, and its relation to the polynomial resultant, as

$$\begin{aligned} \mathcal {R} (g,h) =\frac{h(\omega _1)\cdot \dots \cdot h(\omega _n)}{h(\infty )^n} =\frac{1}{b_0^n c_0^n} \mathcal {R}_\mathrm{pol} (g(\zeta ), \zeta ^n h(\zeta )). \end{aligned}$$
(18)
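The relation (18) is easy to check symbolically in a concrete case. A small sympy sketch (ours; \(n=2\), with integer stand-ins \(c_k\) playing the role of \(\bar{b}_k\)):

```python
import sympy as sp

zeta = sp.symbols('zeta')
b = [sp.Integer(3), sp.Integer(2), sp.Integer(1)]    # g = 3 + 2*zeta + zeta**2
c = [sp.Integer(5), sp.Integer(-1), sp.Integer(2)]   # h = 5 - 1/zeta + 2/zeta**2
g = b[0] + b[1]*zeta + b[2]*zeta**2
h = c[0] + c[1]/zeta + c[2]/zeta**2

# third member of (18): polynomial resultant of g and zeta^n*h, normalized
rhs = sp.resultant(g, sp.expand(zeta**2*h), zeta)/(b[0]**2*c[0]**2)

# second member of (18): product of h over the zeros of g, divided by h(oo)^n
w1, w2 = sp.solve(g, zeta)
mid = sp.simplify(h.subs(zeta, w1)*h.subs(zeta, w2)/c[0]**2)

print(rhs, mid, sp.simplify(rhs - mid) == 0)   # both members equal 82/75
```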

The main result below is an interplay between the Poisson bracket, the resultant and the Jacobi determinant between the moments and the coefficients of f in (16). The theorem is mainly due to Kuznetsova and Tkachev [13, 21]; only the statement about the string equation is (possibly) new. One may argue that this string equation can actually be obtained from the string equation for univalent polynomials by “analytic continuation”, but we think that writing down an explicit proof in the non-univalent case really clarifies the nature of the string equation. In particular, the proof shows that the string equation is not an entirely trivial identity.

Theorem 1

With f a polynomial as in (16), the identity

$$\begin{aligned} \frac{\partial (\bar{M}_{n},\ldots , \bar{M}_1, M_0, M_1, \ldots , M_n)}{\partial (\bar{a}_{n},\ldots , \bar{a}_1, a_0, a_1, \ldots , a_{n} )} =2a_0^{n^2+3n+1}\mathcal {R}(f',f'^*) \end{aligned}$$
(19)

holds generally. It follows that the derivative \(\partial f/\partial M_0\) makes sense whenever \(\mathcal {R}(f',f'^*)\ne 0\), and then also the string equation

$$\begin{aligned} \{f,f^*\}=1 \end{aligned}$$
(20)

holds.

Proof

For the first statement we essentially follow the proof given in [6], but add some details which will be necessary for the second statement.

Using Corollary 1 we shall first investigate how the moments change under a general variation of f, i.e., we let \(f(\zeta )=f(\zeta ,t)\) depend smoothly on a real parameter t. Thus \(a_j=a_j(t)\), \(M_k=M_k(t)\), and derivatives with respect to t will often be denoted by a dot. For the Laurent series of any function \(h(\zeta )=\sum _i c_i \zeta ^i\) we denote by \(\mathrm{coeff}_i (h)\) the coefficient of \(\zeta ^i\):

$$\begin{aligned} \mathrm{coeff}_i (h)=c_i=\frac{1}{2\pi \mathrm {i}} \oint _{|\zeta |=1} \frac{h(\zeta )d\zeta }{\zeta ^{i+1}}. \end{aligned}$$

By Corollary 1 we then have, for \(k\ge 0\),

$$\begin{aligned} \frac{d}{dt}{M}_k=\frac{1}{2\pi \mathrm {i}}\frac{d}{d t} \int _{\mathbb D}f(\zeta ,t)^k |f'(\zeta ,t)|^2d\bar{\zeta }d\zeta =\frac{1}{2\pi }\int _{0}^{2\pi } f^k \{f,f^*\}_t \,d\theta \end{aligned}$$
$$\begin{aligned} =\mathrm{coeff}_0 (f^k \{f,f^*\}_t)= \sum _{i= 0}^n \mathrm{coeff}_{+i} (f^k) \cdot \mathrm{coeff}_{-i} (\{f,f^*\}_t). \end{aligned}$$

Note that \(f(\zeta )^k\) contains only nonnegative powers of \(\zeta \) and that \(\{f,f^*\}_t\) contains powers with exponents in the interval \(-n\le i\le n\) only.

In view of (16) the matrix

$$\begin{aligned} v_{ki}= \mathrm{coeff}_{+i} (f^k) \quad (0\le k,i\le n) \end{aligned}$$
(21)

is upper triangular, i.e., \(v_{ki}=0\) for \(0\le i<k\), with diagonal elements being powers of \(a_0\):

$$\begin{aligned} v_{kk}= a_0^k. \end{aligned}$$

Next we shall find the coefficients of the Poisson bracket. These will involve the coefficients \(b_k\) and \(\dot{a}_j\), but also their complex conjugates. For a streamlined treatment it is convenient to introduce coefficients with negative indices to represent the complex conjugated quantities, and similarly for the moments. Thus we define, for the purpose of this proof and the forthcoming Example 3,

$$\begin{aligned} M_{-k}=\bar{M}_k, \quad a_{-k}=\bar{a}_k, \quad b_{-k}=\bar{b}_k \quad (k>0). \end{aligned}$$
(22)

The turning points in this indexing are the real quantities \(M_0\) and \(a_0=b_0\).

In this notation the expansion of the Poisson bracket becomes

$$\begin{aligned} \{f,f^*\}_t = f'(\zeta )\cdot \zeta \dot{f}^*(\zeta ) + f'^*(\zeta )\cdot \zeta ^{-1}\dot{f}(\zeta ) \end{aligned}$$
(23)
$$\begin{aligned} =\sum _{\ell ,j\ge 0} b_\ell \dot{{\bar{a}}}_{j}\zeta ^{\ell -j}+\sum _{\ell ,j\ge 0}{\bar{b}}_{\ell } \dot{a}_j\zeta ^{j-\ell } =\sum _{\ell \ge 0,\, j\le 0} b_\ell \dot{{a}}_{j}\zeta ^{\ell +j}+\sum _{\ell \le 0, \, j\ge 0}{b}_{\ell } \dot{a}_j\zeta ^{\ell +j} \end{aligned}$$
$$\begin{aligned} =b_0\dot{a}_0 +\sum _{\ell \cdot j\le 0} b_\ell \dot{{a}}_{j}\zeta ^{\ell +j} =b_0\dot{a}_0 +\sum _i\left( \sum _{\ell \cdot j\le 0,\, \ell +j=-i} b_\ell \dot{{a}}_{j}\right) \zeta ^{-i}. \end{aligned}$$

The last summation runs over pairs of indices \((\ell ,j)\) having opposite sign (or at least one of them being zero) and adding up to \(-i\). We presently need only to consider the case \(i\ge 0\). Eliminating \(\ell \) and letting j run over those values for which \(\ell \cdot j\le 0\) (each such j counted once) we therefore get

$$\begin{aligned} \mathrm{coeff}_{-i} (\{f,f^*\}_t)= b_{0}\, \dot{a}_0\,\delta _{i0}+\sum _{j\le -i \,\text { or }\, j\ge 0} b_{-(i+j)}\, \dot{a}_j. \end{aligned}$$

Here \(\delta _{ij}\) denotes the Kronecker delta. Setting, for \(i\ge 0\),

$$\begin{aligned} u_{ij} = {\left\{ \begin{array}{ll} b_{-(i+j)} + b_{0}\, \delta _{i0}\, \delta _{0j}, &\quad \text {if } -n\le j \le -i \text { or } 0\le j\le n, \\ 0, &\quad \text {in remaining cases}, \end{array}\right. } \end{aligned}$$

we thus have

$$\begin{aligned} \mathrm{coeff}_{-i} (\{f,f^*\}_t)=\sum _{j=0}^n u_{ij} \dot{a}_j. \end{aligned}$$
(24)

Turning to the complex conjugated moments we have, with \(k< 0\),

$$\begin{aligned} \dot{M}_{k}= \dot{\bar{M}}_{-k}= \sum _{i=-n}^0 \overline{\mathrm{coeff}_{-i} ({f}^{-k})} \cdot \overline{ \mathrm{coeff}_{+i} (\{f,f^*\}_t)}. \end{aligned}$$

Set, for \(k<0\), \(i\le 0\),

$$\begin{aligned} v_{ki}= \overline{ \mathrm{coeff}_{-i} (f^{-k})}. \end{aligned}$$

Then \(v_{ki}=0\) when \(k<i\le 0\), and \(v_{kk}=a_0^{-k}\). To achieve the counterpart of (24) we define, for \(i\le 0\),

$$\begin{aligned} u_{ij} = {\left\{ \begin{array}{ll} b_{-(i+j)} + b_{0}\, \delta _{i0}\, \delta _{0j}, &\quad \text {if } -n\le j \le 0\text { or } -i \le j\le n, \\ 0, &\quad \text {in remaining cases}. \end{array}\right. } \end{aligned}$$

This gives, with \(i\le 0\),

$$\begin{aligned} \overline{\mathrm{coeff}_{+i} (\{f,f^*\}_t)}=\sum _{j=0}^n u_{ij} \dot{a}_j. \end{aligned}$$

As a summary we have, from (21), (24) and from the corresponding conjugated equations,

$$\begin{aligned} \dot{M}_k=\sum _{-n\le i,j \le n}v_{ki} u_{ij} \dot{a}_j, \quad -n\le k\le n, \end{aligned}$$
(25)

where

$$\begin{aligned} v_{ki}&= \mathrm{coeff}_{+i}(f^k)&\quad \text {when } 0\le k\le i,\\ v_{ki}&= \overline{\mathrm{coeff}_{-i}({f}^{-k})}&\quad \text {when } i\le k<0,\\ v_{ki}&=0&\quad \text {in remaining cases},\\ u_{ij}&= b_{-(i+j)} + b_{0} \delta _{i0}\delta _{0j}&\quad \text {in index intervals made explicit above},\\ u_{ij}&=0&\quad \text {in remaining cases}. \end{aligned}$$

We see that the full matrix \(V=(v_{ki})\) is triangular in each of the two blocks along the main diagonal and vanishes completely in the two remaining blocks. Therefore, its determinant is simply the product of the diagonal elements. More precisely this becomes

$$\begin{aligned} \det V = a_0^{n(n+1)}. \end{aligned}$$
(26)

The matrix \(U=(u_{ij})\) represents the linear dependence of the bracket \(\{f,f^*\}_t\) on \(f'\) and \(f'^*\), and it acts on the column vector with components \(\dot{a}_j\), which represents the linear dependence on \(\dot{f}\) and \(\dot{f}^*\). The computation started at (23) can thus be finalized as

$$\begin{aligned} \{f,f^*\}_t= \sum _{-n\le i,j \le n}u_{ij} \dot{a}_j \zeta ^{-i}. \end{aligned}$$
(27)

Returning to (25), this equation says that the matrix of partial derivatives \(\partial M_k/ \partial a_j\) equals the matrix product VU, in particular that

$$\begin{aligned} \frac{\partial ({M}_{-n}, \ldots {M}_{-1}, M_0, M_1, \ldots , M_n)}{\partial ({a}_{-n},\ldots , {a}_{-1}, a_0, a_1, \ldots , a_{n} )} =\det V\cdot \det U. \end{aligned}$$

The first determinant was already computed above, see (26). It remains to connect \(\det U\) to the meromorphic resultant \(\mathcal {R}(f',f'^*)\).

For any kind of evolution, \(\{f,f^*\}_t\) vanishes whenever \(f'\) and \(f'^*\) have a common zero. The meromorphic resultant \(\mathcal {R} (f', f'^*)\) is a complex number which has the same vanishing properties as \(\{f,f^*\}_t\), and it is in a certain sense minimal with this property. From this one may expect that the determinant of U is simply a multiple of the resultant. Taking homogeneities into account, the constant of proportionality should be \(b_0^{2n+1}\), times possibly some numerical factor. The precise formula in fact turns out to be

$$\begin{aligned} \det U = 2 b_0^{2n+1} \mathcal {R}(f', f'^* ). \end{aligned}$$
(28)

One way to prove it is to connect U to the Sylvester matrix S associated to the polynomial resultant \(\mathcal {R}_\mathrm{pol}(f'(\zeta ), \zeta ^n f'^*(\zeta ))\). This matrix is of size \(2n\times 2n\). By some operations with rows and columns (the details are given in [6], and will in addition be illustrated in the example below) one finds that

$$\begin{aligned} \det U = 2b_0 \det S. \end{aligned}$$

From this (28) follows, using also (18).

Now, the string equation is an assertion about a special evolution. The string equation says that \(\{f,f^*\}_t=1\) for that kind of evolution for which \(\partial /\partial t \) means \(\partial / \partial M_0\), in other words in the case that \(\dot{M}_0=1\) and \(\dot{M}_k=0\) for \(k\ne 0\). By what has already been proved, a unique such evolution exists with f kept in the form (16) as long as \(\mathcal {R} (f',f'^*)\ne 0\).

Inserting \(\dot{M}_k=\delta _{k0}\) in (25) gives

$$\begin{aligned} \sum _{-n\le i,j \le n}v_{ki} u_{ij} \dot{a}_j=\delta _{k0}, \quad -n\le k\le n. \end{aligned}$$
(29)

It is easy to see from the structure of the matrix \(V=(v_{ki})\) that the 0th column of the inverse matrix \(V^{-1}\), which is singled out when \(V^{-1}\) is applied to the right member in (29), is simply the unit vector with components \(\delta _{k0}\). Therefore (29) is equivalent to

$$\begin{aligned} \sum _{-n\le j \le n} u_{ij} \dot{a}_j=\delta _{i0}, \quad -n\le i\le n. \end{aligned}$$
(30)

Inserting this into (27) shows that the string equation indeed holds. \(\square \)
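To see Theorem 1 at work numerically, the finite-difference sketch from Section 2 can be rerun on a non-univalent branch (our illustration): for \((a_0,a_1)=(1,1)\), i.e. \(f(\zeta )=\zeta +\zeta ^2\), the critical point \(\zeta =-1/2\) lies inside the unit disk, while \(f'\) and \(f'^*\) have no common zeros, so the theorem applies:

```python
# f = zeta + zeta^2 has (M_0, M_1) = (3, 1); starting Newton at (1, 1)
# keeps us on the non-univalent branch of polynomial solutions
zeta = np.exp(1j*np.linspace(0, 2*np.pi, 8, endpoint=False))
print(string_bracket(3.0, 1.0, zeta, guess=(1.0, 1.0)))   # again close to 1
```

Incidentally, the same pair \((M_0,M_1)=(3,1)\) also carries a univalent representative, namely the root of \(2a_1^3-3a_1+1=0\) with \(a_1\approx 0.366\), illustrating once more that the moments alone do not determine f.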

Example 3

To illustrate the above proof, and the general theory, we compute everything explicitly when \(n=2\), i.e., with

$$\begin{aligned} f(\zeta )=a_0 \zeta +a_1 \zeta ^2 +a_2 \zeta ^3. \end{aligned}$$

We shall keep the convention (22) in this example. Thus

$$\begin{aligned} f'(\zeta )&=b_0 +b_1 \zeta +b_2 \zeta ^2 = a_0+ 2a_1 \zeta +3a_2 \zeta ^2,\\ f^*(\zeta )&=a_0\zeta ^{-1} + {a}_{-1} \zeta ^{-2} + {a}_{-2} \zeta ^{-3}, \end{aligned}$$

for example. When equation (25) is written as a matrix equation it becomes (with zeros represented by blanks)

$$\begin{aligned} \begin{pmatrix} \dot{{M}}_{-2}\\ \dot{{M}}_{-1}\\ \dot{M}_{0}\\ \dot{M}_{1}\\ \dot{M}_{2}\\ \end{pmatrix} = \begin{pmatrix} a_0^2 & & & & \\ {a}_{-1}& a_0 & & & \\ & & 1 & & \\ & & & a_{0} & {a}_{1} \\ & & & & a_0^2 \\ \end{pmatrix} \begin{pmatrix} & & b_2 & & b_0 \\ & b_2 & b_1 & b_0 & {b}_{-1} \\ b_{2} & b_{1} & 2b_0 & {b}_{-1} & {b}_{-2} \\ b_1& b_0& {b}_{-1} & {b}_{-2} & \\ b_0& & {b}_{-2} & & \\ \end{pmatrix} \begin{pmatrix} \dot{{a}}_{-2}\\ \dot{{a}}_{-1}\\ \dot{a}_{0}\\ \dot{a}_{1}\\ \dot{a}_{2}\\ \end{pmatrix}. \end{aligned}$$
(31)

Denoting the two \(5\times 5\) matrices by V and U respectively it follows that the corresponding Jacobi determinant is

$$\begin{aligned} \frac{\partial ({M}_{-2}, {M}_{-1}, M_0, M_1, M_2)}{\partial ({a}_{-2},{a}_{-1}, a_0, a_1, a_{2} )} =\det V\cdot \det U=a_0^6 \cdot \det U. \end{aligned}$$

Here U can essentially be identified with the Sylvester matrix for the resultant \( \mathcal {R}(f',f'^*)\). To be precise,

$$\begin{aligned} \det U = 2b_0 \det S, \end{aligned}$$
(32)

where S is the classical Sylvester matrix associated to the two polynomials \(f'(\zeta )\) and \(\zeta ^2 f'^*(\zeta )\), namely

$$\begin{aligned} S= \begin{pmatrix} & b_2 & & b_0 \\ b_2 & b_1 & b_0 & {b}_{-1} \\ b_{1} & b_0 & {b}_{-1} & {b}_{-2} \\ b_0& & {b}_{-2} & \\ \end{pmatrix}. \end{aligned}$$

As promised in the proof above, we shall explain in this example the column operations leading from U to S, thereby proving (32) in the case \(n=2\) (the general case is similar). The matrix U appears in (31). Let \(U_{-2}\), \(U_{-1}\), \(U_{0}\), \(U_1\), \(U_2\) denote the columns of U. We make the following change of \(U_0\):

$$\begin{aligned} U_0 \mapsto \frac{1}{2}U_0-\frac{1}{2b_0} (b_{-2} U_{-2}+ b_{-1}U_{-1} -b_1 U_1 -b_2 U_2). \end{aligned}$$

The first term makes the determinant half as big as it was before, and the other terms do not affect the determinant at all. The new matrix is the \(5\times 5\) matrix

$$\begin{aligned} \begin{pmatrix} & & b_2 & & b_0 \\ & b_2 & b_1 & b_0 & {b}_{-1} \\ b_{2} & b_{1} & b_0 & {b}_{-1} & {b}_{-2} \\ b_1& b_0& & {b}_{-2} & \\ b_0& & & & \\ \end{pmatrix}, \end{aligned}$$

which has \(b_0\) in the lower left corner, with the complementary \(4\times 4\) block being exactly S above. From this (32) follows.
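The determinant identity (32) can be double-checked symbolically (our sketch; bm1 and bm2 stand for \(b_{-1}\) and \(b_{-2}\), treated as independent symbols):

```python
import sympy as sp

b0, b1, b2, bm1, bm2 = sp.symbols('b0 b1 b2 bm1 bm2')
U = sp.Matrix([
    [0,  0,  b2,   0,   b0 ],
    [0,  b2, b1,   b0,  bm1],
    [b2, b1, 2*b0, bm1, bm2],
    [b1, b0, bm1,  bm2, 0  ],
    [b0, 0,  bm2,  0,   0  ]])
S = sp.Matrix([
    [0,  b2, 0,   b0 ],
    [b2, b1, b0,  bm1],
    [b1, b0, bm1, bm2],
    [b0, 0,  bm2, 0  ]])
print(sp.expand(U.det() - 2*b0*S.det()) == 0)   # True, confirming (32)
```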

The string equation (20) becomes, in terms of coefficients and with \(\dot{a}_j\) interpreted as \(\partial a_j/\partial M_0\), the linear equation

$$\begin{aligned} \begin{pmatrix} a_0^2 & & & & \\ {a}_{-1}& a_0 & & & \\ & & 1 & & \\ & & & a_{0} & {a}_{1} \\ & & & & a_0^2 \\ \end{pmatrix} \begin{pmatrix} & & b_2 & & b_0 \\ & b_2 & b_1 & b_0 & {b}_{-1} \\ b_{2} & b_{1} & 2b_0 & {b}_{-1} & {b}_{-2} \\ b_1& b_0& {b}_{-1} & {b}_{-2} & \\ b_0& & {b}_{-2} & & \\ \end{pmatrix} \begin{pmatrix} \dot{{a}}_{-2}\\ \dot{{a}}_{-1}\\ \dot{a}_{0}\\ \dot{a}_{1}\\ \dot{a}_{2}\\ \end{pmatrix} = \begin{pmatrix} 0\\ 0\\ 1\\ 0\\ 0\\ \end{pmatrix}. \end{aligned}$$

Indeed, in view of (31) this equation characterizes the \(\dot{a}_i\) as those belonging to an evolution such that \(\dot{M}_0=1\), \(\dot{M}_k=0\) for \(k\ne 0\). As remarked in the step from (29) to (30), the first matrix, V, can actually be removed in this equation.