1 Introduction

Given a real 2-dimensional sequence

$$\begin{aligned} \beta \equiv \beta ^{(2k)}=\{\beta _{0,0},\beta _{1,0},\beta _{0,1},\ldots ,\beta _{2k,0},\beta _{2k-1,1},\ldots , \beta _{1,2k-1},\beta _{0,2k}\} \end{aligned}$$

of degree 2k and a closed subset K of \({\mathbb {R}}^2\), the truncated moment problem (K-TMP) supported on K for \(\beta ^{(2k)}\) asks to characterize the existence of a positive Borel measure \(\mu \) on \({\mathbb {R}}^2\) with support in K, such that

$$\begin{aligned} \beta _{i,j}=\int _{K}x^iy^j d\mu \quad \text {for}\quad i,j\in {\mathbb {Z}}_+,\;0\le i+j\le 2k. \end{aligned}$$
(1.1)

If such a measure exists, we say that \(\beta ^{(2k)}\) has a representing measure supported on K and \(\mu \) is its K-representing measure (K-rm).

In the degree-lexicographic order

$$\begin{aligned} 1 ,X,Y,X^2,XY,Y^2,\ldots ,X^k,X^{k-1}Y,\ldots ,Y^k \end{aligned}$$

of rows and columns, the moment matrix corresponding to \(\beta \) is equal to

$$\begin{aligned} {{\mathcal {M}}}(k)\equiv {{\mathcal {M}}}(k;\beta ):= \left( \begin{array}{cccc} {{\mathcal {M}}}[0,0](\beta ) &{} {{\mathcal {M}}}[0,1](\beta ) &{} \cdots &{} {{\mathcal {M}}}[0,k](\beta )\\ {{\mathcal {M}}}[1,0](\beta ) &{} {{\mathcal {M}}}[1,1](\beta ) &{} \cdots &{} {{\mathcal {M}}}[1,k](\beta )\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ {{\mathcal {M}}}[k,0](\beta ) &{} {{\mathcal {M}}}[k,1](\beta ) &{} \cdots &{} {{\mathcal {M}}}[k,k](\beta ) \end{array}\right) , \end{aligned}$$
(1.2)

where

$$\begin{aligned} {{\mathcal {M}}}[i,j](\beta ):= \left( \begin{array}{ccccc} \beta _{i+j,0} &{} \beta _{i+j-1,1} &{} \beta _{i+j-2,2} &{} \cdots &{} \beta _{i,j}\\ \beta _{i+j-1,1} &{} \beta _{i+j-2,2} &{} \beta _{i+j-3,3} &{} \cdots &{} \beta _{i-1,j+1}\\ \beta _{i+j-2,2} &{} \beta _{i+j-3,3} &{} \beta _{i+j-4,4} &{} \cdots &{} \beta _{i-2,j+2}\\ \vdots &{} \vdots &{} \vdots &{} \ddots &{}\vdots \\ \beta _{j,i} &{} \beta _{j-1,i+1} &{} \beta _{j-2,i+2} &{} \cdots &{} \beta _{0,i+j}\\ \end{array}\right) . \end{aligned}$$

Let \({\mathbb {R}}[x,y]_{\le k}:=\{p\in {\mathbb {R}}[x,y]:\deg p\le k\}\) stand for the set of real polynomials in the variables x, y of total degree at most k. For every \(p(x,y)=\sum _{i,j} a_{i,j}x^iy^j\in {\mathbb {R}}[x,y]_{\le k}\) we define its evaluation p(X, Y) on the columns of the matrix \({\mathcal {M}}(k)\) by replacing each capitalized monomial \(X^iY^j\) in \(p(X,Y)=\sum _{i,j} a_{i,j}X^iY^j\) by the column of \({\mathcal {M}}(k)\) indexed by this monomial. Then p(X, Y) is a vector from the linear span of the columns of \({\mathcal {M}}(k)\). If this vector is the zero vector, i.e., all its coordinates are equal to 0, we say p is a column relation of \({\mathcal {M}}(k)\). A column relation p is nontrivial if \(p\not \equiv 0\). We denote by \({{\mathcal {Z}}}(p):=\{(x,y)\in {\mathbb {R}}^2:p(x,y)=0\}\) the zero set of p. We say that the matrix \({\mathcal {M}}(k)\) is recursively generated (rg) if for \(p,q,pq\in {\mathbb {R}}[x,y]_{\le k}\) such that p is a column relation of \({\mathcal {M}}(k)\), it follows that pq is also a column relation of \({\mathcal {M}}(k)\). The matrix \({\mathcal {M}}(k)\) is p-pure if the only column relations of \({\mathcal {M}}(k)\) are those determined recursively by p. We call a sequence \(\beta \) p-pure if \({\mathcal {M}}(k)\) is p-pure.
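As a small numerical illustration of these notions (our own sketch, not part of the original development), one can build \({\mathcal {M}}(1)\) for a measure supported on the line \(x-y=0\) and observe the resulting column relation:

```python
import numpy as np

# Moments beta_{i,j}, 0 <= i+j <= 2, of the 3-atomic measure with unit masses
# at (0,0), (1,1), (2,2); all atoms lie on the line x - y = 0.
atoms = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
beta = {(i, j): sum(x**i * y**j for x, y in atoms)
        for i in range(3) for j in range(3) if i + j <= 2}

# Moment matrix M(1), rows/columns indexed by 1, X, Y (degree-lexicographic).
mons = [(0, 0), (1, 0), (0, 1)]
M1 = np.array([[beta[(a + c, b + d)] for (c, d) in mons] for (a, b) in mons])

# Evaluating p(X, Y) = X - Y on the columns yields the zero vector, i.e.,
# x - y is a nontrivial column relation of M(1), since supp(mu) lies in Z(x - y).
relation = M1[:, 1] - M1[:, 2]
```

As expected, \({\mathcal {M}}(1)\) is also positive semidefinite and has rank 2 here.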

A concrete solution to the TMP is a set of necessary and sufficient conditions for the existence of a K-representing measure \(\mu \), that can be tested in numerical examples. Among necessary conditions, \({\mathcal {M}}(k)\) must be positive semidefinite (psd) and rg [14, 25], and by [12], if the support \(\textrm{supp}(\mu )\) of \(\mu \) is a subset of \({{\mathcal {Z}}}(p)\) for a polynomial \(p\in {\mathbb {R}}[x,y]_{\le k}\), then p is a column relation of \({\mathcal {M}}(k)\). The bivariate K-TMP is concretely solved in the following cases:

(1) \(K={{\mathcal {Z}}}(p)\) for a polynomial p with \(1\le \deg p\le 2\). Assume that \(\deg p=2\). By applying an affine linear transformation it suffices to consider one of the canonical cases: \(x^2+y^2=1\), \(y=x^2\), \(xy=1\), \(xy=0\), \(y^2=y\). The case \(x^2+y^2=1\) is equivalent to the univariate trigonometric moment problem, solved in [13]. The other four cases were tackled in [13,14,15, 27] by applying the far-reaching flat extension theorem (FET) [12, Theorem 7.10] (see also [16, Theorem 2.19] and [34] for an alternative proof), which states that \(\beta ^{(2k)}\) admits a \(({{\,\textrm{rank}\,}}{\mathcal {M}}(k))\)-atomic rm if and only if \({\mathcal {M}}(k)\) is psd and admits a rank-preserving extension to a moment matrix \({\mathcal {M}}(k+1)\). For an alternative approach with shorter proofs compared to the original ones, which reduces the problem to the univariate setting, see [4, Section 6] (for \(xy=0\)), [42] (for \(y^2=y\)), [43] (for \(xy=1\)) and [44] (for \(y=x^2\)). For \(\deg p=1\) the solution is [17, Proposition 3.11] and uses the FET, but it can also be derived in the univariate setting (see [44, Remark 3.3.(4)]).

(2) \(K={\mathbb {R}}^2\), \(k=2\) and \({\mathcal {M}}(2)\) is invertible. This case was first solved nonconstructively, using convex geometry techniques, in [29] and later constructively in [22] by a novel rank reduction technique.

(3) K is one of \({{\mathcal {Z}}}(y-x^3)\) [26, 41], \({{\mathcal {Z}}}(y^2-x^3)\) [41], \({{\mathcal {Z}}}(y(y-a)(y-b))\) [38, 42], \(a,b\in {\mathbb {R}}{\setminus }\{0\}\), \(a\ne b\), or \({{\mathcal {Z}}}(xy^2-1)\) [43]. The main technique in [26] is the FET, while in [41,42,43] the reduction to the univariate TMP is applied.

(4) \({\mathcal {M}}(k)\) has a special feature called recursive determinateness [18] or extremality [19].

(5) \({\mathcal {M}}(3)\) satisfies symmetric cubic column relations which can only cause extremal moment problems. In order to satisfy the variety condition, another symmetric column relation must exist, and the solution was obtained by checking consistency [20].

(6) Non-extremal sextic TMPs with \({{\,\textrm{rank}\,}}{\mathcal {M}}(3)\le 8\) and with finite or infinite algebraic varieties [21].

(7) \({\mathcal {M}}(3)\) with reducible cubic column relations [39].

Solutions to the K-TMP that are not concrete in the sense of the definition from the previous paragraph are known in the cases \(K={{\mathcal {Z}}}(y-q(x))\) and \(K={{\mathcal {Z}}}(yq(x)-1)\), where \(q\in {\mathbb {R}}[x]\). [26, Section 6] gives a solution in terms of a bound on the degree m for which the existence of a positive extension \({\mathcal {M}}(k+m)\) of \({\mathcal {M}}(k)\) is equivalent to the existence of a rm. In [44] the bound on m is improved to \(m=\deg q-1\) for curves of the form \(y=q(x)\), \(\deg q\ge 3\), and to \(m=\ell +1\) for curves of the form \(yx^\ell =1\), \(\ell \in {\mathbb {N}}{\setminus }\{1\}\).

References to some classical work on the TMP are the monographs [2, 3, 33], while for recent developments in the area we refer the reader to [36]. Special cases of the TMP have also been considered in [6, 7, 24, 28, 31, 32], while [35] considers subspaces of the polynomial algebra and [8] the TMP for commutative \({\mathbb {R}}\)-algebras.

The motivation for this paper was to solve the TMP concretely on some reducible cubic curves, other than the case of three parallel lines solved in [42]. Applying an affine linear transformation we show that every such TMP is equivalent to the TMP on one of 8 canonical cases of reducible cubics of the form \(yc(x,y)=0\), where \(c\in {\mathbb {R}}[x,y]\), \(\deg c=2\). In this article we solve the TMP for the cases \(c(x,y)=ay+x^2+y^2\), \(a\in {\mathbb {R}}{\setminus } \{0\}\), and \(c(x,y)=x-y^2\), which we call the circular and the parabolic type, respectively. The main idea is to characterize the existence of a decomposition of \(\beta \) into the sum \(\beta ^{(\ell )}+\beta ^{(c)}\), where \(\beta ^{(\ell )}=\{\beta _{i,j}^{(\ell )}\}_{i,j\in {\mathbb {Z}}_+,\; 0\le i+j\le 2k}\) and \(\beta ^{(c)}=\{\beta _{i,j}^{(c)}\}_{i,j\in {\mathbb {Z}}_+,\; 0\le i+j\le 2k}\) admit an \({\mathbb {R}}\)-rm and a \({{\mathcal {Z}}}(c)\)-rm, respectively. Due to the form of the cubic \(yc(x,y)=0\), it turns out that all but two of the moments of \(\beta ^{(\ell )}\) and \(\beta ^{(c)}\) are already fixed by the original sequence; the undetermined ones are among \(\beta _{0,0}^{(\ell )}\), \(\beta _{1,0}^{(\ell )}\), \(\beta _{0,0}^{(c)}\), \(\beta _{1,0}^{(c)}\) in the circular type case and among \(\beta _{0,0}^{(\ell )}\), \(\beta _{2k,0}^{(\ell )}\), \(\beta _{0,0}^{(c)}\), \(\beta _{2k,0}^{(c)}\) in the parabolic type case. Then, by an involved analysis, the characterization of the existence of a decomposition \(\beta =\beta ^{(\ell )}+\beta ^{(c)}\) can be carried out in both cases. We also characterize the number of atoms in a minimal representing measure, i.e., a measure with the minimal number of atoms in its support.

1.1 Reader's Guide

The paper is organized as follows. In Sect. 2 we present some preliminary results needed to establish the main results of the paper. In Sect. 3 we show that to solve the TMP on every reducible cubic curve it is enough to consider 8 canonical type relations (see Proposition 3.1). In Sect. 4 we present the general procedure for solving the TMP on all but one of the canonical types and prove some results that apply to them. Then in Sects. 5 and 6 we specialize to the circular and the parabolic type relations and solve them concretely (see Theorems 5.1 and 6.1). In both cases we show, by numerical examples, that there are pure sequences \(\beta ^{(6)}\) with a psd \({\mathcal {M}}(3)\) but without a rm (see Examples 5.3 and 6.3).

2 Preliminaries

We write \({\mathbb {R}}^{n\times m}\) for the set of \(n\times m\) real matrices. For a matrix M we call the linear span of its columns the column space and denote it by \({{\mathcal {C}}}(M)\). The set of real symmetric matrices of size n will be denoted by \(S_n\). For a matrix \(A\in S_n\) the notation \(A\succ 0\) (resp. \(A\succeq 0\)) means A is positive definite (pd) (resp. positive semidefinite (psd)). We write \(\textbf{0}_{t_1,t_2}\) for a \(t_1\times t_2\) matrix with only zero entries and \(\textbf{0}_{t}=\textbf{0}_{t,t}\) for short, where \(t_1,t_2,t\in {\mathbb {N}}\). The notation \(E^{(\ell )}_{i,j}\), \(\ell \in {\mathbb {N}}\), stands for the usual \(\ell \times \ell \) coordinate matrix whose only nonzero entry, equal to 1, is at the position \((i,j)\).

In the rest of this section let \(k\in {\mathbb {N}}\) and \(\beta \equiv \beta ^{ (2k)}=\{\beta _{i,j}\}_{i,j\in {\mathbb {Z}}_+,\; 0\le i+j\le 2k}\) be a bivariate sequence of degree 2k.

2.1 Moment Matrix

Let \({\mathcal {M}}(k)\) be the moment matrix of \(\beta \) (see (1.2)). Let \(Q_1, Q_2\) be subsets of the set \(\{X^iY^j:i,j \in {\mathbb {Z}}_+,\; 0\le i+j\le k\}\). We denote by \(({\mathcal {M}}(k))_{Q_1,Q_2}\) the submatrix of \({\mathcal {M}}(k)\) consisting of the rows indexed by the elements of \(Q_1\) and the columns indexed by the elements of \(Q_2\). In case \(Q:=Q_1=Q_2\), we write \((\mathcal M(k))_{Q}:=({\mathcal {M}}(k))_{Q,Q}\) for short.

2.2 Affine Linear Transformations

The existence of representing measures is invariant under invertible affine linear transformations of the form

$$\begin{aligned} \phi (x,y)=(\phi _1(x,y),\phi _2(x,y)):=(a+bx+cy,d+ex+fy),\quad (x,y)\in {\mathbb {R}}^{2}, \end{aligned}$$
(2.1)

\(a,b,c,d,e,f\in {\mathbb {R}}\) with \(bf-ce \ne 0\). Namely, let \(L_{\beta }:{\mathbb {R}}[x,y]_{\le 2k}\rightarrow {\mathbb {R}}\) be the Riesz functional of the sequence \(\beta \), defined by

$$\begin{aligned} L_{\beta }(p):=\sum _{\begin{array}{c} i,j\in {\mathbb {Z}}_+,\\ 0\le i+j\le 2k \end{array}} a_{i,j}\beta _{i,j},\qquad \text {where}\quad p= \sum _{\begin{array}{c} i,j\in {\mathbb {Z}}_+,\\ 0\le i+j\le 2k \end{array}} a_{i,j}x^iy^j. \end{aligned}$$

We define \({\widetilde{\beta }}=\{{\widetilde{\beta }}_{i,j}\}_{i,j\in {\mathbb {Z}}_+,\; 0\le i+j\le 2k}\) by

$$\begin{aligned} {\widetilde{\beta }}_{i,j}=L_{\beta }(\phi _1(x,y)^i \cdot \phi _2(x,y)^j). \end{aligned}$$

By [14, Proposition 1.9], \(\beta \) admits a (r-atomic) K-rm if and only if \({\widetilde{\beta }}\) admits a (r-atomic) \(\phi (K)\)-rm. We write \({\widetilde{\beta }}=\phi (\beta )\) and \(\mathcal M(k;{\widetilde{\beta }})=\phi ({\mathcal {M}}(k;\beta ))\).
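To make the definition of \({\widetilde{\beta }}\) via the Riesz functional concrete, here is a NumPy sketch (our own illustration; the measure, the transformation \(\phi \) and all helper names are ours): \({\widetilde{\beta }}_{i,j}=L_{\beta }(\phi _1^i\phi _2^j)\) is computed by expanding \(\phi _1^i\phi _2^j\) and is checked against the moments of the pushed-forward measure.

```python
import numpy as np
from itertools import product

k = 1  # degree 2k = 2
atoms = [((0.0, 0.0), 1.0), ((1.0, 2.0), 0.5)]  # (point, mass)
beta = {(i, j): sum(m * x**i * y**j for (x, y), m in atoms)
        for i, j in product(range(2*k + 1), repeat=2) if i + j <= 2*k}

def poly_mul(p, q):
    """Multiply two bivariate polynomials given as {(i, j): coeff} dicts."""
    r = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            key = (i1 + i2, j1 + j2)
            r[key] = r.get(key, 0.0) + c1 * c2
    return r

def poly_pow(p, n):
    r = {(0, 0): 1.0}
    for _ in range(n):
        r = poly_mul(r, p)
    return r

def riesz(p):
    """Riesz functional L_beta of a polynomial of degree <= 2k."""
    return sum(c * beta[ij] for ij, c in p.items())

# phi(x, y) = (1 + 2x, x + y); bf - ce = 2*1 - 0*1 = 2 != 0, so phi is invertible.
phi1 = {(0, 0): 1.0, (1, 0): 2.0}
phi2 = {(1, 0): 1.0, (0, 1): 1.0}

beta_tilde = {(i, j): riesz(poly_mul(poly_pow(phi1, i), poly_pow(phi2, j)))
              for i, j in product(range(2*k + 1), repeat=2) if i + j <= 2*k}

# beta_tilde agrees with the moments of the pushed-forward measure phi_*(mu).
check = {(i, j): sum(m * (1 + 2*x)**i * (x + y)**j for (x, y), m in atoms)
         for (i, j) in beta_tilde}
```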

2.3 Generalized Schur Complements

Let

$$\begin{aligned} M=\left( \begin{array}{cc} A &{} B \\ C &{} D \end{array}\right) \in {\mathbb {R}}^{(n+m)\times (n+m)} \end{aligned}$$

be a real matrix where \(A\in {\mathbb {R}}^{n\times n}\), \(B\in {\mathbb {R}}^{n\times m}\), \(C\in {\mathbb {R}}^{m\times n}\) and \(D\in {\mathbb {R}}^{m\times m}\). The generalized Schur complement [45] of A (resp. D) in M is defined by

$$\begin{aligned} M/A=D-CA^{\dagger } B\quad (\text {resp.}\; M/D=A-BD^\dagger C), \end{aligned}$$

where \(A^\dagger \) (resp. \(D^\dagger \)) stands for the Moore–Penrose inverse of A (resp. D).

The following lemma will be frequently used in the proofs.

Lemma 2.1

Let \(n,m\in {\mathbb {N}}\) and

$$\begin{aligned} M=\left( \begin{array}{cc} A &{} B \\ B^{T} &{} C\end{array}\right) \in S_{n+m}, \end{aligned}$$

where \(A\in S_n\), \(B\in {\mathbb {R}}^{n\times m}\) and \(C\in S_m\). If \({{\,\textrm{rank}\,}}M={{\,\textrm{rank}\,}}A\), then the matrix equation

$$\begin{aligned} \begin{pmatrix} A\\ B^T \end{pmatrix} W = \begin{pmatrix} B\\ C \end{pmatrix}, \end{aligned}$$
(2.2)

where \(W\in {\mathbb {R}}^{n\times m}\), is solvable and the solutions are precisely the solutions of the matrix equation \(AW=B\). In particular, \(W=A^{\dagger }B\) satisfies (2.2).

Proof

The assumption \({{\,\textrm{rank}\,}}M={{\,\textrm{rank}\,}}A\) implies that

$$\begin{aligned} \begin{pmatrix} A\\ B^T \end{pmatrix}W = \begin{pmatrix} AW\\ B^TW \end{pmatrix} = \begin{pmatrix} B\\ C \end{pmatrix} \end{aligned}$$
(2.3)

for some \(W\in {\mathbb {R}}^{n\times m}\). So Eq. (2.2) is solvable. In particular, \(AW=B\). It remains to prove that any solution W to \(AW=B\) is also a solution to (2.3). Note that all the solutions of the equation \(A{\widetilde{W}}=B\) are

$$\begin{aligned} {\widetilde{W}}=A^\dagger B+Z, \end{aligned}$$
(2.4)

where each column of \(Z\in {\mathbb {R}}^{n\times m}\) is an arbitrary vector from \(\ker A\). So W satisfying (2.3) is also of the form \(A^\dagger B+Z_0\) for some \(Z_0\in {\mathbb {R}}^{n\times m}\) with columns belonging to \(\ker A\). We have that

$$\begin{aligned} C=B^TW=B^T(A^\dagger B+Z_0)=B^TA^\dagger B+B^TZ_0=B^TA^\dagger B, \end{aligned}$$
(2.5)

where we used the fact that each column of B belongs to \({{\mathcal {C}}}(A)\) and \(\ker (A)^\perp = {{\mathcal {C}}}(A).\) Replacing W with any \({\widetilde{W}}\) of the form (2.4) in the calculation (2.5) gives the same result, which proves the statement of the lemma. \(\square \)
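A quick numerical sanity check of Lemma 2.1 (our own example with a singular A, so that \(A^{\dagger }\ne A^{-1}\)):

```python
import numpy as np

# rank M = rank A forces the stacked system (2.2) to be solvable,
# with W = A^+ B (Moore-Penrose inverse) one solution.
A = np.array([[1.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0]])
M = np.block([[A, B], [B.T, C]])
assert np.linalg.matrix_rank(M) == np.linalg.matrix_rank(A)

W = np.linalg.pinv(A) @ B
top = A @ W          # should equal B
bottom = B.T @ W     # should equal C
```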

The following theorem is a characterization of psd \(2\times 2\) block matrices.

Theorem 2.2

[1] Let

$$\begin{aligned} M=\left( \begin{array}{cc} A &{} B \\ B^{T} &{} C\end{array}\right) \in S_{n+m} \end{aligned}$$

be a real symmetric matrix where \(A\in S_n\), \(B\in {\mathbb {R}}^{n\times m}\) and \(C\in S_m\). Then:

(1) The following conditions are equivalent:

    (a) \(M\succeq 0\).

    (b) \(C\succeq 0\), \({{\mathcal {C}}}(B^T)\subseteq {{\mathcal {C}}}(C)\) and \(M/C\succeq 0\).

    (c) \(A\succeq 0\), \({{\mathcal {C}}}(B)\subseteq {{\mathcal {C}}}(A)\) and \(M/A\succeq 0\).

(2) If \(M\succeq 0\), then

    $$\begin{aligned} {{\,\textrm{rank}\,}}M= {{\,\textrm{rank}\,}}A+{{\,\textrm{rank}\,}}M/A={{\,\textrm{rank}\,}}C+{{\,\textrm{rank}\,}}M/C. \end{aligned}$$
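The criterion (c) and the rank formula can be checked on a small singular psd block matrix (again our own example, using the generalized Schur complement from Sect. 2.3):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[3.0]])
M = np.block([[A, B], [B.T, C]])

# (c): A is psd, the column of B lies in C(A), and M/A = 2 >= 0, so M is psd.
M_over_A = C - B.T @ np.linalg.pinv(A) @ B
psd = np.linalg.eigvalsh(M).min() >= -1e-9

# (2): rank M = rank A + rank M/A.
rank_identity = (np.linalg.matrix_rank(M)
                 == np.linalg.matrix_rank(A) + np.linalg.matrix_rank(M_over_A))
```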

2.4 Extension Principle

Proposition 2.3

Let \({{\mathcal {A}}}\in S_n\) be positive semidefinite, Q a subset of the set \(\{1,\ldots ,n\}\) and \({{\mathcal {A}}}|_Q\) the restriction of \({{\mathcal {A}}}\) to the rows and columns from the set Q. If \({{\mathcal {A}}}|_Qv=0\) for a nonzero vector v, then \({{\mathcal {A}}}{\widehat{v}}=0\), where \(\widehat{v}\) is the vector whose only nonzero entries are in the rows from Q and whose restriction \(\widehat{v}|_Q\) to the rows from Q equals v.

Proof

See [25, Proposition 2.4] or [42, Lemma 2.4] for an alternative proof. \(\square \)

2.5 Partially Positive Semidefinite Matrices and Their Completions

A partial matrix \(A=(a_{i,j})_{i,j=1}^n\) is a matrix of real numbers \(a_{i,j}\in {\mathbb {R}}\), where some of the entries are not specified.

A partial symmetric matrix \(A=(a_{i,j})_{i,j=1}^n\) is partially positive semidefinite (ppsd) (resp. partially positive definite (ppd)) if the following two conditions hold:

(1) \(a_{i,j}\) is specified if and only if \(a_{j,i}\) is specified and \(a_{i,j}=a_{j,i}\).

(2) All fully specified principal minors of A are psd (resp. pd).

For \(n\in {\mathbb {N}}\) write \([n]:=\{1,2,\ldots ,n\}\). We denote by \(A_{Q_1,Q_2}\) the submatrix of \(A\in {\mathbb {R}}^{n\times n}\) consisting of the rows indexed by the elements of \(Q_1\subseteq [n]\) and the columns indexed by the elements of \(Q_2\subseteq [n]\). In case \(Q:=Q_1=Q_2\), we write \(A_{Q}:=A_{Q,Q}\) for short.

It is well-known that a ppsd matrix \(A(\textbf{x})\) of the form as in Lemma 2.4 below admits a psd completion (this follows from the fact that the corresponding graph is chordal; see, e.g., [5, 23, 30]). Since we will need additional information about the rank of the completion \(A(x_0)\) and the explicit interval of all possible \(x_0\) for our results, we give a proof of Lemma 2.4 based on the use of generalized Schur complements.

Lemma 2.4

Let \(A(\textbf{x})\) be a partially positive semidefinite symmetric matrix of size \(n\times n\) with the missing entries in the positions \((i,j)\) and \((j,i)\), \(1\le i<j\le n\). Let

$$\begin{aligned}&A_1 = (A(\textbf{x}))_{[n]\setminus \{i,j\}},\; a=(A(\textbf{x}))_{[n]\setminus \{i,j\},\{i\}},\;\\&b=(A(\textbf{x}))_{[n]\setminus \{i,j\},\{j\}},\; \alpha =(A(\textbf{x}))_{i,i},\; \gamma =(A(\textbf{x}))_{j,j}. \end{aligned}$$

Let

$$\begin{aligned} A_2=(A(\textbf{x}))_{[n]\setminus \{j\}} =\begin{pmatrix} A_1 &{}\quad a \\ a^T &{}\quad \alpha \end{pmatrix}\in S_{n-1},\qquad A_3=(A(\textbf{x}))_{[n]\setminus \{i\}} =\begin{pmatrix} A_1 &{}\quad b \\ b^T &{}\quad \gamma \end{pmatrix}\in S_{n-1}, \end{aligned}$$

and

$$\begin{aligned} x_{\pm }:=b^TA_1^{\dagger }a\pm \sqrt{(A_2/A_1)(A_3/A_1)}\in {\mathbb {R}}. \end{aligned}$$

Then:

(i) \(A(x_{0})\) is positive semidefinite if and only if \(x_0\in [x_-,x_+]\).

(ii)

    $$\begin{aligned} {{\,\textrm{rank}\,}}A(x_0)= \left\{ \begin{array}{rl} \max \big \{{{\,\textrm{rank}\,}}A_2, {{\,\textrm{rank}\,}}A_3\big \},&{} \text {for}\;x_0\in \{x_-,x_+\},\\[0.5em] \max \big \{{{\,\textrm{rank}\,}}A_2, {{\,\textrm{rank}\,}}A_3\big \}+1,&{} \text {for}\;x_0\in (x_-,x_+). \end{array}\right. \end{aligned}$$

(iii) The following statements are equivalent:

    (a) \(x_-=x_+\).

    (b) \(A_2/A_1=0\) or \(A_3/A_1=0\).

    (c) \({{\,\textrm{rank}\,}}A_2={{\,\textrm{rank}\,}}A_1\) or \({{\,\textrm{rank}\,}}A_3={{\,\textrm{rank}\,}}A_1\).

Proof

We write

$$\begin{aligned} A(\textbf{x})&= \begin{pmatrix} A_{11} &{} a_{12} &{} A_{13} &{} a_{14} &{} A_{15} \\[0.2em] (a_{12})^T &{} \alpha &{} (a_{23})^T &{} \textbf{x} &{} (a_{25})^T \\[0.2em] (A_{13})^T &{} a_{23} &{} A_{33} &{} a_{34} &{} A_{35}\\[0.2em] (a_{14})^T &{} \textbf{x} &{} (a_{34})^T &{} \gamma &{} (a_{45})^T \\[0.2em] (A_{15})^T &{} a_{25} &{} (A_{35})^T &{} a_{45} &{} A_{55} \end{pmatrix}\\&\in \begin{pmatrix} S_{i-1} &{} {\mathbb {R}}^{(i-1)\times 1} &{} {\mathbb {R}}^{(i-1)\times (j-i-1)} &{} {\mathbb {R}}^{(i-1)\times 1} &{} {\mathbb {R}}^{(i-1)\times (n-j)}\\ {\mathbb {R}}^{1\times (i-1)} &{} {\mathbb {R}}&{} {\mathbb {R}}^{1\times (j-i-1)} &{} {\mathbb {R}}&{} {\mathbb {R}}^{1\times (n-j)}\\ {\mathbb {R}}^{(j-i-1)\times (i-1)} &{} {\mathbb {R}}^{(j-i-1)\times 1} &{} S_{j-i-1} &{} {\mathbb {R}}^{(j-i-1)\times 1} &{} {\mathbb {R}}^{(j-i-1)\times (n-j)}\\ {\mathbb {R}}^{1\times (i-1)} &{} {\mathbb {R}}&{} {\mathbb {R}}^{1\times (j-i-1)} &{} {\mathbb {R}}&{} {\mathbb {R}}^{1\times (n-j)}\\ {\mathbb {R}}^{(n-j)\times (i-1)} &{} {\mathbb {R}}^{(n-j)\times 1} &{} {\mathbb {R}}^{(n-j)\times (j-i-1)} &{} {\mathbb {R}}^{(n-j)\times 1} &{} S_{n-j} \end{pmatrix}. \end{aligned}$$

Let P be a permutation matrix, which changes the order of columns to

$$\begin{aligned} 1,2,\ldots ,i-1,i+1,\ldots ,j-1,j+1,\ldots ,n,i,j. \end{aligned}$$

Then

$$\begin{aligned} P^TA(\textbf{x})P = \begin{pmatrix} A_{11} &{} A_{13} &{} A_{15} &{} a_{12} &{} a_{14} \\[0.2em] (A_{13})^T &{} A_{33} &{} A_{35} &{} a_{23} &{} a_{34} \\[0.2em] (A_{15})^T &{} (A_{35})^T &{} A_{55} &{} a_{25} &{} a_{45}\\[0.2em] (a_{12})^T &{} (a_{23})^T &{} (a_{25})^T &{} \alpha &{} \textbf{x} \\[0.2em] (a_{14})^T &{} (a_{34})^T &{} (a_{45})^T &{} \textbf{x} &{} \gamma \end{pmatrix} \end{aligned}$$

Note that

$$\begin{aligned} P^TA(\textbf{x})P = \begin{pmatrix} A_{1} &{}\quad a &{}\quad b\\[0.2em] a^T &{}\quad \alpha &{}\quad \textbf{x} \\[0.2em] b^T &{}\quad \textbf{x} &{}\quad \gamma \end{pmatrix} \qquad \text {and} \qquad P^TA(\textbf{x})P\succeq 0\; \Leftrightarrow \; A(\textbf{x})\succeq 0. \end{aligned}$$
(2.6)

Lemma 2.4 with the missing entries in the positions \((n-1,n)\) and \((n,n-1)\) was proved in [41, Lemma 2.11] using computations with generalized Schur complements under one additional assumption:

$$\begin{aligned} A_1\;\text {is invertible}\quad \text {or} \quad {{\,\textrm{rank}\,}}A_1={{\,\textrm{rank}\,}}A_2. \end{aligned}$$
(2.7)

Here we explain why the assumption (2.7) can be removed from [41, Lemma 2.11]. The proof of [41, Lemma 2.11] is separated into two cases: \(A_2/A_1>0\) and \(A_2/A_1=0\). The case \(A_2/A_1=0\) does not use (2.7). Assume now that \(A_2/A_1>0\) or, equivalently, \({{\,\textrm{rank}\,}}A_2>{{\,\textrm{rank}\,}}A_1\). Invertibility of \(A_1\) (and, by \(A_2/A_1>0\), also of \(A_2\)) is used in the proof of [41, Lemma 2.11] for the application of the quotient formula [10]

$$\begin{aligned} (A(x)/A_2)=\big (A(x)/A_1\big )\big /\big (A_2/A_1\big ), \end{aligned}$$
(2.8)

where

$$\begin{aligned} A(x)/A_1= \begin{pmatrix} A_2/A_1 &{} \begin{pmatrix} A_1 &{} b \\ a^T &{} x \end{pmatrix}\big /A_1\\ \begin{pmatrix} A_1 &{} a \\ b^T &{} x \end{pmatrix}\big /A_1&A_3/A_1 \end{pmatrix} \end{aligned}$$

However, the formula (2.8) has been generalized [9, Theorem 4] to noninvertible \(A_1\), \(A_2\), where all Schur complements are the generalized ones, under the conditions:

$$\begin{aligned} \begin{pmatrix}b&x\end{pmatrix}^T \in {{\mathcal {C}}}(A_2) \qquad \text {and}\qquad a \in {{\mathcal {C}}}(A_1). \end{aligned}$$
(2.9)

So if we show that the conditions (2.9) hold, the same proof as in [41, Lemma 2.11] can be applied in the case \(A_1\) is singular. From \(A_2\) (resp. \(A_3\)) being psd, \(a \in {{\mathcal {C}}}(A_1)\) (resp. \(b\in {{\mathcal {C}}}(A_1)\)) follows by Theorem 2.2, used for \((M,A):=(A_2,A_1)\) (resp. \((M,A):=(A_3,A_1)\)). The assumption \(A_2/A_1>0\) implies that \(\begin{pmatrix}a&\alpha \end{pmatrix}^T \notin {{\mathcal {C}}}(\begin{pmatrix} A_1&a^T \end{pmatrix}^T)\). Since \(a \in {{\mathcal {C}}}(A_1)\), it follows that \(\begin{pmatrix}0&1\end{pmatrix}^T \in {{\mathcal {C}}}(A_2)\). Hence, \(\begin{pmatrix}b&x\end{pmatrix}^T \in {{\mathcal {C}}}(A_2)\) for every \(x\in {\mathbb {R}}\), which concludes the proof of (2.9). \(\square \)
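The interval \([x_-,x_+]\) of Lemma 2.4 is easy to verify numerically; the following is our own \(3\times 3\) example with the \((1,3)\) entry missing:

```python
import numpy as np

# Partial matrix [[2, 1, x], [1, 1, 1], [x, 1, 2]] with i = 1, j = 3.
A1 = np.array([[1.0]])                  # rows/columns [n] \ {i, j}
a = np.array([[1.0]])                   # column i restricted to those rows
b = np.array([[1.0]])                   # column j restricted to those rows
alpha, gamma = 2.0, 2.0                 # diagonal entries (i, i) and (j, j)

s2 = alpha - (a.T @ np.linalg.pinv(A1) @ a).item()   # A2/A1
s3 = gamma - (b.T @ np.linalg.pinv(A1) @ b).item()   # A3/A1
x_mid = (b.T @ np.linalg.pinv(A1) @ a).item()
x_minus = x_mid - np.sqrt(s2 * s3)
x_plus = x_mid + np.sqrt(s2 * s3)

def A(x):
    """The completed matrix A(x)."""
    return np.array([[alpha, 1.0, x], [1.0, 1.0, 1.0], [x, 1.0, gamma]])

def min_eig(x):
    return np.linalg.eigvalsh(A(x)).min()
```

Here \(x_{\pm }=1\pm 1\): the completions are psd exactly for \(x_0\in [0,2]\), with rank 2 at the endpoints and rank 3 inside, matching (i) and (ii).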

2.6 Hamburger TMP

Let \(k\in {\mathbb {N}}\). For \(v=(v_0,\ldots ,v_{2k} )\in {\mathbb {R}}^{2k+1}\) we define the corresponding Hankel matrix as

$$\begin{aligned} A_{v}:=\left( v_{i+j} \right) _{i,j=0}^{k}= \left( \begin{array}{cccc} v_0 &{} v_1 &{} \cdots &{} v_{k}\\ v_1 &{} v_2 &{} \cdots &{} v_{k+1}\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ v_{k} &{} v_{k+1} &{} \cdots &{} v_{2k} \end{array}\right) \in S_{k+1}. \end{aligned}$$

(2.10)

We denote by \(\mathbf {v_j}:=\left( v_{j+\ell } \right) _{\ell =0}^k\) the \((j+1)\)-th column of \(A_{v}\), \(0\le j\le k\), i.e.,

$$\begin{aligned} A_{v}=\left( \begin{array}{ccc} \mathbf {v_0}&\cdots&\mathbf {v_k} \end{array}\right) . \end{aligned}$$

As in [11], the rank of v, denoted by \({{\,\textrm{rank}\,}}v\), is defined by

$$\begin{aligned} {{\,\textrm{rank}\,}}v= \left\{ \begin{array}{rl} k+1,&{} \text {if } A_{v} \text { is nonsingular},\\ \min \left\{ i:\mathbf {v_i}\in {{\,\textrm{span}\,}}\{\mathbf {v_0},\ldots ,\mathbf {v_{i-1}}\}\right\} ,&{} \text {if } A_{v} \text { is singular}. \end{array}\right. \end{aligned}$$

For \(m\le k\) we denote the upper left-hand corner \(\left( v_{i+j} \right) _{i,j=0}^m\in S_{m+1}\) of \(A_{v}\) of size \(m+1\) by \(A_{v}(m)\). A sequence v is called positively recursively generated (prg) if for \(r={{\,\textrm{rank}\,}}v\) the following two conditions hold:

  • \(A_v(r-1)\succ 0\).

  • If \(r<k+1\), denoting

    $$\begin{aligned} (\varphi _0,\ldots ,\varphi _{r-1}):=A_{v}(r-1)^{-1}(v_r,\ldots ,v_{2r-1})^{T}, \end{aligned}$$
    (2.11)

    the equality

    $$\begin{aligned} v_j=\varphi _0v_{j-r}+\cdots +\varphi _{r-1}v_{j-1} \end{aligned}$$
    (2.12)

    holds for \(j=r,\ldots ,2k\).
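A hedged NumPy sketch of these definitions (our own example: v are the moments of \(\delta _0+\delta _1\), so \(v_0=2\) and \(v_i=1\) for \(i\ge 1\), with \(2k=4\)):

```python
import numpy as np

k = 2
v = np.array([2.0, 1.0, 1.0, 1.0, 1.0])
Av = np.array([[v[i + j] for j in range(k + 1)] for i in range(k + 1)])

# rank v: first column index whose column lies in the span of the previous ones.
r = next((i for i in range(1, k + 1)
          if np.linalg.matrix_rank(Av[:, :i + 1]) == np.linalg.matrix_rank(Av[:, :i])),
         k + 1)

Avr = Av[:r, :r]                               # A_v(r-1)
phi = np.linalg.solve(Avr, v[r:2 * r])         # the coefficients (2.11)
# The recursion (2.12): v_j = phi_0 v_{j-r} + ... + phi_{r-1} v_{j-1}.
recursion_ok = all(np.isclose(v[j], phi @ v[j - r:j]) for j in range(r, 2 * k + 1))
```

Here \(r=2\), \(A_v(1)\succ 0\) and the recursion holds for all \(j\), so v is prg.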

The solution to the \({\mathbb {R}}\)-TMP is the following.

Theorem 2.5

[11, Theorems 3.9–3.10] For \(k\in {\mathbb {N}}\) and \(v=(v_0,\ldots ,v_{2k})\in {\mathbb {R}}^{2k+1}\) with \(v_0>0\), the following statements are equivalent:

(1) There exists an \({\mathbb {R}}\)-representing measure for v.

(2) There exists a \(({{\,\textrm{rank}\,}}A_v)\)-atomic \({\mathbb {R}}\)-representing measure for v.

(3) \(A_v\) is positive semidefinite and one of the following holds:

    (a) \(A_v(k-1)\) is positive definite.

    (b) \({{\,\textrm{rank}\,}}A_v(k-1)={{\,\textrm{rank}\,}}A_v\).

(4) v is positively recursively generated.

2.7 TMP on the Unit Circle

The solution to the \({{\mathcal {Z}}}(x^2+y^2-1)\)-TMP is the following.

Theorem 2.6

[13, Theorem 2.1] Let \(p(x,y)=x^2+y^2-1\) and \(\beta :=\beta ^{(2k)}=(\beta _{i,j})_{i,j\in {\mathbb {Z}}_+,i+j\le 2k}\), where \(k\ge 2\). Then the following statements are equivalent:

(1) \(\beta \) has a \({\mathcal {Z}}(p)\)-representing measure.

(2) \(\beta \) has a \(({{\,\textrm{rank}\,}}{\mathcal {M}}(k))\)-atomic \({\mathcal {Z}}(p)\)-representing measure.

(3) \({\mathcal {M}}(k)\) is positive semidefinite and the relations \(\beta _{2+i,j}+\beta _{i,2+j}=\beta _{i,j}\) hold for every \(i,j\in {\mathbb {Z}}_+\) with \(i+j\le 2k-2\).
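The relations in (3) simply express \(x^2+y^2=1\) on the support; a quick check on our own 4-atomic example on the circle:

```python
import numpy as np

# Moments of four unit-mass atoms on the unit circle x^2 + y^2 = 1.
k = 2
thetas = [0.0, 1.0, 2.5, 4.0]
atoms = [(np.cos(t), np.sin(t)) for t in thetas]
beta = {(i, j): sum(x**i * y**j for x, y in atoms)
        for i in range(2 * k + 1) for j in range(2 * k + 1) if i + j <= 2 * k}

# beta_{2+i,j} + beta_{i,2+j} = beta_{i,j} for all i + j <= 2k - 2.
relations_hold = all(np.isclose(beta[(i + 2, j)] + beta[(i, j + 2)], beta[(i, j)])
                     for (i, j) in beta if i + j <= 2 * k - 2)
```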

2.8 Parabolic TMP

We will need the following solution to the parabolic TMP (see [44, Theorem 3.7]).

Theorem 2.7

Let \(p(x,y)=x-y^2\) and \(\beta :=\beta ^{(2k)}=(\beta _{i,j})_{i,j\in {\mathbb {Z}}_+,i+j\le 2k}\), where \(k\ge 2\). Let

$$\begin{aligned} {{\mathcal {B}}}=\{ 1 ,Y,X,XY,X^2,X^2Y,\ldots , X^i,X^iY,\ldots ,X^{k-1},X^{k-1}Y,X^k\}. \end{aligned}$$

Then the following statements are equivalent:

(1) \(\beta \) has a \({\mathcal {Z}}(p)\)-representing measure.

(2) \(\beta \) has a \(({{\,\textrm{rank}\,}}{\mathcal {M}}(k))\)-atomic \({\mathcal {Z}}(p)\)-representing measure.

(3) \({\mathcal {M}}(k)\) is positive semidefinite, the relations \(\beta _{1+i,j}=\beta _{i,2+j}\) hold for every \(i,j\in {\mathbb {Z}}_+\) with \(i+j\le 2k-2\) and one of the following statements holds:

    (a) \(\big ({\mathcal {M}}(k)\big )_{{{\mathcal {B}}}\setminus \{X^k\}}\) is positive definite.

    (b) \({{\,\textrm{rank}\,}}\big ({\mathcal {M}}(k)\big )_{{{\mathcal {B}}}{\setminus } \{X^k\}} = {{\,\textrm{rank}\,}}{\mathcal {M}}(k).\)

(4) The relations \(\beta _{1+i,j}=\beta _{i,2+j}\) hold for every \(i,j\in {\mathbb {Z}}_+\) with \(i+j\le 2k-2\) and \(\gamma =(\gamma _0,\ldots ,\gamma _{4k})\), defined by \(\gamma _i=\beta _{\lfloor \frac{i}{2}\rfloor ,i\; \textrm{mod}\; 2}\), admits an \({\mathbb {R}}\)-representing measure.
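The sequence \(\gamma \) in (4) is just the univariate moment sequence of the y-marginal: on \(x=y^2\) one has \(\beta _{i,j}=\int y^{2i+j}d\mu \), so \(\gamma _i=\beta _{\lfloor i/2\rfloor ,i\,\textrm{mod}\,2}=\int y^i d\mu \). A sketch on our own 3-atomic example:

```python
import numpy as np

# Atoms (y^2, y) on the parabola x = y^2, unit masses, 2k = 4.
k = 2
ys = [-1.0, 0.5, 2.0]
atoms = [(y * y, y) for y in ys]
beta = {(i, j): sum(x**i * y**j for x, y in atoms)
        for i in range(2 * k + 1) for j in range(2 * k + 1) if i + j <= 2 * k}

# The parabolic relations beta_{1+i,j} = beta_{i,2+j} for i + j <= 2k - 2.
parabolic = all(np.isclose(beta[(i + 1, j)], beta[(i, j + 2)])
                for (i, j) in beta if i + j <= 2 * k - 2)

# gamma_i = beta_{floor(i/2), i mod 2} recovers the moments of the y-marginal.
gamma = [beta[(i // 2, i % 2)] for i in range(4 * k + 1)]
```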

Remark 2.8

The equivalence (3)\(\Leftrightarrow \)(4) is part of the proof of [44, Theorem 3.7].

3 TMP on Reducible Cubics: Case Reduction

In this section we show that to solve the TMP on reducible cubic curves it suffices, after applying an affine linear transformation, to solve the TMP on 8 canonical forms of curves.

Proposition 3.1

Let \(k\in {\mathbb {N}}\) and \(\beta := \beta ^{(2k)}= (\beta _{i,j})_{i,j\in {\mathbb {Z}}_+,i+j\le 2k}\). Assume \({\mathcal {M}}(k;\beta )\) does not satisfy any nontrivial column relation between columns indexed by monomials of degree at most 2, but it satisfies a column relation \(p(X,Y)=\textbf{0}\), where \(p\in {\mathbb {R}}[x,y]\) is a reducible polynomial with \(\deg p=3\). If \(\beta \) admits a representing measure, then there exists an invertible affine linear transformation \(\phi \) of the form (2.1) such that the moment matrix \(\phi \big ({\mathcal {M}}(k;\beta )\big )\) satisfies a column relation \(q(X,Y)=\textbf{0}\), where q has one of the following forms:

Parallel lines type: \(q(x,y)=y(a+y)(b+y)\), \(a,b\in {\mathbb {R}}\setminus \{0\}\), \(a\ne b\).

Circular type: \(q(x,y)=y(ay+x^2+y^2)\), \(a\in {\mathbb {R}}\setminus \{0\}\).

Parabolic type: \(q(x,y)=y(x-y^2)\).

Hyperbolic type 1: \(q(x,y)=y(1-xy)\).

Hyperbolic type 2: \(q(x,y)=y(x+y+axy)\), \(a\in {\mathbb {R}}\setminus \{0\}\).

Hyperbolic type 3: \(q(x,y)=y(ay+x^2-y^2)\), \(a\in {\mathbb {R}}\).

Intersecting lines type: \(q(x,y)=yx(y+1)\).

Mixed type: \(q(x,y)=y(1+ay+bx^2+cy^2)\), \(a,b,c\in {\mathbb {R}}\), \(b\ne 0\).

Remark 3.2

The name of the types of the form q in Proposition 3.1 comes from the type of the conic \(\frac{q(x,y)}{y}=0\). The conic \(x+y+axy=0\), \(a\in {\mathbb {R}}{\setminus } \{0\}\), is a hyperbola, since the discriminant \(a^2\) is positive. Similarly, the conic \(ay+x^2-y^2=0\), \(a\in {\mathbb {R}}\), is a hyperbola, since its discriminant is equal to 4. Clearly, the conic \(ay+x^2+y^2=0\), \(a\in {\mathbb {R}}{\setminus }\{0\}\), is a circle with center \((0,- \frac{a}{2})\) and radius \(\frac{|a|}{2}\).
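The claim about the circular type is just completing the square:

```latex
\[
  ay + x^{2} + y^{2} = 0
  \;\Longleftrightarrow\;
  x^{2} + \Bigl(y + \tfrac{a}{2}\Bigr)^{2} = \tfrac{a^{2}}{4},
\]
% a circle with center (0, -a/2) and radius |a|/2, nondegenerate since a != 0.
```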

Now we prove Proposition 3.1.

Proof of Proposition 3.1

Since p(xy) is reducible, it is of the form \(p=p_1p_2\), where

$$\begin{aligned} p_1(x,y)&=a_0+a_1x+a_2y \quad \text {with } a_i\in {\mathbb {R}},\; (a_1,a_2)\ne (0,0),\\ p_2(x,y)&=b_0+b_1x+b_2y+b_3x^2+b_4xy+b_5y^2 \quad \text {with } b_i\in {\mathbb {R}}, \; (b_3,b_4,b_5)\ne (0,0,0). \end{aligned}$$

Without loss of generality we can assume that \(a_2\ne 0\), since otherwise we apply the affine linear transformation (alt) \((x,y)\mapsto (y,x)\) to exchange the roles of x and y. Since \(a_2\ne 0\), the alt

$$\begin{aligned} \phi _1(x,y)=(x,a_0+a_1x+a_2y) \end{aligned}$$

is invertible and hence:

$$\begin{aligned} \begin{aligned}&\text {A sequence } \phi _1(\beta ) \text { has a moment matrix } \phi _1\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the column relation}\\&c_0Y+c_1XY+c_2Y^2+c_3X^2Y+c_4XY^2+c_5Y^3=\textbf{0} \text { with }c_i\in {\mathbb {R}},\; (c_3,c_4,c_5)\ne (0,0,0). \end{aligned} \end{aligned}$$
(3.1)

We separate two cases according to the value of \(c_3\).

Case 1: \(c_3=0\). In this case (3.1) becomes

$$\begin{aligned} \begin{aligned}&\text {A sequence } \phi _1(\beta ) \text { has a moment matrix } \phi _1\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the column relation}\\&c_0Y+c_1XY+c_2Y^2+c_4XY^2+c_5Y^3=\textbf{0} \text { with }c_i\in {\mathbb {R}},\; (c_4,c_5)\ne (0,0). \end{aligned} \end{aligned}$$
(3.2)

If \(c_0=c_1=c_2=0\), then (3.2) is equal to \(c_4XY^2+c_5Y^3=\textbf{0}\). Since by assumption \(\beta \) and hence \(\phi _1(\beta )\) admit a rm, supported on

$$\begin{aligned} {{\mathcal {Z}}}(y^2(c_4x+c_5y))={{\mathcal {Z}}}(y(c_4x+c_5y)), \end{aligned}$$

it follows by [12] that \(c_4 XY+c_5 Y^2=\textbf{0}\) is a nontrivial column relation in \(\phi _1\big ({\mathcal {M}}(k;\beta )\big )\). Hence, \({\mathcal {M}}(k;\beta )\) also satisfies a nontrivial column relation between columns indexed by monomials of degree at most 2, which contradicts the assumption of the proposition. Therefore \((c_0,c_1,c_2)\ne (0,0,0).\)

Case 1.1: \(c_0\ne 0\). Dividing the relation in (3.2) by \(c_0\), we get:

$$\begin{aligned} \begin{aligned}&\text {A sequence } \phi _1(\beta ) \text { has a moment matrix } \phi _1\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the column relation}\\&Y+ {\widetilde{c}}_1XY+ {\widetilde{c}}_2Y^2+ {\widetilde{c}}_4XY^2+ {\widetilde{c}}_5Y^3 =\textbf{0} \text { with }{\widetilde{c}}_i\in {\mathbb {R}},\; ({\widetilde{c}}_4,{\widetilde{c}}_5)\ne (0,0). \end{aligned} \end{aligned}$$
(3.3)

Case 1.1.1: \({\widetilde{c}}_1=0\). In this case (3.3) is equivalent to:

$$\begin{aligned} \begin{aligned}&\text {A sequence } \phi _1(\beta ) \text { has a moment matrix } \phi _1\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the column relation}\\&Y+ {\widetilde{c}}_2Y^2+ {\widetilde{c}}_4XY^2+ {\widetilde{c}}_5Y^3 =\textbf{0} \text { with }{\widetilde{c}}_i\in {\mathbb {R}},\; (\widetilde{c}_4,{\widetilde{c}}_5)\ne (0,0). \end{aligned} \end{aligned}$$
(3.4)

Case 1.1.1.1: \({\widetilde{c}}_4=0\). In this case (3.4) is equivalent to

$$\begin{aligned} \begin{aligned}&\text {A sequence } \phi _1(\beta ) \text { has a moment matrix } \phi _1\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the column relation}\\&Y+ {\widetilde{c}}_2Y^2+ {\widetilde{c}}_5Y^3 =\textbf{0} \text { with }{\widetilde{c}}_2\in {\mathbb {R}},{\widetilde{c}}_5\in {\mathbb {R}}\setminus \{0\}. \end{aligned} \end{aligned}$$
(3.5)

The quadratic equation \(1+{\widetilde{c}}_2 y+ {\widetilde{c}}_5 y^2=0\) must have two distinct nonzero real solutions, since otherwise \({{\mathcal {Z}}}(y(1+{\widetilde{c}}_2y+ {\widetilde{c}}_5y^2))\) is a union of at most two parallel lines. In that case it follows by [12] that there is a nontrivial column relation in \({\mathcal {M}}(k;\beta )\) between columns indexed by monomials of degree at most 2, which contradicts the assumption of the proposition. So we have the parallel lines type relation from the proposition.

Case 1.1.1.2: \({\widetilde{c}}_4\ne 0\). In this case the alt

$$\begin{aligned} \phi _2(x,y)= \Big ( -{\widetilde{c}}_2 - {\widetilde{c}}_4 x - {\widetilde{c}}_5 y, y \Big ) \end{aligned}$$

is invertible and applying it to \(\phi _1(\beta )\), we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _2\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _2\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the hyperbolic type 1 relation from the proposition.} \end{aligned} \end{aligned}$$

Case 1.1.2: \({\widetilde{c}}_1\ne 0\). We apply the alt

$$\begin{aligned} \phi _3(x,y)=(1+{\widetilde{c}}_1 x,y) \end{aligned}$$

to \(\phi _1(\beta )\) and obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _3\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _3\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the column relation } XY+ {\widehat{c}}_2Y^2+ {\widehat{c}}_4XY^2+ {\widehat{c}}_5Y^3 =\textbf{0} \text { with }{\widehat{c}}_i\in {\mathbb {R}},\; ({\widehat{c}}_4,{\widehat{c}}_5)\ne (0,0). \end{aligned} \end{aligned}$$
(3.6)

Case 1.1.2.1: \({\widehat{c}}_4\ne 0\). We apply the alt

$$\begin{aligned} \phi _4(x,y)= \Big (x+ \frac{{\widehat{c}}_5}{{\widehat{c}}_4}y,y\Big ) \end{aligned}$$

to \((\phi _3\circ \phi _1)(\beta )\) and obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _4\circ \phi _3\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _4\circ \phi _3\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the column relation } XY+ \breve{c}_2 Y^2+ {\widehat{c}}_4XY^2 =\textbf{0} \text { with } \breve{c}_2, {\widehat{c}}_4 \in {\mathbb {R}},\; {\widehat{c}}_4\ne 0. \end{aligned} \end{aligned}$$
(3.7)

Case 1.1.2.1.1: \(\breve{c}_2= 0\). In this case the relation in (3.7) is of the form

$$\begin{aligned} XY+ {\widehat{c}}_4XY^2 =\textbf{0} \quad \text {with } {\widehat{c}}_4 \in {\mathbb {R}}\setminus \{0\}. \end{aligned}$$

Applying the alt

$$\begin{aligned} \phi _5(x,y)=(x,{\widehat{c}}_4 y) \end{aligned}$$

to \((\phi _4\circ \phi _3\circ \phi _1)(\beta )\) we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _5\circ \phi _4\circ \phi _3\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _5\circ \phi _4\circ \phi _3\circ \phi _1)\\ {}&\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the intersecting lines type relation from the proposition.} \end{aligned} \end{aligned}$$

Case 1.1.2.1.2: \(\breve{c}_2\ne 0\). We apply the alt

$$\begin{aligned} \phi _6(x,y)=(x,\breve{c}_2y) \end{aligned}$$

to \((\phi _4\circ \phi _3\circ \phi _1)(\beta )\) and obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _6\circ \phi _4\circ \phi _3\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _6\circ \phi _4\circ \phi _3\circ \phi _1)\\ {}&\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the hyperbolic type 2 relation in the proposition.} \end{aligned} \end{aligned}$$

Case 1.1.2.2: \({\widehat{c}}_4= 0\). In this case (3.6) is equivalent to:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _3\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _3\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the column relation } XY+ {\widehat{c}}_2Y^2+ {\widehat{c}}_5Y^3 =\textbf{0} \text { with }{\widehat{c}}_2,{\widehat{c}}_5\in {\mathbb {R}},\;\\ {}&{\widehat{c}}_5\ne 0. \end{aligned} \end{aligned}$$
(3.8)

Case 1.1.2.2.1: \({\widehat{c}}_2=0\). Applying the alt

$$\begin{aligned} \phi _7(x,y)=(x,- {\widehat{c}}_5 y), \end{aligned}$$

to \((\phi _3\circ \phi _1)(\beta )\) we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _7\circ \phi _3\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _7\circ \phi _3\circ \phi _1)\\ {}&\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the parabolic type relation in the proposition.} \end{aligned} \end{aligned}$$

Case 1.1.2.2.2: \({\widehat{c}}_2\ne 0\). Applying the alt

$$\begin{aligned} \phi _8(x,y)=(x,{\widehat{c}}_2 y) \end{aligned}$$

to \((\phi _3\circ \phi _1)(\beta )\), we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _8\circ \phi _3\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _8\circ \phi _3\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big )\\&\text {satisfying the column relation } XY+ Y^2+ \frac{{\widehat{c}}_5}{{\widehat{c}}_2^{\,2}}Y^3 =\textbf{0} \text { with } {\widehat{c}}_2,{\widehat{c}}_5\in {\mathbb {R}}\setminus \{0\}. \end{aligned} \end{aligned}$$
(3.9)

Further on, the relation in (3.9) is equivalent to

(3.10)

Finally, applying the alt

to \((\phi _8\circ \phi _3\circ \phi _1)(\beta )\), we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _9\circ \phi _8\circ \phi _3\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _9\circ \phi _8\circ \phi _3\circ \phi _1)\\ {}&\big ({\mathcal {M}}(k;\beta )\big ) \\&\text {satisfying the parabolic type relation in the proposition.} \end{aligned} \end{aligned}$$

Case 1.2: \(c_0= 0\). In this case (3.2) is equivalent to:

$$\begin{aligned} \begin{aligned}&\text {A sequence } \phi _1(\beta ) \text { has a moment matrix } \phi _1\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the column relation}\\&c_1XY+c_2Y^2+c_4XY^2+c_5Y^3=\textbf{0} \text { with }c_i\in {\mathbb {R}},\; (c_4,c_5)\ne (0,0). \end{aligned} \end{aligned}$$
(3.11)

Assume that \(c_1=0\). Since by assumption \(\beta \) and hence \(\phi _1(\beta )\) admits a rm supported on

$$\begin{aligned} {{\mathcal {Z}}}(y^2(c_2+c_4x+c_5y)) ={{\mathcal {Z}}}(y(c_2+c_4x+c_5y)), \end{aligned}$$

it follows by [12] that \(c_2Y+c_4 XY+c_5 Y^2=\textbf{0}\) is a nontrivial column relation in \(\phi _1\big ({\mathcal {M}}(k;\beta )\big )\). Hence \({\mathcal {M}}(k;\beta )\) also satisfies a nontrivial column relation between columns indexed by monomials of degree at most 2, which contradicts the assumption of the proposition. Hence, \(c_1\ne 0.\) Applying the alt \((x,y)\mapsto (c_1x,y)\) to \(\phi _1(\beta )\), we obtain a sequence whose moment matrix satisfies a column relation of the form (3.6), and we can proceed as in Case 1.1.2 above.

Case 2: \(c_3\ne 0\). Applying the alt

$$\begin{aligned} \phi _{10}(x,y)=\Big (\sqrt{|c_3|} x,y\Big ) \end{aligned}$$

to \(\phi _1(\beta )\), we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{10}\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the column relation } c_0 Y + {\widetilde{c}}_1 XY + c_2 Y^2 + \frac{|c_3|}{c_3} X^2Y+ {\widetilde{c}}_4 XY^2+ c_5Y^3 =\textbf{0}\\ {}&\text { with }c_i,{\widetilde{c}}_i\in {\mathbb {R}}. \end{aligned} \nonumber \\ \end{aligned}$$
(3.12)

Case 2.1: \({\widetilde{c}}_1= 0\). In this case (3.12) is equivalent to:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{10}\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the column relation } c_0 Y + c_2 Y^2 + \frac{|c_3|}{c_3} X^2Y+ {\widetilde{c}}_4 XY^2+ c_5Y^3 =\textbf{0} \text { with }c_i,{\widetilde{c}}_i\in {\mathbb {R}}. \end{aligned} \end{aligned}$$
(3.13)

Case 2.1.1: \(c_0= 0\). Dividing the relation in (3.13) by \(\frac{|c_3|}{c_3}\), we see that (3.13) is equivalent to:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{10}\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the column relation } {\widetilde{c}}_2 Y^2 + X^2Y+ {\widehat{c}}_4 XY^2+ {\widetilde{c}}_5Y^3 =\textbf{0} \text { with }{\widetilde{c}}_2,{\widehat{c}}_4, {\widetilde{c}}_5\in {\mathbb {R}}. \end{aligned} \end{aligned}$$
(3.14)

Applying the alt

$$\begin{aligned} \phi _{11}(x,y)= \bigg ( x+\frac{{\widehat{c}}_4}{2}y, y \bigg ) \end{aligned}$$

to \((\phi _{10}\circ \phi _1)(\beta )\), we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{11}\circ \phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{11}\circ \phi _{10}\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \\&\text {satisfying the column relation } \breve{c}_2 Y^2 + X^2Y + \breve{c}_5Y^3 =\textbf{0} \text { with } \breve{c}_2,\breve{c}_5\in {\mathbb {R}}. \end{aligned} \end{aligned}$$
(3.15)

Case 2.1.1.1: \(\breve{c}_5=0\). Since by the assumption of the proposition \((\phi _{11}\circ \phi _{10}\circ \phi _1)(\beta )\) admits a rm supported on \({{\mathcal {Z}}}(y(\breve{c}_2y+x^2))\), the coefficient \(\breve{c}_2\) in (3.15) cannot be equal to 0. Indeed, \(\breve{c}_2=0\) would imply that \({{\mathcal {Z}}}(y(\breve{c}_2y+x^2))= {{\mathcal {Z}}}(yx^2)={{\mathcal {Z}}}(yx)\) and by [12], \(XY=\textbf{0}\) would be a nontrivial column relation in \((\phi _{11}\circ \phi _{10}\circ \phi _1)\big (\mathcal M(k;\beta )\big )\). Hence \({\mathcal {M}}(k;\beta )\) would also satisfy a nontrivial column relation between columns indexed by monomials of degree at most 2, which contradicts the assumption of the proposition. Since \(\breve{c}_2\ne 0\), after applying the alt

$$\begin{aligned} \phi _{12}(x,y)= (x,- \breve{c}_2 y) \end{aligned}$$

to \((\phi _{11}\circ \phi _{10}\circ \phi _1)(\beta )\), we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{12}\circ \phi _{11}\circ \phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{12}\circ \phi _{11}\circ \phi _{10}\circ \phi _1)\\ {}&\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the parabolic type relation in the proposition.} \end{aligned} \end{aligned}$$

Case 2.1.1.2: \(\breve{c}_5>0\). Applying the alt

$$\begin{aligned} \phi _{13}(x,y)= \Big (x,\sqrt{\breve{c}_5} y\Big ) \end{aligned}$$

to \((\phi _{11}\circ \phi _{10}\circ \phi _1)(\beta )\) we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{13}\circ \phi _{11}\circ \phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{13}\circ \phi _{11}\circ \phi _{10}\circ \phi _1)\\ {}&\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the circular type relation in the proposition.} \end{aligned} \end{aligned}$$

Case 2.1.1.3: \(\breve{c}_5<0\). Applying the alt

$$\begin{aligned} \phi _{14}(x,y)= \Big (x,\sqrt{- \breve{c}_5} y\Big ) \end{aligned}$$

to \((\phi _{11}\circ \phi _{10}\circ \phi _1)(\beta )\), we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{14}\circ \phi _{11}\circ \phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{14}\circ \phi _{11}\circ \phi _{10}\circ \phi _1)\\ {}&\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the hyperbolic type 3 relation in the proposition.} \end{aligned} \end{aligned}$$

Case 2.1.2: \(c_0\ne 0\). Dividing the relation in (3.13) by \(c_0\), we see that (3.13) is equivalent to:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{10}\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying}\\&\text {the column relation } Y + {\widetilde{c}}_2 Y^2 + {\widetilde{c}}_3 X^2Y+ {\widehat{c}}_4 XY^2+ {\widetilde{c}}_5 Y^3 =\textbf{0} \text { with }{\widetilde{c}}_i, {\widehat{c}}_4\in {\mathbb {R}}, {\widetilde{c}}_3\ne 0. \end{aligned} \end{aligned}$$
(3.16)

Applying the alt

$$\begin{aligned} \phi _{15}(x,y)= \Bigg (x+\frac{{\widehat{c}}_4}{2{\widetilde{c}}_3}y, y \Bigg ) \end{aligned}$$

to \((\phi _{10}\circ \phi _1)(\beta ),\) we obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{15}\circ \phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{15}\circ \phi _{10}\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \\&\text {satisfying the mixed type relation in the proposition.} \end{aligned} \end{aligned}$$

Case 2.2: \({\widetilde{c}}_1\ne 0\). Dividing the relation in (3.12) by \(\frac{|c_3|}{c_3}\), we see that (3.12) is equivalent to:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{10}\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big ) \text { satisfying the}\\&\text {column relation } {\widehat{c}}_0 Y + {\widehat{c}}_1 XY + {\widehat{c}}_2 Y^2 + X^2Y+ {\widehat{c}}_4 XY^2+ {\widehat{c}}_5Y^3 =\textbf{0} \text { with }{\widehat{c}}_i\in {\mathbb {R}},\\ {}&{\widehat{c}}_1\ne 0. \end{aligned} \nonumber \\ \end{aligned}$$
(3.17)

Now we apply the alt

$$\begin{aligned} \phi _{16}(x,y)=\Bigg (x+\frac{{\widehat{c}}_1}{2},y\Bigg ) \end{aligned}$$

to \((\phi _{10}\circ \phi _1)(\beta )\) and obtain:

$$\begin{aligned} \begin{aligned}&\text {A sequence } (\phi _{16}\circ \phi _{10}\circ \phi _1)(\beta ) \text { has a moment matrix } (\phi _{16}\circ \phi _{10}\circ \phi _1)\big ({\mathcal {M}}(k;\beta )\big )\\&\text {satisfying the column relation } \breve{c}_0Y + \breve{c}_2 Y^2 + X^2Y+ \breve{c}_4 XY^2+ \breve{c}_5Y^3 =\textbf{0} \text { with }\breve{c}_i\in {\mathbb {R}}. \end{aligned} \end{aligned}$$
(3.18)

Case 2.2.1: \(\breve{c}_0= 0\). In this case the relation in (3.18) coincides with the relation in (3.14) from Case 2.1.1, so we can proceed as above.

Case 2.2.2: \(\breve{c}_0\ne 0\). Dividing the relation in (3.18) by \(\breve{c}_0\), it coincides with the relation in (3.16) from Case 2.1.2, so we can proceed as above. \(\square \)

4 Solving the TMP on Canonical Reducible Cubic Curves

Let \(\beta =\{\beta _i\}_{i\in {\mathbb {Z}}_+^2,|i|\le 2k}\) be a sequence of degree 2k, \(k\in {\mathbb {N}}\), and

$$\begin{aligned} {{\mathcal {C}}}=\{ 1 ,X,Y,X^2,XY,Y^2,\ldots ,X^k,X^{k-1}Y,\ldots ,Y^k\} \end{aligned}$$
(4.1)

the set of rows and columns of the moment matrix \({{\mathcal {M}}}(k;\beta )\) in the degree-lexicographic order. Let

$$\begin{aligned} p(x,y)=y\cdot c(x,y)\in {\mathbb {R}}[x,y]_{\le 3} \end{aligned}$$
(4.2)

be a polynomial of degree 3 in one of the canonical forms from Proposition 3.1. Hence, \(c(x,y)\) is a polynomial of degree 2. The sequence \(\beta \) has a \({{\mathcal {Z}}}(p)\)-rm if and only if it can be decomposed as

$$\begin{aligned} \beta =\beta ^{(\ell )}+\beta ^{(c)}, \end{aligned}$$
(4.3)

where

$$\begin{aligned} \beta ^{(\ell )}&:= \{\beta _i^{(\ell )}\}_{i\in {\mathbb {Z}}_+^2,|i|\le 2k} \quad \text {has a representing measure on }y=0,\\ \beta ^{(c)}&:= \{\beta _i^{(c)}\}_{i\in {\mathbb {Z}}_+^2,|i|\le 2k} \quad \text {has a representing measure on the conic }c(x,y)=0, \end{aligned}$$

and the sum in (4.3) is a component-wise sum. On the level of moment matrices, (4.3) is equivalent to

$$\begin{aligned} {{\mathcal {M}}}(k;\beta )={{\mathcal {M}}}(k;\beta ^{(\ell )})+{{\mathcal {M}}}(k;\beta ^{(c)}). \end{aligned}$$
(4.4)

Note that if \(\beta \) has a \({{\mathcal {Z}}}(p)\)-rm, then the matrix \(\mathcal M(k;\beta )\) satisfies the relation \(p(X,Y)=\textbf{0}\) and it must be rg, i.e.,

$$\begin{aligned} X^iY^jp(X,Y)=\textbf{0} \quad \text {for }i,j=0,\ldots ,k-3\text { such that }i+j\le k-3. \end{aligned}$$
(4.5)
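As an illustration (not part of the argument), the rg relations (4.5) can be verified numerically for a finitely atomic measure on a reducible cubic; the sketch below uses the hypothetical curve \(p(x,y)=y(1-xy)\), i.e. the line \(y=0\) together with the hyperbola \(xy=1\).

```python
import numpy as np

def monomials(k):
    # degree-lexicographic order as in (1.2)
    return [(d - j, j) for d in range(k + 1) for j in range(d + 1)]

def moment_matrix(k, atoms):
    mons = monomials(k)
    return np.array([[sum(w * x**(i1 + i2) * y**(j1 + j2) for w, x, y in atoms)
                      for (i2, j2) in mons] for (i1, j1) in mons])

# atoms on the line y = 0 and on the hyperbola x*y = 1
atoms = ([(1.0, t, 0.0) for t in (-1.5, 0.4, 2.0)]
         + [(1.0, t, 1.0 / t) for t in (-2.0, -0.8, 0.5, 1.0, 1.7)])

k = 4
mons = monomials(k)
M = moment_matrix(k, atoms)

# p(x, y) = y(1 - xy) = y - x*y^2 as a coefficient dictionary
p = {(0, 1): 1.0, (1, 2): -1.0}

# check the column relations X^i Y^j p(X, Y) = 0 for all i + j <= k - 3, cf. (4.5)
for i in range(k - 2):
    for j in range(k - 2 - i):
        c = np.zeros(len(mons))
        for (mi, mj), coef in p.items():
            c[mons.index((mi + i, mj + j))] = coef
        assert np.linalg.norm(M @ c) < 1e-8 * np.linalg.norm(M)
```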

We write \(\vec {X}^{(0,k)}:=( 1 ,X,\ldots ,X^k)\). Let \({{\mathcal {T}}}\subseteq {{\mathcal {C}}}\) be a subset such that the columns from \({{\mathcal {T}}}\) span the column space \({{\mathcal {C}}}({{\mathcal {M}}}(k;\beta ))\) and

$$\begin{aligned} \begin{aligned}&P \text { is a permutation matrix such that moment matrix } \widetilde{{\mathcal {M}}}(k;\beta ):=P{\mathcal {M}}(k;\beta )P^T\\&\text {has rows and columns indexed in the order } \vec {X}^{(0,k)}, {{\mathcal {T}}}\setminus \vec {X}^{(0,k)}, {{\mathcal {C}}}\setminus (\vec {X}^{(0,k)}\cup {{\mathcal {T}}}). \end{aligned} \end{aligned}$$
(4.6)

In this new order of rows and columns, (4.4) becomes equivalent to

$$\begin{aligned} {\widetilde{{{\mathcal {M}}}}}(k;\beta )= {\widetilde{{{\mathcal {M}}}}}(k;\beta ^{(\ell )})+ {\widetilde{{{\mathcal {M}}}}}(k;\beta ^{(c)}). \end{aligned}$$
(4.7)

We write

$$\begin{aligned} \widetilde{{{\mathcal {M}}}}(k;\beta )= \begin{pmatrix} A_{11} &{} A_{12} &{} A_{13}\\[0.3em] (A_{12})^T &{} A_{22} &{} A_{23}\\[0.3em] (A_{13})^T &{} (A_{23})^T &{} A_{33} \end{pmatrix}, \end{aligned}$$
(4.8)

where the blocks correspond to the groups \(\vec {X}^{(0,k)}\), \({{\mathcal {T}}}{\setminus } \vec {X}^{(0,k)}\), \({{\mathcal {C}}}{\setminus }(\vec {X}^{(0,k)}\cup {{\mathcal {T}}})\) of rows and columns from (4.6); in particular, \(A_{11}\in S_{k+1}\).

By the form of the atoms, we know that \({\widetilde{{{\mathcal {M}}}}}(k;\beta ^{(\ell )})\) and \({\widetilde{{{\mathcal {M}}}}}(k;\beta ^{(c)})\) will be of the forms

$$\begin{aligned} {\widetilde{{{\mathcal {M}}}}}(k;\beta ^{(\ell )})= \begin{pmatrix} A_{11}-A &{} \textbf{0} &{} \textbf{0}\\[0.3em] \textbf{0} &{} \textbf{0} &{} \textbf{0}\\[0.3em] \textbf{0} &{} \textbf{0} &{} \textbf{0} \end{pmatrix} \quad \text {and}\quad {\widetilde{{{\mathcal {M}}}}}(k;\beta ^{(c)})= \begin{pmatrix} A &{} A_{12} &{} A_{13}\\[0.3em] (A_{12})^T &{} A_{22} &{} A_{23}\\[0.3em] (A_{13})^T &{} (A_{23})^T &{} A_{33} \end{pmatrix} \end{aligned}$$
(4.9)

for some Hankel matrix \(A\in S_{k+1}.\) Define matrix functions \({{\mathcal {F}}}:S_{k+1}\rightarrow S_{\frac{(k+1)(k+2)}{2}}\) and \({{\mathcal {H}}}:S_{k+1}\rightarrow S_{k+1}\) by

$$\begin{aligned} {{\mathcal {F}}}(\textbf{A})&= \begin{pmatrix} \textbf{A} &{} A_{12} &{} A_{13}\\[0.3em] (A_{12})^T &{} A_{22} &{} A_{23}\\[0.3em] (A_{13})^T &{} (A_{23})^T &{} A_{33} \end{pmatrix}\quad \text {and}\quad {{\mathcal {H}}}(\textbf{A})= A_{11}- \textbf{A} . \end{aligned}$$
(4.10)

Using (4.9), (4.7) becomes equivalent to

$$\begin{aligned} \widetilde{{\mathcal {M}}}(k;\beta ) = {{\mathcal {F}}}(A)+{{\mathcal {H}}}(A)\oplus \textbf{0}_{\frac{k(k+1)}{2}} \end{aligned}$$
(4.11)

for some Hankel matrix \(A\in S_{k+1}\).
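In matrix terms the decomposition is plain bookkeeping: \({{\mathcal {F}}}(A)\) replaces the upper-left block of \(\widetilde{{\mathcal {M}}}(k;\beta )\) by \(A\), and \({{\mathcal {H}}}(A)\) keeps the difference, so (4.11) holds for every symmetric \(A\). A minimal sketch with hypothetical sizes:

```python
import numpy as np

def F_of(M, A):
    # replace the upper-left block of M by A, cf. (4.10)
    n = A.shape[0]
    out = M.copy()
    out[:n, :n] = A
    return out

def H_of(M, A):
    n = A.shape[0]
    return M[:n, :n] - A        # A_11 - A

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 8))
M = B @ B.T                     # stand-in for the moment matrix M~(k; beta)
A = np.diag([1.0, 2.0, 3.0])    # any symmetric 3 x 3 candidate for the Hankel matrix

H_padded = np.zeros_like(M)
H_padded[:3, :3] = H_of(M, A)
assert np.allclose(F_of(M, A) + H_padded, M)   # identity (4.11)
```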

Lemma 4.1

Assume the notation above. The sequence \(\beta =\{\beta _i\}_{i\in {\mathbb {Z}}_+^2,|i|\le 2k}\), where \(k\ge 3\), has a \({{\mathcal {Z}}}(p)\)-representing measure if and only if there exists a Hankel matrix \(A\in S_{k+1}\) such that:

  1. (1)

    The sequence with the moment matrix \({{\mathcal {F}}}(A)\) has a \({{\mathcal {Z}}}(c)\)-representing measure.

  2. (2)

The sequence with the moment matrix \({{\mathcal {H}}}(A)\) has an \({\mathbb {R}}\)-representing measure.

Proof

First we prove the implication \((\Rightarrow )\). If \(\beta \) has a \({{\mathcal {Z}}}(p)\)-rm \(\mu \), then \(\mu \) is supported on the union of the line \(y=0\) and the conic \(c(x,y)=0\). Since the moment matrix, generated by the measure supported on \(y=0\), can be nonzero only when restricted to the columns and rows indexed by \({\vec {X}}^{(0,k)}\), it follows that the moment matrix generated by the restriction \(\mu |_{\{c=0\}}\) (resp. \(\mu |_{\{y=0\}}\)) of the measure \(\mu \) to the conic \(c(x,y)=0\) (resp. line \(y=0\)), is of the form \({{\mathcal {F}}}(A)\) (resp. \({{\mathcal {H}}}(A)\oplus \textbf{0}_{\frac{k(k+1)}{2}}\)) for some Hankel matrix \(A\in S_{k+1}\).

It remains to establish the implication \((\Leftarrow )\). Let \(\mathcal M^{(c)}(k)\) (resp. \({\mathcal {M}}^{(\ell )}(k)\)) be the moment matrix generated by the measure \(\mu _1\) (resp. \(\mu _2\)) supported on \({{\mathcal {Z}}}(c)\) (resp. \(y=0\)) such that

$$\begin{aligned} P{\mathcal {M}}^{(c)}(k)P^T&={{\mathcal {F}}}(A),\quad P{\mathcal {M}}^{(\ell )}(k)P^T ={{\mathcal {H}}}(A)\oplus \textbf{0}_{\frac{k(k+1)}{2}}, \end{aligned}$$
(4.12)

respectively, where P is as in (4.6). The equalities (4.12) imply that \({{\mathcal {M}}}(k;\beta )={{\mathcal {M}}}^{(c)}(k)+{{\mathcal {M}}}^{(\ell )}(k)\). Since the measure \(\mu _1+\mu _2\) is supported on the curve \({{\mathcal {Z}}}(c)\cup \{y=0\}={{\mathcal {Z}}}(p)\), the implication \((\Leftarrow )\) holds. \(\square \)

Lemma 4.2

Assume the notation above and let the sequence \(\beta =\{\beta _i\}_{i\in {\mathbb {Z}}_+^2,|i|\le 2k}\), where \(k\ge 3\), admit a \({{\mathcal {Z}}}(p)\)-representing measure. Let \(A:=A_{\big (\beta _{0,0}^{(c)},\beta _{1,0}^{(c)},\ldots ,\beta _{2k,0}^{(c)}\big )}\in S_{k+1}\) be a Hankel matrix such that \({{\mathcal {F}}}(A)\) admits a \({{\mathcal {Z}}}(c)\)-representing measure and \({{\mathcal {H}}}(A)\) admits an \({\mathbb {R}}\)-representing measure. Let \(c(x,y)\) be of the form

$$\begin{aligned} \begin{aligned}&c(x,y) =a_{00}+a_{10}x+a_{20}x^2+a_{01}y+a_{02}y^2+a_{11}xy\quad \text {with }a_{ij}\in {\mathbb {R}}\\&\text {and exactly one of the coefficients }a_{00},a_{10},a_{20}\text { is nonzero}. \end{aligned} \end{aligned}$$
(4.13)

If:

  1. (1)

    \(a_{00}\ne 0\), then

    $$\begin{aligned} \beta _{i,0}^{(c)} = - \frac{1}{a_{00}} (a_{01}\beta _{i,1}+a_{02}\beta _{i,2}+a_{11}\beta _{i+1,1}) \quad \text {for }i=0,\ldots ,2k-2. \end{aligned}$$
  2. (2)

    \(a_{10}\ne 0\), then

    $$\begin{aligned} \beta _{i,0}^{(c)} = - \frac{1}{a_{10}} (a_{01}\beta _{i-1,1}+a_{02}\beta _{i-1,2}+a_{11}\beta _{i,1}) \quad \text {for }i=1,\ldots ,2k-1. \end{aligned}$$
  3. (3)

    \(a_{20}\ne 0\), then

    $$\begin{aligned} \beta _{i,0}^{(c)} = - \frac{1}{a_{20}} (a_{01}\beta _{i-2,1}+a_{02}\beta _{i-2,2}+a_{11}\beta _{i-1,1}) \quad \text {for }i=2,\ldots ,2k. \end{aligned}$$

Proof

By Lemma 4.1, \({{\mathcal {F}}}(A)\) has a \({{\mathcal {Z}}}(c)\)-rm for some Hankel matrix \(A\in S_{k+1}\). Hence, \({{\mathcal {F}}}(A)\) satisfies the rg relations \(X^iY^jc(X,Y)=\textbf{0}\) for \(i,j\in {\mathbb {Z}}_+\), \(i+j\le k-2\). Let us assume that \(a_{00}\ne 0\) and \(a_{10}=a_{20}=0\). In particular, \({{\mathcal {F}}}(A)\) satisfies the relations

$$\begin{aligned} \begin{aligned}&a_{00} 1 +a_{01}Y+a_{02}Y^2+a_{11}XY=\textbf{0},\\&a_{00}X^{k-2}+a_{01}X^{k-2}Y+a_{02}X^{k-2}Y^2+a_{11}X^{k-1}Y=\textbf{0}. \end{aligned} \nonumber \\ \end{aligned}$$
(4.14)

Observing the rows \( 1 ,X,\ldots ,X^k\) of \({{\mathcal {F}}}(A)\), the relations (4.14) imply that

$$\begin{aligned} \beta _{i,0}^{(c)} = - \frac{1}{a_{00}} \big (a_{01}\beta _{i,1}^{(c)}+a_{02}\beta _{i,2}^{(c)}+a_{11}\beta _{i+1,1}^{(c)}\big ) \quad \text {for }i=0,\ldots ,2k-2. \end{aligned}$$
(4.15)

Using the forms of \({\widetilde{{{\mathcal {M}}}}}(k;\beta )\) and \({{\mathcal {F}}}(A)\) (see (4.8) and (4.10)), it follows that \(\beta _{i,1}^{(c)}=\beta _{i,1}\) and \(\beta _{j,2}^{(c)}=\beta _{j,2}\) for each ij. Using this in (4.15) proves the statement 1 of the lemma. The proofs of the statements 2 and 3 are analogous. \(\square \)
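The recursion in statement 3 can be sanity-checked numerically; the identity comes from the column relation \(a_{20}X^2+a_{01}Y+a_{02}Y^2+a_{11}XY=\textbf{0}\) on the conic. Below is a hypothetical example with the conic \(c(x,y)=-4y+x^2+y^2\) (so \(a_{20}=1\), \(a_{01}=-4\), \(a_{02}=1\), \(a_{11}=0\); a circle of radius 2 centered at \((0,2)\)) together with atoms on the line \(y=0\). Since the line carries no moments with \(j\ge 1\), the moments \(\beta _{i,1}\), \(\beta _{i,2}\) of the combined sequence agree with those of the conic part.

```python
import numpy as np

a01, a02, a11, a20 = -4.0, 1.0, 0.0, 1.0   # c = a20*x^2 + a01*y + a02*y^2 + a11*x*y
line = [(0.7, t, 0.0) for t in (-1.5, 0.2, 1.0)]
circ = [(1.0, 2 * np.cos(s), 2 + 2 * np.sin(s)) for s in (0.3, 1.2, 2.8, 4.4)]

def mom(atoms, i, j):
    # beta_{i,j} of the finitely atomic measure given by `atoms`
    return sum(w * x**i * y**j for w, x, y in atoms)

k = 3
for i in range(2, 2 * k + 1):
    lhs = mom(circ, i, 0)   # beta^{(c)}_{i,0}
    # beta_{i-2,1}, beta_{i-2,2}, beta_{i-1,1} of the full sequence
    # (the line part contributes 0 to moments with j >= 1)
    rhs = -(a01 * mom(line + circ, i - 2, 1)
            + a02 * mom(line + circ, i - 2, 2)
            + a11 * mom(line + circ, i - 1, 1)) / a20
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```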

Lemma 4.2 states that for all canonical relations from Proposition 3.1 except for the mixed type relation, all but two entries of the Hankel matrix A from Lemma 4.1 are uniquely determined by \(\beta \). The following lemma gives the smallest candidate for A in Lemma 4.1 with respect to the usual Loewner order of matrices.

Lemma 4.3

Assume the notation above and let \(\beta =\{\beta _i\}_{i\in {\mathbb {Z}}_+^2,|i|\le 2k}\), where \(k\ge 3\), be a sequence of degree 2k. Assume that \(\widetilde{{\mathcal {M}}}(k;\beta )\) is positive semidefinite and satisfies the column relations (4.5). Then:

  1. (1)

    \({{\mathcal {F}}}(A)\succeq 0\) for some \(A\in S_{k+1}\) if and only if \(A\succeq A_{12}(A_{22})^{\dagger } (A_{12})^T\).

  2. (2)

    \({{\mathcal {F}}}\big (A_{12}(A_{22})^{\dagger } (A_{12})^T\big )\succeq 0\) and \({{\mathcal {H}}}\big (A_{12}(A_{22})^{\dagger } (A_{12})^T\big ) \succeq 0\).

  3. (3)

    \({{\mathcal {F}}}\big (A_{12}(A_{22})^{\dagger } (A_{12})^T\big )\) satisfies the column relations \(X^iY^jc(X,Y)=0\) for \(i,j\in {\mathbb {Z}}_+\) such that \(i+j\le k-2\).

  4. (4)

    We have that

    $$\begin{aligned} {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )&= {{\,\textrm{rank}\,}}A_{22}+ {{\,\textrm{rank}\,}}\big (A_{11}-A_{12}(A_{22})^{\dagger } (A_{12})^T\big )\\&={{\,\textrm{rank}\,}}{{\mathcal {F}}}\big (A_{12}(A_{22})^{\dagger } (A_{12})^T\big )+ {{\,\textrm{rank}\,}}{{\mathcal {H}}}\big (A_{12}(A_{22})^{\dagger } (A_{12})^T\big ). \end{aligned}$$

Proof

By the equivalence between (1a) and (1b) of Theorem 2.2 used for \((M,A)=(\widetilde{{\mathcal {M}}}(k;\beta ),A_{11})\) and \((M,A)=(\big (\widetilde{\mathcal {M}}(k;\beta )\big )_{\vec {X}^{(0,k)}\cup {{\mathcal {T}}}},A_{11})\), it follows in particular that

$$\begin{aligned} \begin{aligned} {{\mathcal {C}}}\left( \begin{pmatrix} (A_{12})^T \\ (A_{13})^T \end{pmatrix} \right)&\subseteq {{\mathcal {C}}}\left( \begin{pmatrix} A_{22} &{} A_{23} \\ (A_{23})^T &{} A_{33} \end{pmatrix} \right) ,\\ {{\mathcal {C}}}(A_{12}^T)&\subseteq {{\mathcal {C}}}(A_{22}). \end{aligned} \nonumber \\ \end{aligned}$$
(4.16)

and

$$\begin{aligned} {{\mathcal {H}}}(A_{\min })\succeq 0, \end{aligned}$$
(4.17)

where

$$\begin{aligned} A_{\min }:= \begin{pmatrix} A_{12}&A_{13} \end{pmatrix} \begin{pmatrix} A_{22} &{} A_{23} \\[0.3em] (A_{23})^T &{} A_{33} \end{pmatrix}^{\dagger } \begin{pmatrix} (A_{12})^T \\ (A_{13})^T \end{pmatrix}. \end{aligned}$$

Using the equivalence between (1a) and (1b) of Theorem 2.2 again for the pairs \((M,A)=({{\mathcal {F}}}(A),A)\) and \((M,A)=(\big ({{\mathcal {F}}}(A)\big )_{\vec {X}^{(0,k)}\cup {{\mathcal {T}}}},A)\), it follows that

$$\begin{aligned} \begin{aligned} {{\mathcal {F}}}(A)\succeq 0 \quad&\Leftrightarrow \quad A\succeq A_{\min },\\ \big ({{\mathcal {F}}}(A)\big )_{\vec {X}^{(0,k)}\cup {{\mathcal {T}}}}\succeq 0 \quad&\Leftrightarrow \quad A\succeq A_{12} (A_{22})^{\dagger } (A_{12})^T =:{\widetilde{A}}_{\min }. \end{aligned} \nonumber \\ \end{aligned}$$
(4.18)

Since \({{\mathcal {F}}}(A)\succeq 0\) implies, in particular, that \(\big ({{\mathcal {F}}}(A)\big )_{\vec {X}^{(0,k)}\cup {{\mathcal {T}}}}\succeq 0\), (4.18) implies that

$$\begin{aligned} A_{\min }\succeq {\widetilde{A}}_{\min }. \end{aligned}$$
(4.19)

Claim. \(A_{\min }= {\widetilde{A}}_{\min }\).

Proof of Claim

By (4.18) and (4.19), it suffices to prove that \({{\mathcal {F}}}({\widetilde{A}}_{\min })\succeq 0\). By definition of \({{\mathcal {T}}}\) and the relations \(X^iY^jp(X,Y)=X^iY^{j+1}c(X,Y)=\textbf{0}\), \(i,j\in {\mathbb {Z}}_+,i+j\le k-3\), which hold in \(\widetilde{{{\mathcal {M}}}}(k;\beta )\), it follows, in particular, that

$$\begin{aligned} {{\mathcal {C}}}\left( \begin{pmatrix} A_{23} \\ A_{33} \end{pmatrix} \right) \subseteq {{\mathcal {C}}}\left( \begin{pmatrix} A_{22} \\[0.3em] (A_{23})^T \end{pmatrix} \right) \end{aligned}$$
(4.20)

(4.16) and (4.20) together imply that

$$\begin{aligned} {{\mathcal {C}}}\left( \begin{pmatrix} (A_{12})^T \\ (A_{13})^T \end{pmatrix} \right) \subseteq {{\mathcal {C}}}\left( \begin{pmatrix} A_{22} \\[0.3em] (A_{23})^T \end{pmatrix} \right) . \end{aligned}$$
(4.21)

(4.16) and (4.21) can be equivalently expressed as

$$\begin{aligned} \begin{aligned} \begin{pmatrix} A_{22} \\[0.3em] (A_{23})^T \end{pmatrix} W&= \begin{pmatrix} A_{23} \\ A_{33} \end{pmatrix} \;\text {for some matrix }W,\\ \begin{pmatrix} A_{22} \\[0.3em] (A_{23})^T \end{pmatrix} X&= \begin{pmatrix} (A_{12})^T \\ (A_{13})^T \end{pmatrix} \;\text {for some matrix }X. \end{aligned} \nonumber \\ \end{aligned}$$
(4.22)

We have that

$$\begin{aligned} 0&\preceq \begin{pmatrix} X^T\\ I\\ W^T \end{pmatrix} A_{22} \begin{pmatrix} X&\quad I&\quad W \end{pmatrix}\\&= \begin{pmatrix} X^TA_{22}X &{}\quad X^T A_{22} &{}\quad X^TA_{22}W\\[0.3em] A_{22}X &{}\quad A_{22} &{}\quad A_{22}W\\[0.3em] W^TA_{22}X &{}\quad W^TA_{22} &{}\quad W^T A_{22} W \end{pmatrix} \\&= \begin{pmatrix} A_{12}(A_{22})^{\dagger }(A_{12})^T &{}\quad A_{12} &{}\quad A_{13}\\[0.3em] (A_{12})^T &{}\quad A_{22} &{}\quad A_{23}\\[0.3em] (A_{13})^T &{}\quad (A_{23})^T &{}\quad A_{33} \end{pmatrix} = {{\mathcal {F}}}({\widetilde{A}}_{\min }) \end{aligned}$$

where I is the identity matrix of the same size as \(A_{22}\) and we used (4.22) in the second equality. This proves the Claim. \(\square \)

Using (4.17), (4.18) and the Claim, the statements 1 and 2 follow. By Theorem 2.2.2, used for \((M,A)=(\widetilde{{{\mathcal {M}}}}(k;\beta ),A_{11})\), we have that

$$\begin{aligned} \begin{aligned} {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )&= {{\,\textrm{rank}\,}}\begin{pmatrix} A_{22} &{} A_{23} \\[0.3em] (A_{23})^T &{} A_{33} \end{pmatrix} + {{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })\\&= {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min }) + {{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min }). \end{aligned} \end{aligned}$$
(4.23)

By (4.20) and

$$\begin{aligned} B:= \begin{pmatrix} A_{22} &{} A_{23} \\[0.3em] (A_{23})^T &{} A_{33} \end{pmatrix} \succeq 0, \end{aligned}$$

it follows by Theorem 2.2, used for \((M,A)=(B,A_{22})\), that \({{\,\textrm{rank}\,}}B={{\,\textrm{rank}\,}}A_{22}\). Using this and the Claim, (4.23) implies the statement 4.

Since \(\widetilde{{\mathcal {M}}}(k;\beta )\) satisfies the relations (4.5), it follows that the restriction \(\big ({{\mathcal {F}}}({\widetilde{A}}_{\min })\big )_{{{\mathcal {C}}}{\setminus } \vec {X}^{(0,k)},{{\mathcal {C}}}} \) satisfies the column relations \(X^iY^jc(X,Y)=\textbf{0}\) for \(i,j\in {\mathbb {Z}}_+\) such that \(i+j\le k-2\). By Proposition 2.3, these relations extend to \({{\mathcal {F}}}({\widetilde{A}}_{\min })\), which proves 3. \(\square \)
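The linear-algebra content of Lemma 4.3 is easy to test numerically on a random positive semidefinite matrix with the two-block split \(\big (A_{11}, A_{12}; (A_{12})^T, A_{22}\big )\), a simplified stand-in for \(\widetilde{{\mathcal {M}}}(k;\beta )\) restricted to \(\vec {X}^{(0,k)}\cup {{\mathcal {T}}}\): the generalized Schur complement \({\widetilde{A}}_{\min }=A_{12}(A_{22})^{\dagger }(A_{12})^T\) is the smallest admissible choice, and the ranks add up.

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((7, 5))
M = V @ V.T                      # PSD stand-in of rank 5
n1 = 3
A11, A12, A22 = M[:n1, :n1], M[:n1, n1:], M[n1:, n1:]

A_min = A12 @ np.linalg.pinv(A22) @ A12.T   # A_12 (A_22)^dagger (A_12)^T

F = M.copy()
F[:n1, :n1] = A_min                         # F(A_min), cf. (4.10)
H = A11 - A_min                             # H(A_min)

rank = lambda X: np.linalg.matrix_rank(X, tol=1e-9)
assert np.linalg.eigvalsh(F).min() > -1e-9  # F(A_min) >= 0  (cf. statement 2)
assert np.linalg.eigvalsh(H).min() > -1e-9  # H(A_min) >= 0  (cf. statement 2)
assert rank(M) == rank(F) + rank(H)         # rank additivity (cf. statement 4)
```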

Remark 4.4

By Lemmas 4.1–4.3, the natural procedure for solving the \({{\mathcal {Z}}}(p)\)-TMP for the sequence \(\beta =\{\beta _i\}_{i\in {\mathbb {Z}}_+^2,|i|\le 2k}\), where \(k\ge 3\), with p being any of the canonical forms from Proposition 3.1 except the mixed type relation, is the following:

  1. (1)

    First compute \(A_{\min }:=A_{12}(A_{22})^{\dagger }(A_{12})^T\). By Lemma 4.3.3, there is one entry of \(A_{\min }\) which might need to be changed to obtain a Hankel structure. Namely, in the notation (4.13), if:

    1. (a)

      \(a_{00}\ne 0\), then the value of \((A_{\min })_{k,k}\) must be made equal to \((A_{\min })_{k-1,k+1}\).

    2. (b)

      \(a_{10}\ne 0\), then the value of \((A_{\min })_{1,k+1}\) must be made equal to \((A_{\min })_{2,k}\).

    3. (c)

      \(a_{20}\ne 0\), then the value of \((A_{\min })_{2,2}\) must be made equal to \((A_{\min })_{3,1}\).

    Let \({\widehat{A}}_{\min }\) be the matrix obtained from \(A_{\min }\) after performing the changes described above.

  2. (2)

    Study whether \({{\mathcal {F}}}({\widehat{A}}_{\min })\) and \({{\mathcal {H}}}({\widehat{A}}_{\min })\) admit a \({{\mathcal {Z}}}(c)\)-rm and an \({\mathbb {R}}\)-rm, respectively. If the answer is yes, \(\beta \) admits a \({{\mathcal {Z}}}(p)\)-rm. Otherwise, by Lemma 4.2, there are two antidiagonals of the Hankel matrix \({\widehat{A}}_{\min }\) which can be varied so that the matrices \({{\mathcal {F}}}({\widehat{A}}_{\min })\) and \({{\mathcal {H}}}({\widehat{A}}_{\min })\) admit the corresponding measures. Namely, in the notation (4.13), if:

    1. (a)

      \(a_{00}\ne 0\), then the last two antidiagonals of \({\widehat{A}}_{\min }\) can be changed.

    2. (b)

      \(a_{10}\ne 0\), then the left-upper and the right-lower corner of \({\widehat{A}}_{\min }\) can be changed.

    3. (c)

      \(a_{20}\ne 0\), then the first two antidiagonals of \({\widehat{A}}_{\min }\) can be changed.

    To solve the \({{\mathcal {Z}}}(p)\)-TMP for \(\beta \) one needs to characterize when it is possible to change these antidiagonals in such a way as to obtain a matrix \(\breve{A}_{\min }\) such that \({{\mathcal {F}}}(\breve{A}_{\min })\) and \({{\mathcal {H}}}(\breve{A}_{\min })\) admit a \({{\mathcal {Z}}}(c)\)-rm and an \({\mathbb {R}}\)-rm, respectively.
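A sketch of step 1 of this procedure in code (pure linear algebra; the measure-existence tests of step 2 are problem-specific and omitted). The block sizes and the repaired entry are hypothetical and follow the \(a_{20}\ne 0\) case, where \((A_{\min })_{2,2}\) is made equal to \((A_{\min })_{3,1}=(A_{\min })_{1,3}\):

```python
import numpy as np

def step1_a20(M_tilde, n1, n2):
    """Compute A_min = A_12 (A_22)^dagger (A_12)^T from the block partition
    (first n1 columns | next n2 columns | rest) and repair the (2,2) entry
    (1-based indexing), as in the a20 != 0 case of Remark 4.4(1)."""
    A12 = M_tilde[:n1, n1:n1 + n2]
    A22 = M_tilde[n1:n1 + n2, n1:n1 + n2]
    A_min = A12 @ np.linalg.pinv(A22) @ A12.T
    A_hat = A_min.copy()
    A_hat[1, 1] = A_min[0, 2]      # (A_min)_{2,2} := (A_min)_{1,3}
    return A_hat

# hypothetical sizes: k = 3, so n1 = k + 1 = 4 columns X^(0,k), n2 = 5, rest = 1
rng = np.random.default_rng(2)
V = rng.standard_normal((10, 7))
M_tilde = V @ V.T
A_hat = step1_a20(M_tilde, 4, 5)
assert A_hat.shape == (4, 4) and np.allclose(A_hat, A_hat.T)
```

Since the repaired entry sits on the diagonal, the output stays symmetric; only the Hankel pattern on the affected antidiagonal is restored.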

In Sects. 5 and 6 we concretely solve the TMP on reducible cubic curves in the circular and parabolic type forms (see the classification from Proposition 3.1). The parallel lines type form was solved in [42], while the hyperbolic type forms will be solved in the forthcoming work [40].

5 Circular Type Relation: \(p(x,y)=y(ay+x^2+y^2)\), \(a\in {\mathbb {R}}{\setminus }\{0\}\)

In this section we solve the \({{\mathcal {Z}}}(p)\)-TMP for the sequence \(\beta =\{\beta _{i,j}\}_{i,j\in {\mathbb {Z}}_+,i+j\le 2k}\) of degree 2k, \(k\ge 3\), where \(p(x,y)=y(ay+x^2+y^2)\), \(a\in {\mathbb {R}}{\setminus }\{0\}\). Assume the notation from Sect. 4. If \(\beta \) admits a \({{\mathcal {Z}}}(p)\)-rm, then \({\mathcal {M}}(k;\beta )\) must satisfy the relations

$$\begin{aligned} aY^{2+j}X^{i} + Y^{1+j}X^{2+i} = -Y^{3+j}X^{i}\quad \text {for }i,j\in {\mathbb {Z}}_+\text { such that }i+j\le k-3. \nonumber \\ \end{aligned}$$
(5.1)

In the presence of all column relations (5.1), the column space \({{\mathcal {C}}}({\mathcal {M}}(k;\beta ))\) is spanned by the columns in the set

$$\begin{aligned} {{\mathcal {T}}}= \vec {X}^{(0,k)} \cup Y\vec {X}^{(0,k-1)} \cup Y^2\vec {X}^{(0,k-2)}, \end{aligned}$$
(5.2)

where

$$\begin{aligned} Y^i\vec {X}^{(j,\ell )}:=(Y^iX^j,Y^iX^{j+1},\ldots ,Y^iX^{\ell }) \quad \text {with }i,j,\ell \in {\mathbb {Z}}_+,\; j\le \ell ,\; i+\ell \le k. \end{aligned}$$

Let \({\widetilde{{{\mathcal {M}}}}}(k;\beta )\) be as in (4.9). Let

$$\begin{aligned} A_{\min }:=A_{12}(A_{22})^{\dagger } (A_{12})^T. \end{aligned}$$
(5.3)

As described in Remark 4.4, \(A_{\min }\) might need to be changed to

$$\begin{aligned} {\widehat{A}}_{\min } =A_{\min }+\eta E_{2,2}^{(k+1)}, \end{aligned}$$

where

$$\begin{aligned} \eta :=(A_{\min })_{1,3}-(A_{\min })_{2,2}. \end{aligned}$$
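The passage from \(A_{\min }\) to \({\widehat{A}}_{\min }\) can be sketched numerically. The following is a minimal illustration on hypothetical random blocks \(A_{12},A_{22}\) (the sizes and data are illustrative only, not the paper's moment matrices): it forms \(A_{\min }=A_{12}(A_{22})^{\dagger }(A_{12})^T\) and repairs the \((2,2)\) entry to match the \((1,3)\) entry on the same antidiagonal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and data (illustrative only).
k = 3
m = 4                                    # size of the A_22 block
B = rng.standard_normal((k + 1 + m, k + 1 + m + 2))
M = B @ B.T                              # a random psd matrix to slice blocks from
A12 = M[:k + 1, k + 1:]                  # (k+1) x m off-diagonal block
A22 = M[k + 1:, k + 1:]                  # m x m lower-right block

# A_min := A12 (A22)^dagger (A12)^T, as in (5.3).
A_min = A12 @ np.linalg.pinv(A22) @ A12.T

# eta := (A_min)_{1,3} - (A_min)_{2,2} (1-based indices) and
# \hat{A}_min = A_min + eta * E_{2,2}^{(k+1)}.
eta = A_min[0, 2] - A_min[1, 1]
A_hat = A_min.copy()
A_hat[1, 1] += eta                       # the repair step of Remark 4.4

assert np.isclose(A_hat[1, 1], A_hat[0, 2])   # repaired antidiagonal agrees
```

After the repair, the two entries lying on the same antidiagonal of \({\widehat{A}}_{\min }\) coincide, which is exactly the Hankel constraint the repair is meant to restore.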

Let \({{\mathcal {F}}}(\textbf{A})\) and \({{\mathcal {H}}}(\textbf{A})\) be as in (4.10). Write

(5.4)

Define also the matrix function

$$\begin{aligned} {\mathcal {G}}:{\mathbb {R}}^2\rightarrow S_{k+1},\qquad {\mathcal {G}}(\textbf{t},\textbf{u})= {\widehat{A}}_{\min } +\textbf{t}E_{1,1}^{(k+1)} +\textbf{u}\big (E_{1,2}^{(k+1)}+E_{2,1}^{(k+1)}\big ). \end{aligned}$$
(5.5)

The solution to the cubic circular type relation TMP is the following.

Theorem 5.1

Let \(p(x,y)=y(ay+x^2+y^2)\), \(a\in {\mathbb {R}}{\setminus }\{0\}\), and \(\beta =(\beta _{i,j})_{i,j\in {\mathbb {Z}}_+,i+j\le 2k}\), where \(k\ge 3\). Assume also the notation above. Then the following statements are equivalent:

  1. (1)

    \(\beta \) has a \({{\mathcal {Z}}}(p)\)-representing measure.

  2. (2)

    \(\widetilde{{\mathcal {M}}}(k;\beta )\) is positive semidefinite, the relations

    $$\begin{aligned} a\beta _{i,2+j} + \beta _{2+i,1+j} = - \beta _{i,3+j} \quad \text {hold for every }i,j\in {\mathbb {Z}}_+\text { with }i+j\le 2k-3 \nonumber \\ \end{aligned}$$
    (5.6)

    and one of the following statements holds:

    1. (a)

      \(\eta =0\) and one of the following holds:

      1. (i)

        \({{\,\textrm{rank}\,}}({{\mathcal {H}}}(A_{\min }))_{\vec {X}^{(0,k-1)}}=k\).

      2. (ii)

        \({{\,\textrm{rank}\,}}(H_2)_{\vec {X}^{(1,k-1)}}={{\,\textrm{rank}\,}}H_2\).

    2. (b)

      \(\eta >0\), \(H_2\) is positive semidefinite and, defining a real number

      $$\begin{aligned} \begin{aligned} u_0&= \beta _{1,0}-(A_{\min })_{1,2} -(h_{12}^{(1)})^T (H_{22})^{\dagger } h_{12}^{(2)}, \end{aligned} \end{aligned}$$
      (5.7)

      a function

      $$\begin{aligned} h(\textbf{t})= \sqrt{ (H_1/H_{22}- \textbf{t}) (H_2/H_{22}) } \end{aligned}$$
      (5.8)

      and a set

      $$\begin{aligned} \begin{aligned} {\mathcal {I}}&= \Big \{(t,\sqrt{\eta t})\in {\mathbb {R}}_+\times {\mathbb {R}}_+:\sqrt{\eta t}= u_{0}+h(t)\Big \}\\&\hspace{0.5cm} \cup \Big \{(t,\sqrt{\eta t})\in {\mathbb {R}}_+\times {\mathbb {R}}_- :\sqrt{\eta t}= u_{0}-h(t)\Big \}\\&\hspace{0.5cm} \cup \Big \{(t,- \sqrt{\eta t})\in {\mathbb {R}}_+\times {\mathbb {R}}_+:- \sqrt{\eta t}= u_{0}+h(t)\Big \}\\&\hspace{0.5cm} \cup \Big \{(t,- \sqrt{\eta t})\in {\mathbb {R}}_+\times {\mathbb {R}}_- :- \sqrt{\eta t}= u_{0}-h(t)\Big \},\\ \end{aligned}\nonumber \\ \end{aligned}$$
      (5.9)

one of the following holds:

  1. (i)

    The set \({\mathcal {I}}\) has two elements and \(H_2\) is positive definite.

  2. (ii)

    \({\mathcal {I}}=\{({\tilde{t}},{\tilde{u}})\}\) and

    $$\begin{aligned} {{\,\textrm{rank}\,}}\big ( \big ( {{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}})) \big )_{\vec {X}^{(0,k-1)}} \big ) = {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}})). \end{aligned}$$
    (5.10)

Moreover, if a \({\mathcal {Z}}(p)\)-representing measure for \(\beta \) exists, then:

  • There exists a \({{\mathcal {Z}}}(p)\)-representing measure with at most \({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1\) atoms.

  • There exists a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-representing measure if and only if any of the following holds:

    • \(\eta =0\).

    • \(\eta >0\) and \({{\mathcal {H}}}(A_{\min })\) is positive definite.

In particular, a p-pure sequence \(\beta \) with a \({{\mathcal {Z}}}(p)\)-representing measure admits a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-representing measure.

Remark 5.2

In this remark we explain the idea of the proof of Theorem 5.1 and the meaning of the conditions in the statement of the theorem.

By Lemmas 4.1 and 4.2, the existence of a \({\mathcal {Z}}(p)\)-rm for \(\beta \) is equivalent to the existence of \(t,u\in {\mathbb {R}}\) such that \({{\mathcal {F}}}({\mathcal {G}}(t,u))\) admits a \({{\mathcal {Z}}}(ay+x^2+y^2)\)-rm and \({{\mathcal {H}}}({\mathcal {G}}(t,u))\) admits a \({\mathbb {R}}\)-rm. Let

$$\begin{aligned} {\mathcal {R}}_1&=\big \{(t,u)\in {\mathbb {R}}^2:{{\mathcal {F}}}({\mathcal {G}}(t,u))\succeq 0\big \} \quad \text {and}\quad {\mathcal {R}}_2 =\big \{(t,u)\in {\mathbb {R}}^2:{{\mathcal {H}}}({\mathcal {G}}(t,u))\succeq 0\big \}. \end{aligned}$$

We denote by \(\partial {\mathcal {R}}_i\) and \(\mathring{{\mathcal {R}}}_i\) the topological boundary and the interior of the set \({\mathcal {R}}_i\), respectively. By the necessary conditions for the existence of a \({{\mathcal {Z}}}(p)\)-rm [12, 14, 25], \(\widetilde{{\mathcal {M}}}(k;\beta )\) must be psd and the relations (5.6) must hold. Using also Theorem 2.6, Theorem 5.1.(1) is equivalent to

$$\begin{aligned} \begin{aligned}&\widetilde{{{\mathcal {M}}}}(k;\beta )\succeq 0, \text { the relations } (5.6)\text { hold and }\\&\exists (t_0,u_0)\in {\mathcal {R}}_1\cap {\mathcal {R}}_2: {{\mathcal {H}}}({\mathcal {G}}(t_0,u_0))\text { admits a }{\mathbb {R}}\text {-rm}. \end{aligned} \nonumber \\ \end{aligned}$$
(5.11)

In the proof of Theorem 5.1 we show that (5.11) is equivalent to Theorem 5.1.(2):

  1. (1)

    First we establish (see Claims 1 and 2 below) that the form of:

    • \({\mathcal {R}}_1\) is one of the following:

      (figure a: the two possible shapes of \({\mathcal {R}}_1\), bounded by the graphs of \(\pm \sqrt{\eta \textbf{t}}\))

      where the left case occurs if \(\eta >0\) and the right if \(\eta =0\). The case \(\eta <0\) cannot occur.

    • \({\mathcal {R}}_2\) is one of the following:

      (figure b: the two possible shapes of \({\mathcal {R}}_2\), bounded by the graphs of \(u_0\pm h(\textbf{t})\))

      where the left case occurs if \(H_2/H_{22}>0\) and the right if \(H_2/H_{22}=0\).

  2. (2)

    If \(\eta =0\), then we show that (5.11) is equivalent to

    $$\begin{aligned} \begin{aligned}&\widetilde{{{\mathcal {M}}}}(k;\beta )\succeq 0, \text { the relations } (5.6)\text { hold and } {{\mathcal {H}}}({\mathcal {G}}(0,0))\text { admits a }{\mathbb {R}}\text {-rm}. \end{aligned} \end{aligned}$$

    The latter statement is further equivalent to Theorem 5.1.(2a).

  3. (3)

    If \(\eta >0\), then by the forms of \({\mathcal {R}}_1\) and \({\mathcal {R}}_2\), \({{\mathcal {I}}}=\partial {\mathcal {R}}_1\cap \partial {\mathcal {R}}_2\) is one of the following: (i) \(\emptyset \), (ii) a one-element set, (iii) a two-element set. In the case:

    • (i), a \({{\mathcal {Z}}}(p)\)-rm for \(\beta \) clearly cannot exist.

    • (ii), denoting \({{\mathcal {I}}}=\{({\tilde{t}},{\tilde{u}})\}\), (5.11) is equivalent to

      $$\begin{aligned} \begin{aligned}&\widetilde{{{\mathcal {M}}}}(k;\beta )\succeq 0, \text { the relations } (5.6)\text { hold and } {{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}}))\text { admits a }{\mathbb {R}}\text {-rm}. \end{aligned} \end{aligned}$$

      The latter statement is equivalent to Theorem 5.1.(2(b)ii).

    • (iii), (5.11) is equivalent to \(H_2\) being positive definite, which is Theorem 5.1.(2(b)i). Moreover, in this case for at least one of the points \((t,u)\in {{\mathcal {I}}}\), a \({{\mathcal {Z}}}(ay+x^2+y^2)\)-rm and a \({\mathbb {R}}\)-rm exist for \({{\mathcal {F}}}({\mathcal {G}}(t,u))\) and \({\mathcal {H}}({\mathcal {G}}(t,u))\), respectively.

Proof of Theorem 5.1

Let \({\mathcal {R}}_1, {\mathcal {R}}_2\) be as in Remark 5.2. As explained in Remark 5.2, Theorem 5.1.(1) is equivalent to (5.11), thus it remains to prove that (5.11) is equivalent to Theorem 5.1.(2).

First we establish a few claims needed in the proof. Claim 1 (resp. 2) describes \({\mathcal {R}}_1\) (resp. \({\mathcal {R}}_2\)) concretely. \(\square \)

Claim 1. Assume that \(\widetilde{{\mathcal {M}}}(k;\beta )\succeq 0\). Then

$$\begin{aligned} {\mathcal {R}}_1 = \left\{ \begin{array}{rl} \big \{ (t,u)\in {\mathbb {R}}^2:t\ge 0, u\in \left[ - \sqrt{\eta t},\sqrt{\eta t}\right] \big \},&{} \text {if }\eta \ge 0,\\[0.3em] \emptyset ,&{} \text {if }\eta <0. \end{array} \right. \end{aligned}$$
(5.12)

If \(\eta \ge 0\), we have

$$\begin{aligned} {{\,\textrm{rank}\,}}{{\mathcal {F}}}({\mathcal {G}}(t,u))= \left\{ \begin{array}{rl} {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min }),&{} \text {if } t=0, \eta =0,\\[0.3em] {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+1,&{} \text {if } (t>0 \text { or }\eta>0) \text { and } u\in \{- \sqrt{\eta t},\sqrt{\eta t}\},\\[0.3em] {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+2,&{} \text {if } t>0,\eta >0, u\in \left( - \sqrt{\eta t},\sqrt{\eta t}\right) , \end{array} \right. \end{aligned}$$
(5.13)

where \(A_{\min }\) is as in (5.3).

Proof of Claim 1

Note that

$$\begin{aligned} \begin{aligned} {\mathcal {G}}(\textbf{t},\textbf{u})&= A_{\min } +\eta E_{2,2}^{(k+1)} +\textbf{t}E_{1,1}^{(k+1)} +\textbf{u}\big (E_{1,2}^{(k+1)}+E_{2,1}^{(k+1)}\big )\\&= A_{\min }+ \begin{pmatrix} \textbf{t} &{} \textbf{u} \\ \textbf{u} &{} \eta \end{pmatrix} \oplus \textbf{0}_{k-1}. \end{aligned} \nonumber \\ \end{aligned}$$
(5.14)

By Lemma 4.3, we have that

$$\begin{aligned} {{\mathcal {F}}}({\mathcal {G}}(t,u))\succeq 0 \quad \Leftrightarrow \quad {\mathcal {G}}(t,u)\succeq A_{\min }. \end{aligned}$$
(5.15)

Using (5.14), (5.15) and the definition of \({\mathcal {R}}_1\), we have that

$$\begin{aligned} (t,u)\in {\mathcal {R}}_1 \quad&\Leftrightarrow \quad \begin{pmatrix} t &{} u \\ u &{} \eta \end{pmatrix}\succeq 0 \quad \Leftrightarrow \quad t\ge 0, \eta \ge 0, t\eta \ge u^2, \end{aligned}$$
(5.16)

which proves (5.12).
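The equivalence (5.16) is the standard psd criterion for a symmetric \(2\times 2\) block. A quick numerical cross-check (purely illustrative) of the closed form against the eigenvalues:

```python
import numpy as np

def in_R1(t, u, eta):
    # Closed-form membership test from (5.16):
    # [[t, u], [u, eta]] is psd iff t >= 0, eta >= 0 and t*eta >= u^2.
    return t >= 0 and eta >= 0 and t * eta >= u * u

# Cross-check against the numerical eigenvalues of the 2x2 block.
eta = 2.0
for (t, u) in [(1.0, 1.0), (1.0, 2.0), (-1.0, 0.0), (0.5, -1.0), (0.0, 0.0)]:
    eigs = np.linalg.eigvalsh(np.array([[t, u], [u, eta]]))
    assert in_R1(t, u, eta) == bool(eigs.min() >= -1e-12)
```

For fixed \(\eta >0\) the admissible \(u\) range \([-\sqrt{\eta t},\sqrt{\eta t}]\) is exactly the condition \(t\eta \ge u^2\) with \(t\ge 0\).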

To prove (5.13) first note that by construction of \({{\mathcal {F}}}(A_{\min })\), the columns 1 and X are in the span of the columns indexed by \({{\mathcal {C}}}\setminus \vec {X}^{(0,k)}\). Hence, there are vectors

$$\begin{aligned} v_1, v_2 \in \ker {{\mathcal {F}}}(A_{\min }) \end{aligned}$$
(5.17)

of the forms

$$\begin{aligned} v_1=\begin{pmatrix} 1&\textbf{0}_{1,k}&({\tilde{v}}_1)^T \end{pmatrix}^T\in {\mathbb {R}}^{\frac{(k+1)(k+2)}{2}} \quad \text {and}\quad v_2=\begin{pmatrix} 0&1&\textbf{0}_{1,k-2}&({\tilde{v}}_2)^T \end{pmatrix}^T\in {\mathbb {R}}^{\frac{(k+1)(k+2)}{2}}. \end{aligned}$$

Let \(r:={{\,\textrm{rank}\,}}\begin{pmatrix} t &{} u \\ u &{} \eta \end{pmatrix}\). Clearly,

$$\begin{aligned} {{\,\textrm{rank}\,}}{{\mathcal {F}}}({\mathcal {G}}(t,u))\le {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+r. \end{aligned}$$
(5.18)

We separate three cases according to r.

Case 1: \(r=0\). In this case \(t=u=\eta =0\) and \(\mathcal G(0,0)=A_{\min }\), so (5.13) clearly holds.

Case 2: \(r=1\). In this case \(t\eta =u^2\). Together with (5.16), this is equivalent to \((t>0 \text { or }\eta >0) \text { and } u\in \{- \sqrt{\eta t},\sqrt{\eta t}\}\). By (5.18) and \({{\mathcal {F}}}({\mathcal {G}}(t,u))\succeq {{\mathcal {F}}}(A_{\min })\), to prove (5.13) it suffices to find \(v\in \ker {{\mathcal {F}}}(A_{\min })\) such that \(v\notin \ker {{\mathcal {F}}}({\mathcal {G}}(t,u))\). Note that at least one of \(v_1,v_2\) from (5.17) is such a vector, since

$$\begin{aligned} (v_1)^T{{\mathcal {F}}}({\mathcal {G}}(t,u))v_1=t \quad \text {and}\quad (v_2)^T{{\mathcal {F}}}(\mathcal G(t,u))v_2=\eta . \end{aligned}$$

Case 3: \(r=2\). In this case \(t\eta >u^2\). Together with (5.16), this is equivalent to \(t>0,\eta >0,u\in (- \sqrt{\eta t},\sqrt{\eta t})\). Note that

$$\begin{aligned} {{\mathcal {F}}}({\mathcal {G}}(t,u))= {{\mathcal {F}}}\Big ({\mathcal {G}}\Big (\frac{u^2}{\eta },u\Big )\Big )+ \begin{pmatrix} t- \frac{u^2}{\eta } \end{pmatrix} \oplus \textbf{0}_{\frac{(k+1)(k+2)}{2}-1} \succeq {{\mathcal {F}}}\Big ({\mathcal {G}}\Big (\frac{u^2}{\eta },u\Big )\Big ).\nonumber \\ \end{aligned}$$
(5.19)

By Case 2, we have \({{\,\textrm{rank}\,}}{{\mathcal {F}}}\Big (\mathcal G\Big (\frac{u^2}{\eta },u\Big )\Big )={{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+1\). By (5.18) and (5.19), to prove (5.13) it suffices to find \(v\in \ker {{\mathcal {F}}}\Big (\mathcal G\Big (\frac{u^2}{\eta },u\Big )\Big )\) such that \(v\notin \ker {{\mathcal {F}}}(\mathcal G(t,u))\). We will check below that \(v_3\), defined by

$$\begin{aligned}v_3= v_1- \frac{u}{\eta }v_2 = \begin{pmatrix} 1&- \frac{u}{\eta }&({\tilde{v}}_3)^T\end{pmatrix}^T\in {\mathbb {R}}^{\frac{(k+1)(k+2)}{2}}, \end{aligned}$$

is such a vector. This follows by

$$\begin{aligned} {{\mathcal {F}}}\Big ({\mathcal {G}}\Big (\frac{u^2}{\eta },u\Big )\Big )v_3={{\mathcal {F}}}(A_{\min })v_3+ \left( \begin{pmatrix} \frac{u^2}{\eta } &{} u \\ u &{} \eta \end{pmatrix} \oplus \textbf{0}_{\frac{(k+1)(k+2)}{2}-2}\right) v_3=\textbf{0}_{\frac{(k+1)(k+2)}{2},1} \end{aligned}$$

and

$$\begin{aligned} (v_3)^T{{\mathcal {F}}}({\mathcal {G}}(t,u))v_3=t- \frac{u^2}{\eta }>0. \end{aligned}$$

This concludes the proof of Claim 1. \(\square \)

Claim 2. Assume that \(\widetilde{{\mathcal {M}}}(k;\beta )\succeq 0\). Let \(u_{0}\), \(h(\textbf{t})\) be as in (5.7),(5.8) and

$$\begin{aligned} t_0 = \beta _{0,0}-(A_{\min })_{1,1}- (h_{12}^{(1)})^T (H_{22})^{\dagger } h_{12}^{(1)}. \end{aligned}$$

Then

$$\begin{aligned} {\mathcal {R}}_2 = \left\{ \begin{array}{rl} \big \{ (t,u)\in {\mathbb {R}}^2:t\le t_0, u\in [u_0-h(t),u_0+h(t)] \big \},&{}\text {if }H_2\succeq 0,\\[0.3em] \emptyset ,&{}\text {if }H_2\not \succeq 0. \end{array} \right. \end{aligned}$$
(5.20)

If \(H_2\succeq 0\), we have that

$$\begin{aligned} {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t,u))= \left\{ \begin{array}{rl} {{\,\textrm{rank}\,}}H_{2},&{} \text {for } t=t_0, u=u_0,\\[0.2em] {{\,\textrm{rank}\,}}H_{22}+1,&{} \text {for } t<t_0, u\in \{u_0-h(t),u_0+h(t)\},\\[0.2em] {{\,\textrm{rank}\,}}H_{22}+2,&{} \text {for } t<t_0, u\in (u_0-h(t),u_0+h(t)). \end{array} \right. \end{aligned}$$
(5.21)

Proof of Claim 2

Write

Note that \(H(0)=({{\mathcal {H}}}(A_{\min }))_{\{ 1 \}\cup \vec {X}^{(2,k)}}\). By Lemma 4.3.(2), \({{\mathcal {H}}}(A_{\min })\succeq 0\) and hence, \(H(0)\succeq 0\). By Theorem 2.2, used for \((M,C)=(H(0),H_{22})\), it follows that \(H_2\succeq 0\) and \(h_{12}^{(1)}\in {{\mathcal {C}}}(H_{22}).\) Again, by Theorem 2.2, used for \((M,C)=(H(t),H_{22})\), it follows that \(H(t)\succeq 0\) iff \(t\le t_0\). For a fixed t satisfying \(t\le t_0\), Lemma 2.4, used for \(A(\textbf{x})={{\mathcal {H}}}(\mathcal {G}(t,\textbf{x}))\), together with \(H(t)/H_{22}=H_1/H_{22}-t\), implies (5.20)–(5.21) and proves Claim 2. \(\square \)
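The mechanism behind the threshold \(t_0\) in Claim 2 is the Schur-complement fact used via Theorem 2.2: for a psd matrix \(M=\begin{pmatrix}a&b^T\\b&C\end{pmatrix}\) with \(C\) positive definite, the largest \(t\) with \(M-tE_{1,1}\succeq 0\) is the Schur complement \(M/C=a-b^TC^{-1}b\). A numerical sketch on hypothetical random data (names and sizes here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5
B = rng.standard_normal((n, n))
C = B @ B.T + n * np.eye(n)              # positive definite lower-right block
b = rng.standard_normal(n)
a = b @ np.linalg.solve(C, b) + 1.7      # chosen so that M/C = 1.7

M = np.block([[np.array([[a]]), b[None, :]], [b[:, None], C]])
t0 = a - b @ np.linalg.solve(C, b)       # Schur complement M/C

def is_psd(A, tol=1e-9):
    # psd test via the smallest eigenvalue, with a small numerical tolerance
    return np.linalg.eigvalsh(A).min() >= -tol

E11 = np.zeros((n + 1, n + 1)); E11[0, 0] = 1.0
assert is_psd(M - t0 * E11)              # still psd exactly at t = M/C
assert not is_psd(M - (t0 + 1e-6) * E11) # fails just beyond the threshold
```

In the notation of the proof, \(a,b,C\) play the roles of the \((1,1)\) entry, the first column and the block \(H_{22}\) of \(H(0)\), and the threshold is \(t_0=H_1/H_{22}\) shifted by the fixed data.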

Claim 3. If \(\eta =0\), then \((0,0)\in \partial {\mathcal {R}}_1\cap {\mathcal {R}}_2\).

Proof of Claim 3

By Claim 1, \(\eta =0\) implies that \((0,0)\in \partial {\mathcal {R}}_1\). By (5.14) and \(\eta =0\), \({{\mathcal {H}}}(A_{\min })={{\mathcal {H}}}({\mathcal {G}}(0,0))\). By Lemma 4.3.(2), \({{\mathcal {H}}}(A_{\min })\succeq 0\). Hence, \((0,0)\in {\mathcal {R}}_2\), which proves Claim 3. \(\square \)

Claim 4. If \(\eta >0\), then:

  • The set \({{\mathcal {I}}}\) (see (5.9)) has at most 2 elements.

  • \({\mathcal {R}}_1\cap {\mathcal {R}}_2\ne \emptyset \) if and only if \({{\mathcal {I}}}\ne \emptyset .\)

  • If \({{\mathcal {I}}}\) has two elements, then \(H_2/H_{22}>0\).

  • If \({{\mathcal {I}}}\) has one element, which we denote by \(({\tilde{t}},{\tilde{u}})\), then one of the following holds:

    • \({\mathcal {R}}_1 \cap {\mathcal {R}}_2={{\mathcal {I}}}\).

    • \(\partial {\mathcal {R}}_2={\mathcal {R}}_2=\{(t,u_0):t\le t_0\}\) and \({{\mathcal {I}}}\subsetneq {\mathcal {R}}_1\cap {\mathcal {R}}_2= \{(t,u_0):{\tilde{t}}\le t\le t_0\}\).

Proof of Claim 4

Note that the set \({{\mathcal {I}}}\) is equal to \(\partial {{\mathcal {R}}}_1 \cap \partial { {\mathcal {R}}}_2\) (see (5.12) and (5.20)). Further on, \(\partial {{\mathcal {R}}}_1\) is the union of the graphs of the square root functions \(\pm \sqrt{\eta \textbf{t}}\), defined for \(\textbf{t}\in [0,\infty )\). Similarly, \(\partial {{\mathcal {R}}}_2\) is the union of the graphs of the square root functions \(u_0\pm \sqrt{(H_1/H_{22}- \textbf{t}) (H_2/H_{22})}\), defined for \(\textbf{t}\in (- \infty ,t_0]\). If \(H_2/H_{22}=0\), then the latter degenerates to the half-line \(\{(t,u_0):t\le t_0\}\). If \({\mathcal {R}}_1\cap {\mathcal {R}}_2\ne \emptyset \), then geometrically it is clear that \({{\mathcal {I}}}\) contains one or two elements. Assume that \({{\mathcal {I}}}\) contains only one element, denoted by \(({\tilde{t}},{\tilde{u}})\). Clearly, \({{\mathcal {I}}}\subseteq {\mathcal {R}}_1\cap {\mathcal {R}}_2\). Further on, we either have \({{\mathcal {I}}}={\mathcal {R}}_1\cap {\mathcal {R}}_2\) or \({{\mathcal {I}}}\subsetneq {\mathcal {R}}_1\cap {\mathcal {R}}_2\). By the forms of \(\partial {\mathcal {R}}_1\) and \(\partial {\mathcal {R}}_2\), the latter case occurs if \(H_2/H_{22}=0\), or equivalently \(\partial {\mathcal {R}}_2={\mathcal {R}}_2=\{(t,u_0):t\le t_0\}\). But then the whole line segment \(\{(t,u_0):{\tilde{t}}\le t\le t_0\}\) lies in \({\mathcal {R}}_1\), which proves Claim 4. \(\square \)
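The intersection \(\partial {\mathcal {R}}_1\cap \partial {\mathcal {R}}_2\) of the boundary square-root branches can also be located numerically. Below is a sketch with hypothetical parameters \(\eta ,u_0,t_0,c\) (all values illustrative; \(c\) plays the role of \(H_2/H_{22}\)): it brackets sign changes of the differences of the branches by dense sampling and refines them by bisection.

```python
import math

# Hypothetical parameters (illustrative only): eta > 0 gives the boundary of
# R_1, u = +/- sqrt(eta*t) for t >= 0; (u0, t0, c) give the boundary of R_2,
# u = u0 +/- sqrt(c*(t0 - t)) for t <= t0, with c in the role of H_2/H_22.
eta, u0, t0, c = 1.0, 0.25, 2.0, 0.5

def h(t):
    return math.sqrt(c * (t0 - t))

def crossings(f, g, lo, hi, n=20_000):
    # Roots of f - g on [lo, hi]: dense sampling to bracket sign changes,
    # then bisection to refine each bracket.
    roots = []
    prev_t, prev_v = lo, f(lo) - g(lo)
    for i in range(1, n + 1):
        t = lo + (hi - lo) * i / n
        v = f(t) - g(t)
        if prev_v * v < 0:
            a, b = prev_t, t
            for _ in range(60):
                mid = 0.5 * (a + b)
                if (f(a) - g(a)) * (f(mid) - g(mid)) <= 0:
                    b = mid
                else:
                    a = mid
            roots.append(0.5 * (a + b))
        prev_t, prev_v = t, v
    return roots

branches_R1 = [lambda t: math.sqrt(eta * t), lambda t: -math.sqrt(eta * t)]
branches_R2 = [lambda t: u0 + h(t), lambda t: u0 - h(t)]

I = [(t, f(t)) for f in branches_R1 for g in branches_R2
     for t in crossings(f, g, 0.0, t0)]
# For these illustrative parameters the boundaries cross in two points.
```

Varying the parameters reproduces the three possibilities in the proof: an empty intersection, a single tangency point, or two transversal crossings.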

Claim 5. Let \(H_2\) (see (5.4)) be positive definite, \((t_1,u_1)\in \partial {\mathcal {R}}_2, (t_2,u_2)\in \partial {\mathcal {R}}_2\) and \(u_1\ne u_2\). Then at least one of \({{\mathcal {H}}}(\mathcal G(t_1,u_1))\) and \({{\mathcal {H}}}({\mathcal {G}}(t_2,u_2))\) admits a \({\mathbb {R}}\)-rm.

Proof of Claim 5

Note that \({{\mathcal {H}}}({\mathcal {G}}(t_i,u_i))\), \(i=1,2\), is of the form

Assume on the contrary that neither \({{\mathcal {H}}}({\mathcal {G}}(t_1,u_1))\) nor \({{\mathcal {H}}}({\mathcal {G}}(t_2,u_2))\) admits a \({\mathbb {R}}\)-rm. Theorem 2.5 implies that the column \(X^k\) of \({{\mathcal {H}}}({\mathcal {G}}(t_i,u_i))\), \(i=1,2\), is not in the span of the other columns. Using this fact, the fact that \({{\mathcal {H}}}({\mathcal {G}}(t_i,u_i))\), \(i=1,2\), are not pd (by \((t_i,u_i)\in \partial {\mathcal {R}}_2\), \(i=1,2\)) and the fact that \(H_2\) is pd, it follows that there is a column relation \( 1 =\sum _{j=1}^{k-1} \alpha ^{(i)}_j X^{j}, \) \(\alpha _j^{(i)}\in {\mathbb {R}}\), in \({{\mathcal {H}}}({\mathcal {G}}(t_i,u_i))\), \(i=1,2\). Since \({{\mathcal {H}}}({\mathcal {G}}(t_i,u_i))\succeq 0\), \(i=1,2\), it follows in particular by Theorem 2.2, used for \((M,A)=({{\mathcal {H}}}(\mathcal G(t_i,u_i)),({{\mathcal {H}}}({\mathcal {G}}(t_i,u_i)))_{\vec {X}^{(0,k-1)}})\), \(i=1,2\), that

$$\begin{aligned} \begin{pmatrix} {\widetilde{\beta }}_{k,0}&{\widetilde{\beta }}_{k+1,0}&({\widehat{h}}_3)^T \end{pmatrix}^T&\in {{\mathcal {C}}}\Big (\big ( {{\mathcal {H}}}({\mathcal {G}}(t_i,u_i)) \big )_{\vec {X}^{(0,k-1)}}\Big ),\quad i=1,2. \end{aligned}$$
(5.22)

Since the first column of \({{\mathcal {H}}}({\mathcal {G}}(t_i,u_i))\), \(i=1,2\), is in the span of the others, (5.22) is equivalent to

$$\begin{aligned} \begin{pmatrix} {\widetilde{\beta }}_{k,0}&{\widetilde{\beta }}_{k+1,0}&({\widehat{h}}_3)^T \end{pmatrix}^T&\in {{\mathcal {C}}}\Big (\big ( {{\mathcal {H}}}({\mathcal {G}}(t_i,u_i)) \big )_{\vec {X}^{(0,k-1)},\vec {X}^{(1,k-1)}}\Big ),\quad i=1,2. \end{aligned}$$
(5.23)

Since

$$\begin{aligned} {\widetilde{H}}_2:= \big ({{\mathcal {H}}}({\mathcal {G}}(t_i,u_i))\big )_{\vec {X}^{(1,k-1)}},\quad i=1,2, \end{aligned}$$

is invertible as a principal submatrix of \(H_2\), it follows that

$$\begin{aligned} \begin{pmatrix} {\widetilde{\beta }}_{k,0}&{\widetilde{\beta }}_{k+1,0}&({\widehat{h}}_3)^T \end{pmatrix}^T&=\Big (\big ( {{\mathcal {H}}}({\mathcal {G}}(t_i,u_i)) \big )_{ \vec {X}^{(0,k-1)},\vec {X}^{(1,k-1)}}\Big ) v, \quad i=1,2, \end{aligned}$$
(5.24)

with

$$\begin{aligned} v= {\widetilde{H}}_2^{-1} \begin{pmatrix} {\widetilde{\beta }}_{k+1,0}&{\widehat{h}}_3 \end{pmatrix}^T = \begin{pmatrix} v_1&v_2&\cdots&v_{k-1} \end{pmatrix}^T. \end{aligned}$$

If \(v_1\ne 0\), this contradicts (5.24), since \(u_{1}\ne u_2\). Hence, \(v_1=0\). By the Hankel structure of \({{\mathcal {H}}}(\mathcal G(t_i,u_i))\), \(i=1,2\), we have that

$$\begin{aligned} \big ( {{\mathcal {H}}}({\mathcal {G}}(t_i,u_i)) \big )_{ \vec {X}^{(0,k-2)},\vec {X}^{(2,k)}}= \big ( {{\mathcal {H}}}({\mathcal {G}}(t_i,u_i)) \big )_{ \vec {X}^{(1,k-1)},\vec {X}^{(1,k-1)}}, \quad i=1,2. \end{aligned}$$

Then (5.24) and \(v_1=0\) imply that

$$\begin{aligned} \Big (\big ( {{\mathcal {H}}}({\mathcal {G}}(t_i,u_i)) \big )_{ \vec {X}^{(0,k-2)},\vec {X}^{(2,k)}}\Big ){\widetilde{v}}= \Big (\big ( {{\mathcal {H}}}({\mathcal {G}}(t_i,u_i)) \big )_{ \vec {X}^{(1,k-1)},\vec {X}^{(1,k-1)}}\Big ){\widetilde{v}}=\textbf{0}_{k-1,1},\nonumber \\ \end{aligned}$$
(5.25)

where \( {\widetilde{v}} = \begin{pmatrix} v_2&\cdots&v_{k-1}&-1 \end{pmatrix}^T. \) Since \(\big ({{\mathcal {H}}}({\mathcal {G}}(t_i,u_i))\big )_{ \vec {X}^{(1,k-1)},\vec {X}^{(1,k-1)}},\) \(i=1,2\), is a principal submatrix of \(H_{2}\), (5.25) contradicts the assumption that \(H_2\) is pd. This proves Claim 5. \(\square \)

Now we prove the implication (5.11)\(\Rightarrow \) Theorem 5.1.(2). Since \((t_0,u_0)\in {\mathcal {R}}_1\), it follows that \({\mathcal {R}}_1\ne \emptyset .\) By (5.12), \(\eta \ge 0\). We separate two cases according to the value of \(\eta \).

Case 1: \(\eta =0\). We separate two cases according to the invertibility of \(H_2\).

Case 1.1: \(H_2\) is not pd. Since \(H_2\) is not pd, by Theorem 2.5 the last column of \({{\mathcal {H}}}({{\mathcal {G}}}(t_{0},u_{0}))\) is in the span of the previous ones. But then the last column of \(H_2\) is in the span of the previous ones as well. This is the case Theorem 5.1.(2(a)ii).

Case 1.2: \(H_2\) is pd. We separate two cases according to the invertibility of \(({{\mathcal {H}}}(A_{\min }))_{\vec {X}^{(0,k-1)}}\).

Case 1.2.1: \({{\,\textrm{rank}\,}}({{\mathcal {H}}}(A_{\min })_{\vec {X}^{(0,k-1)}})=k\). This is the case Theorem 5.1.(2(a)i).

Case 1.2.2: \({{\,\textrm{rank}\,}}({{\mathcal {H}}}(A_{\min })_{\vec {X}^{(0,k-1)}})<k\). We will prove that this case cannot occur. It follows from the assumption in this case that \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })={{\,\textrm{rank}\,}}H_2=k\). Further on, the last column of \({{\mathcal {H}}}(A_{\min })\) cannot be in the span of the previous ones (otherwise \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })<k\)). Hence, by Theorem 2.5, \({{\mathcal {H}}}(A_{\min })={{\mathcal {H}}}({{\mathcal {G}}}(0,0))\) does not admit a \({\mathbb {R}}\)-rm. Using this fact and Claim 3, \((0,0)\in \partial {\mathcal {R}}_2\). If \(t_0=0\), then \({\mathcal {R}}_1\cap {\mathcal {R}}_2= \{(0,0)\}\), which contradicts the third condition in (5.11). So \(0<t_0\) must hold. Since \(\eta =0\), Claim 1 implies that \({\mathcal {R}}_1= \{(t,0) :t\ge 0 \}\) is a horizontal half-line. Note that by \(H_2\succ 0\), we have \(H_2/H_{22}>0\) and hence \(h(t)\not \equiv 0\) (see (5.8)), so \(\partial {\mathcal {R}}_2\) is indeed not just a horizontal half-line. By the form of \(\partial {\mathcal {R}}_2\), which is the union of the graphs of two square root functions on the interval \((- \infty ,t_0]\), intersecting in the point \((t_0,u_0)\in \partial {\mathcal {R}}_2\), it follows that \({\mathcal {R}}_1\cap {\mathcal {R}}_2= \{(0,0)\}\). As above, this contradicts the third condition in (5.11). Hence, Case 1.2.2 cannot occur.

Case 2: \(\eta >0\). By assumptions, \((t_0,u_0)\in {\mathcal {R}}_1\cap {\mathcal {R}}_2\). By Claim 4, \({{\mathcal {I}}}\ne \emptyset \) and \({{\mathcal {I}}}\) has one or two elements. We separate two cases according to the number of elements in \({{\mathcal {I}}}\).

Case 2.1: \({{\mathcal {I}}}\) has two elements. By Claim 4, \(H_2/H_{22}>0\). If \(H_2\) is not pd, then the fact that \({{\mathcal {H}}}(\mathcal {G}(t_0,u_0))\) has a \({\mathbb {R}}\)-rm implies that \(H_2/H_{22}=0\), which is a contradiction. Indeed, if \(H_2/H_{22}>0\) and \(H_2\) is not pd, then there is a nontrivial column relation among the columns \(X^2,\ldots ,X^k\) in \(H_2\). By Proposition 2.3, the same holds for \({{\mathcal {H}}}({\mathcal {G}}(t_0,u_0))\). Let \(\sum _{i=0}^{k-2} c_i X^{i+2}=\textbf{0}\) be the nontrivial column relation in \({{\mathcal {H}}}(\mathcal {G}(t_0,u_0))\). But then \({{\mathcal {Z}}}(x^2\sum _{i=0}^{k-2} c_i x^i)={{\mathcal {Z}}}(x\sum _{i=0}^{k-2} c_i x^i)\) and it follows by [12] that \(\sum _{i=0}^{k-2} c_i X^{i+1}=\textbf{0}\) is also a nontrivial column relation in \({{\mathcal {H}}}({\mathcal {G}}(t_0,u_0))\). In particular, \(H_2/H_{22}=0\). Hence, \(H_2\) is pd. This is the case Theorem 5.1.(2(b)i).

Case 2.2: \({{\mathcal {I}}}\) has one element. Let us denote this element by \(({\tilde{t}},{\tilde{u}})\). By Claim 4, \({{\mathcal {I}}}={\mathcal {R}}_1\cap {\mathcal {R}}_2\) or \(\partial {\mathcal {R}}_2={\mathcal {R}}_2=\{(t,u_0):t\le t_0\}\) and \({{\mathcal {I}}}\subsetneq {\mathcal {R}}_1\cap {\mathcal {R}}_2= \{(t,u_0):{\tilde{t}}\le t\le t_0\}\). We separate two cases according to these two possibilities.

Case 2.2.1: \({{\mathcal {I}}}={\mathcal {R}}_1\cap {\mathcal {R}}_2\). In this case \((t_0,u_0)=({\tilde{t}},{\tilde{u}})\) and hence \({{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}}))\) admits a \({\mathbb {R}}\)-rm. Since \(({\tilde{t}},{\tilde{u}})\in \partial {\mathcal {R}}_2\), \({{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}}))\) is not pd. Hence, by Theorem 2.5, the statement Theorem 5.1.(2(b)ii) holds.

Case 2.2.2: \(\partial {\mathcal {R}}_2=\mathcal {R}_2=\{(t,u_0):t\le t_0\}\) and \({{\mathcal {I}}}\subsetneq {\mathcal {R}}_1\cap \mathcal {R}_2= \{(t,u_0):{\tilde{t}}\le t\le t_0\}\). By (5.20), it follows that \(H_2/H_{22}=0\) (see the definition (5.8) of \(h(\textbf{t})\)). Since \(H_2\) is not pd, Theorem 2.5 used for \({{\mathcal {H}}}({\mathcal {G}}(t_0,u_0))\), implies that the last column of \(H_2\) is in the span of the others. Hence, the same holds by Proposition 2.3 for \({{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}}))\) and \({{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}}))\) admits a \({\mathbb {R}}\)-rm by Theorem 2.5. Since \({{\mathcal {H}}}(\mathcal {G}({\tilde{t}},{\tilde{u}}))\) is not pd, it in particular satisfies (5.10). Hence, we are in the case Theorem 5.1.(2(b)ii).

This concludes the proof of the implication (5.11)\(\Rightarrow \) Theorem 5.1.(2).

Next we prove the implication Theorem 5.1.(2)\(\Rightarrow \) (5.11). We separate four cases according to the assumptions in Theorem 5.1.(2).

Case 1: Theorem 5.1.(2(a)i) holds. By Claim 3, \((0,0)\in {\mathcal {R}}_1\cap {\mathcal {R}}_2\). This and the assumption \({{\,\textrm{rank}\,}}({{\mathcal {H}}}(A_{\min }))_{\vec {X}^{(0,k-1)}}=k\) imply, by Theorem 2.5, that \({{\mathcal {H}}}({\mathcal {G}}(0,0))={{\mathcal {H}}}(A_{\min })\) admits a \({\mathbb {R}}\)-rm. This proves (5.11) in the case of Theorem 5.1.(2(a)i).

Case 2: Theorem 5.1.(2(a)ii) holds. By Claim 3, \((0,0)\in {\mathcal {R}}_1\cap {\mathcal {R}}_2\). Since the last column of \(H_2\) is by assumption in the span of the previous ones, the same holds for \({{\mathcal {H}}}({\mathcal {G}}(0,0))\) by Proposition 2.3. By Theorem 2.5, \({{\mathcal {H}}}({\mathcal {G}}(0,0))\) admits a \({\mathbb {R}}\)-rm. This proves (5.11) in the case of Theorem 5.1.(2(a)ii).

Case 3: Theorem 5.1.(2(b)i) holds. By assumption, \({{\mathcal {I}}}=\partial {\mathcal {R}}_1\cap \partial {\mathcal {R}}_2=\{(t_1,u_1),(t_2,u_2)\}\). Since \(H_2\) is pd, \(\partial {\mathcal {R}}_{2}\) is not a half-line and hence \(u_1\ne u_2\). By Claim 5, at least one of \({{\mathcal {H}}}({\mathcal {G}}(t_{1},u_{1}))\) and \({{\mathcal {H}}}({\mathcal {G}}(t_2,u_2))\) admits a \({\mathbb {R}}\)-rm. This proves (5.11) in the case of Theorem 5.1.(2(b)i).

Case 4: Theorem 5.1.(2(b)ii) holds. The assumptions imply, by Theorem 2.5, that \({{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}}))\) admits a \({\mathbb {R}}\)-rm. This proves (5.11) in the case of Theorem 5.1.(2(b)ii).

This concludes the proof of the implication Theorem 5.1.(2)\(\Rightarrow \)(5.11).

Up to now we have established the equivalence (1) \(\Leftrightarrow \) (2) in Theorem 5.1. It remains to prove the moreover part. We examine again the proof of the implication (2) \(\Rightarrow \) (5.11). By Lemma 4.3.(4),

$$\begin{aligned} {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ) ={{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+ {{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min }). \end{aligned}$$
(5.26)

In the proof of the implications Theorem 5.1.(2(a)i)\(\Rightarrow \) (5.11) and Theorem 5.1.(2(a)ii)\(\Rightarrow \) (5.11) we established that \({{\mathcal {H}}}({\mathcal {G}}(0,0))\) has a \({\mathbb {R}}\)-rm. By Theorem 2.5, there also exists a \(({{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(0,0)))\)-atomic one. By Theorem 2.6, the sequence with the moment matrix \({{\mathcal {F}}}({\mathcal {G}}(0,0))\) can be represented by a \(({{\,\textrm{rank}\,}}{{\mathcal {F}}}({\mathcal {G}}(0,0)))\)-atomic \({{\mathcal {Z}}}(ay+x^2+y^2)\)-rm. By (5.26) and the equality \({\mathcal {G}}(0,0)=A_{\min }\) for \(\eta =0\), in these two cases \(\beta \) has a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm.

In the proof of the implication Theorem 5.1.(2(b)i)\(\Rightarrow \) (5.11) we established that \({{\mathcal {H}}}({\mathcal {G}}(t',u'))\) has a \({\mathbb {R}}\)-rm for some \((t',u')\in {{\mathcal {I}}}\). Analogously as for the point (0, 0) in the previous paragraph, it follows that \(\beta \) has a \(( {{\,\textrm{rank}\,}}{{\mathcal {F}}}({\mathcal {G}}(t',u')) + {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t',u')) )\)-atomic \({{\mathcal {Z}}}(p)\)-rm. Using (5.13), (5.21) and \({{\,\textrm{rank}\,}}H_2={{\,\textrm{rank}\,}}H_{22}+1\) (by \(H_2\) being pd), it follows that

$$\begin{aligned} {{\,\textrm{rank}\,}}{{\mathcal {F}}}({\mathcal {G}}(t',u')) + {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t',u')) = {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+{{\,\textrm{rank}\,}}H_2+1. \end{aligned}$$
(5.27)

We separate two cases:

  • If \({{\mathcal {H}}}(A_{\min })\) is pd, then \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })={{\,\textrm{rank}\,}}H_{2}+1\). This, (5.26) and (5.27) imply that \(\beta \) admits a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm.

  • If \({{\mathcal {H}}}(A_{\min })\) is not pd, then we must have \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })={{\,\textrm{rank}\,}}H_{2}\). Otherwise we have \(({{\mathcal {H}}}(A_{\min }))_{\vec {X}^{(1,k)}}/H_{22}=0\) and hence \(({{\mathcal {H}}}(A_{\min }- \eta E_{2,2}^{(k+1)}))_{\vec {X}^{(1,k)}}/H_{22}<0\), which contradicts \({{\mathcal {H}}}(A_{\min }- \eta E_{2,2}^{(k+1)})\) being psd. Hence, in this case \(\beta \) has a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1)\)-atomic \({{\mathcal {Z}}}(p)\)-rm. Moreover, there cannot exist a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm. Indeed, since \(\eta >0\), at least \({{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+1\) (resp. \({{\,\textrm{rank}\,}}H_2\)) atoms are needed to represent \({{\mathcal {F}}}({\mathcal {G}}(t'',u''))\) (resp. \({{\mathcal {H}}}({\mathcal {G}}(t'',u''))\)) for any \((t'',u'')\in {\mathcal {R}}_1\cap {\mathcal {R}}_2\) (see (5.13) and (5.21)). Hence, at least \({{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min }) + {{\,\textrm{rank}\,}}H_{2}+1\) atoms are needed in a \({{\mathcal {Z}}}(p)\)-rm for any \((t'',u'')\in {\mathcal {R}}_1\cap {\mathcal {R}}_2\).

In the proof of the implication Theorem 5.1.(2(b)ii)\(\Rightarrow \) (5.11) we established that \({{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}}))\) has an \({\mathbb {R}}\)-rm. Arguing as for the point (0, 0) two paragraphs above, it follows that \(\beta \) has a \(( {{\,\textrm{rank}\,}}{{\mathcal {F}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}})) + {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}})) )\)-atomic \({{\mathcal {Z}}}(p)\)-rm. By (5.13) and (5.21), this measure is \(( {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min }) + {{\,\textrm{rank}\,}}H_{22}+2 )\)-atomic.

  • If \({{\mathcal {H}}}(A_{\min })\) is pd, then \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })={{\,\textrm{rank}\,}}H_{22}+2\). This and (5.26) imply that \(\beta \) admits a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm.

  • If \({{\mathcal {H}}}(A_{\min })\) is not pd, then we have \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })={{\,\textrm{rank}\,}}H_{22}+1\), since otherwise the equality \(({{\mathcal {H}}}(A_{\min }))_{\vec {X}^{(1,k)}}/H_{22}=0\) implies \(({{\mathcal {H}}}(A_{\min }- \eta E_{2,2}^{(k+1)}))_{\vec {X}^{(1,k)}}/H_{22}<0\), which contradicts \({{\mathcal {H}}}(A_{\min }- \eta E_{2,2}^{(k+1)})\) being psd. Hence, in this case \(\beta \) has a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1)\)-atomic \({{\mathcal {Z}}}(p)\)-rm. Moreover, there cannot exist a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm in this case. Indeed,

    $$\begin{aligned} ({\mathcal {R}}_1\cap {\mathcal {R}}_2)\setminus {{\mathcal {I}}}= (\partial {\mathcal {R}}_1\cap \mathring{{\mathcal {R}}}_2) \cup (\mathring{{\mathcal {R}}}_1\cap \partial {\mathcal {R}}_2) \cup (\mathring{{\mathcal {R}}}_1\cap \mathring{{\mathcal {R}}}_2). \end{aligned}$$

    Using (5.13) and (5.21), in every point from \(({\mathcal {R}}_1\cap {\mathcal {R}}_2)\setminus {{\mathcal {I}}}\) at least \({{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min }) + {{\,\textrm{rank}\,}}H_{22}+2\) atoms are needed in a \({{\mathcal {Z}}}(p)\)-rm.

This concludes the proof of the moreover part.

Since (5.26) implies that \({{\mathcal {H}}}(A_{\min })\) is pd for a p-pure sequence with \(\widetilde{{\mathcal {M}}}(k;\beta )\succeq 0\), it follows by the moreover part that the existence of a \({{\mathcal {Z}}}(p)\)-rm implies the existence of a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm. \(\square \)

The following example, generated by [37], demonstrates the use of Theorem 5.1 to show that there exists a bivariate \(y(-2y+x^2+y^2)\)-pure sequence \(\beta \) of degree 6 with a positive semidefinite \({\mathcal {M}}(3)\) and without a \({{\mathcal {Z}}}(y(-2y+x^2+y^2))\)-rm.

Example 5.3

Let \(\beta \) be a bivariate degree 6 sequence given by

$$\begin{aligned} \beta _{00}&= 10,&\beta _{10}&=\frac{38}{5},&\beta _{01}&= \frac{39}{5},\\ \beta _{20}&= \frac{602}{25},&\beta _{11}&= \frac{3}{25},&\beta _{02}&=\frac{313}{25},\\ \beta _{30}&=\frac{9152}{125},&\beta _{21}&=\frac{421}{125},&\beta _{12}&=\frac{3}{125},\\ \beta _{03}&=\frac{2709}{125},&\beta _{40}&=\frac{172118}{625},&\beta _{31}&=\frac{27}{625},\\ \beta _{22}&=\frac{2717}{625},&\beta _{13}&=\frac{3}{625},&\beta _{04}&=\frac{24373}{625},\\ \beta _{50}&= \frac{3303368}{3125},&\beta _{41}&= \frac{7789}{3125},&\beta _{32}&= \frac{27}{3125},\\ \beta _{23}&= \frac{19381}{3125},&\beta _{14}&= \frac{3}{3125},&\beta _{05}&= \frac{224349}{3125},\\ \beta _{60}&= 4156,&\beta _{51}&= \frac{243}{15625},&\beta _{42}&= \frac{44453}{15625},\\ \beta _{33}&= \frac{27}{15625},&\beta _{24}&= \frac{149357}{15625},&\beta _{15}&= \frac{3}{15625},\\ \beta _{06}&= \frac{2094133}{15625}. \end{aligned}$$

Assume the notation of Theorem 5.1. The matrix \(\widetilde{\mathcal {M}}(3)\) is psd with eigenvalues \(\approx 4445\), \(\approx 189.2\), \(\approx 16.6\), \(\approx 11.9\), \(\approx 3.2\), \(\approx 1.22\), \(\approx 0.57\), \(\approx 0.022\), \(\approx 0.0030\), 0 and with the column relation

$$\begin{aligned} -2Y^2+X^2Y+Y^3=0. \end{aligned}$$
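These claims about \({\mathcal {M}}(3)\) can be double-checked directly. The sketch below is our own illustration, not part of the original text: it assumes the standard degree-lexicographic indexing of the moment matrix from the introduction, rebuilds \({\mathcal {M}}(3)\) from the moments of Example 5.3 with exact rational arithmetic, verifies the column relation \(-2Y^2+X^2Y+Y^3=0\) exactly, and checks positive semidefiniteness numerically via `numpy`.

```python
from fractions import Fraction as Fr
import numpy as np

# moments beta[(i, j)] of Example 5.3, as exact fractions
beta = {
    (0, 0): Fr(10), (1, 0): Fr(38, 5), (0, 1): Fr(39, 5),
    (2, 0): Fr(602, 25), (1, 1): Fr(3, 25), (0, 2): Fr(313, 25),
    (3, 0): Fr(9152, 125), (2, 1): Fr(421, 125), (1, 2): Fr(3, 125),
    (0, 3): Fr(2709, 125), (4, 0): Fr(172118, 625), (3, 1): Fr(27, 625),
    (2, 2): Fr(2717, 625), (1, 3): Fr(3, 625), (0, 4): Fr(24373, 625),
    (5, 0): Fr(3303368, 3125), (4, 1): Fr(7789, 3125), (3, 2): Fr(27, 3125),
    (2, 3): Fr(19381, 3125), (1, 4): Fr(3, 3125), (0, 5): Fr(224349, 3125),
    (6, 0): Fr(4156), (5, 1): Fr(243, 15625), (4, 2): Fr(44453, 15625),
    (3, 3): Fr(27, 15625), (2, 4): Fr(149357, 15625), (1, 5): Fr(3, 15625),
    (0, 6): Fr(2094133, 15625),
}

# monomials x^i y^j of degree <= 3 in degree-lexicographic order:
# 1, X, Y, X^2, XY, Y^2, X^3, X^2Y, XY^2, Y^3
mons = [(i, d - i) for d in range(4) for i in range(d, -1, -1)]
M3 = [[beta[a + c, b + d] for (c, d) in mons] for (a, b) in mons]

# exact check of the column relation -2*Y^2 + X^2*Y + Y^3 = 0
j1, j2, j3 = mons.index((0, 2)), mons.index((2, 1)), mons.index((0, 3))
relation = [-2 * row[j1] + row[j2] + row[j3] for row in M3]

# numerical psd check: smallest eigenvalue should be ~0, not negative
eigs = np.linalg.eigvalsh(np.array(M3, dtype=float))
```

Every entry of `relation` is exactly zero, and the extreme eigenvalues agree with the values quoted above.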

We have that

$$\begin{aligned} A_{\min } = \begin{pmatrix} \frac{324330}{55873} &{} \frac{132789}{278915} &{} \frac{77}{25} &{} \frac{27}{125}\\[0.5em] \frac{132789}{278915} &{} \frac{4180091}{1394575} &{} \frac{27}{125} &{} \frac{1493}{625}\\[0.5em] \frac{77}{25} &{} \frac{27}{125} &{} \frac{1493}{625} &{} \frac{243}{3125}\\[0.5em] \frac{27}{125} &{} \frac{1493}{625} &{} \frac{243}{3125} &{} \frac{33437}{15625} \end{pmatrix} \end{aligned}$$

and so

$$\begin{aligned} \eta =\frac{77}{25}- \frac{4180091}{1394575}=\frac{4608}{55783}. \end{aligned}$$

The matrix \(H_{2}\) is equal to:

$$\begin{aligned} H_{2}&=\begin{pmatrix} 21 &{} 73 &{} 273\\ 73 &{} 273 &{} 1057\\ 273 &{} 1057 &{} \frac{64904063}{15625} \end{pmatrix}. \end{aligned}$$

The eigenvalues of \(H_2\) are \(\approx 4441.1\), \(\approx 6.74\), \(\approx -0.019\), and hence \(H_2\) is not psd. By Theorem 5.1, \(\beta \) does not have a \({{\mathcal {Z}}}(y(-2y+x^2+y^2))\)-rm, since (2b) of Theorem 5.1 requires \(H_2\) to be psd.
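The exact value of \(\eta \) and the indefiniteness of \(H_2\) can be reproduced without any floating-point rounding; a minimal sketch (ours, using `sympy` for exact rational arithmetic):

```python
from sympy import Matrix, Rational as Q

# eta as the difference of the two displayed entries of A_min
eta = Q(77, 25) - Q(4180091, 1394575)

# the matrix H_2 from Example 5.3
H2 = Matrix([[21, 73, 273],
             [73, 273, 1057],
             [273, 1057, Q(64904063, 15625)]])

# det H2 is exactly -9071048/15625 < 0, while the leading 2x2 minor
# 21*273 - 73**2 = 404 > 0, so H2 has exactly one negative eigenvalue
det_H2 = H2.det()
```

This confirms both \(\eta =\frac{4608}{55783}\) and that \(H_2\) fails to be psd.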

6 Parabolic Type Relation: \(p(x,y)=y(x-y^2)\)

In this section we solve the \({{\mathcal {Z}}}(p)\)-TMP for the sequence \(\beta =\{\beta _{i,j}\}_{i,j\in {\mathbb {Z}}_+,i+j\le 2k}\) of degree 2k, \(k\ge 3\), where \(p(x,y)=y(x-y^2)\). Assume the notation from Sect. 4. If \(\beta \) admits a \({{\mathcal {Z}}}(p)\)-rm, then \({\mathcal {M}}(k;\beta )\) must satisfy the relations

$$\begin{aligned} Y^{3+j}X^{i}=Y^{1+j}X^{i+1}\quad \text {for }i,j\in {\mathbb {Z}}_+\text { such that }i+j\le k-3. \end{aligned}$$
(6.1)

In the presence of all column relations (6.1), the column space \({{\mathcal {C}}}({\mathcal {M}}(k;\beta ))\) is spanned by the columns in the set

$$\begin{aligned} {{\mathcal {T}}}= \vec {X}^{(0,k)} \cup Y\vec {X}^{(0,k-1)} \cup Y^2\vec {X}^{(0,k-2)}, \end{aligned}$$
(6.2)

where

$$\begin{aligned} Y^i\vec {X}^{(j,\ell )}:=(Y^iX^j,Y^iX^{j+1},\ldots ,Y^iX^{\ell }) \quad \text {with }i,j,\ell \in {\mathbb {Z}}_+,\; j\le \ell ,\; i+\ell \le k. \end{aligned}$$
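In particular, \({{\mathcal {T}}}\) consists of \((k+1)+k+(k-1)=3k\) columns, so \({{\,\textrm{rank}\,}}{\mathcal {M}}(k;\beta )\le 3k\) whenever the relations (6.1) hold. A quick enumeration of \({{\mathcal {T}}}\) (our own illustrative sketch in plain Python; pairs \((i,j)\) stand for \(Y^iX^j\)):

```python
def spanning_columns(k):
    """Pairs (i, j) representing Y^i X^j in the set T of (6.2):
    i = 0, 1, 2 and j = 0, ..., k - i."""
    return [(i, j) for i in range(3) for j in range(k - i + 1)]

T = spanning_columns(3)  # k = 3: the nine columns spanning C(M(3))
```

For every \(k\ge 3\) the set has exactly \(3k\) elements, matching the count above.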

Let \({\widetilde{{{\mathcal {M}}}}}(k;\beta )\) be as in (4.8). Let

$$\begin{aligned} A_{\min }:=A_{12}(A_{22})^{\dagger } (A_{12})^T. \end{aligned}$$
(6.3)

As described in Remark 4.4, \(A_{\min }\) might need to be changed to

$$\begin{aligned} {\widehat{A}}_{\min } =A_{\min }+\eta \left( E_{1,k+1}^{(k+1)}+E_{k+1,1}^{(k+1)}\right) , \end{aligned}$$

where

$$\begin{aligned} \eta :=(A_{\min })_{2,k}-(A_{\min })_{1,k+1}. \end{aligned}$$

Let \({{\mathcal {F}}}(\textbf{A})\) and \({{\mathcal {H}}}(\textbf{A})\) be as in (4.10). Define also the matrix function

$$\begin{aligned} {\mathcal {G}}:{\mathbb {R}}^2\rightarrow S_{k+1},\qquad {\mathcal {G}}(\textbf{t},\textbf{u})= {\widehat{A}}_{\min } +\textbf{t}E_{1,1}^{(k+1)} +\textbf{u} E_{k+1,k+1}^{(k+1)}. \end{aligned}$$
(6.4)

Write

(6.5)

Let us define the matrix

$$\begin{aligned} K&:= {{\mathcal {H}}}({\widehat{A}}_{\min })/H_{22}\\&= \begin{pmatrix} \beta _{0,0}-(A_{\min })_{1,1} &{} \beta _{k,0}-(A_{\min })_{2,k}\\[0.2em] \beta _{k,0}-(A_{\min })_{2,k} &{} \beta _{2k,0}-(A_{\min })_{k+1,k+1} \end{pmatrix} - \begin{pmatrix} (h_{12})^T\\ (h_{23})^T \end{pmatrix} (H_{22})^\dagger \begin{pmatrix} h_{12}&h_{23} \end{pmatrix}\\&= \begin{pmatrix} \beta _{0,0}-(A_{\min })_{1,1}-(h_{12})^T(H_{22})^\dagger h_{12}&{} \beta _{k,0}-(A_{\min })_{2,k}-(h_{12})^T(H_{22})^\dagger h_{23}\\[0.3em] \beta _{k,0}-(A_{\min })_{2,k}-(h_{23})^T(H_{22})^\dagger h_{12} &{} \beta _{2k,0}-(A_{\min })_{k+1,k+1}-(h_{23})^T(H_{22})^\dagger h_{23} \end{pmatrix}\\&=: \begin{pmatrix} k_{11} &{} k_{12}\\ k_{12} &{} k_{22} \end{pmatrix}. \end{aligned}$$

Let

$$\begin{aligned} {\widehat{{{\mathcal {T}}}}} =\{ 1 ,Y,X,XY,X^2,X^2Y,\ldots ,X^{i},X^iY,\ldots , X^{k-1},X^{k-1}Y,X^k\}, \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}&{\widehat{P}} \text { be a permutation matrix such that moment matrix } \widehat{{\mathcal {M}}}(k;\beta ):={\widehat{P}}{\mathcal {M}}(k;\beta )({\widehat{P}})^T\\&\text {has rows and columns indexed in the order } {\widehat{{{\mathcal {T}}}}}, {{\mathcal {C}}}\setminus {\widehat{{{\mathcal {T}}}}}. \end{aligned} \end{aligned}$$
(6.6)

Write

(6.7)

The solution to the cubic parabolic type relation TMP is the following.

Theorem 6.1

Let \(p(x,y)=y(x-y^2)\) and \(\beta :=\beta ^{(2k)}=(\beta _{i,j})_{i,j\in {\mathbb {Z}}_+,i+j\le 2k}\), where \(k\ge 3\). Assume also the notation above. Then the following statements are equivalent:

  1. (1)

    \(\beta \) has a \({{\mathcal {Z}}}(p)\)-representing measure.

  2. (2)

    \(\widetilde{{\mathcal {M}}}(k;\beta )\) is positive semidefinite, the relations

    $$\begin{aligned} \beta _{i,j+3}=\beta _{i+1,j+1} \quad \text {hold for every }i,j\in {\mathbb {Z}}_+\text { with }i+j\le 2k-3, \end{aligned}$$
    (6.8)

    \({{\mathcal {H}}}({\widehat{A}}_{\min })\) is positive semidefinite, defining real numbers

    $$\begin{aligned} t_1&=H_1/H_{22} =\beta _{0,0}-(A_{\min })_{1,1} - (h_{12})^T (H_{22})^\dagger h_{12},\\ u_1&=H_2/H_{22} = \beta _{2k,0} -(A_{\min })_{k+1,k+1} - (h_{23})^T (H_{22})^{\dagger } h_{23}, \end{aligned}$$
    (6.9)

    and the property

    $$\begin{aligned}&({{\mathcal {H}}}({\widehat{A}}_{\min }))_{\vec {X}^{(0,k-1)}}\succ 0 \quad \text {or}\quad {{\,\textrm{rank}\,}}({{\mathcal {H}}}({\widehat{A}}_{\min }))_{\vec {X}^{(0,k-1)}} = {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\widehat{A}}_{\min }), \end{aligned}$$
    (6.10)

    one of the following statements holds:

    1. (a)

      \(F_{22}\) is not positive definite, \(\eta =0\) and (6.10) holds.

    2. (b)

      \(F_{22}\) is positive definite, \(H_{22}\) is not positive definite and one of the following holds:

      1. (i)

        \(u_1=\eta =0\).

      2. (ii)

        \(u_1>0\), \(t_1>0\), \(t_1u_1\ge \eta ^2\) and \( \beta _{k,0}-(A_{\min })_{2,k}= (h_{12})^T(H_{22})^{\dagger } h_{23}. \)

    3. (c)

      \(F_{22},H_{22}\) are positive definite and one of the following holds:

      1. (i)

        \(\eta =0\) and (6.10) holds.

      2. (ii)

        \(\eta \ne 0\) and

        $$\begin{aligned} \left( \sqrt{k_{11}k_{22}}- {{\,\textrm{sign}\,}}(k_{12})k_{12}\right) ^2\ge \eta ^2, \end{aligned}$$
        (6.11)

        where \({{\,\textrm{sign}\,}}\) is the sign function and \({{\,\textrm{sign}\,}}(0)=0\).

Moreover, if a \({\mathcal {Z}}(p)\)-representing measure for \(\beta \) exists, then:

  • There exists an at most \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1)\)-atomic \({{\mathcal {Z}}}(p)\)-representing measure.

  • There exists a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-representing measure if and only if any of the following holds:

    • \(\eta =0\).

    • \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })= {{\,\textrm{rank}\,}}H_{22}+2. \)

    • \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })= {{\,\textrm{rank}\,}}H_{22}+1 \) and one of the following holds:

      \(*\):

      \(H_{22}\) is not positive definite and \(t_1u_1=\eta ^2\).

      \(*\):

      \(H_{22}\) is positive definite, \(k_{12}=0\) and \(k_{11}k_{22}=\eta ^2\).

In particular, a p-pure sequence \(\beta \) with a \({{\mathcal {Z}}}(p)\)-representing measure admits a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-representing measure.

Remark 6.2

In this remark we explain the idea of the proof of Theorem 6.1 and the meaning of conditions in the statement of the theorem.

By Lemmas 4.1 and 4.2, the existence of a \({\mathcal {Z}}(p)\)-rm for \(\beta \) is equivalent to the existence of \(t,u\in {\mathbb {R}}\) such that \({{\mathcal {F}}}({\mathcal {G}}(t,u))\) admits a \({{\mathcal {Z}}}(x-y^2)\)-rm and \({{\mathcal {H}}}({\mathcal {G}}(t,u))\) admits an \({\mathbb {R}}\)-rm. Let

$$\begin{aligned} {\mathcal {R}}_1&=\big \{(t,u)\in {\mathbb {R}}^2:{{\mathcal {F}}}({\mathcal {G}}(t,u))\succeq 0\big \} \quad \text {and}\quad {\mathcal {R}}_2 =\big \{(t,u)\in {\mathbb {R}}^2:{{\mathcal {H}}}({\mathcal {G}}(t,u))\succeq 0\big \}. \end{aligned}$$

We denote by \(\partial {\mathcal {R}}_i\) and \(\mathring{{\mathcal {R}}}_i\) the topological boundary and the interior of the set \({\mathcal {R}}_i\), respectively. By the necessary conditions for the existence of a \({{\mathcal {Z}}}(p)\)-rm [12, 14, 25], \(\widetilde{{\mathcal {M}}}(k;\beta )\) must be psd and the relations (6.8) must hold. Then Theorem 6.1.(1) is equivalent to

$$\begin{aligned}&\widetilde{{{\mathcal {M}}}}(k;\beta )\succeq 0, \text { the relations } (6.8)\text { hold and }\nonumber \\&\exists (t_0,u_0)\in {\mathcal {R}}_1\cap {\mathcal {R}}_2: {{\mathcal {F}}}({\mathcal {G}}(t_0,u_0)) \text { and } {{\mathcal {H}}}({\mathcal {G}}(t_0,u_0))\text { admit }\nonumber \\&\hspace{1cm} \text {a }{{\mathcal {Z}}}(x-y^2)\text {-rm and an }{\mathbb {R}}\text {-rm, respectively.} \end{aligned}$$
(6.12)

In the proof of Theorem 6.1 we show that (6.12) is equivalent to Theorem 6.1.(2):

  1. (1)

    First we establish (see Claims 1 and 2 below) the possible shapes of \({\mathcal {R}}_1\) and \({\mathcal {R}}_2\):

    • \({\mathcal {R}}_1\) is one of the following:

      [Figure: the two possible shapes of the region \({\mathcal {R}}_1\)]

      where the left case occurs if \(\eta \ne 0\) and the right if \(\eta =0\).

    • \({\mathcal {R}}_2\) is one of the following:

      [Figure: the two possible shapes of the region \({\mathcal {R}}_2\)]

      where the left case occurs if \(k_{12}\ne 0\) and the right if \(k_{12}=0\).

  2. (2)

    If \(F_{22}\) is positive semidefinite but not positive definite, then we show that (6.12) is equivalent to

    $$\begin{aligned} \begin{aligned}&\widetilde{{{\mathcal {M}}}}(k;\beta )\succeq 0, \text { the relations } (6.8)\text { hold}, \eta =0\text { and } {{\mathcal {H}}}({\mathcal {G}}(0,0))\text { admits an }{\mathbb {R}}\text {-rm}. \end{aligned} \end{aligned}$$
    (6.13)

    The latter statement is further equivalent to Theorem 6.1.(2a).

  3. (3)

    Assume that \(F_{22}\) is positive definite and \(H_{22}\) is positive semidefinite but not positive definite. If:

    • \(u_1=0\), then we show that (6.12) is equivalent to (6.13). The latter statement is further equivalent to Theorem 6.1.(2(b)i).

    • \(u_1>0\), then we show that (6.12) is equivalent to

      $$\begin{aligned} \begin{aligned}&\widetilde{{{\mathcal {M}}}}(k;\beta )\succeq 0, \text { the relations } (6.8)\text { hold, } {{\mathcal {F}}}({\mathcal {G}}(t_1,u_1)) \text { and }\\&{{\mathcal {H}}}({\mathcal {G}}(t_1,u_1))\text { admit a } {{\mathcal {Z}}}(x-y^2)\text {-rm and an }{\mathbb {R}}\text {-rm, respectively.} \end{aligned} \end{aligned}$$

      The latter statement is further equivalent to Theorem 6.1.(2(b)ii).

    • \(u_1<0\), then (6.12) cannot hold.

  4. (4)

    Assume that \(F_{22}\) and \(H_{22}\) are positive definite. If:

    • \(\eta =0\), then we show that (6.12) is equivalent to (6.13). The latter statement is further equivalent to Theorem 6.1.(2(c)i).

    • \(\eta \ne 0\), then we show that (6.12) is equivalent to \({\mathcal {R}}_1\cap {\mathcal {R}}_2\ne \emptyset \). The latter statement is further equivalent to Theorem 6.1.(2(c)ii).

Proof of Theorem 6.1

Let \({\mathcal {R}}_1, {\mathcal {R}}_2\) be as in Remark 6.2. As explained in Remark 6.2, Theorem 6.1.(1) is equivalent to (6.12), thus it remains to prove that (6.12) is equivalent to Theorem 6.1.(2).

First we establish a few claims needed in the proof. Claim 1 (resp. 2) describes \({\mathcal {R}}_1\) (resp. \({\mathcal {R}}_2\)) concretely.

Claim 1. Assume that \(\widetilde{{\mathcal {M}}}(k;\beta )\succeq 0\). Then

$$\begin{aligned} {\mathcal {R}}_1 = \big \{ (t,u)\in {\mathbb {R}}^2:t\ge 0, u\ge 0, tu\ge \eta ^2 \big \}. \end{aligned}$$
(6.14)

If \((t,u)\in {\mathcal {R}}_1\), we have

$$\begin{aligned} {{\,\textrm{rank}\,}}{{\mathcal {F}}}({\mathcal {G}}(t,u))= \left\{ \begin{array}{rl} {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min }),&{} \text {if } \eta =t=u=0, \\[0.3em] {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+1,&{} \text {if } (\eta =t=0, u>0) \text { or } (\eta =u=0, t>0) \\[0.2em] &{} \text { or }(\eta \ne 0, tu=\eta ^2),\\[0.3em] {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+2,&{} \text {if } tu>\eta ^2. \end{array} \right. \end{aligned}$$
(6.15)

where \(A_{\min }\) is as in (6.3).

Proof of Claim 1

Note that

$$\begin{aligned} \begin{aligned} {\mathcal {G}}(\textbf{t},\textbf{u})&= A_{\min } +\eta \big (E_{1,k+1}^{(k+1)}+E_{k+1,1}^{(k+1)}\big ) +\textbf{t}E_{1,1}^{(k+1)} +\textbf{u}E_{k+1,k+1}^{(k+1)}\\&= A_{\min }+ \begin{pmatrix} \textbf{t} &{} \textbf{0}_{1,k-1} &{} \eta \\ \textbf{0}_{k-1,1} &{} \textbf{0}_{k-1} &{} \textbf{0}_{k-1,1} \\ \eta &{} \textbf{0}_{1,k-1} &{} \textbf{u} \end{pmatrix}. \end{aligned}\nonumber \\ \end{aligned}$$
(6.16)

By Lemma 4.3, we have that

$$\begin{aligned} {{\mathcal {F}}}({\mathcal {G}}(t,u))\succeq 0 \quad \Leftrightarrow \quad {\mathcal {G}}(t,u)\succeq A_{\min } \end{aligned}$$
(6.17)

Using (6.16), (6.17) and the definition of \({\mathcal {R}}_1\), we have that

$$\begin{aligned} (t,u)\in {\mathcal {R}}_1 \quad&\Leftrightarrow \quad \begin{pmatrix} t &{} \eta \\ \eta &{} u \end{pmatrix}\succeq 0 \quad \Leftrightarrow \quad t\ge 0, u\ge 0, tu\ge \eta ^2, \end{aligned}$$
(6.18)

which proves (6.14).
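The equivalence in (6.18) is the standard psd criterion for a symmetric \(2\times 2\) matrix. The following sketch (our own cross-check, Python with `numpy`) compares the closed-form inequalities with a numerical eigenvalue test on a small integer grid:

```python
import numpy as np

def psd_by_eigs(t, u, eta, tol=1e-9):
    """Numerical psd test for the 2x2 matrix [[t, eta], [eta, u]]."""
    m = np.array([[t, eta], [eta, u]], dtype=float)
    return np.linalg.eigvalsh(m).min() >= -tol

def psd_by_ineq(t, u, eta):
    """Closed-form test from (6.18)."""
    return t >= 0 and u >= 0 and t * u >= eta ** 2

# the two tests agree on every integer triple in the grid
grid = range(-3, 4)
agree = all(psd_by_eigs(t, u, e) == psd_by_ineq(t, u, e)
            for t in grid for u in grid for e in grid)
```

Boundary triples with \(tu=\eta ^2\) land (up to rounding) on a zero eigenvalue, which is why a small tolerance is used in the eigenvalue test.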

To prove (6.15) first note that by construction of \({{\mathcal {F}}}(A_{\min })\), the columns 1 and \(X^k\) are in the span of the columns indexed by \({{\mathcal {C}}}\setminus \vec {X}^{(0,k)}\). Hence, there are vectors

$$\begin{aligned} v_1, v_2 \in \ker {{\mathcal {F}}}(A_{\min }) \end{aligned}$$
(6.19)

of the forms

$$\begin{aligned} v_1=\begin{pmatrix} 1&\textbf{0}_{1,k}&({\tilde{v}}_1)^T \end{pmatrix}^T\in {\mathbb {R}}^{\frac{(k+1)(k+2)}{2}} \quad \text {and}\quad v_2=\begin{pmatrix} \textbf{0}_{1,k}&1&({\tilde{v}}_2)^T \end{pmatrix}^T\in {\mathbb {R}}^{\frac{(k+1)(k+2)}{2}}. \nonumber \\ \end{aligned}$$
(6.20)

Let \(r:={{\,\textrm{rank}\,}}\begin{pmatrix} t &{}\quad \eta \\ \eta &{}\quad u \end{pmatrix}\). Clearly,

$$\begin{aligned} {{\,\textrm{rank}\,}}{{\mathcal {F}}}({\mathcal {G}}(t,u))\le {{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+r. \end{aligned}$$
(6.21)

We separate three cases according to r.

Case 1: \(r=0\). In this case \(t=u=\eta =0\) and \({\mathcal {G}}(0,0)=A_{\min }\), so (6.15) clearly holds.

Case 2: \(r=1\). In this case \(tu=\eta ^2\). Together with (6.18), this is equivalent to \((\eta =t=0, u>0)\) or \((\eta =u=0, t>0)\) or \((\eta \ne 0, tu=\eta ^2)\). By (6.21) and \({{\mathcal {F}}}({\mathcal {G}}(t,u))\succeq {{\mathcal {F}}}(A_{\min })\), to prove (6.15) it suffices to find \(v\in \ker {{\mathcal {F}}}(A_{\min })\) with \(v\notin \ker {{\mathcal {F}}}({\mathcal {G}}(t,u))\). Note that at least one of \(v_1,v_2\) from (6.20) is such a vector, since

$$\begin{aligned} (v_1)^T{{\mathcal {F}}}({\mathcal {G}}(t,u))v_1=t \quad \text {and}\quad (v_2)^T{{\mathcal {F}}}({\mathcal {G}}(t,u))v_2=u. \end{aligned}$$

Case 3: \(r=2\). In this case \(tu>\eta ^2\). Note that

$$\begin{aligned} {{\mathcal {F}}}({\mathcal {G}}(t,u))= {{\mathcal {F}}}\Big ({\mathcal {G}}\Big (\frac{\eta ^2}{u},u\Big )\Big )+ \begin{pmatrix} t- \frac{\eta ^2}{u} \end{pmatrix} \oplus \textbf{0}_{\frac{(k+1)(k+2)}{2}-1} \succeq {{\mathcal {F}}}\Big ({\mathcal {G}}\Big (\frac{\eta ^2}{u},u\Big )\Big ). \nonumber \\ \end{aligned}$$
(6.22)

By Case 2, we have \({{\,\textrm{rank}\,}}{{\mathcal {F}}}\Big ({\mathcal {G}}\Big (\frac{\eta ^2}{u},u\Big )\Big )={{\,\textrm{rank}\,}}{{\mathcal {F}}}(A_{\min })+1\). By (6.21) and (6.22), to prove (6.15) it suffices to find \(v\in \ker {{\mathcal {F}}}\Big ({\mathcal {G}}\Big (\frac{\eta ^2}{u},u\Big )\Big )\) with \(v\notin \ker {{\mathcal {F}}}({\mathcal {G}}(t,u))\). We will check below that \(v_3\), defined by

$$\begin{aligned} v_3= v_1- \frac{\eta }{u}v_2 = \begin{pmatrix} 1&\textbf{0}_{1,k-1}&- \frac{\eta }{u}&({\tilde{v}}_3)^T\end{pmatrix}^T\in {\mathbb {R}}^{\frac{(k+1)(k+2)}{2}}, \end{aligned}$$

is such a vector. This follows by

$$\begin{aligned} {{\mathcal {F}}}\Big (\mathcal G\Big (\frac{\eta ^2}{u},u\Big )\Big )v_3=\textbf{0}_{\frac{(k+1)(k+2)}{2},1} \end{aligned}$$

and

$$\begin{aligned} (v_3)^T{{\mathcal {F}}}({\mathcal {G}}(t,u))v_3=t- \frac{\eta ^2}{u}>0. \end{aligned}$$

This concludes the proof of Claim 1. \(\square \)

Note that

Define the matrix function

$$\begin{aligned} \begin{aligned} {\mathcal {K}}(\textbf{t},\textbf{u}) = {{\mathcal {H}}}(\mathcal G(\textbf{t},\textbf{u}))\big /H_{22}&= {{\mathcal {H}}}(\widehat{A}_{\min })\big /H_{22} - \begin{pmatrix} \textbf{t} &{} 0 \\ 0 &{} \textbf{u} \end{pmatrix}\\&= K- \begin{pmatrix} \textbf{t} &{} 0 \\ 0 &{} \textbf{u} \end{pmatrix} = \begin{pmatrix} k_{11}-\textbf{t} &{} k_{12} \\ k_{12} &{} k_{22}-\textbf{u} \end{pmatrix}. \end{aligned} \nonumber \\ \end{aligned}$$
(6.23)

Claim 2. Assume that \(\widetilde{{\mathcal {M}}}(k;\beta )\succeq 0\). Then

$$\begin{aligned} \begin{aligned} {\mathcal {R}}_2&= \big \{ (t,u)\in {\mathbb {R}}^2:{\mathcal {K}}(t,u)\succeq 0 \big \}\\&=\big \{ (t,u)\in {\mathbb {R}}^2:t\le k_{11}, u\le k_{22}, (k_{11}-t)(k_{22}-u)\ge k_{12}^2 \big \}. \end{aligned}\nonumber \\ \end{aligned}$$
(6.24)

If \((t,u)\in {\mathcal {R}}_2\), we have

$$\begin{aligned} {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t,u))= \left\{ \begin{array}{rl} {{\,\textrm{rank}\,}}H_{22},&{} \text {if } k_{12}=0, t=k_{11}, u=k_{22}, \\[0.3em] {{\,\textrm{rank}\,}}H_{22}+1,&{} \text {if } (k_{11}-t)(k_{22}-u)=k_{12}^2, (t\ne k_{11}\text { or }u\ne k_{22}),\\[0.3em] {{\,\textrm{rank}\,}}H_{22}+2,&{} \text {if } (k_{11}-t)(k_{22}-u)>k_{12}^2. \end{array} \right. \end{aligned}$$
(6.25)

where \(A_{\min }\) is as in (6.3).

Proof of Claim 2

Permuting rows and columns of \({{\mathcal {H}}}({\mathcal {G}}(\textbf{t},\textbf{u}))\) we define

Note that

$$\begin{aligned} {{\mathcal {H}}}({\mathcal {G}}(t,u))\succeq 0 \quad \Leftrightarrow \quad {\widetilde{{{\mathcal {H}}}}}({\mathcal {G}}(t,u))\succeq 0 \end{aligned}$$

and

(6.26)

By Lemma 4.3.2, \({{\mathcal {H}}}(A_{\min })\succeq 0\). Permuting rows and columns, this implies that

By Theorem 2.2, applied to \((M,C)=({\widetilde{{{\mathcal {H}}}}}(A_{\min }),H_{22})\), it follows that \(H_{22}\succeq 0\) and \(h_{12},h_{23}\in {{\mathcal {C}}}(H_{22})\). Let

$$\begin{aligned} {\mathcal {L}}: S_2\rightarrow S_{k+1},\quad {\mathcal {L}}(\textbf{A}) = \begin{pmatrix} \textbf{A} &{} \left( \begin{array}{c} (h_{12})^T \\ (h_{23})^T \end{array}\right) \\ \left( \begin{array}{cc} h_{12} &{} h_{23} \end{array}\right)&H_{22} \end{pmatrix}. \end{aligned}$$

be a matrix function. Using Theorem 2.2 again for \((M,C)=({\mathcal {L}}(A),H_{22})\), it follows that

$$\begin{aligned} {\mathcal {L}}(A)\succeq 0 \quad \Leftrightarrow \quad A\succeq \begin{pmatrix} (h_{12})^T\\ (h_{23})^T \end{pmatrix} (H_{22})^\dagger \begin{pmatrix} h_{12}&h_{23} \end{pmatrix} \end{aligned}$$
(6.27)

and

$$\begin{aligned} {{\,\textrm{rank}\,}}{{\mathcal {L}}}(A)={{\,\textrm{rank}\,}}H_{22}+{{\,\textrm{rank}\,}}\left( A-\begin{pmatrix} (h_{12})^T\\ (h_{23})^T \end{pmatrix} (H_{22})^\dagger \begin{pmatrix} h_{12}&h_{23} \end{pmatrix}\right) \end{aligned}$$
(6.28)
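Both (6.27) and (6.28) are standard facts about generalized Schur complements (Theorem 2.2). A toy instance (ours; the concrete \(H_{22}\), \(h_{12}\), \(h_{23}\) and \(A\) are hypothetical choices, using `sympy` and `numpy`) illustrating the equivalence and the rank identity:

```python
from sympy import Matrix, BlockMatrix
import numpy as np

H22 = Matrix([[2, 0], [0, 1]])   # invertible, so (H22)^dagger = (H22)^{-1}
B = Matrix([[1, 1], [0, 2]])     # rows (h12)^T and (h23)^T
S = B * H22.inv() * B.T          # the subtracted term in (6.27)

A = S + Matrix([[1, 0], [0, 0]])  # chosen so that A - S is psd of rank 1
L = BlockMatrix([[A, B], [B.T, H22]]).as_explicit()  # L(A) as in the text

# (6.28): rank L(A) = rank H22 + rank(A - S)
rank_identity = (L.rank() == H22.rank() + (A - S).rank())

# (6.27): A - S psd  =>  L(A) psd (checked numerically)
min_eig = np.linalg.eigvalsh(np.array(L.tolist(), dtype=float)).min()
```

Here \(L(A)\) is singular of rank 3, exactly \({{\,\textrm{rank}\,}}H_{22}+{{\,\textrm{rank}\,}}(A-S)=2+1\), and its smallest eigenvalue is zero up to rounding.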

Further, (6.27) implies that

$$\begin{aligned} {\widetilde{{{\mathcal {H}}}}}({\mathcal {G}}(t,u))&\succeq 0\\&\quad \Leftrightarrow \quad \begin{pmatrix} \beta _{0,0}-(A_{\min })_{1,1}-t &{} \beta _{k,0}-(A_{\min })_{2,k} \\ \beta _{k,0}-(A_{\min })_{2,k} &{} \beta _{2k,0}-(A_{\min })_{k+1,k+1}-u \end{pmatrix} \\ {}&\quad - \begin{pmatrix} (h_{12})^T\\ (h_{23})^T \end{pmatrix} (H_{22})^\dagger \begin{pmatrix} h_{12}&h_{23} \end{pmatrix} \succeq 0\\&\quad \Leftrightarrow \quad {\mathcal {K}}(t,u)\succeq 0, \end{aligned}$$

where we use the definition (6.23) of \({\mathcal {K}}(t,u)\) in the last equivalence. Moreover, \({{\,\textrm{rank}\,}}\widetilde{{\mathcal {H}}}({\mathcal {G}}(t,u))={{\,\textrm{rank}\,}}H_{22}+{{\,\textrm{rank}\,}}{\mathcal {K}}(t,u)\). This proves (6.24) and (6.25). \(\square \)

Claim 3. If \((t,u)\in {\mathcal {R}}_2\cap ({\mathbb {R}}_+)^2\), then

$$\begin{aligned} tu\le (\sqrt{k_{11}k_{22}}-{{\,\textrm{sign}\,}}(k_{12})k_{12})^2=:p_{\max }. \end{aligned}$$

The equality is achieved if:

  • \(k_{12}=0\), in the point \((t,u)=(k_{11},k_{22})\).

  • \(k_{12}>0\), in the point \((t_-,u_-)= ( k_{11}-\frac{k_{12}\sqrt{k_{11}}}{\sqrt{k_{22}}}, k_{22}-\frac{k_{12}\sqrt{k_{22}}}{\sqrt{k_{11}}} )\).

  • \(k_{12}<0\), in the point \((t_+,u_+)= ( k_{11}+\frac{k_{12}\sqrt{k_{11}}}{\sqrt{k_{22}}}, k_{22}+\frac{k_{12}\sqrt{k_{22}}}{\sqrt{k_{11}}} )\).

Moreover, if \(k_{12}\ne 0\), then for every \(p\in [0,p_{\max }]\) there exists a point \(({\tilde{t}},{\tilde{u}})\in {\mathcal {R}}_2\cap ({\mathbb {R}}_+)^2\) such that \({\tilde{t}} {\tilde{u}}=p\) and \((k_{11}-{\tilde{t}})(k_{22}-{\tilde{u}})=k_{12}^2\).

Proof of Claim 3

If \(k_{12}=0\), then \((t,u)\in {\mathcal {R}}_2\cap ({\mathbb {R}}_+)^2=[0,k_{11}]\times [0,k_{22}]\) and Claim 3 is clear.

Assume that \(k_{12}\ne 0\). Then tu is clearly maximized at some point satisfying \((k_{11}-t)(k_{22}-u)=k_{12}^2\). Let \(f(t):=t\big (k_{22}-\frac{k_{12}^2}{k_{11}-t}\big ).\) We are searching for the maximum of f(t) on the interval \([0,k_{11}]\). The stationary points of f are \(t_{\pm }=k_{11}\pm \frac{k_{12}\sqrt{k_{11}}}{\sqrt{k_{22}}}\), with corresponding values \(u_\pm =k_{22}\pm \frac{k_{12}\sqrt{k_{22}}}{\sqrt{k_{11}}}\). If \(k_{12}>0\), then \(t_-\in [0,k_{11}]\) (note that \(k_{11}k_{22}\ge k_{12}^2\) if \({\mathcal {R}}_2\cap ({\mathbb {R}}_+)^2\ne \emptyset \)). Further on, \(t_-u_-=(\sqrt{k_{11}k_{22}}-k_{12})^2\). Similarly, if \(k_{12}<0\), then \(t_{+}\in [0,k_{11}]\) and \(t_+u_+=(\sqrt{k_{11}k_{22}}+k_{12})^2\). The moreover part follows by noticing that \(f(0)=0\) and hence on the interval \([0,t_{\pm }]\), f attains all values between 0 and \(p_{\max }\). \(\square \)
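As a concrete sanity check of Claim 3 (our own illustration, Python with `sympy`, using the hypothetical values \(k_{11}=4\), \(k_{22}=9\), \(k_{12}=2\)):

```python
from sympy import Rational, sqrt, symbols, diff

k11, k22, k12 = 4, 9, 2                        # satisfies k11*k22 >= k12**2

t = symbols('t')
f = t * (k22 - k12**2 / (k11 - t))             # f(t) from the proof of Claim 3

t_min = k11 - k12 * sqrt(Rational(k11, k22))   # t_-  (here 8/3)
u_min = k22 - k12 * sqrt(Rational(k22, k11))   # u_-  (here 6)
p_max = (sqrt(k11 * k22) - k12) ** 2           # maximal value of t*u (here 16)
```

The point \((t_-,u_-)\) is a stationary point of f, lies on the constraint curve \((k_{11}-t)(k_{22}-u)=k_{12}^2\), and attains \(t_-u_-=p_{\max }\), all verified exactly.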

In the proof of Theorem 6.1 we will need a few further observations:

  • Observe that

    $$\begin{aligned} \begin{aligned}&({{\mathcal {H}}}({\mathcal {G}}(t,u)))_{\vec {X}^{(0,k-1)}} =H_{1}-tE_{1,1}^{(k)},\\&({{\mathcal {H}}}({\mathcal {G}}(t,u)))_{\vec {X}^{(1,k-1)}} =H_{22},\\&({{\mathcal {H}}}({\mathcal {G}}(t,u)))_{\vec {X}^{(1,k)}} =H_{2}-uE_{k,k}^{(k)}. \end{aligned} \nonumber \\ \end{aligned}$$
    (6.29)
  • We have

    $$\begin{aligned} ({{\mathcal {H}}}({\mathcal {G}}(t,u)))_{\vec {X}^{(0,k-1)}}\big / ({{\mathcal {H}}}({\mathcal {G}}(t,u)))_{\vec {X}^{(1,k-1)}} = H_1/H_{22}-t = t_1-t, \end{aligned}$$
    (6.30)

    where in the first equality we used (6.29) and in the second the definition of \(t_1\) (see (6.9)).

  • We have

    $$\begin{aligned} ({{\mathcal {H}}}({\mathcal {G}}(t,u)))_{\vec {X}^{(1,k)}}\big / ({{\mathcal {H}}}({\mathcal {G}}(t,u)))_{\vec {X}^{(1,k-1)}} = H_2/H_{22}-u = u_1-u, \end{aligned}$$
    (6.31)

    where in the first equality we used (6.29) and in the second the definition of \(u_1\) (see (6.9)).

First we prove the implication (6.12) \(\Rightarrow \) Theorem 6.1.(2). By the necessary conditions for the existence of a \({{\mathcal {Z}}}(p)\)-rm [12, 14, 25], \(\widetilde{{\mathcal {M}}}(k;\beta )\) must be psd and the relations (6.8) must hold. By Lemma 4.3.2, \({{\mathcal {F}}}(A_{\min })\succeq 0\). Hence,

(6.32)

where \({\widehat{P}}\) is as in (6.6). In particular, \(F_{22}\succeq 0\). We separate two cases according to the invertibility of \(F_{22}\).

Case 1: \(F_{22}\) is not pd. Let \(\beta ^{(c)}\) be a sequence corresponding to the moment matrix \({{\mathcal {F}}}({\mathcal {G}}(t_0,u_0))\). Let \(\gamma =(\gamma _0,\ldots ,\gamma _{4k})\) be a sequence defined by \(\gamma _i=\beta ^{(c)}_{\lfloor \frac{i}{2}\rfloor ,i\; \text {mod}\; 2}\). Note that

$$\begin{aligned} \big ({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,u_0))\big )_{{\widehat{{{\mathcal {T}}}}}\setminus \{ 1 ,X^k\}}= ({\widehat{F}})_{{\widehat{{{\mathcal {T}}}}}\setminus \{ 1 ,X^k\}} =F_{22} =A_{{\widehat{\gamma }}}, \end{aligned}$$

where \({\widehat{\gamma }}=(\gamma _2,\ldots ,\gamma _{4k-2})\). Since \(F_{22}\) is not pd, there is a non-trivial column relation in \(F_{22}\), which is also a column relation in \(A_{\gamma }\) by Proposition 2.3. By Theorem 2.7, \(\gamma \) has an \({\mathbb {R}}\)-rm, which implies, by Theorem 2.5, that \(A_{\gamma }\) is rg. Hence, the last column of \(A_{\gamma }={\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,u_0))\) is in the span of the columns in \({\widehat{{{\mathcal {T}}}}}\setminus \{1,X^k\}\). It follows that

$$\begin{aligned} \begin{pmatrix} (f_{12})^T\\ F_{22}\\ (f_{23})^T \end{pmatrix} (F_{22})^\dagger f_{23}= \begin{pmatrix} (A_{\min })_{2,k}\\[0.2em] f_{23}\\[0.2em] (A_{\min })_{k+1,k+1}+u_0 \end{pmatrix}. \end{aligned}$$
(6.33)

On the other hand, by construction of \({\widehat{F}}\), the column \(X^k\) is also in the span of the columns in \({\widehat{{{\mathcal {T}}}}}\setminus \{1,X^k\}\). Hence,

$$\begin{aligned} \begin{pmatrix} (f_{12})^T\\ F_{22}\\ (f_{23})^T \end{pmatrix} (F_{22})^\dagger f_{23}= \begin{pmatrix} (A_{\min })_{1,k+1}\\[0.2em] f_{23}\\[0.2em] (A_{\min })_{k+1,k+1} \end{pmatrix}. \end{aligned}$$
(6.34)

By (6.33) and (6.34), it follows that \((A_{\min })_{2,k}=(A_{\min })_{1,k+1}\), or equivalently \(\eta =0\), and that \(u_0=0\). Note that

$$\begin{aligned} \begin{aligned} {\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,u_0))={\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,0))&\succeq {\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(0,0))={{\mathcal {F}}}(A_{\min }),\\ {{\mathcal {H}}}({\mathcal {G}}(t_0,u_0))={{\mathcal {H}}}({\mathcal {G}}(t_0,0))&\preceq {{\mathcal {H}}}({\mathcal {G}}(0,0))={{\mathcal {H}}}(A_{\min }). \end{aligned}\nonumber \\ \end{aligned}$$
(6.35)

Further on, \({\widehat{{{\mathcal {F}}}}}(A_{\min })\) has a \({{\mathcal {Z}}}(x-y^2)\)-rm by Theorem 2.7, and \({{\mathcal {H}}}(A_{\min })\) has an \({\mathbb {R}}\)-rm by Theorem 2.5. Indeed, the column \(X^k\) of \({\widehat{{{\mathcal {F}}}}}(A_{\min })\) is in the span of the others, and since \({{\mathcal {H}}}({\mathcal {G}}(t_0,0))\) satisfies the conditions in Theorem 2.5, the same holds for \({{\mathcal {H}}}(A_{\min })\). But then the property (6.10) holds (note that \(\eta =0\)). This is the case Theorem 6.1.(2a).

Case 2: \(F_{22}\) is pd. By Lemma 4.3.2, \({{\mathcal {H}}}(A_{\min })\succeq 0\) (see (6.26)). In particular, \(H_{22}\succeq 0\). We separate two cases according to the invertibility of \(H_{22}\).

Case 2.1: \(H_{22}\) is not pd. By (6.31) and Theorem 2.5, it follows that

$$\begin{aligned} u_1= u_0. \end{aligned}$$
(6.36)

By (6.14),

$$\begin{aligned} u_0\ge 0. \end{aligned}$$
(6.37)

We separate two cases according to the value of \(u_1\).

Case 2.1.1: \(u_1=0\). By (6.36), it follows that \(u_0=0\). Note that

$$\begin{aligned} \big ({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,u_0))\big )_{ \widehat{{\mathcal {T}}}\setminus \{ 1 \} } = \big ({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,0))\big )_{ {\widehat{{{\mathcal {T}}}}}\setminus \{ 1 \} } = \big (\widehat{F}\big )_{{\widehat{{{\mathcal {T}}}}}\setminus \{ 1 \}}. \end{aligned}$$
(6.38)

Since in \({\widehat{F}}\) we have the column relation (6.34) by construction, (6.38) and Proposition 2.3 imply that

$$\begin{aligned} \big ({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,0))\big )_{ {\widehat{{{\mathcal {T}}}}},{\widehat{{{\mathcal {T}}}}}\setminus \{ 1 ,X^k\} } (F_{22})^\dagger f_{23} = \big ({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,0))\big )_{ {\widehat{{{\mathcal {T}}}}},\{X^k\} }, \end{aligned}$$

or equivalently (6.33) with \(u_0=0\). By (6.33) and (6.34), it follows that \((A_{\min })_{2,k}=(A_{\min })_{1,k+1}\) or equivalently \(\eta =0\). This is the case Theorem 6.1.(2(b)i).

Case 2.1.2: \(u_1>0\). Since the column \(X^k\) of \({{\mathcal {H}}}({\mathcal {G}}(t_0,u_1))\) is in the span of the columns in \(\vec {X}^{(1,k-1)}\), inspecting the first row of \({{\mathcal {H}}}({\mathcal {G}}(t_0,u_1))\) shows that

$$\begin{aligned} \beta _{k,0}-(A_{\min })_{2,k}= (h_{12})^T(H_{22})^{\dagger } h_{23}. \end{aligned}$$
(6.39)

Further on,

$$\begin{aligned}{} & {} {{\mathcal {H}}}({\mathcal {G}}(t,u_1))\big / ({{\mathcal {H}}}({\mathcal {G}}(t,u_1)))_{\vec {X}^{(1,k)}}= ({{\mathcal {H}}}({\mathcal {G}}(t,u_1)))_{\vec {X}^{(0,k-1)}}\big / ({{\mathcal {H}}}({\mathcal {G}}(t,u_1)))_{\vec {X}^{(1,k-1)}} \nonumber \\ {}{} & {} \quad = t_1-t, \end{aligned}$$
(6.40)

where we used (6.30) in the second equality. By (6.40) and Theorem 2.2 used for \((M,C)=({{\mathcal {H}}}({\mathcal {G}}(t,u_1)),({{\mathcal {H}}}({\mathcal {G}}(t,u_1)))_{\vec {X}^{(1,k)}})\), it follows that \({{\mathcal {H}}}({\mathcal {G}}(t_1,u_1))\succeq 0\). By Theorem 2.5, \({{\mathcal {H}}}({\mathcal {G}}(t_1,u_1))\) admits a \({\mathbb {R}}\)-rm. Note that

$$\begin{aligned} \begin{aligned} {\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,u_0))={\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,u_1))&\preceq {\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_1,u_1)), \end{aligned} \end{aligned}$$
(6.41)

where we used that \(t_0\le t_1\) by (6.40). By Theorem 2.7, \(({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_1,u_1)))_{\widehat{\mathcal {T}}{\setminus } \{X^k\}}\) must be pd. (Here we used that since \(u_1>0\) and \(F_{22}\succ 0\), it follows that \(({\widehat{{{\mathcal {F}}}}}(\mathcal {G}(t_1,u_1)))_{\widehat{{\mathcal {T}}}{\setminus } \{1\}}\succ 0.\)) Therefore Claim 1 implies that \(t_1>0\) and \(t_1u_1\ge \eta ^2\). Together with (6.39), this is the case Theorem 6.1.(2(b)ii).
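The case analysis above repeatedly passes through Schur complements of the form \(M/C\) (as in (6.40) and Theorem 2.2). As a generic illustration only (a hypothetical \(3\times 3\) matrix, not one of the matrices in this proof), the scalar Schur-complement test for positive semidefiniteness can be sketched as:

```python
def schur_complement_1x1(a, b, C):
    """Schur complement M/C of the invertible 2x2 block C in
    M = [[a, b^T], [b, C]], for a scalar corner entry a."""
    (c11, c12), (c21, c22) = C
    det = c11 * c22 - c12 * c21
    # x = C^{-1} b via Cramer's rule
    x0 = (c22 * b[0] - c12 * b[1]) / det
    x1 = (-c21 * b[0] + c11 * b[1]) / det
    return a - (b[0] * x0 + b[1] * x1)

# M = [[2,1,1],[1,1,0],[1,0,1]]: C is psd and M/C = 2 - 1 - 1 = 0 >= 0,
# so M is psd, with a rank drop since the complement vanishes.
print(schur_complement_1x1(2.0, [1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]]))  # → 0.0
```

A vanishing complement is exactly the borderline situation exploited in the proof: \(M\succeq 0\) with a column relation, so the corresponding rm has fewer atoms.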

Case 2.2: \(H_{22}\) is pd. We separate two cases according to the value of \(\eta .\)

Case 2.2.1: \(\eta =0\). By Lemma 4.3.2, \({{\mathcal {H}}}(A_{\min })\succeq 0\) (see (6.26)).

If \({{\mathcal {H}}}(A_{\min })\) does not admit a \({\mathbb {R}}\)-rm, it follows by Theorem 2.5 that \(({{\mathcal {H}}}(A_{\min }))_{\vec {X}^{(0,k-1)}}\) is not pd and \(u_1>0\). Equivalently,

$$\begin{aligned} t_1=({{\mathcal {H}}}(A_{\min }))_{\vec {X}^{(0,k-1)}} \big / H_{22}=0, \end{aligned}$$

which by (6.30) implies that \(t_0=0\). By Theorem 2.7, since \({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_0,u_0)) = {\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(0,u_0)) \) admits a \({{\mathcal {Z}}}(x-y^2)\)-rm, \(F_{22}\succ 0\) and \(({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(0,u_0)))_{{\widehat{{{\mathcal {T}}}}}{\setminus } \{X^k\}}\) is not pd, it follows that \(u_0=0\). But then \({{\mathcal {H}}}(\mathcal G(t_0,u_0))={{\mathcal {H}}}({\mathcal {G}}(0,0))={{\mathcal {H}}}(A_{\min })\) does not admit a \({\mathbb {R}}\)-rm, which is a contradiction.

Hence, \({{\mathcal {H}}}(A_{\min })\) admits a \({\mathbb {R}}\)-rm, which is equivalent to (6.10) (using \(\eta =0\)). This is the case Theorem 6.1.(2(c)i).

Case 2.2.2: \(\eta \ne 0\). By (6.15) it follows that \(t_0u_0\ge \eta ^2.\) This fact and Claim 3 imply the second condition in the case Theorem 6.1.(2(c)ii).

This concludes the proof of the implication (6.12) \(\Rightarrow \) Theorem 6.1.2.

Next we prove the implication Theorem 6.1.2 \(\Rightarrow \) (6.12). We separate five cases according to the assumptions in Theorem 6.1.2.

Case 1: Theorem 6.1.(2a) holds. By Lemma 4.3.2, \({{\mathcal {F}}}(A_{\min })\succeq 0\) and \({{\mathcal {H}}}(A_{\min })\succeq 0\). Since \(\eta =0\), both matrices have a moment structure. Since by construction, the column \(X^k\) of \({\widehat{{{\mathcal {F}}}}}(A_{\min })\) is in the span of the others, it has a \({{\mathcal {Z}}}(x-y^2)\)-rm by Theorem 2.7. Since \({{\mathcal {H}}}(A_{\min })\) satisfies (6.10) (using \(\eta =0\)), it admits a \({\mathbb {R}}\)-rm by Theorem 2.5. This proves (6.12) in this case.

Case 2: Theorem 6.1.(2(b)i) holds. By the same reasoning as in Case 1 above, \({\widehat{{{\mathcal {F}}}}}(A_{\min })\) has a \({{\mathcal {Z}}}(x-y^2)\)-rm. Since \(u_1=0\), the column \(X^k\) of \({{\mathcal {H}}}(A_{\min })\) is in the span of the other columns. By Theorem 2.5, \({{\mathcal {H}}}(A_{\min })\) admits a \({\mathbb {R}}\)-rm. This proves (6.12) in this case.

Case 3: Theorem 6.1.(2(b)ii) holds. By (6.30), (6.31) and the fourth assumption of (2(b)ii), it follows that \({{\mathcal {H}}}(\mathcal G(t_1,u_1))\) is psd and the columns \(1,X^k\) are in the span of the columns in \(\vec {X}^{(1,k-1)}\). By Theorem 2.5, \({{\mathcal {H}}}(\mathcal G(t_1,u_1))\) admits a \({\mathbb {R}}\)-rm. Since \((t_1,u_1)\in {\mathcal {R}}_1\) by (6.14) and the assumptions in (2(b)ii), it follows that \({\widehat{{{\mathcal {F}}}}}(\mathcal G(t_1,u_1))\) is psd and, by construction, \(\big ({\widehat{{{\mathcal {F}}}}}(\mathcal G(t_1,u_1))\big )_{{\widehat{{{\mathcal {T}}}}}{\setminus } \{X^k\}}\) is pd. By Theorem 2.7, it has a \({{\mathcal {Z}}}(x-y^2)\)-rm. This proves (6.12) in this case.

Case 4: Theorem 6.1.(2(c)i) holds. \({\widehat{{{\mathcal {F}}}}}(A_{\min })\) has a \({{\mathcal {Z}}}(x-y^2)\)-rm and \({{\mathcal {H}}}(A_{\min })\) has a \({\mathbb {R}}\)-rm by the same reasoning as in Case 1 above. This proves (6.12) in this case.

Case 5: Theorem 6.1.(2(c)ii) holds. We separate three cases according to the sign of \(k_{12}\).

  • If \(k_{12}=0\), then by Claim 2, \({{\mathcal {H}}}({\mathcal {G}}(k_{11},k_{22}))\) is psd and the column \(X^k\) is in the span of the previous ones. Since \({{\mathcal {H}}}({\mathcal {G}}(0,0))={{\mathcal {H}}}(\widehat{A}_{\min })\) is psd by assumption, it follows that \(k_{11}\ge 0\) and \(k_{22}\ge 0\). Since \(\eta \ne 0\) and \(k_{11}k_{22}\ge \eta ^2\) by (6.11), it follows that \(k_{11}>0\) and \(k_{22}>0\). By Claim 1, \({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(k_{11},k_{22}))\succ 0\). By Theorem 2.7, it has a \({{\mathcal {Z}}}(x-y^2)\)-rm. This proves (6.12) in this case.

  • If \(k_{12}>0\), then by Claim 3, \({{\mathcal {H}}}({\mathcal {G}}(t_-,u_-))\) is psd and \(t_-u_-\ge \eta ^2\). By construction, \({{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t_-,u_-))=k\) and since \(t_-<k_{11}\), it follows that \(({{\mathcal {H}}}({\mathcal {G}}(t_-,u_-)))_{\vec {X}^{(0,k-1)}}\) is pd. Hence, the column \(X^k\) of \({{\mathcal {H}}}({\mathcal {G}}(t_-,u_-))\) is in the span of the others. By Theorem 2.5, \({{\mathcal {H}}}({\mathcal {G}}(t_-,u_-))\) admits a \({\mathbb {R}}\)-rm. By Claim 1 and \(t_-u_-\ge \eta ^2\), it follows that \({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_-,u_-))\succeq 0\). Since \(t_->0\), it follows that \(\big ({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_-,u_-))\big )_{{\widehat{{{\mathcal {T}}}}}\setminus \{X^k\}}\) is pd. By Theorem 2.7, it has a \({{\mathcal {Z}}}(x-y^2)\)-rm. This proves (6.12) in this case.

  • If \(k_{12}<0\), then the proof of (6.12) is analogous to the case \(k_{12}>0\) by replacing \((t_-,u_-)\) with \((t_+,u_+)\).

This concludes the proof of the implication Theorem 6.1.2 \(\Rightarrow \) (6.12).

We have now established the equivalence 1 \(\Leftrightarrow \) 2 in Theorem 6.1. It remains to prove the moreover part. We revisit the proof of the implication 2 \(\Rightarrow \) (6.12). By Lemma 4.3.4,

$$\begin{aligned} {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ) ={{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+ {{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min }). \end{aligned}$$
(6.42)

In the proofs of the implications Theorem 6.1.(2a)\(\Rightarrow \) (6.12), Theorem 6.1.(2(b)i)\(\Rightarrow \) (6.12) and Theorem 6.1.(2(c)i)\(\Rightarrow \) (6.12), we established that \({\widehat{{{\mathcal {F}}}}}(A_{\min })\) and \({{\mathcal {H}}}(A_{\min })\) admit a \({{\mathcal {Z}}}(x-y^2)\)-rm and a \({\mathbb {R}}\)-rm, respectively. By Theorems 2.5 and 2.7, there also exist a \(({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min }))\)-atomic and a \(({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min }))\)-atomic rm, respectively. By (6.42), \(\beta \) has a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm.
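The identity (6.42) reflects the fact that rank is additive across a block-diagonal decomposition (here up to the congruence provided by Lemma 4.3.4). A generic numerical sketch of this additivity, with hypothetical small matrices rather than the matrices of the proof:

```python
def matrix_rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0  # current pivot row
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(A[i][c]))
        if abs(A[piv][c]) < tol:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            for j in range(c, cols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

def block_diag(A, B):
    """Place A and B as diagonal blocks of a larger matrix."""
    p, q = len(A[0]), len(B[0])
    return [row + [0.0] * q for row in A] + [[0.0] * p + row for row in B]

F = [[1.0, 2.0], [2.0, 4.0]]  # rank 1 (second row is twice the first)
H = [[1.0, 0.0], [0.0, 1.0]]  # rank 2
print(matrix_rank(block_diag(F, H)), matrix_rank(F) + matrix_rank(H))  # → 3 3
```

In the proof this is what lets one count atoms of a \({{\mathcal {Z}}}(p)\)-rm for \(\beta \) by counting atoms of the rms of the two blocks separately.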

Assume that Theorem 6.1.(2(b)ii) holds. We separate two cases according to the value of \(\eta \):

  • \(\eta =0\). We separate two cases according to the existence of a \({\mathbb {R}}\)-rm of \({{\mathcal {H}}}(A_{\min })\):

    • The last column of \({{\mathcal {H}}}(A_{\min })\) is in the span of the previous ones. Then as in the previous paragraph, \({\widehat{{{\mathcal {F}}}}}(A_{\min })\) and \({{\mathcal {H}}}(A_{\min })\) admit a \(({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min }))\)-atomic \({{\mathcal {Z}}}(x-y^2)\)-rm and a \(({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min }))\)-atomic \({\mathbb {R}}\)-rm, respectively. Hence, \(\beta \) has a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm.

    • The last column of \({{\mathcal {H}}}(A_{\min })\) is not in the span of the previous ones. Since also \(t_1>0\), it follows that \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })={{\,\textrm{rank}\,}}H_{22}+2\). But then \({{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t_1,u_1))={{\,\textrm{rank}\,}}H_{22}\) and \({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_1,u_1))={{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+2\) (see (6.15)). This implies that \(\beta \) admits a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm.

  • \(\eta \ne 0\). We separate two cases according to \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })\), which can be either \({{\,\textrm{rank}\,}}H_{22}+2\) or \({{\,\textrm{rank}\,}}H_{22}+1\) (since \(t_1>0\)).

    • \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })={{\,\textrm{rank}\,}}H_{22}+2\). Then, as in the second case of the case \(\eta =0\) above, at the point \((t_1,u_1)\) there is a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm for \(\beta \). (Note that \(t_1u_1\) is automatically strictly larger than \(\eta ^2\); otherwise the measure would be \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )-1)\)-atomic, which is not possible.)

    • \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })={{\,\textrm{rank}\,}}H_{22}+1\). In this case we have

      $$\begin{aligned}&{{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t_1,u_1)) + {{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_1,u_1)) = {{\,\textrm{rank}\,}}H_{22} + {{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t_1,u_1))\\&\quad = \left\{ \begin{array}{rl} {{\,\textrm{rank}\,}}H_{22}+{{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+1,&{} \text {if }t_1u_1=\eta ^2,\\[0.3em] {{\,\textrm{rank}\,}}H_{22}+{{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+2,&{} \text {if }t_1u_1>\eta ^2, \end{array} \right. \\&\quad = \left\{ \begin{array}{rl} {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ),&{} \text {if }t_1u_1=\eta ^2,\\[0.3em] {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1,&{} \text {if }t_1u_1>\eta ^2, \end{array} \right. \end{aligned}$$

where we used (6.15) in the second and (6.42) in the third equality. Hence, \(\beta \) has a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic rm if \(t_1u_1=\eta ^2\) and a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1)\)-atomic rm if \(t_1u_1>\eta ^2\). It remains to show that in the case \(t_1u_1>\eta ^2\), there does not exist a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic rm. Since \(H_{22}\) is not pd and \(u_1>0\), if \({{\mathcal {H}}}({\mathcal {G}}(t',u'))\) has a \({\mathbb {R}}\)-rm, then \(u'=u_1\). Since \(\eta \ne 0\), any \({{\mathcal {Z}}}(x-y^2)\)-rm of \({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t',u_1))\) is at least \(({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+1)\)-atomic (see (6.15)). If \(t'\ne t_1\), then \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(\mathcal {G}(t',u_1))={{\,\textrm{rank}\,}}H_{22}+1\). Hence,

$$\begin{aligned}{} & {} {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t',u_1))+ {{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t',u_1)) \ge ({{\,\textrm{rank}\,}}H_{22}+1) + ({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+1)\\{} & {} \quad ={{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1, \end{aligned}$$

where we used (6.42) in the last equality.

Assume that Theorem 6.1.(2(c)ii) holds. We separate two cases according to the value of \(k_{12}\).

    • \(k_{12}=0\). We separate two cases according to \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })\), i.e., \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })\in \{k,k+1\}\). Note that \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })\) cannot be \(k-1\), since \(\eta \ne 0\) and \(k_{12}=0\) imply that \(\big ({{\mathcal {H}}}(A_{\min })/H_{22}\big )_{12}\ne 0\).

      \(*\) \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })=k+1\). Then, as in the second case of the case \(\eta =0\) of Theorem 6.1.(2(b)ii) above, at the point \((t_1,u_1)\) there is a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm for \(\beta \). (Note that \(t_1u_1\) is automatically strictly larger than \(\eta ^2\); otherwise the measure would be \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )-1)\)-atomic, which is not possible.)

      \(*\) \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })=k\). In this case we have

      $$\begin{aligned}&{{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(k_{11},k_{22})) + {{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(k_{11},k_{22}))\\&\quad = \left\{ \begin{array}{rl} {{\,\textrm{rank}\,}}H_{22}+{{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+1,&{} \text {if }k_{11}k_{22}=\eta ^2,\\[0.3em] {{\,\textrm{rank}\,}}H_{22}+{{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+2,&{} \text {if }k_{11}k_{22}>\eta ^2, \end{array} \right. \\&\quad = \left\{ \begin{array}{rl} {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ),&{} \text {if }k_{11}k_{22}=\eta ^2,\\[0.3em] {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1,&{} \text {if }k_{11}k_{22}>\eta ^2, \end{array} \right. \end{aligned}$$

      where we used (6.15) in the first and (6.42) in the second equality. Hence, \(\beta \) has a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic rm if \(k_{11}k_{22}=\eta ^2\) and a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1)\)-atomic rm if \(k_{11}k_{22}>\eta ^2\). It remains to show that in the case \(k_{11}k_{22}>\eta ^2\), there does not exist a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic rm. Since \(\eta \ne 0\), if \({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t',u'))\) is psd, it follows that \(t'u'\ge \eta ^2\) by (6.14). But then if \({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t',u'))\) also admits a \({{\mathcal {Z}}}(x-y^2)\)-rm, this rm is at least \(({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+1)\)-atomic (see (6.15)). If \(t'<k_{11}\) or \(u'<k_{22}\), then \({{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t',u'))\ge {{\,\textrm{rank}\,}}H_{22}+1\). Hence,

      $$\begin{aligned}{} & {} {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t',u'))+ {{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t',u')) \ge ({{\,\textrm{rank}\,}}H_{22}+1) + ({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+1)\\{} & {} \quad ={{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1, \end{aligned}$$

      where we used (6.42) in the last equality.

    • \(k_{12}\ne 0\). We separate two cases according to \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })\), i.e., \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })\in \{k,k+1\}\). Note that \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })\) cannot be \(k-1\), since otherwise \({{\mathcal {H}}}({\widehat{A}}_{\min })/H_{22} = \begin{pmatrix} 0 &{} \eta \\ \eta &{} 0 \end{pmatrix} \), which cannot be psd since \(\eta \ne 0\). By Claim 3, there is a point \(({\tilde{t}},{\tilde{u}})\in {\mathcal {R}}_2\cap ({\mathbb {R}}_+)^2\), such that \({\tilde{t}}{\tilde{u}}=\eta ^2\) and \((k_{11}-{\tilde{t}})(k_{22}-{\tilde{u}})=k_{12}^2\). By (6.15) and (6.25) we have

      $$\begin{aligned}&{{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}})) + {{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}({\mathcal {G}}({\tilde{t}},{\tilde{u}}))\\&\quad = ({{\,\textrm{rank}\,}}H_{22}+1) + ({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+1)\\&\quad = \left\{ \begin{array}{rl} {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ),&{} \text {if }{{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })=k+1,\\[0.3em] {{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1,&{} \text {if }{{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })=k, \end{array} \right. \end{aligned}$$

      where we used (6.42) in the second equality. It remains to show that in the case \({{\,\textrm{rank}\,}}{{\mathcal {H}}}(A_{\min })=k\), there does not exist a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic rm. Since \(\eta \ne 0\), if \({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t',u'))\) is psd, it follows that \(t'u'\ge \eta ^2\) by (6.14). But then if \({\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t',u'))\) also admits a \({{\mathcal {Z}}}(x-y^2)\)-rm, this rm is at least \(({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+1)\)-atomic (see (6.15)). Since \(k_{12}\ne 0\), \({{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t',u'))\ge {{\,\textrm{rank}\,}}H_{22}+1\) by (6.25). Hence,

      $$\begin{aligned}{} & {} {{\,\textrm{rank}\,}}{{\mathcal {H}}}({\mathcal {G}}(t',u'))+ {{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}({\mathcal {G}}(t',u')) \ge ({{\,\textrm{rank}\,}}H_{22}+1) + ({{\,\textrm{rank}\,}}{\widehat{{{\mathcal {F}}}}}(A_{\min })+1)\\{} & {} \quad ={{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta )+1, \end{aligned}$$

      where we used (6.42) in the last equality.

This concludes the proof of the moreover part.

Since for a p-pure sequence with \(\widetilde{{\mathcal {M}}}(k;\beta )\succeq 0\), (6.42) implies that \({{\mathcal {H}}}(A_{\min })\) is pd, it follows by the moreover part that the existence of a \({{\mathcal {Z}}}(p)\)-rm implies the existence of a \(({{\,\textrm{rank}\,}}\widetilde{{\mathcal {M}}}(k;\beta ))\)-atomic \({{\mathcal {Z}}}(p)\)-rm. \(\square \)

The following example demonstrates the use of Theorem 6.1 to show that there exists a bivariate \(y(x-y^2)\)-pure sequence \(\beta \) of degree 6 with a positive semidefinite \({\mathcal {M}}(3)\) and without a \({{\mathcal {Z}}}(y(x-y^2))\)-rm.

Example 6.3

Let \(\beta \) be a bivariate degree 6 sequence given by

$$\begin{aligned} \beta _{00}&= \frac{1228153}{1372615},&\beta _{10}&=\frac{97}{10},&\beta _{01}&= \frac{21}{10},\\ \beta _{20}&= \frac{2289}{10},&\beta _{11}&= \frac{441}{10},&\beta _{02}&=\frac{91}{10},\\ \beta _{30}&=\frac{67207}{10},&\beta _{21}&=\frac{12201}{10},&\beta _{12}&=\frac{455}{2},\\ \beta _{03}&=\frac{441}{10},&\beta _{40}&=\frac{2142693}{10},&\beta _{31}&=\frac{376761}{10},\\ \beta _{22}&=\frac{67171}{10},&\beta _{13}&=\frac{12201}{10},&\beta _{04}&=\frac{455}{2},\\ \beta _{50}&= \frac{71340727}{10},&\beta _{41}&= \frac{12313161}{10},&\beta _{32}&= \frac{428519}{2},\\ \beta _{23}&= \frac{376761}{10},&\beta _{14}&= \frac{67171}{10},&\beta _{05}&= \frac{12201}{10},\\ \beta _{60}&= \frac{2438236509}{10},&\beta _{51}&= \frac{415998681}{10},&\beta _{42}&= \frac{71340451}{10},\\ \beta _{33}&= \frac{12313161}{10},&\beta _{24}&= \frac{428519}{2},&\beta _{15}&= \frac{376761}{10},\\ \beta _{06}&= \frac{67171}{10}. \end{aligned}$$

Assume the notation as in Theorem 6.1. The matrix \(\widetilde{{\mathcal {M}}}(3)\) is psd with the eigenvalues \(\approx 2.51\cdot 10^8\), \(\approx 47179\), \(\approx 112.1\), \(\approx 7.4\), \(\approx 1.11\), \(\approx 0.1\), \(\approx 0.03\), \(\approx 0.0005\), \(\approx 4.9\cdot 10^{-6}\), 0, and the column relation \(Y^3=XY\). We have that

$$\begin{aligned} A_{\min } = \begin{pmatrix} \frac{5537}{9230} &{} \frac{91}{10} &{} \frac{455}{2} &{} \frac{61999553}{9230}\\[0.5em] \frac{91}{10} &{} \frac{455}{2} &{} \frac{67171}{10} &{} \frac{428519}{2}\\[0.5em] \frac{455}{2} &{} \frac{67171}{10} &{} \frac{428519}{2} &{} \frac{71340451}{10}\\[0.5em] \frac{61999553}{9230} &{} \frac{428519}{2} &{} \frac{71340451}{10} &{} \frac{450098209309}{1846} \end{pmatrix} \end{aligned}$$

and so

$$\begin{aligned} \eta =\frac{67171}{10}-\frac{61999553}{9230}=-\frac{72}{923}. \end{aligned}$$
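This value of \(\eta \) can be confirmed with exact rational arithmetic; a minimal sketch using Python's standard library:

```python
from fractions import Fraction

# eta is the difference of the two entries of A_min displayed above
eta = Fraction(67171, 10) - Fraction(61999553, 9230)
print(eta)  # → -72/923
```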

The matrices \(F_{22}\) and \(H_{22}\) are equal to:

$$\begin{aligned} F_{22}&=\begin{pmatrix} \frac{91}{10} &{} \frac{441}{10} &{} \frac{455}{2} &{} \frac{12201}{10} &{} \frac{67171}{10} \\[0.5em] \frac{441}{10} &{} \frac{455}{2} &{} \frac{12201}{10} &{} \frac{67171}{10} &{} \frac{376761}{10} \\[0.5em] \frac{455}{2} &{} \frac{12201}{10} &{} \frac{67171}{10} &{} \frac{376761}{10} &{} \frac{428519}{2} \\[0.5em] \frac{12201}{10} &{} \frac{67171}{10} &{} \frac{376761}{10} &{} \frac{428519}{2} &{} \frac{12313161}{10} \\[0.5em] \frac{67171}{10} &{} \frac{376761}{10} &{} \frac{428519}{2} &{} \frac{12313161}{10} &{} \frac{71340451}{10} \end{pmatrix},\qquad H_{22} =\begin{pmatrix} \frac{7}{5} &{} \frac{18}{5} \\[0.5em] \frac{18}{5} &{} \frac{49}{5} \end{pmatrix}. \end{aligned}$$

They are both pd with the eigenvalues \(\approx 7.3\cdot 10^6\), \(\approx 1987.6\), \(\approx 5.6\), \(\approx 0.099\), \(\approx 0.0013\) and \(\approx 11.1\), \(\approx 0.068\), respectively. The matrix \(K\) is equal to

$$\begin{aligned} K= \begin{pmatrix} k_{11} &{} k_{12} \\ k_{12} &{} k_{22} \end{pmatrix} = \begin{pmatrix} \frac{6050329}{48143098510} &{} \frac{3}{95} \\[0.2em] \frac{3}{95} &{} \frac{4941414}{87685} \end{pmatrix} \end{aligned}$$

and thus

$$\begin{aligned} (\sqrt{k_{11}k_{22}}-k_{12})^2-\eta ^2\approx -0.0033<0. \end{aligned}$$
(6.43)

By Theorem 6.1, \(\beta \) does not have a \({{\mathcal {Z}}}(y(x-y^2))\)-rm, since (2(c)ii) of Theorem 6.1 requires (6.43) to be nonnegative.
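As a numerical sanity check of the example (a sketch with Python's standard library, using the values listed above), one can confirm both the stated eigenvalues of \(H_{22}\) and the sign of the quantity in (6.43):

```python
from fractions import Fraction
from math import sqrt

# Eigenvalues of H_22 = [[7/5, 18/5], [18/5, 49/5]] via trace and
# determinant: lambda = (tr ± sqrt(tr^2 - 4*det)) / 2.
tr = 7 / 5 + 49 / 5
det = (7 / 5) * (49 / 5) - (18 / 5) ** 2
disc = sqrt(tr ** 2 - 4 * det)
lmax, lmin = (tr + disc) / 2, (tr - disc) / 2
print(round(lmax, 1), round(lmin, 3))  # → 11.1 0.068  (both positive: H_22 is pd)

# The quantity (sqrt(k11*k22) - k12)^2 - eta^2 from (6.43).
k11 = Fraction(6050329, 48143098510)
k12 = Fraction(3, 95)
k22 = Fraction(4941414, 87685)
eta = Fraction(-72, 923)
val = (sqrt(k11 * k22) - k12) ** 2 - eta ** 2
print(round(val, 4))  # → -0.0033  (negative, so no Z(y(x-y^2))-rm exists)
```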