Function spaces
Let us denote by \(C^q([-1,\,1])\), \(q=0,1,\ldots \), the set of all continuous functions on \([-1,1]\) having q continuous derivatives, and by \(L^p\) the space of all measurable functions f such that
$$\begin{aligned} \Vert f\Vert _{p}= \left( \int _{-1}^1 |f(x)|^p \,dx\right) ^{\frac{1}{p}}<\infty , \quad 1 \le p<\infty . \end{aligned}$$
Let us introduce a Jacobi weight
$$\begin{aligned} u(x)=(1-x)^\gamma (1+x)^\delta , \end{aligned}$$
(3)
with \(\gamma ,\delta >-1/p\). We say that \(f \in L^p_u\) if and only if \(fu \in L^p\), and we endow the space \(L^p_u\) with the norm
$$\begin{aligned} \Vert f\Vert _{L^p_u}=\Vert fu\Vert _p=\left( \int _{-1}^1 |f(x)u(x)|^p \,dx\right) ^{\frac{1}{p}}<\infty , \quad 1 \le p<\infty . \end{aligned}$$
If \(p=\infty \) and \(\gamma , \delta > 0\), the space of weighted continuous functions is defined as
$$\begin{aligned} L^\infty _{u}=\left\{ f\in C^0((-1,1)): \lim _{x\rightarrow \pm 1}(f u)(x)=0\right\} . \end{aligned}$$
If \(\gamma =0\) (respectively \(\delta =0\)), \(L^\infty _{u}\) consists of all functions which are continuous on \((-1,1]\) (respectively \([-1,1)\)) and such that \(\displaystyle \lim _{x\rightarrow -1}(f u)(x)=0\) (respectively \(\displaystyle \lim _{x\rightarrow 1}(f u)(x)=0\)). Moreover, if \(\gamma =\delta =0\) we set \(L^\infty _u=C^0([-1,1])\).
We equip the space \(L^\infty _{u}\) with the weighted uniform norm
$$\begin{aligned} \Vert f\Vert _{L^\infty _u}=\Vert fu\Vert _\infty =\max _{x\in [-1,1] }|(fu)(x)|, \end{aligned}$$
and we remark that \(L^\infty _u\) endowed with such a weighted norm is a Banach space.
The limit conditions in the definition of \(L^\infty _u\) ensure the validity of the Weierstrass approximation theorem in the weighted norm. Indeed, since \((Pu)(\pm 1)=0\) for every polynomial P when \(\gamma ,\delta >0\), we have
$$\begin{aligned} \Vert (f-P)u\Vert _\infty \ge |(fu)(\pm 1)|, \end{aligned}$$
so that f can be approximated by polynomials in \(L^\infty _u\) only if \((fu)(x)\) vanishes as \(x\rightarrow \pm 1\).
For smoother functions, we introduce the weighted Sobolev–type space
$$\begin{aligned} {\mathscr {W}}^p_r(u)=\left\{ f \in L^p_u: \Vert f\Vert _{{\mathscr {W}}^p_{r}(u)}=\Vert fu\Vert _p+\Vert f^{(r)}\varphi ^{r} u\Vert _p <\infty \right\} , \end{aligned}$$
where \(1 \le p \le \infty \), \(r=1,2,\ldots \), and \(\varphi (x)=\sqrt{1-x^2}\). If \(\gamma = \delta = 0\), we set \(L^\infty :=L^\infty _1\) and \({{\mathscr {W}}^p_r}:={\mathscr {W}}^p_r(1)\).
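To make these definitions concrete, here is a minimal Python sketch (the choices of f, \(\gamma \), and \(\delta \) are hypothetical, purely for illustration) which approximates \(\Vert fu\Vert _\infty \) and \(\Vert fu\Vert _2\) for a function that is unbounded at \(x=1\) and nevertheless belongs to \(L^\infty _u\):

```python
import numpy as np
from scipy.integrate import quad

g, d = 0.5, 0.5                        # assumed exponents gamma, delta > -1/p
u = lambda x: (1 - x)**g * (1 + x)**d  # Jacobi weight (3)
f = lambda x: np.log(1 - x)            # unbounded at x = 1, yet (fu)(x) -> 0 there

# weighted uniform norm ||fu||_inf, approximated on a fine grid
x = np.linspace(-1 + 1e-12, 1 - 1e-12, 10**6)
sup_norm = np.max(np.abs(f(x) * u(x)))

# weighted L^2 norm ||fu||_2, by adaptive quadrature
l2_norm = quad(lambda t: (f(t) * u(t))**2, -1, 1)[0]**0.5

print(sup_norm, l2_norm)               # both finite: f lies in L^inf_u and L^2_u
```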
Monic orthogonal polynomials
Let \(\{p_j\}_{j=0}^\infty \) be the sequence of monic orthogonal polynomials on \((-1,\,1)\) with respect to the Jacobi weight defined in (2), i.e.,
$$\begin{aligned} \langle p_i, p_j \rangle _w =\int _{-1}^1 p_i(x) p_j(x) w(x) \,dx= {\left\{ \begin{array}{ll} 0, &{} j \ne i, \\ c_j, &{} j=i, \end{array}\right. } \end{aligned}$$
(4)
where
$$\begin{aligned} c_j=\frac{2^{2j+\alpha +\beta +1}}{2j+\alpha +\beta +1} \cdot \frac{\varGamma (j+\alpha +1)\varGamma (j+\beta +1)}{j!\,\varGamma (j+\alpha +\beta +1)} \left( {\begin{array}{c}2j+\alpha +\beta \\ j\end{array}}\right) ^{-2}, \end{aligned}$$
(5)
and \(\varGamma \) is the Gamma function. It is well known (see, for instance, [12]) that such a sequence satisfies the following three-term recurrence relation
$$\begin{aligned} {\left\{ \begin{array}{ll} p_{-1}(x)=0, \quad p_0(x)=1, \\ p_{j+1}(x)=(x-\alpha _j) p_j(x)-\beta _j p_{j-1}(x), \quad j=0,1,2,\ldots , \end{array}\right. } \end{aligned}$$
where the coefficients \(\alpha _j\) and \(\beta _j\) are given by
$$\begin{aligned} \alpha _j&= \frac{\beta ^2-\alpha ^2}{(2j+\alpha +\beta )(2j+\alpha +\beta +2)},&j \ge 0, \end{aligned}$$
(6)
$$\begin{aligned} \beta _0&= \frac{2^{\alpha +\beta +1} \varGamma (\alpha +1) \varGamma (\beta +1)}{\varGamma (\alpha +\beta +2)}, \end{aligned}$$
(7)
$$\begin{aligned} \beta _j&= \frac{4j(j+\alpha )(j+\beta )(j+\alpha +\beta )}{(2j+\alpha +\beta )^2 ((2j+\alpha +\beta )^2-1)},&j \ge 1. \end{aligned}$$
(8)
Equivalently, by virtue of the Stieltjes procedure, the recursion coefficients can be written as
$$\begin{aligned} \alpha _j&= \frac{\langle x p_j,p_j\rangle _w}{\langle p_j,p_j\rangle _w},&j\ge 0, \end{aligned}$$
(9)
$$\begin{aligned} \beta _0&= \langle p_0,p_0 \rangle _w, \end{aligned}$$
(10)
$$\begin{aligned} \beta _j&= \frac{\langle p_j,p_j\rangle _w}{\langle p_{j-1},p_{j-1}\rangle _w},&j\ge 1. \end{aligned}$$
(11)
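For later use, the following minimal Python sketch (illustrative only, not part of the original text) implements the closed forms (5)–(8) and checks the Stieltjes identity (11), which by (4) reads \(\beta _j=c_j/c_{j-1}\). The cases \(j=0\) in (6) and \(j=1\) in (8) are coded in cancelled form, since the printed formulas have removable singularities when \(\alpha +\beta =0\) or \(\alpha +\beta =-1\) (e.g., for the Chebychev weight of the first kind):

```python
import numpy as np
from math import gamma, factorial
from scipy.special import binom

def jacobi_recurrence(n, a, b):
    """Coefficients alpha_j, beta_j, j = 0, ..., n-1, of the monic Jacobi
    polynomials, following (6)-(8)."""
    al, be = np.empty(n), np.empty(n)
    al[0] = (b - a) / (a + b + 2)               # (6) at j = 0, cancelled form
    be[0] = 2**(a + b + 1) * gamma(a + 1) * gamma(b + 1) / gamma(a + b + 2)  # (7)
    if n > 1:                                   # (8) at j = 1, cancelled form
        be[1] = 4 * (1 + a) * (1 + b) / ((2 + a + b)**2 * (3 + a + b))
    for j in range(1, n):                       # (6), no singularity for j >= 1
        al[j] = (b**2 - a**2) / ((2*j + a + b) * (2*j + a + b + 2))
    for j in range(2, n):                       # (8), no singularity for j >= 2
        s = 2*j + a + b
        be[j] = 4*j * (j + a) * (j + b) * (j + a + b) / (s**2 * (s**2 - 1))
    return al, be

def c_norm(j, a, b):
    """Squared norm c_j of the monic Jacobi polynomial p_j, formula (5);
    scipy's binom evaluates the generalized binomial coefficient."""
    return (2**(2*j + a + b + 1) / (2*j + a + b + 1)
            * gamma(j + a + 1) * gamma(j + b + 1)
            / (factorial(j) * gamma(j + a + b + 1)) * binom(2*j + a + b, j)**(-2))

a, b = 0.5, -0.25                               # hypothetical parameters
al, be = jacobi_recurrence(8, a, b)
for j in range(1, 8):                           # check beta_j = c_j / c_{j-1}
    assert abs(be[j] - c_norm(j, a, b) / c_norm(j - 1, a, b)) < 1e-10
```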
Quadrature formulae
In this subsection, we recall two quadrature rules which will be useful for our purposes. The first one is the classical Gauss-Jacobi quadrature rule [10], whereas the second one is the anti-Gauss quadrature rule, developed by Laurie in [18]; see also [19].
The Gauss-Jacobi quadrature formula
Let f be defined on \((-1,\,1)\), let w be the Jacobi weight given in (2), and let us express the integral
$$\begin{aligned} I(f)=\int _{-1}^1 f(x) w(x) \,dx\end{aligned}$$
(12)
as
$$\begin{aligned} I(f)= \sum _{j=1}^{n} \lambda _j f(x_j) + e_{n}(f)=:G_n(f)+e_n(f), \end{aligned}$$
(13)
where the sum \(G_n(f)\) is the well-known n-point Gauss-Jacobi quadrature rule and \(e_{n}(f)\) stands for the quadrature error. The quadrature nodes \(\{x_j\}_{j=1}^n\) are the zeros of the Jacobi orthogonal polynomial \(p_n(x)\), and the weights \(\{\lambda _j\}_{j=1}^n\) are the so-called Christoffel numbers, given by (see [20, p. 235])
$$\begin{aligned} \lambda _j= \int _{-1}^1 \ell _j(w,x) w(x) \,dx= \frac{\varGamma (n+\alpha +1) \varGamma (n+\beta +1)}{n! \, \varGamma (n+\alpha +\beta +1)} \frac{2^{\alpha +\beta +1}}{(1-x_j^2) \left[ \big (P^{(\alpha ,\beta )}_n\big )'(x_j)\right] ^2}, \end{aligned}$$
where \(P^{(\alpha ,\beta )}_n\) denotes the classical (non-monic) Jacobi polynomial, whose zeros coincide with those of \(p_n\), and
$$\begin{aligned} \ell _j(w,x)= \frac{p_n(x)}{p'_n(x_j)(x-x_j)}. \end{aligned}$$
The Gauss-Jacobi quadrature rule is an interpolatory formula having optimal algebraic degree of exactness \(2n-1\), namely
$$\begin{aligned} I(P)=G_n(P), \quad \text {or equivalently} \quad e_n(P)=0, \qquad \forall P \in {\mathbb {P}}_{2n-1}, \end{aligned}$$
(14)
where \({\mathbb {P}}_{2n-1}\) denotes the set of algebraic polynomials of degree at most \(2n-1\). The coefficients \(\lambda _j\) are all positive, and the formula is stable in the sense of [20, Definition 5.1.1], since
$$\begin{aligned} \Vert G_n\Vert _\infty = \sup _{\Vert f\Vert _\infty =1} |G_n(f)|= \sum _{j=1}^n \lambda _j = \int _{-1}^1 w(x) \,dx< \infty . \end{aligned}$$
Moreover, the above condition, together with (14), guarantees the convergence of the quadrature rule (see, for instance, [27, 35]), that is
$$\begin{aligned} \lim _{n \rightarrow \infty } e_n(f)=0. \end{aligned}$$
If \(f \in C^{2n}([-1,\,1])\), the error \(e_n(f)\) of the Gauss quadrature formula has the following analytical expression [5]
$$\begin{aligned} e_n(f)=\frac{f^{(2n)}(\xi )}{(2n)!} \int _{-1}^1 \prod _{j=1}^n (x-x_j)^2 w(x) \,dx, \end{aligned}$$
where \(\xi \in (-1,\,1)\) depends on n and f.
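As a quick numerical illustration of the exactness property (14), one can rely on SciPy's Gauss-Jacobi routine (a library alternative to the eigenvalue procedure recalled below): a polynomial of degree \(2n-1\) is integrated exactly by the n-point rule, so a larger rule must give the same value up to rounding. A minimal sketch, with hypothetical parameters:

```python
import numpy as np
from scipy.special import roots_jacobi

a, b, n = 0.5, -0.25, 6                 # hypothetical Jacobi parameters
rng = np.random.default_rng(0)
P = np.polynomial.Polynomial(rng.standard_normal(2 * n))  # random, degree 2n-1

x1, w1 = roots_jacobi(n, a, b)          # n-point Gauss-Jacobi rule
x2, w2 = roots_jacobi(2 * n, a, b)      # larger rule, also exact on P

G_n, G_2n = w1 @ P(x1), w2 @ P(x2)
assert abs(G_n - G_2n) <= 1e-10 * max(1.0, abs(G_2n))     # e_n(P) = 0
```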
If we consider functions belonging to the Sobolev-type spaces \({\mathscr {W}}^1_r(w)\), it is possible to estimate \(e_n(f)\) (see, e.g., [20]) in terms of the weighted error of best polynomial approximation, i.e.,
$$\begin{aligned} \displaystyle E_{n}(f)_{w,1}=\inf _{P\in {\mathbb {P}}_{n}} \Vert (f-P)w\Vert _{1}. \end{aligned}$$
Indeed,
$$\begin{aligned} |e_n(f)| \le \frac{{\mathscr {C}}}{2n-1} E_{2n-2}(f')_{\varphi w,1}, \end{aligned}$$
(15)
where \({\mathscr {C}} \ne {\mathscr {C}}(n,f)\) and \(\varphi (x)=\sqrt{1-x^2}\). Here and in the sequel, \({\mathscr {C}}\) denotes a positive constant which has a different value in different formulas. We write \({\mathscr {C}} \ne {\mathscr {C}}(a,b,\ldots )\) in order to say that \({\mathscr {C}}\) is independent of the parameters \(a,b,\ldots \), and \({\mathscr {C}} = {\mathscr {C}}(a,b,\ldots )\) to say that \({\mathscr {C}}\) depends on them.
As for the computation of the nodes \(x_j\) and weights \(\lambda _j\) of the Gauss-Jacobi quadrature rule, Wilf observed in 1962 (see also [14]) that they can be obtained by solving the eigenvalue problem for the Jacobi matrix of order n
$$\begin{aligned} J_n= \begin{bmatrix} \alpha _0 &{} \sqrt{\beta _1} \\ \sqrt{\beta _1} &{} \alpha _1 &{} \sqrt{\beta _2} \\ &{} \sqrt{\beta _2} &{} \alpha _2 &{} \ddots \\ &{} &{} \ddots &{} \ddots &{} \sqrt{\beta _{n-1}} \\ &{} &{} &{} \sqrt{\beta _{n-1}} &{} \alpha _{n-1} \\ \end{bmatrix}, \end{aligned}$$
associated with the coefficients \(\alpha _j\) and \(\beta _j\) defined in (6) and (8), respectively. Specifically, the nodes \(x_j\) are the eigenvalues of the symmetric tridiagonal matrix \(J_n\), and the weights are determined as
$$\begin{aligned} \lambda _j=\beta _0 \, v^2_{j,1}, \end{aligned}$$
where \(\beta _0\) is defined as in (7) and \(v_{j,1}\) is the first component of the normalized eigenvector corresponding to the eigenvalue \(x_j\).
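A minimal Python sketch of this procedure (essentially the Golub-Welsch approach, here with a dense symmetric eigensolver), reusing the jacobi_recurrence function from the earlier sketch; numpy.linalg.eigh returns the eigenvalues in ascending order together with the normalized eigenvectors, which is all we need:

```python
import numpy as np
from scipy.special import roots_jacobi
# jacobi_recurrence as defined in the earlier sketch

def gauss_jacobi(n, a, b):
    """Nodes and weights of the n-point Gauss-Jacobi rule via the
    eigendecomposition of the symmetric tridiagonal matrix J_n."""
    al, be = jacobi_recurrence(n, a, b)
    J = np.diag(al) + np.diag(np.sqrt(be[1:]), 1) + np.diag(np.sqrt(be[1:]), -1)
    x, V = np.linalg.eigh(J)            # eigenvalues = nodes
    return x, be[0] * V[0, :]**2        # lambda_j = beta_0 * v_{j,1}^2

x, lam = gauss_jacobi(6, 0.5, -0.25)    # hypothetical parameters
xs, ws = roots_jacobi(6, 0.5, -0.25)    # sanity check against SciPy
assert np.allclose(x, xs) and np.allclose(lam, ws)
```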
The anti-Gauss quadrature formula
Let us approximate the integral I(f) defined in (12) by
$$\begin{aligned} I(f) = \sum _{j=1}^{n+1} {\tilde{\lambda }}_j f({\tilde{x}}_j) + {\tilde{e}}_{n+1}(f)=:{\widetilde{G}}_{n+1}(f)+{\tilde{e}}_{n+1}(f), \end{aligned}$$
(16)
where \({\widetilde{G}}_{n+1}(f)\) is the \((n+1)\)-point anti-Gauss quadrature formula and \({\tilde{e}}_{n+1}(f)\) is the corresponding remainder term.
Such a rule is an interpolatory formula designed to have the same degree of exactness as the Gauss-Jacobi formula \(G_n(f)\) in (13), and an error of the same magnitude as, but opposite in sign to, that of \(G_n(f)\) when applied to polynomials of degree at most \(2n+1\), namely
$$\begin{aligned} I(f)-{\widetilde{G}}_{n+1}(f)=-(I(f)-G_{n}(f)), \quad \text {for all } f \in {\mathbb {P}}_{2n+1}, \end{aligned}$$
from which
$$\begin{aligned} {\widetilde{G}}_{n+1}(f)= 2 I(f)-G_{n}(f), \quad \text {for all } f \in {\mathbb {P}}_{2n+1}. \end{aligned}$$
(17)
This quadrature formula was developed with the aim of estimating the error term \(e_n(f)\) of the Gauss rule \(G_n(f)\), especially when the Gauss-Kronrod formula fails to do so. This happens, for instance, for a Jacobi weight whose parameters \(\alpha \) and \(\beta \) satisfy \(\min \{\alpha , \beta \}\ge 0\) and \(\max \{\alpha , \beta \}>5/2\); see [26].
If f is a polynomial of degree at most \(2n+1\), the Gauss and the anti-Gauss quadrature rules provide an interval containing the exact integral I(f), an interval which shrinks as n increases. Indeed, since by (17) the two errors have opposite signs, we have
$$\begin{aligned} G_n(f) \le I(f) \le {\widetilde{G}}_{n+1}(f) \qquad \text {or} \qquad {\widetilde{G}}_{n+1}(f) \le I(f) \le G_n(f). \end{aligned}$$
(18)
If, on the contrary, f is a general function, it is still possible to prove, under suitable assumptions (see [4, Equations (26)–(28)], [8, p. 1664], and [28, Theorem 3.1]) that the Gauss and the anti-Gauss quadrature rules bracket the integral I(f), and that the error of the averaged Gaussian quadrature formula [18]
$$\begin{aligned} G^{AvG}_{2n+1}(f) =\frac{G_n(f)+{\widetilde{G}}_{n+1}(f)}{2}, \end{aligned}$$
(19)
is bounded by
$$\begin{aligned} \left| I(f) - G^{AvG}_{2n+1}(f) \right| \le \frac{1}{2} |G_n(f)-{\widetilde{G}}_{n+1}(f)|. \end{aligned}$$
The above bound allows one to choose the integer n so that the averaged Gaussian formula attains a prescribed accuracy. It is also worth noting that, while the averaged rule (19) has, in general, degree of exactness \(2n+1\), under particular conditions it has been proved to have degree of exactness \(4n-2\ell +2\) for a fixed (and usually small) integer \(\ell \) [21, 23, 34].
An anti-Gauss quadrature formula can easily be constructed [18]. The key to this construction is relation (17), which characterizes the anti-Gauss quadrature formula as the \((n+1)\)-point Gauss rule for the functional \({\mathscr {I}}(f)=2 I(f)-G_{n}(f)\). If \(q\in {\mathbb {P}}_{2n-1}\), then, by virtue of (14),
$$\begin{aligned} {\mathscr {I}}(q)= {I}(q), \end{aligned}$$
(20)
while for the Jacobi polynomial \(p_n\) and any integrable function f, since \(G_n(fp^2_n)=0\) (the polynomial \(p_n\) vanishes at all the quadrature nodes \(x_j\)), it holds
$$\begin{aligned} {\mathscr {I}}(fp^2_n)= 2 I(fp^2_n). \end{aligned}$$
(21)
By using (20) and (21) we can compute the recursion coefficients \(\{{\tilde{\alpha }}_j\}_{j=0}^n\) and \(\{{\tilde{\beta }}_j\}_{j=1}^n\) for the recurrence relation
$$\begin{aligned} {\left\{ \begin{array}{ll} {\tilde{p}}_{-1}(x)=0, \quad {\tilde{p}}_{0}(x)=1, \\ {\tilde{p}}_{j+1}(x)=(x-{\tilde{\alpha }}_j) {\tilde{p}}_{j}(x)-{\tilde{\beta }}_j {\tilde{p}}_{j-1}(x), \quad j=0,1,\ldots ,n, \end{array}\right. } \end{aligned}$$
defining the sequence \(\{{\tilde{p}}_j\}_{j=0}^{n+1}\) of monic polynomials orthogonal with respect to the functional \({\mathscr {I}}\).
The following theorem holds.
Theorem 1
The recursion coefficients for the polynomials orthogonal with respect to the functional \({\mathscr {I}}\) are related to the recursion coefficients for the Jacobi polynomials as follows
$$\begin{aligned} \begin{aligned} {\tilde{\alpha }}_j&={\alpha }_j, \qquad&j=0,\,\dots ,\,n, \\ {\tilde{\beta }}_j&={\beta }_j, \qquad&j=0,\,\dots ,\,n-1, \\ {\tilde{\beta }}_n&=2{\beta }_n. \end{aligned} \end{aligned}$$
Proof
The theorem was proved by Laurie in [18]. Because of its relevance, we report here a sketch of the proof.
The fact that \({\tilde{\alpha }}_0=\alpha _0\) and \({\tilde{\beta }}_0=\beta _0\) is trivial. Then, the recurrence relations for the two families of orthogonal polynomials imply that \({\tilde{p}}_1=p_1\). Let us proceed by induction: assume that \({\tilde{p}}_i=p_i\) for all \(i\le j\), with \(j \le n-1\). Taking into account (9), (11), and (20), we have
$$\begin{aligned} \begin{aligned} {\tilde{\alpha }}_j&= \frac{{\mathscr {I}}(x{\tilde{p}}^2_j)}{{\mathscr {I}}({\tilde{p}}^2_j)} = \frac{{\mathscr {I}}(x{p}^2_j)}{{\mathscr {I}}({p}^2_j)}= \frac{{I}(x{p}^2_j)}{{I}({p}^2_j)} = \alpha _j, \\ {\tilde{\beta }}_j&= \frac{{\mathscr {I}}({\tilde{p}}^2_j)}{{\mathscr {I}}({\tilde{p}}^2_{j-1})} =\frac{{\mathscr {I}}({p}^2_j)}{{\mathscr {I}}({p}^2_{j-1})}= \frac{{I}({p}^2_j)}{{I}({p}^2_{j-1})} = \beta _j, \end{aligned} \end{aligned}$$
so that \({\tilde{p}}_{j+1}=p_{j+1}\). In particular, \({\tilde{p}}_n=p_n\). To conclude the proof, by applying (21) and again (9), (11), and (20), we obtain
$$\begin{aligned} \begin{aligned} {\tilde{\alpha }}_n&=\frac{{\mathscr {I}}(x{\tilde{p}}^2_n)}{{\mathscr {I}}({\tilde{p}}^2_n)}= \frac{{\mathscr {I}}(x {p}^2_n)}{{\mathscr {I}}( {p}^2_n)}=\frac{{2I}(x {p}^2_n)}{{2I}( {p}^2_n)}= \alpha _n, \\ {\tilde{\beta }}_n&=\frac{{\mathscr {I}}({\tilde{p}}^2_n)}{{\mathscr {I}}({\tilde{p}}^2_{n-1})} =\frac{{\mathscr {I}}({p}^2_n)}{{\mathscr {I}}({p}^2_{n-1})}=\frac{{2I}({p}^2_n)}{{I}({p}^2_{n-1})}= 2\beta _n. \end{aligned} \end{aligned}$$
\(\square \)
The previous theorem implies that the sequence of polynomials \(\{{\tilde{p}}_j\}_{j=0}^{n+1}\) is defined by
$$\begin{aligned} {\left\{ \begin{array}{ll} {\tilde{p}}_j(x)=p_j(x), \qquad j=0,1,\ldots ,n, \\ {\tilde{p}}_{n+1}(x)=(x-\alpha _n)p_{n}(x)-2 \beta _n p_{n-1}(x) =p_{n+1}(x)- \beta _n p_{n-1}(x). \end{array}\right. } \end{aligned}$$
(22)
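Relation (22) can be verified directly in code: running the three-term recurrence once more with the modified last coefficient \({\tilde{\beta }}_n=2\beta _n\) must reproduce \(p_{n+1}-\beta _n p_{n-1}\). A minimal sketch (hypothetical parameters), using numpy's Polynomial class and the jacobi_recurrence function from the earlier sketch:

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly
# jacobi_recurrence as defined in the earlier sketch

a, b, n = 0.5, -0.25, 5
al, be = jacobi_recurrence(n + 1, a, b)   # we need alpha_n and beta_n
x = Poly([0, 1])

p = [Poly([0]), Poly([1])]                # p_{-1} = 0, p_0 = 1
for j in range(n + 1):                    # build p_1, ..., p_{n+1}
    p.append((x - al[j]) * p[-1] - be[j] * p[-2])

p_nm1, p_n, p_np1 = p[n], p[n + 1], p[n + 2]       # p_j is stored in p[j+1]
tilde_p = (x - al[n]) * p_n - 2 * be[n] * p_nm1    # modified last step in (22)
assert np.allclose(tilde_p.coef, (p_np1 - be[n] * p_nm1).coef)
```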
Since the polynomials \(\{{\tilde{p}}_j\}_{j=0}^{n+1}\) satisfy a three-term recurrence relation, the nodes \({\tilde{x}}_j\) and the weights \({\tilde{\lambda }}_j\) of the associated anti-Gauss quadrature formula can be computed by solving the eigenvalue problem for the modified Jacobi matrix of order \(n+1\)
$$\begin{aligned} {\widetilde{J}}_{n+1}= \begin{bmatrix} J_n &{} \sqrt{2 \beta _n}\mathbf {e}_n \\ \sqrt{2 \beta _n} \mathbf {e}^T_n &{} \alpha _n \\ \end{bmatrix}, \end{aligned}$$
with \(\mathbf {e}_n=(0,0,\dots ,1)^T \in {\mathbb {R}}^n\). In fact, the \(n+1\) nodes are the eigenvalues of the above matrix and the weights are determined as
$$\begin{aligned} {\tilde{\lambda }}_j=\beta _0 \, {\tilde{v}}^2_{j,1}, \end{aligned}$$
where \(\beta _0\) is defined by (7) and \({\tilde{v}}_{j,1}\) is the first component of the normalized eigenvector associated with the eigenvalue \({\tilde{x}}_j\).
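A minimal sketch of this computation, mirroring the gauss_jacobi function given earlier; the only changes with respect to the Gauss case are the order of the matrix and the last off-diagonal entry \(\sqrt{2\beta _n}\):

```python
import numpy as np
# jacobi_recurrence as defined in the earlier sketch

def anti_gauss(n, a, b):
    """Nodes and weights of the (n+1)-point anti-Gauss rule via the
    modified Jacobi matrix of order n+1."""
    al, be = jacobi_recurrence(n + 1, a, b)   # alpha_0..alpha_n, beta_0..beta_n
    off = np.sqrt(np.concatenate([be[1:n], [2 * be[n]]]))  # last entry sqrt(2*beta_n)
    J = np.diag(al) + np.diag(off, 1) + np.diag(off, -1)
    x, V = np.linalg.eigh(J)
    return x, be[0] * V[0, :]**2
```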
The anti-Gauss quadrature rule has nice properties: the weights \(\{{\tilde{\lambda }}_j\}_{j=1}^{n+1}\) are strictly positive and the nodes \(\{{\tilde{x}}_j\}_{j=1}^{n+1}\) interlace with the Gauss nodes \(\{{x}_j\}_{j=1}^n\), i.e.,
$$\begin{aligned} {\tilde{x}}_1<x_1<{\tilde{x}}_2<\cdots<{\tilde{x}}_{n}<x_n<{\tilde{x}}_{n+1}. \end{aligned}$$
(23)
Thus, we can deduce that the anti-Gauss nodes \({\tilde{x}}_j\), \(j=2,\dots ,n\), belong to the interval \((-1,\,1)\), whereas the first and the last nodes may fall outside it. Specifically, it was proved in [18] that
$$\begin{aligned} \begin{aligned} {\tilde{x}}_1&\in [-1,\,1] \qquad \text {if and only if} \qquad \frac{p_{n+1}(-1)}{p_{n-1}(-1)}\ge \beta _n, \\ {\tilde{x}}_{n+1}&\in [-1,\,1] \qquad \text {if and only if} \qquad \frac{p_{n+1}(1)}{p_{n-1}(1)}\ge \beta _n. \end{aligned} \end{aligned}$$
More precisely [18, Theorem 4], if the following conditions are satisfied
$$\begin{aligned} {\left\{ \begin{array}{ll} \alpha \ge -\frac{1}{2}, \\ \beta \ge -\frac{1}{2}, \\ (2 \alpha +1)(\alpha +\beta +2)+\frac{1}{2}(\alpha +1)(\alpha +\beta )(\alpha +\beta +1) \ge 0, \\ (2 \beta +1)(\alpha +\beta +2)+\frac{1}{2}(\beta +1)(\alpha +\beta )(\alpha +\beta +1) \ge 0, \end{array}\right. } \end{aligned}$$
(24)
then all the anti-Gauss nodes belong to \([-1,1]\). From now on, we will assume that the parameters of the weight function w satisfy (24).
Let us remark that some classical Jacobi weights, such as the Legendre weight (\(\alpha =\beta =0\)) and the Chebychev weights of the first (\(\alpha =\beta =-1/2\)), second (\(\alpha =\beta =1/2\)), third (\(\alpha =-1/2\), \(\beta =1/2\)), and fourth (\(\alpha =1/2\), \(\beta =-1/2\)) kind, satisfy conditions (24).
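Conditions (24) are straightforward to check numerically; the following snippet (illustrative) verifies them for the five classical weights just listed:

```python
def satisfies_24(a, b):
    """Check conditions (24) on the Jacobi parameters alpha, beta."""
    return (a >= -0.5 and b >= -0.5
            and (2*a + 1)*(a + b + 2) + 0.5*(a + 1)*(a + b)*(a + b + 1) >= 0
            and (2*b + 1)*(a + b + 2) + 0.5*(b + 1)*(a + b)*(a + b + 1) >= 0)

# Legendre and the four Chebychev weights all satisfy (24)
for a, b in [(0, 0), (-0.5, -0.5), (0.5, 0.5), (-0.5, 0.5), (0.5, -0.5)]:
    assert satisfies_24(a, b)
```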
Let us also emphasize that the anti-Gauss nodes might include the endpoints \(\pm 1\). This happens, for instance, with the Chebychev weights of the first (\({\tilde{x}}_1=-1\) and \({\tilde{x}}_{n+1}=1\)), third (\({\tilde{x}}_{n+1}=1\)), and fourth (\({\tilde{x}}_1=-1\)) kind.
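The following sketch illustrates the bracketing property (18) and the error bound for the averaged rule (19), reusing the gauss_jacobi and anti_gauss functions defined above. The test integrand is a hypothetical choice; the bracketing is guaranteed for polynomials and is observed numerically here:

```python
import numpy as np
# gauss_jacobi and anti_gauss as defined in the earlier sketches

a, b, n = 0.0, 0.0, 5                    # Legendre weight, hypothetical choice
f = np.exp                               # smooth test integrand
I_exact = np.exp(1) - np.exp(-1)         # exact value of I(f) for w = 1

xg, wg = gauss_jacobi(n, a, b)
xa, wa = anti_gauss(n, a, b)
Gn, Gt = wg @ f(xg), wa @ f(xa)
avg = 0.5 * (Gn + Gt)                    # averaged Gaussian rule (19)

print(min(Gn, Gt) <= I_exact <= max(Gn, Gt))      # bracketing (18)
print(abs(I_exact - avg) <= 0.5 * abs(Gn - Gt))   # error bound for (19)
```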
The next theorem gives the nodes and weights of the anti-Gauss rule for the Chebychev weight of the first kind in closed form. It will be useful in Sect. 4. Let us denote by
$$\begin{aligned} T_0(x)=1, \qquad T_n(x) = \cos (n\arccos (x)) = 2^{n-1} p_n(x), \quad n\ge 1, \end{aligned}$$
the first kind Chebychev polynomial of degree n in trigonometric form, where \(p_n(x)\) is the monic polynomial of the same degree; see Sect. 2.2.
Theorem 2
If \(\alpha =\beta =-1/2\), then the nodes and the weights for the anti-Gauss quadrature formula (16) are given by
$$\begin{aligned} {\tilde{x}}_j&=\cos {\left( (n-j+1)\frac{\pi }{n}\right) }, \qquad j=1,\dots ,n+1, \\ {\tilde{\lambda }}_j&= {\left\{ \begin{array}{ll} \dfrac{\pi }{2n}, \quad &{} j=1,n+1 \\ \dfrac{\pi }{n}, &{} j=2,...,n. \end{array}\right. } \end{aligned}$$
Proof
From recurrence (22), since \(\beta _n=\frac{1}{4}\), we have
$$\begin{aligned} {\tilde{p}}_{n+1}(x) = 2^{-n} \left[ T_{n+1}(x) - T_{n-1}(x) \right] = -2^{1-n} U_{n-1}(x) \cdot (1-x^2), \end{aligned}$$
where
$$\begin{aligned} U_{n-1}(x) = \frac{\sin (n\arccos (x))}{\sqrt{1-x^2}}, \quad n=1,2,\ldots , \end{aligned}$$
denotes the Chebychev polynomial of the second kind of degree \(n-1\). Hence the zeros of \({\tilde{p}}_{n+1}\) are \(\pm 1\) together with the zeros \(\cos (k\pi /n)\), \(k=1,\dots ,n-1\), of \(U_{n-1}\), which proves the expression for the nodes.
Now, let us apply (16) to a first kind Chebychev polynomial of degree \(k=0,1,\ldots ,n\). Since \(T_k \in {\mathbb {P}}_{2n+1}\), relation (17) and the exactness of \(G_n\) yield
$$\begin{aligned} {\widetilde{G}}_{n+1}(T_k) = \sum _{j=1}^{n+1} {{\tilde{\lambda }}}_j \cos (k{{\tilde{\theta }}}_j) = \pi \delta _{k,0}, \end{aligned}$$
where \(\delta _{k,0}\) is the Kronecker symbol and \({{\tilde{\theta }}}_j=(n-j+1)\frac{\pi }{n}\). Multiplying both sides by \(\cos (k{{\tilde{\theta }}}_r)\) and summing over k, we obtain
$$\begin{aligned} \sum _{j=1}^{n+1} {{\tilde{\lambda }}}_j \sum _{k=0}^n{''} \cos (k{{\tilde{\theta }}}_j) \cos (k{{\tilde{\theta }}}_r) = \pi \sum _{k=0}^n{''} \delta _{k,0} \cos (k{{\tilde{\theta }}}_r), \end{aligned}$$
where the double prime means that the first and the last terms of the summation are halved. The expression for the weights follows from the trigonometric identity
$$\begin{aligned} \sum _{k=0}^n{''} \cos (k{{\tilde{\theta }}}_j) \cos (k{{\tilde{\theta }}}_r) = {\left\{ \begin{array}{ll} n, \quad &{} j=r=1,n+1, \\ \frac{1}{2}n, &{} j=r=2,\ldots ,n, \\ 0 &{} j\ne r. \end{array}\right. } \end{aligned}$$
\(\square \)
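A minimal numerical check of Theorem 2, comparing the closed-form nodes and weights with those produced by the eigenvalue construction (the anti_gauss function from the earlier sketch); eigh returns the nodes in ascending order, matching the ordering in the theorem:

```python
import numpy as np
# anti_gauss as defined in the earlier sketch

n = 8
xt, wt = anti_gauss(n, -0.5, -0.5)           # Chebychev weight of the first kind

j = np.arange(1, n + 2)
x_thm = np.cos((n - j + 1) * np.pi / n)      # nodes from Theorem 2 (ascending)
w_thm = np.full(n + 1, np.pi / n)
w_thm[[0, -1]] = np.pi / (2 * n)             # halved weights at the endpoints

assert np.allclose(xt, x_thm) and np.allclose(wt, w_thm)
```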