
Extensions of Gauss Quadrature Via Linear Programming

Foundations of Computational Mathematics

Abstract

Gauss quadrature is a well-known method for estimating the integral of a continuous function with respect to a given measure as a weighted sum of the function evaluated at a set of node points. Gauss quadrature is traditionally developed using orthogonal polynomials. We show that Gauss quadrature can also be obtained as the solution to an infinite-dimensional linear program (LP): minimize the \(n\)th moment among all nonnegative measures that match the \(0\) through \(n-1\) moments of the given measure. While this infinite-dimensional LP provides no computational advantage in the traditional setting of integration on the real line, it can be used to construct Gauss-like quadratures in more general settings, including arbitrary domains in multiple dimensions.
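To make the infinite-dimensional LP of the abstract concrete, here is a small numerical sketch (ours, not the authors'): take \(\Omega =[-1,1]\) with \(q\) the Lebesgue measure, restrict the measure to a fine grid, and solve the resulting finite LP with `scipy.optimize.linprog`. The grid size, the choice \(n=4\), and the solver are arbitrary assumptions. The optimal value approaches \(2/9\), the fourth moment of the two-point Gauss rule, and the optimal weights concentrate near the Gauss nodes \(\pm 1/\sqrt{3}\).

```python
# Illustrative sketch (not from the paper): discretize the moment-matching LP
# on Omega = [-1, 1] with q = Lebesgue measure. Minimize the n-th moment
# subject to matching moments 0 through n-1 over nonnegative grid measures.
import numpy as np
from scipy.optimize import linprog

n = 4  # match moments 0..n-1, minimize the n-th moment
x = np.linspace(-1.0, 1.0, 2001)            # candidate node locations (grid)
A_eq = np.vstack([x**i for i in range(n)])  # moment constraints
b_eq = np.array([2.0 / (i + 1) if i % 2 == 0 else 0.0 for i in range(n)])
c = x**n                                    # objective: the n-th moment

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
w = res.x
nodes = x[w > 1e-8]
print(res.fun)   # ~2/9, the 4th moment of the 2-point Gauss rule
print(nodes)     # clustered near +-1/sqrt(3) ~ +-0.5774
```

The LP solver returns a basic (vertex) solution, so the support automatically shrinks to a handful of grid points near the Gauss nodes, mirroring the sparsity mechanism described in the paper.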


References

  1. M. Abramowitz and I. Stegun. Handbook of Mathematical Functions, With Formulas, Graphs, and Mathematical Tables. Dover Publications, Incorporated, New York, 1964.

  2. C. Aliprantis and K. Border. Infinite Dimensional Analysis: A Hitchhiker’s Guide. Springer, New York, 3rd edition, 2006.

  3. E. Anderson and P. Nash. Linear Programming in Infinite-dimensional Spaces: Theory and Applications. Wiley, New York, 1987.

  4. E. Anderson and A. Philpott. Infinite Programming: Proceedings of an International Symposium on Infinite Dimensional Linear Programming, Churchill College, Cambridge, United Kingdom, September 7–10, 1984. Springer-Verlag, New York, 1985.

  5. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, 2004.

  6. E. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.

  7. S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1999.

  8. P. Davis. A construction of nonnegative approximate quadratures. Mathematics of Computation, 21(100):578–582, 1967.

  9. P. Davis and P. Rabinowitz. Methods of Numerical Integration. Academic Press, Orlando, 2nd edition, 1984.

  10. A. Fiacco and K. Kortanek. Semi-infinite Programming and Applications: An International Symposium, Austin, Texas, September 8–10, 1981. Springer-Verlag, New York, 1983.

  11. C. Gauss. Methodus nova integralium valores per approximationem inveniendi. Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores, 3:39–76, 1814.

  12. A. Glaser, X. Liu, and V. Rokhlin. A fast algorithm for the calculation of the roots of special functions. SIAM Journal on Scientific Computing, 29(4):1420–1438, 2007.

  13. M. Goberna and M. López. Linear Semi-infinite Optimization. John Wiley, 1998.

  14. G. Golub and J. Welsch. Calculation of Gauss quadrature rules. Mathematics of Computation, 23(106):221–230, 1969.

  15. P. Hammer and A. Stroud. Numerical evaluation of multiple integrals II. Mathematics of Computation, 12:272–280, 1958.

  16. J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I: Fundamentals. Springer, New York, 1993.

  17. S. Karlin and W. Studden. Tchebycheff Systems: With Applications in Analysis and Statistics. Interscience Publishers, New York, 1966.

  18. M. Krein. The ideas of P. L. Chebyshev and A. A. Markov in the theory of limiting values of integrals and their further development. American Mathematical Society Translations (Series 2), 12:1–122, 1959.

  19. V. Krylov. Approximate Calculation of Integrals. Macmillan, New York, 1962.

  20. J. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11:796–817, 2001.

  21. D. Luenberger. Optimization by Vector Space Methods. John Wiley & Sons, New York, 1967.

  22. D. Luenberger and Y. Ye. Linear and Nonlinear Programming. Springer, New York, third edition, 2008.

  23. J. Ma, V. Rokhlin, and S. Wandzura. Generalized Gaussian quadrature rules for systems of arbitrary functions. SIAM Journal on Numerical Analysis, 33(3):971–996, 1996.

  24. A. Markov. On the limiting values of integrals in connection with interpolation. Zapiski Imperatorskoj Akademii Nauk po Fiziko-matematiceskomu Otdeleniju, 5:146–230, 1898.

  25. G. Murty and S. Kabadi. Some NP-complete problems in quadratic and nonlinear programming. Mathematical Programming, 39(2):117–129, 1987.

  26. J. Nocedal and S. Wright. Numerical Optimization. Springer, New York, 2nd edition, 2006.

  27. W. Peirce. Numerical integration over the planar annulus. Journal of the Society for Industrial and Applied Mathematics, 5(2):66–73, 1957.

  28. J. Powell. Approximation Theory and Methods. Cambridge University Press, Cambridge, 1981.

  29. R. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1996.

  30. A. Stroud. Quadrature methods for functions of more than one variable. Annals of the New York Academy of Sciences, 86(3):776–791, 1960.

  31. A. Stroud and D. Secrest. Gaussian Quadrature Formulas. Prentice-Hall, Englewood Cliffs, 1966.

  32. E. Süli and D. Mayers. An Introduction to Numerical Analysis. Cambridge University Press, Cambridge, 2003.

  33. R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society (Series B), 58:267–288, 1996.

  34. B. Vioreanu. Spectra of Multiplication Operators as a Numerical Tool. PhD thesis, Yale University, 2012.

  35. H. Xiao and Z. Gimbutas. A numerical algorithm for the construction of efficient quadrature rules in two and higher dimensions. Computers & Mathematics with Applications, 59(2), 2010.

Acknowledgments

We thank Pablo Parrilo for helpful conversations; indeed, this paper grew out of conversations with him. We also thank Paul Constantine for his helpful feedback. Ernest Ryu is supported by the Department of Energy Office of Science Graduate Fellowship Program (DOE SCGF) and by the Simons Foundation.

Author information

Corresponding author

Correspondence to Ernest K. Ryu.

Additional information

Michael Overton.

Appendices

Appendix 1: Notation

We write \(C_{b}\) for the Banach space of bounded continuous functions on \(\Omega \). Write \(\Vert \cdot \Vert _{\infty }:C_{b}\rightarrow {\text{ R }}\) for the supremum norm defined as

$$\begin{aligned} \Vert f\Vert _{\infty }\mathop {=}\limits ^{\text {def}}\sup _{x\in \Omega }|f (x)|. \end{aligned}$$

We write \(\mathcal {M}\) for the Banach space of finite signed Borel measures on \(\Omega \) and for \(\mu \in \mathcal {M}\) write \(\mu \ge 0\) to denote that \(\mu \) is unsigned. An unsigned measure \(\mu \) is finite if \(\mu (\Omega )<\infty \).

The support of \(\mu \in \mathcal {M}\) is defined as

$$\begin{aligned} {\mathbf {supp}\,}\mu = \left\{ x\in \Omega | \forall \,r>0,\, \mu (B (x,r))>0 \right\} , \end{aligned}$$

and \(|{\mathbf {supp}\,}\mu |\) denotes the cardinality of \({\mathbf {supp}\,}\mu \) as a set.

We write \(\mathcal {N}\) for the Banach space of normal signed Borel charges of bounded variation. A charge is a set function defined on an algebra and is like a measure except that it is only finitely (and not necessarily countably) additive. A charge is Borel if it is defined on the algebra generated by open sets. A charge is normal if \(\mu (A)=\sup \{\mu (F):F\subseteq A,\,F\text { closed}\} =\inf \{\mu (G):A\subseteq G,\,G\text { open}\}\). A charge is tight if \(\mu (A)=\sup \{\mu (K):K\subseteq A,\,K\text { compact}\}\). For \(\mu \in \mathcal {N}\) write \(\mu \ge 0\) to denote that \(\mu \) is unsigned. An unsigned charge \(\mu \) is of bounded variation if \(\mu (\Omega )<\infty \). Integration with respect to a charge is defined similarly to Lebesgue integration [2, §14.2].

By Theorem 5, \(\mathcal {N}\) is isomorphic to the dual of \(C_b\). Thus, for any \(\mu \in \mathcal {N}\) and \(f\in C_b\) we can view \(\mu \) as a linear functional acting on \(f\), and we denote this action as

$$\begin{aligned} \langle f,\mu \rangle \mathop {=}\limits ^{\text {def}}\int ^{}_{\Omega }f\;d \mu . \end{aligned}$$

Appendix 2: Miscellaneous Theorems

Theorem 3

If \({\mathbf {supp}\,}q=\Omega \), a Gauss quadrature is the only quadrature that integrates \(1,x,\ldots , x^{n-1}\) exactly with \(n/2\) or fewer nodes [31].
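Theorem 3 is easy to sanity-check numerically. The sketch below (ours, not from the paper) uses NumPy's Gauss–Legendre routine with the arbitrary choice \(n=6\), i.e., a \(k=n/2=3\)-node rule, and \(q\) the Lebesgue measure on \([-1,1]\).

```python
# Numerical check of Theorem 3 for q = Lebesgue measure on [-1, 1]:
# the k-point Gauss rule (k = n/2) integrates 1, x, ..., x^{n-1} exactly.
import numpy as np

n = 6
k = n // 2
nodes, weights = np.polynomial.legendre.leggauss(k)  # 3-point Gauss-Legendre

for i in range(n):
    exact = 2.0 / (i + 1) if i % 2 == 0 else 0.0  # integral of x^i over [-1, 1]
    quad = np.sum(weights * nodes**i)
    assert abs(quad - exact) < 1e-12
```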

Theorem 4

On \(\Omega \subseteq {\text{ R }}^d\), every tight finite Borel charge is a measure [2, §12.1]. (Precisely speaking, the charge has a unique extension to the Borel \(\sigma \)-algebra that is a measure.)

In the proof of strong duality we will encounter charges, which are generalizations of measures. Fortunately, Theorem 4 will allow us to conclude that the charge is in fact a measure.

Theorem 5

For a domain \(\Omega \) not necessarily compact, the dual space \(C_b^{*}\) is isomorphic to \(\mathcal {N}\), the space of signed normal Borel charges of bounded variation [2, §14.3].

Theorem 5, used in the proof of Theorem 1, is analogous to the Riesz–Markov theorem and provides an explicit representation of the dual space \(C_b^*\).

Appendix 3: Proof of Main Results

1.1 Appendix 3.1: Proof of Theorem 1

We will call the optimization problem (1) the primal problem and the following problem the dual problem.

$$\begin{aligned} \begin{array}{ll} \text{ maximize }&{} \sum \limits ^{n-1}_{i=0}\nu _{i+1} \langle x^i,q\rangle \\ \text{ subject } \text{ to } &{} \lambda = x^{n} -\sum \limits ^{n-1}_{i=0}\nu _{i+1}x^{i},\\ &{}\lambda (x)\ge 0 , \quad \text { for all }x\in \Omega , \end{array} \end{aligned}$$
(5)

where \(\nu \in {\text{ R }}^n\) is the optimization variable. We define \(\mu ^\star \) and \(\nu ^\star \) as solutions of the primal and dual problems, respectively. We write \(\lambda ^\star \) for the polynomial that corresponds to \(\nu ^\star \). Let \(p^\star \) and \(d^\star \) denote the optimal values of the primal and dual problems.

In the proof we first introduce a new LP, problem (6), that is similar to the original LP, problem (1), but different in that it has a larger search space. We show that problem (6) is the dual of problem (5) and that strong duality and complementary slackness hold. The nonnegative polynomial \(\lambda ^\star \) can have at most \(n/2\) roots, and this will allow us to conclude that in fact problems (6) and (1) must share the same solution and that the solution must be a Gauss quadrature.

Proof

Define

$$\begin{aligned} \psi (x)= \left\{ \begin{array}{ll} 1&{} \text { if }|x|\le 1\\ \frac{1}{x^{n}}&{} \text {otherwise} \end{array} \right. \end{aligned}$$

and the norm

$$\begin{aligned} \Vert f\Vert _{\infty ,\psi }=\Vert f\psi \Vert _{\infty }=\sup _{x\in \Omega }|f (x)\psi (x)|. \end{aligned}$$

Let \(D\) be the set of continuous real-valued functions \(f\) defined on \(\Omega \) such that \(\Vert f\Vert _{\infty ,\psi }<\infty \). In other words, \(D=\frac{1}{\psi }C_b\) is the space of continuous functions that grow at a rate of at most \(\mathcal {O}(x^n)\).

The map \(T:D\rightarrow C_b\), where \(T:f \mapsto f\psi \), is an isometric lattice isomorphism between \((D,\Vert \cdot \Vert _{\infty ,\psi })\) and \((C_b,\Vert \cdot \Vert _{\infty })\) [2, §14.3]. Thus, \((D,\Vert \cdot \Vert _{\infty ,\psi })\) is a Banach space. Moreover, since \(C_b^*\cong \mathcal {N}\) by Theorem 5, the isomorphism tells us that \(D^{*}\cong \psi \mathcal {N}\), where \(\mathcal {N}\) is the Banach space of normal Borel charges of bounded variation.

Consider the following variant of the primal problem (1):

$$\begin{aligned} \text{ minimize }&\langle x^{n},\mu \rangle \nonumber \\ \text{ subject } \text{ to }&\langle x^{i},\mu \rangle = \langle x^{i},q\rangle , \quad i=0,\ldots , n-1,\nonumber \\&\mu \ge 0, \end{aligned}$$
(6)

where \(\mu \in \psi \mathcal {N}\cong D^*\) is the optimization variable. Note that \(\mu \), which used to be in \(\mathcal {M}\), now resides in a larger space. Weak duality between (6) and (5) can be readily shown via standard arguments. Both primal and dual problems are feasible because \(\mu =q\) and \(\nu =0\) are feasible points, and therefore \(-\infty <d^\star \le p^\star <\infty \).

Now we can apply Lagrange duality, which states: if \(d^\star \) is finite (which we have shown) and if there is a strictly feasible \(\nu \), then strong duality holds, a primal solution exists, and complementary slackness holds [21, §8.6]. The point \(\nu =e_1\) is strictly feasible, and this establishes strong duality and the existence of a primal solution \(\mu ^\star \).

We now claim that the dual problem attains the supremum, i.e., a solution \(\nu ^\star \) exists. We prove this in Appendix 3.2.

Next we will show that \(\mu ^\star \) is a measure (not just a charge) and that \({\mathbf {supp}\,}\mu ^\star \subseteq \{x_1,x_2,\ldots , x_k\}\), where \(x_1,x_2,\ldots , x_{k}\) are the roots of the polynomial \(\lambda ^\star \). Complementary slackness states that \(\left< \lambda ^\star ,\mu ^{\star }\right>=0\). Remember that \(\lambda ^\star \ge 0\) by definition. For any set \(A\subseteq {\text{ R }}\backslash \bigcup ^k_{i=1}B(x_i,\varepsilon )\), where \(\varepsilon >0\) is small, there exists a small enough \(\delta >0\) such that \(\delta 1_A\le \lambda ^\star \), where \(1_A\) is the indicator function, and we have

$$\begin{aligned} \delta \mu ^\star (A)= \int _\Omega \delta 1_A\;d\mu ^\star \le \int _\Omega \lambda ^\star \;d\mu ^\star =0. \end{aligned}$$

So we conclude \(\mu ^\star (A)=0\). Now by the normality of the charge \(\mu ^\star \),

$$\begin{aligned} \mu ^\star ((x_i-\varepsilon ,x_i))=\sup \{\mu ^\star (F):F \text { closed and }F\subseteq (x_i-\varepsilon ,x_i)\}. \end{aligned}$$

However, by the previous argument, \(\mu ^\star (F)=0\) for any closed \(F\) such that \(F\subseteq (x_i-\varepsilon ,x_i)\). Therefore, \(\mu ^\star ((x_i-\varepsilon ,x_i))=0\) and, by the same logic, \(\mu ^\star ((x_i,x_i+\varepsilon ))=0\). Thus, \(\mu ^\star ({\text{ R }}\backslash \{x_1,x_2,\ldots , x_k\})=0\), and for any measurable set \(A\) we have \(\mu ^\star (A)=\mu ^\star (A\cap \{x_1,x_2,\ldots , x_k\})\). In particular, \(\{x_1,x_2,\ldots , x_k\}\) is compact, and this establishes the tightness of \(\mu ^\star \). Thus, by Theorem 4, we conclude that \(\mu ^\star \) is a discrete measure and can have point masses only at \(x_1,x_2,\ldots ,x_k\).

Since \(\mu ^\star \) is a discrete measure,

$$\begin{aligned} \mu ^\star \in \{\mu \in \mathcal {M}|\mu \text { is feasible for the primal problem (1)}\} \subseteq \psi \mathcal {N}. \end{aligned}$$

Therefore, the primal problem can be simplified by searching over the feasible measures in \(\mathcal {M}\) and not over the entire superspace \(\psi \mathcal {N}\). In other words, \(\mu ^\star \), the solution to problem (6), is a solution to the original problem (1).

Moreover, since \(\lambda ^\star \) is a nonnegative polynomial of degree \(n\), it can have at most \(n/2\) distinct roots in \(\Omega \), and therefore \(|{\mathbf {supp}\,}\mu ^{\star }|\le n/2\). In other words, \(\mu ^\star \) is equivalent to a quadrature that integrates \(1,x,x^2,\ldots ,x^{n-1}\) exactly with \(n/2\) or fewer nodes. Thus, by Theorem 3, we conclude that \(\mu ^{\star }\) must be the Gauss quadrature and that the solution \(\mu ^\star \) is unique. \(\square \)
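The objects in this proof can be written down explicitly in a small case. The closed forms below are our own worked example, not taken from the text: for \(q\) the Lebesgue measure on \([-1,1]\) and \(n=4\), the dual optimal polynomial is \(\lambda ^\star (x)=(x^2-1/3)^2\), the square of the monic degree-2 Legendre polynomial, its roots are the two Gauss nodes \(\pm 1/\sqrt{3}\), and both optimal values equal \(2/9\).

```python
# Worked example (ours, not from the paper): verify strong duality and
# complementary slackness for q = Lebesgue measure on [-1, 1] and n = 4.
# Dual optimal polynomial: lambda*(x) = (x^2 - 1/3)^2, whose roots are the
# two Gauss nodes +-1/sqrt(3).
import numpy as np

nu = np.array([-1.0 / 9.0, 0.0, 2.0 / 3.0, 0.0])  # coeffs of x^4 - lambda*(x)
moments = np.array([2.0, 0.0, 2.0 / 3.0, 0.0])    # <x^i, q>, i = 0..3
dual_value = nu @ moments                          # = 2/9

gauss_nodes = np.array([-1.0, 1.0]) / np.sqrt(3.0)
gauss_weights = np.array([1.0, 1.0])
primal_value = np.sum(gauss_weights * gauss_nodes**4)  # = 2/9

lam = lambda t: t**4 - np.polyval(nu[::-1], t)     # lambda*(t) = (t^2 - 1/3)^2
t = np.linspace(-1, 1, 1001)
assert np.all(lam(t) >= -1e-12)                    # dual feasibility
assert np.allclose(lam(gauss_nodes), 0.0)          # complementary slackness
assert abs(dual_value - primal_value) < 1e-12      # strong duality
```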

1.2 Appendix 3.2: Attainment of Dual Optimum

Lemma 1

Let \(K\subseteq {\text{ R }}^n\) be a proper cone. Assume \(u_0\in K^*\) has the property that for any \(v\in K\) we have \(v^Tu_0>0\), unless \(v=0\). Then \(u_0\in {\mathbf {int}\,}(K^*)\).

Proof

Assume for contradiction that \(u_0\in \partial K^*\). Then there exists a nonzero separating hyperplane \(\lambda \) such that \(\lambda ^Tu_0=0\) and \(\lambda ^Tu\ge 0\) for any \(u\in K^*\). However, this implies that \(\lambda \in K^{**}=K\), and this contradicts the assumption that \(v^Tu_0>0\) for any nonzero \(v\in K\). Thus we conclude that \(u_0\in {\mathbf {int}\,}(K^*)\). \(\square \)

Theorem 6

A solution to the dual problem (5) is attained.

Proof

Let \(K\) be the convex cone defined as

$$\begin{aligned} K=\{y\in {\text{ R }}^{n+1}|y_{n+1}x^n+y_{n}x^{n-1}+\cdots +y_2x+y_1\ge 0 \text { for }x\in \Omega \}, \end{aligned}$$

i.e., the cone of coefficients of nonnegative polynomials.

Also, let \(M\) be the convex cone defined as

$$\begin{aligned} M=\{ m\in {\text{ R }}^{n+1} : m=\left( \langle 1,\mu \rangle , \langle x,\mu \rangle , \ldots , \langle x^{n-1},\mu \rangle , \langle x^n,\mu \rangle \right) ,\, \mu \ge 0 \}, \end{aligned}$$
(7)

i.e., the cone of possible moments. We note that \(K^*={\mathbf {cl}\,}M\), where \({\mathbf {cl}\,}M\) denotes the closure of \(M\), and that \(K^{**}=K\) as \(K\) is a proper cone [5, pp. 65–66]. The following duality argument hinges on these facts.

Let \(m_0\in {\text{ R }}^{n+1}\) be the moment vector, i.e., \((m_0)_{i+1}=q(x^i)\) for \(i=0,\ldots ,n\). Consider the following problem:

$$\begin{aligned} \text{ minimize }&x_{n+1}\nonumber \\ \text{ subject } \text{ to }&x_{i}=(m_0)_{i},\quad i=1,\ldots , n,\nonumber \\&x\in K^*, \end{aligned}$$
(8)

where \(x\in {\text{ R }}^{n+1}\) is the optimization variable. (Since \(K^*={\mathbf {cl}\,}M\), problem (8) is in fact equivalent to problem (1).) Lagrange duality tells us that the dual of (8) is (5) and that a dual solution \(\nu ^\star \) exists if (8) has a strictly feasible point (i.e., if Slater’s constraint qualification holds) [5, §5.3]. We omit the Lagrange dual derivation because it is routine and involves no unexpected tricks.

Consider any \(y\in K\) such that \(y\ne 0\). Then

$$\begin{aligned} y^Tm_0= \int _\Omega \sum _{i=0}^n y_{i+1}x^i\;dq>0 \end{aligned}$$

since by definition \(y\in K\) implies \(\sum _{i=0}^ny_{i+1}x^i\ge 0\) and since \({\mathbf {supp}\,}q=\Omega \). Thus, by Lemma 1, we see that \(m_0\in {\mathbf {int}\,}(K^*)\), i.e., \(m_0\) is strictly feasible. Hence, we conclude that a dual solution \(\nu ^\star \) exists. \(\square \)
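The strict feasibility argument can be illustrated numerically (our sketch, with an arbitrary choice of \(q\) and \(n\)): squared polynomials lie in \(K\), and pairing their coefficient vectors with the moment vector \(m_0\) of the Lebesgue measure on \([-1,1]\) always gives a strictly positive number, exactly as in the displayed inequality.

```python
# Illustration (ours) of the strict feasibility argument: for y in K arising
# from a squared polynomial s(x)^2, the pairing y^T m_0 equals the integral
# of s^2 with respect to q and is strictly positive. Here q is the Lebesgue
# measure on [-1, 1] and n = 4, so m_0 holds moments 0 through 4.
import numpy as np

m0 = np.array([2.0, 0.0, 2.0 / 3.0, 0.0, 2.0 / 5.0])  # moments of q, i = 0..4

rng = np.random.default_rng(0)
for _ in range(100):
    s = rng.standard_normal(3)  # random degree-2 polynomial, low-to-high coeffs
    y = np.convolve(s, s)       # coefficients of s(x)^2, an element of K
    assert y @ m0 > 0           # y^T m0 = integral of s(x)^2 dq > 0
```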

1.3 Appendix 3.3: Proof of Theorem 2

Proof

We write

$$\begin{aligned} M=\{ m\in {\text{ R }}^{n+1} : m=\left( \langle p^{(0)},\mu \rangle , \langle p^{(1)},\mu \rangle , \ldots , \langle p^{(n-1)},\mu \rangle , \langle r,\mu \rangle \right) ,\, \mu \ge 0 \},\nonumber \\ \end{aligned}$$
(9)

and we will call \(M\) the moment cone. Also, define

$$\begin{aligned} K=\left\{ (p^{(0)}(x),p^{(1)}(x), \ldots , p^{(n-1)}(x),r(x)) \in {\text{ R }}^{n+1}: x\in \Omega \right\} . \end{aligned}$$

Since the \(p^{(i)}\) and \(r\) are continuous, \(K\) is the image of a compact set under a continuous function and therefore is compact. We assume as before that \(p^{(0)}=1\) and, therefore, \(0\notin \mathbf {conv}K\), where \(\mathbf {conv}K\) denotes the convex hull of \(K\). Therefore, \({\mathbf {cone}\,}K\), the conical hull of \(K\), is closed [16, §1.4].

Now we prove \({\mathbf {cone}\,}K=M\). By choosing \(\mu \) to have point masses at a finite number of points in (9), we can produce any element in \({\mathbf {cone}\,}K\) and therefore \({\mathbf {cone}\,}K\subseteq M\). Now assume for contradiction that \({\mathbf {cone}\,}K\ne M\). In other words, assume there exists an \(m\in M\) such that \(m\notin {\mathbf {cone}\,}K\). Since \({\mathbf {cone}\,}K\) is a closed convex set, there must be a strictly separating hyperplane \(\lambda \) such that \(\lambda ^Tm<0\) and \(\lambda ^Tn\ge 0\) for any \(n\in {\mathbf {cone}\,}K\). However, since \(m\in M\), there must exist a corresponding measure \(\mu \ge 0\) that produced \(m\) in (9), i.e.,

$$\begin{aligned} m_{i+1}=\langle p^{(i)},\mu \rangle \quad \text {for }i=0,\ldots ,n-1 \quad \text {and}\quad m_{n+1}=\langle r,\mu \rangle . \end{aligned}$$

Therefore,

$$\begin{aligned} \lambda ^Tm=\left< \lambda _{n+1}r+\sum ^{n-1}_{i=0}\lambda _{i+1}p^{(i)},\mu \right> <0, \end{aligned}$$

and this in particular implies that

$$\begin{aligned} \lambda _{n+1}r(x)+\sum ^{n-1}_{i=0}\lambda _{i+1}p^{(i)}(x)<0 \end{aligned}$$

for some \(x\in \Omega \). However, since by construction \(\lambda ^Tn\ge 0\) for all \(n\in K\subseteq {\mathbf {cone}\,}K\), i.e.,

$$\begin{aligned} \lambda _{n+1}r(x)+\sum ^{n-1}_{i=0}\lambda _{i+1}p^{(i)}(x)\ge 0 \quad \text { for all }x\in \Omega , \end{aligned}$$

we have a contradiction. Therefore, \({\mathbf {cone}\,}K= M\) and \(M\) is closed.

Now consider the optimization problem

$$\begin{aligned} \text{ minimize }&m_{n+1}\\ \text{ subject } \text{ to }&m_i=q(p^{(i)}), \quad i=0,\ldots ,n-1,\\&m\in M, \end{aligned}$$

where \(m\in {\text{ R }}^{n+1}\) is the optimization variable. Note that this problem is equivalent to the original problem (2). Since \(M\) is closed, so is the feasible set. Moreover, the feasible set is bounded because for any \(m\in M\), the last coordinate \(m_{n+1}\) (the only one that is not fixed) is bounded since

$$\begin{aligned} |m_{n+1}|=|\mu (r)|\le \Vert r\Vert _\infty \mu (1)=\Vert r\Vert _\infty q(1)<\infty \end{aligned}$$

for some nonnegative measure \(\mu \in \mathcal {M}\). Therefore, the feasible set is compact. Finally, the feasible set is nonempty because the moment vector \(m\in M\) generated by the measure \(q\) is itself feasible. Therefore, there exist an optimal \(m^\star \) for the reduced problem and a \(\mu ^\star \) that generated \(m^\star \); this \(\mu ^\star \) is optimal for the original problem (2).

Now, by Carathéodory’s theorem on cones [29, §17], \(m^\star \in {\mathbf {cone}\,}K= M\) can be expressed as a linear combination of at most \(n+1\) vectors in \(K\). This linear combination is equivalent to a measure with point masses at at most \(n+1\) locations. In other words, \(m^\star \) can be produced (in the sense of (9)) by a measure \(\mu ^\star \), where \(|{\mathbf {supp}\,}\mu ^\star |\le n+1\). This \(\mu ^\star \) is a solution to problem (2).

Finally, we can further reduce the support of this solution. Given a solution \(\mu ^\star \) with finite support, we can restrict problem (2) to searching only over measures that are supported on \({\mathbf {supp}\,}\mu ^\star \). This reduces problem (2) to a finite-dimensional LP, which always has a solution supported on \(n\) or fewer points [22, §2.4]. \(\square \)
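The support bound of Theorem 2 can be observed computationally. The sketch below is our construction; the functions, grid, and objective are arbitrary choices. We discretize the measure onto a grid on \(\Omega =[0,1]\), match the \(q\)-moments of \(p^{(0)}=1\), \(p^{(1)}=x\), \(p^{(2)}=e^x\), and minimize \(\langle r,\mu \rangle \) with \(r(x)=x^3\). A basic optimal solution of the LP is supported on at most \(n=3\) grid points, as in the final finite-dimensional reduction of the proof.

```python
# Sketch (ours): a generalized quadrature in the spirit of Theorem 2.
# Match the q-moments of p = (1, x, exp(x)) on Omega = [0, 1], q = Lebesgue,
# while minimizing <r, mu> for r(x) = x^3, over nonnegative grid measures.
import numpy as np
from scipy.optimize import linprog

x = np.linspace(0.0, 1.0, 1001)
funcs = [np.ones_like(x), x, np.exp(x)]  # p^(0), p^(1), p^(2)
A_eq = np.vstack(funcs)
b_eq = np.array([1.0, 0.5, np.e - 1.0])  # <p^(i), q> for q = Lebesgue on [0,1]
c = x**3                                 # r(x) = x^3

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
support = np.flatnonzero(res.x > 1e-10)
# A basic optimal solution is supported on at most n = 3 grid points,
# matching the support bound obtained in the proof of Theorem 2.
print(len(support), x[support])
```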

Cite this article

Ryu, E.K., Boyd, S.P. Extensions of Gauss Quadrature Via Linear Programming. Found Comput Math 15, 953–971 (2015). https://doi.org/10.1007/s10208-014-9197-9
