1 Introduction

The problem of determining quadrature rules for triangles, tetrahedra and, in general, for d-dimensional simplicial domains has attracted the attention of a number of scholars from the middle of the nineteenth century up to the present day (see [13] and the references therein). Although many papers focus on quadrature rules for triangles [3, 10, 12, 16], only a limited literature is available on integration in three or more dimensions [4, 14, 17]. In this paper, we approach the problem of integration over general d-dimensional simplices by means of special integration formulas which use values of the integrand function f and of its derivatives mainly at points on the boundary of the d-dimensional simplex S. When the nodes lie only on the boundary of S, these formulas are called boundary type quadrature formulas; they are used when the values of f and its derivatives inside the simplex are not given or are not easily determinable. These formulas find application, for instance, in the numerical solution of boundary value problems for partial differential equations (see [9] and the references therein).

To reach our goal, we follow the approach proposed in Refs. [1, 2]. More precisely, we approximate the integrand function f with a polynomial interpolant \(L_r^S[f]({\varvec{x}})\) which uses function and derivative data up to a fixed order \(r\in \mathbb {N}\) at the vertices of S, i.e.

$$\begin{aligned} f({\varvec{x}})=L_r^S[f]({\varvec{x}})+R_{r}^S[f]({\varvec{x}}),\quad {\varvec{x}}\in S , \end{aligned}$$
(1.1)

and then, we integrate both sides of (1.1) over the d-dimensional simplex S to get the quadrature formula

$$\begin{aligned} \int _S f({\varvec{x}}){\text {d}}{\varvec{x}}=Q^{S}_{r}[f]+E_r^S[f] \end{aligned}$$
(1.2)

where

$$\begin{aligned} Q^{S}_{r}[f]=\int _{S }L_r^S[f]({\varvec{x}}){\text {d}}{\varvec{x}}\quad \text {and}\quad E_r^S[f]=\int _S R_{r}^S[f]({\varvec{x}}){\text {d}}{\varvec{x}}. \end{aligned}$$

The obtained quadrature formula (1.2) uses function and derivative data up to the order r at the vertices of S and has degree of exactness \(1+r\), i.e. \(E_{r}^S[f]=0\) whenever f is a polynomial in d variables of total degree at most \(1+r\). The main feature of the quadrature formula \(Q^{S}_{r}[f]\) is that it relies only on function and derivative data up to the order r at the vertices of S. This motivates us to look for approximations of those derivatives in order to obtain quadrature formulas which do not use any derivative data. To this end, we restrict to the case \(r=1\) and, by following the technique described in Ref. [6], we approximate the first-order derivative data by three-point finite difference approximations. According to the choice of the approximation of the derivative data, we get different quadrature formulas with degree of exactness 2; to increase the degree of exactness, we combine one of them with a multivariate Simpson rule and obtain a quadrature formula with degree of exactness 3 (see Sect. 3). Finally, we restrict to the two-dimensional case (see Sect. 4) and we provide numerical results to test the approximation accuracy of the proposed formulas (see Sect. 5).

2 A Quadrature Formula on the d-Dimensional Simplex

2.1 Preliminaries and Notations

Let \(S\subset \mathbb {R}^{d}\) be a non-degenerate d-dimensional simplex with vertices \({\varvec{v}}_{0},\dots ,{\varvec{v}}_{d}\in \mathbb {R}^{d}\) and

$$\begin{aligned} A(S)=\left| \begin{array}{cc} 1 &{} {\varvec{v}}_{0} \\ 1 &{} {\varvec{v}}_{1} \\ \vdots &{} \vdots \\ 1 &{} {\varvec{v}}_{d} \end{array} \right| \end{aligned}$$

the signed volume of the parallelepiped spanned by the edge vectors \({\varvec{v}}_{1}-{\varvec{v}}_{0},\dots ,{\varvec{v}}_{d}-{\varvec{v}}_{0}\), so that the volume of S equals \(|A(S)|/d!\). For a point \({\varvec{x}}\in S\) and for each \(l=0,1,\dots ,d\) we denote by \(S_{l}({\varvec{x}})\) the d-dimensional simplex with vertices \({\varvec{v}} _{0},\dots ,{\varvec{v}}_{l-1},{\varvec{x}},{\varvec{v}}_{l+1},\dots ,{\varvec{v}}_{d}\). The barycentric coordinates of \({\varvec{x}}\) with respect to the simplex S are then defined by

$$\begin{aligned} \lambda _{l}({\varvec{x}})=\frac{A(S_{l}({\varvec{x}}))}{A(S)},\quad l=0,1,\dots ,d. \end{aligned}$$
(2.1)
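For illustration, the barycentric coordinates (2.1) can be computed directly as ratios of the determinants \(A(S_{l}({\varvec{x}}))\) and A(S). The following minimal Python sketch assumes NumPy is available; the triangle and the evaluation point are arbitrary illustrative choices, not data from the paper.

```python
import numpy as np

def signed_volume(vertices):
    # A(S): determinant of the matrix whose rows are (1, v_l)
    return np.linalg.det(np.hstack([np.ones((len(vertices), 1)), vertices]))

def barycentric_coordinates(x, vertices):
    # lambda_l(x) = A(S_l(x)) / A(S), where S_l(x) replaces v_l by x, cf. (2.1)
    A = signed_volume(vertices)
    lam = np.empty(len(vertices))
    for l in range(len(vertices)):
        modified = vertices.copy()
        modified[l] = x
        lam[l] = signed_volume(modified) / A
    return lam

# Example: standard triangle; the coordinates are nonnegative and sum to 1
T = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(barycentric_coordinates(np.array([0.2, 0.3]), T))   # [0.5, 0.2, 0.3]
```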

For each \(\alpha =\left( \alpha _{1},\alpha _{2},\dots ,\alpha _{d}\right) \in \mathbb {N}^{d}\) and \({\varvec{x}}=(x_{1},\dots ,x_{d})\in \mathbb {R}^{d}\), as usual, we denote by

$$\begin{aligned} |\alpha |=\alpha _{1}+\alpha _{2}+\dots +\alpha _{d},\quad \alpha !=\alpha _{1}!\dots \alpha _{d}!, \quad {\varvec{x}}^{\alpha }=x_{1}^{\alpha _{1}}x_{2}^{\alpha _{2}}\dots x_{d}^{\alpha _{d}}. \end{aligned}$$

We also set \({\varvec{\lambda }}=(\lambda _{0},\lambda _{1},\lambda _{2},\dots ,\lambda _{d})\) and \({\varvec{\lambda }}_{l}=(\lambda _{0},\dots ,\lambda _{l-1},\lambda _{l+1},\dots ,\lambda _{d})\) for each \(l=0,\dots ,d\). Moreover, we denote by \(D^{\alpha }f=\frac{\partial ^{{\left| \alpha \right| }}f}{ \partial x_{1}^{\alpha _{1}}\partial x_{2}^{\alpha _{2}}\dots \partial x_{d}^{\alpha _{d}}}\) and by

$$\begin{aligned} D_{{\varvec{x}}-{\varvec{v}}}^{k}f=\sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}}\frac{k!}{\alpha !}({\varvec{x}}-{\varvec{v}})^{\alpha }D^{\alpha }f \end{aligned}$$
(2.2)

the k-th order directional derivative of f along the line segment between \({\varvec{x}}\) and \({\varvec{v}}\). Finally, we use the notations

$$\begin{aligned} D_{ij}f=({\varvec{v}}_{i}-{\varvec{v}}_{j})\cdot \nabla f,\quad i,j=0,1,\dots ,d \end{aligned}$$
(2.3)

for the derivative of f along the directed line segment from \({\varvec{v}}_{j}\) to \({\varvec{v}}_{i}\) (as usual, \(\cdot \) denotes the dot product) and

$$\begin{aligned} D_{l}^{\alpha }=D_{0,l}^{\alpha _{1}}D_{1,l}^{\alpha _{2}}\dots D_{l-1,l}^{\alpha _{l}}D_{l+1,l}^{\alpha _{l+1}}\dots D_{d,l}^{\alpha _{d}},\quad l=0,1,\dots ,d, \end{aligned}$$
(2.4)

for the composition of derivatives along the directed sides of the simplex. Under these assumptions, we get the following result.

Lemma 2.1

Let \(f\in C^{r}(S)\), then

$$\begin{aligned} D^{k}_{{\varvec{x}}-{\varvec{v}}_{l}} f({\varvec{v}}_{l})=\sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}}\frac{k!}{\alpha !}D^{\alpha }_{l} f({\varvec{v}}_{l}) {\varvec{\lambda }}_{l}^{\alpha }({\varvec{x}}),\quad l=0,1,\dots ,d, \end{aligned}$$
(2.5)

for any \(k \in \mathbb {N}, k \le r \text { and }\varvec{x}\in \mathbb {R}^d\).

Proof

Due to the properties satisfied by the barycentric coordinates, for any \(l=0,1,\dots ,d\), we have

$$\begin{aligned} {\varvec{x}}-{\varvec{v}}_{l}= & {} {\varvec{v}}_{l}\lambda _{l}({\varvec{x}})-{\varvec{v}}_{l}+\displaystyle \sum _{\begin{array}{c} j=0\\ j\ne l \end{array}}^{d}{\varvec{v}}_{j}\lambda _{j}({\varvec{x}})\nonumber \\= & {} {\varvec{v}}_{l}\left( \lambda _{l}({\varvec{x}})-1\right) +\displaystyle \sum _{\begin{array}{c} j=0\\ j\ne l \end{array}}^{d}{\varvec{v}}_{j}\lambda _{j}({\varvec{x}}) \nonumber \\= & {} -{\varvec{v}}_{l}\left( \displaystyle \sum _{\begin{array}{c} j=0\\ j\ne l \end{array}}^{d}\lambda _{j}({\varvec{x}})\right) +\displaystyle \sum _{\begin{array}{c} j=0\\ j\ne l \end{array}}^{d}{\varvec{v}}_{j}\lambda _{j}({\varvec{x}}) \nonumber \\= & {} \displaystyle \sum _{\begin{array}{c} j=0\\ j\ne l \end{array}}^{d}({\varvec{v}}_{j}-{\varvec{v}}_{l})\lambda _{j}({\varvec{x}})\nonumber \\= & {} ({\varvec{v}}_0-{\varvec{v}}_l,\dots ,{\varvec{v}}_{l-1}-{\varvec{v}}_l,{\varvec{v}}_{l+1}-{\varvec{v}}_l,\dots ,{\varvec{v}}_d-{\varvec{v}}_l) \cdot {\varvec{\lambda }}_l({\varvec{x}}). \end{aligned}$$
(2.6)

By substituting (2.6) in (2.2) and by definition (2.4), we have

$$\begin{aligned} D^{k}_{{\varvec{x}}-{\varvec{v}}_{l}} f({\varvec{v}}_{l})= & {} \displaystyle \sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}}\dfrac{k!}{\alpha !}\left( \left( {\varvec{v}}_{0}-{\varvec{v}}_{l},\dots ,{\varvec{v}}_{l-1}-{\varvec{v}}_l,{\varvec{v}}_{l+1}-{\varvec{v}}_l,\dots , {\varvec{v}}_{d}-{\varvec{v}}_{l}\right) \cdot {\varvec{\lambda }}_l({\varvec{x}})\right) ^{\alpha } D^{\alpha }f({\varvec{v}}_{l}) \\= & {} \left( \sum _{\begin{array}{c} j=0\\ j\ne l \end{array}}^{d}\lambda _{j}({\varvec{x}})\,D_{j,l}\right) ^{k}f({\varvec{v}}_{l}) \\= & {} \displaystyle \sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}}\dfrac{k!}{\alpha !}\, D_l^{\alpha }f({\varvec{v}}_l)\, {\varvec{\lambda }}_l^{\alpha }( {\varvec{x}}), \end{aligned}$$

where the first and last equalities follow from the multinomial theorem, since the operators \(D_{j,l}\) commute.

\(\square \)

Proposition 2.2

Let \(S\subset \mathbb {R}^{d}\) be a non-degenerate d-dimensional simplex with vertices \({\varvec{v}}_0,\dots ,{\varvec{v}}_d\). Then

$$\begin{aligned} \int _{S}{\varvec{\lambda }}^{\alpha }({\varvec{x}}){\mathrm{d}}{\varvec{x}}=\frac{A(S)\alpha !}{(d+ {\left| \alpha \right| })!},\quad \alpha \in \mathbb {N}^{d+1}. \end{aligned}$$
(2.7)

Proof

See [15, Theorem 2.2]. \(\square \)
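As an illustrative sanity check of (2.7), one may compare the closed form with a numerical integration over the standard triangle, for which \(A(\Delta _2)=1\) and \(\lambda _0=1-x-y\), \(\lambda _1=x\), \(\lambda _2=y\). The sketch below assumes SciPy is available; the multi-index is chosen arbitrarily.

```python
import math
import numpy as np
from scipy.integrate import dblquad

alpha = (2, 1, 3)                        # multi-index in N^{d+1}, chosen arbitrarily
integrand = lambda y, x: (1 - x - y) ** alpha[0] * x ** alpha[1] * y ** alpha[2]

numeric, _ = dblquad(integrand, 0.0, 1.0, lambda x: 0.0, lambda x: 1.0 - x)
closed_form = np.prod([math.factorial(a) for a in alpha]) / math.factorial(2 + sum(alpha))
print(numeric, closed_form)              # both equal 2!1!3!/8! = 12/40320
```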

Proposition 2.3

Let \(S\subset \mathbb {R}^{d}\) be a non-degenerate d-dimensional simplex with vertices \({\varvec{v}}_{0},{\varvec{v}} _{1},\dots ,{\varvec{v}}_{d}\). For any \({\varvec{x}}\in S\) and \(r\in \mathbb {N}\), we have

$$\begin{aligned} \int _{S}\sum _{i=0}^{d}\Vert {\varvec{x}}-{\varvec{v}}_{i}\Vert ^{r+2}\lambda _{i}({\varvec{x}} ){\text {d}}{\varvec{x}}\le \frac{A(S)}{(d+2)!}\sum _{i=0}^{d}\sum _{j=0}^{d}\Vert {\varvec{v}}_{i}- {\varvec{v}}_{j}\Vert ^{r+2}. \end{aligned}$$
(2.8)

Proof

By equality (2.6), by recalling that \(0\le \lambda _{j}({\varvec{x}})\le 1\) and \(\sum _{j=0}^{d}\lambda _{j}({\varvec{x}})=1\) for each \({\varvec{x}}\in S\), and by the convexity of \(t\mapsto t^{r+2}\) (Jensen's inequality), we easily obtain

$$\begin{aligned} \Vert {\varvec{x}}-{\varvec{v}}_{l}\Vert ^{r+2}\le \left( \sum _{j=0}^{d}\Vert {\varvec{v}}_{j}-{\varvec{v}}_{l}\Vert \lambda _{j}({\varvec{x}})\right) ^{r+2}\le \sum _{j=0}^{d}\Vert {\varvec{v}}_{j}-{\varvec{v}}_{l}\Vert ^{r+2}\lambda _{j}({\varvec{x}}) \end{aligned}$$

for any \(l=0,\dots ,d\). Consequently,

$$\begin{aligned} \begin{array}{ll} \displaystyle \int _{S }\sum _{i=0}^{d}\Vert {\varvec{x}}-{\varvec{v}}_{i}\Vert ^{r+2}\lambda _{i}({\varvec{x}}) {\text {d}}{\varvec{x}}&{} \le \displaystyle \int _{S } \sum _{i=0}^{d}\sum _{j=0}^{d}\Vert {\varvec{v}}_{j}-{\varvec{v}}_{i}\Vert ^{r+2}\lambda _{j}({\varvec{x}})\lambda _{i}({\varvec{x}}){\text {d}}{\varvec{x}}\\ &{}\le \displaystyle \sum _{i=0}^{d}\sum _{j=0}^{d}\Vert {\varvec{v}}_{j}-{\varvec{v}}_{i}\Vert ^{r+2}\int _{S }\lambda _{j}({\varvec{x}})\lambda _{i}({\varvec{x}}){\text {d}}{\varvec{x}} \end{array} \end{aligned}$$

and, by Proposition 2.2, we easily get the inequality (2.8). \(\square \)

2.2 Construction of the Quadrature Formula

The multivariate Lagrange interpolation polynomial on the simplex S in barycentric coordinates is

$$\begin{aligned} L^{S}[f]({\varvec{x}})=\sum _{l=0}^{d}\lambda _{l}({\varvec{x}})f({\varvec{v}}_{l}). \end{aligned}$$
(2.9)

The operator \(L^{S}\) reproduces polynomials up to degree 1 and interpolates the values of f at the vertices \({\varvec{v}}_{l}\) of the simplex S. If the function f belongs to \(C^{r}(S)\), we can replace the values \(f( {\varvec{v}}_{l})\) by the modified Taylor polynomial of degree r at \({\varvec{v}}_{l}\) proposed in [6]; the resulting polynomial operator is

$$\begin{aligned} L_{r}^{S}[f]({\varvec{x}})=\sum _{l=0}^{d}\left( \sum _{k=0}^{r}\frac{a_{rk}}{k!}D_{ {\varvec{x}}-{\varvec{v}}_{l}}^{k}f({\varvec{v}}_{l})\right) \lambda _{l}({\varvec{x}}),\quad {\varvec{x}}\in \mathbb {R}^d, \end{aligned}$$
(2.10)

where \(a_{rk}=\frac{(1+r-k)!r!}{(1+r)!(r-k)!}\). As specified in [6], the operator \(L_{r}^{S}\) reproduces polynomials up to degree \(1+r\). Moreover, for each \({\varvec{x}}\in S\) and \(f\in C^{r+2}\left( S\right) \), its remainder term \(R_{r}^{S}[f]({\varvec{x}})=f({\varvec{x}} )-L_{r}^{S}[f]({\varvec{x}})\) can be explicitly represented as

$$\begin{aligned} R_{r}^{S}[f]({\varvec{x}})=\sum _{l=0}^{d} \lambda _{l}({\varvec{x}}) \int _{0}^{1}\frac{-t(1-t)^{r}}{ (1+r)!}D_{{\varvec{x}}-{\varvec{v}}_{l}}^{r+2} f \left( {\varvec{v}}_{l}+t\left( {\varvec{x}}-{\varvec{v}} _{l} \right) \right) {\text {d}}t . \end{aligned}$$
(2.11)

Remark 2.4

Since S is a compact convex domain and \(L^{S}\) is a bounded linear operator, in line with [8], \(L_{r}^{S}\) can be interpreted as

$$\begin{aligned} L_{r}^{S}[f]({\varvec{x}})=L^{S}\left[ \sum _{k=0}^{r}\frac{a_{rk}}{k!}D_{{\varvec{x}} -\cdot }^{k}f\right] \left( {\varvec{x}}\right) \end{aligned}$$

and, from [8, Proposition 3.4], it follows that \(L_{r}^{S}\) inherits the interpolation properties of the Lagrange operator (2.9).
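For concreteness, the following hedged Python sketch evaluates \(L_{1}^{S}[f]\) from (2.10) (so \(a_{1,0}=1\) and \(a_{1,1}=1/2\)) on a triangle and checks the reproduction of a quadratic polynomial; the triangle, the polynomial and the evaluation point are illustrative choices, not data from the paper.

```python
import numpy as np

def barycentric(x, V):
    # Solve sum_l lam_l = 1 and sum_l lam_l v_l = x for the barycentric coordinates
    M = np.vstack([np.ones(len(V)), V.T])
    return np.linalg.solve(M, np.concatenate([[1.0], x]))

def L1(f, grad_f, V, x):
    # Eq. (2.10) with r = 1: L_1[f](x) = sum_l (f(v_l) + 1/2 (x - v_l).grad f(v_l)) lam_l(x)
    lam = barycentric(x, V)
    return sum((f(v) + 0.5 * grad_f(v) @ (x - v)) * lam[l] for l, v in enumerate(V))

# L_1^S reproduces polynomials of total degree <= 2
V = np.array([[0.0, 0.0], [2.0, 0.0], [0.5, 1.5]])
p  = lambda z: 1.0 + z[0] - 2 * z[1] + 3 * z[0] * z[1] + z[1] ** 2
gp = lambda z: np.array([1.0 + 3 * z[1], -2.0 + 3 * z[0] + 2 * z[1]])
x = np.array([0.7, 0.4])
print(L1(p, gp, V, x), p(x))    # the two values coincide
```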

To obtain the desired quadrature formula, we rearrange polynomial (2.10) by taking into account Lemma 2.1. More precisely,

$$\begin{aligned} L_{r}^{S}[f]({\varvec{x}})= & {} \displaystyle \sum _{l=0}^{d}\left( \sum _{k=0}^{r}\frac{ a_{rk}}{k!}\sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}} \frac{k!}{\alpha !}D_{l}^{\alpha }f({\varvec{v}}_{l}){\varvec{\lambda }}_{l}^{\alpha }( {\varvec{x}})\right) \lambda _{l}({\varvec{x}}) \nonumber \\= & {} \displaystyle \sum _{l=0}^{d}\left( \sum _{k=0}^{r}\frac{(1+r-k)!r!}{ (1+r)!(r-k)!}\sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}} \frac{1}{\alpha !}D_{l}^{\alpha }f({\varvec{v}}_{l}){\varvec{\lambda }}_{l}^{\alpha }( {\varvec{x}})\right) \lambda _{l}({\varvec{x}}) \nonumber \\= & {} \displaystyle \sum _{l=0}^{d}\left( \sum _{k=0}^{r}\frac{1+r-k}{1+r} \sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}}\frac{1}{ \alpha !}D_{l}^{\alpha }f({\varvec{v}}_{l}){\varvec{\lambda }}_{l}^{\alpha }({\varvec{x}} )\right) \lambda _{l}({\varvec{x}}) \end{aligned}$$
(2.12)

and, by exchanging the order of summation, we get

$$\begin{aligned} L_{r}^{S}[f]({\varvec{x}})=\displaystyle \sum _{k=0}^{r}\frac{1+r-k}{1+r}\left( \sum _{l=0}^{d}\sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}} \frac{1}{\alpha !}D_{l}^{\alpha }f({\varvec{v}}_{l}){\varvec{\lambda }}_{l}^{\alpha }( {\varvec{x}})\lambda _{l}({\varvec{x}})\right) . \end{aligned}$$
(2.13)

The quadrature formula is then computed by integrating the right hand side of (2.13) on the simplex S.

Theorem 2.5

Let \(f\in C^{r+2}(S)\). Then

$$\begin{aligned} \int _{S}f({\varvec{x}}){\text {d}}{\varvec{x}}=Q_{r}^{S}[f]+E_{r}^{S}[f], \end{aligned}$$

where

$$\begin{aligned} Q_{r}^{S}[f]=\frac{A(S)}{(1+r)}\sum _{k=0}^{r}\frac{1+r-k}{(d+1+k)!}\left( \sum _{l=0}^{d}\sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}} D_{l}^{\alpha }f({\varvec{v}}_{l})\right) \end{aligned}$$
(2.14)

and

$$\begin{aligned} E_{r}^{S}[f]= & {} \int _{S}R_{r}^{S}[f]({\varvec{x}}){\text {d}}{\varvec{x}}\nonumber \\= & {} \int _{S}\sum _{l=0}^{d}\left( \int _{0}^{1}\frac{-t(1-t)^{r}}{(1+r)!}D_{{\varvec{x}}-{\varvec{v}}_{l}}^{r+2}f\left( {\varvec{v}}_{l}+t\left( {\varvec{x}}-{\varvec{v}}_{l}\right) \right) {\text {d}}t\right) \lambda _{l}( {\varvec{x}}){\text {d}}{\varvec{x}}.\nonumber \\ \end{aligned}$$
(2.15)

Moreover, the quadrature formula \(Q_{r}^{S}[f]\) has degree of exactness \(1+r\).

Proof

By integrating the right hand side of equality (2.13), we get

$$\begin{aligned} Q^{S}_{r}[f]= & {} \int _{S}L^{S}_{r}[f]({\varvec{x}}){\text {d}}{\varvec{x}} \nonumber \\= & {} \sum _{k=0}^{r} \frac{1+ r- k}{(1+r)} \left( \sum _{l=0}^{d} \sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}}\frac{1}{\alpha !} D^{\alpha }_{l} f({\varvec{v}}_{l})\int _{S} {\varvec{\lambda }}_{l}^{\alpha }({\varvec{x}}) \lambda _{l}({\varvec{x}}){\text {d}}{\varvec{x}} \right) .\nonumber \\ \end{aligned}$$
(2.16)

By Proposition 2.2,

$$\begin{aligned} \int _{S} {\varvec{\lambda }}_{l}^{\alpha }({\varvec{x}}) \lambda _{l}({\varvec{x}}){\text {d}}{\varvec{x}}=\frac{A(S)\alpha !}{ (d+1+{\left| \alpha \right| })!} \end{aligned}$$

and then (2.16) becomes

$$\begin{aligned} Q^{S}_{r}[f]= \sum _{k=0}^{r} \frac{1+ r- k}{(1+r)} \left( \sum _{l=0}^{d} \sum _{\begin{array}{c} {\left| \alpha \right| }=k \\ \alpha \in \mathbb {N}^{d} \end{array}}\frac{1}{\alpha !} D^{\alpha }_{l} f({\varvec{v}}_{l})\frac{A(S)\alpha !}{(d+1+{\left| \alpha \right| })!} \right) . \end{aligned}$$
(2.17)

The expression of \(E^{S}_{r} [f] \) is obtained by integrating on the simplex S the remainder term \(R_{r} ^{S}[f]({\varvec{x}})\) in formula (2.11). Since \(R_r^S[f]({\varvec{x}})\) vanishes whenever f is a polynomial in d variables of total degree at most \(1+r\), \(E^{S}_{r}[f]\) inherits this property and the quadrature formula has degree of exactness \(1+r\). \(\square \)
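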

2.3 Error Bounds

To give a bound for the remainder term \(E_{r}^{S}[f]\) of the quadrature formula in Theorem 2.5, we need some additional notations. More precisely, for a k-times continuously differentiable function \(f:S\rightarrow \mathbb {R}\), we introduce the norm

$$\begin{aligned} {\left| D^{k}f\right| }_{S}:=\sup _{x\in S}\sup \{{\left| D^{k}_{{\varvec{y}}}f({\varvec{x}})\right| }:{\varvec{y}} \in \mathbb {R}^{d},\Vert {\varvec{y}}\Vert =1\}. \end{aligned}$$
(2.18)

Here \(\Vert \cdot \Vert \) denotes the Euclidean norm in \(\mathbb {R}^{d}\), and \({\varvec{y}}\) is assumed to be a column vector. Consequently, for any \({\varvec{x}} \in S\) and \({\varvec{y}}\in \mathbb {R}^{d}\), we have

$$\begin{aligned} |D_{{\varvec{y}}}^{k}f({\varvec{x}})|\le |D^{k}f|_{S}\cdot \Vert {\varvec{y}}\Vert ^{k}. \end{aligned}$$
(2.19)

Proposition 2.6

Let \(S\subset \mathbb {R}^{d}\) be a non-degenerate d-dimensional simplex with vertices \({\varvec{v}}_{0},\dots ,{\varvec{v}}_{d}\) and \(f\in C^{r+2}(S)\). Then

$$\begin{aligned} {\left| E^{S}_{r} [f]\right| } \le \displaystyle \frac{{\left| D^{r+2}f\right| }_{S}}{(r+2)!(r+1)} \frac{A(S)}{(d+2)!}\sum _{l=0}^{d}\sum _{j=0}^{d}{\Vert {\varvec{v}}_l-{\varvec{v}}_{j}\Vert } ^{r+2}. \end{aligned}$$
(2.20)

Proof

By taking the modulus of both sides of equality (2.15), by applying the triangle inequality and by bounding the directional derivative of f of order \(r+2\) by means of (2.19), we have

$$\begin{aligned} \begin{array}{ll} {\left| E^{S}_{r} [f]\right| } &{}\le \displaystyle {\left| \int _{S}\sum _{l=0}^{d} \left( \int _{0}^{1}\frac{-t(1-t)^{r}}{(1+r)!} D_{{\varvec{x}}-{\varvec{v}}_{l}}^{r+2}f\left( {\varvec{v}}_{l}+t\left( {\varvec{x}}-{\varvec{v}}_{l}\right) \right) {\text {d}}t \right) \lambda _{l}({\varvec{x}}){\text {d}}{\varvec{x}}\right| }\\ &{} \le \displaystyle {\left| D^{r+2}f\right| }_{S}\int _{S}\sum _{l=0}^{d} \left( \int _{0}^{1}\frac{t(1-t)^{r}}{(1+r)!} {\Vert {\varvec{x}}-{\varvec{v}}_{l}\Vert }^{r+2}{\text {d}}t \right) \lambda _{l}({\varvec{x}}){\text {d}}{\varvec{x}}\\ &{}\le \displaystyle \frac{{\left| D^{r+2}f\right| }_{S}}{(1+r)!}\int _{S}\sum _{l=0}^{d}{\Vert {\varvec{x}}-{\varvec{v}}_{l}\Vert }^{r+2} \left( \int _{0}^{1}t(1-t)^{r} {\text {d}}t \right) \lambda _{l}({\varvec{x}}){\text {d}}{\varvec{x}}. \end{array} \end{aligned}$$

Using the inequality in Proposition 2.3 and the fact that

$$\begin{aligned} \int _{0}^{1}t(1-t)^{r}{\text {d}}t=\frac{1}{(r+2)(r+1)}, \end{aligned}$$

we have

$$\begin{aligned} {\left| E^{S}_{r} [f]\right| } \le \displaystyle \frac{{\left| D^{r+2}f\right| }_{S}}{(1+r)!(r+1)(r+2)}\frac{A(S)}{(d+2)!}\sum _{l=0}^{d}\sum _{j=0}^{d}{\Vert {\varvec{v}}_l-{\varvec{v}}_{j}\Vert }^{r+2} \end{aligned}$$

and then (2.20). \(\square \)

Remark 2.7

It is worth noting that Theorem 2.5 gives a quadrature formula obtained by integrating both sides of the expression in [6, Theorem 1], and the bound in Proposition 2.6 is nothing but the integral of the bound given in [6, Theorem 2], where \(\Omega =S\), \(m=1\) and \(\phi _{i}\left( {\varvec{x}}\right) =\lambda _{i}({\varvec{x}})\). Consequently, the bound (2.20) is the best possible estimate for each \(r\in \mathbb {N}_0\).

3 Integration Formulas on the Simplex with Only Function Data

The main feature of the quadrature formula (2.14) is that it uses only derivatives of f along the edges of S; this motivates us to consider approximations of those derivatives to obtain quadrature formulas which use only function data at the vertices of the simplex S, at points on its facets or at its center of gravity. To this end, we focus on the case \(r=1\) and we consider different kinds of approximation of the derivatives in (2.14). For \( r=1\), the quadrature formula (2.14) in Theorem 2.5 becomes

$$\begin{aligned} Q^{S}_{1}[f]= \frac{A(S)}{(d+1)!} \sum _{l=0}^{d}f({\varvec{v}}_{l})+ \frac{A(S)}{ 2(d+2)!}\sum _{l=0}^{d} \sum _{\begin{array}{c} {\left| \alpha \right| }=1 \\ \alpha \in \mathbb {N}^{d} \end{array}}D^{\alpha }_{l} f({\varvec{v}}_{l}). \end{aligned}$$
(3.1)
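A compact Python sketch of (3.1), assuming NumPy and a user-supplied gradient (the tetrahedron and the quadratic test polynomial below are illustrative), shows that the rule uses only vertex values and first-order derivatives along the edges, and that it is exact for quadratics.

```python
import math
import numpy as np

def Q1(f, grad_f, V):
    # Eq. (3.1): vertex values plus first-order edge-directional derivatives
    d = V.shape[1]
    A = np.linalg.det(np.hstack([np.ones((d + 1, 1)), V]))          # A(S)
    vertex_term = sum(f(v) for v in V)
    edge_term = sum(grad_f(V[l]) @ (V[j] - V[l])
                    for l in range(d + 1) for j in range(d + 1) if j != l)
    return A / math.factorial(d + 1) * vertex_term \
        + A / (2 * math.factorial(d + 2)) * edge_term

# Degree of exactness 2, checked on the standard tetrahedron (d = 3)
V = np.vstack([np.zeros(3), np.eye(3)])
f  = lambda z: z[0] * z[1] + z[2] ** 2
gf = lambda z: np.array([z[1], z[0], 2 * z[2]])
exact = 1 / 120 + 2 / 120          # from Proposition 2.2
print(Q1(f, gf, V), exact)         # both 0.025
```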

Proposition 3.1

Let \(f:S \rightarrow \mathbb {R}\) be a three times continuously differentiable function on S. Then

$$\begin{aligned} \tilde{Q}^{S}_{1}[f]= & {} \displaystyle \frac{A(S)}{(d+2)!}\left( (2-d)\sum _{l=0}^{d}f({\varvec{v}}_{l})+4\sum _{l=0}^{d-1} \sum _{r=l+1}^d f\left( \frac{{\varvec{v}}_r+{\varvec{v}}_{l}}{2}\right) \right) \nonumber \\&+\displaystyle \frac{A(S)}{2(d+2)!}\sum _{l=0}^{d-1} \sum _{r=l+1}^d \varepsilon _{l,r}, \end{aligned}$$
(3.2)

where

$$\begin{aligned} \varepsilon _{l,r}[f]= & {} \frac{1}{12}\left( D^{3}_{{\varvec{v}}_{l}-{\varvec{v}}_{r} } f({\varvec{v}}_{r}+\xi _{1}( {\varvec{v}}_{l}-{\varvec{v}}_{r}))\right. \\&\left. -\,D^{3}_{{\varvec{v}}_{l}-{\varvec{v}}_{r} } f({\varvec{v}}_{r}+\xi _{2}( {\varvec{v}}_{l}-{\varvec{v}}_{r}))\right) , \quad \xi _{1},\xi _{2}\in [0,1]. \end{aligned}$$

Moreover, the quadrature formula (3.2) has degree of exactness 2.

Proof

By definition (2.4), the sum of first-order derivatives in the second term of \(Q^{S}_{1}[f]\) can be rewritten as

$$\begin{aligned} \displaystyle \sum _{l=0}^{d}\sum _{\begin{array}{c} {\left| \alpha \right| }=1 \\ \alpha \in \mathbb {N}^{d} \end{array}}D^{\alpha }_{l} f({\varvec{v}}_{l})= \displaystyle \sum _{l=0}^{d}\sum _{\begin{array}{c} r=0\\ r\ne l \end{array}}^d D_{rl}f({\varvec{v}}_l)=\displaystyle \sum _{l=0}^{d-1}\sum _{r=l+1}^{d}\left( D_{lr}f({\varvec{v}}_{r})-D_{lr}f({\varvec{v}}_{l})\right) ,\nonumber \\ \end{aligned}$$
(3.3)

where the differences of directional derivatives along the edges of the simplex S can be replaced by a three-point finite difference approximation. To do this, let us recall that for a univariate function g, it is possible to consider the differentiation formula

$$\begin{aligned} g'(a-h)=\frac{1}{h}\left( -\dfrac{1}{2}g(a+h)+2g(a)-\frac{3}{2}g(a-h)\right) +\frac{h^{2}}{3}g'''(\xi ) \end{aligned}$$
(3.4)

for some \(\xi \in [a-h,a+h]\). Using this formula with \(h=\pm 1/2\) and \(a=1/2\) we get a three-point approximation for \(g'(0)-g'(1)\) with a remainder term which is expressed in terms of the modulus of continuity of \(g'''\) [6, Section 5.1]. By applying formula (3.4) along the edges of S, i.e. to \(g(t)=f({\varvec{v}}_{r}+t({\varvec{v}}_{l}-{\varvec{v}}_{r}))\), we get

$$\begin{aligned} D_{lr}f({\varvec{v}}_{r})-D_{lr}f({\varvec{v}}_{l})=-4\left( f({\varvec{v}}_r) -2f\left( \frac{{\varvec{v}}_r+{\varvec{v}}_l}{2}\right) +f({\varvec{v}}_l)\right) +\varepsilon _{l,r} \end{aligned}$$
(3.5)

with

$$\begin{aligned} {\left| \varepsilon _{l,r}\right| }\le \frac{1}{12}\omega \left( D^3_{{\varvec{v}}_l-{\varvec{v}}_r} f\left( {\varvec{v}}_r+t({\varvec{v}}_l-{\varvec{v}}_r)\right) ,1\right) , \end{aligned}$$

where \(\omega \) denotes the modulus of continuity with respect to \(t\in [0,1]\). By substituting expression (3.5) in (3.3) and by rearranging, we get

$$\begin{aligned} \displaystyle \sum _{l=0}^d\sum _{\begin{array}{c} {\left| \alpha \right| }=1 \\ \alpha \in \mathbb {N}^{d} \end{array}}D^{\alpha }_{l} f({\varvec{v}}_{l})= & {} -4d\displaystyle \sum _{l=0}^d f({\varvec{v}}_l)+8\sum _{l=0}^{d-1}\sum _{r=l+1}^{d}f\left( \frac{{\varvec{v}}_r +{\varvec{v}}_l}{2}\right) +\displaystyle \sum _{l=0}^{d-1}\sum _{r=l+1}^{d}\varepsilon _{l,r}. \nonumber \\ \end{aligned}$$
(3.6)

Finally, by substituting (3.6) in (3.1), we get

$$\begin{aligned} \tilde{Q}^{S}_{1}[f]= & {} \displaystyle \frac{A(S)}{(d+2)!}\left( (2-d)\sum _{l=0}^{d}f({\varvec{v}}_{l})+4\sum _{l=0}^{d-1} \sum _{r=l+1}^d f\left( \frac{{\varvec{v}}_r+{\varvec{v}}_{l}}{2}\right) \right) \\&+\displaystyle \frac{A(S)}{2(d+2)!}\sum _{l=0}^{d-1} \sum _{r=l+1}^d \varepsilon _{l,r}. \end{aligned}$$

Since \(\varepsilon _{l,r}[f]=0\) whenever f is a polynomial in d variables of total degree at most 2, \(\tilde{Q}_1^S[f]\) has degree of exactness 2. \(\square \)
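The derivative-free part of (3.2) can be checked numerically. The sketch below (illustrative NumPy code, with a tetrahedron and a quadratic polynomial chosen only for the check) evaluates the rule from vertex and edge-midpoint values and compares it with the exact integral.

```python
import math
import numpy as np
from itertools import combinations

def Q1_tilde(f, V):
    # Derivative-free rule (3.2): vertex values and edge-midpoint values only
    d = V.shape[1]
    A = np.linalg.det(np.hstack([np.ones((d + 1, 1)), V]))
    vertices = sum(f(v) for v in V)
    midpoints = sum(f((V[l] + V[r]) / 2) for l, r in combinations(range(d + 1), 2))
    return A / math.factorial(d + 2) * ((2 - d) * vertices + 4 * midpoints)

# Degree of exactness 2 on the standard tetrahedron
V = np.vstack([np.zeros(3), np.eye(3)])
f = lambda z: z[0] ** 2 + z[1] * z[2]
exact = 2 / 120 + 1 / 120          # from Proposition 2.2
print(Q1_tilde(f, V), exact)       # both 0.025
```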

Proposition 3.2

Let \(S\subset \mathbb {R}^{d}\) be a non-degenerate d-dimensional simplex with vertices \(({\varvec{v}}_{l})_{l=0,1,\dots ,d}\). Let us denote by \(s_{l}^{d-1}\), \(l=0,1,\dots ,d\), the facet opposite to the vertex \({\varvec{v}}_{l}\) and by \({\varvec{g}}_{l}\) the barycenter of \(s_{l}^{d-1}\). For all \(\alpha \in (0,1)\) we have

$$\begin{aligned} \begin{array}{ll} \displaystyle \int _{S}f({\varvec{x}}){\mathrm{d}}{\varvec{x}}= &{} \displaystyle \frac{A(S)}{2(d+2)!} \left( \frac{\alpha (d+4)-d}{\alpha }\sum _{l=0}^{d}f({\varvec{v}}_{l})+\frac{d}{ \alpha -\alpha ^{2}}\sum _{l=0}^{d}f({\varvec{y}}_{l}(\alpha ))\right. \\ &{} \\ &{}\qquad \qquad \ \left. +\displaystyle \frac{\alpha d}{\alpha -1}\sum _{l=0}^{d}f({\varvec{g}} _{l})\right) +R(d,\alpha )[f], \end{array} \end{aligned}$$
(3.7)

with \({\varvec{y}}_{l}(\alpha )={\varvec{v}}_{l}+\alpha ({\varvec{g}}_{l}-{\varvec{v}}_{l})\) and

$$\begin{aligned} R(d,\alpha )[f]= & {} \displaystyle \frac{dA(S)}{4(d+2)!(\alpha -\alpha ^{2})} \sum _{l=0}^{d}\left( \alpha ^{2}\int _{0}^{1}(1-t)^{2}D_{{\varvec{g}}_{l}-{\varvec{v}} _{l}}^{3}f({\varvec{v}}_{l}+t({\varvec{g}}_{l}-{\varvec{v}}_{l})){\mathrm{d}}t\right. \nonumber \\&\displaystyle \left. -\int _{0}^{\alpha }(\alpha -t)^{2}D_{{\varvec{g}}_{l}-{\varvec{v}} _{l}}^{3}f({\varvec{v}}_{l}+t({\varvec{g}}_{l}-{\varvec{v}}_{l})){\mathrm{d}}t\right) . \end{aligned}$$
(3.8)

The quadrature formula (3.7) has degree of exactness 2.

Proof

Let \({\varvec{g}}_l\) be the barycenter of \(s^{d-1}_{l}\). By Lemma 2.1, the sum of first-order derivatives along the edges of S in (3.1) can be rewritten as

$$\begin{aligned} \displaystyle \sum _{l=0}^{d}\sum _{\begin{array}{c} {\left| \alpha \right| }=1 \\ \alpha \in \mathbb {N}^{d} \end{array}}D^{\alpha }_{l} f({\varvec{v}}_{l})\, {\varvec{\lambda }}_{l}^{\alpha }({\varvec{g}}_l)=\sum _{l=0}^{d}D_{{\varvec{g}}_l-{\varvec{v}}_l}f({\varvec{v}}_l) \end{aligned}$$

and, since \(\lambda _k({\varvec{g}}_l)=\frac{1}{d}\) for each \(k\ne l\) (while \(\lambda _l({\varvec{g}}_l)=0\)), then

$$\begin{aligned} \displaystyle \sum _{l=0}^{d}\sum _{\begin{array}{c} {\left| \alpha \right| }=1 \\ \alpha \in \mathbb {N}^{d} \end{array}}D^{\alpha }_{l} f({\varvec{v}}_{l}) =d\sum _{l=0}^{d}D_{{\varvec{g}}_l-{\varvec{v}}_l}f({\varvec{v}}_l). \end{aligned}$$
(3.9)

By substituting (3.9) in (3.1), we get

$$\begin{aligned} Q^{S}_{1}[f]= \frac{A(S)}{(d+1)!} \sum _{l=0}^{d}f({\varvec{v}}_{l})+ \frac{d A(S)}{2(d+2)!}\sum _{l=0}^{d} D_{{\varvec{g}}_l-{\varvec{v}}_l}f({\varvec{v}}_l). \end{aligned}$$
(3.10)

To obtain a three-point finite difference approximation of the directional derivatives in (3.10), for each \(l=0,\dots ,d\), let us introduce the univariate function

$$\begin{aligned} h_{l}:[0,1]\rightarrow \mathbb {R},\qquad h_{l}(t)=f({\varvec{v}}_{l}+t({\varvec{g}}_{l}-{\varvec{v}}_{l})). \end{aligned}$$
(3.11)

The second-order Taylor expansions of \(h_{l}\) at \(t=1\) and at \(t=\alpha \in (0,1)\), centered at 0 and with integral remainder, are

$$\begin{aligned} h_{l}(1)=h_{l}(0)+h^{^{\prime }}_{l}(0)+\frac{1}{2}h^{^{\prime \prime }}_{l}(0)+\frac{1}{2}\int _{0}^{1}(1-t)^2h^{^{\prime \prime \prime }}_{l}(t){\text {d}}t \end{aligned}$$
(3.12)

and

$$\begin{aligned} h_{l}(\alpha )=h_{l}(0)+\alpha h^{^{\prime }}_{l}(0)+\frac{1}{2} \alpha ^2h^{^{\prime \prime }}_{l}(0)+\frac{1}{2}\int _{0}^{\alpha }( \alpha -t)^2h^{^{\prime \prime \prime }}_{l}(t){\text {d}}t. \end{aligned}$$
(3.13)

Then, by (3.12) and (3.13), we get

$$\begin{aligned} h_l(\alpha )-\alpha ^2h_l(1)=\left( 1-\alpha ^2\right) h_l(0)+(\alpha -\alpha ^2)h_l^{\prime }(0)-R_l(\alpha )[f] \end{aligned}$$
(3.14)

where

$$\begin{aligned} R_l(\alpha )[f]=\frac{\alpha ^2}{2}\int _{0}^{1}(1-t)^2h^{^{\prime \prime \prime }}_{l}(t){\text {d}}t-\frac{1}{2}\int _{0}^{\alpha }( \alpha -t)^2h^{^{\prime \prime \prime }}_{l}(t){\text {d}}t. \end{aligned}$$

Therefore, by (3.14) it follows that

$$\begin{aligned} h^{^{\prime }}_{l}(0)=\frac{1}{\alpha -\alpha ^2}h_{l}(\alpha ) -\frac{\alpha }{ 1-\alpha }h_{l}(1)-\frac{1+\alpha }{\alpha }h_{l}(0)+\frac{1}{\alpha -\alpha ^2}R_{l}(\alpha )[f].\nonumber \\ \end{aligned}$$
(3.15)

By rewriting equality (3.15) in terms of f we get

$$\begin{aligned} D_{{\varvec{g}}_{l}-{\varvec{v}}_{l}}f({\varvec{v}}_{l})= & {} \frac{1}{\alpha -\alpha ^2}f\left( \alpha {\varvec{g}}_l+(1-\alpha ){\varvec{v}}_l\right) -\frac{ \alpha }{1-\alpha }f({\varvec{g}}_{l})-\frac{1+\alpha }{\alpha }f({\varvec{v}}_{l})\nonumber \\&+\frac{1}{\alpha -\alpha ^2}R_{l}( \alpha )[f], \end{aligned}$$
(3.16)

where

$$\begin{aligned} R_{l}(\alpha )[f]= & {} \displaystyle \frac{1}{2}\left( \alpha ^2\int _{0}^{1}(1-t)^2D^{3}_{{\varvec{g}}_{l}-{\varvec{v}}_{l}}f({\varvec{v}}_{l}+t({\varvec{g}}_{l}-{\varvec{v}}_{l})){\text {d}}t\right. \nonumber \\&\left. \displaystyle -\int _{0}^{\alpha }(\alpha -t)^2D^{3}_{{\varvec{g}}_{l}-{\varvec{v}}_{l}}f({\varvec{v}}_{l}+t({\varvec{g}}_{l}-{\varvec{v}}_{l})){\text {d}}t \right) . \end{aligned}$$
(3.17)

Finally, by substituting (3.16) in (3.10), we get (3.7). Since \(R(d,\alpha )[f]=0\) whenever f is a polynomial in d variables of total degree at most 2, the quadrature formula (3.7) has degree of exactness 2. \(\square \)
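The three-point approximation (3.16) of the directional derivative \(D_{{\varvec{g}}_{l}-{\varvec{v}}_{l}}f({\varvec{v}}_{l})\) is easy to test in isolation. In the sketch below the function, the points and the value of \(\alpha \) are arbitrary illustrative choices.

```python
import numpy as np

# Three-point approximation (3.16) of D_{g-v} f(v) for an illustrative smooth f
f    = lambda z: np.sin(z[0]) * np.exp(z[1])
grad = lambda z: np.array([np.cos(z[0]) * np.exp(z[1]), np.sin(z[0]) * np.exp(z[1])])

v, g, alpha = np.array([0.1, 0.2]), np.array([0.6, 0.9]), 0.4
y = v + alpha * (g - v)                                   # the point y_l(alpha)
approx = f(y) / (alpha - alpha ** 2) - alpha / (1 - alpha) * f(g) \
    - (1 + alpha) / alpha * f(v)
print(approx, grad(v) @ (g - v))    # agreement up to the third-order remainder R_l
```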

Remark 3.3

  1.

    For \(\alpha =\dfrac{d}{d+4}\) the formula (3.7) becomes

    $$\begin{aligned} \displaystyle \int _{S}f({\varvec{x}}){\text {d}}{\varvec{x}}= & {} \frac{A(S)}{2(d+2)!}\left( \frac{ (d+4)^{2}}{4}\sum _{l=0}^{d}f\left( \frac{d{\varvec{g}}_{l}+4{\varvec{v}}_{l}}{d+4} \right) -\frac{d^{2}}{4}\sum _{l=0}^{d}f({\varvec{g}}_{l})\right) \nonumber \\&+R\left( d,\frac{d}{d+4}\right) [f], \end{aligned}$$
    (3.18)

    that is a quadrature formula which uses only the function data at the points \({\varvec{g}}_{l}\) and \(\frac{d{\varvec{g}}_{l}+4{\varvec{v}}_{l}}{d+4}\), \(l=0,\dots ,d\), and is exact for all polynomials of degree less than or equal to 2.

  2.

    For \(\alpha =\frac{d}{d+1}\), the quadrature formula (3.7) becomes,

    $$\begin{aligned} \displaystyle \int _{S}f({\varvec{x}}){\text {d}}{\varvec{x}}= & {} \frac{A(S)}{2(d+2)!}\left( 3\sum _{l=0}^{d}f\left( {\varvec{v}}_{l}\right) +(d+1)^{3}f({\varvec{x}}^{*})-d^{2}\sum _{l=0}^{d}f({\varvec{g}}_{l})\right) \nonumber \\&+R\left( d,\frac{d}{d+1}\right) [f] \end{aligned}$$
    (3.19)

    where \({\varvec{x}}^{*}\) is the center of gravity of S.

To improve the approximation accuracy of the quadrature formula (3.19), let us consider a linear combination of this formula with a multivariate Simpson rule for a simplex proposed in [7, Theorem 5.1]. For a particular value of the combination parameter, we are able to get a quadrature formula with a higher degree of exactness.

Corollary 3.4

Let \(f:S \rightarrow \mathbb {R}\) be a three times continuously differentiable function on S. Let us denote by \({\varvec{x}}^*\) the center of gravity of S, by \( s^{d-1}_{l}\), \(l=0,1,\dots ,d\), the facet of S opposite to the vertex \( {\varvec{v}}_{l}\) and by \({\varvec{g}}_{l}\) the barycenter of \(s^{d-1}_{l}\). Then,

$$\begin{aligned} \int _{S }f({\varvec{x}}){\text {d}}{\varvec{x}}=\tau F_{1}[f]+(1-\tau ) F_{2}[f]+\tilde{R} (\tau )[f],\quad \tau \in \mathbb {R}, \end{aligned}$$
(3.20)

where

$$\begin{aligned} F_{1}[f]=\dfrac{A(S)}{d!}\left( \frac{d+1}{d+2}f({\varvec{x}}^{*})+\frac{1}{ (d+1)(d+2)}\sum _{l=0}^{d}f({\varvec{v}}_{l})\right) \end{aligned}$$

is the multivariate Simpson rule for a simplex [7, Theorem 5.1],

$$\begin{aligned} F_{2}[f]=\frac{A(S)}{2(d+2)!}\left( 3\sum _{l=0}^{d}f\left( {\varvec{v}}_{l} \right) +(d+1)^3f({\varvec{x}}^*) -d^2\sum _{l=0}^{d}f({\varvec{g}}_{l})\right) \end{aligned}$$

is given by (3.7) for \(\alpha =\frac{d}{d+1}\) and

$$\begin{aligned} \tilde{R}(\tau )[f]=\tau R^{Si}_d[f]+(1-\tau )R\left( d,\frac{d}{d+1}\right) [f] \end{aligned}$$
(3.21)

with \(R^{Si}_d[f]\) denoting the remainder term in the multivariate Simpson rule for a simplex. For all \(\tau \in \mathbb {R}\) we have \(\tilde{R} (\tau )[f]=0\), whenever f is a polynomial in d variables of total degree at most 2.

Proof

Since \(R^{Si}_d[f]=0\) and \(R\left( d,\frac{d}{d+1}\right) [f]=0\) whenever f is a polynomial in d variables of total degree less than or equal to 2, it easily follows that \(\tilde{R} (\tau )[f]\) vanishes for each polynomial of degree at most 2. \(\square \)

For \(\tau =\frac{3(d+1)}{d+3}\), the family of quadrature formulas (3.20) yields a formula which has degree of exactness 3.

Theorem 3.5

Let \(f:S\rightarrow \mathbb {R}\) be a three times continuously differentiable function on S and let us denote by \({\varvec{x}}^{*}\) the center of gravity of S, by \(s_{l}^{d-1}\), \(l=0,1,\dots ,d\), the facet opposite to the vertex \({\varvec{v}}_{l}\) and by \({\varvec{g}}_{l}\) the barycenter of \( s_{l}^{d-1}\). The quadrature formula

$$\begin{aligned} \displaystyle \int _{S}f({\varvec{x}}){\text {d}}{\varvec{x}}= & {} \frac{A(S)}{(d+3)!}\left( 3\sum _{l=0}^{d}f({\varvec{v}}_{l})+d^{3}\sum _{l=0}^{d}f({\varvec{g}} _{l})+(d+1)^{3}(3-d)f({\varvec{x}}^{*})\right) \nonumber \\&\displaystyle +\tilde{R}\left( \frac{3(d+1)}{d+3}\right) [f], \end{aligned}$$
(3.22)

with \(\tilde{R}\left( \frac{3(d+1)}{d+3}\right) [f]\) defined in (3.21), has degree of exactness 3.

Proof

For \(\tau =\frac{3(d+1)}{d+3}\), the quadrature formula (3.20) reduces to

$$\begin{aligned}&\int _{S}f({\varvec{x}}){\text {d}}{\varvec{x}}\approx F_3[f]\nonumber \\&\quad =\frac{A(S)}{(d+3)!}\left( 3 \sum _{l=0}^{d}f({\varvec{v}}_{l})+d^3\sum _{l=0}^{d}f({\varvec{g}}_{l})+(d+1)^3(3-d)f({\varvec{x}}^{*})\right) \nonumber \\ \end{aligned}$$
(3.23)

and, by Corollary 3.4, it follows that \(F_{3}[f]\) has degree of exactness at least 2. Let \(P_{3}({\varvec{x}})\) be a polynomial in d variables of degree 3; we can write \(P_3({\varvec{x}})\) as

$$\begin{aligned} P_{3}({\varvec{x}})=P_{2}({\varvec{x}})+\sum _{i=1}^{d}c_{i}x^{3}_{i}+\sum _{i=1}^{d} \sum _{\begin{array}{c} j=1 \\ j\ne i \end{array}}^{d}b_{ij}x^{2}_{i}x_{j}+\sum _{i=1}^{d}\sum _{\begin{array}{c} j=1 \\ j\ne i \end{array}}^{d}\sum _{\begin{array}{c} k=1 \\ k\ne i, j \end{array}}^{d}d_{ijk}x_{i}x_{j}x_{k} \end{aligned}$$

where \(P_{2}({\varvec{x}})\) is a polynomial of degree 2. Therefore, it is sufficient to prove that \(F_{3}[\cdot ]\) is exact for the monomials

$$\begin{aligned} \begin{array}{lll} M_{1}=x^{3}_{i};&M_{2}= x^{2}_{i}x_{j}\quad (j\ne i) ;&M_{3}=x_{i}x_{j}x_{k}\quad (k\ne i,j),\quad i,j,k=1,\dots ,d. \end{array} \end{aligned}$$

Thanks to the affine isomorphism which maps the standard simplex \(\Delta _d\) of \({\mathbb {R}}^d\) onto a generic simplex S, without loss of generality we can restrict to the standard simplex \(\Delta _d\) with vertices \({\varvec{v}}_{0}=(0,\dots ,0)\), \({\varvec{v}}_{1}=(1,0,\dots ,0)\), ..., \({\varvec{v}}_l=(0,\dots ,0,1,0,\dots ,0)\) (with 1 in the l-th position), ..., \({\varvec{v}}_{d}=(0,\dots ,0,1)\). The center of gravity of \(\Delta _d\) is \({\varvec{x}}^{*}=\left( \frac{1}{d+1},\dots ,\frac{1}{d+1}\right) \) and the barycenters of the facets are

$$\begin{aligned} \begin{array}{llll} {\varvec{g}}_0=\left( \frac{1}{d},\frac{1}{d},\dots ,\frac{1}{d}\right) ,&{\varvec{g}}_1=\left( 0,\frac{1}{d},\dots ,\frac{1}{d}\right) ,&\dots ,&{\varvec{g}}_d=\left( \frac{1}{d},\frac{1}{d},\dots ,0\right) . \end{array} \end{aligned}$$

Let us now consider \(M_{1}=x_i^3\), \(i=1,\dots ,d\). The exact integral of \(M_1\) over the simplex \(\Delta _d\) is [11]

$$\begin{aligned} \int _{\Delta _d }M_{1}{\text {d}}{\varvec{x}}=\frac{6}{(d+3)!}, \end{aligned}$$

and, in addition,

$$\begin{aligned} \begin{array}{l} \displaystyle \sum _{l=0}^df({\varvec{v}}_l)=1,\\ \displaystyle \sum _{l=0}^df({\varvec{g}}_l)=\frac{d}{d^3},\\ f({\varvec{x}}^*)=\frac{1}{(d+1)^3}. \end{array} \end{aligned}$$
(3.24)

By substituting equalities (3.24) in the quadrature formula (3.23) we have

$$\begin{aligned} F_{3}[M_{1}]= & {} \frac{A(\Delta _d)}{(d+3)!}\left( 3+d^3\frac{d}{d^3} +(d+1)^3(3-d)\frac{1}{(d+1)^3}\right) \\= & {} \frac{6}{(d+3)!}=\int _{\Delta _d }M_{1}{\text {d}}{\varvec{x}}. \end{aligned}$$

Let us consider \(M_{2}=x_i^2x_j\), \(i,j=1,\dots ,d\) and \(j\ne i\). The exact integral of \(M_2\) over the simplex \(\Delta _d\) is [11]

$$\begin{aligned} \int _{\Delta _d }M_{2}{\text {d}}{\varvec{x}}=\frac{2}{(d+3)!}, \end{aligned}$$

and, in addition,

$$\begin{aligned} \begin{array}{l} \displaystyle \sum _{l=0}^df({\varvec{v}}_l)=0,\\ \displaystyle \sum _{l=0}^df({\varvec{g}}_l)=\frac{d-1}{d^3},\\ f({\varvec{x}}^*)=\frac{1}{(d+1)^3}. \end{array} \end{aligned}$$
(3.25)

By substituting equalities (3.25) in the quadrature formula (3.23) we have

$$\begin{aligned} F_{3}[M_{2}]= & {} \frac{A(\Delta _d)}{(d+3)!}\left( 0+d^3\frac{d-1}{d^3} +(d+1)^3(3-d)\frac{1}{(d+1)^3}\right) \\= & {} \frac{2}{(d+3)!}=\int _{\Delta _d }M_{2}{\text {d}}{\varvec{x}}. \end{aligned}$$

Finally, let us consider \(M_{3}=x_i x_j x_k\), \(i,j,k=1,\dots ,d\) pairwise distinct, for which the exact integral over the simplex \(\Delta _d\) is [11]

$$\begin{aligned} \int _{\Delta _d }M_{3}{\text {d}}{\varvec{x}}=\frac{1}{(d+3)!} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{l} \displaystyle \sum _{l=0}^df({\varvec{v}}_l)=0,\\ \displaystyle \sum _{l=0}^df({\varvec{g}}_l)=\frac{d-2}{d^3},\\ f({\varvec{x}}^*)=\frac{1}{(d+1)^3}. \end{array} \end{aligned}$$
(3.26)

By substituting equalities (3.26) in the quadrature formula (3.23), we have

$$\begin{aligned} F_{3}[M_{3}]= & {} \frac{A(\Delta _d)}{(d+3)!}\left( 0+d^3\frac{d-2}{d^3} +(d+1)^3(3-d)\frac{1}{(d+1)^3}\right) \\= & {} \frac{1}{(d+3)!}=\int _{\Delta _d }M_{3}{\text {d}}{\varvec{x}}. \end{aligned}$$

Then

$$\begin{aligned} \int _{\Delta _d}P_{3}({\varvec{x}})d{{\varvec{x}}}=F_{3}[P_{3}] \end{aligned}$$

and this shows that the degree of exactness of the quadrature formula (3.22) is 3. \(\square \)
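A hedged numerical check of (3.22) is given below (illustrative NumPy code; the simplex and the cubic test polynomial are arbitrary). Note that for \(d=3\) the centroid weight \((d+1)^{3}(3-d)\) vanishes, so in that case the rule uses only vertices and facet barycenters.

```python
import math
import numpy as np

def F3(f, V):
    # Eq. (3.22): vertices, facet barycenters g_l and the center of gravity
    d = V.shape[1]
    A = np.linalg.det(np.hstack([np.ones((d + 1, 1)), V]))
    centroid = V.mean(axis=0)
    facet_bary = [(V.sum(axis=0) - V[l]) / d for l in range(d + 1)]   # g_l
    s = 3 * sum(f(v) for v in V) + d ** 3 * sum(f(g) for g in facet_bary) \
        + (d + 1) ** 3 * (3 - d) * f(centroid)
    return A / math.factorial(d + 3) * s

# Degree of exactness 3, checked on the standard tetrahedron (d = 3)
V = np.vstack([np.zeros(3), np.eye(3)])
f = lambda z: z[0] ** 3 + z[0] * z[1] * z[2]
exact = 6 / 720 + 1 / 720          # from Proposition 2.2
print(F3(f, V), exact)
```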

4 The 2-Simplex Case

In this section, we restrict to the case \(d=2\) in which S is a triangle with vertices \({\varvec{v}}_0\), \({\varvec{v}}_1\), \({\varvec{v}}_2\). In this particular case, the quadrature formula (2.14) reduces to

$$\begin{aligned} Q^{S}_{r}[f]= & {} \displaystyle \frac{A(S)}{(1+r)}\sum _{k=0}^{r} \frac{1+r-k}{ (3+k)!} \sum _{j=0}^{k}\left( D_{{\varvec{v}}_1-{\varvec{v}}_0}^{j}D_{{\varvec{v}}_2-{\varvec{v}} _0}^{k-j}f({\varvec{v}}_0)\right. \nonumber \\&\left. + D_{{\varvec{v}}_0-{\varvec{v}}_1}^{j}D_{{\varvec{v}}_2-{\varvec{v}}_1}^{k-j}f( {\varvec{v}}_1)\right. \nonumber \\&\left. +D_{{\varvec{v}}_0-{\varvec{v}}_2}^{j}D_{{\varvec{v}}_1-{\varvec{v}}_2}^{k-j}f({\varvec{v}}_2)\right) \end{aligned}$$
(4.1)

and, by easy computations, the bound (2.20) for the approximation error on the standard triangle \(\Delta _2\) (for which \(A(\Delta _2)=1\)) becomes

$$\begin{aligned} {\left| E^{\Delta _2}_r[f]\right| }\le \frac{{\left| D^{r+2}f\right| }_{\Delta _2}}{6(r+2)!(1+r)}\left( 1+ \sqrt{2^r}\right) . \end{aligned}$$
(4.2)
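The formula (4.1) is straightforward to implement with symbolic derivatives. The following sketch (assuming SymPy is available; the test function is \(f_{3}\) of Sect. 5, used here only as an illustration) evaluates \(Q_{r}^{\Delta _{2}}\) on the standard triangle, where \(A(\Delta _{2})=1\), and compares it with the exact integral.

```python
import sympy as sp

x, y = sp.symbols('x y')
v = [sp.Matrix([0, 0]), sp.Matrix([1, 0]), sp.Matrix([0, 1])]   # standard triangle

def Q_r(f, r):
    # Eq. (4.1) on the standard triangle (A(Delta_2) = 1), symbolic derivatives
    def D(expr, direction, order):
        for _ in range(order):
            expr = direction[0] * sp.diff(expr, x) + direction[1] * sp.diff(expr, y)
        return expr
    total = 0
    for k in range(r + 1):
        inner = 0
        for l in range(3):
            p, q = [v[m] for m in range(3) if m != l]
            for j in range(k + 1):
                term = D(D(f, p - v[l], j), q - v[l], k - j)
                inner += term.subs({x: v[l][0], y: v[l][1]})
        total += sp.Rational(1 + r - k) / sp.factorial(3 + k) * inner
    return total / (1 + r)

f = sp.sin(sp.pi * x / 4 + sp.pi * y / 6)          # the test function f_3 of Sect. 5
exact = sp.integrate(sp.integrate(f, (y, 0, 1 - x)), (x, 0, 1))
for r in (1, 2, 3):
    print(r, float(Q_r(f, r) - exact))             # the remainder shrinks as r grows
```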

The quadrature formula (3.19), which has degree of exactness 2 and uses only function data at the vertices of S, at the midpoints of its sides and at its center of gravity, becomes

$$\begin{aligned} \int _{S}f({\varvec{x}}){\text {d}}{\varvec{x}}=\tilde{Q}_{2}^{S}[f]+R\left( 2,\frac{2}{3}\right) [f] \end{aligned}$$

where

$$\begin{aligned} \tilde{Q}_{2}^{S}[f]=\frac{A(S)}{16}\left( \sum _{l=0}^{2}f({\varvec{v}}_{l})-\frac{4}{3}\sum _{l=0}^{2}f\left( {\varvec{g}}_{l}\right) +9f({\varvec{x}}^{*})\right) \end{aligned}$$
(4.3)

and

$$\begin{aligned} R\left( 2,\displaystyle \frac{2}{3}\right) [f]&= \displaystyle \frac{3A(S)}{32 }\sum _{l=0}^{2}\left( \frac{4}{9}\int _{0}^{1}(1-t)^{2}D_{{\varvec{g}}_{l}-{\varvec{v}} _{l}}^{3}f({\varvec{v}}_{l}+t({\varvec{g}}_{l}-{\varvec{v}}_{l})){\text {d}}t\right. \\&\quad \displaystyle \left. -\int _{0}^{\frac{2}{3}}\left( \frac{2}{3}-t\right) ^{2}D_{{\varvec{g}}_{l}-{\varvec{v}}_{l}}^{3}f({\varvec{v}}_{l}+t({\varvec{g}}_{l}-{\varvec{v}} _{l})){\text {d}}t\right) . \end{aligned}$$

The quadrature formula (3.22), which has degree of exactness 3 and uses only function data at the vertices of S, at the midpoints of its sides and at its center of gravity, becomes

$$\begin{aligned} \tilde{Q}_3^S[f]=\frac{A(S)}{40}\left( \sum _{l=0}^{2}f({\varvec{v}}_{l})+\frac{8}{3}\sum _{l=0}^{2}f({\varvec{g}}_{l})+9f({\varvec{x}}^{*})\right) , \end{aligned}$$
(4.4)

where \({\varvec{g}}_l\) is the midpoint of the side of the triangle S opposite to \({\varvec{v}}_l\), i.e. \({\varvec{g}}_l=\frac{{\varvec{v}}_{i}+{\varvec{v}}_{j}}{2}\) with \(\{i,j\}=\{0,1,2\}\setminus \{l\}\). For \(d=2\) and \(\alpha =1/3\) the formula (3.7), which has degree of exactness 2 and uses function data at the midpoints of the sides of S and at the points \(\frac{{\varvec{g}}_l+2{\varvec{v}}_l}{3}\), \( l=0,1,2\), becomes

$$\begin{aligned} \int _{S }f({\varvec{x}}){\text {d}}{\varvec{x}}=\widehat{Q}_2^S[f]+R\left( 2,\frac{1}{3}\right) [f] \end{aligned}$$
(4.5)

with

$$\begin{aligned} \widehat{Q}_2^S[f]=\frac{A(S)}{48}\left( 9 \sum _{l=0}^{2}f\left( \frac{ {\varvec{g}}_l+2{\varvec{v}}_l}{3}\right) - \sum _{l=0}^{2}f({\varvec{g}}_{l})\right) \end{aligned}$$
(4.6)

and \(R\left( 2,\frac{1}{3}\right) [f]\) defined in (3.8). To enhance the degree of exactness of the quadrature formula (4.6), let us consider the midpoint formula for the 2-dimensional simplex [5]

$$\begin{aligned} \int _{S}f({\varvec{x}}){\text {d}}{\varvec{x}}=\frac{A(S)}{6}\sum _{r=0}^{2}f({\varvec{g}}_{r})+E[f] \end{aligned}$$

where \(E[f]=0\) whenever f is a polynomial in 2 variables of total degree at most 2. We set

$$\begin{aligned} Q_{2}^{mid}[f]=\frac{A(S)}{6}\sum _{l=0}^{2}f({\varvec{g}}_{l}) \end{aligned}$$

and define the linear combination

$$\begin{aligned} \int _{S}f({\varvec{x}}){\text {d}}{\varvec{x}}=\alpha \widehat{Q}_{2}^{S}[f]+(1-\alpha )Q_{2}^{mid}[f]+E_{\alpha }[f],\quad \alpha \in \mathbb {R}. \end{aligned}$$
(4.7)

where

$$\begin{aligned} E_{\alpha }[f]=\alpha R\left( 2,\frac{1}{3}\right) [f]+(1-\alpha )E[f]. \end{aligned}$$
(4.8)

Since, for all \(\alpha \in \mathbb {R}\), \(E_{\alpha }[f]=0\) whenever f is a polynomial in 2 variables of total degree at most 2, the quadrature formula (4.7) has degree of exactness at least 2.

Theorem 4.1

Let \(f:S\subset \mathbb {R}^{2} \rightarrow \mathbb {R}\) be a three times continuously differentiable function on S. Then, the quadrature formula

$$\begin{aligned} \int _{S }f({\varvec{x}}){\text {d}}{\varvec{x}}=MQ^S_{3}[f]+E_{\frac{4}{5}}[f] \end{aligned}$$
(4.9)

with

$$\begin{aligned} MQ^S_3[f]=\frac{A(S)}{20} \left( 3\sum _{l=0}^{2}f\left( \frac{{\varvec{g}}_l+2 {\varvec{v}}_l}{3}\right) +\frac{1}{3} \sum _{l=0}^{2}f({\varvec{g}}_{l})\right) \nonumber \\ \end{aligned}$$
(4.10)

has degree of exactness 3.

Proof

Equality (4.10) follows by setting \(\alpha =\frac{4}{5}\) in (4.7) and by rearranging. To prove the degree of exactness of the formula (4.9), it is sufficient to follow the same arguments used in the proof of Theorem 3.5 for \(d=2\). \(\square \)
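A hedged numerical check of (4.10) on the standard triangle (illustrative NumPy code; the cubic test polynomial is arbitrary):

```python
import numpy as np

def MQ3(f, V):
    # Eq. (4.10): side midpoints g_l and the interior points (g_l + 2 v_l)/3
    A = np.linalg.det(np.hstack([np.ones((3, 1)), V]))
    g = [(V[(l + 1) % 3] + V[(l + 2) % 3]) / 2 for l in range(3)]   # midpoint opposite v_l
    inner = sum(f((g[l] + 2 * V[l]) / 3) for l in range(3))
    return A / 20 * (3 * inner + sum(f(gl) for gl in g) / 3)

# Degree of exactness 3 on the standard triangle
V = np.vstack([np.zeros(2), np.eye(2)])
f = lambda z: z[0] ** 3 + z[0] ** 2 * z[1]
print(MQ3(f, V), 6 / 120 + 2 / 120)      # both equal 1/15
```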

Finally, to obtain a quadrature formula over S with degree of exactness 4, we consider the convex combination of the quadrature formulas (4.4) and (4.10)

$$\begin{aligned} \int _{S }f({\varvec{x}}){\text {d}}{\varvec{x}}=\alpha \tilde{Q}^S_3[f]+(1-\alpha )MQ^S_3[f]+E^{ \prime }_{\alpha }[f],\quad \alpha \in \mathbb {R}, \end{aligned}$$
(4.11)

where

$$\begin{aligned} E^{\prime }_{\alpha }[f]=\alpha \tilde{R}\left( \frac{9}{5} \right) [f]+(1-\alpha )E_{\frac{4}{5}}[f], \end{aligned}$$

with \(\tilde{R}\left( \frac{9}{5} \right) [f]\) given by Eq. (3.21) and \(E_{\frac{4}{5} }[f]\) by Eq. (4.8). Since, for all \(\alpha \in \mathbb {R}\), \(E^{\prime }_{\alpha }[f]=0\) whenever f is a polynomial in 2 variables of total degree at most 3, the quadrature formula (4.11) has degree of exactness at least 3.

Theorem 4.2

Let \(f:S\subset \mathbb {R}^{2} \rightarrow \mathbb {R}\) be a three times continuously differentiable function on S and let \({\varvec{x}}^*\) denote the center of gravity of S. Then, the quadrature formula

$$\begin{aligned} \int _{S }f({\varvec{x}}){\text {d}}{\varvec{x}}=MQ^S_{4}[f]+E^{\prime }_{\frac{1}{3}}[f] \end{aligned}$$
(4.12)

where

$$\begin{aligned} MQ^S_{4}[f]=\frac{A(S)}{120}\left( \sum _{l=0}^{2}f({\varvec{v}}_{l})+4 \sum _{l=0}^{2}f({\varvec{g}}_{l})+12\sum _{l=0}^{2} f\left( \frac{{\varvec{g}}_l+2{\varvec{v}}_l}{3}\right) +9f( {\varvec{x}}^{*})\right) \nonumber \\ \end{aligned}$$
(4.13)

and

$$\begin{aligned} E^{\prime }_{\frac{1}{3}}[f]=\frac{1}{3} \tilde{R}\left( \frac{9}{5} \right) [f]+\left( 1-\frac{1}{3}\right) E_{\frac{4}{5}}[f], \end{aligned}$$

has degree of exactness 4.

Table 1 Absolute value of the remainder terms \(E_r^{\Delta _2}[f_i]=Q_r^{ \Delta _2}[f_i]-\int _{\Delta _2}f_i({\varvec{x}}){\text {d}}{\varvec{x}}\), \(i=1,\dots ,4\); \( r=1,\dots ,10\)
Table 2 Absolute value of the remainder terms \(\tilde{E}_2^{\Delta _2}[f_i]\) , \(\tilde{E}_3^{\Delta _2}[f_i]\), \(\tilde{E}_{MQ_3}^{\Delta _2}[f_i]\), \(\tilde{ E}_{MQ_4}^{\Delta _2}[f_i]\) \(i=1,\dots ,4\)

Proof

Equality (4.13) follows by setting \(\alpha =\frac{1}{3}\) in (4.11). To prove the degree of exactness of the formula (4.12), we proceed by verifying the exactness of the quadrature formula for the monomials \(x^4\), \(x^3y\), \(xy^3\), \(x^2y^2\), \(y^4\), similarly to the proof of Theorem 3.5 for \(d=2\). \(\square \)
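Similarly, a hedged check of (4.13) on the standard triangle (illustrative NumPy code; the quartic monomials are among those used in the proof above):

```python
import numpy as np

def MQ4(f, V):
    # Eq. (4.13): vertices, side midpoints, the points (g_l + 2 v_l)/3 and the centroid
    A = np.linalg.det(np.hstack([np.ones((3, 1)), V]))
    g = [(V[(l + 1) % 3] + V[(l + 2) % 3]) / 2 for l in range(3)]
    s = sum(f(v) for v in V) + 4 * sum(f(gl) for gl in g) \
        + 12 * sum(f((g[l] + 2 * V[l]) / 3) for l in range(3)) + 9 * f(V.mean(axis=0))
    return A / 120 * s

V = np.vstack([np.zeros(2), np.eye(2)])
for f, exact in [(lambda z: z[0] ** 4, 24 / 720),
                 (lambda z: z[0] ** 2 * z[1] ** 2, 4 / 720)]:
    print(MQ4(f, V), exact)              # the rule reproduces the quartic monomials
```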

5 Numerical Results in \(d=2\)

To test the approximation accuracies of the proposed formulas, we consider the case \(d=2\) and the standard triangle \(S=\Delta _{2}\) with vertices \({\varvec{v}}_{0}=(0,0)\), \({\varvec{v}}_{1}=(1,0)\), \({\varvec{v}}_{2}=(0,1)\). The numerical experiments are conducted by considering the following set of test functions [1]

$$\begin{aligned} f_{1}(x,y)= & {} \cos \left( \sqrt{1+x^{2}+y^{2}}\right) , \\ f_{2}(x,y)= & {} \exp \left( -\frac{(3x-2)^{2}+(3y-2)^{2}}{4}\right) , \\ f_{3}(x,y)= & {} \sin \left( \frac{\pi }{4}x+\frac{\pi }{6}y\right) , \\ f_{4}(x,y)= & {} \sinh \left( \frac{\pi }{4}x+\frac{\pi }{6}y\right) . \end{aligned}$$

In all the experiments, the exact values of the integrals of \(f_{1}\) and \(f_{2}\) are taken to be the numerical values computed by Mathematica. In Table 1, we report the absolute value of the remainder terms \(E_{r}^{\Delta _{2}}[f_{i}]=Q_{r}^{\Delta _{2}}[f_{i}]-\int _{\Delta _{2}}f_{i}({\varvec{x}}){\text {d}}{\varvec{x}}\), \(i=1,\dots ,4\), \( r=1,\dots ,10,\) and, in Table 2, we display the absolute value of the remainder terms

$$\begin{aligned} \tilde{E}_{2}^{\Delta _{2}}[f_{i}]= & {} \tilde{Q}_{2}^{\Delta _{2}}[f_{i}]-\int _{\Delta _{2}}f_{i}({\varvec{x}}){\text {d}}{\varvec{x}}, \qquad i=1,\ldots , 4 , \\ \tilde{E}_{3}^{\Delta _{2}}[f_{i}]= & {} \tilde{Q}_{3}^{\Delta _{2}}[f_{i}]-\int _{\Delta _{2}}f_{i}({\varvec{x}}){\text {d}}{\varvec{x}}, \qquad i=1,\ldots , 4 , \\ \tilde{E}_{MQ_{3}}^{\Delta _{2}}[f_{i}]= & {} MQ_{3}^{\Delta _{2}}[f_{i}]-\int _{\Delta _{2}}f_{i}({\varvec{x}}){\text {d}}{\varvec{x}}, \quad i=1,\ldots , 4 , \\ \tilde{E}_{MQ_{4}}^{\Delta _{2}}[f_{i}]= & {} MQ_{4}^{\Delta _{2}}[f_{i}]-\int _{\Delta _{2}}f_{i}({\varvec{x}}){\text {d}}{\varvec{x}}, \quad \, i=1,\ldots , 4. \end{aligned}$$
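As a complement to the tables, the reference values of the four test integrals can also be obtained with an adaptive routine; the sketch below uses SciPy's `dblquad` instead of Mathematica and is meant only as a reproducibility aid.

```python
import numpy as np
from scipy.integrate import dblquad

# Reference values of the test integrals over the standard triangle Delta_2
tests = {
    'f1': lambda x, y: np.cos(np.sqrt(1 + x ** 2 + y ** 2)),
    'f2': lambda x, y: np.exp(-((3 * x - 2) ** 2 + (3 * y - 2) ** 2) / 4),
    'f3': lambda x, y: np.sin(np.pi / 4 * x + np.pi / 6 * y),
    'f4': lambda x, y: np.sinh(np.pi / 4 * x + np.pi / 6 * y),
}
for name, f in tests.items():
    val, err = dblquad(lambda y, x: f(x, y), 0.0, 1.0, lambda x: 0.0, lambda x: 1.0 - x)
    print(name, val, err)                # value and estimated absolute error
```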