Recursion formulas for integrated products of Jacobi polynomials

From the literature it is known that orthogonal polynomials such as the Jacobi polynomials can be expressed in terms of hypergeometric series. In this paper, the authors derive several contiguous relations for terminating multivariate hypergeometric series. With these contiguous relations one can prove several recursion formulas for those series. This theoretical result allows one to compute integrals over products of Jacobi polynomials in a very efficient recursive way. Moreover, the authors present an application to numerical analysis, where the result can be used in algorithms which compute the approximate solution of boundary value problems of partial differential equations by means of the finite element method (FEM). With the aid of the contiguous relations, the approximate solution can be computed much faster than by numerical integration. A numerical example illustrates this effect.


Introduction.
Finite element methods (FEM) are popular and versatile methods to compute approximate solutions of partial differential equations (PDE) on complicated domains for which no exact solution is known. The solution is expanded in a basis that is constructed on a mesh of basic geometric objects, in our case simplices. The coefficients of these basis functions are computed as the solution of a linear system Ku = f whose entries are the integrals of products of (derivatives of) these basis functions, see [15, 47, 42, 11]. If the solution of the underlying PDE is smooth, the local polynomial degree p is increased in order to improve the accuracy of the computed FEM solution in comparison to the exact one, [45, 16]. However, the computation of the FEM solution requires a suitable basis in order to keep the computational cost as low as possible. In [10], a set of basis functions for the Poisson problem in 2D was presented that yields a sparse linear system. These basis functions are based on classical orthogonal polynomials on a triangle, see e.g. [41], [34] and also [18]. The sparsity is obtained from the orthogonality relations. This work was later generalized to 3D as well as to other partial differential equations; see [9] for an overview of these results.
In all these cases, the sparsity for the respective basis functions was proven by explicitly evaluating the integrals using difference-differential relations satisfied by Jacobi polynomials. In the case of tetrahedral elements, these computations became so involved that they could not be carried out by hand, but were proven using symbolic computation. The basis for the application of computer algebra is that Jacobi polynomials P_n^{(α,β)}(x) are holonomic [51, 35], i.e., they satisfy certain types of difference-differential equations in all parameters. For various representations, it is possible to automatically derive new identities involving Jacobi polynomials [30, 8]. These procedures are rigorous and many come with easy-to-verify certificates.
The outcome of the algorithm is the set of coefficients K_{ik} of the linear system (matrix) K, computed explicitly as rational functions in the parameters. For a rational function, it is fairly easy to derive recurrence relations. In order to find a minimal order recurrence, we have used a Mathematica package by Kauers [29] for guessing recurrences. These guesses are easily proven by plugging in the actual entries. This guess-and-prove approach is very common, also with more complicated input than rational functions. There exist implementations of algorithms for holonomic functions in several computer algebra systems, such as, e.g., mgfun in Maple [14] or ore_algebra in Sage [31].
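The guess-and-prove idea can be sketched in a few lines: collect values of a sequence, posit a recurrence ansatz with polynomial coefficients, extract the nullspace of the resulting linear system, and verify the guess on data not used for guessing. A minimal sketch, assuming nothing about the paper's actual entries (the sequence a_n = 1/(n+1) is a toy stand-in for a rational-function matrix entry):

```python
import numpy as np

# Toy sequence standing in for the rational-function matrix entries:
# a_n = 1/(n+1), which satisfies (n+1)*a_n - (n+2)*a_{n+1} = 0.
a = np.array([1.0 / (n + 1) for n in range(20)])

# Ansatz for a first-order recurrence with linear polynomial coefficients:
# (c0 + c1*n)*a_n + (c2 + c3*n)*a_{n+1} = 0.  Each n gives one linear
# equation in the unknowns (c0, c1, c2, c3).
M = np.array([[a[n], n * a[n], a[n + 1], n * a[n + 1]] for n in range(10)])

# The recurrence coefficients span the nullspace of M; take the right
# singular vector belonging to the smallest singular value.
_, _, Vt = np.linalg.svd(M)
c = Vt[-1] / Vt[-1][0]        # normalize so that c0 = 1

# "Prove" the guess on terms that were not used for guessing.
residual = max(abs((c[0] + c[1] * n) * a[n] + (c[2] + c[3] * n) * a[n + 1])
               for n in range(10, 19))
print(residual < 1e-10)
```

For actual proofs one substitutes the closed-form entries into the guessed recurrence, as described above; the numerical check here only illustrates the guessing step.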
The results in this article show how recurrences for the integrals of two multivariate orthogonal polynomials and H^1 basis functions on a triangle can be found directly from a multivariate hypergeometric series representation pFq of the integrals, using contiguous relations and terminating pFq identities. This kind of technique has been well known since the works of Gauß and Euler, see e.g. [4] or [43], but is seldom applied to multivariate series. Usually one would refer to the works of Wilson [49] or would use symbolic computation to derive such recursion formulas, but a direct derivation turned out to be more insightful, not only for the proof, but for the general structure of such series as well. Multivariate hypergeometric series are more difficult to handle than general pFq series. In some cases one can reduce a multivariate series to a general one by using well-known summation theorems such as the Pfaff-Saalschütz theorem, Dougall's summation or Whipple's transformation, see e.g. [7] or [4], but usually there is no transformation that holds for all parameter configurations we are interested in. A broad overview of the convergence theory for multivariate series can be found e.g. in [21], but it does not need to be applied here, since our series are based on Jacobi polynomials, which are terminating series themselves. The notation which will be used goes back to Burchnall and Chaundy [12].
For the first time, this sheds light on the underlying structure of the integral values; furthermore, the sparsity pattern of finite element matrices can be read off from the coefficients of the recursion formulas. To our knowledge, these identities are new and interesting in their own right. For the community in numerical analysis, theorem 4.1 is the most important result, stating how the nonzero entries of K can be computed recursively in optimal arithmetical complexity. It is a consequence of the main result of this article, theorem 4.12. The results can be extended to an arbitrary simplex or to basis functions in the function spaces H(curl) and H(div). Optimal complexity for finite element matrices was first achieved not long ago by [1] for basis functions based on Bernstein polynomials, but the resulting element matrices were dense. One can transform the basis functions based on Jacobi polynomials to a basis based on Bernstein polynomials, see e.g. [3], [2], but this transformation loses optimal arithmetical complexity for the assembly, though it has other useful properties which will not be discussed here. For element mass matrices based on Bernstein polynomials, one can use a block recursive structure, as can be seen in [33], which results in an efficient inversion technique. Furthermore, recursion formulas for the orthogonal polynomials on a triangle have already been computed, see e.g. [50] and [38]. But to the knowledge of the authors, there are only a few publications in which the matrix entries K_{ik} of a FEM matrix are computed completely recursively, see [40] for a special case. Part of the recursion formulas shown here was first published in [25]. This paper generalizes the recursion formulas and presents a proof for those relations.
This paper is organized as follows: In section 2, the authors introduce hypergeometric series and Jacobi polynomials and give an overview on several known identities and notations that are used throughout the paper. A short motivation of the background from FEM is given in section 3. The main theoretical results are formulated in section 4. Finally, some algorithmic aspects and numerical experiments are presented in section 5.
Throughout this paper, the indices n, m, i, j, k, l denote natural numbers. P_n^{(α,β)} is the n-th Jacobi polynomial with indices α, β. The pair ρ, δ usually denotes another pair of Jacobi polynomial indices. F stands for some kind of (generalized or multivariate) hypergeometric series and I for the exact value of an integral.

Hypergeometric series. Let

(a)_n := a(a+1)⋯(a+n−1) = Γ(a+n)/Γ(a)

be the Pochhammer symbol or rising factorial, and let Γ(·) denote the Gamma function [17], which is given by Γ(n) = (n − 1)! for n ∈ N.

DEFINITION 2.1 (Gaussian hypergeometric series). For arbitrary a, b, c the series

(2.1)  2F1(a, b; c; x) := Σ_{k≥0} (a)_k (b)_k / ((c)_k k!) x^k

is called the Gaussian hypergeometric series. It converges absolutely at x = 1 for Re(c − a − b) > 0, see [4] or [43]. Classical orthogonal polynomials can be expressed as hypergeometric series, see e.g. [43]. For Jacobi polynomials in particular, we have

(2.2)  P_n^{(α,β)}(x) = (α+1)_n / n! · 2F1(−n, n+α+β+1; α+1; (1−x)/2).

For x = 1 the series (2.1) can be summed and written in closed form,

(2.3)  2F1(a, b; c; 1) = Γ(c)Γ(c−a−b) / (Γ(c−a)Γ(c−b)),  Re(c − a − b) > 0.
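The hypergeometric representation (2.2) is easy to verify numerically; the helper `jacobi_via_2f1` below is illustrative only, the parameter values are arbitrary:

```python
from math import factorial
from scipy.special import eval_jacobi, poch
from mpmath import hyp2f1

# Check (2.2): P_n^{(a,b)}(x) = (a+1)_n / n! * 2F1(-n, n+a+b+1; a+1; (1-x)/2).
# The 2F1 terminates because its first parameter is the negative integer -n.
def jacobi_via_2f1(n, a, b, x):
    return poch(a + 1, n) / factorial(n) * float(hyp2f1(-n, n + a + b + 1,
                                                        a + 1, (1 - x) / 2))

n, a, b, x = 5, 1.5, 0.5, 0.3
err = abs(jacobi_via_2f1(n, a, b, x) - eval_jacobi(n, a, b, x))
print(err < 1e-10)
```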
This can be proven by using an integral representation due to Euler, see [4]. If a (or b) is a negative integer −n, the identity simplifies even more (Chu-Vandermonde):

2F1(−n, b; c; 1) = (c−b)_n / (c)_n.

A generalized version of the 2F1 is called a generalized hypergeometric series.

DEFINITION 2.4. Let a_i, i = 1, …, p, and b_j, j = 1, …, q, be given. Then the series

(2.4)  pFq(a_1, …, a_p; b_1, …, b_q; x) := Σ_{k≥0} (a_1)_k ⋯ (a_p)_k / ((b_1)_k ⋯ (b_q)_k k!) x^k

is called a generalized hypergeometric series.
The series pFq(a_1, …, a_p; b_1, …, b_q; x) converges absolutely for all x if p ≤ q, and for |x| < 1 if p = q + 1, see e.g. [4]. Higher classes of polynomials, e.g. polynomials of the Hahn class (see [4]), can be described by those series.
For the special case p = 3, q = 2, a summation theorem similar to (2.3) can be found.
THEOREM 2.5 (Pfaff-Saalschütz). Let m ∈ N; then

3F2(−m, a, b; c, 1 + a + b − c − m; 1) = (c−a)_m (c−b)_m / ((c)_m (c−a−b)_m).

This can be proven by equating the coefficients of a transformation for the 2F1 which again is due to Euler, see [4] or [43]. Important for summability is the fact that the series is balanced or Saalschützian.

DEFINITION 2.6. A generalized hypergeometric series (2.4) is called s-balanced if x = 1 and

s + a_1 + ⋯ + a_p = b_1 + ⋯ + b_q.

The case s = 1 is also called Saalschützian.
There are some summation theorems like Chu-Vandermonde or the Pfaff-Saalschütz theorem for p, q greater than 3 and 2, but they are usually more restrictive. For example, for a balanced 7F6 series there holds Dougall's summation, but the series needs to fulfil additional properties, see e.g. [4], [7] for more information.

Contiguous Relations.
Recurrence relations can be proven by using the contiguous relations of the hypergeometric series. We will briefly summarize the basic ideas as they can be found in the book of Rainville [43]. Alternative and/or equivalent methods for more general hypergeometric series can be found, for example, in the book of Andrews et al. [4] or in the works of Bailey [6] and Wilson [49].
Let F = 2F1(a, b; c; x) be a Gaussian hypergeometric series. Taking the derivative yields

(2.6)  ∂/∂x F = (ab/c) 2F1(a+1, b+1; c+1; x).

For ease of notation when stating the contiguous relations, we follow common practice [4] and write F(a+) = 2F1(a+1, b; c; x), F(b+) = 2F1(a, b+1; c; x), and so on. Therefore (2.6) can be written as ∂/∂x F = (ab/c) F(a+, b+, c+). Now define the differential operator θ_x = x ∂/∂x, also known as the Euler operator. Applying it to F yields analogously the formulas

(2.7)  (θ_x + a)F = a F(a+),
(2.8)  (θ_x + b)F = b F(b+).

From (2.7) and (2.8) follows (a − b)F = a F(a+) − b F(b+); this is one of the 15 contiguous relations for the 2F1 and its six contiguous functions. The others can be obtained by the same kind of straightforward computations. For a complete list, see e.g. [4], [7], [43] or [46]. Many of the important recurrence relations between Jacobi polynomials can be proven by using the contiguous relations of the 2F1.
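The contiguous relation (a − b)F = aF(a+) − bF(b+) derived above can be tested numerically with arbitrary parameters:

```python
from mpmath import mp, hyp2f1

mp.dps = 30
a, b, c, x = 0.7, 2.2, 3.5, 0.4   # arbitrary test values, |x| < 1

# Contiguous relation: (a - b) F = a F(a+) - b F(b+)
lhs = (a - b) * hyp2f1(a, b, c, x)
rhs = a * hyp2f1(a + 1, b, c, x) - b * hyp2f1(a, b + 1, c, x)
print(abs(lhs - rhs) < mp.mpf('1e-20'))
```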

Jacobi Polynomials.
The Jacobi polynomials (2.2) are the polynomials which are orthogonal on [−1, 1] with respect to the weight function w(x) := (1 − x)^α (1 + x)^β, α, β > −1. They can either be given by the Rodrigues formula

P_n^{(α,β)}(x) = (−1)^n / (2^n n!) · (1−x)^{−α} (1+x)^{−β} d^n/dx^n [(1−x)^{n+α} (1+x)^{n+β}],

or by the hypergeometric representation (2.2). Furthermore, the symmetry property

(2.10)  P_n^{(α,β)}(−x) = (−1)^n P_n^{(β,α)}(x)

yields the representation

(2.11)  P_n^{(α,β)}(x) = (−1)^n (β+1)_n / n! · 2F1(−n, n+α+β+1; β+1; (1+x)/2).

We refer to the standard literature for more information on these properties, e.g. [4], [43], [48]. Since the Jacobi polynomials are orthogonal polynomials, they satisfy a three term recurrence relation (2.12). Next, let us recall several difference equations satisfied by Jacobi polynomials (see e.g. [43, Ch. 16]) that are summarized in the following lemma.

LEMMA 2.7. The Jacobi polynomials (2.2) satisfy the difference-differential relations of [43, Ch. 16]. Further relations between different Jacobi polynomials that can be found in [43] will be introduced if required.
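One standard difference-differential relation of this type, d/dx P_n^{(α,β)}(x) = ((n + α + β + 1)/2) P_{n−1}^{(α+1,β+1)}(x), can be checked against a finite difference (illustrative sketch; whether it is among the relations of lemma 2.7 is our assumption):

```python
from scipy.special import eval_jacobi

# d/dx P_n^{(a,b)}(x) = (n + a + b + 1)/2 * P_{n-1}^{(a+1,b+1)}(x)
n, a, b, x, h = 6, 0.5, 1.5, 0.2, 1e-6

deriv_fd = (eval_jacobi(n, a, b, x + h) - eval_jacobi(n, a, b, x - h)) / (2 * h)
deriv_id = (n + a + b + 1) / 2 * eval_jacobi(n - 1, a + 1, b + 1, x)
print(abs(deriv_fd - deriv_id) < 1e-6)
```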
In a high order finite element context it is often useful to use integrated Jacobi polynomials, obtained by integrating P_{n−1}^{(α,0)}(x) from −1 to x, which can again be written as Jacobi polynomials, see (2.20), where the Jacobi polynomials with index β = −1 and α > −1 are defined properly, see e.g. [48], as in (2.21). Integrated Legendre polynomials can be defined similarly, see (2.22).
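A small sketch of the integrated-polynomial idea for the Legendre case, using the well-known identity ∫_{−1}^x L_{n−1}(t) dt = (L_n(x) − L_{n−2}(x))/(2n − 1); the exact normalization used in (2.20)-(2.22) may differ:

```python
from scipy.special import eval_legendre
from scipy.integrate import quad

def lhat(n, x):
    # Lhat_n(x) = int_{-1}^x P_{n-1}(t) dt, computed by quadrature
    val, _ = quad(lambda t: eval_legendre(n - 1, t), -1, x)
    return val

n, x = 5, 0.37
closed = (eval_legendre(n, x) - eval_legendre(n - 2, x)) / (2 * n - 1)
err_mid = abs(lhat(n, x) - closed)   # closed form matches the integral
err_end = abs(lhat(n, 1.0))          # "bubble" property: vanishes at x = +-1
print(err_mid < 1e-9, err_end < 1e-9)
```

The vanishing at the endpoints is what makes these functions convenient interior (bubble) shape functions in FEM.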

Background from the Finite Element Method (FEM).
Variational formulation and the function space H^1. In this paper, we investigate the following problem in variational formulation: for a given bounded domain Ω ⊂ R^d, find u in the Sobolev space H^1 such that (3.1) holds. For ease of notation, we assume Neumann boundary conditions. The bilinear form a(·, ·) and the linear form F(·) are well-defined and bounded for f ∈ L^2(Ω) and µ, κ ∈ L^∞(Ω) with µ > 0, where κ and µ are assumed to be piecewise constant. The variational formulation is well-defined for square-integrable functions u : Ω → R with square-integrable gradient; we denote the corresponding function space by H^1(Ω). The variational formulation (3.1) is obtained from the reaction-diffusion equation −∇ · (µ∇u) + κu = f by multiplication with a test function v, integration over the domain Ω, and application of Green's formula to the second order part. We refer the interested reader to [20, 36, 22] for more information concerning this topic.
For complicated geometries Ω and real-life data it is not possible to solve equation (3.1) analytically. The finite element method (FEM) provides a general method for the numerical solution of partial differential equations. It is based on the variational formulation of the underlying PDE and admits a profound analysis.
Finite element discretization of H^1. Galerkin methods such as, e.g., FEMs are among the most powerful methods for the solution of boundary value problems of the form (3.1). The Galerkin approximation relies on the projection of the implicitly given solution u in (3.1) onto an N-dimensional subspace V_N ⊂ H^1, orthogonal with respect to the bilinear form a(·, ·). Therefore, we construct a sequence of finite dimensional spaces V_N ⊂ H^1 and consider the solution of (3.1) on V_N (see e.g. [15, 47] or the textbooks [42, 11]), namely (3.3). The finite element method provides a special construction of these discrete spaces V_N by piecewise polynomial functions on an admissible subdivision (see [15]) T_h of Ω into simplices τ_s, s = 1, …, nel, where P_p is the space of all polynomials defined on τ_s of maximal total degree p. The elements τ_s are chosen such that κ and µ are constant on each element. In hp-finite element methods the polynomial degree p can vary on each element τ_s, which provides extraordinarily fast convergence of the finite element method with respect to the number of degrees of freedom N = dim(V_N), see e.g. [45]. This is crucial for the solution of real world problems of the form (3.1). Since the space V_N is finite dimensional, it is equipped with a row vector of basis functions [Ψ] := [ψ_1, …, ψ_N]. The basis functions ψ_j are chosen such that they have local support (see e.g. [15]).
Then, using the ansatz u_N(x) = Σ_{i=1}^N u_i ψ_i(x) and setting v = ψ_j for j = 1, …, N in (3.3), the problem becomes equivalent to solving the following system of N linear algebraic equations:

(3.5)  find a coefficient vector u = (u_i)_{i=1}^N ∈ R^N such that Ku = f.
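The Galerkin system Ku = f can be sketched in the simplest 1D setting, −u'' = 1 on (0, 1) with homogeneous Dirichlet data and linear hat functions on a uniform mesh (a textbook example, not the model problem of this paper; for this right-hand side the nodal values happen to be exact):

```python
import numpy as np

# 1D Galerkin FEM sketch: -u'' = 1 on (0,1), u(0) = u(1) = 0, with
# piecewise-linear hat functions; K_ij = int phi_i' phi_j' dx.
N = 50                           # number of interior nodes
h = 1.0 / (N + 1)
K = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h
f = h * np.ones(N)               # load vector: int 1 * phi_i dx = h

u = np.linalg.solve(K, f)        # coefficients = nodal values of u_N
x = np.linspace(h, 1 - h, N)
exact = x * (1 - x) / 2          # exact solution of the model problem
print(np.max(np.abs(u - exact)) < 1e-10)
```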
Note that the matrix K depends on the choice of the basis functions.

Efficient solution of the algebraic system. In practical problems, the dimension N usually becomes very large (10^6 or more). Iterative methods such as the preconditioned GMRES method or the preconditioned conjugate gradient method (PCG) for positive definite systems are preferred for the solution of (3.5). The two most important issues for the fast solution of the system Ku = f are
• the fast multiplication Ku,
• the choice of a good preconditioner C^{−1} for K such that the condition number κ(C^{−1}K) becomes small,
in order to obtain fast convergence of the iterative solver for the solution of (3.5). If K is a dense matrix, the operation Ku requires N^2 flops. If K is a sparse matrix with a bounded number c of nonzero entries per row, the computational complexity of the matrix-vector multiplication is bounded by cN. Since K in (3.5) depends on the choice of the basis [Ψ], it is essential to choose a basis with as many orthogonality relations as possible with respect to the bilinear form a(·, ·). The choice of the basis heavily influences the properties of the matrix K:
• the local support of finite element basis functions yields sparse system matrices K and hence a cheap matrix-vector multiplication Ku,
• the condition numbers of K and C^{−1}K, respectively, determine the stability and the number of iterations in iterative solution methods.
In the lower order version of FEM, i.e., the h-version, multigrid solvers are the most powerful methods for discretizations of boundary value problems of partial differential equations, see [23] and the references therein. For hp-FEM this strategy is combined with appropriate local smoothers and static condensation.
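The cost argument can be illustrated with a sparse tridiagonal K (c = 3 nonzeros per row, as for 1D linear elements); the sizes below are arbitrary:

```python
import numpy as np
from scipy.sparse import diags

# Tridiagonal K in compressed sparse row format: the product K@u only
# touches the ~3N stored entries, so it costs O(c*N) flops instead of
# the O(N^2) of a dense matrix-vector product.
N = 200_000
K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format='csr')
u = np.ones(N)

v = K @ u                          # O(N) sparse matrix-vector product
print(K.nnz == 3 * N - 2)          # only the nonzero entries are stored
```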
hp-FEM and choice of basis functions. In hp-FEM, the local polynomial degree p_s on the elements may be large. Despite the local support of the basis functions [Ψ], the local dimension n_s grows as O(p_s^d), where d = 2, 3 is the spatial dimension. Hence we are interested in a bounded number of nonzero entries in the system matrix, independent of the polynomial degrees. Let [Φ_s] = [φ_{i,s}]_{i=1}^{n_s} be the set of all basis functions ψ_j with supp ψ_j ∩ τ_s ≠ ∅, i.e. [Φ_s] = [Ψ]L_s with the (boolean) finite element connectivity matrices L_s.
In finite element methods, the global system matrix K is the result of assembling local matrices, see [15]. In our case, one obtains (3.6) with the local stiffness and mass matrices (3.7) on the elements τ_s, respectively, where µ_s = µ|_{τ_s} and κ_s = κ|_{τ_s} are constants.
Therefore, the sparsity of the matrices K_s in (3.7) implies the sparsity of the matrix K, cf. (3.6). Our aim is to develop a local polynomial basis [Φ_s] such that the matrices A_s and M_s in (3.7) have a bounded number of nonzero entries per row. The global basis is then obtained in the usual way, see e.g. [16].
Model problem. For ease of presentation, we focus on the following model problem: let △ denote an arbitrary non-degenerate simplex and let [Φ] = [φ_i]_{i=1}^n be a polynomial basis of degree p with φ_i : △ → R such that the matrices (3.8) have O(n) nonzero entries. This basis should be suited for H^1 conformity.

Definition of the basis functions. Let △ ⊂ R^2 denote an arbitrary non-degenerate triangle with vertices V_1, V_2, V_3. Using the integrated Jacobi polynomials (2.20), we define the shape functions on the affine triangle in terms of the barycentric coordinates λ_m(x, y), m = 1, 2, 3.
• The vertex functions are chosen as the usual linear hat functions, i.e. the barycentric coordinates λ_m, m = 1, 2, 3; these form the basis of the vertex functions.
• The interior bubbles are defined as in (3.9), where the auxiliary bubble functions g_i and h_{ij} are built from the integrated Jacobi polynomials; these form the basis of all interior bubbles. Finally, let I be the set of all shape functions on τ_s.

Sparsity results. It can be proved, [10], that the matrices M and A in (3.8) have a bounded number of nonzero entries. Usually the stiffness matrix is the more important part, but for ease of presentation we will focus on the mass matrix.
In particular, one obtains (3.10) for the mass matrix. Similar results (with a similar sparsity pattern) can be proved for the stiffness matrix as well as for the 3D case and other applications, see [9] for an overview. Nevertheless, it remains to compute the nonzero entries in optimal arithmetical complexity. The type of integrals is similar in all of the above mentioned applications. For simplicity, it is explained for m_{ij,kl} in (3.10) on the reference element T̂ with the vertices V_1 = (−1, −1), V_2 = (1, −1) and V_3 = (0, 1). Then one has to compute (3.11). The inner integral is evaluated first; inserting this into the second integral, one has to compute (3.12) with i, j ∈ N and |i − k| ≤ 2. For the other applications, the weights in (3.12) differ slightly. Our aim is to develop recursion formulas for (3.12) in a more general setting.

Recursion identities.
4.1. Application: building a bridge between special functions and FEM. The FEM basis functions are defined on a triangle. By using the so-called Duffy transformation one can transform these onto the square (−1, 1) × (−1, 1), see e.g. [18], [28] for details. Thus the problem in the FEM application boils down to one dimensional integrals over integrated Jacobi polynomials. A generalized version of one of these integrals is, by (2.21) and (2.22), equivalent to the integral I_{n,m}, with n + α + β ≥ 0 as well as m + ρ + δ ≥ 0. We are interested in finding recursion formulas for I_{n,m} with respect to n and m. Our main result for the numerical application of this paper is the following theorem.

THEOREM 4.1. The exact value I_{n,m} can be calculated recursively; moreover, the exact value I^{(2)}_{n,m} can be calculated recursively by (4.2).

Proof. The results are an immediate consequence of theorem 4.12 and corollary 4.13.

Both recursion formulas shown in theorem 4.1 are just special cases of the more general theorem 4.12, which will be derived and proven analytically in section 4.4. The right picture in figure 4.1 shows a schematic representation of the recursion formulas, whereas the left picture displays the typical nonzero pattern of the element matrix. We now give a short example of how these recursion formulas can be applied in the finite element context.

EXAMPLE 1. The recursion formula for the entry I_{n,m} can be applied directly to the case of the mass matrix for the H^1 basis functions. On the triangle T with vertices (−1, −1), (1, −1) and (0, 1), the interior basis functions (3.9) take a product form. This results in the two one dimensional integrals seen above in (3.11). Both integrals I^{(0,0,0,0)}_{i,k} and I^{(i+k+1,0,2i−1,2j−1)}_{j,l} can be computed using (4.2), though in the case of I^{(0,0,0,0)}_{i,k} it reduces further due to the sparsity pattern.
Since each recursion formula needs starting values, we compute the lowest order integrals in each block of the mass matrix, compare the left picture in figure 4.1, by a low order quadrature. An extension to the edge functions is straightforward, since those are just special cases of the interior functions. The remainder of this paper is dedicated to proving the generalized version of theorem 4.1. We start by rewriting the exact integral value. By using a series representation of the Jacobi polynomial, one can write the exact value of the integral as a double sum (4.5). The integral in the double sum is exactly Euler's Beta integral, i.e.

B(x, y) = ∫_0^1 t^{x−1} (1 − t)^{y−1} dt = Γ(x)Γ(y) / Γ(x + y).
The Beta function can be expressed using Pochhammer symbols as well, i.e. B(x, n + 1) = n! / (x)_{n+1} for n ∈ N.
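The rewriting of the integral as a double sum of Beta integrals can be reproduced numerically. The sketch below expands both Jacobi polynomials via (2.2), reduces each term to an Euler Beta integral via the substitution t = (1 + x)/2, and compares against direct quadrature; the function names and the concrete weight (1 − x)^µ (1 + x)^ν P_n^{(α,β)} P_m^{(ρ,δ)} are our assumptions about the form of the integral:

```python
from math import factorial
from scipy.special import poch, beta, eval_jacobi
from scipy.integrate import quad

def coeffs(n, a, b):
    # c_r with P_n^{(a,b)}(x) = sum_r c_r * ((1-x)/2)^r, from (2.2);
    # built by term ratios to avoid Pochhammer symbols at negative integers
    c = [poch(a + 1, n) / factorial(n)]
    for r in range(n):
        c.append(c[-1] * (r - n) * (n + a + b + 1 + r) / ((a + 1 + r) * (r + 1)))
    return c

def I_series(n, m, mu, nu, al, be, rho, de):
    # each term reduces to 2^(mu+nu+1) * B(nu+1, mu+r+l+1) via t = (1+x)/2
    cn, cm = coeffs(n, al, be), coeffs(m, rho, de)
    return sum(cn[r] * cm[l] * 2 ** (mu + nu + 1) * beta(nu + 1, mu + r + l + 1)
               for r in range(n + 1) for l in range(m + 1))

n, m, mu, nu, al, be, rho, de = 3, 4, 1.0, 2.0, 0.5, 0.5, 1.5, 0.5
direct, _ = quad(lambda x: (1 - x) ** mu * (1 + x) ** nu
                 * eval_jacobi(n, al, be, x) * eval_jacobi(m, rho, de, x), -1, 1)
err = abs(I_series(n, m, mu, nu, al, be, rho, de) - direct)
print(err < 1e-8)
```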

4.2. Recurrence relation for the special case ν = β = δ = 0. If we set ν = β = δ = 0, one can rewrite (4.5) as a generalized hypergeometric series. First, rewrite the Beta function using Pochhammer symbols, then split r + l by using combinatorial arguments and use the Pfaff-Saalschütz theorem 2.5 to reduce the double sum in (4.5). After some more combinatorial arguments, (4.6) follows; see [27] or [19] for more details. Starting from this representation, we now prove a recursion originally obtained through the symbolic guessing package Guess by Manuel Kauers [29]: the recurrence (4.7) holds.
Proof. We start by deriving a contiguous relation of the hypergeometric series in (4.6), using the fact that the contiguous functions can be written as shifted versions of the same series. Hence, in order to find a recursion of the form (4.7), a relation between these contiguous series needs to hold. Expanding the fractions over a common denominator reduces the condition to a polynomial identity. To find the coefficients c_i, i = 0, …, 3, we view this identity as a polynomial in κ and equate the coefficients of κ to zero. This leads to an underdetermined system; we can choose c_0 = 1, bring the corresponding terms to the right-hand side, and solve the remaining system. Rescaling c_0 leads to the proposed recurrence relation.

Further simplification of the generalized hypergeometric series in (4.6) is not that trivial. There is a summation formula for a balanced 4F3 in [44], [32], but it cannot be applied here since the coefficients of the series do not match. A summation by using Whipple's transformation and Dougall's summation (see e.g. [4]) works only for the case n = m; otherwise the resulting transformed series is not well-poised. Moreover, the more general case of arbitrary µ, ν, β and δ cannot be represented by a 4F3, since neither of the sums in (4.5) is summable by the Pfaff-Saalschütz theorem. To be precise, those general series are (ν + 1)-balanced. We therefore use a more general approach.

Kampé de Fériet Series.
The concept of hypergeometric series can be extended to the multivariate case. Examples are the Appell series [5], the Kampé de Fériet series [5] or the Lauricella series.

DEFINITION 4.3. For p_1, p_2, p_3, q_1, q_2, q_3 ∈ N and arbitrary coefficients (a_1, …, a_{q_1}), (b_1, …, b_{p_1}), …, the double series F^{p_1;p_2;p_3}_{q_1;q_2;q_3} is called a Kampé de Fériet series.

The notation is due to Burchnall and Chaundy (see [12], [13]). More information, in particular on the convergence theory of such series, can be found e.g. in [21]. We can now write I_{n,m} as a generalized Kampé de Fériet series, i.e. (4.8), where B(x, y) = Γ(x)Γ(y)/Γ(x + y) is the usual Beta function as above. F converges, since it is terminating due to the coefficients −n and −m. Furthermore, we omit the indices of F to keep the notation as simple as possible. We denote the forward shift of parameters by one analogously to earlier; there are seven analogous recurrence formulas in m, ρ and δ. Furthermore, we can derive more recurrence relations by linear combinations of the above.

COROLLARY 4.5. The recurrence formula (4.14) holds.

Proof. The equation is just a linear combination of (4.9) and (4.12). Start by transforming (4.9) and (4.12); adding the resulting equations leads to (4.14).

One important, but rather trivial, relation is given by (4.15). The following relations will later be proven by using a generalized form of (4.8) in the appendix. Setting x = y = 1 in lemmas A.2 and A.3, the following corollary holds.

COROLLARY 4.6. Since the weights in the Jacobi polynomials are interchangeable, see (2.10), the following two relations can be derived as well.
see lemmas A.5 and A.6 with x = y = 1.

-point recurrence relation.
There are some known star-like recurrence relations, see [40]. Those can be derived in this context as follows.
Proof. Take the first mixed relation (4.20) and replace all series by (4.12); this yields the first equation. The second equation follows by using (4.21) with (4.11). Lastly, the third equation can be derived by a linear combination of (4.22) and (4.23). Alternatively, one can use the three-term recursion (2.12) to prove the same result as in [40].
Recurrence relation. Multiple recurrence relations similar to (4.7) can be proven.

LEMMA 4.10. Let F = I_{n,m}, where I_{n,m} is as in (4.8). Then the recurrence relation (4.24) holds.

Proof. Start with equation (4.16). Replace both terms of the RHS by using shifted versions of the same equation. Moreover, use the shifted relation (4.11) for the middle part of the last two equations.
Lastly, replace the remaining terms by a shifted version of (4.17). The claim follows from combining the above with the remaining terms of (4.25), (4.26), (4.27) and (4.28).
Since α and β or ρ and δ are interchangeable, the following lemma can be proven by using (4.18) and (4.19) instead of (4.16) and (4.17).

LEMMA 4.11. The recurrence relation (4.29) holds.

Both of these recursion formulas have the drawback that the terms with ν + 1 or µ + 1 vanish only for ν = −1 or µ = −1, which correspond to the special cases (1 + x)^0 or (1 − x)^0. If the steps of the proof are slightly adjusted, a recursion formula that applies to more cases can be proven. The following theorem is the main result of this paper.

THEOREM 4.12. Let F = I_{n,m}, where I_{n,m} is as in (4.8). Then the recurrence relation (4.30) holds.

Proof. Again start with recursion (4.16), i.e.
Instead of multiplying by 1, as in the proof of (4.24), we add a zero to expand the RHS. In the last step, use again (4.17).

5. Numerical aspects. Usually, an arbitrary high order finite element matrix is computed by Gaussian quadrature. An element assembly algorithm can be summarized in three steps:
1. Calculate the 1D quadrature points and weights: O(n), see e.g. [24].
2. Evaluate the Jacobi polynomials or, more generally, the basis functions: O(n) in 1D for n quadrature points, see e.g. [26].
3. Gaussian quadrature and, in higher dimensions, sum factorization: O(n^{d+1}), see [39], [37].
The approach via the recursion formulas skips the first two steps and replaces the third by a summation which is independent of n. For ease of presentation, we provide a 1D example.

1D example.
The resulting sparsity pattern can be seen in figure 5.1(a). To compare the standard assembly routine with the recursive version, we now assume that the quadrature points, weights and basis functions are tabulated, i.e. we only need to perform step 3 of the standard assembly routine. In figure 5.1(b) the runtimes of a Matlab implementation of both assembly routines are compared. We measured the assembly time of the Gram matrix for the total polynomial order 10 < n_max < 160, integrating each nonzero value ⟨φ_i, φ_j⟩. As expected, the recursive version is a lot faster than the Gaussian quadrature, even in 1D. While these recurrence relations hold even for low order polynomials, their main strength is the application to higher order or spectral methods, where two integrals in 2D or three in 3D are computed for each nonzero entry. The number of flops per entry remains constant in the recursive case; thus we achieve optimal arithmetical complexity for each integral dimension.

REMARK 2. Polynomial coefficients q_n(x) of degree n in the integrand, i.e. ⟨q_n(x)φ_i(x), φ_j(x)⟩, can be handled as well. Since the Legendre polynomials are a basis of P_n, we can write q_n(x) = Σ_{i=0}^n c_i L_i. The following two corollaries show that we are able to connect integrals of three orthogonal polynomials, and hence we are able to compute polynomial coefficients in the integrand as well, though depending on q_n many more flops are needed.
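The banded sparsity behind figure 5.1(a) can be reproduced with the integrated Legendre basis L̂_n = (L_n − L_{n−2})/(2n − 1): Legendre orthogonality forces ⟨L̂_i, L̂_j⟩ = 0 unless |i − j| ∈ {0, 2}, so the Gram matrix has O(1) nonzeros per row for every degree p. This is a sketch; the paper's basis and normalization may differ.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

p = 20
xs, ws = leggauss(p + 2)     # exact for polynomial integrands up to deg 2p+3

def lhat(n, x):
    # integrated Legendre polynomial Lhat_n = (P_n - P_{n-2}) / (2n - 1)
    return (eval_legendre(n, x) - eval_legendre(n - 2, x)) / (2 * n - 1)

idx = range(2, p + 1)
M = np.array([[np.sum(ws * lhat(i, xs) * lhat(j, xs)) for j in idx]
              for i in idx])

# all entries away from the diagonals |i-j| in {0, 2} vanish
off_band = max(abs(M[i, j]) for i in range(len(M)) for j in range(len(M))
               if abs(i - j) not in (0, 2))
print(off_band < 1e-12)
```

Once the band structure is known, only O(p) entries need to be computed at all, which is what makes a recursive entry-by-entry computation attractive.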
COROLLARY 5.1. For given a ∈ N, the integrand satisfies the following recursion formula. A similar result holds for the integrated Jacobi polynomials.
COROLLARY 5.2. The integrand satisfies the following recursion formula.

Conclusion.
We were able to derive and prove generalized versions of recurrence formulas which were originally derived by symbolic computation. One of the main complications in high order finite elements is the assembly of the local mass and stiffness matrices. Using these newly computed recurrences, one is able to attain a better asymptotic complexity than the state of the art method, sum factorization. Furthermore, this approach not only skips the numerical quadrature, but the initialization step as well. From the special functions point of view, the representation as a Kampé de Fériet series is interesting in itself.

The remaining relations follow directly by applying the differential operators, since the prefactor does not need to be changed. For the last relation (A.8), the differential operator (θ_x + θ_y + µ + ν + 1) applied to x^k y^l yields the factor (k + l + µ + ν + 1). This reduces (ν + µ + 2)_{k+l} in the denominator to (ν + µ + 2)_{k+l−1}. Therefore we multiply the series by (ν + µ + 1)/(ν + µ + 1), such that the part in the denominator becomes (ν + µ + 1)_{k+l}. Since it is only a change in the denominator, it is rather a change in ν than in µ. The rest follows using a property of the Beta function.
