Computing, Volume 95, Supplement 1, pp 75–88

On optimal node and polynomial degree distribution in one-dimensional \(hp\)-FEM


  • J. Chleboun
    • Department of Mathematics, Faculty of Civil Engineering, Czech Technical University
  • Pavel Solin
    • Department of Mathematics and Statistics, University of Nevada
    • Institute of Thermomechanics

DOI: 10.1007/s00607-012-0232-x

Cite this article as:
Chleboun, J. & Solin, P. Computing (2013) 95: 75. doi:10.1007/s00607-012-0232-x


We are concerned with the task of constructing an optimal higher-order finite element mesh under a constraint on the total number of degrees of freedom. The motivation for this work is to obtain a truly optimal higher-order finite element mesh that can be used to compare the quality of automatic adaptive algorithms. The approximation error in a global norm is minimized. The optimization variables include the number of elements, the positions of nodes, and the polynomial degrees of elements. The optimization methods and software that we use are described, and numerical results are presented.


\(hp\)-FEM · Optimal mesh · Optimal polynomial degree · Boundary value problem

Mathematics Subject Classification (2000)


1 Introduction

Even a novice in a common \(h\) version of the finite element method (FEM) is well aware of the importance of a good mesh, regardless of the vagueness of such an attribute. Although criteria ranging from pure geometry to numerical efficiency can be involved, it is most common to describe a mesh as good if the corresponding FE solution is a close approximation to the exact solution. In other words, if the error, that is, the difference between the exact and the approximate solution measured by a relevant norm, is small. This approach makes it possible to define an optimal mesh as the mesh that leads to the minimum error. To correctly define the optimization problem, one has to prevent an unbounded increase of the number of mesh nodes by fixing it. Then the optimal position of the fixed number of mesh nodes is sought.

In practice, one can hardly spend effort searching for the optimal mesh because the exact solution is not known and, even if it were known, the optimization process is computationally demanding. If the goal is relaxed and a good mesh instead of the optimal mesh is sufficient, a great number of adaptive methods are available that construct a mesh or, more precisely, a sequence of meshes whose FE solutions exhibit a fast convergence rate.

Up to now, the degree of the FE basis functions (polynomials) has not been mentioned. It is obvious that allowing higher-degree polynomials makes the FE space richer and, consequently, improves the accuracy of the FE solution. If, however, the polynomial degree is constant over all elements, it might be unnecessarily high in some parts of the domain of the solution; the resulting system of algebraic equations might then be unnecessarily large, and solving it unnecessarily demanding with respect to the gain in accuracy of the approximate solution.

These considerations led to the genesis of the \(hp\)-FEM. In this method, higher-order polynomials are used where appropriate and low-order polynomials elsewhere; moreover, the mesh is optimized step by step, too. For a given boundary value problem (BVP), the optimality of FE spaces no longer depends on the underlying mesh only; the distribution of polynomial degrees over the elements is also crucial.

In solving a BVP, the analyst’s goal is obvious: be economical with the degrees of freedom (DOFs) of the FE space, that is, strive for an FE space that has a low dimension and approximates the exact solution sufficiently accurately. Given a fixed number of DOFs, an FE space that is optimal with respect to error minimization can be sought. This is the subject of this paper.

Again, as in the \(h\)-FEM, though an optimal \(hp\)-FE space is not the goal of computation in practice, adaptive methods can work towards optimality and deliver \(hp\)-FE spaces that result in an exponential rate of convergence with respect to the number of DOFs used; see [1, 3, 4, 6, 8, 9], for instance.

The liberty of choosing different polynomial degrees on different parts of the mesh opens new horizons for error reduction but, on the other hand, places new demands on the algorithms of adaptive FEMs. Indeed, if an area that is a source of error is localized, a difficult question arises as to what combination of a polynomial-order change and a mesh change will reduce the error the most.

The following strategy has been generally accepted: a locally fine mesh and low-degree polynomials are preferred in areas where the exact solution is not sufficiently smooth or exhibits a singularity; a locally coarse mesh and high-degree polynomials are used in areas where the solution is smooth. Adaptive strategies based on smoothness evaluation or error estimation can be found in the literature, from which we choose only [2, 5, 7, 9, 10], the works that pay special attention to problems in one space dimension.

Unlike the above references, this paper does not propose an at least partly universal \(hp\)-adaptive algorithm. Instead, optimal \(hp\)-FE spaces are found numerically for a particular boundary value problem with a known solution. The goal is twofold. First, the importance of the polynomial degree distribution is studied on the respective optimal meshes. Second, the optimal FE spaces found here can serve as benchmarks for \(hp\)-adaptive algorithms in 1D or, in special cases, even in 2D boundary value problems.

The paper is organized as follows. In Sect. 2, a BVP and optimization problems are defined. Computational issues are briefly introduced in Sect. 3. Computational results constitute Sect. 4 and are discussed in Sect. 5.

2 Problem setting

2.1 Model boundary value problem

Let us consider a problem weakly formulated on \(\varOmega =(-1,1)\) that arises from the following BVP: Find a function \(u\) such that
$$\begin{aligned} -u^{\prime \prime }+u&= f, \end{aligned}$$
$$\begin{aligned} u^{\prime }(-1)&= a,\quad u^{\prime }(1)=b. \end{aligned}$$
In the weak formulation, \(u\) belongs to \(H^1(\varOmega )\), the Sobolev space of functions defined on \(\varOmega \) that are square integrable and have a square integrable weak derivative. The space is endowed with the norm
$$\begin{aligned} \Vert u\Vert _1=\left(\int _{-1}^1 \left(u^{\prime 2}(x)+u^2(x)\right)\,\text{ d}x\right)^{1/2}. \end{aligned}$$
The weak counterpart to (1)–(2) reads as: Find \(u\in H^1(\varOmega )\) such that
$$\begin{aligned} \int _{-1}^1 \left(u^{\prime }(x)v^{\prime }(x)+u(x)v(x)\right)\,\text{ d}x =\int _{-1}^1 f(x)v(x)\,\text{ d}x + bv(1)-av(-1) \end{aligned}$$
holds for any \(v\in H^1(\varOmega )\).
Since we will need to calculate the true error, we identify \(u\) with a chosen smooth function and, by (1) and (2), we infer the right-hand side function \(f\) as well as the numbers \(a\) and \(b\) accordingly. In particular, we define
$$\begin{aligned} u(x)=\arctan (20x),\quad x \in [-1,1]; \end{aligned}$$
see Fig. 1 (top). As a consequence, \(a=b=\dfrac{20}{401}\). The derivative of \(u\) attains its maximum at \(x=0\), that is, \(u^{\prime }(0)=20\).
Fig. 1

The exact solution \(u(x)=\arctan (20x)\) (top). Lobatto basis functions of \(V^{\fancyscript{T}_h,p}, p=(1,4,1,3)\), 10 degrees of freedom (bottom)
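Since \(u\) is prescribed, \(f\), \(a\), and \(b\) follow by direct differentiation of (5). The following numerical cross-check is a sketch added here, not part of the paper; the closed form of \(f\) is computed by hand from \(f=-u''+u\):

```python
import math

def u(x):  return math.atan(20 * x)              # chosen exact solution (5)
def du(x): return 20.0 / (1 + 400 * x**2)        # u'
def f(x):                                        # f = -u'' + u, computed by hand
    return 16000 * x / (1 + 400 * x**2)**2 + math.atan(20 * x)

# Check f = -u'' + u with a central-difference second derivative
h = 1e-4
for x in [-0.7, -0.1, 0.0, 0.3, 0.9]:
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h**2   # approx u''(x)
    assert abs(f(x) - (-upp + u(x))) < 1e-3

# Neumann data a = u'(-1), b = u'(1), and the maximal slope u'(0)
assert abs(du(-1) - 20/401) < 1e-12
assert abs(du(1)  - 20/401) < 1e-12
assert du(0) == 20.0
```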

2.2 Approximate BVP

To obtain a FE solution to (4), we first introduce \(hp\)-FE spaces.

Let \(\fancyscript{T}_h\) be a FE mesh on \(\varOmega \) that comprises \(m+1\) nodes \(x_i\), that is,
$$\begin{aligned} -1=x_0<x_1<\cdots <x_{m}=1, \end{aligned}$$
and let \(p=(p_1,\ldots ,p_m)\) be a vector of natural numbers. Let \(I_k=[x_{k-1},x_{k}]\). We define
$$\begin{aligned} V^{\fancyscript{T}_h,p}=\left\{ v_{h,p}\in C([-1,1]):\ \left.v_{h,p}\right|_{I_k}\in P_{p_k}(I_k), \; k=1,\ldots ,m\right\} , \end{aligned}$$
where \(C([-1,1])\) stands for the space of continuous functions on \([-1,1]\) and \(P_{p_k}(I_k)\) is the \((p_k+1)\)-dimensional space of polynomials on \(I_k\) of degree \(p_k\) or less. The dimension of \(V^{\fancyscript{T}_h,p}\) is equal to the sum of the dimensions of \(P_{p_k}(I_k)\) minus \((m-1)\) (the number of continuity conditions), that is,
$$\begin{aligned} \dim V^{\fancyscript{T}_h,p}= \sum _{k=1}^m \dim P_{p_k}(I_k) - m +1=1+\sum _{k=1}^m p_k. \end{aligned}$$
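The dimension formula can be checked directly; a one-function sketch (our addition, not from the paper):

```python
def dim_V(p):
    """Dimension of V^{T_h,p}: sum of dim P_{p_k} = p_k + 1 over the m elements,
    minus the (m - 1) continuity conditions; equals 1 + sum(p)."""
    m = len(p)
    return sum(pk + 1 for pk in p) - (m - 1)

# The example space of Fig. 1 (bottom): p = (1, 4, 1, 3), 10 degrees of freedom
assert dim_V((1, 4, 1, 3)) == 10
assert dim_V((1, 4, 1, 3)) == 1 + sum((1, 4, 1, 3))
```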
Various types of polynomials can be chosen as the basis of \(P_{p_k}(I_k)\). Following [9], we chose Lobatto shape functions up to degree \(p_k\le 10\), defined on the interval \([-1,1]\) and then transformed to the intervals \(I_k\), \(k=1,\ldots ,m\). Let us list a few of the Lobatto shape functions:
$$\begin{aligned} l_0(x)&= \frac{1-x}{2},\quad l_1(x) = \frac{x+1}{2}\\ l_2(x)&= \frac{1}{2}\sqrt{\frac{3}{2}}\left(x^2-1\right),\quad l_3(x) = \frac{1}{2}\sqrt{\frac{5}{2}}\left(x^2-1\right)x,\\ l_4(x)&= \frac{1}{8}\sqrt{\frac{7}{2}}\left(x^2-1\right) \left(5x^2-1\right),\quad l_5(x) = \frac{1}{8}\sqrt{\frac{9}{2}}\left(x^2-1\right) \left(7x^2-3\right)x; \end{aligned}$$
we refer to [9, Chapter 1], for instance, for higher degree Lobatto shape functions.
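A sanity check (our addition, not from the paper): the higher-order Lobatto functions vanish at \(\pm 1\), and their derivatives are \(L^2\)-orthonormal on \([-1,1]\), since \(l_k\), \(k\ge 2\), are antiderivatives of normalized Legendre polynomials.

```python
import numpy as np

# Lobatto shape functions l_0..l_5 on [-1, 1], as listed above
lob = [
    lambda x: (1 - x) / 2,
    lambda x: (x + 1) / 2,
    lambda x: 0.5 * np.sqrt(1.5) * (x**2 - 1),
    lambda x: 0.5 * np.sqrt(2.5) * (x**2 - 1) * x,
    lambda x: 0.125 * np.sqrt(3.5) * (x**2 - 1) * (5 * x**2 - 1),
    lambda x: 0.125 * np.sqrt(4.5) * (x**2 - 1) * (7 * x**2 - 3) * x,
]

# All higher-order ("bubble") functions vanish at the endpoints ...
for l in lob[2:]:
    assert abs(l(-1.0)) < 1e-12 and abs(l(1.0)) < 1e-12

# ... and their derivatives are L2-orthonormal; check by Gauss-Legendre
# quadrature (exact for these polynomial degrees) and central differences.
xs, ws = np.polynomial.legendre.leggauss(20)
h = 1e-6
def d(l, x):                       # central-difference derivative of l at x
    return (l(x + h) - l(x - h)) / (2 * h)

for i in range(2, 6):
    for j in range(2, 6):
        val = np.sum(ws * d(lob[i], xs) * d(lob[j], xs))
        assert abs(val - (1.0 if i == j else 0.0)) < 1e-6
```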

Figure 1 (bottom) shows the basis functions of \(V^{\fancyscript{T}_h,p}\) where the mesh is uniform, \(p=(1,4,1,3)\), and \(\dim V^{\fancyscript{T}_h,p}=10\). Indeed, we can identify five piecewise linear basis functions (two of them vanishing outside the first and the last mesh subinterval, respectively), two quadratic and two cubic polynomials, and one quartic polynomial with support in \(I_2\). Constant functions lie in the linear hull of the piecewise linear functions.

Vectors \(p\) will also be called \(p\)-distributions in this paper.

The approximate BVP (ABVP) reads: Find \(u_{h,p}\in V^{\fancyscript{T}_h,p}\) such that
$$\begin{aligned} \int _{-1}^1 \left(u_{h,p}^{\prime }v_{h,p}^{\prime }+u_{h,p}v_{h,p}\right)\,\text{ d}x =\int _{-1}^1 fv_{h,p}\,\text{ d}x + bv_{h,p}(1)-av_{h,p}(-1) \end{aligned}$$
holds for any \(v_{h,p}\in V^{\fancyscript{T}_h,p}\).
Since the exact solution is known, we can define and calculate the error as
$$\begin{aligned} \varPsi (u_{h,p})=\left\Vert u - u_{h,p}\right\Vert_1, \end{aligned}$$
where \(u_{h,p}\) and \(u\) are the respective solutions of (6) and (4), see also (5).

2.3 Inner optimization problem

As already indicated in Sect. 1, we will strive for an optimal \(hp\)-FE space that produces a minimum of \(\varPsi (u_{h,p})\). The first step towards this goal is to optimize the FE mesh.

To this end, and to prevent mesh degeneration, we define \(\fancyscript{M}_{\text{ ad}}^p\), the set of admissible meshes relevant for a given degree distribution \(p\). A mesh is called admissible if the length of \(I_k\), \(k=1,\ldots ,m\), is greater than or equal to \(\epsilon \), a small positive parameter (e.g., \(10^{-4}\)).

Next, for a given degree distribution \(p\), we introduce \({\fancyscript{U}_\text{ ad}^p}\), the set of all solutions of ABVP (6) that can be obtained via admissible meshes. That is,
$$\begin{aligned} {\fancyscript{U}_\text{ ad}^p}=\{ u_{h,p}\in V^{\fancyscript{T}_h,p}:\ V^{\fancyscript{T}_h,p}{ \text{ is} \text{ defined} \text{ on} }\ \fancyscript{T}_h\in \fancyscript{M}_{\text{ ad}}^p, u_{h,p}{ \text{ solves} }\ {(6)}\}. \end{aligned}$$
Let us, in this subsection, fix the degree distribution \(p\); then the number of mesh nodes in \(\fancyscript{T}_h\in \fancyscript{M}_{\text{ ad}}^p\) is fixed, too.
We are ready to define the inner optimization problem: Find \(\fancyscript{T}_\text{ opt}^p\in \fancyscript{M}_{\text{ ad}}^p\) such that
$$\begin{aligned} \varPsi (\widehat{u}_{p})=\min _{u_{h,p}\in {\fancyscript{U}_\text{ ad}^p}} \varPsi (u_{h,p}), \end{aligned}$$
where \(\widehat{u}_{p}\in V^{\fancyscript{T}_\text{ opt},p}\) is the solution of the ABVP that is characterized by the fixed degree distribution \(p\) and the optimal mesh \(\fancyscript{T}_\text{ opt}^p\).

Remark 1

As is known from FEM algorithms, problem (6) leads to a system of linear algebraic equations represented by a matrix \(K\) and a right-hand side column vector \(r\). Both \(K\) and \(r\) depend continuously on the positions of the mesh nodes, and so does the approximate solution \(u_{h,p}\). Since \(\varPsi \) is continuous with respect to \(u_{h,p}\), problem (9) can be interpreted as the minimization of a continuous function of several variables (the positions of the mesh nodes) over a compact set. As a consequence, at least one solution exists.
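As a concrete illustration of Remark 1, the sketch below assembles \(K\) and \(r\) for piecewise-linear elements (\(p_k=1\)) and solves (6) for the model data of Sect. 2.1. It is a simplified stand-in written in Python, not the paper's MATLAB solver, and it handles only degree-one elements.

```python
import numpy as np

def solve_p1(nodes, f, a, b):
    """Piecewise-linear FE solution of (6): assemble K (stiffness + mass)
    and r (load + Neumann terms), then solve K c = r for the nodal values."""
    n = len(nodes)
    K = np.zeros((n, n))
    r = np.zeros(n)
    gx, gw = np.polynomial.legendre.leggauss(8)       # 8-point Gauss rule
    for k in range(n - 1):
        x0, x1 = nodes[k], nodes[k + 1]
        h = x1 - x0
        # local matrix of the bilinear form: stiffness [[1,-1],[-1,1]]/h
        # plus mass (h/6) [[2,1],[1,2]]
        K[k:k+2, k:k+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h \
                         + (h / 6) * np.array([[2.0, 1.0], [1.0, 2.0]])
        xq = 0.5 * (x0 + x1) + 0.5 * h * gx           # quadrature points in I_k
        wq = 0.5 * h * gw
        r[k]     += np.sum(wq * f(xq) * (x1 - xq) / h)  # f against left hat
        r[k + 1] += np.sum(wq * f(xq) * (xq - x0) / h)  # f against right hat
    r[0]  -= a                                        # - a v(-1)
    r[-1] += b                                        # + b v(1)
    return np.linalg.solve(K, r)

# Model data of Sect. 2.1: u = arctan(20x), f computed by hand from (1)
f = lambda x: 16000 * x / (1 + 400 * x**2)**2 + np.arctan(20 * x)
nodes = np.linspace(-1.0, 1.0, 41)
c = solve_p1(nodes, f, 20/401, 20/401)                # c[i] ~ u_{h,p}(x_i)
print(np.max(np.abs(c - np.arctan(20 * nodes))))      # nodal error (small)
```

Since \(u\) is odd and \(a=b\), the computed nodal values are antisymmetric about \(x=0\), which is a convenient correctness check.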

2.4 Outer optimization problem

The ultimate goal is to minimize the error with respect to \(N\), a given number of DOFs (the dimension of \(V^{\fancyscript{T}_h,p}\)) that we have at our disposal. Generally, we can spend DOFs either on lowering the polynomial degrees and adding nodes to a current mesh, or on increasing the polynomial degrees and deleting nodes from the current mesh.

To formulate the idea in mathematical terms, we begin with defining \(\fancyscript{P}_N\), the set of all polynomial distributions \(p\) such that the corresponding spaces \(V^{\fancyscript{T}_h,p}\) have the dimension equal to \(N\). Let us recall that \(p\) uniquely determines both the number of mesh nodes of \(\fancyscript{T}_h\) and \(\dim V^{\fancyscript{T}_h,p}\).

To give an example, let us assume \(N=6\). Then
$$\begin{aligned} \fancyscript{P}_6&= \{(1,1,1,1,1), (2,1,1,1), (1,2,1,1), (1,1,2,1), (1,1,1,2), \nonumber \\&(2,2,1), (2,1,2), (1,2,2), (3,1,1), (1,3,1), (1,1,3),\nonumber \\&(3,2), (2,3),(4,1), (1,4), (5)\} \end{aligned}$$
comprises 16 elements, that is, \(\left| \fancyscript{P}_6 \right|=16\).
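Since \(\dim V^{\fancyscript{T}_h,p}=1+\sum p_k\), the set \(\fancyscript{P}_N\) consists of the compositions of \(N-1\) with parts capped by the maximum degree 10 (Sect. 2.2). A small enumeration sketch (our addition) reproduces \(\left| \fancyscript{P}_6 \right|=16\) as well as the cardinalities \(\left| \fancyscript{P}_{12} \right|=1023\) and \(\left| \fancyscript{P}_{15} \right|=8172\) quoted in Sect. 4:

```python
def p_distributions(N, p_max=10):
    """All degree vectors p with 1 + sum(p) == N and 1 <= p_k <= p_max,
    i.e. the set P_N, enumerated as capped compositions of N - 1."""
    def compositions(n):
        if n == 0:
            yield ()
            return
        for first in range(1, min(n, p_max) + 1):
            for rest in compositions(n - first):
                yield (first,) + rest
    return list(compositions(N - 1))

assert len(p_distributions(6)) == 16          # matches (10)
assert (1, 3, 1) in p_distributions(6)        # present in the list (10)
assert len(p_distributions(12)) == 1023       # cf. Fig. 4
assert len(p_distributions(15)) == 8172       # cf. Fig. 6
```

Without the degree cap there would be \(2^{N-2}\) compositions; the cap first bites at \(N=12\), removing the single distribution \((11)\).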
We are able to define the outer optimization problem: For a fixed \(N\), find \(\widehat{p}\in \fancyscript{P}_N\) such that
$$\begin{aligned} \varPsi (\widehat{u}_{\widehat{p}}) =\min _{p\in \fancyscript{P}_N} \varPsi (\widehat{u}_{p}), \end{aligned}$$
where \(\widehat{u}_{p}\) is determined by (9) and \(\widehat{u}_{\widehat{p}}\) is the solution of the ABVP (6) characterized by \(\widehat{p}\) and the corresponding optimal mesh \(\fancyscript{T}_\text{ opt}^{\widehat{p}}\); see the inner optimization problem (9).

3 Computational issues

Problem (11) is a combinatorial problem, and its complexity rapidly increases with the number of DOFs; see Table 1. We observe that \(\left| \fancyscript{P}_N \right|\) grows exponentially at the beginning; then, when the maximum admissible polynomial degree is reached (see Sect. 2.2), the growth rate is slightly reduced.
Table 1

The cardinality of \(\fancyscript{P}_N\) increases with \(N\), the total number of DOFs

Problem (11) allows for parallelization because it can be divided into up to \(\left| \fancyscript{P}_N \right|\) independently solvable problems (9).

To solve (9), where \(p\) is fixed, the MATLAB\(^{\textregistered }\) Optimization Toolbox\(^\text{ TM}\) function fmincon was employed. This routine is designed to find a (local) minimum of a constrained multivariable function. To increase the chance of finding a global minimum, several fmincon runs were performed from different starting points, i.e., from meshes with a fixed number of nodes but with different initial positions of the nodes. The routine uses the gradient of the objective function \(\varPsi \). Although it would be possible to derive a sensitivity-analysis-based algorithm for the gradient calculation, the gradient was calculated approximately via numerical differentiation. Frequent evaluations of \(\varPsi (u_{h,p})\) were served by calling an ABVP solver and a routine evaluating \(\varPsi (u_{h,p})\), both programmed in the MATLAB\(^{\textregistered }\) environment.
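An analogous open-source sketch of the inner problem (9) can be written with scipy.optimize.minimize in place of fmincon. To stay self-contained, the sketch below minimizes the \(H^1\) error of the piecewise-linear interpolant of \(u\), a cheap stand-in for \(\varPsi (u_{h,p})\) rather than the paper's true FE error, subject to the admissibility constraint \(x_k - x_{k-1} \ge \epsilon \) of Sect. 2.3.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

u  = lambda x: np.arctan(20 * x)        # exact solution (5)
du = lambda x: 20.0 / (1 + 400 * x**2)  # u'

def psi_interp(inner):
    """H1-norm error of the piecewise-linear interpolant of u on the mesh
    -1 < inner[0] < ... < inner[-1] < 1 (a stand-in for Psi)."""
    nodes = np.concatenate(([-1.0], np.sort(inner), [1.0]))
    err2 = 0.0
    for x0, x1 in zip(nodes[:-1], nodes[1:]):
        s = (u(x1) - u(x0)) / (x1 - x0)          # slope of the interpolant
        g = lambda x: (du(x) - s)**2 + (u(x) - (u(x0) + s * (x - x0)))**2
        err2 += quad(g, x0, x1, limit=200)[0]
    return np.sqrt(err2)

eps = 1e-4                                       # minimum element length
m = 4                                            # number of elements
x0 = np.linspace(-1, 1, m + 1)[1:-1]             # uniform starting mesh
cons = {'type': 'ineq',                          # all element lengths >= eps
        'fun': lambda v: np.diff(np.concatenate(([-1.0], v, [1.0]))) - eps}
res = minimize(psi_interp, x0, constraints=cons, method='SLSQP')
print(res.x, res.fun)    # the optimized inner nodes cluster near x = 0
```

As in the paper's fmincon runs, the gradient is obtained by numerical differentiation (SLSQP's default), and only a local minimum is guaranteed; repeated runs from different starting meshes raise the chance of reaching the global one.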

Windows HPC Server 2008 R2 was employed to solve the outer optimization problem (11) in parallel on a cluster of multi-core personal computers. Although up to 200 cores shared the burden of computation, solving (11) proved to be a time-consuming task whose completion took days if \(N\) exceeded 13.

Since the exact solution (5) is an odd function, the optimal \(hp\)-FEM setting should exhibit some symmetry or mirroring that can be used to check, at least partly, the correctness of calculated results; see the next section. Function (5) is monotone on \([-1,1]\), rather rapidly increasing near \(0\) and slowly increasing near points \(-1\) and \(1\).

Obtaining the solution of problem (11) is not the final step of the analysis. We are concerned with several questions, such as: If \(p\ne \widehat{p}\), how large is the difference \(\varPsi (\widehat{u}_{p}) - \varPsi (\widehat{u}_{\widehat{p}})\)? Is the difference significant? For a fixed \(p\), is the optimal mesh reachable from a uniform initial mesh by a descent method? How much can \(\varPsi ({u}_{p})\) be reduced by the mesh optimization?

4 Computational results

To better understand the questions listed in the previous paragraph, let us take a look at Fig. 2 (top). By taking (10) into account, we observe that \(\widehat{p} = (1,3,1)\) [\(p\)-distribution no. 10 in Fig. 2 and (10)] solves the outer minimization problem (11). Indeed, for the initial (i.e., uniform) mesh, the value of \(\varPsi \) is equal to \(3.276\), while \(\varPsi (\widehat{u}_{\widehat{p}})=1.327\). However, for \(p=(1,2,1,1)\) as well as \(p=(1,1,2,1)\) (cases no. 3 and 4, respectively), we have \(\varPsi (\widehat{u}_{{p}})= 1.389\). Cases (\(p\)-distributions) 3 and 4 are, in fact, equivalent. Indeed, the respective inner optimal nodes \(x_1\), \(x_2\), \(x_3\) are
$$\begin{aligned} -0.134,\ 0.0296,\ 0.0926 \text{ and} -0.0926,\ -0.0296,\ 0.134. \end{aligned}$$
In both cases, the initial value of \(\varPsi \) is equal to \(4.096\) for the uniform mesh.
Fig. 2

Comparison of initial and minimum values of \(\varPsi \) obtained via the optimization of uniform initial meshes; \(N=6\), \(\left| \fancyscript{P}_N \right|=16\), the \(p\)-distributions are ordered in accordance with (10). Initial and minimum value of \(\varPsi \) (top). The ratio between the minimum and initial value of \(\varPsi \) (bottom)

Case 16 represents \(p=(5)\), that is, a fifth order polynomial defined on \([-1,1]\) without any inner node; no mesh optimization is possible.

The minimum errors in cases 1 (piecewise linear functions), 3, 4, and 10 are almost equal if assessed from the viewpoint of practical computation.

Figure 2 (bottom) shows the ratio between the minimum and initial value of \(\varPsi \).

One can also ask whether minimization runs starting from random meshes outperform a single run starting from a uniform mesh. In our computations, the number of random meshes was rather low and equalled \(2(m-1)\). Although, in general, random-mesh optimization results were at least as good as those obtained from uniform initial meshes, the opposite can also happen, as Fig. 3 shows; see the sixth \(p\)-distribution. For the ninth \(p\)-distribution, on the other hand, the uniform-initial-mesh result was further improved by repeated minimizations starting from random meshes.
Fig. 3

Minimum value of \(\varPsi \) after minimization starting from uniform and random initial meshes; \(N=6\), \(\left| \fancyscript{P}_N \right|=16\)

The latter was observed far more frequently. If \(N=13\), for example, then, in 36 cases, the minimum obtained from an optimized uniform mesh was noticeably less than the minimum obtained by repeated random mesh optimization. The opposite was true in 954 cases. In the remaining 1,055 cases, the difference did not exceed 1% and the minima were considered computationally equal. For \(N=15\), the respective numbers were 149 and 4,387 (if the tolerated difference is equal to 1%) or \(122\) and \(4,090\) (tolerated difference 5%).

Since \(\left| \fancyscript{P}_N \right|\) increases rapidly with growing \(N\), we have to abandon detailed graphs and content ourselves with summarizing graphs where individual \(p\)-distributions are not exactly identifiable.

Figure 4 depicts both the initial values of \(\varPsi \) obtained on uniform meshes and the smallest values among the minima found in the minimization runs starting from uniform as well as random meshes.
Fig. 4

\(N=12\), \(\left| \fancyscript{P}_N \right|=1023\). The values of \(\varPsi \) obtained on uniform meshes before minimization (top). The values of \(\varPsi \) after minimization comprising runs starting from uniform and random meshes (bottom)

To give some indication of the correspondence between the position on the horizontal axis and the polynomial degrees that form \(p\in \fancyscript{P}_{12}\), let us say that degree 3 together with 1 and 2 appears between positions \(145\) and \(504\) inclusive. Polynomials of the fourth order occupy positions \(505\)–\(773\) inclusive. Polynomials of the fifth order are placed between \(774\) and \(912\) inclusive. Sixth order polynomials stand between \(913\) and \(976\) inclusive. Next, there are \(28\) polynomials of the seventh order, \(12\) \(p\)-distributions containing degree \(8\), five \(p\)-distributions containing degree \(9\), and only two \(p\)-distributions of the tenth order, that is, \(p=(1,10)\) and \(p=(10,1)\).

We observe in Fig. 5 (top) that the error (7) is accumulated mostly in the interval \([3,3.6]\) if the mesh is uniform. Figure 4 (top) shows patterns clearly indicating that unless the \(p\)-distribution is properly chosen, the error will be rather large on fixed uniform meshes, that is, a mediocre performance of the FEM will be observed. For uniform meshes, there are not many proper \(p\)-distributions in \(\fancyscript{P}_{12}\); see also Fig. 5 (top).
Fig. 5

\(N=12\), \(\left| \fancyscript{P}_N \right|=1023\). All the \(p\)-distributions are grouped according to the value of \(\varPsi \) into 20 bins that uniformly cover the entire range of \(\varPsi \). Histogram of values of \(\varPsi \) on uniform meshes; no minimization of \(\varPsi \) (top). Histogram showing the number of the \(p\)-distributions related to the minimum values of \(\varPsi \) obtained by mesh optimization starting from uniform as well as random meshes (bottom)

The mesh optimization reduces both the error \(\varPsi \) and its sensitivity to a proper \(p\)-distribution. Indeed, a substantial number of \(p\)-distributions can result in a comparable error; see Figs. 4 (bottom) and  5 (bottom).

Figures 6 and  7 further confirm the above observations.
Fig. 6

\(N=15\), \(\left| \fancyscript{P}_N \right|=8172\). The values of \(\varPsi \) obtained on uniform meshes before minimization (top). The values of \(\varPsi \) after minimization comprising runs starting from uniform and random meshes (bottom)
Fig. 7

\(N=15\), \(\left| \fancyscript{P}_N \right|=8172\). All the \(p\)-distributions are grouped according to the value of \(\varPsi \) into 20 bins that uniformly cover the entire range of \(\varPsi \). Histogram of values of \(\varPsi \) on uniform meshes; no minimization of \(\varPsi \) (top). Histogram showing the number of the \(p\)-distributions related to the minimum values of \(\varPsi \) obtained by mesh optimization starting from uniform as well as random meshes (bottom)

Figure 8 illustrates the rate of convergence of the \(hp\)-FEM and the \(p\)-FEM. The former graph originates from the values of \(\varPsi (\widehat{u}_{\widehat{p}})\), see the outer optimization problem (11), whereas the latter is based on the minimum values of \(\varPsi \) on uniform meshes, that is, on the \(p\)-distributions that minimize \(\varPsi \) on uniform meshes.
Fig. 8

The dependence of the values of \(\varPsi \) on \(N\) if (a) the mesh is uniform and only \(p\) is optimized, (b) both \(p\) and the mesh are optimized

Drawing inspiration from [10], we summarize the results of optimization in Fig. 9.
Fig. 9

Optimal \(hp\)-settings. For the optimal \(p\)-distributions, the corresponding optimal meshes are depicted; the crosses \(\times \) mark the mesh nodes. The vertical position of each mesh is determined by the relevant value of \(\varPsi (\widehat{u}_{\widehat{p}})\). For example, if \(N=5\) (five DOFs), then the polynomial degree distribution \(\widehat{p}=(1,1,1,1)\) and the mesh given by \(x_0=-1\), \(x_1=-0.1172\), \(x_2=-0.0479\), \(x_3=0.0625\), and \(x_4=1\) result in the minimum of \(\varPsi \), that is, \(\varPsi (\widehat{u}_{\widehat{p}})=1.898\). Due to the symmetry of \(u\), the same value of \(\varPsi \) is obtained if \(x_1=-0.0625\), \(x_2=0.0479\), and \(x_3=0.1172\); only the former mesh is depicted. For 15 DOFs, \(\widehat{p}=(2,5,1,5,1)\) and \(\varPsi (\widehat{u}_{\widehat{p}})=0.140\)

5 Conclusions

Solving the outer optimization problem (11) is an excessively demanding task. Although our algorithms might be made more efficient and the entire code rewritten in C/C++ or FORTRAN to run faster, one can hardly expect a speed-up by a factor greater than 100. This corresponds to only 6–7 more DOFs, since one added DOF almost doubles the cardinality of \(\fancyscript{P}_N\) (see Table 1) and, for some \(p\in \fancyscript{P}_N\), also increases the number of mesh nodes and, consequently, the number of generated random meshes.

Massive parallelization would certainly allow for more DOFs because problem (11) is completely parallelizable and the problems (9) can be solved fully independently. But the gain would be limited, too. Indeed, assuming that 200,000 cores are loaded instead of 200, we get a speed-up factor of 1,000, which represents at most 10 additional DOFs.
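The arithmetic behind these estimates is simple: since \(\left| \fancyscript{P}_N \right|\) roughly doubles with each added DOF, a speed-up by a factor \(s\) buys about \(\log _2 s\) extra DOFs. A one-line check:

```python
import math

# Doubling of |P_N| per added DOF: a speed-up factor s buys ~log2(s) DOFs.
print(math.log2(100))    # between 6 and 7, i.e. 6-7 extra DOFs
print(math.log2(1000))   # just under 10, i.e. at most 10 extra DOFs
```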

It is questionable to claim that the minima found during our mesh optimization process are satisfactory approximations of the true global minima. Even Fig. 10, which originates from meshes with only two moving nodes, indicates that the search for global minima might be a challenging task whose completion with sufficient confidence would further add to the computational complexity of problem (11).
Fig. 10

The surface and contours defined by the values of \(\varPsi (u_{h,p})\) where \(p=(2,2,2)\) and meshes \(\fancyscript{T}_h\) are determined by nodes \(x_1\) and \(x_2\), where \(-1<x_1<x_2<1\)

We are well aware that it is inadmissible to draw general conclusions about the \(hp\)-FEM on the basis of solving one boundary value problem whose exact solution is, in addition, smooth. The following thoughts are therefore limited to our particular BVP, and their generalization has to be taken with reservations.

It seems to be a good strategy to pay more attention to mesh optimization than to polynomial degree optimization. Although the latter must not be neglected, Figs. 4–7 suggest that for many non-optimal degree distributions an optimized mesh can substantially reduce the error. However, it is necessary to note that, in mesh adaptation algorithms, meshes are usually optimized by halving selected subintervals. A question arises whether this approach leads to meshes that are close to the optimal meshes. Figure 8 indicates that the mesh optimality is not a sensitive issue if \(p\) is optimal or almost optimal. We can conclude that a coupling of a reasonable mesh adaptation algorithm and a satisfactory (though not optimal) \(p\) adaptation algorithm will result in a fast convergence rate as expected in the \(hp\)-FEM.


Acknowledgments

The first author is grateful to Dr. Richard (Dick) Haas for many fruitful discussions.

Copyright information

© Springer-Verlag Wien 2012