1 Introduction

Many problems encountered in science and engineering, for example in physics, chemistry, biology, mechanics, astronomy, population dynamics, resource management, and economics, lead to mathematical models in the form of differential equations. In general, analytical expressions for the solutions of such practical problems do not exist or are difficult to find. Therefore, it is necessary to study numerical methods for solving differential equations, that is, to compute approximate values \(y_{i}\) of the exact solution \(y ( x_{i} )\) of a differential equation at discrete points \(x_{i}, i = 0,1, \ldots \) of the solution domain.

Over the years, many numerical methods have been proposed for solving ODEs [1], including single-step and multi-step methods. Single-step methods include the first-order Euler method (EM) [2], the second-order Runge–Kutta (R–K) method inspired by Taylor expansion, the Suen third-order R–K method (Suen-R-K3), the classical fourth-order R–K method (R-K4) [3,4,5], etc. In addition, to obtain higher accuracy, many further schemes have been proposed, such as linear multi-step methods based on numerical integration, including the implicit Adams formulas [6], methods based on Taylor expansion [7], predictor–corrector algorithms [8], as well as shooting methods [9], difference methods [10], etc. Numerical methods for solving boundary value problems (BVPs) are well studied and achieve high calculation accuracy, but their execution time grows rapidly as the sample size increases.

With the development of artificial intelligence and computer technology, more and more researchers have developed a keen interest in neural network methods. Neural networks have been used in many fields such as pattern recognition [11], graphics processing [12], risk assessment [13], control systems [14], forecasting [15,16,17,18], and classification [19], showing wide application prospects. Owing to the function approximation capabilities of neural networks [20,21,22], a number of neural network models have been developed for solving differential equations. The neural network methods for solving differential equations mainly include the following categories: multilayer perceptron neural networks [23,24,25,26,27,28], radial basis function neural networks [29,30,31], multi-scale radial basis function neural networks [32,33,34,35], cellular neural networks [36, 37], finite element neural networks [38,39,40,41,42,43,44,45,46], and wavelet neural networks [28]. The main research focuses on two aspects: the construction of the approximate solution and the weight training algorithm.

Approximate solutions of differential equations are often constructed by selecting different activation functions: Meade and Fernandez [47, 48] used a hard limit function as the activation function to construct a neural network model; Lagaris and Likas [23] proposed that multi-layer perceptrons can be used to construct approximate solutions; a hybrid technique for constructing the neural network was studied by Tsoulos [49]; Mall and Chakraverty [50] used Legendre polynomials as activation functions to construct an approximate solution; Xu Liying, Wen Hui, and Zeng Zhezhao [51] proposed using a triangular basis function as the activation function to construct approximate solutions of ODEs. Regarding research on network weight training and optimization algorithms, we mention Riedmiller and Braun [52], who proposed the RPROP algorithm based on local adaptation; Lagaris and Likas [23] proposed using a DE-evolutionary algorithm to train the weights in a neural network model of partial differential equations; Malek and Shekari Beidokhti [53] presented an optimization algorithm for a hybrid neural network model; Rudd and Ferrari [54] analyzed the constrained integral method (CINT), which combines the classical Galerkin method with a constrained backpropagation process; Lucie and Peter [55] proposed genetic algorithms for solving a neural network model.

This paper presents a novel Legendre neural network method with an improved extreme learning machine algorithm for solving several types of linear and nonlinear differential equations. Candidate solutions are expressed by a Legendre network. With the boundary conditions taken into account, the problem of solving differential equations is transformed into that of solving nonlinear algebraic equation systems. We call the resulting method for training the network weights the improved extreme learning machine (IELM) algorithm. Convergence analysis, numerical experiments, and a comparative study show the superiority of the present method over classical methods and methods from the recent literature. We believe that the proposed method may be the first to combine a Legendre neural network model with the IELM algorithm for solving differential equations.

The aim and motivation of the present work is to propose a new Legendre neural network with the IELM algorithm for solving differential equations such as linear or nonlinear ordinary differential equations, systems of ordinary differential equations, and singular initial value Emden–Fowler equations. The IELM algorithm is used here to train the network weights. The advantages of the proposed approach are as follows:

  • It is a single-hidden-layer neural network: since the input layer weights are chosen randomly, only the output layer weights need to be trained.

  • It is easy to implement and runs quickly.

  • The improved extreme learning machine algorithm is an unsupervised learning algorithm, and no iterative optimization technique is required.

  • Its calculation accuracy is higher than that of other numerical methods reported in the recent literature.

The organization of this paper is as follows: we give a description of the problem to be solved in the next section. Section 3 describes the construction of the Legendre neural network for approximating and solving ODEs. The IELM algorithm for training the network weights is proposed, and its steps are summarized, in Sect. 4. In Sect. 5, the convergence of the proposed Legendre network is analyzed. We provide numerical results to verify the effectiveness of the algorithm and its superior performance in Sect. 6. Finally, in Sect. 7 we present some conclusions and directions for future research.

2 Description of the problem

We first introduce the general forms of the ordinary differential equations considered in this paper.

2.1 Second-order ordinary differential equations

We usually describe two-point BVP of second-order ODEs in the following form:

$$ \textstyle\begin{cases} y'' = f ( x,y,y' ), \\ y ( a ) = \alpha_{1},\qquad y ( b ) = \alpha_{2}, \end{cases}\displaystyle a \le x \le b. $$
(1)

2.2 First-order system of ordinary differential equations

Let us use the following formula to represent the first-order SODE:

$$ \textstyle\begin{cases} y'_{i} = f_{i} ( x,y_{1},y_{2}, \ldots,y_{n} ), \\ y_{i} ( a ) = \alpha_{i}, \end{cases}\displaystyle a \le x \le b\ ( i = 1,2, \ldots,n ). $$
(2)

A single first-order ODE is a particular case of the system of ordinary differential equations (2).

2.3 Higher-order ODEs and higher-order SODE problem

Higher-order ODEs have the general form as below:

$$ \textstyle\begin{cases} y^{ ( n )} = f ( x,y,y',y'', \ldots,y^{ ( n - 1 )} ), \\ y ( a ) = \alpha_{0},\qquad y' ( a ) = \alpha_{1}, \ldots,y^{ ( n - 1 )} ( a ) = \alpha_{n - 1}, \end{cases}\displaystyle a \le x \le b. $$
(3)

If we make the transformation \(y_{1} = y,y_{2} = y', \ldots,y_{n} = y^{ ( n - 1 )}\), the higher-order ODE is transformed into the following SODE:

$$ \textstyle\begin{cases} y'_{1} = y_{2}, \\ y'_{2} = y_{3}, \\ \vdots \\ y'_{n} = f ( x,y,y', \ldots,y^{ ( n - 1 )} ) = f ( x,y_{1},y_{2}, \ldots,y_{n} ), \end{cases} $$
(4)

where the initial conditions are \(y_{1} ( a ) = \alpha_{0},y_{2} ( a ) = \alpha_{1}, \ldots,y_{n} ( a ) = \alpha_{n - 1}\).

Consider a higher-order SODE composed of two higher-order ODEs in the independent variable t:

$$ \textstyle\begin{cases} x^{ ( m )} = f ( t,x,x',x'', \ldots,x^{ ( m - 1 )},y,y', \ldots,y^{ ( n - 1 )} ), \\ y^{ ( n )} = g ( t,x,x',x'', \ldots,x^{ ( m - 1 )},y,y',y'', \ldots,y^{ ( n - 1 )} ), \\ x ( a ) = \alpha_{0},\quad x' ( a ) = \alpha_{1}, \ldots,x^{ ( m - 1 )} ( a ) = \alpha_{m - 1}, \\ y ( a ) = \alpha_{m},\quad y' ( a ) = \alpha_{m + 1}, \ldots,y^{ ( n - 1 )} ( a ) = \alpha_{m + n - 1}, \end{cases}\displaystyle a \le t \le b $$
(5)

If we select the state variables \(y_{1} = x,y_{2} = x', \ldots,y_{m} = x^{ ( m - 1 )},y_{m + 1} = y,y_{m + 2} = y', \ldots,y_{m + n} = y^{ ( n - 1 )}\), then the above system of higher-order ODEs can be expressed as:

$$ \textstyle\begin{cases} y'_{1} = y_{2}, \\ y'_{2} = y_{3}, \\ \vdots\\ y'_{m} = f ( t,y_{1},y_{2}, \ldots, y_{m},y_{m + 1}, \ldots,y_{m + n} ), \\ y'_{m + 1} = y_{m + 2}, \\ \vdots \\ y'_{m + n} = g ( t,y_{1},y_{2}, \ldots, y_{m},y_{m + 1}, \ldots,y_{m + n} ), \end{cases}\displaystyle a \le t \le b $$
(6)

with initial conditions \(y_{i} ( a ) = \alpha_{i - 1},i = 1,2, \ldots,m + n\).

Using the same notation as in [56], we can describe the above linear or nonlinear ordinary differential equations in the following general form:

$$ \mathcal{L}\mathbf{y} ( x ) = \mathbf{f} ( x ) \quad \mbox{in }{I} $$
(7)

with the initial or boundary conditions

$$ \mathcal{B}\mathbf{y} ( x ) = \boldsymbol{\alpha}\quad \mbox{on }\partial {I}, $$
(8)

where \(\mathcal{L}\) and \(\mathcal{B}\) are differential operators on the interval I; \(\mathbf{y} ( x )\) denotes the vector to be found, \(\mathbf{f} ( x )\) is a linear or nonlinear source term, which depends on \(x, \mathbf{y} ( x )\) and its derivatives; α denotes the value of \(\mathbf{y} ( x )\) and its derivatives at the end points of interval I.

Having established the ODE problem, differential equations (7) and (8) can be transformed into a constrained optimization problem of the following form:

$$\begin{aligned} \begin{aligned} &\mbox{minimize}\\ &\quad \arg \min \bigl\Vert \mathcal{L}\mathbf{y} ( x ) - \mathbf{f} ( x ) \bigr\Vert \end{aligned} \end{aligned}$$
(9)
$$\begin{aligned} \begin{aligned} &\mbox{subject to}\\ &\quad \big\| \mathcal{B}\mathbf{y} ( x ) - \boldsymbol{\alpha} \big\| = 0. \end{aligned} \end{aligned}$$
(10)

3 Legendre basis function neural network for approximating and solving ODEs

3.1 Legendre basis function neural networks and approximation

In this subsection, employing the recursive properties of Legendre polynomials, we will discuss construction of approximate solutions based on Legendre basis function neural network.

Theorem 1

Suppose that the vector \(\mathbf{P} ( x )\) is defined as \(\mathbf{P} ( x ) = [ P_{0} ( x ),P_{1} ( x ), \ldots,P_{N + 1} ( x ) ]\), in which \(P_{n} ( x ), n = 0,1, \ldots,N + 1\) is the nth order Legendre polynomial in the interval \([ 0,1 ]\), and let \(\mathbf{P'} ( x )\) be defined as \(\mathbf{P'} ( x ) = [ P'_{0} ( x ),P'_{1} ( x ), \ldots,P'_{N + 1} ( x ) ]\), where \(P_{n}^{\prime} ( x ), n = 0,1, \ldots,N + 1\) is the derivative of the nth order Legendre polynomial \(P_{n} ( x )\). Then \(\mathbf{P'} ( x ) = \mathbf{P} ( x )\mathbf{M}\), where M is the Legendre operational matrix given by

$$ \mathbf{M} = \begin{bmatrix} 0 & 1 & 0 & 1 & 0 & 1 & \cdots \\ 0 & 0 & 3 & 0 & 3 & 0 & \cdots \\ 0 & 0 & 0 & 5 & 0 & 5 & \cdots \\ 0 & 0 & 0 & 0 & 7 & 0 & \cdots \\ 0 & 0 & 0 & 0 & 0 & 9 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\ 0 & 0 & 0 & 0 & 0 & 0 & 2N + 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}_{ ( N + 2 ) \times ( N + 2 )}. $$

Proof

The derivatives of Legendre polynomials satisfy the recurrence relation

$$ P'_{n + 1} ( x ) - P'_{n - 1} ( x ) = ( 2n + 1 )P_{n} ( x ), $$
(11)

and from this property, we can easily draw the conclusion. □
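To make Theorem 1 concrete, the following short check (a Python/NumPy sketch, not part of the paper's MATLAB implementation) builds M from the pattern above and verifies \(\mathbf{P}' ( x ) = \mathbf{P} ( x )\mathbf{M}\) numerically for the standard Legendre polynomials with \(a_{n} = 1, b_{n} = 0\):

```python
import numpy as np
from numpy.polynomial import legendre as leg

N = 6
size = N + 2                              # basis P_0, ..., P_{N+1}

# Operational matrix of Theorem 1: column n holds the Legendre coefficients of P'_n,
# i.e. P'_n = (2n-1) P_{n-1} + (2n-5) P_{n-3} + ...
M = np.zeros((size, size))
for n in range(1, size):
    M[n-1::-2, n] = 2 * np.arange(n - 1, -1, -2) + 1

# Verify P'(x) = P(x) M at a few sample points.
x = np.linspace(0.0, 1.0, 7)
P = leg.legvander(x, N + 1)               # P[i, n] = P_n(x_i)
dP = np.column_stack([leg.legval(x, leg.legder(np.eye(size)[n]))
                      for n in range(size)])
print(np.allclose(dP, P @ M))             # True
```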

Theorem 2

For any continuous function \(y: [ a,b ] \to R\) and any \(\varepsilon > 0\), there are a natural number N, constants \(a_{n},b_{n},\beta_{n}\ ( n = 0,1, \ldots,N )\), and Legendre polynomials \(P_{0} ( x ),P_{1} ( x ), \ldots,P_{N} ( x )\) such that the Legendre neural network with \(N + 1\) neurons is given by

$$ y_{\mathrm{LNN}} ( x ) = \sum_{n = 0}^{N} \beta_{n}P_{n} ( a_{n}x + b_{n} ), $$
(12)

\(y_{\mathrm{LNN}}\) is an approximation of y, and

$$ \bigl\Vert y ( x ) - y_{\mathrm{LNN}} ( x ) \bigr\Vert = \Biggl\Vert y ( x ) - \sum_{n = 0}^{N} \beta_{n}P_{n} ( a_{n}x + b_{n} ) \Biggr\Vert < \varepsilon. $$
(13)

3.2 Legendre basis function neural networks for solving ODEs

Legendre basis neural networks consist of three layers: an input layer, a hidden Legendre basis function layer, and an output layer. The output of the Legendre basis neural network for the general differential problem described in (7) is as follows:

$$ \mathbf{y}_{\mathrm{LNN}} ( x ) = \sum_{n = 0}^{N} P_{n} ( a_{n}x + b_{n} )\boldsymbol{ \beta}, $$
(14)

where \(a_{n}\) is a weight connecting input to the nth hidden node, \(b_{n}\) is the bias of the nth hidden node, β is the hidden layer-to-output layer weight vector.

Substituting the approximate solution (14) into (7) and the boundary conditions (8), we obtain a system of equations for the weights β; the new system is

$$\begin{aligned} \begin{aligned} &\mathcal{L}\mathbf{y}_{\mathrm{LNN}} ( x ) = \mathbf{f} ( x ) \quad \mbox{in }{I}, \\ & \mathcal{B}\mathbf{y}_{\mathrm{LNN}} ( x ) = \boldsymbol{\alpha} \quad \mbox{on }\partial {I}. \end{aligned} \end{aligned}$$
(15)

Using a discretization \(\{ x_{i}:x_{i} \in {I},i = 0,1, \ldots,M \}\) of the interval I and defining \(\mathbf{f}_{i} = \mathbf{f} ( x_{i} )\), the weights \(a_{n},b_{n},\boldsymbol{\beta}\) can be found from the following system of equations:

$$ \begin{bmatrix} \mathcal{L} \bigl( \sum_{n = 0}^{N} P_{n} ( a_{n}x_{i} + b_{n} ) \bigr) \\ \mathcal{B} \bigl( \sum_{n = 0}^{N} P_{n} ( a_{n}x_{\mathrm{boundary}} + b_{n} ) \bigr) \end{bmatrix} [ \boldsymbol{\beta} ] = \begin{bmatrix} \mathbf{f}_{i} \\ \boldsymbol{\alpha} \end{bmatrix}. $$
(16)

Let us take the following SODE as an example:

$$ \textstyle\begin{cases} y'_{1} + g_{1} ( x )y_{1} + g_{2} ( x )y_{2} = f_{1} ( x ), \\ y'_{2} + h_{1} ( x )y_{1} + h_{2} ( x )y_{2} = f_{2} ( x ), \\ y_{1} ( a ) = \alpha_{1},\qquad y_{2} ( a ) = \alpha_{2}, \end{cases}\displaystyle a \le x \le b. $$
(17)

Assume that the weight connecting the input to the nth hidden node is 1 and that the bias of the nth hidden node is 0. Then the approximate solutions \(y1_{\mathrm{LNN}},y2_{\mathrm{LNN}}\) of the SODE are given as follows:

$$\begin{aligned} &y1_{\mathrm{LNN}} ( x ) = \sum_{n = 0}^{N} \beta_{1n}P_{n} ( x ) = P ( x )\beta_{1}, \end{aligned}$$
(18)
$$\begin{aligned} &y2_{\mathrm{LNN}} ( x ) = \sum_{n = 0}^{N} \beta_{2n}P_{n} ( x ) = P ( x )\beta_{2}, \end{aligned}$$
(19)

where \(\beta_{1} = [ \beta_{10},\beta_{11}, \ldots,\beta_{1N} ]^{T}\), \(\beta_{2} = [ \beta_{20},\beta_{21}, \ldots,\beta_{2N} ]^{T}\). By Theorem 1, we can rewrite problem (17) as:

$$ \textstyle\begin{cases} ( P ( x )M + g_{1} ( x )P ( x ) )\beta_{1} + g_{2} ( x )P ( x )\beta_{2} = f_{1} ( x ), \\ h_{1} ( x )P ( x )\beta_{1} + ( P ( x )M + h_{2} ( x )P ( x ) )\beta_{2} = f_{2} ( x ), \\ P ( a )\beta_{1} = \alpha_{1}, \\ P ( a )\beta_{2} = \alpha_{2}. \end{cases} $$
(20)

Noting that \(x_{i} = a + \frac{b - a}{M}i, i = 0,1, \ldots,M\), and defining

$$ \mathbf{H} = \begin{bmatrix} w_{11} ( x_{0} ) & w_{12} ( x_{0} ) \\ \vdots & \vdots \\ w_{11} ( x_{M} ) & w_{12} ( x_{M} ) \\ w_{21} ( x_{0} ) & w_{22} ( x_{0} ) \\ \vdots & \vdots \\ w_{21} ( x_{M} ) & w_{22} ( x_{M} ) \\ w_{31} & w_{32} \\ w_{41} & w_{42} \end{bmatrix}_{ ( 2M + 4 ) \times ( 2N + 2 )},\qquad \boldsymbol{\beta} = \begin{bmatrix} \beta_{1} \\ \beta_{2} \end{bmatrix},\qquad \mathbf{T} = \begin{bmatrix} f_{1} ( x_{0} ) \\ \vdots \\ f_{1} ( x_{M} ) \\ f_{2} ( x_{0} ) \\ \vdots \\ f_{2} ( x_{M} ) \\ \alpha_{1} \\ \alpha_{2} \end{bmatrix}_{ ( 2M + 4 ) \times 1}, $$

where

$$\begin{aligned} &w_{11} ( x ) = P ( x )M + g_{1} ( x )P ( x ),\qquad w_{12} ( x ) = g_{2} ( x )P ( x ),\qquad w_{21} ( x ) = h_{1} ( x )P ( x ),\\ & w_{22} ( x ) = P ( x )M + h_{2} ( x )P ( x ),\qquad w_{31} = P ( a ),\\ & w_{32} = w_{41} = ( 0 )_{1 \times ( N + 1 )},\qquad w_{42} = w_{31}, \end{aligned}$$

we can rewrite equation (20) in the form:

$$ \mathbf{H}\beta = \mathbf{T}. $$
(21)

By solving the new system equation (21), the unknown weights of the Legendre neural network are obtained.
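To illustrate the assembly of H and T in (20)–(21), the following Python/NumPy sketch (not the authors' MATLAB code) plugs in the data of Example 9 from Sect. 6, i.e. \(g_{1} = 2, g_{2} = 1, h_{1} = -4, h_{2} = -2, f_{1} ( x ) = \sin x, f_{2} ( x ) = \cos x\) on \([0,2]\) with \(\alpha_{1} = 0, \alpha_{2} = -3\), and solves for the weights with the pseudo-inverse:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N, Mp = 10, 30                               # N + 1 neurons, Mp + 1 collocation points
a, b, alpha1, alpha2 = 0.0, 2.0, 0.0, -3.0   # data of Example 9
x = np.linspace(a, b, Mp + 1)

# Legendre operational matrix of Theorem 1, so that P'(x) = P(x) @ Mop.
size = N + 1
Mop = np.zeros((size, size))
for n in range(1, size):
    Mop[n-1::-2, n] = 2 * np.arange(n - 1, -1, -2) + 1

P = leg.legvander(x, N)                      # P[i, n] = P_n(x_i)
dP = P @ Mop                                 # derivatives of the basis functions
Pa = leg.legvander(np.array([a]), N)         # row vector P(a)
Z = np.zeros_like(Pa)

# Blocks of eq. (20): w11 = P M + g1 P, w12 = g2 P, w21 = h1 P, w22 = P M + h2 P.
H = np.block([[dP + 2.0 * P, 1.0 * P],
              [-4.0 * P,     dP - 2.0 * P],
              [Pa,           Z],
              [Z,            Pa]])
T = np.concatenate([np.sin(x), np.cos(x), [alpha1], [alpha2]])

beta = np.linalg.pinv(H) @ T                 # least-squares solution of H beta = T
beta1, beta2 = beta[:size], beta[size:]

# Compare with the exact solution of Example 9 at the collocation points.
print(np.max(np.abs(P @ beta1 - (2 * np.sin(x) + x))))
print(np.max(np.abs(P @ beta2 - (-3 * np.sin(x) - 2 * np.cos(x) - 2 * x - 1))))
```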

Generally, by using the Legendre basis function neural network, the approximate solution of an ODE can be constructed. Then, substituting the approximate solution and its derivatives for the true solution in the problem, we obtain a system of equations for the network weights; the process is shown in Fig. 1.

Figure 1: Neural network model for SODE based on Legendre polynomial

4 IELM algorithm for training the Legendre neural networks

There are many numerical algorithms for solving the system equation (21). In this paper, following the ELM algorithm proposed by Huang et al. [57], we use the IELM algorithm to train the Legendre network.

Theorem 3

The system equation \(\mathbf{H}\beta = \mathbf{T}\) is solvable in the following several cases:

  1. (I)

    If matrix H is a square invertible matrix, then \(\beta = \mathbf{H}^{ - 1}\mathbf{T}\).

  2. (II)

    If matrix H is rectangular, then \(\beta = \mathbf{H}^{\dagger} \mathbf{T}\), and β is the minimal least-squares solution of \(\mathbf{H}\beta = \mathbf{T}\), that is, \(\beta = \arg \min \Vert \mathbf{H}\beta - \mathbf{T} \Vert \).

  3. (III)

    If H is a singular matrix, then \(\beta = \mathbf{H}^{\dagger} \mathbf{T}\) with \(\mathbf{H}^{\dagger} = \mathbf{H}^{T} ( \lambda \mathbf{I} + \mathbf{HH}^{T} )^{ - 1}\), where \(\lambda\) is a regularization coefficient that can be set according to the specific problem.

Proof

For the proof of the theorem we refer to the related facts about the generalized inverse matrix in matrix theory [58] and the paper by Guang-Bin Huang [57].

According to Theorem 2, and as in the article of Huang et al. [57], when the extreme learning machine (ELM) algorithm is used to solve the neural network model, that is, to solve \(\mathbf{H}\beta = \mathbf{T}\), the number of hidden neurons must be less than or equal to the sample size, that is, \(N \le M\).

However, by matrix analysis, if the matrix H is rectangular, there still exists a β that is the minimal least-squares solution of \(\mathbf{H}\beta = \mathbf{T}\), that is, \(\beta = \arg \min \Vert \mathbf{H}\beta - \mathbf{T} \Vert \). In this case, the number of hidden neurons does not have to be less than or equal to the sample size; we call this improved algorithm for solving \(\mathbf{H}\beta = \mathbf{T}\) the improved extreme learning machine (IELM).
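The three cases of Theorem 3 can be sketched in a few lines of Python/NumPy; the random rectangular H below is purely illustrative and is not tied to any particular differential equation:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 5))     # rectangular H: more equations than unknowns
T = rng.standard_normal(8)

# Case (II): minimal least-squares solution via the Moore-Penrose pseudo-inverse.
beta_ls = np.linalg.pinv(H) @ T     # beta = argmin ||H beta - T||

# Case (III): regularized pseudo-inverse H^T (lambda*I + H H^T)^(-1).
lam = 1e-8
beta_reg = H.T @ np.linalg.solve(lam * np.eye(H.shape[0]) + H @ H.T, T)

# Case (I): for a square invertible H one would simply use np.linalg.solve(H, T).
print(np.max(np.abs(beta_ls - beta_reg)))   # tiny for small lambda
```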

The steps for solving ODEs using Legendre network and IELM algorithm are as follows:

Step 1. Discretize the domain as \(a = x_{0} < x_{1} < \cdots < x_{M} = b\), \(x_{i} = a + \frac{b - a}{M}i, i = 0,1, \ldots,M\), and construct an approximate solution by using Legendre polynomial as an activation function, that is, \(y_{\mathrm{LNN}} ( x ) = \sum_{n = 0}^{N} \beta_{n}P_{n} ( x )\);

Step 2. At discrete points, substitute the approximate solution \(y_{\mathrm{LNN}} ( x )\) and its derivatives into the differential equation and its boundary conditions, and obtain the system equation \(\mathbf{H}\beta = \mathbf{T}\);

Step 3. Solve the system equation \(\mathbf{H}\beta = \mathbf{T}\) by IELM algorithm introduced in Theorem 3, and obtain the network weights \(\beta = \mathbf{H}^{\dagger} \mathbf{T}\), \(\beta = \arg \min \Vert \mathbf{H}\beta - \mathbf{T} \Vert \);

Step 4. Form the approximate solution as \(y_{\mathrm{LNN}} ( x ) = \sum_{n = 0}^{N} \beta_{n}P_{n} ( x ) = \mathbf{P} ( x )\boldsymbol{\beta} \). □
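To make Steps 1–4 concrete, here is a minimal end-to-end Python/NumPy sketch (an illustration, not the authors' MATLAB implementation) applied to the first-order problem of Example 2 in Sect. 6, \(y' + y/5 = e^{-x/5}\cos x\), \(y(0) = 0\) on \([0,2]\), with \(a_{n} = 1\) and \(b_{n} = 0\):

```python
import numpy as np
from numpy.polynomial import legendre as leg

a, b, N, Mp = 0.0, 2.0, 10, 30

# Step 1: discretize the domain and build the Legendre basis at the points x_i.
x = np.linspace(a, b, Mp + 1)
P = leg.legvander(x, N)                            # P[i, n] = P_n(x_i)
Mop = np.zeros((N + 1, N + 1))                     # operational matrix of Theorem 1
for n in range(1, N + 1):
    Mop[n-1::-2, n] = 2 * np.arange(n - 1, -1, -2) + 1
dP = P @ Mop                                       # P'_n(x_i)

# Step 2: substitute y_LNN = P(x) beta into the ODE and the initial condition.
H = np.vstack([dP + 0.2 * P,                       # rows of y' + y/5 at each x_i
               leg.legvander(np.array([a]), N)])   # row enforcing y(0) = 0
T = np.concatenate([np.exp(-x / 5) * np.cos(x), [0.0]])

# Step 3: IELM, i.e. the minimal least-squares solution of H beta = T.
beta = np.linalg.pinv(H) @ T

# Step 4: form the approximate solution and compare with the exact one.
y_lnn = P @ beta
print(np.max(np.abs(y_lnn - np.exp(-x / 5) * np.sin(x))))
```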

5 Convergence analysis

In this section, we will verify the feasibility and convergence of the LNN method in solving differential equations by proving another theorem.

Theorem 4

Given a standard single-layer feedforward neural network with \(n + 1\) hidden nodes and Legendre basis functions \(P_{i} ( x ):R \to R, i = 0,1, \ldots,n\), suppose that the approximate solution of a one-dimensional differential equation is given by (14). Then, for any \(m + 1\) distinct samples \(( \mathbf{x},\mathbf{f} )\) and for any \(a_{n},b_{n}\) randomly chosen from any intervals of R according to any continuous probability distribution, the hidden layer output matrix H of the Legendre network is invertible and \(\Vert \mathbf{H}\beta - \mathbf{T} \Vert = 0\).

Proof

For the Legendre network and any \(m + 1\) arbitrary distinct samples \(( \mathbf{x},\mathbf{f} )\), with \(\mathbf{x} = [ x_{0}, \ldots,x_{m} ]^{T}\), \(\mathbf{f} = [ f_{0}, \ldots,f_{m} ]^{T}\), let us consider the (\(i + 1\))th column \(\mathbf{c} ( b_{i} )\) of the Legendre hidden layer output matrix, \(\mathbf{c} ( b_{i} ) \in R^{m + 1}\), and suppose that \(b_{i} \in I\), where I is an open interval of R, and

$$ \mathbf{c} ( b_{i} ) = \bigl[ P_{i} ( a_{i}x_{0} + b_{i} ),P_{i} ( a_{i}x_{1} + b_{i} ), \ldots,P_{i} ( a_{i}x_{m} + b_{i} ) \bigr]^{T}, $$
(22)

then, following the proof of Huang et al. [57], we can easily prove by contradiction that the vector c does not belong to any subspace whose dimension is less than \(m + 1\).

Since \(a_{i}\) is generated randomly according to a continuous probability distribution, for any \(k \ne k'\) we have \(a_{i}x_{k} \ne a_{i}x_{k'}\). Suppose that c belongs to an m-dimensional subspace and that the vector α is perpendicular to this subspace. Then we have

$$\begin{aligned} \bigl( \boldsymbol{\alpha},\mathbf{c} ( b_{i} ) - \mathbf{c} ( a ) \bigr) ={}& \alpha_{0} \cdot P_{i} ( d_{0} + b_{i} ) + \alpha_{1} \cdot P_{i} ( d_{1} + b_{i} )+\cdots \\ &{}+ \alpha_{m} \cdot P_{i} ( d_{m} + b_{i} ) - c = 0, \end{aligned}$$
(23)

where \(d_{k} = a_{i}x_{k},k = 0,1, \ldots,m\) and \(c = \boldsymbol{\alpha} \cdot \mathbf{c} ( a )\). Without loss of generality, assume \(\alpha_{m} \ne 0\); then (23) can be rewritten as

$$ P_{i} ( d_{m} + b_{i} ) = - \sum _{k = 0}^{m - 1} \gamma_{k} P_{i} ( d_{k} + b_{i} ) + c / \alpha_{m}, $$
(24)

where \(\gamma_{k} = \alpha_{k} / \alpha_{m},k = 0, \ldots,m - 1\). Since the function on the left-hand side of (24) is infinitely differentiable, calculating the derivatives with respect to \(b_{i}\) on both sides, we obtain

$$\begin{aligned} P_{i}^{ ( l )} ( d_{m} + b_{i} ) = - \sum _{k = 0}^{m - 1} \gamma_{k} P_{i}^{ ( l )} ( d_{k} + b_{i} ), \quad l = 1, \ldots,m,m + 1, \ldots, \end{aligned}$$
(25)

where the number of free coefficients \(\gamma_{k}\) is less than the number of equations l, which produces a contradiction; hence c does not belong to any subspace whose dimension is less than \(m + 1\).

This means that for any \(a_{n},b_{n}\) randomly chosen from any intervals of R (such as \(a_{n} = 1,b_{n} = 0\)) according to any continuous probability distribution, the column vectors of H are of full rank with probability one, which validates the above theorem.

Moreover, given any small positive value \(\varepsilon > 0\) and the Legendre activation functions \(P_{i} ( x ):R \to R\), there exists \(n \le m\) (so that the matrix H is rectangular) such that, for m arbitrary distinct samples \(( \mathbf{x},\mathbf{f} )\) and for any \(a_{n},b_{n}\) randomly chosen from any intervals of R according to any continuous probability distribution, we have \(\Vert \mathbf{H}_{m \times n}\boldsymbol{\beta}_{n \times 1} - \mathbf{f}_{m \times 1} \Vert < \varepsilon\). □

6 Numerical results and comparative study

Numerical experiments were conducted to verify the effectiveness and superiority of the Legendre network with the IELM algorithm. The new scheme was tested on linear differential equations (first-order and second-order ODEs and SODEs) and nonlinear differential equations. A comparative study with other approaches, including traditional methods and recent research works, is also described in this section. At the end of this section, we discuss a differential equation appearing in practice (the Emden–Fowler type equation) to further validate the proposed Legendre neural network with the IELM algorithm and show that the method is very encouraging.

All numerical results were obtained using MATLAB R2015a on a computer with an Intel Core i7-6500U CPU, 4 GB of memory, a 512 GB SSD, and the Windows 10 operating system.

We use the mean absolute deviation (MAD) to measure the error of the numerical solution:

$$ \mathrm{MAD} = \frac{1}{m} ( \Delta 1 + \Delta 2 + \cdots + \Delta m ), $$
(26)

where \(\Delta 1,\Delta 2, \ldots,\Delta m\) are the absolute errors at the discrete points.
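In code, with both solutions evaluated at the same discrete points, the MAD of (26) is just the mean of the pointwise absolute errors; a tiny sketch with placeholder arrays:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 31)
y_exact = np.sin(x)                        # placeholder exact values at the discrete points
y_num = y_exact + 1e-8 * np.cos(5 * x)     # placeholder numerical values
mad = np.mean(np.abs(y_num - y_exact))     # eq. (26)
print(mad)
```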

6.1 Experimental results

Example 1

First, we consider the initial value problem of ODEs expressed as

$$\begin{aligned} &y' + \frac{\cos ( x )}{\sin ( x )}y = \frac{1}{\sin ( x )}, \\ &y ( 1 ) = \frac{3}{\sin ( 1 )}. \end{aligned}$$

This problem is considered on the interval \([ 1,2 ]\), and the exact solution is \(y ( x ) = \frac{ ( x + 2 )}{\sin ( x )}\).

Example 2

The second problem is given by

$$\begin{aligned} &y' + \frac{1}{5}y = e^{ - \frac{x}{5}}\cos ( x ), \\ &y ( 0 ) = 0 \end{aligned}$$

and considered on the interval \([ 0,2 ]\). The exact solution is \(y ( x ) = e^{ - \frac{x}{5}}\sin ( x )\).

Example 3

For the third problem, we consider

$$\begin{aligned} &y' + \biggl( x + \frac{1 + 3x^{2}}{1 + x + x^{3}} \biggr)y = x^{3} + 2x + \frac{x^{2} + 3x^{4}}{1 + x + x^{3}}, \\ &y ( 0 ) = 1, \end{aligned}$$

which has the exact solution \(y ( x ) = \frac{e^{ - \frac{x^{2}}{2}}}{1 + x + x^{3}} + x^{2}\) on the interval \([ 0,1 ]\).

Example 4

The last initial value problem of ODEs is as below:

$$\begin{aligned} &y' - \sin ( x )y = 2x - x^{2}\sin ( x ), \\ &y ( 0 ) = 0. \end{aligned}$$

It is solved on the interval \([ 0,1 ]\), with the exact solution being \(y ( x ) = x^{2}\).

Figures 2 and 3 show the numerical solutions and absolute errors for Examples 1–4. The numerical results are obtained with \(n = 10,m = 30\).

Figure 2: Comparison between exact and LNN results for first-order ODEs

Figure 3: The absolute error for first-order ODEs

Example 5

Here we consider the second-order ODEs given as follows:

$$\begin{aligned} &y'' + xy' - 4y = 12x^{2} - 3x, \\ &y ( 0 ) = 0,\qquad y ( 1 ) = 2 \end{aligned}$$

with the interval being \([ 0,1 ]\), and the exact solution \(y ( x ) = x^{4} + x\).

Example 6

The next problem is given by

$$\begin{aligned} &y'' - y' = - 2\sin ( x ), \\ &y ( 0 ) = - 1, \qquad y \biggl( \frac{\pi}{2} \biggr) = 1 \end{aligned}$$

and considered on the interval \([ 0,\frac{\pi}{2} ]\), with the exact solution being \(y ( x ) = \sin ( x ) - \cos ( x )\).

Example 7

One more problem to be solved is

$$\begin{aligned} &y'' + 2y' + y = x^{2} + 3x + 1, \\ &y ( 0 ) = 0,\qquad y ( 1 ) = - e^{ - 1} + 1, \end{aligned}$$

which is considered on the interval \([ 0,1 ]\), and the exact solution is \(y ( x ) = - e^{ ( - x )} + x^{2} - x + 1\).

Example 8

As the last problem we consider the following problem:

$$\begin{aligned} &y'' + \frac{1}{5}y' + y = - \frac{1}{5}e^{ - \frac{x}{5}}\cos ( x ), \\ &y ( 0 ) = 0,\qquad y ( 2 ) = e^{ - \frac{2}{5}}\sin ( 2 ), \end{aligned}$$

defined on the interval \([ 0,2 ]\) and having the exact solution \(y ( x ) = e^{ - \frac{x}{5}}\sin ( x )\).

Figures 4 and 5 show the numerical solutions and absolute errors for Examples 5–8. The numerical results are obtained with \(n = 10,m = 30\).

Figure 4: Comparison between exact and LNN results for second-order ODEs

Figure 5: The absolute error for second-order ODEs

Example 9

Next we consider the initial value problem of SODEs expressed as

$$\begin{aligned} &y_{1}^{\prime} + 2y_{1} + y_{2} = \sin ( x ), \\ &y_{2}^{\prime} - 4y_{1} - 2y_{2} = \cos ( x ), \\ &y_{1} ( 0 ) = 0,\qquad y_{2} ( 0 ) = - 3, \end{aligned}$$

which is considered on the interval \([ 0,2 ]\), and the exact solution is

$$\textstyle\begin{cases} y_{1} ( x ) = 2\sin ( x ) + x, \\ y_{2} ( x ) = - 3\sin ( x ) - 2\cos ( x ) - 2x - 1. \end{cases} $$

Example 10

One more problem is given by

$$\begin{aligned} &y_{1}^{\prime} + 2y_{1} - y_{2} = 2\sin ( x ), \\ &y_{2}^{\prime} - y_{1} + 2y_{2} = 2 \bigl( \cos ( x ) - \sin ( x ) \bigr), \\ &y_{1} ( 0 ) = 2,\qquad y_{2} ( 0 ) = 3, \end{aligned}$$

and it is defined on the interval \([ 0,2 ]\), with the exact solution being

$$\textstyle\begin{cases} y_{1} ( x ) = 2e^{ - x} + \sin ( x ), \\ y_{2} ( x ) = 2e^{ - x} + \cos ( x ). \end{cases} $$

Figures 6 and 7 show the numerical solutions and absolute errors for Examples 9 and 10. The numerical results are obtained with \(n = 10,m = 30\).

Figure 6: Comparison between exact and LNN results for SODE

Figure 7: The absolute error for SODE

Example 11

Consider the nonlinear boundary value problem from [59]

$$\begin{aligned} &y'' = \frac{1}{2x^{2}} \bigl( y^{3} - 2y^{2} \bigr), \\ &y ( 1 ) = 1,\qquad y ( 2 ) = 4/3 \end{aligned}$$

with the exact solution \(y ( x ) = 2x / ( x + 1 )\).

Figure 8 shows the numerical solutions and absolute errors of Example 11. These numerical results are obtained with \(n = 22,m = 20\).

Figure 8: (a) Comparison between exact and LNN results; (b) absolute errors of Example 11

We also solved all the test examples with different parameters. Tables 1 and 2 show the mean absolute deviation and execution time for each example. The execution time in Table 2 is the average over 100 repetitions (in seconds). Analysis of the data in Tables 1 and 2 reveals the best parameter values for each test problem and shows that the execution time is only slightly affected by changes in the network parameters.

Table 1 Mean absolute deviation of test examples with different parameters
Table 2 Execution time of test examples with different parameters

6.2 Comparative study

A comparative study with other approaches such as traditional methods and latest research work is described in this subsection to verify the superiority of the proposed method. We first compared our approach with some common traditional methods.

For test Examples 1–4, several methods were used: the Euler method (EM), the Suen third-order Runge–Kutta method (Suen-R-K3), the classical fourth-order Runge–Kutta method (R-K4), the cosine basis function neural network trained with the gradient descent algorithm (CNN(GD)), and the cosine basis function neural network trained with the improved extreme learning machine algorithm (CNN(IELM)). Tables 3 and 4 show the numerical results of all the methods mentioned above; they show that the LNN method has the highest accuracy and the shortest execution time. Table 5 gives the parameters of each algorithm.

Table 3 Mean absolute deviation of different methods for first-order ODEs
Table 4 Execution time of different methods for first-order ODEs
Table 5 Parameters of different methods

The shooting method (SM), the difference method (DM), and the CNN(IELM) method were used to compute the solutions of Examples 5–8. All calculations were performed with 100 sample points, and the number of neurons in the neural network methods was 10. The mean absolute deviation and execution time are shown in Tables 6 and 7. The results indicate that the LNN method has the highest accuracy; its execution time is less than that of the shooting and CNN(IELM) methods and does not differ significantly from that of the difference method.

Table 6 Mean absolute deviation of different methods for second-order ODEs
Table 7 Execution time of different methods for second-order ODEs

We computed solutions at 100 sample points for the two SODE problems (Examples 9 and 10) using the EM, R-K4, and CNN(IELM) methods; the number of neurons in the neural network methods was 10. Tables 8 and 9 show the experimental results. It is easy to conclude that the LNN method has the highest accuracy; its execution time is less than that of the CNN(IELM) method and does not differ significantly from that of the Euler and R-K4 methods.

Table 8 Mean absolute deviation of different methods for SODE
Table 9 Execution time of different methods for SODE

The execution time for each algorithm in Table 4 is the average over 30 repetitions, while in Tables 7 and 9 we averaged over 100 repetitions (results are in seconds).

The comparison with traditional methods confirmed the superiority of the new method in terms of both calculation accuracy and execution time. To further demonstrate the superiority of the proposed method, a comparison with recently reported methods was carried out. The following three problems were chosen for testing the proposed method.

Example 12

The first problem chosen from [51] is

$$\begin{aligned} &y' = y - x^{2} + 1, \\ &y ( 0 ) = 0.5 \end{aligned}$$

with \(x \in [ 0,2 ]\) and the exact solution being \(y ( x ) = ( x + 1 )^{2} - 0.5\mathrm{e}^{x}\).

In our numerical experiment, the sampling parameter is \(m = 10\); the result is shown in Fig. 9(a). For comparison, the maximum error in [51] is 1.9e–2 and the maximum error of the BeNN method in [60] is 2.7e–3, whereas the maximum error of the proposed LNN method is 4.9e–10. It is easy to see from Fig. 9(a) that the LNN method achieves a higher solution accuracy than the other two, which fully validates the superiority of the LNN method with the IELM algorithm.

Figure 9: Error comparison of Examples 12 and 13

Example 13

The second differential equation is given by [61]

$$\begin{aligned} &y'' + y = 2, \\ &y ( 0 ) = 1, y ( 1 ) = 0 \end{aligned}$$

with the exact solution being \(y ( x ) = \frac{\cos ( 1 ) - 2}{\sin ( 1 )}\sin ( x ) - \cos ( x ) + 2, x \in [ 0,1 ]\).

The sampling parameter in this experiment is \(m = 10\); the result is shown in Fig. 9(b). For comparison, the maximum error of the BeNN method in [60] is 7.3e–9, whereas the maximum error of the proposed LNN method is 5.2e–12, so it is easy to see from Fig. 9(b) that the LNN method obtains a more accurate solution than the BeNN method in [60]. For the method given in [61], the maximum error is 3.5e–2 with \(m = 50\), while the LNN method attains a maximum error of 2.4e–13 with only \(n = 10\) neurons, which also fully validates the superiority of the LNN method with the IELM algorithm.

Example 14

We also test the SODE given by [49]

$$\begin{aligned} &y_{1}^{\prime} = \cos ( x ), \qquad y_{1} ( 0 ) = 0, \\ &y_{2}^{\prime} = - y_{1},\qquad y_{2} ( 0 ) = 1, \\ &y_{3}^{\prime} = y_{2}, \qquad y_{3} ( 0 ) = 0, \\ &y_{4}^{\prime} = - y_{3},\qquad y_{4} ( 0 ) = 1, \\ &y_{5}^{\prime} = y_{4}, \qquad y_{5} ( 0 ) = 0 \end{aligned}$$

and its exact solution \(y_{1} ( x ) = \sin ( x ),y_{2} ( x ) = \cos ( x ),y_{3} ( x ) = \sin ( x ),y_{4} ( x ) = \cos ( x ),y_{5} ( x ) = \sin ( x )\), \(x \in [ 0,1 ]\).

As shown in Figs. 10 and 11, the sampling parameter is \(m = 10\) in this study, and only \(n = 9\) neurons are used. The maximum errors of the method proposed in [49] and of the BeNN method in [60] are 2.3e–5 and 6.8e–8, respectively, whereas the maximum error obtained by the LNN method with the IELM algorithm is 4.0e–12, which fully validates the superiority of the newly proposed method.

Figure 10: Comparison between exact and LNN results of Example 14

Figure 11: The absolute error of Example 14

6.3 Classic Emden–Fowler equation

Many problems in science and engineering can be modelled by the Emden–Fowler equation, and much attention has been devoted to the numerical solution of Emden–Fowler type equations. In this subsection, we apply the proposed Legendre neural network (LNN) with the IELM algorithm to solve the classic Emden–Fowler equation.

Example 15

Let us consider the classic Emden–Fowler equation given by [62]

$$\begin{aligned} &y'' + \frac{2}{x}y' + 2y = 0, \\ &y ( 0 ) = 1, \qquad y' ( 0 ) = 0, \end{aligned}$$

where the exact solution is \(y = \frac{\sin \sqrt{2} x}{\sqrt{2} x}, x \in [ 0,1 ]\).

Here we choose 10 equidistant points in the domain \([ 0,1 ]\) and 12 neurons to train the proposed Legendre network. Figure 12 shows a comparison between the exact solution and the LNN results, together with the absolute error at each point. Table 10 provides a more intuitive comparison of the exact and numerical solutions.
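For completeness, here is a Python/NumPy sketch of how this problem can be set up with the Legendre network and the pseudo-inverse. Since the coefficient \(2/x\) is singular at \(x = 0\), the sketch multiplies the equation by x before collocating, i.e. it imposes \(xy'' + 2y' + 2xy = 0\); this treatment of the singular point is our own assumption and is not spelled out in the paper:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N, n_pts = 11, 10                     # 12 hidden neurons, 10 equidistant points in [0, 1]
x = np.linspace(0.0, 1.0, n_pts)

Mop = np.zeros((N + 1, N + 1))        # operational matrix of Theorem 1
for n in range(1, N + 1):
    Mop[n-1::-2, n] = 2 * np.arange(n - 1, -1, -2) + 1

P = leg.legvander(x, N)               # P_n(x_i)
dP = P @ Mop                          # first derivatives
ddP = P @ Mop @ Mop                   # second derivatives
P0 = leg.legvander(np.array([0.0]), N)

H = np.vstack([x[:, None] * ddP + 2 * dP + 2 * x[:, None] * P,   # x y'' + 2 y' + 2 x y = 0
               P0,                                               # y(0) = 1
               P0 @ Mop])                                        # y'(0) = 0
T = np.concatenate([np.zeros(n_pts), [1.0], [0.0]])

beta = np.linalg.pinv(H) @ T
xt = np.linspace(1e-6, 1.0, 200)      # avoid x = 0 when evaluating the exact formula
y_lnn = leg.legval(xt, beta)
y_ex = np.sin(np.sqrt(2) * xt) / (np.sqrt(2) * xt)
print(np.max(np.abs(y_lnn - y_ex)))
```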

Figure 12: (a) Comparison between exact and LNN results; (b) absolute error of Example 15

Table 10 Exact and LNN neural network results

7 Conclusions

In this paper, we have presented a novel Legendre neural network for solving several types of linear and nonlinear ODEs. A Legendre polynomial was chosen as the basis function of the hidden neurons, which allows the hidden layer of the network to be replaced by a Legendre expansion of the input pattern. An improved extreme learning machine (IELM) algorithm was used to train the network weights when solving the resulting algebraic equation systems. The convergence analysis has proved the feasibility of this method. The accuracy of the proposed method has been examined by solving many test examples, and the results obtained by the proposed method have been compared with the exact solutions; the presented method was found to perform better. A comparative study has fully validated the superiority of the newly proposed method over other numerical algorithms published in the recent literature. An application of the approach to the classic Emden–Fowler equation also shows the feasibility and applicability of our method. From the presented investigation we can see that the LNN neural network with the IELM algorithm is straightforward, easy to implement, and achieves higher accuracy when solving ODEs.

In addition, neural networks for solving ODEs and PDEs have been widely discussed and still offer room for further work. Recent research articles such as [63,64,65,66] have studied neural network methods for solving fractional differential equations (FDEs). We have not yet dealt with the numerical solution of FDEs by neural network methods; this will become an important research direction for us in the future. As mentioned in many articles, a variety of phenomena in astrophysics and mathematical physics can be described by Emden–Fowler equations, so this type of equation will also be a future research direction. Since we consider only one type of orthogonal polynomial here, while some published papers such as [67, 68] combine several, hybrid methods may also be a new research direction.