
Modeling Beach Rotation Using a Novel Legendre Polynomial Feedforward Neural Network Trained by Nonlinear Constrained Optimization

Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 475)


A Legendre polynomial feedforward neural network is proposed to model/predict beach rotation. The study area is the reef-fronted Ammoudara beach, located on the northern coastline of Crete Island (Greece). Specialized experimental devices were deployed to generate a set of input-output data concerning the inshore bathymetry, the wave conditions and the shoreline position. The presence of the fronting beachrock reef (parallel to the shoreline) increases complexity and induces strongly non-linear effects. The use of Legendre polynomials enables the network to capture data non-linearities. However, in order to maintain specific functional requirements, the connection weights must be confined within a pre-determined domain of values; it turns out that the network’s training process constitutes a constrained nonlinear programming problem, which is solved by the barrier method. The performance of the network is compared with that of two other neural-based approaches. Simulations show that the proposed network achieves superior performance, which could be improved further if an additional wave parameter (wave direction) were included in the input variables.


  • Beach rotation
  • Feedforward neural network
  • Legendre polynomials
  • Perched beach
  • Nonlinear constrained optimization

1 Introduction

Beach rotation refers to the realignment of the beach shoreline due mainly to lateral (alongshore) sediment movement caused by shifts in incident wave energy [1]. The phenomenon is controlled by the wave-coastal morphology interaction, which can result in large localized changes in shoreline position (retreat or advance) and, thus, in changes of the beach planform; these changes, however, may not lead to long-term sediment loss or gain, as beaches often return to their initial planform, with the changes often being seasonal [2–4]. Although beach rotation has been considered/modeled as an alongshore sediment transport process, recent research suggests a more complex beach response to wave energy, whereby alongshore variability in cross-shore sediment fluxes may also be significant [2, 5]. Beach rotation processes are expected to be more complicated in the case of perched beaches, i.e., beaches that are fronted by natural or artificial reefs [6], as the sediment dynamics and morphodynamics of these beaches are also controlled by the reef’s depth and morphology. Wave transformation and breaking over the reef can induce highly non-linear effects [7, 8]. As a result, the standard modeling methodologies require complex mathematical structures with extremely high computational costs [3–5, 9].

On the other hand, polynomial functions can effectively model data nonlinearities [10]. Polynomial neural networks utilize polynomials to represent the nodes’ activation functions and thus increase modeling capabilities. Ma and Khorasani [11] incorporated Hermite polynomials into the network’s structure, with the corresponding parameters optimized by an adaptive learning scheme. Lee and Jeng [12] used tensor products to develop a Chebyshev polynomial type network, whereas Patra et al. [13] performed nonlinear channel equalization for wireless communication systems using a Legendre polynomial-based neural network. Although the above approaches show good testing performance, they use a high number of nodes and thus can hardly be applied to high dimensional nonlinear problems. In comparison, Chebyshev polynomial radial basis function and neural-fuzzy networks were developed to perform efficient shoreline extraction from coastal imagery [10] and to predict coastal erosion [14].

In this paper, we propose a feedforward neural network that employs Legendre polynomials as activation functions to model the shoreline rotation of a perched beach (Ammoudara, Crete; Fig. 1). Linear combinations of the input variables are generated and appropriately scaled by constraining the corresponding weighting parameters. The scaled functions maintain linearity in the input variables (an important issue when regression analysis is to be applied), and the output is expanded into a truncated Legendre series. Since the network’s connection weights are constrained, the training process becomes a constrained nonlinear optimization problem, solved by the barrier method.

Fig. 1.

(a) Ammoudara beach, Heraklion, Crete; the position of the offshore POSEIDON E1-M3A wave buoy is illustrated as a black dot in the inset. (b) Optical system location (diamond point), and field of vision of the 3 deployed cameras (confined within the red lines); the shoreline section examined by video imagery (i.e. detected shoreline) is shown by the dashed black line along the shoreline, with the white vertical dashed-line corresponding to one cross-shore section (out of the 52 sections studied); the offshore dark grey zone parallel to the shoreline delineates the beachrock reef. (Satellite image source: Bing Maps, Microsoft) (Color figure online)

The paper is organized as follows. Section 2 describes the experimental setup and the data acquisition process. Section 3 provides a detailed analysis of the proposed network and the training process used. Section 4 illustrates the simulation experiments, and the paper concludes in Sect. 5.

2 Experimental Setup and Raw Data Extraction

The study area is the eastern sector of Ammoudara beach, a 6.1 km long microtidal, urban perched beach located to the west of the port of Heraklion, Crete, Greece (Fig. 1). The beach is fronted by a submerged beachrock reef oriented almost parallel to the shoreline; the reef’s width and its distance from the shoreline vary between 15–50 m and 40–70 m, respectively. Before detailing the experimental setup and the data acquisition process, it is convenient to discuss some concepts involved in the analysis, as well as the physical meaning of the input-output variables used in this paper.

In view of Fig. 1(b), the length of the shoreline studied is defined by the black dashed line lying on the shoreline (denoted as “Detected Shoreline” in the figure). The vertical white dashed line corresponds to one of the 52 cross-shore sections used in our experiments. Each cross-shore section is associated with a bathymetry profile, an example of which is shown in Fig. 2(a). It is widely accepted that specific morphological characteristics of a reef can affect beach rotation; the reef acts in a similar manner to a submerged breakwater, absorbing the incoming wave energy [8, 9, 15]. The reef morphological characteristics used as inputs are the following (Fig. 2): the reef depth (in meters) from the sea surface, denoted as \( d \) (Fig. 2(a)); the reef inshore and offshore slopes, \( \omega_{1} \) and \( \omega_{2} \) (Fig. 2(b)); and the reef width (in meters) at 1.2 m water depth, \( w \) (Fig. 2(b)). The depth of 1.2 m was selected after specialized data processing showed that the reef width at this particular water depth had the most substantial effect.

Fig. 2.

(a) Cross-shore bathymetric profile (see Fig. 1(b)), showing the beachrock reef and the elevation parameters. (b) Zoom on the reef showing its structural parameters.

Apart from the above parameters that quantify the bathymetry characteristics, we use two more parameters that describe the wave conditions, namely the significant wave height, denoted as \( H_{S} \) (in meters), and the peak wave period, denoted as \( T_{P} \) (in seconds). Note that these two are important, as they impose a direct control on beach morphodynamics [1, 3, 7]. All of the above parameters form the input variables of our analysis. The output variable quantifying beach rotation is the distance (in meters) from the reef top point (crown) to the shoreline, denoted as \( y \) in Fig. 2(a). Indeed, the variability of this parameter (cross-shore distance) along the shoreline defines beach rotation [14, 7]. In summary, the input variables are: \( x_{1} = d \), \( x_{2} = \tan \omega_{1} \), \( x_{3} = \tan \omega_{2} \), \( x_{4} = w \), \( x_{5} = H_{S} \), and \( x_{6} = T_{P} \), and the output variable is \( y \) (i.e., \( p = 6 \) input variables and one output).

The experimental data set consists of highly detailed nearshore bathymetric data and a long-term (10-month period, January to November 2014) time series of shoreline position and wave conditions. More specifically, bathymetric data were obtained using a single-beam digital Hi-Target HD 370 echo-sounder and a Differential GPS (Topcon Hipper RTK-DGPS) deployed from a very shallow draft inflatable boat. From these bathymetric data, 52 cross-shore sections were derived through interpolation (a sample is given in Figs. 1 and 2). Information on the shoreline position for the 10-month period was obtained from coastal video imagery provided by a system of 3 PointGrey FLEA-2 video cameras installed at the study area and monitoring a 1400 m long beach stretch (the fields of vision of these 3 cameras are shown in Fig. 1(b)). A detailed description of the system and of the automated procedure developed to extract the shoreline from the video images is provided in Velegrakis et al. [8]. The above experiments provided the raw data for the input variables \( x_{1} - x_{4} \) and for the output variable \( y \). Data concerning the variables \( x_{5} = H_{S} \) and \( x_{6} = T_{P} \) were obtained from an offshore wave buoy (POSEIDON E1-M3A buoy) located about 35 km to the north of the beach (35.660 N, 24.990 E) at 1440 m water depth (see Fig. 1(a)), installed/operated by the Greek National Centre for Marine Research (GNCMR).

In total, the experimental setup generated \( N = 4148 \) input-output data pairs of the form \( \{\varvec{x}_{k};\,y_{k}\}_{k = 1}^{N} \) with \( \varvec{x}_{k} = [x_{k1}\;\;x_{k2}\;\;x_{k3}\;\;x_{k4}\;\;x_{k5}\;\;x_{k6}]^{T} \). These data are processed by the proposed neural network in order to model/predict beach rotation.

3 The Proposed Legendre Polynomial Feedforward Network

In this section, we introduce a feedforward neural network whose nodes utilize Legendre polynomials as activation functions. The Legendre polynomials possess powerful function approximation capabilities and are defined by the following formula [16],

$$ P_{n}(x) = \frac{1}{2^{n} n!}\frac{d^{n}}{dx^{n}}\left[ (x^{2} - 1)^{n} \right] $$

where \( n = 0,1, \ldots \) is the polynomial order. The Legendre polynomials are orthogonal for \( x \in \left[ { - 1,\,\,1} \right] \) satisfying the following inner product condition [13, 16],

$$ \int_{-1}^{1} P_{m}(x)\,P_{n}(x)\,dx = \begin{cases} \dfrac{2}{2n + 1}, & m = n \\ 0, & m \ne n \end{cases} $$

In addition, they can be generated by the following recurrence relations [16],

$$ P_{0}(x) = 1; \quad P_{1}(x) = x; \quad P_{n}(x) = \frac{1}{n}\left[ \left( 2n - 1 \right) x P_{n - 1}(x) - \left( n - 1 \right) P_{n - 2}(x) \right] \quad \text{for } n \ge 2 $$
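As a minimal illustration (in Python; the original implementation was in Matlab, and the function name is ours), the recurrence above can be coded as:

```python
def legendre_eval(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x  # P_0(x), P_1(x)
    for k in range(2, n + 1):
        # P_k(x) = [(2k - 1) x P_{k-1}(x) - (k - 1) P_{k-2}(x)] / k
        p_prev, p = p, ((2 * k - 1) * x * p - (k - 1) * p_prev) / k
    return p
```

This evaluates each node's activation in O(n) operations, without materializing the polynomial coefficients.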

Let us assume that the available input-output dataset is denoted as,

$$ S = \left\{ {\left( {\varvec{x}_{k} ,y_{k} } \right):\varvec{x}_{k} = [x_{k1} ,x_{k2} , \ldots ,x_{kp} ]^{T} ,k = 1,2, \ldots ,N} \right\} $$

where p is the dimension of the input space, and \( N \) is the number of the training data (note that in the application discussed in this paper: \( p = 6 \) and \( N = 4148 \)).

The proposed neural network is illustrated in Fig. 3. It involves four layers. Given that the desired order of the Legendre polynomials is \( n \), Layer 1 comprises \( n \) nodes, each of which generates a linear combination of the input variables,

Fig. 3.

The Legendre polynomial feedforward neural network.

$$ h_{\ell } \left( \varvec{x} \right) = \sum\limits_{j = 1}^{p} {a_{\ell j} \,x_{j} } $$

where \( 1 \le \ell \le n \), \( 1 \le j \le p \), and \( a_{\ell j} \) are the weight parameters.

Layer 2 applies a scaling procedure, which maps the values of \( h_{\ell}(\varvec{x}) \) \( (1 \le \ell \le n) \) onto the interval \( [-1, 1] \).

As mentioned above, the reason for this scaling procedure is that the Legendre polynomials are orthogonal on the interval \( [-1, 1] \) and are therefore able to operate only within it. To accomplish this task, we introduce a specialized methodology that maintains linearity with respect to the original inputs. We denote the domain of values associated with the \( j \)-th input variable as \( D_{j} = \left[ x_{j}^{L},\,x_{j}^{U} \right] \), meaning that \( x_{j}^{L} \le x_{j} \le x_{j}^{U} \). Note that the lower and upper bounds \( x_{j}^{L} \) and \( x_{j}^{U} \) are fixed and depend only on the data. Accordingly, we define an interval \( A = \left[ a_{L},\,a_{U} \right] \) to confine the weight parameters, so that \( a_{L} \le a_{\ell j} \le a_{U} \) (i.e. \( a_{\ell j} \in A \)) for every \( \ell \) and \( j \). The values of the lower bound \( a_{L} \) and the upper bound \( a_{U} \) are pre-selected by a trial-and-error approach so as to obtain the best possible results, and are kept fixed throughout the whole learning process.

The question is how to find a transformation that maps the functions \( h_{\ell}(\varvec{x}) \) onto the interval \( [-1, 1] \). Based on interval arithmetic [17], the multiplication of the intervals \( D_{j} = \left[ x_{j}^{L},\,x_{j}^{U} \right] \) and \( A = \left[ a_{L},\,a_{U} \right] \) gives the interval \( \left[ L_{j},\,U_{j} \right] \) with,

$$ L_{j} = \min \left\{ a_{L}\,x_{j}^{L},\; a_{L}\,x_{j}^{U},\; a_{U}\,x_{j}^{L},\; a_{U}\,x_{j}^{U} \right\} $$
$$ U_{j} = \max \left\{ a_{L}\,x_{j}^{L},\; a_{L}\,x_{j}^{U},\; a_{U}\,x_{j}^{L},\; a_{U}\,x_{j}^{U} \right\} $$

Since \( a_{\ell j} \in A \) and \( x_{j} \in D_{j} \), it follows that,

$$ L_{j} \le a_{\ell j}\,x_{j} \le U_{j} \quad\quad \left( 1 \le j \le p \right) $$

Summing the above inequalities over all input variables and using Eq. (5) gives,

$$ \,\sum\limits_{j = 1}^{p} {L_{j} \le \sum\limits_{j = 1}^{p} {a_{\ell j} \,x_{j} } } \le \sum\limits_{j = 1}^{p} {U_{j} } \Rightarrow Q_{L} \le h_{\ell } \left( \varvec{x} \right) \le Q_{U} $$

where \( Q_{L} = \sum\limits_{j = 1}^{p} {L_{j} } \) and \( Q_{U} = \sum\limits_{j = 1}^{p} {U_{j} } \). Thus, \( h_{\ell}(\varvec{x}) \in \left[ Q_{L},\,Q_{U} \right]\;\forall \ell \). The question of finding a transformation that maps the functions \( h_{\ell}(\varvec{x}) \) \( (1 \le \ell \le n) \) onto the interval \( [-1, 1] \) is now equivalently rephrased as finding a transformation that maps the interval \( \left[ Q_{L},\,Q_{U} \right] \) onto the interval \( [-1, 1] \). It can easily be shown that this transformation is,

$$ \tilde{h}_{\ell } \left( \varvec{x} \right) = \frac{2}{{Q_{U} - Q_{L} }}h_{\ell } \left( \varvec{x} \right) - \frac{{Q_{U} + Q_{L} }}{{Q_{U} - Q_{L} }} $$

where \( \tilde{h}_{\ell } \left( \varvec{x} \right) \in \left[ { - 1,\,\,1} \right] \).
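The interval products and the affine map above can be sketched as follows (a Python illustration with names of our choosing; the original implementation was in Matlab):

```python
def interval_bounds(aL, aU, xL, xU):
    """Endpoint products of the interval multiplication [aL, aU] * [xL, xU]."""
    products = (aL * xL, aL * xU, aU * xL, aU * xU)
    return min(products), max(products)

def q_bounds(aL, aU, domains):
    """Q_L and Q_U: sums of the per-variable bounds L_j and U_j over all inputs."""
    bounds = [interval_bounds(aL, aU, xL, xU) for (xL, xU) in domains]
    return sum(L for L, U in bounds), sum(U for L, U in bounds)

def scale(h, QL, QU):
    """Affine transformation mapping [QL, QU] onto [-1, 1]."""
    return 2.0 * h / (QU - QL) - (QU + QL) / (QU - QL)
```

By construction, `scale` sends \( Q_L \mapsto -1 \), \( Q_U \mapsto 1 \), and the midpoint of the interval to 0.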

By setting \( R = 2/\left( Q_{U} - Q_{L} \right) \) and \( \varOmega = \left( Q_{U} + Q_{L} \right)/\left( Q_{U} - Q_{L} \right) \), and taking into account Eq. (5), Eq. (10) yields,

$$ \tilde{h}_{\ell } \left( \varvec{x} \right) = R\,\sum\limits_{j = 1}^{p} {a_{\ell j} \,x_{j} \, - \varOmega } = \,\sum\limits_{j = 1}^{p} {R\,a_{\ell j} x_{j} \, - \varOmega } $$

The above equation directly indicates that the scaled functions \( \tilde{h}_{\ell}(\varvec{x}) \) \( (1 \le \ell \le n) \) are linear combinations of the input variables, which is very important for the regression analysis that follows.

Layer 3 includes \( n \) nodes whose activation functions are the Legendre polynomials of the linear combinations given in Eq. (11),

$$ P_{\ell } \left( \varvec{x} \right) = P_{\ell } \left( {\tilde{h}_{\ell } \left( \varvec{x} \right)} \right) = P_{\ell } \left( {\sum\limits_{j = 1}^{p} {R\,a_{\ell j} x_{j} \, - \varOmega } } \right) $$

Note that, based on (3), the zero order polynomial \( P_{0} \left( \varvec{x} \right) \) is always equal to one and has no effect on the input variables. Therefore, it is used as the network’s bias.

Finally, Layer 4 produces the network’s estimated output by combining the outputs of Layer 3, expanding the linear combinations of Eq. (11) into the following truncated Legendre series,

$$ \hat{y} = \,\beta_{0} + \sum\limits_{\ell = 1}^{n} {\beta_{\ell } \,P_{\ell } \left( {\sum\limits_{j = 1}^{p} {R\,a_{\ell j} x_{j} \, - \varOmega } } \right)\,} \,\, $$
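Putting the four layers together, the forward pass of Eq. (13) can be sketched as follows (a Python illustration with assumed array shapes, not the authors' Matlab code; `A` holds the constrained weights \( a_{\ell j} \) and `beta` holds \( \beta_{0}, \ldots, \beta_{n} \)):

```python
import numpy as np

def legendre_eval(n, x):
    """P_n(x) via the three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(2, n + 1):
        p_prev, p = p, ((2 * k - 1) * x * p - (k - 1) * p_prev) / k
    return p

def network_output(x, A, beta, R, Omega):
    """Layers 1-4: linear combinations -> scaling -> Legendre activations -> weighted sum.
    x: input vector of shape (p,); A: weight matrix of shape (n, p);
    beta: coefficient vector of shape (n + 1,), with beta[0] the bias."""
    n = A.shape[0]
    h_tilde = R * (A @ x) - Omega  # scaled combinations, each in [-1, 1]
    return beta[0] + sum(beta[l] * legendre_eval(l, h_tilde[l - 1])
                         for l in range(1, n + 1))
```

Note that the \( \ell \)-th scaled combination feeds the polynomial of order \( \ell \), so each hidden node contributes exactly one term of the truncated series.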

The network’s learning process carries out the estimation of the weights \( a_{\ell j} \) and \( \beta_{\ell } \) through the minimization of the network’s square error \( J_{SE} = \sum\limits_{k = 1}^{N} {\left| {y_{k} - \hat{y}_{k} } \right|^{2} } \), which, based on the form of Eq. (13), constitutes a regression analysis. However, to maintain \( \tilde{h}_{\ell}(\varvec{x}) \in \left[ -1,\,1 \right] \), the following relation must hold,

$$ a_{\ell j} \in A \Rightarrow a_{L} \le \,a_{\ell j} \le a_{U} \,\,\,\,\,\,\,\,\,\,\,\forall \ell ,\,j $$

Thus, while the estimation of \( \beta_{\ell} \; (1 \le \ell \le n) \) is unconstrained, the estimation of \( a_{\ell j} \; (1 \le \ell \le n;\; 1 \le j \le p) \) is constrained. By splitting the double inequality in (14), the constrained optimization problem is now stated as follows: minimize,


$$ J_{SE} = \sum\limits_{k = 1}^{N} {\left| {y_{k} - \hat{y}_{k} } \right|^{2} } = \,\sum\limits_{k = 1}^{N} {\left| {y_{k} - \left( {\beta_{0} + \sum\limits_{\ell = 1}^{n} {\beta_{\ell } \,P_{\ell } \left( {\sum\limits_{j = 1}^{p} {R\,a_{\ell j} \,x_{j} \, - \varOmega } } \right)\,} } \right)} \right|^{2} } \, $$

Subject to

$$ \phi (a_{\ell j} ) = - a_{\ell j} + a_{L} \le 0\,\,\,\,\,\,\,\,\,\,\left( {\ell = 1,\,2,\, \ldots ,\,n;\,\,j = 1,\,2,\, \ldots ,\,p} \right) $$
$$ \psi (a_{\ell j} ) = a_{\ell j} - a_{U} \le 0\,\,\,\,\,\,\,\,\,\,\left( {\ell = 1,\,2,\, \ldots ,\,n;\,\,j = 1,\,2,\, \ldots ,\,p} \right) $$

It can easily be verified that the feasible region of the problem is convex, because it is the intersection of half-spaces (i.e., convex sets). To perform the optimization we could use the well-known penalty method, which approaches the feasible region from the outside [18]. However, we are required not to leave the feasible region; otherwise, the Legendre polynomials would be forced to operate outside the orthogonality interval. Thus, we choose the barrier method [18], which searches for a solution inside the feasible region without ever leaving it. According to the barrier method, the above constrained problem can be resolved by the unconstrained minimization of the following function,

$$ F = \sum\limits_{k = 1}^{N} {\left| {y_{k} - \left( {\beta_{0} + \sum\limits_{\ell = 1}^{n} {\beta_{\ell } \,P_{\ell } \left( {\sum\limits_{j = 1}^{p} {R\,a_{\ell j} \,x_{j} \, - \varOmega } } \right)\,} } \right)} \right|^{2} - \frac{1}{\gamma }\sum\limits_{\ell = 1}^{n} {\sum\limits_{j = 1}^{p} {\left( {\frac{1}{{\phi \left( {a_{\ell j} } \right)}} + \frac{1}{{\psi \left( {a_{\ell j} } \right)}}} \right)} } } $$

where \( \sum\limits_{\ell = 1}^{n} {\sum\limits_{j = 1}^{p} {\left( {\frac{1}{{\phi \left( {a_{\ell j} } \right)}} + \frac{1}{{\psi \left( {a_{\ell j} } \right)}}} \right)} } \) is the barrier function and \( \gamma \) is the barrier constant, which is required to take a sufficiently large positive value.
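As an illustration of the barrier term (a Python sketch with assumed names, not the authors' code; `A` is a list of rows of the constrained weights), note that since \( \phi(a_{\ell j}) < 0 \) and \( \psi(a_{\ell j}) < 0 \) strictly inside the feasible region, the penalty below is positive and grows without bound as any weight approaches \( a_L \) or \( a_U \):

```python
def barrier_term(A, aL, aU, gamma):
    """Inverse-barrier penalty -(1/gamma) * sum(1/phi + 1/psi).
    Requires aL < a < aU strictly for every weight; the term diverges
    near the bounds, keeping the search inside the feasible region."""
    s = sum(1.0 / (-a + aL) + 1.0 / (a - aU) for row in A for a in row)
    return -s / gamma
```

A larger barrier constant \( \gamma \) shrinks the penalty, letting the minimizer approach the true constrained solution.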

To perform the unconstrained minimization of \( F \) we use the steepest-descent method based on Armijo’s rule [19]. To do so we define the vector,

$$ \varvec{z} = \left[ z_{1},\,z_{2},\,\ldots,\,z_{np},\,z_{np + 1},\,z_{np + 2},\,\ldots,\,z_{n\left( p + 1 \right)} \right]^{T} = \left[ a_{11},\,a_{12},\,\ldots,\,a_{np},\,\beta_{1},\,\beta_{2},\,\ldots,\,\beta_{n} \right]^{T} $$

For the \( t + 1 \) iteration the learning rule is,

$$ \varvec{z}(t + 1) = \,\varvec{z}(t) - \eta (t)\,\nabla F(\varvec{z}(t)) $$

where \( \eta (t) = \lambda^{\tau } \) with \( \lambda \in \left( {0,\,1} \right) \). The parameter \( \tau \) is the smallest positive integer such that,

$$ F\left( {\varvec{z}(t) - \eta (t)\,\nabla F(\varvec{z}(t))} \right)\, - \,F\left( {\varvec{z}(t)} \right) < - \varepsilon \,\eta (t)\,\left\| {\nabla F(\varvec{z}(t))} \right\|^{2} $$

with \( \varepsilon \in (0,\,1) \). Finally, the partial derivatives in Eqs. (20) and (21) can easily be derived.
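A minimal Python sketch of one steepest-descent iteration with Armijo's rule (Eqs. (20)–(21)) follows; the cap on \( \tau \) is our safeguard against an endless backtracking loop, not part of the original scheme:

```python
import numpy as np

def armijo_step(F, gradF, z, lam=0.5, eps=0.1, max_tau=50):
    """One iteration: find the smallest positive tau such that eta = lam**tau
    satisfies Armijo's sufficient-decrease condition, then take the step."""
    g = gradF(z)
    gnorm2 = float(g @ g)
    for tau in range(1, max_tau + 1):
        eta = lam ** tau
        if F(z - eta * g) - F(z) < -eps * eta * gnorm2:
            return z - eta * g
    return z  # no admissible step found within max_tau trials
```

For example, on the quadratic \( F(\varvec{z}) = \varvec{z}^{T}\varvec{z} \), a single call already produces a strict decrease of the objective.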

4 Simulation Study

Based on the analysis described in Sect. 2, the data set includes \( N = 4148 \) input-output data pairs (corresponding to 52 beach cross-sections) of the form \( \{\varvec{x}_{k};\,y_{k}\}_{k = 1}^{N} \) with \( \varvec{x}_{k} = [x_{k1}\;\;x_{k2}\;\;\ldots\;\;x_{k6}]^{T} \) and \( y_{k} \in \Re \). The data set was divided into a training set consisting of 60 % of the original data and a testing set consisting of the remaining 40 %. Table 1 depicts the parameter setting for the Legendre polynomial neural network.

Table 1. Parameter setting for the proposed network

For comparison, two more neural networks were designed. The first was a radial basis function (RBF) network. The parameters of the basis functions were estimated by means of conditional fuzzy clustering [20], while the connection weights were calculated by the least-squares method.

The second was a feedforward neural network (FFNN) whose activation functions read as follows,

$$ f(x) = \tanh \frac{x}{2} $$

To train the FFNN, we applied the steepest-descent method based on Armijo’s rule (see the previous section) to minimize the network’s square error. All networks were implemented in Matlab.

The performance index used in the simulations was the root mean square error,

$$ RMSE = \sqrt {\frac{1}{N}\sum\limits_{k = 1}^{N} {\left| {y_{k} - \hat{y}_{k} } \right|^{2} } } $$
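For reference, the RMSE computation is straightforward (a Python sketch, with an illustrative function name):

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted shoreline positions."""
    assert len(y_true) == len(y_pred)
    return math.sqrt(sum((yk - yhat) ** 2
                         for yk, yhat in zip(y_true, y_pred)) / len(y_true))
```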

For each of the three networks we considered various numbers of nodes, and for each number of nodes we ran 20 different initializations.

The results are shown in Table 2. The Legendre polynomial neural network exhibits superior performance compared with the other two networks. The best result on both the training and testing data sets is obtained by the proposed network for \( n = 4 \).

Table 2. Comparative results in terms of the RMSE mean values and the corresponding standard deviations obtained by the three networks for various numbers of nodes in the hidden layer

The results reported in Table 2 are also visualized in Fig. 4. Some interesting remarks follow: (a) the difference between the Legendre polynomial network and the other networks tested appears to be significant, particularly on the testing data; (b) the RBF outperforms the FFNN in both cases; (c) although the best result for the proposed network was obtained for \( n = 4 \), the general tendency is toward smaller RMSEs as the number of polynomial nodes increases.

Fig. 4.

Mean values of the RMSE as a function of the number of nodes for: (a) the training data, and (b) the testing data.

It is interesting to see how the above results translate into meaningful observations as far as beach rotation is concerned. Figure 5 concerns a specific testing sample and shows the predictions obtained by the three methods. Based on this figure, the proposed network clearly achieves the best beach rotation prediction.

Fig. 5.

A sample of the cross-shore shoreline position predicted by the three networks.

Although the overall prediction performance of the proposed Legendre polynomial neural network may not appear very satisfactory on the basis of the RMSE (9.5 m), the following should be noted. First, the Ammoudara shoreline position is characterized by high spatiotemporal variability, with the difference between the most inshore and most offshore recorded shoreline positions during the 10-month monitoring period being between 3 and 8 m [8]; the proposed network’s predictions are of the same order of magnitude and may thus be considered satisfactory for this highly non-linear coastal system, particularly as in many cross-shore sections the network’s predictions were much closer to the observed shoreline position (see Fig. 5). Secondly, adjacent sections of the shoreline showed large differences in beach erosion/accretion patterns, suggesting significant control by small differences in reef morphology and the direction of wave approach. Hydrodynamic modeling has shown that small differences in the angle of wave approach result in quite different inshore hydrodynamic regimes (waves and wave-induced currents), even for offshore waves with the same significant wave heights (\( H_{S} \)) and periods (\( T_{P} \)) [8]. As the offshore wave data set did not include details on wave direction, the offshore waves used were grouped collectively as northerly waves (those affecting the beach); thus, the observed discrepancies could be mainly due to the absence of an additional wave parameter (angle of wave incidence) from the network input variables. As future research, it would be interesting to test how the above results would change if wave direction were included as an input variable.

5 Summary and Conclusions

In this paper we presented a systematic methodology that combines a sophisticated experimental setup with a novel feedforward neural network to model beach rotation on a reef-fronted (perched) beach (Ammoudara, Crete). A set of significant morphological and wave variables that can directly affect beach rotation was identified and, together with records of shoreline position from a coastal video imagery system, was used to generate the network’s input-output training data. The proposed network consists of four layers. The main task of the first and second layers is to obtain linear combinations of the input variables and then to appropriately scale them before they enter the third layer, which comprises the Legendre polynomial activation functions. This scaling process is necessary because of limitations imposed by the orthogonality of the Legendre polynomials. As a result, the weights of the linear combinations must be confined to a predetermined domain of values. Therefore, the training process of the network becomes a constrained nonlinear optimization problem, which is resolved by the barrier method. The comparative simulation experiments carried out showed that the proposed network can effectively model beach rotation; its performance could be improved further if detailed wave direction data were available to be included as an additional input variable.


References

  1. Thomas, T., Phillips, M.R., Williams, A.T.: A centurial record of beach rotation. J. Coast. Res. 65, 594–599 (2013)
  2. Thomas, T., Rangel-Buitrago, N., Phillips, M.R., Anfuso, G., Williams, A.T.: Mesoscale morphological change, beach rotation and storm climate influences along a macrotidal embayed beach. J. Marine Sci. Eng. 3, 1006–1026 (2015)
  3. Ranasinghe, R., McLoughlin, R., Short, A., Symonds, G.: The Southern Oscillation Index, wave climate and beach rotation. Marine Geol. 204(3–4), 273–287 (2004)
  4. Klein, A.H.F., Filho, L.B., Schumacher, D.H.: Short-term beach rotation processes in distinct headland bay systems. J. Coast. Res. 18(3), 442–458 (2002)
  5. Harley, M.D., Turner, I.L., Short, A.D.: New insights into embayed beach rotation: the importance of wave exposure and cross-shore processes. J. Geophys. Res. 120(8), 16 (2015)
  6. Gallop, S.L., Bosserelle, C., Eliot, I., Pattiaratchi, C.B.: The influence of limestone reefs on storm erosion and recovery of a perched beach. Cont. Shelf Res. 47, 16–27 (2012)
  7. Gallop, S.L., Bosserelle, C., Eliot, I., Pattiaratchi, C.B.: The influence of coastal reefs on spatial variability in seasonal sand fluxes. Marine Geol. 344, 132–143 (2013)
  8. Velegrakis, A.F., Trygonis, V., Chatzipavlis, A.E., Karambas, Th., Vousdoukas, M.I., Ghionis, G., Monioudi, I.N., Hasiotis, Th., Andreadis, O., Psarros, F.: Shoreline variability of an urban beach fronted by a beachrock reef from video imagery. Natural Hazards (2016). doi:10.1007/s11069-016-2415-9
  9. Lowe, R.J., Hart, C., Pattiaratchi, C.B.: Morphological constraints to wave-driven circulation in coastal reef-lagoon systems: a numerical study. J. Geophys. Res. 115, C09021 (2010)
  10. Rigos, A., Tsekouras, G.E., Vousdoukas, M.I., Chatzipavlis, A., Velegrakis, A.F.: A Chebyshev polynomial radial basis function neural network for automated shoreline extraction from coastal imagery. Integr. Comput. Aided Eng. 23, 141–160 (2016)
  11. Ma, L., Khorasani, K.: Constructive feedforward neural networks using Hermite polynomial activation functions. IEEE Trans. Neural Netw. 16(4), 821–833 (2005)
  12. Lee, T.T., Jeng, J.T.: The Chebyshev-polynomials-based unified model neural networks for function approximation. IEEE Trans. Syst. Man Cybern. Part B Cybern. 28(6), 925–935 (1998)
  13. Patra, J.C., Meher, P.K., Chakraborty, G.: Nonlinear channel equalization for wireless communication systems using Legendre neural networks. Sig. Process. 89(11), 2251–2262 (2009)
  14. Tsekouras, G.E., Rigos, A., Chatzipavlis, A., Velegrakis, A.: A neural-fuzzy network based on Hermite polynomials to predict the coastal erosion. Commun. Comput. Inf. Sci. 517, 195–205 (2015)
  15. Alexandrakis, G., Ghionis, G., Poulos, S.E.: The effect of beachrock formation on the morphological evolution of a beach. The case study of an Eastern Mediterranean beach: Ammoudara, Greece. J. Coast. Res. 69(SI), 47–59 (2013)
  16. Bell, W.W.: Special Functions for Scientists and Engineers. D. Van Nostrand Company Ltd., London (1968)
  17. Moore, R.E.: Interval Analysis. Prentice-Hall, Englewood Cliffs (1966)
  18. Luenberger, D.G., Ye, Y.: Linear and Nonlinear Programming, 3rd edn. Springer, New York (2008)
  19. Armijo, L.: Minimization of functions having Lipschitz continuous first partial derivatives. Pacific J. Math. 16(1), 1–3 (1966)
  20. Pedrycz, W.: Conditional fuzzy clustering in the design of radial basis function neural networks. IEEE Trans. Neural Netw. 9(4), 601–612 (1998)



Acknowledgements

This research has been co-financed (85 %) by the EEA GRANTS, 2009–2014, and (15 %) by the Public Investments Programme (PIP) of the Hellenic Republic. Project title: Recording of and Technical Responses to Coastal Erosion of Touristic Aegean island beaches (ERA BEACH).

Corresponding author

Correspondence to George E. Tsekouras.

Copyright information

© 2016 IFIP International Federation for Information Processing


Cite this paper

Rigos, A., Tsekouras, G.E., Chatzipavlis, A., Velegrakis, A.F. (2016). Modeling Beach Rotation Using a Novel Legendre Polynomial Feedforward Neural Network Trained by Nonlinear Constrained Optimization. In: Iliadis, L., Maglogiannis, I. (eds) Artificial Intelligence Applications and Innovations. AIAI 2016. IFIP Advances in Information and Communication Technology, vol 475. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-44943-2

  • Online ISBN: 978-3-319-44944-9
