Adaptive Smoothing Spline Estimator for the Function-on-Function Linear Regression Model

In this paper, we propose an adaptive smoothing spline (AdaSS) estimator for the function-on-function linear regression model, where each value of the response, at any domain point, depends on the full trajectory of the predictor. The AdaSS estimator is obtained by optimizing an objective function with two spatially adaptive penalties based on initial estimates of the partial derivatives of the regression coefficient function. This allows the proposed estimator to adapt more easily to the true coefficient function over regions of large curvature while avoiding undersmoothing over the remaining part of the domain. A novel evolutionary algorithm is developed ad hoc to obtain the optimization tuning parameters. Extensive Monte Carlo simulations compare the AdaSS estimator with competitors that have already appeared in the literature. The results show that our proposal mostly outperforms the competitors in terms of estimation and prediction accuracy. Lastly, those advantages are also illustrated on two real-data benchmark examples.


Introduction
Complex datasets are increasingly available due to advancements in technology and computational power, and have stimulated significant methodological developments. In this regard, functional data analysis (FDA) addresses the issue of dealing with data that can be modeled as functions defined on a compact domain. FDA is a thriving area of statistics and, for a comprehensive overview, the reader may refer to [30,18,17,21,12]. In particular, the generalization of classical multivariate regression analysis to the case where the predictor and/or the response have a functional form is referred to as functional regression and is illustrated, e.g., in [25] and [30]. Most functional regression methods have been developed for models with scalar response and functional predictors (scalar-on-function regression) or functional response and scalar predictors (function-on-scalar regression). Some results may be found in [8,20,39,26]. Models where both the response and the predictor are functions, namely function-on-function (FoF) regression, have been far less studied until now. In this work, we study FoF linear regression models, where the response variable function, at any domain point, depends linearly on the full trajectory of the predictor. That is,

$$Y_i(t) = \int_S \beta(s,t)\,X_i(s)\,ds + \varepsilon_i(t), \quad t \in T, \qquad (1.1)$$

for i = 1, ..., n. The pairs (X_i, Y_i) are independent realizations of the predictor X and the response Y, which are assumed to be smooth random processes with realizations in L^2(S) and L^2(T), i.e., the Hilbert spaces of square integrable functions defined on the compact sets S and T, respectively. Without loss of generality, the latter are also assumed to have functional mean equal to zero. The functions ε_i are zero-mean random errors, independent of X_i. The function β is smooth in L^2(S × T), i.e., the Hilbert space of bivariate square integrable functions defined on S × T, and is hereinafter referred to as the coefficient function. For each t ∈ T, the contribution of X_i(·) to the conditional value of Y_i(t) is generated by β(·, t), which works as a continuous set of weights of the predictor evaluations. Different methods to estimate β in (1.1) have been proposed in the literature. Ramsay and Silverman [30] assume the estimator of β to be in a finite-dimensional tensor space spanned by two basis sets, where regularization is achieved by either truncation or roughness penalties.
(The latter is the foundation of the method proposed in this article, as we will see below.) Yao et al. [41] assume the estimator of β to be in a tensor product space generated by the eigenfunctions of the covariance functions of the predictor X and the response Y, estimated by using the principal analysis by conditional expectation (PACE) method [40]. More recently, Luo and Qi [23] propose an estimation method for the FoF linear model with multiple functional predictors, based on a finite-dimensional approximation of the mean response obtained by solving a penalized generalized functional eigenvalue problem. Qi and Luo [28] generalize the method in [23] to the high-dimensional case, where the number of covariates is much larger than the sample size (i.e., p >> n). In order to improve model flexibility and prediction accuracy, Luo and Qi [24] consider a FoF regression model with interaction and quadratic effects. A nonlinear FoF additive regression model with multiple functional predictors is proposed by Qi and Luo [29].
One of the most widely used estimation methods is the smoothing spline estimator β̂_SS introduced by Ramsay and Silverman [30]. It is obtained as the solution of the following optimization problem

$$\hat{\beta}_{SS} = \underset{\alpha \in \mathcal{S}_{k_1,k_2,M_1,M_2}}{\operatorname{argmin}} \sum_{i=1}^{n} \int_T \Big[ Y_i(t) - \int_S X_i(s)\,\alpha(s,t)\,ds \Big]^2 dt + \lambda_s \int_S \int_T \big[ L_s^{m_s}\alpha(s,t) \big]^2 ds\,dt + \lambda_t \int_S \int_T \big[ L_t^{m_t}\alpha(s,t) \big]^2 ds\,dt, \qquad (1.2)$$

where S_{k_1,k_2,M_1,M_2} is the tensor product space generated by the sets of B-splines of orders k_1 and k_2 associated with the non-decreasing sequences of M_1 + 2 and M_2 + 2 knots defined on S and T, respectively. The operators L_s^{m_s} and L_t^{m_t}, with m_s ≤ k_1 − 1 and m_t ≤ k_2 − 1, are the m_s-th and m_t-th order linear differential operators applied to α with respect to the variables s and t, respectively. The two penalty terms on the right-hand side of (1.2) measure the roughness of the function α. The positive constants λ_s and λ_t are generally referred to as roughness parameters and trade off smoothness and goodness of fit of the estimator. The higher their values, the smoother the estimator of the coefficient function.
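The role of the roughness parameters in (1.2) can be illustrated with a one-dimensional discrete analogue: a penalized least-squares smoother with a second-difference penalty, where a larger roughness parameter yields a smoother fit. This is a minimal sketch under that simplification, not the estimator of (1.2) itself; the function names and the direct solver are illustrative choices.

```python
import numpy as np

def penalized_fit(y, lam):
    """Minimize ||y - f||^2 + lam * ||D2 f||^2, where D2 is the
    second-difference operator: a discrete one-dimensional analogue of the
    roughness penalties in (1.2)."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)        # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(200)

rough = lambda f: np.sum(np.diff(f, n=2) ** 2)  # discrete roughness measure
f_small, f_large = penalized_fit(y, 0.1), penalized_fit(y, 100.0)
# A larger roughness parameter yields a smoother (less rough) fit
assert rough(f_large) < rough(f_small) < rough(y)
```

The smoother always has roughness no larger than the raw data, since the penalized criterion evaluated at the minimizer cannot exceed its value at f = y.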
Note that the two penalty terms on the right-hand side of (1.2) do not depend on s and t. Therefore, the estimator β̂_SS may suffer from over- and under-smoothing when, for instance, the true coefficient function β is wiggly or peaked only in some parts of the domain. To solve this problem, we consider two adaptive roughness parameters that are allowed to vary over the domain S × T. In this way, more flexible estimators can be obtained to improve the estimation of the coefficient function.
Methods that use adaptive roughness parameters are very popular and well established in the field of nonparametric regression, and are referred to as adaptive methods. In particular, the smoothing spline estimator for nonparametric regression [36,13,11,14] has been extended by different authors to take into account the non-uniform smoothness along the domain of the function to be estimated [33,27,35,37,38].
In this paper, a spatially adaptive estimator is proposed as the solution of the following minimization problem

$$\underset{\alpha \in \mathcal{S}_{k_1,k_2,M_1,M_2}}{\operatorname{argmin}} \sum_{i=1}^{n} \int_T \Big[ Y_i(t) - \int_S X_i(s)\,\alpha(s,t)\,ds \Big]^2 dt + \int_S \int_T \lambda_s(s,t) \big[ L_s^{m_s}\alpha(s,t) \big]^2 ds\,dt + \int_S \int_T \lambda_t(s,t) \big[ L_t^{m_t}\alpha(s,t) \big]^2 ds\,dt, \qquad (1.3)$$

where the two roughness parameters λ_s(s,t) and λ_t(s,t) are functions that produce different amounts of penalty and thus allow the estimator to spatially adapt, i.e., to accommodate varying degrees of roughness over the domain S × T. Therefore, the model may accommodate the local behavior of β by imposing a heavier penalty in regions of lower smoothness. Because λ_s(s,t) and λ_t(s,t) are intrinsically infinite dimensional, their specification could be rather complicated without further assumptions. The proposed estimator is applied to the FoF linear regression model reported in (1.1) and is referred to as the adaptive smoothing spline (AdaSS) estimator. It is obtained as the solution of the optimization problem in (1.3), with λ_s(s,t) and λ_t(s,t) chosen based on an initial estimate of the partial derivatives L_s^{m_s}α(s,t) and L_t^{m_t}α(s,t). The rationale behind this choice is to allow the contribution of λ_s(s,t) and λ_t(s,t) to the penalties in (1.3) to be small over regions where the initial estimate has large m_s-th and m_t-th curvatures (i.e., partial derivatives), respectively. This can be regarded as an extension to the FoF linear regression model of the idea of Storlie et al. [35] and Abramovich and Steinberg [1].
Moreover, to overcome some limitations of the widely used grid-search method [4], a new evolutionary algorithm is proposed for the choice of the unknown parameters needed to compute the AdaSS estimator.
The rest of the paper is organized as follows. In Section 2.1, the proposed estimator is presented. Computational issues involved in the AdaSS estimator calculation are discussed in Section 2.2 and Section 2.3. In Section 3, by means of a Monte Carlo simulation study, the performance of the proposed estimator is compared with that achieved by competing estimators that have already appeared in the literature. Lastly, two real-data examples are presented in Section 4 to illustrate the practical applicability of the proposed estimator. Conclusions are drawn in Section 5.

The Estimator
The AdaSS estimator β̂_AdaSS is defined as the solution of the optimization problem in (1.3), referred to in what follows as problem (2.1), where the two roughness parameters are taken of the form

$$\lambda_s(s,t) = \frac{\lambda_s^{AdaSS}}{\big|\tilde{\beta}_s^{m_s}(s,t)\big|^{\gamma_s} + \delta_s}, \qquad \lambda_t(s,t) = \frac{\lambda_t^{AdaSS}}{\big|\tilde{\beta}_t^{m_t}(s,t)\big|^{\gamma_t} + \delta_t},$$

where β̃_s^{m_s} and β̃_t^{m_t} are initial estimates of the m_s-th and m_t-th order partial derivatives of the coefficient function. As in [1], we suggest applying the m_s-th and m_t-th order linear differential operators to the smoothing spline estimator β̂_SS in (1.2).

The Derivation of the AdaSS Estimator
The minimization in (2.1) is carried out over α ∈ S_{k_1,k_2,M_1,M_2}. This implicitly means that we are approximating β as

$$\beta(s,t) \approx \alpha(s,t) = \sum_{i=1}^{M_1+k_1} \sum_{j=1}^{M_2+k_2} b_{ij}\,\psi_i(s)\,\phi_j(t) = \boldsymbol{\psi}(s)^{T} \mathbf{B}\,\boldsymbol{\phi}(t),$$

where ψ and φ collect the B-spline basis functions spanning S_{k_1,k_2,M_1,M_2} and B is the matrix of basis coefficients. The first term on the right-hand side of (2.1) may then be rewritten as an integral over T of a trace expression (see [30], pp. 291-293, for the derivation), where Tr[A] denotes the trace of a square matrix A.
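The tensor-product expansion above reduces the evaluation of α on a grid to two matrix products. A minimal sketch, with monomial bases standing in for the B-splines (an illustrative substitution):

```python
import numpy as np

# Evaluating alpha(s,t) = sum_ij b_ij psi_i(s) phi_j(t) on a grid: with
# Psi (n_s x p) and Phi (n_t x q) the basis evaluation matrices and
# B (p x q) the coefficient matrix, the surface is Psi @ B @ Phi.T.
rng = np.random.default_rng(2)
s = np.linspace(0, 1, 40)
t = np.linspace(0, 1, 30)
Psi = np.vander(s, 4, increasing=True)   # monomials 1, s, s^2, s^3
Phi = np.vander(t, 5, increasing=True)   # monomials 1, t, ..., t^4
B = rng.standard_normal((4, 5))

surface = Psi @ B @ Phi.T                # 40 x 30 evaluation of alpha
# Pointwise check against the double sum at one grid point
i, j = 7, 11
val = sum(B[a, b] * s[i] ** a * t[j] ** b for a in range(4) for b in range(5))
assert np.isclose(surface[i, j], val)
```

The same two-sided product structure is what makes the penalties in (2.1) quadratic forms in the entries of B.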
In (2.4) and (2.5), we assume that β̃_s^{m_s} and β̃_t^{m_t} are well approximated by piecewise constant functions, whose values are constant on rectangles defined by the two knot sequences Θ_s and Θ_t. It can be easily proved, by following Schumaker [34] (p. 491, Theorem 12.7), that the approximation error in both cases goes to zero as the mesh widths δ_s = max_i (τ_{s,i+1} − τ_{s,i}) and δ_t = max_j (τ_{t,j+1} − τ_{t,j}) go to zero. Therefore, β̃_s^{m_s} and β̃_t^{m_t} can be exactly recovered by uniformly increasing the numbers of knots L_s and L_t. In this way, the two penalties on the right-hand side of (2.1) can be rewritten as quadratic forms in the basis coefficients, and the optimization problem in (2.1) can be approximated by a matrix-valued problem in B_AS, (2.8), or, after vectorization, by a problem in b_AS = vec(B_AS), (2.9), where L_{rw,ij} = (R_{t,j} ⊗ W_{s,i}) and L_{wr,ij} = (W_{t,j} ⊗ R_{s,i}), for i = 1, ..., L_s + 1 and j = 1, ..., L_t + 1. For a matrix A ∈ R^{j×k}, vec(A) indicates the vector of length jk obtained by stacking the columns of A, and ⊗ is the Kronecker product. Because the matrices W_t, L_{wr,ij} and L_{rw,ij}, for i = 1, ..., L_s + 1 and j = 1, ..., L_t + 1, are positive definite, and by assuming that X^T X is positive definite, the minimizer of the optimization problem in (2.9) exists, is unique and has a closed-form expression [5], denoted b̂_AdaSS in (2.10). To obtain b̂_AdaSS in (2.10), the tuning parameters λ_s^{AdaSS}, δ_s, γ_s, λ_t^{AdaSS}, δ_t, γ_t must be suitably chosen. This issue is discussed in Section 2.3.
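The vectorization step relies on the classical identity vec(AXB) = (Bᵀ ⊗ A) vec(X) for the column-wise vec operator, which can be checked numerically:

```python
import numpy as np

# Column-wise vec and the Kronecker product satisfy vec(A X B) = (B' kron A) vec(X),
# which is what turns the matrix-valued problem (2.8) into the vectorized
# quadratic problem (2.9).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

vec = lambda M: M.reshape(-1, order="F")   # stack columns into one vector
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```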

The Algorithm for the Parameter Selection
There are some tuning parameters in the optimization problem (2.9) that must be chosen to obtain the AdaSS estimator. Usually, the tensor product space S_{k_1,k_2,M_1,M_2} is chosen with k_1 = k_2 = 4, i.e., cubic B-splines, and equally spaced knot sequences. Although the choice of M_1 and M_2 is not crucial [8], it should allow the final estimator to capture the local behaviour of the coefficient function β; that is, M_1 and M_2 should be sufficiently large. The smoothness of the final estimator is controlled by the two penalty terms on the right-hand side of (2.9).
The tuning parameters λ_s^{AdaSS}, δ_s, γ_s, λ_t^{AdaSS}, δ_t, γ_t could be fixed by using conventional K-fold cross-validation (CV) [15], where the combination of parameters to be explored is chosen by means of the classic grid-search method [15], that is, an exhaustive search through a manually specified subset of the tuning parameter space [3]. Although, in our setting, grid search is embarrassingly parallel [16], it is not scalable because it suffers from the curse of dimensionality. Moreover, even if this is beyond the scope of the present work, note that the number of combinations to explore grows exponentially with the number of tuning parameters, which makes the application of the proposed method to the FoF linear model with multiple predictors unsuitable. Therefore, to facilitate the use of the proposed method by practitioners, in what follows we propose a novel evolutionary algorithm for tuning parameter selection, referred to as the evolutionary algorithm for adaptive smoothing estimator (EAASS), inspired by the population based training (PBT) introduced by Jaderberg et al. [19]. The PBT algorithm was introduced to address the issue of hyperparameter optimization for neural networks. It bridges and extends parallel search methods (e.g., grid search and random search) and sequential optimization methods (e.g., hand tuning and Bayesian optimization). The former run many parallel optimization processes for different combinations of hyperparameter values and then choose the combination that shows the best performance. The latter perform several steps of a few parallel optimizations, where, at each step, information coming from the previous step is used to identify the combinations of hyperparameter values to explore. For further details on the PBT algorithm, the reader should refer to [19], where the authors demonstrate its effectiveness and wide applicability. The pseudo code of the EAASS algorithm is given in Algorithm 1.

Algorithm 1 EAASS algorithm
1: Choose the initial population P = {p_i} of combinations of tuning parameter values
2: Obtain the set V = {v_i} of estimated prediction errors corresponding to P
3: repeat
4:     Identify the set Q ⊆ P and the corresponding Z ⊆ V        ▷ exploitation
5:     for p_i ∈ Q do                                             ▷ exploration
6:         Obtain the new combination of tuning parameter values p'_i
7:         Obtain the new estimated prediction error v'_i corresponding to p'_i
8:     end for
9:     Define Q' = {p'_i} and Z' = {v'_i}
10:    Set P = (P \ Q) ∪ Q' and V = (V \ Z) ∪ Z'
11: until a stopping condition is met

The first step is the identification of an initial population P of tuning parameter combinations p_i. This can be done, for each combination and each tuning parameter, by randomly selecting a value in a pre-specified range. Then, the set V of estimated prediction errors v_i corresponding to P is obtained by means of K-fold CV. We choose a subset Q of P, by following a given exploitation strategy, and thus the corresponding subset Z of V. A typical exploitation strategy is truncation selection, where the worst r%, for 0 ≤ r ≤ 100, of P, in terms of estimated prediction error, is substituted by elements randomly sampled from the remaining (100 − r)% of the current population [19]. The following step consists of an exploration strategy, where the tuning parameter combinations in Q are substituted by new ones. The simulation study in Section 3 and the real-data examples in Section 4 are based on a perturbation where each tuning parameter value of the given combination is randomly perturbed by a factor of 1.2 or 0.8. The exploitation and exploration phases are repeated until a stopping condition is met, e.g., a maximum number of iterations. Other exploration and exploitation strategies can be found in [2]. At last, the selected tuning parameter combination is obtained as the element of P that achieves the lowest estimated prediction error. As a remark, in our trials the AdaSS estimator works quite well with
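A compact sketch of the EAASS loop on a toy error surface, with truncation selection as the exploitation strategy and the 0.8/1.2 perturbation as the exploration strategy. The function names, the toy loss and the population layout are illustrative; in practice the loss would be the K-fold CV prediction error:

```python
import random

def eaass(loss, init_pop, n_iter=15, trunc=0.25, factors=(0.8, 1.2), seed=0):
    """Sketch of the EAASS loop: truncation selection (exploitation)
    followed by random perturbation of each tuning parameter (exploration)."""
    rng = random.Random(seed)
    pop = [dict(p) for p in init_pop]
    errs = [loss(p) for p in pop]
    for _ in range(n_iter):
        order = sorted(range(len(pop)), key=lambda i: errs[i])
        n_bad = max(1, int(trunc * len(pop)))
        worst, best = order[-n_bad:], order[:-n_bad]
        for i in worst:                       # exploitation: copy a survivor...
            pop[i] = dict(pop[rng.choice(best)])
            for k in pop[i]:                  # exploration: ...then perturb it
                pop[i][k] *= rng.choice(factors)
            errs[i] = loss(pop[i])
    return pop[min(range(len(pop)), key=lambda i: errs[i])]

# Toy CV-error surface with optimum at (lam_s, lam_t) = (1, 1)
toy_loss = lambda p: (p["lam_s"] - 1) ** 2 + (p["lam_t"] - 1) ** 2
init = [{"lam_s": 10.0 ** k, "lam_t": 10.0 ** -k} for k in (-2, -1, 1, 2)]
best = eaass(toy_loss, init, n_iter=50)
assert toy_loss(best) < toy_loss(init[0])
```

Since the best member of the population is never truncated, the returned error can never exceed the best initial error; the multiplicative 0.8/1.2 perturbation then explores the parameter space on a log scale.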

Simulation Study
In this section, the performance of the AdaSS estimator is assessed on several simulated datasets. In particular, we compare the AdaSS estimator, with cubic B-splines and m_s = m_t = 2, with five competing methods that represent the state of the art in FoF linear regression model estimation. The first two are those proposed by Ramsay and Silverman [30]. The first one, hereinafter referred to as the SMOOTH estimator, is the smoothing spline estimator described in (1.2), whereas the second one, hereinafter referred to as the TRU estimator, assumes that the coefficient function is in a finite-dimensional tensor product space generated by two sets of B-splines, with regularization achieved by choosing the space dimension. Then, we also consider the estimators proposed by Yao et al. [41] and Canale and Vantini [6]. The former is based on the functional principal component decomposition and is hereinafter referred to as the PCA estimator, while the latter relies on a ridge-type penalization and is hereinafter referred to as the RIDGE estimator. Lastly, as the fifth alternative, we explore the estimator proposed by Luo and Qi [23], hereinafter referred to as SIGCOMP. For illustrative purposes, we also consider a version of the AdaSS estimator, referred to as AdaSStrue, whose roughness parameters are calculated by assuming that the true coefficient function is known. Obviously, the AdaSStrue estimator has no practical meaning, because the true coefficient function is never known. However, it allows one to better understand the influence of the initial estimates of the partial derivatives on the AdaSS performance. All the unknown parameters of the competing methods considered are chosen by means of 10-fold CV. The tuning parameters of the AdaSS and AdaSStrue estimators are chosen through the EAASS algorithm. The set P is obtained by using 10-fold CV, the exploitation and exploration phases are as described in Section 2.3, and a maximum
number of iterations equal to 15 is set as the stopping condition. For each simulation, a training sample of n observations is generated along with a test set T of size N = 4000. They are used to estimate β and to test the predictive performance of the estimated model, respectively. Three different sample sizes are considered, viz., n = 150, 500, 1000. The estimation accuracy of the estimators is assessed by using the integrated squared error (ISE), defined as

$$ISE = \frac{1}{A} \int_S \int_T \big[ \hat{\beta}(s,t) - \beta(s,t) \big]^2 ds\,dt,$$

where A is the measure of S × T. The ISE measures the estimation error of β̂ with respect to β. The predictive accuracy is instead measured through the prediction mean squared error (PMSE), defined as

$$PMSE = \frac{1}{N} \sum_{i \in \mathcal{T}} \int_T \big[ Y_i(t) - \hat{Y}_i(t) \big]^2 dt.$$

The observations in the training and test sets are generated as follows. The covariate X_i and the errors ε_i are generated as linear combinations of cubic B-splines, Ψ^x and Ψ^ε, with evenly spaced knots, i.e., X_i = Σ_{j=1}^{32} x_{ij} Ψ^x_j and ε_i = k Σ_{j=1}^{20} e_{ij} Ψ^ε_j. The coefficients x_{ij} and e_{ij}, for i = 1, ..., n, j = 1, ..., 32 and j = 1, ..., 20, are independent realizations of a standard normal random variable, and the numbers of basis functions have been randomly chosen between 10 and 50. The constant k is chosen to fix the signal-to-noise ratio SNR, defined in terms of Var_X, the variance with respect to the random covariate X. Then, given the coefficient function β, the response Y_i is obtained.
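Grid-based approximations of the two accuracy measures can be sketched as follows; the helper names and the test surface are illustrative, and the integrals are replaced by Riemann sums on a common evaluation grid:

```python
import numpy as np

def ise(beta_hat, beta, ds, dt, area=1.0):
    """Grid approximation of the integrated squared error between the
    estimated and the true coefficient surface, normalized by the measure
    of the domain S x T."""
    return np.sum((beta_hat - beta) ** 2) * ds * dt / area

def pmse(y_hat, y, dt):
    """Grid approximation of the prediction mean squared error: y_hat and y
    are (N, n_t) matrices of curves evaluated on a common grid over T."""
    return np.mean(np.sum((y_hat - y) ** 2, axis=1) * dt)

grid = np.linspace(0, 1, 100)
ds = dt = grid[1] - grid[0]
S, T = np.meshgrid(grid, grid, indexing="ij")
beta = np.sin(2 * np.pi * S) * np.cos(2 * np.pi * T)

assert ise(beta, beta, ds, dt) == 0.0          # exact recovery gives zero ISE
assert ise(beta + 0.1, beta, ds, dt) > 0.0     # any bias gives positive ISE
```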

Mexican Hat Function
The Mexican hat function is a linear function with a sharp smoothness variation in the central part of the domain. In this case, the coefficient function β is defined in terms of the density φ of a bivariate normal distribution with mean μ = (0.6, 0.6)^T and diagonal covariance matrix Σ = diag(0.001, 0.001), which generates a sharp peak at (0.6, 0.6). Figure 1 displays the AdaSS and the SMOOTH estimates along with the true coefficient function for a randomly selected simulation run. The proposed estimator tends to be smoother on the flat region and is able to better capture the peak in the coefficient function (at t ≈ 0.6) than the SMOOTH estimate. The latter, to perform reasonably well along the whole domain, selects tuning parameters that are not sufficiently small (large) on the peaky (flat) region. This is also reflected in the visual quality of the AdaSS estimate with respect to the competitor ones. In Figure 2 and the top tier of Table 1, the values of ISE and PMSE achieved by the AdaSS, AdaSStrue, and competitor estimators are shown as functions of the sample size n. Setting aside the AdaSStrue estimator, the AdaSS estimator yields the lowest ISE for all sample sizes, and thus has the lowest estimation error. In terms of PMSE, it is the best one for n = 150, whereas for n = 500, 1000 it performs comparably with the SIGCOMP and PCA estimators. The performance of the AdaSStrue and AdaSS estimators is very similar in terms of ISE, whereas the AdaSStrue shows a lower PMSE. However, as expected, the effect of the knowledge of the true coefficient function tends to disappear as n increases, because the partial derivative estimates become more accurate.
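A sketch of a surface of this type, combining a linear part with a sharp peak generated by the stated bivariate normal density; the linear part and the peak scaling are illustrative assumptions, and only the density parameters come from the text:

```python
import numpy as np

def mexican_hat(s, t, mu=0.6, var=0.001, scale=0.01):
    """Illustrative 'Mexican hat'-type surface: a linear part plus a sharp
    peak proportional to a bivariate normal density with mean (mu, mu) and
    covariance diag(var, var)."""
    peak = np.exp(-((s - mu) ** 2 + (t - mu) ** 2) / (2 * var))
    peak /= 2 * np.pi * var                   # bivariate normal density
    return s + t + scale * peak               # linear part + localized peak

g = np.linspace(0, 1, 101)
S, T = np.meshgrid(g, g, indexing="ij")
beta = mexican_hat(S, T)

# The surface attains its maximum at the peak location (0.6, 0.6)
i, j = np.unravel_index(beta.argmax(), beta.shape)
assert abs(g[i] - 0.6) < 1e-9 and abs(g[j] - 0.6) < 1e-9
```

The tiny variance makes the peak essentially invisible a few hundredths away from (0.6, 0.6), which is exactly the kind of localized curvature that motivates the adaptive penalties.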

Dampened Harmonic Motion Function
This simulation scenario considers as coefficient function β the dampened harmonic motion function, also known as the spring function in the engineering literature. It is characterized by a sinusoidal behaviour with exponentially decreasing amplitude. Figure 3 displays the AdaSS and the SMOOTH estimates along with the true coefficient function. Also in this scenario, the AdaSS estimate is smoother than the SMOOTH estimate in regions of small curvature, but it is more flexible where the coefficient function is more wiggly. Intuitively, the SMOOTH estimator trades off its smoothness over the whole domain. Indeed, it over-smooths at small values of s and t and under-smooths elsewhere.
In Figure 4 and in the second tier of Table 1, the values of the ISE and PMSE for the AdaSS, AdaSStrue, and competitor estimators are shown as functions of the sample size n, in the case of the dampened harmonic motion function.

Table 1: The integrated squared error (ISE) and the prediction mean squared error (PMSE) for the TRU, SMOOTH, PCA, RIDGE, SIGCOMP, AdaSS and AdaSStrue estimators. The numbers outside the parentheses are the averages over 100 Monte Carlo replications, and the numbers inside parentheses are the corresponding standard errors. The values corresponding to the AdaSStrue estimator are emphasized to underline the fact that they rely on the knowledge of the true coefficient function, which is unavailable in real applications. In bold are marked the lowest values among the AdaSS and the competitors.

Even in this case, the AdaSS estimator achieves the lowest ISE for all sample sizes, and thus the lowest estimation error, without taking into account the AdaSStrue estimator. Strictly speaking, in terms of PMSE, the proposed estimator is not always the best choice, but it shows only a small difference with the best methods, viz., the PCA and SIGCOMP estimators. In this case, the AdaSS and AdaSStrue performance is very similar for n = 500, 1000, whereas, for n = 150, the AdaSStrue performs slightly better, especially in terms of PMSE.

Rapid Change Function
In this scenario, the true coefficient function β is the rapid change function. Figure 5 shows the AdaSS and SMOOTH estimates when β is the rapid change function. The SMOOTH estimate is rougher than the AdaSS one in regions that are far from the rapid change point. On the contrary, the AdaSS estimate is able to be smoother in the flat region and to be as accurate as the SMOOTH estimate near the rapid change point. In Figure 6 and the third tier of Table 1, the values of the ISE and PMSE for the AdaSS, AdaSStrue, and competitor estimators are shown for sample sizes n = 150, 500, 1000. In this case, the AdaSS estimator outperforms the competitors, both in terms of ISE and PMSE. Also in this case, the performance of the AdaSStrue estimator is slightly better than that of the AdaSS one, and this difference in performance reduces as n increases.

Real-data Examples
In this section, two real datasets, namely the Swedish mortality and the ship CO2 emission datasets, are considered in order to assess the performance of the AdaSS estimator in real applications.

Swedish Mortality Dataset
The Swedish mortality dataset (available from the Human Mortality Database, http://mortality.org) is very well known in the functional data literature as a benchmark dataset. It has been analysed by Chiou and Müller [10] and Ramsay et al. [31], among others. In this analysis, we consider the log-hazard rate functions of the Swedish female mortality data for year-of-birth cohorts of females born in the years 1751-1935 with ages 0-80. The functions from 1751 (1752) to 1934 (1935) are considered as observations X_i (Y_i) of the predictor (response) in (1.1), i = 1, ..., 184. In this way, the relationship between two consecutive log-hazard rate functions becomes the focus of the analysis. To assess the predictive performance of the methods considered in the simulation study (Section 3), 100 times, 166 observations out of 184 are randomly chosen as a training set to fit the model. The 18 remaining ones are used as a test set to calculate the PMSE. The averages and standard deviations of the PMSEs are shown in the first line of Table 2. The AdaSS estimator outperforms all the competitors. Only the RIDGE estimator has comparable predictive performance. Figure 8 shows the AdaSS estimates along with the RIDGE estimates, which represent the best competitor method in terms of PMSE. The proposed estimator has slightly better performance than the competitor, but, at the same time, it is much more interpretable. In fact, it is much smoother where the coefficient function seems to be mostly flat and successfully captures the pattern of β in the peak region. On the contrary, the RIDGE estimate is particularly rough over regions of low curvature.

Ship CO 2 Emission Dataset
The ship CO2 emission dataset has been thoroughly studied in recent years [22,32,7,9]. It was provided by the shipping company Grimaldi Group to address some aspects related to the issue of monitoring fuel consumption and CO2 emissions for Ro-Pax ships that sail along a route in the Mediterranean Sea. In particular, we focus on the study of the relation between the fuel consumption per hour (FCPH), assumed as the response, and the speed over ground (SOG), assumed as the predictor. The observations considered were recorded from 2015 to 2017. Figure 9 shows the 44 available observations of SOG and FCPH [9]. Similarly to the Swedish mortality dataset, the prediction performance of the methods is assessed by randomly choosing 40 out of 44 observations to fit the model and by using the 4 remaining observations to compute the PMSE. This is repeated 100 times. The averages and standard deviations of the PMSEs are listed in the second line of Table 2. The AdaSS estimator is in this case outperformed by the RIDGE estimator, which achieves the lowest PMSE. However, as shown in Figure 10, the AdaSS estimator is able both to estimate the coefficient function well over peaky regions, as the RIDGE estimator does, and to smoothly adapt over the remaining part of the domain. In this case, the PCA estimator also achieves a smaller PMSE than that of the proposed estimator. However, the PCA estimator is even rougher than the RIDGE estimator and, thus, it is not shown in Figure 10.

Conclusion
In this article, the AdaSS estimator is proposed for the function-on-function linear regression model, where each value of the response, at any domain point, depends linearly on the full trajectory of the predictor. The introduction of two adaptive smoothing penalties, based on initial estimates of the partial derivatives of the coefficient function, allows the proposed estimator to better adapt to the coefficient function. By means of a simulation study, the proposed estimator has shown favourable performance with respect to five competitors that have already appeared in the literature, both in terms of estimation and prediction error. The adaptive feature of the AdaSS estimator is advantageous for the interpretability of the results with respect to the competitors. Moreover, its performance has proven competitive also with respect to the case where the true coefficient function is known. Finally, the proposed estimator has been successfully applied to real-data examples, viz., the Swedish mortality and ship CO2 emission datasets. However, some challenges are still open. Even though the proposed evolutionary algorithm has proven to perform particularly well both in the simulation study and in the real-data examples, the choice of the tuning parameters still remains a critical issue because of the curse of dimensionality. This could be even more problematic when extending the AdaSS estimator to the FoF regression model with multiple predictors.

Fig 1: AdaSS (solid line) and SMOOTH (dashed line) estimates of the coefficient functions and the TRUE coefficient function β (dotted line) for different values of t in the case of the Mexican hat function.

Fig 3: AdaSS (solid line) and SMOOTH (dashed line) estimates of the coefficient functions and the TRUE coefficient function β (dotted line) for different values of t in the case of the dampened harmonic motion function.

Fig 4: (a) The integrated squared error (ISE) and (b) the prediction mean squared error (PMSE) ± standard error for the TRU, SMOOTH, PCA, SIGCOMP, AdaSS and AdaSStrue estimators in the case of the dampened harmonic motion function. The RIDGE estimator is not considered due to its markedly different performance.

Fig 5: AdaSS (solid line) and SMOOTH (dashed line) estimates of the coefficient functions and the TRUE coefficient function β (dotted line) for different values of t in the case of the rapid change function.

Fig 6: (a) The integrated squared error (ISE) and (b) the prediction mean squared error (PMSE) ± standard error for the TRU, SMOOTH, PCA, SIGCOMP, AdaSS and AdaSStrue estimators in the case of the rapid change function. The RIDGE estimator is not considered due to its markedly different performance.

Fig 8: AdaSS (solid line) and RIDGE (dashed line) estimates of the coefficient functions for different values of t in the Swedish Mortality dataset.

Fig 10: AdaSS (solid line) and RIDGE (dashed line) estimates of the coefficient functions for different values of t in the ship CO 2 emission dataset.
Therefore, the presence of δ_s and δ_t makes β̂_AdaSS more robust against the choice of the initial estimate of the linear differential operators applied to β with respect to s and t. Finally, γ_s and γ_t control the amount of weight placed on β̃_s^{m_s} and β̃_t^{m_t}.
The observations in the test set are centred by subtracting from each observation the corresponding sample mean function estimated in the training set.

Table 2
The prediction mean squared error (PMSE) for the TRU, SMOOTH, PCA, RIDGE, SIGCOMP, and AdaSS estimators. The numbers outside the parentheses are the averages of the PMSE over 100 replications, and the numbers inside parentheses are the corresponding standard errors.