Radial basis functions as surrogate models with a priori bias in comparison with a posteriori bias
Abstract
In order to obtain robust performance, the established approach when using radial basis function networks (RBF) as metamodels is to add an a posteriori bias which is defined by extra orthogonality constraints. We argue that this is not needed; instead, the bias can simply be set a priori by using the normal equation, i.e. the bias becomes the corresponding regression model. In this paper we demonstrate that the performance of our suggested approach with a priori bias is in general as good as, or for many test examples even better than, the performance of RBF with a posteriori bias. Using our approach, it is clear that the global response is modelled with the bias and that the details are captured with the radial basis functions. The accuracy of the two approaches is investigated by using multiple test functions of different dimensionality. Furthermore, several modeling criteria, such as the type of radial basis function used in the RBFs, the dimension of the test functions, the sampling technique and the sample size, are considered in order to study their effect on the performance of the approaches. The power of RBF with a priori bias for surrogate-based design optimization is also demonstrated by solving an established engineering benchmark of a welded beam, and another benchmark for different sampling sets generated by successive screening, random, Latin hypercube and Hammersley sampling, respectively. The results obtained by evaluation of the performance metrics, the modeling criteria and the presented optimal solutions demonstrate the promising potential of our RBF with a priori bias, in addition to the simplicity and straightforward use of the approach.
Keywords
Metamodeling; Radial basis function; Design optimization; Design of experiments
1 Introduction
With exponentially increasing computing power, designers today have the possibility, through simulation-driven product development, to create new innovative and complex products in a short time. In addition, simulation-based design reduces the cost of product development by eliminating the need to create several physical prototypes. Furthermore, a designer can create an optimized design with respect to multiple objectives under several constraints and design variables. However, the models and simulations, particularly those involved in multidisciplinary design optimization (MDO), can be very complex and computationally expensive, see e.g. the multiobjective optimization of a disc brake in Amouzgar et al. (2013). Surrogate models, or metamodels, have been widely accepted in the MDO community to deal with this issue. A metamodel is an explicit approximation function that predicts the response of a computationally expensive simulation-based model, such as a nonlinear finite element model, thereby establishing a relation between the input variables and their corresponding responses. In general, the aim of a metamodel is to approximate the original function over a given design domain. Many metamodeling methods have been developed for metamodel-based design optimization problems. Some of the most recognized and studied metamodels are response surface methodology (RSM) or polynomial regression (Box and Wilson 1951), Kriging (Sacks et al. 1989), radial basis functions (Hardy 1971), support vector regression (SVR) (Vapnik et al. 1996) and artificial neural networks (Haykin 1998). Extensive surveys and reviews of different metamodeling methods and their applications are given by e.g. Simpson et al. (2001a, 2008), Wang and Shan (2007) and Forrester and Keane (2009).
Several comparative studies investigating the accuracy and effectiveness of various surrogate models can be found in the literature. However, there is no agreement on the dominance of one specific method over the others. In an early study, Simpson et al. (1998) compared second-order response surfaces with Kriging. The metamodels were applied to a multidisciplinary design problem and four optimization problems. Jin et al. (2001) conducted a systematic comparison of four different metamodeling techniques, polynomial regression, Kriging, multivariate adaptive regression splines and radial basis functions, using 13 mathematical test functions and an engineering test problem while considering various characteristics of the sample data and evaluation criteria. They concluded that, overall, RBF performed best for both large- and small-scale problems with a high order of nonlinearity. Fang et al. (2005) studied RSM and RBF to find the best method for modeling the highly nonlinear responses found in impact-related problems, and also compared the RSM and RBF models on a highly nonlinear test function. Despite the computational cost of RBF, they concluded that RBF dominates RSM in such optimization problems. Mullur and Messac (2006) compared the extended radial basis function (ERBF) with three other approaches: RSM, RBF and Kriging. A number of modelling criteria, including problem dimension, sampling technique, sample size and performance criteria, were employed. The ERBF was identified as the superior method, since parameter setting was avoided and the method resulted in an accurate metamodel without a significant increase in computation time. Kim et al. (2009) performed a comparative study of four metamodeling techniques using six mathematical functions and evaluated the results by the root mean squared error. Kriging and moving least squares showed promising results in that study.
In another study, Zhao and Xue (2010) compared four metamodeling methods by considering three characteristics of sample quality (sample size, uniformity and noise) and four performance measures (accuracy, confidence, robustness and efficiency). Backlund et al. (2012) studied the accuracy of RBF, Kriging and support vector regression (SVR) with respect to their capability of approximating base functions with a large number of variables and varying modality. The conclusion was that Kriging appeared to be the dominant method in its ability to approximate accurately with fewer or an equivalent number of training points. Also, unlike RBF and SVR, the parameter tuning in Kriging was done automatically during the training process. RBF was found to be the slowest in building the model for a large number of training points, whereas SVR was the fastest on large-scale multimodal problems.
In most of the previously conducted comparison studies, RBF has been shown to perform well on different test problems and engineering applications. Therefore, in this paper, we do not see a need to compare RBF with other metamodeling techniques again. Instead, we focus on a detailed and comprehensive comparison of our proposed RBF with a priori bias with the classical augmented RBF (RBF with a posteriori bias). The factors present during the construction of a metamodel (modeling criteria) include the dimension of the problem, the type of radial basis function used in the RBF, the sampling technique and the sample size. The evaluation of these modeling criteria and their effect on the accuracy, performance and robustness of a metamodel will help designers to choose an appropriate metamodeling technique for their specific application. A recent comparison study of these two approaches has been conducted by the authors (Amouzgar and Strömberg 2014). The preliminary results revealed the potential of RBF with a priori bias in predicting the test problem values. This potential is evaluated in detail in this paper for nine established mathematical test functions. A pre-study on the performance of our RBF with a priori bias in metamodel-based design optimization is also performed for two benchmarks. The results clearly demonstrate that our RBF with a priori bias is a most attractive choice of surrogate model in MDO.
2 Radial Basis Function Networks
2.1 Bias known a priori
We suggest setting up the RBF in (1) by treating the bias as known a priori. This approach is presented first. The established approach, in which the bias is treated as unknown, is presented next.
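Since equations (1)–(20) are not reproduced in this excerpt, the following is only a minimal numerical sketch of the a priori idea, written in Python rather than the authors' Matlab, and assuming a cubic basis function and the six-term quadratic bias for two variables mentioned in Section 5: the bias coefficients β are fitted first by the normal equation (ordinary least squares), and the RBF weights λ then interpolate the regression residuals.

```python
import numpy as np

def quad_basis(X):
    """Quadratic polynomial basis [1, x1, x2, x1^2, x1*x2, x2^2] for 2-D points."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x1 * x2, x2**2])

def fit_rbf_a_priori(X, y, phi=lambda r: r**3):
    """A priori bias: fit the regression bias first, then interpolate its residuals."""
    P = quad_basis(X)
    # Normal-equation fit of the bias (solved via least squares for stability).
    beta, *_ = np.linalg.lstsq(P, y, rcond=None)
    # The RBF weights interpolate the residuals y - P @ beta.
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    lam = np.linalg.solve(phi(r) + 1e-10 * np.eye(len(X)), y - P @ beta)

    def predict(Xq):
        rq = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2)
        return phi(rq) @ lam + quad_basis(Xq) @ beta

    return predict
```

The two stages make the interpretation transparent: the regression bias carries the global trend, while the radial basis functions only model what the bias leaves behind.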
2.2 Bias known a posteriori
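The equations of this section are likewise omitted in this excerpt. In the classical augmented RBF, the weights and the bias coefficients are solved simultaneously, with the extra orthogonality constraints appended to the interpolation conditions. A hedged Python sketch, assuming a cubic basis function and, for brevity, a linear rather than quadratic bias:

```python
import numpy as np

def fit_rbf_a_posteriori(X, y, phi=lambda r: r**3):
    """Augmented RBF: weights lam and bias b are solved together from the
    saddle-point system [[Phi, P], [P^T, 0]] [lam; b] = [y; 0], where
    P^T lam = 0 are the extra orthogonality constraints."""
    n = len(X)
    P = np.column_stack([np.ones(n), X])  # linear bias [1, x1, ..., xm] for brevity
    Phi = phi(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2))
    k = P.shape[1]
    A = np.block([[Phi, P], [P.T, np.zeros((k, k))]])
    sol = np.linalg.solve(A, np.concatenate([y, np.zeros(k)]))
    lam, b = sol[:n], sol[n:]

    def predict(Xq):
        rq = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=2)
        return phi(rq) @ lam + np.column_stack([np.ones(len(Xq)), Xq]) @ b

    return predict
```

In contrast to the a priori construction, the bias here falls out of the coupled system a posteriori, so its coefficients cannot be read as an ordinary regression of the data.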
3 Test functions
 1. Branin-Hoo function (Branin 1972)
$$ f_{1}=\left(x_{2}-\frac{5.1{x_{1}^{2}}}{4\pi^{2}}+\frac{5x_{1}}{\pi}-6\right)^{2}+10\left(1-\frac{1}{8\pi}\right)\cos(x_{1})+10. $$ (21)
 2. Goldstein-Price function (Goldstein and Price 1971)
$$ \begin{array}{ll} f_{2}=&\left(1+(x_{1}+x_{2}+1)^{2}\left(19-14x_{1}+3{x_{1}^{2}}-14x_{2}+6x_{1}x_{2}+3{x_{2}^{2}}\right)\right)\\ &\times\left(30+(2x_{1}-3x_{2})^{2}\left(18-32x_{1}+12{x_{1}^{2}}+48x_{2}-36x_{1}x_{2}+27{x_{2}^{2}}\right)\right). \end{array} $$ (22)
 3. Rastrigin function. In this study, the Rastrigin function with two variables is used (N = 2).
$$ f_{3} = 20 + \sum\limits_{i=1}^{N}\left({x_{i}^{2}} - 10\cos(2\pi x_{i})\right). $$ (23)
 4. Three-Hump Camel function
$$ f_{4}=2{x_{1}^{2}}-1.05{x_{1}^{4}}+\frac{{x_{1}^{6}}}{6}+x_{1}x_{2}+{x_{2}^{2}}. $$ (24)
 5. Colville function
$$ \begin{array}{ll} f_{5}=& 100({x_{1}^{2}}-x_{2})^{2}+(x_{1}-1)^{2}+(x_{3}-1)^{2}+90({x_{3}^{2}}-x_{4})^{2}\\ &+10.1((x_{2}-1)^{2}+(x_{4}-1)^{2})+19.8(x_{2}-1)(x_{4}-1). \end{array} $$ (25)
 6. Math 1
$$ \begin{array}{ll} f_{6}=& (x_{1}-10)^{2}+5(x_{2}-12)^{2}+{x_{3}^{4}}+3(x_{4}-11)^{2}\\ &+10{x_{5}^{6}}+7{x_{6}^{2}}+{x_{7}^{4}}-4x_{6}x_{7}-10x_{6}-8x_{7}. \end{array} $$ (26)
 7. Rosenbrock10 function (Rosenbrock 1960). In this study, the Rosenbrock function with ten variables is used (N = 10).
$$ f_{7}=\sum\limits_{n=1}^{N-1}\left(100(x_{n+1}-{x_{n}^{2}})^{2}+(x_{n}-1)^{2}\right). $$ (27)
 8. Math 2 (a 10-variable mathematical function)
$$ f_{8}=\sum\limits_{m=1}^{10}\left(\frac{3}{10}+\sin\left(\frac{16}{15}x_{m}-1\right)+\sin\left(\frac{16}{15}x_{m}-1\right)^{2}\right). $$ (28)
 9. Math 3 (a 16-variable mathematical function, cf. Table 1).
Mathematical test functions

Function  Function name  No. of variables  Design range(s)
f_1  Branin-Hoo  2  x_1: [−5, 10], x_2: [0, 15]
f_2  Goldstein-Price  2  x_1, x_2: [−2, 2]
f_3  Rastrigin  2  x_1, x_2: [−5.12, 5.12]
f_4  Three-Hump Camel  2  x_1, x_2: [−5, 5]
f_5  Colville  4  x_i: [−10, 10], i = 1, 2, ..., 4
f_6  Math 1  7  x_i: [−10, 10], i = 1, 2, ..., 7
f_7  Math 2  10  x_i: [−1, 1], i = 1, 2, ..., 10
f_8  Rosenbrock10  10  x_i: [−5, 10], i = 1, 2, ..., 10
f_9  Math 3  16  x_i: [−1, 1], i = 1, 2, ..., 16
4 Modelling and performance criteria for comparison
Standard statistical error analysis is used to evaluate the accuracy of the two RBF approaches. Details of this analysis are presented in this section.
4.1 Performance metrics
Two standard performance metrics are applied to the off-design test points: (i) the root mean squared error (RMSE) and (ii) the maximum absolute error (MAE). The lower the RMSE and MAE values, the more accurate the metamodel; the aim is to have these two error measures as near to zero as possible.
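The two metrics can be stated compactly as follows; note that MAE here denotes the *maximum* absolute error over the test points, not the mean absolute error. (The normalization used later in (30)–(33) is not reproduced in this excerpt and is therefore omitted.)

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error over the off-design test points."""
    d = np.asarray(y_true) - np.asarray(y_pred)
    return np.sqrt(np.mean(d**2))

def max_abs_error(y_true, y_pred):
    """Maximum absolute error (MAE in this paper's sense) over the test points."""
    return np.max(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
```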
4.2 Radial basis functions
Several different radial basis functions can be used in constructing the RBF, as mentioned in Section 2. Each yields a different result depending on the nature of the problem. However, in real-world applications, the mathematical properties of the problem are usually not known in advance. Thus, a designer needs a robust choice of radial basis function which is as independent as possible of the nature of the problem and still results in an acceptably accurate metamodel. In this paper, four different radial basis functions, (i) linear, (ii) cubic, (iii) Gaussian and (iv) quadratic, formulated in (2), are used to study the effect of the radial basis function on the accuracy of the metamodels.
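Equation (2) is not reproduced in this excerpt, so the concrete forms below are assumptions based on common RBF practice; in particular, "quadratic" is taken here as the multiquadric form. θ is the shape parameter that the study later fixes to one on the unit hypercube.

```python
import numpy as np

theta = 1.0  # shape parameter; set to 1 after mapping the variables to the unit hypercube

# Common textbook forms of the four basis function families (assumed, since (2)
# is not reproduced here); each maps a radius r >= 0 to a scalar.
basis = {
    "linear":    lambda r: theta * r,
    "cubic":     lambda r: (theta * r)**3,
    "gaussian":  lambda r: np.exp(-theta * r**2),
    "quadratic": lambda r: np.sqrt(r**2 + theta**2),  # multiquadric (assumed form)
}
```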
4.3 Sampling techniques
Sampling techniques are used to create the DoEs to which the particular RBF is then fitted. A robust sampling technique is desired, so that a designer avoids dependencies on the sampling technique, as much as possible, across different problems. In other words, one would like a metamodeling technique that is as independent as possible of the sampling technique. In this study, three different sampling techniques are chosen, (i) random sampling (RND), (ii) Latin hypercube sampling (LHS) and (iii) Hammersley sequence sampling (HSS), and their effects on the accuracy of the two approaches are investigated. For the optimization problems studied at the end, we also compare these sampling techniques to a successive screening approach for generating appropriate DoEs.
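The core Latin hypercube idea, one point per stratum in every dimension, can be sketched as below. This is only a minimal illustration: the study itself uses Matlab's `lhsdesign` with iterations to maximize the minimum inter-point distance, which this sketch does not implement.

```python
import numpy as np

def latin_hypercube(n, dim, seed=None):
    """One Latin hypercube sample of n points in [0, 1)^dim: along every
    dimension, each of the n equal strata contains exactly one point."""
    rng = np.random.default_rng(seed)
    # Place one point uniformly inside each stratum i/n .. (i+1)/n ...
    u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
    # ... then shuffle the strata independently per column.
    for d in range(dim):
        u[:, d] = u[rng.permutation(n), d]
    return u
```

A maximin variant would generate several such hypercubes and keep the one with the largest minimum pairwise distance.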
4.4 Size of samples
The DoE size (size of samples) has an important effect on obtaining an accurate surrogate model. In general, increasing the size of the DoE will improve the quality of the metamodels when using the RBF approach; however, overfitting is a critical issue in these approaches. Three different sample sizes are used in this paper: (i) low, (ii) medium and (iii) high. The number of samples in each group is proportional to a reference value for low- and high-dimension problems. The number of coefficients k = (m + 1)(m + 2)/2 of a second-order polynomial in m variables is used as the reference, and for all test functions the DoE size is chosen as a multiple of k. The sample sizes for the low-dimension test functions are: (i) 1.5k for the low, (ii) 5k for the medium and (iii) 10k for the high sample size (cf. Table 2). For the high-dimension test functions the DoE sizes are: (i) 1.5k for the low, (ii) 2.5k for the medium and (iii) 5k for the high sample size.
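The reference value k and the resulting group sizes can be computed as below. The multipliers are chosen so as to reproduce the entries of Table 2 (e.g. 9/30/60 for the two-variable functions and 54/90/180 for Math 1); the rounding of non-integer products varies slightly in that table, so treat the rounding here as an approximation.

```python
def sample_sizes(m, high_dim):
    """DoE sizes (low, medium, high) for a problem with m design variables.
    k is the number of coefficients of a full second-order polynomial in m variables."""
    k = (m + 1) * (m + 2) // 2
    multipliers = (1.5, 2.5, 5.0) if high_dim else (1.5, 5.0, 10.0)
    return tuple(int(c * k) for c in multipliers)
```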
4.5 Test functions dimensionality
The dimension of a test function, i.e. the number of variables in the problem, is one of the most important properties when generating an accurate surrogate model. In order to investigate the effect of this modelling criterion on the two approaches, we divide the test functions into two categories: (i) low, where the number of variables is less than or equal to 4, and (ii) high, for test functions with more than 4 variables. The label "high" is relative to the first group; high-dimensional engineering problems generally involve considerably more variables. The results are grouped separately for low- and high-dimension test functions for all modeling criteria, so that a final conclusion can be drawn by studying the results.
5 Comparison procedure

Step 1: The number of DoE points is determined for each test function based on the three sample size groups (low, medium and high) in Table 2.

Step 2: The design domains are mapped linearly between 0 and 1 (the unit hypercube). The surrogate models are fitted to the mapped variables using the two approaches. For calculating the performance metrics, the metamodel is mapped back to the original space.

Step 3: To avoid any sensitivity of the metamodels to a specific DoE, 50 distinct sample sets are generated for each sample size of step 1 by using the RND and LHS techniques described in the previous section. Since the HSS technique is deterministic, only one sample set is generated by this method for each sample size. The Latin hypercube sampling (LHS) is performed by using the Matlab function "lhsdesign"; the Latin hypercube samples are created with 20 iterations to maximize the minimum distance between points. The Hammersley (HSS) samples are created from the Hammersley quasi-random sequence, using successive primes as bases, by an in-house Matlab code.

Step 4: Metamodels are constructed using the two RBF approaches (RBF_pri and RBF_pos) with each of the four different radial basis functions (linear, cubic, Gaussian and quadratic) for each set of DoEs generated by the three sampling techniques. Therefore, for each test function, 2 (RBF approaches) × 4 (radial basis functions) × 3 (sampling techniques) × 3 (sample sizes) × 50 (sets of DoEs) = 3600 surrogate models are constructed.

Step 5: 1000 test points are randomly selected within the design space. The exact function value \(\hat {f}_{i}\) and the predicted function value f_i at each test point are calculated. RMSE, MAE and the corresponding normalized values are computed by using (30) to (33). The averages of the normalized errors are calculated across the 50 sample sets; the average normalized root mean squared and maximum absolute errors are simply denoted NRMSE and NMAE in this paper. Finally, the relative difference measures of the computed average errors, NRMSE and NMAE, for RBF_pos are calculated by using (34) and (35).

Step 6: The procedure from step 1 to 5 is repeated for all test problems. In addition to the mean normalized errors (N R M S E and NMAE), the average of low dimension problems (the first five test functions) denoted by “Ave. Low”, the average of high dimension problems (test functions 6 to 9) expressed by “Ave. High” and the average error metrics of all 9 test functions shown by “Ave. All” are computed for the surrogate approaches using different sampling techniques.
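The Hammersley construction mentioned in step 3 (an in-house Matlab code in the study) can be sketched in Python as follows; the evenly-spaced-first-coordinate plus radical-inverse form is the standard construction and is assumed here to match the in-house implementation.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-`base` digits of i
    about the radix point, e.g. i=3, base=2: 11 -> 0.11 = 0.75."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, dim):
    """n Hammersley points in [0, 1)^dim: an evenly spaced first coordinate,
    plus radical inverses with successive primes as bases."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
    pts = np.empty((n, dim))
    pts[:, 0] = np.arange(n) / n
    for d in range(1, dim):
        pts[:, d] = [radical_inverse(i, primes[d - 1]) for i in range(n)]
    return pts
```

Unlike RND and LHS, the sequence is fully deterministic, which is why step 3 generates only one HSS set per sample size.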
Modeling criteria of test functions

Function  Function name  No. of variables  Problem dimension  Sample size (low / medium / high)  No. of test points
f_1  Branin-Hoo  2  Low  9 / 30 / 60  1000
f_2  Goldstein-Price  2  Low  9 / 30 / 60  1000
f_3  Rastrigin  2  Low  9 / 30 / 60  1000
f_4  Three-Hump Camel  2  Low  9 / 30 / 60  1000
f_5  Colville  4  Low  23 / 75 / 150  1000
f_6  Math 1  7  High  54 / 90 / 180  1000
f_7  Math 2  10  High  99 / 165 / 330  1000
f_8  Rosenbrock10  10  High  99 / 165 / 330  1000
f_9  Math 3  16  High  229 / 380 / 765  1000
It should be noted that, because the variables are mapped to a unit cube (in step 2), the parameter setting can be done without considering the magnitude of the design variables. Thus, the parameter 𝜃 used in the radial basis functions in (2) is set to one (𝜃 = 1). The bias chosen for this study, in (4), is a quadratic polynomial with 6 terms.
6 Results and discussion
In this section, the results gathered from the metamodels constructed according to the comparison procedure of the previous section are presented. The effect of each modeling criterion is discussed by comparing the two main error measures, NRMSE and NMAE, for the two RBF approaches in several tables and charts. Including all modeling criteria in the comparison of each criterion for all test functions would require an extensive and very detailed results section incorporating all 3600 surrogate models. This is outside the scope of this work and can be the topic of future studies. Therefore, for studying the effect of each modeling criterion, a specific selection of the other criteria is chosen, as described in the forthcoming sections.
Before presenting the results, it is worth mentioning that the computational cost of the proposed RBF_pri is lower than that of RBF_pos. This was investigated by measuring the training time of the two approaches for test functions 3 and 8 with 100 variables and 15453 sampling points, using the cubic radial basis function and the HSS sampling method. The computational times for f_3 are 346.67 and 396.97 seconds for RBF_pri and RBF_pos, respectively. Test function 8 is trained in 350.48 and 591.76 seconds by using RBF_pri and RBF_pos, respectively.
6.1 Effect of basis functions
NRMSE and NMAE (LHS sampling with high sample size)

Test function  RBF approach  NRMSE (linear / cubic / Gaussian / quadratic)  NMAE (linear / cubic / Gaussian / quadratic)
f_1  RBF_pri  0.1908 / 0.0952 / 0.4538 / 0.1349  1.8655 / 0.8358 / 6.0043 / 1.5904
f_1  RBF_pos  0.1951 / 0.0975 / 0.1979 / 0.4017  2.5765 / 0.9711 / 2.5429 / 6.3368
f_2  RBF_pri  0.3158 / 0.2594 / 0.1874 / 0.1620  3.0204 / 2.4426 / 1.9849 / 1.5635
f_2  RBF_pos  0.3735 / 0.2496 / 0.3674 / 0.1769  3.6027 / 2.4930 / 3.5192 / 1.8219
f_3  RBF_pri  0.3080 / 0.4122 / 8.1544 / 8.1544  2.3752 / 4.7238 / 225.677 / 87.9028
f_3  RBF_pos  0.3067 / 0.4162 / 0.3078 / 10.9940  2.4914 / 4.2085 / 2.4926 / 312.287
f_4  RBF_pri  0.3409 / 0.2634 / 0.1184 / 0.1521  2.0918 / 1.7894 / 1.3465 / 1.4218
f_4  RBF_pos  0.4612 / 0.2709 / 0.4538 / 0.1473  3.0001 / 1.9679 / 3.0208 / 1.6017
f_5  RBF_pri  0.2012 / 0.1967 / 0.1590 / 0.1752  1.2435 / 1.2760 / 1.4980 / 1.3941
f_5  RBF_pos  0.3220 / 0.2146 / 0.3219 / 0.1767  2.4984 / 1.5238 / 2.4598 / 1.6206
f_6  RBF_pri  0.4469 / 0.5012 / 0.6332 / 0.5617  2.1897 / 2.5543 / 3.3058 / 2.9341
f_6  RBF_pos  0.6063 / 0.5254 / 0.7355 / 0.6154  3.1961 / 2.8564 / 4.1352 / 3.3001
f_7  RBF_pri  0.1249 / 0.1185 / 0.1247 / 0.1178  3.1090 / 2.9903 / 3.3601 / 3.0411
f_7  RBF_pos  0.1162 / 0.1175 / 0.1162 / 0.1141  2.7253 / 2.9346 / 2.7001 / 2.9427
f_8  RBF_pri  0.1683 / 0.1646 / 0.1741 / 0.1659  1.1983 / 1.2398 / 1.3317 / 1.2620
f_8  RBF_pos  0.1842 / 0.1653 / 0.1847 / 0.1697  1.4382 / 1.2617 / 1.5064 / 1.3617
f_9  RBF_pri  0.0211 / 0.0190 / 0.0215 / 0.0196  0.4572 / 0.3441 / 0.4860 / 0.3834
f_9  RBF_pos  0.0329 / 0.0209 / 0.0388 / 0.0248  1.0555 / 0.4795 / 1.5252 / 0.7643
Summary of chosen basis functions
Test function  Sampling technique  Sample size  Problem dimension  Overall accuracy 

f _{1}  Cubic  Cubic  Cubic  Cubic 
f _{2}  Quadratic  Quadratic  Quadratic  Quadratic 
f _{3}  Linear  Cubic  Cubic  Linear 
f _{4}  Cubic  Cubic  Cubic  Quadratic 
f _{5}  Quadratic  Cubic  Cubic  Cubic 
f _{6}  Cubic  Cubic  Cubic  Cubic 
f _{7}  Cubic  Cubic  Cubic  Cubic 
f _{8}  Cubic  Cubic  Cubic  Cubic 
f _{9}  Cubic  Cubic  Cubic  Cubic 
In cases where the best-performing basis function differs between the two approaches under a modeling criterion, the basis function that performed better with RBF_pos is selected. This enables a more reliable comparison between the two approaches.
6.2 Effect of sampling technique
NRMSE and NMAE of each sampling technique (high sample size)

Test function  RBF approach  NRMSE (RND / LHS / HSS)  NMAE (RND / LHS / HSS)
f_1  RBF_pri  0.1171 / 0.0952 / 0.0747  1.0179 / 0.8358 / 0.8646
f_1  RBF_pos  0.1221 / 0.0975 / 0.0752  1.1759 / 0.9711 / 1.3284
f_2  RBF_pri  0.2365 / 0.1620 / 0.1397  2.0912 / 1.5635 / 1.9509
f_2  RBF_pos  0.3110 / 0.1769 / 0.1602  3.1755 / 1.8219 / 2.3215
f_3  RBF_pri  0.3164 / 0.3080 / 0.3144  2.5913 / 4.7238 / 2.1904
f_3  RBF_pos  0.3117 / 0.3067 / 0.3098  2.5876 / 4.2085 / 2.4636
f_4  RBF_pri  0.3250 / 0.2634 / 0.1126  2.2031 / 1.7894 / 0.7961
f_4  RBF_pos  0.3270 / 0.2709 / 0.1260  2.2347 / 1.9679 / 1.1646
f_5  RBF_pri  0.1853 / 0.1752 / 0.1649  1.3390 / 1.2760 / 1.3270
f_5  RBF_pos  0.1863 / 0.1767 / 0.1730  1.6783 / 2.8564 / 1.6052
Average Low  RBF_pri  0.2361 / 0.2008 / 0.1613  1.8485 / 2.0377 / 1.4258
Average Low  RBF_pos  0.2516 / 0.2058 / 0.1688  2.1704 / 2.3652 / 1.7767
f_6  RBF_pri  0.5021 / 0.5012 / 0.4839  2.5461 / 2.5543 / 2.6711
f_6  RBF_pos  0.5300 / 0.5254 / 0.5090  2.8477 / 2.8564 / 2.7938
f_7  RBF_pri  0.1196 / 0.1185 / 0.1138  3.0491 / 2.9903 / 2.9283
f_7  RBF_pos  0.1186 / 0.1175 / 0.1134  2.9861 / 2.9346 / 2.8446
f_8  RBF_pri  0.1669 / 0.1646 / 0.1586  1.2700 / 1.2398 / 1.3377
f_8  RBF_pos  0.1674 / 0.1653 / 0.1615  1.2915 / 1.2617 / 1.4356
f_9  RBF_pri  0.0192 / 0.0190 / 0.1586  0.3513 / 0.3441 / 2.0628
f_9  RBF_pos  0.0209 / 0.0209 / 0.0233  0.4878 / 0.4795 / 0.6263
Average High  RBF_pri  0.2020 / 0.2008 / 0.2287  1.8041 / 1.7821 / 2.2500
Average High  RBF_pos  0.2092 / 0.2073 / 0.2018  1.9032 / 1.8831 / 1.9251
Average All  RBF_pri  0.2209 / 0.2008 / 0.1913  1.8288 / 1.9241 / 1.7921
Average All  RBF_pos  0.2328 / 0.2064 / 0.1835  2.0517 / 2.1509 / 1.8426
The "Ave. all" bars in Fig. 4a and b, along with the data in Table 5, show improvements of 4.7 % and 6.8 % in NRMSE and NMAE when using the HSS technique instead of LHS with the RBF_pri approach, while the corresponding values for the RBF_pos approach are 11.1 % and 14.3 %. The advantage of RBF_pri over RBF_pos in being more robust, in terms of NRMSE and NMAE, with respect to the change of sampling technique can be seen in these percentages.
6.3 Effect of sampling size
Relative differences of NRMSE and NMAE comparing RBF_pri and RBF_pos considering sample size

Sample size  D_NRMSE (%) (LOW / MED / HIGH)  D_NMAE (%) (LOW / MED / HIGH)
Average Low  9.10 / 6.64 / 4.47  16.36 / 9.68 / 13.40
Average High  −17.61 / 0.74 / 3.41  −12.84 / 6.83 / 12.77
Average All  −2.77 / 4.02 / 4.00  3.38 / 8.42 / 13.12
6.4 Effect of dimension
NRMSE, NMAE and their related relative difference values averaged over all sampling techniques

Performance metric  RBF approach  Average Low  Average High
NRMSE  RBF_pri  0.1994  0.2105
NRMSE  RBF_pos  0.2087  0.2061
NMAE  RBF_pri  1.7707  1.9454
NMAE  RBF_pos  2.1041  1.9038
D_NRMSE (%)  RBF_pos  4.59  −2.12
D_NMAE (%)  RBF_pos  17.21  −2.16
6.5 Overall accuracy
Overall accuracy performance of RBF_pri over RBF_pos

Test functions  RND (D_NRMSE (%) / D_NMAE (%))  LHS (D_NRMSE (%) / D_NMAE (%))  HSS (D_NRMSE (%) / D_NMAE (%))
f_1  4.18 / 15.52  2.39 / 16.18  0.75 / 53.64
f_2  27.21 / 51.85  8.85 / 16.53  13.64 / 19.00
f_3  −1.50 / −0.14  −0.42 / 4.90  −1.48 / 12.47
f_4  0.61 / 1.43  −3.18 / 12.65  4.05 / 9.93
f_5  12.59 / 25.35  8.71 / 19.42  14.55 / 20.97
Ave. Low  8.62 / 18.80  3.27 / 13.93  6.30 / 23.20
f_6  5.40 / 11.84  4.72 / 11.83  5.06 / 4.59
f_7  −0.85 / −2.07  −0.84 / −1.86  −0.37 / −2.86
f_8  0.30 / 1.69  0.42 / 1.76  1.78 / 7.33
f_9  9.76 / 38.85  9.35 / 39.34  59.91 / 79.85
Ave. High  3.65 / 12.58  3.41 / 12.77  16.59 / 22.23
Ave. All  6.41 / 16.04  3.33 / 13.42  10.88 / 22.77
7 Optimization examples
The problem in (37) is now solved by performing a DoE procedure and setting up the corresponding RBFs, which in turn define a new optimization problem that is solved using a global search with a genetic algorithm and a local search with sequential linear and/or quadratic programming. First, a set of sampling points is generated by successive linear response surface optimization of the problem in (37), using four successive iterations with automatic panning and zooming (Gustafsson and Strömberg 2008). This screening generates 12 sampling points according to Fig. 10. Then, RBFs are fitted to this DoE and an optimal point is identified. The DoE is then augmented with this optimal point and the RBFs are set up again. This procedure is repeated three times, generating in total a DoE with 12 sampling points from the screening and three optimal points from the RBFs. Finally, metamodel-based design optimization using our RBFs for this DoE of 12 + 3 sampling points is performed. The optimal solution generated with this procedure is (1.4962, 1.5049), which is very close to the analytical optimum of (37).
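Problem (37) and Fig. 10 are not reproduced in this excerpt, so the following is only a structural sketch of the fit–optimize–augment loop above, under simplifying assumptions: a plain cubic RBF without bias stands in for the full metamodel, and the GA/SLP searches are replaced by a dense random candidate search. The quadratic objective in the test is a hypothetical stand-in, not the welded beam or (37).

```python
import numpy as np

def surrogate_optimize(f, X0, n_updates=3, phi=lambda r: r**3, n_cand=20000, seed=1):
    """Metamodel-based optimization sketch: fit an RBF to the current DoE,
    minimise the surrogate over random candidates, augment the DoE with the
    surrogate optimum, and refit (repeated n_updates times)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X0, dtype=float)
    y = np.array([f(x) for x in X])
    lo, hi = X.min(axis=0), X.max(axis=0)       # search box from the initial DoE
    for _ in range(n_updates):
        r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        lam = np.linalg.solve(phi(r) + 1e-10 * np.eye(len(X)), y)
        cand = rng.uniform(lo, hi, size=(n_cand, X.shape[1]))
        rc = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2)
        x_new = cand[np.argmin(phi(rc) @ lam)]  # surrogate optimum
        X = np.vstack([X, x_new])               # augment the DoE ...
        y = np.append(y, f(x_new))              # ... and evaluate the true function
    return X[np.argmin(y)], y.min()
```

The key design point mirrored from the paper is that each surrogate optimum is evaluated with the true function and fed back into the DoE, so the metamodel is densest where the optimum is believed to lie.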
8 Concluding remarks
In this paper, a new approach for setting up radial basis function networks is proposed by letting the bias be defined a priori by the corresponding regression model. Our new approach is compared with the established treatment of RBF, where the bias is obtained by using extra orthogonality constraints. It is numerically demonstrated that our approach with a priori bias in general performs as well as RBF with a posteriori bias. In addition, we find our approach easier to set up and interpret: it is clear that the bias captures the global behavior and that the radial basis functions tune the local response. It is also demonstrated that our RBF with a priori bias performs excellently in metamodel-based design optimization, handling most accurately DoEs that simultaneously contain both coarse and dense sampling densities, as generated by successive screening and optimal augmentation. In conclusion, the paper shows that our new RBF approach with a priori bias is a most attractive choice of surrogate model. We believe that our approach has promising potential and opens up new possibilities for surrogate modelling in optimization, which we hope to explore in the near future.
References
Amouzgar K, Strömberg N (2014) An approach towards generating surrogate models by using RBFN with a priori bias. In: ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers
Amouzgar K, Rashid A, Strömberg N (2013) Multi-objective optimization of a disc brake system by using SPEA2 and RBFN. In: Proceedings of the ASME 2013 International Design Engineering Technical Conferences, vol 3B. American Society of Mechanical Engineers, Portland. doi:10.1115/DETC2013-12809
Backlund PB, Shahan DW, Seepersad CC (2012) A comparative study of the scalability of alternative metamodelling techniques. Eng Optim 44(7):767–786
Box GEP, Wilson KB (1951) On the experimental attainment of optimum conditions. J R Stat Soc Series B (Methodological) 13(1):1–45
Branin FH (1972) Widely convergent method for finding multiple solutions of simultaneous nonlinear equations. IBM J Res Develop 16(5):504–522. doi:10.1147/rd.165.0504
Fang H, Rais-Rohani M, Liu Z, Horstemeyer MF (2005) A comparative study of metamodeling methods for multiobjective crashworthiness optimization. Comput Struct 83(25–26):2121–2136. doi:10.1016/j.compstruc.2005.02.025
Forrester AIJ, Keane AJ (2009) Recent advances in surrogate-based optimization. Progress Aerospace Sci 45(1–3):50–79. doi:10.1016/j.paerosci.2008.11.001
Garg H (2014) Solving structural engineering design optimization problems using an artificial bee colony algorithm. J Ind Manag Optim 10(3):777–794
Goldstein AA, Price JF (1971) On descent from local minima. Math Comput 25(115):569–574
Gustafsson E, Strömberg N (2008) Shape optimization of castings by using successive response surface methodology. Struct Multidiscip Optim 35(1):11–28
Hardy RL (1971) Multiquadric equations of topography and other irregular surfaces. J Geophys Res 76(8):1905–1915
Haykin S (1998) Neural networks: a comprehensive foundation, 2nd edn. Prentice Hall
Jin R, Chen W, Simpson TW (2001) Comparative studies of metamodelling techniques under multiple modelling criteria. Struct Multidiscip Optim 23(1):1–13
Kalagnanam JR, Diwekar UM (1997) An efficient sampling technique for offline quality control. Technometrics 39(3):308–319
Kim BS, Lee YB, Choi DH (2009) Comparison study on the accuracy of metamodeling technique for non-convex functions. J Mech Sci Technol 23(4):1175–1181
McKay MD, Beckman RJ, Conover WJ (1979) A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21(2):239–245
Mullur A, Messac A (2006) Metamodeling using extended radial basis functions: a comparative approach. Eng Comput 21(3):203–217
Rosenbrock HH (1960) An automatic method for finding the greatest or least value of a function. Comput J 3(3):175–184. doi:10.1093/comjnl/3.3.175
Sacks J, Schiller SB, Welch WJ (1989) Designs for computer experiments. Technometrics 31(1):41–47. doi:10.2307/1270363
Simpson TW, Mauery TM, Korte JJ, Mistree F (1998) Comparison of response surface and kriging models for multidisciplinary design optimization. In: 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, AIAA paper 98-4755
Simpson TW, Lin DKJ, Chen W (2001a) Sampling strategies for computer experiments: design and analysis. Int J Reliab Appl 2(3):209–240
Simpson TW, Poplinski JD, Koch PN, Allen JK (2001b) Metamodels for computer-based engineering design: survey and recommendations. Eng Comput 17(2):129–150
Simpson TW, Toropov V, Balabanov V, Viana FAC (2008) Design and analysis of computer experiments in multidisciplinary design optimization: a review of how far we have come or not. In: 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference
Strömberg N (2016) Reliability based design optimization by using an SLP approach and radial basis function networks. In: ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers (to appear)
Vapnik V, Golowich SE, Smola A (1996) Support vector method for function approximation, regression estimation, and signal processing. In: Advances in Neural Information Processing Systems, vol 9, pp 281–287
Wang GG, Shan S (2007) Review of metamodeling techniques in support of engineering design optimization. J Mech Des 129(4):370–380
Zhao D, Xue D (2010) A comparative study of metamodeling methods considering sample quality merits. Struct Multidiscip Optim 42(6):923–938. doi:10.1007/s00158-010-0529-3
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.