Fuzzy radial basis function network for fuzzy regression with fuzzy input and fuzzy output
Abstract
In this study, fuzzy regression (FR) models with fuzzy inputs and outputs are discussed, and some of the FR methods in the literature based on linear programming and fuzzy least squares are explained. We propose a Fuzzy Radial Basis Function (FRBF) Network to obtain estimates for the FR model when inputs and outputs are symmetric or nonsymmetric triangular fuzzy numbers. The proposed FRBF Network approach is a fuzzification of the inputs, outputs and weights of the traditional RBF Network, and it can be used as an alternative to FR methods. The approach is constructed on the basis of minimizing the square of the total difference between observed and estimated outputs. A simple training algorithm derived from the cost function of the FRBF Network through the Backpropagation algorithm is developed in this study. The advantages of our proposed approach are its simplicity and easy computation as well as its performance. To compare the performance of the proposed method with those given in the literature, three numerical examples are presented.
Keywords
Fuzzy sets · Fuzzy regression · Fuzzy c-means clustering · Fuzzy radial basis function network
Introduction
Regression analysis is one of the most widely used estimation methods; it is applied to determine the functional relationship between independent and dependent variables. Fuzzy regression (FR) is a fuzzy counterpart of classical regression in which some elements of the model are represented by fuzzy numbers [35].
Fuzzy linear regression (FLR), first proposed by Tanaka et al. [46], minimizes the total spread of the fuzzy parameters subject to the constraint that the supports of the estimated values cover the supports of the observed values at a certain \(\alpha \)-level. In light of Tanaka et al.'s [46] study, several methods have been developed for FR models. Another approach to FLR, proposed by Diamond [16], determines the fuzzy parameters in analogy to the conventional normal equations, derived with a suitable metric. In general, there are two main approaches in FR analysis: linear programming-based methods and FLS-based methods. The first is based on minimizing fuzziness as an optimality criterion [4, 5, 6, 8, 20, 33, 36, 37, 38, 40, 41, 42, 45, 47], whereas the second is based on least squares (LS) of errors as a fitting criterion [3, 9, 15, 16, 25, 26, 27, 31, 48].
Many studies related to FR have appeared in the literature since it was proposed by Tanaka et al. [46]. Bardossy [5] developed a general form of regression equations for fuzzy numbers and formulated the FR problem as a mathematical programming problem. Bardossy et al. [6] introduced a general methodology for FR and applied it to an actual hydrological case study involving the imprecise relationship between soil electrical resistivity and hydraulic permeability. Sakawa and Yano [40] developed LP-based methods for solving three types of formulated problems to obtain FLR models, where both input and output data are fuzzy numbers. Sakawa and Yano [41] introduced three types of multiobjective programming (MOP) problems for obtaining FLR models with fuzzy input and fuzzy output data; they developed an LP-based interactive decision-making method to derive the satisfying solution of the decision maker for the MOP problems. Ming et al. [31] described a model for LS fitting of fuzzy input and fuzzy output data. Kao and Chyu [26] introduced a method of LS under a fuzzy environment to handle fuzzy observations in regression analysis for three cases: crisp input–fuzzy output, fuzzy input–fuzzy output, and nontriangular fuzzy observations. Yang and Lin [48] proposed two estimation methods along with an FLS approach for FLR models with fuzzy inputs, fuzzy outputs and fuzzy parameters. Hojati et al. [20] proposed a simple goal-programming-like approach for the computation of FR for two cases: crisp inputs–fuzzy outputs and fuzzy inputs–fuzzy outputs. Chen and Dang [10] proposed a three-phase method to construct the FR model with variable spreads to resolve the problem of increasing spreads. Lu and Wang [30] proposed an enhanced fuzzy linear regression model (FLR\(_{\textit{FS}})\). Shakouri and Nadimi [43] introduced an approach to find the parameters of an FLR with crisp inputs and fuzzy outputs.
Khan and Valeo [27] introduced a method, which is an extension of the Diamond’s [16] FLS method, for FLR with fuzzy regressors, regressand and coefficients.
Many Neural Network (NN) models are similar or identical to well-known statistical techniques such as linear regression, polynomial regression, nonparametric regression, discriminant analysis, principal components analysis and cluster analysis. The Radial Basis Function Network (RBFN) is a special kind of NN that consists of an input layer, a single hidden layer and an output layer. It has radial basis functions in the hidden units and linear functions in the output units, with adjustable weights. In recent years, various fuzzified versions of NNs and the RBF Network have been developed for linear, nonlinear and nonparametric regression models.
NN models have been applied to FR analysis by various researchers. For example, Ishibuchi and Tanaka [23] introduced simple and powerful methods for FR analysis using NNs. Ishibuchi et al. [24] proposed an architecture of Fuzzy Neural Networks (FNN) with crisp inputs, interval weights and interval outputs for FR analysis. Ishibuchi et al. [21] introduced an FNN architecture with triangular fuzzy weights. Ishibuchi and Nii [22] proposed nonlinear fuzzy regression methods based on FNNs with asymmetric fuzzy weights. Cheng and Lee [11] proposed an FRBF Network for FR analysis in which the weights between the input and hidden units and the outputs are considered fuzzy numbers, while the inputs and the weights between the hidden and output units are considered crisp. Dunyak and Wunsch [17] described a method for nonlinear FR using NN models. Khashei et al. [28] proposed a hybrid method, based on the basic concepts of NN and FR models, that yields more accurate results with incomplete data sets and overcomes the limitations of both methods. Mosleh et al. [35] presented a novel hybrid method based on FNNs to approximate the fuzzy parameters of fuzzy linear and nonlinear regression models with crisp inputs and fuzzy output. Cobaner et al. [14] proposed an adaptive neuro-fuzzy approach to estimate suspended sediment concentration in rivers; the potential of the neuro-fuzzy technique was compared with Generalized Regression Neural Networks (GRNN), Radial Basis Function Neural Networks (RBFNN), Multilayer Perceptrons (MLP) and two different sediment rating curves (SRC). Haddadnia et al. [18] presented a fuzzy hybrid learning algorithm for the RBFNN. Roh et al. [39] presented a Fuzzy RBFNN based on the concept of information ambiguity. Hathaway et al. [19] presented a model that integrates three data types: numbers, intervals and linguistic assessments. Staiano et al. [44] described a novel approach to fuzzy clustering as a summation of a number of linear local regression models.
Their approach is more effective in training RBFNNs, leading to improved performance with respect to other clustering algorithms. Alvisi and Franchini [2] proposed an NN-based approach for water level (or discharge) forecasting under uncertainty, in which the parameters of the NN, i.e., the weights and biases, are represented by fuzzy numbers. Mitra and Basak [32] proposed a fuzzy version of the RBF Network.
To the best of the authors' knowledge, there is no study on an FRBF Network dealing with fuzzy regression with fuzzy input and fuzzy output. Therefore, we propose an FRBF Network with fuzzy input, fuzzy output and fuzzy weights as an alternative to the existing FR methods in the literature. To show its appropriateness and effectiveness, the proposed method is applied to three numerical examples and its performance is compared with existing FR methods. The results indicate that the proposed method is an effective way to estimate the output in a fuzzy environment.
The remainder of the paper is organized as follows: in Sect. 2, fuzzy regression methods in the literature are reviewed. Our proposed Fuzzy Radial Basis Function Network approach is presented in Sect. 3. Three numerical examples comparing the proposed approach with other FR methods are given in Sect. 4. Finally, conclusions are drawn in Sect. 5.
Fuzzy regression methods
Fuzzy linear regression was first introduced by Tanaka et al. [46], and since then several different methods have been proposed for FR by various researchers. In general, fuzzy regression methods are divided into two categories: the first is based on the linear programming (LP) approach and the second on the fuzzy least squares (FLS) approach. The first class, which minimizes the total vagueness of the estimated values of the output, includes Tanaka et al.'s [46] method and its extensions [20, 33, 40, 45, 46]. The second class includes FLS methods that minimize the total square of errors in the estimated values [15, 16, 31, 48].
In this section, we review the widely used fuzzy regression methods of Fuzzy Least Squares (FLS), General Fuzzy Least Squares (GFLS), Sakawa–Yano (SY), Hojati–Bector–Smimou (HBS), Approximate-Distance Fuzzy Least Squares (ADFLS) and Interval-Distance Fuzzy Least Squares (IDFLS).
Proposed approach
The Radial Basis Function (RBF) Network is a special kind of NN which has an input layer, a single hidden layer and an output layer. The hidden layer contains hidden units, also called radial basis function units, each of which has two parameters describing the location of the function's center and its deviation (or width). Hidden units measure the distance between an input vector and the function's center. There are two sets of weights, one connecting the input layer to the hidden layer and the other connecting the hidden layer to the output layer. The weights between the input and hidden layers, also called centers, are determined by a clustering method such as Fuzzy c-Means Clustering (FCM). The weights connecting the hidden layer to the output layer form linear combinations of the hidden units to generate the outputs of the RBF Network. The RBF Network is trained by unsupervised learning or by combining supervised and unsupervised learning [12, 13, 50].
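Before fuzzification, the crisp RBF Network described above can be sketched as follows. This is a minimal illustration assuming Gaussian basis functions; the function and variable names are ours, not the paper's:

```python
import numpy as np

def rbf_forward(x, centers, sigmas, v):
    """Forward pass of a crisp RBF network with one hidden layer.

    centers : (m, d) array -- hidden-unit centers (input-to-hidden weights)
    sigmas  : (m,)  array  -- widths (deviations) of the Gaussian units
    v       : (m,)  array  -- hidden-to-output weights
    """
    # Each hidden unit measures the distance between the input and its center.
    d2 = np.sum((centers - x) ** 2, axis=1)
    h = np.exp(-d2 / sigmas ** 2)   # Gaussian radial basis activations
    return h @ v                    # linear combination at the output layer

# Tiny usage example with two hidden units
centers = np.array([[0.0], [1.0]])
sigmas = np.array([1.0, 1.0])
v = np.array([0.5, 0.5])
y = rbf_forward(np.array([0.5]), centers, sigmas, v)
```

In an actual RBF Network the `centers` would come from a clustering step (e.g. FCM) and `v` from supervised training, as the text describes.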
In this section, we propose an FRBF Network approach for the FR model with fuzzy input and fuzzy output which are symmetric or nonsymmetric TFNs. Our proposed FRBF Network includes fuzzy inputs (\(X_p \)), fuzzy outputs (\(Y_p \)), fuzzy weights between the input and hidden units (\(W_{ij} \)) and fuzzy weights between the hidden and output units (\(V_j \)). In this approach, the weights \(W_{ij} \) and normalization factors \(\sigma _j^2 \) are determined by unsupervised learning. The \(W_{ij} \) are initialized by the modified FCM algorithm given in Sect. 3.2, and the \(V_j \) are randomly selected as TFNs. Then, the \(W_{ij} \), \(V_j \) and \(\sigma _j^2 \) are updated by the Backpropagation (BP) algorithm, which is supervised learning.
Training algorithm of our proposed Fuzzy Radial Basis Function Network
The training algorithm of our proposed FRBF Network follows Yapıcı Pehlivan [49]. In the algorithm, Choi et al.'s [13] BP algorithm for the RBF Network is fuzzified and integrated with Ishibuchi et al.'s [21] BP algorithm for FNNs. The framework of the training algorithm for the proposed FRBF Network is shown in Fig. 1.
 (i) If \(\max \left\{ \left| [X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L \right| ,\left| [X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U \right| \right\} =\left| [X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L \right| \), then$$\begin{aligned} \frac{\partial E_{p,\alpha } }{\partial [W_{ij}]_{\alpha } ^L }&=-\alpha \left( [Y_p ]_{\alpha } ^L -[\hat{Y}_p ]_{\alpha }^L \right) h_{pj} (\sigma _{pj} )^{-2}\nonumber \\&\quad \times \left| [X_{pi} ]_{\alpha }^L -[W_{ij} ]_{\alpha }^L \right| v_j^L \\&\quad -\alpha \left( [Y_p ]_{\alpha }^U -[\hat{Y}_p ]_{\alpha }^U \right) h_{pj} (\sigma _{pj} )^{-2}\nonumber \\&\quad \times \left| [X_{pi} ]_{\alpha }^L -[W_{ij} ]_{\alpha }^L \right| v_j^U\\ \frac{\partial E_{p,\alpha } }{\partial [W_{ij}]_{\alpha } ^U }&=0 \end{aligned}$$
 (ii) If \(\max \left\{ \left| [X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L \right| ,\left| [X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U \right| \right\} =\left| [X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U \right| \), then$$\begin{aligned} \frac{\partial E_{p,\alpha } }{\partial [W_{ij}]_{\alpha } ^L }&=0\\ \frac{\partial E_{p,\alpha } }{\partial [W_{ij}]_{\alpha } ^U }&=-\alpha \left( [Y_p ]_\alpha ^L -[\hat{Y}_p ]_{\alpha }^L \right) h_{pj} (\sigma _{pj} )^{-2}\nonumber \\&\quad \times \left| [X_{pi} ]_{\alpha }^U -[W_{ij} ]_{\alpha }^U \right| v_j^L \\&\quad -\alpha \left( [Y_p ]_{\alpha }^U -[\hat{Y}_p ]_{\alpha }^U \right) h_{pj} (\sigma _{pj} )^{-2}\nonumber \\&\quad \times \left| [X_{pi} ]_{\alpha }^U -[W_{ij} ]_{\alpha }^U \right| v_j^U \end{aligned}$$The normalization factors \(\sigma _{pj}^2 \) are updated by$$\begin{aligned} \sigma _{pj} (t+1)=\sigma _{pj} (t)+\Delta \sigma _{pj} (t) \end{aligned}$$(25)where \(\Delta \sigma _{pj} (t)\) can be calculated using the cost function \(E_{p,\alpha } \) as follows:$$\begin{aligned} \Delta \sigma _{pj} (t)=-\eta \frac{\partial \,E_{p,\alpha } }{\partial \,\sigma _{pj} }+\lambda \,\Delta \sigma _{pj} (t-1) \end{aligned}$$(26)The derivative \(\frac{\partial \,E_{p,\alpha } }{\partial \,\sigma _{pj} }\) in Eq. (26) can be written as$$\begin{aligned} \frac{\partial \,E_{p,\alpha } }{\partial \,\sigma _{pj} }=\zeta ^L+\zeta ^U \end{aligned}$$
 (i) If \(\max \left\{ \left| [X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L \right| ,\left| [X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U \right| \right\} =\left| [X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L \right| \), then$$\begin{aligned} \zeta ^L&=-\alpha \left( [Y_{pk} ]_\alpha ^L -[\hat{Y}_{pk} ]_\alpha ^L \right) h_{pj} (\sigma _{pj} )^{-3}\\&\quad \times \left| [X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L \right| ^2v_j^L\\ \zeta ^U&=-\alpha \left( [Y_{pk} ]_\alpha ^U -[\hat{Y}_{pk} ]_\alpha ^U \right) h_{pj} (\sigma _{pj} )^{-3}\\&\quad \times \left| [X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L \right| ^2v_j^U \end{aligned}$$
 (ii) If \(\max \left\{ \left| [X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L \right| ,\left| [X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U \right| \right\} =\left| [X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U \right| \), then$$\begin{aligned} \zeta ^L&=-\alpha \left( [Y_{pk} ]_\alpha ^L -[\hat{Y}_{pk} ]_\alpha ^L \right) h_{pj} (\sigma _{pj} )^{-3}\\&\quad \times \left| [X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U \right| ^2v_j^L\\ \zeta ^U&=-\alpha \left( [Y_{pk} ]_\alpha ^U -[\hat{Y}_{pk} ]_\alpha ^U \right) h_{pj} (\sigma _{pj} )^{-3}\\&\quad \times \left| [X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U \right| ^2v_j^U \end{aligned}$$
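The update rule for the normalization factors is plain gradient descent with momentum. A minimal sketch (the function and variable names are ours; the negative sign on the learning-rate term is our assumption, following the usual convention that the update steps against the gradient):

```python
def update_sigma(sigma, grad, prev_delta, eta=0.01, lam=0.1):
    """One momentum update of a normalization factor, in the spirit of
    Eqs. (25)-(26): eta is the learning constant, lam the momentum constant
    (the paper's experiments use 0.01 and 0.1)."""
    delta = -eta * grad + lam * prev_delta  # step against the gradient, Eq. (26)
    return sigma + delta, delta             # sigma(t+1) = sigma(t) + delta, Eq. (25)

# Usage: one update with gradient 2.0 and no previous momentum term
sigma, delta = update_sigma(1.0, grad=2.0, prev_delta=0.0)
```

The returned `delta` is carried into the next call as `prev_delta`, which is what gives the momentum term its smoothing effect.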

Step 1 Determine the fuzzy weights \(W_{ij} \) using the modified FCM algorithm given in Eqs. (27)–(29). Initialize the fuzzy weights \(V_j \) randomly as fuzzy numbers. Calculate the initial values of the normalization factors by Eq. (13).

Step 2 Repeat Step 3 for \(\alpha _1 ,\alpha _2 ,\ldots ,\alpha _s \).
 Step 3 Repeat the following procedures for \(p=1,2,\ldots ,n\).

Step 4 If the total number of iterations is satisfied, stop. Otherwise, go to Step 2.
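Steps 1–4 above amount to a nested loop over iterations, \(\alpha \)-cuts and patterns. A skeleton of that control flow (the helper names and the `bp_step` callback are our own assumptions, not the paper's notation):

```python
def train_frbfn(data, alphas, n_iter, params, bp_step):
    """Skeleton of Steps 1-4 of the training algorithm.

    `params` holds the initialized fuzzy weights W, V and the normalization
    factors (Step 1); `bp_step` is assumed to perform one backpropagation
    update for a single pattern and alpha-cut.
    """
    for _ in range(n_iter):        # Step 4: stop after a fixed iteration budget
        for alpha in alphas:       # Step 2: loop over the alpha-cuts
            for x, y in data:      # Step 3: loop over the n patterns
                params = bp_step(params, x, y, alpha)
    return params

# Usage with a dummy bp_step that only counts how many updates are performed
calls = []
params = train_frbfn(data=[(0, 0), (1, 1)], alphas=[0.0, 0.5, 1.0], n_iter=4,
                     params=None, bp_step=lambda p, x, y, a: calls.append(a) or p)
```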
Table 1 Fuzzy input–output data set from Sakawa and Yano [40]
i  \(X_i =(x_i ,\underline{f}_i ,\overline{f} _i )_T \)  Interval \(X_i \)  \(Y_i =(y_i ,\underline{e}_i ,\overline{e} _i )_T \)  Interval \(Y_i \) 

1  (2.0, 0.5, 0.5)  [1.5, 2.5]  (4.0, 0.5, 0.5)  [3.5, 4.5] 
2  (3.5, 0.5, 0.5)  [3.0, 4.0]  (5.5, 0.5, 0.5)  [5.0, 6.0] 
3  (5.5, 1.0, 1.0)  [4.5, 6.5]  (7.5, 1.0, 1.0)  [6.5, 8.5] 
4  (7.0, 0.5, 0.5)  [6.5, 7.5]  (6.5, 0.5, 0.5)  [6.0, 7.0] 
5  (8.5, 0.5, 0.5)  [8.0, 9.0]  (8.5, 0.5, 0.5)  [8.0, 9.0] 
6  (10.5, 1.0, 1.0)  [9.5, 11.5]  (8.0, 1.0, 1.0)  [7.0, 9.0] 
7  (11, 0.5, 0.5)  [10.5, 11.5]  (10.5, 0.5, 0.5)  [10.0, 11.0] 
8  (12.5, 0.5, 0.5)  [12.0, 13.0]  (9.5, 0.5, 0.5)  [9.0, 10.0] 
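The interval columns in the table above are the \(\alpha =0\) cuts of the triangular fuzzy numbers \((x, \underline{f}, \overline{f})\): the support \([x-\underline{f},\,x+\overline{f}]\) at \(\alpha =0\), collapsing to the crisp center at \(\alpha =1\). A one-line sketch of the conversion (the function name is ours):

```python
def tfn_alpha_cut(x, f_lower, f_upper, alpha):
    """Alpha-cut interval of a triangular fuzzy number (x, f_lower, f_upper).

    At alpha = 0 this is the full support [x - f_lower, x + f_upper];
    at alpha = 1 it collapses to the crisp center x.
    """
    return (x - (1 - alpha) * f_lower, x + (1 - alpha) * f_upper)

# First observation of the table: X_1 = (2.0, 0.5, 0.5) has support [1.5, 2.5]
lo, hi = tfn_alpha_cut(2.0, 0.5, 0.5, alpha=0.0)
```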
Modified Fuzzy cMeans Clustering algorithm
The Fuzzy c-Means Clustering (FCM) algorithm is the most common clustering algorithm for the RBF Network. It partitions n data points into c fuzzy groups and estimates the cluster center of each group [7, 12].

Step 1 Set the number of clusters m and parameter b. Initialize cluster centers \(W_{ij} \) and inputs \(X_i \) for \(\alpha =0\).

Step 2 Determine the membership values using \(W_{ij} \) in one of two ways, as follows:
 (i) If \(\left\| \left[ {[X_i ]_\alpha ^L ,[X_i ]_\alpha ^U } \right] -\left[ {[W_{ij} ]_\alpha ^L ,[W_{ij} ]_\alpha ^U } \right] \right\| ^2\ne 0\), then$$\begin{aligned}&\mu _j ([X_i ]_\alpha )\nonumber \\&\quad =\left[ {\sum \limits _{k=1}^m {\left( {\frac{\left( {\max \left\{ \left| x_i ^L-w_{ij} ^L \right| ,\left| x_i ^U-w_{ij} ^U \right| \right\} }\right) ^2}{\left( {\max \left\{ \left| x_i ^L-w_{ik} ^L \right| ,\left| x_i ^U-w_{ik} ^U \right| \right\} }\right) ^2}}\right) ^{1/(b-1)}} } \right] ^{-1} \end{aligned}$$(27)
 (ii) If \(\left\| \left[ {[X_i ]_\alpha ^L ,[X_i ]_\alpha ^U } \right] -\left[ {[W_{ij} ]_\alpha ^L ,[W_{ij} ]_\alpha ^U } \right] \right\| ^2=0\), then$$\begin{aligned}&\mu _j ([X_i ]_\alpha )\nonumber \\&\quad =\left\{ {\begin{array}{ll} 1,&{}\quad \text {if}\quad \left[ {[X_i ]_{\alpha }^L ,[X_i]_{\alpha }^U } \right] =\left[ {[W_{ij} ]_{\alpha }^L ,[W_{ij} ]_{\alpha }^U } \right] \\ 0,&{}\quad \text {if}\quad \left[ {[X_i ]_{\alpha }^L ,[X_i ]_{\alpha }^U } \right] \ne \left[ {[W_{ij} ]_{\alpha }^L ,[W_{ij} ]_{\alpha }^U } \right] \\ \end{array}} \right. \end{aligned}$$(28)
 Step 3 Update the cluster centers \(W_{ij} \) until the membership values stabilize, by$$\begin{aligned}&\left[ {[W_{ij} ]_\alpha ^L ,[W_{ij} ]_\alpha ^U } \right] \nonumber \\&\quad =\left[ {\frac{\sum \nolimits _{i=1}^n {[\mu _j ([X_i ]_\alpha )]^b[X_i ]_\alpha ^L } }{\sum \nolimits _{i=1}^n {[\mu _j ([X_i ]_\alpha )]^b} }},\ {\frac{\sum \nolimits _{i=1}^n {[\mu _j ([X_i ]_\alpha )]^b[X_i ]_\alpha ^U } }{\sum \nolimits _{i=1}^n {[\mu _j ([X_i ]_\alpha )]^b} }} \right] \end{aligned}$$(29)
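The modified FCM steps for interval-valued data at a fixed \(\alpha \)-cut can be sketched as follows. This is a minimal illustration assuming \(b>1\); the helper names are ours, and intervals are represented as `(lower, upper)` pairs:

```python
import numpy as np

def interval_distance(x, w):
    # Distance between an interval input and an interval center, as in the
    # max expression of Eq. (27): the larger of the two endpoint deviations.
    return max(abs(x[0] - w[0]), abs(x[1] - w[1]))

def fcm_memberships(X, W, b=2.0):
    """Membership values of Eqs. (27)-(28) for interval inputs X, centers W."""
    U = np.zeros((len(X), len(W)))
    for i, x in enumerate(X):
        d = [interval_distance(x, w) for w in W]
        if min(d) == 0.0:
            # Eq. (28): the input coincides with (at least) one center
            U[i] = [1.0 if dj == 0.0 else 0.0 for dj in d]
            continue
        for j in range(len(W)):
            # Eq. (27): inverse sum of squared distance ratios
            U[i, j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (b - 1.0))
                                for k in range(len(W)))
    return U

def fcm_centers(X, U, b=2.0):
    """Membership-weighted interval cluster centers, Eq. (29)."""
    Ub = U ** b
    return (Ub.T @ np.asarray(X, dtype=float)) / Ub.sum(axis=0)[:, None]
```

Alternating `fcm_memberships` and `fcm_centers` until the memberships stabilize corresponds to Steps 2 and 3 of the algorithm.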
Table 2 Parameter estimations, predicted intervals \([\hat{Y}^L,\hat{Y}^U]\) and SSE values for the considered methods
Parameters  FLS  GFLS  SY  HBS  ADFLS  IDFLS  FRBF 

\(A_0 =(a_0 ,\underline{c}_0 ,\bar{c}_0 )\)  
\(a_0 \)  3.4877  3.5085  3.4545  3.4091  3.5653  3.5749  – 
\(\underline{c}_0 \)  –  –  0  0.4091  0.2688  0.2969  – 
\(\bar{c}_0 \)  –  –  0  0.4091  0.2977  0.2667  – 
\(A_1 =(a_1 ,\underline{c}_1 ,\bar{c}_1 )\)  
\(a_1 \)  0.5306  0.5278  0.5573  0.5227  0.5203  0.5190  – 
\(\underline{c}_1 \)  –  –  0.0119  0.0227  0.0041  0.0005  – 
\(\bar{c}_1 \)  –  –  0.0119  0.0227  0.0003  0.0041  – 
Predicted intervals  
1  [4.28, 4.81]  [4.30, 4.82]  [4.27, 4.87]  [4.00, 4.90]  [4.06, 5.16]  [4.05, 5.14]  [3.66, 5.10] 
2  [5.07, 5.60]  [5.09, 5.61]  [5.09, 5.73]  [4.75, 5.72]  [4.84, 5.94]  [4.83, 5.93]  [4.70, 6.14] 
3  [5.87, 6.93]  [5.88, 6.93]  [5.90, 7.15]  [5.75, 6.81]  [5.61, 7.24]  [5.61, 7.24]  [5.58, 7.01] 
4  [6.93, 7.46]  [6.93, 7.46]  [7.00, 7.72]  [6.50, 7.63]  [6.64, 7.76]  [6.64, 7.76]  [6.77, 8.18] 
5  [7.73, 8.26]  [7.73, 8.25]  [7.81, 8.57]  [7.25, 8.45]  [7.42, 8.54]  [7.42, 8.54]  [7.54, 8.94] 
6  [8.52, 9.58]  [8.52, 9.57]  [8.63, 10.0]  [8.25, 9.54]  [8.19, 9.84]  [8.20, 9.85]  [8.21, 9.58] 
7  [9.05, 9.58]  [9.05, 9.57]  [9.18, 10.0]  [8.50, 9.81]  [8.71, 9.84]  [8.72, 9.85]  [8.61, 9.95] 
8  [9.85, 10.38]  [9.84, 10.36]  [10.0, 10.85]  [9.25, 10.63]  [9.48, 10.63]  [9.50, 10.64]  [9.16, 10.46] 
SSE  17.0088  22.1612  17.3682  15.1991  15.4723  10.3435  9.9680 
Numerical examples
In this section, we consider three numerical examples to demonstrate that the proposed FRBF Network approach performs well in handling the FR model when inputs and outputs are triangular fuzzy numbers. Using these fuzzy data, we obtain an estimated fuzzy regression equation \(\hat{Y}=A_0 +A_1 \hat{X}\) with fuzzy parameters \(A_0 =(a_0 ,\underline{c}_0 ,\bar{c}_0 )\) and \(A_1 =(a_1 ,\underline{c}_1 ,\bar{c}_1 )\). The proposed FRBF Network approach is applied to the examples and compared with the FLS, GFLS, SY, HBS, ADFLS and IDFLS methods. LINGO and MATLAB software are used for the computations of the FR methods, and the proposed FRBF Network is implemented in MATLAB on a notebook with a 2.0 GHz Intel Core 2 Duo CPU.
In all computations of the examples, we use a learning constant (\(\eta \)) of 0.01, a momentum constant (\(\lambda \)) of 0.1 and \(\alpha \)-cut values of \(\alpha =0,\,0.2,\,0.4,\,0.6,\,0.8\) and 1.0 for the training algorithm of the proposed FRBF Network. The initial values of the \(W_{ij} \) for \(\alpha =0\) are computed using the modified FCM algorithm via Eqs. (27)–(29). The initial values of the \(\sigma _{pj}^2 \) are determined using the initial values of the \(W_{ij} \). The initial values of the \(V_j \) are randomly determined as fuzzy numbers. We calculate the cost function of each fuzzy output by Eq. (15) and the total cost function by Eq. (16) for each of these \(\alpha \) values.
To compare the performance of the methods, we calculate the total errors in estimation using Eq. (2) for FLS and GFLS, Eq. (6) for SY, Eq. (7) for HBS, Eq. (9) for ADFLS and Eq. (10) for IDFLS methods.
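As a concrete illustration of how observed and estimated intervals can be compared, a naive endpoint-based squared-error measure is sketched below. This is our own simplification for illustration only; each method above defines its error via the equations just cited:

```python
def interval_sse(observed, predicted):
    """Sum of squared deviations between the endpoints of observed and
    estimated output intervals -- an illustrative error measure, not the
    exact formula of any of the compared methods."""
    return sum((ol - pl) ** 2 + (ou - pu) ** 2
               for (ol, ou), (pl, pu) in zip(observed, predicted))

# Usage with the first observation of Example 1 (observed Y_1 = [3.5, 4.5])
# and the corresponding FLS predicted interval [4.28, 4.81] from Table 2
err = interval_sse([(3.5, 4.5)], [(4.28, 4.81)])
```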
Example 1
Sakawa and Yano [40] used an example to illustrate the regression model in which inputs and outputs are symmetric TFNs. The example has eight sets of fuzzy observations \((X_i ,Y_i )\), as shown in Table 1.
 (1)
Number of input units: \(n_{I }= 1\) unit
 (2)
Number of hidden units: \(n_{H }= 3\) units
 (3)
Number of output units: \(n_{O }= 1\) unit
 (4)
Stopping condition: \(t= 20,000\) iterations of the training algorithm
Example 2
Table 3 Fuzzy input–output data set from Diamond [16]
i  \(X_i =(x_i ,\underline{f}_i ,\overline{f} _i )_T \)  Interval \(X_i \)  \(Y_i =(y_i ,\underline{e}_i ,\overline{e} _i )_T \)  Interval \(Y_i \) 

1  (21, 4.2, 2.1)  [16.8, 23.1]  (4.0, 0.6, 0.8)  [3.40, 4.80] 
2  (15.0, 2.25, 2.25)  [12.75, 17.25]  (3.0, 0.3, 0.3)  [2.70, 3.30] 
3  (15.0, 1.5, 2.25)  [13.5, 17.25]  (3.5, 0.35, 0.35)  [3.15, 3.85] 
4  (9.0, 1.35, 1.35)  [7.65, 10.35]  (2, 0.4, 0.4)  [1.60, 2.40] 
5  (12.0, 1.2, 1.2)  [10.80, 13.20]  (3.0, 0.3, 0.45)  [2.70, 3.45] 
6  (18.0, 3.6, 1.8)  [14.40, 19.80]  (3.5, 0.53, 0.7)  [2.97, 4.20] 
7  (6.0, 0.6, 1.2)  [5.40, 7.20]  (2.5, 0.25, 0.38)  [2.25, 2.88] 
8  (12.0, 1.8, 2.4)  [10.20, 14.40]  (2.5, 0.5, 0.5)  [2.00, 3.00] 
Table 4 Parameter estimations, predicted intervals \([\hat{Y}^L,\hat{Y}^U]\) and SSE values for the considered methods
Parameters  FLS  GFLS  ADFLS  IDFLS  FRBF 

\(A_0 =(a_0 ,\underline{c}_0 ,\bar{c}_0 )\)  
\(a_0 \)  1.1286  1.1885  1.3415  1.3730  – 
\(\underline{c}_0 \)  –  –  0.1509  0.3379  – 
\(\bar{c}_0 \)  –  –  0.0943  0.0627  – 
\(A_1 =(a_1 ,\underline{c}_1 ,\bar{c}_1 )\)  
\(a_1 \)  0.1415  0.1363  0.1229  0.1205  – 
\(\underline{c}_1 \)  –  –  0  \(-\)0.0153  – 
\(\bar{c}_1 \)  –  –  0.0129  0.0137  – 
Predicted intervals  
1  [3.50, 4.63]  [3.47, 4.62]  [3.25, 4.53]  [3.31, 4.53]  [3.55, 4.68] 
2  [2.93, 3.56]  [2.92, 3.53]  [2.75, 3.74]  [2.76, 3.75]  [2.67, 3.57] 
3  [3.03, 3.46]  [3.02, 3.43]  [2.84, 3.74]  [2.86, 3.75]  [2.78, 3.71] 
4  [2.21, 2.59]  [2.23, 2.59]  [2.13, 2.81]  [2.07, 2.82]  [1.92, 2.61] 
5  [2.65, 2.99]  [2.66, 2.98]  [2.51, 3.20]  [2.50, 3.20]  [2.30, 3.10] 
6  [3.16, 4.18]  [3.15, 4.13]  [2.96, 4.09]  [2.99,4.09]  [2.97, 3.95] 
7  [1.89, 2.06]  [1.92, 2.08]  [1.85, 2.39]  [1.76, 2.40]  [1.72, 2.36] 
8  [2.57, 3.08]  [2.57, 3.06]  [2.44, 3.35]  [2.42, 3.36]  [2.27, 3.06] 
SSE  2.4055  3.0867  2.0843  1.4477  1.5517 
 (1)
Number of input units: \(n_{I }= 1\) unit
 (2)
Number of hidden units: \(n_{H }= 3\) units
 (3)
Number of output units: \(n_{O }= 1\) unit
 (4)
Stopping condition: \(t= 20,000\) iterations of the training algorithm
Table 5 Fuzzy input–output data set for the test example from Diamond [16] and Ming et al. [31]
i  \(X_i =(x_i ,\underline{f}_i ,\overline{f} _i )_T \)  Interval \(X_i \)  \(Y_i =(y_i ,\underline{e}_i ,\overline{e} _i )_T \)  Interval \(Y_i \) 

1  (1, 3/4, 3/4)  [0.25, 1.75]  (1, 3/4, 3/4)  [0.25, 1.75] 
2  (2, 1, 1)  [1, 3]  (15/8, 3/2, 3/2)  [0.375, 3.375] 
3  (3, 1, 1)  [2, 4]  (13/4, 3/2, 3/2)  [1.75, 4.75] 
Table 6 Parameter estimations, predicted intervals \([\hat{Y}^L,\hat{Y}^U]\) and SSE values for the considered methods
Parameters  FLS  GFLS  SY  HBS  ADFLS  IDFLS  FRBF 

\(A_0 =(a_0 ,\underline{c}_0 ,\bar{c}_0 )\)  
\(a_0 \)  \(-\)0.4527  \(-\)0.4155  1.75  0.7333  \(-\)0.2004  \(-\)0.1775  – 
\(\underline{c}_0 \)  –  –  0  0.8167  0  \(-\)0.0363  – 
\(\bar{c}_0 \)  –  –  0  0.8167  0  \(-\)0.3857  – 
\(A_1 =(a_1 ,\underline{c}_1 ,\bar{c}_1 )\)  
\(a_1 \)  1.2472  1.2286  0  0.6292  1.1194  1.1263  – 
\(\underline{c}_1 \)  –  –  0  0.1708  0.1256  0.2316  – 
\(\bar{c}_1 \)  –  –  0  0.1708  0.1331  0.1331  – 
Predicted intervals  
1  [\(-\)0.14, 1.72]  [\(-\)0.10, 1.73]  [1.75, 1.75]  [0.03, 2.95]  [\(-\)0.04, 1.89]  [0.08, 1.76]  [\(-\)0.13, 1.85] 
2  [0.79, 3.28]  [0.81, 3.27]  [1.75, 1.75]  [0.375, 3.95]  [0.66, 3.42]  [0.75, 3.43]  [0.59, 2.97] 
3  [2.04, 4.53]  [2.04, 4.49]  [1.75, 1.75]  [0.83, 4.75]  [1.66, 4.67]  [1.64, 4.76]  [1.84, 4.86] 
SSE  0.5390  0.6060  3.0152  16.3878  0.1566  0.1161  0.0770 
Computational experience
The superiority of the proposed FRBF Network approach can also be observed through a test example from Diamond [16] and Ming et al. [31], in which inputs and outputs are symmetric TFNs. This example has three sets of fuzzy observations \((X_i ,Y_i )\), as given in Table 5.
 (1)
Number of input units: \(n_{I }= 1\) unit
 (2)
Number of hidden units: \(n_{H }= 2\) units
 (3)
Number of output units: \(n_{O }= 1\) unit
 (4)
Stopping condition: \(t= 10,000\) iterations of the training algorithm
To compare the performance in estimation of the seven FR methods given in Sect. 2, we calculated the errors in estimating the observed outputs. Table 6 shows the parameter estimations, predicted intervals of the fuzzy outputs and SSE values in estimating the three observations for the considered methods. For the FLS, GFLS, SY, HBS, ADFLS, IDFLS and proposed FRBF Network approaches, the results for \(\alpha =0\) are used for comparison. In Table 6, the SSE value of the FRBF Network approach is 0.0770, which is clearly better than those of the FLS, GFLS, SY, HBS, ADFLS and IDFLS methods, with SSE values of 0.5390, 0.6060, 3.0152, 16.3878, 0.1566 and 0.1161, respectively. Figure 4 shows the errors in estimation of the FR methods and the proposed FRBF Network approach.
LINGO software is used for solving the fuzzy regression methods. The training algorithm for the proposed FRBFN is coded in MATLAB and implemented on a notebook with a 2.0 GHz Intel Core 2 Duo CPU. The average relative performance of the proposed FRBF Network approach and the other FR methods, measured by SSE values and CPU time, is shown in Table 7.
Table 7 Relative performance of the considered FR methods and the FRBF Network approach for the test example
Methods  SSE  CPU (time/s) 

FLS  0.5390  0.1768 
GFLS  0.6060  0.9535 
SY  3.0152  0.3860 
HBS  4.5632  0.6021 
ADFLS  0.1778  2.9660 
IDFLS  0.1161  36.8333 
FRBFN  0.0770  233.6269 
Conclusion
In this study, we have reviewed the relevant articles on Fuzzy Regression and provided an easily computed approach to estimate FR models with fuzzy input and fuzzy output. We presented a new estimation approach, the Fuzzy Radial Basis Function Network, for Fuzzy Regression in the case that inputs and outputs are symmetric or nonsymmetric triangular fuzzy numbers. We derived a training algorithm for a three-layer FRBF Network consisting of input, hidden and output layers. In the training algorithm, inputs, outputs and weights are defined by triangular fuzzy numbers. The construction of the algorithm is quite simple, and the parameters of the FRBF Network, i.e., the fuzzy weights and normalization factors, are systematically updated using the training algorithm given in Sect. 3.1.
The effectiveness of the derived training algorithm is demonstrated through three numerical examples computed for the proposed FRBF Network approach using the Backpropagation algorithm. The examples show that our proposed approach performs better than the existing fuzzy regression methods based on Linear Programming and Fuzzy Least Squares.
This study presents one approach to deriving a training algorithm for an FRBF Network with fuzzy input, fuzzy output and fuzzy weights, as an alternative to the FR methods in the literature. The advantages of this approach are its simplicity and easy computation as well as its performance, while its disadvantage is that it requires more computation time than the other FR methods. The proposed approach is more suitable than the existing FR methods for two reasons: first, it is able to handle symmetric and nonsymmetric triangular fuzzy inputs and outputs; second, Examples 1 and 3 show that the FRBF Network approach is better than the existing FR methods in terms of SSE values and predicted intervals in estimation.
In conclusion, our proposed approach offers an efficient alternative procedure to estimate predicted intervals for the FR model with fuzzy input and output. As a limitation of our study, we focused only on the fuzzy regression model in which input and output are assumed to be symmetric or nonsymmetric triangular fuzzy numbers; accordingly, we considered the FRBF Network only when input, output and weights are triangular fuzzy numbers, and did not consider other types of fuzzy numbers. Although the discussion of this study is confined to simple regression with one input and one output, it can be generalized to cope with cases of multiple inputs and outputs. For future studies, more general fuzzy inputs, outputs and weights, such as trapezoidal fuzzy numbers, could be handled with our FRBF Network approach, and it could be applied to different FR models.
Acknowledgments
The authors are grateful for the valuable comments and suggestions from the respected reviewers, which have improved the quality of our study.
References
1. Alefeld G, Mayer G (2000) Interval analysis: theory and applications. J Comput Appl Math 121:421–464. doi: 10.1016/S0377-0427(00)00342-3
2. Alvisi S, Franchini M (2011) Fuzzy neural networks for water level and discharge forecasting with uncertainty. Environ Model Softw 26:523–537. doi: 10.1016/j.envsoft.2010.10.016
3. Apaydin A, Baser F (2010) Hybrid fuzzy least-squares regression analysis in claims reserving with geometric separation method. Insur Math Econ 47:113–122. doi: 10.1016/j.insmatheco.2010.07.001
4. Azadeh A, Khakestani M, Saberi M (2009) A flexible fuzzy regression algorithm for forecasting oil consumption estimation. Energy Policy 37:5567–5579. doi: 10.1016/j.enpol.2009.08.017
5. Bárdossy A (1990) Note on fuzzy regression. Fuzzy Sets Syst 37:65–75. doi: 10.1016/0165-0114(90)90064-D
6. Bárdossy A, Bogárdi I, Duckstein L (1993) Fuzzy nonlinear regression analysis of dose-response relationships. Eur J Oper Res 66:36–51. doi: 10.1016/0377-2217(93)90204-Z
7. Bezdek JC, Ehrlich R, Full W (1984) FCM: the fuzzy c-means clustering algorithm. Comput Geosci 10:191–203. doi: 10.1016/0098-3004(84)90020-7
8. Chang PT, Lee ES (1994) Fuzzy linear regression with spreads unrestricted in sign. Comput Math Appl 28:61–70. doi: 10.1016/0898-1221(94)00127-8
9. Chang YHO (2001) Hybrid fuzzy least-squares regression analysis and its reliability measures. Fuzzy Sets Syst 119:225–246. doi: 10.1016/S0165-0114(99)00092-5
10. Chen SP, Dang JF (2008) A variable spread fuzzy linear regression model with higher explanatory power and forecasting accuracy. Inf Sci 178:3973–3988. doi: 10.1016/j.ins.2008.06.005
11. Cheng CB, Stanley Lee E (2001) Fuzzy regression with radial basis function network. Fuzzy Sets Syst 119:291–301. doi: 10.1016/S0165-0114(99)00098-6
12. Cherkassky V, Mulier F (1998) Learning from data: concepts, theory, and methods. Wiley, New York
13. Choi SW, Lee D, Park JH, Lee IB (2003) Nonlinear regression using RBFN with linear submodels. Chemometr Intell Lab Syst 65:191–208. doi: 10.1016/S0169-7439(02)00109-0
14. Cobaner M, Unal B, Kisi O (2009) Suspended sediment concentration estimation by an adaptive neuro-fuzzy and neural network approaches using hydro-meteorological data. J Hydrol 367:52–61. doi: 10.1016/j.jhydrol.2008.12.024
15. D’Urso P (2003) Linear regression analysis for fuzzy/crisp input and fuzzy/crisp output data. Comput Stat Data Anal 42:47–72. doi: 10.1016/S0167-9473(02)00117-2
16. Diamond P (1988) Fuzzy least squares. Inf Sci 46:141–157. doi: 10.1016/0020-0255(88)90047-3
17. Dunyak JP, Wunsch D (2000) Fuzzy regression by fuzzy number neural networks. Fuzzy Sets Syst 112:371–380. doi: 10.1016/S0165-0114(97)00393-X
18. Haddadnia J, Faez K, Ahmadi M (2003) A fuzzy hybrid learning algorithm for radial basis function neural network with application in human face recognition. Pattern Recognit 36:1187–1202. doi: 10.1016/S0031-3203(02)00231-5
19. Hathaway R, Bezdek JC, Pedrycz W (1996) A parametric model for fusing heterogeneous data. IEEE Trans Fuzzy Syst 4:270–281
20. Hojati M, Bector CR, Smimou K (2005) A simple method for computation of fuzzy linear regression. Eur J Oper Res 166:172–184. doi: 10.1016/j.ejor.2004.01.039
21. Ishibuchi H, Kwon K, Tanaka H (1995) A learning algorithm of fuzzy neural networks with triangular fuzzy weights. Fuzzy Sets Syst 71:277–293. doi: 10.1016/0165-0114(94)00281-B
22. Ishibuchi H, Nii M (2001) Fuzzy regression using asymmetric fuzzy coefficients and fuzzified neural networks. Fuzzy Sets Syst 119:273–290. doi: 10.1016/S0165-0114(98)00370-4
23. Ishibuchi H, Tanaka H (1992) Fuzzy regression analysis using neural networks. Fuzzy Sets Syst 50:257–265. doi: 10.1016/0165-0114(92)90224-R
24. Ishibuchi H, Tanaka H, Okada H (1993) An architecture of neural networks with interval weights and its application to fuzzy regression analysis. Fuzzy Sets Syst 57:27–39. doi: 10.1016/0165-0114(93)90118-2
25. Kao C, Chyu CL (2002) A fuzzy linear regression model with better explanatory power. Fuzzy Sets Syst 126:401–409. doi: 10.1016/S0165-0114(01)00069-0
26. Kao C, Chyu CL (2003) Least-squares estimates in fuzzy regression analysis. Eur J Oper Res 148:426–435. doi: 10.1016/S0377-2217(02)00423-X
27. Khan UT, Valeo C (2015) A new fuzzy linear regression approach for dissolved oxygen prediction. Hydrol Sci J 60:1096–1119. doi: 10.1080/02626667.2014.900558
28. Khashei M, Reza Hejazi S, Bijari M (2008) A new hybrid artificial neural networks and fuzzy regression model for time series forecasting. Fuzzy Sets Syst 159:769–786. doi: 10.1016/j.fss.2007.10.011
29. Klir GJ, Yuan B (1995) Fuzzy sets and fuzzy logic: theory and applications. Prentice Hall International Inc., New Jersey
30. Lu J, Wang R (2009) An enhanced fuzzy linear regression model with more flexible spreads. Fuzzy Sets Syst 160:2505–2523. doi: 10.1016/j.fss.2009.02.023
31. Ming M, Friedman M, Kandel A (1997) General fuzzy least squares. Fuzzy Sets Syst 88:107–118. doi: 10.1016/S0165-0114(96)00051-6
32. Mitra S, Basak J (2001) FRBF: a fuzzy radial basis function network. Neural Comput Appl 10:244–252. doi: 10.1007/s52100180529
33. Modarres M, Nasrabadi E, Nasrabadi MM (2005) Fuzzy linear regression models with least square errors. Appl Math Comput 163:977–989. doi: 10.1016/j.amc.2004.05.004
34. Moore RE (1979) Methods and applications of interval analysis. SIAM, Philadelphia
35. Mosleh M, Otadi M, Abbasbandy S (2010) Evaluation of fuzzy regression models by fuzzy neural network. J Comput Appl Math 234:825–834. doi: 10.1016/j.cam.2010.01.046
36. Nasrabadi MM, Nasrabadi E, Nasrabady AR (2005) Fuzzy linear regression analysis: a multi-objective programming approach. Appl Math Comput 163:245–251. doi: 10.1016/j.amc.2004.02.008
37. Özelkan EC, Duckstein L (2000) Multi-objective fuzzy regression: a general framework. Comput Oper Res 27:635–652. doi: 10.1016/S0305-0548(99)00110-0
38. Peters G (1994) Fuzzy linear regression with fuzzy intervals. Fuzzy Sets Syst 63:45–55. doi: 10.1016/0165-0114(94)90144-9
39. Roh SB, Joo SC, Pedrycz W, Oh SK (2010) The development of fuzzy radial basis function neural networks based on the concept of information ambiguity. Neurocomputing 73:2464–2477. doi: 10.1016/j.neucom.2010.05.006
40. Sakawa M, Yano H (1992a) Fuzzy linear regression analysis for fuzzy input-output data. Inf Sci 63:191–206. doi: 10.1016/0020-0255(92)90069-K
41. Sakawa M, Yano H (1992b) Multiobjective fuzzy linear regression analysis for fuzzy input-output data. Fuzzy Sets Syst 47:173–181. doi: 10.1016/0165-0114(92)90175-4
42. Sánchez JdA (2006) Calculating insurance claim reserves with fuzzy regression. Fuzzy Sets Syst 157:3091–3108. doi: 10.1016/j.fss.2006.07.003
43. Shakouri GH, Nadimi R (2009) A novel fuzzy linear regression model based on a non-equality possibility index and optimum uncertainty. Appl Soft Comput 9:590–598. doi: 10.1016/j.asoc.2008.08.005
44. Staiano A, Tagliaferri R, Pedrycz W (2006) Improving RBF networks performance in regression tasks by means of a supervised fuzzy clustering. Neurocomputing 69:1570–1581. doi: 10.1016/j.neucom.2005.06.014
45. Tanaka H (1987) Fuzzy data analysis by possibilistic linear models. Fuzzy Sets Syst 24:363–375. doi: 10.1016/0165-0114(87)90033-9
46. Tanaka H, Uejima S, Asai K (1982) Linear regression analysis with fuzzy model. IEEE Trans Syst Man Cybern 12:903–907. doi: 10.1109/TSMC.1982.4308925
47. Tanaka H, Watada J (1988) Possibilistic linear systems and their application to the linear regression model. Fuzzy Sets Syst 27:275–289. doi: 10.1016/0165-0114(88)90054-1
48. Yang MS, Lin TS (2002) Fuzzy least-squares linear regression analysis for fuzzy input-output data. Fuzzy Sets Syst 126:389–399. doi: 10.1016/S0165-0114(01)00066-5
49. Yapıcı Pehlivan N (2005) Fuzzy estimators in nonparametric regression. PhD Thesis, Selçuk University, Konya
50. Yilmaz I, Kaynar O (2011) Multiple regression, ANN (RBF, MLP) and ANFIS models for prediction of swell potential of clayey soils. Expert Syst Appl 38:5958–5966. doi: 10.1016/j.eswa.2010.11.027
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.