Introduction

Regression analysis is one of the most widely used estimation methods; it is applied to determine the functional relationship between independent and dependent variables. Fuzzy regression (FR) is a fuzzy extension of classical regression in which some elements of the model are represented by fuzzy numbers [35].

Fuzzy linear regression (FLR), first proposed by Tanaka et al. [46], minimizes the total spread of the fuzzy parameters subject to the constraint that the supports of the estimated values cover the supports of the observed values for a certain \(\alpha \)-level. In the light of Tanaka et al.’s [46] study, several methods have been developed for FR models. Another approach to the FLR method was proposed by Diamond [16], who determines the fuzzy parameters from equations analogous to the conventional normal equations, derived with a suitable metric. In general, there are two main approaches in FR analysis: linear programming (LP)-based methods and fuzzy least squares (FLS)-based methods. The first is based on minimizing fuzziness as an optimality criterion [46, 8, 20, 33, 36–38, 40–42, 45, 47], whereas the second is based on least squares (LS) of errors as a fitting criterion [3, 9, 15, 16, 25–27, 31, 48].

Many studies related to FR have appeared in the literature since it was proposed by Tanaka et al. [46]. Bardossy [5] developed a general form of regression equations for fuzzy numbers and formulated the FR problem as a mathematical programming problem. Bardossy et al. [6] introduced a general methodology for FR and applied it to an actual hydrological case study involving the imprecise relationship between soil electrical resistivity and hydraulic permeability. Sakawa and Yano [40] formulated three types of problems for obtaining FLR models, where both input and output data are fuzzy numbers, and developed LP-based methods for solving them. Sakawa and Yano [41] introduced three types of multiobjective programming (MOP) problems for obtaining FLR models with fuzzy input and fuzzy output data; they developed an LP-based interactive decision-making method to derive a solution of the MOP problems satisfying the decision maker. Ming et al. [31] described a model for LS fitting of fuzzy input and fuzzy output data. Kao and Chyu [26] introduced an LS method under a fuzzy environment to handle fuzzy observations in regression analysis for three cases: crisp input-fuzzy output, fuzzy input-fuzzy output, and non-triangular fuzzy observations. Yang and Lin [48] proposed two estimation methods along with an FLS approach for FLR models with fuzzy inputs, fuzzy outputs and fuzzy parameters. Hojati et al. [20] proposed a simple goal-programming-like approach to the computation of FR for two cases: crisp inputs-fuzzy outputs and fuzzy inputs-fuzzy outputs. Chen and Dang [10] proposed a three-phase method to construct the FR model with variable spreads, resolving the problem of increasing spreads. Lu and Wang [30] proposed an enhanced fuzzy linear regression model (FLR\(_{\textit{FS}})\). Shakouri and Nadimi [43] introduced an approach to finding the parameters of an FLR model with crisp inputs and fuzzy outputs.
Khan and Valeo [27] introduced a method, an extension of Diamond’s [16] FLS method, for FLR with fuzzy regressors, regressand and coefficients.

Many Neural Network (NN) models are similar or identical to well-known statistical techniques such as linear regression, polynomial regression, nonparametric regression, discriminant analysis, principal components analysis and cluster analysis. The Radial Basis Function Network (RBFN) is a special kind of NN that consists of an input layer, a single hidden layer and an output layer. It has radial basis functions in its hidden units and linear functions in its output units, with adjustable weights. In recent years, various fuzzified versions of NNs and the RBF Network have been developed for linear, nonlinear and nonparametric regression models.

NN models have been applied to FR analysis by various researchers. For example, Ishibuchi and Tanaka [23] introduced simple and powerful methods for FR analysis using NNs. Ishibuchi et al. [24] proposed an architecture of Fuzzy Neural Networks (FNN) with crisp inputs, interval weights and interval outputs for FR analysis. Ishibuchi et al. [21] introduced an architecture of FNN with triangular fuzzy weights. Ishibuchi and Nii [22] proposed nonlinear fuzzy regression methods based on FNN with asymmetric fuzzy weights. Cheng and Lee [11] proposed an FRBF Network for FR analysis in which the weights between the input and hidden units and the outputs are fuzzy numbers, while the inputs and the weights between the hidden and output units are crisp numbers. Dunyak and Wunsch [17] described a method for nonlinear FR using NN models. Khashei et al. [28] proposed a hybrid method, based on the basic concepts of NN and FR models, that yields more accurate results with incomplete data sets and overcomes the limitations of both methods. Mosleh et al. [35] presented a novel hybrid method based on FNN to approximate the fuzzy parameters of fuzzy linear and nonlinear regression models with crisp inputs and fuzzy output. Cobaner et al. [14] proposed an adaptive neuro-fuzzy approach to estimate suspended sediment concentration in rivers; the potential of the neuro-fuzzy technique was compared with Generalized Regression Neural Networks (GRNN), Radial Basis Function Neural Networks (RBFNN), Multi-layer Perceptrons (MLP) and also two different sediment rating curves (SRC). Haddadnia et al. [18] presented a fuzzy hybrid learning algorithm for the RBFNN. Roh et al. [39] presented a Fuzzy RBFNN based on the concept of information ambiguity. Hathaway et al. [19] presented a model that integrates three data types: numbers, intervals and linguistic assessments. Staiano et al. [44] described a novel approach to fuzzy clustering as a summation of a number of linear local regression models.
Their approach is more effective in the training of the RBFNN, leading to improved performance with respect to other clustering algorithms. Alvisi and Franchini [2] proposed an NN-based approach for water level (or discharge) forecasting under uncertainty, in which the parameters of the NN, i.e., the weights and biases, are represented by fuzzy numbers. Mitra and Basak [32] proposed a fuzzy version of the RBF Network.

To the best of the authors’ knowledge, there is no study on an FRBF Network dealing with fuzzy regression with fuzzy input and fuzzy output. Therefore, we propose an FRBF Network with fuzzy input, fuzzy output and fuzzy weights as an alternative to the existing FR methods in the literature. To show its appropriateness and effectiveness, the proposed method is applied to three numerical examples and its performance is compared with existing FR methods. The results indicate that the proposed method is effective for estimating the output under a fuzzy environment.

The remainder of the paper is organized as follows: in Sect. 2, fuzzy regression methods in the literature are reviewed. Our proposed Fuzzy Radial Basis Function Network approach is presented in Sect. 3. In Sect. 4, three numerical examples are given to compare the proposed approach with other FR methods. Finally, conclusions are drawn in Sect. 5.

Fuzzy regression methods

Fuzzy linear regression was first introduced by Tanaka et al. [46], and since then several different methods have been proposed for FR by various researchers. In general, fuzzy regression methods are divided into two categories: the first is based on the linear programming (LP) approach and the second on the fuzzy least squares (FLS) approach. The first class, which minimizes the total vagueness of the estimated values of the output, includes Tanaka et al.’s [46] method and its extensions [20, 33, 40, 45, 46]. The second class includes FLS methods that minimize the total square of errors in the estimated values [15, 16, 31, 48].

In this section, we review the widely used fuzzy regression methods of Fuzzy Least Squares (FLS), General Fuzzy Least Squares (GFLS), Sakawa–Yano (SY), Hojati–Bector–Smimou (HBS), Approximate-Distance Fuzzy Least Squares (ADFLS) and Interval-Distance Fuzzy Least Squares (IDFLS).

To determine the parameters of FR by minimizing the total square of errors in the estimated values, the FLS and GFLS methods were proposed by Diamond [16] and Ming et al. [31], respectively. The fuzzy regression model for the FLS and GFLS methods is considered as follows:

$$\begin{aligned} Y_i =a_0 +a_1 X_i,\quad i=1,2,\ldots ,n \end{aligned}$$
(1)

where \(a_0 ,a_1 \in \mathfrak {R}\) are nonfuzzy parameters, \(X_i ,Y_i \in E^1\) are fuzzy numbers and \(E^1\) is the fuzzy number space. \(X_i =(x_i ,\underline{f}_i ,\overline{f} _i )_T \) are fuzzy inputs and \(Y_i =(y_i ,\underline{e}_i ,\overline{e} _i )_T \) are fuzzy outputs, both considered as triangular fuzzy numbers (TFNs). For the fuzzy inputs, \(x_i \) is the center, and \(\underline{f}_i \) and \(\overline{f} _i \) are the left and right spreads of \(X_i \), respectively. It is assumed that \(x_i -\underline{f}_i \ge 0\).

The objective of the FLS and GFLS methods is defined as follows:

$$\begin{aligned} \mathrm{Minimize}\,r(a_0 ,a_1 )=\sum \limits _{i=1}^n {d(a_0 +a_1 X_i ,Y_i )^2} \end{aligned}$$
(2)

In Eq. (2), two cases arise according to whether \(a_1 \ge 0\) or \(a_1 <0\). In the case \(a_1 \ge 0\), \(d(a_0 +a_1 X_i ,Y_i )^2\) is given by

$$\begin{aligned} d(a_0 +a_1 X_i ,Y_i )^2= & {} (a_0 +a_1 x_i -y_i -a_1 \underline{f}_i +\underline{e}_i )^2 \nonumber \\&+\,(a_0 +a_1 x_i -y_i +a_1 \overline{f_i } -\overline{e} _i )^2 \nonumber \\&+\,(a_0 +a_1 x_i -y_i )^2 \end{aligned}$$
(3)
$$\begin{aligned} d(a_0 +a_1 X_i ,Y_i )^2= & {} (a_0 +a_1 x_i -y_i -a_1 \underline{f}_i +\underline{e}_i )^2\nonumber \\&+\,(a_0 +a_1 x_i -y_i +a_1 \overline{f_i } -\overline{e} _i )^2\nonumber \\&+\,2(a_0 +a_1 x_i -y_i )^2 \end{aligned}$$
(4)

for FLS and GFLS, respectively. In Eqs. (3) and (4), the parameters \(a_0 \) and \(a_1 \) are derived via \(\frac{\partial r}{\partial a_0 }=0\) and \(\frac{\partial r}{\partial a_1 }=0\) (for the case \(a_1 <0\), see [16, 31]).
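Because each term in Eq. (3) is linear in \(a_0 \) and \(a_1 \) once the sign of \(a_1 \) is fixed, setting the derivatives to zero is equivalent to an ordinary least-squares fit on a stacked system. The following sketch is our illustration, not code from the paper; the function name and the triangular data are hypothetical, and it covers only the case \(a_1 \ge 0\):

```python
import numpy as np

def fls_fit(x, lf, rf, y, le, re):
    """Minimize the FLS objective of Eqs. (2)-(3), assuming a1 >= 0.

    Each observation contributes three linear residuals in (a0, a1):
      a0 + a1*(x - lf) - (y - le)   # left end points
      a0 + a1*(x + rf) - (y + re)   # right end points
      a0 + a1*x - y                 # centers
    so ordinary least squares on the stacked system gives the optimum.
    """
    X = np.concatenate([x - lf, x + rf, x])
    Y = np.concatenate([y - le, y + re, y])
    A = np.column_stack([np.ones_like(X), X])
    (a0, a1), *_ = np.linalg.lstsq(A, Y, rcond=None)
    return a0, a1

# Hypothetical data generated from Y = 1 + 2X with matching spreads
x = np.array([1.0, 2.0, 3.0, 4.0])
lf = rf = np.array([0.5, 0.5, 1.0, 1.0])
y = 1.0 + 2.0 * x
le = re = 2.0 * lf
a0, a1 = fls_fit(x, lf, rf, y, le, re)
```

For GFLS, the only change is that the center residual of Eq. (4) is weighted by \(\sqrt{2}\) before stacking.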

Sakawa and Yano [40], and Hojati et al. [20] considered the following fuzzy regression model:

$$\begin{aligned}&Y_i =A_0 +A_1 X_{i1} +\cdots +A_k X_{ik},\nonumber \\&\quad i=1,2,\ldots ,n;\quad j=0,1,\ldots ,k \end{aligned}$$
(5)

where \(X_i =(x_i ,f_i )_T \), \(Y_i =(y_i ,e_i )_T \) and parameters \(A_j =(a_j ,c_j )\) are considered as symmetric TFNs.

Sakawa and Yano [40] formulated three types of problems for obtaining the FLR models with fuzzy input and fuzzy output using the three indices for equality between two fuzzy numbers as follows:

$$\begin{aligned} \mathrm{Minimize}\,&\sum \limits _{i=1}^n {(Y_{i,0}^R -Y_{i,0}^L } ) \nonumber \\ \mathrm{subject\,to}&\nonumber \\&y_i -\sum \limits _{j=0}^k {a_j x_{ij}} \le L^{-1}(\alpha )\nonumber \\&\times \sum \limits _{j\in J_1 ,J_2 } {[a_j f_{ij} +c_j x_{ij} +L^{-1}(\alpha )c_j f_{ij} ]} \nonumber \\&+ L^{-1}(\alpha )\sum \limits _{j\in J_3 } {[-a_j f_{ij} +c_j x_{ij} +L^{-1}(\alpha )c_j f_{ij} ]}\nonumber \\&+ L^{-1}(\alpha )e_i\nonumber \\&-y_i +\sum \limits _{j=0}^k a_j x_{ij} \le L^{-1}(\alpha )\\&\times \sum \limits _{j\in J_1 } {[a_j f_{ij} +c_j x_{ij} -L^{-1}(\alpha )c_j f_{ij} ]} \nonumber \\&\quad +L^{-1}(\alpha ) \sum \limits _{j\in J_2 ,J_3 } [-a_j f_{ij} +c_j x_{ij} +L^{-1}(\alpha )c_j f_{ij} ] \nonumber \\&\quad +L^{-1}(\alpha )e_i \nonumber \\&c_j \ge 0,\quad j=0,1,\ldots ,k;\quad i=1,2,\ldots ,n \nonumber \end{aligned}$$
(6)

Hojati et al. [20] proposed a goal-programming-like approach which minimizes the total deviations of the upper and lower points of the \(\alpha \)-certain predicted intervals from those of the associated observed intervals, for the FLR model with fuzzy input and fuzzy output:

$$\begin{aligned} \mathrm{Minimize}\,&\sum \limits _{i=1}^{n} d_{ilU}^+ +d_{ilU}^- +d_{ilL}^+ +d_{ilL}^- +d_{irU}^+ \nonumber \\&\quad +d_{irU}^- +d_{irL}^+ +d_{irL}^- \nonumber \\ \mathrm{subject\,to}&\nonumber \\&\sum \limits _{j=0}^{1} (a_j +(1-\alpha )c_j )(x_{ij} -(1-\alpha )f_{ij} )\nonumber \\&\quad +d_{ilU}^{+} -d_{ilU}^{-} =y_i +(1-\alpha )e_i \nonumber \\&\sum \limits _{j=0}^{1} (a_j +(1-\alpha )c_j )(x_{ij} +(1-\alpha )f_{ij} )\nonumber \\&\quad +d_{irU}^+ -d_{irU}^- =y_i +(1-\alpha )e_i \nonumber \\&\sum \limits _{j=0}^1 (a_j -(1-\alpha )c_j )(x_{ij} -(1-\alpha )f_{ij} )\nonumber \\&\quad +d_{ilL}^+ -d_{ilL}^- =y_i -(1-\alpha )e_i \nonumber \\&\sum \limits _{j=0}^1 (a_j -(1-\alpha )c_j )(x_{ij} +(1-\alpha )f_{ij} )\nonumber \\&\quad +d_{irL}^+ -d_{irL}^- =y_i -(1-\alpha )e_i \nonumber \\&d_{ilU}^{+},\quad d_{ilU}^{-},\quad d_{ilL}^{+},\quad d_{ilL}^{-},\quad d_{irU}^{+},\nonumber \\&d_{irU}^{-},\quad d_{irL}^{+},\quad d_{irL}^{-} \ge 0 \nonumber \\&a_j\ \mathrm{free},\quad c_j \ge 0,\quad j=0,1,\nonumber \\&i=1,2,\ldots ,n \end{aligned}$$
(7)

where \(d_{ilU}^+ , d_{ilU}^- , d_{ilL}^+ , d_{ilL}^- , d_{irU}^+ , d_{irU}^- , d_{irL}^+ \) and \(d_{irL}^- \) are deviation variables; “l” and “r” refer to the left (lower) and right (upper) points of the input intervals, and “U” and “L” refer to the upper and lower points of the observed and predicted intervals, respectively (for details, see [20, 40]).
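For the single-regressor case, the equality constraints of Eq. (7) can be assembled mechanically. The sketch below is our illustration (the helper name `hbs_rows` is hypothetical); it builds, for one observation and one \(\alpha \)-level, the coefficient rows over the parameter vector \([a_0 ,a_1 ,c_0 ,c_1 ]\), with the intercept taken as \(x_{i0}=1\), \(f_{i0}=0\):

```python
import numpy as np

def hbs_rows(x, f, y, e, alpha):
    """Coefficient rows and right-hand sides of the four equality
    constraints in Eq. (7) for one observation (j = 0, 1)."""
    t = 1.0 - alpha
    rows, rhs = [], []
    for xs in (-1.0, 1.0):          # left / right end point of the input interval
        for cs in (1.0, -1.0):      # upper / lower bound of the predicted interval
            x1 = x + xs * t * f     # x_{i1} -/+ (1 - alpha) f_{i1}
            # row . [a0, a1, c0, c1] = sum_j (a_j + cs*(1-alpha)*c_j) * end point
            rows.append([1.0, x1, cs * t, cs * t * x1])
            rhs.append(y + cs * t * e)
    return np.array(rows), np.array(rhs)
```

Augmenting each row with a nonnegative deviation pair \(d^{+}-d^{-}\) gives the LP equality constraints, and any LP solver can then minimize the sum of the deviations subject to \(c_j \ge 0\).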

Yang and Lin [48] proposed alternative FLS methods, called Approximate-distance fuzzy least squares (ADFLS) and Interval-distance fuzzy least squares (IDFLS), for the following FLR model with fuzzy input and fuzzy output:

$$\begin{aligned}&Y_i =A_0 +A_1 X_{i1} +\cdots +A_k X_{ik} ,\nonumber \\&\quad j=0,1,\ldots ,k;\quad i=1,2,\ldots ,n \end{aligned}$$
(8)

where \(X_{ij} =(x_{ij} ,\,\bar{f}_{ij} ,\underline{f}_{ij} )\), \(Y_i =(y_i ,\,\bar{e}_i ,\underline{e}_i )\) and parameters \(A_j =(a_j ,\,\bar{c}_j ,\underline{c}_j )\) are considered as LR fuzzy numbers.

In the ADFLS method, the objective function is defined as follows:

$$\begin{aligned}&\mathrm{Minimize}\,J(A_0 ,A_1 ,\ldots ,A_k )\nonumber \\&\quad =\sum \limits _{i=1}^n {d_{LR}^2 (Y_i ,A_0 +A_1 X_{i1} +\cdots +A_k X_{ik} )} \nonumber \\&\quad =\sum \limits _{i=1}^n (y_i -\tilde{m}_i )^2 +\sum \limits _{i=1}^n [(y_i -(1-\alpha )\underline{e}_i)\nonumber \\&\qquad -(\tilde{m}_i -(1-\alpha )\tilde{l}_i )]^{2} \\&\qquad +\sum \limits _{i=1}^n [(y_i +(1-\alpha )\bar{e}_i )\nonumber \\&\qquad -(\tilde{m}_i +(1-\alpha )\tilde{r}_i )]^{2}\nonumber \end{aligned}$$
(9)

The objective function \(J(A_0 ,A_1 ,\ldots ,A_k )\) is minimized over the \(A_j \) subject to \(\bar{c}_j \ge 0\) and \(\underline{c}_j \ge 0\) in the ADFLS method. In Eq. (9), \(\tilde{m}_i \), \(\tilde{l}_i \), \(\tilde{r}_i \), \(H_{1}\) and \(H_{2}\) are defined as follows:

$$\begin{aligned} \tilde{m}_i =a_0 +\sum \limits _{p=1}^k {a_p x_{ip} } \end{aligned}$$
$$\begin{aligned} \tilde{l}_i= & {} \underline{c}_0 +\sum \limits _{A_p \in H_1 } {\left[ {s_{ip} (a_p \underline{f}_{ip} +x_{ip} \underline{c}_p )+(1-s_{ip} )(a_p \underline{f}_{ip} -x_{ip} \bar{c}_p )} \right] } \\&+ \sum \limits _{A_p \in H_2 } {\left[ {s_{ip} (x_{ip} \underline{c}_p -a_p \bar{f}_{ip} )+(1-s_{ip} )(-a_p \bar{f}_{ip} -x_{ip} \bar{c}_p )} \right] } \\ \tilde{r}_i= & {} \bar{c}_0 +\sum \limits _{A_p \in H_1 } {\left[ {s_{ip} (a_p \bar{f}_{ip} +x_{ip} \bar{c}_p )+(1-s_{ip} )(a_p \bar{f}_{ip} -x_{ip} \underline{c}_p )} \right] } \\&+ \sum \limits _{A_p \in H_2 } {\left[ {s_{ip} (x_{ip} \bar{c}_p -a_p \underline{f}_{ip} )+(1-s_{ip} )(-a_p \underline{f}_{ip} -x_{ip} \underline{c}_p )} \right] } \\ \end{aligned}$$

and

$$\begin{aligned} H_1= & {} \left\{ {A_p \left| {A_p >0,p\in \{1,2,\ldots ,k\}} \right. } \right\} ,\\ H_2= & {} \left\{ {A_p \left| {A_p <0,p\in \{1,2,\ldots ,k\}} \right. } \right\} ,\\ s_{ip}= & {} \left\{ {\begin{array}{l} 1,\quad if\quad X_{ip} \ge 0\, \\ 0,\quad if\quad X_{ip} <0 \\ \end{array}} \right. \end{aligned}$$
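Reading the input spreads as \(\underline{f}_{ip}\), \(\bar{f}_{ip}\) (consistent with Eq. (8)) and taking the sign \(s_{ip}\) from the input center, the quantities \(\tilde{m}_i \), \(\tilde{l}_i \) and \(\tilde{r}_i \) can be evaluated as below. This is our sketch for a single observation, not the authors' code, and the function name is ours:

```python
def adfls_center_spreads(a, cl, cu, x, fl, fu):
    """Center m~, left spread l~ and right spread r~ of
    A_0 + A_1 X_1 + ... + A_k X_k for one observation.

    a      : (k+1,) centers a_0..a_k of the LR fuzzy parameters A_j
    cl, cu : (k+1,) left / right spreads of A_j
    x      : (k,)   input centers x_{i1}..x_{ik}
    fl, fu : (k,)   left / right input spreads
    """
    m = a[0] + sum(a[p + 1] * x[p] for p in range(len(x)))
    l, r = cl[0], cu[0]
    for p in range(len(x)):
        ap, xp = a[p + 1], x[p]
        s = 1.0 if xp >= 0 else 0.0        # s_{ip}, sign taken from the center
        if ap > 0:                          # A_p in H_1
            l += s * (ap * fl[p] + xp * cl[p + 1]) + (1 - s) * (ap * fl[p] - xp * cu[p + 1])
            r += s * (ap * fu[p] + xp * cu[p + 1]) + (1 - s) * (ap * fu[p] - xp * cl[p + 1])
        elif ap < 0:                        # A_p in H_2
            l += s * (xp * cl[p + 1] - ap * fu[p]) + (1 - s) * (-ap * fu[p] - xp * cu[p + 1])
            r += s * (xp * cu[p + 1] - ap * fl[p]) + (1 - s) * (-ap * fl[p] - xp * cl[p + 1])
    return m, l, r
```

Plugging these three quantities into Eq. (9) gives the ADFLS objective for observation i.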

In the IDFLS method, the objective function is defined as follows:

$$\begin{aligned}&\mathrm{Minimize}\,\rho (A_0 ,A_1 ,\ldots ,A_k )\nonumber \\&\quad =\sum \limits _{i=1}^{n} D^2 \left( Y_i ,A_0 +A_1 X_{i1} \right. \nonumber \\&\left. \quad +\cdots +A_k X_{ik} \right) \nonumber \\&\quad =\sum \limits _{i=1}^{n} \int \limits _0^{1} ((Y_{i,\alpha }^L -(\tilde{A}\otimes \tilde{X}_i )_\alpha ^L )^2+(Y_{i,\alpha }^U -(\tilde{A}\otimes \tilde{X}_i )_\alpha ^U )^2)d\alpha \nonumber \\ \end{aligned}$$
(10)

The objective function \(\rho (A_0 ,A_1 ,\ldots ,A_k )\) is minimized over the \(A_j \) in the IDFLS method (for details of ADFLS and IDFLS, see [48]).

Proposed approach

The Radial Basis Function (RBF) Network is a special kind of NN which has an input layer, a single hidden layer and an output layer. The hidden layer contains hidden units, also called radial basis function units, each with two parameters describing the location of the function’s center and its deviation (or width). Hidden units measure the distance between an input vector and the function’s center. There are two sets of weights, one connecting the input layer to the hidden layer and the other connecting the hidden layer to the output layer. The weights between the input and hidden layers, also called centers, are determined by a clustering method such as Fuzzy c-Means Clustering (FCM). The weights connecting the hidden layer to the output layer form linear combinations of the hidden units to generate the outputs of the RBF Network. The RBF Network is trained by unsupervised learning or by combining supervised and unsupervised learning [12, 13, 50].

In this section, we propose an FRBF Network approach for the FR model with fuzzy inputs and fuzzy outputs, which are symmetric or nonsymmetric TFNs. Our proposed FRBF Network includes fuzzy inputs (\(X_p )\), fuzzy outputs (\(Y_p )\), fuzzy weights between the input and hidden units (\(W_{ij} )\) and fuzzy weights between the hidden and output units (\(V_j )\). In this approach, the weights \(W_{ij} \) and the normalization factors \(\sigma _j^2 \) are determined by unsupervised learning. The \(W_{ij} \) are initialized by the modified FCM algorithm given in Sect. 3.2, and the \(V_j \) are randomly selected as TFNs. Then, \(W_{ij} \), \(V_j \) and \(\sigma _j^2 \) are updated by the Back-Propagation (BP) algorithm, which is a supervised learning method.

The \(\alpha \)-level sets of the fuzzy input \(X_{pi} \) and the fuzzy output \(Y_p \) are expressed as \([X_{pi} ]_\alpha =[X_{pi}^L ,X_{pi}^U ]\) and \([Y_p ]_\alpha =[Y_p^L ,Y_p^U ]\), respectively. The weights between the input and hidden units are symmetric TFNs denoted as \(W_{ij} =(w_{ij}^L ,w_{ij}^C ,w_{ij}^U )\), where \(w_{ij}^L \) is the lower limit, \(w_{ij}^C\) the center and \(w_{ij}^U \) the upper limit of \(W_{ij}\). The \(\alpha \)-level sets of \(W_{ij} \) are written as follows:

$$\begin{aligned}&[W_{ij} ]_\alpha =\left[ {[W_{ij} ]_\alpha ^L ,\,[W_{ij} ]_\alpha ^U } \right] \nonumber \\&\quad =\left[ w_{ij}^L \left( 1-\frac{\alpha }{2}\right) +w_{ij}^U \left( \frac{\alpha }{2}\right) ,w_{ij}^L \left( \frac{\alpha }{2}\right) +w_{ij}^U \left( 1-\frac{\alpha }{2}\right) \right] \nonumber \\ \end{aligned}$$
(11)

The weights between the hidden and output units are TFNs denoted as \(V_j =(v_j^L ,v_j^C ,v_j^U )\). The \(\alpha \)-level sets of \(V_j\) can be written in the same manner as for \(W_{ij}\). Arithmetic operations on fuzzy numbers and intervals can be found in Alefeld and Mayer [1], Klir and Yuan [29] and Moore [34].
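As a concrete illustration (our sketch, not code from the paper), the \(\alpha \)-level set of a symmetric triangular fuzzy weight in Eq. (11) can be computed as:

```python
def alpha_cut(wL, wU, alpha):
    """alpha-level set of a symmetric TFN (w^L, w^C, w^U), Eq. (11).
    The center w^C = (w^L + w^U) / 2 is implicit in the symmetric form."""
    lo = wL * (1 - alpha / 2) + wU * (alpha / 2)
    hi = wL * (alpha / 2) + wU * (1 - alpha / 2)
    return lo, hi
```

Setting \(\alpha =0\) recovers the support \([w^L ,w^U ]\), and \(\alpha =1\) collapses the interval to the center.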

The output of hidden unit j is calculated as follows:

$$\begin{aligned} \left[ h_{pj}\right] _\alpha= & {} \exp \left( {-\frac{1}{2}\left( {\frac{\left\| {[X_{pi} ]_\alpha -[W_{ij} ]_\alpha } \right\| }{[\sigma _{pj} ]_\alpha }}\right) ^2}\right) \nonumber \\= & {} \exp \left( {-\frac{1}{2}\frac{\left( {\max \left\{ {\left| {[X_{pi} ]_{_\alpha }^L -[W_{ij} ]_{_\alpha }^L } \right| ,\left| {[X_{pi} ]_{_\alpha }^U -[W_{ij} ]_{_\alpha }^U } \right| } \right\} }\right) ^2}{\left[ {\sigma _{pj}^2 } \right] _\alpha }}\right) ,\nonumber \\&p=1,2,\ldots ,n;\quad i=1,2,\ldots ,n_I ;\quad j=1,2,\ldots ,n_H \end{aligned}$$
(12)

The normalization factor of hidden unit j is determined as follows:

$$\begin{aligned}&[\sigma _{pj}^2 ]_\alpha =\frac{1}{n}\sum \nolimits _{p=1}^n {( {[X_{pi} ]_\alpha -[W_{ij} ]_\alpha })^2} \,\nonumber \\&\quad =\frac{1}{n}\sum \nolimits _{p=1}^n {\left( {\max \left\{ {\left| {\,[X_{pi} ]_{_\alpha }^L -[W_{ij} ]_{_\alpha }^L } \right| ,\left| {\,[X_{pi} ]_{_\alpha }^U {-}[W_{ij} ]_{_\alpha }^U } \right| } \right\} }\right) ^2}\nonumber \\ \end{aligned}$$
(13)

The fuzzy estimated output for observation p of the FRBF Network is calculated by

$$\begin{aligned}{}[\hat{Y}_p ]_\alpha&=\frac{\sum \nolimits _{j=1}^{n_H } {[V_j ]_\alpha [h_{pj} ]_\alpha } }{\sum \nolimits _{j=1}^{n_H } {[h_{pj} ]_\alpha } }\,\nonumber \\&=\left[ {\frac{\sum \nolimits _{j=1}^{n_H } {[V_j ]_\alpha ^L .[h_{pj} ]_\alpha } }{\sum \nolimits _{j=1}^{n_H } {[h_{pj} ]_\alpha } },\frac{\sum \nolimits _{j=1}^{n_H } {[V_j ]_\alpha ^U .[h_{pj} ]_\alpha } }{\sum \nolimits _{j=1}^{n_H } {[h_{pj} ]_\alpha } }} \right] \end{aligned}$$
(14)

Let \(Y_p \) be the fuzzy output corresponding to the fuzzy input \(X_p \). The cost function for the \(\alpha \)-level sets of the fuzzy estimated output \(\hat{Y}_p \) and the corresponding fuzzy output \(Y_p \) is introduced in Ishibuchi et al. [24] as follows:

$$\begin{aligned} E_{p,\alpha }= & {} E_{p,\alpha }^L +E_{p,\alpha }^U \,\nonumber \\= & {} \frac{\alpha }{2}\left( {[Y_p ]_\alpha ^L -[\hat{Y}_p ]_\alpha ^L }\right) ^2+\frac{\alpha }{2}\left( {[Y_p ]_\alpha ^U -[\hat{Y}_p ]_\alpha ^U }\right) ^2 \end{aligned}$$
(15)

where \(E_{p,\alpha }^L \) and \(E_{p,\alpha }^U \) indicate the squared errors for the lower and upper limits of the \(\alpha \)-level sets, respectively. The total cost function E for the input–output pair (\(X_p \), \(Y_p )\) is computed as follows:

$$\begin{aligned} E=\sum \limits _{p=1}^n {\sum \limits _{i=1}^s {E_{p,\alpha _i } } } \end{aligned}$$
(16)
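As a concrete illustration of Eqs. (12)–(15), the forward pass and the per-level cost at a single \(\alpha \)-level can be sketched in Python. This is our code, not the authors’; the function name is ours, we assume a single output unit, and for several inputs we sum the squared end-point distances over i, which reduces to Eq. (12) when \(n_I =1\):

```python
import numpy as np

def forward_cost(XL, XU, WL, WU, VL, VU, sigma2, YL, YU, alpha):
    """One forward pass of the FRBF Network at a single alpha-level.

    XL, XU : (n_I,)      alpha-cut ends of the fuzzy input
    WL, WU : (n_I, n_H)  alpha-cut ends of input-hidden weights
    VL, VU : (n_H,)      alpha-cut ends of hidden-output weights
    sigma2 : (n_H,)      normalization factors [sigma^2]_alpha
    YL, YU : scalars     alpha-cut ends of the observed output
    """
    # Eq. (12): interval distance taken as the max of end-point distances
    dist = np.maximum(np.abs(XL[:, None] - WL), np.abs(XU[:, None] - WU))
    h = np.exp(-0.5 * (dist ** 2).sum(axis=0) / sigma2)   # (n_H,)
    # Eq. (14): normalized linear combination with fuzzy weights V_j
    yhatL = (VL * h).sum() / h.sum()
    yhatU = (VU * h).sum() / h.sum()
    # Eq. (15): cost for this alpha-level
    E = 0.5 * alpha * ((YL - yhatL) ** 2 + (YU - yhatU) ** 2)
    return yhatL, yhatU, E
```

In the training loop this computation is repeated for every pattern p and every chosen \(\alpha \)-level, and the per-level costs are accumulated into the total cost E of Eq. (16).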

Training algorithm of our proposed Fuzzy Radial Basis Function Network

The training algorithm of our proposed FRBF Network is based on Yapıcı Pehlivan [49]. In the algorithm, Choi et al.’s [13] BP algorithm for the RBF Network is fuzzified and integrated with Ishibuchi et al.’s [21] Back-Propagation (BP) algorithm for FNN. The framework of the training algorithm for the proposed FRBF Network is shown in Fig. 1.

The purpose of the proposed FRBF Network is to minimize the total error of the estimations through the training algorithm. Let \(\eta \) be a learning constant, \(\lambda \) a momentum constant, and t the iteration counter. The weights \(V_j\), \(W_{ij}\) and normalization factor \(\sigma _j^2\) are updated by the training algorithm as follows:

Fig. 1 Framework of the training algorithm of our proposed FRBF Network

The fuzzy weights \(V_j\) are updated by

$$\begin{aligned} v_j^L (t+1)&=v_j^L (t)+\Delta v_j^L (t)\end{aligned}$$
(17)
$$\begin{aligned} v_j^U (t+1)&=v_j^U (t)+\Delta v_j^U (t) \end{aligned}$$
(18)

If \(v_j^L >v_j^U \) then,

$$\begin{aligned} v_j^L (t+1)&=\min \left\{ v_j^L (t+1),v_j^U (t+1)\right\} \\ v_j^U (t+1)&=\max \left\{ v_j^L (t+1),v_j^U (t+1)\right\} . \end{aligned}$$

In Eqs. (17) and (18), \(\Delta v_j^L (t)\) and \(\Delta v_j^U (t)\) can be calculated using the cost function \(E_{p,\alpha } \) as follows:

$$\begin{aligned} \Delta v_j^L (t)=-\eta \frac{\partial \,E_{p,\alpha } }{\partial \,v_{jk}^L }+\lambda \,\Delta v_j^L (t-1)\end{aligned}$$
(19)
$$\begin{aligned} \Delta v_j^U (t)=-\eta \frac{\partial \,E_{p,\alpha } }{\partial \,v_{jk}^U }+\lambda \,\Delta v_j^U (t-1) \end{aligned}$$
(20)

The derivatives \(\frac{\partial \,E_{p,\alpha } }{\partial \,v_{jk}^L }\) and \(\frac{\partial \,E_{p,\alpha } }{\partial \,v_{jk}^U }\) in Eqs. (19) and (20) can be written as follows:

$$\begin{aligned} \frac{\partial \,E_{p,\alpha } }{\partial \,v_{jk}^L }= & {} -\alpha \left( [Y_{pk} ]_\alpha ^L -[\hat{Y}_{pk} ]_\alpha ^L\right) h_{pj} \left( {1-\frac{\alpha }{2}}\right) \\&-\,\alpha \left( [Y_{pk} ]_\alpha ^U -[\hat{Y}_{pk} ]_\alpha ^U\right) h_{pj} \left( {\frac{\alpha }{2}}\right) \\ \frac{\partial \,E_{p,\alpha } }{\partial \,v_{jk}^U }= & {} -\alpha \left( [Y_{pk} ]_\alpha ^L -[\hat{Y}_{pk} ]_\alpha ^L\right) h_{pj} \left( {\frac{\alpha }{2}}\right) \\&-\,\alpha \left( [Y_{pk} ]_\alpha ^U -[\hat{Y}_{pk} ]_\alpha ^U\right) h_{pj} \left( {1-\frac{\alpha }{2}}\right) \end{aligned}$$
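Eqs. (17)–(20) combined with the derivatives above give the following update for one hidden unit j and a single output unit (our sketch; the function and variable names are ours):

```python
def update_v(vL, vU, h, yL, yU, yhatL, yhatU, alpha, eta, lam,
             dL_prev=0.0, dU_prev=0.0):
    """Update the fuzzy weight V_j for one hidden unit j."""
    # Derivatives of E_{p,alpha} with respect to v_j^L and v_j^U
    gL = (-alpha * (yL - yhatL) * h * (1 - alpha / 2)
          - alpha * (yU - yhatU) * h * (alpha / 2))
    gU = (-alpha * (yL - yhatL) * h * (alpha / 2)
          - alpha * (yU - yhatU) * h * (1 - alpha / 2))
    # Eqs. (19)-(20): gradient step with momentum
    dL = -eta * gL + lam * dL_prev
    dU = -eta * gU + lam * dU_prev
    vL, vU = vL + dL, vU + dU
    # Keep a valid interval: swap the end points if they crossed
    return min(vL, vU), max(vL, vU), dL, dU
```

The final min/max step implements the end-point swap stated after Eq. (18); the same pattern applies to the \(W_{ij}\) update below.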

The fuzzy weights \(W_{ij} \) are updated by

$$\begin{aligned} w_{ij}^L (t+1)=w_{ij}^L (t)+\Delta w_{ij}^L (t)\end{aligned}$$
(21)
$$\begin{aligned} w_{ij}^U (t+1)=w_{ij}^U (t)+\Delta w_{ij}^U (t) \end{aligned}$$
(22)

If \(w_{ij}^L >w_{ij}^U \) then,

$$\begin{aligned} w_{ij}^L (t+1)= & {} \min \left\{ w_{ij}^L (t+1),w_{ij}^U (t+1)\right\} \\ w_{ij}^U (t+1)= & {} \max \left\{ w_{ij}^L (t+1),w_{ij}^U (t+1)\right\} . \end{aligned}$$

In Eqs. (21) and (22), \(\Delta w_{ij}^L (t)\) and \(\Delta w_{ij}^U (t)\) can be computed using the cost function \(E_{p,\alpha } \) as follows:

$$\begin{aligned} \Delta w_{ij}^L (t)=-\eta \frac{\partial \,E_{p,\alpha } }{\partial \,w_{ij}^L }+\lambda \Delta w_{ij}^L (t-1)\end{aligned}$$
(23)
$$\begin{aligned} \Delta w_{ij}^U (t)=-\eta \frac{\partial \,E_{p,\alpha } }{\partial \,w_{ij}^U }+\lambda \Delta w_{ij}^U (t-1) \end{aligned}$$
(24)

The derivatives \(\frac{\partial \,E_{p,\alpha } }{\partial \,w_{ij}^L }\) and \(\frac{\partial \,E_{p,\alpha } }{\partial \,w_{ij}^U }\) in Eqs. (23) and (24) can be written as follows:

$$\begin{aligned} \frac{\partial \,E_{p,\alpha } }{\partial \,w_{ij}^L }=\frac{\partial \,E_{p,\alpha } }{\partial \,[W_{ij}]_{\alpha } ^L }\left( {1-\frac{\alpha }{2}}\right) +\frac{\partial \,E_{p,\alpha } }{\partial \,[W_{ij}]_{\alpha } ^U }\,\left( {\frac{\alpha }{2}}\right) \\ \frac{\partial \,E_{p,\alpha } }{\partial \,w_{ij}^U }=\frac{\partial \,E_{p,\alpha } }{\partial \,[W_{ij}]_{\alpha } ^U }\left( {1-\frac{\alpha }{2}}\right) +\frac{\partial \,E_{p,\alpha } }{\partial \,[W_{ij}]_{\alpha } ^L }\left( {\frac{\alpha }{2}}\right) \end{aligned}$$

where \(\frac{\partial \,E_{p,\alpha } }{\partial \,[W_{ij}]_{\alpha } ^L }\) and \(\frac{\partial \,E_{p,\alpha } }{\partial \,[W_{ij}]_{\alpha } ^U }\) can be computed in two ways as follows:

  • (i) If \(\max \left\{ {\,\left| {\,[X_{pi} ]_{_\alpha }^L -[W_{ij} ]_{_\alpha }^L } \right| \,,\,\left| {\,[X_{pi} ]_{_\alpha }^U -[W_{ij} ]_{_\alpha }^U } \right| \,} \right\} =\left| {\,[X_{pi} ]_{_\alpha }^L -[W_{ij} ]_{_\alpha }^L } \right| \), then

    $$\begin{aligned} \frac{\partial E_{p,\alpha } }{\partial [W_{ij}]_{\alpha } ^L }&=-\alpha (\left| {\,[Y_p ]_{\alpha } ^L -[\hat{Y}_p ]_{_\alpha }^L } \right| )h_{pj} (\sigma _{pj} )^{-2}\nonumber \\&\quad \times \left| {\,[X_{pi} ]_{\alpha }^L -[W_{ij} ]_{\alpha }^L } \right| v_j^L \\&\quad -\alpha (\left| {\,[Y_p ]_{\alpha }^U -[\hat{Y}_p ]_{\alpha }^U } \right| )h_{pj} (\sigma _{pj} )^{-2}\,\nonumber \\&\quad \times \left| {\,[X_{pi} ]_{\alpha }^L -[W_{ij} ]_{\alpha }^L } \right| v_j^U\\ \frac{\partial E_{p,\alpha } }{\partial [W_{ij}]_{\alpha } ^U }&=0 \end{aligned}$$
  • (ii) If \(\max \left\{ {\,\left| {\,[X_{pi} ]_{_\alpha }^L -[W_{ij} ]_{_\alpha }^L } \right| \,,\,\left| {\,[X_{pi} ]_{_\alpha }^U -[W_{ij} ]_{_\alpha }^U } \right| \,} \right\} =\left| {\,[X_{pi} ]_{_\alpha }^U -[W_{ij} ]_{_\alpha }^U } \right| \), then

    $$\begin{aligned} \frac{\partial E_{p,\alpha } }{\partial [W_{ij}]_{_\alpha } ^L }&=0\\ \frac{\partial E_{p,\alpha } }{\partial [W_{ij}]_{_\alpha } ^U }&=-\alpha \left( \left| {\,[Y_p ]_\alpha ^L -[\hat{Y}_p ]_{_\alpha }^L } \right| \right) h_{pj} (\sigma _{pj} )^{-2}\,\nonumber \\&\quad \times \left| {\,[X_{pi} ]_{_\alpha }^U -[W_{ij} ]_{_\alpha }^U } \right| v_j^L \\&\quad -\alpha \left( \left| {\,[Y_p ]_{_\alpha }^U -[\hat{Y}_p ]_{_\alpha }^U } \right| \right) h_{pj} (\sigma _{pj} )^{-2}\,\nonumber \\&\quad \times \left| {\,[X_{pi} ]_{_\alpha }^U -[W_{ij} ]_{_\alpha }^U } \right| v_j^U \end{aligned}$$

The normalization factors \(\sigma _{pj}^2 \) are updated by

$$\begin{aligned} \sigma _{pj} (t+1)=\sigma _{pj} (t)+\Delta \sigma _{pj} (t) \end{aligned}$$
(25)

where \(\Delta \sigma _{pj} (t)\) can be calculated using the cost function \(E_{p,\alpha } \) as follows:

$$\begin{aligned} \Delta \sigma _{pj} (t)=-\eta \frac{\partial \,E_{p,\alpha } }{\partial \,\sigma _{pj} }+\lambda \,\Delta \sigma _{pj} (t-1) \end{aligned}$$
(26)

The derivative \(\frac{\partial \,E_{p,\alpha } }{\partial \,\sigma _{pj} }\) in Eq. (26) can be written as

$$\begin{aligned} \frac{\partial \,E_{p,\alpha } }{\partial \,\sigma _{pj} }=\zeta ^L+\zeta ^U \end{aligned}$$

where \(\zeta ^\mathrm{L}\) and \(\zeta ^\mathrm{U}\) can be computed in two ways as follows:

  • (i) If  \(\max \left\{ {\,\left| {\,[X_{pi} ]_{_\alpha }^L -[W_{ij} ]_{_\alpha }^L } \right| ,\quad \left| {\,[X_{pi} ]_{_\alpha }^U -[W_{ij} ]_{_\alpha }^U } \right| \,} \right\} =\left| {\,[X_{pi} ]_{_\alpha }^L -[W_{ij} ]_{_\alpha }^L } \right| \), then

    $$\begin{aligned} \zeta ^L&=-\alpha \left( [Y_{pk} ]_\alpha ^L -[\hat{Y}_{pk} ]_\alpha ^L \right) h_{pj} (\sigma _{pj} )^{-3}\\&\quad \times \left| {[X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L } \right| ^2v_j^L\\ \zeta ^U&=-\alpha \left( [Y_{pk} ]_\alpha ^U -[\hat{Y}_{pk} ]_\alpha ^U \right) h_{pj} (\sigma _{pj} )^{-3}\\&\quad \times \left| {[X_{pi} ]_\alpha ^L -[W_{ij} ]_\alpha ^L } \right| ^2v_j^U \end{aligned}$$
  • (ii) If  \(\max \left\{ {\,\left| {\,[X_{pi} ]_{_\alpha }^L -[W_{ij} ]_{_\alpha }^L } \right| ,\quad \left| {\,[X_{pi} ]_{_\alpha }^U -[W_{ij} ]_{_\alpha }^U } \right| \,} \right\} =\left| {\,[X_{pi} ]_{_\alpha }^U -[W_{ij} ]_{_\alpha }^U } \right| \), then

    $$\begin{aligned} \zeta ^L&=-\alpha \left( [Y_{pk} ]_\alpha ^L -[\hat{Y}_{pk} ]_\alpha ^L \right) h_{pj} (\sigma _{pj} )^{-3}\\&\quad \times \left| {[X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U } \right| ^2v_j^L\\ \zeta ^U&=-\alpha \left( [Y_{pk} ]_\alpha ^U -[\hat{Y}_{pk} ]_\alpha ^U \right) h_{pj} (\sigma _{pj} )^{-3}\\&\quad \times \left| {[X_{pi} ]_\alpha ^U -[W_{ij} ]_\alpha ^U } \right| ^2v_j^U \end{aligned}$$

From the above expressions, the training algorithm of the proposed FRBF Network can be summarized as follows:

  • Step 1 Determine the fuzzy weights \(W_{ij} \) using the modified FCM algorithm given in Eqs. (27)–(29). Initialize the fuzzy weights \(V_j \) randomly as fuzzy numbers. Calculate the initial values of the normalization factors by Eq. (13).

  • Step 2 Repeat Step 3 for \(\alpha _1 ,\alpha _2 ,\ldots ,\alpha _s \)

  • Step 3 Repeat the following procedures for \(p=1,2,\ldots ,n\)

    • Step 3.1 Calculate \(h_{pj} \), \(\hat{Y}_p \) and \(E_{p,\alpha } \) by Eqs. (12)–(15)

    • Step 3.2 Update the fuzzy weights \(V_j \) by Eqs. (17)–(18)

    • Step 3.3 Update the fuzzy weights \(W_{ij} \) by Eqs. (21)–(22)

    • Step 3.4 Update the normalization factors \(\sigma _{pj}^2 \) by Eq. (25)

  • Step 4 If the total number of iterations is satisfied, stop. Otherwise, go to Step 2.

Table 1 Fuzzy input–output data set from Sakawa and Yano [40]

Modified Fuzzy c-Means Clustering algorithm

The Fuzzy c-Means Clustering (FCM) algorithm is the most common clustering algorithm for the RBF Network. It divides a data set of n points into c fuzzy groups and estimates the cluster center of each group [7, 12].

In this study, we modify the FCM algorithm because \(X_i \) and \(W_{ij} \) are fuzzy numbers. The modified FCM algorithm for our proposed FRBF Network is given as follows:

  • Step 1 Set the number of clusters m and parameter b. Initialize cluster centers \(W_{ij} \) and inputs \(X_i \) for \(\alpha =0\).

  • Step 2 Determine the membership values using \(W_{ij} \) according to the following two cases:

  • (i) If \(\left\| {\left[ {[X_i ]_\alpha ^L ,[X_i ]_\alpha ^U } \right] -\left[ {[W_{ij} ]_\alpha ^L ,[W_{ij} ]_\alpha ^U } \right] } \right\| ^2\ne 0\), then

    $$\begin{aligned}&\mu _j ([X_i ]_\alpha )\nonumber \\&=\left[ {\sum \limits _{k=1}^m {\left( {\frac{\left( {\max \left\{ {\,\left| {x_i ^L-w_{ij} ^L} \right| \,,\,\left| {x_i ^U-w_{ij} ^U} \right| \,} \right\} \,}\right) ^2}{\left( {\,\max \left\{ {\,\left| {x_i ^L-w_{ik} ^L} \right| \,,\,\left| {x_i ^U-w_{ik} ^U} \right| \,} \right\} \,}\right) ^2}}\right) ^{1/(b-1)}} } \right] ^{-1} \end{aligned}$$
    (27)
  • (ii) If \(\left\| {\left[ {[X_i ]_\alpha ^L ,[X_i ]_\alpha ^U } \right] -\left[ {[W_{ij} ]_\alpha ^L ,[W_{ij} ]_\alpha ^U } \right] } \right\| ^2=0\), then

    $$\begin{aligned}&\mu _j ([X_i ]_\alpha )\nonumber \\&\quad =\left\{ {\begin{array}{l} 1,\quad if\quad \left[ {[X_i ]_{\alpha }^L ,[X_i]_{\alpha }^U } \right] =\left[ {[W_{ij} ]_{\alpha }^L ,[W_{ij} ]_{\alpha }^U } \right] \\ 0,\quad if\quad \left[ {[X_i ]_{\alpha }^L ,[X_i ]_{\alpha }^U } \right] \ne \left[ {[W_{ij} ]_{\alpha }^L ,[W_{ij} ]_{\alpha }^U } \right] \\ \end{array}} \right. \nonumber \\ \end{aligned}$$
    (28)
  • Step 3 Update the cluster centers \(W_{ij} \), repeating until the membership values stabilize, by

    $$\begin{aligned}&\left[ {[W_{ij} ]_\alpha ^L ,[W_{ij} ]_\alpha ^U } \right] \nonumber \\&\quad =\left[ {\frac{\sum \nolimits _{i=1}^n {[\mu _j ([X_i ]_\alpha )]^b[X_i ]_\alpha ^L } }{\sum \nolimits _{i=1}^n {[\mu _j ([X_i ]_\alpha )]^b} }},\right. \left. {\frac{\sum \nolimits _{i=1}^n {[\mu _j ([X_i ]_\alpha )]^b[X_i ]_\alpha ^U } }{\sum \nolimits _{i=1}^n {[\mu _j ([X_i ]_\alpha )]^b} }} \right] \nonumber \\ \end{aligned}$$
    (29)
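Under the interval reading of the \(\alpha \)-cuts, Eqs. (27)–(29) translate directly into code: the distance between two intervals is the larger of the two endpoint deviations, memberships follow the FCM form with fuzzifier b, and centers are membership-weighted means of the lower and upper endpoints. A sketch using NumPy; the function names and the caller-supplied initial centers are our own choices, not from the paper:

```python
import numpy as np

def interval_dist(x, w):
    """Sup-distance between intervals [x^L, x^U] and [w^L, w^U]:
    the max of the endpoint deviations used in Eqs. (27)-(28)."""
    return max(abs(x[0] - w[0]), abs(x[1] - w[1]))

def modified_fcm(X, W0, b=2.0, iters=50):
    """Modified FCM for interval-valued (alpha-cut) data.
    X: (n, 2) rows [lower, upper]; W0: (m, 2) initial centers.
    Returns updated centers W and memberships U (n, m)."""
    X = np.asarray(X, dtype=float)
    W = np.asarray(W0, dtype=float).copy()
    n, m = X.shape[0], W.shape[0]
    U = np.zeros((n, m))
    for _ in range(iters):
        for i in range(n):
            d = np.array([interval_dist(X[i], W[j]) for j in range(m)])
            if np.any(d == 0):                  # Eq. (28): input coincides with a center
                U[i] = 0.0
                U[i, d == 0] = 1.0 / np.count_nonzero(d == 0)
            else:                               # Eq. (27)
                for j in range(m):
                    U[i, j] = 1.0 / np.sum((d[j] ** 2 / d ** 2) ** (1.0 / (b - 1)))
        Ub = U ** b                             # Eq. (29): weighted means of endpoints
        W = (Ub.T @ X) / Ub.sum(axis=0)[:, None]
    return W, U
```

Because each row of U sums to one, the centers in Eq. (29) remain valid intervals (lower endpoint below upper) whenever the inputs are.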
Table 2 Parameter estimations, predicted intervals \([\hat{Y}^L,\hat{Y}^U]\) and SSE values for the considered methods
Fig. 2

Errors in estimations of the FLS, GFLS, SY, HBS, ADFLS, IDFLS and proposed FRBF Network for Example 1

Numerical examples

In this section, we consider three numerical examples to demonstrate that the proposed FRBF Network approach performs well on FR models in which inputs and outputs are triangular fuzzy numbers. Using these fuzzy data, we obtain an estimated fuzzy regression equation \(\hat{Y}=A_0 +A_1 \hat{X}\) with fuzzy parameters \(A_0 =(a_0 ,\underline{c}_0 ,\bar{c}_0 )\) and \(A_1 =(a_1 ,\underline{c}_1 ,\bar{c}_1 )\). The proposed FRBF Network approach is applied to the examples and compared with the FLS, GFLS, SY, HBS, ADFLS and IDFLS methods. The LINGO and MATLAB software packages are used for the computations of the FR methods, and MATLAB is used to implement the proposed FRBF Network on a notebook with a 2.0 GHz Intel Core 2 Duo CPU.

In all computations of the examples, we use a learning constant (\(\eta \)) of 0.01, a momentum constant (\(\lambda \)) of 0.1, and \(\alpha \)-cut values \(\alpha =0,\,0.2,\,0.4,\,0.6,\,0.8\) and 1.0 for the training algorithm of the proposed FRBF Network. The initial values of the \(W_{ij} \)s for \(\alpha =0\) are computed using the modified FCM algorithm via Eqs. (27)–(29). The initial values of the \(\sigma _{pj}^2 \)s are determined using the initial values of the \(W_{ij} \)s. The initial values of the \(V_j \)s are randomly determined as fuzzy numbers. We calculate the cost function of each fuzzy output by Eq. (15) and the total cost function by Eq. (16) for each of these \(\alpha \) values.
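The interval inputs at each of the six \(\alpha \)-levels are obtained as \(\alpha \)-cuts of the triangular fuzzy numbers. Assuming the spread parameterization \(A=(a,\underline{c},\bar{c})\) used for the regression parameters above, the \(\alpha \)-cut is the interval \([a-(1-\alpha )\underline{c},\ a+(1-\alpha )\bar{c}]\); a small sketch (the example TFN is illustrative):

```python
def alpha_cut(a, c_left, c_right, alpha):
    """Interval [A^L, A^U] at level alpha for a triangular fuzzy number
    with center a, left spread c_left and right spread c_right."""
    return (a - (1.0 - alpha) * c_left, a + (1.0 - alpha) * c_right)

# the six alpha-levels used in the training algorithm
levels = (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
cuts = [alpha_cut(2.0, 0.5, 1.0, al) for al in levels]  # illustrative TFN (2, 0.5, 1)
```

At \(\alpha =1\) the cut collapses to the center, and at \(\alpha =0\) it spans the full support, which is why the comparisons below are reported at \(\alpha =0\).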

To compare the performance of the methods, we calculate the total errors in estimation using Eq. (2) for the FLS and GFLS methods, Eq. (6) for SY, Eq. (7) for HBS, Eq. (9) for ADFLS and Eq. (10) for the IDFLS method.
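The method-specific error measures (Eqs. (2)–(10)) are not reproduced here, but error measures of this kind typically sum the squared deviations of the lower and upper endpoints of the predicted and observed intervals at a fixed \(\alpha \)-level. A generic sketch of that idea, not any one of the paper's equations:

```python
def interval_sse(observed, predicted):
    """Sum of squared endpoint errors between observed and predicted
    intervals [Y^L, Y^U] at a fixed alpha-level (illustrative measure only)."""
    return sum((yl - pl) ** 2 + (yu - pu) ** 2
               for (yl, yu), (pl, pu) in zip(observed, predicted))
```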

Example 1

Sakawa and Yano [40] used an example to illustrate the regression model in which inputs and outputs are symmetrical TFNs. The example has eight sets of fuzzy observations \((X_i ,Y_i )\), as shown in Table 1.

In the computations of Example 1, we use the following specifications of the proposed FRBF Network training algorithm:

  1. Number of input units: \(n_{I} = 1\) unit

  2. Number of hidden units: \(n_{H} = 3\) units

  3. Number of output units: \(n_{O} = 1\) unit

  4. Stopping condition: \(t= 20,000\) iterations of the training algorithm

To compare the estimation performance of the seven FR methods given in Sect. 2, we calculated the errors in estimating the observed outputs. Table 2 shows the parameter estimations, predicted intervals of the fuzzy outputs and sums of squared errors (SSE) in estimating the eight observations for the considered methods. For the FLS, GFLS, SY, HBS, ADFLS, IDFLS and proposed FRBF Network approaches, the results for \(\alpha =0\) are used for comparison. In Table 2, the SSE value of the FRBF Network approach is 9.9680, which is clearly better than those of the FLS, GFLS, SY, HBS, ADFLS and IDFLS methods (17.008, 22.162, 17.3682, 15.1991, 15.4723 and 10.3435, respectively). Figure 2 illustrates the errors in estimation of the FR methods and the proposed FRBF Network approach.

Example 2

Diamond [16] used an example to illustrate the regression model in which inputs and outputs are nonsymmetrical TFNs. The example has eight sets of fuzzy observations \((X_i ,Y_i )\); see Table 3.

Table 3 Fuzzy input–output data set from Diamond [16]
Table 4 Parameter estimations, predicted intervals \([\hat{Y}^L,\hat{Y}^U]\) and SSE values for considered methods
Fig. 3

Errors in estimations of the FLS, GFLS, ADFLS, IDFLS and proposed FRBF Network for Example 2

In the computations of Example 2, we use the following specifications of the proposed FRBF Network training algorithm:

  1. Number of input units: \(n_{I} = 1\) unit

  2. Number of hidden units: \(n_{H} = 3\) units

  3. Number of output units: \(n_{O} = 1\) unit

  4. Stopping condition: \(t= 20,000\) iterations of the training algorithm

The SY and HBS methods could not be applied to Example 2 because the data include nonsymmetrical TFNs. To compare the estimation performance of the five FR methods given in Sect. 2, we calculated the errors in estimating the observed outputs. Table 4 shows the parameter estimations, predicted intervals of the fuzzy outputs and SSE values in estimating the eight observations for the considered methods. For the FLS, GFLS, ADFLS, IDFLS and proposed FRBF Network approaches, the results for \(\alpha =0\) are used for comparison. In Table 4, the SSE value of the IDFLS method is 1.4477 and that of the FRBF Network approach is 1.5517, both of which are clearly better than the FLS, GFLS and ADFLS methods with SSE values of 2.4055, 3.0867 and 2.0843, respectively. Figure 3 depicts the errors in estimation of the FR methods and the proposed FRBF Network approach.

Table 5 Fuzzy input–output data set from Diamond [16], Ming et al. [31]
Table 6 Parameter estimations, predicted intervals \([\hat{Y}^L,\hat{Y}^U]\) and SSE values for considered methods
Fig. 4

Errors in estimations of the FLS, GFLS, SY, HBS, ADFLS, IDFLS and proposed FRBF Network for a test example

Computational experience

The superiority of the proposed FRBF Network approach can also be observed through a test example from Diamond [16] and Ming et al. [31], in which inputs and outputs are symmetrical TFNs. This example has three sets of fuzzy observations \((X_i ,Y_i )\), as given in Table 5.

In the computations of Example 3, we use the following specifications of the proposed FRBF Network training algorithm:

  1. Number of input units: \(n_{I} = 1\) unit

  2. Number of hidden units: \(n_{H} = 2\) units

  3. Number of output units: \(n_{O} = 1\) unit

  4. Stopping condition: \(t= 10,000\) iterations of the training algorithm

The training algorithm of the proposed FRBFN is started with fuzzy weights between the input and hidden units \(W_{11} =\left[ {0.2793,1.7982} \right] \) and \(W_{12} =\left[ {1.5904,3.5904} \right] \), which are calculated by the modified FCM method; normalization factors \(\sigma _1^2 =1.614\) and \(\sigma _2^2 =1.182\); and fuzzy weights between the hidden and output units \(V_1 =\left[ {1,2} \right] \) and \(V_2 =\left[ {2,3} \right] \).

To compare the estimation performance of the seven FR methods given in Sect. 2, we calculated the errors in estimating the observed outputs. Table 6 shows the parameter estimations, predicted intervals of the fuzzy outputs and SSE values in estimating the three observations for the considered methods. For the FLS, GFLS, SY, HBS, ADFLS, IDFLS and proposed FRBF Network approaches, the results for \(\alpha =0\) are used for comparison. In Table 6, the SSE value of the FRBF Network approach is 0.0770, which is clearly better than those of the FLS, GFLS, SY, HBS, ADFLS and IDFLS methods (0.5390, 0.6060, 3.0152, 16.3878, 0.1566 and 0.1161, respectively). Figure 4 shows the errors in estimation of the FR methods and the proposed FRBF Network approach.

The LINGO software is used for solving the fuzzy regression methods. The training algorithm for the proposed FRBFN is coded in MATLAB and run on a notebook with a 2.0 GHz Intel Core 2 Duo CPU. The relative performance of the proposed FRBF Network approach and the other FR methods, measured by SSE values and CPU time, is shown in Table 7.

Table 7 shows the relative performance of the existing fuzzy regression methods and the Fuzzy Radial Basis Function Network approach on the test example from Diamond [16] and Ming et al. [31]. We compared the performance of the considered methods with respect to SSE values and CPU time. The SSE value of the proposed FRBF Network approach is 0.0770, with a CPU time of 233.626 s. As can be seen from Table 7, the FRBF Network approach achieves a substantially lower SSE than FLS, GFLS, SY, HBS, ADFLS and IDFLS, at the cost of a longer CPU time. Although the CPU time of our proposed approach exceeds that of the compared FR methods, it yields the minimum SSE among them, which is the primary criterion for estimation quality. It can be seen that our proposed approach gives better results than the existing methods for FR models with fuzzy input and fuzzy output.

Table 7 Relative performance of the considered FR methods and FRBF Network approach for Test Example

Conclusion

In this study, we have reviewed the relevant articles on fuzzy regression and provided an easily computable approach to estimate FR models with fuzzy input and fuzzy output. We presented a new estimation approach, the Fuzzy Radial Basis Function Network, for fuzzy regression in the case that inputs and outputs are symmetric or nonsymmetric triangular fuzzy numbers. We derived a training algorithm for the three-layer FRBF Network consisting of input, hidden and output layers. In the training algorithm, inputs, outputs and weights were defined as triangular fuzzy numbers. The construction of the algorithm is quite simple, and the parameters of the FRBF Network, i.e., the fuzzy weights and normalization factors, are systematically updated using the training algorithm given in Sect. 3.1.

The effectiveness of the derived training algorithm is demonstrated on three numerical examples computed with the proposed FRBF Network approach using the backpropagation algorithm. The examples show that our proposed approach performs better than the existing fuzzy regression methods based on linear programming and fuzzy least squares.

This study is one approach to deriving a training algorithm for an FRBF Network with fuzzy input, fuzzy output and fuzzy weights, as an alternative to the FR methods in the literature. The advantages of this approach are its simplicity and easy computation as well as its performance, while its disadvantage is requiring more time than the other FR methods. The proposed approach is more suitable than the existing FR methods for two reasons: first, it is able to handle both symmetric and nonsymmetric triangular fuzzy inputs and outputs; second, Examples 1 and 3 show that the FRBF Network approach outperforms the existing FR methods in terms of SSE values and predicted intervals in estimation.

In conclusion, our proposed approach offers an efficient alternative procedure for estimating predicted intervals for FR models with fuzzy input and output. A limitation of our study is that we focused only on fuzzy regression models in which the input and output are assumed to be symmetric or nonsymmetric triangular fuzzy numbers. Accordingly, we considered the FRBF Network only when the input, output and weights are triangular fuzzy numbers, and we did not consider other types of fuzzy numbers. Although the discussion of this study is confined to simple regression with one input and one output, it can be generalized to cope with cases of multiple inputs and outputs. For future studies, more general fuzzy inputs, outputs and weights such as trapezoidal fuzzy numbers could be handled with our FRBF Network approach, and it could be applied to different FR models.