SN Applied Sciences

2:23

A combined support vector regression with firefly algorithm for prediction of bottom hole pressure

  • Menad Nait Amar
  • Noureddine Zeraibi
Research Article
Part of the following topical collections:
  1. Engineering: Artificial Intelligence


Bottom hole pressure (BHP) is a fundamental parameter for the proper design of the production process and the development of reservoirs. BHP can be measured directly through the deployment of pressure down-hole gauges (PDG) or estimated through existing correlations and mechanistic models based on surface measurements. Unfortunately, these methods suffer from two main problems: the measurement cost, which is quite high, especially for PDGs, and the inaccuracy of the correlations and mechanistic models, due to the limited ranges of conditions for which they were developed. In this work, a new model based on support vector regression (SVR) optimized with the firefly algorithm (FFA) is proposed to predict the BHP of vertical wells with multiphase flow. The firefly algorithm is implemented for the optimal selection of the SVR hyper-parameters. The SVR-FFA model is developed using real-life measurement datasets obtained from distinct Algerian oil wells. Its performance is compared with another hybrid model combining SVR with a genetic algorithm (SVR-GA), a trial-and-error SVR (SVR-TE), and existing correlations and mechanistic models. The results demonstrate that the SVR-FFA model outperforms all the other models.


Keywords: Flowing bottom hole pressure (BHP) · BHP correlations and mechanistic models · Support vector regression (SVR) · Firefly algorithm (FFA)

1 Introduction

Bottom hole pressure (BHP) is an essential parameter for a well from its completion until its abandonment. It is used to establish development strategies, design facilities (such as the wellhead completion and tubing size), and predict the suitable time for implementing activation mechanisms. Two main categories of methods can be distinguished for estimating BHP: the first uses a permanent gauge in the bottom hole or well testing; the second is a direct calculation using well-known empirical correlations and mechanistic models based on available surface measurements. The most widely used correlations for BHP determination are those proposed by Hagedorn and Brown [1], Duns and Ros [2], Orkiszewski [3], Beggs and Brill [4], Aziz and Govier [5], and Mukherjee and Brill [6]. The mechanistic models include those of Ansari et al. [7], Chokshi et al. [8], Gomez et al. [9] and Gray [10]. Despite the accuracy of the first category (i.e. gauges and well tests) and the simplicity of application of the second (i.e. the correlations and mechanistic models), both present notable limitations: the high cost of the first category, and the poor performance of the second, since all existing correlations and mechanistic models were developed under limited ranges of conditions; when applied outside these domains, their results become mediocre.

In recent years, data-driven modeling, through its different methods, has been increasingly introduced in several fields of science and technology, including petroleum engineering, to resolve practical problems [11, 12, 13]. Accordingly, attractive frameworks such as Artap, which is based on machine learning and nature-inspired algorithms, have been implemented for practical applications [14]. Accurate prediction of BHP using data-driven techniques is one of these successful applications, as we demonstrated in our previously published work [15]. Support vector regression (SVR) is one of the most popular data-driven methods. It is characterized by a high generalization capability, ensured by the exploitation of both the structural risk minimization (SRM) and empirical risk minimization (ERM) principles [16]. It has been applied in various fields of science and technology, such as signal processing [17], finance [18], biology [19], biomedicine [20], engineering [21, 22], the oil industry [23, 24] and others. Many articles shed light on the successful application of this tool in petroleum and reservoir engineering: Ansari and Gholami [25] applied SVR to estimate the saturation pressure of crude oils; Na'imi et al. [16] used SVR to estimate reservoir porosity and water saturation distributions; Bian et al. [26] employed SVR to predict the minimum miscibility pressure (MMP) of CO2-oil systems; and Fattahi et al. [27] used SVR to predict asphaltene precipitation.

Recently, the firefly algorithm (FFA), a population-based, nature-inspired heuristic, has been applied in many fields as a global optimization approach [28]. It is based on the flashing behavior of fireflies [29]. The algorithm often yields significant improvements when solving problems with multimodal functions. In this paper, it is applied as an optimization method to increase the accuracy and prediction reliability of SVR by finding the optimum SVR hyper-parameters.

This research aims to establish a robust and fast approach to estimating BHP in vertical wells with multiphase flow by hybridizing SVR with the firefly algorithm. A set of 100 field data points gathered from various Algerian fields and covering a wide range of variables is used for this purpose. The performance of the combined SVR-FFA model is compared with SVR-GA (SVR hybridized with a genetic algorithm), SVR-TE (trial-and-error SVR), and existing correlations and mechanistic models.

The rest of the paper is organized as follows. Section 2 briefly presents the implemented data-driven technique, i.e. SVR, and the applied optimization algorithms. Section 3 describes the utilized datasets. Results are presented and discussed in Sect. 4. The paper ends with Sect. 5, which summarizes the main findings of the study.

2 Methodology

2.1 Support vector regression (SVR)

SVR is a well-formulated supervised learning technique which was developed by Vapnik [30, 31]. SVR aims to identify a function that approximates the functional dependency between targets \(T = \left\{ {t_{1} ,t_{2} , \ldots ,t_{m} } \right\}\) defined on \(R\), and inputs \(X = \left\{ {\varvec{x}_{1} ,\varvec{x}_{2} , \ldots ,\varvec{x}_{\varvec{m}} } \right\}\) where \(\varvec{x}_{\varvec{i}} \in R^{n}\) and m is the number of data points. This approximation is defined as follows [30]:
$$f\left( x \right) = w\varphi \left( x \right) + b$$
where \(\varphi \left( x \right)\) points out a mapping function that transforms the input space vector \(x\) to a high dimensional feature space, \(w\) represents a weight vector and \(b\) is a bias.
To find the proper values for the elements of \(w\) and \(b\), the following regularized risk function should be minimized [30]:
$$R_{SVR} \left( C \right) = C\frac{1}{m}\mathop \sum \limits_{i = 1}^{m} L(f(x_{i} ) - t_{i} ) + \frac{1}{2}w^{2}$$
where \(C\frac{1}{m}\mathop \sum \nolimits_{i = 1}^{m} L(f(x_{i} ) - t_{i} )\) represents the empirical error and \(\frac{1}{2}w^{2}\) is the measure of function flatness. The constant \(C > 0\) is the penalty parameter, which controls the trade-off between the empirical error and the model complexity (regularization factor).
The ε-insensitive loss function \(L(f(x_{i} ) - t_{i} )\) proposed by Vapnik is used to estimate the empirical error and is defined by the following equation [30]:
$$L(f(x) - t) = \left\{ {\begin{array}{*{20}l} 0 \hfill & { if\,\left| {f\left( x \right) - t} \right| \le \varepsilon } \hfill \\ {\left| {f\left( x \right) - t} \right| - \varepsilon } \hfill & { otherwise } \hfill \\ \end{array} } \right.$$
where \(\varepsilon\) denotes the error tolerance.
Then, the optimum parameters are obtained by formulating the constrained optimization problem:
$$minimize \frac{1}{2}w^{2} + C\mathop \sum \limits_{i = 1}^{m} \left( {\xi_{i}^{ - } + \xi_{i}^{ + } } \right)$$
$$subject to \left\{ {\begin{array}{*{20}c} {t_{i} - \left( {w\varphi \left( {x_{i} } \right) + b} \right) \le \varepsilon + \xi_{i}^{ + } } \\ {\left( {w\varphi \left( {x_{i} } \right) + b} \right) - t_{i} \le \varepsilon + \xi_{i}^{ - } } \\ {\xi_{i}^{ - } ,\xi_{i}^{ + } \ge 0, i = 1,2, \ldots ,m} \\ \end{array} } \right.$$
\(\xi_{i}^{ - }\) and \(\xi_{i}^{ + }\) are positive slack variables representing the lower and upper excess deviations, respectively.
To solve this problem, the constrained optimization function of (4) can be transformed into dual space using Lagrange multipliers, and the solution obtained is shown below [32]:
$$f\left( x \right) = \mathop \sum \limits_{i = 1}^{m} \left( {\alpha_{i} - \alpha_{i}^{*} } \right)K\left( {x_{i} ,x} \right) + b$$
In the above equation, \(\alpha_{i}\) and \(\alpha_{i}^{*}\) are Lagrange multipliers subject to the constraints 0 ≤ \(\alpha_{i}\), \(\alpha_{i}^{*}\) ≤ C, and \(K\) is the kernel function. The kernel implicitly maps the input space into a higher dimensional space; this trick allows SVR to conveniently solve non-linear regression problems. Many kernel functions have been proposed in the literature [33]; popular choices include the polynomial function and the radial basis function (RBF), i.e. the Gaussian kernel. In this study, the RBF kernel is used; it is defined as:
$$K\left( {x_{i} ,x_{j} } \right) = \exp \left( { - \gamma \left\| {x_{i} - x_{j} } \right\|^{2} } \right)$$
where \(\gamma\) is the kernel parameter.

The ideal performance and the high accuracy of SVR depend greatly on the combination of \(C, \varepsilon\) and the kernel function parameter \(\gamma\). Hence, it is necessary to optimize these parameters using robust algorithms able to automatically select their optimum combination.
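In practice, an SVR with an RBF kernel and a given (C, ε, γ) triple can be fitted in a few lines. The sketch below uses scikit-learn and synthetic one-dimensional data; both are illustrative assumptions, not the authors' implementation or field measurements:

```python
# Minimal SVR sketch with an RBF kernel. The data are synthetic
# (a noisy-free sine curve), purely for illustration.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(60, 1))
y = np.sin(X).ravel()

# C (regularization), epsilon (error tolerance) and gamma (RBF width)
# are exactly the three hyper-parameters tuned by FFA/GA in the paper.
model = SVR(kernel="rbf", C=100.0, epsilon=0.01, gamma=0.5)
model.fit(X, y)
mse = float(np.mean((model.predict(X) - y) ** 2))
```

With an optimizer in the loop, each candidate (C, ε, γ) triple would be passed to the constructor and scored by a validation error such as the MSE above.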

2.2 Firefly algorithm (FFA)

The firefly algorithm is a swarm-intelligence-based algorithm developed by Yang [34] for solving optimization problems. Its inspiration and basic formulation are based on the natural behavior of fireflies, which emit flashes of light as a communication strategy. Accordingly, a firefly is attracted towards another if the latter emits a higher light intensity [29]. It is worth noting that the brightness of each firefly reflects the quality of its solution.

The movement of a firefly "i" towards a more attractive (brighter) firefly "j" is performed according to the following equation:
$$x_{i} = x_{i} + \beta_{0} e^{{ - \gamma r_{i,j}^{2} }} \left( {x_{j} - x_{i} } \right) + \alpha \left( {rand - \frac{1}{2}} \right)$$

In the above equation, the term \(\beta_{0} e^{{ - \gamma r_{i,j}^{2} }} \left( {x_{j} - x_{i} } \right)\) corresponds to the attraction effect, and the term \(\alpha \left( {rand - \frac{1}{2}} \right)\) is a randomization term that essentially provides a random sign or direction, where \(\alpha\) is the randomization coefficient. \(\beta_{0}\) is the attractiveness at distance \(r = 0\) and is generally set to 1. \(\gamma\) is the light absorption coefficient, whose value lies in \([0, \infty)\). The Cartesian distance is used to compute the distance between any two fireflies i and j: \(r_{i,j} = \sqrt {\mathop \sum \nolimits_{k = 1}^{D} \left( {x_{i,k} - x_{j,k} } \right)^{2} }\).

The traveling mechanism of fireflies is repeated iteratively until a stopping condition is satisfied. The fittest firefly represents the best solution.

In this paper, firefly algorithm is employed to tune SVR hyper-parameters (\(C, \varepsilon\) and \(\gamma\)).
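A minimal sketch of the firefly search, following Yang's standard formulation, is given below. The population size, iteration count, coefficient values and the sphere test function are illustrative choices, not the paper's settings:

```python
# Hedged sketch of the firefly algorithm for minimization: each firefly
# moves toward every brighter (lower-cost) firefly, with an attraction
# term damped by distance and a decaying random walk.
import numpy as np

def firefly_minimize(f, dim, n=15, iters=60, beta0=1.0, gamma=1.0,
                     alpha=0.2, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n, dim))
    light = np.array([f(v) for v in x])          # lower cost = brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:          # j is brighter: i moves toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    light[i] = f(x[i])
        alpha *= 0.97                            # gradually damp the random walk
    best = int(np.argmin(light))
    return x[best], float(light[best])

# Illustrative run on a 2-D sphere function.
xb, fb = firefly_minimize(lambda v: float(np.sum(v ** 2)), dim=2)
```

For SVR tuning, the objective `f` would instead return the validation MSE of an SVR trained with the candidate (C, ε, γ) vector.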

The flowchart of the proposed hybridization SVR-FFA is shown in Fig. 1.
Fig. 1

SVR-FFA flow chart

2.3 Genetic algorithm (GA)

The genetic algorithm (GA) is an evolutionary optimization technique based on genetic principles, developed by Holland [35]. The algorithm begins by representing an initial population of possible solutions in the form of chromosomes. Then, three types of genetic operators are applied to explore the different regions of the search space and find the optimal solutions: selection, crossover and mutation. More details about these operators can be found in previously published works [35, 36]. This process is repeated until some predefined termination criteria are fulfilled.
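The GA loop can be sketched in the same spirit. The operator choices below (selection from the top half, arithmetic crossover, uniform mutation, two-individual elitism) and all numeric settings are illustrative, not the paper's exact configuration:

```python
# Illustrative real-coded GA for minimizing a cost over box bounds.
import random

def ga_minimize(cost, bounds, pop=20, gens=60, pc=0.8, pm=0.1, seed=1):
    random.seed(seed)
    P = [[random.uniform(a, b) for a, b in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=cost)                              # best individuals first
        nxt = P[:2]                                   # elitism: keep the best two
        while len(nxt) < pop:
            p1, p2 = random.sample(P[:pop // 2], 2)   # select from the top half
            if random.random() < pc:                  # arithmetic crossover
                w = random.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            for k, (a, b) in enumerate(bounds):       # uniform mutation
                if random.random() < pm:
                    child[k] = random.uniform(a, b)
            nxt.append(child)
        P = nxt
    return min(P, key=cost)

# Illustrative run on a 2-D sphere function.
best = ga_minimize(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 2)
```

As with FFA, tuning SVR would amount to decoding each chromosome as a (C, ε, γ) triple and using the validation MSE as the cost.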

In the current work, as with FFA, GA is used to optimize the SVR hyper-parameters (C, ε and γ). Figure 2 briefly describes the SVR-GA model.
Fig. 2

SVR-GA flow chart

3 Data analysis

To implement the described paradigms for accurate prediction of BHP under wide-ranging operational conditions, a representative database should be considered. Accordingly, 125 real data points were collected from distinct oil wells located in several Algerian oil fields. These points are the same as those used in our previous work [15]. The data used for developing the models cover the variable ranges illustrated in Table 1. To evaluate the correctness of the collected data and remove suspected outliers, empirical correlations and mechanistic models were used to predict the BHP and compare it with the measured value. Data points that consistently resulted in poor predictions by all correlations and mechanistic models were considered invalid and were therefore removed; a cut-off error (absolute relative error) of 20% was applied to the whole dataset. After this screening, a total of 100 data points remained to develop the SVR models. Among the filtered data points, 80% are used to build (train) the models, and the other 20% are used to check their accuracy (predictive test).
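The screening and split described above can be sketched as follows. The measured values and correlation estimates are synthetic stand-ins (the field data are not reproduced here), and scikit-learn's `train_test_split` is an assumed convenience rather than the authors' tooling:

```python
# Sketch of the outlier screening (20% absolute relative error cut-off)
# followed by an 80/20 train/test split. All numbers are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
measured = rng.uniform(1000.0, 5000.0, size=125)          # measured BHP (psia)
estimates = measured * rng.normal(1.0, 0.15, size=125)    # correlation estimates

are = np.abs(measured - estimates) / measured * 100.0     # absolute relative error, %
keep = are <= 20.0                                        # the paper's 20% cut-off
clean = measured[keep]

train, test = train_test_split(clean, test_size=0.2, random_state=42)
```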
Table 1

Statistical parameters for the study datasets: minimum and maximum values of Qoil (bbl/d), Qgas (Mscf/d), Qwater (bbl/d), oil gravity, gas gravity, ID (in), depth (ft), WHP (psi), WHT (°F), GOR (scf/bbl) and BHP (psia).

4 Results and discussions

Before presenting and discussing the obtained results, the influence of the individual independent variables listed in Table 1 on BHP is assessed through a correlation coefficient analysis between these variables and BHP. The larger the correlation coefficient between an input variable and the output, the greater the influence of that input in determining the output's value. Figure 3 depicts the correlation coefficient analysis. According to this figure, WHP has the strongest linear relation with BHP. In addition, BHP shows a fairly linear trend with depth.
Fig. 3

Correlation coefficient analyses of the experimental data
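Such a screen amounts to computing Pearson correlation coefficients between each input column and BHP. The sketch below uses two synthetic inputs (depth and WHP) with an assumed linear-plus-noise relation, purely for illustration:

```python
# Pearson correlation between each input variable and BHP, on
# synthetic data that mimic the roughly linear WHP/depth trends.
import numpy as np

rng = np.random.default_rng(0)
depth = rng.uniform(5000.0, 12000.0, 200)     # ft
whp = rng.uniform(500.0, 2000.0, 200)         # psi
bhp = 0.3 * depth + 1.1 * whp + rng.normal(0.0, 200.0, 200)

corrs = {name: float(np.corrcoef(x, bhp)[0, 1])
         for name, x in [("Depth", depth), ("WHP", whp)]}
```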

To improve the convergence of the SVR models, the data points are standardized. Afterwards, for the GA and FFA cases, the investigated SVR hyper-parameters (C, ε and γ) are encoded as vectors for FFA and as chromosomes for GA. During the optimization process of these two algorithms, the mean square error (MSE) is used as the fitness function to rank the quality of the individuals. This function is expressed as:
$$MSE = \frac{{\mathop \sum \nolimits_{i = 1}^{m} \left( {t_{i} - o_{i} } \right)^{2} }}{m}$$
where \(t_{i}\) and \(o_{i}\) point out the real and the predicted BHP, respectively, and \(m\) is the number of data points.
The FFA and GA parameters used in this study to establish the SVR-FFA and SVR-GA paradigms, respectively, are reported in Table 2. For the SVR-TE model, trial and error is performed by manually searching a specified subset of the hyper-parameter space (C, ε and γ).
Table 2

FFA and GA setting parameters

FFA: number of fireflies; maximum number of iterations.
GA: population size; crossover's probability; mutation's probability; type of replacement: elitist (10% of the population); type of selection: linear ranking; maximum number of generations.

To evaluate the predictive performance of the developed models and the existing approaches, the average absolute percent error (AARD %), the standard deviation (SD) and the correlation factor (R²) are calculated using the following equations:
$$AARD\% = \frac{1}{m}\mathop \sum \limits_{i = 1}^{m} \left| {\frac{{BHP_{i}^{\exp } - BHP_{i}^{cal} }}{{BHP_{i}^{\exp } }}} \right| \times 100$$
$$SD = \sqrt {{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 {(m - 1)}}}\right.\kern-0pt} \!\lower0.7ex\hbox{${(m - 1)}$}}\mathop \sum \limits_{i = 1}^{m} \left( {\frac{{BHP_{i}^{\exp } - BHP_{i}^{cal} }}{{BHP_{i}^{\exp } }}} \right)^{2} }$$
$$R^{2} = 1 - \frac{{\mathop \sum \nolimits_{i = 1}^{m} \left( {BHP_{i}^{exp} - BHP_{i}^{cal} } \right)^{2} }}{{\mathop \sum \nolimits_{i = 1}^{m} \left( {BHP_{i}^{cal} - \overline{BHP} } \right)^{2} }}$$
where \(m\) represents the number of the measured information, \(BHP_{i}^{exp}\) is the observed bottom hole pressure values while \(BHP_{i}^{cal}\) is the calculated BHP values which are predicted by the developed models. Average value of the BHP data is shown by \(\overline{BHP}\).
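The three indicators can be computed directly from the equations above; note that both error terms are normalized by the measured BHP, and the R² denominator follows the paper's convention of centering the calculated values on the mean BHP. The example data are illustrative:

```python
# AARD (%), SD (%) and R² exactly as defined in the equations above.
import numpy as np

def aard(exp, cal):
    return float(np.mean(np.abs((exp - cal) / exp)) * 100.0)

def sd(exp, cal):
    m = len(exp)
    return float(np.sqrt(np.sum(((exp - cal) / exp) ** 2) / (m - 1)) * 100.0)

def r2(exp, cal):
    # Denominator uses (cal - mean BHP), matching the paper's formula.
    return float(1.0 - np.sum((exp - cal) ** 2)
                 / np.sum((cal - np.mean(exp)) ** 2))

# Illustrative measured vs. predicted BHP values (psia).
exp = np.array([2000.0, 2500.0, 3000.0])
cal = np.array([2020.0, 2470.0, 3050.0])
```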
The optimum hyper-parameters for each of the established models are shown in Table 3. For all models, high values of the regularization factor (C) and small values of ε are obtained, together with moderate values of the kernel parameter (γ).
Table 3

Obtained SVR hyper-parameters (C, ε and γ) for the developed models
Cross plots between output and target values for the training and test data of the developed models are illustrated in Fig. 4. For each model, all predicted values are plotted against the experimental values, and the resulting cross plot is compared against a unit-slope line, which represents the perfect model. The closer the plotted data to this line, the higher the reliability of the model. Furthermore, another comparison is presented in Fig. 5 for all data in sample-series plots. According to Figs. 4 and 5, the prediction capability of the three models is evident. For a deeper comparison between the BHP calculated by SVR-FFA, SVR-GA and SVR-TE and the experimental data, the detailed statistical results (AARD %, SD and R²) are given in Table 4. For the whole dataset, the AARD %, R² and SD values for SVR-FFA are 2.13% (1.01% for the training data and 6.65% for the test data), 0.9981 (0.9997 training, 0.9917 test) and 4.13% (2.86% training, 9.20% test); for SVR-GA they are 2.20% (1.09% training, 6.63% test), 0.9981 (0.9997 training, 0.9918 test) and 4.18% (2.94% training, 9.17% test); and for SVR-TE they are 2.66% (1.65% training, 6.72% test), 0.9980 (0.9995 training, 0.9922 test) and 4.96% (3.79% training, 4.96% test). The comparison reveals that the SVR-FFA model gives the best performance, as shown by the three statistical indicators.
Fig. 4

Cross plots of the results: a BHP measured versus BHP SVR-FFA; b BHP measured versus BHP SVR-GA; c BHP measured versus BHP SVR-TE

Fig. 5

A comparison between the measured BHP and the predicted values using the proposed SVR models: a SVR-FFA; b SVR-GA; c SVR-TE

Table 4

Comparison of SVR-FFA, SVR-GA and SVR-TE models for BHP prediction (AARD (%), R² and SD (%) for the training, test and whole datasets)
Figure 6 shows the results of the sensitivity analysis of the new SVR-FFA BHP model. This figure presents the correlation coefficients calculated between the output of the SVR-FFA model and the samples of each input parameter. The comparison between this figure and Fig. 3 shows that the SVR-FFA results closely reproduce the input-output interrelationships observed in the experimental data, which proves the high robustness and accuracy of SVR-FFA.
Fig. 6

Correlation coefficient analyses of SVR-FFA results

Finally, to demonstrate the superiority of the SVR-FFA model compared with well-known BHP correlations and mechanistic models, the aforementioned statistical parameters of the existing approaches and of the current study are depicted in Fig. 7. Bar plots (a), (b) and (c) show comparisons according to AARD %, R² and SD, respectively. The comparison demonstrates the clear superiority of SVR-FFA over the conventional methods. In summary, the performance and accuracy analyses clarify that SVR, whether optimized or not, has a reliable ability to predict BHP. It should also be noted that the proposed SVR-FFA model achieved the minimum average absolute percent error and more accurate output estimation than the other SVR variants or conventional approaches, thanks to the use of the FFA meta-heuristic algorithm for hyper-parameter optimization and tuning. "Appendix 1" reports detailed steps for using the established SVR-FFA.
Fig. 7

Comparison between performance indicators values obtained from SVR models and conventional methods: a AARD; b R2; c SD

5 Conclusions

In this work, three new models have been proposed and applied to the prediction of BHP in vertical wells with multiphase flow using data from Algerian fields. Two of these models are based on integrating support vector regression with the firefly algorithm and the genetic algorithm, respectively, while the third is based on a trial-and-error method. The results generated by SVR-FFA, SVR-GA and SVR-TE were compared with the experimental results and with those generated by the existing approaches. The main findings of the paper are summarized as follows:
  1. This study provides a considerable improvement over previously proposed correlations and mechanistic models, with broader applicability in terms of independent variable ranges.
  2. The proposed SVR-based paradigms provided very satisfactory prediction abilities.
  3. Among the implemented models, SVR coupled with FFA is the most reliable and outperforms the other proposed schemes.
  4. SVR-FFA exhibited excellent prediction performance, with an overall AARD of 2.13% and an overall determination coefficient of 0.9981.
  5. The newly proposed hybridizations outperform the prior correlations and mechanistic models for predicting BHP.



Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.


References

  1. Hagedorn AR, Brown KE (1965) Experimental study of pressure gradients occurring during continuous two-phase flow in small-diameter vertical conduits. J Pet Technol 17:475–484
  2. Duns H Jr, Ros NCJ (1963) Vertical flow of gas and liquid mixtures in wells. In: Proceedings of the 6th World Petroleum Congress
  3. Orkiszewski J (1967) Predicting two-phase pressure drops in vertical pipe. J Pet Technol 19:829–838
  4. Beggs DH, Brill JP (1973) A study of two-phase flow in inclined pipes. J Pet Technol 25:607–617
  5. Aziz K, Govier GW (1972) Pressure drop in wells producing oil and gas. J Can Pet Technol 11:38
  6. Mukherjee H, Brill JP (1985) Pressure drop correlations for inclined two-phase flow. J Energy Resour Technol 107:549
  7. Ansari AM, Sylvester ND, Shoham O, Brill JP (1990) A comprehensive mechanistic model for upward two-phase flow in wellbores. In: SPE annual technical conference and exhibition
  8. Chokshi RN, Schmidt Z, Doty DR (1996) Experimental study and the development of a mechanistic model for two-phase flow through vertical tubing. In: SPE western regional meeting
  9. Gomez LE, Shoham O, Schmidt Z, Chokshi RN, Northug T (2000) Unified mechanistic model for steady-state two-phase flow: horizontal to vertical upward flow. SPE J 5:339–350
  10. Gray HE (1974) Vertical flow correlation in gas wells. API user's manual for API 14B subsurface controlled subsurface safety valve sizing computer program
  11. Nait Amar M, Zeraibi N, Redouane K (2018) Pure CO2-oil system minimum miscibility pressure prediction using optimized artificial neural network by differential evolution. Pet Coal 60:284–293
  12. Menad NA, Noureddine Z, Hemmati-Sarapardeh A, Shamshirband S, Mosavi A, Chau K (2019) Modeling temperature dependency of oil-water relative permeability in thermal enhanced oil recovery processes using group method of data handling and gene expression programming. Eng Appl Comput Fluid Mech 13:724–743
  13. Menad NA, Noureddine Z (2019) An efficient methodology for multi-objective optimization of water alternating CO2 EOR process. J Taiwan Inst Chem Eng 99:154–165
  14. Pánek D, Orosz T, Karban P (2019) Artap: robust design optimization framework for engineering applications. In: Third international conference on intelligent computing in data sciences ICDS201. IEEE, pp 1–5 (in press)
  15. Nait Amar M, Zeraibi N, Redouane K (2018) Bottom hole pressure estimation using hybridization neural networks and grey wolves optimization. Petroleum 4:419–429
  16. Na'imi SR, Shadizadeh SR, Riahi MA, Mirzakhanian M (2014) Estimation of reservoir porosity and water saturation based on seismic attributes using support vector regression approach. J Appl Geophys 107:93–101
  17. Vapnik V, Golowich SE, Smola AJ (1997) Support vector method for function approximation, regression estimation and signal processing. In: Advances in neural information processing systems, pp 281–287
  18. Trafalis TB, Ince H (2000) Support vector machine for regression and applications to financial forecasting. In: Proceedings of the IEEE-INNS-ENNS international joint conference on neural networks (IJCNN 2000). IEEE, vol 6, pp 348–353
  19. Schölkopf B, Tsuda K, Vert JP (2004) Kernel methods in computational biology. MIT Press, London
  20. Khandoker AH, Palaniswami M, Karmakar CK (2009) Support vector machines for automated recognition of obstructive sleep apnea syndrome from ECG recordings. IEEE Trans Inf Technol Biomed 13:37–48
  21. Clarke SM, Griebsch JH, Simpson TW (2005) Analysis of support vector regression for approximation of complex engineering analyses. J Mech Des 127:1077
  22. Wen YF, Cai CZ, Liu XH, Pei JF, Zhu XJ, Xiao TT (2009) Corrosion rate prediction of 3C steel under different seawater environment by using support vector regression. Corros Sci 51:349–355
  23. Nait Amar M, Zeraibi N (2018) Application of hybrid support vector regression artificial bee colony for prediction of MMP in CO2-EOR process. Petroleum
  24. Menad NA, Noureddine Z, Hemmati-Sarapardeh A, Shamshirband S (2019) Modeling temperature-based oil-water relative permeability by integrating advanced intelligent models with grey wolf optimization: application to thermal enhanced oil recovery processes. Fuel 242:59
  25. Ansari HR, Gholami A (2015) An improved support vector regression model for estimation of saturation pressure of crude oils. Fluid Phase Equilib 402:124–132
  26. Bian X-Q, Han B, Du Z-M, Jaubert J-N, Li M-J (2016) Integrating support vector regression with genetic algorithm for CO2-oil minimum miscibility pressure (MMP) in pure and impure CO2 streams. Fuel 182:550–557
  27. Fattahi H, Gholami A, Amiribakhtiar MS, Moradi S (2015) Estimation of asphaltene precipitation from titration data: a hybrid support vector regression with harmony search. Neural Comput Appl 26:789–798
  28. Fister I, Fister I Jr, Yang X-S, Brest J (2013) A comprehensive review of firefly algorithms. Swarm Evol Comput 13:34–46
  29. Yang XS (2009) Firefly algorithms for multimodal optimization. In: Lecture notes in computer science. Springer, Berlin, pp 169–178
  30. Vapnik V (1995) The nature of statistical learning theory. Springer, New York
  31. Burges CJC (1998) A tutorial on support vector machines for pattern recognition. Data Min Knowl Discov 2:121–167
  32. Shawe-Taylor J, Cristianini N (2004) Kernel methods for pattern analysis. Cambridge University Press, Cambridge
  33. Forrester AIJ, Sóbester A, Keane AJ (2008) Engineering design via surrogate modelling. Wiley
  34. Yang X-S (2009) Firefly algorithms for multimodal optimization. In: International symposium on stochastic algorithms, pp 169–178
  35. Sivanandam SN, Deepa SN (2007) Introduction to genetic algorithms. Springer, Berlin
  36. Menad NA, Hemmati-Sarapardeh A, Varamesh A, Shamshirband S (2019) Predicting solubility of CO2 in brine by advanced machine learning systems: application to carbon capture and sequestration. J CO2 Util 33:83–95

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Laboratoire Génie Physique des Hydrocarbures, Faculté des Hydrocarbures et de la Chimie, Université M'Hamed Bougara de Boumerdes, Boumerdes, Algeria
