A Machine Learning Approach to Enhanced Oil Recovery Prediction

  • Fedor Krasnov
  • Nikolay Glavnov
  • Alexander Sitnikov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10716)

Abstract

In a number of computational experiments, a meta-algorithm is used to solve problems of the oil and gas industry. Such an experiment begins in a hydrodynamic simulator, where the value of a target function is computed at specific nodal values of the parameters, based on the physical laws of fluid flow through porous media. The function is then evaluated either on a finer set of parameter values or at parameter values that lie beyond the nodal range.
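
As a concrete illustration (not the paper's actual workflow), the step of running a simulator only at nodal parameter values and then filling in a finer set of values can be sketched with multilinear interpolation on a regular grid. The `simulator` function, the parameter names, and their ranges below are hypothetical placeholders.

```python
# Minimal sketch: multilinear interpolation of simulator outputs
# that are known only at nodal parameter values on a regular grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def simulator(perm, visc):
    # Hypothetical stand-in for a hydrodynamic-simulator response surface.
    return perm / (1.0 + visc)

# Nodal values of two reservoir parameters (coarse regular grid).
perm_nodes = np.linspace(10.0, 1000.0, 5)   # permeability, mD
visc_nodes = np.linspace(0.5, 5.0, 4)       # oil viscosity, cP

# The simulator is evaluated only at the grid nodes.
P, V = np.meshgrid(perm_nodes, visc_nodes, indexing="ij")
values = simulator(P, V)

# Estimate the response at off-node parameter combinations.
interp = RegularGridInterpolator((perm_nodes, visc_nodes), values)
print(interp([[250.0, 1.7], [600.0, 3.2]]))
```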

Among other purposes, such an approach is used to calculate the incremental oil production resulting from the application of various enhanced oil recovery (EOR) methods.

The authors found that, compared with traditional computational experiments on a regular grid, computation using machine learning algorithms can be more productive.
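
A minimal sketch of the machine-learning alternative, under the same assumptions as above (a hypothetical `simulator` stand-in and made-up parameter ranges): a random forest regressor is trained on simulator runs at nodal parameter values and then used to predict the response at parameter combinations outside that set.

```python
# Minimal sketch: a random-forest surrogate for a flow simulator.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulator(x):
    # Hypothetical stand-in for incremental oil production from an EOR method.
    return x[:, 0] / (1.0 + x[:, 1]) + 0.1 * np.sin(x[:, 2])

# Training set: simulator runs at nodal values of three parameters.
X_train = rng.uniform([10.0, 0.5, 0.0], [1000.0, 5.0, 1.0], size=(200, 3))
y_train = simulator(X_train)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Predict at parameter values not present in the training set
# and compare against direct simulator evaluations.
X_new = rng.uniform([10.0, 0.5, 0.0], [1000.0, 5.0, 1.0], size=(5, 3))
print(np.c_[model.predict(X_new), simulator(X_new)])
```

Unlike regular-grid interpolation, the forest does not require the training points to lie on a grid.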

Keywords

Enhanced oil recovery (EOR) · Random forest · Regular grid interpolation

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Gazpromneft NTC, St. Petersburg, Russia
