Prediction of permeability of porous media using optimized convolutional neural networks

  • Original Paper
  • Published in: Computational Geosciences

Abstract

Permeability is an important parameter for describing the behavior of fluid flow in porous media. To perform realistic flow simulations, it is essential that fine-scale models include permeability variability. However, these models cannot be used directly in simulations because they require a high computational cost, which motivates the application of upscaling approaches. In this context, machine learning techniques can be used as an alternative to perform the upscaling of porous media properties at a lower computational cost than traditional upscaling methods. Hence, in this work, an upscaling methodology is proposed to compute the equivalent permeability on the coarse grid through convolutional neural networks (CNNs). Compared with the local upscaling approach, this method achieves suitable precision with less computational demand when evaluated on 2D and 3D models. We also present a genetic algorithm (GA) to automatically determine the optimal configuration of CNNs for the target problems. The GA procedure is applied to yield the optimal CNN architecture for upscaling of the permeability fields, with outstanding results when compared with counterpart techniques.


Computer Code Availability

The codes used to obtain the results can be found at https://github.com/upscalingpermeability/porous-media.

References

  1. Hauge, V.L., Lie, K.A., Natvig, J.R.: Flow-based coarsening for multiscale simulation of transport in porous media. Comput. Geosci. 16, 391–408 (2012)

  2. Durlofsky, L.J.: Upscaling of geocellular models for reservoir flow simulation: A review of recent progress. In: 7th International Forum on Reservoir Simulation, pp. 23–27 (2003)

  3. Aarnes, J.E., Gimse, T., Lie, K.A.: An introduction to the numerics of flow in porous media using MATLAB. In: Hasle, G., Lie, K.A., Quak, E. (eds.) Geometric Modelling, Numerical Simulation, and Optimization: Applied Mathematics at SINTEF, pp. 265–306. Springer (2007)

  4. Firoozabadi, B., Mahani, H., Ashjari, M.A., Audigane, P.: Improved upscaling of reservoir flow using combination of dual mesh method and vorticity-based gridding. Comput. Geosci. 13, 57–58 (2009)

  5. Durlofsky, L.J.: Numerical calculation of equivalent grid block permeability tensors for heterogeneous porous media. Water Resour. Res. 27, 699–708 (1991)

  6. Gerritsen, M.G., Durlofsky, L.J.: Modeling fluid flow in oil reservoirs. Annu. Rev. Fluid Mech. 37, 211–238 (2005)

  7. Chen, Y., Durlofsky, L.J., Gerritsen, M., Wen, X.H.: A coupled local-global upscaling approach for simulating flow in highly heterogeneous formations. Adv. Water Resour. 26, 1041–1060 (2003)

  8. Sharifi, M., Kelkar, M.G.: Novel permeability upscaling method using fast marching method. Fuel 117, 568–578 (2014)

  9. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998)

  10. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)

  11. Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014)

  12. Alqahtani, N., Armstrong, R.T., Mostaghimi, P.: Deep learning convolutional neural networks to predict porous media properties. In: SPE Asia Pacific Oil and Gas Conference and Exhibition, pp. 1–10 (2018)

  13. Lähivaara, T., Kärkkäinen, L., Huttunen, J.M.J., Hesthaven, J.S.: Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography. J. Acoust. Soc. Am. 143, 1148–1158 (2018)

  14. Wu, J., Yin, X., Xiao, H.: Seeing permeability from images: Fast prediction with convolutional neural networks. Sci. Bull. 63, 1215–1222 (2018)

  15. Zhong, Z., Carr, T.R., Wu, X., Wang, G.: Application of a convolutional neural network in permeability prediction: A case study in the Jacksonburg-Stringtown oil field, West Virginia, USA. Geophysics 84, 363–373 (2019)

  16. Assunção, F., Lourenço, N., Machado, P., Ribeiro, B.: DENSER: Deep evolutionary network structured representation. Genet. Program. Evolvable Mach. 20, 5–35 (2018)

  17. Ma, B., Li, X., Xia, Y., Zhang, Y.: Autonomous deep learning: A genetic DCNN designer for image classification. Neurocomputing 379, 152–161 (2020)

  18. Chen, T., Clauser, C., Marquart, G., Willbrand, K., Mottaghy, D.: A new upscaling method for fractured porous media. Adv. Water Resour. 80, 60–68 (2015)

  19. Trehan, S., Durlofsky, L.J.: Machine-learning-based modeling of coarse-scale error, with application to uncertainty quantification. Comput. Geosci. 22, 1093–1113 (2018)

  20. Breiman, L.: Random forests. Mach. Learn. 45, 5–32 (2001)

  21. Scheidt, C., Caers, J., Chen, Y., Durlofsky, L.J.: A multi-resolution workflow to generate high-resolution models constrained to dynamic data. Comput. Geosci. 15, 545–563 (2011)

  22. Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012)

  23. Snoek, J., Larochelle, H., Adams, R.P.: Practical Bayesian optimization of machine learning algorithms. In: Advances in Neural Information Processing Systems (NIPS) 25, pp. 2960–2968 (2012)

  24. Baldominos, A., Saez, Y., Isasi, P.: Evolutionary convolutional neural networks: An application to handwriting recognition. Neurocomputing 283, 38–52 (2018)

  25. Sun, Y., Xue, B., Zhang, M., Yen, G.G.: A particle swarm optimization based flexible convolutional autoencoder for image classification. IEEE Trans. Neural Netw. Learn. Syst. 30, 2295–2309 (2019)

  26. Garro, B.A., Vázquez, R.A.: Designing artificial neural networks using particle swarm optimization algorithms. Comput. Intell. Neurosci., 1–20 (2015)

  27. Conforth, M., Meng, Y.: Toward evolving neural networks using bio-inspired algorithms. In: IC-AI, pp. 413–419 (2008)

  28. Wang, B., Sun, Y., Xue, B., Zhang, M.: A hybrid differential evolution approach to designing deep convolutional neural networks for image classification. In: Australasian Conference on Artificial Intelligence, pp. 237–250 (2018)

  29. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015)

  30. Liang, M., Hu, X.: Recurrent convolutional neural network for object recognition. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3367–3375 (2015)

  31. Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: International Conference on Machine Learning (ICML), pp. 807–814 (2010)

  32. Boureau, Y., Ponce, J., LeCun, Y.: A theoretical analysis of feature pooling in visual recognition. In: International Conference on Machine Learning (ICML), pp. 111–118 (2010)

  33. Wang, T., Wu, D.J., Coates, A., Ng, A.Y.: End-to-end text recognition with convolutional neural networks. In: International Conference on Pattern Recognition (ICPR), pp. 3304–3308 (2012)

  34. Akhtar, N., Ragavendran, U.: Interpretation of intelligence in CNN-pooling processes: A methodological survey. Neural Comput. Appl. 32, 879–898 (2020)

  35. Holland, J.: Adaptation in Natural and Artificial Systems. University of Michigan Press (1975)

  36. Goldberg, D.E.: Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley (1989)

  37. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 1–11 (2015)

  38. Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 1–18 (2012)

  39. Clevert, D.A., Unterthiner, T., Hochreiter, S.: Fast and accurate deep network learning by exponential linear units (ELUs). In: 4th International Conference on Learning Representations (ICLR), pp. 1–14 (2016)

  40. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: 13th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 249–256 (2010)

  41. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  42. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2014)

  43. Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12, 2121–2159 (2011)

  44. Kingma, D.P., Ba, J.L.: Adam: A method for stochastic optimization. In: 3rd International Conference on Learning Representations (ICLR), pp. 1–15 (2015)

  45. Tieleman, T., Hinton, G.: Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4, 26–31 (2012)

  46. Zeiler, M.D.: ADADELTA: An adaptive learning rate method. arXiv:1212.5701, 1–6 (2012)

  47. Abadi, M., Agarwal, A., Barham, P., Brevdo, E.: TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv:1603.04467 (2016)

  48. Baldominos, A., Saez, Y., Isasi, P.: On the automated, evolutionary design of neural networks: Past, present, and future. Neural Comput. Appl. 32, 519–545 (2020)

  49. Loève, M.: Probability Theory. Springer (1977)

  50. Remy, N., Boucher, A., Wu, J.: Applied Geostatistics with SGeMS: A User’s Guide. Cambridge University Press (2009)

  51. Remy, N.: Geostatistical Earth Modeling Software: User’s Manual. Cambridge University Press (2004)

  52. Strebelle, S.: Conditional simulation of complex geological structures using multiple-point statistics. Math. Geol. 34, 1–21 (2002)

  53. Jia, H., Xia, Y., Song, Y., Zhang, D., Huang, H., Zhang, Y., Cai, W.: 3D APA-Net: 3D adversarial pyramid anisotropic convolutional network for prostate segmentation in MR images. IEEE Trans. Med. Imaging 39, 447–457 (2020)

  54. Lie, K.A.: An Introduction to Reservoir Simulation Using MATLAB/GNU Octave: User Guide for the MATLAB Reservoir Simulation Toolbox (MRST). Cambridge University Press (2019)

  55. Sun, Y., Xue, B., Zhang, M., Yen, G.G., Lv, J.: Automatically designing CNN architectures using the genetic algorithm for image classification. IEEE Trans. Cybern. 50, 3840–3854 (2020)

  56. Bourgeat, A.: Homogenized behavior of two-phase flows in naturally fractured reservoirs with uniform fractures distribution. Comput. Methods Appl. Mech. Eng. 47, 205–216 (1984)

  57. Krizhevsky, A., Hinton, G.: Learning Multiple Layers of Features from Tiny Images. Technical Report, University of Toronto (2009)

  58. Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747, 1–6 (2017)


Acknowledgments

The authors would like to thank CAPES (Grant 88887.464658/2019-00) and PETROBRAS (Grant 2017/00768-7) for the financial support, and the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing HPC resources on the SDumont supercomputer. We would also like to thank Professor Marcio Murad for suggesting the resolution of the upscaling problem through neural networks.

Author information


Corresponding author

Correspondence to Eliaquim M. Ramos.

Ethics declarations

Conflict of Interests

The authors have no conflicts of interest to declare.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Marcio R. Borges, Gilson A. Giraldi, Bruno Schulze and Felipe Bernardo contributed equally to this work.

Appendices

Appendix A: data generation

This section presents the governing equations of single-phase flow that are used to compute, via local upscaling, the coarse-scale permeability fields employed to train and validate the CNN (Section 3).

A.1 Governing equations

The mass conservation law for single-phase flow in porous media and in the absence of sources is given by:

$$ \frac{\partial (\rho \phi)} {\partial t}+\nabla \cdot (\rho \mathbf{v})=0, $$
(12)

where ρ is the fluid density, ϕ is the porosity of the medium, and v is the Darcy velocity. In the absence of gravitational effects, Darcy’s law is given by:

$$ \mathbf{v}=-\frac{\mathbf{k}}{\mu}\nabla p, $$
(13)

where k is the permeability of the medium, μ is the fluid viscosity, and p is the fluid pressure.

Substituting Darcy’s law into the mass conservation law, we obtain:

$$ \frac{\partial (\rho \phi)} {\partial t}-\nabla \cdot \left( \rho \frac{\mathbf{k}}{\mu}\nabla p\right)=0. $$
(14)

Assuming that the porosity does not vary with time and that the fluid is incompressible, we have:

$$ \nabla \cdot \left( \frac{\mathbf{k}}{\mu}\nabla p\right)=0. $$
(15)

Assuming that the permeability varies on two different scales, a coarse scale x that describes a slow variation of this property and a fine scale y that describes a fast variation, and that the viscosity does not vary with pressure, we can write (15) as:

$$ \nabla \cdot\left( \mathbf{k}\left( \mathbf{x}, \mathbf{y}\right) \nabla p\right)=0. $$
(16)

Applying the homogenization process [56] to (16), we obtain an equation characterized by a permeability, designated k∗, which varies only on the coarse scale x, defined as:

$$ \nabla_{\mathbf{x}} \cdot\left( \mathbf{k}^{\ast}(\mathbf{x})\nabla_{\mathbf{x}} p\right)=0, $$
(17)

where the subscript x on the gradient operator indicates differentiation with respect to the coarse-scale coordinate x.

To calculate k∗, we need to solve the fine-scale pressure equation subject to the following boundary conditions:

$$ \left\{ \begin{array}{ll} \nabla_{\mathbf{y}} \cdot\left( \mathbf{k}(\mathbf{y})\nabla_{\mathbf{y}} p\right)=0, & \text{on}~ {\Omega}, \\ p(0,y_{2})=p_{1}, & \text{on}~ {\Gamma}_{D}, \\ p(y_{1},y_{2})=p_{2}, & \text{on}~ {\Gamma}_{D}, \\\mathbf{v}(y_{1},0)\cdot \mathbf{n}= \mathbf{v}(y_{1},y_{2}) \cdot \mathbf{n}=0, & \text{on}~ {\Gamma}_{N}, \end{array} \right. $$
(18)

where the subscript y on the gradient operator indicates differentiation with respect to the fine-scale coordinate y, p1 and p2 are known pressures, ΓD and ΓN denote the Dirichlet and Neumann boundaries, respectively, and n is the normal vector.

There are different numerical methods that can be used to solve problem (18). In this work, we use the finite difference method to obtain the numerical solution. The approximation of the fine-scale pressure equation through centered differences is given by:

$$ \begin{array}{llllll} k_{y_{1},i+1/2,j}\left( p_{i+1,j}-p_{i,j} \right)-k_{y_{1},i-1/2,j}\left( p_{i,j}-p_{i-1,j}\right)+\\ k_{y_{2},i,j+1/2}\left( p_{i,j+1} - p_{i,j} \right) - k_{y_{2},i,j-1/2}\left( p_{i,j}-p_{i,j-1} \right)=0, \end{array} $$
(19)

where pi,j represents the pressure at the cell centers, and \( k_{y_{1}, i + 1/2, j} \) and \( k_{y_{2}, i, j + 1/2 } \) denote the permeability values at the cell interfaces, given by the harmonic means:

$$ k_{y_{1},i+1/2,j}= \frac{2 \left( {k_{y_{1},i,j}k_{y_{1},i+1,j}} \right)}{k_{y_{1},i,j}+k_{y_{1},i+1,j}}, $$
(20)
$$ k_{y_{2},i,j+1/2}= \frac{2 \left( {k_{y_{2},i,j}k_{y_{2},i,j+1}} \right)}{k_{y_{2},i,j}+k_{y_{2},i,j+1}}. $$
(21)

Once problem (18) is solved, the average velocity 〈v〉 on the cell boundary is determined as follows:

$$ \langle\mathbf{v}\rangle=-{\int}_{\Gamma}{\mathbf{v}\cdot \mathbf{n}}d{\Gamma}, $$
(22)

where Γ is the boundary and n is the normal vector. Using the result of (22) in the coarse-scale Darcy’s law, we obtain the upscaled permeability k∗.
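For illustration, the following sketch is one possible implementation of this procedure: it assembles the centered-difference system (19) for problem (18) with harmonic-mean interface permeabilities (20)-(21) on a 2D block, solves for the fine-scale pressure, and inverts the coarse-scale Darcy law to recover the equivalent permeability in the y1-direction. It is a minimal sketch assuming unit viscosity and unit cell size, and the function name upscale_k1 is illustrative; the authors' actual code is in the repository cited under Computer Code Availability.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def upscale_k1(k, p1=1.0, p2=0.0):
    """Equivalent permeability of a fine-scale field k (shape n2 x n1)
    for flow driven in the y1-direction by fixed pressures p1 -> p2
    (unit viscosity, unit cells, no-flow on the lateral boundaries)."""
    n2, n1 = k.shape
    N = n1 * n2
    idx = lambda i, j: j * n1 + i                 # cell (i, j) -> unknown index

    def harm(a, b):                               # harmonic mean, Eqs. (20)-(21)
        return 2.0 * a * b / (a + b)

    A = lil_matrix((N, N))
    b = np.zeros(N)
    for j in range(n2):
        for i in range(n1):
            r = idx(i, j)
            if i > 0:                             # west neighbour
                t = harm(k[j, i], k[j, i - 1])
                A[r, r] += t; A[r, idx(i - 1, j)] -= t
            else:                                 # Dirichlet p = p1 (half cell)
                A[r, r] += 2.0 * k[j, i]; b[r] += 2.0 * k[j, i] * p1
            if i < n1 - 1:                        # east neighbour
                t = harm(k[j, i], k[j, i + 1])
                A[r, r] += t; A[r, idx(i + 1, j)] -= t
            else:                                 # Dirichlet p = p2
                A[r, r] += 2.0 * k[j, i]; b[r] += 2.0 * k[j, i] * p2
            if j > 0:                             # south neighbour (no-flow otherwise)
                t = harm(k[j, i], k[j - 1, i])
                A[r, r] += t; A[r, idx(i, j - 1)] -= t
            if j < n2 - 1:                        # north neighbour
                t = harm(k[j, i], k[j + 1, i])
                A[r, r] += t; A[r, idx(i, j + 1)] -= t

    p = spsolve(A.tocsr(), b).reshape(n2, n1)

    # total inflow through the left boundary (cf. Eq. 22), then invert the
    # coarse-scale Darcy law q = k* * n2 * (p1 - p2) / n1
    q = np.sum(2.0 * k[:, 0] * (p1 - p[:, 0]))
    return q * n1 / (n2 * (p1 - p2))

# e.g. kstar = upscale_k1(np.exp(np.random.randn(16, 16)))
```

For a homogeneous field k ≡ c, this routine returns c exactly, which is a convenient sanity check for the flux inversion.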

Fig. 30 Examples of images from the datasets: (a) MNIST, (b) CIFAR-10, and (c) Fashion-MNIST

A.2 Random fields

In this work, the random permeability is a scalar field with a log-normal distribution, given by:

$$ k(\mathbf{y})=k_{0}e^{\rho Y(\mathbf{y})}, $$
(23)

where \( k_{0}, \rho \in \mathbb{R} \) and \( Y\left(\mathbf{y}\right) \sim \mathcal{N}\left(0,\mathcal{C}\right) \) is a Gaussian random field characterized by its mean \(\langle Y\left(\mathbf{y}\right)\rangle=0\) and covariance function \(\mathcal{C}\). This field is generated by the Karhunen-Loève method (Appendix A.3).

A.3 Karhunen-Loève expansion

This expansion represents a Gaussian random field as follows:

$$ Y\left( \mathbf{y}, \omega \right)=\langle Y\left( \mathbf{y}\right)\rangle+{\sum}_{n=1}^{\infty}\sqrt{\lambda_{n}}f_{n}(\mathbf{y})\xi_{n}(\omega), $$
(24)

where ξn(ω) represents a set of Gaussian random variables, and λn and fn(y) denote, respectively, the eigenvalues and eigenfunctions, which are obtained from the Fredholm integral equation:

$$ {\int}_{\Omega}\mathcal{C}(\mathbf{y},\mathbf{y^{\prime}})f_{n}(\mathbf{y})d\mathbf{y}=\lambda_{n} f_{n}(\mathbf{y^{\prime}}), $$
(25)

where \( \mathcal {C}(\mathbf {y}, \mathbf {y^{\prime }}) \) represents the covariance function, defined by:

$$ \mathcal{C}(\mathbf{y},\mathbf{y}^{\prime})=\sigma^{2}\exp\left( -\frac{(y_{1}-y^{\prime}_{1})^{2}}{{\eta_{1}^{2}}}-\frac{(y_{2}-y^{\prime}_{2})^{2}}{{\eta_{2}^{2}}}\right), $$
(26)

where y = (y1, y2) and \( \mathbf{y}^{\prime} = (y^{\prime}_{1}, y^{\prime}_{2}) \) represent spatial locations, σ2 is the variance, and η1 and η2 are the correlation lengths.
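A discrete version of this sampling procedure can be sketched as follows: the covariance (26) is assembled on the cell centers, the Fredholm problem (25) becomes a matrix eigendecomposition, the truncated expansion (24) produces the Gaussian field Y, and (23) maps it to a log-normal permeability. All numerical values in the sketch (grid size, σ², η1, η2, k0, ρ, truncation level) are illustrative placeholders, not the values used in the paper.

```python
# Minimal sketch of Karhunen-Loeve sampling (Eqs. 24-26) followed by the
# log-normal mapping of Eq. (23), under assumed parameter values.
import numpy as np

n1, n2 = 32, 32                        # fine-grid cells in y1 and y2
sigma2, eta1, eta2 = 1.0, 8.0, 8.0     # variance and correlation lengths
k0, rho = 1.0, 0.5                     # parameters of Eq. (23)

# coordinates of cell centers
y1, y2 = np.meshgrid(np.arange(n1) + 0.5, np.arange(n2) + 0.5)
pts = np.column_stack([y1.ravel(), y2.ravel()])

# covariance matrix from Eq. (26); its eigendecomposition discretizes
# the Fredholm integral of Eq. (25)
d1 = (pts[:, None, 0] - pts[None, :, 0]) ** 2 / eta1 ** 2
d2 = (pts[:, None, 1] - pts[None, :, 1]) ** 2 / eta2 ** 2
C = sigma2 * np.exp(-d1 - d2)
lam, f = np.linalg.eigh(C)             # eigenvalues lam, eigenvectors f

# truncated expansion, Eq. (24) with zero mean: keep the m largest modes
m = 100
order = np.argsort(lam)[::-1][:m]
xi = np.random.standard_normal(m)      # xi_n ~ N(0, 1)
Y = (f[:, order] * np.sqrt(np.maximum(lam[order], 0.0))) @ xi

k = k0 * np.exp(rho * Y).reshape(n2, n1)   # log-normal field, Eq. (23)
```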

Appendix B: classification problem

In order to validate our GA methodology, we first apply it to classification tasks using the CIFAR-10 [57], CIFAR-100 [57], MNIST [9] and Fashion-MNIST [58] datasets. MNIST contains 70,000 grayscale images of handwritten digits (0 to 9) of size 28 × 28 pixels, split into 60,000 training images and 10,000 validation images. Examples of digits from the MNIST dataset are shown in Fig. 30(a). The CIFAR-10 dataset consists of 60,000 color images of size 32 × 32 pixels, divided into 10 classes, with 50,000 training images and 10,000 validation images. CIFAR-100 contains the same number of images as CIFAR-10, but uniformly distributed over 100 classes. Figure 30(b) shows examples of images from the CIFAR-10 dataset. Similarly to the standard MNIST, Fashion-MNIST is composed of 60,000 training images and 10,000 validation images of clothing items, as shown in Fig. 30(c).
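For reference, all four benchmark datasets ship with TensorFlow [47] through the tf.keras.datasets API; the shapes returned match the counts quoted above.

```python
import tensorflow as tf

# CIFAR-10: 50,000 training and 10,000 validation color images of 32 x 32
(x_train, y_train), (x_val, y_val) = tf.keras.datasets.cifar10.load_data()
print(x_train.shape)   # (50000, 32, 32, 3)
print(x_val.shape)     # (10000, 32, 32, 3)

# The other datasets are loaded the same way:
# tf.keras.datasets.mnist.load_data()          -> 60000/10000 grayscale 28 x 28
# tf.keras.datasets.fashion_mnist.load_data()  -> 60000/10000 grayscale 28 x 28
# tf.keras.datasets.cifar100.load_data()       -> 50000/10000 color 32 x 32
```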

Table 11 Structure of the best network for the CIFAR-10 dataset

We begin by analyzing the performance of the three best models on the CIFAR-10 dataset, obtained with three independent executions of the GA methodology. Figure 31 depicts the evolution of the fitness (accuracy) of the best CNNs with the number of generations, as well as the average result. Note that there was no significant increase in the performance of the models after the 20th generation. Among the accuracy results, the best accuracy obtained in the experiments is 89.88%, reported in the 95th generation (blue plot). Moreover, the best average accuracy, shown as the dashed curve in Fig. 31, is 89.74%, as reported in the first row of Table 12. The best accuracy was achieved with a CNN that consists of 6 convolutional layers and 2 fully connected layers, with filter and neuron counts as specified in Table 11. The training is performed with Adam and a batch size of 20.
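Since Table 11 is not reproduced in this excerpt, the following Keras sketch only illustrates the topology described above (6 convolutional layers followed by 2 fully connected layers, trained with Adam and batch size 20); the filter and neuron counts, pooling placement, and activations are placeholders, not the optimized values of Table 11.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(filters=(64, 64, 128, 128, 256, 256), dense=(512, 256)):
    """6 convolutional + 2 fully connected layers; counts are placeholders."""
    model = models.Sequential()
    for i, f in enumerate(filters):
        kwargs = {"input_shape": (32, 32, 3)} if i == 0 else {}
        model.add(layers.Conv2D(f, 3, padding="same",
                                activation="relu", **kwargs))
        if i % 2 == 1:                 # illustrative pooling placement
            model.add(layers.MaxPooling2D())
    model.add(layers.Flatten())
    for d in dense:
        model.add(layers.Dense(d, activation="relu"))
    model.add(layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
# model.fit(x_train, y_train, batch_size=20, epochs=400,
#           validation_data=(x_val, y_val))
```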

Table 12 Results obtained by CNNs constructed through GA on the CIFAR-10

After the hyperparameter optimization, the best CNN discovered by the GA reaches, after 400 epochs, an accuracy of 90.16% on the validation set, and an average accuracy of 89.89% over five training runs of the best CNN architecture. These results are obtained using the weights and biases adjusted in the optimization phase as initialization, without data augmentation. Using the same CNN with data augmentation [47], we get, in the best case, an accuracy of 92.20% on the validation set, and an average accuracy of 91.93%. The mean results obtained after 400 epochs on the validation set by the three networks found are displayed in the boxplot of Fig. 32. Note that the average and median accuracies of architecture 1 are superior to the results of architectures 2 and 3, whereas architecture 2 presents the smallest variability. Although the architectures present different results, they are statistically equivalent according to an ANOVA test.

Fig. 31 Evolution of the best fitness (accuracy) and average result of three executions of the GA

To evaluate the generalization ability of the best CNN configuration found by the GA on CIFAR-10, we re-train the network five times on the MNIST dataset for 400 epochs. Applying the five models to MNIST, we get an average classification accuracy of 99.57% on the validation set. Similarly to the experiments on MNIST, we obtain an average classification accuracy of 94.23% for Fashion-MNIST. For CIFAR-100, the network achieves an average accuracy of 63.47%. All of these accuracy results are obtained without any data augmentation. Using the same setup with data augmentation, we get average accuracies of 99.63%, 94.86% and 69.10% for MNIST, Fashion-MNIST and CIFAR-100, respectively.
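The paper does not list the augmentation transforms used, so the snippet below is only one plausible Keras pipeline of the kind referred to here; the specific transforms are an assumption.

```python
# Hypothetical augmentation setup; the actual transforms are not specified.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

aug = ImageDataGenerator(width_shift_range=0.1,
                         height_shift_range=0.1,
                         horizontal_flip=True)
# model.fit(aug.flow(x_train, y_train, batch_size=20),
#           epochs=400, validation_data=(x_val, y_val))
```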

Fig. 32 Boxplot of the accuracy of the three architectures discovered by the GA, where the red line shows the median and the circle shows the mean

Table 13 Accuracy of different approaches for the classification problems

Table 13 presents a comparison between the classification accuracy of the CNNs generated by the proposed GA method and by DENSER. Recognition rates of traditional architectures for classification tasks are also reported to position our results within the state of the art. Note that our best model reaches average accuracies for MNIST, CIFAR-10 and CIFAR-100 that are 0.07%, 2.20% and 5.04% below those of DENSER, considered the state-of-the-art method for automatically designing CNNs. For Fashion-MNIST, the result obtained is 0.16% above that of DENSER. These results indicate that the CNN found on CIFAR-10 has good generalization ability and achieves results competitive with established methods for classification problems.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ramos, E.M., Borges, M.R., Giraldi, G.A. et al. Prediction of permeability of porous media using optimized convolutional neural networks. Comput Geosci 27, 1–34 (2023). https://doi.org/10.1007/s10596-022-10177-z

