Introduction

Permeability, a measure of the ability of a porous medium to transmit fluids, is an important parameter in many areas, such as reservoir engineering, hydrology, and environmental engineering. Anisotropy of permeability, or multidirectional permeability calculation, is also crucial for understanding the directions of potential permeable fluid migration, for example, in volcanic rocks (Wright et al. 2006). Laboratory measurements of permeability on core samples are usually conducted in one direction, so the permeability in other directions must be obtained by measuring different core plugs (Farquharson et al. 2016). Since the permeability of rocks varies with the heterogeneity of their pore structures and also depends on the direction of measurement, different core plugs may give different results. This influences the interpretation of reservoirs and the results of fluid flow modeling. As an example, the finite-time (short-term) amount of CO2 that can be dissolved in anisotropic sedimentary rocks is much larger than in isotropic rocks (De Paoli et al. 2016).

Several techniques have been developed to estimate the permeability of rocks and its anisotropy efficiently (Fauzi 2011; Iscan and Kok 2009; Jin et al. 2004; Kameda and Dvorkin 2004). Digital image processing has been used to calculate the permeability of rocks from two-dimensional images obtained from thin sections (Fauzi 2011; Iscan and Kok 2009; Jin et al. 2004; Kameda and Dvorkin 2004). Techniques for estimating anisotropic permeability from 3D microtomographic images have also been developed by many authors (Aharony et al. 1991; Rama et al. 2011), but they still need further investigation. As an alternative, the absolute permeability in three perpendicular directions can be estimated from a microscopic 3D image by means of the lattice Boltzmann method (LBM). The computational intensity and long run times for large samples are, however, an issue for this methodology (Liu et al. 2014). Liu et al. (2016) reported that the permeability simulated with LBM after segmentation processes intended to reduce computing time is 6–9 times higher than the laboratory measurements.

Renormalization group approaches (RGA) have been applied intensively to calculate permeability and to investigate its anisotropy (Green and Paterson 2007; Karim and Krabbenhoft 2010; King 1989). RGA was first developed and applied to calculate the effective permeability of permeability distribution functions (King 1989). It was then applied to evaluate the macroscopic permeability of oil reservoirs (Green and Paterson 2007) and of images of rock thin sections (Fauzi 2011). A simple analytical approximation was applied to the RGA method developed previously (Green and Paterson 2007). A new renormalization scheme was later introduced that gave more accurate results than the existing schemes (Karim and Krabbenhoft 2010). An approximation technique based on a Laplace solver and RGA was used to calculate the permeability of 3D tomographic rock samples (Khalili et al. 2012).

In this work, simple and less time-consuming techniques are proposed to calculate permeability and its anisotropy from 3D tomographic images. A combination of LBM and RGA was applied to calculate the permeability of three-dimensional micro-CT images of three different types of sandstone: Fontainebleau (FO), Berea (BE), and Sumatra (SU). Several renormalization schemes (Green and Paterson 2007; Karim and Krabbenhoft 2010; King 1989) and a combination of the three methods were applied in this study.

Methodology

Renormalization group approach (RGA)

The renormalization group approach (RGA) is an elegant method that can easily be applied to obtain the equivalent fluid permeability of porous media samples (King 1989). RGA models a permeable block in the sample as a resistor network: a block with permeability k is represented by an equivalent resistor of 1/k between the mid-points of its edges. In three dimensions, a single block is divided into eight cells with permeabilities k1, k2, k3, …, k8, respectively. In the horizontal (x) direction, pressures Pi and P0 are assigned at the inlet and outlet edges. Dead-end branches in the y and z directions are ignored (cut) and joined with the corresponding pressure nodes. The network can then be simplified using the star–triangle transformation to calculate the effective value (King 1989). By assuming that fluid flows from the center of one cell to the center of each adjoining cell, the effective permeability can be estimated by averaging the 2D effective permeabilities of four 2 × 2 components of the block, e.g., its top and bottom halves (see Fig. 1). The effective permeability of a three-dimensional block (keff 3D) in the x direction is calculated using the following equation (Green and Paterson 2007):

Fig. 1
figure 1

Schematic illustration of the two-dimensional renormalization blocks used in the approximation of the three-dimensional renormalization (Green and Paterson 2007)

$$\begin{aligned} {k_{{\text{eff}}\;3{\text{D}}}}\left( {{k_1},{k_2},{k_3},{k_4},{k_5},{k_6},{k_7},{k_8}} \right)= & \frac{1}{4}\left[ {{k_{{\text{eff}}\;{\text{2D}}}}\left( {{k_3},{k_4},{k_7},{k_8}} \right)} \right. \\ & \quad +{k_{{\text{eff}}\;{\text{2D}}}}\left( {{k_1},{k_2},{k_5},{k_6}} \right) \\ & \quad +{k_{{\text{eff}}\;{\text{2D}}}}\left( {{k_1},{k_3},{k_5},{k_7}} \right) \\ & \quad \left. {+{k_{{\text{eff}}\;{\text{2D}}}}\left( {{k_2},{k_4},{k_6},{k_8}} \right)} \right]. \\ \end{aligned}$$
(1)
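As a minimal sketch, Eq. (1) can be written as a higher-order function that averages four 2D renormalizations. The 2D operator `keff_2d` is left as a user-supplied callable (e.g., King's 2D star–triangle formula, which is not reproduced here); the arithmetic mean used in the demonstration line is only a placeholder, not the actual 2D renormalization.

```python
def keff_3d_x(ks, keff_2d):
    """GP approximation (Eq. 1): the 3D effective x-permeability is the
    average of the 2D effective permeabilities of four 2 x 2 groups of
    cells. `ks` holds (k1, ..., k8); `keff_2d` is any 4-argument 2D
    renormalization operator."""
    k1, k2, k3, k4, k5, k6, k7, k8 = ks
    return 0.25 * (keff_2d(k3, k4, k7, k8)
                   + keff_2d(k1, k2, k5, k6)
                   + keff_2d(k1, k3, k5, k7)
                   + keff_2d(k2, k4, k6, k8))

# Placeholder 2D operator for demonstration only (NOT King's formula):
arith4 = lambda a, b, c, d: (a + b + c + d) / 4.0

k_eff = keff_3d_x((1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0), arith4)
```

Any 2D scheme that reduces to the cell value in the homogeneous case yields a 3D estimate with the same property, which is a quick sanity check for an implementation.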

A new renormalization scheme was introduced by Karim and Krabbenhoft (2010) and is shown in Fig. 2. It presents the configuration of a block with eight cells, each with a permeability of \({k_1},{k_2},{k_3},{k_4}, \ldots ,{k_8}\). For the configuration shown in Fig. 2b, the block permeability is given by the following equation (Karim and Krabbenhoft 2010):

Fig. 2
figure 2

Schematic illustration of the two-dimensional renormalization blocks used in the approximation of the three-dimensional renormalization (Karim and Krabbenhoft 2010)

$${k_{{\text{L}},x}}={\mu _{\text{a}}}[{\mu _{\text{h}}}({k_1},{k_2}),{\mu _{\text{h}}}({k_3},{k_4}),{\mu _{\text{h}}}({k_5},{k_6}),{\mu _{\text{h}}}({k_7},{k_8})],$$
(2)

where the subscripts a and h denote the arithmetic and harmonic means, respectively.

For the configuration shown in Fig. 2c, the permeability is calculated as follows:

$${k_{{\text{U}},x}}={\mu _{\text{h}}}[{\mu _{\text{a}}}({k_1},{k_3},{k_5},{k_7}),{\mu _{\text{a}}}({k_2},{k_4},{k_6},{k_8})].$$
(3)

The effective permeability in the x direction is the geometric mean of both configurations (Karim and Krabbenhoft 2010):

$${k_{{\text{eff}},x}}={\left( {{k_{{\text{L}},x}}{k_{{\text{U}},x}}} \right)^{\frac{1}{2}}}=\frac{1}{2}{\left[ {\frac{{\left( {{k_1}+{k_3}+{k_5}+{k_7}} \right)\left( {{k_2}+{k_4}+{k_6}+{k_8}} \right)}}{{\left( {{k_1}+{k_2}} \right)\left( {{k_3}+{k_4}} \right)\left( {{k_5}+{k_6}} \right)\left( {{k_7}+{k_8}} \right)}}} \right]^{\frac{1}{2}}}{\left( {\frac{J}{I}} \right)^{\frac{1}{2}}},$$
(4)
$$I={k_1}+{k_2}+{k_3}+{k_4}+{k_5}+{k_6}+{k_7}+{k_8},$$
$$J={k_1}{k_2}{S_{12}}+{k_3}{k_4}{S_{34}}+{k_5}{k_6}{S_{56}}+{k_7}{k_8}{S_{78}},$$
$${S_{ij}}=\frac{{({k_1}+{k_2})({k_3}+{k_4})({k_5}+{k_6})({k_7}+{k_8})}}{{{k_i}+{k_j}}}.$$
(5)
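A minimal sketch of the KK scheme for a single 2 × 2 × 2 block, assuming the cell numbering of Fig. 2 (pairs (k1, k2), (k3, k4), … lie in series along the x direction):

```python
from math import sqrt

def mu_h(*ks):
    """Harmonic mean: series combination of permeabilities."""
    return len(ks) / sum(1.0 / k for k in ks)

def mu_a(*ks):
    """Arithmetic mean: parallel combination of permeabilities."""
    return sum(ks) / len(ks)

def kk_keff_x(k1, k2, k3, k4, k5, k6, k7, k8):
    """Effective x-permeability of one 2x2x2 block (Karim & Krabbenhoft):
    the geometric mean (Eq. 4) of the lower estimate k_L (Eq. 2) and the
    upper estimate k_U (Eq. 3)."""
    kL = mu_a(mu_h(k1, k2), mu_h(k3, k4), mu_h(k5, k6), mu_h(k7, k8))
    kU = mu_h(mu_a(k1, k3, k5, k7), mu_a(k2, k4, k6, k8))
    return sqrt(kL * kU)
```

Evaluating the nested means and then taking the geometric mean reproduces the closed form of Eqs. (4)–(5) term by term, which provides a direct check of an implementation against the published formula.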

The three schemes above may also be combined to calculate the effective permeability so that the results can be compared.

These four methods, i.e., King, GP, KK, and a King–GP–KK combination, were applied to uniform and log-normal permeability distributions generated using a random number generator. The values of the uniform distribution were generated in the range 1–9 with a mean (µ) of 5, and the log-normal distribution was generated using \(\mu =5\) and standard deviation \(\sigma =0.5\). The renormalization procedure was repeated until a single effective permeability was obtained.

The probability density functions (PDF) of the uniform and log-normal distributions with 8⁷ data points are shown in Fig. 3. The renormalization procedure was applied to these data, and the PDF was plotted after each step.

Fig. 3
figure 3

a Distribution of permeability during the renormalization process for the log-normal distribution, b mean of the data versus number of renormalizations for the four RGA schemes for the log-normal distribution

In the case of the uniform distribution, every renormalization from the first to the last (fourth) produced a normal distribution. For the log-normal distribution, the first renormalization still yields a log-normal distribution, whereas the second, third, and fourth renormalizations change the distribution into a normal one. Thus, we can generally observe that the renormalization process drives the distribution toward a normal distribution.

Figure 3b shows the change in average permeability during the renormalization process. As the number of renormalizations increases, the average permeability decreases. The KK scheme produced the highest values and the GP scheme the lowest, while the King and combined RGA schemes yielded values in between.

Lattice Boltzmann method

The lattice Boltzmann method (LBM) is a fluid flow simulation technique that has gained wide acceptance due to the ease of implementing boundary conditions and its numerical stability over a wide range of Reynolds numbers. In this method, the fluid is treated as a collection of particles represented by a velocity distribution function at each discrete lattice node. The particles collide with each other, and the rules governing the collisions are designed such that the time-averaged motion of the particles is consistent with the Navier–Stokes equations. In LBM, the computation at each node at every time step depends solely on the properties of that node and its neighbors at the previous time step.

Various LB models exist for the numerical solution of different fluid flow scenarios, each characterizing the microscopic movement of the fluid particles differently. LB models are usually denoted DxQy, where x and y correspond to the number of dimensions and the number of microscopic velocity directions (ei), respectively. For example, D2Q9 represents a two-dimensional geometry with nine microscopic velocity directions. The following subsection gives a general picture of the D2Q9 and D3Q19 LB models implemented here. These models obey the equilibrium distribution function (Kutay and Aydilek 2005):

$$F_{i}^{{{\text{eq}}}}={w_i}\rho \left[ {1+\frac{{{{\mathbf{e}}_{\mathbf{i}}} \cdot {\mathbf{u}}}}{{c_{s}^{2}}}+\frac{{{{\left( {{{\mathbf{e}}_{\mathbf{i}}} \cdot {\mathbf{u}}} \right)}^2}}}{{2c_{s}^{4}}} - \frac{{{\mathbf{u}} \cdot {\mathbf{u}}}}{{2c_{s}^{2}}}} \right],$$
(6)

where \(F_{i}^{{{\text{eq}}}}\) is the equilibrium distribution function, ρ is the density, u is the macroscopic velocity of the node, wi is the weight associated with direction ei, and cs is the lattice speed of sound. The relaxation time τ is related to the kinematic viscosity of the fluid (ν) as follows:

$$\upsilon =c_{s}^{2}\left( {\tau - \tfrac{1}{2}} \right).$$
(7)

The macroscopic properties, density and velocity, of the nodes are calculated using the following relations:

$$\rho =\sum\limits_{{i=1}}^{Q} {{F_i}} ,\,\,{\mathbf{u}}=\frac{{\sum\limits_{{i=1}}^{Q} {{F_i}{{\mathbf{e}}_{\mathbf{i}}}} }}{\rho },$$
(8)

where ρ and u are the macroscopic density and velocity of the fluid at each node of lattice.
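As an illustrative sketch of Eqs. (6) and (8) for the D2Q9 model (with the standard weights and cs² = 1/3), the code below builds the equilibrium distribution at a single node and recovers the macroscopic density and velocity from its moments; for D2Q9, the zeroth moment and the first moment of the equilibrium distribution reproduce ρ and u exactly.

```python
import numpy as np

# D2Q9 velocity set: rest particle, 4 axis-aligned and 4 diagonal
# directions, with the standard weights; lattice sound speed c_s^2 = 1/3.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
CS2 = 1.0 / 3.0

def f_eq(rho, u):
    """Equilibrium distribution of Eq. (6) at a single node."""
    eu = E @ u                       # e_i . u for every direction i
    return W * rho * (1.0 + eu / CS2 + eu**2 / (2 * CS2**2)
                      - (u @ u) / (2 * CS2))

def macroscopic(F):
    """Density (zeroth moment) and velocity (first moment divided by
    density) of a distribution, cf. Eq. (8)."""
    rho = F.sum()
    u = (F[:, None] * E).sum(axis=0) / rho
    return rho, u

F = f_eq(1.0, np.array([0.05, -0.02]))
rho, u = macroscopic(F)
```

In a full simulation, these two routines are combined with a streaming step and the BGK collision F ← F − (F − F_eq)/τ, with τ fixed by the target viscosity via Eq. (7).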

Samples

Three types of digitized sandstone [i.e., Fontainebleau (FO), Berea (BE), and Sumatra (SU)] were used as samples in this study. The FO and BE samples were already available as digital binary images (publicly available, for example, at https://goo.gl/RRWjfn), whereas the SU sample was a core plug with a diameter of 1.5 in. A subsample of SU with dimensions of 4 × 9 × 10 mm³ was taken from the core plug to be digitized. Digital images of SU were obtained using a SkyScan 1173 micro-CT scanner. The scanning was performed using an X-ray energy of 130 kV and a current of 60 µA, filtered by a 0.25 mm brass filter. The projection images of the sample were acquired by a flat-panel detector with dimensions of 2240 × 2240 pixels and an exposure time of 1250 ms. The sample stage was rotated through 180° at a rotation step of 0.2°. The scanning process took 102 min and produced 1200 raw projection images (16-bit TIFF), totaling ~ 11.4 GB. The measured porosity and permeability of the core samples are 14.84% and 1300 mD, 16.00% and 1100 mD, and 24.52% and 1865 mD for FO, BE, and SU, respectively (Dong 2008; Latief et al. 2010).

Subsequent to the scanning and reconstruction process, segmentation by means of a global-threshold (Otsu) method was applied to the samples to distinguish the pore and solid phases in the form of binary images. Figure 4 shows the three-dimensional view as well as the two-dimensional cross sections of the samples, from which the pore structure of each sample can be observed. Figure 5 provides the velocity profiles generated by LBM for a sample image size of 256³ voxels.

Fig. 4
figure 4

3D visual (top) and 2D cross sections (bottom) of the samples. Left to right: Fontainebleau (FO), Berea (BE), and Sumatra (SU)

Fig. 5
figure 5

Flow paths of the rock samples (FO, BE, and SU). White streamlines represent flow paths. The total and effective porosities, percentage of closed pores, permeability, and its degree of anisotropy are presented in Table 1

Table 1 Porosity and permeability of the samples

It can be seen qualitatively from the tomographic images that FO contains fewer impurities or small particles than BE and SU. The samples show different degrees of roundness in their grain shapes, and their grain size distributions look very different. As can be seen from the two-dimensional cross sections of the three samples, FO has a lower porosity than BE and SU due to a denser grain distribution. This is also evident from the smaller amount of void space between the grains of FO, which acts as the pathway for fluid flow (see Fig. 5). Table 1 shows the total and effective porosity, percentage of closed pores, permeability, and degree of anisotropy for all samples with a size of 256³ voxels. Table 1 also shows that the effective porosity of the samples differs from the total porosity, indicating that there are void spaces (closed pores) in the samples that do not allow fluid flow.

The difference between the permeability calculated from images (Table 1) and the measured permeability commonly arises from differences in sample size and resolving power. The CT-scan images used for permeability calculation are on the micron-to-millimeter scale, whereas the samples for laboratory permeability measurement are on the centimeter scale, i.e., a volume ratio of more than 10³. Upscaling remains an open problem, to which this study contributes. The resolving power of the laboratory measurement is on the order of the size of a helium atom, whereas the CT-scan resolution is 7 µm; different resolving powers will produce different permeability values. A comparison between measured and calculated permeability is therefore not always easy to carry out.

For each sample, the highest, intermediate, and lowest permeabilities coincide with the x, y, and z directions, respectively. The permeability of FO was found to be much lower than that of BE and SU, although its porosity is only slightly lower than that of BE. In general, the porosity obtained from the tomographic images differs from that measured on the core samples. The measured permeability is, however, quite close to the range of permeabilities calculated from the tomographic images. The largest degree of anisotropy between the x and z directions (kx/kz) belongs to SU. The permeabilities of BE and SU in the x and y directions are almost equal, with a degree of anisotropy (kx/ky) close to one.

Results and discussion

As explained in the previous section, the tomographic images of the samples have a size of 256³ voxels. The images were first divided into blocks, and the permeability of each block was calculated using LBM (i.e., the Palabos software with the D3Q19 momentum operator). The smallest blocks, with sizes of 64³ and 128³ voxels, were chosen in this study; at these sizes, the pore structure is still quite complicated, but the computing time for LBM remains low. Every eight neighboring blocks were then renormalized into a single effective permeability. This renormalization procedure was repeated for the x, y, and z directions until it covered the whole tomographic image. An illustration of the process is shown in Fig. 6.

Fig. 6
figure 6

Processes of the combination of LBM and RGA (top: LBM applied to blocks of 64³ voxels; bottom: LBM applied to blocks of 128³ voxels)
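The block pipeline can be sketched as repeatedly coarsening a 3D array of per-block permeabilities by 2 × 2 × 2 groups. In the sketch below, the starting values are random stand-ins for per-block LBM results, the KK x-direction rule (Eqs. 2–4) is used as the renormalization operator, and the mapping of array indices to the cell numbering of Fig. 2 is an assumption.

```python
import numpy as np

def kk_x(c):
    """KK effective x-permeability of a 2x2x2 sub-array c[ix, iy, iz]:
    harmonic means of the pairs in series along x, averaged in parallel
    (k_L, Eq. 2), versus the harmonic mean of the two x-layer averages
    (k_U, Eq. 3); the result is their geometric mean (Eq. 4)."""
    pairs = [2 * c[0, j, l] * c[1, j, l] / (c[0, j, l] + c[1, j, l])
             for j in (0, 1) for l in (0, 1)]
    kL = sum(pairs) / 4.0
    a, b = c[0].mean(), c[1].mean()
    kU = 2 * a * b / (a + b)
    return float(np.sqrt(kL * kU))

def coarsen(k):
    """One renormalization step: replace every 2x2x2 group of blocks by a
    single effective permeability."""
    n = k.shape[0] // 2
    out = np.empty((n, n, n))
    for i in range(n):
        for j in range(n):
            for l in range(n):
                out[i, j, l] = kk_x(k[2*i:2*i+2, 2*j:2*j+2, 2*l:2*l+2])
    return out

rng = np.random.default_rng(42)
k = rng.uniform(100.0, 2000.0, size=(4, 4, 4))  # hypothetical block k (mD)
while k.size > 1:                               # 4^3 -> 2^3 -> 1
    k = coarsen(k)
keff_x = float(k[0, 0, 0])
```

The y- and z-direction permeabilities follow by applying the same rule along the other axes (e.g., by transposing the array), which is how the directional values behind the anisotropy ratios are obtained.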

The combined RGA schemes, i.e., King–KK and King–GP, were applied with a smallest block size of 64³ voxels, whereas a smallest block size of 128³ voxels was used only for the single RGA schemes. Figure 7 shows the effective permeability calculated by the renormalization process for FO, BE, and SU. In this study, the largest CT-image size is 256³ voxels, so the combined renormalization techniques (King–GP and King–KK) cannot be applied with a starting cell size of 128³, since a single renormalization step already arrives at a single value.

Fig. 7
figure 7

Effective permeability after renormalization processes

The results show that the keff obtained from starting cells of 64³ and 128³ voxels is lower than the permeability calculated directly for a block of 256³ voxels, i.e., the largest block, which is defined as the 'true permeability' listed in Table 1. Table 2 shows the error analysis of all renormalization schemes relative to the true permeability of the largest block. The error is defined as

Table 2 Error analysis of all renormalization schemes which are calculated using Eq. 9
$$e=\frac{{{k_{\text{T}}} - {k_{\text{R}}}}}{{{k_{\text{T}}}}} \times 100\% ,$$
(9)

where kR is the permeability after renormalization scheme and kT is the true permeability.
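For concreteness, Eq. (9) can be computed as below; the values in the usage line are hypothetical and are not taken from Table 2.

```python
def percent_error(k_true, k_renorm):
    """Relative error of Eq. (9), in percent; positive when the
    renormalized permeability underestimates the true value."""
    return (k_true - k_renorm) / k_true * 100.0

e = percent_error(1000.0, 850.0)   # hypothetical kT and kR, in mD
```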

Almost all permeability values after the renormalization process are lower than the true permeability. In general, the effective permeability closest to the true permeability, for almost all samples in all directions, was produced by Karim and Krabbenhoft's (KK) renormalization scheme. The KK scheme does not give the closest results for sample SU, but it still produces a fairly low error (less than 15%).

Interestingly, the ratios between results for different initial block sizes (i.e., kx,64/kx,128 and kx,128/kx,256, and likewise in the y and z directions) are similar for almost all samples, falling in the range 0.8–1.0. Plots of the degree of anisotropy against the RGA schemes are shown in Fig. 8.

Fig. 8
figure 8

Permeability anisotropy of the samples

Comparing the degrees of anisotropy plotted in Fig. 8a, b, the anisotropy involving the z direction (kx/kz) is greater than that involving the y direction (kx/ky). The results also show that the size of the smallest block used for RGA influences the degree of anisotropy: blocks of 64³ voxels as the starting size yield a higher degree of anisotropy than blocks of 128³ and 256³ voxels. An investigation of the variation in petrophysical parameters with the size of the selected sub-volume (either 512³ or 1024³ voxels) using LBM showed that homogeneous sandstones give the same results for the two subset sizes (Alyafei et al. 2016). The compromise between representative sample size and the pixel resolution of the CT scans, especially regarding heterogeneities of the sampled material, will remain challenging (Cnudde and Boone 2013; Henkel et al. 2016). The applicability of this method to carbonate rocks, whose pore space is more complicated and heterogeneous (Krakowska et al. 2016; Nurgalieva and Nurgalieva 2016), as noted by many authors, is also encouraging.

Conclusions

The effective permeability calculated by means of RGA was lower than that obtained by LBM for a single block of 256³ voxels. The keff produced by Karim and Krabbenhoft's renormalization technique was mostly the closest to the true permeability. The degrees of anisotropy showed that FO, BE, and SU are slightly anisotropic, with the degree of anisotropy kx/kz larger than kx/ky. The degree of anisotropy also appears to be influenced by the initial block size used for RGA.