Introduction

Fractures, as discontinuities within crustal rocks, appear over a vast range of length scales (Naderi et al. 2019). There are three principal classes of models for fractured media: discrete, multi-continuum, and hybrid models (Naderi et al. 2019). Discrete models are required when the continuum approach, in which the whole system is modeled as an equivalent permeable medium characterized by a permeability tensor, is not applicable. One of the most important criteria for choosing a model is the scale of the fractures relative to the scale of the study. For systems where the fracture scale is much smaller than the scale of interest, the continuum approach is sensible. Otherwise, the geometry of the fracture network becomes important, and discrete models should be considered (Kadkhodaei and Naderi 2017; Fathianpour et al. 2013). In most modeling of fractures and faults as part of heterogeneity models, fractures have been treated as discrete objects given stochastic distributions, often based on power laws, calibrated either from local core or seismic data or from more general correlations (Makel 2007; Darcel et al. 2003a, b; Heffer et al. 1999). One approach to fracture modeling is the full geomechanical approach, i.e., modeling the fracturing process itself, including crack initiation, propagation, coalescence, and mechanical interaction (Masihi et al. 2005; Dreuzy et al. 2004; Olson et al. 1998). Sahimi, Arbabi, and Tatomir modeled fractures by minimizing an appropriate form of elastic energy together with geomechanics (Tatomir 2007; Sahimi et al. 1993; Sahimi and Arbabi 1996).

Discrete fracture network (DFN) models have been used successfully in recent years (Kang et al. 2019; Pichot et al. 2012; Berrone et al. 2013). Jing et al. (2017) developed a discrete fracture network for coal characterization. Their model integrates the pore-scale roughness obtained from micro-computed tomography (micro-CT) imaging of coals into the discrete representation of the fracture networks. Analysis of the fracture surfaces obtained from micro-CT imaging shows random isotropic surfaces following a Gaussian distribution. The resulting rough-walled discrete fracture network (RW-DFN) models are not restricted by the imaging resolution, which makes them favorable for direct numerical simulation of permeability. In addition, RW-DFN models can be constructed over an extended domain size and incorporated into existing reservoir characterization frameworks to predict coal properties. Song et al. (2018) presented a novel approach to locally optimize a DFN by integrating field tracer data through an improved simulated annealing (SA) process. Their approach was validated with two field case examples, markedly improving the performance of the numerical simulation. Their results showed that the optimization has a minimal effect on the original fracture density model; moreover, the percentage of fractures updated in each perturbation determines the speed and precision of the optimization. The improved simulated annealing algorithm can also be applied widely to other optimization problems. Jiang et al. (2019) investigated the effect of in situ stresses on fluid flow in a natural fracture network. Their simulations show that anisotropic stress loading tends to reduce fracture apertures and suppress fluid flow, decreasing the equivalent permeability of the fractured rock.
Anisotropic stresses may cause significant sliding of fracture walls accompanied by shear-induced dilation of some preferentially oriented fractures, resulting in enhanced flow heterogeneity and channelization. In this study, a geostatistical approach is employed that directly considers the elastic energy of a fractured medium; the model assumes that all fractures already exist and that the medium is in an equilibrium state. One main assumption is that the elastic free energy associated with the fracture distribution follows a Boltzmann distribution. The simulated annealing (SA) algorithm is employed to minimize the associated energy function. Percolation theory is a mathematical model with which the connectivity and percolation properties of complex geometries can be described (Stauffer and Aharony 1992; Follin et al. 2014). When a system is composed of objects distributed randomly and independently, not restricted to the sites of a fixed lattice (and hence without any maximal concentration), as in fractured media, continuum percolation applies. Realizations of anisotropic random fracture networks produced by the simulated annealing algorithm will be used as the disordered system within the basic methodology of percolation theory. The critical properties of the resulting network will be estimated from different realizations at different system sizes.

Modeling

Fracture network modeling

In this paper, we use the simulated annealing technique (Belayneh et al. 2006; Kirkpatrick et al. 1983) as an optimization tool with an energy (objective) function to generate realizations of the fracture network. Simulated annealing is a stochastic optimization technique that has been applied to a variety of global optimization problems involving objective functions with a large number of independent variables. It can be combined with an energy function based on a covariance function to generate correlation in the system (Jing et al. 2017; Hamzehpour and Sahimi 2006). In the following, the adaptation and implementation of the simulated annealing technique for this particular problem are outlined, leading to an optimized configuration of fractures.

Simulated annealing algorithm

We can find an appropriate fracture configuration in 2D space under anisotropic conditions using simulated annealing. Applying simulated annealing requires defining several elements: (1) a set of possible fracture configurations and an initial configuration; (2) a method for making small, random, symmetric changes to the configuration (the system perturbation method); (3) an objective function to be minimized (the cost function); (4) an annealing schedule for a temperature-like parameter T, so that the system can reach its global minimum energy; and (5) a stopping criterion.
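To make these five elements concrete, the following MATLAB sketch outlines the main loop under the assumptions above. The helper functions pairwiseEnergy and perturbOne are sketched in the subsections that follow, stopRule and coolIfDue are hypothetical placeholders for the stopping criterion and temperature schedule discussed later, and all numerical settings are illustrative rather than the exact inputs of Table 1.

```matlab
% Minimal sketch of the simulated annealing loop for the fracture model.
N = 500; L = 150;                       % assumed model size (cf. Table 1)
rx = L*rand(N,1); ry = L*rand(N,1);     % initial positions, uniform on (0, L)
theta = pi*rand(N,1);                   % initial orientations, uniform on (0, pi)
len = max(2 + 0.5*randn(N,1), 0.1);     % Gaussian lengths, floored positive (assumed moments)
T = 1;                                  % initial temperature-like parameter (assumed)
E = pairwiseEnergy(rx, ry, len, theta); % Eq. 1, sketched below
while ~stopRule()                       % e.g., acceptance ratio < 0.01 (see below)
    i = randi(N);                       % pick one fracture at random
    [t2, l2, x2, y2] = perturbOne(theta, len, rx, ry, i);  % Eq. 2, sketched below
    dE = pairwiseEnergy(x2, y2, l2, t2) - E;
    if dE < 0 || rand < exp(-dE/T)      % Metropolis acceptance rule
        theta = t2; len = l2; rx = x2; ry = y2; E = E + dE;
    end
    T = coolIfDue(T);                   % geometric schedule, Tnew = k*Told (see below)
end
```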

Possible configurations and initial configuration

Considering the constraints on the three characteristic parameters of fractures, i.e., position, length, and orientation, any configuration that obeys these constraints is possible. That is, the fractures’ centers must lie within the square of side L, with \(x \in (0,L)\) and \(y \in (0,L)\). There is no constraint on fracture orientation, and fractures may take any direction. One way of minimizing the energy function defined by Eq. 1 is to reduce the fractures’ lengths.

$$E = \sum\limits_{k = 1}^{N} {\sum\limits_{\begin{subarray}{l} k \ne l \\ l = 1 \end{subarray} }^{N} {\left( {Au_{k} u_{l} [\eta |\cos (\theta_{k} - \theta_{l} )| + |\cos (\alpha - \theta_{l} )\cos (\alpha - \theta_{k} )|]/r_{kl} } \right)} }$$
(1)

where A = E/16πμ(1 − ν), N is the number of fractures in the system, and uk and ul are the displacements on fractures k and l. α, θk, and θl specify the orientations, with respect to the horizontal x-axis, of the distance vector rkl and of the fracture vectors uk and ul, respectively. This equation relates the pairwise interactions of fractures to the total elastic energy of the system (Roy et al. 2010; Heffer and King 2005).
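As an illustration of how Eq. 1 can be evaluated, the following MATLAB sketch computes the total energy of a configuration. The constant A is set to 1 and the fracture displacements are taken proportional to length; both are assumptions made for illustration, not values from this work.

```matlab
function E = pairwiseEnergy(x, y, len, theta)
% Total elastic energy of Eq. 1 (sketch). x, y, len, theta are N-by-1
% vectors of fracture centers, lengths, and orientations. Displacements
% u_k are taken proportional to length and A = 1 (both assumptions).
A = 1; eta = 1;        % illustrative constants; eta is direction-averaged
                       % in the anisotropic variant described later
N = numel(x);
E = 0;
for k = 1:N
    for l = 1:N
        if l == k, continue; end
        dx = x(l) - x(k); dy = y(l) - y(k);
        rkl = max(hypot(dx, dy), 1e-9);   % guard against coincident centers
        alpha = atan2(dy, dx);            % orientation of the distance vector
        E = E + A*len(k)*len(l) * ( eta*abs(cos(theta(k)-theta(l))) ...
              + abs(cos(alpha-theta(l))*cos(alpha-theta(k))) ) / rkl;
    end
end
end
```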

If the simulated annealing procedure advances without any lower bound on fracture length, the fracture sizes tend to zero. Two solutions have been proposed for this problem: (1) initializing the model with large fractures (relative to the side of the square) and restricting the simulated annealing procedure using a maturity control parameter, called the acceptance ratio; (2) renormalizing the fracture length distribution so that the mean fracture length is kept fixed. In this work, the first method is used.

One of the significant advantages of the simulated annealing technique is that its convergence is independent of the initial configuration. Since the choice of the initial fracture configuration does not affect convergence, different distributions can be used for the fracture lengths and orientations in the algorithm. Here, a Gaussian distribution is used for fracture length, and uniform distributions on (0, π) and (0, L) are used for fracture orientation and position, respectively. A realization of the initial fracture network configuration is shown in Fig. 1.

Fig. 1 Initial network configuration with a Gaussian distribution for fracture lengths

System perturbation method

After defining the initial configuration, the simulated annealing algorithm needs a new candidate solution to evaluate in order to progress to lower energy levels. This new configuration must differ only slightly from the previously accepted configuration reached by the algorithm. The updating method for generating new candidate solutions from the currently accepted one may either be fixed throughout the run or changed as the algorithm progresses and the model matures. Contrary to the algorithms used in previous works, the updating method used here to generate new candidate solutions is not fixed and changes as the algorithm progresses. As the model energy decreases and the model becomes more mature, the number of rejected candidate solutions gradually increases. If the schedule used in the early iterations of the algorithm were kept in the late iterations, the overall rejection rate at the end of the process would be too high, and the model could not reach a better configuration with lower energy. Therefore, a secondary schedule has been used to change the updating method, so that a more precise solution is achieved in a shorter run time.

To change the current solution and generate a new nearby candidate solution, the three characteristic properties of one randomly chosen model element (fracture) are altered slightly, i.e., its length, orientation, and position. The updating equations must be able to generate new candidate solutions that differ only slightly from the current solution, and they must be symmetric. Here, the equations of Masihi and King (2007) are used. That is:

$$\begin{gathered} \theta_{i}^{{{\text{new}}}} = \theta_{i}^{{{\text{old}}}} + 0.05\pi (2R - 1) \hfill \\ l_{i}^{{{\text{new}}}} = l_{i}^{{{\text{old}}}} + 0.05(2R - 1) \hfill \\ rx_{i}^{{{\text{new}}}} = rx_{i}^{{{\text{old}}}} + 0.5(2R - 1) \hfill \\ ry_{i}^{{{\text{new}}}} = ry_{i}^{{{\text{old}}}} + 0.5(2R - 1) \hfill \\ \end{gathered}$$
(2)

where R is a random number uniformly distributed on [0, 1]. The term (2R − 1) induces symmetry in the simulated annealing algorithm: fractures may grow or shrink, rotate clockwise or counterclockwise, and move north, south, east, or west.
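A direct MATLAB transcription of Eq. 2 might look as follows; drawing a fresh random number R for each property is an assumption, since the text does not state whether a single R is reused.

```matlab
function [theta, len, rx, ry] = perturbOne(theta, len, rx, ry, i)
% Symmetric perturbation of fracture i following Eq. 2. A fresh uniform
% random number is drawn per property (assumption), so each move and its
% reverse are equally likely.
theta(i) = theta(i) + 0.05*pi*(2*rand - 1);   % small rotation
len(i)   = len(i)   + 0.05*(2*rand - 1);      % small growth or shrinkage
rx(i)    = rx(i)    + 0.5*(2*rand - 1);       % small shift in x
ry(i)    = ry(i)    + 0.5*(2*rand - 1);       % small shift in y
end
```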

Objective function and transition

So far, an initial configuration and a set of updating equations have been presented. A decision-making criterion is now required to determine whether the model should transit to a new configuration or not. This criterion is the change in the value of some function called the objective function; here, the objective function is the energy function of Eq. 1. To introduce anisotropy into this model, different η values have been assigned to the x- and y-directions, with ηy = 2. Then, in computing the energy between two fractures, an average of these values is used, weighted by length and orientation (the projection of the fractures on each axis). When a new configuration is generated by the updating equations, the change in the energy function is computed. If ΔE < 0, the transition is accepted unconditionally. Otherwise, the new configuration is accepted or rejected based on the Metropolis–Hastings criterion (with probability proportional to the Boltzmann factor e−ΔE/T). This flexibility allows the algorithm to escape from local minima.
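The projection-based averaging of η is only described briefly in the text; the following fragment shows one plausible reading, purely for illustration. Here etaX = 1 is an assumption (only ηy = 2 is stated), and the exact weighting used in this work may differ.

```matlab
% One plausible projection-weighted average of the anisotropy parameter
% for a fracture pair (k, l); an illustrative reading, not necessarily
% the exact scheme of this work.
etaX = 1; etaY = 2;                    % etaY = 2 as stated; etaX assumed
px = len(k)*abs(cos(theta(k))) + len(l)*abs(cos(theta(l)));  % x-projections
py = len(k)*abs(sin(theta(k))) + len(l)*abs(sin(theta(l)));  % y-projections
etaKL = (etaX*px + etaY*py) / (px + py);  % length/orientation weighted average
```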

Temperature lowering schedule

At any temperature, after a limited number of iterations the system reaches a state of maturity at which the temperature parameter must be lowered. If the temperature parameter is not reduced, the system maturity stays at that level, captured in a loop, and the algorithm's progress becomes too slow because of the high rate of acceptance of transitions with ΔE > 0. To speed up progress, the temperature should be lowered according to an appropriate schedule, which directly depends on the size of the system. Here, approximately after every ten visits to each fracture in the system, the temperature is lowered such that Tnew = kTold, with k ranging from 0.94 to 0.97. Figure 2 illustrates the variation in temperature with algorithm iteration; colors indicate the energy levels of the system, with brighter colors corresponding to higher energy levels (Fig. 2).

Fig. 2 Illustration of the temperature lowering schedule (Tnew = kTold)
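In code, this schedule reduces to a minimal sketch such as the following, where it is the iteration counter, N the number of fractures, and k = 0.95 an assumed value within the reported 0.94–0.97 range:

```matlab
% Geometric cooling (sketch): lower T after roughly ten perturbation
% attempts per fracture.
k = 0.95;
if mod(it, 10*N) == 0
    T = k * T;            % Tnew = k * Told
end
```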

Stopping criteria

Depending on the particular system under study, the available time, the computational limitations of the computers at hand, and the purpose of the study, different stopping criteria may be used. For this particular fracture system, the stopping criterion may be defined in terms of the average value of ΔE over the last N iterations, the mean fracture length, the acceptance ratio, the total number of iterations, etc.

Here, a maturity control parameter is used to terminate the algorithm. This parameter is the acceptance ratio (AR) defined over the last 50,000 iterations; if AR < 0.01, the algorithm is stopped.
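A minimal sketch of this stopping rule, assuming a logical vector acceptedHistory (a hypothetical name) that records the accept/reject decision of every iteration:

```matlab
% Acceptance-ratio stopping rule (sketch).
window = 50000;                          % look-back window from the text
if numel(acceptedHistory) >= window
    AR = mean(acceptedHistory(end-window+1:end));
    if AR < 0.01
        stop = true;                     % model is mature; terminate
    end
end
```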

Periodic boundary condition (PBC)

The fractures in the model are distributed over a square of side L. The limited lateral extent of this square introduces boundary effects in the final configuration of fractures, because fractures in the interior and near the boundaries experience different stresses. This difference is apparent from Eq. 1: when a fracture is positioned near a boundary, the average distance between it and the other fractures is larger, and its contribution to the total energy of the model is smaller than that of fractures far from the boundaries. Consider a simple boundary condition that replaces a fracture exiting from one side with an identical fracture entering from the opposite side. If this simple boundary condition, which neglects the effect of boundaries, is used, all the fractures would accumulate at the edges of the model, maximizing their distances from each other, because of the lack of any fractures beyond the boundary.

Masihi and King (2007) used a simple periodic boundary condition that accounts for the effect of boundaries using reflected imaginary fractures. In that boundary condition, as in the simple one above, if the end of a fracture exits the square, the fracture re-enters from the other side, and an imaginary fracture identical to the exiting one, at the same distance from the boundary, is introduced into the square. Here, however, a different periodic boundary condition is used, which is physically more meaningful: the square is surrounded by eight exactly equal copies of itself, in addition to re-entering exiting fractures from the opposite side. This boundary condition is shown in Fig. 3.

Fig. 3 Order and position of the periodic boundary condition used in the algorithm in this work
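A minimal MATLAB sketch of this nine-block replication, assuming column vectors rx, ry, len, and theta for the central square, might read:

```matlab
% 3x3 periodic tiling (sketch): the central square of side L is surrounded
% by eight translated copies so that Eq. 1 sees neighbors across boundaries.
[sx, sy] = meshgrid([-L 0 L], [-L 0 L]);         % shifts of the nine blocks
rxAll = reshape(rx + reshape(sx, 1, []), [], 1); % 9N replicated x-centers
ryAll = reshape(ry + reshape(sy, 1, []), [], 1); % 9N replicated y-centers
lenAll   = repmat(len,   9, 1);                  % lengths copied per block
thetaAll = repmat(theta, 9, 1);                  % orientations copied per block
rx = mod(rx, L);  ry = mod(ry, L);               % wrap exiting centers back in
```

The replicated arrays (rxAll, ryAll, lenAll, thetaAll) would then feed the energy evaluation, so that a fracture near a boundary interacts with the copies beyond it.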

MATLAB code for simulated annealing algorithm

In this work, the MATLAB programming language is used to implement the simulated annealing algorithm. MATLAB (MATrix LABoratory) is a fourth-generation programming language and software environment with a variety of powerful computation and visualization tools for engineering projects. Its primary data element is the matrix, so a program that manipulates array-based data is generally fast to write and run in MATLAB. To speed up the iterations of the simulated annealing algorithm's main loop, the properties of the center block and the eight surrounding blocks are unified into single matrices, in a meaningful order, such that there is only one length matrix for all fractures, whether in the center block or in a surrounding block. At least one-third of the total time of this work was spent writing code, especially for the percolation part. The first block of the MATLAB code for the simulated annealing algorithm, which initializes the model with 500 fractures, is given below; the full version of the MATLAB code is provided in the Appendix.

figure a (MATLAB code listing: initialization of the model with 500 fractures)

Results and discussion

Generation of model realizations

Different realizations at different model sizes (or, more precisely, fracture numbers) are required to investigate the fracture network's connectivity properties using percolation theory. In this work, a total of 200 realizations at four different system sizes of 100, 150, 200, and 250 have been generated using the simulated annealing algorithm. The final length distribution of the fractures must be controlled between two extremes so that an investigation of connectivity is possible. The fractures in the final configuration must be above the percolation threshold, so that by randomly removing fractures from the network, the percolation threshold of each particular realization can be extracted through the Monte Carlo technique. One extreme is when the final fractures are too long and the system is always connected; the other is when the fractures are too short and there is no spanning cluster of fractures.

Indeed, there must be a balance between fracture number and fracture length in the final configuration. Therefore, the simulated annealing algorithm was run with various fracture numbers and initial length distributions (initial fracture densities), perturbation methods, initial temperatures, temperature lowering schedules, and stopping criteria, through trial and error, to reach an optimum combination of these significant factors. The different settings (algorithm inputs) used for the simulated annealing runs at different model sizes are given in Table 1.

Table 1 Initial conditions of the simulated annealing algorithm

Energy minimization during algorithm progress

As noted before, as the simulation proceeds, the system's energy level goes down. This energy reduction process has two main phases. In the first phase, the model is at high energy levels and the temperature is high, so the Metropolis expression in the algorithm gives more chances for energy-increasing transitions, providing more opportunities to escape from local minima toward the global minimum. In other words, the hill-climbing property of the simulated annealing algorithm is more prominent in this phase. Unlike the first phase, in the second phase the system's energy levels are low and the system reaches relative stability. In this phase, the system's opportunity to escape from traps is much smaller, and the system configuration changes only slightly. Figure 4 shows the typical behavior of the system's total energy against iteration.

These two phases of the system's total energy look like two steps. In the upper step, the system is very unstable, and there are severe fluctuations by virtue of the higher temperatures (brighter colors). Then, as the temperature is lowered, the system transits to a stable condition with lower energy levels and smooth changes, because of the relatively low temperature (darker colors).

The second useful graph is that of the energy difference before and after each perturbation. The two main phases of the system energy appear here with a different characteristic. As in Fig. 4, fluctuations are evident in this graph, but another feature separates the two phases: the degree of symmetry about the x-axis. In the early iterations of the algorithm, there is a high acceptance rate for energy-increasing transitions by virtue of the system's high temperature, so the graph is approximately symmetric about the x-axis, because nearly any change in the system's energy is accepted (brighter colors). As the number of iterations grows, the conditions change: with an increasing rejection rate for energy-increasing transitions, the graph becomes asymmetric about the x-axis and tends toward the upper half, with ΔE > 0 (darker colors) (Fig. 5).

Fig. 4 Variation in total system energy (E) as a function of algorithm iteration (sampled at a ratio of 1:50)

Fig. 5 Variation in the energy difference before and after perturbation with iteration (sampled 1:50)

Another useful graph is that of the cumulative number of accepted transitions against the cumulative number of iterations, which also shows these two phases. The chart looks like a boomerang with a nearly straight wing and a gently curved wing. The left wing represents the first phase; its slope is approximately 0.98, which means almost all perturbations were accepted, whether they decreased the energy or not. The system then transits to the second phase (the curved region). Unlike the left wing, the right wing, representing the second phase, has a very gentle average slope of approximately 0.09: nearly only perturbations that reduced fracture length were accepted (Fig. 6).

Fig. 6 Cumulative number of accepted transitions against iteration

The exploitation of statistics of the model

Figure 7 shows one realization of a final fracture network configuration. The questions that arise are: How are the different fracture network parameters distributed? Is there any correlation between the parameters of the final fracture configuration?

First, let us inspect the fracture length distribution, presented in Fig. 8. The fracture size distribution for this particular realization can be fitted with either an exponential or a power-law distribution. As mentioned before, different initial conditions (e.g., other fracture length distributions, different initial orientations) may be used for the simulated annealing algorithm, and consequently different final outputs can be generated. The absolute values of the fracture lengths depend on the initial values, and this model must be calibrated with local well log, core, and seismic data. The slope of the fracture size distribution, a measure of the relative fracture size values, is also shown in Fig. 8.

Fig. 7 a A final configuration of the fracture network with 1150 fractures and L = 150. b The percolating cluster, containing 453 fractures

Fig. 8 Fracture length distribution of a typical realization. This distribution can be fitted with either an exponential or a power law (slope ≈ −1.3)

Orientation is the other fundamental parameter of the final fracture network configuration. As shown in Fig. 7, the fractures generally constitute two fracture sets, a horizontal set and a vertical set. Figure 9 gives the distribution of the orientation values of the fracture network: the upper part shows the distribution over all fractures, while the lower part shows only the relatively larger ones. The roughly uniform diminution of all bars in the lower part means that the smaller fractures are randomly oriented, which is reasonable: smaller fractures make smaller contributions to the system's total energy, so the energy change associated with rotating them is small, the Metropolis term accepts such moves more often, and they have more freedom to rotate randomly.

In the upper part of Fig. 9, the orientation distribution of all fractures in the model is given. The two principal fracture sets appear as the two modes of an asymmetric bimodal distribution: the larger mode at approximately 0 radians, the horizontal set, and the other at approximately 1.6 radians, the vertical set. The horizontal mode is nearly twice as large as the vertical one. The summary statistics of these two fracture sets are given in Table 2.

Fig. 9 Fracture orientation distribution for (above) all fractures and (below) fractures larger than the mean fracture length

Table 2 Summary statistics of two principal fracture sets

Now another question comes up: Is there any correlation between fracture length and fracture position? A particular type of correlation does exist between these two parameters of the final fracture configuration. Bour and Davy (1999) and Lei et al. (2017) presented such a correlation in a fracture network between fracture length and fracture position. They defined the parameter dc(l) as the distance between the center of a fracture and that of its nearest neighbor, and showed that there is a scaling law between this parameter and the fracture length such that dc(l) \(\propto\) lm, where the exponent m varies in [0, 1]. The lower bound of this interval indicates no correlation between fracture length and fracture position, while the upper bound corresponds to space-filling objects such as the Apollonian packing described by Mandelbrot (1982) and Lei et al. (2016). Darcel et al. (2003a, b) and Hardebol et al. (2015), using natural fracture network data, computed the exponent m from fracture maps within the range [0.1, 0.3].

For the initial condition of the simulated annealing algorithm, m is clearly nearly zero, because the fractures are distributed randomly. But as Fig. 10 shows, there is a correlation between fracture length and fracture position in the final configuration: a positive correlation between a fracture's length and the distance to its nearest neighbor. That is, the larger the fracture, the larger the space around it (Fig. 10).

Fig. 10 Correlation between fracture length and position (slope ≈ 0.12)
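For a final realization, the exponent m can be estimated with a short MATLAB sketch such as the following, assuming column vectors rx, ry, and len for the fracture centers and lengths:

```matlab
% Estimate the exponent m in dc(l) ~ l^m from a final realization (sketch).
D = hypot(rx - rx.', ry - ry.');      % pairwise center-to-center distances
D(1:numel(rx)+1:end) = inf;           % exclude self-distances on the diagonal
dc = min(D, [], 2);                   % nearest-neighbor distance per fracture
p = polyfit(log(len), log(dc), 1);    % straight-line fit on the log-log plot
m = p(1);                             % expected in [0, 1]; ~0.12 reported here
```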

Finally, summary statistics over all realizations are given. In Table 3, the important statistical parameters of all 200 realizations of the fracture network, such as the relative number and length of the two fracture sets, the mean orientation angle of each fracture set, and the slope of the correlation between the size and position of fractures, have been computed.

Table 3 Summary statistics of all 200 realizations (M: Mean, SD: Standard deviation, R: Relative)

Connectivity properties of the model

Finally, in this section, the connectivity of this basic fracture model is studied. Fracture network models are treated as cases of the continuum percolation model, with the equivalent terms defined, and the percolation properties of these models are evaluated. Bour and Davy (1997) and Geiger (2019) modeled and quantified the percolation properties of random fault networks following a power-law fault length distribution, which differed from the values for systems of equal-sized fractures. Here, the percolation threshold (\(p_{c}^{\infty }\)), the correlation length exponent (ν), and the connectivity exponent (β) have been estimated.

Percolation threshold

From a practical point of view, all systems under study are limited in size. The percolation threshold for such finite systems is defined as the occupancy at which 50% of all realizations at that occupancy connect the boundaries of the system; it is called the apparent percolation threshold. The apparent percolation threshold varies slightly with the system size due to finite-size effects, which is modeled by:

$$p_{c} (L) - p_{c}^{\infty } \propto L^{ - 1/\nu }$$
(3)

Here \(p_{c}^{\infty }\) is the actual percolation threshold (i.e., for an infinitely large system) and ν denotes the correlation length exponent (Song et al. 2018; Stauffer and Aharony 1992). This proportionality means that the larger the system size, the better the estimate of the actual percolation threshold. The error Δ(L) associated with the apparent threshold values due to finite-size effects, defined as the root mean square (RMS) deviation of the apparent percolation threshold over different realizations at the same system size, also varies as L−1/ν. So, without any knowledge of ν, one may obtain an estimate of the percolation threshold by plotting pc(L) against Δ(L) and extrapolating to Δ = 0. This method works particularly well for percolation, where it was introduced by Levinshtein et al. (1975). The expressions above use the average percolation threshold obtained by performing numerous Monte Carlo experiments at the same system size, checking when the spanning cluster first appears as the system is slowly filled (or disappears as it is slowly emptied). In large networks, a more efficient method may be required; here, an adapted version of the technique introduced by Stauffer and Aharony (1992) is used. First, check the connectivity with N/2 of the fractures, chosen randomly. If the system percolates, decrease the number of fractures by N/4; otherwise, increase it by N/4. Check again: if percolation occurs, decrease by N/8; otherwise, increase by N/8. Repeating in this way, after about ten such iterations the onset of percolation is known with an accuracy sufficient for most purposes. This value is the threshold of the particular realization; by averaging over all realizations at a given system size, one value of pc(L) is estimated (Fig. 11).

Fig. 11 Estimation of \(p_{c}^{\infty }\) by extrapolation to Δ(L) = 0
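The bisection procedure for a single realization can be sketched in MATLAB as follows, where percolates is a hypothetical predicate that tests whether a given subset of fractures spans the system, and occupancy is taken as the fraction of fractures present (one simple definition):

```matlab
% Bisection search for the percolation onset of one realization (sketch).
perm = randperm(N);        % random removal/addition order of the N fractures
n    = round(N/2);         % start with half of the fractures
step = round(N/4);
for it = 1:10              % ~10 halvings give sufficient accuracy
    if percolates(perm(1:n))
        n = n - step;      % spanning cluster exists: try fewer fractures
    else
        n = n + step;      % no spanning cluster: add fractures back
    end
    n = min(max(n, 1), N); % keep the count within bounds
    step = max(1, round(step/2));
end
pcRealization = n / N;     % occupancy at onset for this realization
```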

Correlation length exponent

There are three different ways to estimate the correlation length exponent. The first uses the scale dependency of the error associated with the apparent threshold: as mentioned in the previous section, it is sufficient to plot the RMS deviation of the apparent percolation threshold, Δ(L), for different system sizes on a log–log scale; the slope of the resulting graph gives an estimate of 1/ν (Fig. 12). An alternative way is to use pf and p1−f, the occupancy probabilities at which the fractions f and 1 − f of all realizations percolate, whose scale dependency is expected to follow Gawlinski and Stanley (1981):

$$p_{1 - f} - p_{f} \propto L^{ - 1/\nu }$$
(4)
Fig. 12 Using the slope of the variation in Δ(L) against L to estimate \(1/\nu \approx 0.52 \pm 0.016\)

Different values of f give different estimates, which can be averaged to obtain a more reliable estimate of ν. Finally, one can use Eq. 5 and take a log–log plot of ξ(p) against \(p - p_{c}^{\infty }\) for a relatively large system size.

$$\xi (p) \propto |p - p_{c}^{\infty } |^{ - \nu }$$
(5)

In this work, the first method has been used.
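A sketch of this first method in MATLAB, assuming deltaL holds the precomputed RMS deviations of the apparent thresholds at each system size:

```matlab
% Fit 1/nu from the finite-size scaling Delta(L) ~ L^(-1/nu) (sketch).
Lsizes = [100 150 200 250];                 % system sizes used in this work
p = polyfit(log(Lsizes), log(deltaL), 1);   % slope of the log-log plot
invNu = -p(1);                              % reported value here: ~0.52
nu = 1 / invNu;
```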

Connectivity exponent

There are two ways to obtain an estimate of the connectivity exponent (β). One can take a log–log plot of P(p, L) versus \(p - p_{c}^{\infty }\) for a relatively large system size; the gradient of the resulting curve gives an estimate of the connectivity exponent of the system. Alternatively, since the relevant length scale at the apparent threshold is the system size itself, one can estimate β from the finite-size dependency of the percolation probability (the connected fraction), according to the following equation (Fig. 13).

$$P(p,L) \propto L^{ - \beta /\nu }$$
(6)
Fig. 13 Variation in P against L, \(\beta /\nu \approx 0.05 \pm 0.017\)

Discussion

The heterogeneity of fractured petroleum reservoir rock is of such an extent that it is impossible to model it completely, so the significance of statistical methods is undeniable. Usually, the different fracture network parameters, such as size, orientation, and position, are given statistical distributions calibrated using well log, seismic, and outcrop data from the local reservoir rock. Such models lack any physical constraint on the distribution of the different characteristics of the network. In this work, a model was developed using a simulated annealing algorithm in which the fundamental equation to be minimized was derived from elasticity theory; thus, the resulting model has physical constraints, in contrast to unrestricted, purely statistical models. Section “The exploitation of statistics of the model” is devoted to the statistics of the realizations of the model. The model is not capable of representing fracture apertures; however, to investigate the model's conductivity, for example, one can use existing correlations between the size and aperture of fractures.

One of the most critical problems of fractured reservoirs is their high degree of uncertainty and variability due to fractures at different scales, and this problem appears at various stages of the reservoir's life, from the early stages of drilling through field development and management. Fractures can act as hydraulic conductors or as barriers to flow. The connectivity of the fracture network is a crucial parameter controlling fluid movement, and it depends on the geometrical parameters of the individual fractures and of the system.

One approach to investigating the connectivity of such an extensively complicated network is percolation theory, as a basic mathematical model of highly disordered media such as fracture networks. In Sect. “Connectivity properties of the model,” three crucial percolation properties were estimated over all realizations: the percolation threshold (critical density) of the resulting system was evaluated, and the scaling-law exponents that appear in connectivity formulations and dictate the network's connectivity behavior were assessed. Here, a limited number (200) of realizations has been used; the results may be improved by using more realizations.

Conclusions

Fractured petroleum reservoirs provide over 20% of reserves globally, and this figure is nearly 90% in Iran. The usual approach to numerical modeling of fractured reservoirs is to use statistical distributions of the different fracture network parameters, calibrated to local well log, seismic, and outcrop data; consequently, these models lack any physical restrictions. In this work, realizations of a discrete fracture network for an anisotropic medium with different properties along each axis have been generated. The main conclusions are as follows:

  1. Fractures form two principal sets. Fractures along the x-axis are generally longer and more numerous.

  2. Fractures longer than the mean fracture length are more strongly polarized along the two principal axes. Smaller fractures, owing to their relatively smaller contribution to the total system energy, are oriented nearly randomly.

  3. The log–log plot of the fracture length distribution is nearly linear and can be fitted with either a power law or a negative exponential law. The mean slope of this line over all realizations is approximately −1.4.

  4. There is a positive correlation between fracture positions and lengths. Larger fractures are better able to expel other fractures, so their nearest neighbors lie farther away.

  5. Three crucial parameters of the percolation model of the system have been estimated: the percolation threshold, the correlation length exponent, and the connectivity exponent.

  6. A new algorithm to identify fracture clusters has been developed (Appendix).

As future work, reducing the number of matrices that define the fracture network properties, for example by uniting the length, orientation, and position matrices, would decrease the computation time significantly. In addition, using the acceptance ratio as the stopping criterion makes the final output fracture configurations fall within the same range of properties, such as the orientation distribution, so that they can be used in the percolation methodology for connectivity.