Introduction

The original objective of the case study was to evaluate infill drilling potential and quantify the associated uncertainty. The permeability and porosity fields are the two major uncertainties identified during the static modeling phase. Other uncertainties became known after the history matching phase and from analogy with nearby fields. No core information was available, and petrophysical properties were generated using existing correlations. At present, the reservoir is producing under primary recovery with a recovery factor of approximately 28 %. This recovery is uncharacteristic of the Niger Delta under the current drive mechanism, and the operator desires to improve the factor to about 35 %. It was conceived that by reducing well spacing, the effects of reservoir heterogeneity and limited layer continuity can be mitigated to enhance well connectivity, which in turn can improve oil recovery by accelerating production. However, because of the large uncertainty, infill well performance can be seriously hampered. Hence, uncertainty quantification is an integral part of this study.

As pointed out by Subbey et al. (2003) and Bustamante et al. (2005), the success of infill drilling is directly related to the uncertainties associated with it. Individual reservoir uncertainties (static and dynamic) combine to give a resultant total uncertainty associated, in particular, with the performance of infill drilling. There are several methods for uncertainty estimation. Analytical methods are characterized by several assumptions and are becoming less efficient at estimating and quantifying the various uncertainties as petroleum reservoirs grow more complex. Other common methods include the Monte Carlo technique (Hammersley and Handscomb 1964), the derivative tree technique (Steagall and Schiozer 2001), and statistical theory (Venkataraman 2000). Floris et al. (2001) provided a comprehensive comparison of the accuracy of several existing methods for quantifying the uncertainty of production forecasts. Detailed reviews of uncertainty estimation and analysis can be found in Amaefule and Keelan (1989), Akinwumi et al. (2004), Ballin et al. (2001), and Alhuthali et al. (2006).

Drilling infill wells, according to Ofoh (1992), can improve hydrocarbon recovery by accelerating production. However, determining infill potential, as well as selecting well type and placement, has been a challenge (Thakur and Satter 1998). The recommended way to determine infill drilling potential in a reservoir is to conduct a complete reservoir evaluation involving geological, geophysical, and reservoir engineering analyses and interpretations. This approach is prohibitively time-consuming and expensive for some large hydrocarbon fields (Linhua et al. 2005). The Infill Drilling Predictive Model (IDPM), which requires a minimal amount of reservoir and geologic description, has also been used (Fuller et al. 1992). However, IDPM requires knowledge of heterogeneity elements (pay continuity and permeability variation among layers) that are not easily or often measured in actual fields. Voneiff and Cipolla (1996) developed a model-based analysis method, the moving window technique, and applied it for rapid assessment of infill and re-completion potential in the Ozona field. The method, according to the authors, is quick, but its accuracy decreases with increasing heterogeneity. Empirical correlations (Hudson et al. 2000) are also available to determine infill potential in complex, low-permeability gas reservoirs. These correlations are reservoir specific and have therefore gained limited application. The use of numerical-based instead of analytical-based conceptual models has been reported (Ogbe et al. 2009). Conceptualized models have been used to answer frequently asked questions such as the number of wells and their optimum placement within the reservoir. In this study, a full-field numerical simulation was conducted to determine infill location, infill well type, and placement within the reservoir sub-regions. For accurate uncertainty quantification, a good proxy is needed. Our methodology used a modern experimental design, uniform design (UD), to determine the locations of sample points within the design space instead of conventional designs such as central composite, Box–Behnken, and full factorial designs. The least-squares method was used to construct a response surface that interpolates the limited number of simulation results for uncertainty quantification.

Methodology

Grid set-up and model initialization

The top surface map and well locations of the case study are shown in Fig. 1. The reservoir is penetrated by seven wells, all deviated. The dynamic simulation model was constructed using a 100 × 100 × 1 ft grid believed to preserve reservoir heterogeneity while allowing quick simulation runs. The grid was created using corner-point geometry with dimensions of 300 × 300 ft. The reservoir average porosity is 26.4 %, and the average water saturation and permeability are 27 % and 700 mD, respectively. The reservoir has an initial pressure Pi = 1950 psia, an oil gravity of 22 °API, a formation volume factor of 1.22, and a temperature of 165 °F.

Fig. 1
figure 1

The case study top surface map and well location

Reservoir descriptions of porosity and permeability

The reservoir was divided into four different sub-regions using faults. In all these regions, permeability ranges from 100 to 1300 mD. Vertical variation of permeability is prevalent in all the regions. Lateral continuity of permeability was also observed, especially in Regions 1, 2, and 3. For example, the distribution of permeability values less than 500 mD in Region 2 is mostly laterally continuous, owing to the presence of stacked beds with a high degree of lateral continuity. Figure 2 shows an isometric view of the permeability distribution in all the sub-regions of the reservoir.

Fig. 2
figure 2

Isometric view of regional permeability models

The permeability histograms (Fig. 3) of Regions 1 and 3 reflect averages of 500 and 800 mD, respectively. The permeability shows a multi-modal distribution in Region 1, with some small peaks at the extremes of the distribution. These peaks correspond to shale bodies distributed in the reservoir. Permeability is high at the center of the structure and degrades towards the west. In Region 3, the distribution is fairly normal, with a mean permeability of 900 mD and a standard deviation of 141.4 mD.

Fig. 3
figure 3

Permeability histograms for all the reservoir sub-regions

In Region 4, the permeability distribution is log-normal, skewed to the right, with a mean of 500 mD and some peaks at the extremes corresponding to shale bodies distributed in the reservoir. In Region 2, the distribution of permeability is fairly uniform, also with peaks at the lower extreme. The average permeability in this region ranges between 500 and 720 mD. Porosity histograms for all the regions are shown in Fig. 4. Both the porosity and permeability distributions agree geologically, with high permeability found mostly in high-porosity regions. Porosities generally range between 10 and 27 %, although values as high as 36 % are found in some areas. The porosity distribution in Regions 1 and 2 is uniform, while Regions 3 and 4 exhibit skewed normal distributions.

Fig. 4
figure 4

Porosity histograms for all the reservoir sub-regions

Pressure and saturation matching

The traditional history matching method (Rwechungura et al. 2011) was adopted. The workflow consists of reservoir type definition, data preparation and consistency checks, material balance analysis, pressure and saturation matching, and results analysis. The material balance tool (MBAL) was used for the evaluation. The three reservoir drive mechanisms were established to be fluid expansion, pore volume compressibility, and water drive. As shown in Fig. 5a, the predominant energy is water influx. The analytical plot in Fig. 5b shows clearly that the observed production cannot be matched without the aquifer in place; a steep drop in the reservoir pressure profile when no aquifer model was included indicates insufficient energy within the reservoir. The aquifer was modeled numerically and attached to the grid. Using an appropriate aquifer pore volume, a stable pressure that fairly matched the reservoir pressure was obtained. After a number of modifications to the aquifer size and its permeability, a pressure match was achieved on a reservoir basis.

Fig. 5
figure 5

a Reservoir energy diagram and b pressure profile with and without aquifer model

A review of the production performance shown in Fig. 6 depicts a realistic trend in oil and water production. Because of gas metering issues in the facilities, it was concluded that the recorded gas volumes had inherent errors. Hence, apart from the oil rate, which was used as the constraint in the saturation match, preference was given to matching the water cut using the traditional approach.

Fig. 6
figure 6

Field production performance profiles

Description of the key uncertainties

The description of the key uncertainties that were changed during the history match is given below:

  1. (1)

    Relative permeability curves and critical water saturation: Relative permeability is highly uncertain in this study because there are no core measurements. Measurements from an analogous reservoir were carefully tuned and used to obtain a history match. Initially, the analog data were not favorable to the flow of water observed in some of the wells. A multiplier of 1.25 was applied to the water relative permeability to match the water cut (initial water saturation of 19.5 %). The reservoir appears to be mixed wet, and the water saturation end point has a wide range. The critical water saturation was increased by factors of 2 and 1.5 around wells AK-06 and AK-07 to match the water breakthrough time (WBT). The connate water saturations obtained from the static model were relatively low.

  2. (2)

    Vertical/horizontal permeability (KV/KH): Among the parameters that remain uncertain is the ratio of vertical to horizontal permeability. Since the reservoir is undersaturated and no horizontal well penetrates it, the impact of this parameter was expected to be minimal. However, the dominance of water influx (bottom water) and the proposed horizontal wells necessitated carrying this ratio forward for uncertainty analysis. PERMZ was reduced globally by a multiplying factor of 0.01. This improved the water production in AK-01 by enhancing the lateral flow of water in this area, which allowed the water cut to be matched.

  3. (3)

    Fault transmissibility: A fault multiplier of 2 was applied to the fault located between AK-01 and AK-04 to match the water influx around the AK-01 well. The transmissibility of another fault towards the crest of the reservoir was reduced by 15 % because of the excessive gas being produced in the model by well AK-05.

  4. (4)

    Aquifer pore volume: The pore volume used for the numerical aquifer model was varied until a satisfactory pressure and saturation match was obtained.

Results from the history match

Figure 7 shows the results of the pressure, GOR, water cut, and cumulative water matches at field level. In all the plots, the dotted points indicate the observed data, while the continuous lines indicate the simulated values. The pressure match is shown in black, the water cut and cumulative water produced in blue, and the GOR in red. A good pressure match was obtained, an indication that the energy within the reservoir was adequately captured. Field-wise, the water cut match was considered satisfactory for further analysis. Figure 8 shows the distribution of the residual oil saturation at the end of history in some layers of the reservoir. It is obvious that beyond the life of the existing wells, substantial residual oil saturation is left behind. This oil is found between the existing wells as well as in the intra-reservoir shale breaks. Infill drilling evaluation is therefore a good reservoir management option for this case study.

Fig. 7
figure 7

Field pressure and saturation match

Fig. 8
figure 8

Distribution of the residual oil saturation at the end of history match

Infill wells selection and placement

In this study, the placement of wells in different layers and the determination of the optimum horizontal length to be perforated were done manually, guided by layer oil saturation maps and practice within the Niger Delta. To achieve a 35 % recovery factor, a number of production schemes were considered: all vertical wells, all horizontal wells, or a combination of horizontal and vertical wells. This sensitivity study was done manually. The well placement was optimized for each scheme by placing the wells one at a time and running the simulation for 18 years. For the horizontal wells, the vertical placement and optimal lateral length were evaluated assuming horizontal lengths of 700 and 1000 m. The assumed inter-well spacing was 400 m. In all the simulations, WBT and recoverable oil were the responses.

Table 1 compares the performance of the horizontal and vertical wells across the reservoir sub-regions as well as the effect of the lateral length of the horizontal wells on production and water breakthrough time. The recommendation to drill or not was based strictly on the number of wells, WBT, and cumulative oil recovery. The results show that, in all the regions recommended for infill drilling, horizontal wells with 1000 m lateral lengths give higher productivity.

Table 1 Summary of wells performance for optimum well selection and placement

Drilling a horizontal well with a lateral length of 700 m in Region 1 produced an additional 5 MMSTB, the same quantity obtained from four vertical wells in the same region. This region can be said to be almost depleted because the simulation results indicated no significant difference in additional reserves between the 1000 and 700 m horizontal well lengths. Considering additional reserves, a horizontal well with a length of 700 m is optimum in Regions 1 and 3. However, all wells (vertical and horizontal) experienced WBT one month after the start of production; hence, infill drilling in Regions 1 and 3 was not a viable option.

The horizontal wells were observed to be more efficient producers than the vertical wells at the target rate simulated in Regions 2 and 4. Drilling two horizontal wells, each with a lateral length of 1000 m, in Regions 2 and 4 gives more recovery and delays WBT. Figure 9 shows the incremental production and compares well performance based on well number and type. The "8 vertical wells" case is the total number of vertical wells drilled and completed in the two selected regions of the reservoir, with four wells each in Regions 2 and 4. Likewise, a total of four horizontal wells was drilled, with two horizontal wells drilled and completed in each of the two regions.

Fig. 9
figure 9

Comparison of incremental production from vertical and horizontal wells

The simulation results show better performance with four horizontal wells than with eight vertical wells. To determine the optimal number of infill wells required, three horizontal wells were also drilled and simulated: first with two completed in Region 2 and one in Region 4, and then with one horizontal well completed in Region 2 and two in Region 4. Region 4 was found to be more productive than Region 2, hence the ratio of horizontal wells simulated in Regions 2 and 4 is 1:2. A significant difference in additional reserves was observed when compared with the four horizontal wells. Therefore, all subsequent analysis was based on production results from the four wells.

Influence of well length

The horizontal well lengths examined were 700, 1000, and 1200 m. The simulation was performed at a constant flow rate of 1500 stb/day. Figure 10 shows the sensitivity of incremental production to the horizontal lateral length. As shown in Fig. 10, there is no significant difference in additional recovery between the 1000 and 1200 m horizontal lengths. However, a horizontal length of 1000 m shows a marked difference in additional recovery compared with the 700 m lateral length. Based on this analysis, two horizontal wells of 1000 m lateral length each were recommended for drilling and evaluation in reservoir Regions 2 and 4. The new reservoir model that included the infill wells was then used to quantify the uncertainty associated with the development concept.

Fig. 10
figure 10

Sensitivity of additional recovery to different lateral length of the horizontal well

Fig. 11
figure 11

Experimental runs showing the degree of uncertainty in the MM Field (Dark line represents the base case run)

Uncertainty quantification

The following sections describe in detail the workflow adopted in this study to assess the uncertainty associated with infill drilling.

Screening analysis using Plackett–Burman design (PBD)

The essence of this exercise was to minimize the number of simulations. All ten uncertain factors were subjected to screening using PBD to select the "heavy hitters" for response surface development. Table 2 shows all the parameters and their ranges in terms of multipliers on the base case model. The multipliers were arrived at after a series of sensitivity runs studying the deflection of the response curves from the base case position; the base case value was either decreased or increased using a multiplier, and through this process the parameter ranges were determined.

Table 2 Experimental range in terms of multipliers on the base case uncertain parameters

Using the parameters in Table 2, the PBD matrix shown in Table 3 was constructed. The "+1" and "−1" entries correspond to the high and low values of the variables. The simulations were run for the forecast reserves, and the response value was recorded at 15 and 30 years of forecast to avoid missing any impactful factor. In all the runs, the original history match was preserved.

Table 3 PB design table for 10 parameters
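As an illustration only (not the design tool used in this study), the Python sketch below constructs a 12-run Plackett–Burman matrix for ten two-level factors by cyclically shifting the standard 11-entry generator row and appending a final row of −1s; the factor labels other than the five reported heavy hitters are hypothetical placeholders, not the Table 2 parameters.

```python
import numpy as np

# Standard Plackett-Burman generator row for a 12-run design (11 columns).
GENERATOR_12 = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

def plackett_burman_12(n_factors):
    """Build a 12-run Plackett-Burman design for up to 11 two-level factors."""
    assert 1 <= n_factors <= 11
    rows = [np.roll(GENERATOR_12, shift) for shift in range(11)]  # cyclic shifts
    design = np.vstack(rows + [-np.ones(11, dtype=int)])          # final all -1 row
    return design[:, :n_factors]                                  # keep needed columns

# Ten screening factors; names beyond OVISC/SWI/PERMX/PORO/PERMZ are illustrative.
factors = ["PORO", "PERMX", "PERMZ", "SWI", "OVISC",
           "AQUIPV", "FAULTTRAN", "KRW", "ROCKC", "WOC"]
pb_matrix = plackett_burman_12(len(factors))
print(pb_matrix.shape)   # (12, 10): 12 simulation runs, coded levels -1/+1
```

Each run of this coded matrix corresponds to one simulation with all factors set simultaneously to their high or low multipliers, which is what keeps the screening cost at 12 runs instead of one-at-a-time sensitivities.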

Figure 11 shows the deviation of the different runs from the base value (thick black line). The plot indicates the presence of uncertainties in the production forecast, which can seriously affect the optimal development strategy. The main effects were computed with analysis of variance (ANOVA). The relative contributions of the main factors are presented as Pareto charts in Fig. 12. The five "heavy hitters" identified after 15 years of forecast are OVISC, SWI, PERMX, PORO, and PERMZ, whereas after 30 years of forecast only four "heavy hitters", namely OVISC, SWI, PERMX, and PORO, were identified at the 95 % confidence level.

Fig. 12
figure 12

Pareto chart showing key parameters impacting reserves after a 15 years forecast, b 30 years forecast
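The main effects behind such a Pareto chart can be computed, for any two-level screening design, as the difference between the mean response at the high level and the mean response at the low level of each factor. The sketch below uses placeholder responses, not the study's simulation results:

```python
import numpy as np

# 12-run Plackett-Burman design (same construction as in the previous sketch).
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
design = np.vstack([np.roll(gen, s) for s in range(11)] + [-np.ones(11, int)])[:, :10]

def main_effects(design, response):
    """Main effect of each coded (-1/+1) factor: mean(y | +1) - mean(y | -1)."""
    return np.array([response[design[:, j] == +1].mean() -
                     response[design[:, j] == -1].mean()
                     for j in range(design.shape[1])])

# Hypothetical forecast reserves for the 12 runs (placeholder values, MMstb).
reserves = np.random.default_rng(1).normal(20.0, 2.0, size=12)
effects = main_effects(design, reserves)
ranking = np.argsort(-np.abs(effects))        # ordering used in a Pareto chart
print(ranking)
print(effects[ranking])
```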

Design of experiment

Experiments are designed for different purposes. The objectives here are to gain maximum information and to build a model that best approximates the reservoir behavior at minimum cost. This study proposed a quadratic model using the four screened variables (m = 4), which requires at least P = (m + 1)(m + 2)/2 = 15 experiments. The goal is to select the best design without compromising efficiency. Consider that it is desired to use the four identified decision variables with each factor having 16 levels (discretized possible values of the parameters). The total number of possible combinations is 16^4 = 65,536. Orthogonal test designs can reduce the number of tests to 16^2 = 256; uniform design (UD), however, can reduce it to only 16 tests. This is an advantage of modern experimental designs over conventional designs such as central composite, Box–Behnken, and full factorial designs.
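For m = 4 screened variables, the minimum number of runs follows directly from the coefficient count of the full quadratic model:

$$P = \frac{(m + 1)(m + 2)}{2} = \frac{5 \times 6}{2} = 15.$$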

The UD method was used to determine the locations of sample points within the design space. The design was constructed such that each variable is divided into 16 equal levels, each comprising only one sample point. To prevent a design with poor space-filling qualities, the translational propagation algorithm (Cioppa and Lucas 2007) was modified and implemented in MATLAB.
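As a minimal sketch of this level-stratified sampling idea (one sample per level for each variable), the following Python snippet builds a Latin-hypercube-style layout; it is not the modified translational propagation algorithm used in the study, and the multiplier ranges shown are hypothetical placeholders rather than the Table 2 values.

```python
import numpy as np

def stratified_space_filling(n_levels, bounds, seed=0):
    """One sample per level for each variable: each factor's range is split into
    n_levels equal bins and each bin is visited exactly once (level permutation)."""
    rng = np.random.default_rng(seed)
    m = len(bounds)
    # For each variable, a random permutation of the level indices 0..n_levels-1.
    levels = np.column_stack([rng.permutation(n_levels) for _ in range(m)])
    # Map level indices to bin mid-points in [0, 1], then scale to each range.
    unit = (levels + 0.5) / n_levels
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + unit * (highs - lows)

# Hypothetical multiplier ranges for the four screened factors (illustrative only).
bounds = [(0.8, 1.2),   # PORO multiplier
          (0.5, 1.5),   # PERMX multiplier
          (0.9, 1.1),   # SWI multiplier
          (0.5, 2.0)]   # OVISC multiplier
samples = stratified_space_filling(16, bounds)
print(samples.shape)    # (16, 4): 16 simulation runs over 4 factors
```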

Figure 13 shows a schematic of the initial 16 sample points (red) selected by UD for a two-dimensional problem and the augmented samples (green) that were used to validate the model.

Fig. 13
figure 13

Schematics of initial 16 sample points (red) selected by UD for a two-dimensional design problem and augmented sample point (green) for model validation

Surrogate modeling

Surrogate modeling is a technique that uses sampled data to build surrogate models that can predict the output of an expensive computer experiment at untried points in the design space. Thus, how to choose sample points, how to build surrogate models, and how to evaluate their accuracy are the key issues of surrogate modeling. The use of polynomial approximation models in surrogate modeling is common; here, the sampled data are fitted by a least-squares regression technique. The accuracy of this interpolation method depends on the highest degree of the polynomials used; however, the degree has the opposite effect on the smoothness of the representative function. To balance interpolation accuracy, smoothness, and computational expense, a quadratic polynomial model was selected.

The true quadratic RSM can be written in the following form:

$$y\left( X \right) = \hat{y}\left( X \right) + \varepsilon , \quad X \in R^{m},$$
(1)

where \(\hat{y}\left( X \right)\) is the quadratic polynomial approximation and ε is the random error, which is assumed to be normally distributed with mean zero and variance \(\sigma^{2}\). The errors \(\varepsilon_{i}\) at each observation are assumed to be independent and identically distributed (iid).

The modeled quadratic RSM predictor \(\hat{y}\left( X \right)\) is defined as:

$$\hat{y}\left( X \right) = \beta_{0} + \mathop \sum \limits_{i = 1}^{m} \beta_{i} x_{i} + \mathop \sum \limits_{i = 1}^{m} \beta_{ii} x_{i}^{2} + \mathop \sum \limits_{i = 1}^{m} \mathop \sum \limits_{j \ge i}^{m} \beta_{ij} x_{i} x_{j},$$
(2)

where \(\beta_{0} , \;\beta_{i} , \;\beta_{ii} \;{\text{and}}\; \beta_{ij}\) are the unknown coefficients.

Let \(\beta \in R^{P}\) be the column vector containing these P unknown coefficients. The least-squares estimator of β is

$$\beta = (U^{\text{T}} U)^{ - 1} U^{\text{T}} y_{\text{s}} ,$$
(3)

where

$$U = \left[ {\begin{array}{*{20}c} 1 & {x_{1}^{(1)} } & \cdots & {x_{m}^{(1)} } & {x_{1}^{(1)} x_{2}^{(1)} } & \cdots & {x_{m - 1}^{(1)} x_{m}^{(1)} } & {(x_{1}^{(1)} )^{2} } & \cdots & {(x_{m}^{(1)} )^{2} } \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 1 & {x_{1}^{(n)} } & \cdots & {x_{m}^{(n)} } & {x_{1}^{(n)} x_{2}^{(n)} } & \cdots & {x_{m - 1}^{(n)} x_{m}^{(n)} } & {(x_{1}^{(n)} )^{2} } & \cdots & {(x_{m}^{(n)} )^{2} } \\ \end{array} } \right] \in R^{n \times P}$$
(4)

and \(y_{\text{s}}\) is the vector of observed responses. The fitted coefficients are \(\beta_{0} = 2.488\), \(\beta_{1} = 53.0332\), and \(\beta_{2} = -17.2437\), so the approximated response \(\hat{y}(X)\) at any untried X can be efficiently predicted by Eq. (5):

$$\hat{y}\left( X \right)\left[ {\text{MMstb}} \right] = \beta_{0} + \beta_{1} {\text{SWI}} + \beta_{2} {\text{PERMX}}^{2} .$$
(5)
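A minimal sketch of the least-squares fit described by Eqs. (2)–(4): assemble the design matrix U with intercept, linear, cross, and squared terms and solve for β. The input data below are placeholders, not the 16 simulated responses used in the study.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: intercept, x_i, x_i*x_j (i<j), x_i^2 -- the U matrix of Eq. (4)."""
    n, m = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(m)]                                 # linear terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(m), 2)]   # cross terms
    cols += [X[:, i] ** 2 for i in range(m)]                            # squared terms
    return np.column_stack(cols)

def fit_quadratic_rsm(X, y):
    """Least-squares estimate beta = (U^T U)^{-1} U^T y, via a stable solver."""
    U = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(U, y, rcond=None)
    return beta

def predict(X, beta):
    return quadratic_design_matrix(X) @ beta

# Illustrative use with 16 samples of the 4 factors and hypothetical reserves.
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(16, 4))          # coded values of the 4 factors
y = rng.normal(20.0, 2.0, size=16)               # placeholder reserves, MMstb
beta = fit_quadratic_rsm(X, y)
print(beta.shape)                                # (15,) = (m+1)(m+2)/2 for m = 4
```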

The cross plot of the model predictions against the actual experimental values is shown in Fig. 14. On this plot, the vast majority of the points lie along the y = x line, showing that the predictions closely reproduce the experimental data.

Fig. 14
figure 14

Comparison of the actual experimental value and model predicted values

Evaluation of approximation models

Relative error (RE) and root mean squared error (RMSE) were used to evaluate the error of the approximation model at test points other than those used to build the model. The test points comprise ten new sample points within the sample space, shown as the green dotted points augmented onto the original design in Fig. 13.

The relative error is

$$\bar{e} = \frac{1}{{n_{\text{t}} }}\mathop \sum \limits_{i = 1}^{{n_{t} }} e^{(i)} , \quad {\text{where}}, \quad e^{(i)} = \frac{{\hat{y}_{\text{t}}^{(i)} - y_{\text{t}}^{(i)} }}{{y_{\text{t}}^{(i)} }}$$
(6)

\(n_{\text{t}}\) is the number of test points; \(y_{\text{t}}^{(i)}\) and \(\hat{y}_{\text{t}}^{(i)}\) are the true value and the predicted value corresponding to the ith test point, respectively.

The RMSE is defined by

$$\sigma_{\text{e}} = \sqrt {\frac{{\mathop \sum \nolimits_{i = 1}^{{n_{t} }} (e^{(i)} )^{2} }}{{n_{\text{t}} }}}.$$
(7)
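A small sketch of how Eqs. (6) and (7), together with the AAPRE reported in Table 4, can be evaluated at the test points; the numerical values shown are placeholders, not the study's data.

```python
import numpy as np

def relative_errors(y_true, y_pred):
    """Point-wise relative errors e^(i) of Eq. (6)."""
    return (y_pred - y_true) / y_true

def mean_relative_error(y_true, y_pred):
    return relative_errors(y_true, y_pred).mean()

def rmse_of_relative_error(y_true, y_pred):
    """sigma_e of Eq. (7): root mean square of the relative errors."""
    e = relative_errors(y_true, y_pred)
    return np.sqrt(np.mean(e ** 2))

def aapre(y_true, y_pred):
    """Average absolute percentage relative error, in percent."""
    return 100.0 * np.mean(np.abs(relative_errors(y_true, y_pred)))

# Illustrative check with placeholder test-point values.
y_true = np.array([18.2, 19.5, 20.1, 21.3, 22.0])
y_pred = np.array([18.4, 19.3, 20.2, 21.5, 21.8])
print(mean_relative_error(y_true, y_pred),
      rmse_of_relative_error(y_true, y_pred),
      aapre(y_true, y_pred))
```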

Table 4 shows the results obtained from the statistical error analysis. The low RMSE value (1.05) indicates a good model. The near-zero average absolute percentage relative error (AAPRE = 0.76 %) indicates a low absolute deviation from the experimental values and hence a reliable correlation.

Table 4 Statistical error analysis

Markov Chain Monte Carlo (MCMC) simulation

To quantify the uncertainty in the production forecast, Bayesian parameter estimation conditioned on historical information was carried out using the WinBUGS software (Spiegelhalter et al. 2000). This requires a prior distribution for the parameters and the likelihood function. WinBUGS has a built-in likelihood function for uncensored and censored normal data, which was used in these simulations.

The parameters are estimated by updating the prior distribution with the likelihood to compute the posterior distribution using Bayes' theorem:

$$p\left( \theta \mid D \right) = \frac{f\left( D \mid \theta \right)\varepsilon \left( \theta \right)}{\int f\left( D \mid \theta \right)\varepsilon \left( \theta \right){\text{d}}\theta },$$
(8)

where \(p\left( \theta \mid D \right)\) is the posterior distribution of the parameters θ, \(f\left( D \mid \theta \right)\) is the likelihood function, and ɛ(θ) is the prior distribution of θ. The denominator is a normalizing constant that scales the posterior so that the area under the posterior pdf equals one.

The stochastic parameters \(\beta_{0} ,\; \beta_{1} ,\; \beta_{2}\) and τ were given proper prior distributions, while a logical expression for σ allows the standard deviation to be estimated. To check the convergence of the MCMC simulations, multiple chains with divergent starting points were run using a derivative-free adaptive rejection sampling algorithm.

Figure 15 shows the trace plots for the different parameters. The overlapping of the chains indicates that reasonable convergence was achieved after 11,000 iterations. To obtain samples for posterior inference, the Monte Carlo error was calculated for each parameter. An additional 10,000 simulations were required to obtain a Monte Carlo error less than 5 % of the sample standard deviation for all parameters.
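As an illustration of the MCMC approach only (the study itself used WinBUGS with adaptive rejection sampling), the sketch below runs a random-walk Metropolis sampler on an Eq. (5)-type regression with vague priors, uses two chains with divergent starting points, and applies a naive version of the Monte Carlo error criterion; all data, priors, and tuning values are assumptions, not the authors' WinBUGS setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data standing in for the 16 design runs: SWI and PERMX multipliers
# and reserves generated from an Eq. (5)-type relationship plus noise.
n = 16
swi = rng.uniform(0.15, 0.30, n)
permx = rng.uniform(0.5, 1.5, n)
reserves = 2.5 + 53.0 * swi - 17.2 * permx**2 + rng.normal(0.0, 0.5, n)

def log_posterior(theta):
    """Normal likelihood for reserves = b0 + b1*SWI + b2*PERMX^2, with vague priors."""
    b0, b1, b2, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * swi + b2 * permx**2
    log_lik = -0.5 * np.sum(((reserves - mu) / sigma) ** 2) - n * np.log(sigma)
    log_prior = -0.5 * (b0**2 + b1**2 + b2**2) / 1.0e6 - 0.5 * log_sigma**2 / 10.0
    return log_lik + log_prior

def metropolis(start, n_iter=11000, step=0.05):
    """Random-walk Metropolis sampler (step size would need tuning in practice)."""
    theta = np.asarray(start, dtype=float)
    lp = log_posterior(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.normal(size=theta.size)
        lp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

# Two chains with divergent starting points, as in the convergence check above.
chain_a = metropolis([0.0, 0.0, 0.0, 0.0])
chain_b = metropolis([5.0, 60.0, -25.0, 1.0])
post = np.vstack([chain_a[1000:], chain_b[1000:]])      # discard burn-in

# Naive Monte Carlo error check against the "< 5 % of sample std dev" criterion;
# a rigorous check (as in WinBUGS) would account for autocorrelation in the chains.
mc_error = post.std(axis=0) / np.sqrt(post.shape[0])
print(mc_error < 0.05 * post.std(axis=0))
```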

Fig. 15
figure 15

History plots showing two chains that are overlapped, an indication of convergence

Table 5 shows the posterior summaries of the regression coefficients and the variance of the regression model. The posterior means and medians of the coefficients of PERMX and SWI indicate that they are important variables. Moreover, the posterior means of β are slightly different from the ordinary least-squares estimates (2.152, 52.58, −17.31)^T, indicating that the prior was slightly informative and imposed only a minor influence on the model parameters.

Table 5 Posterior summaries of the indicator parameters included in the Bayesian model

Figure 16 displays the posterior kernel density plots for the model parameters \(\beta_{i}\). The posterior distributions of the coefficients are normal for all the variables. The posterior medians and means of the distributions justify the inclusion of these variables in the model.

Fig. 16
figure 16

Posterior densities of the regression coefficients

Summary

  • The study objective was to evaluate infill drilling potential and quantify the uncertainty associated with infill drilling after model calibration.

  • The active energy in the reservoir was determined using material balance calculation software (MBAL), and the aquifer pore volume that matched the reservoir pressure was modeled using the Hurst–van Everdingen–Odeh radial model implemented as a numerical aquifer.

  • A saturation match was achieved at the reservoir level using the traditional history matching approach. The major responses were field pressure and water cut.

  • To locate infill positions, the reservoir was divided into four sub-regions, and infill well selection and placement were optimized using full-field numerical simulation guided by saturation and pressure depletion maps.

  • A linear design of experiments was performed to identify the key input parameters for the proposed quadratic surrogate model implemented in MATLAB.

  • A modern experimental design method (uniform design) using a translational propagation algorithm was used to sample the variables for surrogate development.

  • A full Bayesian treatment of the historical data using the MCMC technique was used to estimate the uncertainty. Important uncertainty percentiles (P2.5, P10, P50, P90, and P97.5) were obtained.

Conclusion

This study showed that an adequately guided numerical simulation technique is suitable for evaluating infill location, well selection, and placement where delivery time is a constraint on investment decisions during reservoir development and management. Apart from wider coverage of the uncertainty domain and economic viability, the use of a modern experimental design method such as UD can offer a more reliable proxy model for uncertainty quantification, as demonstrated in this study. A full Bayesian treatment of uncertainty was performed on the available historical information, with the required computations carried out using MCMC rather than ordinary Monte Carlo simulation. The posterior summaries of the parameters, alongside their uncertainties given by the P2.5, P10, P50, P90, and P97.5 percentiles, were obtained. The approach used in this study and the resulting uncertainty percentiles can serve as a framework for evaluating similar underdeveloped reservoirs involving large uncertainties.