Stochastic optimization strategies applied to the OLYMPUS benchmark

In this work, we discuss the application of stochastic optimization approaches to the OLYMPUS case, a benchmark challenge which seeks the evaluation of different techniques applied to well control and field development optimization. To that end, three exercises have been proposed, namely, (i) well control optimization; (ii) field development optimization; and (iii) joint optimization. All applications were performed on the so-called OLYMPUS case, a synthetic reservoir model with geological uncertainty provided by TNO (Fonseca 2018). Firstly, in the well control exercise, we successfully applied an ensemble-based approximate gradient method in a robust optimization formulation. Secondly, we solve the field development exercise using a genetic algorithm framework designed with special features for the problem of interest. Finally, in order to evaluate further gains, a sequential optimization approach was employed, in which we run one more well control optimization based on the optimal well locations. Even though we utilize relatively well-known techniques in our studies, we describe the necessary adaptations to the algorithms that enable their successful application to real-life scenarios. Significant gains in the expected net present value are obtained: in exercise (i), a gain of 7% with respect to reactive control; in exercise (ii), a gain of 660% with respect to an initial well placement based on an engineering approach; and in (iii), an extra gain of 3% due to an additional well control optimization after the well placement optimization. All these gains are obtained at an affordable computational cost via the extensive utilization of high-performance computing (HPC) infrastructure. We also apply a scenario reduction technique to exercise (i), obtaining gains similar to those of the full-ensemble optimization at a substantially lower computational cost.
In conclusion, we demonstrate how the state-of-the-art optimization technology available in the model-based reservoir management literature can be successfully applied to field development optimization via the conscious utilization of HPC facilities.


Introduction
Well control and field development optimization (or well placement) aim to maximize a given performance function (e.g., net present value (NPV), recovery factor) of oil and gas reservoirs by means of defining optimal well parameters with respect to, respectively, the well settings (rate, pressure) and the well locations/configurations. To that end, computer-assisted techniques based on optimization algorithms have been developed to support the decision-making process. Optimization algorithms, by construction, are iterative processes that require many reservoir simulations and, hence, are notably computationally expensive. Such simulations can take from several hours to days. Additionally, due to the large uncertainty surrounding the exploitation of subsurface natural resources, the robust optimization framework [9] has been largely employed [15] to account for uncertainty via the utilization of multiple geological realizations. Clearly, any measure based on multiple models requires even more computational power in order to evaluate the responses of the ensemble of realizations. Therefore, efficient optimization strategies are of utmost importance to enable a timely decision-making process.
(Corresponding author: V. L. S. Silva, vinicius.luiz@petrobras.com.br. Extended author information available on the last page of the article.)
Even though a large body of the literature is dedicated to discussing the developments and applications of optimization techniques in both well control and well placement studies, there is no clear indication of which type of optimization strategy is more appropriate to each study. In order to further understand the applicability of the different optimization methods to the different types of optimization studies, the OLYMPUS case [10] was deployed to serve as a benchmark for evaluating and comparing optimization strategies.
Relevant aspects of the solution of optimization problems include the nature of the parameters (i.e., continuous or discrete), the utilization of derivative information, and methods to reduce computational cost. Derivative-based optimization algorithms usually require fewer iterations to find an optimal solution; however, the efficient and accurate computation of derivative information of the objective function is a challenging task. The most efficient accurate technique to compute the objective function derivative information is the adjoint method [17]. But, largely due to the high complexity to develop an adjoint model, stochastic ensemble methods [6,11] have gained considerable attention over the past decade. It has been shown that ensemble-based optimization has been successfully applied to well control optimization problems, also for real field cases [22,23].
On the other hand, gradient-based optimization methods require continuous parameters, and because the well position in (mainly commercial) reservoir simulators is usually defined in terms of its discrete coordinates in the numerical grid, the usage of derivative-based optimization algorithms is hampered. This can be overcome by re-parameterization [12,29,30]. Nevertheless, derivative-free optimization algorithms [25] are often applied to well placement studies [13].
Ideally, both well control and well placement problems should be handled concomitantly [4]. However, due to the different nature of the parameters, one can alternatively solve the two optimization problems sequentially. A fully joint optimization could be achieved if, for instance, a fully derivative-free optimization strategy were used for both the well control and well location problems, or if the discrete parameters were re-parameterized into continuous ones. Nonetheless, the former option is usually expensive, since many simulation runs are required, and the latter re-parameterization is usually not trivial.
Regarding computational cost reduction, two alternatives are usually employed: the reduction of the reservoir simulation cost by means of the construction of reduced-order models [5,7,21], and scenario reduction, a strategy to select representative scenarios from the full ensemble of geological realizations [1,3,19,27,28].
In this paper, we employ different optimization techniques to address the well control and well placement problems. For the well control problem, we apply a gradient-based optimization approach relying on an ensemble-based approximate gradient [11]. In the well placement exercise, we employ a genetic algorithm [20]. For the joint optimization, we employ a sequential strategy in which, firstly, a well placement optimization is performed and, based on the optimal well locations and drilling sequence, a well control optimization is applied subsequently. Although the sequential approach could be repeated iteratively until convergence of the objective function for both well controls and optimal positions, there is no theoretical work showing that this sequential approach is, indeed, convergent [26]. Furthermore, we also apply a scenario reduction technique [28] to alleviate the computational cost of the optimization process.
The remainder of this paper is structured as follows. The next section provides a short description of the OLYMPUS benchmark model and the details of the challenge necessary to support the discussion in this work. Section 3 describes the mathematical framework utilized in the well control and field development exercises. The specializations of the algorithms for both well control and well placement are also discussed. The results for both exercises, as well as for the joint optimization, are presented in Section 4. A detailed discussion of the results and possible further improvements to the methodology is presented together with the description of the results. Finally, concluding remarks are provided in Section 5.

Brief description of the OLYMPUS case
The OLYMPUS case comprises a two-phase oil-water reservoir model developed by TNO for benchmarking well control and field development optimization methodologies [10]. Inspired by a virgin oil field in the North Sea, the OLYMPUS reservoir model extends 9 km × 3 km in area and 50 m in thickness. The discretized grid contains 16 layers and 192,750 active cells. The geological description is based on four different facies, from which the reservoir properties were obtained by geostatistical methods (more details in [10]).
The geological uncertainty is described by an ensemble of 50 realizations. The uncertain properties considered are porosity, permeability, net-to-gross ratio, and connate water. Figure 1 shows layer 6 and 10 permeability fields from two different realizations.
The reservoir performance in the OLYMPUS challenge is measured by the net present value (NPV) function, given by

J(m, u) = Σ_{n=1}^{N_t} V(m, u, t_n) / (1 + b)^{t_n/τ},     (1)

where m represents the N_m-dimensional vector of uncertain model parameters that describe the reservoir, and u represents the N_u-dimensional vector of decision variables, which can include the targets for the well control optimization problem (see Section 3.1) and/or the well placement parameters (see Section 3.3) for the field development optimization exercise, such that

u = [u_wc^T, u_fd^T]^T,     (2)

where u_wc represents the vector of well control variables, which may include rates, flowing pressures, or valve settings, and u_fd indicates the field development decision variables, such as well location and type, drilling schedule and sequence, etc. Additionally, t_n indicates the time of the nth time interval, N_t is the total number of time intervals, b is the annual discount factor, τ is the reference time interval, and V is the cash flow, defined as

V(m, u, t_n) = r_op Q_op(m, u, t_n) − r_wp Q_wp(m, u, t_n) − r_wi Q_wi(m, u, t_n) − C_P(u, t_n) − C_D(u, t_n),     (3)

where Q_op(m, u, t_n), Q_wp(m, u, t_n), and Q_wi(m, u, t_n) represent, respectively, the total oil production, the total water production, and the total water injection over the nth time interval, and r_op, r_wp, and r_wi are, respectively, the oil revenue and the water production and injection costs. The platform investment cost is given by C_P(u, t_n), whereas C_D(u, t_n) represents the total well drilling cost related to the decision variable vector u. For the well control optimization problem, C_P and C_D are neglected since they are constant for all well target combinations evaluated by our methodologies. Further details on the reservoir performance function can be found in [10]. The operational constraints are listed in Table 1. We emphasize that the restrictions on the maximum number of wells and maximum dogleg severity apply only to the field development optimization problem.
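As an illustration, the discounted NPV of Eq. 1 can be sketched in a few lines of Python; the prices, costs, and discount factor below are placeholder values, not the official OLYMPUS figures:

```python
import numpy as np

def npv(q_op, q_wp, q_wi, dt_days,
        r_op=45.0, r_wp=6.0, r_wi=2.0,
        b=0.08, tau_days=365.0,
        c_platform=0.0, c_drilling=0.0):
    """Discounted NPV over N_t time intervals (sketch of Eqs. 1-3).

    q_op, q_wp, q_wi: per-interval oil produced, water produced, and
    water injected; dt_days: interval lengths in days. All economic
    parameters are illustrative placeholders.
    """
    q_op, q_wp, q_wi = map(np.asarray, (q_op, q_wp, q_wi))
    t = np.cumsum(np.asarray(dt_days, dtype=float))  # t_n, end of interval n
    # cash flow V per interval (Eq. 3), with constant well/platform costs
    cash = r_op * q_op - r_wp * q_wp - r_wi * q_wi - c_platform - c_drilling
    # annual discounting (Eq. 1)
    discount = (1.0 + b) ** (t / tau_days)
    return float(np.sum(cash / discount))
```

For the well control exercise, `c_platform` and `c_drilling` stay at zero, mirroring the neglect of C_P and C_D discussed above.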

Methodology
We aim to solve a problem of the form

max_{u ∈ U} J(u),     (4)

where J(u) is the objective function and U indicates the feasible region. We recall that u may contain well control variables, field development decision variables, or both, depending on the type of optimization problem we aim to solve. Next, we discuss the formulations employed in the present work.

Well control optimization
For the well control optimization exercise, we solve Eq. 4 considering the field development variables fixed and adjusting only the well targets. As stated in [10], the objective function is the expected NPV over a set of 50 geological realizations which describe the reservoir uncertainty. Also, the feasible region U is defined based on the operational constraints given by [10] and repeated in Table 1, which include the maximum platform liquid production rate, the maximum oil production rate for each producing well, the maximum water injection rate for each injection well, and the limits on bottom-hole pressure for both producer and injector wells. We employ a stochastic optimization method based on approximate gradients. The gradient is computed by an ensemble-based approximation, following the ideas introduced by [11]. Those ideas were implemented and adapted in an in-house software tailored to take advantage of high-performance computing (HPC) environments [23,24].
Two main steps compose our modified ensemble-based optimization method [23,24], which resembles the doubly smoothed StoSAG of [11]. First, for an arbitrary control vector u, an approximate gradient is computed as

g_u = (1 / (N_s N_p)) Σ_{j=1}^{N_s} Σ_{k=1}^{N_p} (û_{k,j} − u) [J(û_{k,j}, m_j) − J(u, m_j)],     (5)

where J represents the NPV function, û_{k,j} is a sample from a multi-Gaussian distribution with mean u and an arbitrary covariance C_u, N_p is the number of perturbation points used for approximating the gradient at u, and N_s is the number of geological realizations used to describe uncertainty in the robust optimization framework. At every iteration ℓ = 1, 2, …, the gradient approximation is performed for the current iterate u_ℓ. Then, a line search step is performed using d = C_u · g_u as the search direction, attempting to find a new iterate which improves the objective function value. Instead of performing a backtracking procedure for the line search, in our implementation, several control sets are evaluated simultaneously. We evaluate as many control sets as allowed by the available computational resources. For instance, if we can run 100 simulations in parallel and N_s = 50, we will evaluate two different control sets during the line search. The control sets are uniformly distributed along the direction of the line search, respecting the maximum step size, which is one-tenth of the normalized search space. More details can be found in [23,24]. We have been able to run 50 simulations in parallel in the well control exercise performed here. If a new, better iterate cannot be found after a line search, a new set of N_p perturbed controls is sampled from a normal distribution N(u, C_u). Then, Eq. 5 is evaluated again to provide a new gradient approximation, and the search is repeated. The algorithm is terminated after three consecutive unsuccessful line searches.
The two-step procedure is repeated until convergence, which is measured by the tolerance on the relative change in the objective function value and in the controls. The setup for our implementation of the ensemble-based method is based on our previous experience with this method in similar problems, meaning that we did not perform further experimentation to tune our modified approximate gradient technique specifically for the OLYMPUS case.
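Assuming a callable simulator J(u, m), the two-step procedure (approximate gradient followed by line search) can be sketched as below; the serial loops stand in for the parallel evaluations performed on the HPC cluster, and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_gradient(J, u, C_u, models, n_p):
    """StoSAG-style approximate gradient (sketch of Eq. 5).

    J(u, m): objective value of control vector u under realization m.
    A fresh set of n_p Gaussian perturbations is drawn per realization.
    """
    n_u = len(u)
    L = np.linalg.cholesky(C_u)
    g = np.zeros(n_u)
    for m in models:
        J0 = J(u, m)
        for _ in range(n_p):
            du = L @ rng.standard_normal(n_u)   # (û - u) ~ N(0, C_u)
            g += du * (J(u + du, m) - J0)
    return g / (len(models) * n_p)

def line_search(J_exp, u, d, max_step, n_points):
    """Evaluate n_points control sets uniformly spaced along direction d
    (in the HPC setting these would run simultaneously)."""
    best_u, best_val = u, J_exp(u)
    for a in np.linspace(max_step / n_points, max_step, n_points):
        cand = u + a * d
        val = J_exp(cand)
        if val > best_val:
            best_u, best_val = cand, val
    return best_u, best_val
```

In the actual implementation, the outer loop alternates gradient approximation and line search until three consecutive unsuccessful line searches occur.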

Scenario reduction
Dealing with several reservoir models to characterize uncertainty in a robust optimization framework is a challenging task, particularly in terms of the computational cost. Even though using the full set of models leads to the maximum representativeness in terms of the known geological uncertainty, techniques for scenario reduction can be considered to make the computational cost more tractable. In that sense, we applied a scenario reduction approach based on minimizing the total weight of the discarded models, following the ideas implemented by [1,16,23,24,28].
In our approach, one must choose a priori the number of selected models (N_s) and specify which properties/features should drive the selection. The models are selected based on their similarities, measured in terms of the distance between them, given by

d_{m,n} = ‖ω_m − ω_n‖,     (6)

where d_{m,n} is the distance between two arbitrary models m and n, and ω_m and ω_n are, respectively, the property/feature vectors of models m and n. In our case, ω is composed of fault transmissibility multipliers, cumulative oil production, cumulative water production, cumulative water injection, original oil in place, recovery factor, NPV, and the mean and standard deviation of each grid property. The grid properties include horizontal and vertical permeabilities, porosity, and net-to-gross ratio. To avoid scaling issues, the vector ω of each model is normalized based on the average and the standard deviation among all models. We define the discard-cost function as

C(S) = Σ_{n ∈ D} min_{m ∈ S} d_{m,n},     (7)

where S and D are, respectively, the subsets of selected and discarded models. The best scenario reduction is found by minimizing C(S), i.e., S* = arg min_S C(S). In this work, we use a genetic algorithm (GA) to solve this optimization problem, as proposed by [2]. Because the objective function evaluation is inexpensive, we can afford to use a GA with a large population size. Here, we use a population of 1000 individuals over 100 generations.
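A minimal sketch of the selection step is shown below; for simplicity, the GA of [2] is replaced by a random-restart search over candidate subsets, which is enough to illustrate the normalized features, the distance of Eq. 6, and the discard-cost criterion of Eq. 7:

```python
import numpy as np

def discard_cost(dist, selected):
    """C(S): sum over discarded models of the distance to the
    nearest selected model (Eq. 7)."""
    n = dist.shape[0]
    discarded = [i for i in range(n) if i not in selected]
    return sum(min(dist[i, j] for j in selected) for i in discarded)

def select_models(features, n_s, n_iter=2000, seed=0):
    """Pick a subset S of size n_s minimizing C(S).

    Features are normalized per column (zero mean, unit std), as in the
    text, before computing Euclidean distances. The random-restart loop
    is a stand-in for the GA used in the paper.
    """
    f = np.asarray(features, dtype=float)
    f = (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-12)
    dist = np.linalg.norm(f[:, None, :] - f[None, :, :], axis=2)
    rng = np.random.default_rng(seed)
    n = len(f)
    best, best_cost = None, np.inf
    for _ in range(n_iter):
        s = set(rng.choice(n, size=n_s, replace=False).tolist())
        c = discard_cost(dist, s)
        if c < best_cost:
            best, best_cost = s, c
    return sorted(best), best_cost
```

Because C(S) is cheap to evaluate (no simulation involved), even an exhaustive or population-based search over subsets is affordable, which is why a large GA population is viable.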

Field development optimization
The field drainage network optimization is a very challenging problem due to the large number of decision variables, the well placement constraints, and the non-linearity of the reservoir response. Many researchers have published on this problem over the past years, most of them using optimization routines coupled to reservoir simulation models. However, there is a lack of robust optimization tools ready to be applied by asset teams in real field development projects. For the OLYMPUS field development challenge, we also used an in-house software [18]. This tool uses the reservoir simulator as the evaluation function and is based on a genetic algorithm for the simultaneous optimization of the quantity, location, and trajectory of producing and injection wells [8]. It deals with real well placement problems with arbitrary trajectories, complex models, linear and nonlinear constraints, compositional and black-oil models, and multiple geological realizations.
As indicated in Fig. 2, the well parameterization is defined by 8 decision variables: 3 variables for the starting point (I, J, K coordinates of the heel point), 3 variables for the ending point (I, J, K coordinates of the toe point), one variable that defines the type of well (oil producer, gas producer, water injector, gas injector, or water-alternating-gas (WAG) injector), and a binary variable that defines the status of the well (active or inactive).
Due to this parameterization, the number of decision variables can be large. For example, if one considers an optimization problem with a maximum of 20 wells, there are 160 variables to be optimized, demanding thousands of simulation runs for the optimization process. However, this parameterization allows the simultaneous optimization of well location, trajectory, type, and number of wells. The computational cost can grow significantly when several model realizations are applied. Such a task becomes feasible via a distributed computing environment running multiple simulations at the same time.
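A hypothetical encoding of this 8-variables-per-well parameterization (names and layout are ours, not necessarily those of the in-house tool) could look like:

```python
from dataclasses import dataclass

WELL_TYPES = ("OIL_PROD", "GAS_PROD", "WATER_INJ", "GAS_INJ", "WAG_INJ")

@dataclass
class WellGenes:
    """One well = 8 decision variables, as in the parameterization of Fig. 2."""
    heel: tuple   # (I, J, K) coordinates of the heel point
    toe: tuple    # (I, J, K) coordinates of the toe point
    wtype: int    # index into WELL_TYPES
    active: int   # 0 = inactive, 1 = active

def decode(chromosome, max_wells):
    """Split a flat chromosome of 8*max_wells integers into active wells."""
    assert len(chromosome) == 8 * max_wells
    wells = []
    for w in range(max_wells):
        g = chromosome[8 * w: 8 * (w + 1)]
        wells.append(WellGenes(heel=tuple(g[0:3]), toe=tuple(g[3:6]),
                               wtype=g[6], active=g[7]))
    return [w for w in wells if w.active]
```

With `max_wells = 20` the chromosome has 160 genes, matching the count given above; the `active` flag is what lets the GA optimize the number of wells implicitly.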
The tool uses a technique called Genocop III (Genetic Algorithm for Numerical Optimization of Constrained Problems) [20] to handle well placement constraints. Such restrictions include simulation grid size, minimum distance between wells, maximum well length, inactive cells, and user-defined regions in which the optimization process should not place wells.
As explained in [8], the Genocop III method uses two separate populations, but an evolution in one of them influences evaluations of individuals in the other one. The first one is called search population and consists of individuals that satisfy the linear constraints of the problem. The second population, called reference population, consists only of fully feasible individuals, i.e., individuals who satisfy all constraints (linear and nonlinear). The reference individuals are directly evaluated in the beginning of the optimization process and so are the feasible search individuals.
However, unfeasible search individuals are "repaired" prior to the evaluation. This repairing process consists of the random selection of a reference individual (R_i) with a probability of replacement P (or p_r) and the application of a crossover operator between the unfeasible search individual (S_i) and R_i, generating random numbers a from the range [0, 1] with Z = aS_i + (1 − a)R_i, until a new feasible individual (Z) is found and then evaluated. Additionally, if the evaluation of Z is better than the evaluation of R_i, this individual replaces R_i in the reference population. Also, Z can replace S_i with some probability p_r. Figure 3 illustrates the Genocop III procedure presented in Algorithm 1.
A heuristic rule states that, in many combinatorial optimization problems, an evolutionary computation procedure with a repair algorithm provides the best results when a fraction of the repaired individuals replaces their unfeasible originals. We used p_r = 0.2 as suggested in [20].
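The repair step can be sketched as follows, assuming a user-supplied feasibility test; this is an illustration of the blend-until-feasible idea, not the exact Genocop III implementation:

```python
import random

def repair(s, r, feasible, p_r=0.2, max_tries=100, rng=None):
    """Genocop III-style repair sketch.

    Blends the unfeasible search individual s with the feasible reference
    individual r, Z = a*s + (1-a)*r, redrawing a in [0, 1] until Z passes
    the feasibility test. Returns (Z, replace_s), where replace_s says
    whether Z should also replace s in the search population
    (probability p_r).
    """
    rng = rng or random.Random(0)
    for _ in range(max_tries):
        a = rng.random()
        z = [a * si + (1 - a) * ri for si, ri in zip(s, r)]
        if feasible(z):
            return z, rng.random() < p_r
    # fall back to the reference individual if no feasible blend is found
    return list(r), False
```

Since r is fully feasible, blends close to r (small a) are found quickly whenever the feasible region is convex along the segment between s and r.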
In determining the initial population, one must first inform the number of individuals of the population (n). This population will consist of x seeds provided by the user and n − x seeds generated by the method, with 25% created based on model properties such as permeability, porosity, reservoir thickness, and oil saturation, and 75% generated by the Metropolis-Hastings algorithm. This algorithm is a Markov chain Monte Carlo (MCMC) method for generating random sample sequences from a probability distribution, which is especially useful for multidimensional distributions with a large number of dimensions.
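A minimal random-walk Metropolis-Hastings sampler of the kind used for seeding the population can be sketched as below; the target distribution and step size are illustrative assumptions:

```python
import math
import random

def metropolis_hastings(logp, x0, n, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings sampler (sketch).

    logp: log-density of the target distribution (up to a constant);
    x0: starting point; n: number of samples; step: proposal std-dev.
    """
    rng = rng or random.Random(1)
    x = list(x0)
    lp = logp(x)
    samples = []
    for _ in range(n):
        # symmetric Gaussian proposal
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        lp_c = logp(cand)
        # accept with probability min(1, p(cand)/p(x))
        if math.log(rng.random()) < lp_c - lp:
            x, lp = cand, lp_c
        samples.append(list(x))
    return samples
```

In the seeding context, `logp` would encode preferences over well-gene vectors (e.g., favoring high-quality reservoir regions); here any log-density works.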
The field development optimization procedure just described can be applied even if uncertain parameters are present in the model description. The use of multiple realizations can strongly affect the results of the flow simulations. However, the computational cost increases with the number of realizations, since each individual of the population must be simulated with all the realizations.

Well control optimization
We considered the bottom-hole pressures of all the 18 wells in 3-month control intervals as control variables. That represents a total of N_u = 1440 variables. We use the modified ensemble-based optimization described in Section 3.1, with the objective function as the expected NPV (Eq. 1) and the control vector containing only u_wc. We apply a logarithmic transformation to deal with the bound constraints on the BHP controls [14]. All other constraints are imposed directly on the reservoir simulator data deck. Additionally, after the optimization process is completed, we impose the reactive control setting to shut in the producers when the water-cut becomes uneconomic, i.e., a producing well is shut in when its water-cut exceeds 88% [10].
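One common way to realize such a logarithmic transformation is a logit-type change of variables (shown below as a sketch; [14] may use a slightly different form), which turns each bounded BHP control into an unconstrained variable:

```python
import math

def to_unbounded(x, lb, ub):
    """Map x in (lb, ub) to the real line via a log-ratio transform."""
    return math.log((x - lb) / (ub - x))

def to_bounded(s, lb, ub):
    """Inverse transform: any real s maps back strictly inside (lb, ub),
    so the optimizer can take unconstrained steps in s."""
    return lb + (ub - lb) / (1.0 + math.exp(-s))
```

The optimizer then works on the transformed variables, and every trial point maps back to a BHP that automatically satisfies its bounds.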
Based on the ideas originally presented in [6], the approximate gradient computed with Eq. 5 comes from an approximation of the cross-covariance between the controls u and the objective function J(u) for each model realization. Thus, attempting to provide a better approximation for the gradient, we initially used N_p = 10 in Eq. 5 to obtain g_u for every iteration ℓ = 1, 2, … of our method. Also, despite the computational cost involved, we used N_s = 50 to preserve the best uncertainty representation. Figure 4 shows the optimal bottom-hole pressures for six selected wells. The results present the changes in the control variables found with our technique and the smoothing effect introduced by the selected covariance matrix of the control variables (C_u). Here, we assume that the controls are correlated over a period of five years using a spherical covariance function. The standard deviation for the correlation function is equal to 1% of the maximum range over the controls. We can clearly observe from Fig. 4 that our method obtained a set of controls with reasonable smoothness, while the capability to drive the controls to their upper or lower limits is preserved. Figure 5 shows the evolution of the normalized objective function during the optimization process. The algorithm reaches the optimum after 43 iterations, which resulted in 25,298 simulation runs and 506 equivalent simulations. The equivalent-simulation count depends on the computational resources available for simultaneous runs, which in our case was a maximum of 50 simultaneous jobs. In our setup, using N_p = 10 and N_s = 50 in Eq. 5, the minimum value of equivalent simulations would be 87 had we been able to run 500 parallel jobs. In each iteration, the optimization algorithm needs to perform a gradient approximation and a line search.
Each one of these two steps can be run in parallel; therefore, the minimum cost per iteration is two equivalent simulations. Figure 6 shows the cumulative distribution function (CDF) of the original strategy (where all wells are fully opened), the reactive control strategy (based on an uneconomic water-cut level of 88%), and the optimal strategy. The values of the expected NPV for these three strategies are $1.298 × 10^9, $1.495 × 10^9, and $1.600 × 10^9, respectively. These results show that the optimal strategy significantly improves the expected NPV compared with the reactive approach (an increase of 6.9%).
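The equivalent-simulation accounting used above can be sketched as a small helper, assuming each step's simulation runs are batched across the available parallel jobs:

```python
import math

def equivalent_simulations(sims_per_step, parallel_jobs, n_steps):
    """Wall-clock cost in 'equivalent simulations': each step costs
    ceil(sims / parallel_jobs) back-to-back batches."""
    return n_steps * math.ceil(sims_per_step / parallel_jobs)
```

For instance, with 50 jobs, a gradient step of N_p × N_s = 500 runs costs 10 equivalent simulations, whereas with 500 jobs it costs only one; running both the gradient and the line-search batches at full width is how the lower bound of roughly two equivalent simulations per iteration arises.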
Although our approach obtained a significant improvement in the final expected NPV in comparison with the reactive strategy, we note that the computational cost is extremely high. Even though we can make the computational burden more tractable by exploiting HPC environments and parallelizing the simulation runs as much as possible, we decided to investigate other possibilities to reduce the number of simulation runs. Firstly, we explore the effects of reducing the uncertainty representativeness by employing a scenario reduction scheme, and secondly, we analyze the impact of reducing the approximate gradient quality.

Reducing uncertainty representativeness
In order to explore other strategies to reduce the computational cost, we applied the model selection technique described in Section 3.2 to select subsets of the 50 models with N_s = 5, 10, 15, 20, 25. We performed an optimization using our method, with the same setup applied before, for each reduced-scenario subset. Even though we optimize each case with fewer reservoir models than the full set of uncertainty, at the end, for validation purposes, we run all 50 realizations with the solution obtained in order to properly characterize the optimal result in terms of the known uncertainty. Table 2 shows a comparative summary among all optimization runs with different levels of uncertainty representativeness. The results show the effectiveness of our model selection strategy combined with our robust optimization framework. The difference between the optimal expected NPV over the 50 model realizations and the optimal expected NPV over the subsets of selected models was less than 0.5% for any subset with N_s ≥ 10.
We can also see from the results in Table 2 the effect of model selection on the computational cost. Except for two cases, we reduce the computational cost by reducing the number of selected models. Additionally, Fig. 7 shows the distribution of each optimal solution in a box-plot representation.

Reducing the approximate gradient quality
We performed another optimization study revising the number of perturbed controls, i.e., we used N_p = 1 in Eq. 5 instead of N_p = 10 as presented before. The setup with N_p = 1 is more consistent with the applications presented by [6,11]. Our revised study obtained an optimal expected NPV of $1.592 × 10^9 after 8150 simulation runs and 163 equivalent simulations using 50 simultaneous jobs. The comparative results between the optimizations with N_p = 1 and N_p = 10 are depicted in Figs. 8 and 9, in terms of their evolution with respect to the number of iterations and the number of simulations, respectively. We note from the results that by reducing the number of perturbations from N_p = 10 to N_p = 1, we decrease the final objective function by 0.5% and the number of simulations required by 67.8%. Figure 8 shows that the improvement iteration by iteration obtained with N_p = 10 is better than that obtained with N_p = 1, indicating the positive effect of gradient approximation quality in delivering a better search direction. However, we note from Fig. 9 a much faster evolution for the case with N_p = 1 in terms of the number of simulations, as expected, since the individual computational cost of each iteration for N_p = 1 is substantially lower than for N_p = 10. Table 3 shows the result of the optimization using N_p = 1 in comparison with the other previously described strategies. In Fig. 10, we review the result presented in Fig. 6, adding the data corresponding to the new optimization using N_p = 1. We observe that the results with N_p = 10 and N_p = 1 are very similar with respect to their cumulative probability distributions. Table 3 also denotes the similarity between our two optimization studies in terms of the NPV and its distribution. However, once more we highlight the significant reduction in the required computational cost.
In terms of the optimal well controls obtained after optimizing with N_p = 10 and N_p = 1, Fig. 11 compares, for the same six wells shown in Fig. 4, the optimal control variables obtained with both setups. Although we can note differences between the optimal controls obtained with N_p = 10 and N_p = 1, the general similarity of the controls is clear.
Finally, in Fig. 12, we summarize the comparison of the computational cost among all strategies. Again, we verify the substantial benefit of reducing the number of perturbations for the gradient approximation.

Field development optimization
In the field development exercise, we defined a population of 30 individuals and proposed one initial well placement strategy (base case) with only vertical wells. As described in Section 3.3, the remaining 29 individuals were determined by a random process. Then, the optimization process was applied simultaneously to the 50 OLYMPUS model realizations. Following the operational constraints for the OLYMPUS case described in Table 1, we applied a trial-and-error procedure to manually build the well placement strategy for the base case.
(Fig. 11 caption: comparison between the optimal control variables for wells INJ-5, INJ-7, PROD-4, PROD-6, PROD-10, and PROD-11 for the optimizations with N_p = 10 (in red) and N_p = 1 (in gray); in each plot, the horizontal axis is the time in days and the vertical axis is the bottom-hole pressure in kPa, and the dashed black lines indicate the control bounds.)
Figure 13 shows the base-case well placement strategy with 8 water injectors (red dots) and 11 producers (black dots). As the OLYMPUS reservoir consists of two zones separated by an impermeable shale layer, Figs. 14 and 15 exhibit the base case with the production and injection wells drilled only in the upper and lower zones, respectively. The high complexity of the OLYMPUS model makes clear the limitations of manually performing the well placement task. Figure 16 shows the platform position (black dot) and the optimized well placement strategy with 6 water injectors and 11 producers. The figure shows the projection of the wells onto the first layer; however, these wells are not necessarily completed in the same grid layer. The thin lines indicate the trajectories of all drilled wells from the platform to the start of the completed segment, indicated by the thick lines. Since the reservoir has two separate zones, the optimized strategy has wells drilled and completed only in the upper zone (Fig. 17), wells only in the lower zone (Fig. 18), and some wells in both zones (Fig. 19).
The straight lines represent the well segments inside the reservoir. All cells in the model intercepted by the segments are completed. The thin lines represent the connections of the wells with the platform and are determined observing the dogleg severity restriction. When a solution proposed by the algorithm violates some restriction, the procedure described in Section 3.3 is applied: starting from another solution that respects all constraints, the crossover operator is applied until a new feasible solution is found. Figure 20 shows the evolution of the normalized objective function over the 200 generations of the optimization process. The optimal value of the expected NPV was $1.445 × 10^9. The total computational cost was 105,000 simulations, or 420 equivalent simulations, running 250 jobs simultaneously. The jump in NPV observed in generation 116 is characteristic of the genetic algorithm: during the evolutionary process, it is not uncommon that individuals with superior performance emerge in some generations.
The optimized well placement strategy proposed by our technique assumes that all wells are opened at the initial simulation date. To define the drilling sequence, we use a methodology in which a streamline simulation is carried out for each realization. The streamline simulation makes it possible to determine the influence of the injection wells on each producing well so that, through a heuristic process, the well drilling sequence that maximizes the net present value is defined.
The definition of the drilling sequence started with the streamline simulation of the 50 realizations. Applying the heuristic described above, 50 drilling alternatives were generated. By simulating all the realizations with each one of the alternatives, it was possible to determine the one that maximizes the expected NPV. Figure 21 presents the well drilling sequence for the production wells (black dots) and injection wells (red dots) obtained with this methodology. As indicated above, the expected NPV for the optimized strategy with all wells opening at the initial simulation time was $1.445 × 10^9. However, after applying the well drilling sequence the NPV was reduced by 17%, reaching $1.204 × 10^9. Figure 22 compares the total cumulative oil production for the 50 realizations using the base case well placement strategy (left), the optimized strategy (middle), and the optimized strategy with the well drilling sequence (right). Likewise, Fig. 23 compares the total water production for the same cases (base, optimized, optimized with well drilling sequence).
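The selection step, simulate every (alternative, realization) pair and keep the alternative with the best expected NPV, can be sketched as below. The `npv` function is a hypothetical stand-in for the reservoir simulator plus economics; the toy quadratic response is for illustration only.

```python
def best_alternative(alternatives, realizations, npv):
    """Return the drilling alternative with the highest NPV averaged
    over the ensemble of realizations, together with that average."""
    best, best_mean = None, float("-inf")
    for alt in alternatives:
        mean_npv = sum(npv(alt, r) for r in realizations) / len(realizations)
        if mean_npv > best_mean:
            best, best_mean = alt, mean_npv
    return best, best_mean

# Toy stand-in for the simulator's NPV response.
npv = lambda alt, r: -(alt - r) ** 2
choice, expected = best_alternative([1, 2, 3], [2, 2, 3], npv)
```

With 50 alternatives and 50 realizations this amounts to 2,500 full simulations, which is why the selection benefits from the same HPC parallelism used in the optimization itself.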
The results were obtained following the guidance provided by the OLYMPUS Field Development Challenge organizing committee, in which all wells were to be drilled from the platform position, typical of a development plan based on dry tree completions. With dry tree systems, the surface tree is installed on the production deck. Some of the most important features of field development plans based on dry tree completion systems are (i) the limited drilling capabilities due to the drilling template set below the host facility and the drilling step-out distances; (ii) the direct vertical flow path from the subsurface completion, which minimizes flow assurance issues; (iii) the production ramp-up that is limited due to the extended distances that must be drilled; and (iv) the limitation in the operating water depths.
On the other hand, field development plans based on wet tree completion systems are suitable for infield wells and long-distance tiebacks. The wells can be located at stand-alone or clustered locations, but distance from the host facility may result in reduced well performance due to pressure drop and flow assurance issues. As an advantage, drilling is segregated from production operations, and the capability to pre-drill and complete the wells allows for a quick production ramp-up once the facility is commissioned. Wet trees, or subsea trees, can be used with any hull form, but are typically used in conjunction with FPSOs (floating production storage and offloading units), which represent about 70% of global floating facilities.
Considering that some giant deepwater fields in the Gulf of Mexico and off the Brazilian and African coasts are developed using wet tree completion systems, we performed another field development optimization considering that all production and injection wells are completed with wet trees. Figure 24 shows the optimized wet tree well placement strategy with 8 water injectors and 11 producers.
The well drilling and completion cost was calculated using the equation 5,000 · Z + 10,000 · |XY| given by the OLYMPUS organizing committee. We adopted a cost of $2,000 per meter for the flowline between the platform and the wet trees. Figure 25 shows the evolution of the normalized objective function over the 200 generations of the optimization process. The total computational cost was 105,000 simulations, or 420 equivalent simulations, running with 250 jobs simultaneously, exactly the same as used in the dry tree problem. The expected NPV for the optimized strategy with all wells opening at the initial simulation time was $1.460 × 10^9. Figure 26 presents the well drilling sequence for the wet tree production (black dots) and injection wells (red dots) using the described methodology. The NPV was reduced by 8.5% after applying the well drilling sequence, reaching $1.335 × 10^9.
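The cost model can be written out as a short sketch. The formula 5,000 · Z + 10,000 · |XY| is the one given by the organizing committee; interpreting Z as the vertical drilled length and |XY| as the horizontal displacement, both in meters, is our reading of it, and the $2,000/m flowline rate is the value adopted in this work.

```python
def well_cost(z_length_m, xy_length_m):
    """Drilling and completion cost (USD) from the challenge formula:
    $5,000 per vertical meter plus $10,000 per horizontal meter.
    (Reading Z and |XY| as drilled lengths in meters is our assumption.)"""
    return 5_000.0 * z_length_m + 10_000.0 * abs(xy_length_m)

def wet_tree_well_cost(z_length_m, xy_length_m, flowline_m,
                       flowline_usd_per_m=2_000.0):
    """Wet tree well cost: drilling/completion plus the platform-to-tree
    flowline at the $2,000/m rate adopted in this work."""
    return well_cost(z_length_m, xy_length_m) + flowline_usd_per_m * flowline_m

# Hypothetical well: 2,000 m vertical, 500 m horizontal, 1,200 m flowline.
cost = wet_tree_well_cost(2_000.0, 500.0, 1_200.0)  # $17.4 million
```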
The average drilling and completion time was 67.2 days for the dry tree system and 44.2 days for the wet tree system, allowing a quicker production ramp-up. The average well drilling and completion cost was $32.6 × 10^6 for the dry tree system and $18.6 × 10^6 for the wet tree system.

Joint optimization
For the joint optimization exercise, we optimized the well controls of the optimal solution of the field development challenge for the dry tree system. This is simply a sequential, non-iterative, optimization approach. Figure 27 illustrates the idea.
To adhere to the economic constraints imposed by the challenge, similarly to what was done in the well control exercise, we impose a reactive control strategy after the simulation is completed so that the uneconomical wells are shut in.
The additional computational cost of the well control optimization was 5,000 simulations. We used Np = 1 in Eq. 5 to approximate the gradient. Figure 28 and Table 4 show that the joint optimization slightly improves the expected NPV compared with the field development optimization. The increase in the objective function is 3%.
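A generic ensemble-based gradient approximation of this kind can be sketched as below. This is a stochastic-simplex (StoSAG-style) sketch, not a reproduction of Eq. 5 from the paper: the objective `J`, the perturbation size `sigma`, and the toy quadratic check are all illustrative assumptions. With `n_perturb = 1` (the Np = 1 case), the gradient estimate costs a single extra simulation per iteration.

```python
import random

def ensemble_gradient(J, u, sigma=0.1, n_perturb=1, seed=0):
    """StoSAG-style approximate gradient of objective J at controls u:
    average the (J(u + du) - J(u)) * du products over n_perturb Gaussian
    perturbations, scaled by sigma**2 (n_perturb plays the role of Np)."""
    rng = random.Random(seed)
    base = J(u)
    grad = [0.0] * len(u)
    for _ in range(n_perturb):
        du = [rng.gauss(0.0, sigma) for _ in u]
        dJ = J([ui + di for ui, di in zip(u, du)]) - base
        for i, di in enumerate(du):
            grad[i] += dJ * di / (sigma ** 2 * n_perturb)
    return grad

# Toy check: J(u) = -sum(u_i^2) has exact gradient -2u.
J = lambda u: -sum(ui * ui for ui in u)
g = ensemble_gradient(J, [1.0, -1.0], sigma=0.01, n_perturb=500)
```

The estimate is noisy for small Np, which is precisely the trade-off explored in this work: fewer perturbations mean cheaper iterations at the cost of a noisier search direction.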

Conclusions
In this work, we present the results of the three exercises proposed in the OLYMPUS challenge. We used a modified ensemble-based method for the well control optimization, a genetic algorithm (Genocop III) for the field development optimization, and a sequential approach for the joint optimization: well placement followed by a well control optimization, with no iterations. The results of the first and second exercises showed a significant improvement in the expected NPV computed over the 50 realizations, respectively 7% with respect to reactive control and 660% with respect to an initial well placement based on engineering judgment. Furthermore, we showed the effectiveness of a scenario reduction procedure to choose a subset of representative models for the well control optimization, and we verified the substantial benefit of reducing the number of perturbations for the gradient approximation in the well control problem without compromising the gain in the optimized NPV. The result of the joint optimization showed a slight increase in the objective function of 3%, despite the fact that we have not employed a strict joint optimization framework. We highlight the importance of the conscious utilization of high-performance computing (HPC) infrastructures. Model-based optimization is inherently expensive due to the noticeably large, but tractable, number of reservoir model response evaluations. As shown in this work, scalable optimization algorithms combined with HPC facilities can enable a timely decision-making process in field development planning. A natural extension of the work presented here is the further investigation of joint optimization strategies.