A comprehensive model for predicting the effect of various nanoparticles on the filtration volume of water-based drilling fluids

The filtration volume of a drilling fluid is directly associated with the amount of formation damage in hydrocarbon reservoirs. Many different additives are added to drilling fluids to minimize the filtration volume, and nanoparticles have recently been utilized to improve the filtration properties of drilling fluids. To date, however, no model has been presented to investigate the effect of nanoparticles on the filtration properties of drilling fluids. In this study, the impact of various nanoparticles is investigated, and an artificial neural network is used as a powerful tool to develop a novel approach for predicting their effect on filtration volume. Model evaluation is performed by calculating statistical parameters. The results obtained by the model and the experimental results are in excellent agreement, with an average absolute relative error of 2.6636%, a correlation coefficient (R²) of 0.9928, and a mean square error of 0.4797 for the overall data. These statistics show that the proposed model is able to predict the filtration volume with high precision. Furthermore, a sensitivity analysis on the input parameters demonstrated that nanoparticle concentration has the strongest effect on filtration volume and should be considered by researchers during process optimization.


Introduction
The first stage of accessing a hydrocarbon reservoir in the petroleum industry is the drilling operation. During drilling, the drilling fluid, usually called drilling mud, is one of the most crucial components. Among the types of drilling fluid (water-based, oil-based, and synthetic-based), water-based fluids are the most widely used in the drilling industry, because the other types suffer from higher cost, environmental issues, disposal problems, and health and safety issues (Mahmoud et al. 2016). Drilling fluids are responsible for many different functions during the drilling process, such as controlling the formation hydrostatic pressure, preventing collapse of the wellbore surface, transferring cuttings to the surface, suspending cuttings and additives, preventing formation damage by forming a good mudcake on the wellbore surface, and lubricating and cooling the bit and drill string.
Therefore, these fluids should be designed and manufactured with specific characteristics in order to fulfill the above functions. To modify or enhance a specific function or property, additives are also added to the drilling fluid. The success of a drilling operation is closely related to the design and performance of the drilling fluid (Mahmoud et al. 2016). To minimize the problems encountered and keep drilling costs low, the functions of the drilling fluid must be optimized by improving its properties (Salih and Bilgesu 2017). Most of the problems encountered during drilling, such as stuck pipe, high torque and drag, surge and swab pressures, bit balling, shale swelling, and lost circulation, stem from improper drilling fluid design (Bourgoyne et al. 1986) and are directly or indirectly related to the hydraulic, rheological, and filtration properties of the fluid (Salih and Bilgesu 2017). Monitoring and controlling the drilling fluid properties is therefore an integral and essential task that should be scheduled in the predefined drilling program. The effects of additives and of human error must be added to the usual problems encountered during drilling that affect the drilling fluid properties.
Among these problems, one of the most crucial is drilling fluid loss into the formation. Mitigating and controlling fluid loss is essential to reduce both formation damage and the cost of the drilling fluid, so fluid loss agents are added to the drilling fluid as additives (Vryzas et al. 2016). Their function is to form a filter cake, usually called a mudcake, on the wellbore surface in order to mitigate or prevent fluid loss into the formation (Barry et al. 2015). For this purpose, fluid loss agents must be optimized to yield a homogeneous, thin, and low-permeability mudcake. If the mudcake is thick or highly permeable, it increases the probability of pipe sticking or the filtration volume of the drilling fluid, respectively.
With the arrival of nanotechnology, the special characteristics of nanoparticles have made them potential additives in several fields of science, used to mitigate, minimize, or solve encountered problems and to improve the characteristics of a material or a process. Over time, nanoparticles have also become involved in various fields of the oil and gas industry. Recently, various nanoparticles have been utilized by many researchers to improve the filtration properties of water-based drilling fluids while keeping the other fluid properties optimal or improving them as well.
In recent years, with the development of technology, artificial intelligence tools have been widely applied to model nonlinear problems in various fields of science. The artificial neural network (ANN) is one of the most commonly used of these tools. Predicting the filtration properties of drilling fluids in the absence of nanoparticles has been investigated in some recent studies (Jeirani and Mohebbi 2006); however, no model has yet been presented for the effect of nanoparticles on the filtration properties of drilling fluids.
In this study, the influence of various nanoparticles on filtration volume is considered by applying an ANN model as a novel method. For this purpose, a total of 1003 data points are gathered from the most recent studies found in the literature. An ANN model is then developed for accurate prediction of the filtration volume of water-based drilling fluid as a function of effective parameters, namely nanoparticle type, nanoparticle concentration, KCl salt concentration, temperature, pressure, revolutions per minute (RPM), and time. The proposed model is assessed by statistical analyses. Finally, a sensitivity analysis is performed to determine the parameters to which the filtration volume is most sensitive.

Preview of artificial neural network
The artificial neural network (ANN) is a processing system, inspired by the human brain, that is used to obtain a nonlinear regression model between variables. An ANN constructs a nonlinear mapping between inputs and outputs and converts these complex relationships into a series of training patterns, which is useful wherever mathematical modeling would be too complicated (Liang and Bose 1996). A network usually consists of at least three layers: an input layer, one hidden layer, and an output layer. Each layer consists of neurons that connect to one another. A neuron is an individual processing unit included in the modeling process. Neurons are connected to each other through a transfer function together with weights and a bias: input parameters are multiplied by their associated weights and then summed with the associated bias. The performance of an ANN model depends on both the weights and the transfer function (Masoudi et al. 2018). The general mathematical expression of each neuron is:

O_i = σ(Y_i),   Y_i = Σ_j (w_ji · P_j) + b_i   (1)

where O_i is the output of the ith neuron, Y_i is the input to the transfer function of the ith neuron, σ is the transfer function, N is the number of neurons in each layer, w_ji is the weight connecting the jth input parameter to the ith neuron of the next layer, P_j is the jth input parameter, and b_i is the bias of the ith neuron.
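As a concrete illustration of the neuron expression in Eq. 1, the output of one neuron can be computed in a few lines. This is a minimal NumPy sketch; the weight vector, inputs, and tanh transfer are illustrative assumptions, not the paper's trained values:

```python
import numpy as np

def neuron_output(weights, inputs, bias, transfer=np.tanh):
    """Eq. 1: O_i = sigma(sum_j w_ji * P_j + b_i).

    The tanh transfer is an illustrative choice; any transfer
    function (e.g. TANSIG, PURELIN) can be passed in its place.
    """
    y = np.dot(weights, inputs) + bias  # weighted sum plus bias, Y_i
    return transfer(y)                  # apply the transfer function
```

For example, with zero weights and bias the output is exactly the transfer function evaluated at zero.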
The initial weights are set to random numbers and are corrected during the training process of the network. Three steps are performed to develop an ANN model: building the model, model training, and the testing procedure (Kassem et al. 2018). The most common training algorithm is back-propagation (BP). This training process consists of two phases: a feed-forward phase, in which the knowledge is processed from the input layer to the output layer, and a back-propagation phase, in which the difference between the network output obtained in the first phase and the desired output is compared with a previously determined difference tolerance, and the error is propagated backward to update the links toward the input layer (Kassem et al. 2018). The weights and biases are adjusted through the training process to minimize the error at each step. The adjustment of the weights is done through the training algorithm, the best of which in this study is the Levenberg-Marquardt training algorithm (TRAINLM). The weight change for a given neuron is calculated by the gradient descent with momentum weight and bias learning function (LEARNGDM) as:

dw_ji,new = MC · dw_ji,prev + (1 − MC) · LR · gw_ji   (2)

where dw_ji,new is the new weight change for the ith neuron and jth input parameter, MC is the momentum constant, dw_ji,prev is the previous weight change for the ith neuron and jth input parameter, LR is the learning rate, and gw_ji is the weight gradient.
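A single momentum update step can be sketched as follows, assuming MATLAB's LEARNGDM convention of scaling the gradient term by (1 − MC); the default MC and LR values are illustrative, not the paper's settings:

```python
def learngdm_step(dw_prev, gw, mc=0.9, lr=0.01):
    """Gradient descent with momentum (Eq. 2):
    dw_new = MC * dw_prev + (1 - MC) * LR * gw.
    mc (momentum constant) and lr (learning rate) defaults are
    illustrative only."""
    return mc * dw_prev + (1.0 - mc) * lr * gw
```

With mc = 0 the update reduces to plain gradient descent; with a large mc, the previous weight change dominates and smooths the trajectory.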
Among the various types of transfer functions, the tangent sigmoid (TANSIG) and pure linear (PURELIN) functions are usually used for the hidden and output layers, respectively. Their mathematical descriptions are given in Eqs. 3 and 4:

TANSIG:  f(x) = 2 / (1 + e^(−2x)) − 1   (3)
PURELIN: f(x) = x   (4)

All parameters of Eqs. 1 to 4 are dimensionless. The general architecture of an ANN model and the structure of a neuron are presented in Fig. 1.
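The two transfer functions can be written directly from Eqs. 3 and 4; note that TANSIG is numerically identical to the hyperbolic tangent:

```python
import math

def tansig(x):
    # Eq. 3: 2 / (1 + exp(-2x)) - 1, equal to tanh(x)
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

def purelin(x):
    # Eq. 4: identity mapping, used in the output layer
    return x
```

TANSIG squashes any input into (−1, 1), which is why inputs are typically normalized before training, while PURELIN leaves the output layer free to take any real value.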

Data collection
In this study, a data collection from the literature is used that relates the filtration volume of water-based drilling fluid to the effective input parameters. As mentioned above, various nanoparticles have been used by researchers to improve drilling fluid properties. The data bank is gathered from those studies in which the effect of nanoparticles on the filtration volume of water-based drilling fluid was investigated. Some previous studies presented the effect of nanoparticles together with various polymers and surfactants; such studies were excluded so that the effect of nanoparticles alone could be modeled and investigated. Moreover, dependable experimental results are necessary to construct a reliable network.
The nanoparticles used and the studies contributing to the data bank are SiO2 (Parizad et al. 2018), Fe2O3 (Vryzas et al. 2015; Mahmoud et al. 2017), CuO (Shakib et al. 2016), Al2O3 (Shakib et al. 2016), SnO2 (Parizad and Shahbazi 2016), and clay (Shakib et al. 2016). The experimental data in these studies cover the impact of nanoparticle type, nanoparticle concentration (wt%), KCl salt concentration (wt%), temperature (°F), pressure (psi), RPM (rev/min), and time (s) on the filtration volume (ml) of water-based drilling fluid. Nanoparticle type is included as an input parameter to capture the effect of the different nanoparticles. A total of 1003 data points are used for the predictive model, which is described later in this paper. These data points are randomly divided into two categories, training data and testing data. To check the performance of the model in predicting the target, the two categories must be disjoint, with no points in common. The training data consist of about 80% of the data points (803 points), and the remaining 20% (200 points) are used as testing data. A statistical description of the data bank used in this study is given in Table 1.
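The 80/20 split described above can be sketched as follows; the helper name `split_data` and the seed are illustrative, not from the paper:

```python
import math
import random

def split_data(points, train_frac=0.8, seed=42):
    """Randomly split the data bank into disjoint training and
    testing sets. With 1003 points and an 80% fraction,
    ceil(0.8 * 1003) reproduces the paper's 803/200 split."""
    idx = list(range(len(points)))
    random.Random(seed).shuffle(idx)           # reproducible shuffle
    n_train = math.ceil(train_frac * len(points))
    train = [points[i] for i in idx[:n_train]]
    test = [points[i] for i in idx[n_train:]]
    return train, test
```

Because the indices are shuffled once and then partitioned, no data point can appear in both sets, satisfying the disjointness requirement.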

Model construction
To predict the target, a feed-forward back-propagation algorithm is used. As noted above, an ANN consists of layers containing neurons, and an optimal network has the optimum number of layers and neurons, which must be found through the training process. TRAINLM and LEARNGDM are set as the training and adaption learning functions, respectively. The transfer functions connecting the neurons are set to TANSIG, TANSIG, and PURELIN in the first, second, and third layers, respectively. The criterion specifying the best network among the generated networks is the MSE: a network with lower MSE and higher R² predicts the target more accurately. A trial-and-error process is repeated several times to find the optimal network. The best network found has 3 layers (2 hidden layers and an output layer), with 21, 9, and 1 neurons in the first, second, and third layers, respectively. The workflow of model construction, model training, and sensitivity analysis (described later) is represented in Fig. 2. The training parameters of the proposed model are given in Table 2.
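The 7-input, 21-9-1 architecture described above can be sketched as a plain NumPy forward pass. The random initial weights below merely stand in for the values that TRAINLM would tune during training, which are not published in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 7 inputs (NP type, NP conc., KCl conc., T, P, RPM,
# time), 21 and 9 hidden neurons (TANSIG), 1 output neuron (PURELIN).
sizes = [7, 21, 9, 1]

# Random initial weights/biases; in the paper these would then be
# adjusted by the Levenberg-Marquardt algorithm (not shown here).
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

def forward(x):
    """One feed-forward pass through the 21-9-1 network."""
    h = np.tanh(weights[0] @ x + biases[0])   # hidden layer 1, TANSIG
    h = np.tanh(weights[1] @ h + biases[1])   # hidden layer 2, TANSIG
    return (weights[2] @ h + biases[2])[0]    # output layer, PURELIN
```

In practice the seven inputs would first be normalized to a common range so that the TANSIG layers are not saturated.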

Error assessment
After a model has been developed, its robustness in predicting the target must be evaluated to see whether it has the desired performance and accuracy. The data fitness of the model is measured by the correlation coefficient (R²). The mean-squared error (MSE), the criterion used for finding the optimal network, is calculated as one of the most commonly used parameters. Several other statistical parameters are also calculated: the root-mean-squared error (RMSE), which is the square root of the MSE; the relative deviation (RE), the relative difference between actual and predicted values; the average relative error (ARE), the average of the RE values; the average absolute relative error (AARE), the average of the absolute RE values; and the standard deviation (SD) between the actual and predicted data. Equations 5 to 11 define these parameters:

R² = 1 − Σ_i (Y_ac,i − Y_pr,i)² / Σ_i (Y_ac,i − Ȳ_ac)²   (5)
MSE = (1/N) Σ_i (Y_ac,i − Y_pr,i)²   (6)
RMSE = √MSE   (7)
RE_i = ((Y_pr,i − Y_ac,i) / Y_ac,i) × 100   (8)
ARE = (1/N) Σ_i RE_i   (9)
AARE = (1/N) Σ_i |RE_i|   (10)
SD = √[ (1/(N − 1)) Σ_i ((Y_ac,i − Y_pr,i)/Y_ac,i − m̄)² ],  with m̄ the mean relative deviation   (11)

In the above equations, N is the number of data points, Y_ac,i is the actual value, Y_pr,i is the predicted value, and Ȳ_ac is the average of the actual data.
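Under the standard definitions of these statistics (the paper's exact SD convention is not reproduced, so a sample standard deviation of the relative deviations is assumed here), the parameters can be computed as:

```python
import numpy as np

def metrics(y_ac, y_pr):
    """Statistical parameters of Eqs. 5-11, standard definitions.
    Inputs must be nonzero actual values (RE divides by y_ac)."""
    y_ac = np.asarray(y_ac, float)
    y_pr = np.asarray(y_pr, float)
    re = (y_pr - y_ac) / y_ac * 100.0           # relative deviation, %
    mse = np.mean((y_ac - y_pr) ** 2)
    return {
        "R2": 1.0 - np.sum((y_ac - y_pr) ** 2)
                    / np.sum((y_ac - y_ac.mean()) ** 2),
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        "ARE": np.mean(re),                     # signed average
        "AARE": np.mean(np.abs(re)),            # absolute average
        "SD": np.std(re / 100.0, ddof=1),       # sample std of RE
    }
```

For a perfect prediction the errors all vanish and R² is exactly 1.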

Sensitivity analysis
The concepts of sensitivity and uncertainty analysis are essential parts of modeling and statistical studies. Sensitivity analysis determines the contribution of each input parameter to the output parameters of a model or a data series; in other words, it quantifies how the input parameters influence the outputs. It is also defined as the contribution of each input parameter to the uncertainty in model predictions (Hammonds et al. 1994). Uncertainty analysis, in turn, determines the uncertainty that each input parameter contributes to the model predictions (Hammonds et al. 1994). Sensitivity analysis methods can be categorized from two points of view. Based on concept, they are categorized as deterministic or statistical (Zhou 2014): statistical methods perform sensitivity analysis after the uncertainty analysis stage, while deterministic methods perform sensitivity analysis first and then proceed to uncertainty analysis (Cacuci et al. 2005). Based on the factor space of interest, they are categorized as local or global (Yang 2011): local methods investigate the model behavior by varying one input parameter at a time, while global methods vary all parameters over their ranges simultaneously (Zhou 2014).
Among the various approaches to sensitivity analysis, the Monte Carlo method is one of the most widely used in engineering. In this method, random samples of the input parameters are generated from various sampling distributions over the parameter ranges. The model is then simulated repeatedly, each time with a different set of sampled parameters, and the output is the probability distribution of the results (Bonate 2001). The Monte Carlo method provides an effective approach for assessing the influence of several interacting parameters that may exhibit a wide range of uncertainties (Santoso et al. 2019). However, the generated runs may not be representative of full-physics models, and the method requires an excessive number of runs (Santoso et al. 2019). To reduce the number of runs and develop an efficient model, design of experiments (DoE) was developed (Santoso et al. 2019).
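A minimal sketch of the Monte Carlo procedure just described, assuming uniform sampling over each parameter range and a generic `model` callable (both are illustrative choices, not the method of this paper, which uses the relevancy factor instead):

```python
import random

def monte_carlo(model, ranges, n_runs=1000, seed=1):
    """Draw each input uniformly over its (lo, hi) range and collect
    the model outputs; the spread of the returned list approximates
    the output distribution."""
    rnd = random.Random(seed)
    outputs = []
    for _ in range(n_runs):
        sample = [rnd.uniform(lo, hi) for lo, hi in ranges]
        outputs.append(model(sample))
    return outputs
```

The number of runs controls the accuracy of the estimated distribution, which is exactly the cost that DoE methods aim to reduce.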
In this study, the sensitivity analysis is performed using the relevancy factor (RF). An input parameter with a higher (or lower) relevancy factor has a higher (or lower) impact on the filtration volume. The relevancy factor is defined as:

RF(P_i, Y_pr) = Σ_j (P_i,j − P̄_i)(Y_pr,j − Ȳ_pr) / √[ Σ_j (P_i,j − P̄_i)² · Σ_j (Y_pr,j − Ȳ_pr)² ]   (12)

where P_i is the ith input parameter, P_i,j is the jth value of the ith input parameter, P̄_i is the average of the ith input, Y_pr,j is the jth predicted filtration volume, N is the number of overall data points, and Ȳ_pr is the average of the predicted filtration volume. The results of the sensitivity analysis are given in the Results and discussion section.
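Eq. 12 is a Pearson-type correlation between one input column and the predicted output, and can be sketched as:

```python
import numpy as np

def relevancy_factor(p_i, y_pr):
    """Relevancy factor (Eq. 12) of one input parameter with respect
    to the predicted filtration volume; result lies in [-1, 1]."""
    p_i = np.asarray(p_i, float)
    y_pr = np.asarray(y_pr, float)
    dp = p_i - p_i.mean()                     # centered input values
    dy = y_pr - y_pr.mean()                   # centered predictions
    return np.sum(dp * dy) / np.sqrt(np.sum(dp ** 2) * np.sum(dy ** 2))
```

A value near +1 (or −1) indicates a strong direct (or inverse) relationship, while a value near zero indicates little influence.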

Results and discussion
To evaluate the proposed model, the statistical parameters defined above are calculated, and the predicted and actual data are plotted against each other. Figure 3 presents the crossplot of the model-predicted filtration volume versus the actual filtration volume for the training, testing, and overall data. The y = x line is plotted in this figure to evaluate the precision of the proposed model: the tighter the accumulation of data points around the y = x line, the more precise the model. This precision is usually measured by the correlation coefficient, which is calculated from the best-fit line through the data, that is, the line with the smallest total squared deviation from the data points among all lines that could pass through them.
To obtain a precise comparison between the proposed model output and the actual output, the model predictions and the actual data are plotted together against the data-point index in Fig. 4 for the training, testing, and overall data. The predicted filtration volumes can be seen to closely follow the actual values.
The values of R² for the training, testing, and overall data in this figure are 0.9938, 0.9904, and 0.9928, respectively. These values demonstrate the efficiency of the employed ANN model in predicting the filtration volume. The other statistical parameters are reported in Table 3, which shows that the values of MSE, AARE, and SD for the overall data are 0.4797, 2.6636, and 0.0585, respectively. It is clear from the table that the proposed model performs well and can estimate the filtration volume precisely.
To better understand and compare the values of each statistical parameter across the datasets, a different representation of these parameters is helpful. Figure 5 depicts a graphical representation of the measured statistical parameters for the training, testing, and overall data. Figure 6 presents a base case from the data bank (Parizad et al. 2018) to compare the filtration volume before and after adding nanoparticles and to observe the physics of the drilling fluid system. The figure shows that the filtration volume decreases as the nanoparticle concentration increases. There is also a concentration range in which nanoparticles are most effective at lowering the filtration volume, as stated in the literature (Mahmoud et al. 2017).
The relative deviation, which measures the difference between predicted and actual data, gives an illustrative view of how close the model output is to the actual target. The ranges of relative deviation for the training, testing, and overall data are [−3.9797, 7.1553], [−2.8577, 3.4762], and [−3.9797, 7.1553], respectively. Such low relative deviations illustrate the consistency between the actual and predicted filtration volumes, as presented in Fig. 7 for the training, testing, and overall data. The statistical values of the relative deviation are given in Table 4.
Distribution plots explore the distribution of a specific parameter across all the data points considered in the study. The distribution of the relative deviation is shown in Fig. 8 for the training, testing, and overall data; the curve plotted over the overall-data bars is the normal distribution curve. As can be seen from the figure, the relative deviation is concentrated in the vicinity of zero, which represents the uniform consistency between the actual and predicted data across the whole dataset.
Any parameter measured in science is affected by other independent parameters that must be considered during the measurement, and these effective parameters clearly do not all influence it equally: some impact it more and some less. As described above, sensitivity analysis assesses the effect of independent parameters on a specific parameter. To quantify the effect of each effective parameter on the filtration volume, a sensitivity analysis is therefore performed. Among the available methods, the relevancy factor is used in this work (as discussed in the Sensitivity analysis section). Figure 9 presents the results of the sensitivity analysis of all input parameters on the filtration volume. The figure shows that, in modeling the filtration volume, the most sensitive parameter is the nanoparticle concentration and the least sensitive is the RPM. This result should be considered by researchers in future work.

Conclusions
In this study, an ANN model is developed to predict the effect of nanoparticles on the filtration volume of water-based drilling fluids. For this purpose, 1003 data points were gathered from the most recent studies. The influencing parameters included in this data bank are nanoparticle type,