The Development and Application of an Evolutionary Algorithm for the Determination of Kinetic Parameters in Automotive Aftertreatment Models

Accurate mathematical models are an essential tool in the development of aftertreatment systems, as they can provide detailed information on the impact of design changes while simultaneously reducing development costs and lead times. Identifying the set of kinetic parameters that achieves an acceptable level of accuracy may require significant time and effort. Optimisation techniques can be used to speed up the tuning process; however, these techniques can require a large computation time and may not produce a satisfactory answer, which invariably leads to questions about the accuracy of such automated approaches. In this paper, the performance of a number of optimisation algorithms, including a genetic algorithm (GA), particle swarm optimisation with niching (nPSO) and a hybrid algorithm, has been explored with respect to their applicability to kinetic parameter optimisation in the context of aftertreatment modelling. The focus is therefore on the assessment of the optimisers' performance, rather than the characterisation of catalytic reactions. The different algorithms were applied to the optimisation of parameters in a number of mathematical functions and theoretical aftertreatment systems. Theoretical aftertreatment systems were used because they have known absolute solutions, thereby allowing the optimisers' performance to be assessed in the absence of any external source of inaccuracy such as model structure and experimental error. The results obtained demonstrate that such optimisation approaches facilitate the determination of kinetic parameters with suitable accuracy. The proposed hybrid optimisation algorithm achieved excellent performance in considerably shorter computation time than the GA or nPSO optimisers.


Introduction
Emissions legislation has become increasingly stringent in recent years and the introduction of real driving emission (RDE) regulations places automotive manufacturers under greater pressure to reduce emissions from their vehicles. Emissions are controlled using a number of aftertreatment devices such as three-way catalysts (TWC) and diesel oxidation catalysts (DOC). These aftertreatment devices are subject to constant development in an effort to meet vehicle emissions limits while reducing cost.
Accurate mathematical models are essential tools in the development of aftertreatment systems, as they can provide useful information on the impact of design changes while simultaneously reducing development costs and lead times. These models commonly use global kinetics to calculate the rate of reactions on the surface of the catalyst. In the global kinetic approach, each reaction is represented by an equation that provides a rate of surface reaction for a wide range of concentrations of all gases that inhibit or promote that reaction [1]. Rival techniques such as micro-kinetics [2] incorporate the adsorption and desorption steps associated with the surface reactions and therefore do not usually make assumptions regarding the rate-limiting step. While this has the potential to improve the accuracy of the model [3], the number of parameters that require validation increases dramatically [4]. For all approaches, the determination of the model's kinetic parameters is crucial since it determines the model's predictive capabilities with respect to the reaction conditions experienced by the catalyst. (Electronic supplementary material: the online version of this article (https://doi.org/10.1007/s40825-018-0085-7) contains supplementary material, which is available to authorized users.)
In the presented research, the global kinetic approach has been selected since, despite its limitations, it is frequently used in automotive catalyst models, primarily due to its relative ease of implementation. All reactions are assumed to be first order with respect to each reactant and are modelled using the Langmuir-Hinshelwood expression [5]. The Langmuir-Hinshelwood rate expression was expanded by Voltz et al. [6] for the oxidation of carbon monoxide and propylene over platinum catalysts, as shown in Eq. 1. For instance, the oxidation of CO can be expressed by Eq. 2:

r_CO = A exp(−E_A / RT) [CO][O₂] / G (Eq. 2)

where
A is the pre-exponential factor (m³ mol⁻¹ s⁻¹)
E_A is the activation energy (kJ mol⁻¹)
[i] is the concentration of gas species i
G is the inhibition term
k_ai is the inhibition coefficient (m³ mol⁻¹)

Assuming the relevant reactions and the structure of the Langmuir-Hinshelwood rate expression have been determined, the pre-exponential factor and activation energy terms may be tuned to match experimental data. However, the determination (or "tuning") of these kinetic parameters is an extensive and complex task. In a typical automotive TWC, there are approximately 20 reactions occurring simultaneously, which equates to up to 40 such parameters that require tuning; this does not include the cross-inhibition or self-inhibition terms. Thus, models may require considerable manual effort on the part of the model developer to achieve an acceptable capture of the catalyst's behaviour by the simulation.
In this respect, optimisation techniques based on computer algorithms become vital to speed up the tuning process. However, the application of these techniques to aftertreatment models can require a large computation time and may not produce a solution deemed satisfactory. This invariably leads to questioning of the accuracy of such an automated approach, i.e. whether the absence of fit is due to a lack of accuracy in the optimiser, the suitability of the model in capturing the chemical and/or physical phenomena, the quality of the experimental data, or a combination of these factors.
Several optimisation techniques have been applied to the tuning of automotive catalysis models, and various authors have used third-party programs for optimisation of aftertreatment systems. Pandya et al. [7] used the DVODE solver in their work to optimise the kinetic parameters of five reactions in a DOC system. The DVODE solver uses a generalised pattern search (GPS) optimisation algorithm, which has been shown to be outperformed by a standard genetic algorithm (GA) in later work [8]. Ramanathan and Sharma used the iSIGHT optimisation package to tune kinetic parameters in a three-way catalyst model [9]. The package, in conjunction with the model, achieved good accuracy. However, the optimisation process required a large number of iterations, between 5000 and 10,000, to optimise the kinetic parameters [9]. This was acceptable due to the rapid computation time of the kinetic model, which required seconds to run, but this number of iterations would be prohibitive for a model with a larger computation time. iSIGHT uses a variety of optimisation algorithms, including various forms of a genetic algorithm approach for multi-objective functions [10]. While the use of a third-party optimisation program is a convenient method to solve engineering problems, the user is provided with a black box optimisation package, with a limited ability to alter search variables within the optimiser.
Sola et al. [11] used a GPS algorithm within the MATLAB optimisation toolbox to optimise kinetic parameters of CO and C₃H₆ oxidation on a platinum DOC model. Pontikakis and Stamelos [12] programmed a genetic algorithm approach to optimise a set of ten reactions in a three-way catalyst model, showing a high level of accuracy when applied to the New European Driving Cycle (NEDC).
Recent development of metaheuristic optimisation techniques applied to scientific and engineering modelling has demonstrated the superiority of approaches such as genetic algorithms and particle swarm optimisation (PSO) over the more traditional local search optimisation approaches collectively referred to as "hill climbing". Beheshti notes that as optimisation design problems become more complex, the design space for these variables grows exponentially [13].
GA and PSO techniques have been applied to a number of mathematical problems in recent years due to their versatility and optimisation power. Liu et al. [14] applied a GA technique to a support vector machine model of an SCR catalyst. The GA was tasked with maximising the NOₓ conversion and minimising NH₃ under certain conditions. This multi-objective optimisation approach produced accurate results when compared with experimental data, procuring error values of the order of 10⁻⁶, with an absolute error of 5%.
PSO algorithms are a useful technique in model development and, in recent years, have been used to assist in predictive models and control strategies. Mozaffari et al. [15] have developed a dynamic PSO algorithm which was applied to a nonlinear model predictive control strategy. This dynamic PSO technique was applied to a theoretical cold start model, and it was found that the PSO technique could guarantee convergence on a solution for their application. Bertram et al. [16] applied a basic PSO technique and a hybridised GA-PSO technique to engine control parameters in order to reduce NOₓ emissions from a diesel engine. The hybrid approach outperformed the standard PSO technique, resulting in a 27% reduction in NOₓ emissions and a 60% reduction in PM emissions.
While genetic algorithms have been explored for the optimisation of kinetic parameters, the particle swarm approach does not seem to have been reported. More interestingly, when such optimisation approaches are employed, their performance assessment seems to rely solely on the level of fit between the simulated and experimental data which, by definition, will also be affected by the "quality" of the model used.
Reporting of an assessment of the intrinsic performance of the different algorithms when applied to such problems is rare.
In this present work, a comparative investigation of the intrinsic performance of several optimisation algorithms including a GA, particle swarm optimisation with niching (nPSO) and a hybrid algorithm, has been performed with respect to their applicability in aftertreatment modelling optimisation. The GA approach has been selected as it has demonstrated a high level of accuracy in previous studies [12,16]. The GA algorithm has been shown to be accurate across multiple test scenarios, including automotive modelling problems. nPSO was explored since it has been reported as superior to GA for different types of optimisation problems [17], but has not yet been applied to the optimisation of kinetic parameters. Finally, a hybrid optimiser aimed at capturing the strengths of both GA and nPSO was created and tested with the aim of reducing computational cost without impacting accuracy.
The optimisation algorithms were tested on theoretical aftertreatment data which was generated using the Axisuite® catalyst simulation package. This approach is advantageous since it isolates the possible sources of error in the system and, consequently, allows an assessment of the intrinsic performance of the algorithms. Errors may be generated in different areas of an aftertreatment model, including inaccurate values for the convective and conductive heat transfer parameters, mass transfer parameters, errors generated by the inability of global kinetics to accurately represent a complex chemical system, and finally errors generated by an inability of the chosen optimiser to find an accurate solution. In the present case, using the Axisuite® software to generate both the mathematical model and the objective function negates any error due to inaccurate modelling of the system, thereby isolating the optimiser as the only possible source of error.
Using such conditions, it is possible to analyse and compare the intrinsic performance of different optimisation techniques within the mathematical framework of an aftertreatment model. This approach does not seek to validate aftertreatment models using experimental data, but rather attempts to validate the suitability of using different metaheuristic optimisation techniques in multiple established aftertreatment conditions.

Optimisation Techniques
Details about the theoretical and mathematical background of the different algorithms are extensively reported in the literature, and only a short description of their main features and context in which they are employed in the present work is provided in the following section.

Genetic Algorithm
Genetic algorithms have previously been used to optimise complex mathematical systems, such as reaction kinetic problems [12,[17][18][19][20][21][22], grouping problems [23], and the mathematical travelling salesman problem [24]. GAs are inspired by nature and mimic the biological process of natural selection [23]. In a GA, each optimisation variable is known as a bit, and these bits are grouped into a single bit array known as an "individual". For example, when a GA is applied to the optimisation of catalyst kinetic parameters, each bit array contains a unique set of pre-exponential values and activation energies.
In the study presented in this paper, a floating-point (real number) GA was used instead of binary coding. The floating-point GA uses real numbers to form the bits in the array, which is preferable given the large magnitude of the kinetic parameters, which would be cumbersome for a binary-coded algorithm to handle. Consequently, the values for the pre-exponential factors and activation energies can be used directly by the GA code. The GA search methodology has been fully described by Pedlow et al. [25]. The algorithm uses a double-point crossover mechanism and a non-uniform mutation rate to assist its search process.
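The two GA operators named above, double-point crossover and non-uniform mutation, can be sketched for a real-valued bit array as follows. This is a minimal illustration only; the function names and the shape exponent `b` are assumptions, not taken from the published code.

```python
import random

def double_point_crossover(parent_a, parent_b):
    """Swap the segment between two random cut points of two parents."""
    n = len(parent_a)
    i, j = sorted(random.sample(range(1, n), 2))  # two distinct cut points
    child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
    child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
    return child_a, child_b

def non_uniform_mutation(value, lo, hi, gen, max_gen, b=3.0):
    """Perturb a gene; the perturbation shrinks as generations progress,
    moving from broad exploration towards fine local tuning."""
    span = (hi - value) if random.random() < 0.5 else -(value - lo)
    step = span * (1.0 - random.random() ** ((1.0 - gen / max_gen) ** b))
    return min(hi, max(lo, value + step))
```

Because the crossover only exchanges segments, the two children always contain exactly the genes of the two parents, while the mutation keeps each gene inside its search limits.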

Particle Swarm Optimisation
The second optimisation technique explored in this study was particle swarm optimisation. This search technique is based on swarm intelligence and is a powerful and adaptable optimiser [26,27]. PSO is inspired by the social behaviour of flocking birds and was developed by Eberhart and Kennedy [28]. Each individual in the swarm contains a unique set of parameters which correspond to a solution to the optimisation problem. PSO searches the design space by assigning a position vector to each individual. The population's vectors are updated using a velocity term, which is constructed from the best solution of the entire swarm and the individual's own experience. The velocity and position updates are shown in Eqs. 4 and 5:

v_i^(n+1) = w v_i^n + c1 R1 (P_best − x_i^n) + c2 R2 (G_best − x_i^n) (Eq. 4)
x_i^(n+1) = x_i^n + v_i^(n+1) (Eq. 5)

where
w is the inertia weighting
n is the iteration number
c1, c2 are the arbitrary (acceleration) constants
R1, R2 are random constants, 0 < R1, R2 < 1
P_best is the personal best solution
G_best is the global best solution

An inertia term is included in the PSO algorithm in order to assist its convergence mechanism, as reported by Eberhart and Kennedy [28]. The reported algorithm used an inertia term decreasing linearly from 0.9 to 0.1 [27]. The search process of the PSO considers both an individual's own solution and the current best solution of the swarm, i.e. both local and global weightings. In the algorithm used here, the values for c1 and c2 were altered according to Eqs. 6 and 7.
c1 = (c1f − c1i)(iG / nG) + c1i (Eq. 6)
c2 = (c2f − c2i)(iG / nG) + c2i (Eq. 7)

where
c1i is the initial value of c1
c1f is the final value of c1
iG is the current generation number
nG is the final (total) number of generations

The c1 constant is linearly altered from a value of 2.5 to 0.5, while the c2 constant is altered from 0.5 to 2.5. The c1 constant is a multiplier for the local search functionality of PSO and c2 for the global search functionality. Altering the values of these constants enables the PSO to identify multiple optimum values in a search space, which helps its search functionality [7]. The PSO algorithm incorporated a non-uniform mutation rate, as used in the GA optimiser. Previous studies have shown that the addition of a mutation technique can help the search process of the PSO algorithm [29][30][31], since the mutation function prevents premature convergence, a problem to which PSO is particularly susceptible.
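A single particle update under this scheme can be sketched as follows, assuming the linear schedules described above (inertia 0.9 to 0.1, c1 2.5 to 0.5, c2 0.5 to 2.5); the function name and argument layout are illustrative only.

```python
import random

def pso_step(x, v, p_best, g_best, n, n_gen,
             w_i=0.9, w_f=0.1, c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """One velocity/position update for a single particle (Eqs. 4 and 5).
    Inertia w and accelerations c1, c2 vary linearly with iteration n
    (Eqs. 6 and 7)."""
    frac = n / n_gen
    w = w_i + (w_f - w_i) * frac          # inertia: 0.9 -> 0.1
    c1 = c1_i + (c1_f - c1_i) * frac      # local weight: 2.5 -> 0.5
    c2 = c2_i + (c2_f - c2_i) * frac      # global weight: 0.5 -> 2.5
    new_x, new_v = [], []
    for xi, vi, pb, gb in zip(x, v, p_best, g_best):
        r1, r2 = random.random(), random.random()
        vn = w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)  # Eq. 4
        new_v.append(vn)
        new_x.append(xi + vn)                                    # Eq. 5
    return new_x, new_v
```

At early iterations the particle is pulled mostly towards its own best position (large c1); by the final iterations the pull towards the swarm best dominates (large c2).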
Particle swarm optimisation with niching or nPSO is a PSO with an integrated niching technique. A niche is a subpopulation, based on geographical location, of the main population. Each niche has its own variable limits and will produce its own best solution. Initially, the nPSO algorithm operates as a standard PSO algorithm using an increased local search (c 1 ) parameter. After a set number of iterations, the algorithm detects if individuals in the population have stagnated, which occurs when an individual's best solution has remained unchanged over a number of iterations [32,33]. The best individual is identified and removed from the population to form a niche seed. Individuals geographically close to the niche seed are also removed from the population and added to the niche. This process is repeated for the allowable number of niches. Individuals which do not fall into a niche are kept as part of the main population. Next, all the niches are processed simultaneously, with each niche producing its own best solution. Finally the best results from the niches are compared and the best of all solutions is chosen.
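The niche-formation step described above might be sketched as follows. The Euclidean distance membership test and the `radius` parameter are assumptions, since the paper does not give the exact niching criteria.

```python
import math

def form_niches(population, fitness, n_niches, radius):
    """Peel off up to n_niches subswarms: each is seeded by the best
    remaining individual plus all individuals within `radius` of it;
    the leftovers stay in the main swarm. Lower fitness is better."""
    remaining = sorted(range(len(population)), key=lambda i: fitness[i])
    niches = []
    for _ in range(n_niches):
        if not remaining:
            break
        seed = remaining.pop(0)  # best remaining individual -> niche seed
        close = [i for i in remaining
                 if math.dist(population[i], population[seed]) <= radius]
        niches.append([seed] + close)
        remaining = [i for i in remaining if i not in close]
    return niches, remaining
```

Each returned niche is then processed as its own subswarm with its own best solution, and the main-population leftovers continue the global search.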

GA-PSO Hybrid Model (Hybrid)
The GA-PSO algorithm was developed as part of the research presented here. It is based on the work presented by Kao et al. who reported that this algorithm outperformed a continuous GA over a number of mathematical functions [34]. This algorithm utilises the search process from both GA and PSO algorithms. Figure 1 shows a diagrammatic representation of the search process.
This algorithm uses different aspects of both the GA and PSO algorithms, combining the ability of PSO to search large spaces with the GA's better converging ability, in addition to the stochastic nature of its mutation capability. In this algorithm, the initial population, which must be a multiple of four, is ranked in terms of fitness. The population is separated into two parts, with the fittest individuals sent to the GA algorithm. This subpopulation is run through the GA as previously described, including roulette selection, double-point crossover and a non-uniform mutation rate [25]. The ranked subpopulation is then returned to the main population.
The less fit subpopulation is fed to a PSO search algorithm. This algorithm uses the solution from the GA subpopulation as each of its individuals' best solutions and the overall best solution from the GA subpopulation is kept as the global best solution. The PSO subpopulation is then returned to the main population, the entire population is ranked and the process iterates until a convergence criterion or iteration limit is met.
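The population split described in this and the preceding paragraph can be sketched as follows, with `ga_step` and `pso_step` standing in for the search operators described elsewhere in this section; this is a structural sketch only, not the authors' implementation.

```python
def hybrid_iteration(population, fitness, ga_step, pso_step):
    """One iteration of the hybrid scheme: rank the population (lower
    fitness is better), send the fitter half through the GA operators and
    the less fit half through a PSO update guided by the GA half's
    solutions, then recombine the two halves."""
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    half = len(population) // 2
    fit_half = [population[i] for i in order[:half]]
    weak_half = [population[i] for i in order[half:]]
    g_best = fit_half[0]                 # best GA solution acts as global best
    new_fit = ga_step(fit_half)          # selection / crossover / mutation
    new_weak = pso_step(weak_half, p_bests=new_fit, g_best=g_best)
    return new_fit + new_weak
```

The recombined population is re-ranked and the process iterates until a convergence criterion or the iteration limit is met.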
This Hybrid approach expands upon that reported by Kao et al. by adding operators to the GA section of the optimiser, i.e. a double-point crossover mechanism, a non-uniform mutation rate and roulette selection. These operators increase the search capability of this section of the optimiser by increasing its stochastic nature. A double-point crossover can exchange smaller sections of the genetic information in each individual, which is a considerable advantage in an aftertreatment scenario. A roulette selection mechanism decreases the chance of the system stagnating by choosing random individuals within the population for the crossover mechanism. Finally, a non-uniform mutation rate increases the chance of mutation as a function of the number of iterations. This is an important consideration, as the mutation search function becomes the main search function of the GA as the system starts to converge [25,34].
The PSO section of the optimiser has been adapted to include a number of operations, such as linearly decreasing inertia and acceleration constants which have been shown by Chaturvedi et al. to improve the PSO search performance [35]. As previously stated, these operators promote the ability of the PSO algorithm to search wide areas of the design space. This is particularly important in the format of the Hybrid algorithm. The PSO section of the code also utilises an adaptive mutation function, based on the work by Rajakamur et al. [30], which increases the chance of mutation as the solution gets closer to the objective function.
These operators have been added to specific sections of the Hybrid optimiser, improving the search functionality of each section and thereby improving its overall search capability. In addition to these operators, additional functions were added to the Hybrid optimiser, such as a user-defined temperature range for optimisation. This function enables the user to select one or more sets of temperature ranges across a light-off curve to consider for optimisation. This increases the sensitivity of the optimiser since small deviations from this region of the objective function will have a greater effect on the fitness value of the optimiser. Full details on this operator have been published by Pedlow et al. [25]. A convergence function was also added which identifies when the Hybrid optimiser has stagnated by monitoring the best results at each iteration. Once the population has stagnated, this function greatly reduces the limits around the best solution. The population is then reinitialised within this smaller design space and continues searching until it stagnates again or the iteration limit has been reached.
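The limit-shrinking behaviour of the convergence function might look like the following sketch; the contraction `factor` and the clamping to the original limits are assumed details not specified in the text.

```python
def shrink_limits(best, lo, hi, factor=0.1):
    """On stagnation, contract the search limits around the best solution
    found so far; the population is then reinitialised inside this smaller
    design space. `factor` is the fraction of the original span retained."""
    new_lo, new_hi = [], []
    for b, l, h in zip(best, lo, hi):
        half = 0.5 * factor * (h - l)
        new_lo.append(max(l, b - half))   # never widen beyond original limits
        new_hi.append(min(h, b + half))
    return new_lo, new_hi
```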

Testing Method
The algorithms were tested in two separate scenarios. The first scenario applied the algorithms to a set of mathematical functions, and the second to a set of Axisuite®-generated light-off curves corresponding to different aftertreatment systems.

Mathematical Functions
Initially, the algorithms were tested using three mathematical functions, shown in Figs. 2, 3, and 4, which were used to test the optimisers' performance for multimodal systems [17]: the Shubert function (Eq. 8 and Fig. 2), the Griewank function (Eq. 9 and Fig. 3) and a two-dimensional multimodal function reported by Cuevas et al. [17] (Eq. 10 and Fig. 4). The Shubert function contains 18 optimum points within a two-dimensional search space. The results were collected over 1000 runs of each algorithm, using a population size of 100 and a maximum of 100 iterations.

The two-dimensional Griewank function (Eq. 9) is described as follows:

f(x1, x2) = (x1² + x2²)/4000 − cos(x1) cos(x2/√2) + 1, with x1, x2 ∈ [−100, 100]

The two-dimensional multimodal function of Cuevas et al. is given by Eq. 10.

The results achieved by the algorithms for each of the mathematical systems are shown in Table 1, which includes the best error, the average error and the standard deviation. The best error shows the smallest deviation from the minimum value in the system, while the average error shows the average deviation across the 1000 runs for each algorithm. Taking into account the relative scales used in each of the three problems, the analysis of the mathematical function optimisation results, in terms of average error, shows that all the algorithms perform well, achieving an accuracy greater than 99.9%. The difference in the best results from each optimiser lies in the ability of each algorithm to converge fully.
As expected, the standard GA algorithm is outperformed by the other two algorithms, which is consistent with what has been reported previously [35]. However, it is important to note that the performance of the optimiser is partly dependent on the optimisation problem. The Hybrid algorithm consistently outperforms the standard GA algorithm, and outperforms the nPSO algorithm on two of the tests. In the Griewank function and the two-dimensional problem, the Hybrid algorithm produces a smaller value for the average and best errors, indicating that the hybrid algorithm can reproducibly achieve a more accurate answer than the nPSO algorithm. These functions are slightly more complex than the Shubert function, which may indicate that the Hybrid model is more appropriate for complex systems. This is due to the Hybrid algorithm's incorporation of both PSO and GA characteristics, enhancing the ability of the Hybrid to search large parts of the design space and converge towards an optimum point.
The analysis of the mathematical systems indicates that the hypotheses underlying the anticipated enhanced performance of the Hybrid algorithm, and which guided its code development, appear validated since it outperformed the standard GA algorithm and the nPSO algorithm for complex tasks. However, all the functions listed have small design spaces when compared against an aftertreatment system model, and can be considered as comparatively simple systems.

Mathematical Aftertreatment System
The second testing scenario for the optimisation algorithms was a set of theoretical aftertreatment systems. This scenario introduces a more complex system while still containing a global optimum and a perfect solution to the problem (i.e. the default set of kinetic parameters used to generate the light-off curves is later used as the objective function). The mathematical models were created using Axisuite®, a commercially available aftertreatment modelling software package. The software offers a Simulink® connection [36], which was used to run the aftertreatment system within an optimisation loop coded in MATLAB®. Axisuite® is able to simulate a number of aftertreatment devices, provides a good degree of flexibility and has been used in the development of previous aftertreatment models [25,37,38]. Using Axisuite® to generate a set of light-off curves removes experimental and modelling errors and facilitates assessment of the optimisers' intrinsic performance in isolation. Figure 5 diagrammatically shows the optimisation process that occurs in the MATLAB® environment. The values for the initial kinetic parameters are chosen randomly within the selected limits. These variables are provided, via a Simulink® interface, to Axisuite®, which subsequently generates a set of results in the form of light-off curves. These results are then compared against the objective function which, in this scenario, is the initial set of light-off curves generated from the base mathematical model (i.e. using the set of selected parameters representing the "ideal" solution). The least squares error technique is used to produce an error value, and each individual is ranked in terms of its performance [25]. The optimisation technique controls the iteration process, altering the kinetic parameters and ranking the new values until an acceptable error or a termination criterion has been reached.
The optimisers provide a user-defined temperature range and a modified objective function to assist in the search process. Both processes have been reported in a previous paper [25] and improve the optimisers' performance. The modified objective function is shown in Eq. 11, in which the P_i terms are user-defined penalty functions that enable the targeting of specific gas species.

The error terms are calculated using an error function derived from the root sum squared error approach. The conversion efficiencies at each time interval are used to standardise the discrete error values. Consequently, the error value lies in the range 0 ≤ e_t ≤ 1 and is calculated using Eq. 12:

e_t = (E_t − Ê_t)² (Eq. 12)

where
e_t is the error value at time t
E_t is the individual's conversion efficiency at time t
Ê_t is the objective function conversion efficiency at time t

Subsequently, the total error function is calculated using the residual sum of squares at each time interval for the required gas species. In Eq. 12, the objective gas species are CO, NO and the total hydrocarbons (THC). An average value of the total error, commonly referred to as the sum of the squares (SoTS) and described in Eq. 13, is used for the subsequent optimisation analysis, thereby enabling comparison of results across multiple scenarios:

e_total = (1 / (N · iT)) Σ_species Σ_t e_t (Eq. 13)

where
e_total is the overall error value for the individual
N is the number of gas species considered
iT is the total number of data points considered

In each of the theoretical test systems, the algorithms operated with 100 starting individuals and were run for 100 iterations, or generations. Each algorithm was allowed to run to completion, and its accuracy was recorded at each iteration. Wide limits were used in the search process, with each pre-exponential term allowed a deviation of three orders of magnitude from the set value, and each activation energy term permitted to deviate by 20% from the set value.
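As one plausible reading of Eqs. 12 and 13, the standardised error and the averaged SoTS value could be computed as follows; the exact normalisation is an assumption.

```python
def sots_error(simulated, objective):
    """Average sum-of-squares (SoTS) error between simulated and objective
    conversion-efficiency curves for N gas species (e.g. CO, NO, THC).
    Efficiencies lie in [0, 1], so each discrete error e_t does too."""
    n_species = len(objective)
    n_points = len(objective[0])
    total = 0.0
    for sim, obj in zip(simulated, objective):
        for e_sim, e_obj in zip(sim, obj):
            total += (e_sim - e_obj) ** 2    # e_t at one time point
    return total / (n_species * n_points)    # averaged over species and time
```

A perfect match returns zero, and because the averaging divides by both the number of species and the number of time points, results remain comparable across scenarios with different durations.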
The rates of each of the chemical reactions considered in this study are affected by the concentrations of other components in the gas mix. For example, the oxidation rate of CO, described in Eqs. 2 and 3, is inhibited by CO (self-inhibition), and also by C₃H₆ and NO (cross-inhibition). The parameters which characterise these inhibition effects are contained in the denominator of the reaction rate equation. The Axisuite® model used in this study allows the user to introduce inhibiting effects for numerous other gas species. The determination of the kinetic coefficients for these effects is normally undertaken as a separate phase of the overall process of determining the kinetic parameters, in which the reaction rate is measured in the presence of varying concentrations of each inhibiting gas component. In this regard, an automated optimisation technique could be used to assist in the determination of the inhibition coefficients. However, for the study presented in this paper, the inhibition coefficients were held at their default values, and only the promoting coefficients were considered. Increasing the number of reactions included in the optimisation process to 12, thereby considering 24 individual kinetic parameters, was felt to provide sufficient challenge to the simultaneous optimisation process and allowed the relative performance of each algorithm to be determined.

DOC Aftertreatment System
The first scenario corresponded to a diesel oxidation catalyst (DOC) with a set of eight reactions. Tables 2 and 3 summarise the conditions used and chemical reactions involved respectively. Compared to the mathematical functions previously discussed, the design space has increased to 16 dimensions and has increased in size by multiple orders of magnitude. The reaction conditions selected were such that the DOC operated within a lean environment in which the availability of oxygen is not a limiting factor.
During implementation of the optimisation algorithms on aftertreatment systems, a test was performed to determine the effect of using a logarithmic search space for the kinetic parameter values. The logarithmic search space improved the search behaviour for these variables and was therefore adopted for both the pre-exponential and activation energy parameters. During the post-optimisation analysis, these values were transformed back from their logarithmic form.
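The logarithmic transformation can be sketched as follows; a minimal illustration assuming base-10 logarithms and strictly positive parameter values (the function names are illustrative).

```python
import math

def to_search_space(pre_exp, act_energy):
    """Map kinetic parameters into the optimiser's search space.
    Pre-exponential factors span several orders of magnitude, so they
    are searched in log10 space; activation energies likewise here."""
    return ([math.log10(p) for p in pre_exp]
            + [math.log10(e) for e in act_energy])

def from_search_space(vec, n_reactions):
    """Inverse transform, applied during post-optimisation analysis."""
    return ([10 ** v for v in vec[:n_reactions]],
            [10 ** v for v in vec[n_reactions:]])
```

Searching in log space means a fixed step of the optimiser corresponds to a fixed multiplicative change of the parameter, which suits pre-exponential terms allowed to deviate by three orders of magnitude.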

DOC Aftertreatment System-Results
The results for the DOC system are reported in Table 4. The optimisation algorithms were able to produce fits with errors of the order of 10⁻⁴ for this initial optimisation scenario. As Figs. 6, 7, and 8 demonstrate, both the Hybrid and nPSO algorithms produce results that allow the model to satisfactorily capture the objective function light-off curves. Also, as shown in Table 5, the best kinetic parameters obtained by both optimisers are similar to the values used to create the objective functions.
In this scenario, the shape of the NO curve is challenging to match, as is the distinctive THC data. Figure 6 and Table 4 demonstrate that the GA algorithm had difficulties matching these complex light-off curves; however, both the nPSO and Hybrid optimisers were able to better fit those curves. There is a minimal difference in accuracy between the nPSO and Hybrid optimisers; however, the Hybrid optimiser required approximately one fifth of the computation time to provide its best solution. Examination of Figs. 7 and 8 and Table 4 demonstrates that the Hybrid optimiser is slightly more accurate on the THC light-off curve, whereas the nPSO algorithm matches the NO peak with slightly greater accuracy. The best input variables reported by both these optimisers (Table 5) are similar to the input objective variables, indicating that the best solutions found are within the area of the global optimum, any difference being attributed to the known convergence issues of these optimisers [12,34].
An analysis of the iteration-dependent evolution of the sum-of-the-squares error highlights the improvement in the search process from the GA to the Hybrid optimiser (Fig. 9).
The trends observed demonstrate that while the Hybrid optimiser quickly converged on an area containing a minimum (by the 4th iteration), it continued to search the design space, as can be seen from the small improvements in later iterations. This performance contrasts with that of the standard GA algorithm, which required a greater number of iterations to converge. Significantly, the GA's search pattern shows a number of step improvements followed by periods of stagnation, indicating that the mutation function dominated its search process. By contrast, the Hybrid optimiser produces a series of smaller improvements in its SoTS error, indicating that it avoids stagnation and searches the design space more completely than the GA. Reproducibility tests performed with different randomised starting sets of parameters showed essentially the same contrasting convergence behaviours between the two optimisers.

Sensitivity Analysis
In order to better understand the variability of the standard deviations obtained for the optimised kinetic parameters (see supplemental information), a sensitivity analysis was performed in which each kinetic parameter was considered in isolation. Pre-exponential terms were, in turn, altered from the value used to generate the objective function by a factor of 10³, while activation energy terms were altered by 20%. When an individual kinetic parameter was altered, the other parameters were held at the objective function values; consequently, any deviation from the objective function was solely due to the altered value. This provided an overview of how the alteration of each individual kinetic parameter affected the overall accuracy of the light-off curves throughout the considered range, and gave an indication of how prominent that reaction was in the overall system. The influence of a parameter on the three light-off curves was quantified by a "percentage deviation" value.
Considering the DOC system, each of the 16 parameters was varied individually and the results are shown in Table 6. Analysis of the parameters highlights that the system is most sensitive to the pre-exponential factor of the first reaction, i.e. the oxidation of CO. Additionally, several parameters show little sensitivity: considering both kinetic parameters for each reaction, Reactions 4, 5 and 7 have low comparative sensitivities. Further investigation of this system may therefore require exploring different reaction conditions under which Reactions 4, 5 and 7 become sensitive. If such conditions cannot be found, or are unrealistic, these reactions could be discounted from future optimisation. The summary of the sensitivity analysis results in Table 6 also highlights that the optimiser is able to match the activation energies with a high degree of accuracy.
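The one-at-a-time perturbation scheme described above can be sketched as below. The toy single-reaction light-off model and the deviation metric are illustrative assumptions; only the perturbation magnitudes (a factor of 10³ on pre-exponentials, 20% on activation energies) follow the text:

```python
import math

def deviation_pct(curve, base):
    """Assumed metric: mean absolute difference between curves, as a
    percentage of full conversion (the paper's exact "percentage
    deviation" formula is not reproduced here)."""
    return 100.0 * sum(abs(c - b) for c, b in zip(curve, base)) / len(base)

def toy_simulate(params, temps=range(400, 700, 25)):
    """Toy single-reaction light-off model used only to exercise the scan."""
    R = 8.314
    out = []
    for T in temps:
        k = params["A1"] * math.exp(-params["EA1"] * 1000.0 / (R * T))
        out.append(k / (1.0 + k))  # crude conversion between 0 and 1
    return out

def sensitivity_scan(objective_params, simulate):
    """Perturb each kinetic parameter in isolation, all others held at
    their objective values, and record the resulting deviation."""
    base = simulate(objective_params)
    results = {}
    for name, value in objective_params.items():
        perturbed = dict(objective_params)  # other parameters stay at objective values
        perturbed[name] = value * (1e3 if name.startswith("A") else 1.2)
        results[name] = deviation_pct(simulate(perturbed), base)
    return results

scan = sensitivity_scan({"A1": 1e3, "EA1": 60.0}, toy_simulate)
```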
Further analysis of the kinetic parameters is shown in Table 7 and Supplemental 1. Table 7 shows that the nPSO and Hybrid algorithms outperform the GA algorithm in both accuracy and repeatability. The table in Supplemental 1 demonstrates that the kinetic parameters used to produce these results are varied, and that the standard deviations are higher for parameters associated with reactions of low sensitivity. Both the nPSO and Hybrid algorithms produce a more accurate fit, with greater reproducibility, than the GA optimiser. It is important to note at this stage that several kinetic parameters have an average value that is closer to the objective value than the value which produced the best solution. This highlights that there are multiple local optima in the system that lie close to the perfect solution: while some individual kinetic parameters may be closer to the objective values, the combined effect of the parameters produced a less accurate solution.

Three-Way Catalyst-Lean Conditions
The second testing scenario used a three-way catalyst under lean conditions. The inlet gas mixture composition is reported in Table 8. Table 9 reports the eight reactions that were considered for optimisation, as defined in AxiSuite®; there is a high degree of commonality between the reactants of these reactions. Throughout, A_n denotes the pre-exponential factor of reaction n, as used in Tables 5 and 6 (m³ mol⁻¹ s⁻¹), and E_An the activation energy of reaction n (kJ mol⁻¹).
Fig. 9 Comparison of SoTS error between the Hybrid and GA optimisers.
Table 10 and Figs. 10, 11, and 12 show the optimiser results for this system.

TWC-Lean Condition Results
The results indicate that, consistent with the previous scenarios, the GA optimiser struggles to match the performance of the other optimisers, although all three optimisers produce results accurate to the order of 10⁻⁴. The Hybrid and nPSO optimisers significantly outperform the standard GA optimiser, achieving accuracies of 6 × 10⁻⁵ and 2 × 10⁻⁵ respectively. Interestingly, for this scenario the nPSO algorithm outperforms the Hybrid optimiser, although both performances are very close. In addition, the optimisers achieved a greater accuracy on the light-off curves than was obtained for the DOC system, which is likely due to the greater complexity of the light-off curves for the TWC system. However, it is important to note that while the best light-off curves produced by the optimisers are close to the objective functions, the kinetic parameters selected by the optimisers differ from those used to generate the objective functions. These differences can be seen in Table 11. The results show that the optimisation algorithms were able to match the activation energies of the considered reactions; however, there were differences in the pre-exponential terms. In particular, the pre-exponential term for the oxidation of CO was underestimated by a factor of almost 2. While this is a large error, it should be noted that the optimisation algorithms used logarithmic values to search the pre-exponential design space. When determining kinetic data using an iterative approach, the perfect values of the activation energies and pre-exponential constants are not known, so the fit between the measured and predicted light-off curves is the only information available to judge the acceptability of the kinetic data being assessed. The accuracy shown in Figs. 11 and 12 would most likely be viewed as excellent, despite the differences in the kinetic terms that make up the curves.
This is likely to be one of the causes of the variation in kinetic data reported in literature.
A sensitivity analysis was carried out on all the kinetic parameters; the results, together with the percentage deviation from the objective function, are reported in Table 12. An analysis of the 16 kinetic parameters considered in this optimisation scenario identified a number of sensitive reactions. Specifically, Reactions 1 and 4 from Table 12 (the oxidation of CO and the reaction between CO and NO, respectively) had a significant effect on the overall accuracy of the light-off curves. Reactions 2 and 5 also contributed significantly to the overall accuracy.
The kinetic parameters corresponding to the best solutions are reported in Table 11. Analysis of these results showed that both the nPSO and Hybrid algorithms were able to accurately match the activation energy values of the objective function. Although the nPSO achieved a more accurate solution, the Hybrid optimiser was able to accurately match more of the objective kinetic parameters. The reason for the improved performance of the nPSO became apparent when the magnitudes of the kinetic parameters were considered as part of the analysis. Table 13 and Supplemental 2 indicate that the nPSO algorithm consistently achieved a greater accuracy for the TWC lean system. However, the table in Supplemental 2 highlights that the Hybrid algorithm achieved a greater accuracy for several kinetic parameters, perfectly matching, for example, the activation energies for Reactions 7 and 8.
It is important to remember that while the kinetic parameters deviated from the objective kinetic parameters, the actual simulated light-off curves achieved were very similar to the objective function. As in the case of the DOC system, this highlights that the solution does not correspond to a single set of parameters but larger areas of the design space, even when considering the sensitive parameters only. This is a common problem and has been identified and discussed by other researchers such as Stewart et al. [2] who detailed ten separate sets of published reaction kinetic values for the simple CO oxidation reaction. The sensitivity analysis confirmed that both the nPSO and Hybrid optimisers were able to closely match the objective function reaction parameters.
Three-Way Catalyst-Rich Conditions
The third system examined was a theoretical three-way catalyst operating under rich conditions. The inlet gas composition details are reported in Table 14, while the reactions considered for optimisation are given in Table 15. This scenario adds complexity through conditions in which oxygen is a limiting reactant, which introduces a greater interdependency between the reactions. Full details of the system, and of the catalyst parameters used, have previously been reported [25].
The normalised sum of the squares error values obtained following the optimisation are reported in Table 16 and the objective functions and simulated light-off curves are compared in Figs. 13, 14, and 15.
The nPSO and Hybrid optimisers achieved accuracies of 5.2 × 10⁻⁴ and 1.3 × 10⁻⁵ respectively. Both optimisers provide parameters that allow satisfactory simulation of the system, generating conversion profiles similar to the objective functions. The GA optimiser produced parameters leading to simulated curves that only partially match the objective function, as highlighted by Fig. 13. The difference between the PSO-based best results and the objective function can be attributed to the known difficulty of this algorithm in converging finely towards an optimum, a finding which has previously been reported [30].
The Hybrid algorithm outperformed the GA and nPSO optimisers, as can easily be assessed visually from the light-off curves. While it still did not fully converge on the objective function, the parameter values generated provide an acceptable degree of accuracy. The sensitivity analysis for this system is summarised in Table 17. The pre-exponential factor for the reaction between CO and O₂ (A_1) had the largest effect on the light-off curves, causing a deviation of 61% when multiplied by a factor of 10³. A number of the kinetic parameters led to a deviation greater than 10%. Table 17 also highlights two reactions that had no perceptible effect on the light-off curves: Reactions 3 and 10 from Table 15, i.e. the oxidation of propane and the reaction between CO₂ and H₂. While these individual parameters had no effect on the light-off curves at the chosen gas concentrations, they might have an effect with alternative gas mixes and/or temperatures. As with a manual tuning process, it is therefore important that the calibration of the kinetic coefficients be undertaken using a suitable range of gas compositions.
An analysis of the reaction kinetic parameters, as reported in Table 18, demonstrated that, of all the algorithms, the Hybrid optimiser is best able to closely match the original values of the objective function parameters. The Hybrid algorithm best matched the activation energy parameters of the objective function, and closely matched its pre-exponential factors, which is impressive considering that the search process used a logarithmic search space. Table 19 shows that the Hybrid algorithm outperformed the other algorithms with regard to the mean error, while Supplemental 3 demonstrates the accuracy of the Hybrid algorithm with regard to the values of the chosen kinetic parameters.

Three-Way Catalyst-Rich Conditions with Oxygen Storage
The final theoretical system considered in this study corresponded to the same theoretical three-way catalyst operating under rich conditions, but with an additional set of oxygen storage reactions, therefore adding another layer of complexity. Table 20 reports the inlet gas composition used in this testing scenario, while Table 21 shows the set of reactions considered. Values for the oxygen storage reaction parameters would commonly be estimated by analysing rapid switches between rich and lean conditions, with the kinetic parameters obtained in that process then used as input to the light-off simulation. In this study, however, the oxygen storage reactions were used to add complexity to the system, and their effect on the performance of the various optimisers was analysed.
The results of the optimisation runs are reported in Table 22 and Figs. 16, 17, and 18. Both the nPSO and Hybrid optimisers achieved similar accuracies, of the order of 1 × 10⁻⁴. The standard GA failed to produce parameters allowing a match of the objective function, leading to a much larger error value of 3.6 × 10⁻³. In this scenario the nPSO and Hybrid optimisers achieved similar performances. It is, however, important to note that the Hybrid optimiser is significantly less computationally intensive than the nPSO algorithm; while the nPSO algorithm produced a similar result, the fivefold increase in computation time required reiterates that the Hybrid optimiser is the more beneficial choice for this type of system.
A sensitivity analysis was undertaken for this optimisation scenario, and the results for each kinetic parameter are reported in Table 23. The combination of kinetic parameters A_9 and E_A9 had the largest effect on the light-off curves. These variables correspond to the oxidation of cerium oxide, which governs the availability of oxygen in the system. A number of other variables had a significant influence on the system, such as the pre-exponential coefficients A_2 and A_4, which correspond to the oxidation of hydrogen and the reaction between CO and NO respectively.
The results from the sensitivity analysis highlighted the difficulty of matching a complex light-off curve with a set of kinetic parameters. When kinetic parameters are simultaneously optimised, there is a danger that some errors in the system will be "tuned out", with other variables changed to compensate. The results in Table 24 show that the Hybrid optimiser closely matched the majority of the kinetic parameters, in particular the activation energies, whereas the nPSO algorithm estimated the correct values for eight of the twelve variables. However, the nPSO algorithm provided a marginal improvement in the SoTS error value, despite matching fewer of the kinetic parameters. Table 25 indicates that the Hybrid algorithm achieves a greater average accuracy than the other two algorithms. Supplemental 4 highlights the accuracy of the estimated Hybrid algorithm kinetic parameters for this system.

Discussion
The various scenarios considered in this research cover a range of aftertreatment systems and reaction conditions, and offer a strenuous test of the optimisers. The results from the theoretical aftertreatment systems show that the Hybrid and nPSO algorithms consistently achieve a high degree of accuracy for each of the systems. The Hybrid algorithm slightly outperforms the nPSO algorithm in three out of the five testing scenarios and has a significantly shorter computation time. It is important to note that there is only a minimal difference between the two optimisers in two of the four aftertreatment tests, specifically the DOC and rich TWC scenarios. This set of testing scenarios provides confidence in the ability of the nPSO and Hybrid optimisers to optimise aftertreatment systems.
However, it must be remembered that the scenarios considered in this paper are theoretical systems that contain a unique, global solution which can be perfectly described using a global kinetic approach. These were created under the assumption that the other areas of a mathematical aftertreatment model, such as convective and conductive heat transfer and mass transfer, effectively model the experimental environment. Therefore, when these optimisers are applied to experimental data, any discrepancy may be caused by experimental error, inaccurate values within the heat and mass transfer models, the kinetics model, or an inability of global kinetics to accurately represent the conditions within the catalyst over the range of conditions explored. The results show that both the nPSO and Hybrid algorithms are suitable optimisers for kinetic parameters within a mathematical aftertreatment model.
This is demonstrated by their ability to provide parameters that give satisfactory matches between the simulated and objective function light-off curves, in addition to their ability to arrive at kinetic values that are close to the objective variables used to create the objective curves.
In terms of computation time, the Hybrid optimiser reaches its solution roughly five times faster than the nPSO algorithm. This is highly advantageous since, as the presented scenarios have shown, the Hybrid optimiser achieves a similar accuracy to the nPSO. The Hybrid algorithm may therefore be run for a greater number of iterations, to generate a more accurate solution, while still requiring a shorter computation time than the nPSO algorithm.
The results also indicate that good performance on simpler systems, such as the mathematical functions, is not representative of performance on more complex systems. This is shown by the standard GA's performance in the scenarios tested: while it achieved comparable accuracy on the simpler mathematical functions, it struggled to obtain an accurate solution for the more complex aftertreatment systems.
It is important to note that, while the presented work was targeted at discriminating the relative performances of different optimisation algorithms, and therefore assumed a situation where the reaction mechanism is known, in most real-life situations the mechanism itself will be in question. In such situations, the difficulty will lie in deciding the relevance, or otherwise, of the various reactions, species, and inhibition terms in the mechanism. In the case of the global mechanism approach, the goal will be to find the simplest mechanism capable of fairly representing the experimental data. Such a mechanism will use the smallest possible number of competing reaction pathways and inhibition terms so as to reduce the number of degrees of freedom. If such an approach is not employed, many parameters will have low sensitivity towards the calculated reaction profiles and will therefore produce large standard deviations. Ultimately, an over-parameterised mechanism will produce large regions of the design space that can adequately fit the reaction profiles, and hence low accuracy for the resulting parameters, which is obviously not desirable. This situation illustrates one of the limitations of the "full reaction network" optimisation approach and highlights the need to constrain the system as much as possible in order to limit the degrees of freedom. This is usually achieved using a combination of approaches aimed at gathering independent information on the relative importance of different reaction pathways involving common molecules. In the present work, this is illustrated by the case of propane, which can be converted through either steam reforming reactions or oxidation with oxygen. There is a risk of the optimiser favouring the former even though the oxidation reactions have been shown to be dominant. This can be disentangled using targeted independent experiments under simpler conditions.
Another aspect to consider is that, while a complete mechanism is necessary to capture the catalyst behaviour under the whole set of conditions experienced in real-life operation, light-off experimental data may cover only a limited set of reaction conditions. This usually means that not all reactions listed in the reaction mechanism will be "active". It is therefore necessary to select a range of conditions that will collectively ensure that all reactions are "activated", and consequently that the set of experimental data used for the parameter optimisation is fully relevant.
In all cases, the results obtained for the optimised kinetic parameters in the present study highlight the difficulty of obtaining perfect matches with the objective function parameters, even though the resulting curves appeared to match. This discrepancy is easy to interpret for the parameters with low sensitivities, since large deviations in their values lead to a limited impact on the shape of the curves. It arises from the presence of either "non-activated" reactions under the conditions used or redundant reaction steps.
Redundant reactions can occur when a complex reaction network is present. In such situations, duplicate reactions are present in the network and identical behaviour can, de facto, be obtained through either reaction. If such duplicates are not identified and removed, the optimiser will, by definition, generate a family of poorly determined optimal parameters for these reactions. For example, in Table 9, the water-gas shift (WGS) reaction, i.e. Reaction 8, is stoichiometrically equivalent to the difference between the CO and H₂ oxidation reactions (Reaction 1 minus Reaction 2). Anything accomplished by Reaction 8 can also be achieved by incrementally increasing Reaction 1 and simultaneously decreasing Reaction 2 by the same amount. In this case, it is possible to study the WGS independently of the oxidation reactions by running tests in the absence of oxygen; however, the oxidation reactions cannot be analysed without interference from the WGS reaction. The difficulties associated with redundant, or nearly redundant, reaction pathways in global reaction kinetics are a recognised issue in the aftertreatment community. This is precisely why the analysis and simulation of kinetics requires a measure of "manual" tuning in order to account for the chemical information obtained during the studies, which is aimed at disentangling complex pathways. In the case of the WGS example, this means performing a priori fits of the WGS reaction in the absence of oxygen, and then holding the fitted parameters fixed during subsequent optimisation under more complex reaction conditions.
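The stoichiometric equivalence noted for the WGS reaction can be checked numerically. Writing each reaction as a vector of stoichiometric coefficients over the species involved (a sketch using textbook coefficients, not values taken from Table 9):

```python
# Species order: [CO, O2, CO2, H2, H2O]; negative = consumed, positive = produced.
R1 = [-1.0, -0.5,  1.0,  0.0,  0.0]   # CO + 1/2 O2 -> CO2
R2 = [ 0.0, -0.5,  0.0, -1.0,  1.0]   # H2 + 1/2 O2 -> H2O
R8 = [-1.0,  0.0,  1.0,  1.0, -1.0]   # CO + H2O -> CO2 + H2 (WGS)

# R8 equals R1 - R2 component by component, so the three reactions are
# linearly dependent: any net conversion achieved via the WGS can be
# reproduced by increasing R1 and decreasing R2 by the same amount.
wgs_is_redundant = all(
    abs(r8 - (r1 - r2)) < 1e-12 for r1, r2, r8 in zip(R1, R2, R8)
)
```

For larger networks, a rank check of the full stoichiometric matrix would expose such dependencies systematically.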
In such cases, and in the absence of manual intervention, a compensation effect between parameters will be at play, highlighting that the possible solutions do not correspond to a single set of parameters but encompass larger areas of the design space. This inter-dependence (i.e. compensation between parameters) is often referred to as "correlation", and its identification is achieved through correlation analyses. Such correlations are linked to the system studied and to the mathematical structure of the model adopted. This is a difficulty but, as illustrated with the WGS, it is usually possible to find conditions that break such correlations. When that is not the case, there is an intrinsic reduction in the achievable accuracy of the correlated parameters. Additional ways to improve confidence in the kinetic variables obtained usually involve conditions that place increased constraints on the model, for example the simultaneous optimisation of parameters for the same catalyst under different environmental conditions. Another possible approach makes use of recent advances in the spatial resolution of catalytic reactors, using spatially resolved profiles rather than end-of-pipe data.
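Such a correlation analysis can be as simple as computing the Pearson coefficient between the best-fit values of two parameters across repeated optimisation runs. The run values below are hypothetical, constructed only to show a compensation effect between a log-scaled pre-exponential and its activation energy:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two parameter columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical best-fit values from five repeated optimisation runs:
# when log10(A) is overestimated, the fitted Ea rises to compensate.
log_A_runs = [13.0, 13.4, 12.8, 13.2, 12.9]
Ea_runs = [90.0, 93.5, 88.3, 91.9, 89.1]
r = pearson(log_A_runs, Ea_runs)
```

A coefficient near ±1 flags a parameter pair whose individual values the optimiser cannot pin down from the data at hand.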