Computing T-optimal designs via nested semi-infinite programming and twofold adaptive discretization

Modeling real processes often results in several suitable models. In order to distinguish, or discriminate, which model best represents a phenomenon, one is interested, e.g., in so-called T-optimal designs. These consist of the (design) points from a generally continuous design space at which the models deviate most from each other, under the condition that they are best fitted to those points. Thus, the T-criterion represents a bi-level optimization problem, which can be transformed into a semi-infinite one, but whose solution is very unstable or time-consuming for non-linear models and non-convex lower- and upper-level problems. If one considers only a finite number of possible design points, a numerically tractable linear semi-infinite program (LSIP) arises. Since this is only an approximation of the original model discrimination problem, we propose an algorithm which alternately and adaptively refines discretizations of the parameter as well as of the design space and thus solves a sequence of LSIPs. We prove convergence of our method and its subroutine and show, on the basis of discrimination tasks from process engineering, that our approach is stable and can outperform the known methods.


Introduction

Motivation
In science, one uses mathematical formulas to describe processes or phenomena. For example, in chemical process engineering there exists a plethora of models that describe the course of a reaction.
Often one can make an educated guess, based on experience or prior knowledge, which model fits an investigated phenomenon. Still, it might happen that there are several plausible models. In order to decide which of these models fits the true underlying phenomenon best, we want to conduct further experiments. Then, based on these additional experiments, we can hopefully clearly distinguish, i.e. discriminate, the suggested models and pick the one that suits best. However, such experiments can be very time-consuming and costly. For example, a large batch reactor needs time and energy until it reaches the right temperature and pressure to conduct an experiment. Therefore, if we carry out costly experiments, we want to do as few experiments as possible. Thus, every chosen experiment should carry as much information as attainable.
The task of determining such experiments is generally called design of experiments (DoE). In this article we focus on model discrimination (MD), which is a sub-problem in the family of DoE problems: Given two plausible models f1 and f2, as well as perhaps some existing data D_N, we want to find settings for new experiments that give insight into whether f1 or f2 better fits the real phenomenon.

Literature
In order to evaluate whether an experiment is worthwhile or not, several criteria have been introduced. Consequently, optimal settings for new experiments can be determined by optimizing these criteria.
For model discrimination specifically, the first recorded criterion was presented by Hunter and Reiner [16]. They proposed fitting the two competing response models f1 and f2 to some given data D_N, and then performing the experiment where the models differ the most after the fit.
One issue with the Hunter-Reiner criterion is its implicit assumption that the measurement errors for both models have constant variance. To circumvent this problem, Box and Hill [5] proposed a Bayesian sequential discrimination method based on the Kullback-Leibler divergence (short: KL-divergence, see, e.g., [3]). Additionally, they formulated their method in a way such that several competing models can be discriminated.
Then in 1975, Atkinson and Fedorov [1] introduced the T-optimality criterion, or T-criterion for short. The idea for this criterion is to fix one of the models, while fitting the other model as well as possible to the first one by a weighted least-squares sum. Afterwards, one selects the points with the largest difference between the models as the proposed experiments.
In the same year, Atkinson and Fedorov also introduced an extension for discriminating several models at once [2].
Theoretic properties of the T-criterion have been investigated in further publications. A good first overview of the most important characteristics can be found in [14]. A comprehensive analysis is included in the dissertation by Kuczewski [17]. Additionally, [11] states the duality of the T-criterion and the Chebyshev approximation problem, which was later used in [6] to further characterize T-optimal solutions.
An extension of the T-criterion for discriminating dynamic systems that depend on end times and initial values has been introduced in [24]. Recently, a further generalization of the T-criterion for semi-parametric models has been presented by [8].
An important advancement of the T-criterion was introduced by López-Fidalgo et al. [19]: the KL-optimality criterion. In fact, the idea for the KL-criterion is the same as for the T-criterion, but in order to fit one model to the other they use the Kullback-Leibler divergence instead of a sum of least squares. The advantage of this criterion is that we can deal with models that assume non-Gaussian measurement errors, which the T-criterion cannot do out of the box.
In order to compute optimal solutions for the T-criterion, called T-optimal designs, Atkinson and Fedorov [1] used an algorithm that is based on the Vector Direction Method [14,26]. But since then, many more strategies have been successfully applied. Kuczewski [17] and Duarte et al. [13] both formulated the T-criterion as a semi-infinite program (SIP) and solved it using the general Blankenship & Falk algorithm. Braess and Dette [6] used results from approximation theory to formulate an algorithm which alternately finds new points by solving the dual Chebyshev approximation problem and calculates new weights for a new solution of the primal T-criterion. Later, Dette et al. [9] simplified this approach by finding several new design points using the directional derivative from the Vector Direction Method, and computing the corresponding new weights for the current selection of design points with the help of a linearization. A heuristic approach based on particle swarm optimization has been successfully applied by Chen et al. [7].
In the special case where we discriminate two polynomial models, Yue et al. [27] presented an approach based on moment relaxation and semi-definite programming to find T-optimal solutions. In the even more special case where the degrees of the two competing polynomial models differ by exactly two, Dette et al. [10] were even able to give an analytic approach for calculating T-optimal designs. Additional investigation of discriminating polynomial or rational models has been done by [15].

Outline
In this paper, we present a new algorithm for computing T-optimal designs based on two nested adaptive discretization schemes. At first, we introduce some preliminaries in Section 2. This includes basic notations from design theory, and selected insights on the T-criterion. Then, we introduce our new algorithm in Section 3. This section is separated into two parts, where the first considers the inner discretization scheme, while the second considers the outer one. We also show in Section 4 that our algorithm converges to arbitrarily good approximations of T-optimal designs under realistic assumptions. Finally, in Section 5, we compare our algorithm against three other state-of-the-art methods on two examples from chemical process engineering.

Preliminaries
We start by briefly introducing some basic notations from design theory and presenting the main characteristics for T-optimal experimental designs.

Fundamentals of Design Theory
A model function in the following context is of the form f : X × Θ → Y. It maps inputs or design points x to their respective outputs or responses y and can be tweaked by parameters θ.
We call X the design space, Y the output space, and Θ the parameter space. These sets are assumed to be subsets of the multi-dimensional real numbers, i.e. Θ ⊆ R^{d_θ}, X ⊆ R^{d_x}, and Y ⊆ R^{d_y}. Additionally, if the number of outputs is one, i.e. d_y = 1, we call f a single-response model, whereas if it has more than one output, i.e. d_y > 1, then we call f a multi-response model.
An experiment in this context coincides with a design point x ∈ X. This point gives the settings for the corresponding experiment, like temperature, pressure, initial concentrations, etc.
As introduced in [14], we model an experimental design plan as a probability measure. In case of a discrete measure, we write

ξ = {(x_1, w_1), . . ., (x_n, w_n)}   with   w_i ≥ 0 and Σ_{i=1}^n w_i = 1.

Here, the x_i ∈ X are the suggested experiments, and the weights w_i represent the relative number of experiments being proposed for their respective design points. The set of experimental designs, which coincides with the set of probability measures on X, is denoted by Ξ, or Ξ(X).
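As a minimal sketch, such a discrete design can be represented by its support points and weights; the Design helper below is hypothetical and not part of any implementation referenced in this paper:

```python
import numpy as np

# A discrete design xi = {(x_1, w_1), ..., (x_n, w_n)}: support points
# together with nonnegative weights that sum to one.
class Design:
    def __init__(self, points, weights):
        self.points = np.atleast_2d(np.asarray(points, dtype=float))
        self.weights = np.asarray(weights, dtype=float)
        if np.any(self.weights < 0) or not np.isclose(self.weights.sum(), 1.0):
            raise ValueError("weights must form a probability vector")

    def integrate(self, g):
        # Integral of g w.r.t. the discrete measure: sum_i w_i * g(x_i).
        return sum(w * g(x) for x, w in zip(self.points, self.weights))

xi = Design([[0.5], [2.0], [5.0]], [0.25, 0.5, 0.25])
first_moment = xi.integrate(lambda x: x[0])
```

Integrals against the design, such as the weighted least-squares sum below, then reduce to finite weighted sums over the support points.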

The T-criterion
We now rigorously introduce the so-called T-optimality criterion, which aims to find several design points with maximum squared difference between two competing models f1 and f2.
Definition 1 (T-Optimality Criterion). Assume the first model f1 to be true, i.e. having fixed parameters θ1. Write f1(x) = f1(x, θ1). The alternative model f2 stays parametrized with parameters θ2 ∈ Θ2. Then, the T-optimality criterion, or T-criterion for short, for model discrimination is defined as

T(ξ) := min_{θ2 ∈ Θ2} ∫_X (f1(x) − f2(x, θ2))^2 ξ(dx).

One benefit of the T-criterion is its relative independence from data. In fact, if we set the true parameters θ1 of the first model based on prior assumptions, then we can even compute a T-optimal design without any data at all. Nevertheless, if data D_N = {(x_i, y_i) | i = 1, . . ., N} is available, it is still best practice to use this data in order to estimate θ1, e.g. by the method of least squares.
To work with the T-criterion, we introduce the following notations.
Notation 2. We define the squared model difference as

ϕ(x, θ2) := (f1(x) − f2(x, θ2))^2.

We further define the function

T(ξ, θ2) := ∫_X ϕ(x, θ2) ξ(dx)

and the optimal parameter with respect to a given design ξ as

θ̂2 := θ̂2(ξ) := arg min_{θ2 ∈ Θ2} T(ξ, θ2).

With these notations, we may also write the T-criterion as T(ξ) = T(ξ, θ̂2).
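In code, evaluating T(ξ) for a discrete design amounts to a weighted least-squares fit of f2 to f1. The following sketch uses toy models, an arbitrary starting point, and scipy.optimize.minimize; all of these are illustrative assumptions, not the setup of this paper:

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):                        # reference model with fixed parameters
    return np.sin(x)

def f2(x, theta):                 # rival model, theta = (slope, offset)
    return theta[0] * x + theta[1]

def T_criterion(points, weights, theta0):
    # T(xi, theta) = sum_i w_i * phi(x_i, theta) for the discrete design
    def weighted_ls(theta):
        return np.sum(weights * (f1(points) - f2(points, theta)) ** 2)
    res = minimize(weighted_ls, theta0)   # theta_hat(xi) = arg min ...
    return res.fun, res.x                 # T(xi) and the optimal parameters

pts = np.array([0.0, 1.0, 2.0])
T_val, theta_hat = T_criterion(pts, np.full(3, 1 / 3), np.zeros(2))
```

Note that a local solver only sketches the idea; the paper's examples use multi-start strategies to approach global optimality of the fit.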
One issue is that the optimal parameters θ̂2 are not necessarily unique. This cannot be avoided in general, e.g. for designs with a single support point. Nevertheless, we are mostly interested in designs that contain several design points. And for those, in practice, one would assume that they have in fact a unique solution to the corresponding least-squares problem. For example, linear models have a unique solution to the least-squares problem if the number of support points equals the number of parameters. This justifies the notion of regular designs.
Definition 3 (Regular Design). A design ξ is called regular if the corresponding weighted least-squares problem min_{θ2 ∈ Θ2} T(ξ, θ2) has a unique minimizer θ̂2(ξ).
We further use the following assumptions on the model functions such that we can guarantee the existence of T-optimal designs and other nice properties like concavity (see e.g. [14]).
Assumption 4. We assume the following properties on a single-response model f:
(M1) the design space X is compact;
(M2) the parameter space Θ2 is compact;
(M3) f is continuous on X × Θ.
Another important concept when optimizing the T-criterion are directional derivatives. But beforehand, we need the following notation for designs with a single support point.
Notation 5. We denote the probability measure with full weight on one design point x ∈ X as ξ_x, i.e. ξ_x := {(x, 1)}.
Proposition 6 (Directional Derivative of the T-Criterion). The right-hand directional derivative of the T-criterion at a regular design ξ in direction ξ̄ − ξ exists and is given by

∂T(ξ; ξ̄ − ξ) = ∫_X ϕ(x, θ̂2(ξ)) ξ̄(dx) − T(ξ).

In particular, the (right-hand) directional derivative in direction ξ_x − ξ for any design point x ∈ X is given by

ψ(x, ξ) := ϕ(x, θ̂2(ξ)) − T(ξ).

The existence of the directional derivative is derived in, e.g., [17].
We are now able to state the Equivalence Theorem for the T-criterion, which gives a necessary and sufficient criterion for T-optimal solutions.
Theorem 7 (Equivalence Theorem for the T-Criterion). The following statements hold:
1. A design ξ* is T-optimal if and only if there exists a measure ζ on the parameters such that

∫ ϕ(x, θ2) ζ(dθ2) ≤ T(ξ*)   for all x ∈ X,     (2)

where Θ̂2(ξ) := arg min_{θ2 ∈ Θ2} ∫_X ϕ(x, θ2) ξ(dx) is the set of optimal parameters w.r.t. ξ, and ζ is a probability measure on Θ̂2(ξ*), i.e. ζ(Θ̂2(ξ*)) = 1.
2. The inequality (2) holds with equality for all support points of a T-optimal design ξ * .
3. The set of optimal designs is convex.
We end this section with a helpful result concerning the existence of T-optimal solutions with finitely many support points.

Proposition 8 (Existence of T-optimal solutions with finite support).
There always exists a T-optimal design ξ* with at most dim(Θ2) + 1 support points.
Proof. The statement can be shown by applying the characterization theorem for best uniform approximation, see [6].

The 2-ADAPT-MD Algorithm
In the previous section we have seen that optimizing the T-criterion can be formulated as a maximin problem of the form

max_{ξ ∈ Ξ(X)} min_{θ2 ∈ Θ2} ∫_X ϕ(x, θ2) ξ(dx),     (3)

where ξ is a discrete probability measure with dim(Θ2) + 1 support points. Thus, it can also be formulated as a semi-infinite problem and solved by the Blankenship & Falk algorithm (see Appendix A). Similar formulations were also used by Kuczewski [17] and Duarte et al. [13]. However, the upper-level problem, which has to be solved in every iteration, is

max_{x_1, . . ., x_N, w, t} t   s.t.   t ≤ Σ_{i=1}^N w_i ϕ(x_i, θ2) for all θ2 ∈ Θ̂2,   Σ_{i=1}^N w_i = 1,   w ≥ 0,     (4)

where N = dim(Θ2) + 1 is the maximum number of required support points. Not only does this problem contain many variables, (d_x + 1) · N + 1 to be exact, but it is also often highly non-linear in the design points, which makes it hard to solve. Therefore, we present a new algorithm for computing T-optimal designs, the 2-ADAPT-MD. Although we do not eradicate the problems with the SIP approach completely, we mitigate the issues by exploiting the linearity in the weights, and optimizing with respect to the design points as infrequently as possible.
The idea for the 2-ADAPT-MD algorithm is the same as Algorithm 3.2 by Dette et al. [9]: We use a sub-routine to compute T-optimal designs on a finite subset of design points, then we augment this subset by a new design point to improve the solution with regard to the entire set of design points.
But unlike the approach by Dette et al., our new method does not rely on a linearization. This is advantageous for cases where the linearization is not a sufficiently good approximation. Also, our algorithm is able to deal with multi-response models, whereas the linearization formulated by Dette et al. is limited to single-response models.

The Grid-Based DISC-MD Sub-Routine
At first we consider the sub-problem of finding T-optimal designs on a finite set of design points X_N = {x_1, . . ., x_N} ⊂ X. This is:

max_{w} min_{θ2 ∈ Θ2} Σ_{i=1}^N w_i ϕ(x_i, θ2)   s.t.   Σ_{i=1}^N w_i = 1,   w ≥ 0.

We again reformulate this maximin problem as a SIP. But this time we get a linear SIP:

max_{w, t} t   s.t.   t ≤ Σ_{i=1}^N w_i ϕ(x_i, θ2) for all θ2 ∈ Θ2,   Σ_{i=1}^N w_i = 1,   w ≥ 0.     (5)

Notice that in this formulation, we only optimize with respect to the weights (and t) since the former inner optimization problem is replaced by infinitely many constraints.
To solve (5), we may use the Blankenship & Falk algorithm. So we approximate the SIP by considering finitely many constraints, i.e. we consider a finite discretization Θ̂2 ⊂ Θ2 of the parameter set. This yields the upper-level problem

max_{w, t} t   s.t.   t ≤ Σ_{i=1}^N w_i ϕ(x_i, θ2) for all θ2 ∈ Θ̂2,   Σ_{i=1}^N w_i = 1,   w ≥ 0,     (6)

which is a linear program. The corresponding lower-level problem aims to find new parameters for the discretization Θ̂2. This is the weighted least-squares problem

min_{θ2 ∈ Θ2} Σ_{i=1}^N w_i ϕ(x_i, θ2),     (7)

where the weights are given by the solution of the upper-level problem (6). The complete approach is presented in Algorithm 1, which we denote by DISC-MD. The name stems from the fact that we assume a discrete (and finite) set X_N for the set of all design points.

Algorithm 1 Grid-Based DISC-MD Algorithm
Require: finitely many design points X_N; initial discretization Θ̂2^(0); s := 0
1: repeat
2:   obtain w^(s+1) by solving (6) given discretization Θ̂2^(s)
3:   obtain θ2^(s+1) by solving (7) given weights w^(s+1)
4:   Θ̂2^(s+1) := Θ̂2^(s) ∪ {θ2^(s+1)}; s := s + 1   (augment parameter discretization)
5: until convergence criterion is met
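In a sketch, one discretized upper-level step of (6) is a small linear program. The snippet below is an illustration only (it uses scipy.optimize.linprog instead of the solvers employed in this paper) and assumes a toy matrix Phi[j, i] = ϕ(x_i, θ_j) of squared model differences:

```python
import numpy as np
from scipy.optimize import linprog

# Solve  max_{w, t} t  s.t.  t <= sum_i w_i * Phi[j, i] for all j,
#                            sum_i w_i = 1,  w >= 0.
def disc_md_lp(Phi):
    m, n = Phi.shape
    # Variables z = (w_1, ..., w_n, t); linprog minimizes, so c = (0,...,0,-1).
    c = np.concatenate([np.zeros(n), [-1.0]])
    # Row j encodes  t - Phi[j] @ w <= 0  for the j-th discretized parameter.
    A_ub = np.hstack([-Phi, np.ones((m, 1))])
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n + [(None, None)]   # weights in [0,1], t free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]              # optimal weights w and value t

# Toy instance: two candidate parameters (rows), three design points (columns).
Phi = np.array([[1.0, 0.0, 2.0],
                [0.0, 3.0, 1.0]])
w, t = disc_md_lp(Phi)
```

For this toy instance the optimal weights equalize the two worst-case rows, which is exactly the maximin behavior the LP relaxation is meant to capture.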

The 2-ADAPT-MD Algorithm
Now we want to find T-optimal designs on a continuous set of design points X. So we want to solve

max_{ξ ∈ Ξ(X)} min_{θ2 ∈ Θ2} ∫_X ϕ(x, θ2) ξ(dx).

To do so, we approximate the design set X by a set of finitely many design points X_N and compute an optimal solution ξ*_N on this finite set. This is done by the DISC-MD method, which yields ξ*_N in the limit. Obviously T(ξ*_N) ≤ T(ξ*), but for good choices of the design points in X_N we also get a good approximate solution ξ*_N. In fact, recall that there always exists an optimal design ξ* with at most dim(Θ2) + 1 support points (Proposition 8). Finding these few design points such that they are included in the finite set X_N is sufficient for T(ξ*_N) = T(ξ*). The question is now: How do we find new design points to get better designs? Like with the Vector Direction Method and the algorithm by Dette et al. [9], we take the design point that maximizes the directional derivative from Proposition 6, i.e.

x_new ∈ arg max_{x ∈ X} ϕ(x, θ̂2(ξ*_N)).     (10)
This point is of interest since it induces a direction of ascent. It is also the point with the highest potential to increase the objective T(•), as max_{x ∈ X} ϕ(x, θ̂2) is an upper bound for the optimal objective value T(ξ*). Altogether, we end up with Algorithm 2, which is called 2-ADAPT-MD. The name follows from the idea that we consider two discretizations which we adaptively improve to compute T-optimal designs for model discrimination. On the one hand, we augment the discretization Θ̂2 of the parameter set to get better approximate designs ξ̂_N for the T-optimal design ξ*_N on X_N. On the other, we determine new points to update X_N so that ξ*_N becomes a better approximation to the optimal design ξ* on X.

Algorithm 2 2-ADAPT-MD Algorithm for the T-Criterion
Require: finitely many design points X^(0); initial parameter discretization Θ̂2^(0); n := 0
1: repeat
2:   obtain ξ^(n+1) by running the DISC-MD sub-routine (Algorithm 1) on X^(n)
3:   obtain θ̂2^(n+1) by solving the LS problem (7) given design ξ^(n+1)
4:   obtain x^(n+1) by maximizing the squared distance (10) given parameters θ̂2^(n+1)
5:   X^(n+1) := X^(n) ∪ {x^(n+1)}   (augment candidate design points)
6:   n := n + 1   (iterate)
7: until convergence criterion is met

Remark 9 (2-ADAPT-MD for Multi-Response Models). With the 2-ADAPT-MD algorithm, we can also compute T-optimal designs for multi-response models f : X × Θ → Y with Y ⊆ R^{d_y} and d_y > 1. This is achieved by changing the squared distance ϕ to

ϕ(x, θ2) := ‖f1(x) − f2(x, θ2)‖_2^2 = Σ_{k=1}^{d_y} (f1,k(x) − f2,k(x, θ2))^2.
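The outer adaptation step, i.e. maximizing the squared distance (10), can be sketched as a search over a candidate grid. The model ϕ, the grid, and the parameter value below are toy stand-ins, not the paper's examples:

```python
import numpy as np

def phi(x, theta):
    # Squared difference of two toy models: f1(x) = sin(x), f2(x) = theta*x.
    return (np.sin(x) - theta * x) ** 2

def augment_design_grid(X_N, candidate_grid, theta_hat):
    # Pick the candidate with the largest squared model difference, i.e.
    # the maximizer of the directional derivative up to the constant T(xi).
    vals = phi(candidate_grid, theta_hat)
    x_new = candidate_grid[np.argmax(vals)]
    if not np.any(np.isclose(X_N, x_new)):   # only add genuinely new points
        X_N = np.append(X_N, x_new)
    return X_N, x_new

grid = np.linspace(0.0, np.pi, 101)
X_N, x_new = augment_design_grid(np.array([0.0, 1.0]), grid, theta_hat=0.0)
```

For theta_hat = 0 the squared difference reduces to sin(x)^2, so the grid search selects the point closest to π/2 and appends it to the candidate set.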

Convergence
We now show that we can find arbitrarily good approximations to T-optimal designs with the 2-ADAPT-MD algorithm.

Convergence of the DISC-MD Procedure
Our initial focus is on the DISC-MD sub-routine. We show that we find arbitrarily good approximations of the T-optimal design ξ*_N on the finite design space X_N by again using results from semi-infinite programming.
Theorem 10 (Convergence of the DISC-MD Procedure). Let Θ̂2^(0) be a finite discretization of Θ2. Assume that the design ξ^(s) is regular in each iteration, i.e. we find unique minimizers in the least-squares problem in the lower level. In particular, the initial design ξ^(0) is regular. Then, every accumulation point of the sequence of solutions (ξ^(s))_{s∈N} of Algorithm 1 is a T-optimal solution on X_N.
Proof. We prove the statement by showing that we satisfy the conditions of Theorem 20 (see Appendix A). If its requirements are satisfied, every accumulation point of the sequence of solutions is an optimal solution of the semi-infinite problem (5). Hence, it is a T-optimal solution on X_N.
So, we have to show that the feasible region F(Θ̂2^(0)) of the initial iteration is compact. Since F(Θ̂2^(0)) can be defined by finitely many linear inequalities, it is a closed convex polytope. It remains to show that F(Θ̂2^(0)) is bounded. The constraints imply w ∈ [0, 1]^N and t ≤ max_{θ2 ∈ Θ̂2^(0)} Σ_{i=1}^N w_i ϕ(x_i, θ2) < ∞. W.l.o.g., we can further assume t ≥ 0, since we maximize with respect to t and all upper bounds of t are positive. Altogether, F(Θ̂2^(0)) is bounded and closed in R^{N+1}, i.e. compact.

Convergence of the 2-ADAPT-MD Algorithm
We have already highlighted that the 2-ADAPT-MD algorithm is reminiscent of the algorithm by Dette et al. [9]. In fact, if the DISC-MD routine returned an optimal design on X_N, then Theorem 3.3 from their paper would prove convergence. But for this sub-routine, we only have convergence in the limit. Therefore, in finite time, we are only guaranteed that ξ̂_N is an arbitrarily good approximation to ξ*_N. Hence, we introduce the notion of ε-T-optimal designs.

Definition 11 (ε-T-Optimal Design). Let ε > 0. A regular design ξ is called ε-T-optimal if

ϕ(x, θ̂2(ξ)) − T(ξ) ≤ ε   for all x ∈ X.

Definition 11 means that the directional derivative ψ(x, ξ) = ϕ(x, θ̂2(ξ)) − T(ξ) is small. We also conclude that if we have an ε-T-optimal solution ξ, then the objective value T(ξ) of this design is close to the optimal objective value. This justifies our definition for approximate solutions.
Corollary 12. If ξ is an ε-T-optimal design and ξ* is an actual T-optimal design on the same design and parameter spaces, then

T(ξ*) − ε ≤ T(ξ) ≤ T(ξ*).

Proof. By the duality of the T-criterion and the Chebyshev approximation problem (see [11]), we have

max_{x ∈ X} ϕ(x, θ̂2(ξ)) ≥ T(ξ*).

Therefore, T(ξ*) ≤ max_{x ∈ X} ϕ(x, θ̂2(ξ)) ≤ T(ξ) + ε.
Additionally, we will need the following lemma.
Lemma 13. Let Assumptions (M1)-(M3) hold. Let (ξn)_{n∈N} be a sequence of designs such that the objective values converge, i.e. T(ξn) → T* as n → ∞ for some T* ∈ R. Then, there exists a design ξ* ∈ Ξ(X) with T(ξ*) = T* to which a subsequence of (ξn)_{n∈N} converges weakly.
Proof. The sequence of solutions (ξn)_{n∈N} is obviously tight because the solutions are probability measures defined on the compact set X. But then, by Prokhorov's Theorem (see e.g. [20]), there exists a subsequence (ξ_{n_k})_{k∈N} which weakly converges to a probability measure ξ* defined on X. And by the weak continuity of the T-criterion [17], we have T(ξ_{n_k}) → T(ξ*) as k → ∞. But since the objectives (T(ξn))_{n∈N} converge to T* by assumption, we also have T(ξ_{n_k}) → T*. So by the uniqueness of limits, we have T(ξ*) = T*.
We now show that the new algorithm finds arbitrarily good approximations of the T-optimal design ξ* as long as the DISC-MD sub-routine returns a sufficiently good approximation for the optimal design on a finite subset X_N.
But before we can prove the statement, we have to introduce a bound for the second directional derivative. Recall from Proposition 6 the directional derivative ψ(x, ξ) in direction ξ_x − ξ for a design ξ. Along this direction, we have

T((1 − α)ξ + α ξ_x) ≥ T(ξ) + α ψ(x, ξ) − α^2 K   for all α ∈ [0, 1]

for some K ∈ R which depends on the given models. This K can be understood as a bound for the second directional derivative, but it also plays a role in how close we can get to a T-optimal solution.
Theorem 14 (Convergence of SIP-Based Algorithm for T-Criterion). Let ε > 0. Assume in each iteration n that DISC-MD returns a regular ε-T-optimal design ξ^(n) on the finite design space X^(n) ⊂ X. Then, a subsequence of intermediate solutions (ξ^(n_j))_{j∈N} of Algorithm 2 converges weakly to a 2√(Kε)-T-optimal design ξ̄ on X, i.e.

ψ(x, ξ̄) ≤ 2√(Kε)   for all x ∈ X.
Proof. Let ξ̂_N be the solution found by the DISC-MD sub-routine for the finite design space X_N = {x_1, . . ., x_N} as input, and let ξ*_N be an actual T-optimal design on X_N. Since the sub-routine stops with an ε-T-optimal solution on X_N, we have by Corollary 12 that

T(ξ*_N) − ε ≤ T(ξ̂_N) ≤ T(ξ*_N).     (12)

We divide this proof into two parts. First, we show that a subsequence of the solutions (ξ̂_N)_{N∈N} converges weakly to a design ξ̄. Then, we prove that ξ̄ is a 2√(Kε)-T-optimal solution.
We start with the sequence of optimal objective values (T(ξ*_N))_{N∈N}. It is monotone increasing because we may only add points to the finite design space X_N in each iteration, thus increasing the feasible region Ξ(X_N) for the designs. Also, (T(ξ*_N))_{N∈N} is bounded from above by T(ξ*), which is the objective value of the T-optimal design on the whole set X. Therefore this sequence converges to an element T** ∈ R, i.e.

lim_{N→∞} T(ξ*_N) = T**.     (13)

Next, we show that the sequence of ε-T-optimal solutions (ξ̂_N)_{N∈N} has a weakly convergent subsequence. Recall (12), which states

T(ξ*_N) − ε ≤ T(ξ̂_N) ≤ T(ξ*_N).     (14)

Combining (13) and (14), we have that for a fixed δ > 0, there exists a sufficiently large N_0 ∈ N such that for all N > N_0:

T** − ε − δ ≤ T(ξ̂_N) ≤ T**.

As a consequence, the sequence (T(ξ̂_N))_{N∈N} is bounded within a compact subset of R for N > N_0. So there exists a convergent subsequence (T(ξ̂_{N_j}))_{j∈N} with lim_{j→∞} T(ξ̂_{N_j}) = T̄ for some T̄ ∈ R. And by the same arguments as in Lemma 13, we take another subsequence which weakly converges to a design ξ̄ with T(ξ̄) = T̄. For the sake of convenience, we only denote the final subsequence by (ξ̂_s)_{s∈N}.
We finally show that ξ̄ is a 2√(Kε)-T-optimal solution. Assume it is not, i.e. there exists a point x̄ ∈ X with ψ(x̄, ξ̄) > 2√(Kε). Due to the continuity of the T-criterion w.r.t. weakly convergent designs, the weak outer semicontinuity of the optimal parameters (see both in [17]), and the obvious continuity of ϕ, we have that for any ε̃ > 0, there is a sufficiently large s_0 ∈ N such that for all s ≥ s_0:

|ψ(x̄, ξ̂_s) − ψ(x̄, ξ̄)| ≤ ε̃.

Choosing ε̃ < ψ(x̄, ξ̄) − 2√(Kε), we obtain ψ(x̄, ξ̂_s) > 2√(Kε) for all s ≥ s_0. Since the point selected in iteration s maximizes ϕ(·, θ̂2(ξ̂_s)), its directional derivative is at least as large, and by the bound involving K on the second directional derivative, a step of suitable length towards the corresponding one-point design improves the T-value by more than ψ(x̄, ξ̂_s)^2/(4K) > ε. Together with (12), the values T(ξ̂_s) would then increase by a fixed positive amount in every iteration s ≥ s_0, contradicting their convergence to T̄. Hence, ξ̄ is a 2√(Kε)-T-optimal solution.

Numerical Results
We tested our new algorithm on two examples from chemical process engineering. Additionally, we compared the performance against three other algorithms from the literature, namely the original Vector Direction Method used by Atkinson and Fedorov, the algorithm by Dette et al. [9], and the original SIP formulation which solves the maximin problem (3) by the Blankenship & Falk algorithm. All algorithms have been implemented in Python [25].

Implementation Details
In the following, we describe how we numerically solve the sub-problems that make up each algorithm.
Computing new parameters. We solve the least-squares problem (7) with the help of either the dogbox method of SciPy's least-squares solver or the KNITRO solver, depending on the application. Also, in order to find a globally optimal solution, we do a multi-start where one starting point is the solution from the previous iteration, and all other starting points are determined by a Sobol sequence [22]. However, we might run into cases where we have very few support points with positive weight. In a degenerate case it might happen that we have full weight on a single design point in an iteration, i.e. we get an intermediate solution with ξ = {(x_1, 1), (x_2, 0), . . ., (x_N, 0)}. This can happen due to numerical instabilities. In these cases, the least-squares problem does not have a unique optimal solution and may yield poor fits. Thus, we use a regularization by adding a small weight to all currently selected design points x_1, . . ., x_N. This yields the regularized least-squares problem

min_{θ2 ∈ Θ2} Σ_{i=1}^N (w_i + λ) ϕ(x_i, θ2)     (18)

with some small least-squares regularizer λ > 0, e.g. λ = 10^−8. This problem often has a unique optimal solution, and it is still very close to the original problem. The idea to consider all design points x_1, . . ., x_N with an additional small weight can also be justified by the fact that the Chebyshev approximation problem is the dual of the T-criterion. So for the corresponding parameters of a T-optimal design, we require that the squared distances are low everywhere.
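A sketch of this multi-start regularized fit is given below; the model, bounds, and data are made up, and SciPy's quasi-Monte Carlo Sobol generator stands in for the Sobol sequence of [22]:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def fit_parameters(points, weights, f1, f2, bounds, theta_prev, lam=1e-8):
    # Regularized objective: every candidate point gets a small extra weight
    # lam so the fit stays well posed even for degenerate weight vectors.
    def obj(theta):
        return np.sum((weights + lam) * (f1(points) - f2(points, theta)) ** 2)
    # Multi-start: the previous optimum plus a Sobol sample over the box.
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    sobol = qmc.Sobol(d=len(bounds), seed=0).random_base2(m=3)   # 8 points
    starts = [theta_prev] + list(qmc.scale(sobol, lo, hi))
    best = min((minimize(obj, s, bounds=bounds) for s in starts),
               key=lambda r: r.fun)
    return best.x, best.fun

f1 = np.sin
f2 = lambda x, th: th[0] * x
pts = np.array([0.5, 1.0, 2.0])
w = np.array([1.0, 0.0, 0.0])     # degenerate: full weight on one point
theta, val = fit_parameters(pts, w, f1, f2, [(-2.0, 2.0)], np.array([0.0]))
```

Even with all weight on the first point, the regularizer keeps the other residuals in the objective with a negligible but nonzero contribution, mirroring the dual Chebyshev motivation above.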
Computing new design points. The sub-problem (10) of finding a new design point is in general non-convex and is solved using BARON, or by enumeration if the design space is discrete.
Updating the design. Computing a new design by the linear program (6) is done with the MOSEK solver via the pyomo interface. Additionally, for the other algorithms, we solve the quadratic program in Dette's algorithm with the KNITRO solver, and again use BARON for the simultaneous optimization of the weights and design points in the original SIP approach.
Stopping rule. We use a stopping criterion based on the Equivalence Theorem 7. For regular designs, it simplifies to

max_{x ∈ X} ϕ(x, θ̂2(ξ)) − T(ξ) ≤ ε,     (19)

where ε is a pre-determined tolerance. However, if the estimated parameters θ̂2(ξ) are not exact, it might happen that this criterion is satisfied too early. Specifically, poorly estimated parameters might cause that the squared distances at the support points of the design ξ are not all equal, which is also required by the Equivalence Theorem 7. But then the maximal support point might still satisfy (19), whereas the others do not. Therefore, we propose a small adaption:

max_{x ∈ X} ϕ(x, θ̂2(ξ)) − min_{x_i ∈ supp(ξ)} ϕ(x_i, θ̂2(ξ)) ≤ ε.     (20)

If (20) is satisfied, then so is (19), but (20) additionally recognizes the aforementioned case.
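As an illustration, one reading of the adapted stopping test (our interpretation for the sketch, not the paper's code) compares the largest squared difference over the candidate grid with the smallest value among the support points; since T(ξ) is a convex combination of the support values, passing this test also implies the plain directional-derivative criterion:

```python
import numpy as np

def adapted_stopping_rule(phi_grid, phi_support, eps):
    # phi_grid: phi(x, theta_hat) on all candidate points,
    # phi_support: phi at the support points of the current design.
    return np.max(phi_grid) - np.min(phi_support) <= eps

# A design whose support values are unequal fails the adapted test even
# though its best support point attains the grid maximum:
ok = adapted_stopping_rule(np.array([0.9, 1.0, 0.7]),
                           np.array([1.0, 0.7]), eps=1e-3)
```

This catches exactly the degenerate situation described above, where a poorly estimated parameter vector makes the plain criterion fire too early.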

Examples
In the following, we compare the performance of the Vector Direction Method (VDM), Dette's algorithm, the original SIP approach, and the new 2-ADAPT-MD algorithm. We consider two examples: one simple example that should be easy to solve for all methods, and a harder example based on a small system of ordinary differential equations.
In Table 1, we present the parameters of the algorithms. They can be tweaked to find a better solution or to terminate earlier. If not otherwise stated, we take the default values. However, for the VDM, we make two distinctions:
• The least-squares regularizer λ is not required and thus set to zero, since all support points keep a positive weight.
• The maximum number of iterations n maxiter is increased to 1000.
The following results were computed on an Intel Xeon Gold 6248R processor with four available cores.

Parameter    Description                                                      Default

ε            tolerance of stopping criterion                                  10^−5
n_maxiter    maximum number of iterations                                     100
n_θ          number of starting solutions to solve the least-squares problem
             (7) or (18), excluding the optimal parameters from the previous
             iteration

Michaelis-Menten Model
In our first example we consider the Michaelis-Menten (MM) model. It gives a formula for the rate of a reaction where a substrate reacts with an enzyme to form a product:

f(x, (V, K)) = V x / (K + x).

Here, x is a substrate concentration, V is the maximum reaction rate, and K is the so-called Michaelis-Menten constant, which is the substrate concentration at which the reaction rate is at half maximum (1/2)V [23]. Alternatively, we might add a linear term to this rate. This yields the modified Michaelis-Menten (ModMM) model:

f(x, (V, K, F)) = V x / (K + x) + F x.

Our goal is to find experiments such that we can discriminate between the two models and decide which one fits our real-world problem better. This example as it stands was also studied in [19] and has since become a standard example for model discrimination.
Considering that the original model is included in the modified version, we assume that f1 is our true function with the parameters V = 1, K = 1, and F = 0.1. Accordingly, f2 is the alternative model. The design space is chosen to be X = [0.001, 5], and for the parameter space we have (V, K) ∈ Θ2 = [10^−3, 5] × [10^−3, 5]. Lastly, as the initial design we select
The performance of the algorithms for this example is presented in Table 2. Additionally, the computed designs for each algorithm are shown in Table 3. Moreover, we illustrate the results. In Figure 1, we show how well the Michaelis-Menten model f2 is fitted to the modified Michaelis-Menten model. Since all algorithms find designs with approximately the same optimal parameters θ̂2 = (V̂, K̂) ≈ (1.86, 2.15), this figure is representative for all algorithms. Additionally, in Figure 2, we present a plot of the directional derivative x ↦ ψ(x, ξ*) for the optimal design computed by the 2-ADAPT-MD algorithm. It illustrates that the necessary and sufficient optimality criterion based on the Equivalence Theorem 7 is satisfied. We also plotted the directional derivative of the solution from the Vector Direction Method, which illustrates its difficulty in removing bad support points. Overall, we see that all methods are capable of computing T-optimal designs. However, the algorithm by Dette et al. is the fastest in this easy example, closely followed by the 2-ADAPT-MD algorithm. The Vector Direction Method already takes significantly longer, whereas the original SIP approach has tremendous problems solving the upper-level problem to global optimality, thus taking by far the longest.
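To make the example concrete, the following sketch evaluates the T-criterion of one fixed three-point design for this model pair. The rate laws and the true parameters V = 1, K = 1, F = 0.1 follow the example; the chosen design, solver, and starting point are our own illustration:

```python
import numpy as np
from scipy.optimize import minimize

def mod_mm(x, V=1.0, K=1.0, F=0.1):       # reference model f1 (ModMM)
    return V * x / (K + x) + F * x

def mm(x, theta):                          # alternative model f2 (MM)
    V, K = theta
    return V * x / (K + x)

def T_of_design(points, weights):
    # Weighted least-squares fit of the MM model to the ModMM reference
    # over the parameter box Theta_2 = [1e-3, 5]^2.
    obj = lambda th: np.sum(weights * (mod_mm(points) - mm(points, th)) ** 2)
    res = minimize(obj, x0=np.array([1.0, 1.0]), bounds=[(1e-3, 5.0)] * 2)
    return res.fun, res.x

pts = np.array([0.5, 2.5, 5.0])            # an arbitrary uniform design on X
T_val, (V_hat, K_hat) = T_of_design(pts, np.full(3, 1 / 3))
```

Optimizing the weights and support points of such designs, rather than fixing them as here, is exactly what the compared algorithms do.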

Partially Reversible vs. Irreversible Consecutive Reaction
Sometimes it is desirable to discriminate between an irreversible consecutive reaction of the form

A → B → C     (21)

and a consecutive reaction where the first part is reversible, i.e.

A ⇌ B → C.     (22)
This problem was investigated in [24]. Typically, reactions of such forms are modelled via a system of ordinary differential equations. For example, by modelling (22) with power-law expressions for the reaction rates, we obtain a system of ordinary differential equations for the concentrations [A], [B], and [C]. Now, if we want to discriminate between the models, we have to assume that the reaction with the reversible part (22) is the reference model, and the irreversible reaction (21) is our alternative model. Otherwise, we could perfectly fit the alternative to the reference model, which means that the squared distance is zero everywhere and every design would be trivially optimal.
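For illustration, first-order power-law versions of the two reaction schemes can be simulated as follows; the rate constants and reaction orders are assumptions for the sketch, not the values used in this paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def irreversible(t, c, k1=1.0, k2=0.5):            # A -> B -> C
    A, B, C = c
    return [-k1 * A, k1 * A - k2 * B, k2 * B]

def partially_reversible(t, c, k1=1.0, km1=0.3, k2=0.5):   # A <=> B -> C
    A, B, C = c
    r1 = k1 * A - km1 * B                          # net rate of A <=> B
    return [-r1, r1 - k2 * B, k2 * B]

def concentrations(model, c0, t_meas):
    # Integrate the ODE system and return (A, B, C) at the measurement time.
    sol = solve_ivp(model, (0.0, t_meas), c0, t_eval=[t_meas],
                    rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

# One design point x = ([A]0, [B]0, [C]0, t) from the lattice below:
c_ref = concentrations(partially_reversible, [0.9, 0.3, 0.0], 10.0)
c_alt = concentrations(irreversible, [0.9, 0.3, 0.0], 10.0)
phi = float(np.sum((c_ref - c_alt) ** 2))          # multi-response distance
```

Evaluating phi over all lattice points is precisely the enumeration used in place of the continuous maximization (10) in this example.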
As design variables, we consider the initial concentrations [A]_0, [B]_0, and [C]_0, and the time point of measurement t. However, since evaluating the models at one point is already hard enough, we use a small discretized design space and maximize the squared distance (10) by evaluating over the complete lattice instead of optimizing over the continuous space. So we assume ([A]_0, [B]_0, [C]_0, t) ∈ X = {0.5, 0.7, 0.9} × {0.1, 0.2, 0.3} × {0.0, 0.15, 0.3} × {2, 4, . . ., 10}. The initial design is
Due to this problem definition, the original SIP approach and the new 2-ADAPT-MD algorithm both try to solve the maximin problem (3) over this finite lattice. But whereas the original SIP approach uses the Blankenship & Falk algorithm directly, we iteratively determine a smaller subset of design points and apply the Blankenship & Falk algorithm to this subset. This reduces the number of evaluations required to formulate the linear program (6) in the upper level, but should lead to more iterations overall.
The performance of the algorithms is presented in Table 4. However, for the original SIP method, we had to set the least-squares regularizer to λ = 0, which worked in this example but might lead to problems with other instances. Additionally, see Table 5 for the computed designs. Notice that this example discriminates between two multi-response models, and Dette's algorithm can only deal with single-response models, so its row is blank. The results show that we can not only find optimal designs even for multi-response models, but that taking a small subset of candidate design points, as done in the 2-ADAPT-MD algorithm, is indeed beneficial compared to the usual SIP approach.

Conclusion
In this paper we have introduced a novel algorithm to compute T-optimal experimental designs, the 2-ADAPT-MD. The algorithm utilizes a twofold adaptive discretization of the parameter space Θ_2 as well as the design space X. Both discretizations are refined similarly to approaches from semi-infinite programming, where an inner optimization problem is solved to select a new design point x or a new parameter value θ_2, respectively. We have proven the convergence of the 2-ADAPT-MD. Additionally, we have shown bounds on the error in objective value, depending on the tolerance obtained in the discretized model discrimination sub-problem. The main advantage of the 2-ADAPT-MD is that it converges under only mild assumptions on the considered models. By using increasing discretizations in the design points x and the parameters θ_2, we obtain linear programs and least-squares programs as the most frequent sub-problems. These structures are well studied, and specialized solvers can be used to compute corresponding solutions efficiently. The use of discretizations also decreases the problem dimensions, which leads to higher numerical stability. In particular, this is noticeable when adding a regularization to the least-squares term.
In two examples from chemical engineering we have seen that the 2-ADAPT-MD can compete with and even outperform state-of-the-art model discrimination methods. In comparison to a classical semi-infinite approach, we can drastically reduce the required computation time. By using a second inner discretization in the parameters θ_2, we decrease the dimension of the regularization term in the least-squares problem. Compared to the approach by Dette et al., we do not require linear approximations in the parameters for the selection of new candidate experiments. This makes our new algorithm more reliable and stable. Additionally, the 2-ADAPT-MD can also handle multi-response models, whereas the algorithm by Dette et al. is limited to the single-response case.
However, some aspects of the 2-ADAPT-MD need further investigation and can be improved upon. First, in each iteration we need to solve a non-convex sub-problem to global optimality in order to refine the discretization of the design space X. This is similar to most state-of-the-art algorithms, which rely on global optimization solvers or discrete design spaces X. Secondly, throughout this paper we have assumed the worst-case parameters θ̂_2 to be unique. In future work we want to investigate whether the 2-ADAPT-MD can be adjusted to also handle problems where these parameters are not unique.
Notice that x is feasible, i.e. g(x, y) ≥ 0 for all y ∈ Y , if the optimal solution ȳ of Q(x) satisfies g(x, ȳ) ≥ 0.
However, it is in general still hard to derive a candidate for an optimal solution of a problem with infinitely many constraints. Therefore, we consider only a finite subset Ẏ ⊂ Y of the index set, which yields the discretized or upper-level problem.
Notation 19 (Upper-Level Problem). For a given discretization Ẏ ⊂ Y of the index set, the upper-level problem SIP(Ẏ) is obtained by maximizing the objective subject to g(x, y) ≥ 0 for all y ∈ Ẏ only. The Blankenship & Falk algorithm then reads:

1: choose an initial discretization Y^(0) ⊂ Y and set k := 0
2: repeat
3: solve SIP(Y^(k)) and obtain a new solution x^(k) (upper level)
4: solve Q(x^(k)) and obtain a new solution y^(k) (lower level)
5: if g(x^(k), y^(k)) ≥ 0 then (check feasibility)
6: stop; x^(k) is feasible
7: else
8: Y^(k+1) := Y^(k) ∪ {y^(k)}
9: k := k + 1
10: until some stopping criterion is met

One has to be careful with the computed solutions x^(k), though. They are not necessarily feasible, because the upper-level problem in every iteration is a relaxation due to having fewer constraints. Nevertheless, convergence can be shown under some additional assumptions. For example, the following theorem is stated in [18].
Theorem 20 (Convergence of the Blankenship & Falk Algorithm). Suppose that the starting feasible region F(Y^(0)) := {x | g(x, y) ≥ 0 for all y ∈ Y^(0)} is compact. Then the Blankenship & Falk algorithm either stops with an optimal solution of the given semi-infinite program, or any accumulation point of the sequence of solutions (x^(k))_{k∈N} is an optimal solution.
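The loop above can be sketched on a toy semi-infinite program: maximize x subject to g(x, y) = y² + 1 − x ≥ 0 for all y ∈ [−1, 1], whose optimal value is x* = 1. This toy instance is our own illustration, not from the paper; both levels are solved by simple enumeration here, whereas in practice an LP solver and a global optimizer take their places.

```python
def g(x, y):
    # semi-infinite constraint g(x, y) >= 0 for all y in [-1, 1]
    return y ** 2 + 1.0 - x

Y_grid = [-1.0 + i / 500.0 for i in range(1001)]  # fine grid stand-in for Y

def solve_upper(discretization):
    # SIP(Y_dot): maximize x s.t. x <= y^2 + 1 for all y in the discretization,
    # so the optimum is the smallest upper bound
    return min(y ** 2 + 1.0 for y in discretization)

def solve_lower(x):
    # Q(x): find the most violated constraint, i.e. minimize g(x, .) over Y
    return min(Y_grid, key=lambda y: g(x, y))

Y_dot = [1.0]                      # initial discretization Y^(0)
for _ in range(50):
    x_k = solve_upper(Y_dot)       # upper level
    y_k = solve_lower(x_k)         # lower level
    if g(x_k, y_k) >= -1e-9:       # feasibility check
        break                      # x_k is feasible, hence optimal
    Y_dot.append(y_k)              # refine the discretization
```

After one refinement (adding y = 0) the iterate x_k = 1 is feasible and the loop stops, illustrating how intermediate iterates are infeasible relaxation solutions until the critical constraint is added.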

Figure 1 :
Figure 1: ModMM reference model vs. fitted MM alternative model with the support points of the optimal design from the 2-ADAPT-MD algorithm. The size of the crosses indicates the weights of the design, and the black lines highlight the model differences.

Table 1 :
Table of customizable parameters.
where [A], [B], and [C] are the concentrations of each component at time t. Furthermore, k_1, k_2, and k_3 are rate constants, and n_1, n_2, and n_3 are reaction orders. Note that the model for the irreversible reaction (21) is included in this model by choosing k_3 = 0. Notice further that in this case we deal with multi-response models that output the concentrations of all components [A], [B], and [C] at the time points t.
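A minimal sketch of evaluating this multi-response model with an explicit Euler scheme; the rate constants, reaction orders, and step size below are illustrative assumptions, not values from the paper. Setting k3 = 0 recovers the irreversible model (21).

```python
def simulate(a0, b0, c0, t_end, k1=1.0, k2=0.5, k3=0.2,
             n1=1.0, n2=1.0, n3=1.0, dt=1e-3):
    # explicit Euler integration of the power-law ODE system for (22)
    a, b, c = a0, b0, c0
    for _ in range(round(t_end / dt)):
        r1 = k1 * a ** n1          # A -> B
        r2 = k2 * b ** n2          # B -> C
        r3 = k3 * b ** n3          # B -> A (reverse step; k3 = 0: irreversible)
        a += dt * (-r1 + r3)
        b += dt * (r1 - r2 - r3)
        c += dt * r2
    return a, b, c                 # multi-response output [A], [B], [C]
```

For example, `simulate(0.5, 0.1, 0.0, 2)` evaluates the model at the design point ([A]_0, [B]_0, [C]_0, t) = (0.5, 0.1, 0.0, 2); a production code would use an adaptive ODE integrator instead of fixed-step Euler.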

Table 4 :
Results for partially reversible vs. irreversible consecutive reaction model.

Table 5 :
Optimal designs per algorithm for the partially reversible vs. irreversible consecutive reaction model. The additional effort to formulate the LP exceeds the cost of the additional iterations. Also, the least-squares regularization can be applied more effectively, which can be crucial for instances where intermediate solutions with too few support points appear.