Space mapping-based optimization with the macroscopic limit of interacting particle systems

We propose a space mapping-based optimization algorithm for microscopic interacting particle dynamics which are infeasible for direct optimization. This is of relevance for example in applications with bounded domains for which the microscopic optimization is difficult. The space mapping algorithm exploits the relationship of the microscopic description of the interacting particle system and a corresponding macroscopic description as partial differential equation in the “many particle limit”. We validate the approach with the help of a toy problem that allows for direct optimization. Then we study the performance of the algorithm in two applications. A pedestrian flow is considered and the transportation of goods on a conveyor belt is optimized. The numerical results underline the feasibility of the proposed algorithm.


Introduction
In recent decades, interacting particle systems have attracted a lot of attention from researchers in various fields such as swarming, pedestrian dynamics and opinion formation (cf. [1,25,31,32] and the references therein). In particular, a model hierarchy was established [12,20]. The main idea of the hierarchy is to model the same dynamics with different accuracies, each having its own advantages and disadvantages. The model with the highest accuracy is the microscopic one. It describes the positions and velocities of each particle explicitly. For applications involving many particles, this microscopic modelling requires a huge amount of computational effort and storage, especially when it comes to the optimization of problems with many particles [10,11].
There is also an intermediate level of accuracy given by the mesoscopic description, see [1,12,32]. We do not give its details here; instead, we pass directly to the macroscopic level, where the velocities are averaged and a position-dependent density describes the probability of finding a particle of the dynamics at a given position. Of course, we lose the explicit information about each particle, but gain the advantage of saving a lot of storage in the simulation of the dynamics. Despite the lower accuracy, many studies [1,11,29] indicate that the evolution of the density yields a good approximation of the original particle system; see also [35], which proposed a limiting procedure that is considered in more detail below. This observation motivates us to exploit the aforementioned relationship of microscopic and macroscopic models and to propose a space mapping-based optimization scheme for interacting particle dynamics which are inappropriate for direct optimization.
For example, this might be the case for particle dynamics that involve a huge number of particles, for which traditional optimization is expensive in terms of storage, computational effort and time. Another example is the optimization of particle dynamics in bounded domains, where the movement is restricted by obstacles or walls. In fact, systems based on ordinary differential equations (ODEs) do not have a natural prescription of zero-flux or Neumann boundary data, but those conditions might be useful for applications. In contrast, models based on partial differential equations (PDEs) require boundary conditions, and often zero-flux or Neumann type boundary conditions are chosen. The approach discussed in the following allows us to approximate the optimizer of microscopic dynamics with additional boundary behavior while only optimizing the macroscopic model.

Modeling equations and general optimization problem
We begin with the general framework and propose the space mapping technique to approximate an optimal solution of the interacting particle system. In general, the interacting particle dynamics for N ∈ N particles in the microscopic setting are given by an ODE system (1), where x_i ∈ R^2, v_i ∈ R^2 are the position and the velocity of particle i, supplemented with the initial conditions x_i(0) = x_i^0, v_i(0) = v_i^0 for i = 1, ..., N. Here, F denotes an interaction kernel which is often given as the gradient of a potential [15]. For notational convenience, we define the state vector y = (x_i, v_i)_{i=1,...,N}, which contains the position and velocity information of all particles.
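A minimal numerical sketch may help fix ideas. The following explicit Euler time stepper for a velocity-selection-type particle system is an illustration only, not the authors' implementation; the names `simulate`, `v_eq` and `force`, and all parameter values, are assumptions.

```python
import numpy as np

def simulate(x0, v0, v_eq, force, tau=1.0, dt=0.01, steps=100):
    """Explicit Euler time stepping for a second-order particle system.

    x0, v0 : (N, 2) arrays of initial positions and velocities.
    v_eq   : callable mapping an (N, 2) position array to equilibrium velocities.
    force  : callable F(d) acting on a single pairwise difference vector d.
    """
    x, v = x0.copy(), v0.copy()
    n = len(x)
    for _ in range(steps):
        acc = (v_eq(x) - v) / tau          # relaxation towards equilibrium velocity
        for i in range(n):                 # pairwise interaction forces
            for j in range(n):
                if i != j:
                    acc[i] += force(x[i] - x[j])
        x = x + dt * v
        v = v + dt * acc
    return x, v
```

With `v_eq(x) = -x` and no interaction force, the particles drift towards the origin, matching the toy example discussed later.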
Remark 1. Note that there are models that include boundary dynamics with the help of soft-core interactions, see for example [25]. In general, these models allow for direct optimization. Nevertheless, for N ≫ 1 the curse of dimensionality applies and the approach discussed here may still be useful.
Sending N → ∞ and averaging the velocity, we formally obtain a macroscopic approximation of the ODE dynamics given by a PDE (2), where ρ = ρ(x, t) denotes the particle density in the domain Ω ⊆ R^2. The velocity v is the averaged velocity depending on the position and k(ρ) describes the diffusion. We consider constrained optimization problems of the form

    min_{u ∈ U_ad} J(u, y)   subject to   E(u, y) = 0,

where J is the cost functional, U_ad is the set of admissible controls and y are the state variables satisfying E(u, y) = 0. In the following, for a given control u ∈ U_ad, the constraint E(u, y) = 0 encodes the modeling equations for systems of ODEs or PDEs. Under the additional assumption that for a given control u the model equations have a unique solution, we can express y = y(u) and consider the reduced problem

    min_{u ∈ U_ad} J(u, y(u)).   (3)

This is a nonlinear optimization problem, which we intend to solve for an ODE constraint E(u, y(u)) = 0. To do this, one might follow a standard approach [26] and apply a gradient descent method based on adjoints [34] to solve the microscopic reduced problem iteratively.
In contrast, the space mapping technique employs a cheaper substitute model (the coarse model) for the optimization of the fine model problem. Under the assumption that the optimization of the microscopic system is difficult while the optimization of the macroscopic system can be computed efficiently, we propose space mapping-based optimization. The main objective is to iteratively approximate an optimal control for the microscopic dynamics. To get there, we solve a related optimal control problem on the macroscopic level in each iteration.

Literature review and outline
Space mapping was originally introduced in the context of electromagnetic optimization [6].
The original formulation has been subject to improvements and changes [8] and enhanced by classical methods for nonlinear optimization. The use of Broyden's method to construct a linear approximation of the space mapping function, so-called aggressive space mapping (ASM), was introduced by Bandler et al. [7]. We refer to [4,8] for an overview of space mapping methods. More recently, space mapping has been successfully used in PDE-based optimization problems. Banda and Herty [5] presented an approach for dynamic compressor optimization in gas networks. Göttlich and Teuber [24] use space mapping-based optimization to control the inflow in transmission lines. In both cases, the fine model is given by hyperbolic PDEs on networks and the main difficulty arises from the nonlinear dynamics induced by the PDE. These dynamics limit the possibility of solving the optimization problems efficiently. In their model hierarchy, a simpler PDE serves as the coarse model, and computational results demonstrate that such a space mapping approach enables the efficient computation of accurate results. Pinnau and Totzeck [33] used space mapping for the optimization of a stochastic interacting particle system. In their approach the deterministic state model was used as coarse model and led to satisfying results. Here, we employ a mixed hyperbolic-parabolic PDE as the coarse model in the space mapping technique to solve a control problem on the ODE level. Our optimization approach therefore combines different hierarchy levels. As discussed, the difficulty on the ODE level can arise due to boundaries in the underlying spatial domain or due to a large number of interacting particles. In contrast, the macroscopic equation naturally involves boundary conditions and its computational effort is independent of the particle number.
The outline of the paper is as follows: We introduce the space mapping technique in section 2, together with the fine and coarse model descriptions in subsections 2.1 and 2.2. Particular attention is paid to the solution approach for the discretized coarse model in section 2.2.2, which is an essential step in the space mapping algorithm. The discretized fine model optimal control problem is presented in section 3, and the space mapping approach is validated by comparisons to a standard optimization technique for the fine model. We provide numerical examples in bounded domains in section 4. Various controls, such as the source of an eikonal field in evacuation dynamics (section 4.1) and the conveyor belt velocity in a material flow setting (section 4.2), demonstrate the versatility of the proposed space mapping approach. Our insights are summarized in the conclusion in section 5.

Space mapping technique
Space mapping considers a model hierarchy consisting of a coarse and a fine model. Let G_c(u) and G_f(u) denote the operators mapping a given control u to a specified observable in the coarse and the fine model, respectively. The idea of space mapping is to find the optimal control u_f^* ∈ U_ad^f of the complicated (fine) model control problem with the help of a coarse model that is simple to optimize.
We assume that the fine model control problem, where ω^* ∈ R^n is a given target state, is inappropriate for direct optimization. In contrast, we assume the optimal control u_c^* ∈ U_ad^c of the coarse model control problem can be obtained with standard optimization techniques. While it is computationally cheaper to solve the coarse model, it helps to acquire information about the optimal control variables of the fine model. By exploiting the relationship of the models, space mapping combines the simplicity of the coarse model and the accuracy of the more detailed fine model very efficiently [3,17].
Definition 2.1. The space mapping function T : U_ad^f → U_ad^c is defined by

    T(u_f) = argmin_{u_c ∈ U_ad^c} ||G_c(u_c) − G_f(u_f)||.

The process of determining T(u_f), the solution to the minimization problem in Definition 2.1, is called parameter extraction. It requires a single evaluation of the fine model G_f(u_f) and a minimization in the coarse model to obtain T(u_f) ∈ U_ad^c. Uniqueness of the solution to the optimization problem is desirable but in general not ensured, since it strongly depends on the two models and the admissible sets of controls U_ad^f, U_ad^c; see [17] for more details.
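Parameter extraction can be illustrated with a deliberately simple stand-in: for a scalar control and a scalar observable, the inner minimization of Definition 2.1 reduces to picking the coarse control whose response best matches the fine model response. The brute-force candidate search below is an assumption for illustration; the paper itself solves this minimization with adjoint calculus.

```python
def parameter_extraction(G_c, response_f, candidates):
    """Return the coarse control u_c minimizing |G_c(u_c) - G_f(u_f)|,
    where response_f = G_f(u_f) is a single, already computed fine
    model evaluation. Brute force over a candidate list stands in for
    the coarse model optimization."""
    return min(candidates, key=lambda u_c: abs(G_c(u_c) - response_f))
```

For instance, with a coarse response G_c(u) = 2u and a fine response of 3.0, the extracted control is the candidate closest to 1.5.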
The basic idea of space mapping is that either the target state is reachable, i.e., G_f(u_f^*) ≈ ω^*, or both models are relatively similar in the neighborhood of their optima, i.e., G_f(u_f^*) ≈ G_c(u_c^*); compare [17]. In general, it is very difficult to establish the whole mapping T; we therefore only use evaluations. In fact, the space mapping algorithm allows us to shift most of the model evaluations in an optimization process to the faster, coarse model. In particular, no gradient information of the fine model is needed to approximate the optimal fine model control [3]. Figure 1 illustrates the main steps of the space mapping algorithm. In the literature, many variants of the space mapping idea can be found [8]. We will use the ASM algorithm; see Algorithm 1 in Appendix A or the references [7,24] for algorithmic details. Starting from the iterate u^1 = u_c^*, the descent direction d^k is updated in each iteration k using the space mapping evaluation T(u^k). The algorithm terminates when the parameter extraction maps the current iterate u^k (approximately) to the coarse model optimum, i.e., when ||T(u^k) − u_c^*|| is smaller than a given tolerance in an appropriate norm ||·||. The solutions u_c^* and T(u^k) are computed using adjoints here, as explained in section 2.2.2.
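For a scalar control, the ASM iteration can be sketched in a few lines: a Broyden estimate B of the space mapping Jacobian is used to drive the residual T(u^k) − u_c^* to zero. This is an illustrative sketch of the quasi-Newton structure, not the exact Algorithm 1 of Appendix A; the initialization B = 1 and the stopping tolerance are assumptions.

```python
def asm(T, u_c_star, u0, tol=1e-8, max_iter=50):
    """Aggressive space mapping for a scalar control: drive the
    residual T(u) - u_c_star to zero with a Broyden-updated scalar
    Jacobian estimate B of the space mapping function T."""
    u, B = u0, 1.0
    r = T(u) - u_c_star
    for _ in range(max_iter):
        if abs(r) < tol:                 # parameter extraction hits the coarse optimum
            break
        h = -r / B                       # quasi-Newton step: solve B * h = -r
        u = u + h
        r_new = T(u) - u_c_star
        B = B + (r_new - r - B * h) / h  # scalar Broyden rank-one update
        r = r_new
    return u
```

For an affine mapping T(u) = 2u + 1 and coarse optimum u_c^* = 5, the iteration recovers u = 2 (where T(u) = u_c^*) in two steps.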

Fine model
We seek to control a general microscopic model for the movement of N particles with dynamics given by (1). We choose the velocity selection mechanism, which describes the correction of the particle velocities towards an equilibrium velocity v(x) with relaxation time τ. Such systems describe the movements of biological ensembles such as schools of fish, flocks of birds [2,13,16], ant colonies [9] or bacterial colonies [28], as well as pedestrian crowds [23,25] and the transport of material [21,22]. In general, the force F occurring in (1) is a pairwise interaction force between particle i and particle j. We choose to activate it whenever two particles overlap, i.e., whenever ||x_i − x_j||_2 < 2R. For ||x_i − x_j||_2 ≥ 2R, the interaction force is assumed to be zero. In the following we restrict ourselves to forces of the form (4) with scaling parameter b_F > 0.
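The precise force profile in (4) is garbled in this excerpt; as an illustration we assume a linear repulsive spring that is active only on overlap, scaled by A and b_F. The function below is a hypothetical sketch in that spirit, not the paper's exact force law.

```python
import numpy as np

def interaction_force(d, A=1.0, R=0.5, b_F=None):
    """Pairwise repulsion, active only when two particles of radius R
    overlap (||d||_2 < 2R). The linear-spring profile is an assumption;
    only the activation rule is taken from the text."""
    if b_F is None:
        b_F = 1.0 / R**5          # scaling used in the toy example of section 3
    dist = np.linalg.norm(d)
    if dist >= 2 * R or dist == 0.0:
        return np.zeros_like(d)   # no overlap: force switched off
    return A * b_F * (2 * R - dist) * d / dist   # push the particles apart
```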
We consider the optimization problem (3) and set E(u, y) = 0 if and only if the microscopic model equations (1) are satisfied, in order to investigate various controls u; for example, u being the local equilibrium velocity v(x) of the velocity selection mechanism, or u being the factor A scaling the interaction force between the particles. The objective function under consideration in each of the scenarios is the squared deviation of the performance evaluation j(u, y(u)) from the target value ω^* ∈ R, that is J(u, y(u)) = (j(u, y(u)) − ω^*)^2. In the following we discuss the macroscopic approximation which is used as coarse model for the space mapping.

Coarse model
Reference [35] shows that in the many-particle limit, N → ∞, the microscopic system (1) can be approximated by the advection-diffusion equation (2) with k(ρ) = C ρ H(ρ − ρ_crit). The constant C is derived from the microscopic interaction force through a limiting relation, see [35]. The density ρ_crit = 1 is a threshold above which the diffusion in the macroscopic model is activated, and H denotes the Heaviside function. At the boundary, we apply the zero-flux conditions (6) for the advective and the diffusive flux, where n = (n^(1), n^(2))^T is the outer normal vector at the boundary ∂Ω.
The advection-diffusion equation (2) serves as the coarse model in the space mapping technique. To solve optimization problems in the coarse model, we pursue a first-discretize-then-optimize approach. In the following, we discretize the macroscopic model and derive the first-order optimality system for the discretized macroscopic system.
Remark 2. We recommend choosing the optimization approach depending on the structure of the macroscopic equation. Here, the PDE is hyperbolic whenever no particles overlap; we therefore choose first-discretize-then-optimize. If the macroscopic equation were purely diffusive, one might employ a first-optimize-then-discretize approach instead.

Discretization of the macroscopic model
We discretize a rectangular spatial domain (Ω ∪ ∂Ω) ⊂ R^2 with grid points x_ij = (i Δx^(1), j Δx^(2)), (i, j) ∈ I_Ω = {1, ..., N_x^(1)} × {1, ..., N_x^(2)}. The boundary ∂Ω is described by the set of indices I_∂Ω ⊂ I_Ω. The time step of the coarse model is Δt_c and the grid constants are λ^(1) = Δt_c/Δx^(1) and λ^(2) = Δt_c/Δx^(2). We compute the approximate solution to the advection-diffusion equation (2) with the splitting scheme (8), where the cell interfaces are located at x^(1)_{i±1/2} = (i ± 1/2) Δx^(1) and x^(2)_{j±1/2} = (j ± 1/2) Δx^(2). The discretization of the initial density in (2) is obtained from the microscopic initial positions smoothed with a Gaussian filter η, such that the initial density reads as in (7). To compute ρ^s_ij, s > 0, we solve the advection part with the upwind scheme and apply dimensional splitting; the diffusion part is solved implicitly. The fluxes F^(1), F^(2) and B are defined accordingly. The Heaviside function H is smoothly approximated, and the time step for the numerical simulations is restricted by the CFL condition of the hyperbolic part; compare [27,35]. We denote the vector of density values by ρ = (ρ^s_ij), (i, j, s) ∈ I_Ω × {0, ..., N_t^c}. It is the discretized solution (8) of the macroscopic equation (2), which depends on a given control u. The vectors containing the intermediate density values and the Lagrange parameters used below are defined analogously.
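In one space dimension, the structure of the scheme, explicit upwind transport followed by an implicit diffusion step with zero-flux boundaries, can be sketched as follows. This is a simplified illustration (constant positive velocity, constant diffusion coefficient, backward Euler), not the authors' 2-D dimensional-splitting code.

```python
import numpy as np

def step_advection_diffusion(rho, v, k, dt, dx):
    """One splitting step in 1-D with zero-flux boundaries: explicit
    upwind for the advection (v > 0 assumed, CFL v*dt/dx <= 1),
    then backward Euler for the diffusion, which requires a linear
    solve, mirroring the implicit treatment in the scheme (8)."""
    n = len(rho)
    lam = dt / dx
    # advective interface fluxes; F[0] = F[n] = 0 enforces zero flux
    F = np.zeros(n + 1)
    F[1:n] = v * rho[:-1]                 # upwind: take the left cell value
    rho = rho - lam * (F[1:] - F[:-1])
    # implicit diffusion: (I + mu * L) rho_new = rho, Neumann boundary rows
    mu = dt * k / dx**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2 * mu
        if i > 0:
            A[i, i - 1] = -mu
        if i < n - 1:
            A[i, i + 1] = -mu
    A[0, 0] = 1 + mu                      # zero-flux at the left boundary
    A[-1, -1] = 1 + mu                    # zero-flux at the right boundary
    return np.linalg.solve(A, rho)
```

With zero-flux boundaries both substeps conserve the total mass, which is a useful sanity check for any implementation of the scheme.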

Solving the coarse model optimization problem
Next, we turn to the solution of the coarse-scale optimization problem. The construction of a solution to this problem is paramount to the space mapping algorithm. We provide a short discussion of the adjoint method for the optimization problem (3) before we specify the macroscopic adjoints.
First order optimality system. Let J(u, y(u)) be an objective function which depends on the given control u. We wish to solve the optimization problem (3) and apply a descent algorithm, in which a current iterate u^k is updated in the direction of descent of the objective function J until the first-order optimality condition is satisfied. An efficient way to compute the first-order optimality conditions is based on the adjoint, which we recall in the following. Let the Lagrangian function be defined as L(u, y, μ) = J(u, y) + ⟨μ, E(u, y)⟩, where μ is called the Lagrange multiplier.
Solving dL = 0 yields the first-order optimality system, consisting of (i) the state equation and (ii) the adjoint equation. For nonlinear systems it is difficult to solve the coupled optimality system (i)-(ii) all at once. We therefore proceed iteratively: for the computation of the total derivative (d/du) J(u, y(u)), the system E(u, y(u)) = 0 is solved forward in time. Then, the information of the forward solve is used to solve the adjoint system (ii) backwards in time. Lastly, the gradient is obtained from the adjoint state and the objective function derivative.
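The forward-backward structure can be made concrete on a scalar toy constraint y_{s+1} = a y_s + b u with terminal cost J = (1/2)(y_S − target)^2; the linear dynamics and all names are illustrative assumptions, chosen so the adjoint gradient can be checked against a finite difference.

```python
def adjoint_gradient(u, y0, a, b, target, steps):
    """Forward-backward sweep for the scalar toy constraint
    y_{s+1} = a*y_s + b*u with J = 0.5*(y_S - target)^2."""
    # forward solve of the state equation E(u, y) = 0
    y = y0
    for _ in range(steps):
        y = a * y + b * u
    # backward (adjoint) solve; gradient assembled along the way
    mu = y - target               # terminal condition from dJ/dy_S
    grad = 0.0
    for _ in range(steps):
        grad += b * mu            # each step's dE/du contributes b * mu
        mu = a * mu               # mu_{s-1} = a * mu_s
    return grad
```

For this linear-quadratic toy problem the adjoint gradient coincides with the exact derivative, so it matches a central finite difference to machine precision.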
Nonlinear conjugate gradient method. We use a nonlinear conjugate gradient method [14,19] within our descent algorithm to update the iterate via u^{k+1} = u^k + σ^k d^k. The step size σ^k is chosen such that it satisfies the Armijo rule [26,30] and the standard Wolfe condition [30], (10)-(11), with 0 < c_1 < c_2 < 1. We start from σ^k = 1 and cut the step size in half until (10)-(11) are satisfied. The parameter β^k is chosen as in [14], which together with conditions (10)-(11) ensures convergence to a minimizer. We refer to this method as the adjoint method (AC). In the following we apply this general strategy to our macroscopic equation.
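A sketch of such a descent loop might look as follows. The exact β^k formula used in the paper is garbled in this excerpt, so the Fletcher-Reeves coefficient is an assumption here, and only the Armijo condition (with halving from σ = 1) is enforced in the backtracking for brevity.

```python
import numpy as np

def ncg_minimize(J, gradJ, u0, tol=1e-8, max_iter=200, c1=1e-2):
    """Nonlinear conjugate gradient descent with Armijo backtracking,
    using the Fletcher-Reeves coefficient beta = ||g_new||^2/||g||^2."""
    u = np.asarray(u0, dtype=float)
    g = gradJ(u)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        sigma = 1.0
        while J(u + sigma * d) > J(u) + c1 * sigma * g.dot(d):
            sigma *= 0.5              # Armijo rule: halve until sufficient decrease
            if sigma < 1e-12:
                break
        u = u + sigma * d
        g_new = gradJ(u)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
    return u
```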

Macroscopic Lagrangian
We consider objective functions depending on the density, i.e., J_c(u, ρ). The discrete Lagrangian couples the objective function to the discretized forward system (8) via the Lagrange multipliers. We differentiate the Lagrangian with respect to ρ^s_ij and rearrange terms, which leads to a backward-in-time equation for the Lagrange parameters (μ^{s−1}_ij)_{(i,j)∈I_Ω}. Note that the resulting expression T(μ^{s−1}) = (T_ij(μ^{s−1}))_{(i,j)∈I_Ω} defines a coupled system in space for the Lagrange parameter of time step s − 1 which has to be solved in each time step. This system arises from the implicit treatment of the diffusion term in the forward system (8). It is the main difference to adjoints for purely hyperbolic equations, where the Lagrange parameters in step s − 1 of the backward system are simply obtained as a convex combination of those from step s, see [18]. Proceeding further, we differentiate the Lagrangian with respect to the intermediate density values and rearrange terms to obtain the updates for the corresponding intermediate multipliers. The equality of these intermediate Lagrange parameters stems from the fact that the diffusion is solved implicitly in the forward system (8). In the next section, we consider the diffusion coefficient C as control for the macroscopic system, u = C; in this case, the derivative of the Lagrangian with respect to the control can be computed explicitly.

Validation of the approach
To validate the proposed approach, we consider a toy problem and compare the results of the space mapping method to optimal solutions computed directly on the microscopic level.
For the toy problem, we control the potential strength A of the microscopic model. The macroscopic analogue is the diffusion coefficient C.

Discrete microscopic adjoint
Let N_t^f ∈ N and Δt_f ∈ R be the number of time steps and the time step size, respectively. We discretize the fine, microscopic model (1) in time. Furthermore, let J_f(u, x) be the microscopic objective function. The microscopic Lagrange function then couples J_f to the time-discrete dynamics via Lagrange multipliers for the position and velocity updates. The details of the derivatives of the force terms and the computation of the adjoint state can be found in Appendix C. Moreover, the derivative of the Lagrangian with respect to the control u = A can be computed explicitly.

Comparison of space mapping to direct optimization
We apply ASM and the direct optimization approach AC to the optimization problem (3). In each iteration k of the adjoint method for the fine model, a computation of the gradient ∇J_f for the stopping criterion as well as several objective function and gradient evaluations for the computation of the step size σ^k are required. These evaluations are (mostly) shifted to the coarse model in ASM. Let Ω = [−5, 5]^2 be the domain and v(x) = −x the velocity field of our toy example. We investigate whether the macroscopic model is an appropriate coarse model in the space mapping technique. For the microscopic interactions, we use the force term (4) with b_F = 1/R^5. Without interaction forces, A = 0, all particles are transported to the center of the domain (x^(1), x^(2)) = (0, 0) within finite time; certainly, they overlap after some time. With increasing interaction parameter, i.e., increasing A, particles encounter stronger forces as they collide. Therefore, scattering occurs and the spatial spread increases. We control the spatial spread of the particle ensemble at t = T in the microscopic model, which yields the cost functional and the corresponding objective function derivative with respect to the state variables x_i. We choose A, the scaling parameter of the interaction force, as microscopic control. The coarse, macroscopic model is given by (2), and the spatial spread of the density at t = T is measured analogously, where M is the total mass, i.e., M = Σ_{(i,j)} ρ^0_ij Δx^(1) Δx^(2). According to [35], the macroscopic diffusion constant C is derived from the microscopic parameters. We choose τ such that the macroscopic diffusion coefficient simplifies to C = A, compare (2), and consider the parameters in Table 1.
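The microscopic spread objective can be sketched as follows. The exact spread formula is not recoverable from this excerpt, so the mean squared distance of the particles to their center of mass is an assumption; the squared deviation from the target ω^* follows the objective structure stated in section 2.1.

```python
import numpy as np

def spread_cost(x, omega_star):
    """Toy-problem objective J_f: squared deviation of the ensemble
    spread from the target value omega_star. The spread is assumed
    here to be the mean squared distance of the particles from their
    center of mass; x is an (N, 2) array of positions at t = T."""
    center = x.mean(axis=0)
    spread = np.mean(np.sum((x - center) ** 2, axis=1))
    return (spread - omega_star) ** 2
```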
Two particle collectives with N/2 = 100 particles each are placed in the domain, see Figure 3a. The macroscopic representation (7) of the particle groups is shown in Figure 3b. We set box constraints 0 ≤ A, C ≤ 10 on the controls and compare the number of iterations of the two approaches needed to obtain a given accuracy of J_f(u^k, x) < 10^{−7}. The step sizes σ^k for AC are chosen such that they satisfy the Armijo rule and the standard Wolfe condition (10)-(11) with c_1 = 0.01, c_2 = 0.9. If an iterate violates the box constraints, it is projected into the feasible set.
In the space mapping algorithm, the parameter extraction T(u^k) is the solution of an optimization problem in the coarse model space, see Definition 2.1. The optimization is solved via adjoint calculus with c_1, c_2 as chosen above and with the initial guess u_start = T(u^{k−1}), which we expect to be close to T(u^k). Further, to determine the step size σ^k for the control update, we consider step sizes that decrease the distance of the parameter extraction to the coarse model optimal control from one space mapping iteration to the next.
The optimization results and computation times (obtained as the average computation time of 20 runs on an Intel(R) Core(TM) i7-6700 CPU, 3.40 GHz, 4 cores) for target values ω^* ∈ {1, 2, 3} are compared in Table 2. Both optimization approaches start far from the optima at u^0 = 8. The optimal controls u_*^AC and u_*^ASM closely match. The objective function evaluations J_f(u_*^AC, x) and J_c(u_c^*, ρ) describe the accuracy at which the fine and coarse model control problems are solved, respectively. J_f(u_*^ASM, x) denotes the accuracy of the space mapping optimal control when it is plugged into the fine model and the fine model objective function is evaluated. Note that the ASM approach in general does not ensure a descent in the microscopic objective function value J_f(u^k, x) during the iterative process; it purely relies on the idea of reducing the distance ||T(u^k) − u_c^*||_2. However, ASM also generates small target values J_f(u_*^ASM, x) and therefore validates the proposed approach. Moreover, the model responses of the optimal controls illustrate the similarity of the fine and the coarse model, see Figures 3c-3d. The space mapping iteration finishes within two to four iterations and therefore needs fewer iterations than the pure optimization on the microscopic level here, see Figure 2. Note that each of the space mapping iterations involves the solution of the coarse optimal control problem. Hence, the comparison of the iteration counts may be misleading, and we consider the computation times as an additional criterion. It turns out that the iteration times vary, and this data therefore does not allow us to prioritize one of the approaches. Obviously, the times depend on the number of particles and the space and time discretizations.
Figure 2: Objective function value of iterates.

Space mapping in bounded domains
In the following, we consider problems with dynamics restricted to a spatial domain with boundaries.For the microscopic simulations we add artificial boundary behaviour, tailored for each application, to the ODEs.

Evacuation dynamics
We consider a scenario similar to the evacuation of N individuals from a domain with obstacles. The goal is to gather as many individuals as possible at a given location x_s ∈ Ω ⊂ R^2 up to the time T. The control is the evacuation point x_s. We model this task with the help of cost functionals for the fine and the coarse model that measure the spread of the crowd at time t = T with respect to the location of the source. The velocity v(x) is based on the solution to the eikonal equation with point source x_s. In more detail, we solve the eikonal equation |∇T(x)| = 1/f(x), where T(x) is the minimal amount of time required to travel from x to x_s and f(x) is the speed of travel. We choose f(x) = 1 and set the velocity field such that the velocity vectors point in the direction of the gradient of the solution to the eikonal equation and the speed depends on the distance of the particle to x_s. The particles are expected to slow down when approaching x_s, and the maximum velocity is bounded, ||v(x)||_2 ≤ 1. The solution to the eikonal equation on the 2-D Cartesian grid is computed using a fast marching algorithm implemented in C with a Matlab interface. The travel time isoclines of the eikonal equation and the corresponding velocity field are illustrated in Figure 4. Note that we have to set the travel time inside the boundary to a finite value to obtain a smooth velocity field. The derivative of the macroscopic Lagrangian (12) with respect to the location of the point source, u = x_s, involves the derivatives of the upwind fluxes ∂_{x_s} F^{(1),s,−}_{ij} and ∂_{x_s} F^{(2),s,−}_{ij}, which are defined via a case distinction on the sign of the velocity at the cell interfaces and vanish at boundary cells.
To obtain the required partial derivatives, the derivative of the eikonal travel time with respect to the source location is needed. It is approximated numerically with central finite differences in the source location, where e^(1) = (1, 0)^T, e^(2) = (0, 1)^T denote the unit vectors.
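This finite-difference approximation can be sketched as follows, where `travel_time` is a placeholder for a call to the fast marching solver (returning the travel-time field for a given source location) and the step size h is an assumption.

```python
import numpy as np

def source_derivative(travel_time, xs, h=1e-4):
    """Central finite differences for the derivative of the eikonal
    travel time with respect to the source location x_s:
    dT/dx_s^(l) ~ (T(x_s + h e^(l)) - T(x_s - h e^(l))) / (2h)."""
    e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    dT1 = (travel_time(xs + h * e1) - travel_time(xs - h * e1)) / (2 * h)
    dT2 = (travel_time(xs + h * e2) - travel_time(xs - h * e2)) / (2 * h)
    return dT1, dT2
```

As a sanity check, for the obstacle-free case with unit speed the travel time from a fixed point reduces to the Euclidean distance, whose source derivative is known in closed form.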

Discussion of the numerical results
To investigate the robustness of the space mapping algorithm, we consider different obstacles in the microscopic and macroscopic setting. Let Ω = [−8, 8]^2 be the domain. For the microscopic model we define an internal boundary 2 ≤ x^(1) ≤ 3, 1 ≤ x^(2) ≤ 8, see Figure 6a. For the macroscopic setting the obstacle is shifted by gap ≥ 0 in the x^(2)-coordinate. Additionally, we shift the initial density by the same gap, see Figure 6b. It is interesting to see whether the space mapping technique is able to recognize the linear shift between the microscopic and the macroscopic model. This is not trivial due to the nonlinearities in the models and the additional nonlinearities induced by the boundary interactions. Macroscopically, we use the zero-flux conditions (6) at the boundary. Microscopically, a boundary correction is applied; that means a particle which would hypothetically enter the boundary is reflected back into the domain, see Figure 5. For computational simplicity, we restrict the admissible set of controls such that the point source is located to the left-hand side of the obstacle. For gap = 2, the first parameter extraction underestimates the shift in the x^(2)-direction and thus two iterations are needed to obtain the optimal solution, see Table 3.

Material Flow
In the following, the control of a material flow system with a conveyor belt is considered. Similar control problems have been investigated in [18]. We use the microscopic model proposed in [21], which describes the transport of homogeneous parts with mass m and radius R on a conveyor belt Ω ⊂ R^2 with belt velocity v_T. The model parameters are given in Table 4 and the results are summarized in Table 5. Each parameter extraction uses u_start = T(u^{k−1}) and an optimality tolerance of 10^{−5}. For every diffusion coefficient, space mapping finishes in fewer than five iterations, and Table 5 indicates that the microscopic optimal control lies in the interval (0.5676, 0.5874). In all cases, space mapping generates solutions close to optimal. Even for the case C = 0, which corresponds to pure advection (without diffusion) in the macroscopic model, the ASM algorithm is able to identify a solution close to the microscopic optimal control. This underlines the robustness of the space mapping algorithm and emphasizes that even a very rough depiction of the underlying process can serve as coarse model. However, the advection-diffusion equations with C > 0 clearly match the microscopic situation better and portray the spread of particles in front of the obstacle more realistically, see Figure 8.

Conclusion
We proposed a space mapping-based optimization algorithm for interacting particle systems. The coarse model of the space mapping is chosen to be the macroscopic approximation of the fine model, which considers every single particle. The algorithm is validated with the help of a toy problem that allows for the direct computation of optimal controls on the particle level. Further, the algorithm was tested in scenarios where the direct computation of microscopic gradients is infeasible due to boundary conditions that do not naturally appear in the particle system formulation. The numerical studies underline the feasibility of the approach and motivate its use in further applications.
The derivatives of the interaction force F are given in Appendix C.

Figure 1: Schematic representation of a space mapping algorithm.

Figure 4: Solution of the eikonal equation in a bounded domain.

Figure 5: Reflection at the boundary.
The interaction of the particles with the boundary delays the gathering of the particles around the source location x_c^*, see Figure 7b. Space mapping for gap ∈ {1, 3} finishes within one iteration, since the parameter extraction of u^1 is given by T(u^1) = u^1 + [0, gap] and T(u^2) = u_c^*.

Figure 7: Solutions of the space mapping iterates at t = T with gap = 2.
Table 3: Space mapping iterates u^k for different values of gap.

Table 4: Model parameters.

Table 5: Space mapping with different diffusion coefficients C.

Now we differentiate the Lagrangian with respect to the state variables, first with respect to the positions x_i and then with respect to the velocities v_i; the resulting discrete adjoint relations are stated in Appendix C.