Robust Shape Optimization of Electric Devices Based on Deterministic Optimization Methods and Finite Element Analysis With Affine Decomposition and Design Elements

In this paper, gradient-based optimization methods are combined with finite-element modeling for improving electric devices. Geometric design parameters are considered by affine decomposition of the geometry or by the design element approach, both of which avoid remeshing. Furthermore, it is shown how to robustify the optimization procedure, i.e., how to deal with uncertainties on the design parameters. The overall procedure is illustrated by an academic example and by the example of a permanent-magnet synchronous machine. The examples show the advantages of deterministic optimization compared to standard and popular stochastic optimization procedures such as particle swarm optimization.


Introduction
In almost all electric design procedures, numerical optimization is employed as one of the last design steps in order to optimize the device's performance and efficiency, to minimize its weight and size, and to save on material and manufacturing costs. Often, the quality of this optimization step indirectly determines the success of the product and, hence, the market position of the company. The reliability, accuracy and computational cost of the numerical optimization procedure thereby become a subject of competition in themselves. This paper illustrates that shape optimization can be improved substantially when finite element (FE) analysis procedures are equipped with affine decomposition and design elements, such that well-performing deterministic optimization methods become applicable.
Impressive technical improvements have been achieved by numerical optimization on the basis of magnetic equivalent circuits or 2D and 3D FE models. All have led to highly optimized designs, e.g., for permanent-magnet synchronous machines (PMSMs) in automotive applications. For three decades, FE-based optimization has been addressed in several textbooks (e.g. [12]) and hundreds of journal articles (see, e.g., [14] and the references therein). Although gradient-based methods were originally preferred (see, e.g., [47,57,55]), stochastic algorithms have been more popular for more than two decades, see, e.g., [19] and [32]. The majority of proposed procedures opt for stochastic or population-based optimization methods, such as genetic algorithms and particle swarm optimization (e.g. [33]), because they allow the FE solver to be used as a black box, they easily accommodate geometric parameters, their parallelization is straightforward and they are more likely to find the global optimum. Stochastic algorithms have been used for robust optimization, have been applied together with surrogate modeling and have been extended to multiobjective optimization problems [3,23]. In particular for PMSMs, optimization with stochastic methods became the method of choice [9,2,50].
The trend toward stochastic optimization combined with FE analysis continues without restraint, as is illustrated by the number of corresponding contributions at recent conferences. This paper partially counteracts this tendency by turning back to deterministic optimization algorithms. These are known to converge faster than stochastic optimization methods, albeit possibly to a local optimum. Moreover, the analysis of gradient-based methods is more mature, allowing for a rigorous control of mesh discretization errors, for instance. The main drawback of many deterministic methods is, however, the necessity to provide derivatives, which is particularly cumbersome when optimization with respect to geometric parameters is pursued. This drawback is here addressed explicitly and is alleviated by affine decomposition of the geometry or by the design element approach. The overall deterministic optimization routine is shown to outperform the most popular stochastic algorithms by a substantial factor. Moreover, the optimization method will be robustified to include uncertainties on the design parameters.
The paper is structured as follows: Section 2 recalls the basics of mathematical optimization. It clearly distinguishes between deterministic methods (subsection 2.3) and particle swarm optimization as a relevant representative of stochastic methods (subsection 2.4). Furthermore, an extension to robust optimization is discussed in subsection 2.5. Section 3 deals with FE analysis of magnetodynamic fields. The core parts of the paper are subsection 3.3.1 about affine decomposition and subsection 3.3.2 about design elements, both facilitating and improving the calculation of derivatives with respect to geometric parameters. The superior performance of gradient-type deterministic optimization is illustrated for a benchmark example in section 4 and for a PMSM in section 5. Conclusions are formulated in section 6.

Constrained optimization problem
The optimization is carried out with respect to I design parameters p = (p_1, p_2, ..., p_I) belonging to the admissible set P_ad = {p ∈ R^I | G_m(p) ≤ 0, m = 1, ..., M}, where G_m(p) denote the constraints. The design parameters can be any continuous variables, e.g., material constants, excitation parameters and geometric sizes or positions. The constraints limit the admissible range of these parameters, e.g., to preserve the topology of the geometry or to set physical and operational constraints. Discrete design parameters are not considered in this work, although many methods apply, e.g., as part of a branch-and-bound technique, to mixed-integer optimization problems as well [21].
The optimization goal is represented by the objective function J(p) returning a scalar value for every set of design parameters. Relevant quantities are, e.g., force, torque, current, efficiency, weight, temperature or a combination thereof. When N objective functions J_n(p), n = 1, ..., N are relevant, a possible approach is to combine them with user-defined weight factors α_n into a single cost function J(p) = Σ_{n=1}^{N} α_n J_n(p). The optimization problem then reads

  minimize_{p ∈ R^I} J(p)  subject to  G_m(p) ≤ 0, m = 1, ..., M.  (1)
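As a toy illustration (not taken from the paper), the weighted-sum scalarization can be sketched in Python; the two hypothetical quadratic objectives stand in for FE-derived quantities such as torque ripple or material volume:

```python
import numpy as np

# Hypothetical per-goal objectives J_n(p); toy quadratics stand in for
# FE-based quantities.
def J1(p):
    return float((p[0] - 1.0) ** 2 + p[1] ** 2)

def J2(p):
    return float(p[0] ** 2 + (p[1] - 2.0) ** 2)

def weighted_sum(p, alphas, objectives):
    """Scalarized cost J(p) = sum_n alpha_n * J_n(p)."""
    return sum(a * Jn(p) for a, Jn in zip(alphas, objectives))

p = np.array([0.5, 1.0])
J = weighted_sum(p, [0.7, 0.3], [J1, J2])
```

The weights α_n encode the relative importance of the goals and must be fixed before the optimization, in contrast to a Pareto-front approach.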
In this work, the evaluation of G m (p) and/or J(p) involves a FE analysis of the device. Hence, the computational performance of the overall approach is heavily determined by the number of FE-solver calls.

Optimization methods
The selection of a particular optimization method consists of four essentially independent choices (see also Table 3 in [18]).
- Problem (1) considers a single optimization goal. For a multi-objective optimization problem, a Pareto front is calculated such that the relative importance of the optimization goals can be fixed in a later design stage [12,7]. This paper does not further consider multi-objective optimization. Nonetheless, the developed techniques are applicable to multi-objective optimization as well.
- Especially when the evaluation of the objective function is computationally expensive, it is recommended to carry out the optimization on the basis of a surrogate model (indirect optimization methods). Such a simplified model can be obtained from expert knowledge of the application [58], by design space reduction [17], by a response surface methodology [17], or by space mapping [28] or manifold mapping [15]. Here, a direct optimization procedure is used. All ideas presented here can, however, be used in combination with indirect optimization approaches as well.
- The result of a nominal optimization is a set of optimized design parameters leading to an optimum of the objective function. The optimum may, however, become irrelevant when it is highly sensitive to uncertainties in the design parameters. One speaks of robust optimization when the optimization is carried out taking such uncertainties into account. In this paper, both nominal and robust optimization methods are considered. An approach for robustification is discussed in subsection 2.5.
- Two families of basic optimization methods exist: deterministic and stochastic methods. Among the stochastic methods, genetic algorithms [34], differential evolution [37] and particle swarm optimization (PSO) [26] are well known.
This paper motivates the use of a gradient-based deterministic method for nominal and robust optimization and compares it with a standard particle swarm technique.

Gradient-based deterministic method
This work proposes to solve (1) by standard Sequential Quadratic Programming (SQP) with damped Broyden-Fletcher-Goldfarb-Shanno (BFGS) updates for the Hessian approximation [39,22]. This method establishes locally a second order convergence, i.e.,

  ||p^(k+1) − p*|| ≤ C ||p^(k) − p*||^2,

with C > 0 and k the iteration counter, which should be sufficiently large. The method, however, requires knowledge about the sensitivities of the objective function with respect to the design parameters, i.e., ∇_p J(p) or, alternatively, a locally quadratic approximation of the objective function [43]. Many FE solution and post-processing routines do not provide this information, especially when geometric design parameters are involved. Therefore, one is tempted to approximate the sensitivities by finite differences as in, e.g., [47]. This is, however, known to be particularly cumbersome because of the limited accuracy of the finite differences [57]. Even when relying on gradient-free deterministic methods (e.g., [43,44]), artifacts caused by FE analysis may hamper the convergence of the optimization routines. Eventually, as apparently the only option, deterministic optimization algorithms are abandoned in favor of stochastic approaches. This paper, however, sticks to gradient-based deterministic methods by complementing the FE simulation procedure with sensitivity information. The problems caused by the presence of geometric parameters are alleviated by introducing affine decomposition (see subsection 3.3.1) or, alternatively, design elements (see subsection 3.3.2) to the FE procedure.
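A minimal sketch of SQP-type optimization with analytic gradients, here using SciPy's SLSQP implementation on a toy objective (the actual sensitivities would come from the affine-decomposition or design-element machinery of Section 3.3; the objective and constraint below are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a FE-based objective with analytic gradient.
def J(p):
    return (p[0] - 2.0) ** 2 + 2.0 * (p[1] + 1.0) ** 2

def grad_J(p):
    return np.array([2.0 * (p[0] - 2.0), 4.0 * (p[1] + 1.0)])

# One constraint G(p) = p_0 + p_1 - 1 <= 0. SciPy's "ineq" convention is
# fun(p) >= 0, so G <= 0 is passed as -G >= 0.
cons = ({"type": "ineq",
         "fun": lambda p: 1.0 - p[0] - p[1],
         "jac": lambda p: np.array([-1.0, -1.0])},)

res = minimize(J, x0=np.zeros(2), jac=grad_J, method="SLSQP",
               constraints=cons)
```

Supplying `jac` analytically is what the affine decomposition and design element approaches enable for FE models; without it, SLSQP would fall back on noisy finite differences.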

Particle swarm optimization
Particle swarm optimization (PSO) [26] belongs to the broad class of stochastic algorithms and is particularly popular for electric machines, see e.g. [33,3,23,9,2]. In PSO, a set of Q particles, indicated by q = 1, ..., Q, moves through the admissible set in the design space in search of an optimum. At each iteration step k, the algorithm evaluates the objective function J(p) at every particle position p_{k,q}. The newly obtained values are compared to the previous best values in the individual particle histories and to the best value of the entire swarm. The corresponding best sets are denoted by p̂_q and p̂_swarm, respectively. The velocities of the particles are updated according to

  v_{k+1,q} = ω_0 v_{k,q} + ω_1 N_1 (p̂_q − p_{k,q}) + ω_2 N_2 (p̂_swarm − p_{k,q}),

where ω_0, ω_1 and ω_2 are swarm characteristic constants and N_1 and N_2 are two random diagonal matrices with elements in [0, 1], generated independently and uniformly for each particle at every step, representing the free will of the swarm. The three components of the velocity update: 1. maintain a part of the current velocity; 2. head towards the particle's best found point p̂_q; 3. head towards the swarm's best found point p̂_swarm.
If at some iteration a particle leaves the admissible set, its position is projected onto the boundary of the admissible set. Initially, all particles are randomly and uniformly distributed in the admissible set and the initial velocities are set to 0. The particle swarm is a gradient-free method and works for non-smooth functions as well. The iteration ends when a maximum number of iterations is reached, when the majority of the particles are close enough to the best point p̂_swarm, i.e., within a user-defined tolerance ε, or if there is no further change in the global best point p̂_swarm over N_stall consecutive iterations.
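The update loop described above can be sketched as follows (a minimal, self-contained toy implementation; the box projection stands in for the general admissible-set projection, and the stall-based stopping criteria are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(J, lower, upper, n_particles=40, n_iter=100,
        w0=0.5, w1=1.49, w2=1.49):
    """Minimal PSO minimizing J over the box [lower, upper]."""
    dim = len(lower)
    p = rng.uniform(lower, upper, size=(n_particles, dim))  # positions
    v = np.zeros_like(p)                 # initial velocities are 0
    p_best = p.copy()                    # per-particle best positions
    J_best = np.array([J(x) for x in p])
    g_best = p_best[np.argmin(J_best)].copy()  # swarm best
    for _ in range(n_iter):
        # Random diagonal factors, drawn anew for each particle and step
        N1 = rng.uniform(size=(n_particles, dim))
        N2 = rng.uniform(size=(n_particles, dim))
        v = w0 * v + w1 * N1 * (p_best - p) + w2 * N2 * (g_best - p)
        p = np.clip(p + v, lower, upper)  # project onto the admissible box
        J_new = np.array([J(x) for x in p])
        improved = J_new < J_best
        p_best[improved] = p[improved]
        J_best[improved] = J_new[improved]
        g_best = p_best[np.argmin(J_best)].copy()
    return g_best, J_best.min()

x_opt, J_opt = pso(lambda x: np.sum((x - 0.3) ** 2),
                   np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```

Note that every iteration costs n_particles objective evaluations, i.e., FE-solver calls in the setting of this paper, which is the root of the performance gap reported in Sections 4 and 5.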

Robust optimization
In a nominal optimization, one looks for the minimum value of an objective function. However, small deviations in the design parameters can occur during manufacturing. As a consequence, the optimal solution may become suboptimal in reality. Robust optimization searches for an optimum that is not overly affected by the expected parameter deviations [59,40].
One possibility is to optimize such that the worst-case scenario within a set of possible deviations around the design parameters is the best possible. The robust counterpart of (1) adopting a worst-case scenario is

  minimize_{p ∈ R^I} max_{δ ∈ U} J(p + δ)  subject to  G_m(p + δ) ≤ 0 for all δ ∈ U, m = 1, ..., M.  (5)

Here, the uncertainty set for the deviations δ is defined by

  U = {δ ∈ R^I | ||D^{-1} δ||_∞ ≤ 1},

where D is a scaling matrix containing the deviation bounds δ^u_i, with δ^l_i = −δ^u_i. The nested optimization problem (5) is hard to solve. A numerically feasible optimization problem is obtained by approximating the inner max problem, i.e., by applying a first order Taylor approximation of the objective function and the constraints with respect to p [13], i.e.,

  J(p + δ) ≈ J(p) + ∇_p J(p)^T δ,
  G_m(p + δ) ≈ G_m(p) + ∇_p G_m(p)^T δ,

for m = 1, ..., M. Inserting this approximation into (5), one obtains the linear approximation of the robust optimization problem:

  minimize_{p ∈ R^I} J(p) + ||D ∇_p J(p)||_1  subject to  G_m(p) + ||D ∇_p G_m(p)||_1 ≤ 0,  (9)

for m = 1, ..., M. Here, the dual norm || · ||_*, defined by

  ||y||_* = sup{y^T x | ||x|| ≤ 1},

has been used together with the property that the dual of ||D^{-1} · ||_∞ is given by ||D · ||_1. A further problem is introduced by the fact that the norms are not differentiable, which leads to a non-smooth optimization problem. A differentiable problem is obtained by introducing M + 1 slack variables ξ_0, ..., ξ_M ∈ R^I and reformulating (9) as

  minimize_{p, ξ_0, ..., ξ_M} J(p) + V^T ξ_0
  subject to  −ξ_0 ≤ D ∇_p J(p) ≤ ξ_0,
              G_m(p) + V^T ξ_m ≤ 0,
              −ξ_m ≤ D ∇_p G_m(p) ≤ ξ_m,  (11)

where m = 1, ..., M and V = [1, ..., 1]^T ∈ R^I. This optimization problem can now be solved efficiently. In addition to the quantities introduced in the previous section, second order sensitivities with respect to the design parameters are now required as well. This approach can be generalized to a quadratic approximation with respect to p as worked out in [29].
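The effect of the linearized worst-case robustification can be illustrated numerically. The sketch below evaluates the robustified objective J(p) + ||D ∇_p J(p)||_1 for an assumed toy objective and deviation matrix D (the nonsmooth form is minimized directly with a derivative-free method here; a production implementation would use the slack-variable reformulation instead):

```python
import numpy as np
from scipy.optimize import minimize

# Assumed parameter deviations of +/- 0.05 per parameter (scaling matrix D).
D = np.diag([0.05, 0.05])

def J(p):
    return (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2

def grad_J(p):
    return np.array([2.0 * (p[0] - 1.0), 2.0 * (p[1] - 2.0)])

def J_robust(p):
    # Worst case of the first-order term over ||D^-1 delta||_inf <= 1
    # equals the dual norm ||D grad J||_1.
    return J(p) + np.linalg.norm(D @ grad_J(p), 1)

# Nonsmooth at grad J = 0, hence a derivative-free method for this sketch.
res = minimize(J_robust, x0=np.zeros(2), method="Nelder-Mead")
```

For this convex toy problem the robust and nominal optima coincide; in general the 1-norm term pushes the optimum away from regions where the objective is steep in the uncertain parameters.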

Finite-Element Model
The behavior of the devices under consideration is determined by magnetic field phenomena and is simulated using a FE model.

Magnetoquasistatic Formulation
The magnetoquasistatic (MQS) subset of Maxwell's equations is considered. The design parameters p influence the material distribution, represented by the reluctivity ν(p) and the conductivity σ(p), as well as the excitations, represented by the applied current density J_src(p) in current carrying conductors and the magnetizing field strength H_m(p) of the permanent magnets. The MQS formulation in terms of the magnetic vector potential A(p) reads

  σ(p) ∂A(p)/∂t + ∇ × H(p) = J_src(p),  (12)

and is complemented with adequate boundary conditions. Eq. (12) encompasses the cases of linear, nonlinear and remanent magnetic materials, expressed by

  H(p) = ν(p) B(p),
  H(p) = H(B(p)),
  H(p) = ν(p) B(p) − H_m(p),

respectively. H(p) and B(p) = ∇ × A(p) are the magnetic field strength and the magnetic flux density. In the nonlinear setting, the formulation is treated by the Newton method, which is equivalent to using a linearized material relation and updating the tensorial differential reluctivity ν^(k)(p) and the magnetizing field strength H_m^(k)(p) in every Newton iteration k.

Finite-element discretization
The magnetic vector potential is discretized by lowest-order Nédélec edge shape functions w_j(x, y, z), i.e.,

  A(p) ≈ Σ_{j=1}^{N_dof} a_j(p) w_j,

where a_j(p) are the degrees of freedom and N_dof is the number of degrees of freedom. In the 3D case, the shape functions are associated with the edges of a tetrahedral mesh. In the 2D cartesian case, the edge shape functions are aligned with the z-axis and are constructed from the nodal shape functions N_j(x, y) associated with the nodes of a 2D mesh, i.e.,

  w_j = (N_j(x, y) / l_z) e_z,

where l_z is the length of the device in z-direction. In both cases, the discretization procedure leads to the system of equations

  K_ν(p) a(p) + M_σ(p) (d/dt) a(p) = j_src(p) + j_m(p),  (18)

where

  K_ν,ij(p) = ∫_{V_D} ν(p) (∇ × w_i) · (∇ × w_j) dV,
  M_σ,ij(p) = ∫_{V_D} σ(p) w_i · w_j dV,
  j_src,i(p) = ∫_{V_D} J_src(p) · w_i dV,
  j_m,i(p) = ∫_{V_D} H_m(p) · (∇ × w_i) dV,

and where V_D is the computational domain [35]. In the 2D case, V_D = S_D × [0, l_z], where S_D is the cross section of the device. Eq. (18) is further discretized in time by, e.g., an implicit Runge-Kutta method, linearized by the Newton-Raphson method and solved by a solution method for large sparse systems of equations [10,27].
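To make the 2D cartesian case concrete, the element-level contributions to K_ν and M_σ can be sketched as follows (a minimal illustration with assumed constant ν and σ per triangle, not the paper's implementation; it uses the fact that for w_j = N_j/l_z e_z the curl reduces to the rotated gradient of N_j scaled by 1/l_z):

```python
import numpy as np

def element_matrices(xy, nu, sigma, lz):
    """Element contributions to K_nu and M_sigma for one triangle of a
    2D cartesian model with z-directed edge functions w_j = N_j/lz e_z.
    xy: 3x2 array of vertex coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    # Gradients of the linear nodal functions N_j (constant per triangle)
    b = np.array([y2 - y3, y3 - y1, y1 - y2])
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    grads = np.stack([b, c], axis=1) / (2.0 * area)
    # K_e = (nu/lz) * int_S grad N_i . grad N_j dS
    K_e = (nu / lz) * area * grads @ grads.T
    # M_e = (sigma/lz) * int_S N_i N_j dS, with int N_i N_j = area/12 (1+delta_ij)
    M_e = (sigma / lz) * (area / 12.0) * (np.ones((3, 3)) + np.eye(3))
    return K_e, M_e

# Unit right triangle, unit materials and unit device length
K_e, M_e = element_matrices(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]),
                            nu=1.0, sigma=1.0, lz=1.0)
```

The global matrices in (18) are obtained by summing such element contributions over the mesh in the usual FE assembly loop.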

Geometry Parametrization
In the following, designs will be optimized with respect to geometric parameters. At first sight, the changing geometry necessitates the reconstruction of the computational mesh. This would, however, lead to unacceptably high computation times. Moreover, the unavoidable changes in mesh topology would introduce numerical noise which could mask the true sensitivity of the quantities of interest with respect to the geometric parameters. Two different types of parametrization are presented in the following. Affine decomposition (see e.g. [46]) is particularly appealing in the context of model order reduction and well-suited for parallelization. However, curved geometries cannot be represented exactly and additional approximation errors occur in this case. This is not the case for the second parametrization, which is based on the well-established concept of design elements [6] in combination with Non-Uniform Rational B-Splines (NURBS). Here, the mapping is not affine and more effort is needed for the update of the FE matrices and vectors. Another drawback is the difficulty of assuring the mesh quality during optimization. Yet, good results can be obtained for many shape optimization problems by either of the two methods, with moderate implementation effort. It should also be mentioned that nonparametric approaches to shape optimization [11] present a viable alternative and have already been applied to electric machines [16]. There, however, advanced techniques for both derivation and implementation are needed.

The geometry is decomposed into a domain V_D^0 that is unaffected by the geometric parameters and domains V_D^ℓ, ℓ = 1, ..., L, subject to geometric changes. The FE matrices K_ν(p) and M_σ(p) and vectors j_src(p) and j_m(p) can be partitioned accordingly, e.g.,

  K_ν(p) = K_ν^0 + Σ_{ℓ=1}^{L} K_ν^ℓ(p),

and similarly for M_σ(p), j_src(p) and j_m(p). Reference geometries V̂_D^ℓ, ℓ = 1, ..., L, and V̂_D^0 = V_D^0 are defined, as well as maps from V̂_D^ℓ to V_D^ℓ, given by f_ℓ : r̂ → r = f_ℓ(r̂), which depend on the geometric parameters p.

Affine Decomposition
In the case of affine decomposition, the domains V_D^ℓ, ℓ = 1, ..., L, are triangles or tetrahedra. Hence, the maps are affine and are referred to as f_ℓ^aff. These transformations shift the corners of the subdomains while preserving straight edges.
A key advantage of affine decomposition is that the Jacobian of each map is constant on its subdomain. As a consequence, the parameter dependence can be pulled out of the integrals and the FE matrices and vectors factorize into parameter-dependent scalar functions and parameter-independent factors, e.g.,

  M_σ(p) = Σ_q ϑ_q(p) M̂_σ^q,  j_src(p) = Σ_q ϑ_q(p) ĵ_src^q,

where M̂_σ^q and ĵ_src^q are assembled for the reference geometry only once. Additionally, the affine maps affect the curl operators in (19) and (22). A bit of calculation is needed to work out the transformed curl operators and the scalar products component-wise. For the 2D cartesian case, the results are

  K_ν(p) = ϑ_1(p) K̂_ν,xx + ϑ_2(p) K̂_ν,yy + ϑ_3(p) K̂_ν,xy + ϑ_4(p) K̂_ν,yx,
  j_m(p) = ϑ_5(p) ĵ_m,x + ϑ_6(p) ĵ_m,y,

where the matrix factors K̂_ν,xx, K̂_ν,yy, K̂_ν,xy, K̂_ν,yx and the vector factors ĵ_m,x and ĵ_m,y are assembled for the reference geometry in advance. Hence, the assembly of new FE matrices and vectors can be avoided during the optimization procedure. The functions ϑ_q(p) are simple scalar functions of the design parameters and are evaluated for each model instantiation.
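The factorized evaluation, including the analytic matrix derivative needed later for the sensitivities, can be sketched as follows (a toy with 2x2 stand-in factor matrices and an assumed axis-aligned stretch map x = p_1 x̂, y = p_2 ŷ, for which the scalar functions ϑ_q can be worked out by hand):

```python
import numpy as np

# Precomputed reference factor matrices (toy 2x2 stand-ins for the
# matrices K_hat assembled once on the reference geometry).
K_xx = np.array([[2.0, -1.0], [-1.0, 2.0]])
K_yy = np.array([[1.0, 0.0], [0.0, 1.0]])
K_xy = np.array([[0.0, 0.5], [0.5, 0.0]])
K_yx = K_xy.T

def thetas(p):
    """Scalar geometry functions for an axis-aligned stretch of one
    subdomain: theta_1 = sy/sx, theta_2 = sx/sy, cross terms invariant."""
    sx, sy = p
    return np.array([sy / sx, sx / sy, 1.0, 1.0])

def K_of_p(p):
    t = thetas(p)
    return t[0] * K_xx + t[1] * K_yy + t[2] * K_xy + t[3] * K_yx

def dK_dp0(p):
    # Analytic derivative w.r.t. p[0]; only theta_1, theta_2 depend on it.
    sx, sy = p
    return (-sy / sx**2) * K_xx + (1.0 / sy) * K_yy

K = K_of_p(np.array([2.0, 1.0]))
```

Each model instantiation thus costs only a few scalar evaluations and matrix scalings, and ∂K/∂p_i is available in closed form, which is exactly what the gradient-based optimizer needs.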

Design Element Approach
NURBS are a very general way to represent geometries and are widely used in CAD systems. Therefore, it seems natural to use the control points (and weights) of NURBS curves as design parameters [6,48]. This approach has received considerable attention in recent years as new approaches incorporating NURBS geometries into FE analysis have emerged. Isogeometric analysis [24] and the NURBS-enhanced FE method [49] are important examples. Here, NURBS are used for the geometry parametrization only. A triangular (tetrahedral) mesh is generated once and deformed using the well-established concept of design elements [6,25]. In the following, for simplicity, only the two-dimensional case is considered. A generic NURBS curve of degree p is given as

  C(u) = Σ_i R_i^p(u) P_i,

where P_i refers to a control point and the rational basis function R_i^p is defined in terms of the B-splines N_i^p and weights w_i as

  R_i^p(u) = w_i N_i^p(u) / Σ_j w_j N_j^p(u).

In total, L design elements are considered, each of which is represented by two NURBS curves C_1 and C_2. More precisely, a design element is defined by a map f_de : V̂_D = [0, 1]^2 → V_D given as a blend of the two curves over the reference square. Hence, design elements are given as Cartesian products of NURBS curves, whereas the affine decomposition may result in unstructured representations.
For each node (x_i, y_i) in V_D, its position (x̂_i, ŷ_i) in the reference domain [0, 1]^2 is computed in advance by solving

  f_de(x̂_i, ŷ_i) = (x_i, y_i),

e.g., with the Newton-Raphson method. Then, the mesh can easily be deformed by applying the parameter-dependent map f_de to all nodes (x̂_i, ŷ_i). The transformation of the FE matrices and vectors is more involved than for the affine decomposition described in Section 3.3.1. Each entry of the mass matrix is transformed as

  M_σ,ij(p) = ∫_{V̂_D} σ ŵ_i · ŵ_j |J_de(p)| dV̂,  (34)

where it is important to emphasize that the Jacobian determinant |J_de(p)| is not constant on each design element. A similar expression is obtained for j_src(p), whereas the conforming transformation of the curl operator yields

  K_ν,ij(p) = ∫_{V̂_D} ν (J_de(p) ∇̂ × ŵ_i) · (J_de(p) ∇̂ × ŵ_j) |J_de(p)|^{-1} dV̂.  (35)

In (34) and (35), the dependence of the integration domain on the geometry changes has been eliminated. Because the Jacobian J_de(p) can be expressed as a function of the geometry parameters p_i, the analytical derivative of the system matrix and of the right-hand side with respect to the geometry parameters can be determined.
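The NURBS evaluation and the mesh deformation can be sketched numerically. The snippet below evaluates a degree-2 NURBS curve on the knot vector {0,0,0,1,1,1} of the appendix (which reduces to rational Bernstein polynomials) and assumes, for illustration, a linear blend between the two bounding curves as the design-element map:

```python
import numpy as np

def bernstein2(u):
    """Degree-2 B-splines on knots {0,0,0,1,1,1} = Bernstein polynomials."""
    return np.array([(1 - u) ** 2, 2 * u * (1 - u), u ** 2])

def nurbs_curve(u, ctrl, w):
    """C(u) = sum_i R_i(u) P_i with R_i = w_i N_i / sum_j w_j N_j."""
    N = bernstein2(u)
    return (N * w) @ ctrl / (N @ w)

def f_de(uh, vh, C1, w1, C2, w2):
    """Assumed design-element map: linear blend between two NURBS curves
    bounding the reference square [0,1]^2."""
    return (1 - vh) * nurbs_curve(uh, C1, w1) + vh * nurbs_curve(uh, C2, w2)

# Quarter circle of radius 1 as a quadratic NURBS (weights 1, 1/sqrt(2), 1)
C1 = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w1 = np.array([1.0, 1.0 / np.sqrt(2.0), 1.0])
# Concentric quarter circle of radius 2 as the second bounding curve
C2, w2 = 2.0 * C1, w1

pt = f_de(0.5, 0.5, C1, w1, C2, w2)  # a reference node mapped into the annulus
```

Moving a control point P_i (a design parameter) deforms all mapped nodes smoothly, without any remeshing, which is the point of the design element approach.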

Sensitivities
After differentiating the FE system, a new linear system for the derivatives of the degrees of freedom with respect to the geometry parameters is obtained:

  K_ν(p) s_i(p) = ∂j_src(p)/∂p_i + ∂j_m(p)/∂p_i − (∂K_ν(p)/∂p_i) a(p),  (36)

where s_i(p) = ∂a(p)/∂p_i are the sensitivities of the FE solution (the magnetostatic case is shown for brevity). To calculate all s_i, I equations of the form (36) have to be solved. In the case of affine decomposition, the derivatives of K_ν are easily calculated from (23) and (27) using expressions for ∂ϑ_q(p)/∂p_i, which are known analytically as derivatives of the functions ϑ_q(p). The expressions become more involved when NURBS are involved, yet closed-form formulas also exist in this case.
The optimization algorithm requires the derivatives of the objective function with respect to each design parameter. Often, the objective function does not explicitly depend on the design parameters, i.e., J(p) = J̃(a(p)). In this case, the derivatives are given as

  ∂J(p)/∂p_i = (∂J̃/∂a)^T s_i(p).

For a large number of parameters, an adjoint method should be used instead [56].
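The sensitivity system and the chain rule can be sketched together on a toy parametrized system (the 2x2 matrix and right-hand side are assumptions standing in for the FE system; in the paper the matrix derivative comes from the affine decomposition or the design elements):

```python
import numpy as np

# Toy parametrized system K(p) a = j, the analogue of the FE system.
def K(p):
    return np.array([[2.0 + p, -1.0], [-1.0, 2.0]])

def dK_dp(p):
    return np.array([[1.0, 0.0], [0.0, 0.0]])

j = np.array([1.0, 0.0])  # parameter-independent right-hand side here

def solve_with_sensitivity(p):
    a = np.linalg.solve(K(p), j)
    # Differentiating K a = j gives  K s = dj/dp - (dK/dp) a  (dj/dp = 0 here)
    s = np.linalg.solve(K(p), -dK_dp(p) @ a)
    return a, s

# Objective J(p) = J_tilde(a(p)) = a^T a; chain rule gives dJ/dp = 2 a^T s
p0 = 1.0
a, s = solve_with_sensitivity(p0)
dJ = 2.0 * a @ s
```

The factorization of K(p) computed for the field solution can be reused for all I sensitivity solves, so the extra cost per gradient is small compared to I + 1 full FE solves.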

Example 1: Die Press Mold
As a first example, a die press mold for radially magnetizing a segment of sintered magnetic powder (SMP) is considered [53]. This problem has been proposed as Testing-Electromagnetic-Analysis-Methods (TEAM) benchmark problem 25 [52] and has been used in numerous papers for comparing optimization algorithms. The vast majority of these publications apply and compare stochastic optimization methods [31,51], possibly combined with surrogate models [8], uncertainty quantification [38], multi-objective optimization or a combination thereof [30]. Only a few papers (e.g. [1] and [4]) choose deterministic methods, again possibly combined with surrogate models [20], uncertainty quantification [54] or multi-objective optimization. This paper addresses one of the main drawbacks of deterministic methods, i.e., the consideration of geometric parameters. For this example, the design element approach is used. The SMP segment is arranged between a cylindrical inner pole and a more generally shaped outer pole (Fig. 1). The original TEAM-25 problem considers an outer pole with an elliptical inner surface. Here, the inner surface is described by a spline. This is motivated by the fact that splines are currently the basic building block for mechanical processing. The considered design parameters are:
p_1: radius of the inner yoke;
p_2, p_3: semiaxes of the ellipse between points i and j;
p_4: x-coordinate of points m and k.
Both the circle and the ellipse are exactly represented by NURBS curves. The relation between the geometric parameters and the NURBS control points is given in the appendix.
The optimization aims at a homogeneous, radially oriented magnetic flux density of B_goal = 0.35 T inside the SMP segment. The objective function J(p) is defined as the mean squared error between the simulated magnetic flux density and the goal at 9 sample points equidistantly distributed along the arc with radius r_smp between points e = (r_smp, 0) and f = (r_smp cos ϕ_f, r_smp sin ϕ_f), i.e.,

  J(p) = (1/9) Σ_{k=1}^{9} |B(r_smp e_k; p) − B_goal e_k|^2,  (38)

where ϕ_k = ϕ_f (k−1)/8 and e_k = (cos ϕ_k, sin ϕ_k). The optimization problem reads:

  minimize_p J(p)  subject to  p ∈ F,  (39)

where the admissible set F is defined by box constraints on the design parameters p_1, ..., p_4. For the gradient, the derivatives of J with respect to the geometry parameters p_i are needed. Before applying the chain rule to (38), the derivatives of the degrees of freedom with respect to the geometry parameters, ∂_{p_i} a, are calculated as described in Section 3.3.2.
The performance of a standard algorithm for Particle Swarm Optimization (PSO), of the Sequential Quadratic Programming (SQP) method implemented in MATLAB's fmincon function [42] and of an own implementation of SQP are compared in Table 1. Both SQP implementations use the analytical gradients, the BFGS formula for updating the Hessian and a sufficient decrease condition in a merit function. For the PSO, a set of 40 particles is considered and the implementation is multi-threaded, while the gradient-based methods are single-threaded implementations. The termination criterion for the PSO algorithm is the number of stall iterations, which was set to 5. The PSO actually finds the optimum after 2 iterations. This is because the optimum is at a vertex of the box-shaped domain and all the particles leaving the admissible region are projected onto the boundary. All three methods converge to the same optimum. The deterministic algorithms are substantially faster than PSO, even though PSO exploits parallelization.

Table 1: Results from the optimization of the die press mold with particle swarm optimization (PSO), trust region (TR) (with MATLAB's fmincon) and an own implementation of sequential quadratic programming (SQP) combined with the design element approach.

Example 2: Permanent-Magnet Synchronous Machine

Design parameters
The second example is a 3-phase, 6-pole permanent-magnet synchronous machine (PMSM) borrowed from [41] (Fig. 2) and already studied as an optimization example in [5]. The stator features two slots per pole and per phase with a conventional distributed double-layer winding. The rotor contains buried rare-earth magnets. The yoke parts are laminated. The design parameters are:
p_1: width of the PM;
p_2: thickness of the PM;
p_3: distance from the PM to the rotor surface.

Objective function
The optimization goal is to minimize the cross-sectional size S_pm = p_1 p_2 of the PM material while preserving a prescribed electromotive force E_0. The electromotive force (EMF) E_0(p) is post-processed from a magnetostatic solution of a 2D FE model of the PMSM using the loading method proposed in [45]. For that purpose, the FE solution of the z-component of the magnetic vector potential is sampled on a circle (or, in the case of a partial machine model, an arc) in the PMSM's air gap, yielding

  A_z(r_ag, ϕ) ≈ Â_z,eff √2 sin(N_p ϕ − ϕ_d),

where N_p = 3 is the pole-pair number, Â_z,eff is the rms magnitude of the fundamental harmonic component and ϕ_d is the angle of the PMSM's direct axis. The EMF is then found from

  E_0(p) = 2 ω_syn N_w k_w l_z Â_z,eff(p),

where ω_syn is the synchronous speed and N_w is the number of windings per phase. The winding factor is

  k_w = [sin(q α_el/2) / (q sin(α_el/2))] · sin(π τ_c / (2 τ_p)) · [sin(ε/2) / (ε/2)],

where q is the number of coil sides per phase belt, α_el is the electric angle between two slots, τ_c is the coil pitch, τ_p is the pole pitch and ε is the electric skew angle [36,45].
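The winding factor computation can be sketched as follows (the product of distribution, pitch and skew factors is a common textbook form, assumed here to match the intent of [36,45]; the example values are illustrative, not the machine data of [41]):

```python
import numpy as np

def winding_factor(q, alpha_el, tau_c_over_tau_p, eps):
    """Winding factor as the product of distribution, pitch and skew
    factors (standard textbook form; an assumption for this sketch)."""
    k_d = np.sin(q * alpha_el / 2.0) / (q * np.sin(alpha_el / 2.0))
    k_p = np.sin(np.pi * tau_c_over_tau_p / 2.0)
    k_s = 1.0 if eps == 0 else np.sin(eps / 2.0) / (eps / 2.0)
    return k_d * k_p * k_s

# q = 2 coil sides per phase belt, 30 deg electrical slot angle,
# full-pitch coils, no skew
k_w = winding_factor(2, np.pi / 6.0, 1.0, 0.0)
```

The EMF constraint in the optimization then only requires Â_z,eff from the FE solution; all other factors are machine constants.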

Optimization problem
The optimization problem reads

  minimize_{p ∈ R^3} S_pm(p) = p_1 p_2  subject to  G_m(p) ≤ 0, m = 1, ..., 7.

The first four constraints are related to the lower (p^l) and upper (p^u) bounds of p: (p^l_1, p^l_2, p^l_3) = (1, 1, 5) mm and (p^u_1, p^u_2, p^u_3) = (∞, ∞, 14) mm. To ensure the validity of the affine decomposition, i.e., to prevent intersections, the fifth constraint p_2 + p_3 ≤ 15 mm is added. The sixth constraint is a design constraint enforcing that each PM keeps a sufficient distance to the rotor surface, especially for wide PMs. The last constraint expresses the requirement to fulfill the prescribed EMF. Since the EMF is post-processed from the FE solution, the optimization problem actually has a PDE constraint.

Results
The results for 5 different optimization methods are collected in Table 2.
1. The first optimization run is carried out with the genetic algorithm implemented in MATLAB.
2. The second optimization run is carried out with MATLAB's PSO implementation. To circumvent the restriction to box-shaped parameter domains, the admissible set is enforced by a penalty term. The new objective function reads J_pen(p) = J(p) + 2 J(p) f(max(p_2 + p_3 − 15, 0)), where f(t) = e^{4 t^{0.1}} − 1 was chosen heuristically such that J_pen grows exponentially if one of the constraints is violated. The function J_pen was called 4740 times, but was organized so as to only evaluate the nonlinear constraint if all other constraints were satisfied. The number of particles was set to 30, the maximum number of stall iterations to N_stall = 15 and the function change tolerance to 10^{-6}. The PSO characteristic constants were chosen as ω_0 = 0.5 and ω_1 = ω_2 = 1.49. The algorithm took 157 iterations before termination.
3. The third optimization is carried out with an own PSO implementation, for the original objective function J(p) and applying the nonlinear constraints directly. Here, it is assumed that the admissible set is convex, such that points inside the convex hull formed by all previous points do not need to be checked. 50 particles were used. Termination was enforced after at most N_it,max = 100 steps or when N_stall,max = 15 stall iterations were observed.
4. The fourth run was done with a deterministic method, relying upon FE simulations equipped with an affine decomposition of the geometry as described in subsection 3.3.1.
5. The fifth run was done with a deterministic method for robust optimization, again with affine decomposition of the geometry.
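The penalty construction of the second run can be sketched directly from the formulas in the text (the objective here is the PM size S_pm = p_1 p_2; the sample parameter values are illustrative):

```python
import numpy as np

def J(p):
    """Stand-in for the PM size objective S_pm = p_1 * p_2 (in mm^2)."""
    return p[0] * p[1]

def f(t):
    # Heuristic penalty shape from the text: f(t) = exp(4 t^0.1) - 1
    return np.exp(4.0 * t ** 0.1) - 1.0

def J_pen(p):
    """Penalized objective enforcing the constraint p_2 + p_3 <= 15 mm."""
    violation = max(p[1] + p[2] - 15.0, 0.0)
    return J(p) + 2.0 * J(p) * f(violation)

inside = J_pen(np.array([20.0, 4.0, 10.0]))   # feasible: 4 + 10 <= 15
outside = J_pen(np.array([20.0, 4.0, 12.0]))  # violated by 1 mm
```

Because f(t) grows extremely fast (f(1) = e^4 − 1 ≈ 53.6), even a 1 mm violation inflates the objective by two orders of magnitude, which steers the box-constrained PSO back into the admissible set.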
The three stochastic algorithms were run on a 64 GB RAM machine with Intel Xeon E5-2630 v4 processors. Both deterministic algorithms were run on a 16 GB RAM machine with an Intel Core i7-5820K processor (3.30 GHz).
The results of all optimization procedures are compared with the values of the initial design. All routines achieve a substantial decrease of the PM size from 133 mm² down to about 63 mm². The price for robustness is a slightly larger size of about 77 mm². The deterministic methods outperform the stochastic ones by two orders of magnitude. This impressively illustrates the major message of this paper: deterministic optimization methods, accompanied by FE analysis providing gradients with respect to geometric parameters, should be favored over stochastic methods, at least for the class of problems considered here.

Conclusion
Affine decomposition and design element approaches are capable of parametrizing the geometry of finite-element models such that accurate derivatives with respect to geometric parameters become available. This alleviates one of the major drawbacks of gradient-type deterministic optimization methods. For the example of a die press mold, standard sequential quadratic programming combined with the design element approach outperforms particle swarm optimization by more than a factor of ten. The second example illustrates the applicability of gradient-type robust optimization combined with an affine decomposition of the geometry for a permanent-magnet synchronous machine. Supported by the substantial improvement in computational efficiency, this paper stands up for a revival of deterministic methods for numerical optimization in electrotechnical design procedures.
Appendix

The degree of the basis functions is p = 2. The corresponding knots are K = {0, 0, 0, 1, 1, 1}. The deformation of the mesh inside one design element region V_D with N_V vertices is given by

  (x_i, y_i) = f_de(x̂_i, ŷ_i; p),  i = 1, ..., N_V,

where (x_i, y_i) are the coordinates of the vertices of the deformed mesh and (x̂_i, ŷ_i) are the coordinates in the reference domain [0, 1]^2.