Multi-criteria optimization of an aircraft propeller considering manufacturing: structural design optimization simultaneously considering a knowledge-based manufacturing model and a structural model

Carbon-fiber-reinforced polymers are gaining ground; their high mass-specific stiffness and strength in fiber direction are commonly identified as the reasons. Nonetheless, unleashing their entire lightweight potential poses great challenges. For instance, the multitude of parameters (e.g. fiber orientations) are linked with each other and strongly influence not only the structural mechanics, but also the manufacturing effort. Ideally, these parameters need to be determined such that mass and costs are minimal, while all structural and technical requirements are fulfilled. This paper mainly addresses the challenge of considering manufacturing aspects along with structural mechanics. It outlines an approach for modeling manufacturing effort via expert knowledge and shows how to consider this model in a multi-criteria optimization framework. In addition, it is shown how to incorporate such knowledge-based models into an efficient structural design optimization. To this end, a braided propeller structure is optimized.


Introduction
In the optimal design of composites, multiple requirements and goals, each possibly originating from a different field, form design criteria and constraints. Optimizing composite structures thus often leads to multi-criteria and multidisciplinary optimization (Sobieszczanski-Sobieski et al. 2015).
The choice of fiber material should thus consider mechanical as well as technical, economical and ecological aspects. Further underpinning this, the mechanical focus lies on designing as light as possible, yet this may induce conflicts with technical criteria, where high manufacturing effort may be observed (Schatz and Baier 2014a). Manufacturing effort is frequently assessed by numerical process simulations or analytical models (Ghiasi et al. 2010). However, neither approach may be applicable, since numerics can be computationally expensive and analytics not comprehensive enough. Herein, a soft computing approach is therefore used to generate a model based on expert knowledge (Schatz and Baier 2014a, b, 2015). This approach has already been applied successfully to extruded aluminum profiles by Wehrle and Baier (2016). The methods used to form such knowledge-based models are associated with soft computing (Hajela 2002). There are similar approaches exploiting this, e.g. Pillai et al. (1997), where the cure of thick composites is considered via a knowledge-based system, and Iqbal et al. (2007), where a fuzzy-rule-based system is used to model a hard-milling process. Further examples are given by Huber and Baier (2006) and Zhou and Saitou (2015). However, regarding generality, flexibility in the choice of algorithm, formalization of the optimization task and the level of detail of the manufacturing model, there is room for improvement. For the optimization to be as technically relevant as possible, it evidently needs to comprise structural aspects as well. Therefore, a holistic optimization frame, depicted in Fig. 1 and inspired by Baier et al. (1994), will also be outlined in this paper. Furthermore, since multiple criteria need to be considered throughout the optimization, the optimization approach needs to be augmented.
Edgeworth was among the first to address multi-criteria problems (Edgeworth 1881) by describing possible settlements between conflicting consumer interests. Considerable contributions were made by Pareto, e.g. Pareto (1906). In Baier (1977, 1978) and Stadler (1984), applications in structural design showed the huge potential of simultaneously considering multiple criteria and thereby yielding optimal compromises. Sobieszczanski-Sobieski et al. (2015) cover most of the mentioned aspects, namely multiple objectives, multiple disciplines and knowledge engineering.
Herein, the propeller is braided. Thus, the stacking of the different layers is defined by textile preforms, called braids. These braids are formed by continuously placing dry fiber bundles onto a mandrel, which defines the structure's shape, while the fiber carriers (horn gears) are counter-rotating and thus realize a distinct fiber architecture (Mazumdar 2002). This is illustrated in Fig. 2.
In this work, basic fundamentals on multi-criteria optimization and soft computing will be given first. This section is followed by the discussion of the structural model and particularly the effort model. Then, the outcomes of various optimizations, which vary for instance in the considered criteria set, will be discussed. Lastly, the paper is concluded with a summary and an outlook.

Mathematical basis of multi-criteria optimization
A general multi-criteria optimization statement is defined by

min_x f(x)   s.t.   g(x) ≤ 0,   (1)

with x, f and g being the vector of design variables, the vector of objectives (or, more frequently in this context, the criteria vector) and the vector of inequality constraints (Stadler 1988). The attainable criterion space Y is depicted in Fig. 3, where, in addition to Y, the best and worst fictitious criteria values, the utopia point and the nadir point, are shown, e.g. (Marler and Arora 2004). Moreover, one can find the set of optimal criteria values, referred to as Pareto efficient solutions E, and the so-called Pareto frontier P in Fig. 3. The Pareto frontier is herein understood as a superset of all Pareto efficient solutions in the general case, thus P ⊇ E. In the following, an approach capable of approximating the Pareto frontier P will be presented. This approach is gradient-based, since this proved to be the most efficient kind of optimization algorithm for the discussed structural composite problem. Reasons for this are the large design space, the availability of analytical gradients and the lack of multi-modality of the involved optimization responses. An overview of solution methods is given by Marler and Arora (2004).

Solution technique
In this paper, the approach first introduced by Pereyra (2009) has been chosen, since it performed best in terms of numerical effort and equispaced solutions. The key idea is to solve for discrete points f_opt^p of the Pareto frontier P and thus approximate it as

P ≈ { f_opt^p, p = 1, …, n_p }.   (2)

Figure 4 illustrates this approximation.
Each of those solutions f^p is locally Pareto efficient and is computed from a scalarized, i.e. substitute single objective, optimization problem; hence, n_p single objective optimization tasks need to be solved to obtain P. The scalarized single objective problem is thereby defined as

min_{x, κ} f_κ = κ f_1 + (1 − κ) f_2   s.t.   g(x) ≤ 0,  g_BS ≤ 0,  h_γ = 0.   (3)

As can be observed, the scalarized problem stated via equation set (3) is extended by the additional design variable κ, being the objective weight for the scalarization, and by two augmented constraints,

g_BS := −(f − f^{p−1})^T (f^{n_p} − f^1) ≤ 0,   h_γ := ‖f − f^{p−1}‖_2 − γ = 0.   (4)

Both constraints are given with equation set (4) and are also illustrated in Fig. 5.
The first constraint, the inequality g_BS, ensures that the algorithm does not step back, i.e. that no previously computed solution is computed again. h_γ, on the other hand, ensures that the final approximation of the Pareto frontier is equispaced, thus displaying equal distance between each pair of subsequent solutions. Details about their implementation and influence are discussed in Pereyra et al. (2013). For this to work properly, the distance between two subsequent solutions is restricted to

γ = c ‖f^{n_p} − f^1‖_2 / (n_p − 1),   (5)

with the anchor points f^1 and f^{n_p} and the chord factor c being a parameter accounting for the convexity of the Pareto frontier. Herein, c is set to 1.2, since most Pareto frontiers are slightly bent. Finally, the gradient-based approach for computing equispaced Pareto frontier approximations is given in form of a flowchart in Fig. 6.
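As a rough illustration, the sequential solution of the scalarized problems with a backstep inequality g_BS and an equispacing equality h_γ can be sketched as follows. The bi-criteria test function, the exact constraint forms and all parameter values are hypothetical stand-ins, not the paper's propeller model, and scipy's SLSQP replaces NLPQLP here:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Hypothetical convex bi-criteria test problem standing in for (mass, -frequency)
a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def f(x):
    return np.array([np.sum((x - a) ** 2), np.sum((x - b) ** 2)])

f_anchor_1, f_anchor_2 = f(a), f(b)        # individual minima of f1 and f2
n_p, c = 9, 1.2                            # number of points, chord factor
gamma = c * np.linalg.norm(f_anchor_2 - f_anchor_1) / (n_p - 1)
direction = f_anchor_2 - f_anchor_1        # rough advance direction along P

front, x_prev = [f_anchor_1], a.copy()
for p in range(1, n_p):
    f_prev = front[-1].copy()
    # scalarization f_kappa = kappa*f1 + (1-kappa)*f2, with kappa = z[2] a variable
    scal = lambda z: z[2] * f(z[:2])[0] + (1 - z[2]) * f(z[:2])[1]
    h_gamma = NonlinearConstraint(         # equispacing between solutions
        lambda z: np.linalg.norm(f(z[:2]) - f_prev) - gamma, 0.0, 0.0)
    g_bs = NonlinearConstraint(            # no stepping back along the frontier
        lambda z: (f(z[:2]) - f_prev) @ direction, 0.0, np.inf)
    res = minimize(scal, np.append(x_prev, 0.5), method="SLSQP",
                   constraints=[h_gamma, g_bs], bounds=[(0.0, 1.0)] * 3)
    x_prev = res.x[:2]
    front.append(f(x_prev))

front = np.array(front)   # equispaced approximation of the Pareto frontier
```

Each loop iteration corresponds to one box of the flowchart in Fig. 6: a single-objective solve whose solution seeds the next one, walking along the frontier in steps of length γ.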

The concept to define knowledge-based effort models
Now, the developed concept for generating effort models based on expert knowledge will be introduced. These models require only a fraction of the computational effort of conventional process simulations, while being similarly comprehensive. The developed concept can basically be separated into two levels: knowledge engineering and soft computing. Figure 7 brings both levels into context with the acquisition of knowledge and how it is numerically emulated.
The level of knowledge engineering This level represents the challenge of acquiring knowledge from domain-specific experts and the subsequent translation into a well-structured knowledge base (KB), compliant with numerical implementation. In this regard, it can be considered an intermediate step towards the final knowledge-based system (KBS) (Feigenbaum and McCorduck 1983). At first, one gathers fundamentals from sources like literature, existing data bases, norms or even preliminary numerical analyses. The knowledge engineer (KE), i.e. the engineer who acquires the expert knowledge, thereby becomes familiar with the terminology and peculiarities of the domain-specific topic (Buchanan and Shortliffe 1984). Thereafter, acquisition of knowledge is realized by conducting multiple interviews, each ideally guided by a questionnaire. The questionnaires shall facilitate a fluent, unbiased and unbroken interview, also allowing the KE to comprehensively record it by making short notes. With Fig. 8, two questions of the developed electronic and hence interactive questionnaire are given. The first question, i.e. Example 1, is regarded as open, because the range of possible answers is not restricted, whereas the second, Example 2, is a closed one, since a specific ratio is queried. Obviously, the open question type is often used when the KE explores new domains of the to-be-modeled topic. Opposed to that, the closed ones help to deepen and detail the KB. The questions shall thus evolve from open to closed types throughout the iterative acquisition process. Besides, generality of the deduced KB can be guaranteed when the KE abstains from including absolute numbers and uses relative definitions instead. For instance, the query of a minimal ratio R/D is generally valid, whereas asking for limits on the radius R can be misleading or dependent on the expert's working field.
Next, the interview records are translated into the knowledge base. Figure 9 provides an extract of the interview records interpreted by the KE in terms of a fuzzy membership function. This function will be explained in the following paragraph. It shall, however, be noted that the membership function (given in blue) is able to address the imprecision due to the qualitative nature of the records. Furthermore, one can observe that the expert, and thus the manufacturing process itself, dictates the shape of the membership function.
The level of soft computing is characterized by the knowledge-based system itself. Here, a fuzzy inference system (FIS) is used to evaluate and reason about information described by the knowledge priorly acquired from experts. This FIS is based on the fuzzy logic arithmetic initiated by Zadeh (1994) and its inputs, inference rules and outputs are arranged according to Mamdani and Assilian (1975), which makes it a Mamdani FIS. This FIS has, however, been extended in various ways. Figure 10 illustrates a general Mamdani FIS, where crisp input variables x_i are first translated into fuzzy sets (fuzzification to μ_j(x_i)) and evaluated rule-wise (inference of each rule j). The output of each rule μ_r^j (implication) is combined to one single output (aggregation to μ_r) and translated back to a crisp value (defuzzification). The implication and aggregation can be realized by different methods, each representing either AND or OR rules. In Fig. 10, the minimum rule is used for the implication and the maximum rule for the aggregation. Despite their wide range of applicability, there exists a wide range of further rule definitions in literature and praxis. Defuzzification is most frequently accomplished by computing the center of gravity of the aggregated output function, thus

r = ∫ μ_r(r) r dr / ∫ μ_r(r) dr.

The Mamdani FIS has been extended such that the weights leading to a certain output are utilized to reveal the reasons causing that specific output. Thus, the knowledge-based system founded on such a FIS is able not only to evaluate complex situations, but moreover to provide reasoning about the decisions made and to give advice for optimal improvement. This extension was realized by fetching the arguments of the evaluated rules j and consequently querying the major contributor to the output of a given rule.
This is shown next, at first for the aggregation rule and then for the query of the key argument of the implication,

j_Active = arg max_j μ_r^j ,   i_Active = arg min_i μ_{j_Active}(x_i),

where j_Active represents the most relevant rule and i_Active the decision-dominating input variable index.
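A minimal sketch of such an extended Mamdani FIS is given below. The two rules, the triangular membership shapes and all numerical values are invented for illustration and do not reproduce the paper's braiding knowledge base; only the mechanics (min implication, max aggregation, centroid defuzzification, and the reasoning extension) follow the description above:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def evaluate(phi, ratio, r_grid=np.linspace(0.0, 1.0, 201)):
    # Fuzzification of the crisp inputs (linguistic sets are assumptions)
    mu = {"steep":    tri(phi, 45.0, 70.0, 95.0),    # braid angle in degrees
          "moderate": tri(phi, 20.0, 45.0, 70.0),
          "small":    tri(ratio, -0.3, 0.0, 0.3)}    # e.g. an r/D ratio
    # Rule antecedents; rule 0: steep AND small -> high effort, rule 1: moderate -> low
    antecedents = [("steep", "small"), ("moderate", None)]
    w = np.array([min(mu["steep"], mu["small"]), mu["moderate"]])  # min = AND
    # Implication (min) clips the output sets, aggregation (max) combines them
    out = np.maximum(np.minimum(w[0], tri(r_grid, 0.5, 0.8, 1.1)),   # "high"
                     np.minimum(w[1], tri(r_grid, -0.1, 0.1, 0.4)))  # "low"
    effort = (out * r_grid).sum() / out.sum()        # centroid defuzzification
    # Extension: the dominating rule and, within it, the weakest input of the
    # min implication, i.e. the decision-dominating argument
    j_active = int(np.argmax(w))
    labels = [l for l in antecedents[j_active] if l is not None]
    i_active = labels[int(np.argmin([mu[l] for l in labels]))]
    return effort, j_active, i_active

effort, j, i = evaluate(phi=70.0, ratio=0.1)
```

For a steep braid angle combined with a small edge radius ratio, the sketch reports a high effort level, names the high-effort rule as the reason, and points to the small ratio as the input to improve, mirroring the reason/advice outputs described above.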
Developed models and problem statement

Propeller design
The developed methods will herein be demonstrated on a propeller blade of a piston engine airplane. For such a propeller blade, two critical loading conditions can be identified: maximum thrust T_max and windmilling T_min. The latter is characterized by an operation of the propeller similar to a windmill, thus minimal thrust. In addition to these two loading conditions, the modal performance of the propeller is relevant as well. Hence, the first three natural frequencies ω_i need to be higher than the rotary excitation frequency, i.e. the revolutions of the blade. Table 1 and Fig. 11 provide these frequencies for the initial design. Multiple analyses and a review of the literature (Theodorsen 1948) revealed that, for our case, the aerodynamics and mechanics can reasonably be separated. By doing so, however, coupling effects such as flutter are not considered here. Therefore, in the following section a computational fluid dynamics (CFD) simulation is conducted so as to compute the pressure field for different configurations, which is then passed to the structural simulations.

Determining the pressure fields via fluid simulations
The flow around the propeller is described by the conservation of mass, momentum and energy. The latter is used to consider compressibility effects arising here due to the high rotational speed of the propeller. These three equations are known as the compressible Navier-Stokes equations (Tu et al. 2008). For modeling turbulence effects, the shear stress transport (SST) eddy-viscosity model is used. So as to account for the inertia effects of the rotating propeller, the discretized flow domain is divided into a rotary domain with a moving reference frame according to a frozen rotor concept and a stationary domain covering the far flow field (Kumar and Wurm 2015). Both are connected via an interface and are schematically given in Fig. 12. It shall be noted that the fluid domain is by far larger than the propeller domain, causing the CFD simulation to be computationally expensive.
With Fig. 13, one of the computed pressure fields is given. The pressure fields are mapped onto the structural model in a subsequent step (see Table 2).

Parametrized structural model
In Fig. 14, the parametrized structural model is given. This geometric model has been set up by taking advantage of Abaqus CAE. As can be observed from the presence of three distinct braid layers, e.g. the thicknesses t_1,2,3, the propeller blade is over-braided multiple times. For parameterizing the braid angles of the three layers (i ∈ {1, 2, 3}), a function of the blade coordinate r (Fig. 14) with slope α at r = 0 was used. With Table 2, an overview of the implemented solution sequence is given. The first two steps, evaluating the stability of the structure, are optional and are only activated once an optimum is post-processed. This is mainly because all found optima were non-critical regarding buckling; however, it needed to be verified that this is the case.

Derived meso-scale material model
As mentioned earlier, braiding was chosen as the manufacturing technique for this propeller structure (see Fig. 2). Since this technique, as the name already indicates, yields a rather complex fiber architecture, which significantly influences the mechanical properties, for instance through undulations, a specific material model addressing this was required as well. In addition to an accurate rendering of the mechanical properties, such as the stiffness of the braid, it was also indispensable that the material model be numerically efficient, due to its embedding in the optimization framework. So that these conflicting requirements can be met, it was decided to first build up a numerical unit cell model for high fidelity homogenization analysis. The analysis was performed in Abaqus, where the TexGen libraries (Lin et al. 2011) were used to generate the unit cell as depicted in Fig. 15.
Subsequently, a meta model is built upon this homogenization model. A polynomial regression of order two was used to determine the response surface approximation given in Fig. 16.
It was shown by comparison with further numerical analyses and experimental investigations that the encountered error is less than five percent. The error bars are also given in Fig. 16 as (I). More on this can be found in Schatz and Baier (2014a).
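The construction of such an order-two response surface can be sketched with an ordinary least-squares fit. The sample values below are invented stand-ins for homogenized unit-cell results, not the paper's data; only the regression procedure is what the text describes:

```python
import numpy as np

# Illustrative samples of a homogenized braid stiffness E11 [GPa] over the
# braid angle phi [deg]; the numbers are invented, not the paper's data.
phi = np.array([15.0, 25.0, 35.0, 45.0, 55.0, 65.0])
E11 = np.array([95.0, 78.0, 55.0, 38.0, 27.0, 21.0])

# Order-two polynomial regression: E11 ~ c0 + c1*phi + c2*phi^2
A = np.vander(phi, 3, increasing=True)          # columns: 1, phi, phi^2
coef, *_ = np.linalg.lstsq(A, E11, rcond=None)

def rsa(p):
    """Cheap meta model replacing the unit-cell homogenization in the loop."""
    return coef[0] + coef[1] * p + coef[2] * p ** 2
```

Once fitted, evaluating `rsa` costs a few floating-point operations, which is what makes embedding the material model in the optimization loop affordable.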

The manufacturing effort model
The manufacturing effort model (MEM) was formed based on the concepts introduced in Section 2.3. Thus, multiple interviews with braiding experts were conducted and thereafter condensed into a knowledge base, which was then incorporated into a knowledge-based system (KBS). The final KBS is able to evaluate the level of manufacturing effort associated with the technical input variables y_tech. The input variables are: the braiding angle ϕ, profile circumference, curvature of the mandrel axis, aspect ratio, edge radii and number of layers.
With Fig. 17, the response surface of the MEM's primary output, the manufacturing effort, is plotted over a subset of input variables, namely the braiding angle ϕ and the edge radius r. As can be presumed, the red color reflects a high effort level and blue the opposite case. In addition to this primary output, the secondary outputs reason R and advice A are given for multiple distinct points as well. These secondary outputs basically provide insight into how the MEM decided on a certain effort level (reason R) and how the situation may be improved most effectively (advice A). Therefore, these outputs not only support the verification of the MEM's decisions, but also enable the use of the MEM as an independent CAE tool giving live feedback to the designer. The definition of manufacturing effort as a qualitative measure was made deliberately. The main reason is that this qualitative measure enables a certain degree of abstractness, which in turn leverages the MEM to a high level of generality. For instance, the alternative, cost, would lead to a model possibly being valid for one single company, because all parameters, except the mechanical and physical ones, are situation and country dependent. Examples are shift patterns, labor, capital and plant surface costs, maintenance patterns et cetera (Mazumdar 2002).

Deducing the optimization model
Both analyses, the finite element analysis (FEA) and the manufacturing effort analysis (MEA), are conducted for each update of the design variables x so as to determine the optimization responses (f, g), making the optimization a multi-criteria (vector of objectives f) as well as a multi-disciplinary one. Herein, the analyses are conducted sequentially, since a subset of the mechanical responses is required by the MEM (r_mech → y_tech), e.g. geometric parameters created by the pre-processing of the FEA (Fig. 18).
Equation (11) states how the gradients are computed for the objective f; a similar expression can be stated for the constraints:

df/dx = ∂f/∂x |_(I) + ∂f/∂r_mech · ∂r_mech/∂x |_(II) + ∂f/∂r_tech · ∂r_tech/∂x |_(III).   (11)

Figure 18 shows the computation of the optimization responses (f, g), composed of the mechanical responses r_mech and the technical responses r_tech, for a given design vector x and a set of constant parameters p. The first partial derivative (I) is the direct dependency on the design variables and is hence explicitly given. (II) is computed implicitly but yet analytically by the FEA, e.g. for the displacement sensitivity ∇u = K^{-1}{∇f − (∇K)u}. The last partial derivative (III) is obtained via finite differences. Finite differences can be used, since the manufacturing effort model is evaluated almost instantaneously.
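The hybrid sensitivity scheme, analytic parts (I) and (II) combined with finite differences over the cheap effort model for part (III), can be sketched as follows. The response functions are hypothetical toy stand-ins, not the paper's FEA or MEM:

```python
import numpy as np

# Hypothetical stand-ins: the FEA delivers mechanical responses r_mech(x) with
# analytic sensitivities; the cheap MEM maps x and r_mech to an effort level.
def r_mech(x):
    return np.array([x[0] * x[1], x[0] + 2.0 * x[1]])

def dr_mech_dx(x):                       # analytic Jacobian, part (II)
    return np.array([[x[1], x[0]],
                     [1.0, 2.0]])

def effort(x, r):                        # knowledge-based effort model (cheap)
    return 0.1 * x[0] + r[0] ** 2 / (1.0 + r[1])

def grad_effort(x, h=1e-6):
    r = r_mech(x)
    d_I = np.array([0.1, 0.0])           # explicit dependency on x, part (I)
    # part (III): finite differences over the MEM, affordable since the MEM
    # is evaluated almost instantaneously
    de_dr = np.array([(effort(x, r + h * e) - effort(x, r)) / h
                      for e in np.eye(len(r))])
    return d_I + dr_mech_dx(x).T @ de_dr

g = grad_effort(np.array([1.0, 2.0]))    # total gradient df/dx via (11)
```

The chain rule assembles the three contributions exactly as in (11), so only the inexpensive MEM is perturbed numerically while the costly FEA sensitivities stay analytic.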
As an aside, the whole optimization is set up and solved using EOS (Environment for Optimization and Simulation), an interface to the optimization package pyOpt (Perez et al. 2012). EOS was written by da Rocha and the first author at the Institute of Lightweight Structures.

Results of the optimization

Choice of optimization algorithm
Unless stated otherwise, the NLPQLP algorithm by Schittkowski (2010), a sequential quadratic programming algorithm, has been used. The optimizations have actually been conducted with different algorithms available in the pyOpt module; however, NLPQLP needed the fewest function evaluations. The choice of a gradient-based optimization was made after an extensive parametric study, which did not reveal non-convexity in the design responses. To further ensure that the results shown are not of local nature, the optimization has been started from multiple starting points. The starting points themselves have been determined by scanning the design space via Latin hypercube sampling so as to ensure a certain coverage.
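Generating such Latin hypercube starting points can be sketched with scipy's quasi-Monte Carlo module. The bounds below are invented placeholders for the design space of Table 3, not the paper's actual values:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical bounds of a design space (e.g. braid angles in deg, a thickness in mm)
lower = np.array([10.0, 10.0, 10.0, 0.5])
upper = np.array([80.0, 80.0, 80.0, 3.0])

sampler = qmc.LatinHypercube(d=len(lower), seed=42)
unit = sampler.random(n=8)                 # 8 stratified points in [0, 1)^d
starts = qmc.scale(unit, lower, upper)     # map to the physical design space

# Each row of `starts` seeds one gradient-based run; agreeing optima from
# different starts indicate that a solution is not merely local.
```

The Latin hypercube property guarantees exactly one sample per axis-aligned stratum in every dimension, which is the coverage argument made above.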

Simultaneous optimization on mass and frequency
At first, the propeller structure will solely be optimized considering the criteria mass m and the first, i.e. lowest, natural frequency ω_1. While minimizing m and maximizing ω_1, all structural requirements are imposed as constraints g; manufacturing aspects are not considered yet. The structural constraints are the maximum tolerable hub force F_h,Max, bounds on the natural frequencies defined by the revolutions n_P,Max, stiffness requirements u_Tip,LC1/2,Max and, ultimately, structural integrity for both load cases, measured by the failure index FI. For the last constraint, it shall be noted that failure is evaluated for multiple sections, i.e. n_Sec, along the propeller blade based on the Puck failure theory. The multi-criteria optimization task can thus be defined as

min_x f = [m; −ω_1]   s.t.   F_h ≤ F_h,Max,  ω_i ≥ ω(n_P,Max) for i = 1, 2, 3,  u_Tip,LC1/2 ≤ u_Tip,LC1/2,Max,  FI_k ≤ 1 for k = 1, …, n_Sec.   (12)

With Table 3, the design variables x and the design space χ are given.
This optimization task is solved by the gradient-based approach given in Fig. 6, where the optimization problem is scalarized in analogy to (3) and solved considering both additional constraints defined by (4) (see also Fig. 5). The thereby computed approximation of the Pareto frontier is depicted in Fig. 19, with the red dots representing the found Pareto efficient criteria (n_P = 9 optimizations) and the blue circles illustrating the equality constraints h_γ.
Studying Fig. 19 reveals that both criteria of (12) are competing and form a pronounced convex Pareto frontier. This can also be comprehended from the extremal solutions, which differ considerably: Δm = 1.7 kg (36 %) and Δω_1 = 34 Hz (97 %). The fact that both criteria are competing is physically plausible as well, because reinforcing the propeller's root elevates the first natural frequency, but also increases the mass. The non-normalized approximation of the Pareto frontier P is given in Fig. 20. Next, the objective weights κ are plotted for each of the n_P = 9 single objective optimizations in Fig. 21. Again, κ was utilized for scalarizing f such that f_κ = κm − (1 − κ)ω_1. This figure underpins how the objective weights evolve over the different points of the frontier and thus reveals how challenging it would be to choose κ a priori for obtaining a certain Pareto efficient solution. This has actually also been observed by Zhang and Gao (2006), who varied their weights as well. They moreover revealed that the set of active constraints has a major impact on the actual value of the weights κ.

Contrasting the gradient-based multi-criteria optimization with a genetic algorithm
For the sake of a rudimentary comparison, the multi-criteria optimization task stated by (12) will now be solved by an alternative approach, namely by a genetic algorithm (GA) whose fitness evaluation is extended such that multiple objectives can be considered. A GA has been chosen for the comparison, since in many cases where soft computing methods have been used to define an optimization model, a GA has been used. NSGA-II (Deb et al. 2002) is such a biologically inspired algorithm. It has been used here as well, whereby the settings were left at their defaults, allowing for possible improvement. Figure 22 provides the solutions obtained by the gradient-based Pareto approximation approach and the ones obtained by the NSGA-II algorithm, where the three GA runs differ in the population size n_pop and the number of generations n_gen according to (n_pop = 72, n_gen = 10), (n_pop = 144, n_gen = 20) and (n_pop = 288, n_gen = 10).
The gradient-based approach performed to our satisfaction. First, the number of function evaluations is orders of magnitude smaller than the one required by the GA. On the used cluster nodes, neglecting parallelization even though both approaches are parallelizable, this difference in function evaluations translates into a computation time of 3 h 47 min for the gradient-based approach and of 3 d 14 h 17 min for the GA. Secondly, the information gain is greater due to the equal spacing between the Pareto efficient solutions. And, last but not least, the gradient-based solutions dominate all others. The latter advantage can be fortified by evaluating the well-known optimality criteria given by the Karush-Kuhn-Tucker (KKT) conditions (Kuhn and Tucker 1951). So that the KKT conditions can be plotted for all derived solutions p of Figs. 19 and 22, the first order optimality measure O = ‖[∇L; λ^T g]‖_2 is introduced. Because both the stationarity and the slackness need to be zero at an optimum, either local or global, O should take values being numerically zero as well (Fig. 23).
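Evaluating this first order optimality measure for a candidate solution can be sketched as follows; the toy problem and its multiplier are hypothetical, chosen only so that the measure can be checked against a known optimum:

```python
import numpy as np

def kkt_residual(grad_f, jac_g, lam, g):
    """First-order optimality O = ||[grad_x L ; lam*g]||_2: stationarity of
    the Lagrangian stacked with complementary slackness."""
    stationarity = grad_f + jac_g.T @ lam
    slackness = lam * g
    return np.linalg.norm(np.concatenate([stationarity, slackness]))

# Toy check: min f = x1^2 + x2^2  s.t.  g = 1 - x1 <= 0
# has its optimum at x* = (1, 0) with multiplier lambda = 2.
O_opt = kkt_residual(grad_f=np.array([2.0, 0.0]),      # grad f at x*
                     jac_g=np.array([[-1.0, 0.0]]),    # grad g
                     lam=np.array([2.0]),
                     g=np.array([0.0]))                # constraint is active
O_off = kkt_residual(np.array([1.8, 0.2]), np.array([[-1.0, 0.0]]),
                     np.array([2.0]), np.array([0.0])) # non-stationary point
```

At the true optimum the measure is numerically zero, while any perturbed point yields a strictly positive value, which is exactly the diagnostic plotted in Fig. 23.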
In spite of these advantages, it shall also be mentioned that the GA approach is by far more flexible regarding the optimization problem. This becomes clear when considering that a GA generally can handle discrete variable types with ease and that it imposes fewer requirements on the behavior of the system responses, i.e. convexity, continuity et cetera. Moreover, the performance of the GA can certainly be improved by increasing the number of generations and the population size. Nonetheless, this may only be viable if computational resources are large or computation times are low.

Post-processing of the conducted optimization
The gradient-based approach also enables new post-processing possibilities. Next, an extrapolation similar to the concept of shadow prices will be derived and shown. For this sake, the Lagrangian function L := f_κ + λ^T g is perturbed first, while stationarity is assumed as well, thus

δL = κ δm − (1 − κ) δω_1 + λ^T δg = 0.   (13)

Rearranging this and further assuming that feasibility is retained (δg = 0) yields

∂ω_1/∂m = κ/(1 − κ),   (14)

which basically defines the slope of the Pareto frontier P at any given point. To verify this statement, which has already been mentioned and discussed for structural problems in Baier (1978), a comparison will be made next. The basis for comparison is given by a response surface approximation (RSA) of the Pareto frontier as depicted in Fig. 20. Differentiating this approximation at m = 1.8 kg and computing the slope ∂ω_1/∂m of the Pareto frontier based on κ leads to the results of Table 4. It can be verified that the relative difference between both methods is less than ten percent, which makes the approach given by (14) a viable one, especially considering that the RSA model itself is subject to numerical inaccuracies as well. Knowing the slope at different points enables an extrapolation, thus the estimation of possible improvement if one criterion is loosened; i.e., at the discussed point, an increase of ten percent in mass m will possibly improve the frequency ω_1 by five Hertz.
In addition, the concept of shadow prices is still applicable as well. Thus, if one seeks the imprint of the stiffness constraint g_13 := u/u_Tip − 1, in other words the hidden price (in terms of the criteria value) paid such that optimality (including feasibility) is reached, the following statement can be used:

δf_κ = −λ_13 δg_13.   (16)

Assuming one could relax the displacement requirement by five percent, i.e. Δu_Tip = 5 %, (16) extrapolates an increase in the natural frequency of seven Hertz. This underpins how the post-processing of such gradient-based multi-criteria optimizations can support the understanding of the optimal design regarding robustness, possible improvements et cetera. In consequence, decision making can be leveraged by this increase in transparency.
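The slope-based extrapolation can be condensed into two small helper functions. The weight κ = 0.965 and the frequency value below are hypothetical, chosen only to reproduce the order of magnitude quoted above (about five Hertz for ten percent more mass); they are not the paper's actual optimization results:

```python
def frontier_slope(kappa):
    """Local trade-off d(omega1)/d(m), valid under the scalarization
    f_kappa = kappa*m - (1 - kappa)*omega1 and an unchanged active set."""
    return kappa / (1.0 - kappa)

def extrapolate_frequency(omega1, kappa, delta_m):
    """First-order estimate of the frequency gained when delta_m additional
    mass is granted at a Pareto point with weight kappa."""
    return omega1 + frontier_slope(kappa) * delta_m

# Hypothetical numbers: at m = 1.8 kg with an assumed weight kappa = 0.965,
# granting ten percent more mass (0.18 kg) yields roughly five Hertz.
gain = extrapolate_frequency(50.0, 0.965, 0.18) - 50.0
```

Note that the estimate is only first-order: it assumes the set of active constraints, and hence κ, stays unchanged over the extrapolated step.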

Considering manufacturing effort as the criterion
So far, the propeller has solely been optimized based on structural criteria. Now, the optimization model is made more holistic by incorporating technical aspects, namely the manufacturing effort computed by the MEM introduced in Section 3.5. Before the optimization task is conducted, the design displaying the maximum natural frequency (at the upper right side of Fig. 20) will be evaluated in terms of manufacturing effort. Figure 24 depicts the effort density ė_i, being the effort per area, i.e. the pointwise evaluated effort. The layer-wise effort e_i is then defined by e_i = (1/A) ∫_A ė_i dA, where A denotes the propeller surface. As can be seen in Fig. 24, the associated levels of manufacturing effort are high for each of the three braid layers (indicated by the red color).
For this reason, the structural multi-criteria optimization statement given with equation set (12) will be augmented by the manufacturing effort e as an additional criterion. Using the same gradient-based multi-criteria optimization approach (see Fig. 6) leads to the approximation P of the whole Pareto frontier given in Fig. 25. For the computation of P, nine single objective optimizations were solved in analogy to the problem statement given by equation set (3), where the objective vector f was scalarized to f_κ = κe − (1 − κ)ω_1. The minus sign was again necessary, since the frequency, as opposed to the effort level, is being maximized.
The approximation P in Fig. 25 appears to have some sort of dent ranging from the second to the fifth Pareto efficient solution f^p. The switching of rules in the MEM (braid opening versus critical ratio of take-up and horn gear speed) is deemed to be the origin of this dent. A plot supporting this is given in Fig. 26, where, as before with Fig. 23, the first order optimality O = ‖[∇L; λ^T g]‖_2 is depicted for each Pareto point p. Despite the fact that these KKT conditions appear to be met for each point, it shall be noted that each solution could still represent a local instead of a global optimum.

Comparing the propeller designs
Before the final propeller design is introduced, an overview of the extremal solutions, i.e. minimizing each of the distinct criteria individually, is provided in Table 5. As it turned out, each of the three criteria is in conflict with the others. This is comprehensible considering the spectrum of each criterion when one of the others is being optimized. By studying Table 5, the actual imprint of the manufacturing effort model can be revealed: solely setting either the mass m or the eigenfrequency −ω_1 (first and second row) as the criterion, while not considering manufacturing effort, leads to different results.
For guaranteeing a properly designed propeller, the buckling analysis as well as a full frequency screening were performed for the optimal compromise f = [ω_1; e; m]^T = [57.287 Hz; 0.38; 2.3 kg]^T. The results of the latter are provided and discussed next. Commonly, the results of such a frequency screening analysis are plotted in a so-called Campbell diagram, wherein the revolutions of the rotor-dynamical system, here the propeller, are defined as the coordinate axis and the resulting excitation frequency (rotary frequency) is plotted in tandem with the eigenfrequencies of the system. Such a Campbell diagram is given with Fig. 29, where the red and blue curves represent the corresponding eigenfrequencies of the propeller and the black line the rotary and thus excitation frequency. Since none of the eigenfrequency curves intersects with the rotary one, it can be assumed that resonance caused by the rotation of the propeller will not occur. In addition, the plot also displays the rotary stiffening of the propeller due to the high revolution speed.

Summary
With this paper, an approach for coping with multi-criteria optimization problems involving multiple disciplines has been shown. As it turned out, the approach is capable of approximating Pareto frontiers of industry-relevant problems with a moderate number of function evaluations, while still revealing most of the frontier (equidistance). This makes it an efficient and reliable approach for many engineering design problems, especially when analytical or at least semi-analytical sensitivity information is obtainable.
In addition, a method derived from the concept of soft computing has been presented, with which soft aspects, e.g. verbal information of experts, can be modeled and processed. With this method, the modeling of the technical measure manufacturing effort was facilitated. For doing so, expert knowledge was accessed via interviews of braiding experts and translated into a knowledge-based system. The thereby derived manufacturing effort model was extended by the determination of the reasons causing certain levels of manufacturing effort and by additionally providing advice pointing into the direction of maximal improvement.
Last but not least, several optimizations considering structural mechanics in concert with technical aspects put forth a propeller design meeting multiple technical requirements and representing an optimal compromise between the involved objectives: mass, natural frequency and manufacturing effort.

Prospective research
The multi-criteria optimizations shown will be further evaluated and studied, for instance by contrasting them with other scalarization techniques and Pareto approximation approaches such as the one given by Eichfelder (2009). Moreover, the incorporation of a cost model, albeit being situation dependent, could possibly enrich the optimization approach demonstrated on the propeller structure. The developed approach will be applied to another large design problem to further assess its advantages and possible limitations.