# A specification language for problem partitioning in decomposition-based design optimization


## Abstract

Decomposition-based design optimization consists of two steps: partitioning of a system design problem into a number of subproblems, and coordination of the design of the decomposed system. Although several generic frameworks for coordination method implementation are available (the second step), generic approaches for specification of the partitioned problem (the first step) are rare. Available specification methods are often based on matrix or graph representations of the entire system. For larger systems these representations become intractable due to the large number of design variables and functions. This article presents a new linguistic approach for specification of partitioned problems in decomposition-based design optimization. With the elements of the proposed specification language, called Ψ (the Greek letter “Psi”), a designer can define subproblems, and assemble these into larger systems in a bottom-up fashion. The assembly process allows the system designer to control the complexity and tractability of the problem partitioning task. To facilitate coupling to generic coordination frameworks, a compiler has been developed for Ψ that generates an interchange file in the INI format. This INI-definition of the partitioned problem can easily be interpreted by programs written in other languages. The flexibility provided by the Ψ language and the automated generation of input files for computational frameworks is demonstrated on a vehicle chassis design problem. The developed tools, including user manuals and examples, are made publicly available.

### Keywords

Decomposition · Problem partitioning · Specification language · Distributed analysis · Distributed optimization · Implementation · Multidisciplinary design optimization · Ψ language

## 1 Introduction

Decomposition-based optimization approaches are attractive for addressing the challenges that arise in the optimal design of advanced engineering systems (see, e.g., Wagner and Papalambros 1993; Papalambros 1995; Sobieszczanski-Sobieski and Haftka 1997; Alexandrov 2005). The main motivation for the use of decomposition-based optimization is the organization of the design process itself. Since a single designer is not able to oversee each relevant aspect, the design process is distributed over a number of design teams. Each team is responsible for a part of the system, and typically uses specialized analysis and design tools to solve its design subproblems. Generally speaking, a subproblem can be associated with a design discipline, or represent a subsystem or component in the entire system.

Decomposition-based design optimization involves three main steps:

- 1.
Specifying the variables and functions of each discipline

- 2.
Specifying the partitioned problem (i.e. the distribution of variables and functions over subproblems and systems)

- 3.
Coordinating the solution of the partitioned system

Variable and function specifications of Step 1 include information such as initial estimates and bounds for variables, how functions and their sensitivities are to be evaluated, and on which variables each function depends. These specifications are typically provided by each discipline separately, most likely with only partial knowledge of the interdisciplinary interactions.

The interactions are defined in the partitioned system specification of Step 2. A partitioned system specification defines how the variables and functions are distributed over design subproblems and which interactions are present between subproblems. Defining the partitioned system is a task performed partially by the disciplinary designers that define the subproblems, and a system designer that defines the interaction between the subproblems.

Once the partitioned problem is defined, a coordination algorithm needs to be implemented to solve the system design problem in Step 3. The coordination process requires the specifications of the first two steps as inputs. Which coordination algorithm is used needs to be defined as well (e.g. distributed analysis or distributed optimization). The coordination algorithm drives the design subproblems defined in Step 2 towards a system design that is consistent, feasible, and optimal. Here, consistency assures that quantities shared by multiple subproblems take equal values, feasibility refers to the satisfaction of all design constraints of all disciplines, and optimality reflects that the obtained design is optimal for the system as a whole.

Most research on decomposition-based optimal design has focussed on the final, most challenging step: coordination. Many coordination algorithms are available (see Balling and Sobieszczanski-Sobieski 1996; Sobieszczanski-Sobieski and Haftka 1997; Tosserams 2009a, for overviews), as well as generic approaches for the implementation of these methods (e.g. Michelena 1999; Etman 2005; Huang 2006; Moore 2008; de Wit and van Keulen 2008; Martins 2009). The first two steps have received far less attention.

Theoretical and numerical studies show, however, that both the choice of coordination algorithm and the way the system is partitioned affect the efficiency and effectiveness of decomposition-based optimization. See Balling and Sobieszczanski-Sobieski (1996), Sobieszczanski-Sobieski and Haftka (1997), Perez (2004), Tosserams (2007), Allison (2007), de Wit and van Keulen (2007), Yi (2008), and Tosserams (2009a) for examples of these observations. Experimentation with different system decompositions within the available generic coordination frameworks is not straightforward. Since each framework is developed primarily for its suitability for implementing coordination methods, it typically does not provide an intuitive environment for specifying partitioned problems. As systems become larger, their specification in such a non-intuitive environment becomes complicated and prone to errors. Being able to specify partitioned problems in a more intuitive language is clearly preferable. Such a generic specification language provides a tool for the easy manipulation of the way a system is partitioned.

In this paper, we present a linguistic approach that allows an intuitive, compact, and flexible specification of partitioned problems. We adopt the name Ψ (the Greek letter “Psi”), an acronym for partitioning and specification. The proposed language Ψ is highly expressive and has only a small set of language elements, which is a clear advantage over more generic system modeling languages such as SysML (Friedenthal 2008).

Ψ follows a composition paradigm that starts from the definition of individual components (subproblems) that are assembled into larger systems. Components are typically specified by disciplinary designers, while defining systems is the task of a system designer (see Fig. 1). The composition process is modular in that definitions of variables and functions are local to components and systems; disciplinary designers do not have to worry whether a variable or function they use locally is used by another designer elsewhere in the system. Instead, the user must *specify* interactions between components by defining systems that describe the interactions between the local definitions.

The *non-automated* composition process of Ψ provides specification autonomy to disciplines, provides control over the composition process, and allows for the definition of multi-level systems. This is in contrast to *automated* composition methods that assemble component definitions based on overlaps in variable and function names, a process that can become intractable for larger systems. An example of an automated system composition approach is given by Alexandrov and Lewis (2004a, b).

We would like to point out that a specification in Ψ is *independent* of the type of coordination method selected for Step 3. Subproblems become analysis subproblems if a single-level coordination method is selected, or they become optimization subproblems if a multi-level coordination algorithm is selected. Similarly, the language does not differentiate between hierarchical or non-hierarchical coordination methods.

As a second contribution, a compiler and two generators have been developed for Ψ. The compiler is required for processing specifications in Ψ, and the generators have been developed as examples of how input files for computational coordination frameworks can be automatically generated. The compiler and the two generators provide an easy transition from the specification of the partitioned problem in Step 2 to the solution of the partitioned problem in Step 3. The compiler checks the Ψ specification for correctness, and translates it to a normalized structure designed to simplify further automatic processing. The data is written to a file in the generic INI format. The INI format can easily be interpreted by programs in other languages such that framework-specific input files can easily be generated. As validation of the concept, we have implemented two additional generators that operate on this INI format. One generator derives the functional dependence table of the system, and another generator derives Matlab files that are used as inputs for a generic implementation of the augmented Lagrangian coordination algorithm (ALC, Tosserams 2008, 2009c). This generator for ALC was the original motivation for the work presented in this article, and we expect that similar generators can be developed for other computational frameworks.
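Because INI is supported by standard libraries in most languages, a new generator can be prototyped in a few lines. The sketch below uses Python's `configparser`; the section and key names are illustrative only and do not reflect the actual schema emitted by the Ψ compiler.

```python
# Sketch of a minimal generator reading a compiler-produced INI file.
# Section and key names below are illustrative, not the actual Psi schema.
import configparser

ini_text = """
[component First]
extvar = x2, y, r
intvar = x1

[component Second]
extvar = x3, y, r
intvar = x4, u

[link 1]
pair = A.y -- B.y
"""

parser = configparser.ConfigParser()
parser.read_string(ini_text)

# Collect every component section and its external variables
components = {
    name.split(" ", 1)[1]: [v.strip() for v in section["extvar"].split(",")]
    for name, section in parser.items()
    if name.startswith("component")
}
print(components)  # {'First': ['x2', 'y', 'r'], 'Second': ['x3', 'y', 'r']}
```

A framework-specific generator would walk these sections and emit whatever input files its computational environment expects.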

The paper is organized as follows. First, the general system design problem and its decomposition are discussed in Section 2. Second, we illustrate the elements of Ψ for a simple example in Section 3, and discuss the compiler-generated output formats in Section 4. The application of the language and the developed tools to a larger example is described in Section 5. Concluding remarks are offered in Section 6.

The developed tools, including user manuals and several examples, are available for download at http://se.wtb.tue.nl/sewiki/mdo.

## 2 Decomposition-based optimization for system design

Decomposition-based optimization approaches are used for the distributed design of large-scale and/or multidisciplinary systems. Decomposition methods consist of two main steps: partitioning the system and coordinating the partitioned system (Wagner and Papalambros 1993; Papalambros 1995). In partitioning, the optimal design problem is divided into a number of smaller subproblems, each typically associated with a discipline or component of the system. The task of coordination is then to drive these individual subproblems towards a design that is consistent, feasible, and optimal for the system as a whole. The main advantage of decomposition methods is that a degree of disciplinary autonomy is given to each subproblem, such that designers are free to select their own analysis and design tools.

In this section, we introduce the general system design problem followed by a description of the two main steps in decomposition: partitioning and coordination.

### 2.1 Optimal design problem in integrated form

The optimal design problem of the entire system can be stated in integrated form as

\[
\begin{array}{rl}
\displaystyle\min_{\mathbf{z},\,\mathbf{r}} & \mathbf{f}(\mathbf{z},\mathbf{r}) \\
\text{subject to} & \mathbf{g}(\mathbf{z},\mathbf{r}) \le \mathbf{0} \\
 & \mathbf{h}(\mathbf{z},\mathbf{r}) = \mathbf{0} \\
\text{where} & \mathbf{r} = \mathbf{a}(\mathbf{z},\mathbf{r})
\end{array}
\qquad (1)
\]

Here, \(\mathbf{z}=[z_1,\ldots,z_n]\) is the vector that contains the design variables of the entire system. Response variables \(\mathbf{r}=[r_1,\ldots,r_{m_\text{a}}]\) are intermediate quantities computed by analysis functions \(\mathbf{a}=[a_1,\ldots,a_{m_\text{a}}]\); these response variables are also known as coupling variables. With \(\mathbf{r}=\mathbf{a}(\mathbf{z},\mathbf{r})\), we mean to express that each analysis function \(a_i\) for response \(r_i\) may depend on the *other* responses, i.e. \(r_i=a_i(\mathbf{z},\,r_j\,|\,j\neq i)\); a response \(r_i\) may not depend on itself. \(\mathbf{f}=[f_1,\ldots,f_{m_\text{f}}]\) is the vector of objective functions, and \(\mathbf{g}=[g_1,\ldots,g_{m_\text{g}}]\) and \(\mathbf{h}=[h_1,\ldots,h_{m_\text{h}}]\) are the collections of inequality and equality constraints, respectively. Although the majority of coordination methods do not allow multiple objectives, we include them here for the sake of generality. We refer to the above formulation as integrated since it includes the variables and functions of all disciplines in a single optimization problem.

### 2.2 Partitioning

The purpose of partitioning is to distribute the variables and functions of the integrated problem (1) over a number of subproblems. These subproblems are typically mathematical entities that perform (possibly coupled) analyses, evaluate objective and constraint values, or solve optimization problems. The subproblems may therefore (partially) differ from the original disciplines from which the integrated problem was synthesized.

Three partitioning strategies are often identified (Wagner and Papalambros 1993): aspect-based, object-based, and model-based partitioning. Aspect-based partitioning follows the human organization of disciplinary experts and analysis tools. Object-based partitioning is aligned with the subsystems and components that comprise the system. Model-based partitioning relies on mathematical techniques to obtain an appropriately balanced partition computationally. Model-based partitioning methods often rely on graph theory or matrix representations of problem structure (see, e.g., Krishnamachari and Papalambros 1997; Michelena and Papalambros 1997; Chen 2005; Li and Chen 2006; Allison 2007, for examples of model-based partitioning methods).

Partitioning problem (1) requires a distribution of all variables and functions over a number of subproblems. To this end, the variables **z** are partitioned into *M* sets of variables \(\overline{\mathbf{x}}_j\) allocated to subproblems *j* = 1,...,*M*, and a set of system-level variables \(\overline{\mathbf{x}}_0\). Each set of subproblem variables \(\overline{\mathbf{x}}_j=[\mathbf{y}_j,\mathbf{x}_j,\mathbf{r}_j,\mathbf{r}_{mj}\,|\,m\in\mathcal{N}_j]\) consists of a set of local design variables **x**_{j} associated exclusively to subproblem *j*, and a set of shared design variables **y**_{j} and response variables **r**_{j} and **r**_{mj}, \(m \in \mathcal{N}_j\). Here, \(\mathcal{N}_j\) is the set of neighbors from which subproblem *j* requires analysis responses, and **r**_{mj} is an auxiliary variable introduced at subproblem *j* for the responses received from subproblem *m*. The set of system variables \(\overline{\mathbf{x}}_0\) contains the system-level response variables **r**_{0}.

Some of the shared variables **y**_{j} and all coupling responses **r**_{mj} are auxiliary variables introduced for decoupling the optimization subproblems. Interactions between the various shared and coupling variables are defined in a set of consistency constraints \(\mathbf{c}(\overline{\mathbf{x}}_0,\overline{\mathbf{x}}_1,\ldots,\overline{\mathbf{x}}_M)=\mathbf{0}\), where it is understood that these constraints depend only on the shared variables and coupling responses. For further details on the use of consistency constraints in distributed optimization, the reader is referred to Cramer (1994), Alexandrov and Lewis (1999), and Tosserams (2009a).
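As a typical example (the exact form of **c** depends on the chosen formulation), a design variable shared between subproblems *j* and *k*, and a response \(\mathbf{r}_m\) computed by subproblem *m* and used by subproblem *j*, give rise to consistency constraints of the form

\[
\mathbf{y}_j - \mathbf{y}_k = \mathbf{0}, \qquad \mathbf{r}_{mj} - \mathbf{r}_m = \mathbf{0}.
\]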

Objective functions **f**, constraints **g** and **h**, and analyses **a** are partitioned into *M* sets of local functions **f**_{j}, **g**_{j}, **h**_{j}, **a**_{j}, *j* = 1,...,*M*, and a set of system-wide functions **f**_{0}, **g**_{0}, **h**_{0}, **a**_{0}.

The partitioned problem then takes the form

\[
\begin{array}{rll}
\displaystyle\min_{\overline{\mathbf{x}}_0,\overline{\mathbf{x}}_1,\ldots,\overline{\mathbf{x}}_M} & \mathbf{f}_0(\overline{\mathbf{x}}_0,\overline{\mathbf{x}}_1,\ldots,\overline{\mathbf{x}}_M),\;\; \mathbf{f}_j(\overline{\mathbf{x}}_j), & j=1,\ldots,M \\
\text{subject to} & \mathbf{g}_0(\overline{\mathbf{x}}_0,\overline{\mathbf{x}}_1,\ldots,\overline{\mathbf{x}}_M) \le \mathbf{0}, \quad \mathbf{h}_0(\overline{\mathbf{x}}_0,\overline{\mathbf{x}}_1,\ldots,\overline{\mathbf{x}}_M) = \mathbf{0} & \\
& \mathbf{c}(\overline{\mathbf{x}}_0,\overline{\mathbf{x}}_1,\ldots,\overline{\mathbf{x}}_M) = \mathbf{0} & \\
& \mathbf{g}_j(\overline{\mathbf{x}}_j) \le \mathbf{0}, \quad \mathbf{h}_j(\overline{\mathbf{x}}_j) = \mathbf{0}, & j=1,\ldots,M \\
\text{where} & \mathbf{r}_j = \mathbf{a}_j(\overline{\mathbf{x}}_j), & j=1,\ldots,M \\
& \mathbf{r}_0 = \mathbf{a}_0(\overline{\mathbf{x}}_0,\overline{\mathbf{x}}_1,\ldots,\overline{\mathbf{x}}_M) &
\end{array}
\qquad (2)
\]

Similar to the integrated problem, each response in **r**_{j} may depend on the *other* responses in **r**_{j}, but a response cannot depend on itself.

Although the above formulation assumes that the integrated problem (1) possesses a certain sparsity structure (i.e. the presence of local variables and functions), it is general enough to encompass many practical engineering problems. Several coordination methods have been developed for a subclass of (2), so-called quasiseparable problems that do not have coupling objectives **f**_{0}, but may have additively separable coupling constraints **g**_{0} and **h**_{0} and analysis functions **a**_{0} (see Tosserams 2009a). For the remainder of this article, however, we consider partitioned problems of the form (2) with general coupling objectives **f**_{0}, general coupling constraints **g**_{0}, **h**_{0}, and general analysis functions **a**_{0}.

The artificially introduced variables in the above problem can be eliminated from the optimization variables through the consistency constraints **c** or the analysis equations. Whether or not these variables are eliminated is however a matter of *coordination*, and not a choice we want to make at the *partitioning* stage. Similarly, the coordination method determines how local and coupling variables and functions are treated. Hence, we will refer to problem (2) with all optimization variables included when we speak of the partitioned problem for the remainder of this article.

### 2.3 Coordination

After partitioning, a coordination strategy prescribes how the partitioned problem is to be solved. Single-level methods typically act directly on the partitioned problem (2), while the use of multi-level methods involves the formulation of optimization subproblems for *j* = 1,...,*M* and a method for coordinating the solution of these subproblems. Each coordination method is unique in its treatment of variables and functions, the way in which the partitioned problem is reformulated, and how the reformulated problem is solved. For reviews of coordination methods, the reader is referred to the works of Wagner and Papalambros (1993), Cramer (1994), Balling and Sobieszczanski-Sobieski (1996), Alexandrov and Lewis (1999), and Tosserams (2009a). Note that partitioned problem specifications in Ψ are *independent* of the choice of coordination method.

Regardless of the selected coordination method, a computational coordination framework requires two types of input specifications:

- 1.
Variable and function information

- 2.
Partitioned problem structure

For the computational frameworks listed in the introduction, the problem specification needs to be supplied in the programming language environment in which the framework is implemented. This programming language is selected for its appropriateness as a computational environment, and its language elements are relatively well-suited for defining variable and function specifications. The definition of the problem partitioning may however be less intuitive. The language limitations become more pronounced if a decomposition has non-hierarchical couplings, or has multiple levels. For such decompositions, specifying the problem becomes a tedious process that is prone to errors (Alexandrov and Lewis 2004a, b). A more intuitive specification process is clearly desired. In addition, having partitioned problem specifications in a unified format promotes their portability between computational frameworks. In the following section, several representation concepts are reviewed with respect to their appropriateness for specifying the partitioned problem.

### 2.4 Existing approaches for specification of the partitioned problem

Several representations have appeared in research focused on model-based partitioning (see, e.g., Kusiak and Larson 1995; Krishnamachari and Papalambros 1997; Michelena and Papalambros 1997; Chen 2005). Model-based partitioning methods typically rely on matrix or graph abstractions of the couplings between variables and functions to define a sparsity structure of the integrated problem (1). Examples of such representations are the functional dependence table and the adjacency matrix. The use of matrices and graphs to specify the partitioning becomes prohibitive for larger systems due to the large number of variables and functions.

Alternative matrix and graph representations have appeared in research on the decomposition of the system design process into individual design tasks (see, e.g., Steward 1981; Eppinger et al. 1994; Kusiak and Larson 1995; Browning 2001). The transfer of information between engineers defines precedence relations between the individual tasks that can be captured in matrices or graphs. For example, element *i*,*j* of the so-called design structure matrix is non-zero if task *j* requires information from task *i*, and zero otherwise. In a graph format, vertices can be defined for each task, and precedence relations between two tasks can be represented by directed edges between the associated vertices. Partitioning methods for process decomposition aim at obtaining a sequence of tasks that minimizes the amount of feedback coupling between tasks or maximizes concurrency of tasks. The main difference between process decomposition and optimization problem decomposition is the amount of detail: the number of tasks in design processes is typically one or more orders of magnitude smaller than the number of variables and functions in system optimization.
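As an illustration of the matrix representation just described, the following sketch builds a design structure matrix from a list of precedence relations (the task names are hypothetical):

```python
# Build a design structure matrix (DSM) from task precedence relations.
# Entry (i, j) is 1 if task j requires information from task i, 0 otherwise.
def build_dsm(tasks, precedences):
    index = {t: k for k, t in enumerate(tasks)}
    n = len(tasks)
    dsm = [[0] * n for _ in range(n)]
    for src, dst in precedences:  # dst requires information from src
        dsm[index[src]][index[dst]] = 1
    return dsm

tasks = ["aero", "structures", "control"]
precedences = [("aero", "structures"),
               ("structures", "aero"),     # feedback coupling
               ("structures", "control")]
dsm = build_dsm(tasks, precedences)
for row in dsm:
    print(row)
# [0, 1, 0]
# [1, 0, 1]
# [0, 0, 0]
```

Sequencing methods then reorder rows and columns of such a matrix to push non-zeros below the diagonal, i.e. to minimize feedback.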

### 2.5 Linguistic approach to partitioned problem specification

In our opinion, the decomposition-based design community would benefit from an approach that allows the intuitive specification of partitioned problems, from which matrix or graph representations can be generated automatically, instead of working directly with matrices or graphs.

We propose to use a linguistic approach to specifying partitioned problems. The developed language is similar to the reconfigurable multidisciplinary synthesis approach (REMS) proposed by Alexandrov and Lewis (2004a, b). REMS is a linguistic approach to problem description, formulation, and solution that follows a bottom-up assembly process. The method starts from the definition of individual subproblems that are automatically assembled into a complete system optimization problem.

The language we propose in the next section follows a similar bottom-up process, but does not automate the assembly process. Instead, disciplines have purely local definitions of variables and functions, and it is the system designer that assembles the subproblem definitions into subsystems and systems. The advantage of having a multi-level architecture composed of subproblems, lower-level systems, and higher-level systems is that a designer no longer has to work on the entire system. Instead, the designer can divide the assembly into multiple levels, where each level is associated with a level of abstraction in the system. The designer can control the complexity of the specification tasks and does not have to oversee the entire system. We expect that this control over complexity improves a designer’s overview of the system, and provides control over the interactions between disciplines. This control over the assembly process is not available in the REMS approach.

An additional advantage is that variable and function definitions are local to subproblems. One designer does not have to worry about whether a variable defined locally also exists in another context somewhere else in the system. Instead, subproblems are free to use local nomenclatures, and the interactions between subproblem nomenclatures have to be defined explicitly at the system level. Such a decoupling of definitions appears to be appropriate in a distributed design environment.

The proposed specification approach is similar to the Python-based format used for the pyMDO framework of Martins (2009). However, the pyMDO approach does not have local subproblem nomenclatures, nor does it allow the definition of multi-level systems.

The multi-level assembly process has another advantage. Since a different coordination method can be assigned to each system, a multi-level nested coordination process can be formulated. For example, one can nest a lower-level system that is coordinated with a multidisciplinary feasible formulation within a higher-level system that is coordinated with collaborative optimization. Note that it is the choice of the designer how to assign coordination tasks to lower-level systems.

## 3 The Ψ language

Before describing the elements of the Ψ language, we first introduce the following main definitions:

- A *variable* is an optimization variable of the system design problem (2), and can be an actual design variable or a response variable computed as the output of an analysis.
- A *function* represents an analysis that takes variables as arguments, and computes responses based on the values of the variables.
- A *component* represents a computational subproblem in a partitioned problem, which contains a number of variables and functions.
- A *system* contains a collection of coupled sub-components whose coupled solution is guided by a coordination method.
- A *sub-component* is a component or system that is a direct child of another system.

Note that Ψ is not a model-based partitioning method that automatically derives problem decompositions that can be efficiently coordinated. Ψ is dedicated to the *specification* of partitioned problems; no claim is made regarding the “optimality” of the partitioning. Model-based partitioning techniques are beyond the scope of this article, and the interested reader is referred to Krishnamachari and Papalambros (1997), Michelena and Papalambros (1997), Chen (2005), Li and Chen (2006), and Allison et al. (2007, 2009) for examples of such approaches. Ψ can of course be used to generate input for a model-based partitioning software tool with the aim to optimize the partitioning structure.

### 3.1 Components

The main building blocks of a partitioned problem definition in Ψ are components. These components are typically associated with analysis disciplines in aspect-based decompositions or with components in object-based partitions, but may also be purely computational subproblems that have no direct relation to the physical system. At the partitioning stage, we are not concerned with the assignment of analysis and/or optimization authorities to components. This is a choice that is made at the coordination stage, and does therefore not appear in the component definitions.

As an example, consider a problem (3) that is partitioned into two subproblems. The first subproblem has local variables {*x*_{1}, *x*_{2}} and local functions {*f*_{1}, *g*_{1}, *a*_{1}}. The second subproblem has local variables {*x*_{3}, *x*_{4}, *u*} and local functions {*f*_{2}, *g*_{2}, *a*_{2}}. The two components are coupled through the variables {*y*, *r*} and the system-wide objective {*f*_{0}} that depends on variable *x*_{2} of the first subproblem and on *x*_{3} of the second. The partitioning structure is depicted in Fig. 2.

For this partitioned problem, the specification of the two components is given below.

compt *First* =

|[ extvar *x*_{2}, *y*, *r*

intvar *x*_{1}

objfunc *f*_{1}(*x*_{1}, *x*_{2}, *y*, *r*)

confunc *g*_{1}(*x*_{1}, *y*)

resfunc *r* = *a*_{1}(*x*_{2}, *y*)

]|

compt *Second* =

|[ extvar *x*_{3}, *y*, *r*

intvar *x*_{4}, *u*

objfunc *f*_{2}(*x*_{3}, *x*_{4}, *y*)

confunc *g*_{2}(*x*_{3}, *x*_{4}, *y*, *u*)

resfunc *u* = *a*_{2}(*x*_{3}, *x*_{4}, *r*)

]|

The first component has name *First* and has four variables *x*_{1},*x*_{2},*y*,*r* and three functions *f*_{1},*g*_{1},*a*_{1}. The language distinguishes between two types of variables: external variables defined after the keyword extvar, and internal variables defined after the keyword intvar. External variables can be accessed by the system the component is part of. External variables can be shared variables or coupling variables that are communicated between components, or local variables on which system-wide functions depend. Variables *y*,*r* fall in the former category and *x*_{2} falls in the latter since it is an argument of the system-wide objective *f*_{0}. Internal variables are only accessible within the component.

The reason for taking a division of variables different from the traditional local and coupling/shared variables is that from a system designer’s viewpoint it is relevant to know which variables have an influence beyond the component in which they are defined. From this perspective, external variables are those variables that affect other components and systems and therefore also include local variables that are arguments of system-wide functions.

Three groups of functions are available in the Ψ language: objective functions, constraint functions, and response functions. In component *First* of the example, function *f*_{1}(*x*_{1},*x*_{2},*y*,*r*) is a local objective with four arguments *x*_{1},*x*_{2},*y*,*r*, and function *g*_{1}(*x*_{1},*y*) is a local constraint with two arguments *x*_{1},*y*. Function *a*_{1}(*x*_{2},*y*) is a response function that determines the values of variable *r*. A response function may have multiple variables as outputs. It is possible to apply the same function multiple times with different arguments.

Definitions of variables and functions in components (and systems) have a local scope. Variables and functions defined in one component may have the same name as other variables and functions of another component without being automatically coupled. Instead, interactions between components have to be specified in systems.

It is important to realize that a component definition is *independent* of the choice of coordination method. At the coordination stage, the system designer can use a multi-level coordination method and formulate an optimization problem for each component. Alternatively, using a single-level coordination method only assigns analysis capabilities to components, and decision-making is centralized in a single optimization problem. Hence, defining design variables and objective and constraint functions in a component does not necessarily imply that an optimization problem is actually formulated for this component. It simply indicates where the variables and functions originate from.
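To make the scoping rules above concrete, the sketch below shows one possible in-memory representation of a parsed component. The class and field names are our own illustration, not the actual Ψ compiler's data structures.

```python
# Hypothetical in-memory representation of a parsed Psi component.
from dataclasses import dataclass, field

@dataclass
class Function:
    name: str       # e.g. "f1"
    kind: str       # "objfunc", "confunc", or "resfunc"
    args: list      # argument variable names (local scope)
    outputs: list = field(default_factory=list)  # outputs (resfunc only)

@dataclass
class Component:
    name: str
    extvar: list
    intvar: list
    functions: list

    def check(self):
        """Every function argument/output must be a declared local variable."""
        declared = set(self.extvar) | set(self.intvar)
        for f in self.functions:
            for v in f.args + f.outputs:
                assert v in declared, f"{f.name}: undeclared variable {v}"

# Component 'First' from the example specification
first = Component(
    name="First",
    extvar=["x2", "y", "r"],
    intvar=["x1"],
    functions=[
        Function("f1", "objfunc", ["x1", "x2", "y", "r"]),
        Function("g1", "confunc", ["x1", "y"]),
        Function("a1", "resfunc", ["x2", "y"], outputs=["r"]),
    ],
)
first.check()  # passes: all arguments and outputs are declared locally
```

Note that the check is purely local: the component never needs to know whether `y` or `r` is also used elsewhere in the system.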

### 3.2 Systems

Once the components of a partitioned problem are defined, they can be assembled into systems. A system definition includes two or more subcomponents and describes the couplings between them. The subcomponents of a system can be components or other systems. In the latter case, a multi-level system is obtained.

The system definition for the example partitioning of problem (3) is given by

syst *Problem* =

|[ sub *A* : *First*, *B* : *Second*

link *A*.*y* -- *B*.*y*, *A*.*r* -- *B*.*r*

objfunc *f*_{0}(*A*.*x*_{2}, *B*.*x*_{3})

]|

The system is named *Problem* and has two subcomponents: *A* of type *First*, and *B* of type *Second*. Multiple subcomponents of the same type can be instantiated in a system. The expression *A*_{1},*A*_{2}: *First* instantiates two subcomponents *A*_{1} and *A*_{2} of the same type *First*. These multiple instantiations are useful for systems that have many identical components, such as structural systems consisting of many similar elements.

The consistency constraints between the two components are given by the link statement that connects variables *y* and *r* of component *A* to variables *y* and *r* of component *B*, respectively (note that the linked variables need not have the same local name). In systems, the dot notation *A*.*y* denotes variable *y* from component *A*.

The specification of the system is completed by the definition of the system-wide objective function *f*_{0} that depends on variable *x*_{2} of *A* and *x*_{3} of subcomponent *B*. Systems can also have system-wide constraint functions or response functions.

In contrast to components, a system does not have *design* variables of its own. However, *response* variables associated with the coupling analysis functions have to be included as variables of the system definition. Similar to components, the keywords extvar and intvar are used to define which response variables are external and which are internal.

The systems used in Ψ are different from the traditional notion of systems in the MDO context. Here, a system is simply a collection of components that are coordinated jointly. Systems in the MDO context typically also include design aspects and typically have design variables of their own (so-called global variables or system variables). The task of these MDO systems is to solve the system-level design problem *while at the same time* coordinating the solution of the subproblems. These are actually two separate tasks that should be considered as such. In Ψ, this distinction is made explicit since a user needs to define the design part of the MDO system in a component, while the couplings associated with the coordination part are specified in a system definition.

The final ingredient of the partitioned problem specification for the example partitioning of problem (3) is the statement

topsyst *Problem*

which instantiates the partitioned problem by defining that the highest system in the hierarchy is *Problem*. The definitions for components *First* and *Second*, system *Problem*, and the topsyst statement comprise the specification of the partitioned problem for our example problem.

## 4 Automatic processing and generation of input files

A compiler and two generators have been developed to automatically derive input files for a coordination framework and a matrix representation of the problem structure. The two generators presented in this article should be seen as examples of how framework-specific input files can be automatically derived. Additional generators for other frameworks are expected to be easy to develop, thanks to the generic INI format and the information created by the compiler. The compiler and the two generators have been coded in Python (Lutz 2006).

The compiler-based approach proposed in this article offers developers of coordination methods the freedom to focus on input files that integrate easily with the computational routines they are designing. The Ψ language and the associated compiler and generators should therefore be seen as powerful, generic pre-processors that provide these computational frameworks with easy-to-process input specifications while allowing users to specify the partitioned problem in an intuitive and easy way.

### 4.1 Partitioned problem normalized format

During compilation, the Ψ specification is checked for consistency. Among other things, the compiler verifies:

- uniqueness of variable/component/system names;
- whether arguments and outputs of functions are defined as variables in components/systems;
- whether sub-components of a system refer to existing component or system definitions;
- whether variables used in systems exist in the associated sub-component.
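The first of these checks, name uniqueness, can be sketched as follows (a minimal illustration, not the actual compiler code; the function name and data layout are assumptions):

```python
from collections import Counter

def check_unique_names(names):
    """Return the list of duplicated names (empty if all names are unique)."""
    counts = Counter(names)
    return [n for n, c in counts.items() if c > 1]

# Hypothetical definition names from a Psi specification:
assert check_unique_names(["First", "Second", "Problem"]) == []
assert check_unique_names(["First", "First", "Problem"]) == ["First"]
```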

The specification of the partitioned problem in INI format is defined by a number of sections. Each section contains a section header \([\textit{section}]\) and a number of key/value pairs of the form \(\textit{keyname} = \textit{value}\). Separate sections are introduced for each variable, each function, each component, each coupling link, each system, and one for the top-level system. The collection of sections contains the necessary information to uniquely represent the partitioned problem.

Each component and system section includes keys for its definition name (type), the name of its instantiation (name) and the associated instantiation path (path), its shared and local variables (coupling_vars and local_vars, only for components), its coupling and local responses (coupling_resvars and local_resvars), and its objective, constraint, and response functions (objfuncs, confuncs, resfuncs). System sections also include a list of sub-components (sub_comps) and links (links). Note that the local and shared variables correspond to the definitions of **x**_{j} and **y**_{j} in the partitioned problem (2). Response functions **r**_{j} are similarly split into local (i.e. disciplinary) responses and coupling responses.

A coupling section (whose name carries the prefix link_) includes the variables that it couples (coupling), the name of the system definition in which it is defined (defined_in), and the instantiation path of this system (path). A coupling can be defined between two shared design variables or between two coupling response variables. Finally, the top section includes the key system whose value denotes the name of the top-level system.
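Because the normalized format is plain INI, it can be consumed with a standard parser. The sketch below reads a hypothetical fragment; the section names and exact layout are assumptions for illustration (only the key names sub_comps, coupling_vars, type, name, and system are taken from the description above):

```python
import configparser

# Hypothetical INI fragment in the normalized format; the actual
# section/key layout is defined by the Psi compiler.
ini_text = """
[top]
system = Problem

[system_Problem]
type = Problem
name = Problem
sub_comps = A, B

[comp_A]
type = First
name = A
coupling_vars = y, r
local_vars = x1, x2
"""

parser = configparser.ConfigParser()
parser.read_string(ini_text)

top = parser["top"]["system"]  # name of the top-level system
subs = [s.strip() for s in parser["system_" + top]["sub_comps"].split(",")]
coupling = [v.strip() for v in parser["comp_A"]["coupling_vars"].split(",")]

print(top)       # Problem
print(subs)      # ['A', 'B']
print(coupling)  # ['y', 'r']
```

A framework-specific generator would walk over all sections in this way and emit whatever input format its coordination routines expect.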

### 4.2 Matlab input file for ALC toolbox

The first generator translates the INI output into Matlab problem specification files that can be used as input for our Matlab implementation of the augmented Lagrangian coordination algorithm (ALC, Tosserams 2008, 2009c). This ALC-generator was the original motivation for the work presented in this article. It is beyond the scope of this article to discuss the ALC method or the structure of its input files in detail. Our intention is to present the generated ALC files to demonstrate the possibilities that the compiler-based approach offers. The interested reader is referred to the references given above for further details on the ALC method.

Since the ALC input format does not handle response functions directly, the response functions \(r = a_1(x_2, y)\) and \(u = a_2(x_3, x_4, r)\) of the example have been included as constraint functions \(h_1(r, x_2, y) = r - a_1(x_2, y) = 0\) and \(h_2(u, x_3, x_4, r) = u - a_2(x_3, x_4, r) = 0\). The ALC-generator automatically checks whether a Ψ specification has response functions or not. The reason for including these checks in the ALC-generator (and not in the compiler) is that Ψ is generic, i.e. independent of the coordination method. The difference between the Matlab and Ψ specifications is obvious, as is the difference in readability between the two. Specifying the partitioned problem using Ψ is clearly more intuitive than specifying it using the ALC format in Matlab.

### 4.3 Function dependence table file

A second generator creates a file that contains the functional dependence table (FDT) of the specified problem. The FDT is a matrix whose rows and columns are associated with the functions and variables of the problem, respectively. The (*i*,*j*)-th entry of the matrix is 1 if the function of row *i* depends on the variable of column *j*, and 0 otherwise. The FDT and related mathematical representations are typical inputs to model-based partitioning methods such as those proposed by Krishnamachari and Papalambros (1997), Michelena and Papalambros (1997), Chen (2005), Li and Chen (2006), and Allison (2007).
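For the example problem, the FDT construction can be sketched as follows (a minimal illustration, not the actual generator; the dependency sets are taken from the constraint functions \(h_1\) and \(h_2\) discussed in Section 4.2):

```python
# Build a functional dependence table: entry (i, j) is 1 if function i
# depends on variable j, and 0 otherwise.
variables = ["x1", "x2", "x3", "x4", "y", "r", "u"]

# Dependencies of the example's constraint functions h1 and h2.
dependencies = {
    "h1": {"r", "x2", "y"},
    "h2": {"u", "x3", "x4", "r"},
}

fdt = [
    [1 if v in deps else 0 for v in variables]
    for deps in dependencies.values()
]

for name, row in zip(dependencies, fdt):
    print(name, row)
# h1 [0, 1, 0, 0, 1, 1, 0]
# h2 [0, 0, 1, 1, 0, 1, 1]
```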

Being able to generate different types of input files from the same Ψ specification not only saves time, but also leads to consistent definitions of the partitioned problem. These advantages make comparing results from different computational frameworks easier.

## 5 Chassis design example

In this section, we demonstrate the use and advantages of Ψ on a larger example. Two partitioning variants of the problem are specified using the Ψ language. For one variant, the INI output files, as well as the Matlab ALC files and the FDT table, are generated, clearly showing the compactness and intuitiveness of the specification in Ψ.

Description of the optimization variables for the vehicle chassis problem

| Design variables | | Response variables | |
| --- | --- | --- | --- |
| \(a\) | Tire position | \(\omega_\text{sf}\) | Spring nat. freq. |
| \(b\) | Tire position | \(\omega_\text{sr}\) | Spring nat. freq. |
| \(P_\text{if}\) | Tire pressure | \(\omega_\text{tf}\) | Tire nat. freq. |
| \(P_\text{ir}\) | Tire pressure | \(\omega_\text{tr}\) | Tire nat. freq. |
| \(D_\text{f}\) | Coil diameter | \(k_\text{us}\) | Understeer gradient |
| \(D_\text{r}\) | Coil diameter | \(K_\text{sf}\) | Spring stiffness |
| \(d_\text{f}\) | Wire diameter | \(K_\text{sr}\) | Spring stiffness |
| \(d_\text{r}\) | Wire diameter | \(K_\text{tf}\) | Tire stiffness |
| \(p_\text{f}\) | Pitch | \(K_\text{tr}\) | Tire stiffness |
| \(p_\text{r}\) | Pitch | \(C_{{\alpha}\text{f}}\) | Cornering stiffness |
| \(Z_\text{sf}\) | Suspension deflection | \(C_{{\alpha}\text{r}}\) | Cornering stiffness |
| \(Z_\text{sr}\) | Suspension deflection | \(K_\text{Lf}\) | Linear stiffness |
| | | \(K_\text{Lr}\) | Linear stiffness |
| | | \(K_\text{Bf}\) | Bending stiffness |
| | | \(K_\text{Br}\) | Bending stiffness |
| | | \(L_\text{0f}\) | Free length |
| | | \(L_\text{0r}\) | Free length |

### 5.1 Specification of the partitioned problem

System *Chassis* has seven sub-components: one of type *Vehicle*, one of type *Tire*, one of type *Corner*, two of type *Suspension*, and two of type *Spring*. Each sub-component includes its relevant set of optimization variables and functions. The similarity of the front and rear suspensions and springs is exploited by defining a single suspension component and a single spring component. By instantiating these components twice in system *Chassis*, two independent subproblems are defined, each with a separate set of design variables.

comp *Vehicle* =

|[ extvar \(a,b,K_\text{sf},K_\text{sr}, K_\text{tf},K_\text{tr},C_{{\alpha}\text{f}},C_{{\alpha}\text{r}}\)

intvar \(\omega_\text{sf},\omega_\text{sr},\omega_\text{tf},\omega_\text{tr},k_\text{us}\)

objfunc \(\mathbf{f}(\omega_\text{sf},\omega_\text{sr},\omega_\text{tf},\omega_\text{tr},k_\text{us})\)

resfunc \((\omega_\text{sf},\omega_\text{sr},\omega_\text{tf},\omega_\text{tr},k_\text{us})=\)

\(\mathbf{a}_1(a,b,K_\text{sf},K_\text{sr}, K_\text{tf},K_\text{tr},C_{{\alpha} \text{f}},C_{{\alpha}\text{r}})\)

]|

comp *Tire* =

|[ extvar \(a,b,K_\text{tf},K_\text{tr},P_\text{if},P_\text{ir}\)

resfunc \((K_\text{tf},K_\text{tr})= \mathbf{a}_3(P_\text{if},P_\text{ir},a,b)\)

]|

comp *Corner* =

|[ extvar \(a,b,C_{{\alpha}\text{f}},C_{{\alpha}\text{r}},P_\text{if},P_\text{ir}\)

resfunc \((C_{{\alpha}\text{f}},C_{{\alpha}\text{r}}) = \mathbf{a}_4(P_\text{if},P_\text{ir},a,b)\)

]|

comp *Suspension* =

|[ extvar \(K_\text{s},K_\text{L},K_\text{B},L_\text{0}\)

intvar \(Z_\text{s}\)

confunc \(\mathbf{g}_1(Z_\text{s},K_\text{L},K_\text{B},L_\text{0})\)

resfunc \(K_\text{s} = \mathbf{a}_2(Z_\text{s},K_\text{L},K_\text{B},L_\text{0})\)

]|

comp *Spring* =

|[ extvar \(K_\text{L},K_\text{B},L_\text{0}\)

intvar *D*,*d*,*p*

confunc **g**_{2}(*D*,*d*,*p*)

resfunc \((K_\text{L},K_\text{B})=\mathbf{a}_5(D,d,p,L_\text{0})\)

]|

syst *Chassis* =

|[ sub *V: Vehicle, T: Tire, C: Corner*

, \(S_\text{f}, S_\text{r}\): *Suspension*, *Sp*\(_\text{f}\), *Sp*\(_\text{r}\): *Spring*

link *V*.*a* -- {*T*.*a*,*C*.*a*}, \(T.P_\text{if}\) -- \(C.P_\text{if}\)

, *V*.*b* -- {*T*.*b*,*C*.*b*}, \(T.P_\text{ir}\) -- \(C.P_\text{ir}\)

, \(V.K_\text{tf}\) -- \(T.K_\text{tf}\), \(V.C_{\alpha \text{f}}\) -- \(C.C_{\alpha \text{f}}\)

, \(V.K_\text{tr}\) -- \(T.K_\text{tr}\), \(V.C_{\alpha \text{r}}\) -- \(C.C_{\alpha \text{r}}\)

, \(V.K_\text{sf}\) -- \(S_\text{f}.K_\text{s}\), \(V.K_\text{sr}\) -- \(S_\text{r}.K_\text{s}\)

, \(S_\text{f}.K_\text{L}\) -- \(Sp_\text{f}.K_\text{L}\), \(S_\text{f}.L_0\) -- \(Sp_\text{f}.L_0\)

, \(S_\text{r}.K_\text{L}\) -- \(Sp_\text{r}.K_\text{L}\), \(S_\text{r}.L_0\) -- \(Sp_\text{r}.L_0\)

, \(S_\text{f}.K_\text{B}\) -- \(Sp_\text{f}.K_\text{B}\)

, \(S_\text{r}.K_\text{B}\) -- \(Sp_\text{r}.K_\text{B}\)

]|

topsyst *Chassis*

A second partitioning of the problem, shown in Fig. 7, is used to demonstrate how multi-level coordination can be facilitated by including systems as sub-components of other systems. This partition has a subsystem *SuspSpring* that includes a *Suspension* and a *Spring* component. Two instantiations of this lower-level system are included in a system *Chassis*_{2} that also includes the *Vehicle*, *Tire*, and *Corner* components of the first definition above. The differences between the two partitioned problems are illustrated in Fig. 7. The specification of the systems *SuspSpring* and *Chassis*_{2} for the second problem partitioning is given below.

syst *SuspSpring* =

|[ sub *S*: *Suspension*, *Sp*: *Spring*

link \(S.K_\text{L}\) -- *Sp*.\(K_\text{L}\), *S*.*L*_{0} -- *Sp*.*L*_{0}, \(S.K_\text{B}\) -- *Sp*.\(K_\text{B}\)

alias \(K_\text{s}=S.K_\text{s}\)

]|

syst *Chassis*_{2} =

|[ sub *V*: *Vehicle*, *T*: *Tire*, *C*: *Corner*

, \(S_\text{f},S_\text{r}\): *SuspSpring*

link *V*.*a* -- {*T*.*a*,*C*.*a*}, \(T.P_\text{if}\) -- \(C.P_\text{if}\)

, *V*.*b* -- {*T*.*b*,*C*.*b*}, \(T.P_\text{ir}\) -- \(C.P_\text{ir}\)

, \(V.K_\text{tf}\) -- \(T.K_\text{tf}\), \(V.C_{\alpha \text{f}}\) -- \(C.C_{\alpha \text{f}}\)

, \(V.K_\text{tr}\) -- \(T.K_\text{tr}\), \(V.C_{\alpha \text{r}}\) -- \(C.C_{\alpha \text{r}}\)

, \(V.K_\text{sf}\) -- \(S_\text{f}.K_\text{s}\), \(V.K_\text{sr}\) -- \(S_\text{r}.K_\text{s}\)

]|

topsyst *Chassis*_{2}

The couplings between the variables of *Suspension* and *Spring* are included in the system *SuspSpring*. Two systems *SuspSpring* are instantiated in system *Chassis*_{2}, and links between the different sub-components are defined accordingly. With this second partitioning, the coordination of the *SuspSpring* lower-level systems can be performed nested within the coordination of the top-level system *Chassis*_{2}.
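This nesting of definitions is what the instantiation paths in the INI format (Section 4.1) record. A sketch of how such paths arise from nested instantiation follows; the dictionary layout and identifier spellings (`Chassis2`, `Sf`, `Sr`) are assumptions for illustration:

```python
# Sub-component declarations per system definition: instance name -> type.
definitions = {
    "Chassis2": {"V": "Vehicle", "T": "Tire", "C": "Corner",
                 "Sf": "SuspSpring", "Sr": "SuspSpring"},
    "SuspSpring": {"S": "Suspension", "Sp": "Spring"},
}

def paths(system, prefix=""):
    """Yield dotted instantiation paths for all sub-components of a system."""
    for name, typ in definitions.get(system, {}).items():
        full = prefix + name if not prefix else prefix + "." + name
        yield full
        yield from paths(typ, full)  # recurse into nested systems

print(sorted(paths("Chassis2")))
# ['C', 'Sf', 'Sf.S', 'Sf.Sp', 'Sr', 'Sr.S', 'Sr.Sp', 'T', 'V']
```

Note how the two *SuspSpring* instantiations yield two distinct copies of the *Suspension* and *Spring* components, each with its own path.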

System *SuspSpring* includes the definition of an *alias* (\(K_\text{s}\)), which is introduced to make this variable of component *Suspension* accessible to system *Chassis*_{2}. In general, aliases are used in systems that are themselves part of another system, and are included to make a variable of a sub-component accessible to a higher-level system. An advantage of using aliases instead of an identifier such as *N*.*S*.*v* is that higher-level systems do not need detailed knowledge of the structure of their subsystems. Additionally, the definition of the higher-level system does not need to change if the structure of the subsystem is modified. Observe that an alias definition does *not* define a consistency constraint; aliases simply forward variable values from lower to higher levels in the problem hierarchy.
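The forwarding effect of an alias can be sketched as a one-level path expansion (an illustration only, not the Ψ implementation; the table layout and function name are assumptions):

```python
# The alias table of a system maps an exported name to the internal
# dotted path it forwards to: system SuspSpring exports Ks = S.Ks.
alias_table = {"Ks": "S.Ks"}

def resolve(instance, name, aliases):
    """Expand an aliased variable of a subsystem instance to its full path.

    Names without an alias entry are returned as plain dotted references.
    """
    return instance + "." + aliases.get(name, name)

# The link V.Ksf -- Sf.Ks in Chassis2 thus reaches variable Ks of the
# Suspension component inside instance Sf:
print(resolve("Sf", "Ks", alias_table))  # Sf.S.Ks
```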

### 5.2 Generated input files

For the first partitioned problem, the compiler and both generators are used to automatically generate the three input files from the Ψ-specification. Note that for the purpose of the ALC input file, the response functions have been included as constraint functions, similar to Section 4.2.

## 6 Summary and discussion

Decomposition-based design of engineering systems requires two main ingredients: a problem specification that defines the structure of the system to be optimized, and a computational framework that performs the numerical operations associated with coordination and solution of the partitioned problem. Several generic computational frameworks have been developed over the past decade, but generic and intuitive approaches to partitioned problem specification are rare.

This article proposes a linguistic approach to partitioned problem specification that is generic, compact, and easy to use. The proposed language Ψ allows a designer to intuitively define partitioned optimization problems using only a small set of language elements. The developed tools, including user manuals and several examples, are available for download at http://se.wtb.tue.nl/sewiki/mdo.

So-called components are the building blocks of a specification in Ψ. A component definition includes a number of variables and objective, constraint, and response functions. Components are assembled into systems in which variable couplings between components are defined as well as coupling functions. These systems can themselves be part of another system, allowing an incremental multi-level assembly of the partitioned problem. This incremental assembly process allows the designer to control the complexity of the individual assembly tasks, and improves the overview of the system.

A generic compiler has been developed that produces an easy-to-process normalized format. Two generators automatically derive input files for computational coordination frameworks. The compiler-based approach proposed in this article offers developers of coordination methods the freedom to focus on input files that integrate easily with the computational routines they are designing. The Ψ language and the associated compiler should therefore be seen as a powerful, generic pre-processor that provides these computational frameworks with easy-to-process input specifications, while allowing users to focus on partitioning the problem in an intuitive way rather than on the details required by the coordination frameworks.

Users who want to use the Ψ language with their own computational framework need to develop a generator. Such a generator is similar to the examples presented in this paper, and should automatically translate the partition specification in the INI format into an input file appropriate for the computational framework. It is recommended that this generator also check framework-specific requirements that are not covered by the generic Ψ-compiler, such as disallowing system-wide functions or response functions.

The flexibility of Ψ can be used to experiment with different partitions of the same problem. By solving different decompositions of the same problem, insights can be gained into the notion of coupling strength in a partitioned problem. These insights can be used to further refine model-based partitioning methods such as those proposed by Krishnamachari and Papalambros (1997), Li and Chen (2006), and Allison (2007). In turn, the problem partitions derived using model-based methods can be stored in Ψ or INI format.

Finally, we note that in the development of Ψ we have not made *a priori* assumptions about the class of optimization problems that can be treated, nor about the coordination method that will be used to solve the problem. The language seems applicable to linear as well as nonlinear problems, continuous or discrete variables, single- and multi-objective problems, deterministic or probabilistic optimization problems, and is suitable for both single-level as well as multi-level coordination methods.


### Acknowledgments

The authors are grateful for the comments made by the anonymous referees which helped to improve the presentation of the paper.

**Open Access**

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

### References

- Alexandrov NM (2005) Editorial—multidisciplinary design optimization. Optim Eng 6(1):5–7
- Alexandrov NM, Lewis RM (1999) Comparative properties of collaborative optimization and other approaches to MDO. In: ASMO UK/ISSMO conference on engineering design optimization. MCB University Press, Bradford, pp 39–46
- Alexandrov NM, Lewis RM (2004a) Reconfigurability in MDO problem synthesis, part 1. In: Proceedings of the 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, Albany, NY, AIAA paper 2004-4307
- Alexandrov NM, Lewis RM (2004b) Reconfigurability in MDO problem synthesis, part 2. In: Proceedings of the 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, Albany, NY, AIAA paper 2004-4308
- Allison JT, Kokkolaras M, Papalambros PY (2007) Optimal partitioning and coordination decisions in system design using an evolutionary algorithm. In: Proceedings of the 7th world congress on structural and multidisciplinary optimization, Seoul, South Korea
- Allison JT, Kokkolaras M, Papalambros PY (2009) Optimal partitioning and coordination decisions in decomposition-based design optimization. ASME J Mech Des 131(8):1–8
- Balling RJ, Sobieszczanski-Sobieski J (1996) Optimization of coupled systems: a critical overview of approaches. AIAA J 34(1):6–17
- Browning TR (2001) Applying the design structure matrix to system decomposition and integration problems: a review and new directions. IEEE Trans Eng Manage 48(3):292–306
- Chen L, Ding Z, Li S (2005) A formal two-phase method for decomposition of complex design problems. ASME J Mech Des 127:184–195
- Cloanto (2009) Cloanto implementation of INI file format. http://www.cloanto.com/specs/ini.html. Date accessed: 30 January 2009
- Cramer EJ, Dennis JE, Frank PD, Lewis RM, Shubin GR (1994) Problem formulation for multidisciplinary optimization. SIAM J Optim 4(4):754–776
- de Wit AJ, van Keulen F (2007) Numerical comparison of multi-level optimization techniques. In: Proceedings of the 3rd AIAA multidisciplinary design optimization specialist conference, Honolulu, HI
- de Wit AJ, van Keulen F (2008) Framework for multilevel optimization. In: Proceedings of the 5th China-Japan-Korea joint symposium on optimization of structural and mechanical systems, Seoul, South Korea
- Eppinger SD, Whitney DE, Smith RP, Gebala DA (1994) A model-based method for organizing tasks in product development. Res Eng Des 6:1–17
- Etman LFP, Kokkolaras M, Hofkamp AT, Papalambros PY, Rooda JE (2005) Coordination specification in distributed optimal design of multilevel systems using the *χ* language. Struct Multidisc Optim 29(3):198–212
- Friedenthal S, Moore A, Steiner R (2008) A practical guide to SysML: the systems modeling language. Morgan Kaufmann, San Francisco
- Huang GQ, Qu T, Cheung WL (2006) Extensible multi-agent system for optimal design of complex systems using analytical target cascading. Int J Adv Manuf Technol 30:917–926
- Kim HM, Michelena NF, Papalambros PY, Jiang T (2003) Target cascading in optimal system design. ASME J Mech Des 125(3):474–480
- Krishnamachari RS, Papalambros PY (1997) Optimal hierarchical decomposition synthesis using integer programming. ASME J Mech Des 119:440–447
- Kusiak A, Larson N (1995) Decomposition and representation methods in mechanical design. ASME J Mech Des 117(3):17–24
- Li S, Chen L (2006) Model-based decomposition using non-binary dependency analysis and heuristic partitioning analysis. In: Proceedings of the ASME design engineering technical conferences, Philadelphia, PA
- Lutz M (2006) Programming Python, 3rd edn. O'Reilly Media, Inc, Sebastopol
- Martins JRRA, Marriage C, Tedford N (2009) pyMDO: an object-oriented framework for multidisciplinary design optimization. ACM Trans Math Softw 36(4):1–25
- Michelena NF, Papalambros PY (1997) A hypergraph framework for optimal model-based decomposition of design problems. Comput Optim Appl 8(2):173–196
- Michelena NF, Scheffer C, Fellini R, Papalambros PY (1999) CORBA-based object-oriented framework for distributed system design. Mech Des Struct Mach 27(4):365–392
- Moore KT, Naylor BA, Gray JS (2008) The development of an open source framework for multidisciplinary analysis and optimization. In: Proceedings of the 12th AIAA/ISSMO multidisciplinary analysis and optimization conference, Victoria, BC, Canada, AIAA paper 2008-6069
- Papalambros PY (1995) Optimal design of mechanical engineering systems. ASME J Mech Des 117(B):55–62
- Perez RE, Liu HHT, Behdinan K (2004) Evaluation of multidisciplinary optimization approaches for aircraft conceptual design. In: Proceedings of the 10th AIAA/ISSMO multidisciplinary analysis and optimization conference, Albany, NY, AIAA paper 2004-4537
- Sobieszczanski-Sobieski J, Haftka RT (1997) Multidisciplinary aerospace design optimization: survey of recent developments. Struct Optim 14(1):1–23
- Steward DV (1981) The design structure system: a method for managing the design of complex systems. IEEE Trans Eng Manage EM-28(3):71–74
- Tosserams S, Etman LFP, Rooda JE (2007) An augmented Lagrangian decomposition method for quasi-separable problems in MDO. Struct Multidisc Optim 34(3):211–227
- Tosserams S, Etman LFP, Rooda JE (2008) Augmented Lagrangian coordination for distributed optimal design in MDO. Int J Numer Methods Eng 73(13):1885–1910
- Tosserams S, Etman LFP, Rooda JE (2009a) A classification of methods for distributed system optimization based on formulation structure. Struct Multidisc Optim 39:503–517. doi:10.1007/s00158-008-0347-z
- Tosserams S, Hofkamp AT, Etman LFP, Rooda JE (2009b) Ψ reference manual. SE-report 2009-04, Eindhoven University of Technology. http://se.wtb.tue.nl
- Tosserams S, Hofkamp AT, Etman LFP, Rooda JE (2009c) Using the ALC Matlab toolbox with input files generated from Ψ specifications. SE-report 2009-05, Eindhoven University of Technology. http://se.wtb.tue.nl
- Wagner TC, Papalambros PY (1993) General framework for decomposition analysis in optimal design. In: Gilmore B, Hoeltzel D, Azarm S, Eschenauer H (eds) Advances in design automation, Albuquerque, NM, pp 315–325
- Yi SI, Shin JK, Park GJ (2008) Comparison of MDO methods with mathematical examples. Struct Multidisc Optim 35(5):391–402