A specialized primal-dual interior point method for the plastic truss layout optimization

We are concerned with solving linear programming problems arising in plastic truss layout optimization. We follow the ground structure approach with all possible connections between the nodal points. For very dense ground structures, the solutions of such problems converge to the so-called generalized Michell trusses. Clearly, solving these problems for large nodal densities can be computationally prohibitive due to the resulting huge size of the optimization problems. A technique called member adding, which corresponds to column generation, is used to produce a sequence of smaller sub-problems that ultimately approximate the original problem. Although these sub-problems are significantly smaller than the full formulation, they still remain large and require computationally efficient solution techniques. In this article, we present a special-purpose primal-dual interior point method tuned to such problems. It exploits the algebraic structure of the problems to reduce the normal equations arising in the algorithm to much smaller linear equation systems. Moreover, these systems are solved using iterative methods. Finally, due to the high degree of similarity among the sub-problems after the first few member adding iterations have been performed, the method uses a warm-start strategy and achieves convergence within fewer interior point iterations. The efficiency and robustness of the method are demonstrated with several numerical experiments.


Introduction
Optimization of truss structures goes back to the seminal work of Michell [26] and has grown into a variety of disciplines in structural optimization, for which several advanced formulations dealing with practical requirements, theories on existence and uniqueness of solutions, efficient solution methods, and benchmark problems have been proposed; see [2-5,15,17,20-23,28,29,35], to mention only a few.
In this paper, we are concerned with solving topology optimization problems in plastic design by linear programming. These problem formulations are known to ignore kinematic compatibility conditions. However, for the single-load case, their equivalence to the (elastic design) minimum compliance problem has been shown; for examples see [1,3-5,17,36]. For the multiple-load case, establishing the equivalence is harder, except in some special cases [31]. Nevertheless, it is worth mentioning that the solution of the simplified linear programming formulations provides a reference lower-bound plastic design that can be used at early design stages or as an initial truss layout for multilevel optimization problems, for example [16].
The optimization problems are usually formulated using a ground structure approach [8] in which a set of nodes is distributed in the design domain and all the possible interconnecting bars are generated. The main goal is then to determine the optimal cross-sectional areas of these bars and obtain the lightest structure that is able to sustain a given set of applied load cases. In order to find ultimate optimal designs that converge to the exact solutions of the so-called generalized Michell trusses [15,22,23], or solutions less sensitive to nodal positions (which may alternatively be addressed by non-linear geometry optimization involving smaller problems [5]), we need to use a very large number of nodes [9]. However, this results in a huge number of possible bars, and the underlying optimization problems therefore impose additional requirements on the existing solution techniques. The challenges are already apparent for single-load case problems and increase significantly when multiple-load case problems are dealt with. Although attempts have been made to split multiple-load case problems into certain sets of single-load case problems based on the principle of superposition, they have been successful only for special loading conditions [30]. A need for a rigorous treatment of multiple-load case problems still exists.
An adaptive ground structure approach proposed in [10] has been applied in several studies [33]. It is an iterative procedure, closely related to column generation methods for linear programming [14,24], in which the problems are initially solved for a minimal set of connecting bars and members are subsequently added until the optimal design is obtained. The technique is very attractive because it relies on solving a sequence of smaller problems and avoids the need to solve the full formulation, which is prohibitively expensive due to its excessive size. However, after the first few member adding iterations, the size of the sub-problems grows and they require extensive computational effort to reach the solution.
The purpose of this article is to address these challenges by developing a special-purpose solution technique based on the primal-dual interior point method [37], which is well known to deliver fast convergence in practice and to excel on large-scale problems. For more details and a survey of recent developments in primal-dual interior point methods, we refer the reader to [12] and the references therein.
Interior point methods usually reach a solution after a modest number of iterations. However, a single iteration of these methods may be expensive when very large-scale problems are solved. In such cases, an improvement in efficiency can sometimes be delivered by replacing direct linear algebra techniques with well-suited iterative methods equipped with efficient preconditioners; see [6,7,12] and the references therein.
In this paper, we address several aspects of interior point method implementation and demonstrate how to specialize it to single- and multiple-load case plastic truss layout optimization problems and achieve major improvements in its efficiency. In particular, we focus on three important algorithmic features of IPMs which contribute most to the improvement of the overall efficiency of the method.
First of all, we exploit the algebraic structure of the optimization problems when solving the normal equation formulation of the reduced Newton systems. We exploit a particular sparsity structure of the LP constraint matrix to perform implicit eliminations and significantly reduce the size of these linear equation systems, in which we determine the search direction for the virtual displacements. To be precise about the size of the linear systems: for problems on an N-dimensional design domain, with d nodes inter-connected by n (≫ d) member bars and subjected to n_L independent load cases, we solve linear systems with an (m·n_L) × (m·n_L) coefficient matrix, where m ≈ N·d, instead of the ((m+n)·n_L) × ((m+n)·n_L) matrix of the standard normal equations.
Secondly, we employ iterative methods to solve the already reduced linear systems. We use the conjugate gradient method [18,32] with a preconditioner designed to exploit the particular sparsity structure and other mathematical properties of the reduced geometry matrix.
Finally, we take advantage of the similarity of the sequence of problems after the first few member adding iterations. In that case, a warm-start strategy [11,13] is used to define an initial point for the interior point algorithm. This significantly reduces the number of interior point iterations when compared to a cold-start strategy, which consists of solving every problem from scratch.
Let us mention at this point that interior point methods have already been applied in the context of truss topology optimization problems. A specialized variant of such a method was used in [19]. The linear systems used to compute search directions were reduced to involve only the displacement variables and one dual variable associated with a volume constraint. The resulting linear systems were dense because no member adding strategy was used. These systems were solved using direct methods of linear algebra. Alternative approaches to the truss topology optimization problem include a reformulation as an unconstrained optimization problem in the displacement variables only, solved by a gradient descent method [4].
The article is organized in the following manner. In Sect. 2, an overview of the primal-dual interior point method for standard linear programming is presented. In Sect. 3, the plastic truss layout optimization problem, its dual formulation, and the member adding scheme are described. In Sect. 4, the structure-exploiting linear algebra techniques are discussed. In Sect. 5, the iterative method and the applied preconditioner are described. In Sect. 6, the warm-start strategy is explained. The implementation of the method is discussed in Sect. 7 and the numerical results in Sect. 8. Finally, conclusions are given in Sect. 9.

Primal-dual interior point method for linear programming
Consider the standard primal linear programming problem

  min c^T x   subject to   A x = b,  x ≥ 0,                                (1)

where A ∈ R^{m×n} and x, c ∈ R^n, b ∈ R^m, and its dual problem

  max b^T y   subject to   A^T y + s = c,  s ≥ 0,                          (2)

where y ∈ R^m and s ∈ R^n. In primal-dual interior point methods, we introduce a barrier parameter μ > 0 and formulate the perturbed first-order optimality conditions as

  A^T y + s = c,   A x = b,   X S e = μ e,   (x, s) > 0,                   (3)

where X = diag(x), S = diag(s), and e = (1, ..., 1)^T of appropriate size. Then, we solve the system (3) for a sequence μ_k → 0 to find the solution of the primal (1) and dual (2) problems. We apply Newton's method to the optimality conditions (3) and solve the linear system

  [ 0   A^T   I ] [ Δx ]   [ ξ_d ]
  [ A   0     0 ] [ Δy ] = [ ξ_p ]                                         (4)
  [ S   0     X ] [ Δs ]   [ ξ_c ]

where ξ_d = c − A^T y − s, ξ_p = b − A x, and ξ_c = μ e − X S e. We follow Mehrotra's predictor-corrector method [25] to determine the search directions (Δx, Δy, Δs) in two steps. First, we solve the system (4) with the right-hand side (ξ_d, ξ_p, −X S e)^T to find the predictor direction (Δx^a, Δy^a, Δs^a). Then, we determine the maximal primal ᾱ_p and dual ᾱ_d step lengths

  ᾱ_p = max{ α > 0 : x + α Δx^a ≥ 0 },   ᾱ_d = max{ α > 0 : s + α Δs^a ≥ 0 }.   (5)

Next, we compute μ as

  μ = (g_a / g)^2 (g_a / n),   g = x^T s,   g_a = (x + ᾱ_p Δx^a)^T (s + ᾱ_d Δs^a),   (6)

and solve once again the system (4) with the right-hand side (0, 0, μ e − ΔX^a Δs^a)^T to find the corrector direction (Δx^c, Δy^c, Δs^c). Finally, we determine the final primal α_p and dual α_d step lengths as

  α_p = min{ τ ᾱ_p, 1 },   α_d = min{ τ ᾱ_d, 1 },                          (7)

where τ ∈ (0, 1). Then, the new iterate (x^+, y^+, s^+) is

  x^+ = x + α_p Δx,   (y^+, s^+) = (y, s) + α_d (Δy, Δs).                  (8)

When solving the system (4), most primal-dual interior point algorithms first eliminate the unknowns Δs and Δx and solve a smaller system, called the normal equations,

  (A Θ A^T) Δy = ξ,   Θ = X S^{-1},                                        (9)
where ξ is the appropriate right-hand side. We assume that any general solver based on an interior point method would solve the normal equations (9), or an even larger system, and then perform backward substitution to determine the remaining directions; in the case of (9), these are Δx and Δs. However, for the plastic layout optimization of trusses covered in this article, we can further exploit the algebraic structure and obtain a much smaller system than (9) which can be solved efficiently. This is described in Sect. 4.
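As a small self-contained illustration of this elimination (not the paper's code), the sketch below forms the normal equations for a tiny standard-form LP with m = 1 and recovers Δx and Δs by backward substitution; all data are made up for the example.

```python
# Illustrative sketch: eliminate (dx, ds) from the Newton system (4) and
# solve the normal equations (9) for dy. A tiny LP with m = 1 keeps the
# solve a scalar division; names follow the standard-form notation above.
A = [2.0, 1.0, 1.0]          # single-row constraint matrix, m = 1, n = 3
b = 4.0
c = [1.0, 2.0, 3.0]
x = [1.0, 1.0, 1.0]          # strictly positive primal iterate
y = 0.5
s = [0.5, 1.0, 2.0]          # strictly positive dual slacks
mu = 0.1

n = len(A)
xi_d = [c[j] - A[j]*y - s[j] for j in range(n)]       # dual residual
xi_p = b - sum(A[j]*x[j] for j in range(n))           # primal residual
xi_c = [mu - x[j]*s[j] for j in range(n)]             # complementarity residual

theta = [x[j]/s[j] for j in range(n)]                 # Theta = X S^{-1}

# Normal equations: (A Theta A^T) dy = xi_p + A (Theta xi_d - S^{-1} xi_c)
lhs = sum(A[j]*theta[j]*A[j] for j in range(n))
rhs = xi_p + sum(A[j]*(theta[j]*xi_d[j] - xi_c[j]/s[j]) for j in range(n))
dy = rhs / lhs

# Backward substitution for the eliminated directions
dx = [theta[j]*(A[j]*dy - xi_d[j]) + xi_c[j]/s[j] for j in range(n)]
ds = [(xi_c[j] - s[j]*dx[j])/x[j] for j in range(n)]

# All three blocks of the Newton system (4) are now satisfied:
r1 = max(abs(A[j]*dy + ds[j] - xi_d[j]) for j in range(n))          # A^T dy + ds = xi_d
r2 = abs(sum(A[j]*dx[j] for j in range(n)) - xi_p)                  # A dx = xi_p
r3 = max(abs(s[j]*dx[j] + x[j]*ds[j] - xi_c[j]) for j in range(n))  # S dx + X ds = xi_c
assert max(r1, r2, r3) < 1e-12
```

The same elimination carries over to general m, with the scalar division replaced by a Cholesky factorization (or, as later in this article, an iterative solve) of A Θ A^T.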

The plastic truss layout optimization problem
The plastic truss layout optimization problem is formulated following the ground structure approach, in which a finite set of nodes, say d of them, is (uniformly) distributed in the design domain. The nodes are then connected by all n (≫ d) possible potential bars. If overlapping bars are included, then n = d(d − 1)/2. We define an optimization problem in which the design variables are the cross-sectional areas a_i, i = 1, ..., n, of the member bars.
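The ground structure itself is straightforward to generate; the following sketch (our own illustrative code, with a hypothetical grid) builds a nodal grid, connects every pair of nodes, and confirms the count n = d(d − 1)/2.

```python
# A minimal ground-structure generator (illustrative; function and variable
# names are ours, not the paper's): distribute d nodes on a regular grid and
# connect every pair, giving n = d(d-1)/2 potential bars when overlapping
# bars are kept.
import math
from itertools import combinations

def ground_structure(nx, ny, width=1.0, height=1.0):
    """Return nodes of an nx-by-ny grid and all pairwise bars (i, j, length)."""
    nodes = [(ix*width/(nx - 1), iy*height/(ny - 1))
             for iy in range(ny) for ix in range(nx)]
    bars = [(i, j, math.dist(nodes[i], nodes[j]))
            for i, j in combinations(range(len(nodes)), 2)]
    return nodes, bars

nodes, bars = ground_structure(3, 4)      # d = 12 nodes
d = len(nodes)
assert len(bars) == d*(d - 1)//2          # n = 66 potential bars
```

Even this toy grid shows how quickly n grows with d, which is the motivation for the member adding scheme described next.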
Let m (≈ N·d, where N is the dimension of the design domain) be the number of non-fixed degrees of freedom, f_ℓ ∈ R^m, ℓ ∈ {1, ..., n_L}, be a set of external forces applied to the structure, and q_ℓ^+, q_ℓ^− ∈ R^n_+ be the associated tensile and compressive forces in the bars, respectively.
Then, the multiple-load least-weight truss topology optimization problem in plastic design can be formulated as

  minimize_{a, q}   l^T a
  subject to        B (q_ℓ^+ − q_ℓ^−) = f_ℓ,
                    (1/σ^+) q_ℓ^+ + (1/σ^−) q_ℓ^− ≤ a,
                    q_ℓ^+ ≥ 0,  q_ℓ^− ≥ 0,    ℓ ∈ {1, ..., n_L},          (10)

where B ∈ R^{m×n} is the geometry matrix whose entries are direction cosines, l ∈ R^n is the vector of bar lengths, and σ^− > 0 and σ^+ > 0 are the material's yield stresses in compression and tension, respectively. Problem (10) is a linear program. After introducing primal slack variables x_ℓ ∈ R^n_+, ℓ ∈ {1, ..., n_L}, to the inequality constraints and transforming them to equality constraints, i.e., to

  (1/σ^+) q_ℓ^+ + (1/σ^−) q_ℓ^− + x_ℓ = a,    ℓ ∈ {1, ..., n_L},

we derive the dual problem associated with (10), given by

  maximize_{u, y, s}  Σ_ℓ f_ℓ^T u_ℓ
  subject to          Σ_ℓ y_ℓ + s_a = l,
                      (1/σ^+) y_ℓ − B^T u_ℓ − s_{q+,ℓ} = 0,
                      (1/σ^−) y_ℓ + B^T u_ℓ − s_{q−,ℓ} = 0,
                      y_ℓ, s_{q+,ℓ}, s_{q−,ℓ}, s_a ≥ 0,                   (11)

where u_ℓ ∈ R^m denotes the virtual nodal displacements, s_{q+,ℓ}, s_{q−,ℓ}, y_ℓ ∈ R^n, ℓ ∈ {1, ..., n_L}, and s_a ∈ R^n.

The member adding
We follow the member adding strategy proposed in [10]. It is an iterative process that starts with a structure of minimal connectivity, see Fig. 1 for an example. Let n^0 be the number of bars in the initial structure and let K^0 ⊂ {1, ..., n} be the set of indices of the bars for which the optimization problems (10) and (11) are currently solved. Next, we compute the dual violations and generate the set K defined by

  K = { j ∉ K^0 :  Σ_ℓ max{ σ^+ (B^T u*_ℓ)_j, −σ^− (B^T u*_ℓ)_j, 0 } > (1 + β) l_j },   (12)

with u*_ℓ denoting the optimal virtual nodal displacements and β > 0 some allowed tolerance. Then, the bars with indices in K are identified, filtered, and finally added to the new problem instance. The member adding process stops when K = ∅.
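A sketch of this test, assuming the usual form of the violation criterion in adaptive ground-structure methods [10] (the paper's exact inequality is (12)); the function and variable names are ours.

```python
# Sketch of the member-adding test: a bar j currently outside the problem is
# flagged when its summed dual (yield) violation, built from the optimal
# virtual displacements u*, exceeds (1 + beta) * l_j. This follows the
# adaptive ground-structure criterion of [10]; details are an assumption.
def violated_members(bt_u, lengths, active, sigma_plus, sigma_minus, beta=0.001):
    """bt_u[l][j] holds (B^T u*_l)_j for load case l and bar j."""
    K = []
    for j, lj in enumerate(lengths):
        if j in active:          # bar already in the current sub-problem
            continue
        v = sum(max(sigma_plus*t[j], -sigma_minus*t[j], 0.0) for t in bt_u)
        if v > (1.0 + beta)*lj:
            K.append(j)
    return K

# Two load cases, three candidate bars; bar 0 is already in the problem.
bt_u = [[0.6, 0.4, -0.2], [0.5, 0.3, -1.2]]
K = violated_members(bt_u, lengths=[1.0, 1.0, 1.0], active={0},
                     sigma_plus=1.0, sigma_minus=1.0, beta=0.001)
assert K == [2]    # only the heavily loaded compressive bar is flagged
```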
There are several heuristic approaches to filtering that determine how many of the bars with indices in K should be added when formulating the new problem instance. Here, we present three.

AP1 Include all members in K.

AP2 Sort the members in K and include the largest min{αn^0, |K|} of them, where α ≤ 1. For example, α = 0.1 implies at most 10% of the initial number of bars. This is one of the techniques used in [10].

AP3 Include those in {j ∈ K : l_j ≤ L_k}, where L_k is a limit on the lengths of the bars to be added at the k-th member adding iteration. Its value increases at every member adding iteration until it reaches the maximum possible length. In particular, for the numerical experiments in this paper, we use the simple heuristic rule L_k = 2^k r, where r is the diagonal distance between two adjacent nodes. This is motivated by the scheme used in [33].
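The three heuristics can be sketched as follows (illustrative code; we assume, in AP2, that members are sorted by the magnitude of their dual violation, which the text does not state explicitly):

```python
# Hedged sketches of the three filtering heuristics AP1-AP3. 'violation' is
# the per-bar dual-violation magnitude used as the AP2 sort key (our
# assumption), and r is the diagonal distance between adjacent nodes.
def ap1(K, **_):
    return list(K)                                   # include all of K

def ap2(K, violation, n0, alpha=0.1, **_):
    ranked = sorted(K, key=lambda j: violation[j], reverse=True)
    return ranked[:min(int(alpha*n0), len(ranked))]  # largest min{alpha*n0, |K|}

def ap3(K, lengths, k, r, **_):
    Lk = (2**k)*r                                    # L_k = 2^k * r
    return [j for j in K if lengths[j] <= Lk]        # only short-enough bars

K = [5, 7, 9, 11]
violation = {5: 1.4, 7: 1.1, 9: 2.0, 11: 1.05}
lengths = {5: 1.0, 7: 3.5, 9: 2.0, 11: 8.0}

assert ap1(K) == [5, 7, 9, 11]
assert ap2(K, violation, n0=20, alpha=0.1) == [9, 5]   # at most 2 of 20
assert ap3(K, lengths, k=1, r=1.0) == [5, 9]           # L_1 = 2
```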

Exploiting the algebraic structures
In this section, we describe how to utilize the structure of the least-weight truss layout problems. The primal and dual least-weight truss layout problems (10) and (11) are equivalent to the standard primal-dual linear programming problems (1)-(2) with the constraint matrix A given in (14), so that the coefficient matrix of the normal equations (9) takes the block form (15). The matrix in (15) has dimension (m + n)·n_L × (m + n)·n_L, of which D̃_22 is an n·n_L × n·n_L block; recall that n ≫ m. When the problems are solved with general solvers based on the primal-dual interior point method, the Newton system (4) is at most reduced to the normal equations with the coefficient matrix in (15). In this article, we further utilize the structure of the matrix D̃_22, which is a diagonal matrix for single-load case problems and is built of blocks of diagonal matrices for multiple-load case problems; an example of such a structure for a three-load case is displayed in Fig. 3b. In either case, it can be explicitly inverted at almost no cost. Then, instead of solving the normal equations with the larger coefficient matrix (15), whose sparsity structures are displayed in Fig. 2a and c, we solve the much smaller system

  (B̃ D B̃^T) Δu = ξ_u,                                                    (17)

where D is defined in (18) and ξ_u is the resulting appropriate right-hand side. The coefficient matrix B̃ D B̃^T has dimension m·n_L × m·n_L and its corresponding sparsity structures are shown in Fig. 2b and d. Note that for single-load case problems, the reduction does not even affect the sparsity structure of the (1,1) block of (15).

Fig. 2 Sparsity structures of the matrix in (15) (a single-load case, c three-load case) and of the matrix in (17) (b single-load case, d three-load case)
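To illustrate why this reduction is cheap, the sketch below computes an elementwise Schur complement under the assumption (for illustration only) that all blocks involved are diagonal, as in the single-load case; the block names D11, ..., D22 are ours, and the paper's precise definition of D is (18).

```python
# Illustrative of the reduction step: when every block that involves D~22 is
# diagonal, the Schur complement D = D11 - D12 * D22^{-1} * D21 is computed
# elementwise in O(n), so the reduced system (17) can be formed without any
# factorization of D~22. Block names are ours, not the paper's.
def diagonal_schur(D11, D12, D21, D22):
    """All blocks given as vectors of their diagonal entries."""
    return [d11 - d12*d21/d22 for d11, d12, d21, d22 in zip(D11, D12, D21, D22)]

D11 = [4.0, 5.0, 6.0]
D12 = [1.0, 2.0, 3.0]
D21 = [1.0, 2.0, 3.0]
D22 = [2.0, 4.0, 9.0]
D = diagonal_schur(D11, D12, D21, D22)
assert D == [3.5, 4.0, 5.0]
```

For multiple load cases the same idea applies blockwise, since each block of D̃_22 is itself diagonal.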

Remark 1
The matrix B̃ D B̃^T always has dimension m·n_L × m·n_L. However, its sparsity depends on the member adding iterations.

Remark 2
Algebraic structure was also exploited in the interior point method for a nonlinear programming formulation of the minimum compliance problem for truss design developed in [19]. The linear systems in that method were reduced to ones involving only the displacements and one extra dual variable corresponding to the volume constraint. The reduced systems in [19], solved using direct methods, were completely dense, and no member adding strategy was used.

Iterative methods for linear systems
Applying direct methods of linear algebra to (17) is challenging due to the size and density of the matrix involved, especially for three-dimensional problems, see Sect. 8.3 and Table 6. Hence, we use the preconditioned conjugate gradient method, that is, we solve the system

  M^{-1} (B̃ D B̃^T) Δu = M^{-1} ξ_u,

where M is a suitable preconditioner. In this section, we propose a preconditioner that approximates the matrix B̃ D B̃^T well in the sense of the Frobenius norm and has a sparsity pattern determined from the detailed features of B̃ and D. These are described below for two-dimensional problems. Similar steps can be followed to extend the analysis to three-dimensional problems, see also Remark 3. We start by analyzing the entries of the matrix B ∈ R^{m×n}. Since these are direction cosines, we have |B_ij| ≤ 1 for all (i, j). The number of non-zero entries in each row cannot exceed m/2, which implies (B B^T)_ii ≤ m/2 for all i; note that m/2 is approximately the number of nodes d for two-dimensional problems. Therefore, the Frobenius norm of B B^T is dominated by the elements on its three central diagonals, that is, by the entries with indices in the set T defined by

  T = { (i, j) : |i − j| ≤ 1 },                                            (20)

see also Fig. 3a. Moreover, we derive the bound (21), in which the first and second terms on the right-hand side are the contributions from the entries with indices in T and the last term accounts for the remaining off-diagonal elements.
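For reference, a compact pure-Python preconditioned conjugate gradient loop is sketched below. It applies a diagonal (Jacobi) preconditioner as a stand-in; the splitting preconditioner proposed in this section is not reproduced here.

```python
# A compact preconditioned conjugate gradient solver (pure-Python sketch).
# M is applied through its diagonal only, a Jacobi stand-in for the
# preconditioner of this section; the loop itself is the standard PCG.
def pcg(A, b, Mdiag, tol=1e-10, maxit=100):
    n = len(b)
    x = [0.0]*n
    r = b[:]                                  # r = b - A x with x = 0
    z = [r[i]/Mdiag[i] for i in range(n)]     # z = M^{-1} r
    p = z[:]
    rz = sum(r[i]*z[i] for i in range(n))
    for _ in range(maxit):
        Ap = [sum(A[i][j]*p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i]*Ap[i] for i in range(n))
        x = [x[i] + alpha*p[i] for i in range(n)]
        r = [r[i] - alpha*Ap[i] for i in range(n)]
        if max(abs(v) for v in r) < tol:      # converged
            break
        z = [r[i]/Mdiag[i] for i in range(n)]
        rz_new = sum(r[i]*z[i] for i in range(n))
        p = [z[i] + (rz_new/rz)*p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]   # small SPD test matrix
b = [1.0, 2.0, 3.0]
x = pcg(A, b, Mdiag=[A[i][i] for i in range(3)])
res = [b[i] - sum(A[i][j]*x[j] for j in range(3)) for i in range(3)]
assert max(abs(v) for v in res) < 1e-8
```

In the actual method, only matrix-vector products with B̃ D B̃^T and solves with M are needed, which is what makes the approach attractive at scale.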
Recall that B̃ = blkdiag(B, ..., B) and that D has the sparsity structure displayed in Fig. 3b. Then, the matrix B̃ D B̃^T has the block structure (22), where the block matrices D_{k,l}, k, l ∈ {1, ..., n_L}, are diagonal. Next, given a threshold δ > 0 (specified in Sect. 7), define the sets

  B = { i : D_ii ≥ δ }   and   N = { i : D_ii < δ },                      (23)

and consequently consider the corresponding column partition of B into B_B and B_N. Let n_i be the maximum number of non-zero entries in row i of B_N.
Then, the error in the normal equations after dropping the contributions of the D_ii, i ∈ N, can be estimated as in (25), where Δ, bounded in terms of the m² − (3m − 2) non-tridiagonal positions, is the less significant contribution from the non-tridiagonal elements.
We propose the preconditioner M whose blocks M_{k,l} are defined, for k, l ∈ {1, ..., n_L}, in (26); the resulting approximation error then satisfies the corresponding estimate.

Remark 3
For three-dimensional problems, the set of indices T in (20) is extended to cover the additional diagonals associated with the third nodal degree of freedom.

Remark 4

For some of the first few interior point iterations, we use the simpler preconditioner given for k, l ∈ {1, ..., n_L} in (29), unless the warm-start strategy described in Sect. 6 is activated.
In Sect. 7, we discuss in more detail the practical effects of using the preconditioners, their implementation, and the spectral properties of linear systems solved.

Warm-start strategy
Here, we describe the warm-start strategy for the truss layout optimization problems. At every member adding iteration described in Sect. 3.1, we generate the set K in (12) to identify and add new members. In that case, the size of the problem grows and the new variables are appended to the problem as in (30), where all the variables with a super-bar are vectors in R^k, k = |K|, and ℓ ∈ {1, ..., n_L}.

Computing a warm-start point
The starting point for the variables on the right-hand side of (30) that correspond to the old ones, i.e., those without a super-bar, is the solution saved while solving the preceding problem with a loose relative optimality tolerance that depends on the level of similarity between the problems, see Sect. 7. This choice of loose tolerances avoids points located close to the boundary of the feasible region, which could adversely affect the behaviour of interior point methods [11]. Below, we propose the initial point for the newly added variables. We set ȳ_ℓ as in (31), where σ_max = max{σ^−, σ^+} and μ_0 is the value of the barrier parameter at the time when the solution was saved. Next, we define the new dual slack variables as in (32), and the new primal variables as in (33). Finally, we derive bounds on the violations of the primal and dual infeasibility and complementarity constraints for these newly introduced variables.
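A simplified stand-in for this padding step (the paper's formulas (31)-(33) also involve u*, σ_max, and the bar data, which we omit): give every newly appended primal-dual pair the value √μ_0, so each complementarity product is exactly μ_0.

```python
import math

# Simplified stand-in for the warm-start padding in (31)-(33): append each
# newly added primal/dual slack pair at sqrt(mu0), so its complementarity
# product is exactly mu0 and the point stays well inside the positive
# orthant. The paper's exact formulas are richer; this is our sketch.
def pad_warm_start(x_old, s_old, num_new, mu0):
    v = math.sqrt(mu0)
    return x_old + [v]*num_new, s_old + [v]*num_new

x, s = pad_warm_start([0.8, 1.2], [0.9, 1.1], num_new=3, mu0=1e-2)
new_products = [xi*si for xi, si in zip(x[2:], s[2:])]
assert all(abs(p - 1e-2) < 1e-12 for p in new_products)   # mu0-centered
```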

Dual infeasibility
The last three dual infeasibilities in (11) can easily be shown to satisfy (ξ_{d2,ℓ}, ξ_{d3,ℓ}, ξ_{d4,ℓ}) = (0, 0, 0) by direct substitution of (32). However, ξ_{d1,ℓ} is in general non-zero. This violation of the first dual constraint in (11) is in fact proportional to the magnitude of the dual violations used as the criterion for adding members, see (12). It may be considerable, especially in the early member adding iterations.
The warm-starting technique of [11,13] will be applied to absorb it.

Centrality
In order to assess the centrality of the new point, we compute the complementarity products of all newly added variable pairs. The pairs (ā, s̄_a) are μ_0-centered by (33).
Direct substitution further shows that, for any j ∈ K, one of the products (q̄_ℓ^+)_j (s̄_{q+,ℓ})_j or (q̄_ℓ^−)_j (s̄_{q−,ℓ})_j is always nearly μ_0-centered when σ^+ = σ^−, depending on the sign of (B^T ū_ℓ)_j. Moreover, using the fact that the maximum number of non-zero entries in each column of B̃ is 2N for an N-dimensional problem, and that the absolute values of these entries do not exceed unity, we obtain the general estimates (38), (39), and (42). These estimates show that the pairs (q̄_ℓ^+, s̄_{q+,ℓ}), (q̄_ℓ^−, s̄_{q−,ℓ}), and (x̄_ℓ, ȳ_ℓ), ℓ ∈ {1, ..., n_L}, have no significant outliers from μ_0-centrality, because the shift-like terms involving ‖ū_ℓ‖_∞ in the upper bounds are multiplied by μ_0^{1/2}. Therefore, their contribution to the violation of μ_0-centrality is limited. Moreover, these terms turn out to be small in practice.

Implementation, algorithmic parameters, and problem data
The interior point method is implemented in MATLAB (R2016a). All numerical experiments are performed on an Intel(R) Core(TM) i5-4590T CPU running at 2.00 GHz with 16 GB RAM. The interior point iterations stop when the feasibility and optimality conditions (43) are satisfied or when μ < 10^{-10}, where ξ_p and ξ_d are given in (4).
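One plausible reading of this stopping test, expressed as a small function (the exact scaling of the residuals in (43) is our assumption):

```python
# A hedged sketch of the stopping test: relative primal and dual feasibility
# below eps_p, eps_d and relative duality gap below eps_opt, or the barrier
# parameter mu below 10^{-10}. The scaling by (1 + norm) is our assumption.
def ipm_converged(xi_p, xi_d, b_norm, c_norm, gap, obj, mu,
                  eps_p=1e-6, eps_d=1e-6, eps_opt=1e-5):
    primal_ok = xi_p/(1.0 + b_norm) < eps_p
    dual_ok = xi_d/(1.0 + c_norm) < eps_d
    gap_ok = gap/(1.0 + abs(obj)) < eps_opt
    return (primal_ok and dual_ok and gap_ok) or mu < 1e-10

assert ipm_converged(1e-8, 1e-8, 1.0, 1.0, 1e-7, 1.0, 1e-6)
assert not ipm_converged(1e-2, 1e-8, 1.0, 1.0, 1e-7, 1.0, 1e-6)
assert ipm_converged(1e-2, 1e-2, 1.0, 1.0, 1e-2, 1.0, 1e-11)  # mu cutoff
```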

Algorithmic parameters
The initial point for the cold-start case is set with x_0 and s_0 equal to the vector of ones and y_0 to zero. The feasibility tolerances in (43) are set as ε_p = ε_d = 10^{-6} when the linear systems (17) are solved using direct methods, and ε_p = ε_d = 10^{-5} otherwise. For the optimality tolerance, we use loose tolerances in the first few member adding iterations and tighter ones in the last, i.e., ε_opt = [10^{-2}, 10^{-2}, 10^{-2}, 10^{-2}, 10^{-3}, 10^{-4}], and then always 10^{-5}, unless specified otherwise. We use τ = 0.95 in (7) for the step lengths and β = 0.001 in (12) for the member adding criterion.

The preconditioner
In the case of cold-start, we use the simple preconditioner (29) as long as μ > 10^{-3}, since there is then a relative uniformity in the diagonal elements of the matrix D in (18). However, when μ ≤ 10^{-3}, or in general when the warm-starting strategy is invoked, we use the splitting preconditioner (26). To determine the threshold parameter δ in (23), recall that m is the number of non-fixed degrees of freedom and n (≫ m) is the number of bars. We then set δ to be the smallest of the largest m diagonal elements of D in (18) for two-dimensional problems, and the smallest of the largest ⌊m/3⌋ elements for three-dimensional problems. This choice is based on extensive numerical experimentation; at this stage, we do not have a theoretical justification for why a larger number of the diagonal elements of D is needed for two-dimensional problems compared to three-dimensional ones. We include Fig. 4 to give an insight into the eigenvalue distributions of the unpreconditioned B̃ D B̃^T and preconditioned M^{-1} B̃ D B̃^T matrices for the small two- and three-dimensional problems Example 5a I and Example 5c I in Table 1. The distributions are shown for the final linear programs in the member adding scheme and the last interior point iteration. The histograms display the number of eigenvalues of a given magnitude. Figure 4a and c show that the eigenvalues of the unpreconditioned matrices are spread over a large range, roughly between 10^{-7} and 10^{8} for the two-dimensional problem and between 10^{-6} and 10^{6} for the three-dimensional problem. For the preconditioned matrices, however, the distribution clusters within the interval of roughly 0.003-2 for the two-dimensional problem and 0.01-2 for the three-dimensional problem, see Fig. 4b and d. Moreover, most of these eigenvalues are clustered near 1.
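The threshold selection described above can be sketched as follows (our reading of the rule; `diagD` stands for the diagonal of the matrix in (18)):

```python
# The threshold delta as we read it: the smallest of the largest m (2D) or
# floor(m/3) (3D) diagonal elements of the matrix in (18). 'diagD' and the
# function name are ours.
def splitting_threshold(diagD, m, dim):
    k = m if dim == 2 else m//3
    k = min(k, len(diagD))
    return sorted(diagD, reverse=True)[k - 1]   # k-th largest entry

diagD = [5.0, 0.1, 3.0, 0.01, 2.0, 4.0]
assert splitting_threshold(diagD, m=3, dim=2) == 3.0   # 3rd largest
assert splitting_threshold(diagD, m=6, dim=3) == 4.0   # floor(6/3) = 2nd largest
```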
For the preconditioned conjugate gradient method (pcg), we set the maximum number of iterations to 100 when the simple preconditioner (29) is used and to 80 when the splitting preconditioner (26) is used. The tolerance is set to max{min{10^{-4}, 0.1‖ξ‖_2}, 10^{-6}}, where ξ is the residual given as in (4), i.e., we use a tighter tolerance for the pcg when the iterates are close to optimality.

Remark 5

For all of the two-dimensional problems in our numerical experiments, we always use direct methods for solving the linear systems (17) unless n > 4m, including the case when the option is set to use iterative methods. This is because direct methods significantly outperform the iterative ones as long as n remains comparable to m. In practice, we observed that this happens only in the first two member adding iterations for the AP3 filtering approach, which is our choice.

The warm-start
If the option to use the warm-starting strategy is set to 'on', then it is activated as early as the third member adding iteration. We use the solutions obtained with tolerance ε_opt = 0.1 for warm-starting the third and fourth problem instances, ε_opt = 0.01 for the fifth and sixth, and ε_opt = 0.001 for the seventh and all subsequent instances in the member adding iterations. This is because the similarity between the problems increases as we approach the last stages of the member adding iterations.
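This schedule is a simple lookup; the sketch below reproduces it (illustrative only):

```python
# The warm-start tolerance schedule above as a small lookup: member-adding
# iteration k -> optimality tolerance used to save the warm-start point.
# Warm-starting is only active from k = 3 onwards.
def warm_start_tolerance(k):
    if k in (3, 4):
        return 0.1
    if k in (5, 6):
        return 0.01
    return 0.001          # seventh and all subsequent instances

assert [warm_start_tolerance(k) for k in (3, 4, 5, 6, 7, 12)] == \
       [0.1, 0.1, 0.01, 0.01, 0.001, 0.001]
```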

Remark 6
When the problems are too close, i.e., when only very few members are added, the initial points used in warm-starts often correspond to optimality tolerances which are smaller than the prescribed ones of 0.001-0.1. In that case, we save the solution obtained after the third interior point iteration.

Problem data
For the numerical experiments, we solve the problems displayed in Fig. 5. The loads are all nodal and of unit magnitude. The dimensions of the design domains are 1 × 2, 1 × 2, 2 × 1 × 1, and 1 × 1 × 3 in unit lengths. The problems are solved for three levels of nodal density. The problem instances are named Example 5a I, Example 5a II, Example 5a III, Example 5b I, and so on. Their statistics are given in Table 1. The number of degrees of freedom is based on the problem formulation (10), i.e., for the variables (a, q^+, q^−). See also Remark 7. In all cases, we consider equal tensile and compressive strengths of unity. The optimal designs are given in Fig. 6, where the bars shown are those with cross-sectional area ≥ 0.001·a_max.

Remark 7
Note that the size of the corresponding standard linear program of the form (1) with the matrix A in (14) can be obtained from Table 1. For example, for the three-dimensional problem Example 5c III with 163,452,240 bars, the corresponding standard linear program (1) would have 653,808,960 primal variables and 163,506,471 constraints. Of course, by applying the member adding technique we avoid the need to deal with problems of such excessive sizes.

Starting structures
We always start with the structures shown in Fig. 1a for two-dimensional problems and in Fig. 1b for three-dimensional problems, and then upgrade these structures based on the member adding procedure described in Sect. 3.1.

Table 3 Numerical results comparing AP2 and AP3 for the problems Example 5a II and Example 5a III in Table 1. The linear systems (17) are solved using direct methods; the warm-start strategy is activated at the third member adding iteration

Numerical results
The reported CPU times correspond only to solving the optimization problems. In Tables 2, 3, 4, 5, 6 and 7, the label "Final num. of bars" refers to the number of bars considered in the linear programming problem formulation at the last member adding iteration, and not to the number of bars with non-zero cross-sectional area.

The filtering approaches
We compare the three filtering approaches AP1, AP2, and AP3 of the member adding process described in Sect. 3.1, applied to test problem Example 5a. It is solved for three nodal densities using the interior point method described in this paper, where the linear systems (17) are solved using direct methods and the warm-start strategy is used. The numerical results are presented in Tables 2 and 3. In the first table, we consider all approaches and additionally include the case when the problem is solved for all potential member bars. In the second table, we compare only AP2 and AP3, as the problem is larger in this case. In general, the results illustrate two outcomes. The first is the efficiency of the member adding method, i.e., it obtains solutions using approximately 1% of all possible members and it needs significantly less time than a method which uses all potential bars, see columns 6 and 7 in Table 2. The second observation is that approach AP3 seems to outperform the others. It obtains solutions faster by using fewer member adding iterations while keeping the sizes of the problems small enough, somewhere between those obtained by strategy AP2 with α = 0.1 and α = 1. We have noticed that this behaviour is consistent across the other examples. Based on these results, AP3 becomes our method of choice and we follow this approach in all of the subsequent examples.

Table 4 Numerical results comparing the overall performance of the cold-start and warm-start strategies for some of the problems in Table 1. The linear systems (17) are solved using direct methods; the AP3 filtering approach is used

Cold-start versus warm-start
In Table 4, we present numerical results comparing the warm-start and cold-start strategies. The linear systems (17) are solved using direct methods. As can be seen in columns 5 and 10, the use of the warm-start strategy reduces the computational time. This is achieved by reducing the number of interior point iterations, see Table 5.

Remark 8
In the early member adding iterations, the warm-starting strategy is able to save merely a few interior point iterations. However, in later stages of the member adding, when only a few new members are added and the problem instances do not change significantly, the warm-starting technique is very successful. In Table 5, the full history of the interior point iterations is reported for a number of examples with the cold-start and warm-start strategies. Additionally, in Fig. 7 we present the corresponding optimal designs obtained with the cold-start and warm-start strategies, respectively. The reader will observe that there is no noticeable difference in the optimal solutions obtained with these two starting strategies.

Table 6 Numerical results comparing the use of direct and iterative methods for solving the linear systems (17) for some of the problems listed in Table 1. Warm-start strategy activated at the third member adding iteration; AP3 filtering approach used

Table 7 Numerical results for MOSEK and the interior point method of this paper for the problems listed in Table 1. For the IPM of this paper, the linear systems (17) are solved using iterative methods, except during the first two member adding iterations of the two-dimensional problems. Warm-start strategy activated at the third member adding iteration; AP3 filtering approach used

Fig. 7 Optimal designs for the problems Example 5a I and Example 5a II with cold-start and warm-start strategies. The bars shown are those with cross-sectional area ≥ 0.001·a_max. a Example 5a I, cold-start. b Example 5a I, warm-start. c Example 5a II, cold-start. d Example 5a II, warm-start

Remark 9
We have observed that the efficiency of the warm-start strategy improves further when iterative methods are used to solve the linear systems. This is because warm-starting promotes an early switch to the more efficient splitting preconditioner.

Direct methods versus iterative methods
In this section, we present numerical results to compare the use of direct and iterative methods to solve the linear systems (17). For direct methods, we use the Matlab built-in function chol to find the Cholesky factorization of the coefficient matrix. Once again, we consider some of the problems listed in Table 1 and report their solution statistics in Table 6. The computational times reported in columns 4 and 8 demonstrate the efficiency of the iterative methods, especially for the three-dimensional problems, see the CPU times for Example 5d II in Table 6. There are two main reasons why their efficiency for two-dimensional problems is not as pronounced as for three-dimensional ones. Firstly, as pointed out in Sect. 7, we needed more non-zero entries of the diagonal matrix D to determine the splitting preconditioner for the two-dimensional problems. This makes the preconditioner more dense and more expensive. Secondly, the linear systems solved to compute the predictor and corrector directions require a relatively larger number of pcg iterations for two-dimensional problems, see columns 11 and 12 of Table 6. The number of pcg iterations can be reduced by using a denser preconditioner, but this leads to longer run times. It is worth mentioning that three-dimensional problems are more relevant for practical applications. However, they involve more complicated grids, which causes a quick loss of sparsity in computations. The use of an iterative linear algebra solver produces more spectacular savings over direct methods in this case.
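The trade-off discussed above can be made concrete with a minimal preconditioned conjugate gradient (pcg) routine. This sketch uses a generic Jacobi (diagonal) preconditioner rather than the splitting preconditioner of Sect. 7, and all names are illustrative:

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a dense SPD matrix A (list of
    lists) with a Jacobi (diagonal) preconditioner. Returns (x, iterations).
    A denser preconditioner would cut the iteration count but raise the cost
    per iteration, mirroring the trade-off observed in Table 6.
    """
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    Minv = [1.0 / A[i][i] for i in range(n)]   # Jacobi preconditioner
    x = [0.0] * n
    r = b[:]                                   # residual b - A x for x = 0
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for k in range(1, max_iter + 1):
        Ap = mv(A, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            return x, k
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x, max_iter

# A small SPD test system standing in for a reduced system of the form (17).
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]
x, iters = pcg(A, b)
```

In an interior point context, one such solve is performed for the predictor direction and one for the corrector direction at every iteration, so the pcg iteration counts in columns 11 and 12 of Table 6 accumulate quickly for two-dimensional problems.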

Large-scale problems
In this section, we consider solutions of all the problem instances listed in Table 1, including the ones with the finest nodal density distributions. We include numerical results obtained when the problems are solved by the commercial solver MOSEK (version 7) [27] with the Matlab interface. This is to give an insight into the overall performance of the interior point method of this paper and the techniques employed, in reference to existing solvers. For MOSEK, we set the algorithm to interior point method, the maximum number of interior point iterations to 70, pre-solve 'on', the primal and dual feasibility tolerances to 10⁻⁶, and a member adding dependent optimality tolerance of [10⁻², 10⁻², 10⁻², 10⁻², 10⁻³, 10⁻⁴] for the first six member adding iterations, and thereafter always 10⁻⁵. This is similar to the settings presented in Sect. 7 for our implementation.
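The member adding dependent optimality tolerance described above can be sketched as a simple schedule. This is a hedged illustration: the tolerance values follow the text, while the function and argument names are ours:

```python
def opt_tolerance(member_adding_iter,
                  schedule=(1e-2, 1e-2, 1e-2, 1e-2, 1e-3, 1e-4),
                  final=1e-5):
    """Return the optimality tolerance for a given (1-based) member adding
    iteration: loose tolerances in the early iterations, where the sub-problem
    will change anyway, then the tight final tolerance thereafter.
    """
    if member_adding_iter <= len(schedule):
        return schedule[member_adding_iter - 1]
    return final
```

Solving the early sub-problems only coarsely avoids spending interior point iterations on instances that the subsequent member adding step will modify.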

Remark 10
The numerical results in Table 7 for using MOSEK and our version of the interior point method should not be considered a direct comparison between the two solvers, for several reasons. First, the implementations use different programming languages. Moreover, MOSEK uses direct methods to solve linear systems in its interior point implementation. In addition, the linear systems are reduced only to the normal equation systems (9) (or (15) for the truss layout problems of this paper) and not further to the smaller system (17). MOSEK uses powerful presolving which may significantly reduce the problem size and as a result simplify the normal equations. For example, MOSEK's CPU times with pre-solve turned 'off' for the first eleven problems in Table 7 are 75, 250, 522, 214, 605, 1890, 99, 785, 8413, 221, and 9555 s. Our solver does not use pre-solve. Finally, MOSEK uses its own default initial point and does not use a warm-start point.
The numerical results presented in Table 7, particularly columns 4 and 8, show that the IPM of this paper is competitive for two-dimensional problems and very efficient for three-dimensional problems, see the last four rows of these columns. Once again, as mentioned in the discussion on direct versus iterative methods, most practical applications are spatial, i.e., three-dimensional problems, and they are more challenging to solve due to their excessive size. Therefore, having an efficient tool such as the one presented in this paper, able to solve three-dimensional problems, is paramount.

Remark 11
Note that for a very dense nodal distribution, the solution of test problem Example 5a is expected to be π [10], and those of problems Example 5b and Example 5d are expected to converge to 7.78480 and 19.67476, which are the corresponding analytic solutions to the exact least-weight truss layout problems, as shown in [31] and [34], respectively.

Conclusions
We have described a primal-dual interior point method that exploits the algebraic structure of multiple-load plastic layout optimization of truss structures. The method is supported with a warm-start strategy that benefits from the closeness of the problems after performing a few member adding iterations. Moreover, the large linear systems arising in the interior point method are solved using a Krylov-type iterative method. The numerical results in Sect. 8 illustrate the overall efficiency of the method for solving large-scale problems. The method excels on three-dimensional problems, which are more challenging due to the high connectivity of the grids involved and are significantly more important for practical engineering applications than two-dimensional problems.