On the efficiency of artificial neural networks for plastic analysis of planar frames in comparison with genetic algorithms and ant colony systems
Abstract
The investigation of plastic behavior and the determination of collapse load factors are important ingredients of every kinematical method employed for the plastic analysis and design of frames. The collapse load factor depends on many parameters, such as the length of bays, the height of stories, the types of loads and the plastic moments of individual members. As the number of bays and stories increases, the parameters that have to be considered make the analysis a complex and tedious task. In such a situation, the role of algorithms that can compute an approximate collapse load factor in a reasonable time span becomes more and more crucial. Due to their interesting properties, heuristic algorithms are good candidates for this purpose and have found many applications in computing the collapse load factors of low-rise frames. In this work, artificial neural networks, genetic algorithms and ant colony systems are used to obtain the collapse load factors of two-dimensional frames. The latter two algorithms have already been employed in the analysis of frames, and hence they provide a good basis for comparing the results of the newly developed algorithm. The structure of the genetic algorithm, in the form presented here, is the same as in previous works; however, some minor amendments have been applied to the ant colony systems. The performance of each algorithm is studied through numerical examples. The focus is mainly on the behavior of artificial neural networks in the determination of collapse load factors of two-dimensional frames compared with the other two algorithms. The investigation of the results shows that a careful selection of the structure of the artificial neural network can lead to an efficient algorithm that predicts the load factors with higher accuracy. The structure should be selected with the aim of reducing the error of the network for a given frame.
Such an algorithm is especially useful in designing and analyzing frames whose geometry is known a priori.
Keywords
Collapse load factor · Collapse mechanism · Plastic limit analysis · Heuristic methods · Artificial neural networks · Genetic algorithms · Ant colony systems

1 Introduction
The minimum and maximum principles are the basis of almost all analytical methods used for plastic analysis and design of frames [1]. The most frequently used method, based on the minimum principle, is the combination of elementary mechanisms, first developed by Neal and Symonds [2, 3, 4]. On the other hand, the method of inequalities can be mentioned as one of the methods based on the maximum principle. The problem of plastic analysis and design of frames with rigid joints was solved by linear programming by Charnes and Greenberg [5], as early as 1951. Further progress in this field is due to Heyman [6], Horne [7], Baker and Heyman [8], Jennings [9], Watwood [10], Gorman [11], Thierauf [12], Kaveh [13], Kaveh and Khanlari [14], Munro [15] and Livesley [16] among others. The progress during 1955–1977 is well documented in the papers of Ref. [17]. A survey of research results achieved in the subsequent 25 years on limit analysis (LA), shakedown analysis (SDA) and mathematical programming in plasticity is provided by Maier et al. [18]. The classical theorems on LA and SDA as well as representative contributions of the early 2000s to these techniques are described comprehensively by these authors. More recently, Mahini et al. [19] have formulated the nonlinear analysis of a proportionally loaded frame into a mathematical programming form. They have adopted a piecewise-linear yield surface and the associated flow rule to construct the required elastic–perfectly plastic hinge constitutive model.
Generally, two approaches exist for the analysis and design of frames. The first, widely used approach is the finite element method, in which the stiffness matrix of each element is computed and assembled into the global stiffness matrix. A set of simultaneous equations is then solved to obtain the response of the whole system. If the response of the system is nonlinear, a similar set of equations must be solved iteratively in each step of an incremental analysis. In these methods, the history of loading has to be traced incrementally until the failure of the structure; hence, the analysis can be very time-consuming [18]. The second, direct approach falls into a class of methods called algebraic methods. In these methods, the direct computation of the stiffness matrix is not necessary; the structure is assumed to be at the onset of failure [18]. In the method of combination of elementary mechanisms, the algebraic method used in this work, the elementary mechanisms are first determined by performing Gaussian elimination on a special matrix. The elementary mechanisms are then combined to obtain a final collapse mechanism whose load factor is lower than that of any other combination of elementary mechanisms. This mechanism represents the failure mechanism of the structure. Unlike the conventional finite element method, it is not necessary to analyze the complete history of loading; only the final collapse mechanism and the associated collapse load factor are determined. Despite these features, the method has a major drawback that prohibits its application as an efficient analysis tool. Given a two-dimensional frame, as the numbers of bays and stories increase, the number of independent/combined mechanisms that have to be considered in the combination process grows rapidly, making the solution procedure complex and unmanageable [20, 21].
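The kinematical balance underlying the combination of mechanisms can be made concrete: for a candidate mechanism, the load factor follows from equating the external work of the factored loads to the internal work absorbed at the plastic hinges. The sketch below is our own minimal illustration of this balance, not the paper's implementation; all names are hypothetical.

```python
# Hypothetical sketch: collapse load factor of a single mechanism from the
# virtual-work balance  lambda = (sum Mp_i*|theta_i|) / (sum F_j*delta_j).

def load_factor(hinge_rotations, plastic_moments, loads, displacements):
    """Load factor of one candidate mechanism via the kinematical theorem."""
    internal = sum(mp * abs(th) for mp, th in zip(plastic_moments, hinge_rotations))
    external = sum(f * d for f, d in zip(loads, displacements))
    return internal / external

# Beam mechanism of a fixed-ended span l = 2 under a central load F = 1:
# hinge rotations theta = 1 at the supports and 2*theta at midspan, all with
# plastic moment Mp = 1; midspan deflection delta = theta*l/2 = 1.
lam = load_factor([1.0, 2.0, 1.0], [1.0, 1.0, 1.0], [1.0], [1.0])  # -> 4.0
```

The result 4.0 agrees with the classical collapse load 8Mp/l of a fixed-ended beam under a central point load.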
In other words, finding the correct collapse mechanism with the lowest load factor among all possible combinations of mechanisms becomes very hard to accomplish. The problem is NP-hard, meaning that no algorithm is known that finds the actual collapse mechanism in polynomial time. Both the generation of the elementary mechanisms and their combination with the aim of minimizing the collapse load factor are time-consuming and need special consideration. The high computation time of greedy algorithms stems from the same factors. Given these problems, it is important to develop algorithms that can compute an approximate collapse load factor with sufficient accuracy within a reasonable time span. Thus, the role of heuristic algorithms in balancing accuracy against computational time by optimizing the combination process becomes more and more evident.
In recent years, the trend in solving optimization problems has been directed toward using heuristic algorithms such as neural networks, genetic and ant colony algorithms. The main reason for this trend is attributable to the fact that these algorithms can be efficiently adapted to the specific search space to which they are applied and consequently they can be used for many optimization problems of different nature. Cao et al. [22, 23] used neural networks to perform a sensitivity analysis on geotechnical systems. They also employed neural networks to predict time-varying behavior of engineering structures. Aydin et al. [24] studied the efficiency of neural networks in monitoring damaged beam structures. The logic and structure of genetic algorithms are described comprehensively in references [25, 26]. They are perhaps the most referenced monographs on genetic algorithms. Kaveh et al. [14, 27, 28] applied genetic algorithms for determining collapse load factors of planar frames. Kohama et al. [29] performed collapse analysis on rigid frames using genetic algorithms. Hofmeyer et al. [30] compared the performance of co-evolutionary and genetic algorithms in optimizing the spatial and structural designs via the application of finite element method. These authors employed topology and one-step evolutionary optimizations to allow for automated studies of spatial-structural design processes [31]. Rafiq used a structured genetic algorithm to provide a design support tool for optimum building concept generation [32]. Turrin et al. [33] combined parametric modeling and genetic algorithms to achieve a performance-oriented process in architectural design. Aminian et al. [34] developed a hybrid genetic and simulated annealing method for estimating base shear of plane structures. 
The assessment of the load-carrying capacity of castellated beams has been carried out by these authors using a combination of linear genetic programming and an integrated search algorithm of genetic programming and simulated annealing [35]. Ant colony algorithms have been used by Kaveh et al. [20, 28, 36] for the analysis and design of frames. Kaveh et al. [37] also developed variants of ant colony systems for the suboptimal selection of cycle bases with applications in the force method. Jahanshahi et al. [21] proposed modified ant colony algorithms applicable to special frames with certain configurations. Chen et al. [38, 39] developed an optimization model for investigating the prestress stability of pin-jointed assemblies; they used ant colony systems to solve the equivalent traveling salesman problem and also employed them to find optimized configurations for tensegrity structures. Forcael et al. [40] proposed a simulation model to find optimum evacuation routes during a tsunami using ant colony optimization. Talatahari et al. [41] developed a multistage particle swarm algorithm for the optimum design of truss structures. A review of these and many other studies clearly shows the preference for heuristic algorithms in the various optimization problems arising in structural engineering.
In this work, genetic algorithms (GAs), ant colony systems (ACS) and artificial neural networks (ANNs) are used to obtain the collapse load factors of two-dimensional frames. The first two algorithms have already been employed by the authors in the analysis of frames, and hence they provide a good basis for comparing the results of the newly developed algorithm. The structure of the genetic algorithm, in the form presented here, is the same as in previous works; however, some minor amendments have been applied to the ant colony systems. The performance of each algorithm is studied through numerical examples. The focus is mainly on the behavior of artificial neural networks in the determination of collapse load factors of two-dimensional frames compared with the other two algorithms. The investigation of the results shows that a careful selection of the structure of the ANN can lead to an efficient algorithm that predicts the load factors with higher accuracy. The structure should be selected with the aim of reducing the error of the network for a given frame. Such an algorithm is especially useful in designing and analyzing frames whose geometry is known a priori. The article is organized as follows: the generation of elementary mechanisms and the combination of these mechanisms to find the final collapse mechanism are briefly described in Sects. 2 and 3, respectively. A detailed description of ANNs is provided in Sect. 4. Genetic algorithms and ant colony systems, with the amendments employed herein, are reviewed in Sects. 5 and 6. Numerical examples are provided in Sect. 7 to measure the performance of the individual algorithms and compare the results. Finally, concluding remarks are given in Sect. 8.
2 Generation of elementary mechanisms
In order to find a set of independent mechanisms, one can start with the method developed by Watwood [10]. However, in this method, joint mechanisms are computed as well, which is unnecessary because joint mechanisms can be automatically assigned to each joint after the computation of joint displacements. Moreover, axial deformations can also be neglected, since mechanisms are the result of excessive deformations in rotational degrees of freedom leading to plastic hinges. With these redundant mechanisms out of the way, one ends up with a method similar to that of Pellegrino and Calladine [42] and Deeks [43].
Following Deeks [43], the independent mechanisms can be purified by removing excess hinges to obtain a set of potential collapse mechanisms. In this method, each independent mechanism is checked in turn to determine whether it contains the complete set of active hinges of another independent mechanism. If it does, it is purified by removing the contained mechanism. This process is repeated until no further modification is possible.
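The purification step can be sketched in a few lines if each mechanism is modelled simply as a set of hinge indices; this set-based model is an assumption for illustration, since the paper's actual representation (signed hinge rotations) carries more information.

```python
# Illustrative sketch of purification: a mechanism that strictly contains the
# full hinge set of another mechanism is purified by removing that set.
# Mechanisms are modelled as plain sets of hinge indices (an assumption).

def purify(mechanisms):
    mechs = [set(m) for m in mechanisms]
    changed = True
    while changed:                              # repeat until no modification
        changed = False
        for i, mi in enumerate(mechs):
            for j, mj in enumerate(mechs):
                if i != j and mj and mj < mi:   # mi strictly contains mj
                    mechs[i] = mi - mj          # remove the contained mechanism
                    mi = mechs[i]
                    changed = True
    return [sorted(m) for m in mechs]

# A combined mechanism with hinges {1,2,3,4} containing the beam mechanism
# {3,4} is reduced to {1,2}.
result = purify([[1, 2, 3, 4], [3, 4]])  # -> [[1, 2], [3, 4]]
```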
3 Determination of collapse load factor
Since joint mechanisms were neglected during the formation of the independent mechanisms, it is necessary to find the locations of the hinges in the members. These locations are determined by minimizing the internal virtual work. If a joint is restrained against rotation, hinges form in all members connecting to that joint. However, if the joint is not restrained against rotation, hinges form in (n − 1) of the n members connected to that joint. In this case, n possible hinge configurations exist, and it is necessary to find the one that minimizes the internal virtual work. When fewer than the maximum number of hinges forms, the rotation in one or more of the assumed hinges is zero and does not contribute to the virtual work [43].
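As a rough reading of this rule in code: when the members meeting at an unrestrained joint all rotate by the same amount, the minimum-work configuration leaves un-hinged the member with the largest contribution Mp·|θ|. This equal-rotation simplification and all names below are our own illustrative assumptions, not the paper's procedure.

```python
# Hypothetical sketch: choose the (n - 1)-hinge pattern at an unrestrained
# joint that minimises the internal virtual work, assuming a common rotation
# magnitude per member. The member with the largest Mp*|theta| is skipped.

def hinge_pattern(plastic_moments, rotations):
    work = [mp * abs(th) for mp, th in zip(plastic_moments, rotations)]
    skip = work.index(max(work))                 # member left without a hinge
    hinges = [k for k in range(len(work)) if k != skip]
    return hinges, sum(w for k, w in enumerate(work) if k != skip)

# Three members with Mp = 2, 1, 1 and unit rotations: the strong member 0 is
# skipped, hinges form in members 1 and 2, internal work = 2.
hinges, work = hinge_pattern([2.0, 1.0, 1.0], [1.0, 1.0, 1.0])
```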
4 Artificial neural networks
Artificial neural networks (ANNs) are computational models inspired by the central nervous systems of animals, and in particular the human brain, that are capable of machine learning as well as pattern recognition [44]. Artificial neural networks are widely used for solving engineering problems [45]. They replicate only the most basic elements of the complicated, versatile and powerful operation of the human brain. For engineers trying to solve engineering problems, however, neural computing was never about replicating the human brain; it is about machines and an innovative way to solve problems.
An ANN is generally presented as a system of interconnected neurons. Neurons are the fundamental processing elements of a neural network; they perform simple computations on input data and are intended to function in parallel [46]. Neural networks are simple mathematical models that define a relation between a set of input variables A and a set of output variables B. It is also required to associate the model with a training algorithm or learning rule. This can be conveniently represented as a network structure, with arrows depicting the dependencies between variables. Training a neural network means selecting, from the set of admissible models, a specific model that minimizes the computational error. There are numerous algorithms available for training neural networks; most of them can be viewed as a straightforward application of trial and error.
The word network in the term artificial neural network refers to the interconnections between the neurons in the different layers of the system [46]. As an example, a typical system can have three layers. The first layer consists of input neurons, which send data via synapses to the second layer of neurons, and then via further synapses to the third layer of output neurons. More complex systems have more layers of neurons, some with larger numbers of input and output neurons. The synapses store parameters called weights that manipulate the data in the network calculations.
4.1 Implementation of ANN
In the implementation of an ANN, there are many parameters whose modification can affect the overall operation, speed and accuracy of the network. The number of layers and the number of neurons in each layer are among the most important of these, since the layers are the actual processing units of the network [46]. Therefore, the major point in designing any ANN is the careful selection of the pattern and number of the various layers. It should be noted, however, that an increase in the number of layers does not necessarily lead to a corresponding increase in speed and accuracy. The other important parameter in the implementation of an ANN is the rate of training. If this parameter is lower than a certain limit, the speed with which the network finds desirable results is reduced, and if it is higher, the network becomes unstable. Stability of the training process means that the response of the network after each cycle has less error than in the previous one; that is, over consecutive cycles the results improve until a desirable error is achieved [46].
Another useful measure for an ANN is the Pearson correlation coefficient (PCC), which indicates the accuracy of the network and reveals how well it is trained. The PCC is a measure of the linear correlation between two variables x and y, giving a value between −1 and +1 inclusive, where −1 is total negative correlation, 0 is no correlation and +1 is total positive correlation. It is widely used in the literature as a measure of the degree of linear dependence between two variables. Correlations equal to −1 or +1 correspond to data points lying exactly on a line. Therefore, a PCC close to 1 shows that the network is tuned satisfactorily.
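The PCC is computed here directly from its standard definition (covariance divided by the product of standard deviations); the pure-Python sketch below is for illustration only.

```python
# Pearson correlation coefficient between two equal-length sequences,
# e.g. network outputs versus target collapse load factors.
import math

def pcc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pcc([1, 2, 3], [2, 4, 6])   # collinear points give a PCC of (about) +1
```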
A further point that must be considered in training the network is the scattering of the input and output parameters. If the scattering of the parameters is relatively high, the network fails to yield acceptable results. In such cases, the problem is usually resolved by revising the data so as to reduce the scattering. In this work, the parameters are normalized by dividing their values by the maximum values they can take [47]. In this way, the scattering is reduced and all parameters are mapped into the interval [0, 1]. An additional advantage of this technique is that one deals with dimensionless parameters.
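The normalization described above amounts to a one-line scaling; the example values below are illustrative, not taken from the sample frames.

```python
# Normalise each parameter by the maximum value it can take, mapping all
# inputs and outputs into [0, 1] and making them dimensionless.

def normalize(values, max_value):
    return [v / max_value for v in values]

spans = [1.0, 1.5, 2.5, 4.0]            # e.g. bay lengths l (illustrative)
scaled = normalize(spans, max(spans))   # -> [0.25, 0.375, 0.625, 1.0]
```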
4.2 Training algorithm of ANN
The most widely used training algorithm for artificial neural networks is the back-propagation algorithm. The popularity of this algorithm is largely due to its relative simplicity, together with its universal approximation capabilities [48]. Motivated by these properties, the algorithm employed in this work for training the ANN is the back-propagation algorithm. The algorithm defines a systematic way to update the synaptic weights of multilayer feed-forward supervised networks composed of an input layer that receives the input values, an output layer that calculates the neural network output, and one or more intermediate layers, the so-called hidden layers. The back-propagation supervised training process is based on the gradient descent method, which minimizes the sum of squared errors between the target values and the outputs of the neural network [46, 49].
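A minimal back-propagation loop for a single-hidden-layer feed-forward network is sketched below. It is written from the textbook algorithm (sigmoid units, squared-error gradient descent), not from the paper's implementation, and is trained here on XOR purely as a smoke test; all names and hyperparameters are illustrative.

```python
# Textbook back-propagation for a 1-hidden-layer sigmoid network, trained by
# stochastic gradient descent on the squared error.
import math, random

def sigmoid(z):
    z = max(-60.0, min(60.0, z))        # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, n_hidden=4, rate=0.5, epochs=5000, seed=0):
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for x, t in samples:
            xa = list(x) + [1.0]                                # bias input
            h = [sigmoid(sum(w * xi for w, xi in zip(row, xa))) for row in w1]
            ha = h + [1.0]
            y = sigmoid(sum(w * hi for w, hi in zip(w2, ha)))
            dy = (y - t) * y * (1 - y)                          # output delta
            for j in range(n_hidden):                           # back-propagate
                dh = dy * w2[j] * h[j] * (1 - h[j])
                for i in range(n_in + 1):
                    w1[j][i] -= rate * dh * xa[i]
            for j in range(n_hidden + 1):
                w2[j] -= rate * dy * ha[j]
    def predict(x):
        xa = list(x) + [1.0]
        h = [sigmoid(sum(w * xi for w, xi in zip(row, xa))) for row in w1] + [1.0]
        return sigmoid(sum(w * hi for w, hi in zip(w2, h)))
    return predict

net = train([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)])
```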
In the current context, an optimum network is one that yields the least error and the highest PCC and, hence, the best performance. In order to find the best possible structure for the ANNs employed in this work, several networks with different numbers of layers, numbers of neurons and rates of training have been implemented. To study the behavior of the ANNs, the outputs of each implemented network are investigated closely, and the network with the best performance is selected as the optimum network for computing the collapse load factor of a given frame. Hence, the selection of an admissible structure for the ANN, in the sense described above, is a trial-and-error process.
5 Genetic algorithms
As the name suggests, the design and development of genetic algorithms are inspired by the natural mechanisms of reproduction [25, 26]. Individuals with better fitness have a higher probability to survive and mate with other survivors to reproduce new generations. Samples reproduced from generation to generation inherit the superior traits of their predecessors and eliminate their defects [25, 26, 27, 28]. In this work, genetic algorithms are adapted, alongside the other algorithms, to choose the appropriate elementary mechanisms to be used in the combination process. The combination should be carried out in such a way that it leads to the actual collapse mechanism and the associated collapse load factor. For this purpose, a few definitions are necessary, which are presented in the sequel.
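A natural encoding, sketched below, is a bit string in which bit i selects elementary mechanism i for the combination, with fitter individuals (lower collapse load factor) more likely to survive and mate. This sketch is our own illustration of the standard GA operators; `evaluate` is a hypothetical stand-in for the actual combine-and-compute-load-factor routine.

```python
# Standard GA operators over bit-string chromosomes (illustrative sketch).
import random

def roulette_select(population, evaluate, rng):
    # Lower load factor = fitter, so weight each chromosome by 1/fitness.
    weights = [1.0 / evaluate(ch) for ch in population]
    return rng.choices(population, weights=weights, k=2)

def crossover(a, b, rng):
    cut = rng.randrange(1, len(a))      # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ch, rng, p=0.05):
    return [bit ^ 1 if rng.random() < p else bit for bit in ch]

rng = random.Random(0)
# evaluate is a placeholder fitness: a real one would combine the selected
# elementary mechanisms and return the resulting collapse load factor.
parents = roulette_select([[1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 0]],
                          lambda ch: 1.0 + sum(ch), rng)
child = mutate(crossover(parents[0], parents[1], rng), rng)
```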
6 Ant colony algorithms
Ant colony optimization (ACO) is a relatively recently developed optimization algorithm, and the ant colony system (ACS) is a variant of it that has been shown to behave more robustly and provide better results. In this work, ACS is used along with the ANN and GA algorithms for finding the collapse load factors of two-dimensional frames. Before adapting ACS to the problem of finding the minimum collapse load factor, it is worthwhile to give a brief description of ACO and provide justifications for implementing ACS [20, 21, 50].
The building blocks of ACO and ACS are cooperative agents called ants [50]. These agents encompass simple capabilities, which make their behavior resemble real ants. Ants mark paths leading to food sources by depositing pheromone, and they communicate information through pheromone trails that they leave behind. However, pheromone trail evaporates and its effect weakens over time. As a result of pheromone accumulation and evaporation, more ants tend to pass over certain paths, and these paths are visited more often as the intensity of pheromone increases.
In order to get more insight into the problem, suppose that the length of the longer path is twice the length of the shorter one. It is desired to know what happens at discrete time steps t = 1, 2…. Assume that at t = 1, 30 ants start to move to the food source either through the shorter path or through the longer one. Also, assume that each ant moves at a velocity of 1 per time unit and lays down a pheromone trail of intensity 1 that evaporates completely after each time step.
At t = 1, there is no trail on the paths, and the probability of choosing either of them is equal. At t = 2, the new 30 ants headed for the food source find a trail intensity of 15 on the longer path, laid by the 15 ants that chose this path, and an intensity of 30 on the shorter path, laid by the other 15 ants together with the 15 ants returning to the nest through the shorter path. The probability of choosing the shorter path is therefore doubled, in proportion to the amount of pheromone laid. This process continues until eventually all of the ants select the shorter path.
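The two-path thought experiment above can be put in numbers: at t = 2 the shorter path carries trail intensity 30 against 15 on the longer one, so the probability of choosing the shorter path is twice that of the longer.

```python
# Path-choice probabilities proportional to trail intensity, as in the
# two-path example: intensities 30 (shorter) and 15 (longer) at t = 2.

def choice_probabilities(trail_short, trail_long):
    total = trail_short + trail_long
    return trail_short / total, trail_long / total

p_short, p_long = choice_probabilities(30, 15)   # -> (2/3, 1/3)
```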
The brief discussion presented in preceding paragraphs should give an idea about the behavior of real ants and the mechanism based on which optimization algorithms such as ACO are developed. A thorough description of ACO and its descendant, the ACS algorithm, can be found in references [50, 51] and its applications to plastic analysis of frames in references [20, 21, 28].
6.1 Adapting ACS to the computation of minimum collapse load factor
Assuming that a typical frame has n elementary mechanisms, a graph consisting of n nodes is constructed, with each node associated with one elementary mechanism. The set of nodes is connected by n(n − 1)/2 edges, resulting in a clique (see references [13, 52, 53] for definitions). A predefined number m of ants is randomly distributed over the nodes of the graph, and the search starts by moving the ants from their current positions to newly selected nodes based on a decision-making rule. According to the local search strategies adopted for ACS algorithms, each ant normally visits n nodes during its tour, and consequently nm movements take place in each iteration.
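One iteration of this search can be sketched as follows: m ants each build a tour over all n nodes of the clique, choosing the next node by the standard ACS pseudo-random proportional rule (exploit the best pheromone edge with probability q0, otherwise sample proportionally to pheromone). This is a generic ACS sketch under those textbook assumptions; the pheromone matrix `tau` and parameter values are illustrative.

```python
# One ACS iteration on the construction graph: m ants, each visiting all
# n nodes (one node per elementary mechanism), i.e. n*m moves per iteration.
import random

def ant_tour(n, tau, q0, rng):
    start = rng.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        if rng.random() < q0:                    # exploit: best pheromone edge
            nxt = max(unvisited, key=lambda j: tau[cur][j])
        else:                                    # explore: roulette wheel
            cand = list(unvisited)
            nxt = rng.choices(cand, weights=[tau[cur][j] for j in cand])[0]
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

n, m = 5, 3
tau = [[1.0] * n for _ in range(n)]              # uniform initial pheromone
rng = random.Random(1)
tours = [ant_tour(n, tau, 0.9, rng) for _ in range(m)]
```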
In concluding this section, it is noteworthy that one can use a hybrid GA/ACS algorithm to balance accuracy and speed. In [21, 28], it was noted that GA operates faster than ACS; however, its results are not as accurate as those of ACS. Therefore, one can use the output of several iterations of GA and then build the construction graph according to the patterns suggested by the chromosomes of GA's last generation. In such an approach, it is not necessary to connect all of the nodes of the construction graph to obtain a clique. The fittest chromosomes are selected based on an appropriate criterion, and then, for each chromosome in turn, the nodes corresponding to the set bits (bits containing 1's) of that chromosome are connected to each other. After this step, the iterations of the ACS algorithm refine the preliminary solution obtained by the GA.
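The construction of this sparser graph can be sketched directly from the description above: for each fit chromosome, connect pairwise the nodes whose bits are set. The chromosome values below are illustrative.

```python
# Build the hybrid GA/ACS construction graph: instead of a full clique, add
# edges only between nodes whose bits are set in each fit chromosome.
from itertools import combinations

def edges_from_chromosomes(chromosomes):
    edges = set()
    for ch in chromosomes:
        nodes = [i for i, bit in enumerate(ch) if bit == 1]   # set bits
        edges.update(combinations(nodes, 2))                  # pairwise connect
    return edges

# Two fit chromosomes over 5 elementary mechanisms.
e = edges_from_chromosomes([[1, 1, 0, 1, 0], [0, 1, 1, 0, 0]])
# -> {(0, 1), (0, 3), (1, 3), (1, 2)}: 4 edges instead of the 10 of a clique.
```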
7 Numerical results
It was already mentioned that, in determining the collapse load factor of a typical two-dimensional frame, the number of mechanisms to be combined grows quickly as the numbers of bays and stories increase. As a consequence, finding the correct collapse mechanism becomes a formidable task and requires a great deal of computational time. To circumvent this problem, heuristic methods such as ANN, GA and ACS can be used as alternatives to the method of combination of elementary mechanisms. The application of heuristic methods and careful tuning of the relevant parameters make it possible to strike a compromise between accuracy and computational cost. In cases where the determination of the collapse mechanism is immaterial and only the value of the collapse load factor is required, it is more expedient to use ANN or GA. In other cases, the application of a more robust method such as ACS or a greedy algorithm is advised.
7.1 Two-bay and six-story frame
Table 1 Values of effective parameters for 20 sample frames of example 1
Sample frame | F_{x} | F_{y} | l | h | M_{1} | M_{2} | M_{3} | M_{4} | M_{5} | M_{6} | M_{7} | M_{8} | M_{9} | M_{10} | M_{11} | M_{12} |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1.5 | 1 | 1 | 1 | 2.5 | 1.3 | 2 | 1.2 | 1.8 | 0.9 | 1.4 | 0.8 | 1 | 0.7 | 0.8 | 0.6 |
2 | 1.5 | 1 | 1 | 1.25 | 2.5 | 1.3 | 2 | 1.2 | 1.8 | 0.9 | 1.4 | 0.8 | 1 | 0.7 | 0.8 | 0.6 |
3 | 1.5 | 1.5 | 1.5 | 1 | 2.5 | 1.5 | 2 | 1.3 | 1.8 | 1.1 | 1.4 | 0.9 | 1 | 0.7 | 0.8 | 0.5 |
4 | 1.5 | 1.5 | 1.5 | 1.25 | 2.5 | 1.5 | 2 | 1.3 | 1.8 | 1.1 | 1.4 | 0.9 | 1 | 0.7 | 0.8 | 0.5 |
5 | 1.5 | 1.5 | 1.5 | 1.5 | 2.7 | 2 | 2.5 | 1.8 | 2.1 | 1.6 | 1.7 | 1.4 | 1.1 | 1.2 | 0.9 | 1 |
6 | 1.5 | 1.5 | 1.5 | 1.75 | 2.7 | 2 | 2.5 | 1.8 | 2.1 | 1.6 | 1.7 | 1.4 | 1.1 | 1.2 | 0.9 | 1 |
7 | 1.7 | 1.7 | 2 | 1 | 2.5 | 2 | 2.1 | 1.8 | 1.9 | 1.6 | 1.5 | 1.4 | 1.1 | 1.2 | 0.7 | 1 |
8 | 1.7 | 1.7 | 2 | 1.5 | 2.5 | 2 | 2.1 | 1.8 | 1.9 | 1.6 | 1.5 | 1.4 | 1.1 | 1.2 | 0.7 | 1 |
9 | 1.9 | 1.7 | 2 | 1.75 | 2.8 | 2.3 | 2.3 | 2.1 | 1.8 | 1.9 | 1.3 | 1.7 | 0.9 | 1.5 | 0.6 | 1.3 |
10 | 1.9 | 1.7 | 2 | 2 | 2.8 | 2.3 | 2.3 | 2.1 | 1.8 | 1.9 | 1.3 | 1.7 | 0.9 | 1.5 | 0.6 | 1.3 |
11 | 1.9 | 2 | 2.5 | 1 | 2.4 | 1.9 | 1.9 | 1.7 | 1.5 | 1.5 | 1.2 | 1.2 | 0.9 | 1 | 0.7 | 0.8 |
12 | 1.9 | 2 | 2.5 | 1.5 | 2.4 | 1.9 | 1.9 | 1.7 | 1.5 | 1.5 | 1.2 | 1.2 | 0.9 | 1 | 0.7 | 0.8 |
13 | 1.3 | 1.3 | 2.5 | 2 | 2.2 | 1.2 | 1.8 | 1 | 1.4 | 0.8 | 1.1 | 0.6 | 0.8 | 0.5 | 0.5 | 0.4 |
14 | 1.3 | 1.3 | 2.5 | 2.5 | 2.2 | 1.2 | 1.8 | 1 | 1.4 | 0.8 | 1.1 | 0.6 | 0.8 | 0.5 | 0.5 | 0.4 |
15 | 1 | 1 | 3 | 1.5 | 2.2 | 1.4 | 1.8 | 1.3 | 1.4 | 1.2 | 1.1 | 1.1 | 0.8 | 1 | 0.5 | 0.9 |
16 | 1 | 1 | 3 | 2 | 2.2 | 1.4 | 1.8 | 1.3 | 1.4 | 1.2 | 1.1 | 1.1 | 0.8 | 1 | 0.5 | 0.9 |
17 | 2 | 1 | 3 | 2.5 | 2.1 | 1.4 | 1.9 | 1.2 | 1.7 | 0.9 | 1.5 | 0.8 | 1.3 | 0.7 | 1.1 | 0.6 |
18 | 2 | 1 | 3 | 3 | 2.1 | 1.4 | 1.9 | 1.2 | 1.7 | 0.9 | 1.5 | 0.8 | 1.3 | 0.7 | 1.1 | 0.6 |
19 | 1 | 2 | 4 | 2.5 | 2.2 | 1.4 | 2 | 1.2 | 1.8 | 0.9 | 1.4 | 0.8 | 1 | 0.7 | 0.8 | 0.6 |
20 | 1 | 2 | 4 | 3 | 2.2 | 1.4 | 2 | 1.2 | 1.8 | 0.9 | 1.4 | 0.8 | 1 | 0.7 | 0.8 | 0.6 |
Table 2 Structure of ANNs trained for sample frames of example 1
Network | Rate of training | Structure of layers | PCC | Average error (%) | Training time (s) |
---|---|---|---|---|---|
1 | 0.149 | 5 × 6 × 7 × 1 | 0.910 | 1.364 | 723.6 |
2 | 0.140 | 5 × 9 × 3 × 1 | 0.891 | 2.093 | 848.3 |
3 | 0.164 | 7 × 9 × 1 | 0.995 | 1.972 | 661.5 |
4 | 0.159 | 7 × 7 × 1 | 0.998 | 1.031 | 677.9 |
5 | 0.171 | 7 × 10 × 1 | 0.935 | 1.198 | 693.4 |
Regarding the networks in Table 2, it is evident that the fourth network, with a PCC of 0.998 and an average error of 1.031 %, has the best performance among the networks trained in this example. Therefore, this network is selected for computing the collapse load factors of the sample frames.
Table 3 Exact and estimated collapse load factors for 20 sample frames of example 1 using the greedy, ANN, GA and ACS algorithms

Sample frame | Exact λ | Estimated λ (ANN) | Estimated λ (GA) | Estimated λ (ACS)
---|---|---|---|---
1 | 0.910 | 0.911 | 0.937 | 0.910 |
2 | 0.728 | 0.729 | 0.750 | 0.750 |
3 | 0.990 | 0.980 | 0.990 | 0.990 |
4 | 0.792 | 0.792 | 0.792 | 0.809 |
5 | 0.858 | 0.858 | 0.858 | 0.858 |
6 | 0.735 | 0.733 | 0.736 | 0.736 |
7 | 1.098 | 1.089 | 1.164 | 1.098 |
8 | 0.732 | 0.719 | 0.732 | 0.732 |
9 | 0.613 | 0.612 | 0.613 | 0.613 |
10 | 0.536 | 0.547 | 0.536 | 0.536 |
11 | 0.903 | 0.909 | 0.916 | 0.903 |
12 | 0.602 | 0.611 | 0.602 | 0.602 |
13 | 0.445 | 0.446 | 0.445 | 0.450 |
14 | 0.356 | 0.350 | 0.3704 | 0.360 |
15 | 0.944 | 0.965 | 0.944 | 0.944 |
16 | 0.708 | 0.718 | 0.708 | 0.708 |
17 | 0.272 | 0.277 | 0.272 | 0.275 |
18 | 0.227 | 0.230 | 0.227 | 0.230 |
19 | 0.485 | 0.480 | 0.548 | 0.597 |
20 | 0.431 | 0.441 | 0.447 | 0.444 |
Comparing the second and third columns of Table 3, there is close agreement between the collapse load factors yielded by the ANN and the exact load factors obtained via the greedy algorithm. The error graph in Fig. 12 shows that the maximum error in this case is below 2.5 %, which is far less than that of the GA and ACS algorithms.
Table 4 Values of effective parameters for sample frame 21 of example 1
Sample frame | F_{x} | F_{y} | l | h | M_{1} | M_{2} | M_{3} | M_{4} | M_{5} | M_{6} | M_{7} | M_{8} | M_{9} | M_{10} | M_{11} | M_{12} |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21 | 1.7 | 1.6 | 3 | 2.2 | 2.5 | 1.8 | 2.1 | 1.6 | 1.8 | 1.4 | 1.4 | 1.2 | 1 | 1 | 0.8 | 0.8 |
The exact collapse load factor for sample frame 21 is 0.459. The collapse load factors predicted by ANN, GA and ACS are 0.447, 0.459 and 0.594, respectively. The collapse load factor obtained by GA coincides with the exact load factor; the one computed by ACS is about 30 % higher, while the one predicted by the ANN is 2.61 % lower.
Table 5 Mean error for the ANN, GA and ACS algorithms applied to all samples of example 1, and the CPU time for these algorithms to calculate the collapse load factor for sample frame 21
Algorithm | Mean error (%) | CPU time (s) |
---|---|---|
ANN | 1.10 | 8.8 |
GA | 1.63 | 4.6 |
ACS | 3.12 | 6.2 |
7.2 Three-bay and three-story frame
Table 6 Values of effective parameters for 20 sample frames of example 2
Sample frame | l | h | F_{1} | F_{2} | F_{3} | F_{4} | F_{5} | F_{6} | F_{7} | F_{8} | F_{9} | M_{1} | M_{2} | M_{3} | M_{4} | M_{5} | M_{6} |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 1 | 1 | 2 | 8 | 6 | 4 | 4 | 5 | 6 | 1 | 2 | 6 | 4 | 4 | 3 | 3 | 2 |
2 | 1 | 1 | 2 | 6 | 4 | 4 | 5 | 4 | 6 | 2 | 3 | 6 | 4 | 4 | 3 | 3 | 2 |
3 | 2 | 1 | 3 | 6 | 4 | 5 | 5 | 4 | 7 | 2 | 3 | 6 | 4 | 4 | 3 | 3 | 2 |
4 | 2 | 1 | 3 | 6 | 4 | 5 | 5 | 4 | 7 | 2 | 3 | 6 | 6 | 5 | 5 | 3 | 4 |
5 | 2 | 1.25 | 3 | 9 | 7 | 5 | 7 | 5 | 7 | 5 | 3 | 6 | 6 | 5 | 5 | 3 | 4 |
6 | 2 | 1.25 | 6 | 9 | 7 | 5 | 7 | 5 | 4 | 5 | 3 | 6 | 6 | 5 | 5 | 3 | 4 |
7 | 2.5 | 1.5 | 6 | 9 | 7 | 5 | 7 | 5 | 4 | 5 | 4 | 5 | 5 | 3 | 4 | 2 | 3 |
8 | 2.5 | 1.5 | 4 | 10 | 8 | 5 | 8 | 6 | 6 | 6 | 4 | 5 | 5 | 3 | 4 | 2 | 3 |
9 | 3 | 1.5 | 3 | 10 | 8 | 4 | 8 | 6 | 5 | 6 | 4 | 5 | 5 | 3 | 4 | 2 | 3 |
10 | 3 | 1.5 | 3 | 6 | 5 | 4 | 5 | 4 | 5 | 1 | 2 | 6 | 3 | 5 | 2 | 4 | 1 |
11 | 3.5 | 1.5 | 3 | 6 | 5 | 4 | 5 | 4 | 5 | 1 | 2 | 6 | 3 | 5 | 2 | 4 | 1 |
12 | 3.5 | 1.5 | 6 | 6 | 5 | 4.5 | 4 | 4 | 3 | 1 | 1 | 6 | 3 | 5 | 2 | 4 | 1 |
13 | 1.5 | 2 | 1 | 5 | 4 | 2 | 3 | 2 | 3 | 2 | 1 | 4 | 4 | 3 | 3 | 2 | 2 |
14 | 1.5 | 2 | 3 | 5 | 4 | 4.5 | 3 | 2 | 6 | 2 | 1 | 4 | 4 | 3 | 3 | 2 | 2 |
15 | 2.5 | 1.75 | 2 | 7 | 5 | 3 | 4 | 3 | 4 | 2 | 1 | 4 | 4 | 3 | 3 | 2 | 2 |
16 | 2.5 | 1.75 | 8 | 10 | 9 | 9 | 8 | 7 | 10 | 6 | 5 | 6 | 6 | 5 | 5 | 4 | 4 |
17 | 3.5 | 2 | 3 | 10 | 9 | 4 | 8 | 7 | 5 | 6 | 5 | 6 | 6 | 5 | 5 | 4 | 4 |
18 | 3.5 | 2 | 10 | 10 | 9 | 9 | 8 | 7 | 8 | 6 | 5 | 6 | 6 | 5 | 5 | 4 | 4 |
19 | 4 | 2 | 1 | 4 | 3 | 2 | 3 | 2 | 3 | 2 | 1 | 3 | 3 | 2 | 2 | 1 | 1 |
20 | 4 | 2 | 5 | 10 | 8 | 7 | 8 | 6 | 9 | 4 | 6 | 3 | 3 | 2 | 2 | 1 | 1 |
Structure of ANNs trained for sample frames of example 2
Network | Rate of training | Structure of layers | PCC | Average error (%) | Training time (s) |
---|---|---|---|---|---|
1 | 0.155 | 5 × 4 × 5 × 1 | 0.940 | 1.537 | 789.7 |
2 | 0.150 | 6 × 9 × 1 | 0.901 | 1.764 | 752.6 |
3 | 0.162 | 7 × 9 × 7 × 1 | 0.999 | 0.771 | 854.2 |
4 | 0.159 | 7 × 7 × 5 × 1 | 0.998 | 0.803 | 815.3 |
5 | 0.169 | 7 × 8 × 1 | 0.955 | 0.995 | 607.4 |
Observing the PCC factors and average error values in Table 7, it can be concluded that the third network has the best performance: its PCC factor and average error are 0.999 and 0.771 %, respectively. Hence, this network is the most suitable candidate for computing the collapse load factors of the sample frames.
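The selection criterion applied here (highest PCC, with average error as a tie-breaker) can be sketched as follows; the dictionary transcribes Table 7, and the exact tie-breaking rule is an assumption, not stated in the text:

```python
# PCC and average error (%) for the five candidate networks of Table 7
networks = {
    1: {"structure": "5x4x5x1", "pcc": 0.940, "avg_error": 1.537},
    2: {"structure": "6x9x1",   "pcc": 0.901, "avg_error": 1.764},
    3: {"structure": "7x9x7x1", "pcc": 0.999, "avg_error": 0.771},
    4: {"structure": "7x7x5x1", "pcc": 0.998, "avg_error": 0.803},
    5: {"structure": "7x8x1",   "pcc": 0.955, "avg_error": 0.995},
}

# Rank by highest correlation first, lowest average error second
best = max(networks, key=lambda k: (networks[k]["pcc"], -networks[k]["avg_error"]))
```

For the tabulated values, both criteria point to the same candidate, network 3.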
Exact and estimated collapse load factors for 20 sample frames of example 2 using greedy, ANN, GA and ACS algorithms
Sample frame | Exact λ | Estimated λ (ANN) | Estimated λ (GA) | Estimated λ (ACS) |
---|---|---|---|---|
1 | 2.545 | 2.536 | 2.625 | 2.629 |
2 | 2.667 | 2.667 | 2.702 | 2.710 |
3 | 1.825 | 1.825 | 2.083 | 1.861 |
4 | 2.554 | 2.554 | 2.700 | 2.569 |
5 | 1.907 | 1.910 | 2.097 | 1.907 |
6 | 2.081 | 2.080 | 2.308 | 2.081 |
7 | 1.353 | 1.356 | 1.439 | 1.355 |
8 | 1.195 | 1.200 | 1.320 | 1.226 |
9 | 1.152 | 1.148 | 1.222 | 1.222 |
10 | 1.040 | 1.039 | 1.067 | 1.067 |
11 | 0.914 | 0.940 | 0.914 | 0.914 |
12 | 1.037 | 1.027 | 1.143 | 1.143 |
13 | 2.205 | 2.200 | 2.205 | 2.214 |
14 | 1.033 | 1.030 | 1.033 | 1.033 |
15 | 1.469 | 1.445 | 1.578 | 1.469 |
16 | 0.968 | 0.967 | 0.968 | 0.968 |
17 | 1.218 | 1.218 | 1.371 | 1.371 |
18 | 0.836 | 0.830 | 0.853 | 0.858 |
19 | 1.000 | 0.990 | 1.182 | 1.000 |
20 | 0.333 | 0.350 | 0.409 | 0.333 |
Comparing the collapse load factors estimated by the ANN with the exact factors obtained from the greedy algorithm demonstrates the accuracy of the ANN results. The graph in Fig. 15 shows that the maximum error for the ANN with the selected structure is less than 6 %, which is quite satisfactory compared with the GA and ACS algorithms.
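As a sketch of this comparison, the per-sample relative errors of the ANN can be recomputed from the table above (values transcribed verbatim):

```python
# Exact and ANN-estimated collapse load factors for the 20 sample frames
exact = [2.545, 2.667, 1.825, 2.554, 1.907, 2.081, 1.353, 1.195, 1.152, 1.040,
         0.914, 1.037, 2.205, 1.033, 1.469, 0.968, 1.218, 0.836, 1.000, 0.333]
ann   = [2.536, 2.667, 1.825, 2.554, 1.910, 2.080, 1.356, 1.200, 1.148, 1.039,
         0.940, 1.027, 2.200, 1.030, 1.445, 0.967, 1.218, 0.830, 0.990, 0.350]

# Absolute relative error in percent for each sample frame
errors = [100.0 * abs(a - e) / e for a, e in zip(ann, exact)]
worst = max(errors)  # ≈ 5.1 %, occurring for sample frame 20
```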
Values of effective parameters for sample 21 of example 2
Sample frame | l | h | F_{1} | F_{2} | F_{3} | F_{4} | F_{5} | F_{6} | F_{7} | F_{8} | F_{9} | M_{1} | M_{2} | M_{3} | M_{4} | M_{5} | M_{6} |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21 | 2.4 | 1.7 | 9.2 | 12 | 10 | 8.1 | 9.4 | 7.2 | 9.5 | 7.5 | 5.4 | 6.4 | 5.9 | 5.2 | 4.9 | 3.8 | 3.9 |
The exact collapse load factor for the 21st sample frame is 0.864, and the factors estimated by ANN, GA and ACS are 0.872, 0.972 and 0.867, respectively. The error in the collapse load factor obtained by GA is 12.5 %, while the errors from ACS and ANN are 0.348 and 0.926 %, respectively. The ACS algorithm clearly performs best here, and the performance of the ANN is comparable.
Mean error for ANN, GA and ACS algorithms applied to all samples of example 2 and CPU time for these algorithms to calculate the collapse load factor for sample 21
Algorithm | Mean error (%) | CPU time (s) |
---|---|---|
ANN | 0.74 | 10.3 |
GA | 7.45 | 5.7 |
ACS | 2.15 | 8.5 |
8 Conclusions
In this work, the performance of three optimization algorithms, namely GA, ACS and ANN, has been studied through detailed numerical examples. The collapse load factors of sample frames have been computed by all three algorithms, and the results have been compared with the exact values calculated using the greedy algorithm. It was observed that if the parameters of the respective algorithms are adjusted carefully, collapse load factors of acceptable accuracy can be obtained within reasonable computational time. It was also discussed that, in most practical cases of interest, GA operates faster but less accurately than ACS, and it is possible to propose a hybrid GA/ACS algorithm that trades off accuracy against speed. In such an algorithm, the initial solution obtained by GA can be improved by ACS.
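The two-stage structure of such a hybrid can be illustrated on a toy one-dimensional minimization problem. The sketch below is purely illustrative: the objective, the minimal real-coded GA and the improvement-only local search standing in for ACS are all hypothetical stand-ins, not the algorithms used in this work.

```python
import random

def objective(x):
    # Toy stand-in for the quantity minimized in plastic analysis
    # (e.g. the load factor of a mechanism combination); minimum 0.5 at x = 1.234
    return (x - 1.234) ** 2 + 0.5

def ga_stage(pop_size=20, gens=30, lo=-10.0, hi=10.0):
    """Coarse global search: a minimal elitist real-coded GA."""
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]  # truncation selection (elitist)
        children = [
            0.5 * (random.choice(parents) + random.choice(parents))  # crossover
            + random.gauss(0.0, 0.5)                                 # mutation
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return min(pop, key=objective)

def acs_like_refinement(x0, iters=200, step=0.5):
    """Improvement-only stochastic local search standing in for the ACS stage."""
    best = x0
    for _ in range(iters):
        cand = best + random.gauss(0.0, step)
        if objective(cand) < objective(best):
            best = cand
        step *= 0.99  # shrink the search neighbourhood over time
    return best

random.seed(0)
coarse = ga_stage()                      # fast, rough GA solution
refined = acs_like_refinement(coarse)    # local refinement never worsens it
```

Because the refinement stage only accepts improving moves, the hybrid result is never worse than the GA seed, which is the essence of the proposed leverage between speed and accuracy.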
The artificial neural networks employed in this work have been selected so as to obtain a desired performance. Many network structures have been tested and trained, and those with the least average error have been used to estimate the load factors of the various frames. Reviewing the errors yielded by different structures clearly shows that a single structure cannot serve all frame configurations: it may lead to acceptable results for certain frames, but unpredictable results should be expected for others. Hence, the importance of using different structures for different frames, trained specifically for those frames, becomes obvious. Selecting a structure for the ANN and training it for the parameters of a given frame, as done in this work, is a trial-and-error process, and it is hard to propose an automated procedure for finding an optimum structure and then training it. The numerical examples show that for certain frame configurations the GA and ACS algorithms yield good approximations to the correct collapse load factors, while for other configurations the results are not as satisfactory. This contrasts with the behavior of an ANN whose structure is specifically selected and trained for a given frame; for such an ANN, the computational error is expected not to exceed a desired limit. Another distinguishing property of ANNs is that, once trained for the parameters of a specified frame, they can quickly compute the load factors for a rather wide range of those parameters, while the GA and ACS algorithms have to repeat the same sort of operations every time the parameters of a given frame are modified.
As a final note on the behavior of the aforementioned algorithms, it is worth mentioning that, since the method of combination of elementary mechanisms is based on the minimum principle, the exact collapse load factor is a lower bound to the collapse load factors estimated by the GA and ACS algorithms. This was also observed in the numerical examples presented in this work: the load factors obtained by these algorithms are always greater than, or at best equal to, the correct collapse load factor. However, the exact collapse load factor is neither an upper nor a lower bound to the collapse load factors computed by the ANN; the load factor estimated by the ANN is sometimes lower and sometimes higher than the exact value. This is because the output of the greedy algorithm is merely fed to the training algorithm to obtain the best possible performance for the ANN; in the form presented in this work, the training algorithm does not take advantage of the information concerning the underlying fundamental mechanisms. Naturally, the ANN is unable to provide a collapse mechanism associated with an output collapse load factor. Hence, if the collapse mechanism is required for certain applications, it is advisable to use an algorithm that yields the correct collapse mechanism, the greedy algorithm for example, at the cost of additional computational effort.
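This bound property can be checked directly against the tabulated results of example 2; the lists below transcribe the table of exact and estimated load factors for the 20 sample frames:

```python
# Exact load factors (greedy) and the three estimates, example 2
exact = [2.545, 2.667, 1.825, 2.554, 1.907, 2.081, 1.353, 1.195, 1.152, 1.040,
         0.914, 1.037, 2.205, 1.033, 1.469, 0.968, 1.218, 0.836, 1.000, 0.333]
ga    = [2.625, 2.702, 2.083, 2.700, 2.097, 2.308, 1.439, 1.320, 1.222, 1.067,
         0.914, 1.143, 2.205, 1.033, 1.578, 0.968, 1.371, 0.853, 1.182, 0.409]
acs   = [2.629, 2.710, 1.861, 2.569, 1.907, 2.081, 1.355, 1.226, 1.222, 1.067,
         0.914, 1.143, 2.214, 1.033, 1.469, 0.968, 1.371, 0.858, 1.000, 0.333]
ann   = [2.536, 2.667, 1.825, 2.554, 1.910, 2.080, 1.356, 1.200, 1.148, 1.039,
         0.940, 1.027, 2.200, 1.030, 1.445, 0.967, 1.218, 0.830, 0.990, 0.350]

# Kinematic estimates never undershoot the exact collapse load factor ...
ga_bounded  = all(g >= e for g, e in zip(ga, exact))
acs_bounded = all(a >= e for a, e in zip(acs, exact))
# ... while ANN predictions fall on both sides of it
ann_two_sided = (any(a < e for a, e in zip(ann, exact)) and
                 any(a > e for a, e in zip(ann, exact)))
```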
As mentioned previously, it would be more advantageous to design an ANN that provides both the collapse mechanism and the associated collapse load factor. In such an ANN, the neurons of the input layer can still be associated with the parameters of a given frame, while the output layer can comprise as many neurons as the number of members of that frame (instead of a single neuron giving the collapse load factor). The ANN can then be trained for the rotations of the members of each fundamental mechanism and will thus be able to provide an estimate of the actual collapse mechanism. The collapse load factor can be computed either by equating the internal virtual work done at the plastic hinges to the external virtual work performed by the point loads, or by providing an additional output neuron devoted solely to the collapse load factor. In the latter case, the ANN is trained both for the rotations of the individual members of the fundamental mechanisms and for the associated collapse load factors. Of course, in such an approach, combined mechanisms can be used in training the network with the aim of diversifying the search space.
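To make the virtual-work computation concrete, consider the textbook fixed-base portal frame (a standard illustration, not one of the examples of this work): uniform plastic moment M_p, columns of height h, beam of span l, horizontal load λH at beam level and vertical load λV at midspan. Equating internal work at the plastic hinges to external work for each mechanism and taking the minimum gives the collapse load factor:

```python
def portal_collapse_load(Mp, V, H, l, h):
    """Collapse load factor of a fixed-base portal frame with uniform plastic
    moment Mp, by the kinematic (upper-bound) method: each mechanism's load
    factor equates internal virtual work at hinges to external virtual work,
    and the minimum over the mechanisms governs."""
    lam_beam = 8.0 * Mp / (V * l)                 # hinges: beam ends + midspan
    lam_sway = 4.0 * Mp / (H * h)                 # hinges: four column ends
    lam_comb = 6.0 * Mp / (H * h + V * l / 2.0)   # beam + sway, one hinge cancels
    mechanisms = {"beam": lam_beam, "sway": lam_sway, "combined": lam_comb}
    governing = min(mechanisms, key=mechanisms.get)
    return governing, mechanisms[governing]

# Example: Mp = 1, V = 2, H = 1, l = 4, h = 3 -> combined mechanism, λ = 6/7
mech, lam = portal_collapse_load(1.0, 2.0, 1.0, 4.0, 3.0)
```

This is exactly the minimum principle invoked above: every mechanism yields an upper bound on λ, so any heuristic that searches over mechanism combinations can only overestimate the exact collapse load factor.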