# Meta-heuristic bus transportation algorithm


## Abstract

Over recent decades, several experience-based mathematical models have been proposed. In addition to collective intelligence, recent efforts have applied human experience-based intelligence, opening up new possibilities for designing meta-heuristic algorithms for NP problems. In these algorithms, instead of relying only on collective intelligence, each individual searches for an optimal solution based on both his or her own experience and that of others. In this paper, we take the social behavior of humans using public transportation to reach their destinations, together with the experience-based behavior and smartness of humans, as the inspiration for a new meta-heuristic algorithm, which we name the bus transportation algorithm. As a simple application, we restrict this paper to solving a well-known integer programming problem, which is NP-complete. The results in this paper show that our algorithm outperforms PSO (particle swarm optimization), GA (genetic algorithm) and SA (simulated annealing) in terms of efficiency and convergence.

## Keywords

Meta-heuristic algorithm · Human intelligence · Bus transportation algorithm · Intelligent algorithms · Empiricism

## 1 Introduction

The confrontation of rationalism and empiricism is a philosophical debate that philosophers have written about for centuries. As a result of this confrontation, some invaluable concepts emerged in René Descartes' philosophy, such as analytical geometry [2]. These arguments are controversial, and we will not discuss them here. Instead, we borrow the underlying idea: it can be inferred that if we can build a simple model that reflects experience, it should be beneficial. Meta-heuristic algorithms bear witness to this, being based on animal behaviors and natural phenomena.

Around thirty years ago, collective intelligence was introduced by Gerardo Beni and Jing Wang, and it has since become an important topic in artificial intelligence. In recent years the concept has evolved considerably. As an example, consider the ant colony algorithm: each ant as an individual has simple abilities and follows simple rules, yet together the ants form an intelligent algorithm. Complex neural networks can be seen as a more elaborate example.

The point is that, in such algorithms, the interaction between individuals results in complex behavior, which paves the way to converging to the global optimum [3, 4, 5]. In addition, there exist relationships between different local optima that can also help. Moreover, relying on properties that humans possess, namely being social, eloquent, sentimental and purposeful, should help us design better algorithms. Accordingly, in the past few years some algorithms along these lines have been proposed [1, 6].

With this background, we introduce the bus transportation algorithm (BTA), inspired by the civic behavior of humans and their use of public transport. In cities, buses help people reach their destinations. Humans, buses and stations are the components of the general algorithm. We equip our agents, the humans, with experience-based and random behaviors, and the main idea is to load and unload passengers until we reach an equilibrium, which is likely to be the global optimum. Most humans know how to reach their destination while a few do not, and this idea can be applied to optimization problems. Stations are another important part of the algorithm: they act as storage that provides the information gained so far, from which we obtain local optima. The effectiveness of the final algorithm depends, in turn, on the structure of the transportation system, and we cannot guarantee that our construction is maximally efficient. The approach is a metaheuristic that relies not only on the experience gained over time but also, perhaps most importantly, on the construction itself; in this sense it is based on empiricism.

In this paper, we introduce the general algorithm and then apply it to an integer programming problem whose variables are restricted to zero or one. To build a transportation network for this problem, we assume intelligent passengers who can interact with each other; for this interaction we apply the simple human learning optimization algorithm (SHLO) introduced in [1], augmented with a simple heuristic function that helps the agents decide. We limit the number of buses to one, and at each step we update the transportation system's state (the variables, the objective function and the gathered experience) based on the number of passengers in each station, and the procedure continues. The problem is NP-complete, so an efficient exact algorithm is unlikely, and even approximation algorithms such as Balas' [7, 8] are not necessarily efficiently computable as the problem size grows. Hence we may use heuristic and metaheuristic algorithms to solve it [9, 10].

## 2 Preliminaries

In this section, we review the components used to design the algorithm that we developed for the integer programming problem.

### 2.1 Definition of the problem

We consider a zero-one integer programming problem in which *m* is the number of constraints and *n* is the number of variables, i.e., each of the *n* variables must take the value 0 or 1 subject to *m* constraints. In this research, some concepts such as the simple human learning optimization algorithm (SHLO), taboo search and improvement of position (IMPRO) were used, directly or indirectly, to develop the algorithm.
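As a concrete illustration of this problem class, the sketch below evaluates a binary assignment under a hypothetical 0-1 integer program (minimize \(c^{T}x\) subject to \(Ax \le b\), \(x \in \{0,1\}^n\)); this encoding is our assumption for illustration, since the paper's displayed formulation does not survive here.

```python
import numpy as np

def evaluate(x, c, A, b):
    """Objective value of a binary assignment x, or None if infeasible.

    Hypothetical encoding: minimize c @ x subject to A @ x <= b, x in {0,1}^n.
    """
    if np.any(A @ x > b):
        return None          # violates at least one of the m constraints
    return float(c @ x)

# toy instance: n = 3 variables, m = 2 constraints
c = np.array([3.0, 1.0, 2.0])
A = np.array([[1, 1, 0],
              [0, 1, 1]])
b = np.array([1, 1])
print(evaluate(np.array([1, 0, 1]), c, A, b))  # → 5.0 (feasible)
print(evaluate(np.array([1, 1, 0]), c, A, b))  # → None (first constraint violated)
```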

### 2.2 Simple human learning optimization algorithm (SHLO) [1]

#### 2.2.1 Random learning operator

#### 2.2.2 Individual learning operator

Here, *L* is the maximum number of experiences storable for each person, *N* is the number of agents, and *M* is related to the space needed to store the information.

#### 2.2.3 Social learning operator

*H* is simply a constant indicating how many experiences we can store.
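The three SHLO operators above can be sketched roughly as follows; the probability thresholds `pr` and `pi` and the bit-wise selection scheme are our assumptions based on the general description of SHLO in [1], not the paper's exact formulas (which do not appear here).

```python
import random

def shlo_step(x, ikd, skd, pr=0.1, pi=0.45):
    """One SHLO update of a binary solution x (a list of 0/1 bits).

    ikd: this agent's best solution so far (individual knowledge);
    skd: one row of the population's best solutions (social knowledge).
    For each bit, with probability pr it is set randomly (random learning),
    with probability pi - pr it is copied from ikd (individual learning),
    and otherwise from skd (social learning).
    """
    new = []
    for j in range(len(x)):
        r = random.random()
        if r < pr:                    # random learning operator
            new.append(random.randint(0, 1))
        elif r < pi:                  # individual learning operator
            new.append(ikd[j])
        else:                         # social learning operator
            new.append(skd[j])
    return new

x = [0, 1, 0, 1]
print(shlo_step(x, ikd=[1, 1, 0, 0], skd=[0, 0, 1, 1]))
```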

### 2.3 Taboo search

Taboo search is a general neighborhood-based search method introduced by Glover in 1986. The method investigates neighbors that are not on the taboo list and then updates the taboo list according to its strategies, which means that nodes recently investigated in the search space are unlikely to be investigated again soon [11, 12].
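A minimal sketch of the loop just described, with a short-term memory of recently visited solutions; the tenure length, iteration budget and bit-flip neighborhood are illustrative choices, not taken from the paper.

```python
from collections import deque

def tabu_search(x0, objective, neighbors, tenure=5, iters=100):
    """Generic tabu search over binary vectors (tuples of 0/1 bits).

    Recently visited solutions go onto a fixed-length tabu list so the
    search does not immediately revisit them, as in Glover's scheme [11, 12].
    """
    tabu = deque(maxlen=tenure)       # short-term memory of recent solutions
    best = current = x0
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=objective)   # best admissible neighbor
        tabu.append(current)
        if objective(current) < objective(best):
            best = current
    return best

def flip_neighbors(x):
    """All solutions at Hamming distance 1 from x."""
    return [x[:j] + (1 - x[j],) + x[j + 1:] for j in range(len(x))]

# toy objective: number of ones (minimum at the all-zero vector)
best = tabu_search((1, 1, 1, 0), objective=sum, neighbors=flip_neighbors)
print(best)  # → (0, 0, 0, 0)
```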

### 2.4 Improvement of position (IMPRO)

In this algorithm, agents try to improve their own social welfare. Each person checks whether some modifications of his or her own properties reveal patterns that improve welfare. We use only this idea, to improve our social learning operator: since each variable in this integer programming problem can only be zero or one, the idea can be applied directly to binary strings. We can thus improve each entry of the SKD matrix.
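Our reading of this use of IMPRO on binary strings can be sketched as a greedy bit-flip pass over one SKD row; the exact update rule in [6] may differ, so treat this as an assumption.

```python
def improve_row(row, objective, feasible):
    """Greedy IMPRO-style pass over one SKD row (a list of 0/1 bits).

    Flips each bit in turn and keeps the flip only if it yields a feasible
    solution with a strictly better (lower) objective value.
    """
    best_val = objective(row)
    for j in range(len(row)):
        row[j] ^= 1                         # try flipping bit j
        if feasible(row) and objective(row) < best_val:
            best_val = objective(row)       # keep the improvement
        else:
            row[j] ^= 1                     # undo the flip
    return row

# toy run: minimize the number of ones with no feasibility restriction
print(improve_row([1, 1, 0], objective=sum, feasible=lambda r: True))  # → [0, 0, 0]
```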

## 3 General bus transportation algorithm

As indicated in the introduction, metaheuristic algorithms and experience-based models open up new possibilities for solving NP problems. Considering humans as agents may be a good choice: humans extend their understanding through experience and through random responses that are creative, and we pay attention to these properties. Early humans did not deliberate much when facing their primitive problems, but as ages passed a systematic approach came to govern human life. We follow a similar approach, using transportation as the coordinator. Some passengers know how to reach their destination and some do not: the former choose the right bus and travel directly to their destination, while the others try to find it based on their own experience and that of others. A passenger alights from the bus when he or she guesses the destination is near, and then boards another bus to go elsewhere. At each station we check whether the people present are near their destination, and passengers may not travel to stations that would contradict the feasibility of the solution. This procedure recurs until the passengers are near (or exactly at) their destinations, which in optimization terms means we have reached a satisfactory local optimum; this is detected by observing that the passengers are reluctant to change their positions.
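The loading/unloading cycle described above can be summarized in a skeleton; the decision rules are deliberately left as problem-specific placeholders (the names `near_destination` and `move` are ours, not the paper's).

```python
def bus_transportation_algorithm(passengers, near_destination, move, max_rounds=1000):
    """Skeleton of the general BTA loop (our paraphrase of the description).

    passengers: the agents (decision variables in the optimization reading);
    near_destination(p): True once passenger p is reluctant to move;
    move(p): one load/unload of p, updating p and any shared station memory.
    Returns the number of rounds taken to reach equilibrium.
    """
    for round_no in range(max_rounds):
        moving = [p for p in passengers if not near_destination(p)]
        if not moving:
            return round_no                 # equilibrium: a likely optimum
        for p in moving:                    # load only the undecided passengers
            move(p)                         # ...and unload them at new stations
    return max_rounds                       # budget exhausted without settling

# toy run: each "passenger" counts down toward its destination at v == 0
passengers = [{"v": 2}, {"v": 0}, {"v": 1}]
rounds = bus_transportation_algorithm(passengers,
                                      near_destination=lambda p: p["v"] == 0,
                                      move=lambda p: p.update(v=p["v"] - 1))
print(rounds)  # → 2
```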

## 4 Solving the integer programming problem

In this section, we apply the algorithm to a well-known case of integer programming, an NP-complete problem; this is the part that combines the previous concepts. To model the network, we consider the variables as passengers that communicate to help each other decide whether zero or one is the appropriate value. We take four groups of stations, each group containing two stations, so we have eight stations in total that each passenger could visit, although more could be considered. The first station in each group belongs to zeros and the second to ones. The groups are described in the following paragraphs.

*Constant Stations Group (CSG)* This group holds variables whose values we are sure of. The first station belongs to variables that must be zero and the second to variables that must be one. These variables are no longer checked, so omitting them saves time.

*Short-Term Stations Group (STSG)* Here we are not sure whether the variables should be zero or one, so they must be kept at hand and monitored. One good aspect of the algorithm is that we only load variables belonging to this group and ignore the other passengers. To omit the others, we update the problem by assigning the guessed values to those variables, obtaining a smaller integer programming problem; in transportation terms, the bus does not pick them up. This lowers complexity and avoids repetition.

*Mid-Term Stations Group (MTSG)* This group holds variables that are somewhere in the middle: we are not really sure about them, but we will decide whether they should go back to the short-term stations or move on to the long-term ones.

*Long-Term Stations Group (LTSG)* These variables have more or less reached an equilibrium and are likely to have the right assignment.

Each bus is a processing unit that carries some passengers and transports them to stations. Parallel processing is conceivable, but in this paper we limit the number of buses to one for simplicity, since the variables are not really independent.
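The four groups can be kept as simple index sets. The bookkeeping below is our own hypothetical sketch: it collapses each group's zero/one station pair into one set except for CSG, where the zero/one distinction matters, and shows the STSG variable-fixing step that shrinks the residual problem.

```python
# Hypothetical bookkeeping for the station groups. Each variable index
# lives in exactly one set; only STSG passengers board the bus each round.
groups = {
    "CSG0": set(),   # fixed to zero
    "CSG1": set(),   # fixed to one
    "STSG": set(),   # undecided: loaded on the next bus
    "MTSG": set(),   # tentatively decided, still under review
    "LTSG": set(),   # near equilibrium, assignment likely final
}

def fix_variable(j, value, groups):
    """Move variable j from STSG to the constant station for `value`,
    shrinking the residual problem as described for STSG above."""
    groups["STSG"].discard(j)
    groups["CSG1" if value else "CSG0"].add(j)

groups["STSG"].update({0, 1, 2})
fix_variable(1, 1, groups)
print(sorted(groups["STSG"]), sorted(groups["CSG1"]))  # → [0, 2] [1]
```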

The decision for each variable *j* uses the score written below, where \(\alpha \) and \(\beta \) are numbers in [0, 1] chosen carefully to lead toward a local optimum.

Finally, we add another option: choosing based on the SKD matrix, which is related to social learning. The SKD matrix stores the best solutions, and for each acceptable solution we follow the simple pattern used in IMPRO [6] to obtain more acceptable solutions. For each good acceptable solution, we try to find neighbors that are likely to be the final solution: we fix a constant number and, for each \(k=1\) up to this number, choose *k* bits at random, generate acceptable neighbors, and update the SKD matrix whenever a neighbor is better than one of its rows.
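The k-bit neighbor generation for updating the SKD matrix might look like the sketch below; the feasibility test and the SKD update policy are left abstract, and the interface is our assumption.

```python
import random

def skd_neighbors(row, k_max, feasible):
    """For each k = 1 .. k_max, flip k distinct random bits of an SKD row
    and keep the candidate if it is feasible, following the pattern above."""
    neighbors = []
    for k in range(1, k_max + 1):
        candidate = row[:]
        for j in random.sample(range(len(row)), k):   # k distinct bit positions
            candidate[j] ^= 1                         # flip bit j
        if feasible(candidate):
            neighbors.append(candidate)
    return neighbors

print(skd_neighbors([1, 0, 1, 0], k_max=2, feasible=lambda r: True))
```

Each candidate returned for a given *k* differs from the row in exactly *k* bit positions, since *k* distinct indices are flipped once each.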

The algorithm terminates in one of two cases:

- 1.
All variables have moved to the long-term stations group or the constant stations, because we have investigated enough and the probability of success is high.

- 2.
An input parameter denotes the maximum number of loads/unloads in the algorithm; once this number is exceeded, we cannot continue. If, for a sufficiently large value of this input, the first condition is not met, we can infer that the search is trapped in local optima. It is then appropriate to restart the algorithm from the beginning, and parallel computing may be worth considering in this situation.
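Under the hypothetical index-set bookkeeping for the station groups (a mapping from station names to sets of variable indices, which is our assumption), the two stopping conditions can be checked as follows.

```python
def should_stop(groups, n_vars, n_loads, max_loads):
    """Check the two stopping conditions listed above.

    groups: maps station names to sets of variable indices (hypothetical).
    Returns "converged" when every variable has settled into the constant or
    long-term stations, "restart" when the load/unload budget is exhausted,
    and None while the algorithm should continue.
    """
    settled = groups["CSG0"] | groups["CSG1"] | groups["LTSG"]
    if len(settled) == n_vars:
        return "converged"              # condition 1: all variables settled
    if n_loads >= max_loads:
        return "restart"                # condition 2: budget exhausted
    return None

print(should_stop({"CSG0": {0}, "CSG1": {1}, "LTSG": {2}}, 3, 10, 100))  # → converged
```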

Table 1 The parameters of each algorithm

Algorithm | Parameters
---|---
BTA | SSG \(=\) 6, STSG \(=\) 8, MTSG \(=\) 4, LSSG \(=\) 8, ZLG \(=\) 2, OLG \(=\) 2
SA | \(T_{\mathrm{initialize}} = 100\), \(\varDelta T = 0.1\), reduction factor of temperature (TFT) = 0.1, repeat number at each temperature (RNAT) = 150
GA | Number of iterations = ITER = 1500; number of variables = ; number for each variable = BS = [100 100]; length of chromosome = ; minimum of each variable = Lo = 0; maximum of each variable = Hi = 1; population = rand(rand( ; number of population (even) = ; probability of crossover = \(P_\mathrm{c} = 0.6\)
PSO | \(C_1 = 1.5\), \(C_2 = 1.5\), \(W_\mathrm{MIN} = 0.1\), \(W_\mathrm{MAX} = 0.9\), \(V_\mathrm{MIN} = 4\), \(V_\mathrm{MAX} = 4\)

## 5 Evaluation of algorithm

In this section we compare the BTA with the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO). Although we have not established whether PSO or SA are the best among meta-heuristics, we compare against them because they are well known.

We show in the following that BTA outperforms PSO, GA and SA in terms of efficiency and convergence, approaching the global optimum in fewer steps or generations.

### 5.1 Simulations and results

The problem used for the comparison contains 100 variables and 50 constraints. Table 1 lists the parameters of each algorithm.

#### 5.1.1 Results

Table 2 The values of the objective function after the first step and after 100 generations

Algorithm | Value of objective function after the first step | Value of objective function at the end (after 100 generations) | Difference between the value at the 100th generation and the value at the beginning of convergence
---|---|---|---
BTA with 100 variables and 50 constraints | 0.593 | 0.490 | 0.145
SA with 100 variables and 50 constraints | 0.954 | 0.897 | 0.48
GA with 100 variables and 50 constraints | 0.738 | 0.682 | 0.316
PSO with 100 variables and 50 constraints | 0.802 | 0.761 | 0.41
BTA with 150 variables and 100 constraints | 0.612 | 0.578 | 0.377
SA with 150 variables and 100 constraints | 0.976 | 0.952 | 0.553
GA with 150 variables and 100 constraints | 0.771 | 0.749 | 0.481
PSO with 150 variables and 100 constraints | 0.868 | 0.846 | 0.609
BTA with 200 variables and 150 constraints | 0.662 | 0.581 | 0.340
SA with 200 variables and 150 constraints | 0.987 | 0.966 | 0.567
GA with 200 variables and 150 constraints | 0.831 | 0.780 | 0.512
PSO with 200 variables and 150 constraints | 0.884 | 0.893 | 0.656

Table 2 shows the values of the objective function at the first step (generation). These values are compared with those of the final generation, and the difference shows the transition from first to last, indicating how the algorithm moves toward optimality from its initial responses. As can be seen, in BTA the objective value at the 100th generation is 0.145 away from the final answer (the beginning of convergence), whereas for GA, SA and PSO the distances are 0.316, 0.48 and 0.41 respectively. So, from the beginning, our algorithm converges better than the others. For BTA with 150 variables and 100 constraints, and with 200 variables and 150 constraints, the distances are 0.377 and 0.340 respectively, versus 0.481 and 0.512 for GA, 0.553 and 0.567 for SA, and 0.609 and 0.656 for PSO. This shows that as the problem grows more complex, the gap between the 100th generation and the beginning of convergence grows, yet BTA still retains its convergence advantage.

The following table shows the number of generations each of the four algorithms needs to find the final answer (the beginning of convergence); this information complements the previous results. BTA requires fewer generations, and this monotone behavior is valuable because it also indicates stability (Table 3).

Table 3 The generations needed to reach the final answer (beginning of convergence)

Algorithm | Generations | Normalized value of objective function in the last response
---|---|---
BTA with 100 variables and 50 constraints | 629 | 0.345
SA with 100 variables and 50 constraints | 1023 | 0.417
GA with 100 variables and 50 constraints | 716 | 0.366
PSO with 100 variables and 50 constraints | 723 | 0.351
BTA with 150 variables and 100 constraints | 811 | 0.201
SA with 150 variables and 100 constraints | 1729 | 0.399
GA with 150 variables and 100 constraints | 1023 | 0.268
PSO with 150 variables and 100 constraints | 1329 | 0.237
BTA with 200 variables and 150 constraints | 956 | 0.198
SA with 200 variables and 150 constraints | 2012 | 0.378
GA with 200 variables and 150 constraints | 1430 | 0.235
PSO with 200 variables and 150 constraints | 1671 | 0.206

Table 4 The running time of the algorithms after 200, 300 and 500 iterations and when they begin to converge

Algorithm | Average time per iteration (ms) | Total running time (ms)
---|---|---
BTA with 100 variables and 50 constraints (200 iterations) | 28.56 | 5712
SA with 100 variables and 50 constraints (200 iterations) | 30.075 | 6015
GA with 100 variables and 50 constraints (200 iterations) | 29.01 | 5802
PSO with 100 variables and 50 constraints (200 iterations) | 28.99 | 5798
BTA with 200 variables and 150 constraints (200 iterations) | 41.68 | 8336
SA with 200 variables and 150 constraints (200 iterations) | 43.686 | 8737.2
GA with 200 variables and 150 constraints (200 iterations) | 42.099 | 8419.8
PSO with 200 variables and 150 constraints (200 iterations) | 42.031 | 8406.2
BTA with 100 variables and 50 constraints (300 iterations) | 28.51 | 8553
SA with 100 variables and 50 constraints (300 iterations) | 30.081 | 9024.3
GA with 100 variables and 50 constraints (300 iterations) | 28.92 | 8676
PSO with 100 variables and 50 constraints (300 iterations) | 28.97 | 8691
BTA with 200 variables and 150 constraints (300 iterations) | 41.59 | 12,477
SA with 200 variables and 150 constraints (300 iterations) | 43.72 | 13,116
GA with 200 variables and 150 constraints (300 iterations) | 42.016 | 12,604.8
PSO with 200 variables and 150 constraints (300 iterations) | 42.001 | 12,600.3
BTA with 100 variables and 50 constraints (500 iterations) | 28.702 | 14,351
SA with 100 variables and 50 constraints (500 iterations) | 30.036 | 15,018
GA with 100 variables and 50 constraints (500 iterations) | 29.008 | 14,504
PSO with 100 variables and 50 constraints (500 iterations) | 28.86 | 14,430
BTA with 200 variables and 150 constraints (500 iterations) | 41.62 | 20,810
SA with 200 variables and 150 constraints (500 iterations) | 43.71 | 21,855
GA with 200 variables and 150 constraints (500 iterations) | 42.039 | 21,019.5
PSO with 200 variables and 150 constraints (500 iterations) | 42.021 | 21,010.5

Student's *t* test evaluates and compares the mean of a sample with the mean of the population when the population's standard deviation is unknown. Because the t distribution for small samples is moderated by the degrees of freedom, the test can be used when the sample size is small. The test is also applicable when the standard error of the population is unknown but that of the sample is known. To use it, the variables under study must be normally distributed. For the evaluation of BTA we used 30 independent runs on one problem with 200 variables and 150 constraints. As Fig. 11 illustrates, BTA shows the least fluctuation while SA shows the most. Figure 11 also indicates that, as the size of the problem grows, the algorithm, although not as stable as on smaller problems, remains reliable to a fair extent (Fig. 12).
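The statistic behind this test can be computed in a few lines; the run values below are invented for illustration and are not the paper's 30-run data.

```python
from math import sqrt
from statistics import mean, stdev

def t_statistic(sample, mu0):
    """One-sample Student t statistic: distance of the sample mean from a
    reference mean mu0, measured in estimated standard errors."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / sqrt(n))

# illustrative objective values from repeated runs (NOT the paper's data)
runs = [0.58, 0.60, 0.57, 0.59, 0.61, 0.58]
print(t_statistic(runs, mu0=0.60))
```

The resulting statistic would then be compared against the t distribution with \(n-1\) degrees of freedom at the chosen significance level.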

## 6 Future work

This idea could be applied to a wide range of problems, such as continuous problems or, with some modifications, nonlinear problems. We also hope this approach can help in machine learning, image processing and other applications, since the algorithm relies on experience, discipline, randomness and smartness, and, most importantly, it requires a proper modeling procedure.

## 7 Conclusion

We designed an algorithm that worked better than other well-known algorithms, and the key was human experience. A determined network of buses combined with human experience yielded a simple, implementable algorithm that works fast and is stable, as shown with Student's *t* test. We implemented all the algorithms in MATLAB and observed that ours works more efficiently than SA, GA and PSO. We do not claim the algorithm is the best choice, but this general algorithm has a lot of flexibility and could be applied to various problems. We looked at meta-heuristics in a different way and came up with a new approach that can still be improved. A potential question is whether we can improve the algorithm by increasing the number of stations or buses, or by having more advanced and intelligent passengers; these require more space and add complexity. One may imagine a city where all people are smart enough to decide well and there are many buses, airports and so forth, but complexity and simplicity must be balanced. In conclusion, many factors affect the algorithm both positively and negatively, and we hope the algorithm can help solve more complicated conceptual problems in which the search process must be conducted innovatively.

## Notes

### Acknowledgements

We thank our professors Dr. Mohammadebrahim Shiri Ahamadabadi and Dr. Farzad Didehvar, who provided insight that assisted the research. The idea began in their classes, and we developed it until reaching this point.

## References

- 1. Wang, L., Ni, H., Yang, R., Fei, M., Ye, W.: A simple human learning optimization algorithm. In: International Conference on Life System Modeling and Simulation and International Conference on Intelligent Computing for Sustainable Energy and Environment, pp. 56–65. Springer, Berlin (2014)
- 2. Boyer, C.B.: History of Analytic Geometry. Courier Corporation, Chelmsford (2012)
- 3. Roy, S., Biswas, S., Chaudhuri, S.S.: Nature-inspired swarm intelligence and its applications. Int. J. Mod. Educ. Comput. Sci. 6(12), 55 (2014)
- 4. Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York (1999)
- 5. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 26(1), 29–41 (1996)
- 6. Azar, A., Seyedmirzaee, S.: Providing new meta-heuristic algorithm for optimization problems inspired by humans' behavior to improve their positions. Int. J. Artif. Intell. Appl. 4(1), 1 (2013)
- 7. Balas, E.: An additive algorithm for solving linear programs with zero-one variables. Oper. Res. 13(4), 517–546 (1965)
- 8. Balas, E., Zemel, E.: An algorithm for large zero-one knapsack problems. Oper. Res. 28(5), 1130–1154 (1980)
- 9. Dréo, J., Pétrowski, A., Siarry, P., Taillard, E.: Metaheuristics for Hard Optimization: Methods and Case Studies. Springer Science & Business Media, Berlin (2006)
- 10. Moraga, R.J., DePuy, G.W., Whitehouse, G.E.: Metaheuristics: A Solution Methodology for Optimization Problems. Handbook of Industrial and Systems Engineering. CRC Press, Boca Raton (2006)
- 11. Glover, F., Laguna, M.: Tabu search. In: Pardalos, P., Du, D.Z., Graham, R. (eds.) Handbook of Combinatorial Optimization, pp. 3261–3362. Springer, New York (2013)
- 12. Glover, F.: Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13(5), 533–549 (1986)