
Iran Journal of Computer Science, Volume 2, Issue 1, pp 23–32

Meta-heuristic bus transportation algorithm

  • Mohammad Bodaghi
  • Koosha Samieefar
Original Article

Abstract

Over recent decades, several experience-based mathematical models have been proposed. In addition to collective intelligence, recent efforts have applied human experience-based intelligence to open up new possibilities for designing meta-heuristic algorithms that solve NP problems. In these algorithms, instead of relying only on collective intelligence, each individual searches for an optimal solution based on his or her own experience and that of others. In this paper, we draw on a social behavior of humans, the use of public transportation to reach a destination, and take experience-based human behavior and human smartness as the inspiration for a meta-heuristic algorithm that we name the bus transportation algorithm. As a simple example, we restrict the paper to solving zero-one integer programming, a well-known NP-Complete problem. The results in this paper show that our algorithm outperforms PSO (particle swarm optimization), GA (genetic algorithm) and SA (simulated annealing) in terms of efficiency and convergence.

Keywords

Meta-heuristic algorithm · Human intelligence · Bus transportation algorithm · Intelligent algorithms · Empiricism

1 Introduction

The confrontation of rationalism and empiricism is a philosophical debate about which many philosophers have written over the centuries. Out of this confrontation some invaluable concepts emerged in René Descartes' philosophy, such as analytic geometry [2]. These arguments are controversial and we are not going to discuss them here. Instead, we can borrow the underlying idea: if we can build a simple model that reflects experience, it should be beneficial. Meta-heuristic algorithms bear witness to this, being based on animal behaviors and natural phenomena.

Around thirty years ago, swarm intelligence was introduced by Gerardo Beni and Jing Wang, and it remains an important topic in artificial intelligence. In recent years the concept has evolved considerably. The ant colony algorithm is one example: each ant, as an individual, has only simple abilities and follows simple rules, yet together the ants form an intelligent algorithm. Complex neural networks can be considered a more elaborate example.

The point is that, in such algorithms, interaction between individuals produces complex behavior that paves the way toward the global optimum [3, 4, 5]. In addition, there exist relationships between different local optima that help as well. Moreover, relying on properties that humans possess, namely being social, eloquent, sentimental and purposeful, can help us design better algorithms. Accordingly, several such algorithms have been proposed in recent years [1, 6].

With this background, we introduce the bus transportation algorithm (BTA), inspired by the civic behavior of humans and their use of public transport. In cities, buses help people reach their destinations. Humans, buses and stations are the components of the general algorithm. We give our agents, the humans, some experience-based and random behaviors, and the main idea is to load and unload passengers until we reach an equilibrium that is likely to be the global optimum. Most humans know how to reach their destination while a few do not, and this idea can be applied to optimization problems. Stations are another important part of the algorithm: they act as storage that provides the information gathered so far, so from them we can read off local optima. The effectiveness of the final algorithm depends, in turn, on the structure of the transportation system, and clearly we cannot guarantee that our construction is completely efficient. This approach is simply a better-equipped meta-heuristic that relies not only on the experience gained over time; the construction itself perhaps plays the most important part. In that sense, it is based on empiricism.

In this paper, we introduce the general algorithm and then apply it to an integer programming problem whose variables are restricted to zero or one. To build a transportation network for this problem, we assume intelligent passengers who can interact with each other, and to this end we apply the simple human learning optimization algorithm (SHLO) introduced in [1]. We add one feature, a simple heuristic function that helps the agents decide. We limit the number of buses to one and, in each step, update the state of the transportation system (the variables, the objective function and the experience gathered) based on the number of passengers in each station, and the procedure continues. The problem is NP-Complete, so an efficient exact algorithm is unlikely, and even approximation algorithms such as Balas' [7, 8] are not necessarily efficiently computable as the size of the problem grows. Therefore, heuristic and meta-heuristic algorithms are a natural choice [9, 10].

2 Preliminaries

In this section, we review the components used to design the algorithm we developed for the integer programming problem.

2.1 Definition of the problem

Zero-one integer programming is an NP-Complete problem and one of the simplest cases of integer programming. The problem is defined as follows, written in two equivalent ways:
$$\begin{aligned} \max z &= \sum \limits _{j=1}^{n} c_j x_j\\ \text {s.t.}\quad &\sum \limits _{j=1}^{n} a_{ij} x_j \le b_i \quad (i = 1, 2, \ldots , m)\\ &x_j = 0 \text { or } 1 \quad (j = 1, 2, \ldots , n) \end{aligned}$$

$$\begin{aligned} \max z &= c_1 x_1 + c_2 x_2 + \cdots + c_n x_n\\ &a_{11} x_{1} + a_{12} x_{2} + \cdots + a_{1n} x_{n} \le b_{1}\\ &a_{21} x_{1} + a_{22} x_{2} + \cdots + a_{2n} x_{n} \le b_{2}\\ &\qquad \vdots \\ &a_{m1} x_{1} + a_{m2} x_{2} + \cdots + a_{mn} x_{n} \le b_{m}\\ &x_j = 0 \text { or } 1 \quad (j = 1, 2, \ldots , n) \end{aligned}$$
(1)
where m is the number of constraints and n the number of variables. This work uses several concepts, namely the simple human learning optimization algorithm (SHLO), taboo search and improvement of position (IMPRO), some directly and some indirectly, to develop the algorithm.
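As a concrete illustration of problem (1), a candidate 0/1 vector can be scored and checked for feasibility as follows. This is our own minimal sketch, not the authors' code; the names `evaluate`, `c`, `A` and `b` are ours:

```python
import numpy as np

def evaluate(x, c, A, b):
    """Return (z, feasible) for max z = c.x subject to A x <= b, x in {0,1}^n.

    A sketch for illustration only: feasibility is a plain check of every
    constraint, and z is the raw objective value.
    """
    x = np.asarray(x)
    feasible = bool(np.all(A @ x <= b))
    z = float(c @ x)
    return z, feasible

# Tiny hypothetical instance: max 3*x1 + 2*x2 subject to x1 + x2 <= 1
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
```

Setting both variables to one here maximizes the raw objective but violates the single constraint, which is exactly the tension the meta-heuristic has to manage.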

2.2 Simple human learning optimization algorithm (SHLO) [1]

The basis of this algorithm is experienced and random behavior; with a few innovations, agents accumulate experience. Each person is encoded as a binary string representing that individual's status. At each step, SHLO chooses among three types of learning: random learning, individual learning and social learning. We use this algorithm with some simple modifications:
$$\begin{aligned} x_i = \begin{bmatrix} x_{i1} &{} x_{i2} &{} \ldots &{} x_{ij} &{} \ldots &{} x_{in} \end{bmatrix}, \quad x_{ij} \in \{0, 1\},\quad 1 \le i \le m, \quad 1 \le j \le n. \end{aligned}$$
(2)

2.2.1 Random learning operator

In this type of learning, used at first when no one has any experience, the algorithm sets the binary string randomly to zeros and ones:
$$\begin{aligned} x_{ij} = \text {Rand}(0, 1) = {\left\{ \begin{array}{ll} 0, &{} 0 < \text {rand} \le 0.5\\ 1, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
(3)
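Eq. (3) translates almost directly into code; a minimal sketch (the function name is ours):

```python
import random

def random_learning(n, rng=random):
    """Eq. (3): draw rand in (0, 1); bit is 0 if rand <= 0.5, else 1."""
    return [0 if rng.random() <= 0.5 else 1 for _ in range(n)]
```

This operator supplies the initial binary strings before any experience exists, and it remains available later as the escape hatch from premature convergence.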

2.2.2 Individual learning operator

This operator stores the feasible experiences of each person i in the corresponding row of the individual knowledge database (IKD), where each row is itself a matrix of strings; more precisely:
$$\begin{aligned} \text {IKD} = \begin{bmatrix} \text {ikd}_{1}\\ \text {ikd}_{2}\\ \vdots \\ \text {ikd}_{i}\\ \vdots \\ \text {ikd}_{N}\\ \end{bmatrix}, \quad 1 \le i \le N, \qquad \text {ikd}_i = \begin{bmatrix} \text {ikd}_{i11} &{} \text {ikd}_{i12} &{} \ldots &{} \text {ikd}_{i1j} &{} \ldots &{} \text {ikd}_{i1M}\\ \text {ikd}_{i21} &{} \text {ikd}_{i22} &{} \ldots &{} \text {ikd}_{i2j} &{} \ldots &{} \text {ikd}_{i2M}\\ \vdots &{} \vdots &{} &{} \vdots &{} &{} \vdots \\ \text {ikd}_{iL1} &{} \text {ikd}_{iL2} &{} \ldots &{} \text {ikd}_{iLj} &{} \ldots &{} \text {ikd}_{iLM} \end{bmatrix}. \end{aligned}$$
(4)
In the matrix, L is the maximum number of experiences that can be stored for each person, N is the number of agents and M is the space needed to store the information.

2.2.3 Social learning operator

This operator stores feasible experiences gained by the group in the social knowledge database (SKD) matrix, whose rows are binary strings representing the information obtained by the group of agents:
$$\begin{aligned} \text {SKD} = \begin{bmatrix} \text {skd}_{1}\\ \text {skd}_{2}\\ \vdots \\ \text {skd}_{q}\\ \vdots \\ \text {skd}_{H}\\ \end{bmatrix} = \begin{bmatrix} \text {skd}_{11} &{} \text {skd}_{12} &{} \ldots &{} \text {skd}_{1j} &{} \ldots &{} \text {skd}_{1M}\\ \text {skd}_{21} &{} \text {skd}_{22} &{} \ldots &{} \text {skd}_{2j} &{} \ldots &{} \text {skd}_{2M}\\ \vdots &{} \vdots &{} &{} \vdots &{} &{} \vdots \\ \text {skd}_{q1} &{} \text {skd}_{q2} &{} \ldots &{} \text {skd}_{qj} &{} \ldots &{} \text {skd}_{qM}\\ \vdots &{} \vdots &{} &{} \vdots &{} &{} \vdots \\ \text {skd}_{H1} &{} \text {skd}_{H2} &{} \ldots &{} \text {skd}_{Hj} &{} \ldots &{} \text {skd}_{HM} \end{bmatrix} \end{aligned}$$
(5)
where H is a constant indicating how many experiences can be stored.
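The two SKD operations, copying a bit from a stored group solution and keeping only the H best entries, can be sketched as follows. This is our own reading, with hypothetical names; feasibility filtering is assumed to happen before `update_skd` is called:

```python
import random

def social_learning_bit(skd, j, rng=random):
    """Social learning: copy bit j from a randomly chosen SKD row."""
    row = rng.choice(skd)
    return row[j]

def update_skd(skd, candidate, fitness, H):
    """Insert a (pre-checked, feasible) candidate and keep at most the
    H best rows, sorted by fitness in descending order."""
    skd.append(candidate)
    skd.sort(key=fitness, reverse=True)
    del skd[H:]
    return skd
```

With H small, the SKD acts as a short elite archive that the social learning operator samples from.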

2.3 Taboo search

Taboo search is a general neighborhood-based search method introduced by Glover in 1986. The method investigates neighbors that are not on the taboo list and then updates the list according to its strategies, so that recently visited nodes of the search space are unlikely to be revisited soon [11, 12].
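A minimal sketch of that scheme, with a fixed-length taboo list of recently visited solutions (the function names and the simple "best non-taboo neighbor" move rule are our own, not the paper's variant):

```python
from collections import deque

def tabu_search(x0, neighbors, score, iters=100, tenure=5):
    """Each step: move to the best neighbor not on the taboo list;
    recently visited solutions stay taboo for `tenure` steps."""
    best = cur = tuple(x0)
    tabu = deque([cur], maxlen=tenure)
    for _ in range(iters):
        cand = [n for n in neighbors(cur) if n not in tabu]
        if not cand:
            break
        cur = max(cand, key=score)      # best admissible move (may be downhill)
        tabu.append(cur)
        if score(cur) > score(best):
            best = cur
    return best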

2.4 Improvement of position (IMPRO)

In this algorithm, agents try to improve their own social welfare. Each person checks whether some modifications of his or her own properties reveal patterns that improve welfare. We only use this idea to improve our social learning operator: since each variable in this integer programming problem can only be zero or one, the idea applies directly to binary strings, and we can improve each entry of the SKD matrix.
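Restricted to binary strings, the improvement idea reduces to a single greedy bit-flip pass; a sketch of our reading of it (the name `improve` is ours):

```python
def improve(x, score):
    """One greedy pass over a binary string: flip each bit once and
    keep the flip only if it improves the score."""
    x = list(x)
    for j in range(len(x)):
        trial = x[:]
        trial[j] ^= 1           # flip bit j
        if score(trial) > score(x):
            x = trial           # keep the improving flip
    return tuple(x)
```

Applying this to each SKD row is a cheap way to polish the group's stored solutions without changing the rest of the algorithm.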

3 General bus transportation algorithm

As indicated in the introduction, meta-heuristic algorithms and experience-based models open up new possibilities for solving NP problems. Considering humans as agents may well be the best choice, since humans extend their understanding through experience and through random, creative responses, and these are the properties we exploit. Early humans usually did not think much when dealing with the primitive problems they faced, but as ages passed, a systematic approach came to govern people's lives. We follow a similar path and use transportation as the coordinator of our algorithm. Some passengers know how to reach their destination and some do not: the former choose the right bus and travel directly to their destination, while the others try to find it based on their own experience and that of others. Passengers tend to alight when they guess they are near their destination and then take another bus elsewhere. In each station we check whether the people present are near their destination, and people may not travel to stations that would contradict the feasibility of the solution. This procedure recurs until the passengers are near (or exactly at) their destinations, which in the optimization problem means we have reached a local optimum that is acceptable to us; this can be detected by the passengers' reluctance to change position.

More precisely, to solve a problem with this algorithm we must first determine what the buses, stations and passengers are, and how many of each there should be. We then need a transportation system that plausibly drives the solution toward the optimum. As mentioned above, stations are the places where the current solution can be examined, people commute until they reach their destinations, and these movements are governed by one or more learning algorithms that control the network. The general algorithm is described in Fig. 1.
Fig. 1

General bus transportation algorithm

4 Solving the integer programming problem

In this section, we apply the algorithm to a well-known NP-Complete case of integer programming; this is the part that combines the previous concepts. To model the network, we treat the variables as passengers that communicate to help each other decide whether zero or one is appropriate. We take four groups of stations, each containing two stations, so in total there are eight stations a passenger may visit, although more could be used. The first station in each group belongs to zeros and the second to ones. The groups are described in the following paragraphs.

Constant Stations Group (CSG) In this group, we are sure the variables should be set to zero or one: the first station holds variables that must be zero, and the second holds variables that we are sure must be one. These variables are no longer checked, so they can be omitted to save time.

Short-Term Stations Group (STSG) Here we are not yet sure whether the variables should be zero or one, so they must be kept at hand and monitored. One advantage of the algorithm is that we only load variables belonging to this group and ignore the other passengers. To omit the others, we substitute the guessed values into the problem, obtaining a new integer programming problem; in bus transportation terms, the bus simply does not pick them up. This lowers the complexity and avoids repetition.

Mid-Term Stations Group (MTSG) In this group, we place variables that are somewhere in the middle: we are not really sure about them, but we will decide whether they should go back to the short-term stations or move on to the long-term ones.

Long-Term Stations Group (LTSG) These variables have more or less reached an equilibrium and are likely to have the right assignment.

Each bus is a processing unit that carries some passengers and transports them to the stations. Parallel processing is certainly conceivable, but in this paper we limit the number of buses to one for simplicity, since the variables are not really independent.
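One way to represent the station groups is a dictionary mapping each group to its zero-station and one-station. This is purely our own sketch: the paper only states explicitly that an STSG passenger may move to STSG or MTSG, so the rest of the `ALLOWED` move table below is a hypothetical simplification:

```python
def make_network():
    """Four groups, each with a station for variables guessed 0 and
    a station for variables guessed 1 (sets of variable indices)."""
    return {g: {0: set(), 1: set()} for g in ("CSG", "STSG", "MTSG", "LTSG")}

# Hypothetical allowed moves (only STSG -> {STSG, MTSG} is from the paper).
ALLOWED = {
    "STSG": {"STSG", "MTSG"},
    "MTSG": {"STSG", "LTSG"},
    "LTSG": {"LTSG", "CSG"},
    "CSG": {"CSG"},
}

def move(net, var, src, dst, value):
    """Transport variable `var` from group src to group dst with the
    guessed bit `value`; disallowed moves raise an error."""
    if dst not in ALLOWED[src]:
        raise ValueError(f"{src} -> {dst} not allowed")
    for v in (0, 1):
        net[src][v].discard(var)
    net[dst][value].add(var)
```

Variables that reach CSG are then substituted into the problem and never loaded again, matching the "bus does not pick them up" rule above.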

As mentioned before, we use simple human learning optimization, but instead of the IKD matrix we add a simple heuristic function that boosts the convergence of the algorithm. It can be seen as a simple merit function, the kind of criterion one might apply when facing the problem without a machine. Like all simple heuristics, it suffers from local optima; to escape them, we choose randomly between assigning a value based on this function and assigning a random value, which can only be zero or one. The function for each variable j is written below, where \(\alpha \) and \(\beta \) are numbers in [0, 1] chosen carefully to guide the search:
$$\begin{aligned} F_j = \alpha \left( \dfrac{c_j}{\sum _{t=1}^{n} c_t}\right) - \beta \left( \max _i \left( \dfrac{a_{ij}}{b_i}\right) \right) . \end{aligned}$$
(6)
One remaining problem is that this function lies in \([-1,1]\), so we rescale it to remain in [0, 1]:
$$\begin{aligned} F'_j = \frac{F_j + 1}{2}. \end{aligned}$$
(7)
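Eqs. (6) and (7) can be computed in a vectorized way; a sketch under our own naming, with the maximum in Eq. (6) taken over the constraints i for each variable j:

```python
import numpy as np

def heuristic(c, A, b, alpha=0.5, beta=0.5):
    """Eqs. (6)-(7): reward a variable's share of the objective, penalise
    its tightest constraint usage, then rescale from [-1, 1] to [0, 1]."""
    gain = c / c.sum()                    # c_j / sum_t c_t
    cost = (A / b[:, None]).max(axis=0)   # max over constraints i of a_ij / b_i
    F = alpha * gain - beta * cost        # Eq. (6)
    return (F + 1.0) / 2.0                # Eq. (7)
```

A variable with a large objective coefficient and light constraint usage gets a value near one, nominating it for assignment to one.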
Assignment based on the function works as follows: variables whose function value exceeds \(\frac{1}{2}\) are set to one, and the rest to zero. If fewer than 10 percent of the variables exceed \(\frac{1}{2}\), we repeat the check with threshold \(\frac{1}{3}\), and so on, until a fixed threshold \(\frac{1}{t}\) is reached. This helps avoid the repetition that loads and unloads would otherwise cause, since after some steps certain variables are settled and we know which of the remaining ones should be checked for being one. If there are only a few acceptable solutions, we are unlikely to find one soon; if there are many, we can increase \(\alpha \), which means paying more attention to the objective function. Likewise, we can increase \(\beta \) to find more feasible solutions. We increase or decrease \(\alpha \) and \(\beta \) by 10 percent when needed, and continue until suitable values are found. One note of caution: we must keep these parameters from exceeding one; to do so, we take a weighted average of the current value (weight two) and a random number (weight one), repeating the operation if necessary. For simplicity, whenever we increase or decrease one parameter by 10 percent, we apply the opposite operation to the other.
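The relaxing-threshold rule can be sketched as a short loop; this is our reading of the procedure (the function name, the `t_max` bound standing in for the constant \(1/t\), and the exact tie-breaking are our own assumptions):

```python
def threshold_assign(Fp, t_max=5):
    """Set to one the variables whose normalized heuristic value F' exceeds
    1/2; if fewer than 10% qualify, relax the threshold to 1/3, 1/4, ...
    down to 1/t_max, then assign."""
    n = len(Fp)
    ones = []
    for t in range(2, t_max + 1):
        ones = [j for j, f in enumerate(Fp) if f > 1.0 / t]
        if len(ones) >= 0.1 * n:
            break
    chosen = set(ones)
    return [1 if j in chosen else 0 for j in range(n)]
```

The progressively lower threshold guarantees the heuristic always nominates a reasonable fraction of variables instead of stalling.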

Finally, we add a third option: choosing based on the SKD matrix, which corresponds to social learning. The SKD matrix stores the best solutions, and for each acceptable solution we follow the simple pattern used in IMPRO [6] to generate further acceptable solutions. For each good acceptable solution, we look for neighbors that are likely to be the final solution: we fix a constant and, for each k from 1 up to this constant, choose k bits at random, generate acceptable neighbors, and update the SKD matrix whenever a neighbor is better than one of its rows.
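The k-bit neighbor generation can be sketched as a generator; our own naming, with `k_max` standing in for the unspecified constant:

```python
import random

def neighbors_by_flips(x, k_max, rng=random):
    """For k = 1 .. k_max, flip k randomly chosen distinct bits of x and
    yield the resulting neighbor (candidates for refreshing the SKD)."""
    n = len(x)
    for k in range(1, k_max + 1):
        idx = rng.sample(range(n), k)   # k distinct positions
        y = list(x)
        for j in idx:
            y[j] ^= 1
        yield tuple(y)
```

Each yielded neighbor would then be checked for feasibility and, if it beats an SKD row, inserted in its place.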

So, in conclusion, we choose randomly among these three options. The algorithm stops under either of two conditions:

  1. All variables have reached the long-term or constant stations groups, because then we have investigated enough and the probability of success is high.

  2. An input parameter bounds the maximum number of loads/unloads, and exceeding it stops the algorithm. If, for a sufficiently large bound, the first condition was never met, we can infer that we are trapped in local optima; it is then appropriate to restart the algorithm from the beginning, and in this situation parallel computing becomes an attractive option.
The accompanying pseudocode (not reproduced in this version) shows briefly how the algorithm works.
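As a rough stand-in for that pseudocode, the overall loop might be sketched as follows. Everything here is our own simplification: the uniform choice among the three operators, the settling rule that retires variables when no improvement occurs, and all names (`bta`, `assign`, `social`) are hypothetical, not the authors' code:

```python
import random

def bta(n, score, assign, social, max_moves=1000, rng=random):
    """High-level sketch of the bus transportation algorithm loop:
    each round, every variable still riding the bus picks its value by a
    random choice among random learning, the heuristic threshold rule
    (`assign`), and social learning from the SKD (`social`); variables
    gradually settle into the long-term/constant stations."""
    x = [rng.randint(0, 1) for _ in range(n)]   # random-learning start
    riding = set(range(n))                      # variables not yet settled
    best, best_x = score(x), x[:]
    for _ in range(max_moves):                  # stop condition 2: move budget
        if not riding:
            break                               # stop condition 1: all settled
        for j in list(riding):
            op = rng.choice(("random", "heuristic", "social"))
            if op == "random":
                x[j] = rng.randint(0, 1)
            elif op == "heuristic":
                x[j] = assign(j)                # threshold rule on F'_j
            else:
                x[j] = social(j)                # copy bit j from an SKD row
        s = score(x)
        if s > best:
            best, best_x = s, x[:]
        else:
            # no improvement: settle roughly 10% of the riders (our own rule)
            riding = {j for j in riding if rng.random() < 0.9}
    return best_x, best
```

In the real algorithm `score` would also enforce feasibility and the settling decision would depend on the station groups rather than on chance.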
Figure 2 demonstrates the transportation network and shows how the algorithm works. Possible movements are shown by dotted lines between the stations; in other words, the bus transports passengers only to allowed destinations. For example, a passenger in the STSG group can only go to the MTSG or STSG groups.
Fig. 2

Transportation network in integer programming problem

Table 1

The parameters

Algorithm | Parameters
BTA | SSG \(=\) 6, STSG \(=\) 8, MTSG \(=\) 4, LSSG \(=\) 8, ZLG \(=\) 2, OLG \(=\) 2
SA | \(T_{\mathrm{initialize}} = 100\), \(\varDelta T = 0.1\), reduction factor of temperature (TFT) = 0.1, repeat number at any temperature (RNAT) = 150
GA | number of iterations = ITER = 1500; number of variables = m = 100, 150 or 200; number for each variable = BS = [100 100]; length of chromosome = L = sum(BS); minimum of each variable = Lo = 0; maximum of each variable = Hi = 1; population = rand(rand(NL)); number of population (even) = n = 2000; probability of crossover = \(P_\mathrm{c} = 0.6\)
PSO | \(C_1 = 1.5\), \(C_2 = 1.5\), \(W_\mathrm{MIN} = 0.1\), \(W_\mathrm{MAX} = 0.9\), \(V_\mathrm{MIN} = 4\), \(V_\mathrm{MAX} = 4\)

5 Evaluation of algorithm

In this section, we compare the BTA with the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO). Although it is not established that PSO or SA are the best among meta-heuristics, we compare against them because they are well known.

We show below that BTA outperforms PSO, GA and SA in terms of efficiency and convergence, because it gets near the global optimum in fewer steps or generations.

5.1 Simulations and results

The problem used for the comparison contains 100 variables and 50 constraints. Table 1 lists the parameters of each algorithm.

5.1.1 Results

We investigated the simulation results in different ways, and they show that this algorithm works better than the others.
Table 2

The values of the objective function in the first step

Algorithm (variables/constraints) | Value after the first step | Value at the end (after 100 generations) | Difference between the 100th-generation value and the value at the beginning of convergence
BTA (100/50) | 0.593 | 0.490 | 0.145
SA (100/50) | 0.954 | 0.897 | 0.48
GA (100/50) | 0.738 | 0.682 | 0.316
PSO (100/50) | 0.802 | 0.761 | 0.41
BTA (150/100) | 0.612 | 0.578 | 0.377
SA (150/100) | 0.976 | 0.952 | 0.553
GA (150/100) | 0.771 | 0.749 | 0.481
PSO (150/100) | 0.868 | 0.846 | 0.609
BTA (200/150) | 0.662 | 0.581 | 0.340
SA (200/150) | 0.987 | 0.966 | 0.567
GA (200/150) | 0.831 | 0.780 | 0.512
PSO (200/150) | 0.884 | 0.893 | 0.656

Table 2 lists the values of the objective function in the first step (generation). These are compared with the final generation, and the difference shows the transition from first to last, i.e., how the algorithm moves toward optimality from its first responses. As the table shows, for the BTA the difference between the objective value after 100 generations and the value at the beginning of convergence is 0.145, whereas for GA, SA and PSO the numbers are 0.316, 0.48 and 0.41, respectively. So from the beginning the algorithm converges better than the others. For BTA with 150 variables and 100 constraints and with 200 variables and 150 constraints, the numbers are 0.377 and 0.340, respectively, while for GA we have 0.481 and 0.512, for SA 0.553 and 0.567, and for PSO 0.609 and 0.656. This shows that as the problem becomes more complex, although the difference between the 100th-generation value and the beginning of convergence grows, the better convergence remains.

The diagrams below show how these algorithms converge for 100 variables and 50 constraints, starting from the first generation. BTA not only moves toward the answer in an almost monotone way, it also reaches the final answer after only 629 generations, whereas SA, GA and PSO require 1023, 716 and 723 generations to converge. Figures 7 through 10 show the same comparison with 200 variables and 150 constraints: although more generations are required overall, the BTA still needs fewer of them and follows a monotone path (Figs. 3, 4, 5, 6, 7, 8, 9, 10).
Fig. 3

The path which BTA converges to the answer where we have 100 variables and 50 constraints in integer programming problem

Fig. 4

The path which SA converges to the answer where we have 100 variables and 50 constraints in integer programming problem

Fig. 5

The path which GA converges to the answer where we have 100 variables and 50 constraints in integer programming problem

Fig. 6

The path which PSO converges to the answer where we have 100 variables and 50 constraints in integer programming

Fig. 7

The path which BTA converges to the answer where we have 200 variables and 150 constraints in integer programming

Fig. 8

The path which SA converges to the answer where we have 200 variables and 150 constraints in integer programming

Fig. 9

The path which GA converges to the answer where we have 200 variables and 150 constraints in integer programming

Fig. 10

The path which PSO converges to the answer where we have 200 variables and 150 constraints in integer programming

The following table reports, for each of the four algorithms, the generations needed to find the final answer (beginning of convergence), completing the previous results. BTA requires fewer generations, and in addition its monotone behavior is invaluable since it also showed stability (Table 3).

Table 4 shows the running times of the four algorithms on the integer programming problem; our algorithm requires less time than the others in addition to fewer generations to converge. With 200 variables and 150 constraints, BTA takes 150,510.52 ms less than SA, 68,161.52 ms less than GA and 81,720.22 ms less than PSO.
Table 3

The generations needed to meet the final answer (beginning of the convergence)

Algorithm (variables/constraints) | Generations | Normalized value of the objective function in the last response
BTA (100/50) | 629 | 0.345
SA (100/50) | 1023 | 0.417
GA (100/50) | 716 | 0.366
PSO (100/50) | 723 | 0.351
BTA (150/100) | 811 | 0.201
SA (150/100) | 1729 | 0.399
GA (150/100) | 1023 | 0.268
PSO (150/100) | 1329 | 0.237
BTA (200/150) | 956 | 0.198
SA (200/150) | 2012 | 0.378
GA (200/150) | 1430 | 0.235
PSO (200/150) | 1671 | 0.206

Table 4

The running time of the algorithms after 200, 300 and 500 iterations and when they begin to converge

Algorithm (variables/constraints, iterations) | Average time per iteration (ms) | Total running time (ms)
BTA (100/50, 200 iterations) | 28.56 | 5712
SA (100/50, 200 iterations) | 30.075 | 6015
GA (100/50, 200 iterations) | 29.01 | 5802
PSO (100/50, 200 iterations) | 28.99 | 5798
BTA (200/150, 200 iterations) | 41.68 | 8336
SA (200/150, 200 iterations) | 43.686 | 8737.2
GA (200/150, 200 iterations) | 42.099 | 8419.8
PSO (200/150, 200 iterations) | 42.031 | 8406.2
BTA (100/50, 300 iterations) | 28.51 | 8553
SA (100/50, 300 iterations) | 30.081 | 9024.3
GA (100/50, 300 iterations) | 28.92 | 8676
PSO (100/50, 300 iterations) | 28.97 | 8691
BTA (200/150, 300 iterations) | 41.59 | 12,477
SA (200/150, 300 iterations) | 43.72 | 13,116
GA (200/150, 300 iterations) | 42.016 | 12,604.8
PSO (200/150, 300 iterations) | 42.001 | 12,600.3
BTA (100/50, 500 iterations) | 28.702 | 14,351
SA (100/50, 500 iterations) | 30.036 | 15,018
GA (100/50, 500 iterations) | 29.008 | 14,504
PSO (100/50, 500 iterations) | 28.86 | 14,430
BTA (200/150, 500 iterations) | 41.62 | 20,810
SA (200/150, 500 iterations) | 43.71 | 21,855
GA (200/150, 500 iterations) | 42.039 | 21,019.5
PSO (200/150, 500 iterations) | 42.021 | 21,010.5

Student's t test compares the mean of a sample with the mean of the population when the population's standard deviation is unknown. Because the t distribution for small samples is adjusted by the degrees of freedom, the test can be used when the sample is small; it is also applicable when the population's standard error is unknown but the sample's is known. To use it, the variables under study must be normally distributed on an interval scale. For the evaluation of the BTA we used 30 independent runs on one problem with 200 variables and 150 constraints. As Fig. 11 illustrates, BTA shows the least fluctuation while SA shows the most. The t test results (Fig. 12) indicate that when the size of the problem grows, the algorithm, although not as stable as on smaller problems, remains reasonably reliable.
Fig. 11

The result of each single independent run (30 runs with 200 variables and 150 constraints)

Fig. 12

Student’s t test of the BTA algorithm (at 30 runs)

6 Future work

This idea could be applied to a wide range of problems, such as continuous problems or, with some modifications, nonlinear problems. We may also hope that the approach helps in machine learning, image processing and other applications, since the algorithm relies on experience, discipline, randomness and smartness and, more importantly, requires a proper modeling procedure.

7 Conclusion

In summary, we designed an algorithm that worked better than other well-known algorithms, and the key was human experience. A fixed network of buses combined with human experience yielded a simple, implementable algorithm that works fast and is stable, as shown by Student's t test. We implemented all of the algorithms in MATLAB and observed that ours is more efficient than SA, GA and PSO. We do not claim that the algorithm is the best choice, but the general algorithm is flexible and can be applied to various problems. We looked at meta-heuristics in a different way and arrived at a new approach that can still be improved. A natural question is whether the algorithm can be improved by increasing the number of stations or buses, or even by using more advanced and intelligent passengers; this would need more space and add complexity. One may imagine a city where all people are smart enough to decide well and there are many buses, airports and so forth, but complexity and simplicity must be balanced. In conclusion, many things can affect the algorithm both positively and negatively, and we hope it can help solve more complicated, conceptual problems where the search must be done innovatively.

Notes

Acknowledgements

We thank our professors Dr. Mohammadebrahim Shiri Ahamadabadi and Dr. Farzad Didehvar, who provided insight that assisted the research. The idea began in their classes and we developed it until reaching this point.

References

  1. Wang, L., Ni, H., Yang, R., Fei, M., Ye, W.: A simple human learning optimization algorithm. In: International Conference on Life System Modeling and Simulation and International Conference on Intelligent Computing for Sustainable Energy and Environment, pp. 56–65. Springer, Berlin (2014)
  2. Boyer, C.B.: History of Analytic Geometry. Courier Corporation, Chelmsford (2012)
  3. Roy, S., Biswas, S., Chaudhuri, S.S.: Nature-inspired swarm intelligence and its applications. Int. J. Mod. Educ. Comput. Sci. 6(12), 55 (2014)
  4. Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York (1999)
  5. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 26(1), 29–41 (1996)
  6. Azar, A., Seyedmirzaee, S.: Providing new meta-heuristic algorithm for optimization problems inspired by humans' behavior to improve their positions. Int. J. Artif. Intell. Appl. 4(1), 1 (2013)
  7. Balas, E.: An additive algorithm for solving linear programs with zero-one variables. Oper. Res. 13(4), 517–546 (1965)
  8. Balas, E., Zemel, E.: An algorithm for large zero-one knapsack problems. Oper. Res. 28(5), 1130–1154 (1980)
  9. Dréo, J., Pétrowski, A., Siarry, P., Taillard, E.: Metaheuristics for Hard Optimization: Methods and Case Studies. Springer Science & Business Media, Berlin (2006)
  10. Moraga, R.J., DePuy, G.W., Whitehouse, G.E.: Metaheuristics: A Solution Methodology for Optimization Problems. Handbook of Industrial and Systems Engineering. CRC Press, Boca Raton (2006)
  11. Glover, F., Laguna, M.: Tabu search. In: Pardalos, P., Du, D.Z., Graham, R. (eds.) Handbook of Combinatorial Optimization, pp. 3261–3362. Springer, New York (2013)
  12. Glover, F.: Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13(5), 533–549 (1986)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Amirkabir University of Technology, Tehran, Iran
