A collaborative neurodynamic optimization algorithm for the traveling salesman problem

This paper proposes a collaborative neurodynamic optimization (CNO) method to solve the traveling salesman problem (TSP). First, we construct a Hopfield neural network (HNN) with $n \times n$ neurons for the n cities. Second, to ensure the convergence of the continuous HNN (CHNN), we reformulate the TSP to satisfy the convergence condition of the CHNN and solve the TSP with the CHNN. Finally, a population of CHNNs is used to search for local optimal solutions of the TSP, and the globally optimal solution is obtained using particle swarm optimization. Experimental results show the effectiveness of the CNO approach for solving the TSP.


Introduction
The traveling salesman problem (TSP) is to find a route that visits each city exactly once and returns to the starting city; the best route is a feasible route of minimum total distance over a given city list. The TSP is a classic combinatorial optimization problem, and the related optimization theory also applies to similar problems, including the quadratic assignment problem and the scheduling problem [1]. It is well known that the TSP is an NP-hard optimization problem and has been studied extensively [2-5]. Classic optimization methods, including nearest neighborhood search, simulated annealing, and genetic algorithms, have been proposed to solve the TSP.
(Jing Zhong, Yuelei Feng, Shuyu Tang, Jiang Xiong, Xiangguang Dai, and Nian Zhang have contributed equally to this work.)
Over the past decade, optimization methods based on neural networks have emerged to solve optimization problems. Hopfield [6] first used networks of interconnected neurons as a powerful computational model for complex problems. In his seminal paper, two types of Hopfield neural network models (the continuous HNN and the discrete HNN) were proposed. These two models have been used to solve linear programming problems and combinatorial optimization problems [7-9]. Since then, numerous neural network models have been developed for various optimization problems, including linear and nonlinear programming [7, 10-13], generalized convex optimization (e.g., [14, 15]), minimax optimization (e.g., [16]), distributed optimization (e.g., [17]), and combinatorial optimization (e.g., [18]).
Because of the computational complexity of the TSP, the above-mentioned neural network methods easily fall into local solutions. Recently, collaborative neurodynamic optimization (CNO) approaches have become popular for solving combinatorial optimization problems [19-21]. Compared with traditional neural networks, CNO can search for the global solution of a given problem. In CNO, several neurodynamic models running in parallel search for local solutions of the optimization problem, and the search is repeated with reinitialized states until the global solution is reached. Theory and experiments have been presented to prove the convergence of CNO approaches and their effectiveness in finding the global optima of combinatorial optimization problems [22].
In this paper, a CNO method based on continuous Hopfield networks (CHNs) is proposed for solving the TSP. First, we reformulate the TSP into a quadratic unconstrained binary optimization (QUBO) problem [23] by converting the equality constraints into penalty functions. Second, we use a population of CHNs to search for local solutions of the TSP. Third, we reinitialize the initial states of each CHN by employing particle swarm optimization (PSO) and repeat the second step until the global solution of the TSP is reached. The contributions of this paper are:

• Combining CHNs and PSO, this paper proposes a CNO algorithm to search for the global solution of the TSP.
• Experimental results on four benchmark datasets demonstrate the superior performance of the CNO approach over the existing TSP algorithms based on CHNs.

Continuous Hopfield network
A continuous Hopfield network (CHN) evolves according to

$$v_i(t) = g(u_i(t)), \quad i = 1, \dots, n, \quad (1)$$

where t and v ∈ [0, 1]^n denote time and the state vector, respectively. The internal state u evolves according to

$$\frac{du}{dt} = Tv + I, \quad (2)$$

where T ∈ R^{n×n} and I ∈ R^{n×1} denote a symmetric weight matrix and a bias vector, respectively. The activation g(u_i) of Eq. (1) is

$$g(u_i) = \frac{1}{2}\left(1 + \tanh\frac{u_i}{u_0}\right), \quad (3)$$

where u_0 is a positive constant. For the synchronous CHN to converge, two conditions must be satisfied. First, no neuron may have self-feedback (T_ii = 0). Second, the connection weights between neurons must be symmetric (T_ij = T_ji).
In general, the energy function [24] of the CHN is described by

$$E = -\frac{1}{2} v^{T} T v - I^{T} v. \quad (4)$$

The neuron states can be updated in two modes, asynchronous or synchronous: in the asynchronous mode each neuron v_i is updated sequentially, while the synchronous mode updates all neurons simultaneously. Both modes have been studied extensively in [6, 25-27]; in this paper, we use the synchronous mode. The matrix T in Eq. (4) must satisfy two conditions: (1) its diagonal elements must be zero; (2) it must be symmetric.
The initial value of v is chosen randomly, so the CHN reaches different local optimal solutions from different initial values. In other words, the CHN alone cannot search for a global optimal solution. In the following subsection, we introduce particle swarm optimization to search for a global optimal solution.
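As an illustration of the dynamics and energy above, the following Python sketch performs one synchronous Euler step of the CHN and evaluates the energy of Eq. (4). The step size dt and gain u_0 are illustrative choices, not values from the paper.

```python
import numpy as np

def chn_step(u, T, I, dt=0.01, u0=1.0):
    """One synchronous Euler step of the CHN dynamics du/dt = T v + I,
    with the sigmoid activation v = g(u) in [0, 1].
    T must be symmetric with a zero diagonal for convergence."""
    v = 0.5 * (1.0 + np.tanh(u / u0))     # g(u_i)
    u = u + dt * (T @ v + I)              # Euler integration of the dynamics
    return u, v

def energy(v, T, I):
    """Hopfield energy E = -1/2 v^T T v - I^T v, which the dynamics decrease."""
    return -0.5 * v @ T @ v - I @ v
```

Iterating `chn_step` from a random initial state drives the energy downward toward a local minimum, which is exactly the local-search behavior described above.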

Particle swarm optimization
Particle swarm optimization (PSO) is a popular metaheuristic optimization algorithm [28-34], which is often used to solve NP-hard problems. PSO was first proposed by Kennedy and Eberhart [35] and simulates a bird flock searching for food. PSO searches with a population of individuals; each individual, called a particle, changes its position (state) over time. While searching a multidimensional space, each particle adjusts its position (state) by a new velocity computed from its own and its neighbors' flying experience.
Suppose that x and v denote a particle's position (state) and its velocity in the search space, respectively. pbest_i = (pbest_{i1}, ..., pbest_{in}) denotes the best previous position of the ith particle, and gbest is the global best position found by all particles in the swarm. v_i = (v_{i1}, ..., v_{in}) represents the velocity of the ith particle. The velocity and position of each particle are updated by the following formulas:

$$v_{id} = w v_{id} + c_1 r_1 (pbest_{id} - x_{id}) + c_2 r_2 (gbest_{id} - x_{id}), \quad (5)$$

$$x_{id} = x_{id} + v_{id}, \quad (6)$$

where c_1 and c_2 denote the acceleration coefficients, r_1 and r_2 denote random numbers in [0, 1], and w is a positive constant called the inertia weight. However, this update cannot be applied directly to discrete-variable problems [36]. Kennedy and Eberhart [37] proposed a binary PSO algorithm to address this problem. The position of particle x_{id} is updated through a sigmoid of the velocity:

$$s(v_{id}) = \frac{1}{1 + e^{-v_{id}}}, \quad (7)$$

$$x_{id} = \begin{cases} 1 & \text{if } r < s(v_{id}), \\ 0 & \text{otherwise}, \end{cases} \quad (8)$$

where r is a random number in [0, 1] and x_{id}, v_{id}, pbest, and gbest are defined as above. According to Eq. (8), x_{id}, pbest_{id}, and gbest_{id} take the values 0 or 1.
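The velocity update and the sigmoid-based binary position update of Eq. (8) can be sketched in Python as follows. The inertia weight, acceleration coefficients, and the velocity clamp vmax are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bpso_step(x, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=4.0):
    """One binary-PSO update (Kennedy & Eberhart): the velocity update is the
    standard PSO rule; the position is resampled bitwise through a sigmoid."""
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    vel = np.clip(vel, -vmax, vmax)       # keep the sigmoid away from 0/1
    prob = 1.0 / (1.0 + np.exp(-vel))     # s(v_id)
    x = (rng.random(x.shape) < prob).astype(float)
    return x, vel
```

Clamping the velocity is a common practical choice so that each bit retains a nonzero probability of flipping.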

Problem formulation
In [38], the energy function form of the TSP is

$$E = \frac{A}{2}\sum_{x}\sum_{i}\sum_{j \neq i} v_{xi} v_{xj} + \frac{B}{2}\sum_{i}\sum_{x}\sum_{y \neq x} v_{xi} v_{yi} + \frac{C}{2}\Big(\sum_{x}\sum_{i} v_{xi} - n\Big)^{2} + \frac{D}{2}\sum_{x}\sum_{y \neq x}\sum_{i} d_{xy} v_{xi} (v_{y,i+1} + v_{y,i-1}), \quad (9)$$

where A, B, C, and D are positive constants. The first three terms of Eq. (9) are constraints, and the last term is the objective function. Some explanations of Eq. (9):

• The first term vanishes when each row contains at most one 1 (each city is visited at most once).
• The second term vanishes when each column contains at most one 1 (at most one city occupies each position).
• The third term vanishes when the matrix v contains exactly n ones; together with the first two terms, each row and each column then contains exactly one 1.
• The last term measures the total length of the path through the cities. Subject to the first three constraints, a minimizer of Eq. (9) is a local optimal solution.
Note: d_xy denotes the distance between city x and city y, and v_xi (likewise v_yi) denotes whether city x (city y) is visited at position i.
The TSP can be mapped onto the state vector of the neural network and expressed by a permutation matrix. Suppose n cities need to be visited. Each row and each column must contain exactly one 1, and the rest of the entries are zeros. A local optimal solution of the TSP can be expressed by the permutation matrix in Table 1.
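The permutation-matrix encoding can be made concrete with a small Python sketch (the helper names are ours, for illustration only):

```python
import numpy as np

def tour_to_matrix(tour, n):
    """Encode a tour (visiting order of cities) as an n x n permutation
    matrix with v[x, i] = 1 iff city x is visited at position i."""
    v = np.zeros((n, n))
    for i, x in enumerate(tour):
        v[x, i] = 1.0
    return v

def tour_length(v, d):
    """Total length of the tour encoded by v, given the distance matrix d;
    consecutive positions wrap around back to the start."""
    n = v.shape[0]
    return sum(d[x, y] * v[x, i] * v[y, (i + 1) % n]
               for x in range(n) for y in range(n) for i in range(n))
```

For a valid encoding, every row and every column of the matrix sums to 1, matching the constraints described above.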

Problem reformulation
To simplify the form of Eq. (9), Sun and Zheng [39] made some improvements, rewriting Eq. (9) as follows:

$$\min \;\; \sum_{x}\sum_{y \neq x}\sum_{i} d_{xy} v_{xi} v_{y,i+1} \quad (10a)$$

$$\text{s.t.} \;\; \sum_{i} v_{xi} = 1, \quad x = 1, \dots, n, \quad (10b)$$

$$\sum_{x} v_{xi} = 1, \quad i = 1, \dots, n, \quad (10c)$$

$$v_{xi} \in \{0, 1\}, \quad (10d)$$

where d_xy is the distance between cities x and y, n is the number of cities, v_xi = 1 denotes that city x is visited at the ith position, and the position index i + 1 is taken modulo n. Equation (10a) is the total distance of a feasible path, and the constraints (10b) and (10c) state that the salesman enters and leaves each city exactly once. The Euclidean distance is used to measure the distance between cities x and y, so d_xy is symmetric. The constrained problem (10a)-(10d) can be rewritten by the penalty (Lagrange multiplier) method as follows:

$$E(v) = \frac{A}{2}\Big[\sum_{x}\Big(\sum_{i} v_{xi} - 1\Big)^{2} + \sum_{i}\Big(\sum_{x} v_{xi} - 1\Big)^{2}\Big] + \frac{D}{2}\sum_{x}\sum_{y}\sum_{i} d_{xy} v_{xi} v_{y,i+1}, \quad (11)$$

where A and D are positive penalty parameters. The partial derivative of Eq. (11) is expressed as follows:

$$\frac{\partial E}{\partial v_{xi}} = A\Big(\sum_{j} v_{xj} - 1\Big) + A\Big(\sum_{y} v_{yi} - 1\Big) + \frac{D}{2}\sum_{y} d_{xy}\big(v_{y,i+1} + v_{y,i-1}\big). \quad (12)$$
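A minimal NumPy sketch of the penalized energy of Eq. (11) and its partial derivative of Eq. (12), with wrap-around position indices; the default values of A and D are placeholders, not the paper's settings:

```python
import numpy as np

def tsp_energy(v, d, A=500.0, D=1.0):
    """Penalized TSP energy of Eq. (11); position i+1 wraps modulo n."""
    row = v.sum(axis=1) - 1.0              # row constraints (10b)
    col = v.sum(axis=0) - 1.0              # column constraints (10c)
    v_next = np.roll(v, -1, axis=1)        # v[y, i+1]
    obj = 0.5 * D * np.einsum('xy,xi,yi->', d, v, v_next)
    return 0.5 * A * (row @ row + col @ col) + obj

def tsp_gradient(v, d, A=500.0, D=1.0):
    """Partial derivative of Eq. (11), cf. Eq. (12); assumes d symmetric."""
    row = v.sum(axis=1, keepdims=True) - 1.0
    col = v.sum(axis=0, keepdims=True) - 1.0
    neighbours = np.roll(v, -1, axis=1) + np.roll(v, 1, axis=1)
    return A * row + A * col + 0.5 * D * (d @ neighbours)
```

The gradient can be checked against the energy by finite differences, which confirms the two expressions are consistent term by term.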

Algorithmic design
The solution of the TSP is based on the CHN and PSO, and the procedure is as follows: (1) initialize the population (i.e., generate multiple random initial states for the Hopfield neural networks); (2) run each CHN to minimize Eq. (11), using the gradient in Eq. (12) to drive the dynamics, until a local solution is reached; (3) update pbest and gbest and reinitialize the initial state of each CHN with the PSO update; (4) repeat steps (2)-(3) until the termination criterion is met, and return the best solution found.
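The procedure above can be sketched in Python as follows. This is a simplified, hedged rendering and not the authors' exact implementation: each particle holds the initial state of one CHN, the inner loop performs gradient descent on the penalized energy of Eq. (11), candidate solutions are binarized by column-wise argmax, and a PSO velocity update re-seeds the initial states. All parameter values and helper details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def chn_pso_tsp(d, pop=20, outer=30, inner=500, A=500.0, D=1.0,
                dt=1e-4, u0=0.5, w=0.7, c1=1.5, c2=1.5):
    """Hedged sketch of the CHN + PSO loop: inner CHN descent on Eq. (11),
    outer PSO reinitialization of the CHN initial states."""
    n = d.shape[0]
    u = rng.normal(scale=0.1, size=(pop, n, n))      # particle = initial state
    vel = np.zeros_like(u)
    pbest_u, pbest_e = u.copy(), np.full(pop, np.inf)
    gbest_u, gbest_e = u[0].copy(), np.inf
    for _ in range(outer):
        for k in range(pop):
            uk = u[k].copy()
            for _ in range(inner):                    # CHN inner loop
                v = 0.5 * (1 + np.tanh(uk / u0))
                row = v.sum(axis=1, keepdims=True) - 1
                col = v.sum(axis=0, keepdims=True) - 1
                nb = np.roll(v, -1, axis=1) + np.roll(v, 1, axis=1)
                uk -= dt * (A * row + A * col + 0.5 * D * d @ nb)
            v = 0.5 * (1 + np.tanh(uk / u0))
            tour = np.argmax(v, axis=0)               # binarize (may be invalid)
            e = sum(d[tour[i], tour[(i + 1) % n]] for i in range(n))
            if len(set(tour)) == n and e < pbest_e[k]:  # keep only valid tours
                pbest_e[k], pbest_u[k] = e, u[k].copy()
                if e < gbest_e:
                    gbest_e, gbest_u = e, u[k].copy()
        # PSO update of the initial states (outer loop)
        r1, r2 = rng.random(u.shape), rng.random(u.shape)
        vel = w * vel + c1 * r1 * (pbest_u - u) + c2 * r2 * (gbest_u - u)
        u = u + vel
    return gbest_e, gbest_u
```

The returned energy is the length of the best valid tour found (infinite if no valid tour was reached within the budget).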

Experimental setup
In this paper, our proposed CHN_PSO approach is evaluated on att48, ulysses16, ulysses22, and burma14. The parameters of our algorithm are listed in Table 2.
The two parameters N (i.e., the population size) and M (i.e., the termination criterion) in Table 2 are set based on experience.
Note: DHN denotes Discrete Hopfield Network, CHN denotes Continuous Hopfield Network, and CHN_PSO denotes our proposed algorithm.

Datasets
The att48, burma14, and bayg29 datasets contain 48, 14, and 29 cities, respectively. Each of these datasets has three columns of data, namely serial number, abscissa, and ordinate. The ulysses16 and ulysses22 datasets contain 16 and 22 cities, respectively; each has two columns of data, namely abscissa and ordinate. Figure 2 depicts the convergence behavior of the objective function computed by the CHN in the inner loop of our algorithm on att48, burma14, bayg29, ulysses16, and ulysses22. Figure 3 depicts the convergence behavior of the outer loop of our algorithm on the same datasets. These experiments show that the outer loop needs fewer iterations than the inner loop to reach convergence.

• The final optimized path of our method outperforms the CHN and DHN algorithms on att48, burma14, and ulysses16.
• On ulysses22, the final optimized paths of the CHN and DHN algorithms are close to that of our algorithm, but they still perform unsatisfactorily.
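For datasets given as coordinate columns like those above, the symmetric distance matrix d_xy can be built with a small helper sketch, assuming the plain Euclidean distance used in the problem reformulation:

```python
import numpy as np

def distance_matrix(coords):
    """Build the symmetric Euclidean distance matrix d_xy from an array of
    (abscissa, ordinate) city coordinates."""
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise differences
    return np.sqrt((diff ** 2).sum(axis=-1))
```

The resulting matrix is symmetric with a zero diagonal, as required for d_xy in Eqs. (9)-(12).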

Conclusion
In this paper, a collaborative neurodynamic optimization (CNO) algorithm is proposed to solve the traveling salesman problem (TSP). The algorithm employs PSO and HNNs together to reach satisfactory results, and experimental results show the effectiveness of the CNO approach on four TSP benchmarks. This paper uses the CHN and PSO to solve the TSP. In future work, discrete Hopfield networks could be applied to this problem and combined with other swarm intelligence algorithms; we are currently studying how to combine them effectively and efficiently.