The 2-opt behavior of the Hopfield Network applied to the TSP


Abstract

The Continuous Hopfield Network (CHN) was one of the major breakthroughs in the comeback of neural networks in the mid 1980s, as it could be used to solve combinatorial optimization problems such as the Traveling Salesman Problem (TSP). However, once researchers provided a mechanism, not based on trial and error, to guarantee the feasibility of the CHN, the quality of its solutions proved inferior to that of other heuristics. The natural next step is to study the behavior of the CHN as an optimizer in order to improve its performance. In this regard, this paper analyzes the attractor basins of the CHN and establishes the mathematical foundations that guarantee the behavior of the network as a 2-opt, with the aim of opening a new research line in which the CHN may be used, given the appropriate parameter setting, to perform a k-opt, which would make the network highly competitive. The analysis of the attraction basins of the CHN and their interpretation as a 2-opt are the subject of this article.


Notes

  1. Note that this benchmark does not provide a full and fair comparison between the two heuristics, the CHN (detailed in this paper) and LKH-3 (Helsgaun 2017), as they have been implemented in different programming languages, MATLAB and C, respectively.

  2. As noted in Remark 1 (see Sect. 1.1.1), the state matrix \(\mathbf {V} = (v_{x,i})_{x \in \{1,\dots ,n\}, i\in \{1,\dots ,n-K\}}\) can be expressed as a vector: \({\mathbf {v}}~=~\left[ \begin{array}{l} {\mathbf {v}}_{\cdot 1}\\ {\mathbf {v}}_{\cdot 2}\\ \vdots \\ {\mathbf {v}}_{\cdot (n-K)}\end{array}\right] _{(n\times (n-K))\times 1}\) with \({\mathbf {v}}_{\cdot i} = \left[ \begin{array}{l} v_{1,i}\\ v_{2,i}\\ \vdots \\ v_{n,i}\end{array}\right] _{n\times 1}\) (A NumPy illustration of this column-stacking is given below, after these notes.)

  3. Note that \(\mathbf {V}(0)\) is not an equilibrium point of the dynamic system, since the projection of the saddle point is a different point than the saddle point of the projected energy function.

  4. If both components remain at the same distance, that is, \(v_{3,2}(\Updelta t) = v_{4,2}(\Updelta t)\), then \(\mathbf {V}(\Updelta t)\) is projected again onto the same face and the process is repeated.
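The column-stacking convention of footnote 2 can be reproduced in NumPy by flattening the state matrix in column-major (Fortran) order. A minimal sketch; the sizes n = 3, K = 1 are illustrative assumptions, not values taken from the paper:

import numpy as np

n, K = 3, 1                                    # illustrative sizes
V = np.arange(n * (n - K)).reshape(n, n - K)   # state matrix (v_{x,i})
v = V.flatten(order="F")                       # stack the columns v_{.1}, v_{.2}, ...
assert np.array_equal(v[:n], V[:, 0])          # first block is the first column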

References

  • Abe S (1993) Global convergence and suppression of spurious states of the Hopfield neural networks. IEEE Trans Circuits Syst I Fundam Theory Appl 40(4):246–257


  • Croes GA (1958) A method for solving traveling-salesman problems. Oper Res 6(6):791–812


  • Cuykendall R, Reese R (1989) Scaling the neural TSP algorithm. Biol Cybern 60(5):365–371


  • De Mazancourt T, Gerlic D (1983) The inverse of a block-circulant matrix. IEEE Trans Antennas Propag 31(5):808–810


  • García L (2017) Algunas cuestiones notables sobre el modelo de Hopfield en optimización. Ph.D. thesis, Universidad Complutense de Madrid. https://eprints.ucm.es/46536/

  • García L, Talaván PM, Yáñez J (2017) Attractor basin analysis of the Hopfield model: the generalized quadratic knapsack problem. In: International work-conference on artificial neural networks. Springer, pp 420–431

  • García L, Talaván PM, Yáñez J (2017) Improving the Hopfield model performance when applied to the traveling salesman problem. Soft Comput 21(14):3891–3905. https://doi.org/10.1007/s00500-016-2039-8


  • Hegde SU, Sweet JL, Levy WB (1988) Determination of parameters in a Hopfield/Tank computational network. In: IEEE international conference on neural networks, 1988. IEEE, pp 291–298

  • Helsgaun K (2000) An effective implementation of the Lin-Kernighan traveling salesman heuristic. Eur J Oper Res 126(1):106–130


  • Helsgaun K (2017) An extension of the Lin-Kernighan-Helsgaun TSP solver for constrained traveling salesman and vehicle routing problems. Roskilde University, Roskilde


  • Hopfield JJ (1982) Neural networks and physical systems with emergent collective computational abilities. Proc Nat Acad Sci 79(8):2554–2558


  • Hopfield JJ (1984) Neurons with graded response have collective computational properties like those of two-state neurons. Proc Nat Acad Sci 81(10):3088–3092


  • Hopfield JJ, Tank DW (1985) “Neural” computation of decisions in optimization problems. Biol Cybern 52(3):141–152


  • Joya G, Atencia M, Sandoval F (2002) Hopfield neural networks for optimization: study of the different dynamics. Neurocomputing 43(1–4):219–237


  • Lin S, Kernighan BW (1973) An effective heuristic algorithm for the traveling-salesman problem. Oper Res 21(2):498–516


  • Papadimitriou CH (1977) The Euclidean travelling salesman problem is NP-complete. Theoret Comput Sci 4(3):237–244


  • Platt JC, Barr AH (1988) Constrained differential optimization for neural networks

  • Reinelt G (1991) TSPLIB. A traveling salesman problem library. ORSA J Comput 3(4):376–384


  • Talaván PM, Yáñez J (2002) Parameter setting of the Hopfield network applied to TSP. Neural Netw 15(3):363–373


  • Wasserman PD, Meyer-Arendt JR (1990) Neural computing, theory and practice. Appl Opt 29:2503


  • Woeginger GJ (2003) Exact algorithms for NP-hard problems: a survey. In: Combinatorial optimization—Eureka, You Shrink!. Springer, pp 185–207


Acknowledgements

This research has been partially supported by the Government of Spain, grant TIN2015-66471-P, and by the local Government of Madrid, grant S2013/ICE-2845 (CASI-CAM).

Author information

Correspondence to Lucas García.


Some technical results

Lemma 1

Given a symmetric block-circulant matrix (each row-block is rotated one block to the right relative to the preceding row-block) of size N (N even, \(N = 2m\)), its inverse has the following structure:

$$\begin{aligned} \mathbf {T}^{-1} = \left[ \begin{array}{cccccccc} \mathbf {Q}_{\mathbf {1}} &{} \mathbf {Q}_{\mathbf {2}} &{} \cdots &{} \mathbf {Q}_{\mathbf {m}} &{} \mathbf {Q}_{\mathbf {m+1}} &{} \mathbf {Q}_{\mathbf {m}} &{} \cdots &{} \mathbf {Q}_{\mathbf {2}} \\ \mathbf {Q}_{\mathbf {2}} &{} \mathbf {Q}_{\mathbf {1}} &{} \mathbf {Q}_{\mathbf {2}} &{} \cdots &{} \mathbf {Q}_{\mathbf {m}} &{} \mathbf {Q}_{\mathbf {m+1}} &{} \cdots &{} \mathbf {Q}_{\mathbf {3}} \\ \vdots &{} &{} \ddots &{} &{} &{} &{} \ddots &{} \vdots \\ \mathbf {Q}_{\mathbf {2}} &{} \mathbf {Q}_{\mathbf {3}} &{} \cdots &{} \mathbf {Q}_{\mathbf {m+1}} &{} \mathbf {Q}_{\mathbf {m}} &{} \cdots &{} \cdots &{} \mathbf {Q}_{\mathbf {1}} \end{array}\right] \end{aligned}$$

that is, a block-circulant matrix whose first block-row is \((\mathbf {Q}_{\mathbf {1}}, \mathbf {Q}_{\mathbf {2}}, \dots , \mathbf {Q}_{\mathbf {m}}, \mathbf {Q}_{\mathbf {m+1}}, \mathbf {Q}_{\mathbf {m}}, \dots , \mathbf {Q}_{\mathbf {2}})\), each subsequent block-row being the cyclic right-shift of the previous one.

Proof

Let \({\mathbf {T}}\) be an invertible block-circulant matrix. By the result of De Mazancourt and Gerlic (1983) on block-circulant matrices (BCM), the inverse of a BCM is also a BCM.

Thus, since \({\mathbf {T}}^{\mathbf{-1}}\) is BCM, it can be written as follows (if \(N = 2m\)):

$$\begin{aligned} \mathbf {T}^{-1} = \left[ \begin{array}{ccccc} \mathbf {Q}_{\mathbf {1}} &{} \mathbf {Q}_{\mathbf {2}} &{} \mathbf {Q}_{\mathbf {3}} &{} \cdots &{} \mathbf {Q}_{\mathbf {N}} \\ \mathbf {Q}_{\mathbf {N}} &{} \mathbf {Q}_{\mathbf {1}} &{} \mathbf {Q}_{\mathbf {2}} &{} \cdots &{} \mathbf {Q}_{\mathbf {N-1}} \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ \mathbf {Q}_{\mathbf {2}} &{} \mathbf {Q}_{\mathbf {3}} &{} \mathbf {Q}_{\mathbf {4}} &{} \cdots &{} \mathbf {Q}_{\mathbf {1}} \end{array}\right] \end{aligned}$$

where the \(\mathbf {Q}_{\mathbf {s}}\) are square blocks of the same size as the blocks of \({\mathbf {T}}\).

Since \({\mathbf {T}}^{\mathbf{-1}}\) is symmetric (as \({\mathbf {T}}\) is symmetric), it is deduced that:

$$\begin{aligned} \mathbf {Q}_{\mathbf{k}} = \mathbf {Q}_{{\mathbf {N-(k-2)}}} \quad \forall k \in \left\{ 2,\dots ,\frac{N}{2} +1\right\} \end{aligned}$$

An equivalent result is obtained if N is odd. \(\square \)
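The two structural claims used above, that the inverse of a symmetric block-circulant matrix is again block-circulant and that its first block-row is mirrored (\(\mathbf {Q}_{\mathbf {k}} = \mathbf {Q}_{{\mathbf {N-(k-2)}}}\)), can be checked numerically. A sketch, in which the block size, the number of blocks and the diagonal shift are illustrative assumptions:

import numpy as np

def blockcirc(blocks):
    # Block-circulant matrix whose first block-row is `blocks`
    N = len(blocks)
    return np.block([[blocks[(j - i) % N] for j in range(N)] for i in range(N)])

rng = np.random.default_rng(0)
b, N = 3, 6                                # block size, number of blocks (N = 2m)
B = [rng.standard_normal((b, b)) for _ in range(N)]
B = [(M + M.T) / 2 for M in B]             # symmetric blocks ...
for k in range(1, N // 2):
    B[N - k] = B[k]                        # ... mirrored, so that T is symmetric
T = blockcirc(B) + 10 * np.eye(b * N)      # diagonal shift for good conditioning
Tinv = np.linalg.inv(T)

Q = [Tinv[:b, k * b:(k + 1) * b] for k in range(N)]   # first block-row of T^{-1}
assert np.allclose(Tinv, blockcirc(Q))                # inverse is block-circulant
for k in range(2, N // 2 + 2):                        # 1-based k in {2,...,N/2+1}
    assert np.allclose(Q[k - 1], Q[N - k + 1])        # Q_k = Q_{N-(k-2)}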

Proof

(Proof of Proposition 1; see page 8 for its formulation) The saddle point of the energy function for the second phase of the Divide-and-Conquer scheme, defined by Eq. 10, is obtained by computing the partial derivatives of \(E\left( {\mathbf {v}}\right) \) with respect to \({\mathbf {v}}\) and setting them equal to zero, \(-\mathbf {Tv} - {\mathbf {i}}^{b} = 0\). Thus, the saddle point \({\mathbf {v}}^{*}\) is:

$$\begin{aligned} {\mathbf {v}}^{*} = -{\mathbf {T}}^{-1}{\mathbf {i}}^{b} \end{aligned}$$
(16)

The matrix \({\mathbf {T}}\) can be written as an \((n \times (n-K)) \times (n \times (n-K))\) matrix, that is, an \((n-K) \times (n-K)\) block matrix whose blocks are square matrices of size \(n \times n\):

$$\begin{aligned} \mathbf {T} = - \left[ \begin{array}{ccccc} (B{+}C){\mathbb {1}} - B\mathbf {I} &{} C{\mathbb {1}} + A\mathbf {I_K} + D\mathbf {D_K} &{} C{\mathbb {1}} + A\mathbf {I_K} &{} \cdots &{} C{\mathbb {1}} + A\mathbf {I_K} + D(\mathbf {D_K})^T \\ C{\mathbb {1}} + A\mathbf {I_K} + D(\mathbf {D_K})^T &{} (B{+}C){\mathbb {1}} - B\mathbf {I} &{} C{\mathbb {1}} + A\mathbf {I_K} + D\mathbf {D_K} &{} \cdots &{} C{\mathbb {1}} + A\mathbf {I_K} \\ \vdots &{} &{} \ddots &{} &{} \vdots \\ C{\mathbb {1}} + A\mathbf {I_K} + D\mathbf {D_K} &{} C{\mathbb {1}} + A\mathbf {I_K} &{} \cdots &{} C{\mathbb {1}} + A\mathbf {I_K} + D(\mathbf {D_K})^T &{} (B{+}C){\mathbb {1}} - B\mathbf {I} \end{array}\right] \end{aligned}$$

where the matrices \({\mathbb {1}}\), \(\mathbf {I}\), \(\mathbf {I}_\mathbf {K}\) and \({\mathbf {D}}_\mathbf {K}\) follow the definitions of the proposition.

In order to also cover the trivial case \(K = 0\), in which there are no fixed chains and every isolated city is coupled to itself, the term \(\delta _K\) was introduced in the definition of the matrix \(\mathbf {I}_\mathbf {K}\).

Moreover, the bias vector \({\mathbf {i}}^{{\mathbf {b}}}\) has the structure:

$$\begin{aligned} {\mathbf {i}}^{b} = \left[ \begin{array}{c} C N^{\prime } \\ \vdots \\ C N^{\prime } \end{array}\right] _{(n \times (n-K))\times 1} \end{aligned}$$

\({\mathbf {T}}\) is a block-circulant matrix (see De Mazancourt and Gerlic 1983). If \({\mathbf {T}}\) is invertible, using Lemma 1, \({\mathbf {T}}^{-1}\) is also block-circulant and has the following structure (depending on whether \(n-K\) is even or odd):

If \(n-K = 2m\):

$$\begin{aligned} \mathbf {T}^{-1} = \left[ \begin{array}{cccccccc} \mathbf {Q}_{\mathbf {1}} &{} \mathbf {Q}_{\mathbf {2}} &{} \cdots &{} \mathbf {Q}_{\mathbf {m}} &{} \mathbf {Q}_{\mathbf {m+1}} &{} \mathbf {Q}_{\mathbf {m}} &{} \cdots &{} \mathbf {Q}_{\mathbf {2}} \\ \vdots &{} &{} &{} \ddots &{} &{} &{} &{} \vdots \\ \mathbf {Q}_{\mathbf {2}} &{} \mathbf {Q}_{\mathbf {3}} &{} \cdots &{} \mathbf {Q}_{\mathbf {m+1}} &{} \mathbf {Q}_{\mathbf {m}} &{} \cdots &{} \cdots &{} \mathbf {Q}_{\mathbf {1}} \end{array}\right] \end{aligned}$$

(a block-circulant matrix, each block-row being the cyclic right-shift of the previous one)

where \(\mathbf {Q}_{\mathbf {s}}\) are \(n\times n\) matrices \(\forall s \in \{1, \dots , \frac{n-K}{2}+1\}\).

If \(n-K = 2m + 1\):

$$\begin{aligned} \mathbf {T}^{-1} = \left[ \begin{array}{ccccccc} \mathbf {Q}_{\mathbf {1}} &{} \mathbf {Q}_{\mathbf {2}} &{} \cdots &{} \mathbf {Q}_{\mathbf {m+1}} &{} \mathbf {Q}_{\mathbf {m+1}} &{} \cdots &{} \mathbf {Q}_{\mathbf {2}} \\ \vdots &{} &{} &{} \ddots &{} &{} &{} \vdots \\ \mathbf {Q}_{\mathbf {2}} &{} \mathbf {Q}_{\mathbf {3}} &{} \cdots &{} \mathbf {Q}_{\mathbf {m+1}} &{} \mathbf {Q}_{\mathbf {m+1}} &{} \cdots &{} \mathbf {Q}_{\mathbf {1}} \end{array}\right] \end{aligned}$$

(a block-circulant matrix, each block-row being the cyclic right-shift of the previous one)

where \(\mathbf {Q}_{\mathbf {s}}\) are \(n\times n\) matrices \(\forall s \in \{1, \dots , \frac{n-K+1}{2}\}\).

Considering Eq. 16 and the structure of \({\mathbf {T}}^{-1}\), \({\mathbf {v}}^{*}\) may be obtained (see footnote 2) in the following way:

$$\begin{aligned} {\mathbf {v}}^{*} = - {\mathbf {T}}^{-1} {\mathbf {i}}^{b} = \left[ \begin{array}{l} {\mathbf {v}}^{*}_{\cdot 1}\\ {\mathbf {v}}^{*}_{\cdot 2}\\ \vdots \\ {\mathbf {v}}^{*}_{\cdot (n-K)}\\ \end{array}\right] _{(n \times (n-K))\times 1} \end{aligned}$$

with

$$\begin{aligned} {\mathbf {v}}^{*}_{\cdot i} =\left\{ \begin{array}{rl} -\Big (\mathbf {Q}_{\mathbf {1}} + 2\displaystyle \sum _{s = 2}^{\frac{n-K}{2}}\mathbf {Q}_{\mathbf {s}} + \mathbf {Q_{\frac{n-K}{2}+1}}\Big ) CN^{\prime } \mathbf {1} &{} \, \text{ if } \,\,\, n-K = 2m\\ -\Big (\mathbf {Q}_{\mathbf {1}} + 2\displaystyle \sum _{s = 2}^{\frac{n-K+1}{2}}\mathbf {Q}_{\mathbf {s}} \Big ) CN^{\prime } \mathbf {1} &{} \, \text{ if } \,\,\, n-K = 2m{+}1\\ \end{array} \right. \end{aligned}$$

where \(\mathbf {1} = \left[ \begin{array}{c} 1\\ 1\\ \vdots \\ 1\end{array}\right] _{n\times 1}\).

Let \(\mathbf {Q}^{row} \equiv \left\{ \begin{array}{rl} \mathbf {Q}_{\mathbf {1}} + 2\displaystyle \sum\nolimits _{s = 2}^{\frac{n-K}{2}}\mathbf {Q}_{\mathbf {s}} + \mathbf {Q_{\frac{n-K}{2} + 1}} &{} \text{ if } \quad n-K = 2m\\ \mathbf {Q}_{\mathbf {1}} + 2\displaystyle \sum\nolimits _{s = 2}^{\frac{n-K+1}{2}}\mathbf {Q}_{\mathbf {s}} &{} \text{ if } \quad n-K = 2m + 1\\ \end{array} \right. \) then, considering that \(\mathbf {Tv}^{*} = {\mathbf {i}}^b\) and defining \(\mathbf {q}^{row} \equiv \mathbf {Q}^{row} \mathbf {1}\):

$$\begin{aligned} \begin{array}{ll} \mathbf {Tv}^{*} &{}= {\mathbf {T}} \left[ \begin{array}{c} {\mathbf {v}}^{*}_{\cdot 1}\\ {\mathbf {v}}^{*}_{\cdot 2}\\ \vdots \\ {\mathbf {v}}^{*}_{\cdot (n-K)}\\ \end{array}\right] _{(n\times (n-K))\times 1} = {\mathbf {T}} \left[ \begin{array}{c} -\mathbf {Q}^{row} CN^{\prime } \mathbf {1}\\ -\mathbf {Q}^{row} CN^{\prime } \mathbf {1}\\ \vdots \\ -\mathbf {Q}^{row} CN^{\prime } \mathbf {1}\\ \end{array}\right] _{(n\times (n-K))\times 1}\\ &{} = -CN^{\prime }{\mathbf {T}} \, \left[ \begin{array}{c} \mathbf {q}^{row}\\ \mathbf {q}^{row}\\ \vdots \\ \mathbf {q}^{row}\end{array}\right] _{(n\times (n-K))\times 1} = \left[ \begin{array}{c} CN^{\prime }\\ CN^{\prime }\\ \vdots \\ CN^{\prime } \end{array}\right] _{(n\times (n-K))\times 1} \end{array} \end{aligned}$$

Since all the components are equal, it is sufficient to develop just one of them:

$$\begin{aligned} - CN^{\prime } \left( -\left[ \left( B{+}C\right) {\mathbb {1}} - B\mathbf {I} + \left( n{-}K{-}3\right) \left( C{\mathbb {1}}{+}A\mathbf {I}_{\mathbf {K}}\right) + 2\left( C{\mathbb {1}}{+}A \mathbf {I}_{\mathbf {K}}\right) + D \left( {\mathbf {D}}_{\mathbf {K}} {+} (\mathbf {D_K})^T\right) \right] \right) \mathbf {q}^{row} = CN^{\prime } \mathbf {1} \end{aligned}$$

obtaining:

$$\begin{aligned} \mathbf {q}^{row}&= \left[ \left( B+C\right) {\mathbb {1}} - B\mathbf {I} + \left( n-K-3\right) \left( C{\mathbb {1}} + A\mathbf {I}_{\mathbf {K}}\right) + 2\left( C{\mathbb {1}} + A \mathbf {I}_{\mathbf {K}}\right) + D \left( {\mathbf {D}}_{\mathbf {K}} + (\mathbf {D_K})^T\right) \right] ^{-1}\mathbf {1} \\&= \left[ (B + (n{-}K)C){\mathbb {1}} + (n{-}K{-}1)A\mathbf {I}_{\mathbf {K}} - B\mathbf {I} + D\left( {\mathbf {D}}_{\mathbf {K}} + ({\mathbf {D}}_{\mathbf {K}})^T\right) \right] ^{-1}\mathbf {1} \end{aligned}$$

The saddle point \({\mathbf {v}}^{*}\) of the energy function \(E({\mathbf {v}})\) given by Eq. 9, in terms of the parameters A, B, C, D and \(N^{\prime }\), is:

$$\begin{aligned} {\mathbf {v}}^{*} = \left[ \begin{array}{c} {\mathbf {v}}^{*}_{\cdot 1}\\ {\mathbf {v}}^{*}_{\cdot 2}\\ \vdots \\ {\mathbf {v}}^{*}_{\cdot (n-K)}\\ \end{array}\right] _{(n\times (n-K))\times 1} \end{aligned}$$

where \(\forall i \in \{1,\dots , n-K\}\)

$$\begin{aligned} {\mathbf {v}}_{\cdot i}^{*} = CN^{\prime } \mathbf {q}^{row} = CN^{\prime } \left[ (B + (n{-}K)C){\mathbb {1}} + (n{-}K{-}1)A\mathbf {I}_{\mathbf {K}} - B\mathbf {I} + D\left( {\mathbf {D}}_{\mathbf {K}} + (\mathbf {D_K})^T\right) \right] ^{-1} \mathbf {1} \end{aligned}$$

which may be written equivalently as:

$$\begin{aligned} \displaystyle v^{*}_{x,i} = CN^{\prime }\sum _{j = 1}^{n} m_{x,j}, \quad \forall x \in \{1,\ldots ,n\}, \forall i \in \{1, \dots , n-K\}, \end{aligned}$$

with

$$\begin{aligned} \mathbf {M} = \left[ (B + (n{-}K)C){\mathbb {1}} + (n{-}K{-}1)A\mathbf {I}_{\mathbf {K}} - B\mathbf {I} + D\left( {\mathbf {D}}_{\mathbf {K}} {+} (\mathbf {D_K})^T\right) \right] ^{-1} \end{aligned}$$
(18)

Considering the parameter setting in Eq. 13 for A, B, C, D and \(N^{\prime }\):

$$\begin{aligned} v^{*}_{x,i}&= C \left( n - K + \frac{3}{C}\right) \sum _{j = 1}^{n} m_{x,j},\quad \forall x \in \{1,\ldots ,n\}, \forall i \in \{1, \dots , n-K\},\\ \mathbf {M}&= \left[ \Big (\left( n{-}K{+}1\right) C{+}3+\frac{d_L}{d_U} \Big ){\mathbb {1}} + \left( n{-}K{-}1\right) \left( C{+}3\right) \mathbf {I}_{\mathbf {K}} - \Big (C+3+\frac{d_L}{d_U}\Big )\mathbf {I} + \frac{1}{d_U}\Big ({\mathbf {D}}_{\mathbf {K}} + (\mathbf {D_K})^T\Big )\right] ^{-1} \end{aligned}$$

\(\square \)
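The matrix simplification used in the middle of this proof (collecting the \({\mathbb {1}}\) and \(\mathbf {I}_{\mathbf {K}}\) terms) is easy to double-check numerically, since it holds for arbitrary matrices in place of \({\mathbb {1}}\), \(\mathbf {I}\), \(\mathbf {I}_{\mathbf {K}}\) and \(\mathbf {D}_{\mathbf {K}}\). A sketch with random stand-ins; the sizes and parameter values are assumptions:

import numpy as np

rng = np.random.default_rng(1)
n, K = 6, 2                              # illustrative sizes
A, B, C, D = rng.uniform(1, 2, size=4)   # illustrative parameter values
ones, I = np.ones((n, n)), np.eye(n)
IK = rng.standard_normal((n, n))         # stand-in for I_K (the identity is linear in it)
DK = rng.standard_normal((n, n))         # stand-in for D_K

lhs = ((B + C) * ones - B * I
       + (n - K - 3) * (C * ones + A * IK)
       + 2 * (C * ones + A * IK)
       + D * (DK + DK.T))
rhs = ((B + (n - K) * C) * ones + (n - K - 1) * A * IK
       - B * I + D * (DK + DK.T))
assert np.allclose(lhs, rhs)             # coefficients of 1 and I_K collected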

Lemma 2

Given a positive definite matrix \(\mathbf {X}\) of size \(4 \times 4\) with the following structure:

$$\begin{aligned} \mathbf {X} = \left[ \begin{array}{cccc} x_1 &{} x_2 &{} x_3 &{} x_4 \\ x_2 &{} x_1 &{} x_4 &{} x_3 \\ x_3 &{} x_4 &{} x_1 &{} x_2 \\ x_4 &{} x_3 &{} x_2 &{} x_1 \end{array}\right] _{4 \times 4} \end{aligned}$$

then its inverse matrix \(\mathbf {M}\) satisfies

$$\begin{aligned} \sum _{j=1}^4 m_{i,j} = \frac{1}{x_1 + x_2 + x_3 + x_4}, \quad \forall i =1,\,2,\,3,\,4. \end{aligned}$$

Proof

Since \(\mathbf {X}\) is positive definite, its inverse exists. Let \(\mathbf {M}\) be the inverse matrix of \(\mathbf {X}\); being its inverse, it satisfies

$$\begin{aligned} \mathbf {m}_{i,\cdot } \mathbf {x}_{\cdot ,j} = \left\{ \begin{array}{ll} 1 &{} \text{ if } i = j \\ 0 &{} \text{ otherwise } \end{array} \right. \end{aligned}$$

with

$$\begin{aligned} \mathbf {m}_{i,\cdot } = \left[ \begin{array}{cccc} m_{i,1}&m_{i,2}&m_{i,3}&m_{i,4} \end{array}\right] , \quad \forall i=1,\,2,\,3,\,4. \end{aligned}$$

and \(\mathbf {x}_{\cdot ,j} = \left[ \begin{array}{c} x_{1,j} \\ x_{2,j} \\ x_{3,j} \\ x_{4,j} \end{array} \right] , \quad \forall j=1,\,2,\,3,\,4.\) Summing over j, we obtain:

$$\begin{aligned} \sum _{j=1}^4 \mathbf {m}_{i,\cdot } \mathbf {x}_{\cdot ,j} = \mathbf {m}_{i,\cdot } \sum _{j=1}^4 \mathbf {x}_{\cdot ,j} = 1, \quad \forall i=1,\,2,\,3,\,4. \end{aligned}$$

According to the definition for \(\mathbf {X}\) of the lemma:

$$\begin{aligned} \sum _{j=1}^4 \mathbf {x}_{\cdot ,j} = \left[ \begin{array}{c} x_1 + x_2 + x_3 + x_4 \\ x_1 + x_2 + x_3 + x_4 \\ x_1 + x_2 + x_3 + x_4 \\ x_1 + x_2 + x_3 + x_4 \end{array} \right] \end{aligned}$$

which is not the vector \(\mathbf {0}\) as \(\mathbf {X}\) is invertible.

Thus, \(\displaystyle \,\mathbf {m}_{i,\cdot } \sum _{j=1}^4 \mathbf {x}_{\cdot ,j} = \sum _{j=1}^4 m_{i,j} (x_1 + x_2 + x_3 + x_4) = 1, \displaystyle \forall i=1,\,2,\,3,\,4.\)


and therefore

$$\begin{aligned} \sum _{j=1}^4 m_{i,j} = \frac{1}{x_1 + x_2 + x_3 + x_4}, \quad \forall i=1,\,2,\,3,\,4. \end{aligned}$$

\(\square \)
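A quick numerical check of Lemma 2 (a sketch; the values of \(x_1,\dots ,x_4\) are illustrative, chosen diagonally dominant so that \(\mathbf {X}\) is positive definite):

import numpy as np

x1, x2, x3, x4 = 5.0, 1.0, 0.5, 0.25
X = np.array([[x1, x2, x3, x4],
              [x2, x1, x4, x3],
              [x3, x4, x1, x2],
              [x4, x3, x2, x1]])
M = np.linalg.inv(X)
# Every row of the inverse sums to 1 / (x1 + x2 + x3 + x4):
assert np.allclose(M.sum(axis=1), 1.0 / (x1 + x2 + x3 + x4))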

Lemma 3

As a consequence of Proposition 1, the saddle point for the case with four cities and two chains (\(n = 4\), \(K = 2\)) is:

$$\begin{aligned}&v_{x,i}^{*} = (2 C + 3) \frac{d_U}{(13 C + 15) d_U + 3 d_L + d_{1,3} + d_{1,4} + d_{2,3} + d_{2,4}},\\&\forall x \in \{1,\,2,\,3,\,4\}, \quad \forall i \in \{1,\,2\}. \end{aligned}$$

Proof

If \(n = 4\) and \(K = 2\), and considering the definitions for the matrices in Proposition 1:

$$\begin{aligned} \begin{array}{ll} {\mathbb {1}} &{} = ({\mathbb {1}})_4 \equiv \left[ \begin{array}{cccc} 1 &{} 1 &{} 1 &{} 1 \\ 1 &{} 1 &{} 1 &{} 1 \\ 1 &{} 1 &{} 1 &{} 1 \\ 1 &{} 1 &{} 1 &{} 1 \end{array}\right] _{4 \times 4},\, \mathbf {I} = (\mathbf {I})_4 \equiv \left[ \begin{array}{cccc} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \end{array}\right] _{4\times 4}, \\ \mathbf {J_2} &{} = (\mathbf {J_2})_4 \equiv \left[ \begin{array}{cc}({\mathbb {1}})_2 - (\mathbf {I})_2 &{} (\mathbf {0})_2 \\ (\mathbf {0})_2 &{} ({\mathbb {1}})_2 - (\mathbf {I})_2\end{array}\right] _{4 \times 4} = \left[ \begin{array}{cccc} 0 &{} 1 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 1 &{} 0 \end{array}\right] _{4\times 4},\\ \mathbf {I_2} &{} = (\mathbf {I_2})_4 \equiv (\mathbf {J_2})_4 + (\mathbf {I})_4 = \left[ \begin{array}{cccc} 1 &{} 1 &{} 0 &{} 0 \\ 1 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 1 \\ 0 &{} 0 &{} 1 &{} 1 \end{array}\right] _{4\times 4} \end{array} \end{aligned}$$
(19)

and

$$\begin{aligned} \mathbf {D_2}&= (\mathbf {D_2})_4 \equiv {\mathbf {D}} \cdot \mathbf {J_2} \\&= \left[ \begin{array}{cccc} 0 &{} 0 &{} d_{1,3} &{} d_{1,4} \\ 0 &{} 0 &{} d_{2,3} &{} d_{2,4} \\ d_{1,3} &{} d_{2,3} &{} 0 &{} 0 \\ d_{1,4} &{} d_{2,4} &{} 0 &{} 0 \end{array}\right] _{4\times 4} \cdot \left[ \begin{array}{cccc} 0 &{} 1 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 1 &{} 0 \end{array}\right] _{4\times 4} = \left[ \begin{array}{cccc} 0 &{} 0 &{} d_{1,4} &{} d_{1,3} \\ 0 &{} 0 &{} d_{2,4} &{} d_{2,3} \\ d_{2,3} &{} d_{1,3} &{} 0 &{} 0 \\ d_{2,4} &{} d_{1,4} &{} 0 &{} 0 \end{array}\right] _{4\times 4} \end{aligned}$$
(20)

the matrix \(\mathbf {M}\) required to obtain the saddle point (see Eq. 18 and Remark 4) can be computed as:

$$\begin{aligned} \mathbf {M}&= \left[ (B + 2C){\mathbb {1}} + A\mathbf {I_2} - B\mathbf {I} + D\left( \mathbf {D_2} {+} (\mathbf {D_2})^T\right) \right] ^{-1} \\&= \left[ \begin{array}{cccc} 2 C {+} A &{} A {+} B {+} 2C &{} B {+} 2C {+} D (d_{1,4} {+} d_{2,3})&{} B {+} 2C {+} D (d_{1,3} {+} d_{2,4}) \\ A {+} B {+} 2C &{} 2 C {+} A &{} B {+} 2C {+} D (d_{1,3} {+} d_{2,4}) &{} B {+} 2C {+} D (d_{1,4} {+} d_{2,3}) \\ B {+} 2C {+} D (d_{1,4} {+} d_{2,3}) &{} B {+} 2C {+} D (d_{1,3} {+} d_{2,4}) &{} 2 C {+} A &{} A {+} B {+} 2C \\ B {+} 2C {+} D (d_{1,3} {+} d_{2,4}) &{} B {+} 2C {+} D (d_{1,4} {+} d_{2,3}) &{} A {+} B {+} 2C &{} 2 C {+} A \end{array}\right] ^{-1}_{4 \times 4} \end{aligned}$$

Taking into account the results of Proposition 1, Lemma 2 and the parameter setting in Eq. 13:

$$\begin{aligned} v^{*}_{x,i}&= \left( 2 C + 3\right) \sum _{j = 1}^{4} m_{x,j} = \frac{(2 C + 3)d_U}{(13 C + 15) d_U + 3 d_L + d_{1,3} + d_{1,4} + d_{2,3} + d_{2,4}}, \\ {}&\quad \forall x \in \{1,\,2,\,3,\,4\}, \quad \forall i \in \{1,\,2\}. \end{aligned}$$

\(\square \)
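The closed form of Lemma 3 can be cross-checked against a direct inversion of the \(4 \times 4\) matrix in the proof. In the sketch below, the distances and the free parameter C are illustrative, and the values of A, B, D and \(N^{\prime }\) are read off from the last display of the proof of Proposition 1; treating them as the Eq. 13 setting is an assumption of this sketch:

import numpy as np

d13, d14, d23, d24 = 1.0, 2.0, 1.5, 3.0          # illustrative distances
dL, dU = 1.0, 3.0
C = 0.7                                          # free parameter
A, B, D = C + 3, C + 3 + dL / dU, 1 / dU         # assumed parameter setting
Nprime = 2 + 3 / C                               # n - K + 3/C with n - K = 2

x1 = 2 * C + A
x2 = A + B + 2 * C
x3 = B + 2 * C + D * (d14 + d23)
x4 = B + 2 * C + D * (d13 + d24)
X = np.array([[x1, x2, x3, x4],
              [x2, x1, x4, x3],
              [x3, x4, x1, x2],
              [x4, x3, x2, x1]])
v_direct = C * Nprime * np.linalg.inv(X).sum(axis=1)      # v* via Eq. 18
v_closed = (2 * C + 3) * dU / ((13 * C + 15) * dU + 3 * dL
                               + d13 + d14 + d23 + d24)   # Lemma 3
assert np.allclose(v_direct, v_closed)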

Remark 8

From the saddle point obtained in Lemma 3, it can be concluded that:

$$\begin{aligned} \displaystyle \lim _{C \rightarrow \ \infty } v^{*}_{x,i} = \frac{2}{13}, \quad \forall x \in \{1,\,2,\,3,\,4\}, \quad \forall i \in \{1,\,2\} \end{aligned}$$

and \(\displaystyle \lim _{C \rightarrow \ 0} v^{*}_{x,i} = \frac{3 d_U}{15 d_U + 3 d_L + d_{1,3} + d_{1,4} + d_{2,3} + d_{2,4}}, \forall x \in \{1,\,2,\,3,\,4\}, \quad \forall i \in \{1,\,2\}.\)
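Both limits follow directly from the closed form and can be checked numerically; a sketch reusing the illustrative distances assumed above:

import numpy as np

d13, d14, d23, d24, dL, dU = 1.0, 2.0, 1.5, 3.0, 1.0, 3.0
S = d13 + d14 + d23 + d24
v = lambda C: (2 * C + 3) * dU / ((13 * C + 15) * dU + 3 * dL + S)
assert abs(v(1e9) - 2 / 13) < 1e-6                             # C -> infinity
assert abs(v(1e-12) - 3 * dU / (15 * dU + 3 * dL + S)) < 1e-6  # C -> 0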

Proof

(Proof of Theorem 1; see page 11 for its formulation) Let \(\{1,\,2,\,3,\,4\}\) be the set of \(n = 4\) cities and let \([1,2]\) and \([3,4]\) be the \(K = 2\) fixed chains. From the saddle point \({\mathbf {v}}^{*}\) obtained in Lemma 3, we can fix, without loss of generality, any of the 4 cities as the first visited city. Thus, city 1 is considered to be the first visited city in the tour, which yields the following starting point:

$$\begin{aligned} \mathbf {V}(0) = \left[ \begin{array}{cc} 1 &{} 0 \\ 0 &{} 0 \\ 0 &{} v_{3,2}^{*} \\ 0 &{} v_{4,2}^{*} \end{array}\right] , \end{aligned}$$

where

$$\begin{aligned} v_{3,2}^{*} = v_{4,2}^{*} = \frac{ (2 C + 3) d_U}{(13 C + 15) d_U + 3 d_L + d_{1,3} + d_{1,4} + d_{2,3} + d_{2,4}} \end{aligned}$$

With this projection, all symmetries of the problem are eliminated. That is, there are only two possible solutions: the tour \(1-2-3-4\) or the tour \(1-2-4-3\), whose corresponding states are, respectively:

$$\begin{aligned} \mathbf {V^1} = \left[ \begin{array}{cc} 1 &{} 0 \\ 0 &{} 0 \\ 0 &{} 1 \\ 0 &{} 0 \end{array}\right] , \qquad \mathbf {V^2} = \left[ \begin{array}{cc} 1 &{} 0 \\ 0 &{} 0 \\ 0 &{} 0 \\ 0 &{} 1 \end{array}\right] \end{aligned}$$

The Hopfield model with \(n = 4\) and \(K = 2\) will behave like a 2-opt if the starting point \(\mathbf {V}(0)\) converges (see footnote 3) to the optimal solution; that is, if, between the solutions \(\mathbf {V^1}\) and \(\mathbf {V^2}\), it converges to the one with the smaller objective value.

To find out to which solution the point \(\mathbf {V}(0)\) converges, it suffices to compute \(\mathbf {V}(\Updelta t)\), \(\Updelta t > 0\) (following the differential Eq. 1 with \(\lambda \xrightarrow {} \infty \)) and verify whether the point is closer (see footnote 4) to the solution \(\mathbf {V^1}\) or to \(\mathbf {V^2}\).

$$\begin{aligned} \frac{d{\mathbf {u}}(0)}{dt} = {\mathbf {T}}\cdot {\mathbf {v}}(0) + \mathbf {i^b} \end{aligned}$$
(21)

where the vector \({\mathbf {v}}(0)\) corresponds to the matrix \(\mathbf {V}(0)\) written in vector form. In order to simplify calculations, the matrix \(\mathbf {V}\) will be written as:

$$\begin{aligned} \mathbf {V} = \left[ \begin{array}{cc} {\mathbf {v}}_{\cdot ,1}&{\mathbf {v}}_{\cdot ,2}\end{array}\right] \end{aligned}$$

being arranged in vector form as:

$$\begin{aligned} {\mathbf {v}} = \left[ \begin{array}{c} {\mathbf {v}}_{\cdot ,1} \\ {\mathbf {v}}_{\cdot ,2}\end{array}\right] \end{aligned}$$

As seen in the proof of Proposition 1, the weight matrix \({\mathbf {T}}\) and the bias vector \(\mathbf {i^b}\) are:

$$\begin{aligned} {\mathbf {T}}&= - \left[ \begin{array}{cc} (B{+}C){\mathbb {1}}{-}B\mathbf {I} &{} C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T}) \\ C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T}) &{} (B{+}C){\mathbb {1}}{-} B\mathbf {I} \end{array}\right] \\ {\mathbf {i}}^{b}&= CN^{\prime } \left[ \begin{array}{c} (\mathbf {1})_{4\times 1} \\ (\mathbf {1})_{4\times 1} \end{array}\right] = CN^{\prime } \left[ \begin{array}{c} \mathbf {1} \\ \mathbf {1}\end{array}\right] \end{aligned}$$

By breaking down Eq. 21, we obtain:

$$\begin{aligned} \frac{d{\mathbf {u}}(0)}{dt}&= {\mathbf {T}}\cdot {\mathbf {v}}(0) + \mathbf {i^b} \\&= - \left[ \begin{array}{cc} (B{+}C){\mathbb {1}}{-}B\mathbf {I} &{} C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T}) \\ C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T}) &{} (B{+}C){\mathbb {1}}{-} B\mathbf {I} \end{array}\right] \cdot \left[ \begin{array}{c} {\mathbf {v}}_{\cdot ,1}(0) \\ {\mathbf {v}}_{\cdot ,2}(0) \end{array}\right] + CN^{\prime } \left[ \begin{array}{c} \mathbf {1} \\ \mathbf {1}\end{array}\right] \\&= - \left[ \begin{array}{c} \left( (B{+}C){\mathbb {1}}{-}B\mathbf {I}\right) {\mathbf {v}}_{\cdot ,1}(0) + \left( C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T})\right) {\mathbf {v}}_{\cdot ,2}(0) + C N^{\prime } \mathbf {1}\\ \left( C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T})\right) {\mathbf {v}}_{\cdot ,1}(0) + \left( (B{+}C){\mathbb {1}}{-}B\mathbf {I}\right) {\mathbf {v}}_{\cdot ,2}(0) + C N^{\prime } \mathbf {1} \end{array}\right] \end{aligned}$$

The value of the potential at \(t=0\) can be obtained by applying the inverse of the activation function to the vector \({\mathbf {v}}(0)\):

$$\begin{aligned} {\mathbf {u}}(0) = g^{-1}({\mathbf {v}}(0)) \end{aligned}$$

Using the piecewise-linear activation function proposed by Abe (1993):

$$\begin{aligned} v_{i} = g(u_{i}) = \left\{ \begin{array}{cll} 0 &{} \text{ if } &{} u_{i} \le -u_0\\ \displaystyle \frac{1}{2} \big (1 + \frac{u_i}{u_0}\big ) &{} \text{ if } &{} -u_0< u_i < u_0\\ 1 &{} \text{ if } &{} u_{i} \ge u_0 \end{array} \right. , \quad \forall i \in \{1,\ldots ,n\} \end{aligned}$$
(22)

its inverse function is:

$$\begin{aligned} u_{i} = g^{-1}(v_{i}) = \left\{ \begin{array}{cll} -u_0 &{} \text{ if } &{} v_{i} \le 0 \\ u_0 (2 v_i - 1)&{} \text{ if } &{} 0< v_i < 1 \\ u_0 &{} \text{ if } &{} v_{i} \ge 1 \end{array} \right. , \quad \forall i \in \{1,\ldots ,n\}. \end{aligned}$$

Thus:

$$\begin{aligned} {\mathbf {u}}(0) = g^{-1}({\mathbf {v}}(0)) = g^{-1}\left( \left[ \begin{array}{c} {\mathbf {v}}_{\cdot ,1}(0) \\ {\mathbf {v}}_{\cdot ,2}(0) \end{array}\right] \right) = \left[ \begin{array}{c} {\mathbf {u}}_{\cdot ,1}(0) \\ {\mathbf {u}}_{\cdot ,2}(0) \end{array}\right] \end{aligned}$$

where

$$\begin{aligned} {\mathbf {u}}_{\cdot ,1}(0) = \left[ \begin{array}{c} u_0\\ -u_0\\ -u_0\\ -u_0 \end{array}\right] \text{ and } {\mathbf {u}}_{\cdot ,2}(0) = \left[ \begin{array}{c} -u_0\\ -u_0\\ g^{-1}(v^{*}_{3,2}) \\ g^{-1}(v^{*}_{4,2})\end{array}\right] \end{aligned}$$

with \( \displaystyle g^{-1}(v^{*}_{3,2}) = g^{-1}(v^{*}_{4,2}) = u_0 \left( \frac{ 2(2 C + 3) d_U}{(13 C + 15) d_U + 3 d_L + d_{1,3} + d_{1,4} + d_{2,3} + d_{2,4}} - 1\right) \).
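Abe's activation function and its inverse translate directly into code. A sketch, with \(u_0\) treated as a free gain parameter:

import numpy as np

def g(u, u0=1.0):
    # Piecewise-linear activation, Eq. 22: clipped affine map of the potential
    return np.clip(0.5 * (1 + np.asarray(u) / u0), 0.0, 1.0)

def g_inv(v, u0=1.0):
    # Inverse activation: maps outputs in [0, 1] back to potentials in [-u0, u0]
    return np.clip(u0 * (2 * np.asarray(v) - 1), -u0, u0)

v = np.array([0.2, 0.5, 0.9])
assert np.allclose(g(g_inv(v)), v)   # round trip on the open interval (0, 1)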

Then, the potential at \(t = \Updelta t\) is computed:

$$\begin{aligned} {\mathbf {u}}(\Updelta t) = {\mathbf {u}}(0) + \frac{d{\mathbf {u}}(0)}{dt} \cdot \Updelta t \end{aligned}$$

where \(\Updelta t\) is the integration step.

Finally, the output of the dynamic system is computed at \(t = \Updelta t\) (using again Abe’s piecewise-linear activation function, see Eq. 22):

$$\begin{aligned} {\mathbf {v}}(\Updelta t)&= g({\mathbf {u}}(\Updelta t)) = g\left( {\mathbf {u}}(0) + \frac{d{\mathbf {u}}(0)}{dt} \cdot \Updelta t\right) \\&= \left[ \begin{array}{c}g \Big ({\mathbf {u}}_{\cdot ,1}(0) - \big (\left( (B{+}C){\mathbb {1}}{-}B\mathbf {I}\right) {\mathbf {v}}_{\cdot ,1}(0) + \left( C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T})\right) {\mathbf {v}}_{\cdot ,2}(0) + C N^{\prime } \mathbf {1}\big ) \Updelta t \Big ) \\ g\Big ({\mathbf {u}}_{\cdot ,2}(0) - \big (\left( C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T})\right) {\mathbf {v}}_{\cdot ,1}(0) + \left( (B{+}C){\mathbb {1}}{-}B\mathbf {I}\right) {\mathbf {v}}_{\cdot ,2}(0) + C N^{\prime } \mathbf {1}\big ) \Updelta t \Big ) \end{array}\right] \end{aligned}$$

Each of the two blocks obtained for \({\mathbf {v}}(\Updelta t)\) is then computed:

$$\begin{aligned} {\mathbf {v}}_{\cdot ,1}(\Updelta t)&= g \left( {\mathbf {u}}_{\cdot ,1}(0) - \left( \left( (B{+}C){\mathbb {1}}{-}B\mathbf {I}\right) {\mathbf {v}}_{\cdot ,1}(0) + \left( C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T})\right) {\mathbf {v}}_{\cdot ,2}(0) + C N^{\prime } \mathbf {1}\right) \Updelta t \right) \\&= \frac{1}{2}\left( \mathbf {1} + \frac{1}{u_0} \left( {\mathbf {u}}_{\cdot ,1}(0) - \left( \left( (B{+}C){\mathbb {1}}{-}B\mathbf {I}\right) {\mathbf {v}}_{\cdot ,1}(0) + \left( C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T})\right) {\mathbf {v}}_{\cdot ,2}(0) + C N^{\prime } \mathbf {1}\right) \Updelta t \right) \right) \end{aligned}$$

Substituting the explicit matrices

$$\begin{aligned} (B{+}C){\mathbb {1}}-B\mathbf {I} = \left[ \begin{array}{cccc} C &{} B{+}C &{} B{+}C &{} B{+}C \\ B{+}C &{} C &{} B{+}C &{} B{+}C \\ B{+}C &{} B{+}C &{} C &{} B{+}C \\ B{+}C &{} B{+}C &{} B{+}C &{} C \end{array}\right] \end{aligned}$$

and

$$\begin{aligned} C{\mathbb {1}}+A\mathbf {I_2}+D(\mathbf {D_2}+(\mathbf {D_2})^{T}) = \left[ \begin{array}{cccc} A{+}C &{} A{+}C &{} C {+} D(d_{1,4}{+}d_{2,3}) &{} C {+} D(d_{1,3}{+}d_{2,4}) \\ A{+}C &{} A{+}C &{} C {+} D(d_{1,3}{+}d_{2,4}) &{} C {+} D(d_{1,4}{+}d_{2,3}) \\ C {+} D(d_{1,4}{+}d_{2,3}) &{} C {+} D(d_{1,3}{+}d_{2,4}) &{} A{+}C &{} A{+}C \\ C {+} D(d_{1,3}{+}d_{2,4}) &{} C {+} D(d_{1,4}{+}d_{2,3}) &{} A{+}C &{} A{+}C \end{array}\right] \end{aligned}$$

together with \({\mathbf {v}}_{\cdot ,1}(0) = \left[ 1,\, 0,\, 0,\, 0\right] ^{T}\), \({\mathbf {v}}_{\cdot ,2}(0) = \left[ 0,\, 0,\, v^{*}_{3,2},\, v^{*}_{3,2}\right] ^{T}\) and the vectors \({\mathbf {u}}_{\cdot ,1}(0)\) and \({\mathbf {u}}_{\cdot ,2}(0)\) obtained above, this yields:

$$\begin{aligned} {\mathbf {v}}_{\cdot ,1}(\Updelta t) = -\frac{1}{2u_0} \left[ \begin{array}{c}\Big ((2v^{*}_{3,2} + N^{\prime } + 1) C + v^{*}_{3,2} (d_{1,3} + d_{1,4} + d_{2,3} + d_{2,4}) D \Big )\Updelta t - 2 u_0 \\ \Big (B + (2v^{*}_{3,2} + N^{\prime } + 1) C + v^{*}_{3,2} (d_{1,3} + d_{1,4} + d_{2,3} + d_{2,4}) D\Big )\Updelta t \\ \Big ( 2v^{*}_{3,2}A + B + (2v^{*}_{3,2} + N^{\prime } + 1) C\Big )\Updelta t\\ \Big ( 2v^{*}_{3,2}A + B + (2v^{*}_{3,2} + N^{\prime } + 1) C\Big )\Updelta t\end{array}\right] \end{aligned}$$

Analogously,

$$\begin{aligned} {\mathbf {v}}_{\cdot ,2}(\Updelta t)&= g\Big ({\mathbf {u}}_{\cdot ,2}(0) - \big (\left( C{\mathbb {1}}{+}A\mathbf {I_2}{+}D(\mathbf {D_2}{+}(\mathbf {D_2})^{T})\right) {\mathbf {v}}_{\cdot ,1}(0) + \left( (B{+}C){\mathbb {1}}{-}B\mathbf {I}\right) {\mathbf {v}}_{\cdot ,2}(0) + C N^{\prime } \mathbf {1}\big ) \Updelta t \Big )\\&= -\frac{1}{2u_0}\left[ \begin{array}{c} (A + 2v^{*}_{3,2}B + (2 v^{*}_{3,2} + N^{\prime } + 1) C)\Updelta t\\ (A + 2v^{*}_{3,2}B + (2 v^{*}_{3,2} + N^{\prime } + 1) C)\Updelta t\\ (v^{*}_{3,2}B + (2v^{*}_{3,2} + N^{\prime } + 1) C + (d_{1,4} + d_{2,3}) D) \Updelta t - u_0 - g^{-1}(v^{*}_{3,2}) \\ (v^{*}_{3,2}B + (2v^{*}_{3,2} + N^{\prime } + 1) C + (d_{1,3} + d_{2,4}) D) \Updelta t - u_0 - g^{-1}(v^{*}_{3,2}) \end{array}\right] \end{aligned}$$

The saddle point is equidistant from all valid solutions (see Lemma 3). After projecting the saddle point \(\mathbf {v^{*}}\) onto one of its faces to obtain \({\mathbf {v}}(0)\), the resulting point is no longer an equilibrium point; however, the distances from \({\mathbf {v}}(0)\) to the two problem solutions remain equal. To determine to which solution the dynamic system converges when started at \({\mathbf {v}}(0)\), it is therefore enough to determine to which solution \({\mathbf {v}}(\Updelta t)\) is closer: the dynamic system will converge to that solution.

Let us assume that \({\mathbf {v}}(\Updelta t)\) is closer to the first solution; the case in which \({\mathbf {v}}(\Updelta t)\) is closer to the second solution is analogous. Writing the solutions in vector form, we obtain:

$$\begin{aligned} dist({\mathbf {v}}(\Updelta t),\mathbf {v^1}) < dist({\mathbf {v}}(\Updelta t),\mathbf {v^2}) \end{aligned}$$

Considering the squares of the distances to simplify the analysis:

$$\begin{aligned} dist({\mathbf {v}}(\Updelta t),\mathbf {v^1})^2 < dist({\mathbf {v}}(\Updelta t),\mathbf {v^2})^2 \end{aligned}$$

and taking into account that all the terms except the last two cancel:

$$\begin{aligned}&\left( -\frac{1}{2u_0} \big ((v^{*}_{3,2}B + (2v^{*}_{3,2} + N^{\prime } + 1) C + (d_{1,4} + d_{2,3}) D) \Updelta t - u_0 - g^{-1}(v^{*}_{3,2})\big ) - 1 \right) ^2 \\&\quad + \left( -\frac{1}{2u_0} \big ((v^{*}_{3,2}B + (2v^{*}_{3,2} + N^{\prime } + 1) C + (d_{1,3} + d_{2,4}) D) \Updelta t - u_0 - g^{-1}(v^{*}_{3,2})\big ) \right) ^2\\&\quad < \left( -\frac{1}{2u_0} \big ((v^{*}_{3,2}B + (2v^{*}_{3,2} + N^{\prime } + 1) C + (d_{1,4} + d_{2,3}) D) \Updelta t - u_0 - g^{-1}(v^{*}_{3,2})\big ) \right) ^2\\&\quad + \left( -\frac{1}{2u_0} \big ((v^{*}_{3,2}B + (2v^{*}_{3,2} + N^{\prime } + 1) C + (d_{1,3} + d_{2,4}) D) \Updelta t - u_0 - g^{-1}(v^{*}_{3,2})\big ) - 1 \right) ^2 \end{aligned}$$

The inequality \((\alpha -1)^2 + \beta ^2 < \alpha ^2 + (\beta -1)^2\) is satisfied if and only if \(\beta < \alpha \): expanding the squares, it reduces to \(-2\alpha < -2\beta \). This leads to the conclusion that the previous inequality is satisfied if and only if:

$$\begin{aligned} d_{1,4} + d_{2,3} < d_{1,3} + d_{2,4} \end{aligned}$$

which matches the inequality satisfied in a 2-opt swap when the tour that produces \(\mathbf {v^1}\) is shorter than the one produced by \(\mathbf {v^2}\) (see Remark 5 and Fig. 3, taking \(a=1\), \(b=2\), \(c=3\), \(d=4\)). Therefore, when the projection of the saddle point onto one of its faces (fixing the first city to be visited) is used as the starting point, the Hopfield model behaves exactly as a 2-opt swap. \(\square \)
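As an illustration of the theorem, the one-step argument above can be replayed numerically: build \({\mathbf {T}}\) and \({\mathbf {i}}^{b}\) for a concrete instance, take one Euler step of Eq. 1 from \({\mathbf {v}}(0)\), and check that \({\mathbf {v}}(\Updelta t)\) is closer to the state encoding the shorter tour. This is a simulation sketch, not the authors' MATLAB implementation; the distances, \(u_0\), \(\Updelta t\), the free parameter C and the setting for A, B, D and \(N^{\prime }\) are the same assumptions used in the checks above:

import numpy as np

# Distance matrix for cities {1,2,3,4}, chains [1,2] and [3,4] (illustrative)
d = np.array([[0.0, 1.0, 1.0, 2.0],
              [1.0, 0.0, 1.5, 3.0],
              [1.0, 1.5, 0.0, 1.0],
              [2.0, 3.0, 1.0, 0.0]])
dL, dU, C, u0, dt = 1.0, 3.0, 0.7, 0.5, 1e-4
A, B, D, Np = C + 3, C + 3 + dL / dU, 1 / dU, 2 + 3 / C   # assumed setting

ones, I = np.ones((4, 4)), np.eye(4)
I2 = np.kron(np.eye(2), np.ones((2, 2)))      # chain-block indicator I_2
J2 = I2 - I
D2 = (d * (I2 == 0)) @ J2                     # distances, zeroed within chains
off = C * ones + A * I2 + D * (D2 + D2.T)
diag = (B + C) * ones - B * I
T = -np.block([[diag, off], [off, diag]])     # weight matrix from the proof
ib = C * Np * np.ones(8)                      # bias vector

g = lambda u: np.clip(0.5 * (1 + u / u0), 0, 1)
g_inv = lambda v: np.clip(u0 * (2 * v - 1), -u0, u0)

vstar = (2 * C + 3) * dU / ((13 * C + 15) * dU + 3 * dL
                            + d[0, 2] + d[0, 3] + d[1, 2] + d[1, 3])
v0 = np.array([1, 0, 0, 0, 0, 0, vstar, vstar], dtype=float)   # projected V(0)
v_dt = g(g_inv(v0) + (T @ v0 + ib) * dt)      # one Euler step of Eq. 1

V1 = np.array([1, 0, 0, 0, 0, 0, 1, 0.])      # tour 1-2-3-4
V2 = np.array([1, 0, 0, 0, 0, 0, 0, 1.])      # tour 1-2-4-3
len1 = d[0, 1] + d[1, 2] + d[2, 3] + d[3, 0]  # length of 1-2-3-4-1
len2 = d[0, 1] + d[1, 3] + d[3, 2] + d[2, 0]  # length of 1-2-4-3-1
short, long_ = (V1, V2) if len1 < len2 else (V2, V1)
# The first step already moves toward the 2-opt (shorter-tour) solution:
assert np.linalg.norm(v_dt - short) < np.linalg.norm(v_dt - long_)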


Cite this article

García, L., Talaván, P.M. & Yáñez, J. The 2-opt behavior of the Hopfield Network applied to the TSP. Oper Res Int J 22, 1127–1155 (2022). https://doi.org/10.1007/s12351-020-00585-3
