1 Introduction

The Traveling Salesman/Salesperson Problem (TSP) is one of the best known and most widely investigated combinatorial optimization problems, with four famous books entirely devoted to its study (Lawler et al. 1985; Reinelt 1994; Gutin and Punnen 2006; Applegate et al. 2006). Thus, we refrain from giving extensive references and mainly refer to the treatment in Applegate et al. (2006). Given a complete graph \(G = (V, E)\) with \(|V| = n\) and \(|E| = m = n(n-1)/2\), and nonnegative distances \(d_e\) for each \(e \in E\), the TSP asks for a shortest tour with respect to the distances \(d_e\) that contains each vertex exactly once.

Let \(\delta (v):=\{e=(v,u)\in E {\mid } u \in V\}\) denote the set of all edges incident to \(v\in V\). Introducing a binary variable \(x_e\) for the possible inclusion of each edge \(e\in E\) in the tour, we obtain the following classical ILP formulation:

$$\begin{aligned}&\hbox {minimize} \quad \sum _{e \in E} d_e x_e \end{aligned}$$
(1)
$$\begin{aligned}&\quad \quad \hbox {s.t.} \quad \sum _{e \in \delta (v)}{x_e} = 2 \quad \forall \; v \in V, \end{aligned}$$
(2)
$$\begin{aligned}&\qquad \quad \quad \sum _{\begin{array}{c} {e=(u,v) \in E}\\ {u,v \in S} \end{array}}{x_e} \le |S| - 1 \quad \forall \;S \subset V, S \ne \emptyset , \end{aligned}$$
(3)
$$\begin{aligned}&\qquad \qquad \qquad \qquad \quad x_e \in \{0, 1\} \quad \forall \; e \in E \end{aligned}$$
(4)

Equation (1) defines the objective function, (2) is the degree equation for each vertex, (3) are the subtour elimination constraints (SEC), which forbid solutions consisting of several disconnected tours, and finally (4) defines the integrality constraints. Note also that some SEC are redundant: for the vertex sets \(S \subset V\), \(S \ne \emptyset \), and \(S^\prime = V \setminus S\) we get a pair of SEC that both enforce the connection of \(S\) and \(S^\prime \).

The established standard approach for solving the TSP to optimality, as pursued successfully during the last 30+ years, is branch-and-cut, which solves the LP-relaxation obtained by relaxing the integrality constraints (4) to \(x_e \in [0,1]\). In each iteration of the underlying branch-and-bound scheme cutting planes are generated, i.e. constraints that are violated by the current fractional solution, but not by any feasible integer solution. Since there is an exponential number of subsets \(S \subset V\) implying SEC (3), the computation starts with a small collection of subsets \(S \subset V\) (or none at all) and identifies violated SEC as cutting planes in the so-called separation problem. Moreover, a wide range of other cutting plane families has been developed in the literature, together with heuristic and exact algorithms to find them (see e.g. Applegate et al. 2006; Schrijver 2003, ch. 58). Also the undisputed champion among all TSP codes, the famous Concorde package (see Applegate et al. 2006), is based on this principle.

In this paper we introduce and examine another concept for solving the TSP. In Sect. 2 we introduce the basic idea of our approach. Some improvement strategies follow in Sect. 3, with our best approach presented in Sect. 3.4. Since the main contribution of this paper consists of computational experiments, we discuss them in detail in Sect. 4. The common details of all these tests are given in Sect. 4.1. In Sect. 5, we present some theoretical results and further empirical observations. Finally, we provide an “Appendix” with illustrations, graphs and two summarizing tables (Tables 5, 6).

2 General solution approach

Clearly, the performance of the above branch-and-cut approach depends crucially on the performance of the LP-solver used. Highly efficient LP-solvers have been available for quite some time, but ILP-solvers have also improved rapidly during the last decades and reached an impressive performance. This motivated the idea of a very simple approach for solving the TSP without using LP-relaxations explicitly.

The general approach works as follows (see Algorithm 1). First, we relax all SEC (3) from the model and solve the remaining ILP model (corresponding to a weighted 2-matching problem). Then we check if the obtained integer solution contains subtours. If not, the solution is an optimal TSP tour. Otherwise, we find all subtours in the integral solution (which can be done by a simple scan) and add the corresponding SEC to the model, each of them represented by the subset of vertices in the corresponding subtour. The resulting enlarged ILP model is solved again to optimality. Iterating this process clearly leads to an optimal TSP tour.

Algorithm 1 (pseudocode figure not reproduced here)
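To make the scheme concrete, the following C++ sketch (ours, not the authors' implementation) spells out the subtour scan of line 7; the ILP solve of line 3 is hidden behind a hypothetical routine solveRelaxedILP, and addSEC stands for a hypothetical routine appending a SEC to the model.

```cpp
#include <array>
#include <utility>
#include <vector>

// An integer solution satisfying the degree equations (2) is 2-regular, so it
// decomposes into vertex-disjoint cycles; adj[v] holds the two tour neighbors
// of vertex v.
using Solution = std::vector<std::array<int, 2>>;

// Extract all subtours of an integer solution by a simple scan (line 7).
std::vector<std::vector<int>> findSubtours(const Solution& adj) {
    const int n = static_cast<int>(adj.size());
    std::vector<bool> visited(n, false);
    std::vector<std::vector<int>> subtours;
    for (int start = 0; start < n; ++start) {
        if (visited[start]) continue;
        std::vector<int> cycle;
        int prev = -1, v = start;
        do {                                   // follow the unique cycle through 'start'
            visited[v] = true;
            cycle.push_back(v);
            const int next = (adj[v][0] == prev) ? adj[v][1] : adj[v][0];
            prev = v;
            v = next;
        } while (v != start);
        subtours.push_back(std::move(cycle));
    }
    return subtours;
}

// The main loop of Algorithm 1 then reads (hypothetical solver calls kept as
// comments so the sketch stays self-contained):
//   while (true) {
//       Solution sol = solveRelaxedILP(model);   // line 3
//       auto subtours = findSubtours(sol);       // line 7
//       if (subtours.size() == 1) break;         // a single tour: optimal
//       for (const auto& S : subtours) addSEC(model, S);
//   }
```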

Every execution of the ILP-solver (see line 3) will be called an iteration. We define the set of violated SEC as the set of all included SEC which were violated in some iteration (see line 7). Figure 3 in the “Appendix” shows a problem instance, and Figs. 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, and 17 illustrate the execution of the algorithm on this instance.

It should be pointed out that the main motivation for this framework is its simplicity. The separation of SEC for fractional solutions amounts to the solution of a max-flow or min-cut problem. Based on the procedure by Padberg and Rinaldi (1990), extensive work has been done to construct elaborate algorithms for performing this task efficiently. In contrast, violated SEC of integer solutions can be found by a simple scan. Moreover, we refrain from using any of the other additional inequalities known for classical branch-and-cut algorithms, which might also be used to speed up our approach, since we want to underline the strength of modern ILP-solvers in connection with a refined subtour selection process (see Sect. 3.4).

This approach for solving the TSP is clearly not new but has been available since the earliest ILP formulation going back to Dantzig et al. (1954) and can be seen as folklore nowadays. Several authors followed the concept of generating integer solutions for some kind of relaxation of an ILP formulation and iteratively adding violated integer SEC. However, it seems that the lack of fast ILP-solvers prohibited its direct application in computational studies, although it was used in an artistic context by Bosch (2008).

Miliotis (1976) also concentrated on generating integer SEC, but within a fractional LP framework. The classical paper by Crowder and Padberg (1980) applies the iterative generation of integer SEC as the second part of their algorithm, after generating fractional cutting planes in the first part to strengthen the LP-relaxation. They report that no more than three iterations of the ILP-solver for the strengthened model were necessary for test instances with up to 318 vertices. Grötschel and Holland (1991) also follow this direction of first improving the LP-model as much as possible, e.g. by running preprocessing, fixing certain variables and strengthening the LP-relaxation by different families of cutting planes, before generating integer subtours as a last step to find an optimal tour. It turns out that about half of their test instances never reach this last phase. In contrast, we stick to the pure ILP formulation without any previous modifications.

From a theoretical perspective, the generation of subtours involves a certain trade-off. For an instance \((G, d)\) there exists a minimal set of subtours \(\mathcal {S}^*\) such that the ILP model with only those SEC implied by \(\mathcal {S}^*\) yields an overall feasible, and thus optimal, solution. In practice, however, we can only find collections of subtours larger than \(\mathcal {S}^*\) by adding subtours in every iteration until we reach optimality. Thus, we can either collect as many subtours as possible in each iteration, which may decrease the number of iterations but increases the running time of the ILP-solver because of the larger number of constraints; or we can try to control the number of SEC added to the model by judging their relevance and possibly removing some of them later, which keeps the ILP model smaller but may increase the number of iterations. In the following we describe various strategies to find the “right” subtours.

2.1 Representation of subtour elimination constraints

The SEC (3) can be expressed equivalently by the following cut constraints:

$$\begin{aligned} \sum _{\begin{array}{c} {e=(u,v) \in E}\\ {u \in S, v \not \in S} \end{array}}{x_e} \ge 2 \qquad \forall \;S \subset V, \; S \ne \emptyset \end{aligned}$$
(5)

Although mathematically equivalent, the two ways of forbidding a subtour in S may result in quite different performances of the ILP-solver. For more details about different ILP-models see Öncan et al. (2009).

It has been observed that, in general, the running time for solving an ILP increases with the number of non-zero entries of the constraint matrix. Hence, we also tested a hybrid variant which chooses between (3) and (5) by picking for each considered set \(S\) the version with the smaller number of non-zero coefficients on the left-hand side: (3) contains \(|S|(|S|-1)/2\) non-zeros and (5) contains \(|S|(n-|S|)\), and the two values balance exactly at \(|S| = (2n+1)/3\). This leads to the following rule:

$$\begin{aligned} \begin{array}{ll} \displaystyle \mathop {\sum }\limits _{\begin{array}{c} {e=(u,v) \in E}\\ {u,v \in S} \end{array}}{x_e} \ \le \ |S| - 1\\ \displaystyle \mathop {\sum }\limits _{\begin{array}{c} {e=(u,v) \in E}\\ {u \in S, v \not \in S} \end{array}}{x_e} \ \ge \ 2 \end{array}\quad \forall \;\; S \subset V, \; S \ne \emptyset \quad \begin{array}{ll} \hbox {if}&{} |S| \le \displaystyle \frac{2 n + 1}{3}\\ \hbox {if}&{} |S| > \displaystyle \frac{2 n + 1}{3} \end{array} \end{aligned}$$
(6)
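This selection rule amounts to a one-line test; a minimal C++ sketch (function name ours):

```cpp
#include <cstddef>

// Choose the sparser SEC representation for a set S of given size:
// form (3) has |S|(|S|-1)/2 non-zero coefficients, form (5) has |S|(n-|S|);
// (3) is sparser exactly when 3|S| <= 2n + 1, i.e. |S| <= (2n+1)/3.
bool usePackingForm(std::size_t setSize, std::size_t n) {
    return 3 * setSize <= 2 * n + 1;   // true: use SEC (3), false: use cut (5)
}
```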

We performed computational tests of our approach to compare the three representations of SEC, namely (3), (5) and (6), and list the results in Table 1. Technical details about the setup of the experiments can be found in Sect. 4.1.

Table 1 Comparison of the behavior of the algorithm for different representations of SEC

It turned out that the three versions sometimes (but not always) lead to huge differences in running time (up to a factor of 5). This is an interesting experience that should also be taken into consideration in other computational studies. From our limited experiments it could be seen that version (5) was inferior most of the time (sometimes with huge deviations), whereas only a small dominance of the hybrid variant (6) over the standard version (3) could be observed. This is due to the small size of most subtours occurring during the solution process (the representation (3) coincides with the representation (6) in these cases). But since bigger subtours can also occur (mostly in the last iterations), we use the representation (6) for all further computational tests.

3 Generation of subtours

As pointed out above, the focus of our attention lies on the generation and selection of a “good” set of SEC: it should include as many as possible of those required by the ILP-solver to determine an optimal solution which is also feasible for the TSP, but as few as possible of all others, which only slow down the performance of the ILP-solver.

Trying to strike a balance between these two goals we followed several directions, some of them motivated by theoretical results, others by visually studying plots of all subtours generated during the execution of Algorithm 1.

3.1 Subtour elimination constraints from suboptimal integer solutions

Many ILP-solvers report all feasible integer solutions found during the underlying branch-and-bound process. In this case, we can also add all corresponding SEC to the model. These constraints can be considered simply as part of the set of violated SEC. Not surprisingly, these additional constraints always lead to a decrease in the number of iterations for the overall computation and to an increase in the total number of SEC generated before reaching optimality (see Table 2). While the time consumed in each iteration is likely to increase, it can also be observed that the overall running time is often decreased significantly by adding all detected subtours to the model. On the other hand, for the smaller number of instances where this is not the case, only relatively modest increases in running time are incurred. Therefore, we stick to adding all detected SEC for the remainder of the paper. The algorithm in this form will be called BasicIntegerTSP.

Table 2 Using all constraints generated from all feasible solutions found during the solving process versus using only the constraints generated from the final ILP solutions of each iteration

3.2 Clustering into subproblems

It can be observed that many subtours have a local context: a small subset of vertices separated from the remaining vertices by a reasonably large distance will always be connected by one or more subtours, independently of the size of the remaining graph (see also Figs. 3, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, and 17 in the “Appendix”). Thus, we aim to identify clusters of vertices and run BasicIntegerTSP on the induced subgraphs, with the aim of generating, within a very small running time, the same subtours that occur in the execution of the approach on the full graph. Furthermore, we can use the optimal tour of every cluster to generate a corresponding SEC for the original instance and thus enforce a connection to the remainder of the graph.

For our purposes the clustering algorithm should fulfill the following properties:

  • Clustering quality: The obtained clusters should correspond well to the distance structure of the given graph, as in a classical geographic clustering.

  • Running time: It should be low relative to the running time required for the main part of the algorithm.

  • Cluster size: If clusters are too large, solving the TSP on them takes too much time. If clusters are too small, only few SEC are generated.

Clearly, there is a huge body of literature on clustering algorithms (see e.g. Jain and Dubes 1988), and no single choice for a given application will satisfy all our objectives. Our main restriction was the requirement of using a clustering algorithm which also works if the vertices are not embeddable in Euclidean space, i.e. if only arbitrary edge distances are given. Simplicity being another goal, we settled for the following approach described in Algorithm 2:

Algorithm 2 (pseudocode figure not reproduced here)

First, we fix the number of clusters c with \(1 \le c \le n\) and sort the edges in increasing order of distances (see line 1). Then we start with the empty graph \(G^\prime = (V^\prime , E^\prime )\) (line 2) containing only isolated vertices (i.e. n clusters) and iteratively add edges in increasing order of distances until the desired number of clusters c is reached (see lines 5 and 6). In each iteration the current clustering is given by the connected components of the current graph (see line 7). We denote this clustering approach by \(C {\mid } c\). Note that this clustering algorithm does not make any assumptions about the underlying TSP instance and does not exploit any structural properties of the metric TSP or the Euclidean TSP.
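A minimal C++ sketch of Algorithm 2 (ours; the instance is assumed to be given as an edge list):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

struct Edge { int u, v; double d; };

// Single-linkage style clustering: add edges in increasing order of distance
// until only c connected components remain; returns a cluster label
// (component root) for every vertex.
std::vector<int> clusterByComponents(int n, std::vector<Edge> edges, int c) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.d < b.d; });
    std::vector<int> parent(n);
    std::iota(parent.begin(), parent.end(), 0);
    auto find = [&](int x) {                   // union-find with path halving
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    };
    int components = n;
    for (const Edge& e : edges) {
        if (components == c) break;            // desired cluster number reached
        const int ru = find(e.u), rv = find(e.v);
        if (ru != rv) { parent[ru] = rv; --components; }
    }
    std::vector<int> label(n);
    for (int v = 0; v < n; ++v) label[v] = find(v);
    return label;
}
```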

It was observed in our computational experiments that the performance of the TSP algorithm is not very sensitive to small changes of the cluster number c and thus a rough estimation of c is sufficient. The behavior of the running time as a function of c can be found for particular test instances in Fig. 19, see Sect. 4.2 for further discussion.

3.3 Restricted clustering

Although the clustering algorithm (see Algorithm 2) decreases the computational time of the whole solution process for some test instances, we observed a certain shortcoming: clusters consisting of an isolated vertex or of only two vertices may easily occur. Clearly, such clusters do not contribute any subtour on their own. Moreover, the degree constraints (2) guarantee that each such vertex is connected to the remainder of the graph in any case. Since BasicIntegerTSP enforces the connection of these vertices to some “neighboring” cluster, the clustering yields different subtours for these neighbors and not the violated SEC arising in BasicIntegerTSP.

To avoid this situation, we impose a minimum cluster size of 3. An easy way to do so is as follows: after reaching the c clusters, continue to add edges in increasing order of distances (as before), but add an edge only if it is incident to a vertex in a connected component (i.e. cluster) of size one or two. This basically means that we merge these small clusters with their nearest neighbor with respect to the current clustering. Note that this is a step-by-step process: it can happen that two clusters of size 1 merge first before the resulting pair is merged with its nearest neighboring cluster. The resulting restricted clustering approach will be denoted by \(RC_3 {\mid } c\).
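The post-phase can be sketched as a continuation of the previous clustering sketch (it reuses the Edge struct and the union-find forest, now extended by a size array for component sizes):

```cpp
#include <vector>

// RC3|c post-phase: scan the remaining edges in increasing distance order and
// merge only if one endpoint lies in a component of size one or two.
// 'parent' is the union-find forest from the previous sketch; size[r] holds
// the size of the component rooted at r.
void enforceMinClusterSize(std::vector<int>& parent, std::vector<int>& size,
                           const std::vector<Edge>& remainingEdges) {
    auto find = [&](int x) {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    };
    for (const Edge& e : remainingEdges) {
        const int ru = find(e.u), rv = find(e.v);
        if (ru == rv) continue;
        if (size[ru] <= 2 || size[rv] <= 2) {  // merge an undersized cluster
            parent[ru] = rv;
            size[rv] += size[ru];
        }
    }
}
```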

Contrary to our expectations, the computational experiments (see Sect. 4) show that this approach often affects the algorithm in the opposite way (see also Fig. 19 and Table 6 in the “Appendix”) when compared for the same original cluster size c.

Surprisingly, we observed an interesting behavior for \(c \approx n\). In this case, the main clustering algorithm (see Algorithm 2) has almost no effect, but the “post-phase” which enforces the minimum cluster size yields a different clustering on its own. This variant often beats the previous standard clustering algorithm with \(c \ll n\) (see Table 6 in the “Appendix”). Note that we cannot fix the actual number of clusters \(c^\prime \) in this case. However, our computational results show that \(c^\prime \approx \frac{n}{5}\) usually holds if the points are distributed relatively uniformly in the Euclidean plane and the distances correspond to the Euclidean metric (see Fig. 18 in the “Appendix”).

3.4 Hierarchical clustering

It was pointed out in Sect. 3.2 that the number of clusters c is chosen as an input parameter. The computational experiments in Sect. 4.2 give some indication of the behavior of Algorithm 2 for different values of c, but fail to provide a clear guideline for the selection of c. Moreover, from graphical inspection of test instances, we got the impression that a larger number of relevant SEC might be obtained by considering more clusters of moderate size. In the following we present an idea that takes both of these aspects into account.

In our hierarchical clustering process denoted by HC we do not set a cluster number c, but let the clustering algorithm continue until all vertices are connected (this corresponds to \(c=1\)). The resulting clustering process can be represented by a binary clustering tree which is constructed in a bottom-up way. The leaves of the tree represent isolated vertices, i.e. the n trivial clusters given at the beginning of the clustering algorithm. Whenever two clusters are merged by the addition of an edge, the two corresponding tree vertices are connected to a new common parent vertex in the tree representing the new cluster. At the end of this process we reach the root of the clustering tree corresponding to the complete vertex set. An example of such a clustering tree is shown in Figs. 1 and 2.

Now we traverse the tree in a bottom-up fashion from the leaves to the root. At each tree vertex we solve the TSP for the associated cluster, after both of its child vertices have been resolved. The crucial aspect of our procedure is the following: all SEC generated during such a TSP solution for a certain cluster are propagated and added to the ILP model used for solving the TSP instance of its parent cluster. Obviously, at the root vertex the full TSP is solved.

The advantage of this strategy is the step-by-step construction of the violated SEC. A disadvantage is that many constraints may make sense in the local context but not in the global one, so too many constraints could be generated in this way. Naturally, one pays for the additional SEC by an increase in computation time required to solve a large number of (mostly small) TSP instances. To avoid solving TSPs of the same order of magnitude as the original instance, it makes sense to impose an upper bound u on the maximum cluster size. This means that the clustering tree is partitioned into several subtrees by removing all tree vertices corresponding to clusters of size greater than u. After resolving all these subtrees we collect all generated SEC and add them to the ILP model for the originally given TSP. This approach will be denoted by \(HC {\mid } u\). Computational experiments with various choices of u indicated that \(u = 4 \frac{n}{\log _2{n}}\) is a good upper bound.
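A structural C++ sketch of \(HC {\mid } u\) (ours; Edge as in the clustering sketch above; solveTSPWithSEC is a hypothetical hook that runs BasicIntegerTSP on one cluster, seeded with the SEC pool, and appends all newly generated SEC to that pool):

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

struct ClusterNode {                            // one vertex of the clustering tree
    std::vector<int> vertices;                  // vertex set of the cluster
    ClusterNode *left = nullptr, *right = nullptr;
};

using SEC = std::vector<int>;                   // a SEC is identified by its set S

void solveTSPWithSEC(const std::vector<int>& cluster, std::vector<SEC>& pool) {
    // placeholder: run BasicIntegerTSP on 'cluster', append new SEC to 'pool'
}

// Build the clustering tree by running the merge process of Algorithm 2 down
// to c = 1; a new parent node is created whenever two components merge.
// (Ownership/cleanup of the raw pointers is omitted for brevity.)
ClusterNode* buildClusterTree(int n, std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.d < b.d; });
    std::vector<int> parent(n);
    std::iota(parent.begin(), parent.end(), 0);
    auto find = [&](int x) {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    };
    std::vector<ClusterNode*> node(n);          // current tree node of each root
    for (int v = 0; v < n; ++v) {
        node[v] = new ClusterNode;
        node[v]->vertices = {v};                // leaves: the n trivial clusters
    }
    for (const Edge& e : edges) {
        const int ru = find(e.u), rv = find(e.v);
        if (ru == rv) continue;
        auto* p = new ClusterNode;              // parent of the two merged clusters
        p->left = node[ru];
        p->right = node[rv];
        p->vertices = node[ru]->vertices;
        p->vertices.insert(p->vertices.end(),
                           node[rv]->vertices.begin(), node[rv]->vertices.end());
        parent[ru] = rv;
        node[rv] = p;
    }
    return node[find(0)];                       // root: the complete vertex set
}

// Bottom-up traversal with upper bound u: clusters larger than u are skipped
// (the tree is cut there), but their subtrees are still resolved and their
// SEC collected; the original TSP is finally solved with everything in 'pool'.
void solveBottomUp(ClusterNode* nd, std::vector<SEC>& pool, std::size_t u) {
    if (nd == nullptr) return;
    solveBottomUp(nd->left, pool, u);
    solveBottomUp(nd->right, pool, u);
    if (nd->vertices.size() >= 3 && nd->vertices.size() <= u)
        solveTSPWithSEC(nd->vertices, pool);    // SEC propagate to the parent
}
```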

Fig. 1
figure 1

Example illustrating the hierarchical clustering: vertices of the TSP instance. Distances between every two vertices correspond to their respective Euclidean distances in this example

Fig. 2
figure 2

Example illustrating the hierarchical clustering: clustering tree

Let us take a closer look at the problem of including too many SEC which are redundant in the global graph context. Of course, the theoretically “best” way would be to check which of the propagated SEC were not used during the runs of the ILP solver and drop them. To do this, it would be necessary to obtain this information from the ILP solver, which is often not possible.

However, we can try to approximately identify subtours which are not only locally relevant in the following way: all SEC generated in a certain tree vertex, i.e. for a certain cluster, are marked as considered SEC. Then we solve the TSP for the cluster of its parent vertex in the tree without using the subtours marked as considered. If we generate such a considered subtour again during the solution of the parent vertex, we take this as an indicator of global significance and add the constraint permanently for all following supersets of this cluster. If the upper bound u is set, we also take all SEC found in the biggest solved clusters. This approach will be denoted by \(\textit{HCD} {\mid } u\).
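The dropping rule can be sketched as a small bookkeeping structure (ours, not the authors' code): every SEC reported while solving a cluster starts as considered; only if it is generated again at the parent is it promoted to permanent and propagated further.

```cpp
#include <set>
#include <vector>

using SEC = std::vector<int>;                  // vertex set S, stored sorted

struct SECPool {
    std::set<SEC> permanent;                   // propagated to all parent clusters
    std::set<SEC> considered;                  // seen once, not yet confirmed

    void report(const SEC& s) {                // called for every generated SEC
        if (considered.erase(s) > 0)           // generated again: globally relevant
            permanent.insert(s);
        else if (permanent.count(s) == 0)
            considered.insert(s);              // first sighting
    }
};
```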

Of course, this is only a heuristic rule and one can easily find examples where this prediction of a subtour’s relevance fails, but our experiments indicate that \(\textit{HCD} {\mid } 4 n / \log _2{n}\) is the best approach we considered. A comparison with other hierarchical clustering methods for all test instances can be found in Table 5 in the “Appendix”. It can be seen that without an upper bound we are often not able to find the solution at all (under the time and memory constraints imposed on the computational experiments). The third and fourth columns compare two approaches both using the upper bound \(u = 4 \frac{n}{\log _2{n}}\), where the former collects all detected SEC and the latter allows dropping those which seem to be relevant only in a local context. Both methods beat BasicIntegerTSP (for a comparison of this approach with the other presented algorithms see the computational experiments in Sect. 4).

4 Computational experiments

In the following the computational experiments and their results will be discussed. Additional illustrative material can be found in an accompanying technical report, Pferschy and Staněk (2015), which is an extended version of this paper.

4.1 Setup of the computational experiments

All tests were run on an Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz with 16 GB RAM under Linux, and all programs were implemented in C++ using the SCIP MIP-solver (see Achterberg 2009) together with CPLEX as LP-solver. It has often been discussed in the literature (see e.g. Naddef and Thienel 2002) and in personal communications that ILP-solvers are relatively unrobust and often show high variations in their running time performance, even if the same instance is repeatedly run on the same hardware and software environment. Our first test runs also exhibited deviations up to a factor of 2 when identical tests were repeated. Thus we took special care to guarantee the relative reproducibility of the computational experiments: no additional swap memory was made available during the tests, only one thread was used and no other parallel user processes were allowed. This leads to a high degree of reproducibility in our experiments. However, this issue makes a comparison to other simple approaches, which were tested on other computers under different hardware and software conditions, extremely difficult.

We used two groups of test instances: The first group is taken from the well-known TSPLIB95 (Reinelt 1995), which contains the established benchmarks for TSP and related problems. From the collection of instances we chose all those with (i) at least 150 and at most 1000 vertices and (ii) which could be solved in at most 12 hours by our BasicIntegerTSP. It turned out that 25 instances of the TSPLIB95 fall into this category (see Table 6), the largest having 783 vertices.

We also observed some drawbacks of these instances: most of them (23 out of 25) are defined as point sets in the Euclidean plane with distances corresponding to the Euclidean metric, or as sets of geographical cities, i.e. points on a sphere. Moreover, they often contain substructures like meshes or sets of collinear points, and finally, since all distances are rounded to the nearest integer, many instances have multiple optimal solutions. These instances are relatively unstable with respect to solution time, number of iterations, and (important for our approach) the cardinality of the set of violated SEC. For our approach, instances with a mesh geometry (e.g. ts225 from TSPLIB95) were especially prone to unstable behavior, such as widely varying running times for minor changes in the parameter settings. This seems to be due to the fact that these instances contain many 2-matchings with the same objective function value, and thus the search process for a feasible TSP tour can vary widely (for more details see Pferschy and Staněk 2015).

In order to provide further comparisons, we also defined a set of instances based on random Euclidean graphs: in the unit square \([0, 1]^2\) we chose n uniformly distributed points and defined the distance between every two vertices as their respective Euclidean distance. These random Euclidean instances eliminate the potential influence of substructures and have a unique optimal solution in all stages of the solving process. We created 40 such instances named RE_X_n, where \(n \in \{150, 200, 250, \ldots , 500\}\) indicates the number of vertices and \(X \in \{A, B, C, D, E\}\).

The running times for our test instances, most of them containing between 150 and 500 vertices, often amounted to several hours. Since we tested many different variants and configurations of our approach, we selected a subset of these test instances to obtain faster answers for determining the best algorithm settings for use in the final tests. This subset contains 12 (of the 25) TSPLIB instances and one random instance for every number of vertices n (see e.g. Table 1).

All our running time tables report the name of the instance, the running time (t (s)) in wall-clock seconds (rounded down to the nearest integer), the number of iterations (#iter), i.e. the number of calls to the ILP-solver in the main part of our algorithm (without the TSP solutions for the clusters), and the number of SEC (#SEC) added to the ILP model in the last iteration, i.e. the number of constraints of the model which yielded an optimal TSP solution. We often compare two approaches in a table by the mean ratio: for each instance we compute the quotient of the respective column value (t (s), #iter or #SEC) of the compared method over that of the first, “reference” method, and then report the arithmetic mean of these quotients, as formalized below.
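Stated as a formula (notation ours): if \(q_i^{\mathrm{cmp}}\) and \(q_i^{\mathrm{ref}}\) denote the column values of the compared and the reference method on instance i, the mean ratio over k common instances is

$$\begin{aligned} \bar{r} := \frac{1}{k} \sum _{i = 1}^{k} \frac{q_i^{\mathrm {cmp}}}{q_i^{\mathrm {ref}}}. \end{aligned}$$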

4.2 Computational details for selected examples

Let us now take a closer look at two instances in detail. While this serves only as an illustration, we studied many such cases visually during the development of the clustering approach to gain better insight into the structure of subtours generated by BasicIntegerTSP.

We selected the instances kroB150 and u159, whose vertices are depicted in Figs. 4 and 5 in the “Appendix”. Both instances consist of points in the Euclidean plane, with the distances between every two vertices corresponding to their respective Euclidean distances. However, they represent two very different instance types: kroB150 consists of relatively uniformly distributed points, while u159 is more structured and contains e.g. mesh substructures, which are the worst setting for our algorithm (recall Sect. 4.1).

Figure 19 in the “Appendix” illustrates the behavior of the running time t in seconds as a function of the parameter c for the instances kroB150 and u159. The full lines correspond to the standard clustering approach \(C {\mid } c\) described in Sect. 3.2 (see Algorithm 2), while the dashed lines correspond to the restricted clustering \(RC_3 {\mid } c\) of Sect. 3.3 with minimum cluster size 3. The standard BasicIntegerTSP without clustering arises for \(c=1\).

Instance kroB150 consists of relatively uniformly distributed points in the Euclidean plane, but has a specific property: by using Algorithm 2 we can observe the occurrence of two main components already for relatively small cluster numbers c (from \(c = 6\) on). This behavior is rather atypical for random Euclidean graphs, cf. (Penrose 2003, ch. 13), but it provides an advantage for our approach, since we do not have to solve cluster instances of the same order of magnitude as the original graph but obtain several clusters of moderate size even for small cluster numbers c.

Considering the standard clustering approach (Algorithm 2) in Fig. 19, upper graph, it can be seen that only a small improvement occurs for c between 2 and 5. Looking at the corresponding clusterings in detail, it turns out that in these cases there exists only one “giant connected component” and all other clusters have size 1. This structure also implies that in the restricted clustering these isolated vertices are merged with the giant component, so the effect of clustering is lost completely. For larger cluster numbers c, a considerable speedup is obtained, with some variation, but more or less in the same range for almost all values of \(c\ge 6\) (in fact, the giant component splits in these cases). Moreover, the restricted clustering performs roughly as well as the standard clustering for \(c \ge 6\).

Instance u159 is much more structured and has many collinear vertices. Here we can observe a different behavior: while the standard clustering is actually beaten by BasicIntegerTSP for smaller cluster numbers and has a more or less similar performance for larger cluster numbers, the restricted clustering is almost consistently better than the other two approaches. For c between 2 and 10 there exists a large component containing many mesh substructures which consumes as much computation time as the whole instance.

These two instances give some indication of how to characterize “good” instances for our algorithm: They should

  • consist of clearly separated clusters, and

  • not contain mesh substructures or collinear vertices.

4.3 General computational results

A summary of the computational results for BasicIntegerTSP and the most promising variants of clustering-based subtour generation can be found in Table 6. For random Euclidean instances we report only the mean values over all five instances of the same size (detailed results for all random Euclidean instances can be found in our accompanying technical report, Pferschy and Staněk 2015). It turns out that \(\textit{HCD} {\mid } 4 \frac{n}{\log _2{n}}\), i.e. the hierarchical clustering approach combined with dropping SEC and fixing them only if they are generated again in the subsequent iteration, with the upper bound \(u = 4 \frac{n}{\log _2{n}}\) on the maximum cluster size, gives the best overall performance. Different behavior can be observed for instances taken from the TSPLIB and for random Euclidean instances. On the TSPLIB instances the algorithm \(\textit{HCD} {\mid } 4 \frac{n}{\log _2{n}}\) is on average about 20% faster than pure BasicIntegerTSP and beats the other clustering-based approaches on most instances. In those cases where it is not the best choice, it is usually not far behind.

As already mentioned, the best results are obtained with \(\textit{HCD} {\mid } 4 \frac{n}{\log _2{n}}\) for instances with a strong cluster structure and without mesh substructures (e.g. pr299). For instances with mesh substructures it is difficult to find an optimal 2-matching which is also a TSP tour. For random Euclidean instances the results are less clear, but the approaches with a fixed number of clusters seem to be better than the hierarchical ones.

It was a main goal of this study to find a large number of “good” SEC, i.e. subtours that are present in the last iteration of the ILP model of BasicIntegerTSP. Therefore, we show the potential and limitations of our approach in reaching this goal. In particular, we report the relation between the set \(S_1\), consisting of all subtours generated by running a hierarchical clustering algorithm with an upper bound u (set, as in the computational tests, to \(u = 4 \frac{n}{\log _2{n}}\)) before solving the original problem (i.e. the root vertex), and the set \(S_2\), containing only the SEC included in the final ILP model of BasicIntegerTSP. We tested the hierarchical clustering with and without the dropping of non-repeated subtours.

There are two aspects we want to describe. First, we want to check whether \(S_1\) contains a relevant proportion of “useful” subtour constraints, i.e. constraints also included in \(S_2\), or whether \(S_1\) contains “mostly useless” subtours. Therefore, we report the proportion of used subtours defined as

$$\begin{aligned} p_{used} :=\frac{\left| S_1\cap S_2\right| }{\left| S_1\right| }. \end{aligned}$$
(7)

Secondly, we want to find out to what extent it is possible to find the “right” subtours with our approach. Hence, we report the proportion of covered subtours defined as

$$\begin{aligned} p_{cov} :=\frac{\left| S_1 \cap S_2\right| }{\left| S_2\right| }. \end{aligned}$$
(8)

The values of \(p_{used}\) and \(p_{cov}\) are given in Table 3. It can be seen that empirically about 26–31% (\(p_{cov}\)) of all required violated SEC can be found. If SEC are allowed to be dropped, we find fewer such constraints, but our choice has a better quality (\(p_{cov}\) is smaller, but \(p_{used}\) is larger), i.e. the solver does not have to work with a large number of constraints which only slow down the solving process and are not necessary to reach an optimal solution.

Table 3 Proportion of used and proportion of covered subtours for our hierarchical clustering approaches with the upper bound \(u = 4 \frac{n}{\log _2{n}}\) which (i) does not allow (\(\textit{HC} {\mid } 4 \frac{n}{\log _2{n}}\)) and which (ii) does allow (\(\textit{HCD} {\mid } 4 \frac{n}{\log _2{n}}\)) to drop the unused SEC

Furthermore, we can observe a relatively big difference between the values of the proportion of used SEC (\(p_{used}\)) for the TSPLIB instances and for the random Euclidean instances if the dropping of redundant constraints is allowed.

4.4 Adding a starting heuristic

Of course, there are many possibilities of adding improvements to our basic approach. Lower bounds and heuristics can be introduced, branching rules can be specified, or cutting planes can be generated. We did not pursue these possibilities since we want to focus on the simplicity of the approach. Moreover, we wanted to take the ILP solver as a “black box” and not interfere with its execution.

Just as an example which immediately comes to mind, we added a starting heuristic to provide a reasonably good TSP solution as a starting solution for the ILP solver. We used the improved version of the classical Lin–Kernighan heuristic in the implementation of Helsgaun (2008). The computational results reported in Table 4 show that a considerable speedup (roughly a factor of 3, sometimes much more) can be obtained in this way.

Table 4 Results for BasicIntegerTSP used without/with the Lin–Kernighan heuristic for generating an initial solution: Using the starting heuristic yields faster running times for all instances

5 Some theoretical results and further empirical observations

Although our work mainly aims at computational experiments, we also tried to analyze BasicIntegerTSP from a theoretical point of view. In particular, we studied the expected behavior on random Euclidean instances and tried to characterize the expected cardinality of the minimal set of required subtours \(\mathcal {S}^*\) as defined in Sect. 2. It is well known that no polynomially bounded representation of the TSP polytope can be found, and there also exist instances based on a mesh structure for which \(\left| \mathcal {S}^*\right| \) has exponential size, but the question of the expected size of \(\left| \mathcal {S}^*\right| \) for random Euclidean instances, and thus of the expected number of iterations of our solution algorithm, remains an interesting open problem.

We started with extensive computational tests, some of them presented in Figs. 20 and 21 in the “Appendix”, to gain empirical evidence on this question. The upper graph in Fig. 20 illustrates the mean number of iterations needed by BasicIntegerTSP to reach optimality for different numbers of vertices n (we evaluated 100 random Euclidean instances for every value of n). The lower graph of Fig. 20 shows the mean length of the optimal TSP tour and of the optimal 2-matching (i.e. the objective value after solving the ILP in the first iteration) in the same setting.

It was proven by Beardwood et al. (1959) that the expected length of an optimal TSP tour is asymptotically \(\beta \sqrt{n}\), where \(\beta \) is a constant. This result was later generalized to other settings, and further properties of the square root asymptotics were identified by Rhee (1993) and Yukich (1998). We used these properties to prove the square root asymptotics also for the 2-matching problem (cf. Fig. 20, lower graph, dashed).

Definition 1

[complete convergence, Yukich (1998)] A sequence of random variables \(X_n\), \(n \ge 1\), converges completely (c.c.) to a constant C if and only if for all \(\varepsilon > 0\) we have

$$\begin{aligned} \sum _{n = 1}^{\infty }{\mathbb {P}\left[ |X_n - C| > \varepsilon \right] } < \infty . \end{aligned}$$
(9)

Theorem 1

Let \(G = (V, E)\) be a random Euclidean graph with \(n = |V|\) vertices and let \(d :E \rightarrow \mathbb {R}^+_0\) be the Euclidean distance function. Furthermore, let \(M_2(G, d)\) be the length of an optimal 2-matching. Then

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{M_2(G, d)}{\sqrt{n}} = \alpha \ \hbox { c.c., where } \alpha > 0. \end{aligned}$$
(10)

Proof

See our accompanying technical report, Pferschy and Staněk (2015). \(\square \)

Based on these results, the following idea might lead to a proof that the expected cardinality of \(\mathcal {S}^*\) is polynomially bounded: after the first iteration of the algorithm we have a solution possibly consisting of several separate subtours of total asymptotic length \(\alpha \sqrt{n} = \alpha _1 \sqrt{n}\). If there are subtours, we add SEC (in fact at most \(\lfloor \frac{n}{3}\rfloor \)), resolve the enlarged ILP and get another solution whose asymptotic length is \(\alpha _2 \sqrt{n}\). By proving that the expected length of the sequence \(\alpha = \alpha _1, \ldots , \alpha _{\#iter} = \beta \) is polynomially bounded in n, one would obtain that \(\mathbb {E}\left[ \left| \mathcal {S}^*\right| \right] \) is also polynomially bounded, since only polynomially many subtours are added in each iteration. Our intuition and the computational tests illustrated in Fig. 20, upper graph, indicate that the length of this sequence could be proportional to \(\sqrt{n}\) as well. Unfortunately, we could not find suitable techniques to prove this step.

A different approach is illustrated in Fig. 21, where we examine the mean number of subtours contained in every iteration. In particular, we chose \(n = 60\), generated 100,000 random Euclidean instances and sorted them by the number of iterations #iter required by BasicIntegerTSP. The most frequent number of ILP solver runs was 7 (dotted line), but we summarize the results for 5 (full line), 6 (dashed), 8 (loosely dashed) and 9 (loosely dotted) necessary runs in this figure as well. For every iteration of every class (with respect to the number of involved ILP runs) we computed the mean number of subtours contained in the respective solutions. As can be expected, these numbers of subtours decrease (on average) over the iterations. To allow a better comparison of this behavior for different numbers of iterations we scaled the iteration numbers into the interval [0, 10] (horizontal axis of Fig. 21). It can be seen that the average number of subtours contained in an optimal 2-matching (first iteration) is about 9.2, while in the last iteration we trivially have only one tour. Between these endpoints we first observe a mostly convex behavior; only in the last step before reaching the optimal TSP tour a sudden drop occurs. It would be interesting to derive an asymptotic description of these curves. An intuitive guess would point to an exponential function, but so far we could not find a theoretical justification for this claim.

6 Conclusions

In this paper we provide a “test of concept” of a very simple approach to solve TSP instances of medium size to optimality by exploiting the power of current ILP solvers. The approach consists of iteratively solving ILP models with relaxed SEC to integer optimality. Then it is easy to find integral subtours and add the corresponding SEC to the ILP model. Iterating this process until no more subtours are contained in the solution obviously solves the TSP to optimality. It would also be possible to treat subtour elimination constraints as so-called lazy cuts and invoke them only when a new integer solution is found. However, since the number of generated constraints remains moderate in our test instances we found no advantage in pursuing this option.

In this work we focus on the structure of SEC and how to find a “good” set of SEC in reasonable time. Therefore, we aim to identify the local structure of the vertices of a given TSP instance by running a clustering algorithm. Based on empirical observations and results from random graph theory we further extend this clustering-based approach and develop a hierarchical clustering method with a mechanism to identify SEC as “relevant”, if they appear in consecutive iterations of the algorithm.

We mostly refrained from adding additional features which are highly likely to improve the performance considerably, such as starting heuristics (cf. Sect. 4.4), lower bounds or additional cutting planes. In the future it might be interesting to explore the limits of performance one can reach with a purely integer linear programming approach by adding these improvements. Clearly, we cannot expect such a basic approach to compete with the performance of Concorde (Applegate et al. 2006), which has been developed over many years and basically includes all theoretical and technical developments known so far. However, it turns out that most of the standard benchmark instances with up to 400 vertices can be solved within a few minutes by this purely integer strategy.

Finally, we briefly discussed some theoretical aspects of random Euclidean graphs which could lead to polyhedral results in the expected case.