Generating subtour elimination constraints for the TSP from pure integer solutions

The traveling salesman problem (TSP) is one of the most prominent combinatorial optimization problems. Given a complete graph G = (V, E) and non-negative distances d_e for every edge e, the TSP asks for a shortest tour through all vertices with respect to these distances. The method of choice for solving the TSP to optimality is a branch-and-cut approach. Usually the integrality constraints are relaxed first and all separation processes to identify violated inequalities are carried out on fractional solutions. Our approach instead exploits the impressive performance of current ILP-solvers and works only with integer solutions, without ever dealing with fractional solutions. We stick to a very simple ILP model and relax only the subtour elimination constraints. The resulting problem is solved to integer optimality, violated constraints (which are trivial to find) are added, and the process is repeated until a feasible solution is found. In order to speed up the algorithm we pursue several strategies to find as many relevant subtours as possible. These strategies are based on the clustering of vertices, with additional insights gained from empirical observations and random graph theory. Computational experiments are performed on test instances taken from the TSPLIB95 and on random Euclidean graphs.


Introduction
The Traveling Salesman/Salesperson Problem (TSP) is one of the best known and most widely investigated combinatorial optimization problems, with four famous books entirely devoted to its study ([11], [18], [8], [2]). Thus, we will refrain from giving extensive references but mainly refer to the treatment in [2]. Given a complete graph G = (V, E) with |V| = n and |E| = m = n(n − 1)/2, and non-negative distances d_e for each e ∈ E, the TSP asks for a shortest tour with respect to the distances d_e containing each vertex exactly once.
Let δ(v) := {e = (v, u) ∈ E | u ∈ V} denote the set of all edges adjacent to v ∈ V. Introducing binary variables x_e for the possible inclusion of any edge e ∈ E in the tour, we get the following classical ILP formulation:

min Σ_{e∈E} d_e x_e        (1)

s.t. Σ_{e∈δ(v)} x_e = 2    for all v ∈ V        (2)

Σ_{e=(u,v)∈E, u,v∈S} x_e ≤ |S| − 1    for all S ⊂ V, S ≠ ∅        (3)

x_e ∈ {0, 1}    for all e ∈ E        (4)

(1) defines the objective function, (2) is the degree equation for each vertex, (3) are the subtour elimination constraints, which forbid solutions consisting of several disconnected tours, and finally (4) defines the integrality constraints. Note also that some subtour elimination constraints are redundant: for the vertex sets S ⊂ V, S ≠ ∅, and S′ = V \ S we get pairs of subtour elimination constraints both enforcing the connection of S and S′. The established standard approach to solve the TSP to optimality, as pursued successfully during the last 30+ years, is a branch-and-cut approach, which solves the LP-relaxation obtained by relaxing the integrality constraints (4) into x_e ∈ [0, 1]. In each iteration of the underlying branch-and-bound scheme cutting planes are generated, i.e. constraints that are violated by the current fractional solution, but not necessarily by any feasible integer solution. Since there exists an exponential number of subsets S ⊂ V implying subtour elimination constraints (3), the computation starts with a small collection of subsets S ⊂ V (or none at all), and identifies violated subtour elimination constraints as cutting planes in the so-called separation problem. Moreover, a wide range of other cutting plane families were developed in the literature together with heuristic and exact algorithms to find them (see e.g. [20, ch. 58], [2]). Also the undisputed champion among all TSP codes, the famous Concorde package (see [2]), is based on this principle.
In this paper we introduce and examine another concept for solving the TSP. In Section 2 we introduce the basic idea of our approach. Some improvement strategies follow in Section 3, with our best approach presented in Subsection 3.6. Since the main contribution of this paper lies in computational experiments, we discuss them in detail in Section 4. The common details of all these tests are given in Subsection 4.1. In Section 5, we present some theoretical results and further empirical observations. Finally, we provide an Appendix with illustrations, graphs and two summarizing tables (Tables 8 and 9).

General solution approach
Clearly, the performance of the above branch-and-cut approach depends crucially on the performance of the used LP-solver. Highly efficient LP-solvers have been available for quite some time, but ILP-solvers have also improved rapidly during the last decades and reached an impressive performance. This motivated the idea of a very simple approach for solving the TSP without using LP-relaxations explicitly.
The general approach works as follows (see Algorithm 1). First, we relax all subtour elimination constraints (3) from the model and solve the remaining ILP model (corresponding to a weighted 2-matching problem). Then we check whether the obtained integer solution contains subtours. If not, the solution is an optimal TSP tour. Otherwise, we find all subtours in the integral solution (which can be done by a simple scan) and add the corresponding subtour elimination constraints to the model, each of them represented by the subset of vertices in the corresponding subtour. The resulting enlarged ILP model is solved again to optimality. Iterating this process clearly leads to an optimal TSP tour.
Input: TSP instance
Output: an optimal TSP tour
1: define current model as (1), (2), (4);
2: repeat
3:   solve the current model to optimality by an ILP-solver;
4:   if solution contains no subtour then
5:     set the solution as optimal tour;
6:   else
7:     find all subtours of the solution and add the corresponding subtour elimination constraints into the model;
8:   end if
9: until optimal tour found;
Algorithm 1: Main idea of our approach.
Every execution of the ILP-solver (see line 3) will be called an iteration. We define the set of violated subtour elimination constraints as the set of all included subtour elimination constraints which were violated in an iteration (see line 7). Figures 5 and 8-19 in the Appendix illustrate a problem instance and the execution of the algorithm on this instance, respectively.
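To make the subtour scan in line 7 of Algorithm 1 concrete, here is a minimal sketch in Python (the paper's implementation is in C++; the function name is our own illustration). Since every vertex has degree 2 in a solution of the relaxed model, the chosen edges decompose into vertex-disjoint cycles, which a single scan recovers:

```python
def find_subtours(n, edges):
    """Scan an integer 2-matching solution (list of chosen edges) and
    return the vertex sets of its cycles. Each vertex has degree 2 by
    the degree equations (2), so the edges decompose into cycles."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, cycles = set(), []
    for start in range(n):
        if start in seen:
            continue
        cycle, v = [], start
        while v not in seen:
            seen.add(v)
            cycle.append(v)
            # move to an unvisited neighbour, or close the cycle
            nxt = [w for w in adj[v] if w not in seen]
            v = nxt[0] if nxt else start
        cycles.append(cycle)
    return cycles
```

The solution is a feasible tour exactly when this scan returns a single cycle; otherwise each returned vertex set yields a violated subtour elimination constraint.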
It should be pointed out that the main motivation of this framework is its simplicity. The separation of subtour elimination constraints for fractional solutions amounts to the solution of a max-flow or min-cut problem. Based on the procedure by Padberg and Rinaldi [15], extensive work has been done to construct elaborate algorithms for performing this task efficiently. In contrast, violated subtour elimination constraints of integer solutions are trivial to find. Moreover, we refrain from using any of the other additional inequalities known from classical branch-and-cut algorithms, which might also be used to speed up our approach, since we want to underline the strength of modern ILP-solvers in connection with a refined subtour selection process (see Section 3.6).
This approach for solving the TSP is clearly not new; it has been available since the earliest ILP formulation going back to [6] and can be seen as folklore nowadays. Several authors followed the concept of generating integer solutions for some kind of relaxation of an ILP formulation and iteratively adding violated integer subtour elimination constraints. However, it seems that the lack of fast ILP-solvers prohibited its direct application in computational studies, although it was used in an artistic context by [4].
Miliotis [12] also concentrated on generating integer subtour elimination constraints, but within a fractional LP framework. The classical paper by Crowder and Padberg [5] applies the iterative generation of integer subtour elimination constraints as the second part of their algorithm, after generating fractional cutting planes in the first part to strengthen the LP-relaxation. They report that no more than three iterations of the ILP-solver for the strengthened model were necessary for test instances with up to 318 vertices. Also Grötschel and Holland [7] follow this direction of first improving the LP-model as much as possible, e.g. by running preprocessing, fixing certain variables and strengthening the LP-relaxation by different families of cutting planes, before generating integer subtours as a last step to find an optimal tour. It turns out that about half of their test instances never reach this last phase. In contrast, we stick to the pure ILP-formulation without any previous modifications.
From a theoretical perspective, the generation of subtours involves a certain trade-off. For an instance (G, d) there exists a minimal set of subtours S*, such that the ILP model containing only those subtour elimination constraints implied by S* yields an overall feasible, and thus optimal, solution. However, in practice we can only find collections of subtours larger than S* by adding subtours in every iteration until we reach optimality. Thus, we can either collect as many subtours as possible in each iteration, which may decrease the number of iterations but increases the running time of the ILP-solver because of the larger number of constraints; or we can try to control the number of subtour elimination constraints added to the model by judging their relevance and possibly removing some of them later, which keeps the ILP model smaller but may increase the number of iterations. In the following we describe various strategies to find the "right" subtours.

Representation of subtour elimination constraints
The subtour elimination constraints (3) can be expressed equivalently by the following cut constraints:

Σ_{e∈δ(S)} x_e ≥ 2    for all S ⊂ V, S ≠ ∅        (5)

where δ(S) := {e = (u, v) ∈ E | u ∈ S, v ∉ S} denotes the set of edges in the cut between S and V \ S. Although mathematically equivalent, the two ways of forbidding a subtour in S may result in quite different performances of the ILP-solver. It was observed that in general the running time for solving an ILP increases with the number of non-zero entries of the constraint matrix. Hence, we also tested a hybrid variant which chooses between (3) and (5) by picking for each considered set S the version with the smaller number of non-zero coefficients on the left-hand side:

use (3) for S if |S|(|S| − 1)/2 ≤ |S|(n − |S|), and (5) otherwise.        (6)

We performed computational tests of our approach to compare the three representations of subtour elimination constraints, namely (3), (5) and (6), and list the results in Table 1. Mean ratios refer to the arithmetic means over ratios between the running times of the approaches using the subtour elimination constraints represented as in (5) and (6), respectively, and the running time of the approach using the subtour elimination constraints represented as in (3). "sec." is the time in seconds, "#i." the number of iterations and "#c." the number of subtour elimination constraints added to the ILP before starting the last iteration.
It turned out that the three versions sometimes (but not always) lead to huge differences in running time (up to a factor of 5). This is an interesting observation that should be taken into consideration also in other computational studies. From our limited experiments it could be seen that version (5) was inferior most of the time (sometimes with huge deviations), whereas only a small dominance of the hybrid variant (6) over the standard version (3) could be observed. This is due to the small size of most subtours occurring during the solution process (the representation (3) coincides with the representation (6) in these cases). But since bigger subtours can also occur (mostly in the last iterations), we use the representation (6) for all further computational tests. For more details about different ILP-models see [14].
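As an illustration of the hybrid rule (6), the following helper (a hypothetical name, not part of the authors' code) counts the non-zero left-hand-side coefficients of both representations on a complete graph with n vertices and picks the sparser one: the subtour form (3) touches the |S|(|S| − 1)/2 edges inside S, while the cut form (5) touches the |S|(n − |S|) edges of δ(S).

```python
def choose_representation(S_size, n):
    """Pick the subtour-elimination representation with fewer non-zero
    left-hand-side coefficients, as in the hybrid variant (6).
    On a complete graph, (3) has |S|(|S|-1)/2 non-zeros and
    (5) has |S|(n-|S|) non-zeros."""
    nnz_subtour = S_size * (S_size - 1) // 2
    nnz_cut = S_size * (n - S_size)
    return "subtour" if nnz_subtour <= nnz_cut else "cut"
```

The breakeven point lies at roughly |S| ≈ 2n/3, which matches the observation that for the small subtours dominating the solution process the hybrid variant coincides with (3).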

Generation of subtours
As pointed out above, the focus of our attention lies on the generation and selection of a "good" set of subtour elimination constraints: it should include as many as possible of the constraints required by the ILP-solver to determine an optimal solution which is also feasible for the TSP, but as few as possible of all others, which only slow down the performance of the ILP-solver.
Trying to strike a balance between these two goals we followed several directions, some of them motivated by theoretical results, others by visually studying plots of all subtours generated during the execution of Algorithm 1.

Subtour elimination constraints from suboptimal integer solutions
Many ILP-solvers report all feasible integer solutions found during the underlying branch-and-bound process. In this case, we can also add all corresponding subtour elimination constraints to the model. These constraints can be considered simply as part of the set of violated subtour elimination constraints. Not surprisingly, these additional constraints always lead to a decrease in the number of iterations for the overall computation and to an increase in the total number of subtour elimination constraints generated before reaching optimality (see Table 2). While the time consumed in each iteration is likely to increase, it can also be observed that the overall running time is often decreased significantly by adding all detected subtours to the model. On the other hand, for the smaller number of instances where this is not the case, only relatively modest increases in running time are incurred. Therefore, we stick to adding all detected subtour elimination constraints for the remainder of the paper. The algorithm in this form will be called BasicIntegerTSP.

Subtours of size 3
The next idea we tried was to add subtour inequalities corresponding to some subtours of size 3 to the model before starting the iteration process (i.e. in line 1 of Algorithm 1). This idea was motivated by the observation that in many examples smaller subtours (with respect to their cardinality) occur more often than larger ones. However, there are far too many subtours of size 3 to add all of the corresponding inequalities.

Table 2: Using all constraints generated from all feasible solutions found during the solving process vs. using only the constraints generated from the final ILP solutions of each iteration. Mean ratios refer to the arithmetic means over ratios between the running times of BasicIntegerTSP over the other approach. "sec." is the time in seconds, "#i." the number of iterations and "#c." the number of subtour elimination constraints added to the ILP before starting the last iteration.
After studying our computational tests we decided to use the shortest ones with respect to their length. Table 3 summarizes our computational results, and it can be seen that this idea actually tends to slow down our approach. Thus we did not pursue it any further.
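The pre-generation idea of this subsection can be sketched as follows (illustrative Python, not the authors' code): enumerate all size-3 subtours, sort them by total length and keep only the proportion p of shortest ones. Note the O(n³) enumeration, which is only practical for moderate n:

```python
from itertools import combinations

def shortest_triangles(d, n, p):
    """Enumerate all size-3 subtours, sort them by total length
    (d[u][v] = distance between u and v) and return the proportion p
    of shortest ones, whose subtour elimination constraints would be
    added to the model before the first iteration."""
    tris = sorted(
        combinations(range(n), 3),
        key=lambda t: d[t[0]][t[1]] + d[t[1]][t[2]] + d[t[0]][t[2]],
    )
    return tris[: int(p * len(tris))]
```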

Subtour selections
Table 3: Using no subtours of size 3 vs. using the shortest subtours of size 3 for generation of subtour constraints before starting the solving process. The parameter p defines the proportion of used subtour constraints. Mean ratios refer to the arithmetic means over ratios between the running times of the particular approaches and the running time of BasicIntegerTSP (corresponding to p = 0). "sec." is the time in seconds, "#i." the number of iterations and "#c." the number of subtour elimination constraints added to the ILP before starting the last iteration. The entries "-" mark TSPLIB instances that could not be computed with 16 GB RAM.

As mentioned above, a large number of subtour inequalities which are not really needed only slows down our approach. Thus we also tried not to use all subtour inequalities we are able to generate during one iteration, but to make a proper selection. We again used our computational tests in order to identify two general properties which seem to point to such "suitable" subtour inequalities.
• Sort all obtained subtours with respect to their cardinality, choose the smallest ones and add the corresponding subtour inequalities to the model.
• Sort all obtained subtours with respect to their length and proceed as above.
The corresponding results are summarized in Tables 4 and 5, and it is obvious that this idea does not speed up our approach as intended. Thus we dropped it from our considerations.

Table 4: Using all subtours vs. using only the smallest subtours with respect to their cardinality for generation of subtour constraints. The parameter p defines the proportion of used subtour constraints. Mean ratios refer to the arithmetic means over ratios between the running times of the particular approaches and the running time of BasicIntegerTSP (corresponding to p = 1). "sec." is the time in seconds, "#i." the number of iterations and "#c." the number of subtour elimination constraints added to the ILP before starting the last iteration.

Clustering into subproblems
Table 5: Using all subtours vs. using only the smallest subtours with respect to their length for generation of subtour constraints. The parameter p defines the proportion of used subtour constraints. Mean ratios refer to the arithmetic means over ratios between the running times of the particular approaches and the running time of BasicIntegerTSP (corresponding to p = 1). "sec." is the time in seconds, "#i." the number of iterations and "#c." the number of subtour elimination constraints added to the ILP before starting the last iteration. The entries "-" mark TSPLIB instances that could not be computed with 16 GB RAM.

It can be observed that many subtours have a local context, meaning that a small subset of vertices separated from the remaining vertices by a reasonably large distance will always be connected by one or more subtours, independently of the size of the remaining graph (see also Figures 5 and 8 to 19 in the Appendix). Thus, we aim to identify clusters of vertices and run BasicIntegerTSP on the induced subgraphs, with the aim of generating within a very small running time the same subtours occurring in the execution of the approach on the full graph. Furthermore, we can use the optimal tour from every cluster to generate a corresponding subtour elimination constraint for the original instance and thus enforce a connection to the remainder of the graph.
For our purposes the clustering algorithm should fulfill the following properties:
• clustering quality: The obtained clusters should correspond well to the distance structure of the given graph, as in a classical geographic clustering.
• running time: Should be low relative to the running time required for the main part of the algorithm.
• cluster size: If clusters are too large, solving the TSP takes too much time.If clusters are too small, only few subtour elimination constraints are generated.
Clearly, there is a huge body of literature on clustering algorithms (see e.g. [10]), and no single choice for a given application will satisfy all our objectives. Our main restriction was the requirement of using a clustering algorithm which also works if the vertices are not embeddable in Euclidean space, i.e. when only arbitrary edge distances are given. Simplicity being another goal, we settled for the approach described in Algorithm 2: First, we fix the number of clusters c with 1 ≤ c ≤ n and sort the edges in increasing order of distances (see line 1). Then we start with the empty graph G′ = (V′, E′) (line 2) containing only isolated vertices (i.e. n clusters) and iteratively add edges in increasing order of distances until the desired number of clusters c is reached (see lines 5 and 6). In each iteration the current clustering is implied by the connected components of the current graph (see line 7). We denote this clustering approach by C | c. Note that this clustering algorithm does not make any assumptions about the underlying TSP instance and does not exploit any structural properties of the Metric TSP or the Euclidean TSP.
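Algorithm 2 is essentially the first phase of Kruskal's minimum-spanning-tree algorithm, stopped as soon as c components remain. A compact union-find sketch (our own illustration, not the paper's C++ code):

```python
def cluster(n, edges, c):
    """Kruskal-style clustering (sketch of Algorithm 2, C | c): start
    from n singleton clusters and merge along edges in increasing order
    of distance until exactly c connected components remain.
    edges: list of (distance, u, v) tuples."""
    parent = list(range(n))

    def find(v):  # union-find root with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    components = n
    for _, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
            if components == c:
                break
    clusters = {}
    for v in range(n):
        clusters.setdefault(find(v), []).append(v)
    return list(clusters.values())
```

Since only the component structure matters, the sorted edge list suffices; no geometric information about the vertices is used, in line with the requirement of handling arbitrary edge distances.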
It was observed in our computational experiments that the performance of the TSP algorithm is not very sensitive to small changes of the cluster number c and thus a rough estimation of c is sufficient.The behavior of the running time as a function of c can be found for particular test instances in Figure 21, see Section 4.2 for further discussion.

Restricted clustering
Although the clustering algorithm (see Algorithm 2) decreases the computational time of the whole solution process for some test instances, we observed a certain shortcoming: clusters consisting of isolated points or containing only two vertices may easily occur. Clearly, these clusters do not contribute any subtour on their own. Moreover, the degree constraints (2) guarantee that each such vertex is connected to the remainder of the graph in any case. The connection of these vertices to some "neighboring" cluster enforced in BasicIntegerTSP implies that the clustering yields different subtours for these neighbors and not the violated subtour elimination constraints arising in BasicIntegerTSP.
To avoid this situation, we want to impose a minimum cluster size of 3. An easy way to do so is as follows: After reaching the c clusters, continue to add edges in increasing order of distances (as before), but add an edge only if it is incident to one of the vertices in a connected component (i.e. cluster) of size one or two. This basically means that we simply merge these small clusters with their nearest neighbor with respect to the actual clustering. Note that this is a step-by-step process and it can happen that two clusters of size 1 merge first before the resulting pair is merged with its nearest neighboring cluster. The resulting restricted clustering approach will be denoted by RC3 | c.
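The post-phase of RC3 | c can be sketched by extending the union-find clustering: after the standard phase, edges are scanned again in increasing order of distance, but a merge is performed only if one of its endpoints lies in a cluster of size 1 or 2 (again an illustrative sketch, assuming n ≥ 3):

```python
def restricted_cluster(n, edges, c):
    """Sketch of RC3 | c: run the standard clustering down to c clusters,
    then keep adding edges in increasing distance order, but only edges
    incident to a cluster of size 1 or 2, so that every final cluster
    contains at least 3 vertices. edges: list of (distance, u, v)."""
    parent, size = list(range(n)), [1] * n

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
        size[rv] += size[ru]  # sizes are valid at roots only
        return True

    ordered, components = sorted(edges), n
    for _, u, v in ordered:          # standard phase: stop at c clusters
        if union(u, v):
            components -= 1
            if components == c:
                break
    for _, u, v in ordered:          # post-phase: grow clusters of size < 3
        if min(size[find(u)], size[find(v)]) < 3:
            union(u, v)
    clusters = {}
    for v in range(n):
        clusters.setdefault(find(v), []).append(v)
    return list(clusters.values())
```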
Against our expectations, the computational experiments (see Section 4) show that this approach often impacts the algorithm in the opposite way (see also Figure 21 and Table 9 in the Appendix) if compared for the same original cluster size c.
Surprisingly, we could observe an interesting behavior for c ≈ n. In this case, the main clustering algorithm (see Algorithm 2) has almost no effect, but the "post-phase" which enforces the minimum cluster size yields a different clustering on its own. This variant often beats the previous standard clustering algorithm with c ≪ n (see Table 9 in the Appendix). Note that we cannot fix the actual number of clusters c′ in this case. But our computational results show that c′ ≈ n/5 usually holds if the points are distributed relatively uniformly in the Euclidean plane and if the distances correspond to their relative Euclidean distances (see Figure 20 in the Appendix).

Hierarchical clustering
It was pointed out in Section 3.4 that the number of clusters c is chosen as an input parameter. The computational experiments in Subsection 4.2 give some indication of the behavior of Algorithm 2 for different values of c, but fail to provide a clear guideline for the selection of c. Moreover, from graphical inspection of test instances, we got the impression that a larger number of relevant subtour elimination constraints might be obtained by considering more clusters of moderate size. In the following we present an idea that takes both of these aspects into account.
In our hierarchical clustering process, denoted by HC, we do not set a cluster number c, but let the clustering algorithm continue until all vertices are connected (this corresponds to c = 1). The resulting clustering process can be represented by a binary clustering tree which is constructed in a bottom-up way. The leaves of the tree represent isolated vertices, i.e. the n trivial clusters given at the beginning of the clustering algorithm. Whenever two clusters are merged by the addition of an edge, the two corresponding tree vertices are connected to a new common parent vertex in the tree, representing the new cluster. At the end of this process we reach the root of the clustering tree, corresponding to the complete vertex set. An example of such a clustering tree is shown in Figures 1 and 2; distances between every two vertices correspond to their respective Euclidean distances in this example. Now, we go through the tree in a bottom-up fashion from the leaves to the root. In each tree vertex we solve the TSP for the associated cluster, after both of its child vertices have been resolved. The crucial aspect of our procedure is the following: All subtour elimination constraints generated during such a TSP solution for a certain cluster are propagated and added to the ILP model used for solving the TSP instance of its parent cluster. Obviously, at the root vertex the full TSP is solved.
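The bottom-up construction of the clustering tree can be sketched as follows (illustrative Python): every edge that joins two different clusters creates a new internal tree vertex, and recording the merged vertex sets in creation order yields exactly the clusters on which the TSPs are solved, ending with the root V:

```python
def clustering_tree(n, edges):
    """Sketch of the HC merge tree: process edges in increasing distance
    order; each edge joining two different clusters creates a new tree
    vertex (the merged cluster). Returns the vertex sets of all internal
    tree nodes in bottom-up (creation) order; the last one is the root V.
    edges: list of (distance, u, v)."""
    parent = list(range(n))
    members = {v: [v] for v in range(n)}  # vertex sets at current roots

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    merges = []
    for _, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            members[rv] = members[rv] + members.pop(ru)
            merges.append(sorted(members[rv]))
            if len(merges) == n - 1:  # root reached
                break
    return merges
```

In the HC | u variant, one would simply skip (or cut the tree at) every recorded cluster larger than the bound u.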
The advantage of this strategy is the step-by-step construction of the violated subtour elimination constraints. A disadvantage is that many constraints can make sense in the local context but not in the global one, and thus too many constraints could be generated in this way. Naturally, one pays for the additional subtour elimination constraints by an increase in computation time required to solve a large number of (mostly small) TSP instances. To avoid the solution of TSPs of the same order of magnitude as the original instance, it makes sense to impose an upper bound u on the maximum cluster size. This means that the clustering tree is partitioned into several subtrees by removing all tree vertices corresponding to clusters of size greater than u. After resolving all these subtrees we collect all generated subtour elimination constraints and add them to the ILP model for the originally given TSP. This approach will be denoted as HC | u. Computational experiments with various choices of u indicated that u = 4n/log₂n is a good upper bound.
Let us take a closer look at the problem of including too many subtour elimination constraints which are redundant in the global graph context.Of course the theoretical "best" way would be to check which of the propagated subtour elimination constraints were not used during the runs of the ILP solver and drop them.To do this, it would be necessary to get this information from the ILP solver which often is not possible.
However, we can try to approximately identify subtours which are not only locally relevant in the following way: All subtour elimination constraints generated in a certain tree vertex, i.e. for a certain cluster, are marked as considered subtour elimination constraints. Then we solve the TSP for the cluster of its parent vertex in the tree without using the subtours marked as considered. If we generate such a considered subtour again during the solution of the parent vertex, we take this as an indicator of global significance and add the constraint permanently for all following supersets of this cluster. If the upper bound u is in effect, we also take all subtour elimination constraints found in the biggest solved clusters. This approach will be denoted as HCD | u.
Of course, this is only a heuristic rule and one can easily find examples where this prediction of a subtour's relevance fails, but our experiments indicate that HCD | 4n/log₂n is the best approach we considered. A comparison with other hierarchical clustering methods for all test instances can be found in Table 8 in the Appendix. It can be seen that without an upper bound we are often not able to find the solution at all (under the time and memory constraints we imposed on the computational experiments). In the third and fourth columns we can see a comparison between approaches both using the upper bound u = 4n/log₂n, where the former collects all detected subtour elimination constraints and the latter allows dropping those which seem to be relevant only in a local context. Both these methods beat BasicIntegerTSP (for the comparison of this approach with other presented algorithms see the computational experiments in Section 4).

Computational experiments
In the following the computational experiments and their results will be discussed.

Setup of the computational experiments
All tests were run on an Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz with 16 GB RAM under Linux, and all programs were implemented in C++ using the SCIP MIP-solver [1] together with CPLEX as LP-solver. It has often been discussed in the literature (see e.g. [13]) and in personal communications that ILP-solvers are relatively non-robust and often show high variations in their running time performance, even if the same instance is repeatedly run on the same hardware and software environment. Our first test runs also exhibited deviations up to a factor of 2 when identical tests were repeated. Thus we took special care to guarantee the relative reproducibility of the computational experiments: No additional swap memory was made available during the tests, only one thread was used and no other parallel user processes were allowed. This leads to a high degree of reproducibility in our experiments. However, this issue makes a comparison to other simple approaches, which were tested on other computers under other hardware and software conditions, extremely difficult.
We used two groups of test instances: The first group is taken from the well-known TSPLIB95 [17], which contains the established benchmarks for the TSP and related problems. From this collection we chose all instances with (i) at least 150 and at most 1000 vertices and (ii) which could be solved in at most 12 hours by our BasicIntegerTSP. It turned out that 25 instances of the TSPLIB95 fall into this category (see Table 9), the largest having 783 vertices.
We also observed some drawbacks of these instances: Most of them (23 of 25) are defined as point sets in the Euclidean plane with distances corresponding to the Euclidean metric, or as a set of geographical cities, i.e. points on a sphere. Moreover, they often contain substructures like meshes or sets of collinear points, and finally, since all distances are rounded to the nearest integer, there are many instances which have multiple optimal solutions. These instances are relatively unstable with respect to solution time, number of iterations, and (important for our approach) the cardinality of the set of violated subtour elimination constraints. For our approach, instances with a mesh geometry (e.g. ts225 from TSPLIB95) were especially prone to unstable behavior, such as widely varying running times for minor changes in the parameter setting. This seems to be due to the fact that these instances contain many 2-matchings with the same objective function value, as illustrated in the following example: Consider a 3 × (2n + 2) mesh graph (see Figure 3, left graph). It has 2^n optimal TSP tours (see [21]). If we fix a subtour on the first 6 vertices, we obviously have 2^{n−1} optimal TSP tours on the remaining 3 × 2n vertices (see Figure 3, right graph), and together with the fixed subtour we have 2^{n−1} 2-matchings having the same objective value as an optimal TSP tour on the original graph. Thus the search process for a feasible TSP tour can vary widely.
In order to provide further comparisons, we also defined a set of instances based on random Euclidean graphs. The running times of our test instances, most of them containing between 150 and 500 vertices, were often within several hours. Since we tested many different variants and configurations of our approach, we selected a subset of these test instances to get faster answers for determining the best algorithm settings for use in the final tests. This subset contains 12 (of the 25) TSPLIB instances and one random instance for every number of vertices n (see e.g. Table 1). All our running time tables report the name of the instance, the running time (sec.) in wall-clock seconds (rounded down to the nearest integer), the number of iterations (#i.), i.e. the number of calls to the ILP-solver in the main part of our algorithm (without the TSP solutions for the clusters), and the number of subtour elimination constraints (#c.) added to the ILP model in the last iteration, i.e. the number of constraints of the model which yielded an optimal TSP solution. We often compare two columns of a table by taking the mean ratio, i.e. computing the quotient between the running times on the same instance and taking the arithmetic mean of these quotients.
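Note that the mean ratio is the mean of the per-instance quotients, not the quotient of the mean running times; a one-line sketch (illustrative, with hypothetical argument names):

```python
def mean_ratio(times_other, times_base):
    """Mean ratio as used in the tables: per-instance quotient of the
    running time of the compared approach over the baseline, then the
    arithmetic mean of these quotients."""
    ratios = [a / b for a, b in zip(times_other, times_base)]
    return sum(ratios) / len(ratios)
```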

Computational details for selected examples
Let us now take a closer look at two instances in detail. While this serves only as an illustration, we studied many such special cases visually during the development of the clustering approach to gain better insight into the structure of the subtours generated by BasicIntegerTSP.
We selected instances kroB150 and u159, whose vertices are depicted in Figures 6 and 7 in the Appendix. Both instances consist of points in the Euclidean plane and the distances between every two vertices correspond to their respective Euclidean distances; however, they represent two very different instance types: The instance kroB150 consists of relatively uniformly distributed points, while the instance u159 is more structured and contains, e.g., mesh substructures, which are the worst setting for our algorithm (recall Subsection 4.1).
Figure 21 in the Appendix illustrates the behavior of the running time t in seconds as a function of the parameter c for the instances kroB150 and u159. The full lines correspond to the standard clustering approach C|c described in Section 3.4 (see Algorithm 2), while the dashed lines correspond to the restricted clustering RC3|c of Section 3.5 with minimum cluster size 3. The standard BasicIntegerTSP without clustering arises for c = 1.
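To make the two variants concrete, the following sketch shows what a c-cluster partition and a restricted variant with minimum cluster size 3 could look like. This is an illustration using single-linkage clustering, not the paper's Algorithm 2; the merge rule for small clusters (attach them to the cluster of their nearest point) is an assumption:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, cdist

def cluster(points, c):
    """Partition a Euclidean point set into c clusters (cf. C|c)."""
    Z = linkage(pdist(points), method="single")
    return fcluster(Z, t=c, criterion="maxclust")  # labels in 1..c

def restricted_cluster(points, c, min_size=3):
    """RC3|c-style variant: clusters smaller than min_size are merged into the
    cluster containing their nearest outside point (illustrative rule)."""
    labels = cluster(points, c)
    for lbl in np.unique(labels):
        idx = np.flatnonzero(labels == lbl)
        if len(idx) < min_size:
            rest = np.flatnonzero(labels != lbl)
            if len(rest) == 0:
                continue
            d = cdist(points[idx], points[rest])
            labels[idx] = labels[rest[d.min(axis=0).argmin()]]
    return labels

# two well-separated point clouds -> two clusters
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(10, 1, (30, 2))])
print(np.unique(cluster(pts, 2)))
```

For c = 1 both functions return a single cluster containing all vertices, which corresponds to running BasicIntegerTSP without clustering.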
Instance kroB150 consists of relatively uniformly distributed points in the Euclidean plane, but has a specific property: by using Algorithm 2 we can observe the occurrence of two main components even for a relatively small cluster number c (already for c = 6). This behavior is rather atypical for random Euclidean graphs, cf. [16, ch. 13], but it provides an advantage for our approach, since we do not have to solve cluster instances of the same order of magnitude as the original graph but obtain several clusters of moderate size even for small cluster numbers c.
Considering the standard clustering approach (Algorithm 2) in Figure 21, upper graph, it can be seen that only a small improvement occurs for c between 2 and 5. Looking at the corresponding clusterings in detail, it turns out that in these cases there exists only one "giant connected component" and all other clusters have size 1. This structure also implies that for the restricted clustering these isolated vertices are merged with the giant component and the effect of clustering is lost completely. For larger cluster numbers c, a considerable speedup is obtained, with some variation, but more or less in the same range for almost all values of c ≥ 6 (in fact, the giant component splits in these cases). Moreover, the restricted clustering performs roughly as well as the standard clustering for c ≥ 6.
Instance u159 is much more structured and has many collinear vertices. Here, we can observe a different behavior. While the standard clustering is actually beaten by BasicIntegerTSP for smaller cluster numbers and has a more or less similar performance for larger cluster numbers, the restricted clustering is almost consistently better than the other two approaches. For c between 2 and 10 there exists a large component containing many mesh substructures, which consumes as much computation time as the whole instance.
These two instances give some indication of how to characterize "good" instances for our algorithm: They should
• consist of clearly separated clusters and
• not contain mesh substructures or collinear vertices.

General computational results
A summary of the computational results for BasicIntegerTSP and the most promising variants of clustering based subtour generation can be found in Table 9. For random Euclidean instances we report only the mean values over all five instances of the same size. It turns out that HCD | 4n/log₂n, i.e. the hierarchical clustering approach combined with dropping subtour elimination constraints (fixing them only if they are generated again in the subsequent iteration) and with the upper bound u = 4n/log₂n on the maximum cluster size, gives the best overall performance. A different behavior can be observed for instances taken from the TSPLIB and for random Euclidean instances. On the TSPLIB instances the algorithm HCD | 4n/log₂n is on average about 20% faster than pure BasicIntegerTSP and beats the other clustering based approaches on most instances. In those cases where it is not the best choice, it is usually not far behind.
As already mentioned, the best results are obtained with HCD | 4n/log₂n for instances with a strong cluster structure and without mesh substructures (e.g. pr299). For instances with mesh substructures it is difficult to find an optimal 2-matching which is also a TSP tour. For random Euclidean instances the results are less clear, but approaches with a fixed number of clusters seem to be better than the hierarchical ones.
It was a main goal of this study to find a large number of "good" subtour elimination constraints, i.e. subtours that are present in the last iteration of the ILP-model of BasicIntegerTSP. Therefore, we show the potential and the limitations of our approach in reaching this goal. In particular, we report the relation between the set S₁, consisting of all subtours generated by running the hierarchical clustering algorithm with an upper bound u (set, as in the computational tests, to u = 4n/log₂n) before solving the original problem (i.e. the root vertex), and the set S₂, containing only the subtour elimination constraints included in the final ILP model of BasicIntegerTSP. We tested the hierarchical clustering with and without the dropping of non-repeated subtours.
There are two aspects we want to describe: First, we want to check whether S₁ contains a relevant proportion of "useful" subtour constraints, i.e. constraints also included in S₂, or whether S₁ contains mostly useless subtours. Therefore, we report the proportion of used subtours, defined as p_used = |S₁ ∩ S₂| / |S₁|. Secondly, we want to find out to what extent it is possible to find the "right" subtours by our approach. Hence, we define the proportion of covered subtours as p_cov = |S₁ ∩ S₂| / |S₂|. The values of p_used and p_cov are given in Table 6. It can be seen that empirically there is a chance to find about 26-31% (p_cov) of all required violated subtour elimination constraints. If subtour elimination constraints are allowed to be dropped, we find fewer such constraints, but our choice has a better quality (p_cov is smaller, but p_used is larger), i.e. the solver does not have to work with a large number of constraints which only slow down the solving process and are not necessary to reach an optimal solution. Furthermore, we can observe a relatively big difference between the values of the proportion of used subtour elimination constraints (p_used) for the TSPLIB instances and for the random Euclidean instances if the dropping of redundant constraints is allowed.
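Taking p_used = |S₁ ∩ S₂| / |S₁| and p_cov = |S₁ ∩ S₂| / |S₂| (the natural reading of the definitions above), both proportions can be computed directly from the two constraint sets. In this sketch each subtour elimination constraint is represented by the vertex set of its subtour; the example sets are made up:

```python
def proportions(S1, S2):
    """p_used = |S1 ∩ S2| / |S1| (share of generated subtours that are useful),
    p_cov  = |S1 ∩ S2| / |S2| (share of required subtours that were found).
    S1, S2: sets of frozensets, one frozenset of vertices per subtour."""
    common = len(S1 & S2)
    return common / len(S1), common / len(S2)

S1 = {frozenset({1, 2, 3}), frozenset({4, 5, 6}), frozenset({7, 8, 9})}
S2 = {frozenset({1, 2, 3}), frozenset({10, 11, 12})}
p_used, p_cov = proportions(S1, S2)
print(p_used, p_cov)  # one of three generated subtours is useful; half of S2 is covered
```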

Adding a starting heuristic
Of course, there are many possibilities for adding improvements to our basic approach. Lower bounds and heuristics can be introduced, branching rules can be specified, or cutting planes can be generated. We did not pursue these possibilities since we wanted to focus on the simplicity of the approach. Moreover, we wanted to treat the ILP solver as a "black box" and not interfere with its execution.
Just as an example which immediately comes to mind, we added a starting heuristic to give a reasonably good TSP solution as a starting solution to the ILP solver. We used the improved version of the classical Lin-Kernighan heuristic in the code written by Helsgaun [9]. The computational results reported in Table 7 show that a considerable speedup (roughly a factor of 3, but sometimes much more) can be obtained in this way.

Table 7: Results for BasicIntegerTSP used without / with the Lin-Kernighan heuristic for generating an initial solution. Mean ratios refer to the arithmetic means over ratios between the running times of the approaches. "sec." is the time in seconds, "#i." the number of iterations and "#c." the number of subtour elimination constraints added to the ILP before starting the last iteration.

Some theoretical results and further empirical observations
Although our work mainly aims at computational experiments, we also tried to analyze BasicIntegerTSP from a theoretical point of view. In particular, we studied the expected behavior on random Euclidean instances and tried to characterize the expected cardinality of the minimal set of required subtours S* as defined in Section 2. It is well known that no polynomially bounded representation of the TSP polytope can be found, and there also exist instances based on a mesh structure for which E[|S*|] has exponential size, but the question of the expected size of |S*| for random Euclidean instances, and thus of the expected number of iterations of our solution algorithm, remains an interesting open problem.
We started with extensive computational tests, some of them presented in Figures 22 and 23 in the Appendix, to gain empirical evidence on this aspect. The upper graph in Figure 22 illustrates the mean number of iterations needed by BasicIntegerTSP to reach optimality for different numbers of vertices n (we evaluated 100 random Euclidean instances for every value of n). The lower graph of Figure 22 shows the mean length of the optimal TSP tour and of the optimal 2-matching (i.e. the objective value after solving the ILP in the first iteration) using the same setting.
It was proven back in 1959 that the expected length of an optimal TSP tour is asymptotically β√n, where β is a constant [3]. This approach was later generalized to other settings, and further properties of the square root asymptotic were identified [19, 22]. We used these properties to prove the square root asymptotic also for the 2-matching problem (cf. Figure 22, lower graph, dashed). We first need some definitions, lemmas and theorems, originally introduced by [19] and summarized by [22], in order to prove this result.
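The √n scaling can also be observed empirically. The sketch below uses a nearest-neighbor heuristic tour as a cheap proxy for the optimal tour (an assumption for illustration only; the theorem concerns optimal tours and, in our case, optimal 2-matchings) and shows that the ratio of tour length to √n stays roughly constant as n grows:

```python
import math
import random

def nn_tour_length(points):
    """Length of a nearest-neighbor heuristic tour through a point list."""
    unvisited = points[1:]
    cur, total = points[0], 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda p: math.dist(cur, p))
        total += math.dist(cur, nxt)
        unvisited.remove(nxt)
        cur = nxt
    return total + math.dist(cur, points[0])  # close the tour

random.seed(1)
for n in (50, 200, 800):
    L = sum(nn_tour_length([(random.random(), random.random()) for _ in range(n)])
            for _ in range(3)) / 3  # average over 3 random instances
    print(n, round(L / math.sqrt(n), 3))  # this ratio stays roughly constant
```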
Furthermore, let F ∈ F be a finite point set in R^dim and let R ∈ R be a dim-dimensional rectangle in R^dim, where dim ≥ 2.
Finally, let d : R × R → R₀⁺ be a metric and let G = G(F, R) = (V(G), E(G)) be the complete graph with vertex set V(G) = F ∩ R and with distances d(e) between every two vertices u, v ∈ V(G), where e = (u, v).
Then we denote by M the matching functional and by M_B the boundary matching functional. Here F stands for the set of all partitions (F_i)_{i≥1} of F, and ∂R stands for the set of all sequences of pairs of points {a_i, b_i}_{i≥1} belonging to the boundary of the rectangle R, denoted by ∂R. Additionally, we set d(a, b) = 0 for all a, b ∈ ∂R.
Similarly, we define the 2-matching functional L and the boundary 2-matching functional L_B.
If it is obvious which rectangle R is considered, we omit it from the notation. Finally, we define:

Definition 2 (simple subadditivity, geometric subadditivity and superadditivity). Let R be a rectangle [0, t]^dim for some positive constant t ∈ R⁺, partitioned into two rectangles R₁ and R₂ (R₁ ∪ R₂ = R). Furthermore, let F, G be finite sets in [0, t]^dim and let P(F, R) : F × R → R₀⁺ be a function. The function P is simple subadditive if, for a finite constant C,

P(F ∪ G, R) ≤ P(F, R) + P(G, R) + C · diam(R),     (13)

where diam(R) denotes the diameter of the rectangle R. If

P(F, R) ≤ P(F ∩ R₁, R₁) + P(F ∩ R₂, R₂) + C · diam(R)     (14)

is fulfilled, we call the function P geometric subadditive. If

P(F, R) ≥ P(F ∩ R₁, R₁) + P(F ∩ R₂, R₂)     (15)

is satisfied, we call the function P superadditive.
Let R be a rectangle [0, t]^dim for some positive constant t ∈ R⁺, partitioned into two rectangles R₁ and R₂ (R₁ ∪ R₂ = R). If P satisfies geometric subadditivity (14), we say that P is a subadditive Euclidean functional. If P is superadditive (15), we say that P is a superadditive Euclidean functional.
We can now prove our result.
Lemma 10. The 2-matching functional L and the boundary 2-matching functional L_B fulfill the conditions of Theorem 9.
Proof. The proof is a modification of similar proofs for other combinatorial optimization problems contained in [22].
(1) Let us first show the superadditivity. We can distinguish two cases in general: (a) either the solution over the whole rectangle R does not cross the boundary between the rectangles R₁ and R₂ at all, or (b) at least one subtour crosses the boundary between R₁ and R₂.
In the first case, inequality (15) obviously holds.
Let us now consider a subtour crossing the boundary between the rectangles R₁ and R₂ (for an example see Figure 4). W.l.o.g. we can assume that the boundary is crossed between the points v₁ and v₂ and between v₃ and v₄, and that the crossing points are x and y, respectively. Furthermore, w.l.o.g. we can assume that v₁, v₃ ∈ R₁.
Then the new subtour, containing the vertices v₁ and v₃ and lying in the rectangle R₁, consists of the following parts:
• the same path between the vertices v₁ and v₃ belonging to the rectangle R₁ as in the whole rectangle R,
• the connections between the vertices v₁ and v₃ and the boundary between the two rectangles, and finally
• a piece of this boundary (see also Figure 4).
We have to choose the vertices a and b on this boundary in such a way that a = arg min_{α ∈ ∂R₁ ∩ ∂R₂} d(v₁, α) and b = arg min_{β ∈ ∂R₁ ∩ ∂R₂} d(v₃, β).

(2) Next, we check some properties of the 2-matching functional L. Equalities (16), (17) and (18) obviously hold.
Further, it is easy to see that the 2-matching functional L fulfills geometric subadditivity: since we minimize, joining the minimum weighted 2-matchings over the two subrectangles yields a feasible solution over R, whose value exceeds L(F, R) by at most a bounded boundary term.

First, note that L_B(F, [0, 1]^dim) ≤ L(F, [0, 1]^dim) always holds (see (12)). Thus it suffices to show the reverse inequality up to an additive constant. Let F* ⊆ F be the set of vertices which are connected with the boundary ∂[0, 1]^dim by a path. Now, we remove all edges incident with the vertices contained in the vertex set F*. If |F*| < 3, we can just put these vertices into an arbitrary subtour (if such a subtour exists) and obtain the above inequality (the increase of the objective value can easily be bounded by 4√dim in this case). If |F*| ≥ 3, we can construct a minimum weighted 2-matching on this vertex set and obtain the desired bound.

(3) Finally, we prove the smoothness of the boundary 2-matching functional L_B. We first show the simple and geometric subadditivity of this functional in order to be able to show the smoothness.
Let F and G be finite sets in [0, t]^dim. If the minimum weighted 2-matching for the vertex set F ∪ G equals the minimum weighted 2-matchings for the vertex sets F and G joined together, inequality (13) holds with equality. Since we can always construct such a joined solution for the vertex set F ∪ G, the objective value can only be smaller in the other case (note that we minimize).
Let us now prove the geometric subadditivity in order to fulfill the conditions of Lemma 5. We know that the 2-matching functional L is geometric subadditive, and it is easy to see that the proof of inequality (25) can be modified to obtain the analogous bound for an arbitrary dim-dimensional rectangle. Since L_B(F, R) ≤ L(F, R), we obtain inequality (14) immediately.
Using the simple subadditivity and Lemma 5 we get

L_B(F ∪ G) − L_B(F) ≤ C₅ |G|^((dim−1)/dim)

for all finite sets F, G ⊂ [0, 1]^dim, where C₅ := C₅(dim) denotes a finite constant. This completes this part of the proof if L_B(F ∪ G) − L_B(F) ≥ 0. Hence we just need to show

L_B(F) − L_B(F ∪ G) ≤ C₆ |G|^((dim−1)/dim)     (29)

for some finite constant C₆ := C₆(dim).
Consider the global minimum weighted 2-matching on the vertex set G ∪ F and remove from all subtours all edges incident with a vertex g ∈ G. This yields at most |G| paths containing only vertices from the vertex set F, and some isolated points F′ ⊆ F. Let F* denote the set of all endpoints of these paths. Clearly, we have |F′| ≤ |G| and |F*| ≤ 2|G|. Consider now the boundary matching functional M_B(F*) and the corresponding matching m. This matching, together with the disconnected paths and with parts of the boundary of the dim-dimensional rectangle [0, 1]^dim, yields a set of subtours {F̃_i}, i = 1, ..., N, for some positive integer N. Furthermore, we can construct a minimum weighted 2-matching on the vertex set F′ and thus obtain a feasible 2-matching. Using Lemmas 4 and 5 we can bound the resulting increase of the objective value, which yields exactly inequality (29).

Based on these results, the following idea might lead to a proof that the expected cardinality of S* is polynomially bounded: After the first iteration of the algorithm we have a solution possibly consisting of several separate subtours of total asymptotic length α√n = α₁√n. If there are subtours, we add subtour elimination constraints (in fact at most ⌊n/3⌋), resolve the enlarged ILP and get another solution whose asymptotic length is α₂√n. By proving that the expected length of the sequence α = α₁, ..., α_{#i} = β is polynomially bounded in n, one would obtain that E[|S*|] is also polynomially bounded, since only polynomially many subtours are added in each iteration. Our intuition and the computational tests illustrated in Figure 22, upper graph, indicate that the length of this sequence could be proportional to √n as well. Unfortunately, we could not find suitable techniques to prove this step.
A different approach is illustrated by Figure 23, where we examine the mean number of subtours contained in every iteration. In particular, we chose n = 60, generated 100000 random Euclidean instances and sorted them by the number of iterations #i. required by BasicIntegerTSP. The most frequent number of ILP solver runs was 7 (dotted line), but we also summarize the results for 5 (full line), 6 (dashed), 8 (loosely dashed) and 9 (loosely dotted) necessary runs in this figure. For every iteration of every class (with respect to the number of involved ILP runs) we compute the mean number of subtours contained in the respective solutions. As can be expected, these numbers of subtours are decreasing (on average) over the iterations. To allow a better comparison of this behavior for different numbers of iterations we scaled the iteration numbers into the interval [0, 10] (horizontal axis of Figure 23). It can be seen that the average number of subtours contained in an optimal 2-matching (first iteration) is about 9.2, while in the last iteration we trivially have only one tour. Between these endpoints we first observe a mostly convex behavior; only in the last step before reaching the optimal TSP tour does a sudden drop occur. It would be interesting to derive an asymptotic description of these curves. An intuitive guess would point to an exponential function, but so far we could not find a theoretical justification for this claim.
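The rescaling used for Figure 23 can be sketched as a linear map of the iteration indices 1, ..., #i onto [0, 10], followed by interpolation onto a common grid so that runs with different iteration counts can be averaged. The per-iteration subtour counts in the example below are made up for illustration:

```python
import numpy as np

def rescale_counts(subtour_counts, grid=np.linspace(0, 10, 11)):
    """Map per-iteration subtour counts onto the common axis [0, 10]."""
    x = np.linspace(0, 10, len(subtour_counts))  # iterations mapped onto [0, 10]
    return np.interp(grid, x, subtour_counts)

run_a = [9, 5, 3, 2, 1]      # a run with 5 iterations (illustrative counts)
run_b = [10, 6, 4, 3, 2, 1]  # a run with 6 iterations
mean_curve = (rescale_counts(run_a) + rescale_counts(run_b)) / 2
print(mean_curve[0], mean_curve[-1])  # starts at the mean 2-matching count, ends at 1
```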

Conclusions
In this paper we provide a "test of concept" of a very simple approach to solve TSP instances of medium size to optimality by exploiting the power of current ILP solvers. The approach consists of iteratively solving ILP models with relaxed subtour elimination constraints to integer optimality. Then it is easy to find integral subtours and add the corresponding subtour elimination constraints to the ILP model. Iterating this process until no more subtours are contained in the solution obviously solves the TSP to optimality.
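The iterate-and-cut loop can be sketched as follows. Here `solve_ilp` is a hypothetical stand-in for the black-box ILP solver: it is assumed to return the chosen edges of an integer-optimal degree-2 solution respecting the subtour elimination constraints passed to it. Finding the subtours of an integer solution is just a connected-components computation:

```python
def subtours(n, chosen_edges):
    """Connected components of a degree-2 integer solution = its subtours."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in chosen_edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

def basic_integer_tsp(n, solve_ilp):
    """Sketch of the loop: each cut is a vertex set S encoding
    sum_{e in E(S)} x_e <= |S| - 1 in the ILP model."""
    cuts = []
    while True:
        edges = solve_ilp(cuts)   # integer-optimal solution under current cuts
        tours = subtours(n, edges)
        if len(tours) == 1:       # a single tour: optimal TSP solution found
            return edges
        cuts.extend(tours)        # one subtour elimination constraint per subtour
```

In the paper's terms, the first call (empty cut list) returns an optimal 2-matching, and the loop terminates because only feasible tours satisfy all subtour elimination constraints.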
In this work we focus on the structure of subtour elimination constraints and how to find a "good" set of subtour elimination constraints in reasonable time. To this end, we aim to identify the local structure of the vertices of a given TSP instance by running a clustering algorithm. Based on empirical observations and results from random graph theory, we further extend this clustering-based approach and develop a hierarchical clustering method with a mechanism to identify subtour elimination constraints as "relevant" if they appear in consecutive iterations of the algorithm.
We mostly refrained from adding additional features which are highly likely to improve the performance considerably, such as starting heuristics (cf. Section 4.4), lower bounds or additional cuts. In the future it might be interesting to explore the limits of performance one can reach with a purely integer linear programming approach by adding these improvements. Clearly, we cannot expect such a basic approach to compete with the performance of Concorde [2], which has been developed over many years and basically includes all theoretical and technical developments known so far. However, it turns out that most of the standard benchmark instances with up to 400 vertices can be solved in a few minutes by this purely integer strategy.
Finally, we briefly discussed some theoretical aspects for random Euclidean graphs which could lead to polyhedral results in the expected case.

Table 8: Results for BasicIntegerTSP and for different variants of the approach which uses hierarchical clustering. Mean ratios refer to the arithmetic means over ratios between the running times of the particular approaches and the running time of BasicIntegerTSP. "sec." is the time in seconds, "#i." the number of iterations and "#c." the number of subtour elimination constraints added to the ILP before starting the last iteration. The entries "-" for TSPLIB instances could not be computed with 16 GB RAM or would take more than 12 hours.

• BasicIntegerTSP
• HC | n - hierarchical clustering; the constraints cannot be dropped and the maximum size of a solved cluster is u = n (i.e., in fact, there is no upper bound)

Algorithm 2: Clustering algorithm.

Figure 1: Example illustrating the hierarchical clustering: Vertices of the TSP instance. Distances between every two vertices correspond to their respective Euclidean distances in this example.

Figure 3: Example illustrating the behavior of our approaches on instances based on graphs containing mesh substructures. Distances between every two vertices correspond to their respective Euclidean distances in this example.

Here a = arg min_{α ∈ ∂R₁ ∩ ∂R₂} d(v₁, α) and b = arg min_{β ∈ ∂R₁ ∩ ∂R₂} d(v₃, β) hold in order to achieve minimality. Due to this choice of the vertices a and b we can write d(v₁, a) ≤ d(v₁, x) and d(v₃, b) ≤ d(v₃, y), and since d(a, b) = 0 and the remaining part of the subtour belonging to the rectangle R₁ yields the same contribution to the objective value, the contribution of this new subtour to the objective value is smaller than or equal to the contribution of the part of the original subtour lying in the rectangle R₁. The same argument can be used for the second rectangle R₂ and for all other subtours crossing the boundary between the rectangles R₁ and R₂.

Figure 4: Example illustrating the superadditivity of the boundary 2-matching functional L_B.

Theorem 11. Let G = (V, E) be a random Euclidean graph with n = |V| vertices and let d : E → R₀⁺ be the Euclidean distance function. Furthermore, let M₂(G, d) be the length of an optimal 2-matching. Then

lim_{n→∞} M₂(G, d) / √n = α   c.c., where α > 0.     (33)

Proof. The theorem immediately follows from Theorem 9 and Lemma 10.

Figure 8: Instance RE A 150: Main idea of our approach - iteration 0.

Table 1: Comparison of the behavior of the algorithm for different representations of subtour elimination constraints. Technical details about the setup of the experiments can be found in Subsection 4.1.

Table 6: Proportion of used and proportion of covered subtours for our hierarchical clustering approaches with the upper bound u = 4n/log₂n, which (i) does not allow (HC | 4n/log₂n) and which (ii) does allow (HCD | 4n/log₂n) dropping the unused subtour elimination constraints.
• HC | 4n/log₂n - hierarchical clustering; the constraints cannot be dropped and the maximum size of a solved cluster is u = 4n/log₂n
• HCD | 4n/log₂n - hierarchical clustering; the constraints can be dropped and the maximum size of a solved cluster is u = 4n/log₂n