Worst Case and Probabilistic Analysis of the 2-Opt Algorithm for the TSP

2-Opt is probably the most basic local search heuristic for the TSP. This heuristic achieves amazingly good results on "real world" Euclidean instances both with respect to running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. Not even its worst case running time on 2-dimensional Euclidean instances was known so far. We clarify this issue by presenting, for every p ∈ ℕ, a family of L_p instances on which 2-Opt can take an exponential number of steps. Previous probabilistic analyses were restricted to instances in which n points are placed uniformly at random in the unit square [0, 1]^2, where it was shown that the expected number of steps is bounded by Õ(n^{10}) for Euclidean instances. We consider a more advanced model of probabilistic instances in which the points can be placed independently according to general distributions on [0, 1]^d, for an arbitrary d ≥ 2. In particular, we allow different distributions for different points. We study the expected number of local improvements in terms of the number n of points and the maximal density φ of the probability distributions.
We show an upper bound of Õ(n^{4+1/3} · φ^{8/3}) on the expected length of any 2-Opt improvement path. When starting with an initial tour computed by an insertion heuristic, the upper bound on the expected number of steps improves even to Õ(n^{4+1/3−1/d} · φ^{8/3}). If the distances are measured according to the Manhattan metric, then the expected number of steps is bounded by Õ(n^{4−1/d} · φ). In addition, we prove an upper bound of O(φ^{1/d}) on the expected approximation factor with respect to all L_p metrics. Let us remark that our probabilistic analysis covers as special cases the uniform input model with φ = 1 and a smoothed analysis with Gaussian perturbations of standard deviation σ with φ ∼ 1/σ^d.

Besides random metric instances, we also consider an alternative random input model in which an adversary specifies a graph and distributions for the edge lengths in this graph. In this model, we achieve even better results on the expected number of local improvements of 2-Opt.

Introduction
In the traveling salesperson problem (TSP), we are given a set of vertices and for each pair of distinct vertices a distance. The goal is to find a tour of minimum length that visits every vertex exactly once and returns to the initial vertex at the end. Despite many theoretical analyses and experimental evaluations of the TSP, there is still a considerable gap between the theoretical results and the experimental observations. One important special case is the Euclidean TSP in which the vertices are points in ℝ^d, for some d ∈ ℕ, and the distances are measured according to the Euclidean metric. This special case is known to be NP-hard in the strong sense [Pap77], but it admits a polynomial time approximation scheme (PTAS), shown independently in 1996 by Arora [Aro98] and Mitchell [Mit99]. These approximation schemes are based on dynamic programming. However, the most successful algorithms on practical instances rely on the principle of local search, and very little is known about their complexity.
The 2-Opt algorithm is probably the most basic local search heuristic for the TSP. 2-Opt starts with an arbitrary initial tour and incrementally improves this tour by making successive improvements that exchange two of the edges in the tour with two other edges. More precisely, in each improving step the 2-Opt algorithm selects two edges {u_1, u_2} and {v_1, v_2} from the tour such that u_1, u_2, v_1, v_2 are distinct and appear in this order in the tour, and it replaces these edges by the edges {u_1, v_1} and {u_2, v_2}, provided that this change decreases the length of the tour. The algorithm terminates in a local optimum in which no further improving step is possible. We use the term 2-change to denote a local improvement made by 2-Opt. This simple heuristic performs amazingly well on "real-life" Euclidean instances like, e.g., the ones in the well-known TSPLIB [Rei91]. Usually the 2-Opt heuristic needs a clearly subquadratic number of improving steps until it reaches a local optimum, and the computed solution is within a few percentage points of the global optimum [JM97].
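As an illustration, the improvement rule above can be sketched as a simple local search loop. This is a minimal sketch of ours (with a hypothetical `dist` callback on vertex indices), not the implementation evaluated in the experimental studies cited here.

```python
def tour_length(tour, dist):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def two_opt(tour, dist):
    """Run 2-Opt until no improving 2-change exists (a local optimum)."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        # consider every pair of tour edges {u1,u2}, {v1,v2} with
        # u1, u2, v1, v2 distinct and appearing in this order in the tour
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these two edges share the vertex tour[0]
                u1, u2 = tour[i], tour[i + 1]
                v1, v2 = tour[j], tour[(j + 1) % n]
                # replace {u1,u2} and {v1,v2} by {u1,v1} and {u2,v2}
                # whenever this strictly decreases the tour length
                if dist(u1, v1) + dist(u2, v2) < dist(u1, u2) + dist(v1, v2) - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Reversing the segment tour[i+1..j] realizes the exchange, since the subtour between the two removed edges is traversed in the opposite direction afterwards.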
There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. Let us first discuss the number of local improvement steps made by 2-Opt before it finds a locally optimal solution. When talking about the number of local improvements, it is convenient to consider the state graph. The vertices in this graph correspond to the possible tours, and there is an arc from a vertex v to a vertex u if u is obtained from v by performing an improving 2-Opt step. On the positive side, van Leeuwen and Schoone consider a 2-Opt variant for the Euclidean plane in which only steps are allowed that remove a crossing from the tour. Such steps can introduce new crossings, but van Leeuwen and Schoone [vLS81] show that after O(n^3) steps, 2-Opt has found a tour without any crossing. On the negative side, Lueker [Lue75] constructs TSP instances whose state graphs contain exponentially long paths. Hence, 2-Opt can take an exponential number of steps before it finds a locally optimal solution. This result is generalized to k-Opt, for arbitrary k ≥ 2, by Chandra, Karloff, and Tovey [CKT99]. These negative results, however, use arbitrary graphs whose edge lengths do not satisfy the triangle inequality. Hence, they leave open the question about the worst case complexity of 2-Opt on metric TSP instances. In particular, Chandra, Karloff, and Tovey ask whether it is possible to construct Euclidean TSP instances on which 2-Opt can take an exponential number of steps. We resolve this question by constructing such instances in the Euclidean plane. In chip design applications, TSP instances often arise in which the distances are measured according to the Manhattan metric. Also for this metric and for every other L_p metric, we construct instances with exponentially long paths in the 2-Opt state graph.
Theorem 1.1. For every p ∈ ℕ ∪ {∞} and n ∈ ℕ, there is a two-dimensional TSP instance with 16n vertices in which the distances are measured according to the L_p metric and whose state graph contains a path of length 2^{n+4} − 22.
For Euclidean instances in which n points are placed uniformly at random in the unit square, Kern [Ker89] shows that the length of the longest path in the state graph is bounded by O(n^{16}) with probability 1 − c/n for some constant c. Chandra, Karloff, and Tovey [CKT99] improve this result by bounding the expected length of the longest path in the state graph by O(n^{10} log n). That is, independent of the initial tour and the choice of the local improvements, the expected number of 2-changes is bounded by O(n^{10} log n). For instances in which n points are placed uniformly at random in the unit square and the distances are measured according to the Manhattan metric, Chandra, Karloff, and Tovey show that the expected length of the longest path in the state graph is bounded by O(n^6 log n).
We consider a more general probabilistic input model and improve the previously known bounds. The probabilistic model underlying our analysis allows different vertices to be placed according to different continuous probability distributions in the unit hypercube [0, 1]^d, for some constant dimension d ≥ 2. The distribution of a vertex v_i is defined by a density function f_i : [0, 1]^d → [0, φ] for some given φ ≥ 1. Our upper bounds depend on the number n of vertices and the upper bound φ on the density. We denote instances created by this input model as φ-perturbed Euclidean or Manhattan instances, depending on the underlying metric. The parameter φ specifies how close the analysis is to a worst case analysis: the larger φ is, the better worst case instances can be approximated by the distributions. For φ = 1 and d = 2, every point has a uniform distribution over the unit square, and hence the input model equals the uniform model analyzed before. Our results narrow the gap between the subquadratic number of improving steps observed in experiments [JM97] and the upper bounds from the probabilistic analysis. With slight modifications, this model also covers a smoothed analysis, in which first an adversary specifies the positions of the points and after that each position is slightly perturbed by adding a Gaussian random variable with small standard deviation σ. In this case, one has to set φ = 1/(√(2π) · σ)^d. We also consider a model in which an arbitrary graph G = (V, E) is given and, for each edge e ∈ E, a probability distribution according to which the edge length d(e) is chosen independently of the other edge lengths. Again, we restrict the choice of distributions to those that can be represented by density functions f_e : [0, 1] → [0, φ] with maximal density at most φ for a given φ ≥ 1. We denote inputs created by this input model as φ-perturbed graphs. Observe that in this input model only the distances are perturbed, whereas the graph structure is not changed by the randomization. This can be useful if one wants to explicitly prohibit certain edges. However, if the graph G is not complete, one has to initialize 2-Opt with a Hamiltonian cycle to start with.
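One concrete way to instantiate the point-based input model: place each point uniformly in its own adversarially chosen subcube of volume 1/φ, so that every density is bounded by φ. The sketch below (function name and random anchor choice are ours, not from the paper) generates such φ-perturbed point sets.

```python
import random

def sample_phi_perturbed(n, d, phi, seed=0):
    """Sample n points in [0,1]^d, the i-th uniform in its own subcube of
    volume 1/phi; the corresponding density f_i equals phi inside the
    subcube and 0 outside, hence is bounded by phi as the model requires."""
    rng = random.Random(seed)
    side = (1.0 / phi) ** (1.0 / d)  # side length of a subcube of volume 1/phi
    points = []
    for _ in range(n):
        # an adversarially chosen corner would go here; we pick it at random
        corner = [rng.uniform(0.0, 1.0 - side) for _ in range(d)]
        points.append(tuple(c + rng.uniform(0.0, side) for c in corner))
    return points
```

For φ = 1 the subcube is the whole unit cube and the model degenerates to the uniform model; letting φ grow concentrates each point, which is how worst case instances can be approximated.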
We prove the following theorem about the expected length of the longest path in the 2-Opt state graph for the three probabilistic input models discussed above. It is assumed that the dimension d ≥ 2 is an arbitrary constant. Usually, 2-Opt is initialized with a tour computed by some tour construction heuristic. One particular class are insertion heuristics, which insert the vertices one after another into the tour. We show that also from a theoretical point of view, using such an insertion heuristic yields a significant improvement for metric TSP instances because the initial tour 2-Opt starts with is much shorter than the longest possible tour. In the following theorem, we summarize our results on the expected number of local improvements.
Theorem 1.3. The expected number of steps performed by 2-Opt
a) is O(n^{4−1/d} · log n · φ) on φ-perturbed Manhattan instances with n points when 2-Opt is initialized with a tour obtained by an arbitrary insertion heuristic, and
b) is Õ(n^{4+1/3−1/d} · φ^{8/3}) on φ-perturbed Euclidean instances with n points when 2-Opt is initialized with a tour obtained by an arbitrary insertion heuristic.
In fact, our analysis shows not only that the expected number of local improvements is polynomially bounded, but also that the second moment, and hence the variance, is polynomially bounded for φ-perturbed Manhattan and graph instances. For the Euclidean metric, we cannot bound the variance polynomially, but we can bound the 3/2-th moment.
Similar to the running time, the good approximation ratios obtained by 2-Opt on practical instances cannot be explained by a worst-case analysis. In fact, there are quite negative results on the worst-case behavior of 2-Opt. For example, Chandra, Karloff, and Tovey [CKT99] show that there are Euclidean instances in the plane for which 2-Opt has local optima whose costs are Ω(log n / log log n) times larger than the optimal costs. However, the same authors also show that the expected approximation ratio of the worst local optimum for instances with n points drawn uniformly at random from the unit square is bounded from above by a constant. We generalize their result to our input model in which different points can have different distributions with bounded density φ and to all L_p metrics.
Theorem 1.4. Let p ∈ ℕ ∪ {∞}. For φ-perturbed L_p instances, the expected approximation ratio of the worst tour that is locally optimal for 2-Opt is bounded by O(φ^{1/d}).
The remainder of the paper is organized as follows. We start by stating some basic definitions and notations in Section 2. In Section 3, we present the lower bounds. In Section 4, we analyze the expected number of local improvements and prove Theorems 1.2 and 1.3. Finally, in Sections 5 and 6, we prove Theorem 1.4 about the expected approximation factor and we discuss the relation between our analysis and a smoothed analysis.

Preliminaries
An instance of the TSP consists of a set V = {v_1, ..., v_n} of vertices (depending on the context, synonymously referred to as points) and a symmetric distance function d : V × V → ℝ_{≥0}. The goal is to find a Hamiltonian cycle of minimum length. We also use the term tour to denote a Hamiltonian cycle. For a natural number n ∈ ℕ, we denote the set {1, ..., n} by [n].
A pair (V, d) of a nonempty set V and a function d : V × V → ℝ_{≥0} is called a metric space if for all x, y, z ∈ V the following properties are satisfied: d(x, y) = 0 if and only if x = y, d(x, y) = d(y, x) (symmetry), and d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality). A well-known class of metrics on ℝ^d is the class of L_p metrics. For p ∈ ℕ, the distance d_p(x, y) of two points x ∈ ℝ^d and y ∈ ℝ^d with respect to the L_p metric is given by d_p(x, y) = (|x_1 − y_1|^p + · · · + |x_d − y_d|^p)^{1/p}. The L_1 metric is often called Manhattan metric, and the L_2 metric is well-known as Euclidean metric. For p → ∞, the L_p metric converges to the L_∞ metric defined by the distance function d_∞(x, y) = max_{i ∈ [d]} |x_i − y_i|. We also use the terms Manhattan instance and Euclidean instance to denote L_1 and L_2 instances, respectively. Furthermore, if p is clear from context, we write d instead of d_p.
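For concreteness, the L_p distances just defined can be computed as follows (a small helper of ours, with p = float('inf') standing in for the L_∞ metric):

```python
def lp_dist(x, y, p):
    """d_p(x, y) = (sum_i |x_i - y_i|^p)^(1/p); p = inf gives max_i |x_i - y_i|."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if p == float('inf'):
        return max(diffs)
    return sum(t ** p for t in diffs) ** (1.0 / p)
```

Note how d_p approaches d_∞ as p grows, mirroring the convergence statement above.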
A tour construction heuristic for the TSP incrementally constructs a tour and stops as soon as a valid tour is created. Usually, a tour constructed by such a heuristic is used as the initial solution 2-Opt starts with. A well-known class of tour construction heuristics for metric TSP instances are the so-called insertion heuristics. These heuristics insert the vertices into the tour one after another, and every vertex is inserted between two consecutive vertices in the current tour where it fits best. To make this more precise, let T_i denote a subtour on a subset S_i of i vertices, and suppose v ∉ S_i is the next vertex to be inserted. If (x, y) denotes an edge in T_i that minimizes d(x, v) + d(v, y) − d(x, y), then the new tour T_{i+1} is obtained from T_i by deleting the edge (x, y) and adding the edges (x, v) and (v, y). Depending on the order in which the vertices are inserted into the tour, one distinguishes between several different insertion heuristics. Rosenkrantz et al. [RSI77] show an upper bound of log n + 1 on the approximation factor of any insertion heuristic on metric TSP instances. Furthermore, they show that two variants, which they call nearest insertion and cheapest insertion, achieve an approximation ratio of 2 for metric TSP instances. The nearest insertion heuristic always inserts the vertex with the smallest distance to the current tour, and the cheapest insertion heuristic always inserts the vertex whose insertion leads to the cheapest tour T_{i+1}.
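A minimal sketch of the nearest insertion heuristic described above (our own version, with deterministic tie-breaking; `dist` operates on vertex indices 0..n−1):

```python
def nearest_insertion(n, dist):
    """Build a tour on vertices 0..n-1: repeatedly take the vertex with the
    smallest distance to the current subtour and insert it between the
    consecutive vertices x, y minimizing d(x, v) + d(v, y) - d(x, y)."""
    tour = [0]
    remaining = list(range(1, n))
    while remaining:
        # nearest insertion: pick the vertex closest to the current subtour
        v = min(remaining, key=lambda u: min(dist(u, w) for w in tour))
        remaining.remove(v)
        # cheapest position: the tour edge whose replacement costs least
        best_i = min(range(len(tour)),
                     key=lambda i: dist(tour[i], v)
                                   + dist(v, tour[(i + 1) % len(tour)])
                                   - dist(tour[i], tour[(i + 1) % len(tour)]))
        tour.insert(best_i + 1, v)
    return tour
```

Replacing the selection rule for `v` by "the vertex whose cheapest insertion position is cheapest overall" would turn this into the cheapest insertion heuristic.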

Exponential Lower Bounds
In this section, we answer Chandra, Karloff, and Tovey's question [CKT99] whether it is possible to construct TSP instances in the Euclidean plane on which 2-Opt can take an exponential number of steps. We present, for every p ∈ ℕ ∪ {∞}, a family of two-dimensional L_p instances with exponentially long sequences of improving 2-changes. In Section 3.1, we present our construction for the Euclidean plane, and in Section 3.2 we extend this construction to general L_p metrics.

Exponential Lower Bound for the Euclidean Plane
In Lueker's construction [Lue75], many of the 2-changes remove two edges that are far apart in the current tour, in the sense that many vertices are visited between them, no matter how the direction of the tour is chosen. Our construction differs significantly from the previous one, as the 2-changes in our construction affect the tour only locally. The instances we construct are composed of gadgets of constant size. Each of these gadgets has a zero state and a one state, and there exists a sequence of improving 2-changes starting in the zero state and eventually leading to the one state. Let G_0, ..., G_{n−1} denote these gadgets. If gadget G_i with i > 0 has reached state one, then it can be reset to its zero state by gadget G_{i−1}. The crucial property of our construction is that whenever a gadget G_{i−1} changes its state from zero to one, it resets gadget G_i twice. Hence, if in the initial tour gadget G_0 is in its zero state and every other gadget is in state one, then for every i with 0 ≤ i ≤ n − 1, gadget G_i performs 2^i state changes from zero to one as, for i > 0, gadget G_i is reset 2^i times.
Every gadget is composed of two subgadgets, which we refer to as blocks. Each of these blocks consists of four vertices that are consecutively visited in the tour. For i ∈ {0, ..., n − 1} and j ∈ [2], let B^i_1 and B^i_2 denote the blocks of gadget G_i and let A^i_j, B^i_j, C^i_j, and D^i_j denote the four points block B^i_j consists of. If one ignores certain intermediate configurations that arise when one gadget resets another one, our construction ensures the following properties: the points A^i_j, B^i_j, C^i_j, and D^i_j are always consecutive in the tour, the edge between B^i_j and C^i_j is contained in every tour, and B^i_j and C^i_j are always the inner points of the block. That is, if one excludes the intermediate configurations, only the configurations A^i_j B^i_j C^i_j D^i_j and A^i_j C^i_j B^i_j D^i_j occur during the sequence of 2-changes. Observe that the change from one of these configurations to the other corresponds to a single 2-change in which the edges A^i_j B^i_j and C^i_j D^i_j are replaced by the edges A^i_j C^i_j and B^i_j D^i_j, or vice versa. In the following, we assume that the sum d(A^i_j, B^i_j) + d(C^i_j, D^i_j) is smaller than the sum d(A^i_j, C^i_j) + d(B^i_j, D^i_j), and we refer to the configuration A^i_j B^i_j C^i_j D^i_j as the short state of the block and to the configuration A^i_j C^i_j B^i_j D^i_j as the long state. Another property of our construction is that neither the order in which the blocks are visited nor the order of the gadgets is changed during the sequence of 2-changes. Again with the exception of the intermediate configurations, the order in which the blocks are visited is

B^0_1 B^0_2 B^1_1 B^1_2 · · · B^{n−1}_1 B^{n−1}_2 (see Figure 3.1).

[Figure 3.1: In the illustration, m denotes n − 1. Every tour that occurs in the sequence of 2-changes contains the thick edges. For each block, either both solid or both dashed edges are contained. In the former case the block is in its short state; in the latter case the block is in its long state.]

Due to the aforementioned properties, we can describe every non-intermediate tour that occurs during the sequence of 2-changes completely by specifying for every block whether it is in its short state or in its long state. In the following, we denote the state of a gadget G_i by a pair (x_1, x_2) with x_j ∈ {S, L}, meaning that block B^i_j is in its short state if and only if x_j = S. Since every gadget consists of two blocks, there are four possible states for each gadget. However, only three of them appear in the sequence of 2-changes, namely (L, L), (S, L), and (S, S). We call state (L, L) the zero state and state (S, S) the one state. In order to guarantee the existence of an exponentially long sequence of 2-changes, the gadgets we construct possess the following property.
Property 3.1. If gadget G_i is in state (L, L) or (S, L) and gadget G_{i+1} is in state (S, S), then there exists a sequence of seven consecutive 2-changes terminating with gadget G_i being in state (S, L) or (S, S), respectively, and gadget G_{i+1} in state (L, L). In this sequence only edges of and between the gadgets G_i and G_{i+1} are involved.
If this property is satisfied and if in the initial tour gadget G_0 is in its zero state (L, L) and every other gadget is in its one state (S, S), then there exists an exponentially long sequence of 2-changes in which gadget G_i changes 2^i times from state zero to state one, as the following lemma shows.

Lemma 3.2. If, for i ∈ {0, ..., n − 1}, gadget G_i is in the zero state (L, L) and all gadgets G_j with j > i are in the one state (S, S), then there exists a sequence of 2^{n+3−i} − 14 consecutive 2-changes in which only edges of and between the gadgets G_j with j ≥ i are involved and that terminates in a state in which all gadgets G_j with j ≥ i are in the one state.
Proof. We prove the lemma by induction on i. If gadget G_{n−1} is in state (L, L), then it can change its state with two 2-changes to (S, S) without affecting the other gadgets. Hence, the lemma is true for i = n − 1. Now assume that the lemma is true for i + 1 and consider a state in which gadget G_i is in state (L, L) and all gadgets G_j with j > i are in state (S, S). Due to Property 3.1, there exists a sequence of seven consecutive 2-changes in which only edges of and between G_i and G_{i+1} are involved, terminating with G_i being in state (S, L) and G_{i+1} being in state (L, L). By the induction hypothesis, there exists a sequence of 2^{n+2−i} − 14 2-changes after which all gadgets G_j with j > i are in state (S, S). Then, due to Property 3.1, there exists a sequence of seven consecutive 2-changes in which only G_i changes its state from (S, L) to (S, S) while resetting gadget G_{i+1} again from (S, S) to (L, L). Hence, we can apply the induction hypothesis again, yielding that after another 2^{n+2−i} − 14 2-changes all gadgets G_j with j ≥ i are in state (S, S). This concludes the proof, as the number of 2-changes performed is 14 + 2(2^{n+2−i} − 14) = 2^{n+3−i} − 14.
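The counting in this proof can be replayed mechanically: the recursion "seven steps, recurse, seven steps, recurse" with a base case of two 2-changes reproduces the closed form 2^{n+3−i} − 14. The snippet below is a small verification script of ours, not part of the proof.

```python
def steps(i, n):
    """Number of 2-changes in the sequence of Lemma 3.2, from the recursion
    used in its proof: the last gadget G_{n-1} flips with two 2-changes,
    and otherwise 7 steps + full recursion + 7 steps + full recursion."""
    if i == n - 1:
        return 2
    return 7 + steps(i + 1, n) + 7 + steps(i + 1, n)

# matches the closed form 2^(n+3-i) - 14 for all i
assert all(steps(i, 12) == 2 ** (12 + 3 - i) - 14 for i in range(12))
```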

Detailed description of the sequence of steps
Now we describe in detail how a sequence of 2-changes satisfying Property 3.1 can be constructed. First, we assume that gadget G_i is in state (S, L) and that gadget G_{i+1} is in state (S, S). Under this assumption, there are three consecutive blocks, namely B^i_2, B^{i+1}_1, and B^{i+1}_2, such that the leftmost one is in its long state and the other blocks are in their short states. We need to find a sequence of 2-changes in which only edges of and between these three blocks are involved and after which the first block is in its short state and the other blocks are in their long states. Remember that when the edges {u_1, u_2} and {v_1, v_2} are removed from the tour and the vertices appear in the order u_1, u_2, v_1, v_2 in the current tour, then the edges {u_1, v_1} and {u_2, v_2} are added to the tour and the subtour between u_2 and v_1 is visited in reverse order. If, e.g., the current tour corresponds to the permutation (1, 2, 3, 4, 5, 6, 7) and the edges {1, 2} and {5, 6} are removed, then the new tour is (1, 5, 4, 3, 2, 6, 7). The following sequence of 2-changes has the desired properties. Brackets indicate the edges that are removed from the tour.
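The segment-reversal view of a 2-change can be checked directly on the permutation example above (a throwaway helper of ours):

```python
def two_change(tour, i, j):
    """Remove the tour edges (tour[i], tour[i+1]) and (tour[j], tour[j+1])
    and reconnect by reversing the segment between them."""
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

# removing the edges {1,2} and {5,6} from the tour (1,2,3,4,5,6,7)
assert two_change([1, 2, 3, 4, 5, 6, 7], 0, 4) == [1, 5, 4, 3, 2, 6, 7]
```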

[Displayed sequence of seven 2-changes not recovered.]

If gadget G_i is in state (L, L) instead of state (S, L), a sequence of steps that satisfies Property 3.1 can be constructed analogously. Additionally, one has to take into account that the three involved blocks B^i_1, B^{i+1}_1, and B^{i+1}_2 are not consecutive in the tour, but that block B^i_2 lies between them. However, one can easily verify that this block is not affected by the sequence of 2-changes, as after the seven 2-changes have been performed, the block is in the same state and at the same position as before.

Embedding the construction into the Euclidean plane
The only missing step in the proof of Theorem 1.1 for the Euclidean plane is to find points such that all of the 2-changes that we described in the previous section are improving. We specify the positions of the points of gadget G_{n−1} and give a rule how the points of gadget G_i can be derived when all points of gadget G_{i+1} have already been placed. In our construction, it happens that different points have exactly the same coordinates. This is only for ease of notation; if one wants to obtain a TSP instance in which distinct points have distinct coordinates, one can slightly move these points without affecting the property that all 2-changes are improving. For j ∈ [2], we choose A^{n−1}_j = (0, 0), B^{n−1}_j = (1, 0), C^{n−1}_j = (−0.1, 1.4), and D^{n−1}_j = (−1.1, 4.8). We place the points of gadget G_i as follows (see Figure 3.2):
1. Start with the coordinates of the points of gadget G_{i+1}.
2. Rotate these points around the origin by 3π/2.
3. Scale each coordinate with a factor of 3.
4. Translate the points by the vector (−1.2, 0.1).
From this construction it follows that each gadget is a scaled, rotated, and translated copy of gadget G_{n−1}. If one has a set of points in the Euclidean plane that admit certain improving 2-changes, then these 2-changes are still improving if one scales, rotates, and translates all points in the same manner. Hence, it suffices to show that the sequences in which gadget G_{n−2} resets gadget G_{n−1} from (S, S) to (L, L) are improving. There are two such sequences; in the first one, gadget G_{n−2} changes its state from (L, L) to (S, L), in the second one, gadget G_{n−2} changes its state from (S, L) to (S, S). Since the coordinates of the points in both blocks of gadget G_{n−2} are the same, the inequalities for both sequences are also identical. The improvements made by the steps in both sequences are bounded from below by 0.03, 0.91, 0.06, 0.05, 0.43, 0.06, and 0.53. This concludes the proof of Theorem 1.1 for the Euclidean plane, as it shows that all 2-changes in Lemma 3.2 are improving.
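The placement rule can be replayed numerically. The sketch below applies the stated transformation, assuming the rotation by 3π/2 is counterclockwise, i.e. maps (x, y) to (y, −x); that orientation is our assumption, not stated in the text.

```python
def next_gadget(points):
    """One application of the construction rule: rotate around the origin
    by 3*pi/2, scale each coordinate by 3, translate by (-1.2, 0.1)."""
    rotated = [(y, -x) for (x, y) in points]          # rotation by 3*pi/2
    scaled = [(3 * x, 3 * y) for (x, y) in rotated]   # scaling by 3
    return [(x - 1.2, y + 0.1) for (x, y) in scaled]  # translation

# the block points of gadget G_{n-1}: A, B, C, D
g = [(0.0, 0.0), (1.0, 0.0), (-0.1, 1.4), (-1.1, 4.8)]
g_prev = next_gadget(g)  # candidate coordinates for gadget G_{n-2}
```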

Exponential Lower Bound for L_p Metrics
We were not able to find a set of points in the plane such that all 2-changes in Lemma 3.2 are improving with respect to the Manhattan metric. Therefore, we modify the construction of the gadgets and the sequence of 2-changes. Our construction for the Manhattan metric is based on the construction for the Euclidean plane, but it does not possess the property that every gadget resets its neighboring gadget twice. This property is only true for half of the gadgets. To be more precise, we construct two different types of gadgets, which we call reset gadgets and propagation gadgets. Reset gadgets perform the same sequence of 2-changes as the gadgets that we constructed for the Euclidean plane. Propagation gadgets also have the same structure as the gadgets for the Euclidean plane, but when such a gadget changes its state from (L, L) to (S, S), it resets its neighboring gadget only once. Due to this relaxed requirement, it is possible to find points in the Manhattan plane whose distances satisfy all necessary inequalities. Instead of n gadgets, our construction consists of 2n gadgets, namely n propagation gadgets G^P_0, ..., G^P_{n−1} and n reset gadgets G^R_0, ..., G^R_{n−1}. The order in which these gadgets appear in the tour is G^P_0, G^R_0, G^P_1, G^R_1, ..., G^P_{n−1}, G^R_{n−1}. As before, every gadget consists of two blocks, and the order in which the blocks and the gadgets are visited does not change during the sequence of 2-changes. Consider a reset gadget G^R_i and its neighboring propagation gadget G^P_{i+1}. Then Property 3.1 is still satisfied. That is, if G^R_i is in state (L, L) or (S, L) and G^P_{i+1} is in state (S, S), then there exists a sequence of seven consecutive 2-changes resetting gadget G^P_{i+1} to state (L, L) and leaving gadget G^R_i in state (S, L) or (S, S), respectively. The situation is different for a propagation gadget G^P_i and its neighboring reset gadget G^R_i. In this case, if G^P_i is in state (L, L), it first changes its state with a single 2-change to (S, L). After that, gadget G^P_i changes its state to (S, S) while resetting gadget G^R_i from state (S, S) to state (L, L) by a sequence of seven consecutive 2-changes. In both cases, the sequences of 2-changes in which one block changes from its long to its short state while resetting two blocks of the neighboring gadget from their short to their long states are chosen analogously to the ones for the Euclidean plane described in Section 3.1.1.
In the initial tour, only gadget G^P_0 is in state (L, L) and every other gadget is in state (S, S). With similar arguments as for the Euclidean plane, we can show that gadget G^R_i is reset from its one state (S, S) to its zero state (L, L) 2^i times and that the total number of steps is 2^{n+4} − 22.

Embedding the construction into the Manhattan plane
Similar to the construction in the Euclidean plane, the points in both blocks of a reset gadget G^R_i have the same coordinates. Also in this case, one can slightly move all the points without affecting the inequalities if one wants distinct coordinates for distinct points. Again, we choose points for the gadgets G^P_{n−1} and G^R_{n−1} and describe how the points of the gadgets G^P_i and G^R_i can be chosen when the points of the gadgets G^P_{i+1} and G^R_{i+1} are already chosen. For j ∈ [2], we choose A^{n−1}_{R,j} = (0, 1), B^{n−1}_{R,j} = (0, 0), C^{n−1}_{R,j} = (−0.7, 0.1), and D^{n−1}_{R,j} = (−1.2, 0.08). Furthermore, we choose …, C^{n−1}_{P,2} = (1.9, −1.5), and D^{n−1}_{P,2} = (−0.8, −1.1). Before we describe how the points of the other gadgets are chosen, we first show that the 2-changes within and between the gadgets G^P_{n−1} and G^R_{n−1} are improving. … Also the 2-change in which G^P_{n−1} changes its state from (L, L) to (S, L) is improving because … The improvements made by the 2-changes in the sequence in which G^P_{n−1} changes its state from (S, L) to (S, S) while resetting G^R_{n−1} are 0.04, 0.4, 0.04, 0.16, 0.4, 0.04, and 0.6.
Again, our construction possesses the property that each pair of gadgets G P i and G R i is a scaled and translated version of the pair G P n−1 and G R n−1 .Since we have relaxed the requirements for the gadgets, we do not even need rotations here.We place the points of G P i and G R i as follows: 1. Start with the coordinates specified for the points of gadgets G P i+1 and G R i+1 .2. Scale each coordinate with a factor of 7.7.
…46, 1.07), and D^{n-2}_{R,j} = (-7.31, 0.916). Similar to our construction for the Euclidean plane, it suffices to show that the sequences in which gadget G^R_{n-2} resets gadget G^P_{n-1} from (S, S) to (L, L) are improving. As the coordinates of the points in the two blocks of gadget G^R_{n-2} are the same, the inequalities for both sequences are also identical. The improvements made by the steps in both sequences are 1.06, 1.032, 0.168, 1.14, 0.06, 0.4, and 0.012. This concludes the proof of Theorem 1.1 for the Manhattan metric, as it shows that all 2-changes are improving.
Let us remark that this also implies Theorem 1.1 for the L_∞ metric, because distances with respect to the L_∞ metric coincide with distances with respect to the Manhattan metric if one rotates all points by π/4 around the origin and scales every coordinate with 1/√2.
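The identity behind this remark, max{|x|, |y|} = (|x - y| + |x + y|)/2, can be checked numerically. The following sketch (our own illustration, not part of the proof) verifies that L_∞ distances coincide with L_1 distances after rotating by π/4 and scaling by 1/√2:

```python
import math
import random

def linf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def rotate_scale(p):
    # Rotate by pi/4 around the origin, then scale every coordinate by 1/sqrt(2).
    c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
    x, y = c * p[0] - s * p[1], s * p[0] + c * p[1]
    return (x / math.sqrt(2), y / math.sqrt(2))

random.seed(0)
for _ in range(1000):
    p = (random.uniform(-1, 1), random.uniform(-1, 1))
    q = (random.uniform(-1, 1), random.uniform(-1, 1))
    assert abs(linf(p, q) - l1(rotate_scale(p), rotate_scale(q))) < 1e-12
```

Since the transformation is linear, it maps differences of points to differences of points, which is all that matters for distances.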

Embedding the construction into general L p metrics
It is also possible to embed our construction into the L_p metric for p ≥ 3. For j ∈ [2], we choose A^{n-1}_{R,j} = (0, 1), B^{n-1}_{R,j} = (0, 0), C^{n-1}_{R,j} = (3.5, 3.7), and …, C^{n-1}_{P,2} = (-6.5, -1.6), and D^{n-1}_{P,2} = (-1.5, -7.1). We place the points of G^P_i and G^R_i as follows:
1. Start with the coordinates specified for the points of gadgets G^P_{i+1} and G^R_{i+1}.
2. Rotate these points around the origin by π.
3. Scale each coordinate with a factor of 7.8.
4. Translate the points by the vector (7.2, 5.3).
It can be calculated that the distances of these points, when measured according to the L_p metric for any p ≥ 3, satisfy all necessary inequalities.

Expected Number of 2-Changes
We analyze the expected number of 2-changes on random d-dimensional Manhattan and Euclidean instances, for an arbitrary constant dimension d ≥ 2, and on general TSP instances. The previous results on the expected number of 2-changes due to Kern [Ker89] and Chandra, Karloff, and Tovey [CKT99] are based on the analysis of the improvement made by the smallest improving 2-change. If the smallest improvement is not too small, then the number of improvements cannot be large. In our analyses for the Manhattan and the Euclidean metric, we consider not only single steps but certain pairs of steps. We show that the smallest improvement made by any such pair is typically much larger than the improvement made by a single step, which yields our improved bounds. Our approach is not restricted to pairs of steps. One could also consider sequences of steps of length k for any small enough k. In fact, for general φ-perturbed graphs with m edges, we consider sequences of length √(log m). The reason why we can analyze longer sequences for general graphs is that these inputs possess more randomness than φ-perturbed Manhattan and Euclidean instances: every edge length is a random variable that is independent of the other edge lengths. Hence, the analysis for general φ-perturbed graphs demonstrates the limits of our approach under optimal conditions. For Manhattan and Euclidean instances, the gain of considering longer sequences is small due to the dependencies between the edge lengths.

Manhattan Instances
In this section, we analyze the expected number of 2-changes on φ-perturbed Manhattan instances. First we prove a weaker bound than the one in Theorem 1.2. The proof of this weaker bound illustrates our approach and reveals the problems one has to tackle in order to improve the upper bounds. It is solely based on an analysis of the smallest improvement made by any of the possible 2-Opt steps. If, with high probability, every 2-Opt step decreases the tour length by at least some inverse polynomial amount, then with high probability only polynomially many 2-Opt steps are possible before a local optimum is reached.
Theorem 4.1. Starting with an arbitrary tour, the expected number of steps performed by 2-Opt on φ-perturbed Manhattan instances with n vertices is O(n^6 · log n · φ).
Proof. In order to prove the desired bound on the expected convergence time, we only need two simple observations. First, the initial tour can have length at most dn, as the number of edges is n and every edge has length at most d. Second, with high probability, every 2-Opt step decreases the length of the tour by at least some inverse polynomial amount. The latter can be shown by a union bound over all possible 2-Opt steps. Consider a fixed 2-Opt step S, let e_1 and e_2 denote the edges removed from the tour in step S, and let e_3 and e_4 denote the edges added to the tour. Then the improvement ∆(S) of step S can be written as

∆(S) = d(e_1) + d(e_2) - d(e_3) - d(e_4).    (4.1)

Without loss of generality let e_1 = (v_1, v_2) be the edge between the vertices v_1 and v_2, and let e_2 = (v_3, v_4), e_3 = (v_1, v_3), and e_4 = (v_2, v_4). Furthermore, for i ∈ {1, . . ., 4}, let x_i ∈ R^d denote the coordinates of vertex v_i. Then the improvement ∆(S) of step S can be written as

∆(S) = ‖x_1 - x_2‖_1 + ‖x_3 - x_4‖_1 - ‖x_1 - x_3‖_1 - ‖x_2 - x_4‖_1.

Depending on the order of the coordinates, ∆(S) can be written as a linear combination of the coordinates: once, in every dimension, the order of the four involved coordinates is fixed, every absolute value resolves into a difference. There are (4!)^d such orders, and each one gives rise to a linear combination of the x^j_i's with integer coefficients. For each of these linear combinations, the probability that it takes a value in the interval (0, ε] is bounded from above by εφ, following, e. g., from Lemma A.1. Since ∆(S) can only take a value in the interval (0, ε] if one of the linear combinations takes a value in this interval, the probability of the event ∆(S) ∈ (0, ε] can be upper bounded by (4!)^d · εφ.
Let ∆_min denote the improvement of the smallest improving 2-Opt step S, i. e., ∆_min = min{∆(S) | ∆(S) > 0}. We can estimate ∆_min by a union bound, yielding

Pr[∆_min ≤ ε] ≤ (4!)^d · n^4 · εφ,

as there are at most n^4 different 2-Opt steps. Let T denote the random variable describing the number of 2-Opt steps before a local optimum is reached. Observe that T can only exceed a given number t if the smallest improvement ∆_min is less than dn/t, and hence

Pr[T ≥ t] ≤ Pr[∆_min ≤ dn/t] ≤ (4!)^d · d · n^5 · φ / t.

Since there are at most n! different TSP tours and none of these tours can appear twice during the local search, T is always bounded by n!. Altogether, we can bound the expected value of T by

E[T] = Σ_{t=1}^{n!} Pr[T ≥ t] ≤ Σ_{t=1}^{n!} (4!)^d · d · n^5 · φ / t.

Since we assumed the dimension d to be a constant, bounding the n!-th harmonic number by ln(n!) + 1 and using ln(n!) = O(n log n) yields E[T] = O(n^6 · log n · φ).

The bound in Theorem 4.1 is only based on the smallest improvement ∆_min made by any of the 2-Opt steps. Intuitively, this is too pessimistic, since most of the steps performed by 2-Opt yield a larger improvement than ∆_min. In particular, two consecutive steps yield an improvement of at least ∆_min plus the improvement ∆'_min of the second smallest step. This observation alone, however, does not suffice to improve the bound substantially. Instead, we regroup the 2-changes into pairs such that each pair of 2-changes is linked by an edge, i. e., one edge added to the tour in the first 2-change is removed from the tour in the second 2-change, and we analyze the smallest improvement made by any pair of linked 2-Opt steps. Obviously, this improvement is at least ∆_min + ∆'_min, but one can hope that it is much larger, because it is unlikely that the 2-change that yields the smallest improvement and the 2-change that yields the second smallest improvement form a pair of linked steps. We show that this is indeed the case and use this result to prove the bound on the expected length of the longest path in the state graph of 2-Opt on φ-perturbed Manhattan instances claimed in Theorem 1.2.
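For illustration, the local search analyzed here can be sketched as follows; points drawn uniformly from [0, 1]^2 correspond to the special case φ = 1, and the function names are ours:

```python
import random

def l1(p, q):
    """Manhattan distance between two points."""
    return sum(abs(a - b) for a, b in zip(p, q))

def two_opt_steps(points):
    """Run 2-Opt to a local optimum; return the number of improving steps."""
    n = len(points)
    tour = list(range(n))
    steps = 0
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # skip j that would pick an edge adjacent to (tour[i], tour[i+1])
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (l1(points[a], points[b]) + l1(points[c], points[d])
                         - l1(points[a], points[c]) - l1(points[b], points[d]))
                if delta > 1e-12:
                    # improving 2-change: replace {a,b},{c,d} by {a,c},{b,d}
                    tour[i + 1 : j + 1] = reversed(tour[i + 1 : j + 1])
                    steps += 1
                    improved = True
    return steps

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
print(two_opt_steps(pts))
```

Since every improving step strictly decreases the tour length, the loop always terminates; the analysis above is about how many such steps can occur.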

Construction of pairs of linked 2-changes
Consider an arbitrary sequence of consecutive 2-changes of length t. The following lemma guarantees that the number of disjoint linked pairs of 2-changes in every such sequence increases linearly with the length t.
Lemma 4.2. In every sequence of t consecutive 2-changes, the number of disjoint pairs of 2-changes that are linked by an edge, i. e., pairs such that there exists an edge added to the tour in the first 2-change of the pair and removed from the tour in the second 2-change of the pair, is at least t/3 - n(n - 1)/4.
Proof. Let S_1, . . ., S_t denote an arbitrary sequence of consecutive 2-changes. The sequence is processed step by step, and a list L of disjoint linked pairs of 2-changes is created. Assume that the 2-changes S_1, . . ., S_{i-1} have already been processed and that now 2-change S_i has to be processed. Assume further that in step S_i the edges e_1 and e_2 are exchanged with the edges e_3 and e_4. Let j denote the smallest index with j > i such that edge e_3 is removed from the tour in step S_j, if such a step exists, and let j' denote the smallest index with j' > i such that edge e_4 is removed from the tour in step S_{j'}, if such a step exists. If the index j is defined, the pair (S_i, S_j) is added to the constructed list L. If the index j is not defined but the index j' is defined, the pair (S_i, S_{j'}) is added to the constructed list L. After that, both steps S_j and S_{j'} (if defined) are removed from the sequence of 2-changes, that is, they are not processed in the following, in order to guarantee the disjointness of the pairs in L.
If one 2-change is processed, it excludes at most two other 2-changes from being processed. Hence, the number of pairs added to L is at least t/3 - n(n - 1)/4, because there can be at most n(n - 1)/4 steps S_i for which neither j nor j' is defined.
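The greedy construction of the list L can be sketched as follows, with each 2-change encoded (our choice of encoding) as a pair of sets of removed and added edges:

```python
def linked_pairs(seq):
    """seq: list of 2-changes, each a pair (removed, added) of sets containing
    exactly two edges each (edges as sorted vertex tuples).  Returns a list of
    disjoint linked pairs (i, j), i < j, such that an edge added in step i is
    removed in step j -- the list L from the proof of Lemma 4.2."""
    used = set()   # indices removed from the sequence, i.e. no longer processed
    L = []
    for i, (_, added) in enumerate(seq):
        if i in used:
            continue
        e3, e4 = sorted(added)
        j = next((k for k in range(i + 1, len(seq))
                  if k not in used and e3 in seq[k][0]), None)
        jp = next((k for k in range(i + 1, len(seq))
                   if k not in used and e4 in seq[k][0]), None)
        if j is not None:
            L.append((i, j))
        elif jp is not None:
            L.append((i, jp))
        used.update(k for k in (j, jp) if k is not None)
    return L

chain = [(set(), {("a", "b"), ("c", "d")}),
         ({("a", "b")}, {("e", "f"), ("g", "h")}),
         ({("e", "f")}, {("i", "j"), ("k", "l")})]
assert linked_pairs(chain) == [(0, 1)]  # step 1 is paired off and then skipped
```

As in the proof, a step that gets paired (or whose twin gets paired) is excluded from further processing, which is what guarantees disjointness.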
Consider a fixed pair of 2-changes linked by an edge. Without loss of generality assume that in the first step the edges {v_1, v_2} and {v_3, v_4} are exchanged with the edges {v_1, v_3} and {v_2, v_4}, for distinct vertices v_1, . . ., v_4. Also without loss of generality assume that in the second step the edges {v_1, v_3} and {v_5, v_6} are exchanged with the edges {v_1, v_5} and {v_3, v_6}. However, note that the vertices v_5 and v_6 are not necessarily distinct from the vertices v_2 and v_4. We distinguish between three different types of pairs.
1. |{v_2, v_4} ∩ {v_5, v_6}| = 0. In this case, all six involved vertices are distinct.
2. |{v_2, v_4} ∩ {v_5, v_6}| = 1. We can assume w. l. o. g. that v_2 ∈ {v_5, v_6}. We have to distinguish between two subcases: a) the edges {v_1, v_5} and {v_2, v_3} are added to the tour in the second step; b) the edges {v_1, v_2} and {v_3, v_5} are added to the tour in the second step. These cases are illustrated in Figure 4.2.
3. |{v_2, v_4} ∩ {v_5, v_6}| = 2. The case v_2 = v_5 and v_4 = v_6 cannot appear, as it would imply that the tour is not changed by performing the considered pair of steps. Hence, for pairs of this type, we must have v_2 = v_6 and v_4 = v_5.
When distances are measured according to the Euclidean metric, pairs of type 3 result in vast dependencies, and hence the probability that there exists a pair of this type in which both steps are improvements by at most ε w. r. t. the Euclidean metric cannot be bounded appropriately. In order to reduce the number of cases we have to consider, and in order to prepare the analysis of φ-perturbed Euclidean instances, we exclude pairs of type 3 from our probabilistic analysis by leaving out all pairs of type 3 when constructing the list L in the proof of Lemma 4.2.
We only need to show that there are always enough pairs of type 1 or 2. Consider two steps S_i and S_j with i < j that form a pair of type 3. Assume that in step S_i the edges {v_1, v_2} and {v_3, v_4} are replaced by the edges {v_1, v_3} and {v_2, v_4}, and that in step S_j these edges are replaced by the edges {v_1, v_4} and {v_2, v_3}. Now consider the next step S_l with l > j in which the edge {v_1, v_4} is removed from the tour, if such a step exists, and the next step S_{l'} with l' > j in which the edge {v_2, v_3} is removed from the tour, if such a step exists. Observe that neither (S_j, S_l) nor (S_j, S_{l'}) can be a pair of type 3, because otherwise the improvement of one of the steps S_i, S_j, and S_l, or S_{l'}, respectively, would have to be negative. In particular, we must have l ≠ l'.
If we encounter a pair (S_i, S_j) of type 3 in the construction of the list L, we mark step S_i as being processed without adding a pair of 2-changes to L and without removing S_j from the sequence of steps to be processed. Let x denote the number of pairs of type 3 that we encounter during the construction of the list L. Our argument above shows that the number of pairs of type 1 or 2 that are added to L is at least x - n(n - 1)/4. This implies t ≥ 2x - n(n - 1)/4 and hence x ≤ t/2 + n(n - 1)/8. Thus, the number of relevant steps reduces from t to t' = t - x ≥ t/2 - n(n - 1)/8. Using this estimate in Lemma 4.2 yields the following lemma.
Lemma 4.3. In every sequence of t consecutive 2-changes, the number of disjoint pairs of 2-changes of type 1 or 2 is at least t/6 - 7n(n - 1)/24.

Analysis of pairs of linked 2-changes
The following lemma gives a bound on the probability that there exists a pair of type 1 or 2 in which both steps are small improvements.
Lemma 4.4. In a φ-perturbed Manhattan instance with n vertices, the probability that there exists a pair of type 1 or type 2 in which both 2-changes are improvements by at most ε is bounded by O(n^6 · ε^2 · φ^2).

Proof. First, we consider pairs of type 1. We assume that in the first step the edges {v_1, v_2} and {v_3, v_4} are replaced by the edges {v_1, v_3} and {v_2, v_4} and that in the second step the edges {v_1, v_3} and {v_5, v_6} are replaced by the edges {v_1, v_5} and {v_3, v_6}. For i ∈ [6], let x_i ∈ R^d denote the coordinates of vertex v_i. Furthermore, let ∆_1 denote the (possibly negative) improvement of the first step and let ∆_2 denote the (possibly negative) improvement of the second step. The random variables ∆_1 and ∆_2 can be written as

∆_1 = ‖x_1 - x_2‖_1 + ‖x_3 - x_4‖_1 - ‖x_1 - x_3‖_1 - ‖x_2 - x_4‖_1,
∆_2 = ‖x_1 - x_3‖_1 + ‖x_5 - x_6‖_1 - ‖x_1 - x_5‖_1 - ‖x_3 - x_6‖_1.

For any fixed order of the coordinates, ∆_1 and ∆_2 can be expressed as linear combinations of the coordinates with integer coefficients. For i ∈ [d], let σ_i denote an order of the coordinates x^1_i, . . ., x^6_i, let σ = (σ_1, . . ., σ_d), and let ∆^σ_1 and ∆^σ_2 denote the corresponding linear combinations. We denote by A the event that both ∆_1 and ∆_2 take values in the interval (0, ε], and we denote by A^σ the event that both linear combinations ∆^σ_1 and ∆^σ_2 take values in the interval (0, ε]. Obviously, A can only occur if the event A^σ occurs for at least one σ. Hence, we obtain

Pr[A] ≤ Σ_σ Pr[A^σ].

Since there are (6!)^d different orders σ, which is a constant for constant dimension d, it suffices to show that for every tuple of orders σ, the probability of the event A^σ is bounded from above by O(ε^2 φ^2). Then a union bound over all possible pairs of linked 2-changes of type 1 yields the lemma for pairs of type 1.
We divide the set of possible pairs of linear combinations (∆^σ_1, ∆^σ_2) into three classes. We say that a pair of linear combinations belongs to class A if at least one of the linear combinations equals 0, it belongs to class B if ∆^σ_1 = -∆^σ_2, and it belongs to class C if ∆^σ_1 and ∆^σ_2 are linearly independent. For tuples of orders σ that yield pairs from class A or B, the event A^σ can never occur, because in both cases the value of at least one linear combination is at most 0. For tuples σ that yield pairs from class C, we can apply Lemma A.1 from Appendix A, which shows that the probability of the event A^σ is bounded from above by (εφ)^2. Hence, we only need to show that every pair (∆^σ_1, ∆^σ_2) of linear combinations belongs to class A, B, or C. Consider a fixed tuple of orders σ = (σ_1, . . ., σ_d). We split ∆^σ_1 and ∆^σ_2 into d parts that correspond to the d dimensions. To be precise, for j ∈ [2], we write

∆^σ_j = Σ_{i ∈ [d]} X^{σ_i,j}_i,

where X^{σ_i,j}_i is a linear combination of the variables x^1_i, . . ., x^6_i. For i ∈ [d], we show that the pair of linear combinations (X^{σ_i,1}_i, X^{σ_i,2}_i) belongs to class A, B, or C. This directly implies that also (∆^σ_1, ∆^σ_2) must belong to one of these classes. Assume that the pair of linear combinations (X^{σ_i,1}_i, X^{σ_i,2}_i) is linearly dependent for the fixed order σ_i. Observe that this can only happen if X^{σ_i,1}_i contains neither x^2_i nor x^4_i and if X^{σ_i,2}_i contains neither x^5_i nor x^6_i. The former can only happen if either x^2_i ≥ x^1_i, x^2_i ≥ x^4_i, and x^3_i ≥ x^4_i, or x^2_i ≤ x^1_i, x^2_i ≤ x^4_i, and x^3_i ≤ x^4_i. The latter can only happen if either x^5_i ≥ x^6_i, x^3_i ≥ x^6_i, and x^5_i ≥ x^1_i, or x^5_i ≤ x^6_i, x^3_i ≤ x^6_i, and x^5_i ≤ x^1_i. If one chooses the order such that x^2_i, x^4_i, x^5_i, and x^6_i cancel out and such that x^1_i ≥ x^3_i, one can verify by a case distinction that X^{σ_i,1}_i ∈ {0, -2(x^1_i - x^3_i)} and X^{σ_i,2}_i ∈ {0, 2(x^1_i - x^3_i)}. Hence, in this case the resulting pair of linear combinations belongs either to class A or B.
Analogously, if one chooses the order such that x^2_i, x^4_i, x^5_i, and x^6_i cancel out and such that x^3_i ≥ x^1_i, we have X^{σ_i,1}_i ∈ {0, 2(x^1_i - x^3_i)} and X^{σ_i,2}_i ∈ {0, -2(x^1_i - x^3_i)}. Hence, also in this case, the pair of resulting linear combinations belongs either to class A or B.
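The case distinction for pairs of type 1 can also be verified by brute force: for each of the 6! orders of the six coordinates, the following sketch (our own check, not part of the paper) computes the integer coefficient vectors of X^{σ_i,1}_i and X^{σ_i,2}_i and confirms that every pair belongs to class A, B, or C:

```python
from itertools import permutations

# Per-coordinate parts of Delta_1 and Delta_2 for a pair of type 1:
#   X1 = |x1-x2| + |x3-x4| - |x1-x3| - |x2-x4|
#   X2 = |x1-x3| + |x5-x6| - |x1-x5| - |x3-x6|
TERMS1 = [(+1, 0, 1), (+1, 2, 3), (-1, 0, 2), (-1, 1, 3)]
TERMS2 = [(+1, 0, 2), (+1, 4, 5), (-1, 0, 4), (-1, 2, 5)]

def coeffs(terms, rank):
    """Coefficient vector of sum s*|x_a - x_b| once the order of the six
    coordinates (rank[a] = rank of x_a) resolves every absolute value."""
    c = [0] * 6
    for s, a, b in terms:
        if rank[a] >= rank[b]:
            c[a] += s; c[b] -= s
        else:
            c[a] -= s; c[b] += s
    return c

def classify(c1, c2):
    if all(v == 0 for v in c1) or all(v == 0 for v in c2):
        return "A"
    if all(u == -v for u, v in zip(c1, c2)):
        return "B"
    # linearly independent iff some 2x2 minor is nonzero
    if any(c1[i] * c2[j] != c1[j] * c2[i] for i in range(6) for j in range(6)):
        return "C"
    return "dependent"   # would contradict the case distinction

classes = {classify(coeffs(TERMS1, r), coeffs(TERMS2, r))
           for r in permutations(range(6))}
assert classes == {"A", "B", "C"}
```

All 720 orders yield class A, B, or C, as the lemma's proof asserts; no order produces a linearly dependent pair outside classes A and B.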
With similar arguments we prove the lemma for pairs of type 2. We first prove it for pairs of type 2 a). Using the same notation as for pairs of type 1, we can write the improvement ∆_2 as

∆_2 = ‖x_1 - x_3‖_1 + ‖x_2 - x_5‖_1 - ‖x_1 - x_5‖_1 - ‖x_2 - x_3‖_1.

Again we show that, for every i ∈ [d] and every order σ_i, the pair of linear combinations (X^{σ_i,1}_i, X^{σ_i,2}_i) belongs to class A, B, or C. Assume that the pair is linearly dependent for the fixed order σ_i. Observe that this can only happen if X^{σ_i,1}_i does not contain x^4_i and if X^{σ_i,2}_i does not contain x^5_i. The former can only happen if either x^2_i ≥ x^4_i and x^3_i ≥ x^4_i, or x^2_i ≤ x^4_i and x^3_i ≤ x^4_i. The latter can only happen if either x^2_i ≥ x^5_i and x^1_i ≥ x^5_i, or x^2_i ≤ x^5_i and x^1_i ≤ x^5_i. If one chooses the order such that x^4_i and x^5_i cancel out and such that x^1_i ≥ x^3_i, one can verify by a case distinction that the resulting pair of linear combinations belongs either to class A or B. Analogously, if one chooses the order such that x^4_i and x^5_i cancel out and such that x^3_i ≥ x^1_i, the pair of resulting linear combinations also belongs either to class A or B.
It remains to consider pairs of type 2 b). For these pairs, we can write ∆_2 as

∆_2 = ‖x_1 - x_3‖_1 + ‖x_2 - x_5‖_1 - ‖x_1 - x_2‖_1 - ‖x_3 - x_5‖_1.

Assume that the pair of linear combinations (X^{σ_i,1}_i, X^{σ_i,2}_i) is linearly dependent for the fixed order σ_i. Observe that this can only happen if X^{σ_i,1}_i does not contain x^4_i and if X^{σ_i,2}_i does not contain x^5_i. As we have already seen for pairs of type 2 a), the former can only happen if either x^2_i ≥ x^4_i and x^3_i ≥ x^4_i, or x^2_i ≤ x^4_i and x^3_i ≤ x^4_i. The latter can only happen if either x^2_i ≥ x^5_i and x^3_i ≥ x^5_i, or x^2_i ≤ x^5_i and x^3_i ≤ x^5_i. If one chooses the order such that x^4_i and x^5_i cancel out and such that x^1_i ≥ x^3_i, one can verify by a case distinction that the resulting pair of linear combinations belongs either to class A or B. If one chooses the order such that x^4_i and x^5_i cancel out and such that x^3_i ≥ x^1_i, the resulting pair again belongs either to class A or B.

Expected number of 2-changes
Based on Lemmas 4.3 and 4.4, we are now able to prove part a) of Theorem 1.2.
Proof of Theorem 1.2 a). Let T denote the random variable that describes the length of the longest path in the state graph. If T ≥ t, then there must exist a sequence S_1, . . ., S_t of t consecutive 2-changes in the state graph. We start by identifying a set of linked pairs of type 1 and 2 in this sequence. Due to Lemma 4.3, we know that we can find at least t/6 - 7n(n - 1)/24 such pairs. Let ∆*_min denote the smallest improvement made by any pair of improving 2-Opt steps of type 1 or 2. For t > 2n^2, we have t/6 - 7n(n - 1)/24 > t/48, and hence, due to Lemma 4.4,

Pr[T ≥ t] ≤ Pr[∆*_min ≤ 48dn/t] = O(n^6 · (n/t)^2 · φ^2) = O(n^8 · φ^2 / t^2),

because the initial tour has length at most dn and each of the at least t/48 disjoint pairs improves it by at least ∆*_min. Since T cannot exceed n!, this implies the following bound on the expected number of 2-changes:

E[T] ≤ 2n^2 + Σ_{t=2n^2+1}^{n!} min{1, O(n^8 · φ^2 / t^2)} = O(n^4 · φ).

This concludes the proof of part a) of the theorem.
Chandra, Karloff, and Tovey [CKT99] show that, for every metric that is induced by a norm on R^d and for any set of n points in the unit hypercube [0, 1]^d, the optimal tour visiting all n points has length O(n^{(d-1)/d}). Furthermore, every insertion heuristic finds an O(log n)-approximation [RSI77]. Hence, if one starts with a solution computed by an insertion heuristic, the initial tour has length O(n^{(d-1)/d} · log n). Using this observation yields part a) of Theorem 1.3.
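A minimal sketch of one such insertion heuristic (cheapest insertion with Euclidean distances; the concrete variant and all names are our own choice for illustration):

```python
import math

def cheapest_insertion(points):
    """Cheapest-insertion sketch: starting from two points, repeatedly insert
    a remaining point at the position where the tour length grows the least.
    (Any insertion heuristic yields an O(log n)-approximation [RSI77]; the
    concrete variant chosen here is for illustration only.)"""
    n = len(points)
    if n < 3:
        return list(range(n))
    tour = [0, 1]
    remaining = set(range(2, n))
    while remaining:
        best = None  # (cost increase, vertex, insertion position)
        for v in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                inc = (math.dist(points[a], points[v])
                       + math.dist(points[v], points[b])
                       - math.dist(points[a], points[b]))
                if best is None or inc < best[0]:
                    best = (inc, v, i + 1)
        _, v, pos = best
        tour.insert(pos, v)
        remaining.remove(v)
    return tour

tour = cheapest_insertion([(0, 0), (0, 1), (1, 1), (1, 0)])
assert sorted(tour) == [0, 1, 2, 3]
```

The returned tour serves as the short initial tour assumed in Theorem 1.3.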
Proof of Theorem 1.3 a). Since the initial tour has length O(n^{(d-1)/d} · log n), we obtain, for an appropriate constant c and t > 2n^2,

Pr[T ≥ t] ≤ Pr[∆*_min ≤ c · n^{(d-1)/d} · log(n) / t] = O(n^6 · (n^{(d-1)/d} · log(n) / t)^2 · φ^2).

This yields

E[T] ≤ 2n^2 + Σ_{t=2n^2+1}^{n!} min{1, O(n^{8-2/d} · log^2(n) · φ^2 / t^2)} = O(n^{4-1/d} · log n · φ).

Euclidean Instances
In this section, we analyze the expected number of 2-changes on φ-perturbed Euclidean instances. The analysis is similar to the analysis of Manhattan instances in the previous section; only Lemma 4.4 needs to be replaced by its equivalent version for the L_2 metric.
Lemma 4.5. For φ-perturbed L_2 instances, the probability that there exists a pair of type 1 or type 2 in which both 2-changes are improvements by at most ε ≤ 1/2 is bounded by

O(n^6 · ε^2 · log^2(1/ε) · φ^5 + n^5 · ε^{3/2} · log(1/ε) · φ^4).

The bound that this lemma provides is slightly weaker than its L_1 counterpart, and hence the bound on the expected running time is also slightly worse for L_2 instances. The crucial step in proving Lemma 4.5 is to gain a better understanding of the random variable that describes the improvement of a single fixed 2-change. In the next section, we analyze this random variable under several conditions, e. g., under the condition that the length of one of the involved edges is fixed. With the help of these results, pairs of linked 2-changes can easily be analyzed. Let us mention that our analysis of a single 2-change already yields a bound of O(n^7 · log^2(n) · φ^3) on the expected number of 2-changes. For Euclidean instances in which all points are distributed uniformly at random over the unit square, this bound already improves the best previously known bound of O(n^{10} · log n).

Analysis of a single 2-change
We analyze a 2-change in which the edges {O, Q_1} and {P, Q_2} are exchanged with the edges {O, Q_2} and {P, Q_1}, for some vertices O, P, Q_1, and Q_2. In the input model we consider, each of these points is chosen according to its own probability distribution over the unit hypercube. In this section, we consider a simplified random experiment in which O is chosen to be the origin and P, Q_1, and Q_2 are chosen independently and uniformly at random from a d-dimensional hyperball with radius √d centered at the origin. In the next section, we argue that the analysis of this simplified random experiment helps to analyze the actual random experiment that occurs in the probabilistic input model.
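The simplified random experiment is straightforward to sample; the following sketch (all helper names are ours) draws the points and evaluates the improvement ∆ of the 2-change:

```python
import math
import random

def uniform_in_ball(d, radius, rng):
    """Uniform point in a d-dimensional ball: Gaussian direction, radius
    distributed as radius * U**(1/d) (a standard sampling method)."""
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in g)) or 1.0
    r = radius * rng.random() ** (1.0 / d)
    return [r * x / norm for x in g]

def improvement(d, rng):
    """Improvement of the 2-change exchanging {O,Q1},{P,Q2} for {O,Q2},{P,Q1}
    in the simplified experiment: O is the origin, and P, Q1, Q2 are uniform
    in the ball of radius sqrt(d) around it."""
    O = [0.0] * d
    P, Q1, Q2 = (uniform_in_ball(d, math.sqrt(d), rng) for _ in range(3))
    return (math.dist(O, Q1) + math.dist(P, Q2)
            - math.dist(O, Q2) - math.dist(P, Q1))

rng = random.Random(0)
samples = [improvement(2, rng) for _ in range(10000)]
# Q1 and Q2 are exchangeable, so the improvement is symmetric around 0.
assert abs(sum(samples) / len(samples)) < 0.1
```

By the triangle inequality, each sampled improvement has absolute value at most 2·d(O, P).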
Due to the rotational symmetry of the simplified model, we assume without loss of generality that P lies at position (0, . . ., 0, T) for some T ∈ (0, √d], where T = d(O, P). For i ∈ [2], let Z_i = d(O, Q_i) - d(P, Q_i). Then the improvement ∆ of the 2-change can be expressed as Z_1 - Z_2. The random variables Z_1 and Z_2 are identically distributed, and they are independent if T is fixed. We denote by f_{Z|T=τ,R=r} the density of Z_1 and Z_2 under the conditions that d(O, Q_1) = r and d(O, Q_2) = r, respectively, and that T = τ.

Lemma 4.6. For τ, r ∈ (0, √d] and z ∈ (-τ, min{τ, 2r - τ}),

f_{Z|T=τ,R=r}(z) = (2(r - z)) / (π · √((τ^2 - z^2) · ((2r - z)^2 - τ^2))).

For z ∉ [-τ, min{τ, 2r - τ}], the density f_{Z|T=τ,R=r}(z) is 0.
Proof. We denote by Z the random variable d(O, Q) - d(P, Q), where Q is a point chosen uniformly at random from a d-dimensional hyperball with radius √d centered at the origin. In the following, we assume that the plane spanned by the points O, P, and Q is fixed arbitrarily, and we consider the random experiment conditioned on the event that Q lies in this plane. To simplify the calculations, we use polar coordinates to describe the location of Q. Since the radius d(O, Q) = r is given, the point Q is completely determined by the angle α between the y-axis and the line through O and Q (see Figure 4.3). Hence, by the law of cosines, the random variable Z can be written as

Z = r - √(r^2 + τ^2 - 2rτ cos α).

It is easy to see that Z can only take values in the interval [-τ, min{τ, 2r - τ}], and hence the density f_{Z|T=τ,R=r}(z) is 0 outside this interval.
Since Q is chosen uniformly at random, the angle α is distributed uniformly on the interval [0, 2π). For symmetry reasons, we can assume that α is chosen uniformly from the interval [0, π). When α is restricted to the interval [0, π), there exists a unique inverse function mapping Z to α, namely

α(z) = arccos((τ^2 + 2rz - z^2) / (2rτ)).

The density f_{Z|T=τ,R=r} can be expressed as

f_{Z|T=τ,R=r}(z) = f_α(α(z)) · |α'(z)|,

where f_α denotes the density of α, i. e., the uniform density over [0, π). For |x| < 1, the derivative of the arc cosine is (arccos)'(x) = -1/√(1 - x^2). Hence, the derivative of α(z) equals

α'(z) = -(2(r - z)) / √((τ^2 - z^2) · ((2r - z)^2 - τ^2)),

which yields the claimed density. In order to bound this expression, one distinguishes between the cases r ≥ τ and r ≤ τ.

Based on Lemma 4.6, the density of the random variable ∆ = Z_1 - Z_2 under the conditions R_1 = r_1, R_2 = r_2, and T = τ can be bounded as follows.

Lemma 4.7. Let τ, r_1, r_2 ∈ (0, √d], and let Z_1 and Z_2 be independent random variables drawn according to the densities f_{Z|T=τ,R=r_1} and f_{Z|T=τ,R=r_2}, respectively. For δ ∈ (0, 1/2] and a sufficiently large constant κ, the density of ∆ = Z_1 - Z_2 at δ is bounded from above in terms of τ, r_1, r_2, and log(1/δ).

The simple but somewhat tedious calculation that yields Lemma 4.7 is deferred to Appendix B.1. In order to prove Lemma 4.5, we need bounds on the densities of the random variables ∆, Z_1, and Z_2 under certain conditions. We summarize these bounds in Lemma 4.8: a) a bound on the density of ∆ under the condition that the distance R = d(v_1, v_3) is fixed; b) a bound on the density of ∆ under the condition T = τ; c) a bound on the unconditional density of ∆; and d) a bound on the density of Z_i under the condition T = τ. Lemma 4.8 follows from Lemmas 4.6 and 4.7 by integrating over all values of the unconditioned distances. The proof can be found in Appendix B.2.
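The change of variables used above can be sanity-checked numerically: since Z is decreasing in α, conditioning on d(O, Q) = r gives Pr[Z ≤ z] = Pr[α ≥ α(z)] = 1 - α(z)/π. The following sketch (our own check, based on the law-of-cosines relation d(P, Q)^2 = r^2 + τ^2 - 2rτ cos α) compares this with a Monte Carlo estimate:

```python
import math
import random

def cdf_Z(z, tau, r):
    """Pr[Z <= z] for Z = d(O,Q) - d(P,Q) with d(O,Q) = r and d(O,P) = tau
    fixed and the angle alpha between OP and OQ uniform on [0, pi)."""
    arg = (tau * tau + 2 * r * z - z * z) / (2 * r * tau)
    return 1.0 - math.acos(max(-1.0, min(1.0, arg))) / math.pi

rng = random.Random(2)
tau, r = 0.8, 1.1
zs = []
for _ in range(200000):
    alpha = rng.uniform(0.0, math.pi)
    # law of cosines: d(P,Q)^2 = r^2 + tau^2 - 2*r*tau*cos(alpha)
    zs.append(r - math.sqrt(r * r + tau * tau - 2 * r * tau * math.cos(alpha)))
for z in (-0.5, 0.0, 0.5):
    empirical = sum(1 for v in zs if v <= z) / len(zs)
    assert abs(empirical - cdf_Z(z, tau, r)) < 0.01
```

Differentiating this CDF in z reproduces the density formula stated in Lemma 4.6.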

Simplified random experiments
In the previous section, we did not analyze the random experiment that actually takes place. Instead of choosing the points according to the given density functions, we simplified their distributions by placing point O at the origin and by giving the other points P, Q_1, and Q_2 uniform distributions centered around the origin. In our input model, however, each of these points is described by a density function over the unit hypercube. We consider the probability of the event ∆ ∈ [0, ε] both in the original input model and in the simplified random experiment. In the following, we denote this event by E. We claim that the simplified random experiment is only slightly dominated by the original random experiment, in the sense that the probability of the event E in the simplified random experiment is smaller by at most some factor depending on φ.
In order to compare the probabilities in the original and in the simplified random experiment, consider the original experiment and assume that the point O lies at position x ∈ [0, 1]^d. Then one can identify a region R_x ⊆ R^{3d} with the property that the event E occurs if and only if the random vector (P, Q_1, Q_2) lies in R_x. No matter how the position x of O is chosen, this region always has the same shape; only its position is shifted.
Hence, the probability of E in the original random experiment can be bounded from above by φ^3 · V, where V denotes the volume of R_x ∩ [0, 1]^{3d}. Since ∆ is invariant under translating O, P, Q_1, and Q_2 by the same vector, and since every point in [0, 1]^d has distance at most √d from O, the relevant part of R_x is contained in the set of vectors (P, Q_1, Q_2) in which each point lies in the hyperball of radius √d around O. As this hyperball fits into a cube of side length 2√d, its volume can be bounded from above by (4d)^{d/2}. Hence, the probability of E in the simplified random experiment is smaller by at most a factor of φ^3 · (4d)^{3d/2} compared to the original random experiment.
Taking into account this factor and using Lemma 4.8 c) and a union bound over all possible 2-changes yields the following lemma about the improvement of a single 2-change.
Lemma 4.9. The probability that there exists an improving 2-change whose improvement is at most ε ≤ 1/2 is bounded from above by O(n^4 · ε · log(1/ε) · φ^3).

Using similar arguments as in the proof of Theorem 4.1 yields the following upper bound on the expected number of 2-changes.

Theorem 4.10. Starting with an arbitrary tour, the expected number of steps performed by 2-Opt on φ-perturbed Euclidean instances is O(n^7 · log^2(n) · φ^3).
Pairs of type 1. In order to improve upon Theorem 4.10, we consider pairs of linked 2-changes, as in the analysis of φ-perturbed Manhattan instances. Since our analysis of pairs of linked 2-changes is based on the analysis of a single 2-change presented in the previous section, we also have to consider simplified random experiments when analyzing pairs of 2-changes. For a fixed pair of type 1, we assume that point v_3 is chosen to be the origin and that the other points v_1, v_2, v_4, v_5, and v_6 are chosen uniformly at random from a hyperball with radius √d centered at v_3. Let E denote the event that both ∆_1 and ∆_2 lie in the interval [0, ε], for some given ε. With the same arguments as above, one can see that the probability of E in the simplified random experiment is smaller than in the original experiment by at most a factor of ((4d)^{d/2} · φ)^5.
Pairs of type 2. For a fixed pair of type 2, we consider the simplified random experiment in which v_2 is placed at the origin and the other points v_1, v_3, v_4, and v_5 are chosen uniformly at random from a hyperball with radius √d centered at v_2. In this case, the probability in the simplified random experiment is smaller by at most a factor of ((4d)^{d/2} · φ)^4.

Analysis of pairs of linked 2-changes
Finally, we can prove Lemma 4.5.
Proof of Lemma 4.5. We start by considering pairs of type 1. We consider the simplified random experiment in which v_3 is chosen to be the origin and the other points are drawn uniformly at random from a hyperball with radius √d centered at v_3. If the position of the point v_1 is fixed, then the events ∆_1 ∈ [0, ε] and ∆_2 ∈ [0, ε] are independent, as only the vertices v_1 and v_3 appear in both the first and the second step. In fact, because the densities of the points v_2, v_4, v_5, and v_6 are rotationally symmetric, the concrete position of v_1 is no longer important in our simplified random experiment; only the distance R between v_1 and v_3 is of interest.
For i ∈ [2], we determine the conditional probability of the event ∆_i ∈ [0, ε] under the condition that the distance d(v_1, v_3) is fixed with the help of Lemma 4.8 a). Since for fixed distance d(v_1, v_3) the random variables ∆_1 and ∆_2 are independent, the probability that both events occur simultaneously is the product of the two conditional probabilities. Combining this observation with the bound given in (4.3) and integrating over the possible values of d(v_1, v_3) bounds the probability that both ∆_1 and ∆_2 lie in [0, ε]. There are O(n^6) different pairs of type 1; hence, a union bound over all of them concludes the first part of the proof when taking into account the factor ((4d)^{d/2} · φ)^5 that results from considering the simplified random experiment.

It remains to consider pairs of type 2. We consider the simplified random experiment in which v_2 is chosen to be the origin and the other points are drawn uniformly at random from a hyperball with radius √d centered at v_2. In contrast to pairs of type 1, pairs of type 2 exhibit larger dependencies, as only 5 different vertices are involved in these pairs. Fix one pair of type 2. The two 2-changes share the whole triangle consisting of v_1, v_2, and v_3. In the second step, there is only one new vertex, namely v_5. Hence, a pair of type 2 does not contain enough randomness for ∆_1 and ∆_2 to be nearly independent, as they are for pairs of type 1.
We start by considering pairs of type 2 a). First, we analyze the probability that ∆_1 lies in the interval [0, ε]. After that, we analyze the probability that ∆_2 lies in the interval [0, ε] under the condition that the points v_1, v_2, v_3, and v_4 have already been chosen. In the analysis of the second step, we cannot use the fact that the distances d(v_1, v_3) and d(v_2, v_3) are random variables anymore, since we already exploited their randomness in the analysis of the first step. The only distances whose randomness we can still exploit are d(v_1, v_5) and d(v_2, v_5). We pessimistically assume that the distances d(v_1, v_3) and d(v_2, v_3) have been chosen by an adversary. This means that the adversary can determine an interval of length ε such that ∆_2 ∈ [0, ε] only if the random variable Z = d(v_2, v_5) - d(v_1, v_5) falls into this interval. Analogously to (4.2), the probability of the event ∆_1 ∈ [0, ε] under the condition d(v_1, v_2) = r can be bounded with the help of Lemma 4.8. The intervals the adversary can specify that have the highest probability of Z falling into them are [-r, -r + ε] and [r - ε, r]. Hence, the conditional probability of the event ∆_2 ∈ [0, ε] under the condition d(v_1, v_2) = r and for fixed points v_3 and v_4 can be bounded from above with the help of Lemma 4.8 d), for a sufficiently large constant κ'. Combining these inequalities and integrating over all possible values of the random variable d(v_1, v_2) removes the condition d(v_1, v_2) = r. Applying a union bound over all O(n^5) possible pairs of type 2 a) concludes the proof when one takes into account the factor ((4d)^{d/2} · φ)^4 due to considering the simplified random experiment. For pairs of type 2 b), the situation is similar. We analyze the first step, and in the second step we can only exploit the randomness of the distances d(v_2, v_5) and d(v_3, v_5). Due to Lemma 4.8 b), and similar to (4.2), the probability of the event ∆_1 ∈ [0, ε] under the condition d(v_2, v_3) = τ can be bounded accordingly. The remaining analysis of pairs of type 2 b) can be carried out completely analogously to the analysis of pairs of type 2 a).

The expected number of 2-changes
Based on Lemmas 4.3 and 4.5, we are now able to prove part b) of Theorem 1.2.
Proof of Theorem 1.2 b). We use the same notation as in the proof of part a) of the theorem. For t > 2n^2, we have t/6 − 7n(n − 1)/24 > t/48, and hence, due to Lemma 4.5, the corresponding tail bound follows. This implies that the expected length of the longest path in the state graph is bounded from above by the corresponding sum. Splitting the sums at t = n^4 · log(nφ) · φ^{5/2} and t = n^{13/3} · log^{2/3}(nφ) · φ^{8/3}, respectively, yields the claimed bounds. This concludes the proof of part b) of the theorem.
Using the same observations as in the proof of Theorem 1.3 a) also yields part b).
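Both of these proofs bound an expectation by the tail sum E[T] = Σ_{t≥1} Pr[T ≥ t] and split the sum at a threshold. The following toy numeric sketch illustrates only this standard splitting trick; the tail bound min(1, (t*/t)^2) is a hypothetical stand-in, not one of the paper's bounds.

```python
import math

# Hypothetical tail bound Pr[T >= t] <= min(1, (t_star / t)**2); it only
# serves to illustrate splitting the tail sum at the threshold t_star.
t_star = 1000
tail = lambda t: min(1.0, (t_star / t) ** 2)

total = sum(tail(t) for t in range(1, 10**6))          # E[T] <= sum of tail bounds
head = t_star                                          # terms t <= t_star are each at most 1
rest = sum(tail(t) for t in range(t_star + 1, 10**6))  # remaining terms decay quadratically
assert total <= head + rest + 1e-6
assert rest <= t_star * math.pi ** 2 / 6               # sum_{t>s} (s/t)^2 <= s * pi^2 / 6
```

The head of the sum contributes at most the threshold itself, and the tail contributes only a constant factor more, which is exactly the shape of argument used when the threshold is chosen as in the proof above.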

General Graphs
In this section, we analyze the expected number of 2-changes on φ-perturbed graphs.
Observe that φ-perturbed graphs contain more randomness than φ-perturbed Manhattan or Euclidean instances because each edge length is a random variable that is independent of the other edge lengths. It is easy to obtain a polynomial bound on the expected number of local improvements by just estimating the smallest improvement made by any of the 2-changes. For Manhattan and Euclidean instances, we improved this simple bound by considering pairs of linked 2-changes. For φ-perturbed graphs we pursue the same approach, but due to the larger amount of randomness, we are now able to consider not only pairs of linked steps but longer sequences of linked steps. We know that every sequence of steps that contains k distinct 2-changes shortens the tour by at least ∆^{(k)} = ∆^{(1)}_min + ... + ∆^{(k)}_min, where ∆^{(i)}_min denotes the i-th smallest improvement made by any of the 2-changes. This observation alone, however, does not suffice to improve the simple bound substantially. Instead, we show that one can identify, in every long enough sequence of consecutive 2-changes, subsequences that are linked, where a sequence S_1, ..., S_k of 2-changes is called linked if for every i ∈ [k − 1], there exists an edge that is added to the tour in step S_i and removed from the tour in step S_{i+1}. We analyze the smallest improvement of a linked sequence that consists of k distinct 2-Opt steps. Obviously, this improvement must be at least ∆^{(k)}, as in the worst case the linked sequence consists of the k smallest improvements. Intuitively, one can hope that it is much larger than ∆^{(k)} because it is unlikely that the k smallest improvements form a sequence of linked steps. We show that this is indeed the case and use this result to prove the desired upper bound on the expected number of 2-changes.
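The "linked" condition can be made concrete with a small sketch (our own illustration, not code from the paper): a 2-change is modeled as a pair (removed, added) of two-element sets of tour edges, and a sequence is linked when consecutive steps share an added-then-removed edge.

```python
def is_linked(steps):
    """steps: list of (removed, added), each a set of frozenset edges.
    Linked: for every i, some edge added in S_i is removed in S_{i+1}."""
    return all(steps[i][1] & steps[i + 1][0] for i in range(len(steps) - 1))

def linked_runs(steps):
    """Split a sequence of 2-changes into maximal linked subsequences."""
    runs, current = [], [steps[0]]
    for prev, nxt in zip(steps, steps[1:]):
        if prev[1] & nxt[0]:
            current.append(nxt)
        else:
            runs.append(current)
            current = [nxt]
    runs.append(current)
    return runs

e = lambda u, v: frozenset((u, v))
S1 = ({e(1, 2), e(3, 4)}, {e(1, 3), e(2, 4)})    # adds edge {1,3}
S2 = ({e(1, 3), e(5, 6)}, {e(1, 5), e(3, 6)})    # removes {1,3}: linked to S1
S3 = ({e(7, 8), e(9, 10)}, {e(7, 9), e(8, 10)})  # shares no edge with S2
assert is_linked([S1, S2])
assert not is_linked([S2, S3])
assert [len(r) for r in linked_runs([S1, S2, S3])] == [2, 1]
```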
We introduce the notion of witness sequences, i.e., linked sequences of 2-changes that satisfy some additional technical properties. We show that the smallest total improvement made by a witness sequence yields an upper bound on the running time. That is, whenever the 2-Opt heuristic needs many local improvement steps to find a locally optimal solution, there must be a witness sequence whose total improvement is small. Furthermore, our probabilistic analysis reveals that it is unlikely that there exists a witness sequence whose total improvement is small. Together, these results yield the desired bound on the expected number of 2-changes.

Definition of witness sequences
In this section, we give a formal definition of the notion of a k-witness sequence. As mentioned above, a witness sequence S_1, ..., S_k has to be linked, i.e., for i ∈ [k − 1], there must exist an edge that is added to the tour in step S_i and removed from the tour in step S_{i+1}. Let m denote the number of edges in the graph. Then there are at most 4^{k−1} · m^{k+1} such linked sequences, as there are at most m^2 different choices for S_1, and once S_i is fixed, there are at most 4m different choices for S_{i+1}. For a fixed 2-change, the probability that it is an improvement by at most ε is bounded by εφ. We would like to show an upper bound of (εφ)^k on the probability that each step in the witness sequence S_1, ..., S_k is an improvement by at most ε. For general linked sequences, this is not true, as the steps can be dependent in various ways. Hence, we need to introduce further restrictions on witness sequences.
Figure 4.4: Illustration of the notations used in Definitions 4.11, 4.12, and 4.13. Every node in the DAG corresponds to an edge involved in one of the 2-changes. An arc from a node u to a node v indicates that in one of the 2-changes, the edge corresponding to node u is removed from the tour and the edge corresponding to node v is added to the tour. Hence, every arc is associated with one 2-change.
In the following definitions, we assume that a linked sequence S_1, ..., S_k of 2-changes is given. For i ∈ [k], in step S_i the edges e_{i−1} and f_{i−1} are removed from the tour and the edges e_i and g_i are added to the tour; i.e., for i ∈ [k − 1], e_i denotes an edge added to the tour in step S_i and removed from the tour in step S_{i+1}. These definitions are illustrated in Figure 4.4.

Definition 4.11 (witness sequences of type 1). If for every i ∈ [k], the edge e_i does not occur in any step S_j with j < i, then S_1, ..., S_k is called a k-witness sequence of type 1.
Intuitively, witness sequences of type 1 possess enough randomness, as every step introduces an edge that has not been seen before. Based on this observation, we prove in Lemma 4.14 the desired bound of (εφ)^k on the probability that every step is an improvement by at most ε for these sequences.

Definition 4.12 (witness sequences of type 2). Assume that for every i ∈ [k], the edge e_i does not occur in any step S_j with j < i. If both endpoints of f_{k−1} occur in steps S_j with j < k, then S_1, ..., S_k is called a k-witness sequence of type 2.

Also for witness sequences of type 2, we obtain the desired bound of (εφ)^k on the probability that every step is an improvement by at most ε. Due to the additional restriction on f_{k−1}, there are fewer than 4^{k−1} m^{k+1} witness sequences of type 2. As the two endpoints of f_{k−1} must be chosen among those vertices that occur in steps S_j with j < k, there are only O(k^2) choices for the last step S_k. This implies that the number of k-witness sequences of type 2 can be upper bounded by O(4^k k^2 m^k).

Definition 4.13 (witness sequences of type 3). Assume that for every i ∈ [k − 1], the edge e_i does not occur in any step S_j with j < i. If the edges e_k and g_k occur in steps S_j with j < k and if f_{k−1} does not occur in any step S_j with j < k, then S_1, ..., S_k is called a k-witness sequence of type 3.

Also witness sequences of type 3 possess enough randomness to bound the probability that every step is an improvement by at most ε by (εφ)^k, as also the last step introduces a new edge, namely f_{k−1}.
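The type-1 condition can be checked mechanically. The sketch below (our own simplification, not the paper's) verifies only the linking edges e_1, ..., e_{k−1} of Definition 4.11, using the same (removed, added) edge-set representation as before, and assumes its input is already linked.

```python
e = lambda u, v: frozenset((u, v))

def linking_edges(steps):
    # e_i: an edge added in S_i and removed again in S_{i+1}
    # (assumes the sequence is linked, so the intersection is nonempty)
    return [next(iter(steps[i][1] & steps[i + 1][0]))
            for i in range(len(steps) - 1)]

def is_type1(steps):
    # Simplified check of Definition 4.11: every linking edge e_i must be
    # new, i.e. occur in no earlier step S_j with j < i.
    for i, ei in enumerate(linking_edges(steps)):
        earlier = set().union(*[r | a for r, a in steps[:i]])
        if ei in earlier:
            return False
    return True

S1 = ({e(1, 2), e(3, 4)}, {e(1, 3), e(2, 4)})
S2 = ({e(1, 3), e(5, 6)}, {e(1, 5), e(3, 6)})
S3 = ({e(1, 5), e(7, 8)}, {e(1, 7), e(5, 8)})
bad = ({e(1, 3), e(5, 6)}, {e(2, 4), e(3, 6)})    # re-adds {2,4} from S1
bad3 = ({e(2, 4), e(7, 8)}, {e(2, 7), e(4, 8)})   # its linking edge {2,4} is old
assert is_type1([S1, S2, S3])
assert not is_type1([S1, bad, bad3])
```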

Improvement made by witness sequences
In this section, we analyze the probability that there exists a k-witness sequence in which every step is an improvement by at most ε.
Lemma 4.14. The probability that there exists a k-witness sequence in which every step is an improvement by at most ε
a) is bounded from above by 4^{k−1} m^{k+1} (εφ)^k for k-witness sequences of type 1,
b) is bounded from above by k^2 4^{k−1} m^k (εφ)^k for k-witness sequences of type 2,
c) is bounded from above by k^2 4^k m^k (εφ)^k for k-witness sequences of type 3.
Proof. We use a union bound to estimate the probability that there exists a witness sequence in which every step is a small improvement.

a) We consider k-witness sequences of type 1 first. As already mentioned in the previous section, the number of such sequences is bounded by 4^{k−1} m^{k+1}, as there are at most m^2 choices for the first step S_1, and once S_i is fixed, there are at most 4m choices for step S_{i+1}. The number 4m follows since, if S_i is fixed, there are two choices for the edge added to the tour in step S_i and removed from the tour in step S_{i+1}, there are at most m choices for the other edge removed in step S_{i+1}, and once these edges are determined, there are two possible 2-Opt steps in which these edges are removed from the tour. Now fix an arbitrary k-witness sequence S_1, ..., S_k of type 1. We use the same notations as in Figure 4.4 to denote the edges involved in this sequence. In the first step, the edges e_0 and f_0 are replaced by the edges e_1 and g_1. We assume that the lengths of the edges e_0, f_0, and g_1 are determined by an adversary. The improvement of step S_1 can be expressed as a simple linear combination of the lengths of the involved edges. Hence, for fixed lengths of e_0, f_0, and g_1, the event that S_1 is an improvement by at most ε corresponds to the event that the length d(e_1) of e_1 lies in some fixed interval of length ε. Since the density of d(e_1) is bounded by φ, the probability that d(e_1) takes a value in the given interval is bounded by εφ. Now consider a step S_i and assume that arbitrary lengths for the edges e_j and f_j with j < i and for g_j with j ≤ i are chosen. Since the edge e_i is not involved in any step S_j with j < i, its length is not determined. Hence, analogously to the first step, the probability that step S_i is an improvement by at most ε is bounded by εφ, independently of the improvements of the steps S_j with j < i.
Applying this argument to every step S_i yields the desired bound of (εφ)^k.

b) In witness sequences of type 2, there are at most m^2 choices for step S_1. Analogously to witness sequences of type 1, the number of possible choices for S_i with 1 < i < k is at most 4m. The number of different vertices involved in steps S_j with j < k is at most 4 + 2(k − 2) = 2k, as the first step introduces four new vertices and every other step at most two. Since the endpoints of the edge f_{k−1} must be chosen among those vertices that have been involved in the steps S_j with j < k, there are at most 4k^2 possible choices for step S_k. This implies that the number of different k-witness sequences of type 2 is bounded by m^2 · (4m)^{k−2} · 4k^2 = k^2 4^{k−1} m^k. For a fixed witness sequence of type 2, applying the same arguments as for witness sequences of type 1 yields a probability of at most (εφ)^k that every step is an improvement by at most ε.
c) The number of different edges involved in the steps S_i with i < k is at most 4 + 3(k − 2) < 3k. Hence, the number of k-witness sequences of type 3 is bounded by 9k^2 4^{k−2} m^k < k^2 4^k m^k. Furthermore, similarly to witness sequences of type 1, we can bound the probability that a fixed k-witness sequence of type 3 consists only of improvements by at most ε by (εφ)^k, since the last step introduces an edge that does not occur in the steps S_i with i < k, namely f_{k−1}.

Definition 4.15. In the following, we use the term k-witness sequence to denote a k-witness sequence of type 1 or an i-witness sequence of type 2 or 3 with i ≤ k. We call a k-witness sequence improving if every 2-change in the sequence is an improvement. Moreover, by ∆^{(k)}_ws we denote the smallest total improvement made by any improving k-witness sequence.
Due to Lemma 4.14, it is unlikely that there exists an improving witness sequence whose total improvement is small.
Proof. Due to Lemma 4.14 and the fact that witness sequences of type 2 or 3 must consist of at least two steps, applying a union bound yields a bound on the probability that there exists an improving k-witness sequence whose total improvement is at most ε. Since 4mεφ < 1, we can bound the resulting sum, which implies the corollary because, for ε ≤ (4m^{(k−1)/(k−2)} φ)^{−1}, the second term in the sum is at least as large as the first one.

Finding witness sequences
In the previous section, we have shown an upper bound on the probability that there exists an improving k-witness sequence whose total improvement is small. In this section, we show that in every long enough sequence of consecutive 2-changes, one can identify a certain number of disjoint k-witness sequences. This way, we obtain a lower bound on the improvement made by any long enough sequence of consecutive 2-changes in terms of ∆^{(k)}_ws.
Basically, we have to show that one can find t/4^{k+3} disjoint k-witness sequences in the given sequence of consecutive 2-changes. Therefore, we first introduce a so-called witness DAG (directed acyclic graph) which represents the sequence S_1, ..., S_t of 2-changes. In order not to confuse the constructed witness DAG W with the input graph G, we use the terms nodes and arcs when referring to the DAG W and the terms vertices and edges when referring to G. Nodes of W correspond to edges of G combined with a time stamp. The construction starts by adding the edges of the initial tour as nodes to W. These nodes get the time stamps 1, ..., n in an arbitrary order. Then the sequence S_1, ..., S_t is processed step by step. Assume that the steps S_1, ..., S_{i−1} have already been processed and that step S_i is to be processed next. Furthermore, assume that in step S_i the edges e_{i−1} and f_{i−1} are exchanged with the edges e_i and g_i. Since the edges e_{i−1} and f_{i−1} are contained in the tour after the steps S_1, ..., S_{i−1}, there are nodes in W corresponding to these edges. Let u_1 and u_2 denote the nodes with the most recent time stamps corresponding to e_{i−1} and f_{i−1}, respectively. We create two new nodes u_3 and u_4 corresponding to the edges e_i and g_i, each with time stamp n + i. Finally, four new arcs are added to W, namely the arcs (u_1, u_3), (u_1, u_4), (u_2, u_3), and (u_2, u_4). We refer to these four arcs as twin arcs. Observe that each node in W has indegree and outdegree at most 2. We call the resulting DAG W a t-witness DAG.
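The construction of the witness DAG can be sketched directly from this description (a minimal illustration of ours, reusing the (removed, added) edge-set representation; it assumes every step removes edges currently in the tour):

```python
def build_witness_dag(initial_tour_edges, steps):
    """t-witness DAG sketch: one node per (edge, timestamp), and for each
    2-change four twin arcs from the most recent nodes of the two removed
    edges to the two freshly created nodes of the added edges."""
    n = len(initial_tour_edges)
    latest, nodes, arcs = {}, [], []
    for t, edge in enumerate(initial_tour_edges, start=1):
        node = (edge, t)
        nodes.append(node)
        latest[edge] = node                  # most recent node carrying this edge
    for i, (removed, added) in enumerate(steps, start=1):
        sources = [latest[edge] for edge in removed]
        new = [(edge, n + i) for edge in added]
        nodes.extend(new)
        arcs.extend((u, v) for u in sources for v in new)   # four twin arcs
        for node in new:
            latest[node[0]] = node
    return nodes, arcs

e = lambda u, v: frozenset((u, v))
tour = [e(1, 2), e(2, 3), e(3, 4), e(4, 1)]
steps = [({e(1, 2), e(3, 4)}, {e(1, 3), e(2, 4)}),
         ({e(2, 3), e(1, 4)}, {e(1, 2), e(3, 4)})]
nodes, arcs = build_witness_dag(tour, steps)
assert len(nodes) == len(tour) + 2 * len(steps)   # n + 2t nodes
assert len(arcs) == 4 * len(steps)                # four twin arcs per step
outdeg, indeg = {}, {}
for u, v in arcs:
    outdeg[u] = outdeg.get(u, 0) + 1
    indeg[v] = indeg.get(v, 0) + 1
assert max(outdeg.values()) <= 2 and max(indeg.values()) <= 2
```

The degree check at the end mirrors the observation in the text that every node of W has indegree and outdegree at most 2.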
By the height of a node u, we denote the length of a shortest path from u to a leaf of W. After the witness DAG has been completely constructed, we associate with each node u with height at least k a sub-DAG of W. The sub-DAG W_u associated with such a node u is the induced sub-DAG of those nodes of W that can be reached from u by traversing at most k arcs. The following two lemmas imply Lemma 4.17.
Lemma 4.18. For every sub-DAG W_u, the 2-changes represented by the arcs in W_u and their twin arcs yield a total improvement of at least ∆^{(k)}_ws.
Proof of Lemma 4.18. Assume that a sub-DAG W_u with root u is given. Since node u has height k, one can identify 2^{k−1} distinct sequences of linked 2-changes of length k in the sub-DAG W_u. In the following, we show that at least one of these sequences is a k-witness sequence or a sequence whose total improvement is as large as the total improvement of one of the k-witness sequences. We give a recursive algorithm Sequ that constructs such a sequence step by step. It is initialized with the sequence which consists only of the first step S_1, which is represented by the two outgoing arcs of the root u and their twin arcs.
Assume that Sequ is called with a sequence of steps S_1, ..., S_i that has been constructed so far. Given this sequence, it has to decide whether the sequence is continued with a step S_{i+1} such that S_i and S_{i+1} are linked, or whether the construction is stopped because a k-witness sequence has been found. In Figure 4.5, we summarize the notations that we use in the following. In step S_j for j ≤ i − 1, the edges e_{j−1} and f_{j−1} are exchanged with the edges e_j and g_j. In step S_i, the edges e_{i−1} and f_{i−1} are exchanged with the edges e_i and e′_i, and in step S_{i+1} (resp. S′_{i+1}), the edges e_i and f_i (resp. e′_i and f′_i) are exchanged with the edges e_{i+1} and g_{i+1} (resp. e′_{i+1} and g′_{i+1}). We denote by E_i all edges that are involved in steps S_j with j ≤ i. Similarly, by E_{i−1} we denote all edges that are involved in steps S_j with j ≤ i − 1.
Our construction ensures that whenever Sequ is called with a sequence S_1, ..., S_i as input, at least one of the edges that is added to the tour in step S_i is not contained in E_{i−1}. In the following, assume without loss of generality that e_i ∉ E_{i−1}. When we call the algorithm recursively with the sequence S_1, ..., S_{i+1} or with the sequence S_1, ..., S_i, S′_{i+1}, then either the recursive call never returns, because a witness sequence is found in the recursive call, which immediately stops the construction, or a 2-change S is returned. Whenever a 2-change S is returned, the meaning is as follows: there exists a sequence of linked 2-changes in the sub-DAG W_u starting with S_{i+1} or S′_{i+1}, respectively, whose net effect equals the 2-change S. That is, after all steps in the sequence have been performed, the same two edges as in S are removed from the tour, the same two edges are added to the tour, and all other edges either stay in or out of the tour. In this case, we can virtually replace step S_{i+1} or S′_{i+1}, respectively, by the new step S.
When Sequ is called with the sequence S_1, ..., S_i, it first identifies the steps S_{i+1} and S′_{i+1} based on the last step S_i. If i = k, then S_1, ..., S_i is a k-witness sequence of type 1, and Sequ stops. Otherwise, the following steps are performed, where we assume that whenever Sequ has identified a witness sequence, it immediately stops the construction.
1. Type 2 Sequence: If in one of the recursive calls a step S is returned, which happens only in case 3 (c), then replace the corresponding step S_{i+1} or S′_{i+1} virtually by the returned step. That is, in the following steps of the algorithm, assume that S_{i+1} or S′_{i+1} equals step S. The algorithm ensures that the edges that are added to the tour in the new step S are always chosen from the set E_i.
3. No Continuation I: If e′_i ∈ E_{i−1} and e_{i+1}, g_{i+1}, e′_{i+1}, g′_{i+1} ∈ E_i, then, as in case 3 (c), assume w. l. o. g. g_{i+1} = g′_{i+1} = f_{i−1} and e_{i+1}, e′_{i+1} ∈ E_{i−1}. In this case, it must hold that f_i ≠ e′_i and f′_i ≠ e_i, as otherwise step S_i would be reversed in step S_{i+1} or S′_{i+1}. Hence, f_i, f′_i ∈ E_{i−1}, and S_1, ..., S_i is a witness sequence of type 2, since one endpoint of f_{i−1} equals one endpoint of f_i and the other endpoint equals one endpoint of f′_i.
Observe that basically Sequ just constructs a path through the DAG starting at node u. When a path corresponding to the sequence S_1, ..., S_i of 2-changes has been constructed, Sequ decides either to stop the construction because a witness sequence has been found, or, if possible, to continue the path with an arc corresponding to a step S_{i+1} or S′_{i+1}. In some situations, it can happen that Sequ has not found a witness sequence yet but cannot continue the construction. In such cases, step S_i is pruned and Sequ reconsiders the path S_1, ..., S_{i−1}. Based on the pruned step S_i, it can then either decide that a witness sequence has been found, that also S_{i−1} has to be pruned, or it can decide to continue the path with S′_i instead of S_i. This concludes the proof, as the presented algorithm always identifies a k-witness sequence whose total improvement is at most as large as the improvement made by the steps in the sub-DAG W_u.
Proof of Lemma 4.19. A t-witness DAG W consists of n + 2t nodes, and n of these nodes are leaves. Since the indegree and the outdegree of every node is bounded by 2, there are at most n · 2^k nodes in W whose height is less than k. Hence, there are at least n + 2t − n · 2^k ≥ t nodes in W with an associated sub-DAG. We construct a set of disjoint sub-DAGs in a greedy fashion: we take an arbitrary sub-DAG W_u and add it to the set of disjoint sub-DAGs that we construct. After that, we remove all nodes, arcs, and twin arcs of W_u from the DAG W. We repeat these steps until no sub-DAG W_u is left in W.
In order to see that the constructed set consists of at least t/4^{k+2} disjoint sub-DAGs, observe that each sub-DAG consists of at most 2^{k+1} − 1 nodes, as its height is k. Hence, it contains at most 2^k − 1 pairs of twin arcs, and there are at most 2^{k+2} − 4 arcs that belong to the sub-DAG or that have a twin arc belonging to the sub-DAG. Furthermore, observe that each of these arcs can be contained in at most 2^k − 1 sub-DAGs. Hence, every sub-DAG W_u can only be non-disjoint from at most 2^{2k+2} = 4^{k+1} other sub-DAGs. Thus, the number of disjoint sub-DAGs must be at least t/(4^{k+1} + 1) > t/4^{k+2}, where the bound of t on the number of nodes with an associated sub-DAG uses the assumption t > n · 4^{k+1}.

The expected number of 2-changes
Now we can prove Theorem 1.2.
Proof of Theorem 1.2 c). We combine Corollary 4.16 and Lemma 4.17 to obtain an upper bound on the probability that the length T of the longest path in the state graph exceeds t. For t > n · 4^{k+1}, the tour is shortened by the sequence of 2-changes by at least t/4^{k+2} · ∆^{(k)}_ws. Hence, for t > n · 4^{k+1}, a tail bound on T follows. Combining this inequality with Corollary 4.16 and summing over t, we can bound the expected number of 2-changes. Splitting the sum appropriately and setting k = √(log m) yields the theorem.

Expected Approximation Ratio
In this section, we consider the expected approximation ratio of the solution found by 2-Opt on φ-perturbed L_p instances. Chandra, Karloff, and Tovey [CKT99] show that if one has a set of n points in the unit hypercube [0, 1]^d and the distances are measured according to a metric that is induced by a norm, then every locally optimal solution has length at most c · n^{(d−1)/d} for an appropriate constant c depending on the dimension d and the metric. Hence, it follows for every L_p metric that 2-Opt yields a tour of length at most O(n^{(d−1)/d}) on φ-perturbed L_p instances. This implies that, in order to bound the expected approximation ratio of 2-Opt on these instances, we just need to bound the expected value of 1/Opt from above, where Opt denotes the length of the shortest tour.
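For concreteness, a plain first-improvement 2-Opt sketch on Euclidean points (a minimal implementation of ours; the pivot rule and tie handling are arbitrary choices, not prescribed by the paper):

```python
import math, random

def tour_length(pts, tour):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def improving_move(pts, tour):
    """Return (i, j) of an improving 2-change, or None if locally optimal."""
    n = len(tour)
    for i in range(n - 1):
        for j in range(i + 2, n if i else n - 1):   # skip pairs sharing a vertex
            a, b = pts[tour[i]], pts[tour[i + 1]]
            c, d = pts[tour[j]], pts[tour[(j + 1) % n]]
            if math.dist(a, c) + math.dist(b, d) < \
               math.dist(a, b) + math.dist(c, d) - 1e-12:
                return i, j
    return None

def two_opt(pts, tour):
    tour = tour[:]
    while (mv := improving_move(pts, tour)) is not None:
        i, j = mv
        tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]  # perform the 2-change
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(20)]
start = list(range(20))
opt = two_opt(pts, start)
assert improving_move(pts, opt) is None              # 2-Opt locally optimal
assert tour_length(pts, opt) <= tour_length(pts, start)
assert sorted(opt) == start
```

Each segment reversal removes the edges {tour[i], tour[i+1]} and {tour[j], tour[j+1]} and inserts {tour[i], tour[j]} and {tour[i+1], tour[j+1]}, which is exactly a 2-change.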
Lemma 5.1. Let p ∈ N ∪ {∞}. For φ-perturbed L_p instances with n points, it holds that E[1/Opt] = O(φ^{1/d}/n^{(d−1)/d}).

Proof. Let v_1, ..., v_n ∈ R^d denote the points of the φ-perturbed instance. We partition the unit hypercube into k = nφ smaller hypercubes with volume 1/k each and analyze how many of these smaller hypercubes contain at least one of the points. Assume that X of these hypercubes contain a point; then the optimal tour must have length at least X/(3^d · k^{1/d}). In order to see this, we construct a set P ⊆ {v_1, ..., v_n} of points as follows: consider the points v_1, ..., v_n one after another, and insert a point v_i into P if P does not yet contain a point in the same hypercube as v_i or in one of its 3^d − 1 neighboring hypercubes. Due to the triangle inequality, the optimal tour on P is at most as long as the optimal tour on v_1, ..., v_n. Furthermore, P contains at least X/3^d points, and every edge between two points from P has length at least 1/k^{1/d}, since P does not contain two points in the same or in two neighboring hypercubes.
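The greedy construction of P and its two invariants can be checked directly (a sketch of ours for d = 2; the grid side g and point count are arbitrary test parameters):

```python
import random
from itertools import product

def greedy_spread(points, g):
    """Greedy P from the proof of Lemma 5.1: keep a point only if no kept
    point lies in its grid cell (side 1/g) or a neighboring cell."""
    d = len(points[0])
    cell = lambda p: tuple(min(int(x * g), g - 1) for x in p)
    taken, P = set(), []
    for p in points:
        c = cell(p)
        neighborhood = [tuple(ci + o for ci, o in zip(c, off))
                        for off in product((-1, 0, 1), repeat=d)]
        if not any(nc in taken for nc in neighborhood):
            taken.add(c)
            P.append(p)
    return P, cell

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
g = 14                                  # k = g^d = 196 sub-hypercubes
P, cell = greedy_spread(pts, g)
X = len({cell(p) for p in pts})         # number of occupied sub-hypercubes
assert len(P) * 3 ** len(pts[0]) >= X   # |P| >= X / 3^d
linf = lambda p, q: max(abs(a - b) for a, b in zip(p, q))
assert all(linf(p, q) >= 1.0 / g for p in P for q in P if p is not q)
```

The distance assertion uses the L∞ norm; since every L_p distance is at least the L∞ distance, the lower bound of 1/g on pairwise distances in P carries over to all L_p metrics, as used in the proof.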
Hence, it remains to analyze the random variable X. For each hypercube i with 1 ≤ i ≤ k, we define a random variable X_i which takes the value 0 if hypercube i is empty and the value 1 if hypercube i contains at least one point. The density functions that specify the locations of the points induce, for each pair of a hypercube i and a point j, a probability p_i^j such that point j falls into hypercube i with probability p_i^j. Hence, one can think of throwing n balls into k bins in a setting where each ball has its own probability distribution over the bins. Due to the bounded density, we have p_i^j ≤ φ/k. For each hypercube i, let M_i = Σ_j p_i^j denote the probability mass associated with the hypercube. We can write the expected value of the random variable X_i as E[X_i] = 1 − Π_j (1 − p_i^j) ≥ 1 − (1 − M_i/n)^n, as, under the constraint Σ_j (1 − p_i^j) = n − M_i, the term Π_j (1 − p_i^j) is maximized if all p_i^j are equal. Due to linearity of expectation, the expected value of X is E[X] = Σ_i E[X_i]. Observe that Σ_i M_i = n. Thus, the sum Σ_i (1 − M_i/n)^n becomes maximal if the M_i's are chosen as unbalanced as possible. Hence, we assume that k/φ of the M_i's take their maximal value of nφ/k and the other M_i's are zero. This yields, for sufficiently large n, a lower bound on E[X] that is linear in n. Hence, we obtain a corresponding bound on the expected length of the optimal tour.

We still need to determine the expected value of the random variable 1/Opt. Therefore, we first show that X is sharply concentrated around its mean. The random variable X is the sum of the 0-1 random variables X_i. If these random variables were independent, we could simply use a Chernoff bound to bound the probability that X takes a value that is smaller than its mean. The X_i's are negatively associated, in the sense that whenever we already know that some of the X_i's are zero, the probability of the event that another X_i also takes the value zero becomes smaller. Hence, intuitively, the dependencies can only help to bound the probability that X takes a value smaller than its mean. Dubhashi and Ranjan [DR98] formalize this intuition by introducing the notion of negative dependence and by showing that, in the case of negatively dependent random variables, one can still apply a Chernoff bound. This yields Pr[X ≤ n/10] ≤ e^{−n/40}.
Thus, as 1/X ≤ 1 with certainty, the claimed bound on E[1/Opt] follows for sufficiently large n. If one combines Lemma 5.1 with the result of Chandra, Karloff, and Tovey that every locally optimal solution has length O(n^{(d−1)/d}), one obtains Theorem 1.4.
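The heterogeneous balls-into-bins estimate used in the proof of Lemma 5.1 can be checked numerically; the probability matrix below is arbitrary test data, not derived from any particular instance.

```python
import math, random

random.seed(1)
k, n = 4, 6
# ball j's distribution over the k bins (each column sums to 1 per ball)
w = [[random.random() + 0.01 for _ in range(k)] for _ in range(n)]
p = [[w[j][i] / sum(w[j]) for j in range(n)] for i in range(k)]  # p[i][j] = p_i^j

# E[X_i] = 1 - prod_j (1 - p_i^j); by the AM-GM inequality the product is
# at most (1 - M_i/n)^n, where M_i = sum_j p_i^j is the mass of bin i.
for row in p:
    M = sum(row)
    exact = 1 - math.prod(1 - q for q in row)
    assert exact >= 1 - (1 - M / n) ** n - 1e-12
```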

Smoothed Analysis
Smoothed analysis was introduced by Spielman and Teng [ST04] as a hybrid of worst-case and average-case analysis. The semi-random input model in a smoothed analysis is designed to capture the behavior of algorithms on typical inputs better than a worst-case or average-case analysis alone, as it allows an adversary to specify an arbitrary input which is randomly perturbed afterwards. In Spielman and Teng's analysis of the simplex algorithm, the adversary specifies an arbitrary linear program which is perturbed by adding independent Gaussian random variables to each number in the linear program. Our probabilistic analysis of Manhattan and Euclidean instances can also be seen as a smoothed analysis in which an adversary can choose the distributions for the points over the unit hypercube. The adversary is restricted to distributions that can be represented by densities that are bounded by φ. Our model cannot handle Gaussian perturbations directly because the support of Gaussian random variables is not bounded.
Assume that every point v_1, ..., v_n is described by a density whose support is restricted to the hypercube [−α, α]^d, for some α ≥ 1. Then, after appropriate scaling and translating, we can assume that all supports are restricted to the unit hypercube [0, 1]^d. Thereby, the maximal density φ increases by at most a factor of (2α)^d. Hence, after appropriate scaling and translating, Theorems 1.2, 1.3, and 1.4 can still be applied if one takes the increased densities into account.
One possibility to cope with Gaussian perturbations is to consider truncated Gaussian perturbations. In such a perturbation model, the coordinates of each point are initially chosen from [0, 1]^d and then perturbed by adding Gaussian random variables with some standard deviation σ that are conditioned to lie in [−α, α] for some α ≥ 1. The maximal density of such truncated Gaussian random variables for σ ≤ 1 is bounded from above by 1/σ.
After such a truncated perturbation, all points lie in the hypercube [−α, 1 + α]^d. Hence, one can apply Theorems 1.2, 1.3, and 1.4 with a density increased by the scaling factor (1 + 2α)^d. It is not necessary to truncate the Gaussian random variables if the standard deviation is small enough. For σ ≤ min{α/√(2(n + 1) ln n + 2 ln d), 1}, the probability that one of the Gaussian random variables has an absolute value larger than α ≥ 1 is bounded from above by n^{−n}. In this case, even if one does not truncate the random variables, Theorems 1.2, 1.3, and 1.4 can be applied with φ = O(α^d/σ^d). To see this, it suffices to observe that the worst-case number of 2-changes is at most n! and the worst-case approximation ratio is O(log n) [CKT99]. Multiplying these values with the failure probability of n^{−n} contributes less than 1 to the expected values. In particular, this implies that the expected length of the longest path in the state graph is bounded by O(poly(n, 1/σ)).
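The density bound for truncated Gaussians can be checked directly: conditioning N(0, σ²) on [−α, α] rescales the density by 1/Pr[|N| ≤ α], and the maximum is attained at 0 (a quick sanity check of ours, assuming symmetric truncation around the mean):

```python
import math

def trunc_gauss_max_density(sigma, alpha):
    """Maximal density of N(0, sigma^2) conditioned on [-alpha, alpha]:
    f(0) = (1 / (sigma * sqrt(2*pi))) / Pr[|N(0, sigma^2)| <= alpha]."""
    z = math.erf(alpha / (sigma * math.sqrt(2)))   # Pr[|N| <= alpha]
    return 1 / (sigma * math.sqrt(2 * math.pi)) / z

# for sigma <= 1 and alpha >= 1 the maximal density stays below 1/sigma
for sigma in (0.05, 0.2, 1.0):
    for alpha in (1.0, 2.0, 5.0):
        assert trunc_gauss_max_density(sigma, alpha) <= 1 / sigma
```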

Conclusions and Open Problems
We have shown several new results on the running time and the approximation ratio of the 2-Opt heuristic. However, there is still a variety of open problems regarding this algorithm. Our lower bounds only show that there exist families of instances on which 2-Opt takes an exponential number of steps if it uses a particular pivot rule. It would be interesting to analyze the diameter of the state graph and to either present instances on which every pivot rule needs an exponential number of steps or to prove that there is always an improvement sequence of polynomial length to a locally optimal solution. Also the worst-case number of local improvements for some natural pivot rules, e.g., the one that always makes the largest possible improvement or the one that always chooses a random improving 2-change, is not known yet. Furthermore, the complexity of computing locally optimal solutions is open. The only result in this regard is due to Krentel [Kre89], who shows that it is PLS-complete to compute a local optimum of the metric TSP for k-Opt for some constant k. It is not known whether his construction can be embedded into the Euclidean metric and whether it is PLS-complete to compute locally optimal solutions for 2-Opt. Fischer and Torenvliet [FT95] show, however, that for the general TSP, it is PSPACE-hard to compute a local optimum for 2-Opt that is reachable from a given initial tour.
The obvious open question concerning the probabilistic analysis is how the gap between experiments and theory can be narrowed further. In order to tackle this question, new methods seem to be necessary. Our approach, which is solely based on analyzing the smallest improvement made by a sequence of linked 2-changes, seems to yield too pessimistic bounds. Another interesting area to explore is the expected approximation ratio of 2-Opt. In experiments, approximation ratios close to 1 are observed. For instances that are chosen uniformly at random, the bound on the expected approximation ratio is a constant, but unfortunately a large one. It seems to be a very challenging problem to improve this constant to a value that matches the experimental results.
Besides 2-Opt, there are also other local search algorithms that are successful for the traveling salesperson problem. In particular, the Lin-Kernighan heuristic [LK73] is one of the most successful local search algorithms for the symmetric TSP. It is a variant of k-Opt in which k is not fixed, and it can roughly be described as follows: each local modification starts by removing one edge {a, b} from the current tour, which results in a Hamiltonian path with the two endpoints a and b. Then an edge {b, c} is added, which forms a cycle; there is a unique edge {c, d} incident to c whose removal breaks the cycle, producing a new Hamiltonian path with endpoints a and d. This operation is called a rotation. Now either a new Hamiltonian cycle can be obtained by adding the edge {a, d} to the tour, or another rotation can be performed. There are many different variants and heuristic improvements of this basic scheme, but little is known theoretically. Papadimitriou [Pap92] shows for a variant of the Lin-Kernighan heuristic that computing a local optimum is PLS-complete, which is a sharp contrast to the experimental results. Since the Lin-Kernighan heuristic is widely used in practice, a theoretical explanation for its good behavior is of great interest. Our analysis of 2-Opt relies crucially on the fact that there is only a polynomial number of different 2-changes. For the Lin-Kernighan heuristic, however, the number of different local improvements is exponential. Hence, it is an interesting question whether nonetheless the smallest possible improvement is polynomially large or whether different methods yield a polynomial upper bound on the expected running time of the Lin-Kernighan heuristic.
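The rotation operation described above has a compact list form: with the Hamiltonian path stored as a vertex sequence ending in b, adding {b, c} and removing the unique cycle-breaking edge {c, d} amounts to reversing the suffix after c. A sketch of one rotation (ours, not the full Lin-Kernighan heuristic):

```python
def rotate(path, i):
    """One rotation on a Hamiltonian path: add the edge {b, c} with
    b = path[-1] and c = path[i], then remove the unique edge {c, d}
    (d = path[i + 1]) that breaks the resulting cycle. The new
    Hamiltonian path has endpoints path[0] and d."""
    assert 0 <= i <= len(path) - 3      # c may not be b or b's path neighbor
    return path[:i + 1] + path[i + 1:][::-1]

p = ['a', 'x', 'y', 'd', 'z', 'b']      # path from a to b
q = rotate(p, 2)                        # c = 'y', d = 'd'
assert q[0] == 'a' and q[-1] == 'd'     # new endpoints a and d
assert sorted(q) == sorted(p)           # still a permutation of the vertices
```

Closing the new path with the edge {a, d} would complete a tour, matching the description in the text; repeated rotations give the deeper k-Opt-like moves.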

A Some Probability Theory
Lemma A.1. Let X_1, ..., X_n be independent d-dimensional random row vectors, and, for i ∈ [n] and some φ ≥ 1, let f_i : [0, 1]^d → [0, φ] denote the joint density of the entries of X_i. Furthermore, let λ_1, ..., λ_k ∈ Z^{dn} be linearly independent row vectors. For i ∈ [k] and a fixed ε ≥ 0, we denote by A_i the event that λ_i · X takes a value in the interval [0, ε], where X denotes the vector X = (X_1, ..., X_n)^T. Under these assumptions, the stated bound on the probability of the intersection of the events A_1, ..., A_k holds.

Proof. The main tool for proving the lemma is a change of variables. Instead of using the canonical basis of the dn-dimensional vector space R^{dn}, we use the given linear combinations as basis vectors. To be more precise, the basis B that we use consists of two parts: it contains the vectors λ_1, ..., λ_k, and it is completed by some vectors from the canonical basis {e^1, ..., e^{dn}}, where e^i denotes the i-th canonical row vector, i.e., e^i_i = 1 and e^i_j = 0 for j ≠ i. That is, the basis B can be written as {λ_1, ..., λ_k, e^{π(1)}, ..., e^{π(dn−k)}} for some injective function π. Let Φ : R^{dn} → R^{dn} be defined by Φ(x) = Ax, where A denotes the (dn) × (dn) matrix whose rows are the basis vectors in B, and where det ∂Φ^{−1} denotes the determinant of the Jacobian matrix of Φ^{−1}. The matrix A is invertible, as B is a basis of R^{dn}. Hence, for y ∈ R^{dn}, Φ^{−1}(y) = A^{−1}y, and the Jacobian matrix of Φ^{−1} equals A^{−1}. Thus, det ∂Φ^{−1} = det A^{−1} = (det A)^{−1}. Since all entries of A are integers, its determinant must be an integer as well, and since A has rank dn, we know that det A ≠ 0. Hence, |det A| ≥ 1 and |det A^{−1}| ≤ 1. For y ∈ R^{dn}, we decompose Φ^{−1}(y) ∈ R^{dn} into n subvectors with d entries each, i.e., Φ^{−1}(y) = (Φ^{−1}_1(y), ..., Φ^{−1}_n(y)) with Φ^{−1}_i(y) ∈ R^d for i ∈ [n]. This yields the claimed bound.

Proof of Lemma 4.7. The conditional density of ∆ can be calculated as the convolution of the conditional densities of Z_1 and Z_2. In order to estimate the convolution integral, we distinguish between several cases. In the following, let κ denote a sufficiently large constant. Due to Lemma 4.6, we can estimate the densities of Z_1 and Z_2, and for δ ∈ (0, min{1/2, 2τ}], we obtain a corresponding upper bound on the density of ∆. Second case: r_1 ≤ τ and r_2 ≤ τ. Since Z_i takes only values in the interval [−τ, 2r_i − τ], we can assume 0 < δ ≤ min{1/2, 2r_1}. Due to Lemma 4.6, we can again estimate the densities of Z_1 and Z_2. Case 2.1: δ ∈ (max{0, 2(r_1 − r_2)}, 2r_1]. We obtain a corresponding upper bound on the density of ∆. Case 2.2: δ ∈ (0, max{0, 2(r_1 − r_2)}].
We obtain the following upper bound on the density of ∆: For δ ∈ (0, min{1/2, 2r 1 }], we obtain the following upper bound on the density of ∆:   We obtain the following upper bound on the density of ∆: Altogether, this yields the lemma.
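The determinant argument in the proof of Lemma A.1 is easy to check computationally. The sketch below (our own illustration; all function names are ours) completes linearly independent integer row vectors to a basis with canonical unit vectors, exactly as B is built above, and verifies that the resulting integer matrix A satisfies |det A| ≥ 1, hence |det A^{−1}| ≤ 1:

```python
from fractions import Fraction

def det(M):
    """Exact determinant of a square matrix via Gaussian elimination
    over the rationals (no floating-point error)."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d  # a row swap flips the sign
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def rank(M):
    """Rank of a (possibly non-square) matrix, again via exact elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    rk = 0
    for c in range(len(M[0])):
        p = next((r for r in range(rk, len(M)) if M[r][c] != 0), None)
        if p is None:
            continue
        M[rk], M[p] = M[p], M[rk]
        for r in range(rk + 1, len(M)):
            f = M[r][c] / M[rk][c]
            for cc in range(c, len(M[0])):
                M[r][cc] -= f * M[rk][cc]
        rk += 1
    return rk

def complete_to_basis(lambdas, dim):
    """Extend independent integer row vectors lambda_1, ..., lambda_k to a
    basis of R^dim by greedily adding canonical unit vectors, as in the
    construction of the basis B in the proof of Lemma A.1."""
    rows = [list(l) for l in lambdas]
    for j in range(dim):
        e = [0] * dim
        e[j] = 1
        if rank(rows + [e]) > len(rows):
            rows.append(e)
    return rows
```

For example, completing λ_1 = (1, 0, −1, 0) and λ_2 = (0, 2, 0, −1) in R^4 yields an integer matrix A whose determinant is a nonzero integer, so |det A| ≥ 1 and |det A^{−1}| ≤ 1, matching the Jacobian bound used in the proof.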

B.2 Proof of Lemma 4.8
First, we derive the following lemma, which gives bounds on the conditional density of the random variable ∆ when only one of the radii R_1 and R_2 is given.
Lemma B.1. Let r_1, r_2, τ ∈ (0, √d) and δ ∈ (0, 1/2]. In the following, let κ denote a sufficiently large constant.

a) The density of ∆ under the conditions T = τ and R_1 = r_1 is bounded by

b) The density of ∆ under the conditions T = τ and R_2 = r_2 is bounded by

Proof. a) We can write the density of ∆ under the conditions T = τ and R_1 = r_1 as

where f_{R_2} denotes the density of the length R_2 = d(O, Q_2). We use Lemma 4.7 to bound this integral. For r_1 ≤ τ and sufficiently large constants κ and κ′, we obtain

For τ ≤ r_1 and a sufficiently large constant κ′, we obtain analogously

b) For r_2 ≤ τ and sufficiently large constants κ and κ′, we obtain

For τ ≤ r_2 and a sufficiently large constant κ′, we obtain

Now we are ready to prove Lemma 4.8.
Proof of Lemma 4.8. a) In order to prove part a), we integrate f_{∆|T=τ,R_1=r}(δ) over all values τ that T can take:

Furthermore, we integrate f_{∆|T=τ,R_2=r}(δ) over all values τ that T can take:

Figure 3.2: This illustration shows the points of the gadgets G_{n−1} and G_{n−2}. One can see that G_{n−2} is a scaled, rotated, and translated copy of G_{n−1}.
r_2, and T := d(O, P) = τ can be computed as the convolution of the density f_{Z|T=τ,R=r} with itself.
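A self-convolution of this kind can be approximated numerically. The sketch below is a generic stand-in (our own illustration; a uniform density replaces f_{Z|T=τ,R=r} purely for demonstration) and recovers the well-known triangle density of the sum of two independent uniform variables:

```python
def convolve_density(f, g, lo, hi, m):
    """Approximate the density of X + Y for independent X ~ f and Y ~ g,
    both supported on [lo, hi], by a midpoint Riemann sum with m cells."""
    h = (hi - lo) / m
    xs = [lo + (i + 0.5) * h for i in range(m)]

    def out(t):
        # (f * g)(t) = integral of f(x) * g(t - x) dx
        return sum(f(x) * g(t - x) for x in xs) * h

    return out

# Stand-in example: Z uniform on [0, 1].  The self-convolution is the
# triangle density: t on [0, 1] and 2 - t on [1, 2].
u = lambda z: 1.0 if 0.0 <= z <= 1.0 else 0.0
fd = convolve_density(u, u, 0.0, 1.0, 10_000)
print(round(fd(0.5), 3), round(fd(1.0), 3), round(fd(1.5), 3))  # -> 0.5 1.0 0.5
```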

Due to Lemma 4.8 d), the conditional density of the random variable Z = d(v_2, v_5) − d(v_1, v_5) under the condition d(v_1, v_2) = r can be bounded by

Proof of Theorem 1.3 b). Estimating the length of the initial tour by O(n^{(d−1)/d} · log n) instead of O(n) improves the upper bound on the expected number of 2-changes by a factor of Θ(n^{1/d}/log n) compared to Theorem 1.2 b). This observation yields the bound claimed in Theorem 1.3 b).
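For illustration, a minimal 2-Opt local search that counts improving 2-changes can be sketched as follows. This is our own Python sketch (Euclidean distances, vertex-list tour representation), not the implementation analyzed in the paper:

```python
import math

def tour_length(tour, pts):
    """Total Euclidean length of the closed tour visiting pts in this order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Run 2-Opt to a local optimum, counting improving 2-changes.

    A 2-change removes the edges {t[i], t[i+1]} and {t[j], t[j+1]} and
    reconnects the tour by reversing the segment between them.
    """
    tour, steps = list(tour), 0
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            # for i == 0, skip j == n-1: those two edges are adjacent
            for j in range(i + 2, len(tour) - (i == 0)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                # change in tour length if the two old edges are replaced
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    steps += 1
                    improved = True
    return tour, steps
```

On four corners of the unit square with a self-crossing starting tour, a single improving 2-change uncrosses the tour and reaches the optimal perimeter of length 4.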

Figure 4.5: Construction of a path in the witness DAG: the path has been constructed up to step S_i, and now it has to be decided whether to continue it along e_i or ê_i.
then S_1, ..., S_{i+1} is a witness sequence of type 3.

(b) If e_{i+1}, g_{i+1} ∈ E_{i−1}, then S_1, ..., S_i is a witness sequence of type 2, since one endpoint of f_{i−1} equals one endpoint of e_i and the other one equals one endpoint of either e_{i+1} or g_{i+1}.

(c) If f_i ∈ E_i and (e_{i+1} ∈ E_i \ E_{i−1} or g_{i+1} ∈ E_i \ E_{i−1}), then one can assume w.l.o.g. that g_{i+1} = f_{i−1} and e_{i+1} ∈ E_{i−1}, since e_{i+1} ≠ ê_i and g_{i+1} ≠ ê_i (e_{i+1} and g_{i+1} share one endpoint with e_i; ê_i does not share an endpoint with e_i). In this case, return the step S = (e_{i−1}, f_i) → (e_{i+1}, ê_i). Observe that e_{i+1}, ê_i ∈ E_{i−1}, as desired.

4. No Continuation II: e_i ∉ E_{i−1} and e_{i+1}, g_{i+1}, ê_{i+1}, ĝ_{i+1} ∈ E_i and f_{i−1} ∉ E_{i−1}.

(a) If e_{i+1}, g_{i+1}, ê_{i+1}, ĝ_{i+1} ∈ E_{i−1}, then S_1, ..., S_i is a witness sequence of type 2.

(b) If f_i ∉ E_i, then S_1, ..., S_{i+1} is a witness sequence of type 3.

(c) If f̂_i ∉ E_i, then S_1, ..., S_i, Ŝ_{i+1} is a witness sequence of type 3.