
Euclidean Travelling Salesman Problem with Location-Dependent and Power-Weighted Edges


Consider \(n\) nodes \(\{X_i\}_{1 \le i \le n}\) independently distributed in the unit square \(S,\) each according to a density \(f\), and let \(K_n\) be the complete graph formed by joining each pair of nodes by a straight line segment. To every edge \(e\) of \(K_n\), we associate a weight \(w(e)\) that may depend on the individual locations of the endvertices of \(e\) and is not necessarily a power of the Euclidean length of \(e.\) Denoting by \(\mathrm{TSP}_n\) the minimum weight of a spanning cycle of \(K_n\) corresponding to the travelling salesman problem (TSP) and assuming an equivalence condition on the weight function \(w(\cdot ),\) we prove that \(\mathrm{TSP}_n,\) appropriately scaled and centred, converges to zero almost surely and in mean as \(n \rightarrow \infty .\) We also obtain upper and lower bound deviation estimates for \(\mathrm{TSP}_n.\)







Acknowledgements

I thank Professors Rahul Roy, Thomas Mountford, Federico Camia, C. R. Subramanian and the referee for crucial comments that led to an improvement of the paper. I also thank IMSc for my fellowships.

Author information



Corresponding author

Correspondence to Ghurumuruhan Ganesan.


Appendix: Miscellaneous Results


Standard Deviation Estimates

We use the following standard deviation estimates for sums of independent Poisson and Bernoulli random variables.

Lemma 13

Suppose \(W_i, 1 \le i \le m\) are independent Bernoulli random variables satisfying \(\mu _1 \le {\mathbb {P}}(W_1=1) = 1-{\mathbb {P}}(W_1~=~0) \le \mu _2.\) For any \(0< \epsilon < \frac{1}{2},\)

$$\begin{aligned} {\mathbb {P}}\left( \sum _{i=1}^{m} W_i > m\mu _2(1+\epsilon ) \right) \le \exp \left( -\frac{\epsilon ^2}{4}m\mu _2\right) \end{aligned}$$ (7.1)


$$\begin{aligned} {\mathbb {P}}\left( \sum _{i=1}^{m} W_i < m\mu _1(1-\epsilon ) \right) \le \exp \left( -\frac{\epsilon ^2}{4}m\mu _1\right) \end{aligned}$$ (7.2)

Estimates (7.1) and (7.2) also hold if \(\{W_i\}\) are independent Poisson random variables with \(\mu _1 \le {\mathbb {E}}W_1 \le \mu _2.\)

For completeness, we give a quick proof.

Proof of Lemma 13

First suppose that \(\{W_i\}\) are independent Poisson with \(\mu _1 \le {\mathbb {E}}W_i \le \mu _2,\) so that \({\mathbb {E}}e^{sW_i} = \exp \left( {\mathbb {E}}W_i (e^{s}-1)\right) \le \exp \left( \mu _2(e^{s}-1)\right) \) for \(s~>~0.\) By the Chernoff bound, we then have

$$\begin{aligned} {\mathbb {P}}\left( \sum _{i=1}^{m} W_i > m\mu _2(1+\epsilon )\right) \le e^{-sm\mu _2(1+\epsilon )} \exp \left( m\mu _2(e^{s}-1)\right) = e^{m\mu _2 \varDelta _1}, \end{aligned}$$

where \(\varDelta _1 = e^{s} -1 -s-s\epsilon .\) For \(s \le 1,\) we have the bound

$$\begin{aligned} e^{s}-1-s = \sum _{k\ge 2} \frac{s^{k}}{k!} \le s^2\sum _{k\ge 2} \frac{1}{k!} = s^2(e-2) \le s^2 \end{aligned}$$

and so we set \(s = \frac{\epsilon }{2}\) to get that \(\varDelta _1 \le s^2-s\epsilon =\frac{-\epsilon ^2}{4}.\)

Similarly, for \(s > 0,\) we have

$$\begin{aligned} {\mathbb {E}}e^{-sW_i} = \exp \left( {\mathbb {E}}W_i(e^{-s}-1)\right) \le \exp \left( \mu _1(e^{-s}-1)\right) \end{aligned}$$

and so

$$\begin{aligned} {\mathbb {P}}\left( \sum _{i=1}^{m} W_i < m\mu _1(1-\epsilon )\right) \le e^{sm\mu _1(1-\epsilon )} \exp \left( m\mu _1(e^{-s}-1)\right) = e^{-m\mu _1\varDelta _2}, \end{aligned}$$

where \(\varDelta _2 = 1 -s-e^{-s} + s\epsilon .\) For \(s \le 1,\) the term \(e^{-s}\le 1-s + \frac{s^2}{2}\) and so we get \(\varDelta _2 \ge -\frac{s^2}{2}+s\epsilon =\frac{\epsilon ^2}{2}\) for \(s = \epsilon .\)

The proof for the Bernoulli case follows from the fact that if \(\{W_i\}_{1 \le i \le m}\) are independent Bernoulli distributed with \(\mu _1 \le {\mathbb {E}}W_i \le \mu _2,\) then for \(s > 0\) we have \({\mathbb {E}}e^{s W_i} = 1-{\mathbb {E}}W_i + e^{s} {\mathbb {E}}W_i \le \exp \left( {\mathbb {E}}W_i (e^{s}-1)\right) \le \exp \left( \mu _2(e^{s}-1)\right) \) and similarly \({\mathbb {E}}e^{-s W_i} \le \exp \left( \mu _1(e^{-s}-1)\right) .\) The rest of the proof is then as above.  \(\square \)
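As a sanity check on Lemma 13, the following Monte Carlo sketch compares the empirical upper-tail frequency of a Bernoulli sum with the bound in (7.1). The parameters `m`, `p`, `eps` and the trial count are illustrative choices, not values from the paper; here \(\mu _1 = \mu _2 = p.\)

```python
import math
import random

random.seed(0)

# Illustrative parameters: m i.i.d. Bernoulli(p) variables, so mu1 = mu2 = p.
m, p, eps = 500, 0.3, 0.2
trials = 2000

# Empirical frequency of the upper-tail event {sum_i W_i > m*mu2*(1+eps)}.
threshold = m * p * (1 + eps)
hits = 0
for _ in range(trials):
    s = sum(1 for _ in range(m) if random.random() < p)
    if s > threshold:
        hits += 1
empirical = hits / trials

# Chernoff bound from Lemma 13: exp(-(eps^2/4) * m * mu2).
bound = math.exp(-(eps**2 / 4) * m * p)

assert empirical <= bound
```

The bound is loose at these parameter values (roughly \(e^{-1.5}\)), so the empirical tail frequency falls well below it.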

Proof of the Monotonicity Property (2.14)

For \(\alpha \le 1\), we couple the original Poisson process \(\mathcal{P}\) and the homogeneous process \(\mathcal{P}_{\delta }\) in the following way. Let \(V_{i}, i \ge 1\) be i.i.d. random variables each with density \(f(\cdot )\) and let \(N_V\) be a Poisson random variable with mean \(n,\) independent of \(\{V_i\}.\) The nodes \(\{V_i\}_{1 \le i \le N_V}\) form a Poisson process with intensity \(nf(\cdot ),\) which we denote as \(\mathcal{P}\) and colour green.

Let \(U_i, i \ge 1\) be i.i.d. random variables each with density \(\frac{\epsilon _2-f(\cdot )}{\epsilon _2-1},\) where \(\epsilon _2 \ge 1\) is as in (1.1), and let \(N_U\) be a Poisson random variable with mean \(n(\epsilon _2-1).\) The random variables \((\{U_i\},N_U)\) are independent of \((\{V_i\},N_V)\) and the nodes \(\{U_i\}_{1 \le i \le N_U}\) form a Poisson process with intensity \(n(\epsilon _2-f(\cdot )),\) which we denote as \(\mathcal{P}_{ext}\) and colour red. The nodes of \(\mathcal{P}\) and \(\mathcal{P}_{ext}\) together form a homogeneous Poisson process with intensity \(n\epsilon _2,\) which we denote as \(\mathcal{P}_{\delta }\) and define on the probability space \((\varOmega _{\delta },\mathcal{F}_{\delta }, {\mathbb {P}}_{\delta }).\)

Let \(\omega _{\delta } \in \varOmega _{\delta }\) be any configuration and as above let \(\{i^{(\delta )}_{j}\}_{1 \le j \le Q_{\delta }}\) be the indices of the squares in \(\{R_j\}\) containing at least one node of \(\mathcal{P}_{\delta }\) and let \(\{i_{j}\}_{1 \le j \le Q}\) be the indices of the squares in \(\{R_j\}\) containing at least one node of \(\mathcal{P}.\) The indices in \(\{i^{(\delta )}_j\}\) and \(\{i_j\}\) depend on \(\omega _{\delta }.\) Defining \(S_{\alpha } = S_{\alpha }(\omega _{\delta })\) and \(S^{(\delta )}_{\alpha } = S_{\alpha }^{(\delta )}(\omega _{\delta })\) as before, we have that \(S_{\alpha }\) is determined only by the green nodes of \(\omega _{\delta }\) while \(S^{(\delta )}_{\alpha }\) is determined by both green and red nodes of \(\omega _{\delta }.\)

From the monotonicity property, we therefore have that \(S_{\alpha }(\omega _{\delta }) \le S^{(\delta )}_{\alpha }(\omega _{\delta })\) and so for any \(x > 0\) we have

$$\begin{aligned} {\mathbb {P}}_{\delta }(S^{(\delta )}_{\alpha }< x) \le {\mathbb {P}}_{\delta }(S_{\alpha }< x) = {\mathbb {P}}_0(S_{\alpha } < x), \end{aligned}$$ (7.3)

proving (2.14).

If \(\alpha > 1,\) we perform a slightly different analysis. Letting \(\epsilon _1 \le 1\) be as in (1.1), we construct a Poisson process \(\mathcal{P}_{ext}\) with intensity \(n(f(\cdot )-\epsilon _1)\) and colour nodes of \(\mathcal{P}_{ext}\) red. Letting \(\mathcal{P}_{\delta }\) be another independent Poisson process with intensity \(n\epsilon _1,\) we colour nodes of \(\mathcal{P}_{\delta }\) green. The superposition of \(\mathcal{P}_{ext}\) and \(\mathcal{P}_{\delta }\) is a Poisson process with intensity \(nf(\cdot ),\) which we define on the probability space \((\varOmega _{\delta },\mathcal{F}_{\delta },{\mathbb {P}}_{\delta }).\) In this case, the sum \(S_{\alpha }\) is determined by both green and red nodes while \(S^{(\delta )}_{\alpha }\) is determined only by the green nodes. Again using the monotonicity property of \(S_{\alpha },\) we get (7.3). \(\square \)
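The coupling above rests on the superposition property of Poisson processes: independent green and red counts with means \(n\) and \(n(\epsilon _2-1)\) combine into a single Poisson count with mean \(n\epsilon _2.\) A minimal simulation of this, with illustrative values of `n` and `eps2` (not from the paper) and \(f\) uniform so that both intensities are constant:

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's multiplication method; adequate for moderate means.
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

# Illustrative parameters: f uniform on the unit square, so the red
# process P_ext has constant intensity n*(eps2 - 1).
n, eps2, trials = 40, 1.5, 5000

totals = []
for _ in range(trials):
    green = poisson(n)               # nodes of P, mean n
    red = poisson(n * (eps2 - 1))    # nodes of P_ext, mean n*(eps2 - 1)
    totals.append(green + red)       # the superposition P_delta

avg = sum(totals) / trials
# The superposed count is Poisson with mean n*eps2 = 60.
assert abs(avg - n * eps2) < 1.0
```

The empirical mean of the combined counts concentrates around \(n\epsilon _2,\) as the superposition property predicts.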

Additive Relations

If \(\mathrm{TSP}(x_1,\ldots ,x_j), j \ge 1\) denotes the length of the TSP cycle with vertex set \(\{x_1,\ldots ,x_j\},\) then for any \(k \ge 1,\) we have

$$\begin{aligned}&\mathrm{TSP}(x_1,\ldots ,x_{j+k}) \le \mathrm{TSP}(x_1,\ldots ,x_j) \nonumber \\&\quad + \mathrm{TSP}(x_{j+1},\ldots ,x_{j+k}) + (c_2\sqrt{2})^{\alpha } \end{aligned}$$ (7.4)

and if \( \alpha \le 1\) and the edge weight function \(h\) is a metric, then

$$\begin{aligned} \mathrm{TSP}(x_1,\ldots ,x_j) \le \mathrm{TSP}(x_1,\ldots ,x_{j+k}). \end{aligned}$$ (7.5)

Proof of (7.4) and (7.5)

To prove (7.4), suppose \(\mathcal{C}_{1}\) is the minimum weight spanning cycle formed by the nodes \(\{x_l\}_{1 \le l \le j}\) and \(\mathcal{C}_2\) is the minimum weight spanning cycle formed by \(\{x_l\}_{j+1 \le l \le j+k}.\) Let \(e_1 = (u_1,v_1) \in \mathcal{C}_1\) and \(e_2 = (u_2,v_2) \in \mathcal{C}_2\) be any two edges. The cycle

$$\begin{aligned} \mathcal{C}_{tot} = \left( \mathcal{C}_1 \setminus \{e_1\}\right) \cup \left( \mathcal{C}_2 \setminus \{e_2\}\right) \cup \{(u_1,u_2), (v_1,v_2)\} \end{aligned}$$

obtained by removing the edges \(e_1,e_2\) and adding the “cross”-edges \((u_1,u_2)\) and \((v_1,v_2)\) is a spanning cycle containing all the nodes \(\{x_l\}_{1 \le l \le j+k}.\) The edges \((u_1,u_2)\) and \((v_1,v_2)\) have a Euclidean length of at most \(\sqrt{2}\) and so a weight of at most \((c_2\sqrt{2})^{\alpha }\) using the bounds for the metric \(h\) in (1.2). This proves (7.4).
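The merging construction can be checked by brute force on small instances. In this sketch, which is illustrative rather than taken from the paper, the metric \(h\) is taken to be the Euclidean distance (so \(c_1 = c_2 = 1\)) and `ALPHA` is an assumed exponent; we verify the immediate weaker form of the bound in which each of the two added cross edges contributes at most \((\sqrt{2})^{\alpha }\):

```python
import itertools
import math
import random

ALPHA = 0.5  # assumed exponent; h is taken to be the Euclidean metric

def cycle_weight(order):
    # Weight of the spanning cycle visiting `order` and returning to the start.
    return sum(
        math.dist(order[i], order[(i + 1) % len(order)]) ** ALPHA
        for i in range(len(order))
    )

def tsp(points):
    # Exact TSP by enumerating cyclic orders with the first point fixed.
    first, rest = points[0], tuple(points[1:])
    return min(cycle_weight((first,) + perm) for perm in itertools.permutations(rest))

random.seed(2)
A = [(random.random(), random.random()) for _ in range(4)]
B = [(random.random(), random.random()) for _ in range(3)]

# Removing one edge from each optimal cycle and adding two cross edges
# (each of weight at most sqrt(2)**ALPHA in the unit square) yields a
# spanning cycle of the union, so:
assert tsp(A + B) <= tsp(A) + tsp(B) + 2 * math.sqrt(2) ** ALPHA
```

Enumeration is exponential in the number of nodes, so this is only feasible for the handful of points used here.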

It suffices to prove (7.5) for \(k =1.\) Let \(\mathcal{C} = (y_1,\ldots ,y_{j+1},y_1)\) be any cycle with vertex set \(\{y_i\}_{1 \le i \le j+1} = \{x_i\}_{1 \le i \le j+1}\) and without loss of generality suppose that \(y_{j+1} = x_{j+1}.\) Removing the edges \((y_j,y_{j+1})\) and \((y_{j+1},y_1),\) and adding the edge \((y_1,y_j)\) we get a new cycle \(\mathcal{C}'\) with vertex set \(\{x_i\}_{1 \le i \le j}.\)

Since the edge weight function \(h\) is a metric, we have by triangle inequality that \(h(y_j,y_1) \le h(y_j,y_{j+1}) + h(y_{j+1},y_1).\) Using \((a+b)^{\alpha } \le a^{\alpha } + b^{\alpha }\) for \(a,b > 0\) and \(0 < \alpha \le 1\), we get that \(h^{\alpha }(y_j,y_1) \le h^{\alpha }(y_j,y_{j+1}) + h^{\alpha }(y_{j+1},y_1).\) Therefore, the weight \(W(\mathcal{C}')\) of \(\mathcal{C}'\)

$$\begin{aligned} W(\mathcal{C'})= & {} \sum _{i=1}^{j-1}h^{\alpha }(y_i,y_{i+1}) + h^{\alpha }(y_j,y_1) \\&\le \sum _{i=1}^{j} h^{\alpha }(y_i,y_{i+1}) + h^{\alpha }(y_{j+1},y_1) = W(\mathcal{C}). \end{aligned}$$

Therefore, \(\mathrm{TSP}(x_1,\ldots ,x_j) \le W(\mathcal{C}') \le W(\mathcal{C}).\) Taking the minimum over all cycles \(\mathcal{C}\) with vertex set \(\{x_i\}_{1 \le i \le j+1},\) we get (7.5) for \(k=1.\) \(\square \)
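The shortcutting argument just given can also be checked numerically on a small instance. As before this is an illustrative sketch with \(h\) taken to be the Euclidean metric and an assumed exponent `ALPHA` in \((0,1]\):

```python
import itertools
import math
import random

ALPHA = 0.7  # assumed exponent with 0 < ALPHA <= 1; h is the Euclidean metric

def tsp(points):
    # Exact TSP over all cyclic orders, edge weight = distance ** ALPHA.
    first, rest = points[0], tuple(points[1:])
    return min(
        sum(
            math.dist(cyc[i], cyc[(i + 1) % len(cyc)]) ** ALPHA
            for i in range(len(cyc))
        )
        for perm in itertools.permutations(rest)
        for cyc in [(first,) + perm]
    )

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(7)]

# Shortcutting via the triangle inequality: dropping the last node can
# only decrease the optimal weight when ALPHA <= 1, as in (7.5).
assert tsp(pts[:6]) <= tsp(pts)
```

For \(\alpha > 1\) the subadditivity \((a+b)^{\alpha } \le a^{\alpha } + b^{\alpha }\) fails, which is why the monotonicity (7.5) is stated only for \(\alpha \le 1.\)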

Moments of Random Variables

Let \(X \ge 1\) be any integer-valued random variable such that

$$\begin{aligned} {\mathbb {P}}(X \ge l) \le e^{-\theta (l-1)} \end{aligned}$$ (7.6)

for all integers \(l \ge 1\) and some constant \(\theta > 0\) not depending on \(l.\) For every integer \(r \ge 1,\)

$$\begin{aligned} {\mathbb {E}}X^{r} \le r\sum _{l\ge 1} l^{r-1} {\mathbb {P}}(X \ge l) \le r\sum _{l \ge 1} l^{r-1} e^{-\theta (l-1)} \le \frac{r!}{(1-e^{-\theta })^{r}}. \end{aligned}$$ (7.7)

Proof of (7.7)

For \(r \ge 1\), we have

$$\begin{aligned} {\mathbb {E}}X^{r} = \sum _{l \ge 1} l^{r} {\mathbb {P}}(X= l) = \sum _{l \ge 1}\left( l^{r} {\mathbb {P}}(X \ge l) - l^{r}{\mathbb {P}}(X \ge l+1)\right) \end{aligned}$$ (7.8)

and substituting the \(l^{r}\) in the final term of (7.8) with \( (l+1)^{r} - ((l+1)^{r}-l^{r})\), we get

$$\begin{aligned} {\mathbb {E}}X^r= & {} \sum _{l \ge 1} \left( l^{r} {\mathbb {P}}(X \ge l) - (l+1)^{r} {\mathbb {P}}(X \ge l+1)\right) \nonumber \\&+ \;\;\;\sum _{l \ge 1} ((l+1)^{r}-l^{r}) {\mathbb {P}}(X \ge l+1) \nonumber \\&= 1 + \sum _{l \ge 1}((l+1)^{r}-l^{r}) {\mathbb {P}}(X \ge l+1) \nonumber \\&= \sum _{l \ge 0}((l+1)^{r}-l^{r}) {\mathbb {P}}(X \ge l+1) \end{aligned}$$ (7.9)

where the second equality is true since \(l^{r}{\mathbb {P}}(X \ge l) \le l^{r}e^{-\theta (l-1)} \longrightarrow 0\) as \(l~\rightarrow ~\infty .\) Using \((l+1)^{r} - l^{r} \le r\cdot (l+1)^{r-1}\) in (7.9), we get the first relation in (7.7).

We prove the second relation in (7.7) by induction as follows. Let \(\gamma = e^{-\theta } < 1\) and \(J_r := \sum _{l \ge 1} l^{r-1} \gamma ^{l-1}\) so that

$$\begin{aligned} J_{r+1}(1 - \gamma ) = \sum _{l \ge 1} l^{r} \gamma ^{l-1} - \sum _{l \ge 1}l^{r} \gamma ^{l} = \sum _{l \ge 1} \left( l^{r}-(l-1)^{r} \right) \gamma ^{l-1}. \end{aligned}$$

Using \(l^{r}-(l-1)^r \le r\cdot l^{r-1}\) for \(l \ge 1\) we therefore get that

$$\begin{aligned} J_{r+1}(1-\gamma ) \le r\sum _{l \ge 1}l^{r-1} \gamma ^{l-1} = rJ_r \end{aligned}$$

and so the second relation in (7.7) follows from induction. \(\square \)
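A quick numerical check of (7.7) in the extremal case of (7.6), where the tail bound holds with equality, so that \(X\) is geometric with \({\mathbb {P}}(X = l) = \gamma ^{l-1}(1-\gamma )\) for \(\gamma = e^{-\theta }.\) The values of `theta` and `r` below are illustrative:

```python
import math

# Extremal case of (7.6): P(X >= l) = gamma**(l-1) with gamma = exp(-theta),
# i.e. P(X = l) = gamma**(l-1) * (1 - gamma) for l = 1, 2, ...
theta, r = 0.8, 3
gamma = math.exp(-theta)

# E[X^r] computed by a truncated series (the tail beyond l = 400 is negligible).
moment = sum(l**r * gamma ** (l - 1) * (1 - gamma) for l in range(1, 400))

# The bound in (7.7): r! / (1 - exp(-theta))^r.
bound = math.factorial(r) / (1 - gamma) ** r

assert moment <= bound
```

The bound is not tight (the losses in the two inequalities of (7.7) compound), but it has the clean factorial form needed for moment generating function estimates.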

Scaling and Translation Relations

For a set of nodes \(\{x_1,\ldots ,x_n\}\) in the unit square \(S,\) recall from Sect. 1 that \(K_n(x_1,\ldots ,x_n)\) is the complete graph formed by joining all the nodes by straight line segments and the edge \((x_i,x_j)\) is assigned a weight of \(d^{\alpha }(x_i,x_j),\) where \(d(x_i,x_j)\) is the Euclidean length of the edge \((x_i,x_j).\) We denote by \(\mathrm{TSP}(x_1,\ldots ,x_n)\) the minimum weight of a spanning cycle of \(K_n(x_1,\ldots ,x_n)\) with edge weights obtained as in (1.4).

\(\underline{Scaling }\): For any \(a > 0,\) consider the graph \(K_n(ax_1,\ldots ,ax_n),\) where the length of the edge between the vertices \(ax_1\) and \(ax_2\) is simply \(a\) times the length of the edge between \(x_1\) and \(x_2\) in the graph \(K_n(x_1,\ldots ,x_n).\) Using the definition of TSP in (1.4), we then have \(\mathrm{TSP}(ax_1,\ldots ,ax_n) = a^{\alpha } \mathrm{TSP}(x_1,\ldots ,x_n)\) and so if \(Y_1,\ldots ,Y_n\) are \(n\) nodes uniformly distributed in the square \(aS\) of side length \(a,\) then

$$\begin{aligned} \mathrm{TSP}(n;a) := \mathrm{TSP}(Y_1,\ldots ,Y_n) = a^{\alpha } \mathrm{TSP}(X_1,\ldots ,X_n), \end{aligned}$$

where \(X_i = \frac{Y_i}{a}, 1 \le i \le n\) are i.i.d. uniformly distributed in \(S.\) Recalling the notation \(\mathrm{TSP}_n = \mathrm{TSP}(X_1,\ldots ,X_n)\) from (1.4), we therefore get

$$\begin{aligned} {\mathbb {E}} \mathrm{TSP}(n;a) = a^{\alpha } {\mathbb {E}}\mathrm{TSP}_n. \end{aligned}$$
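The scaling relation is exact, not just a bound, and is easy to confirm by brute force on a small instance. `ALPHA` and the scale factor `a` below are illustrative choices:

```python
import itertools
import math
import random

ALPHA, a = 1.5, 2.5  # assumed exponent and scale factor for illustration

def tsp(points):
    # Exact TSP over cyclic orders with edge weight d(x, y) ** ALPHA.
    first, rest = points[0], tuple(points[1:])
    return min(
        sum(
            math.dist(cyc[i], cyc[(i + 1) % len(cyc)]) ** ALPHA
            for i in range(len(cyc))
        )
        for perm in itertools.permutations(rest)
        for cyc in [(first,) + perm]
    )

random.seed(4)
pts = [(random.random(), random.random()) for _ in range(6)]
scaled = [(a * x, a * y) for (x, y) in pts]

# Every edge length scales by a, hence every weight by a**ALPHA, so the
# optimal cycle is unchanged and its weight scales by a**ALPHA.
assert math.isclose(tsp(scaled), a**ALPHA * tsp(pts), rel_tol=1e-9)
```

Since scaling multiplies every edge weight by the same factor, the minimizing cycle itself does not change; only its weight does.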

\(\underline{Translation }\): For \(b \in {\mathbb {R}}^2\), consider the graph \(K_n(x_1+b,\ldots ,x_n+b).\) By the translation property \((b2),\) the function \(h\) satisfies \(h(x_1+b,x_2+b) \le h_0 \cdot h(x_1,x_2),\) where \(h(x_1,x_2)\) determines the weight of the edge between \(x_1\) and \(x_2.\) Using the definition of TSP in (1.4), we therefore have

$$\begin{aligned} \mathrm{TSP}(x_1+b,\ldots ,x_n+b) \le h_0^{\alpha } \cdot \mathrm{TSP}(x_1,\ldots ,x_n), \end{aligned}$$

obtaining the desired bound. \(\square \)


About this article


Cite this article

Ganesan, G. Euclidean Travelling Salesman Problem with Location-Dependent and Power-Weighted Edges. J Theor Probab 35, 819–862 (2022).



Keywords

  • Travelling salesman problem
  • Location-dependent edge weights
  • Deviation estimates

Mathematics Subject Classification (2020)

  • 60D05