Smoothed Analysis of the Squared Euclidean Maximum-Cut Problem

  • Michael Etscheid
  • Heiko Röglin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9294)


It is well known that local search heuristics for the Maximum-Cut problem can take an exponential number of steps to find a local optimum, even though they usually stabilize quickly in experiments. To explain this discrepancy, we have recently analyzed the simple local search algorithm FLIP in the framework of smoothed analysis, in which inputs are subject to a small amount of random noise. We have shown that in this framework the number of iterations is quasi-polynomial, i.e., it is polynomially bounded in n^(log n) and φ, where n denotes the number of nodes and φ is a parameter of the perturbation.
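The FLIP algorithm analyzed here is the standard single-node local search for Max-Cut: start from an arbitrary cut and repeatedly move any node to the other side whenever the move increases the cut weight. A minimal sketch (the function name `flip_local_search` and the adjacency-matrix representation are illustrative choices, not from the paper):

```python
import random

def flip_local_search(w, n, seed=0):
    """FLIP local search for Max-Cut.

    w: symmetric n x n matrix of edge weights (w[i][j] = weight of edge {i, j}).
    Starts from a random cut and repeatedly flips any node whose move
    strictly increases the cut weight, until no improving flip exists.
    Returns the final cut (side[i] in {0, 1}) and the number of flips made.
    """
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    flips = 0
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Gain of flipping v: edges to v's own side become cut,
            # edges currently across the cut become uncut.
            gain = sum(w[v][u] * (1 if side[u] == side[v] else -1)
                       for u in range(n) if u != v)
            if gain > 0:
                side[v] = 1 - side[v]
                flips += 1
                improved = True
    return side, flips
```

Each flip strictly increases the cut weight, so the procedure terminates; the question studied in the paper is how many flips can occur before it does.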

In this paper we consider the special case in which the nodes are points in d-dimensional space and the edge weights are given by the squared Euclidean distances between these points. We prove that in this case, for any constant dimension d, the smoothed number of iterations of FLIP is polynomially bounded in n and 1/σ, where σ denotes the standard deviation of the Gaussian noise. Squared Euclidean distances are often used in clustering problems, and our result can also be seen as an upper bound on the smoothed number of iterations of local search for min-sum 2-clustering.
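The connection to min-sum 2-clustering rests on a simple identity: with squared Euclidean edge weights, the sum over all point pairs is fixed by the input, so maximizing the weight of the cut is the same as minimizing the total weight of pairs inside the two clusters. A small sketch of this bookkeeping (the helper names `squared_euclidean_weights` and `cut_and_cluster_costs` are our own):

```python
def squared_euclidean_weights(points):
    """Edge weight of {i, j} = squared Euclidean distance between the points."""
    n = len(points)
    return [[sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
             for j in range(n)] for i in range(n)]

def cut_and_cluster_costs(points, side):
    """Return (cut weight, min-sum 2-clustering cost) for a 2-partition.

    side[i] in {0, 1} assigns point i to a cluster. Because the total
    pairwise weight is constant, cut + within-cluster cost is invariant:
    maximizing the cut minimizes the clustering cost.
    """
    w = squared_euclidean_weights(points)
    n = len(points)
    total = sum(w[i][j] for i in range(n) for j in range(i + 1, n))
    cut = sum(w[i][j] for i in range(n) for j in range(i + 1, n)
              if side[i] != side[j])
    within = total - cut  # sum of squared distances within the two clusters
    return cut, within
```

This is why a bound on the smoothed number of FLIP iterations for squared Euclidean Max-Cut transfers directly to local search for min-sum 2-clustering: the two local search processes traverse the same sequence of partitions.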






References

  1. Arthur, D., Manthey, B., Röglin, H.: Smoothed analysis of the k-means method. JACM 58(5), 19 (2011)
  2. Arthur, D., Vassilvitskii, S.: Worst-case and smoothed analysis of the ICP algorithm. SICOMP 39(2), 766–782 (2009)
  3. Elsässer, R., Tscheuschner, T.: Settling the complexity of local max-cut (almost) completely. In: Aceto, L., Henzinger, M., Sgall, J. (eds.) ICALP 2011, Part I. LNCS, vol. 6755, pp. 171–182. Springer, Heidelberg (2011)
  4. Englert, M., Röglin, H., Vöcking, B.: Worst case and probabilistic analysis of the 2-Opt algorithm for the TSP. Algorithmica 68(1), 190–264 (2014)
  5. Etscheid, M., Röglin, H.: Smoothed analysis of local search for the maximum-cut problem. In: Proc. of the 25th SODA, pp. 882–889 (2014)
  6. Kanungo, T., Mount, D., Netanyahu, N., Piatko, C., Silverman, R., Wu, A.: A local search approximation algorithm for k-means clustering. Comput. Geom. 28, 89–112 (2004)
  7. Kleinberg, J., Tardos, É.: Algorithm Design. Addison-Wesley (2006)
  8. Manthey, B., Röglin, H.: Smoothed analysis: Analysis of algorithms beyond worst case. IT - Information Technology 53(6), 280–286 (2011)
  9. Manthey, B., Veenstra, R.: Smoothed analysis of the 2-opt heuristic for the TSP: Polynomial bounds for Gaussian noise. In: Cai, L., Cheng, S.-W., Lam, T.-W. (eds.) Algorithms and Computation. LNCS, vol. 8283, pp. 579–589. Springer, Heidelberg (2013)
  10. Sankar, A., Spielman, D., Teng, S.-H.: Smoothed analysis of the condition numbers and growth factors of matrices. SIMAX 28(2), 446–476 (2006)
  11. Schäffer, A., Yannakakis, M.: Simple local search problems that are hard to solve. SICOMP 20(1), 56–87 (1991)
  12. Schulman, L.: Clustering for edge-cost minimization. In: Proc. of the 32nd STOC, pp. 547–555 (2000)
  13. Spielman, D., Teng, S.-H.: Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. JACM 51(3), 385–463 (2004)
  14. Spielman, D., Teng, S.-H.: Smoothed analysis: An attempt to explain the behavior of algorithms in practice. CACM 52(10), 76–84 (2009)
  15. Telgarsky, M., Vattani, A.: Hartigan’s method: k-means clustering without Voronoi. In: Proc. of the 13th AISTATS, pp. 820–827 (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Department of Computer Science, University of Bonn, Bonn, Germany
