Analysis of Consensus Sorting via the Cycle Metric

  • Ivan Avramovic
  • Dana S. Richards
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11346)


Sorting is studied in this paper as an archetypal example to explore the optimizing power of consensus. In conceptualizing the consensus sort, the classical hill-climbing method of optimization is paired with the modern notion that value and fitness can be judged by data mining. Consensus sorting is a randomized sorting algorithm which repeatedly selects random pairs of elements within an unsorted list (expressed in this paper as a permutation) and decides whether to swap them based on appeals to a database of other permutations. The permutations in the database are all scored via some adaptive sorting metric, and the decision to swap depends on whether the database consensus suggests a better score as a result of swapping. This uninformed search process does not require a definition of the concept of sorting; rather, it depends on selecting a metric which does a good job of distinguishing a good path to the goal, a sorted list. A previous paper showed that the ability of the algorithm to converge on the goal depends strongly on the metric used, and analyzed the performance of the algorithm when the number of inversions was used as the metric. This paper continues by analyzing the performance of a much more efficient metric, the number of cycles in the permutation.
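The cycle metric described above can be sketched in code. The following is a simplified illustration under stated assumptions, not the authors' algorithm: instead of consulting a database of scored permutations for a consensus, it evaluates the cycle metric directly on the candidate swap, and the names `cycle_count` and `metric_guided_sort` are hypothetical. The key facts it relies on are standard: a transposition changes a permutation's cycle count by exactly ±1, and the identity on n elements is the unique permutation with n cycles.

```python
import random

def cycle_count(perm):
    """Number of cycles in a permutation given as a 0-based index list."""
    seen = [False] * len(perm)
    cycles = 0
    for start in range(len(perm)):
        if not seen[start]:
            cycles += 1
            i = start
            while not seen[i]:   # walk this cycle, marking its elements
                seen[i] = True
                i = perm[i]
    return cycles

def metric_guided_sort(perm, max_steps=100_000, rng=random):
    """Randomly pick pairs and keep a swap only if it raises the cycle
    count; accepted swaps therefore move monotonically toward the
    identity (the sorted list), which has n cycles."""
    perm = list(perm)
    n = len(perm)
    for _ in range(max_steps):
        if cycle_count(perm) == n:       # sorted: every element is a fixed point
            break
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        before = cycle_count(perm)
        perm[i], perm[j] = perm[j], perm[i]
        if cycle_count(perm) <= before:  # no improvement: undo the swap
            perm[i], perm[j] = perm[j], perm[i]
    return perm
```

For example, `metric_guided_sort([2, 0, 1, 4, 3])` starts from a permutation with 2 cycles and, with high probability within the step budget, returns the identity `[0, 1, 2, 3, 4]`.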


Keywords: Adaptive sorting · Randomized algorithms · Uninformed search · Combinatorics · Simulation and modeling


References

  1. Avramovic, I., Richards, D.S.: Randomized sorting as a big data search algorithm. In: International Conference on Advances in Big Data Analytics, ABDA 2015, pp. 57–63 (2015)
  2. Estivill-Castro, V., Wood, D.: A survey of adaptive sorting algorithms. ACM Comput. Surv. 24(4), 441–476 (1992)
  3. Flajolet, P., Sedgewick, R.: Analytic combinatorics - symbolic combinatorics. Technical report, Algorithms Project, INRIA Rocquencourt (2002)
  4. Hashem, I.A.T., Yaqoob, I., Anuar, N.B., Mokhtar, S., Gani, A., Khan, S.U.: The rise of “big data” on cloud computing: review and open research issues. Inf. Syst. 47, 98–115 (2015)
  5. Hays, J., Efros, A.A.: IM2GPS: estimating geographic information from a single image. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)
  6. Tian, W., Zhao, Y.: Optimized Cloud Resource Management and Scheduling, Chap. 2. Morgan Kaufmann, Elsevier Science & Technology Books, Amsterdam (2014)
  7. Wang, H., He, X., Chang, M.W., Song, Y., White, R.W., Chu, W.: Personalized ranking model adaptation for web search. In: Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2013, pp. 323–332. ACM, New York (2013)
  8. Yang, D., Zhang, D., Yu, Z., Yu, Z., Zeghlache, D.: SESAME: mining user digital footprints for fine-grained preference-aware social media search. ACM Trans. Internet Technol. 14(4), 28:1–28:24 (2014)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. George Mason University, Fairfax, USA
