Scientometrics, Volume 96, Issue 2, pp 651–665

Reverse-engineering conference rankings: what does it take to make a reputable conference?

  • Peep Küngas
  • Siim Karus
  • Svitlana Vakulenko
  • Marlon Dumas
  • Cristhian Parra
  • Fabio Casati

Abstract

In recent years, several national and community-driven conference rankings have been compiled. These rankings are often taken as indicators of reputation and used for a variety of purposes, such as evaluating the performance of academic institutions and individual scientists, or selecting target conferences for paper submissions. Current rankings are based on a combination of objective criteria and subjective opinions that are collated and reviewed through largely manual processes. In this setting, the aim of this paper is to shed light on the following question: to what extent do existing conference rankings reflect objective criteria, specifically submission and acceptance statistics and bibliometric indicators? The paper considers three conference rankings in the field of Computer Science: an Australian national ranking, a Brazilian national ranking and an informal community-built ranking. In all three cases, bibliometric indicators turn out to be the most important determinants of rank. It is also found that in all rankings, top-tier conferences can be identified with relatively high accuracy from acceptance rates and bibliometric indicators, whereas these same criteria fail to discriminate between mid-tier and bottom-tier conferences.
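As an illustration of the kind of analysis the abstract describes, the sketch below trains a simple, interpretable classifier to separate top-tier conferences from the rest using acceptance-rate and citation-based features. This is a minimal sketch only: the dataset, file name, feature columns and model choice are assumptions made for illustration, not the paper's actual data or method.

```python
# Illustrative sketch: predicting whether a conference is "top tier"
# from acceptance-rate and bibliometric features. The input file,
# column names and classifier below are hypothetical; they merely
# mirror the kind of analysis described in the abstract.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical input: one row per conference, with its ranking tier
# ("A", "B", "C") and objective indicators.
df = pd.read_csv("conference_indicators.csv")  # hypothetical file

X = df[["acceptance_rate", "mean_citations_per_paper", "h5_index"]]
y = (df["tier"] == "A").astype(int)  # 1 = top tier, 0 = otherwise

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# A shallow decision tree keeps the learned rules inspectable, which is
# useful when the goal is to "reverse-engineer" a ranking rather than
# to maximise predictive accuracy.
clf = DecisionTreeClassifier(max_depth=3, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

In such a setup, high precision and recall for the top-tier class alongside poor separation of the remaining tiers would correspond to the pattern the abstract reports.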

Keywords

Conference rankings · Computer science · Bibliometrics · Conference acceptance rate · Publication counts · Citation counts · Objective criteria

Mathematics Subject Classification (2000)

68P99 · 62-07 · 62P25


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2013

Authors and Affiliations

  • Peep Küngas (1)
  • Siim Karus (1)
  • Svitlana Vakulenko (1)
  • Marlon Dumas (1)
  • Cristhian Parra (2)
  • Fabio Casati (2)

  1. Institute of Computer Science, University of Tartu, Tartu, Estonia
  2. Department of Information Engineering and Computer Science, University of Trento, Povo, Italy
