Reverse-engineering conference rankings: what does it take to make a reputable conference?
In recent years, several national and community-driven conference rankings have been compiled. These rankings are often taken as indicators of reputation and used for a variety of purposes, such as evaluating the performance of academic institutions and individual scientists, or selecting target conferences for paper submissions. Current rankings are based on a combination of objective criteria and subjective opinions that are collated and reviewed through largely manual processes. In this setting, the aim of this paper is to shed light on the following question: to what extent do existing conference rankings reflect objective criteria, specifically submission and acceptance statistics and bibliometric indicators? The paper considers three conference rankings in the field of Computer Science: an Australian national ranking, a Brazilian national ranking, and an informal community-built ranking. It is found that in all cases bibliometric indicators are the most important determinants of rank. It is also found that in all rankings, top-tier conferences can be identified with relatively high accuracy from acceptance rates and bibliometric indicators, whereas these same criteria fail to discriminate between mid-tier and bottom-tier conferences.
Keywords: Conference rankings · Computer science · Bibliometrics · Conference acceptance rate · Publication counts · Citation counts · Objective criteria
Mathematics Subject Classification (2000): 68P99 · 62-07 · 62P25
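The tier-prediction task described in the abstract can be pictured with a small sketch. The following is a minimal, hypothetical illustration, not the authors' actual method or data: assuming scikit-learn is available, it fits a CART-style decision tree that predicts a conference's tier from two objective features, acceptance rate and mean citations per paper. All feature values and tier labels below are invented for illustration.

```python
# Illustrative sketch only: classifying conference tiers from objective
# criteria with a decision tree. The dataset is hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical features: [acceptance_rate, mean_citations_per_paper]
X = np.array([
    [0.15, 25.0],  # selective and highly cited -> top tier ("A")
    [0.18, 30.0],
    [0.35, 8.0],   # mid tier ("B")
    [0.40, 6.0],
    [0.55, 2.0],   # bottom tier ("C")
    [0.60, 1.5],
])
y = np.array(["A", "A", "B", "B", "C", "C"])

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
scores = cross_val_score(clf, X, y, cv=2)  # toy sample; real data needed for meaningful scores
print("cross-validated accuracy:", scores.mean())
```

On a realistic dataset, the paper's finding would correspond to such a classifier separating top-tier conferences well while confusing mid-tier and bottom-tier ones.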
The authors thank Luciano García-Bañuelos, Marju Valge, Svetlana Vorotnikova and Karina Kisselite for their input during the initial phase of this work. This work was partially funded by the EU FP7 project LiquidPublication (FET-Open grant number 213360).