Scientometrics, Volume 83, Issue 1, pp 133–155

Assessing the quality of scientific conferences based on bibliographic citations

  • Waister Silva Martins
  • Marcos André Gonçalves
  • Alberto H. F. Laender
  • Nivio Ziviani

Abstract

Assessing the quality of scientific conferences is an important and useful service that digital libraries and similar systems can provide. This is especially true for fields such as Computer Science and Electrical Engineering, where conference publications are crucial. However, most existing quality metrics, particularly those relying on bibliographic citations, were designed to measure the quality of journals. In this article, we study how well existing journal metrics perform when used to assess the quality of scientific conferences. More importantly, building on a detailed analysis of the deficiencies of these metrics, we propose a new set of quality metrics specifically designed to capture intrinsic and important aspects of conferences, such as longevity, popularity, prestige, and periodicity. To demonstrate the effectiveness of the proposed metrics, we conducted two sets of experiments that contrast their results against a “gold standard” produced by a large group of specialists. Our metrics obtained gains of more than 12% over the most consistent journal quality metric and of up to 58% over standard metrics such as Thomson’s Impact Factor.
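
The abstract names four conference-specific aspects (longevity, popularity, prestige, and periodicity) without defining them formally. The sketch below is one plausible, hypothetical reading of the first three, assuming a venue-level citation graph as input; the paper’s actual definitions may differ. Prestige is rendered here as a PageRank-style score (cf. Brin & Page, 1998), so citations from well-cited venues weigh more than citations from obscure ones.

```python
# Hypothetical interpretations of three conference-quality aspects named in
# the abstract. Input conventions (assumed, not from the paper):
#   editions_by_conf: {venue: [years in which an edition was held]}
#   citations: list of (citing_venue, cited_venue) pairs
from collections import defaultdict

def longevity(editions_by_conf):
    """Longevity: how many distinct years a venue has been held."""
    return {conf: len(set(years)) for conf, years in editions_by_conf.items()}

def popularity(citations):
    """Popularity: raw count of citations each venue receives."""
    counts = defaultdict(int)
    for _citing, cited in citations:
        counts[cited] += 1
    return dict(counts)

def prestige(citations, damping=0.85, iterations=50):
    """Prestige: a PageRank-style score over the venue citation graph,
    so a citation from a highly cited venue counts more."""
    venues = {v for edge in citations for v in edge}
    out_degree = defaultdict(int)
    incoming = defaultdict(list)
    for citing, cited in citations:
        out_degree[citing] += 1
        incoming[cited].append(citing)
    rank = {v: 1.0 / len(venues) for v in venues}
    for _ in range(iterations):
        # Synchronous update: each venue redistributes its score
        # evenly across the venues it cites.
        rank = {
            v: (1 - damping) / len(venues)
               + damping * sum(rank[u] / out_degree[u] for u in incoming[v])
            for v in venues
        }
    return rank

# Toy usage with three hypothetical venues.
editions = {"ConfA": [2005, 2006, 2007], "ConfB": [2007], "ConfC": [2006, 2007]}
cites = [("ConfB", "ConfA"), ("ConfC", "ConfA"), ("ConfA", "ConfC")]
print(longevity(editions))   # {'ConfA': 3, 'ConfB': 1, 'ConfC': 2}
print(popularity(cites))     # {'ConfA': 2, 'ConfC': 1}
print(prestige(cites))
```

To reproduce the kind of evaluation the abstract describes, the rankings induced by such scores could be compared against a specialist-built gold standard with a rank correlation measure such as Kendall’s τ (Kendall, 1938).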

Keywords

Bibliometrics · Citation analysis · Ranking · Classification

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2009

Authors and Affiliations

  • Waister Silva Martins (1)
  • Marcos André Gonçalves (1)
  • Alberto H. F. Laender (1)
  • Nivio Ziviani (1)

  1. Computer Science Department, Federal University of Minas Gerais, Belo Horizonte, Brazil
