Assessing the quality of scientific conferences based on bibliographic citations
Assessing the quality of scientific conferences is an important and useful service that can be provided by digital libraries and similar systems. This is especially true for fields such as Computer Science and Electrical Engineering, where conference publications are crucial. However, the majority of the existing quality metrics, particularly those relying on bibliographic citations, have been proposed for measuring the quality of journals. In this article we study the relative performance of existing journal metrics in assessing the quality of scientific conferences. More importantly, drawing on a thorough analysis of the deficiencies of these metrics, we propose a new set of quality metrics especially designed to capture intrinsic and important aspects of conferences, such as longevity, popularity, prestige, and periodicity. To demonstrate the effectiveness of the proposed metrics, we have conducted two sets of experiments that contrast their results against a "gold standard" produced by a large group of specialists. Our metrics obtained gains of more than 12% when compared to the most consistent journal quality metric and up to 58% when compared to standard metrics such as Thomson's Impact Factor.
Keywords: Bibliometrics · Citation analysis · Ranking · Classification
This research is partially funded by the Brazilian National Institute of Science and Technology for the Web (MCT/CNPq Grant Number 573871/2008-6), by the InfoWeb project (grant number 55.0874/2007-0), and by the authors' individual research grants from CNPq.