Scientometrics, Volume 107, Issue 3, pp 1389–1403

A performance indicator for academic communities based on external publication profiles

  • Thiago H. P. Silva
  • Gustavo Penha
  • Ana Paula Couto da Silva
  • Mirella M. Moro

Abstract

Studying research productivity is a challenging task that is important for understanding how science evolves and crucial for funding agencies and governments. In this context, we propose an approach for quantifying the scientific performance of a community (a group of researchers) based on the similarity between its publication profile and that of a reference community. Unlike most approaches, which rely on citation analysis and thus require access to the content of each publication, we need only the researchers’ publication records. We investigate the similarity between communities and adopt a new metric named Volume Intensity, with the goal of measuring the internationality degree of a community. Our experimental results, using Computer Science graduate programs and covering both real and random scenarios, show that publication profiles can serve as a performance indicator.
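
The abstract does not spell out how a publication profile is represented or how the similarity (or Volume Intensity) is computed; those details are in the full paper. As a rough illustration of the general idea only, the sketch below represents a community’s profile as per-venue publication counts and compares two profiles with cosine similarity. The profile representation, the choice of cosine similarity, and all names and data are illustrative assumptions, not the authors’ actual method.

```python
from collections import Counter
from math import sqrt

def profile(publications):
    """Publication profile of a community: venue -> number of papers.
    (Assumed representation; the paper may define profiles differently.)"""
    return Counter(venue for _, venue in publications)

def similarity(p, q):
    """Cosine similarity between two venue-count profiles, in [0, 1].
    (A stand-in for the paper's similarity / Volume Intensity metric.)"""
    dot = sum(p[v] * q[v] for v in set(p) & set(q))
    norm = sqrt(sum(c * c for c in p.values())) * sqrt(sum(c * c for c in q.values()))
    return dot / norm if norm else 0.0

# Toy data: (researcher, venue) records for a community under evaluation
# and for a hypothetical international reference community.
community = [("a", "SIGMOD"), ("b", "VLDB"), ("a", "WWW")]
reference = [("c", "SIGMOD"), ("d", "VLDB"), ("e", "VLDB")]

print(similarity(profile(community), profile(reference)))  # ~0.77
```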

Keywords

Bibliometry · Data similarity · Analysis

Acknowledgments

This work was funded by the authors’ individual grants from CNPq and FAPEMIG.


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2016

Authors and Affiliations

  • Thiago H. P. Silva (1)
  • Gustavo Penha (1)
  • Ana Paula Couto da Silva (1)
  • Mirella M. Moro (1)

  1. Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
