Volume 77, Issue 1, pp 91–112

New evaluation indexes for articles and authors’ academic achievements based on Open Access Resources

  • Soo-Ryun Cho


In an Open Access (OA) environment, where article-based or author-based evaluation is important, a new evaluation system is needed that accommodates the characteristics of Open Access Resources (OAR) and overcomes the limitations of pre-existing evaluation systems such as journal-based evaluation.

Primary and secondary evaluation factors were selected. The primary factors, hits and citations, constitute the composite index. Several secondary factors were selected for each of article and author evaluation and used to normalize the indexes.

To validate the superiority of the newly developed normalized composite index system over the monovariable index system, time-driven bias and power of discrimination were adopted as evaluation criteria.

The results led to the conclusion that the composite index is a more stable index, with each element offsetting the negative effects of the other, and that normalization makes the composite index more stable still by controlling bias from external elements.
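The construction described above can be illustrated with a minimal sketch. The min-max normalization, the equal weights, and the function names below are illustrative assumptions for exposition only; the paper's actual formulas are not reproduced here.

```python
# Hypothetical sketch: a composite index built from two primary factors
# (hits and citations), each normalized before weighting so that neither
# factor's scale dominates the other. Weights and the min-max scheme are
# illustrative assumptions, not the paper's formulas.

def normalized(values):
    """Min-max normalize a list of raw counts to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(hits, citations, w_hits=0.5, w_cites=0.5):
    """Weighted sum of normalized hits and citations, one score per article."""
    norm_hits = normalized(hits)
    norm_cites = normalized(citations)
    return [w_hits * h + w_cites * c
            for h, c in zip(norm_hits, norm_cites)]

# Example: three articles with raw hit and citation counts.
hits = [120, 450, 300]
cites = [3, 10, 7]
scores = composite_index(hits, cites)
```

Because each factor is rescaled to [0, 1] before the weighted sum, an article weak on one element can be partly offset by strength on the other, which is the stabilizing effect the composite index is claimed to provide.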


Keywords: Academic Achievement Evaluation Index · Composite Index · Evaluation Factor · Open Access Journal





Copyright information

© Springer Science+Business Media B.V. 2008

Authors and Affiliations

  1. International Vaccine Institute Library, Seoul, Korea
  2. Institute for Information Management, Sung Kyun Kwan University, Jongno-gu, Seoul, Korea
