
Artificial Life and Robotics, Volume 23, Issue 2, pp 235–240

Attribute-based quality classification of academic papers

  • Tetsuya Nakatoh
  • Sachio Hirokawa
  • Toshiro Minami
  • Takeshi Nanri
  • Miho Funamori
Original Article

Abstract

Surveying the relevant literature is essential to research activity. However, it is difficult to select the most appropriate and important academic papers from the enormous number published annually. Researchers typically search paper databases with keyword queries and then choose which papers to read according to some evaluation measure, most often citation count. Citation count, however, measures accumulated importance, so it tends to be very small for recently published papers. This paper examines whether high-quality papers can be classified using only superficial attributes such as publication year, publisher, and the words in the abstract. To test this idea, we construct classifiers with machine-learning algorithms and evaluate them by cross-validation. The results show that our approach finds high-quality papers effectively.
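
The approach described in the abstract (a classifier over superficial paper attributes, evaluated by cross-validation) can be illustrated with a minimal sketch. This is not the authors' actual pipeline: the data file, the column names (year, publisher, abstract, high_quality), and the choice of a linear SVM with TF-IDF text features are assumptions made for the example.

```python
# Minimal sketch of attribute-based quality classification with an SVM,
# evaluated by cross-validation. The dataset, column names, and feature
# choices are hypothetical illustrations, not the paper's exact setup.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import LinearSVC

# Hypothetical input: one row per paper with superficial attributes and a
# binary "high_quality" label (e.g., derived from later citation counts).
papers = pd.read_csv("papers.csv")  # columns: year, publisher, abstract, high_quality
X = papers[["year", "publisher", "abstract"]]
y = papers["high_quality"]

features = ColumnTransformer([
    ("year", OneHotEncoder(handle_unknown="ignore"), ["year"]),
    ("publisher", OneHotEncoder(handle_unknown="ignore"), ["publisher"]),
    ("abstract_words", TfidfVectorizer(), "abstract"),  # words in the abstract
])

clf = Pipeline([
    ("features", features),
    ("svm", LinearSVC()),  # linear SVM classifier
])

# 5-fold cross-validation, mirroring the evaluation described in the abstract.
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("F1 per fold:", scores)
```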

Keywords

Bibliometrics · Academic paper · Feature selection · Machine learning · SVM


Acknowledgements

This work was supported by JSPS KAKENHI Grant Number JP15K00426. The computation was mainly carried out using the computer facilities at the Research Institute for Information Technology, Kyushu University.


Copyright information

© ISAROB 2017

Authors and Affiliations

  • Tetsuya Nakatoh (1)
  • Sachio Hirokawa (1)
  • Toshiro Minami (2)
  • Takeshi Nanri (1)
  • Miho Funamori (3)
  1. Research Institute for Information Technology, Kyushu University, Fukuoka, Japan
  2. Kyushu Institute of Information Sciences, Fukuoka, Japan
  3. National Institute of Informatics, Tokyo, Japan
