Matching Graduate Applicants with Faculty Members

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10539)

Abstract

Every year, millions of students apply to universities for admission to graduate programs (Master’s and Ph.D.). The applications are individually evaluated and forwarded to appropriate faculty members. Given human subjectivity and processing latency, this is a tedious and time-consuming job that must be repeated every year. In this paper, we propose several information retrieval models aimed at partially or fully automating the task. Applicants are represented by their statements of purpose (SOPs), and faculty members are represented by the papers they have authored. We extract keywords from papers and SOPs using a state-of-the-art keyword extractor, and a detailed exploratory analysis of these keywords yields several insights into the contents of SOPs and papers. We report results for several information retrieval models based on keywords and on bag-of-words content modeling, with the keyword-based models performing significantly better. While we are able to correctly retrieve research areas for a given statement of purpose (F-score of 57.7% at rank 2 and 61.8% at rank 3), matching applicants to individual faculty members is more difficult: when selecting among 73 faculty members, we achieve an F-score of 21% at rank 2 and 24% at rank 3.
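
The matching step described above can be illustrated with a minimal sketch: represent each faculty member by the concatenated text of their papers, represent the applicant by the SOP, and rank faculty by TF-IDF cosine similarity. The snippet below is a hypothetical illustration, not the authors' implementation; the documents, faculty names, and weighting scheme are all assumptions made for the example.

    # Minimal sketch (not the authors' implementation): rank faculty members for an
    # applicant's SOP by cosine similarity over TF-IDF bag-of-words vectors.
    # All documents and names below are hypothetical toy data.
    import math
    import re
    from collections import Counter

    def tokenize(text):
        """Lowercase and split text into word tokens."""
        return re.findall(r"[a-z]+", text.lower())

    def tf_idf_vectors(docs):
        """Build sparse TF-IDF vectors (dicts) for a list of token lists."""
        df = Counter()
        for tokens in docs:
            df.update(set(tokens))
        n = len(docs)
        vectors = []
        for tokens in docs:
            tf = Counter(tokens)
            vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
        return vectors

    def cosine(u, v):
        """Cosine similarity between two sparse vectors stored as dicts."""
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        norm = (math.sqrt(sum(x * x for x in u.values()))
                * math.sqrt(sum(x * x for x in v.values())))
        return dot / norm if norm else 0.0

    # Hypothetical faculty "profiles": each faculty member is represented by the
    # concatenated text of their papers (here, short toy snippets).
    faculty = {
        "Faculty A": "neural machine translation sequence models attention",
        "Faculty B": "information retrieval ranking keyword extraction",
        "Faculty C": "computer vision image segmentation convolutional networks",
    }
    sop = "I want to study keyword extraction and ranking models for search engines."

    docs = [tokenize(t) for t in faculty.values()] + [tokenize(sop)]
    vectors = tf_idf_vectors(docs)
    sop_vec, faculty_vecs = vectors[-1], vectors[:-1]

    # Rank faculty by similarity to the SOP; a top-k cutoff mirrors the
    # rank-2 / rank-3 evaluation reported in the abstract.
    ranking = sorted(
        zip(faculty, (cosine(sop_vec, fv) for fv in faculty_vecs)),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for name, score in ranking[:3]:
        print(f"{name}: {score:.3f}")

In the paper, representations are also built from extracted keywords rather than from all tokens, so in that setting a keyword extractor would replace the simple tokenizer used in this sketch.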

Keywords

Graduate application · Statement of purpose · Keyword extraction · Information retrieval


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Shibamouli Lahiri (1)
  • Carmen Banea (1)
  • Rada Mihalcea (1)

  1. University of Michigan, Ann Arbor, USA
