Abstract
Screening resumes is among the most critical and time-consuming tasks for human resource (HR) personnel. Online recruitment platforms, along with consultancies, are now so active that a single job opening can attract hundreds of applications, and HR staff spend considerable time shortlisting the best fit for the position. The study presented in this paper helps the HR domain reduce that effort: it shortlists the candidates whose resumes best match the skills required for the job. Because the process is automated, personal favoritism toward a candidate does not influence the hiring decision. The approach builds on Natural Language Processing (NLP), the set of techniques that give computers the ability to understand spoken and written language. Specifically, it uses the Sentence-BERT (SBERT) network, a variant of the Bidirectional Encoder Representations from Transformers (BERT) architecture that employs Siamese and triplet network structures to generate semantically meaningful sentence embeddings. The result is an end-to-end tool for the HR domain that takes hundreds of resumes, together with the skills required for the job, as input and outputs candidates ranked by their fit for the position. In our experiments, SBERT outperformed BERT on this task.
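The ranking step the abstract describes can be sketched as follows: embed the job's required skills and each resume as vectors, then sort resumes by cosine similarity to the job vector. This is a minimal illustration, not the authors' code; in practice the embeddings would come from an SBERT model (e.g. via the sentence-transformers package), while here small toy vectors stand in for them.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_resumes(job_embedding, resume_embeddings):
    """Return resume ids sorted by descending similarity to the job embedding.

    resume_embeddings: dict mapping a resume id to its embedding vector.
    """
    scores = {rid: cosine(job_embedding, emb)
              for rid, emb in resume_embeddings.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: "r1" points in nearly the same direction as the job vector,
# so it should be ranked first.
job = [1.0, 0.0, 0.5]
resumes = {"r1": [0.9, 0.1, 0.4], "r2": [0.0, 1.0, 0.0]}
print(rank_resumes(job, resumes))  # → ['r1', 'r2']
```

With real data, `job` and each resume vector would be produced by a call such as `model.encode(text)` on a pretrained SBERT model; only the similarity-and-sort logic above is specific to the ranking step.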
Copyright information
© 2023 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
James, V., Kulkarni, A., Agarwal, R. (2023). Resume Shortlisting and Ranking with Transformers. In: Nandan Mohanty, S., Garcia Diaz, V., Satish Kumar, G.A.E. (eds) Intelligent Systems and Machine Learning. ICISML 2022. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 471. Springer, Cham. https://doi.org/10.1007/978-3-031-35081-8_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-35080-1
Online ISBN: 978-3-031-35081-8
eBook Packages: Computer Science (R0)