1 Introduction

A recommendation or recommender system is a type of information filtering system that employs data mining and analytics of user behaviors, including preferences and activities, to filter relevant information from a large information source. In the era of big data, recommendation systems have become important applications in our daily lives by recommending music, videos, movies, books, news, etc. In academia, there has been a substantial increase in the amount of information (literature, collaborators, conferences, datasets, and more) available online, and it has become increasingly taxing for researchers to stay up to date with relevant information. Several academic recommendation tools and search engines (Google Scholar, ResearchGate, Semantic Scholar, and others) are available to recommend relevant publications, collaborators, funding opportunities, etc. Recommendation systems are evolving rapidly. The first scholarly recommender system targeted literature, recommending publications using content-based similarity methods [1]. Today, several recommendation systems are available to researchers and are widely used across different scholarly areas.

1.1 Motivation and research questions

In this article, we focus on the different scholarly recommenders used to improve the quality of research. To the best of our knowledge, no currently available article covers all scholarly recommendation systems together. Previous surveys on recommendation systems were conducted separately for each type of system, and most of these studies concerned literature or collaborator recommendation [2]. There is no comprehensive review describing the different types of scholarly recommendation systems, particularly for academic use.

Therefore, it is necessary to provide a survey as a guide and reference for researchers interested in this area; a systematic review of scholarly recommendation systems serves this purpose. It explores research achievements in scholarly recommendation, provides researchers with an overall view of systems for allocating academic resources, and identifies opportunities for improvement.

This article describes the different scholarly recommendation systems that researchers use in their daily activities. We take a closer look at the methodologies used to develop such systems. The research questions of our study are as follows:

  • RQ1 What different problems are addressed by scholarly recommendation systems?

  • RQ2 What datasets or repositories were used for developing these recommendation systems?

  • RQ3 What types of methodologies were implemented in these recommendation systems?

  • RQ4 What further research can be performed to overcome the drawbacks of the current research and develop new recommenders to enhance the field of scholarly recommendation?

To answer our first research question, we collected over 500 publications on scholarly recommenders from the ACM Digital Library, DBLP, IEEE Xplore, and Scopus. Literature and collaborator recommendation systems are the most studied recommenders in the literature, with many publications each. Websites for searching publications host literature recommendation as a key function, and almost all of them are free for researchers. However, few collaborator recommendation systems have been implemented online, and those that exist are not free for all users. One reason is the large amount of personal information and preferences these recommenders require.

Furthermore, we studied journal and conference recommendation systems for publishing papers and articles. Although many publishing houses have implemented their own online journal recommender systems, conference recommender systems are not available online. Next, we studied reviewer recommendation problems, in which reviewers are recommended for conferences, journals, and grants. Finally, we identified dataset and grant recommendation systems, which are the least studied scholarly recommendation systems. Figure 1 shows all currently available scholarly recommenders.

Fig. 1 Scholarly recommenders studied in this article

1.2 Materials and methods

An initial literature survey was conducted to identify keywords related to individual recommendation systems that can be used to search for relevant publications. A total of 26 keywords were identified to search for relevant publications (see Supplementary 17).

At the end of the full-text review process, 225 publications were included in this study. The number of publications on individual recommendation systems is shown in Fig. 2. To be eligible for the review, a publication had to describe, evaluate, or use natural language processing algorithms. During the full-text review process, we excluded studies that were not peer-reviewed, such as abstracts and commentary, perspective, or opinion pieces. Finally, we performed data extraction and analysis on the 225 articles and summarize their data, methodology, evaluation metrics, and detailed categorization in the following sections. The PRISMA flowchart for our publication collection, with example search keywords, is shown in Fig. 3.

Fig. 2 Number of papers/articles collected for studying different recommenders

Fig. 3 PRISMA flowchart for including publications in scholarly recommendation

The remainder of this paper is organized as follows. Section 2 describes different literature recommendation systems based on their methodologies and corresponding datasets. Section 3 describes different approaches for developing collaborator recommendation systems. Section 4 reviews journal and conference venue recommendation systems. Section 5 describes reviewer recommendation systems. In Sect. 6, we review the remaining scholarly recommendation systems available in the literature, such as dataset and grant recommendation systems. Finally, Sect. 7 discusses future work and concludes the article.

2 Literature recommendation

Literature recommendation is one of the most well-studied scholarly recommendation problems, with numerous research articles published in the past decade. Recommender systems for scholarly literature have been widely used by researchers to locate papers, keep up with their research fields, and find relevant citations for drafts. To summarize the literature recommendation systems, we collected 82 publications on scholarly paper and citation recommendation.

The first research paper recommendation system was introduced as part of the CiteSeer project [1]. In total, 11 of the 82 publications (approximately 13%) used applications or methodologies based on citation recommendation. As one of the largest subsets of scholarly literature recommendation, citation recommendation aims to suggest citations to researchers while they author a paper and look for work related to their ideas, based on the content of the researchers' own work. Among the 11 citation recommender papers, content-based filtering (CBF) methodologies were widely applied to fragments of the citation context, and some papers applied collaborative filtering (CF) to develop potential citation recommendation systems based on users' research interests and citation networks [3].

2.1 Data

In this section, we describe the datasets used to develop literature recommendation systems. A total of 75 reviewed publications evaluated their methodologies using different datasets. The authors of 45 publications chose to construct their own datasets based on manually collected information or on rarely used paid datasets. Several open-source published datasets are commonly used to develop literature recommenders.

Owing to the rapid development of modern websites for literature search, datasets for literature recommendation are readily available. There were 28 publications that used public databases for testing and evaluating their methods. The sources of these datasets are listed in Table 1. These websites collect publications from several scientific publishers and index them with their references and keywords. Using information extracted from these public resources, researchers created datasets to run their recommendation methodologies and obtain the ground truth for offline evaluation.

Table 1 Sources of datasets used for literature recommendation approaches

DBLP was used in 12 reviewed publications and ACM in 11 to construct datasets for evaluation. DBLP hosts more than 5.2 million publications and obtains its database entries through a limited number of volunteers who manually enter the tables of contents of journals and conference proceedings. The CiteSeer dataset was used in 9 reviewed publications to conduct offline evaluations. It currently contains over 6 million publications and continuously crawls the web for new content, using user submissions, conferences, and journals as data entries. Petricek et al. [4] showed that CiteSeer's autonomous acquisition through web crawling introduces a significant bias against papers with few authors. Among the reviewed papers, most researchers constructed their own evaluation datasets by combining information from multiple databases; these self-constructed datasets were used to avoid the bias that results from relying on a single source.

The CiteULike dataset was used in 7 reviewed publications. CiteULike is a web service that contains social tags added to research articles by users. The dataset was not originally intended for literature recommendation system research, but is still frequently used for this purpose.

2.2 Methods

Three main approaches were used to develop literature recommenders: CBF (N = 37 papers), CF (N = 16 papers), and hybrid (N = 29 papers). Next, we introduce the most promising and popular approaches used in each recommendation class and provide an overview of the most important aspects and techniques used for literature recommendation.

2.2.1 Content-based filtering (CBF)

CBF is one of the most popular methods for recommending literature and is used in 37 of the 82 publications. Based on a user-item model that treats textual content as 'items,' CBF usually applies topic-based methods to measure the similarity between the topics a user is interested in and the topics of candidate publications. These methods perform well in terms of topic and content matching. A summary of the CBF approaches used for literature recommendation can be found in Table 2.

Table 2 Overview of literature recommendation systems using CBF

CBF recommenders use keywords or topics as key features because they describe a publication. A content-based user profile is usually built from the user's preference model and the user's interaction log with the recommendation system, represented as a weighted vector of item features. For example, Hong et al. [9] constructed a paper recommendation methodology based on a user profile built with extracted keywords and calculated the cosine similarity between a given topic and collected papers to recommend initial publications for each topic.

Most of the reviewed publications used the term frequency-inverse document frequency (TF-IDF) representation to evaluate the similarity between text objects. TF-IDF down-weights high-frequency words when determining the importance of a term within an item. Magara et al. [38] constructed methodologies for recommending serendipitous research papers from two large, normally mismatched information spaces or domains using Bisociative Information Networks (BisoNets), with TF-IDF measures for weighting and filtering terms. Lofty et al. [11] combined TF-IDF with a cosine similarity measure to construct an ontology-based paper recommendation methodology. To improve relevance and serendipity, Sugiyama and Kan [25] also constructed feature vectors using the TF-IDF measure and user profiles based on the Co-Author Network (CAN), computed cosine similarity, and recommended the papers with the highest similarity.
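To make the shared core of these approaches concrete, the sketch below builds TF-IDF vectors for a toy corpus and ranks papers by cosine similarity against a user profile; the corpus, profile text, and variable names are illustrative assumptions rather than any cited system's implementation.

```python
# Minimal sketch of a TF-IDF/cosine-similarity paper recommender, the
# core pipeline shared by many of the CBF systems reviewed here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [                                   # toy candidate corpus
    "graph neural networks for citation recommendation",
    "collaborative filtering with matrix factorization",
    "topic models for scholarly document similarity",
]
user_profile = "recommending citations with graph-based neural models"

vectorizer = TfidfVectorizer(stop_words="english")
paper_vecs = vectorizer.fit_transform(papers)     # item feature vectors
query_vec = vectorizer.transform([user_profile])  # user profile vector

scores = cosine_similarity(query_vec, paper_vecs).ravel()
for idx in scores.argsort()[::-1]:                # most similar first
    print(f"{scores[idx]:.3f}  {papers[idx]}")
```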

In summary, researchers claim that content-based recommender systems build an independent profile for each user, so the most suitable recommendation can be made for each individual. Moreover, because user models are generated automatically, recommendation systems using CBF spend less time and computation on up-front classification.

CBF also has notable limitations, and the improvements made in the papers we collected were mostly aimed at overcoming them. CBF requires considerable computation and resources to analyze the features of each item and to build each user model individually. For example, to mark passages for citation recommendation, users are typically required to provide a representative bibliography. By examining the relevance between segments in a query manuscript and representative segments extracted from a document corpus, He et al. [36] formulated a dependency feature model based on a language model, contextual similarity, and topic relevance to produce a citation recommendation approach without author supervision. Neethukrishnan et al. [8] proposed a paper recommender methodology using an SVM classifier to find users' personal ontology similarity and specify the conceptualization. Nasciment et al. [35] also proposed a novel source-independent framework for research paper recommendation to reduce the resources required. Their framework takes only a single research paper as input, generates several weighted candidate queries from the terms in that paper, and then applies a cosine similarity metric to rank the candidates and recommend those most related to the input paper.

In addition, traditional CBF methods cannot take the popularity and ratings of items into account; that is, it is difficult to differentiate between two research papers that have similar terms in the user model. To overcome this limitation, Ollagnier et al. [21] formulated a centrality indicator for their software, which is dedicated to the analysis of bibliographical references extracted from scientific paper collections; this approach determines the impact and inner representativeness of each bibliographical reference according to its occurrences. Pera and Ng [30] adopted CombMNZ, a linear combination strategy that merges similarity degree and popularity score into a joint ranking, to build a paper recommender system that considers both the content similarity and the popularity of a paper among users. Liu et al. [23] constructed a publication ranking approach with pseudo relevance feedback (PRF) by leveraging a number of meta-paths on a heterogeneous bibliographic graph.

2.2.2 Collaborative filtering

We collected 16 studies that used the collaborative filtering (CF) method. CF methods find users whose past ratings resemble those of the target user and then recommend items favored by these similar users. Such methods are well suited to broadening the range of recommended items. A summary of literature recommendation papers using CF methods is presented in Table 3.

Table 3 Overview of literature recommendation system using collaborative filtering

Common methodologies using collaborative filtering algorithms can be categorized into two groups: model-based and memory-based. The main difference between the two is that the model-based approach uses a matrix factorization-based algorithm, in which user preferences are calculated through embedded latent factors, whereas the memory-based approach calculates user preferences for items through arithmetic operations (correlation coefficients or cosine similarity). Memory-based CF approaches are widely used in scholarly literature recommendation systems and include several techniques, such as k-nearest neighbors (kNN), Latent Semantic Indexing (LSI), and Singular Value Decomposition (SVD). Pan and Li [48] used the LDA (Latent Dirichlet Allocation) model to construct a paper recommendation system with a thematic similarity measurement, transforming a topic-based recommendation into a modified version of the item-based recommendation approach. Ha et al. [46] proposed a novel method using SVD for matrix factorization and rating prediction to recommend newly published papers that have not yet been cited, by predicting the interests of the target researchers.
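As a minimal illustration of the model-based flavor, the sketch below factorizes a toy researcher-by-paper interaction matrix with a truncated SVD and recommends the unseen paper with the highest reconstructed score; the matrix and rank are invented for illustration and do not reproduce any cited system.

```python
# Toy sketch of model-based CF via truncated SVD, in the spirit of
# matrix-factorization approaches such as the one in Ha et al. [46].
import numpy as np

R = np.array([[1, 0, 1, 0, 0],      # rows: researchers
              [1, 1, 0, 0, 0],      # cols: papers
              [0, 1, 0, 1, 0],      # 1 = interacted (e.g., cited/saved)
              [0, 0, 1, 1, 1]], dtype=float)

k = 2                                          # number of latent factors
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # low-rank reconstruction

# Recommend, for researcher 0, the unseen paper with the highest
# predicted score in the reconstructed matrix.
unseen = np.where(R[0] == 0)[0]
best = unseen[np.argmax(R_hat[0, unseen])]
print(f"recommend paper {best} (predicted score {R_hat[0, best]:.2f})")
```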

Compared with CBF, methods and applications based on CF show the following advantages. First, because CF approaches are independent of content, resource costs for error-prone item processing are reduced. In addition, popularity and quality assessments, often considered limitations of CBF, can be achieved easily with CF. Sugiyama and Kan [43] used the PageRank approach to rank the popularity factor and measure the importance of research papers, enhancing the user profile derived directly from a researcher's past works with information from referenced papers as well as papers that cite the work. CF approaches are also used for serendipitous recommendations because they are usually based on user similarity rather than item similarity. Tang and McCalla [44] constructed user profiles via a co-author network to build a serendipitous paper recommendation system based on a scholarly social network.

The limitations of CF are also shown in the reviewed papers. To make precise recommendations, a CF system requires a large volume of existing data before it can start recommending; this is known as the cold-start problem. Loh et al. [55] used scientific papers written by users to compose user profiles representing user interests or expertise, in order to alleviate the cold-start problem in the recommender system. Data sparsity is another problem: even active users rate only a small subset of the papers in a dataset. Keshavarz and Honarvar [47] presented an approach for paper recommendation based on locality-sensitive hashing, converting the citations of papers to signatures and comparing these signatures to detect papers with similar citations. Sugiyama and Kan [3] also applied CF to discover potential citation papers that help represent the target papers to recommend, in order to alleviate sparsity. The authors also attempted to improve the scalability of their approaches to reduce the computation and resources required for recommendation.

2.2.3 Hybrid

The previously introduced recommendation approaches may also be combined into hybrid approaches. We reviewed 29 studies that applied hybrid recommendation approaches. Table 4 summarizes the collected papers in which literature recommendation was developed using hybrid approaches.

Table 4 Overview of literature recommendation system using hybrid method

As combinations of CBF and CF, hybrid recommendation approaches can be categorized into four main groups. The first group implements CBF and CF methods separately and then combines their recommendation results. Liu et al. [70] constructed a citation recommendation method that employed an association mining technique to obtain a representation of each citing paper from the citation context; these paper representations were then compared pairwise to compute similarities between the cited papers for CF. Zarrinkalam and Kahani [62] used multiple linked data sources to create a rich background data layer and combined multiple-criteria CF and CBF to develop a citation recommender. Zhang et al. [65] constructed a paper recommendation method based on semantic concept similarity computed from collaborative tags.
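A minimal sketch of this first group is shown below: the two filters are run independently, their scores are normalized to a common scale, and a weighted sum produces the final ranking. The scores and the mixing weight alpha are illustrative assumptions, not values from any cited paper.

```python
# Sketch of the first hybrid group: run CBF and CF independently,
# normalize their scores, and merge with a weighted linear combination.
import numpy as np

def minmax(x):
    """Rescale scores to [0, 1] so the two signals are comparable."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span else np.zeros_like(x)

cbf_scores = [0.82, 0.10, 0.55, 0.31]   # content similarity per candidate
cf_scores  = [3, 12, 7, 1]              # e.g., co-citation counts per candidate

alpha = 0.6                             # weight on the content-based signal
hybrid = alpha * minmax(cbf_scores) + (1 - alpha) * minmax(cf_scores)
print(hybrid.argsort()[::-1])           # candidate indices, best first
```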

The second and third groups incorporate CBF characteristics into a CF method or incorporate some CF characteristics into a CBF method. West et al. [63] formulated a citation-based method for making scholarly recommendations. The method uses a hierarchical structure of scientific knowledge, making possible multiple scales of relevance for different users. Nart et al. [82] built a method that simplifies CF paper recommendations by extracting concepts from papers to generate and explain the recommendations. Zhou et al. [57] used the concepts and methods of community partitioning and introduced a model to recommend authoritative papers based on the specific community. Magalhaes et al. [67] constructed a user paper-based recommendation approach by considering the user’s academic curriculum vitae.

The fourth group constructs a general unifying model that incorporates both content-based and collaborative characteristics. Meng et al. [58] built a unified graph model with multiple types of information (e.g., content, authorship, citation, and collaboration networks) for efficient recommendation. Pohl et al. [64] treated access data as a bipartite graph of users and documents, analogous to item-to-item recommendation systems, to build a paper recommender method using digital access records (e.g., HTTP server logs) as indicators. Gipp et al. [41] developed a paper recommender system that combined keyword-based search with citation analysis, author analysis, source analysis, implicit ratings, explicit ratings, and innovative, as yet unused methods such as the 'Distance Similarity Index' (DSI) and the 'In-text Impact Factor' (ItIF).

2.3 Evaluation

The evaluation metrics for different recommendation methods vary, making them difficult to compare. To compare the performance of these approaches objectively, the 75 publications relied on two main evaluation criteria.

First, accuracy is the most widely used criterion for evaluating a recommendation system: the capability to recommend the most relevant items based on the given information. Among the reviewed papers, many offline evaluation metrics were applied to measure accuracy. The second criterion is the recommendation system's ability to satisfy users, for example, by considering serendipity and user requirements rather than accuracy alone. Some of the reviewed papers designed questionnaires to collect user feedback or applied their methods in real-world systems to evaluate user satisfaction. To quantify and compare the accuracy and user satisfaction of recommendation systems, evaluation methods can be divided into two groups: online and offline.

2.3.1 Online evaluation

A total of 17 publications evaluated their methods through a user study or a real-world system, that is, an online evaluation. They created a rating scheme for users to rate the recommendation results, and these manual ratings were then used to analyze and judge a method. In addition, 6 of the 17 publications deployed their recommendation methods in real-world systems and collected user feedback for evaluation. Rather than relying on manually rated results, online evaluation is typically based on users' acceptance. Acceptance is commonly measured by the click-through rate (CTR), that is, the ratio of recommendations clicked by users.
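In its simplest form, CTR is computed as

\[ \mathrm{CTR} = \frac{\text{number of recommendations clicked}}{\text{number of recommendations shown}}, \]

so, for example, a system whose users click 50 of 1,000 shown recommendations achieves a CTR of 5%.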

2.3.2 Offline evaluation

A total of 59 publications applied offline evaluations to analyze the recommendation algorithms based on the prepared offline datasets. Offline evaluations typically measure the accuracy of recommendation methods based on the ground truth, normally obtained from the information provided by the database, or obtained by manual tests.

To measure accuracy, precision at position n (P@n) is often used to express how many items of the ground truth appear within the top n recommendations. Other decision-support metrics, including recall and F-measure, were also commonly used, often together with precision as a reference. To evaluate recommendation quality, rank-aware evaluation metrics, including mean reciprocal rank (MRR) and normalized discounted cumulative gain (nDCG), were also widely used to test whether highly relevant items are ranked at the top of a recommendation list. The different evaluation metrics used are illustrated in Fig. 4.

Fig. 4 Distribution of evaluation metrics used in literature recommendation
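For reference, the sketch below gives minimal implementations of the three metrics named above under binary relevance; `recommended` is a ranked list and `relevant` the ground-truth set, both invented for illustration.

```python
# Reference implementations of P@n, MRR, and (binary-relevance) nDCG.
import math

def precision_at_n(recommended, relevant, n):
    """Fraction of the top-n recommendations that are in the ground truth."""
    return sum(1 for item in recommended[:n] if item in relevant) / n

def mrr(recommended, relevant):
    """Reciprocal rank of the first relevant item (0 if none found)."""
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

def ndcg_at_n(recommended, relevant, n):
    """DCG of the top-n list divided by the best achievable DCG."""
    dcg = sum(1.0 / math.log2(rank + 1)
              for rank, item in enumerate(recommended[:n], start=1)
              if item in relevant)
    ideal = sum(1.0 / math.log2(rank + 1)
                for rank in range(1, min(len(relevant), n) + 1))
    return dcg / ideal if ideal else 0.0

ranked = ["p3", "p1", "p7", "p2"]   # toy ranked recommendations
truth = {"p1", "p2"}                # toy ground truth
print(precision_at_n(ranked, truth, 3), mrr(ranked, truth),
      ndcg_at_n(ranked, truth, 3))
```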

3 Collaborator recommendation

Research in any given area has expanded well beyond its own field into other research fields in the form of collaborative research. Collaboration is essential in academia for producing strong publications and obtaining grants, yet identifying a potential collaborator is challenging. Hence, a recommendation system for collaboration can be very helpful, and fortunately, many publications on recommending collaborators are available.

3.1 Data

A total of 59 publications used databases to develop, test, and evaluate collaborator recommender systems. In 20 publications, the authors constructed their own datasets based on manually collected information, unique social platforms, or rarely used paid databases. In 39 of the 59 publications, the authors used open-source databases; of these 39, 17 used data from the DBLP library to evaluate the developed collaborator recommendation systems.

The datasets needed for developing collaborator recommendations usually cover two major components: (1) contexts and keywords based on researchers' information; and (2) information networks based on academic relationships. Owing to the rapid development of online libraries and academic social networks, extracting such information networks has become feasible. These datasets draw relevant information from different online sources to (i) construct profiles for researchers, (ii) retrieve keywords for constructing a structure for specific domains and concepts, and (iii) extract weighted co-author graphs. In addition, data mining and social network analysis tools may be used for clustering analysis and for identifying representatives of expert communities. The sources of datasets used in the 59 publications are listed in Table 5.

Table 5 Sources of datasets used for collaborator recommendation approaches

Among the reviewed studies, most researchers extracted information from these databases to construct training and evaluation datasets for their recommendations.

The DBLP dataset was used in 17 publications to evaluate the performance of collaborator recommendation approaches. The DBLP computer science bibliography provides an open bibliographic list of information on major computer science fields and is widely used to construct co-authorship networks. In the co-authorship network graphs of the DBLP bibliography, nodes represent computer scientists and edges represent co-authorship.

ScholarMate, a social research management tool launched in 2007, was used in 4 publications. It hosts more than 70,000 research groups created by researchers for their own projects, collaboration, and communication. As a platform for presenting research outputs, ScholarMate automatically collects scholarly information about researchers' output from multiple online resources, including Scopus, one of the largest abstract and citation databases for peer-reviewed literature, covering scientific journals, books, and conference proceedings. ScholarMate uses the aggregated data to provide researchers with recommendations on relevant opportunities based on their profiles.

3.2 Methods

Similar to other scholarly recommendation areas, research on methodologies to develop collaborator recommendations can be classified into the following categories: CBF, CF, and hybrid approaches. In this section, we introduce the approaches that are widely used in each recommendation class. In addition, we provide an overview of the most important aspects and techniques used in these fields.

3.2.1 Content-based filtering (CBF)

23 publications presented CBF methods for collaborator recommendation. CBF focuses on the semantic similarity between researchers' personal features, such as their personal profiles, professional fields, and research interests. Natural language processing (NLP) techniques were used to extract keywords from associated documents to define researchers' professional fields and interests. A summary of publications on collaborator recommendation using CBF approaches is presented in Table 6.

Table 6 Overview of collaborator recommendation system using CBF

The Vector Space Model (VSM) is widely used in content-based recommendation methodologies. Queries and documents are expressed as vectors in a multidimensional space, and these vectors are used to calculate relevance or similarity. Yukawa et al. [84] proposed an expert recommendation system employing an extended vector space model that calculates document vectors for every target document for authors or organizations, providing a list ordered by the relevance between academic topics and researchers.

Topic clustering models using VSM have been widely used to profile fields of researchers using a list of keywords with a weighting schema. Using a keyword weighting model, Afzal and Maurer [85] implemented an automated approach for measuring expertise profiles in academia that incorporates multiple metrics for measuring the overall expertise level. Gollapalli et al. [86] proposed a scholarly content-based recommendation system by computing the similarity between researchers based on their personal profiles extracted from their publications and academic homepages.

Topic-based models have also been widely applied for document processing. The topic-based model introduces a topic layer between the researchers and extracted documents. For example, in a popular topic modeling approach, based on the latent Dirichlet allocation (LDA) method, each document is considered as a mixture of topics and each word in a document is considered randomly drawn from the document’s topics. Yang et al. [87] proposed a complementary collaborator recommendation approach to retrieve experts for research collaboration using an enhanced heuristic greedy algorithm with symmetric Kullback–Leibler divergence based on a probabilistic topic model. Kong et al. [88] applied a collaborator recommendation system by generating a recommendation list based on scholar vectors learned from researchers’ research interests extracted from documents based on topic modeling.
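The sketch below illustrates this profiling step: LDA is fitted on a toy set of researcher documents, and the resulting topic mixtures are compared with a symmetric Kullback-Leibler divergence, the measure Yang et al. [87] build on. The corpus, topic count, and function names are our own illustrative choices.

```python
# Sketch of topic-based researcher profiling: fit LDA on researchers'
# documents, then compare their topic mixtures with symmetric KL divergence.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [                                     # one toy document per researcher
    "deep learning for medical image segmentation",
    "convolutional networks segment tumor images",
    "query optimization in distributed database systems",
]
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)            # per-researcher topic mixtures

def sym_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two topic mixtures."""
    p, q = p + eps, q + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

print(sym_kl(theta[0], theta[1]))            # low divergence: similar fields
print(sym_kl(theta[0], theta[2]))            # high divergence: distant fields
```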

As mentioned in the literature recommendation section, content-based methods usually suffer from high computation costs because of the large number of analyzed documents and the size of the vector space. To minimize this cost and maximize preference, Kong et al. [100] presented a scholarly collaborator recommendation method based on matching theory, which adopts multiple indicators extracted from associated documents to build the preference matrix among researchers. Some researchers have also combined weighted features and hybrid topic extraction methods with other factors to obtain higher accuracy. For example, Sun et al. [92] designed a career-age-aware academic collaborator recommendation model consisting of authorship extraction from digital libraries, topic extraction from published abstracts, and a career-age-aware random walk for measuring scholar similarity.

3.2.2 Collaborative filtering

Six publications presented methodologies based solely on collaborative filtering. Traditional CF-based recommendation aims to find the nearest neighbors in a social context similar to that of the targeted user, selecting them based on the users' rating similarities. When users rate a set of items in a manner similar to the target user, the recommendation system defines these nearest neighbors as a group with similar interests and recommends items that are favored by this group but not yet discovered by the target user. Applied to collaborator recommendation, the system recommends persons who have worked with the target author's colleagues but not with the target author. Analogously, following the methodology of traditional CF-based recommendation, the system considers each author an item to be rated and scholarly activities, such as writing a paper together, as rating activities. Researchers' publication activities are transformed into rating actions, and the frequency of co-authored papers is considered a rating value. Using this criterion, a graph based on a scholarly social network is built. A summary of the collaborator recommendation papers using CF approaches is presented in Table 7.

Table 7 Overview of collaborator recommendation system using collaborative filtering
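The sketch below makes this rating transformation concrete: each co-authored paper increments an edge weight between two researchers, yielding the weighted co-authorship graph on which the CF methods discussed next operate. The author lists are toy data.

```python
# Build a weighted co-authorship graph from paper author lists: the
# co-paper count per pair plays the role of a CF rating value.
from collections import defaultdict
from itertools import combinations

papers = [
    ["alice", "bob"],
    ["alice", "bob", "carol"],
    ["carol", "dave"],
]

ratings = defaultdict(int)              # (author, author) -> co-paper count
for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        ratings[(a, b)] += 1

print(dict(ratings))
# {('alice', 'bob'): 2, ('alice', 'carol'): 1,
#  ('bob', 'carol'): 1, ('carol', 'dave'): 1}
```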

Based on this co-authorship network transformed from researchers’ publication activities, several methods for link prediction and edge weighting have been utilized. Benchettara et al. [108] solved the problem of link prediction in co-authoring networks by using a topological dyadic supervised machine learning approach. Koh and Dobbie [110] proposed an academic collaborator recommendation approach that uses a co-authorship network with a weighted association rule approach using a weighting mechanism called sociability. Recommendation approaches based on this co-authorship network transformed from publication activities, where all nodes have the same functions, are called homogeneous network-based recommendation approaches.

The random walk model, which can define and measure the confidence of a recommendation, is popular in co-authorship network-based collaborator recommendation. Tong et al. [113] proposed Random Walk with Restart (RWR), a well-known random walk model that provides a good way to measure how closely related two nodes in a graph are. Applications and improvements of the RWR model are widely used for link prediction in co-authorship networks. Li et al. [109] proposed a collaboration recommendation approach based on a random walk model that uses three academic metrics as its basis over co-authorship relationships in a scholarly social network. Yang et al. [112] combined the RWR model with the PageRank method to propose a nearest-neighbor-based random walk algorithm for recommending collaborators.
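A minimal RWR implementation is sketched below: the walker repeatedly follows column-normalized edges and, with probability `restart`, jumps back to the seed researcher; the converged scores rank all other nodes by proximity to the seed. The adjacency matrix and parameter values are invented for illustration.

```python
# Minimal Random Walk with Restart (RWR), the model behind several of
# the co-authorship-network recommenders cited above.
import numpy as np

def rwr(A, seed, restart=0.15, tol=1e-8):
    W = A / A.sum(axis=0, keepdims=True)   # column-normalized transitions
    e = np.zeros(A.shape[0])
    e[seed] = 1.0                          # restart distribution at the seed
    r = e.copy()
    while True:
        r_next = (1 - restart) * W @ r + restart * e
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

A = np.array([[0, 1, 1, 0],                # toy co-authorship adjacency
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(rwr(A, seed=0))                      # proximity of each node to node 0
```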

Compared with content-based recommendation approaches, which involve only the published profiles of researchers without considering scholarly social networks, homogeneous network-based approaches apply CF methods based on social network technology to recommend collaborators. Lee et al. [111] compared academic social network (ASN)-based collaborator recommendation with metadata-based and hybrid recommendation methodologies and found the ASN-based method to perform best. However, homogeneous network-based collaboration recommendation does not consider the contextual features of researchers. Combining these two methods, hybrid collaboration recommendation systems based on heterogeneous networks are popular in current collaboration recommendation approaches and applications.

3.2.3 Hybrid

The previously introduced recommendation classes may also be combined into hybrid approaches; 37 of the reviewed papers applied approaches with hybrid characteristics. In particular, heterogeneous network-based recommendations overcome the limitations of homogeneous networks noted above. Table 8 summarizes all the collaborator recommendation papers we collected that use hybrid approaches.

Heterogeneous networks are networks in which two or more node classes are categorized by their functions. Building on the co-authorship network used in most homogeneous network-based approaches, heterogeneous network-based approaches incorporate more information into the network, such as the profiles of researchers, the results of topic modeling or clustering, and the citation relationships between researchers and their published papers. Xia et al. [52] presented MVCWalker, an innovative method based on RWR for recommending collaborators to academic researchers; on top of academic social networks, factors such as co-author order, latest collaboration time, and number of collaborations were used to define link importance. Kong et al. [114] proposed a collaboration recommendation model that combines features extracted from researchers' publications using a topic clustering model with a scholar collaboration network using the RWR model to improve recommendation quality. Kong et al. [115] proposed a collaboration recommendation model that considers scholars' dynamic research interests and collaborators' academic levels: using the LDA model for topic clustering and fitting the dynamic transformation of interests, they combined similarity and weighting factors in a co-authorship network to recommend collaborators with high relevance. Xu et al. [116] designed a recommendation system to provide serendipitous scholarly collaborators that learns a serendipity-biased vector representation of each node in the co-authorship network.

Table 8 Overview of collaborator recommendation system using hybrid methods

4 Venue recommendation

In this section, we describe recommendation systems that can help researchers identify scientific research publishing opportunities. Recently, there has been an exponential increase in the number of journals and conferences researchers can select to submit their research. Recommendation systems can alleviate some of the cognitive burden that arises when choosing the right conference or journal for publishing a work. In the following sections, we describe academic venue recommendation systems for conferences and journals.

4.1 Conference recommendation

The dramatic rise in the number of conferences and journals has made it nearly impossible for researchers to keep track of academic conferences. While researchers are arguably familiar with the top conferences in their field, publishing at those conferences is becoming increasingly difficult because of the growing number of submissions. A conference recommendation system helps reduce the time and effort required to find a conference that meets the needs of a given researcher. Conference recommendation is thus a well-studied problem in the domain of data analysis, with many studies conducted using a variety of methods, such as citation analysis, social networks, and contextual information.

Table 9 Sources of data used for Conference Recommendation Systems

4.1.1 Data

All reviewed publications used databases to test their methodology. Two publications constructed custom datasets based on manually collected information, and one publication used a rarely used paid dataset. The remaining 20 studies used published open-source databases to create the datasets for their testing and evaluation environments. Table 9 summarizes the frequencies with which published open-source databases were used.

DBLP was the most used database with 12 occurrences, followed by the ACM Digital Library and WikiCFP, each with 5 occurrences. The other databases utilized in conference recommendation systems are Microsoft Academic Search, the CORE Conference Portal, Epinions, the IEEE Digital Library, and SciGraph.

Microsoft Academic Search hosts over 27 million publications from over 16 million authors and is primarily used to extract metadata on authors, their publications, and their co-authors. The CORE Conference Portal provides rankings for conferences, primarily in computer science and related disciplines, along with metadata on conference publishers and rankings. Epinions is a general review website founded in 1999 and is utilized to create networks of 'trusted' users. The IEEE Digital Library is a database used to access journal articles, conference proceedings, and other publications in computer science, electrical engineering, and electronics. SciGraph is a knowledge graph aggregating metadata from publications in Springer Nature and other sources. WikiCFP is a website that collates and publishes calls for papers.

4.1.2 Methods

There are three main subtypes of conference recommendation systems: content-based, collaborative, and hybrid. The following section provides an overview of the most popular methods used by each subtype.

Content-based filtering (CBF)

Only 1 of the 23 publications in conference recommendations utilized pure CBF. Using data from Microsoft Academic Search, Medvet et al. [146] created three disparate CBF systems seeking to reduce the input data required for accurate recommendations: (a) utilizing Cavnar-Trenkle text classification, (b) utilizing two-step latent Dirichlet allocation (LDA), and (c) utilizing LDA alongside topic clustering.

Cavnar-Trenkle classification is an n-gram-based text classification method. Given a set of conferences \(C = \{c_1, c_2, c_3, \ldots \}\), it is necessary to define for each conference \(c \in C\) a set of papers \(P = \{p_1, p_2, p_3, \ldots \}\) that were published in conference \(c\). The method creates an n-gram profile for each conference \(c \in C\), using n-grams generated from each paper \(p \in P\) in that conference. Finally, it computes the distance between the n-gram profile of each conference \(c \in C\) and that of a publication of interest \(p_i\) and recommends the \(n\) conferences whose profiles are closest to that of \(p_i\).
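A compact sketch of the Cavnar-Trenkle "out-of-place" distance is given below: each text is reduced to a frequency-ranked n-gram profile, and the distance is the sum of rank displacements between the two profiles, with a maximum penalty for missing n-grams. The toy strings and profile size are illustrative assumptions.

```python
# Sketch of the Cavnar-Trenkle out-of-place profile distance: rank
# character n-grams by frequency and sum the rank displacements.
from collections import Counter

def ngram_profile(text, n=3, top=300):
    """Map each of the `top` most frequent n-grams to its rank."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    ranked = [g for g, _ in Counter(grams).most_common(top)]
    return {g: rank for rank, g in enumerate(ranked)}

def out_of_place(profile_a, profile_b):
    """Sum of rank shifts; missing n-grams get the maximum penalty."""
    max_shift = len(profile_b)
    return sum(abs(rank - profile_b.get(g, max_shift))
               for g, rank in profile_a.items())

conference = ngram_profile("neural information processing systems papers")
paper = ngram_profile("a paper on neural network information processing")
print(out_of_place(paper, conference))   # smaller distance = closer match
```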

Collaborative filtering

Of the 23 collected publications, 18 employed collaborative filtering strategies. The most popular approach was based on generating and analyzing a variety of networks over different types of metadata, including citations, co-authorship, references, and social proximity.

Asabere and Acakpovi [147, 148] generated a user-based social context aware filter with breadth-first search (BFS) and depth-first search (DFS) on a knowledge graph created by computing the Social Ties between users, and added geographical, computing, social, and time contexts. Social Ties were generated by computing the network centrality based on the number of links between users and presenters at a given conference.

Other types of network-based collaborative filters include a co-author-based network that assigns weights with regard to venues where one’s collaborators have published previously [149, 150], a broader metadata-based network that utilizes one or more distinct characteristics to assign weights to conferences (i.e., citations, co-authors, co-activity, co-interests, colleagues, interests, location, references, etc.) [146, 151,152,153,154], and RWR-based methods [155, 156].

Kucuktunc et al. [155] extended the traditional RWR model by adding a directionality parameter \((\kappa )\), which is used to calibrate the recommendations chronologically as either recent or traditional. The list of publications that used CF for conference recommendation is presented in Table 10.

Table 10 Overview of conference recommendation systems using collaborative filtering

Hybrid

A total of 6 of the 23 publications used hybrid filtering strategies. The most common hybrid strategy is to amalgamate standard topic-based content filtering with network-based collaborative filters. Table 11 summarizes publications that used hybrid filtering methods for conference recommendation.

Table 11 Overview of conference recommendation systems using hybrid filtering

4.2 Journal recommendation

As of April 14, 2020, the Master Journal List of the Web of Science Group contains 24,748 peer-reviewed journals from different publishing houses. Authors may face difficulties in finding suitable journals for their manuscripts; in many cases, a manuscript submitted to a journal is rejected because it is not within the journal's scope. Finding a suitable journal for a manuscript is thus the most important step in publishing an article. A journal recommendation system can ease the burden on authors of selecting appropriate journals to publish in, as well as the burden on editors of rejecting manuscripts that do not align with their journals' scopes. Many publishing companies have their own journal finders that help authors find suitable journals for their manuscripts.

In this section, we review all available journal recommendation systems by analyzing the methods used and their journal coverage. There are a total of ten journal recommendation systems, but we found only four papers describing details corresponding to their recommendation procedures. A detailed list of journal recommenders with their methods and datasets is provided in Table 12. Most journal recommenders were developed for different publishing houses. Most journal recommenders contain journals from multiple domains except eTBLAST, Jane, and SJFinder, where the journals are from the biomedical and life science domains.

Table 12 Detailed overview of journal recommendation systems

TF-IDF, kNN, and BM25 were used to find similar journals based on the provided keywords. Kang et al. [172] used a classification model (using kNN and SVM) to identify suitable journals. Errami et al. [169] used the similarity between the provided keywords and journal keywords.
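For reference, BM25 scores a document \(D\) against a query \(Q\) (in this setting, a journal's indexed text against a manuscript's keywords, which is our illustrative reading of how these systems apply it) with the standard formula

\[ \mathrm{score}(D, Q) = \sum_{q_i \in Q} \mathrm{IDF}(q_i) \cdot \frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1 \left(1 - b + b \, \frac{|D|}{\mathrm{avgdl}}\right)}, \]

where \(f(q_i, D)\) is the frequency of term \(q_i\) in \(D\), \(|D|\) is the document length, \(\mathrm{avgdl}\) is the average document length in the collection, and \(k_1\) and \(b\) are tuning parameters (commonly \(k_1 \in [1.2, 2.0]\) and \(b = 0.75\)).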

Rollins et al. [39] evaluated a journal recommender by using feedback from real users. Kang et al. [172] evaluated a system based on previously published articles. If the top three or top ten recommended journals contained the journal in which the input paper was published, then this would be counted as a correct recommendation; otherwise, it would be counted as a false recommendation. Similarly, eTBLAST [169] and Jane [170] were evaluated using previously published articles.

Deep learning-based recommenders often perform better than traditional matching-based NLP or machine learning algorithms. However, none of the existing journal recommendation systems uses deep learning algorithms, so implementing such algorithms is one possible future goal. In addition to these publishing houses, developing journal recommenders for other publication repositories (DBLP, arXiv, etc.) may be another future task.

5 Reviewer recommendation

In this section, we describe the paper, journal, and grant reviewer recommendation systems that are available in the literature. With the rapid increase in publishable research material, the pressure of finding reviewers is overwhelming for conference organizers and journal editors, and likewise for program directors seeking appropriate reviewers for grants.

In the case of conferences, authors normally choose some research fields during submission. The organizing committee of a conference typically has a set of researchers as reviewers, assigned from the same set of fields, and papers are assigned to reviewers based on field matching. However, research fields are broad and may not exactly match those of the reviewer. In the case of journals, authors may need to suggest reviewers, or editors must find reviewers for the manuscripts. For grant proposals, program directors are responsible for finding suitable reviewers.

The problem of finding reviewers can be solved by a reviewer recommendation system, which can recommend reviewers based on the similarity of content or past experience. The reviewer recommendation problem is also known as the reviewer assignment problem, and we searched for publications related to both reviewer recommendation and assignment.

5.1 Data

A total of 67 reviewed publications were retrieved using Google searches, and 36 publications were included in the final analysis after title, abstract, and full-text screening. Among these 36 publications, 23 conducted experiments to supplement the theoretical contents, and the sources of the datasets used are listed in Table 13.

Table 13 Sources of datasets used for reviewer recommendation

5.2 Methods

Broadly, there are three major categories of techniques: one is based on information retrieval (IR); another is based on optimization, where the recommendation is viewed as an enhanced version of the generalized assignment problem (GAP); and the third includes techniques that fall between the first two categories.

5.2.1 Information retrieval (IR)-based

IR-based studies generally focus on calculating matching degrees between reviewers and submissions.

Hettich and Pazzani [178] discussed Revaid, a prototype application at the U.S. National Science Foundation (NSF) for assisting program directors in identifying reviewers for proposals; it uses TF-IDF vectors to represent proposal topics and reviewer expertise and defines a measure called the Sum of Residual Term Weight (SRTW) for assigning reviewers. Yang et al. [179] constructed a knowledge base of expert domains extracted from the web and used a probability model for domain classification to compute the relatedness between experts and proposals for ranking expertise. Ferilli et al. [180] used Latent Semantic Indexing (LSI) to extract paper topics and reviewer expertise from publications available online, followed by the Global Review Assignment Processing Engine (GRAPE), a rule-based expert system for the actual assignment of reviewers.

Serdyukov et al. [181] formulated expert search as an absorbing random walk in a document-candidate graph: a recommendation is made for the reviewer candidate nodes with the highest probabilities after an infinite number of transitions in the graph, under the assumption that expertise is proportional to probability. Yunhong et al. [182] used LDA for proposal and expertise topic extraction and defined a weighted sum of varied index scores for ranking reviewers for each proposal. Peng et al. [183] built time-aware reviewer profiles using LDA to represent reviewer expertise; a weighted average of the matching degrees computed from topic vectors and from TF-IDF between a reviewer and submitted papers was then used for recommendation. Medakene et al. [184] used pedagogical expertise in addition to the research expertise of reviewers, building reviewer profiles with LDA and using a weighted sum of topic similarity and reference similarity to assign reviewers to papers. Rosen-Zvi et al. [185] proposed an Author-Topic Model (ATM) that extends LDA to include authorship information. Later, Jin et al. [186] proposed an Author-Subject-Topic (AST) model for reviewer recommendation, adding a 'subject' layer that supervises the generation of hierarchical topics and the sharing of subjects among authors. Alkazemi [187] developed PRATO (Proposals Reviewers Automated Taxonomy-based Organization), which first sorts proposals and reviewers into categorized tracks defined by a tree of hierarchical research domains and then assigns reviewers based on track matching using Jaccard similarity scores. Cagliero et al. [188] proposed an association rule-based methodology (Weighted Association Rules, WAR) to recommend additional external reviewers.

Ishag et al. [189] modeled the citation data of published papers as a heterogeneous academic network integrating authors' h-indexes and papers' citation counts, proposed a quantification to account for author diversity, and formulated two types of target patterns, researcher-general topic patterns and researcher-specific topic patterns, for searching for reviewers.

Recently, deep learning techniques have been incorporated into feature representations. Zhao et al. [190] used word embeddings to represent the content of both papers and reviewers; the Word Mover's Distance (WMD) method was then used to measure the minimum distance between paper and reviewer vectors, and the Constructive Covering Algorithm (CCA) was used to classify reviewer labels for recommending reviewers. Anjum et al. [191] proposed a common topic model (PaRe) that jointly models the topics of a submission and a reviewer profile based on word embeddings. Zhang et al. [192] proposed a two-level bidirectional gated recurrent unit with an attention mechanism (Hiepar-MLC) to represent the semantic information of reviewers and papers and used a simple multilabel-based reviewer assignment strategy (MLBRA) to match the most similar multilabeled reviewer to a particular multilabeled paper.
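As a deliberately simplified stand-in for these embedding-based matchers, the sketch below represents a submission and a reviewer profile as averaged word vectors and compares them with cosine similarity; WMD, as used by Zhao et al. [190], instead solves an optimal transport problem between the two word sets. The 4-dimensional "embeddings" are invented toy values, not pretrained vectors.

```python
# Simplified embedding-based reviewer-paper matching: average the word
# vectors of each text and compare with cosine similarity.
import numpy as np

embeddings = {                       # hypothetical word vectors (toy values)
    "graph":   np.array([0.9, 0.1, 0.0, 0.2]),
    "network": np.array([0.8, 0.2, 0.1, 0.1]),
    "protein": np.array([0.0, 0.9, 0.8, 0.1]),
    "folding": np.array([0.1, 0.8, 0.9, 0.0]),
}

def doc_vector(words):
    """Average the embeddings of the known words in a text."""
    return np.mean([embeddings[w] for w in words if w in embeddings], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

submission = doc_vector(["graph", "network"])
reviewer = doc_vector(["protein", "folding"])
print(cosine(submission, reviewer))  # low similarity -> poor reviewer match
```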

Co-authorship and reviewer preferences have also been incorporated into collaborative filtering applications. Li and Watanabe [193] designed a scale-free network combining preferences with a topic-based approach that considers both reviewer preferences and the relevance between reviewers and submitted papers to measure the final matching degrees. Xu and Du [194] designed a three-layer network that combines a social network, semantic concept analysis, and citation analysis, and proposed a particle swarm algorithm to recommend reviewers for submissions. Maleszka et al. [195] used a modular approach to determine a grouping of reviewers, consisting of a keyword-based module, a social graph module, and a linguistic module. A summary of all IR-based reviewer recommendations can be found in Table 14.

Table 14 Overview of reviewer recommendation systems, IR-based

5.2.2 Optimization-based

Optimization-based reviewer recommendation focuses more on theory, modeling the assignment algorithm under multiple constraints such as reviewer workload, authority, diversity, and conflict of interest (COI).
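Stripped of the constraints above, the problem reduces in its simplest form to maximizing the total reviewer-paper matching score, which the Hungarian algorithm solves exactly; the sketch below uses SciPy's implementation on an invented score matrix. Real systems layer workload, diversity, and COI constraints on top, typically via (mixed) integer programming.

```python
# Minimal sketch of the assignment view: maximize total reviewer-paper
# matching score with the Hungarian algorithm (one paper per reviewer).
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: reviewers, cols: papers; entries: matching degree in [0, 1].
S = np.array([[0.9, 0.2, 0.4],
              [0.3, 0.8, 0.5],
              [0.6, 0.4, 0.7]])

rows, cols = linear_sum_assignment(S, maximize=True)
for r, p in zip(rows, cols):
    print(f"reviewer {r} -> paper {p} (score {S[r, p]:.1f})")
```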

Sun et al. [196] proposed a hybrid of knowledge and decision models to solve the proposal-reviewer assignment problem under constraints. Kolasa and Krol [197] compared artificial intelligence methods for the reviewer-paper assignment problem, namely genetic algorithms (GA), ant colony optimization (ACO), tabu search (TS), and hybrid ACO-GA and GA-TS, in terms of time efficiency and accuracy. Chen et al. [198] employed a two-stage genetic algorithm to solve the project-reviewer assignment problem: in the first stage, reviewers were assigned taking their respective preferences into consideration, and in the second stage, review venues were arranged so as to minimize the number of venue changes for reviewers.

Das and Gocken [199] used fuzzy linear programming to solve the reviewer assignment problem by maximizing the matching degree between expert sets and grouped proposals, under crisp constraints. Tayal et al. [200] used type-2 fuzzy sets to represent reviewers’ expertise in different domains, and proposed using the fuzzy equality operator to calculate equality between the set representing the expertise levels of a reviewer and the set representing the keywords of a submitted proposal, and optimized the assignment under various constraints.

Wang et al. [201] formulated the problem as a multiobjective mixed integer programming model that considers the Direct Matching Score (DMS) between manuscripts and reviewers, Manuscript Diversity (MD), and Reviewer Diversity (RD), and proposed a two-phased stochastic-biased greedy algorithm (TPGA) to solve it. Long et al. [202] studied the paper-reviewer assignment problem from the perspective of goodness and fairness, proposing to maximize topic coverage while avoiding conflicts of interest (COI) as the optimization objectives; they also designed an approximation method that provides a 1/3 approximation.

Kou et al. [203] modeled reviewers’ published papers as a set of topics and performed weighted-coverage group-based assignments of reviewers to papers. They also proposed a greedy algorithm that achieves a 1/2 approximation ratio compared with the exact solution. Kou et al. [204] developed a system that automatically extracts the profiles of reviewers and submissions in the form of topic vectors using the author-topic model (ATM) and assigns reviewers to papers based on the weighted coverage of paper topics.

Stelmakh et al. [205] designed an algorithm, PeerReview4All, which is based on an incremental max-flow procedure to maximize the review quality of the most disadvantaged papers (fairness objective) and to ensure the correct recovery of the papers that should be accepted (accuracy objective). Yesilcimen and Yildirim [206] proposed an alternative mixed integer programming formulation for the reviewer assignment problem whose size grows polynomially as a function of the input size. A summary of all the optimization-based reviewer recommendation papers is presented in Table 15.

Table 15 Overview of reviewer recommendation systems, optimization-based

5.2.3 Hybrid

Finally, other studies combine both approaches in hybrid systems. Conry et al. [207] modeled reviewer-paper preferences using CF of ratings, latent factors, paper-to-paper content similarity, and reviewer-to-reviewer content similarity, and optimized the paper assignment under global conference constraints; the assignment was thus transformed into a linear programming problem. Tang et al. [208] formulated expertise matching as a convex cost flow problem, which turned the recommendation into a constrained optimization problem, and also used online matching algorithms to incorporate user feedback into the system.

Charlin and Zemel [209] built one of the most popular systems for conference reviewer assignment. Their system first uses a language model and LDA to learn reviewer expertise and submission topics, then applies linear regression for initial predictions of reviewers’ preferences, combines these with reviewers’ elicitation scores (stated disinterest or interest in specific papers) for the final recommendation, and optimizes the objective functions under constraints. Liu et al. [210] constructed a graph network of reviewers and query papers using LDA to establish edge weights, and used the Random Walk with Restart (RWR) model with sparsity constraints to recommend reviewers with the highest probabilities, incorporating expertise, authority and diversity. Liu et al. [211] combined heuristic knowledge of expert assignment with operations research techniques, involving aspects such as reviewer expertise, titles and project experience; a multiobjective optimization problem was formulated to maximize the total expertise level of the recommended experts while avoiding conflicts between reviewers and authors. Ogunleye et al. [212] used a mixture of TF-IDF, LSI, LDA and word2vec to represent the semantic similarity between submissions and reviewers’ publications, and then used integer linear programming to match submissions with the most appropriate reviewers. Jin et al. [213] extracted topic distributions of reviewers’ publications and submissions using the Author-Topic Model (ATM) and Expectation Maximization (EM), then formulated reviewer assignment as an integer linear programming problem that considers topic relevance, the interest trend of a reviewer candidate, and candidate authority. A summary of the reviewer recommendation papers is presented in Table 16.
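
A compact sketch of the RWR scoring step, in the spirit of Liu et al. [210] but without their authority and diversity constraints, is shown below; the adjacency matrix is hypothetical.

```python
# Hedged sketch of Random Walk with Restart (RWR) scoring on a graph of
# reviewers and query papers; this is the core ranking idea only.
import numpy as np

def rwr_scores(A, seed, restart=0.15, iters=100, tol=1e-8):
    """A: (n, n) nonnegative edge weights (e.g., LDA topic similarity);
    seed: index of the query-paper node.  Returns stationary visit
    probabilities; reviewer nodes with the highest scores are recommended."""
    n = A.shape[0]
    col_sums = A.sum(axis=0)
    P = A / np.where(col_sums == 0, 1.0, col_sums)  # normalize columns
    e = np.zeros(n)
    e[seed] = 1.0
    p = e.copy()
    for _ in range(iters):
        p_next = (1 - restart) * P @ p + restart * e
        if np.abs(p_next - p).sum() < tol:          # converged
            break
        p = p_next
    return p
```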

Table 16 Detailed overview of reviewer recommendation systems, other

6 Other scholarly recommendation

6.1 Dataset recommendation

In the Big Data era, extensive data have been generated for scientific discoveries. However, storing, accessing, analyzing, and sharing vast amounts of data is becoming a major challenge and bottleneck for scientific research. Furthermore, making a large amount of public scientific data findable, accessible, interoperable, and reusable (FAIR) is challenging. Many repositories and knowledge bases have been established to facilitate data sharing. Most of these repositories are domain-specific, and none of them recommend datasets to researchers or users. Furthermore, over the past two decades, there has been an exponential increase in the number of datasets added to these repositories, so researchers must visit each repository to find suitable datasets for their research. A dataset recommender would therefore be helpful, saving researchers time and increasing the visibility of datasets.

Dataset recommenders are not yet common, but dataset retrieval is a popular information retrieval task, and many retrieval systems exist for general as well as biomedical datasets. Google’s Dataset Search is a popular search engine for datasets from different domains. DataMed is a dataset search engine specific to the biomedical domain that combines biomedical repositories and enhances query searching using advanced natural language processing (NLP) techniques [214, 215]. DataMed indexes diverse categories of biomedical datasets and provides the functionality to search them [215]; its research focus is retrieving datasets for a focused query. Search engines such as DataMed or Google Dataset Search are helpful when the user knows what type of dataset to search for, but determining the user intent of web searches is difficult because of the sparse data available about the searcher [216].

A few experiments have been performed on data linking, where similar datasets are clustered together using different semantic features. Data linking, or identifying and clustering similar datasets, has received relatively little attention in research on recommendation systems; only a few papers [217,218,219] have been published on this topic. Ellefi et al. [218] defined dataset recommendation as the problem of computing a rank score for each target dataset \(D_T\) such that the score indicates the relatedness of \(D_T\) to a given source dataset \(D_S\); the rank scores estimate how likely \(D_T\) is to contain linking candidates for \(D_S\). Similarly, Srivastava [219] proposed a dataset recommendation system that first creates similarity-based dataset networks and then, for each searched dataset, recommends the connected datasets. This approach is difficult to apply because of a cold start problem concerning the user’s initial dataset selection, where the user has no idea which dataset to search for: if the user lands on an incorrect dataset, the system will keep recommending irrelevant datasets.
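
A minimal sketch of such similarity-based dataset ranking, in the spirit of [218, 219], is given below: each target dataset is scored against a source dataset by cosine similarity over TF-IDF vectors. The descriptions are hypothetical, and real systems use richer semantic features.

```python
# Hedged sketch: rank target datasets by TF-IDF cosine similarity to a
# source dataset's description.  All texts are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source_desc = "single-cell RNA-seq of human liver tissue"
targets = {
    "D1": "bulk RNA-seq profiling of mouse liver",
    "D2": "single-cell RNA-seq atlas of human tissues",
    "D3": "daily temperature readings from weather stations",
}

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform([source_desc] + list(targets.values()))
rank_scores = cosine_similarity(X[0], X[1:]).ravel()   # one score per target
ranked = sorted(zip(targets, rank_scores), key=lambda kv: -kv[1])
```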

Patra et al. [220, 221] and Zhu et al. [222] proposed a dataset recommendation system for the Gene Expression Omnibus (GEO) based on researchers’ publications. The system recommends GEO datasets using classification- and similarity-based approaches. They first identified research areas from researchers’ publications using the Dirichlet Process Mixture Model (DPMM) and recommended datasets for each cluster. The classification-based approach uses several machine learning and deep learning algorithms, whereas the similarity-based approach uses cosine similarity between publications and datasets. This line of work constitutes the first study of dataset recommendation.
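
As a rough illustration of the clustering step, the sketch below groups a researcher’s publications into research areas with a Dirichlet-process mixture, using scikit-learn’s truncated variational approximation as a stand-in for the DPMM in [220, 221]; the abstracts and all hyperparameters are hypothetical.

```python
# Hedged sketch: cluster publication abstracts into research areas with a
# Dirichlet-process mixture (truncated variational approximation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.mixture import BayesianGaussianMixture

abstracts = [   # invented examples of one researcher's publications
    "gene expression changes in liver cancer",
    "transcriptomic analysis of hepatic tumors",
    "deep learning for medical image segmentation",
    "convolutional networks for radiology scans",
    "survey responses on clinical decision support",
    "questionnaire study of physician workflows",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
Z = TruncatedSVD(n_components=3, random_state=0).fit_transform(X)
dpmm = BayesianGaussianMixture(
    n_components=5,   # truncation level; unused components are shrunk away
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(Z)
areas = dpmm.predict(Z)   # one research-area label per publication
```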

6.2 Grants/funding recommendation

Obtaining grants or funding for research is essential in academic settings, and grants help researchers in many ways throughout their careers. Finding appropriate funding opportunities is an important step in this process, and many grant opportunities exist that a researcher may not be aware of. No universal repository of funding announcements is available worldwide; however, a few repositories cover funding announcements in the United States, such as grants.gov, NIH, and SPIN. These websites host many funding opportunities in various areas, and multiple new opportunities appear daily, making it difficult for researchers to find suitable ones. A recommendation system for funding announcements would help researchers find appropriate research funding opportunities. Recently, Zhu et al. [223] developed a grant recommendation system for NIH grants based on researchers’ publications. They formulated the recommendation task as classification, using Bidirectional Encoder Representations from Transformers (BERT) to capture intrinsic, nonlinear relationships between researchers’ publications and grant announcements; internal and external evaluations were performed to assess the usefulness of the system. Two publications address a search engine for Japanese research announcements [224, 225]. Their titles suggest recommendation systems; however, the full text reveals that they describe a keyword-based search engine for funding announcements in Japan, built using TF-IDF and association rules.
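
To illustrate the general idea, the sketch below embeds publication and grant texts with a pretrained BERT encoder and ranks grants by similarity. Note that the system in [223] trains a BERT classifier; we substitute a lighter embedding-similarity ranking, and all texts are hypothetical.

```python
# Hedged sketch: rank grant announcements against a researcher's
# publications via BERT [CLS] embeddings.  Texts are invented examples.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        cls = enc(**batch).last_hidden_state[:, 0]   # [CLS] token vectors
    return torch.nn.functional.normalize(cls, dim=1)

pubs = embed(["Deep learning for genomic variant calling"])
grants = embed([
    "Computational methods for analyzing genomic data",
    "Field surveys of coastal marine ecosystems",
])
scores = pubs @ grants.T   # cosine similarities; higher = better match
```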

7 Conclusion and future directions

Numerous recommendation systems have been developed since the beginning of the twenty-first century. In this comprehensive survey, we discussed all common types of scholarly recommendation systems, outlining their data resources, applied methodologies, and evaluation metrics.

Literature recommendation remains the most intensively studied area of scholarly recommendation. With the increasing need to collaborate with other researchers and publish research results, recommenders for collaborators and reviewers are becoming popular. Compared with these popular research targets, published recommendation systems for conferences/journals, datasets and grants remain relatively uncommon.

To develop recommendation systems and evaluate their results, researchers commonly construct datasets using information extracted from multiple resources. Published open-source databases, such as DBLP and the ACM and IEEE Digital Libraries, are the most commonly used sources across multiple types of recommendation systems. Some web services containing scholarly information about their users, or social tags added by researchers, such as ScholarMate and CiteULike, have also been used to develop recommendation systems.

Content-based filtering (CBF) is the most commonly used approach for recommendation systems. Because most systems must process contextual information, match keywords, and identify the topics of academic resources, the majority were built on CBF. However, traditional CBF struggles to account for the popularity and ratings of items. CF has therefore been used to overcome these limitations, especially when recommending items based on researchers’ interests and profiles. With the rapid development of recommendation systems and the need to control computational costs, several recommenders combine CBF and CF in hybrid methods to achieve better performance.
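
A minimal illustration of the simplest such hybrid, a weighted blend of content-based and collaborative scores, is shown below; it is our own toy example, not drawn from any surveyed system, and the numbers are hypothetical.

```python
# Toy weighted hybrid: blend a content-based similarity with a
# collaborative-filtering score and rank items by the combined value.
import numpy as np

content_sim = np.array([0.82, 0.40, 0.65])  # CBF: item-to-profile similarity
cf_score = np.array([0.10, 0.90, 0.55])     # CF: predicted preference (0-1)
alpha = 0.6                                 # weight on content evidence
hybrid = alpha * content_sim + (1 - alpha) * cf_score
ranking = np.argsort(-hybrid)               # items in recommended order
```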

Based on the information gathered for the survey, we provide the following suggestions for better recommendation developments:

1. To improve system performance and avoid the limitations of existing methodologies, combining different methods, or incorporating the characteristics of one method into another, may be helpful.

2. Evaluating the efficiency of a recommendation system with both decision-support metrics, such as precision and recall, and rank-aware evaluation metrics, such as MRR and NDCG, will make offline evaluation more applicable.

3. For future directions of scholarly recommendation research, we suggest that researchers apply recommendation methodologies to less-studied areas, such as dataset and grant recommendation. We believe that researchers would benefit significantly from these areas from a practical perspective.

Based on extensive research, our literature review provides a comprehensive summary of scholarly recommendation systems from various perspectives. For researchers interested in developing future recommendation systems, it can serve as an efficient overview and guide.