1 Introduction

Recommender systems aim to help people find items of interest from a large pool of potentially interesting items. The users’ preferences may change depending on their current context, such as the time of the day, the device they use, or their location. Hence, those recommendations or suggestions should be tailored to the context of the user. Typically, recommender systems suggest a list of items based on users’ preferences. However, awareness of the importance of context as a third dimension beyond users and items has increased, for recommendation (Adomavicius and Tuzhilin 2011) and search (Melucci 2012) alike. The goal is to anticipate users’ context without asking them, as stated in The Second Strategic Workshop on Information Retrieval (SWIRL 2012) (Allan et al. 2012): “Future information retrieval systems must anticipate user needs and respond with information appropriate to the current context without the user having to enter a query”. This problem is known as contextual suggestion in Information Retrieval (IR) and context-aware recommendation in the Recommender Systems (RS) community.

The TREC Contextual Suggestion (CS) track, introduced in 2012, provides a common evaluation framework for investigating this task (Dean-Hall et al. 2012). The aim of the CS task is to provide a list of ranked suggestions, given a location as the (current) user context and past preferences as the user profile. The public Open Web was the only source for collecting candidate documents in 2012. Using APIs based on the Open Web (either for search or recommendation) has the disadvantage that the end-to-end contextual suggestion process cannot be examined in all detail, and that reproducibility of results is at risk (Hawking et al. 2001, 1999). To address this problem, starting from 2013 participating teams were allowed to collect candidate documents either from the Open Web or from the ClueWeb12 collection.

In the 2013 and 2014 editions of the CS track, there were more submissions based on the Open Web than on the ClueWeb12 collection. However, to achieve reproducibility, ranking web pages from ClueWeb12 should be the preferred method for scientific evaluation of contextual suggestion systems. It has been found that systems that build their suggestion algorithms on top of input taken from the Open Web consistently achieve higher effectiveness than systems based on the ClueWeb12 collection. Most existing work has relied on public tourist APIs to address the contextual suggestion problem. These tourist sites (such as Yelp and Foursquare) specialize in providing tourist suggestions, hence those works focus on re-ranking the resulting candidate suggestions based on user preferences. Gathering suggestions (potential venues) from the ClueWeb12 collection has indeed proven a challenging task. First, suggestions have to be selected from a very large collection. Second, these documents should be geographically relevant (the attraction should be located as close as possible to the target context), and they should be of interest to the user.

The finding that Open Web results achieve higher effectiveness raises the question whether research systems built on top of the ClueWeb12 collection are still representative of those that would work directly on industry-strength web search engines. In this paper, we focus on analyzing the reproducibility and representativeness of Open Web and ClueWeb12 systems. We study the gap in effectiveness between Open Web and ClueWeb12 systems by analyzing the relevance assessments of the documents they return. After that, we identify documents that overlap between Open Web and ClueWeb12 results. We define two different sets of overlap. The first is the overlap in the relevance assessments of documents returned by Open Web and ClueWeb12 systems, used to investigate how these documents were judged when they were considered by Open Web or ClueWeb12 systems. The second type of overlap is defined by the documents in the relevance assessments of the Open Web systems which are in the ClueWeb12 collection but not in the relevance assessments of ClueWeb12 systems. The purpose is to use the judgments of these documents (mapped from the Open Web onto the ClueWeb12 collection) to expand the relevance assessments of ClueWeb12 systems, resulting in a new test collection. Figure 1 illustrates these different test collections, with details given in Sect. 3.3. Then, we focus on how many of the documents returned by Open Web systems can be found in the ClueWeb12 collection, an analysis from the reproducibility point of view. Finally, we apply knowledge about the tourist information available on the Open Web to select documents from ClueWeb12, in order to find a representative sample of the ClueWeb12 collection. Specifically, we address the following research questions:

RQ1: Do relevance assessments of Open Web URLs differ (significantly) from relevance assessments of ClueWeb12 documents?

RQ2: Can we identify an overlap between Open Web systems and ClueWeb12 systems in terms of documents suggested by both? If so, how are the documents in this overlap judged?

RQ3: How many of the documents returned by Open Web systems can be found in the ClueWeb12 collection as a whole?

RQ4: Can we identify a representative sample from the ClueWeb12 collection for the CS track by applying the tourist domain knowledge obtained from the Open Web?

Fig. 1

Illustration of the relation between the pools and the sources of the documents. Subset 1 represents the documents that are in the Open Web pool and were found in the ClueWeb12 collection but do not exist in the ClueWeb12 pool (this subset is used to expand the ClueWeb12 pool). Subset 2 represents the overlap between the Open Web pool and the ClueWeb12 pool; documents in this subset were judged twice (we use this subset to show the bias between Open Web and ClueWeb12 results)

The remainder of the paper is organized as follows: first we discuss related work (Sect. 2), followed by a description of the experimental setup (Sect. 3). After that we present an analysis comparing Open Web and ClueWeb12 relevance assessments (Sect. 4). Then we discuss how many of the Open Web results can be reproduced from the ClueWeb12 collection, and we evaluate the Open Web systems on the ClueWeb12 test collection (Sect. 5). After that we discuss how to apply tourist domain knowledge available on the public Open Web to annotate documents from the ClueWeb12 collection (Sect. 6). Finally, we discuss conclusions drawn from our findings (Sect. 7).

2 Related work

In the Recommender Systems area, recommendation algorithms for several types of content have been studied (movies, tourist attractions, news, friends, etc.). These algorithms are typically categorized according to the information they exploit: collaborative filtering (based on the preferences of like-minded users; Resnick et al. 1994) and content-based filtering (based on items similar to those the user liked; Lops et al. 2011). In the Information Retrieval area, approaches to contextual suggestion usually follow a content-based recommendation approach. The majority of related work results from the corresponding TREC track, focusing on the specific problem of how to provide tourist attractions given a location as context, where many participants have relied on APIs of location-based services on the Open Web. Candidate suggestions based on location are then ranked based on their similarity with the known user interests. In this case, the key challenge is to model user interests.

Given a set of example suggestions judged by the user, existing studies exploit the descriptions of those suggestions to build her profile, usually represented by the textual information contained in the descriptions. Sappelli et al. (2013) build two user profiles: a positive profile represents terms from the suggestions liked by the user before, whereas a negative profile is based on descriptions of suggestions disliked by the user. In Hubert et al. (2013) and Yang and Fang (2012), both the descriptions and the categories of the suggestions are used to build the user profiles. In Yang and Fang (2013), the authors propose an opinion-based approach to model user profiles by leveraging users' opinions of suggestions available on public tourist APIs. If the user rated a suggestion as relevant, then the positive profile represents all positive reviews of that suggestion; the negative profile represents all negative reviews of suggestions the user rated as irrelevant. The aforementioned approaches consider different ranking features based on the similarity between candidate suggestions and the positive and negative profiles. On the other hand, a learning-to-rank model exploiting 64 features obtained from Foursquare is presented by Deveaud et al. (2014). They used four groups of features: (a) city-dependent features which describe the context (city), such as the total number of venues in the city and the total number of likes, (b) category-dependent features that consist of the counts of the 10 highest-level categories obtained from Foursquare, (c) venue-dependent features which describe the popularity of the venue in the city, and (d) user-dependent features describing the similarity between user profiles and the suggestions. The most effective features were the venue-dependent features, that is, those indicating venue importance.

Besides recommendation, a critical part of our work is how to build test collections and create sub-collections from them; we therefore introduce the topic and survey some of the most relevant work in that area. Creating a test collection is the most common approach for evaluating Information Retrieval systems. A test collection consists of a set of topics, a set of relevance assessments, and a set of retrievable documents. Since the beginning of IR evaluation by means of test collections, researchers have looked at test collections from different angles. For example, what is the optimal number of topics to obtain reliable evaluations? Voorhees and Buckley (2002) find that, to obtain a reliable ordering of the systems, at least 50 topics have to be used in the evaluation stage. More recently, Dean-Hall and Clarke (2015) use data from the CS track to give insights about the required number of assessors.

The impact of different sub-collections (as a set of test collections) has also been studied in the literature. Scholer et al. (2011) split TREC ad hoc collections into two sub-collections and compared the effectiveness ranking of retrieval systems on each of them, obtaining a low correlation between the two rankings, each based on one of the two sub-collections. Later, Sanderson et al. (2012) presented a more exhaustive analysis, studying the impact of different sub-collections on retrieval effectiveness over many test collections divided using different splitting approaches. Their study was based on runs submitted to two different TREC tracks, the ad hoc track from 2002 to 2008 and the Terabyte track from 2004 to 2008. The authors found that the effect of these sub-collections is substantial, even affecting the relative performance of retrieval systems. Santos et al. (2011) analyze the impact of first-tier documents from the ClueWeb09 collection on retrieval effectiveness. The analysis was carried out on the TREC 2009 Web track, where participating teams were encouraged to submit runs based on Category A and Category B, two subsets of the ClueWeb09 collection. Category A consists of 500 million English documents; Category B is a subset of Category A consisting of 50 million documents drawn from high-quality seed documents and Wikipedia (these represent the first-tier documents). By analyzing the number of documents per subset and the relevance assessments, the authors found a bias towards Category B documents, both in terms of assessed documents and of those judged as relevant. To investigate this bias, they analyzed the effect of first-tier documents on the effectiveness of runs based on Category A. First, they found a high correlation between effectiveness and the number of documents retrieved from the first-tier subset. Second, by removing all documents not from the first-tier subset, the effectiveness of almost all runs based on Category A improved.

In the context of the CS track these questions arise again, since participants share the same topics (profile, context) but have to return a ranked list of documents for each topic, where these candidate documents can be selected from either the Open Web or the ClueWeb12 collection. Considering the potential impact that different collections may have on retrieval effectiveness, one of our main interests in the rest of the paper is to study the gap in effectiveness between Open Web systems and ClueWeb12 systems, in order to achieve reproducible results on a representative sample of the web from the ClueWeb12 collection.

3 Experimental setup

3.1 Dataset

Our analyses are based on data collected from the TREC 2013 and 2014 Contextual Suggestion tracks (CS 2013, CS 2014). The CS track provides a set of profiles and a set of geographical contexts (cities in the United States), and the task is to provide a ranked list of suggestions (up to 50) for each (profile, context) topic pair. Each profile represents a single assessor's past preferences for a set of example suggestions. A profile consists of two ratings per suggestion, on a 5-point scale: one rating for the suggestion's description as shown in the result list (i.e., a snippet), and another rating for its actual content (i.e., a web page). There are some differences between 2013 and 2014. First, the 50 target contexts used each year are not the same. Second, the seed cities from which the example suggestions were collected differ: in 2013 examples were collected from Philadelphia, PA, whereas in 2014 examples were collected from Chicago, IL and Santa Fe, NM. Third, the number of assessors also changed between these editions of the track. More details about the CS track can be found in the track's overview papers for 2013 and 2014 (Dean-Hall et al. 2013, 2014), respectively.

The evaluation is performed as follows. For each topic, i.e., each (profile, context) pair, the top-5 documents of every submission are judged by the actual user whose profile is given (resulting in three ratings: description, actual document content, and geographical relevance) and by NIST assessors (an additional rating for geographical relevance). Judgments are graded: subjective judgments range from 0 (strongly uninterested) to 4 (strongly interested), whereas objective judgments go from 0 (not geographically appropriate) to 2 (geographically appropriate). In both cases, a value of \(-2\) indicates that the document could not be assessed (for example, the URL did not load in the judge's Web browser interface).

Documents are identified by their URLs (if they are submitted by runs based on Open Web) or by their ClueWeb12 ids (if they are submitted by runs based on ClueWeb12). In our study, we use ClueWeb12-qrels to refer to relevance assessments of ClueWeb12 documents, and OpenWeb-qrels to refer to relevance assessments of Open Web URLs, both sets of assessments built from the three relevance assessments files provided by the organizers: desc-doc-qrels, geo-user-qrels, and geo-nist-qrels.

The following metrics are used to evaluate the performance of the participating teams: Precision at 5 (P@5), Mean Reciprocal Rank (MRR), and a modified Time-Biased Gain (TBG) (Dean-Hall et al. 2013). These metrics consider geographical and profile relevance (the latter in terms of both document and description judgments), using relevance thresholds of 1 and 3 (inclusive), respectively.
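To make these criteria concrete, the following is a minimal sketch of the P@5 and MRR logic under the thresholds above (an illustrative simplification rather than the official track evaluation script; TBG is omitted, and the per-document judgment layout is assumed):

```python
from typing import Dict, List

# Assumed per-document judgment layout, e.g. {"desc": 4, "doc": 3, "geo": 2}
Judgment = Dict[str, int]

def is_relevant(j: Judgment) -> bool:
    # geographically appropriate (>= 1) and interesting according to both
    # the description and the document judgment (>= 3 each)
    return j.get("geo", 0) >= 1 and j.get("desc", 0) >= 3 and j.get("doc", 0) >= 3

def p_at_5(ranked_docs: List[str], qrels: Dict[str, Judgment]) -> float:
    top5 = ranked_docs[:5]
    return sum(1 for d in top5 if d in qrels and is_relevant(qrels[d])) / 5.0

def mrr(ranked_docs: List[str], qrels: Dict[str, Judgment]) -> float:
    for rank, d in enumerate(ranked_docs, start=1):
        if d in qrels and is_relevant(qrels[d]):
            return 1.0 / rank
    return 0.0

# Per-topic scores are then averaged over all (profile, context) topics.
```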

3.2 URL normalization

A recurring pre-processing step for the various results reported in this paper is the normalization of URLs. We normalize URLs consistently by removing their www, http://, and https:// prefixes, as well as their trailing slash character (/), if any. In the special case of a URL referencing an index.html web page, the index.html string is stripped from the URL before the other normalizations are applied.
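As an illustration, a minimal Python sketch of this normalization, reflecting our reading of the rules above rather than the exact code used:

```python
def normalize_url(url: str) -> str:
    url = url.strip()
    # index.html is stripped first, before the other normalizations
    if url.endswith("index.html"):
        url = url[: -len("index.html")]
    # remove the scheme and a leading "www."
    for prefix in ("http://", "https://"):
        if url.startswith(prefix):
            url = url[len(prefix):]
            break
    if url.startswith("www."):
        url = url[len("www."):]
    # remove the trailing slash, if any
    return url[:-1] if url.endswith("/") else url

assert normalize_url("http://www.example.com/venues/index.html") == "example.com/venues"
```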

3.3 Mapping OpenWeb-qrels to ClueWeb12

We identify documents that are included in OpenWeb-qrels and exist in the ClueWeb12 collection (these documents form subsets 1 and 2 in Fig. 1). We achieve this by obtaining the URLs from the OpenWeb-qrels and then searching for these URLs in the ClueWeb12 collection. To check the match between qrels URLs and ClueWeb12 document URLs, both were normalized as described in Sect. 3.2. We shared this subset with the CS track community.Footnote 1 In Table 1 we summarize the statistics derived from the Open Web and ClueWeb12 relevance assessments in 2013 and 2014. We observe that the qrels do contain duplicates, which were not necessarily assessed identically. The differences can be explained by the CS track evaluation setup, where the top-5 suggestions per topic provided by each submitted run were judged individually (Dean-Hall et al. 2013, 2014).
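A sketch of how this lookup could be implemented, assuming (ClueWeb12 id, URL) pairs are available for the collection (e.g., extracted from the WARC headers) and reusing the normalize_url helper sketched in Sect. 3.2:

```python
from typing import Dict, Iterable, Tuple

def build_url_index(clueweb_docs: Iterable[Tuple[str, str]]) -> Dict[str, str]:
    """Map normalized URL -> ClueWeb12 id."""
    return {normalize_url(url): cw_id for cw_id, url in clueweb_docs}

def map_qrels_urls(qrels_urls: Iterable[str],
                   url_index: Dict[str, str]) -> Dict[str, str]:
    """Keep only the qrels URLs that have a matching ClueWeb12 document."""
    mapping = {}
    for url in qrels_urls:
        cw_id = url_index.get(normalize_url(url))
        if cw_id is not None:
            mapping[url] = cw_id
    return mapping
```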

We have separated these documents into two subsets: subsets 1 and 2 from Fig. 1. Subset 1 represents documents that were judged as Open Web documents and that have a matching ClueWeb12 document, but do not exist in the ClueWeb12 relevance assessments; we refer to this subset as OpenWeb-qrels-urls-in-ClueWeb12. We consider these documents as additional judgments that can be used to expand the ClueWeb12 relevance assessments. The second subset consists of documents that overlap between the Open Web and ClueWeb12 relevance assessments, that is, they were judged twice; we refer to this subset as qrels-overlap.

Table 1 Summary of judged documents from the Open Web and the ClueWeb12 collection

3.4 Expanding ClueWeb12-qrels

We expand the ClueWeb12 relevance assessments by modifying the provided qrels files mentioned in Sect. 3.1: we replace the URLs in the qrels with their ClueWeb12 ids (when they exist), based on the subset identified in Sect. 3.3.
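A minimal sketch of this expansion, assuming a simplified whitespace-separated qrels layout (topic, document, judgment); the actual qrels files contain more fields, but the substitution logic is the same:

```python
def expand_qrels(openweb_qrels_lines, url_to_cwid):
    """Rewrite OpenWeb-qrels lines whose URL has a ClueWeb12 match,
    replacing the URL by its ClueWeb12 id."""
    expanded = []
    for line in openweb_qrels_lines:
        topic, doc, judgment = line.split()
        if doc in url_to_cwid:  # URL found in the ClueWeb12 collection
            expanded.append(f"{topic} {url_to_cwid[doc]} {judgment}")
    return expanded  # appended to the original ClueWeb12-qrels
```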

3.5 Mapping URLs from Open Web runs to the ClueWeb12 documents URLs

In this section, we describe how we map all URLs returned by Open Web systems (in the submitted runs) to their ClueWeb12 ids. We need this mapping to evaluate Open Web systems on the ClueWeb12 collection. To achieve this, we obtain the URLs from the Open Web runs and then search for these URLs in the ClueWeb12 collection by matching the normalized URLs against the normalized URLs of documents in the ClueWeb12 collection. The result of this process is a mapping between URLs in the Open Web runs and their corresponding ClueWeb12 ids (OpenWeb-runs-urls-in-ClueWeb12). Table 2 presents a summary of the Open Web URLs and the number of URLs found in the ClueWeb12 collection. As the table shows, for CS 2013 around 25.6 % of the URLs have a matching document in ClueWeb12, while for CS 2014 only 13.2 % exist in the ClueWeb12 collection.

Table 2 URLs obtained from Open Web runs

4 Comparing Open Web and Closed Web relevance assessments

In this section we present an analysis comparing Open Web and ClueWeb12 relevance assessments. In Bellogín et al. (2014), we already showed that Open Web runs tend to receive better judgments than ClueWeb12 results, based on an analysis of the CS 2013 results. We repeat the same experiment here in order to investigate whether such a tendency is still present in the 2014 test collection. We first compare the Open Web and ClueWeb12 in general (the distribution of relevance assessments of documents returned by Open Web systems versus those returned by ClueWeb12 systems). Next, we focus on the documents in the overlap of the relevance assessments between Open Web systems and ClueWeb12 systems.

4.1 Fair comparison of test collections

In this section, we study RQ1: Do relevance assessments of Open Web URLs differ (significantly) from relevance assessments of ClueWeb12 documents? We analyze the distribution of profile judgments of documents returned by Open Web and ClueWeb12 runs. In our analysis, we leave out the user, context, and system variables, and compare the judgments given to documents from the Open Web against those from ClueWeb12. In Fig. 2, we observe that the Open Web histogram is slightly skewed towards the positive (relevant) judgments. Note that we are not interested in comparing the absolute frequencies; this would not be fair, mainly because there were many more Open Web submissions than ClueWeb12 ones. Specifically, in TREC CS 2013, 27 runs submitted URLs from the Open Web, and only 7 runs used ClueWeb12 documents. However, it is still relevant to compare the relative frequency of \(-2\)'s or \(-1\)'s (document could not be loaded at assessment time; used in CS 2013 and CS 2014, respectively) and of 4's (strongly interested) in each dataset: this is an important difference which will impact the performance of the systems using ClueWeb12 documents.

Fig. 2

Judgments (document relevance) histogram of documents from Open Web (left) and from ClueWeb12 (right) CS 2013

Fig. 3

Judgments (document relevance) histogram of documents from Open Web runs (left) and ClueWeb12 runs (right) CS 2014

Figure 3 shows the same analysis based on the 2014 test collection. In that year of the track, 25 runs submitted URLs from the Open Web, and only 6 runs used ClueWeb12 documents. We find that the judgments of documents from the Open Web are skewed towards the positive (relevant) side, while judgments of documents from ClueWeb12 are—again—skewed towards the negative (not relevant) part of the rating scale, similar to the findings on the 2013 test collection.

4.2 Difference in evaluation of identical documents from Open Web and ClueWeb12

In Sect. 3.3, we identified two subsets of overlap between Open Web and ClueWeb12 results: first, OpenWeb-qrels-urls-in-ClueWeb12, which maps URLs from OpenWeb-qrels to the ClueWeb12 collection, and second, qrels-overlap, which contains documents that exist in both OpenWeb-qrels and ClueWeb12-qrels. Based on these datasets, we investigate RQ2: Can we identify an overlap between Open Web systems and ClueWeb12 systems in terms of documents suggested by both? If so, how are the documents in the overlap judged?

Fig. 4

Judgments histogram of documents from Open Web qrels which exist in ClueWeb12 collection for CS 2013 (left) and CS 2014 (right)

Figure 4 shows the distribution of relevance assessments of documents in OpenWeb-qrels-urls-in-ClueWeb12 for both CS 2013 and CS 2014. We observe that the distribution of judgments of these documents has a similar shape to that of all judged Open Web documents. More precisely, the distribution is skewed towards the positive ratings (3 and 4) in both the 2013 and 2014 datasets.

Fig. 5

Judgments histogram of documents that exist in both Open Web qrels and ClueWeb12 qrels. Figure on the (left) shows how these documents were judged as Open Web URLs, while the figure on the (right) shows how the same documents were judged as ClueWeb12 documents CS 2013

Fig. 6

Judgments histogram of documents that exist in both Open Web qrels and ClueWeb12 qrels. Figure on the (left) shows how these documents were judged as Open Web URLs, while the figure on the (right) shows how the same documents were judged as ClueWeb12 documents CS 2014

Now we focus on the qrels-overlap subset, which contains documents shared by both OpenWeb-qrels and ClueWeb12-qrels. Our aim here is to detect any bias towards either of the document collections (the Open Web vs. ClueWeb12) based on the available sample of judgments. In principle, the relevance judgments should be the same for the two sources, since in each situation the same document was retrieved by different systems for exactly the same user and context, the only difference being how the document was identified (as a URL or as a ClueWeb12 id). Figures 5 and 6 show how documents in the qrels-overlap were judged as Open Web URLs and as ClueWeb12 documents in the CS 2013 and CS 2014 test collections, respectively. We find that the documents in the overlap were judged differently. The judgment distributions of the documents shared by both OpenWeb-qrels and ClueWeb12-qrels suggest that there is a bias towards OpenWeb-qrels, and this bias is consistent across the 2013 and 2014 data. For CS 2013, part of the differences in judgments was attributed to a different rendering of the document for each source.Footnote 2 Assessors are influenced by several conditions: the visual aspect of the interface, but also the response time, the order of examination, the familiarity with the interface, etc. (Kelly 2009). Therefore, it is important that these details are kept as stable as possible when different datasets are evaluated at the same time. It is also interesting to note that the number of ClueWeb12 documents that could not be loaded is higher in CS 2013 (\(-2\)) compared to CS 2014 (\(-1\)), probably due to the efforts of the organizers in the latter edition to run a fairer evaluation (Dean-Hall and Clarke 2015).

5 Reproducibility of Open Web systems

In this section, we investigate RQ3: How many of the documents returned by Open Web systems can be found in the ClueWeb12 collection as a whole? The goal of this analysis is to show how many of the results obtained by Open Web systems can be reproduced based on the ClueWeb12 collection. In Sect. 3.5, we presented the number of URLs returned by Open Web systems that have a matching document in the ClueWeb12 collection. Specifically, in Table 2 we showed that for CS 2013 26,248 out of 102,649 URLs have a matching ClueWeb12 document (25.6 %), while for CS 2014 only 10,014 out of 75,719 URLs (13.2 %) do. In this section, we evaluate Open Web systems on ClueWeb12 data. Analyzing the impact of ClueWeb12 documents on the effectiveness of Open Web systems requires the following. First, we need to modify the Open Web runs using the OpenWeb-runs-urls-in-ClueWeb12 dataset, which maps Open Web URLs to ClueWeb12 ids. Second, for evaluation completeness, we use the expanded ClueWeb12-qrels, which was generated from the OpenWeb-qrels URLs found in the ClueWeb12 collection (the OpenWeb-qrels-urls-in-ClueWeb12 subset described in Sect. 3.4).

While modifying the Open Web runs, if a suggested URL has a match in ClueWeb12 we replace the URL with its corresponding ClueWeb12 id; if it has no match, we skip the line containing that URL. The ranking therefore changes after skipping those URLs. We present the effectiveness of the original Open Web runs and the effectiveness of the modified runs (replacing URLs with ClueWeb12 ids), and we show the percentage of relative difference in effectiveness of Open Web systems (on Open Web data vs. ClueWeb12). Nonetheless, replacing the URLs with their matching ClueWeb12 ids and pushing up their ranks by removing the URLs without a ClueWeb12 match overestimates the performance, and does not show the impact the ClueWeb12 documents would have had if the original ranking were preserved. To give an insight into the importance of the ClueWeb12 documents relative to the Open Web URLs without a ClueWeb12 match, we also include the percentage of ClueWeb12 documents occurring in the top-5. To compute this, when modifying an Open Web run we replace the URLs with their matching ClueWeb12 ids but keep the URLs without a match in place. Then, for each topic, we compute the percentage of ClueWeb12 documents in the top-5. The score for each run is the mean across all topics.
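Both transformations can be sketched as follows, assuming a run is represented as a ranked list of document identifiers per topic and url_to_cwid is the mapping from Sect. 3.5 (an illustrative simplification of the actual processing):

```python
def to_clueweb_run(ranked_urls, url_to_cwid):
    """Replace matched URLs by their ClueWeb12 ids and drop unmatched URLs,
    so the remaining documents move up in the ranking."""
    return [url_to_cwid[u] for u in ranked_urls if u in url_to_cwid]

def pct_clueweb_in_top5(ranked_urls, url_to_cwid):
    """Keep unmatched URLs in place and measure how many of the top-5
    positions are occupied by documents that exist in ClueWeb12."""
    top5 = ranked_urls[:5]
    if not top5:
        return 0.0
    return 100.0 * sum(1 for u in top5 if u in url_to_cwid) / len(top5)

# The per-run score is the mean of pct_clueweb_in_top5 over all topics.
```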

For CS 2013 systems (see Table 3) and for CS 2014 systems (see Table 4), we report the effectiveness of Open Web systems using their original run files as submitted to the track based on the original qrels (column named original). We report their effectiveness using the modified run files based on the expanded qrels as described above. Finally, we report the percentage of ClueWeb12 documents in the top-5 as described above (how many ClueWeb12 documents remain in the top-5 while preserving the URLs with no match).

In both tables, we observe the following. First, for some Open Web systems we were not able to reproduce their results based on ClueWeb12 data, mainly because some systems have no matches at all in the ClueWeb12 collection. For systems that rely on the Yelp API to obtain candidate documents, we could not find any document whose host is Yelp in the ClueWeb12 collection; this is due to Yelp's very strict indexing rules.Footnote 3 Second, we observe that the performance of Open Web systems decreases. However, this reduction in performance varies between systems, suggesting that pushing ClueWeb12 documents up in the submitted rankings by removing URLs without a ClueWeb12 id match has a different effect on each Open Web system. Third, some of the top-performing Open Web systems perform very well when constrained to the ClueWeb12 collection. For example, in the CS 2014 edition, the UDInfoCS2014_2, BJUTa, and BJUTb systems even perform better than the ClueWeb12 systems (underlined systems in the table). Fourth, in terms of how well ClueWeb12 documents are represented in the top-5, the percentage of ClueWeb12 documents in the top-5 ranges from 1 to 46 % (mean 19 % across all Open Web systems, median 22 %) for CS 2014 systems. For CS 2013, it ranges from 1 to 51 % (mean 22 %, median 25 %).

Table 3 Performance of Open Web systems on Open Web data versus their performance on ClueWeb12 data (CS 2013)
Table 4 Performance of Open Web systems on Open Web data versus their performance on ClueWeb12 data (CS 2014)

6 Selection method for identifying a representative sample of the Open Web from ClueWeb12

In this section we study RQ4: Can we identify a representative sample from the ClueWeb12 collection for the CS track by applying the tourist domain knowledge obtained from the Open Web? We use the tourist domain knowledge available on the Open Web to annotate documents in ClueWeb12 collection. The aim is not only to obtain reproducible results based on ClueWeb12 collection, but also to obtain a representative sample of the Open Web.

6.1 Selection methods of candidate documents from ClueWeb12

We formulate the problem of candidate selection from ClueWeb12 as follows. We have a set of contexts (locations) C—which correspond to US cities—provided by the organizers of the CS track. For each context \(c\in \texttt {C}\), we generate a set of suggestions \(S_c\) from the ClueWeb12 collection, which are expected to be located in that context.

We define four filters for selecting documents from ClueWeb12 collection, each of them will generate a sub-collection. The first filter is a straightforward filter based on the content of the document. The remaining three filters use knowledge derived from the Open Web about sites existing in ClueWeb12 that provide touristic information. We will show empirically that the additional information acquired from tourist APIs provides the evidence needed to generate high quality contextual suggestions. While our results still depend upon information that is external to the collection, we only need to annotate ClueWeb12 with the tourist domain knowledge identified to achieve reproducible research results. We describe the filters in more detail in the following sections.

6.1.1 Geographically filtered sub-collection

Our main hypothesis in this approach is that a good suggestion (a venue) will have its location correctly mentioned in its textual content. Therefore, we implemented a content-based geographical filter, geo_filter, that selects documents mentioning a specific context with the format (City, ST), ignoring those mentioning the city with a different state or those matching multiple contexts. With this selection method we aim to ensure that the specific target context is mentioned in the filtered documents (hence, they are geographically relevant documents). We will still miss relevant documents, for example due to misspellings or because more than one city is mentioned on the same web page.

In the simplest instantiation of our model, the probability of any document in ClueWeb12 being included in the GeographicFiltered sub-collection is set to 0 or 1, depending on whether it passes the geo_filter:

$$P(s)= \begin{cases} 1, & \text{if } s \text{ passes geo\_filter}\\ 0, & \text{otherwise} \end{cases}$$
(1)

Approximately 9 million documents (8,883,068) from the ClueWeb12 collection pass the geo_filter. The resulting set of candidates forms the first sub-collection, referred to as GeographicFiltered.
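A sketch of how such a geo_filter could look, under the assumption that a plain "City, ST" string match plus the two exclusion rules above is sufficient (the actual implementation may differ in detail); contexts are assumed to be strings like "Erie, PA":

```python
import re

def passes_geo_filter(doc_text: str, target: str, all_contexts: list) -> bool:
    """Eq. (1): accept a document only if it mentions the target context,
    does not mention the same city with another state, and does not match
    any other target context."""
    if target not in doc_text:
        return False
    city, state = [p.strip() for p in target.split(",")]
    # reject documents mentioning the city with a different state ...
    states_found = re.findall(rf"{re.escape(city)},\s*([A-Z]{{2}})", doc_text)
    if any(st != state for st in states_found):
        return False
    # ... or documents matching more than one target context
    return not any(c != target and c in doc_text for c in all_contexts)
```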

6.1.2 Domain-oriented filter

The first type of domain knowledge depends on a list of hosts that are well-known to provide tourist information, and are publicly available (and have been crawled during the construction of ClueWeb12). We manually selected the set of hosts \({\mathcal {H}} :=\) {yelp, xpedia, tripadvisor, wikitravel, zagat, orbitz, and travel.yahoo}, some of these host APIs were used by the Open Web systems. We consider these hosts as a domain filter to select suggestions from ClueWeb12 collection. Again, the probability of a document in ClueWeb12 to be a candidate suggestion is either 0 or 1 depending only on its host. We define the probability P(s) as:

$$P(s)= \begin{cases} 1, & \text{if host}(s) \in \mathcal{H}\\ 0, & \text{otherwise} \end{cases}$$
(2)

We refer to the set of documents that pass the domain filter defined in Eq. (2) as TouristSites.

We assume that pages about tourist information also link to other interesting related pages, acknowledging the fact that pages on the same topic are connected to each other (Davison 2000). In order to maximize the number of documents extracted from the tourist domain, we also consider the outlinks of documents from tourist sites. For each suggestion \(s \in\) TouristSites, we extract its outlinks outlinks(s) and combine all of them in a set \(\mathcal {O}\), including links between documents from two different hosts (external links) as well as links between pages from the same host (internal links). Notice that some of the outlinks may also be part of the TouristSites set, in particular whenever they satisfy Eq. (2). Next, we extract any document from ClueWeb12 whose normalized URL matches one of the outlinks in \(\mathcal {O}\). The probability of document s being selected in this case is defined as:

$$P(s)= \begin{cases} 1, & \text{if URL}(s) \in \mathcal{O}\\ 0, & \text{otherwise} \end{cases}$$
(3)

The set of candidate suggestions that pass this filter is called TouristSitesOutlinks.
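A sketch of the domain filter and the outlink expansion, reusing the normalize_url helper from Sect. 3.2; extract_outlinks is a hypothetical link extractor standing in for the HTML/WARC parsing step:

```python
TOURIST_HOSTS = {"yelp", "xpedia", "tripadvisor", "wikitravel",
                 "zagat", "orbitz", "travel.yahoo"}

def passes_host_filter(host: str) -> bool:
    # Eq. (2): the document host must belong to the curated set H
    return any(h in host for h in TOURIST_HOSTS)

def collect_outlink_targets(tourist_site_docs, extract_outlinks):
    """The set O: normalized outlink URLs of all TouristSites documents,
    internal and external links alike."""
    return {normalize_url(link)
            for doc in tourist_site_docs
            for link in extract_outlinks(doc)}

def passes_outlink_filter(doc_url: str, outlink_targets: set) -> bool:
    # Eq. (3): the document's normalized URL must appear in O
    return normalize_url(doc_url) in outlink_targets
```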

6.1.3 Attraction-oriented filter

The previously described selection method relies on a manually selected list of sites to generate the set of candidate suggestions. We now consider a different type of domain knowledge, leveraging the information available through the Foursquare API.Footnote 4 For each context c, we obtain a set of URLs by querying the Foursquare API. If a venue's URL is not returned by Foursquare (we are not interested in the page describing that venue inside Foursquare, but in its corresponding web page), we use the combination of venue name and context to issue a query to the Google search API, e.g., "Gannon University Erie, PA" for the name Gannon University and the context Erie, PA. Extracting the hosts of the URLs obtained results in a set of 1,454 unique hosts. We then select all web pages in ClueWeb12 from these hosts as candidate suggestions, with the probability defined in the same way as in Eq. 2.

The set of documents that pass this host filter is referred to as Attractions.
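The host-list construction could be sketched as follows; query_foursquare_venues and google_search_first_url are hypothetical wrappers around the respective APIs (their endpoints and parameters are not reproduced here), and the venue record layout is assumed for illustration:

```python
from urllib.parse import urlparse

def attraction_hosts(contexts):
    """Collect the hosts of venue web pages for all contexts (e.g. 'Erie, PA')."""
    hosts = set()
    for context in contexts:
        for venue in query_foursquare_venues(context):   # hypothetical API wrapper
            url = venue.get("url")
            if not url:
                # fall back to a web search for "<venue name> <context>"
                url = google_search_first_url(f"{venue['name']} {context}")
            if url:
                hosts.add(urlparse(url).netloc.replace("www.", ""))
    return hosts  # 1,454 unique hosts in our setting
```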

Together, the three subsets of candidate suggestions TouristSites, TouristSitesOutlinks and Attractions form our second ClueWeb12 sub-collection that we refer to as TouristFiltered.

$$\begin{aligned} {\mathbf{TouristFiltered}}:= {\mathtt{TouristSites}} \cup {\mathtt{TouristSitesOutlinks}} \cup {\mathtt{Attractions}} \end{aligned}$$

Table 5 shows the number of documents found by each filter.

Table 5 Number of documents passing each filter. Documents that pass the first three filters form the TouristFiltered sub-collection, whereas the GeographicFiltered sub-collection is composed of the documents passing the geo_filter

6.2 Impact of domain knowledge filters

Our contribution to the CS track consisted of two runs: one based on the GeographicFiltered sub-collection, and another based on the TouristFiltered sub-collection. We have found that the run based on the TouristFiltered sub-collection is significantly better than the one based on the GeographicFiltered sub-collection on every evaluation metric (see Table 6). However, a more discriminative analysis is needed to properly estimate the impact of the tourist domain knowledge filters used to generate the TouristFiltered sub-collection; for this, we evaluate the performance of the different sub-collections generated by each of the domain knowledge filters.

Recall that assessments consider geographical and profile relevance independently from each other; the latter is further assessed based on the document or on the description provided by the method. Considering this information, we recomputed the evaluation metrics for each topic, taking into account the geographical relevance provided by the assessors as well as the description and document judgments, both separately and combined (that is, a document that is relevant both based on the description and when the assessor visited its URL). Table 7 presents the contribution to the relevance dimensions of each of the TouristFiltered sub-collection subsets, where each subset was selected by a different domain knowledge filter. The run based on the TouristFiltered sub-collection contains documents from the three subsets. We modified this run by first computing effectiveness based only on suggestions from the TouristSites subset (second column), then adding suggestions from TouristSitesOutlinks, and finally suggestions from Attractions. The main conclusion from this table is that the larger improvement in performance happens after adding the candidates from the Attractions subset. It is interesting to note that the performance of this part alone (last column) is comparable to that of the whole sub-collection.

Table 6 Performance of the run based on GeographicFiltered sub-collection and the run based on TouristFiltered sub-collection
Table 7 Effect of each part of the TouristFiltered sub-collection on performance

6.3 Discussion

We have shown that systems based on the Open Web can still be competitive when their candidate documents are constrained to the ClueWeb12 collection, which implies that there exist documents in ClueWeb12 that are relevant for the contextual suggestion task we address in this paper. However, the candidate selection process is challenging, and the use of external, manually curated tourist services makes this task easier by promoting relevant documents, at the cost of reducing the reproducibility of the whole process.

In this section we aim to understand the candidate selection process and to provide recommendations in order to improve it. With this goal in mind, we study the GeographicFiltered and Attractions sub-collections by comparing the actual documents that pass the corresponding filters, so that we can analyze these sub-collections from the user perspective (what will the user receive?) instead of from the system perspective (what is the performance of the system?), as we have presented previously in the paper.

Fig. 7

Distribution of the document length in words for the GeographicFiltered (left) and Attractions (right) sub-collections. Note the different range in the X axis

A first aspect we consider is the document length (in terms of words in the processed HTML code), which gives an insight into how much information is contained (and shown to the user) in each sub-collection. We observe from Fig. 7 that documents from the GeographicFiltered sub-collection are much larger than those from Attractions: their average length is twice that of the documents from the other filter. This may suggest that relevant documents in the tourist domain should be short or, at least, should not present too much information to the user. If this were true, it would be more interesting to retrieve, in the contextual suggestion scenario, home pages such as the main page of a museum or a restaurant, instead of their corresponding Contact or How to access sub-pages. Because of this, in the future we aim to take information about URL depth into account when selecting candidates, since it has been observed by Kraaij et al. (2002) that the probability of a page being a home page is inversely related to its URL depth.
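For illustration, the length comparison can be reproduced along these lines, assuming geographic_filtered_texts and attractions_texts (hypothetical variable names) hold the plain-text bodies of the sampled documents:

```python
import matplotlib.pyplot as plt

def doc_lengths(texts):
    """Word counts of plain-text document bodies (HTML already stripped)."""
    return [len(t.split()) for t in texts]

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].hist(doc_lengths(geographic_filtered_texts), bins=50)
axes[0].set_title("GeographicFiltered")
axes[1].hist(doc_lengths(attractions_texts), bins=50)
axes[1].set_title("Attractions")
for ax in axes:
    ax.set_xlabel("document length (words)")
plt.tight_layout()
plt.show()
```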

Fig. 8

Screenshots of a document retrieved by the GeographicFiltered sub-collection (left) and by the Attractions sub-collection (right). The document on the left (clueweb12-0202wb-00-19744) received an average rating of 1.9, whereas the one on the right (clueweb12-0200tw-67-19011) received a 3

Related to the aforementioned aspect, we now manually check the content of some pages from each sub-collection. For this analysis we aggregate the judgments received by the documents submitted from each sub-collection, and then focus on documents with very bad or very good ratings in either of them. Specifically, we have found two candidate documents (presented in Fig. 8) that clearly illustrate the main difference between these two sub-collections and further corroborate the previous assumption: the GeographicFiltered sub-collection requires pages where the target city and state are present, which in turn favors pages containing listings of places located in that city, resulting in documents that are not very informative for an average tourist. The Attractions sub-collection, on the other hand, tends to retrieve the home pages of significant tourist places.

Finally, we have run an automatic classifier on the documents of each sub-collection to gain some insight into whether the content of the pages is actually different. We used J48 decision trees (Quinlan 1993) as implemented in the Weka library,Footnote 5 and tried different combinations of parameters (stemming, stopwords, confidence value for pruning, number of words to consider, etc.). For the sake of presentation, we used a very restrictive setting, so that a limited number of leaves is generated. In Fig. 9 we show the branch of the decision tree where the term states appears at least once in the documents; hence, states is the most discriminative term in this situation, followed in decreasing order by the terms internist and america. The classifier was trained on a vector representation using the TF-IDF values of the terms in each document, considering only the top-20 words with the highest frequency and discarding stopwords and numbers. Additionally, the classifier was parameterized with a confidence threshold of 0.5 and a minimum number of instances per leaf of 500. We conclude from the figure that the Attractions sub-collection uses a different vocabulary than the GeographicFiltered sub-collection, where terms such as md, st, or america appear with much lower frequency. In the future we want to exploit this information to improve the candidate selection process and the corresponding filters.
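An approximate re-creation of this setup using scikit-learn instead of Weka's J48 (the resulting tree will therefore differ, and J48's confidence-based pruning has no direct equivalent); texts and labels are assumed inputs holding the document bodies and their sub-collection labels ("geo" or "att"):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

# TF-IDF over the top-20 most frequent terms, discarding stopwords and numbers
vectorizer = TfidfVectorizer(max_features=20, stop_words="english",
                             token_pattern=r"(?u)\b[a-zA-Z]{2,}\b")
X = vectorizer.fit_transform(texts)

# a shallow tree with a minimum of 500 instances per leaf, as in the paper
clf = DecisionTreeClassifier(min_samples_leaf=500)
clf.fit(X, labels)
print(export_text(clf, feature_names=list(vectorizer.get_feature_names_out())))
```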

Fig. 9

Visualization of a branch of the J48 decision tree trained using documents from the Attractions (att) and GeographicFiltered (geo) sub-collections. This branch corresponds to the case where the term states appears at least once. In every leaf, the label of the classified instances appears together with the total number of instances reaching that leaf (first number) and the number of misclassified instances (hence, the lower this number the better)

7 Conclusions

In this paper we have analyzed and discussed the balance between reproducibility and representativeness when building test collections. We have focused our analysis on the TREC Contextual Suggestion track, where in 2013 and 2014 it was possible to submit runs based on the Open Web or on ClueWeb12, a static crawl of the web. In both editions of the track, there were more runs based on the Open Web than on the ClueWeb12 collection, which seems to go against any reproducibility criteria we may expect from such a competition. The main reason for this behavior, as we have shown in this paper, is that systems based on the Open Web perform better than systems based on the ClueWeb12 collection in terms of returning more relevant documents.

We have studied this difference in effectiveness from various perspectives. First, the analysis of relevance assessments of two years of the Contextual Suggestion track shows that documents returned by Open Web systems receive better ratings than documents returned by ClueWeb12 systems. More specifically, we have found differences in judgments when looking at identical documents that were returned by both Open Web and ClueWeb12 systems. Second, based on an expanded version of the relevance assessments (considering documents in the overlap of Open Web and ClueWeb12 systems) and on generating ClueWeb12-based runs from Open Web runs, we have investigated the representativeness of the ClueWeb12 collection. Although the performance of Open Web systems decreases, we find a representative sample of the ClueWeb12 collection in the Open Web runs. Third, we proposed an approach for selecting candidate documents from the ClueWeb12 collection using information available on the Open Web. Our results are promising, and provide evidence that there is still room for improvement by using different and more information available on the Open Web.

For future work, we plan to collect candidate documents from crawls of the web other than the ClueWeb12 collection, such as the Common Crawl.Footnote 6 Both crawls will complement each other and help to find more representative samples of the web; in this way we could evaluate them by participating in future editions of the Contextual Suggestion track. Another aspect we would like to explore in the future is improving the candidate selection filters. We have learnt some features that seem to be frequent in the sub-collection generated from the Open Web. We would like to incorporate that information into our geographical filters, so that better candidate documents are found and, in principle, lead to better contextual recommendations.