Landmark-Based Route Recommendation with Crowd Intelligence
Abstract
Route recommendation is one of the most widely used location-based services nowadays, as it is vital for a nice driving experience and smooth public traffic. Given a pair of user-specified origin and destination, a route recommendation service aims to provide users with the routes of the best travelling experience according to given criteria. However, even the routes recommended by the big-thumb service providers can deviate significantly from the ones travelled by experienced drivers, which motivates the previous research that leverages crowds’ knowledge to improve the recommendation quality. Since route recommendation is normally an online task, low-latency response to drivers’ queries is required in such systems. Unfortunately, the latency of crowdsourced systems is usually high, because they need to generate tasks and wait for workers’ feedback before answering queries. To address this issue, we extend our previous system—CrowdPlanner—by proposing strategies to reuse existing answers (truths) to deal with newly coming queries more efficiently. A prototype system has been deployed to many voluntary mobile clients, and extensive tests on real-scenario queries have shown the superiority of our system in comparison with the results given by map services and popular route-mining algorithms.
Keywords
Landmark Route Crowdsourcing Recommendation
1 Introduction
To consider the diversity of the preference factors simultaneously, some previous studies propose to use popular routes mined from historical trajectories as recommended routes. This approach, however, has significant drawbacks. First, it is not always possible to have a sufficient amount of historical trajectories to derive reliable route recommendations. Second, there exist a number of popular route-mining algorithms whose definitions of popularity differ slightly from each other, so they can suggest different routes for the same users. As a result, it is still difficult for users to select one particular route as the best choice. For example, Fig. 1a shows different popular routes mined by different algorithms. In this experiment, we first randomly select 5000 source–destination pairs as the testing queries. For each of them, we test the similarity between the route recommended by a big-thumb Web map service (WS) and the routes obtained from three popular route-mining algorithms, namely most popular route (MPR) [4], local driver route (LDR) [3] and most frequent path (MFP) [15], all of which perform reasonably well according to their reported results. The results of average similarity are shown in Fig. 1. One can see that the similarities are at best around 60 %, which means that different sources recommend quite different routes. Figure 1b demonstrates the recommended routes from different sources on the map, where the two routes recommended by WS are different from those by MPR and MFP, respectively.
Going beyond the limitation of route recommendation based on popular routes, our preliminary work [26] adopts the emerging concept of crowdsourcing, which explicitly leverages human knowledge to resolve complex problems. Specifically, we propose a crowd-based route recommendation system which can effectively blend domain experts’ knowledge into route recommendation. The task generation component utilizes a set of significant and discriminative landmarks to generate a binary question set by analyzing the given candidate route set. Then, those questions are presented to the workers in an optimized order based on the informativeness of each question (whether it is more likely to lead to the final answer) and the responses of the worker. In the worker selection component, we identify a few key attributes of workers that most affect their performance on a given task and propose an efficient search algorithm to find the most eligible workers.
In general, the routes provided by Web services are different and cannot be easily judged. To evaluate the quality of these candidate routes, we may simply publish an evaluation task and ask the workers to make the decision. As route recommendation is a real-time task, the response time of waiting for workers may not be tolerable for users who want to plan their trips immediately. However, our preliminary work [26] neglects the power of reusing the answers of previous tasks, which is beneficial for reducing the response time. Therefore, we propose a truth reusing component, which utilizes the existing truths from previous tasks and returns the recommended route to users.

We design and develop a novel crowd-based route recommendation system, which is able to generate concise yet informative tasks intelligently and assign them to the selected workers who can accomplish the tasks with high accuracy and low latency.

We develop a truth reusing component to reuse the verified truths and known best routes near the query locations to evaluate the routes and quickly return results to users, which notably reduces the response time.

We deploy the system and conduct extensive experiments with a large number of workers, users, and queries in real scenarios. The results demonstrate that the system can recommend the most satisfactory routes efficiently in most cases.
2 Problem Statement
Summary of notations
Notation  Definition 

R  A recommended route 
\(\mathbb {R} \)  Candidate set of recommended routes 
p  A place in the space 
l  A landmark in the space 
l.s  Significance of landmark l 
\(\mathbb {L}\)  A landmark set 
\(\mathbb {L}_{\mathbb {R}} \)  The questioned landmark set of route set \(\mathbb {R}\) 
\(d(l_i,l_j)\)  Euclidean distance between landmarks \(l_i\) and \(l_j\) 
w  A worker of the system 
\(\mathbb {W}\)  A worker set 
\(\mathbb {W}_{\mathbb {R}}\)  The selected workers of routes set \(\mathbb {R}\) 
2.1 Preliminary Concepts
Definition 1
(Route) A route R is a continuous travelling path. We use a sequence \([p_1, p_2, \ldots , p_n]\), which consists of a source, a destination, and a sequence of consecutive road intersections in between, to represent a route.
Definition 2
(Landmark) A landmark is a geographical object in the space, which is stable and independent of the recommended routes. A landmark can be either a point (e.g., a point of interest), a line (e.g., a street or highway) or a region (e.g., a block or suburb) in the space.
Definition 3
(Landmark-based Route) A landmark-based route \(\bar{R}\) is a route represented as a finite sequence of landmarks, i.e., \(\bar{R} = [l_1,l_2,\ldots ,l_n]\).
In this paper, we will also use \(\bar{R}\) as the set \(\{l_1,l_2,\ldots ,l_n\}\) without ambiguity. To obtain the landmark-based route from a raw route, we employ the results on anchor-based trajectory calibration [27] to rewrite the continuous recommended routes into landmark-based routes, by treating landmarks as anchor points.
Definition 4
(Discriminative landmarks) A landmark set \(\mathbb {L}\) is called discriminative to a set of landmark-based routes \(\bar{\mathbb {R}}\) if for any two routes \(\bar{R}_1\) and \(\bar{R}_2\) of \(\bar{\mathbb {R}}\), the intersections \(\bar{R}_1 \cap \mathbb {L}\) and \(\bar{R}_2 \cap \mathbb {L}\) are different.
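The discriminativeness test can be phrased directly in terms of set intersections. A minimal sketch in Python (routes are modelled as plain sets of landmark ids; all names are illustrative):

```python
def is_discriminative(landmark_set, routes):
    """Return True if every pair of routes intersects landmark_set differently.

    landmark_set: set of landmark ids; routes: iterable of sets of landmark ids.
    """
    # Two routes are distinguished iff their intersections with the set differ,
    # so the set is discriminative iff all intersection "signatures" are distinct.
    signatures = [frozenset(r & landmark_set) for r in routes]
    return len(set(signatures)) == len(signatures)
```

For example, \(\{l_2\}\) discriminates \(\bar{R}_1=\{l_1,l_2\}\) from \(\bar{R}_2=\{l_1,l_3\}\), while \(\{l_1\}\), shared by both routes, does not.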
2.2 System Overview
We propose a two-layer system (mobile client layer and server layer) which receives the user’s request from a mobile client specifying the source and destination, processes the request on the server, and, finally, returns the verified best routes to the user. Figure 3 shows the overview of the proposed system, which comprises two modules: traditional route recommendation (TR) and crowd-based route recommendation (CR). The TR module first processes the user’s request by evaluating the quality of candidate routes obtained from external sources, such as map services and historical trajectory mining; the CR module generates a crowdsourcing task when the TR module cannot judge the quality of the candidate routes, and returns the best route based on the feedback of the human workers of the system.
2.2.1 Traditional Route Recommendation Module
This module processes the user’s request by generating a set of candidate routes from external sources and by evaluating the quality of those routes automatically without involving human effort. First, the control logic component receives the user’s request and controls the workflow of the entire system. Once a user’s request is received, the truth reusing component is invoked to match the request to the verified routes (truths) between the two places at the departure time. If the newly coming request hits an existing truth, the system will return the result immediately. Otherwise, the component will invoke the route evaluation component to automatically generate some candidate routes and evaluate their qualities using the verified truths. The route evaluation component evaluates the routes using computing power, and it provides an efficient way to reduce the cost, since it can largely reduce the number of tasks generated. The component first builds up a candidate route set by invoking the route generation component. If some of these routes agree with each other to a high degree, one of them will be selected as the best recommended route and added into a truth database with the corresponding time tag. If a best recommended route cannot be determined, the system will assign each candidate route a confidence score, which is derived from the verified truths and indicates the likelihood of the route being the best recommendation. A route with the highest confidence score that is greater than a threshold \(\eta \) will be regarded as the best recommendation and returned to the user; otherwise, the control logic will hand over the request to the CR module.
2.2.2 Crowd Route Recommendation Module
The crowd route recommendation module will take over the route recommendation request when the traditional route recommendation module cannot provide the best route with high enough confidence. The module will generate a crowdsourcing task consisting of a series of simple but informative binary questions, and assign the task to a set of selected workers who are most suitable to answer these questions. The task generation component generates a task by proposing a series of questions for workers to answer. It is beneficial to have these questions as simple and compact as possible, since both the accuracy and the economic effectiveness of the system can be improved. The design of this component addresses two important issues: what to ask in the questions and how to ask them. The worker selection component is another core component. To maximize the effectiveness of the system, we need to select a set of eligible workers who are most suitable to answer the questions in a given task, by estimating each worker’s familiarity with the area of the request.
3 Task Generation
Almost everyone has the experience of being unable to explicitly describe a route even when they know the directions clearly, which implies that this kind of job is hard for humans by nature. Therefore, we cannot simply publish a task to workers and expect them to describe the best route in a turn-by-turn manner. In an alternative and more friendly way, we may provide several pictures, which demonstrate several candidate routes on a map, as a multiple-choice question for workers to choose from. Taking the route recommendation request in Fig. 2 as an example, we publish a multiple-choice question to workers by showing four routes on a map and asking them to pick the route they most prefer. Even when all the routes have been visualized on a map, it is still effort-demanding to tell the subtle differences between candidate routes, especially when doing so on a small-screen device, say a smartphone. To make the question easier to answer, we take into consideration that it is human nature to utilize significant locations, i.e., landmarks, to help describe a route at a high level, whereas a computer sees a route as a sequence of continuous roads indifferently. Thus, we choose to proactively present the differences between candidate routes to the workers using landmarks, instead of waiting for them to find the differences out. Besides, how the questions are presented can also affect the complexity of a task. For example, a multiple-choice question with all candidate routes presented at the same time would be more difficult to answer than an equivalent combination of binary questions, such as “do you prefer the route passing landmark A at 2:00 pm?”. Actually, [25] has pointed out that several binary-choice questions are easier and more accurate than a multiple-choice question. Based on the above analysis, we will generate a task as a sequence of binary questions, each relating to a landmark that can discriminate some of the candidate routes from the others.
3.1 Inferring Landmark Significance
It is common sense that landmarks have different significance. For instance, the White House is world famous, but Pennsylvania Ave, where the White House is located, is only known by locals of Washington DC. People tend to be more familiar with the landmarks that are frequently referred to by different sources, e.g., news, bus stops, and yellow pages. In this work, we utilize the online check-in records from a popular location-based social network (LBSN) and cars’ trajectories in the city to infer the significance of landmarks, since these two data sets are large enough to cover most areas of a city. By regarding the travellers as authorities, landmarks as hubs, and check-ins/visits as hyperlinks, we can leverage a HITS-like algorithm [33] to infer the significance of a landmark.
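The mutual-reinforcement computation can be sketched as follows, assuming the standard HITS iteration with L2 normalization (the paper's exact weighting of check-ins versus trajectory visits may differ; all names are illustrative):

```python
import math

def landmark_significance(visits, iters=50):
    """HITS-style significance from traveller-to-landmark visit links.

    visits: dict mapping each traveller to the set of landmarks they
    checked in at or visited. Travellers play the authority role and
    landmarks the hub role; returns a dict landmark -> significance.
    """
    landmarks = {l for ls in visits.values() for l in ls}
    auth = {t: 1.0 for t in visits}       # traveller authority scores
    sig = dict.fromkeys(landmarks, 1.0)   # landmark significance (hub) scores
    for _ in range(iters):
        # A landmark is significant if significant travellers visit it.
        sig = {l: sum(auth[t] for t, ls in visits.items() if l in ls)
               for l in landmarks}
        norm = math.sqrt(sum(s * s for s in sig.values())) or 1.0
        sig = {l: s / norm for l, s in sig.items()}
        # A traveller is authoritative if they visit significant landmarks.
        auth = {t: sum(sig[l] for l in ls) for t, ls in visits.items()}
        norm = math.sqrt(sum(a * a for a in auth.values())) or 1.0
        auth = {t: a / norm for t, a in auth.items()}
    return sig
```

A landmark visited by many highly ranked travellers thus accumulates a higher significance score than one visited rarely.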
3.2 Landmark Selection
Although any landmark can be used to generate a question, not all of them are suitable for the purpose of generating easy questions for a certain candidate route set \(\bar{\mathbb {R}}\). First, the selected landmark set \(\mathbb {L}\) should be discriminative to the candidate routes \(\bar{\mathbb {R}}\), which ensures that the difference between any two routes can be presented. Second, the landmarks of \(\mathbb {L}\) should have high significance, so that more people can answer the questions accurately. Third, to reduce the workload of workers, the selected landmark set \(\mathbb {L}\) should be as small as possible. Therefore, the problem of landmark selection is to find a small set of highly significant landmarks which are discriminative to all the candidate routes. It can be formally represented as an optimization problem as below:
Given n landmark-based candidate routes \(\bar{\mathbb {R}}\), and the significance of each landmark,
Select a landmark set \(\mathbb {L}\) with the size of k (\(\left\lceil \log _2 (n)\right\rceil \le k \le n\)) which is discriminative to \(\bar{\mathbb {R}}\),
Maximize \(|\mathbb {L}|^{-1} \cdot \sum \nolimits _{l \in \mathbb {L}} {l.s}\) Here, the target function aims to maximize the total significance of the selected landmarks (\(\sum \nolimits _{l \in \mathbb {L}} {l.s}\)), normalized by the size of \(\mathbb {L}\) (\(|\mathbb {L}|\)).
It is a non-trivial task to trade off between maximizing the accumulated significance of the selected landmark set \(\mathbb {L}\) and minimizing the size of \(\mathbb {L}\), while guaranteeing the restriction that \(\mathbb {L}\) must be discriminative to \(\bar{\mathbb {R}}\). A straightforward method is to enumerate all combinations of the landmarks from \(\bar{\mathbb {R}}\), and find a discriminative landmark set with the maximized target value. However, the time cost of this algorithm grows exponentially with the size of the landmark set, making this method impractical. To speed up this process, we propose a greedy algorithm, called GreedySelect (GS). The main idea is to enumerate all the possible landmark combinations in a smart order that enables pruning early in the enumeration process. Let S denote the current testing landmark set and \(\mathbb {L}_\mathrm{best}\) denote the best landmark set found so far, i.e., the one which is discriminative and has the highest target value. The landmark selection process is introduced as follows.
Preparation step During preparation, we filter out the non-beneficial landmarks, i.e., the ones which cannot discriminate any routes of \(\bar{\mathbb {R}}\). A straightforward way is to filter out all landmarks which are shared by all the candidate routes, and those which do not appear on any candidate route. Thus, the beneficial landmark set of \(\bar{\mathbb {R}}\) can be generated as follows: \(\mathbb {L} = \bigcup _{\bar{R} \in \bar{\mathbb {R}}} {\bar{R}} \setminus \bigcap _{\bar{R} \in \bar{\mathbb {R}}} { \bar{R}} \), and the landmarks are sorted in descending order of significance to enable further pruning. Normally, each landmark l of \(\mathbb {L}\) divides the route set \(\bar{\mathbb {R}}\) into two parts: the set of routes that pass l, and the set of routes that do not. Particularly, we observe that there may exist a landmark \(l_i\) whose partition equals that of another landmark \(l_j\); in this case, we say \(l_i\) and \(l_j\) have the same discriminative information. For instance, in Fig. 2, we can see that \(l_2\) and \(l_3\) have the same discriminative information, and so do \(l_8\) and \(l_9\).
Lemma 1
Given two landmarks \(l_i\) and \(l_j\) sharing the same discriminative information and a landmark combination S, the sets \(S\cup \{l_i\}\) and \(S\cup \{l_j\}\) are either both discriminative or both non-discriminative to \(\bar{\mathbb {R}}\).
Proof
Consider any two routes \(\bar{R}\), \(\bar{R}^{\prime }\) from \(\bar{\mathbb {R}}\). If S is discriminative to \(\bar{R}\) and \(\bar{R}^{\prime }\), i.e., \(\bar{R} \cap S \ne \bar{R}^{\prime } \cap S\), then clearly \(\bar{R} \cap (S \cup \{l_i\}) \ne \bar{R}^{\prime } \cap (S \cup \{l_i\})\), and likewise for \(S \cup \{l_j\}\); that is, both \(S \cup \{l_i\}\) and \(S \cup \{l_j\}\) are discriminative to \(\bar{R}\), \(\bar{R}^{\prime }\). Otherwise, \(\bar{R} \cap S = \bar{R}^{\prime } \cap S\); since \(l_i\) and \(l_j\) partition the routes identically, adding \(l_i\) distinguishes \(\bar{R}\) from \(\bar{R}^{\prime }\) if and only if adding \(l_j\) does. \(\square \)
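The preparation step and Lemma 1 together suggest a simple preprocessing routine: drop the non-beneficial landmarks and keep only the most significant representative of each group sharing the same discriminative information. A sketch under these assumptions (routes as sets of landmark ids; function and variable names are illustrative):

```python
def prepare_landmarks(routes, significance):
    """Filter non-beneficial landmarks and deduplicate by partition.

    routes: list of sets of landmark ids; significance: dict id -> score.
    Returns beneficial landmarks, one per distinct partition of the routes,
    sorted in descending order of significance.
    """
    # L = (union of all routes) \ (intersection of all routes)
    beneficial = set().union(*routes) - set.intersection(*routes)
    representative = {}
    for l in beneficial:
        # A landmark's "discriminative information" is the subset of routes it lies on.
        part = frozenset(i for i, r in enumerate(routes) if l in r)
        # Keep only the most significant landmark inducing each partition.
        if part not in representative or significance[l] > significance[representative[part]]:
            representative[part] = l
    return sorted(representative.values(), key=lambda l: -significance[l])
```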
Expansion step This step generates and tests the landmark set S. In each recursion step, we consider all the landmarks not in S, pick the most significant one that has not been added yet, and add it into S. Each time a new S is generated, we test whether S is discriminative. If S is not discriminative, the test returns false. Otherwise, we use \(\mathrm{GetMaxSet}(S)\) to get the maximum superset of S, i.e., the set which contains all the landmarks in S and maximizes the target function. Once S is discriminative, we stop adding landmarks to it, no longer visit supersets of S, and roll back to the upper recursion layer. The process stops when all the possible combinations have been visited. For example, as shown in Fig. 4a, the algorithm starts by adding \(l_2\) to S. Since \(S=\{l_2\}\) is not discriminative, S is expanded by adding \(l_8\), as shown in Fig. 4b, and likewise \(l_7\) is added to \(S=\{ l_2, l_8\}\). Then, \(S=\{l_2, l_8, l_7\}\) will not be expanded further, and the system will roll back \(l_7\) and expand \(S=\{l_2, l_8\}\) with \(l_6\).
Lemma 2
For any two sets S and \(S^{\prime }\), where \(S \subset S^{\prime }\), if \(\forall l_i \in S, l_j \in S^{\prime } \setminus S\), \(l_i.s > l_j.s\), then the target value of \(\mathrm{GetMaxSet}(S^{\prime })\) is always smaller than the target value of \(\mathrm{GetMaxSet}(S)\).
Proof
The proof is straightforward and omitted. \(\square \)
Based on Lemma 2, we eagerly retrieve the maximum superset of the current S. If its maximum target value is less than the target value of the current \(\mathbb {L}_{{\hbox {best}}}\), then we stop expanding S: all the landmarks added subsequently will have a lower significance than the elements in S, so by Lemma 2, the following expansion cannot generate a better landmark set than the current \(\mathbb {L}_{{\hbox {best}}}\). For example, in Fig. 4d, the target value of the current \(\mathbb {L}_{{\hbox {best}}}\) equals 0.8, which is given by \(S=\{ l_2, l_8, l_7\}\). Since the possible maximum target values given by landmark sets containing \(\{l_2, l_6\}\) and \(\{l_2, l_4\}\) are 0.725 (given by \(\{l_2, l_6, l_8, l_7\}\)) and 0.65 (given by \(\{l_2, l_4, l_8, l_7\}\)), respectively, the landmark sets containing \(\{l_2, l_6\}\) or \(\{l_2, l_4\}\) will not be expanded, and neither will the landmark sets containing \(\{l_6\}\), \(\{l_5\}\), or \(\{l_4\}\) in Fig. 4e.
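Putting the expansion step and the Lemma 2 pruning together, the whole search can be sketched as a small branch-and-bound procedure. This is a simplified reading of GS, not the authors' exact implementation; the `bound` helper plays the role of \(\mathrm{GetMaxSet}\):

```python
def greedy_select(routes, significance):
    """Branch-and-bound sketch of GreedySelect: find a discriminative
    landmark set maximizing average significance."""
    # Beneficial landmarks, sorted by descending significance.
    cand = sorted(set().union(*routes) - set.intersection(*routes),
                  key=lambda l: -significance[l])

    def discriminative(S):
        sigs = [frozenset(r & S) for r in routes]
        return len(set(sigs)) == len(sigs)

    def avg(S):
        return sum(significance[l] for l in S) / len(S)

    def bound(S, rest):
        # Best target value any superset of S can reach (GetMaxSet-style):
        # extend with the most significant remaining landmarks.
        b, cur = avg(S), set(S)
        for l in rest:
            cur.add(l)
            b = max(b, avg(cur))
        return b

    best = {"set": None, "val": -1.0}

    def expand(S, i):
        if S and discriminative(S):
            if avg(S) > best["val"]:
                best["set"], best["val"] = set(S), avg(S)
            return  # supersets of a discriminative S need not be visited
        for j in range(i, len(cand)):
            S.add(cand[j])
            # Lemma 2 pruning: skip branches whose bound cannot beat L_best.
            if bound(S, cand[j + 1:]) > best["val"]:
                expand(S, j + 1)
            S.discard(cand[j])

    expand(set(), 0)
    return best["set"]
```

Because `cand` is sorted by descending significance, every branch adds landmarks of non-increasing significance, which is exactly the precondition of Lemma 2 and makes the bound valid.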
3.3 Question Ordering
4 Worker Selection
Some crowdsourcing platforms, such as AMT and CrowdFlower, give workers the freedom to choose any questions. However, this may cause some problems. For example, many workers may choose to answer the same question, while some other questions are not picked by anyone; workers have to view all the questions before they choose; and workers may answer questions that they are not familiar with. Our work avoids these problems by designing a dedicated component to assign each task to a set of eligible workers. To judge whether a worker is eligible for a task, many aspects of the worker have to be taken into consideration, e.g., the number of outstanding tasks, the worker’s response time, and the familiarity with a certain area. First, since each worker may have many outstanding tasks, to balance the workload and reduce the response time, we use a threshold \(\eta _{\#q}\) to restrict the maximum number of tasks for each worker. Second, each user of our system can specify the longest time delay she allows to get an answer, so a task will not be assigned to workers who have a high probability of missing the due time. Finally, a recommended route will have high confidence of being correct if the assigned workers are very familiar with the area. The worker’s familiarity with respect to a certain area can, in turn, be affected by several factors, such as whether the worker lives around the area, whether the worker has answered questions relating to this area correctly in the past, etc. In summary, an eligible worker should meet three conditions: (1) she has quota to answer the question; (2) she has a high probability of answering a question before the due time; and (3) she has a relatively high familiarity level with the query regions.
4.1 Response Time
Each task has a user-specified response time, by which an answer must be returned. We assume the response time t of a worker follows an exponential distribution, i.e., \(f(t; \lambda ) =\lambda e^{-\lambda t}\), which is a standard assumption in estimating workers’ response times. The cumulative distribution function of \(f(t;\lambda )\) is \(F(t;\lambda )= 1-e^{-\lambda t}\). If the probability of a worker responding to a task within time \(\overline{t}\), represented by \(F (\overline{t}; \lambda )\), is less than the threshold \(\eta _\mathrm{time}\), we will not assign the task to that worker.
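Under this assumption, the eligibility check reduces to one line. A sketch (in practice \(\lambda \) would be estimated from the worker's historical mean response time, e.g., \(\lambda = 1/\bar{t}_\mathrm{hist}\); names are illustrative):

```python
import math

def responds_in_time(lam, t_bar, eta_time=0.8):
    """True if P(response time <= t_bar) = F(t_bar; lam) = 1 - exp(-lam * t_bar)
    meets the threshold eta_time; otherwise the task is not assigned."""
    return 1.0 - math.exp(-lam * t_bar) >= eta_time
```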
4.2 Worker’s Familiarity Score
4.3 Finding Top-k Eligible Workers
Next, we discuss how to find the top-k eligible workers for a given task. Given a task (the selected n landmarks \(\mathbb {L}_{\mathbb {R}}\)), the worker–landmark accumulated familiarity score matrix \(M^{*}\), a response time t, and a positive integer k, a top-k eligible worker query returns the k workers who have the most knowledge of the landmarks in \(\mathbb {L}_{\mathbb {R}}\) among all the workers and have a high possibility of finishing the task within time t.
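A naive evaluation of this query can be sketched as follows (a linear scan; the paper's efficient search algorithm would avoid scoring every worker, and the quota check \(\eta _{\#q}\) is abstracted into a predicate; all names are illustrative):

```python
import heapq
import math

def top_k_workers(M, task_landmarks, lam, t_bar, k,
                  eta_time=0.8, has_quota=None):
    """M: dict worker -> dict landmark -> accumulated familiarity score;
    lam: dict worker -> exponential response rate. Returns the k workers
    with the highest total familiarity over the task's landmarks among
    those likely to respond within t_bar."""
    scored = []
    for w, fam in M.items():
        if has_quota is not None and not has_quota(w):
            continue                                   # condition (1): quota
        if 1.0 - math.exp(-lam[w] * t_bar) < eta_time:
            continue                                   # condition (2): in time
        score = sum(fam.get(l, 0.0) for l in task_landmarks)  # condition (3)
        scored.append((score, w))
    return [w for _, w in heapq.nlargest(k, scored)]
```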
5 Truth Reusing
It sometimes happens that the mined routes and the routes provided by Web services are different; in this case, a best recommended route cannot be easily judged. To evaluate the quality of these candidate routes, we may simply publish an evaluation task and ask the workers to make the decision, but this approach is crude, since crowd evaluation is a procedure costly in both time and manpower. To remedy this issue, we can use the relevant verified truths, which are already known best recommended routes near the query locations, to evaluate these routes. Figure 6 demonstrates two recommended routes \(R_1\) and \(R_2\) connecting \(l_1\) and \(l_2\), denoted by solid lines, and an existing truth route \(R_\mathrm{truth}\) connecting \(l_3\) and \(l_4\), denoted by a dotted line. Since the best route \(R_\mathrm{truth}\) between \(l_3\) and \(l_4\) is very similar to \(R_1\), \(R_1\) has a higher possibility of being the best route than \(R_2\).
5.1 Search Relevant Truths
From the same start location, people’s preferred routes change with different destinations. Therefore, when asked about the route from a start location to a destination, we should further consider whether the verified truths are (approximately) heading from the start location to the destination. In other words, the relevant truths should pass the start location and the destination sequentially. However, due to the sparsity of truths, this rule is so strict that it may often happen that no truth passes the start location and the destination sequentially. It is not surprising that people’s preferred route between two places is similar to the preferred route between two nearby places, i.e., a nearby start place and a nearby destination. Besides, we all have the experience that our preference between two places changes with the time of the day; for example, people in Beijing avoid driving on the third ring road during rush hour. Therefore, the time factor must be considered when the system makes recommendations automatically. In conclusion, the relevant truths we search for are the truths which head from places near the start location to places near the destination at a certain time. In the following subsections, we will introduce the indexing structure of truths and the searching process.
A truth is represented by a sequence of intersections, where every two consecutive intersections denote a road. Thus, we use an R-tree to index all the roads of all the truths and an inverted index to indicate all the truths passing each intersection. We apply a range query on the start location \(p_\mathrm{start}\) and find all the roads, denoted by \(\mathbb {R}_\mathrm{start}\), whose distances to \(p_\mathrm{start}\) are less than \(\eta _\mathrm{range}\). All the truths \(\mathbb {T}_\mathrm{start}\) passing roads in \(\mathbb {R}_\mathrm{start}\) can be easily found using the inverted index. Similarly, for \(p_\mathrm{dest}\), we can get the truths \(\mathbb {T}_\mathrm{dest}\) passing roads near \(p_\mathrm{dest}\), whose time tags are within the query time zone. Therefore, the relevant truths \(\mathbb {T}\) are given by \(\mathbb {T}_\mathrm{start}\cap \mathbb {T}_\mathrm{dest}\). For each relevant truth \(T^{*}\), the start point of the last road on \(T^{*}\) in \(\mathbb {R}_\mathrm{start}\) is denoted as \(p_{{\hbox {start}}}^{T^{*}}\), and the start point of the first road on \(T^{*}\) in \(\mathbb {R}_\mathrm{dest}\) is denoted as \(p_\mathrm{dest}^{T^{*}}\). The relevant part of \(T^{*}\) to the request is the subroute of \(T^{*}\) which starts from \(p_\mathrm{start}^{T^{*}}\) and ends at \(p_\mathrm{dest}^{T^{*}}\).
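The search just described can be sketched end to end, with a linear scan standing in for the R-tree range query (illustrative only; truths here carry a time tag and a list of road points, and all thresholds are assumptions):

```python
def relevant_truths(truths, p_start, p_dest, t_query,
                    eta_range=500.0, time_tol=3600):
    """truths: dict id -> (time_tag, [road points as (x, y)]).
    Returns ids of truths passing near both p_start and p_dest whose
    time tag falls within time_tol of the query time."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def near(point):
        # Range query: truths with some road point within eta_range of `point`
        # (the R-tree plus inverted index would answer this efficiently).
        return {tid for tid, (_, roads) in truths.items()
                if any(dist(r, point) <= eta_range for r in roads)}

    hits = near(p_start) & near(p_dest)   # T_start intersect T_dest
    # Keep only truths whose time tag falls in the query time zone.
    return {tid for tid in hits
            if abs(truths[tid][0] - t_query) <= time_tol}
```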
5.2 Evaluate the Confidence Score
Given the relevant truth set \(\mathbb {T} \), the confidence score of each candidate route of \(\mathbb {R}\) to be the best recommendation should be evaluated. Both the distances between candidate routes and relevant truths and the relevance of the truths should be considered in the evaluation, since a candidate route that has a smaller distance to a more relevant truth should possess a higher possibility of being the best recommendation. However, directly summing up the weighted distances and choosing the route with the smallest sum may bias the evaluation, because trajectory distance measures are not metrics: the route with the smallest weighted distance sum does not necessarily have the lowest distance to all the relevant truths.
6 Experiments
In this section, we conduct extensive experiments to validate the effectiveness and efficiency of our system. All the algorithms in our system are implemented in Java and run on a computer with an Intel Core i5-3210 CPU (2.50 GHz) and 4 GB memory.
6.1 Experiment Setup
Data set
Id  City  #Trajectory  Duration 

A  Beijing  112,232  6 months 
B  Nanjing  35,340  3 months 
POI Clusters We obtain two POI data sets of Beijing and Nanjing from a reliable third-party company in China. After performing DBSCAN on these POI data sets, approximately 50,000 POI clusters are obtained, and each POI cluster is used as a landmark.
Ground truth route We carefully choose 1000 popular routes agreed by all three routemining algorithms in each city as the ground truth. These routes are treated as the correct answers for the route recommendation request between corresponding places.
Workers In each of the cities, we have several volunteers answer the questions generated by the system.
6.2 Evaluation Approach
For each ground truth route, we query a big-thumb map service provider to get three recommended routes from its source to its destination. The ground truth and the recommended routes form the candidate route set, based on which a task will be generated and assigned to workers. In this way, we can assess the accuracy of the answers returned by the system by comparing them with the ground truth.
Parameter settings
Notation  Explanation  Default value 

n  Number of candidate routes  6 
\(|\mathbb {L}|\)  Number of landmarks on candidate routes  200 
\(\alpha \)  Influence factor of people’s living space to their knowledge  0.3 
\(\beta \)  Influence factor of people answering a question wrong to their knowledge  0.3 
\(\eta _{\#q}\)  The maximum number of outstanding tasks of each worker  3 
\(\eta _\mathrm{dis}\)  Radius of knowledge influence region  500 m 
\(\eta _{t}\)  The minimum probability of answering a question in time  80 % 
6.3 Performance Evaluation
6.3.1 Case Study
6.3.2 Quality of Recommendation
6.3.3 Effectiveness of Worker Selection
In this experiment, we test whether the overall performance of the system can be improved by assigning tasks to suitable workers with good knowledge of the query area. As a comparison, we also assign the same tasks to random workers without any selection algorithm applied. The accuracy of the route recommendation is shown in Fig. 9, from which we can see that the overall accuracy improves by 20 % when the proposed worker selection methods are applied.
To further demonstrate the relationship between the worker’s knowledge and the accuracy of an answer, we analyze the relationship between the precision of recommended routes and workers’ knowledge level. The result is shown in Fig. 10b, from which we can see that the precision grows steadily with workers getting more familiar with the area.
6.3.4 Effect of Question Format
The question format adopted by the system is a series of binary questions in a certain order. In this experiment, we evaluate the effect of different question formats on the performance of the system. We compare our question format (BO) with three other candidate formats: (1) map-only format (MO: show the candidate routes directly on a map and ask workers to choose), (2) checkbox format (CB: workers need to choose all the landmarks on their preferred routes), and (3) binary questions without smart ordering (BwO: the questions are asked in descending order of significance). We generate the same tasks using the four question formats and assign them to the same set of workers. Both efficiency and accuracy are evaluated. The results are shown in Fig. 11. From Fig. 11a, we can see that MO takes the longest time for workers to finish a task, because the workers need to spend a lot of time realizing the differences between candidate routes. All the other question formats, in which the differences are automatically summarized by the system, cost around 10 s for each task. Notably, the BO format costs less time than BwO, since by presenting the questions in a smarter order, the number of questions needed for each task is reduced. CB takes the least time, as many workers do not bother to check a lot of landmarks and simply skip the questions. Figure 11b shows the results on accuracy, in which the CB format has the lowest precision, since people spend the least time and attention on this type of question. The binary question formats, both BwO and BO, outperform MO in precision, since with MO it is hard for people to realize the differences between candidate routes on a map. Furthermore, the precision of BO is more than 10 % higher than that of BwO, demonstrating that smart ordering not only reduces the time cost, but also improves the accuracy of answers.
6.3.5 Time Cost of Landmark Selection
We also test the time cost of the landmark selection process, which is important for the system to respond to user requests in real time. In general, two factors influence the time cost: (a) the number of landmarks on the candidate routes and (b) the number of candidate routes. Both factors are tested in our experiments by comparing the GreedySelect (GS) method with the Incremental Landmark Selecting (ILS) method introduced in [24]. The average time cost for selecting landmarks of a candidate route set of size 6 is shown in Fig. 12a, from which we observe that the time costs of both GS and ILS grow as the number of landmarks increases from 50 to 400. However, GS consistently outperforms ILS by three orders of magnitude. The average time costs for selecting landmarks with different numbers of candidate routes are shown in Fig. 12b. It illustrates that the running time of GS remains much more stable as the number of candidate routes grows, in contrast to the exponential growth of the time cost of ILS.
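To make the greedy idea concrete, here is a minimal sketch in the spirit of GreedySelect: pick landmarks one at a time, each maximizing the number of still-undistinguished candidate-route pairs it separates, until every pair is distinguishable. The scoring function and all names are assumptions for illustration; the actual objective used by GS (and by the ILS baseline of [24]) may differ in detail.

```python
# Greedy landmark selection sketch (hypothetical objective): repeatedly choose
# the landmark that separates the most candidate-route pairs not yet
# distinguished by previously chosen landmarks.
from itertools import combinations

def greedy_select(routes, landmarks):
    routes = [set(r) for r in routes]
    pairs = set(combinations(range(len(routes)), 2))  # pairs to distinguish
    chosen = []
    while pairs:
        def gain(lm):
            # Number of undistinguished pairs this landmark separates.
            return sum(1 for i, j in pairs
                       if (lm in routes[i]) != (lm in routes[j]))
        lm = max(landmarks, key=gain)
        if gain(lm) == 0:  # no remaining landmark separates any pair
            break
        chosen.append(lm)
        pairs = {(i, j) for i, j in pairs
                 if (lm in routes[i]) == (lm in routes[j])}
        landmarks = [l for l in landmarks if l != lm]
    return chosen

# Toy example: three candidate routes over three landmarks.
routes = [["A", "B"], ["A", "C"], ["B", "C"]]
picked = greedy_select(routes, ["A", "B", "C"])
```

Each greedy step scans the remaining landmarks once, so the cost grows polynomially with the landmark count, in contrast to an incremental search that enumerates landmark subsets and blows up exponentially with the number of candidate routes.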
6.3.6 Precision of Map Service and Popular Route-Mining Algorithms
6.3.7 Hit Rate of Truth Reusing
7 Related Work
To our knowledge, there is no existing work on evaluating the quality of recommended routes. As the goal of this work is to evaluate the quality of routes recommended by Web services and mining algorithms, we first review the route recommendation algorithms (frequent-path mining algorithms) used in this paper. In addition, we generate easy questions and find target workers in order to improve evaluation quality and reduce workers' workload, which shares the motivation of research on crowdsourcing question design and worker selection. Therefore, at the end of this section, we review previous work on these two aspects.
Route Recommendation Algorithms Popular route mining has received tremendous research interest for a decade, producing a large body of work [5, 6, 7, 9, 11, 12, 13, 16, 22, 23, 28, 29, 32, 33]. Among these works, [3, 4, 15, 31] are the most representative. Chen et al. [4] propose a novel popularity function for evaluating path desirability using historical trajectory data sets; the popular routes it recommends tend to have fewer vertices. The work in [31] provides k popular routes by mining uncertain trajectories; its recommended routes tend to be rough routes rather than exact ones. [15] observes that popular routes change over time, and therefore proposes a popular-route-mining algorithm that can recommend routes for arbitrary time periods specified by users. [3] provides evidence that the routes recommended by Web services sometimes differ from drivers' preferences; it therefore mines an individual's popular routes from his or her historical trajectories, so that the recommended routes reflect personal preference.
Crowdsourcing Problems In crowdsourcing, question design is always an application-dependent strategy, which may consider the cost or the number of questions. [8, 19] propose strategies to minimize the cost of the designed questions. The question design strategy of [30] minimizes the number of questions, while that of [20] generates an optimal set of questions. [14] builds the desired travel plans incrementally, optimally choosing the best questions at each step so as to minimize the overall number of questions asked. Selecting workers of high individual quality for tasks is always beneficial to the final quality of answers. Thus, [10] proposes an algorithm to select workers possessing the required skills while minimizing the cost of choosing them. [1] uses email communication to identify skillful workers. Cao et al. [2] assign tasks to microblog users by mining users' knowledge and measuring their error rates.
8 Conclusions
In this work, we present a novel crowd-based route recommendation system, which evaluates the quality of routes recommended by different sources by leveraging the knowledge and opinions of the crowd. Three core components, task generation, worker selection, and truth reusing, have been carefully designed, so that informative and concise questions are created and assigned to the most suitable workers. By deploying and testing the system in real scenarios, we demonstrate that it recommends the most satisfactory routes to users with a probability of at least 90 %, much higher than either the most well-known map services or the state-of-the-art route-mining algorithms. Moreover, this research sheds light on other crowd-based recommendation problems, such as location recommendation and itinerary planning, which broadens its applicability to further scenarios.
Notes
Compliance with Ethical Standards
Conflict of interest
The authors declare that they have no conflict of interest.
References
1. Campbell C, Maglio P, Cozzi A, Dom B (2003) Expertise identification using email communications. In: CIKM, ACM, pp 528–531
2. Cao C, She J, Tong Y, Chen L (2012) Whom to ask? Jury selection for decision making tasks on microblog services. PVLDB 5(11):1495–1506
3. Ceikute V, Jensen C (2013) Routing service quality: local driver behavior versus routing services. In: MDM, IEEE, pp 195–203
4. Chen Z, Shen H, Zhou X (2011) Discovering popular routes from trajectories. In: ICDE, pp 900–911
5. Gaffney S, Smyth P (1999) Trajectory clustering with mixtures of regression models. In: SIGKDD, ACM, pp 63–72
6. Giannotti F, Nanni M, Pinelli F, Pedreschi D (2007) Trajectory pattern mining. In: KDD, pp 330–339
7. Gonzalez H, Han J, Li X, Myslinska M, Sondag J (2007) Adaptive fastest path computation on a road network: a traffic mining approach. In: PVLDB, pp 794–805
8. Guo S, Parameswaran A, Garcia-Molina H (2012) So who won? Dynamic max discovery with the crowd. In: SIGMOD, ACM, pp 385–396
9. Hua W, Wang Z, Wang H, Zheng K, Zhou X (2015) Short text understanding through lexical-semantic analysis. In: Proceedings of the 2015 IEEE 31st international conference on data engineering, pp 495–506
10. Lappas T, Liu K, Terzi E (2009) Finding a team of experts in social networks. In: SIGKDD, ACM, pp 467–476
11. Lee J, Han J, Li X, Gonzalez H (2008) TraClass: trajectory classification using hierarchical region-based and trajectory-based clustering. PVLDB 1(1):1081–1094
12. Lee J, Han J, Whang K (2007) Trajectory clustering: a partition-and-group framework. In: SIGMOD, ACM, pp 593–604
13. Li X, Han J, Lee J, Gonzalez H (2007) Traffic density-based discovery of hot routes in road networks. Adv Spat Temporal Databases 4605:441–459
14. Lotosh I, Milo T, Novgorodov S (2013) CrowdPlanr: planning made easy with crowd. In: Proceedings of the 2013 IEEE 29th international conference on data engineering (ICDE), pp 1344–1347. doi: 10.1109/ICDE.2013.6544940
15. Luo W, Tan H, Chen L, Ni L (2013) Finding time period-based most frequent path in big trajectory data. In: SIGMOD, ACM, pp 195–203
16. Mamoulis N, Cao H, Kollios G, Hadjieleftheriou M, Tao Y, Cheung D (2004) Mining, indexing, and querying historical spatiotemporal data. In: KDD, pp 236–245
17. Mnih A, Salakhutdinov R (2007) Probabilistic matrix factorization. In: NIPS, pp 1257–1264
18. Nordmann L, Pham H (1999) Weighted voting systems. IEEE Trans Reliab 48(1):42–49
19. Parameswaran A, Garcia-Molina H, Park H, Polyzotis N, Ramesh A, Widom J (2012) CrowdScreen: algorithms for filtering data with humans. In: SIGMOD, ACM, pp 361–372
20. Parameswaran A, Sarma A, Garcia-Molina H, Polyzotis N, Widom J (2011) Human-assisted graph search: it's okay to ask questions. PVLDB 4(5):267–278
21. Quinlan JR (1986) Induction of decision trees. Mach Learn 1(1):81–106
22. Sacharidis D, Patroumpas K, Terrovitis M, Kantere V, Potamias M, Mouratidis K, Sellis T (2008) Online discovery of hot motion paths. In: EDBT, pp 392–403
23. Shang S, Yuan B, Deng K, Xie K, Zheng K, Zhou X (2012) PNN query processing on compressed trajectories. GeoInformatica 16(3):467–496
24. Su H (2013) CrowdPlanner: a crowd-based route recommendation system. arXiv preprint arXiv:1309.2687
25. Su H, Deng J, Li F (2012) Crowdsourcing annotations for visual object detection. In: AAAI
26. Su H, Zheng K, Huang J, Jeung H, Chen L, Zhou X (2014) CrowdPlanner: a crowd-based route recommendation system. In: Proceedings of the 2014 IEEE 30th international conference on data engineering (ICDE), pp 1144–1155
27. Su H, Zheng K, Wang H, Huang J, Zhou X (2013) Calibrating trajectory data for similarity-based analysis. In: SIGMOD, ACM, pp 833–844
28. Wang H, Su H, Zheng K, Sadiq S, Zhou X (2013) An effectiveness study on trajectory similarity measures. In: Proceedings of the twenty-fourth Australasian database conference, vol 137. Australian Computer Society Inc., pp 13–22
29. Wang H, Zheng K, Xu J, Zheng B, Zhou X, Sadiq S (2014) SharkDB: an in-memory column-oriented trajectory storage. In: Proceedings of the 23rd ACM international conference on information and knowledge management, pp 1409–1418
30. Wang J, Kraska T, Franklin M, Feng J (2012) CrowdER: crowdsourcing entity resolution. PVLDB 5(11):1483–1494
31. Wei LY, Zheng Y, Peng WC (2012) Constructing popular routes from uncertain trajectories. In: Proceedings of the 18th ACM SIGKDD international conference on knowledge discovery and data mining, pp 195–203
32. Zheng K, Su H, Zheng B, Shang S, Xu J, Liu J, Zhou X (2015) Interactive top-k spatial keyword queries. In: Proceedings of the 2015 IEEE 31st international conference on data engineering, pp 423–434
33. Zheng Y, Zhang L, Xie X, Ma W (2009) Mining interesting locations and travel sequences from GPS trajectories. In: WWW, pp 791–800
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.