Introduction

Route prediction is a key requirement in many important location-based applications such as vehicular ad hoc networks, traffic congestion estimation, resource prediction in grid computing, vehicular turn prediction, travel pattern similarity, and pattern mining. The route prediction problem is: given the sequence of road network graph edges already traveled by the user, predict the most probable edge of the network to be traveled next. Our approach builds a prediction by partial match (PPM) model from a large corpus of sequential trajectories traveled by the user in the past. PPM is widely used in various applications in the areas of data compression and machine learning (Begleiter et al. 2004). Time-stamped GPS traces are collected over a long period of time. This long chronological sequence of GPS traces is broken down into smaller units called trips (Froehlich and Krumm 2008; Tiwari et al. 2013). Trips are mapped to the road network graph using a map matching process, which identifies the object's location on the road network graph (Tiwari et al. 2014; Bernstein and Kornhauser 1996; Zhou and Golledge 2006). The PPM tree-based model is constructed from trips composed of ordered sequences of road network edges. Given a trajectory traveled by the user, a lookup is done in the PPM tree-based model and the most likely next edge is found.

Cleary and Witten invented PPM in 1984. Many versions of PPM evolved thereafter (Moffat 1990; Cleary et al. 1995; Teahan 1995; SchüRmann and Grassberger 1996). PPM models learn from historical occurrences of sequences to predict the probability of a specific symbol appearing after a given data sequence. For the experiments in this work, the variant PPM-C is used. We explain the process of construction of PPM-C, followed by a distributed construction of the same. Real applications using PPM deal with huge data sets, and processing such a volume sequentially to build a PPM model is a bottleneck. Attempts have been made to achieve scalability by adding processors and memory (Gilchrist 2004; Joel and Sirota 2012; Effros 2000). However, distributed construction of PPM is still a challenge. In the proposed work, scalability is achieved by decomposing GPS traces into trips, processing them in parallel, and finally consolidating the results to form the PPM model. A set of user trips is decomposed into smaller sets and distributed to compute modules known as mappers. Mappers compute the variable-order contexts as key–value pairs: in each case, the key is the context and the value is its occurrence frequency in the training set. Key–value pairs from the various mappers are emitted to the reducer node. The reducer consolidates the occurrences of the various contexts and inserts them into the PPM trie. The final tree produced by the reducer is the PPM model used for route prediction. The major contribution of this work is a technique for distributed computation of PPM and its application to route prediction. All experiments and implementations are done on real data sets openly available in the public domain.

PPM tree-related work and literature

Prediction by partial match (PPM) is a context-modeling-based adaptive statistical data compression technique. It has evolved into a strong alternative for solving many problems in the fields of biomedical engineering, natural language processing, and artificial intelligence. PPM models use a set of historical occurrences of sequences to predict the probability of a specific symbol appearing at a given position in an input stream (Begleiter et al. 2004). Arithmetic encoding was proposed in 1976, soon after which the PPM variants PPM-A and PPM-B were invented by Cleary and Witten (1984). These were further improved by Moffat (1990), resulting in PPM-C and PPM-D; PPM-D was shown to give slightly better results in some cases. All these variants of PPM are largely the same and differ only in the way probability is computed. In all cases, the PPM model is a mix of lower-order models: if no suitable result is found with a higher-order model, it falls back to lower-order models (contexts of lesser length). Hiroyuki et al. (2005) presented an unbounded version of PPM known as PPM*, used for the classification of text; however, it proposed the use of finite deterministic contexts. Cleary et al. (1995) also proposed an unbounded variant of PPM, but scalability and parallelism were not addressed. It is well established that PPM performs well in compression, language identification, text prediction, word segmentation, text categorization, etc. Gilchrist et al. (2004) proposed parallel compression with more focus on BZIP2 on a multi-processor system. Effros et al. (2004) presented an improvement on PPM, but parallelism was not addressed. Begleiter et al. (2004) further explored PPM and successfully applied it to artificial intelligence (AI) applications including text prediction and music recognition. Celikel et al. (2005) applied PPM to language recognition with promising results.
The objective of almost all research on PPM has been either improving its accuracy and execution on a single machine or applying it in different fields of study. In spite of its wide applicability, parallel execution and distributed PPM model construction have hardly been explored. The objective of this research is to develop a technique for distributed parallel construction of the PPM model tree. The major milestones in PPM are listed in Table 1.

Table 1 Important PPM construction algorithms

PPM tree basics

Time-stamped GPS traces are collected over a long period of time. GPS traces are in the form \(\left( {x_{{t^{0} }} ,y_{{t^{0} }} ,t^{0} } \right),\left( {x_{{t^{1} }} ,y_{{t^{1} }} ,t^{1} } \right) \ldots \left( {x_{{t^{n} }} ,y_{{t^{n} }} ,t^{n} } \right)\), which represents the object's location \(\left( {x_{{t^{k} }} ,y_{{t^{k} }} } \right)\) at time \(t^{k}\). The long chronological sequence of GPS traces is broken down into smaller units called trips (Froehlich and Krumm 2008; Tiwari et al. 2013). A user trip \(T = \left( {p_{\text{s}} \text{,}t_{\text{s}} \text{,}p_{\text{e}} \text{,}t_{\text{e}} } \right)\) is an ordered sequence of GPS location data points \(\left( {p_{i} ,t_{i} } \right) \, \forall 1 \le i \le {\text{n}}\), where \(p_{\text{s}} ,p_{\text{e}}\) are the start and end positions and ts, te are the start and end times of the trip, respectively.

$$T = \left( {x_{{t^{0} }} ,y_{{t^{0} }} ,t^{0} } \right),\left( {x_{{t^{1} }} ,y_{{t^{1} }} ,t^{1} } \right) \ldots \left( {x_{{t^{m} }} ,y_{{t^{m} }} ,t^{m} } \right),$$
$$p_{\text{s}} = (x_{{t^{0} }} ,y_{{t^{0} }} ), \quad t_{\text{s}} = t^{0} ,\quad p_{\text{e}} = (x_{{t^{m} }} ,y_{{t^{m} }} ), \quad t_{\text{e}} = t^{m} .$$

Two trips T1 and T2 are said to be consecutive if the end position of the first trip is the same as the start position of the second trip and there is a time gap between the two. A user trip plotted on OpenStreetMap (OSM) base images is shown in Fig. 1.

Fig. 1
figure 1

User GPS traces representing a trip made by user
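The decomposition of a chronological GPS trace into trips can be sketched as a split wherever the time gap between consecutive points is large. This is only an illustrative sketch: the function name `split_into_trips` and the 600-second gap threshold are assumptions, not values from the original work.

```python
# Split a chronological GPS trace into trips at large time gaps.
# Each point is (x, y, t) with t in seconds; the 600 s threshold
# is an assumed value for illustration only.
def split_into_trips(points, max_gap=600):
    trips = []
    current = [points[0]]
    for prev, cur in zip(points, points[1:]):
        if cur[2] - prev[2] > max_gap:  # gap between two trips
            trips.append(current)
            current = []
        current.append(cur)
    trips.append(current)
    return trips
```

Each resulting trip then carries its own \(p_{\text{s}}, t_{\text{s}}, p_{\text{e}}, t_{\text{e}}\) from its first and last points.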

Trips are mapped to the road network graph using a map matching process, which identifies the object's location on the road network graph (Quddus 2006; Quddus et al. 2006; Greenfeld 2002). An example of a road network extracted from OSM is shown in Fig. 2. Map matching is a function f whose input is the GPS trace and the road network graph, and whose output is the corresponding sequence of road network edges.

Fig. 2
figure 2

Road network plotted over the OSM map

$$f\left( {\left( {x_{{t^{0} }} ,y_{{t^{0} }} ,t^{0} } \right),\left( {x_{{t^{1} }} ,y_{{t^{1} }} ,t^{1} } \right) \ldots \left( {x_{{t^{n} }} ,y_{{t^{n} }} ,t^{n} } \right)} \right) \to S,$$

where S is an ordered sequence of road network edges. Figure 3 shows the GPS traces corrected and mapped to the road network. Let ∑ = {e1,e2,e3,e4,e5} be the finite set of all edges of the digitized road network and ∑* represent all finite-length trips possible. Any trip a user makes essentially belongs to ∑*. Let X = e0,e1,…,en−1 with \(e_{i} \in \sum\, \text{and}\, X \in \sum^{*}\) be a trip; then the length of the trip is given by |X| = |e0,e1,…,en−1|.

Fig. 3
figure 3

User trip mapped to the road network
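The cited map matching algorithms use road topology and heading; as a simplified illustration of the function f only, one can snap each GPS point to its geometrically nearest edge. All names below are assumptions, and the distance-only rule is a toy stand-in for a real map matcher.

```python
import math

# Toy map matching: snap each GPS point to the nearest road edge,
# where an edge is (edge_id, (x1, y1), (x2, y2)). Real map matchers
# also use heading and network topology; this sketch uses only
# point-to-segment distance.
def point_segment_dist(px, py, x1, y1, x2, y2):
    dx, dy = x2 - x1, y2 - y1
    if dx == dy == 0:
        return math.hypot(px - x1, py - y1)
    # Project the point onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def map_match(trace, edges):
    # trace: [(x, y, t), ...] -> ordered edge ids with repeats collapsed,
    # giving the sequence S of road network edges.
    matched = []
    for x, y, _ in trace:
        eid = min(edges, key=lambda e: point_segment_dist(x, y, *e[1], *e[2]))[0]
        if not matched or matched[-1] != eid:
            matched.append(eid)
    return matched
```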

The ordered arrangement of all sequences \(s\sigma\), where \(\sigma\) is the symbol and \(s\) is the context of \(\sigma\), stored compactly in a trie, is known as the PPM tree. For demonstration purposes, let us assume the alphabet set ∑ = {e1,e2,e3,e4,e5} and the string \(X = e_{1} ,e_{2} ,e_{5} ,e_{1} ,e_{3} ,e_{1} ,e_{4} ,e_{1} ,e_{2} ,e_{5} ,e_{1}\). All the contexts with length d = 2 are shown in Table 2 and the resulting PPM tree is shown in Fig. 4.

Table 2 All contexts computed for PPM construction
Fig. 4
figure 4

PPM tree construction

Prediction by partial match (PPM) tree construction

Two-phase PPM tree construction

We propose a two-phase process to compute the PPM trie from user trips. The first phase computes all sequences \(s\sigma\), where \(\sigma\) is the symbol and \(s\) is the context of \(\sigma\); the second phase constructs the trie from the sequences computed in the first phase. The algorithm scans one symbol at a time and adds to the dictionary the shortest new phrases not yet discovered. The subsequence generation process is represented in Algorithm 1.

figure a

The following sequence is used for demonstration: X = e1,e2,e5,e1,e3,e1,e4,e1,e2,e5,e1. All contexts \(s\) of length d = 2, along with the target symbol σ, denoted \(s\sigma\) and computed by Algorithm 1, are shown in Table 3. The length of the string X is denoted by n. All contexts of length d in \(X\) can be computed in linear time Θ(n) by scanning X from left to right while maintaining a window of size d; the window advances by one unit per symbol scanned. The maximum number of distinct context strings of length d that can appear in the map is Θ(n − d) ≈ Θ(n), where d ≪ n. This maximum is reached only if no context repeats; in practice, the number of distinct contexts is at most \(\varTheta \left( n \right)\).

Table 3 All contexts computed by Algorithm 1
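The sliding-window counting described above can be sketched as follows. Representing the map as a `Counter` keyed by tuples is an implementation assumption; Algorithm 1 itself is in the figure.

```python
from collections import Counter

# Slide a window over X and count every context-plus-symbol
# string s*sigma, i.e. every substring of length d + 1 (context
# of length d followed by the target symbol).
def contexts(X, d):
    counts = Counter()
    for i in range(len(X) - d):
        counts[tuple(X[i:i + d + 1])] += 1
    return counts
```

On the running example with d = 2, this yields 9 windows; for instance, the string e1,e2,e5 occurs twice.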

The second phase starts with an empty tree and iteratively inserts the context sequences \(s\sigma\) obtained as input from the first phase. For a new context not seen earlier, a completely new branch is created. Otherwise, the path in the tree matching/overlapping with the current context is found, the count of every node on the overlapping path is increased by the frequency of occurrence, and the remaining nodes are inserted at the end of the overlapping path. The process is explained in Algorithm 2. The resultant PPM tree is constructed by Algorithm 2 from the map of context strings in Table 3, including the frequency counts.

figure b
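The insertion step of the second phase can be sketched as below. The `Node` class and its field names are illustrative assumptions; the authoritative description is Algorithm 2.

```python
# Insert each context string s*sigma into a trie: add the frequency
# to every node along the already-existing (overlapping) path and
# create new child nodes where the path diverges.
class Node:
    def __init__(self):
        self.count = 0
        self.children = {}

def build_ppm_trie(context_counts):
    root = Node()
    for context, freq in context_counts.items():
        node = root
        for symbol in context:
            node = node.children.setdefault(symbol, Node())
            node.count += freq
    return root
```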

Distributed construction of the PPM tree

To achieve a distributed construction of the PPM tree-based model, the two-phase process described in the earlier section is extended to execute over a Hadoop cluster leveraging the MapReduce computation framework. The first phase is executed by the mapper module. GPS traces are decomposed into smaller units called trips and map matched to the road network. The trips, as ordered sequences of road network edges, are grouped into smaller sets and processed by mapper modules. All the contexts \(s\sigma\) are generated by the mapper for each symbol σ in the trip and are put into a map which stores the sequence as key and its frequency as value. The implementation of the mapper module is described in Algorithm 3.

figure c
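The mapper step can be sketched as a pure function from a set of trips to key–value pairs. The function signature and the choice of d = 2 (matching the running example) are assumptions for illustration; the reference implementation is Algorithm 3.

```python
from collections import Counter

# Sketch of the mapper: each mapper receives a set of map-matched
# trips (lists of edge ids), slides a window of size d + 1 over every
# trip, and emits (context, frequency) key-value pairs.
def mapper(trips, d=2):
    counts = Counter()
    for trip in trips:
        for i in range(len(trip) - d):
            counts[tuple(trip[i:i + d + 1])] += 1
    return sorted(counts.items())
```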

To demonstrate the distributed construction of the PPM tree, we take the string below, which will be used as a running example throughout further discussions.

$$e_{1} ,e_{2} ,e_{5} ,e_{1} ,e_{3} ,e_{1} ,e_{4} ,e_{1} ,e_{2} ,e_{5} ,e_{1} ,e_{3} , e_{1} ,e_{4} , e_{1} , e_{2} , e_{5} , e_{1} , e_{3} , e_{1} .$$

For the sake of simplicity and to demonstrate the concept, the input string is split into two chunks. For each of the split, a mapper is instantiated.

$${\text{Split}}\,\,S_{1} = e_{1} , e_{2} , e_{5} ,e_{1} , e_{3} , e_{1} , e_{4} , e_{1} ,e_{2} ,e_{5} ,e_{1} \quad {\text{processed by mapper }}m_{1} ,$$
$${\text{Split}}\,\,S_{2} = e_{5} ,e_{1} , e_{3} , e_{1} , e_{4} , e_{1} , e_{2} , e_{5} , e_{1} , e_{3} , e_{1} \quad {\text{processed by mapper}}\, m_{2} .$$

The outputs of mappers m1 and m2 are summarized in Tables 4 and 5, respectively. In this example, the context \(s\sigma\) serves as the key and the frequency (f) as the value.

Table 4 All contexts with frequency computed by m1
Table 5 All contexts with frequency computed by m2

The output of the mapper modules is a set of key–value pairs, where the key is the context and the value is the frequency, emitted as input to the reducer. The framework consolidates the pairs by adding the frequencies for each context key. For example, if one mapper emits 〈e1, e2 | 4〉 and another emits 〈e1, e2 | 10〉, then after merging the final entry becomes 〈e1, e2 | 14〉. This step ensures that each key–value pair is unique: if multiple entries exist for the same key, they are consolidated before being sent to the reducer. If the data do not fit into memory, they are periodically written to disk (Chang et al. 2008; Jeffrey and Sanjay 2004; Lammel 2008). The reducer starts with an empty tree and iteratively inserts the context sequences. For a new context not seen earlier, a completely new branch is created; otherwise, the path in the tree matching/overlapping with the current context is found, the count of every node on the overlapping path is increased by the frequency of occurrence, and the remaining nodes are inserted at the end of the overlapping path. The result of consolidating the outputs of the mappers in Tables 4 and 5 is shown in Table 6. The implementation of the reducer is described in Algorithm 4.

figure d
Table 6 Result of merging of intermediate key/value pairs by MapReduce framework
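The shuffle-and-reduce steps can be sketched as follows: merge the key–value pairs from all mappers by adding frequencies per context, then insert each consolidated pair into the trie. Here the trie is a nested dict with node frequencies stored under the key "#", which is an assumed representation; the authoritative description is Algorithm 4.

```python
from collections import Counter

# Merge the key-value pairs emitted by all mappers, adding the
# frequencies for each context key (the MapReduce shuffle step).
def shuffle(mapper_outputs):
    merged = Counter()
    for pairs in mapper_outputs:
        for context, freq in pairs:
            merged[context] += freq
    return merged

# Reducer: insert each consolidated (context, frequency) pair into a
# nested-dict trie; every node on the path gains the frequency.
def reduce_to_trie(merged):
    root = {}
    for context, freq in merged.items():
        node = root
        for symbol in context:
            node = node.setdefault(symbol, {"#": 0})
            node["#"] += freq
    return root
```

The merge reproduces the worked example above: 〈e1, e2 | 4〉 and 〈e1, e2 | 10〉 consolidate to 〈e1, e2 | 14〉 before trie insertion.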

Route prediction using the PPM tree

The objective is to predict the next edge σ ∊ E on the road network, given the trajectory \(S = \left( {x_{{t^{0} }} ,y_{{t^{0} }} ,t^{0} } \right),\left( {x_{{t^{1} }} ,y_{{t^{1} }} ,t^{1} } \right) \ldots \left( {x_{{t^{n} }} ,y_{{t^{n} }} ,t^{n} } \right)\) traveled by the user, based on information learned from historical user travel data. To predict the next edge \(\sigma\), S is map matched to the digitized road network using the map matching process f described in earlier sections.

$$f\left( {\left( {x_{{t^{0} }} ,y_{{t^{0} }} ,t^{0} } \right),\left( {x_{{t^{1} }} ,y_{{t^{1} }} ,t^{1} } \right) \ldots \left( {x_{{t^{n} }} ,y_{{t^{n} }} ,t^{n} } \right)} \right) \to e_{i} e_{i + 1} \ldots e_{i + n } .$$

Trajectory S is thus converted into an ordered sequence of road network edges and can be treated as a Markov chain; the predicted edge is the one with the highest probability of occurrence among all possibilities,

$$p\, (\sigma |e_{i} e_{i + 1} \ldots e_{i + n } ).$$

p is the conditional probability of occurrence of σ given that the event \(e_{i} e_{i + 1} \ldots e_{i + n }\) has already occurred. The constructed PPM trie holds the information learned from the user's historical travel data. Since PPM is an unbounded Markov model, the corresponding tree may be unbalanced and each path from the root may be of a different length; this makes PPM a variable-order Markov model. In the worst case, one has to traverse the longest branch of the PPM tree: if the length of the longest branch is k, then the complexity of prediction using the PPM trie is \(O\left( k \right)\). The probability of occurrence of each node starting from the root node is shown in Fig. 5. The route prediction function, denoted Route_Predict, can be represented as:

Fig. 5
figure 5

PPM tree with probability distribution

$$Route\_Predict\left( {e_{i} e_{i + 1} \ldots e_{i + n } } \right) \to \sigma .$$
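Route_Predict can be sketched as a trie walk followed by picking the highest-frequency child. The nested-dict trie with counts under the key "#" is an assumed representation (not the authors' implementation); returning `None` stands for the empty prediction \(\varepsilon\).

```python
# Walk the trie along the already-traveled context, then return the
# child edge with the highest frequency; None stands for the empty
# prediction (unseen context or no continuation).
def route_predict(trie, context):
    node = trie
    for edge in context:
        if edge not in node:
            return None
        node = node[edge]
    candidates = {e: child["#"] for e, child in node.items() if e != "#"}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)
```

The traversal visits at most one node per context symbol plus the children of the final node, consistent with the \(O(k)\) bound above.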

The cases below demonstrate the Route_Predict function over the PPM model constructed by Algorithm 4.

Case I::

This is the case when the user is at the root node, which signifies the user has not started traveling. We represent the user trajectory by \(S = \varepsilon\). From the PPM trie, the possible traversals are \(\left\{ {e_{1} ,e_{2} ,e_{3} ,e_{4} ,e_{5} } \right\}\). The probability for each case is as follows:

$$p (e_{1} | \varepsilon ) = \frac{8}{18},\quad p (e_{2} | \varepsilon ) = \frac{3}{18},\quad p (e_{3} | \varepsilon ) = \frac{2}{18}, \quad p (e_{4} | \varepsilon ) = \frac{2}{18},\quad p (e_{5} | \varepsilon ) = \frac{3}{18}.$$

Hence, \(Route\_Predict\left( \varepsilon \right) \to e_{1} .\)
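The conditional probabilities in these cases can be read directly off the trie: each child's frequency divided by the sum of its siblings' frequencies. The nested-dict trie with counts under the key "#" is an assumed representation.

```python
from fractions import Fraction

# Probability distribution over the children of a trie node: each
# child's count divided by the total count of all sibling children.
def branch_probs(node):
    total = sum(c["#"] for e, c in node.items() if e != "#")
    return {e: Fraction(c["#"], total) for e, c in node.items() if e != "#"}
```

With root counts 8, 3, 2, 2, 3 for e1 through e5, this reproduces the fractions of Case I, e.g. p(e1 | ε) = 8/18.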

Case II::

Another case we explore is when only edge e2 has been traversed so far, \(S = e_{2}\). The input trajectory has length 1 and consists of a single edge. There is only one candidate edge after e2, namely e5. In this case, the probability of occurrence of e5 with e2 as context is \(p (e_{5} | e_{2} ) = 1\). Hence, Route_Predict (e2) → e5.

Case III::

The next case is when the input trajectory is \(S = \left\{ {e_{1} } \right\}\) and only the edge e1 has been traversed so far. Here, there are multiple candidates ({e2,e3}) with high probability after e1. The probabilities of the candidates are as follows:

$$p (e_{2} | e_{1} ) = \frac{3}{8}, \quad p (e_{3} | e_{1} ) = \frac{3}{8}$$

Hence, two edges are equally likely, and the tie will be resolved once more edges are traveled.

Case IV::

Next, we consider a case when multiple edges have been traveled and the input to the Route_Predict function is \(\left\{ {e_{1} ,e_{2} } \right\}\). The only candidate for the next edge is e5, given that the event of traveling over \(\left\{ {e_{1} ,e_{2} } \right\}\) has already occurred. Since p (e5|e1,e2) = 1, Route_Predict(e1,e2) → e5.

Case V::

Next, we consider a case when the user has traveled a path not yet seen by the PPM model. For example, if the user has traveled the path {e3,e4} but no such path exists in the trie, the path has not occurred in the past. Hence, the prediction function result is \(Route\_Predict\left( {e_{3} ,e_{4} } \right) \to \varepsilon\). This can happen when the user has reached the destination and there is nothing to predict, or when the route is a new one. In the latter case, the newly found route should be fed back to the model for learning.

Case VI::

All the above cases focused on predicting the next edge one hop ahead. The same model can be used to predict an end-to-end path as well. The input trajectory is \(\varepsilon\); the next edge selected is e1, from e2 the next probable edge is e5, and so on.
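End-to-end prediction can be sketched as repeatedly taking the highest-frequency child from the root until a leaf is reached. The greedy rule and the nested-dict trie (counts under the assumed key "#") are illustrative assumptions.

```python
# Greedy end-to-end path prediction: starting at the root of the
# nested-dict trie, repeatedly follow the highest-frequency child
# until no continuation exists.
def predict_path(trie):
    path = []
    node = trie
    while True:
        candidates = {e: c["#"] for e, c in node.items() if e != "#"}
        if not candidates:
            return path
        best = max(candidates, key=candidates.get)
        path.append(best)
        node = node[best]
```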

Implementation and evaluation

Map data: spatial road network data

OpenStreetMap provides various kinds of geographical spatial and non-spatial data sets such as water bodies, international boundaries, state boundaries, and road networks. In this work, we use digitized road network data downloaded from OSM. OSM provides open source data under a public open content license (https://www.openstreetmap.org). Data can be downloaded in a variety of formats: images, XML files, shape files, etc. We used only road network data from OSM. For smaller areas, data can be downloaded using the OSM interface (https://www.openstreetmap.org); for larger areas, the official vendor portal can be used (https://www.cloudmade.org). Data are available in various standard formats such as images (.jpg, .png, etc.) or XML, which comes with the extension .osm. We used the .osm format, which we parsed using the open source tool Osm2pgsql (wiki.openstreetmap.org/wiki/Osm2pgsql). It converts OSM data into PostGIS-compatible .sql files. The SQL data are then loaded into the spatial database PostGIS. We used the GeoServer tool for all data visualization; GeoServer supports easy connectivity to the PostGIS database. A snapshot of the OSM Beijing road network is shown in Fig. 6.

Fig. 6
figure 6

OSM road network

User location traces data

The GPS data corpus used in this research work is from the Geolife project, a GPS data collection effort run over the period 2007–2012 (Zheng et al. 2009; Zheng et al. 2008; Zheng et al. 2010). The Geolife GPS data set contains time-stamped positional information for around 182 users. It contains around 17,621 trajectories comprising 24,876,978 GPS data points. The lengths of all trajectories sum to 1.2 million km, with a total duration of around 48 thousand hours. The devices used to capture the data were GPS loggers as well as GPS phones with different recording frequencies. Of all the trajectories, 91% are dense data with a collection frequency of one point every 1–5 s or every 5–10 m (Lammel 2008). Data were collected from users performing a variety of activities, ranging from routine tasks like moving from home to office and back, to non-routine tasks such as sightseeing, cycling, and shopping (Zheng et al. 2009; Zheng et al. 2008; Zheng et al. 2010). Figure 7 shows GPS traces plotted from the Geolife GPS data corpus.

Fig. 7
figure 7

Geolife trajectory sample data

Implementation and evaluation were performed on a cluster of distributed nodes consisting of six compute nodes: one master and five worker nodes. Data were replicated with a factor of 5 to minimize data transfer latency. Each node in the cluster had 8 GB of internal memory and a 64-bit processor with four cores. The prediction accuracy versus the portion of the trip completed is shown in Fig. 8. Construction of the PPM tree on a single node is shown in Fig. 9. The PPM tree construction time on the Hadoop cluster for 2 million, 8 million and 12 million location traces is shown in Figs. 10, 11 and 12, respectively.

Fig. 8
figure 8

Prediction accuracy

Fig. 9
figure 9

Processing time of one single machine

Fig. 10
figure 10

Processing time of two million location traces on cluster

Fig. 11
figure 11

Processing time of eight million location traces on cluster

Fig. 12
figure 12

Processing time of twelve million location traces on cluster

Conclusion

In this work, the focus was on the construction of the PPM model in a distributed way from a huge corpus of GPS location traces. This model was then used for building a route prediction application. The application required road network data and GPS traces; both data sets were sourced from openly available sources: road network data from OSM and GPS data from the Geolife project. GPS location traces were decomposed into smaller units called user trips. User trips were map matched to the road network to convert the data into sequences of edges. This step is part of data preparation, which is a one-time activity. The map matching of GPS data to road network edges reduces the data size and makes model construction faster than building a model from raw GPS data. For distributed construction, data were stored in the HBase data store and the MapReduce framework was used for computation. The processing design was composed of two steps which map intuitively onto the MapReduce framework. The PPM model was constructed with the edges of the PPM tree annotated with the probability of their occurrence. The model was then used to predict the route given a partial trajectory. We observed that the model construction phase is the most time consuming, but over a distributed cluster the processing time decreases linearly with the addition of nodes to the cluster. Once the model is constructed, route prediction is not a time-consuming process: it amounts to traversing a branch of a multiway rooted tree and is linear in search time. All tools and data sets used in this work are openly available in the public domain. All the snapshots presented in this work were taken during implementation on real data sets.