Supporting Secure Dynamic Alert Zones Using Searchable Encryption and Graph Embedding

Location-based alerts have gained increasing popularity in recent years, whether in the context of healthcare (e.g., COVID-19 contact tracing), marketing (e.g., location-based advertising), or public safety. However, serious privacy concerns arise when location data are used in clear in the process. Several solutions employ Searchable Encryption (SE) to achieve secure alerts directly on encrypted locations. While doing so preserves privacy, the performance overhead incurred is high. We focus on a prominent SE technique in the public-key setting -- Hidden Vector Encryption (HVE) -- and propose a graph embedding technique to encode location data in a way that significantly boosts the performance of processing on ciphertexts. We show that finding the optimal encoding is NP-hard, and provide several heuristics that are fast and obtain significant performance gains. Furthermore, we investigate the more challenging case of dynamic alert zones, where the area of interest changes over time. Our extensive experimental evaluation shows that our solutions can significantly reduce computational overhead compared to existing baselines.


Introduction
Location data play an important part in offering customized services to mobile users. Whether they are used to find nearby points of interest, to offer location-based recommendations, or to locate friends situated in proximity to each other, location data significantly enrich the type of interactions between users and their favorite services. However, current service providers collect location data in clear, and often share it with third parties, compromising users' privacy. Movement data can disclose sensitive details about an individual's health status, political orientation, alternative lifestyles, etc. Hence, it is important to support such location-based interactions while protecting privacy.
Our focus is on secure alert zones, a type of location-based service where users report their locations in encrypted form to a service provider, and then receive alerts when an event of interest occurs in their proximity. This operation is very relevant to contact tracing, which is proving to be essential in controlling pandemics, e.g., COVID-19. It is important to determine whether a mobile user came in close proximity to an infected person, or to a surface that has been exposed to the virus, but at the same time one must protect against intrusive surveillance of the population. More applications of alert zones include public safety notifications (e.g., active shooter) and commercial applications (e.g., notifying mobile users of nearby sales events).
Searchable Encryption (SE) [5,15,24] is very suitable for implementing secure alert zones. Users encrypt their location before sending it to the service provider using a special kind of encryption, which allows the evaluation of predicates directly on ciphertexts. However, the underlying encryption functions are not specifically designed for geospatial queries, but for arbitrary keyword or range queries. As a result, a data mapping step is typically performed to transform spatial queries to the primitive operations supported on ciphertexts. Due to this translation, the performance overhead can be significant. Some solutions use Symmetric Searchable Encryption (SSE) [8,15,24], where a trusted entity knows the secret key of the transformation, and collects the locations of all users before encrypting them and sending the ciphertexts to the service provider. While the performance of SSE can be quite good, a system model that requires mobile users to share their cleartext locations with a trusted service is not adequate from a privacy perspective, since it still incurs a significant amount of disclosure.
To address the shortcomings of SSE models, the work in [5] introduced the concept of Hidden Vector Encryption (HVE), an asymmetric type of encryption that allows direct evaluation of predicates on ciphertexts. Each user encrypts her own location using the public key of the transformation, and no trusted component that accesses locations in clear is required. This approach has been applied in the location context in [12], [17], with encouraging results. However, the performance overhead of HVE in the spatial domain remains high. Motivated by this fact, we study techniques to reduce the computational overhead of HVE. Specifically, we derive special types of spatial data mappings using graph embeddings, which allow us to express spatial queries with predicates that are less computationally intensive to evaluate.
In existing HVE work for geospatial data [12], [17], the data domain is partitioned into a hierarchical data structure, and each node in this structure is assigned a binary string identifier. The binary representation of each node plays an important part in the query encoding, and it influences the amount of computation that needs to be executed when evaluating predicates on ciphertexts. However, the impact of the specific encoding has not been evaluated in depth. Our approach embeds the geospatial data domain into a high-dimensional hypercube, and then applies graph embedding [6] techniques that directly target the reduction of computational overhead in the predicate evaluation step. Finally, no existing work considers the case of alert zones that change over time. Support for dynamic alert zones is very important, given that in most use case scenarios, phenomena of interest evolve over time (e.g., places visited by COVID carriers, the area affected by a gas leak, etc.). Our work tackles this important challenge.
Our specific contributions are:
- We introduce a novel transformation of the spatial data domain based on graph embedding that is able to model accurately the performance overhead incurred when running HVE queries for spatial predicates;
- We transform the problem of minimizing HVE computation to a graph problem, and show that finding the optimal solution is NP-hard;
- We devise several heuristics that solve the problem efficiently in the embedded space, while significantly reducing the computational overhead;
- We propose models that take into account the spatial and temporal evolution of alert zones, and choose encodings that improve performance under dynamic conditions;
- We perform an extensive experimental evaluation which shows that the proposed approaches are able to halve the performance overhead incurred by HVE when processing spatial queries.
The rest of the paper is organized as follows: Section 2 introduces necessary background on the system model (an HVE primer is given in Appendix A). Section 3 provides the details of the proposed graph embedding transformation. Section 4 introduces several heuristic algorithms that solve the problem efficiently. Section 5 focuses on the modeling of dynamic alert zones, and on advanced encodings under changing conditions. Section 6 thoroughly evaluates the proposed approach on real-life datasets. We survey related work in Section 7 and conclude in Section 8.

System Model
Consider a [0,1]×[0,1] spatial data domain divided into n non-overlapping partitions, denoted as V = {v_1, v_2, ..., v_n}. We use the term cell to refer to partitions, which can have an arbitrary size and shape. An example of such a partitioning is provided in Fig. 3a. The system architecture of the location-based alert system is represented in Fig. 1, and consists of three types of entities: (i) mobile users, who encrypt their locations and send them to the server; (ii) a trusted authority (TA), which generates the search tokens corresponding to alert zones; and (iii) the server, which receives encrypted updates from users and search tokens from the TA, and performs the predicate evaluation to decide whether encrypted location C_i of user i falls within alert zone j represented by token TK_j.
If the predicate holds, the server learns message M i encrypted by the user, otherwise it learns nothing.
Table 1 summarizes the notations used throughout the manuscript.
The system supports location-based alerts, with the following semantics: a Trusted Authority (TA) designates a subset of cells as an alert zone, and all the users enclosed by those cells must be notified. The TA can be, for instance, the Center for Disease Control (CDC), which monitors cases of a pandemic and wishes to notify users who may have been affected; or, the TA can be some commercial entity that the users subscribe to, and which notifies users when a sales event occurs at selected locations.
The privacy requirement of the system dictates that the server must not learn any information about the user locations, other than what can be derived from the match outcome, i.e., whether the user is in a particular alert zone or not. In case of a successful match, the server S learns that user u is enclosed by zone z. In case of a non-match, the server S learns only that the user is outside the zone z, but no additional location information. Note that this model is applicable to many real-life scenarios. For instance, users wish to keep their location private most of the time, but they want to be immediately notified if they enter a zone where their personal safety may be threatened. Furthermore, the extent of alert zones is typically small compared to the entire data domain, so the fact that S learns that u is not within the set of alert zones does not disclose significant information about u's location. The TA can be an organization such as the CDC, or a city's public emergency department, which is trusted not to compromise user privacy, but at the same time does not have the infrastructure to monitor a large user population, and outsources the service to a cloud provider.
Fig. 3: An example of embedding graphs generated based on a sample grid.

Problem Statement
Prior work [12,17] assumed that all cells are equally likely to be in an alert zone. However, that is not the case in practice. Some parts of the data domain (e.g., denser areas of a city) are more likely to become alert zones. The cost of encrypted alert zone enclosure evaluation is given by the number of operations required to apply HVE matching at the service provider. As we discuss in our HVE primer in Appendix A, the evaluation cost is directly proportional to the number of non-star bits in the tokens. Armed with knowledge about the likelihood of cells being part of an alert zone, one can create superior encodings that reduce processing overhead.
Our goal is to find an enhanced encoding that reduces non-star bits for a given set of alert zone tokens. Denote by p(v_i) the probability of cell v_i being part of an alert zone. The mutual probability of multiple cells indicates how likely they are to be part of the same alert zone. Given individual cell probabilities, the mutual probability of a set of i cells L = {v_1, v_2, ..., v_i} is calculated as:

p(L) = p(v_1) × p(v_2) × ... × p(v_i) = prod_{j=1}^{i} p(v_j).    (2)

The problem we study is formally presented as follows:

Problem 1. Find an encoding of the grid that on average reduces the number of non-star bits in the tokens generated from alert zone cells.

In the above formulation, the correlation between cells becoming part of an alert zone is assumed to be negligible. In essence, the assumption is that cells are independent in time and space (in Section 5, we provide an advanced modeling of the correlation of alert zones over space and time).
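To make the cost model concrete, the following is a minimal Python sketch (an illustration, not the paper's implementation) of how a token covering a set of k-bit cell indices determines the evaluation cost: bit positions on which all indices agree keep their value, while positions that differ become stars, and the cost is the number of non-star positions.

```python
def token_for(cells, k):
    """Token covering a set of cell indices: a bit position keeps its
    value if all indices agree on it, and becomes a star otherwise."""
    bits = []
    for p in reversed(range(k)):          # most significant bit first
        vals = {(c >> p) & 1 for c in cells}
        bits.append(str(vals.pop()) if len(vals) == 1 else "*")
    return "".join(bits)

def non_star_cost(cells, k):
    """HVE evaluation cost is proportional to the non-star bit count."""
    return sum(b != "*" for b in token_for(cells, k))
```

For the four cells indexed 0000 through 0011, the token is 00** and the cost is 2; a good encoding therefore tries to assign frequently co-alerted cells indices that agree on many bits.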

Location Domain Mapping through Graph Embedding
Our approach minimizes the number of non-star bits in alert zone tokens by modeling the data domain partitioning as an embedding problem of a k-cube onto a complete graph. We denote a k-cube as G_1(C, E_1), where C = {c_1, c_2, ..., c_n} and c_i ∈ {0, 1}^k. Fig. 3b illustrates a k-cube generated based on the sample partitioning in Fig. 3a. In G_1, two nodes c_i and c_j are connected if their Hamming distance is equal to one; we refer to the single differing bit as a Hamming bit.
Definition 1 (Hamming Distance and Bits). The Hamming distance between two indices c_i and c_j in G_1(C, E_1) is the minimum number of substitutions required to transform c_i into c_j, denoted by the function d_h(.). We refer to the bits that need to be substituted as the Hamming bits of the indices.

Example 1
The Hamming distance between indices c_1 = 0100 and c_2 = 0010 is two (d_h(c_1, c_2) = 2), and the Hamming bits are the second and third most significant bits of the indices.
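Definition 1 and Example 1 can be reproduced with a few lines of Python (an illustrative sketch):

```python
def hamming_distance(a, b):
    """d_h: number of bit substitutions needed to turn index a into b."""
    return bin(a ^ b).count("1")

def hamming_bits(a, b, k):
    """Positions of the Hamming bits (0 = least significant bit)."""
    return [p for p in range(k) if (a ^ b) >> p & 1]
```

For c_1 = 0100 and c_2 = 0010 this yields distance 2 with Hamming bits at positions 1 and 2, i.e., the second and third most significant bits of a 4-bit index.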
The second graph required to formulate the problem of minimizing the number of non-stars is a complete graph generated by all cells in the partitioning, denoted by G 2 (V, E 2 ).The set V represents the nodes corresponding to cells, and an undirected edge connects every two nodes in G 2 .
Note that every token (including those containing stars) can be related to several cycles on the k-cube. For example, token 00** represents the four indices 0000, 0001, 0010, 0011, which correspond to cycles (c_1, c_2, c_6, c_3) and (c_1, c_3, c_6, c_2) on the k-cube in Fig. 3b. Unfortunately, there is no one-to-one correspondence between the tokens and the cycles. In particular, for a larger number of stars, there exist several cycles representing the same token. To generate a one-to-one correspondence, we incorporate Binary-Reflected Gray (BRG) encoding on the k-cube to create unique cycles corresponding to tokens.
Definition 2 (BRG path on k-cube). A BRG path between two nodes with non-zero Hamming distance is defined as the path on the k-cube going from one node to the other based on BRG coding on the Hamming bits.
As an example, the Hamming bits between 0001 and 1000 are the least and most significant bits, and the BRG path connecting them on the k-cube in Fig. 3b includes indices 0001, 1001, and 1000, in the given order. One can see that as BRG codes are unique, the BRG path between two indices on the k-cube is also unique. This characteristic of BRG paths is formulated in Lemma 1.
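One way to construct a BRG path (a sketch under our own reading of Definition 2; the paper does not prescribe an implementation) is to restrict attention to the Hamming bit positions, walk the x-bit binary-reflected Gray sequence between the two restricted patterns, and re-expand each pattern onto the full k-bit index:

```python
def gray_sequence(x):
    """Binary-reflected Gray sequence of all x-bit values."""
    return [i ^ (i >> 1) for i in range(2 ** x)]

def brg_path(a, b, k):
    """BRG path from index a to index b on the k-cube (Definition 2)."""
    pos = [p for p in range(k) if (a ^ b) >> p & 1]   # Hamming bit positions
    mask = sum(1 << p for p in pos)
    base = a & ~mask                                   # bits shared by a and b

    def compress(v):                                   # k-bit index -> x-bit pattern
        return sum(((v >> p) & 1) << j for j, p in enumerate(pos))

    def expand(pat):                                   # x-bit pattern -> k-bit index
        return base | sum(((pat >> j) & 1) << p for j, p in enumerate(pos))

    seq = gray_sequence(len(pos))
    ia, ib = seq.index(compress(a)), seq.index(compress(b))
    step = 1 if ib >= ia else -1
    return [expand(seq[i]) for i in range(ia, ib + step, step)]
```

For the example above, brg_path(0b0001, 0b1000, 4) returns the indices 0001, 1001, 1000 in that order, and consecutive indices on any returned path differ in exactly one bit.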
Lemma 1 A BRG path between two nodes on a k-cube is unique.
Proof. The uniqueness of the path between two nodes on the k-cube follows from the uniqueness of the BRG code, as only one such path can be constructed.

Definition 3 (Complete x-bit BRG cycle). Given a k-cube, a complete x-bit BRG cycle is a cyclic BRG path with a length of 2^x, in which only x bits are affected. We denote the set of all possible complete x-bit BRG cycles by L_x = {l_i}.
We can uniquely associate a token to a cycle on the k-cube. Consider a token with k bits and x stars. This token is mapped to a complete x-bit BRG cycle on the k-cube, starting from a node in which all the star bits are set to zero. Such a cycle is unique and has a length of 2^x. Based on this mapping, every token is associated with a unique cycle on the k-cube, and every complete x-bit BRG cycle is mapped to a unique token with x stars. Therefore, there is a one-to-one correspondence between tokens and complete BRG cycles. The formulation of Problem 1 based on graph embedding can then be stated as:

Problem 2. Find an assignment of the vertices of G_2 to the nodes of G_1 that maximizes the total probability of complete BRG cycles.    (3)

Gray Optimizer (GO)
The problem of embedding a complete graph within a k-cube of minimized size has been shown to be NP-hard [6]. We develop a heuristic algorithm called Gray Optimizer that solves Problem 2. Consider an initial node of the complete graph v_r ∈ V, and without loss of generality assume that it is assigned to index c_1. We refer to nodes in G_1 interchangeably by their vertex id or binary index. The optimization problem can be formulated as follows.
Problem 3. Given two graphs G_1(C, E_1) and G_2(V, E_2), and the node v_r ∈ V assigned to index c_1, find a mapping of the remaining vertices of G_2 to the nodes of G_1 that maximizes the probability of the complete BRG cycles passing through c_1.

Problem 2 requires an assignment of vertices in G_2 to the nodes of G_1 such that the probability of complete BRG cycles is maximized, whereas Problem 3 seeks to maximize the probability of cycles with respect to a particular node, in this case v_r, which is assigned to index c_1. A reasonable candidate for assignment to c_1 is the cell with the highest probability, as it is most likely to be part of an alert zone. To solve this problem, we propose the heuristic in Algorithm 1. The input of the algorithm is the root index c_1 ∈ G_1, the root node v_r ∈ G_2 (also called the seed), and the graphs G_1 and G_2.
Denote by D_{i|c_1} the set of nodes in C that have a Hamming distance of i from c_1. Note that D_{i|c_1} contains (k choose i) nodes, each having a Hamming distance of i from c_1. The overall assignment structure is as follows: first, Algorithm 1 assigns the remaining nodes of V of the graph G_2 to nodes in D_{1|c_1}. After the assignment of all nodes in D_{1|c_1}, the algorithm assigns the nodes in D_{2|c_1}, and follows the same process until all nodes are assigned (D_{1|c_1} to D_{k|c_1}). An initial sorting of the nodes in V is conducted at the start of the algorithm, and is used throughout the assignment process to reduce the computational complexity.
The assignment objective in stage i of the process is to maximize p(L_i | v_r).
Note that the objective can be decomposed by stage, where p(L_i | v_r) represents the probability of all complete i-bit BRG cycles that include c_1 (v_r → c_1). Denote such a cycle by l. Based on Lemma 2 below, there exists one and only one node c_j in l that has a Hamming distance of i from c_1, which means that c_j ∈ D_{i|c_1}. Therefore, every complete i-bit BRG cycle given index c_1 includes one node in D_{i|c_1}. On the other hand, every node in D_{i|c_1} corresponds to a unique complete i-bit BRG cycle passing through c_1, as follows from Lemma 1. Therefore, all complete i-bit BRG cycles are considered in stage i, and we maximize their probabilities in this stage of the assignment.
Lemma 2. For each node c_i in a complete x-bit BRG cycle, there exists one and only one node with a Hamming distance of x from c_i.
Proof. A complete x-bit BRG cycle includes 2^x nodes, and only x bits are affected. Therefore, the only index that can exist with a Hamming distance of x from c_i is the one in which all x Hamming bits are flipped.
The assignment process in stage i of GO creates a bipartite graph (H_1, H_2, E_3), where H_1 and H_2 are two sets of nodes, and E_3 represents the set of edges. In this stage, the nodes in sets D_{1|c_1}, D_{2|c_1}, ..., D_{i-1|c_1} are already assigned, and we aim to find the best assignment for the nodes in D_{i|c_1} such that p(L_i | v_r) is maximized. Among the remaining nodes in V, we choose the (k choose i) of them that have the highest probabilities, as |D_{i|c_1}| = (k choose i), and allocate them to H_1.
On the other hand, for each node c_j in D_{i|c_1}, we construct the unique complete i-bit BRG cycle including c_j and c_1. Let us represent this cycle by l_j. Note that all nodes included in l_j are assigned except c_j. The algorithm calculates the probability of the set of nodes in l_j excluding c_j, and allocates it to a node in H_2. Based on (2), this probability is the product of the probabilities of the cells assigned to the nodes of l_j, excluding c_j.
The algorithm repeats the process for all nodes in D_{i|c_1}.
Next, the nodes in H_2 are sorted, and the best matching between the two sets of nodes is conducted by assigning the i-th node of H_1 to the i-th node of H_2.
The optimality of the matching process is proven in Lemma 3, and the achievement of maximal assignment in each stage is proven in Lemma 4.
Lemma 3. Suppose in the i-th step of the algorithm h_1 to h_(k choose i) are the members of H_1 and h'_1 to h'_(k choose i) are the members of H_2, each sorted in descending order. The optimal value of the matching is achieved when h_i is matched with h'_i.
Proof. Suppose that the converse is true. Hence, there exist two nodes h_i and h_k which are paired with h'_j and h'_t, respectively, such that h_i ≤ h_k and h'_j ≥ h'_t. Since the current matching is maximal, swapping h'_j and h'_t cannot increase its value:

h_i h'_j + h_k h'_t + R ≥ h_i h'_t + h_k h'_j + R,

where R indicates the summation over the remaining pairs. Rewriting this inequality results in

(h_i − h_k)(h'_j − h'_t) ≥ 0.

However, h_i ≤ h_k and h'_j ≥ h'_t; therefore, the left-hand side is always less than or equal to zero, which is a contradiction. The case of equality is excluded, as swapping then does not change the summation and the lemma holds.
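Lemma 3 is the rearrangement inequality in disguise; the sketch below (illustration only) pairs the two sorted sequences, and its optimality can be checked by brute force over all matchings:

```python
from itertools import permutations

def sorted_matching(h1, h2):
    """Pair the i-th largest element of H1 with the i-th largest of H2."""
    return list(zip(sorted(h1, reverse=True), sorted(h2, reverse=True)))

def matching_value(pairs):
    """Objective maximized in each stage: the sum of matched products."""
    return sum(a * b for a, b in pairs)
```

A brute-force check over every permutation of H2 confirms that no other pairing achieves a larger sum of products.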
Lemma 4. In each stage i, given the assignments of stages 1 to i − 1, GO maximizes p(L_i | v_r).

Proof. We prove the lemma by mathematical induction.
Base case: For i = 1, given that the node v_r is assigned to c_1, we aim to prove that GO maximizes p(L_1 | v_r). To start, GO chooses the (k choose 1) = k remaining nodes of V with the highest probabilities for the purpose of assignment. The optimal assignment of nodes in D_{1|c_1} is a permutation of the chosen nodes; otherwise, a chosen node could be replaced with a node of higher probability, resulting in a higher value of p(L_1 | v_r). Next, the algorithm generates a bipartite graph (H_1, H_2, E_3). The probabilities of the chosen nodes are allocated to H_1, and the nodes in H_2 represent the probabilities of the complete 1-bit BRG cycles constructed from c_j ∈ D_{1|c_1} and the node c_1, excluding the probability of c_j itself. Next, the optimal matching is done by assigning the j-th maximum node in H_2 to the j-th maximum node in H_1, achieving maximal p(L_1 | v_r) given the node c_1.
Induction step: Assume that GO has maximized the probabilities of complete x-bit BRG cycles for x = 1 to i − 1 in stages one to i − 1. We prove that in stage i, the algorithm maximizes complete i-bit BRG cycles, given the previously assigned nodes.
Based on Lemma 2, all complete i-bit BRG cycles are considered in stage i, as each such cycle includes exactly one node in D_{i|c_1}, which has the highest Hamming distance from c_1. GO starts by choosing the cells with the highest probabilities and assigning them to H_1. As in the base case, we know that the optimal assignment in this stage consists of the chosen set of nodes. Next, the nodes in H_2 are assigned based on finding the probability of the complete i-bit BRG cycle for each node in D_{i|c_1}, excluding the node itself from the probability. As the matching process is optimal, the best permutation of nodes in H_1 is matched to the complete i-bit BRG cycles.

Scaling Up Gray Optimizer
The GO algorithm can lead to significant improvements in the processing of HVE operations; however, there are two major drawbacks once the algorithm is applied to grids of high granularity: (i) the complexity of the algorithm creates a processing time bottleneck for its application in HVE; and (ii) the calculation of probabilities for large complete BRG cycles may result in numerical inaccuracies. To make GO applicable to grids with higher levels of granularity, we propose two variations.
The first proposed algorithm, called Multiple Seed Gray Optimizer (MSGO) (Section 4.1), generates non-overlapping clusters and applies GO within each one of them. The second algorithm, called Scaled Gray Optimizer (SGO) (Section 4.2), takes a Breadth-First Search (BFS) [16] approach. BFS is preferred to its counterpart Depth-First Search (DFS) because the nodes closer to the seed have higher probabilities; thus, it is reasonable to consider those nodes earlier in the process.

Multiple Seed Gray Optimizer (MSGO)
The starting point of the GO algorithm, which we refer to as the seed, was chosen as the node in G_2 with the maximum probability. However, the algorithm can start with any initial seed, and then follow the assignment process for the other nodes in ascending order of their Hamming distance from the seed. Furthermore, as BRG cycles become larger, their associated probabilities become smaller. Thus, one way to reduce the complexity of GO is to run the algorithm only up to a particular depth; essentially, the algorithm then optimizes BRG cycles up to a certain length. We enhance GO by running Algorithm 1 with multiple seeds, and also by limiting the depth of the assignment.

Definition 4 (Depth). For a given seed c_j, the GO algorithm is said to run with a depth of i if it only considers the assignment of nodes in D_{1|c_j}, D_{2|c_j}, ..., D_{i|c_j}.
The pseudocode of the proposed approach is presented in Algorithm 2. The algorithm starts by assigning the node with the highest probability in G_2 to the origin of G_1 or to a random index. However, instead of running GO with respect to this index for all depths from one to k, MSGO runs GO with the specified depth given as input. The algorithm completes the process of assignment for a cluster of indices in G_1. MSGO then chooses a random index of G_1 among the remaining indices and assigns to it the node in G_2 with the maximum probability among the remaining nodes. Similarly, this index is used as a seed for GO with the specified depth, generating a new cluster. The cluster-based approach continues until all nodes are assigned to an index. The algorithm supports variable cluster sizes based on the underlying application.
The MSGO algorithm provides a robust solution for grids with higher granularity. The algorithm no longer suffers the drawbacks of GO when the grid size grows, such as numerical inaccuracies in the calculation of the probabilities of large cycles. The complexity of the algorithm depends on the depth chosen as input, and for low depths it can be implemented in O(n log_2 n). MSGO can significantly reduce the number of operations required for the implementation of HVE in location-based alert systems, making it a practical solution for preserving the privacy of users.

Algorithm 2: Multiple Seed Gray Optimizer (MSGO)
Input: G_1; G_2; depth
1 Sort nodes in G_2 based on probabilities
2 Select a random index on G_1 which is not currently assigned
3 Assign the index to the node that has the maximum probability in G_2
4 Apply Algorithm 1 on the selected index with the specified depth
5 Repeat lines 2-4 until all indices are assigned
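The clustering loop of Algorithm 2 can be sketched as follows. This is an illustration only: run_go stands in for a depth-limited run of Algorithm 1 and is an assumed interface, not the paper's code.

```python
def msgo(cell_probs, k, depth, run_go):
    """MSGO sketch: seed a depth-limited GO run on a free index, assign
    the resulting cluster, and repeat until every index is assigned."""
    n = 2 ** k
    # line 1: cells sorted by probability, highest first
    cells = sorted(range(len(cell_probs)), key=lambda c: -cell_probs[c])
    assignment, free = {}, set(range(n))
    next_cell = iter(cells)
    while free:
        seed_idx = min(free)                # line 2: any unassigned index
        seed_cell = next(next_cell)         # line 3: best remaining cell
        # line 4: run GO from this seed with the given depth; it returns
        # a partial index -> cell assignment forming one cluster
        cluster = run_go(seed_idx, seed_cell, depth, free)
        assignment.update(cluster)
        free -= set(cluster)
    return assignment
```

With a trivial stub for run_go that assigns only the seed itself, the loop still terminates with every index assigned, which is the invariant line 5 relies on.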

Scaled Gray Optimizer (SGO)
SGO considers overlapping clusters and requires every node to act as a seed during the assignment process. The pseudocode of the proposed approach is presented in Algorithm 3. SGO starts by assigning the node with the highest probability to an index on G_1. However, instead of assigning indices at all depths from one to k with respect to index c_1, the SGO algorithm runs GO with a depth of one. Next, SGO sorts the indices in D_{1|c_1} based on their assigned probabilities in descending order, and runs GO with a depth of one on each index. Once the algorithm has been applied to all the indices in D_{1|c_1}, the process repeats for the indices in D_{2|c_1}, D_{3|c_1}, ..., etc. The algorithm continues until all indices are assigned to a node.

Complexity Analysis
The key computational overhead of the GO algorithm lies in the calculation of the probabilities of BRG cycles. Let the function T(.) return the computational complexity. In the i-th step of the algorithm, the nodes with a Hamming distance of i from c_1, i.e., D_{i|c_1}, are assigned to indices on the k-cube. The number of nodes in D_{i|c_1} is (k choose i). For each such node, the complete BRG cycle is calculated, which requires the multiplication of 2^i − 1 probabilities. Therefore, the assignment process for the nodes in D_{i|c_1} requires (k choose i)(2^i − 1) operations, and the total number of operations required by the algorithm is

T(GO) = sum_{i=1}^{k} (k choose i)(2^i − 1).

From the binomial theorem, sum_{i=0}^{k} (k choose i) 2^i = 3^k and sum_{i=0}^{k} (k choose i) = 2^k; therefore, the total can be written as 3^k − 2^k. In addition to the above operations, there is an initial sorting of the probabilities, which can be implemented with merge sort at a complexity of O(n log_2 n), and a sorting process in each stage for the nodes in H_2.

The MSGO algorithm is based on executing the GO algorithm with shorter depths in a cluster-based approach. Suppose that the depth is set to r, where r ≤ log_2 n. Running the algorithm in each cluster, with similar logic as for GO, requires sum_{i=1}^{r} (k choose i)(2^i − 1) operations. On the other hand, the total number of clusters is approximately n / sum_{i=0}^{r} (k choose i); the total complexity is obtained by multiplying the per-cluster cost by the number of clusters and adding the initial sorting cost. Defining the binary entropy function as H(p) = −p log_2 p − (1 − p) log_2(1 − p), the approximation sum_{i=0}^{r} (k choose i) ≈ 2^{k H(r/k)} can be used to derive closed-form expressions for various cluster sizes.

Lastly, the SGO algorithm executes the GO algorithm with a depth of one, and has a computational complexity of O(n log_2 n). The low computational complexity of SGO makes it a suitable option for the encoding of grids with higher levels of granularity.
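The per-stage cost discussed above ((k choose i) nodes in stage i, each requiring 2^i − 1 probability multiplications) can be checked numerically against the binomial-theorem closed form (a small verification sketch):

```python
from math import comb

def go_ops(k):
    """Total multiplications in GO: stage i handles C(k, i) nodes,
    each costing 2**i - 1 probability multiplications."""
    return sum(comb(k, i) * (2 ** i - 1) for i in range(1, k + 1))
```

Since sum C(k, i) 2^i = 3^k and sum C(k, i) = 2^k, the total equals 3^k − 2^k, which grows super-linearly in the number of cells n = 2^k; this is why depth-limited variants are needed at high granularity.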

Supporting Dynamic Alert Zones
So far, we have considered the case of static alert zones, and optimized the data encoding and token generation under this scenario. However, in practice, alert zones vary over time. Whether an alert corresponds to a natural phenomenon (e.g., a gas leak) or a human activity (e.g., the movement of a COVID carrier), alert zones exhibit spatio-temporal patterns that must be accounted for in order to obtain fast performance.
We maintain the grid-based partitioning of the spatial domain used for the static case, and we denote by state of the grid the set of all alert cells at a given time. The occurrence probability of a state can be modeled analytically and used as a basis for grid encoding. The higher the accuracy of the statistical model, the more precise the encoding becomes, reducing the overhead of HVE operations. Next, we build a comprehensive statistical model to characterize alert zone evolution in space and time.
Definition 5 (State Space). For a given grid, let X be a random variable defined over all possible subsets of the cells. The state space of X is the power set of the cells, whose elements are indexed as S_n = {1, 2, ..., 2^n}.
The cardinality of a state i represents the number of cells included in the state, and is denoted by |i|. The set of all states with cardinality j is denoted by S_n^{|j|}. Note that the notation is not concerned with a precise order of states. For example, a grid with two cells {v_1, v_2} leads to the state space S_2 = {∅, {v_1}, {v_2}, {v_1, v_2}}, which is depicted by S_2 = {1, 2, 3, 4}; however, the order of states is not captured by the notation. We provide more details on the construction of the state space and its ordering in Section 5.4.
Let X_0, X_1, ..., X_i, ... denote the sequence of random variables modeling the occurrence of alert zones. The set of possible values for each X_i is the state space of the grid, and the index i denotes the evolution of the process in time. The probability of X_i being in a particular state j is denoted as p(X_i = j). The probability of a cell becoming part of an alert zone depends on the properties of the underlying phenomena, existing correlations among cells, and the history of alert zones on the map. Moreover, probabilities do not remain constant over time. We identify several distinct scenarios, and create a statistical model for each: (i) the states are independent in both space and time; (ii) the states are independent in space, but dependent in time (i.e., temporal causality); (iii) the states are independent in time, but exhibit spatial correlation (i.e., spatial causality); and (iv) the states are dependent in both time and space. The first case corresponds to the static case introduced in the previous sections; the last case is the most general one, whereas cases (ii) and (iii) are special cases of (iv). Each case may be relevant under different types of applications and data domains. Next, we investigate each of the cases in detail, and propose a data encoding and token generation technique for each. Our goal is to obtain an accurate representation of how the probabilities of X_i are distributed over the state space.

Independence in Time and Space
The independence assumption in space and time greatly simplifies the problem formulation, as the sequence of random variables X_0, X_1, ..., X_i, ... becomes a sequence of independent and identically distributed (iid) random variables defined over the state space. Such modeling indicates that the random variables X_1 to X_i provide no information about the random variable X_{i+1}. Therefore, the probability mass function (PMF) of X_i depends only on the probabilities of individual cells. For a given X_i, the probability of cell v_i ∈ V becoming part of the alert zone is denoted by p(v_i), a value between zero and one. The mutual probability of a subset of cells L = {v_1, v_2, ..., v_i} being in an alert zone can be calculated as

p(L) = prod_{j=1}^{i} p(v_j).

The calculation of mutual probabilities is a direct result of the independence assumption, which indicates that there are no correlations between cells.
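Under the iid assumption, the mutual probability is simply a product, and the PMF of a full grid state follows from the same independence argument (each non-alert cell contributes 1 − p(v)). A short illustrative sketch:

```python
from math import prod

def mutual_probability(cell_probs):
    """p(L) under independence: product of per-cell alert probabilities."""
    return prod(cell_probs)

def state_probability(state, cell_probs):
    """PMF of X_i for a full grid state: alert cells contribute p(v),
    non-alert cells contribute 1 - p(v) (independence assumption)."""
    return prod(p if v in state else 1 - p for v, p in enumerate(cell_probs))
```

A quick sanity check is that the state probabilities sum to one over the entire state space.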

Independence in Space, Dependence in Time
In this case, the grid states no longer consist of iid random variables following the same PMFs. The probability of being in state i at time j is no longer assumed to equal the probability of being in state i at a different time k, i.e., p(X_j = i) ≠ p(X_k = i). Our objective is to determine whether the system reaches a steady state in which the probabilities no longer change significantly over time. We model the evolution of alert cells over time using Markov chains. We assume that alert zones evolve incrementally, by addition or removal of a single cell at a time (this can always be achieved by properly choosing the time granularity).
The proposed model is represented in Fig. 5. States i and j are connected if and only if their cardinalities differ by one, ||i| − |j|| = 1. The only exception is the state including all cells (if all cells are within the alert zone, then all have the same status). The model assumes that each state depends only on the previous state, and therefore it follows the Markov chain property, i.e., for all k ≥ 0,

p(X_{k+1} = j | X_k = i, X_{k−1} = i_{k−1}, ..., X_0 = i_0) = p(X_{k+1} = j | X_k = i)

Propagation to a state with a higher cardinality indicates the addition of an alert cell, whereas propagation to a state with a lower cardinality indicates the removal of an alert cell.
The value of p(X_{k+1} = j | X_k = i) is called the transition probability from state i to state j, and we implicitly assume that the transition probabilities are homogeneous over time. We are interested in understanding the likelihood of reaching a state starting from any other state, and whether the chain reaches a stationary distribution in which the probabilities of individual states do not change over time.
First, we review three properties of the proposed Markov chain:

Property 1

All states in the proposed model are recurrent. Therefore, starting from any state of the chain, it is possible to eventually reach any other state.

Property 2
The proposed Markov chain is irreducible, as for any two states i and j, it is possible to reach one from the other in a finite number of steps.

Property 3
The proposed Markov chain for modeling alert zones is aperiodic, as the period of states is equal to one.
The above properties help characterize the long-term behaviour of the Markov chain. If after a certain period of time the transition matrix of the chain reaches a stationary distribution, it enables us to know the probability of each state in the state space. The state transition matrix is defined as follows:

Definition 6 (Transition matrix). For a Markov chain X_0, X_1, ..., X_i, ... with a state space S_n = {1, 2, ..., 2^n}, let q_{ij} = p(X_{k+1} = j | X_k = i) be the transition probability from state i to state j. The 2^n × 2^n matrix Q_n = (q_{ij}) is called the transition matrix of the chain. The value of q_{ij} for i < 2^n is defined as p(v), where v is the alert cell which exists in state i (row) and does not exist in state j (column). Recall that two states are connected if and only if their cardinalities differ by one. The last row of the matrix represents the only outgoing directed edge, from the state with cardinality n to the state with cardinality zero. Thus, the first element of the last row is one (q_{2^n,1} = 1) and all its other elements are zero. Such a row ensures the aperiodicity of the chain.
It can be inferred that the i-th row of the transition matrix corresponds to the outgoing edges from state i of the Markov chain. Therefore, in order for the matrix to maintain the Markovian properties, the values in each row should sum to one, which is indeed the case for the proposed transition matrix. This property is termed the Markovian matrix property. Let a row vector t = [t_1, t_2, ..., t_{2^n}] be the PMF of X_0, where t_i = p(X_0 = i). Then, based on the properties of Markov chains, the marginal distribution of X_m is given by the j-th component of t Q_n^m, i.e., p(X_m = j). The marginal distribution indicates that given an initial distribution t, the probability of being in state j after m transitions is the j-th component of the vector t Q_n^m. We are interested in the long-run behaviour of the system, and in understanding whether the proposed model will reach a stationary distribution.
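The marginal distribution t Q_n^m can be computed directly for small chains. The sketch below uses a hypothetical two-state chain (not the full 2^n alert-zone matrix) to illustrate convergence toward the stationary distribution:

```python
import numpy as np

def marginal_after_m_steps(t, Q, m):
    """Marginal PMF of X_m given initial distribution t: t @ Q^m.
    Each row of Q must sum to one (Markovian matrix property)."""
    assert np.allclose(Q.sum(axis=1), 1.0)
    return t @ np.linalg.matrix_power(Q, m)

# Toy two-state chain, illustrative only.
Q = np.array([[0.9, 0.1],
              [0.4, 0.6]])
t = np.array([1.0, 0.0])  # start in state 1 with certainty
print(marginal_after_m_steps(t, Q, 50))  # converges to the stationary distribution [0.8, 0.2]
```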
Definition 7 (Stationary distribution). Given a Markov chain with the transition matrix Q_n, a row vector s = [s_1, ..., s_{2^n}], such that s_i ≥ 0 and Σ_i s_i = 1, is a stationary distribution if s Q_n = s. We elaborate further on the meaning of the vector s. Suppose that the i-th element of the vector corresponds to state i. If the proposed Markov chain reaches a stationary distribution, this value represents the probability p(X_n = i) for any n after reaching the stationary distribution. Thus, the importance of each state is revealed by its corresponding value in s. There are three important questions to be answered about the stationary distribution: (a) does it exist? (b) is it unique? and (c) does the Markov chain converge to the stationary distribution? The stationary distribution is the left eigenvector of the transition matrix corresponding to the eigenvalue of one, as shown by Eq. (26). The existence and uniqueness of a stationary distribution for the proposed Markov model is proven in the following theorem.
Theorem 1. There exists a unique stationary distribution for the proposed Markov chain used to model alert zones.

Proof. According to [2], a stationary distribution exists for any finite-state Markov chain, and if the chain is irreducible, the solution is unique. Based on Property 2, there exists a unique stationary distribution for the model. Later, in Section 5.4, we present the recursive construction of the matrix Q_n and show that the cardinality of the null space of the matrix (Q_n − I) is one.
The above theorem shows that there exists a unique stationary distribution for the proposed Markov model regardless of the initial probabilities of the cells; however, to reach the stationary distribution, the chain needs to be aperiodic as well as irreducible. Based on Property 3, the proposed model is aperiodic. However, particular initial probabilities, including zero values, can result in periodic chains. To address this problem, we adopt an approach similar to the PageRank algorithm [21], used to rank the relevance of webpages. Suppose that before moving to a new state on the chain, a coin is tossed with probability α of heads. If the result of the coin toss is heads, the state evolves using the transition matrix Q_n; otherwise, the system jumps to a uniformly random state. The resulting transition matrix is represented as

O_n = α Q_n + ((1 − α)/2^n) J_n

where J_n is a 2^n × 2^n matrix of all ones. The recommended value [21] of α is 0.85. It can be observed that all elements of O_n are positive, and therefore the aperiodicity of the chain is guaranteed. Hence, solving Eq. (26) for O_n has a solution leading to a stationary distribution s, as well as convergence to that stationary distribution. Similarly, the i-th element of the vector s for the new transition matrix O_n indicates the significance of state i, as it represents p(X_m = i) for any large value of m. In the following, we consider that the transition matrix is aperiodic, and we use the matrix Q_n as our reference.
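The damping construction and the extraction of the stationary distribution as the left eigenvector for eigenvalue one can be sketched as follows. The two-state chain below is hypothetical (chosen to be periodic without damping); the function names are illustrative:

```python
import numpy as np

def damped_transition(Q, alpha=0.85):
    """O = alpha*Q + (1 - alpha)*J/N: a strictly positive matrix,
    hence an aperiodic, irreducible chain (PageRank-style damping)."""
    N = Q.shape[0]
    return alpha * Q + (1 - alpha) * np.ones((N, N)) / N

def stationary(O):
    """Left eigenvector of O for eigenvalue one, normalized to a PMF."""
    vals, vecs = np.linalg.eig(O.T)
    s = np.real(vecs[:, np.argmax(np.real(vals))])
    return s / s.sum()

Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # periodic without damping
s = stationary(damped_transition(Q))
print(s)  # [0.5 0.5]
```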

Dependence in Both Space and Time
In this section, we study how to capture correlation among alert cells over time by incorporating the spatial distance between cells within the Markov model. We embed spatial correlations in the transition matrix while maintaining the Markovian properties, so that the long-term behaviour of the model can be better defined. We use as a starting point the proposed model from Fig. 5. Consider a grid with two cells {v_1, v_2} and the state space S_2 = {∅, {v_1}, {v_2}, {v_1, v_2}}. Investigating the transition matrix closely, one can see the impact of independence between cells in the matrix. Consider the entry Q_2(2, 4) as an example. This entry indicates that the probability of going from state 2 = {v_1} to state 4 = {v_1, v_2} is p({v_2}). In other words, the transition captures the fact that the existence of the alert cell v_1 did not impact the cell v_2 (i.e., spatial independence between cells). More formally, this follows from the Bayes rule, given that in Section 5.2 we assumed independence between states.
To address this issue, we propose the following method to capture the correlations between states without eliminating the Markov property of the matrix Q_n. The main idea behind the approach is that cells in close proximity to the alert zone are more likely to become part of the zone in the future. Let X_0, X_1, ..., X_i, ... be an order-one Markov sequence of random variables modeling the occurrence of the alert zones, where the X_i's are defined over the state space of the grid. Without loss of generality, assume that the j-th row of the matrix Q_n corresponds to the state {v_1, v_2, ..., v_j}. Based on the proposed Markov model in Fig. 5, it is known that this state can evolve by the addition or removal of a single alert cell. Therefore, there exist n non-zero elements in each row of the matrix. For all v_k ∈ V, we calculate the probability of its removal or addition as proportional to

p(v_k) / d(v_k, c)

where the function d(.) returns the Euclidean distance between two points, β is a normalization factor over the entire row, and the point c is the centre point of {v_1, v_2, ..., v_j}, calculated as the average of the centre points of the alert cells. Note that, in all of the above calculations, each cell's centre point is used as its representative. The intuition behind the approach is that the correlation between cells becomes smaller as we move further away from the alert zone. The only special case is when there exists a single-cell alert zone and we seek the probability of its removal. In this case, d(v_k, c) becomes close to zero and p(v_k)/d(v_k, c) tends to infinity. As there exists no other alert zone cell in this case, we consider this probability to be p(v_k) instead of p(v_k)/d(v_k, c) to avoid inaccuracies. As an example, consider a grid with three cells {v_1, v_2, v_3} and the average point c, and suppose that the j-th row of the matrix Q_3 corresponds to the state {v_1, v_2}; in this row, there exist three non-zero elements. The proposed method satisfies the Markovian matrix property. Hence, it can be used as part of the Markov model in Section 5.2 to capture the long-term behaviour of the system.
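The distance-weighted construction of a single row of the transition matrix can be sketched as below. The cell coordinates, probabilities, and the helper name row_transition_probs are hypothetical; the special case of a near-zero distance falls back to p(v) as described above:

```python
import numpy as np

def row_transition_probs(state_cells, all_cells, p, centers):
    """Distance-weighted transition probabilities out of one state.
    Each candidate cell v (added or removed) gets weight p(v)/d(v, c),
    where c is the centroid of the current alert cells; the row is then
    normalized to sum to one (the beta factor in the text)."""
    c = np.mean([centers[v] for v in state_cells], axis=0)
    weights = {}
    for v in all_cells:
        d = np.linalg.norm(np.asarray(centers[v]) - c)
        # Single-cell zone being removed: d ~ 0, fall back to p(v).
        weights[v] = p[v] if d < 1e-9 else p[v] / d
    beta = sum(weights.values())
    return {v: w / beta for v, w in weights.items()}

centers = {"v1": (0, 0), "v2": (1, 0), "v3": (5, 5)}
p = {"v1": 0.3, "v2": 0.3, "v3": 0.3}
probs = row_transition_probs(["v1", "v2"], centers.keys(), p, centers)
print(probs)  # nearby v1/v2 receive higher probability than distant v3
```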

Recursive Construction and Monte Carlo Sampling
Finding the eigenvector of the matrix Q_n corresponding to eigenvalue one is necessary to determine the probability of being in a particular state at a given time, p(X_n = i). The eigenvector provides valuable information that enables us to prioritize more likely states in the grid encoding process. However, there are two important issues with its calculation: (i) the matrix Q_n has dimensions 2^n × 2^n; even for a small grid with 100 cells, it requires an extremely large storage capacity; (ii) the calculation of the eigenvector for such a large matrix is expensive, with O(n^3) complexity [20]. For example, based on Householder transformations, eigenvalues and eigenvectors can be calculated with complexity O(n^2) + 4n^3/3. To address the high computational overhead, we approximate the stationary distribution based on random walks on the Markov model.
We start by explaining the recursive construction of the matrix Q_n. The rows and columns of the matrix depend on the order in which states are chosen. We construct the states over n + 1 cells, v_1 to v_{n+1}, from the states over n cells, v_1 to v_n, by listing the states of S_n first, followed by the same states with v_{n+1} added; S_3, for instance, is constructed in this way from S_2. The matrix Q_{n+1} can then be constructed recursively from Q_n using block matrices, where I_{2^n} is the identity matrix and K_{2^n} is an all-zero 2^n × 2^n matrix except for the element K_{2^n}(2^n, 1) = 1. The above representation of Q_n works under the spatial independence assumption, but the construction of states holds regardless of that assumption.
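The state ordering described above (the states over n + 1 cells are the states over n cells, followed by those same states with v_{n+1} added) can be sketched as:

```python
def build_states(cells):
    """Recursively enumerate the 2^n states of the grid: at each step,
    the state list over n+1 cells is the list over n cells followed by
    the same states with cell v_{n+1} added."""
    states = [frozenset()]
    for v in cells:
        states = states + [s | {v} for s in states]
    return states

S3 = build_states(["v1", "v2", "v3"])
print(len(S3))  # 2^3 = 8 states
```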
To tackle the high computational complexity of determining eigenvectors, we use a probabilistic approach. PageRank [21] addresses this problem by using the power iteration method to calculate the eigenvectors, but this still incurs a high computational complexity. An alternative is the Monte Carlo approximation, which is widely used in the literature and results in an enhanced estimation of the stationary distribution. The Monte Carlo method provides several advantages over deterministic power iteration methods, such as significantly lower computational complexity, opportunities for parallel implementation, and easier updating of probabilities.
The main idea behind the Monte Carlo approximation is to start R random walks at the Markov model's primary node, i.e., state 1. Each random walk terminates with probability 1 − c, and otherwise makes a transition to the next outgoing node with the PMF specified in the transition matrix Q_n. The fraction of walks ending at a state, over all random walks, indicates the probability or significance of that state. The vector of calculated probabilities for all states is the approximation of the stationary distribution. The number of samples required to estimate the stationary distribution is, pessimistically, in the order of O(n^2), where n indicates the number of states; however, it has been shown that n random walks are enough to provide a reasonable approximation of the stationary distribution [1].
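A sketch of the Monte Carlo approximation, assuming a small explicit transition matrix. The parameter names R and c follow the text; everything else (the toy chain, the seed) is illustrative:

```python
import random
from collections import Counter

def monte_carlo_stationary(Q, R=50_000, c=0.6, start=0, seed=7):
    """Approximate the stationary distribution with R random walks.
    Each walk starts at `start`, continues with probability c at every
    step (i.e., terminates with probability 1 - c), and transitions
    using the rows of Q. The fraction of walks ending in each state
    estimates that state's significance."""
    rng = random.Random(seed)
    n = len(Q)
    ends = Counter()
    for _ in range(R):
        state = start
        while rng.random() < c:  # continue with probability c
            state = rng.choices(range(n), weights=Q[state])[0]
        ends[state] += 1
    return [ends[i] / R for i in range(n)]

Q = [[0.5, 0.5],
     [0.5, 0.5]]
print(monte_carlo_stationary(Q))  # approximately [0.7, 0.3] for this chain
```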
6 Experimental Evaluation

Experimental Setup
We conduct our experiments on a 3.40GHz Intel Core i7 processor with 8GB RAM running 64-bit Windows 7 OS. The code is implemented in Python, and we used the LogicMin Library [9] for binary minimization of token expressions. We compare the proposed approaches (GO, MSGO and SGO) against the hierarchical Gray encoding technique from [12] (labeled HGE), the state-of-the-art in location alerts on HVE-encrypted data.
To model the probability of partition cells becoming alert zones, we use the sigmoid function S(x) = 1/(1 + e^{−b(x−a)}), where a and b are parameters controlling the function shape. The output value is between zero and one. The sigmoid is a model frequently used in machine learning, and we chose it because, in practice, the probability of individual cells becoming part of an alert zone can be computed using such a model built on a region's map of features (e.g., type of terrain, building designation, point-of-sale information, etc.). Parameter a of the sigmoid controls the inflection point of the curve, whereas b controls the gradient.
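The sigmoid model can be sketched directly; the default parameters a = 0.75 and b = 10 follow the settings used later in the experiments:

```python
import math

def cell_probability(x, a=0.75, b=10):
    """Sigmoid model for the probability that a cell joins an alert zone:
    S(x) = 1 / (1 + exp(-b * (x - a))). Parameter `a` sets the inflection
    point and `b` the steepness; x is a normalized per-cell feature score."""
    return 1.0 / (1.0 + math.exp(-b * (x - a)))

print(cell_probability(0.75))  # 0.5 at the inflection point
```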

Gray Optimizer Evaluation
GO is our core proposed algorithm to reduce the number of HVE operations required to support alert zones. Specifically, by HVE operations we refer to the computation executed by the server to determine matches between tokens and encrypted user locations. Recall that, for each non-star item in a token, a number of expensive bilinear map operations are required. GO aims to minimize the number of such non-star items in tokens by choosing an appropriate encoding of the domain. Our comparison benchmark is the approach from [12], which uses a hierarchical quadtree structure to partition the data domain. We refer to this approach as HGE, and we present our results as an improvement in terms of computation overhead compared with [12].

Fig. 7: Performance evaluation of GO for varying depth (100 cells).

Improvement in HVE Operations
Fig. 6 summarizes the evaluation results of GO for three logistic function parameter settings. The grid size is set to 100 cells (recall from our earlier discussion that GO can only support relatively low granularities). Fig. 6 shows the total number of bilinear pairings performed for a ciphertext-token pair. GO clearly outperforms the approach from [12]. The relative gain in performance of GO increases when the size of the alert zone increases (i.e., when there are more grid cells covered by the alert zone). This can be explained by the fact that a larger input set gives GO more flexibility to optimize the encoding and decrease the number of non-star entries in a token. In terms of percentage gains, GO can improve performance by up to 40%, which is quite significant. Also, note that the gains are significant for all parameters of the sigmoid function used. In general, we identified that a higher a value leads to more pronounced gains. This is an encouraging factor, because a higher a corresponds to a more skewed probability case, where a relatively small number of cells are more likely to be included in an alert zone than others. In practice, one would expect that to be the case, since events that trigger alerts tend to be concentrated over a relatively small area (e.g., very popular hotspots, or facilities that present higher risks, such as a chemical plant).

Impact of Depth
Recall that the reduction in computation achieved by GO depends on the depth at which the algorithm is run (GO works similarly to a depth-first search graph algorithm). In general, running the algorithm with a higher depth produces better results in terms of performance gain at runtime (i.e., when matching is performed at the server), but it also requires much more computational time to compute a good encoding (which is a one-time cost). Fig. 7 captures the impact of depth on improvement. In this experiment, GO is executed on a single cell with different depths, and the remaining cells are assigned randomly (the experiment is specifically designed to show the effect of using lower depths on GO). As expected, there is a clear increasing trend, with higher depths resulting in better improvement factors. However, after a sharp initial gain (illustrated by the large distance between the graphs corresponding to depths 2 and 3), the improvement stabilizes, and it may no longer be worth increasing the depth of the computation considerably (the gains stabilize between depths 3 and 4).

Execution Time
Fig. 8a illustrates the execution time of GO. Recall that the execution time of GO is influenced by the granularity of the grid (finer granularities increase execution time). The results show that GO can complete within a short execution time for smaller grid sizes; however, as the grid granularity increases, there is a sharp increase in execution time. Therefore, GO may not be practical for high granularity grids, which is the main motivation behind our two variations, MSGO and SGO (evaluated next). Moreover, as the grid granularity increases, the length of cycles becomes larger, which also results in numerical inaccuracies when executing GO. The execution time required by GO for grids of up to 600 cells is around 10 seconds. We observed that this is the maximum number of cells for which GO performs reasonably; beyond this level, the algorithm is not suitable, due to increased execution time and numerical inaccuracies associated with the calculation of probabilities for large cycles.

Evaluation of GO Variations on Higher Granularity Grids
As discussed previously, GO does not perform well when directly applied to high granularity grids. To improve on the computational complexity of GO, we proposed two extensions of the algorithm, namely MSGO and SGO. Next, we evaluate both variations experimentally.

MSGO
Fig. 9 illustrates the performance of MSGO compared to the HGE benchmark scheme from [12]. Unlike the single-seed GO, we are able to evaluate the performance of MSGO on grids with much higher granularity (i.e., 1024 cells in this case). There is a similar trend in terms of gain as observed with GO, where larger alert zones provide more opportunities for advantageous encodings, and thus overall performance is improved (the percentage of HVE operations eliminated is higher).
The relative gain obtained is very close to 50% compared to the benchmark. Also, the absolute amount of improvement is better than for GO in all cases. This occurs because MSGO can support higher-granularity grids, and in this setting there is more flexibility in choosing a good encoding (due to the larger number of cells, there are significantly more choices for our algorithm). As expected, increasing the depth of MSGO leads to a higher improvement percentage, but the trade-off is a larger computational complexity.
Comparing Figs. 6 and 9, we remark that the MSGO algorithm obtains performance gains similar to the core algorithm GO for low granularity grids, but with a much lower computational overhead. For high granularity grids, GO cannot keep up in terms of computational overhead, whereas MSGO scales reasonably well and is still able to obtain significant improvements. One main reason is that MSGO no longer requires the calculation of probabilities of large cycles, avoiding numerical inaccuracies and reducing the overall computational overhead. The complexity of the algorithm can be as low as O(n log_2 n) depending on the chosen depth value, which provides a robust and efficient solution for reducing the number of HVE operations. The execution time of MSGO is illustrated in Fig. 8b. The graph indicates that, even for a high level of granularity, such as 4,000 cells, the algorithm requires less than 15 minutes to encode the grid, depending on the depth specified at the input. As expected, increasing the depth of the algorithm achieves better performance in terms of HVE operations, at the cost of higher computational overhead. The MSGO algorithm can be extended to an arbitrary number of cells on the grid, and it may use various cluster sizes depending on the application. In the next experiment, we focus on applying the SGO algorithm to a much larger number of cells, up to 50,625 (equivalent to a 225 × 225 square grid). Similar to the MSGO algorithm, the improvement achieved by SGO occurs even when the alert zones are small. Since the overall number of cells is larger, the SGO algorithm has even more flexibility in choosing an advantageous encoding, resulting in strong performance gains. For example, at a 9% ratio of alert cells, the SGO algorithm results in 25.8%, 26%, and 27.3% improvements for grid sizes of 10,000, 28,900, and 50,625, respectively.

SGO
The execution time of SGO is shown in Fig. 8c. Even for very large grid sizes, such as 50,625 cells, the algorithm requires less than six minutes to encode the grid. Therefore, the system can be set to regularly update the probabilities and re-run the algorithm at six-minute intervals, if needed. To compare this time performance with GO, consider the maximum grid size for which the encoding can be computed within 60 seconds in each case. As shown in Fig. 8a, this corresponds to a grid size of 1,200 cells for GO, whereas in a similar time SGO can be applied to a grid of 22,000 cells. Therefore, the SGO algorithm requires significantly lower computational overhead compared with the GO and even the MSGO algorithms, while the performance gain in terms of HVE operation reductions remains solid. Fig. 11 presents the results of the algorithms when fixing the percentage of alert cells to 30% and varying the grid size. It can be seen that the performance improvement of the algorithms stays within a comparable margin for varying grid sizes. The slight fluctuation in the graphs is due to two primary reasons: (i) as all codewords have the same length, increasing the quantization level results in the addition of a bit to all codewords, and (ii) the initial probabilities are assigned to the cells in a random process based on the sigmoid function.

Imperfect Probabilities Information
The knowledge of cell probabilities plays an important role in the reduction of HVE operations. These probabilities are input to GO and its extensions SGO and MSGO, and are used to find an enhanced encoding of the space. Imperfect initial cell probabilities can negatively impact the performance of the algorithms by deviating the optimization result. Therefore, we investigate the effect of imperfect initial probabilities on the improvement achieved compared with the previous work (HGE). This is done by adding noise to the cell probabilities at the input of the algorithms, modeling the inaccuracies that might exist. Let us briefly illustrate how the addition of noise is conducted. Given the vector of cell probabilities [p(v_1), p(v_2), ..., p(v_n)], each probability is perturbed with an iid uniformly distributed random noise n_i in [0, u], where u indicates the maximum noise value. For example, if the percentage of noise is 20%, u is set to 0.2, and the random noise is generated uniformly in the interval [0, 0.2]. Doing so, the transformed probabilities are obtained as [p(v_1)', p(v_2)', ..., p(v_n)'], where p(v_i)' = p(v_i) + n_i.
Note that the values are cyclic between zero and one, i.e., if noise of 0.5 is added to a cell probability of 0.8, the resulting value is recorded as 0.3. Hence, with 100% noise, the values are expected to be uniformly distributed.
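The cyclic noise addition can be sketched as follows (the function name and seed are illustrative):

```python
import random

def add_cyclic_noise(probs, u, seed=42):
    """Perturb cell probabilities with iid uniform noise in [0, u],
    wrapping values cyclically into [0, 1): e.g. 0.8 + 0.5 -> 0.3."""
    rng = random.Random(seed)
    return [(p + rng.uniform(0.0, u)) % 1.0 for p in probs]

noisy = add_cyclic_noise([0.2, 0.8, 0.95], u=0.2)
print(noisy)  # each value shifted by at most 0.2, modulo 1
```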
Fig. 12 shows the sensitivity of GO, MSGO, and SGO to imperfect probability values used as input. For each algorithm, the number of HVE operations required is shown, as well as the improvement gained in performance compared with the previous work. The x-axis represents the percentage of noise added to the perfect probability information, varied between 0 and 100, and the y-axis indicates the HVE operations required side by side with the improvement achieved. The percentage of alert cells is set to 40% in all graphs.
The overall trend of reduced performance improvement with the addition of noise is consistent across all three algorithms. The improvement gained from the algorithms is highest when there is no noise at the input. It gradually drops as more noise is applied, between zero and 50%, after which the performance improvement becomes almost negligible. As expected, in the case of maximum noise, no information is available regarding the probabilities, and therefore no further gain can be made with respect to HGE. Hence, at 100% noise, the number of HVE operations required by all algorithms converges to that of HGE. The rate of sensitivity to imperfect information varies among the algorithms. Looking at 10% noise, it can be seen that the drop in MSGO performance occurs at a higher rate than for the other two algorithms, with GO and SGO showing a 25% loss in performance against a loss of 40% for the MSGO algorithm. Overall, MSGO shows a higher sensitivity than the GO and SGO algorithms.

Dynamic Alert Zones
So far, we have evaluated techniques for static alert zones. Next, we measure the performance of our proposed technique for the dynamic alert zones introduced in Section 5.
Fig. 13 investigates the performance gain obtained by applying the proposed Markov model. The random path approach (Monte Carlo sampling) is used as the underlying method to compute the transition matrix's stationary distribution, minimizing the induced computational complexity on the system. The x-axis of the graphs shows the percentage of alert cells, and the y-axis represents the percentage of improvement as well as the number of HVE operations required. To distinguish between the two modeling approaches, the performance improvement achieved by incorporating the Markov model is labeled as dynamic, and the scenario in which the time dependence is not considered is referred to as static. The experiment is designed by initializing both the static and dynamic approaches with the same set of initial probabilities; the system then continues evolving in a uniformly distributed manner. Therefore, if there are m outgoing edges from a state of the model, the corresponding probability is set to 1/m. The aim is to see whether the Markov model is able to capture the evolution of the system, and how much improvement can be achieved with the gained information. As before, the values of a and b are set to 0.75 and 10, with a termination probability of 0.4. Fig. 13 shows that the dynamic method can predict the evolution of alert zones well, as the resulting encoding requires far fewer HVE operations. The performance gain achieved is significant for all three algorithms. The percentage of improvement is approximately 35% to 50%, indicating more impact on GO compared to MSGO and SGO.

Related Work

Preserving the privacy of users in communication networks and online platforms has been one of the most challenging research problems in the past two decades.
In the widely accepted scenario, users provide their location to service providers in exchange for the location-based services they offer. The goal is to provide the service without user privacy being compromised by any of the parties involved. Early works tackling this problem focused on hiding or obfuscating user locations to achieve a privacy metric termed k-anonymity. The location of a user is said to be k-anonymous if it is not distinguishable from at least k − 1 other queried locations [26].
In [14], the authors aim to provide k-anonymity by hiding the location of the user among k − 1 fake locations and requesting the desired services for all k locations at the same time. The generation of such dummy locations based on a virtual grid or circle was considered in [19]. The authors in [18] select dummy locations based on the number of queries made on the map, aiming to increase the entropy of the k locations in each set. In [7], random regions that enclose the user locations were introduced to bring uncertainty to the authentication of user locations. Unfortunately, fake locations can be revealed, particularly in trajectories and in the presence of prior knowledge about the map and users.
Later on, approaches based on Cloaking Regions (CRs), proposed by [13], gained momentum in the literature. The principal idea behind this method is to use a trusted anonymizer that clusters k real user locations and queries the area enclosing them to retrieve points of interest. In doing so, CRs aim to achieve k-anonymity for users and preserve their privacy. This approach is partially effective when snapshots of trajectories are considered, but once users are observed along trajectories, their location privacy is severely at risk [22]. Even for individual snapshots, it must be noted that a coarse area containing the real locations is released to the service provider, which can threaten the location privacy of users. Moreover, the CR-based approaches are susceptible to inference attacks predicated on background knowledge, or so-called side information. One such piece of side information is knowledge about the number of queries made at different locations on the map [18].
More recently, a model for privacy preservation in statistical databases termed differential privacy was developed in [10]. The metric provides a promising prospect for aggregate queries; however, it is not suitable for the private retrieval of specific data from datasets. Closer to the HVE approach, a private information retrieval (PIR) protocol was proposed in [11]. The PIR technique is based on cryptography and is shown to be secure for the private retrieval of information. Despite the promising results, the PIR approach assumes that the user already knows about the points of interest. Therefore, PIR is not suitable for location-based alert systems, as users are not aware of alert zone whereabouts.

Searchable Encryption.
Originating from works such as [25], the concept of searchable encryption was proposed to provide secure cryptographic search of keywords. Initially, only exact matches of keywords were supported; later, the approach was extended to comparison queries in [4], and to subset queries and conjunctions of equality in [5]. The authors in [5] also proposed the concept of HVE, used here as the underlying tool to provide a secure location-based alert system. This approach and its extension in [3] preserve the privacy of encrypted messages and tokens, at the cost of high computational complexity. The authors in [12] adopted HVE for location-based alert systems, conducting the predicate match at a trusted provider and preserving the privacy of encrypted messages as well as tokens. Despite the promising results of the approach for privacy preservation in location-based alert systems, further reduction of the computational overhead is necessary to increase its practicality.

Conclusion
We proposed a family of techniques to reduce the computational overhead of HVE predicate evaluation in location-based alert systems. Specifically, we used graph embeddings to find advantageous domain-space encodings that reduce the required number of expensive HVE operations. Our heuristic solutions provide a significant improvement in computation compared to existing work, and they scale to domain partitionings of fine granularity. In addition, we studied how to extend these techniques to the challenging setting of dynamic alert zones. Table 2 summarizes the properties of the proposed approaches.
In future work, we will focus on deriving cost models and strategies to reduce HVE overhead based on workload-specific requirements. Certain families of tasks may exhibit specific patterns of operations, which can be taken into account to optimize HVE matching performance and to reuse computation. We will also investigate extending the graph embedding approach to other types of searchable encryption beyond HVE (e.g., Inner Product Evaluation), which exhibit different types of internal algebraic operations.

A Primer on HVE Encryption
Hidden Vector Encryption (HVE) [5] is a searchable encryption system that supports predicates in the form of conjunctive equality, range, and subset queries. Search on ciphertexts can be performed with respect to a number of index attributes. HVE represents an attribute as a bit vector (each element has value 0 or 1), and the search predicate as a pattern vector in which each element can be 0, 1, or '*', the latter signifying a wildcard (or "don't care") value. Let l denote the HVE width, which is the bit length of the attribute and, consequently, of the search predicate. A predicate evaluates to True for a ciphertext C if the attribute vector I used to encrypt C has the same values as the pattern vector of the predicate in all positions that are not '*' in the latter. Fig. 2 illustrates the Match and Non-Match cases for HVE.
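In the clear, this Match/Non-Match semantics reduces to a positional comparison that skips wildcard positions; HVE's contribution is evaluating exactly this predicate over ciphertexts, without revealing the attribute vector or the pattern's non-wildcard values. A plaintext sketch of the predicate itself:

```python
def hve_match(attr: list, pattern: list) -> bool:
    """Evaluate the HVE predicate in the clear: the pattern matches the
    attribute vector iff they agree on every non-wildcard position."""
    assert len(attr) == len(pattern)  # both vectors have the HVE width l
    return all(p == '*' or p == a for a, p in zip(attr, pattern))
```

For example, with attribute vector [1, 0, 1, 1], the pattern [1, '*', 1, '*'] is a Match, while [0, '*', 1, '*'] is a Non-Match because of the first position.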
HVE is built on top of a symmetric bilinear map on groups of composite order [5], i.e., a function e : G × G → G_T such that ∀a, b ∈ G and ∀u, v ∈ Z it holds that e(a^u, b^v) = e(a, b)^{uv}. G and G_T are cyclic multiplicative groups of composite order N = P · Q, where P and Q are large primes of equal bit length. We denote by G_p and G_q the subgroups of G of orders P and Q, respectively. HVE consists of the following phases:

Setup. The TA generates the public/secret key pair (PK, SK) and shares PK with the users. SK has the form

SK = (g_q ∈ G_q, a ∈ Z_p, ∀i ∈ [1..l] : u_i, h_i, w_i, g, v ∈ G_p)

To generate PK, the TA first chooses at random elements R_{u,i}, R_{h,i}, R_{w,i} ∈ G_q, ∀i ∈ [1..l], and R_v ∈ G_q. Next, PK is determined as

PK = (g_q, V = v·R_v, A = e(g, v)^a, ∀i ∈ [1..l] : U_i = u_i·R_{u,i}, H_i = h_i·R_{h,i}, W_i = w_i·R_{w,i})

Encryption. Encryption uses PK and takes as parameters the index attribute I and a message M ∈ G_T. The following random elements are generated: Z, Z_{i,1}, Z_{i,2} ∈ G_q and s ∈ Z_N. Then, the ciphertext is

C = (C' = M·A^s, C_0 = V^s·Z, ∀i ∈ [1..l] : C_{i,1} = (U_i^{I[i]}·H_i)^s·Z_{i,1}, C_{i,2} = W_i^s·Z_{i,2})

Token Generation. Using SK, and given a search predicate encoded as pattern vector I*, the TA generates a search token TK as follows: let J be the set of all indices i where I*[i] ≠ *. The TA randomly generates r_{i,1}, r_{i,2} ∈ Z_p, ∀i ∈ J. Then

TK = (I*, K_0 = g^a · ∏_{i∈J} (u_i^{I*[i]}·h_i)^{r_{i,1}}·w_i^{r_{i,2}}, ∀i ∈ J : K_{i,1} = v^{r_{i,1}}, K_{i,2} = v^{r_{i,2}})

Query. The query is executed at the server, and evaluates whether the predicate represented by TK holds for ciphertext C. The server attempts to recover the value of M as

M = C' / ( e(C_0, K_0) / ∏_{i∈J} e(C_{i,1}, K_{i,1})·e(C_{i,2}, K_{i,2}) )

If the index I from which C was computed satisfies TK, then the actual value of M is returned; otherwise, a special value which is not in the valid message domain (denoted by ⊥) is obtained.

2 with the highest probability to the origin of G_1, i.e., c_1
3 Apply Algorithm 1 on c_1 with the depth of one
4 for i in [1 : k] do
5     Sort D_{i|c_1} in descending order of the probabilities assigned to its indices
6     for c_j in D_{i|c_1} do
7         Apply Algorithm 1 on c_j with the depth of one
8     end
9 end
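The control flow of the listing above (apply Algorithm 1 to the origin cell, then sort each conditional cell set D_{i|c_1} by probability and apply Algorithm 1 with depth one to each of its cells) can be sketched as follows; since the body of Algorithm 1 is defined elsewhere in the paper, it is stubbed here as a caller-supplied function:

```python
from typing import Callable, Dict, List, Tuple

Cell = Tuple[int, int]

def apply_depth_one_pass(
    origin: Cell,
    cond_sets: List[Dict[Cell, float]],          # D_{1|c1} .. D_{k|c1}
    algorithm1: Callable[[Cell, int], None],     # stub for Algorithm 1
) -> None:
    """Mirror of the loop structure in lines 3-9 of the listing."""
    algorithm1(origin, 1)                        # line 3: Algorithm 1 on c_1
    for D_i in cond_sets:                        # line 4: for i in [1 : k]
        # line 5: visit D_{i|c1} in descending order of probability
        for c_j, _prob in sorted(D_i.items(), key=lambda kv: -kv[1]):
            algorithm1(c_j, 1)                   # line 7: Algorithm 1 on c_j
```

The descending-probability order ensures that cells most likely to co-occur in an alert zone are encoded first, which is what gives them the most advantageous codewords.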

Fig. 10 illustrates the performance gain obtained by SGO. In this experiment, we applied the algorithm to a much larger number of cells, up to 50,625 (equivalent to a 225 × 225 square grid). As with the MSGO algorithm, the improvement achieved by SGO occurs even when the alert zones are small. Since the overall number of cells is larger, the SGO algorithm has even more flexibility in choosing an advantageous encoding, resulting in strong performance gains. For example, at a 9% ratio of alert cells, the SGO algorithm yields improvements of 25.8%, 26%, and 27.3% for grid sizes of 10,000, 28,900, and 50,625, respectively. The execution time of SGO is shown in Fig. 8c. Even for very large grid sizes, such as 50,625, the algorithm requires less than six minutes to encode the grid. Therefore, the system can be set to regularly update the probabilities and re-run the algorithm at six-minute intervals, if needed. To compare this time performance with GO, consider the maximum grid size for which the encoding can be computed within 60 seconds in each case. As shown in Fig. 8a, this corresponds to a grid size of 1,200 for GO, whereas in a similar time SGO can be applied to a grid of 22,000 cells. Therefore, the SGO algorithm requires significantly lower computational overhead than the GO and even MSGO algorithms, while its performance gain in terms of HVE operation reductions remains solid. Fig. 11 presents the results of the algorithms when fixing the percentage of alerted cells at 30% and varying the grid size. The performance improvement of the algorithms stays within a comparable margin across grid sizes. The slight fluctuation in the graphs is due to two primary reasons: (i) as all codewords have the same length, increasing the quantization level results in the addition of a bit to all codewords, and (ii) the ini-

Table 1: Summary of notations.

Table 2: Summary of the proposed algorithms.