Pattern Masking for Dictionary Matching: Theory and Practice

Data masking is a common technique for sanitizing sensitive data maintained in database systems, and it is becoming increasingly important in various application areas, such as record linkage of personal data. This work formalizes the Pattern Masking for Dictionary Matching (PMDM) problem: given a dictionary D of d strings, each of length ℓ, a query string q of length ℓ, and a positive integer z, we are asked to compute a smallest set K ⊆ {1, …, ℓ}, so that if q[i] is replaced by a wildcard for all i ∈ K, then q matches at least z strings from D. Solving PMDM allows providing data utility guarantees, as opposed to existing approaches. We first show, through a reduction from the well-known k-Clique problem, that a decision version of the PMDM problem is NP-complete, even for binary strings. We thus approach the problem from a more practical perspective. We show a combinatorial O((dℓ)^{|K|/3} + dℓ)-time and O(dℓ)-space algorithm for PMDM for |K| = O(1). In fact, we show that we cannot hope for a faster combinatorial algorithm, unless the combinatorial k-Clique hypothesis fails (Abboud et al. in SIAM J Comput 47:2527–2555, 2018; Lincoln et al., in: 29th ACM-SIAM Symposium on Discrete Algorithms (SODA), 2018). Our combinatorial algorithm, executed with small |K|, is the backbone of a greedy heuristic that we propose.


Introduction
Let us start with a true incident to illustrate the essence of the computational problem formalized in this work. In the Netherlands, water companies bill non-drinking and drinking water separately. The 6th author of this paper had a direct debit for the former but not for the latter. When he tried to set up the direct debit for the latter, he received a masked message from the company. The rationale of the data masking is the following: the client should be able to identify themselves to help the companies link the client's profiles, without inferring the identity of any other client via a linking attack [33,67], so that clients' privacy is preserved. Thus, the masked version of the data is required to conceal as few symbols as possible, so that the client can recognize their data, but also to correspond to a sufficient number of other clients, so that it is hard for a successful linking attack to be performed.
This requirement can be formalized as the Pattern Masking for Dictionary Matching (PMDM) problem: Given a dictionary D of d strings, each of length ℓ, a query string q of length ℓ, and a positive integer z, PMDM asks to compute a smallest set K ⊆ {1, …, ℓ}, so that if q[i], for all i ∈ K, is replaced by a wildcard, q matches at least z strings from D. The PMDM problem applies data masking, a common operation to sanitize personal data maintained in database systems [27,68,69]. In particular, PMDM lies at the heart of record linkage of databases containing personal data [24,48,50,63,72,74], which is the main application we consider in this work.
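A brute-force reference implementation may make the definition concrete. The following Python sketch (our own naming; 0-based positions, unlike the 1-based notation above) enumerates candidate sets K in order of increasing size, so the first hit is a smallest one; its running time is exponential in ℓ, consistent with the NP-hardness shown later.

```python
from itertools import combinations

def pmdm_brute_force(D, q, z):
    """Smallest K (0-based positions) such that masking q at K makes it
    match at least z strings of D; None only if z exceeds |D|."""
    ell = len(q)
    for k in range(ell + 1):                  # try sizes 0, 1, ..., ell
        for K in combinations(range(ell), k):
            Kset = set(K)
            # s matches the masked q iff they agree outside K
            matches = sum(
                all(i in Kset or s[i] == q[i] for i in range(ell))
                for s in D
            )
            if matches >= z:
                return Kset
    return None

D = ["abab", "abba", "abaa", "bbab"]
print(pmdm_brute_force(D, "abab", 3))  # {0, 3}: "*ba*" matches 3 strings
```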
Record linkage is the task of identifying records that refer to the same entities across databases, in situations where no entity identifiers are available in these databases [22,39,58]. This task is of high importance in various application domains featuring personal data, ranging from the health sector and social science research, to national statistics and crime and fraud detection [24,45]. In a typical setting, the task is to link two databases that contain names or other attributes, known collectively as quasi-identifiers (QIDs) [73]. The similarity between each pair of records (a record from one of the databases and a record from the other) is calculated with respect to their values in QIDs, and then all compared record pairs are classified into matches (the pair is assumed to refer to the same person), non-matches (the two records in the pair are assumed to refer to different people), and potential matches (no decision about whether the pair is a match or non-match can be made) [22,39].
Unfortunately, potential matches happen quite often [8]. A common approach [63,72] to deal with potential matches is to conduct a manual clerical review, where a domain expert looks at the attribute values in record pairs and then makes a manual match or non-match decision. At the same time, to comply with policies and legislation, one needs to prevent domain experts from inferring the identity of the people represented in the manually assessed record pairs [63]. The challenge is to achieve the desired data protection/utility guarantees; i.e. enabling a domain expert to make good decisions without inferring people's identities.
To address this challenge, we can solve PMDM twice for a potential match (q_1, q_2). The first time we use as input the query string q_1 and a reference dictionary (database) D containing personal records from a sufficiently large population (typically, much larger than the databases to be linked). The second time, we use as input q_2 instead of q_1. Since each masked q derived by solving PMDM matches at least z records in D, the domain expert would need to distinguish between at least z individuals in D to be able to infer the identity of the individual corresponding to the masked string. The underlying assumption is that D contains one record per individual. Also, some wildcards from one masked string can be superimposed on another to ensure that the expert does not gain more knowledge from combining the two strings, and the resulting strings would still match at least z records in D. Thus, by solving PMDM in this setting, we provide privacy guarantees akin to z-map [70], a variant of the well-studied z-anonymity [65] privacy model. In z-map, each record of a dataset must match at least z records in a reference dataset, from which the dataset is derived. In our setting, we consider a pattern that is not necessarily contained in the reference dataset. Offering such privacy is desirable in real record linkage systems where databases containing personal data are being linked [24,50,74]. On the other hand, since each masked q contains the minimum number of wildcards, the domain expert is still able to use the masked q to meaningfully classify a record pair as a match or as a non-match.
Offering such utility is again desirable in record linkage systems [63]. Record linkage is an important application for our techniques, because no existing approach can provide privacy and utility guarantees when releasing linkage results to domain experts [49]. In particular, existing approaches [49,50] recognize the need to offer privacy by preventing the domain expert from distinguishing between a small number of individuals, but they provide no algorithm for offering such privacy, let alone an algorithm offering utility guarantees as we do.
A secondary application where PMDM is of importance is query term dropping, an information retrieval task that seeks to drop keywords (terms) from a query, so that the remaining keywords retrieve a sufficiently large number of documents. This task is performed by search engines, such as Google [7], and by e-commerce platforms, such as e-Bay [51], to improve users' experience [35,71] by making sufficiently many search results available to users. For example, e-Bay applies query term dropping, removing one term, in our test query:

Query: vacuum database cleaner
Query results: 0 results found for "vacuum database cleaner"; 42 results found for "vacuum cleaner"

We could perform query term dropping by solving PMDM in a setting where strings in a dictionary correspond to document terms and a query string corresponds to a user's query. Then, we provide the user with the masked query, after removing all wildcards, and with its matching strings from the dictionary. Two remarks are in order for this application. First, we consider a setting where the keyword order matters. This occurs, for example, when using phrase search in Google. Second, since the dictionary may contain strings of different lengths, PMDM should be applied only to the dictionary strings that have the same length as the query string. Query term dropping is a relevant application for our techniques, because existing techniques [71] do not minimize the number of dropped terms. Rather, they drop keywords randomly, which may unnecessarily shorten the query, or drop keywords based on custom rules, which is not sufficiently generic to deal with all queries. More generally, our techniques can be applied to drop terms from any top-z database query [42] to ensure there are z results in the query answer.

Related algorithmic work
Let us denote the wildcard symbol by ⋆ and provide a brief overview of works related to PMDM, the main problem considered in this paper.
-Partial Match: Given a dictionary D of d strings over an alphabet Σ = {0, 1}, each of length ℓ, and a string q over Σ ∪ {⋆} of length ℓ, the problem asks whether q matches any string from D. This is a well-studied problem [13,18,44,56,59,60,64]. Patrascu [59] showed that any data structure for the Partial Match problem with cell-probe complexity t must use space 2^{Ω(ℓ/t)}, assuming the word size is O(d^{1−ε}/t), for any constant ε > 0. The key difference to PMDM is that the wildcard positions in the query strings are fixed.
-Dictionary Matching with k-errors: A similar line of research to that of Partial Match has been conducted under the Hamming and edit distances, where, in this case, k is the maximum allowed distance between the query string and a dictionary string [10,11,14,16,26,77]. The structure of Dictionary Matching with k-errors is very similar to that of Partial Match, as each wildcard in the query string gives |Σ| possibilities for the corresponding symbol in the dictionary strings. On the other hand, in Partial Match the wildcard positions are fixed.
The PMDM problem is a generalization of the decision version of the Dictionary Matching with k-errors problem (under the Hamming distance): by querying a data structure for PMDM with string q and z = 1, one obtains the minimum number of mismatches of q with any string from D, which suffices to answer the decision version of the Dictionary Matching with k-errors problem. The query time or space of all known data structures for Dictionary Matching with k-mismatches incurs some exponential factor with respect to k. In [25], Cohen-Addad et al. showed that, in the pointer machine model, for the reporting version of the problem, one cannot avoid an exponential dependency on k either in the space or in the query time. In the word-RAM model, Rubinstein showed that, conditional on the Strong Exponential Time Hypothesis [15], any data structure that can be constructed in time polynomial in the total size ||D|| of the strings in the dictionary cannot answer queries in time strongly sublinear in ||D||.
We next provide a brief overview of other algorithmic works related to PMDM.
-Dictionary Matching with k-wildcards: Given a dictionary D of total size N over an alphabet Σ and a query string q of length ℓ over Σ ∪ {⋆} with up to k wildcards, the problem asks for the set of matches of q in D. This is essentially a parameterized variant of the Partial Match problem. The seminal paper of Cole et al. [26] proposed a data structure occupying O(N log^k N) space and allowing for O(ℓ + 2^k log log N + |output|)-time querying. This data structure is based on recursively computing a heavy-light decomposition of the suffix tree and copying the subtrees hanging off light children. Generalizations and slight improvements have been proposed in [12,34,54]. In [12] the authors also proposed an alternative data structure that, instead of a log^k N factor in the space complexity, has a multiplicative |Σ|^{k^2} factor. Nearly-linear-sized data structures that essentially try all different combinations of letters in the place of wildcards, and hence incur a |Σ|^k factor in the query time, have been proposed in [12,52]. On the lower bound side, Afshani and Nielsen [2] showed that, in the pointer machine model, essentially any data structure for the problem in scope must have exponential dependency on k in either the space or the query time, explaining the barriers hit by the existing approaches.
-Enumerating Motifs with k-wildcards: Given an input string s of length n over an alphabet Σ and positive integers k and z, this problem asks to enumerate all motifs over Σ ∪ {⋆} with up to k wildcards that occur at least z times in s. As the size of the output is exponential in k, the enumeration problem has such a lower bound. Several approaches exist for efficient motif enumeration, all aimed at reducing the impact of the output's size: efficient indexing to minimize the output delay [6,37]; exploiting a hierarchy of wildcard positions according to the number of occurrences [9]; defining a subset of motifs of fixed-parameter tractable size (in k or z) that can generate all the others [61,62]; or defining maximality notions, i.e. a subset of the motifs that implicitly include all the others [31,36].

Our Contributions
We consider the word-RAM model of computation with w-bit machine words, where w = Ω(log(dℓ)), for stating our results. We make the following contributions. Among others, we study a generalized version of PMDM, called MPMDM: we are given a collection M of m query strings (instead of one query string) and we are asked to compute a smallest set K so that, for every q from M, if q[i], for all i ∈ K, is replaced by a wildcard, then q matches at least z strings from dictionary D. Moreover (Sect. 9), we present an extensive experimental evaluation on real-world and synthetic data demonstrating that our heuristic finds nearly-optimal solutions in practice and is also very efficient. In particular, our heuristic finds optimal or nearly-optimal solutions for PMDM on a dataset with six million records in less than 3 s.
We conclude this paper with a few open questions in Sect. 10. This paper is an extended version of a paper that was presented at ISAAC 2021 [17]. Compared to [17], Sects. 8 and 9 are new, while Sects. 1, 5 and 6 contain additional details that were omitted from [17] due to space constraints.

Definitions and Notation
Strings. An alphabet Σ is a finite nonempty set whose elements are called letters. We assume throughout an integer alphabet Σ. For a string x = x[1] ⋯ x[n] of length n = |x| over Σ, x[i . . j] is the substring of x that starts at position i and ends at position j of x. By ε we denote the empty string of length 0. A prefix of x is a substring of x of the form x[1 . . j], and a suffix of x is a substring of x of the form x[i . . n]. A dictionary is a collection of strings. We also consider the alphabet Σ_⋆ = Σ ∪ {⋆}, where ⋆ is a wildcard letter that is not in Σ and matches all letters from Σ. Then, given a string x over Σ_⋆ and a string y over Σ with |x| = |y|, we say that x matches y if and only if, for all positions i, either x[i] = y[i] or x[i] = ⋆. Given a string x of length n and a set S ⊆ {1, …, n}, we denote by x_S = x ⊗ S the string obtained by first setting x_S = x and then x_S[i] = ⋆, for all i ∈ S. We then say that x is masked by S.
The main problem considered in this paper is the following.

Pattern Masking for Dictionary Matching (PMDM)
Input: A dictionary D of d strings, each of length ℓ, a string q of length ℓ, and a positive integer z.
Output: A smallest set K ⊆ {1, …, ℓ} such that q_K = q ⊗ K matches at least z strings from D.
We refer to the problem of computing only the size k of a smallest set K as PMDM-Size. We also consider the data structure variant of the PMDM problem, in which D is given for preprocessing, and ⟨q, z⟩ queries are to be answered on-line. Throughout, we assume that k ≥ 1, as the case k = 0 corresponds to the well-studied dictionary matching problem, for which there exists a classic optimal solution [3]. We further assume z ≤ d; otherwise PMDM trivially has no solution. In what follows, we use N to denote dℓ.
Tries. Let M be a finite set containing m > 0 strings over Σ. The trie of M, denoted by R(M), contains a node for every distinct prefix of a string in M; the root node is ε; the set of leaf nodes is M; and edges are of the form (u, α, uα), where u and uα are nodes and α ∈ Σ is the label. The compacted trie of M, denoted by T(M), contains the root, the branching nodes, and the leaf nodes of R(M). Each maximal branchless path segment from R(M) is replaced by a single edge, and a fragment of a string of M is used to represent the label of this edge in O(1) space. The size of T(M) is thus O(m). The most well-known example of a compacted trie is the suffix tree of a string: the compacted trie of all the suffixes of the string [75]. To access the children of a trie node by the first letter of their edge label in O(1) time we use perfect hashing [32]. In this case, the claimed complexities hold with high probability (w.h.p., for short), that is, with probability at least 1 − N^{−c} (recall that N = dℓ), where c > 0 is a constant fixed at construction time. Assuming that the children of every trie node are sorted by the first letters of their edge labels, randomization can be avoided at the expense of a log |Σ| factor incurred by binary searching for the appropriate child.
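As a concrete illustration of these definitions, the (non-compacted) trie R(M) can be sketched with nested Python dicts; the '$' end marker and the function name below are our own conventions, and the built-in dict plays the role of the hashed child access.

```python
def build_trie(strings):
    """Build R(M) as nested dicts: one node per distinct prefix;
    a '$' key marks that the path from the root spells a string of M."""
    root = {}
    for s in strings:
        node = root
        for ch in s:
            node = node.setdefault(ch, {})  # follow/create edge labeled ch
        node["$"] = True
    return root

trie = build_trie(["abab", "abba"])
# the shared prefix "ab" is a single path; the node for "ab" branches on 'a'/'b'
```

A compacted trie T(M) would additionally merge each maximal branchless path into one edge, reducing the size to O(m).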

NP-hardness and Conditional Hardness of PMDM-SIZE
We show that the following decision version of PMDM-Size is NP-complete.

k-PMDM
Input: A dictionary D of d strings, each of length ℓ, a string q of length ℓ, and positive integers z ≤ d and k ≤ ℓ.
Output: Is there a set K ⊆ {1, …, ℓ} of size k, such that q_K = q ⊗ K matches at least z strings from D?
Our reduction is from the well-known NP-complete k-Clique problem [46]: Given an undirected graph G on n nodes and a positive integer k, decide whether G contains a clique of size k (a clique is a subset of the nodes of G that are pairwise adjacent).

Theorem 3.1 k-PMDM is NP-complete.

Proof Let G = (V, E) be an undirected graph on n = |V| nodes numbered 1 through n, in which we are looking for a clique of size k. We reduce k-Clique to k-PMDM as follows. Consider the alphabet {a, b}. Set q = a^n and, for every edge {u, v} ∈ E, add to D the string that has letter b at positions u and v and letter a at every other position; finally, set z = k(k − 1)/2. Then G contains a clique of size k if and only if k-PMDM returns a positive answer. This can be seen by the fact that cliques of size k in G are in one-to-one correspondence with subsets K ⊆ {1, …, n} of size k for which q_K matches z strings from D: the elements of K correspond to the nodes of a clique and the z strings correspond to its edges. k-PMDM is clearly in NP and the result follows.
An example of the reduction from k-Clique to k-PMDM is shown in Fig. 1.
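The construction in the proof is simple enough to state in a few lines of Python (our own naming; nodes and positions are 0-based here):

```python
def clique_to_pmdm(n, edges, k):
    """k-Clique -> k-PMDM: q = a^n; one dictionary string per edge,
    with 'b' at the edge's two endpoints; z = k(k-1)/2."""
    q = "a" * n
    D = []
    for u, v in edges:
        s = ["a"] * n
        s[u] = s[v] = "b"
        D.append("".join(s))
    return D, q, k * (k - 1) // 2

# triangle on nodes {0, 1, 2} plus a pendant edge (2, 3); k = 3
D, q, z = clique_to_pmdm(4, [(0, 1), (0, 2), (1, 2), (2, 3)], 3)
# masking K = {0, 1, 2} turns q into "***a", which matches exactly the
# z = 3 strings coming from the triangle's edges
```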

Corollary 3.2 k-PMDM is NP-complete for strings over a binary alphabet.
Any algorithm solving PMDM-Size can be trivially applied to solve k-PMDM.
Corollary 3.3 PMDM-Size is NP-hard for strings over a binary alphabet.

Remark 3.4
Given an undirected graph G, an independent set is a subset of nodes of G such that no two distinct nodes of the subset are adjacent. Let us note that the problem of computing a maximum clique in a graph G, which is equivalent to that of computing the maximum independent set in the complement of G, cannot be n^{1−ε}-approximated in polynomial time, for any ε > 0, unless P = NP [38,78]. In Sect. 7, we show a polynomial-time O(d^{1/4+ε})-approximation algorithm for PMDM. We remark that this algorithm and Theorem 3.1 do not contradict the inapproximability results for the maximum clique problem, since our reduction from k-Clique to k-PMDM cannot be adapted to a reduction from maximum clique to PMDM-Size.
Theorem 3.1 shows that solving k-PMDM efficiently, even for strings over a binary alphabet, would imply a breakthrough for the k-Clique problem, for which it is known that no fixed-parameter tractable algorithm with respect to parameter k exists unless the Exponential Time Hypothesis (ETH) fails [19,43]. That is, k-Clique, which is W[1]-complete, has no f(k) · n^{o(k)}-time algorithm unless the ETH fails. On the upper bound side, k-Clique can be trivially solved in O(n^k) time (enumerating all subsets of nodes of size k), and this can be improved to O(n^{ωk/3}) time for k divisible by 3 using square matrix multiplication (ω is the exponent of square matrix multiplication). However, for general k ≥ 3 and any constant ε > 0, the k-Clique hypothesis states that there is no O(n^{(ω/3−ε)k})-time algorithm and no combinatorial O(n^{(1−ε)k})-time algorithm [1,55,76]. Thus, conditional on the k-Clique hypothesis, and since dℓ = nm = O(n^3), where m = |E|, and ℓ = n (Theorem 3.1), we cannot hope to devise a combinatorial algorithm for k-PMDM with runtime O((dℓ)^{(1−ε)k/3}) or O(ℓ^{(1−ε)k}) for any constant ε > 0. In Sect. 4, we show a combinatorial O(dℓ + min{(dℓ)^{k/3}, ℓ^k})-time algorithm, for constant k ≥ 3, for the optimization version of k-PMDM (seeking to maximize the matches), which can then be trivially applied to solve k-PMDM in the same time complexity, thus matching the above conditional lower bound. Additionally, under the k-Clique hypothesis, even with the aid of algebraic techniques, one cannot hope for an algorithm for k-PMDM with runtime O((dℓ)^{(ω/9−ε)k}) or O(ℓ^{(ω/3−ε)k}), for any constant ε > 0.
In fact, as we show next, by reducing from the (c, k)-Hyperclique problem, which is not known to benefit from fast matrix multiplication [55], we obtain stronger conditional lower bounds for some values of d and ℓ.
A hypergraph H is a pair (V, E), where V is the set of nodes of H and E is a set of non-empty subsets of V, called hyperedges. The (c, k)-Hyperclique problem is defined as follows: Given a hypergraph H = (V, E) such that all of its hyperedges have size c, does there exist a set S of k > c nodes in V so that every subset of c nodes from S is a hyperedge? We call such a set S a (c, k)-hyperclique in H. We will reduce (c, k)-Hyperclique to k-PMDM.

Proof We reduce (c, k)-Hyperclique to k-PMDM as follows. Consider the alphabet {a, b}. Set q = a^n and, for every hyperedge e_i ∈ E, add to D the binary string that has letter b at the positions of the nodes of e_i and letter a at every other position; finally, set z = (k choose c). Then H contains a (c, k)-hyperclique if and only if k-PMDM returns a positive answer. This can be seen by the fact that (c, k)-hypercliques in H are in one-to-one correspondence with subsets K ⊆ {1, …, n} of size k for which q_K matches z strings from D: the elements of K correspond to the nodes of a (c, k)-hyperclique and the z strings correspond to its hyperedges.
The (c, k)-Hyperclique hypothesis states that there is no O(n^{(1−ε)k})-time algorithm, for any k > c > 2 and ε > 0, that solves the (c, k)-Hyperclique problem. For a discussion on the plausibility of this hypothesis and for more context, we refer the reader to [55, Sect. 7]. Theorem 3.5 shows that solving k-PMDM efficiently, even for strings over a binary alphabet, would imply a breakthrough for the (c, k)-Hyperclique problem. In particular, assuming that the (3, k)-Hyperclique hypothesis is true, due to Theorem 3.5, and since dℓ = nm = O(n^4), we cannot hope to devise an algorithm for k-PMDM requiring time O((dℓ)^{(1−ε)k/4}) or O(ℓ^{(1−ε)k}), for any k > 3 and ε > 0.

Exact Algorithms for a Bounded Number k of Wildcards
We consider the following problem, which we solve by exact algorithms.These algorithms will form the backbone of our effective and efficient heuristic for the PMDM problem (see Sect. 8).
Heaviest k-PMDM
Input: A dictionary D of d strings, each of length ℓ, a string q of length ℓ, and a positive integer k ≤ ℓ.
Output: A set K ⊆ {1, …, ℓ} of size k such that q_K = q ⊗ K matches the maximum number of strings in D.
We will show the following result, which we will employ to solve the PMDM problem.
Recall that a hypergraph H is a pair (V, E), where V is the set of nodes of H and E is a set of non-empty subsets of V, called hyperedges; in order to simplify terminology, we will simply call them edges. Hypergraphs are a generalization of graphs in the sense that an edge can connect more than two nodes. Recall that the size of an edge is the number of nodes it contains. The rank of H, denoted by r(H), is the maximum size of an edge of H.
We refer to a hypergraph H [K ] = (K , {e : e ∈ E, e ⊆ K }), where K is a subset of V , as a |K |-section.H [K ] is the hypergraph induced by H on the nodes of K , and it contains all edges of H whose elements are all in K .A hypergraph is weighted when each of its edges is associated with a weight.We define the weight of a weighted hypergraph as the sum of the weights of all of its edges.In what follows, we also refer to weights of nodes for conceptual clarity; this is equivalent to having a singleton edge of equal weight consisting of that node.
We define the following auxiliary problem on hypergraphs (see also [21]).

Heaviest k-Section
Input: A weighted hypergraph H = (V, E), with E given as a list, and an integer k > 0.
Output: A set K ⊆ V of size k such that the weight of H[K] is maximum.
When k = O(1), we preprocess the edges of H as follows in order to have O(1)-time access to any queried edge.We represent each edge as a string, whose letters correspond to its elements in increasing order.Then, we sort all such strings lexicographically using radix sort in O(|E|) time and construct a trie over them.An edge can then be accessed in O(k log k) = O(1) time by a forward search starting from the root node of the trie.
A polynomial-time O(n^{0.697831+ε})-approximation for Heaviest k-Section, for any ε > 0, for the case when all hyperedges of H have size at most 3 was shown in [21] (see also [5]). Two remarks are in order. First, we can focus on edges of size up to k, as larger edges cannot, by definition, exist in any k-section. Second, Heaviest k-Section is a generalization of the problem of deciding whether a (c, k)-hyperclique (i.e. a set of k nodes whose subsets of size c are all in E) exists in a hypergraph, which in turn is a generalization of k-Clique. Unlike k-Clique, the (c, k)-hyperclique problem is not known to benefit from fast matrix multiplication in general; see [55] for a discussion on its hardness.

Proof We first compute the set M_s of positions of mismatches of q with each string s ∈ D. We ignore strings from D that match q exactly, as they will match q after changing any set of letters of q to wildcards. This requires O(dℓ) = O(N) time in total.
Let us consider an empty hypergraph (i.e. with no edges) H on ℓ nodes, numbered 1 through ℓ. Then, for each string s ∈ D, we add M_s to the edge-set of H if |M_s| ≤ k; if this edge already exists, we simply increment its weight by 1.
We set the parameter k of Heaviest k-Section to the parameter k of Heaviest k-PMDM.We now observe that for K ⊆ V with |K | = k, the weight of H [K ] is equal to the number of strings that would match q after replacing with wildcards the k letters of q at the positions corresponding to elements of K .The statement follows.
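This reduction can be sketched in Python (our own naming; 0-based positions): each dictionary string contributes its mismatch set as a weighted edge.

```python
from collections import Counter

def pmdm_to_hypergraph(D, q, k):
    """Node i = position i of q; the mismatch set of each string s in D
    (if of size between 1 and k) becomes an edge, and repeated mismatch
    sets accumulate weight."""
    edges = Counter()
    for s in D:
        M = frozenset(i for i, (a, b) in enumerate(zip(s, q)) if a != b)
        if 0 < len(M) <= k:          # exact matches are ignored
            edges[M] += 1
    return edges

edges = pmdm_to_hypergraph(["abab", "abaa", "bbab", "abab"], "abab", 2)
# edges == {frozenset({3}): 1, frozenset({0}): 1}; the weight of H[K] for
# any K then counts the strings matching q masked at K's positions
```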
An example of the reduction in Lemma 4.2 is shown in Fig. 2. The next lemma gives a straightforward solution to Heaviest k-Section. It is analogous to algorithm Small-ℓ, presented in Sect. 6, but without the optimization in computing sums of weights over subsets. It implies a linear-time algorithm for Heaviest 1-Section. We next show that for the cases k = 2 and k = 3 there exist more efficient solutions. In particular, we provide a linear-time algorithm for Heaviest 2-Section.

Proof Let K be a set of nodes of size 2 such that H[K] has maximum weight. We decompose the problem into two cases. For each of the cases, we give an algorithm that considers several 2-sections such that the heaviest of them has weight equal to that of H[K].

Case 1 There is an edge e = K in E. For each edge e ∈ E of size 2, i.e. an edge in the classic sense, we compute the sum of its weight and the weights of the nodes that it is incident to. This step requires O(|E|) time.

Case 2 There is no edge equal to K in E. We compute H[{v_1, v_2}], where v_1, v_2 are the two nodes with maximum weight, i.e. max and second-max. This step takes O(|V|) time.
In the end, we return the heaviest 2-section among those returned by the algorithms for the two cases, breaking ties arbitrarily.
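The two cases translate directly into a short routine; the following Python sketch uses our own data representation (nodes 0, …, n−1, edges as 2-element frozensets mapped to weights).

```python
def heaviest_2_section(n, node_w, edge_w):
    """Heaviest 2-Section, following the two cases: either the optimal K
    is itself an edge, or no edge joins its two nodes."""
    best_K, best_w = None, float("-inf")
    # Case 1: K equals some edge {u, v}
    for e, w in edge_w.items():
        u, v = sorted(e)
        cand = w + node_w.get(u, 0) + node_w.get(v, 0)
        if cand > best_w:
            best_K, best_w = {u, v}, cand
    # Case 2: K = the max-weight and second-max-weight nodes
    order = sorted(range(n), key=lambda v: node_w.get(v, 0), reverse=True)
    v1, v2 = order[0], order[1]
    cand = (node_w.get(v1, 0) + node_w.get(v2, 0)
            + edge_w.get(frozenset({v1, v2}), 0))
    if cand > best_w:
        best_K, best_w = {v1, v2}, cand
    return best_K, best_w

# the edge {1, 2} of weight 10 beats the two heaviest nodes 0 and 3
print(heaviest_2_section(4, {0: 5, 1: 1, 2: 1, 3: 4},
                         {frozenset({1, 2}): 10}))  # ({1, 2}, 12)
```

Case 2 uses sorting here only for brevity; a single linear pass suffices to find the max and second-max nodes, as the O(|V|) bound requires.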
We next show that for k = 3 the result of Lemma 4.3 can be improved when

Finally, the maximum weight of H
We next address the remaining case of any arbitrarily large constant k ≥ 4.
and linear space. We can thus henceforth assume that |E| ≤ |V|^2. Let K be a set of nodes of size at most k such that H[K] has maximum weight. If H[K] contains isolated nodes (i.e. nodes not contained in any edge), they can be safely deleted without altering the result. We can thus assume that H[K] does not contain isolated nodes, and that |V| ≤ k|E|, since otherwise the hypergraph H would contain isolated nodes.
We first consider the case that the rank r(H[K]) > 1, i.e. there is an edge of H[K] of size at least 2. We design a branching algorithm that constructs several candidate sets; the ones with maximum weight will have weight equal to that of H[K]. We will construct a set of nodes X, starting with X := ∅. For each set X that we process, let Z_X be the superset of X of size at most k such that H[Z_X] has maximum weight. We have the following two cases.

Case 1 There is an edge e in H[Z_X] that contains at least two nodes from Z_X \ X. To account for this case, we select every possible such edge e, set X := X ∪ e, and continue the branching algorithm.

Case 2 Each edge in H[Z_X] contains at most one node from Z_X \ X. In this case we conclude the branching algorithm as follows. For every node v ∈ V \ X, we compute its weight as the total weight of edges Y ∪ {v} ∈ E with Y ⊆ X, and select the k − |X| heaviest such nodes.

We now consider the case that r(H[K]) = 1. We use the algorithm for Case 2 above.

Exact Algorithms for a Bounded Number m of Query Strings
Recall that masking a potential match (q 1 , q 2 ) in record linkage can be performed by solving PMDM twice and superimposing the wildcards (see Sect. 1).In this section, we consider the following generalized version of PMDM to perform the masking simultaneously.The advantage of this approach is that it minimizes the final number of wildcards in q 1 and q 2 .

Multiple Pattern Masking for Dictionary Matching (MPMDM)
Input: A dictionary D of d strings, each of length ℓ, a collection M of m strings, each of length ℓ, and a positive integer z.
Output: A smallest set K ⊆ {1, …, ℓ} such that, for every q from M, q_K = q ⊗ K matches at least z strings from D.
Let N = dℓ. We show the following theorem for k = O(1) and m = O(1), where N = dℓ.

Theorem 5.1 MPMDM can be solved in time
We use a generalization of Heaviest k-Section in which the weights are m-tuples that are added and compared component-wise, and we aim to find a subset K such that the weight of H[K] is at least (z, …, z). An analogue of Lemma 4.3 holds without any alterations, which accounts for the O(N + ℓ^k)-time algorithm. We adapt the proof of Lemma 4.6 as follows. The branching remains the same, but we have to tweak the final step, that is, what happens when we are in Case 2. For m = 1 we could simply select a number of largest weights, but for m > 1 multiple criteria need to be taken into consideration. All in all, the problem reduces to a variation of the classic Multiple-Choice Knapsack problem [47], which we solve using dynamic programming.
The variation of the classic Multiple-Choice Knapsack problem, which we call κ-HV, is as follows: given a multiset T of t vectors with m non-negative coordinates, a target vector x ∈ {0, …, z}^m, and a positive integer κ, decide whether there is a subset S of at most κ vectors from T whose component-wise sum (capped at z in every coordinate) is at least x.
The exact reduction from Case 2 is as follows: the set T contains the weights of the nodes v ∈ V \ X (defined as the sums of weights of edges Y ∪ {v} ∈ E for Y ⊆ X), so t ≤ |V|; x is (z, …, z) minus the sum of weights of all edges e ∈ E such that e ⊆ X; and κ = k − |X|.
The solution to κ-HV is a rather straightforward dynamic programming.

Proof We apply dynamic programming. Let T = v_1, …, v_t. We compute an array A of size O(tκz^{m−1}) such that, for i ∈ {0, …, t}, j ∈ {0, …, κ} and v ∈ {0, …, z}^{m−1}, A[i, j, v] is the maximum value a for which some subset of at most j vectors among v_1, …, v_i has weight sum (capped at z in every coordinate) equal to (v, a), where (v, a) denotes the operation of appending element a to vector v. From each state A[i, j, v] we have two transitions, depending on whether v_{i+1} is taken to the subset or not. Each transition is computed in O(m) time. The array is equipped with a standard technique to recover the set S (parents of states). The final answer is computed by checking, for each vector v ∈ {0, …, z}^{m−1} that is component-wise at least the first m − 1 coordinates of x, whether A[t, κ, v] is at least the last coordinate of x. Overall, we pay an additional O(z^{m−1}) factor in the complexity of the handling of Case 2, which yields the complexity of Theorem 5.1.
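Assuming the κ-HV reading above (choose at most κ vectors whose capped component-wise sum reaches a target x ∈ {0, …, z}^m), the dynamic program can be sketched in Python (names ours; the state keeps the capped sums of the first m − 1 coordinates, and the value is the best capped last coordinate).

```python
def kappa_hv(T, kappa, x, z):
    """Decide whether some subset of at most kappa vectors of T has
    component-wise sum >= x, all target entries lying in {0, ..., z}."""
    m = len(x)
    cap = lambda a: min(a, z)                # sums beyond z never help
    # best[(j, v)] = max capped last coordinate achievable with j vectors
    # whose capped sums of the first m-1 coordinates equal v
    best = {(0, (0,) * (m - 1)): 0}
    for w in T:
        nxt = dict(best)                     # transition: skip w
        for (j, v), last in best.items():    # transition: take w
            if j == kappa:
                continue
            v2 = tuple(cap(a + b) for a, b in zip(v, w[:-1]))
            cand = cap(last + w[-1])
            if nxt.get((j + 1, v2), -1) < cand:
                nxt[(j + 1, v2)] = cand
        best = nxt
    return any(all(a >= b for a, b in zip(v, x[:-1])) and last >= x[-1]
               for (j, v), last in best.items())

# (2,1) + (1,2) = (3,3) >= (2,3), but no single vector suffices
print(kappa_hv([(2, 1), (1, 2), (0, 3)], 2, (2, 3), 3))  # True
```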

A Data Structure for PMDM Queries
We next show algorithms and data structures for the PMDM problem under the assumption that 2^ℓ is reasonably small. We measure space in terms of w-bit machine words, where w = Ω(log(dℓ)), and focus on showing space versus query-time trade-offs for answering ⟨q, z⟩ PMDM queries over D. A summary of the complexities of the data structures is shown in Table 1. Specifically, algorithm Small-ℓ and data structure Simple are used as building blocks in the more involved data structure Split underlying the following theorem.

Theorem 6.1 There exists a data structure that answers ⟨q, z⟩ PMDM queries over D in time O(2^{ℓ/2}(2^{ℓ/2} + τ)ℓ) w.h.p. and requires space O(2^{ℓ/2}d + 2^ℓ d^2/τ^2), for any τ.

Our algorithm is based on the fast zeta/Möbius transform [28, Theorem 10.12]. No data structure on top of the dictionary D is stored. In the query algorithm, we initialize an integer array A of size 2^ℓ with zeros. For an ℓ-bit vector m, by K_m ⊆ {1, …, ℓ} let us denote the set of the positions of set bits of m. Now for every possible ℓ-bit vector m we want to compute the number of strings in D that match q_{K_m} = q ⊗ K_m. To this end, for every string s ∈ D, we compute the set K of positions in which s and q differ. For m that satisfies K = K_m, we increment A[m], where m is the integer representation of the bit vector. This computation takes O(dℓ) time and O(1) extra space. Then we apply a folklore dynamic-programming-based approach to compute an integer array B, which is defined as follows: B[m] = Σ_{m′ : K_{m′} ⊆ K_m} A[m′]. In other words, B[m] stores the number of strings from D that match q_{K_m}.
We provide a description of the folklore algorithm here for completeness. Consider a vector (mask) m, and let S(m) denote the set of all subsets (submasks) of m, so that B[m] = Σ_{m′ ∈ S(m)} A[m′]. Let S(m, i) consist of the subsets of m which do not differ from m except (possibly) in the rightmost i bits, and let

B[m, i] = Σ_{m′ ∈ S(m, i)} A[m′].

Clearly, S(m) is equal to S(m, ℓ), and hence B[m] = B[m, ℓ]. The following equation is readily verified (in the first case, since the ith bit of m is 0, no element of S(m, i) can have the ith bit set):

B[m, i] = B[m, i − 1] if the ith bit of m is 0, and B[m, i] = B[m, i − 1] + B[m XOR 2^(i−1), i − 1] otherwise,

with B[m, 0] = A[m]. By OR and XOR we denote the standard bitwise operations. Overall, there are O(2^ℓℓ) pairs (m, i). We can compute B[·, ·] column by column, in constant time per entry, thus obtaining an O(2^ℓℓ)-time algorithm. We can limit the space usage to O(2^ℓ) by discarding column i when we are done computing column i + 1.
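The recurrence above can be sketched as follows; this is a minimal Python version of the folklore subset-sum (zeta) transform, with the two columns of the DP folded into a single array so that the extra space stays O(2^ℓ) (the function name subset_sums is ours):

```python
def subset_sums(A, ell):
    """Compute B[m] = sum of A[m'] over all submasks m' of m.

    One round per bit position i: entries with bit i set absorb the
    entry with bit i cleared, exactly as in the two-case recurrence
    B[m, i] = B[m, i-1] (+ B[m XOR 2^(i-1), i-1] if bit i of m is set).
    """
    B = list(A)                     # column i = 0: B[m, 0] = A[m]
    for i in range(ell):            # process bit positions left to right
        for m in range(1 << ell):
            if m & (1 << i):        # ith bit of m is set:
                B[m] += B[m ^ (1 << i)]
            # else: B[m, i] = B[m, i-1], so nothing to do
    return B
```

For ℓ = 2 and A = [1, 0, 2, 5], the transform yields B = [1, 1, 3, 8]: for example, B[3] sums A over all four submasks of 11.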
Thus, overall, the (query) time required by algorithm Small-ℓ is O(2^ℓℓ + dℓ), the data structure space is O(dℓ), and the extra space is O(2^ℓ).
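Putting the pieces together, Small-ℓ can be sketched end to end as below. This is an illustrative Python sketch (names are ours): it builds the mismatch histogram A, applies the subset-sum transform, and then scans B for a minimum-popcount mask m with B[m] ≥ z, returning the corresponding 1-based position set K.

```python
def small_ell_query(D, q, z):
    """Return a smallest set K of (1-based) positions such that masking
    q at K matches at least z strings of D, or None if none exists."""
    ell = len(q)
    A = [0] * (1 << ell)
    for s in D:                       # O(d*ell): one mismatch mask per string
        m = 0
        for i in range(ell):
            if s[i] != q[i]:
                m |= 1 << i
        A[m] += 1
    B = list(A)                       # subset-sum (zeta) transform, O(2^ell * ell)
    for i in range(ell):
        for m in range(1 << ell):
            if m & (1 << i):
                B[m] += B[m ^ (1 << i)]
    best = None                       # minimum-popcount mask with B[m] >= z
    for m in range(1 << ell):
        if B[m] >= z and (best is None
                          or bin(m).count("1") < bin(best).count("1")):
            best = m
    if best is None:
        return None
    return {i + 1 for i in range(ell) if best >> i & 1}
```

For D = {abc, abd, xbd} and q = abc, masking position 3 already yields two matches, while three matches force masking positions 1 and 3 as well.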
We now present DS Simple, an auxiliary data structure, which we will apply later on to construct DS Split, a data structure with the space/query-time trade-off of Theorem 6.1.

DS Simple: O(2^ℓdℓ) Space, O(2^ℓℓ) Query Time

We initialize an empty set Q. For each possible subset of {1, ..., ℓ}, we do the following. We mask the corresponding positions in all strings from D and then sort the masked strings lexicographically. By iterating over the lexicographically sorted list of the masked strings, we count how many copies of each distinct (masked) string we have in our list. We insert each such (masked) string to Q along with its count. After processing all 2^ℓ subsets, we construct a compacted trie for the strings in Q; each leaf corresponds to a unique element of Q and stores this element's count. The total space occupied by this compacted trie is thus O(2^ℓdℓ). Upon an on-line query q (of length ℓ) and z, we apply all 2^ℓ possible masks to q and read the count for each of them from the compacted trie in O(ℓ) time per mask. Next, we show how to decrease the exponential dependency on ℓ in the space complexity when 2^ℓ = o(d), incurring extra time in the query.
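The construction and query of DS Simple can be sketched as follows; in this illustrative Python version (names are ours) a hash map of counts stands in for the compacted trie:

```python
from itertools import combinations

def build_ds_simple(D, ell):
    """For every mask, store the count of every distinct masked string.

    The wildcard positions are visible in the masked string itself, so
    masked strings from different masks never collide as keys."""
    counts = {}
    for r in range(ell + 1):
        for K in combinations(range(ell), r):
            masked = {}
            for s in D:
                t = "".join("*" if i in K else s[i] for i in range(ell))
                masked[t] = masked.get(t, 0) + 1
            counts.update(masked)
    return counts

def query_ds_simple(counts, q, z):
    """Return a smallest (1-based) wildcard set whose masked q has
    count >= z, trying masks in order of increasing size."""
    ell = len(q)
    for r in range(ell + 1):
        for K in combinations(range(ell), r):
            t = "".join("*" if i in K else q[i] for i in range(ell))
            if counts.get(t, 0) >= z:
                return {i + 1 for i in K}
    return None
```

Each lookup plays the role of the O(ℓ)-time trie traversal; the hash map keeps the sketch short at the cost of the trie's worst-case guarantees.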
DS Split: O(2^(ℓ/2)dℓ + 2^ℓd^2ℓ/τ^2) Space, O(2^(ℓ/2)(2^(ℓ/2) + τ)ℓ) Query Time, for any τ

This trade-off is relevant when τ = ω(√d); otherwise DS Simple is better. We split each string p ∈ D roughly in the middle, into a prefix p_L and a suffix p_R; specifically, p = p_L p_R and |p_L| = ⌈ℓ/2⌉. We create dictionaries D_L = {p_L : p ∈ D} and D_R = {p_R : p ∈ D}. Let us now explain how to process D_L; we process D_R analogously. Let λ = ⌈ℓ/2⌉. We construct DS Simple over D_L. This requires space O(2^(ℓ/2)dℓ). Let τ be an input parameter, intuitively used as a minimum frequency threshold. For each of the 2^λ possible masks, we can have at most d/τ (masked) strings with frequency at least τ. Over all masks, we thus have at most 2^λ d/τ such strings, which we call τ-frequent. For every pair of τ-frequent strings, one from D_L and one from D_R, we store the number of occurrences of their concatenation in D using a compacted trie as in DS Simple. This requires space O(2^ℓd^2ℓ/τ^2).
Consider D_L. For each mask i and each string p_L ∈ D_L, we can afford to store the list of all strings in D_L that match p_L ⊗ i. Note that we have computed this information when sorting for constructing DS Simple over D_L. This information requires space O(2^(ℓ/2)dℓ). Thus, DS Split requires O(2^(ℓ/2)dℓ + 2^ℓd^2ℓ/τ^2) space in total. Let us now show how to answer an on-line (q, z) query. Let q = q_L q_R with |q_L| = ⌈ℓ/2⌉. We iterate over all 2^ℓ possible masks. For a mask i, let q′ = q ⊗ i. We split q′ into two halves, q′_L and q′_R, with q′ = q′_L q′_R and |q′_L| = ⌈ℓ/2⌉. First, we check whether each of q′_L and q′_R is τ-infrequent using the DS Simple we have constructed for D_L and D_R, respectively, in time O(ℓ). We have the following two cases (inspect also Fig. 3).
- If both halves are τ-frequent, then we can read the frequency of their concatenation using the stored compacted trie in time O(ℓ).
- Else, at least one of the two halves is τ-infrequent. Assume without loss of generality that q′_L is τ-infrequent. Let F be the dictionary consisting of the at most τ strings from D_R that correspond to the right halves of strings in D_L that match q′_L. Naïvely counting how many elements of F match q′_R could require Ω(τℓ) time, and thus Ω(2^ℓτℓ) time overall. Instead, we apply algorithm Small-ℓ on q′_R and F.

Fig. 3 Let τ = 3. If both q′_L and q′_R are 3-frequent (we check this using the counts of DS Simple), then we read the count for q′_L q′_R from the compacted trie of DS Split. If q′_L is 3-infrequent, then we apply Small-ℓ on q′_R and on the dictionary consisting of at most τ = 3 strings from D_R corresponding to the right halves of strings in D_L that match q′_L; this dictionary is denoted by F

The crucial point is that if we ever come across q′_L again (for a different mask on q), we will not need to do anything. We can maintain whether q′_L has been processed by temporarily marking the leaf corresponding to it in DS Simple for D_L. Thus, overall, we perform the Small-ℓ algorithm O(2^(ℓ/2)) times, each time in O((2^(ℓ/2) + τ)ℓ) time. This completes the proof of Theorem 6.1.

Comparison of the Data Structures DS Simple has lower query time than algorithm Small-ℓ. However, its space complexity can be much higher. DS Split can be viewed as an intermediate option. For τ as in Table 1, it has lower query time than algorithm Small-ℓ for d = ω(2^(3ℓ/2)), while keeping moderate space complexity. DS Split always has higher query time than DS Simple, but its space complexity is lower by a factor of 2^(ℓ/2). For example, for d = 2^(2ℓ) we get the complexities shown in Table 2.
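The frequent/infrequent case analysis of DS Split can be sketched as below. This is a deliberately simplified Python illustration (all names are ours): hash maps stand in for the compacted tries, a naïve scan over the at most τ candidate right halves stands in for the recursive Small-ℓ call, and ℓ is assumed even.

```python
from itertools import combinations

def mask_str(s, K):
    return "".join("*" if i in K else c for i, c in enumerate(s))

def matches(s, pat):
    return all(p == "*" or p == c for p, c in zip(pat, s))

def build_split(D, ell, tau):
    """freq: per-half masked-string counts (the two DS Simple structures);
    occ: for each masked half, the other halves of the matching strings;
    pairs: concatenation counts, kept only for tau-frequent half pairs."""
    lam = ell // 2                       # assume ell is even
    halves = [(p[:lam], p[lam:]) for p in D]
    subsets = [set(K) for r in range(lam + 1)
               for K in combinations(range(lam), r)]
    freq, occ = {}, {}
    for K in subsets:
        for pl, pr in halves:
            for side, m, other in (("L", mask_str(pl, K), pr),
                                   ("R", mask_str(pr, K), pl)):
                freq[(side, m)] = freq.get((side, m), 0) + 1
                occ.setdefault((side, m), []).append(other)
    pairs = {}
    for KL in subsets:
        for KR in subsets:
            for pl, pr in halves:
                ml, mr = mask_str(pl, KL), mask_str(pr, KR)
                if freq[("L", ml)] >= tau and freq[("R", mr)] >= tau:
                    pairs[(ml, mr)] = pairs.get((ml, mr), 0) + 1
    return {"lam": lam, "tau": tau, "freq": freq, "occ": occ, "pairs": pairs}

def split_count(ds, q):
    """Count the strings of D that match the (already masked) query q."""
    lam, tau = ds["lam"], ds["tau"]
    ql, qr = q[:lam], q[lam:]
    fl = ds["freq"].get(("L", ql), 0)
    fr = ds["freq"].get(("R", qr), 0)
    if fl >= tau and fr >= tau:          # both halves tau-frequent
        return ds["pairs"].get((ql, qr), 0)
    if fl < tau:                         # left half infrequent: < tau candidates
        return sum(matches(r, qr) for r in ds["occ"].get(("L", ql), []))
    return sum(matches(l, ql) for l in ds["occ"].get(("R", qr), []))
```

The infrequent branch scans fewer than τ candidates, mirroring the dictionary F of the proof; the paper's structure replaces this scan by one Small-ℓ run per distinct masked half.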
Let us now discuss why our data structure results cannot be directly obtained using the same data structures as for the problem Dictionary Matching with k-wildcards (see Sect. 1 for the problem definition). Conceivably, one could construct such a data structure and then iterate over all subsets of {1, ..., ℓ}, querying for the masked string. Existing data structures for dictionary matching with wildcards (cf. [12, Table 1], [52], and [34]) that allow querying a pattern with at most ℓ wildcards have: (a) either Ω(min{σ^ℓ, d}) query time, thus yielding Ω(2^ℓ · min{σ^ℓ, d}) query time for our problem, and Ω(dℓ) space, a trade-off dominated by the Small-ℓ algorithm (cf. our Table 1); (b) or Ω(ℓ) query time, thus yielding Ω(2^ℓℓ) query time for our problem, and Ω(dℓ log log(dℓ)) space, a trade-off dominated by DS Simple (cf. our Table 1).

Approximation Algorithm for PMDM
Clearly, PMDM is at least as hard as PMDM-Size because it also outputs the positions of the wildcards (the set K). Thus, PMDM is also NP-hard. In what follows, we show the existence of a polynomial-time approximation algorithm for PMDM whose approximation factor is given with respect to d. Specifically, we show the following approximation result.
Theorem 7.1 For any constant ε > 0, there is an O(d^(1/4+ε))-approximation algorithm for PMDM, whose running time is polynomial in N, where N = dℓ.
Our result is based on a reduction to the Minimum Union (MU) problem [20]. We next define MU and then describe the reduction that leads to our result.

Theorem 7.3 PMDM can be reduced to MU in time polynomial in N .
Proof We reduce the PMDM problem to MU in polynomial time as follows. Given any instance I_PMDM of PMDM, we construct an instance I_MU of MU in time O(dℓ) by performing the following steps: 2. We start with an empty collection S. Then, for each string s_i in D, we add a member S_i to S, where S_i is the set of positions where string q and string s_i have a mismatch. This can be done trivially in time O(dℓ) for all strings in D. To conclude the proof, it remains to show that given a solution T to I_MU, we can obtain a solution K to I_PMDM in time polynomial in the size of I_MU. This readily follows from the proof of the claim stated below: it suffices to set K = ∪_{S∈T} S.
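The reduction's construction is essentially a per-string mismatch-set computation, and the reverse direction is a union. A minimal Python sketch (function names are ours):

```python
def pmdm_to_mu(D, q, z):
    """Step 2 of the reduction of Theorem 7.3: one set per dictionary
    string, holding the (1-based) positions where it mismatches q.
    The budget z carries over unchanged (Step 3)."""
    S = [{i + 1 for i, (a, b) in enumerate(zip(s, q)) if a != b}
         for s in D]
    return S, z

def mu_solution_to_pmdm(T):
    """Turn a MU solution (a chosen sub-collection T of S) into the
    wildcard set K = union of the chosen sets."""
    K = set()
    for Si in T:
        K |= Si
    return K
```

For D = {abc, abd, xbd} and q = abc, the collection is {∅, {3}, {1, 3}}; choosing the two smallest-union sets gives K = {3}, matching the two strings abc and abd.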
We can now prove Theorem 7.1.

Proof of Theorem 7.1
The reduction in Theorem 7.3 implies that there is a polynomial-time approximation algorithm for PMDM. In particular, Theorem 7.2 provides an approximation guarantee for MU that depends on the number of sets in the input collection S. In Step 2 of the reduction of Theorem 7.3, we construct one set for the MU instance per string of the dictionary D of the PMDM instance. Also, from the constructed solution T to the MU instance, we obtain a solution K to the PMDM instance by simply substituting the positions of q corresponding to the elements of the sets of T with wildcards. This construction implies the approximation result of Theorem 7.1, which depends on the size of D.
Applying Theorem 7.1 to solve PMDM is not practical, as in real-world applications, such as those in Sect. 1, d is typically on the order of thousands or millions [24,30,35,71].
Sanity Check We remark that Theorem 3.1 (reduction from k-Clique to k-PMDM) and Theorem 7.1 (approximation algorithm for PMDM) do not contradict the inapproximability results for the maximum clique problem (see Sect. 3), since our reduction from k-Clique to k-PMDM cannot be adapted to a reduction from maximum clique to PMDM-Size.

Two-Way Reduction
Chlamtáč et al. [20] also show that their polynomial-time O(d^(1/4+ε))-approximation algorithm for MU is tight under a plausible conjecture for the so-called Hypergraph Dense vs Random problem. In what follows, we show that approximating the MU problem can also be reduced to approximating PMDM in polynomial time, and hence the same tightness result applies to PMDM.

- Bruteforce (henceforth, BF). In iteration i, BF applies Lemma 4.3 in the process of obtaining an optimal solution K of size i = k to PMDM. In particular, it checks whether at least z strings from D match the query string q after the i positions in q corresponding to K are replaced with wildcards. If the check succeeds, BF returns K and terminates; otherwise, it proceeds into iteration i + 1. BF takes O(k(2ℓ)^k + dℓk) time (see Lemma 4.3).

Since, as mentioned in Sect. 1, there are no existing algorithms for addressing PMDM, in the evaluation we used our own baseline BA. We have implemented all of the above algorithms in C++. Our implementations are freely available at https://bitbucket.org/pattern-masking/pmdm/.
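The iterative structure of BF can be sketched as below. This illustrative Python version (names are ours) checks each candidate mask directly against the dictionary instead of invoking the Heaviest k-Section machinery of Lemma 4.3, which keeps the sketch short at the cost of the stated running time:

```python
from itertools import combinations

def bf_pmdm(D, q, z):
    """For growing k, try every k-subset K of positions and return the
    first one (1-based) whose masked query matches >= z strings of D."""
    ell = len(q)
    for k in range(ell + 1):
        for K in combinations(range(ell), k):
            Kset = set(K)
            hits = sum(
                all(i in Kset or s[i] == q[i] for i in range(ell))
                for s in D
            )
            if hits >= z:
                return {i + 1 for i in Kset}
    return None   # unreachable when z <= d: masking all positions matches all
```

On D = {abc, abd, xbd} with q = abc, the empty mask already achieves z = 1, one wildcard (position 3) achieves z = 2, and two wildcards (positions 1 and 3) achieve z = 3.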
Datasets We used the North Carolina Voter Registration database [57] (NCVR), a standard benchmark dataset for record linkage [23,30,45,73]. NCVR is a collection of 7,736,911 records with attributes such as Forename, Surname, City, County, and Gender. We generated 4 subcollections of NCVR: (I) FS is comprised of all 952,864 records having Forename and Surname of total length ℓ = 15; (II) FCi is comprised of all 342,472 records having Forename and City of total length ℓ = 15; (III) FCiCo is comprised of all 342,472 records having Forename, City, and County of total length ℓ = 30; and (IV) FSCiCo is comprised of all 8,238 records having Forename, Surname, City, and County of total length ℓ = 45.
We also generated a synthetic dataset, referred to as SYN, using the IBM Synthetic Data Generator [41], a standard tool for generating sequential datasets [29,40]. SYN contains a collection of 6·10^6 records, each of length ℓ = 50, over an alphabet of size |Σ| = 10. We also generated subcollections of SYN comprised of x·10^6 arbitrarily selected records, truncated to the length-y prefix of each selected record. We denote each resulting dataset by SYN_{x.y}.

Comparison Measures
We evaluated the effectiveness of the algorithms using: AvgRE, an Average Relative Error measure, computed as avg_{i∈[1,1000]} (k_i − k*_i)/k*_i, where k*_i is the size of the optimal solution produced by BF, and k_i is the size of the solution produced by one of the other tested algorithms. Both k*_i and k_i are obtained by using, as query q_i, a record of the input dictionary selected uniformly at random. AvgSS, an Average Solution Size measure, is computed as avg_{i∈[1,1000]} k*_i for BF and avg_{i∈[1,1000]} k_i for any other algorithm.
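The two measures amount to simple averages over the 1,000-query workload; a minimal Python sketch (the relative-error form (k_i − k*_i)/k*_i is as restored above, and the function names are ours):

```python
def avg_re(opt_sizes, alg_sizes):
    """Average Relative Error: mean of (k_i - k*_i) / k*_i.
    Assumes every optimal size k*_i is positive."""
    return sum((k - ks) / ks
               for ks, k in zip(opt_sizes, alg_sizes)) / len(opt_sizes)

def avg_ss(sizes):
    """Average Solution Size over the query workload."""
    return sum(sizes) / len(sizes)
```

For example, optimal sizes (2, 4) against produced sizes (2, 5) give AvgRE = 0.125, i.e., solutions 12.5% worse than optimal on average.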
We evaluated efficiency by reporting avg_{i∈[1,1000]} t_i, where t_i is the elapsed time of a tested algorithm to obtain a solution for query q_i over the input dictionary.

Execution Environment
In our experiments we used a PC with an Intel Xeon E5-2640 @ 2.66GHz and 160GB RAM running GNU/Linux, and a gcc v.7.3.1 compiler at optimization level -O3.

Effectiveness Figs. 4 and 5a show that GR produced nearly-optimal solutions, significantly outperforming BA. In Fig. 4a, the solutions of GR_3 were no more than 9% worse than the optimal ones, while those of BA were up to 95% worse. In Fig. 5a, the average solution size of BF was 5.4, versus 5.9 and 9 for GR_3 and BA, respectively.
In Figs. 5b and c, we examined the effectiveness of GR for larger values of ℓ. Figure 5b shows that the solution size of GR_3 was at least 31% and up to 60% smaller than that of BA on average, while Fig. 5c shows that the solutions of GR_3 were comparable to those of GR_4 and GR_5. We omit the results for BF from Figs. 5b and c, and those for BA from Fig. 5c, as these algorithms did not produce results for all queries within 48 h, for any z. This is because, unlike GR, BF does not scale well with ℓ, and BA does not scale well with the solution size, as we explain later.
Note that increasing τ generally increases the effectiveness of GR, as it computes more positions of wildcards per iteration. However, even with τ = 3, GR remains competitive with BF.

Efficiency Having shown that GR produced nearly-optimal solutions, we now show that it is comparable to or faster than BA in terms of efficiency for synthetic data. (BF was at least two orders of magnitude slower than the other methods on average, and thus we omit its results.) The results for NCVR were qualitatively similar (omitted). Figure 6a shows that GR spent, on average, the same time per query as BA. However, it took significantly (up to 36 times) less time than BA for queries with large solution size k. This can be seen from Fig. 6b, which shows the time each query with solution size k took; the results for GR_3 and GR_4 were similar and are thus omitted. The reason is that BA updates the hypergraph every time a node is added into the solution, which is computationally expensive when k is large. Figures 6c and d show that all algorithms scaled sublinearly with d and with z, respectively. The increase with d is explained by the time complexity of the methods. The slight increase with z is because k gets larger, on average, as z increases (see Fig. 7c, in which we also show the average solution size for the experiments in Figs. 6a and c). GR_3 and GR_4 performed similarly to each other, and both were faster than GR_5 in all experiments, as expected: increasing τ from 3 or 4 to 5 sacrifices efficiency for effectiveness.
Average Solution Size Figs. 7a-c show the average solution size in the experiments of Figs. 6a, c, and d, respectively. Observe that the results are analogous to those obtained on the NCVR datasets: GR outperforms BA significantly. Also observe in Fig. 7c that the solution size of each tested algorithm grows, on average, as z increases.

Fig. 1 An example of the reduction from k-Clique to k-PMDM. The solution for both is {1, 2, 3}, as shown. Note that, for k = 4, the instance of 4-PMDM would need z = 6 matches; this many matches cannot be found in D, just as no 4-clique can be found in the graph

Theorem 3.5
Any instance of the (c, k)-Hyperclique problem for a hypergraph with n nodes and m hyperedges, each of size c, can be reduced in O(nm) time to a k-PMDM instance with ℓ = n, d = m, and Σ = {a, b}.

Fig. 2 An example of the reduction from Heaviest k-PMDM to Heaviest k-Section. The solutions are shown at the bottom. Each edge has its weight in brackets, and the total weight is d = 6

Lemma 4.2
Heaviest k-PMDM can be reduced to Heaviest k-Section for a hypergraph with ℓ nodes and d edges in O(N) time, where N = dℓ.

Lemma 4.3
Heaviest k-Section, for any constant k, can be solved in O(|V|^k + |E|) time and O(|V| + |E|) space. Proof For every subset K ⊆ V of size at most k, we sum the weights of all edges corresponding to its subsets. There are O(|V|^k) choices for K, each having 2^k − 1 non-empty subsets; for every subset, we can access the corresponding edge (if it exists) in O(1) time.
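The exhaustive algorithm of the proof can be sketched as follows; in this Python illustration (names are ours) a hash map from hyperedges, encoded as frozensets, to weights provides the O(1)-time edge lookups:

```python
from itertools import combinations

def heaviest_k_section(V, E, k):
    """E maps hyperedges (frozensets of nodes) to positive weights.
    Enumerate every K of size at most k and sum the weights of all
    non-empty subsets of K that appear as edges."""
    best_w, best_K = 0, set()
    for r in range(1, k + 1):
        for K in combinations(V, r):
            w = sum(
                E.get(frozenset(sub), 0)       # O(1) lookup per subset
                for m in range(1, r + 1)
                for sub in combinations(K, m)
            )
            if w > best_w:
                best_w, best_K = w, set(K)
    return best_w, best_K
```

For constant k the inner subset enumeration contributes only a constant factor, matching the O(|V|^k + |E|) bound (the O(|E|) term accounts for building the hash map).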

Lemma 4.5

Heaviest 3-Section can be solved in time O(|V|·|E|) using O(|V| + |E|) space. Proof Let K be a set of nodes of size 3 such that H[K] has maximum weight. We decompose the problem into the following three cases. Case 1 There is an edge e = K in E. We go through each edge e ∈ E of size 3 and compute the weight of H[e] in O(1) time. This takes O(|E|) time in total. Let the edge yielding the maximum weight be e_max. Case 2 There is no edge of size larger than one in H[K]. We compute H[{v_1, v_2, v_3}], where v_1, v_2, v_3 are the three nodes with maximum weight, i.e., the max, second-max, and third-max. This step takes O(|V|) time. Case 3 There is an edge of size 2 in H[K]. We can pick an edge e of size 2 from E in O(|E|) ways and a node v from V in O(|V|) ways. We compute the weight of H[e ∪ {v}] for all such pairs. Let the pair yielding maximum weight be (e′, u′). The answer is the heaviest candidate found across the three cases.
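The three-case analysis translates directly into code. A Python sketch (names are ours; it assumes |V| ≥ 3 and, as above, represents edges as frozensets mapped to weights):

```python
def heaviest_3_section(V, E):
    """Case analysis of Lemma 4.5. weight(K) sums the edge weights
    inside K; it is evaluated on O(|E|*|V|) candidate sets only."""
    def weight(K):
        return sum(w for e, w in E.items() if e <= K)
    node_w = {v: E.get(frozenset({v}), 0) for v in V}
    # Case 1: the optimal K is itself an edge of size 3.
    cands = [set(e) for e in E if len(e) == 3]
    # Case 2: H[K] has no edge of size > 1: take the 3 heaviest nodes.
    cands.append(set(sorted(V, key=lambda v: node_w[v], reverse=True)[:3]))
    # Case 3: H[K] contains a size-2 edge: pair each such edge with
    # every third node.
    for e in E:
        if len(e) == 2:
            for v in V:
                if v not in e:
                    cands.append(set(e) | {v})
    best = max(cands, key=weight)
    return weight(best), best
```

The candidate list has O(|E| + |E|·|V|) entries and each weight evaluation costs O(|E|), so a production version would instead carry partial weights along, as the proof does, to stay within O(|V|·|E|).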

Lemma 4.6
Heaviest k-Section for an arbitrarily large constant k ≥ 4 can be solved in time O((|V|·|E|)^(k/3)) using O(|V| + |E|) space. Proof If |E| > |V|^2, then the simple algorithm of Lemma 4.3 solves the problem in time O(|V|^k + |E|) = O((|V|·|E|)^(k/3)). Finally, in O(|V|k) = O(|V|) time, we select k − |X| nodes with the largest weights and insert them into X. The total time complexity of this step is O(|V|). This case also works if |X| = k, and then its time complexity is only O(1). The correctness of this branching algorithm follows by an easy induction, showing that at every level of the branching tree there is a subset of K. Let us now analyze the time complexity of this branching algorithm. Each branching in Case 1 takes O(|E|) time and increases the size of |X| by at least 2. At every node of the branching tree, we call the procedure of Case 2. It takes O(|V|) time if |X| < k. If the procedure of Case 2 is called in a non-leaf node of the branching tree, then its O(|V|) running time is dominated by the O(|E|) time required for further branching, since we have assumed that |V| ≤ k|E|. Hence, it suffices to bound (a) the total time complexity of calls to the algorithm for Case 2 in leaves that correspond to sets X with |X| < k, and (b) the total number of leaves that correspond to sets X with |X| = k. If k is even, (a) is bounded by O(|E|^((k−2)/2)|V|) and (b) is bounded by O(|E|^(k/2)). Hence, (b) dominates (a), and we obtain a bound of O(|E|^(k/2)) = O((|V|·|E|)^(k/3)), since |E| ≤ |V|^2.

Lemmas 4.2-4.6 imply Theorem 4.1, which we iteratively employ to obtain the following result.

Theorem 4.7
PMDM can be solved in time O(N + min{N^(k/3), ℓ^k}) using space O(N) if k = O(1), where N = dℓ. Proof We apply Lemma 4.2 to obtain a hypergraph with |V| = ℓ and |E| = d. Starting with k = 1 and for growing values of k, we solve Heaviest k-Section until we obtain a solution of weight at least z, employing either only Lemma 4.3, or Lemmas 4.3, 4.4, 4.5, and 4.6 for k = 1, 2, 3, and k ≥ 4, respectively. We obtain O(N + min{N^(k/3), ℓ^k}) time and O(N) space.
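The outer loop of this proof can be sketched as follows. In this Python illustration (names are ours), the hypergraph of Lemma 4.2 is built explicitly (nodes are positions, and each mismatch set becomes a hyperedge weighted by its multiplicity), while a simple exhaustive k-section search stands in for Lemmas 4.3-4.6:

```python
from itertools import combinations

def pmdm_via_sections(D, q, z):
    """Build the weighted hypergraph of Lemma 4.2, then grow k until
    some k-section has weight >= z; return that section (1-based)."""
    ell = len(q)
    E = {}
    for s in D:
        e = frozenset(i for i in range(ell) if s[i] != q[i])
        E[e] = E.get(e, 0) + 1          # weight = number of such strings
    for k in range(ell + 1):
        for K in combinations(range(ell), k):
            Kset = set(K)
            # weight of the section = # strings whose mismatch set fits in K
            w = sum(wt for e, wt in E.items() if e <= Kset)
            if w >= z:
                return {i + 1 for i in Kset}
    return None
```

The section weight equals the number of dictionary strings matched by q masked at K, which is exactly why a weight-z section certifies a PMDM solution.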

3. Set the z of the MU problem to the z of the PMDM problem. Thus, the total time O(dℓ) needed for Steps 1 to 3 above is clearly polynomial in the size of I_PMDM.

Claim For any solution T to I_MU and any solution K to I_PMDM, such that I_MU is obtained from I_PMDM using the above three steps, we have |K| = |∪_{S∈T} S|.

Proof of Claim Let F ⊆ D consist of z strings that match q_K. Further, let the set F* consist of the elements of S corresponding to the strings in F. We have |∪_{S∈T} S| ≤ |∪_{S∈F*} S| ≤ |K|. Now, let C = ∪_{S∈T} S. Then, q_C = q ⊗ C matches at least z strings from D, and hence |K| ≤ |C| = |∪_{S∈T} S|.

Theorem 7.4
MU can be reduced to PMDM in time polynomial in the size of S.

...it proceeds into iteration i + 1. Note that BA generally constructs a suboptimal solution K to PMDM and takes O(dℓ^2) time.

Fig. 5 AvgSS versus z for (a) FCi, (b) FCiCo (BF did not produce results for any z within 48 h), and (c) FSCiCo (BF and BA did not produce results for any z within 48 h; the results of GR for z > 100 are omitted because AvgSS > 40, which is close to ℓ = 45)

Fig. 6 Efficiency versus (a) ℓ for SYN_{6.ℓ}, (b) k for SYN, (c) d for SYN_{x.30}, x ∈ {1·10^6, 2·10^6, ..., 6·10^6}, and (d) z for SYN_{6.30}. The results of BF are omitted, because it was slower than the other methods by at least two orders of magnitude on average

1. (Sect. 3) A reduction from the k-Clique problem to a decision version of the PMDM problem, which implies that PMDM is NP-hard, even for strings over a binary alphabet. The reduction also implies conditional hardness of the PMDM problem.

Table 1
Basic complexities of the data structures from Sect. 6

Table 2
Basic complexities of the data structures from Sect. 6 for d = 2^(2ℓ)

Efficient Construction For completeness, we next show how to construct DS Split in O(dℓ log(dℓ) + 2^ℓdℓ + 2^ℓd^2ℓ/τ^2) time. We preprocess D by sorting its letters in O(dℓ log(dℓ)) time. The DS Simple structures for D_L and D_R can then be constructed in O(2^(ℓ/2)dℓ) time. We then create the compacted trie for pairs of τ-frequent strings. For each of the 2^ℓ possible masks, say i, and each string p ∈ D, we split p′ = p ⊗ i in the middle to obtain p′_L and p′_R. If both p′_L and p′_R are τ-frequent, then p′ will be in the set of strings for which we construct the compacted trie for pairs of τ-frequent strings. The counts for each of those strings can be read in O(ℓ) time from a DS Simple over D, which we can construct in time O(2^ℓdℓ); this data structure is then discarded. The compacted trie construction requires time O(2^ℓd^2ℓ/τ^2).