On the Practical Power of Automata in Pattern Matching

The classical pattern matching paradigm is that of seeking occurrences of one string, the pattern, in another, the text, where both strings are drawn from an alphabet set $\Sigma$. Assuming the text length is $n$ and the pattern length is $m$, this problem can naively be solved in time $O(nm)$. In Knuth, Morris and Pratt's seminal paper of 1977, an automaton was developed that allows solving this problem in time $O(n)$ for any alphabet. This automaton, which we will refer to as the {\em KMP-automaton}, has proven useful in solving many other problems. A notable example is the {\em parameterized pattern matching} model. In this model, a consistent renaming of symbols from $\Sigma$ is allowed in a match. The parameterized matching paradigm has proven useful in problems in software engineering, computer vision, and other applications. It has long been suspected that for texts where the symbols are uniformly random, the naive algorithm will perform as well as the KMP algorithm. In this paper we examine the practical efficiency of the KMP algorithm vs. the naive algorithm on randomly generated texts. We analyse the running time under various parameters, such as alphabet size, pattern length, and the distribution of pattern occurrences in the text. We do this for both the original exact matching problem and for parameterized matching. While the folklore wisdom is vindicated by these findings for the exact matching case, surprisingly, the KMP algorithm works significantly faster than the naive algorithm in the parameterized matching case. We check this hypothesis for DNA texts, and observe a similar behaviour as in the random texts. We also show a very structured case where the automaton is much more efficient.


Introduction
One of the most well-known data structures in computer science is the Knuth-Morris-Pratt automaton, or the KMP automaton [20]. It allows solving the exact string matching problem in linear time. The exact string matching problem has input text $T$ of length $n$ and pattern $P$ of length $m$, where the strings are composed of symbols from a given alphabet $\Sigma$. The output is all text locations where the pattern occurs in the text. The naive way of solving the exact string matching problem takes time $O(nm)$: slide the pattern to start at every text location, and compare each of its elements to the corresponding text symbol. Using the KMP automaton, this problem can be solved in time $O(n)$. In fact, analysis of the algorithm shows that at most $2n$ comparisons need to be done.
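The naive procedure just described can be sketched as follows (a Python sketch for illustration only; the experiments reported in this paper use a C++ implementation):

```python
def naive_search(text, pattern):
    """Return all 1-based text locations where pattern occurs (naive O(nm) scan)."""
    n, m = len(text), len(pattern)
    occurrences = []
    for j in range(n - m + 1):              # slide the pattern to every text location
        i = 0
        while i < m and text[j + i] == pattern[i]:
            i += 1                          # compare symbol by symbol until a mismatch
        if i == m:
            occurrences.append(j + 1)       # full match: report 1-based location
    return occurrences
```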
It has long been known in the folklore that if the text is composed of uniformly random alphabet symbols, the naive algorithm's time is also linear. This belief is bolstered by the fact that the naive algorithm's mean number of comparisons for text and pattern over a binary alphabet is bounded by $n \sum_{i=1}^{m} \frac{i}{2^i}$, which is bounded by $2n$ comparisons.
The number of comparisons in the KMP algorithm is also bounded by $2n$. However, because control in the naive algorithm is much simpler, it may be practically faster than the KMP algorithm.
The last few decades have prompted the evolution of pattern matching from a combinatorial solution of the exact string matching problem [16,20] to an area concerned with approximate matching of various relationships motivated by computational molecular biology, computer vision, and complex searches in digitized and distributed multimedia libraries [15,7]. An important type of non-exact matching is the parameterized matching problem, which was introduced by Baker [10,11]. Her main motivation lay in software maintenance, where program fragments are to be considered "identical" even if variable names are different. Therefore, strings under this model are comprised of symbols from two disjoint sets $\Sigma$ and $\Pi$ containing fixed symbols and parameter symbols respectively. In this paradigm, one seeks parameterized occurrences, i.e., exact occurrences up to renaming of the parameter symbols, of the pattern string in the respective text location. This renaming is a bijection $b : \Pi \rightarrow \Pi$. An optimal algorithm for exact parameterized matching appeared in [5]. It makes use of the KMP automaton for a linear-time solution over a fixed finite alphabet $\Sigma$. Approximate parameterized matching was investigated in [10,17,8]. Idury and Schäffer [18] considered multiple matching of parameterized patterns.
Parameterized matching has proven useful in other contexts as well. An interesting problem is searching for color images (e.g. [24,9,3]). Assume, for example, that we are seeking a given icon in any possible color map. If the colors were fixed, then this is exact two-dimensional pattern matching [2]. However, if the color map is different, the exact matching algorithm would not find the pattern. Parameterized two-dimensional search is precisely what is needed. If, in addition, one is also willing to lose resolution, then a two-dimensional function matching search should be used, where the renaming function is not necessarily a bijection [1,6].
Parameterized matching can also be naively done in time $O(nm)$. Based on our intuition for exact matching, it is expected that here, too, the naive algorithm is competitive with the KMP automaton-based algorithm of [5] on a randomly generated text.
In this paper we investigate the practical efficiency of the automaton-based algorithm vs. the naive algorithm, both in exact and in parameterized matching. We consider the following parameters: pattern length, alphabet size, and distribution of pattern occurrences in the text. Our findings are that, indeed, the naive algorithm is faster than the automaton algorithm in practically all settings of the exact matching problem. However, it was interesting to see that the automaton algorithm is always more effective than the naive algorithm for parameterized matching over randomly generated texts. We analyse the reason for this difference.
We established that the randomness of the text is what made the naive algorithm so efficient for exact matching. We therefore ran the comparison on a very structured artificial text, and the automaton algorithm was a clear winner.
Having understood the practical behavior of the naive vs. the automaton algorithm over randomly generated texts, we were curious whether there were "real" texts with a similar phenomenon. We ran the same experiments over DNA texts and observed a behavior similar to that of a randomly generated text.

Problem Definition
We begin with basic definitions and notation generally following [13].
Let $S = s_1 s_2 \cdots s_n$ be a string of length $|S| = n$ over an ordered alphabet $\Sigma$. By $\varepsilon$ we denote the empty string. For two positions $i$ and $j$ in $S$, we denote by $S[i..j] = s_i \cdots s_j$ the factor (sometimes called a substring) of $S$ that begins at position $i$ and ends at position $j$ (it equals $\varepsilon$ if $j < i$). A prefix of $S$ is a factor that begins at position 1 ($S[1..j]$) and a suffix is a factor that ends at position $n$ ($S[i..n]$).
The exact string matching problem is defined as follows:

Definition 1 (Exact String Matching) Let $\Sigma$ be an alphabet set, $T = t_1 \cdots t_n$ the text and $P = p_1 \cdots p_m$ the pattern, $t_i, p_j \in \Sigma$, $i = 1, \ldots, n$; $j = 1, \ldots, m$. The exact string matching problem is: input: text $T$ and pattern $P$. output: All indices $j \in \{1, \ldots, n-m+1\}$ such that $t_{j+i-1} = p_i$ for every $i = 1, \ldots, m$.

We simplify Baker's definition of parameterized pattern matching.
Definition 2 (Parameterized Matching) Let $\Sigma$, $T$ and $P$ be as in Definition 1. We say that $P$ parameterize-matches, or simply p-matches, $T$ at location $j$ if $p_i \cong t_{j+i-1}$, $i = 1, \ldots, m$, where $p_i \cong t_j$ if and only if the following condition holds: for every $k = 1, \ldots, i-1$, $p_i = p_{i-k}$ if and only if $t_j = t_{j-k}$.
The p-matching problem is to determine all p-matches of P in T .
If two strings $S_1$ and $S_2$ have the same length $m$, then they are said to parameterize-match, or simply p-match, if $s_{1_i} \cong s_{2_i}$ for all $i \in \{1, \ldots, m\}$.
Intuitively, the matching relation $\cong$ captures the notion of a one-to-one mapping between the alphabet symbols. Specifically, the condition in the definition of $\cong$ ensures that there exists a bijection between the symbols from $\Sigma$ in the pattern and those in the overlapping text, when they p-match. The relation $\cong$ has been defined by [5] in a manner suitable for computing the bijection.
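The condition of Definition 2 can be checked directly from its statement. The following quadratic-time Python sketch is for illustration only; the efficient check via the table $A$ is described later.

```python
def p_matches_at(text, pattern, j):
    """Does pattern p-match text at 1-based location j? (direct check of Definition 2)"""
    m = len(pattern)
    for i in range(1, m + 1):               # position i of the pattern (1-based)
        for k in range(1, i):               # all previous positions inside the window
            # require: p_i = p_{i-k}  if and only if  t_{j+i-1} = t_{j+i-1-k}
            if (pattern[i - 1] == pattern[i - 1 - k]) != \
               (text[j + i - 2] == text[j + i - 2 - k]):
                return False
    return True
```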
Example: The string ABABCCBA parameterize-matches the string XYXYZZYX. The reason is that if we consider the bijection $\beta : \{A, B, C\} \rightarrow \{X, Y, Z\}$ defined by $\beta(A) = X$, $\beta(B) = Y$, $\beta(C) = Z$, then we get $\beta(ABABCCBA) = XYXYZZYX$. This explains the requirement in Def. 2, where two symbols match iff they also match in all their previous occurrences.
Of course, the alphabet bijection need not be as extreme as the bijection $\beta$ above. String ABABCCBA also parameterize-matches BABACCAB, because of the bijection $\gamma : \{A, B, C\} \rightarrow \{A, B, C\}$ defined as $\gamma(A) = B$, $\gamma(B) = A$, $\gamma(C) = C$.

For completeness, we define the KMP automaton.
Definition 3 Let $P = p_1 \ldots p_m$ be a string over alphabet $\Sigma$. The KMP automaton of $P$ is a 6-tuple $(Q, \Sigma, \delta_s, \delta_f, q_0, q_a)$, where $Q = \{0, \ldots, m\}$ is the set of states, $\Sigma$ is the alphabet, $\delta_s : Q \times \Sigma \rightarrow Q$ is the (partial) success function, $\delta_f : Q \rightarrow Q$ is the failure function, $q_0 = 0$ is the start state and $q_a = m$ is the accepting state.
The success function is defined by $\delta_s(i, \sigma) = i+1$ if $i < m$ and $\sigma = p_{i+1}$, and is undefined otherwise. For the failure function, denote by $\pi(S)$ the length of the longest proper prefix of string $S$ (i.e., excluding the entire string $S$) which is also a suffix of $S$. Then $\delta_f(i) = \pi(P[1..i])$, for $i = 1, \ldots, m$.
For an example of the KMP automaton see Fig. 1.
Theorem 1 [20] The KMP automaton can be constructed in time O(m).
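The $O(m)$ construction of Theorem 1 can be sketched in Python as follows (a standard formulation that reuses previously computed failure values, not the authors' code):

```python
def failure_function(pattern):
    """f[i] = length of the longest proper prefix of P[1..i] that is also a suffix of it."""
    m = len(pattern)
    f = [0] * (m + 1)                        # f[0] is unused; f[1] = 0 by definition
    k = 0                                    # length of the current candidate border
    for i in range(2, m + 1):
        while k > 0 and pattern[i - 1] != pattern[k]:
            k = f[k]                         # fall back along shorter borders
        if pattern[i - 1] == pattern[k]:
            k += 1                           # extend the border by one symbol
        f[i] = k
    return f
```

Each iteration increases $k$ by at most one and every fallback decreases it, so the total work is $O(m)$, matching the theorem.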

The Exact String Matching Problem
The Knuth-Morris-Pratt (KMP) search algorithm uses the KMP automaton in the following manner.

Variables: pointer$_t$ points to indices in the text; pointer$_p$ points to states of the automaton, i.e., indices in the pattern.

Main Loop:
While pointer$_t \leq n$ do:
    If $\delta_s(\mathrm{pointer}_p, t_{\mathrm{pointer}_t})$ is defined then do:
        pointer$_p \leftarrow$ pointer$_p + 1$
        If pointer$_p = m$ then do:
            output "pattern occurrence ends in text location pointer$_t$"
            pointer$_p \leftarrow \delta_f(\mathrm{pointer}_p)$
        pointer$_t \leftarrow$ pointer$_t + 1$
    Else if pointer$_p > 0$ then do: pointer$_p \leftarrow \delta_f(\mathrm{pointer}_p)$
    Else do: pointer$_t \leftarrow$ pointer$_t + 1$
endwhile
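The whole search, automaton construction plus main loop, can be sketched as follows (Python sketch; the failure function is rebuilt inline so the function is self-contained):

```python
def kmp_search(text, pattern):
    """Return all 1-based starting positions of pattern in text (at most 2n comparisons)."""
    n, m = len(text), len(pattern)
    # build failure links: f[q] = longest proper prefix of P[1..q] that is also its suffix
    f = [0] * (m + 1)
    k = 0
    for q in range(2, m + 1):
        while k > 0 and pattern[q - 1] != pattern[k]:
            k = f[k]
        if pattern[q - 1] == pattern[k]:
            k += 1
        f[q] = k
    occurrences, q = [], 0                  # q = current automaton state
    for t in range(n):
        while q > 0 and text[t] != pattern[q]:
            q = f[q]                        # failure transition
        if text[t] == pattern[q]:
            q += 1                          # success transition
        if q == m:
            occurrences.append(t - m + 2)   # occurrence ends at t; 1-based start
            q = f[q]
    return occurrences
```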
Theorem 2 [20] The time for the KMP search algorithm is $O(n)$. In fact, it does not exceed $2n$ comparisons.

The Parameterized Matching Problem
Amir, Farach, and Muthukrishnan [5] achieved an optimal time algorithm for parameterized string matching by a modification of the KMP algorithm. In fact, the algorithm is exactly the KMP algorithm, however, every equality comparison "$x = y$" is replaced by "$x \cong y$" as defined below.
The following subroutines compute "$p_i \cong t_j$" for $j \geq i$, and "$p_i \cong p_j$" for $j \leq i$.
The subroutine Compare uses a table $A$, where $A[i]$ is the distance from position $i$ back to the previous occurrence in $P$ of the symbol $p_i$ (0 if there is no previous occurrence). It returns equal if the corresponding distances in the pattern and in the current text window agree, and not equal otherwise.

Theorem 3 [5] The p-matching problem can be solved in $O(n \log \sigma)$ time, where $\sigma = \min(m, |\Sigma|)$.

Proof:
The table $A$ can be constructed in $O(m \log \sigma)$ time as follows: scan the pattern left to right, keeping track of the distinct symbols from $\Sigma$ in the pattern in a balanced tree, along with the last occurrence of each such symbol in the portion of the pattern scanned thus far. When the symbol at location $i$ is scanned, look up this symbol in the tree for the immediately preceding occurrence; that gives $A[i]$.
Compare can clearly be implemented in time $O(\log \sigma)$. For the case $A[i] = i$, the comparison can be done in time $O(1)$. When scanning the text from left to right, keep the last $m$ symbols in a balanced tree. The check of $t_j$ against $t_{j-1}, \ldots, t_{j-i+1}$ in Compare($p_i$, $t_j$) can be performed in $O(\log \sigma)$ time using this information. Similarly, Compare($p_i$, $p_j$) can be performed using $A[i]$. Therefore, the automaton construction in the KMP algorithm with every equality comparison "$x = y$" replaced by "$x \cong y$" takes time $O(m \log \sigma)$ and the text scanning takes time $O(n \log \sigma)$, giving a total of $O(n \log \sigma)$ time.
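The construction of the table $A$ just described can be sketched as follows (a Python dictionary stands in for the balanced tree, so lookups are hashed rather than $O(\log \sigma)$; entry 0 marks "no previous occurrence"):

```python
def build_A(pattern):
    """A[i] = distance from position i back to the previous occurrence of p_i, or 0."""
    last = {}                                # last position of each symbol seen so far
    A = [0] * len(pattern)
    for i, c in enumerate(pattern, start=1):
        if c in last:
            A[i - 1] = i - last[c]           # distance to the previous occurrence
        last[c] = i                          # update the last occurrence of c
    return A
```

Note that two strings consisting only of parameter symbols p-match exactly when their $A$ tables coincide; e.g., ABABCCBA and XYXYZZYX from the example above both yield [0, 0, 2, 2, 0, 1, 3, 5].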
As for the algorithm's correctness, Amir, Farach and Muthukrishnan showed that the failure link at automaton node $i$ produces the longest prefix of $p_1 \cdots p_i$ that p-matches a suffix of $p_1 \cdots p_i$.

Our Experiments
Our implementation was written in C++. The platform was a Dell Latitude 7490 with an Intel Core i7-8650U, 32 GB RAM, and 8 MB cache. The running time was computed using the C++ chrono high-resolution clock. The random strings were generated using the Python random package.
We implemented the naive algorithm for exact string matching and for parameterized matching. The same code was used for both, except for the implementation of the equivalence relation for parameterized matching, as described above. This required implementing the $A$ array. We also implemented the KMP algorithm for exact string matching, and used the same algorithm for parameterized matching. The only difference was the implementation of the parameterized matching equivalence relation.
The text length $n$ was 1,000,000 symbols. Theoretically, since both the automaton and the naive algorithm are sequential and only consider a window of the pattern length, it would have been sufficient to run the experiment on a text of size twice the pattern [4]. However, for the sake of measurement resolution we opted for a large text. Yet a size of 1,000,000 comfortably fits in the cache, and thus we avoid caching issues. In general, any searching algorithm for patterns of length less than 4 MB would fit in the cache if appropriately constructed in the manner of [4]. Therefore our decision gives as accurate a measurement as possible.
Methodology: We generated a uniformly random text of length 1,000,000. If the pattern were also randomly generated, then it would be unlikely to appear in the text. However, when seeking a pattern in a text, one assumes that the pattern occurs in the text. An example would be searching for a sequence in the DNA. When seeking a sequence, one expects to find it but just does not know where. Additionally, we considered the common case where one does not expect many occurrences of the pattern in the text. Consequently, we planted 100 occurrences of the pattern in the text at uniformly random locations. The final text length was always 1,000,000. The reason for inserting 100 pattern occurrences is the following. We do not expect many occurrences, and 100 occurrences in a million-length text mean that less than 0.1% of the text has pattern occurrences. On the other hand, it is sufficient to introduce the option of actually following all elements of the pattern 100 times. This makes a difference for both algorithms; they would both work faster if there were no occurrences at all.
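The text generation just described can be sketched as follows (a hypothetical Python sketch of the setup; the symbol encoding and the handling of overlapping planted occurrences in our actual experiments may differ):

```python
import random

def make_text(n, pattern, occurrences, alphabet_size):
    """Uniformly random text of length n with planted pattern occurrences."""
    alphabet = [chr(ord('a') + i) for i in range(alphabet_size)]
    text = [random.choice(alphabet) for _ in range(n)]      # uniformly random text
    m = len(pattern)
    for _ in range(occurrences):
        j = random.randrange(n - m + 1)                     # uniformly random location
        text[j:j + m] = pattern                             # plant one occurrence
    return "".join(text)
```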
We also implemented a variation where half of the pattern occurrences were in the last quarter of the text. For each alphabet size and pattern length we generated 10 tests and considered the average result of all 10 tests. It should be noted that, from a theoretical point of view, the location of the pattern occurrences should not make a difference. We tested the different options in order to verify that this is, indeed, the case.

Results
Tables 1 and 2 in the Appendix show the alphabet size, the pattern length, the average running time of the naive algorithm over the 10 tests, the average running time of the KMP algorithm over the 10 tests, and the ratio of the naive algorithm running time to the KMP algorithm running time. Any ratio value below 1 means that the naive algorithm is faster; the smaller the value, the better the naive algorithm's relative performance. Any value above 1 means that the KMP algorithm is faster; the larger the value, the better its relative performance.
To enable a clearer understanding of the results, we present them below in graph form. The following graphs show the results of our tests for the different pattern lengths. In Figs. 2 and 3, the x-axis is the pattern size. The y-axis is the ratio of the naive algorithm running time to the KMP algorithm running time. The different colors depict alphabet sizes. In Fig. 2, the patterns were inserted at random, whereas in Fig. 3 the patterns appear in the last half of the text.
To better see the effect of the pattern distribution in the text, Fig. 4 maps both cases on the same graph. In this graph, the x-axis is the average running time over all pattern lengths per alphabet size, and the y-axis is the ratio of the naive algorithm running time to the KMP algorithm running time. The results of the uniformly random distribution are mapped in one color, and the results of all pattern occurrences in the last half of the text are mapped in another.
We note the following phenomena:

1. The naive algorithm performs better than the automaton algorithm. Of the 600 tests we ran, there were only 3 instances where the KMP algorithm performed better than the naive algorithm, and all were subsumed by the average. In the vast majority of cases the naive algorithm was superior by far.
2. The naive algorithm performs relatively better for larger alphabets.
3. For a fixed alphabet size, there is a slight increase in the naive/KMP ratio, as the pattern length increases.
4. The distribution of the pattern occurrences in the text does not seem to make a difference in performance.
An analysis of these implementation behaviors appears in the next subsection.

Analysis
We analyse all four results noted above.

Better Performance of the Naive Algorithm
We have seen that the mean number of comparisons of the naive algorithm for binary alphabets is bounded by $n \sum_{i=1}^{m} \frac{i}{2^i} \leq 2n$. The number of comparisons of the KMP algorithm is also bounded by $2n$. However, the control of the KMP algorithm is more complex than that of the naive algorithm, which would indicate a constant ratio in favor of the naive algorithm. On the other hand, when the KMP algorithm encounters a mismatch it follows the failure link, which avoids the need to re-check a longer substring. Thus, for longer patterns, where there are more possibilities of following the failure links for longer distances, the advantage of the naive algorithm lessens.
Better Performance of the Naive Algorithm for Larger Alphabets

This is fairly clear when we realize that the mean number of comparisons of the naive algorithm for an alphabet of size $k$ is $n \sum_{i=1}^{m} i\,\frac{k-1}{k^i}$, which clearly decreases as the alphabet size grows. However, the repetitive traversal of the failure link, even in cases where there is no equality in the comparison check, will still raise the relative running time of the KMP algorithm. Here too, the longer the pattern, the more failure link traversals the KMP algorithm makes, and thus fewer overall comparisons, which slightly decreases the advantage of the naive algorithm.

The Distribution of Pattern Occurrences in the Text
If the pattern is not periodic, and pattern occurrences are not too frequent in the text, then there will be at most one occurrence in a text substring of length $2m$. In these circumstances, the distribution of the pattern occurrences in the text really has no effect. We would expect a difference if the pattern is long with a small period. Indeed, such an extreme case is tested in Subsection 5.1.3.

A Very Structured Example
All previous analyses point to the conviction that the more times a prefix of the pattern appears in the text, and the more periodic the pattern, the better will be the performance of the KMP algorithm. The most extreme case would be the text $A^n$ ($A$ concatenated $n$ times) and the pattern $A^{m-1}B$. Indeed, the results of this case appear in Fig. 5.
Theoretical analysis of the naive algorithm predicts that we will have $nm$ comparisons, where $n$ is the text length and $m$ is the pattern length. The KMP algorithm will make at most $2n$ comparisons, for any pattern length. Thus the ratio $q$ of the naive to the KMP running time should grow as $m/2$. In fact, when we plot $m/q$ we get twice the relative cost of the control of the KMP algorithm; this can be seen in Fig. 5 to be about 5.
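The $nm$ prediction can be verified by counting comparisons directly (Python sketch; it counts symbol comparisons rather than measuring wall-clock time):

```python
def naive_comparisons(text, pattern):
    """Count the symbol comparisons performed by the naive algorithm."""
    n, m = len(text), len(pattern)
    count = 0
    for j in range(n - m + 1):
        for i in range(m):
            count += 1                       # one symbol comparison
            if text[j + i] != pattern[i]:
                break                        # the naive scan stops at the first mismatch
    return count

# on text A^n and pattern A^{m-1}B, every alignment matches the first m-1 symbols
# and fails on the final B, so the count is exactly m * (n - m + 1), roughly n * m
```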

Results
The exact matching results behaved roughly in the manner we expected. The surprise came in the parameterized matching case. Below are the results of our tests. As in the exact matching case, the tables show the alphabet size, the pattern length, the average running time of the naive algorithm over the 10 tests, the average running time of the automaton-based algorithm over the 10 tests, and the ratio $q$ of the naive algorithm running time to the automaton-based algorithm running time. Any ratio value above 1 means that the automaton-based algorithm is faster. A large value indicates a better performance of the automaton-based algorithm.
The following graphs show the results of our tests for the different pattern lengths. The x-axis is the pattern size. The y-axis is the ratio of the naive algorithm running time to the automaton-based algorithm running time. The different colors depict alphabet sizes. To better see the effect of the pattern distribution in the text, we also map both cases on the same graph. In this graph, the x-axis is the average running time over all pattern lengths per alphabet size, and the y-axis is the ratio of the naive algorithm running time to the automaton-based algorithm running time. The results of the uniformly random distribution are mapped in one color, and the results of all pattern occurrences in the last half of the text are mapped in another.
The parameterized matching results appear in Tables 3 and 4 in the Appendix. Figs. 6 and 7 map the results of the parameterized matching comparisons for the case where the patterns were inserted at random vs. the case where the patterns appear in the last half of the text. In Fig. 8 we map on the same graph the average results of both the case where the patterns appear in the text uniformly at random and the case where the patterns appear in the last half of the text.
The results are very different from the exact matching case. We note the following phenomena:

1. The automaton-based algorithm always performs significantly better than the naive algorithm.
2. The automaton-based algorithm performs relatively better for larger alphabets.
3. For a fixed alphabet size, the pattern length does not seem to make much difference.

An analysis of these implementation behaviors, and an explanation of the seemingly opposite results from the exact matching case, appear in the next subsection.

Analysis
We analyse the three results noted above.

Better Performance of the Automaton-based Algorithm
We have established that the mean number of comparisons of the naive algorithm for an alphabet of size $k$ is $n \sum_{i=1}^{m} i\,\frac{k-1}{k^i}$. However, when it comes to parameterized matching, any renaming of the alphabet symbols is a match, thus the mean number of comparisons is to be multiplied by $k!$. Therefore, for an alphabet of size 2 we get $4n$ comparisons, and the number rises exponentially with the alphabet size. In contrast, the automaton-based algorithm stays constant at $2n$ comparisons. Even for an alphabet of size 2, the naive algorithm performs twice as many comparisons as the automaton-based algorithm. Note, also, that because of the need to find the last parameterized match, the control mechanism even of the naive algorithm is more complex. This results in a superior performance of the automaton-based algorithm even for small alphabets. Of course, the larger the alphabet, the better the performance of the automaton-based algorithm.

Pattern Length
The pattern length does not play a role in the automaton-based algorithm, where the number of comparisons is always bounded by $2n$. In the naive case, the factorial factor of the alphabet size is so overwhelming that it dominates the pattern length. For example, note that for an extremely large alphabet, there would be a leading prefix of distinct alphabet symbols. That prefix will always be traversed by the naive algorithm, and the larger the alphabet, the longer the mean length of that prefix.

Pattern Distribution
As in the exact matching case, for a non-periodic pattern that does not appear too many times, the distribution of occurrences will have no effect on the complexity.

DNA Data
Having understood the behavior of the naive algorithm and the automaton-based algorithm on randomly generated texts, the natural question is whether there are "real" texts for which the naive algorithm performs better than the automaton-based algorithm.
We performed the same experiments on DNA data. The experimental setting was identical to that of the randomly generated texts, with the following differences:

1. The DNA of the fruit fly, Drosophila melanogaster, is 143.7 MB long. We extracted 60 subsequences of length 1,000,000 each, as FASTA data, from the NIH National Library of Medicine, National Center for Biotechnology Information. We ran 10 tests on each of the six pattern lengths 32, 64, 128, 256, 512, and 1024.
2. The alphabet size is 4, corresponding to the four bases of a DNA sequence.
Figs. 9 and 10 below show the ratio between the average running times of the naive algorithm and the automaton-based algorithm. As in the uniformly random text, we see that for the exact matching case the ratio is less than 1, i.e., the naive algorithm is faster, whereas in the parameterized matching case the ratio is more than 1, indicating that the automaton-based algorithm is faster.

Conclusions
The folk wisdom has always been that the naive string matching algorithm will outperform the automaton-based algorithm for uniformly random texts. Indeed, this turns out to be the case for exact matching. This study shows that this is not the case for parameterized matching, where the automaton-based algorithm always outperforms the naive algorithm. This advantage is clear, and it grows impressively with the alphabet size. The same result holds for searches over DNA data.
The conclusion to take away from this study is that one should not automatically assume that the naive string matching algorithm is better. The matching relation should be analysed. There are various matchings for which an automaton-based algorithm exists. We considered here parameterized matching, but other matchings, such as ordered matching [12,14,19] or Cartesian tree matching [21,22,23], can also be solved by automaton-based methods. In a practical application it is worthwhile spending some time considering the type of matching one is using. It may turn out that the automaton-based algorithm will perform significantly better than the naive one, even for uniformly random texts. Conversely, even non-uniformly random data may be such that the naive algorithm performs better than the automaton-based algorithm for exact matching.
An open problem is to compare the search time in DNA data to the search time in uniformly random data. While it is clear that DNA data is not uniformly random, it would be interesting to devise an experimental setting to compare search efficiency in both types of strings.

Figure 2: Performance in the Exact Matching case, pattern occurrences distributed uniformly random.

Figure 3: Performance in the Exact Matching case, pattern occurrences congregated at end of text.

Figure 4: Comparison of average performance of uniform pattern distribution vs. pattern occurrences congregated at end of text.

Figure 5: Performance in the Exact Matching case, periodic text.

Figure 6: Performance in the Parameterized Matching case, pattern occurrences distributed uniformly random.

Figure 7: Performance in the Parameterized Matching case, pattern occurrences congregated at end of text.

Figure 8: Comparison of average performance of uniform pattern distribution vs. pattern occurrences congregated at end of text.

Figure 9: Performance in the Exact Matching case on DNA sequences.

Figure 10: Performance in the Parameterized Matching case on DNA Sequences.

Table 2: Implementation Results - Exact Matching, patterns at end.