Abstract
We study quantum algorithms for several fundamental string problems, including Longest Common Substring, Lexicographically Minimal String Rotation, and Longest Square Substring. These problems have been widely studied in the stringology literature since the 1970s, and are known to be solvable by near-linear time classical algorithms. In this work, we give quantum algorithms for these problems with near-optimal query complexities and time complexities. Specifically, we show that: Longest Common Substring can be solved by a quantum algorithm in \(\tilde{O}(n^{2/3})\) time, improving upon the recent \(\tilde{O}(n^{5/6})\)-time algorithm by Le Gall and Seddighin (in: Proceedings of the 13th innovations in theoretical computer science conference (ITCS 2022), pp 97:1–97:23, 2022. https://doi.org/10.4230/LIPIcs.ITCS.2022.97). Our algorithm uses the MNRS quantum walk framework, together with a careful combination of string synchronizing sets (Kempa and Kociumaka, in: Proceedings of the 51st annual ACM SIGACT symposium on theory of computing (STOC 2019), ACM, pp 756–767, 2019. https://doi.org/10.1145/3313276.3316368) and generalized difference covers. Lexicographically Minimal String Rotation can be solved by a quantum algorithm in \(n^{1/2 + o(1)}\) time, improving upon the recent \(\tilde{O}(n^{3/4})\)-time algorithm by Wang and Ying (in: Quantum algorithm for lexicographically minimal string rotation. CoRR, 2020. arXiv:2012.09376). We design our algorithm by first giving a new classical divide-and-conquer algorithm in near-linear time based on exclusion rules, and then speeding it up quadratically using nested Grover search and quantum minimum finding. Longest Square Substring can be solved by a quantum algorithm in \(\tilde{O}(\sqrt{n})\) time. Our algorithm is an adaptation of the algorithm by Le Gall and Seddighin (2022) for the Longest Palindromic Substring problem, but uses additional techniques to overcome the difficulty that binary search no longer applies.
Our techniques naturally extend to other related string problems, such as Longest Repeated Substring, Longest Lyndon Substring, and Minimal Suffix.
1 Introduction
The study of string processing algorithms is an important area of research in theoretical computer science, with applications in numerous fields including bioinformatics, data mining, plagiarism detection, etc. Many fundamental problems in this area have been known to have linear-time algorithms for over 40 years. Examples include Exact String Matching [1, 2], Longest Common Substring [3,4,5], and (Lexicographically) Minimal String Rotation [6,7,8]. These problems have also been studied extensively in the context of data structures, parallel algorithms, and low-space algorithms.
More recently, there has been growing interest in developing efficient quantum algorithms for these basic string problems. Given quantum query access to the input strings (defined in Sect. 2.3), it is sometimes possible to solve such problems in sublinear query complexity and time complexity. The earliest such result was given by Ramesh and Vinay [9], who combined Vishkin’s deterministic sampling technique [10] with Grover search [11] to obtain a quantum algorithm for the Exact String Matching problem with near-optimal \({\tilde{O}}(\sqrt{n})\) time complexity.^{Footnote 1} More recently, Le Gall and Seddighin [12] obtained sublinear-time quantum algorithms for various string problems, among them an \({\tilde{O}}(n^{5/6})\)-time algorithm for Longest Common Substring (LCS) and an \({\tilde{O}}(\sqrt{n})\)-time algorithm for Longest Palindromic Substring (LPS). In developing these algorithms, they applied the quantum Exact String Matching algorithm [9] and Ambainis’ Element Distinctness algorithm [13] as subroutines, and used periodicity arguments to reduce the number of candidate solutions to be checked. Another recent work by Wang and Ying [14] showed that Minimal String Rotation can be solved in \({\tilde{O}}(n^{3/4})\) quantum time. Their algorithm was also based on quantum search primitives (including Grover search and quantum minimum finding [15]) and techniques borrowed from parallel string algorithms [10, 16, 17].
On the lower bound side, it has been shown that Longest Common Substring requires \({\tilde{\Omega }}(n^{2/3})\) quantum query complexity (by a reduction [12] from the Element Distinctness problem [18,19,20]), and that Exact String Matching, Minimal String Rotation, and Longest Palindromic Substring all require \(\Omega (\sqrt{n})\) quantum query complexity (by reductions [12, 14] from the unstructured search problem [21]). Le Gall and Seddighin [12] observed that although the classical algorithms for LCS and LPS are almost the same (both based on suffix trees [3]), the latter problem (with time complexity \({\tilde{\Theta }}(\sqrt{n})\)) is strictly easier than the former (with an \({\tilde{\Omega }}(n^{2/3})\) lower bound) in the quantum query model.
Despite these results, our knowledge about the quantum computational complexities of basic string problems is far from complete. For the LCS problem and the Minimal String Rotation problem mentioned above, there are \(n^{\Omega (1)}\) gaps between current upper bounds and lower bounds. Better upper bounds are only known in special cases: Le Gall and Seddighin [12] gave an \({\tilde{O}}(n^{2/3})\)-time algorithm for \((1-\varepsilon )\)-approximating LCS in non-repetitive strings, matching the query lower bound in this setting. Wang and Ying [14] gave an \(\tilde{O}(\sqrt{n})\)-time algorithm for Minimal String Rotation on randomly generated strings, and showed a matching average-case query lower bound. However, these algorithms do not immediately extend to the general cases. Moreover, there remain many other string problems which have near-linear time classical algorithms but no known quantum speedup.
1.1 Our Results
In this work, we develop new quantum query algorithms for many fundamental string problems. All our algorithms are near-optimal and time-efficient: they have time complexities that match the corresponding query complexity lower bounds up to \(n^{o(1)}\) factors. In particular, we close the gaps for Longest Common Substring and Minimal String Rotation left open in previous work [12, 14]. We summarize our contributions (together with some earlier results) in Fig. 1. See Sect. 2.2 for formal definitions of the studied problems.
1.2 Technical Overview
We give high-level overviews of our quantum algorithms for Longest Common Substring (LCS), Minimal String Rotation, and Longest Square Substring.
1.2.1 Longest Common Substring
We consider the decision version of LCS with threshold length d: given two length-n input strings s, t, decide whether they have a common substring of length d.
Le Gall and Seddighin [12, Section 3.1.1] observed a simple reduction from this decision problem to the (bipartite version of the) Element Distinctness problem, which asks whether the two input lists A, B contain a pair of identical items \(A_i=B_j\). Ambainis [13] gave a comparison-based algorithm for this problem in \({\tilde{O}}(n^{2/3}\cdot T)\) time, where T denotes the time complexity of comparing two items. In the LCS problem with threshold length d, each item is a length-d substring of s or t (specified by its starting position), and the lexicographical order between two length-d substrings can be compared in \(T={\tilde{O}}(\sqrt{d})\) time using binary search and Grover search (see Lemma 2.5). Hence, this problem can be solved in \({\tilde{O}}(n^{2/3}\cdot \sqrt{d})\) time.
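As a classical point of reference, the reduction itself can be sketched in a few lines (the function name and the brute-force set intersection are ours for illustration; the quantum algorithm instead runs Ambainis’ Element Distinctness walk over the two lists):

```python
def lcs_at_least(s, t, d):
    """Decide whether s and t share a common substring of length d by
    checking the two lists of length-d substrings for a matching pair
    (the bipartite Element Distinctness instance described above)."""
    A = {s[i:i + d] for i in range(len(s) - d + 1)}
    B = {t[j:j + d] for j in range(len(t) - d + 1)}
    return not A.isdisjoint(B)
```

The full LCS length can then be recovered by binary search over the threshold d, as noted in Sect. 2.2.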
The anchoring technique The inefficiency of the algorithm described above comes from the fact that there are \(n-d+1 = \Omega (n)\) positions to be considered in each input string. This seems rather wasteful for larger d, since intuitively there is a lot of redundancy in the large overlap between these length-d substrings.
This is the idea behind the so-called anchoring technique, which has been widely applied in designing classical algorithms for various versions of the LCS problem [22,23,24,25,26,27,28]. In this technique, we carefully pick subsets \(C_1,C_2 \subseteq [n]\) of anchors, such that in a YES input instance there must exist an anchored common substring, i.e., a common string with occurrences \(s[i\mathinner {.\,.}i+d)=t[j\mathinner {.\,.}j+d)\) and a shift \(0\le h <d\) such that \(i+h\in C_1\) and \(j+h \in C_2\). Then, the task reduces to the Two String Families LCP problem [23], where we want to find a pair of anchors \(i'\in C_1, j'\in C_2\) that can be extended in both directions to a length-d common substring; equivalently, the longest common prefix of \(s[i'\mathinner {.\,.}],t[j'\mathinner {.\,.}]\) and the longest common suffix of \(s[\mathinner {.\,.}i'-1], t[\mathinner {.\,.}j'-1]\) should have total length at least d. Intuitively, a smaller set of anchors yields a better running time.
Small and explicit anchor sets One can construct such anchor sets based on difference covers [29, 30], with size \(|C_1|,|C_2|\le n/\sqrt{d}\). The construction is very simple and explicit (see Sect. 3.3.1), and is oblivious to the content of the input strings (in fact, it just consists of several arithmetic progressions of fixed lengths). In comparison, much smaller constructions exist if the anchors are allowed to depend on the input strings: for example, in their time-space tradeoff algorithm for LCS, Ben-Nun, Golan, Kociumaka, and Kraus [26] used partitioning sets [31] to construct an anchor set of size O(n/d). However, this latter anchor set takes too long to construct to be useful in our sublinear-time quantum algorithm.
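For intuition, the oblivious difference-cover construction can be sketched as follows (our own simplified version, assuming for readability that d is a perfect square; the function name is ours and Sect. 3.3.1 gives the general construction):

```python
def difference_cover_anchors(n, d):
    """Sketch of the oblivious anchor set built from a difference cover,
    assuming d = r*r is a perfect square.  The residue set
    S = {0,...,r-1} union {0, r, 2r, ..., (r-1)r} modulo d has the
    property that for any i, j there is a shift 0 <= h < d with
    (i+h) mod d in S and (j+h) mod d in S.  The anchors are the positions
    of [n] whose residue modulo d lies in S -- a few arithmetic
    progressions of total size O(n/sqrt(d)), as stated in the text."""
    r = round(d ** 0.5)
    assert r * r == d, "illustrative version: d must be a perfect square"
    S = set(range(r)) | {k * r for k in range(r)}
    return [i for i in range(1, n + 1) if i % d in S]
```

The shift property follows by writing \((j-i) \bmod d = qr+s\) with \(0\le s<r\) and choosing h so that \(j+h \equiv s\); then \(i+h \equiv (r-q)r \pmod d\), and both residues land in S.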
Our key idea is to combine the oblivious and non-oblivious approaches, and design anchor sets that balance size against construction time: the number of anchors is \(m=O(n/d^{3/4})\), and, given any index \(i\in [m]\), the \(i^{\text {th}}\) anchor can be reported in \(T={\tilde{O}}(\sqrt{d})\) quantum time. Our construction (Sect. 3.3) is based on an approximate version of difference covers, combined with the string synchronizing sets recently introduced by Kempa and Kociumaka [32] (adapted to the sublinear setting using tools from pseudorandomness). Roughly speaking, allowing errors in the difference cover makes it much smaller, while also introducing slight misalignments between the anchors, which are then fixed by the string synchronizing sets.
Anchoring via quantum walks Now we explain how to use small and explicit anchor sets to obtain better quantum LCS algorithms with time complexity \({\tilde{O}}(m^{2/3} \cdot (\sqrt{d}+T)) = {\tilde{O}}(n^{2/3})\), where \(m = {\tilde{O}}(n/d^{3/4})\) is the number of anchors, and \(T={\tilde{O}}(\sqrt{d})\) is the time complexity of computing the \(i^{\text {th}}\) anchor. Our algorithm uses the MNRS quantum walk framework [33] (see Sect. 2.5) on Johnson graphs. Informally speaking, to apply this framework, we need to solve the following dynamic problem: maintain a subset of r anchors that undergoes insertions and deletions (called update steps), and in each query (called a checking step) we need to solve the Two String Families LCP problem on this subset, i.e., answer whether the current subset contains a pair of anchors that can extend to a length-d common substring. If each update step takes time \({\textsf{U}}\), and each checking step takes time \({\textsf{C}}\), then the MNRS quantum walk algorithm has overall running time \({\tilde{O}}(r \ {\textsf{U}} + \frac{m}{r}\ (\sqrt{r} \ {\textsf{U}} +{\textsf{C}}))\). We will achieve \({\textsf{U}} = {\tilde{O}}(\sqrt{d}+T)\) and \({\textsf{C}} = {\tilde{O}}(\sqrt{rd})\), and obtain the claimed time complexity by setting \(r = m^{2/3}\).
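Concretely, the parameter balancing can be verified by substituting the stated costs into the MNRS expression (a back-of-the-envelope check; polylogarithmic factors suppressed):

```latex
\[
\begin{aligned}
r\,{\textsf{U}} + \frac{m}{r}\bigl(\sqrt{r}\,{\textsf{U}} + {\textsf{C}}\bigr)
&= \tilde{O}\Bigl(r\sqrt{d} + \frac{m\sqrt{d}}{\sqrt{r}} + \frac{m}{r}\sqrt{rd}\Bigr)
 = \tilde{O}\Bigl(r\sqrt{d} + \frac{m\sqrt{d}}{\sqrt{r}}\Bigr)\\
&= \tilde{O}\bigl(m^{2/3}\sqrt{d}\bigr)
 = \tilde{O}\bigl((n/d^{3/4})^{2/3}\,\sqrt{d}\bigr)
 = \tilde{O}\bigl(n^{2/3}\bigr),
\end{aligned}
\]
```

where the second line sets \(r = m^{2/3}\) (balancing the two terms) and uses \({\textsf{U}}={\tilde{O}}(\sqrt{d}+T)={\tilde{O}}(\sqrt{d})\), \({\textsf{C}}={\tilde{O}}(\sqrt{rd})\), and \(m={\tilde{O}}(n/d^{3/4})\).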
To solve this dynamic problem, we maintain the lexicographical ordering of the length-d substrings specified by the current subset of anchors, as well as the corresponding LCP array, which contains the length of the longest common prefix between every two lexicographically adjacent substrings. Note that the maintained information uniquely defines the compact trie of these substrings. This information can be updated easily after each insertion (or deletion) operation: we first compute the inserted anchor in T time, and then use binary search with Grover search to find its lexicographical rank and the LCP values with its neighbors, in \({\tilde{O}}(\sqrt{d})\) quantum time.
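A classical stand-in for this update step may clarify the bookkeeping (the class and helper names are ours; for readability we store the substrings themselves, whereas the quantum algorithm stores only anchor positions, compares substrings in \({\tilde{O}}(\sqrt{d})\) time, and must additionally be history-independent):

```python
import bisect

def lcp(a, b):
    """Length of the longest common prefix of strings a and b."""
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    return k

class AnchorSet:
    """Maintains, for a subset of anchors, the sorted list of their
    length-d substrings together with the LCP array of adjacent entries."""
    def __init__(self, s, d):
        self.s, self.d = s, d
        self.subs = []   # length-d substrings in lexicographic order
        self.lcps = []   # lcps[k] = lcp(subs[k], subs[k+1])

    def insert(self, i):
        """Insert the anchor at 1-based position i (substring s[i..i+d))."""
        w = self.s[i - 1:i - 1 + self.d]
        k = bisect.bisect_left(self.subs, w)   # lexicographical rank
        self.subs.insert(k, w)
        if len(self.subs) == 1:
            return
        # only the LCP values adjacent to the new entry change
        if k == 0:
            self.lcps.insert(0, lcp(w, self.subs[1]))
        elif k == len(self.subs) - 1:
            self.lcps.append(lcp(self.subs[k - 1], w))
        else:
            self.lcps[k - 1] = lcp(self.subs[k - 1], w)
            self.lcps.insert(k, lcp(w, self.subs[k + 1]))
```

Deletions are symmetric: remove the entry and merge the two adjacent LCP values (by Lemma 2.1, their minimum).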
The maintained information will be useful for the checking step. In fact, if we only cared about query complexity, we would already be done: the maintained information uniquely determines the answer of the Two String Families LCP problem, and no additional queries to the input strings are needed. The main challenge is to implement this checking step time-efficiently. Unfortunately, the classical near-linear-time algorithm [23] for the Two String Families LCP problem is too slow compared to our goal of \({\textsf{C}}={\tilde{O}}(\sqrt{rd})\), and it is not clear how to obtain a quantum speedup over this classical algorithm. Hence, we should dynamically maintain the solution using data structures, instead of solving the problem from scratch every time. In fact, such a data structure with \(\mathrm{poly\,log}(n)\) time per operation was already given by Charalampopoulos, Gawrychowski, and Pokorski [27], who used it to obtain a classical data structure for maintaining the Longest Common Substring under character substitutions. However, this data structure cannot be applied to the quantum walk algorithm, since it violates two requirements that are crucial for the correctness of quantum walk algorithms: (1) it should have worst-case (rather than amortized) time complexity, and (2) it should be history-independent (see the discussion in Sect. 3.2.1 for more details). Instead, we design a different data structure that satisfies these two requirements and solves the Two String Families LCP problem on the maintained subset in \({\tilde{O}}(\sqrt{rd})\) quantum time. This time complexity is worse than the \(\mathrm{poly\,log}(n)\) time achieved by the classical data structure of [27], but suffices for our application.
A technical hurdle: limitations of 2D range query data structures Our solution for the Two String Families LCP problem is straightforward, but a key component of the algorithm relies on dynamic 2-dimensional orthogonal range queries. This is a well-studied problem in the data structure literature, and many \(\mathrm{poly\,log}(n)\)-time data structures are known (see [34,35,36] and the references therein). However, for our results, the 2-dimensional (2D) range query data structure in question has to satisfy not only the two requirements mentioned above, but also a third requirement of being comparison-based. In particular, we are not allowed to treat the coordinates of the 2D points as \(\textrm{poly}(n)\)-bounded integers, because the coordinates actually correspond to substrings of the input string, and should be compared by lexicographical order. Unfortunately, no data structures satisfying all three requirements are known.
To bypass this difficulty, our novel idea is to use a sampling procedure that estimates the rank of a coordinate of the inserted 2D point among all possible coordinates, which effectively lets us convert the non-integer coordinates into integer coordinates. By a version of the Balls-and-Bins hashing argument, the inaccuracy incurred by the sampling can be controlled for most of the vertices of the Johnson graph on which the quantum walk operates. This lets us apply 2D range query data structures over integer coordinates (see Sect. 3.2.3 for the details of this argument), which can be implemented with worst-case time complexity and history-independence, as required. Combining this method with the tools and ideas mentioned before yields a time-efficient implementation of the quantum walk algorithm for computing the LCS.
We believe this sampling idea will find further applications in improving the time efficiency of quantum walk algorithms (for example, it can simplify the implementation of Ambainis’ \(\tilde{O}(n^{2/3})\)-time Element Distinctness algorithm, as noted in Sect. 6).
1.2.2 Minimal String Rotation
In the Minimal String Rotation problem, we are given a string s of length n and are tasked with finding the lexicographically smallest cyclic rotation of s. We sketch the main ideas of our improved quantum algorithm by comparing it to the previous best solution for this problem.
The simplest version of Wang and Ying’s algorithm [14, Theorem 5.2] works by identifying a small prefix of the minimal rotation using Grover search, and then applying pattern matching with this small prefix to find the starting position of the minimal rotation. More concretely, let B be some size parameter. By quantum minimum finding over the length-B prefixes of all rotations of s, we can find the length-B prefix P of the minimal rotation in asymptotically \(\sqrt{B}\cdot \sqrt{n}\) time. Next, split the string s into \(\Theta (n/B)\) blocks of size \(\Theta (B)\) each. Within each block, we find the leftmost occurrence of P via quantum Exact String Matching [9]. It turns out that one of these positions is guaranteed to be a starting position of the minimal rotation (this property is called an “exclusion rule” or “Ricochet Property” in the literature). By minimum finding over these O(n/B) candidate starting positions (and comparisons of length-n strings via Grover search), we can find the true minimal rotation in asymptotically \(\sqrt{n/B} \cdot \sqrt{n}\) time. So overall the algorithm takes asymptotically

$$\sqrt{B}\cdot \sqrt{n} \;+\; \sqrt{n/B}\cdot \sqrt{n}$$

time, which is minimized at \(B = \sqrt{n}\) and yields a runtime of \(\tilde{O}(n^{3/4})\).
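The candidate structure of this scheme can be simulated classically (a sketch with our own function name; plain linear scans and a classical minimum stand in for quantum minimum finding, quantum Exact String Matching, and Grover-search comparisons, so the simulation gains no speedup but exhibits the same three steps):

```python
def minimal_rotation_blocked(s):
    """Classical simulation of the simplest Wang-Ying scheme."""
    n = len(s)
    t = s + s
    B = max(1, int(n ** 0.5))
    # Step 1: the minimal length-B prefix P among all rotations
    P = min(t[i:i + B] for i in range(n))
    # Step 2: the leftmost occurrence of P inside each block of size B
    candidates = []
    for b0 in range(0, n, B):
        for i in range(b0, min(b0 + B, n)):
            if t[i:i + B] == P:
                candidates.append(i)
                break
    # Step 3: minimum rotation among the O(n/B) candidates
    best = min(candidates, key=lambda i: t[i:i + n])
    return t[best:best + n]
```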
This algorithm is inefficient in its first step, where it uses quantum minimum finding to obtain the minimal length-B prefix P. The length-B prefixes we are searching over all come from rotations of the same string s. Due to this shared structure, we should be able to find their minimum more efficiently than by the generic minimum-finding algorithm. At a high level, we improve this step by finding P using recursion instead. Intuitively, this is possible because the Minimal Rotation problem is already about finding the minimum “prefix” (just of length n) among rotations of s. This yields a recursive algorithm running in \(n^{1/2 + o(1)}\) quantum time.
In the presentation of this algorithm in Sect. 4, we use a chain of reductions and actually solve a more general problem to get this recursion to work. The argument also relies on a new “exclusion rule,” adapted from previous work, to prove that we only need to consider a constant number of candidate starting positions of the minimum rotation within each small block of the input string.
1.2.3 Longest Square Substring
A square string is a string of even length with the property that its first half is identical to its second half. In other words, a string is square if it can be viewed as some string repeated twice in a row.
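For concreteness, a minimal check of this definition (the function name is ours):

```python
def is_square(w):
    """A string is square iff it has even length and its two halves match."""
    h = len(w) // 2
    return len(w) % 2 == 0 and w[:h] == w[h:]
```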
We show how to find the longest square substring in an input string of length n using a quantum algorithm which runs in \(\tilde{O}(\sqrt{n})\) time. Our algorithm mostly follows the ideas used in [12] to solve the Longest Palindromic Substring problem, but makes some modifications due to the differing structures of square substrings and palindromic substrings (for example, [12] exploits the fact that if a string contains a large palindromic substring it has smaller palindromic substrings centered at the same position; in contrast, it is possible for a string to have a large square substring but not contain any smaller square substrings, so we cannot leverage this sort of property).
At a high level, our algorithm starts by guessing the size of the longest square substring within a \((1+\varepsilon )\) factor for some small constant \(\varepsilon > 0\). We then guess a large substring P contained in the first half of an optimal solution, and then use the quantum algorithm for Exact String Matching to find a copy of this P in the second half of the corresponding solution. If we find a unique copy of P, we can use a Grover search to extend outwards from our copies of P and recover a longest square substring. Otherwise, if we find multiple copies, it implies our substring is periodic, so we can use a Grover search to find a maximal periodic substring containing a large square substring, and then employ some additional combinatorial arguments to recover the solution.
1.3 Related Work
Quantum algorithms on string problems Wang and Ying [14] improved the logarithmic factors of the quantum Exact String Matching algorithm by Ramesh and Vinay [9] (and filled in several gaps in their original proof), and showed that the same technique can be used to find the smallest period of a periodic string [14, Appendix D].
Another important string problem is computing the edit distance between two strings (the minimum number of deletions, insertions, and substitutions needed to turn one string into the other). The best known classical algorithm has \(O(n^2/\log ^2 n)\) time complexity [37], which is near-optimal under the Strong Exponential Time Hypothesis [38]. It is open whether quantum algorithms can compute edit distance in truly subquadratic time. For the approximate version of the edit distance problem, the breakthrough work of Boroujeni et al. [39] gave a truly subquadratic time quantum algorithm for computing a constant factor approximation. The quantum subroutines of this algorithm were subsequently replaced with classical randomized algorithms in [40] to get a truly subquadratic classical algorithm that approximates the edit distance to a constant factor.
Le Gall and Seddighin [12] also considered the \((1+\varepsilon )\)-approximate Ulam distance problem (i.e., edit distance on non-repetitive strings), and showed a quantum algorithm with near-optimal \({\tilde{O}}(\sqrt{n})\) time complexity. Their algorithm was based on the classical algorithm by Naumovitz, Saks, and Seshadhri [41].
Montanaro [42] gave quantum algorithms for the d-dimensional pattern matching problem with random inputs. Ambainis et al. [43] gave quantum algorithms for deciding Dyck languages. There are also some results [44, 45] on string problems with non-standard quantum queries to the input.
Quantum walks and time-efficient quantum algorithms Quantum walks [13, 33, 46] are a useful method to obtain query-efficient quantum algorithms for many important problems, such as Element Distinctness [13] and Triangle Finding [47,48,49]. Ambainis showed that the query-efficient algorithm for Element Distinctness [13] can also be implemented in a time-efficient manner with only a \(\mathrm{poly\,log}(n)\) blowup, by applying history-independent data structures in the quantum walk. Since then, this “quantum walk plus data structure” strategy has been used in many quantum algorithms to obtain improved time complexity. For example, Belovs, Childs, Jeffery, Kothari, and Magniez [50] used nested quantum walks with Ambainis’ data structure to obtain time-efficient algorithms for the 3-distinctness problem. Bernstein, Jeffery, Lange, and Meurer [51] designed a simpler data structure called the quantum radix tree [52], and applied it in their quantum walk algorithms for the Subset Sum problem on random inputs. Aaronson, Chia, Lin, Wang, and Zhang [53] gave a quantum walk algorithm for the Closest-Pair problem in O(1)-dimensional space with near-optimal time complexity \({\tilde{O}}(n^{2/3})\). The previous \({\tilde{O}}(n^{2/3})\)-time algorithm for approximating LCS in non-repetitive strings [12] also applied quantum walks.
On the other hand, query-efficient quantum algorithms do not always have time-efficient implementations. This motivated the study of quantum fine-grained complexity. Aaronson et al. [53] formulated the QSETH conjecture, which is a quantum analogue of the classical Strong Exponential Time Hypothesis, and showed that Orthogonal Vectors and Closest-Pair in \(\mathrm{poly\,log}(n)\)-dimensional space require \(n^{1-o(1)}\) quantum time under QSETH. In contrast, these two problems have simple quantum walk algorithms with only \(O(n^{2/3})\) query complexity. Buhrman, Patro, and Speelman [54] formulated another version of QSETH, which implies a conditional \(\Omega (n^{1.5})\)-time lower bound for quantum algorithms solving the edit distance problem. Recently, Buhrman, Loff, Patro, and Speelman [55] proposed the quantum 3SUM hypothesis, and used it to show that the quadratic quantum speedups obtained by Ambainis and Larka [56] for many computational geometry problems are conditionally optimal. Notably, in their fine-grained reductions, they employed a quantum walk with data structures to bypass the linear-time preprocessing stage that a naive approach would require.
Classical string algorithms We refer readers to several excellent textbooks [57,58,59] on string algorithms.
Weiner [3] introduced the suffix tree and gave a linear-time algorithm for computing the LCS of two strings over a constant-sized alphabet. For polynomially-bounded integer alphabets, Farach’s construction of suffix trees [4] implies a linear-time algorithm for LCS. Babenko and Starikovskaya [5] gave an algorithm for LCS based on suffix arrays. Recently, Charalampopoulos, Kociumaka, Pissis, and Radoszewski [28] gave faster word-RAM algorithms for LCS on compactly represented input strings over a small alphabet. The LCS problem has also been studied in the settings of time-space tradeoffs [22, 26, 60], approximate matching [5, 23, 61,62,63,64,65,66], and dynamic data structures [24, 25, 27].
Booth [6] and Shiloach [7] gave the first linear-time algorithms for the Minimal String Rotation problem. Later, Duval [8] gave a constant-space linear-time algorithm for computing the Lyndon factorization of a string, which can be used to compute the minimal rotation, maximal suffix, and minimal suffix. Duval’s algorithm can also compute the minimal suffix and maximal suffix for every prefix of the input string. Apostolico and Crochemore [67] gave a linear-time algorithm for computing the minimal rotation of every prefix of the input string. Parallel algorithms for Minimal String Rotation were given by Iliopoulos and Smyth [17]. There are data structures [68,69,70] that, given a substring specified by its position and length in the input string, can efficiently answer its minimal suffix, maximal suffix, and minimal rotation. The Longest Lyndon Substring problem can be solved in linear time [8] by simply outputting the longest segment in the Lyndon factorization. There are data structures [24, 71] for dynamically maintaining the longest Lyndon substring under character substitutions.
There are \(O(n \log n)\)-time algorithms for finding all the square substrings of the input string [16, 72, 73]. There are data structures [74] for dynamically maintaining the longest square substring under character substitutions.
The difference cover construction [29, 30] has previously been used in many string algorithms, e.g., [28, 29, 75, 76]. The string synchronizing sets recently introduced by Kempa and Kociumaka [32] have been applied in [28, 32, 77, 78]. The local-consistency idea behind the construction of string synchronizing sets had also appeared in previous work [31, 79, 80].
1.4 Subsequent Works
Some of our results were improved after the conference version of this paper was published. Jin and Nogler [81] gave a quantum algorithm that decides whether the Longest Common Substring of two input strings has length at least d in \(\tilde{O}((n/d)^{2/3}\cdot d^{1/2 + o(1)})\) quantum query and time complexity. Childs, Kothari, Kovacs-Deak, Sundaram, and Wang [82] gave a quantum algorithm for a decision version of the Lexicographically Minimal String Rotation problem with \({\tilde{O}}(n^{1/2})\) quantum query complexity.
1.5 Paper Organization
In Sect. 2, we provide useful definitions and review some quantum primitives which will be used in our algorithms. In Sect. 3, we present our algorithm for Longest Common Substring. In Sect. 4, we present our algorithm for Minimal String Rotation and several related problems. In Sect. 5, we present our algorithm for Longest Square Substring. Finally, we mention several open problems in Sect. 6.
2 Preliminaries
2.1 Notations and Basic Properties of Strings
We define sets \(\mathbb {N}= \{0,1,2,3,\dots \}\) and \(\mathbb {N}^+ = \{1,2,3,\dots \}\). For every positive integer n we introduce the set \([n]= \{1,2,\dots ,n\}\). Given two integers \(i\le j\), we let \([i\mathinner {.\,.}j]= \{i,i+1,\dots ,j\}\) denote the set of integers in the closed interval [i, j]. We define \([i\mathinner {.\,.}j),(i\mathinner {.\,.}j]\), and \((i\mathinner {.\,.}j)\) analogously. For an integer x and an integer set A, let \(x+A\) denote \(\{x+a: a\in A\}\), and let xA denote \(\{x\cdot a: a\in A\}\).
As is standard in the literature, we consider strings over a polynomially-bounded integer alphabet \(\Sigma = [1\mathinner {.\,.}|\Sigma |]\) of size \(|\Sigma |\le n^{O(1)}\). A string \(s \in \Sigma ^*\) of length \(|s|=n\) is a sequence of characters \(s=s[1]s[2]\cdots s[n]\) from the alphabet \(\Sigma \) (we use 1-based indexing). The concatenation of two strings \(s,t\in \Sigma ^*\) is denoted by st. The reversed string of s is denoted by \(s^{R}=s[n]s[n-1]\cdots s[1]\).
Given a string s of length \(|s|=n\), a substring of s is any string of the form \(s[i\mathinner {.\,.}j] = s[i]s[i+1]\cdots s[j]\) for some indices \(1\le i\le j\le n\). For \(i>j\), we define \(s[i\mathinner {.\,.}j]\) to be the empty string \(\varepsilon \). When i, j may be out of bounds, we use the convention \(s[i\mathinner {.\,.}j] = s[\max \{1,i\}\mathinner {.\,.}\min \{n, j\}]\) to simplify notation. We sometimes use \(s[i\mathinner {.\,.}j) = s[i]s[i+1]\cdots s[j-1]\) and \(s(i \mathinner {.\,.}j]=s[i+1]\cdots s[j-1]s[j]\) to denote substrings. A substring \(s[1\mathinner {.\,.}j]\) is called a prefix of s, and a substring \(s[i\mathinner {.\,.}n]\) is called a suffix of s. For two strings s, t, let \(\mathsf {\textrm{lcp}}(s,t)= \max \{j: j\le \min \{|s|,|t|\}, s[1\mathinner {.\,.}j] = t[1\mathinner {.\,.}j] \}\) denote the length of their longest common prefix.
We say string s is lexicographically smaller than string t (denoted \(s \prec t\)) if either s is a proper prefix of t (i.e., \(|s|<|t|\) and \(s=t[1\mathinner {.\,.}|s|]\)), or \(\ell = \mathsf {\textrm{lcp}}(s,t)<\min \{|s|,|t|\}\) and \(s[\ell +1]<t[\ell +1]\). The notations \(\succ , \preceq ,\succeq \) are defined analogously. The following easy-to-prove and well-known fact has been widely used in string data structures and algorithms.
Lemma 2.1
(e.g. [83, Lemma 1]) Given strings \(s_1\preceq s_2\preceq \cdots \preceq s_m\), we have \(\mathsf {\textrm{lcp}}(s_1,s_m) = \min _{1\le i< m} \{ \mathsf {\textrm{lcp}}(s_i,s_{i+1})\}\).
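Lemma 2.1 is easy to check directly; the snippet below (with an illustrative lcp helper of our own) verifies it on a small sorted list:

```python
def lcp(a, b):
    """Length of the longest common prefix of a and b."""
    k = 0
    while k < min(len(a), len(b)) and a[k] == b[k]:
        k += 1
    return k

# Lemma 2.1 on a small sorted list: the lcp of the extreme strings equals
# the minimum of the lcp values of adjacent pairs.
words = sorted(["aab", "aaba", "abab", "abba", "ba"])
assert lcp(words[0], words[-1]) == min(
    lcp(words[i], words[i + 1]) for i in range(len(words) - 1))
```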
For a positive integer \(p\le |s|\), we say p is a period of s if \(s[i] = s[i+p]\) holds for all \(1\le i\le |s|-p\). One can compute all the periods of s by a classical deterministic algorithm in linear time [1]. We refer to the minimal period of s as the period of s, and denote it by \(\mathsf {\textrm{per}}(s)\). If \(\mathsf {\textrm{per}}(s)\le |s|/2\), we say that s is periodic. If \(\mathsf {\textrm{per}}(s)\) does not divide \(|s|\), we say that s is primitive. We will need the following classical results regarding periods of strings for some of our algorithms.
Lemma 2.2
(Weak Periodicity Lemma, [84]) If a string s has periods p and q such that \(p + q \le |s|\), then \(\gcd (p, q)\) is also a period of s.
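The lemma can be checked on a concrete string (the periods helper is our own brute-force implementation of the definition above):

```python
from math import gcd

def periods(s):
    """All periods of s, per the definition above (brute force)."""
    n = len(s)
    return [p for p in range(1, n + 1)
            if all(s[i] == s[i + p] for i in range(n - p))]

# s = ("aab" repeated 4 times) has periods 3 and 6 with 3 + 6 <= |s| = 12,
# so the lemma predicts gcd(3, 6) = 3 is a period -- which it is.
ps = periods("aabaabaabaab")
assert 3 in ps and 6 in ps and gcd(3, 6) in ps
```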
Lemma 2.3
(e.g., [79, 85]) Let s, t be two strings with \(|s|/2 \le |t|\le |s|\), and let \(s[i_1\mathinner {.\,.}i_1+|t|)= s[i_2\mathinner {.\,.}i_2+|t|)= \dots = s[i_m\mathinner {.\,.}i_m+|t|) = t\) be all the occurrences of t in s (where \(i_k<i_{k+1}\)). Then, \(i_1,i_2,\dots ,i_m\) form an arithmetic progression. Moreover, if \(m\ge 3\), then \(\mathsf {\textrm{per}}(t) =i_2-i_1\).
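A small concrete instance of Lemma 2.3 (the min_period helper is our own brute-force implementation of \(\mathsf{\textrm{per}}(\cdot)\)):

```python
def min_period(t):
    """Smallest period of t (brute force)."""
    n = len(t)
    return next(p for p in range(1, n + 1)
                if all(t[i] == t[i + p] for i in range(n - p)))

# Occurrences of t = "abab" in s = "abababab" (note |t| >= |s|/2):
s, t = "abababab", "abab"
occ = [i for i in range(len(s) - len(t) + 1) if s[i:i + len(t)] == t]
assert occ == [0, 2, 4]                   # an arithmetic progression,
assert min_period(t) == occ[1] - occ[0]   # with difference per(t) = 2
```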
We say string s is a (cyclic) rotation of string t, if \(|s|=|t|=n\) and there exists an index \(1\le i \le n\) such that \(s = t[i\mathinner {.\,.}n]t[1\mathinner {.\,.}i-1]\). If string s is primitive and is lexicographically minimal among its cyclic rotations, we call s a Lyndon word. Equivalently, s is a Lyndon word if and only if \(s\preceq t\) for all proper suffixes t of s. For a periodic string s with minimal period \(\mathsf {\textrm{per}}(s)=p\), the Lyndon root of s is defined as the lexicographically minimal rotation of \(s[1\mathinner {.\,.}p]\), which can be computed by a classical deterministic algorithm in linear time (e.g., [6,7,8]).
2.2 Problem Definitions
We give formal definitions of the string problems considered in this paper.
In the first four problems, we only require the algorithm to output the maximum length. The locations of the witness substrings can be found by a binary search.
2.3 Computational Model
We assume the input strings can be accessed in a quantum query model [86, 87], which is standard in the literature on quantum algorithms. More precisely, letting s be an input string of length n, we have access to an oracle \(O_s\) that, for any index \(i\in [n]\) and any \(b\in \Sigma \), performs the unitary mapping \(O_s:\left| i,b\right\rangle \mapsto \left| i,b\oplus s[i]\right\rangle \), where \(\oplus \) denotes the XOR operation on the binary encodings of characters. The oracles can be queried in superposition, and each query has unit cost. Besides the input queries, the algorithm can also apply intermediate unitary operators that are independent of the input oracles. Finally, the query algorithm should return the correct answer with success probability at least 2/3 (which can be boosted to high probability^{Footnote 2} by a majority vote over \(O(\log n)\) repetitions). The query complexity of an algorithm is the number of queries it makes to the input oracles.
In this paper, we are also interested in the time complexity of the quantum algorithms, which counts not only the queries to the input oracles, but also the elementary gates [88] for implementing the unitary operators that are independent of the input. In order to implement the query algorithms in a time-efficient manner, we also need the quantum random access gate, defined as
\( \left| i,b,z_1,\dots ,z_m\right\rangle \mapsto \left| i,z_i,z_1,\dots ,z_{i-1},b,z_{i+1},\dots ,z_m\right\rangle , \)
to access at unit cost the \(i^{\text {th}}\) element from the quantum working memory \(\left| z_1,\dots ,z_m\right\rangle \). Assuming quantum random access, a classical time-T algorithm that uses random access memory can be converted into a quantum subroutine in time O(T), which can be invoked by quantum search primitives such as Grover search. Quantum random access has become a standard assumption in designing time-efficient quantum algorithms (for example, all the time-efficient quantum walk algorithms mentioned in Sect. 1.3 relied on this assumption).
2.4 Basic Quantum Primitives
Grover search (Amplitude amplification) [11, 89]. Let \(f:[n] \rightarrow \{0,1\}\) be a function, where f(i) for each \(i\in [n]\) can be evaluated in time T. There is a quantum algorithm that, with high probability, finds an \(x\in f^{-1}(1)\) or reports that \(f^{-1}(1)\) is empty, in \({\tilde{O}}(\sqrt{n}\cdot T)\) time. Moreover, if it is guaranteed that either \(|f^{-1}(1)| \ge M\) or \(|f^{-1}(1)|=0\) holds, then the algorithm runs in \(\tilde{O}(\sqrt{n/M} \cdot T)\) time.
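To illustrate the quadratic speedup numerically, here is a small classical statevector simulation of Grover search (a sketch of ours; it tracks the amplitudes directly rather than implementing a quantum circuit):

```python
from math import sqrt, pi, floor

def grover_success_prob(n_items, marked, iterations):
    """Classical statevector simulation of Grover search over n_items elements."""
    psi = [1 / sqrt(n_items)] * n_items       # uniform superposition
    for _ in range(iterations):
        psi[marked] *= -1                     # oracle: flip the sign of the marked item
        mean = sum(psi) / n_items
        psi = [2 * mean - a for a in psi]     # diffusion: reflect about the mean
    return psi[marked] ** 2                   # probability of measuring `marked`

n = 16
k = floor(pi / 4 * sqrt(n))                   # ~ (pi/4) * sqrt(n) iterations suffice
assert grover_success_prob(n, marked=3, iterations=k) > 0.9
```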
Quantum minimum finding [15]. Let \(x_1,\dots ,x_n\) be n items with a total order, where each pair of \(x_i\) and \(x_j\) can be compared in time T. There is a quantum algorithm that, with high probability, finds the minimum item among \(x_1,\dots ,x_n\) in \({\tilde{O}}(\sqrt{n} \cdot T)\) time.
Remark 2.4
If the algorithm for evaluating f(i) (or for comparing \(x_i,x_j\)) has some small probability of outputting the wrong answer, we can first boost it to high success probability, and then Grover search (or quantum minimum finding) still works, since quantum computational errors only accumulate linearly. It is possible to improve the log-factors in the query complexity of quantum search when the input has errors [90], but in this paper we do not seek to optimize the log-factors.
Lemma 2.5
(Computing LCP) Given two strings s, t of lengths \(|s|,|t|\le n\), there is a quantum algorithm that computes \(\mathsf {\textrm{lcp}}(s,t)\) and decides whether \(s\preceq t\), in \(\tilde{O}(\sqrt{n})\) time.
Proof
Note that we can use Grover search to decide whether two strings are identical in \({\tilde{O}}(\sqrt{n})\) time. We can thus compute \(\mathsf {\textrm{lcp}}(s,t)\) by a binary search over the length of the prefix, since prefix equality is a monotone predicate in the prefix length. After that, we can compare the lexicographical order of s and t by comparing the characters at the position immediately after the longest common prefix. \(\square \)
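The structure of this proof can be sketched classically as follows, where `prefixes_equal` is a stand-in for the \({\tilde{O}}(\sqrt{n})\)-time Grover equality test (the helper names are ours):

```python
def prefixes_equal(s, t, j):
    """Stand-in for the Grover check that the length-j prefixes of s and t agree.
    A quantum algorithm performs this test with O~(sqrt(j)) queries."""
    return s[:j] == t[:j]

def lcp_binary_search(s, t):
    """Binary search for the largest j with equal prefixes (monotone predicate)."""
    lo, hi = 0, min(len(s), len(t))
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if prefixes_equal(s, t, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

def lex_leq(s, t):
    """Decide s <= t lexicographically from the LCP, as in Lemma 2.5."""
    ell = lcp_binary_search(s, t)
    if ell == min(len(s), len(t)):
        return len(s) <= len(t)       # one string is a prefix of the other
    return s[ell] < t[ell]            # compare the first mismatching character

assert lcp_binary_search("abcabd", "abcxyz") == 3
assert lex_leq("abca", "abcab") and not lex_leq("abd", "abcab")
```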
Given a string s and positions g and h such that \(s[g] = s[h]\), we often use Lemma 2.5 to "extend" these common characters to longer identical substrings, up to some length bound d (i.e., to find the largest positive integer \(j\le d\) such that \(s[g\mathinner {.\,.}g+j) = s[h\mathinner {.\,.}h+j)\)). We will often refer to this process (somewhat informally) as "extending strings via Grover search."
As a final useful subroutine, we appeal to the result of Ramesh and Vinay [9], who combined Grover search with the deterministic sampling technique of Vishkin [10], and obtained a quantum algorithm for Exact String Matching.
Theorem 2.6
(Quantum Exact String Matching [9]) We can solve the Exact String Matching problem with a quantum algorithm on input strings s, t of length at most n using \(\tilde{O}(\sqrt{n})\) query complexity and time complexity.
2.5 Quantum Walks
We use the quantum walk framework developed by Magniez, Nayak, Roland, and Santha [33], and apply it to Johnson graphs, defined as follows.
The Johnson graph J(m, r) has \(\left( {\begin{array}{c}m\\ r\end{array}}\right) \) vertices, each being a size-r subset of [m], where two vertices \(A,B\in \left( {\begin{array}{c}[m]\\ r\end{array}}\right) \) are connected by an edge if and only if \(|A\cap B| = r-1\), or equivalently there exist \(a\in A, b\in [m]\setminus A\) such that \(B = (A \setminus \{a\}) \cup \{b\}\). Depending on the application, we usually identify a special subset of the vertices \(V_{\textsf{marked}}\subseteq \left( {\begin{array}{c}[m]\\ r\end{array}}\right) \) as being marked. The quantum walk is analogous to a random walk on the Johnson graph attempting to find a marked vertex, but provides a quantum speedup compared to the classical random walk. The vertices in the Johnson graph are also called the states of the walk.
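A small illustration (ours) of the neighborhood structure of J(m, r); each vertex has degree \(r(m-r)\), one neighbor per choice of the swapped-out and swapped-in elements:

```python
from itertools import combinations

def johnson_neighbors(A, m):
    """Neighbors of vertex A in the Johnson graph J(m, r): swap one element of A
    for one element outside A."""
    return [frozenset((A - {a}) | {b}) for a in A for b in range(1, m + 1) if b not in A]

m, r = 6, 3
A = frozenset({1, 2, 3})
nbrs = johnson_neighbors(A, m)
assert len(nbrs) == r * (m - r)                       # degree of J(m, r)
assert all(len(A & B) == r - 1 for B in nbrs)         # |A ∩ B| = r - 1
assert len(list(combinations(range(1, m + 1), r))) == 20   # C(6, 3) vertices in total
```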
In the quantum walk algorithm, each vertex \(K\in \left( {\begin{array}{c}[m]\\ r\end{array}}\right) \) is associated with a data structure D(K). The setup cost \(\textsf{S}\) is the cost to set up the data structure D(K) for any \(K\in \left( {\begin{array}{c}[m]\\ r\end{array}}\right) \), where the cost could be measured in query complexity or time complexity. The checking cost \(\textsf{C}\) is the cost to check whether K is a marked vertex, given the data structure D(K). The update cost \(\textsf{U}\) is the cost of updating the data structure from D(K) to \(D(K')\), where \(K' = (K\setminus \{a\})\cup \{b\}\) is an adjacent vertex specified by \(a\in K, b\in [m]\setminus K\). The MNRS quantum walk algorithm can be summarized as follows.
Theorem 2.7
(MNRS Quantum Walk [33]) Suppose \(|V_\mathsf{{marked}}|/\left( {\begin{array}{c}m\\ r\end{array}}\right) \ge \varepsilon \) whenever \(V_\mathsf{{marked}}\) is nonempty. Then there is a quantum algorithm that with high probability determines if \(V_\mathsf{{marked}}\) is empty or finds a marked vertex, with cost of order \({\textsf{S}} + \frac{1}{\sqrt{\varepsilon }} ( \sqrt{r}\cdot {\textsf{U}} + {\textsf{C}} )\).
Readers unfamiliar with the quantum walk approach are referred to [91, Section 8.3.2] for a quick application of this theorem to solve the Element Distinctness problem using \(O(n^{2/3})\) quantum queries. This algorithm can be implemented in \(\tilde{O}(n^{2/3})\) time by carefully designing the data structures to support time-efficient insertion, deletion, and searching [13, Section 6.2]. We elaborate on the issue of time efficiency when we apply quantum walks in our algorithm in Sect. 3.2.
3 Longest Common Substring
In this section, we prove the following theorem.
Theorem 3.1
The Longest Common Substring (LCS) problem can be solved by a quantum algorithm with \({\tilde{O}}(n^{2/3})\) query complexity and time complexity.
In Sect. 3.1, we will give an outline of our quantum walk algorithm based on the notion of good anchor sets, and show that this algorithm achieves good query complexity. Then in Sect. 3.2, we describe how to use data structures to implement the quantum walk algorithm in a timeefficient manner. Finally, in Sect. 3.3, we present the construction of good anchor sets used by the algorithm. We have organized the arguments so that Sects. 3.2 and 3.3 are independent of one another, and can be read separately.
3.1 Anchoring Via Quantum Walks
As mentioned in Sect. 1.2.1, our algorithm for LCS is based on the anchoring technique which previously appeared in classical LCS algorithms. Here, we will implement this technique using the MNRS quantum walk framework (Sect. 2.5).
Notations and input assumptions. To simplify the presentation, we concatenate the two input strings s, t into \(S:=s\$ t\), where \(\$\) is a delimiter symbol that does not appear in the input strings, and let \(n = |S| =|s|+1+|t|\). So \(s[i]=S[i]\) for all \(i\in [1\mathinner {.\,.}|s|]\), and \(t[j] = S[|s|+1+j]\) for all \(j\in [1\mathinner {.\,.}|t|]\).
We will only solve the decision version of LCS: given a length threshold d, determine whether s and t have a common substring of length d. The algorithm for computing the length of the longest common substring then follows from a binary search over the threshold d. We assume \(d\ge 100\) to avoid corner cases in later analysis; for smaller d, the problem can be solved in \({\tilde{O}}(n^{2/3}d^{1/2}) = {\tilde{O}}(n^{2/3})\) time by reducing to the (bipartite version of) element distinctness problem [12, Section 3.1.1] and applying Ambainis’ algorithm [13] (see Sect. 1.2.1).
Anchoring. We begin by introducing the notion of good anchor sets.
Definition 3.2
(Good anchor sets) For input strings s, t and threshold length d, we call \(C\subseteq [1\mathinner {.\,.}n]\) a good anchor set if the following holds: if the longest common substring of s and t has length at least d, then there exist positions \(i \in [1\mathinner {.\,.}|s|-d+1],j\in [1\mathinner {.\,.}|t|-d+1]\) and a shift \(h\in [0\mathinner {.\,.}d)\), such that \(s[i\mathinner {.\,.}i+d)=t[j\mathinner {.\,.}j+d)\), and \(i+h,\, |s|+1+j+h \in C\).
In this definition, the anchor set C is allowed to depend on s and t. If \(C = \{C(1),C(2),\dots ,C(m)\}\) and there is a (quantum) algorithm that, given any index \(1\le j\le m\), computes the element C(j) in T(n, d) time, then we say C is T(n, d)-(quantum-)time constructible. The elements \(C(1),C(2),\dots ,C(m)\) are allowed to contain duplicates (i.e., C could be a multiset), and are not necessarily sorted in any particular order.
The set \([1\mathinner {.\,.}n]\) is trivially a good anchor set, but there are constructions of much smaller size. As a concrete example, one can directly construct good anchor sets using difference covers.
Definition 3.3
(Difference cover [29, 30]) A set \(D\subseteq \mathbb {N}^+\) is called a d-cover, if for every \(i,j\in \mathbb {N}^+\), there exists an integer \(h(i,j) \in [0\mathinner {.\,.}d)\) such that \(i+h(i,j),j+h(i,j) \in D\).
The following construction of d-covers has optimal size (up to a constant factor).
Lemma 3.4
(Construction of d-covers [29, 30]) For every positive integer \(d\ge 1\), there is a d-cover D such that \(D\cap [n]\) contains \(O(n/\sqrt{d})\) elements. Moreover, given an integer \(i\ge 1\), one can compute the \(i^{\text {th}}\) smallest element of \(D\cap [n]\) in \({\tilde{O}} (1)\) time.
Here we omit the proof of Lemma 3.4, as a more general version (Lemma 3.19) will be proved later in Sect. 3.3.1. Using difference covers, we immediately have the following simple construction of good anchor sets.
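To make the construction concrete, the following sketch (ours) builds a d-cover for the special case where d is a perfect square, using one standard residue-based construction, and verifies the covering property by brute force; the general case requires the more careful construction behind Lemma 3.4:

```python
import math

def d_cover(d, n):
    """A d-cover restricted to [1, n], assuming d is a perfect square.
    Residues mod d: {1, ..., r-1} plus all multiples of r, where r = sqrt(d)."""
    r = math.isqrt(d)
    assert r * r == d, "this simplified construction assumes d is a perfect square"
    residues = set(range(1, r)) | {k * r for k in range(r)}   # O(sqrt(d)) residues
    return {i for i in range(1, n + 1) if i % d in residues}

d, n = 16, 200
D = d_cover(d, n)
# every pair (i, j) admits a common shift h in [0, d) landing both i+h and j+h in D
for i in range(1, 50):
    for j in range(1, 50):
        assert any(i + h in D and j + h in D for h in range(d))
assert len(D) <= 2 * n // math.isqrt(d) + 2                   # size O(n / sqrt(d))
```

The covering argument: writing \((j-i) \bmod d = qr+s\) with \(0\le s<r\), the shift h that moves i onto residue \(r-s\) moves j onto the multiple-of-r residue \((q+1)r \bmod d\).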
Corollary 3.5
(A simple good anchor set) There is an \(\tilde{O}(1)\)-time constructible good anchor set C of size \(m= O(n/\sqrt{d})\).
Proof
Let D be the d-cover from Lemma 3.4. Then, for input strings s, t and threshold length d, it immediately follows from the definition that \(C:=\big (D\cap [|s|]\big ) \cup \big ((|s|+1)+(D\cap [|t|])\big )\) is a good anchor set. \(\square \)
Note that the construction in Corollary 3.5 is deterministic, and oblivious to the content of the input strings s and t. The following lemma (which will be proved in Sect. 3.3) states that we can achieve a smaller size by a probabilistic non-oblivious construction that takes longer time to compute.
Lemma 3.6
(A smaller good anchor set) There is an \(\tilde{O}(\sqrt{d})\)-quantum-time constructible anchor set C of size \(m= O(n/d^{3/4})\). This set C depends on the input strings s, t and on \(O(\log n)\) many random coins, and is a good anchor set with probability at least 2/3 over the random coins.
Let \(C=\{C(1),\dots ,C(m)\} \subseteq [n]\) be a good anchor set of size \(|C|=m\). For every anchor C(k) indexed by \(k\in [m]\), we associate it with a pair of strings (P(k), Q(k)), where
\( P(k) = S[C(k)\mathinner {.\,.}C(k)+d-1], \qquad Q(k) = \big (S[C(k)-d+1\mathinner {.\,.}C(k)-1]\big )^{R}, \)
are substrings (or reversed substrings) of S obtained by extending from the anchor C(k) to the right or reversely to the left (here \(x^R\) denotes the reversal of string x). The length of P(k) is at most^{Footnote 3} d, and the length of Q(k) is at most \(d-1\). We say the string pair (P(k), Q(k)) is red if \(C(k)\in [1\mathinner {.\,.}|s|]\), or blue if \(C(k)\in [|s|+1\mathinner {.\,.}n]\). We also say \(k\in [m]\) is a red index or a blue index, depending on the color of the string pair (P(k), Q(k)). Then, from the definition of good anchor sets, we immediately have the following simple observation.
Proposition 3.7
(Witness Pair) The longest common substring of s and t has length at least d, if and only if there exist a red string pair (P(k), Q(k)) and a blue string pair \((P(k'),Q(k'))\) where \(k,k'\in [m]\), such that \(\mathsf {\textrm{lcp}}(P(k),P(k')) + \mathsf {\textrm{lcp}}(Q(k),Q(k')) \ge d\). In such case, \((k,k')\) is called a witness pair.
Proof
Suppose s and t have an LCS of length at least d. Then the property of the good anchor set C implies the existence of a shift \(h\in [0\mathinner {.\,.}d)\) and a length-d common substring \(s[i\mathinner {.\,.}i+d)=t[j\mathinner {.\,.}j+d)\) such that \(i+h=C(k)\) and \(|s|+1+j+h = C(k')\) for some \(k,k'\in [m]\). Then, we must have \(\mathsf {\textrm{lcp}}(P(k),P(k'))\ge d-h\) and \(\mathsf {\textrm{lcp}}(Q(k),Q(k')) \ge h\), implying that \((k,k')\) is a witness pair.
Conversely, the existence of a witness pair immediately implies a common substring of length at least d. \(\square \)
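Proposition 3.7 can be checked by brute force on small inputs. The sketch below (ours) uses the trivial anchor set \(C=[1\mathinner {.\,.}n]\), 0-indexed positions, and `#` as the delimiter:

```python
def lcp(a, b):
    j = 0
    while j < min(len(a), len(b)) and a[j] == b[j]:
        j += 1
    return j

def has_common_substring(s, t, d):
    """Check LCS(s, t) >= d via witness pairs (Proposition 3.7), using the
    trivial anchor set of all positions (0-indexed)."""
    S = s + "#" + t
    def P(k): return S[k:k + d]                       # extend right from the anchor
    def Q(k): return S[max(0, k - d + 1):k][::-1]     # extend left, reversed
    red = range(len(s))                               # anchors inside s
    blue = range(len(s) + 1, len(S))                  # anchors inside t
    return any(lcp(P(k), P(kp)) + lcp(Q(k), Q(kp)) >= d
               for k in red for kp in blue)

assert has_common_substring("xabcdey", "zabcdw", 4)       # common substring "abcd"
assert not has_common_substring("xabcdey", "zabcdw", 5)
```

Since the delimiter occurs only once in S, a matched region can never cross it, so a witness pair always certifies a genuine common substring.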
Remark 3.8
The algorithm we are going to describe can be easily adapted to the Longest Repeated Substring problem: we only have one input string \(S[1\mathinner {.\,.}n]\), and we drop the red-blue constraint in the definition of witness pairs in Proposition 3.7.
Now we shall describe our quantum walk algorithm that solves the decision version of LCS by searching for a witness pair.
Definition of the Johnson graph. Recall that \(C=\{C(1),\dots ,C(m)\}\) is a good anchor set of size \(|C|=m\). We perform a quantum walk on the Johnson graph with vertex set \(\left( {\begin{array}{c}[m]\\ r\end{array}}\right) \), where r is a parameter to be determined later. A vertex \(K=\{k_{1},k_2,\dots ,k_r\}\subseteq [m]\) in the Johnson graph is called a marked vertex if and only if \(\{k_{1},k_2,\dots ,k_r\}\) contains a witness pair (Proposition 3.7). If s and t have a common substring of length d, then at least a \(\left( {\begin{array}{c}m-2\\ r-2\end{array}}\right) / \left( {\begin{array}{c}m\\ r\end{array}}\right) = \Omega (r^2/m^2)\) fraction of the vertices are marked. Otherwise, there are no marked vertices.
Associated data. In the quantum walk algorithm, each state \(K=\{k_1,\dots ,k_r\}\subseteq [m]\) is associated with the following data.
- The indices \(k_1,\dots ,k_r\) themselves.
- The corresponding anchors \(C(k_1),\dots ,C(k_r) \in [n]\).
- An array \((k^P_1,\dots ,k^P_r)\), which is a permutation of \(k_1,\dots ,k_r\), such that \(P(k_i^P)\preceq P(k_{i+1}^P)\) for all \(1\le i<r\).
- The LCP array \(h_1^P,\dots ,h^P_{r-1}\), where \(h_i^P = \mathsf {\textrm{lcp}}(P(k_i^P),P(k_{i+1}^P))\).
- An array \((k^{Q}_1,\dots ,k^{Q}_r)\), which is a permutation of \(k_1,\dots ,k_r\), such that \(Q(k_i^{Q})\preceq Q(k_{i+1}^{Q})\) for all \(1\le i<r\).
- The LCP array \(h_1^Q,\dots ,h_{r-1}^Q\), where \(h_i^Q = \mathsf {\textrm{lcp}}(Q(k_i^{Q}),Q(k_{i+1}^{Q}))\).
Note that we stored the lexicographical orderings of the strings \(P(k_1),\dots ,P(k_r)\) and \(Q(k_1),\dots ,Q(k_r)\) (for identical substrings, we break ties by comparing the indices themselves), as well as the LCP arrays which include the length of the longest common prefix of every pair of lexicographically adjacent substrings. By Lemma 2.1, these arrays together uniquely determine the values of \(\mathsf {\textrm{lcp}}\big (P(k_i),P(k_j)\big )\) and \(\mathsf {\textrm{lcp}}\big (Q(k_i),Q(k_j)\big )\), for every pair of \(i,j\in [r]\).^{Footnote 4}
In the checking step of the quantum walk algorithm, we decide whether the state is marked by searching for a witness pair (Proposition 3.7) in \(\{k_1,\dots ,k_r\}\). Note that the contents of the involved strings \(\{P(k_i)\}_{i\in [r]}\), \(\{Q(k_i)\}_{i\in [r]}\) are no longer needed in order to solve this task, as long as we already know their lexicographical orderings and the LCP arrays. This task is termed the Two String Families LCP problem in the literature [23], formalized as below.
We will show how to solve this task time-efficiently in Sect. 3.2. For now, we only consider the query complexity of the algorithm, and we have the following simple observation, due to the fact that our associated information already uniquely determines the LCP value of every pair.
Proposition 3.9
(Query complexity of checking step is zero) Using the associated data defined above, we can determine whether \(\{k_1,\dots ,k_r\}\subseteq [m]\) is a marked state, without making any additional queries to the input.
Now, we consider the cost of maintaining the associated data when the subset \(\{k_1,\dots ,k_r\}\) undergoes insertion and deletion during the quantum walk algorithm.
Proposition 3.10
(Update cost) Assume the anchor set C is Ttime constructible. Then, each update step of the quantum walk algorithm has query complexity \( {\textsf{U}} = {\tilde{O}}(\sqrt{d} + T)\).
Proof
Let us consider how to update the associated data when a new index k is being inserted into the subset \(\{k_1,\dots ,k_r\}\). The deletion process is simply the reverse operation of insertion.
The insertion procedure can be summarized by the pseudocode in Algorithm 1. First, we compute and store C(k) in time T. Then we use a binary search to find the correct place to insert k into the lexicographical orderings \((k_1^P,\dots ,k_r^P)\) and \((k_1^Q,\dots ,k_r^Q)\). Since the involved substrings have length at most d, each lexicographical comparison required by this binary search can be implemented in \({\tilde{O}}(\sqrt{d})\) time by Lemma 2.5. After inserting k into the list, we update the LCP array by computing its LCP values \(h_{\textsf{pre}},h_{\textsf{suc}}\) with the two neighboring substrings, and removing (by "uncomputing") the LCP value \(h_\mathsf{{old}}\) between these neighbors, which were adjacent before the insertion, in \(\tilde{O}(\sqrt{d})\) time (Lemma 2.5). \(\square \)
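A classical analogue of this insertion procedure (a sketch of ours, with plain Python lists standing in for the quantum-walk data structures; in the real algorithm each comparison and each lcp computation is performed via Lemma 2.5):

```python
import bisect

def lcp(a, b):
    j = 0
    while j < min(len(a), len(b)) and a[j] == b[j]:
        j += 1
    return j

def insert(order, lcps, x):
    """Insert string x into the sorted list `order`, maintaining the LCP array
    `lcps`, where lcps[i] = lcp(order[i], order[i+1])."""
    i = bisect.bisect_left(order, x)               # position found by binary search
    order.insert(i, x)
    if 0 < i < len(order) - 1:
        del lcps[i - 1]                            # "uncompute" the old adjacent LCP
    if i > 0:
        lcps.insert(i - 1, lcp(order[i - 1], x))   # LCP with the predecessor
    if i < len(order) - 1:
        lcps.insert(i, lcp(x, order[i + 1]))       # LCP with the successor

order, lcps = [], []
for w in ["abcab", "abd", "abca", "abcabd"]:
    insert(order, lcps, w)
assert order == ["abca", "abcab", "abcabd", "abd"]
assert lcps == [4, 5, 2]
```

Deletion is the reverse operation: remove the two LCP values adjacent to the deleted string and recompute the single LCP value between its former neighbors.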
Proposition 3.11
(Setup cost) The setup step of the quantum walk has query complexity \({\textsf{S}} = {\tilde{O}}(r\cdot (\sqrt{d}+T))\).
Proof
We can set up the initial state for the quantum walk by simply performing r insertions successively using Proposition 3.10. \(\square \)
Remark 3.12
Observe that, in the insertion procedure in Algorithm 1, Lines 2 and 4–7 can also be implemented in time complexity \({\tilde{O}}(\sqrt{d}+T)\). The time-consuming steps in Algorithm 1 are those that actually modify the data. For example, in Lines 8 and 9, the insertion causes some elements in the array to shift to the right, and would take O(r) time if implemented naively. Later in Sect. 3.2 we will describe appropriate data structures to implement these steps time-efficiently.
Finally, by Theorem 2.7, the query complexity of our quantum walk algorithm (omitting \(\mathrm{poly\,log}(n)\) factors) is
\( {\textsf{S}} + \frac{1}{\sqrt{\varepsilon }}\big (\sqrt{r}\cdot {\textsf{U}} + {\textsf{C}}\big ) = O\Big (r\cdot (\sqrt{d}+T) + \frac{m}{r}\cdot \sqrt{r}\cdot (\sqrt{d}+T)\Big ) = O\big (m^{2/3}\cdot (\sqrt{d}+T)\big ) \)    (1)
by choosing \(r = m^{2/3}\). The construction of good anchor sets from Corollary 3.5 has \(m = O(n/\sqrt{d})\) and \(T={\tilde{O}} (1)\), achieving query complexity \({\tilde{O}}(n^{2/3}\cdot d^{1/6})\). The improved construction from Lemma 3.6 has \(m = O(n/d^{3/4})\) and \(T ={\tilde{O}}(\sqrt{d})\), achieving query complexity \({\tilde{O}}(n^{2/3})\).
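The choice \(r = m^{2/3}\) balances the setup term r against the walk term \(m/\sqrt{r}\) (both scaled by \(\sqrt{d}+T\)); a quick numeric check (ours):

```python
m = 10 ** 6
r = round(m ** (2 / 3))        # r = m^{2/3}
setup_term = r                 # S ~ r * (sqrt(d) + T)
walk_term = m / r ** 0.5       # (1/sqrt(eps)) * sqrt(r) * U ~ (m/sqrt(r)) * (sqrt(d) + T)
assert r == 10000 and setup_term == walk_term == 10000   # the two terms coincide
```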
3.2 TimeEfficient Implementation
In this section, we will show how to implement the \({\tilde{O}}(n^{2/3})\)query quantum walk algorithm from Sect. 3.1 in time complexity \(\tilde{O}(n^{2/3})\).
3.2.1 Overview
Recall that our algorithm described in Sect. 3.1 for input strings s, t and threshold length d performs a quantum walk on the Johnson graph \(\left( {\begin{array}{c}[m]\\ r\end{array}}\right) \). In this section, we have to measure the quantum walk costs \(\textsf{S},\textsf{C},\textsf{U}\) in terms of the time complexity instead of query complexity. Inspecting Equation 1, we observe that the quantum walk algorithm can achieve \({\tilde{O}}(n^{2/3})\) time complexity, as long as we can implement the setup, checking and update steps with time complexities \({\textsf{S}} = {\tilde{O}}(r(\sqrt{d}+T)), {\textsf{C}} = {\tilde{O}}(\sqrt{rd})\), and \({\textsf{U}} = {\tilde{O}}(\sqrt{d}+T) \).
As mentioned in Sect. 3.1, there are two parts of the described quantum walk algorithm that are time-consuming:
- Maintaining the arrays of associated data under insertions and deletions (Remark 3.12).
- Solving the Two String Families LCP problem in the checking step.
Now we give an overview of how we address these two problems.
Dynamic arrays under insertions and deletions. A natural solution for speeding up the insertions and deletions is to maintain the arrays using appropriate data structures that support the required operations in \({\tilde{O}}(1)\) time. This "quantum walk plus data structures" framework was first used in Ambainis' element distinctness algorithm [13], and has been applied to many time-efficient quantum walk algorithms (see the discussion in Sect. 1.3). However, as noticed by Ambainis [13, Section 6.2], such data structures have to satisfy the following requirements in order to be applicable in quantum walk algorithms.
1. The data structure needs to be history-independent, that is, the representation of the data structure in memory should only depend on the set of elements stored (and the random coins used) by the data structure, not on the sequence of operations leading to this set of elements.
2. The data structure should guarantee worst-case time complexity (with high probability over the random coins) per operation.
The first requirement guarantees that each vertex of the Johnson graph corresponds to a unique quantum state, which is necessary since having multiple possible states would destroy the interference during the quantum walk algorithm. This requirement rules out many types of self-balancing binary search trees^{Footnote 5} such as the AVL tree and the red-black tree.
The second requirement rules out data structures with amortized or expected running time, which may take a very long time on some of the operations. The reason is that, during the quantum algorithm, each operation is actually applied to a superposition of many instances of the data structure, so we would like the time complexity of an operation to have a fixed upper bound that is independent of the particular instance being operated on.
Ambainis [13] designed a data structure satisfying both requirements based on hash tables and skip lists, which maintains a sorted list of items, and supports insertions, deletions, and searching in \({\tilde{O}}(1)\) time with high probability. Buhrman, Loff, Patro, and Speelman [55] modified this data structure to also support indexing queries, which ask for the \(k^{\text {th}}\) item in the current list (see Lemma 3.14 below). Using this data structure to maintain the arrays in our quantum walk algorithm, we can implement the update steps and the setup steps time-efficiently.
Dynamic Two String Families LCP. The checking step of our quantum walk algorithm (Proposition 3.9) requires solving a Two String Families LCP instance with r string pairs of lengths bounded by d. We will not try to solve this problem from scratch for each instance, since it is not clear how to solve it significantly faster than the \({\tilde{O}}(r)\)-time classical algorithm [23, Lemma 3] even using quantum algorithms. Instead, we dynamically maintain the solution using a data structure which efficiently handles each update step during the quantum walk, where we insert one string pair (P, Q) into (and remove one from) the current Two String Families LCP instance. As mentioned in Sect. 1.2.1, the classical data structure for this task given by Charalampopoulos, Gawrychowski, and Pokorski [27] is not applicable here, since it violates both requirements mentioned above: it maintains a heavy-light decomposition of the compact tries of the input strings, and rebuilds them from time to time to ensure amortized \(\mathrm{poly\,log}(n)\) time complexity. It is not clear how to implement this strategy in a history-independent way and with worst-case time complexity per operation.
Instead, we will design a different data structure that satisfies the history-independence and worst-case update time requirements, and can solve the Two String Families LCP problem on the maintained instance in \({\tilde{O}}(\sqrt{rd})\) quantum time. This time complexity is much worse than the \(\mathrm{poly\,log}(n)\) time achieved by the classical data structure of [27], but is sufficient for our purpose. As mentioned in Sect. 1.2.1, one challenge is the lack of a comparison-based data structure for 2D range query that also satisfies the two requirements above. We remark that there exist comparison-based data structures with history-independence but only with expected time complexity (e.g., [92]). There also exist folklore data structures for integer coordinates that have history-independence and worst-case time complexity (e.g., Lemma 3.15). For the easier problem of 1-dimensional range query, there exist folklore data structures (e.g., Lemma 3.14) that satisfy all three requirements. To get around this issue, we will use a sampling procedure and a version of the balls-and-bins argument, which can effectively convert the involved non-integer coordinates into integer coordinates. Then, we are able to apply 2D range query data structures over integer coordinates. Details will be given in Sect. 3.2.3.
3.2.2 Basic Data Structures
In this section, we will review several existing constructions of classical history-independent data structures.
Let D be a classical data structure using \({\tilde{O}}(1)\) many random coins \({\textsf{r}}\) that maintains a dynamically changing data set S. We say D is history-independent if for each possible S and \({\textsf{r}}\), the data structure has a unique representation \(D(S,{\textsf{r}})\) in the memory. Furthermore, we say D has worst-case update time O(T) with high probability, if for every possible S and update operation \(S\rightarrow S'\), with high probability over \({\textsf{r}}\), the time complexity to update from \(D(S,{\textsf{r}})\) to \(D(S',{\textsf{r}})\) is O(T). Similarly, we can define worst-case query time with high probability.
Since our quantum walk algorithm is over the Johnson graph \(\left( {\begin{array}{c}[m]\\ r\end{array}}\right) \), for consistency we will use r to denote the size of the data structure instances in the following statements.
Hash tables. We use hash tables to implement efficient lookup operations without using too much memory.
Lemma 3.13
(Hash tables) There is a history-independent data structure of size \({\tilde{O}}(r)\) that maintains a set of at most r key-value pairs \(\{({\textsf{key}}_1,{\textsf{value}}_1),({\textsf{key}}_2,{\textsf{value}}_2),\dots , ({\textsf{key}}_r,{\textsf{value}}_r)\}\), where the \({\textsf{key}}_i\)'s are distinct integers from [m], and supports the following operations in worst-case \({\tilde{O}}(1)\) time with high probability:
- Lookup: Given a \({\textsf{key}}\in [m]\), find the \(\textsf{value}\) corresponding to \(\textsf{key}\) (or report that \(\textsf{key}\) is not present in the set).
- Insertion: Insert a key-value pair into the set.
- Deletion: Delete a key-value pair from the set.
Proof
(Sketch) The construction is similar to [13, Section 6.2]. The hash table has r buckets, each with the capacity for storing \(O(\log m)\) many key-value pairs. A pair \((\textsf{key},\textsf{value})\) is stored in the \((h({\textsf{key}}))^{\text {th}}\) bucket, and the pairs inside each bucket are sorted in increasing order of keys. If some buckets overflow, we collect all the leftover pairs into a separate buffer of size r and store them in sorted order. This ensures that any set of r key-value pairs has a unique representation in the memory. Moreover, each basic operation can be implemented in \(\mathrm{poly\,log}(m)\) time, unless there is an overflow. Using an \(O(\log m)\)-wise independent hash function \(h:[m] \rightarrow [r]\), for any possible r-subset of keys, with high probability none of the buckets overflow.^{Footnote 6}\(\square \)
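The key point is that the memory layout is a function of the stored set alone. A toy illustration (ours; the real construction uses an \(O(\log m)\)-wise independent hash family and an overflow buffer, as in the proof sketch):

```python
import hashlib

def bucket(key, r):
    """Deterministic stand-in for the O(log m)-wise independent hash h: [m] -> [r]."""
    return int(hashlib.sha256(str(key).encode()).hexdigest(), 16) % r

def table_layout(pairs, r):
    """Canonical memory layout: r buckets, pairs sorted by key inside each bucket.
    The layout depends only on the stored set, not on the insertion order."""
    buckets = [[] for _ in range(r)]
    for key, value in pairs:
        buckets[bucket(key, r)].append((key, value))
    return [sorted(b) for b in buckets]

items = [(17, "a"), (3, "b"), (42, "c"), (8, "d")]
reordered = [items[2], items[0], items[3], items[1]]
assert table_layout(items, 4) == table_layout(reordered, 4)   # history-independence
```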
Dynamic arrays. We will need a dynamic array that supports indexing, insertion, deletion, and some other operations.
The skip list [93] is a probabilistic data structure that is usually used as an alternative to balanced trees, and it satisfies the history-independence property. Ambainis' quantum Element Distinctness algorithm [13] used the skip list to maintain a sorted array, supporting insertions, deletions, and binary search. In order to apply the skip list in the quantum walk, a crucial adaptation in Ambainis' construction is to show that the random choices made by the skip list can be simulated using \(O(\log n)\)-wise independent functions [13, Section 6.2], which take only \(\mathrm{poly\,log}(n)\) random coins to sample. In the recent quantum fine-grained reduction result by Buhrman, Loff, Patro, and Speelman [55, Section 3.2], they used a more powerful version of skip lists that supports efficient indexing. We will use this version of skip lists with a slight extension.
Lemma 3.14
(Dynamic arrays) There is a history-independent data structure of size \({\tilde{O}}(r)\) that maintains an array of items \(({\textsf{key}}_1,{\textsf{value}}_1),({\textsf{key}}_2,{\textsf{value}}_2),\dots ,({\textsf{key}}_r, {\textsf{value}}_r)\) with distinct keys (note that neither the keys nor the values are necessarily sorted in increasing order), and supports the following operations with worst-case \({\tilde{O}}(1)\) time complexity and high success probability:
- Indexing: Given an index \(1\le i\le r \), return the \(i^{\text {th}}\) item \(({\textsf{key}}_i,{\textsf{value}}_i)\).
- Insertion: Given an index \(1\le i \le r+1\) and a new item, insert it into the array between the \((i-1)^{\text {st}}\) item and the \(i^{\text {th}}\) item (shifting later items to the right).
- Deletion: Given an index \(1\le i\le r \), delete the \(i^{\text {th}}\) item from the array (shifting later items to the left).
- Location: Given a \(\textsf{key}\), return its position i in the array (i.e., \({\textsf{key}}_i = \textsf{key}\)).
- Range-minimum query: Given \(1\le a\le b\le r\), return \(\min _{a\le i\le b}\{{\textsf{value}}_i\}\).
Proof
(Sketch) We will use (a slightly modified version of) the data structure described in [55, Section 3.2], which extends the construction of [13, Section 6.2] to support insertion, deletion, and indexing. Their construction is a (bidirectional) skip list of the items, where a pointer (a "skip") from an item \((\textsf{key},\textsf{value})\) to another item \((\textsf{key}',\textsf{value}')\) is stored in a hash table as a key-value pair \((\textsf{key},\textsf{key}')\). To support efficient indexing, for each pointer they also store the distance of this skip, which is used during an indexing query to keep track of the current position after following the pointers (similar ideas were also used in, e.g., [94, Section 3.4]). After every insertion or deletion, the affected distance values are updated recursively, by decomposing a level-i skip into \(O(\log n)\) many level-\((i-1)\) skips.
A difference between their setting and ours is that they always keep the array sorted in increasing order of \(\textsf{value}\)’s, and the position of an inserted item is decided by its relative order among the values in the array, instead of by a given position \(1\le i\le r+1\). Nevertheless, it is straightforward to adapt their construction to our setting, by using the distance values of the skips to keep track of the current position, instead of by comparing the values of items.
Note that using the distance values we can also efficiently implement the Location operation in a reversed way compared to Indexing, by following the pointers backwards and moving up levels.
To implement the range-minimum query operations, we maintain the range-minimum value of each skip in the skip list, in a similar way to maintaining the distance values of the skips. They can also be updated recursively after each update. Then, to answer a query, we can travel from the \(a^{\text {th}}\) item to the \(b^{\text {th}}\) by following the pointers (this is slightly trickier if \(a \ne 1\), where we may first move up levels and then move down). \(\square \)
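The interface of Lemma 3.14 can be made concrete with a short reference implementation. The following Python sketch uses a plain list, so each operation costs \(O(r)\) time rather than the lemma's \({\tilde{O}}(1)\), and it is not history-independent; it only pins down the five operations and their 1-indexed conventions (the class and method names are ours, not the paper's).

```python
class DynamicArray:
    """Reference implementation of the Lemma 3.14 interface.

    A plain Python list stands in for the history-independent skip list:
    every operation here is O(r) instead of ~O(1), but the observable
    behavior (indexing, insertion, deletion, location, range-minimum)
    is the same.
    """

    def __init__(self):
        self.items = []  # list of (key, value) pairs, keys distinct

    def index(self, i):
        """Return the i-th item (1-indexed)."""
        return self.items[i - 1]

    def insert(self, i, key, value):
        """Insert between the (i-1)-st and i-th items (1-indexed)."""
        self.items.insert(i - 1, (key, value))

    def delete(self, i):
        """Delete the i-th item (1-indexed)."""
        del self.items[i - 1]

    def locate(self, key):
        """Return the position i such that key_i == key."""
        for i, (k, _) in enumerate(self.items, start=1):
            if k == key:
                return i
        raise KeyError(key)

    def range_min(self, a, b):
        """Return min of value_a .. value_b (1-indexed, inclusive)."""
        return min(v for _, v in self.items[a - 1:b])
```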
We also need a 2D range sum data structure for points with integer coordinates.
Lemma 3.15
(2D range sum) Let integer \(N \le n^{O(1)}\). There is a history-independent data structure of size \({\tilde{O}}(r)\) that maintains a multiset of at most r points \(\{(x_1,y_1),\dots ,(x_r,y_r)\}\) with integer coordinates \(x_i\in [N], y_i \in [N]\), and supports the following operations with worst-case \({\tilde{O}}(1)\) time complexity and high success probability:

Insertion Add a new point (x, y) into the multiset (duplicates are allowed).

Deletion Delete the point (x, y) from the multiset (if it appears more than once, delete only one copy).

Range sum Given \(1\le x_1\le x_2\le N, 1\le y_1\le y_2\le N\), return the number of points (x, y) in the multiset that are in the rectangle \([x_1\mathinner {.\,.}x_2] \times [y_1\mathinner {.\,.}y_2]\).
Proof
(Sketch) Without loss of generality, assume N is a power of two. We use a simple folklore construction that resembles a 2D segment tree (sometimes called 2D range tree or 2D radix tree). Define a class \(\mathcal {C}= \mathcal {C}_1\cup \mathcal {C}_2\cup \dots \cup \mathcal {C}_{\log N}\) of subsegments of the segment \([1\mathinner {.\,.}N]\) as follows:
Then it is not hard to see that every segment \([a\mathinner {.\,.}b]\subseteq [1\mathinner {.\,.}N]\) can be represented as the disjoint union of at most \(2\log N\) segments in \(\mathcal {C}\). Consequently, the query rectangle \([x_1\mathinner {.\,.}x_2] \times [y_1\mathinner {.\,.}y_2]\) can always be represented as the disjoint union of \(O(\log ^2 N)\) rectangles of the form \({\mathcal {I}} \times {\mathcal {J}}\) where \({\mathcal {I}}, {\mathcal {J}} \in \mathcal {C}\).
Hence, for every \({\mathcal {I}}, {\mathcal {J}} \in \mathcal {C}\) with nonzero range sum \(s({\mathcal {I}} \times {\mathcal {J}})\), we store this range sum into a hash table, indexed by the canonical encoding of \(({\mathcal {I}},{\mathcal {J}})\). Then we can efficiently answer all the rangesum queries by decomposing the rectangles and summing up their stored range sums.
When a point (x, y) is updated, we only need to update the range sums of \(\log ^2 N\) many rectangles that are affected, since each \(a\in [1\mathinner {.\,.}N]\) is only included by \(\log N\) intervals in \(\mathcal {C}\). We may also need to insert a new rectangle into the hash table, or remove a rectangle once its range sum becomes zero. \(\square \)
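The proof above can be sketched in code. The following Python class is a minimal illustration, assuming dyadic (power-of-two-aligned) segments of all levels for the class \(\mathcal {C}\) and an ordinary dict standing in for the hash table; it is not the history-independent version required inside the quantum walk.

```python
from collections import defaultdict

class RangeSum2D:
    """Sketch of the Lemma 3.15 structure: a hash table keyed by pairs
    of dyadic segments, storing one (nonzero) range sum per pair."""

    def __init__(self, N):
        assert N & (N - 1) == 0, "N must be a power of two"
        self.levels = N.bit_length()   # dyadic levels 0 .. log2(N)
        self.table = defaultdict(int)  # (seg_x, seg_y) -> count

    def _segments_containing(self, x):
        # The dyadic segments (level, block index) containing point x
        # (1-indexed); exactly one per level, log N + 1 in total.
        return [(lvl, (x - 1) >> lvl) for lvl in range(self.levels)]

    def _update(self, x, y, delta):
        for sx in self._segments_containing(x):
            for sy in self._segments_containing(y):
                self.table[(sx, sy)] += delta
                if self.table[(sx, sy)] == 0:
                    del self.table[(sx, sy)]  # keep only nonzero sums

    def insert(self, x, y):
        self._update(x, y, +1)

    def delete(self, x, y):
        self._update(x, y, -1)

    def _canonical(self, lo, hi):
        # Decompose [lo..hi] (1-indexed, inclusive) into O(log N)
        # disjoint dyadic segments, as in a segment tree.
        segs, lo = [], lo - 1          # switch to half-open [lo, hi)
        while lo < hi:
            lvl = (lo & -lo).bit_length() - 1 if lo else self.levels - 1
            while lo + (1 << lvl) > hi:
                lvl -= 1
            segs.append((lvl, lo >> lvl))
            lo += 1 << lvl
        return segs

    def range_sum(self, x1, x2, y1, y2):
        return sum(self.table.get((sx, sy), 0)
                   for sx in self._canonical(x1, x2)
                   for sy in self._canonical(y1, y2))
```

Since the canonical segments on each axis are disjoint, every point in the query rectangle is counted exactly once.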
Data structures in quantum walk Ambainis [13] showed that a history-independent classical data structure D with worst-case time complexity T (with high probability over the random coins \({\textsf{r}}\)) can be applied to the quantum walk framework by creating a uniform superposition over all possible \({\textsf{r}}\), i.e., the data structure storing data S corresponds to the quantum state \(\sum _{{\textsf{r}}} |{D(S,{\textsf{r}})}\rangle |{{\textsf{r}}}\rangle \). During the quantum walk algorithm, each data structure operation is aborted after running for T time steps. By doing this, some components in the quantum state may correspond to malfunctioning data structures, but Ambainis showed that this will not significantly affect the behavior of the quantum walk algorithm. We do not repeat the error analysis here, but instead refer interested readers to the proof of [13, Lemmas 5 and 6] (see also [55, Lemmas 1 and 2]).
3.2.3 Applying the Data Structures
Now we will use the data structures described in Sect. 3.2.2 to implement our quantum walk algorithm from Sect. 3.1 time-efficiently.
Recall that C is the \(T\)-quantum-time-constructible good anchor set of size \(|C|=m\) (Definition 3.2). The states of our quantum walk algorithms are \(r\)-subsets \(K=\{k_1,k_2,\dots ,k_r\}\subseteq [m]\), where each index \(k\in K\) is associated with an anchor \(C(k)\in [n]\), which specifies the color (red or blue) of k and the pair (P(k), Q(k)) of strings of lengths at most d. We need to maintain the lexicographical orderings \((k_1^P,\dots ,k_r^P)\) and LCP arrays \((h_1^P,\dots ,h_{r-1}^P)\), so that \(P(k_1^P) \preceq P(k_2^P)\preceq \cdots \preceq P(k_r^P)\) and \(h_i^P = \mathsf {\textrm{lcp}}(P(k_i^P),P(k_{i+1}^P))\), and similarly maintain \((k_1^{Q},\dots ,k_r^{Q}),(h_1^Q,\dots ,h_{r-1}^Q)\) for the strings \(\{Q(k)\}_{k\in K}\).
For \(k\in K\), we use \(\mathsf {\textrm{pos}}^P(k)\) to denote the position i such that \(k^P_i = k\), i.e., the lexicographical rank of P(k) among all \(P(k_1),\dots ,P(k_r)\). Similarly, let \(\mathsf {\textrm{pos}}^Q(k)\) denote the position i such that \(k^{Q}_i = k\).
We can immediately see that all the steps in the update step (Algorithm 1) of our quantum walk can be implemented time-efficiently. In particular, we use a hash table (Lemma 3.13) to store the anchor C(k) corresponding to each \(k\in K\), and use Lemma 3.14 to maintain the lexicographical orderings and LCP arrays under insertions and deletions. Each update operation on these data structures takes \({\tilde{O}}(1)\) time. Additionally, these data structures allow us to efficiently compute some useful information, as summarized below.
Proposition 3.16
Given indices \(k,k' \in K\), the following information can be computed in \({\tilde{O}}(1)\) time.

1.
The anchor C(k), the color of k, and the lengths \(|P(k)|,|Q(k)|\le d\).

2.
\(\mathsf {\textrm{pos}}^P(k)\) and \(\mathsf {\textrm{pos}}^Q(k)\).

3.
\(\mathsf {\textrm{lcp}}(P(k),P(k'))\) and \(\mathsf {\textrm{lcp}}(Q(k),Q(k'))\).
Proof
For 1, rather than use T time to compute C(k) (Definition 3.2), we instead look up the value of C(k) from the hash table. Then, \(C(k)\in [n]\) determines the color of k and the string lengths.
For 2, we use the location operation of the dynamic array data structure (Lemma 3.14).
For 3, we first compute \(i = \mathsf {\textrm{pos}}^P(k), i' = \mathsf {\textrm{pos}}^P(k')\), and assume \(i<i'\) without loss of generality. Then, by Lemma 2.1, we can compute \(\mathsf {\textrm{lcp}}(P(k),P(k')) = \mathsf {\textrm{lcp}}(P(k^P_i),P(k^P_{i'})) = \min \{h^P_{i},h^P_{i+1},\dots ,h^P_{i'-1}\}\) using a range-minimum query (Lemma 3.14). \(\square \)
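Item 3 rests on the standard identity (Lemma 2.1) that, in a lexicographically sorted list, the LCP of two strings equals the minimum of the adjacent LCP values between them. A small self-contained check, with a linear scan standing in for the range-minimum query:

```python
def lcp(u, v):
    """Length of the longest common prefix of strings u and v."""
    n = 0
    while n < min(len(u), len(v)) and u[n] == v[n]:
        n += 1
    return n

# Lexicographically sorted strings and their adjacent-LCP array, as
# maintained by the dynamic array of Lemma 3.14.
strings = sorted(["aab", "aac", "ab", "abba", "abc", "b"])
h = [lcp(strings[i], strings[i + 1]) for i in range(len(strings) - 1)]

def lcp_via_rmq(i, j):
    """lcp(strings[i], strings[j]) for i < j, via Lemma 2.1: a
    range-minimum over the adjacent-LCP entries h[i..j-1]."""
    return min(h[i:j])
```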
The remaining task is to efficiently implement the checking step, where we need to solve the Two String Families LCP problem. The goal is to find a red index \(k^{\textsf{red}}\in K\) and a blue index \(k^{\textsf{blue}}\in K\), such that \(\mathsf {\textrm{lcp}}(P(k^{\textsf{red}}),P(k^{\textsf{blue}}))+ \mathsf {\textrm{lcp}}(Q(k^{\textsf{red}}),Q(k^{\textsf{blue}}))\ge d\). Now we give an outline of the algorithm for solving this task.
In Algorithm 2, we use Grover search to find a red index \(k^{\textsf{red}}\in K\) and an integer \(d'\in [0\mathinner {.\,.}d]\), such that there exists a blue index \(k^{\textsf{blue}}\in K\) with \(\mathsf {\textrm{lcp}}(P(k^{\textsf{red}}),P(k^{\textsf{blue}}))\ge d'\) and \(\mathsf {\textrm{lcp}}(Q(k^{\textsf{red}}),Q(k^{\textsf{blue}}))\ge d-d'\). The number of Grover iterations is \({\tilde{O}}(\sqrt{|K|\cdot d}) = {\tilde{O}}(\sqrt{rd})\), and we will implement each iteration in \(\mathrm{poly\,log}(n)\) time. By Lemma 2.1, all the strings P(k) that satisfy \(\mathsf {\textrm{lcp}}(P(k),P(k^{\textsf{red}}))\ge d'\) form a contiguous segment in the lexicographical ordering \(P(k_1^P)\preceq \dots \preceq P(k_r^P)\). In Line 2, we find the left and right boundaries \(\ell ^P, r^P\) of this segment, using a binary search with Proposition 3.16 (3). Line 3 is similar to Line 2. Then, Line 4 checks the existence of such a blue string pair. It is clear that this procedure correctly solves the Two String Families LCP problem. The only remaining problem is how to implement Line 4 efficiently.
Note that Line 4 can be viewed as a 2D orthogonal range query, where each 2D point is a blue string pair (P(k), Q(k)), with coordinates being strings to be compared in lexicographical order. We cannot simply replace the coordinates by their ranks \(\mathsf {\textrm{pos}}^P(k)\) and \(\mathsf {\textrm{pos}}^Q(k)\) among the r substrings in the current state, since their ranks will change over time. It is also unrealistic to replace the coordinates by their ranks among all the possible substrings \(\{P(k)\}_{k\in [m]}\), since m could be much larger than the desired overall time complexity \(n^{2/3}\). These issues seem to require our 2D range query data structure to be comparisonbased, which is also difficult to achieve as mentioned before.
Instead, we will use a sampling technique, which effectively converts the non-integer coordinates into integer coordinates. At the very beginning of the algorithm (before running the quantum walk), we uniformly sample r distinct indices \(x_1,x_2,\dots ,x_r\in [m]\), and sort them so that \(P(x_1)\preceq P(x_2) \preceq \cdots \preceq P(x_r)\) (breaking ties by the indices), in \(\tilde{O}(r(\sqrt{d}+T))\) total time (this complexity is absorbed by the time complexity of the setup step \(\textsf{S}= O(r(\sqrt{d}+T))\)). Then, during the quantum walk algorithm, when we insert an index \(k\in [m]\) into K, we assign it an integer label \(\rho ^P(k)\) defined as the unique \(i \in [0\mathinner {.\,.}r]\) satisfying \(P(x_i)\preceq P(k) \prec P(x_{i+1})\), which can be computed in \({\tilde{O}}(\sqrt{d})\) time by a binary search on the sorted sequence \(P(x_1)\preceq \cdots \preceq P(x_r)\). We also sample \(y_1,\dots ,y_r\in [m]\) and sort them so that \(Q(y_1)\preceq Q(y_2)\preceq \dots \preceq Q(y_r)\), and similarly define the integer labels \(\rho ^{Q}(k)\). Intuitively, the (scaled) label \(\rho ^P(k) \cdot (m/r)\) estimates the rank of P(k) among all the strings \(\{P(k')\}_{k'\in [m]}\).
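The labeling step can be illustrated as follows, with ordinary string comparison standing in for the \({\tilde{O}}(\sqrt{d})\)-time Grover-based comparisons (the helper name is ours, not the paper's):

```python
import bisect

def make_labeler(samples):
    """Label function rho built from the sampled strings: rho(s) is the
    unique i in [0..r] with P(x_i) <= s < P(x_{i+1}), where the sorted
    samples play the role of P(x_1) <= ... <= P(x_r).  In the algorithm
    each comparison is between length-d substrings and costs ~O(sqrt(d))
    via Grover search; plain string comparison stands in here."""
    samples = sorted(samples)
    # bisect_right returns the number of samples <= s, which is
    # exactly the label i above.
    return lambda s: bisect.bisect_right(samples, s)

# Toy instance with r = 3 sampled strings.
rho = make_labeler(["banana", "cherry", "fig"])
```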
The following lemma formalizes this intuition. It states that in a typical rsubset \(K = \{k_1,k_2,\dots ,k_r\}\subseteq [m]\), not too many indices can receive the same label.
Lemma 3.17
For any \(c>1\), there is a \(c'>1\), such that the following statement holds:
For positive integers \(r\le m\), let \(A,B\subseteq [m]\) be two independently uniformly random rsubsets. Let \(A = \{a_1,a_2,\dots ,a_r\}\) where \(a_1<a_2<\dots <a_r\), and denote
Then,
Proof
Let \(k = c' \log m\) for some \(c'>1\) to be determined later, and we can assume \(k\le r\). Observe that \(|A_i\cap B| \ge k\) holds for some i only if there exist \(b,b'\in [m]\), such that \(|[b\mathinner {.\,.}b'] \cap B| \ge k\) and \([b+1\mathinner {.\,.}b'] \cap A = \emptyset \).
Let \(b,b'\in [m], b\le b'\). For \(b'-b \ge (c+2) (m\ln m)/r\), we have
For \(b'-b < (c+2)(m\ln m)/r\), we have
where we set \(k = c'\log m = 3(c+3)\log m\).
The proof then follows from a union bound over all pairs of \(b,b'\in [m]\). \(\square \)
Then, we can use the 2D point \((\rho ^P(k),\rho ^Q(k))\) with integer coordinates to represent the string pair (P(k), Q(k)), and use the data structure from Lemma 3.15 to handle the 2D range sum queries. To correctly handle the points near the boundary of a query, we need to check them one by one, and Lemma 3.17 implies that on average this brute-force step is not expensive.
The pseudocode in Algorithm 3 describes the additional steps to be performed during each insertion step of the quantum walk (the deletion step is simply the reversed operation of the insertion step).
The pseudocode in Algorithm 4 describes how to implement Line 4 in Algorithm 2 for solving the Two String Families LCP problem. Line 4 correctly handles all the “internal” blue pairs \((P(k^{\textsf{blue}}),Q(k^{\textsf{blue}}))\), which must satisfy \(\mathsf {\textrm{pos}}^P(k^{\textsf{blue}}) \in [\ell ^P,r^P]\) and \(\mathsf {\textrm{pos}}^Q(k^{\textsf{blue}}) \in [\ell ^Q,r^Q]\) by the definition of our integer labels \(\rho ^P(\cdot ),\rho ^Q(\cdot )\) and Lines 2 and 3. In Line 4 we handle the remaining possible blue pairs, which must have \(\rho ^P(k^{\textsf{blue}}) \in \{{\tilde{\ell }}^P,{\tilde{r}}^P\}\) or \(\rho ^Q(k^{\textsf{blue}}) \in \{{\tilde{\ell }}^Q,{\tilde{r}}^Q\}\), and can be found by binary searches on the lexicographical orderings (to be able to do this, we need to maintain the lexicographical orderings of \(P(k_1),\dots ,P(k_r)\) and the sampled strings \(P(x_1),\dots ,P(x_r)\) combined).
Note that in Line 5 of Algorithm 4 we abort if we have checked more than \(4c'\log m\) boundary points, so that Algorithm 4 has worst-case \({\tilde{O}}(1)\) overall running time. But this early stopping would also introduce (one-sided) error if there are too many boundary points which we have no time to check. However, a straightforward application of Lemma 3.17 implies that, with high success probability over the initial samples \(P(x_1)\preceq P(x_2)\preceq \cdots \preceq P(x_r)\) and \(Q(y_1)\preceq Q(y_2)\preceq \cdots \preceq Q(y_r)\), only a \(1/\textrm{poly}(m)\) fraction of the \(r\)-subsets \(K= \{k_1,\dots ,k_r\} \subseteq [m]\) in the Johnson graph can have more than \(c'\log m\) strings receiving the same label. On these problematic states \(K=\{k_1,\dots ,k_r\} \subseteq [m]\), the checking procedure may erroneously recognize K as unmarked, while other states are handled correctly by Algorithm 4 since there is no early aborting. This decreases the fraction of marked states in the Johnson graph by only a \(1/\textrm{poly}(m)\) fraction, which does not affect the overall time complexity of our quantum walk algorithm.
3.3 Improved Construction of Good Anchor Sets
In this section, we will prove Lemma 3.6 by constructing a good anchor set with smaller size. Our construction of good anchor sets is based on a careful combination of a generalized version of difference covers [29, 30] and the string synchronizing sets [32].
3.3.1 Approximate Difference Covers
We first need to generalize the notion of difference covers.
Definition 3.18
(Approximate Difference Covers) A set \(D\subseteq \mathbb {N}^+\) is called a \((d,\tau )\)-cover, if for every \(i,j\in \mathbb {N}^+\), there exist two integers \(h_1(i,j),h_2(i,j) \in [0\mathinner {.\,.}d)\) such that \(i+h_1(i,j),j+h_2(i,j) \in D\), and \(|h_1(i,j)-h_2(i,j)| \le \tau -1\).
The notion of \(d\)-cover (Definition 3.3) used in previous algorithms corresponds to the \(\tau =1\) case of our new definition. Our generalization to larger \(\tau \) can be viewed as an approximate version of difference covers with additive error \(\le \tau -1\). As we shall see, allowing additive error makes the size of the \((d,\tau )\)-cover much smaller compared to Definition 3.3.
We present a construction of approximate difference covers, by adapting previous constructions from \(\tau =1\) to general values of \(\tau \).
Lemma 3.19
(Construction of \((d,\tau )\)-cover) For all positive integers \(1\le \tau \le d\), there is a \((d,\tau )\)-cover D such that \(D\cap [n]\) contains \(O(n/\sqrt{d\tau })\) elements. Moreover, given integer \(i\ge 1\), one can compute the \(i^{\text {th}}\) smallest element of \(D\cap [n]\) in \({\tilde{O}} (1)\) time.
Proof
Let \(M:= \left\lfloor \sqrt{d/\tau } \right\rfloor \ge 1\). Define
and
We claim that \(D:= I\cup J\) is a \((d,\tau )\)-cover that satisfies the desired properties.
First, observe that \(|I\cap [n]|=\lfloor n/(M\tau )\rfloor \le O(n/\sqrt{d \tau })\), and \(|J\cap [n]|\le \lfloor n/\tau \rfloor \cdot (M/M^2) = O(n/\sqrt{d \tau })\). Hence \(D=I\cup J\) satisfies the claimed size bound.
Next, we verify D is indeed a \((d,\tau )\)-cover. For any \(i,j\in \mathbb {N}^+\), let \(i':= \lceil i/\tau \rceil , j':= \lceil j/\tau \rceil \). Let \(z\cdot \tau \in J\) be the smallest integer in J such that \(z\ge j'\) and \(z\equiv j'-i' \pmod {M}\). By the construction of J, we have \(z\le j'+M^2-1\). Hence, let \(h_1(i,j)=(z-j'+i')\cdot \tau - i\) and \(h_2(i,j) = z\cdot \tau - j\). Note that
where we used \(j' \tau - j \in [0\mathinner {.\,.}\tau -1 ]\). Similarly we can show \(0\le h_1(i,j) \le d-1\), and we have
Moreover, \(j+h_2(i,j)\in J\subseteq D\), and
which implies \(i+h_1(i,j)\in I\subseteq D\). \(\square \)
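The displayed definitions of I and J are not reproduced above, so the following sketch uses one concrete instantiation that is an assumption consistent with the proof: I consists of the multiples of \(M\tau \), and J of the values \(z\cdot \tau \) whose quotient z satisfies \(\lfloor z/M\rfloor \equiv z \pmod {M}\) (so any window of \(M^2\) consecutive values of z hits every residue class modulo M). A brute-force checker then verifies Definition 3.18 on small parameters.

```python
import math

def build_cover(d, tau, n):
    """A candidate (d,tau)-cover restricted to [1..n] (Lemma 3.19 sketch).

    ASSUMPTION: the paper's displayed definitions of I and J are not
    shown here; this instantiation merely matches the proof's bounds
    |I cap [n]| = floor(n/(M*tau)) and |J cap [n]| <= floor(n/tau)*(M/M^2).
    """
    M = math.isqrt(d // tau)
    I = set(range(M * tau, n + 1, M * tau))
    J = {z * tau for z in range(1, n // tau + 1) if (z // M) % M == z % M}
    return I | J

def is_cover(D, d, tau, lo, hi):
    """Brute-force check of Definition 3.18 for all i, j in [lo..hi]:
    some h1, h2 in [0..d) must satisfy i+h1 in D, j+h2 in D, and
    |h1 - h2| <= tau - 1."""
    for i in range(lo, hi + 1):
        for j in range(lo, hi + 1):
            ok = any(i + h1 in D and j + h2 in D
                     for h1 in range(d)
                     for h2 in range(max(0, h1 - tau + 1),
                                     min(d, h1 + tau)))
            if not ok:
                return False
    return True
```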
3.3.2 String Synchronizing
In Corollary 3.5 we obtained a good anchor set using a \((d,1)\)-cover. If we simply replace it by a \((d,\tau )\)-cover with larger \(\tau \), the size of the obtained anchor set would become smaller, but it would no longer be a good anchor set, due to the misalignment introduced by approximate difference covers. To deal with the misalignment, we will use the string synchronizing sets recently introduced by Kempa and Kociumaka [32]. Informally, a synchronizing set of string S is a small set of synchronizing positions, such that every two sufficiently long matching substrings of S with no short periods should contain a pair of consistent synchronizing positions.
Definition 3.20
(String synchronizing sets [32, Definition 3.1]) For a string \(S[1\mathinner {.\,.}n]\) and a positive integer \(1\le \tau \le n/2\), we say \(A\subseteq [1\mathinner {.\,.}n-2\tau +1]\) is a \(\tau \)-synchronizing set of S if it satisfies the following properties:

Consistency If \(S[i\mathinner {.\,.}i+2\tau ) = S[j\mathinner {.\,.}j+2\tau )\), then \(i\in A\) if and only if \(j\in A\).

Density For \(i\in [1\mathinner {.\,.}n-3\tau +2]\), \(A \cap [i\mathinner {.\,.}i+\tau ) =\emptyset \) if and only if \(\mathsf {\textrm{per}}(S[i\mathinner {.\,.}i+3\tau - 2])\le \tau /3\).
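Definition 3.20 can be checked by brute force on small examples. The sketch below implements both conditions directly (1-indexed positions, as in the paper); note that for \(\tau < 3\) the periodic case degenerates, since \(\mathsf {\textrm{per}} \le \tau /3 < 1\) is impossible, so density then forces every length-\(\tau \) window to contain a synchronizing position.

```python
def per(w):
    """Smallest period of string w (brute force)."""
    for p in range(1, len(w) + 1):
        if all(w[x] == w[x + p] for x in range(len(w) - p)):
            return p

def is_sync_set(S, tau, A):
    """Brute-force check of Definition 3.20 (1-indexed positions)."""
    n = len(S)
    if not all(1 <= i <= n - 2 * tau + 1 for i in A):
        return False
    # Consistency: equal length-2*tau windows agree on membership in A.
    for i in range(1, n - 2 * tau + 2):
        for j in range(1, n - 2 * tau + 2):
            if S[i - 1:i - 1 + 2 * tau] == S[j - 1:j - 1 + 2 * tau]:
                if (i in A) != (j in A):
                    return False
    # Density: A misses [i..i+tau) iff the context is highly periodic.
    for i in range(1, n - 3 * tau + 3):
        empty = not any(a in A for a in range(i, i + tau))
        periodic = per(S[i - 1:i - 1 + 3 * tau - 2]) <= tau / 3
        if empty != periodic:
            return False
    return True
```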
Kempa and Kociumaka gave a linear-time classical randomized algorithm (as well as a derandomized version, which we will not use here) to construct a \(\tau \)-synchronizing set A of optimal size \(|A|=O(n/\tau )\). However, this classical algorithm for constructing A has to query each of the n input characters, and is not directly applicable to our sublinear quantum algorithm.
To apply Kempa and Kociumaka’s construction algorithm to the quantum setting, we observe that this algorithm is local, in the sense that whether an index i should be included in A is completely decided by its short context \(S[i\mathinner {.\,.}i+2\tau )\) and the random coins. Moreover, by suitable adaptation of their construction, one can compute all the synchronizing positions in an \(O(\tau )\)-length interval in \({\tilde{O}}(\tau )\) time. We summarize all the desired properties of the synchronizing set in the following lemma.
Lemma 3.21
(Adaptation of [32]) For a string \(S[1\mathinner {.\,.}n]\) and a positive integer \(1\le \tau \le n/2\), given a sequence \(\textsf{r}\) of \(O(\log n)\) many random coins, there exists a set A with the following properties:

Correctness With high probability over \(\textsf{r}\), A is a \(\tau \)-synchronizing set of S.

Locality For every \(i\in [1\mathinner {.\,.}n-2\tau +1]\), whether \(i\in A\) or not is completely determined by the random coins \(\textsf{r}\) and the substring \(S[i\mathinner {.\,.}i+2\tau )\). Moreover, given \(S[i\mathinner {.\,.}i+4\tau )\) and \(\textsf{r}\), one can compute all the elements in \(A\cap [i\mathinner {.\,.}i+2\tau )\) by a classical algorithm in \({\tilde{O}}(\tau )\) time.

Sparsity For every \(i\in [1\mathinner {.\,.}n-2\tau +1]\), \({\textbf{E}}_{\textsf{r}}\big [|A\cap [i\mathinner {.\,.}i+2\tau )|\big ] \le 80\).
In the following, we first inspect the (slightly adapted) randomized construction of string synchronizing sets by Kempa and Kociumaka [32], and then show that it satisfies the properties in Lemma 3.21.
Construction of string synchronizing sets Fix an input string \(S[1\mathinner {.\,.}n]\) and a positive integer \(\tau \le n/2\). Define sets
where we define \(B=\emptyset \) in the special case of \(\tau =1\).
Let \(\mathcal {P}= \{s\in \Sigma ^{\tau }: {s\text { is a substring of }S}\}\) denote the set of all the length\(\tau \) substrings in S (without duplicates). Let \(\pi :\mathcal {P}\rightarrow [N]\) be any injection, and define the identifier function \(\mathsf {\textrm{id}}:[1\mathinner {.\,.}n\tau +1]\rightarrow \mathbb {N}^+\) by
In this way, we have \(\mathsf {\textrm{id}}(i)=\mathsf {\textrm{id}}(j)\) if and only if \(S[i\mathinner {.\,.}i+\tau ) = S[j\mathinner {.\,.}j+\tau )\). Moreover, for \(i\in B,j\notin B\), we always have \(\mathsf {\textrm{id}}(i)<\mathsf {\textrm{id}}(j)\). Finally, define
Kempa and Kociumaka proved the following fact.
Lemma 3.22
([32, Lemma 8.2]) The set A is always a \(\tau \)-synchronizing set of string S.
We first quickly verify the locality property of this construction.
Proposition 3.23
For every \(i\in [1\mathinner {.\,.}n-2\tau +1]\), whether \(i\in A\) or not is completely determined by \(\pi \) and the substring \(S[i\mathinner {.\,.}i+2\tau )\).
Proof
This immediately follows from the definition of \(Q,B,\mathsf {\textrm{id}},\) and A. \(\square \)
Now, suppose \(\pi :\mathcal {P}\rightarrow [N]\) is randomly chosen so that the 0.1-approximate min-wise independence property is satisfied: for any \(x\in \mathcal {P}\) and subset \(X\subseteq \mathcal {P}\setminus \{x\}\),
Then the following holds.
Lemma 3.24
([32, Fact 8.9], adapted) The expected size of A satisfies \({\textbf{E}}_{\pi }[|A|] \le 20n/\tau \).
Remark 3.25
We remark that in the original construction of [32], \(\pi \) was chosen to be a uniformly random bijection \(\mathcal {P}\rightarrow [|\mathcal {P}|]\), and this is the only part that differs from our modified version. The main issue with this ideal choice is that, in our quantum algorithm, we do not have enough time to sample and store \(\pi \), which could have size \(\Omega (n)\). Observe that in their proof of Lemma 3.24, the only required property of \(\pi \) is that \(\pi \) satisfies (perfect) min-wise independence. Hence, here we can relax it to have approximate min-wise independence, and their proof of Lemma 3.24 still applies (with a slightly worse constant factor).
Now we describe how to design such a mapping \(\pi \) that is efficiently computable. First, we hash the substrings into integers using the standard rolling hash method [2]. Recall that the alphabet \(\Sigma \) is identified with the integer set \([|\Sigma |]\).
Definition 3.26
(Rolling hash) Let \(p>|\Sigma |\) be a prime, and pick \(y\in \mathbb {F}_p\) uniformly at random. Then, the rolling hash function \(\rho _{p,y}:\Sigma ^\tau \rightarrow \mathbb {F}_p \) on length-\(\tau \) strings is defined as
We have the following two folklore facts about rolling hashes.

Rolling hashes of substrings can be easily computed in batch: on any given string s of length \(|s|\ge \tau \), one can compute the hash values \(\rho _{p,y}\big ( s[i\mathinner {.\,.}i+\tau )\big )\) for all \(i \in [1\mathinner {.\,.}|s|-\tau +1]\), in \(O(|s| \cdot \mathrm{poly\,log}\, p)\) total time.

By choosing \(p = \textrm{poly}(n)\), we can ensure that with high probability over the choice of y, the rolling hash function \(\rho _{p,y}\) takes distinct values over all the strings in \(\mathcal {P}\) (by the Schwartz–Zippel lemma).
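Both facts can be demonstrated with a short Rabin–Karp-style sketch. The paper's displayed formula for \(\rho _{p,y}\) is not reproduced above, so the power ordering below is one standard convention and may differ from it; the batch-computation idea is the same either way.

```python
import random

def rolling_hashes(s, tau, p, y):
    """All hashes rho(s[i..i+tau)) in one left-to-right pass (O(|s|)).

    Convention (an assumption, one common choice):
        h = sum_j s[j] * y^(tau-1-j)  (mod p).
    Sliding the window drops the outgoing symbol's contribution,
    multiplies by y, and adds the incoming symbol.
    """
    assert len(s) >= tau
    y_top = pow(y, tau - 1, p)       # weight of the window's first symbol
    h = 0
    for c in s[:tau]:                # Horner's rule for the first window
        h = (h * y + ord(c)) % p
    hashes = [h]
    for i in range(len(s) - tau):    # slide the window one step right
        h = ((h - ord(s[i]) * y_top) * y + ord(s[i + tau])) % p
        hashes.append(h)
    return hashes

# Equal length-tau substrings always collide; distinct ones are separated
# with high probability over the random choice of y (Schwartz-Zippel).
p = 1_000_000_007
random.seed(7)
y = random.randrange(1, p)
hs = rolling_hashes("abracadabra", 4, p, y)
```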
After hashing the strings in \(\mathcal {P}\) to a small integer set \([\textrm{poly}(n)]\), we can apply known constructions of approximate min-wise independent hash families.
Lemma 3.27
(Approximate min-wise independent hash family, e.g., [95]) Given parameter \(n \ge 1\), one can choose \(N \le n^{O(1)}\), so that there is a hash family \({\mathcal {H}} = \{h:[N]\rightarrow [N]\}\) that satisfies the following properties:

Injectivity For any subset \(X\subseteq [N]\) of size \(|X|\le n\), with high probability over the choice of \(h \in {\mathcal {H}}\), h maps X to distinct elements.

Approximate min-wise independence For any \(x\in [N]\) and subset \(X\subseteq [N] \setminus \{x\}\),
$$\begin{aligned} \Pr _{h\in {\mathcal {H}}} \big [h(x) < \min \{h(x'): x' \in X\}\big ] \in \frac{1}{|X|+1}\cdot (1\pm 0.1).\end{aligned}$$ 
Explicitness Each hash function \(h\in {\mathcal {H}}\) can be specified using \(O(\log n)\) bits, and can be evaluated at any point in \(\mathrm{poly\,log}(n)\) time.
Finally, we choose parameters \(p=\textrm{poly}(n), N = \textrm{poly}(n),p\le N\), and define the pseudorandom mapping \(\pi :\mathcal {P}\rightarrow [N]\) by \(\pi (s):= h\big (\rho _{p,y}(s)\big )\), where \(\rho _{p,y}:\mathcal {P}\rightarrow \mathbb {F}_p\) is the rolling hash function (identifying \(\mathbb {F}_p\) with \([p] \subseteq [N]\)), and \(h:[N] \rightarrow [N]\) is the approximate min-wise independent hash function.
Now we are ready to prove that the string synchronizing set A determined by the random mapping \(\pi \) satisfies the properties stated in Lemma 3.21.
Proof
(of Lemma 3.21) First note that the random coins \(\textsf{r}\) are used to sample \(y\in \mathbb {F}_p\) and \(h\in {\mathcal {H}}\), which only take \(O(\log n)\) bits of seed.
Correctness By Lemma 3.22, A is correct as long as \(\pi \) is an injection, which holds with high probability by the injectivity properties of \(\rho _{p,y}\) and h.
Locality The first part of the statement is already verified in Proposition 3.23. To show the “moreover” part, first note that the values of \(\pi \big ( S[j\mathinner {.\,.}j+\tau )\big )\) over all \(j\in [i\mathinner {.\,.}i+3\tau )\) can be computed in \({\tilde{O}}(\tau )\) time, by the property of rolling hash and the explicitness of h. By [32, Lemma 8.8], the sets \(Q\cap [i \mathinner {.\,.}i+3\tau )\) and \(B\cap [i\mathinner {.\,.}i+3\tau )\) can be computed in \(O(\tau )\) time. Hence, we can compute \(\mathsf {\textrm{id}}(j)\) for all \(j\in [i\mathinner {.\,.}i+3\tau )\), and then construct \(A\cap [i\mathinner {.\,.}i+2\tau )\) by computing the sliding-window minima, in \({\tilde{O}}(\tau )\) overall time.
Sparsity Let \(S'=S[i\mathinner {.\,.}i+4\tau )\), and construct a \(\tau \)-synchronizing set \(A'\) of \(S'\) using the same random coins \(\textsf{r}\). Then, from the locality property we clearly have \(|A'|\ge |A\cap [i\mathinner {.\,.}i+2\tau )|\). Hence, by Lemma 3.24, \({\textbf{E}}_{\textsf{r}}\big [|A \cap [i\mathinner {.\,.}i+2\tau )|\big ] \le {\textbf{E}}_{\textsf{r}}\big [|A'|\big ] \le 20\cdot 4\tau /\tau = 80\). \(\square \)
3.3.3 Putting it Together
Now we will construct the good anchor set for input strings s, t and threshold length d. Recall that \(S=s\$t\) and \(|S|=n\), and we have assumed \(d\ge 100\) in order to avoid corner cases. Our plan is to use string synchronizing sets to fix the misalignment introduced by the approximate difference covers. However, in highly periodic parts where synchronizing fails, we have to rely on periodicity arguments and Grover search.
Construction 3.28 (Anchor set C) Let D be a \((\lfloor d/2 \rfloor ,\tau )\)-cover for some parameter \(\tau \le d/100\) to be determined later, and let \(D_{S}:=\big (D\cap [|s|]\big ) \cup \big (|s|+1+(D\cap [|t|])\big ) \). Let A be the \(\tau \)-synchronizing set of S determined by random coins \(\textsf{r}\).
For every \(i\in D_S \cap [n-3\tau +2]\), let \(L_i\subseteq [1\mathinner {.\,.}n]\) be defined by the following procedure.

Step 1 If \(A \cap [i\mathinner {.\,.}i+2\tau )\) has at most 1000 elements, then add all the elements from \(A \cap [i\mathinner {.\,.}i+2\tau )\) into \(L_i\). Otherwise, add the smallest 1000 elements from \(A \cap [i\mathinner {.\,.}i+2\tau )\) into \(L_i\).

Step 2 If \(p:=\mathsf {\textrm{per}}(S[i+\tau \mathinner {.\,.}i+3\tau -2]) \le \tau /3\), then we do the following:

Define two boundary indices
$$\begin{aligned} r&:= \max \big \{r: r\le \min \{n,i+d\} \wedge \mathsf {\textrm{per}}(S[i+\tau \mathinner {.\,.}r])=p\big \},\\ \ell&:= \min \big \{\ell : \ell \ge \max \{1,i-d\} \wedge \mathsf {\textrm{per}}(S[\ell \mathinner {.\,.}i+3\tau -2])=p\big \}. \end{aligned}$$Let P be the Lyndon root of \(S[i+\tau \mathinner {.\,.}i+3\tau -2]\) (see Sect. 2.1). Then \(|P|=p\), and let \(P=S[i^{(b)}\mathinner {.\,.}i^{(b)}+p)=S[i^{(e)}\mathinner {.\,.}i^{(e)}+p)\) be the first and last occurrences of P in \(S[\ell \mathinner {.\,.}r]\). We add three elements \(i^{(b)},i^{(b)}+p\), and \(i^{(e)}\) into \(L_i\).

Finally, the anchor set C is defined as \(\bigcup _{i\in D_S\cap [n-3\tau +2]} L_i\).
Before proving the correctness of the anchor set C in Construction 3.28, we first observe that C has small size and is efficiently constructible.
Lemma 3.29
The anchor set C has size \(|C|\le O(n/\sqrt{d\tau })\), and is \({\tilde{O}}(\tau + \sqrt{d})\)-quantum-time constructible.
Proof
For any given \(i\in D_S\cap [n-3\tau +2]\), \(L_i\) contains at most 1003 elements. Hence, \(|C|\le 1003\cdot |D_S| \le O(n/\sqrt{d\tau })\) by Lemma 3.19.
In Construction 3.28, Step 1 takes \({\tilde{O}}(\tau )\) classical time by the Locality property in Lemma 3.21. In Step 2, we can find the period p and the Lyndon root P in \(\tilde{O}(\tau )\) classical time (see Sect. 2.1). Then, finding the right boundary r is equivalent to searching for the largest \(r \in \big [i+3\tau -2\mathinner {.\,.}\min \{n,i+d\}\big ]\) such that p is a period of \(S[i+\tau \mathinner {.\,.}r]\) (this is because we already know \(\mathsf {\textrm{per}}(S[i+\tau \mathinner {.\,.}i+3\tau -2])=p\), and the period is monotonically nondecreasing in r). We do this with a binary search over r, where each check can be performed by a Grover search in \({\tilde{O}}(\sqrt{d})\) time, since the length of \(S[i+\tau \mathinner {.\,.}r]\) is at most d. The left boundary \(\ell \) can be found similarly. Finally, the positions \(i^{(b)}\) and \(i^{(e)}\) can be found in \({\tilde{O}}(p)\) time classically, since we must have \(i^{(b)}-\ell , r-i^{(e)} \le 2p\). Hence, our anchor set C is \({\tilde{O}}(\tau + \sqrt{d})\)-quantum-time constructible. \(\square \)
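The binary search for the right boundary r can be sketched classically as follows, with a linear scan playing the role of the \({\tilde{O}}(\sqrt{d})\)-time Grover search inside each check (the function name and 1-indexed conventions are ours):

```python
def find_right_boundary(S, i, tau, p, d):
    """Find r = max{ r <= min(n, i+d) : per(S[i+tau .. r]) == p }, given
    that p = per(S[i+tau .. i+3tau-2]) is already known (Lemma 3.29,
    Step 2, so d >= 3*tau is assumed).  Since "p is a period of
    S[i+tau .. r]" is monotone in r, binary search applies; each check
    below is a linear scan, which the quantum algorithm replaces by a
    Grover search in ~O(sqrt(d)) time.  Positions are 1-indexed."""
    n = len(S)

    def p_is_period(r):
        # p is a period of S[i+tau .. r] iff S[x] == S[x+p] for all
        # x in [i+tau .. r-p]  (1-indexed).
        return all(S[x - 1] == S[x + p - 1]
                   for x in range(i + tau, r - p + 1))

    lo, hi = i + 3 * tau - 2, min(n, i + d)  # p_is_period(lo) is given
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if p_is_period(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo
```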
Now we show that, with constant probability, C is a good anchor set (Definition 3.2).
Lemma 3.30
For input strings s, t and threshold length d, with at least 0.8 probability over the random coins \(\textsf{r}\), the set C is a good anchor set.
The proof of this lemma has a similar structure to the proof of [28, Lemma 17], but is additionally complicated by the fact that (1) we have to deal with the misalignment introduced by approximate difference covers, and (2) we only considered a subset of the synchronizing set when defining the anchors.
Here, we first give an informal overview of the proof. By the property of approximate difference covers, the length-d common substring of s and t should have a pair of slightly misaligned anchors within a shift of at most \(\tau -1\) from each other. If the contexts around these misaligned anchors are not highly periodic (Case 1 in the proof below), then their \(O(\tau )\)-neighborhood must contain a pair of synchronizing positions (by the density property of A), which are included in Step 1 of Construction 3.28, and form a pair of perfectly aligned anchors (by the consistency property of A). If the contexts around the misaligned anchors are highly periodic (Case 2), we can extend the period to the left or to the right, and look at the first position where the period stops. If this stopping position is inside the common substring, then we have a pair of anchors (Cases 2(i), 2(ii)). Otherwise, the whole common substring is highly periodic, and we can also obtain anchors by looking at the starting positions of its Lyndon roots (Case 2(iii)). These anchors for Case 2 are included in Step 2 of Construction 3.28.
Proof
(of Lemma 3.30) Let \(s[i_{\star }\mathinner {.\,.}i_{\star }+d)=t[j_{\star }\mathinner {.\,.}j_{\star }+d)\) be a length-d common substring of s and t. Our goal is to show the existence of positions \(i \in [|s|-d+1],j\in [|t|-d+1]\) and a shift \(h\in [0\mathinner {.\,.}d)\), such that \(s[i\mathinner {.\,.}i+d)=t[j\mathinner {.\,.}j+d)\), and \(i+h,\, |s|+1+j+h \in C\).
Recall that we assumed \(d\ge 100\tau \). By the definition of \(D_S\), there exist \(h_1, h_2\in [0\mathinner {.\,.}d/2)\) such that \(i_{\star }+h_1,\, |s|+1+j_{\star }+h_2 \in D_S\), and \(|h_1-h_2| \le \tau -1\). These \(h_1,h_2\) form a pair of anchors that are slightly misaligned by a shift of at most \(\tau -1\). Then the plan is to obtain perfectly aligned anchors from \(h_1,h_2\), either by finding synchronizing positions in their \(O(\tau )\) neighborhood, or by exploiting periodicity. Without loss of generality, we assume \(h_1\le h_2\); the case of \(h_1>h_2\) can be proved analogously by switching the roles of s and t. Now we consider two cases:

Case 1 \(\mathsf {\textrm{per}}(s[i_{\star }+h_1+\tau \mathinner {.\,.}i_{\star }+h_1+ 4\tau -2])>\tau /3\). In this aperiodic case, by the density condition of the \(\tau \)-synchronizing set A, we know that \(A\cap [i_{\star }+h_1+\tau \mathinner {.\,.}i_{\star }+h_1+2\tau )\) is nonempty, and we let a be an arbitrary element of this set. Here a is a synchronizing position from s, and we let \(b = a-i_{\star }+j_{\star }\) be the corresponding position in t. Since \(s[i_\star \mathinner {.\,.}i_\star + d) = t[j_\star \mathinner {.\,.}j_\star +d)\) is a common substring, in particular we have \(s[a\mathinner {.\,.}a+2\tau ) = t[b\mathinner {.\,.}b+2\tau )\), or equivalently \(S[a\mathinner {.\,.}a+2\tau )=S[|s|+1+b\mathinner {.\,.}|s|+1+b+2\tau )\), so \(|s|+1+b \in A\) by the consistency condition of A. Hence, we have found a pair of perfectly aligned anchors, a and \(|s|+1+b\), for the common substring. It remains to check that they are indeed included in our anchor set C defined in Construction 3.28. Note that we have
$$\begin{aligned} b&= j_{\star }+ h_2 +(a-i_{\star }-h_1) -(h_2-h_1) \\ {}&\in \big [ j_{\star }+h_2+\tau - (h_2-h_1) \mathinner {.\,.}j_{\star }+h_2+ 2\tau -(h_2-h_1)\big ) \\ {}&\subseteq \big [j_{\star }+h_2+1 \mathinner {.\,.}j_{\star }+h_2+2\tau \big ), \end{aligned}$$so \(|s|+1+b\in A \cap [|s|+1+j_{\star }+h_2+1 \mathinner {.\,.}|s|+1+ j_{\star }+h_2+2\tau )\). From the sparsity property of A (Lemma 3.21), using Markov’s inequality and a union bound, we can show that \(|A \cap [i_{\star }+h_1\mathinner {.\,.}i_{\star }+h_1+2\tau )|\le 1000\) and \(|A \cap [|s|+1+j_{\star }+h_2\mathinner {.\,.}|s|+1+ j_{\star }+h_2+2\tau )|\le 1000\) hold simultaneously with probability at least \(1- 2\cdot 80/1000 >0.8\). In this case, in Step 1 of Construction 3.28 we must have \(a\in L_{i_{\star }+h_1}\) and \(|s|+1+b\in L_{|s|+1+j_{\star }+h_2}\). Then, setting \(i=i_{\star },j=j_{\star },h = a-i_{\star }\) satisfies the requirement.

Case 2 \(p=\mathsf {\textrm{per}}(s[i_{\star }+h_1+\tau \mathinner {.\,.}i_{\star }+h_1+ 4\tau -2])\le \tau /3\). In this highly periodic case, we cannot rely on synchronizing sets. Instead, we have the following intuition (which is based on [28]): if the common substring contains the boundary of the highly periodic region, then we can use these boundary positions (which are easy to locate) as our anchors (Cases 2(i), 2(ii)). Otherwise, the common substring is highly periodic, and it may repeatedly occur throughout the region that shares the same period. This allows us to ignore all but a constant number of these repeated occurrences of this common substring (Case 2(iii)). By the assumption on p we have \( \mathsf {\textrm{per}}(s[i_{\star }+h_1+\tau \mathinner {.\,.}i_{\star }+h_1+3\tau -2])=p\). From \(s[i_\star \mathinner {.\,.}i_\star +d)=t[j_\star \mathinner {.\,.}j_\star +d)\) and \(0\le h_2-h_1\le \tau -1\), we also have \(\mathsf {\textrm{per}}(t[j_{\star }+h_2+\tau \mathinner {.\,.}j_{\star }+h_2+3\tau -2])=p\). Hence, for both \(L_{i_{\star }+h_1}\) and \(L_{|s|+1+j_{\star }+h_2}\), Step 2 in Construction 3.28 is triggered. Then, we consider three subcases.

Case 2(i) \(\mathsf {\textrm{per}}(s[i_{\star } + h_1+\tau \mathinner {.\,.}i_{\star }+d))\ne p\). In this case, the period p of \(s[i_{\star }+h_1+\tau \mathinner {.\,.}i_{\star }+h_1+3\tau -2]\) does not extend to its superstring \(s[i_{\star } + h_1+\tau \mathinner {.\,.}i_{\star }+d)\), so the right boundary
$$\begin{aligned} r_s := \max \big \{r: \mathsf {\textrm{per}}(s[i_\star +h_1+\tau \mathinner {.\,.}r])=p\big \} \end{aligned}$$must satisfy \(i_\star +h_1+4\tau -2 \le r_s <i_\star +d-1\). Here we observe that \(r_s\) is the same as the right boundary r in Step 2 of Construction 3.28 for constructing \(L_{i_\star +h_1}\). Let \(r_t := r_s - i_\star + j_\star \). Then \(j_\star +h_2+\tau < r_t\), and \(t[j_\star +h_2+\tau \mathinner {.\,.}r_t+1] = s[i_\star + h_2+\tau \mathinner {.\,.}r_s+1]\). Then from the definition of \(r_s\), we can observe that
$$\begin{aligned}r_t = \max \big \{r: \mathsf {\textrm{per}}(t[ j_\star +h_2+\tau \mathinner {.\,.}r])=p\big \},\end{aligned}$$and \(|s|+1+r_t\) must be the same as the right boundary r in Step 2 of Construction 3.28 for constructing \(L_{|s|+1+j_\star +h_2}\). Let P denote the Lyndon root of \(s[i_\star +h_1+\tau \mathinner {.\,.}r_s]\), and let \(P=s[i^{(e)}\mathinner {.\,.}i^{(e)}+p)=t[j^{(e)}\mathinner {.\,.}j^{(e)}+p)\) be the last occurrences of P in \(s[i_\star +h_1+\tau \mathinner {.\,.}r_s]\) and \(t[j_\star +h_2+\tau \mathinner {.\,.}r_t]\), respectively. We must have \(r_s-i^{(e)}=r_t-j^{(e)}\). Note that \(i^{(e)}\in L_{i_\star +h_1}\) and \(|s|+1+j^{(e)} \in L_{|s|+1+j_\star +h_2}\). So setting \(i=i_\star ,j=j_\star , h = i^{(e)}-i_\star \) satisfies the requirement.

Case 2(ii) \(\mathsf {\textrm{per}}(s[i_{\star } + h_1+\tau \mathinner {.\,.}i_{\star }+d))= p\), but \(\mathsf {\textrm{per}}(s[i_\star \mathinner {.\,.}i_\star +d))\ne p\). In this case, the period p fully extends to the right but not to the left. Using a similar argument as in Case 2(i), we can show that the left boundaries
$$\begin{aligned} \ell _s&:= \min \{\ell : \mathsf {\textrm{per}}(s[\ell \mathinner {.\,.}i_\star +h_1+3\tau -2])=p\},\\ \ell _t&:= \min \{\ell : \mathsf {\textrm{per}}(t[\ell \mathinner {.\,.}j_\star +h_2+3\tau -2])=p\} \end{aligned}$$must satisfy \(\ell _s-i_\star = \ell _t-j_\star \ge 1\), and \(\ell _s,|s|+1+\ell _t\) are the left boundaries in Step 2 of Construction 3.28 for constructing \(L_{i_\star +h_1},L_{|s|+1+j_\star +h_2}\) respectively. Then, letting \(s[i^{(b)}\mathinner {.\,.}i^{(b)}+p) = t[j^{(b)}\mathinner {.\,.}j^{(b)}+p)\) be the first occurrences of the Lyndon root in \(s[\ell _s\mathinner {.\,.}i_\star +h_1+3\tau -2]\) and \(t[\ell _t \mathinner {.\,.}j_\star +h_2+3\tau -2]\), we can similarly see that setting \(i=i_\star ,j=j_\star , h=i^{(b)}-i_\star \) satisfies the requirement.

Case 2(iii) \(\mathsf {\textrm{per}}(s[i_\star \mathinner {.\,.}i_\star +d))= p\). Let
$$\begin{aligned} \ell _s&:= \min \big \{\ell : \ell \ge \max \{1,i_\star +h_1-d\} \wedge \mathsf {\textrm{per}}(s[\ell \mathinner {.\,.}i_\star +h_1+3\tau -2])=p\big \},\\ \ell _t&:= \min \big \{\ell : \ell \ge \max \{1,j_\star +h_2-d\} \wedge \mathsf {\textrm{per}}(t[\ell \mathinner {.\,.}j_\star +h_2+3\tau -2])=p\big \}. \end{aligned}$$Then \(\ell _s,|s|+1+\ell _t\) are the left boundaries in Step 2 of Construction 3.28 for constructing the sets \(L_{i_\star +h_1},L_{|s|+1+j_\star +h_2}\) respectively. We must have \(\ell _s<i_\star \) and \(\ell _t<j_\star \). Let P be the Lyndon root of \(s[i_\star \mathinner {.\,.}i_\star +d)\), and assume the first occurrence of P in \(s[i_\star \mathinner {.\,.}i_\star +d)\) is \(s[i'\mathinner {.\,.}i'+p)\) (correspondingly, the first occurrence of P in \(t[j_\star \mathinner {.\,.}j_\star +d)\) is \(t[j'\mathinner {.\,.}j'+p)\), where \(j'-j_\star = i'-i_\star \)). We also let \(s[i^{(b)}\mathinner {.\,.}i^{(b)}+p)=t[j^{(b)}\mathinner {.\,.}j^{(b)}+p)\) be the first occurrences of P in \(s[\ell _s\mathinner {.\,.}i_\star +d)\) and \(t[\ell _t\mathinner {.\,.}j_\star +d)\). Then the second occurrence of P in \(s[\ell _s\mathinner {.\,.}i_\star + d)\) is \(s[i^{(b)}+p\mathinner {.\,.}i^{(b)}+2p)\). Observe that, if we find the first occurrence of the common substring \(s[i_\star \mathinner {.\,.}i_\star + d)\) inside the entire periodic region \(s[\ell _s\mathinner {.\,.}i_\star + d)\), this occurrence should align \(s[i'\mathinner {.\,.}i'+p)\) with either the first occurrence of P or the second occurrence of P in this region. Formally, let \(i_{\star }' =i^{(b)} -(i'-i_\star )\). Then, we have either \(s[i_\star \mathinner {.\,.}i_\star +d) = s[i_\star '\mathinner {.\,.}i_\star '+d)\) or \(s[i_\star \mathinner {.\,.}i_\star +d) =s[i_\star '+p\mathinner {.\,.}i_\star '+p+d)\). Similarly, letting \(j_{\star }' =j^{(b)} -(j'-j_\star )\), we have \(t[j_\star \mathinner {.\,.}j_\star +d) = t[j_\star '\mathinner {.\,.}j_\star '+d)\) or \(t[j_\star \mathinner {.\,.}j_\star +d) =t[j_\star '+p\mathinner {.\,.}j_\star '+p+d)\).
Note that \(i^{(b)}, i^{(b)}+p \in L_{i_\star +h_1}\), and \(|s|+1+j^{(b)}, |s|+1+j^{(b)}+p \in L_{|s|+1+j_\star +h_2}\). Hence, setting \(i=i'_\star \) (or \(i=i'_\star +p\)), \(j=j'_\star \) (or \(j=j'_\star +p\)), \(h= i'-i_\star \) satisfies the requirement.

Hence, the desired i, j and h always exist. \(\square \)
Finally, Lemma 3.6 immediately follows from Construction 3.28, Lemma 3.29, and Lemma 3.30, by setting \(\tau = \Theta (\sqrt{d})\).
4 Minimal String Rotation
4.1 Minimal Length-\(\ell \) Substrings
Rather than work with the Minimal String Rotation problem directly, we present an algorithm for the following problem, which is more amenable to our divide-and-conquer approach.
The elements in the output are guaranteed to be an arithmetic progression thanks to Lemma 2.3.
We will prove the following theorem.
Theorem 4.1
Minimal Length-\(\ell \) Substrings can be solved by a quantum algorithm with \(n^{1/2+o(1)}\) query complexity and time complexity.
For convenience, we also introduce the following problem.
We now use a series of simple folklore reductions to show that the Minimal Length-\(\ell \) Substrings problem generalizes the Minimal String Rotation problem.
Proposition 4.2
The Minimal String Rotation problem reduces to the Maximal String Rotation problem.
Proof
Take an instance of the Minimal String Rotation problem, consisting of a string s over an alphabet \(\Sigma \), which recall we identify with the set \([1\mathinner {.\,.}|\Sigma |]\). Consider the map \(\varphi :\Sigma \rightarrow \Sigma \) defined by taking
$$\begin{aligned} \varphi (c) = |\Sigma | + 1 - c \end{aligned}$$for each character \(c\in \Sigma \). Let
$$\begin{aligned} t = \varphi (s[1])\,\varphi (s[2])\cdots \varphi (s[n]) \end{aligned}$$be the result of applying this map to each character of s.
By construction, for any \(c,c'\in \Sigma \) we have \(\varphi (c) \prec \varphi (c')\) if and only if \(c' \prec c\). Combining this observation together with the definition of lexicographic order, we deduce that for any indices \(j,k\in [1\mathinner {.\,.}n]\) we have
$$\begin{aligned} t[j\mathinner {.\,.}n]t[1\mathinner {.\,.}j-1] \prec t[k\mathinner {.\,.}n]t[1\mathinner {.\,.}k-1] \end{aligned}$$if and only if
$$\begin{aligned} s[k\mathinner {.\,.}n]s[1\mathinner {.\,.}k-1] \prec s[j\mathinner {.\,.}n]s[1\mathinner {.\,.}j-1]. \end{aligned}$$
Thus the solution to the Maximal String Rotation problem on t recovers the solution to the Minimal String Rotation problem on s, which proves the desired result. \(\square \)
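A brute-force Python sketch of this reduction over the integer alphabet \([1\mathinner {.\,.}|\Sigma |]\) (our own illustration, not the paper's quantum algorithm; assumes the minimal rotation is unique, as in the proofs here the tie-breaking is handled separately):

```python
def complement_map(s, sigma):
    # phi(c) = |Sigma| + 1 - c reverses the order of the characters
    return [sigma + 1 - c for c in s]

def max_rotation_start(s):
    # brute force: 1-based start of the lexicographically largest rotation
    n = len(s)
    return max((s[i:] + s[:i], i + 1) for i in range(n))[1]

def min_rotation_start(s, sigma):
    # the minimal rotation of s starts where the maximal rotation of phi(s) does
    return max_rotation_start(complement_map(s, sigma))
```

For instance, on \(s = (2,1,3,1)\) over \(\Sigma = [1\mathinner {.\,.}3]\) the minimal rotation starts at index 4.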
Proposition 4.3
The Maximal String Rotation problem reduces to the Maximal Suffix problem.
Proof
Take an instance of the Maximal String Rotation problem, consisting of a string s of length n.
Let \(t = ss\) be the string of length 2n formed by concatenating s with itself. Suppose i is the starting index of the maximal rotation of s. Then we claim that i is the starting index of the maximal suffix of t as well.
Indeed, take any position \(j\in [1\mathinner {.\,.}2n]\) in string t with \(j\ne i\).
If \(j > n\), then we can write \(j = n + \Delta \) for some positive integer \(\Delta \le n\). In this case we have
$$\begin{aligned} t[j\mathinner {.\,.}2n] \prec t[\Delta \mathinner {.\,.}2n] \end{aligned}$$because the string on the left hand side is a proper prefix of the string on the right hand side. Thus j cannot be the starting position of a maximal suffix for t.
Otherwise, \(j\le n\). Note that we can write
$$\begin{aligned} t[i\mathinner {.\,.}2n] = s[i\mathinner {.\,.}n]\,s[1\mathinner {.\,.}n] \quad \text {and}\quad t[j\mathinner {.\,.}2n] = s[j\mathinner {.\,.}n]\,s[1\mathinner {.\,.}n]. \end{aligned}$$(2)
Since i is a solution to the Maximal String Rotation problem, we know that either
$$\begin{aligned} s[j\mathinner {.\,.}n]s[1\mathinner {.\,.}j-1] \prec s[i\mathinner {.\,.}n]s[1\mathinner {.\,.}i-1], \end{aligned}$$or \( s[j\mathinner {.\,.}n]s[1\mathinner {.\,.}j-1] = s[i\mathinner {.\,.}n]s[1\mathinner {.\,.}i-1]\) and \(i<j\).
In the first case, the decompositions from Eq. (2) immediately imply that
$$\begin{aligned} t[j\mathinner {.\,.}2n] \prec t[i\mathinner {.\,.}2n] \end{aligned}$$by considering the length n prefixes of the two strings. In the second case, since \( s[j\mathinner {.\,.}n]s[1\mathinner {.\,.}j-1] = s[i\mathinner {.\,.}n]s[1\mathinner {.\,.}i-1]\) and \(i<j\), the decompositions from Eq. (2) imply that
$$\begin{aligned} t[j\mathinner {.\,.}2n] \prec t[i\mathinner {.\,.}2n] \end{aligned}$$because the string on the left hand side is a proper prefix of the string on the right hand side. Combining these results, we see that the solution to the Maximal Suffix problem on t is the index i which solves the Maximal String Rotation problem on s. \(\square \)
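The \(t = ss\) reduction can be sanity-checked with a brute-force sketch (our own illustration; function names are ours):

```python
def max_suffix_start(t):
    # brute force: 1-based start of the lexicographically largest suffix
    return max(range(1, len(t) + 1), key=lambda i: t[i - 1:])

def max_rotation_via_suffix(s):
    # the maximal suffix of t = ss starts at the starting index of the
    # maximal rotation of s (the smallest such index if the rotation repeats)
    return max_suffix_start(s + s)
```

For example, the maximal rotation of `"abcab"` is `"cabab"`, starting at index 3, and the maximal suffix of `"abcababcab"` indeed starts there.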
Proposition 4.4
The Maximal Suffix problem reduces to the Minimal Suffix problem.
Proof
Take an instance of the Maximal Suffix problem, consisting of a string s of length n over an alphabet \(\Sigma = [1\mathinner {.\,.}|\Sigma |]\). Let \(\sigma = |\Sigma |+1\) denote a character lexicographically after all the characters in \(\Sigma \). As in the proof of Proposition 4.2, consider the map \(\varphi :\Sigma \rightarrow \Sigma \) defined by taking
$$\begin{aligned} \varphi (c) = |\Sigma | + 1 - c \end{aligned}$$for each character \(c\in \Sigma \). Now, build the string
$$\begin{aligned} t = \varphi (s[1])\,\varphi (s[2])\cdots \varphi (s[n])\,\sigma \end{aligned}$$formed by applying \(\varphi \) to each character of s and then appending \(\sigma \) to the end.
Suppose that \(s[i\mathinner {.\,.}n]\) is the maximal suffix of s. We claim that \(t[i\mathinner {.\,.}n+1]\) is the minimal suffix of t. Thus solving the Minimal Suffix problem on t recovers a solution to the Maximal Suffix problem on s.
To see this, take any index \(1\le j\le n\) with \(j\ne i\). By assumption
$$\begin{aligned} s[j\mathinner {.\,.}n] \prec s[i\mathinner {.\,.}n]. \end{aligned}$$(3)
This can happen in one of two ways.
First, it could be that \(j > i\) and the string on the left hand side above is a proper prefix of the string on the right hand side. In this case we must have
$$\begin{aligned} t[i\mathinner {.\,.}n+1] \prec t[j\mathinner {.\,.}n+1] \end{aligned}$$because the string on the left hand side agrees with the string on the right hand side for the first \(n-j+1\) positions, but then at the \((n-j+2)^{\text {th}}\) position, the string on the right hand side has the character \(\sigma \), which is larger than the corresponding character \(\varphi (s[i+n-j+1])\) from the string on the left hand side.
Otherwise, Eq. (3) holds because there exists some nonnegative integer \(\Delta \) such that \(s[j+\Delta ] < s[i+\Delta ]\) and \(s[j+d] = s[i+d]\) for all nonnegative \(d<\Delta \). By definition, \(\varphi (c)\prec \varphi (c')\) if and only if \(c' \prec c\) for all \(c,c'\in \Sigma \). Thus in this case too we have
$$\begin{aligned} t[i\mathinner {.\,.}n+1] \prec t[j\mathinner {.\,.}n+1] \end{aligned}$$because the strings agree for the first \(\Delta \) characters, but then at the \((\Delta +1)^{\text {st}}\) position, the string on the right hand side has the character \(\varphi (s[j+\Delta ])\), which is larger than the corresponding character \(\varphi (s[i+\Delta ])\) from the string on the left hand side by our observation on \(\varphi \). Finally, note that the suffix \(t[n+1] = \sigma \) is larger than every other suffix of t by construction, and is thus not a minimal suffix of t. Thus the minimal suffix of t corresponds to the maximal suffix of s, and the reduction is correct. \(\square \)
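Again a brute-force sketch (our own illustration) of the complement-plus-sentinel reduction over an integer alphabet:

```python
def min_suffix_start(t):
    # brute force: 1-based start of the lexicographically smallest suffix
    return min(range(1, len(t) + 1), key=lambda i: t[i - 1:])

def max_suffix_via_min(s, sigma):
    # apply phi(c) = |Sigma| + 1 - c and append the large sentinel sigma + 1
    t = [sigma + 1 - c for c in s] + [sigma + 1]
    return min_suffix_start(t)
```

For instance, the maximal suffix of \((1,3,2,3)\) over \(\Sigma = [1\mathinner {.\,.}3]\) starts at index 2.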
Proposition 4.5
The Minimal Suffix problem reduces to the Minimal Length-\(\ell \) Substrings problem.
Proof
Take an instance of the Minimal Suffix problem, consisting of a string s of length n. Consider the string of length \(2n-1\) of the form
$$\begin{aligned} t = s\,\underbrace{0\cdots 0}_{n-1} \end{aligned}$$formed by appending \(n-1\) copies of a character 0, smaller than every character from the alphabet \(\Sigma \) of s, to the end of s.
Let i be the starting index of the minimal suffix of s. We claim that i is also the unique index returned by solving the Minimal Length-n Substrings problem on t (note that n is at least half the length of t).
Indeed, take any index \(j\in [1\mathinner {.\,.}n]\) with \(j\ne i\). By assumption we have
$$\begin{aligned} s[i\mathinner {.\,.}n] \prec s[j\mathinner {.\,.}n]. \end{aligned}$$Because the string on the left hand side occurs strictly before the string on the right hand side in lexicographic order, appending any number of 0s to the ends of the strings above cannot change their relative order. Thus
$$\begin{aligned} t[i\mathinner {.\,.}i+n) \prec t[j\mathinner {.\,.}j+n) \end{aligned}$$
as well. Because this holds for all \(j\ne i\), we get that i is the unique position output by solving the Minimal Length-n Substrings problem on t. This proves the reduction is correct. \(\square \)
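A brute-force sketch (our own illustration) of the padding reduction, using `chr(0)` as the small sentinel character:

```python
def minimal_length_l_substrings(t, l):
    # 1-based starting positions of the lexicographically minimal
    # length-l substrings of t
    subs = [t[i:i + l] for i in range(len(t) - l + 1)]
    best = min(subs)
    return [i + 1 for i, w in enumerate(subs) if w == best]

def min_suffix_via_substrings(s):
    # pad with n-1 copies of chr(0), smaller than any character of s,
    # then solve Minimal Length-n Substrings on the padded string
    n = len(s)
    t = s + chr(0) * (n - 1)
    return minimal_length_l_substrings(t, n)[0]
```

For example, the minimal suffix of `"banana"` is the final `"a"`, at position 6.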
By chaining the above reductions together, we obtain the following corollary of Theorem 4.1.
Theorem 4.6
Minimal String Rotation, Maximal Suffix, and Minimal Suffix can be solved by a quantum algorithm with \(n^{1/2+o(1)}\) query complexity and time complexity.
Proof
By combining the results of Propositions 4.2, 4.3, 4.4, and 4.5, we see that all the problems mentioned in the theorem statement reduce to the Minimal Length-\(\ell \) Substrings problem. Each of the reductions only involves simple substitutions and insertions to the input strings.
In particular, by inspecting the proofs of the propositions, we can verify that for an input string s and its image t under any of these reductions, any query to a character of t can be simulated with O(1) queries to the characters of s. Thus, we can get an \(n^{1/2 + o(1)}\)-query and -time quantum algorithm for each of the listed problems by using the algorithm of Theorem 4.1 and simulating the aforementioned reductions appropriately in the query model. \(\square \)
Remark 4.7
We remark that, from the \(\Omega (\sqrt{n})\) quantum query lower bound for Minimal String Rotation [14], this chain of reductions also implies that Maximal Suffix and Minimal Suffix require \( \Omega (\sqrt{n})\) quantum query complexity.
It remains to prove Theorem 4.1. To solve the Minimal Length-\(\ell \) Substrings problem, it suffices to find one individual solution, i.e., a single starting position of the minimal length-\(\ell \) substring, and then use the quantum Exact String Matching algorithm to find all the elements (represented as an arithmetic progression) in \({\tilde{O}}(\sqrt{n})\) time. Our approach will invoke the following "exclusion rule," which simplifies the previous approach used in [14]. We remark that similar kinds of exclusion rules have been applied previously in parallel algorithms for Exact String Matching [10] and Minimal String Rotation [17] (under the name of "Ricochet Property" or "duel"), as well as in the quantum algorithm by Wang and Ying [14, Lemma 5.1]. The advantage of our exclusion rule is that it naturally yields a recursive approach for solving the Minimal Length-\(\ell \) Substrings problem.
Lemma 4.8
(Exclusion Rule) In the Minimal Length-\(\ell \) Substrings problem with input \(s[1\mathinner {.\,.}n]\) with \(n/2\le \ell \le n\), let
$$\begin{aligned} I = \big \{i : s[i\mathinner {.\,.}i+\ell ) \preceq s[j\mathinner {.\,.}j+\ell ) \text { for all } j \in [1\mathinner {.\,.}n-\ell +1]\big \} \end{aligned}$$denote the set of answers forming an arithmetic progression. For integers \(a\ge 1,k\ge 1\) such that \(a+k\le n-\ell +1\), let J denote the set of answers in the Minimal Length-k Substrings problem on the input string \(s[a\mathinner {.\,.}a+2k)\) (represented as positions in s). Then if \(\{\min J,\max J\} \cap I = \emptyset \), we must have \(J\cap I = \emptyset \).
Proof
First observe that
$$\begin{aligned} a+2k-1 = (a+k)+k-1 \le (n-\ell +1)+(n-\ell )-1 = 2(n-\ell ) \le n, \end{aligned}$$using \(k \le n-\ell \) (which follows from \(a\ge 1\) and \(a+k\le n-\ell +1\)) and \(\ell \ge n/2\), so \(s[a\mathinner {.\,.}a+2k)\) is a length-2k substring of s. Since the statement is trivial for \(|J|\le 2\), we assume J consists of \(j_1<j_2<\dots <j_m\) where \(m\ge 3\). Let \(p = j_2-j_1\). Then \(p = (j_m-j_1)/(m-1) \le k/2\). Then from
$$\begin{aligned} s[j_1\mathinner {.\,.}j_1+k) = s[j_2\mathinner {.\,.}j_2+k) = \cdots = s[j_m\mathinner {.\,.}j_m+k) \end{aligned}$$we know that p must be a period of \(s[j_1\mathinner {.\,.}j_m+k)\).^{Footnote 8} We consider the first position r where this period p stops, that is, \(r:= \min \{j_m+k \le r \le n : s[r]\ne s[r-p]\}\). If such r does not exist, let \(r=n+1\). See Fig. 2. With this setup, we now proceed to prove the contrapositive of the original claim.
Suppose \(j_q \in I\) for some \(1\le q\le m\). We consider three cases.

Case 1 \(r\ge j_m+\ell \). In this case, we must have \(s[j_1\mathinner {.\,.}j_1+\ell ) = s[j_2\mathinner {.\,.}j_2+\ell ) = \cdots = s[j_m\mathinner {.\,.}j_m+\ell )\). Then, \(j_q\in I\) implies \(j_1\in I\).

Case 2 \(r< j_m+\ell \), and \(s[r]<s[r-p]\). For every \(1\le t\le m-1\), by the definition of r, we must have \(s[j_{t+1}\mathinner {.\,.}r) = s[j_t\mathinner {.\,.}r-p)\). Then from \(s[r]<s[r-p]\) we have \(s[j_t\mathinner {.\,.}j_t+\ell )\succeq s[j_{t+1}\mathinner {.\,.}j_{t+1}+\ell )\). Hence, \(j_q \in I\) implies \(j_{q+1},j_{q+2},\dots ,j_m \in I\).

Case 3 \(r<j_m+\ell \), and \(s[r]>s[r-p]\). By an argument similar to Case 2, we can show \(s[j_t\mathinner {.\,.}j_t+\ell )\preceq s[j_{t+1}\mathinner {.\,.}j_{t+1}+\ell )\). Then, \(j_q \in I\) implies \(j_{q-1},j_{q-2},\dots ,j_1 \in I\).
Thus \(\{j_1,j_m\} \cap I \ne \emptyset \) in all of the cases, which proves the desired result. \(\square \)
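The lemma can also be stress-tested by brute force; the following Python harness (our own illustration; function names are ours) checks the exclusion rule on random binary strings:

```python
import random

def answers(s, l):
    # 1-based starts of the lexicographically minimal length-l substrings of s
    subs = [s[i:i + l] for i in range(len(s) - l + 1)]
    best = min(subs)
    return [i + 1 for i, w in enumerate(subs) if w == best]

def exclusion_rule_holds(s, l, a, k):
    # Lemma 4.8: with J shifted to positions in s, if neither min(J) nor
    # max(J) is a global answer, then no element of J is
    I = set(answers(s, l))
    J = [a + j - 1 for j in answers(s[a - 1:a - 1 + 2 * k], k)]
    if min(J) not in I and max(J) not in I:
        return not any(j in I for j in J)
    return True

# random stress test over small binary strings
random.seed(0)
for _ in range(300):
    n = random.randint(4, 16)
    s = ''.join(random.choice('ab') for _ in range(n))
    l = random.randint((n + 1) // 2, n - 1)  # n/2 <= l < n, leaving room for k
    k = random.randint(1, n - l)             # k >= 1 and a + k <= n - l + 1
    a = random.randint(1, n - l + 1 - k)
    assert exclusion_rule_holds(s, l, a, k)
```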
4.2 Divide and Conquer Algorithm
To motivate our quantum algorithm, we first describe a classical algorithm for the Minimal Length-n/2 Substrings problem which runs in \(O(n\log n)\) time (note that other classical algorithms can solve the problem faster, in O(n) time). Our quantum algorithm will use the same setup, but obtain a speedup via Grover search. For the purpose of this overview, we assume n is a power of 2. The classical algorithm works as follows:
Suppose we are given an input string s of length n and target substring size \(\ell = n/2\). Set \(m = \ell /2 = n/4\). Then the first half of the solution (i.e. the first m characters of a minimal length-\(\ell \) substring) is contained entirely in either the block \(s_1 = s[1\mathinner {.\,.}n/2]\) or the block \(s_2 = s(n/4\mathinner {.\,.}3n/4]\).
With that in mind, we recursively solve the problem on the strings \(s_1\) and \(s_2\) with target size m in both cases. Let \(u_1\) and \(v_1\) be the smallest and largest starting positions returned by the recursive call to \(s_1\) respectively. Define \(u_2\) and \(v_2\) as the analogous positions returned by the recursive call to \(s_2\). Then by Lemma 4.8, the true starting position of the minimal \(\ell \)length substring of s is in \(\{ u_1, u_2, v_1, v_2 \}\).
We identify the length-\(\ell \) substrings starting at each of these positions, and find their lexicographic minimum in O(n) time via linear-time string comparison. This lets us find at least one occurrence of the minimal substring of length \(\ell \). Then, to find all occurrences of this minimal substring, we use a linear-time string matching algorithm (such as the classic Knuth–Morris–Pratt algorithm [1]) to find the first two occurrences of the minimal length-\(\ell \) substring in s. The difference between their starting positions then lets us determine the common difference of the arithmetic progression encoding all starting positions of the minimal substring.
If we let T(n) denote the runtime of this algorithm, the recursion above yields a recurrence
$$\begin{aligned} T(n) = 2\,T(n/2) + O(n), \end{aligned}$$which solves to \(T(n) = O(n\log n)\).
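The classical divide-and-conquer algorithm above can be sketched as follows (our own illustration; a linear scan replaces the KMP step, and the input length is assumed to be exactly \(2\ell \) with \(\ell \) a power of two):

```python
def minimal_substring_positions(s, l):
    # All 0-based starts of the lexicographically minimal length-l substring
    # of s, via the divide-and-conquer recursion; assumes len(s) == 2*l and
    # l a power of two.
    n = len(s)
    if l <= 2:  # base case: brute force
        subs = [s[i:i + l] for i in range(n - l + 1)]
        best = min(subs)
        return [i for i, w in enumerate(subs) if w == best]
    m = l // 2
    cands = set()
    for a in (0, m):  # the two overlapping half-blocks s[a:a+2m]
        J = minimal_substring_positions(s[a:a + 2 * m], m)
        cands.update({a + J[0], a + J[-1]})  # extremes suffice by Lemma 4.8
    best = min(s[i:i + l] for i in cands)
    # all occurrences form an arithmetic progression; a linear scan stands
    # in for the KMP step of the text
    return [i for i in range(n - l + 1) if s[i:i + l] == best]
```

For instance, on `"abababab"` with \(\ell = 4\) the minimal substring `"abab"` occurs at the 0-based positions 0, 2, 4, an arithmetic progression with common difference 2.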
4.3 Quantum Speedup
Next, we show how to improve the runtime of this divideandconquer approach in the quantum setting. The key change is to break the string into b blocks, and apply quantum minimum finding over these blocks which only takes \({\tilde{O}}(\sqrt{b})\) recursive calls, instead of b recursive calls needed by the classical algorithm. We will set b large enough to get a quantum speedup.
Proof
(of Theorem 4.1) Let b be some parameter to be set later. For convenience assume that b divides both \(\ell \) and n (this assumption does not affect the validity of our arguments, and is only used to let us avoid working with floor and ceiling functions). Set \(m = \ell /b\).
For each nonnegative integer \(k\le \lfloor n/m\rfloor -2\) we define the substring
$$\begin{aligned} s_k = s(km\mathinner {.\,.}km+2m]. \end{aligned}$$Also set \(s_{\lfloor n/m\rfloor -1} = s(n-2m\mathinner {.\,.}n]\).
These \(s_k\) blocks each have length 2m, and together cover every length-m substring of s. Let P be the minimal length-\(\ell \) substring of s. By construction, the first \(m = \ell /b\) characters of P are contained entirely in one of the \(s_k\) blocks.
For each block \(s_k\), let \(P_k\) denote its minimal length-m substring and let \(u_k\) and \(v_k\) be the smallest and largest starting positions, respectively, of an occurrence of \(P_k\) in \(s_k\). The lexicographically smallest of the prefixes \(P_k\) makes up the first m characters of the minimal length-\(\ell \) substring. Thus by Lemma 4.8, we know the minimal length-\(\ell \) substring of s must start at position \(u_k\) or \(v_k\) for some index k.
We now use quantum minimum finding to find P. We search over the \(\Theta (n/m) = \Theta (b)\) blocks above. To compare blocks \(s_i\) and \(s_j\), we recursively solve the Minimal Length-m Substrings problem on \(s_i\) and \(s_j\) to find positions \(u_i, v_i\) and \(u_j, v_j\). Then we look at the substrings of length \(\ell \) starting at these four positions. By binary search and Grover search (Lemma 2.5), in \(\tilde{O}(\sqrt{n})\) time we can determine which of these four substrings is lexicographically the smallest. If the smallest of these substrings came from \(s_i\), we say block \(s_i\) is smaller than block \(s_j\), and vice versa.
After running the minimum finding algorithm, we will have found P. To return all occurrences of P, we can then use the quantum algorithm for Exact String Matching to find the two leftmost occurrences and the rightmost occurrence of P in s in \(\tilde{O}(\sqrt{n})\) time. Together they determine the positions of all copies of P in s as an arithmetic sequence, which we can return to solve the original problem.
It remains to check the runtime of the algorithm. Let T(n) denote the runtime of the algorithm with error probability at most 1/n. Recall that our algorithm solves Minimum Finding over \(\Theta (b)\) blocks, where each comparison involves a recursive call on strings of size \(2m = \Theta (n/b)\) and a constant number of string comparisons of length n (via Lemma 2.5), and finally solves Exact String Matching for strings of size \(\Theta (n)\). Hence we have the recurrence (assuming all logarithms are base 2)
$$\begin{aligned} T(n) \le c\sqrt{b}\,(\log n)^{e}\left( T(2n/b) + \sqrt{n}\right) \end{aligned}$$for some constants \(c, e > 0\), where the polylogarithmic factors are inherited from the subroutines we use and the possibility of repeating our steps \(O(\log n)\) times to drive down the error probability. Now set
$$\begin{aligned} b = 2^{d(\log n)^{2/3}} \end{aligned}$$
for some constant d. We claim that for sufficiently large d, we recover a runtime of \(T(n) = n^{1/2} \cdot 2^{d(\log n)^{2/3}}\).
We prove this by induction. The result holds when n is a small constant by taking d large enough. Now, suppose we want to prove the result for some arbitrary n, and that the claimed runtime bound holds on inputs of size less than n. Then using the recurrence above and the inductive hypothesis we have
where the last inequality follows from \(d(\log (n/b))^{2/3} \ge d(\log (\sqrt{n}))^{2/3} > \frac{1}{2} d(\log n)^{2/3} = \log (\sqrt{b})\) for large enough n. Equivalently, this means that
Using the mean value theorem, we can bound
where the last inequality follows from \(\log \log b = \log d + (2/3)\log \log n\). Thus, by taking d to be a large enough constant in terms of c and e, we can force the right hand side of Equation (4) to be less than 1, which proves that
This completes the induction, and proves that we can solve the Minimal Length-\(\ell \) Substrings problem in the desired runtime as claimed. \(\square \)
4.4 Longest Lyndon Substring
The technique we use to solve the Minimal String Rotation problem can also be adapted to get a quantum speedup for solving the Longest Lyndon Substring problem.
Theorem 4.9
The Longest Lyndon Substring problem can be solved by a quantum algorithm with \(n^{1/2+o(1)}\) query complexity and time complexity.
A difficulty in solving Longest Lyndon Substring compared to other string problems such as LCS and Longest Palindromic Substring is that the lengths of Lyndon Substrings do not have the monotone property, and hence we cannot use binary search (the Longest Square Substring problem in Sect. 5 also has the same issue). To overcome this issue, we first present a simple reduction.
Theorem 4.10
For any constant \(0<\varepsilon <1\), suppose there is a T(d)-time quantum algorithm (where \(T(d) \ge \Omega (\sqrt{d})\)) for solving the Longest Lyndon Substring problem on a string s of length \(|s| = (1+2\varepsilon )d\) with the promise that the longest Lyndon substring of s has length in the interval \([d, (1+\varepsilon )\cdot d)\). And suppose there is a T(d)-time quantum algorithm for checking whether an O(d)-length string is a Lyndon word.
Then, there is an \({\tilde{O}}(T(n))\)-time algorithm for solving the Longest Lyndon Substring problem on length-n strings in the general case.
Proof
Let s be the input string of length n. For each nonnegative integer \(i \le \lceil (\log n)/(\log (1+\varepsilon ))\rceil  1\), we look for a longest Lyndon substring of s whose length is in the interval \(\left[ (1+\varepsilon )^i, (1+\varepsilon )^{i+1}\right) \), and return the largest length (after certifying that it is indeed a Lyndon substring) found. This only blows up the total time complexity by an \(O(\log n)\) factor.
For each i, let \(d = (1+\varepsilon )^i\). We define the positions \(j_k:= 1+ k\cdot \varepsilon d/2\) for all \(0\le k< 2n/(\varepsilon d)\), and consider the substrings
$$\begin{aligned} s\big [j_k\mathinner {.\,.}\min \{n,\, j_k+(1+2\varepsilon )d-1\}\big ]. \end{aligned}$$
Note that, if the longest Lyndon substring of s has length in the interval \([d,(1+\varepsilon )d)\), then it must be entirely covered by some of these substrings. For each of these substrings, its longest Lyndon substring can be computed in T(d) time by the assumption. Then, we use the quantum maximum finding algorithm (see Sect. 2.4) to find the longest among these \(2n/(\varepsilon d)\) answers, in \({\tilde{O}}(\sqrt{2n/(\varepsilon d)} \cdot T(d)) = \tilde{O}(\sqrt{n}\cdot T(d)/\sqrt{d}) \le {\tilde{O}}(T(n))\) overall time, where we used the assumption \(T(d) \ge \Omega (\sqrt{d})\). \(\square \)
Now, we are going to describe a \(d^{1/2+o(1)}\)-time quantum algorithm for solving the Longest Lyndon Substring problem on a string s of length \(|s| = (1+2\varepsilon )d\), with the promise that the longest Lyndon substring of s has length in the interval \([d, (1+\varepsilon )\cdot d)\). Combined with the reduction above, this proves Theorem 4.9, since a string is Lyndon if and only if its minimal suffix is itself (see Sect. 2.1), which can be checked by our Minimal Suffix algorithm. We will set \(\varepsilon = 0.1\).
We will make use of the following celebrated fact related to Lyndon substrings.
Definition 4.11
(Lyndon Factorization [8, 96]) Any string s can be written as a concatenation
$$\begin{aligned} s = s_1 s_2 \cdots s_k, \end{aligned}$$where each string \(s_i\) is a Lyndon word, and \(s_1 \succeq s_2 \succeq \dots \succeq s_k\). This decomposition is unique, and is called the Lyndon factorization of s. The \(s_i\) are called the Lyndon factors of s.
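The Lyndon factorization can be computed classically in linear time by Duval's algorithm; the following sketch (our own illustration) is convenient for experimenting with the statements below:

```python
def lyndon_factorization(s):
    # Duval's classical O(n) algorithm: returns the unique factorization
    # s = s_1 s_2 ... s_k with each s_i Lyndon and s_1 >= s_2 >= ... >= s_k
    factors = []
    i, n = 0, len(s)
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        while i <= k:  # emit copies of the current Lyndon factor
            factors.append(s[i:i + j - k])
            i += j - k
    return factors
```

For example, `"banana"` factors as `"b" >= "an" >= "an" >= "a"`.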
The following fact characterizes the longest Lyndon substring in a given string.
Proposition 4.12
(e.g., [71, Lemma 3]) The longest Lyndon substring of a string s is necessarily a longest Lyndon factor of s.
Then, given the promise about the input string s of length \((1+2\varepsilon )d\), we know s has Lyndon factorization \(s_1\cdots s^* \cdots s_k\), where \(|s^*| \in [d, (1+\varepsilon )d)\). The remaining task is to identify the position and length of the Lyndon factor \(s^*\).
Lemma 4.13
(The position of \(s^*\)) Suppose \(s[i\mathinner {.\,.}i+|s^*|) = s^*\). Then, \(s[i \mathinner {.\,.}|s|]\) must be the minimal suffix among all \(s[i'\mathinner {.\,.}|s|]\) with \(i'\in [1\mathinner {.\,.}\varepsilon d + 1]\).
Proof
Note that \(i\in [1\mathinner {.\,.}\varepsilon d+1]\) due to \(|s| = (1+2\varepsilon ) d \) and \(|s^*| \in [d,(1+\varepsilon )d)\). For any other starting position \(j \in [1\mathinner {.\,.}\varepsilon d+1]\), we will prove that \(s[j\mathinner {.\,.}|s|] \succ s[i\mathinner {.\,.}|s|]\). We consider two cases.
Case 1 \(j>i\). In this case we must have \(j \in (i\mathinner {.\,.}i+|s^*|)\) due to the length constraints. Then, we have \(s[j\mathinner {.\,.}i+|s^*|) \succ s[i\mathinner {.\,.}i+|s^*|)\) due to the fact that \(s^*\) is a Lyndon word, which immediately implies \(s[j\mathinner {.\,.}|s|] \succ s[i\mathinner {.\,.}|s|]\), since \(\left| s[i\mathinner {.\,.}i+|s^*|)\right| > \left| s[j\mathinner {.\,.}i+|s^*|)\right| \).
Case 2 \(j<i\). Then, suppose a Lyndon factor \(s_t\) prior to \(s^*\) occurs at \(s_t = s[j'\mathinner {.\,.}j'']\) with \(j'\le j\le j''\). Then, we have \(s[j\mathinner {.\,.}j''] \succeq s[j'\mathinner {.\,.}j''] = s_t \succeq s^*\) by the property of Lyndon factorization. Then, from the length constraint \(\left| s[j\mathinner {.\,.}j'']\right| < |s^*|\), we necessarily have \(s[j\mathinner {.\,.}|s|]\succ s[i\mathinner {.\,.}|s|] \). \(\square \)
Then we can find the starting position of \(s^*\) by looking for the minimal suffix of s whose starting position is in \([1\mathinner {.\,.}\varepsilon d+1 ]\). We observe that this task can be reduced to the Minimum Length-\(\ell \) Substrings problem using the same reduction as in Proposition 4.5 by appropriately adjusting the lengths, and we omit the proof here. Hence, we can find the starting position of \(s^*\) in \(d^{1/2+o(1)}\) time.
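As a sanity check of Lemma 4.13, a classical stand-in for this step simply minimizes the suffix over the first \(\varepsilon d+1\) starting positions; the quantum algorithm performs this minimization in \(d^{1/2+o(1)}\) time. The sketch below (0-indexed, function name ours) is illustrative only.

```python
def start_of_long_factor(s, d, eps=0.25):
    """Classical analogue of the quantum step: among 0-indexed starting
    positions 0..floor(eps*d), the suffix s[i:] is lexicographically minimal
    exactly at the start of the long Lyndon factor s* (Lemma 4.13)."""
    r = int(eps * d) + 1
    return min(range(r), key=lambda i: s[i:])
```

For instance, `"caacba"` has Lyndon factorization `c, aacb, a`; with `d = 4` the search correctly returns index 1, the start of the long factor `aacb`.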
We can now without loss of generality assume that \(s^*\) appears as the first Lyndon factor of the input string \(s= s^*s_2s_3\cdots s_m\) of length \(|s|\le (1+2\varepsilon )d\). It remains to find the ending position of \(s^*\). We need the following definition.
Definition 4.14
We say a string s is pre-Lyndon if there is a Lyndon word t such that s is a prefix of t.
We have the following characterization of pre-Lyndon strings.
Proposition 4.15
(e.g., [71, Lemma 10]) For any pre-Lyndon string w, there exists a unique Lyndon word x such that \(w=x^kx'\), where \(k\ge 1\) and \(x' = x[1\mathinner {.\,.}i]\) for some \(i \in [0\mathinner {.\,.}|x|-1]\). Here \(x^k\) denotes concatenating x for k times.
Note that we can check whether a string w is pre-Lyndon in \(|w|^{1/2+o(1)}\) time.
Lemma 4.16
Given any string w of length d, we can check whether it is pre-Lyndon in \(d^{1/2+o(1)}\) quantum time. Moreover, if w is pre-Lyndon, we can also find its decomposition described in Proposition 4.15 in \(d^{1/2+o(1)}\) quantum time.
Proof
We assume w is indeed a pre-Lyndon string, with decomposition \(w=x^kx'\) as described in Proposition 4.15.
We first observe that the minimal rotation of \(w=x^kx'\) must equal \(x'x^k\). This observation follows easily from the fact that the Lyndon word x is strictly smaller than all other rotations of x. In the case of \(|x'|\ge 1\), we can compute the shift of the minimal rotation of w (which must be unique), and obtain the length \(|x'|\). We can also detect the case of \(|x'|=0\) by finding that w itself equals its minimal rotation.
After finding \(x'\), we are left with the part \(w'=x^k\), and we know that \(x=\mathsf {\textrm{per}}(w')\). We can then compute \(\mathsf {\textrm{per}}(w')\) by finding the second earliest occurrence of \(w'\) in the string \(w'w'\), using the quantum Exact String Matching algorithm [9]. (Alternatively, we can use the periodicity algorithm of Wang and Ying [14].)
Finally, after obtaining x and \(x'\), we certify that w is indeed a pre-Lyndon string by checking that x is a Lyndon word, \(x'\) is a prefix of x, and \(w=x^k x'\), in \({\tilde{O}}(\sqrt{|w|})\) time by Grover search. \(\square \)
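The three steps of this proof can be mirrored classically as follows. This is a sketch only: the minimal rotation, the period computation, and the final certification are each done here by naive linear (or quadratic) scans, whereas the lemma uses their quantum counterparts; the function name is ours.

```python
def prelyndon_decomposition(w):
    """Classical sketch of Lemma 4.16: if w is pre-Lyndon, return (x, k, xp)
    with w == x*k + xp as in Proposition 4.15; otherwise return None."""
    n = len(w)
    # Step 1: the minimal rotation of w = x^k x' equals x' x^k, so its shift
    # reveals |x'| (shift 0 means x' is empty).
    shift = min(range(n), key=lambda i: w[i:] + w[:i])
    lxp = (n - shift) % n
    wp, xp = w[:n - lxp], w[n - lxp:]
    # Step 2: x = per(w'), found via the second occurrence of w' in w'w'
    # (the quantum algorithm uses Exact String Matching here).
    p = next(t for t in range(1, len(wp) + 1) if (wp + wp).startswith(wp, t))
    x, k = wp[:p], len(wp) // p
    # Step 3: certify that x is Lyndon, x' is a proper prefix of x, w = x^k x'.
    if all(x < x[i:] for i in range(1, p)) and len(xp) < p \
            and x.startswith(xp) and x * k + xp == w:
        return x, k, xp
    return None
```

For example, `"ababa"` decomposes as \(x^2x'\) with \(x=\texttt{ab}\), \(x'=\texttt{a}\), while `"ba"` is rejected as not pre-Lyndon.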
Then, on the input string s with Lyndon factorization \(s= s^*s_2s_3\cdots s_m\), we apply Lemma 4.16 with binary search to find the maximum position \(i \in [|s|]\) such that \(s[1\mathinner {.\,.}i]\) is pre-Lyndon. We must have \(i\ge |s^*|\), by the definition of pre-Lyndon strings and the fact that \(s^*\) is Lyndon. We also obtain the decomposition \(s[1\mathinner {.\,.}i] = x^kx'\) described in Proposition 4.15, where x is a Lyndon word with proper prefix \(x'\). Note that the longest Lyndon prefix of \(x^k x'\) must be x, since any longer prefix of \(x^kx'\) can be written as \(x^{j}x''\), which has a proper suffix smaller than itself (namely, a proper prefix of it) and hence is not Lyndon. Then, since \(s^*\) is the longest Lyndon prefix of \(s[1\mathinner {.\,.}i]\), we know \(x=s^*\). Hence, we have completely determined \(s^*\).
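Classically, this final step collapses into a single Duval-style scan, which illustrates the claim that the period x of the longest pre-Lyndon prefix is exactly \(s^*\); the quantum algorithm instead combines binary search with Lemma 4.16. A sketch under these conventions (0-indexed, function name ours):

```python
def longest_prelyndon_and_factor(s):
    """Return (i, x): the length i of the longest pre-Lyndon prefix s[:i]
    (of the form x^k x') and its period x, the longest Lyndon prefix s*."""
    j, k = 1, 0
    while j < len(s) and s[k] <= s[j]:
        k = 0 if s[k] < s[j] else k + 1
        j += 1
    return j, s[:j - k]
```

For example, on `"aacba"` (factorization `aacb, a`) the scan reports the whole string as pre-Lyndon with \(x = \texttt{aacb} = s^*\).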
Remark 4.17
We can show that the Longest Lyndon Substring problem requires \(\Omega (\sqrt{n})\) quantum queries, by a simple reduction from the unstructured search problem. Suppose we are given a string \(s \in \{0,1\}^n\) and want to decide whether there exists \(i\in [n]\) such that \(s[i] = 1\). We create another string \(s':=s0^n1\) by appending n zeros and a one after s. Then, if \(s=0^n\), the longest Lyndon substring of \(s'\) will be \(s'\) itself. If there is at least one 1 in s, then \(s'\) cannot be a Lyndon word, since its suffix \(0^n1\) must be smaller than \(s'\). Hence, by [21], the Longest Lyndon Substring problem requires query complexity \(\Omega (\sqrt{n})\).
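The reduction can be made concrete in a few lines; `is_lyndon` below is a naive quadratic check used only for illustration.

```python
def is_lyndon(w):
    """Naive check: w is Lyndon iff it is strictly smaller than all its
    proper suffixes (quadratic time, for illustration only)."""
    return all(w < w[i:] for i in range(1, len(w)))

def reduction(s):
    """Lower-bound reduction: s' = s 0^n 1 is Lyndon (equivalently, the
    longest Lyndon substring of s' is s' itself) iff s is all zeros."""
    return s + "0" * len(s) + "1"
```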
5 Longest Square Substring
Recall that in the Longest Square Substring problem, we are given a string s of length n and tasked with finding the largest positive integer \(\Delta \) such that there exists some index \(1\le i \le n-2\Delta + 1\) with \(s[i\mathinner {.\,.}i+\Delta ) = s[i+\Delta \mathinner {.\,.}i+2\Delta )\). In other words, we want to find the maximum size \(\ell = 2\Delta \) such that s contains a \(\Delta \)-periodic substring of length \(\ell \). We call \(\Delta \) the shift and \(\ell \) the length of the longest square substring. We refer to the substrings \(s[i\mathinner {.\,.}i+\Delta )\) and \(s[i+\Delta \mathinner {.\,.}i+2\Delta )\) as the first half and second half of the solution respectively.
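As a reference point for these definitions, a naive classical solution simply tries all even lengths from largest to smallest. This cubic-time sketch (0-indexed) is purely for illustration; the quantum algorithm below achieves \(\tilde{O}(\sqrt{n})\).

```python
def longest_square(s):
    """Naive reference: return (2*Delta, i) for the longest square with
    s[i:i+Delta] == s[i+Delta:i+2*Delta], or (0, -1) if s has no square."""
    n = len(s)
    for L in range(n - n % 2, 0, -2):        # largest even length first
        h = L // 2
        for i in range(n - L + 1):
            if s[i:i + h] == s[i + h:i + L]:
                return L, i
    return 0, -1
```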
In this section, we present a quantum algorithm which solves this problem on strings of length n in \(\tilde{O}(\sqrt{n})\) time. We follow this up with a brief argument indicating why this algorithm is optimal up to polylogarithmic factors in the query complexity.
Theorem 5.1
The Longest Square Substring problem can be solved by a quantum algorithm with \({\tilde{O}}(\sqrt{n})\) query complexity and time complexity.
Proof
Let s be the input string of length n.
Set \(\varepsilon = 1/10\). For each nonnegative integer \(i \le \lceil (\log n)/(\log (1+\varepsilon ))\rceil - 1\), we look for a longest square substring of s whose length is in the interval \(\left[ (1+\varepsilon )^i, (1+\varepsilon )^{i+1}\right) \). We begin with i equal to its upper bound, and then keep decrementing i until we find a square substring in the relevant interval. The first time we find such a string we return it and halt. If we never find such a string we report that s has no square substring. We try out \(O(\log n)\) values of i, so it suffices to solve each of these subproblems in \(\tilde{O}(\sqrt{n})\) time (this is very similar to the argument used in Theorem 4.10).
If s has no square substring, our algorithm will never find a solution and will correctly detect that there is none. Hence in the remainder of this proof, suppose that s contains a square substring, and let \(\ell = 2\Delta \) be the length of the longest square substring in s. Let i be the unique positive integer with \(\ell \in \left[ (1+\varepsilon )^i, (1+\varepsilon )^{i+1}\right) \). We will eventually reach this value of i since we cannot have found any square substrings of larger size. Write \(d = (1+\varepsilon )^i\) and, for convenience, assume that d is an integer multiple of 10 (this will not affect the correctness of the arguments below).
We start by sampling a uniform random position g in the string. We say g is good if there exists a square substring A of size \(\ell \) in s with the property that g is among the first d/10 positions in A. Note that g is good with probability at least \(\Omega (d/n)\).
Suppose g is a good position. Now, consider the substring \(P = s(g\mathinner {.\,.}g+2d/5]\) of length 2d/5 starting immediately after g. Since g is good, the end of P is at most the \((d/10 + 2d/5) = d/2\)-th character in A. Since A has length \(\ell \ge d\), its first half has length \(\Delta \ge d/2\). Hence P is contained completely in the first half of A.
Now define \(S = s(g+2d/5 \mathinner {.\,.}g+(1+\varepsilon )d]\) to be an O(d)-length substring of s starting immediately after P. Since g is good and A has length at most \((1+\varepsilon )d\), we know that the second half of A is contained completely in S. In particular, S must contain at least one copy of P.
By solving the Exact String Matching problem with S and P, we can find the leftmost and rightmost occurrences of P in S in \(\tilde{O}(\sqrt{d})\) time. If these copies occur at the same position, S actually contains only a single copy of P, and otherwise S contains multiple copies of P. We consider these cases separately.
Case 1: Unique Copy Suppose S has only one copy of P. Let this copy begin after position h, so that \(s(h \mathinner {.\,.}h+2d/5]=P\). Thus we get that \(\Delta =h-g\), since the shift of the longest square substring equals the distance between copies of P in the first and second half of A. Now, find the largest nonnegative integer \(j\le 2d/5\) such that \(s(g-j\mathinner {.\,.}g] = s(h-j\mathinner {.\,.}h]\). All we are doing in this step is extending the copies of P backwards while keeping them identical (pictured in the second image of Fig. 3 as light blue rectangles). This takes \(\tilde{O}(\sqrt{d})\) time via Grover search.
Next, in a similar fashion, we find the largest positive integer \(k\le \Delta - j\) such that \(s(g\mathinner {.\,.}g+k] = s(h\mathinner {.\,.}h+k]\). In this step we are maximally extending the copies of P forwards while making sure they do not overlap with our previous extension.
Now, let \(j'\) be the positive integer such that \(A = s(g-j'\mathinner {.\,.}g-j'+\ell ]\) is the optimal solution whose first half and second half each contain the copies of P we are considering. Then because A is a square substring with shift \(\Delta = h-g\), we have \(s(g-j'\mathinner {.\,.}g] = s(h-j'\mathinner {.\,.}h]\), which implies that \(j\ge j'\) by construction. But since A is square we also have \(s(g\mathinner {.\,.}g + \Delta - j'] = s(h\mathinner {.\,.}h+\Delta - j']\). Then the definition of k together with the observation that \(j\ge j'\) forces \(k = \Delta - j\) (this is pictured in the bottom two images of Fig. 3, where the left green substring and right blue substring are bordering each other). Combining these observations, we get that \(s(g-j\mathinner {.\,.}h+k]\) is a square substring of size \(2\Delta = \ell \). Thus returning this substring produces the desired solution.
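The logic of Case 1 can be summarized by the following classical sketch (0-indexed; the two Grover-search extensions are replaced by linear scans, and the function name and interface are ours).

```python
def case1_recover(s, g, h):
    """Case 1 sketch: a pattern P occurs at positions g and h only, so the
    candidate shift is delta = h - g. Extend the two occurrences backwards
    by j, then forwards by k <= delta - j; if k reaches delta - j, then
    s[g-j : h+k] is a square of length 2*delta."""
    delta = h - g
    j = 0
    while j < g and s[g - 1 - j] == s[h - 1 - j]:
        j += 1
    k = 0
    while k < delta - j and h + k < len(s) and s[g + k] == s[h + k]:
        k += 1
    return s[g - j:h + k] if k == delta - j else None
```

For example, with `s = "zabcabcz"` and occurrences of `bc` at positions 2 and 5, the extensions meet and recover the square `abcabc`.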
Case 2: Multiple Copies It remains to consider the case where S contains multiple copies of P. In this case, we use the quantum algorithm for Exact String Matching to find the rightmost and second rightmost copies of P in S in \(\tilde{O}(\sqrt{d})\) time. Suppose that these copies start after positions h and \(h'\) respectively, so that \(s(h\mathinner {.\,.}h+2d/5] = s(h'\mathinner {.\,.}h'+2d/5] = P\)
with \(h' < h\). Then since \(2|P| = 4d/5 > (3/5+\varepsilon )d = |S|\), we know by Lemma 2.3 that P has minimum period \(p = h-h'\) and every appearance of P in S starts some multiple of p away from h. Moreover, all the copies of P in S overlap each other and together form one larger p-periodic substring in s. Our next step will be to extend these periodic parts to maximal periodic substrings, which will help us locate a large square substring.
By Exact String Matching, we can find the leftmost copy \(s(l\mathinner {.\,.}l+2d/5]\) of P in S in \(\tilde{O}(\sqrt{d})\) time, where the integer \(l+1\) is the starting position of this copy. By our earlier discussion, we know that the string \(s(l\mathinner {.\,.}h+2d/5]\) is p-periodic.
We now extend the original pattern P, as well as the leftmost copy of P in S, backwards while maintaining the property of being p-periodic.
Formally, we find the largest nonnegative integer \(j_1\le 2d/5\) such that \(s(g-j_1\mathinner {.\,.}g+2d/5]\) is p-periodic, and the largest nonnegative integer \(j_2\le l+1-g-2d/5\) such that \(s(l-j_2\mathinner {.\,.}h+2d/5]\) is p-periodic. Because we upper bound \(j_1, j_2\le O(d)\), extending the strings in this way takes \(\tilde{O}(\sqrt{d})\) time via Grover search. We now split into two further subcases, depending on how far back the strings are extended.
Case 2a: Single Periodic Substring
Suppose we get \(j_2 = l+1-g-2d/5\). This means that we were able to extend the leftmost copy of P in S so far back that it overlapped with our original pattern P contained in the first half of A. It follows that the substring \(s(g\mathinner {.\,.}h+2d/5]\) is p-periodic. In particular, we deduce that the substring starting from the original pattern P in the first half of A to its \(\Delta \)-shifted copy in the second half of A is contained in this p-periodic part. Since A is a square substring, it follows that its prefix which ends at position \(g+2d/5\) of s is also p-periodic. Thus, position \(g-j_1\) of s occurs before the first character of A. This reasoning is depicted in the second image of Fig. 4.
We now extend this entire p-periodic substring forward. Find the largest nonnegative integer \(k\le (1+\varepsilon )d\) such that \(s(g-j_1\mathinner {.\,.}g+k]\) is p-periodic. Since \(j_1, k\le O(d)\), this takes \(\tilde{O}(\sqrt{d})\) time via Grover search. As pictured in the bottom image of Fig. 4, since A is a square string and the end of its first half is p-periodic, the end of its second half is p-periodic as well. Thus position \(g+k\) in s occurs after the final character of A.
We now have a p-periodic string \(s(g-j_1\mathinner {.\,.}g+k]\) which contains A, and thus has length at least \(\ell \). This means that A has period p as well. We claim the shift \(\Delta \) associated with A is an integer multiple of p.
Indeed, by definition, A is \(\Delta \)-periodic. Then because A has length \(2\Delta \ge \Delta + p\), by Lemma 2.2 we know that A is \(\gcd (p,\Delta )\)-periodic as well. Since P is a substring of A, P must have period \(\gcd (p,\Delta )\) too. But p is the minimum period of P. Hence \(p = \gcd (p,\Delta )\), so \(\Delta = pm\) for some positive integer m as claimed.
Thus A has length \(\ell = 2m\cdot p\). Our p-periodic string \(s(g-j_1\mathinner {.\,.}g+k]\) is guaranteed to be at least this long. Thus, we can simply return the substring \(s(g-j_1\mathinner {.\,.}g-j_1+2mp]\). Because this substring is p-periodic and has length an even multiple of p, it is a square substring. Because it has the same length as A, it is a longest square substring as desired.
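Case 2a amounts to maximally extending a p-periodic run and cutting off a prefix of length \(2mp\). A classical sketch of both steps (0-indexed, with linear scans in place of Grover search; function names ours):

```python
def extend_periodic(s, i, j, p):
    """Extend the p-periodic substring s[i:j] maximally in both directions
    (the quantum algorithm does this via Grover search in ~O(sqrt(d)))."""
    while i > 0 and s[i - 1] == s[i - 1 + p]:
        i -= 1
    while j < len(s) and s[j] == s[j - p]:
        j += 1
    return i, j

def case2a_answer(s, i, j, p, ell):
    """Case 2a sketch: the maximal p-periodic run around s[i:j] contains the
    optimal square of length ell = 2*m*p, so its length-ell prefix is itself
    a square (with shift m*p) and can be returned directly."""
    lo, hi = extend_periodic(s, i, j, p)
    return s[lo:lo + ell] if hi - lo >= ell else None
```

For example, in `"xabababay"` the 2-periodic run `abababa` is recovered, and its length-4 prefix `abab` is a longest square.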
Case 2b: Disjoint Periodic Substrings
If we do not fall into Case 2a, then we must have \(j_2 < l+1-g-2d/5\), so that the p-periodic substrings \(s(g-j_1\mathinner {.\,.}g+2d/5]\) and \(s(l-j_2\mathinner {.\,.}h+2d/5]\) do not overlap. In this case, we construct two candidate solutions, and afterwards prove that one of them is guaranteed to be a longest square substring.
Define \(\Delta ' = (l-j_2) - (g-j_1)\). Via Grover search in \(\tilde{O}(\sqrt{d})\) time, we find the largest nonnegative integers \(b,b'\le \Delta '\) such that \(s(g-j_1-b\mathinner {.\,.}g-j_1+1] = s(l-j_2-b\mathinner {.\,.}l-j_2+1]\) and \(s(g-j_1\mathinner {.\,.}g-j_1+b'] = s(l-j_2\mathinner {.\,.}l-j_2+b']\). We then set the string \(B = s(g-j_1-b\mathinner {.\,.}l-j_2+b']\) to be our first candidate solution. Intuitively, this candidate corresponds to a guess that \(\Delta = \Delta '\).
To construct the second candidate, we use a similar procedure, but first extend the strings forward. Using Grover search, we find the largest positive integers \(k_1\le l-j_2-g\) and \(k_2\le (1+\varepsilon )d\) such that \(s(g\mathinner {.\,.}g+k_1]\) and \(s(l\mathinner {.\,.}l+k_2]\) are each p-periodic, in \(\tilde{O}(\sqrt{d})\) time. Set \(\Delta '' = (l+k_2) - (g+k_1)\). Then, as before, we find the largest nonnegative integers \(c,c'\le \Delta ''\) such that \(s(g+k_1-c\mathinner {.\,.}g+k_1] = s(l+k_2-c\mathinner {.\,.}l+k_2]\) and \(s(g+k_1\mathinner {.\,.}g+k_1+c'] = s(l+k_2\mathinner {.\,.}l+k_2+c']\). The string \(C = s(g+k_1-c\mathinner {.\,.}l+k_2+c']\) is then our second candidate. Intuitively, this corresponds to a guess that \(\Delta = \Delta ''\).
We can check whether B and C are square in \(\tilde{O}(\sqrt{d})\) time by Grover search. If neither of them is square, we report that we found no square substring. Otherwise, we return the larger square substring among the two. It remains to prove that this procedure is correct. There are two cases to consider, based on how large \(j_1\) is relative to the position of A.
First, suppose that position \(g-j_1+1\) in s is a character in the first half of A. Then, as depicted in Fig. 5, since A is square, \(l-j_2+1\) must also be in the second half of A, and in fact be exactly \(\Delta \) characters to the right of \(g-j_1+1\) (because if this position were earlier, it would mean we could have picked \(j_1\) larger and still had a p-periodic string). Thus \(\Delta = (l-j_2 + 1) - (g-j_1 + 1) = \Delta '\) is forced. Then when we construct the string B by searching backwards and forwards from positions \(g-j_1+1\) and \(l-j_2+1\), we will in fact find a square string of length \(|A|\), and B will be our desired longest square substring.
Otherwise, position \(g-j_1+1\) in s is placed before every character of A. Then, as depicted in Fig. 6, since A is square, position \(l-j_2\) must be in the first half of A. Consequently, when we extend P forward to position \(g+k_1\), this position is also in the first half of A (otherwise the p-periodic parts would overlap, and we would have been in Case 2a instead). As in the previous case, using the fact that A is a square again, we get that position \(l+k_2\) must be exactly \(\Delta \) characters to the right of \(g+k_1\). So \(\Delta = (l+k_2) - (g + k_1) = \Delta ''\) is forced. Then when we construct the string C by searching backwards and forwards from positions \(g+k_1\) and \(l+k_2\), we find a square string of length \(|A|\), so C will be our desired longest square substring.
This handles all of the cases. So far, we have described an algorithm that, for any integer i, will find the longest square substring of s with size in \([d,(1+\varepsilon )d)\) with probability at least \(\Omega (d/n)\) (recall this is the probability that g is good), in time \(\tilde{O}(\sqrt{d})\). By amplitude amplification and trying out the \(O(\log n)\) choices of i in decreasing order, we recover an algorithm for the Longest Square Substring problem which runs in \(\tilde{O}\left( \sqrt{n/d}\cdot \sqrt{d}\right) = \tilde{O}(\sqrt{n})\) time, as desired. \(\square \)
We show that our algorithm is optimal by giving a quantum query lower bound of \(\Omega (\sqrt{n})\) for finding the longest square substring. This proof is essentially already present in [12], where the authors give a lower bound for finding the longest palindromic substring, but we sketch the argument here for completeness.
Proposition 5.2
Any quantum algorithm that computes the longest square substring of a string of length n requires \(\Omega (\sqrt{n})\) queries.
Proof
Let S be the set of strings of length 2n over the alphabet \(\{ 0,1 \}\) which contain at most one occurrence of the character 1. In [21], the authors prove that deciding whether a given string \(s\in S\) is the string consisting of all 0s requires \(\Omega (\sqrt{n})\) queries in the quantum setting.
The longest square substring of the all-0s string of length 2n is just the entire string, and has length 2n. However, every other string in S contains exactly one 1; since a square substring contains each character an even number of times, it cannot contain this 1, and thus has size strictly less than 2n. So solving the Longest Square Substring problem lets us decide if a string from S is the all-0s string, which means that any quantum algorithm solving this problem requires \(\Omega (\sqrt{n})\) queries as well. \(\square \)
6 Open Problems
We conclude by mentioning several open questions related to our work.

Our \(\tilde{O}(n^{2/3})\)-time algorithm for LCS assumes that the input characters are integers in \([\textrm{poly}(n)]\). This assumption was used for constructing string synchronizing sets in sublinear time (Sect. 3.3.2). However, the previous \(\tilde{O}(n^{5/6})\)-time algorithm by Le Gall and Seddighin [12] can work with a general ordered alphabet, where the only allowed query is to compare two symbols S[i], S[j] of the input strings (with three possible outcomes \(S[i]>S[j],S[i]=S[j],\) or \(S[i]<S[j]\)). Is \(\tilde{O}(n^{2/3})\) query complexity (or even time complexity) achievable in this more restricted setting? Alternatively, can we show a better query lower bound?

Our algorithm for the Minimal String Rotation problem (and other related problems in Sect. 4) has time complexity (and query complexity) \(n^{1/2+o(1)}\). Can we reduce the \(n^{o(1)}\) factor down to \(\mathrm{poly\,log}(n)\)? A subsequent work by Childs, Kothari, Kovacs-Deak, Sundaram, and Wang [82] showed such an improvement for the decision version of Minimal String Rotation, but the question remains open for the search version.

In our time-efficient implementation of the LCS algorithm, we used a simple sampling technique to bypass certain restrictions on 2D range query data structures (Sect. 3.2.3). Can this idea have further applications in designing time-efficient quantum walk algorithms? As a simple example, we can use this idea to get an \(\tilde{O}(n^{2/3})\)-time comparison-based algorithm for the element distinctness problem with simpler implementation. At the beginning, uniformly sample r items \(x_1,\dots ,x_r\) from the input array, and sort them so that \(x_1\le \dots \le x_r\). Then, we create a hash table with \(r+1\) buckets each having \(O(\log n)\) capacity, where the hash function h(x) is defined as the index i such that \(x_i\le x < x_{i+1}\), which can be found by binary search. Then, each insertion, deletion, and search operation can be performed in \(O(\log n)\) time, provided that the buckets do not overflow. The error caused by overflows can be analyzed using Ambainis’ proof of [13, Lemma 6]. In comparison, Ambainis’ implementation [13] additionally used a skip list, and Jeffery’s (non-comparison-based) implementation used a quantum radix tree [52, Section 3.3.4].
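The bucket scheme described above can be sketched classically as follows. This is an illustration only: the actual algorithm maintains these buckets inside an MNRS quantum walk, and the function names are ours.

```python
import bisect
import random

def make_hash(items, r):
    """Sampling-based bucket scheme: sort r random pivots; h(x) is the index
    i with x_i <= x < x_{i+1}, computed by binary search, giving r+1 buckets
    that overflow only with small probability."""
    pivots = sorted(random.sample(items, r))
    return lambda x: bisect.bisect_right(pivots, x)

def element_distinctness_classical(items, r=4):
    """Classical illustration of the data structure: insert every item into
    its bucket and report a collision if a duplicate is ever seen (the
    quantum walk would maintain these buckets under insertions/deletions
    in O(log n) time per operation)."""
    h = make_hash(list(items), r)
    buckets = {}
    for x in items:
        bucket = buckets.setdefault(h(x), [])
        if x in bucket:
            return False   # duplicate found: items are not all distinct
        bucket.append(x)
    return True
```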
Notes
Throughout this paper, \({\tilde{O}}(\cdot )\) hides \(\mathrm{poly\,log} n\) factors where n denotes the input length, and \({\tilde{\Omega }}(\cdot ),\tilde{\Theta }(\cdot )\) are defined analogously. In particular, \({\tilde{O}}(1)\) means \(O(\mathrm{poly\,log} n)\).
We say an algorithm succeeds with high probability (w.h.p.) if the success probability can be made at least \(1-1/n^c\) for any desired constant \(c>1\).
Recall that we use the convention \(S[x\mathinner {.\,.}y) := S[\max \{1,x\}\mathinner {.\,.}\min \{y+d,n+1\})\) for a lengthn string S.
To better understand this fact, observe that they uniquely determine the compact tries of \(P(k_1),\dots ,P(k_r)\) and of \(Q(k_1),\dots ,Q(k_r)\), where the LCP of two strings equals the depth of the lowest common ancestor of the corresponding nodes in the compact trie.
One exception is Treap.
We remark that Ambainis only used a fixed hash function \(h(i)= \lfloor r\cdot i/m\rfloor \), which ensures the buckets do not overflow with high probability over a random r-subset \(K\subseteq [m]\) of keys. Ambainis showed that this property is already sufficient for the correctness of the quantum walk algorithm. Here we choose to state a different version that achieves high success probability for every fixed r-subset of keys, merely for keeping consistency with later presentation.
In the case where S has no highly periodic substrings, every \(\tau \)-length interval should contain at least one index from A.
In fact, p is the minimum period of this substring.
References
Knuth, D.E., Morris, J.H., Jr., Pratt, V.R.: Fast pattern matching in strings. SIAM J. Comput. 6(2), 323–350 (1977). https://doi.org/10.1137/0206024
Karp, R.M., Rabin, M.O.: Efficient randomized pattern-matching algorithms. IBM J. Res. Dev. 31(2), 249–260 (1987). https://doi.org/10.1147/rd.312.0249
Weiner, P.: Linear pattern matching algorithms. In: Proceedings of the 14th Annual Symposium on Switching and Automata Theory, pp. 1–11 (1973). https://doi.org/10.1109/SWAT.1973.13
Farach, M.: Optimal suffix tree construction with large alphabets. In: Proceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS 1997), pp. 137–143 (1997). https://doi.org/10.1109/SFCS.1997.646102
Babenko, M.A., Starikovskaya, T.: Computing longest common substrings via suffix arrays. In: Proceedings of the 3rd International Computer Science Symposium in Russia (CSR 2008), Theory and Applications, pp. 64–75 (2008). https://doi.org/10.1007/978-3-540-79709-8_10
Booth, K.S.: Lexicographically least circular substrings. Inf. Process. Lett. 10(4/5), 240–242 (1980). https://doi.org/10.1016/0020-0190(80)90149-0
Shiloach, Y.: Fast canonization of circular strings. J. Algorithms 2(2), 107–121 (1981). https://doi.org/10.1016/0196-6774(81)90013-4
Duval, J.P.: Factorizing words over an ordered alphabet. J. Algorithms 4(4), 363–381 (1983). https://doi.org/10.1016/0196-6774(83)90017-2
Ramesh, H., Vinay, V.: String matching in \(\tilde{O}(\sqrt{n}+\sqrt{m})\) quantum time. J. Discrete Algorithms 1(1), 103–110 (2003). https://doi.org/10.1016/S1570-8667(03)00010-8
Vishkin, U.: Deterministic sampling: a new technique for fast pattern matching. SIAM J. Comput. 20(1), 22–40 (1991). https://doi.org/10.1137/0220002
Grover, L.K.: A fast quantum mechanical algorithm for database search. In: Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (STOC 1996), pp. 212–219 (1996). https://doi.org/10.1145/237814.237866
Le Gall, F., Seddighin, S.: Quantum meets fine-grained complexity: sublinear time quantum algorithms for string problems. In: Proceedings of the 13th Innovations in Theoretical Computer Science Conference (ITCS 2022), pp. 97:1–97:23 (2022). https://doi.org/10.4230/LIPIcs.ITCS.2022.97
Ambainis, A.: Quantum walk algorithm for element distinctness. SIAM J. Comput. 37(1), 210–239 (2007). https://doi.org/10.1137/S0097539705447311
Wang, Q., Ying, M.: Quantum algorithm for lexicographically minimal string rotation. CoRR (2020). arXiv:2012.09376
Dürr, C., Høyer, P.: A quantum algorithm for finding the minimum. Preprint (1996). arXiv:quant-ph/9607014
Apostolico, A., Iliopoulos, C.S., Paige, R.: An O(n log n) cost parallel algorithm for the single function coarsest partition problem. In: Parallel Algorithms and Architectures, International Workshop, 1987, Proceedings, pp. 70–76 (1987). https://doi.org/10.1007/3-540-18099-0_30
Iliopoulos, C.S., Smyth, W.F.: Optimal algorithms for computing the canonical form of a circular string. Theor. Comput. Sci. 92(1), 87–105 (1992). https://doi.org/10.1016/0304-3975(92)90137-5
Aaronson, S., Shi, Y.: Quantum lower bounds for the collision and the element distinctness problems. J. ACM 51(4), 595–605 (2004). https://doi.org/10.1145/1008731.1008735
Kutin, S.: Quantum lower bound for the collision problem with small range. Theory Comput. 1(1), 29–36 (2005). https://doi.org/10.4086/toc.2005.v001a002
Ambainis, A.: Polynomial degree and lower bounds in quantum complexity: collision and element distinctness with small range. Theory Comput. 1(1), 37–46 (2005). https://doi.org/10.4086/toc.2005.v001a003
Bennett, C.H., Bernstein, E., Brassard, G., Vazirani, U.V.: Strengths and weaknesses of quantum computing. SIAM J. Comput. 26(5), 1510–1523 (1997). https://doi.org/10.1137/S0097539796300933
Starikovskaya, T., Vildhøj, H.W.: Time-space trade-offs for the longest common substring problem. In: Proceedings of the 24th Annual Symposium on Combinatorial Pattern Matching (CPM 2013), pp. 223–234 (2013). https://doi.org/10.1007/978-3-642-38905-4_22
Charalampopoulos, P., Crochemore, M., Iliopoulos, C.S., Kociumaka, T., Pissis, S.P., Radoszewski, J., Rytter, W., Waleń, T.: Linear-time algorithm for long LCF with k mismatches. In: Proceedings of the 29th Annual Symposium on Combinatorial Pattern Matching (CPM 2018), pp. 23:1–23:16 (2018). https://doi.org/10.4230/LIPIcs.CPM.2018.23
Amir, A., Charalampopoulos, P., Pissis, S.P., Radoszewski, J.: Longest common substring made fully dynamic. In: Proceedings of the 27th Annual European Symposium on Algorithms (ESA 2019), pp. 6:1–6:17 (2019). https://doi.org/10.4230/LIPIcs.ESA.2019.6
Amir, A., Charalampopoulos, P., Pissis, S.P., Radoszewski, J.: Dynamic and internal longest common substring. Algorithmica 82(12), 3707–3743 (2020). https://doi.org/10.1007/s00453-020-00744-0
Ben-Nun, S., Golan, S., Kociumaka, T., Kraus, M.: Time-space tradeoffs for finding a long common substring. In: Proceedings of the 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020), pp. 5:1–5:14 (2020). https://doi.org/10.4230/LIPIcs.CPM.2020.5
Charalampopoulos, P., Gawrychowski, P., Pokorski, K.: Dynamic longest common substring in polylogarithmic time. In: Proceedings of the 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020), pp. 27:1–27:19 (2020). https://doi.org/10.4230/LIPIcs.ICALP.2020.27
Charalampopoulos, P., Kociumaka, T., Pissis, S.P., Radoszewski, J.: Faster algorithms for longest common substring. In: Proceedings of the 29th Annual European Symposium on Algorithms (ESA 2021), pp. 30:1–30:17 (2021). https://doi.org/10.4230/LIPIcs.ESA.2021.30
Burkhardt, S., Kärkkäinen, J.: Fast lightweight suffix array construction and checking. In: Proceedings of the 14th Annual Symposium on Combinatorial Pattern Matching (CPM 2003), pp. 55–69 (2003). https://doi.org/10.1007/3-540-44888-8_5
Maekawa, M.: A \(\sqrt{N}\) algorithm for mutual exclusion in decentralized systems. ACM Trans. Comput. Syst. 3(2), 145–159 (1985). https://doi.org/10.1145/214438.214445
Birenzwige, O., Golan, S., Porat, E.: Locally consistent parsing for text indexing in small space. In: Proceedings of the 31st ACM-SIAM Symposium on Discrete Algorithms (SODA 2020), pp. 607–626 (2020). https://doi.org/10.1137/1.9781611975994.37
Kempa, D., Kociumaka, T.: String synchronizing sets: sublineartime BWT construction and optimal LCE data structure. In: Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC 2019), pp. 756–767. ACM (2019). https://doi.org/10.1145/3313276.3316368
Magniez, F., Nayak, A., Roland, J., Santha, M.: Search via quantum walk. SIAM J. Comput. 40(1), 142–164 (2011). https://doi.org/10.1137/090745854
Willard, D.E., Lueker, G.S.: Adding range restriction capability to dynamic data structures. J. ACM 32(3), 597–617 (1985). https://doi.org/10.1145/3828.3839
Mortensen, C.W.: Fully dynamic orthogonal range reporting on RAM. SIAM J. Comput. 35(6), 1494–1525 (2006). https://doi.org/10.1137/s0097539703436722
Chan, T.M., Tsakalidis, K.: Dynamic orthogonal range searching on the RAM, revisited. In: Proceedings of the 33rd International Symposium on Computational Geometry (SoCG 2017), vol. 77, pp. 28:1–28:13 (2017). https://doi.org/10.4230/LIPIcs.SoCG.2017.28
Masek, W.J., Paterson, M.: A faster algorithm computing string edit distances. J. Comput. Syst. Sci. 20(1), 18–31 (1980). https://doi.org/10.1016/0022-0000(80)90002-1
Backurs, A., Indyk, P.: Edit distance cannot be computed in strongly subquadratic time (unless SETH is false). SIAM J. Comput. 47(3), 1087–1097 (2018). https://doi.org/10.1137/15M1053128
Boroujeni, M., Ehsani, S., Ghodsi, M., Hajiaghayi, M.T., Seddighin, S.: Approximating edit distance in truly subquadratic time: quantum and MapReduce. J. ACM 68(3), 1–41 (2021). https://doi.org/10.1145/3456807
Chakraborty, D., Das, D., Goldenberg, E., Koucký, M., Saks, M.E.: Approximating edit distance within constant factor in truly subquadratic time. J. ACM 67(6), 36:1–36:22 (2020). https://doi.org/10.1145/3422823
Naumovitz, T., Saks, M.E., Seshadhri, C.: Accurate and nearly optimal sublinear approximations to Ulam distance. In: Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2017), pp. 2012–2031 (2017). https://doi.org/10.1137/1.9781611974782.131
Montanaro, A.: Quantum pattern matching fast on average. Algorithmica 77(1), 16–39 (2017). https://doi.org/10.1007/s00453-015-0060-4
Ambainis, A., Balodis, K., Iraids, J., Khadiev, K., Kļevickis, V., Prūsis, K., Shen, Y., Smotrovs, J., Vihrovs, J.: Quantum lower and upper bounds for 2D-grid and Dyck language. In: Proceedings of the 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020), pp. 8:1–8:14 (2020). https://doi.org/10.4230/LIPIcs.MFCS.2020.8
Ambainis, A., Montanaro, A.: Quantum algorithms for search with wildcards and combinatorial group testing. Quant. Inf. Comput. 14(5–6), 439–453 (2014). https://doi.org/10.26421/QIC14.5-6-4
Cleve, R., Iwama, K., Le Gall, F., Nishimura, H., Tani, S., Teruyama, J., Yamashita, S.: Reconstructing strings from substrings with quantum queries. In: Proceedings of the 13th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2012), pp. 388–397 (2012). https://doi.org/10.1007/978-3-642-31155-0_34
Szegedy, M.: Quantum speedup of Markov chain based algorithms. In: Proceedings of the 45th Symposium on Foundations of Computer Science (FOCS 2004), pp. 32–41 (2004). https://doi.org/10.1109/FOCS.2004.53
Magniez, F., Santha, M., Szegedy, M.: Quantum algorithms for the triangle problem. SIAM J. Comput. 37(2), 413–424 (2007). https://doi.org/10.1137/050643684
Jeffery, S., Kothari, R., Magniez, F.: Nested quantum walks with quantum data structures. In: Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2013), pp. 1474–1485 (2013). https://doi.org/10.1137/1.9781611973105.106
Le Gall, F.: Improved quantum algorithm for triangle finding via combinatorial arguments. In: Proceedings of the 55th IEEE Annual Symposium on Foundations of Computer Science (FOCS 2014), pp. 216–225 (2014). https://doi.org/10.1109/FOCS.2014.31
Belovs, A., Childs, A.M., Jeffery, S., Kothari, R., Magniez, F.: Time-efficient quantum walks for 3-distinctness. In: Proceedings of the 40th International Colloquium on Automata, Languages, and Programming (ICALP 2013), Part I, pp. 105–122 (2013). https://doi.org/10.1007/978-3-642-39206-1_10
Bernstein, D.J., Jeffery, S., Lange, T., Meurer, A.: Quantum algorithms for the subset-sum problem. In: Proceedings of the 5th International Workshop on Post-Quantum Cryptography (PQCrypto 2013), pp. 16–33 (2013). https://doi.org/10.1007/978-3-642-38616-9_2
Jeffery, S.: Frameworks for quantum algorithms. PhD thesis, University of Waterloo (2014). http://hdl.handle.net/10012/8710
Aaronson, S., Chia, N.H., Lin, H.H., Wang, C., Zhang, R.: On the quantum complexity of closest pair and related problems. In: Proceedings of the 35th Computational Complexity Conference (CCC 2020), pp. 16:1–16:43 (2020). https://doi.org/10.4230/LIPIcs.CCC.2020.16
Buhrman, H., Patro, S., Speelman, F.: A framework of quantum strong exponential-time hypotheses. In: Proceedings of the 38th International Symposium on Theoretical Aspects of Computer Science (STACS 2021), pp. 19:1–19:19 (2021). https://doi.org/10.4230/LIPIcs.STACS.2021.19
Buhrman, H., Loff, B., Patro, S., Speelman, F.: Limits of quantum speedups for computational geometry and other problems: fine-grained complexity via quantum walks. In: Proceedings of the 13th Innovations in Theoretical Computer Science Conference (ITCS 2022), pp. 31:1–31:12 (2022). https://doi.org/10.4230/LIPIcs.ITCS.2022.31
Ambainis, A., Larka, N.: Quantum algorithms for computational geometry problems. In: Proceedings of the 15th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2020), pp. 9:1–9:10 (2020). https://doi.org/10.4230/LIPIcs.TQC.2020.9
Gusfield, D.: Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press (1997). https://doi.org/10.1017/CBO9780511574931
Crochemore, M., Rytter, W.: Jewels of Stringology. World Scientific (2002). https://doi.org/10.1142/4838
Crochemore, M., Hancart, C., Lecroq, T.: Algorithms on Strings. Cambridge University Press (2007). https://doi.org/10.1017/CBO9780511546853
Kociumaka, T., Starikovskaya, T., Vildhøj, H.W.: Sublinear space algorithms for the longest common substring problem. In: Proceedings of the 22nd Annual European Symposium on Algorithms (ESA 2014), pp. 605–617 (2014). https://doi.org/10.1007/978-3-662-44777-2_50
Abboud, A., Williams, R.R., Yu, H.: More applications of the polynomial method to algorithm design. In: Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2015), pp. 218–230 (2015). https://doi.org/10.1137/1.9781611973730.17
Flouri, T., Giaquinta, E., Kobert, K., Ukkonen, E.: Longest common substrings with k mismatches. Inf. Process. Lett. 115(6–8), 643–647 (2015). https://doi.org/10.1016/j.ipl.2015.03.006
Thankachan, S.V., Apostolico, A., Aluru, S.: A provably efficient algorithm for the k-mismatch average common substring problem. J. Comput. Biol. 23(6), 472–482 (2016). https://doi.org/10.1089/cmb.2015.0235
Starikovskaya, T.: Longest common substring with approximately k mismatches. In: Proceedings of the 27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016), pp. 21:1–21:11 (2016). https://doi.org/10.4230/LIPIcs.CPM.2016.21
Kociumaka, T., Radoszewski, J., Starikovskaya, T.: Longest common substring with approximately k mismatches. Algorithmica 81(6), 2633–2652 (2019). https://doi.org/10.1007/s00453-019-00548-x
Gourdel, G., Kociumaka, T., Radoszewski, J., Starikovskaya, T.: Approximating longest common substring with k mismatches: Theory and practice. In: Proceedings of the 31st Annual Symposium on Combinatorial Pattern Matching (CPM 2020), pp. 16:1–16:15 (2020). https://doi.org/10.4230/LIPIcs.CPM.2020.16
Apostolico, A., Crochemore, M.: Optimal canonization of all substrings of a string. Inf. Comput. 95(1), 76–95 (1991). https://doi.org/10.1016/0890-5401(91)90016-U
Babenko, M.A., Kolesnichenko, I.I., Starikovskaya, T.: On minimal and maximal suffixes of a substring. In: Proceedings of the 24th Annual Symposium on Combinatorial Pattern Matching (CPM 2013), pp. 28–37, Springer (2013). https://doi.org/10.1007/978-3-642-38905-4_5
Babenko, M.A., Gawrychowski, P., Kociumaka, T., Kolesnichenko, I.I., Starikovskaya, T.: Computing minimal and maximal suffixes of a substring. Theor. Comput. Sci. 638, 112–121 (2016). https://doi.org/10.1016/j.tcs.2015.08.023
Kociumaka, T.: Minimal suffix and rotation of a substring in optimal time. In: Proceedings of the 27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016), pp. 28:1–28:12 (2016). https://doi.org/10.4230/LIPIcs.CPM.2016.28
Urabe, Y., Nakashima, Y., Inenaga, S., Bannai, H., Takeda, M.: Longest Lyndon substring after edit. In: Proceedings of the 29th Annual Symposium on Combinatorial Pattern Matching, (CPM 2018), pp. 19:1–19:10 (2018). https://doi.org/10.4230/LIPIcs.CPM.2018.19
Crochemore, M.: An optimal algorithm for computing the repetitions in a word. Inf. Process. Lett. 12(5), 244–250 (1981). https://doi.org/10.1016/0020-0190(81)90024-7
Main, M.G., Lorentz, R.J.: An O(n log n) algorithm for finding all repetitions in a string. J. Algorithms 5(3), 422–432 (1984). https://doi.org/10.1016/0196-6774(84)90021-X
Amir, A., Boneh, I., Charalampopoulos, P., Kondratovsky, E.: Repetition detection in a dynamic string. In: Proceedings of the 27th Annual European Symposium on Algorithms (ESA 2019), pp. 5:1–5:18 (2019). https://doi.org/10.4230/LIPIcs.ESA.2019.5
Bille, P., Gawrychowski, P., Gørtz, I.L., Landau, G.M., Weimann, O.: Longest common extensions in trees. In: Proceedings of the 26th Annual Symposium on Combinatorial Pattern Matching (CPM 2015), pp. 52–64 (2015). https://doi.org/10.1007/978-3-319-19929-0_5
Gawrychowski, P., Kociumaka, T., Rytter, W., Waleń, T.: Faster longest common extension queries in strings over general alphabets. In: Proceedings of the 27th Annual Symposium on Combinatorial Pattern Matching (CPM 2016), pp. 5:1–5:13 (2016). https://doi.org/10.4230/LIPIcs.CPM.2016.5
Alzamel, M., Crochemore, M., Iliopoulos, C.S., Kociumaka, T., Radoszewski, J., Rytter, W., Straszyński, J., Waleń, T., Zuba, W.: Quasi-linear-time algorithm for longest common circular factor. In: Proceedings of the 30th Annual Symposium on Combinatorial Pattern Matching (CPM 2019), pp. 25:1–25:14 (2019). https://doi.org/10.4230/LIPIcs.CPM.2019.25
Kempa, D., Kociumaka, T.: Breaking the O(n)-barrier in the construction of compressed suffix arrays. CoRR (2021). To appear in SODA 2023. arXiv:2106.12725
Kociumaka, T., Radoszewski, J., Rytter, W., Waleń, T.: Internal pattern matching queries in a text and applications. In: Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2015), pp. 532–551 (2015). https://doi.org/10.1137/1.9781611973730.36
Kociumaka, T.: Efficient data structures for internal queries in texts. PhD thesis, University of Warsaw (2018). https://depotuw.ceon.pl/handle/item/3614
Jin, C., Nogler, J.: Quantum speedups for string synchronizing sets, longest common substring, and k-mismatch matching. CoRR (2022). To appear in SODA 2023. arXiv:2211.15945
Childs, A.M., Kothari, R., Kovacs-Deak, M., Sundaram, A., Wang, D.: Quantum divide and conquer. CoRR (2022). arXiv:2210.06419
Kent, C., Lewenstein, M., Sheinwald, D.: On demand string sorting over unbounded alphabets. Theor. Comput. Sci. 426, 66–74 (2012). https://doi.org/10.1016/j.tcs.2011.12.001
Fine, N.J., Wilf, H.S.: Uniqueness theorems for periodic functions. Proc. Am. Math. Soc. 16(1), 109–114 (1965). https://doi.org/10.2307/2034009
Plandowski, W., Rytter, W.: Application of Lempel-Ziv encodings to the solution of word equations. In: Proceedings of the 25th International Colloquium on Automata, Languages and Programming (ICALP 1998), pp. 731–742 (1998). https://doi.org/10.1007/BFb0055097
Ambainis, A.: Quantum query algorithms and lower bounds. In: Classical and New Paradigms of Computation and their Complexity Hierarchies, pp. 15–32. Springer (2004). https://doi.org/10.1007/978-1-4020-2776-5_2
Buhrman, H., de Wolf, R.: Complexity measures and decision tree complexity: a survey. Theor. Comput. Sci. 288(1), 21–43 (2002). https://doi.org/10.1016/S0304-3975(01)00144-X
Barenco, A., Bennett, C.H., Cleve, R., DiVincenzo, D.P., Margolus, N., Shor, P., Sleator, T., Smolin, J.A., Weinfurter, H.: Elementary gates for quantum computation. Phys. Rev. A 52, 3457–3467 (1995). https://doi.org/10.1103/PhysRevA.52.3457
Brassard, G., Høyer, P., Mosca, M., Tapp, A.: Quantum amplitude amplification and estimation. Preprint (2000). arXiv:quant-ph/0005055
Høyer, P., Mosca, M., de Wolf, R.: Quantum search on bounded-error inputs. In: Proceedings of the 30th International Colloquium on Automata, Languages and Programming (ICALP 2003), pp. 291–299 (2003). https://doi.org/10.1007/3-540-45061-0_25
de Wolf, R.: Quantum computing: Lecture notes. CoRR (2019). arXiv:1907.09415v2
Blelloch, G.E., Golovin, D., Vassilevska, V.: Uniquely represented data structures for computational geometry. In: Proceedings of the 11th Scandinavian Workshop on Algorithm Theory (SWAT 2008), pp. 17–28 (2008). https://doi.org/10.1007/978-3-540-69903-3_4
Pugh, W.: Skip lists: a probabilistic alternative to balanced trees. Commun. ACM 33(6), 668–676 (1990). https://doi.org/10.1145/78973.78977
Pugh, W.: A skip list cookbook. Technical Report CS-TR-2286.1, University of Maryland at College Park, USA (1990). http://hdl.handle.net/1903/544
Indyk, P.: A small approximately minwise independent family of hash functions. J. Algorithms 38(1), 84–90 (2001). https://doi.org/10.1006/jagm.2000.1131
Chen, K.T., Fox, R.H., Lyndon, R.C.: Free differential calculus, IV: the quotient groups of the lower central series. Ann. Math. (1958). https://doi.org/10.2307/1970044
Acknowledgements
We thank Virginia Vassilevska Williams, Ryan Williams, and Yinzhan Xu for several helpful discussions. We additionally thank Virginia Vassilevska Williams for several useful comments on the writeup of this paper.
Funding
Open Access funding provided by the MIT Libraries.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Shyan Akmal supported by NSF CCF-1909429 and a Siebel Scholarship.
Ce Jin supported by an Akamai Presidential Fellowship and NSF CCF-2129139.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Akmal, S., Jin, C. Near-Optimal Quantum Algorithms for String Problems. Algorithmica (2023). https://doi.org/10.1007/s00453-022-01092-x
DOI: https://doi.org/10.1007/s00453-022-01092-x
Keywords
String processing
Quantum walks
Longest common substring
String synchronizing sets