Tight Conditional Lower Bounds for Longest Common Increasing Subsequence

We consider the canonical generalization of the well-studied Longest Increasing Subsequence problem to multiple sequences, called k-LCIS: Given k integer sequences X_1, …, X_k of length at most n, the task is to determine the length of the longest common subsequence of X_1, …, X_k that is also strictly increasing. Especially for the case of k = 2 (called LCIS for short), several algorithms have been proposed that require quadratic time in the worst case. Assuming the Strong Exponential Time Hypothesis (SETH), we prove a tight lower bound, specifically, that no algorithm solves LCIS in (strongly) subquadratic time. Interestingly, the proof makes no use of normalization tricks common to hardness proofs for similar problems such as Longest Common Subsequence.
We further strengthen this lower bound (1) to rule out O((nL)^{1−ε}) time algorithms for LCIS, where L denotes the solution size, (2) to rule out O(n^{k−ε}) time algorithms for k-LCIS, and (3) to follow already from weaker variants of SETH. We obtain the same conditional lower bounds for the related Longest Common Weakly Increasing Subsequence problem.


Introduction
The longest common subsequence problem (LCS) and its variants are computational primitives with a variety of applications, including, e.g., similarity measures for spelling correction [37,43] or DNA sequence comparison [39,5], as well as determining the differences of text files as in the UNIX diff utility [28]. LCS shares characteristics of both an easy and a hard problem: (Easy) A simple and elegant dynamic-programming algorithm computes an LCS of two length-n sequences in time O(n^2) [43], and in many practical settings, certain properties of typical input sequences can be exploited to obtain faster, "tailored" solutions (e.g., [27,29,7,38]; see also [14] for a survey). (Hard) At the same time, no polynomial improvements over the classical solution are known, so exact computation may become infeasible for very long general input sequences. The research community has sought a resolution of the question "Do subquadratic algorithms for LCS exist?" since shortly after the formalization of the problem [21,4].
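The classical quadratic-time dynamic program referenced above can be sketched as follows (a minimal textbook implementation for illustration, not code from the paper):

```python
def lcs(X, Y):
    """Length of a longest common subsequence of X and Y in O(|X|*|Y|) time.

    dp[i][j] = LCS length of the prefixes X[:i] and Y[:j].
    """
    n, m = len(X), len(Y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if X[i] == Y[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[n][m]
```

It is exactly this O(n^2) barrier that the SETH-based lower bounds discussed next explain.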
Recently, an answer conditional on the Strong Exponential Time Hypothesis (SETH; see Section 2 for a definition) could be obtained: Based on a line of research relating the satisfiability problem to quadratic-time problems [44,41,15,3] and following a breakthrough result for Edit Distance [9], it has been shown that unless SETH fails, there is no (strongly) subquadratic-time algorithm for LCS [1,16]. Subsequent work [2] strengthens these lower bounds to hold already under weaker assumptions and even provides surprising consequences of sufficiently strong polylogarithmic improvements.
Due to its popularity and wide range of applications, several variants of LCS have been proposed. This includes the heaviest common subsequence (HCS) [32], which introduces weights to the problem, as well as notions that constrain the structure of the solution, such as the longest common increasing subsequence (LCIS).

Our Results
Our first result is a tight SETH-based lower bound for LCIS.

Theorem 3. Unless SETH fails, there is no O(n^{2−ε}) time algorithm for LCIS for any constant ε > 0.
We extend our main result in several directions.

Parameterized Complexity I: Solution Size
Subsequent work [18,35] improved over Yang et al.'s algorithm when certain input parameters are small. Here, we focus particularly on the solution size, i.e., the length L of the LCIS. Kutz et al. [35] provided an algorithm running in time O(nL log log n + n log n). If L is small compared to its worst-case upper bound of n, say L = n^{1/2 ± o(1)}, this algorithm runs in strongly subquadratic time. Interestingly, exactly for this case, the reduction from 3-LCS to LCIS of Observation 1 already yields a matching SETH-based lower bound of (Ln)^{1−o(1)} = n^{3/2 − o(1)}. However, for smaller L this reduction yields no lower bound at all, and only a non-matching lower bound for larger L. We remedy this situation by the following result.

Theorem 4. Unless SETH fails, there is no O((nL)^{1−ε}) time algorithm for LCIS for any constant ε > 0. This even holds restricted to instances with L = n^{γ ± o(1)}, for arbitrarily chosen 0 < γ ≤ 1.

Parameterized Complexity II: k-LCIS
For constant k ≥ 3, we can solve k-LCIS in O(n^k polylog(n)) time [18,35], or even O(n^k) time (see the appendix). While it is known that k-LCS cannot be computed in time O(n^{k−ε}) for any constants ε > 0 and k ≥ 2 unless SETH fails [1], this does not directly transfer to k-LCIS, since the reduction in Observation 1 is not tight. However, by extending our main construction, we can prove the analogous result.
Theorem 5. Unless SETH fails, there is no O(n^{k−ε}) time algorithm for k-LCIS for any constants k ≥ 3 and ε > 0.

Longest Common Weakly Increasing Subsequence (LCWIS)
We consider a closely related variant of LCIS called the Longest Common Weakly Increasing Subsequence (k-LCWIS): Here, given integer sequences X_1, …, X_k of length at most n, the task is to determine the longest weakly increasing (i.e., non-decreasing) integer sequence Z that is a common subsequence of X_1, …, X_k. Again, we write LCWIS as a shorthand for 2-LCWIS. Note that the seemingly small change in the notion of increasing sequence has a major impact on algorithmic and hardness results: Any instance of LCIS in which the input sequences are defined over a small-sized alphabet Σ ⊆ Z, say |Σ| = O(n^{1/2}), can be solved in strongly subquadratic time O(nL log n) = O(n^{3/2} log n) [35], using the fact that L ≤ |Σ|. In contrast, LCWIS is quadratic-time SETH-hard already over slightly superlogarithmic-sized alphabets [40]. We give a substantially different proof of this fact and generalize it to k-LCWIS.

Theorem 6. Unless SETH fails, there is no O(n^{k−ε}) time algorithm for k-LCWIS for any constants k ≥ 3 and ε > 0. This even holds restricted to instances defined over an alphabet of size |Σ| ≤ f(n) log n for any function f(n) = ω(1) growing arbitrarily slowly.

Strengthening the Hardness
In an attempt to strengthen the conditional lower bounds for Edit Distance and LCS [9,1,16], particularly, to obtain barriers even for subpolynomial improvements, Abboud, Hansen, Vassilevska Williams, and Williams [2] gave the first fine-grained reductions from the satisfiability problem on branching programs. Using this approach, the quadratic-time hardness of a problem can be explained by considerably weaker variants of SETH, making the conditional lower bound stronger. We show that our lower bounds also hold under these weaker variants. In particular, we prove the following.

Theorem 7. There is no strongly subquadratic time algorithm for LCIS, unless there is, for some ε > 0, an O((2 − ε)^N) time algorithm for the satisfiability problem on branching programs of width W and length T on N variables with (log W)(log T) = o(N).

Discussion, Outline and Technical Contributions
Apart from an interest in LCIS and its close connection to LCS, our work is also motivated by an interest in the optimality of dynamic programming (DP) algorithms. Notably, many conditional lower bounds in P target problems with natural DP algorithms that are proven to be near-optimal under some plausible assumption (see, e.g., [15,3,9,10,1,16,11,23,34] and [45] for an introduction to the field). Even if we restrict our attention to problems that find optimal sequence alignments under some restrictions, such as LCS, Edit Distance and LCIS, the currently known hardness proofs differ significantly, despite seemingly small differences between the problem definitions. Ideally, we would like to classify the properties of a DP formulation which allow for matching conditional lower bounds.
One step in this direction is given by the alignment gadget framework [16]. Exploiting normalization tricks, this framework gives an abstract property of sequence similarity measures that allows for SETH-based quadratic lower bounds. Unfortunately, as it turns out, we cannot directly transfer the alignment gadget hardness proof for LCS to LCIS; some indication for this difficulty is already given by the fact that LCIS can be solved in strongly subquadratic time over sublinear-sized alphabets [35], while the LCS hardness proof already applies to binary alphabets. By collecting gadgetry needed to overcome such difficulties (which we elaborate on below), we hope to provide further tools to generalize more and more quadratic-time lower bounds based on SETH.

Technical Challenges
The known conditional lower bounds for global alignment problems such as LCS and Edit Distance work as follows. The reductions start from the quadratic-time SETH-hard Orthogonal Vectors problem (OV), which asks to determine, given two sets of n (0,1)-vectors u_1, …, u_n and v_1, …, v_n over d = n^{o(1)} dimensions, whether there is a pair i, j such that u_i and v_j are orthogonal, i.e., whose inner product is 0 (over the integers). Each vector u_i and v_j is represented by a (normalized) vector gadget VG_x(u_i) and VG_y(v_j), respectively. Roughly speaking, these gadgets are combined into sequences X and Y such that each candidate for an optimal alignment of X and Y involves locally optimal alignments between n pairs VG_x(u_i), VG_y(v_j); the optimal alignment exceeds a certain threshold if and only if there is an orthogonal pair u_i, v_j.
An analogous approach does not work for LCIS: Let VG_x(u_i) be defined over an alphabet Σ and VG_x(u_{i'}) over an alphabet Σ'. If Σ and Σ' overlap, then VG_x(u_i) and VG_x(u_{i'}) cannot both be aligned in an optimal alignment without interfering with each other. On the other hand, if Σ and Σ' are disjoint, then each vector v_j should have its corresponding vector gadget VG_y(v_j) defined over both Σ and Σ' to enable aligning VG_x(u_i) with VG_y(v_j) as well as VG_x(u_{i'}) with VG_y(v_j). The latter option drastically increases the size of the vector gadgets. Thus, we must define all vector gadgets over a common alphabet Σ and make sure that only a single pair VG_x(u_i), VG_y(v_j) is aligned in an optimal alignment (in contrast to the n pairs aligned in the previous reductions for LCS and Edit Distance).

Technical Contributions and Proof Outline
Fortunately, a surprisingly simple approach works: As a key tool, we provide separator sequences α_0 … α_{n−1} and β_0 … β_{n−1} with the following properties: (1) for every i, j ∈ {0, …, n − 1}, the LCIS of α_0 … α_i and β_0 … β_j has length f(i + j), where f is a linear function, and (2) Σ_i |α_i| and Σ_j |β_j| are bounded by n^{1+o(1)}. Note that the existence of such a gadget is somewhat unintuitive: condition (1) for i = 0 and j = n − 1 requires |α_0| = Ω(n), yet still the total length Σ_i |α_i| must not exceed |α_0| significantly. Indeed, we achieve this by a careful inductive construction that generates such sequences with heavily varying block sizes |α_i| and |β_j|.
We apply these separator sequences as follows. We first define simple vector gadgets VG_x(u_i), VG_y(v_j) over an alphabet Σ such that the length of an LCIS of VG_x(u_i) and VG_y(v_j) is d − (u_i · v_j). Then we construct the separator sequences as above over an alphabet Σ_< whose elements are strictly smaller than all elements in Σ. Furthermore, we create analogous separator sequences α̂_0 … α̂_{n−1} and β̂_0 … β̂_{n−1} which satisfy a property like (1) for all suffixes instead of prefixes, using an alphabet Σ_> whose elements are strictly larger than all elements in Σ. Now, we define X and Y by interleaving the separator sequences with the vector gadgets, block by block. As we will show in Section 3, the length of an LCIS of X and Y is then C − min_{i,j} (u_i · v_j) for some constant C depending only on n and d.
In contrast to previous such OV-based lower bounds, we use heavily varying separators (paddings) between vector gadgets.

Preliminaries
As a convention, we use capital or Greek letters to denote sequences of integers. Let X, Y be integer sequences. We write |X| for the length of X, X[k] for the k-th element of the sequence X (k ∈ {0, …, |X| − 1}), and X ∘ Y = XY for the concatenation of X and Y. We say that Y is a subsequence of X if there exist indices 0 ≤ i_0 < i_1 < ⋯ < i_{|Y|−1} ≤ |X| − 1 such that Y[k] = X[i_k] for all k. For any k sequences X_1, …, X_k, we denote by lcis(X_1, …, X_k) the length of their longest common subsequence that is strictly increasing.

Hardness Assumptions
All of our lower bounds hold assuming the Strong Exponential Time Hypothesis (SETH), introduced by Impagliazzo and Paturi [30,31].It essentially states that no exponential speed-up over exhaustive search is possible for the CNF satisfiability problem.

Hypothesis 8 (Strong Exponential Time Hypothesis (SETH)).
There is no ε > 0 such that for all d ≥ 3 there is an O(2^{(1−ε)n}) time algorithm for d-SAT.
This hypothesis implies tight hardness of the k-Orthogonal Vectors problem (k-OV), which will be the starting point of our reductions: Given sets U_1, …, U_k, each consisting of n vectors in {0,1}^d with d = n^{o(1)}, determine whether there is a k-tuple (u_1, …, u_k) ∈ U_1 × ⋯ × U_k such that Σ_{p=1}^{d} u_1[p] · u_2[p] ⋯ u_k[p] = 0. The following conjecture is implied by SETH by the well-known split-and-list technique of Williams [44] (and the sparsification lemma [31]).

Hypothesis 9 (k-OV conjecture). For no k ≥ 2 and ε > 0 is there an O(n^{k−ε} poly(d)) time algorithm for k-OV.
For the special case of k = 2, which we simply denote by OV, we obtain the following weaker conjecture.

Hypothesis 10 (OV conjecture). For no ε > 0 is there an O(n^{2−ε} poly(d)) time algorithm for OV.
A proof of the folklore equivalence of the statements for equal and unequal set sizes can be found, e.g., in [16].

Main Construction: Hardness of LCIS
In this section, we prove quadratic-time SETH-hardness of LCIS, i.e., we prove Theorem 3. We first introduce an inflation operation, which we then use to construct our separator sequences. After defining simple vector gadgets, we show how to embed an Orthogonal Vectors instance using our vector gadgets and separator sequences.

Inflation
We begin by introducing the inflation operation, which roughly corresponds to weighting the sequences.

Definition 11. For a sequence A = ⟨a_0, a_1, …, a_{n−1}⟩ of integers we define inflate(A) = ⟨2a_0 − 1, 2a_0, 2a_1 − 1, 2a_1, …, 2a_{n−1} − 1, 2a_{n−1}⟩.

Lemma 12. For any two sequences A and B, lcis(inflate(A), inflate(B)) = 2 · lcis(A, B).
Proof. Let C be a longest common increasing subsequence of A and B. Observe that inflate(C) is a common increasing subsequence of inflate(A) and inflate(B) of length 2 · |C|, thus lcis(inflate(A), inflate(B)) ≥ 2 · lcis(A, B).
Conversely, let Ā denote inflate(A) and B̄ denote inflate(B). Let C̄ be a longest common increasing subsequence of Ā and B̄. If we divide all elements of C̄ by 2 and round up to the closest integer, we end up with a weakly increasing sequence. Now, if we remove duplicate elements to make this sequence strictly increasing, we obtain C, a common increasing subsequence of A and B. At most 2 distinct elements may become equal after division by 2 and rounding, therefore C contains at least lcis(Ā, B̄)/2 elements, so 2 · lcis(A, B) ≥ lcis(Ā, B̄). This completes the proof.
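The inflation operation and the doubling property of Lemma 12 can be checked directly; the following sketch is our own illustration (the helper `lcis` is a textbook O(nm) routine, not code from the paper):

```python
def inflate(seq):
    """Replace each element a by the pair (2a - 1, 2a), as in Definition 11."""
    out = []
    for a in seq:
        out += [2 * a - 1, 2 * a]
    return out

def lcis(A, B):
    """Length of a longest common strictly increasing subsequence, in O(|A|*|B|).

    dp[j] = length of the best common increasing subsequence ending at B[j].
    """
    dp = [0] * len(B)
    for a in A:
        best = 0  # best dp[j'] over positions j' seen so far with B[j'] < a
        for j, b in enumerate(B):
            if b == a and best + 1 > dp[j]:
                dp[j] = best + 1
            if b < a and dp[j] > best:
                best = dp[j]
    return max(dp, default=0)

A, B = [1, 2, 3], [1, 3, 2]
assert lcis(inflate(A), inflate(B)) == 2 * lcis(A, B)  # Lemma 12 on a small example
```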

Separator sequences
Our goal is to construct two sequences A and B which can be split into n blocks, i.e. A = α_0 α_1 … α_{n−1} and B = β_0 β_1 … β_{n−1}, such that the length of the longest common increasing subsequence of the first i blocks of A and the first j blocks of B equals i + j, up to an additive constant. We call A and B separator sequences, and use them later to separate vector gadgets in order to make sure that only one pair of gadgets may interact with each other at a time.
We construct the separator sequences inductively. For every k ∈ N, the sequences A_k and B_k are concatenations of 2^k blocks (of varying sizes), A_k = α_k^0 α_k^1 … α_k^{2^k − 1} and B_k = β_k^0 β_k^1 … β_k^{2^k − 1}. Let s_k denote the largest element of both sequences. As we will soon observe, s_k = 2^{k+2} − 3.
The construction works as follows: for k = 0, we simply set both A_0 and B_0 to be the one-element sequence ⟨1⟩. We then construct A_{k+1} and B_{k+1} inductively from A_k and B_k in two steps. First, we inflate both A_k and B_k; then, after each (now inflated) block we insert a 3-element sequence, called a tail gadget: ⟨2s_k + 2, 2s_k + 1, 2s_k + 3⟩ for A_{k+1} and ⟨2s_k + 1, 2s_k + 2, 2s_k + 3⟩ for B_{k+1}. Formally, we describe the construction by defining the blocks of the new sequences. For i ∈ {0, 1, …, 2^k − 1},

α_{k+1}^{2i} = inflate(α_k^i) ∘ ⟨2s_k + 2⟩,  α_{k+1}^{2i+1} = ⟨2s_k + 1, 2s_k + 3⟩,
β_{k+1}^{2i} = inflate(β_k^i) ∘ ⟨2s_k + 1⟩,  β_{k+1}^{2i+1} = ⟨2s_k + 2, 2s_k + 3⟩.

Note that the symbols appearing in tail gadgets do not appear in the inflated sequences. The largest element s_{k+1} of both new sequences equals 2s_k + 3, and solving the recurrence indeed gives s_k = 2^{k+2} − 3.

Lemma 13. |A_k| = |B_k| = O(2^k · k).

Proof. Indeed, to obtain A_{k+1} we first double the size of A_k and then add 3 new elements for each of the 2^k blocks of A_k, i.e., |A_{k+1}| = 2|A_k| + 3 · 2^k. Solving the recurrence completes the proof. The same reasoning applies to B_k.

Lemma 14. For every i, j ∈ {0, 1, …, 2^k − 1}, lcis(α_k^0 … α_k^i, β_k^0 … β_k^j) = i + j + 2^k.

The proof is by induction on k. Assume the statement is true for k and let us prove it for k + 1.
The "≤" direction. We proceed by induction on i + j. Fix i and j, and let L be a longest common increasing subsequence of α_{k+1}^0 … α_{k+1}^i and β_{k+1}^0 … β_{k+1}^j. If the last element of L is less than or equal to 2s_k, then L is in fact a common increasing subsequence of inflate(α_k^0 … α_k^{⌊i/2⌋}) and inflate(β_k^0 … β_k^{⌊j/2⌋}), thus, by the induction hypothesis and the inflation properties, |L| ≤ 2 · (⌊i/2⌋ + ⌊j/2⌋ + 2^k) ≤ i + j + 2^{k+1}. The remaining case is when the last element of L is greater than 2s_k. In this case, consider the second-to-last element of L. It must belong to some blocks α_{k+1}^{i'} and β_{k+1}^{j'} for i' ≤ i and j' ≤ j, and we claim that i' = i and j' = j cannot hold simultaneously: by the construction of the separator sequences, if blocks α_{k+1}^i and β_{k+1}^j have a common element larger than 2s_k, then it is the only common element of these two blocks. Therefore, it cannot be the case that both i' = i and j' = j, because the last two elements of L would then both be located in α_{k+1}^i and β_{k+1}^j. As a consequence, i' + j' < i + j, which lets us apply the induction hypothesis to conclude that the prefix of L omitting its last element has length at most i' + j' + 2^{k+1}. Therefore, |L| ≤ 1 + i' + j' + 2^{k+1} ≤ i + j + 2^{k+1}, which completes the proof.
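The inductive construction and the property of Lemma 14 can be sketched and verified exhaustively for small k. The code below is our own illustration of the construction described above (with `inflate` and a textbook O(nm) `lcis` routine as helpers):

```python
def inflate(seq):
    return [x for a in seq for x in (2 * a - 1, 2 * a)]

def lcis(A, B):
    dp = [0] * len(B)
    for a in A:
        best = 0
        for j, b in enumerate(B):
            if b == a and best + 1 > dp[j]:
                dp[j] = best + 1
            if b < a and dp[j] > best:
                best = dp[j]
    return max(dp, default=0)

def separator_blocks(k):
    """Blocks of A_k and B_k; each step inflates and appends tail-gadget symbols."""
    A, B, s = [[1]], [[1]], 1  # A_0 = B_0 = <1>, s_0 = 1
    for _ in range(k):
        A = [blk for a in A for blk in (inflate(a) + [2 * s + 2], [2 * s + 1, 2 * s + 3])]
        B = [blk for b in B for blk in (inflate(b) + [2 * s + 1], [2 * s + 2, 2 * s + 3])]
        s = 2 * s + 3  # s_{k+1} = 2 s_k + 3
    return A, B

# Lemma 14: the LCIS of the first i+1 blocks of A_k and first j+1 blocks of B_k
# equals i + j + 2^k.
k = 2
A, B = separator_blocks(k)
for i in range(2 ** k):
    for j in range(2 ** k):
        prefA = [x for blk in A[: i + 1] for x in blk]
        prefB = [x for blk in B[: j + 1] for x in blk]
        assert lcis(prefA, prefB) == i + j + 2 ** k
```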
Observe that if we reverse the sequences A_k and B_k and negate all elements, i.e. replace each x by −x, we obtain sequences Â_k and B̂_k such that Â_k splits into 2^k blocks α̂_k^0 … α̂_k^{2^k − 1} (and B̂_k into β̂_k^0 … β̂_k^{2^k − 1}) satisfying the analogue of Lemma 14 for suffixes instead of prefixes, i.e.,

lcis(α̂_k^i … α̂_k^{2^k − 1}, β̂_k^j … β̂_k^{2^k − 1}) = (2^k − 1 − i) + (2^k − 1 − j) + 2^k.    (1)
Finally, observe that we can add any constant to all elements of the sequences A_k and B_k (as well as Â_k and B̂_k) without changing the property stated in Lemma 14 (or its analogue for Â_k and B̂_k, i.e. Equation (1)).
Vector gadgets

For i, j ∈ {0, 1, …, n − 1}, we construct the vector gadgets U_i and V_j as sequences of length at most 2d, by defining, for every p ∈ {0, 1, …, d − 1}, the p-th coordinate gadget of U_i as ⟨2p, 2p − 1⟩ if u_i[p] = 0 and ⟨2p⟩ if u_i[p] = 1, and the p-th coordinate gadget of V_j as ⟨2p, 2p − 1⟩ if v_j[p] = 0 and ⟨2p − 1⟩ if v_j[p] = 1; U_i and V_j are the concatenations of their respective coordinate gadgets. Observe that at most one of the elements 2p − 1 and 2p may appear in an LCIS of U_i and V_j, and one of them does if and only if u_i[p] and v_j[p] are not both equal to one. Therefore, lcis(U_i, V_j) = d − (u_i · v_j), and, in particular, lcis(U_i, V_j) = d if and only if u_i and v_j are orthogonal.
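One concrete instantiation of these gadgets, consistent with the properties just stated, can be sketched as follows (our own illustration; the exhaustive loop checks lcis(U_i, V_j) = d − (u_i · v_j) for all pairs of 3-dimensional (0,1)-vectors):

```python
from itertools import product

def lcis(A, B):
    dp = [0] * len(B)
    for a in A:
        best = 0
        for j, b in enumerate(B):
            if b == a and best + 1 > dp[j]:
                dp[j] = best + 1
            if b < a and dp[j] > best:
                best = dp[j]
    return max(dp, default=0)

def gadget_U(u):
    """Coordinate p contributes <2p, 2p-1> if u[p] = 0, else only <2p>."""
    g = []
    for p, c in enumerate(u):
        g += [2 * p] if c else [2 * p, 2 * p - 1]
    return g

def gadget_V(v):
    """Coordinate p contributes <2p, 2p-1> if v[p] = 0, else only <2p-1>."""
    g = []
    for p, c in enumerate(v):
        g += [2 * p - 1] if c else [2 * p, 2 * p - 1]
    return g

d = 3
for u in product([0, 1], repeat=d):
    for v in product([0, 1], repeat=d):
        dot = sum(a * b for a, b in zip(u, v))
        assert lcis(gadget_U(u), gadget_V(v)) == d - dot
```

Each coordinate occupies its own pair of values {2p − 1, 2p} and is written in decreasing order, so a common increasing subsequence picks up at most one element per coordinate, and exactly one unless both coordinates equal 1.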

Final construction
To put all the pieces together, we plug vector gadgets U i and V j into the separator sequences from Section 3.2, obtaining two sequences whose LCIS depends on the minimal inner product of vectors u i and v j .We provide a general construction of such sequences, which will be useful in later sections.
Lemma 15. Let X_0, X_1, …, X_{n−1}, Y_0, Y_1, …, Y_{n−1} be integer sequences such that none of them has an increasing subsequence longer than δ. Then there exist sequences X and Y of length O(δ · n log n + Σ_i |X_i| + Σ_j |Y_j|) such that lcis(X, Y) = C + max_{i,j} lcis(X_i, Y_j), for a constant C that only depends on n and δ and is O(nδ).
Proof. We can assume that n = 2^k for some positive integer k, adding dummy sequences if necessary. Recall the sequences A_k, B_k, Â_k and B̂_k constructed in Section 3.2. Let A, B, Â, B̂ be the sequences obtained from A_k, B_k, Â_k, B̂_k by applying inflation ⌈log_2 δ⌉ times (thus increasing their length by a factor of ℓ = 2^{⌈log_2 δ⌉} ≥ δ). Each of these four sequences splits into (now inflated) blocks, e.g. A = α_0 α_1 … α_{n−1}, where α_i = inflate^{⌈log_2 δ⌉}(α_k^i). We subtract from A and B a constant large enough for all their elements to be smaller than all elements of every X_i and Y_j. Similarly, we add to Â and B̂ a constant large enough for all their elements to be larger than all elements of every X_i and Y_j. Now, we construct the sequences X and Y as follows:

X = α_0 X_0 α̂_0 α_1 X_1 α̂_1 … α_{n−1} X_{n−1} α̂_{n−1},
Y = β_0 Y_0 β̂_0 β_1 Y_1 β̂_1 … β_{n−1} Y_{n−1} β̂_{n−1}.

Let X_i and Y_j be a pair of sequences achieving lcis(X_i, Y_j) = M, where M = max_{i,j} lcis(X_i, Y_j). Recall that lcis(α_0 … α_i, β_0 … β_j) = ℓ · (i + j + n), with all the elements of this common subsequence preceding the elements of X_i and Y_j in X and Y, respectively, and being smaller than them. In the same way, lcis(α̂_i … α̂_{n−1}, β̂_j … β̂_{n−1}) = ℓ · (2(n − 1) − (i + j) + n), with all the elements of this LCIS being larger and appearing later than those of X_i and Y_j. By concatenating these three sequences we obtain a common increasing subsequence of X and Y of length ℓ · (4n − 2) + M. It remains to prove lcis(X, Y) ≤ ℓ · (4n − 2) + M. Let L be any common increasing subsequence of X and Y. Observe that L splits into three (some of them possibly empty) parts L = S G Ŝ, with S consisting only of elements of A and B, G only of elements of the X_i's and Y_j's, and Ŝ only of elements of Â and B̂.
Let x be the last element of S and x̂ the first element of Ŝ. We know that x belongs to some blocks α_i of A and β_j of B, and x̂ belongs to some blocks α̂_î of Â and β̂_ĵ of B̂. Obviously i ≤ î and j ≤ ĵ. By Lemma 14 and the inflation properties we have |S| ≤ ℓ · (i + j + n) and |Ŝ| ≤ ℓ · (2(n − 1) − (î + ĵ) + n). We consider two cases:

Case 1. If i = î and j = ĵ, then G may only contain elements of X_i and Y_j. Therefore |G| ≤ lcis(X_i, Y_j) ≤ M, and |L| ≤ ℓ · (4n − 2) + M.

Case 2. If i < î or j < ĵ, then G must be a common increasing subsequence of X_i … X_î and Y_j … Y_ĵ. Since no single X_{i'} or Y_{j'} has an increasing subsequence longer than δ, we get |G| ≤ δ · (min(î − i, ĵ − j) + 1) ≤ ℓ · ((î − i) + (ĵ − j)). On the other hand, |S| + |Ŝ| ≤ ℓ · (4n − 2) − ℓ · ((î − i) + (ĵ − j)). From that we obtain |L| ≤ ℓ · (4n − 2), as desired.
We are ready to prove the main result of the paper.
Proof of Theorem 3. Let U = {u_0, …, u_{n−1}}, V = {v_0, …, v_{n−1}} be two sets of d-dimensional binary vectors. In Section 3.3 we constructed vector gadgets U_i and V_j, for i, j ∈ {0, 1, …, n − 1}, such that lcis(U_i, V_j) = d − (u_i · v_j). To these sequences we apply Lemma 15, with δ = 2d, obtaining sequences X and Y of length O(n log n · poly(d)) such that lcis(X, Y) = C + d − min_{i,j} (u_i · v_j) for a constant C. This reduction, combined with an O(n^{2−ε}) time algorithm for LCIS, would yield an O(n^{2−ε} polylog(n) poly(d)) time algorithm for OV, refuting Hypothesis 10 and, in particular, SETH.

With the reduction above, one can not only determine whether there exists a pair of orthogonal vectors, but also, if there is none, calculate the minimum inner product over all pairs of vectors. Formally, by the above construction, we can reduce even the Most Orthogonal Vectors problem, as defined in Abboud et al. [1], to LCIS. This bases the hardness of LCIS already on the inability to improve over exhaustive search for the MAX-CNF-SAT problem, which is a slightly weaker assumption than SETH.
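The whole reduction can be assembled end-to-end and sanity-checked on small instances. The sketch below is our own illustration (helper names are ours, and one concrete choice of gadgets, shifts, and block interleaving consistent with the description above is used); it checks lcis(X, Y) = C + d − min_{i,j}(u_i · v_j) with C = ℓ(4n − 2):

```python
from math import ceil, log2

def inflate(seq):
    return [x for a in seq for x in (2 * a - 1, 2 * a)]

def lcis(A, B):
    dp = [0] * len(B)
    for a in A:
        best = 0
        for j, b in enumerate(B):
            if b == a and best + 1 > dp[j]:
                dp[j] = best + 1
            if b < a and dp[j] > best:
                best = dp[j]
    return max(dp, default=0)

def separator_blocks(k):
    A, B, s = [[1]], [[1]], 1
    for _ in range(k):
        A = [blk for a in A for blk in (inflate(a) + [2 * s + 2], [2 * s + 1, 2 * s + 3])]
        B = [blk for b in B for blk in (inflate(b) + [2 * s + 1], [2 * s + 2, 2 * s + 3])]
        s = 2 * s + 3
    return A, B

def gadget_U(u):
    return [x for p, c in enumerate(u) for x in ([2 * p] if c else [2 * p, 2 * p - 1])]

def gadget_V(v):
    return [x for p, c in enumerate(v) for x in ([2 * p - 1] if c else [2 * p, 2 * p - 1])]

def reduce_ov_to_lcis(U, V):
    """Build X, Y with lcis(X, Y) = C + d - min dot; n = |U| = |V| a power of two."""
    n, d = len(U), len(U[0])
    k = n.bit_length() - 1
    t = ceil(log2(2 * d))            # inflate ceil(log2 delta) times, delta = 2d
    A, B = separator_blocks(k)
    for _ in range(t):
        A = [inflate(b) for b in A]
        B = [inflate(b) for b in B]
    top = max(max(b) for b in A)     # largest separator element
    lo_A = [[x - top - 2 for x in b] for b in A]   # shifted below all gadget elements
    lo_B = [[x - top - 2 for x in b] for b in B]
    # hat blocks: reversed, negated, shifted above all gadget elements
    hi_A = [[top + 2 * d - x for x in reversed(A[n - 1 - i])] for i in range(n)]
    hi_B = [[top + 2 * d - x for x in reversed(B[n - 1 - i])] for i in range(n)]
    X = [x for i in range(n) for x in lo_A[i] + gadget_U(U[i]) + hi_A[i]]
    Y = [x for j in range(n) for x in lo_B[j] + gadget_V(V[j]) + hi_B[j]]
    return X, Y, (2 ** t) * (4 * n - 2)  # C = ell * (4n - 2)

U = [(1, 1), (1, 0)]
V = [(1, 1), (0, 1)]                 # contains an orthogonal pair: (1,0) . (0,1) = 0
X, Y, C = reduce_ov_to_lcis(U, V)
d = 2
min_dot = min(sum(a * b for a, b in zip(u, v)) for u in U for v in V)
assert lcis(X, Y) == C + d - min_dot
```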

Matching Lower Bound for Output-Dependent Algorithms
To prove our bivariate conditional lower bound of (nL)^{1−o(1)}, we provide a reduction from an OV instance with unequal vector set sizes. It remains to show the reduction itself. Let U = {u_0, …, u_{n−1}} and V = {v_0, …, v_{m−1}} be two sets of d-dimensional (0,1)-vectors. By adding dummy vectors, we can assume without loss of generality that n = q · m for some integer q.
We use the vector gadgets U_i and V_j from Section 3.4. This time, however, we group together every q consecutive gadgets, i.e., (U_0, …, U_{q−1}), (U_q, …, U_{2q−1}), and so on. Specifically, let U_i^[r] be the i-th vector gadget shifted by an integer r (i.e. with r added to all its elements). We define, for l ∈ {0, 1, …, m − 1},

Ū_l = U_{lq}^[(q−1)·2d] U_{lq+1}^[(q−2)·2d] … U_{lq+q−1}^[0],

so that each gadget in Ū_l uses strictly smaller elements than the preceding one. In a similar way, for j ∈ {0, 1, …, m − 1}, we replicate every V_j gadget q times with the matching shifts, i.e.,

V̄_j = V_j^[(q−1)·2d] V_j^[(q−2)·2d] … V_j^[0].

Let us now determine lcis(Ū_l, V̄_j). No two gadgets grouped in Ū_l can contribute to an LCIS together, as the later one has smaller elements. Therefore, only one U_i gadget can be used, paired with the one copy of V_j having the matching shift. This yields lcis(Ū_l, V̄_j) = max_{lq ≤ i < lq+q} lcis(U_i, V_j), and in turn, also max_{l,j} lcis(Ū_l, V̄_j) = max_{i,j} lcis(U_i, V_j). Observe that every Ū_l is a concatenation of several U_i gadgets, each one shifted to make its elements smaller than the previous ones. Therefore, any increasing subsequence of Ū_l must be contained in a single U_i, and thus cannot be longer than 2d. The same argument applies to every V̄_j. Therefore, we can apply Lemma 15, with δ = 2d, to these sequences, obtaining X̄ and Ȳ satisfying

lcis(X̄, Ȳ) = C + max_{i,j} lcis(U_i, V_j) = C + d − min_{i,j} (u_i · v_j).

Recall that C is a constant depending only on m and d, with C = O(md). The length of both X̄ and Ȳ is O(dm log m + mqd) = O(nd log n), and the length of the output is O(md), as desired.

Hardness of k-LCIS
In this section we show that, assuming SETH, there is no O(n^{k−ε}) time algorithm for the k-LCIS problem, i.e., we prove Theorem 5. To obtain this lower bound we give a reduction from the k-Orthogonal Vectors problem (for the definition, see Section 2). The reduction has two main ingredients, separator sequences and vector gadgets, and both of them can be seen as natural generalizations of those introduced in Section 3.

Generalizing separator sequences
Please note that in this section we use a notation which is not consistent with the one from Section 3, because it has to accommodate indexing over k sequences.
The aim of this section is to show, for any N that is a power of two, how to construct k sequences A_1, A_2, …, A_k such that each of them can be split into N blocks, i.e. A_i = α_i^0 α_i^1 … α_i^{N−1}, and for any choice of j_1, j_2, …, j_k ∈ {0, 1, …, N − 1},

lcis(α_1^0 … α_1^{j_1}, α_2^0 … α_2^{j_2}, …, α_k^0 … α_k^{j_k}) = j_1 + j_2 + ⋯ + j_k + N.    (2)

As before, we construct the separator sequences inductively, doubling the number of blocks in each step. Again, for N = 1, we define the sequences by A_i = ⟨1⟩, i ∈ {1, …, k}.
Suppose we have N-block sequences A_1, …, A_k; we show how to construct 2N-block sequences B_1, …, B_k. Note that the inflation properties still hold for k sequences, as the proof of Lemma 12 works in exactly the same way, i.e. inflating all the sequences multiplies the length of their LCIS by 2.
To obtain B i , we first inflate A i , and then append a tail gadget after each block α j i .However, tail gadgets are now more involved.
Let s denote the largest element appearing in A_1, A_2, …, A_k. Then the blocks of B_i are

β_i^{2j} = inflate(α_i^j) ∘ T_i^0,  β_i^{2j+1} = T_i^1,  for j ∈ {0, 1, …, N − 1},

where T_i^0 is the sorted sequence of the numbers of the form 2s + x, for x ∈ {1, …, 2^k − 1}, such that the i-th bit in the binary representation of x equals 0, while T_i^1 contains those with the i-th bit set to 1. Note that for k = 2 this leads exactly to the construction from Section 3.
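The tail-gadget sets can be sketched as follows (our own illustration); for k = 2 they reproduce the tails ⟨2s+2⟩⟨2s+1, 2s+3⟩ and ⟨2s+1⟩⟨2s+2, 2s+3⟩ of Section 3:

```python
def tail_gadgets(k, s):
    """T0[i], T1[i]: the numbers 2s + x, x in {1, ..., 2^k - 1}, split by the i-th bit of x."""
    T0, T1 = {}, {}
    for i in range(1, k + 1):
        T0[i] = [2 * s + x for x in range(1, 2 ** k) if not (x >> (i - 1)) & 1]
        T1[i] = [2 * s + x for x in range(1, 2 ** k) if (x >> (i - 1)) & 1]
    return T0, T1

T0, T1 = tail_gadgets(2, 1)  # k = 2, largest element s = 1
assert (T0[1], T1[1]) == ([4], [3, 5])   # tail of A_{k+1}: 4, then 3, 5
assert (T0[2], T1[2]) == ([3], [4, 5])   # tail of B_{k+1}: 3, then 4, 5
```

The key property used below is that, for a choice of one part T_i^{b_i} per sequence, the parts share exactly one element (the one encoding the bit pattern b_1 … b_k) unless all b_i = 0, in which case they share none.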
During one construction step, every block doubles in size, and a constant number of elements (precisely, 2^k − 1) is added for every original block. Therefore, the length L(N) of the N-block sequences satisfies the recurrence L(2N) = 2L(N) + (2^k − 1) · N, which solves to L(N) = O(2^k · N log N). Note also that the size S(N) of the alphabet used in the N-block sequences satisfies S(2N) = 2S(N) + 2^k − 1, as inflation doubles the number of distinct symbols and a constant number of fresh symbols is added in every step. Therefore S(N) = O(N).
We now verify that the 2N-block sequences B_1, …, B_k satisfy the analogue of Equation (2), i.e., that for all j_1, …, j_k ∈ {0, 1, …, 2N − 1}, the LCIS of the prefixes β_1^0 … β_1^{j_1}, …, β_k^0 … β_k^{j_k} has length j_1 + ⋯ + j_k + 2N. For indices j_1, …, j_k, let x(j_1, …, j_k) denote the element 2s + x, where x is the number whose i-th bit equals j_i mod 2.

First, we exhibit a common increasing subsequence of the desired length. If all the indices j_i are even, then the prefix β_i^0 … β_i^{j_i} contains inflate(α_i^0 … α_i^{j_i/2}) as a subsequence. Thus we can find, by the inflation properties, a common increasing subsequence of length 2 · (j_1/2 + ⋯ + j_k/2 + N) = j_1 + ⋯ + j_k + 2N, as desired. Now, let j_i be any odd index, and let L be an LCIS of the prefixes corresponding to j_1, …, j_{i−1}, j_i − 1, j_{i+1}, …, j_k which ends on an element bounded by x(j_1, …, j_{i−1}, 0, j_{i+1}, …, j_k), of length j_1 + ⋯ + j_k + 2N − 1 (which exists by the induction hypothesis). Then L ∘ ⟨x(j_1, …, j_{i−1}, 1, j_{i+1}, …, j_k)⟩ is an LCIS for the prefixes corresponding to j_1, …, j_k: Indeed, x(j_1, …, j_k) is a common member of T_1^{j_1 mod 2}, …, T_k^{j_k mod 2}, the last parts of these prefixes, and this element is larger than, and appears later in the sequences than, all elements in L (since all the T_i's are sorted in increasing order).

For the converse, let L denote an LCIS of β_1^0 … β_1^{j_1}, β_2^0 … β_2^{j_2}, …, β_k^0 … β_k^{j_k}. Note that if the last symbol of L does not come from the last blocks, i.e. β_1^{j_1}, β_2^{j_2}, …, β_k^{j_k}, then L is an LCIS of prefixes corresponding to some j'_1, …, j'_k with j'_1 + ⋯ + j'_k < j_1 + ⋯ + j_k, and the claim follows from the induction hypothesis. Thus, we may assume that L ends on a common symbol of the last blocks.
If all the indices are even, the last blocks share only elements less than or equal to 2s (since T_1^0, …, T_k^0 share no elements), thus L is an LCIS of the sequences inflate(α_i^0 … α_i^{j_i/2}), i ∈ {1, …, k}, and the claim follows from the inflation properties. Otherwise, the only element the last blocks have in common is x(j_1, j_2, …, j_k), and thus L = L' ∘ ⟨x(j_1, …, j_k)⟩, where L' is an LCIS of prefixes corresponding to some j'_1, …, j'_k with j'_1 + ⋯ + j'_k < j_1 + ⋯ + j_k; the claim then follows from the induction hypothesis.

Generalizing vector gadgets
Each vector gadget is the concatenation of coordinate gadgets. The coordinate gadgets for the j-th coordinate use elements from the range {kj + 1, …, kj + k}. If a coordinate is 0, the corresponding gadget contains all k elements sorted in decreasing order; otherwise, the gadget for the i-th sequence skips the element kj + i, i.e., it is ⟨kj + k, kj + k − 1, …, kj + 1⟩ with kj + i omitted. Thus, if all k vectors have the j-th coordinate equal to 1, there is no common element in the corresponding coordinate gadgets. Otherwise, if at least one vector, say the i-th, has the j-th coordinate equal to 0, the element kj + i appears in all coordinate gadgets. Since the coordinate gadgets are sorted in decreasing order, their LCIS cannot exceed 1. Therefore, lcis(VG_1(u_1), …, VG_k(u_k)) = d − |{j : u_1[j] = ⋯ = u_k[j] = 1}|, and ultimately the gadgets have an LCIS of length d if and only if u_1, …, u_k are orthogonal in the k-OV sense.
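These coordinate gadgets can be sketched and checked exhaustively for small k (our own illustration; the brute-force k-way LCIS below is exponential and intended only for testing tiny gadgets):

```python
from itertools import combinations, product

def vector_gadget(k, i, u):
    """Gadget for the i-th of k sequences (i in 1..k): coordinate j contributes
    kj + k, ..., kj + 1 in decreasing order, omitting kj + i when u[j] = 1."""
    g = []
    for j, bit in enumerate(u):
        g += [k * j + t for t in range(k, 0, -1) if not (bit and t == i)]
    return g

def is_subseq(s, t):
    it = iter(t)
    return all(any(x == y for y in it) for x in s)

def lcis_k(seqs):
    """Brute-force k-way LCIS: try increasing subsequences of the first sequence."""
    first, rest = seqs[0], seqs[1:]
    for r in range(len(first), 0, -1):
        for c in combinations(first, r):
            if all(c[t] < c[t + 1] for t in range(r - 1)) and all(is_subseq(c, s) for s in rest):
                return r
    return 0

k, d = 3, 2
for us in product(product([0, 1], repeat=d), repeat=k):
    gadgets = [vector_gadget(k, i + 1, u) for i, u in enumerate(us)]
    all_ones = sum(all(u[j] for u in us) for j in range(d))
    assert lcis_k(gadgets) == d - all_ones
```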

Putting pieces together
We can finally prove our lower bound for k-LCIS, i.e., Theorem 5.
Proof of Theorem 5. Let U_1, …, U_k ⊆ {0,1}^d be a k-OV instance with |U_i| = n. By at most doubling the number of vectors in each set, we may assume without loss of generality that n is a power of two. We construct separator sequences consisting of n blocks. Inflate the sequences ⌈log_2 kd⌉ times, thus increasing their length by a factor ℓ = 2^{⌈log_2 kd⌉}, and subtract from all their elements a constant large enough for them to become smaller than all elements of the vector gadgets. Let A_i = α_i^0 … α_i^{n−1} denote the separator sequence constructed in this way corresponding to the set U_i.
Analogously (and as in the proof of Theorem 3), construct, for each i ∈ {1, …, k}, the separator sequence Â_i = α̂_i^0 … α̂_i^{n−1} by reversing A_i, replacing each element by its additive inverse, and adding a constant large enough to make all the elements larger than those of the vector gadgets (note that each α̂_i^j equals the reverse of α_i^{n−j−1}, with negated elements, shifted by an additive constant). In this way, the property analogous to Equation (2) holds for suffixes instead of prefixes.
Finally, construct the sequences X_1, X_2, …, X_k by defining

X_i = α_i^0 VG_i(u_0) α̂_i^0 α_i^1 VG_i(u_1) α̂_i^1 … α_i^{n−1} VG_i(u_{n−1}) α̂_i^{n−1},  writing U_i = {u_0, …, u_{n−1}},

where the VG_i are defined as in Section 5. As in the proof of Theorem 3, an LCIS of X_1, …, X_k decomposes into a common increasing subsequence of separator prefixes, an LCIS of a single k-tuple of vector gadgets, and a common increasing subsequence of separator suffixes; hence lcis(X_1, …, X_k) exceeds a suitable threshold if and only if the k-OV instance contains an orthogonal k-tuple. Combined with an O(n^{k−ε}) time algorithm for k-LCIS, this reduction would yield an O(n^{k−ε} polylog(n) poly(d)) time algorithm for k-OV, refuting Hypothesis 9 and, in particular, SETH.

Hardness of k-LCWIS
We briefly discuss the proof of Theorem 6.
Proof sketch of Theorem 6. Note that our lower bound for k-LCIS almost immediately yields a lower bound for k-LCWIS: Trivially, each common increasing subsequence of X_1, …, X_k is also a common weakly increasing subsequence. The claim then follows after carefully verifying that, in the constructed sequences, we cannot obtain longer common weakly increasing subsequences by reusing some symbols.
Our claim for k-LCWIS is slightly stronger, however. In particular, we aim to reduce the size of the alphabet over which all the sequences are defined. For this, the key insight is to replace the inflation operation inflate(⟨a_0, …, a_{n−1}⟩) = ⟨2a_0 − 1, 2a_0, …, 2a_{n−1} − 1, 2a_{n−1}⟩ by inflate'(⟨a_0, …, a_{n−1}⟩) = ⟨a_0, a_0, …, a_{n−1}, a_{n−1}⟩, which does not increase the alphabet size, but still satisfies the desired doubling property for k-LCWIS.
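The modified operation and its doubling property for weakly increasing subsequences can be sketched as follows (our own illustration, with a standard O(nm) LCWIS routine as a helper):

```python
def inflate_w(seq):
    """Duplicate every element; the alphabet is unchanged."""
    return [a for x in seq for a in (x, x)]

def lcwis(A, B):
    """Length of a longest common weakly increasing (non-decreasing) subsequence.

    dp[j] = best length of a common weakly increasing subsequence ending at B[j];
    `old` guards against matching the same element of A twice in one pass.
    """
    dp = [0] * len(B)
    for a in A:
        best = 0
        for j, b in enumerate(B):
            old = dp[j]
            if b == a and best + 1 > dp[j]:
                dp[j] = best + 1
            if b <= a and old > best:
                best = old
    return max(dp, default=0)

A, B = [1, 2, 2], [2, 2, 1]
assert lcwis(A, B) == 2                        # the subsequence 2, 2
assert lcwis(inflate_w(A), inflate_w(B)) == 4  # doubling property for LCWIS
```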
Replacing this notion in the proof of Theorem 5, we obtain final sequences X_1, …, X_k by combining separator gadgets over alphabets of size O(log n) with vector gadgets over alphabets of size O(d), where d is the dimension of the vectors in the k-OV instance. The correctness of this construction for k-LCWIS can be verified by reworking the proof of Theorem 5. Thus, we construct hard k-LCWIS instances over an alphabet of size O(log n + d), and the claim follows.
Since RG_x(a) and RG_y(b) are constructed in t steps of the inductive construction, and each step increases the length of the gadgets by a factor of O(W log W), their final length can be bounded by O((W log W)^t), which is T^{O(log W)}. Combining the reachability gadgets RG_x(a), a ∈ {0,1}^{N/2}, and RG_y(b), b ∈ {0,1}^{N/2}, using Lemma 15 (where we choose δ as the maximum length of the reachability gadgets) yields the desired sequences X, Y of length 2^{N/2} · N · T^{O(log W)} whose LCIS lets us determine the satisfiability of the given branching program, thus finishing the proof.
Similar techniques can be used to analogously strengthen other lower bounds in our paper.

Conclusion and Open Problems
We prove a tight quadratic lower bound for LCIS, ruling out strongly subquadratic-time algorithms under SETH. It remains open whether LCIS admits mildly subquadratic algorithms, such as the Masek–Paterson algorithm for LCS [36]. Note, however, that our reduction from BP-SAT gives evidence that shaving many logarithmic factors is immensely difficult. Finally, we give tight SETH-based lower bounds for k-LCIS.
For the related variant LCWIS that considers weakly increasing sequences, strongly subquadratic-time algorithms are ruled out under SETH for slightly superlogarithmic alphabet sizes ([40] and Theorem 6). On the other hand, for binary and ternary alphabets, even linear-time algorithms exist [35,24]. Can LCWIS be solved in time O(n^{2−f(|Σ|)}) for some decreasing function f that yields strongly subquadratic-time algorithms for any constant alphabet size |Σ|?
Finally, we can compute a (1 + ε)-approximation of LCIS in O(n^{3/2} ε^{−1/2} polylog(n)) time by an easy observation (see the appendix). Can we improve upon this running time or give a matching conditional lower bound? Note that a positive resolution seems difficult in light of the reduction in Observation 1: any n^α, α > 0, improvement over this running time would yield a strongly subcubic (1 + ε)-approximation for 3-LCS, which seems hard to achieve, given the difficulty of finding strongly subquadratic (1 + ε)-approximation algorithms for LCS.

Figure 1: Initial steps of the inductive construction of the separator sequences.

Proof of Theorem 4.

Let 0 < γ ≤ 1 be arbitrary and consider any OV instance with sets U, V ⊆ {0,1}^d with |U| = n, |V| = m = n^γ and d = n^{o(1)}. We reduce this problem, in time linear in the output size, to an LCIS instance with sequences X and Y satisfying |X| = |Y| = O(nd log n) and an LCIS of length O(n^γ d). Theorem 4 is an immediate consequence of this reduction: an O((nL)^{1−ε}) time LCIS algorithm would yield an OV algorithm running in time O(n^{(1+γ)(1−ε)} poly(d) polylog(n)) = O(n^{1+γ−ε'}) for some ε' > 0, which would refute Hypothesis 10 and, in particular, SETH.