1 Introduction

The length of the longest increasing subsequence of a uniformly random permutation has attracted the attention of researchers from several areas, with significant contributions from Hammersley [19], Logan and Shepp [22], Vershik and Kerov [32], and Aldous and Diaconis [1], culminating in the breakthrough work of Baik et al. [4], who related this length to the theory of random matrices and proved that it has a Tracy–Widom limiting distribution. In this work we study the lengths of monotone subsequences (increasing or decreasing) of a random permutation having a different probability law, introduced by Mallows in [23] in order to study the statistical properties of non-uniformly random permutations (see also [13] and references therein for more background). The Mallows distribution is parameterized by a number \(q>0\), with the probability of a permutation \(\uppi \) proportional to \(q^{\mathrm{Inv }(\uppi )}\), where \(\mathrm{Inv }(\uppi )\) is the number of inversions in \(\uppi \), that is, the number of pairs of elements of \(\uppi \) which are out of order.

For \(q>0\) and integer \(n\ge 1\), the \((n,q)\)-Mallows measure over permutations in \(S_n\) is given by

$$\begin{aligned} \mu _{n,q}(\uppi ) := \frac{q^{\mathrm{Inv }(\uppi )}}{Z_{n,q}}, \end{aligned}$$
(1)

where

$$\begin{aligned} \mathrm{Inv }(\uppi ) := |\{(i,j)\,:\,i<j \,\text {and}\, \uppi (i)>\uppi (j)\}| \end{aligned}$$

denotes the number of inversions in \(\uppi \), and \(Z_{n,q}\) is a normalizing constant, given explicitly by the following well-known formula [27, pg. 21] (see also the remark after Lemma 2.1 below)

$$\begin{aligned} Z_{n,q} = \displaystyle \prod _{i=1}^{n}\frac{1-q^{i}}{1-q}. \end{aligned}$$
(2)
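As a quick illustration (not part of the original argument), formula (2) can be checked for small \(n\) by brute force, summing \(q^{\mathrm{Inv }(\uppi )}\) over all of \(S_n\). The following Python sketch is ours; the value \(q=0.3\) is an arbitrary test choice.

```python
from itertools import permutations

def inversions(pi):
    """Inv(pi): number of pairs (i, j) with i < j and pi[i] > pi[j]."""
    n = len(pi)
    return sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))

def z_product(n, q):
    """The closed-form normalizing constant prod_{i=1}^n (1 - q^i) / (1 - q)."""
    z = 1.0
    for i in range(1, n + 1):
        z *= (1 - q ** i) / (1 - q)
    return z

def z_bruteforce(n, q):
    """The normalizing constant as a direct sum of q^Inv(pi) over S_n."""
    return sum(q ** inversions(pi) for pi in permutations(range(n)))

# The two expressions agree for small n.
for n in range(1, 6):
    assert abs(z_product(n, 0.3) - z_bruteforce(n, 0.3)) < 1e-9
```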

Let \(I=(i_1,\ldots ,i_m)\) be an increasing sequence of indices. We say \(I\) is an increasing subsequence of a permutation \(\uppi \) if \(\uppi (i_{k+1})>\uppi (i_k)\) for \(1\le k\le m-1\). Define a decreasing subsequence analogously. Denote by \(\mathrm{LIS }(\uppi )\) the maximal length of an increasing subsequence in \(\uppi \). That is,

$$\begin{aligned} \mathrm{LIS }(\uppi ) = \max \{m\,:\, \exists \, i_1<\cdots <i_m \text { satisfying }\uppi (i_1)<\cdots <\uppi (i_m)\}. \end{aligned}$$

Analogously define \(\mathrm{LDS }(\uppi )\) to be the maximal length of a decreasing subsequence in \(\uppi \). That is,

$$\begin{aligned} \mathrm{LDS }(\uppi ) = \max \{m\,:\, \exists \, i_1<\cdots <i_m \text { satisfying }\uppi (i_1)>\cdots >\uppi (i_m)\}. \end{aligned}$$
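Both statistics are straightforward to compute. The following Python sketch (our illustration, not from the text) computes \(\mathrm{LIS }\) by patience sorting in \(O(n\log n)\) time and obtains \(\mathrm{LDS }\) by negating the values.

```python
from bisect import bisect_left

def lis_length(pi):
    """Length of the longest increasing subsequence, via patience sorting."""
    # tails[k] = smallest possible last value of an increasing subsequence of length k + 1
    tails = []
    for x in pi:
        k = bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

def lds_length(pi):
    """Length of the longest decreasing subsequence: the LIS of the negated sequence."""
    return lis_length([-x for x in pi])

assert lis_length([3, 1, 2, 4]) == 3   # e.g. 1, 2, 4
assert lds_length([3, 1, 2, 4]) == 2   # e.g. 3, 2
```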

Our goal is to investigate the distribution of \(\mathrm{LIS }(\uppi )\) and \(\mathrm{LDS }(\uppi )\) when \(\uppi \) is randomly sampled from the Mallows measure. We mention that the asymptotics of these lengths for other non-uniform distributions have been considered in the literature previously. For instance, Baik and Rains [5] studied the longest increasing and decreasing subsequences of random permutations satisfying certain symmetry conditions, such as uniformly chosen involutions. Féray and Méliot [15] studied a distribution similar to (1), but with \(\mathrm{Inv }\) replaced by another permutation statistic, the major index. Fulman [16] related the longest increasing subsequence in this major index distribution to the study of eigenvalues of random matrices over finite fields, analogously to the relation of the longest increasing subsequence of a uniform permutation with random Hermitian matrices. In addition, \(\mathrm{LIS }(\uppi )\) and \(\mathrm{LDS }(\uppi )\) have been studied for the Mallows distribution itself, by Mueller and Starr [24], as detailed below.

We focus our investigations on the Mallows measure with \(q<1\). This restriction can be made without loss of generality since there is a duality between the measures \(\mu _{n,q}\) and \(\mu _{n,1/q}\). Indeed, if \(\uppi \sim \mu _{n,q}\) then its reversal \(\uppi ^R\), defined by \(\uppi ^R(i) := \uppi (n+1-i)\), is distributed as \(\mu _{n,1/q}\) (see Lemma 2.2 below). In particular, \(\mathrm{LIS }(\uppi )\) is distributed as \(\mathrm{LDS }(\uppi ^R)\). It is natural to allow \(q\) to be a function of \(n\). Mueller and Starr [24] studied the regime where \(n(1-q)\) tends to a finite limit \(\upbeta \). They showed that \(\mathrm{LIS }(\uppi ) / \sqrt{n}\) converges in probability to \(\ell (\upbeta )\), where \(\ell (\upbeta )\) is an explicitly given function of \(\upbeta \) satisfying \(\ell (0)=2\) (see Theorem 5.2 for the precise statement), thus extending the results of [1, 22, 32]. This implies an analogous result for \(\mathrm{LDS }(\uppi )\) by the above-mentioned duality. Thus, in this limiting sense, in the regime where \(n(1-q)\) tends to a finite constant as \(n\) tends to infinity, \(\mathrm{LIS }(\uppi )\) and \(\mathrm{LDS }(\uppi )\) have the same order of magnitude as for a uniformly random permutation, with a different leading constant. In this paper we complete this picture by considering the case that \(n(1-q)\) tends to infinity with \(n\). We find the typical order of magnitude of \(\mathrm{LIS }(\uppi )\) and \(\mathrm{LDS }(\uppi )\) (which now differ from the uniformly random case) and establish large deviation results for these lengths and a law of large numbers for \(\mathrm{LIS }(\uppi )\). We also prove a simple bound on the variance of \(\mathrm{LIS }(\uppi )\) and \(\mathrm{LDS }(\uppi )\).

Our first result concerns the displacement \(|\uppi (i)-i|\) of an element in a random Mallows permutation. The result gives bounds on the tails of this displacement. This theorem is not used later in our analysis of monotone subsequences of random Mallows permutations but it is useful in developing intuition for their behavior. The upper bound follows by methods of Braverman and Mossel [8, Lemma 17] as well as Gnedin and Olshanski [18, Remark 5.2]. In [18], the authors studied a model of random permutations of the infinite group of integers \({\mathbb {Z}}\) which is obtained as a limit of the Mallows model, and obtained precise formulas for the distribution of displacements in this limiting model.

Theorem 1.1

For all \(0<q<1\), integer \(n\ge 1\), \(1\le i\le n\) and \(t\ge 1\), if \(\uppi \sim \mu _{n,q}\) then

$$\begin{aligned} {\mathbb {P}}(|\uppi (i)-i|\ge t)\le 2q^t, \end{aligned}$$
(3)

and

$$\begin{aligned} c \min \left( \frac{q}{1-q}, n-1\right) \le {\mathbb {E}}|\uppi (i) - i| \le \min \left( \frac{2q}{1-q},n-1 \right) \end{aligned}$$
(4)

for some absolute constant \(c>0\). In addition, if \(n\ge 3\) and \(1\le t\le \frac{n+5}{8}\) then

$$\begin{aligned} {\mathbb {P}}(|\uppi (i) - i|\ge t)\ge \frac{1}{2}q^{2t-1}. \end{aligned}$$
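Since the bounds hold for every fixed \(n\), \(i\) and \(t\), the tail bound (3) can be confirmed by exact enumeration on tiny instances. The following Python sketch is ours; the choices \(n=5\), \(q=0.6\) are arbitrary.

```python
from itertools import permutations

def inversions(pi):
    """Number of inversions of the permutation pi (given as a tuple of values)."""
    n = len(pi)
    return sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))

n, q = 5, 0.6
weights = {pi: q ** inversions(pi) for pi in permutations(range(1, n + 1))}
Z = sum(weights.values())

for i in range(1, n + 1):
    for t in range(1, n):
        # P(|pi(i) - i| >= t) under the (n, q)-Mallows measure, computed exactly
        tail = sum(w for pi, w in weights.items() if abs(pi[i - 1] - i) >= t) / Z
        assert tail <= 2 * q ** t + 1e-12
```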

A permutation \(\uppi \) in \(S_n\) can be naturally associated to a collection of \(n\) points in the square \([1,n]^2\) by placing a point at \((i,\uppi (i))\) for each \(i\). In this graphical representation, increasing subsequences correspond to increasing curves passing through the points (see Fig. 1), and decreasing subsequences correspond to decreasing curves. The graphical representation is depicted in Fig. 2 for permutations simulated from the Mallows distribution \(\mu _{n,q}\) for various choices of \(n\) and \(q\). The figure illustrates the fact that most points of the permutation are displaced by less than a constant times \(q / (1-q)\), as Theorem 1.1 proves.

Fig. 1
figure 1

An increasing piecewise linear curve corresponding to a longest increasing subsequence

Fig. 2
figure 2

Graphical representation for random Mallows-distributed permutations with \(1-q=n^{-0.7}, \ n^{-0.8}\) and \(n^{-0.88}\). The diagonal lines delineate a symmetric strip with width proportional to \(\frac{1}{1-q}\). Theorem 1.1 shows that most points of the permutation must lie in such a strip

The previous remark suggests a connection between the study of the longest increasing subsequence of a random Mallows permutation, and the last passage percolation model in a strip. In one version of the latter model, one puts independent and identically distributed random points in a strip, and studies the last passage time, which is the same as the longest increasing subsequence when these points are taken to be the graphical representation of a permutation. In Sect. 8 we mention some works related to the limiting distribution of the last passage time and raise the question of whether the same limiting distributions arise also for the Mallows model.

Our next results concern the typical order of magnitude of \(\mathrm{LIS }(\uppi )\) when \(\uppi \) is sampled from the Mallows distribution. A heuristic guess for this order of magnitude may be obtained from Fig. 3. Suppose that \(\upbeta /(1-q)\) and \(n(1-q)/\upbeta \) are integers for some large constant \(\upbeta >0\). Consider \(n(1-q)/\upbeta \) disjoint squares of side length \(\upbeta /(1-q)\) along the strip delineated in the figure, such that the bottom left corner of each square equals the top right corner of the preceding square. The figure hints that the distribution of points in each square is close to a sample from the \(\mu _{\upbeta /(1-q),q}\) distribution (here, close should be interpreted as saying that the box contains a significant subsample of a Mallows-distributed permutation of size \(\upbeta /(1-q)\); Theorem 1.1 and the results in Sect. 2.1 give rigorous meaning to such statements). Thus the parameters fall in the regime of [24] and, according to their results, the typical length of the longest increasing subsequence in each square is of order \(1 / \sqrt{1-q}\). We may thus create an increasing subsequence with length of order \(n\sqrt{1-q}\) by concatenating the longest increasing subsequences in each of the \(n(1-q)/\upbeta \) squares. This reasoning gives rise to the prediction that \(\mathrm{LIS }(\uppi )\) is about \(C n\sqrt{1-q}\) for some constant \(C>0\). The next theorem establishes the correctness of this prediction, with the precise constant \(C=1\), in the limit (5).

Fig. 3
figure 3

Disjoint boxes with side length \(\frac{\upbeta }{1-q}\) along a symmetric strip around the diagonal

Theorem 1.2

Let \((q_n)\) be a sequence satisfying

$$\begin{aligned} q_n\rightarrow 1\quad \text {and}\quad n(1-q_n)\rightarrow \infty \end{aligned}$$
(5)

as \(n\) tends to infinity. Suppose \({{\uppi }_{n}}\sim \mu _{n,q_n}\). Then

$$\begin{aligned} \frac{\mathrm{LIS }({{\uppi }_{n}})}{n\sqrt{1-q_n}} \rightarrow 1 \end{aligned}$$

as \(n\) tends to infinity, where the convergence takes place in \(L_p\) for any \(0< p <\infty \).

In addition to this limiting behavior, Theorem 1.3 below gives large deviation bounds on the length of the longest increasing subsequence for fixed values of \(n\) and \(q\). The proof of Theorem 1.2 proceeds along the lines of the heuristic outlined above, combining our large deviation results with the weak law of large numbers shown in [24].

Notation: We will write \(a_{n,q} \approx b_{n,q}\) if there exist absolute constants \(0 < c \le C < \infty \) such that \(c b_{n,q} \le a_{n,q} \le C b_{n,q}\) for all \(n\) and \(q\) in a specified regime.

Theorem 1.3

Suppose that \(n\ge 1\), \(\frac{1}{2} \le q \le 1-\frac{4}{n}\) and \(\uppi \sim \mu _{n,q}\). Then,

$$\begin{aligned} {\mathbb {E}}(\mathrm{LIS }(\uppi )) \approx n\sqrt{1-q}. \end{aligned}$$
(6)

Furthermore, there exist absolute constants \(0<C,c<\infty \) such that

  1. (i)

    For integer \(L\ge Cn\sqrt{1-q}\),

    $$\begin{aligned} \left( \frac{c(1-q)n^2}{L^2} \right) ^L \le {\mathbb {P}}(\mathrm{LIS }(\uppi ) \ge L) \le \left( \frac{C(1-q)n^2}{L^2} \right) ^L. \end{aligned}$$
    (7)
  2. (ii)

    For integer \(n(1-q) \le L \le cn\sqrt{1-q}\),

    $$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }(\uppi ) < L) \le \exp \left( -\frac{c(1-q)n^2}{L}\right) . \end{aligned}$$
    (8)

The bound (8) can be improved for certain regimes of \(n,q\) and \(L\); for details see Sect. 6.3. Complementing the regime of \(q\) in (6), we have the following simple bound on \({\mathbb {E}}(\mathrm{LIS }(\uppi ))\), which is rather precise for small \(q\).

Proposition 1.4

Suppose that \(n\ge 1\), \(0 < q \le 1\) and \(\uppi \sim \mu _{n,q}\). Then

$$\begin{aligned} n(1-q) \le {\mathbb {E}}(\mathrm{LIS }(\uppi ))\le n - \frac{q}{1+q}(n-1). \end{aligned}$$
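Both bounds of the proposition can be sanity-checked by exact computation for small \(n\). The Python sketch below is ours; the test values of \(n\) and \(q\) are arbitrary.

```python
from itertools import permutations
from bisect import bisect_left

def inversions(pi):
    """Number of inversions of the permutation pi."""
    n = len(pi)
    return sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))

def lis_length(pi):
    """Length of the longest increasing subsequence (patience sorting)."""
    tails = []
    for x in pi:
        k = bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(tails)

def expected_lis(n, q):
    """E[LIS(pi)] under the (n, q)-Mallows measure, by exact enumeration."""
    weights = {pi: q ** inversions(pi) for pi in permutations(range(1, n + 1))}
    Z = sum(weights.values())
    return sum(lis_length(pi) * w for pi, w in weights.items()) / Z

# Check n(1-q) <= E[LIS] <= n - q(n-1)/(1+q) on small instances.
for n in (2, 3, 4, 5):
    for q in (0.1, 0.5, 0.9, 1.0):
        e = expected_lis(n, q)
        assert n * (1 - q) - 1e-9 <= e <= n - q * (n - 1) / (1 + q) + 1e-9
```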

When \(\uppi \) is sampled uniformly from \(S_n\), symmetry implies that \(\mathrm{LIS }(\uppi )\) and \(\mathrm{LDS }(\uppi )\) have the same distribution. For the Mallows measure, the analogous fact is not true. Indeed, looking at Fig. 2 one expects \(\mathrm{LDS }(\uppi )\) to be of a smaller order of magnitude than \(\mathrm{LIS }(\uppi )\) when \(\uppi \sim \mu _{n,q}\) with \(q<1\) since the overall trend of the points is positive. Our next theorem establishes the order of magnitude for \(\mathrm{LDS }(\uppi )\), confirming this expectation. Interestingly, we find as many as four different behaviors for this order of magnitude according to the relation between \(n\) and \(q\).

Theorem 1.5

There exist constants \(C_0, c_1>0\) such that the following is true. Suppose that \(n\ge 2\), \(0 < q < 1\) and \(\uppi \sim \mu _{n,q}\).

  1. (i)
    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))\approx {\left\{ \begin{array}{ll} \frac{1}{\sqrt{1-q}}&{}1-\frac{C_0}{(\log n)^2}\le q\le 1-\frac{4}{n}\\ \frac{\log n}{\log ((1-q)(\log n)^2)}&{}1-\frac{c_1(\log \log n)^2}{\log n} \le q\le 1-\frac{C_0}{(\log n)^2}\\ \sqrt{\frac{\log n}{\log \left( \frac{1}{q}\right) }}&{}\frac{1}{n}\le q\le 1-\frac{c_1(\log \log n)^2}{\log n} \end{array}\right. }. \end{aligned}$$
    (9)
  2. (ii)

    If \(0<q\le \frac{1}{n}\) then

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))-1\approx nq. \end{aligned}$$

We pause briefly to give an informal explanation of the results of Theorem 1.5. As explained before Theorem 1.2 above, one may again employ the idea of placing \(n(1-q)/\upbeta \) disjoint squares of side length \(\upbeta /(1-q)\) along the diagonal as in Fig. 3. Since we expect the distribution of the points in each such square to be close to that of the Mallows \(\mu _{\upbeta /(1-q), q}\) measure, the results of [24] suggest that the typical length of the longest decreasing subsequence in each square is of order \(1/\sqrt{1-q}\). When considering decreasing subsequences we cannot concatenate the subsequences of disjoint squares, since the overall trend of the points is positive. This heuristic suggests that \(\mathrm{LDS }(\uppi )\) should have order of magnitude at least as large as \(1/\sqrt{1-q}\) and possibly not much larger. This is indeed the order of magnitude obtained in the first regime of Theorem 1.5. However, as \(q\) decreases, a different behavior takes over. Since we have \(n(1-q)/\upbeta \) disjoint squares in which to consider the longest decreasing subsequence, we may expect that one of these squares exhibits atypical behavior, with a decreasing subsequence significantly longer than \(1/\sqrt{1-q}\). The length of such an atypical decreasing subsequence may be predicted rather accurately using the large deviation results in Theorem 1.7 below, and it indeed turns out to be significantly longer than \(1/\sqrt{1-q}\) when \((\log n)^2(1-q)\rightarrow \infty \). This is what causes the transition between the first two regimes in Theorem 1.5. A different strategy for obtaining a decreasing subsequence should also be considered. Consider the length of the longest decreasing subsequence composed solely of consecutive elements, i.e., the largest \(m\) for which \(\uppi (j)>\uppi (j+1)>\cdots >\uppi (j+m-1)\) for some \(j\). 
The proof of Theorem 1.5 shows that the length of such a decreasing subsequence will have the same order of magnitude as the longest decreasing subsequence when \(q\) is so small that the typical longest decreasing subsequence is longer than \(1/(1-q)\). This is what governs the behavior in the third regime of the parameters in the theorem as well as in part of the second regime. Lastly, when \(q\le \frac{1}{n}\), i.e., in the fourth regime of the theorem, the probability that the random permutation differs from the identity is of order \(nq\) (see Proposition 1.9 below). This is what governs the behavior in the fourth regime of the theorem.

Remark 1.6

It seems likely that \(\mathrm{LDS }(\uppi )\) satisfies a law of large numbers similar to the one in Theorem 1.2. Indeed, if one formally takes the limit \(\upbeta \rightarrow -\infty \) in the results of [24] one obtains that \(\mathrm{LDS }(\uppi )\sqrt{1-q}\) should tend to the constant \(\uppi \). We expect this result to hold when \(n(1-q)\rightarrow \infty \) and \((\log n)^2(1-q)\rightarrow 0\), corresponding to the first regime in (9), see also Sect. 8.

Analogously to Theorem 1.3, we obtain large deviation estimates for \(\mathrm{LDS }(\uppi )\) holding for fixed \(n\) and \(q\).

Theorem 1.7

There exist constants \(C,c>0\) such that the following is true. Let \(n\ge 2\), \(0 < q < 1\) and \(\uppi \sim \mu _{n,q}\).

  1. (i)

    If \(0<q<1-\frac{2}{n}\) then for integer \(L\ge 2\),

    $$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi ) \ge L) \le n^8{\left\{ \begin{array}{ll}\left( \frac{C}{(1-q)L^2}\right) ^L&{} L\le \frac{3}{1-q}\\ (C(1-q))^L q^{\frac{L(L-1)}{2}}&{}L>\frac{3}{1-q}\end{array}\right. }. \end{aligned}$$
    (10)

    Moreover, if \(\; 0<q<\frac{1}{2}\) then for integer \(L\ge 2\),

    $$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)\le nC^L q^{\frac{L(L-1)}{2}}. \end{aligned}$$
    (11)
  2. (ii)

    For integer \(L\),

    $$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi ) \ge L) \ge {\left\{ \begin{array}{ll} 1- \left( 1-\left( \frac{c}{(1-q)L^2}\right) ^L\right) ^{\lfloor \frac{n(1-q)}{4} \rfloor } &{} \text {if }\frac{C}{\sqrt{1-q}}\le L \le \frac{1}{1-q}\;\;\text { and }\; \; \\ &{} \quad \frac{1}{2} \le q \le 1-\frac{4}{n}\\ 1-\left( 1-q^{\frac{L(L-1)}{2}} (1-q)^L\right) ^{\lfloor \frac{n}{L}\rfloor } &{} \text {for any}\, L\ge 2 \end{array}\right. }. \end{aligned}$$
    (12)
  3. (iii)

    Let \(\frac{1}{2} \le q \le 1-\frac{4}{n}\). For integer \(2\le L<\frac{c}{\sqrt{1-q}}\),

    $$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi ) < L) \le (C(1-q)L^2)^{\frac{n}{L}}. \end{aligned}$$
    (13)

The discussion above focused on the typical order of magnitude and large deviations of \(\mathrm{LIS }(\uppi )\) and \(\mathrm{LDS }(\uppi )\) when \(\uppi \) is distributed according to the Mallows distribution. Also interesting, and seemingly more difficult, is the study of the typical deviations of \(\mathrm{LIS }(\uppi )\) and \(\mathrm{LDS }(\uppi )\) from their expected value. In this paper we make only a modest contribution towards understanding these quantities, as given in the following proposition. We denote by \(\mathrm{Var }(X)\) the variance of \(X\).

Proposition 1.8

Let \(n\ge 1\), \(0<q<\infty \) and \(\uppi \sim \mu _{n,q}\). Then

$$\begin{aligned} \mathrm{Var }(\mathrm{LIS }(\uppi ))\le n-1. \end{aligned}$$

Furthermore, for all \(t>0\),

$$\begin{aligned} {\mathbb {P}}(|\mathrm{LIS }(\uppi )-{\mathbb {E}}(\mathrm{LIS }(\uppi ))|> t\sqrt{n-1})< 2e^{-t^2/2}. \end{aligned}$$

We note that the proposition applies equally well to the distribution of \(\mathrm{LDS }(\uppi )\) since it applies to arbitrary \(q\) and, as noted above, the reversal of \(\uppi \) is distributed as \(\mu _{n,1/q}\), and satisfies that \(\mathrm{LIS }(\uppi ) = \mathrm{LDS }(\uppi ^R)\). We expect that when \(n\) tends to infinity with \(0<q<1\) fixed then \(\mathrm{Var }(\mathrm{LIS }(\uppi ))\) will indeed be of order \(n\). However, if \(q\) increases to \(1\) as \(n\) tends to infinity then we expect the variance to be of smaller order, see the discussion in Sect. 8.

We finish the description of our main results with a simple proposition which is useful for very small \(q\). It shows that when \(nq\) is much smaller than \(1\), the Mallows distribution is concentrated on the identity permutation.

Proposition 1.9

Suppose \(n\ge 2\), \(0<q\le \frac{1}{n}\) and \(\uppi \sim \mu _{n,q}\). Then

$$\begin{aligned} {\mathbb {P}}(\uppi \text { is not the identity})\approx nq. \end{aligned}$$

Policy on constants: In what follows, \(C\) and \(c\) denote positive numerical constants (independent of all other parameters) whose value can change each time they occur (even inside the same calculation), with the value of \(C\) increasing and the value of \(c\) decreasing. In contrast, the value of numbered constants, such as \(C_0\) or \(c_0\), is fixed and will not change between occurrences.

1.1 Techniques

Previous work on the asymptotics of the longest increasing subsequence followed two main approaches: either through analysis of combinatorial asymptotics or through the probabilistic analysis of interacting particle systems. The combinatorial approach to the longest increasing subsequence makes use of a bijection between permutations and pairs of Young tableaux known as the Robinson–Schensted–Knuth (RSK) correspondence [21, 25, 26]. This bijection is intimately related to the representation theory of the symmetric group [12, 20], the theory of symmetric functions [28], and the theory of partitions [3]. The uniform measure on permutations induces the Plancherel measure on Young diagrams under the RSK correspondence. Vershik and Kerov [32] and Logan and Shepp [22] independently showed a limiting shape for diagrams under the Plancherel measure and proved that

$$\begin{aligned} {\mathbb {E}}(\mathrm{LIS }(\uppi )) = 2\sqrt{n}+o(\sqrt{n})\quad \text {when } \uppi \text { is uniformly distributed}. \end{aligned}$$
(14)

This approach was extended much later in the groundbreaking work of Baik et al. [4] who determined completely the limiting distribution and fluctuations of the longest increasing subsequence of a uniformly distributed permutation.

The second approach has been through the framework of interacting particle processes. Hammersley [19] investigated “Ulam’s problem” of finding the constant in the expected length of the longest monotone subsequence in a uniformly random permutation. Implicit in this work was a certain one-dimensional interacting particle process which Aldous and Diaconis [1] call Hammersley’s process. Aldous and Diaconis gave hydrodynamical limiting arguments for Hammersley’s process to obtain an independent proof of the result (14). This approach led to other generalizations, such as the work of Deuschel and Zeitouni [10] who found the leading behavior of \({\mathbb {E}}(\mathrm{LIS }(\uppi ))\) when \(\uppi \) is a random permutation whose graphical representation is obtained by putting independent and identically distributed points in the plane.

Mueller and Starr [24] were the first to consider the longest increasing subsequence of a random Mallows permutation. Their work focuses on the regime of parameters where \(n(1-q) \rightarrow \upbeta \in (-\infty ,\infty )\) as \(n\rightarrow \infty \). In this regime Starr [29] developed a Boltzmann–Gibbs formulation of the Mallows measure and found a limiting density for the graphical representation of the random permutation. Mueller and Starr relied on this limiting density and applied techniques similar to those of Deuschel and Zeitouni [10] to find the leading behavior of \({\mathbb {E}}(\mathrm{LIS }(\uppi ))\).

Our analysis uses a third approach. In his paper, Mallows [23] describes an iterative procedure for generating a Mallows-distributed permutation. This procedure, which we term the Mallows process, is defined formally in Sect. 2. Informally, it may be described as follows: A set of \(n\) folders is put in a random order into a drawer using the rule that each new folder is inserted at a random position, pushing back all the folders behind it. The probability that the \(i\)th folder is inserted at position \(j\), for \(1\le j\le i\), is proportional to \(q^{j-1}\), independently of all other folders. It is not hard to check that after all \(n\) folders have been placed in the drawer, their positions have the \((n,1/q)\)-Mallows distribution. Our analysis consists of tracking the dynamics of the increasing and decreasing subsequences throughout the evolution of this process.

1.2 Reader’s guide

The remainder of the paper is organized as follows. In Sect. 2 we define the Mallows process formally and derive some useful properties of the Mallows measure from it. In Sect. 3 we bound the displacement of elements in a random Mallows permutation, proving Theorem 1.1. Section 4 is devoted to the study of \(\mathrm{LIS }(\uppi )\). We establish there the large deviation bounds for \(\mathrm{LIS }(\uppi )\) and determine its typical order of magnitude, proving Theorem 1.3 and Proposition 1.4. In Sect. 5 we prove the law of large numbers for \(\mathrm{LIS }(\uppi )\), establishing Theorem 1.2. In Sect. 6 we study \(\mathrm{LDS }(\uppi )\), establishing large deviation bounds for it and determining its typical order of magnitude, proving Theorem 1.5, Theorem 1.7 and Proposition 1.9. In Sect. 7 we prove Proposition 1.8, giving a simple bound on the variance of \(\mathrm{LIS }(\uppi )\) and showing a Gaussian tail inequality. Finally, we end with some directions for further research in Sect. 8.

2 The Mallows process

In this section we describe a random evolution process on permutations, which we term the Mallows process. This process is central to our later analysis of the length of monotone subsequences. The process was known to Mallows [23], and was also used by Gnedin and Olshanski [17, 18] to study variants and extensions of the Mallows measure to infinite groups of permutations. The underlying idea is also useful in the analysis of the number of inversions of a uniformly random permutation, e.g., as in Feller [14, Chap. X.6].

Let \(q>0\). The \(q\) -Mallows process is a permutation-valued stochastic process \((p_n)_{n \ge 1}\), where each \(p_n \in S_n\). The process is initialized by setting \(p_1\) to be the (only) permutation on one element. The process iteratively constructs \(p_{n}\) from \(p_{n-1}\) and an independent random variable \(p_n(n)\) distributed as a truncated geometric. Precisely, letting \((p_n(n))\) be a sequence of independent random variables with the distributions

$$\begin{aligned} {\mathbb {P}}(p_n(n) = j) := \frac{q^{j-1}}{1+q+\cdots +q^{n-1}} = \frac{(1-q)q^{j-1}}{1-q^n} \quad (1\le j\le n), \end{aligned}$$
(15)

each permutation \(p_n\) is defined by

$$\begin{aligned} p_{n}(i) = {\left\{ \begin{array}{ll} p_{n-1}(i)&{} p_{n-1}(i) < p_n(n)\\ p_{n-1}(i)+1 &{} p_{n-1}(i) \ge p_n(n) \end{array}\right. } \quad (1 \le i \le n-1). \end{aligned}$$
(16)

Alluding to our intuitive description in Sect. 1.1, we may think of \(p_n(i)\) as denoting the position of the \(i\)th folder at time \(n\) in the drawer. It is clear by construction that \(p_n\) is a permutation in \(S_n\). Also, note that for each \(i\) and \(n \ge i\), \(p_n(i)\) is non-decreasing in \(n\). Below is an example illustrating the process. In the second step \(n=2\), since the position of the second folder is \(1\), the position of the first folder becomes \(2\). In general, in step \(n\), the position of a folder increases by \(1\) if its position in step \(n-1\) is at or after the position where the \(n\)th folder is inserted, and otherwise it stays the same. We also record the process \((p_n^{-1})\), which may be thought of as the contents of the drawer at time \(n\), in the intuitive description of Sect. 1.1.

$$\begin{aligned} \begin{array}{c@{\quad }c@{\quad }l@{\quad }l} n &{} p_n(n) &{} p_n &{} p_n^{-1}\\ 1 &{} 1 &{} 1 &{} 1\\ 2 &{} 1 &{} 21 &{} 21\\ 3 &{} 2 &{} 312 &{} 231\\ 4 &{} 4 &{} 3124 &{} 2314\\ 5 &{} 2 &{} 41352 &{} 25314\\ 6 &{} 3 &{} 514623 &{} 256314\\ \end{array} \end{aligned}$$
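The update rule (16) is easy to implement. The following Python sketch is ours: it encodes one step of the process, reproduces the trace in the table from the sampled values \(p_n(n)=1,1,2,4,2,3\), and includes a small sampler drawing \(p_n(n)\) from the truncated geometric law (15).

```python
import random

def mallows_step(p, j):
    """One step of the Mallows process: insert the new element at position j.

    Positions at or after j are pushed back by one, as in (16).
    """
    return [x + 1 if x >= j else x for x in p] + [j]

def run_process(positions):
    """Build p_n from the sampled values p_m(m) = positions[m-1], m = 1, ..., n."""
    p = []
    for j in positions:
        p = mallows_step(p, j)
    return p

def sample_mallows_process(n, q, rng=random):
    """Sample p_n; by Lemma 2.1 below it is distributed as mu_{n, 1/q}."""
    p = []
    for m in range(1, n + 1):
        weights = [q ** (j - 1) for j in range(1, m + 1)]   # truncated geometric (15)
        p = mallows_step(p, rng.choices(range(1, m + 1), weights=weights)[0])
    return p

# Reproduce the row n = 6 of the table.
p6 = run_process([1, 1, 2, 4, 2, 3])
assert p6 == [5, 1, 4, 6, 2, 3]                                       # p_6 = 514623
assert [p6.index(v) + 1 for v in range(1, 7)] == [2, 5, 6, 3, 1, 4]   # p_6^{-1} = 256314
```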

Lemma 2.1

Let \(q>0\) and let \((p_n)_{n \ge 1}\) be the \(q\)-Mallows process. Then, for every \(n\ge 1\), \(p_n\) is distributed according to the Mallows distribution \(\mu _{n,1/q}\).

Proof

The claim is trivial for \(n=1\). Assume by induction that for any \(\sigma _n \in S_n\), \({\mathbb {P}}(p_n = \sigma _n) \propto q^{-\mathrm{Inv }(\sigma _n)}\), and let us prove the same for \(n+1\). Fix a permutation \(\sigma _{n+1} \in S_{n+1}\). For \(1 \le i \le n\), define a permutation \(\sigma _n\in S_n\) by

$$\begin{aligned} \sigma _n(i) := {\left\{ \begin{array}{ll} \sigma _{n+1}(i) -1 &{} \text { if } \sigma _{n+1}(i) > \sigma _{n+1}(n+1)\\ \sigma _{n+1}(i) &{} \text { if } \sigma _{n+1}(i) < \sigma _{n+1}(n+1)\\ \end{array}\right. } \end{aligned}$$

It follows from the definition of the Mallows process that \(p_{n+1} = \sigma _{n+1}\) if and only if \(p_{n+1}(n+1)=\sigma _{n+1}(n+1)\) and \(p_n = \sigma _n\). Noting that \(\mathrm{Inv }(\sigma _{n+1}) = \mathrm{Inv }(\sigma _n) + n+1-\sigma _{n+1}(n+1)\), the induction hypothesis implies that

$$\begin{aligned} {\mathbb {P}}(p_{n+1} = \sigma _{n+1})&= {\mathbb {P}}(p_n = \sigma _n)\cdot {\mathbb {P}}(p_{n+1}(n+1) = \sigma _{n+1}(n+1)) \\&= \frac{q^{-\mathrm{Inv }(\sigma _n)}}{Z_{n,1/q}} \cdot \frac{q^{\sigma _{n+1}(n+1)-1}}{1+q+\cdots +q^n} \\&= \frac{q^{-\mathrm{Inv }(\sigma _n)}}{Z_{n,1/q}} \cdot \frac{(1/q)^{n-\sigma _{n+1}(n+1)+1}}{1+(1/q)+\cdots +(1/q)^n} \propto q^{-\mathrm{Inv }(\sigma _{n+1})}. \end{aligned}$$

\(\square \)
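The lemma can also be verified by exhaustive computation for small \(n\): summing, over all sequences of insertion positions, the probability of producing each permutation, and comparing with \(\mu _{n,1/q}\). A Python sketch (ours; the choices \(n=3\), \(q=0.4\) are arbitrary):

```python
from itertools import permutations, product

def inversions(pi):
    """Number of inversions of the permutation pi."""
    n = len(pi)
    return sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))

def mallows_step(p, j):
    """One step of the Mallows process: entries at or after position j are pushed back."""
    return tuple(x + 1 if x >= j else x for x in p) + (j,)

def process_distribution(n, q):
    """Exact law of p_n: sum the probability of every insertion-position sequence."""
    dist = {}
    for js in product(*[range(1, m + 1) for m in range(1, n + 1)]):
        prob = 1.0
        p = ()
        for m, j in enumerate(js, start=1):
            prob *= q ** (j - 1) / sum(q ** k for k in range(m))   # the law (15)
            p = mallows_step(p, j)
        dist[p] = dist.get(p, 0.0) + prob
    return dist

n, q = 3, 0.4
dist = process_distribution(n, q)
target = {pi: (1 / q) ** inversions(pi) for pi in permutations(range(1, n + 1))}
Zt = sum(target.values())
for pi in target:
    assert abs(dist[pi] - target[pi] / Zt) < 1e-9   # p_n ~ mu_{n, 1/q}
```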

As a by-product, the above recursion also shows that the formula (2) for the normalizing constant holds. Recall that \(\uppi ^R\), the reversal of a permutation \(\uppi \), is defined by \(\uppi ^R(i) = \uppi (n+1-i)\).

Lemma 2.2

For any \(n\ge 1\) and \(q>0\), if \(\uppi \sim \mu _{n,q}\) then \(\uppi ^R\sim \mu _{n,1/q}\) and \(\uppi ^{-1}\sim \mu _{n,q}\).

Proof

The lemma is immediate upon noting that both taking reversal and taking inverse are bijections on \(S_n\), and that \(\mathrm{Inv }(\uppi ^R) = {n \atopwithdelims ()2} - \mathrm{Inv }(\uppi )\) and \(\mathrm{Inv }(\uppi ^{-1}) = \mathrm{Inv }(\uppi )\). \(\square \)
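The two inversion identities used in the proof are easy to confirm by enumeration; a Python sketch (ours):

```python
from itertools import permutations

def inversions(pi):
    """Number of inversions of the permutation pi."""
    n = len(pi)
    return sum(pi[i] > pi[j] for i in range(n) for j in range(i + 1, n))

def inverse(pi):
    """The inverse permutation: inverse(pi)[v-1] = position of value v in pi."""
    out = [0] * len(pi)
    for pos, v in enumerate(pi, start=1):
        out[v - 1] = pos
    return tuple(out)

n = 5
for pi in permutations(range(1, n + 1)):
    assert inversions(pi[::-1]) == n * (n - 1) // 2 - inversions(pi)   # Inv(pi^R)
    assert inversions(inverse(pi)) == inversions(pi)                   # Inv(pi^{-1})
```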

This lemma allows us to define four different permutations related to the \(q\)-Mallows process, all having the Mallows distribution \(\mu _{n,q}\).

Corollary 2.3

Let \(q>0\) and let \((p_n)_{n\ge 1}\) be the \(q\)-Mallows process. Then each of the following permutations is distributed as \(\mu _{n,q}\).

  1. (i)

    \(\uppi :=p_n^R\). That is, \(\uppi (i) = p_n(n+1-i)\).

  2. (ii)

    \(\uppi :=(p_n^R)^{-1}\). That is, \(\uppi (i)=n+1 - p_n^{-1}(i)\).

  3. (iii)

    \(\uppi :=(p_n^{-1})^R\). That is, \(\uppi (i)=p_n^{-1}(n+1-i)\).

  4. (iv)

    \(\uppi :=((p_n^{-1})^R)^{-1}\). That is, \(\uppi (i) = n+1-p_n(i)\).

This corollary will be useful in the sequel, allowing us to prove results about the Mallows distribution by choosing from the above list a convenient coupling of the Mallows distribution and the Mallows process.

2.1 Basic properties of the Mallows process

In this section we let \(q\) be an arbitrary positive number and let \((p_n)\) be the \(q\)-Mallows process. Let \(I=(i_1,\ldots ,i_k)\) be an increasing sequence of indices and let \(\uppi \) be any permutation. Let \({{\uppi }_{I}} \in S_{k}\) denote the induced relative ordering of \(\uppi \) restricted to \(I\). That is, \({{\uppi }_{I}}(j)>{{\uppi }_{I}}(l)\) if and only if \(\uppi (i_j)>\uppi (i_l)\). The following fact is clear from the definition of the Mallows process.

Fact 2.4

Let \(I=(i_1,\ldots ,i_k)\) be an increasing sequence and let \(n\ge i_k\). Then \((p_n)_I\) is a function only of \(p_{i_1}(i_1),p_{i_1+1}(i_1+1),\ldots ,p_{i_k-1}(i_k -1),p_{i_k}(i_k)\). In other words, \((p_n)_I\) is independent of the variables \(p_i(i)\) for \(i<i_1\) or \(i>i_k\).

Lemma 2.5

(Independence of induced orderings) Let \(I=(i_1,\ldots ,i_k)\) and \(I'= (i'_1,\ldots ,i'_\ell )\) be two increasing sequences such that \(i_k < i_1'\). Let \(\uppi \sim \mu _{n,q}\) for \(n\ge i'_\ell \). Then, \({{\uppi }_{I}}\) and \({{\uppi }_{I'}}\) are independent.

Proof

Using Corollary 2.3, we couple \(\uppi \) with \((p_n)\) so that \(\uppi (i) = n+1 - p_n(i)\) for all \(i\). By the definition of the Mallows process, the variables \((p_i(i))\) are independent. By Fact 2.4, \({{\uppi }_{I}}\) and \({{\uppi }_{I'}}\) are functions of independent variables and are therefore independent. \(\square \)

For a sequence of indices \(I=(i_1, \ldots , i_m)\) and an integer \(b\), define the sequence \(I+b:=(i_1+b, \ldots , i_m+b)\).

Lemma 2.6

(Translation invariance) Let \(I=(i_1, \ldots , i_k)\) be an increasing sequence and let \(\uppi \sim \mu _{n,q}\). Then, for any integer \(1 \le b \le n-i_k\), \({{\uppi }_{I}}\) and \({{\uppi }_{I+b}}\) have the same distribution. That is, for any \(\omega \in S_k\),

$$\begin{aligned} {\mathbb {P}}({{\uppi }_{I}} = \omega ) = {\mathbb {P}}({{\uppi }_{I+b}} = \omega ). \end{aligned}$$

Proof

Observe that we can make the following simplifying assumptions. First, we may assume that \(b=1\), since the general claim then follows by applying the result \(b\) times. Second, under the assumption \(b=1\), \(I\) is contained in \((1,2,\ldots , n-1)\), and hence we may deduce the lemma for the given \(I\) from the lemma with \(I=(1,2,\ldots , n-1)\).

Assume then that \(b=1\) and \(I=(1,2,\ldots , n-1)\). It is straightforward to see that there exists a unique bijection \(T\) from \(S_n\) to itself which preserves the number of inversions (and hence the Mallows distribution), such that \((T(\uppi ))_{I+1} = {{\uppi }_{I}}\). This establishes the lemma. \(\square \)

It is simple to check that the above fact is not necessarily true for sequences which are not translates. Suppose \(\uppi \sim \mu _{3,q}\). By explicit calculation,

$$\begin{aligned} {\mathbb {P}}(\uppi (2)>\uppi (1)) = \frac{1+q+q^2}{Z_{3,q}}\quad \text {whereas}\quad {\mathbb {P}}(\uppi (3)>\uppi (1)) = \frac{1+2q}{Z_{3,q}}, \end{aligned}$$

so that the probabilities are different for all \(q\ne 1\).
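The displayed probabilities can be reproduced by direct enumeration of \(S_3\); the following short sketch (ours, with a fixed sample value \(q=0.5\)) checks both of them, as well as formula (2) for \(Z_{3,q}\):

```python
from itertools import permutations
from math import isclose

def inv_count(p):
    # Inv(p): the number of out-of-order pairs, as in (1)
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

q = 0.5
weight = {p: q ** inv_count(p) for p in permutations((1, 2, 3))}
Z = sum(weight.values())                        # equals (1+q)(1+q+q^2) by (2)
p21 = sum(w for p, w in weight.items() if p[1] > p[0]) / Z
p31 = sum(w for p, w in weight.items() if p[2] > p[0]) / Z

assert isclose(Z, (1 + q) * (1 + q + q * q))
assert isclose(p21, (1 + q + q * q) / Z)        # P(pi(2) > pi(1))
assert isclose(p31, (1 + 2 * q) / Z)            # P(pi(3) > pi(1))
```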

One corollary of translation invariance is that the permutation induced on any sequence of consecutive elements is distributed like a shorter Mallows permutation.

Corollary 2.7

Let \(I=(i,i+1,\ldots ,i+m-1)\subseteq [n]\) be a sequence of consecutive elements. If \(\uppi \sim \mu _{n,q}\) then \({{\uppi }_{I}}\sim \mu _{m,q}\).

Proof

Since \(q\) is arbitrary, it suffices to prove the corollary with \(\uppi \) replaced by \(p_n\), so that \(q\) is replaced by \(1/q\). For \(i=1\), the claim follows directly from the definition of the Mallows process, since \((p_n)_I = p_m \sim \mu _{m,1/q}\). For \(i>1\), the claim follows by the translation invariance given by Lemma 2.6. \(\square \)

Remark 2.8

One can also construct a Mallows permutation indexed by the infinite sets \({\mathbb N}\) or \({\mathbb Z}\) [17, 18]. A version of Corollary 2.7 would still be valid in this case, yielding the finite Mallows distribution as an induced permutation of the infinite one. The infinite permutation has the advantage that it is constructed out of a sequence of i.i.d. geometric random variables rather than just independent truncated geometric variables as in the finite construction. However, the fact that the geometric random variables are unbounded complicates some aspects of our proofs and in this paper we chose to work only in the finite setting.

3 The displacement of an element in a Mallows permutation

In this section we prove Theorem 1.1. Our proof of the upper bounds follows that of [8, Lemma 17], with slightly more precise estimates.

Fix \(0<q<1\). Recall the \(q\)-Mallows process \((p_i)\) from Sect. 2, defined for all \(i\ge 1\). We first prove the upper bounds in the theorem. Fix \(n\ge 1\) and consider the permutation \(\uppi \) defined by \(\uppi (i):=n+1-p_n(i)\), which by Corollary 2.3 is distributed according to \(\mu _{n,q}\). Note first that for all \(1\le i\le n\),

$$\begin{aligned} \uppi (i) - i = n+1 - p_n(i) - i = n-i-p_n(i)+p_i(i) - (p_i(i) - 1). \end{aligned}$$

Thus, since \(p_i(i)\ge 1\) and \(p_n(i) - p_i(i) \le n-i\), we have

$$\begin{aligned} |\uppi (i) - i|1\!\!1_{(\uppi (i)-i<0)}\le p_i(i)-1\quad \text { for }1\le i\le n. \end{aligned}$$
(17)

Similarly, let \({\uppi ^{'}}\) be defined by \({\uppi ^{'}}(i):=p_n(n+1-i)\), so that \({\uppi ^{'}}\sim \mu _{n,q}\) by Corollary 2.3. For all \(1\le i\le n\),

$$\begin{aligned}&{{\uppi }^{\prime }}(n+1-i) - (n+1-i) = p_n(i) - (n+1-i)\\&\quad = -(n-i-p_n(i)+p_i(i)) + (p_i(i) - 1). \end{aligned}$$

Thus, again since \(p_i(i)\ge 1\) and \(p_n(i) - p_i(i) \le n-i\), we have

$$\begin{aligned} |{{\uppi }^{'}}(n+1-i) - (n+1-i)|1\!\!1_{({\uppi ^{'}}(n+1-i) - (n+1-i)>0)}\le p_i(i)-1, \end{aligned}$$

and exchanging the roles of \(i\) and \(n+1-i\) we obtain

$$\begin{aligned} |{{\uppi }^{'}}(i) - i|1\!\!1_{({\uppi ^{'}}(i) - i>0)}\le p_{n+1-i}(n+1-i)-1\quad \text { for } 1\le i\le n. \end{aligned}$$
(18)

Putting together (17) and (18), and recalling that \(\uppi , {\uppi ^{'}}\sim \mu _{n,q}\) we conclude that for all \(1\le i\le n\) and integer \(t\ge 1\),

$$\begin{aligned} {\mathbb {P}}(|\uppi (i)-i|\ge t)&= {\mathbb {P}}(\uppi (i)-i\ge t)+{\mathbb {P}}(\uppi (i)-i\le -t)\\&\le {\mathbb {P}}(p_{n+1-i}(n+1-i)\ge t+1) + {\mathbb {P}}(p_i(i)\ge t+1). \end{aligned}$$
(19)

Now recall from (15) that \(p_j(j)\) has the distribution of a geometric random variable with parameter \(1-q\), conditioned to be at most \(j\). In particular, \(p_j(j)\) is stochastically dominated by this geometric random variable and thus

$$\begin{aligned} {\mathbb {P}}(p_j(j)\ge t+1)\le q^t\quad \text { for } 1\le j\le n\text { and integer }t\ge 1. \end{aligned}$$
(20)
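Since \(p_j(j)\) is a geometric random variable with parameter \(1-q\) conditioned to be at most \(j\), its tail has a simple closed form, and (20) can be checked directly. A short sketch (ours; the helper name is hypothetical):

```python
def trunc_geom_tail(q, j, t):
    # P(p_j(j) >= t + 1) when P(p_j(j) = s) = (1 - q) q^(s-1) / (1 - q^j)
    # for 1 <= s <= j, i.e. geometric(1-q) conditioned to be at most j
    if t >= j:
        return 0.0
    return (q ** t - q ** j) / (1 - q ** j)

# the bound (20): the conditioned variable is dominated by the plain geometric
for t in range(1, 10):
    assert trunc_geom_tail(0.7, 12, t) <= 0.7 ** t
```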

Putting together (19) and (20) yields (3). Thus, the upper bound of (4) follows since \(|\uppi (i)-i| \le n-1\) and

$$\begin{aligned} {\mathbb {E}}|\uppi (i)-i|=\sum _{t=1}^\infty {\mathbb {P}}(|\uppi (i)-i|\ge t)\le \sum _{t=1}^\infty 2q^t=\frac{2q}{1-q}. \end{aligned}$$

Next we derive a lower bound on the displacement. This is done in the next three claims. We start by observing a monotonicity property of the Mallows process. Let

$$\begin{aligned} A = \{(a_1,a_2,\ldots )\,:\, a_j\in \{1,\ldots , j\}\}. \end{aligned}$$

By definition of the Mallows process, for each \(n\), the permutation \(p_n\) is a function of the vector \((p_1(1), \ldots , p_n(n))\), whose elements satisfy \(p_j(j)\in \{1,\ldots , j\}\). For \(a\in A\), denote by \(p_n^a\) the permutation \(p_n\) resulting from taking \(p_j(j) = a_j\).

Lemma 3.1

For each \(n\ge 1\) and \(1\le j\le n\), \(p_n^a(j)\) is increasing in \(a_j\). That is, if \(a, a'\in A\) satisfy \(a_k = a'_k\) for all \(k\ne j\) and \(a_j>a'_j\) then \(p_n^a(j)>p_n^{a'}(j)\).

Proof

Fix \(n, j, a,a'\) as in the lemma. Trivially \(p_j^a(j) > p_j^{a'}(j)\). Hence it suffices to observe by induction that for \(k\ge j\),

$$\begin{aligned}&p_{k+1}^a(j) \!=\! p_k^a(j) + 1\!\!1_{(a_{k+1} \le p_k^a(j))} \!=\! p_k^a(j) + 1\!\!1_{(a'_{k+1} \le p_k^a(j))} \\&\quad > p_k^{a'}(j) + 1\!\!1_{(a'_{k+1} \le p_k^{a'}(j))} = p_{k+1}^{a'}(j). \end{aligned}$$

\(\square \)
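The recursion displayed in the proof also gives a direct way to build \(p_n^a\) from the seed values \(a_j = p_j(j)\): inserting element \(k+1\) at value \(a_{k+1}\) shifts every current value that is at least \(a_{k+1}\) up by one. A sketch (ours; the function name is hypothetical) that also illustrates the monotonicity of Lemma 3.1 on a small example:

```python
def mallows_from_seeds(a):
    # Build p_n^a from seeds a = (a_1, ..., a_n) with a_j in {1, ..., j},
    # using p_{k+1}(j) = p_k(j) + 1{a_{k+1} <= p_k(j)} and p_{k+1}(k+1) = a_{k+1}.
    p = []
    for ak in a:
        p = [v + 1 if ak <= v else v for v in p]
        p.append(ak)
    return p

a  = (1, 2, 1, 3)                    # valid seeds: a_j in {1, ..., j}
a2 = (1, 1, 1, 3)                    # same seeds except a_2 decreased
p, p2 = mallows_from_seeds(a), mallows_from_seeds(a2)
assert sorted(p) == [1, 2, 3, 4]     # the result is a permutation
assert p[1] > p2[1]                  # Lemma 3.1: p_n^a(2) increases with a_2
```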

Lemma 3.2

For all integer \(n\ge 1, 1\le i\le n\) and \(t\ge 1\), if \(\uppi \sim \mu _{n,q}\) then

$$\begin{aligned} {\mathbb {P}}(|\uppi (i) - i| \ge t)\ge \max ({\mathbb {P}}(p_i(i)\ge 2t),\, {\mathbb {P}}(p_{n+1-i}(n+1-i)\ge 2t)). \end{aligned}$$

Proof

Fix \(n,i\) and \(t\) as in the lemma. Couple \(\uppi \) with the Mallows process so that \(\uppi (j)=n+1-p_n(j)\) as in Corollary 2.3. Condition on \((p_j(j))\) for \(j\ne i\) and observe that under this conditioning, the value of \(p_n(i)\), and hence the value of \(\uppi (i)\), is a function of \(p_i(i)\). By Lemma 3.1, under the conditioning, there are at most \(2t-1\) (contiguous) values of \(p_i(i)\) for which \(|\uppi (i) - i|<t\). Since the \((p_j(j))\) are independent and \({\mathbb {P}}(p_i(i)=s)\) is a decreasing function of \(s\), it follows that

$$\begin{aligned}&{\mathbb {P}}(|\uppi (i) - i|\ge t) = {\mathbb {E}}\left[ {\mathbb {P}}(|\uppi (i) - i|\ge t\, |\, (p_j(j))_{j\ne i})\right] \\&\quad \ge {\mathbb {E}}\left[ {\mathbb {P}}(p_i(i)\ge 2t\,|\,(p_j(j))_{j\ne i})\right] = {\mathbb {P}}(p_i(i)\ge 2t). \end{aligned}$$

The proof of the bound \({\mathbb {P}}(|\uppi (i) - i|\ge t)\ge {\mathbb {P}}(p_{n+1-i}(n+1-i)\ge 2t)\) is analogous by using the coupling \(\uppi (j)=p_n(n+1-j)\) of Corollary 2.3 and applying Lemma 3.1 with \(j=n+1-i\). \(\square \)

Corollary 3.3

For all integer \(n\ge 3, 1\le i\le n\) and \(1\le t\le \frac{n+5}{8}\), if \(\uppi \sim \mu _{n,q}\) then

$$\begin{aligned} {\mathbb {P}}(|\uppi (i) - i|\ge t)\ge \frac{1}{2}q^{2t-1}. \end{aligned}$$

Proof

Let \(j = \max (i, n+1-i)\). Observe that \(j\ge \frac{n+1}{2}\). Note also that our assumptions imply that \(2t\le \frac{n+1}{2}\le j\). By Lemma 3.2 and (15),

$$\begin{aligned} {\mathbb {P}}(|\uppi (i) - i|\ge t) \ge {\mathbb {P}}(p_j(j)\ge 2t) = \frac{1-q^{j-2t+1}}{1-q^j} q^{2t-1}. \end{aligned}$$

Our assumptions imply that \(t\le \frac{n+5}{8}\le \frac{j+2}{4}\) and thus \(j-2t+1\ge \frac{j}{2}\). Hence we conclude that

$$\begin{aligned} {\mathbb {P}}(|\uppi (i) - i|\ge t) \ge \frac{1-q^{j/2}}{1-q^j}q^{2t-1} = \frac{q^{2t-1}}{1+q^{j/2}} \ge \frac{1}{2} q^{2t-1}. \end{aligned}$$

\(\square \)

Finally, we fix \(n\ge 2\) and \(1\le i\le n\) and prove a lower bound for \({\mathbb {E}}|\uppi (i) - i|\). We consider three cases separately. If \(n\ge 3\) and \(q<1-\frac{1}{n}\) then by Corollary 3.3,

$$\begin{aligned} {\mathbb {E}}|\uppi (i) - i|&\ge \sum _{t=1}^{\lfloor \frac{n+5}{8} \rfloor } {\mathbb {P}}(|\uppi (i) - i|\ge t)\ge \frac{1}{2} \sum _{t=1}^{\lfloor \frac{n+5}{8} \rfloor } q^{2t-1}\\&= \frac{q (1-q^{2\lfloor (n+5)/8\rfloor })}{2 (1-q^2)} \ge c\frac{q}{1-q} \end{aligned}$$

for some absolute constant \(c>0\). If \(n\ge 3\) and \(q\ge 1-\frac{1}{n}\) then, similarly, by Corollary 3.3,

$$\begin{aligned} {\mathbb {E}}|\uppi (i) - i| \ge \sum _{t=1}^{\lfloor \frac{n+5}{8} \rfloor } {\mathbb {P}}(|\uppi (i) - i|\ge t)\ge \frac{1}{2} \sum _{t=1}^{\lfloor \frac{n+5}{8} \rfloor } q^{2t-1} \!\ge \! \frac{1}{2} \sum _{t=1}^{\lfloor \frac{n+5}{8} \rfloor } \left( 1-\frac{1}{n}\right) ^{2t-1}\!\ge \! c n \end{aligned}$$

for some absolute constant \(c>0\). Finally, if \(n=2\) then by Lemma 3.2,

$$\begin{aligned} {\mathbb {E}}|\uppi (i) - i| = {\mathbb {P}}(|\uppi (i) - i|\ge 1) \ge {\mathbb {P}}(p_2(2)\ge 2) = \frac{q}{1+q}\ge \frac{q}{2}. \end{aligned}$$

Thus in all cases we have shown that \({\mathbb {E}}|\uppi (i) - i|\ge c\min (\frac{q}{1-q}, n-1)\), as required.
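For small \(n\) the displacement bounds can be verified exactly by enumeration. A sketch (ours), checking the upper bound \(\min (2q/(1-q),\,n-1)\) from (4) and the exact \(n=2\) value \(q/(1+q)\) computed above:

```python
from itertools import permutations
from math import isclose

def inv_count(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def expected_displacement(n, q, i):
    # E|pi(i) - i| under mu_{n,q}, by direct enumeration (i is 1-based)
    weight = {p: q ** inv_count(p) for p in permutations(range(1, n + 1))}
    Z = sum(weight.values())
    return sum(w * abs(p[i - 1] - i) for p, w in weight.items()) / Z

q = 0.5
assert isclose(expected_displacement(2, q, 1), q / (1 + q))   # the n = 2 case
for n in (3, 4, 5):
    for i in range(1, n + 1):
        assert expected_displacement(n, q, i) <= min(2 * q / (1 - q), n - 1)
```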

4 Increasing subsequences

Our goal in this section is to establish Theorem 1.3 and Proposition 1.4. We begin in Sect. 4.1 with the lower bound in (7) and the bound (8). In Sect. 4.2 we use a union bound argument to show that the probability of a very long increasing subsequence cannot be too large and establish the upper bound in (7). In the same section we complete the proof of Theorem 1.3 and Proposition 1.4 by applying the previous results to estimate the expectation of \(\mathrm{LIS }(\uppi )\). Lastly, a result extending our tail bounds for \(\mathrm{LIS }(\uppi )\) is proved at the end of Sect. 4.2. This result is used in the arguments of Sect. 5.

4.1 Lower bounds on the probability of a long increasing subsequence

In this section we will show a lower bound on the probability that there is a long increasing subsequence, proving the lower bound of (7) and the bound (8) in Theorem 1.3.

The proof proceeds by defining a sequence of stopping times for the Mallows process at which elements are added to an increasing subsequence. We show that the waiting time to build a long increasing subsequence in this way is not too large with high probability.

4.1.1 Large deviation bounds for binomial random variables

The next proposition collects some standard results on binomial random variables which will be used in the sequel.

Proposition 4.1

Suppose \(n\ge 1, 0<p<1\) and let \(S\sim \mathrm{Bin }(n,p)\).

  1. (i)

    For all \(t>0\),

    $$\begin{aligned} {\mathbb {P}}(S - np < -t) < \exp \left( -\frac{t^2}{2np}\right) . \end{aligned}$$

    In particular,

    $$\begin{aligned} {\mathbb {P}}\left( S<\frac{1}{2}np\right) \le \exp \left( -\frac{1}{8}np\right) . \end{aligned}$$
    (21)
  2. (ii)

    If \(p<\frac{1}{2}\) then for all integer \(np\le t\le n\),

    $$\begin{aligned} {\mathbb {P}}(S \ge t) \ge \left( \frac{np}{et}\right) ^t. \end{aligned}$$
    (22)

Proof

The first part is proved, for instance, in [2, Theorem A.1.13]. For the second part, observe first that

$$\begin{aligned} {\mathbb {P}}(S\ge t) \ge \left( {\begin{array}{c}n\\ t\end{array}}\right) p^t (1-p)^{n-t} \ge \left( \frac{np}{t}\right) ^{t} (1-p)^{n-t}. \end{aligned}$$

Now note that \(\log (1-p)\ge -p-p^2\) for \(0\le p\le 1/2\). Thus, using that \(t\ge np\) in the third inequality,

$$\begin{aligned} {\mathbb {P}}(S\ge t) \ge \left( \frac{np}{t}\right) ^t e^{-(n-t)(p+p^2)}\ge \left( \frac{np}{t}\right) ^t e^{-np+p(t-np)}\ge \left( \frac{np}{t}\right) ^t e^{-t}. \end{aligned}$$

\(\square \)
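Both parts of Proposition 4.1 can be sanity-checked against exact binomial tails. A short numerical sketch (ours):

```python
from math import comb, exp

def binom_tail(n, p, t):
    # exact P(S >= t) for S ~ Bin(n, p)
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(t, n + 1))

n, p = 200, 0.1                                  # np = 20
# (21): P(S < np/2) <= exp(-np/8)
assert 1 - binom_tail(n, p, 10) <= exp(-n * p / 8)
# (22): P(S >= t) >= (np / (e t))^t for an integer t with np <= t <= n
t = 30
assert binom_tail(n, p, t) >= (n * p / (exp(1) * t)) ** t
```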

4.1.2 Lower bounds for \({\mathbb {P}}(\mathrm{LIS }(\uppi )\ge L)\)

Fix \(n\ge 1\) and \(\frac{1}{2} \le q \le 1-\frac{4}{n}\). Let \((p_m)\) be the \(q\)-Mallows process, and define, for \(m\ge 1\), \({{\uppi }_{m}} := (p_m)^R\), so that \({{\uppi }_{m}}\sim \mu _{m,q}\) by Corollary 2.3. Fix an integer \(1\le L\le n\) and consider the following strategy for finding an increasing subsequence in \({{\uppi }_{n}}\). Let

$$\begin{aligned} W := \left[ \frac{1}{1-q},\, \frac{1}{1-q} + \frac{n}{1000L} + 1\right] \cap \mathbb {Z} \end{aligned}$$

and set \(T_0:=\max (W)\). Consider the minimal time \(S_1>T_0\) for which \(p_{S_1}(S_1)\in W\), and consider the first subsequent time \(T_1>S_1\) for which \(p_{T_1}(S_1)\notin W\). Then repeat the process and find the next subsequent time \(S_2>T_1\) for which \(p_{S_2}(S_2)\in W\), and so on. Formally, with \(T_0=\max (W)\), we inductively define the stopping times for \(i\ge 1\) as follows:

$$\begin{aligned} S_i&:= \min \{t > T_{i-1} \ : \ p_t(t) \in W \},\\ T_i&:= \min \{t > S_i \ : \ p_t(S_i)\notin W\}. \end{aligned}$$

We claim that for \(k\ge 1\) and \(m\ge S_k\), the sequence \(({{\uppi }_{m}}(S_1),\ldots , {{\uppi }_{m}}(S_k))\) is increasing. This is equivalent to the sequence \((p_m(S_1),\ldots , p_m(S_k))\) being decreasing. To see this note that, by definition of the Mallows process, the relative order of \(p_m(S_i)\) and \(p_m(S_{i+1})\) is the same as for \(p_{S_{i+1}}(S_i)\) and \(p_{S_{i+1}}(S_{i+1})\). Now observe that the definition of the stopping times above implies that \(p_{S_{i+1}}(S_i)>\max W\ge p_{S_{i+1}}(S_{i+1})\). We conclude that if \(m\ge S_k\) then \(\mathrm{LIS }({{\uppi }_{m}})\ge k\). Thus we arrive at

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{n}})\ge L) \ge {\mathbb {P}}(S_L\le n). \end{aligned}$$
(23)
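The stopping-time construction can be simulated directly from the seed values \(a_t=p_t(t)\). The following sketch (ours, with hypothetical names) tracks the process values, records the times \(S_i\), and confirms on an example that the final values at the selected indices decrease, so that they index an increasing subsequence of \(p_n^R\):

```python
def greedy_window_indices(a, lo, hi):
    # Run the Mallows process from seeds a = (a_1, ..., a_n) and record the
    # stopping times S_1 < S_2 < ... at which a seed lands in W = [lo, hi];
    # between S_i and T_i we wait until the tracked value p_t(S_i) leaves W.
    vals = {}                  # current values p_t(i) of all indices seen so far
    picked = []                # the indices S_1, S_2, ...
    waiting, cur = True, None
    for t, at in enumerate(a, start=1):
        for i in vals:                       # inserting value a_t shifts every
            if at <= vals[i]:                # current value >= a_t up by one
                vals[i] += 1
        vals[t] = at
        if waiting:
            if t > hi and lo <= at <= hi:    # time S_i: seed falls in W
                cur, waiting = t, False
                picked.append(t)
        elif vals[cur] > hi:                 # time T_i: tracked value exits W
            waiting = True
    return picked, vals

a = (1, 1, 1, 1, 3, 1, 1, 3)
picked, vals = greedy_window_indices(a, 2, 4)
assert picked == [5, 8]
# the final values along the picked indices decrease, hence the picked indices
# carry an increasing subsequence of the reversed permutation p_n^R
assert vals[5] > vals[8]
```

The check `t > hi` encodes the convention \(T_0=\max (W)\); since the tracked value can only increase, leaving \(W\) means exceeding \(\max (W)\), matching the observation \(p_{S_{i+1}}(S_i)>\max W\) above.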

In the rest of the section we focus on estimating the right-hand side of the above inequality in two regimes of \(n,L\) and \(q\). We start by describing a common part to both regimes. We always take

$$\begin{aligned} \frac{1}{2} \le q\le 1 - \frac{4}{n}\quad \text {and}\quad L\ge n(1-q) \end{aligned}$$
(24)

and observe that this implies that

$$\begin{aligned} \max (W) \le \frac{2}{1-q} \le \frac{n}{2}. \end{aligned}$$
(25)

Thus, by (15), for any \(i>\max (W)\) and any \(1\le j\le \max (W)+1\),

$$\begin{aligned} {\mathbb {P}}(p_i(i)=j) = \frac{(1-q)q^{j-1}}{1-q^i} \ge (1-q)q^{j-1} \ge \frac{(1-q)}{16} =: c_1(1-q). \end{aligned}$$

The second inequality follows from the bound \(q \ge 1/2\) once we note that \((1-x)^{1/x} \ge 1/4\) for \(x \le 1/2\). In particular, if \(i>\max (W)\) then

$$\begin{aligned}&{\mathbb {P}}(p_i(i)\in W) \ge c_1(1-q)|W|\ge \frac{c_1(1-q)n}{1000L} =: \frac{c_2(1-q)n}{L}, \end{aligned}$$
(26)
$$\begin{aligned}&{\mathbb {P}}(p_i(i)\le \min (W)) \ge c_1(1-q)\min (W)\ge c_1. \end{aligned}$$
(27)

Next, we note the simple decomposition

$$\begin{aligned} S_{L} = T_0 + \sum _{i=1}^L S_i - T_{i-1} + \sum _{i=1}^{L-1} T_{i} - S_{i}. \end{aligned}$$

Since \(T_0\le \frac{n}{2}\) by the definition of \(T_0\) and (25), we may plug this decomposition into (23) to obtain

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{n}})\ge L)\ge {\mathbb {P}}\left( \sum _{i=1}^L S_i - T_{i-1} \le \frac{n}{4},\ \ \sum _{i=1}^{L-1} T_{i} -S_{i} \le \frac{n}{4}\right) . \end{aligned}$$
(28)

We aim to bound the right-hand side by a product of two terms.

First, we note explicitly the following simple facts which follow from the definition of the Mallows process and our definition of the stopping times \((T_i)\) and \((S_i)\):

  1. 1.

For each \(k\ge 0\), \(|\{T_k < i \le S_{k+1}\,:\, p_i(i)\in W\}|= 1\).

  2. 2.

For each \(k\ge 1\), \(|\{S_k < i \le T_{k}\,:\, p_i(i)\le \min W\}|\le |W|\).

Second, we let \((U_j)\) and \((V_j), j\ge 1\), be two independent sequences of independent Bernoulli random variables satisfying

$$\begin{aligned} {\mathbb {P}}(U_j = 1) = \frac{c_2(1-q)n}{L}\quad \text {and}\quad {\mathbb {P}}(V_j = 1) = c_1. \end{aligned}$$

Third, we couple \(((U_j),(V_j))\) with the Mallows process \((p_m)\) as follows. If \(T_k< i\le S_{k+1}\) for some \(k\ge 0\) then we consider the next “unused” \(U_j\), i.e.,

$$\begin{aligned} j = |\{i'\,:\, i'<i,\ T_k< i'\le S_{k+1}\text { for some}~ k\ge 0\}| + 1, \end{aligned}$$

and couple \(U_j\) to \(p_i(i)\) in a way that if \(U_j=1\) then \(p_i(i)\in W\). Such a coupling is possible due to the bound (26) and the fact that the event \(T_k< i\le S_{k+1}\) is determined solely by \((p_j(j))\) for \(j<i\). Similarly, if \(S_k< i \le T_k\) for some \(k\ge 1\) then we consider the next “unused” \(V_j\), i.e.,

$$\begin{aligned} j = |\{i'\,:\, i'<i,\ S_k< i'\le T_{k}\text { for some }~k\ge 1\}| + 1, \end{aligned}$$

and couple \(V_j\) and \(p_i(i)\) in a way that if \(V_j=1\) then \(p_i(i)\le \min (W)\). Again, this is possible due to the bound (27) and the fact that the event \(S_k< i \le T_k\) is determined solely by \((p_j(j))\) for \(j<i\).

The coupling, together with the two enumerated facts above, yields the following containment of events,

$$\begin{aligned} \left\{ \sum _{1\le j\le n/4} U_j \ge L\right\}&\subseteq \left\{ \sum _{i=1}^L S_i - T_{i-1} \le \frac{n}{4}\right\} ,\\ \left\{ \sum _{1\le j\le n/4} V_j \ge (L-1)|W|\right\}&\subseteq \left\{ \sum _{i=1}^{L-1} T_{i} -S_{i} \le \frac{n}{4}\right\} . \end{aligned}$$

Finally, defining

$$\begin{aligned} B:= \sum _{1\le j\le n/4} U_j&\sim \mathrm{Bin }\left( \left\lfloor \frac{n}{4}\right\rfloor , \frac{c_2(1-q)n}{L}\right) ,\\ B':= \sum _{1\le j\le n/4} V_j&\sim \mathrm{Bin }\left( \left\lfloor \frac{n}{4}\right\rfloor , c_1\right) , \end{aligned}$$

we may continue (28) and write

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{n}})\ge L)\ge {\mathbb {P}}(B\ge L)\, {\mathbb {P}}(B'\ge (L-1)|W|). \end{aligned}$$
(29)

We observe for later use that the restriction on \(q\) in (24) implies that \(n\ge 8\) and hence \(\lfloor \frac{n}{4}\rfloor \ge \frac{n}{8}\). The analysis now splits according to two regimes of the parameters.

First regime of the parameters: Suppose in addition to (24) that

$$\begin{aligned} L\le cn\sqrt{1-q} \end{aligned}$$
(30)

for some small absolute constant \(c>0\). This implies that \({\mathbb {E}}(B) \ge \frac{c_2(1-q)n^2}{8L}\ge 2L\), and it follows by (21) that

$$\begin{aligned} {\mathbb {P}}(B < L) \le e^{-\frac{c(1-q)n^2}{L}}. \end{aligned}$$
(31)

Moreover, recalling that \(c_1=\frac{1}{16}\) and \((L-1)|W|\le (L-1)(2+n/(1000L))\le 2L + n/1000 \le n/500\) if the constant in (30) is sufficiently small, we have \({\mathbb {E}}(B') \ge \frac{c_1 n}{8} \ge 2(L-1)|W|\). Using (21) again, we have the bound

$$\begin{aligned} {\mathbb {P}}(B' < (L-1)|W|) \le e^{-cn}. \end{aligned}$$
(32)

Putting together (29), (31) and (32) we obtain

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{n}})< L) \le e^{-\frac{c(1-q)n^2}{L}} + e^{-cn} \le e^{-\frac{c(1-q)n^2}{L}} \end{aligned}$$

under the assumptions (24) and (30). This establishes (8).

Second regime of the parameters: Now suppose, in addition to (24) and instead of (30), that

$$\begin{aligned} L\ge Cn\sqrt{1-q} \end{aligned}$$
(33)

for some large absolute constant \(C>0\). This implies, in particular, that \(L \ge {\mathbb {E}}(B)\). It follows by (22) that

$$\begin{aligned} {\mathbb {P}}(B\ge L) \ge \left( \frac{c(1-q)n^2}{L^2}\right) ^L. \end{aligned}$$
(34)

Let us now make an additional assumption, which will imply that \({\mathbb {E}}(B') \ge 2(L-1)|W|\). Since \((L-1)|W|\le n/1000 + 2L\), it suffices to assume (recalling that \(c_1=\frac{1}{16}\) and \(\lfloor \frac{n}{4}\rfloor \ge \frac{n}{8}\), whence \({\mathbb {E}}(B')\ge \frac{n}{128}\)) that

$$\begin{aligned} L\le \frac{1}{2}\left( \frac{c_1}{16} - \frac{1}{1000}\right) n. \end{aligned}$$
(35)

Under this assumption, by (21),

$$\begin{aligned} {\mathbb {P}}(B'\ge (L-1)|W|) \ge {\mathbb {P}}\left( B'\ge \frac{{\mathbb {E}}(B')}{2}\right) \ge 1 - \exp \left( -\frac{1}{8} {\mathbb {E}}(B')\right) \ge \frac{1}{2}, \end{aligned}$$
(36)

where we have used the fact \({\mathbb {E}}(B')\ge 8\) which follows from our assumptions (24), (33) and (35).

Putting together (29), (34) and (36), and absorbing the factor \(\frac{1}{2}\) into the constant (using that \(\frac{1}{2}x^L \ge (x/2)^L\) for \(L\ge 1\)), we have proven that

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{n}})\ge L)\ge \left( \frac{c(1-q)n^2}{L^2}\right) ^L \end{aligned}$$
(37)

under the assumptions (24), (33) and (35). To remove the extra assumption (35), we note that for any \(k\) we have the trivial bound

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{k}})=k) = Z_{k,q}^{-1} = (1-q)^k\prod _{i=1}^{k} (1-q^{i})^{-1} \ge (1-q)^k \end{aligned}$$

by (1) and (2). Thus, using Fact 2.4, for any \(1\le L\le n\) we have

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{n}})\ge L) \ge {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{L}}) = L) \ge (1-q)^L, \end{aligned}$$
(38)

establishing the bound (37) (with a different constant \(c\)) when the assumption (35) is violated. Putting together (37) and (38) establishes the lower bound in (7).
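The normalizing constant (2) and the trivial bound (38) admit a quick numerical check (ours):

```python
from itertools import permutations
from math import isclose

def inv_count(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def Z(n, q):
    # the normalizing constant (2)
    out = 1.0
    for i in range(1, n + 1):
        out *= (1 - q ** i) / (1 - q)
    return out

k, q = 4, 0.7
# (2) agrees with the defining sum of q^{Inv} over S_k
assert isclose(Z(k, q), sum(q ** inv_count(p) for p in permutations(range(1, k + 1))))
# the bound used for (38): the identity has probability Z^{-1} >= (1-q)^k
assert 1 / Z(k, q) >= (1 - q) ** k
```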

4.2 Upper bound on the probability of a long increasing subsequence

In this section we establish the remaining results of Theorem 1.3. In Sect. 4.2.1 we estimate the probability that the longest increasing subsequence of a random Mallows permutation is exceptionally long and establish the upper bound in (7). The expected length of the longest increasing subsequence is then estimated in Sect. 4.2.2. Lastly, a result extending our tail bounds for \(\mathrm{LIS }(\uppi )\) is proved at the end of Sect. 4.2.3. This result is used in the arguments of Sect. 5.

4.2.1 Very long increasing subsequences are unlikely

In this section we establish the upper bound in (7) of Theorem 1.3. In fact, we prove the following slightly stronger result.

Proposition 4.2

Let \(n\ge 1\), \(0 < q \le 1- \frac{2}{n}\) and let \(\uppi \sim \mu _{n,q}\). Then

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }(\uppi )\ge L) \le \left( \frac{C(1-q)n^2}{L^2}\right) ^L \end{aligned}$$

for all integer \(L \ge Cn\sqrt{1-q}\).

The idea of the proof is to bound the probability that a fixed subsequence is increasing and then apply a union bound over all possible long increasing subsequences. For the remainder of this section, assume \(\uppi \sim \mu _{n,q}\) for some fixed \(n\) and \(q\) satisfying the conditions of the proposition. Using Corollary 2.3, we couple \(\uppi \) with the \(q\)-Mallows process \((p_m)\) so that

$$\begin{aligned} \uppi (i) = n+1 - p_n(i)\quad \text { for all}\, 1\le i\le n. \end{aligned}$$
(39)

For an increasing sequence of integers \(I = (i_1,\ldots , i_m)\) and a sequence of integers \(J = (j_1, \ldots , j_m)\) satisfying that \(1\le j_k\le i_k\), define the event

$$\begin{aligned} E_{I,J}:=\{p_{i_k}(i_k)=j_k\text { for all}\, 1\le k\le m\}. \end{aligned}$$
(40)

Additionally, for an increasing sequence of integers \(I = (i_1,\ldots , i_m)\subseteq [n]\), define the event that \(I\) is a set of indices of an increasing subsequence,

$$\begin{aligned} E_{I} := \{\uppi (i_{k+1})>\uppi (i_k)\text { for all}\, 1\le k\le m-1\}. \end{aligned}$$
(41)

In the next lemma and proposition we estimate the probabilities of these events.

Lemma 4.3

Let \(m\ge 1\). Let \(I=(i_1,\ldots ,i_m)\) be an increasing sequence of integers satisfying \(i_1 \ge 1/(1-q)\), and let \(J=(j_1,\ldots ,j_m)\) be a sequence of integers satisfying \(1\le j_k \le i_k\). Then

$$\begin{aligned} {\mathbb {P}}(E_{I,J}) \le \left( C(1-q)\right) ^m. \end{aligned}$$

Proof

By (15),

$$\begin{aligned} {\mathbb {P}}(p_{i_k}(i_k)=j_k\text { for all}\, 1\le k\le m) = \displaystyle \prod _{1 \le k \le m}\frac{(1-q)q^{j_{k}-1}}{1-q^{i_k}} \le \left( C(1-q)\right) ^m. \end{aligned}$$

\(\square \)

Proposition 4.4

Let \(1\le m\le n\) and let \(I=(i_1, \ldots , i_m)\subseteq [n]\) be an increasing sequence of integers. Then

$$\begin{aligned} {\mathbb {P}}(E_I) \le \left( \frac{Cn(1-q)}{m} \right) ^m. \end{aligned}$$

Proof

Fix a sequence \(I\) as in the proposition. Let \(\mathcal {J}\) be the set of all integer sequences \(J=(j_1,\ldots ,j_m)\) satisfying \(1\le j_k\le i_k\) for \(1\le k\le m\) and satisfying that the event \(E_I\cap E_{I,J}\) is non-empty. Observe that by (16), the Mallows process satisfies for every \(1\le k\le m-1\) that

$$\begin{aligned}&p_{i_{k+1}}(i_k)\le p_{i_k}(i_k) + i_{k+1} - i_k\quad \text { and}\\&p_n(i_{k+1})< p_n(i_k)\text { if and only if } p_{i_{k+1}}(i_{k+1})<p_{i_{k+1}}(i_k). \end{aligned}$$

Thus the coupling (39) implies that in order that \(J\in \mathcal {J}\) it is necessary that

$$\begin{aligned} j_{k+1}-j_k \le i_{k+1}-i_k\quad \text { for all}\, 1 \le k \le m-1. \end{aligned}$$
(42)

We conclude that if \(J\in \mathcal {J}\), then the transformed sequence \((\ell _1,\ldots , \ell _m)\) defined by \(\ell _k:=j_k-i_k-k\) satisfies

$$\begin{aligned}&1-2n \le \ell _k \le -1\quad \text { for all}\, 1 \le k \le m,\, \text {and} \\&\ell _{k+1}<\ell _k\quad \text { for all}\, 1 \le k \le m-1. \end{aligned}$$

Since the above transformation is one-to-one, it follows that

$$\begin{aligned} |\mathcal {J}|\le {2n \atopwithdelims ()m}. \end{aligned}$$
(43)
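The counting argument can be checked by brute force on a tiny instance. The sketch below (ours) enumerates all \(J\) satisfying the necessary condition (42), a superset of \(\mathcal {J}\), and verifies both the properties of the transformed sequences and the bound (43):

```python
from itertools import product
from math import comb

n, I = 4, (1, 3, 4)
m = len(I)
# all J with 1 <= j_k <= i_k satisfying the necessary condition (42)
J_set = [J for J in product(*(range(1, i + 1) for i in I))
         if all(J[k + 1] - J[k] <= I[k + 1] - I[k] for k in range(m - 1))]
# the transformed sequences ell_k = j_k - i_k - k (k is 1-based in the text)
ells = {tuple(J[k] - I[k] - (k + 1) for k in range(m)) for J in J_set}
assert len(ells) == len(J_set)                    # the map is one-to-one
for ell in ells:
    assert all(1 - 2 * n <= e <= -1 for e in ell) # values lie in [1-2n, -1]
    assert all(ell[k + 1] < ell[k] for k in range(m - 1))  # strictly decreasing
assert len(J_set) <= comb(2 * n, m)               # the bound (43)
```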

We proceed to establish the proposition by considering separately several cases. Suppose first that \(i_1 \ge 1/(1-q)\). Combining Lemma 4.3 and the bound (43), we obtain that

$$\begin{aligned} {\mathbb {P}}(E_I) = \displaystyle \sum _{J\in \mathcal {J}}{\mathbb {P}}(E_I \cap E_{I,J}) \le \displaystyle \sum _{J\in \mathcal {J}}{\mathbb {P}}(E_{I,J}) \le |\mathcal {J}| \left( C(1-q)\right) ^m \!\le \! \left( \frac{Cn(1-q)}{m} \right) ^m\!. \end{aligned}$$

This establishes the proposition for the case that \(i_1\ge 1/(1-q)\).

Now suppose that \(i_m<1/(1-q)\). Observe that by the assumptions on \(q\) in Proposition 4.2, we have \(1/(1-q)\le n/2\). Thus, the translated sequence \(I + \lceil n/2\rceil \) is contained in \([1/(1-q), n]\). Applying translation invariance (Lemma 2.6), the case \(i_m<1/(1-q)\) reduces to the case \(i_1\ge 1/(1-q)\) and we conclude that the proposition holds for such \(I\) as well.

Finally, suppose that \(i_1<1/(1-q)\) and \(i_m\ge 1/(1-q)\). Let \(1 \le k \le m-1\) be such that \(I_1:=(i_1,\ldots ,i_k) \subseteq [0,1/(1-q))\) and \(I_2 := (i_{k+1},\ldots ,i_m) \subseteq [1/(1-q),n]\). By the independence of induced orderings (Lemma 2.5), we may apply the proposition to each of \(I_1\) and \(I_2\) to obtain

$$\begin{aligned} {\mathbb {P}}(E_I)&\le {\mathbb {P}}(E_{I_1}\cap E_{I_2}) = {\mathbb {P}}(E_{I_1})\cdot {\mathbb {P}}(E_{I_2})\le \frac{\left( Cn(1-q)\right) ^m}{k^k(m-k)^{m-k}}\le \left( \frac{Cn(1-q)}{m}\right) ^m. \end{aligned}$$
(44)

The last inequality follows once we recall that \((ca)^a\le a!\le (Ca)^a\) for \(a\ge 1\), and note that \({m \atopwithdelims ()k} \le 2^m\). This finishes the proof of the proposition. \(\square \)

Proof of Proposition 4.2

For \(1\le m\le n\), denote by \(\mathcal {I}_m\) the set of all increasing integer sequences \(I=(i_1, \ldots , i_m)\subseteq [n]\). Observe that \(|\mathcal {I}_m|\le {n \atopwithdelims ()m}\). Applying a union bound and Proposition 4.4 we obtain for all integer \(L\ge Cn\sqrt{1-q}\) that

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }(\uppi )\ge L)&\le \displaystyle \sum _{L\le m \le n,\, I\in \mathcal {I}_m} {\mathbb {P}}(E_I) \le \displaystyle \sum _{m \ge L} {n \atopwithdelims ()m} \left( \frac{Cn(1-q)}{m}\right) ^m \\&\le \displaystyle \sum _{m \ge L} \left( \frac{Cn^2(1-q)}{m^2}\right) ^m \le \left( \frac{Cn^2(1-q)}{L^2}\right) ^L. \end{aligned}$$

\(\square \)

4.2.2 Bounds for \({\mathbb {E}}(\mathrm{LIS }(\uppi ))\)

Proof of Proposition 1.4

Suppose that \(n\ge 1, 0<q\le 1\) and \(\uppi \sim \mu _{n,q}\). Couple \(\uppi \) with the \(q\)-Mallows process using Corollary 2.3 so that \(\uppi (i) = n+1 - p_n(i)\) for all \(i\). Define

$$\begin{aligned} I_1:=\{1\le i\le n\,:\, p_i(i)=1\}. \end{aligned}$$

Then, by the definition of the Mallows process,

$$\begin{aligned} \mathrm{LIS }(\uppi )\ge |I_1|. \end{aligned}$$
(45)

Observe that by (15), for each \(i\ge 1\),

$$\begin{aligned} {\mathbb {P}}(i\in I_1) = {\mathbb {P}}(p_i(i)=1) \ge 1-q. \end{aligned}$$

Together with (45) this implies that \({\mathbb {E}}(\mathrm{LIS }(\uppi ))\ge n(1-q)\). To see the other direction, define the set of descents of \(\uppi \),

$$\begin{aligned} I_2:=\{1\le i\le n-1\,:\, \uppi (i)>\uppi (i+1)\}. \end{aligned}$$

It is not hard to check that

$$\begin{aligned} \mathrm{LIS }(\uppi )\le n - |I_2|. \end{aligned}$$
(46)

By Corollary 2.7, for each \(1\le i\le n-1\),

$$\begin{aligned} {\mathbb {P}}(i\in I_2) = \frac{q}{1+q}. \end{aligned}$$

Together with (46) this implies that \({\mathbb {E}}(\mathrm{LIS }(\uppi ))\le n-\frac{q}{1+q}(n-1)\). \(\square \)
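Proposition 1.4 can be verified exactly for small \(n\) by enumeration; since (45) and (46) hold pointwise and the descent probability is exact, the sandwich below is guaranteed by the proof. A sketch (ours):

```python
from itertools import permutations

def inv_count(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def lis_len(p):
    # O(n^2) dynamic program for the longest increasing subsequence length
    best = [1] * len(p)
    for i in range(len(p)):
        for j in range(i):
            if p[j] < p[i] and best[j] + 1 > best[i]:
                best[i] = best[j] + 1
    return max(best)

n, q = 5, 0.5
weight = {p: q ** inv_count(p) for p in permutations(range(1, n + 1))}
Z = sum(weight.values())
e_lis = sum(w * lis_len(p) for p, w in weight.items()) / Z
# Proposition 1.4: n(1-q) <= E(LIS) <= n - q(n-1)/(1+q)
assert n * (1 - q) <= e_lis <= n - q * (n - 1) / (1 + q)
```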

We continue to prove the bound (6) of Theorem 1.3. Fix \(n\ge 1\) and \(\frac{1}{2}\le q\le 1 - \frac{4}{n}\). We make use of the large deviation bounds in (7) and (8) shown previously. Set \(L^* := 2C_0 n\sqrt{1-q}\) where \(C_0\) is the constant \(C\) appearing in Theorem 1.3. Applying (7), for any integer \(L \ge L^*\),

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }(\uppi ) \ge L) \le \frac{1}{2^L}. \end{aligned}$$

Thus,

$$\begin{aligned} {\mathbb {E}}(\mathrm{LIS }(\uppi )) \le L^* + \displaystyle \sum _{L>L^*}{\mathbb {P}}(\mathrm{LIS }(\uppi ) \ge L) \le L^* + \displaystyle \sum _{L>L^*} \frac{1}{2^L} \le L^*+1. \end{aligned}$$

Now let \(c_0\) be the constant \(c\) appearing in Theorem 1.3. We will prove that

$$\begin{aligned} {\mathbb {E}}(\mathrm{LIS }(\uppi ))\ge \frac{c_0}{4}n\sqrt{1-q}. \end{aligned}$$
(47)

Since \({\mathbb {E}}(\mathrm{LIS }(\uppi ))\ge n(1-q)\) by Proposition 1.4, the bound (47) follows when \(q\le 1-\frac{c_0^2}{16}\). Assume that \(q>1-\frac{c_0^2}{16}\). Since we have also assumed that \(q\le 1 - \frac{4}{n}\) we obtain that

$$\begin{aligned} \frac{c_0}{2}n\sqrt{1-q} > 2n(1-q)\ge 8. \end{aligned}$$
(48)

Thus, defining \(L^*:=c_0n\sqrt{1-q}\), it follows that

$$\begin{aligned} L^*\ge \lfloor L^*\rfloor \ge \frac{L^*}{2}\ge n(1-q). \end{aligned}$$

Applying the bound (8) and using (48) gives

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }(\uppi ) < \lfloor L^* \rfloor )&\le \exp \left( - \frac{c_0n^2(1-q)}{\left\lfloor c_0 n \sqrt{1-q} \right\rfloor }\right) \le \exp \left( - n \sqrt{1-q} \right) \\&\le \exp \left( -n(1-q)\right) \le \frac{1}{2}. \end{aligned}$$

Therefore,

$$\begin{aligned} {\mathbb {E}}(\mathrm{LIS }(\uppi ))&\ge \lfloor L^* \rfloor (1-{\mathbb {P}}(\mathrm{LIS }(\uppi ) < \lfloor L^* \rfloor )) \ge \frac{L^*}{4}, \end{aligned}$$

proving (47) in the case \(q>1-\frac{c_0^2}{16}\), as required.

4.2.3 The \(\mathrm{LIS }\) of elements mapped far by the Mallows process

In this section we extend the bound of Proposition 4.2 to a refined estimate which will be used in Sect. 5. Let \(n\ge 1, 0<q<1\) and let \(\uppi \) be a random permutation with the \(\mu _{n,q}\) distribution. Consider again the coupling (39) of \(\uppi \) with the \(q\)-Mallows process \((p_k)\). Fix a real number \(a>0\) and define a subset \(T\) of the integers by

$$\begin{aligned} T:=\left\{ i\,:\, p_i(i)\ge \frac{a}{1-q}\right\} . \end{aligned}$$

Thus, \(T\) is the set of all elements which, at the time of their assignment by the Mallows process, were assigned a value no smaller than \(a/(1-q)\). Let \(B\subseteq [n]\) be a contiguous block of integers, i.e., \(B := \{i_0, \ldots , i_0+|B|-1\}\) for some \(i_0\ge 1\) such that \(i_0 + |B| - 1\le n\). Our main result concerns the length of the longest increasing subsequence of \(\uppi \) restricted to \(B\cap T\).

Theorem 4.5

Suppose \(n\ge 1, a>0\) and \(\frac{1}{2} \le q \le 1- \frac{2}{n}\). If \(|B| \ge \frac{a}{1-q}\) then

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{B\cap T}})\ge L) \le \frac{1}{|B|(1-q)} \left( \frac{Ce^{-a}|B|^2(1-q)}{L^2}\right) ^L \end{aligned}$$

for every integer \(L \ge Ce^{-a/2}|B|\sqrt{1-q}\).

An important feature of this bound is that it is uniform in \(n\). In fact, the result is similar to the upper bound of (7) in Theorem 1.3, with \(n\) replaced by \(e^{-a/2}|B|\).

Observe the trivial inequality \(\mathrm{LIS }({{\uppi }_{B\cap T}})\le \mathrm{LIS }({{\uppi }_{B}})\). It implies that if \(a\le 10\), say, the theorem follows from Corollary 2.7 and Proposition 4.2. Thus we assume in the sequel that \(a>10\). Assume in addition that \(\frac{1}{2} \le q \le 1- \frac{2}{n}\), as in the theorem.

The proof strategy is a modification of the argument of Proposition 4.2, using a union bound over all possible increasing subsequences which are subsets of \(B\cap T\). Recall the definitions of the events \(E_{I,J}\) and \(E_I\) from (40) and (41).

Lemma 4.6

Let \(m\ge 1\). Let \(I=(i_1,\ldots ,i_m)\) be an increasing sequence of integers, and let \(J=(j_1,\ldots ,j_m)\) be a sequence of integers satisfying \(\frac{a}{1-q}\le j_k \le i_k\). Then

$$\begin{aligned} {\mathbb {P}}(E_{I,J}) \le (C(1-q))^m q^{\sum j_k}. \end{aligned}$$

Proof

Observe that, since \(a>10\) and \(i_1\ge j_1\ge \frac{a}{1-q}\), we must have \(i_1\ge 1/(1-q)\). Thus, by (15) and our assumption that \(q\ge \frac{1}{2}\),

$$\begin{aligned} {\mathbb {P}}(p_{i_k}(i_k)=j_k\text { for all}\, 1\le k\le m) = \displaystyle \prod _{1 \le k \le m}\frac{(1-q)q^{j_{k}-1}}{1-q^{i_k}} \le \left( C(1-q)\right) ^m q^{\sum j_k}. \end{aligned}$$

\(\square \)

We need the following combinatorial lemma, inspired by a related fact on partitions (see, e.g., [31, Theorem 15.1]).

Lemma 4.7

Let \(1\le m\le |B|\) and let \(I=(i_1, \ldots , i_m)\subseteq B\) be an increasing sequence of integers. For an integer \(s\ge 1\) define a family of integer sequences by

$$\begin{aligned} \mathcal {J}_{s,I}':=\left\{ (j_1,\ldots , j_m)\,:\,\sum _{k=1}^m j_k = s,\; j_k\ge 0\;\text { and }\; j_{k+1} - j_k \le i_{k+1} - i_k\right\} . \end{aligned}$$

Then

$$\begin{aligned} |\mathcal {J}'_{s,I}| \le \left( \frac{C}{m^2}\right) ^{m-1}\left( s^{m-1} + (m|B|)^{m-1}\right) . \end{aligned}$$

Proof

Define a transformation from a sequence \(J\in \mathcal {J}'_{s,I}\) to a sequence \((\ell _1,\ldots , \ell _m)\) by

$$\begin{aligned} \ell _k := j_k + i_m - i_k + (m-k). \end{aligned}$$

It follows from the definition of \(\mathcal {J}'_{s,I}\) that each \(\ell _k\) is an integer, \(\ell _1>\ell _2>\cdots >\ell _m\ge 0\) and

$$\begin{aligned} \sum _{k=1}^m \ell _k = s+mi_m - \sum _{k=1}^m i_k + \frac{m(m-1)}{2}=:s'. \end{aligned}$$

Thus, all \(m!\) permutations of \((\ell _1,\ldots , \ell _m)\) are distinct and each such permutation solves the equation

$$\begin{aligned} x_1 + \cdots + x_m = s'\quad \text { where each}\, x_i \,\text {is a non-negative integer}. \end{aligned}$$
(49)

Since the transformation from \(J\) to \((\ell _k)\) is one-to-one, we conclude that \(m!|\mathcal {J}'_{s,I}|\) is bounded above by the number of solutions to (49). Thus,

$$\begin{aligned} |\mathcal {J}'_{s,I}|\le \frac{1}{m!}{s' + m - 1 \atopwithdelims ()m-1}\le \left( \frac{C(s' + m)}{m^2}\right) ^{m-1}\le \left( \frac{C(s+2m|B|)}{m^2}\right) ^{m-1}, \end{aligned}$$

and the lemma follows from the fact that \((s + 2m|B|)^{m-1}\le (2\max (s, 2m|B|))^{m-1}\le (2s)^{m-1} + (4m|B|)^{m-1}\). \(\square \)
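The counting step in Lemma 4.7 can be checked by brute force for small parameters. The sketch below (our own illustration, not part of the proof) enumerates \(\mathcal {J}'_{s,I}\) and compares \(m!\,|\mathcal {J}'_{s,I}|\) against the stars-and-bars count of solutions to (49).

```python
from itertools import product
from math import comb, factorial

def J_prime(s, I):
    """Brute-force enumeration of J'_{s,I}: nonnegative (j_1,...,j_m) with
    sum s and j_{k+1} - j_k <= i_{k+1} - i_k."""
    m = len(I)
    return [J for J in product(range(s + 1), repeat=m)
            if sum(J) == s
            and all(J[k + 1] - J[k] <= I[k + 1] - I[k] for k in range(m - 1))]

I = (2, 5, 6, 10)  # a small increasing sequence of indices
m = len(I)
for s in range(1, 12):
    # s' as defined in the proof of Lemma 4.7
    s_prime = s + m * I[-1] - sum(I) + m * (m - 1) // 2
    # the injection J -> (l_k) gives m! |J'_{s,I}| <= #solutions of (49)
    assert factorial(m) * len(J_prime(s, I)) <= comb(s_prime + m - 1, m - 1)
```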

Proposition 4.8

Let \(1\le m\le |B|\) and let \(I=(i_1, \ldots , i_m)\subseteq B\) be an increasing sequence of integers. If \(|B| \ge \frac{a}{1-q}\) then

$$\begin{aligned} {\mathbb {P}}(E_I\cap \{I\subseteq T\}) \le (Ce^{-a})^m\left( \frac{|B|(1-q)}{m} \right) ^{m-1}. \end{aligned}$$

Proof

Fix a sequence \(I\) as in the proposition. For an integer \(s\ge ma/(1-q)\), define a family of integer sequences by

$$\begin{aligned} \mathcal {J}_{s,I}&:= \Bigg \{(j_1,\ldots , j_m)\,:\,\sum _{k=1}^m j_k = s,\; \frac{a}{1-q}\le j_k\le i_k\\&\qquad \text { and the event}\, E_I\cap E_{I,J}~\text { is non-empty}\Bigg \}. \end{aligned}$$

As in Proposition 4.4, (42) holds for all \(J\in \mathcal {J}_{s,I}\). Thus \(\mathcal {J}_{s,I}\subseteq \mathcal {J}'_{s,I}\) and Lemma 4.7 implies that

$$\begin{aligned} |\mathcal {J}_{s,I}| \le \left( \frac{C}{m^2}\right) ^{m-1}\left( s^{m-1} + (m|B|)^{m-1}\right) . \end{aligned}$$

Combining this with Lemma 4.6 we obtain that

$$\begin{aligned} {\mathbb {P}}(E_I\cap \{I\subseteq T\})&= \displaystyle \sum _{s\ge \frac{ma}{1-q}}\sum _{J\in \mathcal {J}_{s,I}}{\mathbb {P}}(E_I \cap E_{I,J}) \le \sum _{s\ge \frac{ma}{1-q}}\sum _{J\in \mathcal {J}_{s,I}}{\mathbb {P}}(E_{I,J})\nonumber \\&\le \left( C(1-q)\right) ^m \sum _{s\ge \frac{ma}{1-q}} |\mathcal {J}_{s,I}| q^{s} \nonumber \\&\le \left( C(1-q)\right) ^m m^{-2(m-1)} \left( \sum _{s\ge \frac{ma}{1-q}} s^{m-1}q^{s} + \sum _{s\ge \frac{ma}{1-q}} (m|B|)^{m-1}q^{s}\!\right) \!. \end{aligned}$$
(50)

To estimate the first sum in (50), observe that the ratio of consecutive elements in it is at most \((1+1/s)^{m-1}q\le 1 - (1-q)/2\) since \(s\ge ma/(1-q)\ge 10m/(1-q)\). Thus,

$$\begin{aligned}&\sum _{s\ge \frac{ma}{1-q}} s^{m-1}q^{s} \le \frac{2}{1-q}\left( \frac{ma}{1-q} \right) ^{m-1} q^{ma/(1-q)}\le \frac{2e^{-ma}}{1-q}\left( \frac{ma}{1-q} \right) ^{m-1} \quad \text { and }\\&\sum _{s\ge \frac{ma}{1-q}} (m|B|)^{m-1}q^s \le \frac{1}{1-q} (m|B|)^{m-1} q^{ma/(1-q)} \le \frac{e^{-ma}(m|B|)^{m-1}}{1-q}. \end{aligned}$$

Plugging these bounds into (50) and using the assumption \(|B| \ge \frac{a}{1-q}\) yields the result of the proposition. \(\square \)
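Both tail estimates used in the proof can be checked numerically for concrete parameter values. The sketch below (our own, with illustrative values satisfying \(a\ge 10\)) verifies the first bound, \(\sum _{s\ge ma/(1-q)} s^{m-1}q^{s} \le \frac{2}{1-q}\big (\frac{ma}{1-q}\big )^{m-1} q^{ma/(1-q)}\).

```python
def power_geometric_tail(q, m, s0, trunc=20000):
    """Numerically sum s^{m-1} q^s over s0 <= s < s0 + trunc; the omitted
    tail is negligible for these parameter values."""
    return sum(s ** (m - 1) * q ** s for s in range(s0, s0 + trunc))

q, m, a = 0.9, 3, 10           # chosen so that s0 >= 10 m / (1-q)
s0 = int(m * a / (1 - q))      # = 300
bound = 2 / (1 - q) * (m * a / (1 - q)) ** (m - 1) * q ** (m * a / (1 - q))
assert power_geometric_tail(q, m, s0) <= bound
```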

Proof of Theorem 4.5

For \(1\le m\le |B|\), denote by \(\mathcal {I}_m\) the set of all increasing integer sequences \(I=(i_1, \ldots , i_m)\subseteq B\). Observe that \(|\mathcal {I}_m|= {|B| \atopwithdelims ()m}\). Let \(C_1\) be a large absolute constant. Applying a union bound and Proposition 4.8 we obtain for every integer \(L\ge C_1 e^{-a/2}|B|\sqrt{1-q}\) that

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{B\cap T}})\ge L)&\le \displaystyle \sum _{\begin{array}{c} L\le m \le |B|\\ I\in \mathcal {I}_m \end{array}} {\mathbb {P}}(E_I\cap \{I\subseteq T\})\\&\le \displaystyle \sum _{m \ge L} {|B| \atopwithdelims ()m} (Ce^{-a})^m\left( \frac{|B|(1-q)}{m} \right) ^{m-1} \\&\le \frac{1}{|B|(1-q)}\displaystyle \sum _{m \ge L} m\left( \frac{Ce^{-a}|B|^2(1-q)}{m^2}\right) ^m \\&\le \frac{1}{|B|(1-q)}\displaystyle \sum _{m \ge L} \left( \frac{Ce^{-a}|B|^2(1-q)}{m^2}\right) ^m \\&\le \frac{2}{|B|(1-q)} \left( \frac{Ce^{-a}|B|^2(1-q)}{L^2}\right) ^L \end{aligned}$$

where for the last inequality we took the constant \(C_1\) to be sufficiently large. \(\square \)

5 Law of large numbers for \(\mathrm{LIS }(\uppi )\)

In this section we prove Theorem 1.2. Let \(\uppi \sim \mu _{n,q}\). We wish to show that

$$\begin{aligned} \frac{\mathrm{LIS }(\uppi )}{n\sqrt{1-q}} \rightarrow 1\quad \mathrm {\ as \ }\; n \rightarrow \infty ,\; q \rightarrow 1,\; n(1-q) \rightarrow \infty \end{aligned}$$
(51)

in \(L_p\) for every \(0< p < \infty \). The restrictions on \(n\) and \(q\) in the above limit should be interpreted as saying that \(n \rightarrow \infty \) and \(q \rightarrow 1\) in any way so that \(n(1-q) \rightarrow \infty \).

5.1 Block decomposition

Let \(n=n(q)\) be a function of \(q\) such that

$$\begin{aligned} \lim _{q \rightarrow 1}\; n = \infty \quad \text { and}\quad \lim _{q \rightarrow 1}\; n(1-q)= \infty . \end{aligned}$$
(52)

Let \(\uppi \sim \mu _{n,q}\). To prove (51) it suffices to show that

$$\begin{aligned} \lim _{q \rightarrow 1}\; \frac{\mathrm{LIS }(\uppi )}{n\sqrt{1-q}} =1 \end{aligned}$$

in \(L_p\) for every \(0< p < \infty \). As mentioned in the introduction, we will achieve this by partitioning \(\{1,\ldots , n\}\) into blocks of size \(\frac{\upbeta }{1-q}\), for some large \(\upbeta \), considering the longest increasing subsequence of the permutation restricted to each block, and showing that the concatenation of these subsequences is close to being an increasing subsequence for the entire permutation. We proceed to make this idea formal.
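As an aside (purely for illustration, the names below are ours), \(\mathrm{LIS }\) is efficiently computable by patience sorting, and the trivial fact that the block values bound \(\mathrm{LIS }(\uppi )\) from above, used as (57) below, is easy to check numerically.

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest increasing subsequence, via patience sorting."""
    tails = []  # tails[k] = least possible tail of an increasing subsequence of length k+1
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def block_lis_sum(pi, block_size):
    """Sum of LIS over consecutive blocks of the given size."""
    return sum(lis_length(pi[i:i + block_size])
               for i in range(0, len(pi), block_size))

pi = [2, 8, 1, 3, 4, 7, 5, 6]
assert lis_length(pi) == 5                     # e.g. 1, 3, 4, 5, 6
assert lis_length(pi) <= block_lis_sum(pi, 4)  # any increasing subsequence splits over blocks
```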

Let \(\upbeta >0\) and define a function \(\upbeta (q)\) such that \(\upbeta (q)/(1-q)\) is an integer and \(\upbeta (q) \rightarrow \upbeta \) as \(q \rightarrow 1\). As a note to the reader we remark that we would have gladly set \(\upbeta (q)\) equal to \(\upbeta \) in the rest of our argument, but we need \(\upbeta (q)/(1-q)\) to be an integer for technical reasons. Define

$$\begin{aligned} m:=\left\lfloor \frac{n(1-q)}{\upbeta (q)} \right\rfloor \end{aligned}$$
(53)

and for \(1 \le i \le {m}\), define

$$\begin{aligned} {B}_i:=\left\{ (i-1) \frac{\upbeta (q)}{1-q}+1,\ldots , i \frac{\upbeta (q)}{1-q}\right\} . \end{aligned}$$

Thus the \({B}_i\) are blocks of size \(\upbeta (q)/(1-q)\) of consecutive integers which, possibly along with a block of smaller size \({B}_{{m}+1}:=\big \{{m}\frac{\upbeta (q)}{(1-q)}+1,\ldots ,n\big \}\), partition \(\{1,\ldots , n\}\). For \(1 \le i \le m+1\), let

$$\begin{aligned} X_i:= \mathrm{LIS }({{\uppi }_{B_i}}) \end{aligned}$$

be the length of the longest increasing subsequence of the restriction of \(\uppi \) to \(B_i\). By Lemma 2.5, the \(X_i\) are independent. By Corollary 2.7, each \(X_i\) has the distribution of the length of the longest increasing subsequence of a Mallows permutation of length \(|B_i|\) and parameter \(q\).

We regard the above objects, \(\upbeta (q), m, (B_i)\) and \((X_i)\), as implicit functions of \(\upbeta \) and \(q\). In particular, when we take the limits \(q \rightarrow 1\) and \(\upbeta \rightarrow \infty \) below it will be assumed that for every \(\upbeta \) and \(q\) these objects are defined by the above recipe. Using the triangle inequality,

$$\begin{aligned} \left| \frac{\mathrm{LIS }(\uppi )}{n\sqrt{1-q}}-1\right| \le \left| \frac{\mathrm{LIS }(\uppi ) - \sum _{i=1}^{m}X_i}{n\sqrt{1-q}}\right| + \left| \frac{\sum _{i=1}^{m}X_i}{n\sqrt{1-q}} -1\right| . \end{aligned}$$

We will prove that

$$\begin{aligned}&\limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; {\mathbb {E}}\left( \left| \frac{\mathrm{LIS }(\uppi ) - \sum _{i=1}^{m}X_i}{n\sqrt{1-q}} \right| \right) =0, \end{aligned}$$
(54)
$$\begin{aligned}&\limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; {\mathbb {E}}\left( \left| \frac{\sum _{i=1}^{m}X_i}{n\sqrt{1-q}} -1 \right| \right) =0. \end{aligned}$$
(55)

These equalities imply that

$$\begin{aligned} \limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; {\mathbb {E}}\left( \left| \frac{\mathrm{LIS }(\uppi )}{n\sqrt{1-q}} -1 \right| \right) =0 \end{aligned}$$

and since \(\uppi \) does not depend on \(\upbeta \), in fact

$$\begin{aligned} \lim _{q \rightarrow 1}\; {\mathbb {E}}\left( \left| \frac{\mathrm{LIS }(\uppi )}{n\sqrt{1-q}} -1 \right| \right) =0. \end{aligned}$$

In other words,

$$\begin{aligned} \frac{\mathrm{LIS }(\uppi )}{n\sqrt{1-q}} \mathop {\rightarrow }\limits ^{L^1} 1\quad \text {as}\, q\rightarrow 1. \end{aligned}$$

Convergence in \(L^1\) implies convergence in probability. By our large deviation bounds, Theorem 1.3, for any \(0< p<\infty \), we have

$$\begin{aligned} \limsup _{q\rightarrow 1}\; {\mathbb {E}}\left( \left| \frac{\mathrm{LIS }(\uppi )}{n\sqrt{1-q}} \right| ^p\right) < \infty . \end{aligned}$$
(56)

By considering some \(p'>p\) we conclude that, for each fixed \(p\), the family \(\{|\frac{\mathrm{LIS }(\uppi )}{n\sqrt{1-q}}|^p\}\), regarded as a set of random variables indexed by \(q\), is uniformly integrable (starting from \(q\) sufficiently close to \(1\)) and hence

$$\begin{aligned} \frac{\mathrm{LIS }(\uppi )}{n\sqrt{1-q}} \mathop {\rightarrow }\limits ^{L^p} 1\quad \text {for all}\, 0< p <\infty . \end{aligned}$$

In the following sections, we prove (54) using properties of the Mallows process and show how (55) follows from the results of Mueller and Starr [24].

5.2 Comparing \(\mathrm{LIS }\) with \(\sum X_i\)

In this section we establish (54). Recall that \(X_i\) is the length of the longest increasing subsequence of \(\uppi \) restricted to \(B_i\). Since the \((B_i)\) partition \([n]\) it follows trivially that

$$\begin{aligned} \mathrm{LIS }(\uppi ) \le \displaystyle \sum _{i=1}^{m+1}X_i. \end{aligned}$$
(57)

Next, we show a bound in the other direction. Recalling the \(q\)-Mallows process of Sect. 2, we now use the coupling of \(\uppi \) and \((p_i)\),

$$\begin{aligned} \uppi (j) = n+1-p_n(j), \end{aligned}$$

introduced in Corollary 2.3. Let \(a = a(\upbeta )>0\) be any function of \(\upbeta \) satisfying

$$\begin{aligned} a\rightarrow \infty \quad \text {and}\quad \frac{a}{\upbeta }\rightarrow 0\quad \text {as}\, \upbeta \rightarrow \infty . \end{aligned}$$
(58)

For each \(i\), let \(E_i\) be the subset of elements of the block \(B_i\) whose final position, after the block \(B_i\) is assigned by the Mallows process, is at most \(a/(1-q)\). That is,

$$\begin{aligned} E_i := \left\{ j \in B_i \ : \ p_{\max B_i}(j) \le \frac{a}{1-q}\right\} . \end{aligned}$$

Let \(F_i\) be the subset of \(B_i\) which is initially assigned a position larger than \(a/(1-q)\) by the Mallows process. That is,

$$\begin{aligned} F_i := \left\{ j \in B_i \ : \ p_j(j) > \frac{a}{1-q}\right\} . \end{aligned}$$

Let \(I_i\subseteq B_i\) be the indices of an (arbitrary) longest increasing subsequence in the restriction of \(\uppi \) to \(B_i\), so that \(|I_i|=X_i\). Define

$$\begin{aligned} I_i' := I_i {\setminus } (E_i \cup F_i). \end{aligned}$$
(59)

The definition of \(B_i, E_i\) and \(F_i\) implies that \(\cup _i I_i'\) is a set of indices of an increasing subsequence in \(\uppi \). To see this, let \(j,k\in \cup _i I_i'\) satisfy \(j<k\). If \(j,k\in I_i\) for some \(i\) then \(\uppi (j)<\uppi (k)\) by definition of \(I_i\). Otherwise \(j\in I_{i_1} {\setminus } (E_{i_1} \cup F_{i_1})\) and \(k\in I_{i_2} {\setminus } (E_{i_2} \cup F_{i_2})\) for some \(i_1<i_2\). Then, by the definitions of \(F_{i_2}\) and \(E_{i_1}\),

$$\begin{aligned} p_k(k) \le \frac{a}{1-q} <p_{\max B_{i_1}}(j) \le p_{k}(j), \end{aligned}$$

which implies that \(p_n(k)<p_n(j)\), so that \(\uppi (k)>\uppi (j)\). Thus,

$$\begin{aligned} \mathrm{LIS }(\uppi ) \ge \sum _{i=1}^{m} |I_i'|. \end{aligned}$$
(60)

Moreover, the definition of \(I_i\) and (59) implies that

$$\begin{aligned} X_i = |I_i| \le |I_i'| + \mathrm{LIS }({{\uppi }_{E_i}}) +\mathrm{LIS }({{\uppi }_{F_i}}), \end{aligned}$$

so that together with (60) we have

$$\begin{aligned} \mathrm{LIS }(\uppi ) \ge \sum _{i=1}^{m}X_i -\sum _{i=1}^{m}\mathrm{LIS }({{\uppi }_{E_i}}) - \sum _{i=1}^{m}\mathrm{LIS }({{\uppi }_{F_i}}). \end{aligned}$$
(61)

Thus, from the upper and lower bounds (57) and (61), we deduce that

$$\begin{aligned} {\mathbb {E}}\Bigg [\Bigg |\mathrm{LIS }(\uppi )- \sum _{i=1}^{m}X_i \Bigg | \Bigg ] \le \sum _{i=1}^{m}{\mathbb {E}}\left( \mathrm{LIS }({{\uppi }_{E_i}})\right) +\sum _{i=1}^{m}{\mathbb {E}}\left( \mathrm{LIS }({{\uppi }_{F_i}})\right) +{\mathbb {E}}\left( X_{m+1}\right) . \end{aligned}$$

Relation (54) is a direct consequence of the next lemma, which provides asymptotic bounds for each of the terms on the right-hand side.

Lemma 5.1

$$\begin{aligned}&\limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; {\mathbb {E}}\left( \frac{X_{m+1}}{n\sqrt{1-q}} \right) =0,\end{aligned}$$
(62)
$$\begin{aligned}&\limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; {\mathbb {E}}\left( \frac{\sum _{i=1}^{m} {\mathbb {E}}(\mathrm{LIS }({{\uppi }_{E_i}}))}{n\sqrt{1-q}} \right) =0,\end{aligned}$$
(63)
$$\begin{aligned}&\limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; {\mathbb {E}}\left( \frac{\sum _{i=1}^{m} {\mathbb {E}}(\mathrm{LIS }({{\uppi }_{F_i}}))}{n\sqrt{1-q}} \right) =0. \end{aligned}$$
(64)

Proof

Throughout the proof we assume that \(\upbeta \) is sufficiently large and \(q\) is sufficiently close to \(1\) so that \(n(1-q)\) is large, \(\upbeta (q)\) is close to \(\upbeta , a\) is large and \(\frac{a}{\upbeta }\) is small.

Recall that \(X_{m+1}\) has the distribution of the length of the longest increasing subsequence of a Mallows permutation of length \(|B_{m+1}|\le \frac{\upbeta (q)}{1-q}\) and parameter \(q\). Hence Theorem 1.3 implies that

$$\begin{aligned} {\mathbb {E}}\left( X_{m+1} \right) \le \frac{C\upbeta (q)}{\sqrt{1-q}} \end{aligned}$$

for some constant \(C>0\) independent of \(q\) and \(\upbeta \). Thus

$$\begin{aligned} \lim _{q \rightarrow 1}\; {\mathbb {E}}\left( \frac{X_{m+1}}{n\sqrt{1-q}} \right) \le \lim _{q \rightarrow 1}\; \frac{C\upbeta (q)}{n(1-q)} =0 \end{aligned}$$

for any fixed \(\upbeta >0\), by our assumption that \(\upbeta (q)\rightarrow \upbeta \) and \(n(1-q)\rightarrow \infty \,\) as \(q\) tends to 1. This establishes (62).

We proceed to bound \({\mathbb {E}}(\mathrm{LIS }({{\uppi }_{E_i}}))\). Our goal is to show that \(\mathrm{LIS }({{\uppi }_{E_i}})\) is stochastically dominated by the longest increasing subsequence of a permutation with the \((\lfloor \frac{a}{1-q}\rfloor , q)\)-Mallows distribution. To see this, set

$$\begin{aligned} I:=\left( 1,2,\ldots , \left\lfloor \frac{a}{1-q}\right\rfloor \right) \;\;\text { and }\;\;\bar{E}_i:=(p_{\max B_i})^{-1}(I). \end{aligned}$$

It follows that \(E_i \subseteq \bar{E}_i\). Now, denote \(\sigma :=p_{\max B_i}\). Then

$$\begin{aligned} \mathrm{LIS }({{\uppi }_{E_i}})&= \mathrm{LDS }((p_n)_{E_i}) = \mathrm{LDS }(\sigma _{E_i}) \le \mathrm{LDS }(\sigma _{\bar{E}_i})\\&= \mathrm{LDS }((\sigma ^{-1})_{I}) = \mathrm{LIS }(((\sigma ^{-1})_I)^R). \end{aligned}$$

Since \(\sigma ^{-1}\sim \mu _{\max B_i,1/q}\) by Lemma 2.2, it follows by Corollary 2.7 that \((\sigma ^{-1})_I\sim \mu _{\lfloor \frac{a}{1-q}\rfloor ,1/q}\). Finally, another application of Lemma 2.2 shows that \(((\sigma ^{-1})_I)^R\sim \mu _{\lfloor \frac{a}{1-q}\rfloor ,q}\), proving the required stochastic domination. Applying Theorem 1.3 we conclude that

$$\begin{aligned} {\mathbb {E}}(\mathrm{LIS }({{\uppi }_{E_i}}))\le C\left\lfloor \frac{a}{1-q}\right\rfloor \sqrt{1-q} \le C\frac{a}{\sqrt{1-q}}. \end{aligned}$$

Thus, recalling the definition of \(m\) from (53), our assumption that \(\upbeta (q)\rightarrow \upbeta \) as \(q\rightarrow 1\) and the properties of \(a\) from (58), we have

$$\begin{aligned}&\limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; {\mathbb {E}}\left[ \frac{\sum _{i=1}^{m} {\mathbb {E}}(\mathrm{LIS }({{\uppi }_{E_i}}))}{n\sqrt{1-q}} \right] \le \limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; \frac{C a m}{n(1-q)} \nonumber \\&\quad \le \limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; \frac{Ca}{\upbeta (q)} = \limsup _{\upbeta \rightarrow \infty }\; \frac{Ca}{\upbeta } = 0, \end{aligned}$$

proving (63).

We finish by bounding \({\mathbb {E}}(\mathrm{LIS }({{\uppi }_{F_i}}))\). Observe that \(\mathrm{LIS }({{\uppi }_{F_i}})\) is of the form studied in Theorem 4.5. Hence we may apply this theorem to deduce that for any integer \(L\ge Ce^{-a/2}|B_i|\sqrt{1-q}\),

$$\begin{aligned} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{F_i}})\ge L) \le \frac{1}{|B_i|(1-q)} \left( \frac{Ce^{-a}|B_i|^2(1-q)}{L^2}\right) ^L. \end{aligned}$$
(65)

For each \(1\le i\le m\) we may set

$$\begin{aligned} L_0:=C_0 e^{-a/2}|B_i|\sqrt{1-q} = C_0 e^{-a/2}\frac{\upbeta (q)}{\sqrt{1-q}}, \end{aligned}$$

with a sufficiently large absolute constant \(C_0\), and apply (65) to obtain

$$\begin{aligned} {\mathbb {E}}(\mathrm{LIS }({{\uppi }_{F_i}}))&\le L_0 + \sum _{L>L_0} {\mathbb {P}}(\mathrm{LIS }({{\uppi }_{F_i}})\ge L) \le L_0 + \sum _{L>L_0} \frac{1}{|B_i|(1-q)}2^{-L}\\&= L_0 + \sum _{L>L_0} \frac{1}{\upbeta (q)}2^{-L} \le L_0 + 1. \end{aligned}$$

Finally, we conclude that

$$\begin{aligned}&\limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; {\mathbb {E}}\left[ \frac{\sum _{i=1}^{m} {\mathbb {E}}(\mathrm{LIS }({{\uppi }_{F_i}}))}{n\sqrt{1-q}} \right] \le \limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; \frac{m(L_0+1)}{n\sqrt{1-q}} \nonumber \\&\quad \le \limsup _{\upbeta \rightarrow \infty }\; \limsup _{q \rightarrow 1}\; \left( C_0 e^{-a/2} + \frac{\sqrt{1-q}}{\upbeta (q)}\right) = \limsup _{\upbeta \rightarrow \infty }\; C_0 e^{-a/2} = 0, \end{aligned}$$

proving (64). \(\square \)

5.3 Relating to the results of Mueller and Starr

In this section we establish (55). We rely on the following result of Mueller and Starr, who proved a weak law of large numbers for the longest increasing subsequence of a random Mallows permutation in the regime that \(n(1-q)\) tends to a finite limit.

Theorem 5.2

(Mueller–Starr [24]) Suppose that \((q_n)_{n=1}^\infty \) satisfies that the limit

$$\begin{aligned} \upbeta = \lim _{n \rightarrow \infty }\; n(1-q_n) \end{aligned}$$

exists and is finite. Then for any \(\varepsilon >0\), if \(\uppi \sim \mu _{n,q_n}\) then

$$\begin{aligned} \lim _{n \rightarrow \infty }\; {\mathbb {P}}\left( \left| \frac{\mathrm{LIS }(\uppi )}{\sqrt{n}} - \ell (\upbeta ) \right| > \varepsilon \right) =0, \end{aligned}$$

where

$$\begin{aligned} \ell (\upbeta ) = {\left\{ \begin{array}{ll} 2 \upbeta ^{-1/2}\sinh ^{-1}(\sqrt{e^\upbeta -1}) &{} \mathrm {for }\ \upbeta >0 \\ 2 &{} \mathrm {for }\ \upbeta =0 \\ 2 |\upbeta |^{-1/2}\sin ^{-1}(\sqrt{1-e^\upbeta }) &{} \mathrm {for }\ \upbeta <0 \end{array}\right. }. \end{aligned}$$
(66)
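The limit function \(\ell (\upbeta )\) of (66) is straightforward to evaluate numerically. The sketch below (our own) also illustrates the two facts about it used in this section: continuity at \(\upbeta =0\) and \(\ell (\upbeta )/\sqrt{\upbeta }\rightarrow 1\) as \(\upbeta \rightarrow \infty \).

```python
import math

def ell(beta):
    """The limit function ell(beta) of (66)."""
    if beta > 0:
        return 2 * math.asinh(math.sqrt(math.exp(beta) - 1)) / math.sqrt(beta)
    if beta < 0:
        return 2 * math.asin(math.sqrt(1 - math.exp(beta))) / math.sqrt(-beta)
    return 2.0

# both one-sided limits at 0 agree with ell(0) = 2
assert abs(ell(1e-8) - 2) < 1e-3 and abs(ell(-1e-8) - 2) < 1e-3
# ell(beta) / sqrt(beta) -> 1 as beta -> infinity
assert abs(ell(200) / math.sqrt(200) - 1) < 0.01
```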

We continue with the notation of Sect. 5.1 and, in particular, suppose that \(n=n(q)\) is such that (52) holds. Recall that \(X_1\) is distributed as the length of a longest increasing subsequence of a \((\upbeta (q)/(1-q),q)\)-Mallows permutation. Since the limit

$$\begin{aligned} \lim _{q \rightarrow 1}\; \frac{\upbeta (q)}{1-q}\cdot (1-q) = \upbeta \end{aligned}$$

exists and is finite, we may apply Theorem 5.2 to \(X_1\) and deduce that

$$\begin{aligned} \sqrt{\frac{1-q}{\upbeta (q)}}\cdot X_1 \rightarrow \ell (\upbeta )\quad \text { in probability, as } q \text { tends to } 1. \end{aligned}$$
(67)

Now fix \(\upbeta _0\) sufficiently large and \(q_0\) sufficiently close to \(1\) so that if \(\upbeta \ge \upbeta _0\) and \(q_0\le q<1\) then \(\frac{1}{2}<q<1 - \frac{4(1-q)}{\upbeta (q)}\) so that our large deviation estimate, inequality (7) in Theorem 1.3, may be applied to \(X_1\). It follows, as in (56), that for any fixed \(\upbeta \ge \upbeta _0\), the random variables

$$\begin{aligned} \left\{ \left( \frac{\sqrt{1-q}}{\upbeta (q)}\cdot X_1\right) ^2\right\} \text { indexed by } q_0\le q<1 \text { are uniformly integrable}. \end{aligned}$$
(68)

Since \(\upbeta (q)\rightarrow \upbeta \) as \(q\rightarrow 1\), (67) and (68) imply that for any fixed \(\upbeta \ge \upbeta _0\),

$$\begin{aligned} \sqrt{\frac{1-q}{\upbeta }}\cdot X_1 \rightarrow \ell (\upbeta )\quad \text { in }\, L_2, \text { as }\, q \,\text { tends to }\, 1. \end{aligned}$$

In particular, for any fixed \(\upbeta \ge \upbeta _0\), we have

$$\begin{aligned} \lim _{q\rightarrow 1} \sqrt{\frac{1-q}{\upbeta }}\cdot {\mathbb {E}}(X_1) = \ell (\upbeta )\quad \text {and}\quad \lim _{q\rightarrow 1} (1-q)\cdot \mathrm{Var }(X_1) = 0. \end{aligned}$$
(69)

We now consider the random variable

$$\begin{aligned} Y:=\frac{\sum _{i=1}^{m}X_i}{n\sqrt{1-q}}. \end{aligned}$$

In order to prove (55) we first show that

$$\begin{aligned} \lim _{\upbeta \rightarrow \infty }\; \lim _{q\rightarrow 1}\; {\mathbb {E}}(Y)&= 1\quad \text { and}\end{aligned}$$
(70)
$$\begin{aligned} \lim _{\upbeta \rightarrow \infty }\; \lim _{q\rightarrow 1}\; \mathrm{Var }(Y)&= 0. \end{aligned}$$
(71)

To prove (70) we note that since the \((X_i)\) are identically distributed, we may write

$$\begin{aligned} \lim _{q\rightarrow 1}\; {\mathbb {E}}(Y) = \lim _{q\rightarrow 1}\; \frac{m}{n\sqrt{1-q}} {\mathbb {E}}(X_1) = \lim _{q\rightarrow 1}\; \frac{m \upbeta }{n(1-q)} \frac{\sqrt{1-q}}{\upbeta }\cdot {\mathbb {E}}(X_1). \end{aligned}$$
(72)

Now, by (52) and (53) we have

$$\begin{aligned} \lim _{q\rightarrow 1} \frac{m \upbeta }{n(1-q)} = 1. \end{aligned}$$
(73)

Plugging this into (72) and using (69) implies that

$$\begin{aligned} \lim _{q\rightarrow 1}\; {\mathbb {E}}(Y) = \frac{1}{\sqrt{\upbeta }} \lim _{q\rightarrow 1}\; \sqrt{\frac{1-q}{\upbeta }}\cdot {\mathbb {E}}(X_1) = \frac{\ell (\upbeta )}{\sqrt{\upbeta }} \end{aligned}$$
(74)

for any fixed \(\upbeta \ge \upbeta _0\). Finally, we observe that by (66) we have

$$\begin{aligned} \lim _{\upbeta \rightarrow \infty } \frac{\ell (\upbeta )}{\sqrt{\upbeta }} = 1, \end{aligned}$$

which together with (74) implies (70).

To prove (71) we rely also on the fact that the \((X_i)\) are independent. Thus, by (73),

$$\begin{aligned} \lim _{q\rightarrow 1}\; \mathrm{Var }(Y)&= \lim _{q\rightarrow 1}\; \frac{m}{n^2(1-q)} \mathrm{Var }(X_1) = \lim _{q\rightarrow 1}\; \frac{1}{\upbeta n} \mathrm{Var }(X_1)\\&= \lim _{q\rightarrow 1}\; \frac{1}{\upbeta n(1-q)} (1-q)\cdot \mathrm{Var }(X_1). \end{aligned}$$

Hence, if \(\upbeta \ge \upbeta _0\) then (52) and (69) imply that

$$\begin{aligned} \lim _{q\rightarrow 1}\; \mathrm{Var }(Y) = 0, \end{aligned}$$

proving (71).

Finally, by the triangle and Cauchy–Schwarz inequalities we have

$$\begin{aligned} {\mathbb {E}}|Y - 1|\le {\mathbb {E}}|Y - {\mathbb {E}}(Y)| + |{\mathbb {E}}(Y) - 1| \le \sqrt{\mathrm{Var }(Y)} + |{\mathbb {E}}(Y) - 1|, \end{aligned}$$

which shows that (70) and (71) imply (55).

6 Decreasing subsequences

In this section we prove Theorems 1.5 and 1.7 concerning the length of the longest decreasing subsequence in a Mallows permutation. The parts of Theorem 1.7 are established in turn in Sects. 6.1, 6.2 and 6.3. In Sect. 6.4, using the established large deviation inequalities for \(\mathrm{LDS }(\uppi )\), we derive the different regimes of the order of magnitude of \({\mathbb {E}}(\mathrm{LDS }(\uppi ))\), proving Theorem 1.5. This last section also includes the proof of Proposition 1.9.

6.1 An upper bound on the probability of a long decreasing subsequence

In this section we obtain an upper bound on the probability of having a long decreasing subsequence in a Mallows permutation. Precisely, we show that if \(\uppi \sim \mu _{n,q}\) for \(0<q<1-\frac{2}{n}\) then

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi ) \ge L) \le n^8{\left\{ \begin{array}{ll}\left( \frac{C}{(1-q)L^2}\right) ^L&{} L\le \frac{3}{1-q}\\ (C(1-q))^L q^{\frac{L(L-1)}{2}}&{}L>\frac{3}{1-q}\end{array}\right. } \end{aligned}$$
(75)

for any \(L\ge 2\). This establishes (10). We also establish (11), a more refined result for small \(q\), showing that for \(0<q<\frac{1}{2}\) and \(L\ge 2\),

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)\le nC^L q^{\frac{L(L-1)}{2}}. \end{aligned}$$
(76)

The method of proof, as in Sect. 4.2.1, is to first bound the probability that a particular set of inputs to the permutation forms a decreasing subsequence of length \(L\) and then to perform a union bound over all the possibilities for such inputs. However, the calculations turn out to be somewhat involved.

6.1.1 Preliminary calculations

We begin with some preliminary calculations.

Lemma 6.1

For any \(0<p<1\) and integer \(r\ge 1\), if we denote by \(X(d)\) a random variable with distribution \(\mathrm{Bin }(d,p)\) then

$$\begin{aligned} \sum _{d=0}^\infty {\mathbb {P}}(X(d)<r) = \frac{r}{p}. \end{aligned}$$

In this lemma, as well as below, we say that \(X\) has the \(\mathrm{Bin }(0,p)\) distribution meaning that \(X\) is the identically zero random variable.

Proof

Let \(Y_1,Y_2,\ldots \) be an infinite sequence of independent Bernoulli\((p)\) random variables, i.e., \({\mathbb {P}}(Y_1=1)=1-{\mathbb {P}}(Y_1=0)=p\). Then

$$\begin{aligned} \sum _{d=0}^\infty {\mathbb {P}}(X(d)<r)&= \sum _{d=0}^\infty {\mathbb {E}}\left( 1\!\!1_{\{\sum _{k=1}^d Y_k < r\}}\right) = {\mathbb {E}}\sum _{d=0}^\infty \left( 1\!\!1_{\{\sum _{k=1}^d Y_k < r\}}\right) \\&= {\mathbb {E}}\bigg [\min \bigg (m\,:\, \sum _{k=1}^{m} Y_k = r\bigg )\bigg ], \end{aligned}$$

where \(1\!\!1_E\) denotes the indicator random variable of the event \(E\). Observing that \(\min (m\,:\, \sum _{k=1}^{m} Y_k = r)\) has the distribution of the waiting time for \(r\) successes in a sequence of independent trials with success probability \(p\), that is, the distribution of a sum of \(r\) independent geometric random variables with success probability \(p\), we conclude that

$$\begin{aligned} \sum _{d=0}^\infty {\mathbb {P}}(X(d)<r) = \frac{r}{p}. \end{aligned}$$

\(\square \)
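Since the identity of Lemma 6.1 is exact, it is easy to confirm numerically; the following sketch (our own illustration, truncating the infinite sum where the summands are negligible) does so for a couple of parameter choices.

```python
from math import comb

def prob_bin_below(d, p, r):
    """P(Bin(d, p) < r); Bin(0, p) is the identically zero random variable."""
    return sum(comb(d, k) * p ** k * (1 - p) ** (d - k)
               for k in range(min(r, d + 1)))

def waiting_time_sum(p, r, cutoff=2000):
    """Truncation of sum over d >= 0 of P(Bin(d, p) < r)."""
    return sum(prob_bin_below(d, p, r) for d in range(cutoff))

assert abs(waiting_time_sum(0.5, 3) - 3 / 0.5) < 1e-9    # r/p = 6
assert abs(waiting_time_sum(0.25, 2) - 2 / 0.25) < 1e-9  # r/p = 8
```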

For integers \(m\ge 1\) and \(M\ge m\) define the set of integer vectors

$$\begin{aligned} J_m(M):=\{(j_1,\ldots , j_{m})\,:\,0\le j_1< j_2<\cdots <j_m<M\}. \end{aligned}$$

Lemma 6.2

There exists an absolute constant \(C_1>0\) such that for any integers \(m\ge 2\) and \(j_m\ge m-1\) we have

$$\begin{aligned} \sum _{J_{m-1}(j_m)} \prod _{k=1}^{m-1} \frac{j_{k+1} - j_k}{j_k+1} \le \log (j_m+1)\left( \frac{C_1 j_m}{(m-1)^2}\right) ^{m-1}. \end{aligned}$$

In this lemma, as well as below, we write \(\sum _{J_{m-1}(j_m)}\) as a shorthand for \(\sum _{(j_1,\ldots , j_{m-1})\in J_{m-1}(j_m)}\).

Proof

We prove the claim by induction. For \(m=2\) the claim is

$$\begin{aligned} \sum _{j_1=0}^{j_2-1} \frac{j_2 - j_1}{j_1+1} \le C_1 j_2\log (j_2+1) \end{aligned}$$

for any \(j_2\ge 1\), which clearly holds if \(C_1\) is sufficiently large. Now fix \(m\ge 3\) and \(j_m\ge m-1\), assume the claim holds for \(m-1\) (and any \(j_{m-1}\ge m-2\)), and let us prove it for \(m\). We have

$$\begin{aligned}&\sum _{J_{m-1}(j_m)} \prod _{k=1}^{m-1} \frac{j_{k+1} - j_k}{j_k+1}= \sum _{j_{m-1}=m-2}^{j_m-1} \left[ \frac{j_m - j_{m-1}}{j_{m-1}+1} \sum _{J_{m-2}(j_{m-1})} \prod _{k=1}^{m-2} \frac{j_{k+1} - j_k}{j_k+1}\right] \\&\quad \le \sum _{j_{m-1}=m-2}^{j_m-1} \frac{j_m - j_{m-1}}{j_{m-1}+1} \log (j_{m-1}+1)\left( \frac{C_1 j_{m-1}}{(m-2)^2}\right) ^{m-2} \end{aligned}$$

by the induction hypothesis. It follows that

$$\begin{aligned} \sum _{J_{m-1}(j_m)} \prod _{k=1}^{m-1} \frac{j_{k+1} - j_k}{j_k+1} \le \log (j_m+1)\left( \frac{C_1}{(m-2)^2}\right) ^{m-2} \sum _{j_{m-1}=m-2}^{j_m-1} j_{m-1}^{m-3}(j_m-j_{m-1}).\nonumber \\ \end{aligned}$$
(77)

We have \(\sum _{j_{m-1}=m-2}^{j_m-1} j_{m-1}^{m-3}(j_m-j_{m-1})\le \frac{4j_m^{m-1}}{(m-2)(m-1)}\). One way to see this is to let \(f(x):=x^{m-3}(j_m-x)\) and \(x_c:=\frac{(m-3)j_m}{m-2}\). Observing that \(f\) attains its maximum on \([0,j_m]\) at \(x_c\), we have \(f(x)\le g(x)\) where \(g(x):=f(x)\) for \(x<x_c\) and \(g(x):=f(x_c)\) for \(x\ge x_c\). Now, since \(g\) is increasing on \([0,j_m]\), we have \(\sum _{j=1}^{j_m-1} f(j)\le \int _1^{j_m} g(x)dx\) which yields the required inequality. Thus, continuing (77), we have

$$\begin{aligned}&\sum _{J_{m-1}(j_m)} \prod _{k=1}^{m-1} \frac{j_{k+1} - j_k}{j_k+1}\le 4\log (j_m+1)\left( \frac{C_1}{(m-2)^2}\right) ^{m-2}\frac{j_m^{m-1}}{(m-2)(m-1)}\\&\quad = \frac{4\log (j_m+1)}{C_1}\left( \frac{m-1}{m-2}\right) ^{2m-3} \left( \frac{C_1j_m}{(m-1)^2}\right) ^{m-1}, \end{aligned}$$

from which the induction step follows if \(C_1\) is sufficiently large. \(\square \)

Corollary 6.3

There exists an absolute constant \(C>0\) such that for any integers \(m\ge 2\) and \(M\ge m\) we have

$$\begin{aligned} \sum _{J_m(M)} \prod _{k=1}^{m-1} \frac{j_{k+1} - j_k}{j_k+1} \le \log (M+1)\left( \frac{CM}{m^2}\right) ^m. \end{aligned}$$

Proof

By Lemma 6.2 we have

$$\begin{aligned} \sum _{J_m(M)} \prod _{k=1}^{m-1} \frac{j_{k+1} - j_k}{j_k+1}&\le \sum _{j_m=m-1}^{M-1} \log (j_m+1)\left( \frac{C_1 j_m}{(m-1)^2}\right) ^{m-1} \\&\le \log (M+1)\left( \frac{C_1}{(m-1)^2}\right) ^{m-1}\frac{M^m}{m}, \end{aligned}$$

from which the corollary follows for some \(C>C_1\). \(\square \)

For integers \(m\ge 1\) and \(M\ge 0\) define the (infinite) set of integer vectors

$$\begin{aligned} J_m'(M):=\{(j_1,\ldots , j_{m})\,:\,M\le j_1< j_2<\cdots <j_m\}. \end{aligned}$$

Lemma 6.4

There exists an absolute constant \(C>0\) such that for any \(0<q<1\) and integers \(m\ge 2\) and \(M\ge 0\) we have

$$\begin{aligned} \sum _{J_m'(M)} q^{\sum _{k=1}^m j_k}\prod _{k=1}^{m-1} (j_{k+1} - j_k) = \frac{q^{\frac{m(m-1)}{2} + mM}}{(1-q^m)\prod _{k=1}^{m-1} (1-q^{m-k})^2} \le \frac{C^m q^{\frac{m(m-1)}{2} + mM}}{(m'(1-q))^{2m'}}, \end{aligned}$$

where \(m':=\min (m,\lfloor \frac{1}{1-q}\rfloor )\).

Proof

We change variables, transforming the vector \(J=(j_1,\ldots , j_m)\) to the vector \((j_1,d_1,\ldots , d_{m-1})\) via the mapping \(d_k := j_{k+1}-j_k\). Observing that this transformation is one-to-one, we have

$$\begin{aligned} \sum _{J_m'(M)} q^{\sum _{k=1}^m j_k}\prod _{k=1}^{m-1} (j_{k+1} - j_k)= \sum _{j_1=M}^\infty \sum _D q^{mj_1 +\sum _{k=1}^{m-1} (m-k)d_k}\prod _{k=1}^{m-1} d_k, \end{aligned}$$

where the sum is over all integer vectors \(D=(d_1,\ldots , d_{m-1})\) with \(d_k\ge 1\) for all \(k\). Observing that the sum of products equals a product of sums since the factors involve different \(d_k\)’s, we have

$$\begin{aligned}&\sum _{J_m'(M)} q^{\sum _{k=1}^m j_k}\prod _{k=1}^{m-1} (j_{k+1} - j_k) = \left( \sum _{j_1=M}^\infty q^{mj_1}\right) \prod _{k=1}^{m-1}\left( \sum _{d=1}^\infty dq^{(m-k)d}\right) \\&\quad =\frac{q^{mM}}{1-q^m}\prod _{k=1}^{m-1}\frac{q^{m-k}}{(1-q^{m-k})^2}, \end{aligned}$$

proving the equality in the lemma. To prove the inequality, we observe that

$$\begin{aligned} (1-q^m)\prod _{k=1}^{m-1} (1-q^{m-k})^2 \ge \prod _{k=1}^m (1-q^k)^2 = \left[ (1-q)^{m}\prod _{k=1}^m\left( \sum _{j=0}^{k-1}q^j\right) \right] ^2. \end{aligned}$$

Noting that \(\sum _{j=0}^{k-1} q^j\ge ck\) when \(k\le \lfloor \frac{1}{1-q}\rfloor \) and \(\sum _{j=0}^{k-1} q^j\ge \frac{c}{1-q}\) when \(k\ge \lfloor \frac{1}{1-q}\rfloor \), we deduce

$$\begin{aligned} (1-q)^m\prod _{k=1}^m\left( \sum _{j=0}^{k-1}q^j\right) \ge c^m (m')!(1-q)^{m'} \ge c^m(m'(1-q))^{m'}, \end{aligned}$$

as required. \(\square \)
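The closed-form equality in Lemma 6.4 can also be confirmed numerically by truncating the infinite sum over \(J_m'(M)\); a sanity-check sketch (not part of the proof; the truncation window \(T\) is an illustrative choice, large enough that the geometric tail is negligible at the stated tolerance):

```python
from itertools import combinations
from math import prod, isclose

def truncated_sum(m, M, q, T=50):
    # Truncation of sum over J'_m(M) of q^{sum j_k} * prod (j_{k+1} - j_k);
    # terms decay geometrically, so a window of T consecutive values suffices.
    total = 0.0
    for J in combinations(range(M, M + T), m):
        total += q ** sum(J) * prod(J[k + 1] - J[k] for k in range(m - 1))
    return total

def closed_form(m, M, q):
    # q^{m(m-1)/2 + mM} / ((1-q^m) prod_{k=1}^{m-1} (1-q^{m-k})^2), as in Lemma 6.4
    return q ** (m * (m - 1) // 2 + m * M) / (
        (1 - q ** m) * prod((1 - q ** (m - k)) ** 2 for k in range(1, m)))

for m in (2, 3):
    for M in (0, 2):
        for q in (0.3, 0.6):
            assert isclose(truncated_sum(m, M, q), closed_form(m, M, q), rel_tol=1e-6)
```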

6.1.2 Union bound

Fix \(n\ge 3, 0<q<1-\frac{2}{n}\) and let \(\uppi \sim \mu _{n,q}\) for the remainder of this section and the next (we assume that \(n\ge 3\) since otherwise the range for \(q\) is empty). Using Corollary 2.3, we couple \(\uppi \) with the \(q\)-Mallows process so that

$$\begin{aligned} \uppi (i) = n+1 - p_n(i)\quad \text { for all}\, 1\le i\le n. \end{aligned}$$
(78)

In a similar (but not identical) way to Sect. 4.2.1, define, for an increasing sequence of integers \(I = (i_1,\ldots , i_m)\) and a sequence of integers \(J = (j_1, \ldots , j_m)\), the event

$$\begin{aligned} E_{I,J}:=\{p_{i_k}(i_k)=j_k+1\text { for all}\, 1\le k\le m\}. \end{aligned}$$

Additionally, for an increasing sequence of integers \(I = (i_1,\ldots , i_m)\subseteq [n]\), define the event that \(I\) is a set of indices of a decreasing subsequence,

$$\begin{aligned} E_{I} := \{\uppi (i_{k+1})<\uppi (i_k)\text { for all}\, 1\le k\le m-1\}. \end{aligned}$$

The starting point for our argument is a bound on the probability of \(E_{I,J}\cap E_I\). Recall the definition of \(J_m(M)\) and \(J_m'(M)\) from the previous section and define, for integers \(m\ge 1\) and \(M\ge 1\), the set of integer vectors

$$\begin{aligned} I_m'(M):=\{(i_1,\ldots , i_{m})\,:\,M\le i_1< i_2<\cdots <i_m\le n\}. \end{aligned}$$

Proposition 6.5

For any \(m\ge 2, I\in I_m'(\lfloor \frac{1}{1-q}\rfloor )\) and \(J\in J_m'(0)\) we have

$$\begin{aligned} {\mathbb {P}}(E_{I,J}\cap E_I)\le \left( C(1-q)\right) ^m q^{\sum _{k=1}^m j_k} \prod _{k=1}^{m-1}{\mathbb {P}}(X_k<j_{k+1}-j_k), \end{aligned}$$

where \(X_k\sim \mathrm{Bin }(i_{k+1}-i_{k} - 1,1-q^{j_k+1}), 1 \le k \le m-1\).

Proof

Fix \(I\) and \(J\) as in the proposition. By the coupling (78) of \(\uppi \) with the Mallows process, and the definition of the Mallows process, the event \(E_{I,J}\cap E_I\) occurs if and only if

$$\begin{aligned} p_{i_k}(i_k) = j_k + 1, \ \ \ \ \,&\forall 1 \le k \le m, \nonumber \\ p_{i_{k+1}}(i_{k+1})>p_{i_{k+1}}(i_k), \ \ \ \ \,&\forall 1 \le k \le m-1. \end{aligned}$$
(79)

If some \(j_k\ge i_k\) the probability of this event is zero and the proposition follows trivially. Assume from now on that \(j_k<i_k\) for all \(k\). Then (15) implies that

$$\begin{aligned} {\mathbb {P}}(E_{I,J}) = {\mathbb {P}}(\cap _{k=1}^m \{p_{i_k}(i_k) = j_k + 1\}) = \prod _{k=1}^m \frac{(1-q)q^{j_k}}{1-q^{i_k}} \le (C(1-q))^m q^{\sum _{k=1}^m j_k} \end{aligned}$$
(80)

since \(i_k\ge \frac{1}{2(1-q)}\) for all \(k\). Now, define the random variables \(D_k:=p_{i_{k+1}}(i_{k}) - p_{i_k}(i_k)\) for \(1\le k\le m-1\). Then we may reinterpret (79) in terms of the \(D_k\). Indeed,

$$\begin{aligned} \text {on the event}~E_{I,J},\;\; p_{i_{k+1}}(i_{k+1})>p_{i_{k+1}}(i_k)\;\;\text {if and only if}\;\; D_k<j_{k+1} - j_k \end{aligned}$$
(81)

for each \(1\le k\le m-1\). By (16),

$$\begin{aligned} D_k\ge \sum _{i=i_k+1}^{i_{k+1}-1}\mathbf{1}_{\{p_i(i)\le j_k+1\}}\quad \text {on the event} \ E_{I,J}, \end{aligned}$$
(82)

where \(\mathbf{1}_E\) denotes the indicator random variable of the event \(E\), and, for all \(i\),

$$\begin{aligned} {\mathbb {P}}(p_i(i)\le j_k+1) = \frac{1+q+\cdots + q^{j_k}}{1+q+\cdots +q^{i-1}} = \frac{1-q^{j_k+1}}{1-q^i} \ge 1-q^{j_k+1}. \end{aligned}$$
(83)

Hence, using the fact that the \((p_i(i))\) are independent, we may combine (82) and (83) to deduce that conditioned on \(E_{I,J}\), the \((D_k)\) are independent and each \(D_k\) stochastically dominates a binomial random variable with \(i_{k+1}-i_k-1\) trials and success probability \(1-q^{j_k+1}\). In particular,

$$\begin{aligned} {\mathbb {P}}(\cap _{k=1}^{m-1} \{D_k<j_{k+1}-j_k\}\,|\, E_{I,J}) \le \prod _{k=1}^{m-1}{\mathbb {P}}(X_k < j_{k+1}-j_k), \end{aligned}$$

where \(X_k\sim \mathrm{Bin }(i_{k+1}-i_{k}-1,1-q^{j_k+1})\). Combined with (80) and (81) this proves the proposition. \(\square \)
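The computations (80) and (83) rest on the truncated geometric law (15) of \(p_i(i)\), namely \({\mathbb {P}}(p_i(i)=k)=(1-q)q^{k-1}/(1-q^i)\) for \(1\le k\le i\). As a quick numerical check (not part of the proof), the identity and inequality in (83) follow directly from this law:

```python
from math import isclose

def cdf(i, j, q):
    # P(p_i(i) <= j+1) under the truncated geometric law (15):
    # P(p_i(i) = k) = (1-q) q^{k-1} / (1 - q^i) for 1 <= k <= i.
    return sum((1 - q) * q ** (k - 1) for k in range(1, j + 2)) / (1 - q ** i)

q = 0.8
for i in range(2, 25):
    for j in range(i - 1):
        # equality in (83): (1 - q^{j+1}) / (1 - q^i)
        assert isclose(cdf(i, j, q), (1 - q ** (j + 1)) / (1 - q ** i))
        # and the lower bound 1 - q^{j+1}, since 1 - q^i < 1
        assert cdf(i, j, q) >= 1 - q ** (j + 1)
```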

As the next step toward a union bound over the sequences \(I\) and \(J\), we perform the summation over \(I\).

Proposition 6.6

For any \(m\ge 2\) and \(J\in J_m'(0)\) we have

$$\begin{aligned} \sum _{I\in I_m'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_{I,J}\cap E_I)\le n(C(1-q))^m q^{\sum _{k=1}^m j_k} \prod _{k=1}^{m-1} \frac{j_{k+1}-j_k}{1-q^{j_k+1}}. \end{aligned}$$

Proof

Comparing the result of the proposition with Proposition 6.5 we see it suffices to show that

$$\begin{aligned} \sum _{I_m'(\lfloor \frac{1}{1-q}\rfloor )} \prod _{k=1}^{m-1}{\mathbb {P}}(X_k(I)<j_{k+1}-j_k)\le n \prod _{k=1}^{m-1} \frac{j_{k+1}-j_k}{1-q^{j_k+1}}, \end{aligned}$$

where \(X_k(I)\sim \mathrm{Bin }(i_{k+1}-i_{k}-1,1-q^{j_k+1})\). We change variables, transforming the vector \(I=(i_1,\ldots , i_m)\) to the vector \(D=(i_1,d_1,\ldots , d_{m-1})\) via the mapping

$$\begin{aligned} d_k := i_{k+1}-i_k. \end{aligned}$$

Observing that this transformation is one-to-one, we have

$$\begin{aligned} \sum _I \prod _{k=1}^{m-1}{\mathbb {P}}(X_k(I)<j_{k+1}-j_k)\le \sum _D \prod _{k=1}^{m-1}{\mathbb {P}}(X_k(D)<j_{k+1}-j_k), \end{aligned}$$

where the sum is over all integer vectors \(D\) satisfying \(1\le i_1\le n\) and \(d_k\ge 1\) for \(1\le k\le m-1\), and where \(X_k(D)\sim \mathrm{Bin }(d_k-1,1-q^{j_k+1})\). We continue by observing that the product does not depend on \(i_1\), and further observing that the sum of products becomes a product of sums since the factors involve different \(d_k\)’s, whence

$$\begin{aligned} \sum _D \prod _{k=1}^{m-1}{\mathbb {P}}(X_k(D)<j_{k+1}-j_k) \le n \prod _{k=1}^{m-1} \left[ \sum _{d=1}^\infty {\mathbb {P}}(X_k(d)<j_{k+1}-j_k)\right] , \end{aligned}$$

where \(X_k(d)\sim \mathrm{Bin }(d-1,1-q^{j_k+1})\). Applying Lemma 6.1 we conclude that

$$\begin{aligned} \sum _{d=1}^\infty {\mathbb {P}}(X_k(d)<j_{k+1}-j_k) = \frac{j_{k+1}-j_k}{1-q^{j_k+1}}, \end{aligned}$$

and the proposition follows. \(\square \)
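The identity from Lemma 6.1 invoked at the end of the proof, \(\sum _{d=1}^\infty {\mathbb {P}}(\mathrm{Bin }(d-1,p)<r)=\frac{r}{p}\), is easy to check numerically; a sketch (not part of the proof), truncating the sum at an illustrative cutoff \(T\) past which the summand is negligible:

```python
from math import comb, isclose

def binom_cdf_lt(n, p, r):
    # P(Bin(n, p) < r) = sum_{k=0}^{r-1} C(n,k) p^k (1-p)^{n-k}
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(min(r, n + 1)))

def truncated_sum(p, r, T=2000):
    # Truncation of sum_{d>=1} P(Bin(d-1, p) < r); the summand decays
    # exponentially once d - 1 is much larger than r / p.
    return sum(binom_cdf_lt(d - 1, p, r) for d in range(1, T))

for p in (0.3, 0.7):
    for r in (1, 2, 5):
        assert isclose(truncated_sum(p, r), r / p, rel_tol=1e-6)
```

For \(r=1\) the identity reduces to the geometric series \(\sum _{d\ge 1}(1-p)^{d-1}=\frac{1}{p}\).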

We next perform the summation over \(J\). This is best done separately over two regimes. To deal with certain edge cases later in the proof, we extend our previous definitions by setting \(J_0(M):=\{\emptyset \}, J_0'(M):=\{\emptyset \}, I_0'(M):=\{\emptyset \}\), for integer \(M\ge 1\), and setting \({\mathbb {P}}(E_{I,J})={\mathbb {P}}(E_I) = 1\) whenever \(I=J=\emptyset \). We also adopt the convention that \(0^0\) is \(1\).

Proposition 6.7

There exists an absolute constant \(C_1>0\) such that for any integer \(m\ge 0\) we have

$$\begin{aligned} \sum _{J\in J_m(\lfloor \frac{1}{2(1-q)}\rfloor )}\sum _{I\in I_m'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_{I,J}\cap E_I)\le n^2\left( \frac{C_1}{(1-q)m^2}\right) ^m, \end{aligned}$$

and

$$\begin{aligned} \sum _{J\in J_m'(\lfloor \frac{1}{2(1-q)}\rfloor )}\sum _{I\in I_m'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_{I,J}\cap E_I)\le \frac{n(C_1(1-q))^m q^{m(m-1)/2}}{(m'(1-q))^{2m'}}, \end{aligned}$$

where \(m':=\min (m,\lfloor \frac{1}{1-q}\rfloor )\).

Proof

The cases \(m\in \{0,1\}\) follow trivially since the right-hand sides of the above inequalities are larger than \(1\) when \(C_1\) is sufficiently large. Thus we assume that \(m\ge 2\). The relation

$$\begin{aligned} 1-q^a\ge \frac{(1-q)a}{1+(1-q)a} \end{aligned}$$

holds for any \(a\ge 0\). Hence by Proposition 6.6 we have

$$\begin{aligned}&\sum _{J\in J_m(\lfloor \frac{1}{2(1-q)}\rfloor )}\sum _{I\in I_m'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_{I,J}\cap E_I) \le n(C(1-q))^m \!\sum _{J\in J_m(\lfloor \frac{1}{2(1-q)}\rfloor )} \prod _{k=1}^{m-1} \frac{j_{k+1}-j_k}{1-q^{j_k+1}} \\&\quad \le nC^m \sum _{J\in J_m(\lfloor \frac{1}{2(1-q)}\rfloor )} \prod _{k=1}^{m-1} \frac{j_{k+1}-j_k}{j_k+1}. \end{aligned}$$

Thus, noting that \(\log (\lfloor \frac{1}{2(1-q)}\rfloor +1)\le n\), the first part of the proposition follows from Corollary 6.3. Similarly,

$$\begin{aligned}&\sum _{J\in J_m'(\lfloor \frac{1}{2(1-q)}\rfloor )}\sum _{I\in I_m'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_{I,J}\cap E_I)\\&\quad \le n(C(1-q))^m \sum _{J\in J_m'(\lfloor \frac{1}{2(1-q)}\rfloor )} q^{\sum _{k=1}^m j_k}\prod _{k=1}^{m-1} \frac{j_{k+1}-j_k}{1-q^{j_k+1}} \\&\quad \le n(C(1-q))^m \sum _{J\in J_m'(\lfloor \frac{1}{2(1-q)}\rfloor )} q^{\sum _{k=1}^m j_k}\prod _{k=1}^{m-1}(j_{k+1}-j_k), \end{aligned}$$

from which the second part of the proposition follows by applying Lemma 6.4 (and bounding \(q^{m\lfloor \frac{1}{2(1-q)}\rfloor }\le 1\)). \(\square \)

6.1.3 Proof of bound

In this section we complete the estimate of \({\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)\). First, if \(0<q<\frac{1}{2}\), we may apply the union bound and the second part of Proposition 6.7 in a straightforward way to obtain that for any \(L\ge 2\),

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)\le \sum _{J\in J_L'(0)}\sum _{I\in I_L'(1)} {\mathbb {P}}(E_{I,J}\cap E_I) \le nC^L q^{\frac{L(L-1)}{2}}\quad \left( 0<q<\frac{1}{2}\right) , \end{aligned}$$

proving (75) for this range of \(q\) and establishing (76).

In the rest of the section we assume \(q\ge \frac{1}{2}\) (and \(q<1-\frac{2}{n}\), as before). Fix \(2\le L\le n\). The union bound yields

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L) \le \sum _{I\in I_L'(1)} {\mathbb {P}}(E_I). \end{aligned}$$
(84)

Now, given \(I=(i_1,\ldots , i_L)\in I_L'(1)\) we let \(a(I)\) be the maximal \(k\) such that \(i_k<\lfloor \frac{1}{1-q}\rfloor \) (or 0 if no such \(k\) exists), and let \(I_1:=(i_1,\ldots ,i_{a(I)})\) and \(I_2:=(i_{a(I)+1},\ldots , i_L)\) (where one of these vectors may be empty). By the independence of induced orderings (Lemma 2.5),

$$\begin{aligned} {\mathbb {P}}(E_I)\le {\mathbb {P}}(E_{I_1}\cap E_{I_2}) = {\mathbb {P}}(E_{I_1}){\mathbb {P}}(E_{I_2}). \end{aligned}$$
(85)

Define, for integers \(m\ge 1\) and \(M\ge 2\), the set of integer vectors

$$\begin{aligned} I_m(M):=\{(i_1,\ldots , i_{m})\,:\,1\le i_1< i_2<\cdots <i_m<M\}. \end{aligned}$$

As before, we also set \(I_0(M):=\{\emptyset \}\). Plugging (85) into (84) and using the translation invariance of Lemma 2.6 (with our assumption that \(\frac{1}{1-q}<\frac{n}{2}\)) we find that

$$\begin{aligned}&{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)\le \sum _{I\in I_L'(1)} {\mathbb {P}}(E_I)\nonumber \\&\quad \le \sum _{a=0}^{\min (L,\lfloor \frac{1}{1-q}\rfloor -1)} \sum _{I_1\in I_a(\lfloor \frac{1}{1-q}\rfloor )} \sum _{I_2\in I_{L-a}'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_{I_1}){\mathbb {P}}(E_{I_2})\nonumber \\&\quad \le \sum _{a=0}^{\min (L,\lfloor \frac{1}{1-q}\rfloor -1)}\left( \sum _{I\in I_a(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_I)\right) \left( \sum _{I\in I_{L-a}'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_I)\right) \nonumber \\&\quad \le \sum _{a=0}^{\min (L,\lfloor \frac{1}{1-q}\rfloor -1)}\left( \sum _{I\in I_a'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_I)\right) \left( \sum _{I\in I_{L-a}'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_I)\right) . \end{aligned}$$
(86)

Our next task is to estimate the first factor in the above product for a fixed \(0\le a\le \min (L,\lfloor \frac{1}{1-q}\rfloor -1)\). Using the union bound,

$$\begin{aligned} \sum _{I\in I_a'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_I)\le \sum _{J\in J_a'(0)}\sum _{I\in I_a'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_{I,J}\cap E_I). \end{aligned}$$

Now, given \(J=(j_1,\ldots , j_a)\in J_a'(0)\) we let \(b(J)\) be the maximal \(k\) such that \(j_k<\lfloor \frac{1}{2(1-q)}\rfloor \) (or 0 if no such \(k\) exists), and let \(I^1:=(i_1,\ldots ,i_{b(J)}), I^2:=(i_{b(J)+1},\ldots , i_a), J^1:=(j_1,\ldots ,j_{b(J)})\) and \(J^2:=(j_{b(J)+1},\ldots , j_a)\) (where any of these vectors may be empty). By Fact 2.4, the event \(E_{I^1,J^1}\cap E_{I^1}\) is a function of \((p_i(i))\) for \(i\le i_{b(J)}\), and the event \(E_{I^2, J^2}\cap E_{I^2}\) is a function of \((p_i(i))\) for \(i>i_{b(J)}\). Since the \((p_i(i))\) are independent we obtain

$$\begin{aligned} {\mathbb {P}}(E_{I,J}\cap E_I)\!\le \! {\mathbb {P}}(E_{I^1,J^1}\cap E_{I^1}\cap E_{I^2, J^2}\cap E_{I^2})\!=\! {\mathbb {P}}(E_{I^1,J^1}\cap E_{I^1}){\mathbb {P}}(E_{I^2, J^2}\cap E_{I^2}). \end{aligned}$$

Thus, in a similar way to (86), we obtain

$$\begin{aligned}&\sum _{I\in I_a'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_I) \nonumber \\&\quad \le \sum _{b=0}^{a} \left( \sum _{J\in J_b(\lfloor \frac{1}{2(1-q)}\rfloor )}\sum _{I\in I_b'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_{I,J}\cap E_I)\right) \nonumber \\&\qquad \times \left( \sum _{J\in J_{a-b}'(\lfloor \frac{1}{2(1-q)}\rfloor )}\sum _{I\in I_{a-b}'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_{I,J}\cap E_I)\right) . \end{aligned}$$
(87)

To estimate this product, we let \(C_1>0\) be the constant from Proposition 6.7 and define, for \(m\ge 0\),

$$\begin{aligned} f(m)&:=\left( \frac{C_1}{(1-q)m^2}\right) ^m,\\ g(m)&:=\frac{(C_1(1-q))^m q^{m(m-1)/2}}{(m'(1-q))^{2m'}}, \end{aligned}$$

where \(m':=\min (m,\lfloor \frac{1}{1-q}\rfloor )\). It is immediate that \(g(m)\le f(m)\) if \(m\le \lfloor \frac{1}{1-q}\rfloor \). In addition, as in the last inequality of (44),

$$\begin{aligned} f(k)f(m)\le C^{k+m}f(k+m) \end{aligned}$$
(88)

for \(m,k\ge 0\). Now, applying Proposition 6.7 to the sums in (87) and recalling that \(a<\lfloor \frac{1}{1-q}\rfloor \), we deduce

$$\begin{aligned} \sum _{I\in I_a'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_I)\le n^3\sum _{b=0}^{a} f(b)g(a-b)\le n^3\sum _{b=0}^{a} f(b)f(a-b)\le C^a n^3 f(a). \end{aligned}$$
(89)

In a completely analogous fashion, we estimate the second factor in (86) by

$$\begin{aligned} \sum _{I\in I_{L-a}'(\lfloor \frac{1}{1-q}\rfloor )} {\mathbb {P}}(E_I)\le n^3\sum _{b=0}^{\min (L-a,\lfloor \frac{1}{2(1-q)}\rfloor -1)} f(b)g(L-a-b). \end{aligned}$$
(90)

Plugging (89) and (90) into (86) and again using (88) we finally arrive at

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)&\le n^6 \sum _{a=0}^{\min (L,\lfloor \frac{1}{1-q}\rfloor -1)}\ \sum _{b=0}^{\min (L-a,\lfloor \frac{1}{2(1-q)}\rfloor -1)} C^a f(a)f(b)g(L\!-\!a\!-\!b)\nonumber \\&\le C^L n^8\max _{0\le m\le \min \left( L,\frac{3}{2(1-q)}\right) } f(m)g(L-m). \end{aligned}$$
(91)

It remains to estimate \(f(m)g(L-m)\). It is simple to see that \(g(m)\le C^m f(m)\) when \(m\le \frac{3}{1-q}\) since for such \(m, \frac{(m(1-q))^{2m}}{(m'(1-q))^{2m'}}\le C^m\). Hence, if we assume that \(L\le \frac{3}{1-q}\) we obtain by (88) that

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L) \le C^L n^8f(L) = n^8 \left( \frac{C}{(1-q)L^2}\right) ^L\qquad \left( L\le \frac{3}{1-q}\right) , \end{aligned}$$

proving (75) in this case. We continue to the case \(L>\frac{3}{1-q}\). For all \(0\le m\le \frac{3}{2(1-q)}\) we have \(L-m\ge \frac{1}{1-q}, ((1-q)^2m^2)^{-m}\le C^{\frac{1}{1-q}}\) (by differentiating with respect to \(m\)) and \(q^{-m}\le C\) (by our assumption that \(q\ge \frac{1}{2}\)). Thus, for these \(m\),

$$\begin{aligned} f(m)g(L-m)&= \left( \frac{C}{(1-q)m^2}\right) ^m \frac{(C(1-q))^{L-m}q^{(L-m)(L-m-1)/2}}{((L-m)'(1-q))^{2(L-m)'}}\\&=\frac{q^{-mL+m/2} }{((1-q)^2m^2)^m (\lfloor \frac{1}{1-q}\rfloor (1-q))^{2\lfloor \frac{1}{1-q}\rfloor }} C^{L-m}(1-q)^L q^{\frac{L(L-1)}{2}}\\&\le (C(1-q))^L q^{\frac{L(L-1)}{2}} \qquad \qquad \qquad \qquad \quad \left( L> \frac{3}{1-q}\right) . \end{aligned}$$

Using this estimate in (91) finishes the proof of (75).

6.2 A lower bound on \({\mathbb {P}}(\mathrm{LDS }(\uppi ) \ge L)\)

In this section we prove the bound (12) of Theorem 1.7, giving a lower bound on the probability of a long decreasing subsequence. We give two bounds, one which applies only when the length \(L\) of the subsequence satisfies \(C(1-q)^{-1/2} < L < (1-q)^{-1}\), and one which applies for all \(L\). The first bound is superior to the second in the cases to which it applies.

Proposition 6.8

Let \(n\ge 1, \frac{1}{2}\le q\le 1-\frac{4}{n}\) and \(\uppi \sim \mu _{n,q}\). There exist absolute constants \(C,c>0\) such that for all integer \(L\) satisfying

$$\begin{aligned} \frac{C}{\sqrt{1-q}} \le L \le \frac{1}{1-q} \end{aligned}$$
(92)

we have

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi ) \ge L) \ge 1- \left( 1-\left( \frac{c}{(1-q)L^2}\right) ^L\right) ^{\left\lfloor \frac{n(1-q)}{4}\right\rfloor }. \end{aligned}$$

Proof

Fix an integer \(L\) satisfying (92) with the constant \(C\) large enough and the constant \(c\) small enough for the following calculations. Using Corollary 2.3, we couple \(\uppi \) with the \(q\)-Mallows process so that

$$\begin{aligned} \uppi (i) = n+1 - p_n(i)\quad \text { for all}\, 1\le i\le n. \end{aligned}$$
(93)

For \(1\le k\le L\), define the set of integers

$$\begin{aligned} O_k:=\left[ 1+\frac{3(k-1)}{(1-q)L},\, 1+\frac{3(k-1)+1}{(1-q)L}\right] \cap \mathbb {Z}. \end{aligned}$$

Observe that

$$\begin{aligned} |O_k|\ge \left\lfloor \frac{1}{(1-q)L}\right\rfloor \ge 1 \end{aligned}$$
(94)

by (92). Let

$$\begin{aligned} N:=\left\lfloor \frac{n(1-q)}{4}\right\rfloor \end{aligned}$$

and observe that \(N\ge 1\) by our assumption on \(q\). For \(1\le j\le N\) and \(1\le k\le L\) define the set of integers and the event

$$\begin{aligned} I_{j,k}&:=\left[ \frac{j+2}{1-q} + \frac{k-1}{(1-q)L},\, \frac{j + 2}{1-q} + \frac{k}{(1-q)L}\right) \cap \mathbb {Z},\\ E_{j,k}&:=\{\exists \, i\in I_{j,k}\text { such that } p_i(i)\in O_k\}. \end{aligned}$$

Observe that \(\max _{j,k} (\max (I_{j,k}))\le n\) by our assumption on \(q\). Our strategy for proving a lower bound for \({\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)\) is based on the following containment of events,

$$\begin{aligned} \{\mathrm{LDS }(\uppi )\ge L\}\supseteq \cup _{j=1}^N \cap _{k=1}^L E_{j,k}. \end{aligned}$$
(95)

Let us prove this relation. Suppose that \(\cap _{k=1}^L E_{j,k}\) occurs for some \(1\le j\le N\). For each \(k\), let \(i_{j,k}\in I_{j,k}\) be such that \(p_{i_{j,k}}(i_{j,k})\in O_k\). For each \(1\le k\le L-1\) we have by (16) that

$$\begin{aligned} p_{i_{j,k+1}}(i_{j,k})&\le p_{i_{j,k}}(i_{j,k}) + i_{j,k+1} - i_{j,k} \le \max (O_k) + \max (I_{j,k+1}) - \min (I_{j,k}) \\&< 1+\frac{3(k-1)+1}{(1-q)L} + \frac{2}{(1-q)L} \le \min (O_{k+1}) \le p_{i_{j,k+1}}(i_{j,k+1}). \end{aligned}$$

This implies, again by (16), that \(p_n(i_{j,k})<p_n(i_{j,k+1})\) and hence, by (93), that \(\uppi (i_{j,k})>\uppi (i_{j,k+1})\). Thus the event \(\{\mathrm{LDS }(\uppi )\ge L\}\) occurs.

We continue to establish a lower bound for the probability of the event on the right-hand side of (95). Observe that the sets \((I_{j,k})\) are pairwise-disjoint. Hence, since the random variables \((p_i(i))\) are independent, we have

$$\begin{aligned}&{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L) \ge {\mathbb {P}}\left( \cup _{j=1}^N \cap _{k=1}^L E_{j,k}\right) = 1 - \prod _{j=1}^N {\mathbb {P}}\left( \cup _{k=1}^L E_{j,k}^c\right) \nonumber \\&\quad = 1 - \prod _{j=1}^N \left( 1 - \prod _{k=1}^L {\mathbb {P}}(E_{j,k})\right) . \end{aligned}$$
(96)

Now, to estimate \({\mathbb {P}}(E_{j,k})\), observe first that \(\max _k (\max (O_k))\!\le \! \frac{3}{1-q}\!\le \! \min _{j,k} (\min (I_{j,k}))\) by (92). In addition, it follows from our assumption that \(q\ge \frac{1}{2}\) that \(\min _{m\in O_k} ( q^{m-1})\ge c>0\). Thus, by (15) and (94), for each \(j\) and \(k\),

$$\begin{aligned}&{\mathbb {P}}(E_{j,k}) = {\mathbb {P}}(\cup _{i\in I_{j,k}} \{p_i(i)\in O_k\}) = 1 - \prod _{i\in I_{j,k}} \left( 1 - {\mathbb {P}}(p_i(i)\in O_k)\right) \\&\quad = 1 - \prod _{i\in I_{j,k}} \left( 1 - \frac{(1-q)\sum _{m\in O_k} q^{m-1}}{1-q^i}\right) \\&\quad \ge 1 - \prod _{i\in I_{j,k}} \left( 1 - c(1-q)|O_k|\right) \ge 1 - \prod _{i\in I_{j,k}} \left( 1 - \frac{c}{L}\right) = 1 - \left( 1 - \frac{c}{L}\right) ^{|I_{j,k}|}, \end{aligned}$$

and, since \(\max _{j,k}|I_{j,k}|\le \left\lceil \frac{1}{(1-q)L}\right\rceil \le CL\) and \(\min _{j,k}|I_{j,k}|\ge \left\lfloor \frac{1}{(1-q)L}\right\rfloor \ge 1\) by (92), we may continue the last inequality to obtain

$$\begin{aligned} {\mathbb {P}}(E_{j,k}) \ge \frac{c|I_{j,k}|}{L} \ge \frac{c}{(1-q)L^2}. \end{aligned}$$

Plugging this estimate into (96) finishes the proof of the proposition. \(\square \)
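For readers who wish to experiment, the coupling (93) is straightforward to simulate. The sketch below assumes the standard description of the \(q\)-Mallows process (cf. (15) and (16)): \(p_i(i)\) is truncated geometric on \(\{1,\ldots ,i\}\), and inserting element \(i\) shifts every earlier element whose position is at least \(p_i(i)\) up by one. The helper `lds` is an illustrative \(O(n^2)\) dynamic program for the longest decreasing subsequence.

```python
import random

def mallows_permutation(n, q, rng):
    # Run the q-Mallows process and return pi via the coupling
    # pi(i) = n + 1 - p_n(i) of (93).
    positions = []  # positions[j-1] holds the current p_i(j)
    for i in range(1, n + 1):
        # inverse-CDF sample of p_i(i): P(p_i(i) <= k) = (1 - q^k) / (1 - q^i)
        u = rng.random()
        k = 1
        while (1 - q ** k) < u * (1 - q ** i):
            k += 1
        # elements at positions >= k shift up by one, consistent with (16)
        positions = [p + 1 if p >= k else p for p in positions]
        positions.append(k)
    return [n + 1 - p for p in positions]

def lds(pi):
    # longest decreasing subsequence via an O(n^2) dynamic program
    best = [1] * len(pi)
    for i in range(len(pi)):
        for j in range(i):
            if pi[j] > pi[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

rng = random.Random(0)
pi = mallows_permutation(200, 0.95, rng)
assert sorted(pi) == list(range(1, 201))  # pi is a permutation of [n]
```

Sampling many such permutations and computing `lds` gives an empirical picture of the \(\frac{c}{\sqrt{1-q}}\) scale appearing in Proposition 6.8.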

We now prove our second bound, which applies to all \(L\). The strategy in this bound is simply to look for a decreasing subsequence composed of consecutive elements.

Proposition 6.9

Let \(n\ge 1, 0< q< 1\) and \(\uppi \sim \mu _{n,q}\). Then for all integer \(L\ge 2\),

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi ) \ge L) \ge 1 - \left( 1 - q^{\frac{L(L-1)}{2}} (1-q)^L\right) ^{\lfloor \frac{n}{L}\rfloor }. \end{aligned}$$

Proof

Let \(N:=\left\lfloor \frac{n}{L}\right\rfloor \) and define the sets \(I_i:=\{1+(i-1)L,\, 2+(i-1)L,\ldots , iL\}\) for \(1\le i\le N\). Define the events

$$\begin{aligned} E_i:=\{{{\uppi }_{I_i}}\text { is the reversed identity}\},\quad (1\le i\le N). \end{aligned}$$

Then we have the following containment of events,

$$\begin{aligned} \{\mathrm{LDS }(\uppi )\ge L\}\supseteq \cup _{i=1}^N E_i. \end{aligned}$$

The events \((E_i)\) are independent by Lemma 2.5, and have the same probability by Corollary 2.7. Hence,

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)&\ge {\mathbb {P}}[\cup _{i=1}^N E_i] = 1 - {\mathbb {P}}(\cap _{i=1}^N E_i^c)\nonumber \\&= 1 - \prod _{i=1}^N (1 - {\mathbb {P}}(E_i)) = 1 - (1 - {\mathbb {P}}(E_1))^N. \end{aligned}$$
(97)

Since the reversed identity permutation on \(L\) elements has \(L(L-1)/2\) inversions, we conclude by Corollary 2.7, (1) and (2) that

$$\begin{aligned} {\mathbb {P}}(E_1) = \frac{q^{\frac{L(L-1)}{2}}}{Z_{L,q}} = q^{\frac{L(L-1)}{2}}(1-q)^L\prod _{i=1}^L\frac{1}{1-q^i}\ge q^{\frac{L(L-1)}{2}}(1-q)^L. \end{aligned}$$

Plugging this estimate into (97) finishes the proof of the proposition. \(\square \)
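The formula for \({\mathbb {P}}(E_1)\), together with the normalizing constant (2), can be confirmed by brute force over \(S_L\) for small \(L\); a sanity-check sketch (not part of the proof):

```python
from itertools import permutations
from math import prod, isclose

def inv(pi):
    # number of inversions of pi, as in the definition of the Mallows measure
    n = len(pi)
    return sum(pi[a] > pi[b] for a in range(n) for b in range(a + 1, n))

L, q = 4, 0.6
perms = list(permutations(range(1, L + 1)))
Z_enum = sum(q ** inv(s) for s in perms)                           # brute-force Z_{L,q}
Z_closed = prod((1 - q ** i) / (1 - q) for i in range(1, L + 1))   # formula (2)
assert isclose(Z_enum, Z_closed)

rev = tuple(range(L, 0, -1))   # reversed identity, with L(L-1)/2 inversions
assert inv(rev) == L * (L - 1) // 2
p = q ** inv(rev) / Z_enum
assert p >= q ** (L * (L - 1) // 2) * (1 - q) ** L   # lower bound used in the proof
```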

6.3 Upper bound on \({\mathbb {P}}(\mathrm{LDS }(\uppi ) < L)\)

In this section we use a classical combinatorial result of Erdős and Szekeres to show that \(\mathrm{LDS }(\uppi )\) is not likely to be very small, proving the bound (13) of Theorem 1.7. The following well-known theorem is a consequence of the pigeonhole principle.

Theorem 6.10

(Erdős–Szekeres) Let \(r,s\ge 1\) be any integers such that \(n>(r-1)(s-1)\). Then a permutation of length \(n\) contains either an increasing subsequence of length \(r\) or a decreasing subsequence of length \(s\).
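A small brute-force illustration of the theorem, and of its sharpness at \(n=(r-1)(s-1)\), for \(r=s=3\) (a verification sketch, not part of the text):

```python
from itertools import permutations

def longest(pi, decreasing):
    # O(n^2) dynamic program for the longest increasing (decreasing=False)
    # or decreasing (decreasing=True) subsequence of a permutation pi.
    best = [1] * len(pi)
    for i in range(len(pi)):
        for j in range(i):
            if (pi[j] > pi[i]) == decreasing:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

r = s = 3
# n = 5 > (r-1)(s-1) = 4: every permutation has LIS >= 3 or LDS >= 3.
for pi in permutations(range(1, 6)):
    assert longest(pi, False) >= r or longest(pi, True) >= s
# n = 4 = (r-1)(s-1) is not enough: (2,1,4,3) avoids both.
assert longest((2, 1, 4, 3), False) == 2 and longest((2, 1, 4, 3), True) == 2
```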

The theorem allows us to translate the large deviation bound on \(\mathrm{LIS }(\uppi )\) given by the upper bound of (7) into an upper bound on the probability that \(\mathrm{LDS }(\uppi )\) is very small.

Proposition 6.11

There are absolute constants \(C,c>0\) for which, if \(n\ge 1, \frac{1}{2} \le q \le 1-\frac{4}{n}\) and \(\uppi \sim \mu _{n,q}\), then for all integer \(2\le L<\frac{c}{\sqrt{1-q}}\),

$$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )<L) \le \left( C(1-q)L^2\right) ^{\frac{n}{L}}. \end{aligned}$$

Proof

By Theorem 6.10, for any integer \(L\ge 2, \{\mathrm{LDS }(\uppi )<L\}\subseteq \{\mathrm{LIS }(\uppi )\ge \lceil \frac{n}{L-1}\rceil \}\). If, in addition, \(L<\frac{c}{\sqrt{1-q}}\), we may apply the upper bound of (7) to obtain

$$\begin{aligned}&{\mathbb {P}}(\mathrm{LDS }(\uppi )<L)\le {\mathbb {P}}\left( \mathrm{LIS }(\uppi )\ge \left\lceil \frac{n}{L-1}\right\rceil \right) \le \min \Bigg (1, \left( \frac{C(1-q)n^2}{\lceil \frac{n}{L-1}\rceil ^2} \right) ^{\lceil \frac{n}{L-1}\rceil }\Bigg )\\&\quad \le \left( C(1-q)L^2\right) ^{\frac{n}{L}}. \end{aligned}$$

\(\square \)

It is possible to use Theorem 6.10 in the other direction as well, to prove upper bounds for \({\mathbb {P}}(\mathrm{LIS }(\uppi )<L)\) via upper bounds on \({\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)\). For certain ranges of \(n,q\) and \(L\) this provides an improvement over (8). For instance, when \(q = 1-\frac{4}{n}\) and \(L=4\), the bound (8) shows that \({\mathbb {P}}(\mathrm{LIS }(\uppi )<4)\le e^{-cn}\), whereas Theorem 6.10 and the bound (10) show that \({\mathbb {P}}(\mathrm{LIS }(\uppi )<4)\le (C/n)^{cn}\).

We do not pursue a systematic study of the ranges in which each of the bounds is optimal, nor do we prove a matching lower bound for \({\mathbb {P}}(\mathrm{LIS }(\uppi )<L)\) here. We direct the reader to Sect. 8 for a discussion of these open problems.

6.4 Bounds for \({\mathbb {E}}(\mathrm{LDS }(\uppi ))\)

In this section we prove Theorem 1.5. The proof also requires Proposition 1.9, which we now establish.

Proof of Proposition 1.9

Fix \(n\ge 2\) and \(0<q\le \frac{1}{n}\). Since \(1-x\le \exp (-x)\) for all \(x\) and \(1-x\ge \exp (-Cx)\) for \(0<x\le \frac{1}{2}\),

$$\begin{aligned} \max (c, 1-Cnq)\le&(1-q)^n\le 1-cnq,\\ 1-Cq\le&\prod _{i=1}^n(1-q^i) \le 1. \end{aligned}$$

Now, letting \(\uppi \sim \mu _{n,q}\), (1) and (2) show that

$$\begin{aligned} {\mathbb {P}}(\uppi \text { is not the identity}) = 1 - \frac{(1-q)^n}{\prod _{i=1}^n(1-q^i)} \le 1 - (1-q)^n\le Cnq, \end{aligned}$$

and, if \(n\) is sufficiently large,

$$\begin{aligned} {\mathbb {P}}(\uppi \text { is not the identity}) = 1 - \frac{(1-q)^n}{\prod _{i=1}^n(1-q^i)} \ge 1 - \frac{1-cnq}{1-Cq}\ge cnq. \end{aligned}$$

To obtain the lower bound for small \(n\), let \(\sigma \in S_n\) be any permutation with \(\mathrm{Inv }(\sigma )=1\) (here we assume \(n\ge 2\)). Then, by (1) and (2),

$$\begin{aligned} {\mathbb {P}}(\uppi = \sigma ) = \frac{q(1-q)^n}{\prod _{i=1}^n (1-q^i)} \ge c q. \end{aligned}$$

\(\square \)
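The two-sided estimate above says that \({\mathbb {P}}(\uppi \text { is not the identity})\) is of order \(nq\) in the regime \(q\le \frac{1}{n}\); a quick numerical check using (1) and (2) (not part of the proof; the parameter choices are illustrative):

```python
from math import prod

def p_not_identity(n, q):
    # 1 - mu_{n,q}(id) = 1 - (1-q)^n / prod_{i=1}^n (1 - q^i), by (1) and (2),
    # since the identity permutation has zero inversions.
    return 1 - (1 - q) ** n / prod(1 - q ** i for i in range(1, n + 1))

for n in (10, 100, 1000):
    q = 1 / (10 * n)   # well inside the regime q <= 1/n of Proposition 1.9
    ratio = p_not_identity(n, q) / (n * q)
    assert 0.5 < ratio < 1.0, (n, ratio)
```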

We now establish Theorem 1.5 using the large deviation inequalities proved above. We consider separately several different regimes depending on the relative sizes of \(q\) and \(n\).

Proof of Theorem 1.5

The constants \(C_0, c_0, c_1\) appearing in the proof below are fixed positive constants, with \(C_0\) taken large enough and \(c_0,c_1\) taken small enough for our calculations. Also, we will assume throughout the proof of (9) that \(n\ge C\) for some constant \(C\), sufficiently large for our calculations. This is without loss of generality since the theorem bounds \({\mathbb {E}}(\mathrm{LDS }(\uppi ))\) up to constants, and we may always adjust these constants so that (9) applies also to the case \(n\le C\).

  1. (i)

    Suppose \(1-\frac{C_0}{(\log n)^2}\le q\le 1-\frac{4}{n}\). Let \(L^* := \frac{c}{\sqrt{1-q}}\), for a sufficiently small \(c\). Then, by (13),

    $$\begin{aligned}&{\mathbb {E}}(\mathrm{LDS }(\uppi )) \ge L^*{\mathbb {P}}(\mathrm{LDS }(\uppi ) \ge L^*) = L^* \left( 1-{\mathbb {P}}\left( \mathrm{LDS }(\uppi ) < \lceil L^* \rceil \right) \right) \\&\quad \ge L^* \left( 1-\left( C_{13}(1-q)\lceil L^*\rceil ^2 \right) ^{n/\lceil L^*\rceil }\right) \ge \frac{L^*}{2}, \nonumber \end{aligned}$$

    where \(C_{13}\) is the constant \(C\) appearing in (13). Now let \(L^* := \frac{C}{\sqrt{1-q}}\) where \(C\) is chosen large enough so that, using the lower bound on \(q, L^* \ge 9\log _2 n\). Therefore, by the first bound of (10),

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))&\le L^* + n{\mathbb {P}}(\mathrm{LDS }(\uppi ) > L^*) \le L^* + n{\mathbb {P}}(\mathrm{LDS }(\uppi ) \ge \lceil L^* \rceil )\\&\le L^*+ n^9 \left( \frac{1}{2}\right) ^{L^*} \le L^*+1. \end{aligned}$$
  2. (ii)

    Suppose \(1-\frac{c_0\log \log n}{\log n }\le q\le 1-\frac{C_0}{(\log n)^2}\). Note that this is only part of the range of \(q\)’s in the second part of the theorem. The other part will be treated later. Let \(L^* := \frac{c \log n}{\log ((1-q) (\log n)^2)}\) for a sufficiently small \(c\). We claim that

    $$\begin{aligned} \frac{C_{12}}{\sqrt{1-q}} \le L^* \le \frac{1}{2(1-q)} \end{aligned}$$
    (98)

    where \(C_{12}\) is the constant \(C\) appearing in the first part of inequality (12). To see this, observe that \(L^*\ge \frac{C_{12}}{\sqrt{1-q}}\) is equivalent to

    $$\begin{aligned} c\sqrt{(1-q)(\log n)^2}\ge C_{12}\log ((1-q)(\log n)^2), \end{aligned}$$

    which holds when \((1-q)(\log n)^2\) is at least a sufficiently large constant. This follows from the upper bound on \(q\) by taking \(C_0\) large enough. Similarly, observe that \(L^*\le \frac{1}{2(1-q)}\) is equivalent to

    $$\begin{aligned} 2c(1-q)(\log n)^2\le \log ((1-q)(\log n)^2) \log n, \end{aligned}$$

    which holds when \(e\le (1-q)(\log n)^2\le \frac{1}{2c}\log n\cdot \log \log n\), which follows from our restrictions on \(q\) by taking \(C_0\) large enough and \(c_0\) small enough. This establishes (98). Next, we claim that

    $$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L^*)= {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge \lceil L^* \rceil ) \ge \frac{1}{2}. \end{aligned}$$
    (99)

    Observing that (98) implies that \(\frac{C_{12}}{\sqrt{1-q}}\le \lceil L^*\rceil \le \frac{1}{1-q}\), (99) will follow from the first part of (12) if we show that

    $$\begin{aligned} \left\lfloor \frac{n(1-q)}{4}\right\rfloor \left( \frac{c_{12}}{(1-q)\lceil L^* \rceil ^2}\right) ^{\lceil L^* \rceil } \ge \log 2, \end{aligned}$$

    where \(c_{12}\) is the constant \(c\) appearing in the first part of (12). Recalling our bounds on \(q\), it suffices to show that

    $$\begin{aligned} \frac{n}{(\log n)^2} \exp \left( -\lceil L^*\rceil \log \left( \frac{(1-q)\lceil L^*\rceil ^2}{c_{12}}\right) \right) \ge \frac{8\log 2}{C_0}. \end{aligned}$$
    (100)

    Now, taking the constant in the definition of \(L^*\) small enough, we have \((1-q)\lceil L^*\rceil ^2/c_{12}\le (1-q)(\log n)^2\). Therefore, again taking the constant in the definition of \(L^*\) small enough, \(\lceil L^*\rceil \log ((1-q)\lceil L^*\rceil ^2/c_{12})\le \frac{1}{2}\log n\). This establishes (100) and hence (99). Finally, (99) implies that

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi )) \ge L^* {\mathbb {P}}(\mathrm{LDS }(\uppi ) \ge L^*) \ge \frac{L^*}{2}. \end{aligned}$$

    Now let \(L^* := \frac{C \log n}{\log ((1-q) (\log n)^2)}\) for a sufficiently large \(C\). As in the proof of (98), also in this case we have \(\lceil L^*\rceil \le \frac{3}{1-q}\) if the constant \(C_0\) is large enough and the constant \(c_0\) is small enough. We also have \(L^*\ge 2\) by our restrictions on \(q\) and by taking the constant \(C\) large enough. Hence we may apply the first bound of (10) and obtain the bound below, taking \(C\) to be large enough

    $$\begin{aligned} {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L^*)= {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge \lceil L^* \rceil )\le n^8\left( \frac{C_{10}}{(1-q)\lceil L^*\rceil ^2}\right) ^{\lceil L^*\rceil }, \end{aligned}$$
    (101)

    where \(C_{10}\) is the constant \(C\) from (10). We claim that the right-hand side of (101) is at most \(\frac{1}{n}\) if the constant in the definition of \(L^*\) is taken large enough. Equivalently,

    $$\begin{aligned} \lceil L^*\rceil \log ((1-q)\lceil L^*\rceil ^2 / C_{10})\ge 9\log n. \end{aligned}$$

    For this, substituting the definition of \(L^*\) with a large enough constant, it suffices to show that

    $$\begin{aligned} (1-q)\lceil L^*\rceil ^2 / C_{10}\ge \left( (1-q)(\log n)^2\right) ^{\frac{1}{2}}. \end{aligned}$$

    We now substitute the definition of \(L^*\) in the left-hand side. Again taking the constant \(C\) large enough, the inequality reduces to showing

    $$\begin{aligned} \frac{(1-q)(\log n)^2}{(\log ((1-q)(\log n)^2))^2} \ge \left( (1-q)(\log n)^2\right) ^{\frac{1}{2}}. \end{aligned}$$

    Denoting \(y:=(1-q)(\log n)^2\), we may rewrite this as

    $$\begin{aligned} y^{\frac{1}{2}}\ge (\log y)^2. \end{aligned}$$

    This inequality is satisfied whenever \(y\) is sufficiently large, and this condition is assured in our setting by choosing the constant \(C_0\) in the upper bound on \(q\) large enough. Finally, we conclude that

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))\le L^* + n{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L^*) \le L^* + 1. \end{aligned}$$
  3. (iii)

    Suppose \(1 - \frac{c_1(\log \log n)^2}{\log n}\le q\le 1-\frac{c_0\log \log n}{\log n}\). Continuing the previous item, the second part of the theorem will follow by showing that for this range of \(q\)’s, \({\mathbb {E}}(\mathrm{LDS }(\uppi ))\approx \frac{\log n}{\log \log n}\). Note that the assumptions on \(q\) imply that for some constants \(C(c_0), c(c_1), C(c_1)>0\) we have

    $$\begin{aligned}&c(c_1)\log \log n\le \log \left( \frac{1}{1-q}\right) \le C(c_0)\log \log n,\end{aligned}$$
    (102)
    $$\begin{aligned}&e^{-C(c_1)(1-q)}\le q\le e^{-(1-q)}. \end{aligned}$$
    (103)

    Let \(L^* := \frac{c \log n}{\log \log n}\) for a sufficiently small \(c\). We take \(n\) sufficiently large compared to \(c\) so that \(L^*\ge 2\). By the second part of (12),

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))&\ge L^*{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge \lceil L^* \rceil )\nonumber \\&\ge L^*\left( 1-\left( 1-q^{\frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}} (1-q)^{\lceil L^* \rceil }\right) ^{\left\lfloor \frac{n}{\lceil L^* \rceil }\right\rfloor }\right) . \end{aligned}$$
    (104)

    Applying (102) and (103), recalling our assumptions on \(q\) and taking \(c\) small enough, we have

    $$\begin{aligned}&q^{\frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}}(1-q)^{\lceil L^* \rceil }\\&\quad \ge \exp \left( -C(c_1)(1-q)\frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}-C(c_0)\lceil L^* \rceil \log \log n \right) \\&\quad \ge \exp \left( -C(c_1)(1-q)(L^*)^2-2C(c_0)L^*\log \log n \right) \\&\quad \ge \exp \left( -\frac{1}{2}\log n\right) = \frac{1}{\sqrt{n}}. \end{aligned}$$

    Substituting into (104) shows that

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))\ge L^{*}\left( 1 - \left( 1 - \frac{1}{\sqrt{n}}\right) ^{\left\lfloor \frac{n}{L^*}\right\rfloor }\right) \ge \frac{L^*}{2}. \end{aligned}$$

    Now, let \(L^* := \frac{C \log n}{\log \log n}\) for a sufficiently large \(C\). Applying (102),

    $$\begin{aligned} q^{\frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}}(1-q)^{\lceil L^* \rceil }&\le \exp \left( -c(c_1)\lceil L^* \rceil \log \log n \right) \\&\le \exp \left( -c(c_1)L^*\log \log n \right) \le \frac{1}{n^{10}}. \end{aligned}$$

    For our choice of \(L^*\) we have \(L^*>\frac{3}{1-q}\) by the upper bound on \(q\). Thus, using the second part of (10),

    $$\begin{aligned}&{\mathbb {E}}(\mathrm{LDS }(\uppi ))\le L^* + n{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge \lceil L^{*}\rceil )\le L^* + n^9(C_{10}(1-q))^{\lceil L^* \rceil } q^{\frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}} \nonumber \\&\quad \le L^* + \frac{(C_{10})^{\lceil L^*\rceil }}{n}\le L^* + 1. \end{aligned}$$
    (105)
  4. (iv)

    Let \(\frac{1}{n} \le q \le 1-c_1\frac{(\log \log n)^2}{\log n}\). In this regime we have for an appropriate \(C(c_1)>0\),

    $$\begin{aligned}&\log \left( \frac{1}{1-q}\right) \le C(c_1)\log \log n,\\&\log (1/q)\ge 1-q\ge c_1\frac{(\log \log n)^2}{\log n}. \end{aligned}$$

    Let \(L^* := c\sqrt{\frac{\log n}{\log (1/q)}}\) for a sufficiently small \(c\). If \(L^*<2\) then, trivially,

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))\ge 1> \frac{1}{2}L^*. \end{aligned}$$

    Otherwise, assume that \(L^*\ge 2\). Then, as in (104),

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))&\ge L^*{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge \lceil L^* \rceil )\nonumber \\&\ge L^*\left( 1-\left( 1-q^{\frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}} (1-q)^{\lceil L^* \rceil }\right) ^{\left\lfloor \frac{n}{\lceil L^* \rceil }\right\rfloor }\right) . \end{aligned}$$
    (106)

    We may estimate the term on the right-hand side as

    $$\begin{aligned}&q^{\frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}}(1-q)^{\lceil L^* \rceil }\\&\quad \ge \exp \left( -\log \left( \frac{1}{q}\right) \frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}-C(c_1)\lceil L^* \rceil \log \log n \right) \\&\quad \ge \exp \left( -\log \left( \frac{1}{q}\right) (L^*)^2-2C(c_1)L^* \log \log n \right) \ge \exp \left( -\frac{1}{2}\log n\right) = \frac{1}{\sqrt{n}}. \end{aligned}$$

    Plugging into (106) implies that \({\mathbb {E}}(\mathrm{LDS }(\uppi ))\ge \frac{L^*}{2}\). Now, let \(L^* := C\sqrt{\frac{\log n}{\log (1/q)}}\) for a sufficiently large \(C\). First observe that \(L^*\ge 2\) by our assumptions on \(q\) and \(C\). In addition, note that when \(q\ge \frac{1}{2}\) we have \(\log (1/q)\le C'(1-q)\) for some \(C'>0\). It follows that \(L^*> \frac{3}{1-q}\) for our range of \(q\)’s. Thus, using the second part of (10),

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))\le L^* + n{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge \lceil L^{*}\rceil )\le L^* + n^9(C_{10}(1-q))^{\lceil L^* \rceil } q^{\frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}}. \end{aligned}$$
    (107)

    Then, using that \(L^*-1\ge \frac{L^*}{2}\) since \(L^*\ge 2\),

    $$\begin{aligned}&q^{\frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2}}(1-q)^{\lceil L^* \rceil }\le \exp \left( -\log \left( \frac{1}{q}\right) \frac{\lceil L^* \rceil (\lceil L^* \rceil -1)}{2} \right) \\&\quad \le \exp \left( -\log \left( \frac{1}{q}\right) \frac{(L^*)^2}{4} \right) \le \exp (-10\log n) = \frac{1}{n^{10}}. \end{aligned}$$

    Plugging into (107) and using our assumption on \(q\),

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))\le L^* + \frac{(C_{10})^{\lceil L^*\rceil }}{n}\le L^* + 1. \end{aligned}$$
  5. (v)

    Let \(n\ge 2\) and \(0<q\le \frac{1}{n}\). By the second part of (12),

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))-1&=\sum _{L=2}^n {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)\ge {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge 2)\\&\ge \left( 1-\left( 1-q^{\frac{2(2-1)}{2}} (1-q)^{2}\right) ^{\left\lfloor \frac{n}{2}\right\rfloor }\right) \\&\ge \left( 1-\exp \left( -\left\lfloor \frac{n}{2}\right\rfloor q(1-q)^2\right) \right) \ge cnq, \end{aligned}$$

    where in the second-to-last inequality we used the fact that \(\exp (-x)\le 1-cx\) for some \(c>0\) whenever \(0\le x\le \frac{1}{2}\). If \(2\le n\le 3\), Proposition 1.9 implies that

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))-1\le 2{\mathbb {P}}(\uppi \text { is not the identity})\le Cnq. \end{aligned}$$

    Otherwise, if \(n\ge 4\), we may use the second part of (10) along with Proposition 1.9 to obtain

    $$\begin{aligned} {\mathbb {E}}(\mathrm{LDS }(\uppi ))-1&=\sum _{L=2}^n {\mathbb {P}}(\mathrm{LDS }(\uppi )\ge L)\le 3{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge 2) + n{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge 5)\\&\le 3{\mathbb {P}}(\uppi \text { is not the identity}) + n{\mathbb {P}}(\mathrm{LDS }(\uppi )\ge 5)\\&\le Cnq + Cn^9q^{10} \le Cnq + Cq. \end{aligned}$$

    \(\square \)
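Before moving on, we note that the elementary inequality \(y^{1/2}\ge (\log y)^2\), to which case (ii) above was reduced, indeed holds once \(y\) exceeds a moderate absolute constant. A quick numerical check, illustrative only and not part of the proof:

```python
import math

def reduced_inequality_holds(y: float) -> bool:
    """Check y**(1/2) >= (log y)**2, the inequality to which case (ii) reduces."""
    return math.sqrt(y) >= math.log(y) ** 2

# The inequality fails for moderate y but holds from a moderate absolute
# constant onward, and persists, since sqrt(y) eventually dominates (log y)^2.
for y in (100, 5000, 6000, 10**4, 10**8):
    print(y, reduced_inequality_holds(y))
```

This is why "\(y\) sufficiently large" can be guaranteed simply by choosing the constant \(C_0\) in the upper bound on \(q\) large enough.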

7 Variance of the length of monotone subsequences

In this section we prove Proposition 1.8, giving a bound on the variance of \(\mathrm{LIS }(\uppi )\) and a Gaussian tail bound for it.

Fix \(n\ge 1, q>0\), and let \((p_i)\) be the \(q\)-Mallows process. Since \(p_n\sim \mu _{n, 1/q}\), and \(q\) is arbitrary, it suffices to show that

$$\begin{aligned} \mathrm{Var }(\mathrm{LIS }(p_n))\le n-1 \end{aligned}$$
(108)

and, for all \(t>0\),

$$\begin{aligned} {\mathbb {P}}(|\mathrm{LIS }(p_n)-{\mathbb {E}}(\mathrm{LIS }(p_n))|> t\sqrt{n-1})< 2e^{-t^2/2}. \end{aligned}$$
(109)

Recall from the definition of the Mallows process that \(p_n\) is determined by the random variables \((p_i(i)), 2\le i\le n\), and that these random variables are independent. Let us define a function \(f\) by the relation

$$\begin{aligned} \mathrm{LIS }(p_n) = f(p_2(2), p_3(3),\ldots , p_n(n)). \end{aligned}$$

We will show that \(f\) has the bounded differences property: precisely, if \(x := (x_2,\ldots , x_n)\) and \(x' := (x_2',\ldots , x_n')\) satisfy \(1\le x_i, x_i'\le i\) for all \(i\) and \(x_i = x_i'\) for all but one value of \(i\), then

$$\begin{aligned} |f(x) - f(x')| \le 1. \end{aligned}$$
(110)

This implies (108) and (109) by standard facts. To see this, define the martingale \(L_i:={\mathbb {E}}(\mathrm{LIS }(p_n)\,|\,(p_j(j)), j\le i)\) for \(1\le i\le n\), where we note that \(L_1={\mathbb {E}}(\mathrm{LIS }(p_n))\) since \(p_1(1)\) is constant. Then (110) and [2, Theorem 7.4.1] imply that \(|L_{i+1}-L_i|\le 1\) for all \(i\), almost surely. Thus, by the martingale property,

$$\begin{aligned} \mathrm{Var }(\mathrm{LIS }(p_n)) = {\mathbb {E}}(L_n - L_1)^2 = \sum _{i=2}^n {\mathbb {E}}(L_i - L_{i-1})^2 \le n-1. \end{aligned}$$

The tail bound (109) follows from the Bernstein–Hoeffding–Azuma inequality [2, Theorem 7.2.1].
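Both (108) and (109) can be compared against simulation. The following is a minimal Monte Carlo sketch, not part of the proof: it assumes the standard sequential sampler for \(\mu _{n,q}\), in which the insertion ranks \(x_i\) are independent with \({\mathbb {P}}(x_i=j)\propto q^{i-j}\) (so that \(\mathrm{Inv }(\uppi )=\sum _i (i-x_i)\)), and the parameter values below are arbitrary choices.

```python
import random
from bisect import bisect_left
from statistics import variance

def lis_length(p):
    """Length of the longest increasing subsequence via patience sorting, O(n log n)."""
    tails = []
    for v in p:
        k = bisect_left(tails, v)
        if k == len(tails):
            tails.append(v)
        else:
            tails[k] = v
    return len(tails)

def sample_mallows(n, q, rng):
    """Draw pi ~ mu_{n,q}: insertion ranks x_i independent with
    P(x_i = j) proportional to q**(i - j), so Inv(pi) = sum_i (i - x_i)."""
    perm = [1]
    for i in range(2, n + 1):
        xi = rng.choices(range(1, i + 1),
                         weights=[q ** (i - j) for j in range(1, i + 1)])[0]
        # values >= xi shift up by one; the new element takes value xi
        perm = [v + 1 if v >= xi else v for v in perm] + [xi]
    return perm

n, q, trials = 100, 0.5, 400
rng = random.Random(1)
samples = [lis_length(sample_mallows(n, q, rng)) for _ in range(trials)]
print("empirical variance:", variance(samples), "vs bound n-1 =", n - 1)
```

The bounded differences bound controls only the worst case, so the empirical variance is typically well below \(n-1\) at such sizes; see item 1 of Sect. 8 for the conjectured true order.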

Let us now prove (110). Let \(x, x'\) be as above and suppose that \(x_i = x_i'\) for all \(i\ne i_c\), and \(x_{i_c}\ne x_{i_c}'\). By symmetry of \(x\) and \(x'\), it suffices to show that

$$\begin{aligned} f(x')\ge f(x)-1. \end{aligned}$$
(111)

Write \((p_i^x), 1\le i\le n\), for the first \(n\) permutations in the Mallows process which result when \(p_1(1) = 1\) and \(p_i(i) = x_i\) for \(2\le i\le n\). Similarly let \((p_i^{x'})\) be the first \(n\) permutations which result when \(p_i(i)=x_i'\) for \(2\le i\le n\). Recall that, by definition, \(\mathrm{LIS }(p_n^x) = f(x)\) and let \(1\le i_1<\cdots <i_{f(x)}\le n\) be the indices of an (arbitrary) longest increasing subsequence in \(p_n^x\). That is, \((i_1,\ldots , i_{f(x)})\) satisfy

$$\begin{aligned} p_n^x(i_{j+1})>p_n^x(i_j)\quad \text {for}\, 1\le j\le f(x)-1. \end{aligned}$$
(112)

We will make repeated use of the following two facts, which follow directly from the definition of the Mallows process: for any \(i<j\), if \(p_j(i)<p_j(j)\) then \(p_k(i)<p_k(j)\) for all \(k\ge j\). In addition, the values of \((p_m(m)), i\le m\le j\), determine whether \(p_j(i)<p_j(j)\) (this is a special case of Fact 2.4). Let us now consider several cases.

  1.

    If \(f(x)=1\) then (111) is trivial.

  2.

    If \(i_c>i_{f(x)-1}\) it follows from (112) that \((i_1,\ldots , i_{f(x)-1})\) form the indices of an increasing subsequence in \(p_n^{x'}\) and hence \(f(x')\ge f(x)-1\), proving (111). Similarly, if \(i_c<i_2\) it follows from (112) that \((i_2, \ldots , i_{f(x)})\) form the indices of an increasing subsequence in \(p_n^{x'}\) and hence \(f(x')\ge f(x)-1\), again proving (111).

  3.

    Finally, suppose that \(i_2\le i_c\le i_{f(x)-1}\) and let \(2\le j_c\le f(x)-1\) be equal to the maximal integer for which \(i_{j_c}\le i_c\). In this case, by the aforementioned facts about the Mallows process, we have that each of \((i_1,\ldots , i_{j_c-1})\) and \((i_{j_c+1}, \ldots , i_{f(x)})\) form the indices of an increasing subsequence in \(p_n^{x'}\). Hence, to prove (111), it suffices to prove that \(p_n^{x'}(i_{j_c+1})>p_n^{x'}(i_{j_c-1})\), which is equivalent to

    $$\begin{aligned} p_{i_{j_c+1}}^{x'}(i_{j_c+1})>p_{i_{j_c+1}}^{x'}(i_{j_c-1}). \end{aligned}$$
    (113)

    Condition (112) implies that

    $$\begin{aligned} p_{i_c}^x(i_{j_c-1})<p_{i_c}^x(i_{j_c}). \end{aligned}$$
    (114)

    Now, (16) implies that in a Mallows process \((p_i), p_i(j) - p_{i-1}(j)\) can change by at most one when \(p_i(i)\) changes. Thus, we deduce from (114) that

    $$\begin{aligned} p_{i_c}^{x'}(i_{j_c-1})\le p_{i_c}^x(i_{j_c}). \end{aligned}$$

    By (16) again, since \(x_i=x_i'\) for \(i_c<i\le i_{j_c+1}\), we conclude that

    $$\begin{aligned} p_{i_{j_c+1}}^{x'}(i_{j_c-1})\le p_{i_{j_c+1}}^x(i_{j_c}) < p_{i_{j_c+1}}^x (i_{j_c+1}) = x_{i_{j_c+1}} = x'_{i_{j_c+1}} = p_{i_{j_c+1}}^{x'} (i_{j_c+1}), \end{aligned}$$

    proving (113) and finishing the proof of the proposition.
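The bounded differences property (110) can also be verified exhaustively for small \(n\). The sketch below is illustrative only; it assumes that (16) takes the insertion form \(p_i(j)=p_{i-1}(j)+\mathbf {1}\{p_{i-1}(j)\ge p_i(i)\}\) for \(j<i\), i.e., step \(i\) sets \(p_i(i)=x_i\) and shifts each earlier value that is at least \(x_i\) up by one.

```python
from bisect import bisect_left
from itertools import product

def build_permutation(x):
    """p_n determined by insertion values x = (x_2, ..., x_n): at step i the
    value x_i is placed at index i and earlier values >= x_i shift up by one."""
    p = [1]  # p_1 is the trivial permutation, p_1(1) = 1
    for xi in x:
        p = [v + 1 if v >= xi else v for v in p] + [xi]
    return p

def lis_length(p):
    """Length of the longest increasing subsequence via patience sorting."""
    tails = []
    for v in p:
        k = bisect_left(tails, v)
        if k == len(tails):
            tails.append(v)
        else:
            tails[k] = v
    return len(tails)

def worst_single_coordinate_jump(n):
    """max |f(x) - f(x')| over all x and all x' differing in one coordinate,
    where f(x) = LIS(p_n); (110) predicts this maximum is at most 1."""
    worst = 0
    for x in product(*(range(1, i + 1) for i in range(2, n + 1))):
        fx = lis_length(build_permutation(x))
        for ic, xi in enumerate(x):
            for new in range(1, ic + 3):  # x[ic] is x_{ic+2}, ranging over 1..ic+2
                if new != xi:
                    xp = x[:ic] + (new,) + x[ic + 1:]
                    worst = max(worst, abs(fx - lis_length(build_permutation(xp))))
    return worst

print(worst_single_coordinate_jump(5))
```

For \(n=5\) the maximum equals \(1\): it is at most \(1\) by the proposition, and it is attained, e.g., by changing the last insertion rank of the identity permutation from \(n\) to \(1\).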

8 Discussion and open questions

A number of interesting directions remain for further research.

  1.

    (Variance of the \(\mathrm{LIS }\) and limiting distribution). A natural next step is to determine the variance of the longest increasing subsequence and its limiting distribution. By the work of Baik et al. [4] the variance is of order \(n^{1/3}\) and the limiting distribution is Tracy–Widom when \(q=1\). In the case that \(0<q<1\) is fixed we expect the variance to be of order \(n\) and the limiting distribution to be Gaussian. Establishing these last facts should not be difficult (Proposition 1.8 shows one direction for the variance). It is less clear what the variance and limiting distribution should be in the intermediate regime of \(q\) though it may at least seem reasonable that the variance decreases with \(q\). The bounds on the displacement obtained in Theorem 1.1 show that in the graphical representation of a Mallows permutation most points lie in a strip whose width is proportional to \(1/(1-q)\) (see Fig. 2). This suggests a possible connection between the length of the longest increasing subsequence of a Mallows permutation and the model of last passage percolation for random points in a strip. The analogy is not perfect, however, since the points in the graphical representation of the Mallows measure are correlated. It is not clear whether, asymptotically, these correlations have a significant effect on the variance and limiting distribution (see also Question 8 below). Chatterjee and Dey [9] investigated undirected first passage percolation in the rectangle \([0,k]\times [0, h_k]\) and conjectured that the first passage time has variance \(kh_k^{-1/2+o(1)}\) and Gaussian limit distribution when \(h_k\ll k^{2/3}\). They proved that the limiting distribution is indeed Gaussian when \(h_k\ll k^{1/3}\) and gave certain evidence for the full conjecture (as well as similar results in higher dimensions). Several authors [6, 7, 30] have investigated directed first and last passage percolation in the rectangle \([0,k]\times [0, h_k]\). 
They have shown that when \(1\ll h_k \ll k^{3/7}\) the passage time converges to the Tracy–Widom distribution, in contrast to the aforementioned results of [9] for undirected first passage percolation. While directed last passage percolation is more similar to the longest increasing subsequence model than undirected first passage percolation, the convergence to the Tracy–Widom law in this result seems related to the fact that the rectangle considered is horizontal, unlike our diagonal strip. Thus an intriguing question is which limit distribution appears for the length of the longest increasing subsequence in the intermediate regime of \(q\), when \(q\rightarrow 1\) with \(n\) at some rate. Is it a Tracy–Widom distribution as is the case for \(q=1\), or is it the Gaussian distribution as we expect for fixed \(0<q<1\), or some other possibility? Is it the same throughout the entire intermediate regime? What is the dependence of the variance on \(n\) and \(q\)? Does it have the asymptotic form \(n^a (1-q)^b\), for some \(a,b\), as the expectation does? Possibly, if there are several regimes for the limiting distribution then there would also be several regimes for the values of \(a\) and \(b\) depending on the precise rate at which \(q\) tends to \(1\) with \(n\). In Sect. 5.2 we have shown that the longest increasing subsequence is close to a sum of i.i.d. random variables corresponding to the longest increasing subsequences of disjoint blocks of elements. However, our bounds on the error terms in this approximation do not seem to be strong enough to draw useful conclusions on the distribution or variance of the longest increasing subsequence.

  2.

    (RSK correspondence). In prior work on the distribution of the longest increasing subsequence for the uniform distribution, e.g., [4, 22, 32], the combinatorial bijection known as the Robinson–Schensted–Knuth (RSK) correspondence between permutations and Young tableaux has played an important role. A natural question is to study the measure induced on Young tableaux by the RSK correspondence applied to Mallows-distributed permutations.

  3.

    (Limits of graphical representation). Consider the graphical representation of Mallows-distributed permutations as in Fig. 2. Theorem 1.1 and the figure suggest that the empirical distribution of the points in a square of width \(\frac{1}{1-q}\) around the diagonal converges to a limiting density. What is the form of this density? Starr [29] has answered this question in the regime where \(n(1-q)\) tends to a finite constant. Additionally, what is the local limit of the points in the graphical representation (the limit when zooming to a scale in which there is one point per unit area on average)? Is it a Poisson process or does it have non-trivial correlations? A related question is to understand the joint distribution of displacements beyond the estimates given in Theorem 1.1.

  4.

    (Law of large numbers for \(\mathrm{LDS }\)). It remains to establish a law of large numbers for the longest decreasing subsequence. Extrapolating from the results of Mueller and Starr [24], we expect that the length of the longest decreasing subsequence multiplied by \(\sqrt{1-q}\) converges in probability to the constant \(\uppi \), at least when \(n(1-q)\rightarrow \infty \) and \((\log n)^2(1-q)\rightarrow 0\). See also Remark 1.6.

  5.

    (Expected \(\mathrm{LIS }\) for fixed \(q\)). Fix \(0<q<1\) and let \({{\uppi }_{n}}\) have the \((n,q)\)-Mallows distribution. Corollary 2.7 implies that

    $$\begin{aligned}&{\mathbb {E}}(\mathrm{LIS }({{\uppi }_{n+m}})) \le {\mathbb {E}}(\mathrm{LIS }(({{\uppi }_{n+m}})_{\{1,\ldots , n\}}) + \mathrm{LIS }(({{\uppi }_{n+m}})_{\{n+1,\ldots , n+m\}})) \\&\quad = {\mathbb {E}}(\mathrm{LIS }({{\uppi }_{n}})) + {\mathbb {E}}(\mathrm{LIS }({{\uppi }_{m}})). \end{aligned}$$

    Thus, by Fekete’s subadditive lemma,

    $$\begin{aligned} \lim _{n\rightarrow \infty }\frac{{\mathbb {E}}(\mathrm{LIS }({{\uppi }_{n}}))}{n}= \inf _n \frac{{\mathbb {E}}(\mathrm{LIS }({{\uppi }_{n}}))}{n} =: c(q). \end{aligned}$$

    It would be interesting to find an explicit expression for \(c(q)\). Proposition 1.4 shows that \(1-q\le c(q)\le \frac{1}{1+q}\) for all \(0<q<1\), which is rather tight for small \(q\). In addition, Theorem 1.2 and the above representation of \(c(q)\) as an infimum imply that

    $$\begin{aligned} \limsup _{q\uparrow 1}\frac{c(q)}{\sqrt{1-q}}\le 1. \end{aligned}$$
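The constant \(c(q)\) can also be probed numerically. The following Monte Carlo sketch is illustrative only; it assumes the standard sequential sampler for \(\mu _{n,q}\) (independent insertion ranks \(x_i\) with \({\mathbb {P}}(x_i=j)\propto q^{i-j}\), so that \(\mathrm{Inv }(\uppi )=\sum _i(i-x_i)\)), and the parameter values are arbitrary choices. Since \(c(q)\) is an infimum, the finite-\(n\) average overestimates it.

```python
import random
from bisect import bisect_left

def lis_length(p):
    """Length of the longest increasing subsequence via patience sorting."""
    tails = []
    for v in p:
        k = bisect_left(tails, v)
        if k == len(tails):
            tails.append(v)
        else:
            tails[k] = v
    return len(tails)

def sample_mallows(n, q, rng):
    """Draw pi ~ mu_{n,q}: insertion ranks x_i independent with
    P(x_i = j) proportional to q**(i - j)."""
    perm = [1]
    for i in range(2, n + 1):
        xi = rng.choices(range(1, i + 1),
                         weights=[q ** (i - j) for j in range(1, i + 1)])[0]
        perm = [v + 1 if v >= xi else v for v in perm] + [xi]
    return perm

def estimate_c(q, n=300, trials=30, seed=0):
    """Average LIS(pi_n)/n over independent samples; an upper-biased estimate
    of c(q), since c(q) = inf_n E(LIS(pi_n))/n."""
    rng = random.Random(seed)
    return sum(lis_length(sample_mallows(n, q, rng)) for _ in range(trials)) / (trials * n)

q = 0.5
print(f"c({q}) estimate: {estimate_c(q):.3f}; proven: {1 - q:.3f} <= c(q) <= {1 / (1 + q):.3f}")
```

Comparing such estimates for several values of \(q\) against the proven bounds \(1-q\le c(q)\le \frac{1}{1+q}\) gives a sense of where in that window \(c(q)\) lies.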
  6.

    (Improved large deviation bounds). Our large deviation results are not always sharp. For instance, our bound (8) on the lower tail of \(\mathrm{LIS }(\uppi )\) can probably be improved. Deuschel and Zeitouni [11] proved that \({\mathbb {P}}(\mathrm{LIS }(\uppi )<c\sqrt{n})\) is exponentially small in \(n\) for a uniform permutation \(\uppi \in S_n\). However, substituting \(q=1-4/n\) (which one may expect behaves similarly to the uniform case) in (8) yields only that \({\mathbb {P}}(\mathrm{LIS }(\uppi )<c\sqrt{n})\) is at most exponentially small in \(\sqrt{n}\). See also the remark at the end of Sect. 6.3.