In addition to our usual notations for generating functions of different classes of paths, we denote by \(W_{\alpha }=W_{\alpha }(t,u)\) (where \(1 \le \alpha \le \ell \)) the bivariate generating function of those walks avoiding the pattern p that terminate in state \(\alpha \); similarly \(M_{\alpha }=M_{\alpha }(t,u)\) for meanders avoiding the pattern p that terminate in state \(\alpha \).
Proof of Theorem 3.1
(Generating function of walks) and proof of Proposition 4.1. On the one hand, we have the following vectorial functional equation:
$$\begin{aligned}&(W_1 \ \cdots \ W_{\ell }) = (1 \ 0 \ \cdots \ 0)+ t (W_1 \ \cdots \ W_{\ell }) \ A ,\\&(W_1 \ \cdots \ W_{\ell }) \ (I-tA) = (1 \ 0 \ \cdots \ 0), \\&(W_1 \ \cdots \ W_{\ell }) = (1 \ 0 \ \cdots \ 0) \ \frac{\mathrm {adj}(I-tA)}{\det (I-tA)}. \end{aligned}$$
Therefore, the generating function W(t, u), which is the sum of the generating functions \(W_\alpha (t,u)\) over all states, is equal to
$$\begin{aligned} W(t,u) = (W_1 \ \cdots \ W_{\ell }) \ \vec {{\mathbf {1}}}= \frac{(1 \ 0 \ \cdots \ 0) \, \mathrm {adj}(I-tA) \, \vec {{\mathbf {1}}}}{\det (I-tA)}. \end{aligned}$$
On the other hand, the generating function W(t, u) can be obtained by the following combinatorial argument, which en passant also justifies the introduction of the autocorrelation polynomial, as done in the seminal work of Guibas and Odlyzko [45]. We first introduce \(W^{\{p\}}(t,u)\), the generating function of the walks over \({\mathcal {S}}\) that end with p and contain no other occurrence of p. Then we have \(W+W^{\{p\}} = 1+tPW\) (if we add a letter from \({\mathcal {S}}\) to a p-avoiding walk, then we either obtain another p-avoiding walk, or a walk with a single occurrence of p at the end), and \(W t^\ell u^{{\text {alt}}(p)} = W^{\{p\}} R\) (a walk obtained from a p-avoiding walk by appending p can also be obtained from a walk ending with a single occurrence of p by appending the complement of a presuffix of p). Solving this system, we obtain \(W(t,u)= R(t,u)/K(t,u)\).
Thus, we have obtained two representations of W(t, u):
$$\begin{aligned} W(t,u) =\frac{(1 \ 0 \ \cdots \ 0) \, \mathrm {adj}(I-tA) \, \vec {{\mathbf {1}}}}{\det (I-tA)} = \frac{R(t,u)}{(1-tP(u))R(t,u)+t^{\ell }u^{{\text {alt}}(p)}}. \end{aligned}$$
(20)
In order to see that this is the same representation (that is, the numerators and the denominators are equal in both fractions), we notice that \(\det (I-tA)\) is a polynomial in t of degree \(\ell \) with constant term 1. This is also the case for \((1-tP(u))R(t,u)+t^{\ell }u^{{\text {alt}}(p)}\), so the two denominators in Formula (20) are actually equal, and hence so are the two numerators. This gives the proof of Theorem 3.1 for walks, and also proves Proposition 4.1 on the structure of the kernel. \(\square \)
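Identity (20) can be checked mechanically on a toy instance. The following Python/sympy sketch (an illustration, not part of the argument) uses the Dyck step set \(\{+1,-1\}\) and the pattern p = UU; the transfer matrix A of the pattern automaton and the autocorrelation polynomial \(R = 1+tu\) are worked out by hand for this example.

```python
import sympy as sp

t, u = sp.symbols('t u')

# Toy instance: step set S = {+1, -1}, pattern p = UU (so l = 2, alt(p) = 2).
# States of the pattern automaton: X1 = empty prefix read, X2 = prefix "U" read.
# Reading U in state X2 would complete the pattern, so that transition is absent.
A = sp.Matrix([[1/u, u],
               [1/u, 0]])
I2 = sp.eye(2)

# First representation: (1 0 ... 0) adj(I - tA) 1 / det(I - tA).
num1 = (sp.Matrix([[1, 0]]) * (I2 - t*A).adjugate() * sp.ones(2, 1))[0]
den1 = (I2 - t*A).det()

# Second representation: R / ((1 - tP)R + t^l u^{alt(p)}).
P = u + 1/u          # step polynomial
R = 1 + t*u          # autocorrelation polynomial of UU (borders: UU and U)
K = (1 - t*P)*R + t**2 * u**2

# Numerators and denominators agree termwise, as claimed in the proof.
assert sp.simplify(num1 - R) == 0
assert sp.simplify(den1 - K) == 0
```

For this pattern both sides collapse to \((1+tu)/(1-t/u-t^2)\); at u = 1 this gives \((1+t)/(1-t-t^2)\), the Fibonacci-type count of \(\pm 1\) sequences with no two consecutive up steps.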
We now turn to the consequences of this formula for W(t, u) when one considers bridges.
Proof of Theorem 3.1
(Generating function of bridges) In order to find the univariate generating function B(t) for bridges, we need to extract the coefficient of \(u^0\) from W(t, u). To this end, we regard W(t, u) as a univariate function of u, assume that t is a sufficiently small fixed number, extract the coefficient by means of Cauchy’s integral formula, and apply the residue theorem (recall that \(u_1, \ldots , u_e\) are the small roots of K(t, u)):
$$\begin{aligned} B(t) = [u^0] W(t,u) = \frac{1}{2 \pi i } \int _{|u|=\varepsilon } \frac{W(t,u)}{u} \, du = \sum _{i=1}^{e} \mathrm {Res}_{u=u_i(t)} \frac{W(t,u)}{u}. \end{aligned}$$
By the formula for residues of rational functions, we have
$$\begin{aligned} \mathrm {Res}_{u=u_i(t)} \frac{W(t,u)}{u}&= \mathrm {Res}_{u=u_i(t)} \frac{R(t,u)}{u \, ((1-tP(u))R(t,u)+t^\ell u^{{\text {alt}}(p)})} \\&= \left. \frac{R(t,u)}{ \frac{d}{du} ( u \, ((1-tP(u))R(t,u)+t^\ell u^{{\text {alt}}(p)}))} \right| _{u=u_i(t)}. \end{aligned}$$
The denominator of this expression is
$$\begin{aligned} \left. -tuP'(u)R(t,u)+u(1-tP(u))R_u(t,u)+{\text {alt}}(p) t^\ell u^{{\text {alt}}(p)}\right| _{u=u_i(t)}. \end{aligned}$$
(21)
Next, we differentiate \(K(t, u_i)=0\) with respect to t and obtain an expression for \(P'(u_i(t))\). When we substitute it into (21), we obtain (7). \(\square \)
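To make the residue computation concrete, here is a small sympy sketch (again for the toy pattern p = UU over the Dyck steps \(\{+1,-1\}\), where the kernel \(K = 1-t/u-t^2\) has a single small root); the resulting B(t) is checked against brute-force enumeration of bridges.

```python
import sympy as sp
from itertools import product

t, u = sp.symbols('t u')

# Toy instance p = UU over {+1,-1}: R = 1 + t*u, K = 1 - t/u - t**2.
R = 1 + t*u
K = 1 - t/u - t**2

# The unique small root u1(t) of K (it behaves like t as t -> 0).
u1 = sp.solve(K, u)[0]

# Residue of W(t,u)/u at u = u1, via the rational-function formula
# Res = R / (d/du (u*K)) evaluated at the simple root.
B = sp.simplify((R / sp.diff(sp.expand(u*K), u)).subs(u, u1))

# Brute-force check: walks avoiding the factor UU that end at altitude 0.
def bridges(n):
    return sum(1 for w in product((1, -1), repeat=n)
               if sum(w) == 0
               and not any(w[i:i+2] == (1, 1) for i in range(n - 1)))

Bser = sp.series(B, t, 0, 7).removeO()
for n in range(7):
    assert Bser.coeff(t, n) == bridges(n)
```

For this toy pattern B(t) = \(1/(1-t^2)^2\): there are \(n/2 + 1\) such bridges of even length n, and none of odd length.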
We now consider the nonnegativity constraint for the paths.
Proof
(Generating function of meanders) We have the following vectorial functional equation:
$$\begin{aligned} (M_1\ \cdots \ M_{\ell }) = (1 \ 0 \ \cdots \ 0)+ t \ (M_1 \ \cdots \ M_{\ell }) \ A - t \, \{u^{<0}\} \big ( (M_1 \ \cdots \ M_{\ell }) \, A \big ), \end{aligned}$$
or, equivalently,
$$\begin{aligned} \displaystyle {(M_1\ \cdots \ M_{\ell })(I-tA) = (1 \ 0 \ \cdots \ 0) - t \, \{u^{<0}\} \big ( (M_1 \ \cdots \ M_{\ell }) \, A \big ), } \end{aligned}$$
(22)
where \(\{u^{<0}\}\) denotes the operator extracting all the terms in which the power of u is negative.
The right-hand side of (22) is a vector whose components are power series in t and Laurent polynomials in u (of lowest degree \(\ge -c\)). For \(\alpha =1, \ldots , \ell \), denote the \(\alpha \)-th component of this vector by \(F_\alpha =F_\alpha (t,u)\) (the letter F can be seen as a mnemonic for “forbidden”, as these components correspond to the forbidden transitions that would lead to a negative exponent of u). In summary, one has
$$\begin{aligned} (M_1 \ \cdots \ M_{\ell })(I-tA) = (F_1 \ \cdots \ F_{\ell }). \end{aligned}$$
(23)
We multiply this from the right by \(\displaystyle {(I-tA)^{-1}\vec {{\mathbf {1}}}= \frac{(\mathrm {adj}(I-tA))\vec {{\mathbf {1}}}}{\det (I-tA)}}\). At this point, we denote \(\vec {{\mathbf {v}}}= \vec {{\mathbf {v}}}(t,u):= (\mathrm {adj}(I-tA))\vec {{\mathbf {1}}}\), where \(\vec {{\mathbf {1}}}\) is the column vector \((1 \ 1 \ \cdots \ 1)^{\top }\). This vector \(\vec {{\mathbf {v}}}\) is the autocorrelation vector we encountered in Proposition 4.2. As a direct consequence of its definition, one has
$$\begin{aligned} M(t,u)= \frac{(F_1 \ \cdots \ F_{\ell })\vec {{\mathbf {v}}}}{K(t,u)}. \end{aligned}$$
(24)
The following step is the essential part of the vectorial kernel method. Let \(u_i=u_i(t)\) be any small root of \(K(t,u)=\det (I-tA)\). We plug \(u=u_i(t)\) into (23). The matrix \((I-tA)|_{u=u_i}\) is then singular. At this point we observe that \(\vec {{\mathbf {v}}}|_{u=u_i}\) is an eigenvector of \((I-tA)|_{u=u_i}\) belonging to the eigenvalue \(\lambda =0\). Indeed, the definition \(\vec {{\mathbf {v}}}|_{u=u_i} = (\mathrm {adj}((I-tA)|_{u=u_i}))\vec {{\mathbf {1}}}\) gives \((I-tA)|_{u=u_i} \vec {{\mathbf {v}}}|_{u=u_i} = \det ((I-tA)|_{u=u_i})\vec {{\mathbf {1}}}\), which implies \((I-tA)|_{u=u_i} \vec {{\mathbf {v}}}|_{u=u_i} = 0\). Moreover, due to the structure of A, we have \(\mathrm {rank}((I-tA)|_{u=u_i} )=\ell -1\); therefore, the eigenspace of \(\lambda =0\) has dimension 1, and \(\vec {{\mathbf {v}}}|_{u=u_i}\) is the unique (up to scaling) eigenvector of \((I-tA)|_{u=u_i}\) that belongs to \(\lambda =0\).
Thus, if we multiply (23) by \(\vec {{\mathbf {v}}}|_{u=u_i}\), the left-hand side vanishes. In other words, the equation \((F_1(t,u), \ \ldots , \ F_{\ell }(t,u)) \, \vec {{\mathbf {v}}}(t, u)=0\) is satisfied by every small root \(u_i(t)\) of K(t, u).
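The eigenvector property can be verified directly on a small instance. The sketch below (toy pattern p = UU over the Dyck steps \(\{+1,-1\}\), with the 2×2 transfer matrix written out by hand) checks that \((I-tA)\vec {{\mathbf {v}}} = \det (I-tA)\,\vec {{\mathbf {1}}}\) holds identically, so \(\vec {{\mathbf {v}}}\) becomes a 0-eigenvector at the small root.

```python
import sympy as sp

t, u = sp.symbols('t u')

# Pattern automaton of p = UU over {+1,-1}, as in the proof for walks.
A = sp.Matrix([[1/u, u], [1/u, 0]])
M0 = sp.eye(2) - t*A
v = M0.adjugate() * sp.ones(2, 1)   # the vector v = adj(I - tA) * 1

# The adjugate identity gives (I - tA) v = det(I - tA) * 1 identically in t, u.
assert sp.simplify(M0 * v - M0.det() * sp.ones(2, 1)) == sp.zeros(2, 1)

# At the small root u1(t) of the kernel det(I - tA), the right-hand side
# vanishes, so v|_{u=u1} spans the 0-eigenspace.
u1 = sp.solve(M0.det(), u)[0]
assert sp.simplify(M0.subs(u, u1) * v.subs(u, u1)) == sp.zeros(2, 1)
```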
Let
$$\begin{aligned} \Phi (t,u) := u^e\, (F_1(t,u), \ \ldots , \ F_{\ell }(t,u)) \, \vec {{\mathbf {v}}}(t, u)\text{. } \end{aligned}$$
(25)
Note that \(\Phi \) is a Laurent polynomial in u, as the \(F_i\)’s and \(\vec {{\mathbf {v}}}\) are by construction Laurent polynomials in u. What is more, since \(\Phi (t,u) = u^e M(t,u) K(t,u)\) by (24) and since M(t, u) is a power series in u, \(\Phi (t,u)\) has no negative powers of u and is thus a polynomial. Now, we know that every small root \(u_i(t)\) of K(t, u) is a root of the polynomial equation
$$\begin{aligned} \Phi (t,u) = 0. \end{aligned}$$
(26)
It follows that
$$\begin{aligned} \Phi (t,u) = G(t,u) \, \displaystyle {\prod _{i=1}^e (u-u_i(t))} \end{aligned}$$
(27)
for some G(t, u) which is a formal power series in t and a polynomial in u. We substitute this into (24), and obtain the claimed formula
$$\begin{aligned} M(t,u) = \frac{G(t,u)}{u^eK(t, u)} \, \displaystyle {\prod _{i=1}^e (u-u_i(t))}\text{. } \end{aligned}$$
\(\square \)
If the degree in u of \(\Phi (t,u)\) is precisely e, the formula simplifies, as G is then just the leading coefficient (in u) of \(\Phi (t,u)\). As we shall show now, this happens if p is a quasimeander (as introduced in Definition 3.3).
Proof of Theorem 3.4
(Generating function of meanders, when p is a quasimeander) First, we notice that all the powers of u in R(t, u) are non-negative, and \({\text {alt}}(p) \ge -c\). Moreover, if \({\text {alt}}(p)=-c\), the cancellation of terms with \(u^{-c}\) in K(t, u) is not possible. Therefore, the lowest power of u in K(t, u) is \(-c\), and thus we have \(e=c\) by Proposition 4.4.
Let us return to (22):
$$\begin{aligned} (M_1\ \cdots \ M_{\ell })(I-tA) = (1 \ 0 \ \cdots \ 0) - t \, \{u^{<0}\} \big ( (M_1 \ \cdots \ M_{\ell }) \, A \big ). \end{aligned}$$
(28)
We claim that all the components of the right-hand side, except for the first component, are 0. Indeed, if the path arrives at state \(X_i\) with \(i>1\), this means that it has accumulated a non-empty prefix of p. And since p is a quasimeander, the walk will always remain (weakly) above the x-axis while it accumulates this non-empty prefix.
Therefore, we have \(\Phi = u^c\,(F_1 \ 0 \ \cdots \ 0) \, \vec {{\mathbf {v}}}\), and thus by (15), \(\Phi = u^c F_1 R\). Since the constant term of \(F_1\) is 1, \(u^cF_1\) is a monic polynomial in u. Therefore, we have
$$\begin{aligned} \Phi = R \cdot \displaystyle {\prod _{i=1}^c (u-u_i)}\text{. } \end{aligned}$$
This yields
$$\begin{aligned} M(t,u) = \frac{R(t,u)}{u^c \, K(t,u)} \prod _{i=1}^c \bigl (u-u_i(t)\bigr ) \end{aligned}$$
(29)
as claimed. \(\square \)
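Formula (29) can be sanity-checked by enumeration. In the sketch below (an illustration with all ingredients computed by hand), p = UU over the Dyck steps \(\{+1,-1\}\) is a quasimeander with \(c = e = 1\), autocorrelation polynomial \(R = 1+tu\) and kernel \(K = 1-t/u-t^2\).

```python
import sympy as sp
from itertools import product

t, u = sp.symbols('t u')

# p = UU over {+1,-1}: c = 1, R = 1 + t*u, K = 1 - t/u - t**2, one small root.
R = 1 + t*u
K = 1 - t/u - t**2
u1 = sp.solve(K, u)[0]
M = sp.cancel(R / (u * K) * (u - u1))   # formula (29) with c = e = 1

def meanders(n):
    """Walks of length n over {+1,-1} avoiding the factor UU, staying >= 0."""
    count = 0
    for w in product((1, -1), repeat=n):
        if any(w[i:i+2] == (1, 1) for i in range(n - 1)):
            continue
        h = 0
        for s in w:
            h += s
            if h < 0:
                break
        else:
            count += 1
    return count

Mser = sp.series(M.subs(u, 1), t, 0, 7).removeO()
for n in range(7):
    assert Mser.coeff(t, n) == meanders(n)
```

For this pattern M(t, u) collapses to \((1+tu)/(1-t^2)\): a UU-avoiding meander never reaches altitude 2, so only the coefficients of \(u^0\) and \(u^1\) are non-zero.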
Let us now simplify this formula for excursions, i.e., when \(u=0\).
Proof of Theorem 3.4
(Generating function of excursions, when p is a quasimeander) The generating function of excursions is given by \(E(t) = M(t,0)\). If \({\text {alt}}(p) >-c\), we have, as u tends to 0, \(K(t,u)\sim -tu^{-c}R(t,0)\) from (16). If \({\text {alt}}(p) = -c\), then we have \(R(t,u) = 1\) and \(K(t,u)\sim -tu^{-c} + t^\ell u^{-c}\). In both cases, (11) follows. \(\square \)
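For the same toy pattern p = UU over the Dyck steps \(\{+1,-1\}\) (for which \({\text {alt}}(p) = 2 > -c\)), the limit u → 0 of the meander formula can be computed and compared with direct enumeration of excursions; here only the sawtooth paths \((UD)^{n/2}\) survive, so \(E(t) = 1/(1-t^2)\).

```python
import sympy as sp
from itertools import product

t, u = sp.symbols('t u')

# p = UU over {+1,-1}: meander formula (29), then the limit u -> 0.
R = 1 + t*u
K = 1 - t/u - t**2
u1 = sp.solve(K, u)[0]
M = R / (u * K) * (u - u1)
E = sp.simplify(sp.limit(M, u, 0))      # E(t) = 1/(1 - t**2)

def excursions(n):
    """Meanders of length n over {+1,-1} avoiding UU, ending at altitude 0."""
    count = 0
    for w in product((1, -1), repeat=n):
        if any(w[i:i+2] == (1, 1) for i in range(n - 1)):
            continue
        h, ok = 0, True
        for s in w:
            h += s
            if h < 0:
                ok = False
                break
        if ok and h == 0:
            count += 1
    return count

Eser = sp.series(E, t, 0, 8).removeO()
for n in range(8):
    assert Eser.coeff(t, n) == excursions(n)
```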
We now handle the next interesting class of patterns leading to generating functions with a nice closed form: the case of reversed meanders. Recall that a reversed meander is a lattice path whose terminal point has a (strictly) smaller y-coordinate than all other points. Moreover, we define a positive meander to be a meander that never returns to the x-axis.
Proof of Theorem 3.5
(Generating function of meanders, when p is a reversed meander) If p is a reversed meander, then all the terms of R(t, u), except for the monomial 1, contain negative powers of u, and we also have \({\text {alt}}(p)<0\). Therefore, the highest power of u in K is d, and (by Proposition 4.4) the number of large roots of \(K(t, u)=0\) is d: we denote them as above by \(v_1, \ldots , v_d\). The number of small roots can in general be larger than c: as usual, we denote it by e, and the roots themselves by \(u_1, \ldots , u_e\).
We consider the following generalization of the Wiener–Hopf factorization for lattice paths. If we split the walk w at its first and at its last left-to-right minimum, we obtain a decomposition \(w=m^-.e.m^+\), where \(m^-\) is a reversed meander, e is a translate of an excursion, and \(m^+\) is a translate of a positive meander. One also has the decomposition \(w=m^-.m\), where \(m=e.m^+\) is a meander. Notice that these decompositions are unique (Fig. 4).
Moreover, since p is a reversed meander, its occurrence cannot overlap the junction of two factors. That is, m is p-avoiding if and only if both of its factors are p-avoiding, and w is p-avoiding if and only if its three factors are p-avoiding. Therefore, we have
$$\begin{aligned} M(t,u)=E(t)M^+(t,u), \end{aligned}$$
(30)
and
$$\begin{aligned} W(t,u)=M^-(t,u) M(t,u) = M^-(t,u)E(t)M^+(t,u), \end{aligned}$$
where W(t, u), \(M^-(t,u)\), E(t), \(M^+(t,u)\) are the generating functions of p-avoiding walks, reversed meanders, excursions, positive meanders (respectively). This in particular implies
$$\begin{aligned} M(t,u)= \frac{W(t,u)}{M^-(t,u)}. \end{aligned}$$
(31)
By Theorem 3.1, we have \(W(t,u) = R(t,u)/K(t,u)\). In order to find \(M^-(t,u)\), we use a time reversal argument. Namely, we notice that a path is a reversed meander if and only if its horizontal reflection (upon translating the initial point to the origin) is a positive meander. The precise statement is as follows. Let \(- {\mathcal {S}} =\{-s:s \in {\mathcal {S}} \}\); and for the pattern \(p=[a_1, a_2, \ldots , a_\ell ]\), let \(\overleftarrow{p}=[-a_\ell , \ldots , -a_2, -a_1]\). Then there is a straightforward bijection between p-avoiding reversed meanders with steps from \({\mathcal {S}}\) and \(\overleftarrow{p}\)-avoiding positive meanders with steps from \(-{\mathcal {S}}\) which preserves the length and reflects the altitude. Therefore, we have
$$\begin{aligned} M^-(t,u) = \overleftarrow{M}^{+}(t,1/u), \end{aligned}$$
(32)
where the arrow means that it is the generating function for \(\overleftarrow{p}\)-avoiding paths (positive meanders in this equation) with the step set \(-{\mathcal {S}}\) (rather than p-avoiding with the step set \({\mathcal {S}}\)).
Refer to the \(m=e.m^+\) decomposition above. As we noticed, if the pattern p is a reversed meander, then m is p-avoiding if and only if both e and \(m^+\) are p-avoiding. The same is true if p is a positive meander. Therefore, similarly to Formula (30), \(M(t,u)=E(t)M^+(t,u)\), we also have \(\overleftarrow{M}(t,u)= \overleftarrow{E}(t) \overleftarrow{M}^+(t,u)\). Combined with (32), this implies
$$\begin{aligned} M^-(t,u) = \overleftarrow{M}^+(t,1/u) = \frac{\overleftarrow{M}(t,1/u)}{\overleftarrow{E}(t)}. \end{aligned}$$
(33)
Since \(\overleftarrow{p}\) is a meander, (9) holds for \(\overleftarrow{M}(t,u)\) and we have
$$\begin{aligned} \overleftarrow{M}(t,1/u)= & {} \frac{u^d \, \overleftarrow{R}(t,1/u)}{\overleftarrow{K}(t,1/u)} \prod _{j=1}^d\left( \frac{1}{u}- \frac{1}{v_j(t)} \right) \nonumber \\= & {} \frac{u^d \, R(t,u)}{K(t,u)} \prod _{j=1}^d\left( \frac{1}{u}- \frac{1}{v_j(t)} \right) . \end{aligned}$$
(34)
These identities are justified as follows. The equalities \(\overleftarrow{R}(t,1/u)=R(t,u)\) and \(\overleftarrow{K}(t,1/u)=K(t,u)\) can be derived directly, but notice also that we have \(W(t,u)=R(t,u)/K(t,u)\) and \(\overleftarrow{W}(t,1/u)=\overleftarrow{R}(t,1/u)/\overleftarrow{K}(t,1/u)\), and \(W(t,u)=\overleftarrow{W}(t,1/u)\) from the bijective horizontal reflection. Finally, \(\overleftarrow{K}(t,u)\) has d small roots and e large roots: if \(u_i(t)\) is a small root of K(t, u), then \(1/u_i(t)\) is a large root of \(\overleftarrow{K}(t,u)\); and if \(v_j(t)\) is a large root of K(t, u), then \(1/v_j(t)\) is a small root of \(\overleftarrow{K}(t,u)\).
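The reflection identities can be checked mechanically on a small instance: take p = DD over the symmetric Dyck step set \(\{+1,-1\}\) (so \(-{\mathcal {S}} = {\mathcal {S}}\) here, which keeps the example small), whose reflected pattern is \(\overleftarrow{p} = UU\); the autocorrelation polynomials and kernels below are written out by hand.

```python
import sympy as sp

t, u = sp.symbols('t u')

P = u + 1/u                       # step polynomial of S = {+1,-1} (= that of -S)

# p = DD: borders DD and D, so R(t,u) = 1 + t/u and alt(p) = -2.
R_p = 1 + t/u
K_p = (1 - t*P)*R_p + t**2 * u**(-2)

# Reflected pattern <-p = UU: R(t,u) = 1 + t*u and alt = +2.
R_q = 1 + t*u
K_q = (1 - t*P)*R_q + t**2 * u**2

# The identities used in the proof: R_q(t,1/u) = R_p(t,u), K_q(t,1/u) = K_p(t,u).
assert sp.simplify(R_q.subs(u, 1/u) - R_p) == 0
assert sp.simplify(K_q.subs(u, 1/u) - K_p) == 0
```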
Similarly, Eq. (11) holds for \(\overleftarrow{E}(t)\), and we have
$$\begin{aligned} \overleftarrow{E}(t) = \frac{(-1)^{d+1}}{t} \prod _{j=1}^d \frac{1}{v_j(t)}. \end{aligned}$$
(35)
Notice that the leading term of the polynomial \(u^eK(t,u)\) is \(-tu^{d+e}\) and, therefore, one has
$$\begin{aligned} u^eK(t,u) = -t \prod _{i=1}^e (u-u_i(t)) \prod _{j=1}^d (u-v_j(t)). \end{aligned}$$
(36)
We now substitute (34) and (35) into (33) and use (36) to obtain
$$\begin{aligned} M^-(t,u)&= \frac{(-1)^{d+1} \, t \, u^d \, R(t,u)}{K(t,u)} \prod _{j=1}^d \left( v_j \left( \frac{1}{u}- \frac{1}{v_j(t)} \right) \right) \end{aligned}$$
(37)
$$\begin{aligned}&=\frac{-t \, R(t,u)}{K(t,u)} \prod _{j=1}^d (u-v_j(t)) = \frac{u^e \, R(t,u)}{{\prod _{i=1}^e (u-u_i(t))}}. \end{aligned}$$
(38)
Finally, we substitute this into (31) and obtain
$$\begin{aligned} M(t,u)= & {} \frac{W(t,u)}{M^-(t,u)}= \frac{R(t,u)}{K(t,u)} \, \frac{1}{u^e \, R(t,u)} \, \displaystyle {\prod _{i=1}^e (u-u_i(t))} \nonumber \\ {}= & {} \frac{1}{u^e \, K(t,u)} \, \displaystyle {\prod _{i=1}^e (u-u_i(t))}. \end{aligned}$$
(39)
\(\square \)
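As an illustration of (39) (a sketch with hand-computed ingredients, not part of the proof), take p = DD over the Dyck steps \(\{+1,-1\}\): it is a reversed meander, its kernel simplifies to \(K = 1-tu-t^2\) with no negative powers of u, so e = 0 and (39) reduces to M = 1/K; the factorization (36) with the single large root can be checked at the same time.

```python
import sympy as sp
from itertools import product

t, u = sp.symbols('t u')

# p = DD over {+1,-1}: R = 1 + t/u, alt(p) = -2, and the kernel simplifies:
K = sp.simplify((1 - t*(u + 1/u))*(1 + t/u) + t**2/u**2)   # = 1 - t*u - t**2
# No negative powers of u remain, so e = 0, d = 1, and (39) gives M = 1/K.
M = 1 / K

# Factorization (36): u^e K = -t (u - v1) with the single large root v1.
v1 = sp.solve(K, u)[0]
assert sp.simplify(K + t*(u - v1)) == 0

def meanders(n):
    """Walks over {+1,-1} avoiding the factor DD and staying >= 0."""
    count = 0
    for w in product((1, -1), repeat=n):
        if any(w[i:i+2] == (-1, -1) for i in range(n - 1)):
            continue
        h = 0
        for s in w:
            h += s
            if h < 0:
                break
        else:
            count += 1
    return count

Mser = sp.series(M.subs(u, 1), t, 0, 7).removeO()
for n in range(7):
    assert Mser.coeff(t, n) == meanders(n)
```

Here \(M(t,u) = 1/(1-tu-t^2)\): a DD-avoiding meander decomposes uniquely into blocks U and UD, which explains the Fibonacci-type counts at u = 1.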
Remark 5.1
It is interesting to notice that although the formula for M(t, u) when p is a quasimeander [as given in (29)] is similar to the formula for M(t, u) when p is a reversed meander [as given in (39)], the latter does not contain the factor R(t, u), even if p has a non-trivial autocorrelation.
Remark 5.2
It is also worth mentioning that if only the terminal point of the pattern p has negative y-coordinate, then p is both a quasimeander and a reversed meander, and \(R=1\). Therefore, we have \({M(t,u)= \frac{1}{u^e \, K(t,u)} \, \prod _{i=1}^e (u-u_i(t))}\) by both Theorems 3.4 and 3.5.
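For instance (a worked illustration of the remark, with all ingredients computed by hand), p = UDD over the Dyck steps \(\{+1,-1\}\) has only its terminal point below the x-axis, so R = 1; the kernel is \(K = 1-t(u+1/u)+t^3/u\) with the single small root \(u_1 = t\), and the common formula of both theorems can be checked by enumeration.

```python
import sympy as sp
from itertools import product

t, u = sp.symbols('t u')

# p = UDD over {+1,-1}: no non-trivial border, so R = 1; alt(p) = -1, l = 3.
K = 1 - t*(u + 1/u) + t**3/u

# u*K = u - t*u**2 - t + t**3 vanishes at u = t: this is the single small root.
u1 = t
assert sp.expand(u*K).subs(u, u1) == 0

# e = 1, and both Theorems 3.4 and 3.5 give the same meander formula:
M = sp.cancel((u - u1) / (u * K))

def meanders(n):
    """Walks over {+1,-1} avoiding the factor UDD and staying >= 0."""
    count = 0
    for w in product((1, -1), repeat=n):
        if any(w[i:i+3] == (1, -1, -1) for i in range(n - 2)):
            continue
        h = 0
        for s in w:
            h += s
            if h < 0:
                break
        else:
            count += 1
    return count

Mser = sp.series(M.subs(u, 1), t, 0, 7).removeO()
for n in range(7):
    assert Mser.coeff(t, n) == meanders(n)
```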
Proof of Theorem 3.5
(Generating function of excursions, when p is a reversed meander) Excursions are given by M(t, 0), so we need to compute \(D(t):=[u^0] u^e K(t,u)\). To this end, first note that as p is a reversed meander (see Definition 3.3), one has the following facts.
In all the terms of R(t, u), the powers of u are non-positive.
Moreover, if \(t^{m_1}/u^{\gamma _1}\) and \(t^{m_2}/u^{\gamma _2}\) are two distinct terms in R(t, u) such that \(0 \le m_1 < m_2\), then we have \(0 \le \gamma _1 < \gamma _2\).
Therefore, we can order the terms of R(t, u) according to the powers of t, and write \(u^e K(t, u)\) as follows:
$$\begin{aligned} u^e K(t,u)= u^e \Bigg ( \bigg ( 1-t \Big ( \frac{1}{u^c} + \cdots + u^d \Big ) \bigg ) \bigg ( 1+\cdots + \frac{t^{m'}}{u^{\gamma '}}+\frac{t^{m}}{u^{\gamma }} \bigg ) + t^{\ell } u^{{\text {alt}}(p)} \Bigg ), \end{aligned}$$
where \(\frac{t^{m}}{u^{\gamma }}\) (the last term in R(t, u)) corresponds to the longest complement of a presuffix. Now, we have the following cases:
Case 1: \(c+\gamma > -{\text {alt}}(p)\). Then \(e=c+\gamma \) and we have \(D(t) = -t^{m+1}\).
Case 2: \(c+\gamma < -{\text {alt}}(p)\). Then \(e=-{\text {alt}}(p)\) and we have \(D(t)= t^{\ell }\).
Case 3: \(c+\gamma = -{\text {alt}}(p)\) and \(\ell \ne m+1\). Then \(e=c+\gamma = -{\text {alt}}(p)\) and \(D(t) = t^{\ell }-t^{m+1}\).
Case 4: \(c+\gamma = -{\text {alt}}(p)\) and \(\ell = m+1\). If \(\ell \ge 2\), then \(m\ge 1\), and therefore \(R(t,u) \ne 1\). Then \(e=c+\gamma '\) and \(D(t) = -t^{m'+1}\). As usual, we ignore the degenerate case \(\ell =1\).
In summary, we get the claim we wanted to prove, namely
$$\begin{aligned} E(t)=M(t,0)= \displaystyle {\frac{(-1)^{e}}{D(t)} \prod _{i=1}^e u_i(t)}\text{, } \end{aligned}$$
(40)
where D(t) is either some power of t, or a difference of two powers of t. \(\square \)
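Formula (40) can be illustrated on the pattern p = UDD over the Dyck steps \(\{+1,-1\}\) (a sketch with hand-computed data): here R = 1 (its only term is \(t^0/u^0\), so \(m = \gamma = 0\)), c = 1 and \({\text {alt}}(p) = -1\), which is Case 3 with \(D(t) = t^3 - t\); the sketch below confirms \(E(t) = 1/(1-t^2)\) against brute force.

```python
import sympy as sp
from itertools import product

t, u = sp.symbols('t u')

# p = UDD over {+1,-1}: R = 1 (so m = gamma = 0), c = 1, alt(p) = -1, l = 3;
# hence c + gamma = -alt(p) and l != m + 1, which is Case 3.
K = 1 - t*(u + 1/u) + t**3/u
e = 1
D = sp.expand(u**e * K).coeff(u, 0)    # D(t) = [u^0] (u^e K)
assert D == t**3 - t                   # = t^l - t^(m+1), as in Case 3

# Formula (40) with the single small root u1 = t:
u1 = t
E = sp.cancel((-1)**e * u1 / D)        # E(t) = 1/(1 - t**2)

def excursions(n):
    """Meanders over {+1,-1} avoiding the factor UDD, ending at altitude 0."""
    count = 0
    for w in product((1, -1), repeat=n):
        if any(w[i:i+3] == (1, -1, -1) for i in range(n - 2)):
            continue
        h, ok = 0, True
        for s in w:
            h += s
            if h < 0:
                ok = False
                break
        if ok and h == 0:
            count += 1
    return count

Eser = sp.series(E, t, 0, 8).removeO()
for n in range(8):
    assert Eser.coeff(t, n) == excursions(n)
```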
Now that we have proven these closed forms for the generating functions, we can turn to the asymptotics of their coefficients.