The strong Lefschetz property of monomial complete intersections in two variables

In this paper we classify the monomial complete intersections, in two variables and of positive characteristic, which have the strong Lefschetz property. Together with known results, this gives a complete classification of the monomial complete intersections with the strong Lefschetz property.

The Hilbert function of a graded algebra $A = \bigoplus_{i \ge 0} A_i$ with residue field $K$ is the function $\mathrm{HF}_A : \mathbb{Z}_{\ge 0} \to \mathbb{Z}_{\ge 0}$ defined by $\mathrm{HF}_A(i) = \dim_K A_i$, i.e. the vector space dimension of $A_i$ over $K$. The Hilbert series of $A$, denoted $\mathrm{HS}_A$, is the generating function of the sequence $\mathrm{HF}_A(i)$, that is, $\mathrm{HS}_A(t) = \sum_{i \ge 0} \mathrm{HF}_A(i) t^i$.
Let now $A$ be a monomial complete intersection, $A = K[x_1, \dots, x_n]/(x_1^{d_1}, \dots, x_n^{d_n})$, for some positive integers $d_1, \dots, d_n$. Let $t = \sum_{i=1}^n (d_i - 1)$. This is the highest possible degree of a monomial in $A$, and hence $\mathrm{HF}_A(i) = 0$ when $i > t$. It can also be seen that the Hilbert function is symmetric about $t/2$, and that $\mathrm{HF}_A(i) \le \mathrm{HF}_A(i+d)$ when $i \le (t-d)/2$. For a multiplication map to have maximal rank in every degree in $A$, it must then be injective up to some degree, and surjective in the higher degrees. It can be proved that the injectivity in this case implies the surjectivity; see e.g. [9, Proposition 2.6].
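As a small computational aside (not part of the paper; the function name is ours), the Hilbert function of a monomial complete intersection is the coefficient sequence of $\prod_{i=1}^n (1 + t + \cdots + t^{d_i - 1})$, which makes the symmetry about $t/2$ easy to verify numerically:

```python
# Sketch: Hilbert function of K[x_1..x_n]/(x_1^d_1, ..., x_n^d_n) as the
# coefficient list of prod_i (1 + t + ... + t^(d_i - 1)).
def hilbert_function(ds):
    """Return [HF(0), HF(1), ..., HF(t)] with t = sum(d_i - 1)."""
    hf = [1]
    for d in ds:
        factor = [1] * d                   # 1 + t + ... + t^(d-1)
        new = [0] * (len(hf) + d - 1)
        for i, a in enumerate(hf):
            for j, b in enumerate(factor):
                new[i + j] += a * b
        hf = new
    return hf

hf = hilbert_function([3, 4])              # A = K[x, y]/(x^3, y^4), t = 5
print(hf)                                  # [1, 2, 3, 3, 2, 1]
assert hf == hf[::-1]                      # symmetric about t/2
```

The symmetry check at the end is exactly the statement $\mathrm{HF}_A(i) = \mathrm{HF}_A(t - i)$ mentioned above.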
In other words, multiplication by a form $f$ of degree $d$ has maximal rank in every degree if all homogeneous zero divisors of $f$ are of degree greater than $(t-d)/2$. Another interesting fact is that if we consider powers $\ell^d$ of a linear form $\ell$, and $t - d$ is even, then multiplication by $\ell^{d+1}$ has maximal rank in every degree if multiplication by $\ell^d$ does. This result will be important for the classification of algebras with the SLP when $n = 2$.

Proposition 2.4 Let $A = K[x_1, \dots, x_n]/(x_1^{d_1}, \dots, x_n^{d_n})$ and $t = \sum_{i=1}^n (d_i - 1)$. Let $\ell \in A$ be a linear form, and $d$ a positive integer such that $t - d$ is even. If the maps $\cdot \ell^d : A_i \to A_{i+d}$ have maximal rank for all $i \ge 0$, then so do the maps $\cdot \ell^{d+1} : A_i \to A_{i+d+1}$.
Proof Assume that $\cdot \ell^d : A_i \to A_{i+d}$ has maximal rank for all $i \ge 0$. By Proposition 2.3 all homogeneous zero divisors of $\ell^d$ are of degree greater than $(t-d)/2$. Suppose that there is a homogeneous element $f$ such that $\ell^{d+1} f = 0$. By Proposition 2.3, we are done if we can prove that $\deg(f) > (t - (d+1))/2 = (t-d)/2 - 1/2$. Since $t - d$ is even, the right hand side is not an integer, and it is enough to prove $\deg(f) > (t-d)/2 - 1$. Consider first the case when $\ell f = 0$. Then $f$ is a zero divisor of $\ell^d$, and it follows that $\deg(f) > (t-d)/2$. Consider instead the case when $\ell f \ne 0$. We know that $\ell^d (\ell f) = \ell^{d+1} f = 0$, that is, $\ell f$ is a homogeneous zero divisor of $\ell^d$. Then $\deg(\ell f) > (t-d)/2$, and hence $\deg(f) > (t-d)/2 - 1$, which finishes the proof.

Proof The "only if"-part follows from Theorem 2.2.
The numbers $t$ and $d$ in Proposition 2.4 are here $t = a + b - 2$ and $d = a + b - 2c$. We see that $t - d = 2c - 2$ is even, so if multiplication by $(x+y)^{a+b-2c}$ has maximal rank in every degree, so does multiplication by $(x+y)^{a+b-2c+1}$. If $c \le 0$ then $A_{i+a+b-2c} = \{0\}$ for all $i \ge 0$, and obviously any map $A_i \to A_{i+a+b-2c}$ is surjective. This is why we only need to consider $c \ge 1$. Without loss of generality, we can assume that $a = \min(a, b)$. To complete the proof we need to show that multiplication by $(x+y)^{a+b-2c}$ has maximal rank in every degree when $c \ge a$. Suppose there is a non-zero homogeneous $f \in A$ such that $(x+y)^{a+b-2c} f = 0$. By Proposition 2.3, multiplication by $(x+y)^{a+b-2c}$ has maximal rank in every degree if we can prove that $\deg(f) > (t-d)/2 = c - 1$. Let $F \in K[x, y]$ be a homogeneous polynomial representing $f$. Then $(x+y)^{a+b-2c} F = x^a g + y^b h$ for some homogeneous polynomials $g$ and $h$. We can not have $h = 0$, because that would imply that $F$ is divisible by $x^a$, and $f = 0$ in $A$. Hence $h \ne 0$, and comparing degrees in $y^b h$ gives $\deg(f) \ge b - (a + b - 2c) = 2c - a \ge c > c - 1$, since $c \ge a$, and we are done.
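The maximal rank conditions discussed in this section can also be verified by brute force for small parameters. The following sketch is our own illustration, not part of the paper (`has_slp` and `rank_mod_p` are names we made up): it builds the matrices of multiplication by $(x+y)^d$ on $\mathrm{GF}(p)[x,y]/(x^a, y^b)$ degree by degree, and checks that each has maximal rank.

```python
# Brute-force SLP check for A = GF(p)[x, y]/(x^a, y^b), testing the
# linear form x + y (which suffices for a monomial complete intersection).
from math import comb

def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p), by Gaussian elimination."""
    rows = [[x % p for x in r] for r in rows]
    rank, ncols = 0, (len(rows[0]) if rows else 0)
    for col in range(ncols):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)          # modular inverse
        rows[rank] = [x * inv % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                c = rows[r][col]
                rows[r] = [(x - c * y) % p for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def has_slp(p, a, b):
    t = a + b - 2
    basis = [[(u, v) for u in range(a) for v in range(b) if u + v == i]
             for i in range(t + 1)]
    for d in range(1, t + 1):
        for i in range(t + 1 - d):
            src, tgt = basis[i], basis[i + d]
            # row for x^u y^v: (x+y)^d x^u y^v = sum_j C(d,j) x^(u+j) y^(v+d-j)
            mat = [[sum(comb(d, j) for j in range(d + 1)
                        if (u + j, v + d - j) == w) for w in tgt]
                   for (u, v) in src]
            if rank_mod_p(mat, p) < min(len(src), len(tgt)):
                return False
    return True

print(has_slp(3, 2, 2), has_slp(3, 3, 3), has_slp(5, 3, 3))  # True False True
```

The three printed values agree with the classification discussed below: in characteristic 3 the algebra with $a = b = 2$ has the SLP while $a = b = 3$ does not, and in characteristic 5 the algebra with $a = b = 3$ has it.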

Classifying the monomial complete intersections with the strong Lefschetz property
A classification of the monomial complete intersections with the SLP, in three or more variables, is given in [9, Theorem 3.8]. Here we give a slightly reformulated version of the theorem, to make the notation similar to that used later in the case of two variables. We will prove that the formulation here is equivalent to that in [9].
Then $A$ has the SLP if and only if one of the following two conditions holds. Proof The difference, compared to [9, Theorem 3.8], is that in [9] the bound for $r_1$ is $0 < r_1 \le p$, and the second condition is stated slightly differently. It is easy to see that both definitions of $r_1$ give the same value $\min(r_1, p - r_1)$. When $d_1 = p$, condition 2 of [9, Theorem 3.8] is not satisfied. Neither is condition 2 in Theorem 3.1, because $\min(r_1, p - r_1) = 0$ and $\sum_{i=2}^n (d_i - 1) \ge n - 1 \ge 2$. When $d_i = p$ for some $i > 1$, condition 2 in Theorem 3.1 is not satisfied. Neither is condition 2 in [9, Theorem 3.8], because then $\sum_{i=2}^n (d_i - 1) \ge p$, while $\min(r_1, p - r_1) < p$. This shows that both formulations agree.
The two conditions in Theorem 3.1 above can be generalized to the case $n = 2$. Next we will prove that in two variables, and in characteristic $p > 2$, the algebra $A$ has the SLP in these two cases, but also in an additional one.
, where $a, b \ge 2$ and $K$ is a field of characteristic $p > 2$. Write $a$ and $b$ in base $p$, that is $a = a_k p^k + \cdots + a_1 p + a_0$ and $b = b_\ell p^\ell + \cdots + b_1 p + b_0$, where $0 \le a_i, b_i < p$, and $a_k, b_\ell \ne 0$. We may assume that $\ell \ge k$. The classification of the algebras with the SLP is divided into three cases.
Notice that there are no restrictions on $b_i$ for $i > k$ in the case $\ell > k$. The theorem will be proved later in this section.
In [5,Theorem 4.9] Cook II proves the special case a = b of Theorem 3.2. Cook II also proves the characteristic two case.
Proof The case $n = 1$ is trivial. The proofs of both [5, Corollary 4.8] and [5, Theorem 4.9] use Theorem 3.5 below. This will also be the key to the proof of Theorem 3.2.
, and integers $u$, $v$, $w$ such that $u + v + w$ is odd. Theorem 3.5 is proved in Sect. 4. We will now prove that Theorem 3.5 can be reformulated as the following proposition.

Proposition 3.6 Let
$A = K[x, y]/(x^a, y^b)$, where $K$ is a field of characteristic $p > 0$. For each integer $i \ge 1$ we can write $a = m_i p^i + r_i$ and $b = n_i p^i + s_i$, where $0 \le r_i, s_i < p^i$. The algebra $A$ has the SLP if and only if the following conditions hold for all $i$.
1. If $m_i > 0$, then $s_i \le r_i + 1$.
2. If $n_i > 0$, then $r_i \le s_i + 1$.
3. If $m_i > 0$ and $n_i > 0$, then $r_i + s_i \ge p^i - 1$.
4. $r_i + s_i \le p^i + 1$.
Proof We shall prove that the conditions above are equivalent to the condition in Theorem 3.5. Let us investigate for which $a$ and $b$ it can happen that $|a - up^i| + |b - vp^i| + |a + b - 2c - wp^i| < p^i$. Write $a = m_i p^i + r_i$ and $b = n_i p^i + s_i$, as in the proposition. Notice that $|a - m_i p^i| = r_i < p^i$ and $|a - (m_i + 1)p^i| = p^i - r_i \le p^i$. For all other values of $u$ we get $|a - up^i| \ge p^i$, and then of course the inequality can not hold. Therefore we only need to consider $u = m_i$ and $u = m_i + 1$. The corresponding is also true for $|b - vp^i|$. This gives us four cases to examine.

I. $u = m_i$ and $v = n_i$
Here $|a - up^i| + |b - vp^i| = r_i + s_i$, and $u + v + w = m_i + n_i + w$, which is to be an odd number. We want to find out what the smallest possible value of $|a + b - 2c - wp^i|$ is. For this purpose we choose the largest $w$ such that $u + v + w$ is odd, and $a + b - wp^i > 0$. After that we choose the value for $c$ that makes $|a + b - 2c - wp^i|$ as small as possible. When $r_i + s_i \le p^i - 2$, the largest $w$ with the required properties is $w = m_i + n_i - 1$. Then $a + b - 2c - wp^i = r_i + s_i + p^i - 2c$, and, provided that $m_i, n_i > 0$, the value of $c$ can be chosen so that the total sum $r_i + s_i + |a + b - 2c - wp^i|$ is smaller than $p^i$. In a similar way we see that this is not possible when $r_i + s_i \ge p^i - 1$, or when $m_i = 0$ or $n_i = 0$. The conclusion, in this case, is that the inequality can be satisfied if and only if $m_i, n_i > 0$ and $r_i + s_i \le p^i - 2$. This corresponds to condition 3 in the proposition.

II. $u = m_i$ and $v = n_i + 1$

Here $|a - up^i| + |b - vp^i| = r_i + p^i - s_i$. We use the same idea as in case I, and choose first $w$, and then $c$, such that $|a + b - 2c - wp^i|$ has the smallest possible value. The best option for $w$ is $w = m_i + n_i$. This gives $a + b - 2c - wp^i = r_i + s_i - 2c$. If $m_i = 0$ the possible choices of $c$ are too restricted for the inequality to be satisfied. If $m_i > 0$ on the other hand, we are allowed to choose $c = s_i - 1$. Then we get $r_i - s_i + 2$ instead. Note that this is a non-positive number exactly when $s_i \ge r_i + 2$, and in that case the total sum is $r_i + p^i - s_i + (s_i - r_i - 2) = p^i - 2 < p^i$. The conclusion, in this case, is that the inequality can be satisfied if and only if $m_i > 0$ and $s_i \ge r_i + 2$. This corresponds to condition 1 in the proposition.
III. $u = m_i + 1$ and $v = n_i$

In the same way as above, we see that this case corresponds to condition 2.

IV. $u = m_i + 1$ and $v = n_i + 1$

Here $|a - up^i| + |b - vp^i| = 2p^i - r_i - s_i$, so for this to be smaller than $p^i$ we must have $r_i + s_i > p^i$. Consider first the case when $r_i + s_i = p^i + 1$. Here the parity forces $w = m_i + n_i - 2d + 1$, for some integer $d$, and one checks that the sum can not be made smaller than $p^i$. When $r_i + s_i \ge p^i + 2$, on the other hand, the inequality can be satisfied, which is condition 4.

Proposition 3.6 will be used later in this section to prove Proposition 3.7, which says something about the structure of an algebra that does not have the SLP. Now we shall use Proposition 3.6, with $p > 2$, to prove Theorem 3.2.

Proof of Theorem 3.2 Let $A = K[x, y]/(x^a, y^b)$,
and suppose throughout this proof that the characteristic of $K$ is greater than 2. Write $a$ and $b$ in base $p$ as $a = a_k p^k + \cdots + a_1 p + a_0$ and $b = b_\ell p^\ell + \cdots + b_1 p + b_0$. We assume that $k \le \ell$. With the notation $a = m_i p^i + r_i$ from Proposition 3.6 we have $r_i = a_{i-1} p^{i-1} + \cdots + a_1 p + a_0$ and $m_i = a_k p^{k-i} + a_{k-1} p^{k-i-1} + \cdots + a_i$, and similarly for $b$.
If $a, b < p$ then $n_i = m_i = 0$ in Proposition 3.6, for all $i$, and the conditions 1, 2 and 3 are trivially satisfied. Since $a + b < 2p$, condition 4 is satisfied for $i > 1$. The only restriction we get comes from condition 4 when $i = 1$, and states that $A$ has the SLP if and only if $a + b \le p + 1$.
If $a < p$ and $b \ge p$ we get $b_0 \ge a_0 - 1$ and $a_0 + b_0 \le p + 1$ from conditions 2 and 4 with $i = 1$. These two inequalities can be written as $a_0 \le \min(b_0, p - b_0) + 1$. In conditions 1 and 3 there is nothing to check, and for $i > 1$ all conditions are satisfied. We get that $A$ has the SLP if and only if $a_0 \le \min(b_0, p - b_0) + 1$.
Assume now that a, b ≥ p. The idea now is to translate the four conditions of Proposition 3.6 into the base p digits of a and b.
Let us first look at $i = 1$ in Proposition 3.6. We know that $m_1, n_1 > 0$, so conditions 1 and 2 give $a_0 - 1 \le b_0 \le a_0 + 1$. Conditions 3 and 4 give $p - 1 \le a_0 + b_0 \le p + 1$. Both these inequalities are satisfied exactly when $a_0 = \frac{p \pm 1}{2}$ and $b_0 = \frac{p \pm 1}{2}$. This is condition (a) in Theorem 3.2. Suppose that this is the case, and move on to $i = 2$. If $k \ge 2$ then $m_2$ and $n_2$ are positive. Conditions 1 and 2 give $|r_2 - s_2| \le 1$, which implies $a_1 = b_1$. For conditions 3 and 4 to be satisfied, $p^2 - 1 \le r_2 + s_2 \le p^2 + 1$ is required. This is true if and only if $a_1 + b_1 = p - 1$. Hence we get $a_1 = b_1 = \frac{p-1}{2}$. We suppose that this is true and continue with $i = 3, \dots, k$. In the same way as above we get $a_2 = \cdots = a_{k-1} = b_2 = \cdots = b_{k-1} = \frac{p-1}{2}$. This is condition (b) in Theorem 3.2.
Suppose that the conditions for $i = 1, 2, \dots, k$ are satisfied, and move on to $i = k + 1$. Now $m_{k+1} = 0$, so in conditions 1 and 3 there is nothing to check. If $\ell > k$ then $n_{k+1} > 0$. In this case condition 2 says $b_k p^k + \cdots + b_1 p + b_0 \ge a_k p^k + \cdots + a_1 p + a_0 - 1$, which holds if and only if $b_k \ge a_k$. Condition 4 says $r_{k+1} + s_{k+1} \le p^{k+1} + 1$, which holds if and only if $a_k + b_k \le p - 1$. This proves (c).
We must also show that there are no further restrictions on $b_j$ for $j > k$, when such $b_j$ exist. Suppose that the four conditions of Proposition 3.6 are satisfied for $i = 1, 2, \dots, k + 1$. We continue by looking at $i = k + 2$. Conditions 1 and 3 are satisfied, since $m_i = 0$. Notice also that $r_{k+2} = r_{k+1} = a$, and $s_{k+2} \ge s_{k+1}$. This means that if condition 2 is satisfied for $i = k + 1$, it is satisfied also for $i = k + 2$. But this is no restriction on $b_{k+1}$, other than $b_{k+1} < p$. The same reasoning works for larger $i$.
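Under our reading of Proposition 3.6 (the exact shape of the four digit conditions in the comments below is our reconstruction, and the function name is hypothetical), the classification admits a short digit-wise test:

```python
# Sketch of the digit-wise SLP test: writing a = m p^i + r and
# b = n p^i + s with 0 <= r, s < p^i, the algebra K[x,y]/(x^a, y^b)
# has the SLP iff for every i >= 1:
#   1. m = 0 or s <= r + 1
#   2. n = 0 or r <= s + 1
#   3. m = 0 or n = 0 or r + s >= p^i - 1
#   4. r + s <= p^i + 1
def slp_by_digits(p, a, b):
    q = p
    while q <= a + b:              # for larger p^i all conditions are void
        m, r = divmod(a, q)
        n, s = divmod(b, q)
        if m > 0 and s > r + 1:
            return False
        if n > 0 and r > s + 1:
            return False
        if m > 0 and n > 0 and r + s < q - 1:
            return False
        if r + s > q + 1:
            return False
        q *= p
    return True

print(slp_by_digits(3, 2, 2), slp_by_digits(3, 2, 3),
      slp_by_digits(3, 3, 3), slp_by_digits(5, 3, 3))  # True False False True
```

The printed values reproduce the small cases worked out in the proof above: in characteristic 3, $(a, b) = (2, 2)$ satisfies $a + b \le p + 1$ while $(2, 3)$ violates condition 2 and $(3, 3)$ violates condition 3; in characteristic 5, $(3, 3)$ satisfies $a + b \le p + 1$.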
The proof in [9] of the fact that an algebra in three or more variables does not have the SLP is carried out by finding a monomial zero divisor of $(x_1 + \cdots + x_n)^m$, for some $m$. We will now see that this can also be done in two variables. This gives an alternative proof of the "only if"-part of Theorem 3.2. Proof For the case $n \ge 3$, see [9].
Assume $n = 2$, and let $\ell = c_1 x_1 + c_2 x_2$ for some $c_1, c_2 \in K$. Recall that $\mathrm{HF}_A(d) \le \mathrm{HF}_A(d + m)$ when $d \le (d_1 + d_2 - 2 - m)/2$. We shall prove that when one of the conditions in Proposition 3.6 fails, we can find a monomial of degree low enough, which is a zero divisor of some power of $\ell$. Write $d_1 = m_i p^i + r_i$ and $d_2 = n_i p^i + s_i$, for some $i$, as in Proposition 3.6, and suppose that condition 1 fails for this $i$. This means that $m_i > 0$ and $r_i \le s_i - 2$. Then $r_i < d_1$, and therefore $x_1^{r_i} \ne 0$. Recall that $\ell^{(m_i + n_i)p^i} = (c_1^{p^i} x_1^{p^i} + c_2^{p^i} x_2^{p^i})^{m_i + n_i}$, and $x_1^{r_i} \ell^{(m_i + n_i)p^i} = 0$, since all the terms in the expansion of the product will be of the form $c x_1^{\alpha} x_2^{\beta}$ where either $\alpha \ge d_1$ or $\beta \ge d_2$. In other words, $x_1^{r_i}$ is a monomial in the kernel of the multiplication map $\cdot \ell^{(m_i + n_i)p^i} : A_{r_i} \to A_{r_i + (m_i + n_i)p^i}$, and since $r_i \le s_i - 2$ gives $r_i \le (d_1 + d_2 - 2 - (m_i + n_i)p^i)/2$, this map does not have maximal rank. If instead condition 2 of Proposition 3.6 fails, the proof is carried out in the same way, but with $x_1^{r_i}$ replaced by $x_2^{s_i}$. Suppose now that condition 3 fails for some $i$. That is, $m_i, n_i > 0$, and $r_i + s_i \le p^i - 2$. Then $x_1^{r_i} x_2^{s_i} \ne 0$. In the expansion of $x_1^{r_i} x_2^{s_i} (c_1^{p^i} x_1^{p^i} + c_2^{p^i} x_2^{p^i})^{m_i + n_i - 1}$ every term is divisible by $x_1^{d_1}$ or by $x_2^{d_2}$, and we see that $\ell^{(m_i + n_i - 1)p^i} x_1^{r_i} x_2^{s_i} = 0$. Also, $r_i + s_i \le p^i - 2$ gives $r_i + s_i \le (d_1 + d_2 - 2 - (m_i + n_i - 1)p^i)/2$, so this map does not have maximal rank either. At last, suppose that condition 4 of Proposition 3.6 fails. Then $r_i + s_i \ge p^i + 2$. This implies that $\ell^{(m_i + n_i + 1)p^i} = (c_1^{p^i} x_1^{p^i} + c_2^{p^i} x_2^{p^i})^{m_i + n_i + 1} = 0$, since all terms in the expansion will be of the form $c x_1^{\alpha} x_2^{\beta}$ where either $\alpha \ge d_1$ or $\beta \ge d_2$. This shows that $1$ is in the kernel of the multiplication map $\cdot \ell^{(m_i + n_i + 1)p^i} : A_0 \to A_{(m_i + n_i + 1)p^i}$. Since $\mathrm{HF}((m_i + n_i + 1)p^i) \ge 1 = \mathrm{HF}(0)$, this completes the proof.
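The expansions used in this proof rest on the fact that, modulo $p$, $(x+y)^{mp^i} = (x^{p^i} + y^{p^i})^m$: the cross binomial coefficients vanish, and $\binom{mp^i}{up^i} \equiv \binom{m}{u} \pmod p$ by Lucas' theorem. A quick numerical check (our own, using `math.comb`):

```python
# Verify the binomial coefficient pattern behind the Frobenius expansion.
from math import comb

p, i, m = 3, 2, 2              # characteristic p, power i, multiplier m
q = p ** i                     # p^i = 9
N = m * q                      # exponent m * p^i = 18
for j in range(N + 1):
    if j % q:                  # cross terms: C(18, j) vanishes mod 3
        assert comb(N, j) % p == 0
    else:                      # surviving terms follow Lucas' theorem
        assert comb(N, j) % p == comb(m, j // q) % p
print("(x+y)^%d = (x^%d + y^%d)^%d holds mod %d" % (N, q, q, m, p))
```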

The syzygy gap
The main purpose of this section is to prove Theorem 3.5. If we require the residue field to be algebraically closed, the theorem follows by combining a theorem by Han [6] with results by Brenner and Kaid in [1] and [2]. Han's result is also proved in a different way by Monsky in [10]. Monsky deals with the syzygy module of three pairwise relatively prime polynomials in two variables, and the so called "syzygy gap", while Brenner and Kaid connect this to the Lefschetz properties. We will go through the results from [10], and give a new proof of the connection to the SLP in the case of monomial complete intersections. The reason to go through the results of [10] is to prove that the residue field does not need to be algebraically closed, but also to give a deeper understanding of Theorem 3.5 and the theory behind it.

Mason-Stothers' Theorem
For a polynomial $f$, let $r(f)$ denote the number of distinct irreducible factors of $f$. Note that $r(fg) \le r(f) + r(g)$, with equality when $f$ and $g$ are relatively prime. Let $f_{x_j}$ denote the formal derivative of $f$ with respect to the variable $x_j$. When in a polynomial ring with just one variable, we write $f'$ for the derivative. Mason-Stothers' theorem is usually formulated over one variable, as follows.

Theorem 4.1 (Mason-Stothers) Let $K$ be a field, and let $f$, $g$ and $h$ be polynomials in $K[x]$ such that
• $f + g + h = 0$,
• $f$, $g$ and $h$ are pairwise relatively prime,
• $f'$, $g'$ and $h'$ are not all zero.
Then $\max(\deg f, \deg g, \deg h) \le r(fgh) - 1$.

An elementary proof can be found in [11]. There is also a version of this theorem for homogeneous polynomials in two variables. For clarity we will prove how it can be deduced from Theorem 4.1.
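As a quick sanity check of Theorem 4.1 (our own example, in characteristic 0; all function names are ours), one can compute $r(f)$ as $\deg f - \deg \gcd(f, f')$, i.e. the number of distinct roots, and verify the bound on the triple $f = x^2$, $g = 1 - 2x$, $h = -(x-1)^2$:

```python
# Numerical illustration of the Mason-Stothers bound over the rationals.
from fractions import Fraction

def deg(f):                      # f: coefficient list, lowest degree first
    return max(i for i, c in enumerate(f) if c) if any(f) else -1

def gcd_poly(f, g):
    """Polynomial gcd over the rationals (up to a scalar), by Euclid."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while any(g):
        while any(f) and deg(f) >= deg(g):
            shift = deg(f) - deg(g)
            lead = f[deg(f)] / g[deg(g)]
            for i, c in enumerate(g):       # subtract lead * x^shift * g
                if c:
                    f[i + shift] -= lead * c
        f, g = g, f
    return f

def radical_degree(f):
    """r(f): number of distinct roots, deg f - deg gcd(f, f')."""
    fp = [i * c for i, c in enumerate(f)][1:]    # formal derivative
    return deg(f) - deg(gcd_poly(f, fp))

def mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

# f + g + h = 0 with f = x^2, g = 1 - 2x, h = -(x - 1)^2, pairwise coprime
f, g, h = [0, 0, 1], [1, -2, 0], [-1, 2, -1]
assert [a + b + c for a, b, c in zip(f, g, h)] == [0, 0, 0]
r = radical_degree(mul(mul(f, g), h))            # distinct roots 0, 1/2, 1
assert max(deg(f), deg(g), deg(h)) <= r - 1      # Mason-Stothers bound
print("max degree:", max(deg(f), deg(g), deg(h)), " r(fgh) - 1:", r - 1)
```

Here $fgh$ has the three distinct roots $0$, $1/2$ and $1$, so $r(fgh) = 3$, and the bound $2 \le 3 - 1$ holds with equality.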

Theorem 4.2 Let $K$ be a field, and let $f$, $g$ and $h$ be homogeneous polynomials of degree $d$ in $K[x, y]$ such that
• $f + g + h = 0$,
• $f$, $g$ and $h$ are pairwise relatively prime,
• $f_x$, $f_y$, $g_x$, $g_y$, $h_x$ and $h_y$ are not all zero.
Then $d \le r(fgh) - 2$.

Proof Let $\bar{K}$ be the splitting field of $f$ over $K$. Over this field $f$ can be factorized into a product of powers $(u_j x + v_j y)^{\alpha_j}$ of linear forms, where the $\alpha_j$, $u_j$ and $v_j$'s are elements in $\bar{K}$. After a possible linear change of variables, we can assume that $y$ divides $f$. Let $\hat{f}(x) = f(x, 1)$, and define $\hat{g}$ and $\hat{h}$ in the same way. Then $r(\hat{f}) = r(f) - 1$, while $r(\hat{g}) = r(g)$ and $r(\hat{h}) = r(h)$. Note also that $\deg(\hat{g}) = d$. By Theorem 4.1 it now follows that $d = \deg(\hat{g}) \le r(\hat{f}\hat{g}\hat{h}) - 1 = r(fgh) - 2$, which we wanted to prove.

The syzygy gap
Let now $R = K[x, y]$, where $K$ is any field. Let $f_1$, $f_2$ and $f_3$ be non-zero, pairwise relatively prime, homogeneous polynomials in $R$, with $d_i = \deg(f_i)$, and let $I = (f_1, f_2, f_3)$. The $R$-module $R/I$ has a free resolution of length 2, by Hilbert's syzygy theorem.
This means that
$$0 \to \ker\varphi \to R(-d_1) \oplus R(-d_2) \oplus R(-d_3) \overset{\varphi}{\to} R \to R/I \to 0, \quad (1)$$
where $\varphi$ is given by the matrix $(f_1 \ f_2 \ f_3)$, is an exact sequence of free modules. We have that (1) is always a free resolution (but not necessarily minimal), and $\ker\varphi$ is generated by two homogeneous elements of degrees, say, $\alpha$ and $\beta$. We have a graded resolution
$$0 \to R(-\alpha) \oplus R(-\beta) \to R(-d_1) \oplus R(-d_2) \oplus R(-d_3) \to R \to R/I \to 0.$$
The syzygy gap of $f_1, f_2, f_3$ is the difference $|\alpha - \beta|$. This is the syzygy gap function introduced in [10]. From the graded resolution we see that the Hilbert series of $R/I$ is
$$\mathrm{HS}_{R/I}(t) = \frac{1 - t^{d_1} - t^{d_2} - t^{d_3} + t^{\alpha} + t^{\beta}}{(1 - t)^2}.$$
We also know that $R/I$ has dimension 0, thus the Hilbert series is a polynomial, say $\mathrm{HS}_{R/I}(t) = p(t)$. Then
$$1 - t^{d_1} - t^{d_2} - t^{d_3} + t^{\alpha} + t^{\beta} = p(t)(1 - t)^2.$$
By taking the derivative of both sides, and substituting $t = 1$, we get $0 = \alpha + \beta - d_1 - d_2 - d_3$, that is, $\alpha + \beta = d_1 + d_2 + d_3$. This is one of the so called Herzog-Kühl equations, see e.g. [4]. From this the lemma below also follows.

Lemma 4.3
Let $f_1$, $f_2$ and $f_3$ be non-zero, pairwise relatively prime homogeneous polynomials in $R$. Then the syzygy gap of $f_1, f_2, f_3$ is congruent to $\deg(f_1) + \deg(f_2) + \deg(f_3)$ modulo 2.

We shall also see some other properties of the syzygy gap function. Let $q$ be a power of the characteristic $p$ of $K$. We let $F$ denote the Frobenius functor on the category of $R$-modules, induced by the endomorphism $a \mapsto a^q$ on $R$. For a review of the Frobenius functor, see e.g. [3]. By [7, Corollary 2.7], $F$ is an exact functor. Now, suppose $\mathrm{Syz}(f_1, f_2, f_3)$ is generated by $(A_1, A_2, A_3)$ and $(B_1, B_2, B_3)$, of degrees $\alpha$ and $\beta$. When we apply $F$ to the resolution we get an exact sequence, from which we can read off that $\mathrm{Syz}(f_1^q, f_2^q, f_3^q)$ is generated in degrees $q\alpha$ and $q\beta$, which we wanted to prove.

Let us now investigate what happens with $\mathrm{Syz}(f_1, f_2, f_3)$ when, for example, $f_1$ is replaced by $\ell f_1$, for some linear form $\ell$. By Lemma 4.3, the syzygy gaps of $f_1, f_2, f_3$ and of $\ell f_1, f_2, f_3$ have different parity, so they can not be equal. If we have a relation $A_1 f_1 + A_2 f_2 + A_3 f_3 = 0$, we also get a relation on $\ell f_1, f_2, f_3$ by multiplying the expression by $\ell$. This means that the two elements that generate $\mathrm{Syz}(\ell f_1, f_2, f_3)$ can have degrees at most $\alpha + 1$ and $\beta + 1$. On the other hand, a relation $A_1 \ell f_1 + A_2 f_2 + A_3 f_3 = 0$ on $\ell f_1, f_2, f_3$ can also be considered a syzygy $(\ell A_1, A_2, A_3)$ on $f_1, f_2, f_3$. Hence, the two generators of $\mathrm{Syz}(\ell f_1, f_2, f_3)$ have degrees at least $\alpha$ and $\beta$. This shows that the syzygy gap must either increase or decrease by 1 when $f_1$ is replaced by $\ell f_1$. We summarize this in a lemma.

We shall look more carefully into two special cases where Lemma 4.5 applies. Let $(A_1, A_2, A_3)$ be the element in $\mathrm{Syz}(f_1, f_2, f_3)$ of the lowest degree $\alpha$. If $\ell \mid A_1$ then $(\ell^{-1} A_1, A_2, A_3)$ is a syzygy of $\ell f_1, f_2, f_3$ of degree $\alpha$. The other generating syzygy can have degree $\beta$ or $\beta + 1$, as we saw above. But since the degrees of the two generators add up to the sum of the degrees of $\ell f_1$, $f_2$ and $f_3$, which is $\alpha + \beta + 1$, the other generator must have degree $\beta + 1$. It follows also from Lemma 4.5 that the syzygy gap increases in this case. If, in addition, $\ell \mid A_2$, it follows from the equality $A_1 f_1 + A_2 f_2 + A_3 f_3 = 0$ that $\ell$ also divides $A_3$. Then we can divide the whole expression by $\ell$, and get a syzygy of $f_1, f_2, f_3$ of degree $\alpha - 1$, contradicting the minimality of $\alpha$. This, together with Theorem 4.2, can now be used to prove the following proposition.

Application of the syzygy gap function to monomial complete intersections
We will now specialize to the case $f_1 = x^{d_1}$, $f_2 = y^{d_2}$, and $f_3 = (x + y)^{d_3}$. This is allowed, since these polynomials are pairwise relatively prime. For an easier notation we introduce a new function $\delta : \mathbb{Z}_+^3 \to \mathbb{Z}_{\ge 0}$, where $\delta(d_1, d_2, d_3)$ denotes the syzygy gap of $x^{d_1}, y^{d_2}, (x + y)^{d_3}$. We will now see how the theory of the syzygy gap connects to the SLP. Proof We know that the syzygy module $\mathrm{Syz}(x^{d_1}, y^{d_2}, (x + y)^{d_3})$ is generated by two homogeneous elements $(A_1, A_2, A_3)$ and $(B_1, B_2, B_3)$ of degrees $\alpha$ and $\beta$. We may assume that $\alpha \le \beta$. Provided that $A_3 \ne 0$, the relation can be formulated as $(x + y)^{d_3} A_3 = 0$ in $S = K[x, y]/(x^{d_1}, y^{d_2})$, and $A_3$ is a homogeneous element of lowest degree with this property. The degree of $A_3$ is $\alpha - d_3$. By Proposition 2.3 multiplication by $(x + y)^{d_3}$ has maximal rank in every degree if and only if $\alpha - d_3 > (d_1 + d_2 - 2 - d_3)/2$. Recall that $\alpha + \beta = d_1 + d_2 + d_3$. This inserted in the above inequality gives, after simplification, $\alpha > \beta - 2$. Since $\alpha \le \beta$, this is exactly the property $\delta(d_1, d_2, d_3) \le 1$. It remains to rule out the case $A_3 = 0$. Since $f_1$ and $f_2$ are relatively prime, $A_3 = 0$ gives $A_1 = c f_2$ and $A_2 = -c f_1$, for some $c \in K$. Then $\alpha = d_1 + d_2$, and since $\alpha + \beta = d_1 + d_2 + d_3$, we get $\beta = d_3$. But $\beta \ge \alpha$ and $d_3 < d_1 + d_2$ yields a contradiction.
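The function $\delta$ can be computed numerically (a sketch of our own; `delta` and `rank_mod_p` are hypothetical names): since $1 - t^{d_1} - t^{d_2} - t^{d_3} + t^{\alpha} + t^{\beta} = \mathrm{HS}_{R/I}(t)(1-t)^2$, the exponents $\alpha$ and $\beta$ can be recovered from the Hilbert function of $R/I$ over $\mathrm{GF}(p)$.

```python
# Sketch: syzygy gap delta(d1, d2, d3) of x^d1, y^d2, (x+y)^d3 over GF(p).
from math import comb

def rank_mod_p(rows, p):
    """Rank of an integer matrix over GF(p), by Gaussian elimination."""
    rows = [[x % p for x in r] for r in rows]
    rank, ncols = 0, (len(rows[0]) if rows else 0)
    for col in range(ncols):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                c = rows[r][col]
                rows[r] = [(x - c * y) % p for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def delta(p, d1, d2, d3):
    D = d1 + d2 + d3
    # each generator as (degree, {y-exponent: coefficient})
    gens = [(d1, {0: 1}), (d2, {d2: 1}),
            (d3, {j: comb(d3, j) for j in range(d3 + 1)})]
    hf = []
    for n in range(D + 1):                 # Hilbert function of R/I
        rows = []
        for d, g in gens:
            for k in range(n - d + 1):     # multiply by x^(n-d-k) y^k
                row = [0] * (n + 1)
                for j, c in g.items():
                    row[j + k] = c % p
                rows.append(row)
        hf.append(n + 1 - (rank_mod_p(rows, p) if rows else 0))
    # t^alpha + t^beta = HS(t)(1-t)^2 - 1 + t^d1 + t^d2 + t^d3
    q = [0] * (D + 3)
    for i, c in enumerate(hf):
        q[i] += c
        q[i + 1] -= 2 * c
        q[i + 2] += c
    q[0] -= 1
    for d in (d1, d2, d3):
        q[d] += 1
    exps = [i for i, c in enumerate(q) for _ in range(c)]
    assert len(exps) == 2 and sum(exps) == D   # Herzog-Kühl: alpha + beta = D
    return abs(exps[0] - exps[1])

print(delta(3, 1, 1, 1), delta(3, 3, 3, 3), delta(5, 3, 3, 2))  # 1 3 0
```

Note that $\delta(3, 3, 3) = 3 \cdot \delta(1, 1, 1)$ in characteristic 3, illustrating the Frobenius scaling discussed above, and that $\delta(3, 3, 2) = 0 \le 1$ in characteristic 5, in agreement with the maximal rank criterion.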
This result combined with Proposition 2.5 now gives the following.
Also, let $L_=$ be the subset of $L$ where equality holds, and let $L_< = L \setminus L_=$.
and $\delta(c_1, c_2, c_3)$ decreases when any $c_i$ is replaced by $c_i \pm 1$.
Proof Recall from Lemma 4.5 that $\delta(d_1, d_2, d_3)$ increases or decreases by 1 when we "take a step" in $\mathbb{Z}_+^3$, that is, when one $d_i$ is replaced by $d_i \pm 1$. This proves (3). Imagine now that we start in the point $(d_1, d_2, d_3)$, and take a step in some direction, if it makes the value of $\delta$ increase. We continue in this way, as long as we can make the value of $\delta$ increase in each step. What we want to prove is that such a path can not be infinitely long. Let us fix a point $(d_1, d_2, d_3)$. It follows that a path where the value of $\delta$ increases in each step must be of minimal length, among all paths between these two points. Any other path of minimal length must also have the property that $\delta$ increases in each step. Hence we can replace our path by the path that first increases/decreases $d_1$, then $d_2$, and last $d_3$. But when $d_2$ and $d_3$ are fixed, we can only increase or decrease $d_1$ a finite number of times before we hit $L_=$. The corresponding holds for $d_2$ and $d_3$. At $L_=$ the value of $\delta$ is zero, as we saw in Lemma 4.9, so $\delta$ must have decreased. This shows that there is a bound for the length of a path that starts in a given point $(d_1, d_2, d_3) \in L_<$ and increases $\delta$ in each step. Eventually we will reach a point $(c_1, c_2, c_3)$ such that $\delta(c_1, c_2, c_3)$ decreases when any $c_i$ is replaced by $c_i \pm 1$.

Suppose now that $\delta(c_1, c_2, c_3)$ decreases by one if we replace any $c_i$ by $c_i \pm 1$. Write $c_1 = p^s u$, $c_2 = p^s v$ and $c_3 = p^s w$, such that (at least) one of $u$, $v$ and $w$ is not divisible by $p$. Notice that $\delta(u, v, w)$ also must decrease when $u$, $v$ or $w$ is increased or decreased by one. Otherwise we would have e.g. $\delta(u, v, w + 1) = \delta(u, v, w) + 1$, which implies $\delta(c_1, c_2, c_3 + p^s) = \delta(c_1, c_2, c_3) + p^s$. This can only hold if $\delta$ increases in each step from $(c_1, c_2, c_3)$ to $(c_1, c_2, c_3 + p^s)$, which is not the case.
Now we can use Proposition 4.6 on $\delta(u, v, w)$ with $\ell = x$, $y$, or $x + y$, depending on which of $u$, $v$ and $w$ are not divisible by $p$. Since $r(x^u y^v (x + y)^w) = 3$, we get $\delta(u, v, w) \le 1$. Since $\delta(u, v, w) - 1 = \delta(u, v, w + 1) \ge 0$, we must have $\delta(u, v, w) = 1$. By Lemma 4.3, $u + v + w$ is odd, and we can use our assumption to get $\delta(d_1$