1 Introduction

Let \(R=k[x_1,\ldots ,x_m]\) be a polynomial ring over a field k. In 2006, Boij and Söderberg formulated two conjectures regarding the cone of Betti tables of finitely generated Cohen–Macaulay modules over R [2]. First progress towards answering the conjectures was made by Eisenbud, Fløystad, and Weyman [8], who proved the existence of modules with pure resolutions associated to any degree sequence in characteristic zero. Later on, Eisenbud and Schreyer proved the conjectures [6], and then Boij and Söderberg extended them to the non-Cohen–Macaulay case [3], using the techniques introduced in [6]. One of the main aspects of these conjectures can roughly be summarized as follows:

Theorem 1.1

Given a finitely generated graded R-module M, there exist finitely generated graded R-modules \(N_1,\ldots ,N_s\), each with a pure resolution, and positive rational numbers \(r_1,\ldots ,r_s\), such that

$$\begin{aligned} \displaystyle \beta (M) = \sum _{j=1}^s r_j \beta (N_j). \end{aligned}$$

Here, \(\beta (-)\) denotes the Betti table of a finitely generated graded R-module.

At the core of the proof is the study of another object, the cone of cohomology tables of vector bundles in \(\mathbb {P}^{m-1}\). This cone is not dual to that of Betti tables in the usual sense. However, using suitable pairings, Eisenbud and Schreyer derive information about extremal rays and supporting hyperplanes of one cone from the other. They also provide decomposition algorithms for both cones. Later, in [7], the same authors extend these results to cohomology tables of coherent sheaves. The duality between Betti tables and cohomology tables was later revisited by Eisenbud and Erman in [5], who provided a categorified version. Further results on categorification for the decomposition of cohomology tables were proved by Erman and Sam in [9]. Recently, there has been interest in extending the theory to other settings: for example, [10, 11] develop a Boij–Söderberg theory for coherent sheaves on Grassmannians.

In 2015, during the Bootcamp for the AMS Summer Research Institute in Algebraic Geometry at the University of Utah, Daniel Erman asked whether a theory, analogous to that for cohomology tables of coherent sheaves, could be developed for local cohomology tables of finitely generated graded R-modules. In this article, we work towards answering this question. We give a complete description of the extremal rays of the cone in dimension up to two, and we show that every local cohomology table inside the cone can be expressed as a finite sum of tables from the extremal rays. In what follows, we will view lower dimensional polynomial rings as R-modules via the isomorphisms \(k[x_1,\ldots ,x_i] \cong R/(x_{i+1},\ldots ,x_m)\). The following is the first main result of this article.

Theorem A

(See Theorem 4.6) Let \(R=k[x_1,\ldots ,x_m]\) be a standard graded polynomial ring, let \(\mathfrak {m}=(x_1,\ldots ,x_m)R\), and let M be a finitely generated \(\mathbb {Z}\)-graded R-module of dimension at most two. Let \(S=k[x,y]\) be a standard graded polynomial ring, and \(\mathfrak {n}=(x,y)S\). There exist positive rational numbers \(r_1,\ldots ,r_s\) and finitely generated graded S-modules \(N_1,\ldots ,N_s \in \{k(a),k[x](a),S(a), \mathfrak {n}^t(a) \mid t \in \mathbb {Z}_{\geqslant 1}, a \in \mathbb {Z}\}\) such that

$$\begin{aligned} \displaystyle \dim _k({\text {H}}^i_\mathfrak {m}(M)_\ell ) = \sum _{j =1}^s r_j \dim _k({\text {H}}^i_{\mathfrak {n}}(N_j)_\ell ) \end{aligned}$$

for all \(i \in \{0,1,2\}\) and all \(\ell \in \mathbb {Z}\). Moreover, the local cohomology tables of the modules in the set above describe the extremal rays of the cone of local cohomology tables of finitely generated graded R-modules of dimension at most two.

Recall that there is a well-known relation between the local cohomology of a finite module M and the cohomology of the sheaf \({\widetilde{M}}\) associated to M. The relation states that \(\bigoplus _{t \in \mathbb {Z}} {\text {H}}^i({\widetilde{M}}(t)) \cong {\text {H}}^{i + 1}_\mathfrak {m}(M)\) for \(i > 0\), and there is a four-term exact sequence:

$$\begin{aligned} 0 \rightarrow {\text {H}}^{0}_\mathfrak {m}(M) \rightarrow M \rightarrow \bigoplus _{t \in \mathbb {Z}} {\text {H}}^0 ({\widetilde{M}}(t)) \rightarrow {\text {H}}^{1}_\mathfrak {m}(M) \rightarrow 0. \end{aligned}$$
(1.1)

However, this exact sequence is a stumbling block, and we do not see a way to obtain information on the decompositions of \({\text {H}}^0_\mathfrak {m}(M)\) and \({\text {H}}^1_\mathfrak {m}(M)\) from those of \(\bigoplus _t {\text {H}}^0({\widetilde{M}}(t))\) and M.

To present more differences between local cohomology and sheaf cohomology, observe that, in \(\mathbb {P}^1\), a decomposition of cohomology tables in terms of cohomology tables of supernatural bundles is easily seen to be finite. In fact, by taking cohomology of the exact sequence \(0 \rightarrow \mathrm{t}(\mathcal {F}) \rightarrow \mathcal {F}\rightarrow \mathcal {F}/\mathrm{t}(\mathcal {F}) \rightarrow 0\), where \(\mathrm{t}(\mathcal {F})\) denotes the torsion subsheaf of the sheaf \(\mathcal {F}\), we obtain that it is enough to decompose the tables of \(\mathrm{t}(\mathcal {F})\) and \(\mathcal {F}/\mathrm{t}(\mathcal {F})\) separately. For the latter, observe that \(\mathcal {F}/\mathrm{t}(\mathcal {F})\) is a direct sum of line bundles. For the former, using that \({\text {H}}^1 (\mathrm{t}(\mathcal {F})) = 0\) and \(\dim _k {\text {H}}^0 (\mathrm{t}(\mathcal {F})(d)) = \chi (\mathrm{t}(\mathcal {F})(d))\) is a constant, one can decompose the table of \(\mathrm{t}(\mathcal {F})\) using skyscraper sheaves. In the case of local cohomology tables, finiteness of the decomposition in \(k[x,y]\) is a consequence of Theorem A, but this requires a significant amount of work, as we will show in Sect. 4. In \(\mathbb {P}^2\), a decomposition of sheaf cohomology tables in terms of extremal points may not be finite, as shown in [7, Example 0.3]. On the other hand, [7, Theorem 0.1] asserts that every point in the cone is given by a convergent series of extremal points, coming from supernatural cohomology tables. Given that the arguments for \(\mathbb {P}^1\) and \(k[x,y]\) are significantly different, there is still a possibility that the decomposition of local cohomology tables in terms of extremal points is always finite. We hope to provide an answer to this question in future work.

Another important aspect of Boij–Söderberg theory is the dual description, by non-negative functionals, of the cone spanned by Betti tables. In other words, while it is very hard to say when a given table is a Betti table, it is possible to completely characterize the tables for which some positive multiple is a Betti table. We provide an answer in the following form.

Theorem B

(See Theorem 6.2 and Algorithm 6.8) Let \(R=k[x_1,\ldots ,x_m]\), \(\mathfrak {m}=(x_1,\ldots ,x_m)\), and let \({\mathbb {M}}\) denote the space of \(\mathbb {Z}\times 3\) matrices with finitely many non-zero entries. Then (Proposition 4.2) we can identify a local cohomology table of a finitely generated graded R-module of dimension at most two with a matrix in \({\mathbb {M}}\). Furthermore, a matrix \(A = \{a_{i,j}\} \in {\mathbb {M}}\) is in the cone spanned by the images of local cohomology tables of \(\mathbb {Z}\)-graded R-modules of dimension at most two if and only if the entries of A satisfy the following inequalities:

  • \(0 \leqslant a_{0, s}\) for \(s \in \mathbb {Z}\),

  • \(0 \leqslant a_{1,s} + \sum _{i \leqslant s-1} a_{2,i}\) for \(s \in \mathbb {Z}\),

  • \(0 \leqslant a_{2,s}\) for \(s \in \mathbb {Z}\),

  • \(0 \leqslant \sum _{i > s+n} a_{1,i} + (n+1)a_{1,s+n} + \sum _{i=0}^{n-1} (i+1)a_{2,s+i}\) for \(s \in \mathbb {Z}\) and \(n \in \mathbb {Z}_{\geqslant 0}\).
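To experiment with this characterization, the following Python sketch (our illustration; the function names, the dictionary encoding, and the finite testing ranges are not taken from the paper) checks the four families of inequalities for a candidate matrix \(A = \{a_{i,j}\}\) with finitely many non-zero entries, stored as a dictionary keyed by pairs (i, j). The check is only performed for the supplied finite ranges of s and n; choosing ranges that are provably sufficient is not addressed here.

# A sketch (not from the paper): test the inequalities of Theorem B for a
# candidate matrix A with finitely many non-zero entries, encoded as a
# dictionary {(i, j): a_ij} with i in {0, 1, 2} and j an integer.

def entry(A, i, j):
    """Return a_{i,j}, treating absent entries as zero."""
    return A.get((i, j), 0)

def check_inequalities(A, s_values, n_max):
    """Check the four families of inequalities for s in s_values, 0 <= n <= n_max."""
    degrees = [j for (_, j) in A] or [0]
    lo, hi = min(degrees), max(degrees)
    for s in s_values:
        # (1) and (3): non-negativity of the rows i = 0 and i = 2.
        if entry(A, 0, s) < 0 or entry(A, 2, s) < 0:
            return False
        # (2): 0 <= a_{1,s} + sum_{i <= s-1} a_{2,i}.
        if entry(A, 1, s) + sum(entry(A, 2, i) for i in range(lo, s)) < 0:
            return False
        # (4): 0 <= sum_{i > s+n} a_{1,i} + (n+1) a_{1,s+n} + sum_{i=0}^{n-1} (i+1) a_{2,s+i}.
        for n in range(n_max + 1):
            value = (sum(entry(A, 1, i) for i in range(s + n + 1, hi + 1))
                     + (n + 1) * entry(A, 1, s + n)
                     + sum((i + 1) * entry(A, 2, s + i) for i in range(n)))
            if value < 0:
                return False
    return True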

A fundamental aspect of our work is the pair of greedy decomposition algorithms accompanying the two main theorems. In Sect. 5 and Algorithm 6.8 we explain how to decompose, in terms of extremal points of the cone, a given local cohomology table of a finitely generated \(k[x,y]\)-module or a matrix satisfying the inequalities of Theorem B. We point out that the proof of Theorem A could be turned into an algorithm to obtain such a decomposition, and this may produce a different decomposition from the one coming from the greedy algorithm of Sect. 5. The advantage of the strategy used in the proof of Theorem A is that it provides a shorter and more conceptual argument; the disadvantage is that it requires knowledge of the module M. The greedy algorithm provided in Sect. 5, while being less transparent and more computational in nature, only requires knowledge of the local cohomology table of M.

2 Notation and background

In what follows, let \(R=k[x_1,\ldots ,x_m]\) be a polynomial ring over a field k. We will always view R with its standard grading, that is, \(\deg (x_i)=1\) for all i. We can write \(R= \bigoplus _{n \geqslant 0} R_n\), where \(R_n\) is the k-vector space spanned by the monomials in \(x_1,\ldots ,x_m\) of degree n. We will use \(\mathfrak {m}\) to denote the irrelevant maximal ideal \(\bigoplus _{n \geqslant 1} R_n\).

Local cohomology was introduced by Grothendieck [13]. One way to define it is as follows. Given a \(\mathbb {Z}\)-graded R-module \(M = \bigoplus _{n \in \mathbb {Z}} M_n\), we consider the \(\check{\text{ C }}\)ech complex:

$$\begin{aligned} \displaystyle \check{\text{ C }}^\bullet (M): \quad 0 \rightarrow M \rightarrow \bigoplus _{1 \leqslant i \leqslant m} M_{x_i} \rightarrow \bigoplus _{1 \leqslant i < j \leqslant m} M_{x_ix_j} \rightarrow \cdots \rightarrow M_{x_1 \cdots x_m} \rightarrow 0, \end{aligned}$$

which is a complex of \(\mathbb {Z}\)-graded modules. Each map is a direct sum of localization maps, up to appropriate sign choices that make \(\check{\text{ C }}^\bullet (M)\) into a complex. For \(i \in \mathbb {Z}\), the local cohomology modules

$$\begin{aligned} \displaystyle {\text {H}}^i_\mathfrak {m}(M) = {\text {H}}^i(\check{\text{ C }}^\bullet (M)) \end{aligned}$$

are \(\mathbb {Z}\)-graded Artinian R-modules. It is well-known that, if \({\text {H}}^i_\mathfrak {m}(M)\ne 0\), then \({\text {depth}}(M) \leqslant i \leqslant \dim (M)\) and these bounds are sharp. Given a finitely generated \(\mathbb {Z}\)-graded R-module M, for \(j \in \mathbb {Z}\) we let

$$\begin{aligned} \displaystyle h^i(M)_j = \dim _k({\text {H}}^i_\mathfrak {m}(M)_j), \end{aligned}$$

where the subscript j denotes the j-th graded component of \({\text {H}}^i_\mathfrak {m}(M)\). It is well-known that all these dimensions are finite. We collect these numbers in a matrix with \(\mathbb {Z}\)-many rows and \(d+1\) columns, where \(d=\dim (M)\):

$$\begin{aligned} \displaystyle [{\text {H}}^\bullet _\mathfrak {m}(M)] = (h^i(M)_j)_{j \in \mathbb {Z}, 0 \leqslant i \leqslant d} \in \mathrm{Mat}_{\mathbb {Z},d+1}(\mathbb {Z}_{\geqslant 0}). \end{aligned}$$
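For example, for \(M=R\) itself it is well known that \({\text {H}}^i_\mathfrak {m}(R)=0\) for every \(i \ne m\), while

$$\begin{aligned} \displaystyle h^m(R)_j = \dim _k(R_{-j-m}) = \binom{-j-1}{m-1} \quad \text{ for all } j \leqslant -m, \end{aligned}$$

so the only non-zero column of \([{\text {H}}^\bullet _\mathfrak {m}(R)]\) is the last one.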

Finally, for \(i=0,\ldots ,d\), we denote by \([{\text {H}}^i_\mathfrak {m}(M)]\) the \((i+1)\)-st column of the matrix \([{\text {H}}^\bullet _\mathfrak {m}(M)]\), that is, the column with entries \((h^i(M)_j)_{j \in \mathbb {Z}}\). The following is the main question we investigate in this article.

Question 2.1

Let \(R=k[x_1,\ldots ,x_m]\), where k is a field. Is there a set \(\Lambda _d\) of local cohomology tables of finitely generated \(\mathbb {Z}\)-graded R-modules that satisfies the following two conditions?

  1. (1)

    Given any finitely generated \(\mathbb {Z}\)-graded R-module M with \(\dim (M) \leqslant d\), there exist finitely many positive rational numbers \(r_1,\ldots ,r_s\) and tables \({\text {H}}_1,\ldots ,{\text {H}}_s \in \Lambda _d\) such that \([{\text {H}}^\bullet _\mathfrak {m}(M)] = \sum _{j=1}^s r_j{\text {H}}_j\).

  2. (2)

    The set \(\Lambda _d\) is minimal, that is, none of the elements of \(\Lambda _d\) can be obtained as a finite positive rational linear combination of other elements from \(\Lambda _d\).

Observe that, if such a set \(\Lambda _d\) exists, the local cohomology tables of modules from \(\Lambda _d\) define the extremal rays of the cone of local cohomology tables of finitely generated graded R-modules.

In relation to the above, we are also interested in a dual description of the cone, in terms of its facets. In other words, our goals include a description of the linear functionals that cut out the cone in the space of all tables.

In this article we provide an answer to Question 2.1 when \(d \leqslant 2\) (Sects. 3 and 4). We first show that, in general, the study of \(\Lambda _d\) reduces to understanding local cohomology tables of modules over polynomial rings in d variables. Moreover, we provide the facet description of the cone, in terms of the supporting hyperplanes, again for \(d\leqslant 2\). This is done in Sect. 6.

Both problems actually reduce to the study of local cohomology tables of finite graded modules over \(k[x,y]\), with k an infinite field, by means of the following lemma.

Lemma 2.2

Let \(k \subseteq \ell \) be fields, with \(|\ell |=\infty \), let \(R=k[x_1,\ldots ,x_m]\), and \(S=\ell [y_1,\ldots ,y_d]\). The cone of local cohomology tables of finite graded modules over R of dimension at most d equals the cone of local cohomology tables of finite graded S-modules.

Proof

First observe that, when studying the cone of local cohomology tables, we may always extend the base field without losing any generality, using considerations along the lines of [5, Lemma 9.6]. In fact, every local cohomology table over R is naturally a local cohomology table over \(R_\ell = R \otimes _k \ell \); conversely, every local cohomology table over \(R_\ell \) is a multiple of a local cohomology table over R.

We will therefore assume that \(k=\ell \) is infinite, without losing any generality. Let M be a finitely generated graded R-module of dimension at most d. Let \(A = \ell [z_1,\ldots ,z_t]\) be a graded Noether normalization of \(R/{\text {ann}}_R(M)\), where \(t \leqslant d\) is forced by our assumptions. We can view A as a finite graded S-module by sending \(y_i\) to \(z_i\) for \(1 \leqslant i \leqslant t\), and the remaining \(y_i\) to zero. Since M is a finitely generated graded \(R/{\text {ann}}_R(M)\)-module, and \(S \rightarrow A \rightarrow R/{\text {ann}}_R(M)\) is finite, M is a finitely generated graded S-module with respect to the standard grading on S. Therefore the local cohomology table of M belongs to the cone of local cohomology tables of finite S-modules. Conversely, every finite S-module can be viewed as a finite R-module of dimension at most d via the map \(R \rightarrow S\) that sends \(x_i\) to \(y_i\) for \(1 \leqslant i \leqslant d\), and the remaining \(x_i\) to zero. \(\square \)

Remark 2.3

In the rest of the article, we will tacitly make use of Lemma 2.2, and study the cone of local cohomology tables of modules of dimension at most two by working with polynomial rings in at most two variables over an infinite field.

Moreover, there is little harm in working with modules of positive depth. Namely, we may decompose the table \([{\text {H}}^\bullet _\mathfrak {m}(M)] = [{\text {H}}^0_\mathfrak {m}(M)]+[{\text {H}}^\bullet _\mathfrak {m}(M/{\text {H}}^0_\mathfrak {m}(M))]\) and note that the decomposition of \({\text {H}}^0_\mathfrak {m}(M)\) as a k-vector space gives a decomposition of its local cohomology table into elements of the form \([{\text {H}}^\bullet _\mathfrak {m}(k(a))]\).
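Concretely, since \({\text {H}}^0_\mathfrak {m}(M)\) has finite length, its table is the finite sum

$$\begin{aligned} \displaystyle [{\text {H}}^\bullet _\mathfrak {m}({\text {H}}^0_\mathfrak {m}(M))] = \sum _{a \in \mathbb {Z}} \dim _k\big ({\text {H}}^0_\mathfrak {m}(M)_{-a}\big ) \, [{\text {H}}^\bullet _\mathfrak {m}(k(a))]. \end{aligned}$$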

3 Decomposition of graded local cohomology tables in dimension one

When \(R=k\) is a field, one can immediately see that the set \(\Lambda _0= \{[{\text {H}}^\bullet _\mathfrak {m}(k(a))] \mid a \in \mathbb {Z}\}\) provides an answer to Question 2.1. Finitely generated modules over \(R = k[x]\) are also very well-understood, since R is a PID. We will show in this section that \(\Lambda _1 = \{[{\text {H}}^\bullet _\mathfrak {m}(k(a))], [{\text {H}}^\bullet _\mathfrak {m}(k[x](a))] \mid a \in \mathbb {Z}\}\).

Theorem 3.1

Let \(R=k[x]\). The local cohomology table of every finitely generated graded R-module can be expressed as a finite sum, with positive integer coefficients, of local cohomology tables of the form \([{\text {H}}^\bullet _\mathfrak {m}(k(a))]\) and \([{\text {H}}^\bullet _\mathfrak {m}(k[x](a))]\), for \(a \in \mathbb {Z}\). Moreover, the set these tables form is minimal, so that \(\Lambda _1=\{[{\text {H}}^\bullet _\mathfrak {m}(k(a))], [{\text {H}}^\bullet _\mathfrak {m}(k[x](a))] \mid a \in \mathbb {Z}\}\) provides an answer to Question 2.1.

Proof

By Remark 2.3, we may assume that M has positive depth and, therefore, that it decomposes as a direct sum of modules of the form R(a), with \(a \in \mathbb {Z}\).

To conclude the proof, we need to show that the set \(\{[{\text {H}}^\bullet _\mathfrak {m}(k(a))],[{\text {H}}^\bullet _\mathfrak {m}(k[x](a))] \mid a \in \mathbb {Z}\}\) is minimal. To do so, we distinguish two cases:

  1. (1)

    First assume that there exist \(\lambda _r,\mu _s \in \mathbb {Q}_{\geqslant 0}\) such that

    $$\begin{aligned} \displaystyle [{\text {H}}^\bullet _\mathfrak {m}(k(a))] = \sum _{r \ne a} \lambda _r [{\text {H}}^\bullet _\mathfrak {m}(k(r))] + \sum _{s \in \mathbb {Z}} \mu _s [{\text {H}}^\bullet _\mathfrak {m}(k[x](s))]. \end{aligned}$$

    We will reach a contradiction by specializing this equality of \(\mathbb {Z}\times 2\) tables to specific entries. In fact, the entry \((-a,1)\) on the left is \(h^0(k(a))_{-a} = 1\), while every table on the right has a zero entry in that position.

  2. (2)

    Now assume there exist \(\lambda _r,\mu _s \in \mathbb {Q}_{\geqslant 0}\) such that

    $$\begin{aligned} \displaystyle [{\text {H}}^\bullet _\mathfrak {m}(k[x](a))] = \sum _{r \in \mathbb {Z}} \lambda _r [{\text {H}}^\bullet _\mathfrak {m}(k(r))] + \sum _{s \ne a} \mu _s [{\text {H}}^\bullet _\mathfrak {m}(k[x](s))]. \end{aligned}$$

    Since the table on the left has all zeros in the first column, we readily get that \(\lambda _r=0\) for all r. Moreover, since the \((-a,2)\) entry on the left is \(h^1(k[x](a))_{-a} = 0\), we obtain that \(\mu _s=0\) for all \(s<a\). However, specializing at \((-a-1,2)\), on the left we have \(h^1(k[x](a))_{-a-1} = 1\), while all the tables on the right have a zero entry in that position. A contradiction.

\(\square \)
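To illustrate the statement with a small example of our own, take \(M = k[x]/(x^3) \oplus (x^2)\), where \((x^2) \subseteq k[x]\) denotes the ideal generated by \(x^2\). Then \({\text {H}}^0_\mathfrak {m}(M) = k[x]/(x^3)\), whose Hilbert function is 1 in degrees 0, 1 and 2, while \((x^2) \cong k[x](-2)\) as a graded module, so that

$$\begin{aligned} \displaystyle [{\text {H}}^\bullet _\mathfrak {m}(M)] = [{\text {H}}^\bullet _\mathfrak {m}(k)] + [{\text {H}}^\bullet _\mathfrak {m}(k(-1))] + [{\text {H}}^\bullet _\mathfrak {m}(k(-2))] + [{\text {H}}^\bullet _\mathfrak {m}(k[x](-2))]. \end{aligned}$$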

4 Decomposition of graded local cohomology tables in dimension two

In this section, R will denote the polynomial ring \(k[x,y]\) over an infinite field k. Given any finitely generated R-module M, we have \({\text {H}}^i_\mathfrak {m}(M)=0\) for all \(i \leqslant -1\) and all \(i \geqslant 3\). Therefore the local cohomology of M can be encoded into a \(\mathbb {Z}\times 3\) matrix \([{\text {H}}^\bullet _\mathfrak {m}(M)]\), with non-negative integer entries.

Notation 4.1

Let \(N = \bigoplus _{n\in \mathbb {Z}}N_n\) be a \(\mathbb {Z}\)-graded k-vector space that satisfies \(\dim _k(N_n) < \infty \) for all \(n \in \mathbb {Z}\). For \(t \geqslant 0\) we define a “t-difference function” \(\Delta ^t_N:\mathbb {Z}\rightarrow \mathbb {Z}\) inductively. If \(t=0\) then \(\Delta ^0_N(n) = \dim _k(N_n)\) for all \(n \in \mathbb {Z}\). If \(t>0\), for \(n\in \mathbb {Z}\) we define \(\Delta ^t_N(n) = \Delta ^{t-1}_N(n) - \Delta ^{t-1}_N(n+1)\).
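For small experiments it is convenient to compute these difference functions directly. The following Python sketch (ours; the helper name is hypothetical) does so for a graded vector space that is non-zero in only finitely many degrees, stored as a dictionary of dimensions; columns that are non-zero in infinitely many degrees, such as \({\text {H}}^2_\mathfrak {m}(M)\), would instead require a symbolic description of their stable behaviour.

# A sketch of the t-difference function of Notation 4.1, adequate only for
# finitely supported dimension functions N = {degree: dim_k(N_degree)}.
def delta(N, t, n):
    """Return Delta^t_N(n)."""
    if t == 0:
        return N.get(n, 0)
    return delta(N, t - 1, n) - delta(N, t - 1, n + 1)

# Example: for dimensions 1, 2, 1 in degrees 0, 1, 2 one gets
# Delta^1_N(1) = 2 - 1 = 1 and Delta^2_N(0) = (1 - 2) - (2 - 1) = -2.
assert delta({0: 1, 1: 2, 2: 1}, 1, 1) == 1
assert delta({0: 1, 1: 2, 2: 1}, 2, 0) == -2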

Proposition 4.2

Let \(R=k[x,y]\), and M be a finitely generated R-module.

  1. (1)

    There exists an integer a such that \({\text {H}}^i_\mathfrak {m}(M)_n = 0\) for all \(n > a\).

  2. (2)

    For \(i=0,2\) we have \(\Delta ^j_{{\text {H}}^i_\mathfrak {m}(M)}(n) \geqslant 0\) for all \(n \in \mathbb {Z}\) and all \(j \leqslant i\). For \(i=1\), we have \(\Delta ^0_{{\text {H}}^1_\mathfrak {m}(M)}(n) \geqslant 0\) for all \(n \in \mathbb {Z}\), and \(\Delta ^1_{{\text {H}}^1_\mathfrak {m}(M)}(n) \geqslant 0\) for all \(n \ll 0\).

  3. (3)

    For every \(i=0,1,2\) we have \(\Delta ^i_{{\text {H}}^i_\mathfrak {m}(M)}(n)=0\) for all but finitely many \(n \in \mathbb {Z}\).

Proof

This follows from standard results on the growth of Hilbert functions of finitely generated graded modules of a given dimension. Indeed, the graded Matlis dual of \({\text {H}}^i_\mathfrak {m}(M)\) is \({\text {Ext}}^{2-i}_R(M,R(-2))\), and the latter is a finitely generated module of dimension at most i. \(\square \)

In analogy with the notation we use for local cohomology modules, given a \(\mathbb {Z}\)-graded R-module L we record its Hilbert function \(n \mapsto \dim _k(L_n)\) in a column which we denote by [L]. To help keep track of degrees, we will also include the index \(n \in \mathbb {Z}\) as an extra column. Moreover, we usually represent such columns as rows, by taking the transpose matrix:

$$\begin{aligned} \displaystyle [L]^T = \left[ \ \ \begin{matrix} \cdots &{} n+1 &{} n &{} n-1 &{} \cdots \\ \hline \cdots &{} \dim _k(L_{n+1}) &{} \dim _k(L_n) &{} \dim _k(L_{n-1}) &{} \cdots \end{matrix} \ \ \right] . \end{aligned}$$

Lemma 4.3

Let \(R=k[x,y]\), and L be a graded cyclic R-module of finite length. Let a (respectively, b) be the smallest (respectively, largest) integer t such that \(L_t \ne 0\). Then \([L]= \sum _{n=0}^{b-a} r_n [R/\mathfrak {m}^{n+1}(-a)]\), for some \(r_n \in \mathbb {Q}_{\geqslant 0}\).

Proof

Since L is cyclic, we can write \(L=(R/I)(-a)\) for some \(\mathfrak {m}\)-primary homogeneous ideal I. Let \(s=b-a\), and \(d_n = \dim _k((R/I)_n)\) for all \(n \in \mathbb {Z}\). By [1, Theorem 1.1] we have that

$$\begin{aligned} n d_n \leqslant (n+1) d_{n-1} \end{aligned}$$
(4.1)

for all \(n \geqslant 1\). Consider the linear system

$$\begin{aligned} \left[ \begin{matrix} 0 &{}\quad 0 &{}\quad \ldots &{}\quad 0 &{}\quad s+1 \\ 0 &{}\quad 0 &{}\quad \ldots &{}\quad s &{}\quad s \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \vdots \\ 0 &{}\quad 2 &{}\quad \ldots &{}\quad 2 &{}\quad 2 \\ 1 &{}\quad 1 &{}\quad \ldots &{}\quad 1 &{}\quad 1 \end{matrix} \right] \cdot \left[ \begin{matrix} X_0 \\ X_1 \\ \vdots \\ X_{s-1} \\ X_s \end{matrix} \right] = \left[ \begin{matrix} d_s \\ d_{s-1} \\ \vdots \\ d_1 \\ d_0 \end{matrix} \right] \end{aligned}$$

which has a unique solution \((r_0,\ldots ,r_s) \in \mathbb {Q}^{s+1}\). We prove that \(r_i \geqslant 0\) for all i. It is clear that \(r_s = d_s/(s+1) > 0\). For \(0 \leqslant i < s\) we have that

$$\begin{aligned} \displaystyle r_{i}= & {} \frac{d_i}{i+1} - \left( \sum _{j=i+1}^s r_j\right) = \frac{d_i}{i+1} - \left( \frac{d_{i+1}}{i+2} - \frac{d_{i+2}}{i+3} \right) - \cdots \\&- \left( \frac{d_{s-1}}{s} - \frac{d_{s}}{s+1}\right) - \frac{d_s}{s+1} = \frac{d_i}{i+1} - \frac{d_{i+1}}{i+2} \geqslant 0 \end{aligned}$$

by (4.1). For all \(j =0,\ldots ,s\), we then have that

$$\begin{aligned} \displaystyle d_j = \sum _{n=j}^{s} r_n(j+1) = \sum _{n=0}^{s} r_n\dim _k((R/\mathfrak {m}^{n+1})_j). \end{aligned}$$

Taking into account the shift by a, we finally obtain that \([L] = \sum _{n=0}^{b- a} r_n[(R/\mathfrak {m}^{n+1})(-a)]\), as claimed. \(\square \)
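For instance, take \(L = R/(x^2,y^2)\), so that \(a=0\), \(b=s=2\) and \((d_0,d_1,d_2)=(1,2,1)\). The triangular system gives

$$\begin{aligned} \displaystyle r_2 = \frac{d_2}{3} = \frac{1}{3}, \qquad r_1 = \frac{d_1}{2} - \frac{d_2}{3} = \frac{2}{3}, \qquad r_0 = \frac{d_0}{1} - \frac{d_1}{2} = 0, \end{aligned}$$

so that \([R/(x^2,y^2)] = \frac{2}{3}\,[R/\mathfrak {m}^2] + \frac{1}{3}\,[R/\mathfrak {m}^3]\); this is exactly the decomposition that reappears in Example 4.9.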

We recall the following graded versions of Serre’s condition \((S_k)\).

Definition 4.4

Let \((R,\mathfrak {m})\) be a standard graded k-algebra, and M be a finitely generated graded R-module. We say that M satisfies Serre’s graded condition \((S_k)\) if

$$\begin{aligned} \displaystyle {\text {depth}}(M_\mathfrak {p}) \geqslant \min \{\dim (M_\mathfrak {p}), k\} \end{aligned}$$

for all homogeneous ideals \(\mathfrak {p}\in {\text {Spec}}(R)\). We say that M satisfies Serre’s graded condition \((S_k)\) on the punctured spectrum if the inequality holds for all homogeneous ideals \(\mathfrak {p}\), with \(\mathfrak {p}\ne \mathfrak {m}\).

Lemma 4.5

Let \(R=k[x_1,\ldots ,x_d]\), \(\mathfrak {m}= (x_1,\ldots ,x_d)\) be its irrelevant maximal ideal, and M be a finitely generated R-module of dimension d. If M satisfies Serre’s graded condition \((S_{d-1})\) on the punctured spectrum, then \({\text {H}}^i_\mathfrak {m}(M)\) has finite length for all \(i \ne d\).

Proof

Let \(i \ne d\). By graded local duality, we have that \({\text {H}}^i_\mathfrak {m}(M)\) has finite length if and only if \({\text {Ext}}^{d-i}_R(M,R)\) does. Since M is graded, so is \({\text {Ext}}^{d-i}_R(M,R)\). In particular, such a module has finite length if and only if \({\text {Ext}}^{d-i}_R(M,R)_\mathfrak {p}= 0\) for all homogeneous primes \(\mathfrak {p}\), with \(\mathfrak {p}\ne \mathfrak {m}\). Given that \({\text {Ext}}^{d-i}_R(M,R)_\mathfrak {p}\cong {\text {Ext}}^{d-i}_{R_\mathfrak {p}}(M_\mathfrak {p},R_\mathfrak {p})\), by local duality the latter is zero if and only if \({\text {H}}^{i+\delta (\mathfrak {p})}_{\mathfrak {p}R_\mathfrak {p}}(M_\mathfrak {p}) = 0\), where \(\delta (\mathfrak {p}) = \dim (R_\mathfrak {p})-d\). Finally, because \(i \ne d\), this local cohomology module over \(R_\mathfrak {p}\) is zero given that, because of our assumptions, \(M_\mathfrak {p}\) is Cohen–Macaulay with \(\dim (M_\mathfrak {p}) = \dim (R_\mathfrak {p})\). \(\square \)

The following is the main result of this section.

Theorem 4.6

Let \(R=k[x,y]\), \(\mathfrak {m}=(x,y)\), and M be a finitely generated \(\mathbb {Z}\)-graded R-module. Then \([{\text {H}}^\bullet _\mathfrak {m}(M)]\) can be written as a finite sum with positive rational coefficients of tables of the form \([{\text {H}}^\bullet _\mathfrak {m}(k(a))]\), \([{\text {H}}^\bullet _\mathfrak {m}(k[x](a))]\), \([{\text {H}}^\bullet _\mathfrak {m}(k[x,y](a))]\) and \([{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^t(a))]\), for \(a \in \mathbb {Z}\). Moreover, the set of such tables is minimal. Thus, the following set provides an answer to Question 2.1:

$$\begin{aligned} \displaystyle \Lambda _2 = \{[{\text {H}}^\bullet _\mathfrak {m}(k(a))], [{\text {H}}^\bullet _\mathfrak {m}(k[x](a))], [{\text {H}}^\bullet _\mathfrak {m}(k[x,y](a))], [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^t(a))] \mid t \in \mathbb {Z}_{\geqslant 1}, a\in \mathbb {Z}\}. \end{aligned}$$

Proof

Let M be a finitely generated R-module, and consider its local cohomology table \([{\text {H}}^\bullet _\mathfrak {m}(M)]\).

By Remark 2.3 we will assume that M has positive depth. Let \({\widetilde{M}}\) be the sheaf on \(\mathbb {P}^1\) associated to M, so that \({\widetilde{M}} \cong \mathcal {F} \oplus \mathcal {O}(-a_1) \oplus \cdots \oplus \mathcal {O}(-a_t)\), with \(a_1,\ldots ,a_t \in \mathbb {Z}\) and \(\mathcal {F}\) the torsion subsheaf of \({\widetilde{M}}\). Let \(\Gamma _*({\widetilde{M}}) = \bigoplus _{n \in \mathbb {Z}} {\text {H}}^0(\mathbb {P}^1,{\widetilde{M}}(n))\), and consider the composition
$$\begin{aligned} \displaystyle M \longrightarrow \Gamma _*({\widetilde{M}}) \longrightarrow \Gamma _*(\mathcal {O}(-a_1) \oplus \cdots \oplus \mathcal {O}(-a_t)) \cong R(-a_1) \oplus \cdots \oplus R(-a_t), \end{aligned}$$

where the first map is the natural one and the second is induced by the projection of \({\widetilde{M}}\) onto \(\mathcal {O}(-a_1) \oplus \cdots \oplus \mathcal {O}(-a_t)\).
We let N be its kernel, and P be its image. Both N and P have positive depth. Since \(\dim (N) \leqslant 1\), this forces N to be Cohen–Macaulay, and the exact sequence \(0 \rightarrow N \rightarrow M \rightarrow P \rightarrow 0\) gives an exact sequence \(0 \rightarrow {\text {H}}^1_\mathfrak {m}(N) \rightarrow {\text {H}}^1_\mathfrak {m}(M) \rightarrow {\text {H}}^1_\mathfrak {m}(P) \rightarrow 0\), and \({\text {H}}^2_\mathfrak {m}(M) \cong {\text {H}}^2_\mathfrak {m}(P)\). Because N has dimension one, it is finite over a one-dimensional polynomial ring, and it then follows from Theorem 3.1 that we can decompose its table using elements from \(\Lambda _2\). Therefore, in order to finish the proof, it suffices to show that we can decompose \([{\text {H}}^\bullet _\mathfrak {m}(P)]\) using elements from \(\Lambda _2\). We have a short exact sequence
$$\begin{aligned} \displaystyle 0 \rightarrow P \rightarrow R(-a_1) \oplus \cdots \oplus R(-a_t) \rightarrow C \rightarrow 0, \end{aligned}$$
where C has finite length. Taking local cohomology gives that \({\text {H}}^1_\mathfrak {m}(P) \cong C\), and \({\text {H}}^2_\mathfrak {m}(P) \cong {\text {H}}^2_\mathfrak {m}(R(-a_1)) \oplus \cdots \oplus {\text {H}}^2_\mathfrak {m}(R(-a_t))\). We induct on \(t \geqslant 0\). If \(t=0\), there is nothing to prove. If \(t>0\), then we let \(I(-a_t) = {\text {ker}}(R(-a_t) \rightarrow C)\), which is an \(\mathfrak {m}\)-primary ideal. Let \(\overline{P} = {{\,\mathrm{coker}\,}}(I(-a_t) \rightarrow P)\) and \(\overline{C} = {{\,\mathrm{coker}\,}}((R/I)(-a_t) \rightarrow C)\), so that we have a short exact sequence \(0 \rightarrow \overline{P}\rightarrow R(-a_1) \oplus \cdots \oplus R(-a_{t-1}) \rightarrow \overline{C} \rightarrow 0\). By induction, we can decompose \([{\text {H}}^\bullet _\mathfrak {m}(\overline{P})]\) using tables from \(\Lambda _2\). Moreover, it can be checked that \([{\text {H}}^\bullet _\mathfrak {m}(P)] = [{\text {H}}^\bullet _\mathfrak {m}(I(-a_t))] + [{\text {H}}^\bullet _\mathfrak {m}(\overline{P})]\). Therefore, it suffices to decompose \([{\text {H}}^\bullet _\mathfrak {m}(I)]\), where I is an \(\mathfrak {m}\)-primary ideal. By Lemma 4.3 we can write \([R/I] = \sum _{n=0}^t r_n[R/\mathfrak {m}^n]\) for some \(r_n \in \mathbb {Q}_{\geqslant 0}\), and some integer t. Observe that, since \([R/I]_0=1\), we must have \(1=\sum _{n=0}^t r_n[R/\mathfrak {m}^n]_0 = \sum _{n=0}^t r_n\). Notice that \([{\text {H}}^1_\mathfrak {m}(I)] = [R/I] = \sum _{n=0}^t r_n [R/\mathfrak {m}^n] = \sum _{n=0}^t r_n [{\text {H}}^1_\mathfrak {m}(\mathfrak {m}^n)]\). Moreover, since \({\text {H}}^2_\mathfrak {m}(I) \cong {\text {H}}^2_\mathfrak {m}(R) \cong {\text {H}}^2_\mathfrak {m}(\mathfrak {m}^n)\) for all n, we have that \([{\text {H}}^\bullet _\mathfrak {m}(I)] = \sum _{n=0}^t r_n[{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^n)]\). This concludes the proof that the local cohomology table of every module can be decomposed using tables from the set \(\Lambda _2\). It is left to show the minimality of this set.

For tables of the form \([{\text {H}}^\bullet _\mathfrak {m}(k(a))]\) and \([{\text {H}}^\bullet _\mathfrak {m}(k[x](a))]\), the strategy is completely identical to that used inside the proof of Theorem 3.1. We therefore only focus on the proof for the remaining tables.

Assume that, for \(\lambda _r,\mu _s\) and \(\tau _{t,u} \in \mathbb {Q}_{\geqslant 0}\), one has

$$\begin{aligned} \displaystyle [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^n(a))] = \sum _{r \in \mathbb {Z}} \lambda _r[{\text {H}}^\bullet _\mathfrak {m}(k(r))] + \sum _{s \in \mathbb {Z}} \mu _s [{\text {H}}^\bullet _\mathfrak {m}(k[x](s))] + \sum _{{\tiny \begin{array}{c} u \in \mathbb {Z}\\ (t,u) \ne (n,a)\end{array}}}\tau _{t,u} [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^t(u))]. \end{aligned}$$

Here, we allow the exponent in \(\mathfrak {m}^t\) to be zero, in which case we mean \(\mathfrak {m}^0:=R\). Since the first column on the left contains all zeros, one readily sees that \(\lambda _r=0\) for all r. Moreover, \(\mu _s=0\) is forced for all s, since the table on the left satisfies \(h^1(\mathfrak {m}^n(a))_p = 0\) for \(p \ll 0\). Similar considerations on zeros of the second and third column rule out \([{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^t(u))]\), with \(u\ne a\). Finally, since the table on the left has zeros at \(h^1(\mathfrak {m}^n(a))_p\) for \(p \geqslant n-a\), we have \(\tau _{t,a} = 0\) for \(t > n\). If \(n=0\), we have reached a contradiction, since no tables on the right satisfy these requirements. If \(n>0\), what is left is:

$$\begin{aligned} \displaystyle [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^n(a))] = \sum _{0 \leqslant t < n} \tau _{t,a} [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^t(a))]. \end{aligned}$$

However, the entry \(h^1(\mathfrak {m}^n(a))_{n-1-a}\) on the left is equal to n, while on the right all the tables have zero entries. A contradiction, which concludes the proof. \(\square \)

Remark 4.7

The proof of the theorem shows that if F is a graded free R-module such that \({\text {H}}_\mathfrak {m}^2 (F) \cong {\text {H}}_\mathfrak {m}^2(M)\), then we have a surjection \(F \rightarrow {\text {H}}^1_\mathfrak {m}(C) \rightarrow 0\).

Remark 4.8

Alexandra Seceleanu has indicated to us that, quite interestingly, all modules whose local cohomology tables appear in the set \(\Lambda _2\) of Theorem 4.6 are actually graded with respect to the fine \(\mathbb {Z}^2\)-grading on R. Daniel Erman has pointed out that they in fact satisfy an even stronger condition, as they are \(\mathrm{GL}_2\)-equivariant. Assuming Question 2.1 has a positive answer, it would be interesting to determine whether this is the case even in higher dimension.

We conclude the section with an example that shows that the coefficients appearing in a decomposition may not be integers, as opposed to the case of finitely generated modules over k[x]. Moreover, such a decomposition may not be unique. The reason is that the cone of local cohomology tables is not simplicial, since the vectors defined by elements of \(\Lambda _2\) are not linearly independent.

Example 4.9

Let \(M = (x^2,y^2)\). Given that \({\text {H}}^0_\mathfrak {m}(M) = 0\), and using the isomorphisms \({\text {H}}^1_\mathfrak {m}(M) \cong {\text {H}}^0_\mathfrak {m}(R/(x^2,y^2))\) and \({\text {H}}^2_\mathfrak {m}(M) \cong {\text {H}}^2_\mathfrak {m}(R)\), one can verify that the transpose of the local cohomology table of M is
$$\begin{aligned} \displaystyle [{\text {H}}^\bullet _\mathfrak {m}(M)]^T = \left[ \ \ \begin{matrix} 3 &{} 2 &{} 1 &{} 0 &{} -1 &{} -2 &{} -3 &{} -4 &{} \cdots \\ \hline 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots \\ 0 &{} 1 &{} 2 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 2 &{} 3 &{} \cdots \end{matrix} \ \ \right] , \end{aligned}$$

where the three rows below the degree row record \(h^0(M)_j\), \(h^1(M)_j\) and \(h^2(M)_j\), respectively,
so \([{\text {H}}^\bullet _\mathfrak {m}(M)] = \frac{2}{3} [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^2)] + \frac{1}{3} [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^3)]\). Using the same module M, it is then easy to see that the transpose of the local cohomology table of \(M \oplus R(-2)\) is
$$\begin{aligned} \displaystyle [{\text {H}}^\bullet _\mathfrak {m}(M \oplus R(-2))]^T = \left[ \ \ \begin{matrix} 3 &{} 2 &{} 1 &{} 0 &{} -1 &{} -2 &{} -3 &{} -4 &{} \cdots \\ \hline 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots \\ 0 &{} 1 &{} 2 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots \\ 0 &{} 0 &{} 0 &{} 1 &{} 2 &{} 4 &{} 6 &{} 8 &{} \cdots \end{matrix} \ \ \right] . \end{aligned}$$
This table can then be decomposed in at least two ways:

$$\begin{aligned}&\displaystyle \frac{2}{3} [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^2)] + \frac{1}{3} [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^3)] + [{\text {H}}^\bullet _\mathfrak {m}(R(-2))] \\&\quad = [{\text {H}}^\bullet _\mathfrak {m}(M \oplus R(-2))] = [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^2)] + [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}(-2))]. \end{aligned}$$
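As a quick consistency check, compare the \({\text {H}}^1\) columns of the two decompositions: since \({\text {H}}^1_\mathfrak {m}(\mathfrak {m}^t) \cong R/\mathfrak {m}^t\) and \({\text {H}}^1_\mathfrak {m}(\mathfrak {m}(-2)) \cong k(-2)\), both sides produce the Hilbert function of \(R/(x^2,y^2)\), namely

$$\begin{aligned} \displaystyle \frac{2}{3}\,(1,2,0) + \frac{1}{3}\,(1,2,3) = (1,2,1) = (1,2,0) + (0,0,1), \end{aligned}$$

where the vectors record dimensions in degrees 0, 1 and 2. The \({\text {H}}^2\) columns agree as well, since \({\text {H}}^2_\mathfrak {m}(\mathfrak {m}^t(a)) \cong {\text {H}}^2_\mathfrak {m}(R(a))\) for every \(t \geqslant 1\).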

5 An algorithm for the decomposition of local cohomology tables in \(k[x,y]\)

Let \(R=k[x,y]\), where k is a field. We now describe a greedy algorithm that, given the local cohomology table of a finitely generated graded R-module, shows how to express it in terms of tables from the set \(\Lambda _2\) described in Theorem 4.6.

Let L be a cyclic graded R-module of finite length. Recall that we are denoting by [L] its Hilbert function, that we view as a column, where the row n records the value \(\dim _k(L_n)\). Let a (respectively, b) be the smallest (respectively, largest) \(n \in \mathbb {Z}\) such that \(L_n \ne 0\). By Lemma 4.3 we can write \([L] = \sum _{n=0}^{b-a} r_n[R/\mathfrak {m}^{n+1}(-a)]\), for some \(r_n \in \mathbb {Q}_{\geqslant 0}\). We now turn the proof of Lemma 4.3 into an explicit algorithm.

Algorithm 5.1

Let \(H = (h_n)\) be a \(\mathbb {Z}\times 1\) matrix with non-negative rational entries. Assume that H satisfies the following conditions, that we temporarily denote with \((*_a^b)\):

  1. (1)

    \(h_n=0\) if and only if \(n<a\) or \(n>b\)

  2. (2)

    \(h_n \leqslant \left\{ \begin{array}{ll} (n-a+1)h_a &{} \text{ if } h_{n-1}=(n-a)h_a \\ h_{n-1}h_a &{} \text{ if } h_{n-1} < (n-a)h_a\end{array} \right. \)

We describe an algorithm to write H as a linear combination with non-negative rational coefficients of \([R/\mathfrak {m}^{n+1}(-a)]\), with \(0 \leqslant n \leqslant b-a\).

We proceed as follows:

Step 1::

Let \(r_b = \displaystyle \frac{h_b}{b-a+1}\)

Step 2::

Let \(K=(k_n)_{n \in \mathbb {Z}}\) be the column that satisfies

$$\begin{aligned} \displaystyle k_n = \left\{ \begin{array}{ll} n-a+1 &{} \text{ if } a \leqslant n \leqslant b \\ 0 &{} \text{ otherwise }\end{array} \right. \end{aligned}$$

Observe that this is just \([R/\mathfrak {m}^{b-a+1}(-a)]\). We replace H by \(H'=H- r_bK\).

If \(H'=0\), we just write \(H = r_bK = r_b[R/\mathfrak {m}^{b-a+1}(-a)]\), and we STOP. If \(H' = (h'_n)_{n \in \mathbb {Z}}\) is not the zero column, we observe that \(h'_n = 0\) if and only if \(n<a\) or \(n>b'\), for some \(a \leqslant b' < b\). It takes a tedious but straightforward computation to show that \(H'\) still has non-negative entries, and it satisfies \((*_a^{b'})\). We now repeat Steps 1 and 2 with \(H'\), and continue until we STOP. The process clearly terminates, since at each iteration the number of non-zero entries of the table decreases by at least one.
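For convenience, here is a short Python transcription of this procedure (a sketch of ours; the function and variable names are not taken from the paper). Exact rational arithmetic is used so that the coefficients come out as fractions.

# A sketch of Algorithm 5.1: write a column H, given as a dictionary
# {degree: value} supported in degrees a..b, as a non-negative rational
# combination of the columns [R/m^{s-a+1}(-a)].
from fractions import Fraction

def decompose_column(H, a):
    """Return a list of pairs (r, s), meaning H = sum of r * [R/m^{s-a+1}(-a)],
    where [R/m^{s-a+1}(-a)] has value n-a+1 in degrees a <= n <= s and 0 elsewhere."""
    H = {n: Fraction(v) for n, v in H.items() if v}
    assert all(n >= a for n in H), "H must be supported in degrees >= a"
    pieces = []
    while H:
        b = max(H)                      # top non-zero degree b
        r = H[b] / (b - a + 1)          # Step 1: coefficient of [R/m^{b-a+1}(-a)]
        pieces.append((r, b))
        for n in range(a, b + 1):       # Step 2: subtract r * [R/m^{b-a+1}(-a)]
            H[n] = H.get(n, 0) - r * (n - a + 1)
        H = {n: v for n, v in H.items() if v}
    return pieces

For instance, running this on the column K of Example 5.15 below, with \(a=-5\), returns the coefficients 1/10, 11/90, 5/18 and 1/2 obtained there.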

Remark 5.2

The condition \((*_a^b)\) in Algorithm 5.1 is just a restatement of Macaulay’s Theorem, which characterizes the possible Hilbert functions of standard graded k-algebras, adapted to our setup. In particular, any cyclic R-module of finite length satisfies \((*_a^b)\) for some ab (see Proposition 5.5).

Notation 5.3

We call a \(\mathbb {Z}\times 1\) matrix H that satisfies the conditions \((*_a^b)\) of Algorithm 5.1 and that further satisfies \(h_a=1\) and \(h_n \in \mathbb {N}\) for all \(n \in \mathbb {Z}\) an admissible column generated in degree a. Note that we do not wish to keep track of b with this terminology. If a \(\mathbb {Z}\times 1\) matrix can be written as a sum of t admissible columns, the i-th generated in degree \(a_i\), we call it an admissible column, generated in degrees \(a_1,\ldots ,a_t\). Finally, given a \(\mathbb {Z}\times 1\) matrix H, and integers \(a_1,\ldots ,a_t\), we set \({\widetilde{H}}(a_1,\ldots ,a_t) = ({\widetilde{h}}_n)_{n \in \mathbb {Z}}\), where \({\widetilde{h}}_n = h_n - b_n\), and \(b_n\) is the cardinality of the set \(\{i \in \{1,\ldots ,t\} \mid a_i = n\}\). We call \({\widetilde{H}}\) the truncation of H with respect to the degrees \(a_1,\ldots ,a_t\).

Remark 5.4

Using this new terminology, it follows from Lemma 4.3 (or Algorithm 5.1) that every admissible column \(H = (h_n)_{n \in \mathbb {Z}}\), generated in degree a, and such that \(h_n = 0\) for \(n>b\), can be realized as a sum \(\sum _{n=0}^{b-a} r_n [R/\mathfrak {m}^{n+1}(-a)]\), with \(r_n \in \mathbb {Q}_{\geqslant 0}\) and \(\sum _{n=0}^{b-a}r_n = 1\).

Conversely, we observe the following:

Proposition 5.5

Let L be a graded R-module of finite length, with minimal homogeneous generators of degrees \(a_1,\ldots ,a_t\). Then its Hilbert function [L] is a finite sum of admissible columns generated in degrees \(a_1, \ldots , a_t\).

Proof

Let \(0 \rightarrow N \rightarrow F=R(-a_1) \oplus \cdots \oplus R(-a_t) \rightarrow L \rightarrow 0\) be a minimal free graded presentation of L. Choose any term order \(\tau \) on F, and consider the initial module \({\text {in}}_\tau (N) \subseteq F\). Then \(F/{\text {in}}_\tau (N)\) has the same Hilbert function as \(F/N \cong L\) [4, Theorem 15.26]. Furthermore, \({\text {in}}_\tau (N)\) consists of a direct sum of monomial ideals \(I_1(-a_1) \oplus I_2(-a_2) \oplus \cdots \oplus I_t(-a_t) \subseteq F\), so that

$$\begin{aligned} \displaystyle F/{\text {in}}_\tau (N) \cong (R/I_1)(-a_1) \oplus \cdots \oplus (R/I_t)(-a_t), \end{aligned}$$

By Macaulay’s Theorem, the Hilbert function of each \((R/I_j)(-a_j)\) is an admissible column generated in degree \(a_j\), and the proposition now follows. \(\square \)

We now present a series of technical lemmas regarding properties of admissible columns. These will be used in the proof of the algorithm for the decomposition. In what follows, given two columns \(K=(k_n)_{n\in \mathbb {Z}}\) and \(H=(h_n)_{n \in \mathbb {Z}}\), we will write \(K \leqslant H\) if \(k_n \leqslant h_n\) for all \(n \in \mathbb {Z}\).

Lemma 5.6

Let \(U=(u_n)_{n \in \mathbb {Z}}\) be an admissible column, generated in degree a, and with \(u_n = 0\) for \(n > b\). Let \(V=(v_n)_{n \in \mathbb {Z}}\) be any column with non-negative entries such that for some integer \(a' \geqslant a\) the following conditions hold:

  1. (1)

    \(v_n = 0\) for \(n < a'\) and \(n > b\),

  2. (2)

    for all \(a' \leqslant n \leqslant b\) we have \(v_n \leqslant n-a+1\) (This condition is automatic if \(V\leqslant L\), for some admissible column L generated in degree a.),

  3. (3)

    for all \(a' \leqslant n \leqslant b\) we have \(v_{n}>v_{n-1}\).

Then \(W = (w_n)_{n\in \mathbb {Z}}\), defined as \(w_n = \max \{0,u_n-v_n\}\), is an admissible column, and W is still generated in degree a if \(a'>a\). Moreover, the column \(Z=(z_n)_{n \in \mathbb {Z}}\) defined as \(z_n = \max \{0,v_n-u_n\}\), is either zero or it satisfies \(z_n > z_{n-1}\) for all \(a'' \leqslant n \leqslant b\), for some \(a'' \geqslant a'\).

Proof

For the first claim, the only values we need to check for \(w_n\) are those corresponding to n between a and b, since \(w_n=0\) otherwise. For \(a \leqslant n <a'\) we have \(w_n = u_n\), so \(w_n\) is admissible. For \(a'\leqslant n \leqslant b\), if \(w_n=0\) there is nothing to show. Otherwise, since \(v_n > v_{n-1}\) we have \(w_n = u_n - v_n \leqslant u_n-v_{n-1}-1\). Also, note that \(u_n \leqslant u_{n-1}+1\) always holds. Therefore \(w_n \leqslant u_{n-1}-v_{n-1} \leqslant w_{n-1}\), and thus it is admissible. If \(a'>a\), then \(w_a = u_a =1\), so that W is generated in degree a.

Now, consider the column Z. If \(Z \ne 0\), then let n be an integer, with \(a' \leqslant n \leqslant b\). If \(u_n = n-a+1\), then since \(v_n \leqslant n-a+1\) we must have \(z_n=0\). On the other hand, if \(u_{j} < j-a+1\) for some j, then \(u_{n+1} \leqslant u_n\) for all \(n \geqslant j\). If \(a''\) is the smallest such value of j, we then have \(z_{n+1} \geqslant v_n+1-u_n > z_n\) for all \(a'' \leqslant n \leqslant b\). \(\square \)

Definition 5.7

Given a \(\mathbb {Z}\times 1\) matrix \(T = (t_n)_{n \in \mathbb {Z}}\), we say that T is a monotone column if \(\Delta ^1_{T}(n) \geqslant 0\) for all \(n \in \mathbb {Z}\).

Lemma 5.8

Let \(H = (h_n)_{n \in \mathbb {Z}}\) be an admissible column generated in degrees \(a_1,\ldots ,a_t\). Assume that \(a_1 \leqslant a_2 \leqslant \cdots \leqslant a_t\). Let \(T = (t_n)_{n \in \mathbb {Z}}\) be a monotone column, and let \(P=T+H\). Then P can be written as \(U + \sum _{i=1}^t K_i\), where:

  • Each \(K_i\) is an admissible column, still generated in degree \(a_i\).

  • U is a monotone column, with \(U \leqslant T\).

  • \(K_t\) is the maximal admissible column generated in degree \(a_t\) satisfying \(K_t \leqslant P\).

Proof

We let \(K = (k_n)_{n \in \mathbb {Z}}\) be the largest admissible column generated in degree \(a_t\), satisfying \(K \leqslant P\). In other words, if \(P=(p_n)_{n \in \mathbb {Z}}\), we have \(k_n = \min \{p_n,n-a_t+1\}\) for all \(n \geqslant a_t\), and \(k_n=0\) otherwise.

Claim 5.9

If we let \(c=\min \{n \in \mathbb {Z}\mid n \geqslant a_t, k_n \leqslant k_{n-1}\}\), then \(k_n=p_n\) for all \(n \geqslant c\).

Proof of the Claim

Observe that \(1=k_{a_t} > k_{a_t-1} = 0\), therefore \(c > a_t\). Moreover, by maximality of K, if \(k_n > k_{n-1}\), we also have \(k_{n+1} > k_n\), as long as \(k_n+1 \leqslant p_{n+1}\). Therefore, since \(k_{c-1} > k_{c-2}\) but \(k_c \leqslant k_{c-1}\), we must have \(k_{c-1}+1>p_c\). In particular, by maximality we have \(k_c = p_c\). Now we recall that \(P=T+H_1 + \cdots + H_t\), where each \(H_i\) is admissible, generated in degree \(a_i\), and T is monotone. For \(i=1,\ldots ,t\), if we set \(H_i = (h_{i,n})_{n \in \mathbb {Z}}\), we then have \(p_n = t_n+\sum _i h_{i,n}\) for all \(n \in \mathbb {Z}\). Observe that, for all i, we have \(h_{i,c} \leqslant p_c=k_c \leqslant k_{c-1} \leqslant c-a_t \leqslant c-a_i < c-a_i+1\). In particular, for each \(H_i\) to be admissible, we must have \(h_{i,n+1} \leqslant h_{i,n}\) for all \(n \geqslant c\). The same type of inequality holds for T, just because it is a monotone column: \(t_{n+1}\leqslant t_n\) for all \(n \in \mathbb {Z}\) and, in particular, for \(n \geqslant c\). It follows that \(p_{n+1} \leqslant p_n\) for all \(n \geqslant c\), and by maximality of K we then have \(k_n = p_n\) for all \(n \geqslant c\). This proves the claim. \(\square \)

For c as in Claim 5.9, and all \(i=1,\ldots ,t\), define \(H'_{i} = (h'_{i,n})_{n \in \mathbb {Z}}\) as follows: \(h'_{i,n} = h_{i,n}\) for all \(n <c\), and \(h'_{i,n}=0\) for all \(n \geqslant c\). Observe that all the columns \(H_i'\) are still admissible, generated in degree \(a_i\). Similarly, we define \(T'=(t'_n)_{n \in \mathbb {Z}}\) as follows: \(t'_n=t_n\) for \(n <c\), and \(t'_n=0\) for \(n \geqslant c\). Observe that \(T'\) is still monotone, with \(T' \leqslant T\).

Now, we observe that \(K \geqslant H_t\), by maximality of K. We define \(Z_t=(z_{t,n})_{n\in \mathbb {Z}}\) as \(z_{t,n} = k_n-h_{t,n}\) for \(n <c\), and \(z_{t,n}=0\) for \(n \geqslant c\). By Claim 5.9, we have that \(k_n>k_{n-1}\) for all \(a_t \leqslant n < c\). Because of this inequality, and since K is admissible, we can apply Lemma 5.6 with \(U=H_t'\) and \(V=K\). We then obtain that either \(Z_t=0\), or \(z_{t,n} > z_{t,n-1}\) for all \(b_t \leqslant n < c\), for some \(b_t>a_t\), and \(z_{t,n} =0\) for \(n < b_t\). In case \(Z_t=0\), we then have that \(p_n = t_n+h_{1,n} + \cdots + h_{t-1,n} + k_n\) for all \(n <c\), and \(p_n = k_n\) for \(n \geqslant c\). Thus:

$$\begin{aligned} \displaystyle P=T'+H_1'+ \cdots + H_{t-1}' + K \end{aligned}$$

is the desired decomposition, setting \(U=T'\), \(K_i = H_i'\) for all \(i=1,\ldots ,t-1\), and \(K_t=K\). If \(Z_t \ne 0\), observe that \(z_{t,n}\) is either zero, or it satisfies \(z_{t,n} \leqslant k_n \leqslant n-a_t+1\). Moreover, since \(z_{t,n} > z_{t,n-1}\) for \(b_t \leqslant n < c\), we can apply Lemma 5.6 to \(U=H'_{t-1}\) and \(V=Z_t\). We then get that \(W_{t-1}=(w_{t-1,n})_{n\in \mathbb {Z}}\), defined as \(w_{t-1,n} = \max \{0,h'_{t-1,n}-z_{t,n}\}\), is admissible, generated in degree \(a_{t-1}\). Moreover, \(Z_{t-1}=(z_{t-1,n})_{n \in \mathbb {Z}}\), defined as \(z_{t-1,n} =\max \{0,z_{t,n}-h_{t-1,n}\}\), is either zero, or it satisfies \(z_{t-1,n}>z_{t-1,n-1}\) for \(b_{t-1} \leqslant n <c\), for some \(b_{t-1} \geqslant b_t\). In case \(Z_{t-1}=0\), we have

$$\begin{aligned} \displaystyle P = T'+H'_1 + \cdots + H'_{t-2} + W_{t-1} + K, \end{aligned}$$

using the fact that for \(n <c\) one has \(w_{t-1,n} + k_n = h_{t-1,n} + h_{t,n}\), and hence \(p_n=t_n+h_{1,n} + \cdots + h_{t-2,n} + w_{t-1,n} + k_n\), while for \(n \geqslant c\) one has \(p_n = k_n\). In this case, we can set \(U=T'\), \(K_i = H_i'\) for \(i=1,\ldots ,t-2\), \(K_{t-1}=W_{t-1}\), \(K_t=K\) and we have the desired decomposition. If \(Z_{t-1} \ne 0\), observe that \(z_{t-1,n}\) is either zero, or \(z_{t-1,n} \leqslant k_n \leqslant n-a_t+1\); moreover, \(z_{t-1,n} > z_{t-1,n-1}\) for all \(b_{t-1} \leqslant n < c\). We can apply again Lemma 5.6 to \(U=H'_{t-2}\) and \(V=Z_{t-1}\) to obtain a column \(W_{t-2}\) that is admissible, generated in degree \(a_{t-2}\), and a column \(Z_{t-2} = (z_{t-2,n})_{n \in \mathbb {Z}}\) defined as \(z_{t-2,n} = \max \{0,z_{t-1,n}-h_{t-2,n}\}\). As before, we have that \(Z_{t-2}\) is either zero, or it satisfies \(z_{t-2,n}>z_{t-2,n-1}\) for all \(b_{t-2} \leqslant n < c\), with \(b_{t-2} \geqslant b_{t-1}\). In the first case, similar to the case above, we now have

$$\begin{aligned} \displaystyle P=T'+H_1' + \cdots + H'_{t-3} + W_{t-2} + W_{t-1} + K, \end{aligned}$$

and we can set \(U=T'\), \(K_i = H_i'\) for \(i=1,\ldots ,t-3\), \(K_{i}=W_{i}\) for \(i=t-2,t-1\), and \(K_t=K\). Repeating in this way, we either eventually get \(Z_j=0\) for some j, in which case

$$\begin{aligned} \displaystyle P=T'+H_1'+ \cdots + H_{j-1}' + W_j + \cdots + W_{t-1} + K. \end{aligned}$$

We can then set \(U=T'\), \(K_i = H_i'\) for \(i=1,\ldots ,j-1\), \(K_{i}=W_{i}\) for \(i=j,\ldots ,t-1\), and \(K_t=K\). Otherwise, we have constructed admissible columns \(W_1,W_2,\ldots ,W_{t-1}\), generated in degrees \(a_1,\ldots ,a_{t-1}\), and we have a column \(Z_1 = (z_{1,n})_{n \in \mathbb {Z}}\) that satisfies \(z_{1,n} > z_{1,n-1}\) for \(b_1 \leqslant n < c\), and \(Z_1 \leqslant T'\) by construction, since we started with \(K \leqslant P\). We observe that \(U=T'-Z_1\) is still monotone since \(z_{1,n}>z_{1,n-1}\) for \(b_1 \leqslant n < c\), and \(t'_n=z_{1,n}=0\) for \(n\geqslant c\). Moreover, we have \(U \leqslant T' \leqslant T\). Choosing \(K_i = W_i\) for all \(i=1,\ldots ,t-1\) and \(K_t=K\), we finally have \(P=U+K_1 + \cdots + K_t\), as desired. \(\square \)

We would like to stress the fact that one should think of \(K_t\) in Lemma 5.8 as the “maximal” admissible column generated in the highest degree \(a_t\), that can be subtracted from \(P=T+H\).

We illustrate this construction with a concrete example.

Example 5.10

Let us represent an admissible column \(H = (h_n)\) generated in degree a in the following way: we place a filled star in row a, and \(h_n\)-many empty circles in row n, with \(n \ne a\). For example, the following drawing represents the admissible column \(A=(a_n)_{n \in \mathbb {Z}}\), generated in degree \(-2\), with \(a_{-1} = 2\), \(a_{0} = 3\), \(a_1 = 3\), \(a_2 = 2\), \(a_3=1\), and \(a_n = 0\) for \(n<-2\) or \(n>3\):

figure a

Moreover, we are going to represent a monotone column \(T = (t_n)_{n \in \mathbb {Z}}\) by placing \(t_n\) empty circles on line n. For example, the following drawing represents the monotone column that satisfies \(t_n=3\) for \(n \leqslant -1\), \(t_0=2\), \(t_n=1\) for \(n=1,2,3\), and \(t_n=0\) for \(n \geqslant 4\):

figure b

Consider the following three admissible columns, generated in degrees \(-2, -2\) and 0 respectively:

figure c

Taking their sum with the monotone column T defined above, we obtain

figure d

We can rewrite P, for instance, as the sum of

figure e
figure f

Observe that all columns \(K_1,K_2\) and \(K_3\) are still admissible, and they are still generated in the same degrees as the starting ones. Moreover, \(K=K_3\) is the maximal admissible column generated in degree 1 such that \(K \leqslant P\). Additionally, U is monotone, with \(U \leqslant T\).

Remark 5.11

As a consequence of Lemma 5.8, given any admissible column H generated in degrees \(a_1 \leqslant \cdots \leqslant a_t\), and any monotone column T, we can always construct an admissible column \(K_t\), generated in the largest degree \(a_t\), such that \(T+H-K_t\) can be written as \(U + K\), with K an admissible column generated in degrees \(a_1,\ldots ,a_{t-1}\), and U a monotone column with \(U \leqslant T\).

We observe that the same column can be admissible with respect to different degrees of generators. The following lemma allows us to extend the generating set, under certain assumptions.

Lemma 5.12

Let H be an admissible column generated in degrees \(a_1,\ldots ,a_t\). Let \(a\in \mathbb {Z}\), and assume that the truncation \({\widetilde{H}}(a_1,\ldots ,a_t) = ({\widetilde{h}}_n)_{n \in \mathbb {Z}}\) satisfies \({\widetilde{h}}_a >0\). Then H is an admissible column, generated in degrees \(a,a_1,\ldots ,a_t\).

Proof

Write \(H=H_1+\cdots + H_t\), where each \(H_i\) is an admissible column, generated in degree \(a_i\). Since we are assuming that \({\widetilde{H}}(a_1,\ldots ,a_t)_a > 0\), we must have \(\widetilde{H_i}(a_i)_a >0\) for some i. Say \(i=1\). We consider K to be the maximal admissible column, generated in degree a, that satisfies \(K \leqslant \widetilde{H_1}(a_1)\). We claim that \(W = H_1 - K\) is an admissible column, generated in degree \(a_1\). In fact, let \(W=(w_n)_{n \in \mathbb {Z}}\), \(H_1=(h_n)_{n \in \mathbb {Z}}\), and \(K=(k_n)_{n \in \mathbb {Z}}\). Since \(K \leqslant \widetilde{H_1}(a_1)\), and K is generated in degree a, we necessarily have \(a>a_1\). Moreover, we have \(w_n = h_n\) for all \(n < a\). In particular, \(w_n=0\) for \(n<a_1\) and \(w_{a_1}=1\). To show that W is admissible, we distinguish a few cases. For \(n < a\), \(w_n=h_n\), so satisfies the conditions to be admissible. For \(n \geqslant a\), first assume that \(h_{n-1}= n-a_1\), which is the maximal possible value for \(H_1\) in that degree. Since K is chosen to be maximal, we then must have \(k_{n-1} = n-a\); observe that \(k_{n-1}= n-a<n-a_1=h_{n-1}\). Moreover, we will have \(h_n \leqslant n-a_1+1\) because \(H_1\) is admissible, and \(k_n = \min \{h_n,n-a+1\}\), again by maximality. In particular, we have \(w_{n-1} = h_{n-1}-k_{n-1} = (n-a_1) - (n-a) = a-a_1\), and \(w_n = h_n-k_n \leqslant (n+1-a_1) - (n+1-a) = a-a_1 = w_{n-1}\). So W would be admissible in this case. On the other hand, if \(h_{n-1}<n-a\), by maximality we still have \(k_{n-1} = \min \{h_{n-1},n-a\}\). Thus \(w_{n-1} = h_{n-1} - \min \{h_{n-1},n-a\} = \max \{0,h_{n-1}-n+a\}\). We also have \(h_n \leqslant h_{n-1}\), because \(H_1\) is admissible, and \(k_n =\min \{h_{n-1},n-a+1\}\), by maximality. Therefore we get \(w_n \leqslant \max \{0,h_{n-1}-n+a-1\} \leqslant \max \{0,h_{n-1}-n+a\} = w_{n-1}\). Either way, W is admissible. This shows that \(H = W+K + H_2+\ldots +H_t\) is admissible, generated in degrees \(a,a_1,\ldots ,a_t.\) \(\square \)

We are now ready to describe the algorithm.

Algorithm 5.13

We start with the local cohomology table of a finitely generated graded R-module M, that is, we start with \([{\text {H}}^\bullet _\mathfrak {m}(M)] = (h^i(M)_j)\) for \(i=0,1,2\) and \(j \in \mathbb {Z}\).

We initialize \({\text {H}}= [{\text {H}}^\bullet _\mathfrak {m}(M)]\). The goal is to describe how to subtract from H positive rational combinations of elements from \(\Lambda _2\) (defined as in Theorem 4.6), to eventually get to the trivial table 0. At each step, we will redefine \({\text {H}}\) to be the table we obtain from subtracting such combinations. In the end, solving for \([{\text {H}}^\bullet _\mathfrak {m}(M)]\) will result in the desired decomposition of \([{\text {H}}^\bullet _\mathfrak {m}(M)]\). Throughout, we denote with \({\text {H}}^0, {\text {H}}^1\) and \({\text {H}}^2\) the first, second, and third column of \({\text {H}}\), respectively. Moreover, we denote by \(h^i_j\) the entry in row \(j \in \mathbb {Z}\) of the column \({\text {H}}^i\).

Step 1:

Replace \({\text {H}}\) by \({\text {H}}-\sum _{n \in \mathbb {Z}} h^0_n [{\text {H}}^\bullet _\mathfrak {m}(k)(-n)]\).

Step 2:

If the set \(\{ n \in \mathbb {Z}\mid \Delta ^2_{{\text {H}}^2}(n-2) \ne 0\}\) is empty, go to Step 4. Otherwise, let a be its maximum. If \(h^1_a = 0\), replace \({\text {H}}\) by \({\text {H}}-[{\text {H}}^\bullet _\mathfrak {m}(R)(-a)]\). If \(h^1_a >0\), proceed to Step 3.

Step 3:

Set \(K = (k_n)_{n \in \mathbb {Z}}\) by

$$\begin{aligned} \displaystyle k_n = \left\{ \begin{array}{ll} \min \{h^1_n,n-a+1\} &{} \text{ if } n\geqslant a \\ 0 &{} \text{ if } n<a \end{array} \right. \end{aligned}$$

Use Algorithm 5.1 to write \(K = \sum _n r_n[R/\mathfrak {m}^{n+1}(-a)]\) for some \(r_n \in \mathbb {Q}_{\geqslant 0}\). Replace \({\text {H}}\) by \({\text {H}}-\sum _nr_n [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^{n+1})(-a)]\) and return to Step 2.

Step 4:

If the set \(\{n \in \mathbb {Z}\mid \Delta ^1_{{\text {H}}^1}(n-1) \ne 0\}\) is empty, then FINISH. Otherwise, let b be its maximum. Replace \({\text {H}}\) by \({\text {H}}-[{\text {H}}^\bullet _\mathfrak {m}(k[x])(-b)]\) and repeat Step 4.

Proof

We prove that the Algorithm terminates with the trivial table \({\text {H}}= 0\), and thus produces the desired decomposition of \([{\text {H}}^\bullet _\mathfrak {m}(M)]\). Step 1 removes the first column, that is, the one corresponding to \({\text {H}}^0_\mathfrak {m}(M)\). Note that we are subtracting only a finite sum of tables of the form \([{\text {H}}^\bullet _\mathfrak {m}(k)(-n)]\), because of Proposition 4.2.

By collecting the values of a from Step 2 that correspond to \(h^1_a>0\), we obtain a sequence of integers \(a_1 \geqslant a_2 \geqslant \cdots \geqslant a_t\) that satisfies the following three conditions:

  1. (1)

    \({\text {H}}^2_\mathfrak {m}(M) \cong {\text {H}}^2_\mathfrak {m}(R)(-a_1) \oplus \cdots \oplus {\text {H}}^2_\mathfrak {m}(R)(-a_t) \oplus {\text {H}}^2_\mathfrak {m}(F)\), where F is a graded free module generated in the degrees a from Step 2 with \(h^1_a = 0\).

  2. (2)

    If \(0 \rightarrow T \rightarrow M/{\text {H}}^0_\mathfrak {m}(M) \rightarrow C \rightarrow 0\) is a short exact sequence as in the proof of Theorem 4.6, then there is a surjection \(R(-a_1) \oplus \cdots \oplus R(-a_t) \rightarrow {\text {H}}^1_\mathfrak {m}(C) \rightarrow 0\).

  3. (3)

    For all \(i=0,\ldots ,t-1\), if we let \(\widetilde{H^1}(a_1,\ldots ,a_i) = ({\widetilde{h}}_n)_{n \in \mathbb {Z}}\), we have \({\widetilde{h}}_{a_{i+1}} > 0\).

The first two claims follow from Remark 4.7. The third condition comes from the way the sequence \(a_1,\ldots ,a_t\) appears in Step 2.

Now, recall that in the proof of Theorem 4.6 it is shown that \([{\text {H}}^1_\mathfrak {m}(M)] = [{\text {H}}^1_\mathfrak {m}(T)] + [{\text {H}}^1_\mathfrak {m}(C)]\), where this decomposition comes from the condition (2) described above. Since \({\text {H}}^1_\mathfrak {m}(C)\) has finite length with generators of degrees contained in the set \(\{a_1,\ldots ,a_t\}\), by Proposition 5.5, its Hilbert function \([{\text {H}}^1_\mathfrak {m}(C)]\) is an admissible column, generated in degrees contained in the set \(\{a_1,\ldots ,a_t\}\). Because of condition (3) above, we may use Lemma 5.12 to extend the generating set and assume that \([{\text {H}}^1_\mathfrak {m}(C)]\) is an admissible column generated in all degrees \(a_1,\ldots ,a_t\). Moreover, \([{\text {H}}^1_\mathfrak {m}(T)]\) is monotone, by Proposition 4.2. Therefore \({\text {H}}^1\) is the sum of a monotone column and an admissible column generated in degrees \(a_1 \geqslant \cdots \geqslant a_t\).

At each iteration of Step 3 the constructed column K is, by definition, the maximal admissible column generated in the largest possible degree a and such that \(K \leqslant {\text {H}}^1\). This column is decomposed using Algorithm 5.1 as a non-negative rational linear combination of the columns \([R/\mathfrak {m}^{n+1}(-a)]\). Recall that the table \([{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^{n+1})(-a)]\) has second column equal to \([R/\mathfrak {m}^{n+1}(-a)]\). Moreover, we have \({\text {H}}^2_\mathfrak {m}(\mathfrak {m}^{n+1})(-a) \cong {\text {H}}^2_\mathfrak {m}(R)(-a)\) for all n. Since, as shown in the proof of Theorem 4.6, we have \(\sum _n r_n=1\), we conclude that \(\sum _n r_n [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^{n+1}(-a))]\) has:

  1. (1)

    First column equal to zero.

  2. (2)

    Second column equal to K. In particular, by Remark 5.11, the second column of the table \({\text {H}}- \sum _n r_n [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^{n+1})(-a)]\) is equal to some \(U + A\), where U is monotone with \(U \leqslant [{\text {H}}^1_\mathfrak {m}(T)]\), and A is still admissible, now generated in the remaining degrees \(a_i, \ldots ,a_{t}\).

  3. (3)

    Third column equal to the third column of \([{\text {H}}^\bullet _\mathfrak {m}(R)(-a)]\). In particular, by condition (3) above, the third column of \({\text {H}}-\sum _n r_n [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^{n+1})(-a)]\) is equal to \([{\text {H}}^2_\mathfrak {m}(G)]\), where G is a free module generated in the remaining degrees \(\leqslant a\).

Thus after repeating Step 2 and Step 3 as required, we eliminate the third column of \({\text {H}}\). Moreover, the remaining second column, \({\text {H}}^1\), is now monotone with \({\text {H}}^1 \leqslant [{\text {H}}^1_\mathfrak {m}(T)]\).

Step 4 removes the monotone column U that remains in \({\text {H}}^1\) by subtracting tables of the form \([{\text {H}}^\bullet _\mathfrak {m}(k[x](-j))]\). By Proposition 4.2, \(\{n \in \mathbb {Z}\mid \Delta ^1_{{\text {H}}^1_\mathfrak {m}(T)}(n) \ne 0\}\) is a finite set and \(\Delta ^1_{{\text {H}}^1_\mathfrak {m}(T)}(n)<\infty \) for all n. Since U is monotone and \(U \leqslant [{\text {H}}^1_\mathfrak {m}(T)]\) by Lemma 5.8, it follows that \(0 \leqslant \Delta ^1_U (n) < \infty \) for all n, and that it is zero for all but finitely many values of n. Note that \(\Delta ^1_{{\text {H}}^1_\mathfrak {m}(k[x](-j))}(n) = 1\) if \(n = -j\) and is 0 for all other values of n. Thus, each iteration of Step 4 decreases by 1 precisely one nonzero entry of \(\Delta ^1_U\), and the algorithm returns the zero table after finitely many steps. \(\square \)
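To illustrate the termination argument just given, here is a minimal Python sketch of Step 4 acting only on the finitely many nonzero entries of \(\Delta ^1_U\); the dictionary encoding and the function name are ours, and the sign and shift conventions for \(\Delta ^1\) are deliberately simplified, so this is an illustration rather than a transcription of the algorithm.

# Minimal sketch of Step 4 (illustration only): a monotone column U is recorded by its
# finitely many nonzero first differences, and one shifted copy of [H^1_m(k[x])] is
# subtracted at a time; each subtraction lowers exactly one entry of Delta^1_U by 1.
def peel_monotone_column(delta1_U):
    """delta1_U: dict {degree: positive integer}.  Returns the shifts used, one per
    subtracted table, in the order in which Step 4 would produce them."""
    remaining = dict(delta1_U)
    shifts = []
    while remaining:
        b = max(remaining)        # the largest degree with a nonzero difference
        shifts.append(b)
        remaining[b] -= 1
        if remaining[b] == 0:
            del remaining[b]
    return shifts

print(peel_monotone_column({3: 2, 0: 1}))   # prints [3, 3, 0]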

Remark 5.14

Let M be a finitely generated k[x]-module. If one runs Algorithm 5.13 with the table \([{\text {H}}^\bullet _\mathfrak {m}(M)]\), ignoring Step 2 and Step 3 (which require a third column in the matrix), then one gets a decomposition of \([{\text {H}}^\bullet _\mathfrak {m}(M)]\) in terms of the extremal points \(\{[{\text {H}}^\bullet _\mathfrak {m}(N)] \mid N \in \Lambda _1\}\) of the cone in dimension one, as described in Theorem 3.1.

Example 5.15

Consider the following \(R=k[x,y]\)-module:

$$\begin{aligned} M = \mathrm{coker} \begin{pmatrix} x^3 &{} x^2y^2 &{} x^4y^2\\ x^2y &{} x^3y + xy^3 &{} x^4y^2 + x^2y^4\\ x^3 + y^3 &{} x^4 &{} x^3y^3\\ x^3 &{} 2x^2y^2 &{} x^5y\\ y^3 &{} y^4 &{} x^6 \end{pmatrix} \end{aligned}$$

Using Macaulay 2 [12], one can check that M has transposed local cohomology table

From now on, since the column \([{\text {H}}^0_\mathfrak {m}(M)]\) consists of all zeros, we will disregard it.

The first meaningful step in the algorithm is Step 3: \(a = -5\) gives an admissible column \(K = (k_n)_{n \in \mathbb {Z}}\), generated in degree \(-5\), that we can write as

$$\begin{aligned} \begin{bmatrix} n &{} k_n\\ 4 &{} 1\\ 3 &{} 2 \\ 2 &{} 4 \\ 1 &{} 7 \\ 0 &{} 6 \\ -1 &{} 5\\ -2 &{} 4\\ -3 &{} 3\\ -4 &{} 2\\ -5 &{} 1 \end{bmatrix}&= \frac{1}{10} \begin{bmatrix} n &{} \\ 4 &{} 10\\ 3 &{} 9 \\ 2 &{} 8 \\ 1 &{} 7 \\ 0 &{} 6 \\ -1 &{} 5\\ -2 &{} 4\\ -3 &{} 3\\ -4 &{} 2\\ -5 &{} 1 \end{bmatrix} + \frac{11}{90} \begin{bmatrix} n &{} \\ 4 &{} 0\\ 3 &{} 9 \\ 2 &{} 8 \\ 1 &{} 7 \\ 0 &{} 6 \\ -1 &{} 5\\ -2 &{} 4\\ -3 &{} 3\\ -4 &{} 2\\ -5 &{} 1 \end{bmatrix} + \frac{5}{18} \begin{bmatrix} n &{} \\ 4 &{} 0\\ 3 &{} 0 \\ 2 &{} 8 \\ 1 &{} 7 \\ 0 &{} 6 \\ -1 &{} 5\\ -2 &{} 4\\ -3 &{} 3\\ -4 &{} 2\\ -5 &{} 1 \end{bmatrix} + \frac{1}{2} \begin{bmatrix} n &{} \\ 4 &{} 0\\ 3 &{} 0 \\ 2 &{} 0 \\ 1 &{} 7 \\ 0 &{} 6 \\ -1 &{} 5\\ -2 &{} 4\\ -3 &{} 3\\ -4 &{} 2\\ -5 &{} 1 \end{bmatrix} \\ \\&= \frac{1}{10} \ [{\text {H}}^1_\mathfrak {m}(\mathfrak {m}^{11}(5))] + \frac{11}{90} \ [{\text {H}}^1_\mathfrak {m}(\mathfrak {m}^{10}(5))] + \frac{5}{18} \ [{\text {H}}^1_\mathfrak {m}(\mathfrak {m}^9(5))] + \frac{1}{2} \ [{\text {H}}^1_\mathfrak {m}(\mathfrak {m}^8(5))]. \end{aligned}$$
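The rational coefficients can be double-checked with exact arithmetic. The following short Python snippet is only a verification aid (the columns are typed in from the display above); it confirms the displayed column identity and that the coefficients sum to 1, consistent with \(\sum _n r_n = 1\) in the proof above.

from fractions import Fraction as F

# Columns listed from degree 4 down to degree -5, exactly as in the display above.
K = [1, 2, 4, 7, 6, 5, 4, 3, 2, 1]
T = [[10, 9, 8, 7, 6, 5, 4, 3, 2, 1],
     [ 0, 9, 8, 7, 6, 5, 4, 3, 2, 1],
     [ 0, 0, 8, 7, 6, 5, 4, 3, 2, 1],
     [ 0, 0, 0, 7, 6, 5, 4, 3, 2, 1]]
r = [F(1, 10), F(11, 90), F(5, 18), F(1, 2)]

assert sum(r) == 1                                   # consistent with sum_n r_n = 1
assert all(sum(c * t for c, t in zip(r, column)) == k
           for column, k in zip(zip(*T), K))         # the displayed identity holds
print("decomposition verified")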

Subtracting \(\frac{1}{10} \ [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^{11}(5))] + \frac{11}{90} \ [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^{10}(5))] + \frac{5}{18} \ [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^9(5))] + \frac{1}{2} \ [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^8(5))]\) from \([{\text {H}}^\bullet _\mathfrak {m}(M)]\), we get

Next, for \(a = -6\) we construct an admissible column \(K = (k_n)_{n\in \mathbb {Z}}\), generated in degree \(-6\), as follows:

$$\begin{aligned} \begin{bmatrix} n &{} k_n\\ 0 &{} 4\\ -1 &{} 6\\ -2 &{} 5\\ -3 &{} 4\\ -4 &{} 3\\ -5 &{} 2 \\ -6 &{} 1 \\ \end{bmatrix}&= \frac{4}{7} \begin{bmatrix} n &{} \\ 0 &{} 7\\ -1 &{} 6\\ -2 &{} 5\\ -3 &{} 4\\ -4 &{} 3\\ -5 &{} 2 \\ -6 &{} 1 \\ \end{bmatrix} + \frac{3}{7} \begin{bmatrix} n &{} \\ 0 &{} 0\\ -1 &{} 6\\ -2 &{} 5\\ -3 &{} 4\\ -4 &{} 3\\ -5 &{} 2 \\ -6 &{} 1 \\ \end{bmatrix} \\ \\&= \frac{4}{7} \ [{\text {H}}^1_\mathfrak {m}(\mathfrak {m}^8(6))] + \frac{3}{7} [{\text {H}}^1_\mathfrak {m}(\mathfrak {m}^7(6))]. \end{aligned}$$

Subtracting \(\frac{4}{7} \ [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^8(6))] + \frac{3}{7} [{\text {H}}^\bullet _\mathfrak {m}(\mathfrak {m}^7(6))]\) leaves the table

6 Facets of the cone of local cohomology tables in dimension two

We adopt the following notation. In the space of \(3 \times {\mathbb {Z}}\)-matrices, with rows indexed by \(\{0,1,2\}\) and columns by \(\mathbb {Z}\), let \({\mathbb {M}}\) denote the subspace formed by the matrices with finitely many nonzero entries. We consider the cone \(C \subseteq {\mathbb {M}}\) generated by the matrices \(\Delta \Lambda _2 = \{E_{i,s}, \Gamma _s(n) \mid i \in \{0, 1, 2\}, s \in \mathbb {Z}, n \in \mathbb {Z}_{\geqslant 1} \}\), where \(E_{i,s}\) is the elementary matrix with entry \(e_{i,s} = 1\) and \(e_{j, t} = 0\) for \((j, t) \ne (i, s)\), and

By Proposition 4.2, we can transform a local cohomology table \(({\text {H}}^0_\mathfrak {m}(M)_i, {\text {H}}^1_\mathfrak {m}(M)_i, {\text {H}}^2_\mathfrak {m}(M)_i)\) into a point of \({\mathbb {M}}\) given by \((\Delta ^0 {\text {H}}^0_\mathfrak {m}(M)_i, \Delta ^1 {\text {H}}^1_\mathfrak {m}(M)_i, \Delta ^2 {\text {H}}^2_\mathfrak {m}(M)_i)\). This map is injective, and the extremal rays \(\Lambda _2\) from Theorem 4.6 map to \(\Delta \Lambda _2\) (hence the notation). Thus the cone C corresponds to the cone of local cohomology tables.
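As a small worked illustration of this transformation, take \(M = R = k[x,y]\); here we use the backward difference \(h \mapsto \bigl (h(\ell -1)-h(\ell )\bigr )_{\ell \in \mathbb {Z}}\) merely as a stand-in for the operators \(\Delta ^i\), whose precise convention is the one fixed in Proposition 4.2. Then \({\text {H}}^0_\mathfrak {m}(R) = {\text {H}}^1_\mathfrak {m}(R) = 0\), while \(\dim _k {\text {H}}^2_\mathfrak {m}(R)_\ell = \max (0, -\ell -1)\); one difference takes the value 1 in every degree \(\ell \leqslant -1\) and 0 elsewhere, and a second difference gives

$$\begin{aligned} \Delta ^2\bigl (\dim _k {\text {H}}^2_\mathfrak {m}(R)\bigr )(\ell ) = {\left\{ \begin{array}{ll} 1 &{}\quad \text {if } \ell = 0,\\ 0 &{}\quad \text {otherwise,} \end{array}\right. } \end{aligned}$$

so, with this convention, the table of R is sent to a matrix supported in a single entry of its third row, as required of a point of \({\mathbb {M}}\).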

The space \({\mathbb {M}}\) is naturally filtered by bounding the support of its elements:

$$\begin{aligned} {\mathbb {M}}_{[a,b]} := \{A = \{a_{i, n}\}\in {\mathbb {M}} \mid a_{i, n} = 0 \text { for all } i \in \{0,1, 2\} \text { whenever } n < a \text { or } n > b\}. \end{aligned}$$

Moreover, this filtration is compatible with the construction of C. Namely, \(C_{[a,b]}\), the cone spanned by the rays supported in \({\mathbb {M}}_{[a,b]}\), is the intersection \({\mathbb {M}}_{[a,b]} \cap C\).

Now we will define the functionals on \({\mathbb {M}}\) that will give us the facet equations.

Definition 6.1

Let \(A = \{a_{i,j}\} \in {\mathbb {M}}\). For \(s \in \mathbb {Z}\), we set \(\tau _s(A)= a_{1,s} + \sum _{i \leqslant s-1} a_{2,i}\), \(\mu _s (A) = a_{0, s}\), and \(\phi _s(A)= a_{2,s}\). Finally, for an integer \(n \geqslant 0\) and \(s \in \mathbb {Z}\), we set

$$\begin{aligned} \displaystyle \pi _{n,s}(A)= \sum _{i > s+n} a_{1,i} + (n+1)a_{1,s+n} + \sum _{i=0}^{n-1} (i+1)a_{2,s+i}. \end{aligned}$$

We let \(\mathcal {H}\) be the set of functionals on the space \({\mathbb {M}}\) defined by these equations.
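These functionals are easy to experiment with on a computer. The following Python transcription is provided only as an illustration; the encoding of an element of \({\mathbb {M}}\) as a dictionary of its finitely many nonzero entries is ours.

# Functionals of Definition 6.1.  An element A of M is encoded as a dictionary
# {(i, n): a_{i,n}} containing only its finitely many nonzero entries.

def mu(A, s):
    return A.get((0, s), 0)

def phi(A, s):
    return A.get((2, s), 0)

def tau(A, s):
    return A.get((1, s), 0) + sum(v for (i, n), v in A.items() if i == 2 and n <= s - 1)

def pi(A, n, s):
    return (sum(v for (i, j), v in A.items() if i == 1 and j > s + n)
            + (n + 1) * A.get((1, s + n), 0)
            + sum((i + 1) * A.get((2, s + i), 0) for i in range(n)))

Any functional in the lists below can then be evaluated on such a dictionary.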

We want to show that for all \(a< b\) the cone \(C_{[a,b]}\) is cut out by the hyperplanes defined by the functionals belonging to \({\mathcal {H}}\), thus proving that \({\mathcal {H}}\) gives the facet equations of \(C = \cup C_{[a, b]}\). By invariance under shifts, it is enough to consider \(C_{[0, d]}\). For \(d \geqslant 0\), consider the following list of functionals:

$$\begin{aligned} \displaystyle {\mathcal {H}}_{[0,d]}= \left\{ \begin{array}{ll} \mu _s &{} \text { for } 0 \leqslant s \leqslant d, \\ \tau _s &{} \text { for } 0 \leqslant s< d, \\ \phi _s &{} \text { for } 0 \leqslant s \leqslant d, \\ \pi _{0,s} &{} \text { for } 1 \leqslant s \leqslant d, \\ \pi _{n,s} &{} \text { for } 1 \leqslant n \leqslant d - 2, 1 \leqslant s < d - n \end{array}\right\} . \end{aligned}$$

The following theorem allows us to describe the facets of the cone \(C_{[0,d]}\), by identifying it with the cone defined by the list of functionals \(\mathcal {H}_{[0,d]}\).

Theorem 6.2

For \(d \geqslant 0\), let \(D_{[0,d]}\) be the cone defined by \(\{A \in {\mathbb {M}}_{[0,d]} \mid H(A) \geqslant 0\) for all \(H \in \mathcal {H}_{[0,d]}\}\). Then \(D_{[0,d]} = C_{[0, d]}\).

Proof

We have \(C_{[0, d]} \subseteq D_{[0, d]}\) by directly checking that every functional in \(\mathcal {H}_{[0,d]}\) takes non-negative values on the generators \(\Delta \Lambda _2 \cap {\mathbb {M}}_{[0,d]}\). The reverse inclusion is established by Algorithm 6.8 below. \(\square \)

Remark 6.3

We may identify \({\mathbb {M}}_{[0, d- 1]}\) with a subset of \({\mathbb {M}}_{[0,d]}\) given by \(\pi _{0, d} (A) = 0\), \(\phi _d (A) = 0\) and \(\mu _d (A) = 0\), or, simply, \(a_{0,d} = a_{1, d} = a_{2, d} = 0\). Via this identification we have \(D_{[0,d-1]} = D_{[0,d]} \cap {\mathbb {M}}_{[0, d-1]} \).

One inclusion is clear, since the functionals in \({\mathcal {H}}_{[0,d-1]}\) are among those in \({\mathcal {H}}_{[0,d]}\). For the reverse inclusion, observe that the functionals in \({\mathcal {H}}_{[0,d]}\) whose restrictions to \({\mathbb {M}}_{[0, d-1]}\) are nonzero but do not belong to \({\mathcal {H}}_{[0,d-1]}\) are positive linear combinations of the functionals in \({\mathcal {H}}_{[0,d - 1]}\). Namely, as functionals on \({\mathbb {M}}_{[0, d-1]}\) we have \(\tau _{d-1} = \pi _{0, d-1} + \phi _{0} + \cdots + \phi _{d-2}\) and, for \(n \geqslant 1\), \(\pi _{n, d - 1 - n} = (n + 1) \pi _{0, d-1} + n\phi _{d - 2} + \cdots + \phi _{d - 1 - n}\).
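For the reader's convenience, the last identity can be checked directly from Definition 6.1: on \({\mathbb {M}}_{[0,d-1]}\) the entries \(a_{1,d}\) and \(a_{2,d}\) vanish, so \(\pi _{0,d-1}(A) = a_{1,d-1}\) and

$$\begin{aligned} \pi _{n, d-1-n}(A)&= \sum _{i > d-1} a_{1,i} + (n+1)\,a_{1,d-1} + \sum _{i=0}^{n-1} (i+1)\,a_{2,d-1-n+i}\\&= (n+1)\,\pi _{0,d-1}(A) + \sum _{i=0}^{n-1} (i+1)\,\phi _{d-1-n+i}(A). \end{aligned}$$

The identity for \(\tau _{d-1}\) is obtained in the same way.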

Remark 6.4

It can be checked, by testing appropriate points, that the list of functionals \(\mathcal {H}_{[0,d]}\) minimally defines the cone \(D_{[0,d]}\): removing any of the functionals would define a strictly larger cone than \(D_{[0,d]}\). It then follows from Theorem 6.2 that \(C_{[0,d]}\) has the same number of extremal rays as facets. In fact, computations in Macaulay 2 suggest that the entire f-vector is symmetric. This may lead the reader to suspect that \(C_{[0,d]}\) is self-dual; however, the incidence matrix of \(C_{[0, 4]}\) cannot be turned into a symmetric matrix by reordering rays and facets: there is precisely one facet, \(\tau _3 (x) = 0\), which contains 14 extreme rays, but two extreme rays, \(E_{1, 3}\) and \(E_{1,4}\), which belong to 14 facets. It is still possible, although unlikely, that the entire cone C is self-dual.

6.1 Proofs

We start with lemmas describing relations between \(\Delta \Lambda _2\) and \({\mathcal {H}}\).

Definition 6.5

For \(A \in {\mathbb {M}}_{[0,d]}\), we define \( {\text {Supp}}(A) = \{H \in {\mathcal {H}}_{[0,d]} \mid H(A) \ne 0\}. \)

Lemma 6.6

For \(\pi _{n,s} \in {\mathcal {H}}_{[0,d]}\) and \(1 \leqslant k \leqslant d -1\) we have \(\pi _{n, s} (\Gamma _{d-1-k}(k)) = \max (0, s - d + k)\). In particular,

$$\begin{aligned} {\text {Supp}}(\Gamma _{d-1-k}(k)) = \{\phi _{d - 1 - k}\} \cup \{\pi _{0, s} \mid d + 1 - k \leqslant s \leqslant d\} \cup \{\pi _{n,s} \mid 1 \leqslant n \leqslant \max (0, k - 2),\ d + 1 - k \leqslant s \leqslant d-n-1 \}. \end{aligned}$$

Proof

It is straightforward to check the functionals \(\phi _s, \mu _s,\) and \(\tau _s\). Recall that

from which it is also clear that \(\pi _{0,s} (\Gamma _{d - 1 - k}(k)) = \max (0, s - d + k)\). Now, we consider \(\pi _{n,s}\) with \(n > 0\) starting with \(s \leqslant d - 1 - k\). By the formula for \(\pi _{n,s}\), we get (recall that \(s + n < d\)) that

$$\begin{aligned} \pi _{n,s} (\Gamma _{d - 1 - k}(k)) = (d - k - s) - (n+1) - \sum _{i = s+n+1}^{d - 1} 1 + k = 0. \end{aligned}$$

If \(d - 1 - k < s\), then the only contribution comes from \(\gamma _{1,s}\):

$$\begin{aligned} \pi _{n,s} (\Gamma _{d - 1 - k}(k)) = - (n+1) - \sum _{i = s+n+1}^{d - 1} 1 + k = s -d + k. \end{aligned}$$

\(\square \)

The following relations on our equations are essential for the algorithm.

Lemma 6.7

For all \(0 \leqslant i < d\) and \(k > 0\) with \(k+i<d\), we have

$$\begin{aligned} \phi _{i} + 2 \pi _{k, i + 1} = \pi _{k + 1, i} + \pi _{k - 1, i + 2} \text { and } \phi _i + 2 \pi _{0, i + 1} = \pi _{1, i} + \pi _{0, i + 2}. \end{aligned}$$

Proof

We first check the first equality. For a matrix \(A = \{a_{i,j}\}\), the left-hand side is

$$\begin{aligned} \displaystyle \phi _{i}(A) + 2\pi _{k,i+1}(A) = a_{2,i} + \sum _{j = k+i+2}^d 2a_{1,j} + 2(k+1)a_{1,k+i+1} + \sum _{j=0}^{k-1} 2(j+1) a_{2,j+i+1}. \end{aligned}$$

The right-hand side, on the other hand, is

$$\begin{aligned} \displaystyle \pi _{k + 1, i}(A) + \pi _{k - 1, i + 2} (A)&= \left[ \sum _{j=k+i+2}^d a_{1,j} + (k+2)a_{1, k+i+1} + \sum _{j=0}^{k} (j+1) a_{2,j+i}\right] \\&\quad + \left[ \sum _{j=k+i+2}^d a_{1,j} + ka_{1,k+i+1} + \sum _{j=0}^{k-2} (j+1) a_{2,j+i+2}\right] \\&= \left[ \sum _{j=k+i+2}^d a_{1,j} + (k+2)a_{1,k+i+1} + a_{2,i} + 2a_{2,i+1}\right. \\&\quad \left. + \sum _{j=2}^{k} (j+1) a_{2,j+i}\right] \\&\quad + \left[ \sum _{j=k+i+2}^d a_{1,j} + ka_{1,k+i+1} + \sum _{j=2}^{k} (j-1) a_{2,j+i}\right] \\&= \sum _{j = k+i+2}^d 2a_{1,j} + 2(k+1)a_{1,k+i+1} + a_{2,i} + 2a_{2,i+1} \\&\quad + \sum _{j=2}^{k} 2j a_{2,j+i}, \end{aligned}$$

and the two sides are then easily seen to agree. For the second relation, the left-hand side is

$$\begin{aligned} \displaystyle \phi _i(A)+2\pi _{0,i+1}(A) = a_{2,i} + \sum _{j = i+1}^d 2a_{1,j}, \end{aligned}$$

which coincides with the right-hand side:

$$\begin{aligned} \displaystyle \pi _{1,i}(A) + \pi _{0,i+2}(A) = \sum _{j = i+1}^d a_{1,j} + a_{1,i+1} + a_{2,i} + \sum _{j=i+2}^d a_{1,j}. \end{aligned}$$

\(\square \)
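Since both identities are linear in the entries of A, they can also be sanity-checked numerically. Here is a minimal Python check (an illustration only; phi and pi restate the transcription given after Definition 6.1).

import random

def phi(A, s):
    return A.get((2, s), 0)

def pi(A, n, s):
    return (sum(v for (i, j), v in A.items() if i == 1 and j > s + n)
            + (n + 1) * A.get((1, s + n), 0)
            + sum((i + 1) * A.get((2, s + i), 0) for i in range(n)))

d = 8
for _ in range(100):
    # random element of M_{[0,d]}; the identities are linear, so arbitrary entries suffice
    A = {(i, j): random.randint(-5, 5) for i in range(3) for j in range(d + 1)}
    for i in range(d):
        assert phi(A, i) + 2 * pi(A, 0, i + 1) == pi(A, 1, i) + pi(A, 0, i + 2)
        for k in range(1, d - i):
            assert phi(A, i) + 2 * pi(A, k, i + 1) == pi(A, k + 1, i) + pi(A, k - 1, i + 2)
print("both identities verified on random matrices")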

Algorithm 6.8

Let \(A = \{a_{i,j}\} \in {\mathbb {M}}_{[0,d]}\) be in the cone \(D_{[0,d]}\) defined, as described above, by the set of functionals \({\mathcal {H}}_{[0, d]}\). The strategy of the following algorithm is to reduce A to 0 by subtracting a finite positive linear combination of tables from \(\Delta \Lambda _2 \cap {\mathbb {M}}_{[0,d]}\). To do so, we will induct on \(d \geqslant 1\).

Step 0:

Replace A by \(A - a_{2, d}E_{2, d} - \sum _{i = 0}^d a_{0,i}E_{0, i}\). Set \(w = d\). Proceed to Step 1.

Step 1:

If \(w = 0\), then proceed to Step 3. Otherwise, replace A with \(A - a_{2, w - 1}E_{2, w - 1}\), set \(k = 1\), and proceed to Step 2.

Step 2:

If \(k = w\), then proceed to Step 3. If \(a_{1, w} = 0\), then set \(w = w - 1\) and return to Step 1. Otherwise, set

$$\begin{aligned} m = \min \{\phi _{w- 1 - k} (A),\ \pi _{n,s} (A)/(s-w+k) \mid w + 1 - k \leqslant s \leqslant w,\ 0 \leqslant n \leqslant \max (0, \min (k - 2, w-s - 1))\}. \end{aligned}$$

Replace A with \(A - m\Gamma _{w - 1 - k}(k)\). Set \(k = k + 1\). Repeat Step 2.

Step 3:

Replace A with \(A - \sum _{i = 0}^d a_{1, i} E_{1, i}\).
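As a small illustration of Step 2 above, the coefficient m can be computed directly from the functionals. In the sketch below (ours, not part of the algorithm's statement), phi and pi denote the Python transcriptions given after Definition 6.1 and A is encoded as a dictionary as before.

from fractions import Fraction

def step2_coefficient(A, w, k, phi, pi):
    """Coefficient m of Step 2: the minimum of phi_{w-1-k}(A) and of the values
    pi_{n,s}(A)/(s - w + k) over the range in the display above; phi and pi are the
    functionals of Definition 6.1."""
    values = [Fraction(phi(A, w - 1 - k))]
    for s in range(w + 1 - k, w + 1):
        for n in range(max(0, min(k - 2, w - s - 1)) + 1):
            values.append(Fraction(pi(A, n, s), s - w + k))
    return min(values)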

Proof

Both loops described in the algorithm are finite, so it terminates after finitely many steps. We need to show that \(A = 0\) at the end of the algorithm and that all of the coefficients that appear are non-negative. We will use induction on d. In the base case \(d = 1\), we note that the algorithm provides the decomposition

$$\begin{aligned} A&= a_{0, 0} E_{0, 0} + a_{0, 1}E_{0, 1} + a_{1, 0} E_{1, 0} + a_{1, 1} E_{1, 1} + a_{2, 0} E_{2, 0} + a_{2, 1} E_{2, 1}\\&= \mu _0 (A) E_{0, 0} + \mu _1(A) E_{0, 1} + \tau _0 (A) E_{1, 0} + \pi _{0,1}(A) E_{1, 1} + \phi _0(A) E_{2, 0} + \phi _1(A) E_{2, 1}. \end{aligned}$$

We will use this strategy in general by expressing the coefficients in terms of H(A) for \(H \in {\mathcal {H}}\) and showing that A remains in the cone defined by \({\mathcal {H}}_{[0, d]}\) throughout the algorithm.

We note that, for \(n = d -1\) or d, \(a_{2, n} = \phi _n (A) \geqslant 0\) and \(H (A - a_{2, n}E_{2, n}) = H(A) \geqslant 0\) for all \(\phi _n \ne H \in {\mathcal {H}}_{[0,d]}\), as one can easily check that \(\{\phi _n\} = {\text {Supp}}(E_{2, n})\). Similarly, for all n, \(a_{0, n} = \mu _n (A)\) and \(H(A - \mu _n (A)E_{0,n}) = H(A) \geqslant 0\) for all \(\mu _n \ne H \in {\mathcal {H}}_{[0,d]}\). Hence Steps 0 and 1 produce a table that is still inside \(D_{[0,d]}\). In Step 2, let us first concentrate on the case \(w = d\). In this case, Lemma 6.6 shows that the functionals used in the definition of m are exactly those in \({\text {Supp}}(\Gamma _{d - 1 - k} (k))\). It follows that \(H (A - m\Gamma _{d - 1 - k} (k)) = H(A) \geqslant 0\) for \(H \notin {\text {Supp}}(\Gamma _{d - 1 - k} (k) )\). Moreover, \(m \geqslant 0\) by its definition, and \(H (A - m\Gamma _{d - 1 - k} (k)) \geqslant 0\) for \(H \in {\text {Supp}}(\Gamma _{d - 1 - k} (k) )\) by the definition of m together with Lemma 6.6.

Now, we want to show that induction allows us to assume that \(w = d\). To do so, we observe that we may shrink the window [0, d] after finishing the loop in Step 2.

Claim 6.9

For \(w = d\), if repeating Step 2 does not result in \(a_{1, d} = 0\) (i.e., \(k = d\) is reached), then \(a_{2, 0} = \cdots = a_{2, d} = 0\).

Proof

Let A be the matrix at the beginning of Step 2 and \(A' = A - m \Gamma _{d - 1 - k} (k)\) the result of the step. By induction on k, we show that either \(a'_{1, d} = 0\) or \(a'_{2, d - k - 1} = \cdots = a'_{2, d} = 0\). At \(k = 1\), we either have \(m = \phi _{d - 2} (A)\) or \(m = \pi _{0, d} (A)\). In the second case, \(a'_{1, d} = \pi _{0, d} (A') = 0\) and the claim follows; in the first case, \(a'_{2, d - 2} = \phi _{d-2} (A') = 0\), while \(a'_{2, d-1}\) and \(a'_{2, d}\) already vanish after Steps 0 and 1.

By the induction hypothesis, we may assume that \(a_{2, d - k} = \cdots = a_{2, d} = 0\). If \(m = \phi _{d - 1 - k} (A)\), then \(a'_{2, d - k - 1} = \cdots = a'_{2, d} = 0\) and we are done. Otherwise, \(m = \pi _{n,s} (A)/\pi _{n,s} (\Gamma _{d - 1 - k} (k))\), so \(\pi _{n,s} (A') = 0\). If \(n > 0\), then \(d + 1 -k \leqslant s \leqslant d - n - 1\), so we may use Lemma 6.7 to show that \(\pi _{0, s + n} (A') = 0\). If \(n = 0\), then necessarily \(d + 1 - k \leqslant s \leqslant d\), so we may again use Lemma 6.7 and the fact that \(A' \in D_{[0,d]}\) to show that \(\pi _{0, d} (A') = 0\) as well.

\(\square \)

If \(a_{2, 0} = \cdots = a_{2, d} = 0\), then the functionals \(\tau _{i}\) and \(\pi _{0, d}\) show that \(a_{1, i} \geqslant 0\) for \(0 \leqslant i \leqslant d\). Hence, when we move to Step 3, we subtract a non-negative linear combination of the \(E_{1, i}\) and the resulting table is 0. Otherwise, when we leave Step 2, \(a_{0, d} = a_{1,d} = a_{2, d} = 0\), and we may now consider A as a table in \(D_{[0, d- 1]}\) by Remark 6.3. This concludes the induction step. \(\square \)