A new algorithm for computing class groups of Zariski surfaces

Let k be an algebraically closed field of characteristic p ≠ 0 and X_g ⊂ A_k^3 a normal surface defined by an equation of the form z^p = g(x, y). The two original algorithms for calculating the divisor class group of X_g contain key errors. This paper presents an algorithm that corrects and improves upon the earlier attempts.


Introduction
Let k be an algebraically closed field of characteristic p ≠ 0 and X_g ⊂ A_k^3 a normal surface defined by an equation of the form z^p = g(x, y) with g ∈ k[x, y]. Such varieties are known as Zariski surfaces, and their divisor class groups have been the focus of much investigation. Although class groups in general are often difficult to determine, for Zariski surfaces they are algorithmically obtainable. [1] and [4] present programmable algorithms for calculating them, but errors were recently discovered in each. The first algorithm depends on an incorrect lemma [1, p. 249]. Although it can be repaired, the resulting program is too slow to be worthwhile: it often takes several hours to finish a computation, even in cases of low degree and small characteristic. The second algorithm is more efficient than the first, but it too contains an error in a critical step [4, pp. 5-6, step 5]. This paper presents a revised version of the latter algorithm that corrects its flaws and provides several computational improvements. Unlike its predecessors, it does not require computing roots, which imposes programming limitations; it employs, for the most part, only standard matrix computations already built into most well-known mathematical programs. It also differs fundamentally from the algorithm recently introduced in [6] for calculating the divisor class group of a Zariski surface, which involves iteratively calculating a sequence of matrices of increasing size together with their orthogonal complements. The algorithm presented here is computationally simpler in the sense that it employs only elementary row reduction.

The isomorphism
Let k be an algebraically closed field of characteristic p ≠ 0, g ∈ k[x, y] a polynomial of degree n ≠ 0 such that g_x and g_y have no common factors in k[x, y], and X_g ⊂ A_k^3 the surface defined by the equation z^p = g. Then X_g is regular in codimension one. Let Cl X_g denote the divisor class group of X_g [3, p. 130]. See also [5, pp. 393-398].
Definition 1.3 For a field F and positive integers r and s, let F^{r×s} be the set of r × s matrices with entries in F. If M = (a_{ij}) ∈ F^{r×s} and q is an integer, let M^{(p^q)} = (a_{ij}^{p^q}). I_r will denote the identity matrix in F^{r×r} and O_{rs} the zero matrix in F^{r×s}; when the context makes plain the dimensions of the zero matrix, we will simply denote it by O. If M ∈ F^{r×s}, let row(M) denote the row space of M. For a matrix D, let R_D denote the reduced row-echelon form of D.

Definition 1.4 Let g ∈ k[x, y] be as above. Let V be the k-vector space of polynomials in k[x, y] of degree at most n − 2 (where n = deg(g)), and for each r = 0, . . ., p − 1 let W_r be the k-vector space of polynomials in k[x^p, y^p] of degree at most (r + 1)n − 2p. For r = 0, . . ., p − 1, let T_r : V → W_r be the linear transformation defined by T_r(f) = ∇(g^r f), and let M_{g,r} be the matrix of T_r with respect to the monomial bases. Thus M_{g,r} is an n_r(n_r − 1)/2 × n(n − 1)/2 matrix with coefficients in k, where n_r is the greatest integer less than or equal to (r + 1)n/p. In particular, M_{g,p−1} is a square matrix of dimension n(n − 1)/2.
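For readers who wish to experiment, the conventions of Definition 1.3 are easy to realize in code. The following Python sketch (ours, not part of the paper) implements the entrywise power M^{(p^q)} and the reduced row-echelon form R_D over the prime field F_p; the paper's field k is algebraically closed, but all matrix entries in the examples of Section 3 lie in the prime field, where arithmetic mod p suffices.

```python
# Helpers for Definition 1.3 over the prime field F_p (illustrative only).

def frobenius_power(M, p, q):
    """Entrywise power M^(p^q) of a matrix M, with entries reduced mod p.
    Over F_p itself this is the identity map (Fermat's little theorem);
    the twist matters over larger fields of characteristic p."""
    e = p ** q
    return [[pow(a, e, p) for a in row] for row in M]

def rref(M, p):
    """Reduced row-echelon form R_M of a matrix over F_p (p prime)."""
    M = [[a % p for a in row] for row in M]
    rows = len(M)
    cols = len(M[0]) if rows else 0
    pivot_row = 0
    for col in range(cols):
        pr = next((r for r in range(pivot_row, rows) if M[r][col]), None)
        if pr is None:
            continue  # no pivot in this column
        M[pivot_row], M[pr] = M[pr], M[pivot_row]
        inv = pow(M[pivot_row][col], p - 2, p)  # inverse mod p
        M[pivot_row] = [(x * inv) % p for x in M[pivot_row]]
        for r in range(rows):
            if r != pivot_row and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M
```

For instance, rref([[1, 2, 0], [2, 4, 0], [0, 0, 1]], 3) returns [[1, 2, 0], [0, 0, 1], [0, 0, 0]].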

Lemma 1.5 Let t = Σ_{i+j=0}^{n−2} t_{ij} x^i y^j ∈ V and let x_t denote the column vector of coefficients of t. Then the map t ↦ x_t maps L_g isomorphically onto the group of solutions of the system of matrix equations

A_g x = O,  B_g x = x^{(p)},

where A_g is the matrix whose rows are those of M_{g,0}, . . ., M_{g,p−2} stacked in order, B_g = M_{g,p−1}, and the M_{g,i} are as in (1.4) (Notation 1.6).

Proof The system A_g x = O, B_g x = x^{(p)} is obtained by comparing coefficients on both sides of the equations ∇(g^i t) = 0, for i = 0, 1, . . ., p − 2, and ∇(g^{p−1} t) = t^p in (1.1). Thus t is a solution of the differential equations if and only if x_t is a solution of the matrix equations. The map is also clearly additive.

By (1.4), Cl X_g is isomorphic to the group of solutions of the system A_g x = O, B_g x = x^{(p)}.

Linearized systems of exponent one
A linearized system of exponent one is a system of equations of the form

(2.1)  Ax = O,  Bx = Cx^{(p)},

where A ∈ k^{s×r} and B, C ∈ k^{t×r}, for some r, s, t ∈ N. The solutions to (2.1) form an additive p-group of exponent one (i.e. every non-identity element has order p).
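For small r and p, the solution set of (2.1) can be enumerated directly over the prime field, which makes the exponent-one claim easy to observe experimentally. A toy Python sketch (ours; over the algebraically closed field k there may be additional solutions outside the prime field, so this only probes the F_p-rational part):

```python
from itertools import product

def solutions_mod_p(A, B, C, p, r):
    """Enumerate x in (F_p)^r satisfying (2.1): Ax = O and Bx = Cx^(p),
    where x^(p) is the entrywise p-th power (the identity map on F_p;
    the twist is kept for fidelity to the definition)."""
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) % p for row in M]
    sols = []
    for x in product(range(p), repeat=r):
        xp = [pow(xi, p, p) for xi in x]
        if all(v == 0 for v in matvec(A, x)) and matvec(B, x) == matvec(C, xp):
            sols.append(x)
    return sols
```

With A = (1 1 0), B = (0 1 0), C = (0 0 1) and p = 2, the solutions are (0, 0, 0) and (1, 1, 1): an additive group of order p in which every non-identity element has order p.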

Proposition 2.2 If s + t = r and the rows of (C / A^{(p)}) (C stacked above A^{(p)}) are independent, then the solution set of (2.1) has finite order.

Proof Raising the equations Ax = O to the p-th power gives O = A^{(p)} x^{(p)}. Since s + t = r and the rows of (C / A^{(p)}) are independent, the system O = A^{(p)} x^{(p)}, Bx = Cx^{(p)} can be solved for x^{(p)}; that is, it is equivalent to a system of the form Dx = x^{(p)}, for some D ∈ k^{r×r}. Let R = k[x_1, . . ., x_r]/I, where I is the ideal generated by the entries of the column vector x^{(p)} − Dx. Then R is generated by the monomials x_1^{e_1} · · · x_r^{e_r} with each e_i < p. In particular, R is finite dimensional over k, hence is Artinian, hence has only finitely many maximal ideals. Therefore the system O = A^{(p)} x^{(p)}, Bx = Cx^{(p)}, and hence (2.1), has only finitely many solutions.
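The reduction in the proof is constructive: stacking Bx = Cx^{(p)} with O = A^{(p)} x^{(p)} gives (C / A^{(p)}) x^{(p)} = (B / O) x, so D = (C / A^{(p)})^{−1} (B / O). A Python sketch over F_p (the helper routines are ours and assume the stacked matrix is invertible, i.e. the hypotheses of Proposition 2.2):

```python
def mat_inv(M, p):
    """Inverse of a square matrix over F_p via Gauss-Jordan (assumes invertible)."""
    n = len(M)
    aug = [[M[i][j] % p for j in range(n)] + [int(i == j) for j in range(n)]
           for i in range(n)]
    for col in range(n):
        pr = next(r for r in range(col, n) if aug[r][col])
        aug[col], aug[pr] = aug[pr], aug[col]
        inv = pow(aug[col][col], p - 2, p)
        aug[col] = [(x * inv) % p for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(a - f * b) % p for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def mat_mul(M, N, p):
    """Matrix product over F_p."""
    return [[sum(a * b for a, b in zip(row, col)) % p for col in zip(*N)]
            for row in M]

def reduce_to_D(A, B, C, p):
    """Given system (2.1) with s + t = r and the rows of (C / A^(p))
    independent, return D with the system equivalent to Dx = x^(p),
    as in the proof of Proposition 2.2."""
    s = len(A)
    r = len(A[0])
    Ap = [[pow(a, p, p) for a in row] for row in A]             # A^(p)
    top = [row[:] for row in C] + Ap                            # (C / A^(p))
    rhs = [row[:] for row in B] + [[0] * r for _ in range(s)]   # (B / O)
    return mat_mul(mat_inv(top, p), rhs, p)
```

For example, with p = 3, A = (1 0), B = (0 1), C = (0 1), the routine returns D = [[0, 0], [0, 1]], encoding the equivalent system x_1^p = 0, x_2^p = x_2.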

Corollary 2.3 Let A_g and B_g be as in Notation 1.6. Then the solution set of the system A_g x = O, B_g x = x^{(p)} has order at most p^{n(n−1)/2}.

Proof By Proposition 2.2, the system B_g x = x^{(p)} has only finitely many solutions, and by Bezout's theorem the number of solutions is at most p^{n(n−1)/2}, since B_g x = x^{(p)} consists of n(n−1)/2 equations of degree p.

Proposition 2.4 If s + t < r, then the system in (2.1) has an infinite solution set.
Proof If A ≠ O, then one of the variables in (2.1) is a linear combination of the others, and the system can be reduced to one of the same form but with one fewer variable and at least one fewer equation. Thus, by induction, we may assume the system in (2.1) has the form Bx = Cx^{(p)}. If the rows of B or of C are dependent, then we can either eliminate an equation from the system or replace one with a linear homogeneous equation. So we may assume the rows of B and the rows of C are independent and 1 ≤ t < r. After adding a general choice of r − t − 1 equations of the form Σ_{i=0}^{r} α_i x_i = Σ_{i=0}^{r} β_i x_i^p to the system, we may also assume t = r − 1. If the system has only finitely many solutions, then for a general choice of linear homogeneous form h in the x_i we have: (i) the row vector corresponding to h is independent of the rows of B; (ii) the row vector corresponding to h^p is independent of the rows of C; (iii) the hyperplane h = 0 passes through none of the solution points of the system except the origin. Then by (i) and (ii), each solution of the system Bx = Cx^{(p)}, h = 0 has multiplicity one and the system has no intersections at infinity, which implies by Bezout's theorem that it has p^{r−1} distinct solutions, contradicting (iii).

Proposition 2.8 Let A′, B′, C′ be as in Definition 2.7. Then the solution set of the system A′x = O, B′x = C′x^{(p)} is identical to that of the system Ax = O, Bx = Cx^{(p)}.

Proof Each of the matrices obtained in Definition 2.7 is produced by performing elementary row operations on the system Ax = O, Bx = Cx^{(p)}, and such operations do not change the solution set.

Proof (of Proposition 2.10, stated below) Replacing A by A′ as defined in Definition 2.7, we may assume that the rows of A are independent. Then (B′ C′) and (B C) have the same number of rows if and only if the rows of A together with the rows of B are independent and the rows of C together with the rows of A^{(p)} are independent. Then by Corollary 2.5 and Proposition 2.6, the system Ax = O, Bx = Cx^{(p)} has exactly p^t distinct solutions. Since the solution set is a finite abelian group with every nonzero element having order p, the rest of the conclusion follows.

The algorithm

Proposition 3.1 Let A_0 = A_g, B_0 = B_g, C_0 = I_{n(n−1)/2}, and for each i ≥ 0 let A_{i+1}, B_{i+1}, C_{i+1} be obtained from A_i, B_i, C_i as in Definition 2.7. If (B_j C_j) and (B_{j+1} C_{j+1}) have the same number of rows, then (B_j C_j) and (B_{j+e} C_{j+e}) will have the same number of rows for all e ≥ 1.
Proof As in the proof of Proposition 2.10, after replacing A_j by A_j′, we may assume that the rows of A_j are independent. Then (B_j′ C_j′) and (B_j C_j) have the same number of rows if and only if the rows of A_j together with the rows of B_j are independent and the rows of C_j together with the rows of A_j^{(p)} are independent. Hence A_{j+1} as defined in Definition 2.7 will be R_{A_j} (see Definition 1.3), B_{j+1} will be row equivalent to R_{B_j}, and C_{j+1} will be R_{C_j}, where B_j′ and C_j′ are as described in Definition 2.7, and the row operations that create these matrices will produce no zero rows. From this it follows that (B_{j+e} C_{j+e}) has the same number of rows as (B_j C_j) for all e ≥ 1.

Example 3.9 Let k be an algebraically closed field of characteristic 3, g = x + y + x^2 + y^2 + x^2 y + x y^2 + x^4 + x y^3 + 2y^4, and X_g ⊂ A_k^3 the surface defined by z^p = g. We have A_g = (0 1 1 1 0 1), and, as in Algorithm II for calculating Cl X_g (3.7), we set A_0 = A_g, B_0 = B_g, C_0 = I_6. We then have

0 0 1 2 1 0
0 0 1 0 0 0
0 0 0 1 2 0
0 0 0 1 0 0
0 0 1 1 1 0
0 0 0 0 1

which implies Cl X_g has order p^2; i.e. Cl X_g ≅ Z_p ⊕ Z_p.
Example 3.10 Let k be an algebraically closed field of characteristic 3, g = x + y + x^2 + 2xy + 2y^2 + 2x y^2 + x^4 + 2x y^3 + 2y^4, and X_g ⊂ A_k^3 the surface defined by z^p = g. We have A_g = (0 2 0 2 2 1).

Remark 3.11 The algorithm presented above (Algorithm II for calculating Cl X_g, 3.7) determines Cl X_g up to isomorphism by calculating the order of the additive group of solutions of the system A_g x = O, B_g x = x^{(p)}. Obtaining a set of actual divisors that generate Cl X_g requires calculating the group of solutions of A_g x = O, B_g x = x^{(p)} itself. This can be done algorithmically, but we have not yet found an efficient way to do it; this is a current project.

Corollary 2.5 If the system in (2.1) has only finitely many solutions and the rows of A together with the rows of B are independent, then s + t = r.

Proof By Proposition 2.4, r ≤ s + t = rank (A / B) ≤ r, where (A / B) denotes A stacked above B.

Proposition 2.6 If s + t = r, the rows of A together with the rows of B are independent, and the rows of C together with the rows of A^{(p)} are independent, then the solution set of (2.1) has order p^t.

Proof If x_0 is a solution to (2.1), then the system remains fixed under the change of coordinates x ↦ x − x_0. Hence all solutions have the same multiplicity, which is one since det (A / B) ≠ 0. Also, (2.1) has no intersections at infinity, since det (C / A^{(p)}) ≠ 0. By Bezout's theorem, (2.1) has p^t distinct solutions.

Definition 2.7 Let A ∈ k^{s×r}, B, C ∈ k^{t×r}, and let M = (A O / B C) denote the block matrix with first row of blocks (A O) and second row of blocks (B C). Form the block matrix H_1 = (A′ / B), where A′ is the reduced row-echelon form of A but with the zero rows deleted. Use row operations on H_1 to eliminate all nonzero entries below the pivot entries of A′, obtaining an equivalent block matrix H_2 = (A′ / B_1) with row(A′) ∩ row(B_1) = {0}. Put the matrix (B_1 C) in reduced row-echelon form to obtain an equivalent block matrix R_{(B_1 C)} (see Definition 1.3). Let B′ be the matrix consisting of the nonzero rows of R_{B_1}. Then the rows of B′ are independent, and R_{(B_1 C)} = (B′ C_1 / O D) for matrices C_1 and D. Let H_3 = (D / A^{(p)}).
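The first stage of Definition 2.7 (forming A′ and clearing its pivot columns out of B) is straightforward to program. A Python sketch over F_p (ours; the variable names follow the reconstruction above, and the later stages involving H_3, H_4, H_5 are omitted):

```python
def rref_nonzero(M, p):
    """RREF over F_p with zero rows deleted; also returns the pivot columns."""
    M = [[a % p for a in row] for row in M]
    rows = len(M)
    cols = len(M[0]) if rows else 0
    pivots = []
    top = 0
    for col in range(cols):
        pr = next((r for r in range(top, rows) if M[r][col]), None)
        if pr is None:
            continue
        M[top], M[pr] = M[pr], M[top]
        inv = pow(M[top][col], p - 2, p)
        M[top] = [(x * inv) % p for x in M[top]]
        for r in range(rows):
            if r != top and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[top])]
        pivots.append(col)
        top += 1
    return M[:top], pivots

def stage_one(A, B, p):
    """First stage of Definition 2.7 over F_p: return (A', B_1), where A' is
    the RREF of A with zero rows deleted and B_1 is B with the pivot columns
    of A' cleared, so that row(A') and row(B_1) meet only in 0."""
    A1, pivots = rref_nonzero(A, p)
    B1 = [[b % p for b in row] for row in B]
    for i, col in enumerate(pivots):
        for r in range(len(B1)):
            f = B1[r][col]
            if f:
                B1[r] = [(a - f * b) % p for a, b in zip(B1[r], A1[i])]
    return A1, B1
```

For example, over F_3 with A = ((1 1 0) / (2 2 0)) and B = (1 0 1), the routine returns A′ = (1 1 0) and B_1 = (0 2 1); note B_1 has a zero in the pivot column of A′, so the two row spaces intersect trivially.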

Proposition 2.10 Let A ∈ k^{s×r}, B, C ∈ k^{t×r}, and let M be as in Definition 2.7. Suppose that the matrices (B′ C′) and (B C) have the same number of rows and that the system Ax = O, Bx = Cx^{(p)} has only finitely many distinct solutions. Then the solution set of the system Ax = O, Bx = Cx^{(p)} is a p-group of type (p, . . ., p) of order p^t.

Continuing Example 3.10 as in Algorithm II for calculating Cl X_g (3.7), we set A_0 = A_g, B_0 = B_g, C_0 = I_6. We then have A_4 = A_3 and (B_4 C_4) = (B_3 C_3), which implies Cl X_g has order p; i.e. Cl X_g ≅ Z_p.
Form the block matrix H_4 = (C_1 / H_3′), where H_3′ consists of the nonzero rows of R_{H_3}. Use row operations on H_4 to eliminate all nonzero entries above the pivot entries of H_3′, obtaining a block matrix H_5 = (C′ / H_3′) with row(H_3′) ∩ row(C′) = {0}. Use row operations on (B′ C′) to obtain an equivalent block matrix (B′ R_{C′}); note that the rows of B′ are independent, and (B′ R_{C′}) can be written in block form.

Remark The number of rows of (B′ C′) is clearly less than or equal to the number of rows of (B C) (cf. Corollary 2.5).

Algorithm II for calculating Cl X_g 3.7 Let A_0 = A_g, B_0 = B_g, and C_0 = I_{n(n−1)/2}, the identity matrix of dimension n(n−1)/2. Then for each i = 0, 1, 2, . . ., calculate A_{i+1}, B_{i+1}, C_{i+1} from A_i, B_i, C_i as in Definition 2.7 until the number of rows of (B_i C_i) stabilizes. Then Cl X_g is a p-group of type (p, . . ., p) of order p^m, where m is the stabilization number.

All of the steps in Algorithm II for calculating Cl X_g (3.7) involve easily programmable steps (e.g. computing M^{(p)} for a matrix M), apply simple procedures already built into computational software programs (e.g. putting a matrix in reduced row-echelon form), or can be readily adapted to built-in programs.
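The outer loop of Algorithm II (3.7) can be sketched as follows. This Python fragment is ours: `step` is a placeholder for a full implementation of Definition 2.7 returning the triple (A_{i+1}, B_{i+1}, C_{i+1}), and the loop simply watches the row count of B_i until it stabilizes.

```python
def identity(n):
    """The identity matrix I_n (entries as ints)."""
    return [[int(i == j) for j in range(n)] for i in range(n)]

def class_group_exponent(A_g, B_g, step, max_iter=100):
    """Run the loop of Algorithm II (3.7): starting from A_0 = A_g,
    B_0 = B_g, C_0 = I, iterate `step` (assumed to implement the
    reduction of Definition 2.7) until the number of rows of B_i
    stabilizes; return that number m, so that Cl X_g is of type
    (p, ..., p) and order p^m."""
    n = len(B_g)                      # B_g is square of dimension n(n-1)/2
    A, B, C = A_g, B_g, identity(n)   # C_0 = I
    prev = len(B)
    for _ in range(max_iter):
        A, B, C = step(A, B, C)
        m = len(B)
        if m == prev:
            return m                  # row count stabilized
        prev = m
    raise RuntimeError("row count did not stabilize")
```

In Example 3.10, for instance, the row counts stabilize between the third and fourth iterations with m = 1, giving Cl X_g ≅ Z_p.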