1 Introduction

Let \(B \in \mathbb {R}^{n \times n}\) be a non-singular matrix generating a full-dimensional Euclidean lattice \(\Lambda (B) = \{B z \mid z \in \mathbb {Z}^n\}\). Several problems in the algorithmic theory of lattices, such as the shortest or closest vector problem, or the covering radius problem, become very easy if the columns of B form an orthonormal basis. Surprisingly, however, little is known about deciding, for an arbitrary basis B, whether \(\Lambda (B)\) admits an orthonormal basis.

As this is equivalent to \(\Lambda (B)\) being a rotation of \(\mathbb {Z}^n\), we call this decision problem the rotated standard lattice problem (RSLP), which is the main concern of the work at hand. A related problem is the unimodular decomposition problem (UDP): For a given unimodular and positive definite matrix G, decide whether there exists a unimodular matrix U such that \(G = U^\intercal U\). Recall that an invertible integral matrix \(U \in \mathbb {Z}^{n \times n}\) is called unimodular if its inverse is also integral. The goal of this work is twofold. For one, we show that RSLP and UDP are equivalent, and that both problems belong to NP \(\cap \) co-NP. This provides two new problems in this class for which no polynomial-time algorithm is known. We will use a result of Elkies on characteristic vectors [1], which appear in analytic number theory. Characteristic vectors seem to be largely unknown in algorithmic lattice theory. The second contribution of this paper is thus to introduce Elkies’ result and characteristic vectors to a wider audience, linking analytic and algorithmic number theory.

We close this section by giving an overview of related work. The rest of the paper is structured as follows. Section 2 provides the necessary background and shows that the rotated standard lattice problem is equivalent to the unimodular decomposition problem. We also introduce characteristic vectors and show some easy properties. Section 3 discusses the theorem of Elkies and applies it, showing that RSLP is in NP \(\cap \) co-NP. Section 4 concludes with open questions.

1.1 Related work

In [2], Lenstra and Silverberg show that RSLP can be decided in polynomial time, provided that additional information on the automorphism group of the lattice is part of the input. However, they do not discuss the complexity of the general problem without this additional information. When the input lattice is a construction-A lattice, Chandrasekaran, Gandikota and Grigorescu show that the existence of an orthogonal basis can be decided in polynomial time [3]; if one exists, their algorithm also finds it.

RSLP can be viewed as a special case of the lattice isomorphism problem (LIP), which, given two lattices \(\Lambda _1,\Lambda _2\), asks whether there is an isomorphism \(\varphi : \Lambda _1 \rightarrow \Lambda _2\) between the two lattices that preserves the Euclidean structure (\(\left\langle x,y \right\rangle = \left\langle \varphi (x), \varphi (y) \right\rangle \), where \(\left\langle x,y \right\rangle = \sum _{i=1}^n x_i y_i\) for \(x,y \in \mathbb {R}^n\)). The lattice isomorphism problem was introduced by Plesken and Souvignier [4], who solved it in small dimensions for specific lattices of interest. In [5], Dutour Sikirić, Schürmann and Vallentin show that this problem is at least as hard as the more famous graph isomorphism problem. The best algorithm for the lattice isomorphism problem the author is aware of is due to Haviv and Regev and has a running time of \(n^{\mathcal {O} (n)}\) [6]. They solve the problem by computing all orthonormal linear transformations (i.e. all isomorphisms) between the two given lattices \(\Lambda _1\), \(\Lambda _2\). On the complexity side, they show that the problem is in the complexity class SZK (statistical zero knowledge), which already suggests that it is not NP-hard.

Quite recently, Blanks and Miller [7] provided empirical evidence that RSLP should not be too hard. Essentially, they randomly generate a basis of \(\mathbb {Z}^n\) and apply known basis-reduction techniques to the sampled basis. In most cases (but not always), the algorithm ends up with an orthonormal basis.

2 Preliminaries

We give a brief introduction to lattices and fix the notation. For a more thorough introduction we refer to [8].

For linearly independent vectors \(b_1,\dots ,b_n \in \mathbb {R}^m\), the lattice \(\Lambda \subseteq \mathbb {R}^m\) generated by \(b_1,\dots ,b_n\) is the set

$$\begin{aligned} \Lambda = \left\{ \sum _{i=1}^n \alpha _i b_i \mid \forall i \in \{1,\dots ,n\}: \, \alpha _i \in \mathbb {Z}\right\} . \end{aligned}$$

The matrix \(B = (b_1,\dots ,b_n)\) is called a basis of \(\Lambda \). It is known that two bases \(B_1,B_2\) generate the same lattice if and only if there exists a unimodular matrix \(U \in \mathbb {Z}^{n \times n}\) such that \(B_1 = B_2 U\). Two lattices \(\Lambda _1, \Lambda _2\) are isomorphic, in symbols \(\Lambda _1 \cong \Lambda _2\), if there exists an orthonormal matrix \(Q \in \mathbb {R}^{m \times m}\) such that \(\Lambda _1 = Q \Lambda _2\). In this case, it follows that \(B_2\) is a basis of \(\Lambda _2\) if and only if \(B_1 \mathrel {\mathop :}=QB_2\) is a basis of \(\Lambda _1\). We are interested in the following problem, which is a special case of the lattice isomorphism problem.

Rotated Standard Lattice Problem (RSLP)
Given a basis \(B \in \mathbb {Q}^{m \times n}\) of a lattice \(\Lambda (B)\), decide whether \(\Lambda (B) \cong \mathbb {Z}^n\).

To keep the notation simple, we will assume \(m=n\) unless stated otherwise. Although an isomorphism refers to an orthonormal matrix Q, whereas a rotation additionally requires \(\det (Q) = 1\), the two notions coincide in our setting: a matrix B generates \(\mathbb {Z}^n\) if and only if B is unimodular. Given an isomorphic basis QB, we can multiply the first column of Q and the first row of B by \(-1\) without changing the product QB. This flips the sign of \(\det (Q)\), while B remains unimodular.

Though a lattice is usually specified by a basis matrix B, we will see that another representation is preferable for our problem. The Gram matrix G of a basis B is defined as \(G \mathrel {\mathop :}=B^{\intercal } B\), i.e. \(G_{i,j} = \left\langle b_i,b_j \right\rangle \). An advantage of the Gram matrix is that it “forgets” the embedding of a lattice \(\Lambda \) into Euclidean space and carries only the information of the isomorphism class of \(\Lambda \).
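For illustration (a minimal numerical sketch with hypothetical values, not taken from the paper): rotating a basis changes B but leaves the Gram matrix unchanged.

```python
import numpy as np

B = np.array([[1.0, 0.5],
              [0.0, 1.0]])          # some basis of a planar lattice
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
Q = np.array([[c, -s],
              [s,  c]])             # a rotation, Q^T Q = I

# The Gram matrix is invariant under the isometry Q (up to float error).
assert np.allclose(B.T @ B, (Q @ B).T @ (Q @ B))
```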

Lemma 1

Two bases \(B_1,B_2 \in \mathbb {R}^{n \times n}\) generate isomorphic lattices \(\Lambda (B_1)\), \(\Lambda (B_2) \subseteq \mathbb {R}^n\) if and only if there exists a unimodular matrix \(U \in \mathbb {Z}^{n \times n}\) such that for the corresponding Gram matrices \(G_1,G_2\) the relation \(G_1 = U^\intercal G_2 U\) holds.

In particular, a lattice \(\Lambda (B_1)\) is isomorphic to \(\mathbb {Z}^n\) if and only if there exists a unimodular matrix \(U \in \mathbb {Z}^{n \times n}\) such that \(G_1 = U^\intercal U\).

Proof

If there is an isomorphism \(\Lambda (B_1) = Q \Lambda (B_2)\), then \(QB_2\) is a basis of \(\Lambda (B_1)\). This implies that there is a unimodular matrix \(U \in \mathbb {Z}^{n \times n}\) such that \(B_1 = QB_2 U\), and we obtain \(G_1 = B_1^\intercal B_1 = U^\intercal B_2^\intercal Q^\intercal Q B_2 U = U^\intercal G_2 U\).

On the other hand, if \(G_1 = U^\intercal G_2 U\), define \(Q \mathrel {\mathop :}=B_1 U^{-1} B_2^{-1}\), and verify

$$\begin{aligned} Q^\intercal Q = B_2^{-\intercal } ( U^{-\intercal } B_1^\intercal B_1 U^{-1}) B_2^{-1} = B_2^{-\intercal } G_2 B_2^{-1} = I. \end{aligned}$$

Hence we find \(Q \Lambda (B_2) = \Lambda (QB_2) = \Lambda (B_1 U^{-1}) = \Lambda (B_1)\), since \(U^{-1}\) is again unimodular.

The second part follows by observing that the identity matrix \(B_2 \mathrel {\mathop :}=I\) is a basis of \(\mathbb {Z}^n\) and hence \(G_2 = I\). \(\square \)

Clearly, a Gram matrix is always symmetric and positive definite. This yields a reduction from RSLP to the following problem.

Unimodular Decomposition Problem (UDP)
Given a symmetric, positive definite, and unimodular matrix \(G \in \mathbb {Z}^{n \times n}\), decide whether there exists a unimodular matrix \(U \in \mathbb {Z}^{n \times n}\) such that \(G = U^\intercal U\).

As it turns out, since we allow a lattice to be embedded in a higher-dimensional space, the two problems are even equivalent. The reverse direction, however, requires some more effort: we need to find a rational lattice basis B such that \(B^\intercal B = G\) for a given matrix G.

Theorem 2

Let \(G \in \mathbb {Q}^{n \times n}\) be a symmetric, positive definite matrix, and s its encoding size, i.e. the number of bits needed to store G. Then there exists a matrix \(B \in \mathbb {Q}^{4n \times n}\) such that \(B^\intercal B = G\). Moreover, we can compute such a matrix in expected time polynomial in s, and we can compute a matrix \(B \in \mathbb {Q}^{f_n(s) \times n}\) with \(B^\intercal B = G\) in deterministic polynomial time, where \(f_n(s) \in \mathcal {O} (n \log (ns))\).

Proof

Let us first give the idea of the proof. We adapt the (generally real-valued) Cholesky decomposition by showing that there are a diagonal matrix \(D \in \mathbb {Z}^{n \times n}\) and a rational lower triangular matrix T such that \(G = T D T^{\intercal }\). Lagrange’s four-square theorem (see e.g. [9], Chap. 20.5) states that every non-negative integer can be written as the sum of four squares. Hence for every diagonal entry \(d_{k}\) of D there exists a vector \(v_k \in \mathbb {Z}^{4}\) such that \(d_{k} = v_k^\intercal v_k = \sum _{i=1}^4 (v_k)_i^2\). Setting \(F \in \mathbb {Z}^{4n \times n}\) to be the block-diagonal matrix containing the vectors \(v_k\) on its diagonal, we obtain \( G = T DT^{\intercal } = TF^\intercal F T^{\intercal }, \) and can set \(B \mathrel {\mathop :}=F T^{\intercal }\). For the algorithmic approach we use the algorithm of [10], which finds the four squares for a diagonal entry \(d_k\) in expected time \(\mathcal {O}\Big (\tfrac{\log (d_{k})^2}{\log (\log (d_{k}))}\Big )\). If we want to find a decomposition into squares deterministically, we have to increase the number of squares from 4 to \(\mathcal {O} (\log (\log (d_{k})))\).

To show the decomposition \(G = T D T^{\intercal }\), we revisit the Gaussian algorithm as outlined in [11, Chap. 4.3].

After the k-th step of the Gaussian elimination, the modified matrix has the shape

$$\begin{aligned} T^\prime G = \begin{pmatrix} R &{}\quad M \\ 0 &{}\quad S \end{pmatrix}, \quad R \in \mathbb {Q}^{k \times k}, M \in \mathbb {Q}^{k \times n-k}, S \in \mathbb {Q}^{n-k \times n-k}, \end{aligned}$$

where \(T^\prime \) stores the elementary row operations performed by the algorithm and R is upper triangular.

Assume the algorithm did not swap any rows in the first k iterations, which is certainly true for \(k=0\). Hence \(T^\prime \in \mathbb {Q}^{n \times n}\) is a lower triangular matrix whose last \(n-k\) columns coincide with those of the identity matrix. But then symmetry implies that

$$\begin{aligned} T^\prime G T^{\prime \intercal } = \begin{pmatrix} D^\prime &{}\quad 0 \\ 0 &{}\quad S \end{pmatrix}, \end{aligned}$$

where \(D^\prime \in \mathbb {Q}^{k \times k}\) is a diagonal matrix, and S is unchanged. If \(s_{1,1} = 0\), then \(e_{k+1}^\intercal T^\prime G T^{\prime \intercal } e_{k+1} = 0\), contradicting the positive definiteness of G, which is preserved under the congruence transformation. Thus the algorithm does not swap any rows in iteration \(k+1\) either, and by induction it never swaps rows when applied to a positive definite matrix.

After n iterations we obtain a lower triangular matrix \(T^\prime \) and the diagonal matrix \(D^{\prime } = T^\prime G T^{\prime \intercal } \in \mathbb {Q}^{n \times n}\). Let \({{\,\textrm{diag}\,}}(x_1,\dots ,x_n) \in \mathbb {R}^{n \times n}\) denote a diagonal matrix with \(x_1,\dots ,x_n\) on the diagonal. If the entries of \(D^\prime \) are given as \(d_{k} = \tfrac{p_k}{q_k}\) with \(p_k,q_k \in \mathbb {Z}\) we set \(T = T^{\prime - 1} {{\,\textrm{diag}\,}}\Big (\tfrac{1}{q_1}, \dots , \tfrac{1}{q_n}\Big )\) and \(D = {{\,\textrm{diag}\,}}(p_1q_1,\dots ,p_nq_n) \in \mathbb {Z}^{n \times n}\) to obtain the desired decomposition \(G = T D T^{\intercal }\).

Edmonds [12] observed that Gaussian elimination can be implemented in polynomial time. A more careful analysis, as in [11, Chap. 4.3], reveals that all occurring numbers (in particular the \(p_i\)’s and \(q_i\)’s) have encoding size bounded by \(\mathcal {O}(n^2 s)\), where s is the encoding size of G. Therefore the entries of D are bounded by \(2^{\mathcal {O} (n^4 s^2)}\).
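The decomposition can be carried out with exact rational arithmetic. The following Python sketch (ours, not part of the paper; it assumes a symmetric positive definite input) computes the intermediate factorization \(G = L \, {{\,\textrm{diag}\,}}(d) \, L^\intercal \) with L unit lower triangular, from which T and the integral D are obtained by clearing denominators as described above.

```python
from fractions import Fraction

def ldlt(G):
    """Symmetric Gaussian elimination over the rationals: returns (L, d)
    with G = L * diag(d) * L^T, where L is unit lower triangular.
    Since G is positive definite, no pivot vanishes and no row swaps
    are needed (as argued above)."""
    n = len(G)
    A = [[Fraction(x) for x in row] for row in G]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    d = []
    for k in range(n):
        d.append(A[k][k])
        for i in range(k + 1, n):
            L[i][k] = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= L[i][k] * A[k][j]
    return L, d
```

For instance, `ldlt([[4, 2], [2, 3]])` returns the pivots d = (4, 2) and the single elimination factor \(L_{2,1} = 1/2\).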

With the discussion in the beginning of the proof, this shows existence and the randomized algorithm.

To show how to obtain the deterministic result, let \(a \in \mathbb {Z}_{\ge 1}\) be any number. We define two sequences \((a_i)_i\) and \((n_i)_i\) as follows. Set \(a_0 = a\). Then recursively set \(n_i = \lfloor \sqrt{a_{i-1}} \rfloor \) and \(a_i = a_{i-1} - n_i^2 \ge 0\). It is clear that there is a finite r such that \(n_i = 0\) for \(i > r\), and we have \(\sum _{i=1}^r n_i^2 = a_0\). To bound the number r from above, observe that \(n_i^2 \le a_{i-1} < (n_i+1)^2\) implies \(a_i = a_{i-1} - n_i^2 < 2n_i + 1\), thus \(a_i \le 2\sqrt{a_{i-1}}\) due to integrality. Resolving the recursion, we obtain \(a_k \le 4 a_0^{(1/2)^k}\). Once \(a_k \le 8\), it is easy to see that we need at most four additional steps. Solving \(4 a_0^{(1/2)^k} \ge 8\) reveals that \(r \le \lceil \log (\log (a_0)) \rceil + 4\) numbers suffice such that \(\sum _{i=1}^r n_i^2 = a_0\). Clearly, the computations can be done in time polynomial in the input, and each number \(n_i\) can be found with binary search in \(\mathcal {O} (\log (a_{i-1}))\) steps, which is also polynomial in the input size.
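This greedy scheme is straightforward to implement; a minimal sketch (ours), where math.isqrt stands in for the binary search mentioned above:

```python
import math

def greedy_squares(a):
    """Greedily write the non-negative integer a as a sum of squares;
    by the estimate above, at most ceil(log log a) + 4 terms are produced."""
    parts = []
    while a > 0:
        n = math.isqrt(a)   # largest n with n*n <= a
        parts.append(n)
        a -= n * n
    return parts

# Example: 103 = 10^2 + 1^2 + 1^2 + 1^2
assert sum(p * p for p in greedy_squares(103)) == 103
```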

We construct such a sequence \(n_1(k), \dots , n_{r}(k)\) for each diagonal entry \(d_k\) of D and define \(v_k^\intercal = (n_1(k), \dots , n_{r}(k))\) with \(r \in \mathcal {O} (\log (ns))\), using the bound on \(\Vert D\Vert _\infty \) derived above. This leads to a block-diagonal matrix \(F = {{\,\textrm{diag}\,}}(v_1,\dots ,v_n)\) such that \(F^\intercal F = D\). In total we obtain \(G = T F^\intercal F T^{\intercal }\), and defining \(B \mathrel {\mathop :}=F T^{\intercal }\) finishes the deterministic result. \(\square \)
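For illustration, the whole construction of the proof can be assembled from the two sketches above (again ours, not the paper's reference implementation; rational_factor assumes the functions ldlt and greedy_squares defined earlier):

```python
from fractions import Fraction

def rational_factor(G):
    """Return the rows of a rational matrix B with B^T B = G, following
    the proof of Theorem 2; assumes ldlt and greedy_squares from above."""
    n = len(G)
    L, d = ldlt(G)                                  # G = L diag(d) L^T
    q = [x.denominator for x in d]
    D = [x.numerator * x.denominator for x in d]    # integral diagonal
    # T = L * diag(1/q_k) gives G = T diag(D) T^T
    T = [[L[i][k] / q[k] for k in range(n)] for i in range(n)]
    B = []
    for k in range(n):
        # each square m^2 in the decomposition of D_k contributes the
        # row m * (k-th column of T)^T, i.e. a row of F T^T
        for m in greedy_squares(D[k]):
            B.append([m * T[i][k] for i in range(n)])
    return B

# Sanity check on a hypothetical input:
G = [[4, 2], [2, 3]]
B = rational_factor(G)
assert all(sum(r[i] * r[j] for r in B) == G[i][j]
           for i in range(2) for j in range(2))
```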

Corollary 3

The problems UDP and RSLP are polynomial-time equivalent.

Proof

Let \(G \in \mathbb {Z}^{d \times d}\) be an instance of UDP. Apply Theorem 2 to obtain a matrix \(B \in \mathbb {Q}^{n \times d}\) such that \(B^\intercal B = G\), where \(n \in \mathcal {O}(d \log (ds))\) for the encoding size s. Now, \(\Lambda (B) \cong \mathbb {Z}^{d} \times \{0\}^{n - d}\) if and only if there exists an orthonormal matrix \(Q \in \mathbb {R}^{n \times n}\) and a unimodular matrix \(U \in \mathbb {Z}^{d \times d}\) such that \(QBU = \left( {\begin{array}{c}I\\ 0\end{array}}\right) \). This in turn is equivalent to \(U^\intercal G U = I\), thus \(G = U^{- \intercal } U^{-1}\) is a unimodular decomposition, \(U^{-1}\) being unimodular as well.

For the other direction, let B be an instance of RSLP and set \(G \mathrel {\mathop :}=B^\intercal B\). Now, \(\Lambda (B) \cong \mathbb {Z}^n\) if and only if there exists an orthonormal matrix Q and a unimodular matrix U such that \(QBU = I\), which is equivalent to \(U^\intercal G U = I\). \(\square \)

A certificate for UDP being in NP is given by the matrix U, whose entries are bounded in absolute value by \(\max \{ \sqrt{G_{ii}} \mid 1 \le i \le n\}\).

In the following, we will discuss a paper of Elkies [1] and apply his results to show that the problem is also in co-NP. Let us recall some terminology for a full-dimensional lattice \(\Lambda \subseteq {\mathbb {R}}^n\) first. The dual of \(\Lambda \), denoted by \(\Lambda ^\star \), is defined as \(\Lambda ^\star = \{y \in \mathbb {R}^n \mid \forall \, x \in \Lambda : \, y^\intercal x \in \mathbb {Z}\}\). If \(B \in \mathbb {R}^{n \times n}\) is a basis for a full-dimensional lattice, then \(B^{-\intercal }\) is a basis for the dual lattice and consequently, \(\det (\Lambda ) \det (\Lambda ^\star ) = 1\). (This also shows, for instance, that \((\Lambda ^\star )^\star = \Lambda \).) We call \(\Lambda \) self-dual if \(\Lambda = \Lambda ^\star \). Self-dual lattices are also called unimodular lattices for the following reason.

Lemma 4

(See e.g. [13]) A matrix \(B \in \mathbb {R}^{n \times n}\) generates a self-dual lattice \(\Lambda (B)\) if and only if the corresponding Gram matrix \(G = B^\intercal B\) is unimodular.

Proof

If B is a basis of a self-dual lattice \(\Lambda \), then \(B^{-\intercal }\) is a basis as well. Hence there exists a unimodular matrix G such that \(B^{-\intercal } G = B\), which is equivalent to \(G = B^\intercal B\).

Let \(G = B^\intercal B\) be unimodular and \(x = Bz_1\), \(y= Bz_2\) be any two lattice vectors (i.e. \(z_1,z_2 \in \mathbb {Z}^n\)). Since \(x^\intercal y = z_1^\intercal G z_2 \in \mathbb {Z}\) for all \(x,y \in \Lambda = \Lambda (B)\), we have \(\Lambda \subseteq \Lambda ^\star \). This implies \(\det (\Lambda ^\star ) \le \det (\Lambda )\) with equality if and only if the lattices are the same. On the other hand, \(\det (\Lambda ) = |\det (B) |= \sqrt{ \det (G) } = 1\), and since \(\det (\Lambda ^\star ) \det (\Lambda ) = 1\) for any primal-dual pair, equality follows. \(\square \)

Definition 1

Let \(\Lambda = \Lambda ^\star \subseteq \mathbb {R}^n\) be a self-dual lattice. A vector \(w \in \Lambda \) is a characteristic vector of the lattice \(\Lambda \) if

$$\begin{aligned} \forall \, v \in \Lambda : \quad \left\langle v,w \right\rangle \equiv _2 \left\langle v,v \right\rangle , \end{aligned}$$

where \(x \equiv _k y\) is shorthand for \(x \equiv y \!\mod k\).

Self-duality of \(\Lambda \) implies that \( \left\langle v,w \right\rangle \in \mathbb {Z}\) for all \( v,w \in \Lambda \), hence characteristic vectors are well-defined.

It is known that for dimensions \(n \le 7\) the lattice \(\mathbb {Z}^n\) is the unique self-dual lattice (up to isomorphism). In dimension 8 the lattice

$$\begin{aligned} E_8 = \left\{ z \in \mathbb {R}^8 \left| \, \sum _{i=1}^8 z_i \equiv _2 0, z \in \mathbb {Z}^8 \cup \left( \frac{1}{2} \textbf{1} + \mathbb {Z}^8 \right) \right. \right\} , \ \text { with } \ \textbf{1} \mathrel {\mathop :}=(1,\dots ,1)^\intercal , \end{aligned}$$

is self-dual, but not isomorphic to \(\mathbb {Z}^8\) (cf. [13]). The easiest way to see that \(E_8\) is not isomorphic to \(\mathbb {Z}^8\) is to verify that \(E_8\) does not have any vector of length 1, whereas \(\mathbb {Z}^8\) does.
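This can also be verified computationally. In the sketch below (our choice of basis, not taken from the paper), the columns \(2e_1,\; e_2-e_1, \dots , e_7-e_6,\; \tfrac{1}{2}\textbf{1}\) all lie in \(E_8\) and form a triangular matrix of determinant 1, hence a basis of \(E_8\). Since the resulting Gram matrix is integral with even diagonal, every nonzero vector of \(E_8\) has even squared length; in particular, \(E_8\) has no vector of length 1.

```python
from fractions import Fraction

# Basis of E8: columns 2e1, e2-e1, ..., e7-e6, (1/2)(1,...,1).
B = [[Fraction(0)] * 8 for _ in range(8)]
B[0][0] = Fraction(2)
for k in range(1, 7):
    B[k - 1][k] = Fraction(-1)   # column k is e_{k+1} - e_k
    B[k][k] = Fraction(1)
for i in range(8):
    B[i][7] = Fraction(1, 2)     # the glue vector (1/2)(1,...,1)

G = [[sum(B[i][r] * B[i][c] for i in range(8)) for c in range(8)]
     for r in range(8)]

# Integral Gram matrix with even diagonal: the lattice is even,
# hence contains no vector of squared length 1.
assert all(x.denominator == 1 for row in G for x in row)
assert all(G[r][r] % 2 == 0 for r in range(8))
```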

The following lemma lists some basic properties of characteristic vectors. We will only use parts one and five later on; the other parts are included for context. Part two, for example, addresses the naturally arising question of uniqueness of characteristic vectors.

In terms of quadratic forms, the computations carried out in the second part of the following lemma are already discussed in [14], which also implies the first part. The third part can be found e.g. in [15, Chap. V]. While we do not believe we are the first to observe parts four and five, we did not find any source that makes these statements explicit.

The vector z in part 4 can be regarded as a “coefficient vector” of a characteristic vector: if B is a lattice basis satisfying \(G = B^\intercal B\), then Bz is a characteristic vector of \(\Lambda (B)\). Such a vector will serve as the co-NP certificate for UDP.

Lemma 5

The following are true for every self-dual lattice \(\Lambda = \Lambda ^\star \subseteq \mathbb {R}^n\).

1. There exists a characteristic vector \(w \in \Lambda \).

2. The set of characteristic vectors is precisely a co-set \(w + 2\Lambda \), where \(w \in \Lambda \) is an arbitrary characteristic vector.

3. For every two characteristic vectors \(u,w \in \Lambda \), we have \(\Vert u\Vert ^2 \equiv _8 \Vert w\Vert ^2\).

4. If we are given a Gram matrix G of \(\Lambda \), we can compute in polynomial time a vector \(z \in \mathbb {Z}^n\) such that for all \(y \in \mathbb {Z}^n\), we have \(y^\intercal G z \equiv _2 y^\intercal G y\).

5. The shortest characteristic vectors of the lattice \(\mathbb {Z}^n\) are the vectors \(\{-1,1\}^n\).

Proof

Let \(B = (b_1,\dots ,b_n)\) be a basis of \(\Lambda \), and \(D = (d_1,\dots ,d_n) = B^{- \intercal }\) the corresponding dual basis, which also spans \(\Lambda \) by self-duality. We note that \(b_i^\intercal d_j = 0\) whenever \(i \ne j\), and \(b_i^\intercal d_i = 1\). Also, \(\Vert d_i \Vert ^2 \in \mathbb {Z}\) since \(\Lambda \) is self-dual.

1.

    Represented in the primal basis, define a vector \(w = \sum _{i=1}^n \Vert d_i\Vert ^2 b_i \in \Lambda \), and let \(v = \sum _{i=1}^n \alpha _i d_i \in \Lambda \) be a lattice vector, represented in the dual basis D. Using \(x^2 \equiv _2 x\) for \(x \in \mathbb {Z}\), we obtain

    $$\begin{aligned} \left\langle v,v \right\rangle&= \sum _{i=1}^n \sum _{j=1}^n \alpha _i \alpha _j d_i^\intercal d_j \equiv _2 \sum _{i=1}^n \alpha _i^2 \Vert d_i\Vert ^2 \\&\equiv _2 \sum _{i=1}^n \alpha _i \Vert d_i\Vert ^2 = \sum _{i=1}^n \sum _{j=1}^n \alpha _i \Vert d_j\Vert ^2 d_i^\intercal b_j = \left\langle v,w \right\rangle . \end{aligned}$$

    Thus, w is a characteristic vector and the first part is shown.

2.

Now let w be the characteristic vector from the previous part, and let \(w^\prime \) be any characteristic vector. Since for every \(y, v \in \Lambda \) we have \(\left\langle w^\prime + 2y, v \right\rangle = \left\langle w^\prime ,v \right\rangle + 2 \left\langle y,v \right\rangle \equiv _2 \left\langle w^\prime ,v \right\rangle \), the whole co-set \(w^\prime + 2\Lambda \) consists of characteristic vectors. If \(w^\prime \) has the representation \(w^\prime = \sum _{i=1}^n \gamma _i b_i \in \Lambda \), computing

    $$\begin{aligned} \Vert d_k\Vert ^2 = \left\langle d_k,d_k \right\rangle \equiv _2 \left\langle w^\prime ,d_k \right\rangle = \gamma _k \end{aligned}$$

    for every \(1 \le k \le n\) shows that \(w^\prime \in w + 2\Lambda \), finishing the proof of the second part.

3.

Let w be a characteristic vector. It suffices to show that \(\Vert w\Vert ^2 \equiv _8 \Vert w + 2b_k\Vert ^2\) for \(k=1,\dots ,n\); the claim then follows from the previous part by repeatedly adding or subtracting twice a basis vector. Since w is characteristic, there is an integer x satisfying \(w^\intercal b_k = b_k^\intercal b_k + 2x\). We compute

    $$\begin{aligned} \Vert w + 2 b_k\Vert ^2 = w^\intercal w + 4 w^\intercal b_k + 4 b_k^\intercal b_k = \Vert w\Vert ^2 + 8 (b_k^\intercal b_k + x). \end{aligned}$$

Since x is an integer, and \(b_k^\intercal b_k\) is integral by self-duality of \(\Lambda \), we are done.

4.

    Observe that the matrix \(G^{-1} = D^\intercal D\) for the dual basis D corresponding to B can be computed in polynomial time. We saw in the proof of the first part that \(w = \sum _{i=1}^n \Vert d_i\Vert ^2 b_i\) is a characteristic vector. Setting \(z_k = (G^{-1})_{kk}\) for \(k=1,\dots ,n\), we obtain \(w = Bz\), and therefore \(y^\intercal G z \equiv _2 y^\intercal G y\) for all \(y \in \mathbb {Z}^n\).

5.

Choosing the identity matrix as lattice basis, the corresponding dual basis is also the identity. By the proof of part 1, the all-ones vector \(\textbf{1} = (1,\dots ,1)^\intercal \in \mathbb {Z}^n\) is a characteristic vector. Applying part 2, the set of characteristic vectors is \(\textbf{1} + 2 \mathbb {Z}^n\). Every vector in this set has odd entries, hence squared length at least n, with equality precisely for the vectors in \(\{-1,1\}^n\). This finishes the proof. \(\square \)
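Part 4 is easily made concrete. The sketch below (function name ours) computes \(z_k = (G^{-1})_{kk}\) by exact Gauss-Jordan elimination; for unimodular G the inverse, and hence z, is integral. The matrix used in the example reappears in Sect. 3.

```python
from fractions import Fraction

def characteristic_coefficients(G):
    """z with z_k = (G^{-1})_{kk}, cf. Lemma 5, part 4. Positive
    definiteness guarantees nonzero pivots without row swaps."""
    n = len(G)
    # Augment G with the identity and run Gauss-Jordan elimination.
    A = [[Fraction(G[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        piv = A[k][k]
        A[k] = [x / piv for x in A[k]]
        for i in range(n):
            if i != k and A[i][k] != 0:
                f = A[i][k]
                A[i] = [x - f * y for x, y in zip(A[i], A[k])]
    return [int(A[k][n + k]) for k in range(n)]

z = characteristic_coefficients([[34, 21], [21, 13]])
assert z == [13, 34]   # diagonal of the (integral) inverse
```

Note that \(z \equiv (1,0) \pmod 2\); the vector \((-1,2)\), which Sect. 3 identifies as the coefficient vector of a shortest characteristic vector of this lattice, lies in the same co-set \(z + 2\mathbb {Z}^2\), in accordance with part 2.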

3 The unimodular decomposition problem is in NP and co-NP

We are now able to provide the main result of this article. The crucial ingredient is Elkies’ theorem on self-dual lattices, stating that if the squared length of every characteristic vector is at least the dimension, then the lattice is isomorphic to \(\mathbb {Z}^n\). The original theorem reads as follows.

Theorem 6

( [1]) Let \(\Lambda \) be a self-dual lattice in \(\mathbb {R}^n\) with no characteristic vector w such that \(\Vert w \Vert ^2 < n\). Then \(\Lambda \cong \mathbb {Z}^n\).

As a side remark, we recall that the shortest characteristic vectors of \(\mathbb {Z}^n\) are \(\{-1,1\}^n\) by Lemma 5, part 5. Hence in this case \(\Vert w \Vert ^2 = n\), and the implication of Elkies’ theorem turns out to be an equivalence (which was already known to Elkies).

With this theorem, we can prove that RSLP is in co-NP as follows. If \(\Lambda (B) \not \cong \mathbb {Z}^n\), then the certificate is a characteristic vector \(w \in \Lambda (B)\) with \(\Vert w\Vert ^2 < n\); by Theorem 6, such a vector exists. Hence we only have to show that the certificate can be checked in polynomial time, i.e. that \(\Vert w\Vert ^2 < n\) and \(w^\intercal v \equiv _2 v^\intercal v\) for all \(v \in \Lambda (B)\) can be verified efficiently.

However, if we turn our attention to UDP, we are not given a lattice explicitly, but only a Gram matrix G. If \(G =B^\intercal B\), our certificate will therefore be the coefficient vector z of a short characteristic vector \(w = Bz \in \Lambda (B)\). As it turns out, such a vector z is independent of the particular choice of B.

Lemma 7

Let \(G \in \mathbb {Z}^{n \times n}\) be a symmetric, positive definite, and unimodular matrix, and \(z \in \mathbb {Z}^n\). We have

$$\begin{aligned} e_k^\intercal G e_k \equiv _2 e_k^\intercal G z, \quad \forall \, k=1,\dots ,n, \end{aligned}$$

if and only if for every matrix B with \(B^\intercal B = G\), the vector \(w= Bz\) is a characteristic vector in the lattice \(\Lambda (B)\).

Proof

Let \(w =Bz \in \Lambda \) for \(z \in \mathbb {Z}^n\). If w is a characteristic vector, then we have \(e_k^\intercal G e_k = \left\langle b_k,b_k \right\rangle \equiv _2 \left\langle b_k, w \right\rangle = e_k^\intercal G z\).

For the other direction, let \(x = \sum _{i=1}^n \alpha _i b_i \in \Lambda \) be an arbitrary lattice vector and \(w = Bz \in \Lambda \). We find

$$\begin{aligned} \left\langle x, w - x \right\rangle&= \left\langle \sum _{i=1}^n \alpha _i b_i, w - \sum _{i=1}^n \alpha _i b_i \right\rangle = \left\langle \sum _{i=1}^n \alpha _i b_i, w \right\rangle - \sum _{i=1}^n \sum _{j=1}^n \alpha _i \alpha _j \left\langle b_i,b_j \right\rangle \\&\equiv _2 \sum _{i=1}^n \alpha _i (e_i^\intercal G z) - \sum _{i=1}^n \alpha _i^2 (e_i^\intercal G e_i) \equiv _2 \sum _{i=1}^n (\alpha _i - \alpha _i^2) e_i^\intercal G e_i \equiv _2 0, \end{aligned}$$

showing that w is indeed a characteristic vector. \(\square \)

This lemma also shows that we can check whether a vector w is characteristic by verifying \(\left\langle w,b_i \right\rangle \equiv _2 \left\langle b_i,b_i \right\rangle \) only on a basis \(\{b_1,\dots ,b_n\}\), instead of on all lattice vectors. This check clearly takes polynomial time, provided the encoding size of w is polynomially bounded in the encoding size of the input.
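For integral Gram matrices, the corresponding test on coefficient vectors (cf. Lemma 7) takes only a few lines; a sketch with a function name of our choosing:

```python
def is_characteristic_coeff(G, z):
    """Check e_k^T G e_k = e_k^T G z (mod 2) for all k; by Lemma 7 this
    certifies that Bz is characteristic in the lattice generated by any
    B with B^T B = G."""
    n = len(G)
    Gz = [sum(G[k][j] * z[j] for j in range(n)) for k in range(n)]
    return all((G[k][k] - Gz[k]) % 2 == 0 for k in range(n))
```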

Lemma 8

Let \(G \in \mathbb {Z}^{n \times n}\) be a symmetric, positive definite, and unimodular matrix, and \(z \in \mathbb {Z}^n\) such that \(z^\intercal G z \le n\). Then the bit complexity of z is polynomially bounded by the bit complexity of G.

Proof

Let \(M \in \mathbb {Z}\) be a bound on the entries of G, i.e. \( |G_{i,j} |\le M\) for all i, j, and let \(v_1,\dots ,v_n\) be an orthonormal basis of eigenvectors of G with eigenvalues \(0 < \lambda _1 \le \dots \le \lambda _n \le nM\). The last inequality holds since \(\Vert Gx\Vert _\infty \le nM \Vert x\Vert _\infty \). As \(\prod _{i=1}^n \lambda _i = \det (G) = 1\), this in turn implies \(\lambda _1 \ge 1/(nM)^{n-1}\). Writing \(z = \sum _{i=1}^n \alpha _i v_i\), we estimate \( n \ge z^\intercal G z = \sum _{i=1}^n \alpha _i^2 \lambda _i, \) and hence \(\alpha _i^2 \le \frac{n}{\lambda _i} \le (nM)^n\). Thus, \(\Vert z\Vert ^2 \le (nM)^{n+1}\), and since \(z \in \mathbb {Z}^n\), its bit complexity is bounded by \(\mathcal {O} (n^2 (\log (n) + \log (M) ))\). \(\square \)

Theorem 9

The unimodular decomposition problem (and thus the rotated standard lattice problem) is in NP \(\cap \) co-NP.

Proof

If G is a yes-instance, the certificate is the unimodular matrix U such that \(U^\intercal U = G\). Since \(G_{i,i} = U_i^\intercal U_i\), where \(U_i\) is the i-th column of U, all entries of U are bounded in absolute value by \(\max \{\sqrt{ G_{ii}} \mid i=1,\dots ,n\}\). Hence the encoding size of the certificate is polynomial in the input, and verifying it clearly takes only polynomial time.

If G is a no-instance, the certificate is an integral vector z and we verify

1.

    \(z^\intercal G z < n\), and

2.

    \(e_i^\intercal G e_i \equiv _2 e_i^\intercal G z\), \(i = 1,\dots ,n\).

These checks can be done in time polynomial in the encoding size of z and G using Lemma 8. If the answer to both is yes, then we output that there is no unimodular matrix U with \(G = U^\intercal U\).

We first show that such a vector z exists if and only if G is a no-instance (i.e. the certificate exists and the conclusion is correct). By Lemma 7, both conditions together are equivalent to the fact that in every lattice \(\Lambda (B)\) with \(B^\intercal B = G\), the vector Bz is a characteristic vector with \(\Vert Bz\Vert ^2 \le n-1\); recall that characteristic vectors exist by Lemma 5, part 1. By Theorem 6 and the subsequent remark, such a short characteristic vector exists if and only if \(\Lambda (B) \not \cong \mathbb {Z}^n\), which in turn is equivalent to the impossibility of a unimodular decomposition \(G = U^\intercal U\) by Lemma 1.

To show that both tests can be performed in polynomial time, we only have to observe that the bit complexity of z is polynomially bounded in the bit complexity of G by Lemma 8. \(\square \)

To illustrate the certificates, we consider two examples. First, consider the following decomposable Gram matrix

$$\begin{aligned} G = \begin{pmatrix} 34 &{}\quad 21 \\ 21 &{}\quad 13 \end{pmatrix} = U^\intercal U = \begin{pmatrix} 5 &{}\quad 3 \\ 3 &{}\quad 2 \end{pmatrix} \begin{pmatrix} 5 &{}\quad 3 \\ 3 &{}\quad 2 \end{pmatrix}. \end{aligned}$$

It should not come as a surprise that G is decomposable, as the first self-dual lattice not isomorphic to \(\mathbb {Z}^n\) appears in dimension 8, as discussed before. The coefficient vector of a shortest characteristic vector is \(z = (-1,2)^\intercal \), yielding \(z^\intercal G z = 2\). In the lattice \(\Lambda (U) = \mathbb {Z}^2\), this corresponds to the characteristic vector \(Uz = (1,1)^\intercal \).

For the second example, consider the Gram matrix

$$\begin{aligned} G = \begin{pmatrix} 4 &{} -2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 3 \\ -2 &{} 2 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 \\ 0 &{} -1 &{} 2 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} -1 &{} 2 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} -1 &{} 2 &{} -1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 2 &{} -1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} -1 &{} 2 &{} 0 &{} 1 \\ 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 2 &{} 3 \\ 3 &{} -1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 3 &{} 7 \end{pmatrix} \in \mathbb {Z}^{9 \times 9} \end{aligned}$$

and the vector \(z = (-1,-1,-1,-1,-1,-1,-1,-1,1)^\intercal \in \mathbb {Z}^9\) as a certificate. First, \(z^\intercal G z = 1 < 9\). Moreover, we have

$$\begin{aligned} e_i^\intercal G e_i \equiv _2 {\left\{ \begin{array}{ll} 0 &{}\quad 1 \le i \le 8 \\ 1 &{}\quad i = 9 \end{array}\right. }, \quad \text {and } e_i^\intercal G z \equiv _2 {\left\{ \begin{array}{ll} 0 &{}\quad 1 \le i \le 8 \\ 1 &{}\quad i = 9 \end{array}\right. }. \end{aligned}$$

Thus, there is no unimodular matrix U such that \(G = U^\intercal U\). (The matrix G is chosen to correspond to the lattice \(E_8 \times \mathbb {Z}\).)
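Both certificates can be checked mechanically; the sketch below (reusing is_characteristic_coeff from Sect. 2) verifies the decomposition of the first example and both certificate tests for the second.

```python
# First example: G decomposes as U^T U.
G2 = [[34, 21], [21, 13]]
U = [[5, 3], [3, 2]]
assert all(sum(U[k][i] * U[k][j] for k in range(2)) == G2[i][j]
           for i in range(2) for j in range(2))

# Second example (E8 x Z): the certificate z passes both tests,
# hence no unimodular U with G = U^T U can exist.
G9 = [[ 4, -2,  0,  0,  0,  0,  0,  1,  3],
      [-2,  2, -1,  0,  0,  0,  0,  0, -1],
      [ 0, -1,  2, -1,  0,  0,  0,  0,  0],
      [ 0,  0, -1,  2, -1,  0,  0,  0,  0],
      [ 0,  0,  0, -1,  2, -1,  0,  0,  0],
      [ 0,  0,  0,  0, -1,  2, -1,  0,  0],
      [ 0,  0,  0,  0,  0, -1,  2,  0,  1],
      [ 1,  0,  0,  0,  0,  0,  0,  2,  3],
      [ 3, -1,  0,  0,  0,  0,  1,  3,  7]]
z = [-1, -1, -1, -1, -1, -1, -1, -1, 1]
assert sum(z[i] * G9[i][j] * z[j] for i in range(9) for j in range(9)) == 1
assert is_characteristic_coeff(G9, z)
```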

4 Conclusion

We have seen that characteristic vectors are well suited as a co-NP certificate for RSLP, and their coefficient vectors for UDP. If \(\Lambda \) is given by a basis, it follows from the preceding discussion that the problem at hand can be solved by computing the co-set of characteristic vectors, together with a single call to an oracle for the Closest Vector Problem (CVP): for any characteristic vector w, the shortest characteristic vector is w plus the vector of \(2\Lambda \) closest to \(-w\). However, CVP is NP-hard, and the best known running time using this reduction we are aware of is \(2^{\mathcal {O}(n)}\) (see e.g. [16]).

Another easy approach to RSLP is the following. If a lattice admits an orthogonal basis \(\{b_1,\dots ,b_n\}\), then any HKZ-reduced basis [17, 18] is an orthogonal basis. Due to the recursive structure of those bases, n calls to an oracle for the Shortest Vector Problem (SVP) are sufficient to find this basis. Hence the best known running time using this reduction is \(2^{\mathcal {O} (n)}\) (see e.g. [16]), and this approach even finds general orthogonal bases. But as UDP is, from a complexity point of view, supposedly easier than SVP or CVP, it is of great interest to find a smarter algorithm. It seems to be a recurring phenomenon that solving lattice problems such as SVP or CVP fast on a special class of lattices requires a suitable representation of the lattice. For instance, if \(\Lambda \) is a lattice of Voronoi’s first kind, we still need to know an obtuse superbasis before we can use the polynomial-time algorithm of [19]. Therefore, being able to find orthonormal, or even orthogonal, bases remains an interesting open problem.

We can also ask whether characteristic vectors can help us decide whether a given lattice has an orthogonal basis. If we generalize characteristic vectors to integral positive definite Gram matrices, it turns out that there may be several co-sets \(v + 2\Lambda \) consisting of characteristic vectors. Since the proof of Elkies heavily depends on self-duality, finding an analogous, more general result will presumably require new techniques.

One might hope that the presented result implies containment of the more general lattice isomorphism problem in co-NP, using the following reduction: Given two lattice bases \(B_1\) and \(B_2\), apply a linear transformation A such that \(A \Lambda (B_1) = \mathbb {Z}^n\), for instance \(A = B_1^{-1}\). Then decide whether \(\Lambda (AB_2) \cong \mathbb {Z}^n\). For this to work, we would need that two lattices \(\Lambda _1, \Lambda _2\) are isomorphic if and only if the two lattices \(A\Lambda _1\) and \(A\Lambda _2\) are isomorphic. The following example shows that this is false in general. Consider the hexagonal lattice and a rotation thereof, i.e.

$$\begin{aligned} B_1 = \begin{pmatrix} 1 &{}\quad \tfrac{1}{2} \\ 0 &{}\quad \frac{\sqrt{3}}{2} \end{pmatrix}, \qquad O = \begin{pmatrix} \frac{\sqrt{3}}{2} &{}\quad \tfrac{1}{2} \\ - \tfrac{1}{2} &{}\quad \frac{\sqrt{3}}{2} \end{pmatrix} \qquad B_2 = OB_1 = \begin{pmatrix} \frac{\sqrt{3}}{2} &{}\quad \frac{\sqrt{3}}{2} \\ -\tfrac{1}{2} &{}\quad \tfrac{1}{2} \end{pmatrix}. \end{aligned}$$

Clearly, \(\Lambda (B_1) \cong \Lambda (B_2)\). Now we consider the linear map A given by \(A = B_1^{-1}\) and compare the two lattices \(A \Lambda (B_1) = \mathbb {Z}^2\) and \(A \Lambda (B_2)\). We obtain bases

$$\begin{aligned} B_1^\prime = AB_1 = I, \qquad B_2^\prime = AB_2 = \begin{pmatrix} 1 &{}\quad - \frac{1}{\sqrt{3}} \\ 0 &{}\quad \frac{2}{\sqrt{3}} \end{pmatrix} \begin{pmatrix} \frac{\sqrt{3}}{2} &{}\quad \frac{\sqrt{3}}{2} \\ -\tfrac{1}{2} &{}\quad \tfrac{1}{2} \end{pmatrix} = \begin{pmatrix} \frac{2}{\sqrt{3}} &{}\quad \frac{1}{\sqrt{3}} \\ - \frac{1}{\sqrt{3}} &{}\quad \frac{1}{\sqrt{3}} \end{pmatrix}. \end{aligned}$$

An easy way to see that \(\Lambda (B_2^\prime ) \not \cong \mathbb {Z}^2 = \Lambda (B_1^\prime )\) is to verify that \(\Lambda (B_2^\prime )\) contains a vector of squared norm less than 1, namely the second basis vector with \(\Vert (1/ \sqrt{3}, 1/\sqrt{3})^\intercal \Vert ^2 = 2/3\), whereas \(\mathbb {Z}^2\) does not.
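Numerically, this is immediate (a small floating-point sketch):

```python
import numpy as np

s3 = np.sqrt(3)
B2p = np.array([[2 / s3, 1 / s3],
                [-1 / s3, 1 / s3]])
G = B2p.T @ B2p   # Gram matrix of B2': [[5/3, 1/3], [1/3, 2/3]]

# The second basis vector has squared norm 2/3 < 1, while Z^2 contains
# no nonzero vector shorter than 1.
assert np.isclose(G[1, 1], 2 / 3)
```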