Abstract
In 1996, Ajtai introduced the reduction principle from the worst case to the average case at the 28th ACM Symposium on Theory of Computing (STOC), now known as the Ajtai reduction principle [see Ajtai (1996), Ajtai (1999) and Ajtai and Dwork (1997)]. Subsequently, Ajtai and Dwork presented the first lattice-based cryptosystem, known in the literature as the Ajtai-Dwork cryptosystem. The proof that this cryptosystem resists Shor’s quantum algorithm applies the Ajtai reduction principle to transform the search for collisions of a hash function into the SIS problem, and the Ajtai reduction principle shows that the difficulty of solving the SIS problem is polynomially equivalent to that of the shortest vector problem on lattices. The main purpose of this chapter is to prove the Ajtai reduction principle.
2.1 Random Linear System
Let \(A\in \mathbb {Z}_{q}^{n\times m}\) be an \(n\times m\) matrix over \(\mathbb {Z}_q\). If each entry of A is a random variable on \(\mathbb {Z}_q\), and these \(n\times m\) random variables are independent and identically distributed, then A is called a random matrix on \(\mathbb {Z}_q\). We give the definition of a random linear system
where x and y are random variables on \(\mathbb {Z}_q^m\) and \(\mathbb {Z}_q^n\), respectively. This random linear system plays an important role in modern cryptography. We prove some basic properties in this section.
Lemma 2.1.1
Let \(A\in \mathbb {Z}_{q}^{n\times n}\) be an invertible square matrix of order n and \(y\equiv Ax\ (\text {mod}\ q)\). Then y is uniformly distributed on \(\mathbb {Z}_q^n\) if and only if x is.
Proof
If x is uniformly distributed on \(\mathbb {Z}_q^n\), then for any \(x_0\in \mathbb {Z}_q^n\), we have
Since there is exactly one \(y_0\in \mathbb {Z}_q^n\) with \(Ax_0 \equiv y_0\ (\text {mod}\ q)\), therefore,
Because A is an invertible matrix, there is a one-to-one correspondence between \(y_0\) and \(x_0\). In other words, when \(x_0\) traverses all the vectors in \(\mathbb {Z}_q^n\), \(y_0\) also traverses all the vectors in \(\mathbb {Z}_q^n\), which means y is also uniformly at random on \(\mathbb {Z}_q^n\). On the other hand, if y is uniformly distributed on \(\mathbb {Z}_q^n\), so is x on \(\mathbb {Z}_q^n\) by \(x\equiv A^{-1}y\ (\text {mod}\ q)\). \(\square \)
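The one-to-one correspondence used in this proof can be checked exhaustively for a small instance; the modulus and matrix below are arbitrary illustrations, not taken from the text.

```python
from itertools import product

# Exhaustive check of Lemma 2.1.1 over Z_5 for an arbitrary invertible matrix.
q = 5
A = [[1, 2],
     [3, 4]]            # det = -2 = 3 (mod 5), invertible

images = {tuple(sum(a*x for a, x in zip(row, v)) % q for row in A)
          for v in product(range(q), repeat=2)}

# x -> Ax (mod q) is a bijection of Z_q^2, so it carries the uniform
# distribution on Z_q^2 to itself.
assert len(images) == q**2
```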
Remark 2.1.1
In fact, for the above linear system, x and y are random variables with the same distribution when A is an invertible square matrix. However, this property doesn’t hold if A is not a square matrix.
Let \(a\in \mathbb {R}\) be a real number and [a] the greatest integer not exceeding a, i.e. [a] is the unique integer satisfying the following inequality,
If \(x\in \mathbb {R}^n\) is an n dimensional vector, \(x=(x_1,x_2,\ldots ,x_n)\), we define [x] as follows
[x] is called the integer vector of x. We say x is a random vector if each component \(x_j\) is a random variable and the n random variables are mutually independent.
Lemma 2.1.2
If \(x\in [0,1)^n\) is a continuous random variable uniformly distributed on the unit cube, then [qx] is a discrete random variable uniformly distributed on \(\mathbb {Z}_q^n\).
Proof
Since all the components of x are independent, we only prove for \(n=1\). If \(a\in [0,1)\) is a continuous random variable uniformly distributed, then for any \(i=0,1,\ldots ,q-1\), we have
This indicates [qa] is a discrete random variable uniformly distributed on \(\mathbb {Z}_q\). \(\square \)
Lemma 2.1.3
Let \(L=L(B)\) be an n dimensional full-rank lattice and F(B) the basic neighbourhood of L. If x is a random variable uniformly distributed on F(B), then \([qB^{-1}x]\) is a discrete random variable uniformly distributed on \(\mathbb {Z}_q^n\).
Proof
\(\forall a\in \mathbb {Z}_q^n\), we have
Since the volume of basic neighbourhood F(B) is \(\text {det}(L)=|\text {det}(B)|\), the probability density function of x is \(\frac{1}{\text {det}(L)}\), thus,
We set \(y=Bu\) in the above equality, and get
So \([qB^{-1}x]\) is uniformly distributed on \(\mathbb {Z}_q^n\). \(\square \)
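As a numerical sanity check of Lemma 2.1.3 (and of Lemma 2.1.2, which it uses), the following sketch samples x uniformly in F(B) for a small hypothetical basis B and verifies that \([qB^{-1}x]\), reduced mod q, is approximately uniform on \(\mathbb {Z}_q^2\).

```python
import math, random
from collections import Counter
from fractions import Fraction

# Hypothetical 2x2 basis (not from the text) of a full-rank lattice.
B = [[3, 1],
     [1, 2]]
q = 5

# Exact inverse of the 2x2 integer matrix via the adjugate formula.
det = Fraction(B[0][0]*B[1][1] - B[0][1]*B[1][0])
Binv = [[ B[1][1]/det, -B[0][1]/det],
        [-B[1][0]/det,  B[0][0]/det]]

random.seed(0)
N = 200_000
counts = Counter()
for _ in range(N):
    # x uniform in the basic neighbourhood F(B): x = B u, u uniform in [0,1)^2.
    u = (random.random(), random.random())
    x = (B[0][0]*u[0] + B[0][1]*u[1],
         B[1][0]*u[0] + B[1][1]*u[1])
    w = [float(Binv[i][0])*x[0] + float(Binv[i][1])*x[1] for i in range(2)]
    counts[(math.floor(q*w[0]) % q, math.floor(q*w[1]) % q)] += 1

# Each of the q^2 = 25 cells should receive roughly N/25 = 8000 samples.
expected = N / q**2
assert len(counts) == q**2
assert all(abs(c - expected) < 0.1*expected for c in counts.values())
```

Note that \(B^{-1}x=u\), so the proof's substitution \(y=Bu\) is exactly what reduces this check to Lemma 2.1.2.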
2.2 SIS Problem
The SIS problem plays a very important role in modern lattice cryptography, which is to find the shortest nonzero integer solution in a class of random linear systems.
Definition 2.2.1
Let n, m, q be positive integers with \(m\geqslant n\), let \(A\in \mathbb {Z}_q^{n\times m}\) be a uniformly distributed random matrix on \(\mathbb {Z}_q\), and let \(\beta \in \mathbb {R}\), \(0<\beta <q\). The SIS problem is to find a short nonzero integer vector \(z\in \mathbb {Z}^m\) such that
We call the above SIS problem with parameters \(n,m,q,A,\beta \) the \(\text {SIS}_{n,q,\beta ,m}\) problem, and A is called the coefficient matrix of the SIS problem.
Remark 2.2.1
If \(m<n\), since there are fewer variables than equations, (2.2.1) is not guaranteed to have a nonzero solution, so we suppose that \(m\geqslant n\). If \(\beta \geqslant q\), let \(z=\begin{pmatrix} q \\ 0 \\ \vdots \\ 0 \end{pmatrix}\in \mathbb {Z}^m\); then \(Az\equiv 0\ (\text {mod}\ q)\) and \(|z|=q\leqslant \beta \). This solution is trivial, so we always assume \(\beta <q\) in Definition 2.2.1.
Remark 2.2.2
The difficulty of SIS problem decreases when m becomes larger, while it increases as n becomes larger. In fact, if z is a solution of \(\text {SIS}_{n,q,\beta ,m}\), \(m'>m\), \([A,A']\) is the coefficient matrix of \(\text {SIS}_{n,q,\beta ,m'}\). Let \(z'=\begin{pmatrix} z\\ 0 \end{pmatrix}\), then
So \(z'\) is a solution of \(\text {SIS}_{n,q,\beta ,m'}\). Conversely, a solution satisfying the \(n+1\) equations of an SIS problem also satisfies any n of them. Therefore, the difficulty of the SIS problem increases as n becomes larger.
Lemma 2.2.1
For any positive integer q, any \(A\in \mathbb {Z}_q^{n\times m}\), and \(\beta \geqslant \sqrt{m}q^{\frac{n}{m}}\), the SIS problem has a nonzero solution; i.e. there exists a vector \(z\in \mathbb {Z}^m\), \(z\ne 0\), such that
Proof
Let \(z=\begin{pmatrix} z_1 \\ \vdots \\ z_m \end{pmatrix}\in \mathbb {Z}^m\), and consider vectors whose coordinates satisfy \(0\leqslant z_i\leqslant q^{\frac{n}{m}}\). It is easy to check that there are more than \(q^n\) such integer vectors. Thus, by the pigeonhole principle, we can find \(z'\ne z''\) such that \(Az'\equiv Az''\ (\text {mod}\ q)\), i.e.
We complete the proof. \(\square \)
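The pigeonhole argument of Lemma 2.2.1 can be run directly for tiny parameters; the instance below (\(n=2\), \(m=4\), \(q=3\)) is illustrative.

```python
from itertools import product

# Small hypothetical SIS instance (n=2, m=4, q=3).
n, m, q = 2, 4, 3
A = [[1, 2, 0, 1],
     [2, 1, 1, 0]]

def mulmod(A, z, q):
    return tuple(sum(a*b for a, b in zip(row, z)) % q for row in A)

# Coordinates range over 0 <= z_i <= q^{n/m} (here q^{1/2} ~ 1.73, so {0,1});
# that gives 2^m = 16 > q^n = 9 vectors, so two of them must collide mod q.
seen = {}
solution = None
for z in product(range(2), repeat=m):
    key = mulmod(A, z, q)
    if key in seen:
        solution = tuple(a - b for a, b in zip(z, seen[key]))
        break
    seen[key] = z

assert solution is not None and any(solution)
assert mulmod(A, solution, q) == (0, 0)
# Each coordinate lies in {-1,0,1}, so |z|^2 <= m * q^{2n/m}.
assert sum(c*c for c in solution) <= m * q**(2*n/m)
```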
By the above lemma and Remark 2.2.1, in order to guarantee that the SIS problem has a nontrivial solution, we always assume the following conditions on the parameters
Since the difficulty of SIS problem decreases when \(\beta \) becomes larger, we always suppose that
Furthermore, we call n the security parameter of the SIS problem; \(m=m(n)\), \(q=q(n)\), \(\beta =\beta (n)\) are functions of n. By (2.2.2) and (2.2.3), if m and q are polynomial functions of n, written \(m=\text {poly}(n)\), \(q=\text {poly}(n)\), then \(\beta \) is also a polynomial function of n, i.e. \(\beta =\text {poly}(n)\). Let \(U(\mathbb {Z}_q^{n\times m})\) be the set of all \(n\times m\) random matrices uniformly distributed on \(\mathbb {Z}_q\); we call the family of all possible SIS problems \(\text {SIS}_{q,m}\), i.e.
The \(\text {SIS}_{q,m}\) problem is called the total SIS problem, which plays the ‘average case’ role in the Ajtai reduction principle. The parameters are selected as
Definition 2.2.2
Let \(A\in U(\mathbb {Z}_q^{n\times m})\). The \(\text {SIS'}_{n,q,\beta ,m}\) problem is to find \(z\in \mathbb {Z}^m\), \(z\notin 2\mathbb {Z}^m\), such that
In fact, the goal of the SIS’ problem is to find a solution of the SIS problem with at least one odd coordinate. The relation between solutions of the two problems is summarized in the following lemma.
Lemma 2.2.2
Suppose q is an odd integer. Then there is a polynomial time algorithm converting a solution of the SIS problem into a solution of the SIS’ problem.
Proof
If \(z=\begin{pmatrix} z_1 \\ \vdots \\ z_m \end{pmatrix}\in \mathbb {Z}^m\) is a solution of the SIS problem, then there exists an integer \(k\geqslant 0\) such that \(2^{-k}z\in \mathbb {Z}^m\) and \(2^{-k}z\notin 2\mathbb {Z}^m\). Let \(z'=2^{-k}z\); since q is an odd integer, \(2^k\) is invertible mod q, so from \(Az\equiv 0\ (\text {mod}\ q)\) we have
and \(|z'|=2^{-k}|z|\leqslant 2^{-k}\beta \). This means \(z'\) is a solution of SIS’ problem. The complexity of calculating \(z'\) from z is polynomial (polynomial function of n), and this is because
The above formula also holds even if q is an exponential function of n. \(\square \)
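A minimal sketch of the conversion in Lemma 2.2.2 on a made-up instance with odd q: halve z until some coordinate is odd; since \(2^k\) is invertible mod an odd q, the congruence \(Az'\equiv 0\ (\text {mod}\ q)\) is preserved.

```python
# Illustrative instance (not from the text): q odd, A a small matrix,
# z a SIS solution with all coordinates even.
q = 7
A = [[1, 3, 2], [2, 1, 5]]
z = [2, 2, 10]
assert all(sum(a*b for a, b in zip(row, z)) % q == 0 for row in A)

# Strip factors of 2 until some coordinate is odd.
k = 0
zp = z[:]
while all(c % 2 == 0 for c in zp):
    zp = [c // 2 for c in zp]
    k += 1

assert any(c % 2 == 1 for c in zp)       # z' has an odd coordinate
# 2^k is invertible mod the odd q, so A z' = 0 (mod q) still holds:
assert all(sum(a*b for a, b in zip(row, zp)) % q == 0 for row in A)
```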
The SIS problem and the Ajtai-Dwork cryptosystem are closely related. Let \(f_A(z)=Az\) be a hash function, and let \(z'\) and \(z''\) be collision points of \(f_A(z)\); then
It is easy to obtain a solution of the SIS problem if we can find two collision points of \(f_A\). In this sense, the hash function \(f_A(z)\) is strongly collision resistant. The security of the Ajtai-Dwork cryptosystem mainly depends on the difficulty of solving the SIS problem.
SIS problem could be regarded as the shortest vector problem in the average case. Let
Then \(\Lambda _q^{\bot }(A)\) is an m dimensional q-ary integer lattice. In fact, solving the SIS problem is equivalent to finding the shortest vector of \(\Lambda _q^{\bot }(A)\).
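This lattice view is easy to make concrete: membership in \(\Lambda _q^{\bot }(A)\) is just the congruence \(Az\equiv 0\ (\text {mod}\ q)\), and for toy parameters a shortest vector can be found by brute force (the matrix below is illustrative).

```python
from itertools import product

# Toy q-ary lattice (parameters illustrative, not from the text).
q, m = 5, 3
A = [[1, 2, 3],
     [0, 1, 4]]

def in_lattice(z):
    # z in Λ_q^⊥(A)  <=>  A z = 0 (mod q)
    return all(sum(a*b for a, b in zip(row, z)) % q == 0 for row in A)

# q Z^m is always contained in Λ_q^⊥(A), so the lattice has full rank m.
assert all(in_lattice([q if i == j else 0 for i in range(m)]) for j in range(m))

# Brute-force a shortest nonzero vector over a small box of candidates.
best = min((z for z in product(range(-2, 3), repeat=m)
            if any(z) and in_lattice(z)),
           key=lambda z: sum(c*c for c in z))
assert sum(c*c for c in best) == 2       # e.g. (0, 1, 1) works: 2+3=5, 1+4=5
```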
If \(A\in U(\mathbb {Z}_q^{n\times m})\) is the coefficient matrix of an SIS problem, we can discuss the SIS problem by transforming it to Hermite form. Suppose \(\text {rank}A=n\) and the matrix \(A_1\in \mathbb {Z}_q^{n\times n}\) formed by the first n column vectors of A is invertible. Write \(A=[A_1,A_2]\) and replace A with \(A_1^{-1}A\); we have
Since \(A_2\) is a uniformly distributed random matrix, by Lemma 2.1.1, \(\bar{A}\) is also a uniformly distributed random matrix of dimension \(n\times (m-n)\).
Lemma 2.2.3
The solution set of SIS problem with coefficient matrix A is the same as that of coefficient matrix \(A_1^{-1}A\).
Proof
Let \(z\in \mathbb {Z}^m\) such that
Then \(A_1^{-1}Az\equiv 0\ (\text {mod}\ q)\), so z is a solution of the SIS problem with coefficient matrix \(A_1^{-1}A\). Conversely, \(A_1^{-1}Az\equiv 0\ (\text {mod}\ q)\) implies \(Az\equiv 0\ (\text {mod}\ q)\), so Lemma 2.2.3 holds. \(\square \)
We call the coefficient matrix \(A_1^{-1}A\) determined by (2.2.5) the normal form of the SIS problem.
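The normal form can be computed by row reduction over \(\mathbb {Z}_q\); the sketch below uses a prime q so that every nonzero pivot is invertible, and the instance is illustrative.

```python
# Row reduction of A = [A1, A2] to the normal form [I_n, Ā] (mod q).
# Illustrative instance with prime q = 7 so nonzero pivots are invertible.
q = 7
n, m = 2, 4
A = [[2, 3, 1, 4],
     [1, 4, 6, 2]]

M = [row[:] for row in A]
for i in range(n):
    # Pick a row with a nonzero pivot in column i and swap it up.
    piv = next(r for r in range(i, n) if M[r][i] % q != 0)
    M[i], M[piv] = M[piv], M[i]
    # Normalize the pivot to 1 using the modular inverse (Python 3.8+).
    inv = pow(M[i][i], -1, q)
    M[i] = [(x * inv) % q for x in M[i]]
    # Eliminate column i from every other row.
    for r in range(n):
        if r != i:
            f = M[r][i]
            M[r] = [(x - f * y) % q for x, y in zip(M[r], M[i])]

# Left n x n block is the identity; the right block is Ā = A1^{-1} A2 (mod q).
assert all(M[i][j] == (1 if i == j else 0) for i in range(n) for j in range(n))
```

The row operations on the full matrix amount to left multiplication by \(A_1^{-1}\), so by Lemma 2.2.3 the solution set is unchanged.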
Finally, we define some hard problems on lattices. We always suppose \(L=L(B)\subset \mathbb {R}^n\) is a full-rank lattice, \(\lambda _1,\lambda _2,\dots ,\lambda _n\) are the successive minima of the lattice L, \(\lambda _1\) is the length of a shortest vector in L, and \(\gamma =\gamma (n)\geqslant 1\) is a positive function of n.
Definition 2.2.3
(1) \(\text {SVP}_{\gamma }\): find a nonzero vector x in lattice L such that
(2) \(\text {GapSVP}_{\gamma }\): determine the minimal distance \(\lambda _1=\lambda _1(L)\) of lattice L,
(3) \(\text {SIVP}_{\gamma }\): find a set of n linearly independent lattice vectors \(S=\{s_i\}\subset L\), such that
(4) \(\text {BDD}_{\gamma }\): let \(d=\lambda _1(L)/2\gamma (n)\) be the decoding distance of lattice L. For any target vector \(t\in \mathbb {R}^n\), if
then there exists a unique lattice vector \(v\in L\) with \(|v-t|<d\). The bounded distance decoding problem \(\text {BDD}_{\gamma }\) is to find this unique lattice point v.
The above Definition 2.2.3 gives four kinds of hard problems on lattices. \(\text {SVP}_{\gamma }\) is the approximation version of the shortest vector problem, and \(\text {GapSVP}_{\gamma }\) is its decision version. \(\text {SIVP}_{\gamma }\) is the approximation problem for the shortest set of linearly independent lattice vectors, and \(\text {BDD}_{\gamma }\) is the bounded distance decoding problem.
Since parameter \(\gamma (n)\geqslant 1\), the bounded decoding distance d satisfies
If the target vector \(t\in \mathbb {R}^n\) is within the decoding distance, i.e. \(\text {dis}(t,L)<d\), it is easy to see that there is a unique lattice vector \(v\in L\) with \(|v-t|<d\). In fact, if two distinct lattice vectors \(v_1,v_2\in L\) satisfy \(|v_1-t|<d\) and \(|v_2-t|<d\), then by the triangle inequality
This contradicts the fact that the minimal distance of the lattice L is \(\lambda _1(L)\).
The Ajtai reduction principle states that the above \(\text {SIVP}_{\gamma }\) and \(\text {GapSVP}_{\gamma }\) problems are polynomially equivalent to the average case SIS problem. We will prove this in the next section.
2.3 INCGDD Problem
Let \(S=\{\alpha _i\}\subset \mathbb {R}^n\) be a set of vectors in \(\mathbb {R}^n\), we define
Definition 2.3.1
Let \(L=L(B)\subset \mathbb {R}^n\) be a full-rank lattice, \(S=\{\alpha _1,\alpha _2,\dots ,\alpha _n\}\subset L\) be any set of n linearly independent vectors in L, \(t\in \mathbb {R}^n\) be the target vector, and \(r>\gamma (n)\phi (B)\) be a real number. The INCGDD problem is to find a lattice vector \(\alpha \in L\) such that
where g, \(\gamma (n)\) and \(\phi (B)\) are parameters. Under the given parameter system, INCGDD problem could be written as \(\text {INCGDD}_{\gamma ,g}^{\phi }\).
Remark 2.3.1
The key point of the INCGDD problem is: for any given set S of n linearly independent vectors and any target vector \(t\in \mathbb {R}^n\), find a lattice point \(\alpha \in L\) whose distance to the target vector is at most \(\frac{1}{g}|S|+r\). By Babai's nearest plane algorithm, for any S and t, there exists a polynomial time algorithm finding
In general, the above bound cannot be improved. We can give a counterexample. Let \(L=\mathbb {Z}^n\), \(S=I_n\) be the identity matrix, and the target vector \(t=(\frac{1}{2},\frac{1}{2},\dots ,\frac{1}{2})\); then \(\forall \alpha \in \mathbb {Z}^n\), we have
So there is no lattice point \(\alpha \) with the distance no more than \(\frac{1}{4}|S|\) from t.
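Babai's nearest plane algorithm referred to above can be sketched as follows; the basis and target are illustrative, and the returned lattice point v satisfies \(|v-t|\leqslant \frac{1}{2}\big (\sum _i|b_i^*|^2\big )^{1/2}\).

```python
import math

def gram_schmidt(B):
    """Orthogonalize the rows of B; returns the Gram-Schmidt vectors b_i*."""
    Bs = [row[:] for row in B]
    for i in range(len(B)):
        for j in range(i):
            mu = (sum(a*b for a, b in zip(Bs[i], Bs[j]))
                  / sum(x*x for x in Bs[j]))
            Bs[i] = [a - mu*b for a, b in zip(Bs[i], Bs[j])]
    return Bs

def nearest_plane(B, t):
    """Babai's nearest plane: returns a lattice point close to t."""
    Bs = gram_schmidt(B)
    b, v = t[:], [0.0]*len(t)
    for i in reversed(range(len(B))):
        # Round the coefficient along b_i*, then peel off that layer.
        c = round(sum(a*x for a, x in zip(b, Bs[i])) / sum(x*x for x in Bs[i]))
        b = [x - c*y for x, y in zip(b, B[i])]
        v = [x + c*y for x, y in zip(v, B[i])]
    return v

# Illustrative basis (rows) and target, not from the text.
B = [[2.0, 0.0], [1.0, 3.0]]
t = [2.6, 2.2]
v = nearest_plane(B, t)
bound = 0.5 * math.sqrt(sum(sum(x*x for x in row) for row in gram_schmidt(B)))
dist = math.sqrt(sum((a - b)**2 for a, b in zip(v, t)))
assert dist <= bound
```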
Based on the above counterexample, the parameter for the INCGDD problem is generally chosen as \(g=4\). The quantity r in (2.3.2) is called the controlled remainder, which guarantees the existence of the lattice vector \(\alpha \). Under a given parameter system, the INCGDD problem can be transformed into the SIS problem of the corresponding parameter system. This transformation is the key idea of the Ajtai reduction principle. We call the transformation algorithm the oracle algorithm, written as \(\mathcal {A}(B,S,t)\).
The Oracle Algorithm \(\mathcal {A}(B,B,0)\)
We first explain how the oracle algorithm works in a very special case. Let \(S=B\) be the generated matrix of L, the target vector \(t=0\), parameters of corresponding SIS problem are as follows
Since \(\beta \geqslant \sqrt{m}q^{\frac{n}{m}}\), by Lemma 2.2.1, the total SIS problem \(\text {SIS}_{q,m}\) has a solution.
The oracle algorithm that converts the INCGDD problem into the SIS problem is a probabilistic sampling algorithm, which can be divided into the following four steps.
The first step: let F(B) be the basic neighbourhood of \(L=L(B)\), defined by
We select a point \(c\in F(B)\) uniformly at random in F(B). Let \(y\in L\) be the lattice vector nearest to c; we obtain a pair of vectors (c, y). Repeating this process independently m times gives m pairs of vectors \((c_1,y_1),(c_2,y_2),\dots ,(c_m,y_m)\), where \(m>n\).
The second step: for each \(c_i\ (1\leqslant i\leqslant m)\), we define \(\hat{c_i}\),
Let \(c_i=Bx_i\), where \(x_i=(x_{i_1},x_{i_2},\dots ,x_{i_n})^T\in [0,1)^n\), so we have
Each coordinate satisfies
Thus, \(\hat{c_i}\in F(B)\). Let \(c_i-\hat{c_i}=B v_i\), \(v_i=(v_{i_1},v_{i_2},\dots ,v_{i_n})^T\), then
Therefore, the distance between \(\hat{c_i}\) and \(c_i\) has the following estimation. Suppose \(B=[\beta _1,\dots ,\beta _n]\), it follows that
The above formula holds for all \(1\leqslant i\leqslant m\). We can give a geometric interpretation of \(\hat{c_i}\). Divide the basic neighbourhood F(B) into \(q^n\) polyhedra with side length \(\frac{1}{q}\), and each polyhedron is denoted as \(\triangle _i\), where
Since \(\{c_i\}_{i=1}^m\) are uniformly distributed in F(B), with positive probability each polyhedron \(\triangle _i\) contains at least one sampled point, written as \(c_i\). Since \(\text {Vol}(\triangle _i)=\frac{1}{q^n}\text {det}(L)\), we have
According to (2.3.5), both \(\hat{c_i}\) and \(c_i\) are contained in the polyhedron \(\triangle _i\), and \(\hat{c_i}\) is the point at the bottom left corner of \(\triangle _i\). From Lemma 2.1.3, since \(\{c_i\}\) is uniformly at random in F(B), then \(\frac{1}{q}[qB^{-1}c_i]\) is uniformly distributed. Based on Lemma 2.1.1, \(\{\hat{c_i}\}\) is also uniformly distributed at random. Let
We get three \(n\times m\) matrices.
The third step: we now define m vectors \(a_i\in \mathbb {Z}_q^n\), \(1\leqslant i\leqslant m\),
Then
According to Lemma 2.1.3, A is a random matrix uniformly distributed. Suppose z is a solution of \(\text {SIS}_{q,m,\beta }\) problem, i.e.
Combining z and \(\{\hat{c_i}\}\),
Since \(Az\equiv 0\ (\text {mod}\ q)\Rightarrow \frac{1}{q}Az\in \mathbb {Z}^n\), we get a lattice vector \(\hat{C}z\in L\).
The fourth step: similarly, combining z with \(\{c_i\}_{i=1}^m\) and \(\{y_i\}_{i=1}^m\), we get two vectors Cz and Yz. Let \(z=(z_1,z_2,\dots ,z_m)^T\); then
Both \(\hat{C}z\) and Yz are lattice vectors, let \(\alpha =\hat{C}z-Yz=(\hat{C}-Y)z\in L\). We are to prove that \(\alpha \) is a solution of INCGDD problem. Denote \(|z|_1\) as the \(l_1\) norm of z, it follows that
The major part of the length of \(\alpha =\hat{C}z-Yz\) is \(|Cz-\hat{C}z|\), which could be estimated as follows
Select the parameters \(m=n\log n\), \(q=n^4\), \(\beta =n\); when n is sufficiently large, we have
The minor part of length \(|Cz-Yz|\) of \(\alpha \) could be calculated by the nearest plane algorithm of Babai [see (2.3.3)]:
Let \(\phi (B)=|B|\), \(\gamma (n)=\frac{1}{2}\sqrt{n}\), then
where \(r\geqslant \gamma (n)\phi (B)\). In other words, from a solution z of the \(\text {SIS}_{q,m,\beta }\) problem, we can obtain a solution of the \(\text {INCGDD}_{\gamma ,g}^{\phi }\) problem for the generated matrix B and the target vector \(t=0\) by a probabilistic polynomial time oracle algorithm. Here the parameters are chosen as \(g=4\), \(\gamma (n)=\frac{1}{2}\sqrt{n}\), \(\phi (B)=|B|\).
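The four steps above can be walked through end to end on a toy instance. The sketch below uses an illustrative diagonal basis (so the nearest lattice vector to c is obtained by rounding coordinates) and finds a short SIS solution by brute force; it only checks that the output \(\alpha =\hat{C}z-Yz\) is indeed a lattice vector.

```python
import math
import random
from itertools import product

# Illustrative diagonal basis (not from the text): L = 3Z x 2Z.
random.seed(1)
n, m, q = 2, 6, 5
B = [3.0, 2.0]

# Step 1: sample c_i uniformly in F(B); y_i is the nearest lattice vector.
pairs = []
for _ in range(m):
    u = [random.random() for _ in range(n)]
    c = [B[j] * u[j] for j in range(n)]
    y = [B[j] * round(c[j] / B[j]) for j in range(n)]  # exact for diagonal B
    pairs.append((u, c, y))

# Steps 2-3: a_i = [q B^{-1} c_i] = [q u_i] and chat_i = (1/q) B a_i.
A = [[math.floor(q * u[j]) for (u, c, y) in pairs] for j in range(n)]
Chat = [[B[j] * A[j][i] / q for i in range(m)] for j in range(n)]

# Step 3 (continued): brute-force a short nonzero z with A z = 0 (mod q);
# such a z exists by the pigeonhole argument of Lemma 2.2.1.
z = next(z for z in product(range(-1, 2), repeat=m)
         if any(z) and all(sum(A[j][i] * z[i] for i in range(m)) % q == 0
                           for j in range(n)))

# Step 4: alpha = Chat z - Y z lies in L since both terms are lattice vectors.
Y = [[pairs[i][2][j] for i in range(m)] for j in range(n)]
alpha = [sum((Chat[j][i] - Y[j][i]) * z[i] for i in range(m)) for j in range(n)]
coords = [alpha[j] / B[j] for j in range(n)]  # coordinates of alpha in basis B
assert all(abs(x - round(x)) < 1e-9 for x in coords)
```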
The above oracle algorithm is a simple simulation of the reduction principle for the INCGDD problem, obtained by setting \(S=B\) and the target vector \(t=0\). Given any n linearly independent vectors \(S=\{\alpha _1,\alpha _2,\dots ,\alpha _n\}\subset L\) and a target vector \(t\in \mathbb {R}^n\), the general oracle algorithm \(\mathcal {A}(B,S,t)\) completes the whole technical process of transforming the INCGDD problem into the SIS problem, which is the core idea of the Ajtai reduction principle. We begin with two lemmas.
Lemma 2.3.1
(Sampling lemma) Let \(L=L(B)\subset \mathbb {R}^n\) be a full-rank lattice, F(B) be the basic neighbourhood, \(t\in \mathbb {R}^n\) be the target vector, \(s\geqslant \eta _{\epsilon }(L)\) be a positive real number. Then there exists a probabilistic polynomial time algorithm T(B, t, s) to find a pair of vectors \((c,y)\in F(B)\times L(B)\) such that
(i) The distribution of vector \(c\in F(B)\) is within statistical distance \(\frac{1}{2}\epsilon \) from the uniform distribution over F(B).
(ii) The conditional distribution of \(y\in L\) given c is discrete Gauss distribution \(D_{L,s,(t+c)}\).
Proof
The sampling algorithm T(B, t, s) proceeds as follows:
1. Since the density function of Gauss distribution \(D_{s,t}(x)\) is
the corresponding random variable is denoted \(D_{s,t}\). Let \(r\in \mathbb {R}^n\) be drawn from the distribution \(D_{s,t}\); r is called the noise vector.
2. Let \(c\in F(B)\), \(c\equiv -r\ (\text {mod}\ L)\), \(y=c+r\in L\) be output vectors, (c, y) be the output result.
Since r is generated by Gauss distribution in \(\mathbb {R}^n\), it follows that c has the distribution \(-D_{s,t}\ \text {mod}\ L\) in the basic neighbourhood F(B). We can prove
Then the statistical distance between c and the uniform distribution on F(B) is
On the other hand, \(y=c+r\in L\), if c is fixed, the distribution of \(y\in L\) is the discrete Gauss distribution \(D_{L,s,(t+c)}\). We complete the proof. \(\square \)
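For the one-dimensional lattice \(L=\mathbb {Z}\) (so \(F(B)=[0,1)\)), the two steps of the sampling algorithm T(B, t, s) take only a few lines; the values of t and s below are illustrative.

```python
import random

# Illustrative parameters for L = Z, F(B) = [0, 1).
random.seed(2)
t, s = 0.7, 3.0

def sample_pair():
    r = random.gauss(t, s)   # step 1: noise vector from the Gauss distribution
    c = (-r) % 1.0           # step 2: c = -r mod L lies in the basic neighbourhood
    y = c + r                #         y = c + r is a lattice point (an integer)
    return c, y

for _ in range(1000):
    c, y = sample_pair()
    assert 0.0 <= c <= 1.0
    assert abs(y - round(y)) < 1e-9
```

Given c, the value y ranges over the integers congruent to \(c+r\), which is the discrete Gauss distribution of the lemma.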
Lemma 2.3.2
(Combining lemma) Let q be a positive integer, \(L=L(B)\subset \mathbb {R}^n\) a full-rank lattice, and F(B) the basic neighbourhood. For any full-rank sublattice \(L(S)\subset L(B)\), where \(S=[\alpha _1,\alpha _2,\dots ,\alpha _n]\), there is a probabilistic polynomial time algorithm \(T_1(B,S)\) which, given m vectors \(C=[c_1,c_2,\dots ,c_m]\) uniformly at random in F(B), finds a uniformly distributed random matrix \(A\in \mathbb {Z}_q^{n\times m}\) and a lattice vector \(x\in L(B)\) such that
where \(z\in \mathbb {Z}^m\), and \(Az\equiv 0\ (\text {mod}\ q)\).
Proof
Suppose \(\{\alpha _1,\alpha _2,\dots ,\alpha _n\}\subset L\) are n linearly independent lattice vectors, and \(S=[\alpha _1,\alpha _2,\dots ,\alpha _n]\) generates the full-rank lattice \(L(S)\subset L(B)\). Let F(S) be the basic neighbourhood of lattice L(S). It is easy to see that \(F(B)\subset F(S)\). For any m vectors \(\{c_i\}_{i=1}^m\) uniformly distributed in F(B), we can choose m lattice vectors \(\{v_1,v_2,\dots ,v_m\}\subset L(B)\) by sampling lemma. The corresponding vector in the basic neighbourhood F(S) is denoted as \(v_i\ \text {mod}\ L(S)\), such that
In other words \(\{v_i\}\) is selected from the quotient group L(B)/L(S), satisfying \(v_i\not \equiv v_j\ (\text {mod}\ L(S))\), and \(\{v_i\ \text {mod}\ L(S)\}_{i=1}^m\) are uniformly distributed in F(S). We still write \(v_i\ (\text {mod}\ L(S))\) as \(v_i\), and let
It follows that \(\{w_i\}\) is uniformly at random in F(S). For \(1\leqslant i\ne j\leqslant m\), we have
so \(\{v_i+F(B)\}_{i=1}^m\) forms a partition of F(S) into pieces of equal volume. Hence \(\{w_i\}\subset F(S)\) is uniformly distributed because \(\{v_i\}\) is uniformly at random. Suppose the two matrices C and W are
Define m vectors uniformly distributed in \(\mathbb {Z}_q^n\) as
By Lemma 2.1.3, since \(\{w_i\}\) is uniformly at random in F(S), \(A=[a_1,a_2,\dots ,a_m]\) is a uniformly distributed \(n\times m\) matrix, \(A\in \mathbb {Z}_q^{n\times m}\). Let \(z\in \Lambda _q^{\bot }(A)\); then
Define the vector x
We first prove \(x\in L(B)\) is a lattice vector. From the definition of vector x, we have
Note that
since \(c_i+v_i\equiv w_i\ (\text {mod}\ L(S))\Rightarrow \)
and each \(v_i\) satisfies \(v_i\in L\), it follows that \(c_i-w_i\in L\), \(1\leqslant i\leqslant m\). On the other hand \(\frac{1}{q}Az\in \mathbb {Z}^n\), we get \(\frac{1}{q}SAz\in L(S)\). Thus, we confirm that \(x\in L\). Finally, we estimate the distance between x and Cz.
where \(u_i=qS^{-1}w_i\). It is easy to see, for any \(d=\begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix}\in \mathbb {R}^n\),
Since
by (2.3.17) and (2.3.18) we get
So we finish the proof. \(\square \)
2.4 Reduction Principle
The Ajtai reduction principle solves hard lattice problems in the worst case by way of the average case. For example, the SVP, SIVP and GapSVP problems can be transformed to the SIS problem by a polynomial time algorithm with positive probability, so the difficulty of the SIS problem is polynomially equivalent to that of these lattice problems. In academic circles this principle is called the Ajtai reduction principle from the worst case to the average case.
We start by proving that the INCGDD problem can be transformed to the SIS problem. Denote the \(\text {INCGDD}_{\gamma ,g}^{\phi }\) problem with its parameters as \(\{B,S,t,r\}\). For any n linearly independent vectors S in a full-rank lattice \(L=L(B)\) and any target vector \(t\in \mathbb {R}^n\), our goal is to find a lattice vector \(s\in L\) such that
where \(g>0\) is a positive real number, \(r>\gamma (n)\phi (B)\).
Theorem 2.4.1
(From INCGDD to SIS) Given parameters \(g=g(n)>0\); let \(m,\beta \) be polynomial functions of n, i.e. \(m=n^{O(1)}\), \(\beta =n^{O(1)}\); let \(\epsilon =\epsilon (n)\) be a negligible function of n, i.e. \(\epsilon <\frac{1}{n^{k}}\) for every \(k>0\); let \(\phi (B)=\eta _{\epsilon }(L)\); and
Under the above parameter system, there is a probabilistic polynomial algorithm, which could transform the \(\text {INCGDD}_{\gamma ,g}^{\phi }\) problem to the SIS problem.
Proof
The probabilistic polynomial time algorithm in Theorem 2.4.1 is called the oracle algorithm, written as \(\mathcal {A}(B,S,t)\). In the last section, we described the oracle algorithm in detail in the special case \(S=B\) with target vector \(t=0\). Now we give the working procedure of the general oracle algorithm \(\mathcal {A}(B,S,t)\), using sampling Lemma 2.3.1 and combining Lemma 2.3.2:
1. Select two integers j and \(\alpha \) uniformly at random, such that
For a given target vector \(t\in \mathbb {R}^n\), and positive integer j, we define m vectors \(t_i\ (1\leqslant i\leqslant m)\) as
2. For each \(i=1,2,\dots ,m\), according to the sampling algorithm \(T(B,t_i,\frac{2r}{\gamma })\) in Lemma 2.3.1, i.e. let \(t=t_i\), \(s=\frac{2}{\gamma }r\), we get
Note that \(r\geqslant \gamma (n)\phi (B)\), so
3. Define two matrices
4. Based on the given matrices \(S\subset L(B)\), \(C\in F(B)^m\) and the parameter q, we can find a uniform random matrix \(A\in \mathbb {Z}_q^{n\times m}\), a solution z of the corresponding SIS problem, and a lattice vector \(x\in L(B)\) by the combining algorithm in Lemma 2.3.2 satisfying
5. Let \(s=x-Yz\), then \(s\in L(B)\) is a solution of the INCGDD problem, such that
holds with positive probability. The above oracle algorithm \(\mathcal {A}(B,S,t)\) can be represented by the following diagram
Since \(x,Yz\in L(B)\), it follows that \(s=x-Yz\in L(B)\). Next we estimate the probability that the inequality \(|s-t|\leqslant \frac{1}{g}|S|+r\) holds. Let \(\delta >0\) denote the probability of solving the SIS problem successfully. The event \(H_{j,\alpha }\) denotes obtaining a solution \(z=(z_1,z_2,\dots ,z_m)^T\) of the SIS problem with \(z_j=\alpha \), and its probability is \(\delta _{j,\alpha }\), where \(1\leqslant j\leqslant m\), \(-\beta \leqslant \alpha \leqslant \beta \), \(\alpha \ne 0\). If we obtain a solution z of the SIS problem, then at least one of these \(2m\beta \) events \(H_{j,\alpha }\) occurs. Therefore,
there is a pair of \(j,\alpha \) such that \(Pr\{H_{j,\alpha }\}=\delta _{j,\alpha }\geqslant \frac{\delta }{2m\beta }>0\). We assume that the event \(H_{j,\alpha }\) occurs and estimate the conditional probability of \(|s-t|\leqslant \frac{1}{g}|S|+r\). Let \(T=[t_1,t_2,\dots ,t_m]\), then \(Tz=t_j z_j=-t\). By the triangle inequality,
We have
Based on the sampling Lemma 2.3.1, \(y_i\) has discrete Gauss distribution \(D_{L(B),\frac{2r}{\gamma },c_i+t_i}\). According to Lemma 2.4.2 in Sect. 1.4, it follows that
and
Since \(y_1,y_2,\dots ,y_m\) are independent, by Lemma 4.6 in section 1.4,
Combining \(|z|\leqslant \beta \) and \(\gamma =\beta \sqrt{n}\), we get
Using Chebyshev inequality,
By (4.6),
Note that the above inequality holds under the assumption \(H_{j,\alpha }\), i.e.
Finally, we have the estimation
This means \(|s-t|\leqslant \frac{1}{g}|S|+r\) holds with a positive probability, so we complete the proof of Theorem 2.4.1. \(\square \)
In the above proof, we have completed the whole process of transforming the INCGDD problem to the SIS problem, and proved that the difficulty of the INCGDD problem is polynomially equivalent to that of the SIS problem. This realizes the reduction principle from the worst case to the average case, which is the main result of this section. Hard lattice problems such as SIVP and GapSVP can be transformed equivalently to the SIS problem by Theorems 5.19, 5.22 and 5.23 in Micciancio and Regev (2004); together with Theorem 2.4.1, the difficulty of these hard lattice problems is polynomially equivalent to that of the SIS problem. In addition, the following Theorem 2.4.2 provides another reduction from the SIVP problem to the SIS problem.
Theorem 2.4.2
(From SIVP to SIS) Let the parameter m be a polynomial function of n, i.e. \(m=n^{O(1)}\), \(\beta >0\), \(q\geqslant 2\beta n^{O(1)}\), \(\gamma =\beta n^{O(1)}\), then the difficulty of solving the \(\text {SIS}_{n,q,\beta ,m}\) problem by a probabilistic polynomial algorithm is not lower than that of the \(\text {SIVP}_{\gamma }\) problem.
Proof
We are to prove that if there is a probabilistic polynomial time algorithm solving the \(\text {SIS}_{n,q,\beta ,m}\) problem with positive probability, then the same holds for the \(\text {SIVP}_{\gamma }\) problem. In other words, we can find n linearly independent vectors \(S=\{s_i\}\subset L\) such that \(|S|=\max |s_i|\leqslant \gamma (n)\lambda _n(L)\). Starting from a set of linearly independent lattice vectors \(S\subset L\) (initially S is the generated matrix B of the lattice L), the idea of the reduction algorithm is to use the oracle algorithm to obtain a new set of linearly independent lattice vectors \(S'\subset L\) satisfying \(|S'|\leqslant |S|/2\). Repeating this process, we finally get a solution of the \(\text {SIVP}_{\gamma }\) problem. Let \(q\geqslant 2\beta f(n)\), where f(n) is a polynomial function of n. The reduction algorithm works as follows.
1. According to the sampling lemma and combining lemma, generate m short vectors \(v_i\in L\) in the basic neighbourhood of lattice L(S) such that \(|v_i|\leqslant |S|f(n)\), \(i=1,2,\dots ,m\), \(V=[v_1,v_2,\dots ,v_m]\).
2. Let \(A=B^{-1}V\ (\text {mod}\ q)\); by the combining lemma, A is uniformly distributed in \(\mathbb {Z}_q^{n\times m}\). Solve the SIS problem \(Az\equiv 0\ (\text {mod}\ q)\) with \(|z|\leqslant \beta \) and obtain a solution z.
3. Let \(s=Vz/q\). Repeat these three steps and generate enough vectors s so that there are n linearly independent vectors, denoted as \(s_1,s_2,\dots ,s_n\). Suppose the matrix \(S'\) is \(S'=[s_1,s_2,\dots ,s_n]\).
We are to prove that \(|S'|\leqslant |S|/2\). Firstly, note that \(s\in L\). This is because
so \(B(Az)\in qL\) and \(s=Vz/q=B(Az)/q\in L\). Secondly,
This means \(|S'|\leqslant |S|/2\). Replace S with \(S'\) and repeat the above three steps until \(|S'|\leqslant \gamma (n)\lambda _n(L)\), then we confirm that \(S'\) is a solution of the \(\text {SIVP}_{\gamma }\) problem. \(\square \)
At the end of this section, we show that the difficulty of some other hard problems on lattices is polynomially equivalent to that of the SIS problem. We give two more definitions of hard problems on lattices.
Definition 2.4.1
(1) \(\text {GIVP}_{\gamma }^{\phi }\): find a set of n linearly independent vectors \(S=\{s_i\}\subset L\), such that
where \(\gamma (n)\geqslant 1\) is a positive function of n, B is the generated matrix of L, and \(\phi \) is a real function of B.
(2) \(\text {GDD}_{\gamma }^{\phi }\): let \(t\in \mathbb {R}^n\) be a target vector, find a vector \(x\in L\), such that
where B is the generated matrix of L, and \(\phi \) is a real function of B.
If \(\phi =\lambda _n\) is the nth successive minimum of the lattice L, the \(\text {GIVP}_{\gamma }^{\phi }\) problem in the above definition becomes the \(\text {SIVP}_{\gamma }\) problem of Definition 2.2.3. We now give two lemmas showing that the above two problems can be reduced to the SIS problem.
Lemma 2.4.1
For any function \(\gamma (n)\geqslant 1\) and \(\phi \), there is a polynomial reduction algorithm from the \(\text {GIVP}_{8\gamma }^{\phi }\) problem to the \(\text {INCGDD}_{\gamma ,8}^{\phi }\) problem.
Proof
Suppose B is a generated matrix of lattice L. Our goal is to find a set of n linearly independent vectors \(S=\{s_i\}\subset L\) such that \(|S|\leqslant 8\gamma (n)\phi (B)\).
We use the idea of iteration to achieve this goal. Initially, let \(S=B\). If S satisfies the above condition, then the solution has been found. If not, write \(S=[s_1,s_2,\dots ,s_n]\) and suppose that \(|s_n|=\max _{1\leqslant i\leqslant n}|s_i|=|S|\),
i.e. \(s_n\) is the longest vector among \(s_1,s_2,\dots ,s_n\). Let t be a vector orthogonal to \(s_1,s_2,\dots ,s_{n-1}\) with \(|t|=|S|/2=|s_n|/2\); such a t can be constructed by the Schmidt orthogonalization method. Based on the reduction algorithm in Theorem 2.4.1, we solve the INCGDD problem with parameters \(\{B,S,t,|S|/8\}\). If the algorithm fails, then we have \(|S|/8<\gamma (n)\phi (B)\), i.e. \(|S|<8\gamma (n)\phi (B)\).
This implies S is a solution of the \(\text {GIVP}_{8\gamma }^{\phi }\) problem. If the reduction algorithm solves the INCGDD problem successfully, then we get a vector \(u\in L\), such that \(|u-t|\leqslant \frac{|S|}{8}+\frac{|S|}{8}=\frac{|S|}{4}\).
It follows that \(|u|\leqslant |u-t|+|t|\leqslant \frac{|S|}{4}+\frac{|S|}{2}=\frac{3|S|}{4}<|S|\).
It is easy to verify that \(u,s_1,s_2,\dots ,s_{n-1}\) are linearly independent. Otherwise, u lies in the span of \(s_1,s_2,\dots ,s_{n-1}\) and is therefore orthogonal to t, since t is orthogonal to \(s_1,s_2,\dots ,s_{n-1}\). Thus, \(|u-t|^2=|u|^2+|t|^2\geqslant |t|^2=\frac{|S|^2}{4}\), so \(|u-t|\geqslant \frac{|S|}{2}>\frac{|S|}{4}\).
It is a contradiction. So \(u,s_1,s_2,\dots ,s_{n-1}\) are linearly independent. Let \(S'=[s_1,s_2,\dots ,s_{n-1},u]\); then \(|S'|<|S|\). Repeat the above process for \(S'\), and we finally get a solution of the \(\text {GIVP}_{8\gamma }^{\phi }\) problem. Lemma 2.4.1 holds. \(\square \)
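The iteration in this proof can be sketched in code. This is an illustration only, specialised to \(n=2\): the INCGDD oracle below is a mock that brute-forces small lattice combinations, standing in for the SIS-based reduction of Theorem 2.4.1, and the basis B and parameter values are hypothetical.

```python
from itertools import product
from math import dist, hypot

def norm(v):
    return hypot(*v)

def incgdd_oracle(B, S, t, r, g=8):
    """Mock INCGDD solver for n = 2: brute-force a lattice vector x = B a
    with |x - t| <= |S|/g + r, or return None ("failure")."""
    radius = max(norm(s) for s in S) / g + r
    for a in product(range(-9, 10), repeat=2):
        x = (B[0][0] * a[0] + B[0][1] * a[1],
             B[1][0] * a[0] + B[1][1] * a[1])
        if dist(x, t) <= radius:
            return x
    return None

def givp_via_incgdd(B, gamma_phi):
    """The iteration of Lemma 2.4.1, specialised to n = 2."""
    S = [(B[0][0], B[1][0]), (B[0][1], B[1][1])]   # columns of B
    while max(norm(s) for s in S) > 8 * gamma_phi:
        S.sort(key=norm)
        s1, sn = S                                  # sn is the longest
        # t orthogonal to s1 with |t| = |sn|/2 (Schmidt orthogonalization)
        mu = (sn[0] * s1[0] + sn[1] * s1[1]) / (s1[0] ** 2 + s1[1] ** 2)
        w = (sn[0] - mu * s1[0], sn[1] - mu * s1[1])
        c = norm(sn) / (2 * norm(w))
        t = (c * w[0], c * w[1])
        u = incgdd_oracle(B, S, t, max(norm(s) for s in S) / 8)
        if u is None:   # failure already certifies |S| < 8*gamma*phi(B)
            break
        S = [s1, u]     # |u| <= 3|S|/4 < |S|, and u, s1 are independent
    return S

B = [[1, 4],            # a deliberately skewed basis
     [0, 1]]
S = givp_via_incgdd(B, gamma_phi=0.5)
print(S)
```

One pass through the loop replaces the long column \((4,1)\) by a lattice vector of norm at most \(3|S|/4\) near the orthogonal target t, after which the stopping condition \(|S|\leqslant 8\gamma (n)\phi (B)=4\) holds.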
Lemma 2.4.2
For any function \(\gamma (n)\geqslant 1\) and \(\phi \), there is a polynomial reduction algorithm from the \(\text {GDD}_{3\gamma }^{\phi }\) problem to the \(\text {INCGDD}_{\gamma ,8}^{\phi }\) problem.
Proof
Assume B is a generated matrix of lattice L and \(t\in \mathbb {R}^n\) is the target vector. Our goal is to find \(x\in L\), such that \(|x-t|\leqslant 3\gamma (n)\phi (B)\).
According to Lemma 2.4.1, we can find a set of n linearly independent vectors \(S=\{s_i\}\subset L\) such that \(|S|\leqslant 8\gamma (n)\phi (B)\). Let r be a real number such that the reduction algorithm fails on the INCGDD instance with parameters \(\{B,S,t,r/2\}\) but succeeds on \(\{B,S,t,r\}\) and outputs a solution x; such an r can be found by repeatedly halving a sufficiently large initial radius. Failure at r/2 guarantees \(r/2<\gamma (n)\phi (B)\), i.e. \(r<2\gamma (n)\phi (B)\). It follows that \(|x-t|\leqslant \frac{|S|}{8}+r\leqslant \gamma (n)\phi (B)+2\gamma (n)\phi (B)=3\gamma (n)\phi (B)\).
So we get a solution of the \(\text {GDD}_{3\gamma }^{\phi }\) problem. We complete the proof. \(\square \)
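The halving search for the radius r can be sketched as follows. The oracle here is a stub that, like the real reduction of Theorem 2.4.1, is guaranteed to succeed whenever \(r\geqslant \gamma (n)\phi (B)\); the value `GAMMA_PHI` and the returned placeholder vector are illustrative assumptions.

```python
GAMMA_PHI = 1.0   # stands for gamma(n)*phi(B); known only inside the stub

def stub_incgdd(r):
    """Stand-in INCGDD oracle: succeeds exactly when r >= gamma(n)*phi(B).
    (The real reduction may also succeed below that bound; failure there
    is what certifies r/2 < gamma(n)*phi(B) in the proof.)"""
    return ("lattice vector found at radius", r) if r >= GAMMA_PHI else None

def find_radius(oracle, r0=2.0 ** 10):
    """Halving search of Lemma 2.4.2: return r with oracle(r) succeeding
    but oracle(r/2) failing, together with the vector found at radius r."""
    x = oracle(r0)
    assert x is not None          # start from a radius that succeeds
    r = r0
    while (nxt := oracle(r / 2)) is not None:
        r, x = r / 2, nxt
    return r, x

r, x = find_radius(stub_incgdd)
print(r)   # now r/2 < gamma(n)*phi(B) <= r, so |x - t| <= |S|/8 + r
```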
In Lemma 2.4.1 and Lemma 2.4.2, we reduced the \(\text {GIVP}_{\gamma }^{\phi }\) and \(\text {GDD}_{\gamma }^{\phi }\) problems to the \(\text {INCGDD}_{\gamma ,g}^{\phi }\) problem, and Theorem 2.4.1 tells us the difficulty of the \(\text {INCGDD}_{\gamma ,g}^{\phi }\) problem is polynomially equivalent to that of the SIS problem. So we have proved that the \(\text {GIVP}_{\gamma }^{\phi }\) and \(\text {GDD}_{\gamma }^{\phi }\) problems are polynomially equivalent to the SIS problem.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
© 2023 The Author(s)
Zheng, Z., Tian, K., Liu, F. (2023). Reduction Principle of Ajtai. In: Modern Cryptography Volume 2. Financial Mathematics and Fintech. Springer, Singapore. https://doi.org/10.1007/978-981-19-7644-5_2