Abstract
In this chapter, we introduce the basic random theory of lattices, including the Fourier transform, the discrete Gauss measure, the smoothing parameter and some properties of the discrete Gauss distribution. Random lattices are a new research topic in lattice theory; so far, only a special class of random lattices, called Gauss lattices, has been defined and studied. We will introduce the Gauss lattice, define the smoothing parameter on it, and bound the statistical distance in terms of the smoothing parameter.
Let \(\mathbb {R}^n\) be the Euclidean space of dimension n and let \(x=\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}\), \(y=\begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}\) be two vectors of \(\mathbb {R}^n\). The inner product of x and y is defined as \(x\cdot y=\sum \limits _{i=1}^{n} x_i y_i\).
The Euclidean norm |x| of the vector x (also called the \(l_2\) norm) is defined as \(|x|=\sqrt{x\cdot x}=\left( \sum \limits _{i=1}^{n} x_i^2\right) ^{1/2}\).
Let \(B=(b_{ij})_{n\times n}\in \mathbb {R}^{n\times n}\) be an invertible square matrix of order n; the full-rank lattice L in \(\mathbb {R}^{n}\) generated by B is defined as \(L=L(B)=\{Bx\ |\ x\in \mathbb {Z}^n\}\).
A lattice L is a discrete geometry in \(\mathbb {R}^n\); in other words, there is a positive constant \(\lambda _1=\lambda _1(L)>0\) and a vector \(\alpha \in L\) with \(\alpha \ne 0\) such that \(|\alpha |=\lambda _1=\min \limits _{x\in L,\ x\ne 0} |x|\).
\(\lambda _1\) is called the shortest distance in L, \(\alpha \) is the shortest vector in L. A sphere in n dimensional Euclidean space \(\mathbb {R}^n\) with center \(x_0\) and radius r is defined as
In particular, N(0, r) denotes the sphere centered at the origin with radius r. The discreteness of a lattice is equivalent to the fact that the intersection of L with any sphere \(N(x_0,r)\) is a finite set, i.e.
Let \(L=L(B)\) be a lattice with generator matrix B. Writing B in columns as \(B=[\beta _1,\beta _2,\dots ,\beta _n]\), the basic neighborhood F(B) of L is defined as \(F(B)=\{a_1\beta _1+a_2\beta _2+\cdots +a_n\beta _n\ |\ 0\leqslant a_i<1\}\).
Clearly the basic neighborhood F(B) depends on the generator matrix B of L; it is in fact a set of representative elements of the additive quotient group \(\mathbb {R}^n/L\). \(F^{*}(B)\) is also a set of representative elements of the quotient group \(\mathbb {R}^n/L\), where
therefore, \(F^{*}(B)\) can also serve as a basic neighborhood of the lattice L. The following property is easy to prove [see Lemma 2.6 in Chap. 7 of Zheng (2022)]:
That is, the volume of the basic neighborhood of L is an invariant: it does not change with the choice of the generator matrix B. We call \(\text {det}(L)=|\text {det}(B)|\) the determinant of the lattice L.
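The invariance of \(|\text {det}(B)|\) under a change of generator matrix can be checked numerically: two bases generate the same lattice exactly when they differ by a unimodular matrix, which leaves the determinant unchanged up to sign. The following sketch (plain Python, with a hypothetical \(2\times 2\) basis; all concrete numbers are illustrative assumptions) multiplies a basis B by a unimodular matrix U and compares determinants.

```python
# Hypothetical 2x2 example: det(L) = |det B| is invariant under a
# unimodular change of basis B' = B U with det(U) = +-1.

def det2(m):
    # determinant of a 2x2 matrix given as [[a, b], [c, d]]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    # product of two 2x2 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[2.0, 1.0], [0.0, 3.0]]      # generator matrix of L, det = 6
U = [[1, 1], [0, 1]]              # unimodular: det(U) = 1
B2 = matmul2(B, U)                # another generator matrix of the same L

print(abs(det2(B)))   # 6.0
print(abs(det2(B2)))  # 6.0
```

Any integer matrix U with \(\det U=\pm 1\) would serve equally well here; the two printed values always agree.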
The basic properties of lattices can be found in Chap. 7 of Zheng (2022). The main purpose of this chapter is to establish the random theory of lattices. If a lattice L is the space of values of a random variable (or random vector), it is called a random lattice. Random lattices are a new research topic in lattice theory, and the works of Micciancio and Regev (2004, 2009) and Regev (2004) are pioneering; the study of random lattices thus spans barely more than a decade. For technical reasons, only a special class of random lattices has been defined and studied: one considers a random variable \(\xi \) on \(\mathbb {R}^n\) with a Gauss distribution and restricts the discretization of \(\xi \) to L, so that L becomes a random lattice. This special kind of random lattice we call the Gauss lattice. The main purpose of this chapter is to introduce the Gauss lattice, define the smoothing parameter on it and calculate the statistical distance based on the smoothing parameter. The main mathematical technique used in this chapter is the high dimensional Fourier transform.
1.1 Fourier Transform
A complex function f(x) on \(\mathbb {R}^n\) is a mapping \(\mathbb {R}^n \rightarrow \mathbb {C}\), where \(\mathbb {C}\) is the complex field. We define the function spaces \(L^1(\mathbb {R}^n)=\{f\ |\ \int \limits _{\mathbb {R}^n} |f(x)|\, \textrm{d}x<\infty \}\) and \(L^2(\mathbb {R}^n)=\{f\ |\ \int \limits _{\mathbb {R}^n} |f(x)|^2\, \textrm{d}x<\infty \}\).
If \(f(x),g(x)\in L^1(\mathbb {R}^n)\), define the convolution of f with g as \(f*g(x)=\int \limits _{\mathbb {R}^n} f(y)g(x-y)\, \textrm{d}y\).
We have the following properties about convolution.
Lemma 1.1.1
Suppose \(f(x),g(x)\in L^1(\mathbb {R}^n)\), then
(i) \(f*g(x)=g*f(x)\).
(ii) \(\int \limits _{\mathbb {R}^n} f*g(x) \textrm{d}x=\int \limits _{\mathbb {R}^n} f(x) \textrm{d}x\cdot \int \limits _{\mathbb {R}^n} g(x) \textrm{d}x\).
Proof
By the definition of convolution (1.1.3), we have
Property (i) holds. To obtain the second result (ii), we have
The lemma is proved. \(\square \)
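Property (ii) of Lemma 1.1.1 can be sanity-checked numerically in dimension one. The sketch below replaces the integrals over \(\mathbb {R}\) by midpoint Riemann sums on an assumed truncation window \([-6,6]\); the two Gauss functions and all grid sizes are ad hoc illustrative choices.

```python
import math

# Numeric check of Lemma 1.1.1(ii) in one dimension:
# the integral of f*g equals the product (integral of f) * (integral of g).
# Midpoint Riemann sums on a truncated window stand in for integrals over R.

def riemann(func, lo, hi, n):
    h = (hi - lo) / n
    return sum(func(lo + (k + 0.5) * h) for k in range(n)) * h

f = lambda x: math.exp(-math.pi * x * x)        # a Gauss function
g = lambda x: math.exp(-2 * math.pi * x * x)    # another Gauss function

def conv(x):
    # (f*g)(x) = integral of f(y) g(x - y) dy, truncated to [-6, 6]
    return riemann(lambda y: f(y) * g(x - y), -6, 6, 600)

lhs = riemann(conv, -6, 6, 300)
rhs = riemann(f, -6, 6, 600) * riemann(g, -6, 6, 600)
print(abs(lhs - rhs) < 1e-6)   # True
```

The rapid Gaussian decay makes the truncation error negligible, so both sides agree to high precision.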
Definition 1.1.1
If \(f(x)\in L^1(\mathbb {R}^n)\), define the Fourier transform of f(x) as \(\hat{f}(x)=\int \limits _{\mathbb {R}^n} f(\xi ){\text {e}}^{-2\pi i x\cdot \xi }\, \textrm{d}\xi \).
Note that \(f\rightarrow \hat{f}\) is an operator on the function space \(L^1(\mathbb {R}^n)\), called the Fourier operator. If \(f(x)=f_1(x_1)f_2(x_2)\cdots f_n(x_n)\), then the high dimensional Fourier transform reduces to a product of one dimensional Fourier transforms, i.e.
The following are some of the most common and fundamental properties of Fourier transform.
Lemma 1.1.2
Suppose \(f(x)\in L^1(\mathbb {R}^n), g(x)\in L^1(\mathbb {R}^n)\), then
(i) \(\widehat{f*g}(x)=\hat{f}(x)\hat{g}(x)\).
(ii) \(a\in \mathbb {R}^n\) is a given vector, denote \(\tau _a f\) as the coordinate translation function, i.e. \(\tau _a f(x)=f(x+a)\), \(\forall x\in \mathbb {R}^n\). Then we have \(\widehat{\tau _a f}(x)={\text {e}}^{2\pi i x\cdot a}\hat{f}(x)\).
(iii) Let \(h(x)={\text {e}}^{2\pi i x\cdot a}f(x)\), thus \(\hat{h}(x)=\hat{f}(x-a)\).
(iv) Let \(\delta \ne 0\) be a real number and \(f_{\delta }(x)=f(\frac{1}{\delta }x)\), then \(\hat{f}_{\delta }(x)=|\delta |^n \hat{f}_{\delta ^{-1}}(x)=|\delta |^n \hat{f}(\delta x)\).
(v) Let A be an invertible real matrix of order n, namely \(A\in GL_n(\mathbb {R})\), define \(f\circ A(x)=f(Ax)\). Then \(\widehat{f\circ A}(x)=|A|^{-1} \hat{f} \circ (A^{-1})^T (x)=|A|^{-1} \hat{f} ((A^{-1})^T x)\), where \(A^T\) is the transpose matrix of A.
Proof
By definition, we have
Taking variable substitution \(\xi -y=y'\), then \(\xi =y+y'\), and \(\textrm{d}\xi =\textrm{d}y'\), so we have
property (i) is proved. Based on the definition of Fourier transform, we have
property (ii) is proved. Similarly, we can obtain (iii). Next, we give the proof of (iv). Since \(\delta \ne 0\) and \(f_{\delta }(x)=f(\frac{1}{\delta }x)\), we have
By the condition \(A\in GL_n(\mathbb {R})\), \(f\circ A(x)=f(Ax)\), then
Taking variable substitution, \(y=A\xi \), then \(A^{-1}y=\xi \), and \(\textrm{d}\xi =|A|^{-1}\textrm{d}y\), so
Lemma 1.1.2 is proved. \(\square \)
Finally, we give some examples of the Fourier transform.
Example 1.1
Let \(n=1\), \(a\in \mathbb {R}\), \(a>0\), and define the characteristic function \(1_{[-a,a]}(x)\) of the closed interval \([-a,a]\) by \(1_{[-a,a]}(x)=1\) if \(|x|\leqslant a\) and \(1_{[-a,a]}(x)=0\) otherwise.
Then \(\hat{1}_{[-a,a]}(x)=\frac{\sin (2\pi ax)}{\pi x}\).
For \(n>1\), let \(a=(a_1,a_2,\dots ,a_n)\in \mathbb {R}^n\); the square \([-a,a]\) is defined as \([-a,a]=\{x\in \mathbb {R}^n\ |\ -a_i\leqslant x_i\leqslant a_i,\ 1\leqslant i\leqslant n\}\).
Define the characteristic function \(1_{[-a,a]}(x)\) of the square \([-a,a]\) in the same way; then \(\hat{1}_{[-a,a]}(x)=\prod \limits _{i=1}^{n}\frac{\sin (2\pi a_i x_i)}{\pi x_i}\).
Proof
For the general n, it is clear that
Based on Eq. (1.1.5), we only need to prove Eq. (1.1.6). For \(n=1\) and \(a\in \mathbb {R}\), we have
\(\square \)
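The closed form \(\sin (2\pi ax)/(\pi x)\) for \(\hat{1}_{[-a,a]}\) in Example 1.1 can be verified numerically; the sketch below approximates the defining integral \(\int _{-a}^{a} {\text {e}}^{-2\pi i x t}\, \textrm{d}t\) by a midpoint sum. The grid size 4000 and the test point \(x=0.3\) are arbitrary choices.

```python
import cmath
import math

# Numeric check of Example 1.1 for n = 1: the Fourier transform of the
# indicator of [-a, a] is sin(2*pi*a*x) / (pi*x).

def ft_indicator(a, x, n=4000):
    # midpoint Riemann sum for the integral of e^{-2 pi i x t} over [-a, a]
    h = 2 * a / n
    total = 0j
    for k in range(n):
        t = -a + (k + 0.5) * h
        total += cmath.exp(-2j * math.pi * x * t)
    return total * h

a, x = 1.0, 0.3
closed_form = math.sin(2 * math.pi * a * x) / (math.pi * x)
print(abs(ft_indicator(a, x) - closed_form) < 1e-6)   # True
```

The imaginary parts cancel by symmetry of the grid about 0, so the numeric result is essentially real, as the closed form requires.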
Example 1.2
Let \(f(x)={\text {e}}^{-\pi |x|^2}\), \(x\in \mathbb {R}^n\), then \(f(x)\in L^1(\mathbb {R}^n)\), and \(\hat{f}(x)=f(x)\), namely f(x) is a fixed point of Fourier operator, which is also called a dual function.
Proof
Clearly, \(f(x)\in L^1(\mathbb {R}^n)\). To prove the fixed point property of f(x), by definition
By one dimensional Poisson integral,
we have the following high dimensional Poisson integral,
So we get \(\hat{f}(x)=f(x)\). \(\square \)
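Example 1.2 can likewise be tested numerically in dimension one: the Fourier transform of \({\text {e}}^{-\pi t^2}\), approximated on a truncated grid, reproduces the function itself. The truncation window \([-8,8]\) and the grid size are assumed ad hoc; the truncation error is negligible because of the Gaussian decay.

```python
import cmath
import math

# Numeric check of Example 1.2 in dimension 1: f(x) = e^{-pi x^2}
# is a fixed point of the Fourier operator.

def ft_gauss(x, lo=-8.0, hi=8.0, n=4000):
    # midpoint Riemann sum for the Fourier integral of e^{-pi t^2}
    h = (hi - lo) / n
    total = 0j
    for k in range(n):
        t = lo + (k + 0.5) * h
        total += math.exp(-math.pi * t * t) * cmath.exp(-2j * math.pi * x * t)
    return total * h

for x in (0.0, 0.5, 1.3):
    assert abs(ft_gauss(x) - math.exp(-math.pi * x * x)) < 1e-9
print("fixed point verified")
```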
1.2 Discrete Gauss Measure
From the behavior of \(f(x)={\text {e}}^{-\pi |x|^2}\) under the Fourier operator introduced in the last section and the high dimensional Poisson integral formula (1.1.9), we can generalize f(x), the density function of a random variable with the standard Gauss distribution, to a general Gauss distribution in \(\mathbb {R}^n\). We first discuss the Gauss function on \(\mathbb {R}^n\).
Definition 1.2.1
Let \(s>0\) be a given positive real number and \(c\in \mathbb {R}^n\) a vector. The Gauss function \(\rho _{s,c}(x)\) centered on c with parameter s is defined as \(\rho _{s,c}(x)={\text {e}}^{-\pi |x-c|^2/s^2}\), and \(\rho _s(x)=\rho _{s,0}(x)={\text {e}}^{-\pi |x|^2/s^2}\).
From the definition we have
and
It can be obtained from Poisson integral formula (1.1.9)
Lemma 1.2.1
The Fourier transforms of the Gauss functions \(\rho _s(x)\) and \(\rho _{s,c}(x)\) are \(\hat{\rho }_s(x)=s^n \rho _{1/s}(x)\) and \(\hat{\rho }_{s,c}(x)={\text {e}}^{-2\pi i c\cdot x}\hat{\rho }_s(x)={\text {e}}^{-2\pi i c\cdot x} s^n \rho _{1/s}(x)\).
Proof
By property (iv) of Lemma 1.1.2 and \(s>0\), we have
The last equation follows from Example 2 in the previous section, therefore, (1.2.4) holds. By the property (ii) of Lemma 1.1.2, we have
Lemma 1.2.1 is proved. \(\square \)
Lemma 1.2.2
\(\rho _{s,c}(x)\) is uniformly continuous in \(\mathbb {R}^n\), i.e. for any \(\epsilon >0\), there is \(\delta =\delta (\epsilon )\) such that for all \(x,y\in \mathbb {R}^n\) with \(|x-y|<\delta \), we have
Proof
By definition, \(0<\rho _{s,c}(x)\leqslant 1\), hence \(\rho _{s,c}(x)\) is uniformly bounded in \(\mathbb {R}^n\); we will prove that \(\rho _{s,c}^{'}(x)\) is also uniformly bounded in \(\mathbb {R}^n\). We only prove the case \(c=0\). Since \(\rho _s(x)=\rho _s(x_1)\rho _s(x_2)\cdots \rho _s(x_n)\), without loss of generality let \(n=1\), \(t\in \mathbb {R}\), then
When \(|t|\geqslant M\), it is clear
Hence, when \(|t|\geqslant M\), we have
For \(|t|<M\), the continuity of \(\rho _s^{'}(t)\) implies that \(\rho _s^{'}(t)\) is bounded. This proves that \(\rho _{s,c}^{'}(x)\) is uniformly bounded in \(\mathbb {R}^n\). Let \(|\rho _{s,c}^{'}(x)|\leqslant M_0,\ \forall x\in \mathbb {R}^n\). By the differential mean value theorem, we have
Let \(\delta =\frac{\epsilon }{M_0}\), then
We finish the proof of the lemma. \(\square \)
Definition 1.2.2
For \(s>0\), \(c\in \mathbb {R}^n\), define the continuous Gauss density function \(D_{s,c}(x)\) as \(D_{s,c}(x)=\rho _{s,c}(x)/s^n\).
The definition gives that
Thus, a continuous Gauss density function \(D_{s,c}(x)\) corresponds to a continuous random vector with a Gauss distribution in \(\mathbb {R}^n\), and this correspondence is one-to-one.
Definition 1.2.3
Suppose \(f(x):\mathbb {R}^n \rightarrow \mathbb {C}\) is a function of n variables and \(A\subset \mathbb {R}^n\) is a finite or countable set in \(\mathbb {R}^n\); define f(A) as \(f(A)=\sum \limits _{x\in A} f(x)\).
The continuous Gauss density function \(D_{s,c}(x)\) is also called the continuous Gauss measure. In order to pass from a continuous measure to a discrete measure and define random variables on a discrete geometry in \(\mathbb {R}^n\), the following lemma is an important theoretical support.
Lemma 1.2.3
Let \(L\subset \mathbb {R}^n \) be a full-rank lattice, then \(D_{s,c}(L)<\infty \).
Proof
From definition,
By the property of the exponential function \({\text {e}}^t\), there exists a constant \(M_{0}>0\), when \(|x-c|>M_0\),
Thus, we can divide the points on the lattice L into two sets. Let
and
From (1.0.6) we have
Based on (1.2.8),
Since \(A_2\) is a countable set, the right hand side of the above inequality is clearly a convergent series. Combining the above two estimates, we have \(D_{s,c}(L)<\infty \), and the lemma is proved. \(\square \)
To give a clearer explanation of (1.2.9), we provide another proof of Lemma 1.2.3. First we prove the following lemma.
Lemma 1.2.4
Let \(A\in \mathbb {R}^{n\times n}\) be an invertible square matrix of order n, so that \(T=A^T A\) is a positive definite real symmetric matrix. Let \(\delta \) be the smallest eigenvalue of T and \(\delta ^{*}\) the largest eigenvalue of T. Then \(0<\delta \leqslant \delta ^{*}\), and \(\sqrt{\delta }\leqslant |Ax|\leqslant \sqrt{\delta ^{*}}\) for all \(x\in S\),
where \(S=\{x\in \mathbb {R}^n\ |\ |x|=1\}\) is the unit sphere in \(\mathbb {R}^n\).
Proof
Since T is a positive definite real symmetric matrix, all eigenvalues \(\delta _1,\delta _2,\dots ,\delta _n\) of T are positive, and there is an orthogonal matrix P such that
Hence,
Since \(P^T TP\) is a diagonal matrix, we have
If \(x\in S\), then \(|P^T x|=|x|=1\), so we have \(\sqrt{\delta }\leqslant |Ax|\leqslant \sqrt{\delta ^{*}}\). \(\square \)
By Lemma 1.2.4, since S is a compact set and |Ax| is a continuous function on S, |Ax| attains its maximum value on S. This maximum value is defined as \(\Vert A\Vert \),
We call \(\Vert A \Vert \) the matrix norm of A, and Lemma 1.2.4 shows that
Another proof of Lemma 1.2.3: Let \(L=L(B)\) be any full-rank lattice with generator matrix B. By definition we have
From Lemma 1.2.4,
Let \(x=By\), and let \(\delta ^{*}\) be the largest eigenvalue of \((B^{-1})^T B^{-1}\); we have
The property of the exponential function implies that,
Since
Denote \(x=(x_1,\dots ,x_n)\), \(B^{-1}c=(u_1,\dots ,u_n)\), then
By (1.2.15),
every infinite series on the right hand side of the above equation converges; hence \(D_{s,c}(L)<\infty \). \(\square \)
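The convergence asserted in Lemma 1.2.3 is easy to observe numerically for \(L=\mathbb {Z}^2\): partial sums of \(\rho _{s,c}(L)\) over boxes of growing radius stabilize almost immediately. The parameters s, c and the radii below are illustrative assumptions.

```python
import math

# Illustration of Lemma 1.2.3 for L = Z^2: partial sums of
# rho_{s,c}(L) = sum over x in L of e^{-pi |x - c|^2 / s^2}
# stabilize quickly as the truncation radius grows.

def rho_partial(s, c, radius):
    total = 0.0
    for x1 in range(-radius, radius + 1):
        for x2 in range(-radius, radius + 1):
            d2 = (x1 - c[0]) ** 2 + (x2 - c[1]) ** 2
            total += math.exp(-math.pi * d2 / (s * s))
    return total

s, c = 1.0, (0.3, -0.4)
vals = [rho_partial(s, c, r) for r in (2, 4, 8, 16)]
print(vals[-1] - vals[-2] < 1e-12)   # the tail beyond radius 8 is negligible
```

The partial sums form a nondecreasing, rapidly converging sequence, exactly as the Gaussian decay in the proof predicts.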
By Lemma 1.2.3, we may define the discrete Gauss density function \(D_{L,s,c}(x)\) as \(D_{L,s,c}(x)=\frac{\rho _{s,c}(x)}{\rho _{s,c}(L)},\ x\in L\).
Trivially, we have
So \(D_{L,s,c}(x)\) corresponds to a random variable with a Gauss distribution defined on the lattice L (a discrete geometry) with parameters s and c.
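A minimal sampler for \(D_{L,s,c}\) in the simplest case \(L=\mathbb {Z}\) can be sketched as follows: truncate the support to a window around c (outside which the tail mass of \(\rho _{s,c}\) is negligible), normalize the weights, and sample by inversion. The window width \(12s\) and the test parameters are assumptions for illustration, not part of the text.

```python
import math
import random

# Sketch of a sampler for the discrete Gauss distribution D_{Z,s,c}:
# weights rho_{s,c}(x) = e^{-pi (x - c)^2 / s^2} on a truncated window,
# normalized and sampled by inversion.

def sample_discrete_gauss(s, c, rng):
    lo = int(math.floor(c - 6 * s))
    hi = int(math.ceil(c + 6 * s))
    xs = list(range(lo, hi + 1))
    weights = [math.exp(-math.pi * (x - c) ** 2 / (s * s)) for x in xs]
    total = sum(weights)
    u = rng.random() * total          # inversion sampling
    acc = 0.0
    for x, w in zip(xs, weights):
        acc += w
        if u <= acc:
            return x
    return xs[-1]

rng = random.Random(1)
draws = [sample_discrete_gauss(2.0, 0.5, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)
print(abs(mean - 0.5) < 0.1)   # empirical mean lies near the center c
```

For \(c=0.5\) the lattice points of \(\mathbb {Z}\) are symmetric about c, so the exact mean is c; the empirical mean reflects this.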
Definition 1.2.4
Let \(L=L(B)\subset \mathbb {R}^n\) be a lattice with full rank, \(s>0\) a given positive real number and \(c\in \mathbb {R}^n\) a given vector. Define the discrete Gauss measure function \(g_{L,s,c}(x)\) as a function on the basic neighborhood F(B) of L by \(g_{L,s,c}(x)=\sum \limits _{y\in L} D_{s,c}(x+y),\ x\in F(B)\).
By Definition and (1.2.3), it is clear that
Thus, the density function \(g_{L,s,c}(x)\) defined on the basic neighborhood F(B) corresponds to a continuous random variable on F(B), denoted as \(D_{s,c} {\text {mod}} L\).
Lemma 1.2.5
The random variable \(D_{s,c}\ \text {mod}\ L\) is actually defined on the additive quotient group \(\mathbb {R}^{n}/L\).
Proof
F(B) is a set of representative elements of the additive quotient group \(\mathbb {R}^n/L\), so it suffices to prove that the discrete Gauss function \(g_{L,s,c}(x)\) takes the same value on any set of representative elements of \(\mathbb {R}^n/L\); then \(D_{s,c}\ \text {mod}\ L\) can be regarded as a random variable on the additive quotient group \(\mathbb {R}^n/L\). In fact, if \(x_1,x_2\in \mathbb {R}^n\) and \(x_1\equiv x_2\ (\text {mod}\ L)\), then \(g_{L,s,c}(x_1)=g_{L,s,c}(x_2)\). To obtain this, by definition
Since \(x_1=x_2+y_0\), where \(y_0\in L\), so
By \(x_1\equiv x_2\ (\text {mod}\ L)\), then \(\bar{x_1}=\bar{x_2}\) are the same additive cosets in the quotient group \(\mathbb {R}^n/L\). Thus, the discrete Gauss measure \(g_ {L, s,c}(x)\) can be defined on any basic neighborhood of L, and the corresponding random variable \(D_{s,c}\ \text {mod}\ L\) is actually defined on the quotient group \(\mathbb {R}^n/L\). Â Â Â \(\square \)
1.3 Smoothing Parameter
For a given full-rank lattice \(L\subset \mathbb {R}^n\), in the previous section we defined the discrete Gauss measure \(g_{L,s,c}(x)\) and the corresponding continuous random variable \(D_{s,c}\ \text {mod}\ L\) on the basic neighborhood F(B) of L. In this section, we discuss an important parameter on the Gauss lattice: the smoothing parameter. The concept of the smoothing parameter was introduced by Micciancio and Regev (2004). For a given vector \(x\in \mathbb {R}^n\), we have the following lemma.
Lemma 1.3.1
For a given lattice \(L\subset \mathbb {R}^n\), we have
or equally
Proof
By the property of the exponential function, when \(|x|>M_0\) (\(M_0\) is a positive constant) then
So
The first part of the equation above only has a finite number of terms, so
The second part of the above equation is a convergent series, therefore,
This completes the proof. \(\square \)
By Definition 1.2.3, we have \(\rho _{1/s}(L)=\sum \limits _{x\in L}\rho _{1/s}(x)\), so \(\rho _{1/s}(L)\) is a monotone decreasing function of s. When \(s\rightarrow \infty \), \(\rho _{1/s}(L)\) decreases monotonically to 1. This motivates the definition of the smoothing parameter.
Definition 1.3.1
Let \(L\subset \mathbb {R}^n\) be a lattice with full rank and \(L^{*}\) the dual lattice of L. The smoothing parameter \(\eta _{\epsilon }(L)\) of L is defined as follows: for any \(\epsilon >0\), \(\eta _{\epsilon }(L)=\min \{s>0\ |\ \rho _{1/s}(L^{*}\backslash \{0\})\leqslant \epsilon \}\).
Equally,
By definition, the smoothing parameter \(\eta _{\epsilon }(L)\) of L is a monotone decreasing function of \(\epsilon \), namely \(\eta _{\epsilon _1}(L)\geqslant \eta _{\epsilon _2}(L)\) whenever \(0<\epsilon _1\leqslant \epsilon _2\).
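For \(L=\mathbb {Z}^n\) the dual lattice is again \(\mathbb {Z}^n\), so the smoothing parameter can be computed numerically from the defining quantity: \(\rho _{1/s}(L^{*}\backslash \{0\})=\sum _{x\ne 0}{\text {e}}^{-\pi s^2|x|^2}\) is monotone decreasing in s, and \(\eta _{\epsilon }(L)\) can be located by bisection. A sketch for \(n=2\), where the truncation radius and bisection bounds are ad hoc choices:

```python
import math

# Numeric illustration of Definition 1.3.1 for L = Z^2 (self-dual, L* = Z^2):
# the dual tail rho_{1/s}(L* \ {0}) decreases in s, and eta_eps(L) is the
# smallest s pushing it below eps, found here by bisection.

def rho_dual_tail(s, radius=20):
    total = 0.0
    for x1 in range(-radius, radius + 1):
        for x2 in range(-radius, radius + 1):
            if (x1, x2) != (0, 0):
                total += math.exp(-math.pi * s * s * (x1 * x1 + x2 * x2))
    return total

def eta(eps, lo=0.05, hi=10.0):
    # bisection on the monotone tail: invariant rho(lo) > eps >= rho(hi)
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if rho_dual_tail(mid) > eps else (lo, mid)
    return hi

e = eta(0.01)
print(rho_dual_tail(e) <= 0.01 <= rho_dual_tail(e * 0.99))  # True
```

The dominant contribution comes from the four shortest dual vectors, so \(\eta _{0.01}(\mathbb {Z}^2)\) sits near the solution of \(4{\text {e}}^{-\pi s^2}=0.01\), about 1.38.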
Definition 1.3.2
Let \(A\subset \mathbb {R}^n\) be a finite or countable set and X, Y two discrete random variables on A; the statistical distance between X and Y is defined as \(\Delta (X,Y)=\frac{1}{2}\sum \limits _{x\in A} |Pr\{X=x\}-Pr\{Y=x\}|\).
If A is a continuous region in \(\mathbb {R}^n\), X and Y are continuous random variables on A, and \(T_{1}(x)\) and \(T_{2}(x)\) are the density functions of X and Y, respectively, then the statistical distance between X and Y is defined as \(\Delta (X,Y)=\frac{1}{2}\int \limits _{A} |T_1(x)-T_2(x)|\, \textrm{d}x\).
It can be proved that for any function f defined on A, we have \(\Delta (f(X),f(Y))\leqslant \Delta (X,Y)\).
From (1.2.17) in the last section, \(D_{s,c}\ \text {mod}\ L\) is a continuous random variable defined on the basic neighborhood F(B) of the lattice L with the density function \(g_{L,s,c}(x)\). Let U(F(B)) be a uniform random variable defined on F(B) with the density function \(d(x)=\frac{1}{\text {det}(L)}\). The main result of this section is that the statistical distance between \(D_{s,c}\ \text {mod}\ L\) and the uniform distribution U(F(B)) can be arbitrarily small.
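In the discrete case, the statistical distance is straightforward to compute. The helper below assumes the usual normalization \(\Delta (X,Y)=\frac{1}{2}\sum _{x}|Pr\{X=x\}-Pr\{Y=x\}|\) and represents distributions as dictionaries; the two example distributions are invented for illustration.

```python
# A small helper matching the discrete case of Definition 1.3.2, assuming
# the normalization Delta(X, Y) = (1/2) * sum |Pr{X=x} - Pr{Y=x}|.

def stat_distance(p, q):
    # p, q: dicts mapping points of A to probabilities
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

uniform = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
biased  = {0: 0.40, 1: 0.30, 2: 0.20, 3: 0.10}

d = stat_distance(uniform, biased)
print(abs(d - 0.2) < 1e-12)                  # True
print(stat_distance(uniform, uniform))       # 0.0
```

Taking the union of the supports lets the helper handle distributions defined on different finite subsets of A.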
Theorem 1.1
For any \(s>0\), given a lattice with full rank \(L=L(B)\subset \mathbb {R}^n\) with dual lattice \(L^*\), the statistical distance between the discrete Gauss distribution and the uniform distribution on the basic neighborhood F(B) satisfies \(\Delta (D_{s,c}\ \text {mod}\ L,\ U(F(B)))\leqslant \frac{1}{2}\rho _{1/s}(L^{*}\backslash \{0\})\).
In particular, for any \(\epsilon >0\) and any \(s\geqslant \eta _{\epsilon }(L)\), we have \(\Delta (D_{s,c}\ \text {mod}\ L,\ U(F(B)))\leqslant \frac{\epsilon }{2}\).
To prove Theorem 1.1, we first introduce the following lemma.
Lemma 1.3.2
Suppose \(f(x)\in L^1(\mathbb {R}^n)\) and satisfies the following two conditions:
(i) \(\sum \limits _{x\in L} |f(x+u)|\) uniformly converges in any bounded closed region of \(\mathbb {R}^n\) (about u);
(ii) \(\sum \limits _{y\in L^*} |\hat{f}(y)|\) converges. Then \(\sum \limits _{x\in L} f(x)=\frac{1}{\text {det}(L)}\sum \limits _{y\in L^{*}} \hat{f}(y)\),
where \(L=L(B)\subset \mathbb {R}^n\) is a full-rank lattice, \(L^*\) is the dual lattice, \(\text {det}(L)=|\text {det}(B)|\) is the determinant of the lattice L.
Proof
We first consider the case of \(B=I_n\), here \(L=\mathbb {Z}^n\), \(L^*=\mathbb {Z}^n\). By condition (i), let F(u) be
Since F(u) is a periodic function of the lattice \(\mathbb {Z}^n\), namely \(F(u+x)=F(u)\), for \(\forall x\in \mathbb {Z}^n\), we have the following Fourier expansion
Integrating \(F(u){\text {e}}^{-2\pi i u\cdot x}\) for \(u\in [0,1]^n\):
Hence, we have the following Fourier inversion formula:
From the above equation and (1.3.7),
Take \(u=0\), we have
the lemma is proved for \(L=\mathbb {Z}^n\). For the general case \(L=L(B)\), since \(L^*=L((B^{-1})^T)\), then
where \(f\circ B(x)=f(Bx)\). Replace f(x) with \(f\circ B\), then \(f\circ B\) still satisfies the conditions of this lemma, so
From the definition of Fourier transform,
Take variable substitution \(t=B^{-1} x\), then
Above all,
We finish the proof of this lemma. \(\square \)
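Lemma 1.3.2 can be checked numerically for \(L=\mathbb {Z}\) and \(f=\rho _s\): in dimension one, \(\hat{\rho }_s(y)=s\,\rho _{1/s}(y)\) and \(\text {det}(\mathbb {Z})=1\), so Poisson summation reduces to the classical theta identity \(\sum _n {\text {e}}^{-\pi n^2/s^2}=s\sum _n {\text {e}}^{-\pi s^2 n^2}\). A truncated-sum verification (the radius 200 and the test values of s are ad hoc choices):

```python
import math

# Numeric check of the Poisson summation formula (Lemma 1.3.2) for L = Z and
# the Gauss function rho_s:
#   sum over n of e^{-pi n^2 / s^2}  =  s * sum over n of e^{-pi s^2 n^2}.

def theta(t, radius=200):
    # sum over n in Z of e^{-pi t n^2}, truncated at |n| = radius
    return 1.0 + 2.0 * sum(math.exp(-math.pi * t * n * n)
                           for n in range(1, radius + 1))

for s in (0.7, 1.0, 2.5):
    lhs = theta(1.0 / (s * s))
    rhs = s * theta(s * s)
    assert abs(lhs - rhs) < 1e-12
print("Poisson summation verified for L = Z")
```

Both sums converge so fast that the truncation error is far below the tolerance, and the two sides agree to machine precision.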
Proof of Theorem 1.1 The density function of the continuous random variable \(D_{s,c}\ \text {mod}\ L\) defined on the basic neighborhood F(B) of L is \(g_{L,s,c}(x)\); from Eq. (1.2.17) and Lemma 1.3.2, we have
By (1.2.5), the Fourier transform of \(\rho _{s,c-x}(y)\) is
Combining with Lemma 1.3.2, we obtain
The density function of the uniformly distributed random variable U(F(B)) on F(B) is \(\frac{1}{|\text {det}(B)|}\), based on the definition of statistical distance,
So (1.3.5) in Theorem 1.1 is proved. From the definition of smoothing parameter \(\eta _{\epsilon }(L)\), when \(s\geqslant \eta _{\epsilon }(L)\), we have
Therefore, if \(s\geqslant \eta _{\epsilon }(L)\), we have
Thus, Theorem 1.1 is proved. \(\square \)
Another application of Lemma 1.3.2 is to prove the following inequality.
Lemma 1.3.3
Let \(a\geqslant 1\) be a given positive real number, then
Proof
By Definition 1.2.1, the left hand side of the sum in the above inequality can be written as
Since \(\rho _s(x)\) satisfies the conditions of Lemma 1.3.2, we have
Obviously \(\rho _s(x)\) is a monotone increasing function of s, take \(s=\sqrt{a}\geqslant 1\), then
We complete the proof of Lemma 1.3.3. \(\square \)
Let \(N=N(0,1)\) be the unit sphere in \(\mathbb {R}^n\), namely
Lemma 1.3.4
Suppose \(L\subset \mathbb {R}^n\) is a lattice with full rank, \(c>\frac{1}{\sqrt{2\pi }}\) is a positive real number, \(C=c\sqrt{2\pi e}\cdot {\text {e}}^{-\pi c^2}\), \(v\in \mathbb {R}^n\), then
That is,
Proof
We will prove the first inequality. Let t be a positive real number with \(0<t<1\), then
In Lemma 1.3.3, take \(a=\frac{1}{t}\), then \(a>1\), we get
Hence,
It implies that
Let \(t=\frac{1}{2\pi c^2}\), then
The second inequality can be proved in the same way. Lemma 1.3.4 holds. \(\square \)
Based on the above inequality, we can give an upper bound estimation of the smoothing parameter on lattice, which is a very important result about the smoothing parameter.
Theorem 1.2
For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), we have \(\eta _{2^{-n}}(L)\leqslant \frac{\sqrt{n}}{\lambda _1(L^{*})}\),
where \(\lambda _1(L^*)\) is the minimal distance of the dual lattice \(L^*\) (see (1.0.4)).
Proof
Take \(c=1\) in Lemma 1.3.4; we first prove
If we take the logarithm of both sides, then
Since we have the following inequality,
So (1.3.12) holds. By Lemma 1.3.4, we have
From the both sides, we get
If \(s>\sqrt{n}/\lambda _1(L^*)\), for all \(x\in L^*\backslash \{0\}\),
Hence,
Take \(\epsilon =2^{-n}\), then
Theorem 1.2 is proved. \(\square \)
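For the self-dual lattice \(L=\mathbb {Z}^n\), where \(\lambda _1(L^{*})=1\), the bound of Theorem 1.2 can be confirmed directly: at \(s=\sqrt{n}\) the dual tail \(\rho _{1/s}(L^{*}\backslash \{0\})\) is already below \(2^{-n}\), so \(\eta _{2^{-n}}(\mathbb {Z}^n)\leqslant \sqrt{n}\). A truncated-sum check for small n (the truncation radius 10 is an ad hoc choice):

```python
import math
from itertools import product

# Sanity check of Theorem 1.2 on Z^n: at s = sqrt(n) / lambda_1(Z^n) = sqrt(n)
# the tail rho_{1/s}(Z^n \ {0}) is below eps = 2^{-n}.

def dual_tail_zn(s, n, radius=10):
    # sum over nonzero x in Z^n of e^{-pi s^2 |x|^2}, truncated per coordinate
    total = 0.0
    for x in product(range(-radius, radius + 1), repeat=n):
        if any(x):
            total += math.exp(-math.pi * s * s * sum(t * t for t in x))
    return total

for n in (1, 2, 3):
    assert dual_tail_zn(math.sqrt(n), n) <= 2.0 ** (-n)
print("Theorem 1.2 bound confirmed on Z^n for n = 1, 2, 3")
```

The margin is large: for \(n=2\), the tail at \(s=\sqrt{2}\) is about \(4{\text {e}}^{-2\pi }\approx 0.0075\), far below \(2^{-2}=0.25\).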
According to the proof of Theorem 1.2, we can further improve the upper bound estimation of the smoothing parameter.
Corollary 1.3.1
Let
Then for any full-rank lattice \(L\subset \mathbb {R}^n\), we obtain
Proof
Take \(c>r\) in Lemma 1.3.4, then \(c>\frac{1}{\sqrt{2\pi }}\), and
By Lemma 1.3.4, for any full-rank lattice \(L\subset \mathbb {R}^n\), we have
If \(s>c\sqrt{n}/\lambda _1(L^*)\), for any \(x\in L^*\backslash \{0\}\),
Hence,
Therefore,
Finally we have (let \(c\rightarrow r\))
Corollary 1.3.1 is proved. \(\square \)
Corollary 1.3.2
For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), we have
Proof
Take \(c=\frac{4}{5}\) in Lemma 1.3.4, then \(c>\frac{1}{\sqrt{2\pi }}\), and
Lemma 1.3.4 implies that for any full-rank lattice \(L\subset \mathbb {R}^n\), we have
If \(s>c\sqrt{n}/\lambda _1(L^*)\), for any \(x\in L^*\backslash \{0\}\),
Hence,
We get
which implies that
Corollary 1.3.2 is proved. \(\square \)
In the following, we give another classical upper bound estimation for the smoothing parameter. For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), we have introduced the minimal distance \(\lambda _1(L)\) of a lattice, which can be generalized as follows. For \(1\leqslant i\leqslant n\), define \(\lambda _i(L)=\min \{r>0\ |\ \dim \text {span}(L\cap N(0,r))\geqslant i\}\).
\(\lambda _i(L)\) is also called the i-th continuous minimal distance of lattice L. To give an upper bound estimation of the smoothing parameter, we first prove the following lemma.
Lemma 1.3.5
For any n dimensional full-rank lattice L, \(s>0\), \(c\in \mathbb {R}^n\), then
Proof
According to Lemma 1.3.2, we have
where we have used \(\hat{\rho }_{s,c}(y)={\text {e}}^{-2\pi ic\cdot y}\hat{\rho }_{s}(y)\); the lemma is proved. \(\square \)
Theorem 1.3
For any n dimensional full-rank lattice L, \(\epsilon >0\), we have
where \(\lambda _n(L)\) is the n-th continuous minimal distance of the lattice L defined by (1.3.17).
Proof
Let
we need to prove \(\rho _{1/s}(L^*\backslash \{0\})\leqslant \epsilon \). From the definition of \(\lambda _n(L)\), there are n linearly independent vectors \(v_1,v_2,\dots ,v_n\) in L satisfying \(|v_i|\leqslant \lambda _n(L)\), and for any positive integer \(k>1\), we have \(v_i/k\notin L\), \(1\leqslant i\leqslant n\). The main idea of the proof is to partition \(L^*\): for any integer j, let
for any \(y\in L^*\backslash \{0\}\), there is \(v_i\) that satisfies \(y\cdot v_i\ne 0\) (otherwise we have \(y=0\)), which implies \(y\notin S_{i,0}\), i.e. \(y\in L^*\backslash S_{i,0}\), so we have
To estimate \(\rho _{1/s}(L^*\backslash S_{i,0})\), we need some preparations. Let \(u_i=v_i/|v_i|^2\), then \(|u_i|=1/|v_i|\geqslant 1/\lambda _n(L)\). \(\forall j\in \mathbb {Z}\), \(\forall x\in S_{i,j}\),
Therefore,
So
Since the inner product of any vector in \(S_{i,j}-ju_i\) with \(v_i\) is 0, then \(S_{i,j}-ju_i\) is actually a translation of \(S_{i,0} \), namely there is a vector w satisfying \(S_{i,j}-ju_i=S_{i,0}-w\). In fact, for any \(x_j\in S_{i,j}\), \(x_0\in S_{i,0}\), \(w=x_0-x_j+ju_i\) satisfies the equality \(S_{i,j}-ju_i=S_{i,0}-w\). By Lemma 1.3.5, we have
Combine (1.3.21) with (1.3.22),
When \(x>1\), it follows that
Next, we will estimate \(\rho _{1/s}(L^*\backslash S_{i,0})\),
So we get
From (1.3.20),
Together with \(\rho _{1/s}(L^*)=1+\rho _{1/s}(L^*\backslash \{0\})\), we have
In the last equality, we have used that
Based on the definition of the smoothing parameter,
Theorem 1.3 is proved. \(\square \)
At the end of this section, we present an inequality for the minimal distance on a lattice, which will be used in the next chapter when we prove that the LWE problem is polynomially equivalent to hard problems on lattices.
Lemma 1.3.6
For any n dimensional lattice L, \(\epsilon >0\), we have
Proof
Let \(v\in L^*\) and \(|v|=\lambda _1(L^*)\), \(s=\eta _{\epsilon }(L)\), from the definition of smoothing parameter, we have
Hence,
That is, the first inequality in this lemma holds. For the second inequality, Theorem 2.1 in Banaszczyk (1993) implies that
so we immediately get the second inequality. The lemma holds. \(\square \)
1.4 Some Properties of Discrete Gauss Distribution
In this section we introduce some properties about the discrete Gauss distribution. First we give the definition of the expectation of discrete Gauss distribution.
Definition 1.4.1
Let m, n be two positive integers, \(L\subset \mathbb {R}^n\) be an n dimensional full-rank lattice, \(c\in \mathbb {R}^n\), \(s>0\), \(\xi \) is a random variable from the discrete Gauss distribution \(D_{L,s,c}\), and \(f:\mathbb {R}^n\rightarrow \mathbb {R}^m\) is a given function, we denote
as the expectation of \(\xi \), and denote
as the expectation of \(f(\xi )\).
Lemma 1.4.1
For any n dimensional full-rank lattice, \(L\subset \mathbb {R}^n\), \(c,u\in \mathbb {R}^n\), \(|u|=1\), \(0<\epsilon <1\), \(s\geqslant 2\eta _{\epsilon }(L)\), \(\xi \) is a random variable from the discrete Gauss distribution \(D_{L,s,c}\), then we have
and
Proof
Let \(L'=L/s=\{\frac{x}{s}\ |\ x\in L\}\), \(c'=c/s\), \(\xi '\) is a random variable from the discrete Gauss distribution \(D_{L',c'}\), for any \(x\in L'\), we have
That is, \(Pr\{\frac{\xi }{s}=x\}=Pr\{\xi '=x\}\), \(\forall x\in L'\), therefore,
the inequality (1.4.3) is equivalent to
Similarly, the inequality (1.4.4) is equivalent to
So we only need to prove the two inequalities for \(s=1\). Denote by \(\xi \) a random variable with the discrete Gauss distribution \(D_{L,c}\); the condition \(s\geqslant 2\eta _{\epsilon }(L)\) in Lemma 1.4.1 becomes \(\eta _{\epsilon }(L)\leqslant \frac{1}{2}\). We first prove that the two inequalities (1.4.5) and (1.4.6) hold for \(u=(1,0,\dots ,0)\). For any positive integer j, let
where \(x=(x_1,x_2,\dots ,x_n)\), \(c=(c_1,c_2,\dots ,c_n)\). Let \(\xi =(\xi _1,\xi _2,\dots ,\xi _n)\), then
Based on Lemma 1.3.2,
In order to estimate \(\hat{\rho }_c(L^*)\), from Lemma 1.2.1 we get \(\hat{\rho }_c(x)={\text {e}}^{-2\pi i x\cdot c} \rho (x)\), thus, \(|\hat{\rho }_c(x)|=\rho (x)\), note that \(\eta _{\epsilon }(L)\leqslant \frac{1}{2}<1\),
To estimate \(\hat{g}_j(L^*)\), let \(\rho _{c}^{(j)}(x)\) be the j-th order partial derivative of \(\rho _c(x)\) with respect to the first variable \(x_1\), i.e.
If \(j=1,2\), it is easy to get
It follows that
Since \(\widehat{\rho _{c}^{(j)}}(x)=(2\pi i x_1)^j \hat{\rho }_c(x)\), we have
According to the inequality \(|x_1|\leqslant \sqrt{|x|^2}\leqslant {\text {e}}^{\frac{|x|^2}{2}}\) and \(\eta _{\epsilon }(L)\leqslant \frac{1}{2}\),
Combining (1.4.7), (1.4.8) and (1.4.9) together,
For a general unit vector \(u\in \mathbb {R}^n\), there exists an orthogonal matrix \(M\in \mathbb {R}^{n\times n}\) such that \(Mu=(1,0,\dots ,0)\). Denote \(\eta \) as a random variable from the discrete Gauss distribution \(D_{M^{-1}L,M^{-1}c}\), for any \(x\in L\),
which implies that the distributions of \(\eta \) and \(M^{-1}\xi \) are the same, hence,
In conclusion, inequality (1.4.3) holds, and inequality (1.4.4) can be proved in the same way. We complete the proof of Lemma 1.4.1. \(\square \)
Lemma 1.4.2
For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), \(c\in \mathbb {R}^n\), \(0<\epsilon <1\), \(s\geqslant 2\eta _{\epsilon }(L)\), \(\xi \) is a random variable from the discrete Gauss distribution \(D_{L,s,c}\), then we have
and
Proof
Let \(u_1,u_2,\dots ,u_n\) be the n unit column vectors of \(n\times n\) matrix \(I_n\), by Lemma 1.4.1,
Lemma 1.4.2 holds. \(\square \)
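The flavor of Lemma 1.4.2 can be illustrated numerically for \(L=\mathbb {Z}\): once s is well above the smoothing parameter, the mean of \(D_{\mathbb {Z},s,c}\) is extremely close to c and the second central moment is close to \(s^2/(2\pi )\), the variance of the continuous Gauss density with parameter s. The concrete values \(s=3\), \(c=0.25\) and the truncation radius are illustrative assumptions.

```python
import math

# Moments of the discrete Gauss distribution D_{Z,s,c}, computed from the
# normalized weights rho_{s,c}(x) = e^{-pi (x - c)^2 / s^2} on a truncated
# window; compared with the moments of the continuous Gauss density.

def moments(s, c, radius=60):
    xs = range(int(c) - radius, int(c) + radius + 1)
    w = {x: math.exp(-math.pi * (x - c) ** 2 / (s * s)) for x in xs}
    z = sum(w.values())
    mean = sum(x * wx for x, wx in w.items()) / z
    second = sum((x - c) ** 2 * wx for x, wx in w.items()) / z
    return mean, second

mean, second = moments(s=3.0, c=0.25)
assert abs(mean - 0.25) < 1e-6                       # mean is essentially c
assert abs(second - 9.0 / (2 * math.pi)) < 1e-4      # close to s^2 / (2 pi)
print("moments match the continuous Gauss law")
```

By Poisson summation, the deviations from the continuous moments decay like \({\text {e}}^{-\pi s^2}\), which for \(s=3\) is already far below the tolerances used here.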
Lemma 1.4.3
For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), \(v\in \mathbb {R}^n\), \(0<\epsilon <1\), \(s\geqslant \eta _{\epsilon }(L)\), \(\xi \) is a random variable from the discrete Gauss distribution \(D_{L,s,v}\), then we have
Proof
From the proof of Lemma 1.4.1, here we only need to prove for the case \(s=1\). Since
take \(c=1\) in Lemma 1.3.4 and get
That is,
Based on Lemma 1.3.2, Lemma 1.2.1 and \(\eta _{\epsilon }(L)\leqslant 1\),
Similarly,
Combining (1.4.13), (1.4.14) and (1.4.15) together, it follows that
This lemma holds. \(\square \)
For \(x\in \mathbb {R}^n\) and a set \(A\subset \mathbb {R}^n\), we define the distance from x to A as \(\text {dist}(x,A)=\min \limits _{y\in A} |x-y|\).
Lemma 1.4.4
For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), \(c,v\in \mathbb {R}^n\), \(0<\epsilon <1\), \(s\geqslant \eta _{\epsilon }(L)\), \(\xi \) is a random variable from the discrete Gauss distribution \(D_{L,s,c}\), \(\text {dist}(v,L^*)\geqslant \frac{\sqrt{n}}{s}\), then
Proof
From the proof of Lemma 1.4.1, we only need to prove for the case \(s=1\). Let
By Lemma 1.3.2,
We have proved that \(|\hat{\rho }_c(L^*)|\geqslant 1-\epsilon \) in Lemma 1.4.1, based on (iii) of Lemma 1.1.2 and Lemma 1.2.1,
therefore,
Since \(\text {dist}(v,L^*)\geqslant \sqrt{n}\), we know
Take \(c=1\) in Lemma 1.3.4 and get
Above all,
We complete the proof of Lemma 1.4.4. \(\square \)
Lemma 1.4.5
For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), \(w,c,v\in \mathbb {R}^n\), \(0<\epsilon <1\), \(s\geqslant \eta _{\epsilon }(L)\), \(\xi \) is a random variable from the discrete Gauss distribution \(D_{L,s,c}\), \(\text {dist}(v,L^*)\geqslant \frac{\sqrt{n}}{s}\), then
Proof
By Lemma 1.4.4 we have
Lemma 1.4.5 holds. \(\square \)
Finally, we give a lemma which will be used in the next chapter.
Lemma 1.4.6
Let \(v_1,v_2,\dots ,v_m\) be m independent random variables on \(\mathbb {R}^n\) such that \(E[|v_i|^2]\leqslant l\) and \(|E[v_i]|^2\leqslant \epsilon \) for \(i=1,2,\dots ,m\). Then for any \(z=(z_1,z_2,\dots ,z_m)^T \in \mathbb {R}^m\),
Proof
By Cauchy inequality we get \(\sum \nolimits _{i=1}^m |z_i|\leqslant \sqrt{m}|z|\), so
The first term of the right hand side in (1.4.19) has the estimation
The second term of the right hand side in (1.4.19) has the estimation
From (1.4.19) it follows that
This lemma holds. \(\square \)
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2023 The Author(s)
Zheng, Z., Tian, K., Liu, F. (2023). Random Lattice Theory. In: Modern Cryptography Volume 2. Financial Mathematics and Fintech. Springer, Singapore. https://doi.org/10.1007/978-981-19-7644-5_1
Print ISBN: 978-981-19-7643-8
Online ISBN: 978-981-19-7644-5