Let \(\mathbb {R}^n\) be the Euclidean space of dimension n. For two vectors \(x=\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}\), \(y=\begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}\) of \(\mathbb {R}^n\), the inner product of x and y is defined as

$$\begin{aligned} x\cdot y=x_1y_1+x_2y_2+\cdots +x_ny_n=x^{\text {T}} y. \end{aligned}$$
(1.0.1)

The Euclidean norm |x| of vector x (also called the \(l_2\) norm) is defined as

$$\begin{aligned} |x|=(x_1^2+x_2^2+\cdots +x_n^2)^{\frac{1}{2}}=\sqrt{x\cdot x}. \end{aligned}$$
(1.0.2)

Let \(B=(b_{ij})_{n\times n}\in \mathbb {R}^{n\times n}\) be an invertible square matrix of order n. A full-rank lattice L in \(\mathbb {R}^{n}\) is defined as

$$\begin{aligned} L=L(B)=\{Bx\ |\ x\in \mathbb {Z}^n\}. \end{aligned}$$
(1.0.3)

A lattice L is a discrete subset of \(\mathbb {R}^n\); in other words, there are a positive constant \(\lambda _1=\lambda _1(L)>0\) and a vector \(\alpha \in L\) with \(\alpha \ne 0\), such that

$$\begin{aligned} |\alpha |=\min \limits _{x\in L,x\ne 0}|x|=\lambda _1(L). \end{aligned}$$
(1.0.4)

\(\lambda _1\) is called the shortest distance of L, and \(\alpha \) is a shortest vector of L. The closed ball in the n-dimensional Euclidean space \(\mathbb {R}^n\) with center \(x_0\) and radius r is defined as

$$\begin{aligned} N(x_0,r)=\{x\in \mathbb {R}^n\ |\ |x-x_0|\leqslant r\},\ x_0\in \mathbb {R}^n. \end{aligned}$$
(1.0.5)

In particular, N(0, r) denotes the closed ball centered at the origin with radius r. The discreteness of a lattice is equivalent to the fact that the intersection of L with any ball \(N(x_0,r)\) is a finite set, i.e.

$$\begin{aligned} ^{\#}\{L\cap N(x_0,r)\}<\infty . \end{aligned}$$
(1.0.6)
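The discreteness just described makes the shortest vector computable by a finite search in small dimensions. Below is a minimal numeric sketch; the generator matrix B is an illustrative choice, not one taken from the text:

```python
import math

# Brute-force computation of lambda_1(L) for a small full-rank lattice
# in R^2. The generator matrix B is an illustrative choice.
B = [[2.0, 1.0],
     [0.0, 1.0]]

def lattice_vector(x):
    # returns B x for an integer coordinate vector x
    return [sum(B[i][j] * x[j] for j in range(2)) for i in range(2)]

# Discreteness lets us search a finite box of integer coordinates.
shortest = None
for a in range(-5, 6):
    for b in range(-5, 6):
        if (a, b) == (0, 0):
            continue
        v = lattice_vector([a, b])
        norm = math.hypot(v[0], v[1])
        if shortest is None or norm < shortest:
            shortest = norm

print(shortest)  # lambda_1(L)
```

For this particular basis the minimum \(\sqrt{2}\) is attained at \(x=(0,1)^{\text {T}}\), i.e. at the lattice vector \((1,1)^{\text {T}}\).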

Let \(L=L(B)\) be a lattice with generator matrix B. Writing B in terms of its column vectors as \(B=[\beta _1,\beta _2,\dots ,\beta _n]\), the basic neighborhood F(B) of L is defined as

$$\begin{aligned} F(B)=\{\sum \limits _{i=1}^{n} x_i\beta _i\ |\ 0\leqslant x_i<1\}. \end{aligned}$$
(1.0.7)

Clearly the basic neighborhood F(B) depends on the generator matrix B of L; it is in fact a set of representative elements of the additive quotient group \(\mathbb {R}^n/L\). \(F^{*}(B)\) is also a set of representative elements of the quotient group \(\mathbb {R}^n/L\), where

$$\begin{aligned} F^{*}(B)=\{\sum \limits _{i=1}^{n} x_i\beta _i\ |\ -\frac{1}{2}\leqslant x_i<\frac{1}{2}\}, \end{aligned}$$

therefore, \(F^{*}(B)\) may also serve as a basic neighborhood of the lattice L. The following property is easy to prove [see Lemma 2.6 in Chap. 7 of Zheng (2022)]:

$$\begin{aligned} \text {Vol}(F(B))=|\text {det}(B)|=\text {det}(L). \end{aligned}$$
(1.0.8)

That is, the volume of the basic neighborhood of L is an invariant of L and does not change with the choice of the generator matrix B. We denote \(\text {det}(L)=|\text {det}(B)|\) and call it the determinant of the lattice L.
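The invariance can be illustrated numerically: two generator matrices of the same lattice differ by a unimodular integer matrix U (with \(\text {det}(U)=\pm 1\)), so their determinants agree in absolute value. A small sketch with an illustrative 2x2 basis:

```python
# Two generator matrices of the same lattice differ by a unimodular
# integer matrix U (det U = +-1), so |det| is basis independent.
# B and U below are illustrative choices, not taken from the text.

def det2(M):
    # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(X, Y):
    # product of two 2x2 matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[2, 1], [0, 1]]   # one generator matrix of L
U = [[1, 1], [0, 1]]   # unimodular: det U = 1
B2 = matmul2(B, U)     # another generator matrix of the same lattice L

print(abs(det2(B)), abs(det2(B2)))  # both equal det(L) = 2
```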

The basic properties of lattices can be found in Chap. 7 of Zheng (2022). The main purpose of this chapter is to establish the random theory of lattices. If a lattice L is the value space of a random variable (or random vector), it is called a random lattice. Random lattices are a new research topic in lattice theory, and the works of Micciancio and Regev (2004), Regev (2004), and Micciancio and Regev (2009) are pioneering; thus, the study of random lattices is no more than ten years old. For technical reasons, only a special class of random lattices can be defined and studied. That is, consider a random variable \(\xi \) defined on \(\mathbb {R}^n\) with a Gauss distribution, and restrict \(\xi \) to the discrete set L, so that L becomes a random lattice. It is a special kind of random lattice, which we call the Gauss lattice. This chapter introduces the Gauss lattice, defines the smoothing parameter on a Gauss lattice, and calculates the statistical distance based on the smoothing parameter. The mathematical technique used throughout is the high dimensional Fourier transform.

1.1 Fourier Transform

A complex function f(x) on \(\mathbb {R}^n\) is a mapping \(\mathbb {R}^n \rightarrow \mathbb {C}\), where \(\mathbb {C}\) is the complex field. We define the function spaces \(L^1(\mathbb {R}^n)\) and \(L^2(\mathbb {R}^n)\):

$$\begin{aligned} L^1(\mathbb {R}^n)=\{f:\mathbb {R}^n \rightarrow \mathbb {C}\ |\ \int \limits _{\mathbb {R}^n} |f(x)|\textrm{d}x <\infty \} \end{aligned}$$
(1.1.1)

and

$$\begin{aligned} L^2(\mathbb {R}^n)=\{f:\mathbb {R}^n \rightarrow \mathbb {C}\ |\ \int \limits _{\mathbb {R}^n} |f(x)|^2\textrm{d}x <\infty \}. \end{aligned}$$
(1.1.2)

If \(f(x),g(x)\in L^1(\mathbb {R}^n)\), define the convolution of f with g as

$$\begin{aligned} f*g(x)=\int \limits _{\mathbb {R}^n} f(x-\xi )g(\xi )\textrm{d}\xi . \end{aligned}$$
(1.1.3)

We have the following properties about convolution.

Lemma 1.1.1

Suppose \(f(x),g(x)\in L^1(\mathbb {R}^n)\), then

(i) \(f*g(x)=g*f(x)\).

(ii) \(\int \limits _{\mathbb {R}^n} f*g(x) \textrm{d}x=\int \limits _{\mathbb {R}^n} f(x) \textrm{d}x\cdot \int \limits _{\mathbb {R}^n} g(x) \textrm{d}x\).

Proof

By the definition of convolution (1.1.3), we have

$$\begin{aligned} g*f(x)=\int \limits _{\mathbb {R}^n} g(x-\xi )f(\xi )\textrm{d}\xi =\int \limits _{\mathbb {R}^n} g(y)f(x-y)\textrm{d}y=f*g(x). \end{aligned}$$

Property (i) holds. To obtain the second result (ii), we have

$$\begin{aligned} \int \limits _{\mathbb {R}^n}f*g(x)\textrm{d}x=\int \limits _{\mathbb {R}^n}(\int \limits _{\mathbb {R}^n}f(x-\xi )g(\xi )\textrm{d}\xi )\textrm{d}x \end{aligned}$$
$$\begin{aligned} =\int \limits _{\mathbb {R}^n} \int \limits _{\mathbb {R}^n} f(y)g(\xi )\textrm{d}y\textrm{d}\xi =\int \limits _{\mathbb {R}^n}f(y)\textrm{d}y\cdot \int \limits _{\mathbb {R}^n}g(\xi )\textrm{d}\xi . \end{aligned}$$

The lemma is proved.    \(\square \)
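Lemma 1.1.1 (ii) can be checked numerically in one dimension. The sketch below takes \(f=g\) to be the indicator of [0, 1] (an illustrative choice) and approximates both the convolution and the integrals by Riemann sums:

```python
# Numeric check of Lemma 1.1.1 (ii) in one dimension with
# f = g = the indicator of [0, 1] (an illustrative choice):
# the integral of f*g should equal (integral of f) * (integral of g) = 1.

def f(t):
    return 1.0 if 0.0 <= t <= 1.0 else 0.0

N = 400        # grid resolution for the Riemann sums
h = 4.0 / N    # integrate over the window [-1, 3]

def conv(x):
    # f*g(x) = integral of f(x - t) g(t) dt, by a Riemann sum
    return sum(f(x - (-1.0 + k * h)) * f(-1.0 + k * h) for k in range(N)) * h

total = sum(conv(-1.0 + k * h) for k in range(N)) * h
print(total)  # close to 1, up to discretization error at the jumps
```

Here \(f*g\) is the triangle function on [0, 2], and the computed integral matches \((\int f)(\int g)=1\) up to the discretization error caused by the jump points of the indicator.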

Definition 1.1.1

If \(f(x)\in L^1(\mathbb {R}^n)\), define the Fourier transform of f(x) as

$$\begin{aligned} \hat{f}(x)=\int \limits _{\mathbb {R}^n} f(\xi ) {\text {e}}^{-2\pi i x\cdot \xi } \textrm{d}\xi ,\ x\in \mathbb {R}^n. \end{aligned}$$
(1.1.4)

Note that \(f\rightarrow \hat{f}\) is an operator on the function space \(L^1(\mathbb {R}^n)\), which is called the Fourier operator. If \(f(x)=f_1(x_1)f_2(x_2)\cdots f_n(x_n)\), then the high dimensional Fourier transform reduces to the product of one dimensional Fourier transforms, i.e.

$$\begin{aligned} \hat{f}(x)=\Pi _{i=1}^n \hat{f}_i(x_i). \end{aligned}$$
(1.1.5)

The following are some of the most common and fundamental properties of Fourier transform.

Lemma 1.1.2

Suppose \(f(x)\in L^1(\mathbb {R}^n), g(x)\in L^1(\mathbb {R}^n)\), then

(i) \(\widehat{f*g}(x)=\hat{f}(x)\hat{g}(x)\).

(ii) \(a\in \mathbb {R}^n\) is a given vector, denote \(\tau _a f\) as the coordinate translation function, i.e. \(\tau _a f(x)=f(x+a)\), \(\forall x\in \mathbb {R}^n\). Then we have \(\widehat{\tau _a f}(x)={\text {e}}^{2\pi i x\cdot a}\hat{f}(x)\).

(iii) Let \(h(x)={\text {e}}^{2\pi i x\cdot a}f(x)\), thus \(\hat{h}(x)=\hat{f}(x-a)\).

(iv) Let \(\delta \ne 0\) be a real number, \(f_{\delta }(x)=f(\frac{1}{\delta }x)\), then \(\hat{f}_{\delta }(x)=|\delta |^n \hat{f}_{\delta ^{-1}}(x)=|\delta |^n \hat{f}(\delta x)\).

(v) Let A be an invertible real matrix of order n, namely \(A\in GL_n(\mathbb {R})\), and define \(f\circ A(x)=f(Ax)\). Then \(\widehat{f\circ A}(x)=|A|^{-1} \hat{f} \circ (A^{-1})^T (x)=|A|^{-1} \hat{f} ((A^{-1})^T x)\), where \(A^T\) is the transpose of A and \(|A|=|\text {det}(A)|\).

Proof

By definition, we have

$$\begin{aligned} \widehat{f*g}(x)=\int \limits _{\mathbb {R}^n} f*g(\xi ) {\text {e}}^{-2\pi i x\cdot \xi } \textrm{d}\xi \end{aligned}$$
$$\begin{aligned} =\int \limits _{\mathbb {R}^n} (\int \limits _{\mathbb {R}^n} f(\xi -y)g(y)\textrm{d}y) {\text {e}}^{-2\pi i x\cdot \xi }\textrm{d}\xi . \end{aligned}$$

Taking variable substitution \(\xi -y=y'\), then \(\xi =y+y'\), and \(\textrm{d}\xi =\textrm{d}y'\), so we have

$$\begin{aligned} \widehat{f*g}(x)=\int \limits _{\mathbb {R}^n} g(y) {\text {e}}^{-2\pi i x\cdot y} \textrm{d}y\cdot \int \limits _{\mathbb {R}^n} f(y') {\text {e}}^{-2\pi i x\cdot y'} \textrm{d}y'=\hat{f}(x)\hat{g}(x), \end{aligned}$$

property (i) is proved. Based on the definition of Fourier transform, we have

$$\begin{aligned} \widehat{\tau _a f}(x)=\int \limits _{\mathbb {R}^n} f(\xi +a){\text {e}}^{-2\pi i x\cdot \xi }\textrm{d}\xi =\int \limits _{\mathbb {R}^n} f(y){\text {e}}^{-2\pi i x\cdot (y-a)}\textrm{d}y \end{aligned}$$
$$\begin{aligned} ={\text {e}}^{2\pi i x\cdot a}\int \limits _{\mathbb {R}^n} f(y){\text {e}}^{-2\pi i x\cdot y}\textrm{d}y={\text {e}}^{2\pi i x\cdot a} \hat{f}(x),\ \end{aligned}$$

property (ii) is proved. Similarly, we can obtain (iii). Next, we prove (iv). Since \(\delta \ne 0\) and \(f_{\delta }(x)=f(\frac{1}{\delta }x)\), we have

$$\begin{aligned} \hat{f}_{\delta }(x)=\int \limits _{\mathbb {R}^n} f(\frac{1}{\delta }\xi ) {\text {e}}^{-2\pi i x\cdot \xi }\textrm{d}\xi =\int \limits _{\mathbb {R}^n} f(y) {\text {e}}^{-2\pi i x\cdot \delta y} |\delta |^n \textrm{d}y \end{aligned}$$
$$\begin{aligned} =\int \limits _{\mathbb {R}^n} f(y) {\text {e}}^{-2\pi i (\delta x\cdot y)} |\delta |^n \textrm{d}y=|\delta |^n \hat{f}_{\delta ^{-1}}(x).\ \ \end{aligned}$$

By the condition \(A\in GL_n(\mathbb {R})\) and \(f\circ A(x)=f(Ax)\), we have

$$\begin{aligned} \widehat{f\circ A}(x)=\int \limits _{\mathbb {R}^n} f(A\xi ) {\text {e}}^{-2\pi i x\cdot \xi }\textrm{d}\xi . \end{aligned}$$

Taking variable substitution, \(y=A\xi \), then \(A^{-1}y=\xi \), and \(\textrm{d}\xi =|A|^{-1}\textrm{d}y\), so

$$\begin{aligned} \widehat{f\circ A}(x)=\int \limits _{\mathbb {R}^n} f(y) {\text {e}}^{-2\pi i x\cdot A^{-1}y} |A|^{-1}\textrm{d}y=|A|^{-1} \int \limits _{\mathbb {R}^n} f(y) {\text {e}}^{-2\pi i ((A^{-1})^{\text {T}} x\cdot y)}\textrm{d}y \end{aligned}$$
$$\begin{aligned} =|A|^{-1}\hat{f} ((A^{-1})^{\text {T}} x)=|A|^{-1} \hat{f}\circ (A^{-1})^T (x).\qquad \qquad \quad \ \ \end{aligned}$$

Lemma 1.1.2 is proved.    \(\square \)

Finally, we give some examples of the Fourier transform.

Example 1.1

Let \(n=1\), \(a\in \mathbb {R}\), \(a>0\), define the characteristic function \(1_{[-a,a]}(x)\) of the closed interval \([-a,a]\) as

$$\begin{aligned} 1_{[-a,a]}(x)=\left\{ \begin{array}{cc}1,&{}\ x\in [-a,a],\\ 0,&{}\ x\notin [-a,a]. \end{array} \right. \end{aligned}$$

Then

$$\begin{aligned} \hat{1}_{[-a,a]}(x)=\frac{\sin 2\pi a x}{\pi x}. \end{aligned}$$
(1.1.6)

For \(n>1\), let \(a=(a_1,a_2,\dots ,a_n)\in \mathbb {R}^n\) with each \(a_i>0\); the box \([-a,a]\) is defined as

$$\begin{aligned}{}[-a,a]=[-a_1,a_1]\times [-a_2,a_2]\times \cdots \times [-a_n,a_n]. \end{aligned}$$

Define the characteristic function \(1_{[-a,a]}(x)\) of the box \([-a,a]\) analogously; then

$$\begin{aligned} \hat{1}_{[-a,a]}(x)=\Pi _{i=1}^{n}\frac{\sin 2\pi a_i x_i}{\pi x_i}. \end{aligned}$$
(1.1.7)

Proof

For general n, it is clear that

$$\begin{aligned} 1_{[-a,a]}(x)=\Pi _{i=1}^n 1_{[-a_i,a_i]}(x_i). \end{aligned}$$

Based on Eq. (1.1.5), we only need to prove Eq. (1.1.6). For \(n=1\) and \(a>0\),

$$\begin{aligned} \hat{1}_{[-a,a]}(x)=\int \limits _{\mathbb {R}} 1_{[-a,a]}(\xi ) {\text {e}}^{-2\pi i x\xi }\textrm{d}\xi =\int \limits _{-a}^a {\text {e}}^{-2\pi i x \xi }\textrm{d}\xi =\frac{1}{\pi x} \sin 2\pi a x. \end{aligned}$$

   \(\square \)
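Formula (1.1.6) can be verified numerically at a single point; the sketch below compares a midpoint Riemann sum for the Fourier integral with the closed form, using the illustrative values \(a=1\), \(x=0.3\):

```python
import cmath
import math

# Numeric check of (1.1.6): the Fourier transform of the indicator of
# [-a, a] at the point x should equal sin(2*pi*a*x) / (pi*x).
# a = 1 and x = 0.3 are illustrative values.
a, x = 1.0, 0.3
N = 20_000
h = 2 * a / N

# midpoint Riemann sum for the integral of e^{-2 pi i x xi} over [-a, a]
val = sum(cmath.exp(-2j * math.pi * x * (-a + (k + 0.5) * h))
          for k in range(N)) * h

expected = math.sin(2 * math.pi * a * x) / (math.pi * x)
print(val.real, expected)  # the two values agree closely
```

The imaginary part of the sum vanishes by the symmetry of the interval, matching the real closed form.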

Example 1.2

Let \(f(x)={\text {e}}^{-\pi |x|^2}\), \(x\in \mathbb {R}^n\), then \(f(x)\in L^1(\mathbb {R}^n)\), and \(\hat{f}(x)=f(x)\), namely f(x) is a fixed point of Fourier operator, which is also called a dual function.

Proof

Clearly, \(f(x)\in L^1(\mathbb {R}^n)\). To prove the fixed point property of f(x), by definition

$$\begin{aligned} \hat{f}(x)=\int \limits _{\mathbb {R}^n} {\text {e}}^{-\pi |\xi |^2-2\pi i x\cdot \xi }\textrm{d}\xi ={\text {e}}^{-\pi |x|^2} \int \limits _{\mathbb {R}^n} {\text {e}}^{-\pi |\xi +ix|^2}\textrm{d}\xi ={\text {e}}^{-\pi |x|^2} \int \limits _{\mathbb {R}^n} {\text {e}}^{-\pi |y|^2}\textrm{d}y. \end{aligned}$$

By one dimensional Poisson integral,

$$\begin{aligned} \int \limits _{-\infty }^{+\infty } {\text {e}}^{-\pi y^2}\textrm{d}y=1, \end{aligned}$$
(1.1.8)

we have the following high dimensional Poisson integral,

$$\begin{aligned} \int \limits _{\mathbb {R}^n} {\text {e}}^{-\pi |y|^2}\textrm{d}y=1. \end{aligned}$$
(1.1.9)

So we get \(\hat{f}(x)=f(x)\).    \(\square \)
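The fixed point property can also be checked numerically in one dimension: since \({\text {e}}^{-\pi t^2}\) is even, the Fourier integral reduces to a cosine integral. A sketch at the illustrative point \(x=0.7\):

```python
import math

# Numeric check that f(t) = e^{-pi t^2} is a fixed point of the Fourier
# operator in one dimension: since f is even, hat f(x) equals the cosine
# integral below. The point x = 0.7 is an illustrative choice.
x = 0.7
N, R = 100_000, 6.0   # midpoint rule on the truncated range [-R, R]
h = 2 * R / N

fhat = sum(math.exp(-math.pi * (-R + (k + 0.5) * h) ** 2)
           * math.cos(2 * math.pi * x * (-R + (k + 0.5) * h))
           for k in range(N)) * h

print(fhat, math.exp(-math.pi * x * x))  # both approximate f(0.7)
```

The truncation to [-R, R] is harmless because the tail of the Gaussian beyond \(|t|=6\) is far below machine precision.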

1.2 Discrete Gauss Measure

From the behavior of \(f(x)={\text {e}}^{-\pi |x|^2}\) under the Fourier operator introduced in the last section, together with the high dimensional Poisson integral formula (1.1.9), we can pass from f(x), the density function of the standard Gauss distribution, to a general Gauss distribution in \(\mathbb {R}^n\). We first discuss the Gauss function on \(\mathbb {R}^n\).

Definition 1.2.1

Let \(s>0\) be a given positive real number and \(c\in \mathbb {R}^n\) a vector. The Gauss function \(\rho _{s,c}(x)\) centered at c with parameter s is defined as

$$\begin{aligned} \rho _{s,c}(x)={\text {e}}^{-\frac{\pi }{s^2} |x-c|^2},\ x\in \mathbb {R}^n \end{aligned}$$
(1.2.1)

and

$$\begin{aligned} \rho _s(x)=\rho _{s,0}(x),\ \rho (x)=\rho _1(x)={\text {e}}^{-\pi |x|^2}. \end{aligned}$$
(1.2.2)

From the definition we have

$$\begin{aligned} \rho _s(x)=\rho (\frac{1}{s}x)={\text {e}}^{-\pi |\frac{x}{s}|^2} \end{aligned}$$

and

$$\begin{aligned} \rho _s(x)=\rho _s(x_1)\ldots \rho _s(x_n). \end{aligned}$$

From the Poisson integral formula (1.1.9) we obtain

$$\begin{aligned} \int \limits _{\mathbb {R}^n}\rho _s(x)\textrm{d}x=\int \limits _{\mathbb {R}^n}\rho _{s,c}(x)\textrm{d}x=s^n. \end{aligned}$$
(1.2.3)

Lemma 1.2.1

The Fourier transforms of the Gauss functions \(\rho _s(x)\) and \(\rho _{s,c}(x)\) are

$$\begin{aligned} \hat{\rho }_s(x)=s^n \rho _{1/s}(x)=s^n {\text {e}}^{-\pi |sx|^2} \end{aligned}$$
(1.2.4)

and

$$\begin{aligned} \hat{\rho }_{s,c}(x)={\text {e}}^{-2\pi i x\cdot c} s^n \rho _{1/s}(x). \end{aligned}$$
(1.2.5)

Proof

By property (iv) of Lemma 1.1.2 and \(s>0\), we have

$$\begin{aligned} \hat{\rho }_s(x)=s^n \hat{\rho }_{1/s}(x)=s^n \hat{\rho }(sx)=s^n \rho (sx). \end{aligned}$$

The last equality follows from Example 1.2 in the previous section; therefore, (1.2.4) holds. By property (ii) of Lemma 1.1.2, we have

$$\begin{aligned} \hat{\rho }_{s,c}(x)=\widehat{\tau _{-c}\rho _s}(x)={\text {e}}^{-2\pi i x\cdot c} \hat{\rho }_s(x)=s^n {\text {e}}^{-2\pi i x\cdot c} \rho _{1/s}(x). \end{aligned}$$

Lemma 1.2.1 is proved.    \(\square \)

Lemma 1.2.2

\(\rho _{s,c}(x)\) is uniformly continuous on \(\mathbb {R}^n\); i.e. for any \(\epsilon >0\), there is \(\delta =\delta (\epsilon )>0\) such that whenever \(|x-y|<\delta \) with \(x,y\in \mathbb {R}^n\), we have

$$\begin{aligned} |\rho _{s,c}(x)-\rho _{s,c}(y)|<\epsilon . \end{aligned}$$

Proof

By definition, \(0<\rho _{s,c}(x)\leqslant 1\), hence \(\rho _{s,c}(x)\) is uniformly bounded on \(\mathbb {R}^n\); we now show that \(\rho _{s,c}^{'}(x)\) is also uniformly bounded on \(\mathbb {R}^n\). We only prove the case \(c=0\). Since \(\rho _s(x)=\rho _s(x_1)\cdots \rho _s(x_n)\), without loss of generality let \(n=1\), \(t\in \mathbb {R}\); then

$$\begin{aligned} \rho _{s}^{'}(t)=-\frac{2\pi }{s^2}t {\text {e}}^{-\frac{\pi }{s^2}t^2}. \end{aligned}$$

For \(M>0\) sufficiently large (depending on s), when \(|t|\geqslant M\), it is clear that

$$\begin{aligned} {\text {e}}^{-\frac{\pi }{s^2}t^2}\leqslant \frac{1}{|t|^2}. \end{aligned}$$

Hence, when \(|t|\geqslant M\), we have

$$\begin{aligned} |\rho _s^{'}(t)|\leqslant \frac{2\pi }{s^2|t|}\leqslant \frac{2\pi }{s^2M}. \end{aligned}$$

For \(|t|<M\), \(\rho _s^{'}(t)\) is bounded by its continuity. This proves that \(\rho _{s,c}^{'}(x)\) is uniformly bounded on \(\mathbb {R}^n\); let \(|\rho _{s,c}^{'}(x)|\leqslant M_0, \forall x\in \mathbb {R}^n\). By the differential mean value theorem, we have

$$\begin{aligned} |\rho _{s,c}(x)-\rho _{s,c}(y)|=|\rho _{s,c}^{'}(\xi )|\cdot |x-y|\leqslant M_0 |x-y|. \end{aligned}$$

Let \(\delta =\frac{\epsilon }{M_0}\), then

$$\begin{aligned} |\rho _{s,c}(x)-\rho _{s,c}(y)|<\epsilon ,\quad \text {if }\ |x-y|<\delta . \end{aligned}$$

We finish the proof of the lemma.    \(\square \)

Definition 1.2.2

For \(s>0\), \(c\in \mathbb {R}^n\), define the continuous Gauss density function \(D_{s,c}(x)\) as

$$\begin{aligned} D_{s,c}(x)=\frac{1}{s^n}\rho _{s,c}(x),\quad \forall x\in \mathbb {R}^n. \end{aligned}$$
(1.2.6)

The definition gives that

$$\begin{aligned} \int \limits _{\mathbb {R}^n} D_{s,c}(x)\textrm{d} x=\frac{1}{s^n}\int \limits _{\mathbb {R}^n}\rho _{s,c}(x)\textrm{d}x=1. \end{aligned}$$

Thus, a continuous Gauss density function \(D_{s,c}(x)\) corresponds to a continuous random vector with Gauss distribution in \(\mathbb {R}^n\), and this correspondence is one-to-one.

Definition 1.2.3

Suppose \(f(x):\mathbb {R}^n \rightarrow \mathbb {C}\) is a function of n variables and \(A\subset \mathbb {R}^n\) is a finite or countable set in \(\mathbb {R}^n\); define f(A) as

$$\begin{aligned} f(A)=\sum \limits _{x\in A}f(x). \end{aligned}$$
(1.2.7)

The continuous Gauss density function \(D_{s,c}(x)\) is also called the continuous Gauss measure. In order to pass from the continuous measure to a discrete measure and to define random variables on discrete sets in \(\mathbb {R}^n\), the following lemma provides an important theoretical support.

Lemma 1.2.3

Let \(L\subset \mathbb {R}^n \) be a full-rank lattice, then

$$\begin{aligned} D_{s,c}(L)=\sum \limits _{x\in L}D_{s,c}(x)<\infty . \end{aligned}$$

Proof

From definition,

$$\begin{aligned} D_{s,c}(L)=\frac{1}{s^n} \sum \limits _{x\in L} \rho _{s,c}(x)=\frac{1}{s^n} \sum \limits _{x\in L} {\text {e}}^{-\frac{\pi }{s^2}|x-c|^2}. \end{aligned}$$

By the property of the exponential function \({\text {e}}^t\), there exists a constant \(M_{0}>0\) such that, when \(|x-c|>M_0\),

$$\begin{aligned} {\text {e}}^{-\frac{\pi }{s^2}|x-c|^2}\leqslant \frac{s^2}{\pi |x-c|^2}. \end{aligned}$$
(1.2.8)

Thus, we can divide the points on the lattice L into two sets. Let

$$\begin{aligned} A_1=L \cap \{x\in \mathbb {R}^n\ |\ |x-c|\leqslant M_0\}=L\cap N(c,M_0) \end{aligned}$$

and

$$\begin{aligned} A_2=L \cap \{x\in \mathbb {R}^n\ |\ |x-c|> M_0\}. \end{aligned}$$

From (1.0.6) we have

$$\begin{aligned} \sum \limits _{x\in A_1} {\text {e}}^{-\frac{\pi }{s^2}|x-c|^2}\leqslant \sum \limits _{x\in A_1} 1=^{\#}A_1<\infty . \end{aligned}$$

Based on (1.2.8),

$$\begin{aligned} \sum \limits _{x\in A_2} {\text {e}}^{-\frac{\pi }{s^2}|x-c|^2}\leqslant \sum \limits _{x\in A_2} \frac{s^2}{\pi |x-c|^2}<\infty . \end{aligned}$$
(1.2.9)

The right hand side of the above inequality is a convergent series (a detailed justification is given in the second proof below). Combining the two estimates above, we have \(D_{s,c}(L)<\infty \), and the lemma is proved.    \(\square \)

To give a clearer explanation of (1.2.9), we provide another proof of Lemma 1.2.3. First we prove the following lemma.

Lemma 1.2.4

Let \(A\in \mathbb {R}^{n\times n}\) be an invertible square matrix of order n, and \(T=A^T A\) the associated positive definite real symmetric matrix. Let \(\delta \) be the smallest eigenvalue of T and \(\delta ^{*}\) the largest eigenvalue of T; then \(0<\delta \leqslant \delta ^{*}\), and

$$\begin{aligned} \sqrt{\delta }\leqslant |Ax|\leqslant \sqrt{\delta ^{*}},\quad \forall x\in S, \end{aligned}$$
(1.2.10)

where \(S=\{x\in \mathbb {R}^n\ |\ |x|=1\}\) is the unit sphere in \(\mathbb {R}^n\).

Proof

Since T is a positive definite real symmetric matrix, all eigenvalues \(\delta _1,\delta _2,\dots ,\delta _n\) of T are positive, and there is an orthogonal matrix P such that

$$\begin{aligned} P^T TP=\text {diag}\{\delta _1,\delta _2,\dots ,\delta _n\}. \end{aligned}$$

Hence,

$$\begin{aligned} |Ax|^2=x^T Tx=x^T P(P^T TP)P^T x. \end{aligned}$$

Since \(P^T TP\) is a diagonal matrix, we have

$$\begin{aligned} \delta |P^T x|^2\leqslant |Ax|^2\leqslant \delta ^{*} |P^T x|^2. \end{aligned}$$

If \(x\in S\), then \(|P^T x|=|x|=1\), so we have \(\sqrt{\delta }\leqslant |Ax|\leqslant \sqrt{\delta ^{*}}\).    \(\square \)

By Lemma 1.2.4, since S is a compact set and |Ax| is a continuous function on S, |Ax| attains its maximum value on S. This maximum value is defined as ||A||,

$$\begin{aligned} ||A||=\max \{|Ax|\ \big |\ |x|=1\}. \end{aligned}$$
(1.2.11)

We call \(\Vert A \Vert \) the matrix norm of A, and Lemma 1.2.4 shows that

$$\begin{aligned} \sqrt{\delta }\leqslant ||A||\leqslant \sqrt{\delta ^{*}},\quad \forall A\in GL_n(\mathbb {R}). \end{aligned}$$
(1.2.12)
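Bounds (1.2.10) and (1.2.12) can be illustrated numerically by sampling \(|Ax|\) on the unit circle and comparing with the eigenvalue bounds of \(T=A^T A\); the 2x2 matrix A below is an illustrative choice:

```python
import math

# Check of (1.2.10)/(1.2.12): sample |Ax| on the unit circle and compare
# with the eigenvalue bounds of T = A^T A. A is an illustrative matrix.
A = [[2.0, 1.0], [0.0, 1.0]]

# T = A^T A, written out for the 2x2 case
T = [[A[0][0]**2 + A[1][0]**2, A[0][0]*A[0][1] + A[1][0]*A[1][1]],
     [A[0][0]*A[0][1] + A[1][0]*A[1][1], A[0][1]**2 + A[1][1]**2]]

# eigenvalues of the symmetric 2x2 matrix T in closed form
tr = T[0][0] + T[1][1]
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
disc = math.sqrt(tr * tr - 4 * det)
delta, delta_star = (tr - disc) / 2, (tr + disc) / 2

norms = []
for k in range(1000):
    t = 2 * math.pi * k / 1000          # sample points on the unit circle
    x, y = math.cos(t), math.sin(t)
    norms.append(math.hypot(A[0][0]*x + A[0][1]*y, A[1][0]*x + A[1][1]*y))

print(math.sqrt(delta), min(norms), max(norms), math.sqrt(delta_star))
```

The sampled values of \(|Ax|\) all lie between \(\sqrt{\delta }\) and \(\sqrt{\delta ^{*}}\), and the sampled maximum is essentially \(\Vert A\Vert =\sqrt{\delta ^{*}}\).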

Another proof of Lemma 1.2.3: Let \(L=L(B)\) be any full-rank lattice with generator matrix B. By definition we have

$$\begin{aligned} D_{s,c}(L)=\sum \limits _{x\in L}D_{s,c}(x)=\frac{1}{s^n}\sum \limits _{x\in L} {\text {e}}^{-\frac{\pi }{s^2}|x-c|^2}=\frac{1}{s^n}\sum \limits _{x\in \mathbb {Z}^n} {\text {e}}^{-\frac{\pi }{s^2}|Bx-c|^2}. \end{aligned}$$
(1.2.13)

From Lemma 1.2.4,

$$\begin{aligned} \frac{|B^{-1}x|}{|x|}\leqslant ||B^{-1}||\Rightarrow |B^{-1}x|\leqslant ||B^{-1}||\ |x|,\ \forall x\in \mathbb {R}^n. \end{aligned}$$

Let \(x=By\), and let \(\delta ^{*}\) be the largest eigenvalue of \((B^{-1})^T B^{-1}\); then

$$\begin{aligned} |y|\leqslant ||B^{-1}||\ |By|\Rightarrow |By|\geqslant \frac{1}{||B^{-1}||} |y|\geqslant |y|/\sqrt{\delta ^{*}},\ \forall y\in \mathbb {R}^n. \end{aligned}$$
(1.2.14)

The property of the exponential function implies that, for sufficiently large M,

$$\begin{aligned} \sum \limits _{x\in \mathbb {Z}^n,|Bx-c|>M} {\text {e}}^{-\frac{\pi }{s^2}|Bx-c|^2}\leqslant \sum \limits _{x\in \mathbb {Z}^n,|Bx-c|\ne 0} \frac{s^{2n}}{\pi ^n |Bx-c|^{2n}}. \end{aligned}$$
(1.2.15)

Note that

$$\begin{aligned} |Bx-c|^{2n}=|B(x-B^{-1}c)|^{2n}\geqslant |x-B^{-1}c|^{2n}/ (\delta ^{*})^n. \end{aligned}$$

Denote \(x=(x_1,\dots ,x_n)\) and \(B^{-1}c=(u_1,\dots ,u_n)\); then, by the inequality of arithmetic and geometric means,

$$\begin{aligned} |x-B^{-1}c|^{2n}=(\sum \limits _{i=1}^n (x_i-u_i)^2)^n\geqslant (n\root n \of {\Pi _{i=1}^{n}(x_i-u_i)^2})^n=n^n \Pi _{i=1}^n (x_i-u_i)^2. \end{aligned}$$

By (1.2.15),

$$\begin{aligned} \sum \limits _{x\in \mathbb {Z}^n,|Bx-c|\ne 0} \frac{s^{2n}}{\pi ^n |Bx-c|^{2n}}\leqslant \sum \limits _{x\in \mathbb {Z}^n,|Bx-c|\ne 0} \frac{s^{2n}(\delta ^{*})^n}{\pi ^n n^n}\cdot \frac{1}{\Pi _{i=1}^n (x_i-u_i)^2} \end{aligned}$$
$$\begin{aligned} =\frac{s^{2n}(\delta ^{*})^n}{\pi ^n n^n}\sum \limits _{x_1\in \mathbb {Z}}\frac{1}{(x_1-u_1)^2} \sum \limits _{x_2\in \mathbb {Z}}\frac{1}{(x_2-u_2)^2}\cdots \sum \limits _{x_n\in \mathbb {Z}}\frac{1}{(x_n-u_n)^2}, \end{aligned}$$

each infinite series on the right hand side of the above equation converges when \(u_i\notin \mathbb {Z}\) (if some \(u_i\in \mathbb {Z}\), the terms with \(x_i=u_i\) are handled by the same argument in dimension \(n-1\)); hence, \(D_{s,c}(L)<\infty \).    \(\square \)
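Lemma 1.2.3 can be illustrated numerically: for the lattice \(L=\mathbb {Z}^2\) (so \(B=I_2\)) and the illustrative parameters \(s=1\), \(c=(0.3,0.6)\), the partial sums of \(\rho _{s,c}(L)\) over growing boxes stabilize almost immediately:

```python
import math

# Partial sums of rho_{s,c}(L) over growing boxes of Z^2 for the lattice
# L = Z^2 (generator matrix B = I). The parameters s = 1 and c = (0.3, 0.6)
# are illustrative choices; the sums stabilize quickly (Lemma 1.2.3).
s, c = 1.0, (0.3, 0.6)

def partial_sum(R):
    # sum of rho_{s,c}(x) over lattice points x in the box [-R, R]^2
    return sum(math.exp(-math.pi / s**2 * ((i - c[0])**2 + (j - c[1])**2))
               for i in range(-R, R + 1) for j in range(-R, R + 1))

print([round(partial_sum(R), 10) for R in (1, 2, 3, 5)])
```

The Gaussian decay makes the tail beyond a small box negligibly small, which is exactly the content of the lemma.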

By Lemma 1.2.3, we define the discrete Gauss density function \(D_{L,s,c}(x)\) as

$$\begin{aligned} D_{L,s,c}(x)=\frac{D_{s,c}(x)}{D_{s,c}(L)}=\frac{\rho _{s,c}(x)}{\rho _{s,c}(L)}. \end{aligned}$$
(1.2.16)

Trivially, we have

$$\begin{aligned} \sum \limits _{x\in L} D_{L,s,c}(x)=1. \end{aligned}$$

So \(D_{L,s,c}(x)\) corresponds to a random variable from Gauss distribution defined on the lattice L (discrete geometry) with parameters s and c.
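The discrete Gauss density (1.2.16) can be computed directly in the one dimensional case \(L=\mathbb {Z}\); the sketch below uses the illustrative parameters \(s=2\), \(c=0.4\) and truncates the support to a finite window, which captures essentially all of the mass:

```python
import math

# Discrete Gauss density D_{L,s,c} of (1.2.16) on the lattice L = Z,
# with illustrative parameters s = 2 and c = 0.4; a finite window
# captures essentially all of the mass.
s, c = 2.0, 0.4
support = range(-40, 41)

rho = {x: math.exp(-math.pi / s**2 * (x - c)**2) for x in support}
Z = sum(rho.values())                 # rho_{s,c}(L), up to truncation
D = {x: rho[x] / Z for x in support}  # the discrete Gauss density

print(sum(D.values()))                # 1 by construction
print(max(D, key=D.get))              # the mass concentrates near c
```

The most probable lattice point is the one closest to the center c, here \(x=0\).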

Definition 1.2.4

Let \(L=L(B)\subset \mathbb {R}^n\) be a lattice with full rank, \(s>0\) a given positive real number, and \(c\in \mathbb {R}^n\) a given vector. Define the discrete Gauss measure function \(g_{L,s,c}(x)\) as a function on the basic neighborhood F(B) of L:

$$\begin{aligned} g_{L,s,c}(x)=D_{s,c}(\bar{x})=\frac{1}{s^n}\sum \limits _{y\in L}\rho _{s,c}(x+y),\ x\in F(B). \end{aligned}$$
(1.2.17)

By the definition and (1.2.3), it is clear that

$$\begin{aligned} \int \limits _{F(B)} g_{L,s,c}(x) \textrm{d}x=\frac{1}{s^{n}}\sum \limits _{y\in L}\int \limits _{F(B)} \rho _{s,c}(x+y)\textrm{d}x=\frac{1}{s^n} \int \limits _{\mathbb {R}^{n}} \rho _{s,c}(x)\textrm{d}x=1. \end{aligned}$$
(1.2.18)

Thus, the density function \(g_{L,s,c}(x)\) defined on the basic neighborhood F(B) corresponds to a continuous random variable on F(B), denoted as \(D_{s,c} {\text {mod}} L\).

Lemma 1.2.5

The random variable \(D_{s,c}\ \text {mod}\ L\) is actually defined on the additive quotient group \(\mathbb {R}^{n}/L\).

Proof

F(B) is a set of representative elements of the additive quotient group \(\mathbb {R}^n/L\). It suffices to prove that the discrete Gauss function \(g_{L,s,c}(x)\) does not depend on the choice of representative elements of \(\mathbb {R}^n/L\); then \(D_{s,c}\ \text {mod}\ L\) can be regarded as a random variable on the additive quotient group \(\mathbb {R}^n/L\). Namely, we show that if \(x_1,x_2\in \mathbb {R}^n\) with \(x_1\equiv x_2\ (\text {mod}\ L)\), then \(g_{L,s,c}(x_1)=g_{L,s,c}(x_2)\). By definition,

$$\begin{aligned} g_{L,s,c}(x_1)=D_{s,c}(\bar{x_1})=\frac{1}{s^n}\sum \limits _{y\in L}\rho _{s,c}(x_1+y). \end{aligned}$$

Since \(x_1=x_2+y_0\), where \(y_0\in L\), so

$$\begin{aligned} g_{L,s,c}(x_1)=\frac{1}{s^n}\sum \limits _{y\in L}\rho _{s,c}(x_1+y)=\frac{1}{s^n}\sum \limits _{y\in L}\rho _{s,c}(x_2+y_0+y) \end{aligned}$$
$$\begin{aligned} \qquad \quad \ =\frac{1}{s^n}\sum \limits _{y\in L}\rho _{s,c}(x_2+y)=D_{s,c}(\bar{x_2})=g_{L,s,c}(x_2). \end{aligned}$$

Since \(x_1\equiv x_2\ (\text {mod}\ L)\), \(\bar{x}_1=\bar{x}_2\) are the same additive coset in the quotient group \(\mathbb {R}^n/L\). Thus, the discrete Gauss measure \(g_{L,s,c}(x)\) can be defined on any basic neighborhood of L, and the corresponding random variable \(D_{s,c}\ \text {mod}\ L\) is actually defined on the quotient group \(\mathbb {R}^n/L\).    \(\square \)

1.3 Smoothing Parameter

For a given full-rank lattice \(L\subset \mathbb {R}^n\), in the previous section we defined the discrete Gauss measure \(g_{L,s,c}(x)\) and the corresponding continuous random variable \(D_{s,c}\ \text {mod}\ L\) on the basic neighborhood F(B) of L. In this section, we discuss an important parameter of a Gauss lattice: the smoothing parameter. The concept of the smoothing parameter was introduced by Micciancio and Regev (2004). We first prove the following lemma.

Lemma 1.3.1

For a given lattice \(L\subset \mathbb {R}^n\), we have

$$\begin{aligned} \lim \limits _{s\rightarrow \infty } \sum \limits _{x\in L} \rho _{1/s}(x)=1 \end{aligned}$$

or equivalently

$$\begin{aligned} \lim \limits _{s\rightarrow \infty } \sum \limits _{x\in L\backslash \{0\}} \rho _{1/s}(x)=0. \end{aligned}$$

Proof

By the property of the exponential function, when \(|x|>M_0\) (where \(M_0\) is a suitable positive constant), we have

$$\begin{aligned} {\text {e}}^{-\pi s^2 |x|^2}\leqslant \frac{1}{\pi s^2 |x|^2}. \end{aligned}$$

So

$$\begin{aligned} \sum \limits _{x\in L}\rho _{1/s}(x)=\sum \limits _{x\in L} {\text {e}}^{-\pi s^2 |x|^2}\leqslant \sum \limits _{|x|\leqslant M_0,x\in L} {\text {e}}^{-\pi s^2 |x|^2}+\frac{1}{\pi s^2} \sum \limits _{|x|> M_0,x\in L} \frac{1}{|x|^2}. \end{aligned}$$

The first sum on the right hand side has only finitely many terms; the term with \(x=0\) equals 1 and every term with \(x\ne 0\) tends to 0 as \(s\rightarrow \infty \), so

$$\begin{aligned} \lim \limits _{s\rightarrow \infty } \sum \limits _{|x|\leqslant M_0,x\in L} {\text {e}}^{-\pi s^2 |x|^2}=1. \end{aligned}$$

The second part is \(\frac{1}{\pi s^2}\) times a convergent series, therefore,

$$\begin{aligned} \lim \limits _{s\rightarrow \infty } \frac{1}{\pi s^2} \sum \limits _{|x|> M_0,x\in L} \frac{1}{|x|^2}=0. \end{aligned}$$

This completes the proof.    \(\square \)

By Definition 1.2.3, we have \(\rho _{1/s}(L)=\sum \limits _{x\in L}\rho _{1/s}(x)\), and \(\rho _{1/s}(L)\) is a monotone decreasing function of s. By Lemma 1.3.1, \(\rho _{1/s}(L)\) decreases monotonically to 1 as \(s\rightarrow \infty \). This motivates the definition of the smoothing parameter.

Definition 1.3.1

Let \(L\subset \mathbb {R}^n\) be a lattice with full rank and \(L^{*}\) the dual lattice of L. For any \(\epsilon >0\), the smoothing parameter \(\eta _{\epsilon }(L)\) of L is defined as

$$\begin{aligned} \eta _{\epsilon }(L)=\min \{s\ |\ s>0,\ \rho _{1/s}(L^*)<1+\epsilon \}. \end{aligned}$$
(1.3.1)

Equivalently,

$$\begin{aligned} \eta _{\epsilon }(L)=\min \{s\ |\ s>0,\ \rho _{1/s}(L^*\backslash \{0\})<\epsilon \}. \end{aligned}$$
(1.3.2)

By definition, the smoothing parameter \(\eta _{\epsilon }(L)\) of L is a monotone decreasing function of \(\epsilon \), namely

$$\begin{aligned} \eta _{\epsilon _1}(L)\leqslant \eta _{\epsilon _2}(L),\quad \text {if}\ 0<\epsilon _2<\epsilon _1. \end{aligned}$$
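For the one dimensional lattice \(L=\mathbb {Z}\) (which is self-dual, \(L^*=\mathbb {Z}\)), the smoothing parameter can be computed from (1.3.2) by bisection, since \(\rho _{1/s}(L^*\backslash \{0\})\) is strictly decreasing in s. A sketch with the illustrative value \(\epsilon =0.01\):

```python
import math

# Smoothing parameter eta_eps(Z) from (1.3.2); Z is self-dual, so
# rho_{1/s}(Z* \ {0}) = 2 * sum_{k>=1} e^{-pi s^2 k^2}, which is
# strictly decreasing in s. eps = 0.01 is an illustrative value.
eps = 0.01

def rho_tail(s, kmax=200):
    return 2 * sum(math.exp(-math.pi * s**2 * k**2) for k in range(1, kmax))

lo, hi = 0.1, 10.0            # rho_tail(0.1) > eps > rho_tail(10.0)
for _ in range(100):          # bisection on s
    mid = (lo + hi) / 2
    if rho_tail(mid) < eps:
        hi = mid
    else:
        lo = mid

print(hi)  # approximately eta_eps(Z)
```

Since the term \(k=1\) dominates, the answer is close to the solution of \(2{\text {e}}^{-\pi s^2}=\epsilon \), i.e. \(s\approx \sqrt{\ln (2/\epsilon )/\pi }\approx 1.30\) for this \(\epsilon \).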

Definition 1.3.2

Let \(A\subset \mathbb {R}^n\) be a finite or countable set, and let X and Y be two discrete random variables on A. The statistical distance between X and Y is defined as

$$\begin{aligned} \Delta (X,Y)=\frac{1}{2}\sum \limits _{a\in A} |Pr\{X=a\}-Pr\{Y=a\}|. \end{aligned}$$
(1.3.3)

If A is a continuous region in \(\mathbb {R}^n\), X and Y are continuous random variables on A with density functions \(T_{1}(x)\) and \(T_{2}(x)\), respectively, then the statistical distance between X and Y is defined as

$$\begin{aligned} \Delta (X,Y)=\frac{1}{2}\int \limits _{A} |T_1(x)-T_2(x)|\textrm{d}x. \end{aligned}$$
(1.3.4)

It can be proved that for any function f defined on A, we have

$$\begin{aligned} \Delta (f(X),f(Y))\leqslant \Delta (X,Y). \end{aligned}$$
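The fact that applying a function cannot increase the statistical distance is easy to check on a small discrete example; below, the distributions P, Q and the map \(f(a)=a \bmod 2\) are illustrative choices, and f merges two points of A:

```python
from fractions import Fraction as F

# Discrete check that a function cannot increase statistical distance.
# P, Q and the map f below are illustrative choices on A = {0, 1, 2}.
P = {0: F(1, 2), 1: F(1, 4), 2: F(1, 4)}   # law of X
Q = {0: F(1, 4), 1: F(1, 4), 2: F(1, 2)}   # law of Y

def sd(p, q):
    # statistical distance (1.3.3)
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys) / 2

def push(p, f):
    # law of f(X) when X has law p
    out = {}
    for k, v in p.items():
        out[f(k)] = out.get(f(k), 0) + v
    return out

def f(a):
    return a % 2   # merges the points 0 and 2

print(sd(P, Q), sd(push(P, f), push(Q, f)))  # 1/4 and 0
```

Merging 0 and 2 cancels the differences between P and Q exactly, so the distance drops from 1/4 to 0; in general it can only stay the same or decrease.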

From (1.2.17) in the last section, \(D_{s,c}\ \text {mod}\ L\) is a continuous random variable defined on the basic neighborhood F(B) of the lattice L with density function \(g_{L,s,c}(x)\). Let U(F(B)) be the uniform random variable on F(B), with density function \(d(x)=\frac{1}{\text {det}(L)}\). The main result of this section is that the statistical distance between \(D_{s,c}\ \text {mod}\ L\) and the uniform distribution U(F(B)) can be made arbitrarily small.

Theorem 1.1

Let \(L=L(B)\subset \mathbb {R}^n\) be a full-rank lattice and \(L^*\) the dual lattice of L. For any \(s>0\), the statistical distance between the discrete Gauss distribution and the uniform distribution on the basic neighborhood F(B) satisfies

$$\begin{aligned} \Delta (D_{s,c}\ \text {mod}\ L,U(F(B)))\leqslant \frac{1}{2}\rho _{1/s}(L^*\backslash \{0\}). \end{aligned}$$
(1.3.5)

Particularly, for any \(\epsilon >0\), and any \(s\geqslant \eta _{\epsilon }(L)\), we have

$$\begin{aligned} \Delta (D_{s,c}\ \text {mod}\ L,U(F(B)))\leqslant \frac{1}{2}\epsilon . \end{aligned}$$
(1.3.6)
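Inequality (1.3.5) can be checked numerically in the one dimensional case \(L=\mathbb {Z}\) (so \(F(B)=[0,1)\), \(\text {det}(L)=1\), and \(L^*=\mathbb {Z}\)); the sketch below computes the statistical distance by quadrature for the illustrative parameters \(s=1\), \(c=0\):

```python
import math

# Numeric check of (1.3.5) for L = Z: the density g_{L,s,c} of
# D_{s,c} mod Z on F(B) = [0, 1) versus the uniform density 1.
# s = 1 and c = 0 are illustrative parameters.
s, c = 1.0, 0.0

def g(x, kmax=30):
    # g_{L,s,c}(x) = (1/s) * sum_{y in Z} rho_{s,c}(x + y), truncated
    return sum(math.exp(-math.pi / s**2 * (x + y - c)**2)
               for y in range(-kmax, kmax + 1)) / s

# statistical distance (1.3.4) by a midpoint quadrature on [0, 1)
N = 2000
delta = 0.5 * sum(abs(g((k + 0.5) / N) - 1.0) for k in range(N)) / N

# right hand side of (1.3.5): (1/2) * rho_{1/s}(Z \ {0})
bound = sum(math.exp(-math.pi * s**2 * k**2) for k in range(1, 30))
print(delta, bound)  # delta is below the bound
```

Already at \(s=1\) the density of \(D_{s,c}\ \text {mod}\ \mathbb {Z}\) is within a few percent of uniform, and the computed distance is below the theoretical bound.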

To prove Theorem 1.1, we first introduce the following lemma.

Lemma 1.3.2

Suppose \(f(x)\in L^1(\mathbb {R}^n)\) and satisfies the following two conditions:

(i) \(\sum \limits _{x\in L} |f(x+u)|\) converges uniformly for u in any bounded closed region of \(\mathbb {R}^n\);

(ii) \(\sum \limits _{y\in L^*} |\hat{f}(y)|\) converges. Then

$$\begin{aligned} \sum \limits _{x\in L} f(x)=\frac{1}{\text {det}(L)} \sum \limits _{y\in L^*} \hat{f}(y), \end{aligned}$$

where \(L=L(B)\subset \mathbb {R}^n\) is a full-rank lattice, \(L^*\) is the dual lattice, \(\text {det}(L)=|\text {det}(B)|\) is the determinant of the lattice L.

Proof

We first consider the case \(B=I_n\), where \(L=\mathbb {Z}^n\) and \(L^*=\mathbb {Z}^n\). By condition (i), define

$$\begin{aligned} F(u)=\sum \limits _{x\in \mathbb {Z}^n} f(x+u),\quad u\in \mathbb {R}^n. \end{aligned}$$

Since F(u) is a periodic function with respect to the lattice \(\mathbb {Z}^n\), namely \(F(u+x)=F(u)\) for all \(x\in \mathbb {Z}^n\), we have the following Fourier expansion

$$\begin{aligned} F(u)=\sum \limits _{y\in \mathbb {Z}^n} a(y) {\text {e}}^{2\pi i u\cdot y}. \end{aligned}$$
(1.3.7)

Integrating \(F(u){\text {e}}^{-2\pi i u\cdot x}\) over \(u\in [0,1]^n\) gives:

$$\begin{aligned} \int \limits _{[0,1]^n} F(u){\text {e}}^{-2\pi i u\cdot x}\textrm{d}u=\sum \limits _{y\in \mathbb {Z}^n} \int \limits _{[0,1]^n} a(y){\text {e}}^{2\pi i u\cdot (y-x)}\textrm{d}u=a(x),\ \forall x\in \mathbb {Z}^n. \end{aligned}$$

Hence, we have the following Fourier inversion formula:

$$\begin{aligned} a(y)=\int \limits _{[0,1]^n} F(u){\text {e}}^{-2\pi i u\cdot y}\textrm{d}u=\sum \limits _{x\in \mathbb {Z}^n} \int \limits _{[0,1]^n} f(x+u){\text {e}}^{-2\pi i (u+x)\cdot y}\textrm{d}u \end{aligned}$$
$$\begin{aligned} \qquad =\sum \limits _{x\in \mathbb {Z}^n} \int \limits _{x+[0,1]^n} f(z){\text {e}}^{-2\pi i z\cdot y}\textrm{d}z=\int \limits _{\mathbb {R}^n} f(z){\text {e}}^{-2\pi i z\cdot y}\textrm{d}z=\hat{f}(y). \end{aligned}$$

From the above equation and (1.3.7),

$$\begin{aligned} F(u)=\sum \limits _{y\in \mathbb {Z}^n} \hat{f}(y) {\text {e}}^{2\pi i u\cdot y}. \end{aligned}$$

Taking \(u=0\), we have

$$\begin{aligned} F(0)=\sum \limits _{x\in \mathbb {Z}^n} f(x)=\sum \limits _{y\in \mathbb {Z}^n} \hat{f}(y), \end{aligned}$$

the lemma is proved for \(L=\mathbb {Z}^n\). For the general case \(L=L(B)\), note that \(L^*=L((B^{-1})')\). Then

$$\begin{aligned} \sum \limits _{x\in L}f(x)=\sum \limits _{x\in \mathbb {Z}^n}f(Bx)=\sum \limits _{x\in \mathbb {Z}^n}(f\circ B)(x), \end{aligned}$$

where \((f\circ B)(x)=f(Bx)\). Replacing f with \(f\circ B\), which still satisfies the conditions of this lemma, gives

$$\begin{aligned} \sum \limits _{x\in \mathbb {Z}^n}f\circ B(x)=\sum \limits _{y\in \mathbb {Z}^n}\widehat{f\circ B}(y). \end{aligned}$$

From the definition of Fourier transform,

$$\begin{aligned} \widehat{f\circ B}(y)=\int \limits _{\mathbb {R}^n} f(Bt) {\text {e}}^{-2\pi i y\cdot t} \textrm{d}t. \end{aligned}$$

Making the change of variable \(t=B^{-1} x\), we get

$$\begin{aligned} \widehat{f\circ B}(y)=\frac{1}{|\text {det}(B)|} \int \limits _{\mathbb {R}^n} f(x) {\text {e}}^{-2\pi i y\cdot B^{-1}x} \textrm{d}x \end{aligned}$$
$$\begin{aligned} \qquad \qquad \quad =\frac{1}{|\text {det}(B)|} \int \limits _{\mathbb {R}^n} f(x) {\text {e}}^{-2\pi i (B^{-1})' y\cdot x} \textrm{d}x\ \end{aligned}$$
$$\begin{aligned} =\frac{1}{|\text {det}(B)|} \hat{f} ((B^{-1})'y).\quad \end{aligned}$$

In summary,

$$\begin{aligned} \sum \limits _{x\in L}f(x)=\sum \limits _{y\in \mathbb {Z}^n}\widehat{f\circ B}(y)=\frac{1}{|\text {det}(B)|} \sum \limits _{y\in \mathbb {Z}^n} \hat{f} ((B^{-1})'y)=\frac{1}{|\text {det}(B)|} \sum \limits _{y\in L^*} \hat{f}(y). \end{aligned}$$

We finish the proof of this lemma.    \(\square \)
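As a numerical illustration of Lemma 1.3.2, the identity can be checked for the one-dimensional lattice \(L=c\mathbb {Z}\) with \(f(x)={\text {e}}^{-\pi x^2}\), which is its own Fourier transform; here \(\text {det}(L)=c\) and \(L^*=\frac{1}{c}\mathbb {Z}\). The following sketch (the helper names are ours) truncates both sums:

```python
import math

def lattice_gauss_sum(c, K=60):
    # left-hand side of Lemma 1.3.2: sum of f(x) over x in L = c*Z,
    # with f(x) = exp(-pi x^2), truncated to |k| <= K
    return sum(math.exp(-math.pi * (c * k) ** 2) for k in range(-K, K + 1))

def dual_side(c, K=60):
    # right-hand side: det(L) = c, dual lattice L* = (1/c)*Z, and the
    # Gaussian exp(-pi x^2) is its own Fourier transform
    return (1.0 / c) * sum(math.exp(-math.pi * (k / c) ** 2) for k in range(-K, K + 1))

for c in (0.5, 1.0, 1.7):
    assert abs(lattice_gauss_sum(c) - dual_side(c)) < 1e-12
```

Both sides agree to machine precision for each tested c, as the lemma predicts.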

The proof of Theorem 1.1 Let \(g_{L,s,c}(x)\) be the density function of the continuous random variable \(D_{s,c}\ \text {mod}\ L\) on the basic neighborhood F(B) of L. From Eq. (1.2.17) and Lemma 1.3.2, we have

$$\begin{aligned} g_{L,s,c}(x)=\frac{1}{s^n}\sum \limits _{y\in L}\rho _{s,c}(x+y)=\frac{1}{s^n}\sum \limits _{y\in L}\rho _{s,c-x}(y). \end{aligned}$$

By (1.2.5), the Fourier transform of \(\rho _{s,c-x}(y)\) is

$$\begin{aligned} \hat{\rho }_{s,c-x}(y)={\text {e}}^{-2\pi i y \cdot (c-x)} s^n \rho _{1/s}(y). \end{aligned}$$

Combining with Lemma 1.3.2, we obtain

$$\begin{aligned} g_{L,s,c}(x)=\frac{1}{|\text {det}(B)|} \sum \limits _{y\in L^*} {\text {e}}^{2\pi i y\cdot (x-c)} \rho _{1/s}(y). \end{aligned}$$
(1.3.8)

The density function of the uniformly distributed random variable U(F(B)) on F(B) is \(\frac{1}{|\text {det}(B)|}\). By the definition of statistical distance,

$$\begin{aligned} \Delta (D_{s,c}\ \text {mod}\ L,U(F(B)))=\frac{1}{2} \int \limits _{F(B)} |g_{L,s,c}(x)-\frac{1}{|\text {det}(B)|}|\textrm{d}x\qquad \qquad \qquad \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ =\frac{1}{2} \int \limits _{F(B)} |\frac{1}{|\text {det}(B)|} \sum \limits _{y\in L^*,y\ne 0} {\text {e}}^{2\pi i y\cdot (x-c)} \rho _{1/s}(y)|\textrm{d}x \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ \ \leqslant \frac{1}{2} \text {Vol}(F(B)) \text {det}(L^*) \max \limits _{x\in F(B)} |\sum \limits _{y\in L^*\backslash \{0\}} {\text {e}}^{2\pi i y\cdot (x-c)} \rho _{1/s}(y)| \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \ \ \ \leqslant \frac{1}{2}\sum \limits _{y\in L^*\backslash \{0\}} \rho _{1/s}(y)=\frac{1}{2}\rho _{1/s}(L^*\backslash \{0\}). \end{aligned}$$

So (1.3.5) in Theorem 1.1 is proved. From the definition of smoothing parameter \(\eta _{\epsilon }(L)\), when \(s\geqslant \eta _{\epsilon }(L)\), we have

$$\begin{aligned} \rho _{1/s}(L^*\backslash \{0\})<\epsilon . \end{aligned}$$

Therefore, if \(s\geqslant \eta _{\epsilon }(L)\), we have

$$\begin{aligned} \Delta (D_{s,c}\ \text {mod}\ L,U(F(B)))\leqslant \frac{1}{2}\epsilon . \end{aligned}$$

Thus, Theorem 1.1 is proved.    \(\square \)

Another application of Lemma 1.3.2 is to prove the following inequality.

Lemma 1.3.3

Let \(a\geqslant 1\) be a given positive real number, then

$$\begin{aligned} \sum \limits _{x\in L} {\text {e}}^{-\frac{\pi }{a} |x|^2}\leqslant a^{\frac{n}{2}} \sum \limits _{x\in L} {\text {e}}^{-\pi |x|^2}. \end{aligned}$$
(1.3.9)

Proof

By Definition 1.2.1, the summand on the left-hand side of (1.3.9) can be written as

$$\begin{aligned} \rho _{\sqrt{a}}(x)={\text {e}}^{-\frac{\pi }{a} |x|^2},\ s=\sqrt{a}. \end{aligned}$$

Since \(\rho _s(x)\) satisfies the conditions of Lemma 1.3.2, we have

$$\begin{aligned} \sum \limits _{x\in L}\rho _s(x)=\text {det}(L^*) \sum \limits _{x\in L^*} \hat{\rho }_s(x)=\text {det}(L^*) \sum \limits _{x\in L^*} s^n \rho _{1/s}(x). \end{aligned}$$

Clearly \(\rho _s(x)\) is monotonically increasing in s, so \(\rho _{\frac{1}{\sqrt{a}}}(x)\leqslant \rho (x)\) when \(a\geqslant 1\). Taking \(s=\sqrt{a}\geqslant 1\) in the above, we get

$$\begin{aligned} \sum \limits _{x\in L}\rho _{\sqrt{a}}(x)=a^{\frac{n}{2}} \text {det}(L^*) \sum \limits _{x\in L^*} \rho _{\frac{1}{\sqrt{a}}}(x)\leqslant a^{\frac{n}{2}} \text {det}(L^*) \sum \limits _{x\in L^*} \rho (x) \end{aligned}$$
$$\begin{aligned} =a^{\frac{n}{2}} \sum \limits _{x\in L}\rho (x)=a^{\frac{n}{2}} \sum \limits _{x\in L} {\text {e}}^{-\pi |x|^2}.\qquad \ \end{aligned}$$

We complete the proof of Lemma 1.3.3.    \(\square \)
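For \(L=\mathbb {Z}^n\) the sums in (1.3.9) factor into one-dimensional Gaussian sums, since \(|x|^2=x_1^2+\cdots +x_n^2\), which makes the inequality easy to check numerically. A small sketch under this assumption:

```python
import math

def theta(t, K=60):
    # one-dimensional Gaussian sum: sum_{k in Z} exp(-pi * t * k^2), t > 0
    return sum(math.exp(-math.pi * t * k * k) for k in range(-K, K + 1))

n = 4
for a in (1.0, 2.0, 10.0):
    lhs = theta(1.0 / a) ** n             # sum over Z^n of exp(-pi |x|^2 / a)
    rhs = a ** (n / 2) * theta(1.0) ** n  # a^{n/2} * sum over Z^n of exp(-pi |x|^2)
    assert lhs <= rhs + 1e-12             # inequality (1.3.9)
```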

Let \(N=N(0,1)\) be the unit sphere in \(\mathbb {R}^n\), namely

$$\begin{aligned} N=\{x\in \mathbb {R}^n\ |\ |x|\leqslant 1\}. \end{aligned}$$

Lemma 1.3.4

Suppose \(L\subset \mathbb {R}^n\) is a lattice with full rank, \(c>\frac{1}{\sqrt{2\pi }}\) is a positive real number, \(C=c\sqrt{2\pi e}\cdot {\text {e}}^{-\pi c^2}\), \(v\in \mathbb {R}^n\), then

$$\begin{aligned} \rho (L\backslash c\sqrt{n}N)<C^n \rho (L),\ \text {and}\ \rho ((L+v)\backslash c\sqrt{n}N)<2C^n \rho (L). \end{aligned}$$

That is,

$$\begin{aligned} \sum \limits _{x\in L,x\notin c\sqrt{n}N} {\text {e}}^{-\pi |x|^2}<C^n \sum \limits _{x\in L} {\text {e}}^{-\pi |x|^2}, \end{aligned}$$
(1.3.10)
$$\begin{aligned} \sum \limits _{x\in L+v,x\notin c\sqrt{n}N} {\text {e}}^{-\pi |x|^2}<2C^n \sum \limits _{x\in L} {\text {e}}^{-\pi |x|^2}. \end{aligned}$$

Proof

We prove the first inequality. Let t be a real number with \(0<t<1\); then

$$\begin{aligned} \sum \limits _{x\in L}{\text {e}}^{-\pi t |x|^2}=\sum \limits _{x\in L}{\text {e}}^{\pi (1-t) |x|^2}\cdot {\text {e}}^{-\pi |x|^2} \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \qquad \ >\sum \limits _{x\in L,|x|^2\geqslant c^2 n}{\text {e}}^{\pi (1-t) |x|^2}\cdot {\text {e}}^{-\pi |x|^2} \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \qquad \geqslant {\text {e}}^{\pi (1-t) c^2 n}\sum \limits _{x\in L,|x|^2\geqslant c^2 n}{\text {e}}^{-\pi |x|^2}. \end{aligned}$$

In Lemma 1.3.3, taking \(a=\frac{1}{t}>1\), we get

$$\begin{aligned} \sum \limits _{x\in L}{\text {e}}^{-\pi t |x|^2}\leqslant t^{-\frac{n}{2}} \sum \limits _{x\in L}{\text {e}}^{-\pi |x|^2}. \end{aligned}$$

Hence,

$$\begin{aligned} \sum \limits _{x\in L,|x|^2\geqslant c^2 n}{\text {e}}^{-\pi |x|^2}<{\text {e}}^{-\pi (1-t) c^2 n} \sum \limits _{x\in L}{\text {e}}^{-\pi t |x|^2}\leqslant {\text {e}}^{-\pi (1-t) c^2 n} t^{-\frac{n}{2}} \sum \limits _{x\in L}{\text {e}}^{-\pi |x|^2}. \end{aligned}$$

It implies that

$$\begin{aligned} \rho (L\backslash c\sqrt{n}N)<(t^{-\frac{1}{2}} {\text {e}}^{-\pi (1-t) c^2})^n \rho (L). \end{aligned}$$

Let \(t=\frac{1}{2\pi c^2}\); since \(c>\frac{1}{\sqrt{2\pi }}\), we have \(0<t<1\), and

$$\begin{aligned} \rho (L\backslash c\sqrt{n}N)<(c\cdot \sqrt{2\pi e}\cdot {\text {e}}^{-\pi c^2})^n \rho (L). \end{aligned}$$

The second inequality can be proved in the same way. Lemma 1.3.4 holds.    \(\square \)
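The tail bound of Lemma 1.3.4 can be checked numerically in the simplest case \(n=1\), \(L=\mathbb {Z}\), \(c=1\), where \(c\sqrt{n}N\) is the interval \([-1,1]\) and the excluded lattice points are the integers with \(|k|\geqslant 2\). A sketch:

```python
import math

# Lemma 1.3.4 (first inequality) for n = 1, L = Z, c = 1:
#   rho(Z \ c*sqrt(n)*N) < C * rho(Z),  C = c*sqrt(2*pi*e)*exp(-pi*c^2)
c = 1.0
C = c * math.sqrt(2 * math.pi * math.e) * math.exp(-math.pi * c * c)
rho_all = sum(math.exp(-math.pi * k * k) for k in range(-60, 61))
rho_tail = sum(math.exp(-math.pi * k * k) for k in range(-60, 61) if abs(k) > c)
assert rho_tail < C * rho_all
```

Here the tail mass is of order \(2{\text {e}}^{-4\pi }\), far below the bound \(C\rho (\mathbb {Z})\).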

Based on the above inequality, we can give an upper bound estimate for the smoothing parameter of a lattice, which is a very important result about the smoothing parameter.

Theorem 1.2

For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), we have

$$\begin{aligned} \eta _{2^{-n}}(L)\leqslant \sqrt{n}/\lambda _1(L^*). \end{aligned}$$
(1.3.11)

where \(\lambda _1(L^*)\) is the minimal distance of the dual lattice \(L^*\) (see (1.0.4)).

Proof

Taking \(c=1\) in Lemma 1.3.4, we first prove

$$\begin{aligned} C=\sqrt{2\pi e}\cdot {\text {e}}^{-\pi }<\frac{1}{4}. \end{aligned}$$
(1.3.12)

Squaring both sides, (1.3.12) is equivalent to \(2\pi e\cdot {\text {e}}^{-2\pi }<\frac{1}{16}\), i.e. \(32\pi e<{\text {e}}^{2\pi }\); taking logarithms, this becomes

$$\begin{aligned} \text {log}(32\pi )+1<2\pi . \end{aligned}$$

The latter holds, since

$$\begin{aligned} \text {log}(32\pi )+1<\text {log}128+1<2\pi . \end{aligned}$$

So (1.3.12) holds. By Lemma 1.3.4, we have

$$\begin{aligned} \rho (L^*\backslash \sqrt{n}N)<C^n \rho (L^*)=C^n (\rho (L^*\backslash \sqrt{n}N)+\rho (L^*\cap \sqrt{n}N)). \end{aligned}$$

Collecting the terms on both sides, we get

$$\begin{aligned} \rho (L^*\backslash \sqrt{n}N)<\frac{C^n}{1-C^n} \rho (L^*\cap \sqrt{n}N). \end{aligned}$$

If \(s>\sqrt{n}/\lambda _1(L^*)\), for all \(x\in L^*\backslash \{0\}\),

$$\begin{aligned} |sx|\geqslant s\cdot \lambda _1(L^*)>\sqrt{n}\Rightarrow sL^*\cap \sqrt{n}N=\{0\}. \end{aligned}$$

Hence,

$$\begin{aligned} \rho _{1/s}(L^*)=\rho (sL^*)=1+\rho (sL^*\backslash \sqrt{n}N) \end{aligned}$$
$$\begin{aligned} \qquad \quad \quad \ <1+\frac{C^n}{1-C^n} \rho (sL^*\cap \sqrt{n}N) \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \quad \quad \ =1+\frac{C^n}{1-C^n}<1+\frac{2^{-2n}}{2^{-n}}=2^{-n}+1. \end{aligned}$$

Hence \(\rho _{1/s}(L^*\backslash \{0\})<2^{-n}\) whenever \(s>\sqrt{n}/\lambda _1(L^*)\). Taking \(\epsilon =2^{-n}\), we conclude

$$\begin{aligned} \eta _{2^{-n}}(L)\leqslant \sqrt{n}/\lambda _1(L^*). \end{aligned}$$

Theorem 1.2 is obtained.    \(\square \)
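Theorem 1.2 can be illustrated on the self-dual lattice \(L=\mathbb {Z}^n\), where \(\lambda _1(L^*)=1\) and, by the product structure of \(\mathbb {Z}^n\), \(\rho _{1/s}(\mathbb {Z}^n\backslash \{0\})=(\sum _{k\in \mathbb {Z}}{\text {e}}^{-\pi s^2k^2})^n-1\). The following sketch checks that \(s=\sqrt{n}\) already drives this mass below \(2^{-n}\):

```python
import math

def rho_inv_s_nonzero(n, s, K=40):
    # rho_{1/s}(Z^n \ {0}) = (sum_k exp(-pi s^2 k^2))^n - 1,
    # using that Z^n is a product of n copies of Z
    theta = sum(math.exp(-math.pi * s * s * k * k) for k in range(-K, K + 1))
    return theta ** n - 1.0

# Theorem 1.2 for L = Z^n (so L* = Z^n and lambda_1(L*) = 1)
for n in (1, 2, 4, 8):
    assert rho_inv_s_nonzero(n, math.sqrt(n)) <= 2.0 ** (-n)
```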

According to the proof of Theorem 1.2, we can further improve the upper bound estimation of the smoothing parameter.

Corollary 1.3.1

Let

$$\begin{aligned} r=\sqrt{\frac{1}{2\pi }+\frac{{\log }2\pi }{2\pi }+\frac{1}{n\pi }{\log }(1+2^n)}.\quad (<0.82) \end{aligned}$$
(1.3.13)

Then for any full-rank lattice \(L\subset \mathbb {R}^n\), we obtain

$$\begin{aligned} \eta _{2^{-n}}(L)\leqslant r\sqrt{n}/\lambda _1(L^*). \end{aligned}$$
(1.3.14)

Proof

Taking \(c>r\) in Lemma 1.3.4, we have \(c>\frac{1}{\sqrt{2\pi }}\) and

$$\begin{aligned} C=c\cdot \sqrt{2\pi e}\cdot {\text {e}}^{-\pi c^2}\Rightarrow \frac{C^n}{1-C^n}<\frac{1}{2^n}. \end{aligned}$$
(1.3.15)

By Lemma 1.3.4, for any full-rank lattice \(L\subset \mathbb {R}^n\), we have

$$\begin{aligned} \rho (L^*\backslash c\sqrt{n}N)<\frac{C^n}{1-C^n} \rho (L^*\cap c\sqrt{n}N). \end{aligned}$$

If \(s>c\sqrt{n}/\lambda _1(L^*)\), for any \(x\in L^*\backslash \{0\}\),

$$\begin{aligned} |sx|\geqslant s\lambda _1(L^*)>c\sqrt{n}. \end{aligned}$$

Hence,

$$\begin{aligned} sL^*\cap c\sqrt{n}N=\{0\}. \end{aligned}$$

Therefore,

$$\begin{aligned} \rho _{1/s}(L^*)=\rho (sL^*)=1+\rho (sL^*\backslash c\sqrt{n}N)<1+\frac{C^n}{1-C^n}<1+\frac{1}{2^n}. \end{aligned}$$

Finally, letting \(c\rightarrow r\), we have

$$\begin{aligned} \eta _{2^{-n}}(L)\leqslant r\sqrt{n}/\lambda _1(L^*). \end{aligned}$$

Corollary 1.3.1 is proved.    \(\square \)

Corollary 1.3.2

For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), we have

$$\begin{aligned} \eta _{2^{-n}}(L)\leqslant \frac{4}{5}\sqrt{n}/\lambda _1(L^*). \end{aligned}$$
(1.3.16)

Proof

Taking \(c=\frac{4}{5}\) in Lemma 1.3.4, we have \(c>\frac{1}{\sqrt{2\pi }}\) and

$$\begin{aligned} C=c\cdot \sqrt{2\pi e}\cdot {\text {e}}^{-\pi c^2}\Rightarrow \frac{C^n}{1-C^n}<\frac{1}{2^n}. \end{aligned}$$

Lemma 1.3.4 implies that for any full-rank lattice \(L\subset \mathbb {R}^n\), we have

$$\begin{aligned} \rho (L^*\backslash c\sqrt{n}N)<\frac{C^n}{1-C^n} \rho (L^*\cap c\sqrt{n}N). \end{aligned}$$

If \(s>c\sqrt{n}/\lambda _1(L^*)\), for any \(x\in L^*\backslash \{0\}\),

$$\begin{aligned} |sx|\geqslant s\lambda _1(L^*)>c\sqrt{n}. \end{aligned}$$

Hence,

$$\begin{aligned} sL^*\cap c\sqrt{n}N=\{0\}. \end{aligned}$$

We get

$$\begin{aligned} \rho _{1/s}(L^*)=\rho (sL^*)=1+\rho (sL^*\backslash c\sqrt{n}N)<1+\frac{C^n}{1-C^n}<1+\frac{1}{2^n}, \end{aligned}$$

which implies that

$$\begin{aligned} \eta _{2^{-n}}(L)\leqslant \frac{4}{5}\sqrt{n}/\lambda _1(L^*). \end{aligned}$$

Corollary 1.3.2 is proved.    \(\square \)

In the following, we give another classical upper bound for the smoothing parameter. For an n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), we have introduced the minimal distance \(\lambda _1(L)\) on a lattice, which can be generalized as follows. For \(1\leqslant i\leqslant n\),

$$\begin{aligned} \lambda _i(L)=\min \{r\ |\ \text {dim}(\text {span}(L\cap rN(0,1)))\geqslant i\}. \end{aligned}$$
(1.3.17)

\(\lambda _i(L)\) is also called the i-th continuous minimal distance of lattice L. To give an upper bound estimation of the smoothing parameter, we first prove the following lemma.

Lemma 1.3.5

Let L be an n dimensional full-rank lattice, \(s>0\), \(c\in \mathbb {R}^n\). Then

$$\begin{aligned} \rho _{s,c}(L)\leqslant \rho _s(L). \end{aligned}$$
(1.3.18)

Proof

According to Lemma 1.3.2, we have

$$\begin{aligned} \rho _{s,c}(L)=\text {det}(L^*)\hat{\rho }_{s,c}(L^*) \end{aligned}$$
$$\begin{aligned} \qquad \qquad \ \ =\text {det}(L^*)\sum \limits _{y\in L^*}\hat{\rho }_{s,c}(y) \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \quad \ =\text {det}(L^*)\sum \limits _{y\in L^*}{\text {e}}^{-2\pi ic\cdot y}\hat{\rho }_{s}(y) \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \qquad \leqslant \text {det}(L^*)\sum \limits _{y\in L^*}\hat{\rho }_{s}(y)=\rho _s(L), \end{aligned}$$

where we have used \(\hat{\rho }_{s,c}(y)={\text {e}}^{-2\pi ic\cdot y}\hat{\rho }_{s}(y)\), the lemma gets proved.    \(\square \)

Theorem 1.3

For any n dimensional full-rank lattice L, \(\epsilon >0\), we have

$$\begin{aligned} \eta _{\epsilon }(L)\leqslant \sqrt{\frac{\ln (2n(1+1/\epsilon ))}{\pi }}\lambda _n(L), \end{aligned}$$
(1.3.19)

where \(\lambda _n(L)\) is the n-th continuous minimal distance of the lattice L defined by (1.3.17).

Proof

Let

$$\begin{aligned} s=\sqrt{\frac{\ln (2n(1+1/\epsilon ))}{\pi }}\lambda _n(L), \end{aligned}$$

we need to prove \(\rho _{1/s}(L^*\backslash \{0\})\leqslant \epsilon \). From the definition of \(\lambda _n(L)\), there are n linearly independent vectors \(v_1,v_2,\dots ,v_n\) in L satisfying \(|v_i|\leqslant \lambda _n(L)\), and for any integer \(k>1\), we have \(v_i/k\notin L\), \(1\leqslant i\leqslant n\). The main idea of the proof is to cover \(L^*\backslash \{0\}\) by suitable subsets of \(L^*\): for any integer j, let

$$\begin{aligned} S_{i,j}=\{x\in L^*\ |\ x\cdot v_i=j\}\subset L^*, \end{aligned}$$

For any \(y\in L^*\backslash \{0\}\), there is some \(v_i\) with \(y\cdot v_i\ne 0\) (otherwise \(y=0\), since the \(v_i\) span \(\mathbb {R}^n\)), which implies \(y\notin S_{i,0}\), i.e. \(y\in L^*\backslash S_{i,0}\). So we have

$$\begin{aligned} L^*\backslash \{0\}=\bigcup \limits _{i=1}^n (L^*\backslash S_{i,0}). \end{aligned}$$
(1.3.20)

To estimate \(\rho _{1/s}(L^*\backslash S_{i,0})\), we need some preparations. Let \(u_i=v_i/|v_i|^2\), then \(|u_i|=1/|v_i|\geqslant 1/\lambda _n(L)\). \(\forall j\in \mathbb {Z}\), \(\forall x\in S_{i,j}\),

$$\begin{aligned} (x-ju_i)\cdot ju_i=j x\cdot u_i-j^2 u_i\cdot u_i=\frac{j^2}{|v_i|^2}-\frac{j^2}{|v_i|^2}=0. \end{aligned}$$

Therefore,

$$\begin{aligned} |x|^2=|x-ju_i|^2+|ju_i|^2. \end{aligned}$$

So

$$\begin{aligned} \rho _{1/s}(S_{i,j})=\sum \limits _{x\in S_{i,j}}{\text {e}}^{-\pi s^2 |x|^2} \end{aligned}$$
$$\begin{aligned} ={\text {e}}^{-\pi s^2 |ju_i|^2}\sum \limits _{x\in S_{i,j}}{\text {e}}^{-\pi s^2 |x-ju_i|^2} \end{aligned}$$
$$\begin{aligned} ={\text {e}}^{-\pi s^2 |ju_i|^2}\rho _{1/s}(S_{i,j}-ju_i).\quad \end{aligned}$$
(1.3.21)

Since the inner product of any vector in \(S_{i,j}-ju_i\) with \(v_i\) is 0, \(S_{i,j}-ju_i\) is actually a translation of \(S_{i,0}\): there is a vector w satisfying \(S_{i,j}-ju_i=S_{i,0}-w\). In fact, for any \(x_j\in S_{i,j}\), \(x_0\in S_{i,0}\), \(w=x_0-x_j+ju_i\) satisfies this equality. By Lemma 1.3.5, we have

$$\begin{aligned} \rho _{1/s}(S_{i,j}-ju_i)=\rho _{1/s}(S_{i,0}-w)=\rho _{1/s,w}(S_{i,0})\leqslant \rho _{1/s}(S_{i,0}). \end{aligned}$$
(1.3.22)

Combine (1.3.21) with (1.3.22),

$$\begin{aligned} \rho _{1/s}(S_{i,j})\leqslant {\text {e}}^{-\pi s^2 |ju_i|^2} \rho _{1/s}(S_{i,0})\leqslant {\text {e}}^{-\pi (s/\lambda _n(L))^2 j^2} \rho _{1/s}(S_{i,0}). \end{aligned}$$

For any real \(q>1\), it follows that

$$\begin{aligned} \sum \limits _{j\ne 0}q^{-j^2}\leqslant 2\sum \limits _{j>0} q^{-j}=\frac{2}{q-1}. \end{aligned}$$

Next, we will estimate \(\rho _{1/s}(L^*\backslash S_{i,0})\),

$$\begin{aligned} \rho _{1/s}(L^*\backslash S_{i,0})=\sum \limits _{j\ne 0}\rho _{1/s}(S_{i,j}) \end{aligned}$$
$$\begin{aligned} \leqslant \sum \limits _{j\ne 0}{\text {e}}^{-\pi (s/\lambda _n(L))^2 j^2} \rho _{1/s}(S_{i,0})\qquad \ \end{aligned}$$
$$\begin{aligned} \leqslant \frac{2}{{\text {e}}^{\pi (s/\lambda _n(L))^2}-1} \rho _{1/s}(S_{i,0})\qquad \quad \ \end{aligned}$$
$$\begin{aligned} \qquad \qquad =\frac{2}{{\text {e}}^{\pi (s/\lambda _n(L))^2}-1} (\rho _{1/s}(L^*)-\rho _{1/s}(L^*\backslash S_{i,0})). \end{aligned}$$

So we get

$$\begin{aligned} \rho _{1/s}(L^*\backslash S_{i,0})\leqslant \frac{2}{{\text {e}}^{\pi (s/\lambda _n(L))^2}+1}\rho _{1/s}(L^*). \end{aligned}$$

From (1.3.20),

$$\begin{aligned} \rho _{1/s}(L^*\backslash \{0\})\leqslant \sum \limits _{i=1}^n \rho _{1/s}(L^*\backslash S_{i,0})\leqslant \frac{2n}{{\text {e}}^{\pi (s/\lambda _n(L))^2}+1}\rho _{1/s}(L^*). \end{aligned}$$

Together with \(\rho _{1/s}(L^*)=1+\rho _{1/s}(L^*\backslash \{0\})\), we have

$$\begin{aligned} \rho _{1/s}(L^*\backslash \{0\})\leqslant \frac{2n}{{\text {e}}^{\pi (s/\lambda _n(L))^2}+1-2n}<\frac{2n}{{\text {e}}^{\pi (s/\lambda _n(L))^2}-2n}=\epsilon . \end{aligned}$$

In the last equality, we have used that

$$\begin{aligned} s=\sqrt{\frac{\ln (2n(1+1/\epsilon ))}{\pi }}\lambda _n(L). \end{aligned}$$

Based on the definition of the smoothing parameter,

$$\begin{aligned} \eta _{\epsilon }(L)\leqslant \sqrt{\frac{\ln (2n(1+1/\epsilon ))}{\pi }}\lambda _n(L). \end{aligned}$$

Theorem 1.3 is proved.    \(\square \)
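For \(L=\mathbb {Z}^n\) we have \(\lambda _n(L)=1\), so Theorem 1.3 predicts that \(s=\sqrt{\ln (2n(1+1/\epsilon ))/\pi }\) already makes \(\rho _{1/s}(L^*\backslash \{0\})\leqslant \epsilon \). A numerical sketch of this special case (the helper name is ours):

```python
import math

def gauss_mass(n, s, K=40):
    # rho_{1/s}(Z^n \ {0}), using the one-dimensional factorization of Z^n
    theta = sum(math.exp(-math.pi * s * s * k * k) for k in range(-K, K + 1))
    return theta ** n - 1.0

# Theorem 1.3 for L = Z^n, where lambda_n(Z^n) = 1
for n in (2, 4, 8):
    for eps in (0.5, 0.01):
        s = math.sqrt(math.log(2 * n * (1 + 1 / eps)) / math.pi)
        assert gauss_mass(n, s) <= eps
```

The bound is fairly tight here: for \(\epsilon =0.01\) the computed mass comes within a few percent of \(\epsilon \).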

At the end of this section, we present an inequality for the minimal distance on a lattice, which will be used in the next chapter when we prove that the LWE problem is polynomially equivalent to hard lattice problems.

Lemma 1.3.6

For any n dimensional lattice L, \(\epsilon >0\), we have

$$\begin{aligned} \eta _{\epsilon }(L)\geqslant \sqrt{\frac{\ln 1/\epsilon }{\pi }} \frac{1}{\lambda _1(L^*)}\geqslant \sqrt{\frac{\ln 1/\epsilon }{\pi }} \frac{\lambda _n(L)}{n}. \end{aligned}$$
(1.3.23)

Proof

Let \(v\in L^*\) with \(|v|=\lambda _1(L^*)\), and let \(s=\eta _{\epsilon }(L)\). From the definition of the smoothing parameter, we have

$$\begin{aligned} \epsilon \geqslant \rho _{1/s}(L^*\backslash \{0\}) \geqslant \rho _{1/s}(v)={\text {e}}^{-\pi s^2 \lambda _1^2(L^*)}. \end{aligned}$$

Hence,

$$\begin{aligned} s\geqslant \sqrt{\frac{\ln 1/\epsilon }{\pi }} \frac{1}{\lambda _1(L^*)}. \end{aligned}$$

That is, the first inequality in this lemma holds. For the second inequality, Theorem 2.1 in Banaszczyk (1993) implies that

$$\begin{aligned} 1\leqslant \lambda _1(L^*)\lambda _n(L)\leqslant n, \end{aligned}$$
(1.3.24)

so we immediately get the second inequality. The lemma holds.    \(\square \)
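The lower bound of Lemma 1.3.6 can be checked for \(L=\mathbb {Z}\) (so \(\lambda _1(L^*)=1\)) by locating \(\eta _{\epsilon }(\mathbb {Z})\) with a bisection on s, using that \(\rho _{1/s}(\mathbb {Z}\backslash \{0\})\) is decreasing in s. A sketch (the function names are ours):

```python
import math

def rho_inv_s(s, K=60):
    # rho_{1/s}(Z \ {0}) = 2 * sum_{k >= 1} exp(-pi s^2 k^2)
    return 2 * sum(math.exp(-math.pi * s * s * k * k) for k in range(1, K))

def eta(eps, lo=1e-6, hi=50.0, iters=80):
    # smoothing parameter of Z, found by bisection (rho_inv_s decreases in s)
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if rho_inv_s(mid) > eps else (lo, mid)
    return hi

for eps in (0.5, 0.01, 1e-4):
    # first inequality of (1.3.23), with lambda_1(Z*) = 1
    assert eta(eps) >= math.sqrt(math.log(1 / eps) / math.pi)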

1.4 Some Properties of the Discrete Gauss Distribution

In this section we introduce some properties of the discrete Gauss distribution. First we define the expectation with respect to the discrete Gauss distribution.

Definition 1.4.1

Let m, n be two positive integers, \(L\subset \mathbb {R}^n\) be an n dimensional full-rank lattice, \(c\in \mathbb {R}^n\), \(s>0\). Let \(\xi \) be a random variable with the discrete Gauss distribution \(D_{L,s,c}\), and \(f:\mathbb {R}^n\rightarrow \mathbb {R}^m\) a given function. We denote

$$\begin{aligned} E[\xi ]=\sum \limits _{\xi =x\in L} x D_{L,s,c}(x) \end{aligned}$$
(1.4.1)

as the expectation of \(\xi \), and denote

$$\begin{aligned} E[f(\xi )]=\sum \limits _{\xi =x\in L} f(x) D_{L,s,c}(x) \end{aligned}$$
(1.4.2)

as the expectation of \(f(\xi )\).

Lemma 1.4.1

Let \(L\subset \mathbb {R}^n\) be an n dimensional full-rank lattice, \(c,u\in \mathbb {R}^n\), \(|u|=1\), \(0<\epsilon <1\), \(s\geqslant 2\eta _{\epsilon }(L)\), and let \(\xi \) be a random variable with the discrete Gauss distribution \(D_{L,s,c}\). Then we have

$$\begin{aligned} |E[(\xi -c)\cdot u]|\leqslant \frac{\epsilon s}{1-\epsilon }, \end{aligned}$$
(1.4.3)

and

$$\begin{aligned} |E[((\xi -c)\cdot u)^2]-\frac{s^2}{2\pi }|\leqslant \frac{\epsilon s^2}{1-\epsilon }. \end{aligned}$$
(1.4.4)

Proof

Let \(L'=L/s=\{\frac{x}{s}\ |\ x\in L\}\), \(c'=c/s\), and let \(\xi '\) be a random variable with the discrete Gauss distribution \(D_{L',c'}\). For any \(x\in L'\), we have

$$\begin{aligned} Pr\{\xi '=x\}=\frac{\rho _{c'}(x)}{\rho _{c'}(L')}=\frac{\rho _{s,c}(sx)}{\rho _{s,c}(L)}=Pr\{\xi =sx\}. \end{aligned}$$

That is, \(Pr\{\frac{\xi }{s}=x\}=Pr\{\xi '=x\}\), \(\forall x\in L'\), therefore,

$$\begin{aligned} E[(\xi -c)\cdot u]=sE[(\frac{\xi }{s}-c')\cdot u]=sE[(\xi '-c')\cdot u], \end{aligned}$$

the inequality (1.4.3) is equivalent to

$$\begin{aligned} |E[(\xi '-c')\cdot u]|\leqslant \frac{\epsilon }{1-\epsilon }. \end{aligned}$$
(1.4.5)

Similarly, the inequality (1.4.4) is equivalent to

$$\begin{aligned} |E[((\xi '-c')\cdot u)^2]-\frac{1}{2\pi }|\leqslant \frac{\epsilon }{1-\epsilon }. \end{aligned}$$
(1.4.6)

So we only need to prove the two inequalities for \(s=1\). Let \(\xi \) be a random variable with the discrete Gauss distribution \(D_{L,c}\); the condition \(s\geqslant 2\eta _{\epsilon }(L)\) in Lemma 1.4.1 becomes \(\eta _{\epsilon }(L)\leqslant \frac{1}{2}\). We first prove that the two inequalities (1.4.5) and (1.4.6) hold for \(u=(1,0,\dots ,0)\). For a positive integer j, let

$$\begin{aligned} g_j(x)=(x_1-c_1)^j \rho _c(x), \end{aligned}$$

where \(x=(x_1,x_2,\dots ,x_n)\), \(c=(c_1,c_2,\dots ,c_n)\). Let \(\xi =(\xi _1,\xi _2,\dots ,\xi _n)\), then

$$\begin{aligned} E[((\xi -c)\cdot u)^j]=E[(\xi _1-c_1)^j]=\frac{g_j(L)}{\rho _c(L)}. \end{aligned}$$

Based on Lemma 1.3.2,

$$\begin{aligned} E[((\xi -c)\cdot u)^j]=\frac{g_j(L)}{\rho _c(L)}=\frac{\text {det}(L^*)\hat{g}_j(L^*)}{\text {det}(L^*)\hat{\rho }_c(L^*)}=\frac{\hat{g}_j(L^*)}{\hat{\rho }_c(L^*)}. \end{aligned}$$
(1.4.7)

In order to estimate \(\hat{\rho }_c(L^*)\), from Lemma 1.2.1 we get \(\hat{\rho }_c(x)={\text {e}}^{-2\pi i x\cdot c} \rho (x)\), thus, \(|\hat{\rho }_c(x)|=\rho (x)\), note that \(\eta _{\epsilon }(L)\leqslant \frac{1}{2}<1\),

$$\begin{aligned} |\hat{\rho }_c(L^*)|=|1+\sum \limits _{x\in L^*\backslash \{0\}} \hat{\rho }_c(x)|\geqslant 1-\sum \limits _{x\in L^*\backslash \{0\}} |\hat{\rho }_c(x)|= 1-\rho (L^*\backslash \{0\})\geqslant 1-\epsilon . \end{aligned}$$
(1.4.8)

To estimate \(\hat{g}_j(L^*)\), let \(\rho _{c}^{(j)}(x)\) be the j-th order partial derivative of \(\rho _c(x)\) with respect to the first variable \(x_1\), i.e.

$$\begin{aligned} \rho _{c}^{(j)}(x)=(\frac{\partial }{\partial x_1})^j \rho _c(x). \end{aligned}$$

For \(j=1,2\), a direct computation gives

$$\begin{aligned} \rho _{c}^{(1)}(x)=-2\pi (x_1-c_1) \rho _c(x). \end{aligned}$$
$$\begin{aligned} \rho _{c}^{(2)}(x)=(4\pi ^2 (x_1-c_1)^2-2\pi ) \rho _c(x). \end{aligned}$$

It follows that

$$\begin{aligned} g_1(x)=-\frac{1}{2\pi } \rho _{c}^{(1)}(x). \end{aligned}$$
$$\begin{aligned} g_2(x)=\frac{1}{4\pi ^2} \rho _{c}^{(2)}(x)+\frac{1}{2\pi }\rho _c(x). \end{aligned}$$

Since \(\widehat{\rho _{c}^{(j)}}(x)=(2\pi i x_1)^j \hat{\rho }_c(x)\), we have

$$\begin{aligned} \hat{g}_1(x)=-i x_1 \hat{\rho }_c(x). \end{aligned}$$
$$\begin{aligned} \hat{g}_2(x)=(\frac{1}{2\pi }-x_1^2) \hat{\rho }_c(x). \end{aligned}$$

According to the inequality \(|x_1|\leqslant |x|\leqslant {\text {e}}^{\frac{|x|^2}{2}}\) and \(\eta _{\epsilon }(L)\leqslant \frac{1}{2}\),

$$\begin{aligned} |\hat{g}_1(L^*)|\leqslant \sum \limits _{x\in L^*} |x_1|\cdot |\hat{\rho }_c(x)|=\sum \limits _{x\in L^*\backslash \{0\}} |x_1|\rho (x)\leqslant \sum \limits _{x\in L^*\backslash \{0\}} {\text {e}}^{\frac{|x|^2}{2}} {\text {e}}^{-\pi |x|^2} \end{aligned}$$
$$\begin{aligned} \ \leqslant \sum \limits _{x\in L^*\backslash \{0\}} {\text {e}}^{-\frac{\pi }{4}|x|^2}=\rho _2(L^*\backslash \{0\})\leqslant \epsilon .\qquad \qquad \qquad \quad \end{aligned}$$
(1.4.9)

Combining (1.4.7), (1.4.8) and (1.4.9) together,

$$\begin{aligned} |E[(\xi -c)\cdot u]|=\frac{|\hat{g}_1(L^*)|}{|\hat{\rho }_c(L^*)|} \leqslant \frac{\epsilon }{1-\epsilon }. \end{aligned}$$

For a general unit vector \(u\in \mathbb {R}^n\), there exists an orthogonal matrix \(M\in \mathbb {R}^{n\times n}\) such that \(M^{-1}u=(1,0,\dots ,0)\). Denote by \(\eta \) a random variable with the discrete Gauss distribution \(D_{M^{-1}L,M^{-1}c}\); then for any \(x\in L\),

$$\begin{aligned} Pr\{\eta =M^{-1}x\}=\frac{\rho _{M^{-1}c}(M^{-1}x)}{\rho _{M^{-1}c}(M^{-1}L)}=\frac{{\text {e}}^{-\pi |M^{-1}x-M^{-1}c|^2}}{\rho _{M^{-1}c}(M^{-1}L)}\qquad \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \qquad \qquad \qquad =\frac{{\text {e}}^{-\pi |x-c|^2}}{\rho _{c}(L)}=Pr\{\xi =x\}=Pr\{M^{-1}\xi =M^{-1}x\}, \end{aligned}$$

which implies that the distributions of \(\eta \) and \(M^{-1}\xi \) are the same. Since \(M^{-1}\) is orthogonal and preserves inner products, applying the proved case to the lattice \(M^{-1}L\) with the unit vector \(M^{-1}u=(1,0,\dots ,0)\) gives

$$\begin{aligned} |E[(\xi -c)\cdot u]|=|E[M^{-1}(\xi -c)\cdot M^{-1}u]|=|E[(\eta -M^{-1}c) \cdot M^{-1}u]|\leqslant \frac{\epsilon }{1-\epsilon }. \end{aligned}$$

In summary, inequality (1.4.3) holds, and inequality (1.4.4) can be proved in the same way. We complete the proof of Lemma 1.4.1.    \(\square \)
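Both moment bounds of Lemma 1.4.1 can be checked exactly (up to truncation) for \(L=\mathbb {Z}\), taking \(\epsilon =\rho _{1/(s/2)}(\mathbb {Z}\backslash \{0\})\) so that \(s\geqslant 2\eta _{\epsilon }(\mathbb {Z})\) holds by construction. A sketch (the helper name is ours):

```python
import math

def moments_discrete_gauss(s, c, K=50):
    # exact (truncated) first two central moments of D_{Z,s,c}
    w = {k: math.exp(-math.pi * (k - c) ** 2 / (s * s)) for k in range(-K, K + 1)}
    Z = sum(w.values())
    m1 = sum((k - c) * w[k] for k in w) / Z       # E[(xi - c)]
    m2 = sum((k - c) ** 2 * w[k] for k in w) / Z  # E[((xi - c))^2]
    return m1, m2

s, c = 2.0, 0.3
# eps chosen so that eta_eps(Z) <= s/2: eps = rho_{1/(s/2)}(Z \ {0})
eps = 2 * sum(math.exp(-math.pi * (s / 2) ** 2 * k * k) for k in range(1, 50))
m1, m2 = moments_discrete_gauss(s, c)
assert abs(m1) <= eps * s / (1 - eps)                              # (1.4.3)
assert abs(m2 - s * s / (2 * math.pi)) <= eps * s * s / (1 - eps)  # (1.4.4)
```

In this example the actual deviations are several orders of magnitude below the bounds.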

Lemma 1.4.2

For any n dimensional full-rank lattice \(L\subset \mathbb {R}^n\), \(c\in \mathbb {R}^n\), \(0<\epsilon <1\), \(s\geqslant 2\eta _{\epsilon }(L)\), \(\xi \) is a random variable from the discrete Gauss distribution \(D_{L,s,c}\), then we have

$$\begin{aligned} |E[\xi -c]|^2 \leqslant (\frac{\epsilon }{1-\epsilon })^2\,s^2 n, \end{aligned}$$
(1.4.10)

and

$$\begin{aligned} E[|\xi -c|^2]\leqslant (\frac{1}{2\pi }+\frac{\epsilon }{1-\epsilon }) s^2 n. \end{aligned}$$
(1.4.11)

Proof

Let \(u_1,u_2,\dots ,u_n\) be the n unit column vectors of the \(n\times n\) identity matrix \(I_n\). By Lemma 1.4.1,

$$\begin{aligned} |E[\xi -c]|^2=\sum \limits _{i=1}^{n} (E[(\xi -c)\cdot u_i])^2 \leqslant (\frac{\epsilon }{1-\epsilon })^2\,s^2 n. \end{aligned}$$
$$\begin{aligned} E[|\xi -c|^2]=\sum \limits _{i=1}^{n} E[((\xi -c)\cdot u_i)^2] \leqslant (\frac{1}{2\pi }+\frac{\epsilon }{1-\epsilon }) s^2 n. \end{aligned}$$

Lemma 1.4.2 holds.    \(\square \)

Lemma 1.4.3

Let \(L\subset \mathbb {R}^n\) be an n dimensional full-rank lattice, \(v\in \mathbb {R}^n\), \(0<\epsilon <1\), \(s\geqslant \eta _{\epsilon }(L)\), and let \(\xi \) be a random variable with the discrete Gauss distribution \(D_{L,s,v}\). Then we have

$$\begin{aligned} Pr\{|\xi -v|>s\sqrt{n}\}\leqslant \frac{1+\epsilon }{1-\epsilon }2^{-n}. \end{aligned}$$
(1.4.12)

Proof

As in the proof of Lemma 1.4.1, we only need to prove the case \(s=1\). Since

$$\begin{aligned} Pr\{|\xi -v|>\sqrt{n}\}=\sum \limits _{x\in L,|x-v|>\sqrt{n}} \frac{\rho _v(x)}{\rho _v(L)} \end{aligned}$$
$$\begin{aligned} =\sum \limits _{x\in L, |x-v|>\sqrt{n}} \frac{\rho (x-v)}{\rho _v(L)}=\frac{\rho ((L-v)\backslash \sqrt{n}N)}{\rho _v(L)}, \end{aligned}$$

taking \(c=1\) in Lemma 1.3.4 and noting that \(2C^n<2\cdot 4^{-n}\leqslant 2^{-n}\) (since \(C<\frac{1}{4}\) by (1.3.12)), we get

$$\begin{aligned} \rho ((L-v)\backslash \sqrt{n}N)<2^{-n} \rho (L). \end{aligned}$$

That is,

$$\begin{aligned} Pr\{|\xi -v|>\sqrt{n}\}<2^{-n}\frac{\rho (L)}{\rho _v(L)}. \end{aligned}$$
(1.4.13)

Based on Lemma 1.3.2, Lemma 1.2.1 and \(\eta _{\epsilon }(L)\leqslant 1\),

$$\begin{aligned} \rho _v(L)=|\rho _v(L)|=|\text {det}(L^*)\hat{\rho }_v(L^*)|=|\text {det}(L^*)\sum \limits _{x\in L^*} {\text {e}}^{-2\pi i x\cdot v} \rho (x)|\qquad \qquad \end{aligned}$$
$$\begin{aligned} \qquad \quad \ \ \geqslant |\text {det}(L^*)|(1-\sum \limits _{x\in L^*\backslash \{0\}} |{\text {e}}^{-2\pi i x\cdot v} \rho (x)|)= |\text {det}(L^*)|(1-\sum \limits _{x\in L^*\backslash \{0\}} \rho (x)) \end{aligned}$$
$$\begin{aligned} =|\text {det}(L^*)| (1-\rho (L^*\backslash \{0\}))\geqslant |\text {det}(L^*)| (1-\epsilon ).\qquad \quad \end{aligned}$$
(1.4.14)

Similarly,

$$\begin{aligned} \rho (L)=|\rho (L)|=|\text {det}(L^*)\hat{\rho }(L^*)|\qquad \qquad \qquad \qquad \qquad \quad \end{aligned}$$
$$\begin{aligned} \qquad =|\text {det}(L^*)\sum \limits _{x\in L^*} \rho (x)|=|\text {det}(L^*)|(1+\sum \limits _{x\in L^*\backslash \{0\}} \rho (x)) \end{aligned}$$
$$\begin{aligned} =|\text {det}(L^*)|(1+\rho (L^*\backslash \{0\}))\leqslant |\text {det}(L^*)|(1+\epsilon ). \end{aligned}$$
(1.4.15)

Combining (1.4.13), (1.4.14) and (1.4.15) together, it follows that

$$\begin{aligned} Pr\{|\xi -v|>\sqrt{n}\}\leqslant \frac{1+\epsilon }{1-\epsilon }2^{-n}. \end{aligned}$$

This lemma holds.    \(\square \)
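The tail bound (1.4.12) can be checked numerically for \(n=1\), \(L=\mathbb {Z}\), \(s=1\), taking \(\epsilon =\rho (\mathbb {Z}\backslash \{0\})\) so that \(s\geqslant \eta _{\epsilon }(\mathbb {Z})\) holds by construction. A sketch:

```python
import math

# Lemma 1.4.3 for n = 1, L = Z, s = 1, v = 0.3:
#   Pr{|xi - v| > sqrt(n)} <= ((1 + eps) / (1 - eps)) * 2^{-n}
v, K = 0.3, 50
eps = 2 * sum(math.exp(-math.pi * k * k) for k in range(1, K))  # rho(Z \ {0})
w = {k: math.exp(-math.pi * (k - v) ** 2) for k in range(-K, K + 1)}
Z = sum(w.values())
tail = sum(w[k] for k in w if abs(k - v) > 1.0) / Z
assert tail <= (1 + eps) / (1 - eps) * 0.5
```

Here the actual tail probability is about \(5\times 10^{-3}\), well under the bound.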

For \(x\in \mathbb {R}^n\) and a set \(A\subset \mathbb {R}^n\), we define the distance from x to A as \(\text {dist}(x,A)=\min \limits _{y\in A} |x-y|\).

Lemma 1.4.4

Let \(L\subset \mathbb {R}^n\) be an n dimensional full-rank lattice, \(c,v\in \mathbb {R}^n\), \(0<\epsilon <1\), \(s\geqslant \eta _{\epsilon }(L)\), let \(\xi \) be a random variable with the discrete Gauss distribution \(D_{L,s,c}\), and suppose \(\text {dist}(v,L^*)\geqslant \frac{\sqrt{n}}{s}\). Then

$$\begin{aligned} |E[{\text {e}}^{2\pi i \xi \cdot v}]|\leqslant \frac{1+\epsilon }{1-\epsilon }2^{-n}. \end{aligned}$$
(1.4.16)

Proof

As in the proof of Lemma 1.4.1, we only need to prove the case \(s=1\). Let

$$\begin{aligned} g(x)={\text {e}}^{2\pi i x\cdot v}\rho _c(x). \end{aligned}$$

By Lemma 1.3.2,

$$\begin{aligned} E[{\text {e}}^{2\pi i \xi \cdot v}]=\frac{g(L)}{\rho _c(L)}=\frac{\text {det}(L^*)\hat{g}(L^*)}{\text {det}(L^*)\hat{\rho }_c(L^*)}=\frac{\hat{g}(L^*)}{\hat{\rho }_c(L^*)}. \end{aligned}$$

We have proved \(|\hat{\rho }_c(L^*)|\geqslant 1-\epsilon \) in Lemma 1.4.1. Based on (iii) of Lemma 1.1.2 and Lemma 1.2.1,

$$\begin{aligned} \hat{g}(x)=\hat{\rho }_c(x-v)=\rho (x-v){\text {e}}^{-2\pi i (x-v)\cdot c}, \end{aligned}$$

therefore,

$$\begin{aligned} |\hat{g}(L^*)|=|\sum \limits _{x\in L^*} \rho (x-v){\text {e}}^{-2\pi i (x-v)\cdot c}|\leqslant \sum \limits _{x\in L^*}\rho (x-v)=\rho (L^*-v). \end{aligned}$$

Since \(\text {dist}(v,L^*)\geqslant \sqrt{n}\), we know

$$\begin{aligned} \rho (L^*-v)=\rho ((L^*-v)\backslash \sqrt{n}N). \end{aligned}$$

Taking \(c=1\) in Lemma 1.3.4 and noting \(2C^n\leqslant 2^{-n}\), we get

$$\begin{aligned} \rho ((L^*-v)\backslash \sqrt{n}N)<2^{-n} \rho (L^*)=2^{-n}(1+\rho (L^*\backslash \{0\}))\leqslant 2^{-n}(1+\epsilon ). \end{aligned}$$

In summary,

$$\begin{aligned} |E[{\text {e}}^{2\pi i \xi \cdot v}]|=|\frac{\hat{g}(L^*)}{\hat{\rho }_c(L^*)}|\leqslant \frac{1+\epsilon }{1-\epsilon }2^{-n}. \end{aligned}$$

We complete the proof of Lemma 1.4.4.    \(\square \)

Lemma 1.4.5

Let \(L\subset \mathbb {R}^n\) be an n dimensional full-rank lattice, \(w,c,v\in \mathbb {R}^n\), \(0<\epsilon <1\), \(s\geqslant \eta _{\epsilon }(L)\), let \(\xi \) be a random variable with the discrete Gauss distribution \(D_{L,s,c}\), and suppose \(\text {dist}(v,L^*)\geqslant \frac{\sqrt{n}}{s}\). Then

$$\begin{aligned} |E[\cos (2\pi (\xi +w)\cdot v)]|\leqslant \frac{1+\epsilon }{1-\epsilon }2^{-n}. \end{aligned}$$
(1.4.17)

Proof

By Lemma 1.4.4 we have

$$\begin{aligned} |E[\cos (2\pi (\xi +w)\cdot v)]|\leqslant |E[{\text {e}}^{2\pi i (\xi +w)\cdot v}]|=|E[{\text {e}}^{2\pi i \xi \cdot v}]|\leqslant \frac{1+\epsilon }{1-\epsilon }2^{-n}. \end{aligned}$$

Lemma 1.4.5 holds.    \(\square \)

Finally, we give a lemma which will be used in the next chapter.

Lemma 1.4.6

Let \(v_1,v_2,\dots ,v_m\) be m independent random variables on \(\mathbb {R}^n\) such that \(E[|v_i|^2]\leqslant l\) and \(|E[v_i]|^2\leqslant \epsilon \) for \(i=1,2,\dots ,m\). Then for any \(z=(z_1,z_2,\dots ,z_m)^T \in \mathbb {R}^m\),

$$\begin{aligned} E[|\sum \limits _{i=1}^m z_i v_i|^2]\leqslant (l+m\epsilon )|z|^2. \end{aligned}$$
(1.4.18)

Proof

By the Cauchy inequality we get \(\sum \nolimits _{i=1}^m |z_i|\leqslant \sqrt{m}|z|\), so

$$\begin{aligned} E[|\sum \limits _{i=1}^m z_i v_i|^2]=\sum \limits _{i,j}z_i z_j E[v_i\cdot v_j]=\sum \limits _i z_i^2 E[|v_i|^2]+\sum \limits _{i\ne j} z_i z_j E[v_i]\cdot E[v_j]. \end{aligned}$$
(1.4.19)

The first term on the right-hand side of (1.4.19) satisfies

$$\begin{aligned} \sum \limits _i z_i^2 E[|v_i|^2]\leqslant \sum \limits _i z_i^2 l=l|z|^2. \end{aligned}$$

The second term on the right-hand side of (1.4.19) satisfies

$$\begin{aligned} \sum \limits _{i\ne j} z_i z_j E[v_i]\cdot E[v_j]\leqslant \sum \limits _{i\ne j} |z_i||z_j|\cdot \frac{1}{2}(|E[v_i]|^2+|E[v_j]|^2) \end{aligned}$$
$$\begin{aligned} \qquad \qquad \qquad \qquad \quad \leqslant \sum \limits _{i\ne j} \epsilon |z_i||z_j|\leqslant \epsilon (\sum \limits _i |z_i|)^2\leqslant m\epsilon |z|^2. \end{aligned}$$

From (1.4.19) it follows that

$$\begin{aligned} E[|\sum \limits _{i=1}^m z_i v_i|^2]\leqslant (l+m\epsilon ) |z|^2. \end{aligned}$$

This lemma holds.    \(\square \)
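Lemma 1.4.6 can be checked deterministically with scalar random variables, expanding \(E[|\sum z_iv_i|^2]\) by independence exactly as in (1.4.19). In the sketch below \(v_i=\pm 1\) with \(Pr\{v_i=1\}=p\) (our choice of toy distribution), so \(l=1\) and \(\epsilon =(2p-1)^2\):

```python
# Deterministic check of Lemma 1.4.6 with scalar (n = 1) random variables:
# v_i = +1 with probability p, -1 with probability 1-p, independent, so
# E[|v_i|^2] = 1 =: l and |E[v_i]|^2 = (2p-1)^2 =: eps.
p = 0.6
l, eps = 1.0, (2 * p - 1) ** 2
z = [1.0, -2.0, 0.5, 3.0]
m = len(z)
mean = 2 * p - 1
# expand E[|sum z_i v_i|^2] via independence, as in (1.4.19)
second_moment = sum(zi * zi for zi in z) * l + \
    sum(z[i] * z[j] for i in range(m) for j in range(m) if i != j) * mean * mean
norm_sq = sum(zi * zi for zi in z)
assert second_moment <= (l + m * eps) * norm_sq   # inequality (1.4.18)
```

Here the exact second moment is \(14.25-8\cdot 0.04=13.93\), against the bound \((1+4\cdot 0.04)\cdot 14.25=16.53\).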