1 Introduction

Oblivious transfer (OT), introduced by Rabin [32], is one of the most fundamental cryptographic tasks. A sender (S) holds two values \(\mu _0, \mu _1\) and a receiver (R) holds a bit \(\beta \). The functionality should allow the receiver to learn \(\mu _\beta \) and nothing else, while the sender learns nothing. OT has been a fundamental building block for many cryptographic applications, in particular ones related to secure multi-party computation (MPC), starting with [15, 35].

A central measure for the complexity of a protocol or a proof system is its round complexity. One could imagine a protocol implementing the OT functionality with only two messages: a first message from the receiver to the sender, and a second message from the sender to the receiver. Indeed, in the semi-honest setting, where parties are assumed to follow the protocol, this can be achieved based on a variety of concrete cryptographic assumptions (Decisional Diffie-Hellman, Quadratic Residuosity, Decisional Composite Residuosity, Learning with Errors, to name a few), as well as based on generic assumptions such as trapdoor permutations, additively homomorphic encryption and public key encryption with oblivious public key generation (e.g. [7, 13]).

In the malicious setting, where an adversarial party might deviate from the designated protocol, the ultimate simulation-based security notion cannot be achieved in a two-message protocol (without assuming setup such as a common random string or a random oracle) [16]. The standard security notion in this setting, which originated from the works of Naor and Pinkas [27] and Aiello et al. [1], and was further studied in [3, 18, 21], provides a meaningful relaxation of the standard (simulation-based) security notion. This definition requires that the receiver’s only message is computationally indistinguishable between the cases of \(\beta =0\) and \(\beta =1\), and that regardless of the receiver’s first message, the sender’s message statistically hides at least one of \(\mu _0, \mu _1\). Alternative equivalent formulations are simulation using a computationally unbounded (or exponential time) simulator, or the existence of a computationally unbounded (or exponential time) extractor that can extract a \(\beta \) value from any receiver message.

With the aforementioned connection to secure MPC, it is not surprising that this notion of malicious statistical sender-private OT (SSP-OT) found numerous applications, in particular in recent years, as the round complexity of MPC and related objects is pushed to the necessary minimum. Badrinarayanan et al. [3], Jain et al. [19] and Kalai et al. [22] used it to construct two-message witness indistinguishable proof systems, and even restricted forms of zero-knowledge proof systems.

Badrinarayanan et al. [4] used similar techniques to present malicious MPC with minimal round complexity (4-rounds). In particular, their building blocks are SSP-OT and a 3-round semi-malicious MPC protocol (a comparable result was achieved by Halevi et al. [17] using different techniques, in particular requiring NIZK/ZAP). Khurana and Sahai [24] used SSP-OT to construct two-message non-malleable commitment schemes (with respect to the commitment), and Khurana [23] used it (together with ZAPs) to achieve 3-round non-malleable commitments from polynomial assumptions. Badrinarayanan et al. [5] relied on SSP-OT to construct 3-round concurrent MPC.

Ostrovsky, Paskin-Cherniavsky and Paskin-Cherniavsky [28] used SSP-OT to show that any fully homomorphic encryption scheme (FHE) can be converted to one that is statistically circuit private even against maliciously generated public keys and ciphertexts.

Our Results and Applications. Prior to this work it was only known how to construct SSP-OT from number theoretic assumptions such as DDH [1, 27], QR and DCR [18]. If setup is allowed, specifically a common random string, then an LWE-based construction by Peikert, Vaikuntanathan and Waters [31] achieves strong simulation security (even in the UC model). However, the aforementioned applications require a construction without setup and could therefore not be instantiated in a post-quantum secure manner. In this work, we construct SSP-OT from the learning with errors (LWE) assumption [33] with polynomial noise ratio, which translates to the hardness of approximating short-vector problems (such as SIVP or GapSVP) to within a polynomial factor. Currently, no polynomial time quantum algorithm is known for these problems, and thus they serve as a major candidate for constructing post-quantum secure cryptography.

Relying on our construction, it is possible, for the first time, to instantiate the works of [3, 5, 19, 22, 24] from LWE, i.e. in a post-quantum secure manner, and obtain proof systems with witness-indistinguishable or (limited) zero-knowledge properties, as well as non-malleable commitment schemes and concurrent MPC protocols. It is also possible to construct a round-optimal malicious MPC from LWE by applying the result of [4] using our SSP-OT and the LWE-based 3-round semi-malicious MPC of Brakerski et al. [8]. Lastly, our result makes it possible to achieve malicious circuit private FHE from LWE by instantiating the [28] result with our LWE-based SSP-OT and relying on the numerous existing LWE-based FHE schemes. We stress that none of these applications had prior post-quantum secure candidates.

1.1 Technical Overview

Our construction relies on some fundamental properties of lattices. For our purposes we will only consider so-called q-ary lattices, which can be described as follows. Given a matrix \(\mathbf {{A}}\in \mathbb {Z}_q^{n \times m}\) for some modulus q and \(m \ge n\), we can define \(\varLambda _q(\mathbf {{A}}) = \{ \mathbf {{y}} \in \mathbb {Z}^m : \mathbf {{y}} = \mathbf {{s}}\mathbf {{A}} \pmod {q} \}\) which is the lattice defined by the row-span of \(\mathbf {{A}}\), and \(\varLambda ^{\perp }_q(\mathbf {{A}}) = \{ \mathbf {{x}} \in \mathbb {Z}^m : \mathbf {{A}}\mathbf {{x}} = \mathbf {{0}} \pmod {q} \}\) which is the lattice defined by the kernel of \(\mathbf {{A}}\). Note that both lattices have rank m over the integers, i.e. they contain a set of m linearly independent vectors over the integers (but not modulo q), since they contain \(q \cdot \mathbb {Z}^m\). There is a duality relation between these two lattices, both induced by the matrix \(\mathbf {{A}}\), and this relation will be instrumental for our methods.
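As a concrete illustration of these definitions (a toy example with made-up parameters, unrelated to the actual construction), the following sketch checks membership in \(\varLambda _q(\mathbf {{A}})\) and \(\varLambda ^{\perp }_q(\mathbf {{A}})\) by brute-force enumeration, which is of course only feasible at toy sizes:

```python
# Toy q-ary lattice membership checks; q, A are arbitrary illustrative values.
q = 7
A = [[1, 2, 3],
     [4, 5, 6]]   # A in Z_q^{n x m} with n = 2, m = 3
n, m = 2, 3

def in_row_span_lattice(y):
    """y in Lambda_q(A) iff y = s*A (mod q) for some s in Z_q^n."""
    for s0 in range(q):
        for s1 in range(q):
            if all((s0 * A[0][j] + s1 * A[1][j] - y[j]) % q == 0 for j in range(m)):
                return True
    return False

def in_kernel_lattice(x):
    """x in Lambda_q^perp(A) iff A*x = 0 (mod q)."""
    return all(sum(A[i][j] * x[j] for j in range(m)) % q == 0 for i in range(n))

# q * Z^m lies in both lattices, which is why both have rank m over Z.
assert in_row_span_lattice([q, 0, 0]) and in_kernel_lattice([q, 0, 0])
# s = (1, 1) gives the lattice point (5, 7, 9) in Lambda_q(A).
assert in_row_span_lattice([5, 7, 9])
# (1, -2, 1) is in the kernel lattice: A * (1, -2, 1) = 0 (even over Z).
assert in_kernel_lattice([1, -2, 1])
```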

An important fact about lattices is that a good basis enables decoding. Specifically, if \(\varLambda ^{\perp }_q(\mathbf {{A}})\) contains m linearly independent vectors (over the integers) of length at most \(\ell \), then it is possible to decode vectors of the form \(\mathbf {{s}}\mathbf {{A}}+\mathbf {{e}}\pmod {q}\), namely to recover \(\mathbf {{s}}, \mathbf {{e}}\), provided \(\left\| {\mathbf {{e}}} \right\| \) is sufficiently smaller than \(q/\ell \). Such a short basis is sometimes called a trapdoor for \(\mathbf {{A}}\).

Consider sampling \(\mathbf {{s}}\) uniformly in \(\mathbb {Z}_q^n\) and \(\mathbf {{e}}\) from a Gaussian s.t. \(\left\| {\mathbf {{e}}} \right\| \) is slightly below the decoding capability \(q/\ell \). If \(\varLambda ^{\perp }_q(\mathbf {{A}})\) indeed has an \(\ell \)-basis, then \(\mathbf {{s}}, \mathbf {{e}}\) can be recovered from \(\mathbf {{s}}\mathbf {{A}}+\mathbf {{e}}\pmod {q}\). However, a critical observation for us is that this encoding becomes lossy if the lattice \(\varLambda _q(\mathbf {{A}})\) contains a vector of norm \({\ll } q/\ell \). That is, in this case it is information theoretically impossible to recover the original \(\mathbf {{s}}\). This is because the component of \(\mathbf {{s}}\mathbf {{A}}\) that is in the direction of the short vector is masked by the noise \(\mathbf {{e}}\) (which is Gaussian and thus has a component in every direction). This property was also used by Goldreich and Goldwasser [14] to show that some lattice problems are in \(\mathbf {coAM}\).

To utilize this structure for our purposes, we specify the OT receiver message to be a matrix \(\mathbf {{A}}\). Then the OT sender generates \(\mathbf {{s}}\mathbf {{A}}+\mathbf {{e}}\pmod {q}\) and encodes one of its inputs, say \(\mu _1\), using entropy from the vector \(\mathbf {{s}}\) (e.g. using a randomness extractor). We get that this value is recoverable if \(\mathbf {{A}}\) has an \(\ell \)-basis and information-theoretically hidden if \(\varLambda _q(\mathbf {{A}})\) has a short vector. If the receiver’s choice bit is 1, all it needs to do is generate \(\mathbf {{A}}\) with an \(\ell \)-trapdoor; there are many well-known methods to generate such \(\mathbf {{A}}\)’s that are statistically indistinguishable from uniform (starting from [2] with numerous followups). In order to complete the OT functionality we need to find a way to encode \(\mu _0\) in a way that is lossy if \(\varLambda _q(\mathbf {{A}})\) has no short vector. This will guarantee that regardless of the (possibly malicious) choice of matrix \(\mathbf {{A}}\), either \(\mu _0\) or \(\mu _1\) is information theoretically hidden.

Let us examine the case where all vectors in \(\varLambda _q(\mathbf {{A}})\) are of length \({\gg } t\) for some parameter t. Then the duality relations expressed in Banaszczyk’s transference theorems [6] guarantee that \(\varLambda ^{\perp }_q(\mathbf {{A}})\) has a basis of length \({\ll } q/t\). In this case we can use the smoothing principle to conclude that if \(\mathbf {{x}}\) is a discrete Gaussian with parameter q / t then \(\mathbf {{A}}\mathbf {{x}}\pmod {q}\) is statistically close to uniform. We can thus instruct the sender to compute \(\mathbf {{A}}\mathbf {{x}} + \mathbf {{d}}\pmod {q}\) for some vector \(\mathbf {{d}}\), and encode \(\mu _0\) using entropy extracted from \(\mathbf {{d}}\). This guarantees lossiness if \(\varLambda _q(\mathbf {{A}})\) has no short vectors, as required. Furthermore, it is possible to generate a pseudorandom \(\mathbf {{A}}\) (under the LWE assumption) and specify \(\mathbf {{d}}\) such that \(\mathbf {{d}}\) is recoverable (this \(\mathbf {{A}}\) corresponds to the public key in Regev’s original encryption scheme [33]).

All that is left is to set the relation between \(\ell , t, q\) so as to make sure that if one mode of the OT is decodable then the other is lossy. One may wonder whether a valid setting of parameters exists, but in fact there is quite some slack in the choice of parameters. We can start by setting \(\ell , t\) to be fixed polynomials in n that suffice to guarantee correct recovery in the respective cases. This can be done regardless of the value of q. We will then set the parameter q to ensure that if \(\mu _1\) is recoverable then \(\mu _0\) is not, which is sufficient to guarantee statistical sender privacy against a malicious receiver. Specifically, if \(\mu _1\) is recoverable then \(\varLambda _q(\mathbf {{A}})\) does not have vectors of length \(q/(k \ell )\), where k is some polynomial in n (that does not depend on q), and thus \(\varLambda ^{\perp }_q(\mathbf {{A}})\) has a \(k \ell \) basis. We therefore require that \(q/t \gg k\ell \), or equivalently \(q \gg k \ell t\), which guarantees that \(\mu _0\) is not recoverable in this case. Since \(k, \ell , t\) are fixed polynomials in n, it is sufficient to choose q to be a polynomial sufficiently larger than the product \(k \ell t\) to guarantee security. Receiver privacy is guaranteed since \(\mathbf {{A}}\) is statistically indistinguishable from uniform if the choice bit \(\beta \) is 1, and computationally indistinguishable from uniform if \(\beta = 0\).
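The parameter relations above can be sketched numerically. The concrete polynomials below (\(\ell = t = n^2\), \(k = n\), \(q = n \cdot k \ell t\)) are placeholders of our own choosing for illustration, not the paper’s actual values:

```python
# Illustrative parameter check: ell, t, k are fixed polynomials in n
# (placeholder choices), and q is chosen polynomially larger than k*ell*t.

def pick_params(n):
    ell = n ** 2           # decoding bound for the beta = 1 branch (placeholder)
    t = n ** 2             # shortness threshold for the beta = 0 branch (placeholder)
    k = n                  # slack factor from the transference argument (placeholder)
    q = n * (k * ell * t)  # q >> k * ell * t, still polynomial in n
    return ell, t, k, q

n = 64
ell, t, k, q = pick_params(n)
# If mu_1 is recoverable, Lambda_q(A) has no vector shorter than q/(k*ell),
# so Lambda_q^perp(A) has a basis of length <= k*ell; smoothing for the
# mu_0 branch then needs q/t >> k*ell:
assert q / t > k * ell
```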

Disadvantages of the Basic Solution, and Our Actual Improved Scheme. The proposal above can indeed be used to implement an SSP-OT. However, when actual parameters are assigned, it becomes apparent that the argument about the lossiness of \(\mathbf {{s}}\) given \(\mathbf {{s}}\mathbf {{A}}+\mathbf {{e}}\pmod {q}\) when \(\varLambda _q(\mathbf {{A}})\) has some short vector does not produce sufficient randomness to allow extraction. This can be resolved by repetition (many \(\mathbf {{s}}\) values with the same \(\mathbf {{A}}\)). However, the lossiness argument for \(\mathbf {{d}}\) guarantees much more, and in fact allows extracting random bits from \(\mathbf {{d}}\) deterministically. The consequence is an unnecessarily inefficient scheme. In particular, the information rate is inverse polynomial in the security parameter of the scheme.

The scheme we actually introduce and analyze is therefore a balanced version of the above outline, where we “pay” in weakening the lossiness in \(\mathbf {{d}}\) in exchange for strengthening the lossiness for \(\mathbf {{s}}\), which leads to a scheme with information rate \(\widetilde{\varOmega }(1)\) (achieving constant information rate while preserving statistical security remains an intriguing question). Towards this end, we introduce refinements of known lattice tools that may be of independent interest.

The idea is to improve the lossiness in \(\mathbf {{s}}\) by considering the case where \(\varLambda _q(\mathbf {{A}})\) has multiple short vectors, instead of just one. Intuitively, this will introduce entropy into additional components of \(\mathbf {{s}}\), thus increasing the lossiness. We formalize this by considering the Gaussian measure of \(\varLambda _q(\mathbf {{A}})\). A high Gaussian measure translates (at least intuitively) to the existence of a multitude of short vectors; formally, it characterizes the potency of \(\mathbf {{e}}\) to hide information about \(\mathbf {{s}}\). The formal argument goes through the optimal Voronoi cell decoder, see Sect. 3 for a formal statement and additional details.

Of course the lossiness in \(\mathbf {{s}}\) needs to be complemented by lossiness in \(\mathbf {{d}}\) if the Gaussian measure of \(\varLambda _q(\mathbf {{A}})\) is small, which translates to having few independent short vectors in \(\varLambda _q(\mathbf {{A}})\). We show that in this case we can derive partial smoothing where for a Gaussian \(\mathbf {{x}}\), the value \(\mathbf {{A}}\mathbf {{x}}\pmod {q}\) is no longer uniform, but rather is uniform over some subspace modulo q. If the dimension of this subspace is large enough, we can get lossiness for the vector \(\mathbf {{d}}\) and complete the security proof. Partial smoothing and implications are discussed in Sect. 4.

To apply these principles we need to slightly modify the definition of the vector \(\mathbf {{d}}\) and the matrix \(\mathbf {{A}}\) in the case of \(\beta =0\). Now \(\mathbf {{A}}\) will no longer correspond to the public key of the Regev scheme but rather, interestingly, to the public key of the batched scheme introduced in [31] (which is also concerned with constructing OT, but allowing setup). The complete construction and analysis can be found in Sect. 5.

2 Preliminaries

2.1 Statistical Sender-Private Two-Message Oblivious Transfer

We now define the object of main interest in this work, namely SSP-OT. We only define the two-message perfect-correctness variant since this is what we achieve in this work. A two-message oblivious transfer protocol consists of a tuple of ppt algorithms \((\mathsf {OTR}, \mathsf {OTS}, \mathsf {OTD})\) with the following syntax.

  • \(\mathsf {OTR}(1^\lambda , \beta )\) takes the security parameter \(\lambda \) and a selection bit \(\beta \) and outputs a message \(\mathsf {ot_1}\) and secret state \(\mathsf {st}\).

  • \(\mathsf {OTS}(1^\lambda , (\mu _0, \mu _1), \mathsf {ot_1})\) takes the security parameter \(\lambda \), two inputs \((\mu _0, \mu _1) \in \{0,1\}^\mathsf {len}\) (where \(\mathsf {len}\) is a parameter of the scheme) and a message \(\mathsf {ot_1}\). It outputs a message \(\mathsf {ot_2}\).

  • \(\mathsf {OTD}(1^\lambda , \beta , \mathsf {st}, \mathsf {ot_2})\) takes the security parameter, the bit \(\beta \), secret state \(\mathsf {st}\) and message \(\mathsf {ot_2}\) and outputs \(\mu ' \in \{0,1\}^\mathsf {len}\).

Correctness and security are defined as follows.

Definition 2.1

A tuple \((\mathsf {OTR}, \mathsf {OTS}, \mathsf {OTD})\) is a SSP-OT scheme if the following hold.

  • Correctness. For all \(\lambda , \beta , \mu _0, \mu _1\), letting \((\mathsf {ot_1}, \mathsf {st}) = \mathsf {OTR}(1^\lambda , \beta )\), \(\mathsf {ot_2}= \mathsf {OTS}(1^\lambda , (\mu _0, \mu _1), \mathsf {ot_1})\), \(\mu ' = \mathsf {OTD}(1^\lambda , \beta , \mathsf {st}, \mathsf {ot_2})\), it holds that \(\mu ' = \mu _\beta \) with probability 1.

  • Receiver Privacy. Consider the distribution \(\mathcal{D}_\beta (\lambda )\) defined by running \((\mathsf {ot_1}, \mathsf {st}) = \mathsf {OTR}(1^\lambda , \beta )\) and outputting \(\mathsf {ot_1}\). Then \(\mathcal{D}_0, \mathcal{D}_1\) are computationally indistinguishable.

  • Statistical Sender Privacy. There exists an extractor \(\mathsf {OTExt}\) (possibly computationally unbounded) s.t. for any sequence of messages \(\mathsf {ot_1}= \mathsf {ot_1}(\lambda )\) and inputs \((\mu _0, \mu _1) = (\mu _0(\lambda ), \mu _1(\lambda ))\), the distribution ensembles \(\mathsf {OTS}(1^\lambda , (\mu _0, \mu _1), \mathsf {ot_1})\) and \(\mathsf {OTS}(1^\lambda , (\mu _{\beta '}, \mu _{\beta '}), \mathsf {ot_1})\), where \(\beta '=\mathsf {OTExt}(\mathsf {ot_1})\), are statistically indistinguishable.
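To make the syntax concrete, here is a deliberately trivial instantiation of \((\mathsf {OTR}, \mathsf {OTS}, \mathsf {OTD})\) in which the receiver simply sends \(\beta \) in the clear. It satisfies perfect correctness and (trivially) statistical sender privacy with extractor \(\mathsf {OTExt}(\mathsf {ot_1}) = \mathsf {ot_1}\), but it has no receiver privacy whatsoever; it illustrates only the interface and the definitions above, not any scheme from this paper:

```python
import secrets

# Trivial (receiver-INSECURE) instantiation of the (OTR, OTS, OTD) syntax:
# ot1 is the choice bit in the clear. Interface illustration only.

def OTR(beta):
    """Receiver: output first message ot1 and secret state st."""
    return beta, None

def OTS(mu0, mu1, ot1):
    """Sender: output second message ot2 given inputs and ot1."""
    return mu1 if ot1 == 1 else mu0

def OTD(beta, st, ot2):
    """Receiver: decode mu_beta from ot2."""
    return ot2

def OTExt(ot1):
    """Extractor from the sender-privacy definition (here trivially efficient;
    in general it may be computationally unbounded)."""
    return ot1

for beta in (0, 1):
    mu0, mu1 = secrets.token_bytes(8), secrets.token_bytes(8)
    ot1, st = OTR(beta)
    ot2 = OTS(mu0, mu1, ot1)
    assert OTD(beta, st, ot2) == (mu0, mu1)[beta]  # perfect correctness
    # Sender privacy: ot2 matches a run on (mu_b', mu_b') with b' = OTExt(ot1).
    b = OTExt(ot1)
    assert OTS(mu0, mu1, ot1) == OTS((mu0, mu1)[b], (mu0, mu1)[b], ot1)
```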

2.2 Linear Algebra, Min-Entropy and Extractors

Random Matrices: For a uniformly random matrix \(\mathbf {{A}} \in \mathbb {Z}_2^{n \times m}\) (with \(m \ge n\)), the probability that \(\mathbf {{A}}\) does not have full rank is bounded by

$$\begin{aligned} \Pr _{\mathbf {{A}}}[\mathsf {rank}(\mathbf {{A}}) < n] = 1 - \prod _{i = 0}^{n-1} (1 - 2^{i - m}) \le \sum _{i = 0}^{n-1} 2^{i - m} \le 2^{n-m}, \end{aligned}$$

where the first inequality follows from the union-bound.
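The bound can be checked empirically. The following sketch (with arbitrarily chosen experiment parameters) estimates \(\Pr [\mathsf {rank}(\mathbf {{A}}) < n]\) for uniform binary matrices by Monte Carlo and compares the exact product formula with \(2^{n-m}\):

```python
import random

# Monte Carlo check of Pr[rank(A) < n] <= 2^{n-m} over Z_2; toy parameters.

def rank_gf2(rows, m):
    """Rank of a binary matrix given as a list of m-bit integers."""
    rank = 0
    for col in range(m):
        pivot = next((i for i in range(rank, len(rows)) if (rows[i] >> col) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i] >> col) & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

n, m, trials = 8, 16, 2000
fails = sum(rank_gf2([random.getrandbits(m) for _ in range(n)], m) < n
            for _ in range(trials))
exact = 1.0
for i in range(n):
    exact *= (1 - 2 ** (i - m))
# Pr[rank < n] = 1 - prod_i (1 - 2^{i-m}) <= 2^{n-m} = 2^{-8}
assert 1 - exact <= 2 ** (n - m)
assert fails / trials < 0.05  # loose sanity bound on the empirical estimate
```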

Average Conditional Min-Entropy. Let X be a random variable supported on a finite set \(\mathcal {X}\) and let Z be a (possibly correlated) random variable supported on a finite set \(\mathcal {Z}\). The average conditional min-entropy \(\tilde{H}_\infty ( X | Z )\) of X given Z is defined as

$$\begin{aligned} \tilde{H}_\infty ( X | Z ) = - \log \left( \mathsf {E}_z \left[ \max _{x \in \mathcal {X}} \Pr [X = x | Z = z] \right] \right) . \end{aligned}$$

We will use the following easy-to-establish fact about uniform distributions on binary vector spaces: If \(\mathsf {U}, \mathsf {V} \subseteq \mathbb {Z}_2^n\) are sub-vector spaces of \(\mathbb {Z}_2^n\), and if \(\mathbf {{u}}\) is uniform in \(\mathsf {U}\) and \(\mathbf {{v}}\) is uniform in \(\mathsf {V}\) (independently of \(\mathbf {{u}}\)), then it holds that

$$\begin{aligned} \tilde{H}_\infty (\mathbf {{u}} | \mathbf {{u}} + \mathbf {{v}} ) = \mathsf {dim}(\mathsf {U} \cap \mathsf {V}). \end{aligned}$$
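This identity can be verified exactly for small subspaces. In the sketch below (the subspaces are arbitrary examples of our own, with vectors of \(\mathbb {Z}_2^4\) packed as integers), \(\tilde{H}_\infty (\mathbf {{u}} | \mathbf {{u}} + \mathbf {{v}})\) is computed directly from its definition and compared with \(\mathsf {dim}(\mathsf {U} \cap \mathsf {V})\):

```python
from math import log2, isclose

def span(basis):
    """All Z_2-linear combinations of the basis vectors (as bitmask ints)."""
    space = {0}
    for b in basis:
        space |= {x ^ b for x in space}
    return space

n = 4
U = span([0b0001, 0b0010])  # dim 2
V = span([0b0010, 0b0100])  # dim 2; U intersect V = span{0b0010}, dim 1

# H_inf(u | u XOR v) = -log2( sum_w max_u Pr[u = u, u XOR v = w] ), exactly.
total = 0.0
for w in range(2 ** n):
    total += max((1.0 / (len(U) * len(V)) if (w ^ u) in V else 0.0) for u in U)
H = -log2(total)
assert isclose(H, log2(len(U & V)))  # both equal dim(U intersect V) = 1 here
```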

Extractors. A function \(\mathsf {Ext}: \{ 0,1 \}^d \times \mathcal {X} \rightarrow \{0,1\}^\ell \) is called a seeded strong average-case \((k,\epsilon )\)-extractor if, for all random variables X supported on \(\mathcal {X}\) and Z supported on some finite set such that \(\tilde{H}_\infty (X | Z) \ge k\), it holds that

$$\begin{aligned} (\mathsf {s},\mathsf {Ext}(\mathsf {s},X),Z) \approx _\epsilon (\mathsf {s},U,Z), \end{aligned}$$

where \(\mathsf {s}\) is a uniform seed in \(\{ 0,1 \}^d\) and \(U\) is uniform in \(\{0,1\}^\ell \). Such extractors can be constructed from universal hash functions [11, 12]. In fact, any extractor is an average-case extractor for slightly worse parameters by the averaging principle.
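A minimal sketch of the universal-hash approach (our own illustration of the [11, 12] style, not a construction from this paper): the seed describes a random binary matrix \(\mathbf {{M}} \in \mathbb {Z}_2^{\ell \times d}\) and \(\mathsf {Ext}(\mathbf {{M}}, x) = \mathbf {{M}} x\) over \(\mathbb {Z}_2\); the family \(\{x \mapsto \mathbf {{M}} x\}\) is universal because it is linear and pairwise uniform on nonzero inputs:

```python
import random

# Universal-hash-style extractor sketch: Ext(seed, x) = M * x over GF(2),
# where the seed is a random l x d binary matrix (rows packed as ints).

def sample_seed(l, d, rng):
    """Seed = l x d random binary matrix, one d-bit integer per row."""
    return [rng.getrandbits(d) for _ in range(l)]

def ext(seed, x):
    """Output bit i = <row_i, x> mod 2 (popcount of the AND, mod 2)."""
    return [bin(row & x).count("1") & 1 for row in seed]

rng = random.Random(0)
seed = sample_seed(16, 64, rng)
x, y = rng.getrandbits(64), rng.getrandbits(64)
out = ext(seed, x)
assert len(out) == 16 and set(out) <= {0, 1}
# Linearity over GF(2), the property underlying universality:
assert ext(seed, x ^ y) == [a ^ b for a, b in zip(ext(seed, x), ext(seed, y))]
```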

2.3 Lattices

We recall the standard facts about lattices. A lattice \(\varLambda \subseteq {\mathbb R}^m\) is the set of all integer-linear combinations of a set of linearly independent basis vectors, i.e. for every lattice \(\varLambda \) there exists a full-rank matrix \(\mathbf {{B}} \in {\mathbb R}^{k \times m}\) such that \(\varLambda = \varLambda (\mathbf {{B}}) = \{ \mathbf {{z}} \cdot \mathbf {{B}} \ | \ \mathbf {{z}} \in \mathbb {Z}^k \}\). We call k the rank of \(\varLambda \) and \(\mathbf {{B}}\) a basis of \(\varLambda \). More generally, for a set \(S \subseteq \varLambda \) we denote by \(\varLambda (S)\) the smallest sub-lattice of \(\varLambda \) which contains S. Moreover, we will write \(\mathsf {rank}(S)\) to denote \(\mathsf {rank}(\varLambda (S))\).

The dual lattice \(\varLambda ^*= \varLambda ^*(\varLambda )\) of a lattice \(\varLambda \) is defined by \(\varLambda ^*(\varLambda ) = \{ \mathbf {{x}} \in {\mathbb R}^m \ | \ \forall \mathbf {{y}} \in \varLambda : \langle \mathbf {{x}}, \mathbf {{y}} \rangle \in \mathbb {Z}\}\). Note that it holds that \((\varLambda ^*)^*= \varLambda \). The determinant of a lattice \(\varLambda \) is defined by \(\det \varLambda = \sqrt{\det (\mathbf {{B}} \cdot \mathbf {{B}}^\top )}\) where \(\mathbf {{B}}\) is any basis of \(\varLambda \). It holds that \(\det \varLambda ^*= 1 / \det \varLambda \). If \(\varLambda = \varLambda (\mathbf {{B}})\) and the norm of each row of \(\mathbf {{B}}\) is at most \(\ell \), then an argument using Gram-Schmidt orthogonalization establishes \(\det \varLambda \le \ell ^k\).

For a basis \(\mathbf {{B}} \in {\mathbb R}^{k \times m}\) of \(\varLambda \), we define the parallelepiped of \(\mathbf {{B}}\) by \(\mathcal {P}(\mathbf {{B}}) = \{ \mathbf {{x}} \cdot \mathbf {{B}} \ | \ \mathbf {{x}} \in [-1/2,1/2 )^k \}\). Abusing notation, we write \(\mathcal {P}(\varLambda )\) to denote \(\mathcal {P}(\mathbf {{B}})\) for some canonical basis \(\mathbf {{B}}\) of \(\varLambda \) (such as e.g. a Hermite basis). For lattices \(\varLambda \subseteq \varLambda _0\), we will use \(\mathcal {P}(\varLambda ) \cap \varLambda _0\) as a system of (unique) representatives for the quotient group \(\varLambda _0 / \varLambda \).

We say that a lattice is q-ary if \((q \mathbb {Z})^m \subseteq \varLambda \subseteq \mathbb {Z}^m\). In particular, for every q-ary lattice \(\varLambda \) there exists a matrix \(\mathbf {{A}} \in \mathbb {Z}_q^{k \times m}\) such that \(\varLambda = \varLambda _q(\mathbf {{A}}) = \{ \mathbf {{y}} \in \mathbb {Z}^m \ | \ \exists \mathbf {{x}} \in \mathbb {Z}_q^k: \mathbf {{y}} = \mathbf {{x}} \cdot \mathbf {{A}} ( \ \mathsf {mod} \ q) \}\). We also define the lattice \(\varLambda _q^\bot (\mathbf {{A}}) = \{ \mathbf {{y}} \in \mathbb {Z}^m \ | \ \mathbf {{A}} \cdot \mathbf {{y}} = 0 ( \ \mathsf {mod} \ q)\}\). It holds that \((\varLambda _q(\mathbf {{A}}))^*= \frac{1}{q} \varLambda _q^\bot (\mathbf {{A}})\).
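The duality relation can be spot-checked on a small example (toy parameters of our own choosing): every \(\mathbf {{y}} \in \varLambda _q(\mathbf {{A}})\) and \(\mathbf {{x}} \in \varLambda _q^\bot (\mathbf {{A}})\) must satisfy \(\langle \mathbf {{y}}, \mathbf {{x}}/q \rangle \in \mathbb {Z}\), i.e. \(\langle \mathbf {{y}}, \mathbf {{x}} \rangle = 0 \ (\mathsf {mod} \ q)\):

```python
from itertools import product

# Spot-check (Lambda_q(A))^* = (1/q) * Lambda_q^perp(A) for a 1 x 3 example.
q, m = 5, 3
A = [2, 3, 4]  # single row, k = 1

# Some points of Lambda_q(A): lifts s*A mod q plus q * z for small shifts z.
span_pts = [[(s * a) % q + q * z for a, z in zip(A, zs)]
            for s in range(q) for zs in product(range(-1, 2), repeat=m)]
# Some points of Lambda_q^perp(A): small integer vectors with A*x = 0 mod q.
kernel_pts = [list(x) for x in product(range(-q, q + 1), repeat=m)
              if sum(a * xi for a, xi in zip(A, x)) % q == 0]

# <y, x> = 0 (mod q) for all pairs, so <y, x/q> is an integer.
for y in span_pts:
    for x in kernel_pts:
        assert sum(yi * xi for yi, xi in zip(y, x)) % q == 0
```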

Gaussians. The Gaussian function \(\rho _\sigma : {\mathbb R}^m \rightarrow {\mathbb R}\) is defined by

$$\begin{aligned} \rho _\sigma (\mathbf {{x}}) = e^{- \pi \cdot \frac{\Vert \mathbf {{x}} \Vert ^2}{\sigma ^2}}. \end{aligned}$$

For a lattice \(\varLambda \subseteq {\mathbb R}^m\) and a parameter \(\sigma > 0\), we define the discrete Gaussian distribution \(D_{\varLambda ,\sigma }\) on \(\varLambda \) as the distribution with probability-mass function \(\Pr [\mathbf {{x}} = \mathbf {{x}}'] = \rho _\sigma (\mathbf {{x}}') / \rho _\sigma (\varLambda )\) for all \(\mathbf {{x}}' \in \varLambda \). Let in the following \(\mathcal {B}= \{ \mathbf {{x}} \in {\mathbb R}^m \ | \ \Vert \mathbf {{x}} \Vert \le 1 \}\) be the closed ball of radius 1 in \({\mathbb R}^m\). A standard concentration inequality for discrete Gaussians on general lattices is provided by Banaszczyk’s theorem.

Theorem 2.2

([6]). For any lattice \(\varLambda \subseteq {\mathbb R}^m\), parameter \(\sigma > 0\) and \(u \ge 1/\sqrt{2\pi }\) it holds that

$$\begin{aligned} \rho _\sigma (\varLambda \backslash u \sigma \sqrt{m} \mathcal {B}) \le 2^{-c_u \cdot m} \cdot \rho _\sigma (\varLambda ), \end{aligned}$$

where \(c_u = - \log (\sqrt{2 \pi e} u \cdot e^{-\pi u^2})\).

Setting \(\varLambda = \mathbb {Z}^m\) and \(u = 1\) in Theorem 2.2 we obtain the following corollary.

Corollary 2.3

Let \(\sigma > 0\) and \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m,\sigma }\). Then it holds that \(\Vert \mathbf {{x}} \Vert \le \sigma \cdot \sqrt{m}\), except with probability \(2^{-m}\).
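The corollary can be checked empirically. The sampler below uses naive weighted sampling over a truncated support (a simple stand-in for proper discrete Gaussian samplers; truncation radius and all parameters are our own choices for the experiment):

```python
import math
import random

# Naive sampler for D_{Z,sigma} over a truncated support, plus an empirical
# check of Corollary 2.3: ||x|| <= sigma * sqrt(m) except w.p. 2^{-m}.

def sample_dgauss_z(sigma, rng, tail=12):
    """One sample from D_{Z,sigma}, truncated to about tail*sigma."""
    support = list(range(-tail * int(sigma) - 1, tail * int(sigma) + 2))
    weights = [math.exp(-math.pi * (v / sigma) ** 2) for v in support]
    return rng.choices(support, weights=weights)[0]

rng = random.Random(1)
sigma, m, trials = 3.0, 64, 200
for _ in range(trials):
    x = [sample_dgauss_z(sigma, rng) for _ in range(m)]
    norm = math.sqrt(sum(xi * xi for xi in x))
    assert norm <= sigma * math.sqrt(m)  # violation probability <= 2^{-64}
```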

Uniform Matrix Distributions with Decoding Trapdoor. For our construction we will need an efficiently samplable ensemble of matrices which is statistically close to uniform and is equipped with an efficient bounded-distance decoder. Such an ensemble was first constructed by Ajtai [2] for q-ary lattices with prime q. We use a more efficient ensemble due to Micciancio and Peikert [25] which works for an arbitrary modulus.

Lemma 2.4

([25]). Let \(\kappa (n) = \omega (\sqrt{\log (n)})\) be any function that grows faster than \(\sqrt{\log (n)}\) and let \(\tau \) be a sufficiently large constant. There exists a pair of algorithms \((\mathsf {SampleWithTrapdoor},\mathsf {Decode})\) such that if \((\mathbf {{A}},\mathsf {td}) \leftarrow \mathsf {SampleWithTrapdoor}(q,n)\), then \(\mathbf {{A}}\) is of size \(n \times m\) with \(m = m(q,n) = O(n \cdot \log (q))\) and \(\mathbf {{A}}\) is \(2^{-n}\)-close to uniform. For any \(\mathbf {{s}} \in \mathbb {Z}_q^n\) and \(\varvec{{\eta }} \in \mathbb {Z}_q^m\) with \(\Vert \varvec{{\eta }} \Vert < \frac{q}{\sqrt{m} \cdot \kappa (n)}\), the algorithm \(\mathsf {Decode}\) on input \(\mathsf {td}\) and \(\mathbf {{s}} \cdot \mathbf {{A}} + \varvec{{\eta }}\) will output \(\mathbf {{s}}\).

2.4 Learning with Errors

The learning with errors (LWE) problem was defined by Regev [33]. In this work we exclusively use the decisional version. The \(\mathrm {LWE}_{n,m,q,\chi }\) problem, for \(n,m,q\in {\mathbb N}\) and for a distribution \(\chi \) supported over \(\mathbb {Z}\), is to distinguish between the distributions \((\mathbf {{A}}, \mathbf {{s}}\mathbf {{A}}+\mathbf {{e}} \pmod {q})\) and \((\mathbf {{A}}, \mathbf {{u}})\), where \(\mathbf {{A}}\) is uniform in \(\mathbb {Z}_q^{n \times m}\), \(\mathbf {{s}}\) is a uniform row vector in \(\mathbb {Z}_q^n\), \(\mathbf {{e}}\) is a row vector drawn from \(\chi ^m\), and \(\mathbf {{u}}\) is a uniform vector in \(\mathbb {Z}_q^m\). Often we consider the hardness of solving \(\mathrm {LWE}\) for any \(m=\mathrm{poly}(n \log q)\). This problem is denoted \(\mathrm {LWE}_{n,q,\chi }\). The matrix version of this problem asks to distinguish \((\mathbf {{A}},\mathbf {{S}}\cdot \mathbf {{A}} + \mathbf {{E}})\) from \((\mathbf {{A}},\mathbf {{U}})\), where \(\mathbf {{S}} \leftarrow \mathbb {Z}_q^{k \times n}\) is uniform, \(\mathbf {{E}} \leftarrow \chi ^{k \times m}\), and \(\mathbf {{U}} \leftarrow \mathbb {Z}_q^{k \times m}\) is uniform. The hardness of the matrix version for any \(k = \mathrm{poly}(n)\) can be established from \(\mathrm {LWE}_{n,m,q,\chi }\) via a routine hybrid-argument.

As shown in [30, 33], the \(\mathrm {LWE}_{n,q,\chi }\) problem with \(\chi \) being the discrete Gaussian distribution with parameter \(\sigma = \alpha q \ge 2 \sqrt{n}\) (i.e. the distribution over \(\mathbb {Z}\) where the probability of x is proportional to \(e^{-\pi (\left| {x} \right| /\sigma )^2}\), see more details below), is at least as hard as approximating the shortest independent vector problem (\(\mathsf {SIVP}\)) to within a factor of \(\gamma = {\widetilde{O}}({n}/\alpha )\) in worst case dimension n lattices. This is proven using a quantum reduction. Classical reductions (to a slightly different problem) exist as well [9, 29] but with somewhat worse parameters. The best known (classical or quantum) algorithms for these problems run in time \(2^{{\widetilde{O}}(n/\log \gamma )}\), and in particular they are conjectured to be intractable for \(\gamma = \mathrm{poly}(n)\).
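The distributions from the matrix-LWE definition above can be sketched as follows (with \(\chi \) a discrete Gaussian over \(\mathbb {Z}\); all concrete parameters are toy values of our own, far too small for any security):

```python
import math
import random

# Sketch of the matrix-LWE sample distribution (A, S*A + E mod q).

def dgauss(sigma, rng, tail=40):
    """Naive sample from a discrete Gaussian on Z, truncated to [-tail, tail]."""
    support = list(range(-tail, tail + 1))
    weights = [math.exp(-math.pi * (v / sigma) ** 2) for v in support]
    return rng.choices(support, weights=weights)[0]

def lwe_matrix_samples(n, m, k, q, sigma, rng):
    """Return (A, B) with A uniform in Z_q^{n x m} and B = S*A + E mod q."""
    A = [[rng.randrange(q) for _ in range(m)] for _ in range(n)]
    S = [[rng.randrange(q) for _ in range(n)] for _ in range(k)]
    E = [[dgauss(sigma, rng) for _ in range(m)] for _ in range(k)]
    B = [[(sum(S[i][t] * A[t][j] for t in range(n)) + E[i][j]) % q
          for j in range(m)] for i in range(k)]
    return A, B

rng = random.Random(2)
A, B = lwe_matrix_samples(n=4, m=8, k=2, q=97, sigma=3.0, rng=rng)
assert len(B) == 2 and all(len(row) == 8 for row in B)
assert all(0 <= b < 97 for row in B for b in row)
```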

3 Lossy Modes for q-Ary Lattices

The following lemmata borrow techniques from the proofs of two lemmata by Chung et al. [10] (Lemmas 3.3 and 3.4), but are not directly implied by these lemmata. In this section and Sect. 4, it will be instructive to think of \(\varLambda _0\) as \(\mathbb {Z}^m\), which will be the case in our application in Sect. 5.

Lemma 3.1

Let \(\varLambda \subseteq \varLambda _0 \subseteq {\mathbb R}^m\) be full rank lattices and let \(T \subseteq \varLambda _0\) be a system of coset representatives of \(\varLambda _0 / \varLambda \), i.e. we can write every \(\mathbf {{x}} \in \varLambda _0\) as \(\mathbf {{x}} = \mathbf {{t}} + \mathbf {{z}}\) for unique \(\mathbf {{t}} \in T\) and \(\mathbf {{z}} \in \varLambda \). Then it holds for any parameter \(\sigma > 0\) that

$$\begin{aligned} \frac{\rho _\sigma (T)}{\rho _\sigma (\varLambda _0)} \le \frac{1}{\rho _\sigma (\varLambda )}. \end{aligned}$$

Proof

As the sets \(T + \mathbf {{y}}\), for \(\mathbf {{y}} \in \varLambda \), cover \(\varLambda _0\), it holds that

$$\begin{aligned} \rho _\sigma (\varLambda _0)&= \sum _{\mathbf {{y}} \in \varLambda } \frac{1}{2} ( \rho _\sigma (T + \mathbf {{y}}) + \rho _\sigma (T - \mathbf {{y}}))\\&= \sum _{\mathbf {{y}} \in \varLambda } \sum _{\mathbf {{t}} \in T} \frac{1}{2} (\rho _\sigma (\mathbf {{t}} + \mathbf {{y}}) + \rho _\sigma (\mathbf {{t}} - \mathbf {{y}}))\\&= \sum _{\mathbf {{y}} \in \varLambda } \sum _{\mathbf {{t}} \in T} \rho _\sigma (\mathbf {{y}}) \cdot \rho _\sigma (\mathbf {{t}}) \cdot \underbrace{\frac{1}{2}(e^{-2\pi \langle \mathbf {{t}},\mathbf {{y}} \rangle / \sigma ^2} + e^{2\pi \langle \mathbf {{t}},\mathbf {{y}} \rangle / \sigma ^2})}_{\ge 1}\\&\ge \sum _{\mathbf {{y}} \in \varLambda } \rho _\sigma (\mathbf {{y}}) \sum _{\mathbf {{t}} \in T} \rho _\sigma (\mathbf {{t}})\\&= \rho _\sigma (\varLambda ) \cdot \rho _\sigma (T), \end{aligned}$$

where the first equality follows from the fact that \(\sum _{\mathbf {{y}} \in \varLambda }\rho _\sigma (T + \mathbf {{y}}) = \sum _{\mathbf {{y}} \in \varLambda }\rho _\sigma (T - \mathbf {{y}}) = \rho _\sigma (\varLambda _0)\). The claim follows immediately.
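A quick one-dimensional numeric check of the lemma, with \(\varLambda _0 = \mathbb {Z}\), \(\varLambda = q\mathbb {Z}\) and coset representatives \(T = \{0, \dots , q-1\}\) (the infinite sums are truncated, which is accurate for the \(\sigma \) values used here):

```python
import math

# Numeric 1-D sanity check of Lemma 3.1: rho(T)/rho(Z) <= 1/rho(qZ).

def rho(points, sigma):
    """Truncated Gaussian mass sum_{x in points} exp(-pi x^2 / sigma^2)."""
    return sum(math.exp(-math.pi * (x / sigma) ** 2) for x in points)

q, bound = 5, 200
for sigma in (0.8, 2.0, 5.0, 11.0):
    Z_pts = range(-bound, bound + 1)              # Lambda_0 = Z (truncated)
    qZ_pts = range(-bound * q, bound * q + 1, q)  # Lambda = qZ (truncated)
    T = range(q)                                  # coset representatives
    lhs = rho(T, sigma) / rho(Z_pts, sigma)
    rhs = 1 / rho(qZ_pts, sigma)
    assert lhs <= rhs + 1e-12
```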

Lemma 3.2

Fix a matrix \(\mathbf {{A}}\in \mathbb {Z}_q^{n \times m}\) with \(m = O(n \log (q))\) and a parameter \(0< \sigma < \frac{q}{2 \sqrt{m}}\). Let \(\mathbf {{s}}\) be uniform in \(\mathbb {Z}_q^n\) and \(\mathbf {{e}} \leftarrow D_{\mathbb {Z}^m,\sigma }\). Then it holds that \(\tilde{H}_\infty ( \mathbf {{s}} | \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}} \ \mathsf {mod} \ q) \ge -\log \left( \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))} + 2^{-m}\right) \).

Proof

Given arbitrary \(\mathbf {{A}}\) and \(\mathbf {{y}}\), we would like to find an \(\mathbf {{s}}^*\) that maximizes the probability \(\Pr [\mathbf {{s}} = \mathbf {{s}}^*| \mathbf {{y}} = \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}}]\). By Bayes’ rule, it holds that

$$\begin{aligned} \Pr [\mathbf {{s}} = \mathbf {{s}}^*| \mathbf {{y}} = \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}}]&= \Pr [\mathbf {{y}} = \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}} | \mathbf {{s}} = \mathbf {{s}}^*] \cdot \frac{\Pr [\mathbf {{s}} = \mathbf {{s}}^*]}{\Pr [\mathbf {{y}} = \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}}]}\\&= \Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}] \cdot \frac{\Pr [\mathbf {{s}} = \mathbf {{s}}^*]}{\sum _{\mathbf {{s}}'}\Pr [\mathbf {{y}} = \mathbf {{s}}\mathbf {{A}}+ \mathbf {{e}} | \mathbf {{s}} = \mathbf {{s}}'] \Pr [\mathbf {{s}} = \mathbf {{s}}']}\\&= \Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}] \cdot \frac{q^{-n}}{\sum _{\mathbf {{s}}'}\Pr [\mathbf {{e}} =\mathbf {{y}} - \mathbf {{s}}'\mathbf {{A}}] q^{-n}}\\&= \frac{\Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}]}{\sum _{\mathbf {{s}}'} \Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}' \mathbf {{A}}]}. \end{aligned}$$

As the denominator \(\sum _{\mathbf {{s}}'} \Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}' \mathbf {{A}}]\) is independent of \(\mathbf {{s}}^*\), it suffices to maximize the numerator \(\Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}]\) with respect to \(\mathbf {{s}}^*\). As \( \Pr [\mathbf {{e}} = \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}] = \frac{\rho _{\sigma }(\mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}})}{\rho _{\sigma }(\mathbb {Z}^m)}\) is monotonically decreasing in \(\Vert \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}\Vert \), this probability is maximal for the \(\mathbf {{s}}^*\) that minimizes \(\Vert \mathbf {{y}} - \mathbf {{s}}^*\mathbf {{A}}\Vert \).

Let \(V \subseteq \mathbb {Z}^m\) be the discretized Voronoi cell of \(\varLambda _q(\mathbf {{A}})\), that is, V consists of all the points in \(\mathbb {Z}^m\) that are (strictly) closer to 0 than to any other point in \(\varLambda _q(\mathbf {{A}})\); for any point \(\mathbf {{x}} \in \mathbb {Z}^m\) that is equidistant to several lattice points \(\mathbf {{z}}_1,\dots ,\mathbf {{z}}_\ell \) (where \(\mathbf {{z}}_1 = 0\)), assume that there is some tie-breaking rule \(\mathbf {{x}} \mapsto i(\mathbf {{x}})\), such that \(\mathbf {{x}} - \mathbf {{z}}_{i(\mathbf {{x}})} \in V\), but for all \(j \in [\ell ] \backslash \{ i(\mathbf {{x}}) \}\) it holds that \(\mathbf {{x}} - \mathbf {{z}}_j \notin V\). By construction, V is a system of coset representatives of \(\mathbb {Z}^m / \varLambda _q(\mathbf {{A}})\).

Moreover, for the maximum-likelihood \(\mathbf {{s}}^*\) it holds that \(\Pr [\mathbf {{s}} = \mathbf {{s}}^*| \mathbf {{y}} = \mathbf {{s}}\mathbf {{A}}+ \mathbf {{e}}] = \Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V]\). By Corollary 2.3 it holds that \(\Vert \mathbf {{e}} \Vert \le \sigma \cdot \sqrt{m} < q/2\), except with probability \(2^{-m}\). Moreover, conditioned on \(\Vert \mathbf {{e}} \Vert < q/2\), the events \(\mathbf {{e}} \ \mathsf {mod} \ q \in V\) and \(\mathbf {{e}} \in V\) are equivalent. We can therefore bound \(\Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V] \le \Pr [\mathbf {{e}} \in V] + 2^{-m}\). By Lemma 3.1 we obtain \(\Pr [\mathbf {{e}} \in V] \le \frac{\rho _\sigma (V)}{\rho _\sigma (\mathbb {Z}^m)} \le \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))}\) and therefore \( \Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V] \le \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))} + 2^{-m}\).

We conclude that \(\max _{\mathbf {{s}}^*\in \mathbb {Z}_q^n} \Pr [\mathbf {{s}} = \mathbf {{s}}^*| \mathbf {{y}} = \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}}] =\Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V] \le \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))} + 2^{-m}\). Thus, it holds that

$$\begin{aligned} \tilde{H}_\infty (\mathbf {{s}} | \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}})&= - \log (\mathsf {E}_{\mathbf {{y}}} \left[ \max _{\mathbf {{s}}^*} \Pr _{\mathbf {{s}},\mathbf {{e}}}[\mathbf {{s}} =\mathbf {{s^*}} | \mathbf {{y}} = \mathbf {{s}} \mathbf {{A}}+ \mathbf {{e}}] \right] )\\&= - \log (\mathsf {E}_{\mathbf {{y}}} [\Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V] ])\\&= - \log (\Pr [\mathbf {{e}} \ \mathsf {mod} \ q \in V] )\\&\ge -\log \left( \frac{1}{\rho _{\sigma }(\varLambda _q(\mathbf {{A}}))} + 2^{-m}\right) \end{aligned}$$
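The chain of equalities above can be verified by exhaustive enumeration on toy parameters. The following sketch is our own illustration (the matrix A, the modulus and the truncated error support are illustrative assumptions, not parameters from the paper): it computes the posterior of s given \(\mathbf{y} = \mathbf{s}\mathbf{A} + \mathbf{e} \ \mathsf{mod} \ q\) both directly from the joint distribution and via the \(\rho\)-ratio formula, and checks that the maximum-likelihood \(\mathbf{s}^*\) minimizes the centered distance \(\Vert \mathbf{y} - \mathbf{s}^*\mathbf{A}\Vert\).

```python
import math

# Toy parameters (illustrative only): n = 1, so s is a scalar in Z_q.
q, m, sigma = 7, 2, 1.0
A = (1, 3)                       # a fixed 1 x m matrix over Z_q
L = 10                           # truncation radius for the error support

def rho(v):                      # Gaussian weight rho_sigma(v) = exp(-pi*||v||^2 / sigma^2)
    return math.exp(-math.pi * sum(t * t for t in v) / sigma ** 2)

errors = [(e1, e2) for e1 in range(-L, L + 1) for e2 in range(-L, L + 1)]

# Joint distribution of (s, y) with s uniform over Z_q and y = s*A + e mod q.
joint = {}
for s in range(q):
    for e in errors:
        y_val = tuple((s * A[j] + e[j]) % q for j in range(m))
        joint[(s, y_val)] = joint.get((s, y_val), 0.0) + rho(e) / q

y = (1, 3)                       # an observation produced by s = 1, e = (0, 0)
mass_y = sum(joint.get((s, y), 0.0) for s in range(q))
post_direct = [joint.get((s, y), 0.0) / mass_y for s in range(q)]

# Formula side: Pr[s = s* | y] = rho-mass of the coset of y - s*A divided by the
# sum over all s'; the coset mass sums rho over errors congruent to y - s*A mod q.
def coset_rho(v):
    return sum(rho(e) for e in errors
               if all((e[j] - v[j]) % q == 0 for j in range(m)))

w = [coset_rho(tuple((y[j] - s * A[j]) % q for j in range(m))) for s in range(q)]
post_formula = [wi / sum(w) for wi in w]

def centered_norm(s):            # ||y - s*A||^2 with coordinates reduced to [-q/2, q/2)
    return sum((((y[j] - s * A[j] + q // 2) % q) - q // 2) ** 2 for j in range(m))

map_s = max(range(q), key=lambda s: post_direct[s])
```

With these toy values the posterior concentrates sharply on the s that makes the centered residual shortest, exactly as the derivation predicts.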

4 Partial Smoothing

In this section we will state a variant of the smoothing lemma of Micciancio and Regev [26]. Consider a discrete Gaussian \(D_{\varLambda _0,\sigma }\) on a lattice \(\varLambda _0\). As in the setting of the smoothing lemma of [26], we want to analyze what happens to the distribution of this Gaussian when we reduce it modulo a sublattice \(\varLambda \subseteq \varLambda _0\). The new lemma states that if the mass of the Fourier transform of the probability mass function of \(D_{\varLambda _0,\sigma } \ \mathsf {mod} \ \varLambda \) is concentrated on short vectors of the dual lattice \(\varLambda ^*\), then \(D_{\varLambda _0,\sigma } \ \mathsf {mod} \ \varLambda \) will be uniform on a certain lattice \(\varLambda _1\) (reduced modulo \(\varLambda \)) with \(\varLambda \subseteq \varLambda _1 \subseteq \varLambda _0\).

Lemma 4.1

Let \(\sigma > 0\) and let \(\varLambda \subseteq \varLambda _0 \subseteq {\mathbb R}^n\) be full-rank lattices where \(\det (\varLambda _0) = 1\). Furthermore, let \(\gamma > 0\). Define \(\varLambda _1 = \{ \mathbf {{z}} \in \varLambda _0 \ | \ \forall \mathbf {{y}} \in \varLambda ^*\cap \gamma \mathcal {B}: \langle \mathbf {{y}},\mathbf {{z}} \rangle \in \mathbb {Z}\}\). Given that \(\rho _{1/\sigma }(\varLambda ^*\backslash \gamma \mathcal {B}) \le \epsilon \), it holds that

$$\begin{aligned} \mathbf {{x}} \ \mathsf {mod} \ \varLambda \approx _\epsilon (\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ \varLambda , \end{aligned}$$

where \(\mathbf {{x}} \leftarrow D_{\varLambda _0,\sigma }\) and \(\mathbf {{u}} \leftarrow U(\varLambda _1 \ \mathsf {mod} \ \varLambda )\).

Notice that for the case of \(\varLambda ^*\cap \gamma \mathcal {B}= \{ 0 \}\) we recover the standard smoothing lemma of [26]. The proof of Lemma 4.1 uses standard Fourier-analytic techniques akin to [26] and is deferred to Appendix A. We will make use of the following consequence of Lemma 4.1.

Corollary 4.2

Let \(q > 0\) be an integer and let \(\gamma > 0\). Let \(\mathbf {{A}}\in \mathbb {Z}_q^{n \times m}\) and let \(\sigma > 0\) and \(\epsilon > 0\) be such that \(\rho _{q/\sigma }(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}) \le \epsilon \). Let \(\mathbf {{D}} \in \mathbb {Z}_q^{k \times m}\) be a full-rank (and therefore minimal) matrix with \(\varLambda _q^\bot (\mathbf {{D}}) = \{ \mathbf {{x}} \in \mathbb {Z}^m \ | \ \forall \mathbf {{y}} \in \varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}: \langle \mathbf {{x}},\mathbf {{y}} \rangle = 0 \pmod {q} \}\). Let \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m,\sigma }\) and \(\mathbf {{u}} \leftarrow U(\varLambda _q^\bot (\mathbf {{D}}) \ \mathsf {mod} \ q)\). Then it holds that

$$\begin{aligned} \mathbf {{A}} \mathbf {{x}} \ \mathsf {mod} \ q \approx _\epsilon \mathbf {{A}} \cdot (\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ q. \end{aligned}$$

Proof

Setting \(\varLambda _0 = \mathbb {Z}^m\), \(\varLambda = \varLambda _q^\bot (\mathbf {{A}})\) and \(\gamma ' = \gamma /q\), it holds that \(\varLambda ^*= \frac{1}{q} \varLambda _q(\mathbf {{A}})\) and

$$\begin{aligned} \epsilon \ge \rho _{q/\sigma }(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}) = \rho _{1/\sigma }\left( \frac{1}{q}\varLambda _q(\mathbf {{A}}) \backslash \frac{\gamma }{q} \mathcal {B}\right) = \rho _{1/\sigma }(\varLambda ^*\backslash \gamma ' \mathcal {B}). \end{aligned}$$

Therefore, we can set

$$\begin{aligned} \varLambda _1&= \{ \mathbf {{x}} \in \mathbb {Z}^m \ | \ \forall \mathbf {{y}} \in \varLambda ^*\cap \gamma ' \mathcal {B}: \langle \mathbf {{x}},\mathbf {{y}} \rangle \in \mathbb {Z}\}\\&= \{ \mathbf {{x}} \in \mathbb {Z}^m \ | \ \forall \mathbf {{y}} \in \varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}: \langle \mathbf {{x}},\mathbf {{y}} \rangle = 0 ( \ \mathsf {mod} \ q) \}\\&= \varLambda _q^\bot (\mathbf {{D}}). \end{aligned}$$

Now, as \(\rho _{1/\sigma }(\varLambda ^*\backslash \gamma ' \mathcal {B}) \le \epsilon \), it holds by Lemma 4.1 that \(\mathbf {{x}} \ \mathsf {mod} \ \varLambda _q^{\bot }(\mathbf {{A}}) \approx _\epsilon (\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ \varLambda _q^{\bot } (\mathbf {{A}})\). Write \(\mathbf {{y}}_1 = \mathbf {{x}} \ \mathsf {mod} \ \varLambda _q^{\bot }(\mathbf {{A}})\) as \(\mathbf {{y}}_1 = \mathbf {{x}} + \mathbf {{z}}_1 \ \mathsf {mod} \ q\) for a suitable \(\mathbf {{z}}_1 \in \varLambda _q^{\bot }(\mathbf {{A}})\). Likewise, we can write \(\mathbf {{y}}_2 = \mathbf {{x}} + \mathbf {{u}} \ \mathsf {mod} \ \varLambda _q^{\bot }(\mathbf {{A}})\) as \(\mathbf {{y}}_2 = \mathbf {{x}} + \mathbf {{u}} + \mathbf {{z}}_2 \ \mathsf {mod} \ q\) for a suitable \(\mathbf {{z}}_2 \in \varLambda _q^{\bot }(\mathbf {{A}})\). Thus it holds that

$$\begin{aligned} \mathbf {{A}} \mathbf {{x}} = \mathbf {{A}}(\mathbf {{x}} + \mathbf {{z}}_1) \approx _\epsilon \mathbf {{A}}(\mathbf {{x}} + \mathbf {{u}} + \mathbf {{z}}_2) = \mathbf {{A}}(\mathbf {{x}} + \mathbf {{u}}) \ ( \ \mathsf {mod} \ q). \end{aligned}$$
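The first equality in the display preceding this proof step is just the scaling behaviour of the Gaussian mass, \(\rho _{q/\sigma }(\mathbf {{v}}) = \rho _{1/\sigma }(\mathbf {{v}}/q)\), applied pointwise to every lattice vector. A quick numeric sanity check of this identity, with arbitrary illustrative values of our own choosing:

```python
import math

def rho(s, v):
    # rho_s(v) = exp(-pi * ||v||^2 / s^2)
    return math.exp(-math.pi * sum(t * t for t in v) / s ** 2)

q, sigma = 8, 2.0
v = (3.0, -5.0, 1.0)
scaled = tuple(t / q for t in v)   # the vector v/q

lhs = rho(q / sigma, v)            # rho_{q/sigma}(v)
rhs = rho(1 / sigma, scaled)       # rho_{1/sigma}(v/q)
```

Summing this identity over \(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}\) versus \(\frac{1}{q}\varLambda _q(\mathbf {{A}}) \backslash \frac{\gamma }{q}\mathcal {B}\) gives exactly the equality used in the proof.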

We will also use the following lower bound on the Gaussian measure of lattices that have many short linearly independent vectors. The proof of Lemma 4.3 is technically similar to the proof of the transference theorem in [6].

Lemma 4.3

Let \(\varLambda \subseteq {\mathbb R}^m\) be a lattice and let \(\sigma > 0\) and \(\gamma > 0\) be such that \(\varLambda \cap \gamma \mathcal {B}\) contains at least k linearly independent vectors. Then it holds that \(\rho _\sigma (\varLambda ) \ge (\sigma /\gamma )^k\).

Proof

Let \(\varLambda ' \subseteq \varLambda \) be the sublattice generated by k linearly independent vectors in \(\varLambda \cap \gamma \mathcal {B}\). As \(\varLambda ' \subseteq \varLambda \), it holds that \(\rho _\sigma (\varLambda ) \ge \rho _\sigma (\varLambda ')\). As \(\varLambda '\) has a basis of vectors of length at most \(\gamma \), we have that \(\det (\varLambda ') \le \gamma ^k\) and conclude \(\det ((\varLambda ')^*) = 1/\det (\varLambda ') \ge \frac{1}{\gamma ^k}\). By the Poisson summation formula, we get that

$$\begin{aligned} \rho _\sigma (\varLambda ')&= \sigma ^k \cdot \det ((\varLambda ')^*) \cdot \rho _{1/\sigma }((\varLambda ')^*)\\&\ge (\sigma / \gamma )^k, \end{aligned}$$

as \(\rho _{1/\sigma }((\varLambda ')^*) \ge 1\). Thus we conclude that \(\rho _\sigma (\varLambda ) \ge (\sigma /\gamma )^k\).
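Lemma 4.3 can be sanity-checked numerically. The sketch below uses toy values of our own choosing (the lattice \(\mathbb{Z}^2\), which contains \(k = 2\) linearly independent vectors of length \(1 \le \gamma \)) and compares a truncated evaluation of \(\rho _\sigma (\mathbb{Z}^2)\) against the bound \((\sigma /\gamma )^k\):

```python
import math

sigma, gamma, k, T = 4.0, 2.0, 2, 30   # illustrative parameters; T truncates the sum

# rho_sigma(Z^2), truncated to the box [-T, T]^2 (the tail beyond T is negligible here).
# Z^2 = Z x Z, so the two-dimensional sum factorizes into a one-dimensional sum squared.
one_dim = sum(math.exp(-math.pi * z * z / sigma ** 2) for z in range(-T, T + 1))
rho_lattice = one_dim ** 2

# Lemma 4.3: rho_sigma(Lambda) >= (sigma/gamma)^k. Here the bound is (4/2)^2 = 4,
# while rho_sigma(Z^2) is close to sigma^2 = 16 (Poisson summation), so there is slack
# corresponding to the factor (gamma/1)^k for basis vectors strictly shorter than gamma.
bound = (sigma / gamma) ** k
```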

5 Our Oblivious Transfer Protocol

We are now ready to provide our statistically sender-private oblivious transfer protocol. In the following, let \(q,n,\ell = \mathrm{poly}(\lambda )\) and assume that q is of the form \(q = 2p\) for an odd p. Let \((\mathsf {SampleWithTrapdoor},\mathsf {Decode})\) be the pair of algorithms provided in Lemma 2.4 and let \(m = m(q,n)\) be such that the matrices \(\mathbf {{A}}\) generated by \(\mathsf {SampleWithTrapdoor}(q,2n)\) are elements of \(\mathbb {Z}_q^{2n \times m}\). Let \(\mathsf {Ext}_0: \{0,1\}^d \times \{0,1\}^n \rightarrow \{0,1\}^\ell \) and \(\mathsf {Ext}_1: \{0,1\}^d \times \mathbb {Z}_q^{2n}\rightarrow \{0,1\}^\ell \) be seeded extractors, both with seed length d and \(\ell \) bits of output. Finally, let \(\sigma _0, \sigma _1 > 0\) be parameters for discrete Gaussians and let \(\chi \) be an LWE error distribution.

The protocol \(\mathsf {OT}= (\mathsf {OTR},\mathsf {OTS},\mathsf {OTD})\) is given as follows.

  • \(\mathsf {OTR}(1^\lambda ,\beta \in \{0,1\})\):

    • If \(\beta = 0\), choose a matrix \(\mathbf {{A}}_1 \leftarrow \mathbb {Z}_q^{n \times m}\), a matrix \(\mathbf {{S}} \leftarrow \mathbb {Z}_q^{n \times n}\) and an error matrix \(\mathbf {{E}} \leftarrow \chi ^{n \times m}\). Set \(\mathbf {{A}}_2 \leftarrow \mathbf {{S}} \cdot \mathbf {{A}}_1 + \mathbf {{E}}\) and \(\mathbf {{A}} \leftarrow \left( {\begin{matrix} \mathbf {{A}}_1 \\ \mathbf {{A}}_2 \end{matrix}} \right) \). Repeat this step until \(\mathbf {{A}} \ \mathsf {mod} \ 2\) has full rank.

      Output \(\mathsf {ot}_1 \leftarrow \mathbf {{A}}\) and \(\mathsf {st}\leftarrow \mathbf {{S}}\).

    • If \(\beta = 1\), sample \((\mathbf {{A}},\mathsf {td}) \leftarrow \mathsf {SampleWithTrapdoor}(q,2n)\). Repeat this step until \(\mathbf {{A}} \ \mathsf {mod} \ 2\) has full rank. Output \(\mathsf {ot}_1 \leftarrow \mathbf {{A}}\) and \(\mathsf {st}\leftarrow \mathsf {td}\).

  • \(\mathsf {OTS}(1^\lambda , (\mu _0,\mu _1) \in (\{0,1\}^\ell )^2, \mathsf {ot}_1 = \mathbf {{A}})\):

    • Check if \(\mathbf {{A}} \ \mathsf {mod} \ 2\) has full rank, if not output \(\bot \).

    • Parse \(\mathbf {{A}} = \left( {\begin{matrix} \mathbf {{A}}_1 \\ \mathbf {{A}}_2 \end{matrix}} \right) \). Sample \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m,\sigma _0}\), rejecting until \(\Vert \mathbf {{x}} \Vert < \sigma _0 \sqrt{m}\). Choose a uniformly random \(\mathbf {{r}} \leftarrow \{0,1\}^n\) and a random seed \(\mathsf {s}_0 \leftarrow \{0,1\}^d\) for the extractor \(\mathsf {Ext}_0\). Compute \(\mathbf {{y}}_1 \leftarrow \mathbf {{A}}_1\mathbf {{x}}\) and \(\mathbf {{y}}_2 \leftarrow \mathbf {{A}}_2 \mathbf {{x}} + \frac{q}{2} \cdot \mathbf {{r}}\). Set \(\mathsf {c}_0 \leftarrow (\mathbf {{y}}_1,\mathbf {{y}}_2,\mathsf {s}_0,\mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}) \oplus \mu _0)\).

    • Sample \(\varvec{{\eta }} \leftarrow D_{\mathbb {Z}^m,\sigma _1}\), rejecting until \(\Vert \varvec{{\eta }} \Vert < \sigma _1 \sqrt{m}\). Choose a uniformly random \(\mathbf {{t}} \leftarrow \mathbb {Z}_q^{2n}\) and a random seed \(\mathsf {s}_1 \leftarrow \{0,1\}^d\) for the extractor \(\mathsf {Ext}_1\). Compute \(\mathbf {{y}} \leftarrow \mathbf {{t}} \cdot \mathbf {{A}}+\varvec{{\eta }}\) and set \(\mathsf {c}_1 \leftarrow (\mathbf {{y}},\mathsf {s}_1,\mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}) \oplus \mu _1)\).

    • Output \(\mathsf {ot}_2 \leftarrow (\mathsf {c}_0,\mathsf {c}_1)\).

  • \(\mathsf {OTD}(\beta ,\mathsf {st},\mathsf {ot}_2 = (\mathsf {c}_0,\mathsf {c}_1))\)

    • If \(\beta = 0\): Parse \(\mathsf {st}= \mathbf {{S}}\) and \(\mathsf {c}_0 = (\mathbf {{y}}_1,\mathbf {{y}}_2,\mathsf {s}_0,\tau )\). Compute \(\mathbf {{r}}' \leftarrow \left\lfloor \mathbf {{y}}_2 - \mathbf {{S}} \cdot \mathbf {{y}}_1 \right\rceil _{q/2}\) and output \(\mu _0' \leftarrow \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}') \oplus \tau \).

    • If \(\beta = 1\): Parse \(\mathsf {st}= \mathsf {td}\) and \(\mathsf {c}_1 = (\mathbf {{y}},\mathsf {s}_1,\tau )\). Compute \(\mathbf {{t}}' \leftarrow \mathsf {Decode}(\mathsf {td},\mathbf {{y}})\) and output \(\mu _1' \leftarrow \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}') \oplus \tau \).
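The \(\beta = 0\) branch above is essentially Regev-style encryption with rounding. The following toy sketch is our own illustration, not the paper's implementation: it uses illustrative parameters, a small bounded error in place of \(\chi\), a short \(\mathbf{x}\) in place of the discrete Gaussian, and a one-time pad over \(\mathbf{r}\) standing in for \(\mathsf{Ext}_0\). It runs the \(\mathbf{A}_2 = \mathbf{S}\mathbf{A}_1 + \mathbf{E}\) setup, the sender's computation of \(\mathsf{c}_0\), and the receiver's rounding decryption.

```python
import random

random.seed(1)
n, m = 4, 16
p = 257                      # odd, so that q = 2p as the protocol requires
q = 2 * p
half = q // 2                # = p

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % q for i in range(len(M))]

# Receiver, beta = 0: A2 = S*A1 + E is an LWE matrix (entries of E play the role of chi).
A1 = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
S  = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
E  = [[random.choice([-1, 0, 1]) for _ in range(m)] for _ in range(n)]
A2 = [[(sum(S[i][k] * A1[k][j] for k in range(n)) + E[i][j]) % q
       for j in range(m)] for i in range(n)]

# Sender: the c0 component, encrypting mu0.
mu0 = [1, 0, 1, 1]
x = [random.randrange(-2, 3) for _ in range(m)]     # |x_j| <= 2, so |<e_i, x>| <= 2m < q/4
r = [random.randrange(2) for _ in range(n)]
y1 = mat_vec(A1, x)
A2x = mat_vec(A2, x)
y2 = [(A2x[i] + half * r[i]) % q for i in range(n)]
c  = [mu0[i] ^ r[i] for i in range(n)]              # one-time pad standing in for Ext_0

# Receiver decrypts: y2 - S*y1 = E*x + (q/2)*r, then round to the nearest multiple of q/2.
Sy1 = mat_vec(S, y1)
def centered(v):                                    # representative in [-q/2, q/2)
    return ((v + half) % q) - half

r_prime = [1 if abs(centered((y2[i] - Sy1[i]) % q)) > q // 4 else 0 for i in range(n)]
mu0_prime = [c[i] ^ r_prime[i] for i in range(n)]
```

Since \(|\langle \mathbf{e}_i, \mathbf{x}\rangle | \le 2m = 32 < q/4\) here, the rounding step always recovers \(\mathbf{r}\), mirroring the correctness argument of Lemma 5.1.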

We will first show correctness of our protocol.

Lemma 5.1

(Correctness). Assume that the distribution \(\chi \) is B-bounded. Provided that \(\sigma _0 \le \frac{q}{4 B \cdot m }\) and \(\sigma _1 \le \frac{q}{m \cdot \kappa (n)}\) (where \(\kappa (n) = \omega (\sqrt{\log (n)})\) as in Lemma 2.4), the protocol \(\mathsf {OT}\) is perfectly correct.

Proof

First note that, as \(m \ge n \cdot \log (q)\), it holds for a uniformly random \(\mathbf {{A}} \leftarrow \mathbb {Z}_q^{2n \times m}\) that \(\mathbf {{A}} \ \mathsf {mod} \ 2\) has full rank, except with negligible probability \(2^{2n - m}\) (as detailed in Sect. 2.2). Moreover, for \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m,\sigma _0}\) and \(\varvec{{\eta }} \leftarrow D_{\mathbb {Z}^m,\sigma _1}\) it holds by Corollary 2.3 that \(\Vert \mathbf {{x}} \Vert < \sigma _0 \sqrt{m}\) and \(\Vert \varvec{{\eta }} \Vert < \sigma _1 \sqrt{m}\), except with negligible probability. Thus, rejection in \(\mathsf {OTR}\) and \(\mathsf {OTS}\) happens only with negligible probability.

In the case of \(\beta = 0\), it holds that

$$\begin{aligned} \mathbf {{y}}_2 - \mathbf {{S}} \cdot \mathbf {{y}}_1&= (\mathbf {{S}} \mathbf {{A}}_1 + \mathbf {{E}})\mathbf {{x}}+\frac{q}{2}\mathbf {{r}} - \mathbf {{S}} \mathbf {{A}}_1\mathbf {{x}}\\&= \mathbf {{E}}\cdot \mathbf {{x}} + \frac{q}{2} \cdot \mathbf {{r}}. \end{aligned}$$

By the Cauchy-Schwarz inequality it holds for each row \(\mathbf {{e}}_i\) of \(\mathbf {{E}}\) that \(| \langle \mathbf {{e}}_i , \mathbf {{x}} \rangle | \le \Vert \mathbf {{e}}_i \Vert \cdot \Vert \mathbf {{x}} \Vert \). As the entries of \(\mathbf {{e}}_i\) are chosen according to \(\chi \), we can bound \(\Vert \mathbf {{e}}_i \Vert \) by \(\Vert \mathbf {{e}}_i \Vert \le B \cdot \sqrt{m}\). As \(\Vert \mathbf {{x}} \Vert < \sigma _0 \cdot \sqrt{m}\), we have that

$$\begin{aligned} | \langle \mathbf {{e}}_i , \mathbf {{x}} \rangle | \le B \cdot \sigma _0 \cdot m < \frac{q}{4} \end{aligned}$$

as \(\sigma _0 \le \frac{q}{4 B \cdot m }\). We conclude that \(\mathbf {{r}}' = \left\lfloor \mathbf {{y}}_2 - \mathbf {{S}} \cdot \mathbf {{y}}_1 \right\rceil _{q/2}\) is identical to the vector \(\mathbf {{r}}\) used during encryption. Consequently, it holds that \(\mu _0' = \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}') \oplus \tau = \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}') \oplus \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}) \oplus \mu _0 = \mu _0\).

For the case of \(\beta = 1\), as \(\Vert \varvec{{\eta }} \Vert < \sigma _1 \sqrt{m} \le \frac{q}{\sqrt{m} \cdot \kappa (n)}\) it holds by Lemma 2.4 that \(\mathsf {Decode}(\mathsf {td},\mathbf {{y}})\) outputs the correct \(\mathbf {{t}}' = \mathbf {{t}}\). We conclude that \(\mu '_1 = \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}') \oplus \tau = \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}') \oplus \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}) \oplus \mu _1 = \mu _1\).

We now show that \(\mathsf {OT}\) has computational receiver privacy under the decisional matrix LWE assumption.

Lemma 5.2

(Computational Receiver Security). Given that the decisional \(LWE_{n,q,\chi }\)-assumption holds, the protocol \(\mathsf {OT}= (\mathsf {OTR},\mathsf {OTS},\mathsf {OTD})\) has receiver privacy.

Proof

Let \((\mathbf {{A}},\mathsf {st}_0) \leftarrow \mathsf {OTR}(1^\lambda ,0)\) and \((\mathbf {{A}}',\mathsf {st}_1) \leftarrow \mathsf {OTR}(1^\lambda ,1)\). Assume towards contradiction that there exists a PPT distinguisher \(\mathcal {D}\) which distinguishes \(\mathbf {{A}}\) and \(\mathbf {{A}}'\) with non-negligible advantage \(\epsilon \). We can immediately use \(\mathcal {D}\) to distinguish decisional matrix LWE. Decomposing \(\mathbf {{A}} = \left( {\begin{matrix} \mathbf {{A}}_1 \\ \mathbf {{A}}_2 \end{matrix}} \right) \), it holds that \(\mathbf {{A}}_1\) is uniformly random and \(\mathbf {{A}}_2 = \mathbf {{S}} \cdot \mathbf {{A}}_1 + \mathbf {{E}}\), i.e. \((\mathbf {{A}}_1,\mathbf {{A}}_2)\) is a sample of the matrix LWE distribution. On the other hand, due to the uniformity property of \(\mathsf {SampleWithTrapdoor}\) (provided in Lemma 2.4) it holds that \(\mathbf {{A}}' \approx _s \mathbf {{A}}^*\) for a uniformly random \(\mathbf {{A}}^*\leftarrow \mathbb {Z}_q^{2n \times m}\). Consequently,

$$\begin{aligned} \mathsf {Adv}_{LWE}(\mathcal {D})&= |\Pr [\mathcal {D}(\mathbf {{A}}) = 1] - \Pr [\mathcal {D}(\mathbf {{A}}^*) = 1]|\\&\ge |\Pr [\mathcal {D}(\mathbf {{A}}) = 1] - \Pr [\mathcal {D}(\mathbf {{A}}') = 1]| - |\Pr [\mathcal {D}(\mathbf {{A}}^*) = 1] - \Pr [\mathcal {D}(\mathbf {{A}}') = 1]|\\&\ge \epsilon - \mathrm{negl}, \end{aligned}$$

which contradicts the hardness of decisional matrix LWE.

We will now show that \(\mathsf {OT}\) is statistically sender-private.

Theorem 5.3

(Statistical Sender Security). Let \(q = 2p\) for an odd p. Given that \(\sigma _0 \cdot \sigma _1 \ge 4 \sqrt{m} \cdot q\), that \(\sigma _1 < \frac{q}{2 \sqrt{m}}\), and that both \(\mathsf {Ext}_0\) and \(\mathsf {Ext}_1\) are strong average-case \((n/2,\mathrm{negl})\)-extractors, the above scheme enjoys statistical sender security.

Proof

Fix a maliciously generated \(\mathsf {ot}_1\)-message \(\mathsf {ot}_1 = \mathbf {{A}}\). Let in the following \(\gamma := \sqrt{m} \cdot \frac{q}{\sigma _0}\). Consider the following two cases.

  1. \(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}})) > 2^{n/2 + 1}\) or \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) > n/2\).

  2. \(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}})) \le 2^{n/2 + 2}\) and \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) \le n/2\).

First notice that the two cases overlap slightly, but for any choice of \(\mathbf {{A}}\) at least one of them must hold.

The unbounded message extractor \(\mathsf {OTExt}\) takes input \(\mathbf {{A}}\) and decides whether item 1 or item 2 holds. If item 1 holds it outputs 0, otherwise 1. Note that \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B})\) can be computed exactly. On the other hand, it is sufficient to approximate \(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}}))\) to a certain precision to determine which case holds; the slack between the thresholds \(2^{n/2+1}\) and \(2^{n/2+2}\) makes such an approximation sufficient.

We will now show that in case 1 the sender-message \(\mu _1\) is statistically hidden, whereas in case 2 the sender-message \(\mu _0\) is statistically hidden.

Case 1. We will start with the (easier) first case. We will show that either statement implies \(\rho _{\sigma _1}(\varLambda _q(\mathbf {{A}})) \ge 2^{n/2 + 1}\). If it holds that \(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}})) > 2^{n/2 +1}\), we can directly conclude that

$$\begin{aligned} \rho _{\sigma _1}(\varLambda _q(\mathbf {{A}})) \ge \rho _{4 \sqrt{m} \cdot \frac{q}{\sigma _0}}(\varLambda _q(\mathbf {{A}})) \ge \rho _{\frac{q}{\sigma _0}}(\varLambda _q(\mathbf {{A}})) > 2^{n/2+1}. \end{aligned}$$

If the second statement \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) > n/2\) holds, Lemma 4.3 implies

$$\begin{aligned} \rho _{\sigma _1}(\varLambda _q(\mathbf {{A}})) \ge (\sigma _1 / \gamma )^{n/2+1} \ge 2^{n + 2} \ge 2^{n/2 +1}, \end{aligned}$$

as \(\sigma _1 \ge 4 \gamma \).

Now let \(\mathsf {c}_1 = (\mathbf {{y}},\mathsf {s}_1,\tau )\), where \(\mathbf {{y}} \leftarrow \mathbf {{t}} \cdot \mathbf {{A}}+\varvec{{\eta }}\). Note that we can switch to a hybrid in which the distribution of \(\varvec{{\eta }}\) is \(D_{\mathbb {Z}^m,\sigma _1}\) instead of the truncated version while only incurring a negligible statistical error.

As \(\rho _{\sigma _1}(\varLambda _q(\mathbf {{A}})) \ge 2^{n/2 + 1}\) and \(\sigma _1 < \frac{q}{2 \sqrt{m}}\), Lemma 3.2 implies that

$$\begin{aligned} \tilde{H}_\infty (\mathbf {{t}} | \mathbf {{y}}) \ge -\log ( 1/\rho _{\sigma _1}(\varLambda _q(\mathbf {{A}})) + 2^{-m}) \ge -\log ( 2^{-n/2 - 1} + 2^{-m}) \ge n/2 \end{aligned}$$

Thus, as \(\mathsf {Ext}_1\) is a strong \((n/2,\mathrm{negl})\)-extractor, we get that \(\mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}})\) is statistically close to uniform given \(\mathbf {{y}}\). Consequently, \(\tau = \mathsf {Ext}_1(\mathsf {s}_1,\mathbf {{t}}) \oplus \mu _1\) is statistically close to uniform given \(\mathsf {s}_1\) and \(\mathbf {{y}}\), which concludes the first case.

Case 2. We will now turn to the second case, i.e. it holds that \(\rho _{q /\sigma _0}(\varLambda _q(\mathbf {{A}})) \le 2^{n/2 + 2}\) and \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) \le n/2\). Theorem 2.2 yields that

$$\begin{aligned} \rho _{q/\sigma _0}(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B})&= \rho _{q/\sigma _0}\left( \varLambda _q(\mathbf {{A}}) \backslash \frac{\sqrt{m} \cdot q}{\sigma _0} \mathcal {B}\right) \\&\le 2^{-C \cdot m} \cdot \rho _{q/\sigma _0}(\varLambda _q(\mathbf {{A}}))\\&\le 2^{-C \cdot m}\cdot 2^{n/2 + 2} = 2^{n/2 + 2 - C \cdot m} \end{aligned}$$

where \(C > 0\) is a constant. This expression is negligible as \(m \ge n \cdot \log (q)\). Consequently, the precondition \(\rho _{q/\sigma _0}(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}) \le \mathrm{negl}\) of Corollary 4.2 is fulfilled.

Now let \(\mathbf {{D}} \in \mathbb {Z}_q^{k \times m}\) be a full-rank matrix with \(\varLambda _q^\bot (\mathbf {{D}}) = \{ \mathbf {{z}} \in \mathbb {Z}^m \ | \ \forall \mathbf {{v}} \in \varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}: \langle \mathbf {{z}},\mathbf {{v}} \rangle = 0 \ ( \ \mathsf {mod} \ q) \}\). Thus it holds that \(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}\subset \varLambda _q(\mathbf {{D}})\) and there is no matrix with fewer than k rows with this property. As \(\mathsf {rank}(\varLambda _q(\mathbf {{A}}) \cap \gamma \mathcal {B}) \le n/2\), it holds that \(k \le n/2\).

Decompose the matrix \(\mathbf {{A}}\) into \(\mathbf {{A}} = \left( {\begin{matrix} \mathbf {{A}}_1 \\ \mathbf {{A}}_2 \end{matrix}} \right) \) with \(\mathbf {{A}}_1 \in \mathbb {Z}_q^{n \times m}\) and \(\mathbf {{A}}_2 \in \mathbb {Z}_q^{n \times m}\). Let \(\mathsf {c}_0 = (\mathbf {{y}}_1,\mathbf {{y}}_2,\mathsf {s}_0,\tau )\), where \(\mathbf {{y}}_1 = \mathbf {{A}}_1 \mathbf {{x}}\) and \(\mathbf {{y}}_2 = \mathbf {{A}}_2 \mathbf {{x}} + \frac{q}{2} \mathbf {{r}}\) with \(\mathbf {{x}} \leftarrow D_{\mathbb {Z}^m,\sigma _0}\) and \(\mathbf {{r}} \leftarrow \{0,1\}^n\). As \(\rho _{q/\sigma _0}(\varLambda _q(\mathbf {{A}}) \backslash \gamma \mathcal {B}) \le \epsilon \), Corollary 4.2 implies that

$$\begin{aligned} (\mathbf {{y}}_1,\mathbf {{y}}_2) = (\mathbf {{A}}_1 \mathbf {{x}},\mathbf {{A}}_2 \mathbf {{x}} + \frac{q}{2} \mathbf {{r}}) \approx _\epsilon (\mathbf {{A}}_1 (\mathbf {{x}} + \mathbf {{u}}),\mathbf {{A}}_2 (\mathbf {{x}} + \mathbf {{u}}) + \frac{q}{2} \mathbf {{r}}) =: (\mathbf {{y}}'_1,\mathbf {{y}}'_2) \end{aligned}$$

where \(\mathbf {{u}} \leftarrow U(\varLambda _q^\bot (\mathbf {{D}}) \ \mathsf {mod} \ q)\). We can therefore switch to a hybrid experiment in which we replace \(\mathbf {{x}}\) with \(\mathbf {{x}} + \mathbf {{u}}\) while only incurring negligible statistical distance. We will now show that \(\tilde{H}_\infty (\mathbf {{r}} | \mathbf {{y}}'_1,\mathbf {{y}}'_2 ) \ge n/2\).

As \(q = 2p\) and p is odd, it holds by the Chinese remainder theorem that

$$\begin{aligned} \mathbf {{y}}'_1&\equiv (\mathbf {{A}}_1(\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ 2,\mathbf {{A}}_1(\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ p)\\ \mathbf {{y}}'_2&\equiv (\mathbf {{A}}_2(\mathbf {{x}} + \mathbf {{u}}) + \mathbf {{r}} \ \mathsf {mod} \ 2,\mathbf {{A}}_2(\mathbf {{x}} + \mathbf {{u}}) \ \mathsf {mod} \ p). \end{aligned}$$

Note that \(\mathbf {{u}} \ \mathsf {mod} \ 2\) and \(\mathbf {{u}} \ \mathsf {mod} \ p\) are independent. As the \( \ \mathsf {mod} \ p\) part does not depend on \(\mathbf {{r}}\), we only need to consider the \( \ \mathsf {mod} \ 2\) part. In the following, let a hat denote reduction modulo 2, e.g. \(\hat{\mathbf {{x}}} = \mathbf {{x}} \ \mathsf {mod} \ 2\). It holds that \(\hat{\mathbf {{u}}}\) is chosen uniformly from \(\mathsf {ker}(\hat{\mathbf {{D}}}) = \{ \mathbf {{w}} \in \mathbb {Z}_2^m \ | \ \hat{\mathbf {{D}}}\cdot \mathbf {{w}} = 0 \}\). The dimension of \(\mathsf {ker}(\hat{\mathbf {{D}}})\) is at least \(m - k \ge m - n/2\). Let \(\hat{\mathbf {{B}}}\) be a matrix whose columns form a basis of \(\mathsf {ker}(\hat{\mathbf {{D}}})\). As \(\hat{\mathbf {{A}}}\) has full rank and therefore \(\dim (\mathsf {ker}(\hat{\mathbf {{A}}})) = m - 2n\), it holds that \(\mathsf {rank}(\hat{\mathbf {{A}}} \cdot \hat{\mathbf {{B}}}) \ge \frac{3}{2} n\). Therefore \(\hat{\mathbf {{A}}} \cdot \hat{\mathbf {{u}}}\) is uniformly random over a subspace of dimension at least \(\frac{3}{2}n\). But this means that \((\hat{\mathbf {{y}}}_1',\hat{\mathbf {{y}}}_2') = (\hat{\mathbf {{A}}}_1 \hat{\mathbf {{x}}} + \hat{\mathbf {{A}}}_1 \hat{\mathbf {{u}}}, \hat{\mathbf {{A}}}_2 \hat{\mathbf {{x}}} + \hat{\mathbf {{A}}}_2 \hat{\mathbf {{u}}} + \mathbf {{r}})\) loses at least n/2 bits of information about \(\mathbf {{r}}\) (cf. Sect. 2.2). Consequently, it holds that \(\tilde{H}_\infty (\mathbf {{r}} | \mathbf {{y}}_1',\mathbf {{y}}_2') \ge n/2\). Therefore, as \(\mathsf {Ext}_0\) is a strong \((n/2,\mathrm{negl})\)-extractor, we get that \(\mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}})\) is statistically close to uniform given \(\mathbf {{y}}_1',\mathbf {{y}}_2'\). Finally, \(\tau = \mathsf {Ext}_0(\mathsf {s}_0,\mathbf {{r}}) \oplus \mu _0\) is statistically close to uniform given \(\mathsf {s}_0\) and \(\mathbf {{y}}_1',\mathbf {{y}}_2'\), which concludes the second case.
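The crux of the mod-2/mod-p split is that, since \(q/2 = p\) is odd, the mask \(\frac{q}{2}\mathbf{r}\) survives reduction modulo 2 but vanishes modulo p, while the CRT map \(v \mapsto (v \ \mathsf{mod} \ 2, v \ \mathsf{mod} \ p)\) is a bijection on \(\mathbb{Z}_q\). A minimal numeric check of these two facts (the value p = 257 is an arbitrary illustrative choice):

```python
p = 257                       # an arbitrary odd p; the protocol requires q = 2p
q = 2 * p

# CRT: v -> (v mod 2, v mod p) is a bijection on Z_q, since gcd(2, p) = 1.
images = {(v % 2, v % p) for v in range(q)}

# The mask (q/2)*r = p*r contributes r to the mod-2 component and vanishes mod p.
masks = [(q // 2) * r % q for r in (0, 1)]
masks_mod2 = [v % 2 for v in masks]   # should equal [0, 1]: the mod-2 part carries r
masks_modp = [v % p for v in masks]   # should equal [0, 0]: the mod-p part ignores r
```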

5.1 Setting the Parameters

We will now show that the parameters of the scheme can be chosen such that correctness, statistical sender privacy and computational receiver privacy hold.

  • By Lemma 5.1, \(\mathsf {OT}\) is correct if \(\sigma _0 \le \frac{q}{4 B \cdot m }\) and \(\sigma _1 \le \frac{q}{m \cdot \kappa (n)}\) (where \(\kappa (n) = \omega (\sqrt{\log (n)})\)).

  • By Theorem 5.3, \(\mathsf {OT}\) is statistically sender private if \(\sigma _0 \cdot \sigma _1 \ge 4 \sqrt{m} \cdot q\) and \(\sigma _1 < \frac{q}{2 \sqrt{m}}\).

These requirements can be met if

$$\begin{aligned} \frac{q^2}{4 \kappa (n) B m^2} \ge 4 \sqrt{m} \cdot q, \end{aligned}$$

which is equivalent to

$$\begin{aligned} q \ge 16 \kappa (n) \cdot B \cdot m^{2.5}. \end{aligned}$$
(1)

If \(\chi \) is a discrete Gaussian on \(\mathbb {Z}\) with parameter \(\alpha q\), i.e. \(\chi = D_{\mathbb {Z},\alpha q}\), then, given that \(\alpha q \ge \eta _\epsilon (\mathbb {Z}) = \omega (\sqrt{\log (n)})\), it holds that \(\chi \) is \(\alpha q\)-bounded, i.e. \(B \le \alpha q\) (with overwhelming probability). This means that

$$\begin{aligned} \alpha \le \frac{1}{16 \cdot \kappa (n) m^{2.5}} = \tilde{O}(n^{-2.5}) \end{aligned}$$

implies inequality (1). Thus, we get a worst-case approximation factor \(\tilde{O}(n / \alpha ) = \tilde{O}(n^{3.5})\) for SIVP (compared to \(\tilde{O}(n^{1.5})\) for primal Regev encryption). With this choice of \(\alpha \), we can choose \(q = \tilde{O}(n^3)\), \(\sigma _0 = \tilde{O}(n^{2.5})\) and \(\sigma _1= \tilde{O}(n)\).