1 Introduction

The Random Oracle model [6] is one of the most well-studied models in the cryptographic literature. In this model, everyone has access to a single random function. It is usually possible to show clean and simple constructions that are information-theoretically secure in this idealized model. Moreover, in many cases it allows one to prove unconditional lower bounds.

One major question is when (and under what assumptions) the Random Oracle can be replaced with a “real life” object. It is known that such a transformation is impossible in the general case, but the counterexamples are usually quite contrived [13, 17, 26]. This leaves open the possibility that for specific applications of a Random Oracle such a transformation does exist. One of the obstacles in answering this question is that it seems hard to formalize and list all the properties such a generic transformation should preserve. In practice, this difficulty is circumvented by replacing the Random Oracle with an ad-hoc “cryptographic hash function” (e.g., MD5, SHA-1, SHA-256), which results in protocols and constructions that have no provable security guarantees and often tend to be broken [39, 41, 42].

Motivated by the above, Canetti [15] initiated the systematic study of identifying useful properties of a Random Oracle and then realizing them in the standard model. In his work, he focused on one property called “point obfuscation” (or “oracle hashing”). This property ensures that when the Random Oracle is applied to an input, the output value is completely uncorrelated to the input, and at the same time, it is possible to verify whether a given output was generated from a given input. Canetti formally defined this notion and gave a construction of such a primitive in the standard model based on a variant of the decisional Diffie-Hellman assumption (DDH). Since then, other instantiations of this primitive have been suggested. Wee [43] gave a construction whose security is based on a strong notion of one-way permutations, Goldwasser et al. [27] gave a construction based on the Learning With Errors assumption, and more recently Bellare and Stepanovs [7] proposed a framework for constructing point obfuscators. The latter result gives a generic construction of point obfuscators based on either (1) indistinguishability obfuscation [3, 24] and any one-way function, (2) deterministic public-key encryption [4], or (3) UCEs [5].

While hiding the point is a natural and useful goal, there are many settings where this is not enough to replace a Random Oracle. Another natural property we wish to realize in “real life” is non-malleability: given the value of a Random Oracle on a random point x, it is infeasible to compute the value of the Random Oracle at any “related” point (e.g., the point \(x+1\)). The work of Canetti and Varia [19] identified this property and the goal of realizing it. Their work provided definitions (of non-malleable obfuscation for general circuits, not only for point functions) and constructions of non-malleable (multi-)point obfuscators in the Random Oracle model.

In this work, we focus on constructing non-malleable point obfuscators in the plain model. Observe that many of the known constructions of point obfuscators are malleable. For example, let us recall the construction of Canetti [15], which involves a group \(\mathcal G\) with a generator \(g\in \mathcal G\). For an input point x and randomness r (interpreted as a random group element), the obfuscation is:

$$\begin{aligned} O(x; r) = (r, r^x). \end{aligned}$$

Indeed, the obfuscation of \(x+1\) can be computed by multiplying \(r^x\) by r and outputting the pair \((r, r^{x+1})\). In other words, the obfuscation of a point is malleable. The point obfuscators of Wee [43] and of Goldwasser et al. [27] admit similar attacks (i.e., they are malleable).
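
This mauling attack can be checked concretely; the sketch below is our own toy illustration (the parameters are simplistic stand-ins, not a secure DDH group):

```python
# Toy demonstration of the mauling attack on Canetti's obfuscation
# O(x; r) = (r, r^x), here in the multiplicative group mod a prime p.
# Illustrative parameters only -- NOT a secure instantiation.
import secrets

p = 2**61 - 1                              # a Mersenne prime, for illustration
x = secrets.randbelow(p - 1)               # the hidden point
r = pow(5, secrets.randbelow(p - 1), p)    # randomness: a "random" group element
obf = (r, pow(r, x, p))                    # O(x; r)

# The attacker multiplies the second component by r, obtaining a valid
# obfuscation of x + 1 without ever learning x.
mauled = (obf[0], obf[1] * obf[0] % p)
assert mauled == (r, pow(r, x + 1, p))
```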

Thus, we ask whether we can remedy this situation and provide a construction of a secure point obfuscator in the plain model that is provably non-malleable under simple and concrete assumptions. We view this as a necessary and vital step towards understanding the possibility of realizing a Random Oracle in “real life”.

1.1 Our Results

We provide a construction of a secure point obfuscator that is non-malleable for a wide class of mauling functions. Our notion of non-malleability is parametrized by a distribution \(\mathcal X\) over the input domain X and by a class of possible mauling attacks \(\mathcal F = \{f:X\rightarrow X\}\). Roughly speaking, our notion guarantees that for every function \(f\in \mathcal F\), any polynomial-time adversary, when given the obfuscation of a point \(x\leftarrow \mathcal X\), cannot generate the obfuscation of the point f(x).

We give a construction of a (public-coin) point obfuscator that is non-malleable for any well-spread distribution \(\mathcal X\) (i.e., a distribution that has super-logarithmic min-entropy) and the class of mauling functions \(\mathcal F\) which can be described by univariate polynomials of bounded polynomial degree (in the security parameter). Our construction involves a group \(\mathcal G\) with a generator \(g\in \mathcal G\). For an input point x and randomness r (interpreted as a random group element) the obfuscation is:

$$\begin{aligned} O(x; r) = (r, r^{g^{h(x)}}), \end{aligned}$$

where \(h(x)=x^4+x^3+x^2+x\). We prove security and non-malleability of the above point obfuscator under variants of the DDH and power-DDH assumptions (see Sect. 2.2). We also present two ways to support more general mauling families \(\mathcal F\) by strengthening the underlying security assumption (while the construction remains the same). First, we show how to support a larger class of mauling functions by assuming (sub-)exponential security of the underlying assumption. Second, we show that our construction is secure against any mauling function f for which one cannot distinguish the triple \((g, g^x , g^{h(f(x))})\) from a triple \((g,g^{r_1},g^{r_2})\), where \(r_1,r_2\) are random exponents. We do not have a simple characterization of the functions f for which this assumption holds.

In terms of efficiency, our construction is quite efficient: it involves only two group exponentiations (Canetti’s construction requires a single exponentiation), does not rely on any setup assumptions, and does not rely on expensive machinery such as zero-knowledge proofs, which are usually employed to achieve non-malleability. Moreover, it satisfies the same privacy guarantees as Canetti’s obfuscator. As such, our point obfuscator can be used in any application where point obfuscators are used. These include encryption schemes [15], storing passwords [19, 40], reusable fuzzy extractors [16], round-efficient zero-knowledge proofs and arguments [10], and more.

Applications to Non-interactive Non-malleable Commitments. It is possible to view our obfuscator as a non-interactive non-malleable commitment that is secure when committing to strings that come from a distribution with super-logarithmic entropy. To commit to a string x, compute the obfuscation of x; this obfuscation is the commitment. The opening is x itself (and thus for security it has to have entropy). The resulting commitment scheme is computationally hiding by the security of the point obfuscator, and also non-malleable against a large class of mauling functions.
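
As a sanity check of the commit/open syntax only (not of non-malleability), the interface can be sketched with a point obfuscator in the role of the commitment. The sketch below is our own illustration: it plugs in the malleable Canetti obfuscator with toy, insecure parameters, and the names `commit`/`open_verify` are ours; the paper's non-malleable obfuscator would be used in its place.

```python
# Hedged sketch: a point obfuscation used as a non-interactive commitment.
# Toy parameters -- NOT secure; the malleable Canetti obfuscator is used
# here purely to illustrate the commit/open interface.
import secrets

p = 2**61 - 1

def commit(x):
    r = pow(5, secrets.randbelow(p - 1), p)
    return (r, pow(r, x, p))       # the commitment is the obfuscation itself

def open_verify(com, x):
    r, w = com
    return w == pow(r, x, p)       # accept iff com is an obfuscation of x

x = secrets.randbelow(p - 1)       # x must come from a high-entropy distribution
com = commit(x)
assert open_verify(com, x)
```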

Previously, constructions of non-interactive non-malleable commitments (in the plain model, without any setup assumptions) required an ad-hoc and non-standard primitive called “adaptive injective one-way functions” that has some form of non-malleability built in [34]. More recent works provide constructions that are secure against uniform adversaries [33] or ensure limited forms of non-malleability (“with respect to opening”) [31]. These constructions, however, allow committing to worst-case inputs and handle arbitrary mauling functions.

1.2 Related Work

Non-malleable Cryptography. Non-malleability was introduced as a measure to augment and strengthen cryptographic primitives (such as encryption schemes or commitment schemes) so that they not only guarantee privacy, but also make it hard to manipulate a given ciphertext (or commitment) of one value into a ciphertext of another.

Non-malleability was first defined in the seminal work of Dolev, Dwork, and Naor [22], where they presented a non-malleable public-key encryption scheme, a non-malleable string commitment scheme, and a non-malleable zero-knowledge protocol. Since then, there has been a long line of works on non-malleability. See [12, 29,30,31,32,33, 35,36,37] to name just a few.

A particular type of non-malleable protocols (or primitives) that may a-priori be related to non-malleable point obfuscators are non-interactive commitments and encryption schemes. These were the focus of multiple works (see, for example, [21,22,23, 38] and some of the references given above). However, these notions do not imply point obfuscators as they do not support public verification on a given input (without revealing the randomness which completely breaks security).

In the context of obfuscation, the only work we are aware of is that of Canetti and Varia [19], who gave several incomparable definitions for non-malleable obfuscation. They also gave a construction of a (multi-bit) non-malleable point obfuscator (under each definition); however, their construction is in the Random Oracle model.

Obfuscation with High Min-Entropy. Canetti, Micciancio and Reingold [18] gave a construction of a point obfuscator that satisfies a relaxed notion of security where the input is guaranteed to come from a source with high min-entropy. Their construction can be based on any collision-resistant hash function. There is a significant (qualitative) difference between this notion and the original notion of Canetti [15] that we consider in this work. We refer to Wee [43, Section 1.3] for an elaborate discussion.

Boldyreva et al. [11] showed how to make the point obfuscator of [18] non-malleable using non-interactive zero-knowledge proofs (assuming a common reference string). Following the work of Boldyreva et al., Baecher et al. [2] presented a game-based definition of non-malleability which is very similar to ours (see also [20]). However, they did not provide new constructions in the plain model.

1.3 Our Techniques

Our starting point is the point function construction of Canetti [15], which is based on a variant of the DDH assumption (and no random oracles). Recall that the DDH assumption involves a group ensemble \(\mathcal G= \{\mathbb G_\lambda \}_{\lambda \in \mathbb N}\) with a generator g, and it asserts that \((g^x,g^y,g^{xy})\) is computationally indistinguishable from a sequence of random group elements, where x and y are chosen uniformly at random. Canetti’s variant is that the foregoing indistinguishability holds even if x has high enough min-entropy (yet y is completely random). For an input point x and using randomness r, viewed as a random group element of \(\mathbb G_{\lambda }\), Canetti’s construction is:

$$\begin{aligned} O(x; r) = (r, r^x). \end{aligned}$$

As we mentioned, it is easy to modify \(r^x\) to get \(r^{x+1}\), giving an obfuscation of the point \(x+1\). Let us first focus on the goal of modifying the construction such that it is non-malleable against this function: \(f(x)=x+1\). Towards this end, we change the construction to be:

$$\begin{aligned} O(x; r) = (r, r^{x^2}). \end{aligned}$$

The claim is that under a suitable variant of the power-DDH assumption this is a non-malleable point obfuscator against the function f. Roughly speaking, we assume that \((g^x,g^{x^2},g^{x^3},\ldots )\) is indistinguishable from a sequence of random group elements, where x comes from a distribution with high enough min-entropy. Assume first that the adversary outputs a point obfuscation of \(x+1\) under the same randomness r as she received. That is, on input \((r,w)\), the output is \((r,w')\) for an element \(w' \in \mathbb G\). Later, we show how to handle adversaries that output an obfuscation of \(x+1\) under new randomness.

The point obfuscation of \(x+1\) under this construction (with the same randomness r) is \((r,w)\), where \(w=r^{x^2+2x+1}\). Suppose that there is an adversary \(\mathcal {A}\) that, given \(r,r^{x^2}\), can compute w; we show how to break the security of our assumption. We are given a challenge \((g,g^{z_1},g^{z_2})\), where either \(z_i=x^i\) or \(z_i=r_i\) and each \(r_i\) is chosen at random. Then, we run the adversary on the input \(g^s,g^{sz_2}\), for a random s, to get w. We compute \(w'=g^{s(z_2+2z_1+1)}\) and compare it to w. If \(w=w'\) we output 1, and otherwise we output a random bit. In the case that \(z_i=x^i\), the adversary gets \(g^s,g^{sx^2}\), which is exactly the distribution of a point obfuscation of x, and thus she will output \(w=g^{s(x^2+2x+1)}=w'\) with some non-negligible probability. Otherwise, the adversary gets \(g^{sr_2}\) for a random \(r_2\), and the probability that she outputs \(w'=g^{s(r_2+2r_1+1)}\) is negligible, as she has no information regarding \(r_1\) (this is true even for unbounded adversaries). Overall, we have a non-negligible advantage in distinguishing the two cases.

While the above construction is non-malleable against the function \(f(x)=x+1\), it is malleable for the function \(f(x)=2x\). Indeed, given \(r^{x^2}\) one can simply compute \((r^{x^2})^4 = r^{4x^2} = r^{(2x)^2}\) which is a valid obfuscation of the point 2x. Our second observation is that we can modify the construction to resist this attack by defining:

$$\begin{aligned} \mathcal {O}(x;r)=(r,r^{x^2+x}). \end{aligned}$$

The proof of non-malleability is similar to the proof above; we run the adversary \(\mathcal {A}\) on \(g^{s},g^{s(z_2+z_1)}\) to get w, and compute \(w'=g^{s(4z_2+2z_1)}\). If \(z_i=x^i\), then the adversary sees exactly the distribution of a point obfuscation of x and thus will output \(w=g^{s(4z_2+2z_1)}=w'\) with some non-negligible probability. Otherwise, the adversary gets \(g^{s(r_2+r_1)}\) for random \(r_i\)’s. We bound the probability that \(\mathcal {A}\) outputs \(w'=g^{s(4r_2+2r_1)}\). This is again an information-theoretic argument, where we assume that the adversary gets \(r_2+r_1\) and needs to compute \(4r_2+2r_1\). The argument follows since the adversary only has information regarding the sum \(r_2+r_1\), which leaves the random variable corresponding to \(4r_2+2r_1\) with high min-entropy (given the adversary’s view), and thus the probability of outputting \(w=w'\) is negligible.

One important thing to notice is that the proof relied on the fact that the adversary only had the sum \(r_1+r_2\), which is a linear combination of \((r_1,r_2)\) with coefficients (1, 1), while her goal was to output a different combination with coefficients (4, 2), which are linearly independent of (1, 1). That is, the key observation is that for \(h(x)=x^2+x\), the polynomial h(f(x)) for \(f(x)=2x\) has (non-free) coefficients that are not all the same. Generalizing this argument, we can show that the construction is non-malleable against any linear function \(f(x)=ax+b\), for constants a and b, such that the function h(f(x)), written as a polynomial in x, has at least 2 different (non-free) coefficients. For non-linear functions, a similar proof works, but the running time of the security reduction (that is, the loss in the security of our scheme) will be proportional to the degree of f(x).

Given the above observation, we can easily check whether our construction is non-malleable for a function f by computing the polynomial h(f(x)). It turns out that the above construction is actually malleable for a simple function such as \(f(x)=3x+1\). Indeed, \(h(f(x))=(3x+1)^2+(3x+1)=9x^2+9x+2\) has two equal non-free coefficients. In order to eliminate more functions f, we need to add more constraints to the set of equations, which translates to taking a higher-degree polynomial h(x). That is, we define \(h(x)=x^3+x^2+x\), and construct the obfuscator:

$$\begin{aligned} \mathcal {O}(x;r)=(r,r^{x^3+x^2+x}). \end{aligned}$$
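
The coefficient test described above can be automated; the pure-Python sketch below is our own illustration (polynomials are represented as coefficient lists, constant term first), rechecking the \(f(x)=3x+1\) example:

```python
# Compute h(f(x)) over the integers and inspect its non-free coefficients.
def poly_add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [u + v for u, v in zip(a, b)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def compose(h, f):
    """h(f(x)) via Horner's rule on the coefficients of h."""
    out = [0]
    for c in reversed(h):
        out = poly_add(poly_mul(out, f), [c])
        while len(out) > 1 and out[-1] == 0:   # drop trailing zeros
            out.pop()
    return out

h = [0, 1, 1]                  # h(x) = x^2 + x
f = [1, 3]                     # f(x) = 3x + 1
hf = compose(h, f)             # h(f(x)) = 9x^2 + 9x + 2
assert hf == [2, 9, 9]
# Both non-free coefficients equal 9, so the degree-2 construction is
# malleable for f(x) = 3x + 1, as claimed above.
```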

For a function f to be malleable under this construction, it must hold that the polynomial h(f(x)) has all three non-free coefficients equal. However, there is still a single function that satisfies this condition (the function \(f (x) = -x-2\cdot 3^{-1}\), where \(3^{-1}\) is the inverse of 3 in the relevant group). As a final step, we increase the degree of the construction by one, and this eliminates all possible functions f. Thus, we define the construction:

$$\begin{aligned} \mathcal {O}(x;r)=(r,r^{x^4+x^3+x^2+x}). \end{aligned}$$
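
For completeness, the exceptional function for the degree-3 variant can be verified by direct expansion (our own calculation, writing \(c = 2\cdot 3^{-1}\)):

```latex
% h(x) = x^3 + x^2 + x evaluated at f(x) = -x - c, where c = 2*3^{-1}:
\begin{aligned}
h(f(x)) &= (-x-c)^3 + (-x-c)^2 + (-x-c)\\
        &= -x^3 + (1-3c)\,x^2 + (2c-3c^2-1)\,x + (c^2-c^3-c).
\end{aligned}
% With c = 2*3^{-1}: 1-3c = -1 and 2c-3c^2-1 = 4/3 - 4/3 - 1 = -1, so all
% non-free coefficients equal -1, and an adversary can maul r^{h(x)} into
% r^{h(f(x))} = (r^{h(x)})^{-1} \cdot r^{c^2-c^3-c}.
```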

The Double Exponentiation. In our exposition above, we assumed that the adversary “uses the same randomness she received”. That is, on input \((r,w)\) she mauls the point and outputs \((r,w')\). Suppose now that the adversary is allowed to output \((r',w')\), where \(r'\) might be arbitrary. Recall that the issue is that we cannot recompute the expected value of \(w'\) from the challenge under the randomness \(r'\) to check consistency (since we do not know the discrete log of \(r'\)). Let us elaborate on this in the simple case where the obfuscation is \((r,r^x)\) (and not the degree-4 polynomial in the exponent; this is just for simplicity). When the malleability adversary gets \((r,r^x)\) and returns \((r,w')\), it is easy to check that \(w'=r^{f(x)}\) by recomputing this value, since we know the discrete log of r. However, when she returns \((r',w')\), it is hard to recompute \(r'^{f(x)}\), since we do not know the discrete log of \(r'\) (and only get the value x in the exponent from the challenge).

In other words, we need to be able (in the security proof) to compute the obfuscation of some input that depends on the exponents from the challenge under randomness that comes from the adversary’s mauled obfuscation. If we knew either the discrete log of the challenge or the discrete log of the randomness used by the adversary we would be done.

In the description above we actually used this property. Since we assumed that the adversary outputs the same randomness r (which we chose and whose discrete log we know), we could use \(r=g^s\) to compute the obfuscation of the challenge we received. However, if the adversary outputs randomness \(r'\), then not only do we no longer know the discrete log of \(r'\) (and it is hard to compute), but we also do not have the discrete log of the challenge.

Thus, we need to modify our construction so that we can compute the obfuscation of x given only \(g^x\) and the public coins r explicitly (without being given their discrete log). Towards this end, we introduce a new technique that we call “double exponentiation”. Consider any mapping of the group elements \(\mathbb G_\lambda \rightarrow \mathbb Z_q^*\), where q is the order of \(\mathbb G_\lambda \) (e.g., their binary representation as strings). Then, we define the final version of our construction:

$$\begin{aligned} \mathcal {O}(x;r)=(r,r^{g^{x^4+x^3+x^2+x}}). \end{aligned}$$

One can observe that it is possible to compute the obfuscation of x given only \(g^{x^4+x^3+x^2+x}\) and r, via a single exponentiation. In addition, the construction is still efficient: it consists of just two group elements and involves only two exponentiations.
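
A toy sketch of the final construction (our own illustration; the parameters and the mapping from group elements to outer exponents are simplistic stand-ins, not a secure instantiation, and the names `obfuscate`/`evaluate` are ours):

```python
# Toy sketch of O(x; r) = (r, r^{g^{h(x)}}) with h(x) = x^4 + x^3 + x^2 + x.
# Illustrative parameters only -- NOT secure.
import secrets

p = 2**61 - 1          # stands in for a prime-order DDH-hard group
g = 5

def h(x):
    return (x**4 + x**3 + x**2 + x) % (p - 1)

def obfuscate(x, r):
    # g^{h(x)} is a group element; its integer representation is reused as
    # the outer exponent ("double exponentiation"). Note that the obfuscation
    # can be computed from g^{h(x)} and r alone, via a single exponentiation.
    return (r, pow(r, pow(g, h(x), p), p))

def evaluate(obf, z):
    r, w = obf
    return w == pow(r, pow(g, h(z), p), p)

x = secrets.randbelow(p - 1)
r = pow(g, secrets.randbelow(p - 1), p)
C = obfuscate(x, r)
assert evaluate(C, x) and not evaluate(C, x + 1)
```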

A final remark about security. Proving that the resulting construction is still a point obfuscator is not immediate a-priori. Our proof works by a reduction to the security of Canetti’s construction via an intermediate notion of security called virtual gray-box obfuscation [8]. We refer to Sect. 4 for more details.

2 Preliminaries

For a distribution X we denote by \(x \leftarrow X\) the process of sampling a value x from the distribution X. Similarly, for a set \(\mathcal {X}\) we denote by \(x \leftarrow \mathcal {X}\) the process of sampling a value x from the uniform distribution over \(\mathcal {X}\). For a randomized function f and an input \(x\in \mathcal {X}\), we denote by \(y\leftarrow f(x)\) the process of sampling a value y from the distribution f(x). For an integer \(n \in \mathbb {N}\) we denote by [n] the set \(\{1,\ldots , n\}\).

Throughout the paper, we denote by \(\lambda \) the security parameter. A function \(\mathsf{negl}:\mathbb N\rightarrow \mathbb R^+\) is negligible if for every constant \(c > 0\) there exists an integer \(N_c\) such that \(\mathsf{negl}(\lambda ) < \lambda ^{-c}\) for all \(\lambda > N_c\). Two sequences of random variables \(X = \{ X_\lambda \}_{\lambda \in \mathbb N}\) and \(Y = \{Y_\lambda \}_{\lambda \in \mathbb N}\) are computationally indistinguishable if for any probabilistic polynomial-time algorithm \(\mathcal {A}\) there exists a negligible function \(\mathsf{negl}(\cdot )\) such that \(\left| \Pr [\mathcal {A}(1^{\lambda }, X_\lambda ) = 1] - \Pr [\mathcal {A}(1^{\lambda },Y_\lambda ) = 1] \right| \le \mathsf{negl}(\lambda )\) for all sufficiently large \(\lambda \in \mathbb N\).

2.1 Point Obfuscation

For an input \(x\in \{ 0,1 \}^n\), the point function \(I_x:\{ 0,1 \}^n \rightarrow \{ 0,1 \}\) outputs 1 on input x and 0 everywhere else. A point obfuscator is a compiler that gets a point x as input and outputs a circuit that has the same functionality as \(I_x\) but where x is (supposedly) computationally hidden. Let us recall the definition of security of Canetti [15] (called there oracle simulation).

Definition 1

(Functional Equivalence). We say that two circuits C and \(C'\) are functionally equivalent and denote it by \(C \equiv C'\) if they compute the same function (i.e., \(\forall x: C(x)=C'(x)\)).

Definition 2

(Point Obfuscation). A point obfuscator \(\mathcal {O}\) for a domain \(X = \{X_\lambda \}_{\lambda \in \mathbb N}\) of inputs is a probabilistic polynomial-time algorithm that gets as input a point \(x\in X_\lambda \), and outputs a circuit C such that

  1.

    Completeness: For all \(\lambda \in \mathbb N\) and all \(x\in X_\lambda \), it holds that

    $$\begin{aligned} \Pr [ \mathcal {O}(x) \equiv I_x ] = 1, \end{aligned}$$

    where the probabilities are over the internal randomness of \(\mathcal {O}\).

  2.

    Soundness: For every probabilistic polynomial-time algorithm \(\mathcal {A}\), and any polynomial function \(p(\cdot )\), there exists a probabilistic polynomial-time simulator \(\mathcal {S}\), such that for every \(x\in X_\lambda \), any predicate \(P:X_\lambda \rightarrow \{0,1\}\), and all large enough \(\lambda \in \mathbb N\),

    $$\begin{aligned} \left| \Pr [ \mathcal {A}(\mathcal {O}(x)) = P(x)] - \Pr [\mathcal {S}^{I_x}(1^\lambda ) = P(x) ]\right| \le \frac{1}{p(\lambda )}, \end{aligned}$$

    where the probabilities are over the internal randomness of \(\mathcal {A}\) and \(\mathcal {O}\), and of \(\mathcal {S}\), respectively.

The obfuscator is called public coin if it publishes its internal coin tosses as part of its output.

Indistinguishability-Based Security. Another way to formalize the security of a point obfuscator is via an indistinguishability-based security definition (rather than simulation-based). Canetti [15] suggested such a definition (termed distributional indistinguishability there): the input comes from a distribution \(\mathcal X_\lambda \) over the input space \(X_\lambda \) and the guarantee is that for any adversary \(\mathcal {A}\) that outputs a single bit, the following two distributions are computationally indistinguishable:

$$\begin{aligned} (x, \mathcal {A}(\mathcal {O}(x; r))) \approx _c (x, \mathcal {A}(\mathcal {O}(y; r))), \end{aligned}$$
(1)

where r is the randomness (chosen uniformly) for the point obfuscator and x and y are chosen independently from \(\mathcal X_\lambda \).

One of Canetti’s results [15, Theorem 4] was that the indistinguishability-based definition of Eq. 1 is equivalent to the simulation-based definition (Definition 2) if the indistinguishability-based security holds with respect to all distributions that have super-logarithmic min-entropy (over the message space). Such a distribution is called a well-spread distribution:

Definition 3

(Well-Spread Distribution). An ensemble of distributions \(\mathcal {X}= \{\mathcal {X}_\lambda \}_{\lambda \in \mathbb N}\), where \(\mathcal {X}_\lambda \) is over \(\{ 0,1 \}^\lambda \), is well-spread if

  1.

    it is efficiently and uniformly samplable – there is a probabilistic polynomial-time algorithm that given \(1^\lambda \) as input, outputs a sample according to \(\mathcal {X}_\lambda \).

  2.

    for all large enough \(\lambda \in \mathbb N\), it has super-logarithmic min-entropy. Namely,

    $$\begin{aligned} {\mathsf {H}_\infty }(\mathcal {X}_\lambda ) = \min _{x\in \{ 0,1 \}^\lambda } \left( -\log _2 {\Pr [\mathcal {X}_\lambda = x]}\right) \ge \omega (\log \lambda ). \end{aligned}$$

Canetti’s Construction. In [15], Canetti provided a construction that satisfies Definition 2. In his construction, the domain of inputs \(X_\lambda \) is \(\mathbb Z_p\) for a prime \(p \approx 2^\lambda \). Let \(\mathcal G= \{\mathbb G_\lambda \}_{\lambda \in \mathbb N}\) be a group ensemble with uniform and efficient representation and operations, where each \(\mathbb G_\lambda \) is a group of prime order \(p \in (2^{\lambda } , 2^{\lambda +1} )\). The public coin point obfuscator \(\mathcal {O}\) for points in the domain \(\mathbb Z_{p}\) is defined as follows: \(\mathcal {O}(I_x)\) samples a random generator \(r\leftarrow \mathbb G_\lambda ^*\) and outputs the pair \((r, r^x)\). Evaluation of the obfuscation \((r, r^x)\) at a point z is done by checking whether \(r^z\) equals the second element \(r^x\).

Canetti proved that this construction satisfies Eq. 1 for any well-spread distribution under the strong variant of the DDH assumption, which we review below (see Assumption 3). Consequently, under the same assumption, his construction satisfies Definition 2 as well.

2.2 Hardness Assumptions

The DDH and Power-DDH Assumptions. The DDH assumption says that in a suitable group, the triple of elements \((g^x, g^y, g^{xy})\) is pseudorandom for random x and y. The power-DDH assumption says that the power sequence \((g, g^x , g^{x^2},\dots , g^{x^t})\) is pseudorandom, for a random x and a polynomially bounded t. While the power-DDH assumption is less common in the literature, there are many works that explicitly rely on it (see, for example, [1, 14, 25, 28]). To the best of our knowledge, the power-DDH assumption is incomparable to the DDH assumption.

Throughout this section, let \(\mathcal G= \{\mathbb G_\lambda \}_{\lambda \in \mathbb N}\) be a group ensemble with uniform and efficient representation and operations, where each \(\mathbb G_\lambda \) is a group of prime order \(p \in (2^{\lambda -1} , 2^\lambda )\).

Assumption 1

(DDH). The DDH assumption asserts that for the group \(\mathbb G_\lambda \) with associated generator g, the ensembles \((g^x, g^{y}, g^{xy})\) and \((g^{x}, g^{y}, g^z)\) are computationally indistinguishable, where \(x,y,z\leftarrow \mathbb Z_p^*\).

Assumption 2

(Power-DDH). The power-DDH assumption asserts that for the group \(\mathbb G_\lambda \) with associated generator g, for every polynomially bounded function \(t(\cdot )\), the ensembles \((g, g^x, g^{x^2}, \dots , g^{x^{t}})\) and \((g, g^{r_1}, g^{r_2}, \dots ,g^{r_{t}})\) are computationally indistinguishable, where \(x,r_1,\dots ,r_{t}\leftarrow \mathbb Z_p^*\).

We need an even stronger variant of both assumptions. The strong variant that we need, first proposed by Canetti [15], roughly says that DDH is hard not only if x, y, and z are chosen uniformly at random, but even if x is chosen from a distribution with enough min-entropy (i.e., a well-spread distribution; see Definition 3). Analogously, we define a strong variant of the power-DDH assumption where x is chosen from such a distribution rather than from the uniform one.

Assumption 3

(Strong DDH and power-DDH). The strong variants of the DDH and power-DDH assumptions assert that the two distributions are computationally indistinguishable even when x is chosen from a well-spread distribution \(\mathcal X_\lambda \) (rather than uniformly from \(\mathbb Z_p^*\)).

3 Non-malleable Point Obfuscation

We define non-malleability of point function obfuscators. Such obfuscators not only hide the obfuscated point, but they also (informally) ensure that an obfuscation of a point x cannot be transformed into an obfuscation of a related (yet different) point.

There are several ways to formalize this notion of security. We focus on a notion of security where the objective of the adversary, given an obfuscation of x, is to come up with a circuit (of prescribed structure) that is a point function on a related point (a similar definition is given in [2]). We discuss the relation to the notions of Canetti and Varia [19] below.

Definition 4

(Verifier). A PPT algorithm \(\mathcal V\) for a point obfuscator \(\mathcal {O}\) for the ensemble of domains \(\{X_\lambda \}_{\lambda \in \mathbb N}\) is called a verifier if for all \(\lambda \in \mathbb N\) and all \(x\in X_\lambda \), it holds that \(\Pr [\mathcal V (\mathcal {O}(x)) = 1] = 1\), where the probability is taken over the randomness of \(\mathcal V\) and \(\mathcal {O}\).

Notice that there is no guarantee as to what \(\mathcal V\) is supposed to output when its input is not a valid obfuscation. In particular, a verifier that always outputs 1 is a legal verifier. In many cases, including the obfuscator of Canetti [15] and our own, one can define a meaningful verifier.

Definition 5

(Non-malleable Point Function). Let \(\mathcal {O}\) be a point obfuscator for an ensemble of domains \(\{X_\lambda \}_{\lambda \in \mathbb N}\) with an associated verifier \(\mathcal V\). Let \(\{\mathcal F_\lambda \}_{\lambda \in \mathbb N} = \{f :X_\lambda \rightarrow X_\lambda \}_{\lambda \in \mathbb N}\) be an ensemble of families of functions, and let \(\{\mathcal X_\lambda \}_{\lambda \in \mathbb N}\) be an ensemble of distributions over X.

The point obfuscator \(\mathcal {O}\) is a non-malleable obfuscator for \(\mathcal F\) and \(\mathcal X\) if for any polynomial-time adversary \(\mathcal {A}\), there exists a negligible function \(\mathsf{negl}(\cdot )\), such that for any \(\lambda \in \mathbb N\) it holds that:

$$\begin{aligned}&\Pr \left[ \mathcal V(C)=1 \ \wedge \ f \in \mathcal {F}_\lambda \ \wedge \ I_{f(x)} \equiv C \ \Bigg | \ \begin{array}{l} x \leftarrow \mathcal X_\lambda \\ (C,f) \leftarrow \mathcal {A}(\mathcal {O}(x)) \end{array} \right] \le \mathsf{negl}(\lambda ). \end{aligned}$$

That is, the adversary \(\mathcal {A}\), given an obfuscation of a point x sampled from \(\mathcal X_\lambda \), cannot output a function \(f \in \mathcal {F}_\lambda \) and a valid-looking obfuscation of the point f(x), except with negligible probability.

The verifier \(\mathcal V\). We require that the attacker output an obfuscation with a prescribed structure so that it passes the verifier \(\mathcal V\). Without such a requirement, there is a trivial attack: given a circuit \(\widehat{C}_w\) obfuscating a point w, create a new circuit that, on input y, computes \(f^{-1}(y)\) and then applies \(\widehat{C}_w\) to this value. The result is a circuit that accepts exactly the point f(w).
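
This trivial wrapping attack is easy to see in code; the sketch below is our own toy illustration, using the hypothetical invertible mauling function \(f(x)=x+1\):

```python
# Toy illustration of the attack that the verifier rules out: wrap the given
# point-function circuit C for a point w into a circuit accepting f(w).
def wrap(C, f_inverse):
    # New circuit: on input y, evaluate the original circuit at f^{-1}(y).
    return lambda y: C(f_inverse(y))

w = 42                               # the hidden point of the given circuit
C = lambda y: int(y == w)            # stand-in for the obfuscated circuit
C2 = wrap(C, lambda y: y - 1)        # accepts exactly f(w) = w + 1
assert C2(43) == 1 and C2(42) == 0
```

Since the wrapped circuit does not have the prescribed two-group-element structure, the verifier \(\mathcal V\) rejects it.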

In general, it might be hard to come up with a verifier \(\mathcal V\) that tests whether a given circuit is legal, but here we are interested in the case where this can be done efficiently. In our case, it will be very easy to define \(\mathcal V\), since a “valid-looking” obfuscation will consist of any pair of group elements (in some given group).

Adaptivity of f. We stress that our definition is adaptive with respect to the family \(\mathcal {F}_\lambda \). That is, the adversary first gets to see the obfuscation \(\mathcal {O}(x)\) of the point x and only then may choose the function to which it wishes to maul. This definition is stronger than a static version in which the function f is fixed and known in advance (before the adversary sees the challenge).

3.1 Relation to Canetti-Varia

The work of Canetti and Varia [19] presented a systematic study of non-malleable obfuscation both specifically for point functions and also for general functionalities. They gave two definitions for non-malleability, called functional non-malleability and verifiable non-malleability.

The verifiable non-malleability definition is more closely related to ours, since they also require a verifier \(\mathcal V\) that gets an alleged obfuscated circuit and checks whether it is a legitimate output of the obfuscator. Recall that the obfuscator of Canetti (as well as ours) has this property: an obfuscation can be verified by simply checking whether it consists of two group elements in the desired group.

The verifiable non-malleability notion of Canetti and Varia asserts that, roughly, for whatever mauling attack one can apply on an obfuscation, there exists a simulator that has only oracle access to the input circuit and outputs a “similarly mauled” obfuscation. To prevent trivial attacks (that treat the input circuit as a black box), they allow the simulator to output a circuit that has oracle gates to its own oracle (namely, to the input circuit). The verifiability ensures that the output of the adversary (and of the simulator) has a “legal” structure. The precise definition is subtle, and it captures a wide range of mauling attacks in a meaningful way. We refer to [19] for their elaborate discussion on the matter. We provide their formal definition, restricted to point functions, next.

Definition 6

(Verifiable Non-malleable Point Obfuscation [19]). Let \(\mathcal {O}\) be a point obfuscator for a domain \(X = \{X_\lambda \}_{\lambda \in \mathbb N}\) with an associated verifier \(\mathcal V\). For every PPT adversary \(\mathcal {A}\) and every polynomial \(p(\cdot )\), there exists a PPT simulator \(\mathcal {S}\) such that for all sufficiently large \(\lambda \in \mathbb N\), for any input \(x\in X_\lambda \) and any polynomial-time computable relation \(E:X_\lambda \times X_\lambda \rightarrow \{ 0,1 \}\) (that may depend on x), it holds that

$$\begin{aligned}&\Pr \left[ C\ne \mathcal {O}(x) \wedge \mathcal V(C) = 1 \wedge \left( \exists y \in X_\lambda :I_y \equiv C \wedge E(x,y) = 1\right) \mid C \leftarrow \mathcal {A}(\mathcal {O}(x)) \right] - \\&\Pr \left[ \mathcal V(C) = 1 \wedge \left( \exists y\in X_\lambda :I_y \equiv C^{I_x} \wedge E(x,y) = 1 \right) \mid C \leftarrow \mathcal {S}^{I_x}(1^\lambda ) \right] \le \frac{1}{p(\lambda )}. \end{aligned}$$

We observe that our definition is related to the above definition, albeit with the following modifications. First, the input for our obfuscator is sampled from a well-spread distribution, rather than being worst-case. Second, the non-malleability in our definition is parametrized with a family of functions, whereas the above definition requires non-malleability for all possible relations. The modified definition is given next.

Definition 7

(). Let \(\mathcal {O}\) be a point obfuscator for a domain \(X = \{X_\lambda \}_{\lambda \in \mathbb N}\) with an associated verifier \(\mathcal V\). Let \(\{\mathcal F_\lambda \}_{\lambda \in \mathbb N} = \{f :X_\lambda \rightarrow X_\lambda \}_{\lambda \in \mathbb N}\) be an ensemble of families of functions, and let \(\{\mathcal X_\lambda \}_{\lambda \in \mathbb N}\) be an ensemble of distributions over X. For every PPT adversary \(\mathcal {A}\) and every polynomial \(p(\cdot )\), there exists a PPT simulator \(\mathcal {S}\) such that for all sufficiently large \(\lambda \in \mathbb N\), for any function \(f\in \mathcal F_\lambda \), it holds that

$$\begin{aligned}&\Pr \left[ C\ne \mathcal {O}(x) \wedge \mathcal V(C) = 1 \wedge I_{f(x)} \equiv C \ \Bigg | \ \begin{array}{lr} x \leftarrow \mathcal X_\lambda \\ C \leftarrow \mathcal {A}(\mathcal {O}(x)) \end{array} \right] - \\&\Pr \left[ \mathcal V(C) = 1 \wedge I_{f(x)} \equiv C^{I_x} \ \Bigg | \ \begin{array}{lr} x \leftarrow \mathcal X_\lambda \\ C \leftarrow \mathcal {S}^{I_x}(1^\lambda ) \end{array} \right] \le \frac{1}{p(\lambda )}. \end{aligned}$$

Definition 7 is a special case of Definition 6 since it has restrictions on the input to the obfuscator and the set of relations it supports. In the next claim, we show that our notion of non-malleability from Definition 5 implies the notion from Definition 7.

Claim

A point obfuscator satisfying Definition 5 with respect to an ensemble of families of functions \(\mathcal F\) and an ensemble of distributions \(\mathcal X\) also satisfies Definition 7 with respect to \(\mathcal F\) and \(\mathcal X\).

Proof

Let \(\mathcal {O}\) be an obfuscator that satisfies Definition 5 with respect to the functions in \(\mathcal F\) and the distribution \(\mathcal X\). Thus, for any \(f\in \mathcal F\), there is no PPT adversary that can generate a valid-looking circuit C such that \( I_{f(x)} \equiv C\) for \(x\leftarrow \mathcal X\), except with negligible probability. Namely,

$$\begin{aligned}&\Pr \left[ \mathcal V(C)=1 \text {, } f \in \mathcal {F}_\lambda , \,\, and \,\, I_{f(x)} \equiv C \ \Bigg | \ \begin{array}{lr} x \leftarrow \mathcal X_\lambda \\ (C,f) \leftarrow \mathcal {A}(\mathcal {O}(x)) \end{array} \right] \le \mathsf{negl}(\lambda ). \end{aligned}$$

Hence, a simulator that does nothing (say, outputs \(\bot \)) satisfies the security requirement of Definition 7.

A Discussion. Our definition is thus, morally, equivalent to the strong definition of [19], albeit with the assumption that the input comes from a well-spread distribution and the mauling is restricted to functions rather than relations. Getting a construction in the plain model that resolves these two issues is left as an open problem.

Lastly, observe that in the above proof, the simulator is in fact independent of the adversary \(\mathcal {A}\) and independent of the distinguishability gap (the polynomial \(p(\cdot )\)). Thus, we actually get one simulator for all adversaries and the computational distance between the output of the adversary and the output of the simulator is negligible.

4 Our Obfuscator

Let \(\lambda \in \mathbb N\) be the security parameter and let \(X_\lambda = \mathbb Z_{2^\lambda }\) be the domain. Let \(\mathcal F_{\mathsf {poly}} = \{f :X_\lambda \rightarrow X_\lambda \}_{\lambda \in \mathbb N}\) be the ensemble of classes of all functions that can be computed by polynomials of degree \(\mathsf {poly}(\lambda )\), except the constant functions and the identity function.

Let \(\mathcal G= \{\mathbb G_\lambda \}_{\lambda \in \mathbb N}\) be a group ensemble with uniform and efficient representation and operations, where each \(\mathbb G_\lambda \) is a group of prime order \(q \in (2^{\lambda -1} , 2^{\lambda } )\). We assume that for every \(\lambda \in \mathbb N\) there is a canonical and efficient mapping between the elements of \(\mathbb G_\lambda \) and the domain \(X_{\lambda }\). Let g be a fixed generator of the group \(\mathbb G_{5\lambda }\). Our obfuscator gets as input an element \(x\in X_\lambda \) and randomness \(r\in \mathbb G_{5\lambda }\) and computes:

$$\begin{aligned} \mathcal {O}(x; r) = \left( r,r^{g^{x^4+x^3+x^2+x}} \right) . \end{aligned}$$

The verifier \(\mathcal V\) for a valid-looking obfuscation is the natural one: it checks whether the obfuscation consists of merely two group elements in \(\mathbb G_{5\lambda }\). In the next two theorems we show that our obfuscator is both secure and non-malleable. The first part is based on the strong DDH assumption (Assumptions 1 and 3), and the second on the strong power-DDH assumption (Assumptions 2 and 3). Thus, overall, our obfuscator is both secure and non-malleable under the assumption that there is a group where the strong DDH and strong power-DDH assumptions hold.
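To make the construction concrete, here is a minimal, insecure sketch of the obfuscator, the verifier, and point evaluation over a toy multiplicative group mod a small prime. The constants Q and G and all function names are illustrative assumptions; a real instantiation requires a cryptographically sized group in which the strong DDH and strong power-DDH assumptions plausibly hold.

```python
# Toy sketch of O(x; r) = (r, r^(g^h(x))) with h(x) = x^4 + x^3 + x^2 + x.
# Q and G are illustrative stand-ins, NOT secure parameters.

Q = 1000003   # stand-in for the prime order parameterizing G_{5*lambda}
G = 2         # stand-in for the generator g

def h(x):
    # The fixed degree-4 polynomial used in the exponent
    return x**4 + x**3 + x**2 + x

def obfuscate(x, r):
    # O(x; r) = (r, r^(g^h(x)))
    e = pow(G, h(x), Q)
    return (r, pow(r, e, Q))

def verify(c):
    # V accepts any pair of (toy) group elements, matching the
    # "valid-looking" condition: merely two elements of the group
    return isinstance(c, tuple) and len(c) == 2 and all(0 < v < Q for v in c)

def check_point(c, y):
    # Evaluate the obfuscated point function at y: accept iff y is the point
    r, w = c
    return pow(r, pow(G, h(y), Q), Q) == w
```

Note that `verify` inspects only the shape of the obfuscation, never the hidden point, which is exactly what makes the non-malleability requirement non-trivial.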

Theorem 4

Under the strong DDH assumption (Assumptions 1 and 3), the obfuscator \(\mathcal {O}\) above is a point obfuscator according to Definition 2.

Theorem 5

Let \(\mathcal X_\lambda \) be any well-spread distribution over \(X_\lambda \). Under the strong power-DDH assumption (Assumptions 2 and 3), the obfuscator \(\mathcal {O}\) above is non-malleable according to Definition 5 for the family of functions \(\mathcal F_{\mathsf {poly}}\) and the distribution \(\mathcal X_\lambda \).

The proofs of these theorems appear in the following two subsections.

4.1 Proof of Theorem 4

For completeness, we first notice that for any \(x \in X_\lambda \) it holds that \(x^4+x^3+x^2+x \le 2^{5\lambda }\), and since the map \(x\mapsto x^4+x^3+x^2+x\) is strictly increasing over the non-negative integers, for any distinct \(x,y \in X_\lambda \) it holds that \(y^4+y^3+y^2+y \ne x^4+x^3+x^2+x\). Therefore, we get that for every \(x\in X_\lambda \) it holds that \(\mathcal {O}(x) \equiv I_x\), as required.
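This functionality argument can be sanity-checked for toy parameters; the helper below (all names illustrative) verifies that h is strictly increasing on the domain, hence injective, and that its values stay below \(2^{5\lambda }\).

```python
def h(x):
    return x**4 + x**3 + x**2 + x

def check_injective_and_bounded(lam):
    # h is strictly increasing on {0, ..., 2^lam - 1}, hence injective,
    # and its values stay below 2^(5*lam)
    vals = [h(x) for x in range(2**lam)]
    increasing = all(a < b for a, b in zip(vals, vals[1:]))
    bounded = max(vals) <= 2**(5 * lam)
    return increasing and bounded
```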

To prove soundness, we reduce the security of our construction to the security of the \(r,r^x\) construction of Canetti [15]. We prove the following general claim regarding point function obfuscators.

Claim

Let \(f :X_\lambda \rightarrow X_\lambda '\) be an injective polynomial-time computable function, and let \(\mathcal {O}\) be a secure point obfuscator. Then, \(\mathcal {O}'(x)=\mathcal {O}(f(x))\) is also a secure point obfuscator.

Proof

We prove that for any probabilistic polynomial-time algorithm \(\mathcal {A}\), there is a probabilistic polynomial-time simulator \(\mathcal {S}'\) and a negligible function \(\mathsf{negl}(\cdot )\), such that for all \(x\in X_\lambda \) and all \(\lambda \in \mathbb N\),

$$\begin{aligned} \left| \Pr \left[ \mathcal {A}(\mathcal {O}'(x))=1 \right] - \Pr \left[ \mathcal {S}'^{I_x}(1^\lambda )=1 \right] \right| \le \mathsf{negl}(\lambda ), \end{aligned}$$

where the probabilities are over the internal randomness of \(\mathcal {A}, \mathcal {O}'\) and \(\mathcal {S}'\).

Let \(\mathcal {A}\) be such an adversary and let \(\mathcal {S}\) be the corresponding simulator whose existence is guaranteed by the fact that \(\mathcal {O}\) is a secure point obfuscator. It holds that for every \(x\in X_\lambda \):

$$\begin{aligned} \left| \Pr \left[ \mathcal {A}(\mathcal {O}(f(x)))=1 \right] - \Pr \left[ \mathcal {S}^{I_{f(x)}}(1^\lambda )=1 \right] \right| \le \mathsf{negl}(\lambda ). \end{aligned}$$

As a first step, we construct a simulator \(\mathcal {S}'\) that is inefficient yet makes only a polynomial number of queries to its oracle (we will get rid of this assumption later using a known transformation). We define the simulator \(\mathcal {S}'\) (with oracle access to \(I_x\)) to work by simulating \(\mathcal {S}\) as follows. When \(\mathcal {S}\) performs a query y to its oracle, \(\mathcal {S}'\) finds \(x'\) such that \(f(x')=y\). If no such \(x'\) exists, then \(\mathcal {S}'\) replies with 0. Otherwise, \(\mathcal {S}'\) queries its own oracle with \(x'\) and answers with the reply of the oracle. Since f is injective, we have that \(f(x)=y\) if and only if \(x'=x\), so \(\mathcal {S}'\) answers every query of \(\mathcal {S}\) exactly as the oracle \(I_{f(x)}\) would. Thus, it holds that

$$\begin{aligned} \Pr \left[ \mathcal {S}'^{I_x}(1^\lambda )=1 \right] = \Pr \left[ \mathcal {S}^{I_{f(x)}}(1^\lambda )=1 \right] . \end{aligned}$$

Thus, we get that

$$\begin{aligned} \left| \Pr \left[ \mathcal {A}(\mathcal {O}'(x))=1 \right] - \Pr \left[ \mathcal {S}'^{I_x}(1^\lambda )=1 \right] \right| \le \mathsf{negl}(\lambda ). \end{aligned}$$

We are left to take care of the fact that the simulator is inefficient. For this we use a result of Bitansky and Canetti [8] who showed that this can be solved generically. Let us elaborate.

Bitansky and Canetti called obfuscators in which the simulator may be inefficient yet makes only a polynomially bounded number of queries virtual gray-box obfuscators. This is in contrast to virtual black-box obfuscation, where the simulator is required to be efficient both in its running time and in its number of queries, and to indistinguishability obfuscation [3, 24], which can be phrased as a simulation-based definition where the simulator is unbounded in both running time and number of queries (see [8, Proposition 3.1]). One of the main results of Bitansky and Canetti was that for point functions the virtual black-box and virtual gray-box notions are equivalent: a simulator that runs in unbounded time yet makes a polynomial number of queries can be turned into one that runs in polynomial time and makes a polynomial number of queries.

Using their result for our construction, we obtain a simulator that runs in polynomial time and makes a polynomial number of queries to its oracle. This completes the proof of the claim.

We finish the proof by applying the claim with \(f(x) = g^{x^4 + x^3 + x^2 + x}\), noticing that this function is injective and efficiently computable.

4.2 Proof of Theorem 5

Assume that there exists an adversary \(\mathcal {A}\), and a distribution \(\mathcal X_\lambda \) such that given an obfuscation of a point \(x\leftarrow \mathcal X_\lambda \), the adversary \(\mathcal {A}\) outputs a function \(f \in \mathcal F_{\mathsf {poly}}\) and a valid-looking obfuscation (i.e., an obfuscation that passes the verification of \(\mathcal V\)) of f(x) with probability at least \(\varepsilon > 0\). Denote by \(t = t(\lambda )\) the degree of f (written as a polynomial over \(X_\lambda \)). We show how to construct an adversary \(\mathcal {A}'\) that breaks the strong power-DDH assumption for the power sequence of length \(T=4t\).

Suppose we are given \((g^{z_0},g^{z_1},\ldots ,g^{z_{T}})\), where \(z_0=1\) and either \(\forall i\in [T]: z_i=x^i\) for a random \(x\leftarrow X_\lambda \) or \(\forall i\in [T]: z_i=r_i\) for random \(r_1,\ldots ,r_T \leftarrow X_\lambda \). Our goal is to show that \(\mathcal {A}'\) can distinguish between the two cases. The algorithm \(\mathcal {A}'\), on input \((g^{z_0}, \ldots , g^{z_T})\), first samples a random generator \(r \leftarrow \mathbb G\) and computes \(g^{z_1+z_2+z_3+z_4}\). Then, it runs \(\mathcal {A}\) on the input pair \((r,r^{g^{z_1+z_2+z_3+z_4}})\) to get a function f and an output pair \((c_1, c_2)\). We assume that we are given the coefficients of the polynomial that represents the function f, as otherwise we can learn these coefficients by interpolation of random evaluations of f (according to the distribution of the inputs \(\mathcal X_\lambda \)).
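The interpolation step mentioned at the end of the paragraph can be sketched as a generic Lagrange interpolation modulo a prime. The modulus and the sample points below are illustrative assumptions, and coefficient lists are ordered low degree first.

```python
def poly_mul_linear(b, c0, q):
    # Multiply the polynomial b (coefficients low degree first) by (X + c0)
    out = [0] * (len(b) + 1)
    for k, bk in enumerate(b):
        out[k] = (out[k] + c0 * bk) % q
        out[k + 1] = (out[k + 1] + bk) % q
    return out

def interpolate_mod(points, q):
    # Lagrange interpolation: recover the coefficients (low degree first)
    # of the unique polynomial of degree < len(points) through `points` mod q
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, -xj % q, q)   # times (X - xj)
                denom = denom * (xi - xj) % q
        scale = yi * pow(denom, -1, q) % q                   # y_i / prod(xi - xj)
        for k, bk in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * bk) % q
    return coeffs
```

Given \(t+1\) evaluations of a degree-t polynomial, this recovers its coefficients exactly, which is all the reduction needs.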

Let \(h(x)=x^4+x^3+x^2+x\) and let us write the polynomial h(f(x)) as a polynomial of degree at most 4t with coefficients \(b_i\):

$$\begin{aligned} h(f(x)) = (f(x))^4 + (f(x))^3 + (f(x))^2+f(x) = \sum _{i=0}^{4t}b_i x^i. \end{aligned}$$
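The coefficients \(b_i\) can be computed by composing h with f using plain polynomial arithmetic; a sketch follows, with an illustrative modulus standing in for the exponent-group order and coefficients listed low degree first.

```python
def poly_mul(a, b, q):
    # Product of two polynomials (coefficients low degree first) mod q
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def poly_add(a, b, q):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x + y) % q for x, y in zip(a, b)]

def h_of_f(f, q):
    # Coefficients b_i of h(f(X)) = f^4 + f^3 + f^2 + f mod q
    out, power = [0], f
    for _ in range(4):          # accumulate f, f^2, f^3, f^4
        out = poly_add(out, power, q)
        power = poly_mul(power, f, q)
    return out
```

For a degree-t input polynomial this yields the 4t+1 coefficients used to assemble \(u\) from the given power sequence.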

Using these values, it computes \(u=g^{\sum _{i=0}^{T}b_iz_i}\) and \(c_1^{u}\), where \((c_1,c_2)\) is the pair output by \(\mathcal {A}\). Finally, the adversary \(\mathcal {A}'\) outputs 1 if and only if \(c_2 = c_1^{u}\). The precise description of \(\mathcal {A}'\) is given in Fig. 1.

Fig. 1. The adversary \(\mathcal {A}'\) that breaks the power-DDH assumption.

We argue that \(\mathcal {A}'\) successfully breaks the power-DDH assumption.

The Real Case. Observe that if \(z_i=x^i\) for each \(i\in [T]\), then the distribution that \(\mathcal {A}\) sees is exactly the distribution \((r,r^{g^{x^4+x^3+x^2+x}})\), and thus with probability at least \(\varepsilon \), the adversary \(\mathcal {A}\) mauls the point obfuscation of x into a valid-looking point obfuscation of f(x). In that case, the second component of the pair output by \(\mathcal {A}\) equals the first component raised to the power \(g^{h(f(x))} = g^{\sum _{i=0}^{T}b_ix^i} = u\). Thus, \(\mathcal {A}'\) outputs 1 with probability at least \(\varepsilon \).

The Random Case. Suppose that \(z_i=r_i\) is random for each \(i\in [T]\). We show that the probability that \(\mathcal {A}'\) outputs 1 is negligible (in \(\lambda \)). This is an information-theoretic claim that holds even against unbounded adversaries. The adversary \(\mathcal {A}\) holds r and \(r^{g^{r_1+r_2+r_3+r_4}}\), and let us even assume that she knows \(s=r_1+r_2+r_3+r_4\). In order for \(\mathcal {A}'\) to output 1, she needs to be able to compute \(s'=\sum _{i=0}^{T}b_ir_i\) (recall that \(\mathcal {A}\) may be unbounded). We show that the min-entropy of this value \(s'\), given all the information of the adversary, is high, and therefore she cannot guess it with noticeable probability. Denote by \(\mathsf {view}(\mathcal {A})\) a random variable that corresponds to the view of \(\mathcal {A}\), and by \(S'\) a random variable that corresponds to the value of \(s'\).

We first show that if the degree of f (denoted above by t) is at least 2, then the min-entropy of \(S'\) is at least \(\lambda \). This means that it can be guessed only with negligible probability.

Claim

If \(t\ge 2\), then \({\mathsf {H}_\infty }(S' \mid \mathsf {view}(\mathcal {A})) \ge \lambda \).

Proof

If the degree of f is at least 2, then the degree of \(h(f(\cdot ))\) is at least 5, and thus there exists \(i > 4\) such that \(b_i \ne 0\). In this case, since \(r_i\) is uniform over \(X_\lambda \), the random variable \(S'\) has min-entropy \(\lambda \) given the view of \(\mathcal {A}\).

The case where f is a linear function (i.e., a degree 1 polynomial) is slightly harder to handle and here we use properties of the exact choice of our degree 4 polynomial. Let f be written as \(f(x) = ax + b\) for some fixed \(a,b\in X_\lambda \). We expand the polynomial h(f(x)) and rewrite it by grouping terms:

$$\begin{aligned} h(f(x)) =&~(ax+b)^4 + (ax+b)^3 +(ax+b)^2+ (ax+b) \\ =&~a^4 x^4+(4 a^3 b +a^3)x^3 +(6 a^2 b^2 +3 a^2 b +a^2 )x^2 \\&+ (4 a b^3 +3 a b^2+2 a b+a) x+b^4+b^3+b^2+b. \end{aligned}$$
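The grouped coefficients above can be checked mechanically against a direct binomial expansion of \(h(ax+b)\); the helper names below are illustrative.

```python
from math import comb

def coeffs_of_h_of_linear(a, b):
    # The grouped coefficients displayed above, low degree first
    return [
        b**4 + b**3 + b**2 + b,
        4*a*b**3 + 3*a*b**2 + 2*a*b + a,
        6*a**2*b**2 + 3*a**2*b + a**2,
        4*a**3*b + a**3,
        a**4,
    ]

def coeffs_by_expansion(a, b):
    # Expand (ax+b)^n for n = 1..4 via the binomial theorem and sum
    out = [0] * 5
    for n in range(1, 5):
        for k in range(n + 1):
            out[k] += comb(n, k) * a**k * b**(n - k)
    return out
```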

We show that the coefficients of \(h(f(\cdot ))\) cannot be all identical.

Claim

The coefficients of \(h(f(\cdot ))\) are not all identical.

Proof

If they were identical, then

$$\begin{aligned} a^4 = 4 a^3 b +a^3 = 6 a^2 b^2 +3 a^2 b +a^2 = 4 a b^3 +3 a b^2+2 a b+a. \end{aligned}$$

Solving this set of equations gives that the only solutions are \(a=0,b=*\) (i.e., b is arbitrary) and \(a=1,b=0\) (i.e., the identity function). However, these are illegal according to our definition of \(\mathcal {F}_{\mathsf {poly}}\): this class contains neither constant functions nor the identity function.
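One can also confirm the conclusion exhaustively over a small integer window (an illustrative check only, not a substitute for the algebraic argument, which must hold over the actual domain).

```python
def all_coeffs_equal(a, b):
    # The four non-constant coefficients of h(ax + b), as displayed above
    c4 = a**4
    c3 = 4*a**3*b + a**3
    c2 = 6*a**2*b**2 + 3*a**2*b + a**2
    c1 = 4*a*b**3 + 3*a*b**2 + 2*a*b + a
    return c4 == c3 == c2 == c1

# Enumerate integer pairs in a small window; only constant functions
# (a = 0) and the identity (a = 1, b = 0) should survive.
solutions = [(a, b) for a in range(-20, 21) for b in range(-20, 21)
             if all_coeffs_equal(a, b)]
```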

Using the fact that the coefficients are not all identical, we claim that the min-entropy of \(S'\) is at least \(\lambda \) even given the view of \(\mathcal {A}\). Thus, again, the probability of guessing correctly the value is negligible.

Claim

Let \(R_1,R_2,R_3,R_4\) be independent random variables, each distributed uniformly according to \(\mathcal X_\lambda \), and let \(S=R_1+R_2+R_3+R_4\in X_{6\lambda }\) be their sum. Let \(b_1,b_2,b_3,b_4 \in X_\lambda \) be arbitrary constants such that at least two of them are different. Let \(S'=b_1R_1 + b_2R_2+b_3R_3 + b_4R_4\). Then, \({\mathsf {H}_\infty }(S' \mid S) \ge \lambda \).

Proof

We lower bound the min-entropy by computing \(\Pr [S'=s' \mid S=s]\) for each \(s,s' \in X_\lambda \). This probability is exactly the fraction of possible \(r_1,r_2,r_3,r_4\) such that \(r_1+r_2+r_3+r_4=s\) and \(b_1r_1 + b_2r_2+b_3r_3 + b_4r_4=s'\). Writing this in matrix form we have

$$\begin{aligned} \underbrace{\begin{bmatrix} 1&1&1&1 \\ b_1&b_2&b_3&b_4 \end{bmatrix}}_{A} \cdot \begin{bmatrix} r_1 \\ r_2 \\ r_3 \\ r_4 \end{bmatrix} = \begin{bmatrix} s \\ s' \end{bmatrix}. \end{aligned}$$

Denote by Q the size of the support of \(\mathcal X_\lambda \) and notice that \(Q \ge 2^{\lambda }\). Since \(\mathcal X_\lambda \) is well-spread, its min-entropy is super-logarithmic in \(\lambda \) and thus its support size is super-polynomial in \(\lambda \). Since not all the \(b_i\)'s are equal, the matrix A has rank 2, and thus for each pair \((s, s')\) the solution space has dimension 2 and the number of possible solutions is \(Q^2\) out of the total \(Q^4\) possibilities. Altogether, we get that for every \(s' \in X_\lambda \), it holds that \(\Pr [S'=s' \mid S = s] = Q^2/Q^4 \le 1/Q \le 1/2^{\lambda }\). Thus, the min-entropy is at least \(\lambda \).
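The counting argument can be illustrated by brute force over a small prime field standing in for the support (an illustrative assumption; the proof itself works over the actual domain): whenever the \(b_i\)'s are not all equal, every pair \((s, s')\) is hit by exactly \(p^2\) of the \(p^4\) tuples.

```python
from itertools import product
from collections import Counter

def solution_counts(bs, p):
    # Map each tuple (r_1, ..., r_4) in Z_p^4 to (s, s') and count fibers
    counts = Counter()
    for r in product(range(p), repeat=4):
        s = sum(r) % p
        s_prime = sum(b * ri for b, ri in zip(bs, r)) % p
        counts[(s, s_prime)] += 1
    return counts

# With not-all-equal b_i's, the 2 x 4 matrix A has rank 2, so the linear
# map Z_p^4 -> Z_p^2 is surjective with fibers of size p^2.
counts = solution_counts((1, 2, 3, 4), p=5)
```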

Combining the above, we get that overall, the probability of distinguishing is:

$$\begin{aligned}&\left| \Pr [\mathcal {A}'(g^{x^1},\ldots ,g^{x^T})=1] - \Pr [\mathcal {A}'(g^{r_1},\ldots ,g^{r_T})=1] \right| \ge \varepsilon - \mathsf{negl}(\lambda ) \end{aligned}$$

which contradicts the security of the power-DDH assumption.

4.3 Supporting More Functions

In our construction above, we have shown how to get a point function obfuscator that is non-malleable against any function that can be written as a univariate polynomial of a polynomial degree. The reason that there is a bound on the degree of the polynomial is that the security reduction runs in time that is proportional to the degree. In particular, to be resilient against a function f of degree t we had to construct \(g^{h(f(x))}\) in the reduction given the sequence \(\{g^{x^i}\}_{i=0}^{4t}\) (recall that \(h(x)=x^4+x^3+x^2+x\)).

Exponential Security. Suppose that the min-entropy of the inputs is k. Thus, the support-size of the distribution is at most \(2^k\) and hence any function can be written as a polynomial of degree at most \(2^k\). That is, we can assume without loss of generality that the mauling function is described by a degree \(t \le 2^k\) polynomial. Thus, if we assume an exponential version of the strong power-DDH assumption, where the adversary’s running time and advantage are bounded by \(2^{O(k)}\) and \(2^{-\varOmega (k)}\), respectively, we can support functions of exponential degree (in k).

Uber Assumption. Instead of building the polynomial h(f(x)) in the proof monomial by monomial in order to break the power-DDH assumption, we can, alternatively, modify our assumption to obtain a more direct security proof without the large security loss. Concretely, instead of having the reduction compute \(g^{h(f(x))}\) given \(\{g^{z_i}\}_{i=0}^{4t}\), where t is the degree of f, we assume an “uber” power-DDH assumption that is parametrized by a class of functions \(\mathcal F=\{f:\mathbb Z_p\rightarrow \mathbb Z_p\}\) (and thus can be thought of as a collection of assumptions, one per \(f\in \mathcal F\)). The assumption says that for any \(f\in \mathcal F\), the following two distributions are computationally indistinguishable:

$$\begin{aligned} (g,g^x,g^{h(f(x))}) \approx _c (g,g^x,g^{s}), \end{aligned}$$

where \(x\leftarrow \mathcal X\) and \(s \leftarrow \mathbb Z^*_p\) is chosen at random. Having such an assumption for a class of mauling functions \(\mathcal {F}\) implies that our construction is non-malleable for the same class \(\mathcal {F}\).