
An Optimal Distributed Discrete Log Protocol with Applications to Homomorphic Secret Sharing

Published in: Journal of Cryptology

Abstract

The distributed discrete logarithm (DDL) problem was introduced by Boyle, Gilboa and Ishai at CRYPTO 2016. A protocol solving this problem was the main tool used in the share conversion procedure of their homomorphic secret sharing (HSS) scheme which allows non-interactive evaluation of branching programs among two parties over shares of secret inputs. Let g be a generator of a multiplicative group \({\mathbb {G}}\). Given a random group element \(g^{x}\) and an unknown integer \(b \in [-M,M]\) for a small M, two parties A and B (that cannot communicate) successfully solve DDL if \(A(g^{x}) - B(g^{x+b}) = b\). Otherwise, the parties err. In the DDL protocol of Boyle et al., A and B run in time T and have error probability that is roughly linear in M / T. Since it has a significant impact on the HSS scheme’s performance, a major open problem raised by Boyle et al. was to reduce the error probability as a function of T. In this paper we devise a new DDL protocol that substantially reduces the error probability to \(O(M \cdot T^{-2})\). Our new protocol improves the asymptotic evaluation time complexity of the HSS scheme by Boyle et al. on branching programs of size S from \(O(S^2)\) to \(O(S^{3/2})\). We further show that our protocol is optimal up to a constant factor for all relevant cryptographic group families, unless one can solve the discrete logarithm problem in a short interval of length R in time \(o(\sqrt{R})\). Our DDL protocol is based on a new type of random walk that is composed of several iterations in which the expected step length gradually increases. We believe that this random walk is of independent interest and will find additional applications.


Notes

  1. In all algorithms presented in this paper, the bulk of computation involves performing group operations; hence, this is a reasonable complexity measure. Alternatively, the parameter T may bound the complexity of A and B in some reasonable computational model.

  2. We note that in the applications of [4,5,6], the distribution of \(b \in [-M,M]\) is arbitrary. However, as we show in Lemma 12, our choice to define and analyze DDL for the uniform distribution of \(b \in [-M,M]\) is technically justified since the uniform distribution is the hardest for DDL: algorithms for A and B that solve DDL with an error probability \(\delta \) for the uniform distribution also solve DDL with an error probability \(O(\delta )\) for any distribution of \(b \in [-M,M]\).

  3. The function \(\phi \) (and additional pseudo-random functions defined in this paper) can be implemented by a keyed MAC, where the key is pre-distributed to A and B. Thus, our probabilistic calculations should be formally taken over the choice of the key. They should include an error term that accounts for the distinguishing advantage of an efficient adversary (A or B in our case) that attempts to distinguish the PRF from a truly random permutation. However, for an appropriately chosen PRF, the distinguishing advantage is negligible and we ignore it for the sake of simplicity.

  4. Our analysis assumes that \(\psi _{L-1}\) is a truly random function and our probabilistic calculations are taken over the choice of \(\psi _{L-1}\).

  5. We assume that the algorithm uses a table containing the pre-computed values \(g,g^2,\ldots ,g^{L-1}\). Otherwise, it has to compute \(g^{z_{i+1}}\) on-the-fly in Step 10, which results in a multiplicative penalty of \(O(\log (T))\) on the number of group operations. Of course, it is also possible to obtain a time-memory tradeoff here.

  6. We assume in our analysis that during the application of the whole protocol by a single party, each of the functions \(\phi ,\psi _{L-1}\) is not evaluated twice on the same input. These constraints are satisfied since \(|{\mathbb {G}}| = N\) is much larger than T (e.g., \(|{\mathbb {G}}|>cT^2\) for a sufficiently large constant c).

  7. We note that according to the proof of Theorem 7, I can be taken to be any value between \(\log \log (T)+\omega (1)\) and \(o(\sqrt{\log (T)})\), without a significant effect on the provable performance of the algorithm. On the other hand, according to Table 1, I seems to grow more sharply. However, the restriction of \(I = o(\sqrt{\log (T)})\) is merely an artifact of the proof and the optimal value of I could be asymptotically larger. Furthermore, the sharp increase in the values of I in Table 1 could be attributed to low-order terms that have a more noticeable effect for small T values.

  8. We consider only uniform algorithms that can be applied to families of groups (such as elliptic curve groups) and not non-uniform algorithms that are specialized to a specific group \({\mathbb {G}}\). Indeed, in the non-uniform model, there exist algorithms that solve DLI in an interval of length R in time \(o(\sqrt{R})\) for any specific group (see, e.g., [2]).

  9. More accurately, it is \(O(\delta )\), as \(\delta \) is the average error probability in the interval \([-1,1]\).

  10. It is also possible to prove a similar bound in case the interval \([M_1,M_2]\) is not symmetric around the origin.

  11. Alternatively, x could be in any fixed interval of length R. The exact interval is not important as one can easily reduce the problem in a given interval to any other interval.

  12. The function \({\mathcal {B}}(N)\) for which our assumption on the hardness of DLI is believed to hold depends on the specific family of groups. For example, for some subgroups of \({\mathbb {Z}}^{*}_p\), \({\mathcal {B}}(N)\) is subexponential in \(\log N\).

  13. Our actual proof is slightly more general than outlined here and uses the notion of query chains.

  14. Our proof relaxes this strong condition, and requires that it holds unless a low-probability event (called a collision) occurs.

  15. Note that w must be a primitive root of unity. In our case, this holds trivially since N is prime.

  16. An additional difficulty is that cz may be a large number, while the parties are restricted to performing computations on small integers. This problem is overcome by providing an encryption of each input w multiplied by each bit \(c_i\) of the secret key, and then applying a linear combination whose coefficients are powers of 2 to the additive shares of the products \(c_iyw\), obtaining additive shares of cz.

  17. The parties cannot multiply two memory locations, which would allow evaluation of arbitrary circuits.

  18. Conversion friendly groups are subgroups of \({\mathbb {Z}}^{*}_p\) of a prime order q, where \(p = 2q + 1\) is a prime that is close to a power of 2, and \(g = 2\) is a generator. Multiplying a group element h by the generator g can be very efficiently performed by shifting h by one bit to the left, and adding the difference between p and the closest power of 2 in case the removed leftmost bit is 1.

References

  1. M. Ben-Or, S. Goldwasser, A. Wigderson, Completeness theorems for non-cryptographic fault-tolerant distributed computation (extended abstract), in Simon [22], pp. 1–10

  2. D.J. Bernstein, T. Lange, Computing small discrete logarithms faster, in S.D. Galbraith, M. Nandi, editors, Progress in Cryptology—INDOCRYPT 2012, 13th International Conference on Cryptology in India, Kolkata, India, December 9–12, 2012. Proceedings. Volume 7668 of Lecture Notes in Computer Science (Springer, 2012), pp. 317–338

  3. D. Boneh, E. Goh, K. Nissim, Evaluating 2-DNF formulas on ciphertexts, in J. Kilian, editor, Theory of Cryptography, Second Theory of Cryptography Conference, TCC 2005, Cambridge, MA, USA, February 10–12, 2005, Proceedings. Volume 3378 of Lecture Notes in Computer Science (Springer, 2005), pp. 325–341

  4. E. Boyle, G. Couteau, N. Gilboa, Y. Ishai, M. Orrù, Homomorphic secret sharing: optimizations and applications, in B.M. Thuraisingham, D. Evans, T. Malkin, D. Xu, editors, Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30–November 03, 2017 (ACM, 2017), pp. 2105–2122

  5. E. Boyle, N. Gilboa, Y. Ishai, Breaking the circuit size barrier for secure computation under DDH, in M. Robshaw, J. Katz, editors, Advances in Cryptology—CRYPTO 2016—36th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 14–18, 2016, Proceedings, Part I. Volume 9814 of Lecture Notes in Computer Science (Springer, 2016), pp. 509–539

  6. E. Boyle, N. Gilboa, Y. Ishai, Group-based secure computation: optimizing rounds, communication, and computation, in J. Coron, J.B. Nielsen, editors, Advances in Cryptology—EUROCRYPT 2017—36th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Paris, France, April 30–May 4, 2017, Proceedings, Part II. Volume 10211 of Lecture Notes in Computer Science (2017), pp. 163–193

  7. E. Boyle, N. Gilboa, Y. Ishai, H. Lin, S. Tessaro, Foundations of homomorphic secret sharing, in A.R. Karlin, editor, 9th Innovations in Theoretical Computer Science Conference, ITCS 2018, January 11–14, 2018, Cambridge, MA, USA. Volume 94 of LIPIcs (Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2018), pp. 21:1–21:21

  8. D. Chaum, C. Crépeau, I. Damgård, Multiparty unconditionally secure protocols (extended abstract), in Simon [22], pp. 11–19

  9. B. Chor, E. Kushilevitz, O. Goldreich, M. Sudan, Private information retrieval. J. ACM 45(6), 965–981 (1998)


  10. I. Dinur, N. Keller, O. Klein, An optimal distributed discrete log protocol with applications to homomorphic secret sharing, in H. Shacham, A. Boldyreva, editors, Advances in Cryptology—CRYPTO 2018—38th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 19–23, 2018, Proceedings, Part III. Volume 10993 of Lecture Notes in Computer Science (Springer, 2018), pp. 213–242

  11. N. Fazio, R. Gennaro, T. Jafarikhah, W.E. Skeith III, Homomorphic secret sharing from paillier encryption, in T. Okamoto, Y. Yu, M.H. Au, Y. Li, editors, Provable Security—11th International Conference, ProvSec 2017, Xi’an, China, October 23–25, 2017, Proceedings. Volume 10592 of Lecture Notes in Computer Science (Springer, 2017), pp. 381–399

  12. S.D. Galbraith, J.M. Pollard, R.S. Ruprai, Computing discrete logarithms in an interval. Math. Comput. 82(282), 1181–1195 (2013)


  13. C. Gentry, Fully homomorphic encryption using ideal lattices, in M. Mitzenmacher, editor, Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC 2009, Bethesda, MD, USA, May 31–June 2, 2009 (ACM, 2009), pp. 169–178

  14. D.M. Gordon, Discrete logarithms in GF(P) using the number field sieve. SIAM J. Discrete Math. 6(1), 124–138 (1993)


  15. R.A. Horn, C.R. Johnson, Matrix Analysis (Paperback ed.) (Cambridge University Press, Cambridge, 1985)


  16. A.K. Lenstra, H.W. Lenstra, The Development of the Number Field Sieve. Number 1554 in Lecture Notes in Mathematics (Springer, Berlin, 1993)

  17. R. O’Donnell, Analysis of Boolean Functions (Cambridge University Press, Cambridge, 2014)


  18. J.M. Pollard, Monte Carlo methods for index computation (mod \(p\)). Math. Comput. 32(143), 918–924 (1978)


  19. J.M. Pollard, Kangaroos, monopoly and discrete logarithms. J. Cryptol. 13(4), 437–447 (2000)


  20. R.L. Rivest, L. Adleman, M.L. Dertouzos, On data banks and privacy homomorphisms, in Foundations of Secure Computation (1978), pp. 169–179

  21. V. Shoup, Lower bounds for discrete logarithms and related problems, in W. Fumy, editor, Advances in Cryptology—EUROCRYPT ’97, International Conference on the Theory and Application of Cryptographic Techniques, Konstanz, Germany, May 11–15, 1997, Proceeding. Volume 1233 of Lecture Notes in Computer Science (Springer, 1997), pp. 256–266

  22. J. Simon, editor, Proceedings of the 20th Annual ACM Symposium on Theory of Computing, May 2–4, 1988, Chicago, Illinois, USA (ACM, 1988)

  23. D. Williams, Probability with Martingales (Cambridge University Press, Cambridge, 1991)


  24. A.C. Yao, Protocols for secure computations (extended abstract), in 23rd Annual Symposium on Foundations of Computer Science, Chicago, Illinois, USA, 3–5 November 1982 (IEEE Computer Society, 1982), pp. 160–164


Acknowledgements

The authors would like to thank Elette Boyle, Niv Gilboa, Yuval Ishai and Yehuda Lindell for discussions and helpful suggestions that improved this work significantly. The authors would also like to thank the anonymous reviewers of CRYPTO 2018 and the Journal of Cryptology for their detailed and valuable comments. This research was supported by the European Research Council under the ERC starting Grant agreement n. 757731 (LightCrypt) and by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Cyber Bureau in the Prime Minister’s Office. The first author was additionally supported by the Israeli Science Foundation through Grant No. 573/16.

Author information


Correspondence to Itai Dinur.

Additional information

Communicated by Nigel Smart.


A preliminary version of the paper was presented at the CRYPTO 2018 conference [10].

Appendices

A Group-Based Homomorphic Secret Sharing Scheme [5]

We describe very briefly the main ideas behind the group HSS scheme of [5]. For more details, refer to the original publication and its extensions [4, 6].

The (2-party) HSS scheme of [5] randomly splits an input w into a pair of shares \((w_0, w_1)\) such that: (1) each share \(w_i\) computationally hides w, and (2) there exists a polynomial-time local evaluation algorithm \(\mathrm {Eval}\) such that for any program P from a given class (branching programs in this case), the output P(w) can be efficiently reconstructed as \(\mathrm {Eval}(w_0, P) + \mathrm {Eval}(w_1, P)\).

The scheme works in a multiplicative cyclic group \({\mathbb {G}}\) of prime order N with a generator g. First, the scheme allows the parties to locally multiply an encrypted input \(w \in {\mathbb {Z}}_N\) with an additively secret-shared value \(y \in {\mathbb {Z}}_N\), such that the result \(z = wy\) is shared between the parties. For this purpose, the parties also require an additive share of (cy) (where \(h = g^c\) is the public key). The sharing is such that each party \(i \in \{0,1\}\) has a group element \(g^{z_i}\) satisfying \(g^{z_0} \cdot g^{z_1} = g^{z}\). We note that all intermediate computation values are bounded by a small integer parameter M.

More specifically, the encryption of w is \((g^r,h^r g^w)\), which is an ElGamal encryption, modified to be additively homomorphic by placing the plaintext in the exponent. The value y (and similarly (cy)) is additively shared as \((y_0,y_1)\) such that \(y = y_0 + y_1 \bmod N\) and party i obtains \(y_i\). Given \((g^r,h^r g^w)\), \(y_i\) and \((cy)_i\), party i locally computes a share of \(z = wy\) as \(\gamma _i = (h^r g^w)^{y_i} \cdot (g^r)^{-(cy)_i}\). Thus, \(\gamma _0 \cdot \gamma _1 = (h^r g^w)^{y} \cdot (g^r)^{-(cy)} = g^{(cr+w)y - cry} = g^{wy} = g^{z}\) as required.
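The local share computation above can be checked in a toy group. The following sketch uses the order-11 subgroup of \({\mathbb {Z}}^{*}_{23}\) purely for illustration (a real instantiation uses a cryptographically large group), and verifies that the two local values \(\gamma _0, \gamma _1\) indeed multiply to \(g^{wy}\):

```python
import random

# Toy instantiation: the order-11 subgroup of Z_23^* generated by g = 4.
p, N, g = 23, 11, 4
c = random.randrange(1, N); h = pow(g, c, p)   # secret key c, public key h = g^c
w, y = 3, 7                                    # encrypted input and shared value

# ElGamal encryption of w, with the plaintext placed in the exponent
r = random.randrange(N)
ct = (pow(g, r, p), pow(h, r, p) * pow(g, w, p) % p)

# Additive shares of y and of c*y (mod the group order N)
y0 = random.randrange(N); y1 = (y - y0) % N
cy0 = random.randrange(N); cy1 = (c * y - cy0) % N

def gamma(y_i, cy_i):
    """Party i's local multiplicative share of g^(w*y)."""
    return pow(ct[1], y_i, p) * pow(ct[0], (-cy_i) % N, p) % p

# The two local shares multiply to g^z for z = w*y, as in the text
assert gamma(y0, cy0) * gamma(y1, cy1) % p == pow(g, (w * y) % N, p)
```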

At this stage, \(g^z\) is multiplicatively shared by the parties, so they cannot multiply z with a new encrypted input \(w'\). Next, [5] shows how the parties can convert multiplicative shares of \(g^z\) into additive shares of z without any interaction via a share conversion procedure. The parties implement the share conversion procedure by collectively solving the distributed discrete log (DDL) problem, described in Sect. 2.1. Once the parties have an additive sharing of z, they can add it to other additive shares. They can also multiply it with a new encrypted input \(w'\) (provided that they also compute an additive share of (cz)).Footnote 16

This allows the parties to perform the following operations on their input: (1) load an input into memory, (2) add the values of two memory locations, (3) multiply a memory location by an input, and (4) output the value of a memory location modulo some integer.Footnote 17 Such programs, referred to as restricted multiplication straight-line (RMS) programs, can emulate any branching program of size S by a sequence of O(S) instructions.
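To make the program class concrete, the following sketch gives the *plaintext* semantics of the four RMS operations (the HSS scheme evaluates the same instruction sequence over shares; the instruction encoding here is hypothetical):

```python
# Plaintext semantics of the four RMS operations; instruction names are
# illustrative, not taken from [5].
def run_rms(program, inputs, modulus):
    mem = {}
    for op, *args in program:
        if op == "load":                  # (1) load an input into memory
            dst, i = args; mem[dst] = inputs[i]
        elif op == "add":                 # (2) add two memory locations
            dst, a, b = args; mem[dst] = mem[a] + mem[b]
        elif op == "mul":                 # (3) multiply a memory location by an *input*
            dst, a, i = args; mem[dst] = mem[a] * inputs[i]
        elif op == "out":                 # (4) output a memory value modulo an integer
            (a,) = args; return mem[a] % modulus

# Computes x0*x1 + x0 on inputs (3, 5)
prog = [("load", "m0", 0), ("mul", "m1", "m0", 1),
        ("add", "m2", "m1", "m0"), ("out", "m2")]
assert run_rms(prog, [3, 5], 100) == 18
```

Note that operation (3) multiplies a memory location only by an input, never by another memory location, which is exactly the restriction of Footnote 17.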

B Refined Provable Parameters for Theorem 7

In Theorem 7, we proved that the failure probability of Algorithm 3, namely, \(\mathrm {Pr}_{\mathbf {err}}(\mathrm {IteratedRandW}_P,[1,1],T)\), is bounded by \(2^{10.2+o(1)}/T^{2}\). One may wonder whether for small values of T, the o(1) term is actually dominant and makes the result meaningless. It turns out that this is not the case, and on the contrary, for specific values of T one may obtain bounds which are similar to those of Theorem 7 and usually even stronger, using a computer-aided choice of parameters. In order to obtain such bounds, we programmed a routine that uses all the bounds proved in this paper to obtain a bound on \(\mathrm {Pr}_{\mathbf {err}}(\mathrm {IteratedRandW}_P,[1,1],T)\), given a set of parameters \(SP=\{I, \{L_{k}\}, \{t_{k}\}\}\). Then, we optimized the parameters, and obtained for several values of T an upper bound on the probability \(\mathrm {Pr}_{\mathbf {err}}(\mathrm {IteratedRandW}_P,[1,1],T)\).

The bounds were used together with their lower-order terms, so that the results given below are precise, assuming the group \({\mathbb {G}} \) is sufficiently large. Notice that the lower-order terms are dominant for small T, but the resulting bound is still not much worse than the one claimed in Theorem 7. We emphasize that these bounds are rigorously provable by plugging the choices of parameters given below into the proof of Theorem 7; the computer routine only helps in choosing the parameters.

| T | I | \(\sim \log (t_{k})\) | \(T^{2}\cdot \Pr [\mathbf {err}]\le \) | \(\sim \log (L_{k})\) |
|---|---|---|---|---|
| \(2^{15}\) | 2 | 12.4, 13.5, 14.0 | 1440 | 5.7, 9.6 |
| \(2^{25}\) | 5 | 19.2, 20.7, 21.5, 22.3, 23.1, 23.8 | 755 | 7.6, 12.4, 15.8, 18.4, 20.8 |
| \(2^{50}\) | 10 | 32.1, 36.2, 38.8, \(\ldots \), 47.3, 48.1, 48.8 | 555 | 9.1, 16.8, \(\ldots \), 41.5, 43.8, 46.0 |
| \(2^{100}\) | 12 | 77.9, 80.9, 83.6, \(\ldots \), 97.3, 98.1, 98.8 | 555 | 30.5, 47.4, \(\ldots \), 91.5, 93.8, 96.0 |

C An Additional Optimization Used in Part of the Experiments

In this section we describe the third optimization used in part of the experiments. The results obtained using this optimization are described in the rightmost column of Table 1. As can be seen in the table, for all values of T for which computation using only the first two optimizations is feasible, the expected error probability obtained using the third optimization is very close to the probability obtained without it. The advantage of this optimization is that it leads to a much smaller standard deviation, and allows performing the simulation for much larger values of T.

The optimization is based on the fact that Algorithm 3 is composed of several stages, and \(\Pr [\mathbf {err}]\) expresses the failure of the parties to synchronize in every one of these stages. Roughly speaking, the optimization efficiently estimates the probability of failure in each stage, conditioned on failure in all previous stages, and then multiplies these probabilities. First we describe the simulation, and then we justify its correctness.

For each stage i (starting at \(i=0\)), we simulate a specific pattern of jumps of both parties for the i’th stage, and compute the probability \(p_{i}\) that the parties are not synchronized at the end of that stage, given the simulated pattern of jumps. We then uniformly draw \(c_{i}^{A}, c_{i}^{B}\) (that is, the positions of the parties at the end of the i’th stage), given the pattern of jumps and the assumption that the parties have not synchronized yet. We then repeat the process for the \((i+1)\)’th stage, until \(i=I\). The simulation outputs the product of probabilities \(X=p_{0}\cdot p_{1} \cdots p_{I}\). We claim that \({{\,\mathrm{{\mathbb {E}}}\,}}[X]=\Pr [\mathbf {err}]\), which allows us to approximate \(\Pr [\mathbf {err}]\) efficiently.
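The unbiasedness claim \({{\,\mathrm{{\mathbb {E}}}\,}}[X]=\Pr [\mathbf {err}]\) can be checked numerically on a toy model. The sketch below replaces the actual stage dynamics with a hypothetical two-state synchronization model (all probabilities are illustrative) and compares the product estimator against the exact failure probability computed by recursion:

```python
import random
random.seed(2024)

# Hypothetical model: in state s, a stage synchronizes (no error) with
# probability SYNC[s]; conditioned on failing, the next state is 1 with
# probability NXT[s].  The real algorithm has a far richer state space.
SYNC, NXT, STAGES = [0.6, 0.3], [0.2, 0.7], 4

def exact(s, i):
    """Exact Pr[err] (failure at every stage) by backward recursion."""
    if i == STAGES:
        return 1.0
    fail = 1 - SYNC[s]
    return fail * (NXT[s] * exact(1, i + 1) + (1 - NXT[s]) * exact(0, i + 1))

def sample():
    """One sample X = p_0 * p_1 * ... : multiply the conditional failure
    probability of each stage, then resample the state given failure."""
    s, x = 0, 1.0
    for _ in range(STAGES):
        x *= 1 - SYNC[s]                          # p_i for the current state
        s = 1 if random.random() < NXT[s] else 0  # end-of-stage state, given failure
    return x

n = 20000
estimate = sum(sample() for _ in range(n)) / n
assert abs(estimate - exact(0, 0)) < 0.01         # E[X] = Pr[err]
```

Because every sample is a product of small probabilities rather than a 0/1 indicator, the estimator's variance is far below that of naive Monte Carlo, mirroring the variance reduction reported for the third optimization.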

The claim is proved by induction on I, using repeated applications of the law of total expectation and the law of total probability in their conditional versions. Recall that the law of total expectation states that if \(\{B_n\}_{n=1,2,\ldots }\) is a partition of the entire space, then for every pair of events A, C we have

$$\begin{aligned} {\mathbb {E}}[A|C] = \sum _n {\mathbb {E}}[A | C \cap B_n] \Pr [B_n | C]. \end{aligned}$$

The law of total probability asserts the same, with expectation replaced by probability.

In our case, recall that for each \(0 \le i \le I\), \(\mathbf {err}_i\) denotes the event that the parties fail to synchronize until the end of stage i, and \(c_{i}^{A}, c_{i}^{B}\) denote the positions of the parties at the end of stage i. Let us denote by \(\{E_i^j\}_{j=1,2,\ldots }\) the possible pairs of positions \((c_{i}^{A}, c_{i}^{B})\).

For \(I=0\) (i.e., if Algorithm 3 consists of a single stage), we clearly have \({{\,\mathrm{{\mathbb {E}}}\,}}[X]={{\,\mathrm{{\mathbb {E}}}\,}}[p_0]=\Pr [\mathbf {err}_0]=\Pr [\mathbf {err}]\). Assume that for all \(j \le I-1\) we have \({{\,\mathrm{{\mathbb {E}}}\,}}[p_0 \cdot p_1 \cdot \ldots \cdot p_{j}] = \Pr [\mathbf {err}_j]\), and consider stage I.

We have

$$\begin{aligned} \Pr [\mathbf {err}] = \Pr [\mathbf {err}_I] = \Pr [\mathbf {err}_{I-1}] \Pr [\mathbf {err}_I | \mathbf {err}_{I-1}] \end{aligned}$$

By the law of total probability,

$$\begin{aligned} \Pr [\mathbf {err}_I | \mathbf {err}_{I-1}] = \sum _j \Pr [\mathbf {err}_I | E_{I-1}^j \wedge \mathbf {err}_{I-1}] \Pr [E_{I-1}^j | \mathbf {err}_{I-1}]. \end{aligned}$$

By the definition of the \(p_i\)’s, we have

$$\begin{aligned} \Pr [\mathbf {err}_I | E_{I-1}^j \wedge \mathbf {err}_{I-1}] = {{\,\mathrm{{\mathbb {E}}}\,}}[p_I | E_{I-1}^j], \end{aligned}$$

and so by the law of total expectation,

$$\begin{aligned} \begin{aligned} \Pr [\mathbf {err}_I | \mathbf {err}_{I-1}]&= \sum _j \Pr [\mathbf {err}_I | E_{I-1}^j \wedge \mathbf {err}_{I-1}] \Pr [E_{I-1}^j | \mathbf {err}_{I-1}] \\&= \sum _j {{\,\mathrm{{\mathbb {E}}}\,}}[p_I | E_{I-1}^j \wedge \mathbf {err}_{I-1}] \Pr [E_{I-1}^j | \mathbf {err}_{I-1}] \\&= {{\,\mathrm{{\mathbb {E}}}\,}}[p_I]. \end{aligned} \end{aligned}$$

Hence, using the induction hypothesis, we obtain

$$\begin{aligned} \begin{aligned} \Pr [\mathbf {err}]&= \Pr [\mathbf {err}_{I-1}] \Pr [\mathbf {err}_I | \mathbf {err}_{I-1}] \\&= {{\,\mathrm{{\mathbb {E}}}\,}}[p_0 \cdot p_1 \cdot \ldots \cdot p_{I-1}] {{\,\mathrm{{\mathbb {E}}}\,}}[p_I] \\&= {{\,\mathrm{{\mathbb {E}}}\,}}[p_0 \cdot p_1 \cdot \ldots \cdot p_{I}], \end{aligned} \end{aligned}$$

as asserted.

We note that an important difference between the first two optimizations, described in Sect. 4, and the third optimization described above is the type of random variables we obtain. The variables output by a simulation based solely on the first two optimizations are Bernoulli variables indicating the failure of the parties to synchronize, and we seek to compute their expectation. In contrast, the distribution of the variables output by a simulation using the third optimization is not clear, but their expectation still equals \(\Pr [\mathbf {err}]\). Therefore, we also performed some simulations without this optimization.

In both cases we approximate \(\Pr [\mathbf {err}]\) by the empirical expectation of the samples drawn with the described simulations. In addition, we estimate the standard deviation (\(\sigma \)) of our approximation of \(\Pr [\mathbf {err}]\) using the empirical variance

$$\begin{aligned} \frac{\sum _{i < j} (X_{i}-X_{j})^2}{n^3-n^2}, \end{aligned}$$

where \(X_{1},\ldots , X_{n}\) are independent samples from the corresponding distribution; this is the standard way to estimate, from samples, the variance of a random variable whose distribution is not known a priori.
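The displayed pairwise estimator is algebraically identical to \(s^2/n\), the unbiased sample variance divided by n (i.e., the usual estimate of the variance of the empirical mean), via the identity \(\sum _{i<j}(X_i-X_j)^2 = n\sum _i X_i^2 - (\sum _i X_i)^2\). A quick numerical check:

```python
import random
random.seed(7)

xs = [random.random() for _ in range(50)]   # i.i.d. samples X_1, ..., X_n
n = len(xs)

# The pairwise estimator from the displayed formula
pairwise = sum((xs[i] - xs[j]) ** 2
               for i in range(n) for j in range(i + 1, n)) / (n ** 3 - n ** 2)

# s^2 / n: unbiased sample variance divided by n, the standard estimate
# of the variance of the empirical mean
mean = sum(xs) / n
s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)
assert abs(pairwise - s2 / n) < 1e-9
```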

D Variants and Optimizations of Algorithm 3

1.1 D.1 Error Flag

The DDL protocols of [4, 6] were adapted to raise a flag in case there was a potential error in the protocol (namely, the parties fail to synchronize). This feature was important for some of the HSS applications. In the tweaked protocol, A simulates B’s algorithm for all its potential locations (i.e., \(b \in [-M,M]\)). This can be performed very efficiently for small M by computing about M additional group elements. A then raises a flag if there is a potential disagreement (i.e., an error) in one of these simulations.

Another option (described in [11]) is for each party to maintain a footprint (e.g., a hash) of all the final group elements it computes during the DDL executions of an RMS program. One can then check if an error occurred during one of the DDL executions by comparing these footprints.

Our iterated random walk algorithm can be adapted to support each of these two modifications with a small overhead. In particular, the expected overhead in the first modification is small since each iteration in all simulations can be performed by A in parallel. Note that almost all simulations are likely to agree on the same group element after the first iteration, which executes the basic algorithm and requires computing only about M additional consecutive group elements for all simulations. Hence, the number of simulations that A has to test in the remaining iterations drops sharply.

1.2 D.2 Practical Considerations

In the HSS implementations of [4, 6], the authors used a specific type of groups (called “conversion friendly” groups) for which their share conversion procedure can be implemented very efficiently. In such groups, multiplication by g is very efficient,Footnote 18 and moreover, given \(g^{x}\) it is possible to scan a consecutive interval \(g^{x+1},g^{x+2},\ldots ,g^{x+c}\) (for some value of c) very quickly to determine whether one of these elements has a specific pattern (e.g., is the minimal element in this set). In the following, we refer to this procedure as a consecutive scan. Consequently, in [4], the authors were able to perform \(T \approx 5 \cdot 10^9\) “conversion operations” per second, which gives an error probability of roughly \(2/T = 4 \cdot 10^{-10}\) using their algorithm. On the other hand, the authors reported that using an elliptic curve group (offering a similar level of security), they could only perform about \(T = 4 \cdot 10^6\) group operations per second. This gives an error probability that is higher by a factor of about 1000 compared to the conversion friendly group.
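The shift-based multiplication by \(g = 2\) described in Footnote 18 can be sketched as follows. We use the Mersenne prime \(p = 2^{31}-1\) purely to illustrate the arithmetic (it is not claimed to be a conversion-friendly group in the sense of [4, 6], where p = 2q + 1 with q prime):

```python
# Multiplication by g = 2 modulo a prime p close to a power of two 2^n:
# shift left by one bit; if the removed leftmost bit is 1, add back the
# difference c = 2^n - p.  Illustrative prime only.
p, n = 2 ** 31 - 1, 31
c = (1 << n) - p                   # distance to the closest power of 2 (here 1)

def mul_by_g(h):
    """Compute 2*h mod p with one shift and a conditional add."""
    t = h << 1
    if t >> n:                     # the removed leftmost bit is 1
        t = (t & ((1 << n) - 1)) + c
    return t

for h in [1, 12345, p - 1, 2 ** 30 + 17]:
    assert mul_by_g(h) == (2 * h) % p
```

A consecutive scan then amounts to iterating `mul_by_g` and inspecting each resulting value, which costs roughly one shift and one add per element.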

If we run our algorithm on a similar elliptic curve group, then according to Table 1, we expect the error probability to become roughly \(400 \cdot T^{-2} = 400 \cdot (4 \cdot 10^6)^{-2} = 2.5 \cdot 10^{-11}\), which is lower than the probability obtained with a conversion friendly group in [4] by a factor of about 16. We note that an additional benefit of elliptic curve groups is that they reduce the size of the HSS ciphertexts.

We would like to further optimize our algorithm in conversion friendly groups, but its direct application would not yield a similar benefit as in [4, 6] since (besides the first iteration) it does not compute consecutive group elements. Nevertheless, we can tweak our algorithm to benefit from conversion friendly groups by combining our random walk iterations with efficient consecutive scans as above, resulting in “alternating” random walks. Optimizing the parameters for this algorithm in practice is quite involved and we leave it to future work. Below, we sketch the algorithm and its analysis at a high level.

We combine the iterated random walk of Algorithm 3 with basic Algorithm 1 as follows: for parameters \(T_1,T_2\), Algorithm 3 is viewed as an outer algorithm that operates on blocks of size \(T_1\) (in the original algorithm \(T_1=1\)) and makes \(T_2\) invocations of the basic Algorithm 1, where each such inner invocation makes \(T_1\) consecutive queries to its block. Thus, the combined algorithm makes a total of \(T_1 \cdot T_2\) queries.

More specifically, in each step, the combined algorithm moves to the group element which has the lowest value under the mapping \(\phi \) among the next \(T_1\) elements. It then computes the step function \(\psi \) on the chosen element, where its output defines the number of blocks (of size \(T_1\)) to jump over in this step (the domain and range of \(\psi \) are the same as in the standard Algorithm 3 with \(T_2\) queries). Calculation shows that for \(M=1\), the combined algorithm has error probability of \(\delta = O(T_1^{-1} \cdot T_2^{-2})\) (for the cases of \(T_1 = 1\) and \(T_2 = 1\) this is directly implied by our analysis of Algorithms 1 and 3, respectively).
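One step of the combined algorithm can be sketched as follows, under stated assumptions: group elements are modelled abstractly as integers, the keyed functions \(\phi , \psi \) are stood in for by a hash (a hypothetical instantiation), and the range of \(\psi \) is illustrative:

```python
import hashlib

def prf(tag, x):
    """Stand-in for the keyed pseudo-random functions phi / psi of the text
    (hypothetical instantiation via SHA-256)."""
    return int.from_bytes(hashlib.sha256(f"{tag}|{x}".encode()).digest(), "big")

def combined_step(block, T2):
    """One step of the sketched combined walk: `block` holds T1 consecutive
    group elements; move to the element minimizing phi, then let psi of the
    chosen element determine how many blocks of size T1 to jump over."""
    pivot = min(block, key=lambda e: prf("phi", e))
    jump_blocks = 1 + prf("psi", pivot) % T2
    return pivot, jump_blocks

# Two parties scanning the same block deterministically agree on the pivot
# and on the jump length, which is what lets them re-synchronize.
blk = list(range(1000, 1016))      # a block of T1 = 16 "elements"
assert combined_step(blk, 64) == combined_step(list(blk), 64)
```

The point of the construction is that the inner `min` over a block is exactly a consecutive scan, so it is cheap in conversion friendly groups, while the outer jump reuses the iterated random walk machinery.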

Let us assume that in conversion friendly groups, scanning a block of size \(T'\) using Algorithm 1 requires the same amount of time as performing a single jump in the iterated random walk of Algorithm 3 (the exact value of \(T'\) should be determined by experiments). Furthermore, assume that we can make \(T_u\) jumps in Algorithm 3 in a fixed time unit, or alternatively, scan \(T_u\) blocks of size \(T'\) using Algorithm 1. Since the combined algorithm makes the same number of consecutive scans and jumps, we can set the block size as \(T_1 = T'\), i.e., we make \(T_u/2\) jumps and scan \(T_u/2\) blocks of size \(T'\) in a time unit, obtaining \(\delta = O(T'^{-1} \cdot T_u^{-2})\) (precise optimization will set \(T_1\) more accurately in order to optimize the constants). In other words, we get an advantage of \(O(T'^{-1})\) in error probability compared to the iterated random walk of Algorithm 3.


Cite this article

Dinur, I., Keller, N. & Klein, O. An Optimal Distributed Discrete Log Protocol with Applications to Homomorphic Secret Sharing. J Cryptol 33, 824–873 (2020). https://doi.org/10.1007/s00145-019-09330-2
