Zero Knowledge Contingent Payments for Trained Neural Networks


Part of the Lecture Notes in Computer Science book series (LNCS, volume 12973)

Abstract

Nowadays, neural networks are widely used in many machine learning tasks. In practice, one might not have enough expertise to fine-tune a neural network model; it has therefore become increasingly popular to outsource model training to a machine learning expert. This raises the need for fair model exchange: if the seller sends the model first, the buyer might refuse to pay; if the buyer pays first, the seller might refuse to send the model or might send an inferior one. In this work, we aim to address this problem so that neither the buyer nor the seller can deceive the other. We start from Zero Knowledge Contingent Payment (ZKCP), which is used for the fair exchange of digital goods and payment over a blockchain, and extend it to Zero Knowledge Contingent Model Payment (ZKCMP). We then instantiate our ZKCMP with two state-of-the-art NIZK proofs: zk-SNARKs and Libra. We also propose a random sampling technique to improve the efficiency of zk-SNARKs. We conduct extensive experiments to demonstrate the practicality of our proposal.


References

  1. The first successful zero-knowledge contingent payment (2016). https://bitcoincore.org/en/2016/02/26/zero-knowledge-contingent-payments-announcement/

  2. Hashlock (2016). https://en.bitcoin.it/wiki/Hashlock

  3. Angelini, E., di Tollo, G., Roli, A.: A neural network approach for credit risk evaluation. Q. Rev. Econ. Finan. 48(4), 733–755 (2008). https://doi.org/10.1016/j.qref.2007.04.001


  4. Ben-Sasson, E., Chiesa, A., Genkin, D., Tromer, E., Virza, M.: SNARKs for C: verifying program executions succinctly and in zero knowledge. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013. LNCS, vol. 8043, pp. 90–108. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40084-1_6


  5. Bishop, C.M.: Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, Heidelberg (2006)


  6. Bitansky, N., Canetti, R., Chiesa, A., Tromer, E.: From extractable collision resistance to succinct non-interactive arguments of knowledge, and back again. In: ITCS 2012, pp. 326–349. Association for Computing Machinery, New York (2012)


  7. Campanelli, M., Gennaro, R., Goldfeder, S., Nizzardo, L.: Zero-knowledge contingent payments revisited: attacks and payments for services. In: CCS 2017, pp. 229–243 (2017)


  8. Dowlin, N., Gilad-Bachrach, R., Laine, K., Lauter, K., Naehrig, M., Wernsing, J.: Cryptonets: applying neural networks to encrypted data with high throughput and accuracy. Technical report, February 2016


  9. Fakoor, R., Ladhak, F., Nazi, A., Huber, M.: Using deep learning to enhance cancer diagnosis and classification. In: Proceedings of the International Conference on Machine Learning, vol. 28. ACM, New York (2013)


  10. Fiat, A., Shamir, A.: How to prove yourself: practical solutions to identification and signature problems. In: Odlyzko, A.M. (ed.) CRYPTO 1986. LNCS, vol. 263, pp. 186–194. Springer, Heidelberg (1987). https://doi.org/10.1007/3-540-47721-7_12


  11. Ghodsi, Z., Gu, T., Garg, S.: SafetyNets: verifiable execution of deep neural networks on an untrusted cloud. In: Advances in Neural Information Processing Systems, vol. 30, pp. 4672–4681. Curran Associates, Inc. (2017)


  12. Goldwasser, S., Kalai, Y.T., Rothblum, G.N.: Delegating computation: interactive proofs for muggles. J. ACM 62(4), 1–64 (2015)


  13. Goldwasser, S., Micali, S., Rackoff, C.: The knowledge complexity of interactive proof systems. SIAM J. Comput. 18(1), 186–208 (1989)


  14. Liu, J., Juuti, M., Lu, Y., Asokan, N.: Oblivious neural network predictions via MiniONN transformations. In: CCS 2017, pp. 619–631. Association for Computing Machinery, New York (2017)


  15. Lund, C., Fortnow, L., Karloff, H., Nisan, N.: Algebraic methods for interactive proof systems. J. ACM 39(4), 859–868 (1992)


  16. Tramèr, F., Boneh, D.: Slalom: fast, verifiable and private execution of neural networks in trusted hardware. arXiv preprint arXiv:1806.03287 (2018)

  17. Xie, T., Zhang, J., Zhang, Y., Papamanthou, C., Song, D.: Libra: succinct zero-knowledge proofs with optimal prover computation. In: Boldyreva, A., Micciancio, D. (eds.) CRYPTO 2019. LNCS, vol. 11694, pp. 733–764. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26954-8_24


  18. Zhang, Y., Genkin, D., Katz, J., Papadopoulos, D., Papamanthou, C.: VSQL: verifying arbitrary SQL queries over dynamic outsourced databases. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 863–880. IEEE (2017)


  19. Zhao, L., et al.: VeriML: enabling integrity assurances and fair payments for machine learning as a service. arXiv preprint arXiv:1909.06961 (2019)


Acknowledgment

This work is supported by the Key (Keygrant) Project of the Chinese Ministry of Education (No. 2020KJ010201) and the National Natural Science Foundation of China (Grant Nos. 62072401, 62002319, U20A20222). It is also supported by the Open Project Program of the Key Laboratory of Blockchain and Cyberspace Governance of Zhejiang Province and by GTTX Network Technology Co., Limited, and in part by the Zhejiang Key R&D Plan (Grant No. 2021C01116).

Author information

Correspondence to Jian Liu or Bingsheng Zhang.


Appendices

A The Main Building Blocks of Libra

zkVPD Scheme. A zkVPD scheme [18] allows a verifier to delegate the computation of polynomial evaluations to a powerful prover without leaking any sensitive information, and to validate the result in time constant or logarithmic in the size of the polynomial. Let \(\mathcal {F}\) be a family of \(l\)-variate polynomials over \(\mathbb {F}\). A zkVPD for \(f \in \mathcal {F}\) consists of the following algorithms (a minimal interface sketch is given after the list):

  • \((pp, vp) \leftarrow \mathsf {KeyGen}(1^\lambda )\)

  • \(com \leftarrow \mathsf {Commit}(f, r_f, pp)\)

  • \(\{0, 1\} \leftarrow \mathsf {Check}(com, vp)\)

  • \((y, \pi ) \leftarrow \mathsf {Open}(f, t, r_f, pp)\)

  • \(\{0, 1\} \leftarrow \mathsf {Verify}(com, t, y, \pi , vp)\)
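To make the data flow among these five algorithms concrete, below is a minimal Python interface sketch. The class name, method signatures, and types are illustrative assumptions for exposition only; they mirror the (pp, vp), com, and \((y, \pi )\) interfaces above rather than the concrete construction of [18].

from typing import Any, Protocol, Sequence, Tuple

Poly = Any        # an l-variate polynomial f over F (representation left abstract)
Point = Sequence[int]

class ZkVPD(Protocol):
    def keygen(self, security_param: int) -> Tuple[Any, Any]:
        """KeyGen(1^lambda) -> (pp, vp): prover and verifier parameters."""
        ...

    def commit(self, f: Poly, r_f: int, pp: Any) -> Any:
        """Commit(f, r_f, pp) -> com: a hiding commitment to the polynomial f."""
        ...

    def check(self, com: Any, vp: Any) -> bool:
        """Check(com, vp) -> {0,1}: verify that the commitment is well formed."""
        ...

    def open(self, f: Poly, t: Point, r_f: int, pp: Any) -> Tuple[int, Any]:
        """Open(f, t, r_f, pp) -> (y, pi): claimed evaluation y = f(t) with proof pi."""
        ...

    def verify(self, com: Any, t: Point, y: int, pi: Any, vp: Any) -> bool:
        """Verify(com, t, y, pi, vp) -> {0,1}: check the claim y = f(t) against com."""
        ...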

GKR Protocol. Using the sumcheck protocol [15] as its main building block, Goldwasser et al. [12] constructed an interactive protocol for layered arithmetic circuits of size \(C\) and depth \(d\). We denote the number of gates in the i-th layer by \(C_i\) and let \(c_i=\lceil \log _2 C_i \rceil \). We then define a function \(V_i:\{ 0,1 \}^{c_i} \rightarrow \mathbb {F}\) that takes a binary string \(b \in \{0,1\}^{c_i}\) as input and returns the output of gate b in layer i. Therefore, \(V_0\) corresponds to the output of the circuit and \(V_d\) corresponds to the input. Then we extend \(V_i\) to its multilinear extension.

Definition 4

(Multi-linear Extension). Let \(V:\{0,1\}^l \rightarrow \mathbb {F}\) be a function. The multilinear extension of V is the unique multilinear polynomial \(\widetilde{V}:\mathbb {F}^l \rightarrow \mathbb {F}\) such that \(\widetilde{V}(x_1, x_2, \dots , x_l)=V(x_1, x_2, \dots , x_l)\) for all \((x_1, x_2, \dots , x_l) \in \{0,1\}^l\). \(\widetilde{V}\) can be expressed as:

$$\begin{aligned} \widetilde{V}(x_1, x_2, \dots , x_l)=\sum _{b \in \{0,1\}^l}\prod _{i=1}^{l}\big ((1-x_i)(1-b_i)+x_i b_i\big )\cdot V(b), \end{aligned}$$

where \(b_i\) is the i-th bit of b.
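As a concrete illustration of Definition 4, the following Python sketch evaluates \(\widetilde{V}\) directly from the formula above over a toy prime field. The modulus and the example table for V are assumptions chosen only for illustration; a real prover would work in the proof system's field and use a linear-time evaluation algorithm.

from itertools import product

P = 2**61 - 1  # toy prime modulus (assumption)

def mle_eval(V, x):
    """Evaluate V~(x) = sum_{b in {0,1}^l} V(b) * prod_i ((1 - x_i)(1 - b_i) + x_i * b_i) mod P."""
    l = len(x)
    total = 0
    for b in product((0, 1), repeat=l):
        term = V[b]
        for xi, bi in zip(x, b):
            term = term * (((1 - xi) * (1 - bi) + xi * bi) % P) % P
        total = (total + term) % P
    return total

# Example: V given as an evaluation table on {0,1}^2.
V = {(0, 0): 5, (0, 1): 7, (1, 0): 2, (1, 1): 9}
assert all(mle_eval(V, list(b)) == V[b] for b in V)  # agrees with V on the hypercube
print(mle_eval(V, [3, 10]))                          # and extends it to any field point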

To ensure zero knowledge, \(\mathcal {P}\) masks the polynomial \(\widetilde{V}_i\) and the sumcheck protocol by adding random polynomials. In particular, for layer i, \(\mathcal {P}\) selects a random bivariate polynomial \(R_i(x_1, z)\) and defines

$$\begin{aligned} \overline{V}_i(x_1,\dots ,x_{c_i}) = \widetilde{V}_i(x_1, \dots , x_{c_i}) + Z_i(x_1, \dots , x_{c_i})\cdot \sum _{z \in \{0,1\}}R_i(x_1, z), \end{aligned}$$
(2)

where \(Z_i(x) = \prod _{j=1}^{c_i} x_j(1-x_j)\), so that \(Z_i(x) = 0\) for all \(x \in \{0,1\}^{c_i}\). Since \(R_i\) is randomly selected, revealing evaluations of \(\overline{V}_i\) does not leak information about \(\widetilde{V}_i\). A random polynomial \(\delta _i(x,y,z)\) is also selected to mask the sumcheck protocol. In this way, the sumcheck protocol does not leak information and is thus zero-knowledge. See [17] for more details.
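The toy Python sketch below illustrates Eq. (2): since \(Z_i\) vanishes on the boolean hypercube, the masked polynomial \(\overline{V}_i\) agrees with \(\widetilde{V}_i\) there, while its values elsewhere are randomized by \(R_i\). The modulus, the shape of \(R_i\), and the placeholder for \(\widetilde{V}_i\) are illustrative assumptions.

import random
from itertools import product

P = 2**61 - 1  # toy prime modulus (assumption)
c_i = 3        # number of variables in layer i (assumption)

def Z(x):
    """Z_i(x) = prod_j x_j * (1 - x_j); vanishes on {0,1}^{c_i}."""
    z = 1
    for xj in x:
        z = z * xj % P * (1 - xj) % P
    return z

# A random low-degree bivariate R_i(x1, z), given by four coefficients (assumption).
a = [random.randrange(P) for _ in range(4)]
def R(x1, z):
    return (a[0] + a[1] * z + a[2] * x1 + a[3] * x1 * z) % P

def V_tilde(x):
    """Placeholder for the honest multilinear extension of layer i (assumption)."""
    return (x[0] + 2 * x[1] + 3 * x[2]) % P

def V_bar(x):
    """Eq. (2): V~_i(x) + Z_i(x) * sum_{z in {0,1}} R_i(x_1, z)."""
    return (V_tilde(x) + Z(x) * ((R(x[0], 0) + R(x[0], 1)) % P)) % P

# On the boolean hypercube the mask vanishes, so V_bar agrees with V_tilde there.
assert all(V_bar(list(b)) == V_tilde(list(b)) for b in product((0, 1), repeat=c_i))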

B Proof of Theorem 1

Proof

For perfect completeness, since the underlying \(\mathsf {NIZK}\) is perfectly complete, \(\mathsf {Verify}\) always returns 1, and \(\mathcal {F}_{ex}\) guarantees that the buyer \(\mathcal {B}\) will receive k when the event \(\mathcal {E}_m\) occurs.

For 0-soundness, the event \(\mathcal {E}_v\) occurs when the potentially malicious seller \(\hat{\mathcal {S}}\) produces an accepting proof \(\pi \) and submits \((\mathsf {Redeem},k,d)\) to \(\mathcal {F}_{ex}[\mathsf {COM}]\). By the soundness of the underlying \(\mathsf {NIZK}\) protocol, with overwhelming probability, the model parameters \(w:=(w_1,\ldots , w_\ell )\) satisfy \(|\{ i\;| \; F(w,x_i) = y_i \; \wedge \; \mathsf {argmax}(y_i) = L_i \}| \ge n\cdot \tau \), where \(\forall i\in [\ell ]: w_i = c_i \oplus \mathsf {PRF}(k,i) \; \wedge \; \mathsf {COM}.\mathsf {Verify}(E,d,k)= 1 \). Moreover, due to the binding property of the commitment scheme \(\mathsf {COM}\), k cannot be changed afterwards. Therefore, we can construct an extractor \(\mathsf {Ext}_{\hat{\mathcal {S}}}\) that takes as input \(\{c_i\}_{i\in [\ell ]}\) and k from the outgoing messages of \(\hat{\mathcal {S}}\) and outputs the model as \(w_i = c_i \oplus \mathsf {PRF}(k,i)\).
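To illustrate this ciphertext structure, the following Python sketch shows the seller's sealing step and the extractor's (or the buyer's) recovery step \(w_i = c_i \oplus \mathsf {PRF}(k,i)\). HMAC-SHA256 as the PRF and 4-byte weight encodings are assumptions made for the sketch, not the instantiation fixed by the paper.

import hashlib
import hmac
import os

def prf(k: bytes, i: int, out_len: int) -> bytes:
    """PRF(k, i), truncated to out_len bytes (HMAC-SHA256 as a stand-in)."""
    return hmac.new(k, i.to_bytes(8, "big"), hashlib.sha256).digest()[:out_len]

def seal(weights, k: bytes):
    """Seller: c_i = w_i XOR PRF(k, i)."""
    return [bytes(a ^ b for a, b in zip(w, prf(k, i, len(w))))
            for i, w in enumerate(weights)]

def extract(cipher, k: bytes):
    """Extractor / buyer after redemption: w_i = c_i XOR PRF(k, i)."""
    return [bytes(a ^ b for a, b in zip(c, prf(k, i, len(c))))
            for i, c in enumerate(cipher)]

k = os.urandom(32)
weights = [os.urandom(4) for _ in range(8)]  # toy fixed-point weight encodings
assert extract(seal(weights, k), k) == weights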

For computational zero-knowledge, we first construct a simulator \(\mathsf {Sim}\) that works as follows (an illustrative sketch of the \(\mathsf {Seal}\) step is given after the list).

  • During \(\mathsf {Setup}\):

    • Invoke \((\mathsf {crs}^*,\mathsf {td})\leftarrow \mathsf {NIZK}.\mathsf {Sim}_1(1^\lambda )\);

    • Output \(pp:=\mathsf {crs}^*\);

  • During \(\mathsf {Seal}\):

    • Pick a random key \(k^*\leftarrow \{0,1\}^\lambda \);

    • Compute \((E^*, d^*)\leftarrow \mathsf {COM.Commit}(k^*)\);

    • For \(i\in [\ell ]\), sample \(c^*_i \leftarrow \{0,1\}^{\mu (\lambda )}\) uniformly at random, where \(\mu (\lambda ):=|c_i|\);

    • Output \((c^*:=(c^*_1,\ldots , c^*_\ell ),E^*)\);

  • During \(\mathsf {Prove}\):

    • Invoke \(\pi ^*\leftarrow \mathsf {NIZK}.\mathsf {Sim}_2(pp,(c^*,E^*,\mathcal {D},\tau ),\mathsf {td})\);

    • Output \(\pi ^*\);
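As a small illustration that the simulated \(\mathsf {Seal}\) step uses no information about the model, the Python sketch below produces \((c^*, E^*)\) from public sizes alone; the hash-based commitment is an assumption standing in for \(\mathsf {COM.Commit}\).

import hashlib
import os

def sim_seal(ell: int, mu_bytes: int):
    """Sim's Seal step: random key, commitment to it, and uniformly random ciphertexts."""
    k_star = os.urandom(32)                                # random key k*
    d_star = os.urandom(32)                                # commitment randomness d*
    E_star = hashlib.sha256(k_star + d_star).digest()      # (E*, d*) <- COM.Commit(k*)
    c_star = [os.urandom(mu_bytes) for _ in range(ell)]    # c*_i <- {0,1}^mu uniformly
    return c_star, E_star

c_star, E_star = sim_seal(ell=8, mu_bytes=4)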

Lemma 3

The adversary’s view output by the simulator \(\mathsf {Sim}\) described above is indistinguishable from the real view with distinguishing advantage at most

\(\mathsf {Adv}^{\mathcal {A},ZK}_{\mathsf {NIZK}}(1^\lambda )+ \mathsf {Adv}^{\mathcal {A},Hide}_{\mathsf {COM}}(1^\lambda )+ \ell \cdot \mathsf {Adv}^{\mathcal {A}}_{\mathsf {PRF}}(1^\lambda )\).

Proof

We prove Lemma 3 by the sequence of hybrids \(\mathcal {H}_0,\ldots , \mathcal {H}_3\) as follows.

Hybrid \(\mathcal {H}_0\): it is the real view.

Hybrid \(\mathcal {H}_1\): it is the same as Hybrid \(\mathcal {H}_0\), except that during \(\mathsf {Setup}\), \(\mathsf {NIZK}.\mathsf {Sim}_1(1^\lambda )\) is used to generate the simulated CRS \(\mathsf {crs}^*\), and during \(\mathsf {Prove}\), \(\pi ^*\) is generated by \(\mathsf {NIZK}.\mathsf {Sim}_2(pp,(c,E,\mathcal {D},\tau ),\mathsf {td})\) instead of the real proof.

Claim 1

If the underlying NIZK proof system is computationally zero-knowledge with advantage \(\mathsf {Adv}^{\mathcal {A},ZK}_{\mathsf {NIZK}}(1^\lambda )\), then the view of Hybrid \(\mathcal {H}_1\) is indistinguishable from the view of Hybrid \(\mathcal {H}_0\) with distinguishing advantage \(\mathsf {Adv}^{\mathcal {A},ZK}_{\mathsf {NIZK}}(1^\lambda )\).

Proof

By Definition 1, it is straightforward that if an adversary \(\mathcal {A}\) can distinguish \(\mathcal {H}_1\) from \(\mathcal {H}_0\) with advantage \(\mathsf {Adv}^{\mathcal {A},ZK}_{\mathsf {NIZK}}(1^\lambda )\), then \(\mathcal {A}\) can break the zero-knowledge property of the underlying NIZK proof system with the same advantage.    \(\square \)

Hybrid \(\mathcal {H}_2\): it is the same as Hybrid \(\mathcal {H}_1\), except that during \(\mathsf {Seal}\), \((E^*,d^*)\) is computed as \(\mathsf {COM.Commit}(k^*)\) instead of \(\mathsf {COM.Commit}(k)\).

Claim 2

If the distinguishing advantage of the \(\mathsf {COM}\) hiding property is \(\mathsf {Adv}^{\mathcal {A},Hide}_{\mathsf {COM}}(1^\lambda )\), then the view of Hybrid \(\mathcal {H}_2\) is indistinguishable from the view of Hybrid \(\mathcal {H}_1\) with distinguishing advantage \( \mathsf {Adv}^{\mathcal {A},Hide}_{\mathsf {COM}}(1^\lambda )\).

Proof

It is straightforward by direct reduction.    \(\square \)

Hybrid \(\mathcal {H}_3\): it is the same as Hybrid \(\mathcal {H}_2\), except that during \(\mathsf {Seal}\), for \(i\in [\ell ]\), \(c^*_i\) is sampled uniformly from \(\{0,1\}^{\mu (\lambda )}\) instead of being computed as \(w_i\oplus \mathsf {PRF}(k,i)\).

Claim 3

If the distinguishing advantage of \(\mathsf {PRF}\) is \(\mathsf {Adv^{\mathcal {A}}_{\mathsf {PRF}}}(1^\lambda )\), then the view of Hybrid \(\mathcal {H}_3\) is indistinguishable from the view of Hybrid \(\mathcal {H}_2\) with distinguishing advantage \(\ell \cdot \mathsf {Adv^{\mathcal {A}}_{\mathsf {PRF}}}(1^\lambda )\).

Proof

First, the distribution of \(D_i := c^*_i\oplus w_i\) is uniformly random. Since the advantage in distinguishing \(D_i\) from \(\mathsf {PRF}(k,i)\) is bounded by the PRF advantage \(\mathsf {Adv^{\mathcal {A}}_{\mathsf {PRF}}}(1^\lambda )\), a hybrid argument over the \(\ell \) indices bounds the overall distinguishing advantage between \(\mathcal {H}_3\) and \(\mathcal {H}_2\) by \(\ell \cdot \mathsf {Adv^{\mathcal {A}}_{\mathsf {PRF}}}(1^\lambda )\).   \(\square \)

Hybrid \(\mathcal {H}_3\) is the simulated view; therefore, the overall distinguishing advantage is \(\mathsf {Adv}^{\mathcal {A},ZK}_{\mathsf {NIZK}}(1^\lambda )+ \mathsf {Adv}^{\mathcal {A},Hide}_{\mathsf {COM}}(1^\lambda )+ \ell \cdot \mathsf {Adv}^{\mathcal {A}}_{\mathsf {PRF}}(1^\lambda )\).    \(\square \)

This concludes the proof.    \(\square \)


Copyright information

© 2021 Springer Nature Switzerland AG


Cite this paper

Zhou, Z., Cao, X., Liu, J., Zhang, B., Ren, K. (2021). Zero Knowledge Contingent Payments for Trained Neural Networks. In: Bertino, E., Shulman, H., Waidner, M. (eds) Computer Security – ESORICS 2021. ESORICS 2021. Lecture Notes in Computer Science, vol. 12973. Springer, Cham. https://doi.org/10.1007/978-3-030-88428-4_31


  • DOI: https://doi.org/10.1007/978-3-030-88428-4_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88427-7

  • Online ISBN: 978-3-030-88428-4
