CryptoMask: Privacy-Preserving Face Recognition

  • Conference paper in: Information and Communications Security (ICICS 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14252)

Abstract

Face recognition is a widely used technique for identification or verification, where a verifier checks whether a face image matches anyone stored in a database. However, in scenarios where the database is held by a third party, such as a cloud server, both parties are concerned about data privacy. To address this concern, we propose CryptoMask, a privacy-preserving face recognition system that employs homomorphic encryption (HE) and secure multi-party computation (MPC). We design a new encoding strategy that leverages HE properties to reduce communication costs and enable efficient similarity checks between face images without expensive homomorphic rotations. Additionally, CryptoMask leaks less information than existing state-of-the-art approaches: it reveals only whether there is an image matching the query, whereas existing approaches additionally leak sensitive intermediate distance information. We conduct extensive experiments that demonstrate CryptoMask's superior performance in terms of computation and communication. For a database with 100 million 512-dimensional face vectors, CryptoMask offers \({\sim }5\times \) and \({\sim }144\times \) speed-ups in computation and communication, respectively.


Notes

  1. \(\boldsymbol{1}\{condition\}\) denotes that the condition is true, and \(\boldsymbol{0}\{condition\}\) that it is false.

References

  1. Acar, A., Aksu, H., Uluagac, A.S., Conti, M.: A survey on homomorphic encryption schemes: theory and implementation. ACM Comput. Surv. 51(4), 1–35 (2018)

  2. Beaver, D.: Efficient multiparty protocols using circuit randomization. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 420–432. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-46766-1_34

  3. Boddeti, V.N.: Secure face matching using fully homomorphic encryption. In: 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1–10. IEEE (2018)

  4. Bowyer, K.W.: Face recognition technology: security versus privacy. IEEE Technol. Soc. Mag. 23(1), 9–19 (2004)

  5. Chen, H., Dai, W., Kim, M., Song, Y.: Efficient homomorphic conversion between (ring) LWE ciphertexts. In: Sako, K., Tippenhauer, N.O. (eds.) ACNS 2021. LNCS, vol. 12726, pp. 460–479. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-78372-3_18

  6. Cherepanova, V., et al.: LowKey: leveraging adversarial attacks to protect social media users from facial recognition. arXiv preprint arXiv:2101.07922 (2021)

  7. Danielsson, P.E.: Euclidean distance mapping. Comput. Graph. Image Process. 14(3), 227–248 (1980)

  8. Demmler, D., Schneider, T., Zohner, M.: ABY - a framework for efficient mixed-protocol secure two-party computation. In: NDSS (2015)

  9. Engelsma, J.J., Jain, A.K., Boddeti, V.N.: HERS: homomorphically encrypted representation search. IEEE Trans. Biom. Behav. Identity Sci. (2022). https://github.com/human-analysis/secure-face-matching

  10. Erkin, Z., Franz, M., Guajardo, J., Katzenbeisser, S., Lagendijk, I., Toft, T.: Privacy-preserving face recognition. In: Goldberg, I., Atallah, M.J. (eds.) PETS 2009. LNCS, vol. 5672, pp. 235–253. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03168-7_14

  11. Fan, J., Vercauteren, F.: Somewhat practical fully homomorphic encryption. Cryptology ePrint Archive (2012)

  12. Gentry, C.: Fully homomorphic encryption using ideal lattices. In: Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, pp. 169–178 (2009)

  13. Haigh, T.: The chromium-plated tabulator: institutionalizing an electronic revolution, 1954–1958. IEEE Ann. Hist. Comput. 23(4), 75–104 (2001)

  14. Hu, S., Li, M., Wang, Q., Chow, S.S., Du, M.: Outsourced biometric identification with privacy. IEEE Trans. Inf. Forensics Secur. 13(10), 2448–2463 (2018)

  15. Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. In: Workshop on Faces in Real-Life Images: Detection, Alignment, and Recognition (2008)

  16. Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Technical report 07-49, University of Massachusetts, Amherst, October 2007

  17. Huang, Z., Lu, W.j., Hong, C., Ding, J.: Cheetah: lean and fast secure two-party deep neural network inference. IACR Cryptol. ePrint Arch. 2022, 207 (2022). https://github.com/Alibaba-Gemini-Lab/OpenCheetah

  18. Ishai, Y., Kilian, J., Nissim, K., Petrank, E.: Extending oblivious transfers efficiently. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 145–161. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45146-4_9

  19. Jin, Z., Hwang, J.Y., Lai, Y.L., Kim, S., Teoh, A.B.J.: Ranking-based locality sensitive hashing-enabled cancelable biometrics: index-of-max hashing. IEEE Trans. Inf. Forensics Secur. 13(2), 393–407 (2017)

  20. Kim, A., Polyakov, Y., Zucca, V.: Revisiting homomorphic encryption schemes for finite fields. In: Tibouchi, M., Wang, H. (eds.) ASIACRYPT 2021. LNCS, vol. 13092, pp. 608–639. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-92078-4_21

  21. Lee, Y.J., Park, K.R., Lee, S.J., Bae, K., Kim, J.: A new method for generating an invariant iris private key based on the fuzzy vault system. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 38(5), 1302–1313 (2008)

  22. Parmar, D.N., Mehta, B.B.: Face recognition methods & applications. arXiv preprint arXiv:1403.0485 (2014)

  23. Patel, V.M., Ratha, N.K., Chellappa, R.: Cancelable biometrics: a review. IEEE Signal Process. Mag. 32(5), 54–65 (2015)

  24. Pradel, G., Mitchell, C.: Privacy-preserving biometric matching using homomorphic encryption. In: 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), pp. 494–505. IEEE (2021)

  25. Rao, Y.S., Sukonkina, Y., Bhagwati, C., Singh, U.K.: Fingerprint based authentication application using visual cryptography methods (improved ID card). In: TENCON 2008 - 2008 IEEE Region 10 Conference, pp. 1–5. IEEE (2008)

  26. Rathee, D., et al.: CrypTFlow2: practical 2-party secure inference. In: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pp. 325–342 (2020)

  27. Rivest, R.L., Adleman, L., Dertouzos, M.L.: On data banks and privacy homomorphisms. Found. Secure Comput. 4(11), 169–180 (1978)

  28. Ross, A., Othman, A.: Visual cryptography for biometric privacy. IEEE Trans. Inf. Forensics Secur. 6(1), 70–81 (2010)

  29. Schroff, F., Kalenichenko, D., Philbin, J.: FaceNet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823 (2015)

  30. Microsoft SEAL (release 3.7), September 2021. Microsoft Research, Redmond, WA. https://github.com/Microsoft/SEAL

  31. Shashank, J., Kowshik, P., Srinathan, K., Jawahar, C.: Private content based image retrieval. In: 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. IEEE (2008)

  32. Singhal, A.: Modern information retrieval: a brief overview. IEEE Data Eng. Bull. 24(4), 35–43 (2001)

  33. Smart, N.P., Vercauteren, F.: Fully homomorphic SIMD operations. Des. Codes Cryptogr. 71(1), 57–81 (2014)

  34. Troncoso-Pastoriza, J.R., González-Jiménez, D., Pérez-González, F.: Fully private noninteractive face verification. IEEE Trans. Inf. Forensics Secur. 8(7), 1101–1114 (2013)

  35. Uludag, U., Pankanti, S., Jain, A.K.: Fuzzy vault for fingerprints. In: Kanade, T., Jain, A., Ratha, N.K. (eds.) AVBPA 2005. LNCS, vol. 3546, pp. 310–319. Springer, Heidelberg (2005). https://doi.org/10.1007/11527923_32

  36. Upmanyu, M., Namboodiri, A.M., Srinathan, K., Jawahar, C.V.: Efficient biometric verification in encrypted domain. In: Tistarelli, M., Nixon, M.S. (eds.) ICB 2009. LNCS, vol. 5558, pp. 899–908. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-01793-3_91

  37. Upmanyu, M., Namboodiri, A.M., Srinathan, K., Jawahar, C.: Efficient privacy preserving video surveillance. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 1639–1646. IEEE (2009)

  38. Yao, A.C.C.: How to generate and exchange secrets. In: 27th Annual Symposium on Foundations of Computer Science (SFCS 1986), pp. 162–167. IEEE (1986)


Acknowledgment

We thank the anonymous reviewers for their insightful comments and suggestions. Bai and Russello would like to acknowledge the MBIE-funded programme STRATUS (UOWX1503) for its support and inspiration for this research.

Author information


Corresponding author

Correspondence to Xiangfu Song.


Appendices

A Complexity and Security Analysis

We first provide a theoretical complexity analysis to demonstrate the efficiency of CryptoMask. We then show that CryptoMask is secure against a semi-honest adversary, assuming that KG is fully trusted.

A.1 Complexity Analysis

In CryptoMask, the communication overhead mainly comes from two parts. The first is from CS, which sends all the encrypted distances to the verifier, incurring \(O(Nm/d)\) communication. The second is the secure revealing process, which requires \(O(ml)\) communication. The overall communication complexity is therefore \(O(Nm/d+ml)\). The computation overhead is more involved. Let \(C_{en}\) denote the cost of HE encryption, \(C_{mul}\) the cost of homomorphic multiplication, \(C_{add}\) the cost of homomorphic addition, \(C_{sw}\) the cost of key switching, \(C_{com}\) the cost of secure comparison, and \(C_{cov}\) the cost of secure B2A conversion. The overall computation overhead on the CS side is \(O((Nm/d)(C_{mul}+C_{add}+C_{sw}) + m(C_{com}+C_{cov}))\), and on the verifier side it is \(O(C_{en} + m(C_{com}+C_{cov}))\).
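To make the asymptotic bound concrete, the minimal sketch below simply evaluates the two terms of \(O(Nm/d+ml)\) for example parameters. The interpretation of the symbols and every concrete value are our own illustrative assumptions, not figures taken from the paper.

```python
# Illustrative evaluation of the O(Nm/d + ml) communication bound above.
# The symbol roles and all concrete values are placeholder assumptions
# for illustration; they are not parameters reported in the paper.

def communication_terms(N: int, m: int, d: int, l: int) -> tuple[int, int]:
    """Return the two terms of the bound, counted in abstract cost units."""
    distance_term = N * m // d  # encrypted distances sent by CS to the verifier
    reveal_term = m * l         # secure revealing of the comparison results
    return distance_term, reveal_term

if __name__ == "__main__":
    # Placeholder values chosen only to show how the two terms scale.
    N, m, d, l = 1_000_000, 4096, 512, 32
    dist, reveal = communication_terms(N, m, d, l)
    print(f"distance term: {dist:,}  revealing term: {reveal:,}")
    # For large databases, the O(Nm/d) term dominates the O(ml) term.
```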

A.2 Security Analysis

Privacy of Face Vector Matrix. In CryptoMask, all face vectors are encrypted under HE, and only the KG knows the secret key. By the semantic security of HE, neither CS nor the verifier learns any information about the underlying encrypted face vectors; thus, the privacy of the face vectors is always maintained.

We now show that CryptoMask reveals only the face recognition result to the verifier and nothing else to either party. We argue this separately for a corrupted CS and a corrupted verifier. Note that we only argue the security of the HE-based part, as the simulation of the comparison/B2A protocols follows existing techniques.

Corrupted CS. We first demonstrate security against a semi-honest CS. Intuitively, this security follows from the fact that the CS’s view of the execution consists only of ciphertexts, which reduces the argument to the semantic security of HE. We now give the formal argument.

Let \(\mathcal {A}\) be the semi-honest CS in the real protocol. We construct a simulator \(\mathcal {S}\) in the ideal world as follows:

  1. At the beginning of the protocol execution, \(\mathcal {S}\) receives the input \(\boldsymbol{\textsf{A}}\) from the environment \(\mathcal {E}\) and also receives the public key pk and the vector length d. The simulator sends \(\boldsymbol{\textsf{A}}\) to the trusted party.

  2. Start running \(\mathcal {A}\) on input \(\boldsymbol{\textsf{A}}\). Next, \(\mathcal {S}\) computes a ciphertext ct that encrypts the d-dimensional vector \(\boldsymbol{0}\) under the public key pk and sends it to the CS.

  3. Output whatever \(\mathcal {A}\) outputs.

We argue that the above simulated view is indistinguishable from a real protocol execution. Since \(\mathcal {A}\) is semi-honest, at the end of the protocol in the real world the verifier obtains the encryption of \(\boldsymbol{\textsf{A}} \cdot \boldsymbol{b}\), where \(\boldsymbol{b}\) is the verifier’s queried face image. Since \(\mathcal {S}\) forwards \(\boldsymbol{\textsf{A}}\) unchanged to the trusted party, the same holds in the ideal world. Because \(\boldsymbol{\textsf{A}} \cdot \boldsymbol{b}\) is a deterministic function of the inputs, the joint distribution of the verifier’s output and the adversary’s output decomposes, so it suffices to show that the view simulated by \(\mathcal {S}\) is computationally indistinguishable from the real view of \(\mathcal {A}\).

The view of \(\mathcal {A}\) in the real world consists of a single message: the encrypted face image ct from the verifier. When interacting with the simulator \(\mathcal {S}\), \(\mathcal {A}\) instead sees an encryption of \(\boldsymbol{0}\). Security follows immediately from the semantic security of the BFV scheme.
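In symbols, the claim can be summarized as follows (a restatement of the argument above, writing \(\mathsf{Enc}\) for BFV encryption and \(\approx _c\) for computational indistinguishability):

\[
\mathrm{view}^{\,\mathrm{real}}_{\mathcal {A}} \;=\; \{\, ct = \mathsf{Enc}_{pk}(\boldsymbol{b}) \,\} \;\approx _c\; \{\, \mathsf{Enc}_{pk}(\boldsymbol{0}) \,\} \;=\; \mathrm{view}^{\,\mathrm{ideal}}_{\mathcal {A}}.
\]

Any distinguisher between the two views is directly an IND-CPA adversary against BFV.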

Corrupted Verifier. We now prove security against a semi-honest verifier. We construct a simulator \(\mathcal {S}\) in the ideal world as follows:

  1. At the beginning of the execution, \(\mathcal {S}\) receives the input \(\boldsymbol{b}\) from the environment \(\mathcal {E}\) and also receives the BFV key pair \((pk, sk)\) and the matrix dimensions \(m\) and \(d\). The simulator sends \(\boldsymbol{b}\) to the trusted party.

  2. Start running \(\mathcal {A}\) on input \(\boldsymbol{b}\). Next, \(\mathcal {S}\) computes ciphertexts \(c_i\) that encrypt an \(m \times d\) matrix filled with random values under the public key \(pk_v\) and sends them to the verifier.

  3. Output whatever \(\mathcal {A}\) outputs.

At the end of face recognition, CS has no output. Thus, to show security against a semi-honest verifier, it suffices to show that the view simulated by \(\mathcal {S}\) in the ideal world is computationally indistinguishable from the view of the adversary \(\mathcal {A}\) in the real world.

The view of \(\mathcal {A}\) in the real world consists of a single message: the encrypted face database \(\{c_1, \cdots , c_n\}\) from CS. When interacting with the simulator \(\mathcal {S}\), \(\mathcal {A}\) instead sees encryptions of random values. Security follows immediately from the semantic security of the BFV scheme.
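The step above can also be written as a standard hybrid argument over the database ciphertexts; this is our phrasing of the reduction, writing \(\boldsymbol{\textsf{A}}_i\) for the block of the database packed into \(c_i\) and \(r_i\) for the random values used by \(\mathcal {S}\):

\[
H_j \;=\; \{\, \mathsf{Enc}_{pk_v}(r_1), \ldots , \mathsf{Enc}_{pk_v}(r_j),\; \mathsf{Enc}_{pk_v}(\boldsymbol{\textsf{A}}_{j+1}), \ldots , \mathsf{Enc}_{pk_v}(\boldsymbol{\textsf{A}}_{n}) \,\}, \qquad 0 \le j \le n,
\]

where \(H_0\) is the real view and \(H_n\) is the simulated view. Adjacent hybrids differ in a single ciphertext, so \(H_{j-1} \approx _c H_j\) by the IND-CPA security of BFV, and hence \(H_0 \approx _c H_n\).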

Table 2. Face recognition accuracy on the LFW dataset (TAR @ FAR, in \(\%\))

B Accuracy

Table 2 reports face recognition results on the LFW dataset for the state-of-the-art face representation FaceNet. We test only 128-D face templates; for results on other representations, we refer to [3], which is also built on BFV. As in [3], we report the true acceptance rate (TAR) at three operating points, i.e., false accept rates (FARs) of 0.01%, 0.1%, and 1.0%. We first report the performance on unencrypted face images and treat these results as the baseline. To evaluate encrypted face images, we consider four quantization levels for each element of the facial feature vector, namely precisions of 0.1, 0.01, 0.0025, and 0.0001. The results show that most of the evaluated precisions are competitive with the baseline on raw data. We conclude that CryptoMask, operating over HE and MPC, can perform as well as recognition over raw data.
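For concreteness, the following minimal sketch illustrates element-wise fixed-point quantization at the four precisions evaluated above. The rounding rule and the synthetic unit-norm embedding are our assumptions for illustration; only the precision values come from the text.

```python
import numpy as np

# Illustrative fixed-point quantization of a 128-D embedding at the four
# precisions evaluated above. The rounding rule and the synthetic, unit-norm
# embedding are assumptions for illustration only.

def quantize(x: np.ndarray, precision: float) -> np.ndarray:
    """Snap each feature to the nearest multiple of `precision` (as integers)."""
    return np.round(x / precision).astype(np.int64)

def dequantize(q: np.ndarray, precision: float) -> np.ndarray:
    return q.astype(np.float64) * precision

rng = np.random.default_rng(0)
emb = rng.normal(size=128)
emb /= np.linalg.norm(emb)  # FaceNet-style embeddings are unit-normalised

for p in (0.1, 0.01, 0.0025, 0.0001):
    err = np.abs(dequantize(quantize(emb, p), p) - emb).max()
    print(f"precision {p:>7}: max per-element error {err:.2e}")
```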


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Bai, J. et al. (2023). CryptoMask: Privacy-Preserving Face Recognition. In: Wang, D., Yung, M., Liu, Z., Chen, X. (eds) Information and Communications Security. ICICS 2023. Lecture Notes in Computer Science, vol 14252. Springer, Singapore. https://doi.org/10.1007/978-981-99-7356-9_20


  • DOI: https://doi.org/10.1007/978-981-99-7356-9_20


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-7355-2

  • Online ISBN: 978-981-99-7356-9

  • eBook Packages: Computer Science, Computer Science (R0)
