DeepFakes Have No Heart: A Simple rPPG-Based Method to Reveal Fake Videos

Conference paper. In: Image Analysis and Processing – ICIAP 2022 (ICIAP 2022).

Abstract

We present a simple yet general method to detect fake videos of human subjects generated via deep learning techniques. The method relies on gauging the complexity of heart rate dynamics derived from facial video streams through remote photoplethysmography (rPPG). The analyzed features have clear semantics with respect to this physiological behaviour; the approach is thus explainable both in terms of the underlying context model and the entailed computational steps. Most importantly, when compared to more complex state-of-the-art detection methods, the results achieved so far give evidence of its capability to cope with datasets produced by different deepfake models.
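The "complexity of heart rate dynamics" mentioned above can be quantified with standard time-series complexity measures; permutation entropy (Bandt and Pompe) is one plausible such feature. The following is a minimal sketch, not the authors' implementation: rPPG extraction from video is omitted, and a synthetic signal stands in for the recovered pulse. A regular (quasi-periodic) signal should score lower than noise.

```python
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal (0 = regular, 1 = random)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        # Embed the signal and reduce each window to its ordinal pattern
        window = x[i : i + order * delay : delay]
        key = tuple(np.argsort(window))
        counts[key] = counts.get(key, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    # Shannon entropy of the pattern distribution, normalized by log2(order!)
    return float(-np.sum(p * np.log2(p)) / math.log2(math.factorial(order)))

# A clean periodic "pulse" vs. white noise: the former is far less complex
t = np.linspace(0, 10, 1000)
rng = np.random.default_rng(0)
pe_sine = permutation_entropy(np.sin(2 * np.pi * t))
pe_noise = permutation_entropy(rng.standard_normal(1000))
print(f"periodic: {pe_sine:.3f}  noise: {pe_noise:.3f}")
```

In a detection pipeline of this kind, such complexity scores (computed over rPPG traces) would feed a conventional classifier; the feature names and classifier choice here are illustrative, not taken from the paper.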



Author information

Correspondence to Alessandro D'Amelio.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Boccignone, G. et al. (2022). DeepFakes Have No Heart: A Simple rPPG-Based Method to Reveal Fake Videos. In: Sclaroff, S., Distante, C., Leo, M., Farinella, G.M., Tombari, F. (eds) Image Analysis and Processing – ICIAP 2022. ICIAP 2022. Lecture Notes in Computer Science, vol 13232. Springer, Cham. https://doi.org/10.1007/978-3-031-06430-2_16


  • DOI: https://doi.org/10.1007/978-3-031-06430-2_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-06429-6

  • Online ISBN: 978-3-031-06430-2

  • eBook Packages: Computer Science (R0)
