
Towards a Typology of Intentionally Inaccurate Representations of Reality in Media Content

  • Conference paper
  • First Online:
Human-Centric Computing in a Data-Driven Society (HCC 2020)

Abstract

In this paper, we examine three concepts frequently discussed in relation to the spread of misinformation and propaganda online: fake news, deepfakes and cheapfakes. We identify two main problems with how these phenomena are conceptualized. First, while they are often discussed in relation to each other, it is rarely clear what these concepts are examples of. It is sometimes argued that all of them are examples of misleading content online. This is a one-sided picture, as it excludes the vast amount of online content in which the same techniques are used for memes, satire and parody, which form part of the foundation of today’s online culture. Second, because of this conceptual confusion, much research and practice focuses on how to prevent and detect audiovisual media content that has been tampered with, either manually or through the use of AI. This has recently led to a ban on deepfaked content on Facebook. We argue, however, that such measures do not address the problems related to the spread of misinformation: instead of targeting the source of the problem, they merely target one of its symptoms. The main contribution of this paper is a typology of what we term Intentionally Inaccurate Representations of Reality (IIRR) in media content. In contrast to deepfakes, cheapfakes and fake news, terms with mainly negative connotations, this term emphasizes both sides: the creative and fun as well as the malicious uses of AI- and non-AI-powered editing techniques.


Notes

  1. A meme is a visual or textual expression of an idea, behaviour or style that spreads and evolves online by means of imitation. It is not to be confused with “viral content”, which refers to anything (a text, an image or a video) that is frequently shared online.

  2. This refers to a certain way of completing a video game, or selected parts of it, as quickly as possible. To prove that the run has been completed in a given time, the runner needs to provide video evidence.


Author information


Corresponding author

Correspondence to Matthew J. Davis.


Copyright information

© 2020 IFIP International Federation for Information Processing

About this paper


Cite this paper

Davis, M.J., Fors, P. (2020). Towards a Typology of Intentionally Inaccurate Representations of Reality in Media Content. In: Kreps, D., Komukai, T., Gopal, T.V., Ishii, K. (eds) Human-Centric Computing in a Data-Driven Society. HCC 2020. IFIP Advances in Information and Communication Technology, vol 590. Springer, Cham. https://doi.org/10.1007/978-3-030-62803-1_23


  • DOI: https://doi.org/10.1007/978-3-030-62803-1_23

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62802-4

  • Online ISBN: 978-3-030-62803-1

  • eBook Packages: Computer Science, Computer Science (R0)
