Abstract
In this paper, we examine three concepts frequently discussed in relation to the spread of misinformation and propaganda online: fake news, deepfakes and cheapfakes. We identify two main problems with how these three phenomena are conceptualized. First, while they are often discussed in relation to each other, it is often unclear what these concepts are examples of. It is sometimes argued that all of them are examples of misleading content online. This is a one-sided picture, as it excludes a vast amount of online content, namely the use of these techniques for memes, satire and parody, which form part of the foundation of today’s online culture. Second, because of this conceptual confusion, much research and practice focuses on how to prevent and detect audiovisual media content that has been tampered with, either manually or through the use of AI. This has recently led to a ban on deepfaked content on Facebook. However, we argue that such measures do not address the problems underlying the spread of misinformation: instead of targeting the source of the problem, they merely target one of its symptoms. The main contribution of this paper is a typology of what we term Intentionally Inaccurate Representations of Reality (IIRR) in media content. In contrast to deepfakes, cheapfakes and fake news – terms with mainly negative connotations – this term emphasizes both sides: the creative and fun, as well as the malicious, uses of AI- and non-AI-powered editing techniques.
Notes
1. A meme is a visual or textual expression of an idea, behaviour or style that spreads and evolves online by means of imitation. It is not to be confused with “viral content”, which refers to anything – a text, an image or a video – that is frequently shared online.
2. This refers to completing a video game, or selected parts of it, as quickly as possible. To prove that the run has been completed in a certain time, the runner must provide video evidence.
Copyright information
© 2020 IFIP International Federation for Information Processing
Davis, M.J., Fors, P. (2020). Towards a Typology of Intentionally Inaccurate Representations of Reality in Media Content. In: Kreps, D., Komukai, T., Gopal, T.V., Ishii, K. (eds) Human-Centric Computing in a Data-Driven Society. HCC 2020. IFIP Advances in Information and Communication Technology, vol 590. Springer, Cham. https://doi.org/10.1007/978-3-030-62803-1_23
Print ISBN: 978-3-030-62802-4
Online ISBN: 978-3-030-62803-1