In January 2017, George Orwell’s futuristic dystopian novel 1984 was brought back to life. The reason this almost 70-year-old classic suddenly became a no. 1 bestseller on Amazon is likely to be found in the White House. But in focusing too much on the dangers forecast in 1984, we should not forget an older and less famous vision, Aldous Huxley’s Brave New World (1932). It is at least as relevant as the Orwellian dystopia. Its content translates easily into today’s criticisms of technology, as it describes how people come to love the very same technology that deprives them of their ability to think clearly and critically. Cultural and media critic Neil Postman set the two visions of the future side by side: “What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture, preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy.”Footnote 1

Postman’s presumption, that Huxley’s vision of the future was no less relevant than Orwell’s, rings truer today than ever before. In the Information Age, there is an abundance of information and fierce competition for our attention. This has created an attention economy in which tech giants compete to harvest the most attention and resell it to third-party advertisers.Footnote 2 But this war over attention has its victims. First, it has led the tech giants to develop ever smarter designs whose purpose is to create dependence. The idea is for users to spend as much time as possible on the platforms and to click, like and share as often as possible—to engage. Second, the companies add targeted, usually secret, ingredients to their algorithms, which then reward the content that attracts the most attention and traffic. This has led to a knowledge deficit in an online world dominated by emotions. We have long been blind to the negative consequences of this attention-based infrastructure and have come to love a technology that gobbles up our ability to think reflectively.

Information has never been as easily available and all-embracing in its offerings as it is today. As IT guru Mitchell Kapor once put it: “Getting information off the Internet is like taking a drink from a fire hydrant.”Footnote 3 Such an overwhelming amount of available information has caused a deficit of attention. As early as 1971, Herbert Simon, who would later win the Nobel Prize in Economics, warned of this: “[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients.”Footnote 4

Today, attention should be considered a limited resource. The Internet has become a chaotic marketplace where the price of information is paid not in dollars and cents but in attention. Unlike financial means, however, attention is distributed fairly evenly among people.Footnote 5 Furthermore, attention cannot be accumulated like money. We are constantly more or less attentive to something. But the common denominator between attention and money is that spending the resource on one thing comes at the expense of something else.Footnote 6 Philosopher and psychologist William James pointed to this back in 1890, with his well-known definition of attention: “[Attention] is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. […] It implies withdrawal from some things in order to deal effectively with others.”Footnote 7 The brain, of course, has a limited capacity to handle information. Many perceptions are eliminated if they take place outside one’s field of attention. The phenomenon is called ‘inattentional blindness’ and was demonstrated by Daniel Simons and Christopher Chabris in a famous psychological experiment, “The Invisible Gorilla”.Footnote 8 Participants watched a video in which two teams of basketball players passed balls among themselves. They were asked to count how many times one team passed its ball and were afterwards asked whether they had noticed anything unusual. The interesting thing was that, while busy focusing their attention on the ball and the players, about half of the test subjects completely overlooked something: a person in a gorilla costume walked straight through the middle of the group of players, stopped, beat its chest and wandered off. The experiment shows spectacularly how attention is a scarce resource. There is a lot we sense but do not actually see.

On tech platforms there is high demand for attention. To advertisers, attention is a valuable resource because it is necessary to nurture demand for a product, raise awareness of a certain news story or gain political influence.Footnote 9 The advertisers’ high interest in this scarce resource has led to fierce competition between the tech giants over who can harvest the most attention. Facebook, Twitter, Google and other tech companies can be seen as rival middlemen, wedged in between the attention economy and the monetary economy, because what they resell is user attention. In a way, the attention economy is nothing new. Studies from as early as the 1970s saw ad-driven American TV networks through a similar lens: their business concept was selling viewers’ attention to advertisers. Free newspapers are a parallel: they, too, lure readers into giving away their attention free of charge, only for it to be capitalized in the form of ad sales. But on the Internet, the attention economy is taken much further, with the help of new means such as addiction and personalization.

Google started this game early, in 2000, by applying the rather obvious idea of offering ads associated with keywords entered by users. Facebook seems to have had significantly more trouble finding out how its accumulated data about users, their likes, their posts and their networks could be used for advertising purposes. According to Facebook insider Antonio García Martínez, it was only in 2013 that the company really cracked the code by opening up a user’s news feed to ads that could be targeted to that individual user.Footnote 10 That was the result of a combination of ideas. These included: sorting user behavior into many general categories (e.g. “hip hop music” instead of the more specific “Eminem”); supplementing Facebook’s own data with massive amounts of external personal data purchased from data brokers, who had—since the 1960s—built a large industry of targeted print ads via the postal system in the US; identifying the user across indicators such as name, address, phone number, email and IP address; and retargeting, i.e. continuously following browser and shopping behaviors in real time and registering not only whether users clicked on or merely looked at the ads, but also whether they actually acted upon them.
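To make those mechanics concrete, here is a minimal sketch in Python of the kind of pipeline the passage describes: linking an external broker record to a platform profile via shared identifiers, folding broad interest categories into it, and selecting ads by category overlap. All names and data structures are illustrative assumptions on my part, not Facebook’s actual code or data model.

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    email: str
    phone: str
    interests: set = field(default_factory=set)

def match_external_record(record: dict, profiles: list) -> Profile | None:
    """Link a data-broker record to a platform profile via any shared identifier."""
    for p in profiles:
        if record.get("email") == p.email or record.get("phone") == p.phone:
            return p
    return None

def enrich(record: dict, profiles: list) -> None:
    """Fold the broker record's broad interest categories into the matched profile."""
    p = match_external_record(record, profiles)
    if p:
        p.interests.update(record.get("categories", []))  # e.g. "hip hop music"

def eligible_ads(profile: Profile, ads: dict) -> list:
    """Select ads whose target categories overlap the user's recorded interests."""
    return [ad for ad, targets in ads.items() if targets & profile.interests]

# Hypothetical example data:
profiles = [Profile("u1", "ann@example.com", "555-0100")]
enrich({"email": "ann@example.com", "categories": ["hip hop music"]}, profiles)
ads = {"concert_tickets": {"hip hop music"}, "golf_clubs": {"golf"}}
print(eligible_ads(profiles[0], ads))  # -> ['concert_tickets']
```

The structural point is the join: once name, email, phone or IP address lets two datasets be stitched together, every broad category a broker has recorded becomes a targeting handle on the platform.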

In this race for attention, the winner is the tech giant that is able to exploit a user’s time and attention to the maximum. The giants therefore fight to keep the user glued to the screen. Tristan Harris, a former product manager at Google, has become a strong critic of the methods his former employer and others deploy to keep user attention. In his opinion, the giants’ engineering designs have become so sophisticated that they actually hijack the users’ brains.Footnote 11 Or, to put it less dramatically, the systems have become better at exploiting the users’ instincts than the users themselves are at controlling them.

Consumers have always been courted and persuaded by sellers, town criers and advertisers. What is new about the attention economy is that the tech platforms are designed to cause outright dependence. That way they harvest the maximum amount of the users’ attention in what turns out to be a very unequal struggle: the individual user is up against corporate programmers and psychologists armed with advanced personalized data, all working hard to predict how that user is likely to respond to different temptations. Take, for example, YouTube’s autoplay feature. It is designed to make users spend as much time as possible on the platform by starting, immediately after the end of the video chosen by the user, a related video not chosen by the user. It is undoubtedly entertaining when YouTube sucks the user into a current of lolcats, finding ever funnier and crazier versions. But YouTube is not preoccupied with the fact that this can divert users not only from their personal wellbeing in the form of sleep, family time and work, but also from serious news, debates and public life. Teaser clickbait works in similar fashion, holding back the interesting information so that users must click through more ads to reach it.
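As a toy illustration (my own sketch, not YouTube’s actual system), autoplay can be thought of as ranking candidate “up next” videos by a model’s predicted watch time and queuing the winner automatically; the video names and scores below are invented:

```python
def pick_next(candidates: dict) -> str:
    """candidates: video id -> a model's predicted seconds of watch time."""
    # The objective is time-on-platform, not relevance, accuracy or wellbeing.
    return max(candidates, key=candidates.get)

queue = {"funny_cat_7": 214.0, "news_debate": 95.0, "lecture_pt2": 60.0}
print(pick_next(queue))  # -> 'funny_cat_7': the engagement-maximizing choice
```

However the prediction is actually made, any system optimized this way will systematically prefer the funnier, crazier version over the merely informative one.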

In line with Tristan Harris, Sean Parker, Facebook’s first president, has revealed that Facebook was developed around the idea of maximally exploiting users’ time and attention. Parker explains how the Facebook like button is designed to give the user “a little dopamine hit”, which motivates the user to upload more content and spend more time on the site. The same hit is released by comments on posts and images. Parker elaborates: “It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”Footnote 12 This makes “liking” the fundamental connection between “friends”: it is all about social acceptance. But what if the button instead said “important” and referred to something a person found important for his “friends” to see?

There is an interesting myth about the like feature — that it supposedly can be traced back to French philosopher René Girard, who is even referred to as the “Godfather of the like button”. The element of truth to this myth has to do with Peter Thiel, Facebook’s first big investor, board member and one of the most prominent opinion leaders of Silicon Valley. Thiel is famous for his libertarian critique of government in all its forms and for his vision of stateless societies forming on independent islands, ships and the like. Thiel is claimed to have based his early investment in Facebook on an analysis of the business concept’s opportunities built on René Girard’s theory of “mimetic desire”.Footnote 13

The idea is that unlike human needs, human desires are not spontaneous or given but mediated through other people: they are largely directed at what a person observes others to desire. People want what others have. Facebook is designed to do exactly that, to mediate people’s desires: you continuously update your knowledge of what others “like” and respond by hitting “like” yourself, exposing to your “friends” an image of yourself as someone who has attained the coveted objects of desire. Thiel himself took Girard’s classes when Girard was a professor at Stanford University. Thiel saw Facebook as a technology based on the mimetic nature of humanity, one that gave mimetic desire new ways to flourish and spread.

In the year Facebook was founded, 2004, Thiel sponsored a symposium with and about Girard, entitled Politics and Apocalypse. It was held at Stanford, near Silicon Valley, and Thiel himself participated with his talk “The Straussian Moment”, referring to the German-American political thinker Leo Strauss.Footnote 14 That talk showed Thiel’s awareness of the multiple components of Girard’s theory. Mimetic desire implies that everybody wants what others have. That, of course, leads to endless strife and conflict between people, occasionally culminating in a “mimetic crisis”: gang battles, rebellion, persecution, civil war, revolution, war, etc. Girard claims that the traditional way of overcoming such a crisis and re-establishing peace is to designate a scapegoat, who is then obliged to bear all responsibility for the crisis and is consequently imprisoned, exorcised, killed or otherwise sacrificed and pacified, so that peace can prevail. But of course the peace does not last, because the crisis was never really the scapegoat’s fault, and the constant crises require a constant supply of scapegoats. Girard, who was a Catholic, claimed that Christianity, rightly interpreted, is the only cure against the ongoing strife and exorcism of scapegoats, since the Crucifixion of Jesus is the last and definitive sacrifice, which is why Christians must turn the other cheek. This is where Thiel strays from his master. In his essay, combining Girard with Carl Schmitt, Thiel rages against the Enlightenment ideas which he accuses of hiding the true, violent nature of humanity, a nature which is nevertheless in the process of being exposed in a disclosure that will once and for all overthrow modernity itself. In this view, the Enlightenment project is mistaken in that “... the whole issue of human violence has been whitewashed away by the Enlightenment.”Footnote 15 The Enlightenment, according to Thiel, caused a shutdown of all discussion of human nature (which is actually rather inaccurate, seen from the point of view of intellectual history).Footnote 16 He also blames the Enlightenment for the vulnerability of the West when up against terrorist violence, because Western principles stand in the way of a hard and efficient response. We therefore need “...to awaken from that very long and profitable period of intellectual slumber and amnesia, that is so misleadingly called the Enlightenment.”Footnote 17

Thiel then considers what a Christian prince or statesman is supposed to do once the Enlightenment project runs dry. He comes up with no clear answer, other than that leaders must be prepared to bravely lead a world completely different from the Enlightenment version of the modern world, with the peaceful but elusive and ineffective discussions that Thiel so adamantly mocks the Enlightenment project for having promoted. With his martial preference for Schmitt and his insistence that a society can only be politically united if it singles out a common enemy, Thiel does not seem to share Girard’s pacifism. What they do share, however, is the drive to expose humanity’s true nature. Thiel compares Leo Strauss to Girard, pointing to timing as what separates the two. While Strauss is hesitant to reveal the dark side of humanity, Girard is more impatient about how quickly modernity should be overthrown by the disastrous revelation of humanity’s violent nature. On this matter, Thiel sides with Girard: as soon as possible!

A month after the symposium on Girard, Thiel went on to invest the decisive 500,000 dollars in Facebook. Why this investment? Empirically, it is true that the existence of Facebook plays out the next steps of Girard’s theory: battle and strife come thick and fast, and tribalization is increasing, not least because the attention economy naturally focuses on the most striking and click-amassing content: fear, anger, hatred, rage, balkanization, violence, etc. It is also evident that the scapegoat logic thrives on the platform, in the form of more or less organized social media shitstorms directed at selected victims. As noted by writer Geoff Shullenberger, these violent Facebook phenomena might be more than mere unexpected side effects; they may in fact be among Facebook’s key defining “features”.

Did Thiel consider Facebook an opportunity to start an enormous mimetic crisis, so that the Enlightenment project could end as soon as possible? In the essay he references Girard: “However, the new science of humanity must drive the idea of imitation, or mimesis, much further than it has in the past.”Footnote 18 In Thiel’s short 2014 manifesto Zero to One, Girard’s name is absent, but his theory is not. In the final chapter, the heroes and idols of different cultures are analyzed through Girard’s theory: inherent in these persons are both potential gods and potential scapegoats. The heroes of our times are the founders of technological start-up companies, notes Thiel with little modesty. At the same time, he warns against “The Founder’s Paradox”: the heroic status of founders may quickly be reversed and turned against them, resulting in scapegoat persecution.Footnote 19 In Shullenberger’s eyes, Thiel thinks of Facebook as an effective way to channel mimetic violence in the absence of effective authorities. This gives Facebook a powerful arsenal of violent means in its battle against the authorities, and it would also protect heroic tech entrepreneurs like Thiel himself from being singled out as scapegoats.Footnote 20 Such a perception of Facebook is, needless to say, in some contrast to his friend Zuckerberg’s rosy ideas of “a global community”.Footnote 21

Regardless of how well the tech giants have or have not understood the basic nature of human beings, they have indeed obtained large, data-driven psychological powers. Dan Ariely, a professor of psychology and behavioral economics, believes that irregular reward systems such as likes, retweets and comments can be seen as an updated version of American behavioral psychologist B.F. Skinner’s work from the 1930s.Footnote 22 Skinner placed rats in specially built boxes, where they learned to press a lever to get food as a reward. Skinner discovered that the most effective way of maintaining a particular behavior is to hand out the rewards randomly. One might think that the rat in Skinner’s box would press the lever less if the reward was not certain. But it turned out that the rat pressed harder and longer than when the reward followed automatically. Even when the reward disappeared entirely, the rat would continue to press. Today, users hammer the keyboard or drum on the touch screen hoping for virtual rewards in the form of recognition through new emails, retweets and likes, just as the rat hammered the lever in the Skinner box hoping for food. The information that ticks in on a phone is often uninteresting, and only rarely is it indispensable. But suddenly something important or useful could pop up. Therefore, the phone must be checked 100 to 150 times a day; deducting six to seven hours of sleep, that equals six to eight times an hour.Footnote 23 The same technique is known from the classic slot machines, or one-armed bandits: the player never knows whether the next pull will trigger nothing, pennies or the big jackpot. There is still no clear definition of smartphone addiction. But some countries have begun, little by little, to recognize the problem: France has introduced a total ban on smartphones in schools, citing public health as an argument. The United States now has rehabilitation centers for children who cannot let go of the screen. Spain recently recognized the phenomenon as a disorder requiring treatment on a par with gambling addiction and alcoholism—that is, a pathological condition that restricts users’ freedom and prevents them from acting and expressing themselves freely.
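The persistence effect of variable rewards is easy to simulate. The toy model below is my own illustration, not Ariely’s or Skinner’s actual data: it assumes an agent that learns during training how long a “dry spell” without reward is normal, and that gives up during extinction only once that tolerance is exceeded. A fixed schedule teaches near-zero tolerance for failure; a random schedule teaches a long one.

```python
import random

def longest_dry_spell(reward_prob: float, trials: int = 200) -> int:
    """Training phase: return the longest run of unrewarded presses.
    The agent learns that dry spells up to this length are 'normal'."""
    longest = streak = 0
    for _ in range(trials):
        if random.random() < reward_prob:
            streak = 0          # a rewarded press resets the streak
        else:
            streak += 1
            longest = max(longest, streak)
    return longest

random.seed(1)
# Continuous reinforcement: every press rewarded -> tolerance for failure ~0.
fixed_tolerance = longest_dry_spell(reward_prob=1.0)
# Variable ratio: only ~1 in 4 presses rewarded -> long dry spells feel normal.
variable_tolerance = longest_dry_spell(reward_prob=0.25)

# Extinction phase: rewards are switched off. The agent quits after
# (tolerance + 1) consecutive failures, so uncertain training sustains
# the behavior far longer.
print(fixed_tolerance + 1)     # 1 press, then gives up
print(variable_tolerance + 1)  # typically 15-25 presses before giving up
```

Under this assumption, the phone-checking habit is rational from the agent’s point of view: a long stretch of uninteresting notifications is exactly what the schedule has taught it to expect before the next hit.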

It is tempting to believe that the huge amounts of freely available information have made the world a wiser place. After all, information may be a source of learning. In 2007, Clive Thompson of tech magazine Wired even celebrated the new opportunities that Silicon Valley’s external memory had given the very act of thinking: “[…] the cyborg future is here. Almost without noticing it, we’ve outsourced important peripheral brain functions to the silicon around us. And frankly, I kind of like it. I feel much smarter when I’m using the Internet as a mental plug-in during my daily chitchat.”Footnote 24 But in the Information Age, it is more important than ever to differentiate between knowledge and information. The tech giants do not take into account that these two concepts are different. Knowledge implies information, but information does not necessarily imply knowledge. The first difference is that knowledge comes with a truth requirement: facts must be respected. That requirement cannot be satisfied merely by being informed about what others like, think, believe, hope or feel. The second difference lies in how information is processed. Pure information is obtained easily, quickly and cheaply. But knowledge cannot simply be collected; it is a systematic practice with a given purpose, based on organizing, processing and formatting information, and it requires tools, will, judgment and audacity. Users may be fooled by information, but it is harder to fool users who have knowledge.Footnote 25

In its abundance, the Age of Information has led to a form of knowledge collapse: to the tech giants, there is no difference between content elements. There is nothing but content. Everything is about attention and traffic, directed at something, no matter what. And this comes at the expense of truth and facts. The user is flooded with information and opinions, easily and sometimes even freely produced, and under no obligation to deal with facts and truth. No distinction is made between cute cat videos, ISIS propaganda, ads, conspiracy theories, scientific insights or breaking news. Since it is all only about attention and traffic, producing and distributing disinformation has become easier than ever before. A well-known example is the by far most virally active piece of online news during the 2016 US presidential campaign: that the Pope supported Trump. It generated 960,000 shares, reactions and comments on Facebook. But the news was fake, fabricated and produced in Macedonia for the purpose of generating ad profits.Footnote 26 By comparison, the most popular piece of mainstream news got 849,000 reactions. It came from The Washington Post and concerned Trump’s history of corruption charges: “Trump’s history of corruption is mind-boggling. So why is Clinton supposedly the corrupt one?”Footnote 27 As early as 2013, the World Economic Forum announced that disinformation is the new global challenge. Citizens, politicians, academics and reporters can all be misled. When misinformed, people hold factually false convictions to be true. Disinformation may take the form of distortions of facts, fact-denying conspiracy theories, lies or false news stories.Footnote 28 Obviously, the challenge of navigating through this maze has always existed. But on the Internet, it plays out on a new scale and at a new pace.

American professors Jonah Berger and Katherine Milkman set out to investigate what types of ads, videos, news stories, etc. go viral amid the infinite offerings of online information. What does it take to hit the jackpot in this advanced algorithmic system? By studying data on all the New York Times articles published over a three-month period, they found that feelings are what make content go viral.Footnote 29 More specifically, content driven by activity-mobilizing, high-arousal emotions wins, whether negative ones like anger and fear or positive ones such as awe and fascination. The reason is that they incite action in the form of likes, retweets, shares, debates, counter-arguments, etc. They are effective fuel for activating the algorithmic system and setting an agenda. This explains President Trump’s unstoppable viral success: people get excited when he tweets that Mexicans are rapists and killers or proposes to ban all Muslims from entering the United States. Both supporters and opponents contribute to spreading this on the web. On the tech platforms, emotions set the pace. Apart from emotion, Berger points in his book Contagious: Why Things Catch On (2013) to five other ingredients that help accelerate social transmission: a story must give the users who share it social currency; it must be triggered by everyday cues, as when the word “beer” makes one think of salted peanuts; it must be publicly visible; it must have practical value, e.g. by saving time or offering something useful; and finally, it must be wrapped in a good story that is easy to retell.Footnote 30 It is worth noting the complete absence of concepts such as true and false.
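To see why such emotional fuel wins mechanically, consider a deliberately simplified feed-ranking sketch. The weights, feature names and example posts are all invented for illustration; no platform publishes its actual formula. The structural point is that predicted engagement is the ranking signal, and truth appears nowhere in it.

```python
# Hypothetical engagement-ranked feed (illustrative weights, not any
# platform's real formula). Each post carries model predictions of the
# reactions it will provoke; high-arousal emotion amplifies all of them.
posts = [
    {"id": "outrage_rumor", "likes": 120, "comments": 300, "shares": 450, "arousal": 0.9},
    {"id": "fact_check",    "likes": 80,  "comments": 15,  "shares": 10,  "arousal": 0.2},
    {"id": "policy_report", "likes": 40,  "comments": 8,   "shares": 5,   "arousal": 0.1},
]

def engagement_score(post: dict) -> float:
    """Rank purely by predicted reactions; 'true' vs 'false' is not a feature."""
    base = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    return base * (1 + post["arousal"])   # emotional charge multiplies reach

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # outrage_rumor first, policy_report last
```

Whatever the real weights are, any objective of this shape will rank the enraging rumor above the sober correction, which is exactly the pattern the Berger-Milkman findings describe.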

Information has been commercialized to such an extent that every expression has become a sort of commodity, which the user must package with the right elements. Dry and complex problems, no matter how important, rarely find an audience. A user’s audience can easily press the like button, but no one has developed a “challenging but really important story” button and given it the same opportunities for exposure in the algorithms’ scoring system. Under mounting criticism, Facebook did begin to offer more expressive possibilities. In 2015, a change was made so that the user, in addition to liking a post, could also hand out a heart, a surprised emoji or a sad face. But overall, everything is about making the story trend, regardless of whether it is true or false, difficult or easy, new or old, relevant or irrelevant.

A consequence of the tech giants’ algorithmic systems is that the ability to stand out and reach an audience depends on the ability to reap and spread attention. Expressions that gather the most attention get the highest degree of exposure, and thus the most advertising money, while expressions with less drown in the noise. The limelight-stealing stories are short-lived and characterized by fleeting emotions with high entertainment value, conflict and sensation. The story of 35-year-old Twitter user Eric Tucker is a spot-on example. On November 9, 2016, he tweeted to his mere 40 followers that paid protesters were being bused to rallies against the newly elected Donald Trump: “Anti-Trump protestors in Austin today are not as organic as they seem. Here are the busses they came in. #fakeprotests #trump2016 #austin.” The tweet quickly went viral, adding fuel to the national conspiratorial fire. It was shared 16,000 times on Twitter and more than 350,000 times on Facebook. The dubious origin of this piece of “news” was smoothed over little by little. At first, Reddit was referenced as the source of the breaking news. Then the source was suddenly the conservative debate forum Free Republic; soon after, it was various Facebook pages such as the hardline conservative publisher Robertson Family Values, with more than a million followers; and eventually the story was promoted in a confirming tweet from Donald Trump himself: “Just had a very open and successful presidential election. Now professional protesters, incited by the media, are protesting. Very unfair!” There was just one small problem: the buses were real, but they had carried attendees to a software conference; there were no paid protesters. The story was simply not true. After two days, when Tucker had realized the effects of his provocation, he deleted the original tweet and posted a picture of the very same tweet with FALSE stamped in red on top of it. But not surprisingly, the correction got minimal attention: only 29 retweets and 27 likes within the following week, to be exact.Footnote 31 The truth is simply not as entertaining and stimulating as red-hot rumors. This is well known from traditional print media, where the correction notice pertaining to a front-page story is usually printed in fine print somewhere deep inside the paper. But on the web, the possibilities for the penetration of misinformation are multiplied.

Sociologist Danah Boyd has also noted how certain feelings achieve viral success. She explains that people consume content that stimulates their minds and senses, which is why users are drawn to content that excites, activates, entertains or otherwise elicits an emotional response. This is not always the “best” content in the sense of building knowledge. But just as the body is programmed to crave fat and sugar, because they are energy boosts rarely found in nature, humans are programmed to pay attention to things that stimulate and awaken the passions: obnoxious, violent or sexual content; humiliating, embarrassing or offensive gossip.Footnote 32 The tech platforms have, in other words, become dictatorships of emotions, especially negative ones. The algorithmic system rewards what is fleeting and short-lived. As a consequence, content that does not match such uncurbed emotional release simply risks drowning in the noise. Once again, freedom of speech is infringed upon, both in the sense of freedom of information and the right to freely express one’s point of view.