One thing is difficult to decipher: to what degree do tech giants actually see themselves as saving the world, and to what degree do they see themselves as capitalizing on it? Obviously, one does not exclude the other, but how do the two sides balance, and what is the actual improvement the tech giants bring? In an interview, Mark Zuckerberg defended the company’s treatment of user data by framing it as a historical philosophical trend in which people are becoming more open and more willing to share data about themselves with friends, with companies, ultimately with anybody. But the trend is also a normative one: according to Zuckerberg, people must be brought to realize that they have only one identity.Footnote 1 Not only does Zuckerberg here exalt himself as a philosopher, but he apparently feels compelled to accept the highly normative consequences of his own ideas—and to claim that these consequences should be acceptable to his users, imposing this philosophy on them. Everyone should know everything about everyone—this is the moral imperative Zuckerberg uses to legitimize his business model. Such an idea would erase the distinction between public and private, a distinction that has played an important role in modern societies, where privacy has been considered a fundamental political good, and where the elementary freedoms of expression, thought and belief have been articulated to protect individuals and their sphere of privacy against government abuse and regimentation. The moral consequence of Zuckerberg’s dictum—beyond the very handy legitimization of his business model’s unlimited data collection—is that people should not behave differently in different contexts. You cannot be one person to your friends, another to your family, and a third to your colleagues. Nor can you run away from your past if something preferable suddenly comes up. Neither can you evolve and develop your personality, putting past mistakes behind you. In Zuckerberg’s line of thought, people bring their detailed digital identity with them everywhere and, what is more, that identity should be fully available to anyone. It is curious that he does not see how this would cause a tremendous loss of personal freedom—the freedom to evolve, to change, to apply different skills in different contexts. During the congressional hearings in April 2018, one senator took a shrewd look at this data gathering and innocently asked Zuckerberg which hotel he was staying at while in Washington. Zuckerberg preferred not to disclose that information. Perhaps he did not want fans, journalists or assassins lurking around. But that would indeed be the collateral damage if he took his own medicine.

Mark Zuckerberg’s philosophical considerations may appear amateurish, but we must take them seriously, coming as they do from a man who controls the conditions framing the online activities of billions of people. It is no wonder that his reflections have shifted, given the enormous growth and change Facebook has seen over the past fifteen years since Zuckerberg started out from his college dorm room in 2004. As New York Magazine editor Max Read points out, it is doubtful whether Zuckerberg—or anyone at all—even knows what Facebook really is anymore.Footnote 2 Read elaborates by noting that it is the very same company that sends birthday reminders while at the same time striving to ensure the integrity of the German elections. Zuckerberg’s initial goal was for information to flow as freely as possible, captured by the slogan “Information wants to be free”—at Facebook, this developed into the slogan of “making the world more open and connected”. In Facebook’s onboarding information for new employees, their “Little Red Book”, it is described in this way: “Facebook was not originally created to be a company. It was built to accomplish a mission—to make the world more open and connected.”Footnote 3 In the 1990s, “Connecting People” was the slogan of then mobile phone giant Nokia. But in the spring of 2018, the phrase became associated with Facebook. It turned out that in a 2016 internal memo entitled “The Ugly”, Facebook vice president Andrew Bosworth had presented the company’s strategy of growth at any price, even at the cost of human lives, concluding, in mantra-like fashion: “We connect people. Period.”Footnote 4

As British historian Niall Ferguson argues in his book The Square and the Tower, it is naive to believe that new technologies connecting people outside of existing power structures will automatically and seamlessly lead to peace and agreement. His counterexample is the invention of the modern printing press around 1450, which led to a media revolution even greater than the Internet and was probably one of the most important facilitators of the Reformation in the following century. It made deviant theologians and other dissidents capable of quickly publishing their new thoughts without having to seek permission from either Church or King. Martin Luther, for example, is believed to have published a new text around every two weeks throughout his adult life.Footnote 5 This led to a major clash with the church authorities—but in and of itself it did not lead to freer, more democratic or even more peaceful conditions, quite the contrary. Many of the new Protestant theologies were even more authoritarian and belligerent than the Catholic Church, and the many competing churches often found themselves in conflict with each other, all of which led to the vast religious wars of sixteenth- and seventeenth-century Europe, culminating in the Thirty Years’ War. Connecting people.

After the discovery that Facebook had been used as a key tool to spread false news and manipulate democratic elections, Zuckerberg is now in the process of changing this philosophy. On February 17, 2017, he issued a creed entitled “Building Global Community”. Here, the concept of “community” replaces those of free information and connectivity as the key elements of Facebook philosophy. It is the largest public manifestation of Facebook principles thus far. It seems as if Zuckerberg has now realized that connecting people is not enough. The manifesto keeps to positive feel-good terms, but the backdrop is “fake news”, Russian bots and cyber warfare. People do not automatically become friendly and peaceful by being more connected. More likely, the many links between people open new paths for misinformation, outgrouping, balkanization, hostility, crime and war. Apparently, this needs to be solved through moral sermons with a call for values. Now Zuckerberg says: “... the most important thing we at Facebook can do is develop the social infrastructure to give people the power to build a global community that works for all of us.”Footnote 6 This is then elaborated into five aspects of “community”: support, security, information, civil society and inclusion.Footnote 7 “Global community” comes off as if everyone is connected to everyone, reminiscent of McLuhan’s idea of the “global village”—but there is quite a gap between an average user’s circle of “friends” and any kind of global connectedness. In the US, almost 80% of Facebook users have fewer than 500 “friends”.Footnote 8

This quaint philosophy of a global community is based on repeated references to values: to “our values”, “collective values” and “our common values”. The argument appears circular, insofar as it presupposes the very global community of values that would first need building. But the whole problem is that large groups of people do not agree on common values. The vocabulary of “community” and “values” is derived from the American political philosophy known as communitarianism, with its emphasis on self-organized communities sharing values. Conveniently enough, those ideas enjoy support from both the right and the left (Republicans love to pit local communities against the Washington elite; Democrats love self-organized protest groups). Communitarianism builds on the idea of local communities whose ability to pursue politics rests on shared pre-political values. But Zuckerberg overlooks the fact that communitarianism, with the village as its political ideal, easily inherits the downsides of the village: social incestuousness, strait-lacedness, xenophobia, gossip, conformity, hatred of other villages.Footnote 9 As is well known, you can’t be friends with everyone; maybe you can’t even be “friends” with everyone. Significantly, as a political philosophy communitarianism is not kindly disposed towards political liberalism, universal rights, free speech and the idea that even if people do not share values, they may share principles for the coexistence of different values. Nor are Facebook’s principles—termed community standards—kind to freedom of speech, as we shall see below. There are thus local versions of the “global community” in which a person can easily get trapped. Zuckerberg continues: “What is your limit when it comes to nudity? To violence? To rough content? To profanity? Whatever you decide, those will be your personal settings. We will regularly ask you these questions to increase participation and so you do not need to go dig for them yourself. For those who choose not to decide anything, the default settings will be what the majority of people in your region have chosen, just like in a poll.”Footnote 10 Knowing how difficult it is to change personal settings online, the vast majority of users will simply accept the local default settings. This effectively indoctrinates them to hold the same opinions as the majority of people in their local region (it remains unknown whether by “region” Zuckerberg means village, city, county, country or continent). But the communitarian aspect is clear: people should share values with their local society, with their “community”.
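How such majority-based defaults might work can be sketched in a few lines of Python. Everything here is hypothetical (invented users, regions and settings), but it shows the mechanism: whoever does not actively choose is silently assigned the majority’s choice.

```python
from collections import Counter

# Hypothetical illustration: each user's chosen content settings, keyed by
# region. A user who never chooses inherits the regional majority's choice.
user_settings = {
    "alice": {"region": "DK", "nudity_filter": "strict"},
    "bent":  {"region": "DK", "nudity_filter": "strict"},
    "carla": {"region": "DK", "nudity_filter": "lenient"},
}

def regional_default(region: str, setting: str) -> str:
    """Return the majority choice for a setting among users in a region."""
    votes = Counter(
        u[setting] for u in user_settings.values() if u["region"] == region
    )
    choice, _ = votes.most_common(1)[0]
    return choice

# A new user in the region who never touches the settings gets the majority value:
print(regional_default("DK", "nudity_filter"))  # -> "strict"
```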

Just below the surface of the community ideals, the gears of the capitalist machinery keep turning. Tech giants run complicated algorithm-based advertising systems in which advertisers quickly and automatically buy access to highly specific user groups. Ads are traded in an online auction, where the price is calculated automatically, after which the ads immediately appear, tailor-made, on the platform profiles of the meticulously selected users. Users may enjoy the targeted content, but they are often unaware of how the tech giants handle their data. The platforms generally do not involve users or the public in how data are handled: what the data are used for, how one is profiled as a user, and how that profiling is shared with third parties. Algorithms are the Coca-Cola recipes of the big tech companies. But when pious preachers of transparency, openness, mutuality and community keep the very core of their own business models hidden from public inspection, investigation and critique, it rings hollow.
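The platforms do not disclose their auction mechanics, but a common textbook model, often assumed to approximate what they run, is the second-price auction: the highest bidder wins the ad slot but pays only the runner-up’s bid. A minimal sketch, with invented advertisers and bids:

```python
# Minimal sketch of a second-price ad auction, a standard textbook model of
# automated ad pricing. All advertisers and bid amounts are hypothetical.

def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Winner is the highest bidder; the price is the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Three advertisers bid for one ad slot shown to a profiled user:
bids = {"shoe_shop": 0.40, "car_dealer": 0.90, "insurer": 0.75}
winner, price = run_auction(bids)
print(winner, price)  # -> car_dealer 0.75: wins the slot, pays the second bid
```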

Most of us are familiar with the simple recommendations on the platforms. People who have bought The Matrix will probably also buy The Matrix 2. If you are interested in yoga books, you may also be interested in buying yoga gear. These are examples of recommendations that focus on the product: consumption patterns are analyzed, and the information is used to calculate what the next potential purchase might be. This is relatively harmless and follows a classic, well-known logic of marketing. But it is only the tip of the marketing iceberg. Today, not only the product can be personalized, but also the very type of argument that makes the user choose one product over another. The strategy behind this is called persuasion profiling.Footnote 11
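Product-focused recommendations of this kind can be as simple as counting co-purchases. The following sketch, with invented shopping baskets, shows the basic logic; real recommender systems are far more elaborate, but the principle is the same:

```python
from collections import Counter
from itertools import combinations

# Invented purchase histories; "bought X, will probably buy Y" amounts to
# counting how often two items occur in the same basket.
baskets = [
    {"The Matrix", "The Matrix 2"},
    {"The Matrix", "The Matrix 2", "Blade Runner"},
    {"Yoga for Beginners", "Yoga Mat"},
    {"The Matrix", "The Matrix 2"},
]

co_bought: Counter = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_bought[(a, b)] += 1
        co_bought[(b, a)] += 1

def recommend(item: str) -> str:
    """Return the item most often bought together with `item`."""
    candidates = {b: n for (a, b), n in co_bought.items() if a == item}
    return max(candidates, key=candidates.get)

print(recommend("The Matrix"))  # -> "The Matrix 2" (co-bought 3 times vs. 1)
```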

Two PhD students from Stanford University, Maurits Kaptein and Dean Eckles (the latter also a researcher at Facebook from 2012 to 2015), ran an experiment in which they started an online bookstore and encouraged customers to browse through titles and mark the books they would be most likely to buy.Footnote 12 They experimented with different sales strategies and found that it is possible to track which kind of argumentation a given person finds most convincing. Some consumers prefer expert reviews, others fall for promotional offers, and still others respond to recommendations from friends. They also found that certain pitches are counterproductive: one consumer hurries to buy a product on discount, while another takes the discount as a sign that the very same product has lost value. Eckles and Kaptein claim that they can increase the impact of recommendations by 30–40% simply by eliminating the types of arguments that are counterproductive for the individual consumer. These are very big numbers. More importantly, the experiment also showed that one and the same person responds to the same type of argument across widely different contexts: a consumer’s persuasion profile can be transferred relatively easily from one commodity group to another. This makes the profiles worth a lot of money, because they are extremely attractive to advertisers.
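Kaptein and Eckles’s actual models are not reproduced here, but the core idea can be sketched simply: score each type of argument per user by its conversion rate, and stop serving the counterproductive ones. All numbers below are invented:

```python
# Sketch of persuasion profiling: track, per user, how each type of sales
# argument performs, and drop the counterproductive ones. Data are invented.

STRATEGIES = ["expert_review", "discount", "friend_recommendation"]

# Observed (purchases, exposures) per user and strategy -- hypothetical counts.
history = {
    "user_17": {
        "expert_review":         (9, 20),  # buys often after expert reviews
        "discount":              (1, 20),  # discounts backfire for this user
        "friend_recommendation": (5, 20),
    },
}

def best_strategy(user: str, baseline: float = 0.15) -> str:
    """Pick the strategy with the highest conversion rate, excluding any
    strategy converting below the user's baseline (counterproductive)."""
    rates = {s: buys / shows for s, (buys, shows) in history[user].items()}
    viable = {s: r for s, r in rates.items() if r >= baseline}
    return max(viable, key=viable.get)

print(best_strategy("user_17"))  # -> "expert_review"
```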

New methods within sentiment analysis are also employed.Footnote 13 Data analysis tools are now so advanced that they can generate highly detailed data and even measure what mood a user is in. By analyzing text messages, Facebook updates and emails, it is possible to distinguish good days from bad, or sober messages from drunken ones. The latter is based, among other things, on the number of spelling mistakes. Facebook is familiar with the methods: in 2015, the Danish public service broadcaster Danmarks Radio broke the following news: “Facebook updates featuring suicidal thoughts are being studied closely by Facebook.”Footnote 14 The idea was that Facebook would help the potentially suicidal person find appropriate treatment online in order to avoid self-harm. It is indeed commendable that Facebook offers that kind of help. But it is also unsettling: a company can now measure, at a relatively detailed level, what state of mind a given user is in—including hard times and even clinical depression. Strangely enough, this news was not followed by any noticeable pushback against the massive surveillance enabled by big data.
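As a toy illustration of the spelling-mistake signal, consider measuring the share of words that fail a dictionary lookup. The dictionary and threshold below are invented; real systems would use far richer linguistic features:

```python
# Toy sketch of the spelling-mistake signal: the share of words not found in
# a dictionary, as a crude proxy for intoxicated writing. The dictionary and
# threshold are invented for illustration only.

DICTIONARY = {"i", "am", "on", "my", "way", "home", "see", "you", "soon"}

def misspelling_rate(message: str) -> float:
    words = message.lower().split()
    unknown = sum(1 for w in words if w not in DICTIONARY)
    return unknown / len(words)

def looks_drunk(message: str, threshold: float = 0.4) -> bool:
    return misspelling_rate(message) > threshold

print(looks_drunk("i am on my way home"))    # -> False (all words known)
print(looks_drunk("im onn myy wayy homee"))  # -> True (5/5 unknown words)
```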

Undoubtedly, extremely customized and relevant content is quite convenient. A good example is DirectLife, a Philips device featuring a fitness tracker and a virtual trainer.Footnote 15 It can figure out which argument makes a person eat more healthily and exercise more regularly. For example, if the user is the kind who responds to positive feedback, the virtual trainer will say: “Nice work!” This is brilliant, but can we expect all companies trading these profiles to have good or even relatively unproblematic intentions, such as improving the health of users or trying to prevent suicide? Potential buyers of such profile information are not necessarily limited to commercial companies; they could also include religious, political and other movements with intentions of a rather different nature. Whether they promote a good cause is one question; the methods behind the cause are another. In the wrong hands, persuasion profiling combined with sentiment analysis enables buyers to manipulate individuals and exploit their psychological traits.

Within the algorithm-based advertising system, examples of the abuse of big tech tools abound. In 2017, the US nonprofit organization ProPublica revealed that Facebook allowed landlords to discriminate against tenants based on ethnicity, disability, gender and other characteristics when listing their properties for rent.Footnote 16 The algorithms automatically generated the categories based on data from user profiles. It became a scandal. Nine months later, nothing had changed. ProPublica tested Facebook’s advertising rules and tools, freshly updated and explicitly declared “diversity-enhancing”, and found that it was still possible to buy rental housing ads and place them outside the view of certain categories of people: African Americans, people in need of wheelchair ramps, Jews, Hispanics, people interested in Islam, and so on.Footnote 17 In some sense this is a natural consequence of targeted marketing: when ads are targeted at certain selected groups, other groups do not see them and are never informed of the offer. In March 2019, Facebook agreed to overhaul its advertising system in an attempt to rule out discrimination in job, housing and loan ads.Footnote 18
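Mechanically, such discrimination is nothing more than set subtraction: the ad’s audience is everyone matching the inclusion criteria minus everyone carrying an excluded attribute. A minimal sketch with invented users and attribute labels:

```python
# Sketch of exclusion targeting: the ad's audience is everyone matching the
# inclusion criterion minus anyone carrying an excluded attribute.
# Users and attribute labels are invented for illustration.

users = [
    {"id": 1, "interests": {"apartments"}, "attributes": set()},
    {"id": 2, "interests": {"apartments"}, "attributes": {"wheelchair_ramp"}},
    {"id": 3, "interests": {"apartments"}, "attributes": {"interest_islam"}},
]

def audience(include: str, exclude: set) -> list:
    """IDs of users who match the interest and carry no excluded attribute."""
    return [
        u["id"]
        for u in users
        if include in u["interests"] and not (u["attributes"] & exclude)
    ]

# A housing ad placed "outside the view" of two excluded groups:
print(audience("apartments", {"wheelchair_ramp", "interest_islam"}))  # -> [1]
```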

This can be exploited in numerous unsympathetic ways. Google has sold ads associated with racist and highly prejudiced keywords, and the service has even automatically and unintentionally recommended several bizarre keywords in the process, as revealed by the online media outlet BuzzFeed. Advertisers could target ads based on keywords such as “black people ruin neighborhoods”, “Jewish parasites” and “Jews control the media”—keyword combinations targeting racists and anti-Semites. Google responded that it would work hard to stop these offensive ads, which violate its policy against abusive speech. This happened right around the time when Facebook was struggling with its own ads platform, which had allowed advertisers to target messages at “Jew haters”.Footnote 19 From a free speech perspective, one could argue: Nazis have a horrific worldview, but they also buy stuff, so should an advertiser be forbidden to address them specifically? The above-mentioned case of rental housing ads is downright illegal, as those groups of people are protected under the US federal Fair Housing Act. Ads associated with racist content, however, are not illegal in and of themselves. They are unsympathetic to the bone, but not criminal. The problem is rather that the companies’ own advertising policies have been violated. Both cases show that abuse of information is not unique to one company. These issues are tightly interwoven with the complex, automated advertising system that has made Facebook and Google two of the world’s most valuable companies. The problem is also found on Twitter, where it has been revealed that advertisers were able to target ads based on users’ racist and derogatory comments.Footnote 20 Google and Facebook get most of the criticism because of their dominant position in the advertising market. The extent of the problem is, however, more difficult to determine in Google’s advertising system, where advertisers target users by associating ads with keywords they expect users to search for. On Facebook, by contrast, advertisers can choose targeting criteria covering people and their characteristics from a large catalog of information. In 2016 alone, Google removed 1.7 billion ads that violated the company’s ad policy.Footnote 21

In July 2018, the Danish public radio show P1 Orientering reported on a particularly controversial categorization based on sensitive personal information within the Facebook ad system. Facebook had categorized 65,000 Russians as interested in “treason” and 130,000 Nigerians as interested in homosexuality. It may seem innocent to allow ads targeted at such groups and their particular interests. However, it meant that both groups could easily be identified by the authorities of those countries via their Facebook profiles, which could put people in danger. In most countries, treason is illegal; Russian authorities in particular take a strong interest in the category, and in Nigeria, homosexuality is penalized by up to 14 years in prison.Footnote 22 Without intending to, Facebook’s categorization has given intelligence agencies in all countries a golden opportunity to comb the population in far greater detail than most democratic constitutions allow. If a user is categorized on Facebook with behavior showing an interest in “treason”, the Russian authorities can quite easily identify that user. It only requires the following bait: in Facebook’s advertising system, they pose as advertisers wanting to target every user who lives in Russia and whom Facebook has registered as interested in “treason”. In the ads, users must be lured into clicking a link that sends them to a specially designed website owned by the advertiser. The ad need not have the slightest thing to do with treason; it could be for something as innocent as a discount on new gardening tools. But it allows the advertiser to learn that the user clicking the link lives in Russia and is interested in “treason”. The link sends the user to a website where a traceable IP address is automatically left behind, making it easy to identify the person. The process is a special variety of phishing, the method of conning Internet users into revealing their names, bank accounts, phone numbers, emails, postal addresses, etc.Footnote 23 In a place like Russia, the consequence is that the intelligence agencies may, in principle, systematically monitor users and record those they consider potential traitors. This advertising trick not only enables government agencies to identify potential victims; it also lies open to other forces wishing to shame or harass specific groups.
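To make the mechanism concrete: the “advertiser” only needs a web server that logs who clicks the bait link. The sketch below, using the Python Flask framework, is a hypothetical reconstruction from the description above, not code from any actual campaign:

```python
# Sketch of the deanonymization endpoint described above: the bait ad (say,
# discounted gardening tools) links here, and anyone who clicks reveals an
# IP address that can be matched to the targeting category ("treason").
# Hypothetical reconstruction for illustration only.

from datetime import datetime
from flask import Flask, redirect, request

app = Flask(__name__)

@app.route("/gardening-sale")
def bait_link():
    # The visitor was only shown the ad because the platform had placed them
    # in the targeted category; the click itself is the identification.
    with open("targets.log", "a") as log:
        log.write(f"{datetime.utcnow().isoformat()} {request.remote_addr}\n")
    # Forward to an innocent-looking page so nothing seems amiss.
    return redirect("https://example.com/gardening-tools")

if __name__ == "__main__":
    app.run()
```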

Facebook has explained that “treason” was included as an ad category because of its historical significance. But since treason is also an illegal act, the company has acknowledged that the category is problematic. The criticism raised by P1 Orientering made Facebook remove “treason” from its category system worldwide. On previous occasions, Facebook has felt it necessary to remove “communism”, “Shia Muslim” and “Islam”, because religious beliefs and political inclinations are considered sensitive information subject to special protection. At the same time, however, it is still possible to target ads at people who are interested in “Christianity”, “homosexuality” and “anxiety”.Footnote 24 Facebook’s advertising policy is far from consistent, and the giant should consider whether health information and sexual orientation ought to count as sensitive personal information, only to be used after obtaining the user’s consent.

The consequence of this categorization is that Facebook users need to be acutely aware of how they express themselves, so as not to get labeled as something they might not like. Users must be extremely careful about what they like, comment on, click, share, etc. It is a consideration that may naturally lead to self-censorship. But even cautious users never have complete control over, let alone access to, the categories Facebook foists upon them. This restriction of free expression is not censorship in the sense of rule-based content removal. Rather, it is a kind that thrives because users have no access to the categorizations their behavior leads to. As a consequence, they may have good reason to hesitate to speak out digitally, even in front of close friends or “friends”. Facebook’s own website states: “Your interests are based on your Facebook activity and on other actions.” Other actions are exactly what constitutes the business secret, but they probably include click behavior, browser history and additional data acquired from data brokers. Facebook itself emphasizes that advertising interests and sympathies are not the same: the 65,000 Russians were not categorized as “traitors” but as interested in “treason”, for example, as a topic of historical research interest. A distinction should be made between a user’s online activity and the user’s personal characteristics; according to Facebook, the company only has information about the former, not the latter. On a conceptual level, the distinction is clear, but in practice it is evidently possible to reason from one to the other. Research in the field shows that a user’s sexuality can be guessed with up to 88% accuracy based on the individual’s likes on Facebook—similar possibilities hold for the user’s ethnicity, religious beliefs, political attitudes, personality traits, intelligence, mood, consumption of addictive substances, parents’ separation, age and gender.Footnote 25 There is a growing number of revelations of how the tools of the tech giants are used in ways where the line between use and abuse is extremely difficult to draw. Big data enables quick and targeted access to the weak spots of each individual user.
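The kind of prediction reported in that research can be approximated with a standard logistic regression over a binary user-by-page “like” matrix. The sketch below uses randomly generated stand-in data with a planted signal, since the real study used actual Facebook likes:

```python
# Sketch of trait prediction from likes: a logistic regression over a binary
# user-x-pages matrix. Data here are random stand-ins with a planted signal;
# the published research used real Facebook likes.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_pages = 1000, 200
likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = user liked page

# Plant a synthetic trait that depends on a few pages, so the model has
# something to find (in reality the signal comes from real like patterns).
signal_pages = [3, 57, 101]
trait = (likes[:, signal_pages].sum(axis=1) >= 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```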

Marketing strategies used on the giants’ platforms are generally difficult to decipher; they operate at their best when invisible to the user. Consider for a moment the virtual personal trainer DirectLife yelling out: “You’re doing a great job! I’m telling you this because you respond well to positive feedback!” It would probably not have quite the same effect.Footnote 26 And the problem gets worse still: marketing strategies, of course, work the same way when promoting ideas as when promoting products. The 2016 US presidential campaign bears witness to this. The Cambridge Analytica scandal would later, in March 2018, hit the front pages. Through the workings of Cambridge Analytica, the Trump campaign had gained unique insight into the lives of voters. Huge amounts of data had been collected, as part of a psychological research project at Cambridge University, and resold by researcher Aleksandr Kogan to the UK consulting company Cambridge Analytica. Using, for instance, the Kosinski research just cited, the consultancy was able to map how users made decisions and subsequently figure out how a decision-making process could be affected. The results were very precise voter profiles, divided into 500 different psychological categories. The profiles were then used to target anonymous or pseudonymous campaign messages—shaped in accordance with these categories—at selected swing voters in the few but crucial US swing states. It was not necessarily obvious to these people that they were being exposed to election propaganda, and it was certainly not clear to them that they were the subjects of targeted campaigns. After all, they had no access to information about which other users received the same information and which did not. Such voters do not, in principle, enjoy freedom of expression in the sense of freedom of information: the freedom to choose from a given pool of available information.

According to Cambridge Analytica’s own top management—as captured on hidden camera by the British network Channel 4—what made the difference and ended up deciding the 2016 US election was the company’s micro-targeted ads, directed at only 40,000 wavering voters in the decisive swing states. Of course, we do not know how accurate the company’s boasts are, or whether this was just a sales pitch made to Channel 4’s journalists posing as potential clients. It is thought-provoking, however, that Facebook, in research published before the scandal broke, boasted about its own ability to influence voter turnout.Footnote 27 And speaking of Facebook’s much-mentioned transparency and openness: when The Guardian began to unveil the scandal in March 2018, Facebook’s first reaction was to threaten the British newspaper with a lawsuit and to shut down the profile of Christopher Wylie, the central whistleblower and former research director at Cambridge Analytica. Not exactly a role model for transparency and freedom of expression. In 2011, the US Federal Trade Commission reached an agreement with Facebook regarding privacy practices and is expected, in 2019, to fine Facebook 5 billion dollars for violations of that agreement.Footnote 28

The Trump team, however, is not alone in using micro-targeting. It happens across the board and has been going on for years; the first political campaign to use massive amounts of Facebook data was the 2012 Obama campaign. In the Trump campaign, Cambridge Analytica was able to collect user data from as many as 87 million Facebook accounts via Aleksandr Kogan, who had 300,000 users actively sign up for and consent to a psychological test, which enabled him to harvest data not only from them, but also from all their “friends”. To this day, it remains unknown whether Kogan, who also has ties to the University of St. Petersburg, Russia, passed this data on to Russian authorities. In 2012, the Obama campaign had about a million of its Facebook supporters provide access to data from all their “friends”. According to one of the campaign leaders, this allowed the campaign to reconstruct the entire Facebook social graph (i.e., the graph displaying all friendship connections between users). This was significantly more users than Cambridge Analytica got access to in 2016. But contrary to the Cambridge Analytica campaign, the Obama campaign had its one million active supporters work on their “friends”, who were thus influenced politically by people they at least already knew. In the Trump campaign, the 300,000 intermediaries knew nothing, and the campaign messages appeared in people’s news feeds out of nowhere. In that sense, the Obama campaign’s use of large amounts of Facebook data was more honest—but it highlights another problem. The day after the Cambridge Analytica revelations, on March 18, 2018, Obama campaign leader Carol Davidsen made the following statement: “They [Facebook] came to office in the days following election recruiting & were very candid that they allowed us to do things they wouldn’t have allowed someone else to do because they were on our side.”Footnote 29 The Obama campaign was allowed to use massive amounts of Facebook data because the company sided with one candidate against the other (Republican candidate Mitt Romney). Thus, access to this frighteningly effective political micro-targeting based on big data can rest on economics (who can afford to buy the relevant ads?), on fraud (who manages to trick data out of gullible tech giants?) or on the companies’ own departure from their professed neutrality (“Facebook is a platform for all ideas,” as Zuckerberg repeatedly, but not very convincingly, chanted during the congressional hearings). It all depends on what the political situation requires. Other options for obtaining such data include legislation, persuasion, hacking, espionage, pressure, bribery, etc. If only one political party has access to such powerful campaign instruments, it constitutes a break with the fundamental norm of political equality and fairness in the electoral process. The voters affected lose their freedom of expression, understood as their freedom to seek out and compare information independently for the purpose of forming their own opinions.
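The leverage of friend-permissions is easy to quantify: each consenting user exposes their entire friend list. In the invented random graph below, one percent of users consenting is enough to expose a large share of everyone:

```python
# Sketch of why friend-permissions scale so dramatically: each consenting
# user exposes their entire friend list. Graph and numbers are invented.

import random

random.seed(1)
N_USERS = 100_000
AVG_FRIENDS = 50

# Random toy friendship graph (symmetric edges).
friends = {u: set() for u in range(N_USERS)}
for u in range(N_USERS):
    for v in random.sample(range(N_USERS), AVG_FRIENDS // 2):
        if v != u:
            friends[u].add(v)
            friends[v].add(u)

# Only 1% of users actively consent, as with Kogan's personality test...
consenters = random.sample(range(N_USERS), N_USERS // 100)

# ...but the harvest covers them plus all of their friends.
reached = set(consenters)
for u in consenters:
    reached |= friends[u]

print(f"{len(consenters)} consenters expose {len(reached)} profiles")
```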

As already mentioned, Cambridge Analytica used Facebook data to categorize users into 500 different psychological profiles on which to base its targeted influencing. Through micro-targeting, politicians can access voters’ private lives and psyches, thereby affecting carefully selected segments of the electorate. The German philosopher Byung-Chul Han calls this phenomenon data-driven psychopolitics.Footnote 30 Micro-targeted messages to voters are hardly much different from micro-targeted ads: in both cases, sophisticated algorithms make it possible to predict people’s behavior very precisely and optimize a candidate or product profile accordingly. Today, according to Han, we see an increasing amalgamation of citizen and consumer, of state and market, of the vote and the purchase.Footnote 31

Orientation towards market demands increasingly seems to pull the tech giants away from what might be left of their early idealism, prompting them to behave like the seedier examples of greedy companies employing dirty tricks. In November 2018, the New York Times published a long piece of investigative journalismFootnote 32 on the crisis-management tactics chosen by the Facebook leadership—particularly Mark Zuckerberg and Sheryl Sandberg. Under the headline “Delay, Deny and Deflect”, the article showed how the company had used denial and smear campaigns to conceal the extent of its problems with data privacy and political disinformation. In October 2017, Facebook hired the political spin company Definers, which has a history of running aggressive political campaigns for Republican candidates but has increasingly expanded into business spin. Definers disseminated rumors about Facebook competitors like Google and Apple in order to “muddy the waters” (in the words of a Definers leader) and divert critical focus away from Facebook. Simultaneously, Definers spread articles trying to portray protesters at the congressional hearings as anti-Semites because of a protest poster depicting Zuckerberg and Sandberg as a world-embracing octopus. The poster was a reference to the antitrust case against Standard Oil in the early 1900s, but Definers claimed that singling out the two Jewish Facebook leaders was anti-Semitic. In an incredible show of hypocrisy, Definers simultaneously published papers attacking Facebook critics for being paid marionettes of the liberal Hungarian-Jewish philanthropist billionaire George Soros, a staple target of right-wing, anti-Semitic conspiracy theories. All of this activity, of course, appeared in public with no visible connection to Facebook and was only revealed through the New York Times article. Facebook immediately cut its ties to Definers, and Facebook’s top management claimed not to have been informed about the Definers contract. At the same time, Facebook defended itself in the following way: Definers was “[...] useful to help respond to unfair claims where Facebook has been singled out for criticism, and to positively distinguish us from competitors”Footnote 33—in the words of Elliot Schrage, the former head of communications and policy, who went on to take the blame.

The growing number of issues springing from the advertising system makes it seem as though the tech giants are unable to handle the problems themselves. In connection with the Cambridge Analytica scandal, Facebook actually discovered the leak as early as 2015, long before the US election. The company responded by specifically asking Kogan to delete the data in question (which he did not) and, more generally, by blocking all apps active on Facebook from gaining access to the data of their users’ “friends”. As was revealed in June 2018, however, the sharing of massive volumes of user data with hardware developers like Apple, Samsung and BlackBerry continued even after the Cambridge Analytica scandal broke in March 2018. The point was to get like buttons and automatic Facebook connectivity integrated into their devices.Footnote 34 A new scandal, in July–August 2018, was less intentional. Here, Facebook publicly admitted that it was aware of new infiltration attempts in the run-up to the US midterm elections in November 2018. These included 32 fake Facebook accounts that reached nearly 300,000 users with ads, posts and organized political events, rallies and protests, on topics such as race, feminism, mindfulness and resistance to Trump. What they all had in common was the effort to create division and disagreement. It is believed that the pages may once again have been created by Russian activists from the Internet Research Agency, but this time in ways that are more difficult to detect.Footnote 35 New cases of data misuse continue to surface. In December 2018, a British parliamentary committee on online misinformation confiscated a batch of Facebook documents in the possession of an executive of the software company Six4Three during his visit to London. The documents showed that, from 2012 to 2015, Facebook gave companies such as Netflix and Airbnb privileged access to user data in order to boost traffic and engagement on Facebook.Footnote 36

Just like the relationship between code-makers and code-breakers, the contest between data collection and data protection is a dog-eat-dog affair. On the horizon, we see the contours of a fundamental and interminable war between the owners of growing data masses on the one hand and, on the other, the many different stakeholders who use various means (hacking, espionage, trade, pressure, persuasion, partnerships...) to get their hands on data that can be turned to political or economic advantage. Through their massive surveillance structure, the tech giants possess extremely sensitive personal data. That does not mean, however, that they automatically possess the relevant, and increasingly demanding, mechanisms needed to protect those data. Inventing ever new abuses does not seem difficult, as long as the system rests on automated advertising that allows for the aggressive, detailed and targeted marketing we have seen in recent years.