
Big other: surveillance capitalism and the prospects of an information civilization

  • Research Article
Journal of Information Technology

Abstract

This article describes an emergent logic of accumulation in the networked sphere, ‘surveillance capitalism,’ and considers its implications for ‘information civilization.’ The institutionalizing practices and operational assumptions of Google Inc. are the primary lens for this analysis as they are rendered in two recent articles authored by Google Chief Economist Hal Varian. Varian asserts four uses that follow from computer-mediated transactions: ‘data extraction and analysis,’ ‘new contractual forms due to better monitoring,’ ‘personalization and customization,’ and ‘continuous experiments.’ An examination of the nature and consequences of these uses sheds light on the implicit logic of surveillance capitalism and the global architecture of computer mediation upon which it depends. This architecture produces a distributed and largely uncontested new expression of power that I christen: ‘Big Other.’ It is constituted by unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior while producing new markets of behavioral prediction and modification. Surveillance capitalism challenges democratic norms and departs in key ways from the centuries-long evolution of market capitalism.


Notes

  1. For a recent example of this, see ‘JetBlue to Add Bag Fees, Cut Legroom’ (Nicas, 2014).

  2. See Braudel’s discussion on this point (1984: 620).

  3. Consider that in 1986 there were 2.5 optimally compressed exabytes, only 1% of which were digitized (Hilbert, 2013: 4). In 2000, only a quarter of the world’s stored information was digital (Mayer-Schönberger and Cukier, 2013: 9). By 2007, there were around 300 optimally compressed exabytes with 94% digitized (Hilbert, 2013: 4). Digitization and datafication (the application of software that allows computers and algorithms to process and analyze raw data) combined with new and cheaper storage technologies produced 1200 exabytes of data stored worldwide in 2013 with 98% digital content (Mayer-Schönberger and Cukier, 2013: 9).

  4. The EU Court’s 2014 ruling on the ‘right to be forgotten’ arguably represents the first time that Google has been forced to substantially alter its practices as an adaptation to regulatory demands – the first chapter of what is sure to be an evolving story.

  5. For an extended discussion of this theme, see Zuboff and Maxmin (2002, especially chapters 4, 6, and 10).

  6. With the competitive advantage of Google’s exponentially expanding data capture, Google’s ad revenues jumped from $21 billion in 2008 to over $50 billion in 2013. By February 2014, 15 years after its founding, Google’s $400 billion dollar market value edged out Exxon for the #2 spot in market capitalization, making it the second richest company after Apple (Farzad, 2014).

  7. Consider these facts in relation to Google and Facebook, the most hyper of the hyperscale firms. Google processes four billion searches a day. A 2009 presentation by Google engineer Jeff Dean indicated that it was planning the capacity for ten million servers and an exabyte of information. His technical article published in 2008 described new analytics that allowed Google to process 20 petabytes of data per day (1000 petabytes=1 exabyte), or about 7 exabytes a year (Dean and Ghemawat, 2008; Dean, 2009). One analyst observed that these numbers have likely been substantially exceeded by now, ‘particularly given the volume of data being uploaded to YouTube, which alone has 72h worth of video uploaded every minute’ (Wallbank, 2012). As for Facebook, it has more than a billion users. At the time of its float on the US stock market in 2012, it claimed to have more than seven billion photos uploaded each month and more than 100 petabytes of photos and videos stored in its servers (Ziegler, 2012).

  8. Smaller firms without hyperscale revenues can leverage some of these capabilities with cloud computing services (Manyika and Chui, 2014; Münstermann et al., 2014).

  9. See my discussion of anticipatory conformity in Zuboff (1988: 346–356). For an update, see recent research on Internet search behavior in Marthews and Tucker (2014).

  10. This process is apparently exemplified in the US federal lawsuit concerning Google’s data mining of student emails sent and received by users of its Apps for Education cloud service. See Herold (2014).


Author information

Correspondence to Shoshana Zuboff.


Cite this article

Zuboff, S. Big other: surveillance capitalism and the prospects of an information civilization. J Inf Technol 30, 75–89 (2015). https://doi.org/10.1057/jit.2015.5
