
Google Told Me So! On the Bent Testimony of Search Engine Algorithms

Abstract

Search engines are important contemporary sources of information and contribute to shaping our beliefs about the world. Each time they are consulted, various algorithms filter and order content to show us relevant results for the inputted search query. Because these search engines are frequently and widely consulted, it is necessary to have a clear understanding of the distinctively epistemic role that these algorithms play in the background of our online experiences. To aid in such understanding, this paper argues that search engine algorithms are providers of “bent testimony”—that, within certain contexts of interaction, users act as if these algorithms provide them with testimony—and acquire or alter beliefs on that basis. Specifically, we treat search engine algorithms as if they were asserting as true the content ordered at the top of a search results page—which has interesting parallels with how we might treat an ordinary testifier. As such, existing discussions in the philosophy of testimony can help us better understand and, in turn, improve our interactions with search engines. By explicating the mechanisms by which we come to accept this “bent testimony,” our paper discusses methods to help us control our epistemic reliance on search engine algorithms and clarifies the normative expectations one ought to place on the search engines that deploy these algorithms.


Notes

  1. The terms “recommendation systems” or “content-filtering algorithms” are sometimes used in the computer science literature, as well as in popular discourse, to refer to the algorithms that search engines use. To stay clear of any potential terminological inconsistencies that might arise with the use of these more precise terms, we use the more generic term “search engine algorithms” throughout.

    For a discussion on the ubiquity and use of such algorithms, see Chaney et al. (2018).

  2. Since Google is currently by far the most popular search engine (cf. StatCounter, n.d.), throughout this paper, we interchangeably use the phrases “Google’s algorithms” to refer to search engine algorithms and “googling” to refer to online searching.

  3. Although Gunn and Lynch’s account focuses primarily on our epistemic reliance on those who produce the content that we engage with online, one might also consider how we are reliant on fellow Google users—whose data and search habits improve the quality of the recommendations Google provides to us—as well as Google’s engineers who designed these algorithms. On a broader reading of their claim that googling resembles testimony because it is “dependent on the beliefs and actions of others,” one might reasonably extend their account to include these other groups and thereby build a more comprehensive view of how googling resembles testimony. Gunn and Lynch, however, do not explicitly mention these other groups, and our own suggested improvement for their view (i.e., that we should better consider our epistemic reliance on search engine algorithms themselves) still holds even on this extended version of their account. As a result, we did not discuss this extension in our main text, but instead raise it as a footnote for interested readers.

  4. Gunn and Lynch do acknowledge that googling is a “preference-dependent” mode of inquiry—Google tells us who to consult based on its assessment of what we might like and what links we will click (p. 43). Preference dependence is, of course, one important way in which google differs from “xoogle.” However, they do not specify how exactly this feature of preference dependence is relevant to their account of how googling resembles testimony. As such, a reading on which the only relevant consideration for them is our dependence on the actions and beliefs of others is, while perhaps strict, not uncharacteristic.

  5. More recently, Google has implemented a feature called “Quick Answers” that highlights at the top of the first page of search results a single answer to certain types of simple search queries—like “Who is the Prime Minister of Singapore?”, or “What’s two plus six?”—where single answers are possible. But even so, below this highlighted “quick answer,” you would still find the set of ordered links as you would in an ordinary Google search.

  6. This is often, but not necessarily, the case. The articles at the top of a search result have high ‘similarity scores’, are hosted on websites of comparable repute, and are closely related to the search input we type in. If, for instance, two sources deemed reputable by the algorithm (say, The New York Times and The Washington Post) make dramatically different assertions about a topic, these might appear together at the top of a search results page. It’s hard to say how often this happens.

    For details on the design of Google’s ‘PageRank’ algorithm, see Page (2006).
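The ranking idea this note alludes to can be illustrated with a minimal sketch. The toy link graph, damping value, and function name below are our own illustrative assumptions, not Google’s actual implementation, which weighs many signals beyond link structure:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration sketch of the PageRank idea: a page's score
    depends on the scores of the pages that link to it."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Each page q that links to p passes on a share of its own rank.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

# Toy web: A and B both link to C, so C accumulates the highest score.
toy_links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
scores = pagerank(toy_links)
print(max(scores, key=scores.get))  # → C
```

The point of the sketch is only that ranking is an emergent property of the link graph, not an explicit editorial judgment; which pages end up “at the top” depends on how the rest of the web votes with its links.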

  7. Some might prefer to avoid using a well-explored term like “testimony”—with or without a modifier—in favour of terms like “evidence” or “influence.” But, for this paper, it does not matter so much what we call it. We would only have to engage in these terminological discussions if we were arguing that search engine algorithms were, in fact, providing testimony (and, perhaps, not just “evidence”). Hopefully even those who prefer a more restricted use of “testimony” would agree that, in some cases, we might act as if someone (or something) was giving us testimony and acquire/alter beliefs on this basis, even if they, in fact, were not testifying. This concession is all we need for the arguments in the paper to proceed.

  8. Despite opening up the possibility for algorithmic testimony for this essay, Lackey seems to be among those scholars who believe only humans can testify (2008, p. 189). See Freiman and Miller (2020) for a more comprehensive engagement with her concerns about non-human testimony.

  9. Freiman and Miller (2020) are among those who believe that there is a meaningful distinction to be drawn between the testimony of humans and algorithms. Their use of the modifier “quasi” while discussing algorithmic testimony is mainly to emphasize this distinction. Despite this, they believe that we might similarly acquire or alter beliefs on the basis of both ordinary testimony as well as “quasi” testimony.

  10. Freiman and Miller suggest that when a machine’s output, by design, resembles human testimony (e.g., an automated announcement in a natural language on a loudspeaker), the machine’s designers “count on its users to correctly decipher the meaning of the output and correctly assess its validity because they recognize the testimony-like epistemic norms under which the output is produced” (p. 13). In turn, when users expect this machine output to conform to epistemic norms that would be in place for similar interactions (e.g., an announcement on a loudspeaker made by a human), we are treating this machine as a “quasi-testifier.” See Freiman and Miller (2020, pp. 11–14) for a more thorough discussion of quasi-testimony.


Author information


Correspondence to Devesh Narayanan.


This article is part of the Topical Collection on Information in Interactions between Humans and Machines


About this article


Cite this article

Narayanan, D., De Cremer, D. Google Told Me So! On the Bent Testimony of Search Engine Algorithms. Philos. Technol. 35, 22 (2022). https://doi.org/10.1007/s13347-022-00521-7


Keywords

  • Applied epistemology
  • Testimony
  • Search engines
  • Algorithmic curation
  • Trust