
Economics and the FTC’s Google Investigation


Abstract

We explain the issues in the Federal Trade Commission’s (FTC’s) antitrust investigation into whether Google’s use of “Universal” search results violated the antitrust laws and assess the role for economics in the FTC’s decision to close the investigation. We argue that the presence of the Bureau of Economics infuses the FTC with an economic perspective that helped it recognize that “Universals” were a product innovation that improved search rather than a form of leveraging. Labeling them as “anticompetitive” would have confused protecting competition with protecting competitors.


Notes

  1. The FTC investigation covered some issues besides Google’s use of Universals. These issues included “scraping,” Google’s AdWords API, and standard essential patents. The aspect of the investigation that drew the most attention concerned Google’s use of Universals.

  2. One important source of information about the role of the Bureau of Economics is the articles on the FTC in the Review of Industrial Organization’s annual Antitrust and Regulatory update (Carlson et al. 2013; Shelanski et al. 2012; Farrell et al. 2011).

  3. In most but not all cases, the Bureau Directors endorse their staffs’ recommendations.

  4. CNET is not just a shopping site, as it also publishes content about electronics and information technology. But it is a good site for shopping for electronics.

  5. This section of this article makes extensive use of Salinger and Levinson (2013). Screen shots from that time illustrate the Universal search results at the time of the FTC investigation better than do more recent screen shots. Not only do Google results change over time (both because of changes in its algorithms and changes in available content), but they can vary by user (based, for example, on location or search history). Someone else who attempted the searches that we describe may have gotten different results.

    Fig. 1 A screen shot of the upper left-hand portion of the Google home page, taken in May 2013. Notice the black bar with the words “You Search Images Maps Play YouTube News….” Clicking on the appropriate label within the black bar was one way to access Google’s thematic results. Entering a query and clicking the “I’m Feeling Lucky” icon took users directly to what would have been the first Web site listed on Google’s general SERP.

  6. The bolder font may not be clear in the screenshot, but it was clear when one used Google.

  7. Software programs used by general search engines to crawl the Web are known generically as “spiders,” “bots,” or “crawlers.” Google crawls the Web using its “Googlebot.” See, e.g., Hayes (n.d.). Although they harvest enormous amounts of data, crawlers such as Googlebot do not access every site on the Web. One reason for this is that only a small fraction of the Web, known as the “surface” or “public” Web, can be accessed by crawlers. The remainder, known as the “deep” or “invisible” Web, includes concealed content and material that is “either in a database or stored in HTML pages many layers deep with complex URL addresses.” See, for example, the links “Definition of: Surface Web” (n.d.) and “Definition of: Deep Web” (n.d.) provided in the references to this article. A second reason is that Web site administrators often can block crawlers’ access to their sites by including appropriate directives in their Web site code, either in a special file called “robots.txt” or in meta tags embedded in individual Web pages. Google and most other reputable users of crawlers respect these directives. See Hayes, Ibid. See also Google (n.d. a). The “invisible” Web also includes Web content that is generated dynamically as the result of user actions, rather than being stored on static Web pages. Dynamically-generated content cannot be found or indexed by Web crawlers because it does not exist on the Web except in response to user requests.
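    A minimal sketch of how a reputable crawler can honor robots.txt directives, using Python’s standard urllib.robotparser. The crawler name and URLs here are hypothetical illustrations, not Googlebot’s actual implementation.

    from urllib import robotparser

    # Hypothetical example: check whether a crawler named "ExampleBot"
    # may fetch a given page, according to the site's robots.txt.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.example.com/robots.txt")
    rp.read()  # fetch and parse the site's directives

    url = "https://www.example.com/private/page.html"
    if rp.can_fetch("ExampleBot", url):
        print("robots.txt allows crawling this page")
    else:
        print("robots.txt blocks this page; a reputable crawler skips it")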

  8. Note that “Images” is now brighter than the other words in the black bar.

  9. In Figs. 1, 2, 3 and 4, the black rectangle near the top of the page says, “… Search Images Maps Play YouTube News ….” Each is a clickable “tab” that leads to a page with a search bar (as well as content in the case of Maps, Play, YouTube and News). The same query in these different tabs yields different results because Google uses different algorithms to generate them. As described above, “Search” is Google’s general search. Searches in the other tabs are thematic searches. For example, a search in the “Images” tab yields results based on an image theme, meaning that the results are images. In addition to being based on a different algorithm, a thematic search might be based on a more limited set of crawled sites.

  10. We are not privy to the identities of all the complaining publishers of “vertical” Web sites, but Foundem and NexTag are examples of shopping sites whose publishers have complained publicly about Google bias. More generally, there are many “vertical” Web sites that provide specialized search capabilities that are tailored to specific user wants. Examples of “vertical” sites that compete with Google’s Local Universal, in that they provide links to local businesses, include: Yelp! (providing reviews and links to local restaurants, shopping, entertainment venues and services); OpenTable (providing links and reservations to local restaurants); and Yahoo! Local (listings of local businesses, services and events). Examples of “vertical” sites that compete with Google’s Shopping Universal include Amazon.com; Yahoo! Shopping; and Shopping.com.

  11. For a discussion of early Internet search sites, see Sullivan (2003).

  12. As noted earlier, the portion of the Web that is accessible to crawlers is known as the “surface” or “public” Web. We say “in principle” because “even large search engines [index] only a portion of the publicly available part” of the Web. See “Web Crawler” (n.d.).

  13. One approach to search would be to have human-generated answers to some queries (perhaps augmented by machine-learning about which answers users clicked on) and then supplement those with results based on Web crawling and algorithms for which the site did not have human-generated answers. Ask Jeeves used this approach when it started in 1998.
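    As a rough illustration of this hybrid approach (our own sketch, not Ask Jeeves’s actual system), a service might serve a human-curated answer when one exists for the query and otherwise fall back to algorithmic, crawl-based results.

    # Hypothetical curated answers maintained by human editors.
    CURATED_ANSWERS = {
        "what is the capital of france": "Paris",
    }

    def crawl_based_results(query):
        # Stand-in for an algorithmic ranking over crawled pages.
        return ["https://example.com/search?q=" + query.replace(" ", "+")]

    def answer(query):
        curated = CURATED_ANSWERS.get(query.strip().lower())
        if curated is not None:
            return {"type": "curated", "answer": curated}
        return {"type": "algorithmic", "results": crawl_based_results(query)}

    print(answer("What is the capital of France"))  # curated answer
    print(answer("best local pizza"))               # algorithmic fallback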

  14. The science of assessing the relevance of documents for queries is known as “information retrieval.” Bush (1945) is credited with having introduced the idea of a systematic approach to information retrieval. One of the earliest approaches suggested in the 1950s was based on word overlap. The science had advanced well beyond that by the mid-1990s, although the appearance of query terms in a document continues to be an important consideration. The earliest Web browsers made use of developments up to that time. See Singhal (2001) for a discussion.
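    To make the word-overlap idea concrete, here is a minimal sketch (our own illustration, not any engine’s actual formula) that scores documents by the share of distinct query terms they contain.

    def word_overlap_score(query: str, document: str) -> float:
        """Fraction of distinct query terms that appear in the document."""
        query_terms = set(query.lower().split())
        doc_terms = set(document.lower().split())
        if not query_terms:
            return 0.0
        return len(query_terms & doc_terms) / len(query_terms)

    # Illustrative documents and query.
    docs = {
        "page_a": "reviews of local restaurants and cafes",
        "page_b": "history of the printing press",
    }
    query = "local restaurant reviews"
    ranked = sorted(docs, key=lambda d: word_overlap_score(query, docs[d]), reverse=True)
    print(ranked)  # page_a ranks ahead of page_b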

  15. That is, there is no human intervention at the time of the search. The design of the algorithm can entail human intervention, which can range in terms of how “heavy-handed” it is. One form of intervention is to augment or diminish the scores given particular sites. A still more heavy-handed approach would be to program directly the response to a particular query (without any reliance on a formula calculated about each crawled page). Of course, any change in an algorithm designed to modify Google results is arguably human intervention.
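    A sketch of the lighter-handed form of intervention described here: augmenting or diminishing the algorithmic scores given particular sites. The site names and adjustment values are invented for illustration.

    # Hypothetical per-site adjustments layered on top of an algorithmic score.
    SITE_ADJUSTMENTS = {
        "example-spam-farm.com": -0.5,    # diminish a low-quality site
        "example-encyclopedia.org": 0.2,  # augment a high-quality site
    }

    def adjusted_score(site: str, algorithmic_score: float) -> float:
        """Apply a manual boost or penalty, if one exists, to the algorithm's score."""
        return algorithmic_score + SITE_ADJUSTMENTS.get(site, 0.0)

    print(adjusted_score("example-spam-farm.com", 0.9))  # 0.4 (diminished)
    print(adjusted_score("unrelated-site.net", 0.9))     # 0.9 (no adjustment)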

  16. To be sure, an algorithm might incorporate user-specific information, such as location or search history. But the fact remains that two searchers who issue the same query and who otherwise look identical to Google or any other search engine might be interested in quite different information.

  17. More specifically, PageRank is an algorithm that “assigns an ‘importance’ value to each page on the Web and gives it a rank to determine how useful it is by taking into account the number and quality of other Web pages that contain links to the Web page being ranked by the algorithm.” See Google, Inc. (n.d. b).
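    The following is a textbook-style power-iteration sketch of the PageRank idea, applied to a tiny hand-made link graph; it illustrates the concept only and is not Google’s production algorithm.

    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    # Dangling page: spread its rank evenly across all pages.
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    for target in outlinks:
                        new_rank[target] += damping * rank[page] / len(outlinks)
            rank = new_rank
        return rank

    # Illustrative three-page Web: A and C link to B, and B links back to A.
    print(pagerank({"A": ["B"], "B": ["A"], "C": ["B"]}))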

  18. The potential use of links between pages was one fundamental way in which the Internet provided opportunities for information retrieval that had not been available in other applications of computerized information retrieval. Another, which Google’s founders were not the first to realize, is that the volume of queries on the Internet is so great that many users issue the same query. As a result, a search engine can track user responses to a query and then use those data to modify its subsequent responses to the same query.
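    A toy sketch of this second idea, assuming a simple click log: track clicks per (query, result) pair and re-order subsequent results for the same query by observed click rate. The class and data are hypothetical.

    from collections import defaultdict

    class ClickFeedbackRanker:
        """Toy model: record clicks per (query, result) and re-rank by click rate."""

        def __init__(self):
            self.impressions = defaultdict(int)
            self.clicks = defaultdict(int)

        def record(self, query, result, clicked):
            self.impressions[(query, result)] += 1
            if clicked:
                self.clicks[(query, result)] += 1

        def rerank(self, query, results):
            def click_rate(result):
                shown = self.impressions[(query, result)]
                return self.clicks[(query, result)] / shown if shown else 0.0
            return sorted(results, key=click_rate, reverse=True)

    ranker = ClickFeedbackRanker()
    ranker.record("jaguar", "animal-facts.example", clicked=True)
    ranker.record("jaguar", "car-maker.example", clicked=False)
    print(ranker.rerank("jaguar", ["car-maker.example", "animal-facts.example"]))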

  19. Assessing quality and incorporating those assessments into its search algorithms has been an important focus of innovation at Google. These assessments are the results of judgments made by Google’s developers and managers. For example, Google considers the originality of a Web site’s content (in contrast to links to content on other sites) to be an important indicator of quality. As noted in a Google blog entry, “low-quality sites [are] sites which are low-value add for users, copy content from other websites or sites that are just not very useful… [while] [h]igh-quality sites [are] sites with original content and information such as research, in-depth reports, thoughtful analysis and so on.” See Google, Inc. (2011). While the determination of whether, and to what extent, a Web site’s content is “original” may often be empirically observable, the emphasis placed on originality reflects Google’s judgments regarding the relationship between a Web site’s quality and originality.

  20. Google gives each of its search results a blue title. When the result is a link to a Web site, the title is itself a link to the site (meaning that clicking on the blue title takes the searcher to the Web site).

  21. See note 7, above, for definitions of the “invisible” or “deep” Web. Additional sources on this topic include, e.g., Bergman (2001) and Sherman and Price (2003). Even if it is technically possible for a search engine to evaluate dynamically generated content, doing so would presumably require a substantial change in how Google crawls the Web and in its algorithms to evaluate the results.

  22. The trade press distinguishes between “White Hat” and “Black Hat” search engine optimizers. “White Hat SEO” improves the quality of a Web page whereas “Black Hat SEO” exploits imperfections in search algorithms that allow a Web site to get more prominent placement without improving (and perhaps even lowering) quality. See “Search Engine Optimization” (n.d.).

  23. The term “Google bomb” refers to a situation in which people have intentionally published Web pages with links so that Google’s algorithms generate an embarrassing or humorous result. For example, by creating Web pages that linked the term “miserable failure” to George W. Bush’s White House biography page, people were able to “trick” Google’s algorithms into returning that page as the top link to a Google query for “miserable failure.” See “Google Bomb” (n.d.).

  24. The third is Orbitz, which five of the six major airlines launched in 2001.

  25. A specialty site that gathers information by crawling the Web can limit its crawling to sites that provide the class of information its users want. The point is not limited to Web crawlers, however. A specialty site that relies on human cataloguing of sites can limit the sites that it catalogs.

  26. See Kramer (2003) for a description of the start of Google News.

  27. Google did not remove the “Beta” label from Google News until 2005, but the service was widely used and well-reviewed before that. For example, as noted in Kramer (2003), it won the “Best News” Webby in 2003. The Webbys are the equivalent of the Academy Awards for the Internet.

  28. A precursor to Universals at Google was “OneBoxes,” which were links to Google thematic results that appeared at the top (or, in some cases, the bottom) of Google’s SERP. The introduction of Universals provided for more flexible placement of the links to Google’s thematic search sites within its SERP.

  29. This is not the only feasible way to generate diversity in results. For example, using what we have called the first generation of general search algorithms, a search engine would, for each query, compute a single score for ranking and the top ten listings would be those with the top ten scores. A search engine could conceivably place the Web site with the top score first and then generate another set of scores for all remaining Web sites based on an algorithm that is contingent on features of the first listing.
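    A minimal sketch of the contingent rescoring described here: select the top-scoring result, then rescore the remaining candidates with a penalty for similarity to what has already been selected. The similarity function and penalty weight are assumptions for illustration.

    def diversified_ranking(candidates, scores, similarity, k=10, penalty=0.5):
        """Greedy selection: each pick rescores the rest, penalizing similarity
        to already-selected results. candidates: list of ids; scores: dict id->score;
        similarity: function(id_a, id_b) -> value in [0, 1]."""
        selected = []
        remaining = list(candidates)
        while remaining and len(selected) < k:
            def contingent_score(c):
                overlap = max((similarity(c, s) for s in selected), default=0.0)
                return scores[c] - penalty * overlap
            best = max(remaining, key=contingent_score)
            selected.append(best)
            remaining.remove(best)
        return selected

    # Illustrative data: two near-duplicate shopping pages and one news page.
    scores = {"shop1": 0.9, "shop2": 0.85, "news1": 0.6}
    sim = lambda a, b: 0.9 if {a, b} == {"shop1", "shop2"} else 0.1
    print(diversified_ranking(list(scores), scores, sim, k=3))  # shop1, news1, shop2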

  30. To the extent that Google licensed some content for its thematic search results, some successful Google searches may not have ended with a referral to the relevant data from the licensor’s site, if, for example, such data are available only by entering a database search on that third-party site.

  31. The information about Barack Obama on the right hand side of Fig. 5 is an example of a “Knowledge Graph,” which Google introduced in 2012. Google considered it a sufficiently important innovation to list it as a corporate highlight in its 2012 10-K. See Google, Inc. (2013) at p. 4.

  32. The sort of information on the right-hand side of the SERP in Fig. 5 is present in Fig. 2 as well. The key difference between Figs. 2 and 5 for the points we are trying to illustrate is that Fig. 2 includes an Images Universal on the left-hand side of the page where Google’s organic search results appeared originally.

  33. According to the Wall Street Journal, “The FTC’s decision [to close its investigation of Google’s search practices] also shows how anti-Google lobbying from rivals like Microsoft Corp. … had little effect. Microsoft had pressed regulators to bring an antitrust case against Google.” See Kendall et al. (2013). Microsoft’s efforts to lobby the FTC to bring a case can also be inferred from Microsoft’s strong and negative reactions to the FTC’s decision to not bring a case against Google. See, e.g., Kaiser (2013).

  34. In making this point, we do not mean to suggest that the FTC could have demonstrated an antitrust violation. We argue below that it could not have.

  35. While the term “two-sided market” is more common in the economics literature than “two-sided” business, the latter term is often more accurate. As Evans and Schmalensee (2007) correctly observe, firms with two-sided business models often compete on one or both sides with firms that have one-sided business models. For example, a basic cable network that operates on a two-sided model (with revenues from viewer subscriptions and from advertising) might compete for viewers with pay cable networks that get revenue only from subscription fees and not from advertising.

  36. Demand by advertisers to advertise on Google depends on Google’s ability to attract searchers. In some cases, people who search on Google get the information they want from advertising-supported links. To the extent that they do, demand for Google by its search customers might depend in part on its success in attracting advertising customers. But this linkage is not necessary for Google to be a two-sided business. The dependence of demand by advertising customers on Google’s success in competing for search customers is sufficient.

  37. As a technical matter, Web pages can choose not to appear in Google search results by denying access to Google’s crawler. As an economic matter, though, Google does not have to compete to get Web page publishers to grant it access. Web page publishers benefit from appearing in search results.

  38. While this point is not an economic point, most economists have some programming experience and an appreciation of what computer algorithms are. Thus, we would expect that the FTC economists would have been more attuned than the FTC attorneys to the difficulties in demonstrating bias objectively.

  39. Even if television users find some advertisements entertaining and informative, the market evidence is that viewers are generally willing to pay a premium for video entertainment without advertising.

  40. The Robinson-Patman Act is a 1936 amendment to Section 2 of the Clayton Act, which strengthened the Clayton Act’s limitations on price discrimination.

  41. A key institutional feature of the FTC is that it enforces both antitrust and consumer protection statutes. The enforcement of these missions has historically been more separate than one might expect, given that they share the common objective of protecting consumers. The lawyers that enforce the competition and consumer protection statutes are organized in different bureaus (the Bureau of Competition and the Bureau of Consumer Protection). This institutional separation is probably not mere historical accident but, instead, reflects the fact that the statutory provisions are distinct and any law enforcement action must allege a violation of a particular statutory provision. As privacy has emerged as a central issue in consumer protection, how a wide array of Internet companies, including Google, collect and use data has been a concern at the FTC. Those concerns are, however, irrelevant for analyzing the allegations about Google’s use of Universals. Despite the institutional separation between competition and consumer protection enforcement in individual cases, the participation of the Bureau of Economics in both provides an institutional mechanism to harmonize the broad enforcement philosophy that underlies the FTC’s pursuit of its two main missions.

  42. Werden (1992) provides an excellent historical discussion of this reluctance. It dates back to the literature on monopolistic competition in the 1930s.

  43. The current DOJ/FTC Horizontal Merger Guidelines indicate that market definition is not always a necessary element in the economic analysis of mergers, noting that the diagnosis of “… unilateral price effects based on the value of diverted sales need not rely on market definition or the calculation of market shares and concentration.” U.S. Department of Justice and Federal Trade Commission (2010) at § 6.1.

  44. U.S. Department of Justice and Federal Trade Commission (2010) at §4.1.

  45. United States v. E. I. du Pont de Nemours & Co., 351 U.S. 377 (1956).

  46. Since search is a new and rapidly developing product, the historical quality of search would not be the relevant “but-for” benchmark. Rather, the conceptual benchmark would be the quality of search under competitive conditions.

  47. Jefferson Parish v. Hyde, 466 U.S. 2 (1984).

  48. In our opinion, an episode of Google search begins when one enters a query into Google and ends when one exits Google with respect to that query (either because one goes to a site identified by Google, gets an answer directly from Google, or abandons Google as a source of useful information). Under our definition, clicks on Universals, clicks on pages other than the first results page, clicks on alternative spelling suggestions, clicks on suggested alternative queries, and even entry of so-called “refinement queries” are all part of a single episode of Google search.


Acknowledgments

Salinger and Levinson were consultants to Google during the FTC’s investigation. This paper draws heavily on Salinger and Levinson (2013), for which Google provided financial support. The views expressed in this paper are the authors’ alone. We are grateful to Mike Baye, Susan Creighton, Joe Farrell, Franklin Rubinstein, Scott Sher, and Elizabeth Wang for helpful comments.

Author information


Correspondence to Michael A. Salinger.


Cite this article

Salinger, M.A., Levinson, R.J. Economics and the FTC’s Google Investigation. Rev Ind Organ 46, 25–57 (2015). https://doi.org/10.1007/s11151-014-9434-z
