
Business Ethics and Free Speech on the Internet

Philosophia

Abstract

The unique role of the Internet in today’s society, and the extensive reach and potentially profound impact of much Internet content, raise philosophically interesting and practically urgent questions about the responsibilities of various agents, including individual Internet users, governments, and corporations. Raphael Cohen-Almagor’s Confronting the Internet’s Dark Side is an extremely valuable contribution to the emerging discussion of these important issues. In this paper, I will focus on the obligations of Internet Service Providers (ISPs) and Web Hosting Services (WHSs) with respect to online hate speech. I will argue that although Cohen-Almagor is correct that we should not understand these companies’ obligation to protect and promote freedom of expression as always taking priority over potentially competing values, his argument that they ought to deny service to those who would engage in hate speech online is less than fully convincing. In part this is because his definition of hate speech is, in important respects, overly broad. In addition, I will argue that the analogies to which he appeals in defense of his position are flawed, and that a more accurate analogy appears to provide at least some support to the view that ISPs and WHSs ought not deny service to those who would engage in at least some forms of hate speech.


Notes

  1. For example, at least many forms of cyberbullying are legally prohibited in every U.S. state, often under more general anti-harassment policies (see http://www.stopbullying.gov/laws/). In addition, the speech involved in cyberbullying is typically not political speech, whereas hate speech is, at least often, a form of political speech.

  2. This is the case even if Jane does not consciously identify as racist, or recognize her tendency to react in racially biased ways, since, as much recently developed evidence suggests, implicit biases, and in particular implicit racial biases, are quite pervasive. For relevant discussion, see Kelly and Roedder (2008).

  3. For relevant discussion, see Altman (1993).

  4. Cohen-Almagor does suggest that he intends his definition of hate speech to rule out statements such as “Jews are money hungry…gays are immoral...[and] blacks are chimps,” which he describes as “unpleasant yet legitimate speech” (Cohen-Almagor 2015, p. 205). But in fact both his initial definition quoted above, and the one that he offers immediately after noting that he wishes to exclude these statements (“malicious speech aimed at victimizing and dehumanizing its target” (p. 205)), would seem clearly to count at least some instances in which these statements are made as hate speech, since it is surely possible for them to be bias-motivated, hostile, malicious, and made with the aim of victimizing and dehumanizing their targets.

  5. Cohen-Almagor’s definition allows that speech that targets individuals or groups because of their religious belief or affiliation could count as hate speech if the speaker mistakenly perceives her targets’ religion as an innate characteristic. This type of case, however, will be fairly unusual at best.

  6. In other cases, however, the reasons against interfering will outweigh the reason in favor of interference, despite the fact that there is good reason to believe that the relevant speech might play a role in causing people to be seriously and wrongfully harmed. For example, even if there is reason to believe that an online opinion piece criticizing a candidate for political office on the basis of her history of opposing, for example, widely supported gun control measures, might cause supporters of that candidate to react violently, perhaps seriously injuring others, this does not seem to be a sufficient reason for an ISP or WHS to remove the piece or deny service to its author. If we are to defend treating hate speech differently, allowing that, at the very least, it is easier to justify interfering with such speech than it is to justify interfering with other speech that might cause similar harm, then we must identify a morally relevant difference between hate speech and the relevant other speech other than the fact that, in general, hate speech is more likely to play a role in causing serious harm. This is a difficult challenge, about which Cohen-Almagor says relatively little, and which I cannot take up in any detail here.

  7. As Cohen-Almagor notes, the website was in fact provided with technical support by Don Black, whom Cohen-Almagor refers to as “the godfather of hate sites” (Cohen-Almagor 2015, p. 207). Obviously, individuals or companies that aim to promote hate sites are engaged in seriously wrongful conduct and ought to be judged harshly. Had the site been supported by a company that simply had a general policy of not engaging in content discrimination, however, the threat posed by the content of this particular site would have given the company sufficient reason, and perhaps an obligation, to deny service to the racist organization and its leader. And if the company continued to provide service to the site in order to, for example, avoid financial losses, then there would be good reason to judge its decision makers harshly as well (Cohen-Almagor 2015, p. 177).

  8. Cohen-Almagor suggests that the same moral principles ought to govern speech and its regulation both online and offline (Cohen-Almagor 2015, pp. 56–57). This is, in my view, correct, and the remainder of my argument will proceed on the basis of that assumption.

  9. I will not discuss the library analogy here, since Cohen-Almagor does not employ it in order to discuss the issue of hate speech, but instead to suggest that Internet users should have to provide verifiable identifying information in order to gain access to content such as, for example, “recipes for rape drugs, recipes for bombs, and manuals on how to kill people” (Cohen-Almagor 2015, p. 160).

  10. Of course, telephone service providers would be justified in denying service to those who used their services to make harassing phone calls to targets of their hate speech. But in these cases, it would be the harassment, and not the hate speech itself, that would justify denial of service.

  11. For relevant discussion, see Scanlon (2003).

References

  • Altman, A. (1993). Liberalism and campus hate speech: a philosophical examination. Ethics, 103(2), 302–317.


  • Cohen-Almagor, R. (2015). Confronting the internet’s dark side: moral and social responsibility on the free highway. Cambridge: Cambridge University Press.


  • Kelly, D., & Roedder, E. (2008). Racial cognition and the ethics of implicit bias. Philosophy Compass, 3(3), 522–540.


  • Mill, J.S. (1978). On liberty. E. Rapaport (Ed.). Indianapolis: Hackett Publishing Company.

  • Scanlon, T. M. (2003). The difficulty of tolerance. In The difficulty of tolerance: Essays in political philosophy. Cambridge: Cambridge University Press.



Author information

Correspondence to Brian Berkey.


Cite this article

Berkey, B. Business Ethics and Free Speech on the Internet. Philosophia 45, 937–945 (2017). https://doi.org/10.1007/s11406-016-9785-9
