
Moderating (mis)information

Published in: Public Choice

Abstract

This paper uses a laboratory experiment to investigate the efficacy of different content moderation policies designed to combat misinformation on social media. These policies vary the way posts are monitored and the consequence imposed when misinformation is detected. We consider three monitoring protocols: (1) individuals can fact check information shared by other group members for a cost; (2) the social media platform randomly fact checks each post with a fixed probability; (3) the combination of individual and platform fact checking. We consider two consequences: (1) fact-checked posts are flagged, such that the results of the fact check are available to all who view the post; (2) fact-checked posts are flagged, and subjects found to have posted misinformation are automatically fact checked for two subsequent rounds, which we call persistent scrutiny. We compare our data to that of Pascarella et al. (Social media, (mis)information, and voting decisions. Working paper, 2022), which studies an identical environment without content moderation. We find that allowing individuals to fact check improves group decision making and welfare. Platform checking alone does not improve group decisions relative to the baseline with no moderation. It can improve welfare, but only in the case of persistent scrutiny. There are marginal improvements when the two protocols are combined. We also find that flagging is sufficient to curb the negative effects of misinformation. Adding persistent scrutiny does not improve the quality of decision-making; it leads to less engagement on the social media platform as fewer group members share posts.


Notes

  1. https://www.pewresearch.org/fact-tank/2020/10/15/64-of-americans-say-social-media-have-a-mostly-negative-effect-on-the-way-things-are-going-in-the-u-s-today/.

  2. There is a large literature that studies endogenous information acquisition in voting games. See e.g., Elbittar et al. (2016), Grosser and Seebauer (2016), Großer (2018) and Meyer and Rentschler (2022).

  3. The order of monitoring protocols is balanced across sessions to control for order effects.

  4. Pascarella et al. (2022) study how social media affects voting outcomes in an identical experimental environment, but without content moderation. The data for Pascarella et al. (2022) and the current design were collected at ExCEN Laboratory at Georgia State University between February and March of 2020. As such, our data is directly comparable and we use the data of Pascarella et al. (2022) as benchmarks for our analysis. The data for our experiment and Pascarella et al. (2022) were collected in separate sessions; no subject participated in a session for both studies. The recruitment methods, software, and instructions were the same across the studies, except for the inclusion of content moderation. The experimenter running the session was also the same.

  5. The number of rounds is pre-randomized for all groups. Groups interact in 17 rounds in the first stage and 18 rounds in the second stage. Subjects are only informed of the continuation probability.

  6. Subjects’ earnings are denominated in Experimental Francs (EF) which are converted back to USD at 145EF = $1.

  7. Robbett and Matthews (2018) find that individuals are more likely to give partisan responses in a voting scenario, relative to when they are the decision-maker for a policy. They also find that free access to information reduces the partisan gap in outcomes. When information is costly, individuals do not purchase information and vote according to their partisan preferences.

  8. In particular, if a subject’s type is weakly more than 75, they would vote for Brown even if they knew the state of the world was Purple. Analogously, a subject would vote for Purple if their type is weakly less than 25, even if they know the state of the world was Brown. To see this, suppose a subject knows that the state of the world is Brown. They would still prefer to vote for Purple if 50 + p < 100 − p. That is, when p < 25.

  9. Such extreme partisans are exactly the types that are likely to post misinformation or strategically withhold information that runs counter to their partisan preference. Further, these types are particularly likely to target for fact checks those posts that run counter to their partisan preferences.

  10. In the first period, subjects follow all members of the group by default and can modify this structure before proceeding with the period.

  11. Note that having two stages in our experiment effectively doubles the number of groups in our data, and allows us to balance for any potential ordering effects.

  12. There is no cost to group members if the platform conducts a fact check, so the cost of fact checks is not equal across monitoring protocols. This is an asset to our design: when real-world social media platforms fact check user content, no costs are imposed on users of the platform, and we are interested in assessing content moderation policies as they would actually be implemented.

  13. The type revealed in the post is not fact checked.

  14. See footnote 4 for more details.

  15. See “Appendix 1” for a sample set of instructions.

  16. Each of the corresponding t-tests is highly significant, with p < 0.001. In our analysis, we focus on group decisions, since the interest of our study is to learn how content moderation policies affect aggregate outcomes. Our unit of observation is a group in a given round, and we report the results of two-tailed t-tests.

  17. Data are pooled across monitoring protocols in this test. Results are analogous when considering all pairwise comparisons.

  18. When a test is not statistically significant at conventional levels, we denote this as n.s.

  19. In “Appendix 2”, we report summary statistics where the variable of interest is the percentage of the units of information purchased, including those shared inaccurately (column 3). The results presented here extend to these data as well.

  20. Adding persistent scrutiny has no statistically significant effect within the PL or the P2P + PL monitoring protocols. Adding persistent scrutiny actually increases the percentage of posts containing misinformation when the detection method is P2P (p < 0.001). It is unclear what drives this increase. Even with this puzzling result, we can say that adding persistent scrutiny of purveyors of misinformation does not reduce the percentage of posts containing misinformation in our experiment.

  21. These results are all true under both consequences. The corresponding t-tests comparing the quality of group decision making across PL and TR have p < 0.05. In addition, the quality of group decision making under PL is not statistically different from the scenario without social media (the NONE baseline) under either consequence.

  22. To explore other factors that may influence group decision quality, we analyze panel linear probability models in “Appendix 3”.

  23. Not surprisingly, when misinformation is only flagged, PL monitoring results in lower welfare than in the TR baseline (p < 0.05).

  24. Regression analysis yields similar results. See “Appendix 3” for details.

  25. For the PL monitoring protocols, this is significant for both consequences (p < 0.001, in both cases). For both P2P and P2P + PL with persistent scrutiny this difference is significant (p < 0.001, in both cases). However, for both P2P and P2P + PL with only flagging there is no significant difference.

  26. Logistic and probit regression models yield comparable results.

  27. The variable used to pick up order effects is a dummy for whether the protocol was the first protocol a participant encountered. Learning across periods is controlled for via log(t − 1).

References

  • Barrera, O., Guriev, S., Henry, E., & Zhuravskaya, E. (2020). Facts, alternative facts, and fact checking in times of post-truth politics. Journal of Public Economics, 182, 104123.

  • Elbittar, A., Gomberg, A., Martinelli, C., & Palfrey, T. R. (2016). Ignorance and bias in collective decisions. Journal of Economic Behavior and Organization, 174, 332–359.

  • Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.

  • Goeree, J. K., & Yariv, L. (2011). An experimental study of collective deliberation. Econometrica, 79(3), 893–921.

  • Großer, J. (2018). Voting game experiments with incomplete information: A survey. Available at SSRN 3279218.

  • Grosser, J., & Seebauer, M. (2016). The curse of uninformed voting: An experimental study. Games and Economic Behavior, 97, 205–226.

  • Guarnaschelli, S., McKelvey, R. D., & Palfrey, T. R. (2000). An experimental study of jury decision rules. American Political Science Review, 94(2), 407–423.

  • Jun, Y., Meng, R., & Johar, G. V. (2017). Perceived social presence reduces fact-checking. Proceedings of the National Academy of Sciences, 114(23), 5976–5981.

  • Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., et al. (2018). The science of fake news. Science, 359(6380), 1094–1096.

  • Le Quement, M. T., & Marcin, I. (2019). Communication and voting in heterogeneous committees: An experimental study. Journal of Economic Behavior and Organization, 174, 449–468.

  • Meyer, J., & Rentschler, L. (2022). Abstention and informedness in nonpartisan elections. Working paper.

  • Nieminen, S., & Rapeli, L. (2019). Fighting misperceptions and doubting journalists’ objectivity: A review of fact-checking literature. Political Studies Review, 17(3), 296–309.

  • Nyhan, B., Porter, E., Reifler, J., & Wood, T. J. (2019). Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior, 42, 1–22.

  • Pascarella, J., Mukherjee, P., Rentschler, L., & Simmons, R. (2022). Social media, (mis)information, and voting decisions. Working paper.

  • Pogorelskiy, K., & Shum, M. (2019). News we like to share: How news sharing on social networks influences voting outcomes. Available at SSRN 2972231.

  • Robbett, A., & Matthews, P. H. (2018). Partisan bias and expressive voting. Journal of Public Economics, 157, 107–120.

  • Smith, V. (1982). Microeconomic systems as an experimental science. American Economic Review, 72(5), 923–955.


Funding

Generous financial support from the Center for Growth and Opportunity at Utah State University is gratefully acknowledged.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Lucas Rentschler.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Sample instructions

1.1 Welcome

1.1.1 No talking allowed

Once the experiment begins, we request that you not talk until the end of the experiment. If you have any questions, please raise your hand, and an experimenter will come to you.

1.1.2 Payment

For today’s experiment, you will receive a show-up fee of $5. All other amounts will be denominated in Experimental Francs (EF). These Experimental Francs will be traded in for Dollars at a rate of 145EF = $1.

1.1.3 Stages

This experiment will be conducted in two stages. At the beginning of each stage:

  1. Five individuals from the room will be randomly matched to form a group.

  2. Each group member will be randomly assigned a Subject ID – A, B, C, D, or E.

  3. Each group member will be assigned a whole number, p, which is randomly chosen between 0 and 100 EF. Any number between 0 and 100 EF has the same chance of being selected. It is independently drawn for each group member. Therefore, the draw of p for one group member is not affected by those of the other members of the group. Each group member knows their own p, but not those of the other group members.

In each stage, everyone in the group will make decisions for at least 5 rounds. There is a 90% chance there will be a sixth round. If there is a sixth round, there is a 90% chance there will be a seventh round, and so on. Thus, at the end of each round (after the fifth round) there is a 90% chance that there will be at least one more round.

You can think of this as the computer rolling a ten-sided die at the end of each round after the fifth round. If the number is 1 through 9, there is at least one more round. If the number is 10, the stage ends.
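The stopping rule above can be sketched in a few lines of code (an illustration we add here, not part of the original instructions; the function name is ours):

```python
import random

def stage_length(min_rounds=5, cont_prob=0.9, rng=random.Random(2020)):
    """Simulate one stage: at least min_rounds rounds are played, then
    after each round there is a cont_prob chance of one more round."""
    rounds = min_rounds
    while rng.random() < cont_prob:
        rounds += 1
    return rounds

# The expected stage length is 5 + 0.9/(1 - 0.9) = 14 rounds.
sims = [stage_length() for _ in range(100_000)]
print(sum(sims) / len(sims))  # close to 14
```

With a 90% continuation probability, the expected number of rounds beyond the fifth is 0.9/(1 − 0.9) = 9, so stages last 14 rounds on average.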

1.1.4 Round overview

Each round will consist of the following sequence:

  1. Jar Assignment Each group is randomly assigned either a Brown Jar or a Purple Jar. The color of the jar will not be known by any group member.

  2. Buying Information Each group member has the option to buy information about the color of the Jar that has been randomly assigned.

  3. Connections Each group has a message board. Each group member has the option to follow other members of the group on this message board.

  4. Post Choice After making connection decisions, each group member will have a choice regarding whether or not to post information.

  5. Post Creation Each group member who has opted to post information determines the content of their post for the message board.

  6. Viewing and Fact-Checking posts on the message board Each group member will observe the posts of those group members they are following, provided they have decided to make a post.

  7. Each group member has the option to Fact-Check the accuracy of any Post they observe on the message board at a cost (Peer to Peer protocol). In addition, there is a 20% chance that the computer automatically Fact-Checks each group member’s Post (Platform and Peer to Peer and Platform protocols).

  8. Results from Fact-Checking If a group member’s Post is Fact-Checked, the results of the Fact-Check will be observed by all group members who observe the Post.

  9. Vote Each group member casts a vote for either Brown or Purple. The color that gets three or more votes is the group decision.

1.2 Decision environment and choices

All treatments.

1.2.1 Jar assignment

At the beginning of every round, the computer will randomly assign one of two options as the correct Jar for each group: the Brown Jar or the Purple Jar. In each round, there is a 50% probability that the Jar assigned to a group is the Brown Jar and a 50% probability that it is the Purple Jar. The computer chooses the Jar randomly for each group and separately for each round. Therefore, the chance that your group is assigned the Brown Jar or the Purple Jar is not affected by what happened in previous rounds or by what is assigned to other groups. The choice is always completely random in each round, with a probability of 50% for the Brown Jar and 50% for the Purple Jar.

1.2.2 Voting task

Each group member will decide between two colors: Brown and Purple. Specifically, at the end of each round, the group members will simultaneously vote for either Brown or Purple. The group decision will be determined by the color which gets three or more votes.

1.2.3 Payoff from voting

The payoff from voting that each group member earns in the round depends on the outcome of the vote. There are two parts to this payoff.

1.2.3.1 Part A

Remember that at the start of each stage, each group member is assigned a whole number, p, which is randomly chosen between 0 and 100EF. Any number between 0 and 100EF has the same chance of being selected. It is independently drawn for each group member. Therefore, the draw of p for one group member is not affected by the draw of p of the other members of their group. Each group member knows their own p, but not those of their other group members. Remember that the value of p assigned to each group member remains fixed within each stage but is randomly assigned in each stage.

Each group member gets p EF if the group votes for Brown, and 100 − p EF if the group votes for Purple. This payoff does not depend on the color of the Jar, which is randomly assigned at the start of each round. Notice that this means that each group member is likely to get a different payoff if Brown wins the vote because each group member is likely to have a different p. Similarly, each group member is likely to get a different payoff if Purple wins the vote.

1.2.3.2 Part B

If the color chosen by the vote matches the color of the Jar that was randomly assigned at the start of the round, each group member gets a payoff of 50 EF.

Example 1

Suppose you are assigned p = 70, Purple is the color chosen by the vote, and the color of the Jar that is randomly assigned to your group is Purple.

Since you’re assigned p = 70, your payoff from Part A is: 100 − 70 = 30EF.

Since the color of the Jar assigned to your group is the same as the color chosen by your group in the vote, your payoff from Part B is: 50EF.

Thus, your total payoff from the voting task is: 80EF.

Example 2

Suppose you are assigned p = 70, Brown is the color chosen by the vote, and the color of the Jar that is randomly assigned to your group is Purple.

You earn 70 EF from the voting task.

Since you’re assigned p = 70, your payoff from Part A is: 70 EF.

Since the color of the Jar assigned to your group is NOT the same as the color chosen by your group in the vote, your payoff from Part B is: 0EF.

Thus, your total payoff from the voting task is: 70EF.
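The two-part payoff from voting can be summarized as a small function (a sketch we add for clarity; the function name is ours, the logic follows the rules above):

```python
def voting_payoff(p, group_decision, jar_color):
    """Round payoff from voting, in EF.
    Part A: p if the group votes Brown, 100 - p if Purple.
    Part B: a 50 EF bonus if the group decision matches the Jar."""
    part_a = p if group_decision == "Brown" else 100 - p
    part_b = 50 if group_decision == jar_color else 0
    return part_a + part_b

print(voting_payoff(70, "Purple", "Purple"))  # Example 1: 30 + 50 = 80 EF
print(voting_payoff(70, "Brown", "Purple"))   # Example 2: 70 + 0 = 70 EF
```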

1.2.4 Buying information

Remember that none of the group members know the color of the Jar that has been randomly assigned to a group prior to voting on a color. Each group member has an option to buy multiple units of information.

If a group member purchases information, he/she will observe a Report, which is either Brown or Purple. The probability that the color of this Report is the same as the color of the Jar depends on how many units of information the group member purchases. Each group member can buy any number of units between 0 and 9 (in increments of one unit). If a group member buys a single unit of information, their Report is correct 55% of the time. If a group member purchases two units of information, their Report is correct 60% of the time. Each additional unit of information that a group member purchases increases the probability that their Report is correct by 5 percentage points.

You can think of this process as follows: the computer starts with a box containing 50 Brown and 50 Purple balls. For each unit of information a group member purchases, the computer adds 5 balls of the color of the randomly assigned Jar and removes 5 balls of the other color. The computer then mixes the balls and selects one randomly. The color of this selected ball is the color in the Report. So, for example, if a group member buys four units of information, the box from which the computer randomly selects a ball contains 70 balls of the color of the randomly assigned Jar and 30 balls of the other color. The cost of units of information is detailed in the table below.

Units   Probability correct (%)   Total cost (EF)
  1              55                      1
  2              60                      2
  3              65                      5
  4              70                      8
  5              75                     13
  6              80                     18
  7              85                     25
  8              90                     32
  9              95                     41

If a group member chooses not to buy any units of information, they do not get a Report.

Example

Suppose you choose to buy 5 units of information and your Report is Brown. The cost of the 5 units of information is 13 EF, and there is a 75% chance the information is correct.

1.2.5 Connections

1.2.5.1 Only in communication treatments

At the start of each stage all group members are following each other on the message board. After each group member has decided how many units of information they wish to purchase, and viewed their Reports (when applicable), each group member decides who they would like to follow on the group’s message board.

Each group member can only see Posts made by group members they are following. Each group member is identified by the Subject ID assigned to them at the beginning of each stage. Remember that the Subject ID assigned to each group member remains the same within a stage.

Note that if you follow a particular group member, but they do not follow you, then they do not see your posts.

1.2.6 Post choice

After connection decisions have been made, each group member is shown:

  1. Group members they are following.

  2. Group members who are following them.

Each group member then chooses whether or not to make a Post.

1.2.7 Post creation

Each group member who has opted to make a post determines the following contents of their post:

  1. The value of p randomly assigned to the group member.

     The group member can input a number between 0 and 100 EF. Their Post will state that the inputted number is the value of p assigned to them. Note that the number inputted does not have to be equal to their p. He/She can also opt not to input a number.

  2. The units of information the group member purchased.

     Each group member states the number of units of information they purchased. He/She can input any whole number between 0 and 9. Note that the number they state does not have to be equal to the actual number of units of information they purchased.

  3. The color of their Report, if they state they have purchased one or more units of information. Notice that the color they state does not have to equal the actual color of their Report.

1.2.8 Viewing posts on the message board

Each group member will observe the Posts of those they are following, provided those members decided to make a Post.

Each group member then has the option to Fact-Check the accuracy of any of these Posts, at a cost of 5 EF per Post (Peer to Peer protocol).

In addition, there is a 20% chance that the computer automatically Fact-Checks each group member’s Post (Platform protocol).

If a Post is Fact-Checked, the computer will check the accuracy of the following information stated in the Post:

  1. The number of units of information purchased.

  2. The color of the Report, if the Post states that any units of information were purchased.

Note A Fact-Check does not verify p.

1.2.8.1 Results from fact-checking

The results from any Fact-Check will be displayed with the Post, and will be observed by anyone who is able to observe the Post. These results will be displayed before the group votes.

If a group member’s Post is Fact-Checked, the results of the Fact-Check will be displayed with the Post. The result will state whether the Post is Accurate or Inaccurate. (Flagging).

In addition, if a Post is Inaccurate the next two subsequent Posts shared by this group member will be automatically Fact-Checked by the computer. (Persistent Scrutiny).

Example 1

Suppose your assigned Subject ID is C and you choose to follow Subjects B, D, and E, and not follow Subject A. Further suppose only Subject A and Subject B chose to create Posts. Since you are following Subject B, you will see their Post. Since you are not following Subject A, you will not see their Post. Suppose you decide to Fact-Check the accuracy of the information in the Post created by Subject B. You and everyone following Subject B will be able to see the results from the Fact-Check before the group votes.

Example 2

Suppose your assigned Subject ID is C, and Subjects A and E choose to follow you, while Subjects B and D choose not to follow you. Further suppose that you decide to make a Post. Subjects A and E will see your Post, while Subjects B and D will not. Suppose neither A nor E chooses to Fact-Check your Post, but the computer randomly Fact-Checks the accuracy of your Post. The results from the Fact-Check will be available to both Subjects A and E before the group votes.

1.2.9 Vote

After all group members observe the Fact-Checking results (if any) on the group message board, each group member casts a vote for either: Brown or Purple.

Each group member casts their vote without knowing the votes of the other members of their group.

The computer sums up the number of votes for Brown and for Purple. The color which receives three or more votes is the group’s decision.

1.2.10 Final payoff

Each group member’s final payoff for the round is given by:

Peer to Peer protocol and Peer to Peer and Platform protocol

$$ \text{Final Payoff} = \text{Payoff from voting} - \text{Total cost of buying information} - \text{Cost of Fact-Checking} $$

Platform protocol

$$ \text{Final Payoff} = \text{Payoff from voting} - \text{Total cost of buying information} $$

Remember that the payoff of each group member in the round depends on the outcome of the vote, and the number of units of information they purchased.

Remember that there are two parts of the payoff from Voting:

1.2.10.1 Part A

Each group member gets p EF if the group votes for Brown, and 100 − p EF if the group votes for Purple. This payoff does not depend on the color of the Jar, which is randomly assigned at the start of each round.

1.2.10.2 Part B

If the group’s decision matches the color of the Jar that was randomly assigned at the start of the round, each group member gets a payoff of 50 EF.

The cost of Fact-Checking each post is 5 EF (Only in Peer to Peer). The cost of information depends on the number of units of information purchased. The table below contains these costs.

Units   Probability correct (%)   Total cost (EF)
  1              55                      1
  2              60                      2
  3              65                      5
  4              70                      8
  5              75                     13
  6              80                     18
  7              85                     25
  8              90                     32
  9              95                     41

Example

Suppose you are assigned p = 70, the group’s decision is Purple, and the color of the randomly assigned jar is Purple. Further suppose that you purchased 5 units of information and got a Brown Report. Suppose, in addition, you chose to Fact-Check one Post.

Your final payoff for the round is 62 EF. You get 30 EF from the group’s decision being Purple (Part A), plus 50 EF since the group’s decision matched the color of the randomly assigned jar (Part B), minus 13 EF for buying 5 units of information, minus 5 EF for Fact-Checking one Post.
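The worked example can be reproduced end to end (a sketch we add that combines the payoff rules and cost table above; names are ours):

```python
# Total cost in EF of n units of information, from the table above.
INFO_COST = {0: 0, 1: 1, 2: 2, 3: 5, 4: 8, 5: 13, 6: 18, 7: 25, 8: 32, 9: 41}
FACT_CHECK_COST = 5  # EF per peer Fact-Check (Peer to Peer protocols only)

def final_payoff(p, group_decision, jar_color, units, fact_checks=0):
    """Final round payoff: voting payoff (Parts A and B) minus the cost
    of purchased information minus the cost of any peer Fact-Checks."""
    part_a = p if group_decision == "Brown" else 100 - p
    part_b = 50 if group_decision == jar_color else 0
    return part_a + part_b - INFO_COST[units] - FACT_CHECK_COST * fact_checks

print(final_payoff(70, "Purple", "Purple", units=5, fact_checks=1))  # 62 EF
```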

Appendix 2: Composition of information shared

See Table 5.

Table 5 Group information purchasing and sharing

Appendix 3: Regression

3.1 Quality of group decisions

We analyze factors that influence group decision quality in a panel linear probability regression analysis with standard errors clustered at the group level.Footnote 26 We report the results in Table 6 for the five models we estimate. The dependent variable is whether the group’s vote matched the correct policy. In columns 1 and 2, we include data from the fact-checking and the information-sharing protocols. The independent variables include a dummy for each protocol; the excluded category is peer to peer fact-checking when posts are flagged. We also control for the order in which the protocol was introduced and for learning across periods.Footnote 27 In column 2, we include a variable to account for the total information purchased by groups. In columns 3 and 4, we include data from all protocols where a social media platform was present, to separate the platform’s effect on group decision quality. In column 3, besides the total information purchased, we control for the average number of connections on the platform. In column 4, we include two variables that account for the nature of information shared on the platform: the units of information wasted to misinformation, and the units of information strategically withheld. In column 5, we include only data from the fact-checking protocols and include a variable to account for the total number of fact-checked posts.

Table 6 Quality of group decision making

The overall quality of group decisions is not significantly different across the three fact-checking protocols under either consequence, but is significantly lower when there is no option to fact-check and participants can share posts containing misinformation. Once we account for total information purchases, we find that, among the fact-checking protocols, platform-only fact-checking with flagging leads to lower-quality decision making. Recall that, apart from platform fact-checking under persistent scrutiny, the presence of a social media platform leads to an increase in information purchases. This increase in information purchases does not translate into higher-quality decisions when only the platform fact-checks, or when there is no option to fact-check in the full information-sharing protocol.

In columns 3 and 4, we consider only the protocols where a social media platform is present. Recall that the fact-checking protocols lead to an increase in the average number of connections relative to the truthful and full information-sharing protocols. This increase in the average number of connections leads to an improvement in the quality of group decisions. Not surprisingly, we find that an increase in the information wasted to misinformation lowers group decision quality.

In column 5, we consider only the three fact-checking protocols; we find that platform fact-checking when posts are flagged lowers the quality of decisions. Although platform fact-checking with persistent scrutiny also leads to lower-quality group decisions, these differences are not significant once we account for the lower information purchases.

3.2 Welfare

We also analyze welfare in a linear panel regression analysis with standard errors clustered at the group level. We report the results in Table 7 for the models we estimate. The dependent variable of interest is the total group payoff. In columns 1 and 2, we include data from both the fact-checking and information-sharing protocols. The independent variables include a dummy for each protocol; the excluded category is peer to peer fact-checking when posts are flagged. We also control for the order in which the protocol was introduced and for learning across periods. In column 2, we include a variable to account for the total information purchased by groups. In columns 3 and 4, we include data from all protocols where a social media platform was present, to separate the platform’s effect on welfare. In column 3, besides the total information purchased, we control for the average number of connections on the platform. In column 4, we include two variables that account for the nature of information shared on the platform: the units of information wasted to misinformation, and the units of information strategically withheld. In column 5, we include only data from the fact-checking protocols and include a variable to account for the total number of fact-checked posts.

Table 7 Total welfare

The fact-checking protocols under both consequences lead to higher total welfare relative to the full information-sharing protocol, where sharing misinformation is possible but there is no option to fact-check. These results are robust even after we account for the increase in total information purchases in the presence of a social media platform. Not surprisingly, when we analyze the data in the presence of a social media platform, we find that having more connections on the platform increases total welfare, whereas units of purchased information wasted to misinformation lower total welfare. Comparing the two consequences of fact-checking, we do not find any significant difference in welfare.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Meyer, J., Mukherjee, P. & Rentschler, L. Moderating (mis)information. Public Choice 199, 159–186 (2024). https://doi.org/10.1007/s11127-022-01041-w

