Some people are honest, while others are likely to lie whenever it benefits them. We would like to understand the prevalence of lying, because dishonesty may be economically and socially harmful. Since we cannot simply ask people if they are liars, one way to estimate the proportion of liars in a group is to ask them to report the result of a coin flip or other random device, offering them a payment if they report heads. Liars do not always lie: they only lie when it benefits them. So, they always report heads irrespective of the true coin flip.Footnote 1 If there are many more heads than we would expect by chance, we can assume many people are lying. But how many?

A naïve estimate would be that if, e.g., 80 people out of 100 report heads, then on average 50 really saw heads and 60% (30/50) of the remainder are lying. More generally, from R reports of a good outcome in a sample of size N, where the bad outcome happens with probability P, we can estimate that the following proportion are lying (Abeler et al. 2016):

$$\begin{aligned} \frac{R/N - (1-P)}{P}. \end{aligned}$$
(1)

The problem with this approach is that the number of true heads is not fixed. If we see 1 out of 3 people reporting heads, this method estimates that fewer than zero people are lying. But it is still possible that everyone saw tails and 1 person lied.
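
As a minimal illustration of Eq. (1), the following R sketch (function and variable names are my own) reproduces the 80-out-of-100 example and shows how 1 report of heads out of 3 produces a negative estimate:

  # Naive estimator of the proportion of liars, Eq. (1)
  naive_estimate <- function(R, N, P) {
    (R / N - (1 - P)) / P
  }

  naive_estimate(R = 80, N = 100, P = 0.5)  # 0.6: 60% of tails-observers estimated to lie
  naive_estimate(R = 1,  N = 3,   P = 0.5)  # -0.33: an impossible negative estimate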

Garbarino et al. (2018)—GSV from here on—point out this problem and introduce an alternative method. They claim that their method corrects for this problem and can estimate the full distribution of lying outcomes, and they recommend using it for confidence intervals, hypothesis testing and power calculations.

I ran simulations to check the overall performance of the GSV confidence intervals. Simulation parameters are shown in Table 1. \(\lambda\) is the probability that an individual in the sample lies and reports heads when they observe tails:

$$\begin{aligned} \lambda = \frac{1}{N} \sum _{i=1}^N \text {Prob}(i \text { reports heads}|i\text { saw tails}). \end{aligned}$$
(2)

For each parameter combination, I ran 1000 simulations, drawing random coin flips and reports given \(P\), \(\lambda\) and \(N\).
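
A minimal sketch of one such simulation draw, under the simplifying assumption that each tails-observer lies independently with probability \(\lambda\) (names are illustrative):

  # Simulate one coin-flip experiment: returns the number of reported heads
  simulate_reports <- function(N, P, lambda) {
    saw_tails <- rbinom(N, 1, P)                   # 1 = subject saw the bad outcome (tails)
    lied      <- rbinom(N, 1, lambda) * saw_tails  # tails-observers lie with probability lambda
    reported_heads <- (1 - saw_tails) | lied       # report heads if saw heads, or saw tails and lied
    sum(reported_heads)
  }

  set.seed(1)
  simulate_reports(N = 50, P = 0.5, lambda = 0.2)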

For each simulation and confidence level, I computed whether the GSV confidence interval contained the true value of \(\lambda\). The first row of Table 2 shows the results.

Table 1 Parameter values
Table 2 Coverage levels for GSV and alternative methods

By definition, 95% of 95% confidence intervals ought to contain the true value, on average. This is called “achieving nominal coverage”. GSV confidence intervals are too narrow.Footnote 2

To deal with this problem, I test two alternative methods for calculating confidence intervals on my simulated data. The first (“Frequentist”) is the standard method of deriving confidence intervals from a binomial test. The second is a Bayesian method.

To understand the statistics, start with the probability of getting R reports of heads in total, given \(\lambda\). Individuals report heads either because they truly see heads, with probability \(1-P\), or because they see tails but lie, with probability \(\lambda P\), so this is just:

$$\begin{aligned} \text {Pr}(R|\lambda ; \,N,P) = \text {binom}(R, N, (1 - P) + \lambda P). \end{aligned}$$
(3)

That immediately suggests the “Frequentist” method, which is to estimate the parameter of this distribution, \((1 - P) + \lambda P\), from the proportion of heads reported in the sample, then back out \(\lambda\). This is the conventional method of, e.g., Abeler et al. (2016). It is justified if the sample is large, because this will lessen sampling variation in the proportion of actual heads observed. Similarly, if the sample is large enough, we can generate hypothesis tests for a value of \(\lambda\)—e.g., zero—using the tails of the binomial distribution. And we can back out confidence bounds for \(\lambda\) from confidence bounds for the population proportion of heads reported in the same way. As GSV point out, in small samples, this method runs the risk that the sample proportion of high outcomes will be different from its expected value.Footnote 3 We will see whether this matters.

There are numerous ways to calculate confidence intervals in a test of proportions. See, e.g., Agresti and Coull (1998). Here, I use the binomial exact test of Clopper and Pearson (1934), which is known to be conservative.
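
A sketch of this calculation using base R's binom.test(), which returns Clopper–Pearson intervals; the back-transformation to \(\lambda\) inverts the success probability \((1-P) + \lambda P\) from Eq. (3) (function and argument names here are my own):

  # Frequentist estimate of lambda with a Clopper-Pearson confidence interval
  freq_lambda <- function(R, N, P, conf.level = 0.95) {
    test <- binom.test(R, N, conf.level = conf.level)
    to_lambda <- function(q) (q - (1 - P)) / P      # invert q = (1 - P) + lambda * P
    c(estimate = to_lambda(R / N),                  # values outside [0, 1] can be clipped
      lower    = to_lambda(test$conf.int[1]),
      upper    = to_lambda(test$conf.int[2]))
  }

  freq_lambda(R = 33, N = 50, P = 0.5)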

The second method uses Bayes’ rule. Start with a prior probability density function over \(\lambda\), \(\varphi (\lambda )\). The posterior density is then:

$$\begin{aligned} \varphi (\lambda | R;\, N,P) = \frac{ \text {Pr}(R|\lambda ; \,N,P) \varphi (\lambda ) }{ \int _0^1 \text {Pr}(R|\lambda ';\, N,P) \varphi (\lambda ') \; \text {d}\lambda ' }. \end{aligned}$$
(4)

From this, one can derive confidence intervals and expected values in the usual way. Technically, they are Bayesian “credible” intervals. I used highest posterior density intervals (Hyndman 1996), rather than central (equal-tailed) intervals. Highest density intervals can include the endpoints of the parameter space, which is important when, e.g., testing for \(\lambda = 0\).

The Bayesian method requires a prior. Here, I used a uniform prior, \(\varphi (\lambda ) = 1\) on \([0, 1]\).
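
A minimal sketch of Eq. (4), computed by numerical integration over a grid of \(\lambda\) values with a uniform prior (the released code described in the Software section provides this functionality; names here are illustrative):

  # Grid approximation to the posterior density of lambda, Eq. (4), with a uniform prior
  grid <- seq(0, 1, length.out = 1001)
  step <- grid[2] - grid[1]
  posterior_lambda <- function(R, N, P) {
    likelihood <- dbinom(R, N, (1 - P) + grid * P)  # Eq. (3)
    unnorm     <- likelihood * dunif(grid)          # multiply by the uniform prior
    unnorm / sum(unnorm * step)                     # normalise so the density integrates to 1
  }

  post <- posterior_lambda(R = 33, N = 50, P = 0.5)
  sum(grid * post * step)                           # posterior mean of lambda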

Results in Table 2 show that both frequentist and Bayesian methods mostly achieve the nominal confidence level, with more than 90/95/99% of intervals containing the true value of \(\lambda\). The exception is the frequentist 99% confidence interval, which is too narrow.

Frequentist confidence intervals could be less accurate when \(N\) is low, since that leads to more sampling variation in the number of true heads. Table 3 checks this by looking separately at simulations with \(N = 10\) and \(N = 50\). Frequentist 99% confidence intervals indeed appear slightly too narrow at these small sample sizes. Bayesian confidence intervals are fine.

Table 3 Confidence interval coverage by sample size

1 Understanding the GSV approach

Why does the GSV method produce narrow confidence intervals? We can get a clue by running the GSV method when there are 10 reports of “heads” out of 10 for a fair coin flip (\(R = N = 10, P = 0.5\)). The resulting point estimate is that 100% of subjects lied. The lower and upper bounds of the 99% confidence interval are also 100%.

This is calculated as follows. First, given R reports of heads, the probability that a total of \(T\) “true” heads were observed is calculated as:

$$\begin{aligned} \text {Prob}(T \text { heads}| R;\, N, P) = \frac{ \text {binom}(T, N, 1 - P) }{ \sum ^R_{k=0}\text {binom}(k, N, 1 - P) }. \end{aligned}$$
(5)

This is the binomial distribution, truncated at R because by assumption, nobody “lies downward” and reports tails when they really saw heads.

Next, given T, the number of lies told is calculated as \(R - T\), and the proportion of lies told is:

$$\begin{aligned} \mathrm{Lies} = \frac{R-T}{N-T}, \end{aligned}$$
(6)

because \(N - T\) people saw the low outcome and had the chance to lie. Combining this with the truncated binomial gives a cumulative distribution function of Lies. This is then used to estimate means and confidence intervals.

Putting these together, for \(R = N = 10\), the estimated distribution of Lies is calculated as follows:

  • With probability \(\frac{1}{1024}\), there were really 10 heads. Nobody lied in the sample.Footnote 4

  • Otherwise, 1 or more people saw tails, and they all lied. The proportion of liars is 100%.

Hence, the lower and upper bounds of the confidence intervals are all 100%.
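
The following sketch reconstructs this calculation from Eqs. (5) and (6) for \(R = N = 10\) and \(P = 0.5\) (it is an illustration, not the GSV code):

  # GSV-style distribution of Lies for R = N = 10, P = 0.5, from Eqs. (5) and (6)
  R <- 10; N <- 10; P <- 0.5
  T_vals <- 0:R
  prob_T <- dbinom(T_vals, N, 1 - P) / sum(dbinom(0:R, N, 1 - P))  # Eq. (5), truncated binomial
  lies   <- ifelse(T_vals == N, 0, (R - T_vals) / (N - T_vals))    # Eq. (6); set Lies = 0 when T = N
  prob_T[T_vals == N]     # probability that nobody lied: 1/1024
  sum(prob_T[lies == 1])  # probability that 100% of tails-observers lied: 1023/1024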

There are two problems with this approach: one statistical, and one conceptual.

First, if many heads are reported, you should learn two things. On the one hand, there are probably many liars in your sample. On the other hand, probably a lot of coins really landed heads. The probability distribution in Eq. (5) does not take account of this second fact.

For example, suppose we are certain that everyone in the sample is a liar who always reports heads. In this case, observing \(R = N = 10\) gives us no information about the true number of heads. The posterior probability that \(T = 10\) is then indeed 1/1024, the same as the prior. Now, suppose we know that nobody in the sample is a liar. Then on observing \(R = 10\), we are sure that there were truly 10 heads: the posterior that \(T = 10\) is 1. If exactly 5 out of 10 subjects are liars, then observing \(R = 10\) means that all 5 truth-tellers really saw heads. The posterior probability that \(T = 10\) is then \(1/32\), the chance that all 5 liars saw heads, and so on.

When we are uncertain about the number of liars, our posterior that \(T = 10\) will be some weighted combination of these beliefs. Unless we are certain everyone in the sample is a liar, the probability that \(T = 10\) will be greater than 1 in 1024. Equation (5) is, therefore, not correct. In effect, it assumes that everybody in the sample is a liar, whose reports are uninformative about the true number of heads, and then uses the prior distribution of heads to estimate the proportion of those who actually saw tails and lied.

Indeed, in the simulations with \(P = 0.5\) and across all values of \(\lambda\), the overall probability that there were 10 true heads, conditional on \(R = N = 10\), was about 1 in 161, not 1 in 1024. Fixing \(\lambda = 0.2\), it was about 1 in 4.

This problem means that the GSV estimator of Lies is biased. In the “Appendix”, I show that the GSV estimator can have substantial bias, and performs worse than the naïve estimator from Eq. (1), \(\frac{R/N-(1-P)}{P}\). Also, the GSV confidence intervals do not always achieve nominal coverage of Lies. When the number of heads reported is either high or low, the percentage of confidence intervals containing Lies may fall below the nominal value.

There is a second, more important problem. The GSV approach attempts to estimate Lies in Eq. (6). This is the proportion of lies actually told, among the subsample of people who saw tails. But we are not usually interested in the proportion of lies actually told. We care about the probability that a subject in the sample would lie if they saw tails—\(\lambda\) in Eq. (2). This \(\lambda\) can be interpreted in different ways. Maybe on seeing a tail, each person in the sample lies with probability \(\lambda\). Or maybe the sample is drawn from a population of whom \(\lambda\) are (always) liars, and \(1 - \lambda\) are truth-tellers. Lies has no interpretation in the population, because the rest of the population has no chance to tell a lie in the experiment.

Lies can be treated as an estimate of \(\lambda\). It is unbiased: it estimates \(\lambda\) from the random, and randomly sized, sample of \(N - T\) people who saw tails. But it can be a very noisy estimate. Again, suppose 10 heads out of 10 are reported, and 9 heads were really observed. Lies is 100%. But it is 100% of just one person.

This means that even the correct confidence intervals for Lies would not be correct for \(\lambda\). For example, if 3 out of 3 subjects report heads, the GSV software reports a lower bound of 100% for any confidence interval. Indeed, since anyone who had the opportunity to lie clearly did so, this is the correct lower bound (if we arbitrarily define Lies = 1 when \(T = N\)). But it makes no sense as a confidence interval for \(\lambda\): we clearly cannot rule out that one or two subjects truly saw heads, and would have reported tails if they had seen tails.

Because of this problem, the GSV confidence interval coverage of \(\lambda\) is much worse than its coverage of Lies. The issue is especially serious when there are many reports of heads. In this case, there were probably many true heads, so T is high and the true sample size \(N - T\) is low, making Lies a noisy estimate of \(\lambda\). Table 4 shows this. It splits the simulations by the proportion of reported heads, R/N. GSV coverage levels fall off sharply as R/N increases. Note that for fair coin flips, R/N is usually greater than 0.5, both in the simulations and in reality.

Table 4 GSV confidence interval coverage by proportion of heads reported (R/N)

2 Point estimation

We can also compare the accuracy of point estimates of \(\lambda\) between GSV, Frequentist and Bayesian methods. Table 5 shows bias (the estimated value minus the true value of \(\lambda\)) for each method at different values of N. The Bayesian method is the least biased at every N below 500, and the GSV method is the most biased.

Table 6 shows the mean squared error of each method at different values of N. For low N, the best method is Bayesian and the worst is Frequentist, with GSV in between. When N gets large, all methods give about the same estimates and are equally accurate.

The Bayesian method might have an advantage here, since it assumes a uniform prior and the simulations indeed used a uniform distribution of the proportion of liars L/N. In fact, further analysis reveals that the Bayesian method is best across all specific values of L/N up to 80%.Footnote 5 So, the Bayesian method is likely to be best, unless one is sure that the true L/N is rather high.

Table 5 Mean bias by method and N
Table 6 Mean squared errors by method and N

3 Comparing different groups

Bayesian estimates are accurate, but rely on a choice of prior. A non-informative prior is a reasonable choice. Alternatively one might use information from previous meta-analyses such as Abeler et al. (2016). If the sample size is large enough, the choice of prior should not matter much.

When comparing the dishonesty rates of different groups, an interesting approach is to use the “empirical Bayes” method (Casella 1985). This piece of statistical jiu-jitsu involves estimating a common prior from the pooled data, before updating the prior for each individual group.

We can also test hypotheses using the Bayesian approach. If two samples are independent, then the probability that, e.g., the true proportion of liars is larger in sample 1 than in sample 2 can be calculated from the posterior distributions \(\varphi (\lambda _1)\) and \(\varphi (\lambda _2)\) for the two samples:

$$\begin{aligned} \int ^1_0 \int _0^{\lambda _1} \varphi (\lambda _1)\varphi (\lambda _2) \; \mathrm {d}\lambda _2 \mathrm {d}\lambda _1. \end{aligned}$$
(7)
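
A sketch of Eq. (7) by numerical integration, reusing the grid-posterior idea above; the report counts here are hypothetical and the names are illustrative:

  # Probability that lambda is larger in sample 1 than in sample 2, Eq. (7)
  grid <- seq(0, 1, length.out = 501)
  step <- grid[2] - grid[1]
  post <- function(R, N, P) {                 # grid posterior with a uniform prior
    w <- dbinom(R, N, (1 - P) + grid * P)
    w / sum(w * step)
  }
  p1 <- post(R = 40, N = 50, P = 0.5)         # hypothetical sample 1
  p2 <- post(R = 30, N = 50, P = 0.5)         # hypothetical sample 2
  cdf2 <- cumsum(p2 * step)                   # Pr(lambda_2 <= each grid value)
  sum(p1 * cdf2 * step)                       # approximates Pr(lambda_2 < lambda_1)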

4 Applications

Benndorf et al. (2017) use the GSV method to calculate confidence intervals for the proportion of liars in a lying task with a die roll (\(P = 5/6\)). From 57 reports of the best outcome, out of 98 subjects, they calculate a lying rate of 49.68%, with a 95% CI of (45.3%, 53.95%). Using the Bayesian method with a uniform prior, the confidence interval becomes (38.0%, 61.1%), about twice as wide.

Banerjee et al. (2018) use the GSV method to estimate confidence intervals for the proportion of liars in a die roll task. They estimate the proportion of subjects who lie by reporting a die roll above 3 (\(P = 0.5\)), for several treatments. Table 8 in the “Appendix” shows GSV confidence intervals, along with recalculated Bayesian confidence intervals (from a uniform prior), and confidence intervals for the difference between lying to the “Same” and “Other” caste. The Bayesian confidence intervals are much larger than the GSV confidence intervals. Only a couple of significant results survive. (Note that significance tests in the original paper were done with standard frequentist techniques, not the GSV method.) More importantly, the N is rather too low to make useful inferences about the differences between groups. For example, for the T2-winners-GC group in the “aligned payoffs” treatment, differences in lying could be as much as 40% in either direction.

Hugh-Jones (2016) estimates the dishonesty rates of 15 nations using a coin flip experiment. I use empirical Bayes to check these results. For my prior over \(\lambda\), I fit a beta distribution using the 15 observations of \(2R/N - 1\). I then updated this prior separately for each country to find new confidence intervals and point estimates of the means.Footnote 6 The resulting estimates show some “shrinkage” towards the pooled mean, compared to the naïve per-country estimates found by calculating \(2R/N - 1\) separately for each country. One of the strengths of empirical Bayes, as Casella (1985) points out, is that it “anticipates regression to the mean”. Using Eq. (7), I calculated, for each pair of countries in the data, the probability that \(\lambda\) was higher in one country than in the other. Reassuringly, there were still significant differences between countries.
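
As an illustration of this procedure, here is a sketch that fits the beta prior by the method of moments to the naïve estimates \(2R/N - 1\) (clipped into the unit interval) and then updates it for each group; the data values are hypothetical, not those from the experiment:

  # Empirical Bayes: fit a common Beta prior, then update it separately per country
  R <- c(33, 41, 29)                              # hypothetical per-country counts of reported heads
  N <- c(50, 60, 50)                              # hypothetical per-country sample sizes
  P <- 0.5

  naive <- pmin(pmax(2 * R / N - 1, 0.01), 0.99)  # naive estimates, clipped into (0, 1)
  m <- mean(naive); v <- var(naive)
  shape1 <- m * (m * (1 - m) / v - 1)             # method-of-moments Beta fit
  shape2 <- (1 - m) * (m * (1 - m) / v - 1)

  grid <- seq(0, 1, length.out = 501)
  step <- grid[2] - grid[1]
  country_posterior <- function(R_i, N_i) {
    w <- dbinom(R_i, N_i, (1 - P) + grid * P) * dbeta(grid, shape1, shape2)
    w / sum(w * step)
  }
  sapply(seq_along(R), function(i) sum(grid * country_posterior(R[i], N[i]) * step))  # posterior means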

5 Software

The Bayesian methods described here are implemented in R code, available at https://github.com/hughjonesd/GSV-comment. In this section, I give some simple examples of how to use it. More details are available at the website.

To load the code, download the file “bayesian-heads-cts.R” from github, and source it in the R command line:

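For example:

  # assumes "bayesian-heads-cts.R" has been downloaded to the current working directory
  source("bayesian-heads-cts.R")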

Suppose 33 people report heads out of an N of 50, where the probability of the bad outcome is 0.5. To create a posterior distribution over \(\lambda\), we use the update_prior() function:

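A call along the following lines (the argument names are my assumptions; see the repository documentation for the exact interface):

  # heads = number of heads reported, N = sample size, P = probability of the bad outcome
  posterior <- update_prior(heads = 33, N = 50, P = 0.5, prior = dunif)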

Here, we have started with a uniform prior, using R's built-in dunif() function.

To calculate the point estimate of \(\lambda\), call the dist_mean() function on the updated posterior:

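For example:

  dist_mean(posterior)   # point estimate: the posterior mean of lambda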

To calculate the 95% confidence interval (the highest density region), use dist_hdr():

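For example (the conf_level argument name is my assumption; see the repository for the exact interface):

  dist_hdr(posterior, conf_level = 0.95)   # 95% highest posterior density interval for lambda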

Lastly, we can run power tests by simulating multiple experiments. GSV argue that existing sample sizes may be too small to reject “no lying” (\(\lambda = 0\)). With a uniform prior and an \(N\) of 100, the Bayesian method has 80.6% power to detect \(\lambda\) of 25% and 21.4% power to detect \(\lambda\) of 10%. So, this paper confirms that important point. To run power calculations, use power_calc(). Here, we calculate the power to detect \(\lambda = 0.1\) in a sample of 300, where the probability of the bad outcome is 0.5, with an alpha level of 0.05 and a uniform prior:

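A call along these lines (argument names are my assumptions; see the repository for the exact interface):

  # power to detect lambda = 0.1 with N = 300, P = 0.5, alpha = 0.05, and a uniform prior
  power_calc(N = 300, P = 0.5, lambda = 0.1, alpha = 0.05, prior = dunif)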

6 Conclusion

These results suggest some recommendations for designing and analysing a coin flip-style experiment.

  1. Use power tests to ensure that your N is big enough.

  2. If your N is reasonably large, say at least 100, you can safely use standard frequentist confidence intervals and tests.

  3. If your N is small, consider Bayesian estimates and confidence intervals. To estimate differences between subgroups, consider empirical Bayes with a prior derived from the pooled sample.