1 Group Action and Group Misconduct

Some actions are performed individually: I am writing this paper, and you are reading it. Others are performed together, by multiple agents, as a group. We say, for instance, that the Department of Geography hired a new professor, that the European Union has made huge investments in a new recovery plan, or that Amazon Watch strives to protect the rights of indigenous people. Group action is pervasive in our society, and not much would be accomplished without it.

Not all group action, however, is good or selfless. On the contrary, group actions often have atrocious consequences. Here are three telling examples that will accompany us for the rest of the paper:

(1) Between the thirties and the seventies, several companies (prominently Dupont, General Motors, and Standard Oil) made a concerted effort to promote the addition of tetraethyl lead to gasoline, leading to worldwide lead poisoning. This resulted in many premature deaths and in the widespread development of cognitive dysfunctions.

(2) Between 2008 and 2015, Volkswagen developed a “defeat device” to cheat the emission tests performed on their cars. This allowed them to market cars that emitted up to 40 times more pollutants than allowed in the US, with adverse effects on pollution levels and on the health of US citizens.

(3) During the First World War, religious tensions between Muslim Turks and Christian Armenians progressively evolved into the persecution and systematic killing of Armenians in the Ottoman Empire. By the end of the war, it is estimated that up to one and a half million Armenians were killed (Morris and Ze’evi 2019), and many others were victims of persecutions and acts of barbarity.

When groups behave immorally, they often do what’s in their power to cover their tracks: they lie and deceive. Turkey still officially denies that the genocide of the Armenians ever occurred, and took active measures to promote denialist narratives internationally (Mamigonian 2015; Hovannisian 2015). Dupont, General Motors and Standard Oil conspired to bury evidence of the deadly risks of lead poisoning, and lied for decades about the availability of healthier alternatives (Kovarik 2012). Volkswagen first lied about the emissions of their cars; then, when evidence of fraud emerged, they lied by denying that they had cheated the emission tests.Footnote 1

It’s natural to say that Turkey, General Motors, and Volkswagen claimed that they didn’t do anything wrong, and that in doing so they lied to the public. But what do we mean, exactly, when we say that a group makes a claim, and when we say that a group lies? While the analysis of group action, belief, justification, and knowledge has attracted a lot of philosophical attention,Footnote 2 we have so far seen only a few attempts to characterise group assertion (Meijers 1999, 2007; Tollefsen 2020; Lackey 2017, 2020b; Townsend 2018; Ludwig 2019, 2020) and group lying (Staffel 2018; Lackey 2018a, 2020a, 2020b; Hormio 2022).

Building on scholarship in collective epistemology, linguistics, and speech act theory, this paper attempts to provide a precise characterisation of what it means for a group to make an assertion and lie. I begin by considering what group lying is, and then discuss how it relates to group assertion, group insincerity, and group deception.

2 What is Group Lying?

To approach the subject of group lying, it’s helpful to first consider what it means for an individual to lie. Scholars tend to agree that lying necessarily involves saying what you believe to be false.Footnote 3 So, the following are assumed to be necessary conditions for lying:

Necessary Conditions for Lying

A speaker S lies to an audience A only if:

(1) S asserts that p

(2) S believes that p is false

Many scholars also think that lying is essentially a form of intended deception, so that a further condition should be addedFootnote 4:

(3) S intends to thereby deceive A about p

Quite straightforwardly, this account of individual lying can be “collectivised”. In other words, we can derive a definition of group lying from it (Staffel 2018; Lackey 2018a; 2020a; 2020b; Hormio 2022):

The Group Lie Schema

A group, G, lies to B iff

(G-Assert) G asserts that p

(G-Insincere) G believes that p is false

(G-Deceive) G intends to thereby deceive B about p

The Group Lie Schema is a useful starting point to guide discussion on the nature of group lying. To understand what group lying is, we need to establish whether it really requires satisfying these three conditions, and what satisfying each condition amounts to. Specifically, this paper will tackle three questions:

(i) What does G-Assert require, exactly? In other words, what is group assertion?

(ii) Does group lying really require that the group intends to deceive the audience, as required by G-Deceive?

(iii) Does group lying really require that the group believes that p is false, as required by G-Insincere?

I will deal with each question in this order, starting from (i).Footnote 5

3 Group Assertion

3.1 Two Kinds of Group Assertion

Some authors before me (Hughes 1984; Tollefsen 2007; 2009; 2020; Meijers 2007; Fricker 2012; Lackey 2017; 2020b; Ludwig 2017, chaps. 13–14; 2019; 2020; Townsend 2018) have attempted to characterise group assertion and group testimony. Rather than tiring the reader by reviewing the strengths and weaknesses of each view, I will take Lackey’s proposal as a point of reference to develop mine. This is because Lackey (2017, 2018a, 2020b) has already highlighted problems with every account other than her own.Footnote 6 To keep discussion concise, then, I will begin by presenting Lackey’s proposal, and discuss alternative views whenever they become relevant.

There is substantial consensus (e.g. Lackey 2017; Ludwig 2019; Tollefsen 2020) that groups can make assertions in at least two ways: (i) directly and together, through coordinated effort, or (ii) less directly, through a spokesperson. When two scientists write a journal article together, or when two demonstrators shout their slogan in unison, they make a joint assertion of the first kind—let’s call this a collective assertion.Footnote 7 The second form of group assertion concerns speech acts performed by a spokesperson who has the authority to speak for a group. Following Ludwig (2019, 2020), I call these proxy assertions. Official statements issued by companies and governments belong to this category.Footnote 8

To accommodate both kinds of group assertions, Lackey proposes a disjunctive account. A group can assert in either of the following two waysFootnote 9:

(CA):

A group G makes a collective assertion that p iff the members of G coordinate individual acts a1, … an, so that they all reasonably intend to convey that p together in virtue of these acts.

(PA):

A group G makes a proxy assertion that p iff p belongs to a domain d, and a spokesperson(s) S:

(a) Reasonably intends to convey the information that p in virtue of the communicable content of an individual act (or individual acts) of communication,

(b) Has the authority to convey the information in d, and

(c) Acts in this way in virtue of S’s authority as a representative of GFootnote 10

3.2 Problems with Lackey’s Account

CA and PA handle the previous examples well. The joint manuscript and the demonstration chant involve coordinated actions designed to convey a message, so CA classifies them as (collective) group assertions. The statements issued by governments and corporations (like Turkey and Volkswagen) are typically delivered by a spokesperson (e.g. a representative, or a media agency) with the authority to speak on behalf of the institution, so they are also classified as (proxy) group assertions. As we are about to see, however, both characterisations face difficulties. I will start by highlighting a minor hitch for PA, before moving to a more pressing problem for both definitions.

PA ties proxy assertions to a particular domain d: the spokesperson must have the “authority to convey the information in d”. The idea is that spokespersons don’t merely repeat what they are told. They often enjoy some autonomy: they can address questions on the spot and improvise their declarations, within defined constraints. So, for instance, a legal representative of Volkswagen may be allowed to make impromptu statements about legal matters on behalf of the company (say, during interviews with the media), but they may have no authority to make comments on the company’s financial plans.

This seems right. By tying authority to domains of speech, however, PA risks losing track of the fact that there are many other factors that can make or unmake a spokesperson’s authority to speak for a group. Some have to do with context. For instance, the same lawyer may be allowed to speak for the company only on weekends, when the usual legal team is not operating. Some other factors may have to do with content. A company’s lawyer may be allowed to speak freely, but only if they don’t contradict previous statements made by the company. None of these restrictions have to do with domain. And we could expand the list indefinitely: the spokesperson’s authority might depend on the way in which the assertion is produced (e.g. Diego can speak on behalf of the Police Department on any matter, but only when he is wearing his uniform), the presence of defeaters (e.g. the Vice Dean can speak for the Department only when the Dean is absent), and so forth.

The bottom line is that domain is just one of the many factors that determine whether a spokesperson has the authority to assert for a group. Attempting to list them all in the definition would be hopeless. Instead, it’s preferable to simply require that the spokesperson does in fact have the authority to say what they said on behalf of the group, in the context in which they said it. Conditions (b) and (c) in PA can therefore be replaced by the more concise (b*)Footnote 11:

(b*):

S has the authority to convey p on behalf of G, and conveys p in virtue of that authority

This leads us to a second, more pressing problem for both PA and CA. Both definitions require that the group intend to convey a proposition. As we are about to see, however, it’s not clear that this condition is able to differentiate between a group asserting something and a group communicating something without asserting it. To understand why this is a problem, let’s first consider what asserting is, and how it differs from merely communicating something.

As with most definitions, there is substantial philosophical disagreement on what exactly an assertion is (Pagin and Marsili 2021). However, consensus is wide enough on some points. One is that “assertion” designates a direct and explicit speech act, so that a good definition should acknowledge the difference between assertions and implicatures (Glüer 2001; Stainton 2016; Pagin 2014, sec. 2; Alston 2000; Searle 1969; Borg 2019, but cf. García-Carpintero 2018). We might say, then, that a minimum desideratum for a definition of group assertion is to be able to distinguish between assertions and implicata.

Drawing this distinction is especially important given that our ultimate goal is to characterise group lying. A good definition of lying should distinguish lying from merely misleading (Saul 2012; Stokke 2013b; 2018).Footnote 12 To draw this distinction, condition G-Assert (from the Group Lie Schema) must be able to differentiate what a group asserts from what a group merely implies. Definitions like CA and PA, however, conflate the two notions, and therefore cannot distinguish misleading from lying.Footnote 13 An imaginary example can illustrate the point:

Misleading Smoke

Nicotella has invented a new kind of e-cigarette. Just before the launch, an internal review establishes that there is a problem with the e-cigarette’s battery: it can explode if it overheats. Nicotella’s executive board convenes to assess the risk. They decide to bury evidence of the problem and go ahead with the launch, while they attempt to fix the defect. To maintain plausible deniability, they also instruct their marketing division to refrain from making claims about the e-cigarette’s safety—ads can only claim that the smoke is safe. As a result, Nicotella’s ads state that the smoke of the e-cigarettes is safe, and poses no threat to one’s health.

The example illustrates how a group can mislead without lying. Let’s imagine that Nicotella’s official statement is strictly speaking true: inhaling the smoke of their e-cigarettes is safe. Nicotella is still aware that their e-cigarettes may explode, posing a much more immediate threat to their customers’ health. Nicotella’s assertion that the smoke is safe, while literally true, is designed to communicate something false: namely, that their devices pose no threat to their users.

Misleading Smoke illustrates why PA is inadequate as an account of proxy assertion. Nicotella only asserted that the e-cigarette’s smoke is safe, because this is all they explicitly stated in their ads. However, since the executive board’s reasonable intention was to convey that the e-cigarettes (and not only their smoke) are safe, PA incorrectly classifies the implied proposition as asserted. Crucially, the objection can be extended. Whenever a group implies something without asserting it, Lackey’s definition will incorrectly classify the implicature as an assertion.

Plugged into the Group Lie Schema, PA leads to similarly incorrect classifications. Since the board believes that it is false that the e-cigarettes are safe (and was aiming to deceive the public), the resulting definition classifies Misleading Smoke as a lie. But Nicotella misled the public without technically lyingFootnote 14: PA’s verdict is incorrect. The same problem arises with any misleading implicature. Crucially, these objections (about assertion and lying) apply also to CA, which (like PA) only requires that the group intend to convey some content, not that the group assert it.Footnote 15

This is already a damning objection to Lackey’s definition. But the problem runs deeper than this. PA and CA classify as group assertions every group speech act other than assertion (suggestions, hypotheses, assumptions, orders, etc.), since virtually any speech act is meant to convey a certain proposition. If Lackey’s account captures every group speech act, it is about as incorrect as a definition of group assertion can be. To illustrate the problem, consider an example involving advising:

Misleading Advice

Beautella is a company that sells beauty products. All their conditioners have the following statement printed on the label: We advise customers to use the Beautella conditioner with the Beautella shampoo. Beautella’s executive board is aware that there is no advantage in using their products together. In fact, some of Beautella’s studies even proved that using a Beautella conditioner with other companies’ shampoos leads to better results. However, admitting it wouldn’t be in Beautella’s interest: they prefer to deceive customers and maximise sales.

In Misleading Advice, Beautella doesn’t make an assertion at all: the label only contains advice concerning how to combine conditioner and shampoo. Nonetheless, the label implies that Beautella’s conditioner and shampoo are more effective together, which is false. Since Beautella intends to convey this message, PA incorrectly classifies this misleading implicature as an assertion, and hence as a lie. This is an important failure, since the company made no assertion at all. Both for the purpose of defining assertion and for the purpose of defining lying, PA and CA deliver verdicts that are dramatically off the mark.

3.3 Asserting as Taking Responsibility

PA and CA should be narrowed down. Ideally, a good definition should specify that an assertion must explicitly state something (as opposed to implying it), and incorporate a criterion that allows us to tell assertions apart from other speech acts.

Tollefsen (2020, p. 339) recently proposed an account of group assertion that might do the trick. She considers various definitions of individual assertionFootnote 16 that could be “collectivised” into an account of group assertion, concluding that commitment-based accounts fare best at this task. According to commitment-based accounts, a speaker asserts a proposition p only if they undertake responsibility for p being true.Footnote 17 From this characterisation, Tollefsen (2020, p. 339) derives the following account of group assertion:

(GA):

For a group G to assert that p, G must be committed to the truth of p.

This characterisation leaves open some questions. First, this is not a definition: GA only states a necessary condition for group assertion, but no sufficient condition(s). Second,Footnote 18 the nature of the commitment involved is left unspecified in GA.

Concerning the second issue, Tollefsen clarifies that she understands commitment in a Brandomian fashion (Brandom 1983; 1994). Broadly, on this view assertoric commitment is modelled as a responsibility to provide reasons in support of one’s claim, if challenged. Although this represents an improvement, it has been pointed out that assertions also give rise to another kind of responsibility: speakers are sanctionable for what they assert. If you assert that p, you stand to incur certain social sanctions if p turns out to be false: your reputation may be damaged, your audience might be entitled to criticise you, and so forth (Peirce, CP, MS, Green 2007, 2009; Graham 2020).

To accommodate these observations, assertoric commitment can be characterised as involving both components: (i) a responsibility to provide reasons if the assertion is challenged, and (ii) liability to social sanctions in case the proposition turns out to be false. This account of commitment can be further refined (see Tanesini 2016; 2020; Shapiro 2018; Marsili 2020b, 2021b), but for our purposes this constitutes a sufficient degree of approximation.

We can now go back to the problem of identifying sufficient conditions for asserting. Is being committed to a proposition (in the sense just established) sufficient for asserting? No, and for two reasons. The first is that a requirement of explicitness is missing. To assert that p, you must explicitly say that p—you must undertake commitment to p by stating that p, rather than implying that p.Footnote 19

The second reason is that asserting also involves presenting a proposition as true: unless you are communicating that p is true, you are not asserting that p (Frege 1948; Wright 1992, 34; Adler 2002, 274; Marsili 2018a; Marsili and Green 2021). Pagin (2004; 2009) argues that a speaker can undertake commitment to a proposition p by stating p explicitly without thereby communicating that p is true.Footnote 20 If this is right, it’s preferable to explicitly require that assertions communicate that their content is true.

In previous work (Marsili 2020b; 2021b; Marsili and Green 2021), I have offered an account of individual assertion that incorporates all these requirements. It reads as follows:

(AC):

A speaker S asserts that p iff (1) S utters an expression with content p, thereby (2) presenting p as true, and (3) undertaking assertoric commitment to p.Footnote 21

There are two ways to translate this definition into an account of group assertion. We might simply replace “a speaker S” with “a group G”. Or we might follow Lackey, and offer two separate accounts for collective and proxy assertion. Let me clarify how we might go about the latter.

For collective assertions, collectivising the account is straightforward: we simply replace the requirement that all group members “reasonably intend to convey that p” with the requirement that they all intentionallyFootnote 22 satisfy AC: each member must intentionally act so that they collectively utter an expression with content p, thereby presenting p as true and taking responsibility for its truth.

(CAC):

A group G makes a collective assertion that p iff the members of G coordinate individual acts a1, … an, so that they intentionally satisfy AC(1–3) together in virtue of these acts.

Adapting PA to the commitment-based model is a similarly easy feat. Condition (a) (the spokesperson reasonably intends to convey… etc.) should be replaced with conditions (1–3) from AC. Condition (b) (the authority requirement) can be adjusted to highlight that the relevant authority is the authority to undertake commitment to the proposition on behalf of the group. We get the following definition:

(PAC):

A group G makes a proxy assertion that p iff a spokesperson(s) S:

(a) (1) utters an expression with content p, thereby (2) presenting p as true, and (3) undertaking assertoric commitment to pFootnote 23 on G’s behalf, and

(b) has the authority to undertake commitment to p on G’s behalf, and undertakes commitment to p in virtue of that authority

This definition has no difficulty dealing with the examples encountered so far. Clause (a-1) rules out all content that is not conveyed literally, but merely implied. This correctly classifies the believed-false proposition in Misleading Smoke (that the e-cigarettes are indeed safe) as implied, not asserted—and therefore as misleading, not lying.Footnote 24 Similarly, non-assertoric speech acts (like Misleading Advice) are ruled out by condition (a): only genuine assertions explicitly present their propositional content as true, thereby committing the group to that content.

This definition represents an improvement on both Tollefsen and Lackey’s proposals. Unlike Tollefsen’s GA, it offers sufficient conditions for group assertion, and supplements this characterisation with a well-defined account of what assertoric commitment is. Unlike Lackey’s PA and CA, it distinguishes group assertions from other group speech acts and from group implicatures. Finally, unlike Lackey’s account, it discriminates between group misleading and group lying.Footnote 25

4 Group Intention to Deceive

4.1 Does Lying Require an Intent to Deceive?

The Group Lie Schema states that a group assertion is a lie only if it is meant to deceive. However, this requirement is controversial. Consider the following example:

Impossible Deception.Footnote 26

Pete took part in a robbery. He knows that his involvement in the crime was unmistakably recorded on CCTV, so that there is no chance that anybody will believe him if he denies that he was there. However, he also knows that (because of an odd regional law) if he denies being involved in the robbery the judge will set a lower bail. So he claims that he wasn’t present at the scene of the crime. He has no intention to deceive anyone. He only says this because he is planning to skip bail.

Intuitively, Pete is lying, even if he lacks an intention to deceive his audience. The example shows that you can lie without intending to deceive. Crucially, this is not an exception: countless examples purporting to show that not all lying is deceptive have been discussed in the literature.Footnote 27 This has led most contemporary scholars to conclude that lying typically, but not necessarily, involves deception—so that asserting what you believe to be false is sufficient for lying.

4.2 Intending to Deceive and Intending to be Deceptive

Not all philosophers are happy with the recent divorce between lying and deception, however. Lackey (2013) once again plays a central role, for she devised a strategy to re-establish a link between lying and deceptive intent. Liars need not aim to deceive, she concedes, but a case can be made that liars always aim to be deceptive. “Intending to be deceptive” is a stipulative notion, which is meant to cover both attempts to make someone believe something and attempts to conceal information from someone. Lackey’s (2013, p. 241) alternative deception-condition can therefore be phrased as follows:

(Deceptive):

The speaker intends her assertion to make someone believe that p is true, or to conceal information from someone as to whether p

About the Impossible Deception example, Lackey would say that although Pete has no intention to make the police believe that he did not commit the crime, he intends to conceal information from the police as to whether he was present at the scene of the crime. If this is right, Pete’s assertion is intended to be deceptive after all.

As Fallis (2015) rightly pointed out, however, this reconstruction of the scenario is problematic. You cannot conceal information from someone who already has that information. In Impossible Deception, for example, there is no meaningful sense in which Pete can conceal from the police the information that he was at the robbery by denying that he was there. The police already possess that information: they have a video of Pete committing the robbery.Footnote 28

Of course, out of irrational wishful thinking, Pete could nonetheless intend to conceal information that the police already have, and intention is all that the Deceptive condition requires (Lackey 2020b). But what matters for our purposes is whether we can conceive a version of the scenario in which Pete reasons like a rational agent, and lacks this intention. Clearly, we can. Call this version of the scenario Impossible Deception*. In Impossible Deception*, Pete’s only motivation to make his statement is to skip bail, and he lacks an intention to conceal information from the police: Deceptive is not satisfied. Since Pete is clearly lying (just as much as he is lying in the scenario in which he forms the irrational intention), the unavoidable conclusionFootnote 29 is that Deceptive, too, is subject to the counterexamples that affect ordinary “deceptionist” definitions.

4.3 Group Lying Without Intended Deception

If individual lying need not involve deception, the same is likely true of group lying. Here’s an example that supports this generalisation, inspired by the Volkswagen scandal:

Impossible Group Deception

Volkswagen has been exposed. There’s unquestionable proof that the board of directors commissioned a “defeat device” to fake emission tests. But the company’s lawyers identified a loophole in the law that will prevent anyone from being convicted. Thanks to an obscure technicality, as long as Volkswagen doesn’t admit wrongdoing, and maintains (in public and at trial) that the executive board was not aware of the defeat device, no one can be convicted. Accordingly, Volkswagen issues various statements claiming that the executive board was unaware of the defeat device. None of these statements is meant to convince the public or the jury: after all, the executive board’s involvement has been proven beyond doubt. Volkswagen simply needs to go on the record as stating this, if the board of directors is to avoid conviction.

In Impossible Group Deception, we have a situation that is comparable to its individual counterpart, Impossible Deception. Volkswagen’s statement is intuitively a group lie. However, it doesn’t involve an intention to be deceptive—deception is unattainable in the scenario. Ex hypothesi, the company’s statement is not motivated by a vain hope to conceal the board of directors’ involvement (since its involvement has already been proven beyond doubt). The goal is simply to avoid the social sanctions that would ensue if the false statement wasn’t made.

A strong case has been made that the “intention to deceive condition” can simply be dropped from definitions of lying,Footnote 30 as long as the remaining conditions require not only that the speaker say something that they believe to be false, but that they genuinely assert it (Fallis 2009; 2012; 2013; Stokke 2013a; 2018; Marsili 2021b). This is just what our definition of group lying requires, once we replace Lackey’s PA (which doesn’t fulfil this requirement) with PAC (which requires that the speaker literally assert a proposition). This solution matches a growing number of accounts of individual lying that drop the intention to deceive condition in favour of a commitment-based assertion condition (Marsili 2014; 2016; 2018b; 2020a; 2021b; Leland 2015; Viebahn 2017; 2021; Marsili and Löhr forthcoming; cf. also Carson 2006, 2010; Saul 2012). Of course, the insincerity condition (G-Insincere) will play a crucial role in the resulting definition of group lying (it will have to discriminate, alone, between lies and sincere assertions). Let’s have a look, then, at how group insincerity is best characterised.

5 Group Insincerity

According to G-Insincere (the second condition of the Group Lie Schema), a group lies only if it believes that what it asserts is false. To illustrate with an example: since General Motors was aware of the deadly effects of tetraethyl lead, they lied when they claimed that they were not aware of any health hazard. A question immediately arises: what do we mean when we say that General Motors was aware that what they claimed was false? What do we mean, more generally, when we say that a group believes something?

There is little scholarly consensus on these matters, and two factions dominate the debate. Summative views understand group belief as the ‘summation’ of the beliefs of the members of the group.Footnote 31 Non-summative views take group belief to be irreducible to its members’ beliefs: what a group believes is determined at the collective level, for instance by a collective agreement to accept a proposition as true.Footnote 32

In what follows, I will not take a stance on what constitutes group belief. My goal is more modest. I want to show that a group can lie even if the group does not believe that the proposition it asserted is false (regardless of how one understands group belief), and consider some alternative accounts of group insincerity.

5.1 Group Lying and Group Uncertainty

Most existing discussion of group insincerity implicitly presupposes an on/off conception of belief: either you believe something, or you don’t. But doxastic states often come in shades. Your confidence in a proposition can fall anywhere on a spectrum that runs from full conviction that it is true, through uncertainty, to full conviction that it is false, and these graded attitudes will often fall short of a full belief (or full disbelief).

This has implications for how lying should be defined. To illustrate: suppose that you’re somewhat confident that Carolina went to the gym today, but (given your uncertainty) you don’t quite believe that she did. You would be lying if you told someone that Carolina didn’t go to the gym, even if you don’t outright believe that what you said is false (there is consensus on this; see Isenberg 1964; Carson 2006; Whyte 2013; Marsili 2014, 2018b, 2021a; Marsili and Löhr forthcoming; Krauss 2017; Benton 2018; Trpin et al. 2021). I call this kind of lie a “graded-belief” lie. Definitions that require outright belief in the falsity of the proposition (like the Group Lie Schema) don’t classify graded-belief lies as lies.
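The point can be made vivid in credence terms. Here is a minimal worked illustration; the numerical credences are stipulated purely for illustration and are not part of the example:

```latex
% Stipulated credences for the gym example (illustration only).
% Cr stands for your degree of confidence.
\[
  Cr(\text{Carolina went to the gym}) = 0.7,
  \qquad
  Cr(\text{Carolina did not go}) = 0.3 .
\]
% You assert that Carolina did not go. You do not outright believe this to be
% false (a credence of 0.7 in its negation falls short of full belief), yet the
% assertion is intuitively a lie: a graded-belief lie, which an outright-belief
% condition fails to capture.
```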

This is a problem, because groups, like individuals, often operate under conditions of uncertainty. A food company may have to determine whether consumption of a certain additive is safe, given the small (but non-null) risk that it might cause long-term health issues. Similarly, a government may need to determine whether a foreign country really represents a threat. In each of these cases, the process of deliberation may not lead to a neat yes/no verdict. The food company may conclude that the evidence is inconclusive, and that the additive is probably safe. And the government may determine that the foreign country almost surely doesn’t represent a threat, without ruling out the possibility that it might. In both cases, the group is somewhat confident that the relevant proposition is true, but (given the salient doubts) it would be incorrect to say that the group simply believes that the proposition is true.

I have not taken a stance on which conditions have to be met for a group to hold a belief. But unless we adopt a strongly revisionist account of group belief (one that goes against our intuitive doxastic ascriptions), these examples suggest that there are cases where a group’s confidence in a proposition falls short of full belief. Graded-belief lies are therefore a possibility for groups, too, as illustrated by the following example (inspired by real events, cf. Hersh 2003):

Iraq War

It’s 2003, right before the US invasion of Iraq. The United States National Security Council (USNSC) is meeting to discuss whether military action would be warranted. The CIA has just received reports indicating that Iraq has bought uranium from Niger and is storing Weapons of Mass Destruction (WOMD) in secret facilities in the North-West of the country. But more reliable evidence indicates that the reports were fabricated, most likely by the Italian Secret Services (SISMI). After some discussion, the members of the USNSC agree that, although the possibility cannot be ruled out, it’s unlikely that Iraq possesses WOMD. However, the members of the USNSC also agree that this is a great chance to justify military intervention in Iraq. They decide to bury all the evidence contradicting the CIA reports, and to seal the records of the meeting. An official memorandum is then produced, with recommendations for the US Congress and for the public. The memorandum states that immediate action should be taken, because “Iraq possesses weapons of mass destruction”.

In the scenario, none of the members of the USNSC has ruled out the possibility that Iraq possesses WOMD: their confidence in the falsity of the official statement falls short of outright belief, so that it would be incorrect to say that the USNSC believes that its statement is false.Footnote 33 Intuitively, however, the USNSC lied to the public.

Graded-belief group lies are not uncommon. To go back to one of our initial examples, it’s not unlikely that the heads of General Motors thought (perhaps driven by wishful thinking) that there was a chance that the effects of tetraethyl lead would turn out to be negligible for human health. Similarly, Philip Morris (which asserted for decades that tobacco is safe to smoke) may have thought that while the evidence of carcinogenic effects was convincing, conclusive proof was yet to be found. Group lies involving uncertainty are not merely a conceptual possibility. They are central cases, and as such they call for a refinement in our definition of group lying (cf. Ludwig 2019, fn 12).

Luckily, fine-grained accounts of insincerity that can handle these cases are available. We could take inspiration from Carson (2006; 2010) and Sorensen (2007, p. 256; 2011, p. 407), who have characterised insincerity as the absence of a belief in the truth of the asserted proposition (as opposed to an active belief in its falsity). The refined insincerity condition would read:

(G-Insincere-weak):

G does not believe that p is true.

Plugged into the Group Lie Schema, this condition classifies Iraq War as a group lie. Arguably, however, it’s unable to track the intuitive distinction between lying and bullshitting, as shown by the following example (modified from Lackey 2020a, pp. 197–98):

Oil Company

After the oil spill in the Gulf of Mexico, BP began spraying dispersants in the clean-up process that were criticised by environmental groups for their level of toxicity. In response to this outcry, the executive management team of BP convened and its members jointly accepted the reply that the dispersants being used are safe and pose no threat to the environment, a view that was then made public through all the major media outlets. It later turned out that BP’s executive management team had no idea whether the claim was true, and they never even attempted to check whether it was. Their reply simply served their purpose of financial and reputational preservation.

Since BP didn’t even check whether the claim was true, its statement should arguably be classified as bullshit, rather than lying (Frankfurt 2005, p. 55; Saul 2012, p. 20; Marsili 2014; 2018b, p. 175; Falkenberg 1988, p. 93; Meibauer 2014, p. 162). However, G-Insincere-weak classifies it as a lie, since BP lacks a belief in the asserted proposition. If we want to keep these notions apart (cf. Lackey 2018a, p. 278; 2020a), G-Insincere-weak will not do, because it conflates group bullshit with group lying.Footnote 34

In earlier work (Marsili 2014, 2018b, 2021a, 2022a), I proposed an alternative way to characterise graded insincerity: an assertion is insincere if the speaker finds it more likely to be false than true. Applied to groups:

(G-Insincere-graded):

G is more confident in the falsity of p than in its truth.
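In credence-theoretic terms, and on the simplifying assumption (not required by the argument here) that the group’s doxastic state can be represented by a single credence function, the condition can be rendered as follows:

```latex
% One possible credence-theoretic rendering of G-Insincere-graded
% (assumes a single group credence function Cr_G; a simplification).
\[
  Cr_G(\lnot p) \;>\; Cr_G(p),
  \qquad\text{equivalently (given probabilistic coherence)}\qquad
  Cr_G(p) \;<\; \tfrac{1}{2}.
\]
```

On this rendering, a group that has formed no credence about p at all satisfies neither inequality, which is what allows the condition to keep bullshitting apart from lying, as the next example shows.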

According to G-Insincere-graded, Oil Company is not a lie: since BP didn’t assess the veracity of its claim, there is no sense in which BP is more confident in the falsity of p than in its truth. This criterion also correctly classifies Iraq War as a lie, since the council judged that their statement was more likely to be false than true.Footnote 35 However, as we are about to see, there are cases that this account, too, has trouble handling.

5.2 Group Lying and Group Intentions

It has been pointed out that an individual can lie by misrepresenting their intentions, rather than their beliefs (Marsili 2016). Suppose I promise my jealous fiancée that I’ll never kiss Gelsomina. I promise this because I’m aware that I have virtually no chance of kissing Gelsomina: she doesn’t reciprocate my infatuation for her—in fact, she finds me repulsive. However, I have not been honest with my fiancée: I fully intend to kiss Gelsomina should she suddenly change her dispositions and try to kiss me. Intuitively, my promise is insincere: I lack an intention to act as I promise. Most people would agree that I lied to my fiancée.Footnote 36

Generalising, lying is not always a matter of misrepresenting doxastic states: sometimes it involves misrepresenting your intentions without misrepresenting your beliefs. And groups can lie in this way, too:

University Funding

Every year, the Faculty of Tetrapiloctomy receives funding from various donors. One of them, E-Corp, has just been involved in a terrible scandal. Students wrote a petition to ask the faculty not to accept E-Corp funding in the future. But the faculty desperately needs money. They meet to consider their options. A professor notes that E-Corp has been going through financial hardship since the scandal, and will almost certainly cut all funding to academic institutions. All faculty members agree: they all believe that it is virtually impossible that E-Corp funding will be available in the future. At the same time, they collectively agree that funding is needed. They resolve that if (against all odds) E-Corp funding were to become available, the faculty will accept it. But they also decide that it’s preferable to let the students believe that the faculty would not accept E-Corp funding. The next day, the faculty’s spokesperson sends an email to all students, solemnly pledging that “the faculty will never accept funding from E-Corp in the future”.

Intuitively, the Faculty of Tetrapiloctomy is lying about their funding plans. The case is analogous to the individual lie example, which involved a promise that was sincere in terms of belief (I believe I won’t kiss Gelsomina) but not in terms of intention (I intend to kiss her if possible). Similarly, although the faculty believes that funding won’t be available, it intends to accept it (if, against all odds, their predictions turn out to be mistaken). Their pledge is clearly mendacious. But the faculty is aware that E-Corp funding will almost certainly never be available, so the statement is not classified as a lie by G-Insincere-graded.

The example shows that a good insincerity condition should capture both insincere intentions and insincere beliefs. To incorporate both into the definition, we can take inspiration from existing speech-act theoretic analyses. Speech act theorists typically define insincerity as a discrepancy between the psychological state in which the speaker is and the state expressed by their utterance (Searle 1969; Falkenberg 1988; Marsili 2014; 2016; Stokke 2014; 2018). The underlying idea is that, by performing a speech act, a speaker can expressFootnote 37 psychological states, thereby representing themselves as being in those states (e.g., by shouting “ouch!”, I represent myself as being in pain). Whenever the speaker misrepresents the state that they are in, the speaker is insincere. Accordingly, group insincerity can be defined as follows:

A group G is insincere iff:

(Insincere-1) By performing a speech act F(p), G expresses a psychological state Ψ(p).

(Insincere-2) G is not in state Ψ(p).

This works for University Funding. The utterance “we solemnly pledge that the faculty will never accept funding from E-Corp” expresses two psychological states: an intention not to accept funding, and a belief that the faculty will not accept funding.Footnote 38 According to Insincere-2, the faculty is insincere if the faculty believes that they will accept funding, or if they intend to accept funding, or both. Since the faculty intends to accept funding, their statement is correctly classified as insincere.

This definition, however, cannot accommodate the difference between group lying and group bullshitting. In Oil Company, BP doesn’t believe that the dispersants they used are safe (they have no idea whether they are), so their statement would be classified as a lie. To rule out group bullshitting, the definition should be narrowed down. A solution is to replace Insincere-2 with Insincere-2*, which is derived from G-Insincere-graded:

(Insincere-2*):

G is closer to Ψ-ing(¬p) than to Ψ-ing(p).

The notion of “being closer to one mental state than another” introduced here is meant to be a term of art. We saw that graded beliefs can be thought of as falling on a spectrum, with full conviction in a proposition at one end and full conviction in its negation at the other. For instance, in Iraq War, the USNSC was more confident in the falsity of their statement than in its truth, and therefore closer to believing that what they said is false. The notion of closeness is meant to apply to dichotomic mental states like intentions, too. In University Funding, the faculty intends to accept funding: they are closer to intending to accept it than to intending not to accept it—so their statement is insincere by Insincere-2*’s lights.

So construed, Insincere-2* meets all the desiderata. Since Ψ-ing can be replaced with believing, it can draw all the distinctions that are drawn by G-Insincere-graded. And since Ψ-ing can be replaced with intending, it correctly classifies University Funding as a genuine lie.Footnote 39

6 Group Lying

It is now time to draw our conclusions. I have argued that the Group Lie Schema should be refined, and I have proposed three adjustments. First, against what was previously suggested in the literature, we should understand group assertion along the lines of PAC and CAC. Second, group lying does not require an intention to deceive. Third, group insincerity is best modelled in a way that allows for graded-belief lies and insincere intentions. The resulting definition reads as follows:

A group, G, lies iff

(G-1) G asserts that p (in the sense defined by PAC or CAC).

(G-2) By asserting that p, the group expresses a psychological state Ψ(p).

(G-3) G is closer to Ψ-ing(¬p) than to Ψ-ing(p).

A qualification is in order at this point. While the present definition improves over previous ones, it’s still wanting in many respects. Many questions remain open—and depending on how we answer them, new difficulties will arise.

I did not attempt to settle, for instance, how group beliefs should be modelled. The examples introduced in this paper (University Funding, Iraq War, etc.) further complicate the task of modelling the attitudes relevant to group lying, as they bring more attitudes (credences, intentions) into the picture. While previous work assumed that an account of group belief would suffice (the only relevant possibilities being that the group believes that p is true or false), it turns out that a good account of group lying should also tell us something about how group credences and group intentions should be modelled.

Group credences in particular raise tricky questions. To keep the discussion manageable, this paper focused on cases involving consistent group credences: in Iraq War, every member of the USNSC agreed that their assertion was likely false. But how should we model cases in which the members of a group hold inconsistent attitudes towards the asserted proposition? We might have a case where 20% of the group takes p to be likely true, 10% certainly true, 15% certainly false, 25% likely false, and 30% is simply uncertain. And the picture could be complicated further, by considering more fine-grained attitudes (and the graded attitudes that a group agrees to adopt). How we should aggregate attitudes in these cases, and under which conditions graded criteria like (G-3) would be satisfied, remain open questions.
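To illustrate how much hangs on the choice of aggregation rule, here is a toy calculation using linear pooling, with stipulated credences attached to each attitude; both the rule and the numerical values are assumptions introduced purely for illustration, not commitments of this paper:

```latex
% Toy linear pooling of the distribution described above.
% Assumed values: likely true = 0.75, certainly true = 1,
% certainly false = 0, likely false = 0.25, uncertain = 0.5.
\[
  Cr_G(p) \;=\; 0.20 \cdot 0.75 \;+\; 0.10 \cdot 1 \;+\; 0.15 \cdot 0
           \;+\; 0.25 \cdot 0.25 \;+\; 0.30 \cdot 0.5 \;=\; 0.4625 .
\]
% On these assumptions the group counts as (barely) more confident in the
% falsity of p than in its truth, so a graded criterion like (G-3) would be
% satisfied; slightly different values, or a different pooling rule, could
% reverse the verdict.
```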

Many challenges, then, are still to be met before we get a firm grasp on what group lying is—and the ones I highlighted here are by no means the only ones. The phenomenon remains understudied: this paper has tied up some loose ends, but much work still needs to be done in order to understand what group assertion and group lying are.