Logics of Synonymy

We investigate synonymy in the strong sense of content identity (and not just meaning similarity). This notion is central in the philosophy of language and in applications of logic. We motivate, uniformly axiomatize, and characterize several “benchmark” notions of synonymy in the messy class of all possible notions of synonymy. This class is divided by two intuitive principles that are governed by a no-go result. We use the notion of a scenario to get a logic of synonymy (SF) which is the canonical representative of one division. In the other division, the so-called conceptivist logics, we find, e.g., the well-known system of analytic containment (AC). We axiomatize four logics of synonymy extending AC, relate them semantically and proof-theoretically to SF, and characterize them in terms of weak/strong subject matter preservation and weak/strong logical equivalence. This yields ways out of the no-go result and novel arguments—independent of a particular semantic framework—for each notion of synonymy discussed (using, e.g., Hurford disjunctions or homotopy theory). This points to pluralism about meaning and a certain non-compositionality of truth in logic programs and neural networks. And it unveils an impossibility for synonymy: if it is to preserve subject matter, then either conjunction and disjunction lose an essential property or a very weak absorption law is violated.


Introduction
Levin Hornischer, l.a.hornischer@uva.nl, Institute for Logic, Language and Computation (ILLC), University of Amsterdam, P.O. Box 94242, 1090 GE Amsterdam, The Netherlands

One of the most important problems in philosophy of language and related disciplines is to understand synonymy: when do two expressions mean the same thing? Similarly, in all instances of logical modeling it is crucial to become clear about which sentences should be considered equivalent: when a fallible cognitive agent is modeled, this is different from when, say, metaphysical grounding is modeled.
This problem is a persistently hard one since synonymy is such a multifaceted concept. In everyday language or in thesauri, we find many alleged synonyms that surely are synonymous in a wide range of contexts (contextual stability). However, on a less credulous and more critical stance, we usually can find contexts in which those synonyms differ in meaning (contextual flexibility). Indeed, for any two nonidentical, logically atomic sentences it seems like we can almost always cook up a weird context in which they differ in meaning [30,40].
There are more pairs of opposing features of synonymy besides "stability vs. flexibility". On the one hand, we think that whether or not two sentences are synonymous is objectively and externally settled by the language and the world alone. On the other hand, there is also the intuitive idea (largely discredited in modern philosophy of language) that whether or not two expressions are synonymous also depends on subjective and internal attitudes of the speaker. Moreover, on the one hand, we sometimes think that synonymy usually respects logical equivalence (rendering it an at most intensional concept). On the other hand, there is reason to believe that classically equivalent sentences, like p ∨ ¬p and q ∨ ¬q, are not synonymous since they are "about" different things (rendering synonymy a so-called hyperintensional concept). Furthermore, we sometimes think of two synonyms as being identical in meaning and sometimes as being only very similar in meaning. This list of opposing features could be extended much further.
In this paper, we're concerned with synonymy in the strong sense of meaning (or content) identity, and not just meaning similarity. That is, we're interested in the notion of synonymy that we get when we adopt a critical stance and move to the more discerning side in the pairs of opposing features. Many different semantic and proof-theoretic systems have been proposed to explicate such fine notions of synonymy; to name just a few: [1,5,10,11,13,18,20,23,35,37,41,44,46], and more will be mentioned below. However, these approaches differ tremendously and there is practically no consensus on which approach is correct. Because of this, we want to understand the intuitive notion (or family of notions) of strong synonymy without committing to a particular framework. To do so, we work with various logics (or axiomatizations) that attempt to capture the (or a) notion of synonymy. Thus, we'll gain insights into synonymy directly, and not into a particular framework representing synonymy.
The main contribution of this paper is to motivate, uniformly axiomatize, and characterize "benchmark" notions of synonymy in the messy class of possible notions of synonymy (as, e.g., provided by different frameworks). This helps to quickly identify not only the notion of synonymy of a given framework but also the precise synonymies that make it different from other frameworks. Moreover, this provides novel arguments and impossibility results for the various notions of synonymy that are independent of a particular conception of semantics.
Summary In Section 2, we show why finding the logical laws governing synonymy is problematic: Neither the famous possible worlds semantics nor any straightforward refinement of it can satisfy the fundamental principle of synonymy that being synonymous entails having the same subject matter. To understand this no-go result, we look, in Section 3, at a well-known logic satisfying that principle: the system of analytic containment (AC) with a sound and complete truthmaker semantics due to [23]. In Section 4, we develop a formal notion of a scenario to see just how fine-grained the notion of synonymy can get in a straightforward refinement of possible worlds semantics. We axiomatize this notion as an extension SF of AC. In Section 5, we see that the two logics AC and SF are related by moving up one set-theoretic level: If we take sentences to be true not at a scenario but at sets of scenarios, we get a semantics that is extensionally equivalent to truthmaker semantics.
In Section 6, we investigate the lattice of conceptivist logics, that is, logics where synonymy (or equivalence) entails having the same atomic sentences, and which hence can be regarded as satisfying the fundamental principle about synonymy. We show the main formal result of the paper: We identify various extensions of AC that correspond to the possible combinations of characterizing synonymy by weak/strong subject matter preservation and weak/strong logical equivalence. In Section 7, we thus can offer novel arguments for the various notions of synonymy (making use of, e.g., the Hurford constraint, a "truth plus subject matter" conception of meaning, or homotopy theory).
In Section 8, we discuss ways out of the no-go result: we identify the exact reason for the inconsistency and describe possible ways of adding more (intensional) structure to the notion of scenario that allow scenarios to satisfy the fundamental principle about synonymy. Moreover, we state some consequences of the paradox: The inconsistency and the various arguments point to a pluralistic conception of meaning and to the non-compositionality of the notion of truth in logic programs or states of a neural network. In Section 9, we analyze this non-compositionality and generalize it to the following impossibility result for synonymy: if synonymy preserves subject matter, either some of our basic intuitions about conjunction and disjunction are violated or a very weak logical law, satisfied by most logics, is violated. We provide some linguistic and cognitive evidence against the law. This poses the problem of finding a logic violating this law and accounting for the evidence.
Most results are proven in an Appendix extending the method of proving completeness by normal forms. Thus, the characterization of the synonymies is proven uniformly and constructively. Moreover, it associates to each of these synonymies a characteristic notion of disjunctive normal form which can be seen as an invariant of the synonymy (across different, philosophically-laden theories and semantics for this synonymy).
Notation As common in the field, we only deal with sentences built in the usual way from a set of propositional letters p_0, p_1, . . . (called atoms) using the connectives ¬, ∧, ∨. Variables for sentences are ϕ, ψ, χ, . . . and variables for atomic sentences are p, q, r, . . .. A statement of the form ϕ ≡ ψ is called equivalential. A logic of synonymy L is a logic reasoning with equivalential statements: ⊨_L ϕ ≡ ψ (respectively, ⊢_L ϕ ≡ ψ) means that under the semantics of L, ϕ ≡ ψ is valid (respectively, is derivable with the rules of L).
To keep the framework as simple as possible, we don't add further operators like conditionals, modal operators, or even hyperintensional operators. We leave this to further research.

The Cause of the Problem: a No-go Result About Synonymy
In this section, we show why it is a real problem to find the appropriate axioms or laws governing synonymy in the strong sense of meaning identity: A fundamental principle about this notion is that being synonymous entails having the same subject matter. However, we argue that neither standard possible worlds semantics nor any straightforward refinement of it (with the same underlying idea) can satisfy this principle. To understand synonymy, it arguably is not enough to just provide a particular semantics with a notion of synonymy that satisfies the subject matter preservation principle. Rather, we need a systematic understanding of this impossibility result and how to avoid it-whence we need to analyze and compare different notions of synonymy.
As indicated in the introduction, we explore the possible logical laws of synonymy. That is, we want to know which equivalential statements hold as a matter of a general law of synonymy. Since we're interested in the general logical laws of synonymy, we're not so much interested in when logically atomic sentences are synonymous. Rather, we consider which logically complex sentences formed with these atomic sentences should be synonymous as a matter of a general law about synonymy. So we assume we've fixed some (or, rather, any) theory about how to map logically atomic natural language sentences (that we're interested in) to propositional atoms such that atomic sentences with the same meaning are mapped to the same propositional atom and atomic sentences with different meanings are mapped to different atoms. Thus, distinct propositional atoms represent atomic sentences with distinct meanings. Whatever theory we've picked, our results can then be applied to see which complex sentences should be synonymous.
Let's illustrate this in three remarks. First, we could use some of the established theories to determine when two atomic sentences express the same proposition: possible world semantics, (exact) truthmaker semantics, two-dimensional semantics, structured propositions, impossible world semantics, etc. Then different propositional atoms represent different equivalence classes of atomic sentences (under the equivalence relation of 'expressing the same atomic proposition' in the respective semantics). We're then investigating what the appropriate notion of synonymy for complex sentences is, independent of what the respective semantics for atomic sentences says. Second, assume one is convinced by the view mentioned in the introduction that no two syntactically non-identical atomic sentences are completely identical in meaning (except, say, sentences with different allowed spellings or emphasis, etc.). Then every atomic sentence is assigned to a distinct propositional atom. Third, assume the meaning of atomic sentences is governed by a set of (defeasible) rules describing our semantic knowledge or contextual information (or both). This may include the rules 'A sofa is a couch' or, in a context far away from penguins, ostriches, and the like, 'A bird is an animal that can fly'. We then again assign every atomic sentence to a distinct propositional atom, but we add the synonymies 'This is a couch ≡ This is a sofa' or 'This is a bird ≡ This is an animal that can fly' as non-logical axioms to our logical axioms of synonymy. However, since we're interested in the logical laws of synonymy, we won't consider these additional non-logical axioms.
With these preliminaries at hand, we can get to arguably one of the most fundamental principles about synonymy: if two sentences are synonymous, they are about the same thing. In other words, if two sentences are about different things, there is a sense in which they are not synonymous-and hence shouldn't be synonymous in the strong sense of meaning identity. As a slogan: synonymy entails subject matter identity.
To formulate this principle precisely, we need to specify what subject matter is. We could choose one of the reconstructions of this intuitive notion of subject matter [24,31,42,58]. However, we can remain general and independent of particular reconstructions by working with the syntactic reflection of the subject matter of a sentence, that is, with the set of its atomic sentences. This is because we assumed that distinct propositional atoms represent distinct atomic propositions: so if two sentences are built from different atoms, one is about a proposition that the other isn't about, whence they should have different subject matter. In particular, the most straightforward counterexample is excluded: we can no longer have two atomic sentences, like 'The couch is black' and 'The sofa is black', that express the same atomic proposition, and hence have the same subject matter, but still get assigned to different propositional atoms. Hence we have the following.
(S1) Completeness of syntactic reflection. If two sentences have the same subject matter, they have the same atomic sentences.
Alternatively, we can also think of (S1) as a logical, as opposed to a semantic, principle: if two sentences have different atoms, they have, as far as logic is concerned, different subject matter. (In Section 7, we also discuss the converse of (S1), that is, the soundness of syntactic reflection.) Thus, we can work with the following precise formulation of the fundamental principle that synonymy entails subject matter identity:

(P1) Subject matter preserving. If two sentences are synonymous, then the sets of their atomic sentences are identical.
When we look at the class of possible sets of logical laws governing synonymy below, this principle is conceptually useful in dividing the class into those sets of laws (or axioms) that satisfy the principle and those that don't. In particular, we'll consider what it takes to get into the class of subject matter preserving synonymies and what the benchmark representatives of this class are. So what are notions of synonymy that satisfy the subject matter preservation principle (P1)? We immediately see that it cannot be provided by standard possible worlds semantics: For any distinct atoms p and q, the sentences p ∧ ¬p and q ∧ ¬q are equivalent in classical logic and hence not distinguishable by a possible world. So they are synonymous according to possible worlds semantics but they have different subject matter.
Thus, an advocate of possible worlds semantics may wonder whether it can be changed minimally and straightforwardly to obtain a notion of synonymy that satisfies principle (P1).
To answer this, let's recall the main idea of possible worlds semantics: Sentences are evaluated at possible worlds and the meaning of a sentence is its truth-value profile across all possible worlds. As far as semantics are concerned, possible worlds can be regarded as maximally consistent scenarios: every sentence is either true or false at a world (maximality) and no sentence is both true and false at a world (consistency).
Thus, arguably the most minimal and straightforward change to possible world semantics is to relax the maximality and/or consistency assumption. So possible worlds are relaxed to scenarios (or circumstances) where sentences can be not only (i) true, or (ii) false; but also (iii) not maximal, i.e., neither true nor false, or (iv) not consistent, i.e., both true and false. Let's abbreviate these truth-values as follows: true t, false f, neither true nor false u (for undecided), and both true and false b (for both). Then there are three possibilities for the new sets of truth-values: {t, f, u}, {t, f, b}, and {t, f, u, b}. The meaning of a sentence still is its truth-value profile across all possible scenarios, though now using more than just the two truth-values true and false. For possible worlds, the truth-value of a complex sentence at a possible world is determined by the truth-value of its parts at that possible world according to the most straightforward logic for the set of truth-values {t, f}, i.e., according to classical logic. Thus, to change possible worlds semantics minimally, the truth-value of a complex sentence at a possible scenario is determined by the truth-value of its parts at that scenario according to the most straightforward logic for the chosen set of truth-values. Standardly, these are the following. For {t, f, u} the truth-functions for the connectives ¬, ∧, ∨ are as in (strong) Kleene 3-valued logic (K3), for {t, f, b} as in the Logic of Paradox (LP), and for {t, f, u, b} as in First-Degree Entailment (FDE). See Section 4 and [49, sec. 7-8] for more details on these logics, though for now that is not needed.
To summarize, the most minimal and straightforward change of possible world semantics arguably is to relax possible worlds to scenarios where:

(A1) Scenarios are structures where atomic sentences can be evaluated with truth-values in {t, f, u}, {t, f, b}, or {t, f, u, b}.

(A2) The truth-value of a complex sentence at a scenario is determined by the truth-value of the sentence's parts at that scenario according to the truth-functions for the connectives as in K3, LP, or FDE, respectively.
In Section 4, we provide concrete examples of such scenarios and axiomatize the notion of synonymy that they provide. For now, let's consider whether this minimal change to possible world semantics can provide a notion of synonymy satisfying (P1).
The answer is no: Consider the two propositional sentences ϕ := p and ψ := p ∨ (p ∧ q). Whatever set of truth-values one chooses ({t, f, u}, {t, f, b}, or {t, f, u, b}), the respective logic will evaluate ϕ and ψ to the same truth-value no matter how p and q were evaluated (this can easily be checked). Thus, there is no scenario in the adapted possible worlds semantics that can distinguish the two sentences, so they are synonymous according to this scenario semantics. However, the two sentences don't have the same atoms. Consequently, the adapted possible worlds semantics cannot provide a notion of synonymy satisfying (P1).
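The check alluded to above can be carried out mechanically. The following sketch (in Python, with the standard truth tables for these logics encoded directly) runs through every valuation over each of the three truth-value sets and confirms that p and p ∨ (p ∧ q) never differ in value:

```python
# A small check of the claim above: under K3, LP, and FDE alike, the sentences
# p and p ∨ (p ∧ q) receive the same value under every valuation.
# We encode conjunction as meet in the FDE truth order (f < u < t, f < b < t;
# u and b incomparable); K3 and LP are the restrictions to {t,f,u} and {t,f,b}.

from itertools import product

NEG = {'t': 'f', 'f': 't', 'u': 'u', 'b': 'b'}

# Meet (conjunction) in the FDE truth lattice, given for one ordering of each pair.
AND = {
    ('t', 't'): 't', ('t', 'u'): 'u', ('t', 'b'): 'b', ('t', 'f'): 'f',
    ('u', 'u'): 'u', ('u', 'b'): 'f', ('u', 'f'): 'f',
    ('b', 'b'): 'b', ('b', 'f'): 'f',
    ('f', 'f'): 'f',
}

def conj(x, y):
    return AND.get((x, y)) or AND[(y, x)]

def disj(x, y):
    # De Morgan: x ∨ y = ¬(¬x ∧ ¬y)
    return NEG[conj(NEG[x], NEG[y])]

for values in ({'t', 'f', 'u'}, {'t', 'f', 'b'}, {'t', 'f', 'u', 'b'}):  # K3, LP, FDE
    for p, q in product(values, repeat=2):
        assert disj(p, conj(p, q)) == p  # absorption: p ∨ (p ∧ q) = p

# By contrast, FDE does distinguish the classical contradictions from Section 2:
# at p = t and q = b we get p ∧ ¬p = f but q ∧ ¬q = b.
assert conj('t', NEG['t']) != conj('b', NEG['b'])
```

The absorption equation p ∨ (p ∧ q) = p holds in any lattice, which is why no choice among the three truth-value sets can block it.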
We can summarize the observations of this section as the following impossibility result. The natural principle about synonymy that motivates any form of scenario- or circumstance-based semantics is the following.
(P2) Scenario respecting. If there is no possible scenario or circumstance whatsoever in which two sentences differ in truth-value, then they are synonymous.
The above shows that there is no notion of scenario that satisfies all of (A1), (A2), (P1), and (P2). This impossibility result shows that there is a real problem about finding out the logical laws governing synonymy: Whatever they are, we would like them to render the subject matter preservation principle (P1) true, but we also see that neither standard possible world semantics nor a conservatively refined scenario based semantics can yield such a notion of synonymy.
Thus, to get a subject matter preserving notion of synonymy we need to go a different way. There are many possibilities that have been suggested; they fall under the term conceptivist logics. However, they come with greatly varying intentions, semantic frameworks, and/or axiomatizations. This makes it hard to compare them and distill out the logical laws for synonymy. This is why we'll proceed in a different way: we look directly at possible sets of axioms about synonymy and compare these (e.g., which is an extension of the other). This allows for direct comparison and avoids the commitments that come with opting for particular semantic systems.

[Fig. 1: The system of analytic containment (AC) as presented in [23].]
We start in the next section by looking at a logic satisfying (P1) that recently gained much prominence. In Section 4, we develop and explore how close the scenario approach from this section comes to satisfying (P1). Against this background, we add and characterize further axiomatizations. Finally, in Section 9, we come full circle and both analyze and strengthen the impossibility result mentioned here.

Truthmaking Synonymy
We recall the logic of synonymy AC that satisfies (P1). Fine [23] provides a sound and complete truthmaker semantics for a system that aims to axiomatize the notion of analytic content containment: two sentences ϕ and ψ are synonymous when the content of ϕ is contained in the content of ψ and vice versa. The idea of the system originated in the work of Angell [1,2]. The system AC is given in Fig. 1.
By induction on derivations in AC, we immediately see that if AC ϕ ≡ ψ, then ϕ and ψ have the same atoms. Hence AC is a logic of synonymy that satisfies (P1).
As we will need it later on, we very briefly recap the truthmaker semantics of [23]. This semantics can be traced back to [27] and is defined as follows. A state model M is a triple (S, ⊑, |·|) where ⊑ is a complete partial order on S and |·| maps atomic sentences p to pairs (|p|⁺, |p|⁻) of non-empty subsets of S. (Elements of |p|⁺ are called verifiers of p and elements of |p|⁻ are called falsifiers of p.) Writing ⊔ for the fusion (least upper bound) of states, we recursively define when a sentence ϕ is verified/falsified by a state s ∈ S (in signs: s ⊩ ϕ / s ⊣ ϕ):

- s ⊩ p :iff s ∈ |p|⁺, and s ⊣ p :iff s ∈ |p|⁻
- s ⊩ ¬ϕ :iff s ⊣ ϕ, and s ⊣ ¬ϕ :iff s ⊩ ϕ
- s ⊩ ϕ ∧ ψ :iff ∃u, t ∈ S : u ⊩ ϕ & t ⊩ ψ & s = u ⊔ t, and s ⊣ ϕ ∧ ψ :iff s ⊣ ϕ or s ⊣ ψ
- s ⊩ ϕ ∨ ψ :iff s ⊩ ϕ or s ⊩ ψ, and s ⊣ ϕ ∨ ψ :iff ∃u, t ∈ S : u ⊣ ϕ & t ⊣ ψ & s = u ⊔ t

The exact content of ϕ is |ϕ| := {s ∈ S : s ⊩ ϕ}, and the (replete) content of ϕ, denoted [ϕ], is the convex closure of the complete closure of |ϕ|. We won't go into the philosophical difference between these two notions of content. For this see [23], and for the notion of synonymy induced by exact content see [25]. The soundness and completeness result of [23] states that ⊢_AC ϕ ≡ ψ if and only if [ϕ] = [ψ] in every state model.
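To illustrate how exact verification works, here is a small sketch of the clauses above for a toy state model in which states are finite sets, fusion is union, and the atomic verifiers and falsifiers are singletons chosen purely for illustration:

```python
# A minimal sketch of exact verification (after the clauses above) in a toy
# state model: states are finite sets, fusion is union, and each atom p is
# verified by the single state {p} and falsified by {('¬', p)} -- an assignment
# chosen purely for illustration. Formulas are tuples: ('¬', φ), ('∧', φ, ψ), ('∨', φ, ψ).

def verifiers(phi):
    if isinstance(phi, str):                       # atom
        return {frozenset([phi])}
    op = phi[0]
    if op == '¬':
        return falsifiers(phi[1])
    if op == '∧':                                  # fusions of a verifier of each conjunct
        return {u | t for u in verifiers(phi[1]) for t in verifiers(phi[2])}
    if op == '∨':                                  # verifier of either disjunct
        return verifiers(phi[1]) | verifiers(phi[2])

def falsifiers(phi):
    if isinstance(phi, str):
        return {frozenset([('¬', phi)])}
    op = phi[0]
    if op == '¬':
        return verifiers(phi[1])
    if op == '∧':                                  # falsifier of either conjunct
        return falsifiers(phi[1]) | falsifiers(phi[2])
    if op == '∨':                                  # fusions of a falsifier of each disjunct
        return {u | t for u in falsifiers(phi[1]) for t in falsifiers(phi[2])}

# p is verified exactly by {p}, while p ∨ (p ∧ q) is also verified by the
# fusion {p, q} -- so the two sentences differ in exact content.
assert verifiers('p') == {frozenset(['p'])}
assert verifiers(('∨', 'p', ('∧', 'p', 'q'))) == {frozenset(['p']), frozenset(['p', 'q'])}
```

So the exact content of p ∨ (p ∧ q) properly extends that of p, which is why truthmaker semantics, unlike the scenario semantics of Section 2, can distinguish the two.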

Scenario Synonymy
To get a notion of synonymy satisfying (P2) and the two assumptions from Section 2, we formally describe scenarios and axiomatize "scenario synonymy".
In Section 2, we straightforwardly relaxed possible worlds to scenarios in an attempt to obtain a semantics satisfying the subject matter preservation principle (P1). The idea behind such a scenario is that it is a (possibly inconsistent) representation of a part of the world or of a possible world. Before we present examples, let's first describe the two assumptions characterizing scenarios in more detail.
By (A1), every scenario s, whatever object it might be, determines a valuation v_s : {p_0, p_1, . . .} → T, where T is one of the three possible sets of truth-values. Since we're interested in just how fine-grained this generalization of possible worlds semantics can become, we choose T := {t, f, u, b} (the other choices will be coarser or at best equally fine-grained). By (A2), this valuation extends to complex sentences according to FDE. Thus, we may define a model of scenario semantics as a pair (S, v) where S is a nonempty set whose elements are called scenarios and v is a function mapping scenarios s ∈ S to valuations v_s. The positive content of a sentence ϕ is ⟨ϕ⟩⁺ := {s ∈ S : v_s(ϕ) ∈ {t, b}} and the negative content is ⟨ϕ⟩⁻ := {s ∈ S : v_s(ϕ) ∈ {f, b}}. The content of ϕ is ⟨ϕ⟩ := (⟨ϕ⟩⁺, ⟨ϕ⟩⁻). Note that, while there is no restriction on (the set of) scenarios, their semantically relevant aspect is the valuation that they determine.
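To make these definitions concrete, here is a sketch of the canonical model over two atoms (the 16 FDE-valuations as scenarios), reading v_s(ϕ) ∈ {t, b} as "verified at s" and v_s(ϕ) ∈ {f, b} as "falsified at s"; this encoding of content is assumed here for illustration. It confirms that scenario semantics distinguishes the classically equivalent p ∧ ¬p and q ∧ ¬q:

```python
# The canonical model of scenario semantics over two atoms: the 16 FDE-valuations
# are the scenarios, and the content of φ is the pair of its positive and
# negative content. Scenario semantics is thereby finer than possible-worlds
# semantics: p ∧ ¬p and q ∧ ¬q get different contents.

from itertools import product

V = ['t', 'f', 'u', 'b']
NEG = {'t': 'f', 'f': 't', 'u': 'u', 'b': 'b'}
ORDER = {'f': 0, 'u': 1, 'b': 1, 't': 2}  # truth order: f < u, b < t

def conj(x, y):  # meet in the FDE truth lattice
    if {x, y} == {'u', 'b'}:
        return 'f'
    return x if ORDER[x] <= ORDER[y] else y

def ev(phi, s):  # evaluate a formula at scenario s (a dict atom -> value)
    if isinstance(phi, str):
        return s[phi]
    if phi[0] == '¬':
        return NEG[ev(phi[1], s)]
    if phi[0] == '∧':
        return conj(ev(phi[1], s), ev(phi[2], s))
    if phi[0] == '∨':
        return NEG[conj(NEG[ev(phi[1], s)], NEG[ev(phi[2], s)])]

scenarios = [dict(zip(('p', 'q'), vals)) for vals in product(V, repeat=2)]

def content(phi):
    pos = frozenset(i for i, s in enumerate(scenarios) if ev(phi, s) in ('t', 'b'))
    neg = frozenset(i for i, s in enumerate(scenarios) if ev(phi, s) in ('f', 'b'))
    return (pos, neg)

# Classically equivalent, but distinguished by a scenario (e.g. p = b, q = t):
assert content(('∧', 'p', ('¬', 'p'))) != content(('∧', 'q', ('¬', 'q')))
```

At the scenario with p = b and q = t, p ∧ ¬p gets value b while q ∧ ¬q gets value f, which is exactly where the two contents come apart.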
Let's consider two examples of such scenarios: First, if we want to give scenarios a metaphysical interpretation (also applicable in semantic paradoxes), we can interpret them as FDE-valuations or worlds [49, ch. 7-8].
Second, if we want to give scenarios a cognitive interpretation, we can take them to be sets of defeasible rules (i.e. rules that allow for exceptions) with which we conceptualize (a part of) the world. In other words, these rules are our knowledge base where we store both factual and semantic knowledge. Different knowledge bases correspond to different conceptualizations (or representations). This explains what kind of object these scenarios are, but how do they determine a valuation? For this we use the fact that such defeasible knowledge bases are paradigmatically modeled by programs in logic programming. A logic program determines a three-valued model that is the canonical interpretation of the set of rules provided by the program [26]. Thus, a logic program s (modeling some knowledge base of a cognitive agent) determines a valuation (the intended model that the agent forms given her knowledge base).
This idea can be taken further. Logic programming has a neural interpretation: for every program there is a certain (artificial) neural network that computes the canonical interpretation of the program, that is, after starting in an initial state, the neural network will eventually reach a stable state that corresponds to the intended interpretation [55]. The idea is that, while the logic program is the symbolic representation of the knowledge base of the agent, the neural network is the (high-level) neural implementation of it. Each state of such a network can be described by a four-valued interpretation and could, roughly, be interpreted as how an agent currently cognizes the part of the world she is in. Thus, we can model knowledge bases as above as logic programs and take these as scenarios (determining the valuation given by their intended interpretation), but we can also model knowledge bases as neural networks and take their possible states as scenarios. See [33] for an elaboration of this idea of grounding (cognitive) scenarios (or "worlds") in states of neural networks (defining similarity between such scenarios, various modal operators, a counterfactual, and weaker versions of synonymy).
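To make the logic-programming reading concrete, here is a simplified sketch of how a propositional program determines a three-valued valuation by iterating a Fitting-style (Kripke-Kleene) operator to a fixed point. The rule format, the atom names, and the treatment of default negation are illustrative assumptions, not the construction of [26] in full detail:

```python
# A simplified sketch: a propositional logic program determines a three-valued
# valuation by iterating the Fitting (Kripke-Kleene) operator to a fixed point.
# A rule is a pair (head, [body literals]); 'not x' marks default negation.
# For simplicity we assume the iteration converges for the programs considered.

def fitting_step(program, val):
    new = {}
    atoms = {head for head, _ in program} | set(val)
    for a in atoms:
        bodies = [body for head, body in program if head == a]
        def lit_val(l):
            if l.startswith('not '):
                v = val.get(l[4:], 'u')
                return {'t': 'f', 'f': 't', 'u': 'u'}[v]
            return val.get(l, 'u')
        # K3 conjunction of a body = weakest literal value (f < u < t)
        body_vals = [min((lit_val(l) for l in body), key='fut'.index, default='t')
                     for body in bodies]
        if any(v == 't' for v in body_vals):
            new[a] = 't'            # some rule body is true
        elif all(v == 'f' for v in body_vals):
            new[a] = 'f'            # every rule body is false (incl. no rules at all)
        else:
            new[a] = 'u'
    return new

def canonical_valuation(program, atoms):
    val = {a: 'u' for a in atoms}
    while True:
        new = fitting_step(program, val)
        if new == val:
            return val
        val = new

# Illustrative knowledge base: "penguins are birds; birds fly unless abnormal;
# Tweety is a penguin."
prog = [('bird', ['penguin']), ('flies', ['bird', 'not abnormal']), ('penguin', [])]
v = canonical_valuation(prog, ['bird', 'flies', 'penguin', 'abnormal'])
assert v == {'bird': 't', 'flies': 't', 'penguin': 't', 'abnormal': 'f'}
```

The fixed point is the scenario's valuation: atoms derivable from the rules come out true, atoms with no supporting rule come out false, and anything in between stays undecided.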
Let's now turn to the notion of synonymy provided by scenario semantics. Whatever instance we consider, scenarios satisfying the two assumptions extensionally act like FDE-models. (Intensionally they might be more complex than plain FDE-valuations; we discuss ways to intensionalize scenario semantics in Section 8.) We define the canonical model of scenario semantics as (S_c, v_c) where S_c is the set of valuations v : {p_0, p_1, . . .} → T and v_c is the identity function. Then the logic of synonymy obtained by scenario semantics can be characterized as follows (where ϕ ⇔_FDE ψ means that ϕ and ψ are equivalent in the logic FDE in the sense of having the same value under every valuation).

Theorem 1 Let SF := AC + (ϕ ≡ ϕ ∨ (ϕ ∧ ψ)) (see Section 6). Then for all sentences ϕ and ψ, the following are equivalent:

(i) ϕ ≡ ψ is valid in every model of scenario semantics,
(ii) ϕ ≡ ψ is valid in the canonical model of scenario semantics,
(iii) ϕ ⇔_FDE ψ,
(iv) ⊢_SF ϕ ≡ ψ.

The first three items of the theorem are immediate, and the last one, which axiomatizes this logic of synonymy as an extension of AC, will be proven in the Appendix.

Thus we can see that the two sentences ϕ and ϕ ∨ (ϕ ∧ ψ) that we used to show the inconsistency of the principles (P1), (P2), (A1), and (A2) really are the only source of the inconsistency. In Section 7, we present a possible argument in favor of scenario synonymy.

Move on Up: From Scenario to Truthmaking Synonymy
We show that truthmaking synonymy and scenario synonymy are related by moving up one set-theoretic level: moving from scenarios to sets of scenarios (as the entities at which sentences are evaluated) fine-grains scenario semantics to the level of truthmaking semantics.
By now, we know the proof-theoretic relationship between truthmaker synonymy (AC) and scenario synonymy (SF = AC + ϕ ≡ ϕ ∨ (ϕ ∧ ψ)). We now want to see how the two are semantically related.
Analogous to the canonical model of [23], we define the canonical scenario-based state model C as follows.
Definition 1 In the canonical model of scenario semantics, the scenarios are the valuations, which, in turn, correspond exactly to the subsets s of L := {p_0, ¬p_0, p_1, ¬p_1, . . .}. Thus, write S := P(L) (where P is the powerset operator). Write s_p := {p} ∈ S, which is the scenario that makes p true and leaves anything else undetermined, and write s_¬p := {¬p} ∈ S for the scenario that makes p false and leaves anything else undetermined.

The set of scenarios {s_p, s_q} ∈ P(S) verifies, or makes true, p ∨ (p ∧ q) but not p. Thus, if we move from scenarios making sentences true to sets of scenarios making sentences true, then we get a semantics that can distinguish between the scenario-synonymous sentences p and p ∨ (p ∧ q). In the Appendix, we prove, analogously to [23], the following result.

Corollary 1 The following are equivalent: (i) ⊢_AC ϕ ≡ ψ; (ii) ϕ and ψ have the same (replete) content in the canonical scenario-based state model C.

In other words, while scenario semantics can never get to the level of granularity achieved by truthmaking synonymy, any semantics that individuates content according to AC is equivalent, in terms of content individuation, to the sets-of-scenarios semantics. We'll next get to investigating the "space" between SF and AC.

Characterizing Benchmark Synonymies
So far, we've motivated and described two benchmarks, SF and AC, among the possible notions of synonymy. Now, we'll add more and thus get to the main formal result: We identify and uniformly axiomatize various logics of synonymy and characterize them by possible combinations of weak/strong subject matter preservation and weak/strong logical equivalence.
We first state the logics and the characterization, and then we'll put them into context. In the next section, we give arguments for the various benchmark notions of synonymy.
We first need some terminology. Recall that the set of atoms At(ϕ) occurring in a sentence provides some information about the subject matter of the sentence. Now, we not only want to record which atoms occur in ϕ, but also whether they occur positively or negatively in ϕ. This is standardly done as follows (see, e.g., [23] or [21]).

Definition 2 (Valence) The valence (positive or negative) of an atom p in a sentence ϕ is defined recursively by:

(i) Only p occurs positively in p, and no atom occurs negatively in p.
(ii) p occurs positively (negatively) in ¬ϕ iff p occurs negatively (positively) in ϕ.
(iii) p occurs positively (negatively) in ϕ • ψ (for • ∈ {∧, ∨}) iff p occurs positively (negatively) in ϕ or in ψ.

Note that p can occur in ϕ either not at all, or positively, or negatively, or both positively and negatively. We define L(ϕ) := {p : p occurs positively in ϕ} ∪ {¬p : p occurs negatively in ϕ}. Thus, L(ϕ) not only records which atoms occur in ϕ, but also whether they occur positively or negatively.

(This yields an intuitive two-level picture of content where the dividing line lies somewhere between SF and AC: on the first granularity level, content can be modeled by first-order objects like scenarios or possible worlds; on the second granularity level, content is modeled by second-order objects like sets of scenarios.)
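The recursive definition of valence translates directly into code. The following sketch (with formulas represented as nested tuples, a representation chosen here for illustration) computes the positive and negative occurrences of atoms and the set L(ϕ):

```python
# Valence as code: compute the sets of positively and negatively occurring
# atoms in a formula, and from them L(φ). Formulas are tuples:
# ('¬', φ), ('∧', φ, ψ), ('∨', φ, ψ); atoms are strings.

def valence(phi):
    """Return (pos, neg): the atoms occurring positively / negatively in phi."""
    if isinstance(phi, str):                 # clause (i): atom
        return {phi}, set()
    if phi[0] == '¬':                        # clause (ii): negation flips valence
        pos, neg = valence(phi[1])
        return neg, pos
    # clause (iii): ∧ and ∨ just collect the valences of both arguments
    p1, n1 = valence(phi[1])
    p2, n2 = valence(phi[2])
    return p1 | p2, n1 | n2

def L(phi):
    pos, neg = valence(phi)
    return set(pos) | {'¬' + p for p in neg}

# In ¬(p ∧ ¬q), p occurs negatively and q positively:
assert L(('¬', ('∧', 'p', ('¬', 'q')))) == {'¬p', 'q'}
# p occurs both positively and negatively in p ∧ ¬p:
assert L(('∧', 'p', ('¬', 'p'))) == {'p', '¬p'}
```
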
Next, we provide an equivalent perspective on valence that will be central to the proof of our characterization result. Again, we need some terminology. A literal is an atom or a negated atom. A sentence ϕ is in disjunctive form if it is a disjunction of conjunctions of literals. It is standard if the conjuncts and disjuncts are ordered according to a fixed order. Following [23], ϕ is maximal if whenever it contains a disjunct ϕ_0 and a literal l (appearing as a conjunct of some disjunct), then it contains a disjunct ϕ_1 whose literals are exactly those of ϕ_0 ∧ l (that is, ϕ_1 and ϕ_0 ∧ l are identical modulo order and repeats). Fine [23] shows that every sentence ϕ is AC-provably equivalent to a unique standard maximal disjunctive form ϕ_max.
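Maximal forms can be computed for small sentences. The following sketch (formulas as nested tuples, chosen here for illustration) computes a disjunctive form as a set of sets of literals and closes it under adjoining occurring literals; comparing these sets models identity of standard maximal forms modulo order and repetition:

```python
# A sketch of maximal disjunctive forms: compute a DNF (as a set of sets of
# literals) and close it under adjoining literals occurring in the sentence.
# Modulo ordering and repetition, two sentences have the same standard maximal
# form iff these sets coincide. Formulas are tuples as in Definition 2.

def dnf(phi, neg=False):
    """DNF of phi (or of ¬phi if neg=True) as a frozenset of frozensets of literals."""
    if isinstance(phi, str):
        lit = '¬' + phi if neg else phi
        return frozenset([frozenset([lit])])
    if phi[0] == '¬':
        return dnf(phi[1], not neg)
    op = phi[0]
    if neg:                                   # De Morgan: push negation inward
        op = '∨' if op == '∧' else '∧'
    a, b = dnf(phi[1], neg), dnf(phi[2], neg)
    if op == '∨':
        return a | b
    return frozenset(d1 | d2 for d1 in a for d2 in b)   # distribute ∧ over ∨

def maximal_form(phi):
    disjuncts = dnf(phi)
    literals = frozenset(l for d in disjuncts for l in d)
    # adjoin every occurring literal to every disjunct, iterated to a fixed point
    while True:
        closed = disjuncts | frozenset(d | {l} for d in disjuncts for l in literals)
        if closed == disjuncts:
            return disjuncts
        disjuncts = closed

# p and p ∨ (p ∧ q) have different maximal forms (so AC does not identify them),
# while p ∧ q and q ∧ p have the same one.
assert maximal_form('p') != maximal_form(('∨', 'p', ('∧', 'p', 'q')))
assert maximal_form(('∧', 'p', 'q')) == maximal_form(('∧', 'q', 'p'))
```

Note how the two sentences from the no-go result come apart here: the maximal form of p is {{p}}, while that of p ∨ (p ∧ q) also contains the disjunct {p, q}.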

Lemma 1
For every sentence ϕ, L(ϕ) is the set of literals that occur in ϕ_max. 17 Thus, the definitions of L(ϕ) via valence and via the literals in ϕ_max are equivalent. The former might be conceptually more useful (so we'll use it in the main text), while the latter is more useful in proofs (so we'll use it in the Appendix).

Definition 3
The logics that we'll consider are axiomatized as follows.
Their relationship is described in Fig. 2: all indicated containments are strict, and SF is incomparable to both SCA and SCL. We leave the search for the required counterexamples to the reader.
Their names come from the characterization theorem that we'll show next: each logic represents one combination of weak (Classical) or strong (FDE) logical equivalence and weak (Atomic) or strong (Literal) subject matter identity. So the first letter S stands for Synonymy, the second letter (F or C) abbreviates the kind of logical equivalence that the synonymy requires, and the third letter (L or A), if present, abbreviates the kind of subject matter identity that the synonymy requires. This then explains the (perhaps surprising) axiomatization. (Recall, ϕ ⇔_FDE ψ denotes FDE-equivalence; similarly, ϕ ⇔_C ψ denotes classical equivalence.)

We end this section with several remarks putting this result into context. First, with scenario synonymy, we developed a logic of synonymy that exactly satisfies the principle (P2) under the two assumptions (A1) and (A2). To get a similar understanding for the class of logics satisfying (P1), we look at the newly added logics, which all satisfy (P1). Such logics, where equivalence entails having the same atoms, have been called conceptivist logics by [53]. (For a short history and references see, e.g., [20].) The first such system was suggested by [45,47] to capture the idea that in many cases a valid inference shouldn't only preserve truth; the conclusion should also be conceptually contained in the premises. The lattice of conceptivist equivalences is marked in gray in Fig. 2. The top element is the identity relation and the bottom element is the logic ⊥_C where any two sentences with the same atoms are equivalent. 18

Second, there are various related results. Part (i) is Theorem 1. Part (ii) is a characterization of AC proven by [21] and [23], formulated in terms of valence preservation. As we'll discuss below, we'll provide a novel proof. Concerning (iii), there are various logical systems whose notion of equivalence is characterized by SFA.
One is the first-degree fragment of the system of [16] with the "story semantics". Another one is the formalization of Buddhist dialectics of [50], given by adding a fifth truth-value to the semantics of FDE that represents "emptiness" ([21] shows that equivalence in this system is FDE-equivalence plus having the same atoms). Concerning (iv), [52] shows that an equivalent characterization is that ϕ and ψ Mx-match in the category of classical proofs. We'll come back to this below. Concerning (v), [53] shows that for all sentences ϕ and ψ that don't contain the conditional →, the sentence ϕ ↔ ψ is provable in Parry's logic of analytic containment iff ϕ and ψ are classically equivalent and have the same atoms.
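The characterization in (iii), FDE-equivalence plus having the same atoms, can be tested mechanically by brute force. Here is a sketch (our own illustration, not the paper's code), encoding the four FDE values relationally as pairs (at least true, at least false): t = (True, False), b = (True, True), n = (False, False), f = (False, True).

```python
from itertools import product

def atoms(phi):
    """Set of atoms of a formula (atoms are strings, compounds are tuples)."""
    if isinstance(phi, str):
        return {phi}
    return set().union(*(atoms(sub) for sub in phi[1:]))

def val(phi, v):
    """FDE valuation: v maps atoms to pairs (at_least_true, at_least_false)."""
    if isinstance(phi, str):
        return v[phi]
    if phi[0] == 'not':
        t, f = val(phi[1], v)
        return (f, t)                      # negation swaps the two coordinates
    (ta, fa), (tb, fb) = val(phi[1], v), val(phi[2], v)
    if phi[0] == 'and':
        return (ta and tb, fa or fb)
    return (ta or tb, fa and fb)           # 'or'

def fde_equiv(phi, psi):
    """Same FDE value under every four-valued assignment."""
    ats = sorted(atoms(phi) | atoms(psi))
    four = [(t, f) for t in (False, True) for f in (False, True)]
    return all(val(phi, dict(zip(ats, vs))) == val(psi, dict(zip(ats, vs)))
               for vs in product(four, repeat=len(ats)))

def sfa_equiv(phi, psi):
    """The characterization of SFA: FDE-equivalence plus the same atoms."""
    return fde_equiv(phi, psi) and atoms(phi) == atoms(psi)
```

For instance, p is FDE-equivalent to p ∨ (p ∧ q) but not SFA-equivalent to it (the atoms differ), while p ∨ (p ∧ q) and p ∧ (p ∨ q) come out SFA-equivalent.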
Third, the proof of the theorem will take up most of the Appendix. It defines, for each of the logics, a notion of a unique normal form and shows that two such forms are identical iff they have the properties of the characterization. This method is not only uniform across these logics, but it is also constructive, since it provides an algorithm to decide equivalence. Moreover, it associates to each notion of synonymy a characteristic notion of normal form. In particular, the proof doesn't require any (philosophically laden) ideas about semantics. (Because of these advantages we include our proof of (ii).)

Fourth, an intuitive conception of meaning takes the meaning of a sentence to consist of two components: a "truth component" that specifies the truth-conditions of the sentence, and an "aboutness component" that specifies what the sentence is about. Thus, synonymy according to this intuitive conception is given by "truth-based equivalence" plus "subject matter identity". (Note how the two intuitive principles about synonymy mirror this distinction: (P2) is concerned with truth, while (P1) is concerned with subject matter.) Thus, the theorem provides the axiomatization for various choices of these two components. 19

Fifth, this also provides an interpretation for the dotted lines in Fig. 2. We get from the logics above the dotted line labeled "add classicality" to the logics below by replacing the stricter truth component of FDE-equivalence with the looser one of classical equivalence. Similarly, we get from the logics above the "add transparency" line to the logics below by moving from having the same literals to the weaker requirement of having the same atoms. That is, we postulate that ϕ has the same subject matter as ¬ϕ. In other words, negation is subject matter transparent. This is a common principle if subject matter is understood as topic [31,48].
Sixth, what's beyond AC? There is, for example, isomorphism in the category of classical proofs, which coincides with equivalence in multiplicative linear logic [17,52], factual equivalence [15], equivalence in exact entailment [25], equivalence in some impossible world semantics [6], or syntactic identity. However, for reasons of space, we won't further investigate those.

19 In fact, not only our but also most other conceptivist logics eventually received an analysis in terms of a truth and a subject matter condition [20,21]. This corroborates the two-component analysis of meaning: any available notion of meaning identity that respects subject matter or topicality intuitions can be analyzed as having two components. (Of course, this doesn't mean that this is the only analysis, as Priest's formalization of Buddhist dialectics shows: it has a semantics with only truth components, but it can also be regarded under the two-component analysis.)

Arguments for Various Notions of Synonymy
In this section, we present novel arguments for the various notions of synonymy discussed so far.
Argument for SF A reasonable notion of synonymy is to say that two sentences are synonymous if they, when uttered, always communicate the same thing. Denote this by ≡_c. The argument claims that the intuitively correct notion of synonymy ≡_c is, in fact, scenario synonymy.
Let's recap Hurford's constraint [34], which says that disjunctions where one disjunct entails the other are infelicitous; an often-cited example is "John is American or Californian". 20 Call such disjunctions Hurford disjunctions. An intuitive explanation for this constraint is the following. A Hurford disjunction ϕ ∨ ψ is equivalent to one of the disjuncts, say, ϕ. It's a pragmatic principle that you hence should utter the simpler ϕ and not the equivalent but more complicated ϕ ∨ ψ. Hence ϕ ∨ ψ is infelicitous, while ϕ is not.
This has to be stated more carefully. The pragmatic principle can only be invoked when ϕ and ϕ ∨ ψ communicate the same thing: the principle works because you could have used either ϕ or ϕ ∨ ψ to communicate what you wanted to communicate, but you should choose ϕ over ϕ ∨ ψ on the grounds that ϕ is more concise. But if ϕ and ϕ ∨ ψ are to communicate the same thing, the implication from ψ to ϕ needs to be "obvious": if the implication required a very complicated classical logic computation, ϕ and ϕ ∨ ψ wouldn't communicate the same thing. Thus, the refined Hurford constraint with its explanation states the following. 21 Assume ϕ ∨ ψ is a disjunction where ψ obviously implies ϕ. Then ϕ ∨ ψ is infelicitous because ϕ and ϕ ∨ ψ communicate the same thing, whence you should choose the simpler ϕ over the complicated ϕ ∨ ψ.

Now for the argument. Assume the refined Hurford constraint with its explanation is correct. The sentence ϕ ∨ (ϕ ∧ ψ) is a disjunction where ϕ ∧ ψ obviously implies ϕ (in the sense of the preceding paragraph), since it merely infers a conjunct from a conjunction. Hence, by assumption, ϕ and ϕ ∨ (ϕ ∧ ψ) need to communicate the same thing, whence ϕ ≡_c ϕ ∨ (ϕ ∧ ψ). (So the Hurford constraint can be seen as a reason for coarse-graining synonymy.) Moreover, the axioms of AC represent necessary properties of ≡_c, and there is no other reason to make more sentences ≡_c-synonymous. Thus, ≡_c is axiomatized by SF, and hence scenario synonymy is the intuitively correct notion of synonymy ≡_c.

Argument for SFA Assume the two-component conception of meaning, and assume the truth component is correctly spelled out as FDE-equivalence (motivated by scenarios). Moreover, assume that the aboutness component is given by some atom-based approach to subject matter which, as mentioned, is particularly well-suited if subject matter is understood as topic [31].
That is, the subject matter of a complex sentence is given by merging the subject matter of the atoms of the sentence. For this notion of subject matter, the following is true.
(S2) Soundness of syntactic reflection. If two sentences have the same atomic sentences, they have the same subject matter.
Thus, by (S1), which we discussed in Section 2, two sentences have the same subject matter if and only if they have the same atoms. Hence, by the characterization theorem, synonymy in this intuitively correct conception of meaning is axiomatized by SFA.
Argument for AC The flip side of the argument just given is that subject matter identity has to be stronger than having the same atoms (which is in favor of AC): Assume one could argue that ϕ ∨ (ϕ ∧ ψ) is not synonymous to ϕ ∨ (ϕ ∧ ¬ψ) (as, for example, in truthmaker semantics). Then, by the characterization theorem, having the same atoms and FDE-equivalence are not enough for synonymy. Assume one sticks to the idea of a two-component semantics where truth-component identity shouldn't be spelled out in a way more fine-grained than FDE-equivalence. Then subject matter identity involves more than just having the same atoms (that is, the soundness of syntactic reflection (S2) fails). Since the two sentences differ only by a negation sign, this seems to suggest that negation is not topic transparent, that is, that the subject matter of a statement and that of its negation may differ. 22

Argument for SCA We can take the same argument as for SFA but replace the choice for the truth component: instead of FDE motivated by scenarios, take classical equivalence motivated as the conservative choice. Thus, SCA axiomatizes synonymy in the minimal modification of standard possible world semantics where synonymy entails subject matter identity.
Another way to bring this about is as follows. The semantics for weak Kleene logic uses three truth-values t, f, u, but u is interpreted as meaninglessness or off-topicness [3,9]. Thus, when all atoms of a sentence ϕ have a classical truth-value (t or f), the whole sentence has a classical truth-value, but as soon as one atom of ϕ is u, the whole sentence is u. Hence two sentences ϕ and ψ are equivalent in weak Kleene logic iff they are classically equivalent and have the same atoms, iff they are SCA-equivalent. So if we take weak Kleene to correctly describe reasoning preserving both truth and topic, and adopt a two-component view of meaning, then SCA is the correct logic of synonymy.
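The infectious behavior of u, and with it the coincidence of weak Kleene equivalence and SCA-equivalence, can be checked on examples by brute force. A sketch (our own code, assuming the standard weak Kleene tables):

```python
from itertools import product

U = 'u'   # the third value: meaningless / off-topic

def atoms(phi):
    """Set of atoms of a formula (atoms are strings, compounds are tuples)."""
    if isinstance(phi, str):
        return {phi}
    return set().union(*(atoms(sub) for sub in phi[1:]))

def wk(phi, v):
    """Weak Kleene valuation over {True, False, U}."""
    if isinstance(phi, str):
        return v[phi]
    args = [wk(sub, v) for sub in phi[1:]]
    if U in args:                       # u is infectious: any off-topic part
        return U                        # makes the whole sentence off-topic
    if phi[0] == 'not':
        return not args[0]
    if phi[0] == 'and':
        return args[0] and args[1]
    return args[0] or args[1]           # 'or'

def wk_equiv(phi, psi):
    """Same weak Kleene value under every three-valued assignment."""
    ats = sorted(atoms(phi) | atoms(psi))
    return all(wk(phi, dict(zip(ats, vs))) == wk(psi, dict(zip(ats, vs)))
               for vs in product((True, False, U), repeat=len(ats)))
```

For instance, p and p ∨ (p ∧ q) are classically equivalent but not weak Kleene equivalent (set q to u), whereas p ∨ (p ∧ q) and (p ∧ ¬q) ∨ (p ∧ q) are classically equivalent with the same atoms, hence weak Kleene equivalent.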
Taking up on footnote 21, a possible counterargument to SCA as the correct axiomatization of synonymy is as follows. The non-exhaustive reading of a Hurford disjunction is infelicitous, while the exhaustive reading is felicitous. So the two readings shouldn't be synonymous. In particular, ϕ ∨ (ϕ ∧ ψ) shouldn't be synonymous to (ϕ ∧ ¬ψ) ∨ (ϕ ∧ ψ). However, since these two sentences are classically equivalent and have the same atoms, they are equivalent according to SCA.

Argument for AC and SCL An intuitive notion of synonymy is explanational equivalence: any explanation for why ϕ is true can be "obviously" transformed into an explanation for why ψ is true, and vice versa. In other words, the two sentences don't just agree on truth-conditions but also on how they are explained or proven. Inspired by the concept of a homotopy and the well-known idea that a logic yields a category where formulas are objects and proofs are morphisms, we may formalize this by two conditions: First, there is a proof from ϕ to ψ and a proof from ψ to ϕ (this allows transformation of an explanation for ϕ into one of ψ, and vice versa). Second, if we concatenate the proofs and move from ϕ to ψ and back to ϕ, we obtain a proof that is "essentially" like the identity proof that obtains ϕ from ϕ; and similarly for ψ (this captures that the transformation is "obvious"). 23 Now, following [52], "essentially" can be spelled out in different ways: roughly, it could be exact identity, or it could allow treating occurrences of the same atom equally (either all of them, or only those of the same polarity). If we choose identity, the resulting notion of synonymy is that of isomorphism in the category of classical proofs, as mentioned above [17,52]. If we choose same-polarity occurrences, we get a notion of synonymy equivalent to AC [52]. If we choose all occurrences, we get a notion of synonymy with the same characterization as (and hence axiomatized by) SCL [52].
Thus, AC and SCL can be seen as two possible axiomatizations of explanational synonymy.

Ways Out of the No-go Result and its Consequences
We mention ways out of the no-go result about synonymy from Section 2, and we state some of its consequences concerning a pluralistic conception of synonymy and the principle of compositionality in logic programs and neural networks.

Ways out
We've started with two intuitive principles about synonymy in the strong sense of content identity: (P1) demanding synonymy to preserve having the same atoms, and (P2) demanding synonymy to respect scenarios. The two principles are inconsistent under the two assumptions (A1) and (A2) on scenarios (which a straightforward refinement of possible world semantics should satisfy). With the last sections, we can now understand this contradiction by precisely locating it and considering ways out. Indeed, scenario synonymy exactly satisfies (P2), (A1), and (A2), and the instances of sentences violating (P1) can be traced back to exactly one axiom. So let's wonder what modifications of (P1), (P2), (A1), or (A2) would render them consistent. We'll consider three related options.
First, we weaken (P2): The conceptivist logics SCA, SCL, SFA, AC all satisfy (P1) and, as extensions of AC, they satisfy a weaker version of (P2): if there is no set of scenarios that verifies one sentence but not the other, then the sentences are synonymous (cf. Section 5).

23 Figuratively speaking, the homotopy idea arises by thinking of proofs as paths through the space of formulas and demanding that the loop going from ϕ through ψ back to ϕ can be "continuously" deformed into the point ϕ (and similarly for ψ). Thus, various notions of continuity (i.e., of proof identity) correspond to various notions of "obviousness" and subject matter. We leave it to future research to explore these connections to homotopy theory.
Second, we keep (P1) and (P2) but change the notion of scenario, thereby modifying (A1) and/or (A2). This is done by most semantics for conceptivist logics. We consider three kinds of examples: (a) As mentioned, weak Kleene semantics uses three truth-values t, f, u (so validates (A1)), but u is interpreted as meaninglessness or off-topicness. Hence conjunction and disjunction cannot be interpreted as lattice-like operations on the truth-values, whence (A2) is not satisfied. 24 (b) The truthmaker semantics for AC uses the four truth-values t, f, u, b (so, too, validates (A1)), but there can be states making ϕ ∧ ψ true while making neither ϕ nor ψ true, whence (A2) is not satisfied. (c) The NC semantics for AC by [21] uses nine truth-values (so violates (A1)), but the truth-functions of the connectives on these truth-values are straightforward, whence it in spirit satisfies (A2).
Third, the general idea behind the preceding approach is to add more structure to the notion of a scenario and use this to make more fine-grained distinctions: in (a) the information what's on topic in a given scenario is added, in (b) mereological structure is added to the set of states, and in (c) more truth-values are added. Note an important difference: in (a) and (c) the scenarios themselves are 'locally' enriched by more structure while in (b) the set of scenarios is 'globally' enriched.
Let's consider more explicit ways of enriching scenarios and thus "intensionalize" scenario semantics. In Section 4, we described scenarios as representations of the world that extensionally act like FDE-models. So it could be that two scenarios determine the same valuation (extensionally identical) but they still differ in their representational or internal structure (intensionally different). Using this additional intensional structure, we could then weaken (P2) to demanding intensionally indistinguishable sentences to be synonymous. That is, if no scenario can find a difference between two sentences-neither based on its internal structure nor on the induced valuation-then they are synonymous. Here are three examples.
(a) As just indicated, a scenario might consist of a valuation plus a set of atomic sentences which are considered to be on topic in that scenario (or something similar to that effect). Then (intensional) scenario indistinguishability amounts to SFA.
(b) Two FDE-equivalent sentences ϕ and ψ might still be intensionally distinguished, since the explanation for why ϕ is true according to the scenario or representation is different from that for why ψ is true. 25 Thus, two sentences are intensionally indistinguishable if they are FDE-equivalent and explanationally synonymous as described in Section 6, whence this version of intensional scenario synonymy is axiomatized by AC or an even more fine-grained logic.
(c) Another, even more cognitive approach to weakening (P2) by intensionalizing scenario semantics is to read (P2) as "if no scenario can be imagined in which the two sentences differ, they are synonymous". Then scenario synonymy is moved from an extensional into a hyperintensional context, whence more is required for it to obtain. Using the logic of imagination of [4,5], this can formally be spelled out as: for all sentences χ, [χ]ϕ is equivalent to [χ]ψ. This roughly means that whenever we imagine a scenario constructed around making χ true, if ϕ turns out to be true there, then ψ will be true as well (and vice versa). What is the resulting notion of synonymy? If only possible worlds are used [5], it coincides with SCA (given that same content implies having the same atoms). If non-normal worlds are also allowed [4], it depends on the assumptions about imagination, but the resulting notion of synonymy will generally be very fine-grained. Thus, in line with the previous version, this version of intensionalized scenario synonymy, too, will be rather high up in the lattice of logics of synonymy.
Let's turn to the consequences of our results and the paradoxical nature of synonymy.
Pluralism According to pluralism about meaning, there are various, equally justified notions of meaning and thus logics of synonymy. Here are two ways in which our results point in this direction.
First, assume that, despite the attempted reconciliation, it is not an option to give up on any of (P1), (P2), (A1), and (A2) in their original, non-weakened formulations. Say, because one is convinced that (P1) is a necessary truth and (P2) together with the two assumptions is key to how we evaluate complex sentences and use scenarios in our thinking (e.g. in counterfactual reasoning). Then, scenario synonymy and one of the conceptivist logics are equally justified notions of synonymy. Our intuitions in favor of the principles then come from distinct notions of meaning.
Second, and more importantly, we've seen several arguments motivating distinct notions of synonymy. That is, as soon as at least two of these arguments are accepted, pluralism holds. The characterization theorem associates these notions of synonymy to different choices of logic and subject matter. For example, the 'communicational synonymy' in the argument for SF is distinct from the 'explanational synonymy' in the argument for AC and SCL. That there is no universally correct choice can be made plausible by the fact that different domains of reasoning might demand different choices. For example, in some domains, maybe metaphysics or mathematics, it might be plausible to adopt a classical logic, while in other domains, maybe cognition or databases, it might be plausible to go four-valued. And in some domains, maybe topics of a discourse, it might be plausible that negation is transparent and hence same atoms should mean same subject matter, while in other domains, maybe epistemic notions of content, it might be that having the same literals is the right choice for tracking subject matter identity.
Non-compositionality Another consequence of our results is that synonymy cannot be spelled out via scenarios in what may be called a straightforwardly compositional way: Assume we take the common starting point and work with scenarios that assign (at most) four truth-values to atomic sentences-that is, we satisfy (A1). If we then build a semantics for complex sentences with these scenarios that satisfies the principles of synonymy (P1) and (P2), then (A2) fails: The truth-values of complex sentences at scenarios are not determined by the truth-value of the sentences' parts at that scenario according to the straightforward truth-functions for the connectives.
Thus, there is a sense-namely that of violating (A2)-in which synonymy or content identity is not compositional. Let's now discuss the value of this insight.
On a positive note, this can be seen as an extension of a result from [40] showing that synonymy in the weaker sense of meaning similarity or resemblance is not compositional.
Moreover, we see that the above abstractly described failure of (A2) is found in many concrete examples: This applies, for instance, to the examples of scenarios mentioned in Section 4, that is, logic programs and their implementation in appropriate neural networks. But it also applies much more generally when we assign (e.g., in the quest for explainable artificial intelligence) to each activation state (or weight-setting) of a given neural network some human-interpretable atomic properties (e.g., "in this activation state, the network recognizes a dog" or "in this weight-setting, the network accurately identifies stop signs"). In all these examples, we assign at most four truth-values to atoms, but to fully query logic programs or explain the network, we, at the very least, also need to understand when complex properties (formed from the atomic ones) are true according to a program or state.

For this, it is natural to demand that programs and states can individuate complex properties built from different atomic properties: For example, when we consider the two complex properties 'there is a dog' and 'there is a dog or there is both a dog and a cat', we'd intuitively expect that in an activation state with the latter property the concept 'cat' is somehow "present", while this is not necessary for states with the former property, whence the two properties intuitively should be distinguishable by some state. But then the notion of truth according to a logic program or (activation or weight) state of a neural network cannot be straightforwardly compositional: we cannot just use the straightforward truth-functions to determine the truth-value of a complex sentence. For example, to determine the truth-value of a conjunction we might have to take other states and programs into account, rendering 'and' a modal operator as in truthmaker semantics (also see the discussion at the end of Section 9).
This is, to the best of our knowledge, a fresh perspective on the correct logic and semantics for complex properties of logic programs and network states, and one that seems worth pursuing.
On a negative note, we may wonder: what good is the insight that there is no straightforwardly compositional semantics if we could still find a compositional semantics that is, if at all, only slightly less straightforward but satisfies (A1), (P1), and (P2)? The following two examples exemplify two ways in which this can be achieved. However, they show that the resulting compositionality differs from that of a straightforwardly compositional semantics in that it is not, what we may call, 'purely extensional'.
First, assume scenarios use the three truth-values t, f, u, and as truth-functions we don't use the straightforward ones of strong Kleene logic but rather the, if at all, slightly less straightforward ones of weak Kleene logic. As discussed above, this still satisfies (A1), (P1), and (P2); however, now the truth-values aren't 'pure' truth-values anymore but rather 'composites' of truth and topic (t is interpreted as true and on topic, f as false and on topic, and u as off topic). So, really, we added additional intensional structure, namely topic, to the scenarios and used this to satisfy the principles of synonymy. As described above, this is a promising way out of the inconsistency, but compositionality is restored by (covertly) intensional means. The semantics is not 'purely extensionally' compositional in the following sense: it uses extensional truth-functions, but it interprets the truth-values as truth-topic composites, which provide intensional structure.
Second, consider truthmaker semantics. Scenarios assign four truth-values to atomic sentences, but the truth-value of a complex sentence ϕ at a scenario s is no longer determined by a truth-function and the truth-values of ϕ's parts at s (as it still was in the preceding example). Rather, the truth-value of complex sentences is determined modally, that is, by also taking into account scenarios other than s. 26 So, again as described above, this is a promising way out of the inconsistency, but compositionality is restored by the intensional means of using a modal semantics for conjunction and disjunction. Again the semantics is not 'purely extensionally' compositional: the truth-values are 'pure' but the connectives are modal and not extensional.

Impossibility Results for Subject Matter Preserving Synonymy
As just seen, a likely conclusion of the no-go result is that truth at scenarios is not straightforwardly compositional: the truth-value of a formula at a scenario cannot be determined from the truth-value of its atoms at the scenario alone in the straightforward way. We first analyze and then generalize this to an impossibility result for synonymy.
Analysis So what feature of a straightforwardly compositional semantics is the reason for the violation of the subject matter preservation principle? There are two evident potential reasons.
(1) To determine the truth-value of a formula at a scenario one only needs to consider that scenario and no other scenarios. That is, the connectives have an extensional, non-modal semantics. 27 (2) The truth-values can sensibly be ordered, and conjunction and disjunction respect this order: the truth-value of a conjunction is the minimum of the truth-values of the conjuncts, and similarly the maximum for disjunctions. 28 We now formulate these ideas more formally, and then we argue that, maybe contrary to one's first guess, (1) is not the real reason, but (2) is.

26 For example, the truth-value of ϕ ∧ ψ at s is in {t, b} if there are scenarios s′ and s′′ such that s is the fusion of s′ and s′′, the truth-value of ϕ at s′ is in {t, b}, and the truth-value of ψ at s′′ is in {t, b}.
27 This is what the logics used in (A2) satisfy (they have a straightforwardly compositional semantics). As already noted, truthmaker semantics doesn't satisfy this (and it cannot be given a straightforwardly compositional semantics).
28 Again, this is what the logics used in (A2) satisfy (they have a straightforwardly compositional semantics). As already noted, weak Kleene logic doesn't satisfy this (and it cannot be given a straightforwardly compositional semantics). Coming back to the weak Kleene logic example at the end of the previous section, one could speculate that an important feature of a 'pure' truth-value, as opposed to the 'truth-topic composites', is that it makes sense to order 'pure' truth-values.
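The claim in footnote 28, that weak Kleene cannot satisfy (2), admits a quick mechanical check. A sketch (our own code, assuming the standard infectious weak Kleene tables); since any three-element lattice is a chain, it suffices to try all linear orders of {t, f, u}:

```python
from itertools import permutations

T, F, U = 't', 'f', 'u'
VALS = [T, F, U]

# Standard weak Kleene tables: u is infectious.
def wand(a, b):
    return U if U in (a, b) else (T if (a, b) == (T, T) else F)

def wor(a, b):
    return U if U in (a, b) else (F if (a, b) == (F, F) else T)

def some_order_works():
    """Is there a linear order of {t, f, u} with conjunction = min
    and disjunction = max?"""
    for order in permutations(VALS):          # order[0] < order[1] < order[2]
        rank = {x: i for i, x in enumerate(order)}
        if all(wand(a, b) == min(a, b, key=rank.get) and
               wor(a, b) == max(a, b, key=rank.get)
               for a in VALS for b in VALS):
            return True
    return False
```

Running some_order_works() confirms the failure: conjunction = min would force u below t (since u ∧ t = u), while disjunction = max would force u above t (since u ∨ t = u), which no order can satisfy.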
Let's define a simple but general framework to formulate the discussion. We work in a general semantic framework where formulas (here built from ¬, ∧, and ∨) are interpreted in semantic models: a semantics S is a class of models M = (S_M, T_M, V_M), consisting of a set of states S_M, a set of truth-values T_M, and a valuation V_M assigning to each formula ϕ and state s ∈ S_M a truth-value V_M(ϕ, s) ∈ T_M. We say that a semantics S is extensionally compositional if, for every M ∈ S and every formula ϕ with atoms p_1, ..., p_n, there is a function f such that, for all states s ∈ S_M, V_M(ϕ, s) = f(V_M(p_1, s), ..., V_M(p_n, s)). This is the formalization of (1): the truth-value of ϕ at s is determined by the truth-values of ϕ's atoms at s, and no other state s′ ≠ s is required for this. 29 (We might additionally demand that f can be chosen uniformly, but this weaker version will already be sufficient.) We say that a semantics S is conjunction and disjunction conservative if, for all M = (S_M, T_M, V_M) ∈ S, the set T_M is a lattice (with operations ∧ and ∨) and, for all states s ∈ S_M, V_M(ϕ ∧ ψ, s) = V_M(ϕ, s) ∧ V_M(ψ, s) and V_M(ϕ ∨ ψ, s) = V_M(ϕ, s) ∨ V_M(ψ, s). This is the formalization of (2): conjunction and disjunction of the language get interpreted by the corresponding standard functions on the truth-values.
Finally, we say that a semantics S satisfies subject matter preservation if, for all formulas ϕ and ψ: if V_M(ϕ, s) = V_M(ψ, s) for all M ∈ S and s ∈ S_M (i.e., ϕ and ψ are synonymous in S), then At(ϕ) = At(ψ).

We now show that (1) is not the reason for the inconsistency: We show that an extensionally compositional semantics doesn't necessarily violate subject matter preservation (which it would have to, if it were the reason). (This is unlike scenario semantics, which is extensionally compositional and violates subject matter preservation.) Indeed, consider the trivial semantics consisting of only one semantic model M_0 = (S_0, T_0, V_0), where S_0 consists of just one state s_0, T_0 is simply the set of formulas, and V_0(ϕ, s_0) = ϕ. It is easily seen that this semantics is extensionally compositional and doesn't violate (i.e., satisfies) subject matter preservation. Of course, this semantics is no "serious" semantics, but it shows that being extensionally compositional doesn't imply a violation of subject matter preservation.
Next we show that (2) is a reason for the inconsistency: We claim that synonymy in a conjunction and disjunction conservative semantics S does not preserve subject matter. Indeed, by the absorption laws that any lattice satisfies, 30 we have for any M ∈ S and s ∈ S_M:

V_M(p ∨ (p ∧ q), s) = V_M(p, s) ∨ (V_M(p, s) ∧ V_M(q, s)) = V_M(p, s),

so p ∨ (p ∧ q) and p are synonymous in S. But At(p ∨ (p ∧ q)) = {p, q} ≠ {p} = At(p), so S doesn't preserve subject matter.

29 See [32] for a good discussion of compositionality at such a general level.
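The absorption step in this argument can be verified concretely. The following is our own check (not from the paper) on the FDE truth lattice, with values encoded as pairs (at least true, at least false), meet playing the role of conjunction and join that of disjunction:

```python
# The four FDE values: t = (True, False), b = (True, True),
# n = (False, False), f = (False, True).
VALS4 = [(t, f) for t in (False, True) for f in (False, True)]

def meet(a, b):   # lattice meet = conjunction in the truth order
    return (a[0] and b[0], a[1] or b[1])

def join(a, b):   # lattice join = disjunction in the truth order
    return (a[0] or b[0], a[1] and b[1])

# Both absorption laws hold for every pair of truth-values:
assert all(join(a, meet(a, b)) == a for a in VALS4 for b in VALS4)
assert all(meet(a, join(a, b)) == a for a in VALS4 for b in VALS4)
```

Hence, in any conjunction and disjunction conservative semantics over this lattice, V(p ∨ (p ∧ q), s) equals V(p, s) at every scenario s, even though At(p ∨ (p ∧ q)) = {p, q} differs from At(p) = {p}.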
Generalization In fact, a much weaker condition is enough to violate subject matter preservation. We say a semantics S is weakly absorptive if, for all formulas ϕ and ψ, all M ∈ S, and all s ∈ S_M:

V_M(ϕ ∧ (ϕ ∨ ψ), s) = V_M(ϕ ∨ (ϕ ∧ ψ), s).

The name is to indicate that this is a weak version of the just mentioned absorption law of lattices. As lattice(-like) structures are so fundamental to logic, it's not surprising that most logics validate weak absorption. For example, even exact truthmaker semantics, which is much higher up than AC in the lattice of conceptivist synonymies, still is weakly absorptive. 31 Hence all conceptivist logics containing AC (which includes all synonymies discussed so far) satisfy weak absorption. 32

We say a semantics S is order conservative if, for all M ∈ S, the set of truth-values T_M has a partial order ≤ and, for all ϕ and ψ and all s ∈ S_M:

V_M(ϕ ∧ ψ, s) ≤ V_M(ϕ, s) ≤ V_M(ϕ ∨ ψ, s).

Note that this is much weaker than being conjunction and disjunction conservative, since no lattice structure on the set of truth-values is assumed, let alone a homomorphism from formulas to truth-values. So this can be considered as a much weaker version of (2).
Still, these two weak properties yield the following simple but far-reaching impossibility result.

Theorem 3 (Impossibility result) Synonymy in a weakly absorptive and order conservative semantics S does not preserve subject matter.
Proof For any M ∈ S and s ∈ S M we have by order conservativity V (ϕ ∧ (ϕ ∨ ψ), s) ≤ V (ϕ, s) ≤ V (ϕ ∨ (ϕ ∧ ψ), s), and by weak absorption the two outer values are identical, so all three coincide. Hence ϕ and ϕ ∧ (ϕ ∨ ψ) are synonymous in S. But At (ϕ ∧ (ϕ ∨ ψ)) = At (ϕ) ∪ At (ψ), which differs from At (ϕ) whenever ψ contains an atom not occurring in ϕ. But this violates subject matter preservation. 30 That is, a ∨ (a ∧ b) = a = a ∧ (a ∨ b) for all elements a and b of the lattice. 31 It is readily shown that for any state s in a state model M, we have s ⊩ ϕ ∧ (ϕ ∨ ψ) iff s ⊩ ϕ ∨ (ϕ ∧ ψ). (Contrast this with the fact that exact truthmaking doesn't satisfy the distributivity law [25].) 32 One logic that provides an exception is multiplicative linear logic: conjunction (commonly written ⊗) can intuitively be interpreted, roughly, as there being enough resources to realize both conjuncts; disjunction (commonly written &) can intuitively be interpreted, roughly, as there being enough resources to realize any of the disjuncts. So ϕ ⊗ (ϕ & ψ) requires at least two times ϕ, while ϕ & (ϕ ⊗ ψ) only requires at least one ϕ and one ψ (which is less if ψ requires less than ϕ). As mentioned, [17,52] show that equivalence in multiplicative linear logic coincides with isomorphism in the category of classical proofs, and, in fact, [52] even explicitly mentions that ϕ ∧ (ϕ ∨ ψ) is not isomorphic to ϕ ∨ (ϕ ∧ ψ).
Thus, any subject matter preserving semantics either fails to be weakly absorptive or fails to be order conservative (or both). Let's consider both options.
Giving up order conservativity: For example, truthmaker semantics (either exact or as for AC) is a subject matter preserving semantics, and it indeed fails to be order conservative. 33 In contrast, as noted, any conceptivist logic containing AC (or exact equivalence) satisfies weak absorption. So any subject matter preserving semantics for these conceptivist logics is not order conservative, and hence doesn't respect an essential property of conjunction and disjunction. This corollary can be summarized in the following slogan: any subject matter preserving semantics that makes synonymous the few sentences dictated by exact equivalence violates our intuitions about conjunction and disjunction.
Giving up weak absorption: If the impossibility result forces us to choose between giving up weak absorption or changing an essential property of conjunction and disjunction, it seems natural to, at the very least, consider giving up weak absorption (in contrast to what most logics do). Indeed, we'll present some linguistic and cognitive evidence against weak absorption.
Concerning the linguistic evidence, consider the following two sentences adapted from the famous Linda problem [57]: (a) Linda is a bank teller or (Linda is a bank teller and an activist). (b) Linda is a bank teller and (Linda is a bank teller or an activist). Although it is a Hurford disjunction (cf. Section 4), sentence (a) seems to be a legitimate sentence in the context of the "Linda experiment" [57]: participants might ponder it when they judge which of the two disjuncts is more likely. In contrast, sentence (b) is pragmatically very ill-behaved: it first makes a claim (that Linda is a bank teller) and then makes a second claim which is weaker than the first. Pragmatically, this doesn't make sense: a hearer will think either that the speaker shouldn't have made the first claim since it was false, or that the speaker shouldn't make the weaker claim so as not to be redundant. So it seems that while (a) is sometimes pragmatically legitimate, (b) never is, even though these two sentences are synonymous if weak absorption holds. 34 Here is some cognitive evidence against weak absorption. Very roughly, our cognition recognizes conjunctive features in a serial manner and disjunctive features in a parallel manner [56]. This suggests that ϕ ∧ (ϕ ∨ ψ) and ϕ ∨ (ϕ ∧ ψ) should play different cognitive roles: assume we're presented with a few objects, one of which has property P (x). If we're asked whether there is an object x with the feature 33 A state can (exactly) make true a conjunction without (exactly) making true any of the conjuncts: consider three states s p,q , s p , and s q such that s p,q = s p ⊔ s q , whence s p,q ⊩ p ∧ q. So V (p ∧ q, s p,q ) ∈ {1, b} while V (p, s p,q ) ∈ {0, n}, violating V (p ∧ q, s p,q ) ≤ V (p, s p,q ). 34 Note that we only claim that the concrete sentences (a) and (b) provide counter-evidence to weak absorption. We don't claim that the following general principle is true: if ϕ and ψ are synonymous (in a given sense), then if ϕ makes sense in some context, also ψ makes sense in this context.
If sentences (a) and (b) are inserted into this principle, it could of course be used (contrapositively) to argue against weak absorption. However, it is doubtful whether this general principle is true. I'm grateful to an anonymous referee who pointed out that ϕ is synonymous to ϕ ∨ ϕ (even on very fine-grained notions of synonymy), but 'Which is true: ϕ or ϕ?' makes sense while 'Which is true: ϕ?' doesn't. We leave it as an open question which refined version of this principle can be used as an argument schema against claimed synonymies.

P (x) ∧ (P (x) ∨ Q(x)), then the prediction is that we should, by default (i.e., without further reflecting on the question), serially go through the objects and check for each whether it has P (x) and P (x) ∨ Q(x). If, on the other hand, we're asked whether there is an object x with the feature P (x) ∨ (P (x) ∧ Q(x)), we would, by default, scan the scene in parallel and find the object with P (x), from which we'd immediately conclude that P (x) ∨ (P (x) ∧ Q(x)) holds since the first disjunct was confirmed. Thus, in this exceedingly idealized setting, the response times for these two sentences should be different, although they are equivalent according to weak absorption. 35 This poses some interesting, though highly speculative, further questions. On the neural level, conjunctive features are realized by binding [19]: very roughly, a state where the neural network cognized that an object has two features P and Q somehow contains two parts, one signaling P and one signaling Q, that are bound together. So conjunction is much like truthmaker conjunction. However, since truthmaker semantics is weakly absorptive, this suggests that binding is more complicated than just 'merging' two states. What is this additional structure on the state space of the network? Or does disjunction also behave in a more complicated way, e.g., via a closure operator [51, ch. 12]?
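The truth-functional side of the Linda argument above can be verified mechanically; this is a minimal sketch (our own encoding), checking that sentences (a) and (b) are classically equivalent, an instance of weak absorption, even though their pragmatic profiles differ.

```python
from itertools import product

def linda_a(teller, activist):
    # (a): Linda is a bank teller or (a bank teller and an activist).
    return teller or (teller and activist)

def linda_b(teller, activist):
    # (b): Linda is a bank teller and (a bank teller or an activist).
    return teller and (teller or activist)

# Classically, both sentences collapse to 'Linda is a bank teller':
for t, a in product([False, True], repeat=2):
    assert linda_a(t, a) == linda_b(t, a) == t
```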
We leave it to future research to answer these questions and to develop a logic that can account for the above linguistic and cognitive intuitions against weak absorption. (In light of footnote 32, multiplicative linear logic seems like a promising starting point.) If the resulting logic is order conservative and subject matter preserving, it would provide a novel answer to the impossibility result.

Appendix: Proofs
In this appendix, we prove the theorems stated in the main text, which essentially amounts to proving the characterization theorem (Theorem 2).
All our proofs are elementary. As mentioned, a methodological novelty is that we extend a technique used by [22,23]: proving completeness results by developing an appropriate notion of disjunctive normal form. That is, the idea of the proof is as follows: for each notion of synonymy, we find a corresponding notion of normal form such that (i) every formula is provably equivalent to one in normal form, and (ii) if two such forms satisfy the two characterizing properties, they are identical (modulo the order of literals and disjuncts). The theorems then follow: soundness is easy, and for completeness we pass to the normal forms of the two given sentences satisfying the two properties; these normal forms then have to be identical, whence the original sentences are provably equivalent. The unifying, constructive, and theory-independent character of this proof has been discussed in Section 6. 35 Although, given the special logical character of the sentences, it is not clear whether the "weirdness" of the sentence overrides the default response. Also note that, if at all, this can only work for sentences of small syntactic complexity: it couldn't be used to check whether or not two sentences involving, say, 100 disjunctions and/or conjunctions should be equivalent, for otherwise we wouldn't be able to immediately see whether we should go into 'conjunction/serial' mode or 'disjunction/parallel' mode when checking the sentence. This is analogous to why conjunctive/disjunctive feature detection also only works when presented with a small number of objects.
Recall Definition 3 collecting the logics that we'll be working with. If L is one of those logics obtained from AC by adding an axiom ϕ ≡ ψ, then we refer to ϕ ≡ ψ as the L-axiom. Also note that, by construction (see Definition 2), if ϕ is in disjunctive form, then L(ϕ) = {l : l a literal in ϕ}. So, by Lemma 1, for any ϕ, L(ϕ) = L(ϕ max ). Thus, as we find it more convenient in this appendix, we can work with the conception of L(ϕ) as the set of literals of ϕ when ϕ is in disjunctive form, and as the set of literals of ϕ max when ϕ is arbitrary.
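The conventions just fixed can be illustrated with a small sketch; the representation (a disjunctive form as a list of disjuncts, each a set of literals, with '~p' for the negative literal) is ours, not the paper's notation.

```python
def L(dnf):
    """L(phi) for a disjunctive form: the union of the disjuncts'
    literal sets."""
    return set().union(*map(set, dnf)) if dnf else set()

def At(dnf):
    """At(phi): the atoms of phi, i.e., the literals with negation
    signs stripped."""
    return {lit.lstrip('~') for lit in L(dnf)}

phi = [{'p'}, {'p', '~q'}]      # p v (p ^ ~q)
assert L(phi) == {'p', '~q'}
assert At(phi) == {'p', 'q'}
```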

Step 1: Disjunctive Normal Forms
In this section, we do the first step: providing provably equivalent notions of normal form. In Section 6, we defined standard disjunctive normal forms. Now we define, in the following Definition 4, such a normal form for each logic; let ϕ be a sentence in standard disjunctive normal form. (Item (i) is due to [23].) (i) ϕ is maximal if, whenever ϕ i is a disjunct of ϕ and l is a literal occurring in ϕ, then ϕ i ∧ l is a disjunct of ϕ (modulo the order of the literals). This will be the normal form of AC. (ii) ϕ is minimal if there are no disjuncts ϕ i and ϕ j of ϕ with L(ϕ i ) ⊊ L(ϕ j ). This will be the normal form of SF. (iii) ϕ is maximal positive if (a) for every disjunct ϕ i of ϕ, there is an A ⊆ At (ϕ) and a minimal disjunct ϕ 0 of ϕ (i.e., there is no disjunct ϕ 0 ′ of ϕ such that L(ϕ 0 ′) ⊊ L(ϕ 0 )) such that L(ϕ i ) = L(ϕ 0 ) ∪ A, and (b) if ϕ i is a disjunct of ϕ and p ∈ At (ϕ), then ϕ i ∧ p is a disjunct of ϕ (modulo the order of the literals). This will be the normal form of SFA. (iv) ϕ is maximal literal-contradiction closed if ϕ is maximal and, whenever p, ¬p ∈ L(ϕ), then p ∧ ¬p is a disjunct of ϕ. This will be the normal form of SCL. (v) ϕ is maximal atom-contradiction closed if ϕ is maximal and, whenever p ∈ At (ϕ), then p ∧ ¬p is a disjunct of ϕ. This will be the normal form of SCA.
As mentioned, [23] shows that every formula ϕ is AC-provably equivalent to a standard maximal disjunctive normal form ϕ max . We show the analogue for the new normal forms and extensions of AC. For this, we'll need the following replacement rule.
Lemma 2 (Replacement) For C ∈ {AC, SF, SFA}, the following rule (R) is C-admissible; that is, if the premise is C-derivable, then so is the conclusion.

ϕ ≡ ψ / χ[ϕ] ≡ χ[ψ] (R) (When χ[ϕ] is a formula containing occurrences of ϕ, then χ[ψ] is the result of replacing all occurrences of ϕ by ψ.)
Proof Most of the work has been done in [23]. It suffices to show that the following two rules are admissible: the positive replacement rule (PR), which is (R) restricted to the case where the occurrences of ϕ in χ[ϕ] are not in the scope of ¬, and the negative replacement rule (NR), from ϕ ≡ ψ to infer ¬ϕ ≡ ¬ψ. The admissibility of (PR) is shown for AC by [23], and the proof also works for SF and SFA. For (NR), the proof is by induction on the proof of ϕ ≡ ψ. All the cases corresponding to the axioms and rules of AC are dealt with in [23]. So we only need to consider the new cases, i.e., the SF-axiom and the SFA-axiom. In these two cases we have to show that ¬ϕ ≡ ¬ψ is derivable, which is easily checked using the distributivity and de Morgan axioms.
This also holds for SCL and SCA, but for these logics it will follow immediately from the characterization theorem, and we won't need replacement for them in the proof. This is why we don't prove it directly here, although that wouldn't be too hard either.
Proposition 1 (Normal form for SF) Every formula ϕ is SF-provably equivalent to a standard minimal disjunctive normal form ϕ min .
Proof As mentioned, there is a formula ϕ ′ in standard disjunctive normal form that is AC-provably equivalent to ϕ and hence in particular SF-provably equivalent.
Next, we can delete, while preserving SF-provable equivalence, any disjunct ϕ j occurring in ϕ ′ if there already is a disjunct ϕ i in ϕ ′ with L(ϕ i ) ⊆ L(ϕ j ). This is because if there are such ϕ j and ϕ i , then, without loss of generality, ϕ j = ϕ i ∧ χ, and . . . ∨ ϕ i ∨ . . . ∨ (ϕ i ∧ χ) ∨ . . . is provably equivalent to . . . ∨ ϕ i ∨ . . ., where we essentially used commutativity and the axiom ϕ ∨ (ϕ ∧ ψ) ≡ ϕ. Thus, we can reduce ϕ ′ to a provably equivalent formula ϕ * in minimal disjunctive form.
Finally, by commutativity, associativity, and idempotence we can reorder ϕ * to make it standard (without changing minimality). Thus, we get a formula ϕ min that is provably equivalent to ϕ and in standard minimal disjunctive form.
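The reduction in the proof above can be sketched computationally; the representation (a DNF as a list of literal-sets, with '~p' for the negative literal) is ours, not the paper's.

```python
def minimal_standard_form(dnf):
    """Reduce a DNF (list of literal-sets) as in the proof of Proposition 1:
    drop repeated disjuncts (idempotence), delete every disjunct whose
    literal set properly contains another disjunct's (the SF-axiom), and
    fix a canonical order of literals and disjuncts ('standard')."""
    sets = {frozenset(d) for d in dnf}                  # idempotence
    kept = [d for d in sets if not any(e < d for e in sets)]
    return sorted(sorted(d) for d in kept)              # fix an order

# p v (p ^ q) reduces to p, matching the SF-axiom phi v (phi ^ psi) == phi:
assert minimal_standard_form([{'p'}, {'p', 'q'}]) == [['p']]
# Incomparable disjuncts are both kept:
assert minimal_standard_form([{'p'}, {'q', '~r'}]) == [['p'], ['q', '~r']]
```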
Proposition 2 (Normal form for SFA) Every formula ϕ is SFA-provably equivalent to a standard maximal positive disjunctive normal form ϕ pos .
Proof As mentioned, ϕ is AC-provably (and hence SFA-provably) equivalent to a formula ϕ max in maximal disjunctive normal form. Let ϕ 1 , . . . , ϕ r be the minimal disjuncts of ϕ max . Then every disjunct ϕ ′ of ϕ max is of the form ϕ ′ = ϕ i ∧ L (modulo ordering) for an i ≤ r and a (possibly empty) set L of literals occurring in ϕ max . By using replacement (Lemma 2), the SFA-axiom, and idempotence several times, we can SFA-provably replace each ϕ i ∨ ϕ ′ by ϕ i ∨ (ϕ i ∧ At (L)) and thus end up with a formula ϕ * that still is SFA-provably equivalent to ϕ. 36 Clearly, ϕ * satisfies (a), and it also satisfies (b): let ϕ ′ be a disjunct of ϕ * and p ∈ At (ϕ * ). Then ϕ ′ = ϕ i ∧ At (L) for an i ≤ r and a set L of literals occurring in ϕ max , and p occurs in a literal l p of ϕ max (since At (ϕ * ) = At (ϕ max )). Then, by the maximality of ϕ max , ϕ i ∧ (L ∪ {l p }) is (modulo order) a disjunct of ϕ max . By our replacement process, ϕ i ∧ At (L ∪ {l p }) is a disjunct of ϕ * , and modulo order this is ϕ ′ ∧ p, as required. Proposition 3 (Normal form for SCL) Every formula ϕ is SCL-provably equivalent to a maximal literal-contradiction closed disjunctive normal form ϕ lcl .
Proof Given ϕ, form ϕ max (which can be done in AC, which is contained in SCL). Then, for all p with p, ¬p ∈ L(ϕ max ), add the disjunct p ∧ ¬p to ϕ max . Call the result ϕ ′ ; it is still SCL-provably equivalent to ϕ by iterated application of the SCL-axiom (and the transitivity rule of AC). Then, again, form (ϕ ′ ) max , which is the required ϕ lcl . Proposition 4 (Normal form for SCA) Every formula ϕ is SCA-provably equivalent to a maximal atom-contradiction closed disjunctive normal form ϕ acl . 36 To be a bit more precise: say ϕ ′ = ϕ i ∧ p ∧ ¬q 1 ∧ . . . ∧ ¬q m . Then, by maximality, ϕ i ∧ p ∧ ¬q 1 ∧ . . . ∧ ¬q m−1 is a disjunct of ϕ max , too. By the SFA-axiom, SFA proves the corresponding equivalence, so we can replace the formula to the left of ≡, which is modulo order a subformula of ϕ max , by the formula to the right and obtain an SFA-equivalent formula ϕ 1 .
We continue this process with ϕ i ∧ (p ∧ q m ) ∧ ¬q 1 ∧ . . . ∧ ¬q m−1 by using the disjunct ϕ i ∧ (p ∧ q m ) ∧ ¬q 1 ∧ . . . ∧ ¬q m−2 that was in the original ϕ max and still is in ϕ 1 . So we can SFA-provably replace ¬q m−1 by q m−1 and obtain ϕ 2 . We continue until we replaced all the ¬q j 's by q j 's.
And if this replacement process applied to another ϕ = ϕ i ∧ p ∧ ¬r also requires a disjunct ϕ i ∧ p ∧ ¬q 1 ∧ . . . ∧ ¬q k , then we first add a copy of this disjunct to the current ϕ j (which we SFA-provably can do by idempotence) and then use one of them to replace ¬q k with q k .
To not be overly tedious, we omit a fully detailed proof of this fact.
Proof As in Proposition 3, except that we add the disjunct p ∧ ¬p already when p ∈ At (ϕ max ).
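The closure steps of Propositions 3 and 4 can be sketched as follows; again the representation (literal-sets, '~p' for the negative literal) is ours. SCL adds p ∧ ¬p whenever both p and ¬p occur among the literals; SCA adds it already whenever the atom p occurs at all.

```python
def contradiction_close(dnf, mode='SCL'):
    """Add contradiction disjuncts to a DNF (list of literal-sets):
    mode 'SCL' adds {p, ~p} for every p with both p and ~p among the
    literals; mode 'SCA' adds {p, ~p} for every atom p occurring in dnf."""
    lits = set().union(*map(set, dnf)) if dnf else set()
    atoms = {l.lstrip('~') for l in lits}
    if mode == 'SCL':
        targets = {a for a in atoms if a in lits and '~' + a in lits}
    else:  # 'SCA'
        targets = atoms
    return [set(d) for d in dnf] + [{a, '~' + a} for a in sorted(targets)]

assert contradiction_close([{'p'}, {'~p'}], 'SCL') == [{'p'}, {'~p'}, {'p', '~p'}]
assert contradiction_close([{'p', 'q'}], 'SCL') == [{'p', 'q'}]   # no p, ~p pair
assert contradiction_close([{'p', 'q'}], 'SCA') == [{'p', 'q'}, {'p', '~p'}, {'q', '~q'}]
```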
In Corollary 2 below, we prove that all of these normal forms are unique. (In the case of AC this was, as mentioned, already proven by [23].)

Step 2: Characterizing Normal Forms
In this section, we do the second step of the proof: showing that if two normal forms of a synonymy satisfy the two characterizing properties of that synonymy, they are identical (modulo the order and repeats of literals and disjuncts).
Ad (ii). Fix a ϕ i that is not a disjunct of ψ. Consider the disjuncts ϕ * of ϕ such that L(ϕ * ) ⊆ L(ϕ i ) and ϕ * is not a disjunct of ψ. Since there are finitely many, we can pick a ⊆-minimal one, say ϕ * i : that is, L(ϕ * i ) ⊆ L(ϕ i ), ϕ * i is not a disjunct of ψ, and for all j ≤ n, if L(ϕ j ) ⊊ L(ϕ * i ), then L(ϕ j ) ∈ C(ψ), (1) where C(ψ) is the set of the L(ψ 0 )'s for disjuncts ψ 0 of ψ. Since ϕ ⇔ FDE ψ, we have by (i) that there is a ψ k (k ≤ m) such that L(ψ k ) ⊆ L(ϕ * i ). Among the disjuncts of ψ with this property, we can choose a minimal one ψ * k with L(ψ * k ) ⊆ L(ψ k ), that is, no disjunct of ψ is properly contained in it. (2) Since ψ ⇔ FDE ϕ, we have by (i) that there is a ϕ r such that L(ϕ r ) ⊆ L(ψ * k ). Again, there is a minimal disjunct ϕ * r with L(ϕ * r ) ⊆ L(ϕ r ) (so no disjunct of ϕ is properly contained in it), so by (1) we have L(ϕ * r ) ∈ C(ψ). So by (2), L(ϕ * r ) = L(ψ * k ) ⊆ L(ϕ i ).
Proposition 5 (Identity of SF-normal form) Let ϕ and ψ be two sentences in standard minimal disjunctive normal form. Then ϕ ⇔ FDE ψ iff ϕ = ψ. Proof For the non-trivial direction, write ϕ = ϕ 1 ∨ . . . ∨ ϕ r ∨ ϕ ′ 1 ∨ . . . ∨ ϕ ′ s , where ϕ 1 , . . . , ϕ r are exactly those disjuncts of ϕ that are, modulo ordering, also disjuncts of ψ (so the remaining ϕ ′ 1 , . . . , ϕ ′ s aren't disjuncts of ψ). Analogously, write ψ = ψ 1 ∨ . . . ∨ ψ u ∨ ψ ′ 1 ∨ . . . ∨ ψ ′ t : the unprimed disjuncts of ψ occur in ϕ, and the primed ones don't. We claim that primed disjuncts are extensions of unprimed ones, that is, for all j ≤ s, ϕ ′ j = ϕ i ∧ L (modulo ordering) for an i ≤ r and a set of literals L (hence L is a subset of the literals occurring in ϕ). Analogously for ψ.
Indeed, fix a ϕ ′ j . Since ϕ ⇔ FDE ψ and ϕ ′ j is not in ψ, we have by Lemma 3(ii) that there are disjuncts ϕ 0 and ψ * (primed or unprimed) such that L(ϕ 0 ) = L(ψ * ) ⊆ L(ϕ ′ j ). Hence ϕ 0 is in ψ, and ϕ ′ j = ϕ 0 ∧ L for L := L(ϕ ′ j ) \ L(ϕ 0 ), which shows the claim. Now, since ϕ is minimal, no disjunct can be the extension of another one, hence the set of primed disjuncts is empty. The same goes for ψ. Thus, ϕ and ψ really look like this: ϕ = ϕ 1 ∨ . . . ∨ ϕ r and ψ = ψ 1 ∨ . . . ∨ ψ u , and recall that the ϕ i 's also occur as disjuncts in ψ and vice versa. Hence {L(ϕ 1 ), . . . , L(ϕ r )} = {L(ψ 1 ), . . . , L(ψ u )}. Since ϕ and ψ are standard, their order is fixed, so ϕ = ψ, as wanted. Proposition 6 (Identity of AC-normal form) Let ϕ and ψ be two sentences in standard maximal disjunctive normal form. Then L(ϕ) = L(ψ) and ϕ ⇔ FDE ψ iff ϕ = ψ. Proof For the non-trivial direction, write ϕ = ϕ 1 ∨ . . . ∨ ϕ n and ψ = ψ 1 ∨ . . . ∨ ψ m . It suffices to show that C(ϕ) = C(ψ): then, since ϕ and ψ are standard, ϕ = ψ. Assume for contradiction that there is a ϕ i (for i ≤ n) such that L(ϕ i ) ∉ C(ψ) (the other case is analogous). Since ϕ ⇔ FDE ψ, we have by Lemma 3(ii) that there are L(ϕ j ) = L(ψ k ) ⊆ L(ϕ i ). Write L := L(ϕ i ) \ L(ϕ j ), so L(ϕ i ) = L(ϕ j ) ∪ L. Since ϕ and ψ have the same literals, the literals in L also occur in ψ. So, since ϕ j is a disjunct of ψ and ψ is in maximal normal form, L(ϕ j ) ∪ L is a disjunct of ψ, in contradiction to L(ϕ i ) ∉ C(ψ).
Proposition 7 (Identity of SFA-normal form) Let ϕ and ψ be two sentences in standard maximal positive disjunctive normal form. Then At (ϕ) = At (ψ) and ϕ ⇔ FDE ψ iff ϕ = ψ. Proof For the non-trivial direction, write ϕ = ϕ 1 ∨ . . . ∨ ϕ n and ψ = ψ 1 ∨ . . . ∨ ψ m . As in Proposition 6 before, it suffices to show that C(ϕ) = C(ψ). Assume for contradiction that L(ϕ i ) violates this claim, i.e., L(ϕ i ) ∉ C(ψ). Since ϕ is maximal positive we have, by clause (iii)(a) of Definition 4, that ϕ i = ϕ 0 ∧ A for a minimal disjunct ϕ 0 of ϕ and an A ⊆ At (ϕ).
Since ϕ 0 is assumed to be satisfiable, fix a valuation v 0 making it true. We construct v inductively: We perform m steps (corresponding to ψ 1 , . . . , ψ m ) such that at the end of step i we have a valuation v i making ϕ 0 true and ψ 1 ∨ . . . ∨ ψ i false. We can then choose v := v m .
For the remainder of the proof, we abuse notation and write the conjunctive normal form χ for the set L(χ). Thus, we have (where ϕ c 0 denotes the set of all literals not in ϕ 0 ): since ψ i+1 ⊈ ϕ 0 , at least one of the sets ψ i+1 ∩ ψ j ∩ ϕ c 0 (for j ≤ i) is non-empty. Let j 1 , . . . , j r ≤ i be those j for which ψ i+1 ∩ ψ j ∩ ϕ c 0 ≠ ∅, and pick a literal l j from each such intersection. Now, either there is a valuation w making all literals of the set L := {l j 1 , . . . , l j r } false, or there isn't.
Proof Soundness (left to right): it is readily checked that if ϕ ≡ ψ is an axiom of SF, then ϕ ⇔ FDE ψ. Moreover, it is also readily checked that if ϕ ′ ≡ ψ ′ is the result of applying one of the rules to ϕ ≡ ψ, and if ϕ ⇔ FDE ψ, then also ϕ ′ ⇔ FDE ψ ′ .
Proof Let's start with the left-to-right direction. We show that AC ⊢ ϕ ≡ ψ implies ϕ ⇔ FDE ψ by contraposition (though it could also be shown directly by induction): if ϕ ⇔ FDE ψ fails, then, by Theorem 1, SF ⊬ ϕ ≡ ψ, so in particular AC ⊬ ϕ ≡ ψ (since SF is an extension of AC). And AC ⊢ ϕ ≡ ψ implies L(ϕ max ) = L(ψ max ) because of the following: if AC ⊢ ϕ ≡ ψ, then AC ⊢ ϕ max ≡ ϕ ≡ ψ ≡ ψ max . So ϕ max and ψ max are two sentences in standard maximal normal form that are AC-equivalent to ϕ. We know that a sentence's standard maximal normal form is unique in AC. (A purely syntactic proof of this fact was given in [2], and [23] gave a semantic proof using his truthmaker semantics.) Hence ϕ max = ψ max and, in particular, L(ϕ max ) = L(ψ max ). (Of course, this could also be shown directly by induction on AC-proofs.) For the other direction, assume L(ϕ max ) = L(ψ max ) and ϕ ⇔ FDE ψ. We have AC ⊢ ϕ max ≡ ϕ and AC ⊢ ψ max ≡ ψ. Moreover, we've seen in the left-to-right direction that AC-equivalence entails FDE-equivalence, so we have ϕ max ⇔ FDE ϕ ⇔ FDE ψ ⇔ FDE ψ max . Hence, applying Proposition 6 to ϕ max and ψ max , we get that ϕ max = ψ max , whence AC indeed proves ϕ ≡ ϕ max ≡ ψ max ≡ ψ. Proof (Sketch) (i)⇒(ii) is the soundness theorem of [23]. (ii)⇒(iii) is trivial. So it remains to show (iii)⇒(i). Indeed, assume [ϕ] C = [ψ] C . Since ϕ is AC-provably equivalent to its standard maximal disjunctive form ϕ max , and ψ to ψ max , we have, by the just mentioned soundness, that [ϕ max ] C = [ψ max ] C .
Proof The left-to-right direction is immediate by induction on SFA-proofs: for SFA-axioms ϕ ≡ ψ we have that At (ϕ) = At (ψ) and ϕ ⇔ FDE ψ, and these two properties are preserved by the SFA-rules. For the other direction, assume At (ϕ) = At (ψ) and ϕ ⇔ FDE ψ. By Proposition 2, there are ϕ pos and ψ pos in standard maximal positive disjunctive form such that SFA ⊢ ϕ pos ≡ ϕ and SFA ⊢ ψ pos ≡ ψ. Moreover, we've seen in the left-to-right direction that SFA-equivalence entails having the same atoms and FDE-equivalence, so we have At (ϕ pos ) = At (ϕ) = At (ψ) = At (ψ pos ) and ϕ pos ⇔ FDE ϕ ⇔ FDE ψ ⇔ FDE ψ pos .
Proof The left-to-right direction is shown by induction on SCL-proofs: that SCL ⊢ ϕ ≡ ψ implies ϕ ⇔ C ψ is immediate, so let's consider the subject matter condition.
Corollary 2 (Uniqueness of normal form) Let C be one of the systems AC, SFA, SCL, SCA, or SF. Then every sentence ϕ has a unique standard disjunctive normal form ϕ C with the properties corresponding to the system C (e.g. maximal, maximal positive, etc.).
Proof Assume ϕ C and ϕ ′ C are standard normal forms of ϕ in the system C. Then C ⊢ ϕ C ≡ ϕ ′ C . Apply the C-soundness theorem and then the C-characterization lemma to get ϕ C = ϕ ′ C .