1 Introduction

In a series of recent works, Fine [7, 8] has sketched a novel solution to Frege’s puzzle. The novelty is itself surprising and exciting given the minimal resources with which the puzzle is posed. Frege [9] claimed that sentences differing by the substitution of coreferential proper names such as (1) and (2) differ semantically, arguing that (2) expresses a valuable extension of our knowledge, while (1) doesn’t.

  (1) Cicero is Cicero.

  (2) Cicero is Tully.

The difference shows up in more complex sentences as well: attitude ascriptions differing by the substitution of one name for the other may even have different truth-values.

  (3) Sam believes that Cicero is Cicero.

  (4) Sam believes that Cicero is Tully.

Assuming that the meaning of a name is its referent, this would entail that the members of each pair semantically differ even though they differ only by the substitution of synonymous terms. This conflicts with the principle that meaning is compositional. Focussing on the simple sentences, the conflict can be brought out as a tension between the following claims:

  • difference: (1) and (2) differ semantically.

  • compositionality: If sentences (1) and (2) differ only by the substitution of constituents which are semantically the same, then (1) and (2) are semantically the same.

  • minimal pair: Sentences (1) and (2) differ only by the substitution of ‘Cicero’ for ‘Tully’—all other inputs to semantic evaluation coincide.

  • synonymy: ‘Cicero’ and ‘Tully’ are semantically the same.

The typical menu of solutions either outright rejects difference (e.g., [24, 28]) or posits that the meaning of a name goes beyond its referent, thereby rejecting synonymy (e.g., [2, 9]). In the case of attitude ascriptions, it is also popular to reject that (3) and (4) are a minimal pair (e.g. [3] and [27]).

Radically departing from previous solutions, Fine argues that the culprit is compositionality.

Current philosophical thinking on Frege’s puzzles has reached an impasse, with strong theoretical arguments in favor of [difference] and strong intuitive arguments in favor of [synonymy] and yet no apparent way to choose between them. And this suggests that we should perhaps take more seriously the possibility of rejecting the assumption of [compositionality] that puts them in conflict. ([8]: 35)

compositionality is a deep-seated assumption and guiding principle in semantic theorizing, but Fine argues that its rejection is mandatory on account of a puzzle paralleling Frege’s, which Fine dubs the antinomy of the variable. The puzzle is that open sentences (5) and (6) differ semantically, even though—Fine alleges—‘x’ and ‘y’ are semantically the same.

  (5) R x x

  (6) R x y

Specifically, (5) and (6) embed differently: ∃x∃y R x x may be false while ∃x∃y R x y is true. The puzzle is then structurally similar to our original puzzle.

  • difference: (5) and (6) differ semantically.

  • compositionality: If formulae (5) and (6) differ only by the substitution of constituents which are semantically the same, then (5) and (6) are semantically the same.

  • minimal pair: Formulae (5) and (6) differ only by the substitution of ‘x’ for ‘y’—all other inputs to semantic evaluation coincide.

  • synonymy: ‘x’ and ‘y’ are semantically the same.

The synonymy claim is motivated by Fine’s suggestion that the difference between ‘x’ and ‘y’ is merely “notational”, and thus their semantic roles should be the same: “It is not as if the variables ‘x’ and ‘y’ have a special ‘x’-sense or ‘y’-sense” ([8]: 38).

According to Fine, the only reasonable solution to this parallel puzzle is to deny compositionality.

We must allow that any two variables will be semantically the same, even though pairs of identical and of distinct variables are semantically different; and we should, in general, be open to the possibility that the meaning of the expressions of a language is to be given in terms of their semantic relationships to one another. ([8]: 24)

Fine roughly outlines a non-compositional semantics for first-order logic incorporating the following principle.

  • relationism: The truth conditions of a sentence are not determined by the semantic features of its constituents in isolation, but are instead determined by the semantic relationships that hold among the sequence of its constituents as a whole.

The relationist semantics for variables is meant to create space for a similar rejection of compositionality in the case of proper names. Fine’s proposed solution has taken its place among the rival solutions to Frege’s puzzle. Proponents and critics have focussed on the extension of the relationist semantics to names (see, e.g. [5, 25, 29], and [20]).

But we are skeptical about the underlying semantics for variables. Even granting that the antinomy is a real puzzle—that ‘x’ and ‘y’ agree semantically—we deny that Fine’s approach, on its own terms, mandates the rejection of compositionality for variables. Specifically, the relationist semantics on its own is inadequate. In order to repair it, Fine introduces additional complications: he enriches the input to the semantics.

  • enriched representation: The input to semantics must be enriched with patterns of coordination between occurrences of variables. We can think of these patterns of coordination “as lines connecting one occurrence of the variable to another—as in the familiar ‘telegraphic’ notation for quantifier binding” ([8]: 30).

The incorporation of an additional input, a coordination scheme, is often taken to be a critical component of Fine’s non-compositional, relationist semantics. We will argue, however, that relationism and enriched representation are actually in conflict: positing the latter makes the former unnecessary. Once the input to semantics is enriched, R x x and R x y are no longer a minimal pair differing only by the substitution of the variables ‘x’ and ‘y’. Therefore there is no threat to compositionality.

In the following, we first provide an explicit formalization of the relational semantics for first-order logic suggested, but only briefly sketched, in ([7]: 623-629; [8]: 25-32). We then show why the relational semantics alone is technically inadequate, forcing Fine to enrich the syntax with a coordination scheme. Given this enrichment, we argue that the semantics is (weakly) compositional. We then examine the deep consequences of this result for Fine’s proposed solution to Frege’s puzzle. Fine’s solution to the puzzle can only be properly assessed when we appreciate which of the inconsistent claims he rejects. Specifically, in the case of Frege’s puzzle, Fine also relies on coordination schemata as extra inputs to the semantics. Thus, Fine’s solution is not to deny compositionality but instead to deny that (1) and (2) are minimal pairs differing only by the substitution of co-referential proper names. Therefore, Fine has misdiagnosed his own solution. The correct characterization of Fine’s solution fits him more comfortably among familiar solutions to the puzzle.

2 Relationism

Fine’s relationism denies compositionality. Expressions α and β may agree semantically, though composite expressions ϕ α and ϕ β differing only by the substitution of an occurrence of α for an occurrence of β semantically differ. The relationship between α and the other components of ϕ α may generate semantic effects different from the relationship between β and the other components of ϕ β . To implement this idea, Fine proposes to evaluate a composite expression in terms of the semantic connection on the sequence of expressions composing it. The semantic connection on a sequence of expressions is the set of sequences of values that those expressions are capable of jointly assuming (in that order). The sequence corresponding to ϕ α may contain multiple occurrences of α, which are semantically mandated to assume the same value. The sequence which results from substituting an occurrence of α for β may have a different semantic connection, since an occurrence of α may not be required to assume the same value as an occurrence of β.

To handle the semantics of first-order logic, Fine considers the semantic connection on a sequence of variables. This generalizes the notion of a domain for a variable. The domain D of a single variable is the set of values that it can assume. The “domain” of a pair of variables 〈x,y〉 is the set of sequences of values that the variables are simultaneously capable of assuming: \(\{\langle d_{1},d_{2}\rangle \mid d_{1},d_{2}\in D\}\). The idea generalizes to any sequence of simple expressions: the “domain” of a sequence of simple expressions is the set of sequences of values that the expressions are simultaneously capable of assuming. Defining the semantic connection on any sequence of simple expressions of the language provides the “lexical” or base semantics in terms of which the semantic connections on complex expressions are defined. The semantic connection on any complex expression is defined in terms of the semantic connection on its syntactic constituents when taken in sequence.

The idea is to define what values (true or false) the sentence ‘x + y = y + x’ may assume in terms of what values the expressions composing the sequence 〈x,+,y,=,y,+,x〉 may assume when taken in that sequence. The sequence 〈x,+,y,=,y,+,x〉 may assume the value 〈7,+,3,=,3,+,7〉, but not the value 〈7,+,3,=,2,+,2〉. Given definitions of ‘+’ and ‘=’, the semantic connection on its simple constituents would yield that ‘x + y = y + x’ can only assume the value truth. In this way one can recursively define the truth (and falsity) conditions of sentences in terms of the generalized notion of a “domain” (i.e. in terms of semantic connections).
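The “domain” of a sequence of variables can be computed directly for a finite domain. The following sketch is our own illustration rather than anything Fine provides (the function name connection_on_variables and the toy domain are ours): it enumerates the set of value-sequences that a sequence of variables can jointly assume, requiring only that occurrences of the same variable receive the same value.

    from itertools import product

    def connection_on_variables(variables, domain):
        """The set of value-sequences that the variables can jointly assume:
        positions occupied by the same variable must receive the same value."""
        n = len(variables)
        return {values for values in product(domain, repeat=n)
                if all(values[i] == values[j]
                       for i in range(n) for j in range(n)
                       if variables[i] == variables[j])}

    D = {1, 2}
    print(connection_on_variables(('x',), D))      # {(1,), (2,)}
    print(connection_on_variables(('y',), D))      # {(1,), (2,)}, the same as for 'x'
    print(connection_on_variables(('x', 'y'), D))  # {(1, 1), (1, 2), (2, 1), (2, 2)}
    print(connection_on_variables(('x', 'x'), D))  # {(1, 1), (2, 2)}, the "diagonal"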

Notice that the semantic connections on the pairs 〈x,y〉 and 〈x,x〉 are distinct—\(\{\langle d_{1},d_{2}\rangle \mid d_{1},d_{2}\in D\}\) and \(\{\langle d,d\rangle \mid d\in D\}\), respectively—even though the semantic connections on ‘x’ and ‘y’ themselves are the same. Thus the semantic connections on the pairs of variables are not grounded in the semantic connections on their members. Fine claims that this rejection of compositionality is the key to solving the antinomy:

[The relational semantics] embodies a solution to the antinomy: the intrinsic semantic features of x and y (as given by the degenerate semantic connections on those variables) are the same, though the intrinsic semantic features of the pairs x,y and x,x (again, as given by the semantic connections on those pairs) are different. ([8]: 31-2)

In the next section, we formally implement Fine’s relationist idea that the semantic features of a composite expression should be evaluated in terms of semantic connections on their component expressions taken in sequence.

2.1 The Semantics

Assume the syntax is standard. For any sequence of variables \(\alpha _{1},\dots ,\alpha _{n}\) drawn from \(\{x_{i}\}_{i\in \mathbb {N}}\) and any n-place predicate \(\pi ^{n} \in \{{F^{n}_{i}}\}_{i,n\in \mathbb {N}}\), the well-formed sentences of the language are provided by the following grammar:

$$\phi ::= \pi^{n}\alpha_{1}\dots\alpha_{n} \mid \neg \phi \mid (\phi \wedge \phi) \mid \forall\alpha \phi $$

As usual, the other connectives and quantifiers, such as \(\rightarrow ,\vee \), ∃, etc., may be introduced as abbreviations, if desired.

Let a model be a pair \(\mathfrak {A} = \langle D, I\rangle \), where D is a (non-empty) domain of individuals, and I is an interpretation for the predicates, which assigns sets of ordered n-tuples of individuals to the n-place predicates. And let {0,1} be the set of truth-values. The semantics will not require relativization to an assignment of values to variables (in the style of [30]), since the semantic connection on a formula is defined in part in terms of the semantic connection on its variables taken in sequence, and the base clause of the semantics specifies the semantic connection on any sequence of variables. In order to streamline the exposition we’ll introduce a convention concerning concatenation of sequences.

Definition

For sequences \(\sigma = \langle a_{1},\dots , a_{i}\rangle \) and \(\tau = \langle b_{1},\dots , b_{j}\rangle \) such that i,j≥1, let \((\sigma , \tau ) = \langle a_{1},\dots ,a_{i}, b_{1},\dots , b_{j}\rangle \).

This operation requires a few typographical remarks.

  • Remark 1 We omit brackets on 1-membered sequences, so if a is not a sequence, we abbreviate (σ,a) = (σ,〈a〉).

  • Remark 2 The operation (…,…) is associative, so we abbreviate (σ,τ,υ) = (σ,(τ,υ)) = ((σ,τ),υ).

  • Remark 3 When occurring inside denotation brackets we will omit the round brackets, so, for example, \([{\kern -2.3pt}[\sigma , \tau ]{\kern -2.3pt}] = [{\kern -2.3pt}[(\sigma , \tau )]{\kern -2.3pt}]\).

As we have said, Fine semantically evaluates an expression in terms of the semantic connection, \([{\kern -2.3pt}[ . ]{\kern -2.3pt}]\), on its constituents taken in sequence, where the semantic connection is the set of sequences of values they are capable of jointly assuming. The complex expression is said to “give way to” the sequence of its constituent expressions, whose semantic connection determines the possible values of the complex expression. In order to recursively implement this idea, one must define the contribution of an expression χ of arbitrary complexity to the semantic connection on a sequence (Σ,χ,Υ) that contains χ. A formula will be true (relative to a model \(\mathfrak {A}\)) when the sequence consisting of just that formula can assume only the value 1 (and false when it can only assume the value 0). That is, for all formulae ϕ and models \(\mathfrak {A}\):

  • TRUTH: ϕ is true (in \(\mathfrak {A}\)) iff \([{\kern -2.3pt}[ \phi ]{\kern -2.3pt}] = \{1\}\)

  • FALSITY: ϕ is false (in \(\mathfrak {A}\)) iff \([{\kern -2.3pt}[ \phi ]{\kern -2.3pt}] = \{0\}\)

With these definitions in place we provide the following recursive specification of the semantic connection on a sequence including a formula (relative to a model \(\mathfrak {A}\)) in terms of the semantic connection on the sequence of expressions it gives way to. The base clause specifies the semantic connection on any sequence of variables:

  • VARIABLES: \([{\kern -2.3pt}[ \alpha _{1},\dots ,\alpha _{n}]{\kern -2.3pt}] = \left \{ \langle d_{1},\dots ,d_{n}\rangle \in D^{n} \mid d_{i} = d_{j} ~\text {if}~ \alpha _{i}=\alpha _{j} \right \} \)

Since formulae can assume only the values 0 and 1 (and open formulae can assume both values), the semantic connection on a sequence containing a formula is specified by the following clauses:

  • ATOMIC:

    • \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \pi \alpha _{1}\dots \alpha _{k}, {\Upsilon }]{\kern -2.3pt}]\) iff for some τ such that \((\sigma ,\tau ,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \alpha _{1},\dots ,\alpha _{k}, {\Upsilon }]{\kern -2.3pt}]\), \(\tau \in I(\pi )\)

    • \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \pi \alpha _{1}\dots \alpha _{k}, {\Upsilon }]{\kern -2.3pt}]\) iff for some τ such that \((\sigma ,\tau ,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \alpha _{1},\dots ,\alpha _{k}, {\Upsilon }]{\kern -2.3pt}]\), \(\tau \notin I(\pi )\)

  • NEGATION:

    • \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \neg \phi , {\Upsilon }]{\kern -2.3pt}]\) iff \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \phi , {\Upsilon }]{\kern -2.3pt}]\)

    • \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \neg \phi , {\Upsilon }]{\kern -2.3pt}]\) iff \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \phi , {\Upsilon }]{\kern -2.3pt}]\)

  • CONJUNCTION:

    • \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, (\phi \wedge \psi ),{\Upsilon }]{\kern -2.3pt}]\) iff \((\sigma ,1,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \phi , \psi , {\Upsilon }]{\kern -2.3pt}]\)

    • \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, (\phi \wedge \psi ),{\Upsilon }]{\kern -2.3pt}]\) iff \((\sigma ,m,n,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \phi , \psi , {\Upsilon }]{\kern -2.3pt}]\), where m=0 or n=0

  • QUANTIFICATION:

    • \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma },\forall \alpha \phi ,{\Upsilon }]{\kern -2.3pt}]\) iff \( (\sigma , d,1, \upsilon )\in [{\kern -2.3pt}[{\Sigma }, \alpha ,\phi , {\Upsilon }]{\kern -2.3pt}], \text { for all } d\in D\)

    • \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[{\Sigma },\forall \alpha \phi ,{\Upsilon }]{\kern -2.3pt}]\) iff \( (\sigma , d,0, \upsilon )\in [{\kern -2.3pt}[{\Sigma }, \alpha ,\phi , {\Upsilon }]{\kern -2.3pt}], \text { for some } d\in D\)

Thus we’ve defined the semantic connection on any formula in terms of the semantic connection on its syntactic constituents when taken in sequence (relative to a model \(\mathfrak {A}\)). For example, a formula such as ∀x(F xG x) can be semantically evaluated as follows.

$$\begin{array}{@{}rcl@{}} 1 \in [{\kern-2.3pt}[\forall x (Fx \wedge Gx)]{\kern-2.3pt}] \;\; && \text{iff }\;\; (d,1)\in[{\kern-2.3pt}[x,(Fx \wedge Gx) ]{\kern-2.3pt}], \text{ for all } d\in D\\ &&\text{iff }\;\; (d,1,1) \in [{\kern-2.3pt}[x,Fx,Gx]{\kern-2.3pt}], \text{ for all } d\in D\\ && \text{iff }\;\; \text{ for some } a \text{ such that } (d,a,1) \in [{\kern-2.3pt}[x,x,Gx]{\kern-2.3pt}],\\ &&{\kern20pt} a \in I(F), \text{ for all } d\in D\\ && \text{iff }\;\; \text{ for some } a \text{ and some } b \text{ such that } (d,a,b) \in [{\kern-2.3pt}[ x,x,x]{\kern-2.3pt}],\\ &&{\kern20pt} a \in I(F) \text{ and } b \in I(G),\text{ for all } d\in D\\ && \text{iff }\;\; \text{ for all } d\in D, d \in I(F) \text{ and } d \in I(G) (\text{ since } [{\kern-2.3pt}[ x,x,x]{\kern-2.3pt}]\\ &&{\kern20pt} = \{(e, e, e)\mid e \in D\}) \end{array} $$

A similar derivation would show that \(0 \in [{\kern -2.3pt}[\forall x(Fx \wedge Gx)]{\kern -2.3pt}]\) just in case for some \(d\in D\) either \(d\notin I(F)\) or \(d\notin I(G)\). It can easily be proved that relative to a model, \([{\kern -2.3pt}[\forall x (Fx \wedge Gx)]{\kern -2.3pt}]\) = {1} or \([{\kern -2.3pt}[\forall x (Fx \wedge Gx)]{\kern -2.3pt}]\) = {0}. Thus, the formula is either true or false, as will be any closed formula.
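For readers who want to see the recursion run, here is a minimal computational sketch of the clauses above for a finite model. It is our own illustration, not Fine’s: the encoding of formulae as nested tuples and the function name connection are ours, and we include a clause for ∃ dual to the one for ∀ (officially ∃ is an abbreviation), since the derivations in the next section use such steps directly. The function computes the semantic connection on a sequence whose members are variables or formulae by letting the first complex member “give way to” its constituents.

    from itertools import product

    # Formulae as nested tuples (our encoding):
    #   ('atom', 'F', ('x',)), ('not', p), ('and', p, q),
    #   ('forall', 'x', p), ('exists', 'x', p)
    def connection(seq, D, I):
        """Semantic connection on a sequence of variables (strings) and formulae
        (tuples): the set of value-sequences they can jointly assume, with
        individuals at variable positions and 0/1 at formula positions."""
        seq, n = list(seq), len(seq)
        i = next((k for k, e in enumerate(seq) if isinstance(e, tuple)), None)
        if i is None:                        # VARIABLES: same variable, same value
            return {vs for vs in product(D, repeat=n)
                    if all(vs[a] == vs[b] for a in range(n) for b in range(n)
                           if seq[a] == seq[b])}
        head = seq[i]
        if head[0] == 'atom':                # ATOMIC: the arguments give way to a bit
            _, pred, args = head
            k = len(args)
            sub = connection(seq[:i] + list(args) + seq[i+1:], D, I)
            return {t[:i] + (int(t[i:i+k] in I[pred]),) + t[i+k:] for t in sub}
        if head[0] == 'not':                 # NEGATION: flip the bit
            sub = connection(seq[:i] + [head[1]] + seq[i+1:], D, I)
            return {t[:i] + (1 - t[i],) + t[i+1:] for t in sub}
        if head[0] == 'and':                 # CONJUNCTION: 1 iff both conjuncts get 1
            sub = connection(seq[:i] + [head[1], head[2]] + seq[i+1:], D, I)
            return {t[:i] + (min(t[i], t[i+1]),) + t[i+2:] for t in sub}
        if head[0] in ('forall', 'exists'):  # QUANTIFICATION and its dual
            sub = connection(seq[:i] + [head[1], head[2]] + seq[i+1:], D, I)
            allval, someval = (1, 0) if head[0] == 'forall' else (0, 1)
            out = set()
            for t in sub:
                if all(t[:i] + (d, allval) + t[i+2:] in sub for d in D):
                    out.add(t[:i] + (allval,) + t[i+2:])   # the 'for all d' clause
                if t[i+1] == someval:
                    out.add(t[:i] + (someval,) + t[i+2:])  # the 'for some d' clause
            return out
        raise ValueError(head)

    # Rerunning the worked example: with I(F) a proper subset of I(G) = D the
    # formula is false; once everything is F, it is true.
    D = {1, 2, 3}
    I = {'F': {(1,), (2,)}, 'G': {(1,), (2,), (3,)}}
    Fx, Gx = ('atom', 'F', ('x',)), ('atom', 'G', ('x',))
    phi = ('forall', 'x', ('and', Fx, Gx))
    print(connection([phi], D, I))    # {(0,)}: false, since 3 is G but not F
    I['F'] = I['G']
    print(connection([phi], D, I))    # {(1,)}: true, everything is both F and G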

2.2 Relationism and Compositionality

The semantics thus presented seems to deliver the results Fine desires. Excepting an issue to be discussed in the next section, the semantics appears to deliver the right truth conditions for formulae of first-order logic. Moreover, the semantic interpretation of a variable will be the semantic connection on the variable itself, thus \([{\kern -2.3pt}[ x ]{\kern -2.3pt}]\) = \([{\kern -2.3pt}[ y ]{\kern -2.3pt}]\). Yet, the semantic connection on R x y and R x x may differ, since \([{\kern -2.3pt}[ x,y ]{\kern -2.3pt}] \neq [{\kern -2.3pt}[ x,x ]{\kern -2.3pt}]\).
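Using the connection sketch from Section 2.1 (our illustrative evaluator; the choice of an irreflexive interpretation for R is arbitrary), the point can be checked directly:

    D = {1, 2}
    I = {'R': {(1, 2), (2, 1)}}            # an irreflexive relation on D
    print(connection(['x'], D, I) == connection(['y'], D, I))   # True
    print(connection([('atom', 'R', ('x', 'x'))], D, I))        # {(0,)}
    print(connection([('atom', 'R', ('x', 'y'))], D, I))        # {(0,), (1,)}

The connections on ‘x’ and ‘y’ coincide, yet substituting ‘y’ for one occurrence of ‘x’ in R x x changes the connection on the formula, even though only semantically identical constituents were swapped.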

As Fine ([8]: 26) notes, the semantic value of a composite expression is not a function of the semantic values of its constituents (either immediate or terminal) and the mode of combination of these values. Therefore, his semantics denies compositionality in both of its usual formulations, as strong and as weak compositionality. Let μ be the syntactic mode of combination. Then, strong and weak compositionality can be formulated as follows (cf. the definitions in [16]):

  • strong compositionality: If ϕ = μ(ϕ 1,…,ϕ n ) and ψ = μ(ψ 1,…,ψ n ) and if ϕ i and ψ i are semantically the same for all i (that is, \([{\kern -2.3pt}[\phi _{i}]{\kern -2.3pt}]\) = \([{\kern -2.3pt}[\psi _{i}]{\kern -2.3pt}]\)), then ϕ and ψ are semantically the same.

  • weak compositionality: If ϕ = μ(ϕ 1,…,ϕ n ) and ψ = μ(ψ 1,…,ψ n ) where each ϕ i and ψ i is a terminal syntactic constituent and if ϕ i and ψ i are semantically the same for all i (that is, \([{\kern -2.3pt}[\phi _{i}]{\kern -2.3pt}]\) = \([{\kern -2.3pt}[\psi _{i}]{\kern -2.3pt}]\)), then ϕ and ψ are semantically the same.

The relational semantics above is not even weakly compositional, because complex expressions may differ semantically even though they differ only by substituting terminal syntactic constituents which are semantically the same. In fact, Fine only secures a significantly weaker principle.

  • relational dependence: If ϕ = μ(ϕ 1,…,ϕ n ) and ψ = μ(ψ 1,…,ψ m ) and if the sequences of expressions ϕ 1,…,ϕ n and ψ 1,…,ψ m are semantically the same (that is, \([{\kern -2.3pt}[ \phi _{1},\ldots , \phi _{n} ]{\kern -2.3pt}] = [{\kern -2.3pt}[ \psi _{1},\ldots ,\psi _{m}]{\kern -2.3pt}]\)), then ϕ and ψ are semantically the same.

The semantic evaluation of a complex expression will thus be a function of the semantic values of its constituent expressions plus relations among those expressions themselves, rather than merely their values.

In semantics the principle of compositionality is often taken as both a guiding methodological hypothesis and a theoretical posit (see [17]). The explanatory motivation stems from considerations of semantic productivity. A language user can semantically evaluate infinitely many novel complex expressions, because the meanings of the complex expressions are a function of the semantic values of their constituents. It is an open question whether the weaker relationist principle can fill the explanatory role of the compositionality principles. Fine can explain why a language user can understand a complex expression in terms of that language user’s knowledge of the semantic connection on the sequence of simpler expressions that compose the complex expression. But the language user’s knowledge of this semantic connection does not arise from her understanding of the semantic features of the simple expressions in the sequence. Explanation comes to an end at a language user’s knowledge of a semantic connection on a sequence of expressions.

3 Enriched Representation

Relationism alone introduces semantic ills which are far worse than the antinomy itself. In particular, it mandates that every occurrence of the same variable type co-varies. Fine mentions that free occurrences of a variable will be forced to assume the same values as bound occurrences of the same variable. Consider the sentence ∃x F x∧G x. In this sentence, the quantifier ‘ ∃x’ has scope over the formula F x but not over G x. Thus, ‘x’ as it occurs in G x is free. Yet, on the semantics developed so far, the free occurrence is coordinated with the bound occurrence. Specifically, ∃x F x∧G x is true just in case for some \((d,d,d) \in [{\kern -2.3pt}[ x,x,x]{\kern -2.3pt}]\), \(d \in I(F)\) and \(d \in I(G)\). Thus, the sentence will be true just in case there exists something which is both F and G. As a result, the sentence has the same truth conditions as ∃x(F x∧G x). As Fine ([8]: 31) observes, this is the familiar “dynamic” reading of the sentence ∃x F x∧G x—for example, in dynamic predicate logic, where an existential quantifier can have binding effects beyond its syntactic scope [10]. As such, this might not be viewed as a disaster, but rather, as Fine suggests, as “a great virtue of the approach” (ibid.: 31).

But more troubling results follow. For instance, it is far less plausible to say that ∀x F xG x ever has the same truth conditions as ∀x(F xG x). The most telling difficulty, not discussed by Fine, is that multiple occurrences of a variable bound by distinct quantifiers in a formula such as ∃x F x∧∃x¬F x will be forced to assume the same values. Normally, one would want ∃x F x∧∃x¬F x to be true just in case there is an entity which is F and a (distinct) entity which is not F. But on the semantics so far, ∃x F x∧∃x¬F x will have the same truth conditions as the logical falsehood ∃x(F x∧¬F x), which says that there exists a single entity which is both F and not F. The derivation runs as follows:

$$\begin{array}{@{}rcl@{}} 1 \in [{\kern-2.3pt}[ \exists x Fx \wedge \exists x \neg Fx]{\kern-2.3pt}] \;\; && \text{iff }\;\; (1,1)\in[{\kern-2.3pt}[ \exists x Fx, \exists x \neg Fx ]{\kern-2.3pt}] \\ && \text{iff }\;\; (d_{1},1,1) \in [{\kern-2.3pt}[ x,Fx, \exists x \neg Fx]{\kern-2.3pt}], \text{ for some } d_{1}\in D\\ &&\text{iff }\;\; (d_{1}, d_{2},1) \in [{\kern-2.3pt}[ x,x,\exists x \neg Fx]{\kern-2.3pt}], \;\;d_{2} \in I(F),\\ &&{\kern15pt} \text{ for some } d_{1},d_{2} \in D\\ && \text{iff }\;\; (d_{1}, d_{2}, d_{3}, 1) \in [{\kern-2.3pt}[ x,x, x, \neg Fx]{\kern-2.3pt}], \;\;d_{2} \in I(F),\\ &&{\kern15pt} \text{ for some } d_{1},d_{2}, d_{3} \in D\\ && \text{iff }\;\; (d_{1}, d_{2}, d_{3}, 0) \in [{\kern-2.3pt}[ x,x, x, Fx]{\kern-2.3pt}], \;\;d_{2} \in I(F),\\ &&{\kern15pt} \text{ for some } d_{1},d_{2}, d_{3} \in D\\ && \text{iff }\;\; (d_{1}, d_{2}, d_{3}, d_{4}) \in [{\kern-2.3pt}[ x,x, x, x]{\kern-2.3pt}], \;\;d_{2} \in I(F) \text{ and }\\ &&{\kern18pt} d_{4}\notin I(F), \text{ for some } d_{1},d_{2}, d_{3}, d_{4} \in D\\ && \text{iff }\;\; \text{ for some } d\in D, d \in I(F) \text{ and } d \notin I(F)\\ &&{\kern18pt} (\text{ since } [{\kern-2.3pt}[ x,x,x,x]{\kern-2.3pt}] = \{(e, e, e,e)\mid e \in D\}). \end{array} $$

Thus, the formula (∃x F x∧∃x¬F x) cannot assume the value true.
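The same result can be confirmed with the evaluator sketched in Section 2.1 (again our own illustration, using the primitive ‘exists’ clause included there): in a model where something is F and something is not, the value 1 never appears in the connection on the formula.

    D = {1, 2}
    I = {'F': {(1,)}}                      # 1 is F, 2 is not
    Fx = ('atom', 'F', ('x',))
    phi = ('and', ('exists', 'x', Fx), ('exists', 'x', ('not', Fx)))
    # On the standard Tarskian semantics phi is true in this model; here it
    # cannot assume the value 1, since all four occurrences of 'x' co-vary.
    print((1,) in connection([phi], D, I))   # False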

There is no way to count this result as a virtue. Fine needs to be able to differentiate the contributions of distinct occurrences of the same variable. To do so, Fine introduces an additional semantic input: a coordination relation among variables, which one could represent with linking “wires” as follows:

[Figure: the formula ∃x F x ∧ ∃x ¬F x with linking wires connecting the first and second occurrences of ‘x’ and the third and fourth occurrences of ‘x’.]

The coordination relations restrict which occurrences of the same variable type must be co-interpreted in the semantic connection.

The change here is monumental. The relational semantics, as developed in the previous section, effectively reads the coordination off of the “logical structure” of the sentence. A variable’s two-fold occurrence in a sequence dictated that the corresponding positions in the semantic connection on the sequence take coordinated values. This allowed for the possibility that a minimal pair such as R x x and R x y, differing only in the substitution of an occurrence of ‘x’ for ‘y’, may differ semantically. The substitution of ‘x’ for ‘y’ does not preserve the overall logical structure of the sentence.

However, the suggestion now is that we impose a coordination scheme as an extra input to semantic evaluation. The immediate result is that R x x and R x y do not—in themselves—differ semantically. A difference only arises when we evaluate R x x in conjunction with a coordination scheme c which associates the two occurrences of ‘x’. Fine ([7]: 628) conceives of the coordination scheme as syntactic in nature, not a semantic parameter: “the syntactic object of evaluation will no longer be a sequence of expressions but a coordinated sequence of expressions”. Thus, the syntactic input to semantic evaluation is enriched. Our implementation in the semantics below, however, will be neutral as to whether the coordination scheme is an addition to the syntactic structure (or more generally the linguistic representation), or a semantic parameter against which the sentence is processed. (This is partially with an eye towards Frege’s puzzle, where it is less plausible to introduce a syntactic coordination relation among distinct occurrences of the same name.)

3.1 Coordination

Since we are no longer semantically evaluating a formula or sequence of expressions alone, but only in conjunction with a coordination schema, we need to supplement the semantics. Again, one might take this coordination schema as an additional syntactic input, a syntactic relation among occurrences of a given expression. Or, one might take the coordination schema as a semantic input against which a formula is processed. Our approach will leave the coordination schema unanalyzed; we are only investigating the logical properties it must have. To do so, we represent the semantic connection on a sequence of expressions \(\alpha _{1},\dots , \alpha _{n}\) relative to a coordination scheme c as \([{\kern -2.3pt}[ \alpha _{1},\dots , \alpha _{n}]{\kern -2.3pt}]^{c}\). The coordination scheme c itself is meant to determine a relation among occurrences of expressions in a formula or sequence of formulas.

Formally, a coordination scheme is an equivalence relation on the free occurrences of variables in the sequence, subject to the requirement that it only relate occurrences of the same variable. ([8]: 30)

The occurrences of an expression in a sequence can be associated with their numerical positions in the sequence. So formally, an equivalence relation on occurrences in an n-membered sequence α 1,…,α n of expressions can be modeled as an equivalence relation c on the numbers in {1,...,n}. For the purposes of the semantics of variables, Fine himself imposes the additional requirement that c(i,j) holds only if α i = α j . The motivation seems to be that it should be impossible for ‘ ∃y’ to bind the occurrence of ‘x’ in ∃y F x, though once coordination is introduced, we don’t see a principled reason why this binding should be prohibited.
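Concretely, and merely as our own illustration, a coordination scheme for a sequence of variables can be modeled as a partition of its positions (we use 0-indexing, whereas the text numbers positions from 1), together with Fine’s requirement that related positions be occupied by the same variable.

    def is_admissible_scheme(scheme, variables):
        """A scheme is a list of blocks (sets of positions). It is admissible if
        the blocks partition the positions and related positions hold the same
        variable."""
        positions = sorted(p for block in scheme for p in block)
        return (positions == list(range(len(variables)))
                and all(variables[p] == variables[q]
                        for block in scheme for p in block for q in block))

    xs = ('x', 'x', 'x', 'x')
    print(is_admissible_scheme([{0, 1}, {2, 3}], xs))                    # True
    print(is_admissible_scheme([{0, 1, 2, 3}], xs))                      # True
    print(is_admissible_scheme([{0, 1}, {2, 3}], ('x', 'y', 'x', 'y')))  # False

Dropping the final same-variable check would correspond to the more liberal conception just mentioned, on which coordination across distinct variables is not ruled out.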

With this terminology in place, the semantics basically follows the semantics above.

  • variables: \([{\kern -2.3pt}[ \alpha _{1},\dots ,\alpha _{n}]{\kern -2.3pt}]^{c} = \left \{ \langle d_{1},\dots ,d_{n}\rangle \in D^{n} \mid d_{i} = d_{j} ~\text {iff}~ c(i,j) \right \} \)

  • atomic:

    • \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \pi \alpha _{1}\dots \alpha _{k}, {\Upsilon }]{\kern -2.3pt}]^{c}\) iff for some τ such that \((\sigma ,\tau ,\upsilon ) \in [{\kern -2.3pt}[ {\Sigma }, \alpha _{1},\dots ,\alpha _{k}, {\Upsilon }]{\kern -2.3pt}]^{c}\), \(\tau \in I(\pi )\)

    • \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \pi \alpha _{1}\dots \alpha _{k}, {\Upsilon }]{\kern -2.3pt}]^{c}\) iff for some τ such that \((\sigma ,\tau ,\upsilon ) \in [{\kern -2.3pt}[ {\Sigma }, \alpha _{1},\dots ,\alpha _{k}, {\Upsilon }]{\kern -2.3pt}]^{c}\), \(\tau \notin I(\pi )\)

  • negation:

    • \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \neg \phi , {\Upsilon }]{\kern -2.3pt}]^{c}\) iff \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[ {\Sigma }, \phi , {\Upsilon }]{\kern -2.3pt}]^{c}\)

    • \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, \neg \phi , {\Upsilon }]{\kern -2.3pt}]^{c}\) iff \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[ {\Sigma }, \phi , {\Upsilon }]{\kern -2.3pt}]^{c}\)

  • conjunction:

    • \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, (\phi \wedge \psi ),{\Upsilon }]{\kern -2.3pt}]^{c}\) iff \((\sigma ,1,1,\upsilon ) \in [{\kern -2.3pt}[ {\Sigma }, \phi , \psi , {\Upsilon }]{\kern -2.3pt}]^{c}\)

    • \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[{\Sigma }, (\phi \wedge \psi ),{\Upsilon }]{\kern -2.3pt}]^{c}\) iff \((\sigma ,m,n,\upsilon ) \in [{\kern -2.3pt}[ {\Sigma }, \phi , \psi , {\Upsilon }]{\kern -2.3pt}]^{c}\), where m=0 or n=0

  • quantification:

    • \((\sigma ,1,\upsilon ) \in [{\kern -2.3pt}[{\Sigma },\forall \alpha \phi ,{\Upsilon }]{\kern -2.3pt}]^{c}\) iff \( (\sigma , d,1, \upsilon )\in [{\kern -2.3pt}[ {\Sigma }, \alpha ,\phi , {\Upsilon }]{\kern -2.3pt}]^{c}, \text { for all } d\in D\)

    • \((\sigma ,0,\upsilon ) \in [{\kern -2.3pt}[{\Sigma },\forall \alpha \phi ,{\Upsilon }]{\kern -2.3pt}]^{c}\) iff \( (\sigma , d,0, \upsilon )\in [{\kern -2.3pt}[ {\Sigma }, \alpha ,\phi , {\Upsilon }]{\kern -2.3pt}]^{c}, \text { for some } d\in D\)

As we have observed, the core difference here lies in the semantic connections on sequences of variables. Whereas the simple relationist semantics coordinates any two occurrences of ‘x’ in a sequence, the supplemented semantics coordinates occurrences of ‘x’ only if they are related by the coordination relation.
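The supplemented base clause can likewise be sketched computationally (our own illustration; here coordinated positions are required to take the same value and uncoordinated positions are left unconstrained, the left-to-right direction of the clause above, which is the behaviour the computation of the connection on 〈x,x,x,x〉 relative to c below relies on).

    from itertools import product

    def connection_c(variables, D, scheme):
        """Semantic connection on a sequence of variables relative to a coordination
        scheme, given as a list of blocks of (0-indexed) positions: positions in the
        same block must receive the same value."""
        n = len(variables)
        return {vs for vs in product(D, repeat=n)
                if all(vs[p] == vs[q]
                       for block in scheme for p in block for q in block)}

    D = {1, 2}
    print(connection_c(('x', 'x'), D, [{0, 1}]))    # {(1, 1), (2, 2)}: coordinated
    print(connection_c(('x', 'x'), D, [{0}, {1}]))  # all four pairs: uncoordinated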

The semantics repairs the problem with the formula ∃x F x∧∃x¬F x. On its own, no occurrences of ‘x’ in the formula are coordinated. Thus, neither occurrence of the quantifier ‘ ∃x’ binds any variables in its scope, making the (uncoordinated) formula equivalent to ∃x F y∧∃z¬F w. However, the formula can also be enriched with a coordination scheme that relates occurrences of the variable ‘x’. The desired coordination scheme is as above:

[Figure: ∃x F x ∧ ∃x ¬F x with linking wires as in the previous figure, one connecting the first and second occurrences of ‘x’ and another connecting the third and fourth.]

The coordination scheme c then relates the first and second occurrences of ‘x’ and the third and fourth occurrences of ‘x’. The derivation showing that ∃x F x∧∃x¬F x cannot be true is now blocked:

$$\begin{array}{@{}rcl@{}} 1 \in [{\kern-2.3pt}[ \exists x Fx \wedge \exists x \neg Fx]{\kern-2.3pt}]^{c} \;\; && \text{iff }\;\; (1,1)\in[{\kern-2.3pt}[ \exists x Fx, \exists x \neg Fx ]{\kern-2.3pt}]^{c} \\ && \text{iff }\;\; (d_{1},1,1) \in [{\kern-2.3pt}[ x,Fx, \exists x \neg Fx]{\kern-2.3pt}]^{c}, \text{ for some } d_{1}\in D\\ && \text{iff }\;\; (d_{1}, d_{2},1) \in [{\kern-2.3pt}[ x,x,\exists x \neg Fx]{\kern-2.3pt}]^{c}, \;\;d_{2} \in I(F),\\ &&{\kern15pt} \text{ for some } d_{1},d_{2} \in D\\ && \text{iff }\;\; (d_{1}, d_{2}, d_{3}, 1) \in [{\kern-2.3pt}[ x,x, x, \neg Fx]{\kern-2.3pt}]^{c}, \;\;d_{2} \in I(F),\\ &&{\kern15pt}\text{ for some } d_{1},d_{2}, d_{3} \in D\\ && \text{iff }\;\; (d_{1}, d_{2}, d_{3}, 0) \in [{\kern-2.3pt}[ x,x, x, Fx]{\kern-2.3pt}]^{c}, \;\;d_{2} \in I(F),\\ &&{\kern15pt} \text{ for some } d_{1},d_{2}, d_{3} \in D\\ && \text{iff }\;\; (d_{1}, d_{2}, d_{3}, d_{4}) \in [{\kern-2.3pt}[ x,x, x, x]{\kern-2.3pt}]^{c}, \;\;d_{2} \in I(F) \text{ and }\\ &&{\kern18pt} d_{4}\notin I(F), \text{ for some } d_{1},d_{2}, d_{3}, d_{4} \in D\\ \end{array} $$

Since coordination places the restriction that c(1,2) and c(3,4), the base clause yields that

$$\begin{array}{@{}rcl@{}} [{\kern-2.3pt}[ x,x,x,x]{\kern-2.3pt}]^{c} &=& \big\{ \langle d_{1},d_{2},d_{3}, d_{4}\rangle \mid d_{1} = d_{2} \wedge d_{3} = d_{4} \big\}\\ &=& \big\{(d_{1},d_{1},d_{2}, d_{2})\mid d_{1},d_{2} \in D\big\} \end{array} $$

and thus we can conclude that the formula is true iff for some \(d_{1},d_{2} \in D\), \(d_{1} \in I(F)\) and \(d_{2} \notin I(F)\).

Thus, the sentence is true relative to the relevant coordination scheme just in case there exists an object which is F and a (possibly distinct) object which is not F, which conforms to its meaning in more pedestrian semantics for first-order logic.
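The base-clause computation this derivation relies on can be replayed with the connection_c sketch above (our illustration; positions 0 through 3 stand in for the text’s occurrences 1 through 4).

    D = {1, 2}
    c = [{0, 1}, {2, 3}]     # link the 1st/2nd and the 3rd/4th occurrences of 'x'
    print(connection_c(('x', 'x', 'x', 'x'), D, c))
    # {(1, 1, 1, 1), (1, 1, 2, 2), (2, 2, 1, 1), (2, 2, 2, 2)}:
    # values of the form (d1, d1, d2, d2), as in the display above.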

4 Compositionality Revisited

Where does this leave the antinomy? Recall that the puzzle was that (5) and (6) semantically differ, even though they are minimal pairs differing only by the substitution of the synonyms ‘x’ and ‘y’. The relationist solution to the puzzle conceded all of this, but denied compositionality, the principle that if formulae (5) and (6) differ only by the substitution of constituents which are semantically the same, then (5) and (6) are semantically the same.

But, as we have observed, the formulae R x x and R x y no longer differ in themselves, but differ only when supplemented with a coordination scheme c relating the occurrences of ‘x’ in R x x. This coordination scheme may be an additional component of the syntax or a parameter against which the formula is evaluated. Thus, Fine effectively denies that (5) and (6) differ semantically insofar as they are minimal pairs.

However, once supplemented with a coordination scheme c relating occurrences of ‘x’ in R x x, there is a semantic difference. But the semantic difference does not occur between minimal pairs. That is, (5) differs from (6) both by substituting an occurrence of ‘x’ for ‘y’ and by having a distinct coordination scheme. We have to distinguish ‘ R x x’ in which the occurrences of x are coordinated from ‘ R x x’ in which the occurrences are not coordinated:

[Figure: two tokens of R x x; on the left, a linking line connects the two occurrences of ‘x’, while on the right the occurrences are unlinked.]

The formula on the right does, while the formula on the left does not, form a minimal pair with R x y.

Thus, by invoking enriched representation Fine does not solve the antinomy by denying compositionality, but only by denying that the formulae, insofar as they differ semantically, are minimal pairs. Indeed, the semantics given above is weakly compositional on a standard construal. Weak compositionality generally demands that if expressions ϕ and ψ differ in meaning relative to index i, then either ϕ and ψ differ in structure or there are corresponding subordinate expressions ϕ 1 and ψ 1 that differ in meaning relative to i. Treating the coordination schema as an index yields the following principle of weak compositionality:

  • weak compositionality (with coordination): If ϕ = μ(ϕ 1,…,ϕ n ) and ψ = μ(ψ 1,…,ψ n ) where each ϕ i and ψ i is a terminal syntactic constituent and if ϕ i and ψ i are semantically the same for all i with respect to c (that is, \([{\kern -2.3pt}[\phi _{i}]{\kern -2.3pt}]^{c}\) = \([{\kern -2.3pt}[\psi _{i}]{\kern -2.3pt}]^{c}\) or I(ϕ i ) = I(ψ i )), then ϕ and ψ are semantically the same with respect to c. (That is: \([{\kern -2.3pt}[\phi ]{\kern -2.3pt}]^{c}\) = \([{\kern -2.3pt}[\psi ]{\kern -2.3pt}]^{c}\).)

The difference between the semantics enriched with coordination schemata and the mere relational semantics is this. In the relational semantics, two formulae such as R x x and R x y give way to the sequences of variables (x,x) and (x,y), respectively, where the semantic connections on these sequences are automatically different even though the semantic connections on the variables taken individually are the same. In the enriched semantics, however, the semantic connections on the sequences (x,x) and (x,y) are the same, given the same coordination scheme as input. It is only when different coordination schemata are taken as input that the sequences differ semantically. As a consequence, weak compositionality is restored: the only way for two structurally isomorphic sentences ϕ and ψ to differ semantically relative to a coordination scheme is if the sequences of variables that they give way to differ semantically relative to that scheme (or if their other vocabulary differs semantically).
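This difference can also be checked with the connection_c sketch from Section 3.1 (our illustration): relative to one and the same coordination scheme, the sequences (x,x) and (x,y) receive the same connection; only a change of scheme makes a difference.

    D = {1, 2}
    uncoordinated = [{0}, {1}]
    linked = [{0, 1}]
    print(connection_c(('x', 'x'), D, uncoordinated) ==
          connection_c(('x', 'y'), D, uncoordinated))   # True: same scheme, same connection
    print(connection_c(('x', 'x'), D, uncoordinated) ==
          connection_c(('x', 'x'), D, linked))          # False: different schemes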

Thus, the semantics of variables on offer provides no motivation for denying weak compositionality, and Fine’s solution to the antinomy of the variable is best understood as rejecting minimal pair. As a result, relationism becomes an idle wheel in Fine’s solution to the antinomy. The formulae R x x and R x y differ in meaning only when they are evaluated against distinct coordination schemata. The semantic connections on the sequences x,x and x,y play no role in generating this difference, except insofar as these sequences are themselves evaluated against distinct schemata. In the next section, we examine what lessons can be drawn for Frege’s puzzle.

5 Relationism and Frege’s Puzzle

Fine’s solution to Frege’s puzzle takes its cue from his solution to the antinomy of the variable. We have seen that there are two conflicting aspects to that solution. One aspect is relationism: the semantic value of a complex is determined not merely by the meanings of its constituents but by relations among the constituents when taken in sequence. relationism, thus formulated, is a denial of even weak compositionality. But—in the case of variables—we saw that relationism does not generate a satisfactory semantics. To repair the semantics, Fine introduces the other aspect, enriched representation. The input to the semantic processing of a sentence is enriched so that R x x and R x y no longer form a minimal pair. But introducing enriched representation restores compositionality and thereby undermines the case for relationism.

Fine’s solution to Frege’s puzzle likewise vacillates between these two aspects. He explicitly advertises his solution to Frege’s puzzle as a rejection of compositionality ([8]: §2.B). On this view, even though the names ‘Cicero’ and ‘Tully’ agree in meaning (and occupy terminal nodes), the sentences (1) ‘Cicero is Cicero’ and (2) ‘Cicero is Tully’ do not agree in meaning. The explanation of this fact is meant to issue from relationism. Fine rejects the following principle: “If the pairs of names ‘Cicero’, ‘Cicero’ and ‘Cicero’, ‘Tully’ are semantically different then so are the names ‘Cicero’ and ‘Tully”’ ([8]: 39). The semantics of sentences (1) and (2) will be given in terms of these sequences just as—in the relationist semantics for variables—the semantics of R x x and R x y is given in terms of the sequences (x,x) and (x,y), respectively. But, as was the case with variables, Fine finds reason to reject the view that coordination among the semantic values of two occurrences of a name in a sentence arises solely from the fact that the name occurs twice in the sentence. Rather, Fine’s semantics for names introduces coordination schemata as extra inputs to the semantic evaluation. Therefore, his approach really should be taken as a denial of the claim that ‘Cicero is Cicero’ and ‘Cicero is Tully’ constitute a minimal pair. In particular, they differ because they are assessed with respect to different parameters. We begin by sketching how coordination among the semantic values of names in the content of a sentence might be construed. We then examine why these coordination relations are generated by enriched representation rather than relationism. This better situates Fine’s solution within the space of existing solutions to Frege’s puzzle.

5.1 Coordination Among Names

Fine wants to explain the difference between (1) ‘Cicero is Cicero’ and (2) ‘Cicero is Tully’ by appealing to the fact that the names may be coordinated in the pair (‘Cicero’, ‘Cicero’) but not in the pair (‘Cicero’, ‘Tully’). The coordination of the variables in the pair (x,x) was accounted for by the fact that the pair can only assume values of the form 〈a,a〉. On the other hand, the lack of coordination between the variables in the pair (x,y) was accounted for by the fact that the pair can assume a value 〈a,b〉, where a ≠ b. The resources of the relationist semantics for variables, however, will need to be supplemented to account for proper names. A name ‘Cicero’ or ‘Tully’ is capable of assuming only one value: namely, its referent. So we cannot account for the difference between the pair (‘Cicero’, ‘Cicero’) and the pair (‘Cicero’, ‘Tully’) merely by appealing to different semantic connections, if these are construed as the sets of sequences of values which the expressions can jointly assume. Rather, we need to enrich the semantic connections to include additional information about coordination.

Fine develops his theory in the context of the assumption that the semantic content of a sentence is structured and that an occurrence of a name in the sentence corresponds to an occurrence of the semantic value of the name in the structured content. The particular implementation of this assumption is flexible. As Fine ([8]: 54) says: “All that matters is that we should be able to talk meaningfully of the occurrences of an individual in a proposition and that we should be able to talk meaningfully of substituting one individual for another within a given proposition.” The connection between two occurrences of a name will be represented by an additional connection between the occurrences of the name’s value in the semantic value of the sentence.

We may think of the content of a sentence in this richer sense as a structured meaning. The sentence is construed as a whole composed of occurrences of various subordinated expressions and the content of the sentence is construed as a whole composed of occurrences of their values. Without representing coordination, the sentences (1) ‘Cicero is Cicero’ and (2) ‘Cicero is Tully’ will have the same structured meanings since \([{\kern -2.3pt}[\)Cicero\(]{\kern -2.3pt}]\) = \([{\kern -2.3pt}[\)Tully\(]{\kern -2.3pt}]\):

[Figure: the structured meanings of (1) and (2) with no coordination represented; the two structured meanings are identical.]

In order to represent that these are coordinated, lines will be drawn between the relevant occurrences of \([{\kern -2.3pt}[\)Cicero\(]{\kern -2.3pt}]\) and \([{\kern -2.3pt}[\)Tully\(]{\kern -2.3pt}]\) in the structured meanings.

[Figure: the structured meanings of (1) and (2) with coordination represented; a line links the two occurrences of the referent in the structured meaning of (1), while the structured meaning of (2) contains no such line.]

Appealing to structured meanings provides the resources to describe the difference between the contents of (1) and (2). Now we need a semantic theory to explain why these sentences have these different contents. We turn to this issue in the next section.

5.2 The Semantics of Coordination

As we have mentioned, Fine advertises his semantic solution to Frege’s puzzle as an implementation of the apparatus of semantic relationism. The semantic difference between (1) ‘Cicero is Cicero’ and (2) ‘Cicero is Tully’ is meant to arise from the semantic difference between the pairs ‘Cicero’, ‘Cicero’ and ‘Cicero’, ‘Tully’ without there being a difference in the semantics of the individual names.

A semantic difference between the identity sentences only strictly implies a semantic difference between the pairs of names “Cicero”, “Cicero” and “Cicero”, “Tully” but we may deny that the semantic difference between the pairs of names need imply a semantic difference between the names themselves. …“Cicero” is strictly coreferential with “Cicero” but that “Cicero” is only accidentally (not strictly) coreferential with “Tully”. ([8]: 51)

This type of view closely resembles a proposal in [21] and [13]. As Putnam described the view, a necessary condition for synonymy is intensional isomorphism (cf. [1]) which is equivalent to sameness of structured meaning. But Putnam argued that synonymy requires more than sameness of structured meaning. Rather, synonymy also requires agreement in logical structure ([21]: 118), where “[t]wo sentences are said to have the same logical structure, when occurrences of the same sign in one correspond to occurrences of the same sign in the other” (ibid.: footnote 8). Although (1) and (2) result from synonymous expressions put together in the same way, they nonetheless differ in logical structure, since ‘Cicero’ occurs twice in (1) but only once in (2).

Implementing this view in our framework, ‘Cicero is Cicero’ will correspond to a structured meaning in which the nodes corresponding to the occurrences of ‘Cicero’ are linked, while ‘Cicero is Tully’ will correspond to a structured meaning in which the nodes are not linked.

[Figure: the structured meaning of ‘Cicero is Cicero’ with its two name-nodes linked, alongside the structured meaning of ‘Cicero is Tully’ with no link.]

So on this view, two nodes should be linked in the structured meaning of a sentence just in case the corresponding positions are occupied by the same term.

A semantic theory that assigns structured meanings unadorned with linking relations to formulae is weakly compositional: any two sentences with the same structure whose terminal nodes agree in meaning express the same unadorned structured meaning. However, a theory that assigns structured meanings adorned with linking relations to formulae may fail to be compositional. Two sentences that express different adorned structured meanings may result from composing corresponding expressions with the same semantic values in the same way. For such a theory may assign distinct structured meanings adorned with linking relations to ‘Cicero is Cicero’ and ‘Cicero is Tully’, though these sentences agree in structure and their terminal constituents have the same semantic values.
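The contrast can be illustrated with a toy encoding of adorned structured meanings (entirely our own stand-in for the pictures above: a structured meaning is represented as the tuple of the constituents’ values, and the adornment as a set of links between positions occupied by the same term, following the Putnam/Kaplan rule).

    def adorned_meaning(terms, value_of):
        """Tuple of the constituents' semantic values, plus links between positions
        occupied by the same term (the 'link iff same term' rule)."""
        values = tuple(value_of[t] for t in terms)
        links = {frozenset({i, j}) for i in range(len(terms))
                 for j in range(i + 1, len(terms)) if terms[i] == terms[j]}
        return (values, links)

    # Stand-in semantic values: both names denote the man Cicero.
    value_of = {'Cicero': 'the man Cicero', 'Tully': 'the man Cicero', 'is': 'identity'}
    m1 = adorned_meaning(('Cicero', 'is', 'Cicero'), value_of)
    m2 = adorned_meaning(('Cicero', 'is', 'Tully'), value_of)
    print(m1[0] == m2[0])   # True: same values composed in the same way ...
    print(m1 == m2)         # False: ... but different links, hence different adorned meanings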

Putnam [21] and Kaplan [13], therefore, develop an account of what a relationist solution to Frege’s puzzle would look like. But as with the case of variables, Fine ([8]: 41) rejects this solution. According to Fine coordination between two occurrences of a name requires something more than mere recurrence of the name—just as coordination between two occurrences of a variable requires something more than mere recurrence of the variable. This, in turn, will serve as an additional input to semantic processing.

According to Fine, the linking relation in ‘Cicero is Cicero’

… cannot be a matter of having the same typographic name on the left and the right […] nor can it be a matter of having the same name with the same reference on the left and the right[.] Nor can it consist in the names themselves being the same. ([8]: 41)

The worries about mere typographic coincidence (or typographic coincidence with accidental co-reference) being insufficient for coordination between two uses of names are well-taken.

But Fine is explicit that the very same public language name may be used on two occasions and fail to be coordinated. Fine considers the case of puzzling Peter ([14]), who hears the common language name ‘Paderewski’ on two different occasions, once when presented with a musician and once when presented with a politician. Peter does not realize that the same word is used to refer to the same man on both occasions. At one point, Peter comes to ask ‘is Paderewski Paderewski?’. As Fine ([8]: 111ff) analyzes the situation, Peter’s uses of the name ‘Paderewski’ are instances of the same public language word ‘Paderewski’. (Fine is explicit that they are coordinated with the same public language expression.) That is, the same public language word recurs twice in Peter’s question and its answer (7).

  (7) Paderewski is Paderewski.

Yet, the two occurrences of the public language word ‘Paderewski’ in (7) can fail to be coordinated. (So coordination is not Euclidean.)

What this means is that it cannot be a matter of the semantics of the public language word ‘Paderewski’ that any two occurrences of it are coordinated. So it cannot be a matter of semantics that (7) always expresses the coordinated structured meaning:

[Figure: the structured meaning of (7) with a line linking the two occurrences of the referent of ‘Paderewski’.]

Analogously, Fine should think that it is not a matter of semantics that the public language sentence (1) ‘Cicero is Cicero’ always expresses a structured meaning in which the occurrences of the name ‘Cicero’ are coordinated.

Nonetheless, Fine ([8]: 108-114) insists that it is a semantic matter that certain uses of the public language sentence (1) ‘Cicero is Cicero’ express a coordinated structured meaning while certain uses of (2) ‘Cicero is Tully’ do not express a coordinated structured meaning. But this means that coordination must come from something other than recurrence of the public language name ‘Cicero’. Fine (ibid.: 89) denies that “context” supplies the missing information. But regardless of where it comes from, it is an additional input to the semantic processing.

Whether an utterance of the public language sentence (1) ‘Cicero is Cicero’ expresses a coordinated content is not a function of the meaning of the sequence of public language expressions ‘Cicero’, ‘is’, ‘Cicero’ on its own. The meaning of an utterance of (1) depends on whether or not the two occurrences of ‘Cicero’ are coordinated. Fine tells a story about when coordination happens. In particular, the two uses will be coordinated when a speaker takes them to be coordinated.

When two tokens of a given name are uttered by a single speaker, they will be coordinated if and only if they are internally linked [i.e. just in case the speaker takes them to have the same use]. ([8]: 107)

So the meaning of a use of the public language sentence (1) ‘Cicero is Cicero’ is a function of the meaning of the sequence of public language expressions ‘Cicero’, ‘is’, ‘Cicero’ and also of what the speaker takes to be coordinated with what. If the speaker does not take the occurrences of the public language word to be coordinated, then the sentence does not express a coordinated structured meaning.

But this means that insofar as (1) ‘Cicero is Cicero’ and (2) ‘Cicero is Tully’ differ in meaning, they are not minimal pairs differing only by the substitution of proper names. They are evaluated against different coordination schemata. To put the matter differently, (1) is a minimal pair with (2), but also has the same meaning. On the other hand, (1*) differs in meaning from (2), but these do not constitute a minimal pair.

  • (1) Cicero is Cicero.

  • (1*) Cicero is Cicero. [with a line linking the two occurrences of ‘Cicero’]

  • (2) Cicero is Tully.

Fine would likely deny that the origin of the coordination in (1*) is a “syntactic” matter. Rather, it arises from whether speakers take the two occurrences to have the same use. We would construe this as a difference in context. Fine ([8]: 113) himself has a narrower conception of context. But he nonetheless must concede that the input to the semantic evaluation in (1*) and (2) differs by more than the mere substitution of proper names. There is only a threat to compositionality if ‘Cicero is Cicero’ and ‘Cicero is Tully’ (under the intended readings) are minimal pairs—all inputs to semantic evaluation must coincide. But since the inputs to semantic evaluation differ in their coordination schemata, they are not minimal pairs. Thus, compositionality is not threatened. Just as in the variable case, relationism is an idle wheel. What explains the substitution failures is enriched representation, the coordination scheme, which is an extra element of the semantic evaluation. And, just as in the variable case, introducing additional inputs to the semantic evaluation reintroduces compositionality.

6 Conclusion

Frege’s puzzle, recall, is that the substitution of coreferential names does not seem to preserve meaning. We have by and large restricted our attention to the substitution of coreferential names in simple sentences such as (1) and (2). As we noted, the same puzzle arises from the substitution of coreferential names in attitude ascriptions such as (3) ‘Sam believes that Cicero is Cicero’ and (4) ‘Sam believes that Cicero is Tully’. Construing Fine as denying minimal pair situates his view among a larger class of views that reject this component of the puzzle. For instance, hidden indexical theories (e.g., [3] and [27]) would deny that insofar as they differ semantically, (3) and (4) are minimal pairs differing only by the substitution of coreferential proper names. According to the hidden indexical theory, the attitude ascriptions are also assessed at different contexts, which leads to a difference in meaning. Fine differs from the hidden indexical theory by locating the additional semantic input in the simple sentence itself and not in an attitude ascription containing it. But no matter how this additional semantic input is implemented to solve Frege’s puzzle, there is no need to deny compositionality. As with the case of the antinomy of the variable, Frege’s puzzle does not need to be taken as a threat to this basic tenet of semantic theorizing.