
The Broadest Necessity

Journal of Philosophical Logic

A Correction to this article was published on 26 June 2018


Abstract

In this paper the logic of broad necessity is explored. Definitions of what it means for one modality to be broader than another are formulated, and it is proven, in the context of higher-order logic, that there is a broadest necessity, settling one of the central questions of this investigation. It is shown, moreover, that it is possible to give a reductive analysis of this necessity in extensional language (using truth functional connectives and quantifiers). This relates more generally to a conjecture that it is not possible to define intensional connectives from extensional notions. This conjecture is formulated precisely in higher-order logic, and concrete cases in which it fails are examined. The paper ends with a discussion of the logic of broad necessity. It is shown that the logic of broad necessity is a normal modal logic between S4 and Triv, and that it is consistent with a natural axiomatic system of higher-order logic that it is exactly S4. Some philosophical reasons to think that the logic of broad necessity does not include the S5 principle are given.


Change history

  • 26 June 2018

    The original version of this article unfortunately contains mistakes.

Notes

  1. The use of the terminology of ‘broadness’ has its pitfalls: the broader an operator, the fewer propositions it applies to. The terminology derives from a way of modeling necessity operators in terms of worlds: the broader the operator the broader the set of worlds it quantifies over.

  2. This way of talking is strictly speaking incorrect, since the predicates ‘is a proposition’ and ‘is an operator’ will only grammatically combine with a singular term, and not a sentence or operator expression (‘it’s not the case that is an operator’ is not grammatical for example). However, since English has no device for quantifying into sentence or operator position, while it does have singular quantifiers, these paraphrases are extremely convenient, and make clear which formal sentences they are going proxy for. Throughout, I shall use the term operator expression to mean the sort of syntactic expression that prefixes a sentence to form another sentence, and I shall reserve the term operator for the sort of thing that such an expression denotes.

  3. An operator □ might be said to be normal only if □P is logically necessary (propositionally entailed by the empty set of propositions) whenever P is, and the proposition □(P → Q) → □P → □Q is logically necessary.

  4. ⊤ may be defined explicitly as A → A for some particular proposition A, or taken as primitive as in some presentations of propositional logic. The uniqueness of the tautologous proposition will receive a justification shortly.

  5. An operator expression is normal, relative to an interpretation, if all the theorems of the smallest normal modal logic K are true in that interpretation. This logic has a rule of necessitation: if A is provable in K then so is □A. Any operator denoted by a normal modal operator expression applies to ⊤, because ‘□⊤’ is provable in K.

  6. The relevant assumption is that ⊤ is the same proposition as the proposition that it’s metaphysically necessary that ⊤, and similarly for always/determinacy/etc. These sorts of identities can be proved by augmenting the logic of metaphysical necessity (or the logic of determinacy, or tense logic) with the Rule of Equivalence discussed in Section 3, but the motivation for them is of a piece with the sorts of motivations for Booleanism.

  7. Suppose □ is any weak necessity. Then if □ is a necessity operator it follows that □□⊤ is true, so that □□ is also a weak necessity. Finally, since □ is a necessity operator and □□ is a weak necessity, it follows that □□□⊤ is true. Thus □□□⊤ is true whenever □ is a weak necessity, and so □□ is a necessity operator. This generalizes to the other iterations of □.

  8. We can express this more precisely in a propositional language with operator constants, by adding to the propositional calculus the rule: if ⊩ A ↔ B then ⊩ ϕ ↔ ϕ[A/B]. Note that this is slightly stronger than the intersubstitutivity of Boolean equivalents: it allows us to substitute A and B that are provably equivalent given the propositional calculus and the Booleanism rule. So, for example, we can substitute □(A ∧ B) → □(B ∧ A) for ⊤ in any context ϕ, since these are provably equivalent given the propositional calculus and our rule. This is something which couldn’t be proved from the intersubstitutivity of tautological equivalents alone, since they aren’t tautological equivalents. (The extra strength that this rule provides could more transparently be achieved by adding a propositional identity connective to the language, and adding axioms to the effect that Boolean equivalents are identical. We will consider such a connective in Section 3.)

  9. For example, the linguistic analog of our definition of a necessity operator would depend on which tautology was chosen in the definition. The question of broadness on this way of doing things would become uninteresting. Suppose that we chose the tautology A ∨ ¬A. Then, following parallel definitions, a linguistic necessity predicate N1 is at least as broad as another N2 iff \(N_{3}(\ulcorner N_{1}(\ulcorner B\urcorner )\to N_{2}(\ulcorner B\urcorner ) \urcorner )\) is true for every sentence B and every predicate N3 applying to \(\ulcorner A\vee \neg A\urcorner \). But a predicate that applies to \(\ulcorner A\vee \neg A\urcorner \) and nothing else can be substituted for N3, and such a predicate does not apply to \(\ulcorner N_{1}(\ulcorner B\urcorner )\to N_{2}(\ulcorner B\urcorner ) \urcorner \). So no linguistic predicate would count as broader than any other if we followed analogs of the above definitions.

  10. It is not often noted, but the principle of α-equivalence, which allows one to re-letter bound variables, can be derived from the βη rules (which is why we have not included it in our discussion). A term of the form λx ϕ is η-equivalent to λy (λx ϕ)y, which, by applying β-reduction to the subterm (λx ϕ)y, is equivalent to λy ϕ[y/x]. This gives us α-equivalence, since any re-lettering of bound variables in a term amounts to a re-lettering of a subterm of the form λx ϕ (λ is the only variable binder in the language).
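Displayed as a chain of conversions (with y chosen not free in ϕ), the βη-derivation of α-equivalence sketched in footnote 10 runs:

```latex
\lambda x.\,\phi
  \;=_{\eta}\; \lambda y.\,(\lambda x.\,\phi)\,y
  \;=_{\beta}\; \lambda y.\,\phi[y/x]
  \qquad (y \notin \mathrm{FV}(\phi))
```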

  11. What I mean here is that connectives behaving truth-functionally like each truth-functional connective can be defined. On a structured theory of connectives, ∨ and λx λy ((x → ⊥) → y) necessarily have the same truth-functional behavior but are distinct. We shall later consider some principles that rule out such differences: see the discussion of Functionality and the Rule of Equivalence in Section 4.

  12. More precisely, ∃σ := λX ¬∀σ(λx ¬(X x)).

  13. As emphasized in footnote 8, note that the rule allows the rule itself to be used in a proof that A and B are materially equivalent. Everything that can be proved from this rule could be proved from the assumption that Boolean equivalents are identical and a minimal logic of propositional identity (self-identity and Leibniz’s law). We shall discuss the propositional identity connective shortly.

  14. Given Booleanism it is also definable as, e.g., double negation.

  15. This argument uses the sort of iterated application of Booleanism noted in footnote 8.

  16. And by an argument due to Scroggs this logic cannot be stronger than S5: see the discussion in Section 5.

  17. In Kripke’s words, metaphysical necessity is ‘necessity in the highest degree’ (p. 99).

  18. Edgington [15], for example, concludes from this that there are just two independent families of modal notions — metaphysical modalities and epistemic modalities; see McFetridge [32] for critical discussion.

  19. On a simple model there is only one metaphysically necessary truth, which ‘it’s actually sunny’ and ‘1=1’ both express, and this proposition is knowable a priori if accessed via a guise corresponding to the latter sentence.

  20. There are many such strategies that could be used to do this. See, for example, Stalnaker [47], Salmon [41], Soames [46], Saul [43] (p. 6), Crimmins & Perry [10], Richard [40], Braun [6] and so on.

  21. One model of temporalism identifies propositions with sets of world-time pairs. On this model, propositions are more fine grained than sets of worlds, and so one would not expect propositions to be individuated by necessary equivalence. That is, one would expect to be able to find metaphysically necessarily equivalent propositions that are distinct. In particular if there was a metaphysically necessary proposition, p, that was distinct from ⊤ we could in principle find an operator O applying to ⊤ but not p. By (*) this would mean that metaphysical necessity was not broader than O. This is the rough intuition at any rate; we iron out the details in what follows.

  22. The fact that metaphysical necessity is not broader than eternal truth is of course a surprising consequence of the standard semantics for tense logic that takes a good deal of getting used to, and it has recently been challenged by Dorr and Goodman [13]. Dorr and Goodman have things to say about both of the sorts of arguments that I have given above. They reject the coherence of an actuality operator satisfying the usual axioms, and cast doubt on the idea that everything supervenes on the eternal. However I find the latter idea so attractive that I have nonetheless not been won over by their arguments (I briefly treat this issue in Bacon [3], footnote 16 and the surrounding text).

  23. There is, of course, an open question whether determinacy operators are linguistic necessities or propositional necessities. Many theorists, such as McGee [33] and Williamson [51], assume they are linguistic necessities, although others do not (see Fine [17], Field [16], Bacon [4], and Bacon [2]).

  24. This is the modal inference from ◇A and □B to ◇(A ∧ B), which is easily derivable in any normal modal logic, and so in particular holds when □ is interpreted as ‘determinately’.

  25. ■ is a necessity operator since ■⊤, i.e. (¬⊤ □→ ⊥), is equivalent to the logical truth (⊥ □→ ⊥), which is itself plausibly the same proposition as ⊤ (this assumption goes beyond Booleanism, but is a natural one to make). Moreover, if (¬A □→ ⊥) were a counterpossible, that would just mean that A is metaphysically necessary but not counterfactually necessary.

  26. For the propositional calculus we take all tautologies as axioms, and take modus ponens as our only rule of inference.

  27. The differences between these authors mainly consist in whether the Rule of Equivalence is accepted.

  28. This fact was originally noticed in Bacon [4], chapter 11. However, this result on its own is so weak that it appears uninteresting, for it only characterizes the broadest necessity up to its extension. If Alice said the tautology, and nothing else, then ‘Alice said that’ counts as broader than every other necessity as well.

  29. The operator L itself is defined as follows: L := λY ∀X(X⊤ → X Y).

  30. The connective itself is defined: ≈ := λY λZ ∀X(X Y ↔ X Z).

  31. On this conception it can be contingent whether a relation or property is extensional. For example, the actuality operator @ counts as extensional since material equivalents are in fact substitutable within the scope of @, but it wouldn’t have been extensional had things been any other way. There is thus a more demanding notion of being broadly necessarily extensional which could also be considered in this context; the result discussed below that L can be defined from extensional notions also shows that L can be defined from notions that are broadly necessarily extensional.

  32. Cian Dorr has pointed out to me that these results (and some of the results below) can be proven without the functionality axiom if we assume a strengthening of the Rule of Equivalence: if ⊩ A x1...xn = B x1...xn then ⊩ A = B. This can also be seen as a rule version of the axiom of Functionality.

  33. Suszko does not state his principle in full-fledged higher-order logic, and so his version takes the form of a schema. Without employing higher-order resources, like Leibniz equivalence, it amounts to the claim that all operators are extensional: (A ↔ B) → (ϕ ↔ ϕ[A/B]).

  34. The Rule of Equivalence proves B = (⊤ → B). An instance of Substitution says B = (⊤ → B) → (((L B → L B) = (L B → L B)) → ((L(⊤ → B) → L B) = (L B → L B))), so we may conclude (L(⊤ → B) → L B) = (L B → L B). Since L B → L B, we may conclude L(⊤ → B) → L B (by Substitution again, substituting the whole formula L B → L B for the conclusion). This is the desired conclusion.

  35. ∀x A → A is an instance of UI (making the vacuous substitution of x for x). Applying necessitation for L gets us L(∀x A → A), and distributing L results in L∀x A → L A. Applying Gen directly gives us L∀x A → ∀x L A as required.

  36. See, e.g., Kripke [29].

  37. The theorem below was proved using the Rule of Equivalence; however, a version of it is provable without that assumption, with a slightly more intricate proof.

  38. For this argument to make any sense one must take heed of the types of these identifications: every candidate identity relation between entities of type σ is Leibniz equivalent (at type σ → σ → t) to Leibniz equivalence at type σ.

  39. It is worth comparing this with the notion of metaphysical universality adopted by Williamson [54]: in the language of higher-order logic it amounts to a sentence which is purely logical and true (as opposed to being purely logical and being broadly necessary). On Williamson’s conception there could be metaphysically universal truths that aren’t even metaphysically necessary.

  40. Failures of the necessity of distinctness are one way in which my definition of metaphysical validity can come apart from Williamson’s notion of metaphysical universality. For example, there are models of HFE in which there are four propositions but, because two pairs of distinct propositions are possibly identical, it’s possible that there are only two propositions. In such a model, the claim that there are exactly four propositions is metaphysically universal, but not valid in my sense, because it is not L-necessary. A natural conjecture would be that, on the assumption of S5, every metaphysically universal sentence in the pure language of higher-order logic is valid (the converse is trivially true).

  41. Thanks to an anonymous referee here for making me think more carefully about this question.

  42. The version stated below is distinct from, but closely related to Shapiro’s principle.

  43. We define the notion of an extensional concept as follows. First we define a relation ∼σ on each type σ: p ∼t q stands for p ↔ q, a ∼e b means a =e b, and f ∼σ→τ g means ∀x ∀y(x ∼σ y → f x ∼τ g y). For each type we may define an extensionality predicate of type σ → t as follows: Extσ(a) := a ∼σ a. ϕ∼ is the result of simultaneously replacing subformulae of the form ∀σx ψ with ∀σx(Extσ(x) → ψ). It is worth noting that the relation we have defined is an example of a logical relation. (See the Appendix for the definition of a Kripke logical relation; a logical relation is a Kripke logical relation with exactly one world.)
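The base clause of footnote 43 can be illustrated in a toy possible-worlds model (the model and operator names here are my own, not the paper's): propositions are sets of worlds over W = {0, 1}, world 0 is actual, and p ∼t q holds iff p and q have the same actual truth value. A unary operator then counts as an extensional concept iff it maps ∼t-related inputs to ∼t-related outputs.

```python
# Toy illustration (my own model) of the relation ~ from footnote 43.
from itertools import chain, combinations

W = frozenset({0, 1})                              # two worlds; world 0 is actual
props = [frozenset(s) for s in
         chain.from_iterable(combinations(W, r) for r in range(3))]
sim_t = lambda p, q: (0 in p) == (0 in q)          # p ~_t q: same actual truth value

def ext_op(O):
    # O is extensional iff p ~_t q implies O(p) ~_t O(q)
    return all(sim_t(O(p), O(q)) for p in props for q in props if sim_t(p, q))

neg = lambda p: W - p                              # negation: truth-functional
box = lambda p: W if p == W else frozenset()       # necessity: looks past truth value

assert ext_op(neg)                                 # extensional
assert not ext_op(box)                             # not an extensional concept
```

The failure for `box` arises exactly as the surrounding text suggests: a contingent truth and ⊤ are ∼t-related, but their necessitations are not.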

  44. Given a model of higher-order logic we can define the subset of each domain corresponding to the extensional concepts, using a definition parallel to that of ∼ given in footnote 43. This will be a congruence with respect to application, and when the starting model is of the sort described in the Appendix, it is easy to see that quotienting the model by the equivalence relation will always result in a standard model. It follows that whenever ϕ is true in the quotiented model, ϕ is true in the original model.

  45. The models can be generated using Kripke logical relations — the general technique is outlined in the Appendix. Note that in the Appendix we officially only work with one base type, t, however the definitions are easily generalized.

  46. I’m indebted to Peter Fritz here for alerting me to these sorts of correlations between types e and t on the assumption of Kreisel’s principle. See also Fritz [21] for some further potential constraints.

  47. Presumably named by analogy with the physicists’ Kronecker δ.

  48. In general there are δ functions at each type σ → σ → τ → τ → τ.
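Reading the type as σ → σ → τ → τ → τ (my reconstruction of the flattened notation), such a δ function takes two arguments of type σ and two of type τ, returning the first τ-argument when the σ-arguments are identical and the second otherwise. A curried sketch:

```python
# Kronecker-delta-like choice function at type σ → σ → τ → τ → τ (curried).
delta = lambda a: lambda b: lambda x: lambda y: x if a == b else y

assert delta(1)(1)('same')('diff') == 'same'   # identical σ-arguments: first τ
assert delta(1)(2)('same')('diff') == 'diff'   # distinct σ-arguments: second τ
```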

  49. There are some similarities between this principle and a comprehension principle discussed in Walsh [50] and Dorr [12] (in the latter under the title ‘Plenitude’). Unlike this principle, however, those principles do not entail the Fregean axiom. Thanks to an anonymous referee for pointing this connection out.

  50. For example, a standard theorem in the logic of actuality is □(@¬p ⇔ ¬@p). But had every proposition been actually true, then @¬p would be true, and ¬@p false.

  51. We have excluded the type e since all of the relevant definitions involve types constructed only from t. This makes the presentation simpler, although nothing turns on this.

  52. Note that if we are working in an applicative structure that is not functional, then further constraints on S and K are needed to ensure that the λ-terms obey βη-conversion. See Hindley and Seldin [24], p. 86.

  53. Indeed, modalized domains form a cartesian closed category, where the morphisms between modalized domains are simply functions that preserve the associated relations at every world. The above point can be put categorically by noting that the forgetful functor A↦|A| from this category into Set is faithful, and maps each full modalized structure to a Henkin structure.

  54. One clue that Modalized Functionality is a profitable principle to study is that it is equivalent to a certain natural adjunction principle for the quantifiers (discussed in Dorr [11]), in the presence of Booleanism at each type. Thanks to Cian Dorr for pointing this out to me.

  55. Thanks to Jeremy Goodman for suggesting a simplification of the following definition.

  56. With the stronger preservation condition functional domains would never properly expand. More problematic is the fact that we could not interpret every λ-term if we imposed this condition. For example, consider the frame ({0, 1}, ≤) and the expanding modalized domains A and B where |A| = |B| = {a}, \({\sim ^{A}_{1}} = {\sim ^{B}_{0}} = {\sim ^{B}_{1}} = \{(a,a)\}\) and \({\sim ^{A}_{0}} = \emptyset \). Then, with the stronger preservation condition, |B ⇒ A| is empty, as there are no functions that preserve ∼0. Since |A| is non-empty, |A ⇒ (B ⇒ A)| is also empty, since there are no functions from a non-empty set into an empty set. But this means there is no interpretation for the K combinator of type e → t → e in a model which interprets types e and t with A and B respectively. Our definition, by contrast, ensures that every term has an interpretation.

References

  1. Armstrong, D.M. (1989). A combinatorial theory of possibility. Cambridge: Cambridge University Press.

  2. Bacon, A. (2015). Can the classical logician avoid the revenge paradoxes? Philosophical Review, 124(3), 299–352.

  3. Bacon, A. Tense and relativity. Noûs, forthcoming.

  4. Bacon, A. Vagueness and thought. Oxford: Oxford University Press, forthcoming.

  5. Barcan, R.C. (1946). A functional calculus of first order based on strict implication. Journal of Symbolic Logic, 11(1), 1–16.

  6. Braun, D.M. (2002). Cognitive significance, attitude ascriptions, and ways of believing propositions. Philosophical Studies, 108(1–2), 65–81.

  7. Brogaard, B., & Salerno, J. (2013). Remarks on counterpossibles. Synthese, 190(4), 639–660.

  8. Chandler, H.S. (1976). Plantinga and the contingently possible. Analysis, 36(2), 106–109.

  9. Cresswell, M.J. (1967). Propositional identity. Logique et Analyse, 40, 283–291.

  10. Crimmins, M., & Perry, J. (1989). The prince and the phone booth: Reporting puzzling beliefs. Journal of Philosophy, 86(12), 685–711.

  11. Dorr, C. (2014). Quantifier variance and the collapse theorems. The Monist, 97, 503–570.

  12. Dorr, C. To be F is to be G. In J. Hawthorne & J. Turner (Eds.), Philosophical Perspectives 30: Metaphysics, forthcoming.

  13. Dorr, C., & Goodman, J. Diamonds are forever. Noûs, forthcoming.

  14. Dummett, M.A.E., & Lemmon, E.J. (1959). Modal logics between S4 and S5. Mathematical Logic Quarterly, 5(14–24), 250–264.

  15. Edgington, D. (2004). Two kinds of possibility. Aristotelian Society Supplementary Volume, 78(1), 1–22.

  16. Field, H. (2000). Indeterminacy, degree of belief, and excluded middle. Noûs, 34(1), 1–30.

  17. Fine, K. (1975). Vagueness, truth and logic. Synthese, 30(3), 265–300.

  18. Fine, K. (1977). Prior on the construction of possible worlds and instants. In Worlds, times and selves.

  19. Frege, G. (1879). Begriffsschrift: eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle.

  20. Frege, G. (1951). On concept and object (P.T. Geach, Trans.). Mind, 60, 168.

  21. Fritz, P. (2016). First-order modal logic in the necessary framework of objects. Canadian Journal of Philosophy, 46(4–5), 584–609.

  22. Hale, B. (1996). Absolute necessities. Philosophical Perspectives, 10, 93–117.

  23. Henkin, L. (1950). Completeness in the theory of types. Journal of Symbolic Logic, 15(2), 81–91.

  24. Hindley, J.R., & Seldin, J.P. (2008). Lambda-calculus and combinators: an introduction, Vol. 13. Cambridge: Cambridge University Press.

  25. Hughes, G.E., & Cresswell, M. (1996). A new introduction to modal logic. London: Routledge.

  26. Kaplan, D. (1989). Demonstratives. In J. Almog, J. Perry, & H. Wettstein (Eds.), Themes from Kaplan (pp. 481–563). Oxford University Press.

  27. Kreisel, G. (1967). Informal rigour and completeness proofs. In I. Lakatos (Ed.), Problems in the Philosophy of Mathematics (pp. 138–157). North-Holland.

  28. Kripke, S.A. (1959). A completeness theorem in modal logic. Journal of Symbolic Logic, 24(1), 1–14.

  29. Kripke, S.A. (1963). Semantical considerations on modal logic. Acta Philosophica Fennica, 16, 83–94.

  30. Kripke, S.A. (1980). Naming and necessity. Cambridge: Harvard University Press.

  31. Lewis, D.K. (1986). On the plurality of worlds. Hoboken: Blackwell Publishers.

  32. McFetridge, I. (1990). Essay VIII. In J. Haldane & R. Scruton (Eds.), Logical Necessity and Other Essays. Aristotelian Society Series.

  33. McGee, V. (1990). Truth, vagueness, and paradox: An essay on the logic of truth. Indianapolis: Hackett Publishing Company.

  34. Mitchell, J.C. (1996). Foundations for programming languages. Cambridge: MIT Press.

  35. Nolan, D. Causal counterfactuals and impossible worlds. In H. Beebee, C. Hitchcock, & H. Price (Eds.), Making a Difference. Oxford University Press, forthcoming.

  36. Plotkin, G. (1973). Lambda-definability and logical relations.

  37. Prior, A.N. (1962). Formal logic. Oxford: Clarendon Press.

  38. Prior, A.N. (1971). Objects of thought. Oxford.

  39. Rayo, A. On the open-endedness of logical space. Unpublished manuscript.

  40. Richard, M. (1983). Direct reference and ascriptions of belief. Journal of Philosophical Logic, 12(4), 425–452.

  41. Salmon, N. (1986). Frege’s puzzle. Ridgeview.

  42. Salmon, N. (1989). The logic of what might have been. Philosophical Review, 98(1), 3–34.

  43. Saul, J.M. (2010). Simple sentences, substitution, and intuitions. Oxford: Oxford University Press.

  44. Scroggs, S.J. (1951). Extensions of the Lewis system S5. Journal of Symbolic Logic, 16(2), 112–120.

  45. Shapiro, S. (1987). Principles of reflection and second-order logic. Journal of Philosophical Logic, 16(3), 309–333.

  46. Soames, S. (1987). Direct reference, propositional attitudes, and semantic content. Philosophical Topics, 15(1), 47–87.

  47. Stalnaker, R.C. (1984). Inquiry. Cambridge: MIT Press.

  48. Suszko, R. (1971). Identity connective and modality. Studia Logica, 27(1), 7–39.

  49. Suszko, R. (1975). Abolition of the Fregean axiom. Lecture Notes in Mathematics, 453, 169–239.

  50. Walsh, S. (2016). Predicativity, the Russell-Myhill paradox, and Church’s intensional logic. Journal of Philosophical Logic, 45(3), 277–326.

  51. Williamson, T. (1994). Vagueness. Abingdon: Routledge.

  52. Williamson, T. (1996). The necessity and determinateness of distinctness. In S. Lovibond & S.G. Williams (Eds.), Essays for David Wiggins: Identity, Truth and Value. Oxford: Blackwell.

  53. Williamson, T. (2003). Everything. Philosophical Perspectives, 17(1), 415–465.

  54. Williamson, T. (2013). Modal logic as metaphysics. Oxford: Oxford University Press.


Author information


Correspondence to Andrew Bacon.

Additional information

Thanks to Cian Dorr, Peter Fritz, Jeremy Goodman and John Hawthorne, and to the members of the Grain Exchange reading group for helpful comments and discussion. I would also like to thank the audience of a colloquium at Oxford where I presented a talk based on this material. I owe a particular debt of gratitude to two anonymous referees for this journal, whose feedback greatly improved this paper.

Appendix

A.1 Models of Higher-Order Logic

We work within the simply typed λ-calculus with one base type, t. All other types may be obtained as follows: t is a type, and if σ and τ are types, so is σ → τ (see footnote 51).

Type signatures such as the one described above can in general be modeled by applicative structures (see Mitchell [34]). Here we shall focus on a particular kind of applicative structure:

Definition A.1 (Henkin structure)

A Henkin structure is a collection of sets \(A^{\sigma }\) indexed by the types σ with the property that

  • \(A^{\sigma \to \tau } \subseteq {A^{\tau }}^{A^{\sigma }}\) for each σ and τ

A Henkin structure is moreover full if:

  • \(A^{\sigma \to \tau } = {A^{\tau }}^{A^{\sigma }}\)

Henkin structures then assign meanings to all type expressions. In general, however, Henkin structures are too impoverished to interpret higher-order logic: we need to ensure that they contain enough functions to interpret the typed λ-calculus.

Definition A.2 (Rich Henkin structure)

A Henkin structure is rich iff, for all types σ, τ, υ, there are elements \(K_{\sigma \tau } \in A^{\sigma \to \tau \to \sigma }\) and \(S_{\sigma \tau \upsilon } \in A^{(\sigma \to \tau \to \upsilon )\to (\sigma \to \tau )\to \sigma \to \upsilon }\) satisfying the following properties:

  • \(K_{\sigma \tau }\,x\,y = x\) whenever \(x\in A^{\sigma }\) and \(y\in A^{\tau }\)

  • \(S_{\sigma \tau \upsilon }\,x\,y\,z = x\,z\,(y\,z)\) whenever \(x\in A^{\sigma \to \tau \to \upsilon }\), \(y\in A^{\sigma \to \tau }\) and \(z\in A^{\sigma }\)

Note that any model of the simply typed λ-calculus must contain S and K at each relevant type, because we can define functions with their behavior just using λ-terms: λx λy x and λx λy λz x z (y z). Of more note is the fact that if a Henkin structure contains S and K then it contains every λ-definable function (see Mitchell [34], chapter 4).
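The combinators and their defining λ-terms can be sketched directly in Python (an illustration of the standard definitions, not anything specific to the paper), including the classic fact that S K K behaves as the identity:

```python
# K and S as curried Python lambdas, mirroring  λx λy x  and  λx λy λz x z (y z).
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

# S K K z = K z (K z) = z, so S K K acts as the identity combinator.
I = S(K)(K)

assert K(1)(2) == 1
assert I(42) == 42
assert S(lambda x: lambda y: x + y)(lambda x: x * 2)(3) == 9  # 3 + (3 * 2)
```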

Definition A.3 (Signature)

A signature Σ for a simply typed λ-calculus is a set of constants c and a type assignment function Typ mapping each constant to a type.

Given an infinite set of variables Var, the type assignment function can be expanded so that it surjectively assigns types to every member of Var in such a way that the preimage of each type is infinite. We then expand Typ to assign types to strings of symbols from our signature as follows (below and elsewhere we write ‘α has type σ’ when Typ(α) = σ):

  • c has type Typ(c)

  • x has type Typ(x)

  • α β has type τ when α has type σ → τ and β has type σ

  • λx α has type σ → τ when x has type σ and α has type τ

A term of the simply typed λ-calculus of signature Σ, \(\mathcal {L}_{\Sigma }\), is any string in the domain of Typ.

A variable assignment is a function g on Var such that g(x) ∈ \(A^{Typ(x)}\) for each x ∈ Var. We write g[x ↦ d] for the assignment that is exactly like g except that it maps x to d. If a Henkin structure is rich, it is possible to interpret the simply typed λ-calculus over a given signature.

Definition A.4 (Henkin model)

A Henkin model of a signature Σ is a pair (A, [[⋅]]) where A is a rich Henkin structure and [[⋅]] a function taking each term of \(\mathcal {L}_{\Sigma }\) of type σ and variable assignment g to an element of \(A^{\sigma }\), satisfying the following properties:

  1. [[c]]g ∈ \(A^{\sigma }\) for each constant c of type σ

  2. [[x]]g = g(x)

  3. [[α β]]g = [[α]]g([[β]]g)

  4. [[λx α]]g = the unique function f ∈ \(A^{\sigma \to \tau }\) such that f(d) = [[α]]g[x ↦ d] for every d ∈ \(A^{\sigma }\)

Notice that if there is a function satisfying the condition in the last clause, it is unique by the functionality of Henkin models. The fact that we have required the Henkin model to be rich guarantees that there is always at least one function that satisfies the condition; this follows from the point, noted above, that a rich Henkin structure contains every λ-definable function (see footnote 52).
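The four interpretation clauses can be rendered as a small recursive evaluator over a full structure (a sketch with my own term encoding; the paper itself only states the clauses abstractly):

```python
# Minimal evaluator for the four clauses: constants, variables (clause 2),
# application (clause 3), and λ-abstraction (clause 4) via assignment update.
def interpret(term, g, consts):
    kind = term[0]
    if kind == 'const':          # clause 1: [[c]]g is a fixed element of the domain
        return consts[term[1]]
    if kind == 'var':            # clause 2: [[x]]g = g(x)
        return g[term[1]]
    if kind == 'app':            # clause 3: [[α β]]g = [[α]]g([[β]]g)
        return interpret(term[1], g, consts)(interpret(term[2], g, consts))
    if kind == 'lam':            # clause 4: [[λx α]]g = the function d ↦ [[α]]g[x ↦ d]
        x, body = term[1], term[2]
        return lambda d: interpret(body, {**g, x: d}, consts)
    raise ValueError(kind)

# (λx. not x) ⊤ over the two-element domain for type t:
term = ('app',
        ('lam', 'x', ('app', ('const', 'not'), ('var', 'x'))),
        ('const', 'top'))
assert interpret(term, {}, {'not': lambda b: not b, 'top': True}) is False
```

Because the structure is full, the function built in the λ-clause is guaranteed to be in the relevant domain.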

To interpret higher-order logic we focus on the signature H = {→} ∪ {∀σ : σ ∈ Type}, where → has type t → t → t and ∀σ has type (σ → t) → t. From these the other logical operations may be defined in the usual way, e.g. ⊥ := ∀t(λp p), ¬ := λp (p → ⊥), and so on.

Definition A.5 (Logical Henkin model)

A logical Henkin model is a triple (A, [[⋅]], T) where (A, [[⋅]]) is a Henkin model, T ⊆ \(A^{t}\), and moreover,

  • [[→]](a)(b) ∈ T iff a ∉ T or b ∈ T

  • [[∀σ]](f) ∈ T iff f(a) ∈ T for every a ∈ \(A^{\sigma }\).

A term ϕ of type t is true in a logical Henkin model M if and only if [[ϕ]]g ∈ T for every assignment g.
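The two truth clauses are easy to check concretely on the simplest case, where the domain of type t is the two truth values and T designates truth (a sketch of that special case, not the general definition):

```python
# Truth clauses for → and ∀ over A_t = {False, True} with T = {True}.
T = {True}
A_t = [False, True]

imp = lambda a: lambda b: (a not in T) or (b in T)   # [[→]](a)(b) ∈ T iff a ∉ T or b ∈ T

def forall(f):                                       # [[∀]](f) ∈ T iff f(a) ∈ T for all a
    return all(f(a) in T for a in A_t)

assert imp(True)(False) is False                     # the one falsifying row
assert imp(False)(False) is True
assert forall(lambda p: imp(p)(p)) is True           # ∀p (p → p) is true
assert forall(lambda p: p) is False                  # ∀p p (i.e. ⊥) is false
```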

Logical Henkin models are models of higher-order logic: It is easily verified that every theorem of H is true in a logical Henkin model. Note that logical Henkin models do not encode any assumptions about how fine-grained propositions are. For example, our constraints ensure that [[¬¬ϕ]] and [[ϕ]] have the same truth value, but not that they are identical.

Example A.1 (Boolean models)

Suppose A is a rich Henkin structure, and that \(A^{t}\) is a Boolean algebra such that every subset of \(A^{t}\) which is the range of some function f ∈ \(A^{\sigma \to t}\) has a conjunction in \(A^{t}\). (This is satisfied, in particular, if \(A^{t}\) is a complete Boolean algebra.) We may then define a logical Henkin model as follows. Let \(\llbracket \forall _{\sigma }\rrbracket (f) = \bigwedge _{a\in A^{\sigma }} f(a)\) and [[→]](a)(b) = ¬a ∨ b (where ¬, ∨ and \(\bigwedge \) express the Boolean operations), and let T be an ultrafilter on \(A^{t}\) with the property that whenever the range of some f ∈ \(A^{\sigma \to t}\) is a subset of T, then its conjunction is also in T. It is easily verified that (A, [[⋅]], T) is a logical Henkin model, which moreover makes all of the theorems of HE true.
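A toy instance of this construction (my own choice of algebra, for illustration only): take \(A^{t}\) to be the power set of a two-element set, a four-element Boolean algebra, and T the principal ultrafilter of sets containing a chosen point. One can then verify exhaustively that the Boolean definition [[→]](a)(b) = ¬a ∨ b satisfies the logical-Henkin truth clause for →:

```python
# Four-element Boolean algebra: subsets of {1, 2}, with T = sets containing 1.
from itertools import chain, combinations

W = frozenset({1, 2})
A_t = [frozenset(s) for s in
       chain.from_iterable(combinations(W, r) for r in range(3))]
T = [a for a in A_t if 1 in a]                     # a principal ultrafilter

def imp(a, b):
    return (W - a) | b                             # ¬a ∨ b, the Boolean definition

# The truth clause for →: imp(a, b) ∈ T iff a ∉ T or b ∈ T.
for a in A_t:
    for b in A_t:
        assert (imp(a, b) in T) == (a not in T or b in T)
```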

The status of the functionality principle is more subtle. In a Henkin model, but not in an arbitrary applicative structure, if two elements f, g ∈ \(A^{\sigma \to \tau }\) output the same thing for every argument, they are identical. This ensures that we can close our theory under a weak functionality rule: if ⊩ ϕ x = ψ x then ⊩ ϕ = ψ. However, there are logical Henkin models in which the functionality axiom of Section 4 is false; we will attend to this matter later in Corollary A.1.

A.2 Kripke Logical Relations

We now introduce an important definition from Plotkin [36].

Definition A.6 (Kripke logical relation)

Let A be a Henkin structure, and let (W, R) be a reflexive transitive Kripke frame. A binary Kripke logical relation over (W, R) is, for each x ∈ W, a typed family of binary relations \(\sim ^{\sigma }_{x}\) over A σ with the following properties.

  • For every a, b ∈ A σ, if \(a\sim ^{\sigma }_{x} b\) and R x y then \(a\sim ^{\sigma }_{y}b\).

  • For f, g ∈ \(A^{\sigma \to \tau }\), \(f\sim ^{\sigma \to \tau }_{x}g\) if and only if \(fa\sim ^{\tau }_{y}gb\) for every y such that R x y and a, b ∈ A σ such that \(a\sim ^{\sigma }_{y}b\).

Clearly, once you have fixed the behavior of a Kripke logical relation on the base domain A t, its behavior is determined at all higher types. There is a close connection between Kripke logical relations and the Kripke semantics for intuitionistic logic, in which we talk of types being true at worlds, and in which a functional type στ is true at a world x only if τ is true at every σ world that is R-accessible from x. For convenience we shall continue to suppress the type superscripts when the types can be inferred from the context. The notion of Kripke logical relation we employ in the following is less general than Plotkin’s in a couple of respects. Firstly, we will restrict attention to Kripke logical relations generated by a family of equivalence relations \({\sim _{x}^{t}}\) on the base type. (It’s worth noting that, even if a Kripke logical relation is generated from an equivalence relation on the base type, it does not follow that it is an equivalence relation on the higher types.) Secondly, Plotkin’s Kripke logical relations can have any arity, whereas we restrict attention to binary relations.

Kripke logical relations were originally introduced by Plotkin in order to give a characterization of the λ-definable functions in a model of type theory. For our purposes, the most important result concerning Kripke logical relations is that every λ-definable function is invariant under every Kripke logical relation.

Theorem A.1 (Plotkin)

Let (A, [[ ]]) be a Henkin model of the empty signature, (W, R) a reflexive transitive Kripke frame, and ∼ a Kripke logical relation on A with respect to (W, R). Then for every closed term α, and world x ∈ W, [[α]] ∼ x [[α]].

Proof

Given two variable assignments g and h we say that g ∼ x h if g(v) ∼ x h(v) for every variable v ∈ Var.

We will prove, by induction on term complexity, the stronger hypothesis that if g ∼ x h then [[α]]g ∼ x [[α]]h for arbitrary terms α. The hypothesis is clearly true for variables.

Suppose that it is true for α and β of types σ → τ and σ respectively, and suppose that g ∼ x h. Then [[α]]g ∼ x [[α]]h by the inductive hypothesis. By the definition of a Kripke logical relation, that means that if R x y and a ∼ y b, then [[α]]g(a) ∼ y [[α]]h(b). In particular, since [[β]]g ∼ x [[β]]h by the inductive hypothesis, and R x x, it follows that [[α]]g([[β]]g) ∼ x [[α]]h([[β]]h). Thus [[α β]]g ∼ x [[α β]]h as required.

Now suppose that the hypothesis is true for α of type τ, let v be a variable of type σ, and suppose g ∼ x h. We wish to show that [[λ v α]]g ∼ x [[λ v α]]h, so suppose that R x y and a ∼ y b. Then g[v ↦ a] ∼ y h[v ↦ b], and thus by the inductive hypothesis [[α]]g[v ↦ a] ∼ y [[α]]h[v ↦ b]. This is just to say that [[λ v α]]g(a) ∼ y [[λ v α]]h(b), as required for the equivalence of functions. □

The restriction to the empty signature can be lifted if we additionally impose that \(\llbracket c_{\sigma }\rrbracket \sim _{x}^{\sigma } \llbracket c_{\sigma }\rrbracket \) for each constant c σ in the language.

A.3 Modalized Domains and Models of HFE

Here we define a general class of models for HFE. In these models propositions will be represented by sets of worlds. As we noted in Section 5.3, if there is contingent identity then there are principled reasons why the interpretations of functional types are not full (i.e. do not contain all functions between the source and target types). The models that follow are, in a natural sense, as full as they can be once you’ve laid down the structure of the broadest necessity — i.e. once you’ve specified the set of worlds from which propositions are constructed, and the accessibility relation representing the broadest necessity. The logic validated by this class of models thus has many of the properties of the logic of standard models: it will not, for example, be compact, or complete for any recursive axiomatic system.

To define these models we introduce an extension of Henkin structures that carries with it information about modal structure. Recall that a Henkin structure consisted of a type-indexed collection of domains A σ, where a domain is just an ordinary set. Instead of bare sets, our domains will be sets equipped with certain world-indexed relations of the sort figuring in Kripke logical relations. Let \(\mathcal {F}= (W,R)\) be a Kripke frame.

Definition A.7 (Modalized domains)

Let \(\mathcal {F} = (W,R)\) be a transitive reflexive Kripke frame. A modalized domain based on \(\mathcal {F}\) is an ordered pair \(A = (|A|, \sim _{\cdot }^{A})\) such that:

  • |A| is a set

  • For each x ∈ W, \({\sim _{x}^{A}}\) is an equivalence relation on |A|.

  • Whenever a, b ∈ |A|, R x y and a ∼ x b, then a ∼ y b.

We shall drop the superscript from \({\sim _{x}^{A}}\) when it is clear from context.

Roughly, a modalized domain is a set of elements, along with a notion of identity, ∼ x, telling us which elements of |A| are identical at the world x. Given two modalized domains on \(\mathcal {F}\), A = (|A|, ∼A) and B = (|B|, ∼B), we understand a function between them to be a mapping which preserves identity at each world:

Definition A.8 (Mapping between modalized domains)

If A and B are modalized domains, we write f : AB to mean:

  • f : |A| → |B|

  • For all w ∈ W, if a ∼ w b then f a ∼ w f b

Given this one can define the ‘full’ function space between two modalized domains as follows:

Definition A.9 (Full function space for modalized domains)

If (|A|, ∼A) and (|B|, ∼B) are modalized domains on \(\mathcal {F}\) we define the full function space, (|A ⇒ B|, ∼A⇒B), as follows:

  • |AB| = {ff : AB}

  • \(f\sim _{x}^{A\Rightarrow B} g\) if and only if, for each a, b ∈ |A| and each y such that R x y, if \(a{\sim _{y}^{A}} b\) then \(fa {\sim _{y}^{B}} gb\)

Note that f : A ⇒ B if and only if \(f \sim _{x}^{A\Rightarrow B} f\), meaning that \(\sim _{x}^{A\Rightarrow B}\) is an equivalence relation on |A ⇒ B| for each x ∈ W. It is similarly straightforward to show that \(\sim _{x}^{A\Rightarrow B} \subseteq \sim _{y}^{A\Rightarrow B}\) whenever R x y, showing that A ⇒ B is indeed a modalized domain.

For the function space, the notion of identity on A and B respectively is lifted to functions between A and B using the rule for Kripke logical relations. The condition for functions to belong to |A ⇒ B| is just the condition that functions preserve identity at each world x (a condition that, of course, must obtain if we are to validate Leibniz’s law). Note that Plotkin’s theorem says that each closed term built only out of λs and variables satisfies [[α]] ∼ x [[α]]. This corresponds to the idea that λ-definable functions will ‘necessarily preserve identity’: for by the definition of ∼ x between functions, the above means that whenever R x y and a ∼ y b, [[α]](a) ∼ y [[α]](b).
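For readers who wish to experiment with these definitions, the following sketch (in Python, and no part of the formal development) builds the full function space over a toy two-world frame, using the propositional base domain of Definition A.11, and checks the observation above: membership in |A ⇒ B| — i.e. being a mapping in the sense of Definition A.8 — coincides with having f ∼ x f at every world. All names are illustrative.

```python
from itertools import product

# A toy reflexive transitive frame: two worlds, with 0 able to see 1.
W = (0, 1)
R = {(0, 0), (0, 1), (1, 1)}
Rsucc = lambda x: frozenset(y for y in W if (x, y) in R)

# Base modalized domain, modeled on the propositional type of Definition A.11:
# |A| = P(W), and p ~_x q iff p and q agree on R(x).
A = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
sim = lambda x, p, q: p & Rsucc(x) == q & Rsucc(x)

# f ~_x^{A=>B} g, lifted by the Kripke-logical-relation rule (Definition A.9).
def sim_fun(x, f, g):
    return all(sim(y, f[a], g[b])
               for y in Rsucc(x) for a in A for b in A if sim(y, a, b))

# f : A -> B means f preserves ~_w at every world (Definition A.8).
def is_map(f):
    return all(sim(w, f[a], f[b]) for w in W for a in A for b in A if sim(w, a, b))

maps = [dict(zip(A, vals)) for vals in product(A, repeat=len(A))]

# The note after Definition A.9: f is in |A => B| iff f ~_x f at each world x.
for f in maps:
    assert is_map(f) == all(sim_fun(x, f, f) for x in W)

full_space = [f for f in maps if is_map(f)]
```

In this tiny instance exactly 64 of the 256 set-theoretic maps survive into |A ⇒ B|, illustrating the point of Section 5.3 that the function domains are not full.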

Definition A.10 (Full modalized structure)

A full modalized structure is a type-indexed collection of modalized domains, A σ, with the property that:

  • A στ = A σA τ

It should be stressed that when you throw away the W-indexed equivalence relations, a full modalized structure is just a special kind of (non-full) Henkin structure, B, given by setting B σ := |A σ|, and that the relations associated with each modalized domain form a Kripke logical relation on B. Indeed, we could have defined the class of Henkin models we were interested in directly, without the detour through modalized domains. However, it is often helpful to carry the information encoded by the equivalence relations around with the domains, and doing so fixes the behavior of the function space constructor uniquely. In the following, we shall move between the modalized structure and the ordinary Henkin structure it corresponds to without comment.Footnote 53

The definitions of a rich structure, a model, and a logical model over modalized structures follow the same sequence of definitions as in Section A.1. In particular, a full modalized structure A is rich, a model, or a logical model iff the corresponding Henkin structure with domains |A σ| is rich, a model, or a logical model. As before, a logical model on a modalized structure does not include any assumptions about how fine-grained the elements of the propositional type are. However, in the intended models, to be described shortly, there is a tight connection between the frame (W, R) and the interpretation of the propositional type (A t, ∼t).

Proposition A.2

Every full modalized structure is rich.

Proof

To show that a modalized structure A is rich we must show that the S στυ and K στ combinators belong to the corresponding domains in A. To show that \(K_{\sigma \tau }\in A^{\sigma \to \tau \to \sigma }\), for example, we must show that K στ preserves the Kripke logical relations associated with A σ and \(A^{\tau \to \sigma }\) respectively. Indeed, this preservation holds for all closed terms, by the same reasoning employed in Plotkin’s Theorem A.1 (we shall not reproduce it here). □

In what follows we write R(x) to abbreviate {y ∈ W ∣ R x y}, for a relation R ⊆ W × W. We are now in a position to describe, for each transitive reflexive Kripke frame (W, R), the intended model for HFE for that frame (we may think of this as the intended model, on the assumption that modal reality is accurately represented by (W, R)).

Definition A.11 (Intended models)

Let \(\mathcal {F}\) be a pointed Kripke frame (W, R, @) where (W, R) is a transitive reflexive frame, and @ ∈ W the designated world. Then (W, R, @) determines a unique logical model, \(M_{\mathcal {F}} = (A,\llbracket \cdot \rrbracket , T)\), based on a full modalized structure A. It is constructed as follows. It suffices to: (i) say which modalized domain we use to interpret the base type, A t, (ii) specify T, and (iii) specify the interpretations of ∀ and →.

For (i) we get a modalized domain (A t, ∼t) as follows:

  • \(A^{t} = \mathcal {P}(W)\)

  • \(p {\sim _{x}^{t}} q\) iff p ∩ R(x) = q ∩ R(x).

In other words, p and q are identical at x iff they are necessarily equivalent at x. For (ii) and (iii) we follow Example A.1:

  • p ∈ T if and only if @ ∈ p.

  • \(\llbracket \forall _{\sigma }\rrbracket (f) = \bigcap _{a\in A^{\sigma }} f(a)\)

  • [[ → ]](p)(q) = (W ∖ p) ∪ q

For each transitive reflexive pointed Kripke frame \(\mathcal {F}\) we denote the corresponding model of higher-order logic by \(M_{\mathcal {F}}\). We let \(\mathcal {C} := \{M_{\mathcal {F}}\mid \mathcal {F}\) a transitive reflexive pointed Kripke frame}.

To show that the above really is a model we must show that \(M_{\mathcal {F}}\) is rich, and moreover contains the interpretations of ∀ σ and → above. (In what follows we assume a fixed frame \(\mathcal {F}\), and we omit the subscript from \(M_{\mathcal {F}}\) accordingly.)

Proposition A.3

M is a model of HE.

Proof

That M is rich follows from Proposition A.2. It remains to show that the interpretations of ∀ σ and → given by Definition A.11 really are mappings between modalized domains: \(\llbracket \forall _{\sigma }\rrbracket : A^{\sigma \to t}\Rightarrow A^{t}\) and \(\llbracket \to \rrbracket : A^{t}\Rightarrow A^{t\to t}\). That is to say, we must show that these functions preserve ∼ w at each world.

The case of → is straightforward. For ∀ σ, suppose f, g ∈ \(A^{\sigma \to t}\) and f ∼ x g. For each a ∈ A σ, a ∼ x a, so f a ∼ x g a. Expanding the definition of ∼ x, this means that R(x) ∩ f a = R(x) ∩ g a for each a, and so \(R(x)\cap \bigcap _{a\in A^{\sigma }} fa = R(x) \cap \bigcap _{a\in A^{\sigma }} ga\). That is to say, [[∀ σ ]](f) ∼ x [[∀ σ ]](g).

M is thus a logical Henkin model, in which A t is a complete Boolean algebra, and T an ultrafilter on A t, as described earlier. Thus M is a model of HE. □
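The preservation facts used in this proof can be checked exhaustively in a small instance. The sketch below (Python; a toy calculation over an assumed two-world frame with σ = t, not part of the paper's development) verifies that [[ → ]] and [[∀ t ]] of Definition A.11 preserve ∼ x:

```python
from itertools import product

# Toy frame and intended base domain (Definition A.11): worlds {0,1}, 0 R 1.
W = (0, 1)
Rsucc = {0: frozenset(W), 1: frozenset({1})}
A_t = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
sim_t = lambda x, p, q: p & Rsucc[x] == q & Rsucc[x]

# [[->]](p)(q) = (W \ p) u q preserves ~_x: the output restricted to R(x)
# depends only on the restrictions of p and q to R(x).
Wset = frozenset(W)
imp = lambda p, q: (Wset - p) | q
for x in W:
    for p, p2, q, q2 in product(A_t, repeat=4):
        if sim_t(x, p, p2) and sim_t(x, q, q2):
            assert sim_t(x, imp(p, q), imp(p2, q2))

# The toy function space |A^{t=>t}|: maps preserving ~_w at every world.
def preserves(f):
    return all(sim_t(w, f[a], f[b])
               for w in W for a in A_t for b in A_t if sim_t(w, a, b))

space = [f for vals in product(A_t, repeat=4)
         if preserves(f := dict(zip(A_t, vals)))]

# f ~_x g for functions, per the Kripke-logical-relation rule.
def sim_fun(x, f, g):
    return all(sim_t(y, f[a], g[b])
               for y in Rsucc[x] for a in A_t for b in A_t if sim_t(y, a, b))

# [[forall_t]](f) = the intersection of f(a) over a in A^t, and it sends
# ~_x-related functions to ~_x-related propositions (Proposition A.3).
forall = lambda f: frozenset.intersection(*[f[a] for a in A_t])
for x in W:
    for f in space:
        for g in space:
            if sim_fun(x, f, g):
                assert sim_t(x, forall(f), forall(g))
```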

The next proposition shows that propositional identity in our model, which is defined by Leibniz equivalence — ∀X(X pX q) — amounts to the same thing as necessary equivalence relative to the modality defined by the accessibility relation R. In particular x ∈ [[∀X(X pX q)]] if and only if every world accessible to x belongs to [[pq]]. This also has the consequence that x ∈ [[L A]] if and only if y ∈ [[A]] for every y such that R x y — that is, L is governed by a standard Kripke semantics in terms of the accessibility relation R.

Proposition A.4

Let a, b ∈ |A σ|. Then for each x ∈ W, a ∼ x b if and only if, for every f ∈ \(|A^{\sigma \to t}|\), x ∈ f(a) ⇔ x ∈ f(b). In particular, when σ = t, Leibniz equivalence corresponds to being necessarily equivalent (relative to R) in our model.

Proof

Suppose that a ∼ x b and let f ∈ \(|A^{\sigma \to t}|\). Since f preserves ∼ x it follows that f(a) ∼ x f(b) — i.e. f(a) ∩ R(x) = f(b) ∩ R(x). Since R is reflexive, x ∈ R(x) and so x ∈ f(a) if and only if x ∈ f(b).

Conversely, suppose that a ≁ x b. Define a function f as follows: f(X) := {y ∣ X ∼ y a}. Clearly x ∈ f(a) and x ∉ f(b). It remains to show that f ∈ \(|A^{\sigma \to t}|\). That amounts to showing f : A σ ⇒ A t, or, more explicitly, that f preserves ∼ z for each world z.

Suppose, then, that X ∼ z Y. We want to show that f(X) ∼ z f(Y): that every f(X) world accessible from z is an f(Y) world, and conversely. Let R z w, and suppose that w ∈ f(X). By the definition of f that means X ∼ w a. Since X ∼ z Y, X ∼ w Y, and since ∼ w is an equivalence relation, Y ∼ w a. So w ∈ f(Y). The converse direction proceeds in exactly the same way, so f(X) ∼ z f(Y). □
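Proposition A.4 can likewise be verified exhaustively in a small instance. The following sketch (Python; a finite toy model is assumed, with σ = t, so it is an illustration rather than a proof) checks that ∼ x coincides with Leibniz equivalence at each world:

```python
from itertools import product

# Toy intended model over the frame 0 R 1 (both worlds reflexive).
W = (0, 1)
Rsucc = {0: frozenset(W), 1: frozenset({1})}
A_t = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
sim = lambda x, p, q: p & Rsucc[x] == q & Rsucc[x]

# |A^{t=>t}|: all maps preserving ~_w at every world.
def preserves(f):
    return all(sim(w, f[a], f[b])
               for w in W for a in A_t for b in A_t if sim(w, a, b))

space = [f for vals in product(A_t, repeat=4)
         if preserves(f := dict(zip(A_t, vals)))]

# Proposition A.4 at sigma = t: a ~_x b iff every f in |A^{t=>t}| agrees,
# at x, on whether it applies to a and to b (Leibniz equivalence at x).
for x in W:
    for a in A_t:
        for b in A_t:
            leibniz = all((x in f[a]) == (x in f[b]) for f in space)
            assert leibniz == sim(x, a, b)
```

The witnessing function used in the converse direction of the proof, f(X) = {y ∣ X ∼ y a}, is among the maps enumerated in `space`, which is why the exhaustive check succeeds.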

Corollary A.5

The functionality principle is true in M.

Proof

For any given world x, we want to show that x ∈ [[∀x(F x = G x) → F = G]], recalling again that = is short for Leibniz equivalence.

By Proposition A.4 it suffices to show that if \(f, g\in A^{\sigma \to \tau }\) and f a ∼ x g a for every a ∈ A σ, then f ∼ x g, since ∼ x corresponds to Leibniz equivalence in our model. So suppose the hypothesis, and let a ∼ x b. Since f and g are in M they preserve ∼ x, and so f a ∼ x f b and g a ∼ x g b; moreover f b ∼ x g b by hypothesis. Since ∼ x is an equivalence relation, f a ∼ x g b, and since this holds for every such a and b, f ∼ x g as required. □

A.4 The Completeness of S4

We can now prove Theorem 5.1 as a corollary. Let \(\mathcal {F}=(W,R,@)\) be a pointed Kripke frame.

Corollary A.6

A sentence of \(\mathcal {L}_{L}\) is true in a pointed Kripke model \((\mathcal {F}, \llbracket \cdot \rrbracket )\) iff it is true in the corresponding model of higher-order logic \(M_{\mathcal {F}}\).

Proof

Here we construct \(M=M_{\mathcal {F}}\) as above, except we also need to provide interpretations [[P]]M for the propositional letters; these interpretations may simply be transferred from the Kripke model. By Proposition A.4 we know that w ∈ [[L A]]M iff x ∈ [[A]] for every x such that R w x, provided [[A]]M = [[A]], and so by a simple induction we can show that for any formula 𝜃 of the modal language \(\mathcal {L}_{L}\) (defined in Section 4), w ∈ [[𝜃]]M if and only if w ∈ [[𝜃]]. □

Now if ϕ is not provable in S4, it is false in some pointed Kripke model \(\mathcal {F}\) with a transitive and reflexive accessibility relation (see Hughes and Cresswell [25]). So ϕ is false in \(M_{\mathcal {F}}\). Since \(M_{\mathcal {F}}\) satisfies HFE, it follows that one cannot prove ϕ from HFE. Conversely, we have shown in Section 4 that every theorem of S4 is provable in HFE, establishing Theorem 5.1.
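The role of transitive reflexive frames here can be illustrated directly: under the Kripke clause for L established above, the T and 4 axioms hold on any such frame, while the characteristic S5 axiom can fail. The following sketch (Python; the four-world linear frame is an illustrative assumption, not drawn from the paper) checks this:

```python
from itertools import product

# A small reflexive transitive frame (the linear order on {0,1,2,3}) and the
# Kripke clause for L from Corollary A.6: x is in box(p) iff R(x) ⊆ p.
W = (0, 1, 2, 3)
R = {(x, y) for x in W for y in W if x <= y}

box = lambda p: frozenset(x for x in W if all(y in p for y in W if (x, y) in R))
dia = lambda p: frozenset(W) - box(frozenset(W) - p)

for bits in product([False, True], repeat=len(W)):
    p = frozenset(x for x, keep in zip(W, bits) if keep)
    assert box(p) <= p            # T: Lp -> p   (reflexivity of R)
    assert box(p) <= box(box(p))  # 4: Lp -> LLp (transitivity of R)

# The S5 axiom <>p -> L<>p fails on this frame; p = {0} is a witness,
# matching the claim that the logic validated is no stronger than S4.
p5 = frozenset({0})
assert dia(p5) == frozenset({0}) and box(dia(p5)) == frozenset()
```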

A.5 Modalized Functionality

Although we found the functionality principle to be attractive, and consequently adopted it as a working hypothesis, many of the results in this paper do not rest on it. Here is an extremely natural way to weaken the functionality principle:

Modalized Functionality :

\(L\,\forall x(Fx =_{\tau } Gx) \to F =_{\sigma \to \tau } G\)

Modalized Functionality is sufficient to prove the uniqueness of identity and the broadest necessity, and so is sufficient to prove, with the exception of the Barcan formula, the results in Section 4 given HE. Footnote 54

It is relatively simple to tweak our models to generate a more general class of models that invalidate Functionality and the Barcan formula, but validate Modalized Functionality. Here we just outline the basic theory, leaving a more thorough treatment to future work. A partial equivalence relation, or PER, on a set D is a transitive symmetric relation on D. A partial equivalence relation on a domain D can equivalently be thought of as an equivalence relation on some subset of D (namely, the set {x ∈ D ∣ x ∼ x}, where ∼ is a PER in the first sense).Footnote 55

Definition A.12 (Expanding modalized domain)

An expanding modalized domain for a frame (W, R) is a pair A = (|A|, ∼) where:

  • |A| is a set

  • ∼ is a W-indexed set of PERs on |A| such that every a ∈ |A| is in the field of some ∼ w.

  • For all a, b ∈ |A|, and x, y such that R x y, if a ∼ x b then a ∼ y b

Our choice of name for our domains is justified as follows:

Definition A.13

The inner domain, D(w), of an expanding modalized domain A at a world w is D(w) := {a ∈ |A| ∣ a ∼ w a}.

It follows straightforwardly from Definition A.12 that the inner domains are expanding in the sense that D(x) ⊆ D(y) whenever R x y.

The full function space between expanding modalized domains is defined as follows

Definition A.14

Given expanding modalized domains, A and B, define AB by:

  • |AB| = {f : |A| → |B|∣∃xW such that ∀yW with R x y and ∀a, b ∈|A|, if a y b, f a y f b},

  • \(f\sim _{x}^{A\Rightarrow B} g\) iff, for each a, b ∈|A|and each y such that R x y, if \(a{\sim _{y}^{A}} b\) then \(fa {\sim _{y}^{B}} gb\).

Note the quantifiers in the definition of |A ⇒ B|: functions don’t need to preserve the PER at every world, they merely need to preserve it from some world onwards (it is worth thinking about why this must be so).Footnote 56

The notions of richness, model, and logical model carry over as before. Following Definition A.11, one can define a class of intended models for each pointed frame as follows. A model based on a frame (W, R, @) consists of an expanding modalized Henkin structure A, a truth set T, and interpretations of the logical operations, \(\llbracket \forall _{\sigma }\rrbracket \in A^{(\sigma \to t)\to t}\), \(\llbracket \to \rrbracket \in A^{t\to t\to t}\), subject to the following constraints:

  • \(A^{t} = \mathcal {P}(W)\)

  • If p, qD(w) then: \(p{\sim _{w}^{t}} q\) iff R(w) ∩ p = R(w) ∩ q.

  • \({\sim ^{t}_{x}}\) is a congruence with respect to arbitrary Boolean operations:

    • W ∖ p ∼ x W ∖ q whenever p ∼ x q

    • \(\bigcap _{i\in I} p_{i} \sim _{x} \bigcap _{i\in I} q_{i}\) whenever p i x q i for every iI

  • p ∈ T iff @ ∈ p.

  • [[ → ]](p)(q) = (W ∖ p) ∪ q

  • [[∀ σ ]](f) = {w ∈ W ∣ w ∈ f(a) for every a ∈ D σ(w)}

  • \(\llbracket \forall _{\sigma }\rrbracket \in |A^{(\sigma \to t)\to t}|\)

Note that the last condition amounts to the requirement that [[∀ σ ]] preserve ∼ at all worlds. This condition was automatically satisfied in all the models \(M_{\mathcal {F}}\in \mathcal {C}\) of Functionality (as proved in Proposition A.3). In this context, however, the constraint is not always satisfied. Consider, for example, the frame (3, ≤, 0) where 3 = {0, 1, 2} and ≤ is the usual ordering of the natural numbers. Then setting \({\sim ^{t}_{0}} = {\sim ^{t}_{2}} = \{(W,W), (\emptyset ,\emptyset )\}\), and \(p{\sim ^{t}_{1}} q\) iff p ∩ R(1) = q ∩ R(1), it can be verified that the sentence ϕ := ∃p∃q∃r(p ≠ q ∧ q ≠ r ∧ p ≠ r), saying that there are at least three propositions, is true at world 1 only. Moreover, in order to have a model we must have [[ψ]] ∈ D t(0) for every closed sentence ψ of the language, or else the principle of universal instantiation for the propositional quantifiers would not in general hold. This condition fails in this model, since [[ϕ]] = {1} and {1} ≁ 0 {1}. The diagnosis in this case is that [[∀ σ ]] does not preserve ∼ x at every world. Doubtless more needs to be said here; however, a proper investigation of these models would take us too far afield.
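The counting claims in this example can be checked mechanically. The sketch below (Python; it approximates the truth of ϕ at a world w by counting ∼ w -classes of the inner domain, which is the intended reading, and is offered only as an illustration) verifies that ϕ holds at world 1 alone and that [[ϕ]] is not in the field of ∼ 0:

```python
# A finite check of the counting claims for the frame (3, <=, 0), with the
# PERs on A^t = P(W) as stipulated in the text.
W = (0, 1, 2)
Rsucc = {0: frozenset({0, 1, 2}), 1: frozenset({1, 2}), 2: frozenset({2})}
Wset, empty = frozenset(W), frozenset()
powerset = [frozenset(s) for s in
            [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]]

def sim(x, p, q):
    if x == 1:                    # ~_1: agreement on R(1)
        return p & Rsucc[1] == q & Rsucc[1]
    return (p, q) in {(Wset, Wset), (empty, empty)}   # ~_0 = ~_2, as stipulated

# Inner domains, and the number of pairwise non-identical propositions at w.
D = {x: [p for p in powerset if sim(x, p, p)] for x in W}

def num_props(x):
    reps = []
    for p in D[x]:
        if not any(sim(x, p, r) for r in reps):
            reps.append(p)
    return len(reps)

# phi, "there are at least three propositions", holds exactly at world 1 ...
assert [num_props(x) for x in W] == [2, 4, 2]
truth_set = frozenset(x for x in W if num_props(x) >= 3)
assert truth_set == frozenset({1})

# ... but [[phi]] = {1} is not in the field of ~_0, so [[phi]] is not in D^t(0).
assert not sim(0, truth_set, truth_set)
```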

A.6 The Behavior of Bijections in Models of HFE

The sorts of models described in this appendix can be used to show other independence results. We end by briefly describing how to use these techniques to construct a model (mentioned in Section 5.2) in which there is a bijection of type et with no inverse of type te. Here we use a Kripke frame (W, R) where \(W=\mathbb {N}\) and R = ≤.

  • Let \(A^{e} = A^{t} = P(\mathbb {N})\).

  • For \(x,y\in \mathbb {N}\) let R x y iff x ≤ y. (Any preorder that isn’t an equivalence relation would work here.)

Now we define modalized domains for the base types as follows:

  • For a, b ∈ A e, a ∼ w b iff a = b

  • For a, b ∈ A t, a ∼ w b iff R(w) ∩ a = R(w) ∩ b.

As before we obtain modalized domains for the higher types using the modalized function space construction. That is:

  • For f, g ∈ \(A^{\sigma \to \tau }\), f ∼ w g iff for every x such that R w x, and every a, b ∈ A σ such that a ∼ x b, f a ∼ x g b.

  • \(A^{\sigma \to \tau } = \{f : A^{\sigma }\to A^{\tau }\mid f\sim _{w} f\) for every w}

By setting @ := 0 we can define a logical modalized Henkin model as above which makes all of the theorems of HFE true.

It is immediate that any bijective function f : A e → A t preserves ∼ x for each world x, and thus that f ∼ x f, because \({\sim _{x}^{e}}\) is just identity. So f ∈ A et. It should be noted that two propositions a, b ∈ A t are Leibniz equivalent at 0 (0 ∈ [[x = y]]g[x ↦ a, y ↦ b]) if and only if a = b, since by Proposition A.4 a and b are Leibniz equivalent at 0 iff they are necessarily equivalent at 0. It is then easy to verify that a function f satisfies the object language statement that f is a bijective concept of type et at 0 iff f is in fact a bijective function.

Note that bijections are at best contingently bijections. A bijection from A e to A t does not count as ‘bijective’ at any world > 0. Let \(a=\mathbb {N}\) and \(b=\mathbb {N}\setminus \{0\}\). Then a ∼ 1 b in A t; but since f is a bijection, \(f^{-1}(a)\neq f^{-1}(b)\), so \(f^{-1}(a)\) and \(f^{-1}(b)\) are not identical at 1 in A e. f thus fails to be injective at 1, because it takes things that are distinct at 1 of type e to things that are identical at 1 of type t.

This example also shows that no bijection can belong to A te, since to belong to this domain you must preserve ∼ 1, yet \(a{\sim _{1}^{t}} b\) while for any bijection f, f a ≠ f b (since a and b are distinct), and so \(fa {\not \sim _{1}^{e}} fb\). Since no bijective function belongs to A te, the claim that there’s a bijective function of type te is false at 0 (because, as noted above, a function counts as bijective at 0 iff it’s a bijection). The object language claim ‘no bijection of type et has an inverse’ is also true in this model for similar reasons.
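The argument of this section can be replayed in a finite analogue. The sketch below (Python) replaces the paper's W = ℕ with the three-world frame {0, 1, 2} under ≤ — an assumption made purely to allow exhaustive search — and confirms that no bijection from A t to A e preserves ∼ 1:

```python
from itertools import permutations

# Finite analogue of the A.6 construction: W = {0,1,2} with R = <=, in place
# of the paper's W = N (a simplifying assumption, for exhaustive search).
W = (0, 1, 2)
Rsucc = {0: frozenset({0, 1, 2}), 1: frozenset({1, 2}), 2: frozenset({2})}

# A^e = A^t = P(W); ~^e_w is identity, ~^t_w is agreement on R(w).
dom = [frozenset(s) for s in
       [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]]
sim_t = lambda w, p, q: p & Rsucc[w] == q & Rsucc[w]

# a and b are distinct propositions that are identical at world 1.
a, b = frozenset({0, 1, 2}), frozenset({1, 2})
assert a != b and sim_t(1, a, b)

# No bijection f : A^t -> A^e preserves ~_1: it must send the ~_1-identical
# a and b to distinct elements, and ~_1^e is just identity.
def preserves_sim1(f):
    return all(f[p] == f[q] for p in dom for q in dom if sim_t(1, p, q))

has_preserving_bijection = any(preserves_sim1(dict(zip(dom, perm)))
                               for perm in permutations(dom))
assert not has_preserving_bijection
```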


Bacon, A. The Broadest Necessity. J Philos Logic 47, 733–783 (2018). https://doi.org/10.1007/s10992-017-9447-9
