1 Introduction

Call a quantifier ‘unrestricted’ if it ranges over absolutely all objects. In ordinary discourse, our quantifiers are often restricted to some contextually salient domain. For instance, suppose that someone asserts ‘Everyone came to the party’. Ordinarily, we would not take this assertion to imply that everyone in the entire world came to the party. We would interpret the quantifier as being restricted to some contextually relevant class of people, say, everyone who was invited to the party. But are our quantifiers always restricted in this way? Prima facie, the answer appears to be ‘no’. There are various contexts where it would be natural to take the quantifiers to be unrestricted. For instance, when a philosopher asserts ‘Everything is self-identical’ or ‘There are no abstract objects’, we typically take the quantifiers to range over all objects whatsoever.

However, as soon as we try to make reflective sense of unrestricted quantification, we quickly find ourselves in deep water. In ordinary model-theoretic semantics, the domain of quantification is typically taken to be a set. But according to our best theory of sets, viz. Zermelo-Fraenkel set theory, there is no set of all sets, and therefore there is no set of all objects. The move to a class theory, say Morse-Kelley class theory, is of little help either. Although there is a class of all sets, there is no class of all classes. Hence, if classes are objects, there is no class of all objects. As I will rehearse in Sect. 2, the problem is quite general: it matters little whether domains are taken to be sets, classes, properties, or objects of some other sort. As long as domains are taken to be objects, and certain prima facie plausible constraints are imposed, it becomes difficult to develop a general semantic theory that admits the existence of an interpretation whose domain contains all objects whatsoever.

It has been argued that using a type theory as a framework for our semantic theory provides a solution to this problem, at least if a broadly Fregean interpretation of type theory is assumed (Rayo and Uzquiano 1999; Williamson 2003; Rayo and Williamson 2003; Rayo 2006; Florio and Jones 2021). However, various concerns about type theory have been voiced in the literature. For instance, some philosophers have questioned the intelligibility of (the required Fregean interpretation of) type theory, while others have criticised it for various expressive limitations (Gödel 1983; Bealer 1982; Chierchia 1985; Menzel 1986; Chierchia and Turner 1988; Linnebo 2006; Weir 2006). It is for these reasons that it is desirable to develop an alternative solution. In this paper I will introduce a type-free theory of properties that can be used to vindicate unrestricted quantification. One of my main aims is to show that this theory emerges fairly naturally by reflecting on the features on which the type-theoretic solution of the problem of unrestricted quantification relies. I discuss some of the more formal aspects of this theory in a technical companion paper (Picenni and Schindler in press).

The present paper is structured as follows. In Sect. 2 I rehearse the problem of developing a general semantic theory that admits the existence of an interpretation whose domain is unrestricted. In Sect. 3 I examine the type-theoretic resolution of this problem. In Sect. 4 I present my type-free alternative. In Sect. 5 I show that this alternative system preserves the deductive strength of classical strict type theory. I conclude in Sect. 6 by briefly discussing the costs and benefits of both approaches.

I should stress at the outset that this paper is concerned with only one—though rather important—problem standing in the way of unrestricted quantification. I will not deal with other objections that have been levelled against the possibility of unrestricted quantification.Footnote 1 For instance, it has been argued that ontology is relative to a conceptual framework. Hence, even if the logical problem that we will deal with in the present paper is resolved, the most that we can hope for is quantification over domains that are unrestricted relative to some conceptual framework. Although I believe that, ultimately, these arguments can be resisted, each of them requires substantial discussion in its own right. But this is a task for another occasion.

2 The problem

Let L be some given object language. The problem that we will be concerned with in this paper is the problem of making reflective sense of the idea that the quantifiers of L can be unrestricted, at least on certain interpretations of L. Very roughly, I will understand the activity of semantic reflection on a language, or of interpreting a language, as the construction of a theory of what is expressed by the sentences of that language (under that interpretation). I will call such a theory a ‘semantic theory’ for L. To narrow down the scope of my investigation, I will assume that L is an ordinary first-order language. Moreover, I will assume that the metalanguage ML—the language in which we are to construct a semantic theory for L—is a formal language as well. For the moment, we will assume that ML is an ordinary first-order language, but later on we will consider other options as well.

Now, initially, there appears to be no major obstacle in providing an interpretation for a given object language L in which the quantifiers are taken to be unrestricted. For we could specify the truth-conditions of the sentences of L directly in an inductive manner.Footnote 2 The idea is familiar from the works of Tarski (1956) and Davidson (1967). This method does not require us to assign certain objects as semantic values to the expressions of the object language. Rather, each sentence of the object language is ‘matched’ with a corresponding sentence of the metalanguage that specifies its truth conditions.

Suppose that our object language L is an ordinary first-order language whose primitive predicates are \(P_1, \ldots , P_k\). For simplicity, let us assume that the \(P_i\) are unary and that L contains no individual constants or function symbols. Let \(D, F_1, \ldots , F_k\) be formulas of our metalanguage ML with exactly one free variable. Then we can easily specify an interpretation according to which \(P_i\) means \(F_i\) (for each i) and the quantifiers of L range over the Ds as follows.

Let \((D, \mathbf{F} )\) abbreviate \((D, F_1, \ldots , F_k)\). Where \(\varphi\) is a formula of L, we define \([\varphi ]^{(D, \mathbf{F} )}\) recursively as follows.Footnote 3 (Note that this definition is carried out in the metalanguage ML.)

  • \([P_ix]^{(D, \mathbf{F} )}\,\equiv \,F_i(x)\)

  • \([\lnot \,\varphi ]^{(D, \mathbf{F} )}\,\equiv \,\lnot \,[\varphi ]^{(D, \mathbf{F} )}\)

  • \([\varphi \,\wedge \,\psi ]^{(D, \mathbf{F} )}\,\equiv \,[\varphi ]^{(D, \mathbf{F} )}\,\wedge \,[\psi ]^{(D, \mathbf{F} )}\)

  • \([\forall x\,\varphi ]^{(D, \mathbf{F} )}\,\equiv \,\forall x \,(D(x)\,\rightarrow \,[\varphi ]^{(D, \mathbf{F} )})\)

For purposes of illustration, consider a basic object-language generalisation \(\forall x\,P_i(x)\). It follows from the above definition that \([\forall x\,P_i(x)]^{(D, \mathbf{F} )}\) is equivalent to the metalanguage sentence \(\forall x\,(D(x)\,\rightarrow \,F_i(x))\).
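The clauses above can also be read as a simple recursive translation procedure. The following sketch is merely illustrative and not part of the paper's formal apparatus: object-language formulas are encoded as nested tuples, metalanguage formulas as strings, and the names interpret, D, and F_1 are my own.

```python
def interpret(phi, D, F):
    """Return the metalanguage formula [phi]^(D,F) as a string.

    phi is a nested tuple encoding an object-language formula;
    D and F[i] are metalanguage predicate names (strings).
    """
    kind = phi[0]
    if kind == "P":                       # atomic:  P_i x   becomes  F_i(x)
        _, i, var = phi
        return f"{F[i]}({var})"
    if kind == "not":
        return f"¬{interpret(phi[1], D, F)}"
    if kind == "and":
        return f"({interpret(phi[1], D, F)} ∧ {interpret(phi[2], D, F)})"
    if kind == "forall":                  # quantifier relativised to D
        _, var, body = phi
        return f"∀{var} ({D}({var}) → {interpret(body, D, F)})"
    raise ValueError(f"unknown connective: {kind}")

# [∀x P_1 x]^(D,F)  comes out as  ∀x (D(x) → F_1(x))
print(interpret(("forall", "x", ("P", 1, "x")), "D", {1: "F_1"}))
```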

Now, where \(\varphi\) is a closed formula of L, i.e. a sentence, we can define (again, in ML):

$$\begin{aligned} \ulcorner {\varphi }\urcorner \, \text {is true in}\, (D, \mathbf{F} )\,\equiv \,[\varphi ]^{(D, \mathbf{F} )} \end{aligned}$$

Now, assume that the quantifiers of our metalanguage are unrestricted. I take this to be a plausible, although defeasible position. Given this assumption, we can easily provide an interpretation in which the quantifiers of L are unrestricted by letting D(x) be the formula \(x=x\).

It would be premature to claim victory though. The above method merely allows us, for any particular choice of formulas \(D, F_1, \ldots , F_k\), to specify an interpretation according to which \(P_i\) means \(F_i\) and the quantifiers range over the objects satisfying D. But for many theoretical purposes, we require a general semantic theory, i.e. a theory about all possible interpretations of a language. Such a theory enables us to establish general semantic properties of sentences. For example, we would like to be able to provide definitions of semantic validity and logical consequence, and to derive from such definitions that, e.g., every sentence is a logical consequence of itself. To take another example, we might want to use such a theory to establish that the standard deduction systems for first-order logic are sound.

Unfortunately, the above method for specifying the truth conditions of a sentence in an interpretation \((D,\mathbf{F} )\) is not suitable for that purpose. Suppose, for example, that we define logical consequence as preservation of truth in all interpretations, roughly:

\(\ulcorner {\varphi }\urcorner\) is a logical consequence of \(\ulcorner {\psi }\urcorner\) iff for all formulas \(D, F_1, \ldots , F_k\) (of the metalanguage), if \(\ulcorner {\psi }\urcorner\) is true in \((D,\mathbf{F} )\) then \(\ulcorner {\varphi }\urcorner\) is true in \((D,\mathbf{F} )\).

But this will not work for two reasons. First, the above definition is ungrammatical. Second, this definition makes the notion of logical consequence depend too much on what predicates are available in the metalanguage.

Let’s start by considering the first difficulty in more detail. The problem here is that we cannot bind the formulas \(D, F_1,\ldots , F_k\) by a quantifier—at least not if our metalanguage is an ordinary first-order language. This is so because the formulas \(D, F_1, \ldots , F_k\) are used in the definition of ‘\(\ulcorner {\varphi }\urcorner\) is true in \((D, \mathbf{F} )\)’. That is, they occur in a syntactic position that cannot be replaced by a variable (in a first-order language).

To see this more clearly, consider the claim that \(\ulcorner {\forall x\,P_ix}\urcorner\) is a logical consequence of itself. According to our above definition, this is equivalent to saying that for all metalanguage formulas \(D, F_1, \ldots , F_k\), if \(\ulcorner {\forall x\,P_ix}\urcorner\) is true in \((D,\mathbf{F} )\) then \(\ulcorner {\forall x\,P_ix}\urcorner\) is true in \((D,\mathbf{F} )\). And this, in turn, is equivalent to saying that for all metalanguage formulas \(D, F_1, \ldots , F_k\), if \(\forall x\,(D(x)\,\rightarrow \,F_i(x))\) then \(\forall x\,(D(x)\,\rightarrow \,F_i(x))\). But this is ungrammatical in first-order languages, as it involves quantification into the syntactic position of the formulas D and \(F_i\).

It might be suggested that we augment our metalanguage with substitutional quantifiers that can bind variables in the syntactic position of a formula (where the class of admissible substitution instances consists of the formulas of ML that do not contain the new vocabulary). This would indeed solve the first difficulty; but there is a second problem. Typically, we do not want our definition of logical consequence to depend too much on what predicates are available in the metalanguage. Counterexamples to an alleged claim of logical consequence might arise as soon as new predicates become available in our metalanguage.Footnote 4

In order to solve both of these problems, two things need to be done. First, we need to find a method that allows us to generalise—directly or indirectly—on the syntactic position of every formula of the metalanguage. Second, we need to make sure that the existence of interpretations does not depend too much on what predicates are available in the metalanguage.

A natural suggestion at this point is that we appeal to sets, classes, properties, or other suitable abstract objects. As Parsons (1983) has remarked, talk about sets, classes, and properties answers a need to generalise on the syntactic position of formulas. I’d venture to say that this is one of the main reasons why semantics is typically carried out in a set-theoretic metalanguage.

Suppose, for example, that there are \(y_0, y_1, \ldots , y_k\) such that the following is provable in our metatheory (for each \(1\leqslant i\leqslant k\)):

$$\begin{aligned}&x\in y_0\,\leftrightarrow \,D(x)\\&x\in y_i\,\leftrightarrow \,F_i(x) \end{aligned}$$

Given these equivalences, we can generalise indirectly on the position of \(D, F_i\), by generalising on the singular terms \(y_0, y_i\).

In slightly more detail, we can proceed as follows. Let us say that I is an interpretation (of L) if there are \(y_0, y_1, \ldots , y_k\) such that \(I=(y_0, y_1, \ldots , y_k)\). Given some arbitrary interpretation I, we can define \([\varphi ]^I\) as follows.

  • \([P_ix]^I\,\equiv \,x\in y_i\)

  • \([\lnot \,\varphi ]^I\,\equiv \,\lnot \,[\varphi ]^I\)

  • \([\varphi \,\wedge \,\psi ]^I\,\equiv \,[\varphi ]^I\,\wedge \,[\psi ]^I\)

  • \([\forall x\,\varphi ]^I\,\equiv \,\forall x\,(x\in y_0 \,\rightarrow \,[\varphi ]^I)\)

Now, let \(D, F_1, \ldots , F_k\) be formulas of the metalanguage and suppose that there is an interpretation \(I=(y_0, y_1, \ldots , y_k)\) such that \(x\in y_0\,\leftrightarrow \,D(x)\) and \(x\in y_i\,\leftrightarrow \,F_i(x)\) for all \(1\leqslant i\leqslant k\). Then it is easily seen that

$$\begin{aligned} {[}\varphi ]^I\,\leftrightarrow \,[\varphi ]^{(D,\mathbf{F} )} \end{aligned}$$
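For instance, taking the basic generalisation from before and unwinding the definitions, the claim amounts to

$$\begin{aligned} {[}\forall x\,P_ix]^I\,\equiv \,\forall x\,(x\in y_0\,\rightarrow \,x\in y_i)\,\leftrightarrow \,\forall x\,(D(x)\,\rightarrow \,F_i(x))\,\equiv \,[\forall x\,P_ix]^{(D, \mathbf{F} )} \end{aligned}$$

where the middle equivalence is just an application of the two displayed equivalences above.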

But while we could not quantify into the position of the formulas \(D, F_1, \ldots , F_k\), we can now straightforwardly quantify into the position of the terms \(y_0, y_1, \ldots , y_k\).

Thus, appealing to sets or classes enables us to generalise over interpretations. Moreover, we can safely assume that there are sufficiently many sets that cannot be specified by any formula of our metalanguage, so that the second difficulty is taken care of as well. I have focused on sets here, but similar remarks apply to properties and other suitable entities as well.

Unfortunately, the above strategy faces severe limitations. It seems reasonable to impose the following constraint (C) on a general semantic theory (cf. Williamson 2003; Linnebo 2006):Footnote 5

(C) For all formulas \(D, F_1, \ldots , F_k\) (with appropriate number of free variables) of the metalanguage ML, there is an interpretation I of the object language L according to which \(P_i\) means \(F_i\) and the quantifiers of L range over the Ds (in the sense that \([\varphi ]^I\) iff \([\varphi ]^{(D, \mathbf{F} )}\) for all \(\varphi\) in L).Footnote 6

This constraint is intuitively plausible and, more importantly, seems to be required for certain theoretical purposes. For instance, suppose that we want to give a proof of soundness for one of our standard deduction systems, i.e. a proof that every L-sentence derivable without assumptions in our deduction system is true in every interpretation. (As we will see in a moment, the notion of soundness I have in mind here differs from the notion found in most handbooks of logic.) Earlier we saw how, for any particular choice of formulas \(D, F_1, \ldots , F_k\) of the metalanguage, we can specify an interpretation \((D, \mathbf{F} )\) according to which \(P_i\) means \(F_i\) and the quantifiers of L range over the objects satisfying D, by specifying the truth-conditions of the sentences of L in the manner of Tarski and Davidson. Hence, it might be argued that if a sentence is derivable without assumptions, then soundness should entail that the sentence is true in \((D, \mathbf{F} )\). However, if our general semantic theory—let’s call it T—does not satisfy (C), then it is logically possible that a sentence is true in all interpretations recognised by T without being true in \((D, \mathbf{F} )\).Footnote 7

Alas, constraint (C) cannot be satisfied if our semantic theory is formulated over classical first-order logic. For let \(D(x), F_1(x), \ldots , F_k(x)\) be arbitrary formulas of our metalanguage (some set-theoretic language, say). In order to satisfy constraint (C), there need to be \(y_0, y_1, \ldots , y_k\) such that \(x\in y_0\,\leftrightarrow \,D(x)\) and \(x\in y_i\,\leftrightarrow \,F_i(x)\) for all \(1\leqslant i\leqslant k\). Since the formulas \(D(x), F_1(x), \ldots , F_k(x)\) were arbitrary, this requires that some form of naïve comprehension holds in our metatheory. And of course we know that this is impossible in classical first-order logic, due to Russell’s paradox. Let F(x) be the formula \(x\notin x\). The assumption that there is a class y such that \(x\in y\,\leftrightarrow \,x\notin x\) leads straight into paradox.
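Spelled out, instantiating the universally understood variable x with y itself makes the contradiction explicit:

$$\begin{aligned} \forall x\,(x\in y\,\leftrightarrow \,x\notin x)\quad \Rightarrow \quad y\in y\,\leftrightarrow \,y\notin y \end{aligned}$$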

Thus, in order to develop a general semantic theory that satisfies (C), one needs to go beyond classical first-order logic.

3 Type theory and ranges of significance

In order to set up a general semantic theory satisfying constraint (C), we need to be able to generalise on the syntactic position of every formula of our metalanguage. A natural suggestion at this point is to appeal to some type theory, which allows us to quantify directly into the syntactic position of a formula. I believe that this indeed provides a solution to the problem of unrestricted quantification, at least if a certain interpretation of the higher-order quantifiers is intelligible or legitimate.

That the problem of unrestricted quantification can be solved by moving to a type-theoretic language has been argued by various authors (Rayo and Uzquiano 1999; Williamson 2003; Rayo and Williamson 2003; Rayo 2006; Florio and Jones 2021). My goal in this section, then, is not to provide yet another argument for the type-theoretic solution (although I will present an argument for it, of sorts), but rather, by examining how the type-theoretic response works, to draw some general lessons that will allow us to formulate a type-free resolution of the problem of unrestricted quantification. (That is my ultimate argumentative goal, and it will be pursued in Sect. 4.) To this end, it will be convenient to have a look at different type theories. For ease of exposition, we will restrict our attention to simple, rather than ramified, type theories. Moreover, we will deal only with type-theoretic languages all of whose predicate variables are monadic.

In a simple, monadic, type-theoretic language, we have an infinite stock of types 0, 1, 2, ..., and for every type n we have countably many variables \(x^n, y^n, z^n, \ldots\) of that type. In this framework, various type theories can be formulated.

In strict type theory, an atomic formula of the form \(x^m[y^n]\) is well-formed, or meaningful, if and only if \(m=n+1\). Strict type theory contains axioms of comprehension

$$\begin{aligned} \exists x^{n+1}\,\forall y^n\,(x^{n+1}[y^n]\,\leftrightarrow \,\varphi ) \end{aligned}$$

where \(x^{n+1}\) is not free in \(\varphi\).

In strict type theory, grammaticality coincides with meaningfulness, but this is not mandatory. For instance, one can formulate a variant of strict type theory—for lack of a better term, let’s call it 3-valued strict type theory—where an expression of the form \(x^m[y^n]\) is counted as well-formed for all choices of \(m, n\), but considered to be meaningless whenever \(m\ne n+1\). Thus, 3-valued strict type theory counts the same formulas as meaningful as strict type theory, but admits more expressions as well-formed.Footnote 8

In (3-valued) strict type theory, types are mutually exclusive. This assumption is not mandatory either. In cumulative type theory, an atomic formula of the form \(x^m[y^n]\) is well-formed, or meaningful, if and only if \(m>n\). Cumulative type theory has comprehension axioms

$$\begin{aligned} \exists x^m\,(\forall y^{m-1} \,(x^m[y^{m-1}]\,\leftrightarrow \,\varphi _{m-1})\,\wedge \,\ldots \,\wedge \,\forall y^0\,(x^m[y^0]\,\leftrightarrow \,\varphi _0)) \end{aligned}$$

where \(x^m\) does not occur free in any \(\varphi _i\) (cf. Florio and Jones 2021, section 5).Footnote 9

On the other hand, in liberal type theory an expression of the form \(x^m[y^n]\) is well-formed for all choices of \(m, n\), but considered as false (rather than meaningless) whenever \(m\leqslant n\) (cf. Florio and Jones 2021, section 6). Thus, liberal type theory could be viewed as a relaxation of cumulative type theory.

The quantifiers of a given type theory can be interpreted in various ways—for example, as ranging over sets, classes, properties, or pluralities. For our purposes it will be convenient to think of them as ranging over properties or concepts (I will use these notions interchangeably), but we will return to this issue later on.

Now, let us return to the suggestion that we use a type theory for providing a general semantic theory for our object language L. First, let us reconsider our constraint (C) on a satisfactory general semantic theory, which we introduced in the previous section:

(C) For all formulas \(D, F_1, \ldots , F_k\) (with appropriate number of free variables) of the metalanguage ML, there is an interpretation of the object language L according to which \(P_i\) means \(F_i\) and the quantifiers of L range over the Ds.

This constraint was formulated with a classical first-order language as metalanguage in mind. Once we take type-theoretic languages as potential metalanguages into account, this constraint needs to be slightly adjusted. Consider a basic generalisation \(\forall x\,P_ix\), and let \(D(x^n), F_i(y^m)\) be formulas of the metalanguage, i.e. formulas of some type theory. If we want to assign D as domain of quantification and interpret \(P_i\) by \(F_i\), then the truth condition for \(\forall x\,P_ix\) should be as follows:

$$\begin{aligned} {[}\forall x\,P_ix]^{(D, \mathbf{F} )}\, \equiv \,\forall x^n\,(D(x^n)\,\rightarrow \,F_i(x^n)) \end{aligned}$$

But it is quite obvious that this will not work for arbitrary choices of \(D(x^n)\) and \(F_i(y^m)\). For example, if \(m\ne n\), then substituting \(x^n\) for \(y^m\) in \(F_i(y^m)\) will not result in a well-formed expression of strict type theory; whereas in 3-valued strict type theory, the expression, though grammatical, will be meaningless.

In order to formulate our constraint in a way that is general enough to be applicable to the various type theories considered above, it will be convenient to invoke Russell’s notion of a range of significance (Russell 1903, 1908).Footnote 10 Incidentally, Russell made use of this notion when he introduced readers to his first type theory in Appendix B of Principles of Mathematics:

Every propositional function \(\Phi (x)\)—so it is contended—has, in addition to its range of truth, a range of significance, i.e. a range within which x must lie if \(\Phi (x)\) is to be a proposition at all, whether true or false. This is the first point in the theory of types; the second point is that ranges of significance form types [...] (Russell 1903, 771)

There is some scholarly debate as to what exactly Russell meant by ‘propositional function’ (Chihara 1972). Are propositional functions simply open formulas of the language or rather some ontological correlate thereof or something entirely different? We need not decide the matter here. In what follows, we will use the notion of range of significance in connection with both open formulas and properties, as the context will require. If a property is expressible or definable by a formula of the language, we assume that their ranges of significance coincide.

The notion of a range of significance of a property implies that the question whether a property c applies to some entity a or not is meaningful if and only if a is in the range of significance of c. If a is outside c’s range of significance then the expressions c[a] and \(\lnot c[a]\) are both meaningless.

Although Russell introduced the notion of range of significance in the context of a particular formulation of the theory of types, it is quite independent of it. In particular, the notion of range of significance can be applied no matter whether types are taken to be strict, cumulative, or liberal.

Now, using the notion of range of significance, we can easily reformulate our earlier constraint on a satisfactory general semantic theory. If the quantifiers of our object language are to range over the entities that satisfy some metalanguage formula D, then each predicate \(P_i\) of our object language needs to be interpreted by a metalanguage formula \(F_i\) that is significant for all Ds: if there is an object d falling under D but such that d is not in the range of significance of \(F_i\), then the question ‘Is d an \(F_i\)?’ is not even significant. Hence, the range of truth of D must be included in the range of significance of \(F_i\) for every i.

Thus, let us reformulate our constraint on a satisfactory general semantic theory as follows:

(C\(^*\)) For all formulas \(D, F_1,\ldots , F_k\) (with appropriate number of free variables) of the metalanguage ML such that \(F_i\) is significant for all Ds for all \(1\leqslant i\leqslant k\), there is an interpretation of L according to which \(P_i\) means \(F_i\) and the quantifiers of L range over the Ds.

Note that (C\(^*\)) implies our older criterion (C) for classical first-order languages, at least if we assume that all (classical) first-order predicates are significant for all objects. Thus, I would consider (C\(^*\)) as a generalisation rather than a weakening of (C).

It is not hard to see that (C\(^*\)) is satisfied, for example, in (3-valued) strict type theory. Let \(D, F_1, \ldots , F_k\) be formulas such that each \(F_i\) is significant for all Ds. In (3-valued) strict type theory, this will be the case if and only if the distinguished free variables in \(D, F_1, \ldots , F_k\) are all of the same type, say m. Strict type theory guarantees then that there are \(y^{m+1}_0, y^{m+1}_1, \ldots , y^{m+1}_k\) such that \(y^{m+1}_0[x^m]\,\leftrightarrow \,D(x^m)\) and \(y^{m+1}_i[x^m]\,\leftrightarrow \,F_i(x^m)\) for all \(1\leqslant i\leqslant k\).Footnote 11

But this does not amount to a vindication of unrestricted quantification yet. Let’s retrace our steps. We started by noting that we can specify the truth conditions for the sentences of a given object language assuming that the quantifiers of the metalanguage are unrestricted, or in other words, that there is some formula of the metalanguage that is satisfied by all objects whatsoever. One of the problems that we encountered then was that, in order to formulate a general semantic theory, we need to generalise on interpretations. Given our constraint that there be an interpretation for all assignments of metalanguage formulas to predicates of the object language, this requires us to generalise on the syntactic position of every formula of our metalanguage. In order to generalise on the syntactic position of a formula, a principle of comprehension is needed. In a first-order theory, comprehension cannot hold for all formulas of the language due to Russell-like paradoxes—at least as long as the underlying logic is classical. This motivated the move to a type-theoretic metalanguage. But the move to a type-theoretic metalanguage vindicates unrestricted quantification only if there is some quantifier of type theory that is unrestricted, or in other words, if there is a formula of type theory that is satisfied by all objects whatsoever.

Now, in type theories we can find a formula that is satisfied by all entities of type 0, for instance, the formula \(x^0=x^0\). There is no formula that is satisfied by all entities that type theory recognises though. Is this a problem?

The answer to this question depends on how our chosen type theory is interpreted. If the various variables of type theory range over objects of some sort or other (say, sets of a particular rank), then there is no formula that is satisfied by all objects. On such an interpretation, type theory is really a multi-sorted first-order language rather than a genuinely higher-order language.

A natural alternative is to take the higher-order variables to range over properties or concepts.Footnote 12 However, one needs to be careful how the notion of concept is understood here.

On an objectual conception of concepts, concepts have a predicable nature as well as an objectual nature. Since, on this conception, concepts are objects and concepts are stratified into different levels, there is no predicate that applies to all objects whatsoever. Thus, this interpretation does not give rise to a solution to the problem of unrestricted quantification.

By contrast, on a broadly Fregean conception of concepts, concepts have a merely predicable nature. On this conception, concepts are not objects at all. We can think of them as being obtained by “omitting” some argument from a proposition, e.g. by omitting a from Fa or by omitting F from \(\forall x\, Fx\). The range of significance of a concept is then naturally understood as comprising all the arguments that can “saturate” the concept in question, e.g. the range of significance of a second-level concept consists of all first-level concepts and the range of significance of a first-level concept consists of all objects. According to this conception, the quantifier \(\forall x^0\) ranges over all objects whatsoever. The quantifiers \(\forall x^n\) (for \(n>0\)) range over n-th-level concepts, but these are not objects at all. Thus, on a broadly Fregean interpretation, there is a formula of type theory that applies to all objects whatsoever, namely \(x^0=x^0\).

To be sure, the foregoing characterisation of the broadly Fregean conception of concepts is highly problematic, as Frege’s infamous concept horse paradox demonstrates (see e.g. Hale and Wright (2012)). In natural language, ‘concept’ is an ordinary noun, so when one says that no concept is an object—which we may formalise as \(\forall x\,(\text {Concept}(x) \,\rightarrow \,\lnot \,\text {Object}(x))\)—then one quantifies into the syntactic position of a name, hence over objects.

Recently, there have been various attempts to improve on the above characterisation of the broadly Fregean conception of concepts, e.g. Williamson (2013), Jones (2018), Trueman (2021). For instance, according to primitivists about type theory, type theory does not require a reductive explanation in terms of an antecedent understanding of sets or concepts. Rather, type theory can be adequately explained only using type theory itself—it must be understood ‘from the inside’ (Williamson 2013, 260). Primitivists about type theory often continue to talk about a hierarchy of concepts or properties; but such talk is derivative and needs to be understood in terms of the type-theoretic quantifiers, and not the other way round. It is simply a manner of speaking that makes the exposition more accessible.

If this interpretation of type theory is an intelligible or legitimate one, then unrestricted quantification (over objects) is vindicated, at least by my lights. Of course, whether the antecedent of this conditional holds is another (controversially discussed) question.Footnote 13 It is not my intention to settle that debate here. I will leave it to the adherents of type theory to defend their position. However, given that the intelligibility or legitimacy of a Fregean interpretation of type theory is at least controversial, it would be desirable if a solution to the problem of unrestricted quantification could be found that did not depend on it for its success. In the remainder of the paper, I will outline what I take to be the most promising alternative to the type-theoretic approach.

4 Ranges of significance without types

The problem of developing a general semantic theory boils down to the problem of generalising on the syntactic position of formulas of the metalanguage, and Russell’s paradox imposes severe limitations on our ability to do so. Type theories avoid Russell’s paradox by banning self-applicable properties or—to use Russell’s notion of range of significance—by claiming that no property lies in its own range of significance. Strict type theory assumes that ranges of significance are mutually exclusive. Cumulative type theory, on the other hand, rejects the assumption that ranges of significance are mutually exclusive, and assumes that they are cumulative instead.

To be sure, it is not mandatory to think of ranges of significance as forming types at all, whether strict or cumulative, as Gödel pointed out. He notes that

the theory of types brings in a new idea for the solution of the [logical] paradoxes, especially suited to their intensional form. It consists in blaming the paradoxes not on the axiom that every propositional function [or formula] defines a concept or class, but on the assumption that every concept gives a meaningful proposition, if asserted for any arbitrary object or objects as arguments. (Gödel 1983, 466)

Thus, the general idea here is that for every open formula \(\varphi (x)\) there is some property \(f_\varphi\), with the proviso that

  • the range of significance of \(f_\varphi\) comprises exactly those things x such that \(\varphi (x)\) is meaningful (i.e. true or false), and

  • for all x in \(f_\varphi\)’s range of significance, \(f_\varphi [x]\) and \(\varphi (x)\) coincide in truth value.

Strict type theory can be seen as an instance of this general idea, based on the additional assumption that ranges of significance are mutually exclusive (Gödel 1983, 466). Cumulative type theory provides another instance, based on the additional assumption that the ranges are cumulative. The general idea, however, is logically independent of the assumption that ranges of significance form types at all.

Since we are searching for a type-free solution to the problem of unrestricted quantification, let us now return to a first-order framework and assume that properties are objects (rather than higher-order entities, as in the Fregean tradition). Hence, in an expression of the form x[y], both variables must be taken to be first-order, and therefore x[y] must express a binary relation between x and y. We could write this, more transparently, as \(A(x,y)\) (x applies to y), but in order to facilitate comparison with type theory, we will stick to the notation x[y].

Since, for certain values of the variables, y might not be in x’s range of significance, our framework must admit the possibility that the formula x[y] is meaningless. There are various logics for dealing with meaningless formulas, but to my mind the most natural approach is to adopt the Weak Kleene rules for the logical connectives.Footnote 14 That is to say, a compound formula is considered as meaningless if and only if one of its subformulas is meaningless. And if a formula is meaningful (and hence all its subformulas are meaningful as well), then it is evaluated just as in classical logic. Setting up a system of natural deduction suitable for Weak Kleene logic is quite straightforward; I refer the reader to Petrukhin (2017) for details.
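Since the Weak Kleene scheme does much of the work in what follows, a small sketch may help. It is my own illustration, not part of the formal system: None stands for the value 'meaningless', True and False for the classical values, and the function names are arbitrary.

```python
# Weak Kleene connectives: meaninglessness is 'infectious'. A compound is
# meaningless as soon as one of its parts is; otherwise it is classical.

def wk_not(a):
    return None if a is None else (not a)

def wk_and(a, b):
    if a is None or b is None:
        return None
    return a and b

def wk_or(a, b):
    if a is None or b is None:
        return None
    return a or b

# Contrast with Strong Kleene, where 'True or meaningless' would come out True:
print(wk_or(True, None))    # None under Weak Kleene
print(wk_and(False, None))  # None under Weak Kleene
print(wk_not(None))         # None
```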

According to the Russell-Gödel idea, every formula \(\varphi\) determines a property \(f_{\varphi }\). In order to implement this idea, we assume that there is an abstraction term \(f_\varphi\) for every formula \(\varphi\) in the language.Footnote 15 Moreover, our theory should have some unrestricted principle of comprehension, at least in rule form.

$$\begin{aligned} \frac{\varphi (t)}{f_\varphi [t]} \qquad \frac{f_\varphi [t]}{\varphi (t)} \end{aligned}$$

(Note that in Weak Kleene logic, where conditional proof fails, these rules do not entail the biconditional \(f_\varphi [t]\,\leftrightarrow \,\varphi (t)\): if \(\varphi (t)\) is meaningless, then so is \(f_\varphi [t]\), and hence so is the biconditional, even though both rules preserve truth.)

It is natural to assume that the standard laws for identity statements hold, that is, reflexivity, symmetry, transitivity, and Leibniz’ law. Given reflexivity, there is a formula (namely, \(x=x\)) that holds for all objects, and consequently, there is a property that applies to all objects.

Constructing a theory that satisfies the above demands is actually not too difficult. It is well known by now how to construct theories that satisfy naïve comprehension in rule form, by utilising fixed-point constructions. This method was popularised by Kripke (1975) and Martin and Woodruff (1975), who have used it to develop type-free theories of truth, and has been applied to classes and properties by authors such as Maddy (1983), Feferman (1984), Field (2004), and Weir (2006), among many others. (See Feferman (1984) or Cantini (2009) for a historical overview of work in this area.)

With an unrestricted rule of comprehension and a universal property at our disposal, it might seem that we now have everything we need to solve the problem of unrestricted quantification.

Unfortunately, things are not that simple. Consider, once more, our constraint on a general semantic theory:

(C\(^*\)) For all formulas \(D, F_1,\ldots , F_k\) (with appropriate number of free variables) of the metalanguage ML such that \(F_i\) is significant for all Ds for all \(1\leqslant i\leqslant k\), there is an interpretation of L according to which \(P_i\) means \(F_i\) and the quantifiers of L range over the Ds.

Let \(D, F\) be formulas of our metalanguage such that F is significant for all Ds. Given a theory that satisfies naïve comprehension in rule form, we can construct an interpretation by letting \(f_D\) (the property determined by D) be the domain and assigning \(f_F\) (the property determined by F) as semantic value to P. Now, what we want is that our assignment of truth conditions to the sentences of L entails that \(\ulcorner {\forall x\,Px}\urcorner\) is true if and only if every D is an F. Suppose we formalise ‘Every D is an F’ as \(\forall x\,(D(x)\,\rightarrow \,F(x))\), where \(\rightarrow\) is the material conditional (i.e. defined in terms of negation and disjunction). It is quite obvious that, for certain choices of \(D, F\), the expression \(\forall x\,(D(x)\,\rightarrow \,F(x))\) will not be a meaningful formula of our metalanguage.

Consider, for example, the formula x[x], which expresses that x applies to itself. It is a straightforward consequence of the definition of range of significance that the formula x[x] is significant for all properties that apply to themselves. Now, let D be the formula x[x] and let F be the same formula. Since F is significant for all Ds, as we just noted, the antecedent of (C\(^*\)) holds. By definition, \(\forall x\,(D(x)\,\rightarrow \,F(x))\) is \(\forall x\,(x[x]\,\rightarrow \,x[x])\), which is equivalent to \(\forall x\,(\lnot \,x[x]\,\vee \,x[x])\). It is immediate that certain instances of this generalisation cannot be meaningfully asserted. For example, let r be the property of not applying to itself. Then the instance \(r[r]\,\vee \,\lnot \,r[r]\) cannot be a meaningful formula in theories of the kind envisaged here.
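In terms of the Weak Kleene sketch given earlier, the point can be put very concisely. The encoding below (None for 'meaningless', r_self for the value of r[r]) is again only an illustration of mine, not part of the formal system.

```python
# ¬r[r] ∨ r[r] inherits the meaninglessness of r[r] under Weak Kleene.
def wk_not(a):
    return None if a is None else (not a)

def wk_or(a, b):
    return None if (a is None or b is None) else (a or b)

r_self = None                            # r[r]: neither true nor false
print(wk_or(wk_not(r_self), r_self))     # None: the instance is meaningless
```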

There are various ways to get around this difficulty though. One possibility is to introduce some kind of primitive conditional \(\supset\) into the language such that, for all formulas A, \(A\,\supset \,A\) will turn out to be valid. While this suggestion may be reasonable enough within a Strong Kleene framework, it does not sit well with the Weak Kleene approach that we have adopted here. According to the latter approach, a formula should be meaningless (i.e. receive the value \(\frac{1}{2}\)) whenever one of its subformulas is meaningless. Thus, whenever A is meaningless, \(A\,\supset \,A\) ought to be meaningless as well.

Another possibility—one that I find especially appealing—is to introduce operators that restrict the range of a variable to a particular range of significance. This can be done in a straightforward way. Let \(\varphi\) be a formula, x a variable, and t a term (variable or abstraction term) designating a property. Then we stipulate that \(\forall x:t\,\varphi\) is a well-formed formula, whose intended meaning is:

$$\begin{aligned} \text {For all}\ x\ \text {in the range of significance of} \ t, \text {it is the case that}\ \varphi . \end{aligned}$$

This form of quantification is not much different from the kind of quantification one finds in type theories. In type theories, variables are equipped with an upper index indicating the type or range of values that the variable can take on. These types are nothing other than ranges of significance. Of course, in type theories each variable wears its range of significance on its sleeve. By contrast, in the present system, which features only one kind of variable, a variable can take on any value, and in order to restrict it to a particular range of significance, one needs to make that restriction explicit by using the quantifier \(\forall x:t\).

Let us be a bit more precise about the semantics of the restricted quantifier.Footnote 16 A model for our property theory has the form \(M=(U, J)\) where U is a non-empty set, the universe or domain of discourse of M, and J is a function that assigns to each term an appropriate value, and assigns to each primitive predicate or relation symbol an appropriate extension and anti-extension. We will denote the extension of the relation symbol x[y] by \(A^+\) and its anti-extension by \(A^-\). Where d is an element of U, we denote the set \(\{e\mid (d,e)\in A^+\}\) by \(A^+(d)\) and the set \(\{e\mid (d,e)\in A^-\}\) by \(A^-(d)\). Finally, we denote the union \(A^+(d)\,\cup \,A^-(d)\) by R(d).

Intuitively, for a given property d, we can think of \(A^+(d)\) as the range of truth of d, i.e. the range of objects to which the property d applies (the expression ‘range of truth’ is taken from the Russell quote cited in the previous section); we can think of \(A^-(d)\) as the range of falsity of d, i.e. the range of objects to which the property d does not apply (or: the range of objects to which the negation of d applies); and we can think of R(d) as the range of significance of d, i.e. the range of objects of which it can be meaningfully asked whether the property d applies to it or not. This intuitive reading can be implemented as follows. Let h be a variable assignment; we denote the value of a term t relative to M and h by \(t^{M,h}\). Atomic formulas of the form t[s] are evaluated according to the following clauses:

  • \(v^h_M(t[s])=1\), if \(s^{M,h}\in A^+(t^{M,h})\)

  • \(v^h_M(t[s] )=0\), if \(s^{M,h}\in A^-(t^{M,h})\)

  • \(v^h_M(t[s] )=meaningless\), otherwise

The logical connectives and the ordinary quantifiers are evaluated according to the Weak Kleene rules. Finally, the restricted quantifier is interpreted according to the following clauses (where \(h\frac{d}{x}\) refers to the variable assignment which is just like h except that it assigns d to x):

  • \(v^h_M(\forall x:t\,\varphi )= 1\), if \(R(t^{M,h})\ne \varnothing\) and for all \(d\in R(t^{M,h}),\, v^{h\frac{d}{x}}_M(\varphi )=1\)

  • \(v^h_M(\forall x:t\,\varphi )=0\), if \(R(t^{M,h})\ne \varnothing\) and there is \(d\in R(t^{M,h})\) s.t. \(v^{h\frac{d}{x}}_M(\varphi )=0\)

  • \(v^h_M(\forall x:t\,\varphi )= meaningless\), otherwise

In words, \(\forall x:t\,\varphi\) is true if the range of significance of t is non-empty and every object within that range satisfies \(\varphi\); false if the range of significance of t is non-empty and some object within that range falsifies \(\varphi\); and meaningless otherwise.
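To see how these clauses behave, here is a toy finite model, offered only as an illustration: the labels a, b, c, d, e and the names A_plus, A_minus, R, and forall_restricted are my own stand-ins for the universe U, the extension \(A^+\), the anti-extension \(A^-\), the range of significance, and the quantifier clauses.

```python
A_plus  = {"d": {"a", "b"}, "e": {"a"}}   # range of truth of each property
A_minus = {"d": {"c"},      "e": set()}   # range of falsity of each property

def R(prop):
    """Range of significance: union of extension and anti-extension."""
    return A_plus[prop] | A_minus[prop]

def applies(prop, obj):
    """Three-valued evaluation of the atomic formula prop[obj]."""
    if obj in A_plus[prop]:
        return True
    if obj in A_minus[prop]:
        return False
    return None                            # obj lies outside prop's range

def forall_restricted(prop, phi):
    """Evaluate the restricted quantification  ∀x:prop φ(x)  as above."""
    domain = R(prop)
    if not domain:                         # empty range of significance
        return None
    values = [phi(obj) for obj in domain]
    if all(v is True for v in values):
        return True
    if any(v is False for v in values):
        return False
    return None

# ∀x:e d[x] is true: everything in e's range of significance (just a) is a d.
print(forall_restricted("e", lambda x: applies("d", x)))   # True
# ∀x:d e[x] is meaningless: b and c lie outside e's range of significance.
print(forall_restricted("d", lambda x: applies("e", x)))   # None
```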

It’s straightforward to lay down natural deduction rules for the restricted quantifiers. (Note that they behave quite similarly to the quantifiers of free logic.) For instance, we have the following introduction and elimination rules:

$$\begin{aligned} \frac{\forall x{:}f\,\varphi \qquad R(t,f)}{\varphi (t/x)} \qquad \qquad \frac{\begin{array}{c} [R(x,f)] \\ \vdots \\ \varphi (x) \end{array} \qquad R(s,f)}{\forall x{:}f\,\varphi (x)} \end{aligned}$$

where t must be free for x. The elimination rule (left) allows us to conclude \(\varphi (t/x)\) from \(\forall x:f\,\varphi\), provided that t is indeed in the range of significance of f. The introduction rule (right) allows us to conclude \(\forall x:f\,\varphi\), if \(\varphi (x)\) holds under the assumption that x is in the range of f and the range of f is non-empty (which is secured by the condition \(R(s,f)\), for some arbitrary term s).Footnote 17 For more details on the formal system, I refer the reader to our technical companion paper (Picenni and Schindler in press).

With this in hand, let us now return to the claim that every D is an F, where F is a formula of our property theory that is significant for all Ds. Given our restricted quantifier, we can formalise ‘Every D is an F’ as

$$\begin{aligned} \forall x:f_D\,(D(x)\,\rightarrow \,F(x)) \end{aligned}$$

where \(f_D\) is the property determined by D. Under the assumption that F is significant for all Ds (and that there is at least one object satisfying D), the displayed generalisation will be meaningful on the intended reading of the quantifier phrase \(\forall x:f_D\).

Thus, once we have the restricted quantifiers on board, our theory is able to satisfy criterion (C\(^*\)). Since there is, moreover, a formula that is satisfied by all objects whatsoever (namely, \(x=x\)), our theory provides a type-free resolution of the problem of unrestricted quantification.

5 Recovering the theory of types

In the previous section I have outlined an alternative to the type-theoretic response to the problem of unrestricted quantification. One of the main points of our discussion is that this alternative emerges quite naturally by reflecting on the features on which the type-theoretic solution of the problem of unrestricted quantification relies. Before concluding this paper, I will present a result that sheds further light on the relation between the two theories. Very roughly, the result is that a natural extension of the system presented in the previous section preserves the deductive strength of classical strict type theory. This can be shown by defining within our type-free theory a hierarchy of properties that mirrors the type-theoretic hierarchy. Indeed, one may think of this (ontological) hierarchy of properties as the first-order projection of the (ideological) hierarchy of types that results from reifying Fregean properties.

In order to do this, we need one new tool. In the previous section, we introduced quantifiers ranging exactly over the objects falling within the range of significance of some property. Similarly, we can introduce quantifiers ranging exactly over the objects falling within the range of truth of some property. (The range of truth of a property t is the range of objects x such that t applies to x; the phrase ‘range of truth’ is taken from the Russell quote cited earlier.) This results in a major boost in the expressive power of the language. It enables us to define the above-mentioned (ontological) hierarchy of properties that mirrors the (ideological) hierarchy of types.

Let \(\varphi\) be a formula, x a variable, and t a term (variable or abstraction term) designating a property. Then we stipulate that \((\forall x.\ t[x])\,\varphi\) is a well-formed formula, whose intended meaning is:

$$\begin{aligned} \text {For all}\ x\ \text {in the range of truth of}\ t, \text {it is the case that}\ \varphi . \end{aligned}$$

We can implement that in our model-theoretic semantics as follows:

  • \(v^h_M((\forall x.\ t[x])\,\varphi )= 1\), if \(A^+(t^{M,h})\ne \varnothing\) and for all \(d\in A^+(t^{M,h}),\, v^{h\frac{d}{x}}_M(\varphi )=1\)

  • \(v^h_M((\forall x.\ t[x])\,\varphi )=0\), if \(A^+(t^{M,h})\ne \varnothing\) and there is \(d\in A^+(t^{M,h})\) s.t. \(v^{h\frac{d}{x}}_M(\varphi )=0\)

  • \(v^h_M((\forall x.\ t[x])\,\varphi )= meaningless\), otherwise

In words, \((\forall x.\ t[x])\,\varphi\) is true if the range of truth of t is non-empty and every object within that range satisfies \(\varphi\); false if the range of truth of t is non-empty and some object within that range falsifies \(\varphi\); and meaningless otherwise.

As before, it is straightforward to set up natural deduction rules for this quantifier. For details I refer the reader to Picenni and Schindler (in press).

With this new restricted quantifier in hand, it is fairly easy to see that our type-free system can recover the deductive strength of the classical strict theory of types (STT, for short).

First, let us define a hierarchy of properties \(g_0, g_1, \ldots\), such that, intuitively, \(g_n\)’s range of truth contains all entities of type n. To this end, let I(x) be a primitive predicate applying to all and only individuals (i.e. objects that are not properties) and let \(g_0\) be the property determined by the predicate I(x). We stipulate that \(g_0\) has an unrestricted range of significance and that there is at least one individual. Now, for every \(n\geqslant 0\), let \(g_{n+1}\) be the property determined by the formula

$$\begin{aligned} (\forall y.\,g_n[y])\,R(y,x) \end{aligned}$$

Thus, \(g_0\) is the property being an individual, \(g_1\) is the property being a property that has all individuals in its range of significance, \(g_2\) is the property being a property that has all properties falling under \(g_1\) in its range of significance, and so on.

It can be shown that every property \(g_n\) has a non-empty range of significance and a non-empty truth range.Footnote 18

Next, in order to show that we can recover the deductive strength of STT, we define a translation \({}^*\) from STT into our type-free language.

We translate formulas of the form \(y^{n+1}(x^n)\) as y[x], and formulas of the form \(\forall x^n\,\psi\) as \((\forall x.\, g_n[x])\,\psi ^*\), where \(\psi ^*\) is the translation of \(\psi\). Naturally, our translation commutes with the propositional connectives. Identities \(x^0=y^0\) are translated as \(x=y\), while \(x^{n+1}=y^{n+1}\) is translated as

$$\begin{aligned} (\forall z.\,g_{n+2}[z])\,(z[x]\,\leftrightarrow \,z[y]) \end{aligned}$$

Let’s consider a simple example. The formula

$$\begin{aligned} \exists y^{2}\,\forall x^1\,(y^2(x^1)) \end{aligned}$$

is translated as

$$\begin{aligned} (\exists y.\, g_{2}[y])\, (\forall x.\, g_1[x])\,y[x] \end{aligned}$$

This sentence will turn out to be (provably) meaningful in our system, implying that we can reason classically with it. Informally, this can be seen as follows. A formula of the form y[x] is meaningful if and only if x is in y’s range of significance. Now, in the above sentence, the values of x and y are restricted to the truth ranges of the properties \(g_1\) and \(g_2\) respectively. These truth ranges are non-empty, and the definitions of \(g_1,g_2\) entail that x is in y’s range of significance.
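For concreteness, the translation can also be written out as a small recursive procedure. The following sketch is my own illustration of the clauses just given; the tuple encoding and the name star are not part of the paper's formalism.

```python
def star(phi):
    """Translate an STT formula (encoded as a nested tuple) into a string
    in the type-free notation used in the text."""
    kind = phi[0]
    if kind == "app":                      # y^{n+1}(x^n)       ->  y[x]
        _, y, x = phi
        return f"{y}[{x}]"
    if kind == "eq":                       # identity at type level n
        _, level, x, y = phi
        if level == 0:                     # x^0 = y^0          ->  x=y
            return f"{x}={y}"
        # x^{n+1} = y^{n+1}  ->  (∀z. g_{n+2}[z])(z[x] ↔ z[y])
        return f"(∀z. g_{level + 1}[z])(z[{x}] ↔ z[{y}])"
    if kind == "not":
        return f"¬{star(phi[1])}"
    if kind == "and":
        return f"({star(phi[1])} ∧ {star(phi[2])})"
    if kind == "forall":                   # ∀x^n ψ  ->  (∀x. g_n[x]) ψ*
        _, var, level, body = phi
        return f"(∀{var}. g_{level}[{var}]) {star(body)}"
    if kind == "exists":                   # ∃x^n ψ  ->  (∃x. g_n[x]) ψ*
        _, var, level, body = phi
        return f"(∃{var}. g_{level}[{var}]) {star(body)}"
    raise ValueError(f"unknown connective: {kind}")

# The example from the text:  ∃y² ∀x¹ y²(x¹)
phi = ("exists", "y", 2, ("forall", "x", 1, ("app", "y", "x")))
print(star(phi))   # (∃y. g_2[y]) (∀x. g_1[x]) y[x]
```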

Given that we can reason classically with translations of formulas of STT, it is straightforward to derive the translations of the comprehension axioms of STT in our type-free theory, using simply the comprehension rules of the latter. (For more details, I refer the reader to Picenni and Schindler (in press).) Putting all of this together, we obtain the following result:

Theorem 5.1

Let \(\psi , \varphi _1, \ldots , \varphi _n\) be closed formulas of the language of STT. If \(\psi\) is derivable from \(\varphi _1, \ldots , \varphi _n\) in STT, then \(\psi ^*\) is derivable from \(\varphi _1^*, \ldots , \varphi _n^*\) in our type-free system.

6 Conclusion

There is no major obstacle to specifying the truth conditions of the sentences of some object language in which the quantifiers are unrestricted; for we can match sentences of the object language with sentences of the metalanguage, in the manner of Tarski and Davidson. However, as soon as we try to provide a general semantic theory, that is, a theory that makes general claims about interpretations, we run into difficulties. Developing a general semantic theory requires us to generalise over interpretations, which in turn requires us to quantify into the syntactic position of formulas of our metalanguage. And Russell’s paradox imposes severe limitations on our ability to do that. One way out is to use a type theory as our metalinguistic framework, assuming a broadly Fregean interpretation of type theory. However, the intelligibility of this interpretation has been questioned. In this paper, I have tried to do two things: first, to introduce an alternative to the type-theoretic response; second, to show that this alternative emerges fairly naturally by reflecting on the features on which the type-theoretic solution of the problem of unrestricted quantification relies.

The theory I have offered here is based on a non-classical logic. Such a proposal raises many difficult questions. To begin with, one may wonder whether the principles of logic can be rationally revised at all. Another question concerns the metatheory of non-classical theories. To emphasise this important point again, the model-theoretic semantics that I have presented is a mere tool for guiding the reader in figuring out what inferences are (in)valid in the theory. I still owe the reader an explicit statement of the official metatheory. Another difficult question concerns the status of identity statements involving property terms. Suppose we define identity between a and b as indistinguishability of a and b in terms of the actual properties and relations they have or stand in. Then the non-classicality of ‘a has property f’ may entail the non-classicality of \(a=b\) (see e.g. Parsons and Woodruff (1995) and Field (2005, p. 37)). It is beyond the scope of this paper to discuss these issues here. At any rate, it should be clear that weakening classical logic is not something to be done lightly. There are many good reasons for keeping classical logic, such as simplicity, familiarity, and so on.

This is a strong argument in favour of type theory, which is based on classical logic. Another point in favour of type theory is that it has been fruitfully applied in the foundations of mathematics. For example, it can be used to formulate axiom schemata such as mathematical induction, separation, replacement and reflection principles as single axioms.

Fortunately, we do not have to forgo these applications in a type-free framework. As we have seen in the previous section, the deductive power of type theory is preserved in the type-free theory presented here. The latter contains a hierarchy of first-order properties mirroring the hierarchy of higher-order properties.Footnote 19

Moreover, type theory has some severe costs as well, even if we set questions about the intelligibility of the required Fregean interpretation aside. For example, one objection that is often raised against type theory is that it is inadequate for formalising many (intuitively valid) natural language arguments involving property terms (e.g. Bealer 1982, sections 6–8, 23–24; Chierchia 1985; Menzel 1986, pp. 1–5; Chierchia and Turner 1988, section 1.3). For instance, consider Annie. Annie loves Galen and Annie loves wisdom. It would be natural to conclude that there is a property that applies to both Galen and wisdom, namely being loved by Annie. But according to (strict) type theory, this is not the case. Instead, properties such as being loved by Annie are broken up into a sequence of properties of higher and higher type, each of which can only apply to entities of the immediately preceding level. There is a first-level property applying to Galen and a second-level property applying to wisdom. But there is no property shared by Galen and wisdom. Obviously, this problem does not arise in our type-free system.

The fact that predicates such as ‘Annie loves x’ cannot be represented by a single predicate, but are split into an infinite series of predicates of increasing type, also violates Davidson’s finite-learnability requirement, i.e. the requirement that there be only a finite number of undefined constants (Bealer 1982, pp. 32–33). Of course, it might be responded that the hierarchy of languages is merely potential or open-ended. But that response causes various problems as well. For example, it seems to entail that claims of the form ‘the hierarchy is so-and-so’ are, strictly speaking, nonsense (see e.g. Rayo 2006, p. 247).

The last point is related to another, similar objection that is often levelled against type theory, namely that type theory does not allow us to express any cross-type generalisations (e.g. Gödel (1983, p. 466), Linnebo (2006, section 6.4.)). Again, no such problem arises in a type-free framework.

No doubt, there are a few things that could be said in response to these objections. But I hope that these remarks are sufficient to show that the type-theoretic solution of the problem of unrestricted quantification also has its problems. The type-free approach proposed here should be considered a serious contender because it avoids these problems while preserving the deductive power of type theory.

Ultimately, I believe that the question as to which of the two theories is preferable needs to be decided by more holistic considerations. In particular, one should look at how well the two theories mesh with theories that have been developed in response to other problems, especially the paradoxes of truth and of vagueness. Presumably, if one is willing to restrict the law of excluded middle because of the sorites or the liar paradox, then one will be more open to adopting a non-classical solution to Russell’s paradox for properties as well. Conversely, if one is not willing to weaken classical logic in response to the sorites and the liar paradox, then it is unlikely that one will be willing to weaken classical logic in the face of Russell’s paradox.