Natural language quantification is not polysemous

The paper argues that natural language quantification, as expressed by determiner phrases, is not polysemous. The foil for this claim is Hofweber (Ontology and the ambitions of metaphysics. Oxford University Press, 2016; Mind, 128:699–734, 2019), who contends that natural language quantification is polysemous between a domain reading and an inferential reading. The thesis is intended to support a more general division between externalist and internalist positions in semantics. The paper, to the contrary, argues that there is no linguistic evidence for polysemous quantification, and Hofweber’s proposal proves to be non-compositional. Further, an approach at least consistent with internalism is available independent of an inferential reading, for natural language quantification can be read as ontologically neutral, which removes the rationale for the polysemy hypothesis. The paper remains neutral on so-called heavyweight (thick) vs. lightweight (thin) construals of quantification, which are not claims about natural language semantics.


Introduction
The paper is concerned with whether natural language determiners are polysemous. The semantics of determiners is standardly theorised according to generalised quantifier theory (GQ), according to which determiners express relations over pairs of sets, so that a determiner phrase expresses a 'generalized quantifier' (a set of sets) (Peters & Westerståhl, 2006; Szabolcsi, 2010).1 Our question might arise on any view of quantification, but the GQ approach has an especial significance for my dialectical purposes (see Sect. 6). Just what polysemy is remains controversial, and the term has different uses, but the general understanding is that an expression is polysemous where it supports multiple related construals, unlike ambiguity/homonymy, where the construals are unrelated or accidental.
The foil for the following will be Hofweber (2016, 2019), who contends that natural language quantification is polysemous between what he calls a 'domain' reading and an 'inferential' reading. The thesis is intended to support a more general division between externalist and internalist positions in semantics. The former have it that the truth-conditional contribution of an expression to its potential host sentences relates the expression to extra-linguistic reality, whereas the latter positions have it that such a contribution only relates one type of expression to another in the form of licensing certain inferences. My aim is to show that natural language determiners (and so quantifiers) are not polysemous, at least not if that notion is understood on anything like its standard construal. Quantification is univocally domain-involving, but only in the minimal sense that the GQ analysis is specifiable without reference to inference; it operates by specifying what the semantic contribution of a determiner is to host structures in combination with the semantic values of a restriction and a scope. One may favour an inference-based semantics for independent reasons, but one cannot subscribe to both views on the basis of polysemy. Significant for the wider dialectic is that the domain construal in the minimal sense is consistent with the ontological commitments of internalism in Hofweber's sense insofar as there is nothing intrinsically externalist about the concept of a domain, for an ontologically neutral construal is available (cf. Azzouni, 2004, 2017). With such a conception in play, the very rationale for the polysemy view is stymied. In Sect. 2, Hofweber's position will be explained. The following section will sketch how polysemy is understood within the relevant linguistics literature. Section 4 will show that according to such an understanding, quantifiers are not polysemous, but always domain-involving. Section 5 will explain how the univocity of the quantifiers as domain-involving is not externalist in any robust sense, for a domain is not a notion that enshrines ontological commitment. If this is so, an inferential reading of quantification is simply otiose, if one's purpose is to support internalism. Section 6 will spell out some purely linguistic reasons for quantification to be univocally domain-involving; in particular, I shall argue that semantics is compositionally local to syntactic structure, which means that the semantic properties of an expression as it contributes to the semantics of a host structure do not involve any other structure. Hofweber's account of how a GQ approach might realise the hypothesised polysemy of quantification runs foul of the general locality of semantics to syntax.

Footnote 1: GQ counts as a generalisation from the familiar first-order analysis, which effectively treats all quantifiers as unary functions taking a predicate as argument. From the perspective of natural language, this approach is clearly non-compositional. The chief problems with first-order quantificational theory (as a semantic model of natural language) are that it (i) doesn't generalise across all determiners (Dets) (most, few, etc.); (ii) is wedded to an invented syntax and composition; and (iii) fails to express generalisations across Dets and within the classes of Dets. The basic fact about first-order quantification is that it depicts Dets as sortally reducible (Keenan, 1993):

(SR) Q[A, B] ↔ Q_U[R(A, B)], where R is a Boolean relation and U is the universe.

SR holds for every, some, and no, but not for most and other comparative relations that cannot be rendered as relations over the whole of the universe U. GQ admits polyadic quantifiers of arbitrary adicity, but the familiar simple Det relations are semantically specifiable as binary relations over the relevant sets. A striking generalisation that issues from this approach is that all quantifier relations expressed by natural language Dets are conservative:

(CONS) Q[A, B] ↔ Q[A, A ∩ B]

The truth of Q[A, B] 'lives on' the restriction A in the sense that how things are with the As alone determines truth value:

(i) a. Some boy is a thief iff some boy is a boy who is a thief
    b. Every girl is a swimmer iff every girl is a girl who is a swimmer
    c. Most women sing iff most women are women who sing

Any theory of natural language Dets, therefore, should at least capture conservativity.
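The GQ treatment of determiners as binary relations between sets, together with the conservativity generalisation (CONS), can be made concrete in a short sketch. The following Python toy model is my own illustration, not anything drawn from Hofweber or a particular GQ text; the determiner clauses are the standard textbook definitions, and the sets are invented.

```python
# A minimal model of generalised quantifier (GQ) semantics: determiners
# denote relations between a restriction set A and a scope set B.
# The toy domain below is invented for illustration.

def every(A, B): return A <= B                   # every A is B: A is a subset of B
def some(A, B):  return bool(A & B)              # some A is B: A and B overlap
def no(A, B):    return not (A & B)              # no A is B: A and B are disjoint
def most(A, B):  return len(A & B) > len(A - B)  # most As are Bs

def conservative(det, A, B):
    # CONS: Q[A, B] iff Q[A, A ∩ B] -- truth 'lives on' the restriction A
    return det(A, B) == det(A, A & B)

boys, thieves = {"al", "bo", "cy"}, {"bo", "dee"}
print(some(boys, thieves))   # True: some boy is a thief
print(all(conservative(d, boys, thieves) for d in (every, some, no, most)))  # True
```

Note that most, as defined, is a genuinely binary relation over A and B: it cannot be rewritten as a unary condition on the whole universe, which is the failure of sortal reducibility (SR) mentioned above.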
Before beginning in earnest, let me be explicit on the scope of the sequel. The idea that quantification varies in existential force or ontological weight is a quite common theme, going back at least to Lewis (1990) in contemporary discussions. Chalmers (2009) appeals to a heavyweight/lightweight distinction, and Fine (2009) entertains a 'thin/thick' distinction. More generally, Hirsch (2011) endorses a thesis of quantifier variance: the idea that there is no metaphysically privileged idiom, that we may, in principle, say There are Xs from one perspective (or scheme or whatever) and There aren't Xs from another. I think we can entertain the import of such divisions without supposing that natural language quantification or other natural language idioms (exists, real, there is/are, etc.) are relevantly polysemous or are otherwise structurally diverse. Hirsch, at any rate, does not claim that his notion of quantifier variance is due to natural language determiners being polysemous (ibid., p. 80). Hofweber (2016, p. 91), to the contrary, considers positions that trade in levels of quantificational ontological commitment to be woebegone, for they erroneously suppose, without evidence, that natural language caters for our metaphysical inclinations. In distinction, Hofweber's claim of polysemy is presented as a genuine linguistic hypothesis evidenced by the putative polysemy of determiners satisfying our 'communicative needs'. In fact, however, Hofweber's polysemy hypothesis is worse off than the decried metaphysical rivals, for (i) there is no linguistic evidence for the hypothesis; (ii) it runs counter to general principles of syntax and semantics; and (iii) 'communicative needs' provide as poor a basis for inferences to linguistic structure as do intuitions about metaphysical heft (see Sects. 4 and 6). Hofweber (2019, p. 712) claims:

Hofweber on quantification
Quantifiers are polysemous: they can be used in two different ways, in their domain-conditions reading and their inferential reading.
Quantifiers in natural language have more than one reading, and we have a need for each of them in ordinary everyday communication (Hofweber, 2016, p. xii)

(As noted in Sect. 1, properly speaking, at least on the GQ view, quantifiers are the semantic interpretation of determiner phrases, not determiners themselves. Where this nuance doesn't matter, it may be ignored. Hofweber clearly intends the polysemy view to be about determiners, although he seeks to show how the account applies to the GQ approach; see Sect. 6.) Just what polysemy amounts to is not straightforward, and the complications matter to the assessment of Hofweber's hypothesis; pro tem, let a polysemous term be non-ambiguous but have multiple related literal readings (as opposed to coerced or figural readings). For purposes of exposition, assume the domain reading to be simply the familiar first-order reading where the quantifier imposes a condition on a domain (or satisfying sequences) such that a quantifier-involving sentence is true iff the predicate of which the quantifier phrase is the argument holds of a subset of the domain (including the empty set). More simply: 'We make claims about the domain of objects, whatever they might be' (Hofweber, 2016, p. 66). For example:

(1) a 'Something is red' is true iff ∃x RED(x)
    b 'Everything is red' is true iff ∀x RED(x)

The supposed externalism of the semantics arises from the presumption that their interpretation involves a variable that ranges over a domain of objects, whose specification is left implicit.2 One may think of the determiner, therefore, as introducing the notion of a domain as the set of things the restriction and nuclear scope might hold over. Crucially, such objects constitute what there is, rather than any of the particular ways of talking about what there is.
On the inferential reading, we have:

(2) a 'Something is red' is true iff ∨RED(t)
    b 'Everything is red' is true iff ∧RED(t)
Here, (2a) says that 'Something is red' is truth-conditionally equivalent to the disjunction of instances of 't is red', for all substitutions of 't' that preserve grammaticality. Similarly, (2b) says that 'Everything is red' is truth-conditionally equivalent to the conjunction of instances of 't is red', for all substitutions of 't' that preserve grammaticality (op. cit., pp. 713–714). This construal counts as 'inferential' rather than domain-involving because the content of the quantificational claim is exhausted by its licensing of certain inferences; in particular, from 'F(t)', for any substitute of 't', we can infer 'Something is F', and from 'Everything is F' we can infer 'F(t)', for any substitute of 't'.3 It is familiar, however, that a domain reading is not logically equivalent to some set of substitution instances, and this is independent of the issue of the ineffable. Assume a domain of three objects, a, b, and c, all of which are red. Relative to this domain, (3) is true:

(3) 'Everything is red' is true iff RED(a) ∧ RED(b) ∧ RED(c)

If we introduce into the domain a new non-red entity, d, the right-hand side of (3) remains true, whereas the left-hand side is false. I take Hofweber (2016, pp. 74–75) to respond to this divergence by treating the substitution class as involving an implicit quantification over grammatical substitutes, assuming there are at most denumerably many expressions that may substitute for 't'. If we further grant that there are no ineffable entities (entities that cannot be referred to), then we appear to have the two desired construals, one concerning extra-linguistic reality and one concerning inferential relations within a language. The fact that domain quantification over expression types is reintroduced does not alter this crucial difference.

Footnote 2: We can make it explicit: In simple terms, the quantified formulae are satisfied relative to a structure S (including a domain of objects D) and an assignment of objects to variables. An existential is satisfied if at least one object assigned to the variable the quantifier binds satisfies the predicate; the universal if all such objects satisfy the predicate. The 'externalist' point, for present purposes, is that the domain is taken to be a set of objects independent of the semantics. The GQ analysis, which does not involve variable assignment, is still trivially domain-involving insofar as we take the members of the sets that the quantifier relates to be objects drawn from a domain.
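The divergence between the domain reading and the substitutional (inferential) reading can be made vivid computationally. The sketch below is a toy model of my own, with invented names and objects: the two readings agree so long as every object in the domain bears a name, and come apart once an unnamed non-red object is added.

```python
# Domain vs substitutional (inferential) readings of 'Everything is red'.
# Names, reference relation, and domain are invented for illustration.

def domain_every_red(domain, red):
    # Domain reading: every object in the domain is red
    return all(x in red for x in domain)

def substitutional_every_red(names, reference, red):
    # Inferential reading: 't is red' is true for every grammatical substitute 't',
    # i.e. the conjunction of instances over the available names
    return all(reference[t] in red for t in names)

names = ["a", "b", "c"]
reference = {"a": 1, "b": 2, "c": 3}
red = {1, 2, 3}
domain = {1, 2, 3}

print(domain_every_red(domain, red))                    # True
print(substitutional_every_red(names, reference, red))  # True

# Add an unnamed, non-red object d (= 4): the two readings come apart.
domain.add(4)
print(domain_every_red(domain, red))                    # False
print(substitutional_every_red(names, reference, red))  # True: no name denotes d
```

The repair canvassed in the text, quantifying over all grammatical substitutes and assuming no ineffable entities, amounts here to stipulating that every member of the domain appears as a value of the reference function.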
The rationale for the distinction is the need to capture the apparent divergence between ontologically-committing quantification and non-ontologically-committing quantification. Thus, consider (5):

(5) Everything exists.

As Quine (1948) quips, (5) answers 'What is there?' tautologically, for (6) is a first-order theorem:

(6) ∀x∃y(x = y)

On the other hand, we want to say that some things don't exist (Zeus, Father Christmas, etc.), which, if rendered on a domain reading, issues in the inconsistency:

(7) ∃x¬∃y(x = y)

On the inferential reading, we can infer 'Something doesn't exist' from 'Zeus doesn't exist'.
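The contrast between the two readings of 'Something doesn't exist' can likewise be modelled. In the toy sketch below (my own illustration, with invented names), the domain reading makes the claim unsatisfiable, since every member of the domain is trivially identical to something in the domain, whereas the substitutional reading verifies it via an empty name.

```python
# 'Something doesn't exist': unsatisfiable on a domain reading, since to be in
# the domain just is to exist; verifiable on the inferential reading via a
# non-denoting name like 'Zeus'. Illustrative toy model only.

def domain_something_doesnt_exist(domain):
    # There is an x identical to nothing in the domain: false on every domain,
    # since each x is identical to itself
    return any(all(x != y for y in domain) for x in domain)

def subst_something_doesnt_exist(names, reference):
    # True iff 't doesn't exist' holds for some substitute 't',
    # i.e. some name fails to denote anything
    return any(reference.get(t) is None for t in names)

print(domain_something_doesnt_exist({1, 2, 3}))  # False, as on any domain
print(subst_something_doesnt_exist(["Obama", "Zeus"], {"Obama": "person"}))  # True
```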
Similarly, we appear able to quantify into the object position of intensional transitives:

(8) a Sally seeks a unicorn
    b Sally seeks something

To be sure, these issues have been addressed in a voluminous literature. Hofweber's bold suggestion is that a simple recognition of the polysemy of the determiners (and so quantifiers) sheds light on the issues and, more broadly, lends weight to a certain internalist construal of much of our thought and talk. As said in the introduction, while I am sympathetic to internalism, the polysemy claim on offer is dubious, and is not required for a defence of internalism, because the very idea of a domain as involved in the standard semantics of determiners need not be externalist. As a prelude to seeing all of this, we need to be clear on what polysemy is.

Polysemy and ambiguity
Hofweber characterises polysemy as follows:

Possibly the commonest source of semantic underspecification of sentences is polysemy, the phenomena that particular expressions have more than one of a group of closely related readings (Hofweber, 2016, p. 63)… The phenomenon of polysemy is everywhere in natural language. Overall, then, we have good reason to think that semantic underspecification is widespread and semantic content underdetermines utterance content in a variety of ways (op. cit., p. 64)

Hofweber might intend his polysemy hypothesis to be sketchy, but I shall presume that his intention is to speak of a real linguistic phenomenon whose details should matter to the assessment of his claim; after all, the polysemy view is supposed to have evidential weight behind it, unlike alternative views animated by metaphysical intuitions. In short, I shall proceed on the assumption that Hofweber means his hypothesis to be one linguistics might confirm.
The passages quoted run a number of things together. Firstly, underspecification and polysemy are distinct phenomena. The adjective ready, for example, is underspecified relative to a host construction if it lacks a complement (ready for/to do what?), as in I'm ready. Context resolves the matter in the normal run of things, i.e., if a shared context didn't resolve it, an audience could make no sense of the utterance; and if the speaker had no completion in mind at all, a proposition would not be articulated, or only one of a peculiar figural generic kind. The adjective is also polysemous between a material and psychological construal (The soup/boy is ready to drink). Resolving the underspecification, however, as just intimated, does not involve resolving the polysemy, which may be settled independently. Thus, I am ready admits both construals (imagine being put in a harness prior to a bungee jump: one might be ready materially, but not psychologically), and resolving what one is ready for (bungee jumping) leaves the polysemy intact. Likewise, resolving polysemy does not involve resolving underspecification. In the normal run of things, I am ready is read psychologically, but underspecification remains. Similarly, The chicken is ready to eat might mean the chicken is hungry or that it is cooked right, but the underspecification here is not a missing complement for the adjective but the interpretation of the arguments of the verb eat.4 That said, there is a sense in which underspecification might be involved in polysemy, and this is my second point. Theorists tend to diverge between under- and overspecification models (see, e.g., Falkum & Vicente, 2015). On the latter models, a lexical item is apt to express a range of senses that can be selectively activated in a host construction by the presence of another item, such as a verb, adjective or noun (Asher, 2011; Pustejovsky, 1995).
On such views, polysemy is not a matter of the lexical item having too little content to determine 'utterance content', but too much content, which is thus selected from in a specific linguistic context. For purposes of illustration, consider the adjective flat. It is paradigmatically polysemous, for it appears to contribute a different meaning to, say, flat beer, flat road, flat wrong, flat tax rate, etc. Viewing flat as semantically underspecified, we say that its semantic content does not encode any of the construals but is, in some sense, schematic. Viewing flat as semantically overspecified, we may say that it encodes the relevant construals as disjoint options.
For present purposes, it does not make any difference which broad model is adopted, for on either it is a substantive question whether the kind of polysemy Hofweber proposes is plausible. This leads to my third point. That polysemy is widespread doesn't mean it is 'everywhere', certainly not once properly distinguished from underspecification; to the contrary, it appears to be highly restricted; in particular, that polysemy might hold for every verb and count noun does not entail or even suggest that it holds for determiners (and so quantifiers). Again, this observation is neutral between under- and overspecification models. At any rate, I know of no account of polysemy that extends to closed-class items (minus prepositions) (Borer, 2005; Carston, 2021). Just why some expressions are and others aren't polysemous is a complex question, which curiously has attracted little attention. A somewhat intuitive answer, however, is that the non-polysemous closed-class items principally express functional or structural relations between the items with which they merge, and so reflect syntactic organisation, which is independent of the broader conceptual knowledge that enters into the kind of lexical content verbs, nouns, etc. express.5 In simple terms, determiners express relations over various semantic categories expressed by verbs, nouns, and adjectives, which are polysemous. Thus, the determiners express invariances over different realisations of the categories, and so are blind to polysemy. For example, Every lunch was ugly is polysemous with respect to lunch (food on a plate or an eating event), but on either construal, each lunch must be contained in the set of ugly things, however construed (ignoring the polysemy of the adjective).

Footnote 4: A subject of an infinitive is standardly treated as a covert pronominal item, PRO. Thus, if PRO is bound ('controlled') by the matrix subject (the chicken), then the chicken is eating. The interpretation of the object of eat is left unspecified. If PRO is not bound by the matrix subject, then its interpretation is arbitrary, unspecified, but the object of eat must be the chicken. The underspecification involved, therefore, is a grammatical matter, and once the construal of ready is settled, underspecification may remain, as with the object of eat.

Footnote 5: See Rizzi (2004) for a conceptual overview. This insight is at the heart of Borer's (2005) approach. It is worth noting that on this view it is not the semantics as such that makes for an item being polysemous or not, but its relation to a supposed autonomous syntax. An interesting test case here is adverbial quantifiers, which semantically behave quantifier-like, and can be given a GQ treatment, but are prima facie open class (de Swart, 1993). Much could be said here. One immediate observation, however, as is clear in Lewis's (1975) initial discussion, is that adverbs like always or occasionally have both a temporal and non-temporal construal, relating, say, to solutions to an equation or datable occurrences.
The best way of explaining what is peculiar about polysemy is via a contrast with ambiguity/homonymy. Intuitively, lexical ambiguity is a case of accidental or non-essential homonymy, where the same morphophonemic properties are associated with unrelated concepts/semantic values, such as bank (financial institution and riverside), pupil (centre of the eye and a child in school), file (a document and a tool), and so on. Various linguistic criteria have been suggested to differentiate the ambiguous from the non-ambiguous, all of which express the basic idea that different tokens of a non-ambiguous type cannot have distinct interpretations or that the one token can't be simultaneously differentially construed relative to different predicates. This tracks the simple intuition that if an item w is ambiguous between r and r′, then a token of w cannot be construed as expressing both r and r′.6

Tests for ambiguity

(i) Conjunction reduction
Consider (10):

(10) a The bank is big
     b The bank is popular
     c The bank is big and the bank is popular
     d The bank is big and popular

Given (10a, b), (10c) follows, and is, perforce, acceptable. All this is so regardless of how bank is disambiguated. One can appreciate this by noting that (10c) is four-ways ambiguous, corresponding to the two-way ambiguity of its conjuncts (10a-b). Reducing the conjunction as in (10d), however, reduces the level of ambiguity by two, i.e., whatever is big has to be popular too, and so bank can refer either to a riverside or a financial institution, but not both. Witness:

(11) a The bank is boggy
     b The bank raised the interest rate
     c The bank is boggy and the bank raised the interest rate
     d #The bank is boggy and raised the interest rate

The respective predicates in (11a) and (11b) resolve the ambiguity of bank; hence, (11c), although awkward, is not ambiguous at all, with its two occurrences of bank requiring distinct interpretations. Thus, (11d) is semantically anomalous, for the one occurrence of bank cannot shoulder the distinct interpretations required by the predicates.
The upshot is that ambiguous items do not uniformly allow for conjunction reduction: it fails where the predicates of the conjuncts are selective of different interpretations of the item.7 Clearly, non-ambiguous items will support conjunction reduction, for there are no distinct interpretations to be selected, or so it seems.
(ii) Anaphora

The use of anaphoric pronouns allows one to refer to the same entity across indefinitely many predications anchored to the single antecedent occurrence of a lexical item. Taking our lead from the previous example, one may readily recognise that (12) is two-ways ambiguous:

(12) The bank was big but it was also popular

with it taking the bank as its antecedent. Antecedent and pronoun must be resolved together as either riverside or financial institution. Hence, (13) is as anomalous as (11d) for much the same reason, i.e., the respective predicates select for different interpretations, but there is only one occurrence of the bank, which can't support both interpretations:

(13) #The bank was boggy, but it also raised the interest rate.

(iii) Ellipsis
In many respects, ellipsis works like anaphora, unsurprisingly, given the use of pro-forms. Thus, an elided constituent of a clause is interpreted in parallel with an overt constituent:

(14) Mary went to the bank, as did Jane.
The basic phenomenon here is that the VP under ellipsis is resolved in parallel with the overt VP of the antecedent clause. So, (14) is two-ways, not four-ways, ambiguous; that is, Mary and Jane cannot be read as visiting different kinds of place, with Mary going fishing, and Jane depositing money, say. The moral here is that if one does find an acceptable disjoint interpretation, then the overt item is not relevantly ambiguous.

Polysemy
If ambiguity is accidental homonymy, then polysemy is non-accidental homonymy; that is, a single lexical item can support diverse construals, which, although distinct, are related by some conceptual/structural factors, unlike in the straight ambiguity case, and can be simultaneously activated (cp. Pustejovsky (1995) on 'inherent polysemy' and Asher (2011) on 'logical polysemy'). To be sure, 'polysemy' is sometimes used in a broad sense to cover metaphor, metonymy, and so-called 'meaning transfer', but these phenomena are distinct from the kind of logical polysemy potentially germane to quantification, because they are pretty much open-ended and fail to be productive (see Sect. 4). Moreover, they function more like ambiguities. For example, one might use wheels to refer to a car (My wheels are in the garage), but one can't use wheels to refer simultaneously to wheels and a car (#My wheels seat four, but are punctured). More on this presently. As indicated above, it remains an open question how best to account for polysemy, but, for present purposes, the nature of the 'right theory' doesn't much matter. What is crucial is that polysemy is recognised to be a genuine phenomenon distinct from ambiguity understood as accidental homonymy.8 Consider book. By the criteria of ambiguity in Sect. 3.1, nominal book is not ambiguous.

Footnote 7: By selection here and throughout I do not solely intend grammatical argument selection, but selection in the broader sense of a predicate being co-interpretable with an argument only if the argument is read as referring to an entity that could have the property the predicate attributes. For example, in happy bus and happy person, the adjective selects for psychological entities, such as the passengers on the bus, as opposed to the vehicle, and the personality of a person, as opposed to their body. Theories of polysemy develop generalisations over such relations, but for present purposes an intuitive sense will suffice.
First, conjunction reduction is acceptable across predicates that select for different construals of the single item: (15) The book was interesting but heavy.
Whatever the relation is, then, between the concrete particular sense and the informational content sense of book, the lexical item is not ambiguous with respect to such senses, not, at any rate, if conjunction reduction is criterial. We shall refer to this kind of construction as copredication, where a single nominal is the argument of different predicates that select for distinct senses.
Secondly, anaphora works fine across the relevant construals: (16) The book was heavy, but it was also interesting, so I didn't mind packing it.
As with the first criterion, if acceptable anaphora is a signature of non-ambiguity, then book is non-ambiguous. Finally, parallel interpretation of ellipsis works fine. Consider (17):

(17) Mary enjoyed the book, as did Jane.
A book can be enjoyed in different respects. One might enjoy the story, its characterisations, and so on. Equally, one might enjoy a book as a work of art in itself, such as an illuminated bible. (17) can be interpreted fine under such a disjoint construal of book, as (18) bears witness:

(18) Mary enjoyed the book, as did Jane, but for very different reasons…

All of this being so, one may operationally define polysemy as a case where a lexical item supports distinct construals, but which can, in some of its instances, fail the tests for ambiguity (or pass the tests for non-ambiguity, if you prefer). This is because polysemy pertains to logically related senses, whereas ambiguity does not. Note that the morphophonemic forms bank and file, say, are both ambiguous and polysemous. What makes them polysemous is that when the ambiguity is resolved, distinct construals are supported in copredication; or equivalently, the ambiguous or accidental readings are not copredicatively expressible, but the polysemous readings are. For example, with The bank was big but popular, bank might be disambiguated to refer to a kind of financial institution, but bank may remain polysemous between a building (big) and an institution with lots of customers, who might never go near the building (mutatis mutandis for file between papers and content). To avoid confusion here, we might say that 'ambiguity' refers to a case where a single morphophonemic form has two or more (unrelated) meanings, i.e., there are two lexical items with the same morphophonemic properties. On the other hand, 'polysemy' refers to a case where a single lexical item has a complex or multiple meaning.
Let me register two caveats. First, cases of polysemy that do fail the tests are essentially zeugma. For example, find and expire are polysemous verbs that give rise to zeugma in constructions that demand the activation of both of their respective senses:

(19) a #Gödel found the wine to be tasty and arithmetic to be incomplete
     b #Bill and his driving license expired at the same time

It is clear, however, that the construals of the verbs here are relative to the different arguments, so that find is construed differently between complements that may or may not describe the content of an experience, and expire is construed differently between agentive and non-agentive subjects. Hence, to adopt a selectional model, zeugma rather than copredication occurs precisely because a single verb can't simultaneously select different kinds of argument, whereas different senses of the single nominal of a copredicative construction can be simultaneously selected by different verbs (predicates). Such asymmetry indicates that predicates select for senses of their arguments, and so must have a determinate sense qua selective, where different predicates, as in copredication, can require different such senses, whereas the one predicate cannot select different nominals with different senses; that would be as if the nominals selected the senses of the single predicate. I am happy for an underspecification model of polysemy to offer a distinct explanation (cf. Carston, 2021).
Secondly, not every polysemous nominal gives rise to copredication whatever the predicates. There is zeugmatic copredication. Consider:

(20) a #The Times invested in new premises, but can blow away easily
     b #While the school enjoyed the day out, it was painted

Just why some copredication is acceptable and other cases are zeugmatic remains unclear (but see Murphy, 2021). What is patent, however, is that the unacceptable cases are not due to any principled restriction on, say, copredication pertaining to both an institution/company and its product, as in (20a), or pertaining to both the people associated with a building and the building itself, as in (20b). For example, the cases in (21) are fine, but involve the same categories as (20):

(21) a The Times, which relies on advertising, is, unsurprisingly, full of it
     b The school is generally happy, but it's in a parlous state

We may, then, stick to the original operational definition offered and think of polysemous nominals as capable of copredication qua expressive of logically related senses, in a way merely ambiguous nominals aren't, qua expressive of accidentally related senses.
Two characteristics of nominal polysemy are worth especial emphasis before we move on to Hofweber's cases. First, to emphasise again, polysemy, in distinction from ambiguity (and metaphor, metonymy, etc.), supports the one occurrence of an item simultaneously expressing or supporting two or more construals. In other words, the very existence of copredication, as in (15)–(17) and the other cases, is criterial of polysemy as logical homophony as opposed to ambiguity. Call this the simultaneity feature of polysemy. Secondly, the polysemy of an item is relative to a predicate selecting or being co-construed with an interpretation of the nominal. Call this the selective feature of polysemy. The simplest account of ambiguity one could propose is that an ambiguous morphophonemic item is really two (or more) lexical items that happen (accidentally) to share the same morphophonemic form. In this sense, resolving ambiguity amounts to determining which lexical item is employed in the relevant context, and all kinds of extra-linguistic factors may enter into such a resolution for the hearer (the speaker simply decides). Polysemy need involve no such extra-linguistic resolution simply because no choice between alternatives is required. Whether a particular construal is active or not often simply depends upon which predicates modify or have the relevant item as an argument. Thus, matters of polysemy can be settled internal to the language, with no recourse to extra-linguistic matters. Again, an underspecification model is not ruled out here, although a selectional model better fits the phenomena.
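The simultaneity and selective features can be rendered as a toy model. In the sketch below (my own illustration, with an invented mini-lexicon and invented predicate-to-sense selections), ambiguity is modelled as two lexemes sharing a spelling, and polysemy as a single lexeme with co-activatable senses; copredication succeeds only when one lexeme can supply a sense for every predicate.

```python
# Toy model of the ambiguity/polysemy contrast via copredication.
# An ambiguous form maps to several lexemes; a polysemous lexeme is a set of
# related, co-activatable senses. Lexicon and selections are invented.

LEXICON = {
    # ambiguity: two distinct lexemes happen to be spelled 'bank'
    "bank": [{"riverside"}, {"institution", "building"}],
    # polysemy: one lexeme 'book' with simultaneously available senses
    "book": [{"physical", "informational"}],
}

# Each predicate selects for a sense of its argument (the selective feature)
SELECTS = {"boggy": "riverside", "raised_rates": "institution",
           "heavy": "physical", "interesting": "informational"}

def copredication_ok(noun, predicates):
    # Acceptable iff some single lexeme supplies a sense for every predicate
    # (the simultaneity feature); otherwise the result is zeugmatic
    return any(all(SELECTS[p] in senses for p in predicates)
               for senses in LEXICON[noun])

print(copredication_ok("book", ["heavy", "interesting"]))   # True: (15)-style
print(copredication_ok("bank", ["boggy", "raised_rates"]))  # False: (11d)-style
```

Note that the model also captures the point that resolution is language-internal: which sense is active is fixed by the predicates alone, with no appeal to extra-linguistic context.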

Quantificational polysemy? The very idea
Since Hofweber intends to be speaking of natural language rather than making a stipulation about artificial languages, we must ask what evidence there is for determiners being polysemous. 9 I shall take it as given that the intended readings are not cases of ambiguity (the readings are clearly too intimate for that).
An initial general observation, which I intimated above, is that polysemy is understood to be restricted to open-class items (verbs, adjectives, nouns) and prepositions; it does not, contrary to Hofweber's (2016, p. 64) presumption, obtain 'everywhere'; in particular, the closed-class terms are excluded: pronouns, complementisers, tense, co-ordinators, and determiners/quantifiers. This makes perfect sense, for the open-class items express conceptually rich notions and tend to be flexible even with respect to argument structure. 10 The closed-class items, on the other hand, are conceptually poor, expressing invariable syntactic or functional information rather than 'worldly' information concerning events, things, and their properties. Although I can't go into the matter here, a plausible hypothesis is that the functional categories, including determiners, constitute the fixed framework for each sentence, with open-class items being selected to incarnate the structure. 11 Also, polysemy is cross-linguistically productive in the sense that it holds across a class of terms, and so holds for new items. 12 Since the closed classes do not admit new items, we lose a key diagnostic for polysemy. Hofweber elides these kinds of general considerations by way of a peculiar methodological precept. He considers previous debates on quantification that have looked at 'cases' to be 'ineffective' in determining their polysemous nature (Hofweber, 2016, p. 65), but one reason for this might simply be that determiners are not polysemous at all. At any rate, to make progress Hofweber (2016, p. 65) proposes that:
instead of focusing on individual examples of quantified sentences we will focus on the need we have for quantifiers in general in ordinary communication, and on the basis of this we will be able to see that quantifiers should play two different roles in ordinary communication, answering to two different needs we have for them. Only once we have seen what these communicative needs are, how quantifiers allow us to meet those needs, and why they require different contributions to the truth conditions, will we illustrate the difference with individual examples that bring out these two readings.
This methodology raises some concerns. Firstly, if we suppose that there are distinctive 'communicative needs' for different readings of determiners/quantifiers, what are we supposed to think if we find that there is no linguistic evidence for natural language quantification encoding such distinctive needs? After all, we have lots of communicative needs, but we require linguistic evidence for claims that language caters for them in any distinctive semantic or syntactic manner. For instance, we need to ask questions, and language provides various devices for this, including closed-class wh-items (who, what, etc.), focus (You love [Bill] F ?), and movement (Bill is tall/Is Bill tall?). We find no such surfeit for, say, our need to promise, appeal to evidence, or even to distinguish between direct vs. indirect speech and use vs. mention more generally. 13 In sum, one cannot infer from need to means in matters linguistic. Of course, we can sometimes be utterly explicit about such matters (I promise to do the dishes as opposed to I'll do the dishes), but this is because we have a word for the relevant act. Yet such explicitness is neither required nor articulates what is left implicit. 14 Secondly, if linguistic evidence is eschewed, the polysemy claim becomes hostage to the fortune of the communicative needs being uniquely supported by the relevant readings of the hypothesised polysemy. If some alternative means to achieve the needs is on offer that is consistent with the linguistic evidence, then the polysemy view becomes simply otiose. As we shall see, this is precisely the prevailing situation.

10 For example, kick can be ditransitive (Bill kicked Sam the ball), transitive (Bill kicked the ball), and intransitive (The baby kicked). Borer (2005) has made the not implausible claim that every open-class syntactic category is variable over the same conceptual content, i.e., any root word could occur as a verb, noun, or adjective, but not as a quantifier (inter alia).

11 Again, see Rizzi (2004) for an overview. Each functional item maximally projects in the sense of being the head of the phrase that hosts and selects complements, modifiers, and specifiers. So, a functional verbal item (often referred to as little-v) maximally projects relative to a lexical verb (and its arguments), Tense maximally projects relative to a verbal phrase, and a complementiser (that, whether, if, etc.) maximally projects relative to Tense. Determiners maximally project relative to the nominal phrases they take as complements. If this is right, we shouldn't have modification or adjunction of determiner phrases, which is obviously right for, say, *bald every man. Of course, many determiners can occur with adverbials and in complex embeddings, including Boolean compounds: some/most but not all, neither fewer than five nor more than ten, denumerably many, not all, almost all, almost/practically no, hardly any, nearly a hundred, more than ten, between five and ten, etc. See Keenan (1996) for a GQ treatment of these kinds of structures. Their syntax offers a mixed bag, but does not appear to contradict the basic fact that determiners maximally project and are not modified; the adverbials, for example, plausibly occur above the determiner phrase in their own projection or as specifiers of the determiner (e.g., all the men). See, for example, Ernst (2002) for an overview of the syntax of adverbials.

12 For example, vessel words are polysemous between containers and the contained (cup, spoon, glass, etc.), barrier words are polysemous between objects and portals (window, door, gate, etc.), and animal words are polysemous between individuals and food (duck, rabbit, chicken, etc.). They all admit copredication:

a Bill put the glass down and drank three more
b Bill painted the door and walked through it
c Bill had rabbit for dinner, but didn't shoot it
There is an irony to this alluded to in Sect. 1. Hofweber upbraids the common lightweight/heavyweight and thin/thick quantifier distinction for not being sufficiently grounded in the linguistic facts. With Chalmers (2009) and Fine (2009) in mind, he writes:

Why would our language have both of them? It would be amazing if it contained primitive distinctions or resources mainly to carry out metaphysics, but with no role in ordinary communication. And it would be puzzling how such quantifiers should be understood and how they relate to each other. If there is such a distinction to be made, it must be shown to arise from our language, and this can't simply proceed by example, pointing to our puzzles, or by wishful thinking in order to defend that there is work for ontology as part of metaphysics: to settle what there is in the heavyweight sense (Hofweber, 2016, p. 91).

The complaint misreads Chalmers and Fine, neither of whom mean to endorse a straightforward thesis about natural language semantic structure. 15 The complaint rebounds, too, for precisely the same complaints should be raised against the polysemy view. If the domain/inferential distinction is real, it needs to be shown, and appeals to the needs of communication are as prima facie linguistically nebulous as appeals to metaphysical work, as we have already indicated. Let's now look at the evidence for polysemy more directly.

13 Languages vary in regard to how, if at all, they morphologically or syntactically encode such matters. Consider so-called quotative constructions. In some languages, an explicit morpheme marks the use of direct speech. In English, we find like serving such a role:

(i) a Bill said I am not guilty [Bill said the speaker is not guilty]
    b Bill was like I am not guilty [Bill said he was not guilty]

This use is specific to certain registers, however, and is really only evident where the intended content overrides agreement and binding relations (the first-person pronoun in (ib) is effectively bound by Bill). If no such mismatch between normal agreement and binding is witnessed, it remains unclear how direct the report has to be.
An initial observation is that quantifiers do not inherit the polysemy of the nominals that are their instances. For example:

(22) a London is happy and expensive
     b Something is happy and expensive
(23) a War and Peace can serve as a doorstop but is psychologically acute
     b Something can serve as a doorstop but be psychologically acute
(24) a The average American drives a Ford and has 2.3 children
     b Something drives a Ford and has 2.3 children
(25) a Lunch made everyone ill and lasted all afternoon
     b Something made everyone ill and lasted all afternoon
(26) a Bond kills with impunity but remains popular
     b Someone kills with impunity but remains popular

15 Chalmers appears not to be making a claim about natural language, but more about the potential to restrict the domain of certain quantifications in ways that would be primitive relative to other domains, i.e., the differences in quantification would reflect the hierarchy of a certain metaphysical organisation rather than differences reflected in natural language (Chalmers, 2012, p. 89/355). He writes:

Some theorists will hold that lightweight quantification is not 'real' quantification. Here I am assuming 'quantification' in a broad sense to include language with the superficial appearance of quantification, but nothing turns on the terminological issue. One could equally talk in terms of lightweight and heavyweight quasi-quantification instead (Chalmers, 2009, p. 96, n. 14).
In a similar vein, Sider (2011) employs a supposed heavy notion of quantification, but means it in a technical sense rather than as an analysis of natural language, which might have its own distinct fixed structure:

[T]he fact (if it is a fact) that the first-order quantifiers carve at the joints isn't a fact about the linguistic items 'there is' and 'for all'. It's a fact about the world, specifically, its quantificational aspect. (ibid., p. 91)

Binary quantificational grammar is built into our minds in a way that's difficult or even impossible to change. But even if this view is correct, it is no obstacle to introducing the language of first-order logic in the metaphysics room and giving it a monadic semantics. The formal sentences of this language would describe fundamentally monadic facts; it's just that we couldn't think about those facts "natively". There would be a mismatch between the structure of the facts and the structure of our thoughts (though not between the facts and the sentences of the formal language). (ibid., p. 77)

As for Fine, he remarks:

[E]xcursions into the semantics of quantification, whatever their independent interest, are largely irrelevant to the understanding of ontology… The critical and distinctive aspect of ontological claims lies not in the use of the quantifier but in the appeal to a certain concept of what is real. (Fine, 2009, p. 171)

I myself doubt that there is any other way [apart from restriction of domain] in which the interpretation of the unrestricted quantifier might properly be subject to variation. (ibid., p. 165)
The relevant feature here is that the b-generalisations do not follow from the a-copredications save for as a kind of pun. This may be observed in the Who/What am I? game. It is difficult to find the final answer when the preceding questions trade on polysemy, and the reveal is understood to be a pun. Because the nominals are polysemous, the two predicates can select distinct senses. The oddity of the b-cases arises from the predicates being construed as expressing properties of a single kind of entity, which shows that the quantifiers are not polysemous in the manner their instances are, i.e., there are not different senses for the predicates to select. 16 These observations, while indicating the invariance of quantification over the polysemy of its arguments, do not decisively undermine Hofweber's thesis, for his claim is not that the polysemy of the quantifiers is inherited from their instances, but is a bespoke division between a domain and an inferential reading. If this is a genuine case of polysemy, we need, ideally, to be able to observe both the simultaneity and selective features of polysemy. Simultaneity would hold if there were constructions where both a domain and an inferential reading hold. Since the domain reading entails the inferential reading where instances are suitably effable, we would need cases where ontological commitment is relevant. Pick a transitive verb such as paint that has both intensional and non-intensional construals and consider if we may plurally quantify:

(27) a Sally painted the fence and painted Zeus
     b Sally painted some/two things

This strikes me as punning at best. The fault does not simply lie in verbal polysemy, however. Consider:

(28) a Sally's driving licence expired and her husband expired, too
     b Sally had two things that expired

This is somewhat better, albeit a tad cute. As was noted in Sect. 3, polysemous verbs struggle to take a conjunctive argument whose constituents are open-class nominals that can be selected by the different senses of the verb. They appear to be happier, though, to take a single argument that abstracts over objects that the senses of the verb select, much as a polysemous nominal can take distinct predicates that select for different senses of the nominal. The problem with (27) appears to be that some/two things is read univocally to the exclusion of the representational reading.
The selectivity feature of polysemy is hard to detect for the putative polysemy of domain vs. inferential readings, for the former entail the latter. Hofweber (2016, p. 70) thinks otherwise: We have a need in ordinary communication for quantifiers to have a certain inferential role. And we have good reason to think that some terms or other in our language are empty. These two facts together tell us that quantifiers in their domain conditions reading can't do all we need them to do for us, in ordinary communication. In their domain conditions reading quantifiers don't have the inferential role they are supposed to have. "F(t)" does not imply "something is F," on that reading. "t" might be one of these empty terms, and if so we would move from truth to falsity. No object in the domain of quantification is denoted by "t," since "t" is empty. The domain conditions reading does not give one the inferential role we need in communication.
So, the inferential reading is triggered or chosen precisely where ontological commitment is (or would be) denied. Yet whether a name is 'empty' or not is not a linguistic matter, but more a matter of general worldly knowledge, as Hofweber appears to accept (cf., Collins, 2021). Nor, as just discussed, will intensional contexts suffice to differentiate between the two putative construals simply because one can, as it might be, seek, worship, and have beliefs about the extant. This selection feature, however, would indicate polysemy only if the alternative domain reading were ontologically committing, which is just what Hofweber asserts. This view, though, can be questioned, and if it does not hold up, then the very distinction Hofweber proposes to cater for divergent communicative needs, regardless of the nuances of polysemy detection, is otiose, for one may happily be dubious about a general externalism while commending a univocal domain construal of quantification. This issue will occupy the next section.

The idea of an ontologically neutral domain
In broadest terms, a domain is the class of elements one takes to be values of the relevant expressions of the language. An interpretation function defined over the domain maps elements of the domain onto the expressions of the language. A name is valued by an element; a monadic predicate by a set of elements; a dyadic predicate by a set of pairs; a quantifier phrase by a set of sets; and so forth. The traditional line after Quine (1948) is that bound variables indicate ontological commitment, i.e., the values of the bound variables of the formulae we take to be true are those things we reckon to exist. In other words, linguistic expressions, such as names and predicates, do not make explicit one's ontological commitments, for they can be used freely over the fictional and non-fictional, say. Quantification, to the contrary, makes ontological commitment explicit, being an existential idiom. This can't be right, however, and is not even Quine's considered judgement, for ontological commitment cannot depend upon mere idiom (cf., Quine, 1970/76; Collins, 2020b). Indeed, we freely quantify over the fictional with the same blitheness as we use fictional names. It seems, therefore, that what is invariant over different idioms and systems of logic, such as a quantifier logic, a predicate-functor logic, and GQ, is the domain, i.e., we presume that whatever is in the domain exists; thus, the bound variables of truths or predicates themselves are satisfied by such elements and so attract our ontological commitment. Yet this conception of a domain as somehow securing existence independent of any particular idiom or means of specification can be questioned. Azzouni (2010, 2017; cf., Collins, forthcoming) has offered a so far unanswered argument for the ontological neutrality of the concept of a domain, i.e., being a member of a domain is no more an explicit criterion for existence than being the value of a variable.
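The valuation scheme just described can be made concrete with a toy model. The following sketch is my own illustration (the names and predicates are invented, and nothing here is drawn from Azzouni's or Quine's formalisms); the point it makes vivid is that nothing in the formal apparatus distinguishes the extant members of the domain from the non-extant ones:

```python
# A toy model: the domain is a bare set of labels, and the interpretation
# function maps expressions onto elements, sets of elements, or sets of
# pairs. Crucially, nothing in the apparatus marks 'pegasus' as non-extant:
# domain membership is ontologically neutral.
domain = {"london", "pegasus", "zeus"}

interpretation = {
    "London": "london",                # name -> element of the domain
    "Pegasus": "pegasus",              # name -> element of the domain
    "is_winged": {"pegasus"},          # monadic predicate -> set of elements
    "admires": {("london", "zeus")},   # dyadic predicate -> set of pairs
}

def holds(pred, name):
    """Truth of 'name is pred' relative to the model."""
    return interpretation[name] in interpretation[pred]

print(holds("is_winged", "Pegasus"))  # True: a truth about a non-extant thing
print(holds("is_winged", "London"))   # False
```

The model assigns truth values without any machinery that could register existence: 'Pegasus is winged' comes out true in exactly the same way any other atomic sentence does.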
The basic challenge is that if the mere idiom of quantifiers can't express ontological commitment, then how precisely is membership of a domain supposed to do better? It is not as if the elements have to be manhandled. The point becomes vivid by considering the specification of a domain as, say, whatever satisfies a predicate: D = {x : F(x)}. The quantification in itself cannot confer existence, for if it could, we should not need to be considering the domain at all, but rest content with the quantifiers in the language at issue. The property of being F can't do it either, unless we have a story to tell about what properties might legitimately instantiate being F, but the bare idea of a domain does not entail that we have any such story to tell: being F can be anything one likes; only other independent considerations might preclude some instances. We fall back, then, onto the inchoate thought that the elements of the domain must be extant independent of the specification of a domain, either in terms of its quantification or predicate expression. Yet we remain without reason to think that either quantification or the concept of a domain unpacks or otherwise sheds light on the inchoate intuition. Azzouni's moral is that quantification is ontologically neutral, as is the notion of a domain, insofar as it involves quantification for its specification. Neutrality here means that we can quantify over what is and what is not, and the elements of a domain can equally be extant and non-extant.
I have not the space fully to defend this neutrality hypothesis. For my dialectical purposes, however, I only want such a position in play; that is, if it is a live option, then it stymies the very intent of the domain/inference polysemy hypothesis, which is supposed to be required to reflect our varying attitudes to the ontological status of what we happily quantify over in our colloquial idiom. That said, the neutrality hypothesis cannot be dismissed. The thesis is no doubt jarring as a metaphysical claim; Schaffer (2009, p. 358) reckons it 'unfathomable'. Being in a domain and being extant at least appear coeval. For example, the hypothesis appears to entail that, say, Numbers don't exist, but there are numbers might be true, whereas it sounds contradictory. I think this oddity, however, is explicable pragmatically. First note that there are bare and restricted existential claims, as in There are numbers between 0 and 1 and There are numbers, or There are orcs in Tolkien and There are orcs. A natural intuition is that while the restricted claims do not entail ontological commitment, the bare claims do. Hence, to assert the bare claim in conjunction with a denial of existence sounds contradictory, an effect not witnessed with restricted claims. This difference, however, appears not to be semantic, or narrowly linguistic. In a context where a restriction is understood (a conversation, say, about what creatures are depicted in which books), it makes perfect sense to claim that there are orcs and a denial of existence might simply be false without further ado, none of which expresses a commitment to orcs in any other context. In this light, bare existential claims without any understood restriction simply express a global commitment as opposed to a restricted one, i.e., one is talking about numbers or orcs without any particular context in mind. Yet this is something a defender of neutralism should be happy to endorse. 
Thus, what is odd about Numbers don't exist and there are numbers is that both conjuncts should be parallel in their expressing a global commitment. The problem doesn't arise from exist being necessarily ontologically committing in a way there is/are isn't. The neutralist, in other words, should think that exists is neutral too, just as it clearly is when used in a restricted way. A problem arises only if it is presupposed that there is some linguistic means to express unvarnished honest-to-goodness ontological commitment, but it is precisely this presupposition that the neutralist denies. 17 More broadly, from a semantic perspective, there is no discomfort with a neutral domain. The moral is merely that we can speak truly while talking about that which we reckon not to exist, which is something we do unreflectively. In order to make sense of this talk semantically, we specify domains with no thought to whether their denizens exist or not beyond their featuring in the specifications of truth conditions. As Glanzberg (2014) observes, semantic theory is 'partially' explanatory in the sense that it captures structural or compositional phenomena and lets the status of reference as a would-be external relation between words and world be 'kicked upstairs' into the metalanguage in which the theory is specified. One may have strong philosophical views about how the metalanguage is to be understood, but the theory itself does not entail or presuppose any particular view in that direction.
So, if a neutralist position is at least in play, the putative polysemy of the quantifiers is otiose. There just is no call for a division between ontologically-committing readings and some not so committing reading (inferential or otherwise), for the notion of a domain can hold invariantly over the two precisely because it is ontologically neutral. Furthermore, this conception of a domain is aligned with the apparent lack of quantifier polysemy.

Linguistic evidence
One might consider the reasoning offered so far as initiating a stand-off. 'OK', the thought might go, 'the polysemy of the quantifiers might not be easily detected, and perhaps some ontologically neutral invariant conception of a domain is available, but is there decisive evidence against the inferential reading?' I think there is once one reflects on the properties of natural language quantification rather than formal models (first-order or otherwise).
The idea in question is that certain rules or norms for what we may infer have explanatory priority over properties of words and syntactic structures independently of their inferential use. Here is a general argument against this conclusion:

(GA)
(i) Semantics for natural language interprets natural language syntax
(ii) Natural language syntax of the sentence/clause is intrinsic; i.e., a sentence/clause has the syntax it has not due to or inclusive of any other sentences
(iii) The interpretation the semantics provides for a sentence, therefore, cannot make appeal to the interpretation of any other sentences
(iv) Since inference is an inter-sentential relation, semantics cannot appeal to inference in its interpretation of a sentence

The premises of the argument, like any others, can be questioned, and I have not the space to defend them fully, but before spelling out the consequences for Hofweber's proposal, let me offer five clarifications. First, my target is not proof-theoretic semantics for formal languages, but rather a claim about what explains our semantic competence with a natural language. A language for which one can devise a proof theory is a language whose syntax is designed to track semantic (/truth-preserving) relations between formulae. Natural language is not of that character, even if 'fragments' of it can be suitably regimented. 18 Secondly, the argument does not exclude all kinds of evidence beyond a sentence, including other sentences, for what it might mean. The conclusion is only intended to rule out a view where semantic properties are not local to specific syntactic structures; it says nothing about what evidence might be required to understand an utterance.
Thirdly, the view of syntax, and so semantics, expressed in the first two premises is an empirical claim about an aspect of human cognition. For present purposes, I suppose tolerance to reign: theorists are free to devise artificial languages of any design, which may then be evaluated given the ends to which they are put. If the second premise is to be rejected, it must be shown that standard generative assumptions are false, or that some alternative explains the central phenomena better. In thinking of natural language, we are considering a complex empirical phenomenon, not what stipulations to make.
Fourthly, the view the argument presents does not render natural-language inference opaque or essentially non-semantic. To the contrary, where the inference may be rightly seen to be deductive, it will precisely be the intrinsic semantic properties of a sentence that license the relevant inference. In the most trivial case, this will be because the very conclusion is an intrinsic constituent of the interpreted sentence, as in conjunction elimination, say. I shall look at quantification shortly.
Fifthly, a so-called 'dynamic' approach to semantics is not ruled out, nor are discourse effects eschewed. The point is only that such matters will supervene on the interpretation of the intrinsic syntax plus extra-linguistic factors.
Let's now apply the moral of the argument to the specific case of quantification. The syntax of quantification is an open issue, much as any other area of syntax, but a general view, designed to accommodate scope phenomena, is that the determiner phrase moves into a scope position generating a quantifier-variable structure. Heim and Kratzer (1998) provide an influential 'textbook' treatment of how to semantically interpret such structures. For present purposes, I am not committed to any such model; indeed, it has some compositional glitches (Collins, 2017; Pietroski, 2018), and a GQ approach does not presuppose movement, i.e., determiner phrases can be interpreted in situ (Keenan, 2015). The present point, however, is that the movement of determiner phrases follows from syntactic considerations, not from a demand to have a variable-binding semantics. 19 If we assume a GQ interpretation, therefore, which does not involve variable binding, the chief theoretical question becomes how the syntax is interpreted so as to produce a GQ structure (cf., May, 1991). 20

19 The basic insight here is that scope-taking of determiner phrases is subject to the same constraints as overt movement, such as with wh-items in interrogatives. Consider:

(i) a There is a bottle in every corner of the room
    b There is a bottle which is in every corner of the room

The DP every corner of the room in (ia) can scope over a bottle, but cannot do so in (ib). This is explained by relative clauses being 'islands', which preclude movement generally, whether overt or covert. Similarly:

(ii) a *Every girl i is here and she i wants to study
     b *His i mother loves every boy i

Here, the unacceptability of (iib) is explained if we take it to scope over the pronoun, as is explicit in (iia) (so-called 'weak cross-over').
As a final example, consider:

(iii) a What [did every student buy < what >]?
b Who [< who > bought every book]?
(iiia) has a pair-list construal (different students buy different things), whereas (iiib) does not. This subject-object asymmetry is explained by a restriction on the movement of the determiner phrase from object position in (iiib), i.e., a pair-list construal of (iiib) would involve every book scoping over who, but there is no such restriction from subject position.
It doesn't follow from any of this that all scope-taking is governed by movement in a strict syntactic sense; only that syntactic movement plays a role, in both permitting and disallowing scope-taking. See, for example, Fox (1995) and Reinhart (2006).

20 Trivially, the GQ approach takes a (binary) determiner to express a relation between a restriction and a scope, but carries no presupposition or entailment as to how its arguments are linguistically expressed. Also, GQ theory by itself does not explain why the generalisations it captures hold. For example, as remarked on in note 1, natural language determiners are conservative, which goes beyond first-order quantification. GQ itself gives no explanation why this should be (cf., Collins, 2020a; Pietroski, 2018).

The labels do not much matter; the important feature is that determiner phrases move to extend the structures leaving behind a predicate akin to an open sentence (the angled brackets indicate that the lower phrases are not pronounced). Rendering the b cases into English, we have:

(32) a Some boy is such that he swims
     b Every boy is such that Sally likes him

As the pronouns suggest, the launch sites of the quantifier phrases can be viewed as variables bound by the higher copy of the phrase. As explained just above, though, it is not necessary to view the syntax as demanding a variable-binding semantics. The movement accounts for scope taking, and the resulting acceptability effects; once a syntactic structure is determined, a GQ interpretation is possible just as much as a variable-binding one. 21 What is demanded, however, is that the semantics interpret this compositional structure, which is invariant over whatever we are talking about, and whether we have any reflective ontological commitment. Each component needs to be interpreted, from lexical items, to phrases, to the whole structure.
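The GQ idea that a determiner expresses a relation between a restriction set and a scope set, together with the conservativity property mentioned above, can be sketched in set-talk. This is my own illustrative toy, not any author's formalism (the set contents are invented):

```python
# Binary determiners as relations between a restriction set and a scope
# set, in the spirit of GQ theory: 'every' tests inclusion, 'some' tests
# non-empty overlap.

def every(restriction, scope):
    # every restriction-thing is a scope-thing
    return restriction <= scope

def some(restriction, scope):
    # at least one restriction-thing is a scope-thing
    return bool(restriction & scope)

boys = {"bill", "sam"}
swimmers = {"bill", "sam", "sally"}

print(every(boys, swimmers))  # True: 'every boy swims'
print(some(boys, swimmers))   # True: 'some boy swims'

# Conservativity: D(A, B) iff D(A, A ∩ B) -- 'every boy swims' is
# equivalent to 'every boy is a boy who swims'.
assert every(boys, swimmers) == every(boys, boys & swimmers)
assert some(boys, swimmers) == some(boys, boys & swimmers)
```

Note that the relational treatment applies uniformly however the restriction and scope are syntactically realised, which is the sense in which GQ carries no presupposition about the linguistic expression of its arguments.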
Just how the semantics should be is contentious, but the general insight that it should reflect an operation or function on predicates (as open sentences or not) is widely accepted. For example, following Montague (1973), we may depict a quantifier type as a function from a property of elements to a property of elements to a truth value (type <<e,t>,<<e,t>,t>>), with the individual determiners reflecting differences of Boolean relations between the properties. 22 As explained, GQ takes a distinct approach; how this difference plays out with respect to Hofweber's polysemy hypothesis will be discussed presently. 23

21 It might be thought that this can't be right, for if movement creates a trace or a copy, then, in effect, an open sentence is created, a point made clear in the early development of the theory (cf., May, 1977; Chomsky, 1977; Bach, 1977). It might then seem that the trace/copy needs to be interpreted, which can only be construed as a variable (Heim and Kratzer, 1998). This is not quite right, however, for all traces/copies are bound. What does need to be explained, therefore, is the link between the higher determiner phrase and the argument position marked by the trace/copy, but by itself this does not entail a semantic interpretation of the trace/copy as a variable (see Hornstein, 2001; Pietroski, 2018).

22 The semantic value of every can thus be depicted as the set of sets whose members contain every P-thing, and the semantic value of some as the set of sets whose members include at least one P-thing.

23 A categorical grammar approach rejects variables too, and views composition as wholly a matter of function combination.
It remains compositional, however, and endeavours to respect natural language syntax.

Hofweber, it bears emphasis, does not propose an inferential-role reading for the quantifiers; rather, it will be recalled, the determiners (some and every, anyway) are depicted as equivalent to the relevant sets of disjunctions and conjunctions, which allow inferences from and to the quantified sentences. What this fails to account for is the compositional structure of quantification. What is the interpretation of the determiners such that their host sentences have the interpretations they have, and what are the interpretations of the arguments of the quantifiers such that the whole means what it means? Hofweber (2016, pp. 97-100) proposes that the algebra of GQ theory (see note 1) can be interpreted as defining relations over substitution instances forming equivalence classes of terms. The basic upshot is that while on the domain reading a quantifier expresses a relation over a pair of sets, F and G (better: the members of the sets), on the inferential reading it expresses a truth relation between instances of the sentences formed from substitution on 'F(t)' and 'G(t)'. There are two problems.
First, the proposed polysemy of the quantifiers entails a reciprocal polysemy of the two arguments of the quantifier. This is because, in order for the composition to work, the domain and inferential readings differ with respect to the interpretation of the arguments the quantifier relates. So, on the inferential reading, but not on the domain reading, we must specify the meaning of 'boy' and 'swim', say, as the instances of 'BOY(t)' or 'SWIM(t)' such that the items may compose as arguments of a quantifier. This makes for a dramatic loss of 'semantic innocence', and it has no independent plausibility: we would be obliged to think that predicates change their interpretation relative to the construal of the determiner, with the determiner in one case read as relating members of a pair of sets, in the other as relating instances of predicates.
The general problem is that it is impossible on the proposal to specify the semantics of predicates in a way that will be invariant regardless of how they are composed. If a predicate occurs in an ontologically committing quantification, it has one interpretation; if in a non-ontologically committing quantification, it has a distinct interpretation. Yet that is not how composition works: we must be able to specify the semantics of each lexical item independently of its composition. This is not to deny polysemy as a function of the selection of one sense from a range encoded in an argument. What it does mean is that composition tracking syntax must specify invariances of meaning, even if the invariances are ranges of potential interpretations. What is unclear on Hofweber's proposal is what such invariances are, whether for the open-class items that may serve as arguments of a quantifier or for the quantifiers themselves.
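The invariance point can be made vivid with a toy contrast (my construction, not Hofweber's formalism; all names and the mini model are invented): under the domain reading a predicate such as boy is assigned a set of individuals once and for all, whereas under the substitutional reading it must instead be assigned a set of terms t for which 'BOY(t)' holds, so the predicate's semantic value varies in kind with the construal of the determiner it composes with:

```python
# Toy contrast (illustration only): the same word 'boy' needs two different
# kinds of semantic value depending on which reading of the determiner it
# composes with.

# Domain reading: 'boy' denotes a set of individuals.
BOY_DOMAIN = {"al", "bo"}

# Substitutional ("inferential") reading: 'boy' denotes the set of TERMS t
# such that the sentence 'BOY(t)' is true -- expressions, not individuals.
TRUE_ATOMS = {"BOY(al)", "BOY(bo)", "SWIM(al)", "SWIM(bo)"}
TERMS = {"al", "bo", "cy"}
BOY_SUBST = {t for t in TERMS if f"BOY({t})" in TRUE_ATOMS}
SWIM_SUBST = {t for t in TERMS if f"SWIM({t})" in TRUE_ATOMS}

def every_domain(F, G):
    """Relates sets of individuals."""
    return F <= G

def every_subst(F_terms, G_terms):
    """Relates sets of substitution instances (terms)."""
    return F_terms <= G_terms

# Each delivers a truth value, but only once we fix in advance which kind of
# object the arguments are: the predicate's value is not invariant across
# the two compositions.
print(every_subst(BOY_SUBST, SWIM_SUBST))  # True
```

The two determiner functions look alike, but their arguments are objects of different kinds (individuals vs terms), which is precisely the reciprocal polysemy of the predicates that the paper objects to.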
Secondly, a problem arises with polyadic predicates where one of the determiner phrases has a domain reading and the other a supposed inferential reading. Consider Everyone seeks something, where, on either scope reading, we don't necessarily want to commit to the thing sought (Gods, gold at the end of a rainbow, etc.).

(Footnote 23 continued) See, for example, Jacobson (2014). Just how such an approach sits with standard generative views of syntax is beyond my present scope.
It would appear that we need two predicates, 'SEEKS(x, y)' and 'SEEKS(x, t)', depending on whether the thing sought is real or not, but whatever it is to seek ought to be invariant between the two cases, and so the semantics of the predicate ought to be the same; after all, from the viewpoint of the seeker, what is sought is presupposed, but one need not know whether the thing sought exists. Note that domain quantification over names for Gods, say, will not do the trick, for the inferential reading is not supposed to express a domain-involving concept at all, but rather a licence for inference; besides, a name is not what is sought. 24 It seems, then, that we can't have an invariant reading of the predicate because of the supposed variance of the quantifiers.
If all this is accepted, the stark problem with Hofweber's inferential reading is that, while supposedly a specification of what determiner phrases mean, it provides no compositional interpretation of the constituent expressions: it does not say what a determiner means, what a determiner phrase means, or how it contributes to the meaning of a host sentence. The univocal domain reading, on the other hand, fits perfectly with the independently constituted syntax and provides a compositional semantics for it. Moreover, it is invariant, as is the syntax, over whatever we are talking about, and so is neutral between any ontological attitudes. The inferential reading appears otiose as an encoding of a non-ontologically committing reading, and it is not independently motivated, for there is no evidence of the putative polysemy, a finding which accords with the general consideration that polysemy characterises open-class, not closed-class, items.

Conclusion
Hofweber articulates an internalist view, at least for some expressions. There are different ways, however, in which thought and talk might not be world-involving. One way, which Hofweber favours, is for semantics to specify, in effect, intra-linguistic relations. Another way is for semantics to be simply neutral about the ontological status of what, if anything, in the external world language relates to. The latter option preserves the integrity of natural language semantics and its relation to syntax, neither of which is determined by our communicative needs. It also frees us from a metaphysically heavy notion of a domain without any semantic payoff. All of this means that we need not posit polysemous determiners, which is just as well given the absence of evidence for the relevant variability. An internalist approach to semantics is attractive, I think, principally because it challenges us to establish where, if at all, semantic phenomena require appeal to externalia for their explanation (cf., Chomsky, 2000; Pietroski, 2003, 2018; Collins, 2009, 2011; Glanzberg, 2014). The challenge can be posed without supposing that determiners are polysemous. 25

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.