The Metalogic of Ground: Pure and Iterative Systems

I develop a graph-theoretic model theory for pure and iterative grounding logics.


Introduction and Basic Notions
Pure logics of ground are so-called because they deal solely with the logic of grounding operations, ignoring both the structure of the operands and the interactions between grounding operations and other sorts of operation. Importantly, their languages omit truth-functors, along with other connectives such as modal operators. In this paper we will consider several such languages, each including one or more grounding operations. In this section I introduce notation for these languages and their proof systems in a general way.

Languages, Simple and Iterative
All of the languages considered in this paper share the same set of sentence letters or 'atoms' At ( ... ), and differ with respect to the grounding operations they include, as well as whether or not iteration is permitted. Grounding operations are either many-one or one-one. Many-one operations take a set of operands on the left and a single operand on the right. We omit parentheses where this does not generate ambiguity. We also borrow notational conventions from standard treatments of sequent calculi.2 One-one operations take a single operand on the left and a single operand on the right. Expressions formed via grounding operations are called sequents, usually with the particular operator in question signified (e.g. -sequent, -sequent). A sequent is simple if the operands of the sequent are all atoms; otherwise it is iterative. The languages we will study have sequents as their sole well-formed expressions: sentence letters are not themselves sequents, and hence are not taken to be fully-formed expressions of these languages. A language is identified with the set of its sequents. A language is called simple or 'pure' when it contains only simple sequents; otherwise it is iterative. Lowercase Greek letters , , etc. are metavariables ranging over admissible operands of the grounding operations of some language (just atoms if the language is simple, and also sequents where the language is iterative). Uppercase Greek letters , etc. are metavariables ranging over sets of admissible operands. is a metavariable ranging over sets of sequents; ranges over individual sequents.

1 On deRosset's approach, models are based on many-one relational structures whose many-one relation need not satisfy any additional properties. One shortcoming of this approach is that multiple additional notions, such as those of trees, 'grafts', and 'floors', need to be introduced to recover the basic structural properties commonly attributed to ground, which complicates the presentation of proofs as well as generalisations to extended languages. A more philosophical shortcoming of this original approach is that the relation is intended to correspond to a relation of immediate grounding; but recent work [7, 15] suggests that the idea of immediate grounding is problematic, and should not be relied upon if the theory of grounding is to be developed systematically.

Sequent Calculi and Inference Rules
We formulate sequent calculi in these languages. Where is a language, a sequent calculus formulated in is called an -calculus. Calculi are identified with sets of inference rules. Inference rules are set out using sequent schemas and written on two levels, as with Transitivity. Like the sequent schemas which make them up, inference rules have instances. An instance of an inference rule in a language is obtained by uniformly replacing lowercase Greek letters with admissible operands and uppercase Greek letters with sets of admissible operands so as to obtain -sequents throughout. For example, we have an instance of Transitivity. Sequents above the line in an inference are called the premises of that inference, and the sequent below the line is called the conclusion. An inference can have indefinitely many premises unless stated otherwise. We ignore the order of premise sequents, so that inferences like 1 2 3 and 2 1 3 , which differ only in a permutation of the upper line, are identified with one another. A sequent calculus S is identified with a set of inference rules, called the S-rules. We call S a -calculus when considering only those instances of the S-rules which are instances in . A set of -sequents is closed under an inference rule iff for every -instance of , if every premise of the inference is in , then so is the conclusion. Where S is an -calculus, is an -sequent and is a set of -sequents, we write S to say that any set of -sequents which is closed under all of the S-rules and contains all of also contains . An inference rule is S-admissible iff any set closed under the S-rules is also closed under .

Transitivity was a particularly simple rule to state. Other rules are more complicated, and some commentary on notation is required. Consider the following rules: Cut, Irreflexivity, and Identity. In Cut we use a schematic index , assumed to range over some nonempty index set . So to obtain an instance of Cut we must also substitute in some concrete index set. In such rules we also write or around one or more of the premises to abbreviate a sequence of sequent schemas of the relevant form. So abbreviates 0 , 1 , 2 , .... In Irreflexivity, is written under the line. As in [6] and [5], this is not a sequent: it is a sequent schema whose instances are all sequents. So a rule in which occurs below the line is one which licences the inference to any sequent whatever from the premises. Finally, in Identity nothing at all is written above the line. Such premise-less rules serve as axiom schemas, requiring that the conclusion always be contained in a set closed under that rule. Although none of our languages contains a sign for negation, in/consistency can still be defined via trivialisation. A set of sequents is S-inconsistent for a calculus S iff S for all sequents of the language under consideration; else is S-consistent. Additionally, a calculus S 2 extends another S 1 just in case every S 1 -rule is admissible in S 2 . Such extension is proper when S 2 is not also extended by S 1 .

2 First, where the set on the left hand side is a union of two sets such as in , we abbreviate this to simply . Second, where is a single operand and the set on the left is the union of some other set with the singleton as in , we abbreviate this as just , dropping the curly brackets.
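To fix ideas, the closure-based derivability relation can be sketched for the finite, simple case. The representation below is my own, not the paper's: a simple sequent is a pair of a frozenset of atoms and an atom, and the only rule implemented is a full cut in the style discussed above. A minimal sketch, not an official definition.

```python
from itertools import product

def close_under_cut(sequents):
    """Least superset of `sequents` closed under a full cut rule:
    if (G, psi) is in the set and every g in G has some ground
    (D_g, g) in the set, then (union of the D_g, psi) is added.
    Finite sets only; sequents are pairs (frozenset_of_atoms, atom)."""
    closed = set(sequents)
    changed = True
    while changed:
        changed = False
        for (G, psi) in list(closed):
            # for each operand g of G, collect its candidate grounds
            options = [[D for (D, h) in closed if h == g] for g in G]
            if any(not opt for opt in options):
                continue  # some operand has no ground; Cut inapplicable
            for choice in product(*options):
                new = (frozenset().union(*choice) if choice else frozenset(), psi)
                if new not in closed:
                    closed.add(new)
                    changed = True
    return closed

# From {a} grounds b and {b} grounds c, Cut yields {a} grounds c.
derived = close_under_cut({(frozenset({'a'}), 'b'), (frozenset({'b'}), 'c')})
assert (frozenset({'a'}), 'c') in derived
```

On this encoding, a set of sequents is closed under the rule exactly when `close_under_cut` returns it unchanged, and derivability from a set amounts to membership in its closure.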

Grounding Structures
When it comes to the task of providing semantics for pure logics of ground, two basic strategies have emerged in the last decade. The first, developed by Fine [6], makes use of truth-maker semantics. The second, first used in deRosset [5], makes use of graphs and graph-like structures. I will be developing a version of the latter strategy for pure and iterative logics of grounding.
By 'graph-like structures', I mean in particular structures consisting of a domain and a 'many-one' relation, which takes a set of elements on the left and a single element on the right. Such structures are not directed graphs, because a directed graph's relation must be binary. Nor are they what have come to be known as directed 'hypergraphs', which relate sets of elements to other sets. I propose to use the name semi-hypergraph for the class of relational structures whose relation is many-one. That is, a semi-hypergraph is a nonempty set together with a many-one relation on it.
In particular, we will consider a class of such structures which admit a certain cut property: a semi-hypergraph is cuttable iff, whenever a set of elements bears the relation to some element, and each member of that set has in turn some set bearing the relation to it, the union of those witnessing sets also bears the relation to the original element.

Definition 1.3
A grounding structure is a cuttable directed semi-hypergraph.
We denote grounding structures , using (rather than ) to denote their relation; we sometimes call this their grounding relation. Grounding structures will serve as the basis of our semantics for grounding logics. However, grounding structures ought to be of interest to grounding theorists for their own sake, to the extent that their investigation can help to clarify certain questions and positions about grounding. This much has been hinted at by the recent work of Rabin & Rabern [10], as well as Dixon [14]. These authors use the term 'grounding structures' in ways which differ importantly from ours (discussed in Appendix B), but they have clearly shown how the mathematical investigation of relational structures can shed light on the theory of grounding. In this sense, it is incidental that such structures also serve to provide a clean semantics for logics of ground.
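On a finite representation of semi-hypergraphs, the defining cut property can be checked mechanically. A sketch under an assumed encoding of my own (the relation as a set of pairs of a frozenset and an element); the function name is mine.

```python
from itertools import product

def is_cuttable(relation):
    """Check the cut property on a finite semi-hypergraph: whenever
    (G, psi) is in the relation and every g in G has at least one
    ground in the relation, each way of choosing grounds D_g must
    pool to a pair (union of the D_g, psi) that is also present."""
    for (G, psi) in relation:
        options = [[D for (D, h) in relation if h == g] for g in G]
        if any(not opt for opt in options):
            continue  # some member of G is ungrounded; nothing to check
        for choice in product(*options):
            pooled = frozenset().union(*choice) if choice else frozenset()
            if (pooled, psi) not in relation:
                return False
    return True

# 1 grounds 2 and 2 grounds 3; without 1-grounds-3 the structure is not cuttable.
assert not is_cuttable({(frozenset({1}), 2), (frozenset({2}), 3)})
assert is_cuttable({(frozenset({1}), 2), (frozenset({2}), 3), (frozenset({1}), 3)})
```

A grounding structure, on this encoding, is any nonempty domain paired with a relation for which `is_cuttable` holds.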

Pure Logics of Ground
The language consists entirely of simple -sequents: expressions of the form ' ' where At and At. In this section we will consider seven calculi which are built up from the following rules.

Identity
The weakest calculus we will consider is that with Cut as its sole rule, which we call C. The calculus A results from adding Amalgamation to C. The calculi I and G then result from adding Irreflexivity to C and A respectively. If we add Identity to C, we obtain a calculus I call R. The calculi M and T then result from adding Monotonicity to C and R respectively. To sum up:

C: Cut
A: Cut, Amalgamation
I: Cut, Irreflexivity
G: Cut, Amalgamation, Irreflexivity
R: Cut, Identity
M: Cut, Monotonicity
T: Cut, Identity, Monotonicity
These are the only interesting combinations of the above rules. We will not be considering -calculi which do not contain Cut. Moreover, any system which has both Amalgamation and either Identity or Monotonicity as rules would be redundant: both R and M (and so T by extension) extend A, in the sense that Amalgamation is already admissible in each. Finally, any system which has both Irreflexivity and either Monotonicity or Identity as rules admits the rule , and so is uninteresting. See . The classes of structures with these properties can be called 'Irr', 'Add', 'Ref', and 'Mon' respectively. With this in place we may already establish soundness for our seven calculi with respect to the following classes of structures.

Proposition 2.2 (Soundness)
Where and :

Proof In each case it suffices to show that the rules of the calculus in question preserve truth in all models based on structures in the relevant class.
(i) C's only rule is Cut, which preserves truth in all models for any grounding structure . For if satisfies and for each , then by the semantic clause we have and for each . Since is a grounding structure we have , so by the semantic clause .

(ii) I's rules are Cut, which by (i) preserves truth at all models, and Irreflexivity. The latter vacuously preserves truth at all models where is irreflexive. For at such models is never true, and so we may say that if such a sequent were true, any sequent would be true.

(iii) A's rules are Cut, which by (i) preserves truth at all models, and Amalgamation. The latter preserves truth at all models where is additive. For if for each , then by the semantic clause we have for each . Since is additive we have , so by the semantic clause .

(iv) G's rules are Cut, Irreflexivity, and Amalgamation. So apply (i)-(iii) to show that G-rules preserve truth at models based on additive and irreflexive grounding structures.

(v) R's rules are Cut, which by (i) preserves truth at all models, and Identity. Identity preserves truth at all models based on reflexive grounding structures. For at such models, for all we will have and so .

(vi) M's rules are Cut, which by (i) preserves truth at all models, and Monotonicity. Monotonicity preserves truth in all models with monotonic structures. This is because for all and , in such a model if then . Since , by the structure's monotonicity we have and hence .

(vii) T's rules are Cut, Identity, and Monotonicity. So apply (i), (v), and (vi) to show that T-rules preserve truth at models based on reflexive and monotonic grounding structures.
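The truth-preservation arguments can be replayed concretely on a small model. Below, a model is hypothetically represented (my encoding, not the paper's) as a relation-as-set-of-pairs together with a valuation dictionary, and the function implements the semantic clause for simple sequents invoked in case (i).

```python
def true_in_model(sequent, relation, valuation):
    """A simple sequent (Delta, phi) is true in a model just in case
    the image of Delta under the valuation bears the grounding
    relation to the image of phi.  A finite sketch of the clause."""
    Delta, phi = sequent
    return (frozenset(valuation[d] for d in Delta), valuation[phi]) in relation

# A small cuttable structure: 1 grounds 2, 2 grounds 3, and (by cut) 1 grounds 3.
R = {(frozenset({1}), 2), (frozenset({2}), 3), (frozenset({1}), 3)}
v = {'a': 1, 'b': 2, 'c': 3}

# The premises of a Cut instance are true ...
assert true_in_model((frozenset({'a'}), 'b'), R, v)
assert true_in_model((frozenset({'b'}), 'c'), R, v)
# ... and, because R is cuttable, so is the conclusion.
assert true_in_model((frozenset({'a'}), 'c'), R, v)
```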
We next move towards completeness. Our method is to construct canonical models for sets of sequents with respect to various calculi.

Definition 2.3
Where is a set of sequents and S is an -calculus, we define the structure S S S as follows. Note that this definition does not guarantee that S is a grounding structure. To show that it is, we must suppose that S admits the Cut rule for grounding sequents. Let us now apply this strategy to our calculi, and establish the converses of Proposition 2.2.
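Read this way, the canonical construction pairs the atoms occurring in a set of sequents with the relation of derivable sequents. A minimal sketch under assumed names of mine: sequents are pairs of a frozenset of atoms and an atom, and the calculus is represented by a closure operator `close` mapping a set of sequents to the least superset closed under the S-rules.

```python
def canonical_structure(gamma, close):
    """Canonical structure for a set `gamma` of simple sequents,
    relative to a calculus given by its closure operator `close`:
    the domain collects the atoms occurring in gamma, and the
    canonical relation holds of (Delta, phi) exactly when the
    sequent is derivable from gamma, i.e. lies in the closure."""
    derivable = close(gamma)
    domain = {a for (Delta, phi) in gamma for a in Delta | {phi}}
    return domain, derivable

# With the identity closure, the canonical relation is gamma itself.
dom, rel = canonical_structure({(frozenset({'a'}), 'b')}, lambda s: set(s))
assert dom == {'a', 'b'}
assert (frozenset({'a'}), 'b') in rel
```

Whether the resulting relation satisfies the cut property then depends, as in the text, on whether the calculus supplying `close` admits Cut.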

Proposition 2.8 (Completeness)
Where and : (vii) Apply (v) and (vi) to show that any T must be reflexive and monotonic.
I will now explain why I have picked these rules and systems to focus on. The systems G and R are Kit Fine's systems PLSFG (pure logic of strict full ground) and PLWFG (pure logic of weak full ground) respectively. The first four rules of Cut, Irreflexivity, Amalgamation, and Identity are the rules making up these two systems and the subsystems C, A, and I. To these I have added the rule of Monotonicity, and the additional systems M and T. Despite their not usually being considered in the logic of ground, I have elected to consider systems admitting the rule of Monotonicity because the rule helps to exhibit the relation between our semantics and the theory of Tarskian consequence relations. That is, our 'grounding structures' are a generalisation of abstract consequence structures as defined by Tarski. Such a structure is a pair where is a nonempty set and is a many-one relation on which satisfies the same cut property as specified for grounding structures, as well as being reflexive and monotonic in the same sense as Definition 2.1.3 That is, the reflexive monotonic grounding structures are precisely the abstract consequence structures; thus T is sound and complete over the class of Tarskian abstract consequence structures. Moreover, although no grounding theorist thinks that grounding exhibits monotonicity, clearly there are grounding-adjacent relations which must (consider the natural relation of containing a ground of ). Each of the other rules we have considered involves more controversy. I do not know of any direct challenges to Cut in the literature, though Cut ensures the transitivity of partial ground, which has been denied by Schaffer [13]. Reflexivity is often rejected for grounding relations; as just alluded to, Fine's notion of weak ground is taken to be reflexive, but the very notion of weak grounding has been challenged by others, including deRosset [4].
On the other hand, Irreflexivity has also been challenged, for example by Rodriguez-Pereyra [11]. Finally, Litland [9] has suggested rejecting Amalgamation, though in this case the recommendation is made in order to preserve other properties of ground like Irreflexivity in light of a proposed paradox. Despite these objections, if there is one system among the above which represents the current orthodoxy with respect to the pure logic of ground, it is the system we have called G, which represents ground as cuttable, irreflexive, and admitting amalgamation.

Partial and Proper Ground
In this section we consider extensions to which introduce additional grounding operators. These operators encode notions of partial ground and (what I call) proper ground. In these extended languages we will show how to interpret the systems of Fine [6] and deRosset [5], and provide completeness results.

Partial Ground
The language P extends and consists of all simple -sequents plus all simple -sequents. The intended reading of is that partially grounds , in the sense that features in some ground of . We consider five calculi in P , corresponding to five of the -calculi introduced in the previous section, making use of rules we have introduced previously as well as the following three involving .

Irreflexivity
In particular, where S is any of the -calculi from the previous section, let S P be the P -calculus whose rules are just those of S, plus Subsumption and Transitivity . If S is either I or G, then we additionally replace the original Irreflexivity rule with Irreflexivity . In [5], Louis deRosset formulates four calculi in the language P . In our nomenclature, deRosset's logics are C P , A P , I P , and G P . 4 We consider these along with R P . We do not consider M P and T P , since these turn out to be too strong to be interesting, and make partial ground essentially trivial: in M P the rule is admissible, and in T P the completely trivialising rule is admissible.

Now, the semantics we give for P are essentially the same as for . The languages have the same models, and the same semantic clauses when it comes to -sequents. But to this we also add a clause for -sequents: iff such that .
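The added clause for partial ground can be read off a structure directly: an element partially grounds another just when it features in some ground of it. A finite sketch under my assumed encoding (the relation as pairs of a frozenset and an element); the function name is mine.

```python
def partially_grounds(relation, x, y):
    """Semantic clause for partial ground, read off a structure:
    x partially grounds y iff x features in some ground of y,
    i.e. some pair (D, y) in the relation has x among D."""
    return any(x in D for (D, h) in relation if h == y)

R = {(frozenset({1, 2}), 3)}   # 1 and 2 together ground 3
assert partially_grounds(R, 1, 3) and partially_grounds(R, 2, 3)
assert not partially_grounds(R, 3, 1)
```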
We may now establish soundness for our P -calculi.
Proposition 3.1 (Soundness)

Where P and P :

Proof In each case it suffices to show that the characteristic rules of the calculus in question preserve truth in all models based on structures in the relevant class.
(i) The rules of C P are Cut, Subsumption , and Transitivity . We know already that Cut preserves truth in all models. Subsumption does too, for if then and , so that . Finally, Transitivity preserves truth at all models since if and then we have such that and . Applying the definition of a grounding structure yields , and so .

(ii) The rules of I P are those of C P plus Irreflexivity . We know that the rules of C P preserve truth at all models. Additionally, Irreflexivity preserves truth at all models with irreflexive structures. This is because at such models it is never the case that , so vacuously if is true in such a model then any sequent is.

(iii) The rules of A P are those of C P and A. We know that all of these rules preserve truth in additive models.

(iv) The rules of G P are those of I P and A P . We already know that all of these rules preserve truth in models whose structures are both additive and irreflexive.

(v) The rules of R P are those of C P and Identity. We already know that all of these rules preserve truth in all models with reflexive structures.

Now we move towards proving completeness. Here we will need to make some alterations to the construction of canonical models. Our approach is inspired by deRosset's own use of witnessing elements in [5], though there are differences in implementation which will be obvious to those familiar with the construction presented there. S = the union of the three sets: We will need both kinds of canonical structures in the following completeness results. Now, the presence of the non-sentential witnessing elements and complicates the proof of the canonical model lemma (i.e. the P -analogue of Proposition 2.6), but the basic strategy is the same. We first show that the above structures are indeed grounding structures, so that they may be used to construct P -models.

Proposition 3.3

If S admits all the C P -rules, then S is a grounding structure.

Proof We need to show that the defining cut property of grounding structures is obeyed in this structure. So suppose we have . (1) If is not involved, meaning that and each is a sentence, and and each are included in At, then S and S for each . So by the Cut rule in S we will have S and so S as desired.
Since we never have S , the only way that can be involved is if is in or is in one of the s, or both; it cannot be or any of the s. In any scenario, we can show that the cut property is respected.
(2)

And from this we may prove completeness for all of the P -calculi in a uniform way.

Proposition 3.8 (Completeness)
Where P and P :

Proper Ground and Fine's Full System
In this section we provide an interpretation of Kit Fine's logic of ground in terms of grounding structures. In Fine's system there are four notions of ground. They are called: strict full ground, weak full ground, strict partial ground, and weak partial ground. The primitive notion among these is that of weak full ground (expressed as ' is a weak full ground of '). The others are characterised as follows.
is a weak partial ground of iff features in some weak full ground of .
is a strict full ground of iff is a weak full ground of and is not a weak partial ground of any .
is a strict partial ground of iff is a weak partial ground of and is not a weak partial ground of .
I have found that the most natural way to interpret Fine's system is to interpret our grounding relation as expressing weak ground (despite the fact that Fine uses this symbol for strict ground), and to adopt Fine's definitions for the other grounding operations. Thus, we work in a language I call F , which extends P with a new many-one operator and a new one-one operator . These new operators are called 'proper ground' and 'proper partial ground' respectively. They are given the following semantic clauses at models .
iff and for all . iff and .
These can be seen to encode Fine's characterisations of his strict grounding relations. In other words, for us, ground ( ) interprets Fine's notion of weak full ground, partial ground ( ) interprets weak partial ground, proper ground ( ) interprets strict full ground and proper partial ground ( ) interprets strict partial ground.
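Under this interpretation, Fine's characterisations can be read off a structure directly. A finite sketch with my assumed encoding (the relation as pairs of a frozenset and an element, read as weak full ground) and my own function names.

```python
def weak_partial(relation, x, y):
    # x is a weak partial ground of y: x features in some weak full ground of y
    return any(x in D for (D, h) in relation if h == y)

def proper_ground(relation, D, y):
    """Proper (strict full) ground under the stated clause: D weakly
    grounds y, and y is not a weak partial ground of any member of D."""
    return (D, y) in relation and not any(weak_partial(relation, y, d) for d in D)

def proper_partial(relation, x, y):
    """Proper (strict) partial ground: x is a weak partial ground of y,
    and y is not a weak partial ground of x."""
    return weak_partial(relation, x, y) and not weak_partial(relation, y, x)

R = {(frozenset({1}), 2), (frozenset({2}), 2)}   # 2 also weakly grounds itself
assert proper_ground(R, frozenset({1}), 2)       # 2 is no partial ground of 1
assert not proper_ground(R, frozenset({2}), 2)   # blocked: 2 partially grounds 2
assert proper_partial(R, 1, 2) and not proper_partial(R, 2, 2)
```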
We present two calculi in F : Fine's own system, and a weakening which is adequate to capture general validity. Taking the weaker system first, let C F denote the system which results from adding the following rules to those of C P .
We will show this system to be sound and complete over the class of all grounding structures. Next, let R F be the system which results from adding Identity to C F . As just mentioned, R F is Fine's full system, and will be shown sound and complete with respect to the class of reflexive grounding structures. Soundness proceeds in a familiar way.

Proposition 3.9 (Soundness)

For F and :

Proof In each case it suffices to show that the rules of each system preserve truth at all models with grounding structures from the classes of structures in question.

(i) We already know that the rules of C P (importantly including Transitivity ) preserve truth at all models on all structures, since this system was shown to be sound for general validity.

The proof of completeness for each of these systems makes use of the same definition of a canonical model with a witnessing element as used for P , but the proof strategy is slightly augmented on account of the new vocabulary. In particular, we will need to establish the following. Where S is either C F or R F : The argument for these results is complicated; thankfully we can make use of several proof-theoretic results from Fine [6]. There, Fine also shows that (i) and (ii) hold for R F , so we only need to worry about establishing the case of C F . Now, for the purpose of using Fine's results, we introduce two weakened F -calculi, called C and R . The rules of C are just Subsumption , Transitivity , Transitivity , and Irreflexivity . The rules of R are just those of C plus

Identity
The system R is just Fine's system PLPG (the pure logic of partial ground), and C is the system obtained from this by omitting Identity . Fine establishes the following: Additionally, where F , define as the union of the sets or for some and or for some . Then Fine also establishes,

Proposition 3.11 ([6] Lemma 4.7/8) For a -or -sequent,
Moreover, the proof given of this can easily be adapted for C F and C by omitting the steps referencing Identity. So we also have, for any - or -sequent , . Using these we may establish our desired result as follows.

Proposition 3.12
Where S is either C F or R F :

Proof As mentioned, Fine [6] has shown that this holds in the case of R F . So we will show the case of C F . (i) Suppose that

Iterated Logics of Ground
Returning to the basic language , we now consider an alternative way of extending this language by allowing sequents themselves to serve as operands for grounding operators. Fixing some regular cardinal (the purpose of which will be explained shortly), the grammar of the new language is defined as follows.
Every -sequent is an -sequent. If is a set of atoms and/or -sequents with , and is a sentence or -sequent, then is an -sequent. Nothing else is an -sequent.
An example of an iterated sequent would be ; another would be . We omit outer parentheses where it does not generate ambiguity.
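The recursive grammar just given can be sketched as a datatype. A finitary sketch under my own names (so the cardinal bound is automatic for finite left-hand sides): atoms are strings, and sequents may themselves appear as operands.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Seq:
    """An iterated sequent: left-hand operands are atoms (strings)
    or further Seqs, and the right-hand operand likewise."""
    left: frozenset
    right: object

def depth(x):
    """Nesting depth: atoms have depth 0, and a sequent exceeds its
    deepest operand by one.  Simple sequents are exactly depth 1."""
    if isinstance(x, Seq):
        return 1 + max(depth(o) for o in x.left | {x.right})
    return 0

s1 = Seq(frozenset({'a'}), 'b')    # a simple sequent
s2 = Seq(frozenset({s1}), 'c')     # an iterated sequent: a sequent grounds c
assert depth(s1) == 1 and depth(s2) == 2
```

Freezing the dataclass makes sequents hashable, so they can occur inside the frozensets on the left-hand sides of further sequents, mirroring the grammar's recursion.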

Cardinality Troubles
A digression. Note that we have placed a restriction of cardinality on the sets of sequents which can be used to form further sequents. All of this is to ensure that we avoid a couple of technical and philosophical hitches involving cardinality. I will explain these hitches, and note some alternative strategies we might have taken to avoid them.
The use of a cardinal bound in the formation rules of sequents and instances of inference rules is needed to ensure that the language forms a set, rather than a proper class, since the canonical model constructions below require the use of sets in their current form. The reader will note that we might instead have allowed any set-sized class of sequents to form a sequent (so that the language would have formed a proper class), while augmenting the definition of grounding structures to allow for proper-class domains, and the definition of inference rules to allow for the handling of proper classes of premises (etc.); this would have had essentially the same effect.
The requirement that be regular is added to rule out certain cases like the following. Suppose that is singular, so that it has cofinality cf . Then we might have instances of rules like Amalgamation wherein we have premises cf , and conclusion . If each is sufficiently large, this conclusion is not well-formed, since then we may have . Similar examples involving Cut are easily formulated. Note that such examples are not strictly counterexamples to Cut or Amalgamation; even if is singular, there are no instances of these rules in which fail to be admissible in relevant calculi or truth-preserving in relevant models. This is simply because we require that, for an instance of an inference rule to be an instance in a given language, say , every sequent in the inference must be a well-formed sequent in . The regularity of is not necessary to avoid failures of these inference rules in the technical sense. 5 Nevertheless, it would seem an unfortunate blemish of any calculus admitting Amalgamation (for instance) to have cases in which we have for all and yet do not have , simply because does not count as well-formed. Even though this is not a failure of the rule strictly speaking, it is what we might term an informal failure, chalked up to the limits of the language.6
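A concrete instance of the worry can be written out, using assumed notation of my own ($\triangleleft$ for the grounding operator, $\kappa$ for the fixed cardinal bound):

```latex
Let $\kappa=\aleph_\omega$, which is singular with
$\mathrm{cf}(\kappa)=\aleph_0$.  Choose premises
\[
  \Delta_n \triangleleft \varphi \qquad (n<\omega), \qquad
  |\Delta_n| = \aleph_n ,
\]
each well-formed since $\aleph_n<\kappa$.  Amalgamation would conclude
\[
  \bigcup_{n<\omega}\Delta_n \;\triangleleft\; \varphi ,
\]
which is not well-formed, since
$\bigl|\bigcup_{n<\omega}\Delta_n\bigr| = \aleph_\omega = \kappa$
reaches the bound $<\kappa$.  If instead $\kappa$ is regular, a union
of fewer than $\kappa$ sets, each of size below $\kappa$, has size
below $\kappa$, so no such instance arises.
```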
As far as I can tell, the assumption that is regular is all we need to avoid such cases in the present discussion; it seems to circumvent 'informal failures' for all of the rules considered here. But there are inference rules besides those we are considering which would require that additionally be a strong limit cardinal (hence strongly inaccessible, assuming we continue to impose regularity). Here is an example: Subset. If is not a strong limit, then there is some cardinal such that 2 . So suppose we have an instance of Subset in which . In this case, the top sequent would be well-formed but the bottom sequent would not be, because its left-hand side would be too large given that 2 . Subset is not plausible when read as a principle about grounding, so it is not so lamentable that we will have informal failures of Subset wherever is regular but not a strong limit. I do not know whether there are rules which are somewhat plausible when read as principles of grounding, and which require to be a strong limit in order to avoid what we have termed 'informal failures'. Nevertheless, it is important to note that there may be some inference rules out there which are plausible when read as principles of grounding, and which might require one to fix as strongly inaccessible in order for informal failures to be avoided.

It is worth noting that none of the new, properly iterative rules considered in this section (besides Subset) is affected by any of this discussion.

5 . Even though this inference does not work in classical propositional logic, that does not mean that conjunction introduction fails in propositional logic. Rather, it is just that the inference itself is not well-formed, since ' ' is not a formula in the language. This would be an 'informal failure' of conjunction introduction in ordinary propositional logic, applying the terminology introduced in the main text, and might motivate one to move to consider languages that allow for infinitary conjunctions like .

6 This blemish would also slightly complicate the arguments given for soundness for the relevant calculi. In arguing that Amalgamation is truth-preserving in any additive model, for instance, we would need to alter our statement to say: suppose that for all , where is additive; then if is well-formed we have . This is not so terrible, but given our assumption of regularity, when one comes to the arguments for the soundness of Cut and Amalgamation, one can proceed exactly as in Proposition 2.2.
I have already mentioned that, instead of requiring a cardinal bound in our formation rules, we might have allowed any set to serve as the left-hand side of a grounding sequent, and appropriately generalised definitions and results regarding canonical models to allow for the handling of proper-class-sized languages. This would also have had the effect of avoiding the informal failures of rules like Cut and Amalgamation (as well as Subset) just noted. This is unsurprising, given that strongly inaccessible cardinals can be used to provide models of set theory.
Another option to mention. If we fix , we end up with a very simple finitary language. Since 0 is a regular cardinal, it avoids informal failures of Cut and Amalgamation as mentioned above. But in addition, even though it is not strongly inaccessible, it is very much like a strongly inaccessible cardinal in being a strong limit (i.e. if then 2 ). Hence fixing also avoids informal violations of more recherché principles like Subset. This would in many ways be the 'easy' way out: these problems and puzzles involving infinite cardinalities only arise given that we allow our language to be infinitary. A finitary language would easily avoid all of these problems. Nevertheless, it is standard in the literature, starting with Fine [6], to allow for infinite sets of operands on the left-hand side of grounding sequents. We want to remain as faithful to the literature as possible (and also to provide as general a construction as possible), and this is why we proceed as we do.

Iterated Calculi
Now, versions of the -calculi already named are obtained by allowing the rules of each system to range over -sequents, and adding the following two rules to each.

Factivity Right Factivity Left
We retain the names for the iterative versions of the calculi, adding a prime to represent the presence of iteration and of the Factivity rules, calling them C , I , A , G , and so on.
Some principles formulated in correspond to theses expressed by various metaphysicians on the question of what grounds grounding. This is the question of what the facts about grounding are themselves grounded in: in virtue of what does ground ? An often less emphasised, but clearly related, question is that of what grounding grounds: what do grounding facts help to ground? Some other principles formulated in relate to this second question. To my mind, the simplest answer to the question of what grounds grounding is the internality view. According to this view, grounding is an internal relation between its relata, meaning that statements or facts of grounding are grounded in the relevant relata. For any and where grounds , that grounds is grounded in and in . This view can be expressed in an inference rule as follows.7

Internality
Another prominent view which admits a perspicuous rendering is the superinternality view of Karen Bennett (cf. [1]). According to this view (originally about what she terms 'building' relations, but easily transposed into grounding-talk), grounding is a superinternal relation, which means that when grounds , also grounds the fact that grounds . This can be rendered as the following rule.

Superinternality
Despite the name, superinternality is a weaker thesis than internality, in the sense that Superinternality is admissible in any calculus in which Cut and Internality are admissible:

Internality Cut
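Since the symbolic notation is stripped in this copy, a toy sketch may help fix ideas. The Python below is purely illustrative (every name, and the encoding of grounding claims as (frozenset, target) pairs, is my own assumption, not the paper's): it renders Internality and Superinternality as one-step closure operations on a set of grounding edges, and checks on a tiny example that applying a naive Cut to Internality's output reproduces Superinternality's output, mirroring the admissibility claim above.

```python
def seq(gamma, y):
    # a stand-in object for the statement "gamma grounds y"
    return ("seq", gamma, y)

def internality(edges):
    """One application of Internality to every edge: each grounding
    fact is grounded in its relata taken together."""
    return {(gamma | {y}, seq(gamma, y)) for (gamma, y) in edges}

def superinternality(edges):
    """One application of Superinternality: the grounds of y also
    ground the fact that they ground y."""
    return {(gamma, seq(gamma, y)) for (gamma, y) in edges}

def cut(edges):
    """Naive one-step Cut: if delta grounds c and gamma grounds some
    d in delta, then delta with d replaced by gamma grounds c."""
    out = set(edges)
    for (delta, c) in edges:
        for (gamma, d) in edges:
            if d in delta:
                out.add(((delta - {d}) | gamma, c))
    return out

base = {(frozenset({"a"}), "b")}
derived = cut(base | internality(base))
# Cutting "b" out of Internality's conclusion yields exactly the
# Superinternality conclusion:
print(superinternality(base) <= derived)  # True
```

This is only a single-step check on one edge, of course, not a proof of the general admissibility fact, which is the business of the surrounding text.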
As for the question of what grounding facts help to ground, a more specific question in this area is whether, in particular, grounding facts help to ground the very thing grounded. Let us say that grounding is a generative relation just in case whenever grounds , the fact that grounds can also be said to ground when taken together with itself. 8 In this sense, the relation of grounding itself can be said to take part in generating the thing grounded. Expressed as a rule this says:

Generativity

Recall that where S is a -calculus, we write S for the -calculus whose rules are all of the S-rules, plus Factivity Left and Factivity Right. Additionally, we will write S Sup , S Int and S Gen for the systems which respectively add Superinternality, Internality, and Generativity to S . We write S SG and S IG for the systems which result from adding Generativity to S Sup and S Int respectively. See Fig. 2.

7 A reviewer has pointed out that one might want to say Internality cannot exactly express the belief that ground is an internal relation. For, the thought goes, being an internal relation requires that the grounding of by and is always strict (meaning at least asymmetric). But in calculi like R , where Identity is admissible, ground is not strict in this way. Hence in such calculi, Internality cannot be thought to express the view that ground is internal. The point is well taken, but it seems ultimately to turn on how one understands the distinction between internal and external relations (or rather, on which of these proposed distinctions one proposes to refer to in using those terms). We might want to draw a distinction between relations grounded in their relata and those that are not. We might instead want to draw a distinction between those that are properly or strictly grounded in their relata and those that are not. Or indeed, we might take up some entirely modal, non-ground-theoretic version of the distinction (e.g. is internal iff implies necessarily ). The view that ground is internal in the sense of being properly grounded in its relata, I agree, is not faithfully expressed by Internality in calculi that admit Identity. But neither does Internality express the view that ground is internal on the modal understanding of internality (i.e. that if then necessarily ). And this is fine. All I claim is that Internality does capture the claim that whenever , this fact is grounded in and . To call a relation 'internal' iff it is always grounded in its relata (not necessarily properly grounded) is another perfectly legitimate conception of internality, and it seems to me that there is conceptual room for a view on which ground is reflexive and internal in just this sense, even if such a view is implausible. All of this discussion also applies to the rules of Superinternality and Generativity considered shortly. Even though the concept of superinternality is introduced by Bennett, who herself holds that building relations are all asymmetric, it is conceivable that someone else might think that ground is superinternal in the sense expressed by the Superinternality rule while not also thinking that ground is asymmetric.

It is straightforward enough to extend the graph-theoretic semantics we have been considering to , though some natural augmentations have to be made, and extra structure added.

Definition 4.1 An iterative grounding structure is a pair , written , where is a grounding structure and is a partially defined operation . In particular, is defined iff .
The intuitive idea is that represents the fact that grounds ; hence the operation is only defined where the grounding relation in fact obtains.

Definition 4.2 An
-model is a triple where is an iterative grounding structure, , and At is a function such that: if then ; else, .
The idea here is that the object is some non-fact. When the -sequent is false, its denotation is a falsehood. Note that never features in the relation since it is always chosen to lie outside . The semantic clause for -sequents is otherwise unchanged.
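As a toy rendering of Definitions 4.1 and 4.2 (a sketch only: the class name, the STAR object, and the encoding of the relation as (frozenset, target) pairs are my assumptions, since the paper's symbols are stripped in this copy), one might model the partial operation and the non-fact denotation as follows.

```python
STAR = object()  # the designated "non-fact", chosen to lie outside the domain

class IterativeGroundingModel:
    """A toy rendering of Definitions 4.1-4.2: edges are pairs
    (frozenset_of_sources, target); seq is partial, defined exactly
    where the grounding relation holds; denote falls back to STAR."""

    def __init__(self, domain, edges):
        self.domain = set(domain)
        self.edges = set(edges)

    def seq(self, gamma, y):
        # the partial operation of Definition 4.1: defined iff gamma grounds y
        if (gamma, y) not in self.edges:
            raise ValueError("seq undefined: gamma does not ground y")
        return ("seq", gamma, y)

    def denote(self, gamma, y):
        # Definition 4.2: a true sequent denotes the corresponding fact,
        # a false one denotes the non-fact STAR
        if (gamma, y) in self.edges:
            return self.seq(gamma, y)
        return STAR

m = IterativeGroundingModel({"a", "b"}, {(frozenset({"a"}), "b")})
print(m.denote(frozenset({"a"}), "b"))      # a defined grounding fact
print(m.denote(frozenset({"b"}), "a") is STAR)  # True: a false sequent
```

Because STAR is a fresh object, it can never appear as a relatum of any edge, matching the text's remark that the non-fact never features in the relation.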
iff

Before establishing correspondences for systems with Internality, Generativity, etc., we will first focus on the appropriately extended versions of our -calculi. Recall that where S is one of C , I , A , G , R , M , or T, we call the result of extending the rules of S to and adding the Factivity rules S . Additionally, where is a class of grounding structures, let denote the class of iterative grounding structures where . In general, where is a class of iterative grounding structures, we write for , and just in case is true at all models where and for every . Soundness and completeness are established as before: we first demonstrate truth-preservation of the rules in the various systems and then provide canonical models.
Besides the new rules of Factivity Left and Factivity Right, the non-iterative rules of the systems C , I , A , and so on are unchanged. Therefore, as the reader can verify, all of the arguments from Proposition 2.2 that these rules preserve truth in the relevant models may be applied here without revision, beyond notation and the use of -models. This means that to show soundness for these systems it suffices to show that Factivity Left and Factivity Right preserve truth at all models.

Canonical models for are much like those for , and the absence of partial and proper ground means that we do not need to concern ourselves with witnessing elements, or with alternate forms of the canonical model lemmas. Where S is a calculus and is a set of sequents, we first define an iterative grounding structure .

Proof Thanks to the previous result, it suffices to show that for each system S and S-consistent set , S is in the class of iterative structures in question. The arguments here are essentially unchanged from Proposition 2.8, and may be repeated without significant alterations. In each case, once it is shown that S , it follows that S .

Now we consider systems which add the properly iterative rules of Generativity, Superinternality, and Internality. To provide adequate semantics for such systems we introduce the following criteria for iterative grounding structures.

Int Gen
Proof In each case, the left-to-right direction (Soundness) is established by showing that the rules of Superinternality, Internality, and Generativity preserve truth in models with superinternal, internal, and generative structures respectively. None of these presents special difficulty, so we settle for proving as an example that Superinternality preserves truth at models with superinternal structures. If at such a model , then . Since is superinternal we have . Since , so and thus .

In (i)-(iii), the right-to-left direction (Completeness) is shown by establishing that for any , S Sup , S Int and S Gen are superinternal, internal, and generative respectively. The other requisite properties of the relevant canonical structures are then shown via the same arguments as in Proposition 2.8. None of these presents special difficulty, so we settle for proving as an example that for any ,

Conclusion
This concludes our discussion. Our scope here has been relatively modest; in future work it will be interesting to see this general semantic strategy applied to logics of irreducibly many-many grounding operators, as well as to properly impure logics of ground, which introduce non-grounding operators and propose rules for the interaction of grounding and non-grounding operators.
idea that, though well-formed and present in the language, statements of zero-ground are always false.

Nonzero
We may add this rule to each of the -calculi introduced in §2 to yield new, stronger calculi. I will refer to these as S + , where S is the -calculus to which the rule is added. We now establish soundness and completeness for these zero-free calculi (Fig. 3).

Proof (Soundness/Left to Right) Except for Nonzero, the rules of these systems are unchanged, as are the arguments that these rules preserve truth over the respective classes of models. So it suffices to note that Nonzero vacuously preserves truth over models with zero-free structures: at such models is never true, and so if such a sequent were true, any sequent would be true.

(Completeness/Right to Left) It suffices to show for each calculus S that for all S-consistent , S must be in the relevant class of structures. The arguments in each case are as in Proposition 2.8, with the addition that if is S-consistent, then for all we must have . Thus for all we must have S + , meaning that S + is zero-free.
The naming conventions and Propositions from Section 4 allow each of these new calculi to be extended to calculi, which then receive soundness and completeness results with respect to the obvious classes of iterative grounding structures. For example, C Int is the result of adding Nonzero to C Int , and can be shown sound and complete with respect to the class of internal iterative grounding structures whose underlying grounding structure is in Nz.

Appendix B: Cut Properties in Directed Semi-Hypergraphs
The only property that we have presumed to hold of ground throughout our discussion is cuttability: a many-one analogue of the transitivity property for binary relations. This assumption was implemented by requiring in the definition of grounding structures that they be cuttable. Here we are interested in that definition, and in particular in the kind of cut property it uses.

B.1 Finite Cut
As mentioned above, other authors have made use of the term 'grounding structure', and have wanted to include a cut property in the definition of such structures. However, our notion of grounding structure differs from theirs in two ways. Firstly, our definition is more purely mathematical: both Dixon's grounding structures and Rabin and Rabern's grounding structures appeal to real grounding relations between their inhabitants, and posit these inhabitants to be real relata of ground (facts or whatever). While we allow that real grounding relata can serve as the domain of a grounding structure and that a real grounding relation can serve as the relation of a grounding structure, this is not essential to grounding structures as such. For us, any nonempty set can serve as the domain of a grounding structure, and any cuttable relation can serve as its relation. The second way in which our grounding structures differ from the structures of these previous writers is the choice of cut property itself. These authors each settle for the following property as part of the definition of a grounding structure.
Definition B.1 is finitely cuttable iff: if and , then .
In particular, for Dixon, as for Rabin and Rabern, the real grounding relation is only posited to be finitely cuttable (though in both cases it is also posited to be irreflexive). Now, it is evident just from the definitions that every cuttable structure is also finitely cuttable. To show that the cuttability property of our grounding structures is strictly stronger than finite cuttability, then, it suffices to show:

Proposition B.2 Some finitely cuttable directed semi-hypergraphs are not cuttable.

Proof , and additionally 0 for every finite 1; if were cuttable we would thus have 0 , which is not so. However, the structure is finitely cuttable. To see this, suppose we have and for all . Either is a finite ordinal or it is . Where is finite, we immediately have , since by hypothesis every is less than and every is less than every , hence also less than . If , then by (ii) must be infinite, in which case is infinite. But again, by hypothesis every is less than and every is less than every , hence also less than . Hence we have . So in all cases, finite cut is respected.
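Because the counterexample above is essentially infinitary, it cannot be run directly; but the cut property itself can be brute-force checked on small finite structures, where cuttability and finite cuttability coincide. The sketch below assumes one common formulation of cut, on which simultaneously chosen grounds for every member of the cut set are pooled; the paper's own Definition B.1 may differ in detail, and all concrete names are my own.

```python
from itertools import product

def satisfies_cut(edges):
    """Brute-force check of a simple cut property on a finite directed
    semi-hypergraph (edges are pairs (frozenset_of_sources, target)):
    whenever delta |> c and, for every d in delta, some gamma_d |> d,
    the union of the chosen gamma_d's must also bear |> to c."""
    edges = set(edges)
    for delta, c in edges:
        ds = sorted(delta)
        # all ways of choosing a ground gamma_d for each d in delta
        choices = [[g for (g, t) in edges if t == d] for d in ds]
        if any(not ch for ch in choices):
            continue  # some d is ungrounded, so the antecedent fails
        for combo in product(*choices):
            union = frozenset().union(*combo)
            if (union, c) not in edges:
                return False
    return True

edges_ok = {
    (frozenset({"a"}), "b"),
    (frozenset({"b"}), "c"),
    (frozenset({"a"}), "c"),  # supplied by cutting "b"
}
edges_bad = {
    (frozenset({"a"}), "b"),
    (frozenset({"b"}), "c"),
}
print(satisfies_cut(edges_ok))   # True
print(satisfies_cut(edges_bad))  # False: cutting "b" demands {a} |> c
```

On finite structures like these the check is exhaustive; distinguishing the finite from the unrestricted property, as the Proposition does, genuinely requires infinite premise sets.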
Finitary cut is therefore a strict weakening of the cut property we have specified for our grounding structures. I think it is clearly an unmotivated weakening, and that it is preferable to adopt the stronger condition in defining grounding structures: whatever reasons we have for thinking that ground is cuttable do not seem restricted to the finitary.

B.2 Tarskian Cut
Earlier I claimed that our grounding structures are essentially a generalised kind of abstract consequence structure: ones which relax the conditions of monotonicity and reflexivity. Abstract consequence structures, on our understanding, thereby form a subset of grounding structures; in particular, they are the reflexive and monotonic grounding structures. Here I will show that this is indeed the case. This is necessary since the definition of an abstract consequence structure is often introduced using a different cut property from the one we specified for grounding structures.

Definition B.3 An abstract consequence structure is a directed semi-hypergraph such that for all and : if and for all , then .
In the absence of monotonicity and/or reflexivity, the weakness of Tarski-cuttability as a property of grounding shows in the fact that it does not even guarantee the transitivity of partial grounding in the way that cuttability and finite cuttability do. To make this thought precise, I introduce the following notation.

Definition B.8
Where is a directed semi-hypergraph, and , we may write just in case there is some such that .
The relation clearly corresponds to the notion of partial grounding introduced in the first part of Section 3. And as there, we can see that cuttability guarantees that this derivative binary relation is transitive. In fact, even finite cuttability suffices.

Proposition B.9
If is finitely cuttable, then is transitive.
Proof If and , then there are such that and . So by finite cuttability we have and so .
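Concretely, the derived partial-ground relation and the transitivity check of Proposition B.9 can be sketched as follows (a toy Python illustration; the edge encoding as (frozenset, target) pairs and all names are my own assumptions).

```python
def partial_grounds(edges):
    """x partially grounds y iff x belongs to some gamma with gamma |> y
    (Definition B.8)."""
    return {(x, y) for (gamma, y) in edges for x in gamma}

def is_transitive(rel):
    return all((x, z) in rel
               for (x, y1) in rel for (y2, z) in rel if y1 == y2)

# A small finitely cuttable structure: the third edge is exactly what
# one application of finite cut (cutting "b") requires.
edges = {
    (frozenset({"a"}), "b"),
    (frozenset({"b", "d"}), "c"),
    (frozenset({"a", "d"}), "c"),
}
rel = partial_grounds(edges)
print(("a", "c") in rel)   # True: a partially grounds b, b partially grounds c
print(is_transitive(rel))  # True
```

Deleting the third edge breaks finite cuttability, and with it the guarantee that the derived binary relation is transitive, which is the contrast Proposition B.10 then draws for Tarski-cuttable structures.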
On the other hand, Tarski-cuttability is not sufficient in this sense:

Proposition B.10 There are Tarski-cuttable where is non-transitive.
Proof Take from Proposition B.6. Here we have and but .
I take this to be another good reason to prefer our own notion of cuttability in thinking about ground. Conceivably, however, those who dislike the transitivity of partial ground may see the adoption of Tarskian cut as a way of retaining a cut property for grounding while accommodating certain influential objections to the transitivity of partial ground known in the literature (i.e. those of [13]). I would not endorse this move, since I do not think that the counterexamples in question succeed in undermining the transitivity of , but my reasons for thinking so are beside the present topic.