Cyclic Hypersequent System for Transitive Closure Logic

We propose a cut-free cyclic system for transitive closure logic (TCL) based on a form of hypersequents, suitable for automated reasoning via proof search. We show that previously proposed sequent systems are cut-free incomplete for basic validities from Kleene Algebra (KA) and propositional dynamic logic (PDL), over standard translations. On the other hand, our system faithfully simulates known cyclic systems for KA and PDL, thereby inheriting their completeness results. A peculiarity of our system is its richer correctness criterion, exhibiting 'alternating traces' and necessitating a more intricate soundness argument than for traditional cyclic proofs.


Introduction
Transitive closure logic (TCL) is the extension of first-order logic by an operator computing the transitive closure of definable binary relations. It has been studied by numerous authors, e.g. [15, 16, 17], and in particular has been proposed as a foundation for the mechanisation and automation of mathematics [18].
Recently, Cohen and Rowe have proposed non-wellfounded and cyclic systems for TCL [8, 10]. These systems differ from usual ones by allowing proofs to be infinite (finitely branching) trees, rather than finite ones, under some appropriate global correctness condition (the 'progressing criterion'). One particular feature of the cyclic approach to proof theory is the facilitation of automation, since complexity of inductive invariants is effectively traded off for a richer proof structure. In fact this trade-off has recently been made formal, cf. [1, 11], and has led to successful applications to automated reasoning, e.g. [6, 7, 25, 28, 29].
In this work we investigate the capacity of cyclic systems to automate reasoning in TCL (refer to Fig. 1 for a summary of our contributions). Our starting point is the demonstration of a key shortfall of Cohen and Rowe's system: its cut-free fragment, here called TC G, is unable to cyclically prove even standard theorems of relational algebra, e.g. (a ∪ b)* = a*(ba*)* and (aa ∪ aba)+ ≤ a+((ba+)+ ∪ a) (Theorem 3.7). An immediate consequence of this is that cyclic proofs of TC G do not enjoy cut-admissibility (Corollary 3.14). On the other hand, these (in)equations are theorems of Kleene Algebra (KA) [19, 20], a decidable theory which admits automation-via-proof-search thanks to the recent cyclic system of Das and Pous [13].

Fig. 1 The diagram displays results from the literature together with our contributions, marked with ( ). Double arrows represent soundness and completeness results, which for PDL + and the class of cyclic sequent proofs LPD is known from [21] (cf. Sect. 6.1). Hooked arrows represent simulations via translations: PDL + can be simulated by TCL, under the standard translation (cf. Sect. 3.2). TC G is the class of cyclic sequent proofs for TCL, introduced in [10], that cannot simulate LPD proofs (cf. Sect. 3.3). Our contribution is the hypersequential and cyclic proof system HTC, for which we prove soundness (Sect. 5) and completeness via simulation of LPD (Sect. 6). These results can be extended to full PDL and TCL = (with identity), indicated by the right components of each node (in blue).
What is more, TCL is well-known to interpret Propositional Dynamic Logic (PDL), a modal logic whose modalities are just terms of KA, by a natural extension of the 'standard translation' from (multi)modal logic to first-order logic (see, e.g., [2, 3]). Incompleteness of cyclic-TC G for PDL over this translation is inherited from its incompleteness for KA. This is in stark contrast to the situation for modal logics without fixed points: the standard translation from K (and, indeed, all logics in the 'modal cube') to first-order logic actually lifts to cut-free proofs for a wide range of modal logic systems, cf. [22, 23].
A closer inspection of the systems for KA and PDL reveals the stumbling block to any simulation: these systems implicitly conduct a form of 'deep inference', by essentially reasoning underneath ∃ and ∧. Inspired by this observation, we propose a form of hypersequents for predicate logic, with extra structure admitting the deep reasoning required. We present the cut-free system HTC and a novel notion of cyclic proof for these hypersequents. In particular, the incorporation of some deep inference at the level of the rules necessitates an 'alternating' trace condition corresponding to alternation in automata theory.
Our first main result is the Soundness Theorem (Theorem 5.1): non-wellfounded proofs of HTC are sound for standard semantics. The proof is rather more involved than usual soundness arguments in cyclic proof theory, due to the richer structure of hypersequents and the corresponding progress criterion. Our second main result is the Simulation Theorem (Theorem 6.1): HTC is complete for PDL over the standard translation, by simulating a cut-free cyclic system for the latter. This result can be seen as a formal interpretation of cyclic modal proof theory within cyclic predicate proof theory, in the spirit of [22, 23]. To simplify the exposition, we shall mostly focus on equality-free TCL and 'identity-free' PDL in this paper, though we present an extension to the general case (for TCL with equality and PDL with tests) towards the end, in Sect. 7.
The paper is structured as follows. Section 2 introduces TCL, its semantics, and the cyclic sequent calculus for TCL from Cohen and Rowe [10]. Section 3 introduces PDL +, the identity-free version of PDL, and the standard translation, and shows that the cyclic system for TCL by Cohen and Rowe is incomplete for PDL +. Section 4 presents the cyclic hypersequent calculus for TCL, Sect. 5 shows that it is sound, and Sect. 6 proves its completeness over PDL + with respect to the standard translation via a cyclic sequent calculus for PDL +. Finally, Sect. 7 discusses the extension of our calculus to full TCL and PDL, and Sect. 8 presents further insights and conclusions.
This paper is a full version of the conference paper [12] published at IJCAR '22.It extends the conference version by providing full definitions, detailed proofs and additional examples.

Preliminaries
We shall work with a fixed first-order vocabulary consisting of a countable set Pr of unary predicate symbols, written p, q, etc., and a countable set Rel of binary relation symbols, written a, b, etc. We build formulas from this language differently in the modal and predicate settings, but all our formulas may be formally evaluated within structures: a structure M consists of a domain D together with interpretations p M ⊆ D of each predicate symbol p and a M ⊆ D × D of each relation symbol a (Definition 2.1). As above, we shall generally distinguish the words 'predicate' (unary) and 'relation' (binary). We could include further relational symbols too, of higher arity, but choose not to in order to calibrate the semantics of both our modal and predicate settings.

Transitive Closure Logic
In addition to the language introduced at the beginning of this section, in the predicate setting we further make use of a countable set of function symbols, written f i, g j, etc., where the superscripts i, j ∈ N indicate the arity of the function symbol and may be omitted when unambiguous. Nullary function symbols (aka constant symbols) are written c, d, etc. We shall also make use of variables, written x, y, etc., typically bound by quantifiers. Terms, written s, t, etc., are generated as usual from variables and function symbols by function application. A term is closed if it has no variables.
We consider the usual syntax for first-order logic formulas over our language, with an additional operator for transitive closure (and its dual). Formally TCL formulas, written A, B, etc., are generated as follows:

A, B ::= p(t) | p̄(t) | a(s, t) | ā(s, t) | A ∨ B | A ∧ B | ∃x A | ∀x A | TC(λx, y.A)(s, t) | TC(λx, y.A)(s, t)
When the variables x, y are clear from context, we may write TC(A(x, y))(s, t) or TC(A)(s, t) instead of TC(λx, y.A)(s, t), as an abuse of notation, and similarly for TC. Without loss of generality, we assume that the same variable cannot occur both free and bound within the scope of quantifiers, TC- and TC-formulas. We write A[t/x] for the formula obtained from A by replacing every free occurrence of the variable x by the term t. The choice of allowing negation only on atomic propositions, and not including implication as a primitive operator in the language, is motivated by the fact that we will opt for a one-sided definition of sequents.

Remark 2.2 (Equality) Note that we do not include term equality among our atomic formulas at this stage. Later we shall indeed consider such extensions, for which the syntax and semantics are as usual for predicate logic.

Definition 2.3 (Duality)
For a formula A we define its complement, Ā, on atomic formulas by setting the complement of p(t) to be p̄(t) and vice versa, and the complement of a(s, t) to be ā(s, t) and vice versa; complementation extends to compound formulas by De Morgan duality, exchanging ∨ with ∧, ∃ with ∀, and TC with TC. We shall employ standard logical abbreviations, e.g. A → B for Ā ∨ B. We may evaluate formulas with respect to a structure, but we need additional data for interpreting function symbols:

Definition 2.4 (Interpreting function symbols) Let M be a structure with domain D. An interpretation is a map ρ that assigns to each function symbol f n a function D n → D. We may extend any interpretation ρ to an action on (closed) terms by recursively setting ρ( f (t 1 , . . ., t n )) := ρ( f )(ρ(t 1 ), . . ., ρ(t n )). We only consider standard semantics in this work: TC (and TC) is always interpreted as the real transitive closure (and its dual) in a structure, rather than being axiomatised by some induction (and coinduction) principle.
In order to facilitate the formal definition of satisfaction, namely for the quantifier and transitive closure cases, we shall adopt a standard convention of assuming among our constant symbols arbitrary parameters from the model M. Formally this means that we construe each v ∈ D as a constant symbol for which we shall always set ρ(v) = v.

Definition 2.5 (Semantics) Given a structure M with domain D and an interpretation ρ, the judgement M, ρ ⊨ A is defined as follows. If M, ρ ⊨ A for all M and ρ, we simply write ⊨ A.
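For reference, the clauses in question may be spelled out as follows; this is a reconstruction of the (elided) display, standard for first-order logic and consistent with the transitive closure reading used throughout (we write the dual operator as $\overline{TC}$):

```latex
\begin{itemize}
  \item $\mathcal M,\rho \models p(t)$ iff $\rho(t) \in p^{\mathcal M}$, and
        $\mathcal M,\rho \models a(s,t)$ iff $(\rho(s),\rho(t)) \in a^{\mathcal M}$
        (dually for $\bar p$, $\bar a$).
  \item $\lor,\land,\exists,\forall$ have their usual classical clauses, with
        quantifiers ranging over parameters $v \in D$.
  \item $\mathcal M,\rho \models TC(\lambda x,y.A)(s,t)$ iff there are
        $v_0,\dots,v_n \in D$ with $n > 0$, $v_0 = \rho(s)$, $v_n = \rho(t)$ and
        $\mathcal M,\rho \models A[v_i/x,\, v_{i+1}/y]$ for every $i < n$.
  \item $\mathcal M,\rho \models \overline{TC}(\lambda x,y.A)(s,t)$ iff for all
        $v_0,\dots,v_n \in D$ with $n > 0$, $v_0 = \rho(s)$ and $v_n = \rho(t)$,
        there is some $i < n$ with $\mathcal M,\rho \models A[v_i/x,\, v_{i+1}/y]$.
\end{itemize}
```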
As expected, a structure satisfies TC(λx, y.A)(s, t) precisely when it does not satisfy TC(λx, y.Ā)(s, t), and so the two operators are semantically dual. The following statement, which follows easily from the semantic clauses defined above, demonstrates that TC and TC duly correspond to least and greatest fixed points.
Fact 2.6 (TC and TC as least and greatest fixed points) The following hold, for arbitrary M, ρ and x: We have included both TC and TC as primitive so that we can reduce negation to atomic formulas, allowing a one-sided formulation of proofs. Let us point out that our TC operator is not the same as Cohen and Rowe's transitive 'co-closure' operator TC op in [9]. As they already note there, TC op cannot be defined in terms of TC (using negations), whereas TC is the formal De Morgan dual of TC and, in the presence of negation, the two are indeed interdefinable, cf. Definition 2.3.
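The fixed point equivalences underlying Fact 2.6 may be stated as follows; this is a reconstruction consistent with the unfolding rules referred to as (1) later on, writing $\overline{TC}$ for the dual operator and abbreviating $A[s/x, t/y]$ as $A(s,t)$:

```latex
\begin{align*}
TC(\lambda x,y.A)(s,t)
  &\iff A(s,t) \lor \exists z\,\bigl(A(s,z) \land TC(\lambda x,y.A)(z,t)\bigr)\\
\overline{TC}(\lambda x,y.A)(s,t)
  &\iff A(s,t) \land \forall z\,\bigl(A(s,z) \lor \overline{TC}(\lambda x,y.A)(z,t)\bigr)
\end{align*}
```

Moreover, TC and $\overline{TC}$ are respectively the least and greatest solutions of these equivalences, viewed as fixed point equations.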

Cohen-Rowe Cyclic System for TCL
Cohen and Rowe proposed in [8, 10] a non-wellfounded sequent system for TCL (with equality) extending a standard sequent calculus LK = for first-order logic with equality and substitution by rules for TC inspired by its characterisation as a least fixed point, cf. Fact 2.6. A non-wellfounded proof system allows for infinitely long branches, provided that they satisfy a logic-specific progress condition. Here we present a one-sided variation of (the cut-free fragment of) their system, both with and without equality, written TC G and TC = G respectively; its rules are given in Fig. 2 (colours may be ignored for now). A preproof is a possibly infinite tree of sequents generated by these rules, and is regular if it has only finitely many distinct sub-preproofs.
In Fig. 2 σ is a map ("substitution") from constants to terms and other function symbols to function symbols of the same arity, extended to terms, formulas and sequents in the natural way.The substitution rule is redundant for usual provability, but facilitates the definition of 'regularity' in predicate cyclic proof theory.
The notions of non-wellfounded and cyclic proofs for G are formulated similarly to those for first-order logic with (ordinary) inductive definitions [5]:

Definition 2.8 (Traces and proofs) Given a TC (=) G preproof D and a branch B = (r i ) i∈ω (where each r i is an inference step), a trace is a sequence of formulas of the form (TC(A)(s i , t i )) i≥k such that for all i ≥ k either: 1. r i is not a substitution step and (s i+1 , t i+1 ) = (s i , t i ); or, 2. r i is a TC step with principal formula TC(A)(s i , t i ) and (s i+1 , t i+1 ) = (c, t i ), where c is the eigenvariable of r i ; or, 3. r i is a substitution step with respect to σ and (σ (s i+1 ), σ (t i+1 )) = (s i , t i ).
We say that the trace is progressing if case 2 above happens infinitely often along it. A TC G preproof D is a proof if each of its infinite branches has a progressing trace. If D is regular we call it a cyclic proof, and we write TC G ⊢cyc Γ when the sequent Γ has a cyclic TC G proof.

Remark 2.9 (Traces via colours) Fig. 2 codes the notion of trace by means of colours: along any infinite branch a trace is a monochromatic sequence of formulas (with inference steps as displayed in Fig. 2); if the trace hits a formula in the context in the conclusion of an inference step, it must hit the same formula in the premiss it follows.
Proposition 2.10 (Soundness, [8, 10]) If a sequent has a TC (=) G proof then it is valid. In fact (the equality-free version of) this result is subsumed by our main soundness result for HTC (Theorem 5.1) and its simulation of TC G (Theorem 4.11). A partial converse of Proposition 2.10 is available in the presence of a cut rule: TC (=) G proofs are 'Henkin complete', i.e. complete for all models of a particular axiomatisation of TCL (with or without equality, resp.) based on (co)induction principles [8, 10]. However, the counterexample we present in the next section implies that cut is not eliminable (Corollary 3.14).

Our formulation of TC (=) G differs slightly from the original presentation in [8, 10], but in no essential way. Nonetheless, let us survey these differences now.

One-Sided vs. Two-Sided
Cohen and Rowe employ a two-sided calculus as opposed to our one-sided one, but the difference is purely cosmetic. Sequents in their calculus are written A 1 , . . ., A m ⇒ B 1 , . . ., B n , which may be duly interpreted in our calculus as Ā 1 , . . ., Ā m , B 1 , . . ., B n . Indeed we may write sequents in this two-sided notation at times in order to facilitate the reading of a sequent and to distinguish left and right formulas. For this reason, Cohen and Rowe do not include a TC operator in their calculus, but are able to recover it thanks to a formal negation symbol, cf. Definition 2.3.

TC vs. RTC
Cohen and Rowe's system is originally called RTC G , being based on a 'reflexive' version RTC of the TC operator. As they mention, this makes no difference in the presence of equality. Semantically we have RTC(A)(s, t) ⇐⇒ s = t ∨ TC(A)(s, t), but this encoding does not lift to proofs, i.e. the RTC rules of [8] are not locally derivable in TC = G modulo this encoding. However, the encoding RTC(A)(s, t) := TC(λx, y.(x = y ∨ A))(s, t) suffices for this purpose.
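To see that the latter encoding is semantically correct, the following short check uses the semantics of TC from Sect. 2.1; each line is an equivalence over an arbitrary structure and interpretation:

```latex
\begin{align*}
TC(\lambda x,y.\,(x{=}y \lor A))(s,t)
  &\iff \text{some } v_0{=}s,\dots,v_n{=}t\ (n>0) \text{ satisfies }
         v_i = v_{i+1} \text{ or } A(v_i,v_{i+1}) \text{ at each step}\\
  &\iff s{=}t \ \lor\ \text{some such sequence satisfies } A(v_i,v_{i+1})
         \text{ at each step}
  && \text{(discard trivial steps)}\\
  &\iff s{=}t \lor TC(\lambda x,y.A)(s,t)
  \ \iff\ RTC(A)(s,t)
\end{align*}
```

The point of the encoding at the level of proofs is that the disjunction x = y ∨ A is unfolded by the TC rules themselves, so the RTC rules of [8] become locally derivable.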

Alternative Rules and Fixed Point Characterisations
Cohen and Rowe use a slightly different fixed point formula to induce rules for RTC and RTC (i.e. RTC on the left) based on the fixed point characterisation, decomposing paths 'from the right' rather than the left. These alternative rules induce analogous notions of trace and progress for preproofs such that progressing preproofs enjoy a similar soundness theorem, cf. Proposition 2.10. The reason we employ a slight variation of Cohen and Rowe's system is to remain consistent with how the rules of LPD + (or LPD) and HTC (or HTC = ) are devised later. To the extent that we prove things about TC G , namely its (cut-free) regular incompleteness in Theorem 3.7, the particular choice of rules turns out to be unimportant. The counterexample we present there is robust: it applies to systems with any (and indeed all) of the above rules.

Interlude: Motivation from PDL
Given the TCL sequent system proposed by Cohen and Rowe, why do we propose a hypersequential system? Our main argument is that proof search in TC G is rather weak, to the extent that cut-free cyclic proofs are unable to simulate a basic (cut-free) system for the modal logic PDL (regardless of proof search strategy). At least one motivation here is to 'lift' the standard translation from cut-free cyclic proofs for PDL to cut-free cyclic proofs in an adequate system for TCL (with equality).

Identity-Free PDL
Identity-free propositional dynamic logic (PDL + ) is a version of the modal logic PDL without tests or identity, thereby admitting an 'equality-free' standard translation into predicate logic. Formally, PDL + formulas, written A, B, etc., and programs, written α, β, etc., are generated by the following grammars:

A, B ::= p | p̄ | A ∨ B | A ∧ B | ⟨α⟩A | [α]A
α, β ::= a | α; β | α ∪ β | α +

Remark 3.1 (Formula metavariables) We are using the same metavariables A, B, etc. to vary over both PDL + and TCL formulas. This should never cause confusion due to the context in which they appear. Moreover, this coincidence is suggestive, since many notions we consider, such as duality and satisfaction, are defined in a way that is compatible with both notions of formula.

Definition 3.2 (Duality)
For a formula A we define its complement, Ā, by exchanging p with p̄, ∨ with ∧, and ⟨α⟩ with [α], dualising all subformulas. We evaluate PDL + formulas using the traditional relational semantics of modal logic, by associating each program α with a binary relation α M in a structure. Again, we only consider 'standard' semantics:
• a M is already given in the specification of M, cf. Definition 2.1;
• (α; β) M is the relational composition of α M and β M ;
• (α ∪ β) M := α M ∪ β M ;
• (α + ) M is the transitive closure of α M .

For elements v ∈ D and formulas A we also define the judgement M, v ⊨ A by the usual clauses of modal logic, e.g. M, v ⊨ ⟨α⟩A iff there is w with (v, w) ∈ α M and M, w ⊨ A, and M, v ⊨ [α]A iff M, w ⊨ A whenever (v, w) ∈ α M . If M, v ⊨ A for all M and v ∈ D, then we write ⊨ A.
Note that we are overloading the satisfaction symbol ⊨ here, for both PDL + and TCL. This should never cause confusion, in particular since the two notions of satisfaction are 'compatible', given that we employ the same underlying language and structures. In fact such overloading is convenient for relating the two logics, as we shall now see.

The Standard Translation
The so-called "standard translation" of modal logic into predicate logic is induced by reading the semantics of modal logic as first-order formulas. We now give a natural extension of this that interprets PDL + into TCL. At the logical level our translation coincides with the usual one for basic modal logic; our translation of programs, as expected, requires the TC operator to interpret the + of PDL + .

Definition 3.4 For a PDL + formula A and program α, we define the standard translations ST(A)(x) and ST(α)(x, y) as TCL-formulas with free variables x and x, y, resp., inductively as follows, where we write simply TC(ST(α)) instead of TC(λx, y.ST(α)(x, y)).
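The inductive clauses of Definition 3.4 are the standard ones; the following reconstruction is consistent with the worked instance given below (the complement in the [α] clause is as in Definition 2.3):

```latex
\begin{align*}
ST(p)(x) &:= p(x) &
ST(\bar p)(x) &:= \bar p(x)\\
ST(A \lor B)(x) &:= ST(A)(x) \lor ST(B)(x) &
ST(A \land B)(x) &:= ST(A)(x) \land ST(B)(x)\\
ST(\langle\alpha\rangle A)(x) &:= \exists y\,(ST(\alpha)(x,y) \land ST(A)(y)) &
ST([\alpha] A)(x) &:= \forall y\,(\overline{ST(\alpha)}(x,y) \lor ST(A)(y))\\
ST(a)(x,y) &:= a(x,y) &
ST(\alpha;\beta)(x,y) &:= \exists z\,(ST(\alpha)(x,z) \land ST(\beta)(z,y))\\
ST(\alpha \cup \beta)(x,y) &:= ST(\alpha)(x,y) \lor ST(\beta)(x,y) &
ST(\alpha^+)(x,y) &:= TC(ST(\alpha))(x,y)
\end{align*}
```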

For example, ST(⟨(ab) + ⟩ p)(x) := ∃y(TC(∃z(a(x, z) ∧ b(z, y)))(x, y) ∧ p(y)).
It is routine to show that the complement of ST(A)(x) is precisely ST( Ā)(x), by structural induction on A, justifying our overloading of the notation Ā in both TCL and PDL + . Yet another advantage of using the same underlying language for both the modal and predicate settings is that we can state the following (expected) result without the need for encodings, following by a routine structural induction (see, e.g., [3]): for any PDL + formula A, structure M and v ∈ D, we have M, v ⊨ A if and only if M ⊨ ST(A)(v).

Cohen-Rowe System is Not Complete for PDL +
PDL + admits a standard cut-free cyclic proof system LPD + (see Sect. 6.1) which is both sound and complete (cf. Theorem 6.4). However, a shortfall of TC G is that it is unable to cut-free simulate LPD + . In fact, we can say something stronger (Theorem 3.7): not only is TC G unable to locally cut-free simulate the rules of LPD + , but there are some validities for which there are no cut-free cyclic proofs at all in TC G . One example of such a formula is:

⟨(aa ∪ aba) + ⟩ p → ⟨a + ((ba + ) + ∪ a)⟩ p    (2)

This formula is derived from the well-known PDL validity ⟨(a ∪ b) * ⟩ p → ⟨a * (ba * ) * ⟩ p by identity-elimination. This in turn is essentially a theorem of relational algebra, namely (a ∪ b) * ≤ a * (ba * ) * , which is often used to eliminate ∪ in (sums of) regular expressions. The same equation was (one of those) used by Das and Pous in [13] to show that the sequent system LKA for Kleene Algebra is cut-free cyclic incomplete.
In the remainder of this subsection, we shall give a proof of Theorem 3.7. The argument is much more involved than the one from [13], due to the fact that we are working in predicate logic, but the underlying basic idea is similar. At a very high level, the right-hand side of (2) (viewed as a relational inequality) is translated to an existential formula ∃z(ST(a + )(x, z) ∧ ST((ba + ) + ∪ a)(z, y)) that, along some branch (namely the one that always chooses aa when decomposing the LHS of (2)), can never be instantiated while remaining valid. This branch witnesses the non-regularity of any proof.

Some Closure Properties for Cyclic Proofs
Demonstrating that certain formulas do not have (cut-free) cyclic proofs is a delicate task, made more so by the lack of a suitable model-theoretic account (indeed, cf. Corollary 3.14). In order to do so formally, we first develop some closure properties of cut-free cyclic provability.

Proposition 3.8 (Inversions)
We have the following:

1. If TC G ⊢cyc Γ, A ∨ B then TC G ⊢cyc Γ, A, B.
2. If TC G ⊢cyc Γ, A ∧ B then TC G ⊢cyc Γ, A and TC G ⊢cyc Γ, B.
3. If TC G ⊢cyc Γ, ∀x A(x) then TC G ⊢cyc Γ, A(c), as long as c is fresh.
Proof Sketch All three statements are proved similarly.
For item 1, replace every direct ancestor of A ∨ B with A, B. The only critical steps are when A ∨ B is principal, in which case we delete the step, or is weakened, in which case we apply two weakenings, one on A and one on B. If the starting proof had only finitely many distinct subproofs (up to substitution), say n, then the one obtained by this procedure has at most 2n distinct subproofs (up to substitution), since we simulate a weakening on A ∨ B by two weakenings.
For item 2, replace every direct ancestor of A ∧ B with A or B, respectively. The only critical steps are when A ∧ B is principal, in which case we delete the step and take the left or right subproof, respectively, or is weakened, in which case we simply apply a weakening on A or B, respectively. The proof we obtain has at most the same number of distinct subproofs (up to substitution) as the original one.
For item 3, replace every direct ancestor of ∀x A(x) with A(c). The only critical steps are when ∀x A(x) is principal, in which case we delete the step and rename the eigenvariable everywhere in the remaining subproof to c, or is weakened, in which case we simply apply a weakening on A(c). The proof we obtain has at most the same number of distinct subproofs (up to substitution) as the original one.

Proposition 3.9 (Predicate admissibility) Suppose TC G ⊢cyc Γ, p(t) or TC G ⊢cyc Γ, p̄(t), where p or p̄ (respectively) does not occur in Γ. Then it holds that TC G ⊢cyc Γ.
Proof sketch Delete every ancestor of p(t) or p̄(t), respectively. The only critical case is when one of the formulas is weakened, in which case we omit the step. Note that there cannot be any identity step on p, due to the assumption on Γ and the subformula property.

Reducing to a Relational Tautology
Here, and for the remainder of this subsection, we shall simply construe PDL + programs α and formulas A as TCL formulas with two free variables and one free variable, respectively, by identifying them with their standard translations ST(α)(x, y) and ST(A)(x). This modest abuse of notation will help suppress much of the notation in what follows.

Lemma 3.10 If TC G ⊢cyc ST((2))(c) then TC G ⊢cyc (3), the relational sequent given below.

Proof Suppose TC G ⊢cyc ST((2))(c); so, by unwinding the definition of ST and since duality commutes with the standard translation, cf. Sect. 3.2, we have that TC G ⊢cyc ([(aa ∪ aba) + ]p̄)(c) ∨ (⟨a + ((ba + ) + ∪ a)⟩ p)(c). By ∨-inversion (Proposition 3.8.1) we may split this disjunction into a two-formula sequent. Again unwinding the definition of ST, and by the definition of duality, then applying ∀-inversion and ∨-inversion (Proposition 3.8), we obtain a cyclic proof of a sequent extending (3) only by occurrences of p and p̄. Since there is no other occurrence of p, by Proposition 3.9 we conclude as required.

Irregularity via an Adversarial Model
In the previous subsubsection we reduced the incompleteness of cut-free cyclic sequent proofs for TCL, over the image of the standard translation on PDL + , to the non-regular cut-free provability of a particular relational validity. Unwinding this a little, the sequent that we shall show has no (cut-free) cyclic proof in TC G can be written in 'two-sided notation' (cf. Sect. 2.3) as follows:

(aa ∪ aba) + (c, d) ⇒ (a + ((ba + ) + ∪ a))(c, d)    (3)

This two-sided presentation is simply a notational variant that allows us to more easily reason about the proof search space (e.g. referring to 'LHS' and 'RHS'). Formally:

Convention 3.11 (Two-sided notation) We may write Γ ⇒ Δ as shorthand for the sequent Γ̄, Δ, where Γ̄ = { Ā : A ∈ Γ }. References to the 'left-hand side (LHS)' and 'right-hand side (RHS)' have the obvious meaning, always with respect to the delimiter ⇒.
To facilitate our argument, we shall only distinguish sequents 'modulo substitution' rather than allowing explicit substitution steps when reasoning about (ir)regularity of a proof.
We shall design a family of 'adversarial' models, and instantiate proof search to just these models. In this way, we shall show that any non-wellfounded TC G proof of the sequent (3) must have arbitrarily long branches without a repetition (up to substitution). Since TC G is finitely branching, by König's Lemma this means that any non-wellfounded TC G proof of (3) has an infinite branch with no repetitions (up to substitution), as required.

Definition 3.12 (An adversarial model) For n ∈ N, define the structure A n by: Note that, since the sequent (3) that we are considering is purely relational, it does not matter what sets A n assigns to the predicate symbols, so we refrain from specifying such data.

Lemma 3.13 Let n ∈ N. Any TC G proof D of (3) has a branch with no repetitions (up to substitution) among its first n sequents.
Proof Set c 0 = c. Consider some (possibly finite, but maximal) branch B = (r i ) i≤ν (with ν ≤ ω) of D satisfying: • whenever TC on the LHS is principal (formally speaking, for a TC step), the right premiss is followed; and, • whenever (aa)(s, t) ∨ (aba)(s, t) is principal for any s and t on the LHS (formally speaking, for a ∧ step), the left premiss (corresponding to (aa)(s, t)) is followed.
Let k ≤ n be maximal such that, for each i ≤ k, r i has principal formula on the LHS. Now: 1. For i ≤ k, each r i has conclusion with LHS of the form (4), for some l ≤ i and distinct c 0 , . . ., c l . To see this, proceed by induction on i ≤ k: • The base case is immediate, by setting l = 0.
• For the inductive step, note that the principal formula of r i must be on the LHS, since i ≤ k. Thus by the inductive hypothesis the principal formula of r i must have one of the following forms: – a(c j−1 , c′ j−1 ) ∧ a(c′ j−1 , c j ), in which case the premiss of r i (which is a ∨ step) replaces it by a(c j−1 , c′ j−1 ), a(c′ j−1 , c j ); or, – (aa)(c j−1 , c j ), in which case the premiss of r i (which is a ∀ step) replaces it by a(c j−1 , c′ j−1 ) ∧ a(c′ j−1 , c j ), for c′ j−1 a fresh symbol; or, – (aa)(c j−1 , c j ) ∨ (aba)(c j−1 , c j ), in which case, by definition of B, the B-premiss of r i (which is a left-∨ step) replaces this formula by (aa)(c j−1 , c j ); or, – TC(aa ∨ aba)(c l , d) for some l ≤ i, in which case, by definition of B, the B-premiss of r i (which is a left-TC step) replaces it by the cedent (aa ∨ aba)(c l , c l+1 ), TC(aa ∨ aba)(c l+1 , d), for c l+1 a fresh symbol. 2. Moreover, for i < i′ ≤ k, the conclusions of r i and r i′ are not equal (up to substitution).
To see this, note that any rule principal on an LHS of form (4) either decreases the size of some formula between c j−1 and c j (when it is a ∨, ∀ or ∧ step) or increases the number of eigenvariables in the sequent (when it is a left TC step), in particular the index l of TC(aa ∨ aba)(c l , d). 3. Since proofs must be sound for all models (by soundness), we shall work in A n with respect to an interpretation ρ n satisfying c i → u i for i ≤ n, c′ i → u′ i for i < n, and d → v. It follows by inspection of (4) that, for i ≤ k, each formula on the LHS of the conclusion of r i is true in (A n , ρ n ). 4. Along B, the RHS cannot be principal unless l ≥ n in (4), so in particular k ≥ n. To see this: • Recall that the interpretation ρ n assigns to c 0 , c′ 0 , . . ., c n−1 , c′ n−1 , c n the worlds u 0 , u′ 0 , . . ., u n−1 , u′ n−1 , u n respectively. • If the existential formula on the RHS is instantiated by some c i or c′ i with i < n then the resulting sequent is false in (A n , ρ n ) (recall that, by Item 3, every formula on the LHS is true, so we require the RHS to be true too). To see this, note that the RHS in particular would imply ((ba + ) + ∪ a) reaching v from the corresponding world; however when i < n none of these formulas are true with respect to (A n , ρ n ). • If the existential formula on the RHS is instantiated by d then the resulting sequent is again false, by the same analysis as above.
By Item 4, we have that k ≥ n and so, since we assumed k ≤ n at the start, indeed k = n. Thus, by Item 1 and Item 2, there are no repeated sequents (up to substitution) in (r i ) i≤n , as required.

Putting It All Together
We are now ready to give the proof of the main result of this section.

Proof of Theorem 3.7, Sketch
Since the choice of n in Lemma 3.13 was arbitrary, any TC G proof D of (3) must have branches with arbitrarily long initial segments without any repetition (up to substitution). Since the system is finitely branching, by König's Lemma we have that there is an infinite branch through D without any repetition (up to substitution), and thus D is not regular. Thus TC G ⊬cyc (3). Finally, by contraposition of Lemma 3.10, we have TC G ⊬cyc ST((2))(c), as required.

An immediate consequence of Theorem 3.7 and the Henkin-completeness of TC G with cut [8, 10] is:

Corollary 3.14 The class of cyclic proofs of TC G does not enjoy cut-admissibility.

Hypersequent Calculus for TCL
In light of the preceding subsection, let us take a moment to examine how a 'local' simulation of LPD + by TC G fails, in order to motivate the main system that we shall present. The program rules, in particular the ⟨α⟩-rules, require a form of deep inference to be correctly simulated over the standard translation. For instance, consider the action of the standard translation on two rules we shall see later in LPD + (cf. Sect. 6.1). The first case suggests that any system to which the standard translation lifts must be able to reason underneath ∃ and ∧, so that the inference indicated in blue is 'accessible' to the prover. The second case suggests that the existential-conjunctive meta-structure necessitated by the first case should admit basic equivalences, in particular certain prenexing. This section is devoted to the incorporation of these ideas (and necessities) into a bona fide proof system.

Annotated Hypersequents
An annotated cedent, or simply cedent, written S, S′, etc., is an expression {Γ} x , where Γ is a set of formulas and the annotation x is a set of variables. We sometimes construe annotations as lists rather than sets when it is convenient, e.g. when taking them as inputs to a function. Each cedent may be intuitively read as a TCL formula, under the following interpretation: fm({Γ} x ) := ∃x ⋀Γ. When x = ∅ then there are no existential quantifiers above, and when Γ = {A} is a singleton we simply identify ⋀Γ with A. We also sometimes write simply A for the annotated cedent {A} ∅ . An (annotated) hypersequent, written S, S′ etc., is a set of annotated cedents. Each hypersequent may be intuitively read as the disjunction of its cedents; namely we set fm(S) to be the disjunction of fm(S) over the cedents S of S. With a slight abuse of notation, we sometimes identify S and fm(S).
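For illustration (a hypothetical instance, not taken from the paper's own examples), under this formula interpretation the hypersequent with cedents {p̄(c)} ∅ and {a(c, x), q(x)} x is read as:

```latex
fm\bigl(\{\bar p(c)\}^{\varnothing},\ \{a(c,x),\, q(x)\}^{\{x\}}\bigr)
  \;=\; \bar p(c) \,\lor\, \exists x\,\bigl(a(c,x) \land q(x)\bigr)
```

which is precisely the standard translation of p → ⟨a⟩q at c; hypersequents thus directly capture the existential-conjunctive meta-structure produced by the standard translation.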

Non-wellfounded Hypersequent Proofs
We now present our hypersequential system for TCL and its corresponding notion of 'nonwellfounded proof'.

Definition 4.1 (System)
The rules of HTC are given in Fig. 4 (the colours may be ignored for now). An HTC preproof is a possibly infinite tree of hypersequents generated by the rules of HTC. A preproof is regular if it has only finitely many distinct sub-preproofs.
The substitution rule σ is needed to guarantee regularity of non-wellfounded branches. While we have included an explicit substitution rule, we shall, as in earlier sections, often work 'modulo substitution' when writing down cyclic preproofs. Propositional rules, as well as init, are standard, recalling the formula interpretation of hypersequents defined in the previous section. The ∪ rule is the only branching rule of the system, while the rule id allows us to eliminate (bottom-up) a closed formula A from one of the cedents (thus from a conjunction, wrt the formula interpretation) provided that the dual of A occurs in a singleton cedent with empty annotation. The usual sequent rule for the existential quantifier is factored into two HTC rules: ∃, which introduces a fresh variable in the annotation of a cedent, and inst, which instantiates a variable in the annotation with a term. Similarly the usual sequent rule for ∧ is factored in HTC by the rules ∧ and ∪. The rules for TC and TC are induced by the characterisation of TC as a least fixed point in (1). Note that the rules TC and ∀ introduce, bottom-up, a fresh function symbol f , which plays the role of the Herbrand function of the corresponding ∀ quantifier: just as ∀x ∃y A(x, y) is equisatisfiable with ∀x A(x, f (x)), when f is fresh, by Skolemisation, so by duality ∃x ∀y A(x, y) is equivalid with ∃x A(x, f (x)), when f is fresh, by Herbrandisation. Note that the usual ∀ rule of the sequent calculus is just a special case of this, when x = ∅, and so f is a constant symbol.
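The Skolemisation/Herbrandisation duality invoked here can be stated as follows, with x⃗ a vector of variables and f a fresh function symbol of matching arity:

```latex
\begin{align*}
\forall \vec x\, \exists y\, A(\vec x, y) \text{ is satisfiable}
  &\iff \forall \vec x\, A(\vec x, f(\vec x)) \text{ is satisfiable}
  && \text{(Skolemisation)}\\
\exists \vec x\, \forall y\, A(\vec x, y) \text{ is valid}
  &\iff \exists \vec x\, A(\vec x, f(\vec x)) \text{ is valid}
  && \text{(Herbrandisation)}
\end{align*}
```

Since hypersequents are read as disjunctions of existentially quantified conjunctions, it is the second (validity-preserving) direction that licenses introducing f bottom-up in the ∀ and TC rules.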
Our notion of ancestry, as compared to traditional sequent systems, must account for the richer structure of hypersequents. Specifically, since formulas now occur within cedents, tracing ancestry only for formulas no longer suffices. Instead, we define a notion of ancestry for cedents, and then trace formulas within cedent-paths. In line with the formula interpretation, our notion of 'progress' needs to take into account all infinite traces occurring within such cedent-paths.

Definition 4.2 (Ancestry for cedents) Fix an inference step r, as typeset in Fig. 4. We say that a cedent S′ in a premiss of r is an immediate ancestor of a cedent S in the conclusion of r if either:
1. r ≠ σ and S′ = S ∈ S, i.e. S′ and S are identical 'side' cedents of r; or,
2. r = σ and S = σ(S′); or,
3. r ≠ id, r ≠ σ, S is the (unique) cedent distinguished in the conclusion of r, and S′ is a cedent indicated in a premiss of r; or,
4. r = id, S′ is the (unique) cedent distinguished in the premiss of id, and S is the cedent {Γ, A} x distinguished in the conclusion of id.
Note in particular that in id, as typeset in Fig. 4, {Γ} x is not an immediate ancestor of { Ā} ∅ .

Definition 4.3 (Ancestry for formulas) Fix an inference step r, as typeset in Fig. 4. We say that a formula F′ in a premiss of r is an immediate ancestor of a formula F in the conclusion of r if one of the conditions (a)–(d) colour-coded in Fig. 4 holds (see Remark 4.4).

Immediate ancestry on both formulas and cedents is a binary relation, inducing a directed graph whose paths form the basis of our correctness condition:

Definition 4.5 ((Hyper)traces) A hypertrace is a maximal path in the graph of immediate ancestry on cedents. A trace is a maximal path in the graph of immediate ancestry on formulas. Thus, in the id rule, as typeset in Fig. 4, no (infinite) trace can include the distinguished A or Ā. From the above definitions it follows that whenever a cedent S′ in the premiss of a rule r is an immediate ancestor of a cedent S in the conclusion, then some formula in S′ is an immediate ancestor of some formula in S. Thus, for a hypertrace (S i ) i<ω , there is at least one trace (F i ) i<ω which lies 'within' or 'along' the hypertrace, i.e. such that F i ∈ S i for all i.

Definition 4.6 (Progress and proofs) Fix a preproof D. An (infinite) trace (F i ) i∈ω is progressing if there is k such that, for all i > k, F i has the form TC(A)(s i , t i ) and is infinitely often principal. An (infinite) hypertrace H is progressing if every infinite trace along it is progressing. An (infinite) branch is progressing if it has a progressing hypertrace. D is a proof if every infinite branch is progressing. If, furthermore, D is regular, we call it a cyclic proof.
We write HTC ⊢ nwf S (resp. HTC ⊢ cyc S) if there is a proof (resp. cyclic proof) in HTC of the hypersequent S.

Some Examples
Let us consider some examples of cyclic proofs in HTC and compare the system to TC G . As mentioned in Sect. 4.2, for convenience we here write cyclic (pre)proofs modulo substitution. In the proof below we have indicated roots of identical subproofs with •, and an infinite progressing trace along the (unique) infinite branch in blue.
There is not much choice in the construction of this cyclic proof, bottom-up: we must apply TC first and branch before applying TC differently on each branch. This cyclic proof is naturally simulated by the following HTC one, where the progressing hypertrace (along the unique infinite branch) is marked in blue. Due to the granularity of the inference rules of HTC, we actually have some liberty in how we implement such a derivation. For example, the HTC proof below applies TC rules below TC ones, and delays branching until the 'end' of proof search, which is impossible in TC G . The only infinite branch, looping on •, is progressing by the blue hypertrace.
This is an example of the more general 'rule permutations' available in HTC, hinting at a more flexible proof theory (we discuss this further in Sect. 8).
Let us now consider a more complex example whose relevance will become significant shortly.

Example 4.8 We give a cyclic HTC proof D of the following hypersequent: where: We do not show the finite derivations of the hypersequents Q 1 and Q 2 , but here is the subproof of Q 3 : Note the multiple occurrences of the 'backpointer' • (we have omitted explicit substitution steps here), resulting in uncountably many infinite branches. Specifically, there are two non-wellfounded 'loops': one through the subproof of Q 3 and one along the branch displayed in D; both are regular, returning (modulo substitution) to the last-but-one bottom hypersequent of D. Since the choice between these two loops recurs infinitely often, the preproof contains uncountably many infinite branches. In all cases, the cedents marked in red induce progressing hypertraces along any infinite branch.
Finally, it is pertinent to revisit the 'counterexample' (2) from Sect. 3.3 that witnessed the incompleteness of cut-free TC G for PDL + . The following result is, in fact, already implied by our later completeness result, Theorem 6.1, but it is useful to give it explicitly nonetheless:

On Cyclic-Proof Checking
In usual cyclic systems, checking that a regular preproof is progressing is decidable by straightforward reduction to the universality of nondeterministic ω-word-automata, with runs 'guessing' a progressing thread along an infinite branch.Our notion of progress exhibits an extra quantifier alternation: we must guess an infinite hypertrace in which every trace is progressing.Nonetheless, by appealing to determinisation or alternation, we can still decide our progressing condition: Proposition 4.10 Checking whether a cyclic HTC preproof is a proof is decidable.

Proof Sketch
The result is proved using automata-theoretic techniques. Fix a cyclic HTC preproof D. First, using standard methods from cyclic proof theory, it is routine to construct a nondeterministic Büchi automaton recognising the non-progressing hypertraces of D. The construction is similar to that recognising progressing branches in cyclic sequent calculi, e.g. as found in [11,14,26], since we are asking that there exists a non-progressing trace within a hypertrace. By Büchi's complementation theorem and McNaughton's determinisation theorem (see, e.g., [30] for details), we can thus construct a deterministic parity automaton P H recognising the progressing hypertraces. Now we can construct a nondeterministic parity automaton P recognising the progressing branches of D similarly to the previous construction, but further keeping track of states in P H :
• P essentially guesses a progressing hypertrace along the input branch;
• at the same time, P runs the hypertrace-in-construction along P H and keeps track of the state therein;
• acceptance for P is inherited directly from P H , i.e. a run is accepting just if the hypertrace guessed along it is accepted by P H .
Now it is clear that P accepts a branch of D if and only if it is progressing. Assuming that P also accepts any ω-words over the underlying alphabet that are not branches of D (by adding junk states), we have that D is a proof (i.e. each of its infinite branches is progressing) if and only if P is universal. For additional material and results on infinite word automata refer to [4,30].
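As a toy illustration of the inner check only (our own sketch, not the automata construction used in the proof): along an ultimately periodic hypertrace, infinite traces correspond to cycles in the finite graph of formula ancestry around the loop, so 'every infinite trace along the hypertrace is progressing' reduces to asking that this graph is acyclic once the progress-marked edges are removed (we ignore here the side condition that progressing traces eventually consist of TC-formulas):

```python
def every_trace_progresses(nodes, edges):
    """
    nodes: formula occurrences around one loop of a periodic hypertrace.
    edges: set of (src, dst, progresses) triples of immediate-ancestor
           steps, with progresses=True marking a principal TC step.

    Every infinite trace (= every cycle in this graph) is progressing
    iff the subgraph of NON-progressing edges is acyclic: any cycle
    avoiding progress edges would yield a non-progressing trace.
    """
    # keep only the edges on which no progress happens
    adj = {v: [] for v in nodes}
    for src, dst, progresses in edges:
        if not progresses:
            adj[src].append(dst)

    # standard DFS cycle detection on the non-progressing subgraph
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in nodes}

    def has_cycle(v):
        colour[v] = GREY
        for w in adj[v]:
            if colour[w] == GREY or (colour[w] == WHITE and has_cycle(w)):
                return True
        colour[v] = BLACK
        return False

    return not any(colour[v] == WHITE and has_cycle(v) for v in nodes)
```

For instance, a single formula looping through a principal TC step yields a progressing hypertrace, while adding a second loop that never passes a principal step does not.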

Simulating Cohen-Rowe
As mentioned earlier, cyclic proofs of HTC are at least as expressive as those of Cohen and Rowe's system, by a routine local simulation of rules.

Proof Sketch Let D be a TC G cyclic proof. We can convert it to an HTC cyclic proof by simply replacing each sequent A 1 , . . ., A n by the hypersequent {A 1 } ∅ , . . ., {A n } ∅ and applying some local corrections. In what follows, if Γ = A 1 , . . ., A n , let us simply write S Γ for {A 1 } ∅ , . . ., {A n } ∅ .
• Any id step of D must be amended as follows: • Any ∨ step of D becomes a correct ∨ step of HTC or HTC = .
• Any ∧ step of D must be amended as follows: • Any ∃ step of D must be amended as follows: • Any ∀ step of D becomes a correct ∀ step of HTC or HTC = .
• Any TC 0 step of D becomes a correct TC 0 step of HTC.
• Any TC 1 step of D must be amended as follows: • Any TC step of D must be amended as follows: In particular, inspection of the TC case shows that progressing traces of TC G induce progressing hypertraces of HTC. Also, since each of the cases above maps an inference step of TC G to a fixed finite gadget in HTC, regularity is preserved too.

Soundness of HTC
This section is devoted to the proof of the first of our main results, Theorem 5.1, stating the soundness of cyclic-HTC. The argument is quite technical due to the alternating nature of our progress condition. In particular, the treatment of traces within hypertraces requires a more fine-grained argument than usual, bespoke to our hypersequential structure.

Some Conventions on (Pre)proofs and Semantics
First, we work with proofs without substitution, in order to control the various symbols occurring in a proof.
Throughout this section, we shall fix an HTC preproof D of a hypersequent S. We start by introducing some additional definitions and propositions.

Proposition 5.2 If HTC ⊢ nwf S then there is also an HTC proof of S that does not use the substitution rule.

Proof Sketch
We appeal to a coinductive argument, applying a meta-level substitution operation on proofs to admit each substitution step.Productivity of the translation is guaranteed by the progressing condition: each infinite branch must, at the very least, have infinitely many TC steps.
The utility of this is that we can now carefully control the occurrences of eigenfunctions in a proof so that, bottom-up, they are never 're-introduced', thus facilitating the definition of interpretations on them.
Throughout this section, we shall allow interpretations to be only partially defined, i.e. they are now partial maps from the set of function symbols of our language to appropriately typed functions in the structure at hand. Typically our interpretations will indeed interpret the function symbols in the context in which they appear, but as we consider further function symbols it will be convenient to extend an interpretation 'on the fly'. This idea is formalised in the following definition:

Definition 5.3 (Interpretation extension) Let M be a structure and ρ, ρ′ be two (partial) interpretations over |M|. We say that ρ′ is an extension of ρ, written ρ ⊆ ρ′, if ρ′( f ) = ρ( f ) for all f in the domain of ρ.
Finally, we assume that the free and bound variables occurring in the hypersequent at the root of a (pre)proof are all pairwise distinct, and that whenever we apply rule ∃ or rule TC (resp. rule ∀ or rule TC) to a hypersequent S occurring in a branch, the rule introduces in the premiss a variable (resp. a function symbol) that does not appear in any hypersequent in the branch from the root up to and including S. This strong freshness requirement guarantees that each function and variable symbol is uniquely interpreted in the countermodel that we are going to construct.

Constructing a 'Countermodel' Branch
Recall that we have fixed at the beginning of this section an HTC preproof D of a hypersequent S. Let us fix some structure M × and an interpretation ρ 0 such that ρ 0 ⊭ S (within M × ). As we shall prove in the following lemma, since each rule is locally sound, by contraposition we can continually choose 'false premisses' to construct an infinite 'false branch':

Lemma 5.4 (Countermodel branch)
There is a branch B × = (S i ) i<ω of D and an interpretation ρ × such that, with respect to M × : 1. ρ × ⊭ S i , for all i < ω; 2. whenever S i concludes a TC step, as typeset in Fig. 4, ρ × interprets the eigenfunction by falsifying paths of minimal length. Intuitively, our interpretation ρ × is going to be defined as the limit of a chain of 'partial' interpretations (ρ i ) i<ω , with each ρ i ⊭ S i (with respect to M × ). Referring to item 2, whenever some TC-formula is principal, we shall always choose ρ i+1 to assign to it a falsifying path of minimal length (if one exists at all), with respect to an assignment d to the variables x in the annotation of its cedent. It is crucial at this point that our definition of ρ × is parametrised by such assignments.
Proof of Lemma 5.4 We construct B × and ρ × simultaneously. In fact we shall define a chain of interpretations ρ 0 ⊆ ρ 1 ⊆ ρ 2 ⊆ ⋯ such that, for each i, M × , ρ i ⊭ S i . We will define ρ × as the limit of this chain. We distinguish cases according to the rule r i that S i concludes. For the case of weakening, S i+1 is the unique premiss of the rule, and ρ i+1 = ρ i . We now give all the other cases, starting with Case (∪). (To be clear, we here choose an arbitrary such minimal Ā-path to set d 1 .)
By assumption, 2 ).Set ρ i+1 = ρ i .By the truth condition for ∀, we have that for all m-tuples d 1 ∈|M × | and n-tuples By the truth condition associated to ∨ we can conclude that, for all d 1 , d 2 , either: Since x 1 ∩ fv( 2 ) = ∅ and x 2 ∩ fv( 1 ) = ∅, the above is equivalent to: And, since this holds for all choices of d 1 and d 2 , we can conclude that: Take S i+1 to be the S k such that ρ i+1 | ∀x k ( k ), for k = {1, 2}.For all the remaining cases, S i+1 is the unique premiss of the rule r i .Moreover, for x the unique (possibly empty) annotation explicitly indicated in the remaining rules, let n =| x | and d ∈|M × | n .

Cases (∧), (∨), (∃), (id) and (TC)
For all these cases set ρ i+1 = ρ i .The formula interpretation of the conclusion logically implies the formula interpretation of the premiss.Thus, from M × , ρ i | S i we have that M × , ρ i+1 | S i+1 .Let us justify this explicitly for the cases (∃), (id) and (TC).(∃) By assumption, ρ i | Q and ρ i | ∀x( ∨ ∀x(A(x))).We have ρ i+1 | Q.By prenexing the quantifier and variable renaming we obtain ρ i+1 | ∀x ∀y( ∨ A(y)).(id) By assumption, ρ i | Q and ρ i | ∀x( ∨ A) and ρ i | A. By the truth condition for ∀ we have that, for all choices of d, it holds that: By the truth condition for ∨, for every choice of d: Since fv(A) ∩ x = ∅, the above is equivalent to: By assumption, ρ i+1 | A. Thus, the second disjunct cannot hold, and we have that Since this holds for all choices of d, we conclude that ρ i+1 | ∀x(

, by means of the classical theorem ∀x(A ∧ B) → (∀x(A)∧∀x(B)).
The other steps are either standard theorems or follow from the truth conditions of the logical operators.
For the three remaining cases of (inst), (∀) and (TC), ρ i+1 extends ρ i by adequately interpreting the new function symbols introduced, bottom-up: Case (inst) By assumption, Thus, for all choices of d, we have that By the truth condition for ∀, this means that, for all d ∈|M × |, Take ρ i+1 to be any extension of ρ i that is defined on the language of S i+1 .That is, if f is a function symbol in t to which ρ i already assigns a map, then ρ i+1 assigns to it that same map.Otherwise, ρ i+1 assigns an arbitrary map to f .It follows that ] and, since this holds for all d, we have that ρ i+1 | ∀x( (t)).Thus ρ i+1 | S i+1 .Case (∀) By assumption, ρ i | Q and ρ i | ∀x( ∨ ∃x(A(x)).By the truth condition for ∀ and ∨, for all choices of d we have: We define ρ i+1 to extend ρ i by defining and so By assumption, ρ i | Q and ρ i | ∀x( ∨ TC(A)(s, t)) which, by definition of duality, means ρ i | ∀x( ∨ TC(A)(s, t)).By the truth conditions for ∨ we have, for all d: We define ρ i+1 to extend ρ i by defining ρ i+1 ( f ) as follows.Let d ⊆ M × .If 1) holds, then we may set ρ i+1 ( f )(d) to be an arbitrary element of |M × |.Otherwise, 2) must hold, so by the truth conditions for TC there is a A-path between ρ i (s) and ρ i (t) of length greater or equal than 1, i.e. there are elements d 0 , . . ., d n , with n > 0 and ρ i (s) = d 0 and ρ i (t) = d n , such that ρ i | Ā(d i , d i+1 ) for all i < n.We select a shortest such path, i.e. one with smallest possible n > 0. There are two cases: We have considered all the rules, so the construction of B × and the all ρ i 's is complete.From here, note that we have ρ i ⊆ ρ i+1 , for all i < ω.Thus we can construct the limit ρ × = i<ω ρ i .

Canonical Assignments Along Countermodel Branches
Let us now fix B × and ρ × as provided by Lemma 5.4 above. Moreover, let us henceforth assume that D is a proof, i.e. it is progressing, and fix a progressing hypertrace H = ({Γ i } x i ) i<ω along B × . In order to carry out an infinite descent argument, we will need to define a particular trace along this hypertrace that 'preserves' falsity, bottom-up. This is delicate, since the truth values of formulas in a trace depend on the assignment of elements to variables in the annotations. A particular issue here is the instantiation rule inst, which requires us to 'revise' whatever assignment of y we may have defined up to that point. Thankfully, our earlier convention on substitution-freeness and on the freshness of the variables introduced by quantifier and transitive closure rules in D facilitates the convergence of this process to a canonical such assignment: Definition 5.5 (Assignment) We define δ H : Note that δ H is indeed well-defined, thanks to the convention that each quantifier and transitive closure rule introduces only variables that do not appear in previous sequents in the derivation branch. In particular, each variable x is instantiated at most once along a hypertrace. Working with such an assignment ensures that false formulas along H always have a false immediate ancestor. We show how to choose an F′ satisfying the conditions of the lemma, distinguishing cases according to the rule r i that {Γ i } x i concludes. Propositional cases are routine, as is ∪, since the failed branch has been chosen during the construction of B × . For the weakening rule, observe that we could not have chosen a hypertrace going through the structure which gets weakened, as by assumption the hypertrace is infinite. We show the remaining cases. It can be easily checked that, in each case, the chosen formula F′ is an immediate ancestor of F.

Case (∃). Suppose
. By the truth condition for ∀ we obtain that, for all n-tuples d of elements of |M × |, for n = |x|, it holds that: By definition, δ H assigns a value in the domain to all the variables occurring in annotations along H. From (5) and the truth condition for ∨ it follows that: If F ∈ Γ, by hypothesis we have that ρ × , δ H ⊭ F and we set F′ = F. Otherwise, F = ∃x(A(x)) and ρ × , δ H ⊨ ∀x( Ā(x)). By the truth condition for ∀ we have ρ × , δ H ⊨ Ā(y), so we set F′ = A(y).
Case (inst). Reasoning as in the previous case, from the truth condition for ∀ it follows that ρ × , δ H ⊭ Γ(y). If F does not contain y, then ρ × , δ H ⊭ F, and we set F′ = F. Otherwise, if F contains y, then ρ × , δ H ⊭ F(y). By Lemma 5.4, ρ × assigns a value to t and, by Definition 5.5, since y is instantiated with t, this value is δ H (y). Case (TC). Observe that the hypertrace H could not have gone through the structure {A} ∅ occurring in the conclusion of the rule, because by assumption H is infinite. Moreover, by construction the formula interpretation of every cedent along B × is falsified, and thus ρ × ⊭ {A} ∅ . This implies ρ × ⊭ A and so: By assumption, ρ × ⊭ ∀x(Γ ∨ TC(A)(s, t)). Thus, we have that: From the inductive definition of TC and the truth condition for ∧, the second disjunct is equivalent to: There are two cases to consider, since the premiss of the rule has two cedents that the hypertrace H could follow. Using (6) and the truth conditions for ∀ and ∨, we need to consider two cases, depending on which cedent the hypertrace H follows. In the first case, by assumption it follows that ρ × , δ H ⊭ F and we set F′ = F. Otherwise, F = TC(A)(s, t) and by assumption ρ × , δ H ⊭ TC(A)(s, t). By the inductive definition of TC and the truth condition for ∨, this is equivalent to: According to the definition of ρ × at the (TC) step, since ρ × , δ H ⊭ Γ and ρ × , δ H ⊭ A(s, t), ρ × ( f )(d) is defined as in case 2), subcase (ii). The remaining case proceeds exactly as the previous one except for the very last step, where F′ is set to be TC(A)( f (x), t).

Putting It All Together
Note that rule inst of Fig. 4 is the delicate case, as it revises the assignment of annotated variables. From here we define each successive F i by appealing repeatedly to Lemma 5.6 above.
We are now ready to prove our main soundness result.
Proof of Theorem 5.1 Fix the infinite trace τ × = (F i ) i<ω through H obtained by Proposition 5.7. Since τ × is infinite, by definition of HTC proofs it must be progressing, i.e. it is infinitely often TC-principal and there is some k ∈ ℕ such that, for i > k, we have F i = TC(A)(s i , t i ) for some terms s i , t i .
To each F i , for i > k, we associate a natural number n i measuring the ' Ā-distance between s i and t i '. Formally, n i ∈ ℕ is least such that there are elements d 0 , . . ., d n i with ρ × (s i ) = d 0 and ρ × (t i ) = d n i , and ρ × , δ H ⊨ Ā(d j , d j+1 ) for all j < n i . Our aim is to show that (n i ) i>k has no minimal element, contradicting well-foundedness of ℕ. For this, we establish the following two local properties: 1. (n i ) i>k is monotone decreasing, i.e. for all i > k, we have n i+1 ≤ n i ; 2. whenever F i is principal, we have n i+1 < n i .
We prove items 1 and 2 by inspection of the HTC rules. We start with item 2. Suppose F i = TC(A)(s, t) is the principal formula in an occurrence of TC, so F i+1 = TC(A)( f (x), t) for some x. Moreover, by construction ρ × , δ H ⊭ TC(A)(s, t) and ρ × , δ H ⊭ TC(A)( f (x), t). We have to show that n i+1 , the Ā-distance between f (x) and t, is strictly smaller than n i , the Ā-distance between s and t, with respect to ρ × and δ H .
To prove item 1, suppose that F i is not principal in the occurrence r i of an HTC rule. Suppose r i is inst, F i = TC(A)(s, x), F i+1 = TC(A)(s, t), and x gets instantiated with t by inst. By construction, ρ × , δ H ⊭ TC(A)(s, x). Let n i be the distance between ρ × (s) and δ H (x). By definition, δ H (x) = ρ × (t). Thus the distance between ρ × (s) and ρ × (t) is n i+1 = n i . In all the other cases, F i = TC(A)(s, t) = F i+1 , and thus n i+1 = n i .
So (n i ) i>k is monotone decreasing by item 1 and, by item 2 together with the definition of progressing trace, it decreases strictly infinitely often. Thus (n i ) i>k has no minimal element, yielding the required contradiction.
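The descent can be summarised as follows: writing i1 < i2 < ⋯ for the infinitely many positions above k at which the trace is principal, items 1 and 2 together give

```latex
\[
n_{i_1} \;>\; n_{i_2} \;>\; n_{i_3} \;>\; \cdots,
\]
```

an infinite strictly descending chain of natural numbers, which is impossible.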

Completeness for PDL + , Over the Standard Translation
In this section we give our next main result, Theorem 6.1, stating completeness of HTC for PDL + over the standard translation. The proof is by a direct simulation of a cut-free cyclic system for PDL + that is complete. We briefly sketch this system below.

Cyclic System for PDL +
The system LPD + , shown in Fig. 5, is the natural extension of the usual sequent calculus for basic multimodal logic K by rules for programs. In Fig. 5, Γ, Δ etc. range over sets of PDL + formulas, and we write ⟨a⟩Γ as a shorthand for {⟨a⟩B : B ∈ Γ}. (Regular) preproofs for this system are defined just like for HTC or TC G .
The notion of ancestry for formulas is colour-coded in Fig. 5 as before: a formula C in a premiss is an immediate ancestor of a formula D in the conclusion if they have the same colour. If r is a k a step then, as typeset in Fig. 5:
– C and D are occurrences of the same formula; or,
– D is principal and C is auxiliary in r, i.e. as typeset in Fig. 5, C and D are the (uniquely) distinguished formulas in a premiss and conclusion, respectively.

Definition 6.3 (Non-wellfounded proofs) Fix a preproof D of a sequent Γ. A thread is a maximal path in its graph of immediate ancestry. We say a thread is progressing if it has a smallest infinitely often principal formula of the form [α + ]A. D is a proof if every infinite branch has a progressing thread. If D is regular, we call it a cyclic proof and we may write LPD + ⊢ cyc Γ.
Soundness of cyclic-LPD + is established by a standard infinite descent argument, but is also implied by the soundness of cyclic-HTC (Theorem 5.1) and the simulation we are about to give (Theorem 6.1), though this is somewhat overkill. Completeness may be established by the game-theoretic approach of Niwiński and Walukiewicz [24], as carried out by Lange in [21], or by the purely proof-theoretic techniques of Studer [27]. Either way, both results follow immediately by a standard embedding of PDL + into the (guarded) μ-calculus and its known completeness results [24,27], by way of a standard 'proof reflection' argument: μ-calculus proofs of the embedding are 'just' step-wise embeddings of LPD + proofs.

Theorem 6.4 (Soundness and completeness, essentially from [21]) Let A be a PDL + formula. Then ⊨ A iff LPD + ⊢ cyc A.

Examples of Cyclic Proofs in LPD +
Before giving our main simulation result, let us first see some examples of proofs in LPD + , in particular addressing the 'counterexample' from Sect. 3.3.
We use the following abbreviations: α = (aa ∪ aba) + and β = (ba + ) + . Moreover, we sometimes use the rule + , which is derivable from rules + 0 and + 1 , keeping in mind that LPD + sequents are sets of formulas; similarly for rule ∨. There are uncountably many infinite branches, but each such branch is supported by the infinite thread induced by the blue-coloured formulas. The sequent [aa ∪ aba] p, a + p, β p, a + β p is derivable by means of a finite derivation (which we do not show). Below we give a cyclic LPD + proof of formula (2), which witnesses the incompleteness of TC G without cut. We employ the same shorthands as in the previous example, i.e., α = (aa ∪ aba) + and β = (ba + ) + . The progressing thread along the unique infinite branch displayed (not including those from Example 6.5), looping on •, is coloured blue.
Here is the sequent [aa ∪ aba] p, a + β ∪ a p, which has a finite derivation:

A 'Local' Simulation of LPD + by HTC
In this subsection we show that LPD + -preproofs can be stepwise transformed into HTC-preproofs, with respect to the standard translation. In order to produce our local simulation, we need a refined version of the standard translation, incorporating the structural elements of hypersequents.
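To fix intuitions, the plain standard translation ST underlying this subsection can be sketched as follows (the clauses are the usual ones for translating PDL + into TCL; the datatypes, string rendering and helper names are our own):

```python
# Formulas and programs are nested tuples, e.g.
#   ("dia", ("plus", ("atom", "a")), ("prop", "p"))   for  <a+>p.
# The translation renders first-order/TCL formulas as strings.

fresh = iter(f"y{i}" for i in range(10**6))  # supply of fresh variables

def ST_prog(alpha, s, t):
    """Standard translation of a program as a binary TCL formula ST(alpha)(s, t)."""
    kind = alpha[0]
    if kind == "atom":                      # atomic program a ~ binary relation a(s, t)
        return f"{alpha[1]}({s},{t})"
    if kind == "cup":                       # alpha ∪ beta ~ disjunction
        return f"({ST_prog(alpha[1], s, t)} ∨ {ST_prog(alpha[2], s, t)})"
    if kind == "comp":                      # alpha ; beta ~ relational composition
        z = next(fresh)
        return f"∃{z}({ST_prog(alpha[1], s, z)} ∧ {ST_prog(alpha[2], z, t)})"
    if kind == "plus":                      # alpha+ ~ transitive closure
        u, v = next(fresh), next(fresh)
        return f"TC(λ{u}{v}.{ST_prog(alpha[1], u, v)})({s},{t})"
    raise ValueError(kind)

def ST(phi, t):
    """Standard translation of a PDL+ formula at world/term t."""
    kind = phi[0]
    if kind == "prop":                      # propositional atom p ~ unary predicate
        return f"{phi[1]}({t})"
    if kind == "or":
        return f"({ST(phi[1], t)} ∨ {ST(phi[2], t)})"
    if kind == "and":
        return f"({ST(phi[1], t)} ∧ {ST(phi[2], t)})"
    if kind == "dia":                       # <alpha>A ~ existential successor
        y = next(fresh)
        return f"∃{y}({ST_prog(phi[1], t, y)} ∧ {ST(phi[2], y)})"
    if kind == "box":                       # [alpha]A ~ universal successor
        y = next(fresh)
        return f"∀{y}({ST_prog(phi[1], t, y)} → {ST(phi[2], y)})"
    raise ValueError(kind)
```

The hypersequent translation HT of Definition 6.7 refines this by splitting such formulas across cedents and annotations.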
In the definitions below we often use arbitrarily chosen variables (e.g. 'fresh' variables) and constant symbols. We assume these choices do not break our previous assumptions on variables and constants occurring in hypersequents (refer to Sect. 5.1).

Definition 6.7 (Hypersequent translation for formulas) For c a constant symbol and A a PDL + formula, we define the hypersequent translation of A, denoted HT(A)(c), by induction on the complexity of A as follows, for • ∈ {∧, ∨}, with d always a fresh constant symbol and y always a fresh variable: where the cedent translation CT(B)(t) and x B are defined as follows:
1. There is a finite HTC-derivation from S, HT(A)(c) to S, {CT(A)(c)} x A ; and,
2. There is a finite HTC-derivation from S, {CT(A)(c)} x A to S, {ST(A)(c)} ∅ .
By inspection of the HT-translation (Definition 6.9), whenever F i+1 is an immediate ancestor of F i in B, there is a path from the cedent {TC(ST(α))(d i+1,n i+1 , d i+1 )} ∅ to the cedent {TC(ST(α))(d i,n i , d i )} ∅ in the graph of immediate ancestry along B. Thus, since τ = (F i ) i<ω is a trace along B, we have an (infinite) hypertrace of the form H τ := ({Γ i , TC(ST(α))(d i,n i , d i )} ∅ ) i>k along B.

By construction Γ i = ∅ for infinitely many i > k , and so H τ has just one infinite trace. Moreover, by inspection of the [+] step in Definition 6.9, this trace progresses every time τ does, and so progresses infinitely often. Thus, H τ is a progressing hypertrace. Since the choice of the branch B was arbitrary, we are done.

Putting It All Together
We can now finally conclude our main simulation theorem: Proof of Theorem 6.1 Let A be a PDL + formula such that ⊨ A. By the completeness result for LPD + , Theorem 6.4, we have LPD + ⊢ cyc A, say by a cyclic proof D. From here we construct the HTC preproof HT(D)(c) which, by Propositions 6.11 and 6.12, is in fact a cyclic proof of HT(A)(c). Finally, we apply Proposition 6.8 to obtain a cyclic HTC proof of ST(A)(c).

Extension by Equality: Simulating Full PDL
We now briefly explain how our main results are extended to the 'reflexive' version of TCL, which we denote by TCL = , whose language extends that of TCL by adding atomic formulas of the form s = t and s ≠ t.

Hypersequential System with Equality
The calculus HTC = extends HTC by the two rules in (7): a reflexivity rule =, with premiss S, {Γ} x and conclusion S, {t = t, Γ} x ; and a rule ≠, with premiss S, {Γ(s), Δ(s)} x and conclusion S, {Γ(s), s ≠ t} x , {Δ(t)} x . The notion of immediate ancestry for formulas and cedents is colour-coded in (7) just as we did for HTC in Sect. 4.2. Formally:

Definition 7.1 (Ancestry for cedents, HTC = ) Let r be a HTC = inference step, as typeset in Fig. 4 or (7). We say that a cedent S′ in a premiss of r is an immediate ancestor of a cedent S in the conclusion of r if any of the conditions in Definition 4.2 applies (where in 3. we further ask that r is not ≠), or the following holds: 5. r is ≠ and S′ is the (unique) cedent distinguished in the premiss of ≠ and S is the cedent {Γ(s), s ≠ t} x distinguished in the conclusion of ≠.
Definition 7.2 (Ancestry for formulas, HTC = ) Let r be a HTC = inference step, as typeset in Fig. 4 or (7). We say that a formula F′ in a premiss of r is an immediate ancestor of a formula F in the conclusion of r if any of the conditions from Definition 4.3 applies, or the following holds: (e) r is ≠, F′ ∈ Δ(s) and F is s ≠ t.

Completeness for PDL (with Tests)
Turning to the modal setting, PDL may be defined as the extension of PDL + by including a program A? for each formula A. Semantically, we have (A?) M = {(v, v) : M, v ⊨ A}. From here we may define ε := A? for a fixed (classically) valid formula A, and α * := (ε ∪ α) + . Again, while it is semantically correct to set α * = ε ∪ α + , this encoding does not lift to the standard sequent rules for * .
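The relational semantics just recalled (tests included) can be sketched over a finite model as follows; the encoding and names are ours, with α + computed as a transitive closure:

```python
def interp(alpha, worlds, rel, val):
    """
    Relational interpretation of a PDL program over a finite model:
      worlds : finite set of worlds
      rel    : atomic program name -> set of pairs of worlds
      val    : proposition name   -> set of worlds
    Programs/formulas are nested tuples, e.g. ("plus", ("atom", "a")).
    """
    kind = alpha[0]
    if kind == "atom":
        return set(rel[alpha[1]])
    if kind == "cup":                        # union of programs
        return interp(alpha[1], worlds, rel, val) | interp(alpha[2], worlds, rel, val)
    if kind == "comp":                       # relational composition
        R = interp(alpha[1], worlds, rel, val)
        S = interp(alpha[2], worlds, rel, val)
        return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}
    if kind == "test":                       # A? = {(v, v) : v satisfies A}
        return {(v, v) for v in worlds if sat(alpha[1], v, worlds, rel, val)}
    if kind == "plus":                       # alpha+ = transitive closure
        R = interp(alpha[1], worlds, rel, val)
        closure = set(R)
        while True:
            step = {(a, c) for (a, b) in closure for (b2, c) in R if b == b2}
            if step <= closure:
                return closure
            closure |= step
    raise ValueError(kind)

def sat(phi, v, worlds, rel, val):
    """Satisfaction of a PDL formula at world v."""
    kind = phi[0]
    if kind == "prop":
        return v in val[phi[1]]
    if kind == "dia":                        # <alpha>A: some successor satisfies A
        return any(sat(phi[2], w, worlds, rel, val)
                   for (u, w) in interp(phi[1], worlds, rel, val) if u == v)
    if kind == "box":                        # [alpha]A: all successors satisfy A
        return all(sat(phi[2], w, worlds, rel, val)
                   for (u, w) in interp(phi[1], worlds, rel, val) if u == v)
    raise ValueError(kind)
```

On a two-step a-chain, for instance, ⟨a + ⟩p holds at the start whenever p holds at the end, matching the TC reading of a + .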
The system LPD is obtained from LPD + by including the rules for tests: the rule ?, with premisses Γ, A and Γ, B and conclusion Γ, ⟨A?⟩B; and the rule [?], with premiss Γ, Ā, B and conclusion Γ, [A?]B. The notion of ancestry for formulas is defined as for LPD + (Definition 6.2) and colour-coded in the rules. The resulting notions of (pre)proof, thread and progress are as in Definition 6.3. We write LPD ⊢ cyc A if there is a cyclic proof of A in LPD. Just like for LPD + , a standard encoding of LPD into the μ-calculus yields its soundness and completeness, thanks to known sequent systems for the latter, cf. [21,24,27].

Definition 2 . 1 (
Structures) A structure M consists of a set D, called the domain of M, which we sometimes denote by | M |; a subset p M ⊆ D for each p ∈ Pr; and a subset a M ⊆ D × D for each a ∈ Rel.

Fig. 2
Fig. 2 Above: Sequent calculus TC G .The first two lines of the Figure contain the rules of the Tait-style sequent system for first-order predicate logic, without equality.The constant symbol c in the ∀-rule and the TC-rule is called an eigenvariable.Below: The standard rules for equality.When added to TC G , they give the sequent calculus TC = G .Colours define traces (see Remark 2.9)

Definition 2 . 7 (
System) A sequent, written Γ, Δ etc., is a set of formulas. The systems TC G and TC = G are given in Fig. 2: TC = G consists of all the rules displayed, while TC G does not include the = or ≠ rules. TC (=) G -preproofs are possibly infinite trees of sequents generated by the rules of TC (=) G .

Definition 3 . 3 (
Semantics) Fix a structure M with domain D. For elements v ∈ D and programs α we define α M ⊆ D × D by:

Fig. 3
Fig.3 The adversarial model from Definition 3.12.Solid arrows represent a A n -relations, dashed arrows b A n -relations

Fig. 4
Fig. 4 Hypersequent calculus HTC, where σ is a substitution map from constants to terms and a renaming of other function symbols and variables, extended to terms, formulas, cedents and hypersequents in the natural way. For Γ a set of formulas, fv(Γ) denotes the set of free variables occurring in formulas in Γ. Colours define ancestry (see Remark 4.4)

Remark 4.4 (Ancestry via colours)
The conditions for formula ancestry (cf. Definition 4.3) are:
(a) r ≠ σ and F′ = F occur in some cedent S ∈ S; or,
(b) r = σ and F = σ(F′) occurs in some S = σ(S′), where F′ occurs in S′ ∈ S; or,
(c) r = ∪ and F′ = F ∈ Γ or F′ = F ∈ Δ; or,
(d) F′ is one of the formulas explicitly distinguished in the premiss of r and F is the (unique) formula explicitly distinguished in the conclusion of r.
Again we may understand cedent ancestry and formula ancestry by the colouring in Fig. 4. A formula C′ in the premiss is an immediate ancestor of a formula C in the conclusion if they have the same colour; if C, C′ ∈ Γ then we further require C = C′, and if C, C′ occur in S then they must occur in the same cedent. A cedent S′ in the premiss is an immediate ancestor of a cedent S in the conclusion if some formula in S′ is an immediate ancestor of some formula in S.