Constructive Hybrid Games

Hybrid games combine discrete, continuous, and adversarial dynamics. Differential game logic (dGL) enables proving (classical) existence of winning strategies. We introduce constructive differential game logic (CdGL) for hybrid games, where proofs that a player can win the game correspond to computable winning strategies. This constitutes the logical foundation for synthesis of correct control and monitoring code for safety-critical cyber-physical systems. Our contributions include novel semantics as well as soundness and consistency results.


Introduction
Logics for program verification can be broadly divided into two groups. Imperative programs are typically verified with program logics such as Hoare calculi [25] and dynamic logics (DL) [47] where modalities capture the effect of program execution. Game logics (GL) [39], studied in this paper, are DL's with a turn-taking program connective to switch players. In contrast, functional programs are often studied by writing the program in logic. In a dependent type theory [13], a program's type is its correctness specification. Classical higher-order logics also specify functions and their correctness.
The Curry-Howard correspondence [16,28] explains type-theoretic verification: a constructive proof of a specification corresponds to a function which provably implements that specification. Curry-Howard for program logics is far less explored, but deserves exploration in order to answer: what is the computational content of a program-logic proof? This paper argues that the intersection of Curry-Howard and program logic, both fundamental tools, provides an insightful logical foundation. For example, we expect this correspondence will enable synthesis for programmatic models that are too challenging for automatic synthesis without a proof.
Our hybrid games are as in differential game logic (dGL) [42], and combine continuous dynamics, discrete computation, and adversarial dynamics to provide powerful models of cyber-physical systems (CPSs) such as transportation systems, energy systems, and medical devices. But dGL is classical, so truth of dGL formulas only implies classical existence of winning strategies for their hybrid games. Based on discrete Constructive Game Logic (CGL) [8], this paper introduces Constructive Differential Game Logic (CdGL) for hybrid games with a Curry-Howard interpretation under which proofs that a player can win a hybrid game correspond to programs implementing their winning strategies.
CdGL is a compelling use case for Curry-Howard-based synthesis among program logics precisely because synthesizing a winning strategy of a hybrid game is undecidable until a proof is provided; the combination of adversarial and continuous dynamics makes this so not just in theory but in practice. Curry-Howard for games says proofs correspond to constructive winning strategies, allowing us to reduce undecidable synthesis questions to verification questions which, while still undecidable, are routinely verified with human assistance.
Contributions. We build directly on Constructive Game Logic (CGL) [8] for discrete games and Differential Game Logic (dGL) [42] for classical hybrid games. In combining these logics, we must constructively justify differential equation (ODE) reasoning. This requires foundations in constructive analysis [6,10], so the proofs in Appendix B appeal to constructive formalizations of ODEs [15,34]. We also contribute a new type-theoretic semantics, as opposed to the previous realizability semantics [8]. This clarifies subtle side conditions and should be useful in future constructive program logics. Our example model and proof, while short, lay the groundwork for future case studies.

Related Work
We discuss related works on games, constructive logic, and hybrid systems.
Games in Logic. Propositional GL was introduced by Parikh [39]. GL's are unique in their clear delegation of strategy to the proof language rather than the model language, allowing succinct game specifications with sophisticated winning strategies. Succinct specifications are important: specifications are trusted because proving the wrong theorem would not ensure correctness. Relatives without this separation include SL [12], ATL [2], CATL [26], SDGL [22], structured strategies [48], DEL [3,5,55], evidence logic [4], and Angelic Hoare Logic [35].
Constructive Modal Logics. We are interested in the semantics of games, thus we review constructive modal semantics generally. This should not be confused with game semantics [1], which give a semantics to programs in terms of games. The main semantic approaches for constructive modal logics are intuitionistic Kripke semantics [57] and realizability semantics [38,32]. CGL [8] used a realizability semantics which operates on a state, reminiscent of the state in Kripke semantics, whereas we interpret CdGL formulas into type theory.
Modal Curry-Howard is relatively little-studied, and each author has their own emphasis. Explicit proof terms are considered for CGL [8] and a small fragment thereof [30]. Others [58,17,11] focus on intuitionistic semantics for their logics, fragments of CGL. Our semantics should be of interest for these fragments. We omit proof terms for space. CdGL proof terms would extend CGL proof terms [8] with a constructive version of existing classical ODE proof terms [7]. Propositional modal logic [37] has been interpreted as a type system.
Hybrid Systems Synthesis. Hybrid games synthesis is a motivation of this work. Synthesis of hybrid systems (1-player games) is an active area. The unique strength of proof-based synthesis is expressiveness: it can synthesize every provable system. CdGL proofs support first-order regular games with first-order (e.g., semi-algebraic) initial and goal regions. While synthesis and proof are both undecidable, interactive proof for undecidable logics is well-understood. The ModelPlex [36] synthesizer for CdGL's classical systems predecessor dL [44] recently added [9] proof-based synthesis to improve expressiveness. CdGL aims to provide a computational foundation for a more systematic proof-based synthesizer in the more general context of games.
Fully-automatic synthesis, in contrast, restricts itself to some fragment in order to sidestep undecidability. Studied classes include rectangular hybrid games [24], switching systems [51], linear systems with polyhedral sets [31,51], and discrete abstractions [20,19]. A well-known [54] systems synthesis approach translates specifications into finite-alternation games. Arbitrary first-order games are our source rather than target language. Their approach is only known to terminate for simpler classes [50,49].

Constructive Hybrid Games
Hybrid games in CdGL are 2-player, zero-sum, and perfect-information, where continuous subgames are ordinary differential equations (ODEs) whose duration is chosen by a player. Hybrid games should not be confused with differential games which compete continuously [29]. The player currently controlling choices is always called Angel in this paper, while the player waiting to play is always called Demon. For any game α and formula φ, the modal formula ⟨α⟩φ says Angel can play α to ensure postcondition φ, while [α]φ says Demon can play α to ensure postcondition φ. These generalize the safety and liveness modalities of DL. GL's are distinguished from other DL's by the dual game α^d, which implements turn-taking by switching the Angel and Demon roles in game α. The Curry-Howard interpretation of a proof of a modality ⟨α⟩φ or [α]φ is a program which performs the respective player's winning strategy. A game might have several winning strategies, each represented by a different proof.

Syntax of CdGL
We introduce the language of CdGL, which has three classes of expressions e: terms f, g; games α, β; and formulas φ, ψ.
In the repetition game α^*, Angel chooses after each round of α whether to stop or continue playing, but must not repeat α infinitely. The exact number of repetitions is not known in advance, because it may depend on Demon's reactions. In the dual game α^d, Angel takes the Demon role and vice-versa while playing α. Demon strategies "wait" until a dual game α^d is encountered, then contain an Angelic strategy for α. We parenthesize games with braces {α} when necessary.
Definition 3 (CdGL Formulas). The CdGL formulas φ (also ψ, ρ) are:

φ, ψ ::= ⟨α⟩φ | [α]φ | f ∼ g

Above, f ∼ g is a comparison formula for ∼ ∈ {≤, <, =, ≠, >, ≥}. The defining formulas of CdGL (and GL) are the modalities ⟨α⟩φ and [α]φ. These mean that Angel or Demon respectively has a constructive strategy to play α and prove postcondition φ. We do not develop modalities for the existence of classical strategies because those cannot be synthesized to executable code.
For convenience, we also write derived operators where Demon is given control of a single choice before returning control to Angel. The Demonic choice α ∩ β, defined as {α^d ∪ β^d}^d, says Demon chooses which branch to take, but Angel controls the subgames. Demonic repetition α^× is defined likewise by {{α^d}^*}^d.
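The derived Demonic operators above can be made concrete as plain data. The following hypothetical Python AST (names and representation ours, not the paper's) shows how α ∩ β and α^× literally expand into the core connectives.

```python
# A minimal sketch of the CdGL game syntax as a Python AST, illustrating how
# the derived Demonic operators desugar into the core Angelic connectives.
from dataclasses import dataclass

class Game:
    pass

@dataclass
class Test(Game):        # ?psi
    formula: str

@dataclass
class Choice(Game):      # alpha ∪ beta  (Angelic choice)
    left: Game
    right: Game

@dataclass
class Repeat(Game):      # alpha^*       (Angelic repetition)
    body: Game

@dataclass
class Dual(Game):        # alpha^d       (swap Angel and Demon roles)
    body: Game

def demonic_choice(a: Game, b: Game) -> Game:
    """alpha ∩ beta, defined as {alpha^d ∪ beta^d}^d."""
    return Dual(Choice(Dual(a), Dual(b)))

def demonic_repeat(a: Game) -> Game:
    """alpha^×, defined as {{alpha^d}^*}^d."""
    return Dual(Repeat(Dual(a)))
```

Desugaring at the syntax level keeps the core calculus small: semantics and proof rules need only treat the Angelic connectives plus duality.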
We write φ^y_x (likewise for α and f) for the renaming of x to y and vice versa in formula φ, and write φ^f_x for the result of substituting term f for game variable x in φ, if the substitution is admissible (Def. 12).

Example Game
We give an example game and theorem statements, proven in Appendix A. Automotive systems are a major class of CPS, so we consider simple time-triggered 1-dimensional driving with adversarial timing. For maximum time T between control cycles, we let Demon choose any duration in [T/2, T]. This forces Angel's controller to be robust to realistic timing constraints, yet prohibits Demon from pathological "Zeno" behaviors.
We write x for the position of the car, v for the velocity, a for the current acceleration, A > 0 for the maximum positive acceleration, and B > 0 for the maximum braking rate. We assume x = v = 0 initially to simplify arithmetic. In time-triggered control, the controller runs at least once every T > 0 time units. Time and physics are continuous; T simply says how often the controller runs.
Local clock t marks the current time within the current timestep, then resets at each step. The control game (ctrl) says Angel can pick any acceleration a that is physically achievable (−B ≤ a ≤ A). The clock t is then reinitialized to 0. The plant game (plant) says Demon can evolve physics for duration t ∈ [T /2, T ] such that v ≥ 0 throughout, then returns control to Angel. The lower bound on t rules out Zeno strategies where Demon "cheats" by exponentially decreasing durations to effectively stop time. The limit t ≥ T /2 is chosen for simplicity.
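The display defining the game was lost in this rendering; as a hedged sketch consistent with the prose above (the official model appears in Appendix A), the control and plant games can be read along these lines:

```latex
\begin{aligned}
\mathit{ctrl}  &\equiv\; a := {*};\ ?({-B} \le a \le A);\ t := 0\\
\mathit{plant} &\equiv\; \{\{t' = 1,\ x' = v,\ v' = a \,\&\, v \ge 0\};\ ?(T/2 \le t \le T)\}^{d}\\
\mathit{game}  &\equiv\; \{\mathit{ctrl};\ \mathit{plant}\}^{*}
\end{aligned}
```

Here the dual on plant gives Demon control of the ODE duration, and the trailing test obliges Demon to respect the bound t ∈ [T/2, T].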
Typical theorems in DL's and GL's are safety and liveness: are unsafe states always avoided and are goals eventually reached? Safety and liveness of the 1D system have been proven previously: safe driving (safety) never goes past goal g, while live driving eventually reaches g (liveness).
Safety and liveness theorems, if designed carelessly, have trivial solutions. It is safe to remain at x = 0 and is live to maintain a = A, but not vice-versa. In contrast to DL's, GL's easily express the requirement that the same strategy is both safe and live: we must remain safe while reaching the goal. This specification is called reach-avoid, which we use because it is immune to trivial solutions. We state and prove a new reach-avoid result for 1D driving.
Example 4 (Reach-avoid). The following is provable in dGL and CdGL: Angel reaches v = 0 ∧ g = x while safely avoiding states where x ≤ g does not hold. Angel is safe at every iteration for every time t ∈ [0, T], thus safe throughout the game. The test t ∈ [T/2, T] appears second, allowing Demon to win if Angel violates safety during t < T/2. 1D driving is well-studied for classical systems, but the constructive reach-avoid proof (Appendix A) is subtle. The proof constructs an envelope of safe upper and live lower bounds on velocity as a function of position (Fig. 1); the blue point indicates where Angel must begin to brake to ensure time-triggered safety. It is surprising that Angel can achieve postcondition g = x ∧ v = 0, given that trichotomy (f < g ∨ f = g ∨ f > g) is constructively invalid. The key (Appendix A) is that comparison terms min(f, g) and max(f, g) are exact under Type 2 effectivity, where bits of min and max may be computed lazily. Our exact result encourages us that constructivity is not overly burdensome in practice. When decidable comparisons (f < g + δ ∨ f > g) are needed, the alternative is a weaker guarantee x ∈ [g − ε, g] for parameter ε > 0. This relaxation is often enough to make the theorem provable, and reflects the fact that real agents only expect to reach their goal within finite precision.

Type-theoretic Semantics
In this section, we define the semantics of games and game formulas in type theory. We start with assumptions on the underlying type theory.

Type Theory Assumptions
We assume a Calculus of Inductive and Coinductive Constructions (CIC)-like type theory [13,14,53] with polymorphism and dependency. We assume first-class anonymous constructors for (indexed [18]) inductive and coinductive types. We write τ for type families and κ for kinds (those type families inhabited by other type families). Inductive type families are written µt : κ. τ, denoting the smallest solution ty of kind κ to the fixed-point equation ty = τ^{ty}_t. Coinductive type families are written ρt : κ. τ, denoting the largest solution ty of kind κ to the fixed-point equation ty = τ^{ty}_t. Per Knaster-Tarski [23, Thm. 1.12], the type expression τ must be monotone in t to ensure that smallest and largest solutions exist. We allow arbitrary proofs that τ is monotone; a major reason we did not mechanize this work is that prominent proof assistants such as Coq reject definitions where monotonicity requires nontrivial proof.
We use a single predicative universe which we write T and Coq writes Type 0. Predicativity is an important assumption because our semantic definition is a large elimination, a feature known to interact dangerously with impredicativity. We write Πx : τ1. τ2 for a dependent function type with argument named x of type τ1 and where return type τ2 may mention x. We write Σx : τ1. τ2 for a dependent pair type with left component named x of type τ1 and right component of type τ2, possibly mentioning x. These specialize to the simple types τ1 → τ2 and τ1 * τ2 respectively when x is not mentioned in τ2. Lambdas (λx : τ. M) inhabit function types. Pairs (M, N) inhabit dependent pair types. Application is M N. Let-binding unpacks pairs, and π_L M and π_R M are left and right projection. We write τ1 + τ2 for disjoint unions, inhabited by ℓ · M and r · M, and write case A of ℓ p ⇒ B | r q ⇒ C for case analysis.
We assume a real number type R and a Euclidean state type S. The positive real numbers are written R⁺, the nonnegative reals R≥0. We assume scalar and vector sums, products, inverses, and units. A state s : S assigns a value to every variable x ∈ V and supports operations s x and set s x v, which respectively retrieve the value of x or update it to v. The usual axioms of setters and getters [21] are satisfied.
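The setter/getter axioms cited above are the familiar lens laws. A minimal Python sketch (states as dictionaries, representation ours) makes them concrete:

```python
# States map variable names to reals; get/set satisfy the usual lens laws.
def get(s, x):
    return s[x]

def set_(s, x, v):
    t = dict(s)          # states are persistent: set returns a new state
    t[x] = v
    return t

s = {"x": 0.0, "v": 0.0}
# get-set law: reading back what was written
assert get(set_(s, "x", 2.0), "x") == 2.0
# set-get law: writing back what was read changes nothing
assert set_(s, "x", get(s, "x")) == s
# set-set law: the last write wins
assert set_(set_(s, "x", 1.0), "x", 2.0) == set_(s, "x", 2.0)
```

Any state representation satisfying these three laws suffices for the semantics below.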

Semantics of CdGL
Terms f, g are interpreted into type theory as functions of type S → R. We will need differential terms (f)′, a definable term construct when f is differentiable.
Not every term f need be differentiable, so we give a virtual definition, defining when (f)′ is equal to some term g. If (f)′ does not exist, (f)′ = g is not provable. We define the (total) differential as the dot product (·) of the gradient (variable name: ∇) with s′, the vector of values s x′ assigned to primed variables x′. To show that ∇ is the gradient, we define the gradient as a limit, which we express in (ε, δ) style. In this definition, f and g are scalar-valued, and the minus symbol is used for both scalar and vector difference.
For practical proofs, a library of standard rules for automatic, syntactic differentiation of common arithmetic operations can be proven.
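Such a library of syntactic rules can be sketched concretely. The following hypothetical Python fragment (term representation ours) implements the sum and product rules over a toy term AST, mapping each variable x to its primed counterpart x′, and checks the differential of x·x against the analytic answer.

```python
# Syntactic differentiation for polynomial terms. A term is a nested tuple:
# ('var', x) | ('const', c) | ('+', f, g) | ('*', f, g).
def d(term):
    tag = term[0]
    if tag == 'var':
        return ('var', term[1] + "'")          # (x)' = x'
    if tag == 'const':
        return ('const', 0)                    # (c)' = 0
    if tag == '+':
        return ('+', d(term[1]), d(term[2]))   # (f + g)' = (f)' + (g)'
    if tag == '*':                             # (f * g)' = (f)'*g + f*(g)'
        return ('+', ('*', d(term[1]), term[2]),
                     ('*', term[1], d(term[2])))

def ev(term, s):
    """Evaluate a term in a state mapping variables (and primed variables) to reals."""
    tag = term[0]
    if tag == 'var':   return s[term[1]]
    if tag == 'const': return term[1]
    if tag == '+':     return ev(term[1], s) + ev(term[2], s)
    if tag == '*':     return ev(term[1], s) * ev(term[2], s)

# (x*x)' = x'*x + x*x'; at x = 3, x' = 1 this evaluates to 6,
# matching the analytic derivative of x^2.
s = {"x": 3.0, "x'": 1.0}
assert ev(d(('*', ('var', 'x'), ('var', 'x'))), s) == 6.0
```

Each syntactic rule corresponds to a provable equation (f)′ = g in the virtual definition above.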
We model a formula φ as a predicate over states, i.e., a type family ⟦φ⟧ : S → T. A predicate of kind S → T is also understood as a region, e.g., ⟦φ⟧ is the region containing states where φ is provable. We say the formula φ is valid if there exists a term M such that ⊢ M : (Πs : S. ⟦φ⟧ s). That is, a valid formula is provable in every state. The witness may inspect the state, but must do so constructively. The formula semantics are defined in terms of the Angelic and Demonic semantics of games, which determine how to win a game α whose postcondition is φ. We write ⟨⟨α⟩⟩ : (S → T) → (S → T) for the Angelic semantics of α and [[α]] : (S → T) → (S → T) for its Demonic semantics. Angel and Demon strategies for a game α with postcondition P are inhabitants of ⟨⟨α⟩⟩ P and [[α]] P, respectively.
Modality ⟨α⟩φ is provable in s when ⟨⟨α⟩⟩ ⟦φ⟧ s is inhabited, so Angel has an α strategy from s to reach the region ⟦φ⟧ on which φ is provable. Modality [α]φ is provable in s when [[α]] ⟦φ⟧ s is inhabited, so Demon has an α strategy from s to reach the region ⟦φ⟧ on which φ is provable. For ∼ ∈ {≤, <, =, ≠, >, ≥}, the values of f and g are compared at state s in f ∼ g. The game and formula semantics are simultaneously inductive. In each case, the connectives which define ⟨⟨α⟩⟩ and [[α]] are duals, because [α]φ and ⟨α⟩φ are dual. Below, P refers to the postcondition of the game and s to the initial state.
Definition 6 (Angel semantics). We define ⟨⟨α⟩⟩ : (S → T) → (S → T) inductively (by a large elimination) on α; representative cases include:

⟨⟨?ψ⟩⟩ P s = ⟦ψ⟧ s * P s
⟨⟨x := f⟩⟩ P s = P (set s x (f s))
⟨⟨x := *⟩⟩ P s = Σv : R. P (set s x v)
⟨⟨α ∪ β⟩⟩ P s = ⟨⟨α⟩⟩ P s + ⟨⟨β⟩⟩ P s

Angel wins ?ψ by proving both ψ and P at s. Angel wins the deterministic assignment x := f by performing the assignment, then proving P. Angel wins nondeterministic assignment x := * by constructively choosing a value v to assign, then proving P. Angel wins α ∪ β by choosing between playing α or β, then winning that game.
Demon wins [?ψ]P by proving P under assumption ψ, which Angel must provide (Section 7). Demon's deterministic assignment is identical to Angel's. Demon wins [x := *]P by proving P for every choice of x. Demon wins [α ∪ β]P with a pair of winning strategies. Demon wins [α; β]P by winning α with a postcondition of winning β. Demon wins [α^d]P if he can win α after switching roles with Angel.
Demon wins [x′ = f & ψ]P if, for an arbitrary duration and arbitrary solution which satisfy the domain constraint, he can prove the postcondition. Demon wins [α^*]P if he can prove P no matter how many times Angel makes him play α. Demon repetition strategies are coinductive using some invariant τ. When Angel decides to stop the loop, Demon responds by proving P from τ. Whenever Angel chooses to continue, Demon proves that τ is preserved. Greatest fixed points exist by Knaster-Tarski [23, Thm. 1.12] and Lemma 7.
It is worth comparing the Angelic and Demonic semantics of x := *. An Angel strategy says how to compute x. A Demon strategy simply accepts any x ∈ R as its input, even uncomputable numbers. This is because Angel strategies supply a computable real, while Demon strategies compute outputs from arbitrary real inputs. In general, each strategy is constructive but permits its opponent to play classically. In the cyber-physical setting, the opponent is indeed rarely a computer.

Proof Calculus
To enable direct syntactic proof, we give a natural deduction system for CdGL. We write Γ = ψ1, . . . , ψn for a context of formulas and Γ ⊢ φ for the natural-deduction sequent with conclusion φ and context Γ. We begin with rules shared by CGL [8] and CdGL, then present the new rules for ODEs. We write Γ^y_x for the renaming of game variable x to y and vice versa in context Γ. Likewise Γ^f_x is the substitution of term f for game variable x. To avoid repetition, we write ⟨[α]⟩φ to indicate that the same rule applies for ⟨α⟩φ and [α]φ; such rules write [⟨α⟩]φ for the dual modality. We write FV(e) and BV(α) for the free variables of expression e and bound variables of game α respectively, i.e., variables which might influence the meaning of an expression or be modified during game execution.

Monotonicity rule M is Lemma 7 in rule form. The second premiss writes Γ^y_{BV(α)} to indicate that the bound variables of α must be freshly renamed in Γ for soundness. Rule M is used for generalization because all GL's are subnormal, lacking axiom K (modal modus ponens) and necessitation. Common uses include concise right-to-left symbolic execution proofs and, in combination with [;]I, Hoare-style sequential composition reasoning.

Nondeterministic assignments quantify over real-valued game variables. Assignment rule [:=]I remembers the initial value of x in a fresh variable y (Γ^y_x) for the sake of completeness, then provides an assumption that x has been assigned to f. Skolemization [:*]I bound-renames x to y in Γ, written Γ^y_x. Specialization [:*]E instantiates x to a term f. Existentials are introduced by giving a witness f in ⟨:*⟩I. Herbrandization ⟨:*⟩E unpacks existentials; soundness requires that x is not free in ψ.

Fig. 4 provides rules for repetitions. In rule ⟨*⟩I, M indicates an arbitrary termination metric, where ≻ denotes an arbitrary (effectively) well-founded [27] partial order with some zero element 0. M0 is a fresh variable which remembers the initial value of M.
Angel plays α^* by repeating an α strategy which always decreases the termination metric. Angel maintains a formula ϕ throughout, and stops once 0 ≽ M. The postcondition need only follow from the termination condition 0 ≽ M and convergence formula ϕ. Simple real comparisons x ≥ y are not well-founded, but inflated comparisons like x ≥ y + 1 are. Well-founded metrics ensure convergence in finitely (but often unboundedly) many iterations. In the simplest case, M is a real-valued term. Generalizing M to tuples enables, e.g., lexicographic termination metrics. For example, the metric in the proof of Example 4 is the distance to the goal, which must decrease by some minimum amount each iteration. Rule FP says ⟨α^*⟩φ is a least pre-fixed-point. It works backwards: first show ψ holds after α^*, then preserve ψ when each iteration is unwound. Rule loop is the repetition invariant rule. Demonic repetition is eliminated by [*]E.
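The role of the inflated comparison can be illustrated directly: if each iteration decreases a real-valued metric M by at least some fixed δ > 0, the loop terminates within ⌈M0/δ⌉ rounds even though M ranges over the reals. A small Python sketch (ours, for illustration):

```python
# If the metric decreases by at least delta each round, termination is
# guaranteed within ceil(M0 / delta) iterations.
import math

def run_loop(M0, delta):
    M, iterations = M0, 0
    while M > 0:               # guard: progress remains to be made
        M -= delta             # the loop body decreases M by at least delta
        iterations += 1
    return iterations

M0, delta = 10.0, 0.75
assert run_loop(M0, delta) <= math.ceil(M0 / delta)
```

Without the minimum decrement δ, a Zeno-style sequence of ever-smaller decreases could run forever, which is exactly what the well-foundedness side condition excludes.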
Like any first-order program logic, CdGL proofs contain first-order reasoning at the leaves. Decidability of constructive real arithmetic is an open problem [33], so first-order facts are proven manually in practice. Our semantics embed CdGL into type theory; we defer first-order arithmetic proving to the host theory. Note that even effectively well-founded orders need not have decidable guards (M ≻ 0 ∨ 0 ≽ M), since exact comparisons are not computable [6]. We may not be able to distinguish M = 0 from very small positive values of M, leading to one unnecessary loop iteration, after which M has certainly reached 0 and the loop terminates. Comparison up to ε > 0 is decidable [10] (f > g ∨ f < g + ε).

Fig. 5 gives the ODE rules, which are a constructive version of those from dGL [42]. For nilpotent ODEs such as the plant of Example 4, reasoning via solutions is possible. Since CdGL supports nonlinear ODEs which often do not have closed-form solutions, we provide invariant-based rules, which are complete [46] for invariants of polynomial ODEs. Differential induction DI [41] says φ is an invariant of an ODE if it holds initially and if its differential formula [41] (φ)′ holds throughout, for example (f ≥ g)′ ≡ ((f)′ ≥ (g)′). Soundness of DI requires differentiability, and (φ)′ is not provable when φ mentions nondifferentiable terms. Differential cut DC proves R invariant, then adds it to the domain constraint. Differential weakening DW says that if φ follows from the domain constraint, it holds throughout the ODE. Differential ghosts DG permit us to augment an ODE system with a fresh dimension y, which enables [46] proofs of otherwise unprovable properties. We restrict the right-hand side of y′ to be linear in y and (uniformly) continuous in x because soundness requires that ghosting y does not change the duration of the ODE.
A linear right-hand side is guaranteed to be Lipschitz on the whole existence interval of equation x′ = f, thus ensuring an unchanged duration by (constructive) Picard-Lindelöf [34]. Differential variants [41,52] DV are an Angelic counterpart to DI. The schema parameters d and ε must not mention x, x′, t, t′. To show that h eventually exceeds g, first choose a duration d and a sufficiently high minimum rate ε at which h − g will change.
Prove that h − g increases at rate at least ε and that the ODE has a solution of duration d satisfying constraint ψ. Thus at time d, both h ≥ g and its provable consequents hold. Rules bsolve and dsolve assume as a side condition that sln is the unique solution of x′ = f on domain ψ. They are convenient for ODEs with simple solutions, while invariant reasoning supports complicated ODEs.

Theory: Soundness
Following constructive counterparts of the classical soundness proofs for dGL, we prove that the CdGL proof calculus is sound: provable formulas are true in the CIC semantics. We begin with standard lemmas; full details appear in Appendix B.

Lemma 11 (Bound effect). Game execution modifies only bound variables.
Definition 12 (Term substitution admissibility [40, Def. 6]) defines when the substitution φ^f_x is admissible for a formula φ (likewise for a context Γ, term f, and game α).

Soundness of the proof calculus follows from these lemmas; soundness of the ODE rules additionally employs several known results from constructive analysis.
Proof Sketch. By induction on the derivation. The assignment case holds by Lemma 13 and Lemma 9. Lemma 10 and Lemma 11 are applied when maintaining truth of a formula across changing state. The equality and inequality cases of DI and DV employ the constructive mean-value theorem (Theorem 21 in Appendix B), which has been formalized, e.g., in Coq [15]. Rules DW, bsolve, and dsolve follow from the semantics of ODEs. Rule DC uses the fact that prefixes of solutions are solutions. Rule DG uses constructive Picard-Lindelöf [34], which constitutes an algorithm for arbitrarily approximating the solution of any Lipschitz ODE, with a convergence rate depending on its Lipschitz constant.
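Constructive Picard-Lindelöf is an approximation algorithm: it produces the solution of a Lipschitz ODE to any requested accuracy. As a crude illustration of this approximability (our simplification, not the paper's construction), a fixed-step Euler scheme already converges for the Lipschitz ODE x′ = x, x(0) = 1:

```python
# Fixed-step Euler approximation of a one-dimensional ODE x' = f(x).
import math

def euler(f, x0, t_end, n):
    h = t_end / n
    x = x0
    for _ in range(n):
        x += h * f(x)          # one Euler step along the vector field
    return x

# With enough steps the approximation approaches the true solution e^t;
# at t = 1 the error is O(1/n) for this Lipschitz right-hand side.
approx = euler(lambda x: x, 1.0, 1.0, 100_000)
assert abs(approx - math.e) < 1e-3
```

The constructive proof gives more than this sketch: an explicit convergence rate in terms of the Lipschitz constant, which is what soundness of DG relies on.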
We have shown that every provable formula is true in the type-theoretic semantics. Because the soundness proof is constructive, it amounts to an extraction algorithm from CdGL into type theory: for each proof, there exists a program in type theory which inhabits the corresponding type.

Theory: Extraction and Execution
Another perspective on constructivity is that provable properties must have witnesses. We show Existential and Disjunction properties providing witnesses for existentials and disjunctions. For modal formulas ⟨α⟩φ and [α]φ, we show proofs can be used as winning strategies: a big-step operational semantics play allows playing strategies against each other to extract a proof that their goals hold in some final state s. Our presentation is more concise than defining the language, semantics, and properties of strategies separately, while providing the key insights. The proofs follow directly from their counterparts in type theory. The Disjunction Property considers truth at a specific state: it is not the case that validity of φ ∨ ψ implies validity of either φ or ψ. For example, x < 1 ∨ x > 0 is valid, but its disjuncts are not.
Function play below gives a big-step semantics: Angel and Demon strategies as and ds for respective goals φ and ψ in game α suffice to construct a final state s satisfying both. By parametricity, s was found by playing α, because play cannot inspect P and Q, thus can only prove them via as and ds.
Applications of play are written play_α s as ds (P and Q implicit). Game consistency (Corollary 17) is by play and consistency of type theory. Note that α^d is played by swapping the Angel and Demon strategies in α.

play_{x:=f} s as ds = (let t = set s x (f s) in (t, (as t, ds t)))
play_{x:=*} s as ds = (let t = set s x (π_L as) in (t, (π_R as, ds (π_L as))))
play_{?φ} s as ds = (s, (π_R as, ds (π_L as)))
play_{α∪β} s as ds = case as s of ℓ as′ ⇒ play_α s as′ (π_L ds) | r as′ ⇒ play_β s as′ (π_R ds)
play_{α;β} s as ds = (let (t, (as′, ds′)) = play_α s as ds in play_β t as′ ds′)
play_{α^*} s as ds = case as s of ℓ as′ ⇒ (s, (as′, π_L ds)) | r as′ ⇒ (let (t, (as′′, ds′)) = play_α s as′ (π_R ds) in play_{α^*} t as′′ ds′)
play_{α^d} s as ds = play_α s ds as

Corollary 17 (Consistency). It is never the case that both ⟨⟨α⟩⟩ ⟦φ⟧ s and [[α]] ⟦¬φ⟧ s are inhabited.
The play semantics show how strategies can be executed. Consistency is a theorem which ought to hold in any GL and thus helps validate our semantics.
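To illustrate execution, here is a hypothetical Python interpreter (representation ours) for a fragment of play: an Angel strategy resolves tests, choices, and nondeterministic assignments, and the interpreter returns the final state.

```python
# A toy play interpreter. Games are tuples; an Angel strategy is a dictionary
# of callbacks that pick values for x := * and branches for choices.
def play(game, s, angel):
    tag = game[0]
    if tag == 'assign':                  # x := f
        _, x, f = game
        return {**s, x: f(s)}
    if tag == 'assign_any':              # x := * : Angel supplies the value
        _, x = game
        return {**s, x: angel['pick'](s)}
    if tag == 'test':                    # ?phi : Angel must establish phi
        _, phi = game
        assert phi(s), "Angel loses: test failed"
        return s
    if tag == 'choice':                  # a ∪ b : Angel picks the branch
        _, a, b = game
        return play(a if angel['branch'](s) else b, s, angel)
    if tag == 'seq':                     # a ; b
        _, a, b = game
        return play(b, play(a, s, angel), angel)

# Angel plays  v := * ; ?(v >= 0) ; {x := v ∪ x := -v}  to ensure x >= 0.
game = ('seq', ('assign_any', 'v'),
        ('seq', ('test', lambda s: s['v'] >= 0),
         ('choice', ('assign', 'x', lambda s: s['v']),
                    ('assign', 'x', lambda s: -s['v']))))
angel = {'pick': lambda s: 2.0, 'branch': lambda s: s['v'] >= 0}
final = play(game, {'x': 0.0, 'v': 0.0}, angel)
assert final['x'] == 2.0
```

Unlike this sketch, the paper's play also threads Demon's strategy and returns proofs of both players' goals at the final state; the control flow, however, is the same.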

Conclusion and Future Work
We extended Constructive Game Logic CGL to CdGL for hybrid games. We contributed a new static and dynamic semantics. We presented a natural deduction proof calculus for CdGL and used it to prove reach-avoid correctness of 1D driving with adversarial timing. We showed soundness and constructivity results. The next step is to implement a proof checker, game interpreter, and synthesis tool for CdGL. Function play is the high-level interpreter algorithm, while synthesis would commit to one Angel strategy and allow black-box Demon implementations. In practice, Demon strategies represent some physical environment which is not implemented in type theory. There is good justification to allow black-box treatment of Demon: the Demon connectives are negative and thus defined by their observable behaviors. Any program which behaves like a Demon is a Demon. Angel connectives are defined positively by their introduction forms, thus the task of synthesis is to extract these contents into code form.

A Example Proof
It is understood that reading these appendices is optional for the reviewers. We include the appendices in the event that a reviewer wishes to read them. When published, the appendices will be published in an online-only extended version.
We restate the definition of the 1D driving game and its reach-avoid specification from Example 4. We first give an overview of our proof approach, then give the main algebraic derivations, then finally the complete natural deduction proof.

A.1 Proof overview
While safety of 1D driving is a thoroughly studied introductory example, adversarial 1D reach-avoid is more challenging due to the combination of adversarial timing and liveness.
To simplify our arithmetic, the proof uses the same acceleration magnitude C = min(A, B) to both accelerate and brake. The resulting strategy is conservative, but still satisfies reach-avoid correctness. Our proof proceeds by convergence: we establish a minimum distance ∆x which is traversed in each iteration, guaranteeing that the goal is eventually reached. The minimum distance is determined by appealing to a velocity envelope which is invariant throughout the loop, given as a function of the position. Much of Section A.2 is devoted to identifying a velocity envelope which is invariant while also strong enough to ensure liveness. The car's velocity goes to 0 during braking, so the key is to show that velocity decreases slowly enough to ensure progress. We depict the safe driving envelope in Fig. 6.
A major task of the controller is to detect events within an adversarial environment. We detect one event: when are we close enough to the goal that we must brake? Because our acceleration and braking rates are the same, it suffices to begin braking by the midpoint x = g/2. Care is still required because the controller is time-triggered: we must determine whether it is possible to cross the midpoint within the coming timestep, and react right before we actually cross the midpoint. In Fig. 6, the blue point BM(T) is the point at which we would start to react.
The other major task of the controller is to choose an acceleration value. Until we approach the midpoint, the acceleration is at the maximum value C. Once the midpoint is detected, acceleration is computed with a predictive method: compute the state at the end of the timestep, and solve for the greatest acceleration that maintains safety. Recall that CdGL features Type 2 effective computations on reals. We note that in proofs which use inexact comparisons for constructive reals, only approximate "reach" properties might be provable. The reason our proof obtains an exact result is subtle: in Type 2 effectivity, the term-level functions min(f, g) and max(f, g) are exact, in contrast to inexact formula-level comparisons. Because min(f, g) and max(f, g) are terms, Type 2 effectivity demands only that when real numbers are represented as lazy streams of bits, the binary representation of min(f, g) or max(f, g) can be computed lazily from the bits of f and g. Exact min and max are implementable lazily: each bit of f is compared with the corresponding bit of g. In the case that f and g have identical bit representations, this process will lazily return their common bit representation. In the case that the bit representations of f and g differ, the extremum will start by returning the identical bits, then commit to a choice of f or g once a differing bit is found. While there exist numbers with multiple binary representations (1.0₂ = 0.111…₂), this simply means min and max are free to return either representation.
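The lazy comparison described above can be sketched in Python with generators (a simplified illustration, ours: fractions in [0, 1) as infinite bit streams in a fixed binary representation): min emits agreeing bits immediately and commits to one argument only once a differing bit is found.

```python
# Lazy min over infinite bit streams: emit common bits without deciding
# which input is smaller; commit once the streams disagree.
from itertools import chain, islice, repeat

def lazy_min(f, g):
    f, g = iter(f), iter(g)
    while True:
        a, b = next(f), next(g)
        if a == b:
            yield a            # undecided: emit the common bit and continue
        elif a < b:
            yield a            # f is smaller: commit to the rest of f
            yield from f
            return
        else:
            yield b            # g is smaller: commit to the rest of g
            yield from g
            return

def bits(prefix):              # a finite prefix padded with zeros
    return chain(prefix, repeat(0))

# 0.10110... vs 0.10011...: the first two bits agree, then min commits.
out = list(islice(lazy_min(bits([1,0,1,1,0]), bits([1,0,0,1,1])), 5))
assert out == [1, 0, 0, 1, 1]
# On identical streams min never commits, yet still produces output lazily.
assert list(islice(lazy_min(repeat(0), repeat(0)), 4)) == [0, 0, 0, 0]
```

The second assertion is the crucial constructive point: even when f = g, so that no finite prefix decides the comparison, the extremum is still productive bit by bit.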

A.2 Algebraic derivations
We now algebraically derive the main equations of the proof, e.g., for invariant regions, termination metrics, and acceleration control. In this section, "monotonicity" does not refer to the monotonicity rule for game modalities, but to monotone functions in the arithmetic sense, i.e., functions that preserve order. The following employs the well-known Newtonian motion equations x(t) = x(0) + v(0)t + at²/2 and v(t) = v(0) + at. Because our initial conditions are v(0) = x(0) = 0, we can often eliminate the first terms. We write x(k) and v(k) for the values of x and v at the beginning of a given iteration of the game loop, as opposed to the beginning of the game. From the Newton equations we derive the safe-braking (SB) inequality v²/(2C) ≤ g − x, which says braking rate −C suffices to stop the car by the time x reaches g. When SB holds with equality, x reaches g exactly as the car stops.
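The safe-braking check can be sketched in a few lines of Python, under the paper's simplifying assumption that acceleration and braking rates both equal C; the function names are ours:

```python
# Braking at rate -C from speed v covers distance v^2/(2C) before stopping,
# so braking suffices to stop by the goal g iff v^2/(2C) <= g - x (SB).
def stopping_distance(v, C):
    return v * v / (2 * C)

def safe_braking(x, v, g, C):
    return stopping_distance(v, C) <= g - x

print(safe_braking(x=30.0, v=10.0, g=100.0, C=1.0))  # 50 <= 70 -> True
print(safe_braking(x=60.0, v=10.0, g=100.0, C=1.0))  # 50 <= 40 -> False
```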
We write x_mid for the midpoint, i.e., g/2. We write v_mid for the maximum velocity which may ever be attained, i.e., the maximum velocity one might have at the midpoint and still brake safely. We write UV(x) for the upper bound of permissible velocity at a position x. The shape of UV will be a convex "triangle" which is 0 at x = 0 and x = g and attains value v_mid at x = x_mid. The shape of UV is convex rather than a true triangle because its edges are defined by square-root expressions relating position and velocity.
Its left side (LS) is derived by setting the maximum acceleration a = C and solving the Newton equations for v as a function of x, giving LS(x) = √(2Cx). The right side (RS) is derived by setting the braking acceleration to C, assuming SB holds as an equality, and solving for v as a function of x, giving RS(x) = √(2C(g − x)). The upper velocity envelope is their minimum: UV(x) = min(LS(x), RS(x)). We refer to the set of points bounded by these curves as the Large Triangle (LT), with the understanding that its sides are actually convex curves. Choosing a lower velocity bound LV(x) at each point x is more challenging, because velocity necessarily decays as x approaches g. To prove that the goal position is eventually reached, our proof must rule out so-called "Zeno" behaviors, such as those where the distance traversed in each timestep decreases exponentially. Ruling out Zeno behaviors requires a somewhat strong lower bound LV. Strong bounds are of course desirable in and of themselves: the higher the bound, the faster x is guaranteed to reach g.
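The upper envelope can be sketched directly in Python, following the derivation above (accelerating at C from rest on the left, braking at C to stop at g on the right); the concrete function names are ours:

```python
import math

# Upper velocity envelope of the Large Triangle:
#   LS(x) = sqrt(2*C*x),  RS(x) = sqrt(2*C*(g - x)),  UV(x) = min(LS, RS).
def LS(x, C):
    return math.sqrt(2 * C * x)

def RS(x, g, C):
    return math.sqrt(2 * C * (g - x))

def UV(x, g, C):
    return min(LS(x, C), RS(x, g, C))

g, C = 100.0, 1.0
print(UV(0.0, g, C), UV(g / 2, g, C), UV(g, g, C))
# 0 at both endpoints; the peak v_mid = sqrt(C*g) = 10 at the midpoint
```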
Our control scheme predicts future motion: a control choice is safe if for every duration t ∈ [0, T], the motion is safe. By monotonicity, it suffices to show the case t = T. The lower bound likewise predicts motion, so we introduce several helper functions which predict motion. We write BM(T) (before midpoint) for the position x from which the car will reach the midpoint in time T at maximum acceleration. Under time-triggered control, BM(T) is the "point of no return" by which Angel must react, and we safely detect the approaching midpoint by comparing our position to BM(T). We also write AM(T) (after midpoint) for the conjugate point of BM(T) opposite the midpoint. To derive BM(T) we set x(k) + UV(x(k))T + CT²/2 = g/2, with the simplification that UV(x(k)) = LS(x(k)) before the midpoint, and solve (by computer) for BM(T) and its conjugate AM(T). It is clear that control must be more conservative past BM(T); the only question is how conservative. For example, the minimum safe acceleration from BM(T/2) is 0, which takes us to the boundary at AM(T/2) if Demon chooses t = T. We might wonder whether it suffices to exclude all states above the line connecting BM(T/2) and AM(T/2), indicated by a red dashed line in Fig. 6. It does not, under adversarial timing. Consider the blue point BM(T/2) again. Demon may choose t = T/2 so that by definition Angel regains control at x = x_mid but v < v_mid. Demon now has a strategy to keep Angel strictly in the interior of the Large Triangle indefinitely, so that the lower bound is eventually violated, likely after AM(T/2).
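Since the paper's closed form for BM(T) is computed by machine, a numeric sketch can recover it from its defining equation instead. This is our own bisection-based approximation, assuming UV(x) = LS(x) = √(2Cx) before the midpoint as in the text:

```python
import math

# BM(T): the position from which maximum acceleration C reaches the midpoint
# g/2 in exactly time T, i.e. the root of  x + sqrt(2*C*x)*T + C*T^2/2 = g/2.
def BM(T, g, C, iters=60):
    f = lambda x: x + math.sqrt(2 * C * x) * T + C * T * T / 2 - g / 2
    lo, hi = 0.0, g / 2   # f(lo) <= 0 < f(hi) for reasonable T, g, C
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return lo

g, C, T = 100.0, 1.0, 1.0
x0 = BM(T, g, C)
v0 = math.sqrt(2 * C * x0)                      # velocity on LS at BM(T)
print(round(x0 + v0 * T + C * T * T / 2, 6))    # reaches g/2 = 50.0
```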
Our envelope can only hope to be an invariant if a strict inequality LV(x) < UV(x) holds for all x ∈ (x_mid, g). We believe that the optimal lower bound is particularly nontrivial, so we do not aim to show our bound is perfectly tight. We do note that our bound is not exceptionally loose either: for example, if we were to permit Demon to elapse physics up to time 2T, then any strategy more aggressive than ours becomes clearly unsafe. Regardless, we believe the controller used in this proof is tight modulo our simplifying assumption A = B = C; we simply use this slightly looser bound for the sake of proving liveness.
The starting observation is that Angel might need to decrease acceleration as early as BM(T). The simplest live decision would be to construct a braking rate a_end > 0 which reaches g exactly when v = 0, and simply brake at rate a = −a_end until stopped. The braking curve of rate a_end forms the small right side (SRS) of the triangle, while its mirror forms the small left side (SLS). Perhaps we could reuse LS for the left side, but we preserve symmetry in hopes of simplifying the proof.
Consider the upper-left point UL = (BM(T), UV(BM(T))) and upper-right point UR = (AM(T), UV(AM(T))). The conservative braking rate a_end is defined by setting SB as an equality and solving for a as a function of x and v, giving a = v²/(2(g − x)).
Then we define the Small Triangle (ST) as the set of points bounded by SLS and SRS. The lower velocity envelope is their minimum: LV(x) = min(SLS(x), SRS(x)). We are finally ready to give the variant formula J and the progress term ∆x which induces the termination metric. The variant simply says the velocity is between the envelopes and gives the signs of the state variables: J ≡ LV(x) ≤ v ∧ v ≤ UV(x) ∧ v ≥ 0 ∧ x ≥ 0. The minimum progress ∆x is found by taking the shortest distance traversed along any path of duration T/2 contained within the envelope between the triangles LT and ST, where a ranges over accelerations which remain within the envelope for time t.
We observe that distance traversed is monotone in x, v, and t. Thus it suffices to consider the worst case where t = T/2, x = 0, v = 0, and a = a_end: the worst-case acceleration is a = a_end because a_end defines the lower bound of the velocity envelope. The worst case is then ∆x = a_min T²/8. We now construct our termination argument. We use M ≡ (g − x) as our termination metric with terminal value 0, and define a ≺ b ↔ a + ∆x ≤ b. As usual, the convergence proof need only prove a disjunction: either M has decreased or it is equal to zero. It sometimes happens that the penultimate step actually reaches x = g but can only prove that (g − x) has decreased, in which case the final step will observe x = g and terminate. We are simply observing that such behavior is permissible: when we observe the goal has already been reached, it is irrelevant how much progress the final (i.e., previous) step had made. The invariant and metric are major components of the proof. The last major component is the strategy for choosing a, which we have only alluded to thus far. We wish to set the highest acceleration that is guaranteed to remain within the velocity envelope.
To find this acceleration, recall the motion equations x(t) = x(k) + v(k)t + at²/2 and v(t) = v(k) + at, where v(k) and x(k) are the values of state variables v and x at the start of the current iteration of the game loop.
The most aggressive safe acceleration is the one which satisfies SB as an equality after the pessimal time interval T, so we set (v(k) + aT)²/(2C) = g − (x(k) + v(k)T + aT²/2) and solve for a. Wolfram Alpha gives two conjugate solutions (assuming T ≠ 0 and C ≠ 0, which both hold), each with denominator 2T; the latter is positive.
We take this second solution as the candidate acceleration a_cand. Recall that accelerations are required to fall within the range [−B, A] and that for simplicity we show the stronger condition a ∈ [−C, C]. The lower bound a_cand ≥ −C holds by construction: a_cand is the greatest acceleration which remains within the safe envelope, and by construction of the envelope an acceleration of −C always remains below the upper limit, so a_cand must be at least −C. The upper bound a_cand ≤ C does not hold in general: we computed a_cand as the acceleration required to reach the maximum velocity in this timestep, whereas reaching maximum velocity usually takes multiple timesteps. Thus the final acceleration (acc) is computed by bounding a_cand against the upper limit C: acc ≡ min(C, a_cand)
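The controller's acceleration choice can be sketched in Python. The closed form below is our own re-derivation of the quadratic described above (its positive root indeed has denominator 2T, matching the text); it is a sketch, not the paper's verified artifact:

```python
import math

# a_cand is the largest a such that safe braking holds with equality after a
# full timestep T:  (v + a*T)^2 / (2*C) = g - (x + v*T + a*T^2/2).
# Rearranged: a^2*T^2 + a*T*(2v + C*T) + v^2 - 2*C*(g - x - v*T) = 0,
# whose positive root is taken below; then acc = min(C, a_cand).
def acc(x, v, g, C, T):
    b = 2 * v + C * T
    disc = b * b - 4 * (v * v - 2 * C * (g - x - v * T))
    a_cand = (-b + math.sqrt(disc)) / (2 * T)
    return min(C, a_cand)

g, C, T = 100.0, 1.0, 1.0
a = acc(x=60.0, v=8.0, g=g, C=C, T=T)
xT = 60.0 + 8.0 * T + a * T * T / 2      # predicted position after T
vT = 8.0 + a * T                         # predicted velocity after T
print(a <= C, abs(vT**2 / (2 * C) - (g - xT)) < 1e-9 or a == C)  # True True
```

When the predicted state lands exactly on the braking curve, SB holds with equality after the step; otherwise the bound C is active.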

A.3 Natural Deduction Proof
We give a formal proof in the natural deduction calculus. We first give derived rules and lemmas used in the proof, and we split the deduction into small pieces for the sake of formatting.
Derived Rules The vacuity axiom for constant propositions (written p()) is not sound for games [42]: The following rule, however, is sound for games and can be derived from Lemma 11 and Lemma 7: The formula Q is arbitrary: as soon as Angel has any winning strategy, vacuity becomes sound. In practice, Q = tt is usually chosen.
As discussed in Section 5, we do not axiomatize first-order reasoning in this paper, but assume it has been implemented in the host logic. Thus we label first-order steps "FO" but do give arithmetic proofs in full axiomatic detail. To be precise, the following (non-effective!) rule is sound: Lemmas We use several arithmetic facts throughout the proof.

Lemma 18 (Safe Upper Bound).
Braking is safe when the upper velocity bound is satisfied.
Proof. By first-order arithmetic.
Lemma 19 (Acceleration in Bounds). Our control algorithm only proposes accelerations which are feasible.
Proof. The lower bound holds by construction: as discussed in the last section, acc ≥ −C when (g − x) ≥ SB. The upper bound holds trivially because acc is computed by bounding a cand to C.
Main Proof The main proof begins by applying the convergence rule *I, which is parameterized by a well-order (M, 0, <_M) and an invariant ϕ. The convergence metric is the remaining distance M = (g − x) and the zero element is 0. The ordering <_M places 0 below all other values and otherwise defines a <_M b ↔ a + ∆x ≤ b. The invariant ϕ says the variables' signs are preserved and that velocity remains within its envelope: Note we do not include the sign conditions on T, A, B, C in the invariant because they are constants. By GV, for any game α and constant proposition (written p()), we have p() → α p() whenever α ψ holds for some postcondition ψ. Loop convergence contains such a proof, thus vacuity can always be applied to the constants of a convergence proof in the inductive step.
We first dispatch the precondition and postcondition steps, which are purely arithmetic.
By construction, when x = v = 0 then LV(x) = UV(x) = 0 and the first two conjuncts are trivially satisfied. The latter two conjuncts follow directly from v = 0 and x = 0.
The LHS is always nonnegative, so the claim follows. We proceed to the proof of the loop body. We abbreviate the context by Γ. The first ?I step applies the lemma SB → acc ∈ [−B, A]. The second ?I step applies the lemma D_arith1. In the dsolve step, we write X(t) and V(t) for the solutions of x and v, where X(t) = x + vt + acc t²/2 and V(t) = v + acc t. The domain constraint assumption in the dsolve step has been simplified by monotonicity. The remainder of the proof is D_arith1 and D_arith2. To prove them, we first prove a lemma D_arith3 under the assumptions Γ, a = acc. To prove D_arith3, prove the upper bound V(T) ≤ UV(X(T)), then the lower bound LV(X(T)) ≤ V(T). The upper bound holds by construction, since acc is specifically chosen to remain below UV. To show the lower bound, consider two regions which partition the safe envelope. Let Region 1 be bounded by SLS, LS, and SRS, while Region 2 is bounded by SRS, RS, and LS. Note that in our strategy, case analysis on Region 1 vs. Region 2 is implicit in the comparisons min and max; we make this case analysis explicit in our proof for the sake of clarity. It suffices in Region 1 to show acc ≥ a_min and in Region 2 to show acc ≥ −C.
First consider an initial state (X(0), V(0)) ∈ Region 1. The acceleration can be determined by first computing the distance δX = X(T) − X(0) traversed in time T, not to be confused with the global minimum traversed distance ∆x. From the UL and UR points, correctness on the entire Region 1 follows by additional monotonicity and continuity arguments. Any point between UL and UR has acc ∈ [a_min, C] because the restriction of acc to this line is monotone. As we move toward the lower-left corner (LL), acc can only increase: decreasing V(0) or X(0) frees us to be less conservative. Thus acc ≥ a_min everywhere in Region 1, as desired.
From every point in Region 2, consider the simplistic braking rate a_simp which, if followed indefinitely, achieves x = g exactly when v = 0. The trajectory of a_simp is clearly within Region 2 for any initial point in Region 2. The value a_simp is always in [−C, −a_min] and so is not only physically achievable but also live. This concludes the proof of D_arith3.
For D_arith1 we prove, under the assumptions Γ, a = acc, that V(T)²/(2C) ≤ (g − X(T)). Since the LHS is trivially nonnegative, the RHS is nonnegative, i.e., X(T) ≤ g as desired.
We now prove D_arith2, i.e., under the assumptions Γ, a = acc, we prove each conjunct φ_i. Conjunct φ_1 is already proven by D_arith3. We prove φ_3 next in order to prove φ_2 as a corollary. We prove φ_4 last.
In each case except φ_4, it suffices to consider the case t = T by monotonicity. We prove ρ_3 by the hypothesis rule: assumption ρ_7 is the desired result. We prove ρ_2 next; we can prove it by the ODE solution, or even more directly by DI. From the domain constraint, v ≥ 0 is an invariant. Then ρ_2 says X(0) ≥ 0, and since x′ = v ≥ 0, by DI we have X(T) ≥ 0.
We prove ρ_4. To do so, we must test whether we are in the "final" iteration of the loop. We perform an inexact (formula-level) comparison of g − x against a_min T²/16 with tolerance a_min T²/16, so that we constructively have g − x ≤ a_min T²/8 ∨ g − x ≥ a_min T²/16. In the former case, since t ≥ T/2, we derive from the construction of a and the initial velocity envelope that V(0)²/(2a) = (g − x), so by definition of X(t) we have X(t) = g, which satisfies the right disjunct.
In the second case, not only does the test yield g − X(0) ≥ a_min T²/16, but the test t ≥ T/2 implies a stronger condition which, combined with safe braking, entails g − X(0) ≥ a_min T²/8. We then argue by monotonicity: the elapsed distance δX is minimized (i.e., δX = ∆x) when velocity and acceleration are minimized, that is, when a = a_min and x = 0.
In the worst case Demon chooses t = T/2, and Demon is responsible for simultaneously satisfying the domain constraint V(t) ≥ 0 and the test t ≥ T/2. Then by the Newton equations, (X(T) − X(0)) ≥ δX = a_min T²/8, which is exactly the definition of ∆x, as required.
This completes the last case of the arithmetic lemma, which in turn closes the final goal of the reach-avoid proof.
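The inexact formula-level comparison used in the case split above can be sketched in Python, assuming reals are given as approximation functions (representation and names are ours, for illustration only):

```python
from fractions import Fraction

# Inexact comparison on represented reals: given y with |y(n) - Y| <= 2^-n and
# a rational threshold c with tolerance eps > 0, constructively decide
# Y <= c + eps ("left") or Y >= c ("right"). Unlike an exact comparison, this
# is computable: approximate until the error is below eps/2.
def inexact_leq(y, c, eps):
    n = 0
    while Fraction(1, 2**n) >= eps / 2:
        n += 1
    q = y(n)
    # q is within eps/2 of Y, so whichever branch fires is a valid certificate
    return "left" if q <= c + eps / 2 else "right"

y = lambda n: Fraction(3)          # exact representation of Y = 3
print(inexact_leq(y, Fraction(3), Fraction(1, 8)))   # Y = c: either answer is valid
print(inexact_leq(y, Fraction(5), Fraction(1, 8)))   # left:  3 <= 5 + 1/8
print(inexact_leq(y, Fraction(1), Fraction(1, 8)))   # right: 3 >= 1
```

In the proof, the threshold and tolerance are both a_min T²/16, which yields exactly the disjunction g − x ≤ a_min T²/8 ∨ g − x ≥ a_min T²/16.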

B Theory Proofs
We prove the stated meta-theorems of CdGL such as monotonicity, soundness, the Existential Property, and the Disjunction Property.

B.1 Preliminaries and Assumptions
We first state preliminaries from the literature and assumptions.
Constructive ODEs. A difference between our soundness proof and that of dGL is that we draw on results of constructive analysis rather than classical analysis. The major results on which we rely have been proven in the literature, but we restate them here because the theorem statements are otherwise difficult to locate. The main catch in applying these results is that they are proven for time-derivatives, whereas our differentials are spatial. For this reason, we prove Lemma 26, equating time and space differentials within the context of an ODE, which justifies applying these existing results.
Theorem 20 (Constructive Picard-Lindelöf [15]). Picard-Lindelöf has been formalized in Coq; we restate it from CoRN. The functions and theorems referenced in our proof summary are also from the CoRN repository. Let τ_1 and τ_2 be metric spaces and let f_0 : τ_1 → τ_2 be uniformly continuous on some region X ⊆ τ_1. Consider the initial-value problem where f(0, y) = f_0(y) and (f)′(x, y) = v(x, y), and where v is Lipschitz on X.
Then there constructively exists a function f : τ_1 → τ_2 that solves the initial-value problem. Summary. The proof relies on the existence, for each v, of the well-known Picard operator picard_v : (τ_1 → τ_2) → (τ_1 → τ_2) and the fact that this operator is contractive. When iterated, the limit is the solution of the ODE (f)′(x, y) = v(x, y). The proof relies on the Banach fixed-point operator fp such that fp g g_0 is a fixed point of g, computed as the limit of the sequence g_{i+1} = g g_i starting from the given g_0. Specifically, define g_0(t)(y) = f(0, y).
1. By the Banach fixed-point theorem, fp picard_v g_0 is a fixed point such that picard_v (fp picard_v g_0) = (fp picard_v g_0).
2. fp picard_v g_0 is a solution of the IVP.
3. fp picard_v g_0 is constructive: its exact value is arbitrarily approximated by iterating the picard operator.
The lemma Feq-criterium supports equational DI: if (F)′ = (G)′ on I and there exists x ∈ I such that F x = G x, then F = G on I.
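As an illustration of the Picard operator from the summary, here is a small numeric sketch (our own finite approximation, not the CoRN formalization), applied to y′ = y, y(0) = 1, whose solution is eᵗ:

```python
# One application of the Picard operator: (picard_v f)(t) = y0 + ∫_0^t v(s, f(s)) ds,
# with the integral approximated by the trapezoidal rule on a uniform grid.
def picard(v, y0, f, t_max, steps=1000):
    h = t_max / steps
    ts = [i * h for i in range(steps + 1)]
    ys = [f(t) for t in ts]
    out = [y0]
    acc = y0
    for i in range(steps):
        acc += h * (v(ts[i], ys[i]) + v(ts[i + 1], ys[i + 1])) / 2
        out.append(acc)
    return lambda t: out[min(steps, round(t / h))]

v = lambda t, y: y               # the ODE y' = y
f = lambda t: 1.0                # initial guess g0(t) = y(0)
for _ in range(25):              # iterate the contractive operator
    f = picard(v, 1.0, f, 1.0)
print(round(f(1.0), 3))          # -> 2.718, approaching e
```

Each iteration adds one more term of the exponential series, mirroring how the Banach fixed point is approached as a limit of iterates.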
The cases for > and ≥, which are also proven in CoRN, are symmetric. Static Semantics. The proof calculus and soundness proofs rely on standard notions of free variables FV(e), bound variables BV(α), and must-bound variables MBV(α). A design decision must be made whether to characterize these functions implicitly (semantically) or define them explicitly (syntactically). For example, the semantic free variables of an expression are the smallest set of variables which determine its meaning, while the syntactic free variables are all those which appear in free position. The semantic free variables are never more than the syntactic free variables, but are sometimes a strict subset. For a game α, the syntactic bound variables BV(α) are those which are assigned on at least one execution path of α, while the syntactic must-bound variables MBV(α) are those which are assigned on every execution path of α.
Because we leave the language of terms f, g open (any well-typed term from the meta logic is permitted) we have no choice but to characterize free term variables implicitly or assume a correct syntactic computation exists. For formulas and games we do have a choice: the language of games is closed and easily admits a syntactic definition. In this work we use a closed formula language, but certainly one might wish to use an open formula language; it could be useful to use arbitrary type families τ : S → T as the postcondition of a game modality: α τ . This is easy for a semantic treatment of variables but not a syntactic one: any new connectives would need new syntactic variable computations. Yet, a syntactic characterization of free variables is required to show that our proof rules are effective.
The cases for systems are those from [43], while the duality cases α^d are homomorphic [45]. However, for comparison, we briefly discuss their semantic counterparts (note the different font) FV(e), BV(α), and MBV(α), based directly on the coincidence and bound-effect properties. For an expression e (term f, formula φ, or game α), the semantic free variables FV(e) are those which can influence the meaning of e. In these definitions, V = s V abbreviates the conjunction of x = s x over x ∈ V, where s x is a constant equal to the value of x in state s. We write S̄ for the complement of a set S. We now recall the syntactic definitions of free, bound, and must-bound variables, for the sake of contrast and of being self-contained.
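A minimal Python sketch of the syntactic computations BV and MBV over a toy game grammar may clarify the difference; the AST, helper names, and the treatment of term/formula free variables as given sets are ours, not the paper's:

```python
from dataclasses import dataclass

# Toy game fragment: assignment, test, choice, sequence, repetition, dual.
@dataclass
class Assign:  x: str; fv: frozenset          # x := f, with fv = FV(f) given
@dataclass
class Test:    fv: frozenset                  # ?phi, with fv = FV(phi) given
@dataclass
class Choice:  a: object; b: object           # alpha ∪ beta
@dataclass
class Seq:     a: object; b: object           # alpha; beta
@dataclass
class Star:    a: object                      # alpha*
@dataclass
class Dual:    a: object                      # alpha^d

def BV(g):
    """Variables assigned on at least one execution path."""
    if isinstance(g, Assign): return frozenset({g.x})
    if isinstance(g, Test):   return frozenset()
    if isinstance(g, (Choice, Seq)): return BV(g.a) | BV(g.b)
    if isinstance(g, (Star, Dual)):  return BV(g.a)

def MBV(g):
    """Variables assigned on every path; a repetition may run zero times."""
    if isinstance(g, Assign): return frozenset({g.x})
    if isinstance(g, Test):   return frozenset()
    if isinstance(g, Choice): return MBV(g.a) & MBV(g.b)
    if isinstance(g, Seq):    return MBV(g.a) | MBV(g.b)
    if isinstance(g, Star):   return frozenset()
    if isinstance(g, Dual):   return MBV(g.a)

g = Choice(Seq(Assign("x", frozenset()), Assign("v", frozenset())),
           Assign("x", frozenset()))
print(sorted(BV(g)), sorted(MBV(g)))  # ['v', 'x'] ['x']
```

Note how Choice intersects must-bound sets while Seq unions them, and how Star contributes nothing to MBV since the loop may run zero times.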
While we reuse existing definitions of FV(·), BV(·), and MBV(·), we necessarily offer new proofs of the coincidence and bound-effect properties: our semantics are entirely different from those of prior work. As discussed in the next paragraph, FV(f) is not defined here; rather, we assume there exists a function FV(f) which satisfies the coincidence lemma.
Term Language. Because our term language reuses terms of the host type theory, we must assume basic lemmas about the term language in order to prove the corresponding lemmas for formulas and games. The following lemmas should hold in any reasonable type theory. Coincidence, renaming, and substitution for state variables s are fundamental operations in any λ-calculus, and we simply require that these properties generalize to program variables x, which are simply projections of s. Justification. We are assuming that variables can be renamed in terms. This is only a modest generalization of the α-renaming rule for variables s.
Justification. Substitution for program variables is a modest generalization of substitution for state variables.
Notations and abbreviations. Some notations, which may not have been mentioned in the main paper, are useful for brevity in the proofs. It is sometimes useful to speak of ODE solutions as yielding an entire state rather than the value of one variable; thus we introduce an abbreviation for a state-valued solution.

B.2 Proofs of Stated Results
For semantic proofs about the inhabitance of types, we do not explicitly write out the proof terms which inhabit them, since the proof terms are obvious from our proofs by type rewriting. Sketch. First observe that the left-hand side is the directional derivative of g at x = sol t in direction x′ = f (set s x (sol t)), by the semantics of (g)′. By the assumptions (sol, s, d x = f) and t ∈ [0, d], the directional derivative and the time derivative dx/dt agree. Assumption (A3) is essential: differential variables y′ are not bound in Sol for y ≠ x, so g must not depend on them.
In practice, however, (A3) is not a limitation. Rather, before applying a rule which relies on this differential lemma, one would apply a step that locally transforms any additional game variables y into constants, which ensures their derivatives are 0 as intended, fulfilling the requirement of this lemma. Proof. In each case, assume (0) P s → Q s for all s : S. Then fix some such s : S, for which (0) trivially also holds. Then assume (1) α P s or [[α]] P s to show α Q s or [[α]] Q s accordingly. We annotate a step with subscript 0 when its justification is fact (0), and likewise for other facts.
The Angel and Demon cases are proven by simultaneous induction, of which we list the Angel cases first.
Case . P (set s x (sol t))) so that P (set s (x, x ) (sol d, f (set s x (sol d)))) and by (1) have Case α; β, have α; β P s = α ( β P ) s (2). Note by IH on β that (3) for all s, have ( β P s) → ( β Q s). Then (2) and (3) suffice to apply the IH on α, Then note for all t have P t → Q t so that (µτ : (S → T). λt : S (P t → τ t) where the step marked IH employs the IH from the simultaneous IH on Demonic games, which applies because α is structurally smaller than α d .
We give the Demon cases. Case Then for arbitrary d and sol and assume (sol, s, d x = f ) and (Πt : [0, d]. P (set s x (sol t))), so that P (set s (x, x ) (sol d, f (set s x (sol d)))) and by (0) have giving *(τ t → P t)) s →(ρτ : (S → T). λt : S ] Q s where the IH is from the simultaneous induction on Angelic games.
The static semantics results are stated informally in the main paper for the sake of brevity; we give full formal statements here with proof. The coincidence lemmas for formulas, Angelic games, and Demonic games are proven by simultaneous induction. We also include results for contexts, which are simply finite conjunctions of formulas, and we write Γ to mean the product of the φ for φ ∈ Γ. The same holds for the renaming and substitution lemmas.
Note that we state coincidence and bound effect for games differently from prior work [45] simply to avoid introducing some extra notations used in prior work.
Coincidence for contexts also holds: if s = t on FV(Γ) and Γ s is inhabited, then Γ t is inhabited. Coincidence also holds for the construct (sol, s, d x = f). Proof. The formula and context cases are proven by simultaneous induction with one another and with Lemma 29.
We note a simplification for the formula cases: in each case the proof of the conclusion begins by assuming (A2) that Γ t is inhabited, which by the context case of Lemma 28, and because V ⊇ FV(Γ), gives an inhabitant of (A4) Γ s. Then modus ponens on (A1) gives (A3) φ s in each case. In short, in every case we are entitled to consider simply formulas as in (A3) rather than sequents as in (A1).
Case f ∼ g: Because FV(f ∼ g) = FV(f) ∪ FV(g), by Lemma 23 we have f s = f t and g s = g t, from which we derive f ∼ g s = (f s ∼ g s) = (f t ∼ g t) = f ∼ g t.
We note a simplification: in each case the proof of the conclusion begins by assuming (A2) that Γ t is inhabited, which by the context case of Lemma 28, and because V ⊇ FV(Γ), gives an inhabitant of (A4) Γ s. Then modus ponens on (A1) gives (A3) α φ s or [[α]] φ s in each case. In short, in every case we are entitled to consider simply formulas as in (A3) rather than sequents as in (A1).
We give the Angel cases.
We give the Demon cases.
∈ FV(ρ) by syntactic constraints, thus the IH on ρ applies to give (1) (Πr : [0, d]. ρ (set s x (sol r))) = (Πr : [0, d]. ρ (set t x (sol r))). Then apply (1). Case α*: Note that FV(α) ∪ FV(φ) are the free variables of the fixed point (ρτ : (S → T). λt : S (τ t → [[α]] τ t) * (τ t → φ t)). Note s = t on FV(α) ∪ FV(φ) since FV(α*) = FV(α) and MBV(α*) = ∅. This suffices; formally the proof follows by coinduction on the proof that s belongs to the fixed point. Proof. By the same argument as in the coincidence lemma, sequent-style bound effect follows trivially from formula-style bound effect. We first give a uniform argument for the converse direction, then prove the forward direction. The forward direction performs an outer induction on the size of V, then an inner simultaneous induction on Angelic and Demonic games. Converse direction: by left projection, φ ∧ V = s V s implies φ s for all s. Then by Lemma 27, α φ ∧ V = s V s implies α φ s, and likewise for [[α]].
Forward direction: by induction on |V|, generalizing φ. The case |V| = 0 is trivial since α φ ∧ (V = s V) = α φ ∧ tt = α φ. In the case |V ∪ {x}| = k + 1, apply the IH to φ ∧ x = s x and the set V; then by transitivity it suffices to show that α φ ∧ x = s x s follows from α φ s. We proceed by inner induction on games, simultaneously for Angel and Demon. In each case we assume (A) α φ s and show α φ ∧ (x = s x) s, or likewise for [[α]].
We give the cases for Angel. Case α; β: Note (0) β φ ∧ (x = t x) t for all t by the IH on β. Note (1) that the truth value of (λr. t x = s x) is constant with respect to r, so it suffices to show β φ, which follows from (A).
which follows from the IH on α.
Case α ∪ β: Have either α φ s or β φ s. In each case, the IH applies since BV(α ∪ β) = BV(α) ∪ BV(β). In the first case, α φ ∧ (x = s x) s → α ∪ β φ ∧ (x = s x) s; the second case is symmetric, using the IH on β. Proof. Trivial induction, because we define e y x to be the transposition renaming, which renames x to y but also renames y to x. Proof of Lemma 32 and Lemma 33. The cases for formulas, contexts, solutions, and games are all proven by simultaneous induction. We give the cases for contexts first.
Case ·: have · y x = ·, and · s holds trivially for all s. Case Γ, ψ: Assume (Γ, ψ) s = Γ s * ψ s, so that by the IH on Γ have (0) π_L M y x : Γ y x (s y x), and by the IH on ψ have (1) π_R M y x : ψ y x (s y x). Then by conjunction of (0) and (1) have M y x : Γ, ψ y x (s y x) as desired. The formula and game cases employ the following simplification using the context case. Each case first assumes (A1) a sequent of shape Γ s → φ s, then exhibits a sequent of shape Γ y x (s y x) → φ y x (s y x), the first step of which is to assume (A2) Γ y x (s y x). From (A2), uniform renaming on contexts (Lemma 31) yields Γ s, then modus ponens on (A1) gives φ s. That is, the remaining cases are free to ignore the context Γ.
Also, we write z y x as shorthand for a variable which is z in the case z / ∈ {x, y}, or y when z = x, or x when z = y.
We give the formula cases.
) which can be proven by an inner induction. Likewise α * φ s = (µτ : (S → T). λt : S (φ t → τ t) *( α τ t → τ t)) s = (µτ : (S → T). λt : S (φ y x t y x → τ t y x ) *( α y x τ t → τ t)) y x s y x , by induction on the membership of s in the fixed point. This simplifies to α * y x φ y x s y x as desired.
We give the Demon cases.
From (A2) have (L1) x ≠ y and (L2) y ∉ FV(f). Note that this is a stronger admissibility condition than those for y := g and y := *. Unlike the former constructs, y is always a free variable of y′ = g & ψ, thus if we attempted to define a sufficient admissibility condition to support the case x = y, we would find it unsatisfiable; so we simply say x cannot be substituted into f in any ODE which binds x. By (L1) and (L2), have (S) (y := g) f x = y := (g f x). Also, from (L2), have (*) (f s) = (f (set s y (g f x s))) by Lemma 23, since s = (set s y (g f x s)) on the complement of {y}, which contains FV(f). Then have (0) (sol, s f x, d y = g) = (sol, s, d (y = g) f x) by the "solves" IH. Have (1) for all t ∈ [0, d], ψ (set (s f x) y (sol t)) = ψ f x (set s y (sol t)) by the IH on ψ, and because set (s f x) y (sol t) = set (set s x (f s)) y (sol t) = set (set s y (sol t)) x (f s) = set (set s y (sol t)) x (f (set s y (sol t))), since by (L2) y ∉ FV(f), thus s = (set s y (sol t)) on FV(f) and Lemma 23 applies.
by the IH on φ and because set (s f x ) (y, y ) (sol d, g (set (s f x ) y (sol d))) =set (set s (y, y ) (sol d, g (set (s f x ) y (sol d)))) x (f (set s y (sol d))) by the same argument as above and because g (set (s f x ) y (sol d)) = (g f x ) (set s y (sol d)) by term IH.
where s f x is shorthand for set s x (f s).
Proof of Lemma 34 and Lemma 35. In the formula cases, we assume (A0) Γ s M : φ s, (A1) admissibility of Γ f x, and (A2) admissibility of φ f x; likewise for contexts, games, and the predicate (sol, s, d x = f). We note that in each case, the admissibility conditions of the IH hold from (A1) and (A2) by unpacking the inductive definition of admissibility. As usual, we also ignore the contexts in the formula and game cases, since the cases with contexts follow easily from those without, combined with IHs on the contexts.
We give the context cases first.
Case Γ = ·: trivially · s f x. Case Γ, ψ: From (A0) have Γ s f x and ψ s f x; then by the IHs have Γ f x s and ψ f x s, giving (Γ, ψ) f x s as desired. We now give the formula cases.
∈ FV(f). Note that this is a stronger admissibility condition than those for y := f and y := *. Unlike the former constructs, y is always a free variable of y′ = g & ψ, thus if we attempted to define a sufficient admissibility condition to support the case x = y, we would find it unsatisfiable; so we simply say x cannot be substituted into f in any ODE which binds x. By (L1) and (L2), have (S) (y := g) f x = y := (g f x). Have (1) ψ (set (s f x) y (sol t)) = ψ f x (set s y (sol t)) by the IH on ψ, and because set (s f x) y (sol t) = set (set s x (f s)) y (sol t) = set (set s y (sol t)) x (f s) = set (set s y (sol t)) x (f (set s y (sol t))), since by (L2) y ∉ FV(f), thus s = (set s y (sol t)) on FV(f) and Lemma 23 applies.
Have (2) φ (set (s f x) (y, y′) (sol d, g (set (s f x) y (sol d)))) = φ f x (set s (y, y′) (sol d, g f x (set s y (sol d)))) by the IH on φ, by the same argument as above, and by the term case because (set (s f x) y (sol d)) = (set s y (sol d)) f x. In the base case, ψ s f x holds. Then the main proof of the case proceeds: α* φ s f x = (µτ : (S → T). λt : S (ψ t → τ t) * (α τ t → τ t)) (s f x) = (µτ : (S → T). λt : S (ψ f x t → τ t) * (α f x τ t → τ t)) s as desired. We give the Demon cases.
Have (2) φ (set (s f x) (y, y′) (sol d, g (set (s f x) y (sol d)))) = φ f x (set s (y, y′) (sol d, g f x (set s y (sol d)))) by the IH on φ, because set (s f x) y (sol d) = set (set s y (sol d)) x (f (set s y (sol d))) by the same argument as above, and by the term case because (set (s f x) y (sol d)) = (set s y (sol d)) f x.
Theorem 36 (Soundness of Proof Calculus). If Γ φ is provable then Γ φ is valid. As a corollary, when Γ = ·, if φ is provable, then φ is valid.
Proof. Each case proceeds by fixing s : S and assuming (A) Γ s. In cases where premisses include Γ, we assume that modus ponens has been applied to all premisses with (A). Additional antecedents beyond Γ will be explicitly discharged in each case.
We have ∆_1 t from (A) by repeated application of Lemma 28, because for each ψ ∈ ∆_1 we have s = t on FV(ψ) as a consequence of t V = s V, specifically because V ⊇ z ⊇ FV(ψ).
Consider ∆_2 next. Recall that ∆_2 y BV(α) is the conjunction of ψ y x for ψ ∈ ∆_2. For each such ψ we appeal to s y = t y (since y ⊆ V) and s y = s x by ghosting, thus t y = s x by transitivity.
To show the desired ψ y x t, we note that by Lemma 32 it suffices to show ψ t y x, which, because x ∈ FV(ψ) and y ∉ FV(ψ), reduces to ψ (set t x (s y)). Since (AA) includes ψ (set s y x), it suffices to apply Lemma 28, providing the assumption that on FV(ψ) ⊆ {z} ∪ BV(ψ) = {x, z} we have (set s y (s x)) = (set t x (s y)). We already have s = t on y, z. Then s y is set to s x, but s x = s y anyway, a no-op; t x is set to s y = s x, so that (set s y (s x)) and (set t x (s y)) agree on {x, y, z} as desired.
In practice it is usually also the unique solution, but that is not strictly required by this rule. Proof. (φ ∨ ψ s) = φ s + ψ s, so by inversion on M there exists L : φ s or R : ψ s, as desired.