A Complete Uniform Substitution Calculus for Differential Dynamic Logic

This article introduces a relatively complete proof calculus for differential dynamic logic (dL) that is entirely based on uniform substitution, a proof rule that substitutes a formula for a predicate symbol everywhere. Uniform substitutions make it possible to use axioms instead of axiom schemata, thereby substantially simplifying implementations. Instead of subtle schema variables and soundness-critical side conditions on the occurrence patterns of logical variables to restrict infinitely many axiom schema instances to sound ones, the resulting calculus adopts only a finite number of ordinary dL formulas as axioms, which uniform substitutions instantiate soundly. The static semantics of differential dynamic logic and the soundness-critical restrictions it imposes on proof steps are captured exclusively in uniform substitutions and variable renamings, as opposed to being spread in delicate ways across the prover implementation. In addition to sound uniform substitutions, this article introduces differential forms for differential dynamic logic that make it possible to internalize differential invariants, differential substitutions, and derivatives as first-class axioms to reason about differential equations axiomatically. The resulting axiomatization of differential dynamic logic is proved to be sound and relatively complete.


Introduction
Differential dynamic logic (dL) [11,13] is a logic for proving correctness properties of hybrid systems. It has a sound and complete proof calculus relative to differential equations [11,13] and a sound and complete proof calculus relative to discrete systems [13]. Both sequent calculi [11] and Hilbert-type axiomatizations [13] have been presented for dL, but only the former has been implemented. The implementation of dL's sequent calculus in KeYmaera [19] makes it straightforward for users to prove properties of hybrid systems, because it provides proof rules that perform natural decompositions for each operator. The downside is that the implementation of the rule schemata requires soundness-critical side conditions on the occurrence patterns of variables, which end up spread across the prover implementation.

Number literals such as 0, 1 are allowed as function symbols without arguments that are interpreted as the numbers they denote. Occasionally, constructions will be simplified by considering θ + η and θ · η as special cases of function symbols f(θ, η), but + and · always denote addition and multiplication. Differential-form dL allows differentials (θ)′ of terms θ as terms for the purpose of axiomatically internalizing reasoning about differential equations. The differential (θ)′ describes how the value of θ changes locally, depending on how the values of its variables change, i.e. as a function of the values of the corresponding differential symbols. Differentials will make it possible to reduce reasoning about differential equations to reasoning about equations of differentials, which, quite unlike differential equations, have a local semantics in isolated states and are, thus, amenable to an axiomatic treatment.
Formulas and hybrid programs (HPs) of dL are defined by simultaneous induction, because formulas can occur in programs and programs can occur in formulas. Similar simultaneous inductions are, thus, used throughout the proofs in this article.
Definition 2 (dL formula). The formulas of (differential-form) differential dynamic logic (dL) are defined by the grammar (with dL formulas φ, ψ, terms θ, η, θ₁, …, θ_k, predicate symbol p, quantifier symbol C, variable x, HP α):

φ, ψ ::= θ ≥ η | p(θ₁, …, θ_k) | C(φ) | ¬φ | φ ∧ ψ | ∀x φ | ∃x φ | [α]φ | ⟨α⟩φ

Operators >, ≤, <, ∨, →, ↔ are definable, e.g., φ → ψ as ¬(φ ∧ ¬ψ). Also [α]φ is equivalent to ¬⟨α⟩¬φ and ∀x φ is equivalent to ¬∃x ¬φ. The modal formula [α]φ expresses that φ holds after all runs of α, while the dual ⟨α⟩φ expresses that φ holds after some run of α. Quantifier symbols C (with formula φ as argument), i.e. higher-order predicate symbols that bind all variables of φ, are unnecessary but included for convenience, since they internalize contextual congruence reasoning efficiently with uniform substitutions. The concrete quantifier chain in ∀x ∃y φ evaluates the formula φ at multiple x and y values to determine whether the whole formula is true. Similarly, an abstract quantifier symbol C can evaluate its formula argument φ for different variable values to determine whether C(φ) is true. Whether C(φ) is true, and where exactly C evaluates its argument φ to find out, depends on the interpretation of C.
Definition 3 (Hybrid program). Hybrid programs (HPs) are defined by the following grammar (with α, β as HPs, program constant a, variable x, term θ possibly containing x, and with dL formula ψ):

α, β ::= a | x := θ | ?ψ | x′ = θ & ψ | α ∪ β | α; β | α*

Assignments x := θ of θ to variable x, tests ?ψ of the formula ψ in the current state, differential equations x′ = θ & ψ restricted to the evolution domain ψ, nondeterministic choices α ∪ β, sequential compositions α; β, and nondeterministic repetition α* are as usual in dL [11,13]. The assignment x := θ instantaneously changes the value of x to that of θ. The test ?ψ checks whether ψ is true in the current state and discards the program execution otherwise. The continuous evolution x′ = θ & ψ will follow the differential equation x′ = θ for any nondeterministic amount of time, but cannot leave the region where the evolution domain constraint ψ holds. For example, x′ = v, v′ = a & v ≥ 0 follows the differential equation in which position x changes with time-derivative v while the velocity v changes with time-derivative a, for any arbitrary amount of time, but without ever allowing a negative velocity v (which would, otherwise, ultimately happen for negative accelerations a < 0). Usually, the value of differential symbol x′ is unrelated to the value of variable x. But along a differential equation x′ = θ, the differential symbol x′ has the value of the time-derivative of the value of x (and is, furthermore, equal to θ). Differential equations x′ = θ & ψ have to be in explicit form, so y′ and (η)′ cannot occur in θ, and x ∈ V is a variable. The nondeterministic choice α ∪ β executes either subprogram α or β, nondeterministically. The sequential composition α; β first executes α and then, upon completion of α, runs β. The nondeterministic repetition α* repeats α any number of times, nondeterministically.
The effect of an assignment x′ := θ to differential symbol x′ ∈ V′, called a differential assignment, is like the effect of an assignment x := θ to variable x, except that it changes the value of the differential symbol x′ instead of the value of x. It is not to be confused with the differential equation x′ = θ, which follows said differential equation continuously for an arbitrary amount of time. The differential assignment x′ := θ, instead, only assigns the value of θ to the differential symbol x′ discretely, once, at an instant of time. Program constants a are uninterpreted, i.e. their behavior depends on the interpretation in the same way that the values of function symbols f, predicate symbols p, and quantifier symbols C depend on their interpretation.
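To make the program constructs concrete, the discrete fragment of HPs can be given an executable reachability semantics. This is a sketch, not from the paper: differential equations are omitted, states are plain dictionaries, and the class names and the `fuel` bound for unrolling α* are assumptions of this illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable

State = dict  # maps variable names to real values

@dataclass(frozen=True)
class Assign:           # x := theta
    x: str
    theta: Callable[[State], float]

@dataclass(frozen=True)
class Test:             # ?psi
    psi: Callable[[State], bool]

@dataclass(frozen=True)
class Choice:           # alpha ∪ beta
    a: Any
    b: Any

@dataclass(frozen=True)
class Seq:              # alpha ; beta
    a: Any
    b: Any

@dataclass(frozen=True)
class Star:             # alpha*
    a: Any

def run(alpha, nu, fuel=5):
    """All final states reachable from nu; alpha* is unrolled at most `fuel` times."""
    if isinstance(alpha, Assign):
        omega = dict(nu)
        omega[alpha.x] = alpha.theta(nu)      # instantaneous change of x
        return [omega]
    if isinstance(alpha, Test):
        return [nu] if alpha.psi(nu) else []  # discard the run if the test fails
    if isinstance(alpha, Choice):
        return run(alpha.a, nu, fuel) + run(alpha.b, nu, fuel)
    if isinstance(alpha, Seq):
        return [w for m in run(alpha.a, nu, fuel) for w in run(alpha.b, m, fuel)]
    if isinstance(alpha, Star):
        reached, frontier = [nu], [nu]        # zero iterations reach nu itself
        for _ in range(fuel):
            frontier = [w for m in frontier for w in run(alpha.a, m, fuel)]
            reached += frontier
        return reached
    raise TypeError(f"not a hybrid program: {alpha!r}")
```

For example, `run(Choice(Assign('x', lambda s: 1.0), Assign('y', lambda s: 2.0)), nu)` returns one final state per branch of the nondeterministic choice, mirroring the relational semantics I[[α ∪ β]] = I[[α]] ∪ I[[β]].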
The dL formula

v ≥ 0 ∧ b > 0 → [((a := −b ∪ a := 5); x′ = v, v′ = a & v ≥ 0)*] v ≥ 0    (1)

expresses that a car starting with nonnegative velocity v ≥ 0 and braking constant b > 0 will always have nonnegative velocity when following an HP that repeatedly provides a nondeterministic control choice between setting the acceleration a to braking (a := −b) or to a positive constant (a := 5) before following the differential equation system x′ = v, v′ = a restricted to the evolution domain constraint v ≥ 0 for any amount of time. The formula in (1) is true, because the car never moves backward in the HP. But similar questions quickly become challenging, e.g., about safe distances to other cars or for models with more detailed physical dynamics.
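The truth of (1) can be illustrated, though of course not proved, by a numeric simulation. This is a sketch under assumptions: explicit Euler steps stand in for the continuous evolution, the step size, round counts, and function name are invented, and the evolution domain v ≥ 0 is enforced by stopping the ODE before v would turn negative.

```python
import random

def simulate(x=0.0, v=10.0, b=2.0, rounds=50, dt=0.01, steps=100, seed=0):
    """Run the control loop of formula (1) with random control choices."""
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.choice([-b, 5.0])       # a := -b  ∪  a := 5
        for _ in range(steps):          # x' = v, v' = a  &  v >= 0
            if v + a * dt < 0.0:        # next step would leave the domain v >= 0
                break
            x, v = x + v * dt, v + a * dt
        assert v >= 0.0                 # the postcondition of (1) holds each round
    return v

simulate()
```

Every execution path keeps v ≥ 0 because the evolution domain constraint stops the braking dynamics before the velocity crosses zero, exactly the informal argument for why (1) is true.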

Dynamic Semantics
The (denotational) dynamic semantics of dL defines, depending on the values of the symbols, what real value terms evaluate to, what truth-values formulas have, and from what initial states which final states are reachable by running its HPs. Since the values of variables and differential symbols can change over time, they receive their value from the state. A state is a mapping ν : V → R from the variables V, including the differential symbols V′ ⊆ V, to R. The set of states is denoted S. The set X∁ = S \ X is the complement of a set X ⊆ S. Let ν_x^r denote the state that agrees with state ν except for the value of variable x, which is changed to r ∈ R. The interpretation of a function symbol f with arity n (i.e. with n arguments) in interpretation I is a smooth (i.e. with derivatives of any order) function I(f) : Rⁿ → R of n arguments. The set of interpretations is denoted I. The semantics of a term θ is a mapping [[θ]] : I → (S → R) from interpretations to functions from states to real numbers.

Definition 4 (Semantics of terms).
Time-derivatives are undefined in an isolated state ν. The clou is that differentials can still be given a local semantics in a single state: Iν[[(θ)′]] is the sum of all (analytic) spatial partial derivatives at ν of the value of θ by all variables x, multiplied by the corresponding direction described by the value ν(x′) of differential symbol x′. That sum over all variables x ∈ V is finite, because θ only mentions finitely many variables x, and the partial derivative by variables x that do not occur in θ is 0. As usual, ∂g/∂x (ν) is the partial derivative of function g at point ν by variable x, which is sometimes also just denoted ∂g(ν)/∂x. So, the partial derivative ∂Iν[[θ]]/∂x is the derivative of the one-dimensional function X ↦ Iν_x^X[[θ]] at X = ν(x). The spatial partial derivatives exist since Iν[[θ]] is a composition of smooth functions, so is itself smooth. Thus, the semantics Iν[[(θ)′]] is the differential of (the value of) θ, hence a differential one-form giving a real value for each tangent vector (i.e. point of a vector field) described by the values ν(x′). The values ν(x′) of the differential symbols x′ select the direction in which x changes, locally. The partial derivatives of Iν[[θ]] by x describe how the value of θ changes with a change of x. Along the solution of (the vector field corresponding to) a differential equation, the value of the differential (θ)′ coincides with the analytic time-derivative of θ (Lemma 12).
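The sum-of-partials reading of Iν[[(θ)′]] can be checked numerically. This is a sketch, not the paper's formalization: central differences stand in for the analytic spatial partial derivatives, states are plain dictionaries mapping names like `x` and `x'` to reals, and the function name is an assumption.

```python
def differential(theta, nu, h=1e-6):
    """I nu [[(theta)']] = sum over x of (d theta / d x)(nu) * nu(x')."""
    total = 0.0
    for x in [k for k in nu if not k.endswith("'")]:
        up, dn = dict(nu), dict(nu)
        up[x] += h
        dn[x] -= h
        partial = (theta(up) - theta(dn)) / (2 * h)   # spatial partial derivative
        total += partial * nu.get(x + "'", 0.0)       # times the direction nu(x')
    return total

# (x*y)' should equal x'*y + x*y', the Leibniz rule (cf. Lemma 14):
nu = {"x": 3.0, "y": 5.0, "x'": 2.0, "y'": -1.0}
lhs = differential(lambda s: s["x"] * s["y"], nu)
rhs = nu["x'"] * nu["y"] + nu["x"] * nu["y'"]
assert abs(lhs - rhs) < 1e-4
```

Note that the differential is perfectly well-defined in the isolated state `nu`: no curve or time axis is needed, only the values ν(x) and the directions ν(x′).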
The semantics of a formula φ is a mapping [[φ]] : I → ℘(S) from interpretations to the set of all states in which it is true, where ℘(S) is the powerset of S. The semantics of an HP α is a mapping [[α]] : I → ℘(S × S) from interpretations to a (reachability) relation on states. The set of states I[[φ]] ⊆ S in which formula φ is true and the transition relation I[[α]] ⊆ S × S of HP α are defined by simultaneous induction, since their syntax is simultaneously inductive. The interpretation of a predicate symbol p with arity n is an n-ary relation I(p) ⊆ Rⁿ. The interpretation of a quantifier symbol C is a functional I(C) : ℘(S) → ℘(S) mapping subsets M ⊆ S of states where its argument is true to subsets I(C)(M) ⊆ S of states where C applied to that argument is then true. (In coordinates, the differential of θ is the one-form ∑ᵢ (∂θ/∂xᵢ) dxᵢ when x₁, …, xₙ are the variables in θ; their differentials dxᵢ form the basis of the cotangent space, and the one-form, evaluated at a point ν whose values ν(xᵢ′) determine the tangent vector alias vector field, yields a real value.)
The relation composition operator ◦ in Case 8 also applies to sets, which are unary relations. The interpretation of a program constant a is a state-transition relation I(a) ⊆ S × S, where (ν, ω) ∈ I(a) iff HP a can run from initial state ν to final state ω. Since ν and ϕ(0) are only assumed to agree on the complement {x′}∁ of the set {x′}, the initial values ν(x′) of differential symbols x′ do not influence whether (ν, ω) ∈ I[[x′ = θ & ψ]], because they may not be compatible with the time-derivatives for the differential equation, e.g. in x′ := 1; x′ = 2 with a discontinuity in x′. The final values ω(x′) after x′ = θ & ψ will coincide with the derivatives at the final state, though, even for evolutions of duration zero.

Static Semantics
The dynamic semantics gives a precise meaning to dL formulas and HPs but is inaccessible for effective reasoning purposes. By contrast, the static semantics of dL and HPs defines only simple computable aspects of the dynamics that can be read off directly from their syntactic structure without running their programs or evaluating their dynamical effects. The correctness of uniform substitutions depends only on the static semantics that defines which variables are free (so can be read) and which variables are bound (so can change their value).
Bound variables x are those that are bound by ∀x or ∃x, but also those that are bound by modalities such as [x := 5y] or ⟨x′ = 1⟩ or [x := 1 ∪ x′ = 1] or [x := 1 ∪ ?true], because of the assignment to x or differential equation for x that they contain. The scope of the bound variable x is limited to the quantified formula or to the postcondition and remaining program of a modality. Only bound variables can change their value while evaluating a formula or executing an HP (Lemma 1). Only free variables are read while evaluating a formula, so the semantics of a formula depends only on the values of its free variables (Lemma 4). Together, these provide uniform substitutions with all they need to know to determine which changes during substitutions will go unnoticed (variables that are not free, so changes to their values have no impact) and what state change an expression may cause itself (variables that are not bound cannot change their value during the evaluation of that expression). Whether a particular uniform substitution preserves truth in a proof depends on the interaction of the free and bound variables.
The static semantics is easily read off from the dynamic semantics and provides the sole input that uniform substitutions depend on, which, in turn, are the only part of the calculus where the static semantics of the language is relevant.
Definition 7 (Bound variable). The set BV(φ) ⊆ V of bound variables of dL formula φ is defined inductively, as is the set BV(α) ⊆ V of bound variables of HP α, i.e. all those that may potentially be written to. Both x and x′ are bound by a differential equation x′ = θ & ψ, because both change their value. The set of all variables V is the only option for program constants a and quantifier symbols C, since, depending on their interpretation, both might change the value of any x ∈ V.
The free variables of a quantified formula are defined by removing its bound variable, as FV(∀x φ) = FV(φ) \ {x}, since all occurrences of x in φ are bound by ∀x. The bound variables of a program in a modality act in a somewhat similar way, except that the program itself may read variables during the computation, so its free variables need to be taken into account. By analogy to the quantifier case, it is often suspected that FV([α]φ) could be defined as FV(α) ∪ (FV(φ) \ BV(α)). But that would be unsound, because [x := 1 ∪ y := 2] x ≥ 1 would then have no free variables, contradicting the fact that its truth-value depends on the initial value of x. The reason is that x is a bound variable of that program, but only written to on some, not on all, paths. So the initial value of x may be needed to evaluate the truth of the postcondition x ≥ 1 on some execution paths. If a variable is must-bound, so written to on all paths of the program, however, it can safely be removed from the free variables of the postcondition. So, the static semantics defines the subset of variables that are must-bound (MBV(α)), i.e. must be written to on all execution paths of α. This complication does not arise for ordinary quantifiers or strictly nested languages like pure λ-calculi.
Definition 8 (Must-bound variable). The set MBV(α) ⊆ BV(α) ⊆ V of must-bound variables of HP α, i.e. all those that must be written to on all paths of α, is defined inductively, with MBV(α) = BV(α) for atomic HPs α except program constants. Finally, the static semantics also defines which variables are free, so may be read. The definition of free variables is simultaneously inductive for formulas (FV(φ)) and programs (FV(α)), owing to their mutually recursive syntactic structure.
Definition 9 (Free variable). The set FV(θ) ⊆ V of free variables of term θ, i.e. those that occur in θ directly or indirectly, is defined inductively (where f can also be + or ·), with FV((θ)′) = FV(θ) ∪ FV(θ)′. The set FV(φ) of free variables of dL formula φ, i.e. all those that occur in φ outside the scope of quantifiers or modalities binding them, is defined inductively (where p can also be ≥). The set FV(α) ⊆ V of free variables of HP α, i.e. all those that may potentially be read, is defined inductively as well. The variables of dL formula φ, whether free or bound, are V(φ) = FV(φ) ∪ BV(φ). The variables of HP α, whether free or bound, are V(α) = FV(α) ∪ BV(α).
Soundness requires FV((θ)′) to be the union of FV(θ) and its differential closure FV(θ)′, i.e. all differential symbols x′ corresponding to the variables x in FV(θ), because the value of (xy)′ depends on {x, x′, y, y′}, so on the current values and the differential symbol values. Indeed, (xy)′ will turn out to equal x′y + xy′ (Lemma 14), which has the same set of free variables {x, x′, y, y′} for more obvious reasons. Both x and x′ are bound in x′ = θ & ψ since both change their value, but only x is added to the free variables, because the behavior only depends on the initial value of x, not on that of x′. All variables V are free and bound variables of program constants a, because their effect depends on the interpretation I, so they may read and write any variable, FV(a) = BV(a) = V, but possibly not on all paths, so MBV(a) = ∅. For example, only {v, b, x} are the free variables of the formula in (1), while {a, x, x′, v, v′} are its bound variables as well as the must-bound variables of its program. Note that a is not a free variable of (1), because a is never actually read, since a must have been defined on every execution path before being read anywhere. No execution of the program in (1), thus, depends on the initial value of a, which is why it is not a free variable, since a is not free after the loop or in the postcondition. This would have been different for the less precise definition FV(α; β) = FV(α) ∪ FV(β).
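The definitions of BV, MBV, and FV can be implemented directly for a discrete HP fragment. This is a sketch under simplifying assumptions: terms and tests are encoded by their free-variable sets, differential equations are omitted, and the tuple encoding and function names are invented.

```python
def BV(p):                      # bound variables: may be written somewhere
    op = p[0]
    if op == ':=': return {p[1]}
    if op == '?':  return set()
    if op in ('u', ';'): return BV(p[1]) | BV(p[2])
    if op == '*':  return BV(p[1])

def MBV(p):                     # must-bound: written on every execution path
    op = p[0]
    if op == ':=': return {p[1]}
    if op == '?':  return set()
    if op == 'u':  return MBV(p[1]) & MBV(p[2])  # only what both branches write
    if op == ';':  return MBV(p[1]) | MBV(p[2])
    if op == '*':  return set()                  # zero iterations write nothing

def FV(p):                      # free variables: may be read
    op = p[0]
    if op == ':=': return p[2]                   # free variables of the term
    if op == '?':  return p[1]                   # free variables of the test
    if op == 'u':  return FV(p[1]) | FV(p[2])
    if op == ';':  return FV(p[1]) | (FV(p[2]) - MBV(p[1]))
    if op == '*':  return FV(p[1])

def FV_box(p, fv_post):         # FV([p] post): only must-bound vars are removed
    return FV(p) | (fv_post - MBV(p))

# x := 1 ∪ y := 2 binds both x and y somewhere, but must-binds neither:
alpha = ('u', (':=', 'x', set()), (':=', 'y', set()))
assert BV(alpha) == {'x', 'y'} and MBV(alpha) == set()
assert FV_box(alpha, {'x'}) == {'x'}   # x stays free in [x:=1 ∪ y:=2] x >= 1
# after x := 1 first, x is must-bound, so no longer free:
beta = (';', (':=', 'x', set()), alpha)
assert FV_box(beta, {'x'}) == set()
```

The two assertions reproduce the discussion above: using BV instead of MBV in FV_box would wrongly report [x := 1 ∪ y := 2] x ≥ 1 as closed, while MBV keeps x free because only some paths write it.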
The signature, i.e. set of function, predicate, quantifier symbols, and program constants in φ is denoted by Σ(φ ) (accordingly for terms θ and programs α). The dynamic semantics interprets variables in V by the state ν. The symbols in Σ(φ ) are interpreted by the interpretation I.

Correctness of Static Semantics
Since uniform substitutions depend on the static semantics, soundness of uniform substitutions requires the static semantics to be correct, i.e. to soundly capture how the dynamic semantics reads and writes variables. Correctness of the static semantics is easy to prove by straightforward structural induction, with some attention for the differential cases. The first property that uniform substitutions depend on is that HPs have bounded effect: only bound variables of HP α are modified during runs of α. Of course [20], α may bind a variable x in ways that never actually change the value of x, such as x := x, or because some differential equation can never be executed.

Lemma 1 (Bound effect). If (ν, ω) ∈ I[[α]], then ν = ω on BV(α)∁.

Proof. The proof is by a straightforward structural induction on α.
The value of a term only depends on the values of its free variables. When evaluating a term θ in two different states ν, ν̃ that agree on its free variables FV(θ), the values of θ in both states coincide. Accordingly, the value of a term agrees for different interpretations I, J that agree on the symbols Σ(θ) that occur in θ.

Lemma 2 (Coincidence for terms). If ν = ν̃ on FV(θ) and I = J on Σ(θ), then Iν[[θ]] = Jν̃[[θ]].

Proof. The proof is by structural induction on θ. The case f(θ₁, …, θ_k) follows by the induction hypothesis, since I and J were assumed to agree on the function symbol f that occurs in the term f(θ₁, …, θ_k); this includes the case where f is + or ·, on which I and J agree by definition. The case (θ)′ follows as ν = ν̃ on FV((θ)′), which includes all differential symbols x′ for x ∈ FV(θ) (the others have partial derivative 0, so do not contribute to the sum), and by the induction hypothesis on the simpler term θ, because FV(θ) ⊆ FV((θ)′). For the partial derivatives, x is interpreted as X in both states and ν = ν̃ on FV(θ) already.
Corollary 3 (Semantics of differentials). The semantics of a differential is a sum over the free variables:

Iν[[(θ)′]] = ∑_{x ∈ FV(θ)} ν(x′) · ∂Iν[[θ]]/∂x

By a more subtle argument, the values of dL formulas also only depend on the values of their free variables. When evaluating dL formula φ in two different states ν, ν̃ that agree on its free variables FV(φ), the truth-values of φ in both states coincide. Lemmas 4 and 5 are proved by a straightforward simultaneous induction, reflecting the simultaneously inductive definitions of formulas and programs.
Lemma 4 (Coincidence for formulas). If ν = ν̃ on FV(φ) and I = J on Σ(φ), then ν ∈ I[[φ]] iff ν̃ ∈ J[[φ]].

The runs of an HP α also only depend on the values of its free variables, because its behavior cannot depend on the values of variables that it never reads. If ν = ν̃ on FV(α) and (ν, ω) ∈ I[[α]], then there is an ω̃ such that (ν̃, ω̃) ∈ J[[α]] and ω and ω̃ agree mostly. There is a subtlety, though. The resulting states ω and ω̃ will only continue to agree on FV(α) and the variables that are bound on the particular path that α ran for the transition (ν, ω) ∈ I[[α]]. States ω and ω̃ may disagree on variables z that are neither free (so the initial states ν and ν̃ have not been assumed to coincide on them) nor bound on the particular path that α took, because z has not been written to.
Example 2 (Bound variables may not agree after an HP). Let (ν, ω) ∈ I[[α]]. It is not enough to assume ν = ν̃ only on FV(α) in order to guarantee ω = ω̃ on V(α) for some ω̃ such that (ν̃, ω̃) ∈ I[[α]]. Yet, the respective resulting states ω and ω̃ still do agree on the must-bound variables that are bound on all paths of α, rather than just somewhere in α. If initial states agree on (at least) all free variables FV(α) that HP α may read, then the final states continue to agree on those (even if overwritten since) as well as on all variables that α must write on all paths, i.e. MBV(α). It is crucial for the inductive proof of Lemma 5, but also for its use in the proof of Lemma 4, that the resulting states continue to agree on any superset V ⊇ FV(α) of the free variables that the initial states agreed on. It is of similar significance that the resulting states agree on MBV(α) whether or not the initial states agreed on those, because, e.g., free occurrences in φ of must-bound variables of α will not be free in [α]φ, so the initial states will not have been assumed to agree on them initially. The respective pairs of initial and final states of a run of HP α already agree on the complement BV(α)∁ by Lemma 1.

Proof. The proof is by simultaneous induction with Lemma 4, on the structural complexity of α, where α* is considered to be structurally more complex than HPs of any length but with fewer nested repetitions, which induces a well-founded order on HPs. For atomic programs α, for which BV(α) = MBV(α), it is enough to show agreement on V(α) = FV(α) ∪ BV(α) = FV(α) ∪ MBV(α) in the remainder of the proof, because any variable in V \ V(α) is in BV(α)∁, which remains unchanged by α according to Lemma 1. 1. Since FV(a) = V, so ν = ν̃, the statement is vacuously true for program constant a.
In particular, the final states ω and ω̃ agree on V(α) if the initial states ν and ν̃ agree on V(α), and even if the initial states only agree on V(α) \ MBV(α).
This concludes the static semantics of dL, which characterizes syntactically what kind of state change formulas φ and HPs α may cause (captured in BV(φ ), BV(α)) and what part of the state their values and behavior depends on (FV(φ ), FV(α)).

Uniform Substitutions
The uniform substitution rule US1 from first-order logic [2, §35,40] substitutes all occurrences of predicate p(·) by a formula ψ(·), i.e. it replaces all occurrences of p(θ), for any (vectorial) argument term θ, by the corresponding ψ(θ) simultaneously. Soundness of rule US1 [15] requires all relevant substitutions of ψ(θ) for p(θ) to be admissible, i.e. that no p(θ) occurs in the scope of a quantifier or modality binding a variable of ψ(θ) other than the occurrences in θ; see [2, §35,40]. A precise definition of admissibility is the key ingredient and will be developed from the static semantics.
This section develops rule US as a more general and constructive definition, with a precise substitution algorithm and precise admissibility conditions, that allows symbols from more syntactic categories to be substituted. The dL calculus uses uniform substitutions that affect terms, formulas, and programs. A uniform substitution σ is a mapping from expressions of the form f(·) to terms σf(·), from p(·) to formulas σp(·), from C(_) to formulas σC(_), and from program constants a to HPs σa. Vectorial extensions are accordingly for uniform substitutions of other arities k ≥ 0. Here · is a reserved function symbol of arity zero and _ a reserved quantifier symbol of arity zero, which mark the positions where the respective argument, e.g., argument θ to p(·) in the formula p(θ), will end up in the replacement σp(·) used for p(θ).
Example 3 (Uniform substitutions with or without clashes). The uniform substitution σ = {f ↦ x + 1, p(·) ↦ (· = x)} substitutes all occurrences of function symbol f (of arity 0) by x + 1 and simultaneously substitutes all occurrences of p(θ), with predicate symbol p and any argument θ, by the corresponding (θ = x). Whether that uniform substitution is sound depends on admissibility of σ for the formula φ in US, as will be defined in Def. 10. It will turn out to be admissible (and thus sound) for some formulas but inadmissible (and, in fact, would be unsound) for others. The reason why σ cannot be admissible for the latter is that σ has a free variable x in its replacement for p(·) that it introduces into a context where x is bound by the modality [x := x + 1], so the x in the replacement · = x for p(·) would refer to different values in the two occurrences of p.

Figure 1 defines the result σ(φ) of applying to a dL formula φ the uniform substitution σ that uniformly replaces all occurrences of a function symbol f by a term (instantiated with the respective argument of f) and all occurrences of a predicate symbol p or a quantifier symbol C by a formula (instantiated with its argument), as well as of a program constant a by a program. A uniform substitution can replace any number of such function, predicate, and quantifier symbols or program constants simultaneously. The notation σf(·) denotes the replacement for f(·) according to σ, i.e. the value σf(·) of the function σ at f(·). By contrast, σ(φ) denotes the result of applying σ to φ according to Fig. 1 (likewise for σ(θ) and σ(α)). The notation f ∈ σ signifies that σ replaces f, i.e. σf(·) ≠ f(·). Finally, σ is a total function when augmented with σg(·) = g(·) for all g ∉ σ, so that the case g ∉ σ in Fig. 1 is subsumed by the case f ∈ σ. Corresponding notation is used for predicate symbols, quantifier symbols, and program constants. The cases g ∉ σ, p ∉ σ, C ∉ σ, b ∉ σ follow from the other cases but are listed explicitly for clarity. Arguments are put in for the placeholder · recursively by the uniform substitution {· ↦ σ(θ)} in Fig. 1, which is defined since it replaces the function symbol · of arity 0 by σ(θ), or accordingly for the quantifier symbol _ of arity 0.
Definition 10 (Admissible uniform substitution). The restriction σ|Σ(φ) of σ only replaces symbols that occur in φ, and FV(σ) = ⋃_{f∈σ} FV(σf(·)) ∪ ⋃_{p∈σ} FV(σp(·)) is the set of free variables that σ introduces. A uniform substitution σ is U-admissible for φ (or θ or α, respectively) iff FV(σ|Σ(φ)) ∩ U = ∅. A uniform substitution σ is admissible for φ (or θ or α, respectively) iff, for each operator of φ, the variables U bound by that operator are not free in the substitution on its arguments, i.e. σ is U-admissible. These admissibility conditions are listed explicitly in Fig. 1, which defines the result σ(φ) of applying σ to φ.
The substitution σ is said to clash and its result σ (φ ) (or σ (θ ) or σ (α)) is not defined if σ is not admissible, in which case rule US is not applicable either. All subsequent applications of uniform substitutions are required to be defined (no clash).
Example 4 (Admissibility). The first use of US in Example 3 is admissible, because no free variable of the substitution is introduced into a context in which that variable is bound. The second, unsound attempt in Example 3 clashes, because it is not admissible, since x ∈ FV(σ) but also x ∈ BV(x := x + 1). Occurrences of such bound variables that result from the arguments of the predicates or functions themselves are exempt.
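The clash check of Def. 10 can be sketched operationally: while applying σ, carry the set U of variables bound at the current position, and clash if a replacement would introduce a free variable from U. This is a toy encoding, not the paper's algorithm: predicates are nullary, replacements carry their free-variable sets explicitly, and all names are assumptions of this sketch.

```python
class Clash(Exception):
    """Raised when a uniform substitution is not admissible."""

def usubst(sigma, phi, U=frozenset()):
    """Apply sigma to formula phi; U is the set of variables bound here.
    Formulas: ('p', name) nullary predicate, ('and', f, g), and
    ('box_assign', x, f) for the modality [x := theta]f.
    sigma maps predicate names to (replacement_formula, free_variables)."""
    op = phi[0]
    if op == 'p':
        if phi[1] not in sigma:
            return phi
        repl, fv = sigma[phi[1]]
        if fv & U:                  # a free variable of sigma is bound here
            raise Clash(f"{sorted(fv & U)} bound at occurrence of {phi[1]}")
        return repl
    if op == 'and':
        return ('and', usubst(sigma, phi[1], U), usubst(sigma, phi[2], U))
    if op == 'box_assign':          # [x := theta] binds x in the postcondition
        return ('box_assign', phi[1], usubst(sigma, phi[2], U | {phi[1]}))
    raise TypeError(f"not a formula: {phi!r}")

# p  ~>  a formula that is free in x (mirroring p(.) ~> (. = x) of Example 3):
sigma = {'p': (('eq_x',), frozenset({'x'}))}
usubst(sigma, ('and', ('p', 'p'), ('p', 'q')))       # admissible: no binder
try:
    usubst(sigma, ('box_assign', 'x', ('p', 'p')))   # x bound here: must clash
    assert False, "expected a clash"
except Clash:
    pass
```

As in Example 4, the substitution succeeds outside binders but clashes under [x := x + 1], because the free x of the replacement would be captured.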

Correctness of Uniform Substitutions
Soundness of rule US requires proving that validity is preserved when replacing symbols with their uniform substitutes. The key to its soundness proof is to relate this syntactic change to a semantic change of the interpretations, such that validity of the premise in all interpretations implies validity of the premise in the semantically modified interpretation, which is then equivalent to validity of its syntactic substitute in the conclusion. The semantic substitution corresponding to (or adjoint to) σ modifies the interpretation of function, predicate, and quantifier symbols as well as program constants semantically in the same way that σ replaces them syntactically. When σ is admissible, the value of an expression in the adjoint interpretation agrees with the value of its uniform substitute in the original interpretation. This link to the static semantics proves the following correspondence of syntactic and semantic substitution. Let I^d_· denote the interpretation that agrees with interpretation I except for the interpretation of the function symbol ·, which is changed to d ∈ R. Correspondingly, I^R__ denotes the interpretation that agrees with I except that the quantifier symbol _ is interpreted as R ⊆ S.
Definition 11 (Substitution adjoints). The adjoint to substitution σ is the operation that maps I, ν to the adjoint interpretation σ*νI, in which the interpretation of each function symbol f ∈ σ, predicate symbol p ∈ σ, quantifier symbol C ∈ σ, and program constant a ∈ σ is modified according to σ.

Corollary 6 (Admissible adjoints). If ν = ω on FV(σ), then σ*νI = σ*ωI. If σ is U-admissible for φ (or θ or α, respectively) and ν = ω on U∁, then σ*νI = σ*ωI on Σ(φ) (or Σ(θ) or Σ(α), respectively).

Proof. For well-definedness of σ*νI, note that σ*νI(f) is a smooth function, since its substitute term σf(·) has smooth values. First, σ*νI(a) = I[[σa]] = σ*ωI(a) holds because the adjoint to σ for I, ν in the case of programs is independent of ν (the program has access to its respective initial state at runtime). Likewise, σ*νI(C) = σ*ωI(C) for quantifier symbols, because the adjoint is independent of ν for quantifier symbols. For the second part, FV(σf(·)) ⊆ U∁ for every function symbol f ∈ Σ(φ) (or θ or α) by U-admissibility, and likewise for predicate symbols p ∈ Σ(φ). Since ν = ω on U∁ was assumed, σ*ωI = σ*νI on the function and predicate symbols in Σ(φ) (or θ or α) by Lemma 2.

Substituting equals for equals is sound by the compositional semantics of dL. The more general uniform substitutions are still sound, because the semantics of uniform substitutes of expressions agrees with the semantics of the expressions themselves in the adjoint interpretations. The semantic modification of adjoint interpretations has the same effect as the syntactic uniform substitution.
Lemma 7 (Uniform substitution for terms). The uniform substitution σ and its adjoint interpretation σ*νI, ν for I, ν have the same semantics for all terms θ:

Iν[[σ(θ)]] = (σ*νI)ν[[θ]]

Proof. The proof is by structural induction on θ and the structure of σ.
The case σ(f(θ)) follows by using the induction hypothesis twice: once for σ(θ) on the smaller θ, and once for {· ↦ σ(θ)}(σf(·)) on the possibly bigger term σf(·) but the structurally simpler uniform substitution {· ↦ σ(θ)}, which is a substitution on the symbol · of arity zero, not a substitution of functions with arguments. For well-foundedness of the induction, note that the ·-substitution only happens for function symbols f with at least one argument θ (for f ∈ σ), so not for · itself.

The case of a function symbol g ∉ σ follows by the induction hypothesis, since I(g) = σ*νI(g), as the interpretation of g does not change under σ. The case (θ)′ follows by the induction hypothesis, provided σ is V-admissible for θ, i.e. does not introduce any variables or differential symbols, so that Corollary 6 implies σ*νI = σ*ωI for all ν, ω (which agree on V∁ = ∅, which imposes no condition on ν, ω). In particular, the adjoint interpretation σ*νI is the same for all ways of changing the value of variable x in state ν when forming the partial derivative.
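The content of Lemma 7 can be checked concretely for a nullary function symbol in a toy term language. This is a sketch, not the paper's formalization: the tuple encoding of terms, the names, and the restriction to a single nullary symbol f are assumptions. Evaluating σ(θ) in (I, ν) should agree with evaluating θ itself in the adjoint σ*νI, whose value for f is Iν[[σf]].

```python
def eval_term(term, I, nu):
    """Evaluate a term in interpretation I (nullary function symbols) and state nu."""
    op = term[0]
    if op == 'var':   return nu[term[1]]
    if op == 'const': return term[1]
    if op == 'fn':    return I[term[1]]          # nullary function symbol
    if op == '+':     return eval_term(term[1], I, nu) + eval_term(term[2], I, nu)
    if op == '*':     return eval_term(term[1], I, nu) * eval_term(term[2], I, nu)

def subst_term(sigma, term):
    """Syntactic uniform substitution of nullary function symbols."""
    op = term[0]
    if op == 'fn' and term[1] in sigma:
        return sigma[term[1]]
    if op in ('+', '*'):
        return (op, subst_term(sigma, term[1]), subst_term(sigma, term[2]))
    return term

sigma = {'f': ('+', ('var', 'x'), ('const', 1.0))}     # f  ~>  x + 1
theta = ('*', ('fn', 'f'), ('var', 'y'))               # theta = f * y
I, nu = {'f': 0.0}, {'x': 3.0, 'y': 5.0}

adjoint_I = dict(I)
adjoint_I['f'] = eval_term(sigma['f'], I, nu)          # sigma*_nu I(f) = I nu [[x+1]]

lhs = eval_term(subst_term(sigma, theta), I, nu)       # I nu [[sigma(theta)]]
rhs = eval_term(theta, adjoint_I, nu)                  # sigma*_nu I, nu [[theta]]
assert lhs == rhs == 20.0
```

The syntactic route (substitute, then evaluate) and the semantic route (evaluate in the adjoint interpretation) agree, which is exactly the correspondence that the soundness proof of rule US exploits.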
The uniform substitute of a formula is true at ν in an interpretation iff the formula itself is true at ν in its adjoint interpretation. Uniform substitution lemmas are proved by simultaneous induction for formulas and programs.
Lemma 8 (Uniform substitution for formulas). The uniform substitution σ and its adjoint interpretation σ*ν I, ν for I, ν have the same semantics for all formulas φ: ν ∈ I[[σ(φ)]] iff ν ∈ σ*ν I[[φ]].

The proof is by structural induction on φ and the structure of σ, simultaneously with Lemma 9. The case σ(p(θ)) follows by using Lemma 7 for σ(θ) and by using the induction hypothesis for {· → σ(θ)}(σ p(·)) on the possibly bigger formula σ p(·), but with the structurally simpler uniform substitution {· → σ(θ)}(. . . ), which is a mere substitution on the function symbol · of arity zero, not a substitution of predicates.
The case of a quantifier symbol C ∈ σ follows by the induction hypothesis for the smaller subformula, and by Corollary 6 for all ν, ω (that agree on V∁ = ∅, which imposes no condition on ν, ω), since σ is V-admissible for φ. The proof then proceeds by the induction hypothesis for the structurally simpler uniform substitution {⎵ → σ(φ)}, which is a mere substitution on a quantifier symbol ⎵ of arity zero. Both sides are, thus, equivalent.
The modality case, by Lemma 9 and the induction hypothesis, respectively, is equivalent to: there is an ω reachable in the adjoint interpretation in which the postcondition holds. The uniform substitute of a program has a run from ν to ω in an interpretation iff the program itself has a run from ν to ω in its adjoint interpretation.
Lemma 9 (Uniform substitution for programs). The uniform substitution σ and its adjoint interpretation σ*ν I, ν for I, ν have the same semantics for all programs α: (ν, ω) ∈ I[[σ(α)]] iff (ν, ω) ∈ σ*ν I[[α]].

Proof. The proof is by structural induction on α, simultaneously with Lemma 8. The sequential composition case uses Corollary 6, because σ is BV(σ(α))-admissible for β and ν = ω on BV(σ(α))∁ by Lemma 1. The repetition case considers the intermediate states ν i for all i < n. By n uses of the induction hypothesis, this is equivalent to the corresponding runs in the adjoint interpretation, again by Corollary 6, since σ is BV(σ(α))-admissible for α and ν i+1 = ν i on BV(σ(α))∁ by Lemma 1.

Soundness
The uniform substitution lemmas are the key insights for the soundness of proof rule US, which is only applicable if its uniform substitution is defined. A proof rule is sound iff validity of all its premises implies validity of its conclusion.
Theorem 10 (Soundness of uniform substitution). The proof rule US is sound.

Uniform substitutions can also be used to soundly instantiate locally sound proof rules or whole proofs, just like proof rule US soundly instantiates axioms or other valid formulas (Theorem 10). An inference or proof rule is locally sound iff its conclusion is valid in any interpretation I in which all its premises are valid. All locally sound proof rules are sound. The use of Theorem 11 in a proof is marked USR.
Theorem 11 (Soundness of uniform substitution of rules). All uniform substitution instances (with FV(σ) = ∅) of locally sound inferences are locally sound.

Proof. Let D be the inference on the left and let σ(D) be the inference on the right. Assume D to be locally sound and consider any interpretation I in which all premises of σ(D) are valid. By Lemma 8, the corresponding premises of D are valid in the adjoint interpretation, so, by local soundness of D, its conclusion ψ is, too, which continues to hold for all ν since FV(σ) = ∅. Thus, I |= σ(ψ), i.e. the conclusion of σ(D) is valid in I, hence σ(D) is locally sound. Consequently, all uniform substitution instances σ(D) of locally sound inferences D with FV(σ) = ∅ are locally sound.
If ψ has a proof (i.e. n = 0), USR is locally sound even if FV(σ) ≠ ∅, because US proves σ(ψ) from the provable ψ, which makes this inference locally sound, since local soundness is equivalent to soundness for n = 0 premises. If ψ has a proof, USR for n = 0 premises is identical to US.
Example 5 (Uniform substitutions are only globally sound). Rule US itself is only sound but not locally sound, so it cannot have been used on any unproved premise at any point during a proof that is to be instantiated by proof rule USR from Theorem 11. The following sound proof with a modus ponens (marked MP) has an unproved premise on which US has been used at some point during the proof. This use of US makes the proof sound but not locally sound, which prevents rule USR in Theorem 11 from (unsoundly) concluding a uniform substitution instance under σ = {f(·) → 0} of this inference. Indeed, rule US assumes that its premise (here f(x) = 0) is valid (in all interpretations I), but the latter (clashing) substitution instance would only prove one specific different choice for f to satisfy f(x) = 0. Rule US can still be used in the proof of a premise, provided that premise is proved, without endangering local soundness, because proved premises are valid in all interpretations by soundness.

Differential Dynamic Logic Axioms
Proof rules and axioms for a Hilbert-type axiomatization of dL from prior work [13] are shown in Fig. 2, except that, thanks to proof rule US, axioms and proof rules now reduce to the finite list of concrete dL formulas in Fig. 2 as opposed to an infinite collection of axioms from a finite list of axiom schemata along with schema variables, side conditions, and implicit instantiation rules. Soundness of the axioms follows from soundness of corresponding axiom schemata [6,13], but is easier to prove standalone, because it is a finite list of formulas without the need to prove soundness for all their instances. Soundness of axioms, thus, reduces to validity of one formula as opposed to validity of all formulas that can be generated by the instantiation mechanism complying with the respective side conditions for that axiom schema. The proof rules in Fig. 2 are axiomatic rules, i.e. pairs of concrete dL formulas instantiated by USR. Soundness of axiomatic rules reduces to proving that their concrete conclusion formula is a consequence of their premise formula. Further, x is the vector of all relevant variables, which is finite-dimensional, or considered as a built-in vectorial term. Proofs in the uniform substitution dL calculus use US (and variable renaming such as ∀x p(x) to ∀y p(y)) to instantiate the axioms from Fig. 2 to the required form.
Diamond axiom · expresses the duality of the [·] and · modalities. Assignment axiom [:=] expresses that p(x) holds after the assignment x := f iff p( f ) holds initially. Test axiom [?] expresses that p holds after the test ?q iff p is implied by q, because test ?q only runs when q holds. Choice axiom [∪] expresses that p(x) holds after all runs of a ∪ b iff p(x) holds after all runs of a and after all runs of b. Sequential composition axiom [;] expresses that p(x) holds after all runs of a; b iff, after all runs of a, it is the case that p(x) holds after all runs of b. Iteration axiom [ * ] expresses that p(x) holds after all repetitions of a iff it holds initially and, after all runs of a, it is the case that p(x) holds after all repetitions of a. Axiom K is the modal modus ponens from modal logic [8]. Induction axiom I expresses that if, no matter how often a repeats, p(x) holds after all runs of a if it was true before, then, if p(x) holds initially, it holds after all repetitions of a. Vacuous axiom V expresses that arity 0 predicate symbol p continues to hold after all runs of a if it was true before.
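For orientation, the dynamic axioms discussed above can be stated as the following concrete dL formulas, in a standard rendering consistent with the descriptions just given (p, q are predicate symbols, f a function symbol, a, b program constants, and x̄ the vector of relevant variables):

```latex
\begin{align*}
\langle\cdot\rangle\quad & \langle a\rangle p(\bar{x}) \leftrightarrow \lnot[a]\lnot p(\bar{x}) \\
[:=]\quad & [x:=f]\,p(x) \leftrightarrow p(f) \\
[?]\quad & [?q]\,p \leftrightarrow (q \to p) \\
[\cup]\quad & [a\cup b]\,p(\bar{x}) \leftrightarrow [a]\,p(\bar{x}) \land [b]\,p(\bar{x}) \\
[;]\quad & [a;b]\,p(\bar{x}) \leftrightarrow [a][b]\,p(\bar{x}) \\
[{}^*]\quad & [a^*]\,p(\bar{x}) \leftrightarrow p(\bar{x}) \land [a][a^*]\,p(\bar{x}) \\
\mathrm{K}\quad & [a]\bigl(p(\bar{x})\to q(\bar{x})\bigr) \to \bigl([a]\,p(\bar{x}) \to [a]\,q(\bar{x})\bigr) \\
\mathrm{I}\quad & [a^*]\bigl(p(\bar{x})\to[a]\,p(\bar{x})\bigr) \to \bigl(p(\bar{x})\to[a^*]\,p(\bar{x})\bigr) \\
\mathrm{V}\quad & p \to [a]\,p
\end{align*}
```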
Gödel's generalization rule G expresses that p(x) holds after all runs of a if p(x) is valid. Accordingly for the ∀-generalization rule ∀. MP is modus ponens. Congruence rules CQ, CE are not needed but are included to use axioms efficiently in any context. Congruence rule CT derives from CQ using the definition p(·) ≡ (c(·) = c(g(x))) and reflexivity:

(CT) from f(x) = g(x), conclude c(f(x)) = c(g(x))

Remark 1. The use of the variable vector x̄ is not essential but simplifies concepts. An equivalent axiomatization is obtained when considering p(x̄) to be a quantifier symbol of arity 0 in the axiomatization, or as C(true) with a quantifier symbol of arity 1. Neither replacements of quantifier symbols nor (vectorial) placeholders · for the substitutions {p(·) → ψ} that are used for p(x̄) cause any free variables in the substitution. The mnemonic notation σ = {p(x̄) → φ} adopted for such uniform substitutions reminds us that the variables x̄ are not free in σ even if they occur in the replacement φ.
Sound axioms are just valid formulas, so true in all states. For example, in any state where [a; b]p(x̄) is true, [a][b]p(x̄) is true, too, by equivalence axiom [;]. Using axiom [;] to replace one by the other is a truth-preserving transformation, i.e. in any state in which one is true, the other is true, too. Sound rules are validity-preserving, i.e. the conclusion is valid if the premises are valid, which is weaker than truth-preserving transformations. For proof search, the dL axioms are meant to be used to reduce the axiom key (marked blue) to the structurally simpler remaining conditions (right-hand sides of equivalences and the conditions assumed in implications).
Real Quantifiers. Besides (decidable) real arithmetic (whose use is denoted R), complete axioms for first-order logic can be adopted to express universal instantiation ∀i (if p is true of all x, it is also true of constant function symbol f ), distributivity ∀→, and vacuous quantification V ∀ (predicate p of arity zero does not depend on x).
The Significance of Clashes. This section illustrates how uniform substitutions tell sound instantiations apart from unsound proof attempts. Rule US clashes exactly when the substitution introduces a free variable into a bound context, which would be unsound. Example 3 on p. 16 already showed that even an occurrence of p(x) in a context where x is bound does not permit mentioning x in the replacement except in the · places. US can directly handle even nontrivial binding structures, though, e.g. from [:=] with the substitution σ = {f → x², p(·) → [(z := · + z)*; z := · + yz] y ≥ ·}. It is soundness-critical that US clashes when trying to instantiate p in V∀ with a formula that mentions the bound variable x. It is likewise soundness-critical that US clashes when substituting p in vacuous program axiom V with a formula with a free occurrence of a variable bound by the replacement of a. Additional free variables are acceptable in replacements for p as long as they are not bound in the particular context into which they will be substituted. Complex formulas are acceptable as replacements for p if their free variables are not bound in the context. But it is soundness-critical that US clashes when substituting a formula with a free dependence on x for p into a context where x will be bound after the substitution. Gödel's generalization rule G uses p(x̄) instead of p from V, so its USR instance allows all variables x̄ to occur in the replacement without causing a clash. Intuitively, the argument x̄ in this uniform substitution instance of G was not introduced as part of the substitution but merely put in for the placeholder · instead, e.g. with x̄ = (x, y). Not all axioms fit the uniform substitution framework, though.
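The clash check described above can be sketched as a small program. This is a hedged illustration on a hypothetical tuple-based formula grammar; free_vars, admissible, and the "." placeholder are illustrative names, not the article's notation.

```python
# Sketch of the clash check behind rule US: a substitution for p(.) is
# admissible at a use site iff none of the replacement's free variables
# (other than the placeholder ".") is bound at that site.

def free_vars(fml):
    """Free variables of a toy term/formula given as nested tuples.
    Strings are variables; "." is the placeholder and counts as nothing."""
    if isinstance(fml, str):
        return set() if fml == "." else {fml}
    if fml[0] == "forall":                # ("forall", x, body) binds x
        return free_vars(fml[2]) - {fml[1]}
    return set().union(*(free_vars(part) for part in fml[1:]))

def admissible(replacement, bound_at_site):
    """US clashes iff the replacement mentions a variable bound at the site."""
    return not (free_vars(replacement) & set(bound_at_site))

# p(.) -> (. >= y): fine under a binder for x, clashes under a binder for y
repl = (">=", ".", "y")
print(admissible(repl, {"x"}))   # True: y stays free, no clash
print(admissible(repl, {"y"}))   # False: would capture y, so US clashes
```

The placeholder "." is deliberately excluded from the free variables: arguments plugged in for · are not introduced by the substitution, mirroring why G may be instantiated with replacements mentioning x̄.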
The Barcan axiom was used in a completeness proof for the Hilbert-type calculus for differential dynamic logic [13] (but not in the completeness proof for its sequent calculus [11]). B is unsound without the restriction x ∉ α, though, so the corresponding unrestricted formula would be an unsound axiom, because x ∉ a cannot be enforced for program constants, since their effect might depend on the value of x or since they might write to x. In (2), x cannot be written by a without violating soundness, nor can x be read by a in (2) without violating soundness. Thus, the completeness proof for differential dynamic logic from prior work [13] does not carry over. A more general completeness result for differential game logic [15] implies, however, that Barcan schema B is unnecessary for completeness.

Differential Equations and Differential Axioms
Section 4 leverages uniform substitutions to obtain a finite list of axioms without side conditions. They lack axioms for differential equations, though. Classical calculi for dL have axiom schema ['] from p. 2 for replacing differential equations with time quantifiers and discrete assignments for their solutions. In addition to being limited to simple solvable differential equations, such axiom schemata have quite nontrivial soundness-critical side conditions. This section leverages US and the new differential forms in dL to obtain a logically internalized version of differential invariants and related proof rules for differential equations [12,14] as axioms (without schema variables or side conditions). These axioms can prove properties of more general "unsolvable" differential equations. They can also prove all properties of differential equations that can be proved with solutions [14] while guaranteeing correctness of the solution as part of the proof. Figure 3 shows axioms for proving properties of differential equations (DW-DS), and differential axioms for differentials (+′, ·′, ∘′), which are equations of differentials. Axiom x′, identifying (x)′ = x′ for variables x ∈ V, and axiom c′, for functions f and number literals of arity 0, are used implicitly to save space. Some axioms use reverse implications φ ← ψ instead of the equivalent ψ → φ for emphasis.

Differentials: Invariants, Cuts, Effects, and Ghosts
Differential weakening axiom DW internalizes that differential equations never leave their evolution domain q(x). The evolution domain q(x) holds after all evolutions of x′ = f(x) & q(x), because differential equations cannot leave their evolution domains. DW derives [x′ = f(x) & q(x)]p(x) ↔ [x′ = f(x) & q(x)](q(x) → p(x)), which allows exporting the evolution domain to the postcondition

Figure 3: Differential equation axioms and differential axioms
and is also called DW. Its (right) assumption is best proved by G, yielding premise q(x) → p(x). The differential cut axiom DC is a cut for differential equations. It internalizes that a differential equation always staying in r(x) also always stays in p(x) iff p(x) always holds after the differential equation restricted to the smaller evolution domain & q(x) ∧ r(x). DC is a differential variant of modal modus ponens axiom K.
Differential effect axiom DE internalizes that the effect on differential symbols along a differential equation is that of a differential assignment assigning the right-hand side f(x) to the left-hand side x′. The differential assignment x′ := f(x) in DE mimics instantaneously the (continuous) effect that the differential equation x′ = f(x) & q(x) has on x′, thereby selecting the appropriate vector field for subsequent differentials. Axiom DI internalizes differential invariants [12], i.e. that p(x) holds always after a differential equation x′ = f(x) & q(x) iff it holds after ?q(x), provided its differential (p(x))′ always holds after the differential equation x′ = f(x) & q(x). This axiom reduces future truth to present truth when the truth of p(x) does not change along the differential equation because (p(x))′ holds all along. The differential equation also vacuously stays in p(x) if it starts outside q(x), since it is stuck then. The assumption of DI is best proved by DE to select the appropriate vector field x′ = f(x) for the differential (p(x))′ and a subsequent DW, G to make the evolution domain constraint q(x) available as an assumption when showing (p(x))′. The condition [?q(x)]p(x) in DI is equivalent to q(x) → p(x) by axiom [?]. While a general account of (p(x))′ is possible [16], this article focuses on atomic postconditions with the equivalences (θ ≥ η)′ ≡ (θ > η)′ ≡ (θ)′ ≥ (η)′ and (θ = η)′ ≡ (θ ≠ η)′ ≡ (θ)′ = (η)′, etc. Note that (θ ≠ η)′ cannot be (θ)′ ≠ (η)′, because different rates of change from different initial values do not imply the values would remain different. Conjunctions can be handled separately by [α](p(x) ∧ q(x)) ↔ [α]p(x) ∧ [α]q(x), which derives from K. Disjunctions split into separate disjuncts, which is equivalent to classical differential invariants [12] but easier. Axiom DG internalizes differential ghosts [14], i.e.
that additional differential equations can be added whose solutions exist long enough, which can enable new invariants that are not otherwise provable [14]. Axiom DS solves constant differential equations, and, as Section 5.2 will demonstrate, more complex solvable differential equations with the help of DG,DC,DI. Vectorial generalizations to systems of differential equations are possible for the axioms in Fig. 3.
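For orientation, the differential equation axioms discussed above can be stated as follows, in a standard rendering consistent with the surrounding descriptions (y is fresh in DG and x does not occur in replacements for f in DS):

```latex
\begin{align*}
\mathrm{DW}\quad & [x'=f(x)\,\&\,q(x)]\,q(x) \\
\mathrm{DC}\quad & \bigl([x'=f(x)\,\&\,q(x)]\,p(x) \leftrightarrow [x'=f(x)\,\&\,q(x)\land r(x)]\,p(x)\bigr)
                   \leftarrow [x'=f(x)\,\&\,q(x)]\,r(x) \\
\mathrm{DE}\quad & [x'=f(x)\,\&\,q(x)]\,p(x,x') \leftrightarrow [x'=f(x)\,\&\,q(x)][x':=f(x)]\,p(x,x') \\
\mathrm{DI}\quad & \bigl([x'=f(x)\,\&\,q(x)]\,p(x) \leftrightarrow [?q(x)]\,p(x)\bigr)
                   \leftarrow \bigl(q(x)\to[x'=f(x)\,\&\,q(x)]\,(p(x))'\bigr) \\
\mathrm{DG}\quad & [x'=f(x)\,\&\,q(x)]\,p(x) \leftrightarrow \exists y\,[x'=f(x),\,y'=a(x)\cdot y+b(x)\,\&\,q(x)]\,p(x) \\
\mathrm{DS}\quad & [x'=f\,\&\,q(x)]\,p(x) \leftrightarrow
                   \forall t{\ge}0\,\bigl((\forall\, 0{\le}s{\le}t\;\, q(x+f\cdot s)) \to [x:=x+f\cdot t]\,p(x)\bigr)
\end{align*}
```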
The differential axioms for differentials (+′, ·′, ∘′, c′, x′) axiomatize differentials of polynomials. They are related to corresponding rules for time-derivatives, except that those would be ill-defined in a local state, so it is crucial to work with differentials that have a local semantics in individual states. Uniform substitutions correctly maintain that y does not occur in replacements for a(x), b(x) in DG and that x does not occur in replacements for f in DS, which are both soundness-critical.
Occurrences of x in replacements for f are acceptable when using a suitably restricted variant of axiom DS. Most axioms in Fig. 2 and 3 are independent, because there is exactly one axiom per operator. Exceptions in Fig. 2 are K, I, V, but there is a complete calculus without [*], V [13] and one without G, K, I, V that uses two extra rules instead [15]. The congruence rules CQ, CE are redundant and can be proved on a per-instance basis as well. Axiom DW is the only one that can use the evolution domain, axiom DC the only one that can change the evolution domain, and axiom DG the only one that can change the differential equations. Axiom DE is the only one that can use the right-hand side of a differential equation. Axiom DI is the only axiom that relates truth of a postcondition after a differential equation to truth at the initial condition. Finally, axiom DS is needed for proving diamond properties of differential equations, because it is the only one (besides the limited DW) that does not reduce a property of a differential equation to another property of a differential equation and, thus, the only axiom that ultimately proves them without the help of G, V, K, which are not sound for ⟨α⟩.

Example Proofs
This section illustrates how the uniform substitution calculus for dL can be used to realize a number of different reasoning techniques from the same proof primitives. While the same flexibility enables these different techniques also for proofs of hybrid systems, the following examples focus on differential equations to additionally illustrate how the axioms in Fig. 3 are meant to be combined.
Example 6 (Contextual equivalence proof). The following proof establishes a property of a differential equation using differential invariants, without having to solve that differential equation. One use of rule US is shown explicitly; other uses of US are similar and obtain and use the other axiom instances. CE is used together with MP.
Previous calculi [12,14] collapse this proof into a single proof step, but with complicated built-in operator implementations that silently perform the same reasoning in an opaque way. The approach presented here combines separate axioms to achieve the same effect in a modular way, with axioms of individual responsibilities internalizing separate logical reasoning principles in differential-form dL. Tactics combining the axioms as indicated make the axiomatic way equally convenient. Clever proof structuring, cuts, or MP uses enable proofs in which the main argument remains as fast [12,14] while the additional premises subsequently check soundness. Inferences in context such as those portrayed in CE, CQ are impossible in sequent calculus [11].
Example 7 (Flat proof). Rules CQ, CE simplify the proof in Example 6 substantially but are not needed, because a proof without contextual equivalence is possible.

Example 8 (Parametric proof). The proofs in Examples 6 and 7 use (implicit) cuts with equivalences that predict the outcome of the right premise, which is conceptually simple but inconvenient for proof search. More constructively, a direct proof can use a free function symbol j(x, x′) to obtain a straightforward parametric proof instead. After conducting this proof with two open premises, the free function symbol j(x, x′) can be instantiated as needed by a uniform substitution (USR from Theorem 11). The proof justifies a locally sound inference whose two open premises and conclusion are instantiated by USR, leading to a new sound proof, which completes after the instantiation of j(x, x′). Theorem 11 is applied to locally sound proofs. The same technique helps invariant search, where a free predicate symbol p(x) is instantiated by rule US lazily once all conditions it needs to complete the proof become clear. This reduction saves significant proof effort compared to eager invariant instantiation in sequent calculi [11].
Example 9 (Forward computation proof). The proof in Example 8 involves less search than the proofs of the same formula in Examples 6 and 7. But it still ultimately requires foresight to identify the appropriate instantiation of j(x, x′) for which the proof closes. For invariant search, such proof search is essentially unavoidable [13], even if the technique in Example 8 maximally postpones the search. When used from left to right, the differential axioms c′, x′, +′, ·′, ∘′ compute deterministically and always simplify terms by pushing differential operators inside. For example, all backwards proof search in the right branch of the last proof of Example 8 can be replaced by a deterministic forward computation proof starting from reflexivity (x · x)′ = (x · x)′ and drawing on axiom instances (used in a term context via CT) as needed in a forward proof until the desired output shape is obtained. Efficient proof search combines this forward computation proof technique with the backward proof search from Example 8, with advantages similar to other combinations of computation and axiomatic reasoning [5]. Even the remaining positions where axioms still match can be precomputed as a simple function of the axiom that has been applied, e.g., from its fixed pattern of occurrences of differential operators.
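The deterministic left-to-right use of the differential axioms can be sketched as a small rewriter. This is an illustrative sketch on a hypothetical tuple-based term grammar (push_diff is an illustrative name); each branch is annotated with the axiom it applies.

```python
# Push the differential operator (.)' inside a polynomial term using the
# differential axioms c', x', +', and the Leibniz axiom for products, until
# only differential symbols x' of variables remain.
# Terms: numbers, variable names as strings, ("+", a, b), ("*", a, b).

def push_diff(term):
    """Return the differential of term with all operators pushed inside."""
    if isinstance(term, (int, float)):        # axiom c':  (c)' = 0
        return 0
    if isinstance(term, str):                 # axiom x':  (x)' = x'
        return term + "'"
    op, a, b = term
    if op == "+":                             # axiom +':  (a+b)' = (a)' + (b)'
        return ("+", push_diff(a), push_diff(b))
    if op == "*":                             # product axiom: (a*b)' = (a)'*b + a*(b)'
        return ("+", ("*", push_diff(a), b), ("*", a, push_diff(b)))
    raise ValueError(op)

# (x*x)' computes deterministically to x'*x + x*x', as in Example 9
print(push_diff(("*", "x", "x")))
```

Since every rewrite strictly shrinks the term under the differential operator, this forward computation terminates and needs no search.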
Example 10 (Axiomatic differential equation solver). Axiomatic equivalence proofs for solving differential equations involve DG for introducing a time variable, DC to cut the solutions in, DW to export the solution to the postcondition, inverse DC to remove the evolution domain constraints again, inverse DG to remove the original differential equations, and finally DS to solve the differential equation for time. The existential quantifier for t is instantiated by 0 (suppressed in the proof for readability reasons). The 4 uses of DC lead to 2 additional premises proving that v = v₀ + at and then x = x₀ + (a/2)t² + v₀t are differential invariants (using DI, DE, DW). Shortcuts using only DW instead are possible, but the elaborate proof generalizes, because it is an equivalence proof. The additional premises for DC with v = v₀ + at prove first; after that, the additional premises for DC with x = x₀ + (a/2)t² + v₀t prove as well. This axiomatic differential equation solving technique is not limited to differential equation systems that can be solved in full, but also works when only part of the differential equations have definable solutions. Contrast this constructive formal proof with the unverified use of a differential equation solver in axiom schema ['] from p. 2.
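As an informal complement to the formal DC/DI argument above, one can numerically sanity-check that the candidate solutions cut in by DC indeed solve x′ = v, v′ = a. This is a hedged illustration only, not part of the proof; the function name, sample points, and tolerances are arbitrary choices.

```python
# Numeric sanity check: the candidates v(t) = v0 + a*t and
# x(t) = x0 + (a/2)*t^2 + v0*t satisfy v' = a and x' = v, checked by
# comparing central finite differences against the ODE right-hand sides.

def check_solution(x0=1.0, v0=2.0, a=3.0, h=1e-6):
    x = lambda t: x0 + a / 2 * t * t + v0 * t
    v = lambda t: v0 + a * t
    for t in (0.0, 0.5, 1.0, 2.0):
        dx = (x(t + h) - x(t - h)) / (2 * h)   # approximates x'(t)
        dv = (v(t + h) - v(t - h)) / (2 * h)   # approximates v'(t)
        if abs(dx - v(t)) > 1e-4 or abs(dv - a) > 1e-4:
            return False
    return True

print(check_solution())   # True: v' = a and x' = v along the candidates
```

Unlike the unverified solver of axiom schema ['], the axiomatic proof above certifies the solution symbolically; this check merely illustrates what the DI premises establish.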

Differential Substitution Lemmas
In similar ways to how the uniform substitution lemmas are the key ingredients that relate syntactic and semantic substitution for the soundness of proof rule US, this section proves the key ingredients relating the syntax and semantics of differentials, which will be used in the soundness proofs of the differential axioms. Differentials (η)′ have a local semantics in isolated states, which is crucial for well-definedness. The DI axiom relates truth along a differential equation to initial truth and truth of differentials along the differential equation. The key insight for its soundness is that the analytic time-derivative of the value of a term η along any differential equation x′ = θ & ψ agrees with the value of its differential (η)′ along that differential equation. Recall from Def. 6 that I, ϕ |= x′ = θ ∧ ψ indicates that the function ϕ solves the differential equation x′ = θ & ψ in interpretation I, of which the only important part for the next lemma is that it gives x′ the value of the time-derivative of x along the solution ϕ.
Lemma 12 (Differential lemma). If I, ϕ |= x′ = θ ∧ ψ holds for some solution ϕ : [0, r] → S of any duration r > 0, then for all times 0 ≤ ζ ≤ r and all terms η with FV(η) ⊆ {x}, the time-derivative of the value of η at ζ equals the value of its differential (η)′ at ϕ(ζ).

Proof. By Def. 4, the left side is a sum of partial derivatives of the evaluation of η weighted by the values of the corresponding differential symbols. By the chain rule (Lemma 19 in the beginning of the appendix), the right side is a sum of the same partial derivatives weighted by the time-derivatives of the respective variables, which has finite support by Lemma 2, so is 0 for all but finitely many variables. Both sides, thus, agree since ϕ(ζ)(x′) = dϕ(t)(x)/dt (ζ) = ϕ′(ζ)(x) by Def. 6 for all x ∈ FV(η). The same proof works for vectorial differential equations as long as all free variables of η have some differential equation, so that their differential symbols agree with their time-derivatives.
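In the notation of Defs. 4 and 6, the identity at the heart of Lemma 12 can be rendered as follows (a reconstruction consistent with the surrounding proof):

```latex
\frac{\mathrm{d}\, I\varphi(t)\,[\![\eta]\!]}{\mathrm{d}t}(\zeta)
\;=\;
I\varphi(\zeta)\,[\![(\eta)']\!]
\qquad \text{for all } 0\le\zeta\le r .
```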
The differential effect axiom DE axiomatizes the effect of differential equations on the differential symbols. The key insight for its soundness is that the differential symbol x′ already has the value θ along the differential equation x′ = θ, so that the subsequent differential assignment x′ := θ, which assigns the value of θ to x′, has no effect on the truth of the postcondition. The differential substitution resulting from a subsequent use of axiom [:=] is crucial to relay the values of the time-derivatives of the state variables x along a differential equation by way of their corresponding differential symbols x′, though. In combination, this makes it possible to soundly substitute the right-hand side of a differential equation for its left-hand side in a proof.
Lemma 13 (Differential assignment). If I, ϕ |= x′ = θ ∧ ψ, where ϕ : [0, r] → S is a solution of any duration r ≥ 0, then φ and [x′ := θ]φ are equivalent along ϕ, i.e. at every ϕ(ζ) for all 0 ≤ ζ ≤ r. Since x′ already has the value Iϕ(ζ)[[θ]] in state ϕ(ζ), the differential assignment x′ := θ has no effect, thus (ϕ(ζ), ϕ(ζ)) ∈ I[[x′ := θ]], so that φ and [x′ := θ]φ are indeed equivalent along ϕ.

The final insights for differential invariant reasoning about differential equations are syntactic ways of computing differentials, which can be internalized as axioms (+′, ·′, ∘′), since differentials are represented syntactically in differential-form dL. It is the local semantics as differential forms that makes it possible to soundly capture the interaction of differentials with arithmetic operators by local equations.
Lemma 14 (Derivations). The equations of differentials (3)-(7) are valid formulas, where (7) is [y := θ][y′ := 1] ((f(θ))′ = (f(y))′ · (θ)′) for y, y′ ∉ FV(θ).

Proof. The proof shows each equation separately. The first parts consider any constant function (i.e. arity 0) or number literal f for (3) and align the differential (x)′ of a term that happens to be a variable x ∈ V with its corresponding differential symbol x′ ∈ V′ for (4). The other cases exploit linearity for (5) and Leibniz properties of partial derivatives for (6). Case (7) exploits the chain rule and assignments and differential assignments for the fresh y, y′ to mimic partial derivatives. Equation (7) generalizes to functions f of arity n > 1, in which case · is the (definable) Euclidean scalar product.
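The equations (3)-(7) referenced in the proof can be stated as follows, in a standard rendering consistent with the case analysis above:

```latex
\begin{align}
(f)' &= 0 && \text{for arity-0 function symbols and number literals } f \tag{3}\\
(x)' &= x' && \text{for variables } x \in V \tag{4}\\
(\theta+\eta)' &= (\theta)'+(\eta)' \tag{5}\\
(\theta\cdot\eta)' &= (\theta)'\cdot\eta + \theta\cdot(\eta)' \tag{6}\\
[y:=\theta][y':=1]\,\bigl((f(\theta))' &= (f(y))'\cdot(\theta)'\bigr)
  && \text{for } y,y' \notin \mathrm{FV}(\theta) \tag{7}
\end{align}
```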

Soundness
The uniform substitution calculus for differential-form dL is sound, i.e. all formulas that it proves from valid premises are valid. The soundness argument is entirely modular. The concrete dL axioms in Fig. 2 and 3 are valid formulas and the axiomatic proof rules (i.e. pairs of formulas) in Fig. 2 are locally sound, which implies soundness. The uniform substitution rule is sound so only concludes valid formulas from valid premises (Theorem 10), which implies that dL axioms (and other provable dL formulas) can only be instantiated soundly by rule US. Uniform substitution instances of locally sound inferences (and other locally sound inferences) are locally sound (Theorem 11), which implies that dL axiomatic proof rules in Fig. 2 can only be instantiated soundly by uniform substitutions (USR). The soundness proof follows a high-level strategy that is similar to earlier proofs [13,12,14], but ends up in stronger results as all axioms for differential equations are equivalences now. The availability of differentials and differential assignments as syntactic elements in differential-form dL as well as the support from uniform substitutions makes those soundness proofs significantly more modular, too. For example, what used to be a single proof rule for differential invariants [12] can now be decomposed into separate modular axioms.
Theorem 15 (Soundness). The uniform substitution calculus for dL is sound, that is, every formula that is provable by the dL axioms and proof rules is valid, i.e. true in all states of all interpretations. The axioms in Fig. 2 and 3 are valid formulas and the axiomatic proof rules in Fig. 2 are locally sound.
Proof. The axioms (and most proof rules) in Fig. 2 are special instances of corresponding axiom schemata and proof rules for differential dynamic logic [13] and, thus, sound. All proof rules in Fig. 2 (but not US itself) are even locally sound, which implies soundness, i.e. that their conclusions are valid (in all I) if their premises are. In preparation for a completeness argument, note that rules ∀, MP can be augmented soundly to use p(x̄) instead of p(x) or p, respectively, such that the FV(σ) = ∅ requirement of Theorem 11 will be met during USR instances of all axiomatic proof rules. The axioms in Fig. 3 are proved sound as follows.

DC: Soundness of DC is a stronger version of soundness for the differential cut rule [12]. DC is a differential version of the modal modus ponens K. Only the direction "←" of the equivalence in DC needs the outer assumption [x′ = f(x) & q(x)]r(x), but the proof of the conditional equivalence in DC is simpler. The core is that if r(x) holds after the differential equation, and if p(x) holds after the differential equation that is additionally restricted to r(x), then p(x) holds after the unrestricted differential equation. Since all restrictions of solutions are solutions, this is equivalent to I, ϕ |= r(x) for all ϕ of any duration solving I, ϕ |= x′ = f(x) ∧ q(x) and starting in ϕ(0) = ν on the complement of {x′}. That, in turn, is equivalent to: for all such ϕ, if I, ϕ |= x′ = f(x) ∧ q(x) then I, ϕ |= p(x, x′), by Lemma 13.

DI: Soundness of DI has some relation to the soundness proof for differential invariants [12], yet proves an equivalence and is generalized to leverage differentials.
If there is any solution, then the solution ϕ of duration 0 implies that ν ∈ I[[q(x)]], since x′ ∉ FV(q(x)), such that there is a solution at all. It suffices to consider postconditions of the form g(x) ≥ 0, because the variation for other formulas is the same as the variations in previous work [12]. The mean-value theorem (Lemma 18 in the appendix) yields the required derivative condition at some ζ ∈ (0, r); it is applicable since the value Iϕ(t)[[g(x)]] of term g(x) along ϕ is continuous in t on [0, r] and differentiable on (0, r), as a composition of the, by Def. 4, smooth evaluation function and the differentiable solution ϕ(t) of the differential equation.

DG: Soundness of DG is a constructive variation of the soundness proof for differential auxiliaries [14].
where the maximum exists, because it is a maximum of a continuous function on the compact set [0, r]. The modification φ̃ agrees with ϕ on the complement of {y, y′}. On {y, y′}, the modification φ̃ is defined as φ̃(t)(y) = y(t) and φ̃(t)(y′) = F(t, y(t)), respectively, for the solution y(t) of (8). In particular, φ̃(t)(y′) agrees with the time-derivative y′(t) of the value φ̃(t)(y) = y(t) of y along φ̃. By construction, φ̃(0)(y) = d, and the differential equation holds along φ̃ by (8) and because ϕ(t) = φ̃(t) on the complement of {y, y′}.

Completeness

Whether the dL calculus is complete, i.e. can prove all dL formulas that are valid, has an answer, too. Previous calculi for dL were proved complete relative to differential equations [11,13] and also complete relative to discrete dynamics [13]. A generalization of the Hilbert calculus to hybrid games was even proved complete schematically [15]. The uniform substitution calculus for differential-form dL is, to a large extent, a specialization of previous calculi tailored to significantly simplify soundness arguments. Yet, completeness does not transfer when restricting proof calculi. In fact, one key question is whether the restrictions imposed upon proofs for soundness purposes by the simple technique of uniform substitutions also preserve completeness. Completeness carries over from a previous schematic completeness proof for differential game logic [15], using expressiveness results from previous completeness proofs [11,13], by augmenting the schematic completeness proof with instantiability proofs.
The first challenge is to prove that uniform substitutions are flexible enough to prove all required instances of the dL axioms and axiomatic proof rules. For simplicity, consider p(x) to be a quantifier symbol of arity 0. A dL formula ϕ is called surjective iff rule US can instantiate ϕ to any of its axiom schema instances, which are those formulas that are obtained by uniformly replacing program constants a by any hybrid programs and quantifier symbols C() by formulas. An axiomatic rule is called surjective iff USR can instantiate it to any of its proof rule schema instances.
Lemma 16 (Surjective axioms). If ϕ is a dL formula that is built only from quantifier symbols of arity 0 and program constants but no function or predicate symbols, then ϕ is surjective. Axiomatic rules consisting of surjective dL formulas are surjective.
Proof. Let φ̃ be the desired instance of the axiom schema belonging to ϕ, that is, let φ̃ be obtained from ϕ by uniformly replacing each quantifier symbol C() by some formula, naïvely but consistently (same replacement for C() in all places), and accordingly for program constants a. The proof is by structural induction on ϕ, showing that there is a uniform substitution σ with FV(σ) = ∅ such that σ(ϕ) = φ̃. The proof for formulas is a mostly straightforward simultaneous induction with programs: 1. Consider a quantifier symbol C() of arity 0 and let φ̃ be the desired instance. Define σ = {C() ↦ φ̃}, which has FV(σ) = ∅, because it only substitutes quantifier symbols. Then σ(C()) ≡ σC() ≡ φ̃. The substitution is admissible for all arguments, since there are none.
3. Case 2 generalizes to a general uniform replacement argument: the induction hypothesis and the uniform replacement assumptions imply, for each subexpression θ • η of ϕ with any operator •, that the corresponding desired instance has to have the same shape θ̃ • η̃ and that there are uniform substitutions σ, τ with FV(σ) = FV(τ) = ∅ such that their union σ ∪ τ is defined. This shows the cases φ ∨ ψ, φ → ψ, φ ↔ ψ and, after a moment's thought, also ¬φ.
The proof for hybrid programs is by simultaneous induction with formulas, where most cases are analogous to the previous ones, except: 1. Consider a program constant a with desired instance ã. Then σ = {a ↦ ã} has FV(σ) = ∅ and satisfies σ(a) = σa = ã.
2. Consider the case x′ = θ & ψ with desired instance x′ = θ̃ & ψ̃, which has to have this shape.
By the induction hypothesis and the uniform replacement argument, there are uniform substitutions σ, τ with FV(σ) = FV(τ) = ∅ such that σ ∪ τ(x′ = θ & ψ) = x′ = θ̃ & ψ̃, where admissibility again follows from FV(σ ∪ τ) = ∅.
4. The case α; β is similar, and the case α ∪ β follows directly from the uniform replacement argument.
The corresponding result for axiomatic rules built from surjective dL formulas follows since surjective dL formulas can be instantiated by rule US to any instance, which, thus, continues to hold for the premises and conclusions in rule USR.
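To make Lemma 16 concrete, consider the following worked instantiation sketch. The particular axiom shape and substituted programs here are illustrative assumptions chosen for this example, not quoted from the article:

```latex
% Axiom built only from program constants a,b and a 0-ary quantifier symbol C():
\[ [a \cup b]\,C() \;\leftrightarrow\; [a]\,C() \,\wedge\, [b]\,C() \]
% A uniform substitution replacing those symbols (FV(\sigma) = \emptyset,
% so no admissibility clash can arise):
\[ \sigma = \{\, a \mapsto x := x+1,\;\; b \mapsto x' = x,\;\; C() \mapsto x \ge 0 \,\} \]
% The schema instance produced by rule US:
\[ [x := x+1 \,\cup\, x' = x]\; x \ge 0 \;\leftrightarrow\; [x := x+1]\; x \ge 0 \;\wedge\; [x' = x]\; x \ge 0 \]
```

Any other replacement of a, b, C() is obtained the same way, which is exactly what surjectivity of the axiom asserts.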
Lemma 16 generalizes to quantifier symbols with arguments that contain no function or predicate symbols, since those are always V-admissible. Generalizations to function and predicate symbol instances are possible with adequate care. The axiom [?] is surjective, because it does not have any bound variables, so admissibility of its instances is obvious. Similarly, rules MP and, with the twist from the proof of Theorem 15, rule ∀ become surjective. Axioms ∀i, ∀→, V∀ can be augmented for surjectivity in similar ways, where V∀ is surjective when p is instantiated such that x does not occur free (which is soundness-critical) and ∀i is instantiated respecting its shape.
A previous schematic completeness result [15] shows completeness relative to any differentially expressive logic. Lemma 16 makes it easy to augment this proof to show that the schema instantiations required for completeness are provable by US and USR from axioms or axiomatic rules. Both the first-order logic of differential equations [11] and discrete dynamic logic [13] are differentially expressive for dL.
Theorem 17 (Relative completeness). The dL calculus is a sound and complete axiomatization of hybrid systems relative to any differentially expressive logic L, i.e. every valid dL formula is provable in the dL calculus from L tautologies.
Proof. This proof refines the completeness proof for the axiom schemata of differential game logic [15] with explicit proofs of instantiability by US and USR. Write ⊢L φ to indicate that the dL formula φ can be derived in the dL proof calculus from valid L formulas. Soundness follows from Theorem 15, so it remains to prove completeness. For every valid dL formula φ, it has to be proved that φ can be derived from valid L tautologies within the dL calculus: from ⊨ φ prove ⊢L φ. The proof proceeds as follows: By propositional recombination, inductively identify fragments of φ that logically correspond to φ1 → ⟨α⟩φ2 or φ1 → [α]φ2. Find structurally simpler formulas from which these properties can be derived in the dL calculus by uniform substitution instantiations, taking care that the resulting formulas are simpler than the original one in a well-founded order. Finally, prove that the original dL formula can be re-derived from the subproofs in the dL calculus by uniform substitution instantiations.
The first insight is that, with the rules MP and ∀ and (by Lemma 16, all) relevant instances of ∀i, ∀→, V∀ and real arithmetic, the dL calculus contains a complete axiomatization of first-order logic. Thus, all first-order logic tautologies can be used without further notice in the remainder of the proof. Furthermore, by Lemma 16, all instances of ⟨·⟩, [∪], [;], [∗], K, I can be proved by rule US in the dL calculus.
By appropriate propositional derivations, assume φ to be given in conjunctive normal form. Assume that negations are pushed inside over modalities using the dualities ¬[α]φ ≡ ⟨α⟩¬φ and ¬⟨α⟩φ ≡ [α]¬φ, which are provable by axiom ⟨·⟩, and that negations are pushed inside over quantifiers using the definitorial first-order equivalences ¬∀x φ ≡ ∃x ¬φ and ¬∃x φ ≡ ∀x ¬φ. The remainder of the proof is by induction on a well-founded partial order ≺ from previous work [15], induced on dL formulas by the lexicographic ordering of the overall structural complexity of the hybrid programs in the formula and the structural complexity of the formula itself, with the logic L placed at the bottom of the partial order ≺. The base logic L is considered of lowest complexity by relativity, because ⊨ F immediately implies ⊢L F for all formulas F of L. In the following, IH is short for induction hypothesis. First note that the monotonicity rule M derives from G, K, ⟨·⟩ by Lemma 16 with a classical argument. The proof follows the syntactic structure of dL formulas.
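The classical argument behind the monotonicity rule can be spelled out; the following sketch derives the box form from Gödel generalization G and axiom K by modus ponens (standard modal reasoning, reproduced here for convenience; the diamond form follows dually via ⟨·⟩):

```latex
\[
  \begin{array}{ll}
  1. & \vdash \varphi \to \psi \quad \text{(premise)}\\
  2. & \vdash [\alpha](\varphi \to \psi) \quad \text{(G from 1)}\\
  3. & \vdash [\alpha](\varphi \to \psi) \to ([\alpha]\varphi \to [\alpha]\psi) \quad \text{(K)}\\
  4. & \vdash [\alpha]\varphi \to [\alpha]\psi \quad \text{(MP from 3 and 2)}
  \end{array}
\]
```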
0. If φ has no hybrid programs, then φ is a first-order formula; hence provable by assumption (and even decidable if it is in first-order real arithmetic [21], i.e. if no uninterpreted symbols occur).
1. φ is of the form ¬φ1; then φ1 is first-order and quantifier-free, as negations are assumed to be pushed inside, so Case 0 applies.
2. φ is of the form φ1 ∧ φ2; then ⊨ φ1 and ⊨ φ2, so individually deduce simpler proofs for ⊢L φ1 and ⊢L φ2 by IH, which combine propositionally to a proof of ⊢L φ1 ∧ φ2 using MP twice with the propositional tautology φ1 → (φ2 → φ1 ∧ φ2).

4. φ is a disjunction and, without loss of generality, has one of the following forms (otherwise use provable associativity and commutativity to reorder): let φ1 ∨ ⟨[α]⟩φ2 be a unified notation for those cases. Then φ2 ≺ φ, since φ2 has fewer modalities or quantifiers. Likewise, φ1 ≺ φ, because ⟨[α]⟩φ2 contributes one modality or quantifier to φ that is not part of φ1. Abbreviating the simpler formulas ¬φ1 by F and φ2 by G, the validity of φ yields ⊨ ¬F ∨ ⟨[α]⟩G, so ⊨ F → ⟨[α]⟩G, from which the remainder of the proof inductively derives ⊢L F → ⟨[α]⟩G. (9)

The proof of (9) proceeds by cases: (a) If ⟨[α]⟩ is the operator ∀x, then ⊨ F → ∀x G, where x can be assumed not to occur in F by a bound variable renaming. Hence ⊨ F → G. Since G ≺ ∀x G, because it has fewer quantifiers, also (F → G) ≺ (F → ∀x G), hence ⊢L F → G is derivable by IH. Then ⊢L F → ∀x G derives with Lemma 16 by generalization rule ∀, since x does not occur in F. The instantiations succeed by the remark after Lemma 16, using for V∀ that x ∉ V(F). The formula F → ∀x G is even decidable if it is in first-order real arithmetic [21]. The remainder of the proof concludes (F → ψ) ≺ (F → φ) from ψ ≺ φ without further notice. The operator ∀y is handled correspondingly by uniform renaming.
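Case (a) can be displayed as a short derivation sketch using the rule and axiom names from the text (assuming, per the bound variable renaming, that x does not occur in F):

```latex
\[
  \begin{array}{ll}
  1. & \vdash_L F \to G \quad \text{(by IH)}\\
  2. & \vdash_L \forall x\,(F \to G) \quad \text{(generalization rule } \forall \text{ from 1)}\\
  3. & \vdash_L \forall x\,(F \to G) \to (\forall x\,F \to \forall x\,G) \quad (\forall\!\to)\\
  4. & \vdash_L F \to \forall x\,F \quad (V\forall,\ \text{since } x \notin V(F))\\
  5. & \vdash_L F \to \forall x\,G \quad \text{(propositionally from 2--4 by MP)}
  \end{array}
\]
```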
(b) If ⟨[α]⟩ is the operator ∃x, then ⊨ F → ∃x G. If F and G are L formulas, then, since L is closed under first-order connectives, so is the valid formula F → ∃x G, which is, then, provable by IH and even decidable if it is in first-order real arithmetic [21]. Otherwise, F, G correspond to L formulas by expressiveness of L, which implies the existence of an L formula G♭ such that ⊨ G♭ ↔ G. Since L is closed under first-order connectives [15], the valid formula F → ∃x (G♭) is provable by IH, because (F → ∃x (G♭)) ≺ (F → ∃x G) since G♭ ∈ L while G ∉ L. Now, ⊨ G♭ ↔ G implies ⊨ G♭ → G, which is derivable by IH, because (G♭ → G) ≺ φ since G♭ is in L. From ⊢L G♭ → G, the derivable dual of axiom ∀→, namely ∀x (p(x) → q(x)) → (∃x p(x) → ∃x q(x)), derives ⊢L ∃x (G♭) → ∃x G, which combines with ⊢L F → ∃x (G♭) essentially by rule MP to ⊢L F → ∃x G.
The instantiations succeed by Lemma 16 and its subsequent remark.
(c) ⊨ F → ⟨[x′ = θ]⟩G implies ⊨ F → (⟨[x′ = θ]⟩G♭)♭, which is derivable by IH, as (F → (⟨[x′ = θ]⟩G♭)♭) ≺ φ since (⟨[x′ = θ]⟩G♭)♭ is in L. Since L is differentially expressive, ⊢L ⟨[x′ = θ]⟩G♭ ↔ (⟨[x′ = θ]⟩G♭)♭ is provable. Hence ⊢L F → ⟨[x′ = θ]⟩G♭ derives by propositional congruence. Now ⊨ G♭ → G is simpler (since G♭ is in L), so derivable by IH; thus ⟨[x′ = θ]⟩G♭ → ⟨[x′ = θ]⟩G derives by M. Together, both derive ⊢L F → ⟨[x′ = θ]⟩G propositionally.
This completes the proof of completeness (Theorem 17), because all syntactic forms of dL formulas have been covered.
With the exception of loops and differential equations, the proof of Theorem 17 confirms that successive unification with axioms gives a complete proof strategy. The search for applicable positions is deterministic using recursive computations as in Example 9. For loops and differential equations, a corresponding (differential) invariant search is needed, using parametric predicates j(x, x′) as in Example 8. This result proves that a very simple mechanism, essentially the single proof rule of uniform substitution, makes it possible to prove differential dynamic logic formulas from a parsimonious soundness-critical core with a few concrete formulas as axioms, without losing the completeness that axiom schema calculi enjoy.

Conclusions
Uniform substitutions lead to a simple and modular proof calculus that is entirely based on axioms and axiomatic rules, instead of soundness-critical schema variables with side conditions in axiom schemata. The US calculus is straightforward to implement, since axioms are just formulas, axiomatic rules are just pairs of formulas, and the uniform substitutions themselves have a straightforward recursive definition. The increased modularity also enables flexible reasoning with the fast contextual equivalences that uniform substitutions provide almost for free.
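That recursive definition can be sketched in a few lines. The following toy implementation uses hypothetical datatypes (not KeYmaera X's actual API) and applies a uniform substitution for 0-ary quantifier symbols and program constants by a single traversal, the case where, as in Lemma 16, FV(σ) = ∅ rules out admissibility clashes:

```python
from dataclasses import dataclass

# Illustrative expression datatypes (hypothetical; real provers carry
# full term/formula/program syntax and admissibility checks).

@dataclass(frozen=True)
class QSym:          # 0-ary quantifier symbol, e.g. C()
    name: str

@dataclass(frozen=True)
class ProgConst:     # program constant, e.g. a
    name: str

@dataclass(frozen=True)
class Box:           # [alpha]phi
    prog: object
    fml: object

@dataclass(frozen=True)
class And:           # phi /\ psi
    left: object
    right: object

@dataclass(frozen=True)
class Atom:          # a concrete atomic formula or program, kept verbatim
    text: str

def apply_subst(sigma, e):
    """Recursively apply the uniform substitution sigma to expression e.

    sigma maps symbol names to replacement expressions. Since it only
    replaces 0-ary quantifier symbols and program constants, FV(sigma)
    is empty and no admissibility clash can arise (cf. Lemma 16), so
    the traversal never has to block."""
    if isinstance(e, (QSym, ProgConst)):
        return sigma.get(e.name, e)
    if isinstance(e, Box):
        return Box(apply_subst(sigma, e.prog), apply_subst(sigma, e.fml))
    if isinstance(e, And):
        return And(apply_subst(sigma, e.left), apply_subst(sigma, e.right))
    return e  # concrete atoms are unaffected

# Axiom-shaped formula [a]C() /\ C() and one substitution instance:
axiom = And(Box(ProgConst("a"), QSym("C")), QSym("C"))
sigma = {"a": Atom("x := x + 1"), "C": Atom("x >= 0")}
inst = apply_subst(sigma, axiom)
```

Applying `sigma` replaces both occurrences of `C()` by the same formula, mirroring the "naïvely but consistently" replacement in the proof of Lemma 16.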
The key ingredient enabling such locality for differential equations is differential forms, which have a local semantics and make it possible to reduce reasoning about differential equations to local reasoning about (inequalities or) equations of differentials. Overall, uniform substitutions lead to a simple and modular sound and complete proof calculus for differential dynamic logic that is entirely based on axioms and axiomatic rules. Prover implementations merely reduce to uniform substitutions using the static semantics, starting from one copy of each axiom and axiomatic rule.