A Simple Logic of Functional Dependence

This paper presents a simple decidable logic of functional dependence LFD, based on an extension of classical propositional logic with dependence atoms plus dependence quantifiers treated as modalities, within the setting of generalized assignment semantics for first-order logic. The expressive strength, complete proof calculus, and meta-properties of LFD are explored. Various language extensions are presented as well, up to undecidable modal-style logics for independence and dynamic logics of changing dependence models. Finally, more concrete settings for dependence are discussed: continuous dependence in topological models, linear dependence in vector spaces, and temporal dependence in dynamical systems and games.


Introduction: Toward a logic of local dependence
Dependence is a ubiquitous notion, pervading areas from probability to reasoning with quantifiers, and from informational correlation in databases to causal connections or interactive social behavior. How the Moon moves depends on how the Earth moves, and vice versa. What you will do in our current Chess game depends on how I play. And dependence, or independence, matters. Whether variables are dependent or not is crucial to probabilistic calculation. And as for qualitative reasoning, dependence is at the heart of quantifier combinations in logic. Now ubiquity does not mean unity: there need not be one coherent notion behind all talk of dependence in science or daily life. 1 Still, over the last century, various proposals have been made for a basic logic of reasoning about dependence and independence, witness publications such as [5], [55], [3], [47], [62], [71]. While some of these logics are weak calculi of pure dependence statements, others are very strong and second-order. And most of them are non-classical: the propositional connectives break classical laws such as Tertium Non Datur, while the semantics differs radically from that of First Order Logic (FOL), being either a game semantics or some higher-order version of first-order semantics that evaluates formulas on sets of assignments.

validities in the weaker logic CRS. For instance, if a dependence model invalidates the law ∃x∃yφ → ∃y∃xφ, then there are non-trivial correlations between its variables.
A key goal of generalized assignment semantics was analyzing the causes of the undecidability of validity for FOL. The intent was to decouple the desideratum of a compositional semantics for the first-order language from additional mathematical assumptions (about the existence of all possible functional assignments) that increase complexity. Indeed, while CRS semantics is clearly compositional, its set of validities is decidable, forming roughly a core calculus of monotonicity and persistence reasoning inside full predicate logic. 4 Thus, CRS makes a distinction between general simple inferences inside FOL and more complex reasoning relying on special mathematical existence assumptions. 5 This lower complexity may be understood as a result of 'modalization', [13]. The above analysis also works on abstract state models for the first-order language without underlying objects, where first-order logic becomes a modal logic. This modal perspective will be significant in what follows, as it explains how a logic of dependence can be decidable.
Still, from a dependence perspective, the CRS quantifiers have some peculiar features. Notably, the Locality property of FOL fails: the truth value of a CRS-formula ϕ need not depend only on the values of its free variables; it may well depend on values of variables that do not even occur in ϕ. This 'dependence on irrelevant variables' is an artifact of the specific way in which CRS generalizes FOL semantics by letting only the values of X vary, keeping the values of all other variables fixed, including the ones not occurring at all in the given formula.
This problem was noticed early on in the CRS literature, leading to an alternative proposal for generalizing FOL quantifiers. 6 Since these alternative operators do satisfy Locality, we will call them local quantifiers, denoted here by ∀ X ϕ: s |= ∀ X ϕ iff t |= ϕ for every t ∈ A with s(y) = t(y) for all y ∈ F ree(∀ X ϕ) = F ree(ϕ) − X, where F ree(ϕ) is the set of free variables in ϕ. This fixes only the values of the actually occurring free variables that do not belong to X, allowing all the others to vary.
Note that in full models (with A = O^V ), both ∀ X ϕ and ∀Xϕ collapse to classical FOL quantifiers; so they are both entitled to play the role of generalized FOL quantifiers. Even so, both versions of CRS still have a major drawback: there is no explicit way to say that a variable x functionally depends on other variables. Moreover, no new validities are added that capture interesting laws of dependence. For this, further steps are needed, to be previewed now.
Remark 1.2. The language of CRS also supports modalities for substitutions. A formula [y/x]ϕ (ϕ with all free occurrences of x replaced by y, where no substituted y becomes bound) is true at an assignment s if there is an available assignment t in the model equal to s except that t(x) = s(y), with ϕ true at t. There is also a natural extension to simultaneous substitutions [y/x]ϕ (for tuples x, y of variables), which do not reduce to iterated single ones. The usual recursive definition of syntactic substitution in FOL now expresses various substantial properties of the (in general, partial) semantic substitution function on assignments and its interactions with CRS quantifiers, cf. [13]. For the proof theory of this modal view of substitution, cf. [61].

Explicit logic of local dependence
As we saw, CRS is an 'implicit' logic of dependence. In this paper, we add the explicit syntactic atomic dependence formulas D X y of [71], now read locally as: X locally determines (the value of) y, or y locally depends on X. These atomic formulas are interpreted at assignments s ∈ A using the local dependence relation D s X y, saying that all admissible assignments that keep the values of X fixed to the current ones also fix the value of y: s |= D X y iff s(y) = t(y) holds for every t ∈ A satisfying s(x) = t(x) for all x ∈ X.
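The semantic clause just given is directly executable. Here is a minimal Python sketch (our own illustration, not from the paper: a dependence model is taken to be a list of admissible assignments, each a dict from variable names to values; the names `agrees` and `D` are ours):

```python
def agrees(s, t, X):
    """s =_X t: assignments s and t agree on all variables in X."""
    return all(s[x] == t[x] for x in X)

def D(A, s, X, y):
    """s |= D_X y over admissible assignments A: every t in A that
    agrees with s on X also agrees with s on y."""
    return all(s[y] == t[y] for t in A if agrees(s, t, X))

# A toy model over variables x, y, z:
A = [{"x": 0, "y": 1, "z": 0},
     {"x": 1, "y": 1, "z": 0},
     {"x": 2, "y": 0, "z": 0}]
s = A[0]
print(D(A, s, {"x"}, "y"))   # True: fixing x to 0 fixes y to 1
print(D(A, s, {"y"}, "x"))   # False: y = 1 is compatible with x = 0 and x = 1
```

Global dependence is then just the statement that `D(A, s, X, y)` holds for every `s` in `A`.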
Next, we reconsider the quantifiers. From a dependence perspective, it is natural to introduce dependence modalities or dual quantifiers D X ϕ, which 'fix' the values of X to the current ones. More precisely, like the dependence atoms, these talk about all the assignments that keep X equal to its current value(s), saying that they also fix the truth value of ϕ to 'true': s |= D X ϕ iff t |= ϕ holds for every t ∈ A satisfying s(x) = t(x) for all x ∈ X. 7 We read D X ϕ as X locally determines the truth of ϕ. Recall that in standard FOL, 'free' variables are the ones whose current values are kept fixed (while the values of 'bound' variables are ignored as irrelevant). This 'fixing' of the values of X explains why we sometimes call dependence modalities D X ϕ 'dual quantifiers': they 'free' the variables in X (rather than binding them), while binding all the other variables (in V −X, regardless of whether they occur in the formula).
Like the local universal quantifiers ∀ X ϕ, dependence modalities do satisfy Locality. But they appear to be more fundamental: indeed, ∀ X ϕ is simply definable via the equivalence ∀ X ϕ ↔ D F ree(ϕ)−X ϕ, whereas the converse is not as straightforward. 8 Note also here that, like both FOL and CRS quantifiers ∀X (but in contrast to local quantifiers ∀ X ), dependence modalities validate the standard Distribution axiom D X (ϕ → ψ) → (D X ϕ → D X ψ). 9 Dependence modalities can also quantify over all assignments in A: taking X to be the empty set yields the universal modality ∀ ϕ := D ∅ ϕ, saying that all admissible assignments satisfy ϕ. As a consequence, global dependence of y on X can be expressed as ∀ D X y.
The resulting logic of functional dependence LFD is more expressive than may meet the eye, as will become clear in what follows. Also, while capturing the main properties of functional dependence, it retains all classical Boolean operators with their standard laws, thus demonstrating that dependence is not an intrinsically non-classical phenomenon. Neither is basic reasoning about dependence necessarily complex: LFD is simple and well-behaved, with transparent axiomatizations and good meta-properties: decidability, forms of the finite model property, compactness, strong interpolation, and a form of cut elimination. Of course, this does not come for free. As always in logic, system design involves a balance between expressive power and other nice system properties. The more expressive the language, the more complex the validities, or, stated conversely, the more well-behaved the logic, the less expressive the language. On the minimal basis language of LFD, however, one can analyze just which additional features in modeling dependence (and independence) force greater complexity for a logical system. Moreover, the modal flavor of LFD brings interesting connections with epistemic logics [33,32,6], interrogative and inquisitive logics [20,26,25], and situation-theoretic logics of informational correlations, [19]. Finally, as we shall show, LFD offers a platform for studying concrete notions of dependence in many fields in a way that imports only a minimum of logical complexity.

Structure of this article
Section 2 defines our models, giving a structural characterization of dependence. Section 3 introduces the logic LFD, together with a translation into FOL, a discussion of the differences between LFD quantifiers and the classical ones, and an equivalent modal relational semantics. The tandem of first-order and modal views will recur throughout the paper. Section 4 proves the decidability of LFD using object-free 'type models', while Appendix A has proofs of decidability and completeness using standard modal techniques. Section 5 presents a Hilbert-style axiomatization and a sequent calculus admitting a form of cut elimination, as well as interpolation and Beth definability results (with proofs in Appendix B). Section 6 explores extensions of LFD, including function terms, identity, independence, informational correlation, and dynamic modalities over changing dependence models. Section 7 looks at richer settings for dependence: including vector spaces, topological models, and dynamical systems. Section 8 draws comparisons with other approaches, including some discussion of their expressive surplus over LFD and questions raised by this. Conclusions and further prospects are found in Section 9.

State spaces, dependence graphs, functions
The starting point of this paper is the set of basic properties of semantic dependence relations, which will be determined here. A natural duality will also emerge with explicit functional definitions of dependence, as well as appealing connections with consequence relations.

Dependence models
Throughout this paper, we assume given a set of variables V and a relational vocabulary (P red, ar), where P red is a set of predicate symbols and ar : P red → N is an arity map, associating to each predicate P ∈ P red a natural number ar(P ).

Definition 2.1 (Dependence model). A dependence model is a triple M = (O, I, A), where O is a set of objects, I is an interpretation map associating to each predicate P ∈ P red an ar(P )-ary relation I(P ) on O, and A ⊆ O^V is a set of admissible assignments s : V → O. A dependence model is full if all possible assignments are admissible, i.e., if A = O^V . For assignments s ∈ A and sets X ⊆ V , we put s↾X for the restriction of s to domain X.

Definition 2.2 (Agreement, local dependence, atoms). In dependence models, we define three basic relations: (a) for each set X ⊆ V of variables, an agreement relation s = X t on assignments s, t ∈ A, (b) for each s ∈ A, a local dependence relation D s X y between sets X ⊆ V and variables y ∈ V , and (c) for each n-ary predicate P and each assignment s ∈ A, an n-ary relation P s ⊆ V^n on variables, given by (where we use the notation = y for = {y} ):

s = X t iff s(x) = t(x) for all x ∈ X,
D s X y iff s = X t implies s = y t, for all t ∈ A,
P s x 1 . . . x n iff I(P )(s(x 1 ), . . . , s(x n )) holds.

If s = X t, s and t are said to agree on X, and if D s X y we say that y locally depends on X at s. For any Y ⊆ V , we write D s X Y if D s X y holds for all y ∈ Y . Finally, we skip the set brackets for singletons, writing D s x Y for D s {x} Y , and D s x y for D s {x} {y}.
Definition 2.3 (Global dependence). The global version of dependence quantifies over all assignments in A: y depends on X in M, written D M X y, if D s X y holds locally at all assignments s ∈ A. As for local dependence, this notation is extended to sets Y ⊆ V , by writing D M X Y if D M X y holds for all y ∈ Y ; and again, set brackets are skipped for singletons. When the context is clear, superscripts M for current models will be dropped.
Note that our global dependence statement D M X y matches the semantic clause for the so-called dependence atom = (X; y) introduced in Väänänen's Dependence Logic [71], when interpreted on the 'team' A of all admissible assignments. 10

Dependence graphs
The basic structural properties of dependence relations are as follows.
Definition 2.4. Let R ⊆ P(V ) × V be a relation between sets of variables and variables. Using the same conventions as for the dependence relation D above (writing R X y instead of (X, y) ∈ R, R X Y as an abbreviation for the conjunction of R X y for all y ∈ Y , and skipping set brackets for singletons), we say that: R satisfies Reflexivity if y ∈ X implies R X y; R satisfies Monotonicity if R X y and X ⊆ X ′ imply R X ′ y; and R satisfies Transitivity if R X Y and R Y z imply R X z. Finally, we call a variable y ∈ V an R-constant if R ∅ y holds.
It is well-known that the combination of Reflexivity, Transitivity and Monotonicity provides a characterization of classical logical consequence, cf. [69]. The following two results show that the same three properties characterize the relation of (local and global) dependence: 11

Fact 2.6. For every dependence model M = (M, A) and assignment s ∈ A, both global dependence D M X y and local dependence D s X y satisfy Reflexivity, Transitivity and Monotonicity. For both relations R ∈ {D M , D s }, the R-constants are exactly the variables y ∈ V whose value is the same in every assignment in A.

Theorem 2.7 (Representation).

1. For every relation R ⊆ P(V ) × V satisfying Reflexivity, Transitivity and Monotonicity, there is a dependence model M whose global dependence relation D M coincides with R. Moreover, if V is finite, then M can be taken to be finite as well, of size bounded by 2^|V|.

2.
For every relation R ⊆ P(V ) × V satisfying Reflexivity, Transitivity and Monotonicity, there is a dependence model M R where R coincides with all the local dependence relations D s (at all assignments s ∈ A), and hence it also coincides with the global dependence D M . Moreover, if V is finite, then M R can be taken to be finite as well, of size bounded by 2^(2^|V|).

3.
Let R be a family of relations R ⊆ P(V ) × V , each satisfying Reflexivity, Transitivity and Monotonicity, s.t. all relations "agree on constants" (i.e., R ∅ y iff R ′ ∅ y, for all y ∈ V and R, R ′ ∈ R). Then R coincides with the family {D s : s ∈ A} of all local dependence relations of some dependence model M R . Moreover, if V is finite then M R can be taken to be finite.

Proof.
For a start, we need some preliminary notations and results. Let R ⊆ P(V ) × V be a relation satisfying Reflexivity, Transitivity and Monotonicity. A subset X ⊆ V is R-closed if we have y ∈ X for all y ∈ V satisfying R X y. Let Γ be the family of all R-closed subsets of V . Note that Γ is closed under arbitrary intersections. 12 [Note: for greater readability in what follows, we have put the simple proof of this and some later auxiliary statements in footnotes.] Also, it is immediate that the family Γ contains the set V of all variables. We put X̄ := {y ∈ V : R X y} for the R-closure of X, which is the least R-closed set s.t. X ⊆ X̄. 13

Proof of Part 1. Let R ⊆ P(V ) × V satisfy Reflexivity, Transitivity and Monotonicity. Consider the model M = (O, I, A) with (a) O = V ∪ P(V ), (b) the interpretation map I making all predicates false, and (c) the family A = {s X : X ∈ Γ} of assignments s X , one for each R-closed set X, given by: s X (y) := y if y ∈ X, and s X (y) := X otherwise. Note that |A| = |Γ| ≤ |P(V )| = 2^|V|. The model M validates the following two claims, for all Y, Z ∈ Γ and U ⊆ V :

(a) s Y = U s Z holds iff either Y = Z or U ⊆ Y ∩ Z;
(b) D M X y holds iff R X y holds.

Claim (a): This follows from the definition of the assignments s X , by checking where the two assignments can differ: they agree on every u ∈ Y ∩ Z (both return u), and on a variable u outside Y ∩ Z only if Y = Z.

Claim (b): From left to right, let D M X y, and consider the assignments s X̄ and s V , with V the set of all variables. Note that s V (y) = y for all variables y. By (a), we have s V = X s X̄ (since X ⊆ X̄ = V ∩ X̄). Therefore, since D M X y, s V = y s X̄ , and this means by the definition of the two assignments that s V (y) = s X̄ (y) = y. In particular, then, y ∈ X̄, i.e., R X y.
From right to left, assume that R X y. To show that D M X y holds, let s Z , s U ∈ A (with Z, U ∈ Γ) be any two assignments with s Z = X s U . By (a), s Z = X s U implies that either Z = U or X ⊆ Z ∩ U . In the first case, Z = U immediately gives s Z = y s U , as desired. In the second case, R X y and X ⊆ Z ∩ U imply R Z y and R U y by Monotonicity, which means by the R-closure of Z and U that y ∈ Z and y ∈ U . By definition, then, s Z (y) = s U (y) = y. 14

Proof of Part 2. Let R ⊆ P(V ) × V satisfy Reflexivity, Transitivity and Monotonicity. For each x ∈ V , we define a binary relation ∼ x on families of R-closed sets A, B ∈ P(Γ), by putting: A ∼ x B iff x ∈ ⋂(A △ B), where △ denotes symmetric difference and the intersection of the empty family is taken to be V . It is easy to check that each ∼ x is an equivalence relation. 15 For any family A ⊆ Γ and variable x ∈ V , we denote by [A] x the equivalence class of A modulo ∼ x .
We construct now a model M R = (O R , I R , A R ), where the set of objects O R consists of all the equivalence classes modulo all the relations ∼ x ; the interpretation I R makes all predicates false; and A R := {s A : A ∈ P(Γ)} consists of assignments s A given by s A (x) := [A] x for all x ∈ V . This model validates the following claims, for all s A , s B ∈ A R and U ⊆ V :

(a) s A = U s B holds iff U ⊆ ⋂(A △ B);
(b) D s A X y holds iff R X y holds.

Claim (a): This follows directly from the definition of the assignments s A , via the sequence of equivalences: s A (x) = s B (x) iff [A] x = [B] x iff A ∼ x B iff x ∈ ⋂(A △ B).

Claim (b): From left to right, suppose that D s A X y holds in M R . Take the family B := A ∪ {X̄}. Case (i): X̄ ∉ A. Then we have A △ B = {X̄}, hence ⋂(A △ B) = X̄. Thus, by (a) we have s A = X s B (since X ⊆ X̄). It follows by the truth of D X y at s A that s A = y s B . But this means by the already proved equivalence (a) that y ∈ X̄, i.e., R X y. Case (ii): X̄ ∈ A. Repeat the preceding argument, but now w.r.t. the families A and A − {X̄}.
From right to left, assume that R X y, and consider any assignment s A ∈ A R . Let s B ∈ A R be any admissible assignment s.t. s A = X s B . By claim (a), X ⊆ ⋂(A △ B). Putting this together with R X y, we obtain by Monotonicity that R ⋂(A△B) y. But ⋂(A △ B) is R-closed (being the intersection of a family of R-closed sets), and therefore y ∈ ⋂(A △ B). Applying claim (a) again, we conclude that s A = y s B . Thus s A satisfies D X y, as desired. The desired conclusion follows immediately from the second claim. 16

Proof of Part 3. Let R be a family of relations on P(V ) × V satisfying Reflexivity, Transitivity and Monotonicity, and agreeing on constants. Put C = {y ∈ V : R ∅ y} ⊆ V for the common set of R-constants (the same for all R ∈ R, by assumption). Construct all the models M R as in Part 2, for every R ∈ R. Then each R is both the local and the global dependence relation in the corresponding M R .
The only remaining step for our main proof involves the following general disjoint union construction on dependence models. Define a new model M R = (O R , I R , A R ), where (a) O R is the disjoint union of the common set C of constants and all the sets of objects of the models M R , (b) the interpretation I R makes all predicates false, and (c) A R := {s R : R ∈ R, s ∈ A R } consists of new assignments s R , each associated to an old assignment s ∈ A R with R ∈ R, given by s R (x) := x for x ∈ C, and s R (x) := (s(x), R) otherwise (tagging objects with their component, to keep the copies disjoint). Note that in this model, all assignments agree on the constants in C, while two assignments from different components never agree on any variable outside C. Using these facts, it is easy to see that the local dependence statement D w X y holds in M R at a state w = s R ∈ A R iff it holds at s in the corresponding component M R , and so the global dependence statement D M X y holds in M R iff it holds in all components. It follows that R coincides with the family of all local dependence relations within M R , and that the global dependence relation on M R is the intersection of all the relations in R. 17

The preceding representation method uses a large number of objects in general. What happens when one restricts the available objects that can be assigned to variables?

Example 2.8. Consider a dependence model given by the table below:

x y z
0 1 0
1 1 0
2 0 0

This table uses three values to represent a strict linear dependence order of three variables x, y, z: we have global dependencies D x y and D y z, but not the other way around. But as is easy to see, this cannot be done with only two objects. 18 To state the underlying observation positively, the following is valid on two-valued models: if D x y and D y z, then D y x or D z y.
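The Part 1 construction above lends itself to a computational check. The following Python sketch is our own illustration: the sample relation R is given via its closure operator `cl` (here encoding the chain in which y depends on x and z depends on y), and we verify that global dependence in the constructed model coincides with R:

```python
from itertools import chain, combinations

V = ["x", "y", "z"]

def cl(X):
    """R-closure of X for a sample relation R: R_X y iff y is in cl(X).
    The closure rules encode the chain: y depends on x, z depends on y."""
    X = set(X)
    while True:
        new = set(X)
        if "x" in new:
            new.add("y")
        if "y" in new:
            new.add("z")
        if new == X:
            return frozenset(X)
        X = new

def subsets(vs):
    return chain.from_iterable(combinations(vs, r) for r in range(len(vs) + 1))

# Part 1 construction: one assignment s_X per R-closed set X, with
# s_X(y) = y if y is in X, and s_X(y) = X (the set itself, as an object) otherwise.
closed = {cl(X) for X in subsets(V)}
A = [{y: (y if y in X else X) for y in V} for X in closed]

def glob_dep(A, X, y):
    """Global dependence D_X y: any two assignments agreeing on X agree on y."""
    return all(s[y] == t[y] for s in A for t in A
               if all(s[x] == t[x] for x in X))

# Global dependence in the constructed model coincides with R:
assert all(glob_dep(A, set(X), y) == (y in cl(X))
           for X in subsets(V) for y in V)
print(sorted(len(X) for X in closed))  # [0, 1, 2, 3]: the closed sets {}, {z}, {y,z}, {x,y,z}
```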
More generally, the following can be shown: Arbitrarily high finite numbers of values are needed to represent arbitrary finite linear orders.
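For the two-valued case, the observation above can be verified exhaustively, since there are only 2^8 − 1 = 255 nonempty sets of two-valued assignments to x, y, z. A brute-force Python check (our own illustration):

```python
from itertools import product

V = ["x", "y", "z"]

def glob_dep(A, X, y):
    """Global dependence D_X y over the set A of admissible assignments."""
    return all(s[y] == t[y] for s in A for t in A
               if all(s[v] == t[v] for v in X))

# Enumerate every nonempty 'team' of two-valued assignments to x, y, z.
assignments = [dict(zip(V, vals)) for vals in product([0, 1], repeat=3)]
teams = [[s for s, keep in zip(assignments, bits) if keep]
         for bits in product([0, 1], repeat=len(assignments))]
teams = [A for A in teams if A]

# On all 255 two-valued models: D_x y and D_y z imply D_y x or D_z y.
assert all(glob_dep(A, {"y"}, "x") or glob_dep(A, {"z"}, "y")
           for A in teams
           if glob_dep(A, {"x"}, "y") and glob_dep(A, {"y"}, "z"))
print(len(teams))  # 255
```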
What are minimal sets of objects for representing given dependence graphs? How can one axiomatize the structural dependence properties for each fixed finite set of objects?

Remark 2.9 (Dependence and consequence). As already mentioned, the three stated structural properties (Reflexivity, Transitivity and Monotonicity) are known to be characteristic for the relation of classical logical consequence, [69]. But the preceding observations show one essential difference. To represent a three-element linear sequence of variables ordered by dependence, three objects were needed in Example 2.8. But to represent an analogous sequence of strict consequences, only two truth values are needed. In fact, any finite acyclic graph can be represented in terms of classical logical consequence.
All this suggests a move to non-classical consequence relations without a simple truth value semantics. In fact, the format D X y, with multiple 'premises' in X and a single 'conclusion' y, resembles Gentzen-style sequents for intuitionistic logic, and dependence has been related to intuitionistic implication, [2]. Also, given the analogies between dependence and implication between questions to be discussed in Section 3.5, dependence has been related to notions of implication in interrogative and inquisitive logics, [6], [25].
The analogy between dependence and consequence can also be extended in other ways. For instance, adopting a classical sequent format, one can study dependencies D X Y read disjunctively in the set Y . Or, softening the strict universal quantification over assignments in our semantic notion, one obtains new non-monotonic varieties of dependence where the dependence only holds 'under normal circumstances', by analogy with non-monotonic logics, [23].

Explicit function definitions
Our semantic definition makes dependence D X y a form of implicit definability: fixing the values of the variables in X fixes the value of the dependent variable y. But there is also a broad alternative intuition of dependence, viz. as y being definable in terms of X using some repertoire of available operations. 19 The two views are connected. In mathematics, implicit semantic definability justifies the explicit introduction of a corresponding function.
This discussion suggests the following general line.
Definition 2.10. Given a dependence model M, a set X ⊆ V of variables and a variable y ∈ V , let F y X be the partial function from X-indexed tuples in O^X to objects in O given as follows: F y X (u) = o holds iff we have both (1) u = s↾X for some assignment s ∈ A, and (2) t(y) = t ′ (y) = o for all assignments t, t ′ ∈ A with t↾X = t ′ ↾X = u; otherwise F y X (u) is undefined. We denote by dom(F y X ) the domain of this function. The expansion of M with all these partial functions F y X is called the induced function model F (M).
The partial functions introduced in this Skolemization-like manner make explicit the functions that underlie local and global dependencies:

Fact 2.11. Induced function models satisfy the following two equivalences: (a) D s X y holds iff s↾X ∈ dom(F y X ), in which case F y X (s↾X) = s(y); (b) D M X y holds iff dom(F y X ) = {s↾X : s ∈ A}.

Explicit definability is a natural companion to our semantic view of dependence as implicit determination. 20 Developing an abstract, purely operational approach matching our semantic view of dependence may be worthwhile, and some concrete instances of how such an approach might work can be found in Section 7.2 on the notion of linear dependence in vector spaces.
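The induced partial functions of Definition 2.10 are easy to compute on finite models. A Python sketch (our own illustration; the name `induced_function` is ours) that extracts F y X from the set of admissible assignments:

```python
def induced_function(A, X, y):
    """The partial function F^y_X of Definition 2.10: defined on an
    X-restriction u iff all admissible assignments extending u agree
    on y, in which case it returns that common value."""
    X = sorted(X)
    F = {}
    for s in A:
        u = tuple(s[x] for x in X)
        values = {t[y] for t in A if tuple(t[x] for x in X) == u}
        if len(values) == 1:            # all extensions of u fix y
            F[u] = values.pop()
    return F

# A model in which y is globally a function of x, but not vice versa:
A = [{"x": 0, "y": 1, "z": 0},
     {"x": 1, "y": 1, "z": 0},
     {"x": 2, "y": 0, "z": 0}]
print(induced_function(A, {"x"}, "y"))  # {(0,): 1, (1,): 1, (2,): 0} -- total
print(induced_function(A, {"y"}, "x"))  # {(0,): 2} -- partial: undefined at y = 1
```

Totality of F y X on the realized X-tuples corresponds exactly to global dependence D M X y, as in Fact 2.11.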

The logic of functional dependence
We now introduce the language of our logic LFD of functional dependence.

Syntax and semantics of LFD
Definition 3.1. Given a vocabulary (V, P red, ar), the language LFD is recursively given by:

ϕ ::= P x | D X y | ¬ϕ | ϕ ∧ ϕ | D X ϕ

where y ∈ V is any variable, P is any predicate symbol, x = (x 1 . . . x n ) is a finite string of variables of length n = ar(P ), and X ⊆ V is a finite set of variables.
As announced in the Introduction, the dependence modality D X ϕ is read as "X locally determines ϕ": the current values of X determine the truth of ϕ. Similarly, D X y is read as "X locally determines y": it says that the current values of X determine the value of y. 21 One important notion in LFD is that of free variables. Here we have to be careful. As in FOL, we want the free variables to be those whose current values determine the truth value of a formula; indeed, binding a variable is a way of "forgetting" its value as irrelevant, while the specific value currently assigned to a free variable is relevant for the meaning of the formula. But the definition is subtly different from FOL, since the dual quantifiers D X ϕ explicitly list the variables that are left free (rather than listing the bound ones, as do the usual quantifiers).

Fact 3.4 (Locality). For all assignments s, t ∈ A with s = Free(ϕ) t: s |= ϕ iff t |= ϕ.

Proof.
The proof is by induction on ϕ. The atomic and Boolean cases are entirely straightforward. Dependence modalities: Assume that s = X t and s |= D X ϕ. To show that t |= D X ϕ, consider any w ∈ A with t = X w. Here s = X t and t = X w imply s = X w. This together with D X ϕ then yields w |= ϕ as desired, by the semantics of D X . Dependence atoms: Assume that s = X t and s |= D X y. Then s = y t by the semantics of D X . To show that t |= D X y, consider any w ∈ A with t = X w. Here s = X t and t = X w imply s = X w, which together with s |= D X y gives that s = y w. From this and s = y t, it follows that t = y w as desired.
Here is a useful immediate consequence in the presence of additional local dependencies. 21

21 As already mentioned in the Introduction, the obvious analogy between D X ϕ and D X y can be made precise by introducing Boolean variables ?ϕ as in [6], that record the truth values of formulas ϕ, and then defining the dependence quantifiers as D X ϕ := ϕ ∧ D X ?ϕ. Here, we choose to take these quantifiers as primitive, as they have an independent logical motivation and, in our view, they play an equally important role as the dependence atoms in the study of (partial) dependencies and correlations between variables.

Corollary 3.5. If two assignments s, t agree on all formulas with free variables in X, then they also agree on all formulas with free variables in the extended set {y | s |= D X y}. 22

Abbreviations. Boolean connectives ⊤, ⊥, ϕ ∨ ψ, ϕ → ψ, ϕ ↔ ψ are defined as usual. We use D x y for D X y when X = {x} is a singleton, and similarly D x ϕ. Other abbreviations are: ∀ X ϕ := D Free(ϕ)−X ϕ, ∃ X ϕ := ¬∀ X ¬ϕ, E X ϕ := ¬D X ¬ϕ, as well as the universal modality ∀ ϕ := D ∅ ϕ and its dual ∃ ϕ := ¬∀ ¬ϕ. These defined connectives behave as expected. E.g., syntactically, we have that F ree(∀ X ϕ) = F ree(∃ X ϕ) = F ree(ϕ) − X. Semantically, e.g., ∀ ϕ means that all assignments satisfy ϕ, etc. Note that our defined formula ∀ X ϕ matches the semantics of the 'local' version of the universal quantifier (in the sense of satisfying the Locality principle from Fact 3.4), as given in the Introduction. Recall that these amount to the standard FOL quantifiers on full models, and are their closest analogue on arbitrary dependence models. Thus, LFD contains the first-order quantifiers, generalized from their standard models to the larger realm of dependence models. For further discussion of the meaning of LFD quantifiers, cf. Section 3.2.
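The full satisfaction relation of LFD over a finite dependence model is straightforwardly computable. Below is a minimal evaluator in Python (our own illustration; the tuple encoding of formulas and the names `free`, `sat`, `forall` are ours, with the defined quantifier following the abbreviation ∀ X ϕ := D Free(ϕ)−X ϕ):

```python
# Formulas as tagged tuples: ("P", name, vars) for atoms P x,
# ("D", X, y) for dependence atoms D_X y, ("Dm", X, f) for modalities D_X f,
# plus ("not", f) and ("and", f, g); X is a frozenset of variable names.
def free(f):
    """Free variables of an LFD formula; D_X binds everything outside X."""
    tag = f[0]
    if tag == "P":   return set(f[2])
    if tag == "D":   return set(f[1]) | {f[2]}
    if tag == "not": return free(f[1])
    if tag == "and": return free(f[1]) | free(f[2])
    if tag == "Dm":  return set(f[1])

def sat(I, A, s, f):
    """Truth of f at the admissible assignment s, with interpretation I."""
    tag = f[0]
    if tag == "P":   return tuple(s[x] for x in f[2]) in I[f[1]]
    if tag == "not": return not sat(I, A, s, f[1])
    if tag == "and": return sat(I, A, s, f[1]) and sat(I, A, s, f[2])
    if tag == "D":
        return all(s[f[2]] == t[f[2]] for t in A
                   if all(s[x] == t[x] for x in f[1]))
    if tag == "Dm":
        return all(sat(I, A, t, f[2]) for t in A
                   if all(s[x] == t[x] for x in f[1]))

def forall(X, f):
    """Defined local quantifier: forall_X f := D_{free(f) - X} f."""
    return ("Dm", frozenset(free(f) - set(X)), f)

# Example: P holds of the values 0 and 1, on a three-assignment model.
I = {"P": {(0,), (1,)}}
A = [{"x": 0, "y": 1, "z": 0},
     {"x": 1, "y": 1, "z": 0},
     {"x": 2, "y": 0, "z": 0}]
Px = ("P", "P", ("x",))
print(sat(I, A, A[0], ("Dm", frozenset({"x"}), Px)))  # True: x = 0 fixes the truth of Px
print(sat(I, A, A[0], forall({"x"}, Px)))             # False: the x-value 2 falsifies Px
```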
Remark 3.6 (Informational interpretation). The set A of admissible assignments in a dependence model is a 'complete database', as in Example 1.1, and can be interpreted as an information structure, encoding the 'knowledge base' of an (anonymous) agent: a full list of all the tuples that are consistent with the agent's background information. The underlying assumption is that only one tuple (the 'current assignment') represents the actual state of the world, but that tuple is typically unknown: the agent can only narrow down the possibilities to the set A. The universal modality ∀ ϕ := D ∅ ϕ then captures the agent's information: ∀ ϕ means that ϕ is 'known'. The constant-value formula Cx := D ∅ x says that the value of x is 'known'. Dependence quantifiers capture a form of conditional knowledge: D X ϕ means that the agent can know that ϕ if she is given the current values of X. Analogously, dependence atoms D X y express a form of conditional knowledge of a value: the agent can know the value of y if given the values of X. Finally, global dependence ∀ D X y captures known correlations: the agent knows how to determine the value of y from the values of X.
Remark 3.7 (Further notions of dependence). We can also formalize a common alternative intuition of dependence, [13], as 'changing x involves changing y': this is just D y x. Moreover, weaker notions of dependence can be defined, such as 'restricting the value of x to property P restricts the value of y to have property Q' (cf. Remark 1.2). This is expressed by ∀ (P x → Qy). Yet another definable notion of dependence is that the current values of X restrict the value of y to have property Q, captured by the formula D X Qy.
Example 3.8. Here are some illustrations of valid and invalid consequences: (a) the implication (E X ϕ ∧ E Y ψ) → E X∪Y (ϕ ∧ ψ) is not valid. However, (b) if Free(ϕ) ⊆ X and Free(ψ) ⊆ Y , the preceding implication is valid, and in fact we have the stronger validity (E X ϕ ∧ E Y ψ) → (ϕ ∧ ψ). 23

4.
The Distribution axiom is sound for dual quantifiers: D X (ϕ → ψ) → (D X ϕ → D X ψ) is valid.

5. (a) D X ϕ → ϕ and ∀ X ϕ → ϕ are valid. However, (b) the classical elimination rule for the universal quantifier is not sound: e.g., ∀ x P x → P y is not valid.

The last non-validity is explained by the fact that in LFD variables are no longer arbitrary placeholders, but have an individual meaning, denoting specific quantities (as is commonly done in the empirical sciences, where e.g. t stands for time). This means that, unlike in FOL, bound alphabetic variants may have different truth values: ∀ x P x can be true in a model while ∀ y P y is false. On the other hand, LFD still allows for a formulation of the intuition behind bound variants that the choice of variables is arbitrary: though no longer holding inside one given model, the invariance under renaming still holds across models.

Fact 3.9 (Renaming). Let σ be a permutation of all variables, and let σ(ϕ) be the result of replacing in ϕ every occurrence of any variable x ∈ V by σ(x). Moreover, for every s ∈ A, let s σ be the assignment given by putting s σ (x) := s(σ −1 (x)) for all variables x, and let M σ = (M, A σ ) be the dependence model obtained by taking A σ = {s σ : s ∈ A}. Then the following equivalence holds:

M, s |= ϕ iff M σ , s σ |= σ(ϕ).

As a consequence, validity is invariant under variable renaming: ϕ is valid iff σ(ϕ) is valid.

Proof.
The proof of the first claim is by induction on ϕ.
Dependence atoms. Using the observation that s σ (σ(x)) = s(x) for all x ∈ V , together with the fact that σ(D X y) = D σ(X) σ(y), the truth clause for dependence atoms and the induction hypothesis, we obtain the desired equivalence. Finally, the second claim follows immediately from the first, by quantifying over both admissible assignments and dependence models.
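The Renaming principle just proved can be tested on a small finite model. A Python sketch (our own illustration) checking the equivalence for dependence atoms, under a sample permutation σ:

```python
from itertools import chain, combinations

def dep(A, s, X, y):
    """s |= D_X y over the admissible assignments A."""
    return all(s[y] == t[y] for t in A
               if all(s[x] == t[x] for x in X))

# A sample permutation sigma of the variables, and its action on assignments:
sigma = {"x": "y", "y": "z", "z": "x"}
inv = {v: k for k, v in sigma.items()}

def rename(s):
    """s^sigma(x) := s(sigma^{-1}(x))."""
    return {x: s[inv[x]] for x in s}

A = [{"x": 0, "y": 1, "z": 0},
     {"x": 1, "y": 1, "z": 0},
     {"x": 2, "y": 0, "z": 0}]
A_sigma = [rename(s) for s in A]

# Renaming invariance for dependence atoms:
# s |= D_X y in M  iff  s^sigma |= D_{sigma(X)} sigma(y) in M^sigma.
subsets = lambda vs: chain.from_iterable(combinations(vs, r) for r in range(len(vs) + 1))
assert all(dep(A, s, set(X), y) ==
           dep(A_sigma, rename(s), {sigma[x] for x in X}, sigma[y])
           for s in A for X in subsets(["x", "y", "z"]) for y in ["x", "y", "z"])
```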

Discussion: Quantification over objects in LFD
Having entered the world of LFD with its special behavior of variables, one might ask whether the above quantifier companions ∀ X ϕ of the dependence modalities are 'true' quantifiers. This question calls for some distinctions. First, as we saw earlier, both the CRS-style quantifiers ∀Xϕ and their local versions ∀ X ϕ are simply generalizations of the FOL quantifiers to a broader class of models, and the dependence modalities of LFD are their close duals. However, one might require a true quantifier to be a semantic operator, acting on objects, so that the precise variable used in its syntax does not matter. Now in one sense, this is true in LFD: variable names do not matter when we look across models. What can be said in one model using x can be said in another model using another variable y, by the Renaming Principle 3.9, which underpins, for instance, the use of alphabetic variants for proofs in our axiomatic systems of Sections 5.1, 6.2. But locally within one model, the existence of non-trivial dependencies gives rise to asymmetries of behavior between variables: as already observed, variables in a fixed dependence model acquire 'individuality'. As a result, in a given model, quantifiers in LFD quantify over admissible assignments, not over tuples of objects like the first-order quantifiers. Thus, existential quantifiers in LFD do not seem at first sight to be obviously related to the usual Skolem functions in the semantics of FOL.
But more can be said. In fact, a string of LFD quantifiers does induce a semantic operator over tuples of objects, albeit one that, in contrast to its classical counterpart: (a) quantifies over a restricted range of tuples (the ones that are in the range of admissible joint values for the given variables), and (b) imposes further constraints on the corresponding Skolem function, requiring it to behave well with respect to the admissible assignments. These additional features are both natural and informative in generalized assignment semantics. Indeed, quantifier combinations in LFD play a twofold role, giving information both about objects and about variable ranges and dependencies.
More precisely, let us compare the meaning of some quantifiers and quantifier combinations in LFD with their classical meanings in FOL. To do this, we need some notation. Given a tuple of variables x = (x 1 , . . . , x n ) ∈ V *, let X := {x 1 , . . . , x n } be the set of its variables. Also, for any given dependence model M = (O, I, A), we denote by O x := {(s(x 1 ), . . . , s(x n )) : s ∈ A} the range of admissible x-values, as a subset of O^n. As a special case, for a single variable x we have O x = {s(x) : s ∈ A}. For a start, with the given notational convention, consider the LFD formula ∀ X P x. On full models, ∀ X captures universal quantification, exactly as ∀X does in FOL. But over an arbitrary dependence model M = (O, I, A), the same formula holds iff every tuple in O x satisfies P. This is clearly universal quantification, but only over the restricted range O x of admissible simultaneous x-values. Though weaker than the FOL formula ∀xP x, this is precisely the natural meaning in a dependence model, where each variable x or tuple of variables x has its own range of (tuples of) values. The universal LFD quantifier simply quantifies over that range.
Our second, perhaps more telling, example is a quantifier combination expressing a functional dependence. 26 It is easy to see that the LFD formula ∀ x ∃ y P xy holds in a full model iff there is a function F : O → O witnessing this fact, with every pair (o, F (o)) satisfying P. This matches the usual Skolem-type meaning of the FOL formula ∀x∃yP xy. But spelling out the LFD semantics for the defined dependence quantifiers, over an arbitrary dependence model M = (O, I, A), the same formula ∀ x ∃ y P xy holds iff there is a function F : O x → O such that, for every o ∈ O x , the pair (o, F (o)) satisfies P and is realized by some admissible assignment of values to (x, y). This statement is neither weaker, nor stronger than the one expressed by the FOL formula. On the one hand, the domain of F is restricted to the admissible x-values, which is a weakening. But on this restricted range, we have a stronger statement: not only do all resulting pairs (o, F (o)) satisfy P , but they are all realized by admissible simultaneous assignments of values to (x, y). Once again, this is a natural statement: in the earlier terms, the combination ∀ x ∃ y gives information both about objects and about how these objects can be accessed by variables.
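The two readings above can be sketched concretely. This is a minimal illustration under our own encoding assumptions (admissible assignments as a list of dicts, a predicate as a set of value tuples); the helper names are ours, not the paper's.

```python
# Sketch: restricted quantification of LFD over a dependence model M = (O, I, A).

def admissible_range(A, xs):
    """O_x: the set of admissible joint values for the tuple of variables xs."""
    return {tuple(s[x] for x in xs) for s in A}

def forall_X(A, xs, P):
    """M |= forall_X P(xs) iff every admissible xs-tuple satisfies P."""
    return admissible_range(A, xs).issubset(P)

def forall_x_exists_y(A, x, y, P):
    """M |= forall_x exists_y P(x, y): for each admissible x-value o there is a
    pair (o, F(o)) in P realized by an admissible assignment, i.e. a 'local'
    Skolem-style function F on O_x respecting the admissible assignments."""
    F = {}
    for s in A:
        if (s[x], s[y]) in P:
            F[s[x]] = s[y]   # candidate Skolem value, realized by s itself
    return all(o in F for (o,) in admissible_range(A, [x]))
```

On a full model (all assignments admissible) these checks collapse to the classical FOL meanings; on a proper dependence model they quantify only over the restricted ranges.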
This twofold nature is shared by all quantifier combinations in LFD, making them meaningful in a broader realm than the classical quantifier combinations to which they reduce on full models. Alternatively, they can also be viewed as just being restricted versions of the classical quantifier combinations, but with the added value that they are now forced to also give information beyond their traditional comfort zone (about constraints and correlations on variable ranges).
Summing up the presentation so far, LFD has both dependence modalities and quantifiers in one setting. But one can also view the system at a higher level. Modalities can be seen as quantifiers, as is well-known in modal logic, [22], and conversely, a system like CRS shows how first-order quantifiers can be seen as modalities. Thus, two perspectives are possible on LFD: it is both a first-order logic and a modal logic. This interplay will continue throughout this paper, as it allows for borrowing notions and techniques from both sides. In the remainder of this section, the two intertwined perspectives are taken a bit further, starting with a connection of LFD to standard FOL in terms of translation between languages and semantics.

First-order translation
As is the case for modal logic, the preceding language can be translated faithfully into a first-order language. But before doing so, it is important to be clear in which sense this is meant.
LFD can be seen as a weak, decidable first-order logic over generalized models. But sometimes, a language interpreted over generalized models can be translated into a fragment of that same language interpreted over the original standard models. 28 By the Locality of LFD, it is enough to consider a finite set V = {v 1 , . . . , v n } of n variables, with a given enumeration. First take fresh copies v ′ 1 , . . . , v ′ n of these variables, collected in a set V ′ . Also introduce a new n-ary predicate A where intuitively, A(v 1 , ..., v n ) encodes the fact that the tuple of values assigned to v 1 , . . . , v n belongs to the admissible assignments A of the current dependence model. Now consider FOL with variables in V ∪ V ′ and predicates in P red ∪ {A}.
For each dependence model M, there is an associated FOL model T (M) for this extended language, having the same domain and the same interpretation of the old predicate symbols, and with the new predicate A interpreted as above. Conversely, every FOL model for the extended language is the translation T (M) of some dependence model.
Definition 3.10. The first-order translation tr(ϕ) from LFD-formulas ϕ to first-order formulas in the above finite vocabulary is defined as follows.
Here v is the enumeration of all the variables in V , z is the enumeration of all the variables in V − X, z ′ and y ′ are the corresponding fresh V ′ -copies of z and y respectively, and Av[z ′ /z] is the result of replacing the variables z by z ′ in the formula Av.
Free and bound occurrences of a variable in a FOL formula are defined as usual. Variables are allowed to occur both free and bound in different parts of the same formula, so that freely occurring variables can be reused in quantification. The free variables of a FOL formula are also defined as usual: as those variables that occur free at least once in the formula.
It is easy to see from the above translation that, for every formula ϕ of LFD over V , the set of free variables of its FOL translation tr(ϕ) is exactly F ree(ϕ).
For every dependence model M and admissible assignment s: M, s |= ϕ iff T (M), s |= tr(ϕ), where tr(ϕ) is the above FOL translation of ϕ, and T (M) is the FOL model associated to M.
The proof is a simple induction following the idea of the stated translation.
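The key ingredient of the model translation T (M), the interpretation of the fresh predicate A, can be sketched in a few lines. The encoding (admissible assignments as dicts, the interpretation of A as a set of value tuples) is our own illustrative assumption.

```python
# Sketch: interpreting the fresh predicate A in T(M).
# A(v1, ..., vn) holds of exactly the value tuples of admissible assignments.

def interpret_A(A, order):
    """Return the interpretation of the new predicate A for the given
    enumeration order = (v1, ..., vn) of the variables in V."""
    return {tuple(s[v] for v in order) for s in A}
```

The rest of T (M) keeps the domain and the interpretation of the old predicates unchanged, so this single set is all that must be added to obtain the extended FOL model.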

Modalization of LFD: standard relational semantics
Next, we elaborate the modal perspective on LFD. An equivalent semantics is obtained by abstracting away the assignments from their concrete set-theoretical interpretation as functions and treating them as abstract possible worlds. This eliminates all references to values assigned to variables, and replaces identity of values s = x t by abstract equivalence relations ∼ x .
Definition 3.13. A standard relational model is a triple M = (W, ∼, ∥·∥), consisting of: (a) a set W of worlds or 'states'; (b) a map ∼ : V → P(W × W ) associating to each variable x ∈ V an equivalence relation ∼ x on W ; and (c) a valuation ∥·∥ associating to each atomic formula of the form P x a set of worlds ∥P x∥ ⊆ W . It is useful to introduce auxiliary relations ∼ X on W , for sets of variables X ⊆ V , defined by taking intersections ∼ X := ⋂ x∈X ∼ x . With this notation, the valuation is required to satisfy the following additional condition: if w ∼ X v and w ∈ ∥P x 1 . . . x n ∥ for some x 1 , . . . , x n ∈ X, then v ∈ ∥P x 1 . . . x n ∥.
We interpret dual quantifiers D X ϕ as universal modalities for the relation ∼ X , while dependence atoms D X y capture a local inclusion (every ∼ X -successor is also a ∼ y -successor):
Definition 3.14. In a standard relational model M = (W, ∼, ∥·∥), the notion of truth M, s |= ϕ (with the index M dropped when the model is fixed) is given by the valuation for atomic formulas P x, by the usual recursive clauses for the Boolean operators, and by:
M, s |= D X ϕ iff M, t |= ϕ for all t with s ∼ X t;
M, s |= D X y iff, for all t, s ∼ X t implies s ∼ y t.
The two kinds of models introduced so far are closely related: we can show that the standard relational semantics is equivalent to the dependence-model semantics.
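The relational truth definition lends itself to a direct model checker. The following is a minimal sketch under our own encoding assumptions: formulas as nested tuples, relations ∼ x as sets of pairs; none of these names come from the paper.

```python
# Sketch: truth in a standard relational model M = (W, ~, valuation),
# following the clauses of Definition 3.14.

def related(sim, X, w, v):
    """w ~_X v: agreement under ~_x for every x in X (empty X relates all)."""
    return all((w, v) in sim[x] for x in X)

def truth(W, sim, val, w, phi):
    """Formulas are tuples (our encoding): ('atom', a) with val[a] a set of
    worlds, ('not', f), ('and', f, g), ('D', X, f) for the modality D_X f,
    and ('dep', X, y) for the dependence atom D_X y."""
    op = phi[0]
    if op == 'atom':
        return w in val[phi[1]]
    if op == 'not':
        return not truth(W, sim, val, w, phi[1])
    if op == 'and':
        return truth(W, sim, val, w, phi[1]) and truth(W, sim, val, w, phi[2])
    if op == 'D':    # D_X f: universal modality for ~_X
        return all(truth(W, sim, val, v, phi[2]) for v in W if related(sim, phi[1], w, v))
    if op == 'dep':  # D_X y: every ~_X-successor is also a ~_y-successor
        return all(related(sim, [phi[2]], w, v) for v in W if related(sim, phi[1], w, v))
    raise ValueError(phi)
```

Note that the empty set of variables gives the global modality, since the empty intersection of equivalence relations relates all worlds.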
One direction is given by the following observation: every dependence model induces a standard relational model with the admissible assignments as worlds and with s ∼ x t iff s = x t. This construction is so tight, that its adequacy should be clear without further proof. A slightly less routine construction yields the opposite direction: the interpretation I ∼ maps each n-ary predicate P to the set of all tuples (w ∼ (x 1 ), . . . , w ∼ (x n )) with w a world where P x 1 . . . x n holds. Note that w ∼ (x) = v ∼ (y) implies that x = y and w ∼ x v. Using this, one easily checks that I ∼ (P ) is well-defined on objects, i.e., independent of the choice of representatives for the equivalence classes. Moreover, the construction preserves truth of LFD formulas:
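The first direction of this equivalence can be sketched directly: worlds are (indices of) admissible assignments, and ∼ x is agreement on the value of x. The list/dict encoding is, again, our own illustrative assumption.

```python
# Sketch: the relational model induced by a dependence model, with the
# admissible assignments as worlds and ~_x as agreement on x.

def induced_relational_model(A, variables):
    """Given admissible assignments A (list of dicts) and a list of variables,
    return (W, sim) with W the world indices and sim[x] the relation ~_x."""
    W = list(range(len(A)))  # worlds = indices of admissible assignments
    sim = {x: {(i, j) for i in W for j in W if A[i][x] == A[j][x]}
           for x in variables}
    return W, sim
```

Each sim[x] is an equivalence relation by construction, since it is the kernel of the map i ↦ A[i][x].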

Proof.
The proof is by induction on ϕ. The atomic case holds by the definition of I ∼ in dep(M). Boolean cases are routine. The inductive cases for D X ϕ and D X y follow easily from the semantic definitions, together with a simple auxiliary fact matching the relations ∼ X in the two models. The two constructions can also be intertwined, with outcomes such as the following.
Remark 3.19. An obvious next desideratum is a natural notion of modal bisimulation for LFD, capturing its precise range within the first-order language over standard models. One lead here might be the connection with generalized assignment semantics for FOL. A natural analogue to modal bisimulation for FOL is potential isomorphism, using partial assignments from finite sets of variables to objects. The crucial back-and-forth clauses of a potential isomorphism F are easily adapted to generalized assignment semantics.
Open problem Find a bisimulation invariance theorem characterizing LFD. 30

Other interpretations: information, knowledge, questions
The relational semantics, and its equivalence with the dependence-models semantics, shows that the actual values of variables do not play an essential role in LFD: what matters are the relations of 'agreement on values' of X, and of 'dependence of y on X'. This suggests other, non-variable-based interpretations of our logic. Three such interpretations will be outlined here (epistemic, interrogative, and mixed), all information-based, like the informational interpretation in Remark 3.6. The informational perspective is ubiquitous: one often talks informally even about ontic dependence in the real world as knowing the value of one variable implying knowledge of that of the other, or as answers to some questions implying answers to other questions.
A straightforward epistemic reading of LFD re-interprets the variables x ∈ V as agents, while the equivalence relations ∼ x represent the agents' uncertainty relations. Then the modal statement D x ϕ captures agent x's individual knowledge, while D X ϕ expresses distributed knowledge among the group of agents X, [33]. Dependence atoms D x y express knowledge subsumption: 'agent x knows at least as much as agent y', [29], while atoms D X Y stand for the analogue notion of group subsumption. 31 Next, since an equivalence relation is also a partition as used in the traditional semantics of questions, [42], dependence models also have an interrogative interpretation. Variables x represent basic questions, and sets of variables X are joint questions (asking for the answers to all the given questions). The dependence modality D x ϕ is the 'interrogative modality' Qϕ of [20], while D X ϕ extends this to joint questions. Dependence atoms D X y are local versions of 'inquisitive implication' between questions, see [26,25] for modern versions.
Finally, in mixed readings, some variables stand for agents, others denote objects, while yet others represent questions. Such mixtures greatly enhance the range of LFD. For instance, the logic for mixed readings in [6] captures a group's distributed knowledge of the value of a variable, as well as individual or group knowledge of a dependence between variables.

Decidability via type models
In this section, we show that LFD is decidable, using type models. These 'models' are just syntactic constructs, with no explicit objects, resembling the 'quasi-models' used in [3], [13] to investigate the Guarded Fragment. Sparse models like this have an independent interest, and they yield a bare-bones proof of decidability. However, the price of this directness is a certain amount of ad-hoc syntactic construction. In Appendix A, we use general semantic methods from modal logic to give a more elegant (though less direct) proof of decidability for LFD.

Syntactic type models
Consider any finite set F of LFD formulas, and let V F be the finite set of all variables occurring in F . Add to F all formulas D X Y for all sets of variables X, Y ⊆ V F . Close the resulting set under subformulas, as well as one round of negations, where explicit negations themselves are left as they are. Call the resulting finite set Φ = Φ F . This set will be fixed henceforth, and models and arguments about them will only involve these formulas.
Definition 4.1. A set of formulas Σ ⊆ Φ is a Hintikka set for Φ (also occasionally called a syntactic 'type') if it satisfies the closure conditions (a)-(e), where all formulas mentioned run over Φ only. Note that there are only finitely many Hintikka sets for a given finite set Φ. Moreover, the property of being a Hintikka set for a set F of bounded size ≤ N is clearly decidable.
Definition 4.2. For every Hintikka set Σ ⊆ Φ and every set of variables X ⊆ V F , the dependence closure of X wrt Σ is the set of variables D Σ X := {y ∈ V F : D X y ∈ Σ}. The terminology 'closure' is justified by the following observations. First, clause (d) on Hintikka sets implies that the dependence closure D Σ X contains X; second, clauses (b), (e) together imply that D Σ X is closed under adding variables z with D Y z ∈ Σ for any Y ⊆ D Σ X ; third, D Σ X is the smallest set (in the sense of set inclusion) of variables satisfying the first two properties. Indeed, if Z is any other set satisfying the two properties, then D Σ X ⊆ Z: for, let z ∈ D Σ X , so D X z ∈ Σ; we have X ⊆ Z by the first property, and so z ∈ Z by the second property.
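The closure just described can be computed as a least fixpoint: start from X and keep adding variables z with D Y z ∈ Σ for Y inside the current set. A minimal sketch follows; encoding Σ's dependence atoms as pairs (frozenset(X), y) is our own assumption, and for genuine Hintikka sets the fixpoint coincides with the direct definition D Σ X = {y : D X y ∈ Σ} by the observations above.

```python
# Sketch: dependence closure as a least fixpoint.
# atoms is a set of pairs (Y, z) with Y a frozenset, standing for D_Y z in Sigma.

def dep_closure(atoms, X):
    closed = set(X)              # clause (d): the closure contains X
    changed = True
    while changed:
        changed = False
        for (Y, z) in atoms:
            # clauses (b), (e): add z whenever D_Y z is in Sigma for Y inside closure
            if Y <= closed and z not in closed:
                closed.add(z)
                changed = True
    return closed
```

The loop terminates because the closure can only grow inside the finite variable set V F.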
Fact 4.4. The following hold for all Hintikka sets Σ, ∆, and sets of variables X, Y ⊆ V : Proof.
For the first item: if Σ ∼ X ∆, then the sets Σ, ∆ contain the same dependence atoms D X y, since the latter have only free variables X, and X ⊆ D Σ X . It follows that D Σ X = D ∆ X . For the second item: ∼ X is evidently reflexive, by its definition. Symmetry and transitivity also follow immediately from the definition of ∼ X together with the first item (the invariance of D Σ X under ∼ X ).
The third item follows by property (e) of Hintikka sets: we obtain D X z ∈ Σ, and hence z ∈ D Σ X .
Next we define a syntactic notion capturing key aspects of the families of Hintikka sets that can occur together in one dependence model. Here Clause (f) reflects the witnessing for existential dependence modalities in the model, and Clause (g) the fact that constants (i.e., variables x for which D ∅ x holds) behave uniformly in the model.
Definition 4.5. A type model for Φ is a family M of Hintikka sets for Φ obeying two conditions: an additional 'witness condition' (f) for existential modalities, and a condition (g) expressing uniformity for constants. Once again, for a given finite set F , there are only finitely many type models for Φ F , and moreover, the property of being a type model for a set F of bounded size ≤ N is decidable.

Representation of type models as dependence models
First, it is easy to see that every dependence model induces a type model.

Proof.
Checking conditions (a)-(e) on Hintikka sets is straightforward. For the witness condition (f) in the type model, let Σ = type(s) for s ∈ A, and let E X ψ ∈ Σ, i.e. s |= E X ψ. By the semantics of LFD, there exists t ∈ A with s = X t and t |= ψ, i.e. ψ ∈ type(t). By the Locality Lemma 3.4, s, t make the same formulas true whose free variables are among the X, which includes all dependence atoms D X y. Therefore, s, t agree on all variables in the set D Σ X , and so, once more by Locality, we have that type(s) ∼ X type(t) in the sense of Definition 4.3. Finally, condition (g) reflecting the uniform behavior of constants again follows from Locality in dependence models.
The more challenging direction is now the converse: every type model can be represented as the set of types of some dependence model (Theorem 4.8). Proof.
First fix any Hintikka set Σ 0 ∈ M. Define a good path to be a finite sequence π = Σ 0 , X 1 , Σ 1 , . . . , X n , Σ n of any length n+1 ≥ 1 such that (i) Σ k ∈ M for each k (hence each Σ k is a Hintikka set in M), and (ii) each X k ⊆ V F satisfies Σ k−1 ∼ X k Σ k . Write last(π) = Σ n for the last element of path π. 33 In what follows, it is convenient to view good paths as consisting of successive good transitions of the form (Σ, X, ∆). Here we think of the variables in X, and those depending on them according to Σ, as keeping their value in the transition. More precisely, we say that the variables kept fixed in a transition (Σ, X, ∆) are all those in the extended set of variables D Σ X introduced in Definition 4.2. 34 Sets of variables kept fixed in good transitions underlie many of the definitions and proofs that follow.
The good paths are finite sequences that form a rooted branching tree in a standard manner, with the 1-length path Σ 0 as its root. It may help the reader to keep a tree picture in mind in what follows, cf. Figure 1 below for a visual aid.
Next, objects will be special pairs of good paths and variables. Instead of defining these objects separately, we introduce them simultaneously with the following inductive definition of path assignments v π for good paths π, that send variables to objects: (i) v π (x) = (π, x) if π has length 1, i.e. π = Σ 0 is the root of our tree; (ii) for a path π = π ′ , X, Σ, we set v π (x) = v π ′ (x) if x is kept fixed in the last transition, i.e. x ∈ D last(π ′ ) X ; (iii) otherwise, v π (x) = (π, x).
The second clause leaves the same values for variables if the last transition keeps them 'fixed'. The third clause creates fresh objects as soon as this fixing is not prescribed. In particular, note that constants x, i.e. special variables with D ∅ x present in all Hintikka sets in M, will get the same value (Σ 0 , x) under all path assignments. By condition (g) on type models, that value never changes for longer paths. Now, we define a first-order model M = (O, I) by letting O = {v π (x) : π good path and x ∈ V F } be the set of all objects (π, x) assigned by the assignments v π in the above manner. Next, an interpretation I(P ) is given to each predicate by means of the following 'coherence condition': I(P ) holds for a finite sequence of objects (π, x) in O if all paths π occurring in the sequence are linearly ordered by the relation of initial segment, and the formula P x occurs in last(π * ) for the longest path π * among these. The crucial semantic notion of equality of values among assignments v π , v π ′ in the dependence model M wrt a given set X of variables may be described concretely as follows. In general, the paths π, π ′ fork beyond a shared initial segment π ′′ , that includes at least Σ 0 . The semantic equality v π = X v π ′ means that the values assigned by v π and v π ′ to all variables in X have been set already by the final stage of π ′′ (cf. Figure 1).
Figure 1: Forking paths π and π ′ with v π = X v π ′ : according to Fact 4.9, all variables in X are kept fixed in all transitions (on these paths) beyond the shared path π ′′ .
Fact 4.9. For any two v π , v π ′ in M and any set of variables X, the following are equivalent: (a) v π = X v π ′ ; (b) π and π ′ have the form π = π ′′ , X 1 , . . . , X n , last(π) and π ′ = π ′′ , X ′ 1 , . . . , X ′ m , last(π ′ ) with a shared path π ′′ , where all variables in X are kept fixed in the transitions involving the displayed sets X 1 , . . . , X n and X ′ 1 , . . . , X ′ m .

Proof.
This follows by inspection of the above definitions for values of assignments, noting that the identical objects assigned by v π and v π ′ to any variables x ∈ X must be of the form (π • , x) for some initial segment π • of the shared path π ′′ , while no further changes have taken place. 35
To complete the proof of the main theorem, we must show that our initially given type model coincides with the set of Φ-types of all assignments in M, i.e. that we have: M = {type(v π ) : v π ∈ A}. And in order to establish this identity, it suffices to prove that type(v π ) = last(π) for all good paths π.
Once this claim is proved, the desired identity M = {type(v π ) : v π ∈ A} is immediate. 36 Unfolding the claim type(v π ) = last(π), we see that our remaining task is to prove the following result: for all formulas ϕ ∈ Φ and good paths π, M, v π |= ϕ iff ϕ ∈ last(π). Proof.
The proof is by induction on the formula ϕ.
Case 1: Atomic formulas. By the truth definition for LFD, M, v π |= P x iff I(P )(v π (x)). By the above definition of the atomic predicates in the first-order model M , the objects v π (x) are pairs (π ′ , x) whose paths π ′ are all initial subpaths of the longest path π * among them. Moreover, the formula P x belongs to last(π * ). Now, given the above inductive definition of assignments, all objects assigned by v π to variables have a path component which is an initial segment of π. In particular, π * is an initial segment of π, and also, again by the inductive definition of the assignments, no values of variables x ∈ x have changed along the remaining path from π * to π. This means, by Definition 4.3 for the relations ∼ X , that the formula P x itself occurs in every Hintikka set in π after π * , and in particular, that P x occurs in the set last(π).
Case 2: Boolean combinations. The proof is a straightforward appeal to the truth definition, the inductive hypothesis, and the definition of Hintikka sets.
Case 3: Dependence modalities. For ease of presentation, we consider the existential LFD dependence modality instead of the universal one.
From right to left. Let E X ϕ ∈ last(π). By the witness condition (f) on type models, there exists a set ∆ ∈ M with ϕ ∈ ∆ and last(π) ∼ X ∆. Let π + = (π, X, ∆) be the good path consisting of π with a ∼ X -transition to ∆ added. By the inductive hypothesis, M, v π + |= ϕ, and hence also M, v π + |= E X ϕ. Now consider the objects that v π + assigns to the variables in X. By the above definition for v π + , none of the variables x ∈ X changed their value in the last ∼ X -step, and so these objects are the same as those assigned by v π . Thus, the assignments v π + , v π agree on the values of the free variables of E X ϕ, and so, by the Locality Lemma 3.4, the latter formula is also true at M, v π .
From left to right. Let M, v π |= E X ϕ. By the truth definition, there is an assignment v π ′ = X v π with M, v π ′ |= ϕ, so, by the inductive hypothesis ϕ ∈ last(π ′ ). By condition (c) on Hintikka sets (dualized to the existential dependence modality), we then have E X ϕ ∈ last(π ′ ). Now compare the two good paths π, π ′ , keeping Fact 4.9 in mind concerning their shape wrt some shared initial path π ′′ , and the fact that X is contained in the set of variables kept fixed in each transition made on the paths extending beyond π ′′ toward last(π) and toward last(π ′ ).
Given that E X ϕ ∈ last(π ′ ), with free variables X, it follows by Definition 4.3 that this formula is present in each Hintikka set on the path toward last(π ′′ ) and then in each Hintikka set on the path from there toward last(π). 37 So, finally, E X ϕ ∈ last(π).
Case 4: Dependence atoms. The case of dependence atoms is proved in a similar manner, but interestingly, it makes no appeal to a witness clause for non-dependence in type models.
From right to left. Let D X y ∈ last(π). Local semantic dependence of y on X at the assignment v π is shown as follows. Consider any assignment v π ′ ∈ A assigning the same objects to the variables in X, i.e., v π (X) = v π ′ (X). Just as in the preceding Case 3, X-values have not changed after the largest common initial segment π ′′ of π and π ′ . But then, since F ree(D X y) = X, the formula D X y is shared by the Hintikka sets in each of these later transitions. Now the above recursive definition of values v π (u) for variables u under assignments v π worked with extended sets of variables D last(π ′ ) X for immediately preceding subpaths π ′ , and these sets all include y in the present case. It follows that v π (y) = v π ′ (y), as desired.
From left to right. Let M, v π |= D X y. Consider the good path π + = (π, X, last(π)) extending π with one good ∼ X -transition to the Hintikka set last(π). By the earlier definitions for the values given by our assignments, v π , v π + assign the same objects to all the variables x ∈ X, kept fixed in the final transition. Therefore, by the given local semantic dependence at v π , we also have that v π + (y) = v π (y). But this can only happen if the variable y, too, was kept fixed in the last transition of π + , which means by definition that y ∈ D last(π) X , i.e., D X y ∈ last(π).
This concludes the proof of Theorem 4.8.
37 What we use here is the earlier observation that good transitions are good in both directions.

Decidability
The decidability of LFD can now be established.
Theorem 4.11. Validity for formulas of LFD on dependence models is decidable.

Proof.
By Theorem 4.8 and Fact 4.7, satisfiability for a formula ϕ in dependence models is equivalent to ϕ's occurring in some Hintikka set of some type model for the set Φ generated by F = {ϕ} and all its subformulas and dependence formulas in the manner described earlier. As there are only finitely many type models of this sort, the latter test is decidable.
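The finite search underlying this argument can be sketched generically. In the sketch below, the predicates is_hintikka, witness_ok, and constants_ok are assumed given as parameters (they are decidable checks on finite syntactic objects, as noted above); the function names and the encoding of Hintikka sets as frozensets are our own illustrative assumptions.

```python
# Sketch: decidability via a finite guess-and-check over type models.
from itertools import chain, combinations

def satisfiable(phi, candidates, is_hintikka, witness_ok, constants_ok):
    """phi is satisfiable iff it occurs in some Hintikka set of some family
    of Hintikka sets satisfying the type-model conditions (f) and (g)."""
    hintikka = [S for S in candidates if is_hintikka(S)]
    # enumerate all non-empty families of Hintikka sets
    families = chain.from_iterable(
        combinations(hintikka, k) for k in range(1, len(hintikka) + 1))
    for M in families:
        if witness_ok(M) and constants_ok(M) and any(phi in S for S in M):
            return True
    return False
```

Since the candidate Hintikka sets for a finite Φ are finitely many, the enumeration terminates, which is all the decidability argument needs; the exponential search is of course not meant to be efficient.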
Open problems Does LFD have the Finite Model Property? What is the computational complexity of satisfiability for LFD?
Remark 4.12. As noted earlier, the proof of decidability for LFD presented here is an extension of that for the Guarded Fragment of first-order logic, [3]. An open problem is whether we can reduce the decidability problem for LFD to that for the Guarded Fragment with identity, [39], though this seems unlikely given the syntax of dependence atoms. Another issue in this connection is whether known decidable extensions of the Guarded Fragment such as the 'loosely guarded fragment', [13], [14], have counterparts in natural extensions of LFD.
Finally, it may be worth noting that the preceding style of decidability argument can also be applied to first-order logic itself. Hintikka sets and type models can be defined just like above, and the representation result for type models as dependence models also goes through. Moreover, it is decidable whether a given first-order formula has a type model. Given the undecidability of FOL, it must then be an undecidable problem whether a given type model can be represented as a standard first-order model, i.e., a full assignment model.

Axiomatizations
It was shown in the previous section that the set of LFD validities is recursive. In this section the structure of this set will be explored in more depth, in the form of two complete deductive systems.

A Hilbert-style axiomatization
A Hilbert-style proof system LFD is given in Table 1, consisting of: (I) the classical axioms and rules of propositional logic; (II) axioms and rules for dependence modalities, that can be seen as restricted duals of the classical Hilbert axioms for quantifiers; (III) axioms governing the behavior of dependence atoms (namely, Projection and Transitivity, already known to be equivalent to the conjunction of Reflexivity, Monotonicity and Transitivity); (IV) the key Transfer axiom, describing the interaction between dependence modalities and dependence atoms. The notions of formal derivation and provability are defined as usual. In Table 1, the axiom schema (D-Introduction) can be replaced by its instances listed in Table 2.
Note that, unlike with CRS, the provable principles for LFD are closed under substitution for predicate letters. Note also the analogy between D-Necessitation, D-Distribution, D-Elimination, D-Intro 2 and D-Intro 3 and the usual axioms and rules of the modal system S5. This is unsurprising, and it is more than an analogy: as seen in Section 3.4, our dependence modalities D X are in fact relational modalities for equivalence relations ∼ X , and so they automatically validate all the S5 laws (by known results in classical modal correspondence theory, [22]).

(I) Axioms and rules of classical propositional logic. (II) Axioms and rules for dependence modalities D.
Note that the more general quantifier elimination rule via substitution ∀ x ϕ → [y/x]ϕ (as in classical FOL) is not a theorem or axiom of LFD: indeed, as we saw in Example 3.8, this rule is not sound in our semantics.

Theorem 5.3. (Completeness)
The system LFD is sound and complete for dependence models.

Proof.
Given a consistent formula ϕ, consider the set Φ = Φ F generated by F = {ϕ} as in Section 4.1. Fix some maximally consistent subset Σ * of Φ that contains ϕ, and let M be the family of all maximally consistent subsets of Φ that are connected to Σ * via a finite sequence of relations ∼ X as introduced in Definition 4.3. 38
Fact 5.4. M is a type model for Φ.
Proof. Maximally consistent subsets are Hintikka sets: they obviously satisfy the Boolean clauses, and the other closure conditions follow from their closure under deduction. To prove that M satisfies the witness condition (f) on type models, let E X ψ ∈ Σ ∈ M. Take Y := D Σ X (the dependence closure of X with respect to Σ), and consider the corresponding witness set ∆ 0 . This set of formulas is consistent by a standard modal argument using the S5 axioms 39 for D, the presence of the formulas D X y in Σ, and the Transfer Axiom of LFD. The required Hintikka set ∆ can be taken to be any maximally consistent set in M that includes ∆ 0 . Finally, condition (g) on type models is satisfied because all sets in M are connected by ∼ X transitions, which are also ∼ ∅ transitions by the Monotonicity property provable in LFD.
This concludes the proof of Fact 5.4, and of the completeness theorem.
Theorem 5.3 states 'weak completeness' only. 'Strong completeness' says that provability also matches semantic consequence from possibly infinite sets of formulas.
Theorem 5.5. The proof calculus LFD is strongly complete.

Proof.
First, the Compactness Theorem holds for LFD. This follows from the first-order translation (Definition 3.10), plus compactness for first-order logic. Given this, any valid semantic consequence Ψ |= ϕ yields a valid consequence Ψ 0 |= ϕ from some finite subset Ψ 0 ⊆ Ψ of the premises, and this amounts to the validity of a single formula ⋀Ψ 0 → ϕ. By the weak completeness theorem, there is a formal proof of this formula, hence ϕ is provable from Ψ.
In Appendix A, we give another proof of strong completeness, that proceeds along more standard lines using modal logic techniques.

Sequent calculus, cut elimination and strong interpolation
An alternative formulation of the proof system is as a sequent calculus. To avoid the use of the rules of Contraction and Permutation, we take a Gentzen calculus using sets of formulas rather than sequences. In the following, Γ, ∆ denote sets of formulas, Γ, ϕ denotes Γ ∪ {ϕ}, etc. V ar(Γ) is the set of all variables occurring in Γ, and F ree(Γ) is the set of free variables in Γ.
Definition 5.6. The sequent calculus for LFD has the standard Gentzen axioms and rules for classical propositional logic (including structural rules of Identity, Weakening and Cut), together with the following additional axioms and rules: Note that, compared with the classical sequent calculus for FOL, there are now extra structural rules for D-Projection and D-Transitivity. Next, the left-introduction rule (D L ) is weaker than (the dual version of) the classical left-introduction rule for the universal first-order quantifier ∀, as it does not allow for variable or term substitutions. Also, the right-introduction rule (D R ) is different from, and in fact stronger than, the (dual version of the) classical rule for ∀: note it involves a dependence-atom premise (incorporating the Hilbert-style Transfer axiom). But also note that (D R ) implies a weaker rule, which can indeed be seen as a dualization of the classical right-introduction rule for the universal quantifier of FOL.
It is easy to show that the two proof calculi are equivalent in terms of their output: Fact 5.7. The provable sequents Γ ⊢ ∆ in the above calculus match exactly the provable implications ⋀Γ → ⋁∆ in the axiomatic system LFD.
Although our sequent calculus lacks standard cut elimination in its full generality, it does have it in a restricted form. Namely, Cut is eliminable in favor of 'DA Cut': this version of the Cut Rule allows cutting only dependence atoms that involve variables actually occurring in the conclusion. To ensure the subformula/subterm property, it is also convenient to absorb Weakening into the logical rules (cf. [70], or the explanation in Appendix B), while simultaneously restricting Projection and Transitivity to the variables that actually occur in the sequent to be proven. A restricted-cut proof uses only these modified rules and the DA Cut rule. We obtain a limited, but very useful, form of the Cut Elimination Theorem: Theorem 5.8. (Restricted Cut Elimination) Every provable sequent Γ ⊢ ∆ has a restricted-cut proof. Such a proof involves only subformulas of the sequent formulas, or dependence atoms for variables occurring in the final sequent proved.
The details, as well as a sketch of the proof, are in Appendix B.
Remark 5.9 (Decidability revisited). These results yield a purely proof-theoretic proof of decidability for LFD. For a given sequent Γ ⊢ ∆, proof search in the above system with no other structural rule than DA Cut is finite. The search produces a tree whose nodes are sequents Γ′ ⊢ ∆′ consisting only of subformulas of the original sequent or formulas D_x y with all x_i, y_j ∈ Var(Γ ∪ ∆). There are only finitely many such formulas, and thus only finitely many such sequents Γ′ ⊢ ∆′ (since Γ′, ∆′ are sets, there are no repetitions). The pruned tree will be finite, and it contains a proof of the original sequent iff such a proof exists.
Another spin-off is a strong version of Craig Interpolation for LFD. A formula θ is a strong interpolant for a sequent Γ ⊢ ∆ if we have: (1) Γ ⊢ θ and θ ⊢ ∆ are valid, (2) all predicate symbols in θ occur both in Γ and in ∆, and (3) all variables in θ occur both in Γ and in ∆, i.e., we have Var(θ) ⊆ Var(Γ) ∩ Var(∆).

Theorem (Strong Interpolation). Every provable sequent Γ ⊢ ∆ has a strong interpolant.
Proof.
By Completeness and Restricted Cut Elimination, Γ ⊢ ∆ has a restricted-cut proof. So, it is enough to find strong interpolants for all sequents that are restricted-cut-provable. For this, it suffices to provide strong interpolants for the axioms, and then show how to turn strong interpolants for the premises of each of the above modified rules (including DA Cut) into a strong interpolant for the conclusion. This can be done in the usual way. The strong version of the above interpolation result arises thanks to the tighter variable management provided by DA Cut and restricted Projection and Transitivity.
As usual, interpolation implies a version of the Beth Definability Theorem. Given a set of formulas Γ, an n-ary relation symbol P and a tuple of n fresh variables x = (x_1, . . . , x_n) with x_i ∉ Var(Γ), say that Γ implicitly defines P in variables x if the sequent Γ, Γ′ ⊢ P x ↔ P′x is valid, where P′ is any fresh relation symbol of the same arity as P, and Γ′ is the set obtained from Γ by replacing every occurrence of P with P′.

Adding special axioms
Further axioms beyond the logic LFD may hold on special classes of dependence models. We give just a few illustrations here, relying heavily on known notions and results from modal logic. For convenience, we will mostly use the existential version of the dependence modality.
Example 5.12. Consider the following operator interchange principle ('Commutation'): E_X E_Y ϕ → E_Y E_X ϕ. The following dependence model M is a counterexample. Take two variables x, y and three assignments s, t, u with s(x) = s(y) = 0, t(x) = 0, t(y) = 1, u(x) = u(y) = 1. Let R be a binary predicate holding only of the tuple of objects (1, 1). Then M, s |= E_x E_y Rxy, as one can reach u by first keeping the value of x fixed, and then that of y. But E_y E_x Rxy is false at s in M: there is no way of getting from s to u by first keeping the value of y fixed, and then that of x.
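The counterexample can be checked mechanically. Below is a minimal sketch (an illustration, not from the paper), with assignments as Python dictionaries and E_X interpreted as: some admissible assignment agreeing with the current one on X satisfies the argument formula.

```python
# Dependence model from Example 5.12: three assignments over x, y;
# the predicate R holds only of the value pair (1, 1).
A = [{'x': 0, 'y': 0},   # s
     {'x': 0, 'y': 1},   # t
     {'x': 1, 'y': 1}]   # u

def agree(s, t, X):
    """s =_X t : the assignments agree on all variables in X."""
    return all(s[v] == t[v] for v in X)

def E(X, phi):
    """Existential dependence modality: E_X phi holds at s iff some
    admissible t with s =_X t satisfies phi."""
    return lambda s: any(phi(t) for t in A if agree(s, t, X))

R = lambda s: (s['x'], s['y']) == (1, 1)

s = A[0]
print(E(['x'], E(['y'], R))(s))  # E_x E_y Rxy holds at s -> True
print(E(['y'], E(['x'], R))(s))  # E_y E_x Rxy fails at s -> False
```

The failure is exactly the missing 'alternative pathway': no admissible assignment agrees with s on y while lying one x-step away from u.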
On the other hand, it is easy to see that E_X E_Y ϕ → E_Y E_X ϕ holds on full dependence models (with all functions from V to O as assignments). The crucial property here is the following: for every three assignments s, t, u ∈ A, if s =_X t =_Y u, then there also exists an assignment v ∈ A with s =_Y v =_X u. This technical condition is a Church-Rosser principle requiring the set of available assignments to be rich in alternative pathways. It is in fact the exact semantic content of Commutation, but formulating this precisely requires the modal notion of frame correspondence, [22], which we will demonstrate with a different example below. The result of imposing the above Church-Rosser restriction on dependence models is striking: Fact 5.14. The logic LFD plus the Commutation axiom is undecidable.

Proof.
It is known that the modal CRS-type logic of generalized assignment models plus the commutation axiom ∃x∃yϕ → ∃y∃xϕ is undecidable, [63]. Given that this logic can be translated effectively into LFD plus the Commutation axiom, the latter logic is undecidable too. 40

To show in a bit more detail how frame correspondence analysis works, we give an illustration for a related special dependence axiom. Recall the invalid principle D_X D_Y ϕ → D_{X∩Y} ϕ mentioned in Example 3.8, perhaps better understood in its existential form ('Stepwise'): E_{X∩Y} ϕ → E_X E_Y ϕ. Like Commutation, Stepwise expresses an existence constraint on available assignments that holds in full dependence models, but not in all of them: its semantic content is that the set of available assignments forms a full Cartesian product of the value ranges of the variables. We now give a semantic correspondence analysis, for convenience, in terms of only three variables x, y, z. Call an LFD formula ϕ true in a dependence frame (a dependence model without an added interpretation for atomic predicates) if, for every interpretation of the predicate letters on the frame (where dependence atoms always keep their fixed interpretation), ϕ is true at every assignment. In one direction, this is straightforward. If the frame has the stated Cartesian structure, then it is easily verified that Stepwise will hold everywhere under every interpretation of the atomic predicates. In the opposite direction, starting from the frame truth of Stepwise, the quantification over all interpretations of atomic predicates allows us to assume that for each assignment s, there exists some predicate P xyz that holds uniquely for the values s(x), s(y), s(z). 41 Now, suppose some value d occurs for x at some available assignment s. Suppose also that value e occurs for y at some assignment t, uniquely defined by an atomic formula P xyz. One can reach t from s via the universal relation =_∅, so s satisfies E_∅ P xyz. Now write ∅ = {x} ∩ {y, z}. Then by Stepwise, we also have E_{x} E_{y,z} P xyz true at s.
But that means one can go from s to some assignment u keeping the value of x fixed, and then from u to t keeping the values of y, z fixed. It follows that u(x) = d, u(y) = e. Next assume that z takes on value f at some assignment v. Repeating the preceding argument for u and v, now making the split ∅ = {x, y} ∩ {z}, we find an assignment w with w(x) = d, w(y) = e, w(z) = f . Again, there is a consequence in terms of logics extending LFD.
Fact 5.16. The logic LFD plus the Stepwise axiom is undecidable.

Proof.
The Stepwise axiom has the modal Sahlqvist form mentioned in Footnote 40, and hence, by general results, [22], this logic is complete for dependence frames satisfying the corresponding condition identified above. Now, the Cartesian product structure obtained here is not a full dependence model in our sense, since each variable can have its own range of objects. But this is no obstacle to the following analysis combining two known facts.
Dependence models with the preceding structure are standard models for the three-variable fragment of many-sorted first-order logic, whose satisfiability problem is known to be undecidable, [45]. Moreover, CRS quantifiers are definable by LFD dependence modalities (cf. Section 3.1), while CRS quantifiers just are the first-order quantifiers on standard models.
It follows that satisfiability of first-order formulas in the many-sorted three-variable fragment reduces to satisfiability of LFD formulas in the preceding Cartesian models. In particular, one just replaces first-order quantifiers ∃u by their obvious LFD-counterparts E_{{x,y,z}−{u}}.
While the above examples concern semantic restrictions in the spirit of modal logic, the dependence setting also suggests new questions of axiomatization. Recall the three representation results for abstract dependence relations listed in Proposition 2.7. The pivotal second result there concerned uniform dependence models, where all local dependence relations between variables are the same, and hence also equal the global dependence relation. Uniform dependence models validate the following principles, where D_∅ is the universal modality: It is easy to find counter-examples to these implications in arbitrary LFD models.
Open problem Axiomatize LFD over uniform dependence models. 42

This concludes the analysis of properties of the system LFD. The remaining part of this article explores what lies beyond the base system LFD: extensions of the language, enrichments of the framework, and concrete dependence notions in a number of areas.

Richer dependence languages
The modal language of LFD can be extended to describe other natural features of dependence. This section contains a few examples, all with first-order truth conditions; the translation of Section 3.3 therefore extends to them, making all the resulting logics effectively axiomatizable. Some of these extensions are straightforward and do not affect the decidability of the logic; others do.

Function symbols and constants
Recall the functional perspective of Section 2.3. It makes sense to add function terms to LFD, built from variables x using a given family of operation symbols f with marked arities. 0-ary function symbols are individual constants c denoting objects. Terms are constructed by the rule t ::= x | f(t), with t a tuple of terms matching the arity of f.
In the syntax of formulas, the earlier sets of variables X now become sets T of terms, and one can correspondingly extend the LFD syntax with operators D_T t and D_T ϕ for such sets of terms T and single terms t. This allows for new sorts of dependence statements involving complex terms. In this setting, it is straightforward to define agreement s =_T s′ on the values of all terms in a set T, and use it to give the corresponding semantic clauses for D_T t and D_T ϕ.
This logic is still decidable, but showing this requires a further notion: expand a dependence model for the original vocabulary to one over the extended variable set, in which each new variable associated with a complex term t always takes the current value of t. The two models are then LFD-equivalent: for all assignments s ∈ A and formulas ϕ of LFD, they satisfy the same formulas. Fact 6.3. The logic LFD extended with function terms is decidable.

Proof.
One can translate formulas ϕ in the extended language to formulas τ(ϕ) in the original LFD language so that ϕ is satisfiable iff τ(ϕ) is satisfiable. First, associate to each complex term t occurring in ϕ some distinct new variable v_t, while keeping the old variables the same. Let V′ be the total extended set of variables, and let τ_0(ϕ) be the LFD formula obtained by replacing all terms t in ϕ by the matching variables v_t. The required functional dependencies between the variables are expressed as global dependence formulas: for a term t = f(t_1, . . . , t_n), the formula ∀ D_{v_{t_1},...,v_{t_n}} v_t. Let ϕ_0 be the conjunction of all these global dependence formulas, for all terms in ϕ. Then the translation τ(ϕ) is simply given by the conjunction ϕ_0 ∧ τ_0(ϕ). 43 To check that our translation preserves satisfiability, first assume that a formula ϕ in the extended language holds at some assignment in some model: expanding that model with values for the new variables v_t matching the current values of the corresponding terms t yields a model satisfying τ(ϕ). Conversely, the global dependence formulas ϕ_0 guarantee that a model for τ(ϕ) induces well-defined functions interpreting the symbols f, so that ϕ itself is satisfied.
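The bookkeeping of fresh variables and global dependence formulas in this translation can be sketched as follows (a hypothetical encoding, with terms as nested tuples and 'A' written for the universal modality ∀); on the term f(x, g(y)) of footnote 43 it produces the two dependence conjuncts given there.

```python
# Terms as nested tuples: ('f', 'x', ('g', 'y')) stands for f(x, g(y)).
fresh = {}

def var_of(t):
    """Associate a distinct fresh variable with each complex term;
    original variables are kept as they are."""
    if isinstance(t, str):
        return t
    if t not in fresh:
        fresh[t] = f'v{len(fresh)}'
    return fresh[t]

def dependence_formulas(t, out):
    """Collect, bottom-up, the global dependence formulas A D_{args} v_t
    required for every complex subterm of t."""
    if isinstance(t, str):
        return
    for arg in t[1:]:
        dependence_formulas(arg, out)
    args = ','.join(var_of(a) for a in t[1:])
    out.append(f'A D_{{{args}}} {var_of(t)}')

term = ('f', 'x', ('g', 'y'))   # the term f(x, g(y)) from footnote 43
phi0 = []
dependence_formulas(term, phi0)
print(' & '.join(phi0))         # A D_{y} v0 & A D_{x,v0} v1
```

Here v0 and v1 play the roles of the fresh variables w and v in footnote 43; the full translation τ(ϕ) conjoins these formulas with the term-free version of ϕ.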

Fact 6.4. The logic LFD with function terms is axiomatized by adding to the proof system LFD:
− The Functionality Axiom D_x f(x), for each function symbol f and tuple of variables x of the appropriate arity;
− The Substitution Rule "from ϕ, infer [t/x]ϕ".

43 For example, the translation of the formula P x f(x, g(y)) is ∀ D_y w ∧ ∀ D_{x,w} v ∧ P xv, where w and v are the fresh variables associated to the terms g(y), f(x, g(y)), respectively.

Proof.
The proof is similar to the previous one, except that we now need a theorem-preserving translation τ′(ϕ) between the two systems. For any given formula ϕ in the extended language, we associate new variables v_t as in the proof of Fact 6.3 to each of its terms t, and we construct the formulas τ_0(ϕ) and ϕ_0 as in that proof. Then our translation τ′(ϕ) is simply given by the implication ϕ_0 → τ_0(ϕ). It is now easy to check that ϕ is a theorem in the above extended proof system iff τ′(ϕ) is a theorem in the basic system LFD. The Substitution Rule, as well as the theorem ∀ D_x f(x) (provable in the extended system by applying the Necessitation Rule to the Functionality Axiom), plays a key role in this verification.
Note that the additional axiom and rule can be used to establish facts about complex terms. For instance, by the Functionality Axiom we have D_x g(x), and then by applying the Substitution Rule we get D_{f(x)} g(f(x)). Combining this with D_x f(x) (itself another instance of the Functionality Axiom) and applying the Transitivity of dependence, we obtain D_x g(f(x)). Applying the Necessitation Rule, we see that in fact this holds globally: ∀ D_x g(f(x)), i.e. we have =(x; g(f(x))).
This extended logic can Skolemize implicit dependencies, in the spirit of Section 2.3 on operational views of dependence, using function symbols as witnesses: Proposition 6.5. Let ϕ(x, y, z) be an LFD formula with free variables x, y, z. Let X = {x_1, . . . , x_n}, Y = {y_1, . . . , y_m}, and let f_1, . . . , f_m be fresh n-ary function symbols not occurring in ϕ. Then

Proof.
Apply the same construction as in the proof of Fact 6.4 to the formula on the left, associating fresh variables y′_1, . . . , y′_m to each of the terms f_1 x, . . . , f_m x. The same argument as in the preceding proof shows that: and replace any occurrence of the variables y′_i by the corresponding variables y_i, obtaining a proof of ⊢ ∀ D_X Y → ϕ(x, y, z). The converse is proven by the inverse substitution (replacing every occurrence of y_i in the proof by the corresponding y′_i). 44 However, this functional language still cannot talk about identity of term values, making it impossible to witness implicit dependencies by means of explicit statements ∀ (y = f(x)).

Explicit equality
We can easily extend our set of predicate symbols with an identity relation = on objects, with the obvious semantics. It is convenient to work with a countably infinite set C of constants, and allow complex terms (built from variables and constants using function symbols) as in the previous section. We denote by c, d, etc. arbitrary constants, and by t, t ′ arbitrary terms.
A ground term is one that does not contain any variables (i.e., it is constructed only from constants using function symbols). As before, it is useful to extend our dependence quantifiers and dependence atoms to terms, writing e.g. D T ϕ and D T t, where t is any arbitrary term and T is any finite set of terms. As before, we use x and t for finite tuples of variables and terms.
Let us call this new logic LFD=. Our translation to FOL can be easily extended to LFD=, so the logic is compact and its set of validities is recursively enumerable. But the new syntax has several advantages, such as supporting a more perspicuous axiomatization of the logic.
A Hilbert-style proof system for LFD= is given in Table 3, where the letters P range over all predicate symbols, including equality. (Value-Existence Rule) From t = c → ϕ, infer ϕ, provided that c does not occur in ϕ.

(III) Axioms and rules for the universal modality, provided that t consists only of ground terms. Axioms for dependence atoms and modalities, where in both cases X is the set of variables occurring in x.

Substitution of Equals is a special case of Leibniz' Law of 'indiscernibility of identicals', allowing substitution of equal variables in atomic formulas. 45 Essentially, the Value-Existence Rule asserts that each term always has a current value. The Dependence-Atom Axiom 'reduces' local dependence to a universal implication when the current values of the variables are explicitly given, and the Dependence-Modality Axiom does the same for dependence modalities. As a consequence of these context-dependent 'reductions', all the LFD principles characterizing the dependence modalities and dependence atoms (i.e. the axioms and rules II, III, IV in Table 1 for the dependence atoms and modalities) are missing from the definition of the new system: indeed, they now become provable theorems in LFD=.
Theorem 6.6. ( Completeness.) The calculus LFD = is sound and complete for validity in the dependence language with equality w.r.t. dependence models.
The completeness proof follows exactly the lines of that for the logic LED in [6]. The proof uses a Henkin-style canonical model, with the additional twist that the maximally consistent theories are also required to be 'witnessed': for every term t there exists some constant c such that t = c is in the theory. Beyond completeness, however, there is a complication. The type model proof used in Section 4 to show the decidability of LFD does not seem to work in the presence of the Value-Existence Rule.
Open problem Is LFD with equality decidable? 46

Independence
A major natural extension of LFD concerns the notion of independence. Intuitively, saying that y is independent from x is by no means the same as the statement ¬D_x y, which just expresses that x does not fix the value of y at the current assignment, and even the universal quantification ∀ ¬D_x y is too weak. What independence of y from x should mean in the present setting is that the current value of x does not constrain the values that y can take. 47 In epistemic terms, this amounts to saying that knowing the current values of x tells us nothing about the current value of y. We now introduce independence atoms I_X Y, saying that Y is independent from X at s. Definition 6.8. For any model M, sets of variables X, Y, and assignment s ∈ A, we put: M, s |= I_X Y iff for every t ∈ A there is some u ∈ A with u =_X s and u =_Y t. One can also define more general conditional independence atoms I_X Y|Z, saying that given Z, X gives no further information about Y: M, s |= I_X Y|Z iff for every t ∈ A with s =_Z t there is some u ∈ A with u =_{X∪Z} s and u =_Y t. Global independence (both conditional and unconditional) 48 can then be defined from the local versions in the obvious manner, using the universal modality available in LFD. As usual, when either X or Y are singletons, notation simplifies to I_X y, I_x y, etc.
Reasoning with independence atoms has some interesting features. For instance, it is easy to see that I_X Y does not imply I_Y X locally (the current value of X might carry no information about Y, but that of Y might carry information about X). In contrast with this, however, global independence is commutative, and the conditional global version is commutative in X, Y. If in a model, every value of y can be taken at every value of x, then the set of joint values (x, y) must be a full Cartesian product of the ranges of x and y.

46 In response to a preprint version of this paper, [68] has announced a negative answer, proved by reducing satisfiability for the undecidable Kahr class of first-order formulas to satisfiability for LFD=-formulas. 47 Varieties of independence have been studied extensively in IF-logic, [47], and in Dependence Logic, [41]. Our central concern here is how dependence and independence behave on our simple modal basis. 48 Global conditional independence as defined here may be viewed as a qualitative counterpart to the notion of conditional independence found in Probability Theory.
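For a finite model, the independence atom can be checked by brute force. The sketch below (a hypothetical encoding) uses the truth condition that I_X Y holds at s iff every Y-value pattern realized somewhere in the model is compatible with the current X-values; the model is the three-assignment model from Example 5.12, which also illustrates the local non-commutativity just mentioned.

```python
# Admissible assignments of the model from Example 5.12.
A = [{'x': 0, 'y': 0},
     {'x': 0, 'y': 1},
     {'x': 1, 'y': 1}]

def agree(s, t, X):
    """s =_X t : the assignments agree on all variables in X."""
    return all(s[v] == t[v] for v in X)

def I(X, Y, s, A):
    """I_X Y at s: knowing the current X-values excludes no Y-value
    pattern that is realized somewhere in the model."""
    return all(any(agree(u, s, X) and agree(u, t, Y) for u in A)
               for t in A)

s = A[0]
print(I(['x'], ['y'], s, A))  # x = 0 allows both y-values: True
print(I(['y'], ['x'], s, A))  # y = 0 forces x = 0, excluding x = 1: False
```

So at s, y is independent from x but not vice versa: local independence is not commutative, exactly as claimed in the text.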
There are also interesting valid principles connecting I and D. As an illustration, (D_x z ∧ I_x y) → I_z y is valid: if the current value of x gives full information about z but no information about y, then the current value of z does not yield any information about y either. Fact 6.9. The dependence atoms can be defined in terms of conditional independence, via the equivalence D_X Y ↔ I_Y Y|X, and the same goes for the global versions.
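The equivalence in Fact 6.9 can be stress-tested by brute force over small random models; the sketch below (illustrative, not from the paper) uses the reconstructed truth condition for conditional independence: I_X Y|Z holds at s iff the Y-pattern of every Z-compatible assignment is already compatible with the current values of X ∪ Z.

```python
from itertools import product
import random

def agree(s, t, X):
    return all(s[v] == t[v] for v in X)

def D(X, Y, s, A):
    """Dependence atom D_X Y at s: all admissible assignments agreeing
    with s on X also agree with s on Y."""
    return all(agree(s, t, Y) for t in A if agree(s, t, X))

def I_cond(X, Y, Z, s, A):
    """Conditional independence I_X Y | Z at s: X adds no information
    about Y beyond what Z already gives."""
    return all(any(agree(u, s, X + Z) and agree(u, t, Y) for u in A)
               for t in A if agree(s, t, Z))

random.seed(0)
space = [dict(zip('xy', v)) for v in product(range(2), repeat=2)]
for _ in range(200):
    A = random.sample(space, random.randint(1, 4))
    for s in A:
        # Fact 6.9:  D_X Y  <->  I_Y Y | X
        assert D(['x'], ['y'], s, A) == I_cond(['y'], ['y'], ['x'], s, A)
print('D_X Y <-> I_Y Y|X confirmed on all sampled models')
```

Intuitively, Y being independent from itself given X means the X-compatible assignments leave no freedom for Y, which is exactly functional dependence.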
These principles are part of a new logic LFDI, extending the purely structural rules for independence in [36]. It consists of LFD extended with the basic independence modalities I_X y.
Open problem Axiomatize the logic LFDI.
Interestingly, the core logic of independence differs essentially from that of dependence: LFDI is more complex than LFD. The reason is explained in the proof to follow. Theorem 6.10. The modal logic LFDI is undecidable.

Proof.
The proof is reminiscent of that for Fact 5.16, and uses the undecidability of the three-variable fragment of many-sorted first-order logic. Formulas ϕ of the latter language with variables x, y, z can be translated into formulas τ(ϕ) of LFD as indicated at several places earlier on, replacing first-order quantifiers ∃uψ by existential LFD modalities E_{{x,y,z}−{u}} τ(ψ). Now, crucially, the independence modalities can be used to force the values of x, y, z to form a full Cartesian product, via the formula ∀ (I_x y ∧ I_{x,y} z), where ∀ is the universal modality, available in LFD.
Let us show that every model of this formula is a full Cartesian product. Suppose that x takes value d at some assignment in M, that y takes value e somewhere, and that z takes value f somewhere. Consider an assignment s with s(x) = d. Since I_x y holds everywhere, the value e for y occurs together with this value for x, at some assignment t with t(x) = d, t(y) = e. But since also I_{x,y} z holds everywhere, there must also be an assignment u with u(x) = d, u(y) = e, u(z) = f. Now it is immediate that any three-variable first-order formula ϕ is satisfiable iff the matching LFDI formula τ(ϕ) ∧ ∀ (I_x y ∧ I_{x,y} z) is satisfiable.
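The forcing argument can be fact-checked on small finite models. A sketch (hypothetical encoding, using the truth condition that I_X Y holds at s iff every Y-value pattern realized in the model is compatible with the current X-values):

```python
from itertools import product

def agree(s, t, X):
    return all(s[v] == t[v] for v in X)

def I(X, Y, s, A):
    """Independence atom I_X Y at s."""
    return all(any(agree(u, s, X) and agree(u, t, Y) for u in A)
               for t in A)

def forces_full_product(A):
    """Does  forall (I_x y  and  I_{x,y} z)  hold at every assignment?"""
    return all(I(['x'], ['y'], s, A) and I(['x', 'y'], ['z'], s, A)
               for s in A)

full = [dict(zip('xyz', v)) for v in product([0, 1], repeat=3)]
gappy = [s for s in full if s != {'x': 0, 'y': 1, 'z': 1}]

print(forces_full_product(full))   # True: full Cartesian product
print(forces_full_product(gappy))  # False: one tuple is missing
```

Removing a single tuple from the product already falsifies I_{x,y} z at some assignment, matching the argument that models of the formula must realize all value combinations.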
Remark 6.11. Moving beyond the contrast between independence and functional dependence, the more general perspective for this section is the notion of correlated behavior. Local independence I_x y in our sense says that the current value of x does not place any constraint on the values that y can take at the present location. This is at the opposite extreme from functional dependence D_x y, which restricts the range of y to just one value. Clearly, there are other natural notions here. For instance, the mere negation of independence, ¬I_x y, says that the current value of x excludes at least one value for y, which can be seen as a weak form of correlation. This gives a very minimalistic notion of 'dependence', much weaker than our functional dependence, but one that is of interest on its own. The examples of failure of classical laws for CRS quantifiers presented in the Introduction do not necessitate functional dependence, but only weak correlations: any breach of full independence between two variables can lead to such a failure of a classical validity. The complete logic of weak correlation and independence would be the pure modal logic of I_X y. Stronger notions of correlation or partial dependence between x and y arise when we put further constraints on how far the local value of x constrains that of y, with full functional dependence in the limit.
Open problem Axiomatize the pure modal logic of independence. Is it decidable?
Remark 6.12. The preceding analysis also suggests more general comparative informational assertions X ≥_Y Z ('X carries at least as much information about Y as Z does'). All the above notions of dependence and independence are definable in terms of these comparative assertions. E.g., the conditional independence statement I_X Y|Z says that Z is at least (in fact, just) as informative about Y as X ∪ Z is, which can be expressed formally as the equivalence I_X Y|Z ↔ Z ≥_Y X ∪ Z.

Open problem Axiomatize the logic of comparative informational assertions.

Dynamics and model change
Typically, epistemic events carrying new information can change a current model. One may learn the current value of some variable, or more general facts. There can also be non-informational reasons for changing a current model, say, a shift in a current dynamical system. A few instances of the dynamics of dependence models will be discussed here, using methods from dynamic-epistemic logic, [7], [28], [15].
Learning current values. One can update a knowledge base after learning the true values of a set of variables X at (the current assignment) s ∈ A. This changes the model M = (M, A) to the submodel M|X with assignment set {t ∈ A : s =_X t}, retaining only the assignments that agree with s on all X-values. Now interpret the dynamic modality [X]ϕ as follows: M, s |= [X]ϕ iff M|X, s |= ϕ. This modality occurs in epistemic logic under the name of "public inspection of a value", [32].
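The effect of learning a value on dependence atoms can be made concrete. A minimal sketch (illustrative encoding, not from the paper): learning the current value of y shrinks the set of admissible assignments and thereby creates a dependence D_x y that failed before the update.

```python
def agree(s, t, X):
    return all(s[v] == t[v] for v in X)

def learn_values(A, s, X):
    """Update M|X: keep only assignments agreeing with s on X."""
    return [t for t in A if agree(s, t, X)]

def D(X, y):
    """Dependence atom D_X y at s in model A: every admissible t
    agreeing with s on X also agrees with s on y."""
    return lambda s, A: all(agree(s, t, [y]) for t in A if agree(s, t, X))

A = [{'x': 0, 'y': 0}, {'x': 0, 'y': 1}, {'x': 1, 'y': 1}]
s = A[0]

print(D(['x'], 'y')(s, A))                    # False before the update
A_upd = learn_values(A, s, ['y'])             # learn the current y-value
print(D(['x'], 'y')(s, A_upd))                # True after the update
```

This illustrates the general point made below that passing to a submodel with fewer assignments typically adds to the existing dependencies.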
Fact 6.13. The logic LFD with the modalities [X]ϕ is completely axiomatizable and decidable.

Proof.
It suffices to observe that the following recursion axioms are valid: Used iteratively in a standard dynamic-epistemic style, these reduce each formula in the extended dynamic language to an equivalent base formula of LFD.
Learning new facts. Another form of information update happens when learning a new true fact ϕ about the current assignment s. Semantically, an update with a formula ϕ transforms the model M = (M, A) into the relativized submodel M|ϕ with assignment set {s ∈ A : s |= ϕ}, retaining only the assignments satisfying ϕ. In the syntax, this is reflected by dynamic modalities [ϕ]ψ, with a semantic truth condition given by: M, s |= [ϕ]ψ iff M, s |= ϕ implies M|ϕ, s |= ψ. Remark 6.14. The logic of this type of update modality ('public announcement logic') is a well-known pilot system of information update. But updating a dependence model can mean different things. Going to a submodel with fewer assignments typically adds to the existing dependencies. In epistemic scenarios, this increase is fine, and in fact useful. 49 However, if the dependence model is a state space for some known current process, changes in the dependence structure create a new process, and this needs to be motivated by other considerations. 50 A dynamic-epistemic analysis still works for the new extended setting, but there is no longer any reduction to the base language of LFD. Admittedly, the dependence modalities after an update can be reduced to the original ones in a similar way to the well-known recursion law for epistemic modalities. But the new dependencies in the updated model M|ϕ can only be 'pre-encoded' in the original model M by means of a conditional dependence operator D^ϕ_X y, with a semantics given by: M, s |= D^ϕ_X y iff for all t ∈ A with s =_X t and M, t |= ϕ, we have s =_y t. This is illustrated in the following recursion equivalence, whose validity is easy to check: [ϕ]D_X y ↔ (ϕ → D^ϕ_X y). Of course, conditional dependence needs a recursion law in its turn, and the following is valid: [ψ]D^ϕ_X y ↔ (ψ → D^{ψ∧[ψ]ϕ}_X y). The logic with this update modality can be reduced to its static base logic (with conditional dependence operators) via such recursion laws. But in this case, the static base logic itself is no longer a routine extension of LFD. The difficulty lies in the following result.

Fact 6.15. The conditional dependence operator D^ϕ_X y is not definable in the basic language of LFD.
Proof.
For simplicity, consider a language with two variables x, y and one atom P xy. Take a dependence model M with just two admissible assignments s, t where s(x) = s(y) = 0, t(x) = 0, t(y) = 1, while the binary predicate P holds only of (0, 0) in the underlying first-order model. As for non-trivial dependence atoms in this language, at both assignments the formulas D_y x, ¬D_x y are true. Now extend M to a model M′ with a third assignment u such that u(x) = 0, u(y) = 2, while P now also holds of (0, 2). In M′, D_y x and ¬D_x y are true at all of s, t, u. Now, it is easy to prove by induction that the map F sending the two assignments s and u in M′ to s in M, and the assignment t in M′ to t in M, has the following property: for any LFD formula ϕ and any assignment v, M′, v |= ϕ iff M, F(v) |= ϕ. 51 But conditional dependence sees a difference here: the formula D^{Pxy}_x y is true at s in the model M (the restriction leaves only the assignment s), but not in M′, since both s and u remain after the restriction. So D^{Pxy}_x y cannot be defined in terms of LFD formulas.
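The two models and the distinguishing conditional dependence atom can be checked directly. A sketch (hypothetical encoding of the semantics of D^ϕ_X y as: all admissible ϕ-assignments agreeing with s on X also agree with s on y):

```python
def agree(s, t, X):
    return all(s[v] == t[v] for v in X)

def cond_D(X, y, phi, s, A):
    """Conditional dependence D^phi_X y at s: among the phi-assignments
    agreeing with s on X, the value of y is fixed to s(y)."""
    return all(t[y] == s[y] for t in A if agree(s, t, X) and phi(t))

# M from the proof: P holds only of the value pair (0, 0).
M  = [{'x': 0, 'y': 0}, {'x': 0, 'y': 1}]
# M' adds the third assignment u, and P now also holds of (0, 2).
M2 = M + [{'x': 0, 'y': 2}]
P_M  = lambda t: (t['x'], t['y']) == (0, 0)
P_M2 = lambda t: (t['x'], t['y']) in {(0, 0), (0, 2)}

s = M[0]
print(cond_D(['x'], 'y', P_M, s, M))    # True: only s survives restriction
print(cond_D(['x'], 'y', P_M2, s, M2))  # False: s and u both survive
```

The two models are LFD-equivalent under the map F from the proof, yet the conditional atom separates them, which is exactly the non-definability argument.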
Open problem Axiomatize the modal logic of conditional dependence. Is it decidable?

Remark 6.16 (Enlarging models). Natural updates can just as well extend current dependence models with new assignments, thereby possibly giving up dependencies that used to hold.
Broader dynamic perspectives. An update perspective suggests extending the semantics of LFD from considering just single dependence models to families of these. Definition 6.17. A dependence universe U is a family of dependence models. Epistemically, each model in U can be seen as a candidate for the true structure of the world, and the dependence universe then represents a 'space of inquiry'. But a dependence universe might also be a family of available processes one can switch between. Either way, just as a dependence model, a dependence universe need not be a full power set of some sort: many possible structures may be missing from the space of inquiry or process repertoire. 52 A natural extension of LFD describes triples (U, M, S) of a dependence universe U, a dependence model M ∈ U, and one or more binary relations on dependence models S ⊆ U × U, expressing relevant changes of the dependence models. It contains the dependence modalities interpreted as before, but also modalities accessing alternative models via the relations S. Example 6.19. The following two principles are valid in dependence universes:

Open problem What is the complete logic of LFD plus the downward and upward submodel modalities on dependence universes?

50 A typical model change different from information update is changing the space of relevant variables. This happens, e.g., when analyzing two correlated variables by introducing a new variable on which both depend. 51 The general fact here is that F is a modal 'p-morphism', cf. [22], in a sense appropriate to dependence models.
This concludes our exploration of logical operators that extend the basic language of LFD.

Dependence in concrete settings
The dependence semantics and logic of this paper are simple, and many notions of dependence in actual use add further features. This section presents a few such cases, mainly to show that they fit the basic LFD perspective, while also highlighting the interesting, more specialized structures that call for further logical investigation.

Databases
This paper started with a simple database example, which nevertheless does not do justice to the more sophisticated structures studied in database theory, [1]. Much of this theory is formulated in terms of first-order logic and its low-complexity fragments, and in this light, LFD can be seen as an attempt at capturing some high-level features of databases in a modal style. Indeed, various kinds of dependence and independence in databases can be represented in LFD-style languages, especially with the extensions introduced in Section 6. As a further point, databases consist of 'facts' and 'rules'. Rules are hard-wired regularities, telling us how to close the database under inferences. For semantic dependencies in a model, this suggests a natural distinction: some are 'accidental', others are 'essential'. This distinction cannot be seen inside dependence models; it requires an additional external decision as to which regularities are important and which ones are not. A semantic setting for getting at the distinction is provided by the dependence universes of Section 6.6. Accidental dependencies D_X y just hold in the current model, while essential ones continue to hold even under relevant updates of that model, which can be expressed using modalities such as [↓]ϕ and [↑]ϕ.

Vector spaces
The next example comes from linear algebra, where dependence is the fundamental notion behind a wide range of applications to computation, defining geometrical dimension, and much more. A vector y depends on a set of vectors x if y can be written as a linear combination of the vectors z ∈ x. This notion is not primarily semantic; rather, it ties in with the equivalent functional definability perspective on dependence of Section 2.3. However, given the special operations used in linear algebra, there are interesting valid properties beyond those provided by LFD.
Example 7.1. The Steinitz Exchange Principle, [55], reads as follows in the LFD language: D_{X,y} z → (D_X z ∨ D_{X,z} y). The reason for its validity in linear algebra is that, if z is a linear combination of X, y, then either the coefficient for y is 0, and the first disjunct holds, or that coefficient is not 0, and then one can divide by it, obtaining a formula expressing y as a linear combination of X, z.
The Steinitz principle is not valid in LFD: a counter-example on dependence models occurs in Example 2.8, where we have D {z,x} y, ¬D z y and ¬D {z,y} x. However, in the spirit of Section 5.3, one can ask for a modal correspondence result: which constraint on assignments in dependence models ensures the validity of the Steinitz principle? What comes to mind is a principle about existence of inverses for implicit functions that truly depend on their arguments, but a precise solution remains to be found. In addition, there is a natural question of axiomatization.
Open problem. Axiomatize the complete theory of LFD-style assertions about dependence between vectors. Is it just the basic proof system LFD plus Steinitz Exchange?

Remark 7.2. Matroid Theory studies abstract linear dependence and independence. Matroids are finite families of sets of vectors satisfying conditions that imply the uniqueness of finite dimension. Matroids can be represented as dependence frames for LFD, [37], but there is an issue as to the best logical framework. In the matroid setting, sets of variables are the central notion, and LFD does not describe such sets in an abstract algebraic way, except by brute enumeration. It would be of interest to develop a modal perspective on Matroid Theory.

Topologizing LFD: the logic of continuous dependence
In empirical contexts, the exact values of most variables are never accessible. Then, the existence of a functional dependence in the sense of LFD is a moot point, of only theoretical importance. What matters is whether there is a knowable dependence: given what can be known in principle, by measurements of any precision, about the value of x, can the value of y be found with any desired degree of precision?
Making sense of this intuition calls for a topological setting, with its intuitions of approximation and continuity. This section outlines such a logic of continuous dependence LCD, though a full presentation and development is postponed to our forthcoming paper [8].
A variable y depends continuously on x at an assignment s if the value of y at s is determined to any desired degree of approximation by some (possibly better) degree of approximation of the value of x at s. Epistemically, this means that one can know the value of y with any desired accuracy, if given a sufficiently accurate estimate of the value of x. 53 This suggests having a topology τ on the set of objects O, to capture approximations of values s(x) as open neighborhoods U ∈ τ with s(x) ∈ U. Global dependence D M x y in such a topo-dependence model M = (M, A, τ ) requires the existence of a continuous map from x-values to y-values, while local dependence D s x y is given by:

D s x y iff for every V ∈ τ (s(y)) there is some U ∈ τ (s(x)) such that, for all t ∈ A, t(x) ∈ U implies t(y) ∈ V,

where τ (o) = {U ∈ τ : o ∈ U } is the family of open neighborhoods of an object o ∈ O. This can be generalized to dependence D X y on a set X of variables, by using the product topology on O |X|. Intuitively, D X y holds at an assignment s if all assignments that assign to X values that are close enough to their current ones also assign to y a value close enough to its current one.
The natural analogue of the semantic clause for simple dependence modalities is:

s |= D x ϕ iff there is some U ∈ τ (s(x)) such that t |= ϕ for all t ∈ A with t(x) ∈ U.

The definition can be generalized to set-based dependence modalities D X ϕ using a product topology, but we skip details here. Intuitively, D x ϕ holds at an assignment s if ϕ holds at all admissible assignments that assign to x values that are 'close enough' to their current value s(x). This connects to the well-known topological-interior semantics for the modal logic S4: an assignment s satisfies D x ϕ iff the current x-value s(x) is in the interior of the set {t(x) : t |= ϕ} of all x-values of ϕ-assignments. Philosophically, the interior semantics points to an evidential conception of knowledge: ϕ is knowable from X if there exist some potential pieces of evidence about X that entail ϕ.
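On a finite team of real-valued assignments, the ε/δ pattern of the local dependence clause can be tested directly, by shrinking basic neighborhoods around s(x). The following Python sketch is only an approximation of the topological semantics (the names, the single tested precision eps_y, and the finite search bound are all our own choices):

```python
# Finite-team test of the local continuous dependence clause for D_s x y:
# for a target precision eps_y, search for a neighbourhood of s(x) of
# radius 1/2**k whose x-inhabitants are all eps_y-close to s on y.
# (The bound k <= 40 is an arbitrary cutoff for this illustration.)

def locally_continuous_dep(team, s, x, y, eps_y):
    for k in range(41):
        delta = 1.0 / 2 ** k
        if all(abs(t[y] - s[y]) < eps_y
               for t in team if abs(t[x] - s[x]) < delta):
            return True
    return False

# Assignments sampled from the continuous map y = x**2.
team = [{"x": i / 10, "y": (i / 10) ** 2} for i in range(-10, 11)]
s = {"x": 0.0, "y": 0.0}
print(locally_continuous_dep(team, s, "x", "y", 0.05))    # True

# A second assignment with the same x-value but a distant y-value
# destroys the dependence: no neighbourhood of s(x) excludes it.
broken = team + [{"x": 0.0, "y": 1.0}]
print(locally_continuous_dep(broken, s, "x", "y", 0.05))  # False
```

The full clause quantifies over all target precisions; this sketch tests a single one.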
As for defined notions, ∀ ϕ := D ∅ ϕ is still the universal modality over all assignments in A. Global dependence, defined as before by ∀ D x y, now expresses the existence of a continuous map f : O → O with s(y) = f (s(x)) for all s ∈ A. Other defined operators acquire a different meaning. The formula D ∅ y used to mean in LFD that y is constant, taking only one value. But in a topological setting it expresses a more complex condition on the 'specialization pre-order', which only reduces to constancy in the presence of the separation axiom T 1 .
More details, including decidability and a complete axiomatization, as well as further extensions to include uniform continuity and links with Domain Theory, will be presented in [8]. For now, we note that the proof calculus for LCD involves the modal logic S4 rather than S5 for its dependence modalities. Moreover, even LFD principles that remain valid as they stand now express something subtly different in a topological setting. In particular, the Transfer Axiom (D X Y ∧ D Y ϕ) → D X ϕ turns out to capture the continuity of dependence. 54

Remark 7.3 (Point-free alternatives). Dependence in the logic LCD strengthens the notion of dependence in LFD: the functions made explicit in Section 2.3 must now be continuous. But the intuitions behind the topological view seem independent from the existence of point-to-point functions. They rather talk about correlating evidence, i.e., open sets, whether or not there is some underlying set of sharp limit points and functions between these. The better framework, then, might be a point-free topology, with the notion of dependence suitably adapted to direct correlations between open sets that induce continuous functions under some appropriate mathematical construction of points.

Dynamical systems
Many real-life dependencies have a temporal aspect. Even the simple propositional example of Remark 6.8 suggests a network dynamics where propositions can become true or false, and dependencies involve a time delay. 55 This suggests a temporal universe of assignments occurring over time, with dependencies such as s t+1 (y) = s t (x). Now one might reduce this to a static setting by adding temporal variables, using function terms as in Section 6.1. But it seems more natural to turn dependence models into dynamical systems where assignments are global states that can occur and repeat over the permissible evolutions of the system. A logic for this should combine LFD with a temporal language. Consider a dependence model M = (M, A) with an assignment-changing next-state map g : A → A. The dynamical system defined by (M, g) is the family of functions {g n } n∈N , with g n the n-fold composition of g with itself. A simple language of dynamic dependence adds three items to the syntax of LFD: a next operator ○ϕ, an n-th step dependence operator D (n) X y for each n ∈ N, and a henceforth operator ○* ϕ. Their semantics has these clauses:

s |= ○ϕ iff g(s) |= ϕ
s |= D (n) X y iff s = X t implies g n (s) = y g n (t), for all t ∈ A
s |= ○* ϕ iff s |= ○ n ϕ for all n ∈ N (with ○ n ϕ the n-th iteration of ○)

In particular, D (1) X y says that the current values of the variables in X uniquely determine the next-step value of y. This is just what is needed to formalize Tit-for-Tat or Copy-Cat. 56 As for valid reasoning, dynamic analogues of Reflexivity, Monotonicity and Transitivity are easy to formulate. There is also a valid dynamic analogue of the Transfer Axiom, composing step indices:

(D (m) X Y ∧ D (n) Y z) → D (m+n) X z

Open problem. Axiomatize dynamic dependence logic completely. Is this logic decidable? 57

Remark 7.4 (Topology once more). Dynamical systems usually have a state space endowed with a topology. This richer setting gives rise to 'dynamical topo-dependence models' (M, A, τ ) with a next-step map g : A → A that is continuous.
Then, for instance, extending the system LCD with temporal operators, for a finite total set of variables V, the commutation axiom ○ D V ϕ → D V ○ϕ expresses the continuity of the next-step function g.

55 The same is true for situation-theoretic scenarios of information flow, [11], [19], and for ubiquitous strategies in iterated strategic games, such as Tit-for-Tat or Copy-Cat: what you do now is what I will do next, [64].
56 The syntax chosen here for purposes of illustration is a bit cumbersome. Direct functional notations, such as Ox for the value of x at the next state, will be more perspicuous in practice.
57 In response to an earlier version of this paper, completeness and decidability results for the temporal dependence logic of dynamical systems have been claimed in [56].
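The clause for the n-step dependence operator can be checked mechanically on a toy dynamical system. The following Python sketch uses a hypothetical Copy-Cat next-state map over two binary variables (all names are illustrative, not from the paper):

```python
from itertools import product

# Toy dynamical dependence model: assignments are dicts over {x, y}.
# The next-state map g implements a Copy-Cat pattern: y copies the
# previous value of x, while x flips.

def g(s):
    return {"x": 1 - s["x"], "y": s["x"]}

def iterate(s, n):
    for _ in range(n):
        s = g(s)
    return s

def dyn_depends(team, X, y, n):
    """Check D^(n)_X y: agreement on X now forces agreement on y
    after n applications of the next-state map g."""
    for s, t in product(team, repeat=2):
        if all(s[v] == t[v] for v in X) and iterate(s, n)[y] != iterate(t, n)[y]:
            return False
    return True

team = [{"x": a, "y": b} for a in (0, 1) for b in (0, 1)]
print(dyn_depends(team, ["x"], "y", 1))  # True: next y is current x
print(dyn_depends(team, ["y"], "y", 1))  # False: next y ignores current y
```

Since x flips at each step, the current value of x also determines x itself two steps ahead, a dynamic analogue of Reflexivity across iterations.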

Games
Dependence also occurs in game theory, [64], though with an additional flavor. While LFD speaks about dependence of values, game theory talks about dependence of actions. The notions are related, but games pose some interesting new features for logical dependence analysis, [16], [18].
Example 7.5. (Choice and dependence). Consider an extensive game of perfect information with two players A, E that have two moves 'left' and 'right' at each turn. A moves first, then it is E's turn. The four histories in the game tree can be viewed as assignments to two variables x, y, with x the action chosen by A, and y by E. The available actions for each player are independent from those of the other: both I x y and I y x hold in the sense of Section 6.3.
Now let E choose a strategy, i.e., in this simple game: a move at each of her two possible decision nodes. This restricted play introduces a functional dependence: the LFD statement D x y, which was false before, will now come to hold. Thus, committing to a choice, or a strategy in general, changes the current dependence model for the game to one where appropriate dependence statements come to hold. 58 Extensive game trees can be associated with dependence models whose variables stand for successive actions by the players. 59 Moreover, the action perspective introduces the dependence dynamics of Section 6.4. Making a choice makes a dependence statement D X y true by removing some assignments from a given model M, to obtain a submodel N satisfying (a) the statement D X y, but also (b) the following 'X-richness' constraint, for the given set X ⊆ V of variables:

N ↾ X = M ↾ X

(where M ↾ X is the restriction of all assignments in M to the domain X). The 'X-rich submodels' N satisfying this constraint can be viewed in both first-order and modal terms. An interesting question is which syntactic types of statement are preserved when moving from M to N.
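Example 7.5 can be replayed concretely: committing to a strategy removes histories from the model, and D x y comes to hold. A minimal Python sketch (all names hypothetical):

```python
def depends(team, X, y):
    """Global dependence D_X y over a finite set of assignments."""
    seen = {}
    for s in team:
        key = tuple(s[v] for v in X)
        if seen.setdefault(key, s[y]) != s[y]:
            return False
    return True

# All four histories of the two-move game: x is A's move, y is E's move.
histories = [{"x": a, "y": e} for a in ("L", "R") for e in ("L", "R")]
print(depends(histories, ["x"], "y"))   # False: E's move is unconstrained

# E commits to the strategy "copy A's move": keep only matching histories.
committed = [h for h in histories if h["y"] == h["x"]]
print(depends(committed, ["x"], "y"))   # True: D_x y now holds
```

Note that the restricted model keeps all of A's options, matching the x-richness constraint for X = {x}.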
Open problem Develop the dynamic dependence logic of strategic choice.
Next consider extensive games with imperfect information. Here is a simple illustration. Example 7.6 (Imperfect information games). In Example 7.5, now assume that E cannot observe A's move. Then E's epistemic uncertainty relation holds between the two mid-points of the game tree. This game has been discussed widely for its combination of action and knowledge, a typical feature of games with imperfect information and their links with modal logics, [16].
In game theory, a strategy must be uniform, assigning the same move at points that E cannot distinguish epistemically. In the above example, this leaves only strategies 'always left' and 'always right'. 60 The intuition behind uniform strategies is that they can only appeal to things that players know. This knowledge is encoded by the current equivalence class of their epistemic equivalence relation. This is LFD dependence combined with the epistemic representation in Section 3.4. The choice of move by a strategy depends on the variable for the agent's knowledge state. In general, this perspective will work with distinct variables for agents and moves, corresponding to the knowledge and the action modalities whose interplay is crucial to reasoning about games with imperfect information.
But, if games are very regular, say, just choosing values for stage variables x, y, ... from some fixed set, epistemic uncertainty relations match up directly with dependence relations for sets of variables. Assume, as is common in epistemic-temporal logic, [65], that players can observe some events that have taken place, but not others. Then their equivalence relation on histories will be equality for the values of their observable variables only. In games, strategies for a player now have to choose actions that depend in the LFD sense on the values of the observed variables for that player. In general, in this sort of two-player game with imperfect observation, players A and E partition all the variables. 61 This discussion by no means exhausts the topic of games with either perfect or imperfect information from a dependence-logical perspective, and in general, as stated before, we will need combinations of epistemic logic for players' knowledge and LFD for their actions.

Causality
A final important arena for dependence is causality. Causal graphs, [66,43], impose correlations between variables, restricting the simultaneous assignments of values that represent possible states of the world. This is reminiscent of the dependence graphs in Section 2, and indeed, one common notion of 'causal influence' of a variable x on an endogenous variable y found in [66] can be simply represented in LFD as D V −{y} y ∧ ¬D V −{y, x} y, with V the set of all variables. But the match is not one-to-one. Not all relational facts in dependence graphs represent causal connections: singling out the truly causal ones requires a separate decision. Vice versa, causal graphs do not have a unique associated dependence model: they are schemata for many models. Even so, the representation in Section 2 may extend to causality, now also analyzing various types of dependencies that can occur in dependence models.
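The LFD rendering of causal influence can likewise be tested on a small structural model. A sketch, assuming a toy XOR mechanism (variable names and mechanism are our own illustration):

```python
def depends(team, X, y):
    """Global dependence D_X y over a finite set of assignments."""
    seen = {}
    for s in team:
        key = tuple(s[v] for v in sorted(X))
        if seen.setdefault(key, s[y]) != s[y]:
            return False
    return True

def causal_influence(team, V, x, y):
    """[66]'s causal influence of x on y, rendered in LFD as
    D_{V-{y}} y  AND  NOT D_{V-{y,x}} y."""
    rest = V - {y}
    return depends(team, rest, y) and not depends(team, rest - {x}, y)

# Toy structural model: y = x XOR u, over all value combinations.
team = [{"x": a, "u": b, "y": a ^ b} for a in (0, 1) for b in (0, 1)]
V = {"x", "u", "y"}
print(causal_influence(team, V, "x", "y"))  # True: dropping x loses y
print(causal_influence(team, V, "u", "y"))  # True: u is also an influence
```

As the text stresses, such a check identifies functional patterns only; whether they are genuinely causal is a separate decision.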
Conversely, several themes in the theory of causal graphs resonate in the present framework. For instance, LFD with function terms may be considered a modal companion to the logic for causality in [43], that manipulates explicit equations between variables in causal graphs. Also, the crucial notion of 'interventions' in causal graphs has an obvious counterpart in updates of dependence models that fix values, as in Section 6.5. Even so, there may be an essential surplus to the notion of causal dependence that transcends the resources of the LFD framework. In this sense, see [9,10] for a formalism that combines features of Dependence Logic with an interventionist approach to causality, and see [76] for a similar combination of epistemic logic and causal models.
Many further concrete notions of (in-)dependence occur in the literature. There is essential dependence and independence in natural language, [47], metaphysics, [34], [50], proof theory, [70], ceteris paribus reasoning, [17], social choice theory, logics of agency, and many other fields. A complete list is beyond the scope of this paper, but a confrontation with LFD seems worthwhile in many of these cases.

Related work
In this section, some of the many other approaches to dependence are listed in historical order, with comments on connections to the LFD framework.
Armstrong axioms. The basic structural properties of functional dependence used in this paper were identified by Armstrong [5], in the form of the postulates of Inclusion (cf. Definition 2.4, Example 5.2a), Transitivity (cf. Definition 2.4) and Additivity (cf. Example 5.2b). By Fact 2.5, the first two together are equivalent to the conjunction of our Projection and Transitivity properties (as well as to the conjunction of Reflexivity, Monotonicity and Transitivity), while Armstrong's Additivity is absorbed into our definition of D X Y as an abbreviation for the conjunction ⋀ y∈Y D X y. Armstrong gave a representation theorem showing that these axioms are complete for (global) database dependence. Section 2 of this paper presents a different proof, yielding a stronger representation theorem (Proposition 2.7), for both global and local dependence. Similar abstract structural axioms for independence, given in [36], underlie the modal independence logic in Section 6.3.
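Armstrong's postulates also yield the standard attribute-closure algorithm from database theory, which the following Python sketch implements (attribute names are illustrative):

```python
def closure(attrs, fds):
    """Attribute-set closure under a list of functional dependencies
    (lhs, rhs), each a pair of sets. Repeatedly fires dependencies
    whose left-hand side is already included, i.e., iterates
    Armstrong-style Inclusion and Transitivity reasoning."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closed and not rhs <= closed:
                closed |= rhs
                changed = True
    return closed

fds = [({"course"}, {"teacher"}), ({"teacher"}, {"office"})]
print(sorted(closure({"course"}, fds)))
# ['course', 'office', 'teacher'] -- Transitivity: course -> teacher -> office
```

A dependence X → Y follows from the given ones exactly when Y is contained in the closure of X, which is how Armstrong-completeness is typically exploited in practice.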
CRS logic. As explained in our Introduction, LFD is a direct continuation of generalized assignment semantics CRS for first-order logic, for which we have given several references. The origins of CRS lie in relational and cylindric algebra, [63]. The decidability of CRS can be shown by first-order translation into the 'Guarded Fragment' GF, [3], while the first-order translation for LFD in Section 3.2 does not map into GF. As we have noted, it is an open problem whether one can prove decidability for LFD via a known decidable fragment of FOL.
Independence-friendly logic. Dependence pervades game-theoretic semantics for logical systems. Strategies in evaluation games for FOL correspond with Skolem functions that express dependence in the sense of Section 2.3. A further innovation was 'Independence-Friendly Logic' (IF-logic, for short), [47], where the player for the existential quantifier may have imperfect information about the objects chosen by the player for the universal quantifier, cf. Section 7.5. A compositional semantics for IF-logic uses evaluation on sets of assignments, [48], allowing for choices of values independently from the values for specified other variables. 62 These sets are like LFD dependence models, but without designated single assignments and local dependence. Moreover, in contrast with LFD, IF-logic is second-order and non-axiomatizable. For a complete mathematical development of IF-logic, see [58].
A comparison between LFD and IF-logic poses a challenge, already noted for CRS vs. IF-logic in [13]. IF-logic sees first-order logic as tied to linear dependencies between quantifiers, and incorporates 'branching quantifiers', thereby moving up to second-order complexity. In contrast, CRS sees FOL as too much tied to independence, and weakens it to a decidable logic that allows for both dependence and independence of variables. One obvious difference is that IF-logic takes standard FOL as is, and adds syntax for independence. We made some remarks on the connection of LFD with FOL in Section 3.2, and we have more precise results, but a deeper treatment is a topic for a separate paper. But perhaps more importantly here, in the terminology of Section 7.5, while LFD analyzes what might be called value dependencies between variables, IF-logic describes what might be called choice dependencies between quantifiers. It is easy to see formally that LFD cannot express choice dependencies, and our brief discussion of games showed that we would need additional modalities over dependence universes. Even so, LFD and IF-logic also share some traits, and cross-overs between the two are worth exploring.
For instance, one can enrich LFD with natural forms of branching quantification on dependence models, starting from a natural reading of the simplest Henkin formula. 62

62 As it happens, sets of assignments were used even earlier in dynamic semantics of natural language, in order to model the meaning and anaphoric behavior of plural expressions, [21].

Conclusion
Dependence has a ubiquitous semantic sense of determination of values for some variables by those of others. We have presented a decidable classical logic LFD for reasoning about functional dependence, together with complete axiomatizations. The proofs come in both first-order and modal style. Conceptually, these two complementary perspectives connect to the two manifestations of dependence highlighted throughout this paper: 'ontic' in the world or in some dynamical system, and 'informational' connecting to knowledge and questions. Further language extensions, as well as richer semantical settings, have been discussed in some detail.
Many open problems have been identified in this extension process, reflecting mainly its semantic and model-theoretic spirit. But we have also shown that there is room for a purely proof-theoretic analysis of LFD and its extensions, and, perhaps as a compromise between model theory and proof theory, an analysis in universal algebra would be illuminating.
Going beyond these standard logical perspectives, one can think of dependence information-theoretically, in terms of values of dependent variables adding no Kolmogorov complexity to the given ones. But perhaps the greatest challenge left unaddressed here is tying the qualitative logical LFD analysis to probabilistic notions of correlation and dependence. 68 A point of entry may be the analogy of dependence with consequence relations noted in Section 2. LFD-style dependence goes by universal quantification over all assignments. But as we observed, one can soften this, as in non-monotonic default logics, by going to models where the semantic dependence holds only in the most plausible cases, or only with high probability in some qualitative sense, [31]. In that case, the agenda for LFD becomes wide open again.

=→ X , incorporating all the one-step relations labelled by sets that locally determine X: h =→ X h′ iff h = Y h′ for some set Y with last(h) |= D Y X. Then the required equivalence relations = X on worlds/histories in A st can be taken to be the reflexive-transitive-symmetric closure of the relations =→ X . To check the claims below, it may be useful to note that h = X h′ holds iff the unique non-redundant path from h to h′ consists only of steps of the form h n =→ Y n h n+1 , or h n =← Y n h n+1 , with last(h n ) |= D Y n X.
Finally, the valuation on atoms is given by truth at the last world in the history (in the original model): D h X y iff last(h) |= D X y, P h x iff last(h) |= P x. The fact that this definition yields a standard relational model M st is an easy verification.
To finish the proof, we define a map f :

A3. Decidability via relational models
The preceding detour into abstract relational models and the above Corollary A.4 on modal equivalence can be used to give a second, more general proof of decidability using the Modal Logic concept of filtration [22].
Proposition A.5. The language LFD has the Strong Finite Relational Model Property: if ϕ is satisfied in some relational model M, then it is satisfied in a finite relational model, whose size is bounded by a computable function of ϕ. As a consequence, the logic LFD is decidable.