Beyond the Mean: A Flexible Framework for Studying Causal Effects Using Linear Models

Graph-based causal models are a flexible tool for causal inference from observational data. In this paper, we develop a comprehensive framework to define, identify, and estimate a broad class of causal quantities in linearly parametrized graph-based models. The proposed method extends the literature, which mainly focuses on causal effects on the mean level and the variance of an outcome variable. For example, we show how to compute the probability that an outcome variable realizes within a target range of values given an intervention, a causal quantity we refer to as the probability of treatment success. We link graph-based causal quantities defined via the do-operator to parameters of the model implied distribution of the observed variables using so-called causal effect functions. Based on these causal effect functions, we propose estimators for causal quantities and show that these estimators are consistent and converge at a rate of N^{-1/2} under standard assumptions. Thus, causal quantities can be estimated based on sample sizes that are typically available in the social and behavioral sciences. In the case of maximum likelihood estimation, the estimators are asymptotically efficient. We illustrate the proposed method with an example based on empirical data, placing special emphasis on the difference between the interventional and conditional distribution.

Supplementary Information: The online version contains supplementary material available at 10.1007/s11336-021-09811-z.


Graph-Based Models for Causal Inference
The graph-based approach to causal inference was primarily formalized by Judea Pearl (1988, 1995) and Spirtes, Glymour, and Scheines (2001). A causal graph represents a researcher's theory about the causal structure of the data-generating mechanism. Based on a causal graph, causal inference can be conducted using the interventional distribution, from which standard causal quantities such as average treatment effects (ATEs) can be derived. In the most general formulation, a causal graph is accompanied by a set of nonparametric structural equations. Thus, a common acronym for Pearl's general nonparametric model is NPSEM, which stands for nonparametric structural equation model (Pearl, 2009; Shpitser, Richardson, & Robins, 2020).
Graph-based causal models share many common characteristics with the traditional literature on structural equation models (SEM) prevalent in the social and behavioral sciences and economics (Bollen & Pearl, 2013; Heckman & Pinto, 2015; Pearl, 2009, 2012). However, these two approaches also differ in several aspects including the underlying assumptions (e.g., graph-based models assume modularity), notational conventions (e.g., the meaning of bidirected edges in graphical representations), research focus (e.g., nonparametric identification in graph-based models vs. parametric estimation in traditional SEM), and standard procedures.
Graph-based procedures often focus on a single causal quantity of interest (e.g., the ATE) and on establishing its causal identification based on a minimal set of assumptions (e.g., without making parametric assumptions about the functional form of the structural equations). In this paper, we rely on the following assumptions: 3
1. The causal ordering of observed variables and unobserved confounders is known.
2. Interventions only alter the mechanisms that are directly targeted (modularity). 4
3. The treatment status of a unit (e.g., person) does not affect the treatment status or the outcome of other units (no interference).
4. Direct causal effects are constant across units (homogeneity).
5. Direct causal effects are constant across value combinations of observed variables and unobserved error terms (no effect modification).
6. Omitted direct causes as comprised in the error terms follow a multivariate normal distribution. 5
The first three assumptions listed above are generic to the graph-based approach to causal inference and need to hold in its most general nonparametric formulation. Assumptions 4 and 5 justify the use of linear structural equations. Assumption 6 justifies the use of multivariate normally distributed error terms. We further assume that variables are measured on a continuous scale and are observed without measurement error. Throughout this paper, we assume that the model is correctly specified. In the discussion section, we briefly point to the literature on statistical tests of model assumptions and methods for analyzing the sensitivity of causal conclusions with respect to violations of untestable assumptions. Furthermore, we briefly discuss possible ways to relax the model assumptions (e.g., measurement errors, unobserved heterogeneity, effect modification, excess kurtosis in the error terms).
A linear graph-based causal model over the set V = {V_1, ..., V_n} of observed variables is defined by the following set of equations (Brito & Pearl, 2006, p. 2): 6

V_j = Σ_{i=1}^{n} c_{ji} V_i + ε_j,  j = 1, ..., n   (1)

We assume that all variables are deviations from their means, and no intercepts are included in Eq. (1). A nonzero structural coefficient (c_{ji} ≠ 0) expresses the assumption that V_i has a direct causal influence on V_j. Restricting a structural coefficient to zero (c_{ji} = 0) indicates the assumption 3 If the listed statements are indeed true, the causal Markov assumption is implied. For a detailed discussion of the logical relation of causal assumptions encoded in graph-based models and causal assumptions from the Neyman-Rubin potential outcome framework (e.g., ignorability, SUTVA), see, for example, Holland (1988); Pearl (2009); Shpitser et al. (2020). 4 Similar concepts such as autonomy (Aldrich, 1989), exogeneity (Mouchart, Russo, & Wunsch, 2009), and invariance (Cartwright, 2009) have been discussed in the econometric literature. However, we believe that these concepts are not part of the canonical assumptions of traditional SEM as used in the social and behavioral sciences. 5 Many results derived in this paper (e.g., the moments of the interventional distribution in Eqs. (6a) and (6b) or Theorem 8) do not rely on multivariate normality. However, Result 3 on the distributional family of the interventional distribution requires multivariate normality. 6 Throughout this article, we use the following conventions: Sets of random variables are denoted by calligraphic letters (e.g., V = {V_1, ..., V_n}). Single random variables from a set are denoted by corresponding upper-case Latin letters (e.g., V_i). The column vector containing all random variables in a set is denoted by the corresponding bold Latin letter (e.g., V = (V_1, ..., V_n)′). Realizations of a random vector V are denoted by lower-case Latin letters (e.g., v).

that V_i has no direct causal effect on V_j. The parameter c_{ji} quantifies the magnitude of a direct effect. The q × 1 parameter vector θ_F ∈ Θ_F ⊆ R^q contains all distinct, functionally unrelated and unknown structural coefficients c_{ji}. Θ_F denotes the parameter space, and it is a subspace of the q-dimensional Euclidean space. Restating Eqs. (1) in matrix notation yields:

V = C V + ε  ⟺  V = (I_n − C)^{-1} ε   (2)

The n × n identity matrix is denoted as I_n. The n × n matrix of structural coefficients is denoted as C, and we sometimes use the notation C(θ_F) to emphasize that C is a function of θ_F. We restrict our attention to recursive systems for which the variables V can be ordered in such a way that the matrix C is strictly lower triangular (which ensures the existence of the inverse in Eq. (2); Bollen, 1989). The set of error terms is denoted by E = {ε_1, ..., ε_n}. Each error term ε_i, i = 1, ..., n, comprises variables that determine the level of V_i but are not explicitly included in the model. Typically, the following assumptions (or a subset thereof) are made (Brito & Pearl, 2002; Kang & Tian, 2009; Koster, 1999): (a) E(ε) = 0_n, where 0_n is an n × 1 vector that contains only zeros. (b) E(εε′) = Ψ, where the n × n matrix Ψ is finite, symmetric and positive definite.
(c) ε ∼ N_n(0_n, Ψ), where N_n denotes the n-dimensional normal distribution.
A nonzero covariance ψ_{ij} indicates the existence of an unobserved common cause of the variables V_i and V_j. The p × 1 parameter vector θ_P ∈ Θ_P ⊆ R^p contains all distinct, functionally unrelated and unknown parameters from the error term distribution. Θ_P denotes the parameter space, and it is a subspace of the p-dimensional Euclidean space. We sometimes use the notation Ψ(θ_P) to emphasize that Ψ is a function of θ_P. The resulting model implied joint distribution of the observed variables is denoted by {P(v, θ) | θ ∈ Θ}, where Θ = Θ_F × Θ_P, and P is the family of n-dimensional multivariate normal distributions. The graph G is constructed by drawing a directed edge from V_i pointing to V_j if and only if the corresponding coefficient is not restricted to zero (i.e., c_{ji} ≠ 0). A bidirected edge between vertices V_i and V_j is drawn if and only if ψ_{ij} ≠ 0 (bidirected edges are often drawn using dashed lines). The absence of a bidirected edge between V_i and V_j reflects the assumption that there is no unobserved variable that has a direct causal effect on both V_i and V_j (no unobserved confounding). 7 For recursive systems, the resulting graph belongs to the class of acyclic directed mixed graphs (ADMGs), where "mixed" refers to the fact that graphs in this class contain directed edges as well as bidirected edges (Richardson, 2003; Shpitser, 2018). An example model with n = 6 variables and the corresponding causal graph is introduced in the illustration section.
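As a concrete numerical sketch of this setup, the reduced form V = (I_n − C)^{-1} ε implies the model implied covariance matrix Σ(θ) = (I_n − C)^{-1} Ψ ((I_n − C)^{-1})′. The following snippet illustrates the computation for a small hypothetical three-variable model; the matrices C and Ψ below are invented purely for illustration:

```python
import numpy as np

# Hypothetical 3-variable recursive model V1 -> V2 -> V3 (strictly lower-triangular C).
C = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0]])
# Error covariance Psi: unit variances plus one bidirected edge V1 <-> V3.
Psi = np.array([[1.0, 0.0, 0.3],
                [0.0, 1.0, 0.0],
                [0.3, 0.0, 1.0]])

n = C.shape[0]
I = np.eye(n)
# Reduced form V = (I - C)^{-1} eps implies the model implied covariance
# Sigma = (I - C)^{-1} Psi (I - C)^{-T}.
M = np.linalg.inv(I - C)
Sigma = M @ Psi @ M.T
```

Because C is strictly lower triangular, (I_n − C) is always invertible, which mirrors the recursiveness restriction made above.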
At the heart of the graph-based approach to causal inference lies a hypothetical experiment in which the values of a subset of observed variables are controlled by an intervention. This exogenous intervention is formally denoted via the do-operator, namely do(x), where x denotes the interventional levels and X ⊆ V denotes the subset of variables that are controlled by the experimenter. The system of equations under the intervention do(x) is obtained from the original system by replacing the equation for each variable V_i ∈ X (i.e., for each variable that is subject to the do(x)-intervention) with the equation V_i = x_i, where x_i denotes the corresponding entry of x (Pearl, 2009; Spirtes et al., 2001). Note that the do(x)-intervention does not alter the equations for variables that are not subject to intervention, an assumption known as autonomy or modularity (Pearl, 2009; Peters, Janzing, & Schölkopf, 2017; Spirtes et al., 2001).

The probability distribution of the variables V one would observe had the intervention do(x) been uniformly applied to the entire population is called the interventional distribution, and it is denoted as P(V | do(x)). 8 The interventional distribution differs formally and conceptually from the conditional distribution P(V | X = x). The former describes a situation where the data-generating mechanism has been altered by an external do(x)-type intervention in a (hypothetical) experiment. The latter describes a situation where the data-generating mechanism of V has not been altered, but the evidence X = x about the values of a subset of variables X ⊆ V is available. These differences will be further discussed in the illustration section (see also, e.g., Gische, West, & Voelkle, 2021; Pearl, 2009).
In the remainder of this section, we translate the changes in the data-generating mechanism induced by the do(x)-intervention into matrix notation (see Hauser and Bühlmann (2015) for a similar approach). The following definition introduces the required notation.
Definition 1. (interventions in linear graph-based models)
1. Variables X ⊆ V are subject to an external intervention, where |X| = K_x ≤ n denotes the set size. The K_x × 1 vector of interventional levels is denoted by x. The external intervention is denoted by do(x).
2. Let I ⊆ {1, 2, ..., n}, |I| = K_x, denote the index set of variables that are subject to intervention. The index set of all variables that are not subject to intervention is denoted by N, namely N := {1, 2, ..., n} \ I, |N| = n − K_x, where the operator \ denotes the set complement.
3. Let ı_i ∈ R^n be the i-th unit vector, namely a (column) vector with entry 1 in the i-th component and zeros elsewhere. The n × K_x matrix 1_I := (ı_i)_{i∈I} contains all unit vectors with an interventional index. The n × (n − K_x) matrix 1_N is defined analogously, namely 1_N := (ı_i)_{i∈N}. The matrices 1_I and 1_N are called selection matrices.
4. Let I_N be an n × n diagonal matrix with zeros and ones as diagonal values. The i-th diagonal value is equal to one if i ∈ N and zero otherwise.
Note that all of the elements of the matrices 1_I, 1_N, and I_N are either zero or unity. The variables V in a linear graph-based model under the intervention do(x) are determined by the following set of structural equations: 9

given do(x):  V = I_N (C V + ε) + 1_I x   (3)

The corresponding interventional reduced form equation is given by:

V | do(x) = (I_n − I_N C)^{-1} I_N ε + (I_n − I_N C)^{-1} 1_I x =: T_1 ε + a_1 x   (4)

The matrix I_N C is obtained from C by replacing its rows with interventional indexes by rows of zeros, and consequently (I_n − I_N C) is non-singular. Equation (4) states that V | do(x) is a linear transformation of the random vector ε. The corresponding transformation matrix is labeled T_1 := (I_n − I_N C)^{-1} I_N, and the additive constant is a_1 x with a_1 := (I_n − I_N C)^{-1} 1_I.
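To make the matrix algebra concrete, the following sketch constructs the selection matrices from Definition 1 and the interventional reduced-form quantities T_1 and a_1 for a hypothetical three-variable chain model with an intervention on V_2; all numeric values are invented for illustration:

```python
import numpy as np

# Hypothetical chain model V1 -> V2 -> V3; intervene on V2 (zero-based index 1).
C = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0]])
n = C.shape[0]
idx_I = [1]                              # interventional index set I
idx_N = [i for i in range(n) if i not in idx_I]

one_I = np.eye(n)[:, idx_I]              # selection matrix 1_I
one_N = np.eye(n)[:, idx_N]              # selection matrix 1_N
I_N = one_N @ one_N.T                    # diagonal zero-one matrix I_N

# Interventional reduced form: V | do(x) = T1 eps + a1 x.
M = np.linalg.inv(np.eye(n) - I_N @ C)   # I_N C has the intervened row zeroed out
T1 = M @ I_N                             # transformation matrix applied to eps
a1 = M @ one_I                           # coefficient of the interventional level x
```

Here the second row of I_N C is a row of zeros, so the equation for V_2 no longer depends on its structural parents, mirroring the modularity assumption.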

The target quantity of interest is the interventional distribution of those variables that are not subject to intervention, denoted by V_N. The reduced form equation of all non-interventional variables is given by:

V_N | do(x) = 1_N′ (V | do(x)) = 1_N′ T_1 ε + 1_N′ a_1 x   (5)

Important characteristics of the distribution of a linear transformation of a random vector depend on the rank of the transformation matrix.
Lemma 2. (rank of transformation matrices) The n × n transformation matrix T_1 = (I_n − I_N C)^{-1} I_N has rank n − K_x. The (n − K_x) × n transformation matrix T_2 := 1_N′ T_1 has full row rank n − K_x.

Based on the reduced form equations, we derive the interventional distribution and its features in the following section.

Interventional Distribution
Combining the reduced form stated in Eq. (4) with the assumptions on the first- and second-order moments of the error term distribution yields the following moments of the interventional distribution:

E(V | do(x)) = a_1 x   (6a)
V(V | do(x)) = T_1 Ψ T_1′   (6b)

The results are obtained via a direct application of the rules for the computation of moments of linear transformations of random variables. Note that these results do not require multivariate normality of the error terms. The interventional mean vector is functionally dependent on the vector of interventional levels x, whereas the interventional covariance matrix is functionally independent of x. The interventional distribution in linear graph-based models with multivariate normal error terms is given as:

Result 3. (interventional distribution for Gaussian linear graph-based models)

V | do(x) ∼ N_n^{n−K_x}(a_1 x, T_1 Ψ T_1′)   (7a)
V_N | do(x) ∼ N_{n−K_x}(1_N′ a_1 x, T_2 Ψ T_2′)   (7b)
Proof. Both results follow from the fact that linear transformations of multivariate normal vectors are also multivariate normal (Rao, 1973). Results on the rank of the transformation matrices T 1 and T 2 can be found in Lemma 2.
Equation (7a) states that the interventional distribution of all variables is a singular normal distribution in R^n with reduced rank n − K_x, as denoted by the superscript n − K_x. Singularity follows from the fact that the K_x interventional variables are no longer random given the do(x)-intervention, but are fixed to the constant interventional levels x. Therefore, the random vector V | do(x) satisfies the restriction 1_I′ (V | do(x)) = x with a probability of one. Equation (7b) states that the vector of all non-interventional variables follows an (n − K_x)-dimensional normal distribution. Typically, one is interested in a subset Y ⊆ V_N of outcome variables. The marginal interventional distribution P(y | do(x)) can be obtained as follows:

Result 4. (marginal interventional distribution for Gaussian linear graph-based models) Let the outcome variables Y be a subset of the non-interventional variables (i.e., Y ⊆ V_N, |Y| = K_y). The index set of the outcome variables is denoted as I_y. Then, the following result holds:

y | do(x) ∼ N_{K_y}(1_{I_y}′ a_1 x, 1_{I_y}′ T_1 Ψ T_1′ 1_{I_y})   (8)

The result follows from the fact that the family of multivariate normal distributions is closed with respect to marginalization (Rao, 1973). An important special case of Result 4 is the ATE of a single variable V_i on another variable V_j, which is obtained by setting Y = {V_j} and X = {V_i} (and consequently I_x = {i}, I_y = {j}, K_y = K_x = 1). The ATE of the intervention do(x) relative to the intervention do(x′) (where x and x′ are distinct treatment levels) on Y is defined as the mean difference E(y | do(x)) − E(y | do(x′)). For a single outcome variable {V_j}, the selection matrix 1_{I_y} simplifies to the unit vector ı_j, and E(y | do(x)) − E(y | do(x′)) can be expressed as ı_j′ a_1 (x − x′) (using the mean expression from the normal distribution in Eq. (8)).
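Continuing the hypothetical three-variable chain model used above, the ATE expression ı_j′ a_1 (x − x′) can be evaluated numerically as follows (all parameter values are invented for illustration):

```python
import numpy as np

# Chain model V1 -> V2 -> V3 with an intervention on V2; outcome is V3.
C = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0]])
n = C.shape[0]
I_N = np.diag([1.0, 0.0, 1.0])           # V2 (index 1) is intervened on
one_I = np.eye(n)[:, [1]]
a1 = np.linalg.inv(np.eye(n) - I_N @ C) @ one_I

i_j = np.eye(n)[:, [2]]                   # unit vector selecting the outcome V3
x, x_prime = 1.0, 0.0                     # two distinct treatment levels
# ATE = i_j' a1 (x - x'); for this chain it equals the path coefficient c_32.
ate = (i_j.T @ a1).item() * (x - x_prime)
```

With a unit contrast (x − x′ = 1), the ATE reduces to the corresponding entry of a_1, here the direct path coefficient from V_2 to V_3.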
The probability density function (pdf) of the interventional distribution of all non-interventional variables is given as follows:

f(v_N | do(x)) = (2π)^{−(n−K_x)/2} det(T_2 Ψ T_2′)^{−1/2} exp( −(1/2) (v_N − 1_N′ a_1 x)′ (T_2 Ψ T_2′)^{−1} (v_N − 1_N′ a_1 x) )   (9)

Many features of the interventional distribution that hold substantive interest in applied research (e.g., probabilities of interventional events, quantiles of the interventional distribution) can be calculated from the pdf via integration. For example, a physician would like a patient's blood glucose level (outcome) to fall into a predefined range of values (e.g., to avoid hypo- or hyperglycemia) given an injection of insulin (intervention). More formally, let [y_low, y_up] denote a predefined range of values of a set of outcome variables Y ⊆ V_N. The interventional probability P(y_low ≤ y ≤ y_up | do(x)) is given by:

P(y_low ≤ y ≤ y_up | do(x)) = ∫_{y_low}^{y_up} f(y | do(x)) dy   (10)

The interventional distribution and its features will be used to formally define parametric causal quantities in the following section.
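For a single outcome variable, this probability of treatment success can be computed from the univariate interventional mean and standard deviation using the normal CDF. The sketch below reuses the hypothetical chain model from above (invented values), intervening on V_2 and treating V_3 as the outcome:

```python
import numpy as np
from math import erf, sqrt

# Hypothetical chain model V1 -> V2 -> V3; intervene on V2, outcome V3.
C = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.0]])
Psi = np.eye(3)                           # illustrative error covariance
n = 3
I_N = np.diag([1.0, 0.0, 1.0])
M = np.linalg.inv(np.eye(n) - I_N @ C)
T1 = M @ I_N
a1 = M @ np.eye(n)[:, [1]]

x = 2.0                                   # interventional level for V2
mu = (a1 * x).ravel()[2]                  # interventional mean of the outcome V3
sd = sqrt((T1 @ Psi @ T1.T)[2, 2])        # interventional sd of V3

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

y_low, y_up = -1.0, 3.0                   # target range for the outcome
p_success = Phi((y_up - mu) / sd) - Phi((y_low - mu) / sd)
```

For multiple outcome variables, the same probability would require integrating the multivariate normal density in Eq. (8) over the rectangle [y_low, y_up], e.g., with a numerical routine for multivariate normal rectangle probabilities.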

Causal Effect Functions
In this section, we formally define terms containing the do-operator as causal quantities denoted by γ. According to this definition, any feature of the interventional distribution that can be expressed using the do-operator is a causal quantity. Let the space of causal quantities be denoted as Γ. As discussed earlier in the section on "Graph-Based Causal Models with Linear Equations," linear causal models imply a joint distribution of observed variables that is parametrized by θ ∈ Θ ⊆ R^{q+p} and denoted by {P(v, θ) | θ ∈ Θ}. A function g that maps the parameters θ of the model implied joint distribution onto a causal quantity γ is called a causal effect function. This idea is illustrated in Fig. 1 and stated in Definition 5.

Figure 1 displays the mapping g : Θ → Γ that corresponds to a causal effect function γ = g(θ). The domain Θ ⊆ R^{q+p} (left-hand side) contains the parameters of the model implied joint distribution of observed variables (no do-operator). The co-domain Γ ⊆ R^r (right-hand side) contains causal quantities γ that are defined via the do-operator.
Definition 5. (causal quantity and causal effect function) Let γ be an r-dimensional feature of the interventional distribution. Let Θ_γ ⊆ Θ be an s-dimensional subspace of the parameter space of the model implied joint distribution of observed variables. A mapping

γ = g(θ_γ),  g : Θ_γ → Γ   (11)

is called a causal effect function. The image γ of a causal effect function is called a causal quantity, which is parametrized by θ_γ. If the value of a causal quantity depends on other variables (e.g., the interventional level x ∈ R^{K_x}, the values v_N ∈ R^{n−K_x} of non-interventional variables), we include these variables as auxiliary arguments in the causal effect function, separated by a semicolon (e.g., g(θ_γ; x, v_N)).
This idea can be applied to the interventional mean from Eq. (6a) by defining it as a causal quantity γ_1 as follows:

γ_1 := E(V | do(x)) = (I_n − I_N C)^{−1} 1_I x =: g_1(θ_F; x)   (12a)
g_1 : Θ_F → R^n   (12b)

The right-hand side of Eq. (12a) is free of the do-operator and contains the parameter vector θ_F (structural coefficients) as a main argument and the interventional level x as an auxiliary argument. Thus, the causal effect function g_1 maps the parameter vector θ_F onto the interventional mean. The interventional mean is an n × 1 vector, and therefore the co-domain of g_1 is R^n (i.e., r = n), as stated in Eq. (12b). Note that the causal effect function g_1 depends on the distinct and functionally unrelated structural coefficients θ_F but is independent of the parameters from the error term distribution θ_P. Therefore, the domain of g_1 is Θ_F and s = q. The interventional covariance matrix from Eq. (6b) can be expressed using the notation from Definition 5 as follows:

γ_2 := vech(V(V | do(x))) = vech(T_1 Ψ T_1′) =: g_2(θ)   (13)

To avoid matrix-valued causal effect functions, we defined γ_2 as the half-vectorized interventional covariance matrix, which is of dimension r = n(n + 1)/2 (the operator vech stands for half-vectorization). The interventional covariance matrix is a function of both the structural coefficients θ_F and the entries of the covariance matrix θ_P. Thus, θ_{γ_2} = θ and s = q + p. No auxiliary arguments are included in the causal effect function g_2, since the value of γ_2 only depends on the values of θ (recall that I_n, I_N, 1_I are constant zero-one matrices).
The interventional pdf f(v_N | do(x)) from Eq. (9) can be formally defined as a causal quantity as follows:

γ_3 := f(v_N | do(x)) =: g_3(θ; x, v_N)   (14)

The interventional density depends on both the structural coefficients and the parameters of the error term distribution, yielding θ_{γ_3} = θ, Θ_{γ_3} = Θ, and s = q + p. The interventional density is scalar-valued, and thus r = 1. Since the value of the interventional pdf depends on x and v_N, both are included as auxiliary arguments in the causal effect function g_3, namely g_3(θ; x, v_N).
Probabilities of interventional events can be understood as a causal quantity in the following way:

γ_4 := P(y_low ≤ y ≤ y_up | do(x)) = ∫_{y_low}^{y_up} f(y | do(x)) dy =: g_4(θ_{γ_4}; x, y_low, y_up)   (15)

where θ_{γ_4} is the subset of parameters that appear in the marginal interventional pdf f(y | do(x)). The causal effect function g_4 is scalar-valued and thus r = 1. The value of the interventional probability depends on x, y_low, and y_up (y integrates out), which are included as auxiliary arguments in the causal effect function g_4.

Identification of Parametrized Causal Quantities
The meaning of the term "identification" as used in the nonparametric graph-based approach differs slightly from its meaning in the field of traditional SEM. A graph-based causal quantity is said to be identified if it can be expressed as a functional of joint, marginal, or conditional distributions of observed variables (Pearl, 2009). The latter distributions can in principle be estimated based on observational data using nonparametric statistical models. In other words, an identified nonparametric causal quantity could in theory be computed from an infinitely large sample without further limitations. 10 Graph-based tools for identification exploit the causal structure depicted in the causal graph and are independent of the functional form of the structural equations. Thus, causal identification is established in the absence of the risk of misspecification of the functional form. By contrast, model identification in traditional parametric SEM relies on the solvability of a system of nonlinear equations in terms of a finite number of model parameters. A single parameter θ ∈ Θ is identified if it can be expressed as a function of moments of the joint distribution of observed variables in a unique way (Bekker et al., 1994; Bollen & Bauldry, 2010). If all parameters in θ are identified, then the model is identified. Definition 6 uses causal effect functions to combine the above ideas.

Definition 6. (causal identification of parametrized causal quantities) Let γ be a parametrized causal quantity in a linear graph-based model. γ is said to be causally identified if (i) it can be expressed in a unique way as a function of the parameter vector θ_γ via a causal effect function, namely γ = g(θ_γ), and (ii) the value of θ_γ can be uniquely computed from the joint distribution of the observed variables.
Based on this insight, graph-based techniques for causal identification in linear models have been derived, for example, by Brito and Pearl (2006); Drton, Foygel, and Sullivant (2011); Kuroki and Cai (2007). Furthermore, part (ii) of the above definition has been dealt with extensively in the literature on traditional linear SEM (see, e.g., Bekker et al., 1994; Bollen, 1989; Fisher, 1966; Wiley, 1973).
We now illustrate Definition 6 for the causal quantities defined in Eqs. (22a) and (22b) from the illustration section. For the interventional mean stated in Eq. (22a), part (i) of the definition is satisfied, since the causal quantity γ_1 can be expressed as a function of the parameter θ_{γ_1} = c_yx in a unique way, namely as the interventional mean E(Y_3 | do(X_2 = x)) = c_yx x = g_1(θ_{γ_1}; x). Part (ii) of the above definition requires that the single structural coefficient c_yx can be uniquely computed from the joint distribution of the observed variables.
Note that both of the causal quantities discussed above require only a subset of parameters to be identified (i.e., it is not required to identify the entire model θ ). After causal identification of a parametrized causal quantity has been established, it can be estimated from a sample using the techniques described in the following section.

Estimation of Causal Quantities
Estimators of causal quantities as defined in Eq. (11) are constructed by replacing the parameters in the causal effect function with a corresponding estimator, namely γ̂ = g(θ̂_γ). This plug-in procedure is summarized in the following definition.
Definition 7. (estimation of parametrized causal quantities) Let γ be an identified causal quantity in a linear graph-based model and g(θ_γ) the corresponding causal effect function. Let θ̂_γ denote an estimator of θ_γ; then γ̂ := g(θ̂_γ) is an estimator of the causal quantity γ.
A main strength of the traditional SEM literature is that a variety of estimation procedures have been developed. Common estimation techniques include maximum likelihood (ML; Jöreskog, 1967; Jöreskog & Lawley, 1968), generalized least squares (GLS; Browne, 1974), and asymptotically distribution free (ADF; Browne, 1984) estimation. 11 Note that some estimation techniques do not rely on the assumption of multivariate normal error terms, and for others, robust versions have been proposed that allow for certain types of deviations from multivariate normality (Satorra & Bentler, 1994; Yuan & Bentler, 1998).
In the following, we assume that causal effect functions g and estimators θ̂_γ satisfy certain regularity conditions stated as Properties A.1 and A.2 in the Appendix. The following theorem establishes the asymptotic properties of estimators of causal quantities γ̂ = g(θ̂_γ).
Theorem 8. (asymptotic properties of estimators of causal quantities) Let γ be a causal quantity and g(θ_γ) the corresponding causal effect function. Let θ̂_γ be an estimator of θ_γ. Assume that g and θ̂_γ satisfy Property A.1 and Property A.2, respectively. Then, the following results hold:

(i) γ̂ = g(θ̂_γ) →_p γ* = g(θ*_γ)   (consistency)
(ii) √N (γ̂ − γ*) →_d N(0, AV(√N γ̂)),  where  AV(√N γ̂) = (∂g(θ*_γ)/∂θ_γ) AV(√N θ̂_γ) (∂g(θ*_γ)/∂θ_γ)′   (asymptotic normality)

where θ*_γ denotes the true population value and →_p (→_d) refers to convergence in probability (distribution) as the sample size N tends to infinity. AV(√N θ̂_γ) denotes the covariance matrix of the limiting distribution.
Proof. The results are obtained via a straightforward application of standard results on transformations of convergent sequences of random variables (Mann & Wald, 1943;Serfling, 1980, Chapter 1.7), one of which is known as the multivariate delta method (Cramér, 1946;Serfling, 1980, Chapter 3.3).
Theorem 8 establishes that the estimator γ̂ = g(θ̂_γ) is consistent and converges at a rate of N^{−1/2} to the true population value γ* = g(θ*_γ). The rate of convergence is independent of the finite number of parameters and variables in the model. If the causal effect function contains auxiliary variables, then the results in Theorem 8 hold pointwise for any fixed value combination of the auxiliary variables.
Note that the results in Theorem 8 hold whenever an estimator satisfies Property A.2, and they do not depend on a particular estimation method. However, if θ_γ is estimated via maximum likelihood, the proposed estimator γ̂ of the causal quantity has the following properties: (i) γ̂ = g(θ̂_γ) is the maximum likelihood estimator of γ, and (ii) γ̂ is asymptotically efficient. Proof. Result (i) is a direct consequence of the functional invariance of the ML-estimator (Zehna, 1966; see, for example, Casella & Berger, 2002, Chapter 7.2), and result (ii) was established by Cramér (1946) and Rao (1945).
To make inference feasible in practical applications, a consistent estimator of AV(√N γ̂) is required.
Corollary 10. (consistent estimator of AV(√N γ̂)) Let the situation be as in Theorem 8 and let the estimator of AV(√N γ̂) be defined as:

ÂV(√N γ̂) := (∂g(θ̂_γ)/∂θ_γ) ÂV(√N θ̂_γ) (∂g(θ̂_γ)/∂θ_γ)′   (17)

Then, ÂV(√N γ̂) →_p AV(√N γ̂). Proof. Note that the partial derivatives ∂g(θ_γ)/∂θ_γ are continuous (see Property A.1) and that θ̂_γ →_p θ*_γ holds (see Property A.2). Thus, the result is a direct consequence of standard results on transformations of convergent sequences of random variables (Mann & Wald, 1943; Serfling, 1980, Chapter 1.7).
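The plug-in estimator of the asymptotic covariance matrix can be sketched numerically as follows. The causal effect function, parameter estimate, and asymptotic covariance below are invented for illustration, and the Jacobian is approximated by central differences rather than the analytic expressions the paper derives in Corollary 11:

```python
import numpy as np

# Hypothetical causal effect function g(theta; x) = theta[0] * x, mapping a
# single structural coefficient onto an interventional mean (invented example).
def g(theta, x=1.5):
    return np.array([theta[0] * x])

theta_hat = np.array([0.8])            # hypothetical parameter estimate
acov_theta = np.array([[0.04]])        # hypothetical estimate of AV(sqrt(N) theta_hat)

# Central-difference approximation of the Jacobian of g at theta_hat.
eps = 1e-6
J = np.empty((1, theta_hat.size))
for k in range(theta_hat.size):
    d = np.zeros_like(theta_hat)
    d[k] = eps
    J[:, k] = (g(theta_hat + d) - g(theta_hat - d)) / (2 * eps)

# Delta-method plug-in estimate, mirroring Eq. (17): J * AV * J'.
acov_gamma = J @ acov_theta @ J.T
```

Because g is linear here, the numerical Jacobian equals the exact derivative (1.5), so the resulting value is simply 1.5² × 0.04.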
Equation (17) states that estimates of the asymptotic covariance matrix of a causal quantity γ̂ can be computed based on (i) the estimate of the asymptotic covariance matrix ÂV(√N θ̂_γ), and (ii) the Jacobian matrix ∂g(θ_γ)/∂θ_γ (evaluated at θ̂_γ). Estimation results for (i) the asymptotic covariance matrix depend on the estimation method that is used to obtain θ̂_γ. For many standard procedures (e.g., 3SLS, ADF, GLS, GMM, ML, IV), theoretical results on the asymptotic covariance matrix are available in the corresponding literature, and estimators are implemented in various software packages (see, e.g., Muthén & Muthén, 1998–2017; Rosseel, 2012). Explicit expressions of (ii) the Jacobian matrices for the causal effect functions g_1, g_2, g_3, and g_4 are provided in the following corollary.
Corollary 11. (Jacobian matrices of basic causal effect functions) Let the causal effect functions g_1, g_2, g_3, and g_4 be defined as in Eqs. (12a), (13), (14), and (15), respectively. Then, the Jacobian matrices with respect to θ are given in Eqs. (18a)-(18d), where the unit vector in the upper entry of the vector in Eq. (18d) is of dimension (n × 1) and the unit vector in the lower entry is of dimension (n² × 1). The matrices denoted by G and a subscript are defined such that L_n, D_n, and K_n denote the elimination matrix, duplication matrix, and commutation matrix for n × n matrices, respectively (Magnus & Neudecker, 1979). μ_y and σ_y denote univariate interventional moments. Proof. See Appendix.

Figure 2. Causal graph (ADMG) in the absence of interventions. Figure 2 displays the ADMG corresponding to the linear graph-based model. The dashed bidirected edge drawn between X_1 and Y_1 represents a correlation due to an unobserved common cause. Directed edges are labeled with the corresponding path coefficients that quantify direct causal effects. For example, the direct causal effect of X_2 on Y_3 is quantified as c_yx. Traditionally, disturbances (residuals, error terms), denoted by ε in Eq. (19), are not explicitly drawn in an ADMG.
Note that the Jacobian matrix for interventional probabilities stated in Eq. (18d) is given for a single outcome variable Y = V_j (i.e., |Y| = K_y = 1). For simplicity of notation, the derivatives in Corollary 11 are taken with respect to the entire parameter vector θ. Recall that a causal quantity is a function of the s × 1 subvector θ_γ. Consequently, the r × (q + p) Jacobian matrix ∂g(θ_γ)/∂θ will contain (q + p − s) columns with zero entries that can be eliminated by pre-multiplication with an appropriate selection matrix.
These asymptotic results can be used for approximate causal inference based on finite samples, as will be illustrated in the following section.

Illustration
We illustrate the method proposed in the previous sections using simulated data. In this way, the data-generating process is known, and we know with certainty that the model is correctly specified. For didactic purposes, we link the simulated data to a real-world example: The data are simulated according to a modified version of the model used in a study by Ito et al. (1998). 12 Our simulation mimics an observational study where N = 100 persons are randomly drawn from a target population of homogeneous individuals and measured at three successive (Δt = 6 min) occasions. Variables X_1, X_2, X_3 represent mean-centered blood insulin levels at three successive measurement occasions, measured in micro international units per milliliter (mcIU/ml). Variables Y_1, Y_2, Y_3 represent mean-centered blood glucose levels measured in milligrams per deciliter (mg/dl). Mean-centered blood glucose levels below −40 mg/dl or above 80 mg/dl indicate hypo- or hyperglycemia, respectively. Both hypo- and hyperglycemia should be avoided, yielding an acceptable range for blood glucose levels of [y_low, y_up] = [−40, 80]. The graph of the assumed linear graph-based model is depicted in Fig. 2.
Each directed edge corresponds to a direct causal effect and is quantified by a nonzero structural coefficient. We assume that direct causal effects are identical (stable) over time. For example, we assign the same parameter c_yx to the directed edges X1 → Y2 and X2 → Y3 to indicate that we assume time-stable direct effects of X_{t−1} on Y_t. The absence of a directed edge from, say, X1 to Y3 in the ADMG encodes the assumption that there is no direct effect of insulin levels at t = 1 on glucose levels at t = 3. In other words, we assume that X1 only indirectly affects Y3 via X2 or via Y2. Furthermore, we assume the absence of effect modification, which justifies the use of the system of linear structural equations in Eq. (19): X_t = c_xx X_{t−1} + c_xy Y_{t−1} + ε_{x_t} and Y_t = c_yx X_{t−1} + c_yy Y_{t−1} + ε_{y_t}, for t = 2, 3. Each bidirected edge in the ADMG indicates the existence of an unobserved confounder. In linear graph-based models, unobserved confounders are formalized as covariances between error terms. The covariance matrix Ψ of the error terms implied by the graph is given in Eq. (20). The entries ψ_x1x1, ψ_y1y1, and ψ_x1y1 describe the (co-)variances of the initial values of blood insulin and blood glucose. (Co-)variances of error terms at time 2 and time 3 are assumed to be constant and are denoted as ψ_xx, ψ_yy, and ψ_xy. Serial correlations in the X-series (Y-series) are denoted by ψ_x1x2, ψ_x2x3 (ψ_y1y2, ψ_y2y3). The covariances COV(X_t, Y_t), t = 1, 2, 3, encode the assumption that the contemporaneous relationship of blood insulin and blood glucose is confounded. The absence of a bidirected edge between X_t and Y_{t+1} encodes the assumption that there are no unobserved confounders that affect the lagged relationship of blood insulin and blood glucose. Further, we assume that the error terms follow a multivariate normal distribution.
Thus, the linear graph-based model is parametrized by the following vector of distinct, functionally unrelated, and unknown parameters: θ = (θF, θP) with θF = (c_xx, c_xy, c_yx, c_yy) and θP = (ψ_x1x1, ψ_y1y1, ψ_x1y1, ψ_xx, ψ_yy, ψ_xy, ψ_x1x2, ψ_x2x3, ψ_y1y2, ψ_y2y3).
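To make the setup concrete, the data-generating process V = CV + ε can be sketched as follows. The coefficient layout mirrors the ADMG in Fig. 2, but all numeric values below are hypothetical placeholders (the values used in the paper's simulation are reported in Table 1), and the serial error correlations ψ_x1x2, ..., ψ_y2y3 are set to zero for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 6, 100                                  # V = (X1, Y1, X2, Y2, X3, Y3), sample size
c_xx, c_xy, c_yx, c_yy = 0.5, 0.2, -0.6, 0.3   # hypothetical structural coefficients

# Structural coefficient matrix C: row = effect receiver, column = sender
C = np.zeros((n, n))
C[2, 0], C[2, 1] = c_xx, c_xy                  # X2 <- X1, Y1
C[3, 0], C[3, 1] = c_yx, c_yy                  # Y2 <- X1, Y1
C[4, 2], C[4, 3] = c_xx, c_xy                  # X3 <- X2, Y2
C[5, 2], C[5, 3] = c_yx, c_yy                  # Y3 <- X2, Y2

# Error covariance matrix Psi: bidirected (confounding) edges as covariances
Psi = np.eye(n) * 50.0                         # hypothetical error variances
for a, b in [(0, 1), (2, 3), (4, 5)]:          # psi_x1y1 and psi_xy at t = 2, 3
    Psi[a, b] = Psi[b, a] = 10.0

# Reduced form V = (I - C)^{-1} eps, hence Sigma = (I - C)^{-1} Psi (I - C)^{-T}
A = np.linalg.inv(np.eye(n) - C)
V = rng.multivariate_normal(np.zeros(n), Psi, size=N) @ A.T   # N x 6 data matrix
Sigma = A @ Psi @ A.T                          # model-implied covariance matrix
```

The model-implied covariance matrix Sigma is the object that the maximum likelihood discrepancy function later compares with the sample covariance matrix.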
We are interested in the effect of an intervention on blood insulin at the second measurement occasion (i.e., X 2 ) on blood glucose levels at the third measurement occasion (i.e., Y 3 ). We set the interventional level of blood insulin to one standard deviation, namely x 2 = √ V(X 2 ) = 11.54. The graph of the causal model under the intervention do(x 2 ) is depicted in Fig. 3.
Based on the above description of the research situation and the hypothetical experiment, all terms in Definition 1 are uniquely determined. Figure 3 displays the ADMG of the graph-based model under the intervention do(x2). Edges that enter node X2 (i.e., that have an arrowhead pointing at node X2) are removed, since the value of X2 is now set by the experimenter via the intervention do(x2). The interventional value x2 is neither determined by the values of the causal predecessors of X2 nor by unobserved confounding variables. All other causal relations are unaffected by the intervention, reflecting the assumption of modularity.
The target quantity of causal inference in this example is the interventional distribution P(Y3 | do(x2)), which can be characterized, for example, by the following causal quantities: the interventional mean γ1 = E(Y3 | do(x2)) (22a), the interventional variance γ2 = V(Y3 | do(x2)) (22b), the interventional pdf γ3 = f(y3 | do(x2)) (22c), and the interventional probability γ4 = P(y_low ≤ Y3 ≤ y_up | do(x2)) = Φ((y_up − μ_y)/σ_y) − Φ((y_low − μ_y)/σ_y) (22d), where Φ denotes the cumulative distribution function (cdf) of the standard normal distribution. A central goal of a treatment at time 2 (i.e., do(x2)) is to avoid hypo- or hyperglycemia at time 3. We therefore refer to the event {y_low ≤ Y3 ≤ y_up | do(x2)} as treatment success. Using this terminology, the causal quantity γ4 from Eq. (22d) is called the probability of treatment success. The causal effect functions corresponding to these causal quantities are stated below and satisfy Property A.1: γ1 = g1(θγ1; x2) (23a), γ2 = g2(θγ2) (23b), γ3 = g3(θγ3; x2, y3) (23c), and γ4 = g4(θγ4; x2, y_low, y_up), with θγ4 = θγ2 (23d). Figure 4 displays the pdfs of interventional distributions that result from three distinct (hypothetical) experiments in which different interventional levels are chosen, namely −11.54, 0, and 11.54. Note that the interventional mean γ1 = g1(θγ1; x2) is functionally dependent on the interventional level x2 (see also Eq. [22a]). Thus, the location of the interventional distributions in Fig. 4 depends on the interventional level x2. By contrast, the interventional variance γ2 = g2(θγ2) is functionally independent of x2 (see also Eq. [22b]). Consequently, the scale of the interventional distributions in Fig. 4 is the same for all interventional levels.
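The four causal quantities can be evaluated directly once the interventional moments are known. The sketch below assumes the rounded population values quoted later in the text (interventional mean −0.6·x2, interventional variance 1096.39) rather than the symbolic forms of Eqs. (22a)-(22b); γ4 is the normal-cdf difference:

```python
import numpy as np
from scipy.stats import norm

def interventional_moments(x2, total_effect=-0.6, var=1096.39):
    """gamma_1 and gamma_2: interventional mean and variance of Y3."""
    return total_effect * x2, var

def interventional_pdf(y3, x2):
    """gamma_3 = f(y3 | do(x2)) under normality."""
    mu, var = interventional_moments(x2)
    return norm.pdf(y3, loc=mu, scale=np.sqrt(var))

def prob_treatment_success(x2, y_low=-40.0, y_up=80.0):
    """gamma_4 = P(y_low <= Y3 <= y_up | do(x2))."""
    mu, var = interventional_moments(x2)
    sd = np.sqrt(var)
    return norm.cdf((y_up - mu) / sd) - norm.cdf((y_low - mu) / sd)
```

With the rounded moments used here, prob_treatment_success(11.54) evaluates to roughly .84; the text reports .85, the small gap reflecting rounding of the quoted moments.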
Equations (23a)-(23d) display the causal effect functions corresponding to the causal quantities γ1, ..., γ4. Definition 6 states that the parametrized causal quantities γ1, ..., γ4 are identified if the corresponding parameters θγ1, θγ2, θγ3, and θγ4 can be uniquely computed from the joint distribution of the observed variables. We show in the Appendix that the values of the entire parameter vector θ can be uniquely computed from the joint distribution of the observed variables. In fact, the values of θ can be uniquely computed from the covariance matrix of the observed variables. The joint distribution of the observed variables is given by {P(v, θ) | θ ∈ Θ}, where P is the family of 6-dimensional multivariate normal distributions. We estimated all parameters simultaneously by minimizing the maximum likelihood discrepancy function of the model implied covariance matrix and the sample covariance matrix. The ML-estimator θ̂_ML is consistent, asymptotically efficient, and asymptotically normally distributed (Bollen, 1989) and therefore satisfies Property A.2. Additionally, the asymptotic covariance matrix of the ML-estimator is known (e.g., see Bollen, 1989), and consistent estimates thereof are implemented in many statistical software packages (e.g., in the R package lavaan; Rosseel, 2012). The corresponding estimation results for θ are displayed in Table 1. Since Property A.1 and Property A.2 are satisfied, the asymptotic properties of the estimators γ̂1, γ̂2, γ̂3, and γ̂4 can be established via Theorem 8. The Jacobian matrices of the causal effect functions in Eq. (23) can be calculated according to Corollary 11. Estimates of the causal quantities are reported in Table 2 together with estimates of the asymptotic standard errors and approximate z-values.
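The delta-method step behind Theorem 8 can be sketched generically: given θ̂ and a consistent estimate of its asymptotic covariance matrix, the ASE of γ̂ = g(θ̂) follows from the Jacobian of g. The helper below uses a numerical central-difference Jacobian (the paper derives analytic Jacobians in Corollary 11), and the numeric values in the example are hypothetical placeholders, not the Table 1 estimates:

```python
import numpy as np

def delta_method_se(g, theta_hat, acov, eps=1e-6):
    """ASE of g(theta_hat) via sqrt(J acov J') with a central-difference Jacobian J."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    J = np.zeros(theta_hat.size)
    for k in range(theta_hat.size):
        step = np.zeros_like(theta_hat)
        step[k] = eps
        J[k] = (g(theta_hat + step) - g(theta_hat - step)) / (2 * eps)
    return float(np.sqrt(J @ acov @ J))

# Example: gamma_1 = c_yx * x2 depends on a single parameter,
# so ASE(gamma_1) = |x2| * ASE(c_yx); here ASE(c_yx) = 0.1 is hypothetical.
x2 = 11.54
se = delta_method_se(lambda th: th[0] * x2, [-0.6], np.array([[0.01]]))
```

For this linear g, the numerical result coincides with the analytic ASE, |x2| · 0.1 ≈ 1.154.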
From Theorem 8, we know that γ̂3 = g3(θ̂γ3; x2, y3) = f̂(y3 | do(x2)) →p f(y3 | do(x2)) holds pointwise for any (y3, x2) ∈ R². Figure 5 displays the estimated interventional pdf together with its population counterpart as well as pointwise asymptotic confidence intervals for the fixed interventional level x2 = 11.54 over the range y3 ∈ [−100, 100]. Figure 5 shows that a sample size of N = 100 yields very precise estimates of the interventional pdf over the whole range of values y3 ∈ [−100, 100], which is a consequence of the rate of convergence N^(−1/2) established in Theorem 8. Figure 6 displays the estimated probability that the blood glucose level falls into the acceptable range (i.e., hypo- and hyperglycemia are avoided) at t = 3, given an intervention do(x2) on blood insulin at t = 2, as a function of the interventional level x2. Just like in the case of the interventional pdf, Fig. 6 shows that a sample size of N = 100 yields very precise estimates of interventional probabilities over the whole range of values x2 ∈ [−50, 50]. Given the intervention do(x2 = 11.54), the probability of treatment success (i.e., blood glucose level within the acceptable range at t = 3) equals .85, as depicted in Fig. 6. Since the curve in Fig. 6 displays a unique (local and global) maximum, the interventional level can be chosen such that the probability of treatment success is maximized. The maximal probability of treatment success is equal to .94 and can be obtained by administering the intervention do(x2* = −38.3). Note that the curve is relatively flat around its maximum, meaning that slight deviations from the optimal treatment level will result in only a small decrease in the probability of treatment success.

Table 1 (caption). The estimation results θ̂_ML for the model parameters θ (using a covariance-based maximum likelihood estimator with N = 100) are displayed together with the true population values used for data simulation. The z-values are reported for the null hypothesis of a population quantity equal to zero. Structural coefficients are displayed in the upper part, and the variance-covariance parameters are displayed in the lower part. ASE = asymptotic standard error.

Table 2 (caption). The estimation results for the causal quantities γ1, γ2, γ3, and γ4 are displayed together with the population values used for data simulation. The z-values are reported for the null hypothesis of a population quantity equal to zero. † The estimates γ̂3 and γ̂4 depend on x2, y3, y_low, or y_up in a nonlinear way. The displayed quantities are calculated for x2 = 11.54, y3 = 0, y_low = −40, and y_up = 80.

Figure 5 (caption). Figure 5 displays the estimated interventional pdf f̂(y3 | do(x2 = 11.54)) (black solid line) with pointwise 95% confidence intervals, that is, ±1.96 · ASE[f̂(y3 | do(x2 = 11.54))] (gray shaded area). The true population interventional pdf f(y3 | do(x2 = 11.54)) is displayed by the gray dashed line.
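The curve in Fig. 6 can be reproduced by a simple grid search. The sketch below assumes the rounded population values quoted in the next subsection (total effect −0.6, interventional variance 1096.39); with these rounded inputs the optimum lands near x2 ≈ −33 with a success probability of about .93, while the reported do(x2* = −38.3) and .94 presumably rest on the unrounded model parameters:

```python
import numpy as np
from scipy.stats import norm

def p_success(x2, effect=-0.6, var=1096.39, y_low=-40.0, y_up=80.0):
    """Probability of treatment success as a function of the interventional level."""
    mu, sd = effect * x2, np.sqrt(var)
    return norm.cdf((y_up - mu) / sd) - norm.cdf((y_low - mu) / sd)

grid = np.linspace(-50.0, 50.0, 2001)   # candidate interventional levels
probs = p_success(grid)                 # vectorized evaluation over the grid
x2_opt = grid[np.argmax(probs)]         # maximizer of treatment success
```

Because the interventional variance does not depend on x2, the optimum is simply the level that centers the interventional mean in the acceptable range, and the flatness of the curve near x2_opt mirrors the flat maximum described above.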

Interventional Distribution vs. Conditional Distribution
To illustrate the conceptual differences between the interventional and the conditional distribution, we use the numeric population values from the first row of Table 1 and Table 2, respectively. The interventional distribution is given by P(Y3 | do(x2)) = N1(−0.6x2, 1096.39), and it differs from both the conditional distribution, P(Y3 | X2 = x2) = N1(1.76x2, 353.99), and the unconditional distribution, P(Y3) = N1(0, 766.91), as depicted in Fig. 7. The unconditional distribution (upper panel) corresponds to a situation where no prior observation is available and no intervention is performed. Note that the conditional distribution (middle panel) is shifted to the right (for X2 = 11.54), whereas the interventional distribution (bottom panel) is shifted to the left for do(X2 = 11.54), as displayed in Fig. 7. The differences displayed between P(Y3 | X2 = 11.54) and P(Y3 | do(X2 = 11.54)) reflect the fundamental difference between the mode of seeing, namely passive observation, and the mode of doing, namely active intervention (Pearl, 2009).

Figure 7 (caption). Marginal, Conditional, and Interventional Distribution. The panels depict (i) the pdf of the unconditional distribution P(Y3) (top panel), (ii) the conditional distribution P(Y3 | X2 = x2) (middle panel), and (iii) the interventional distribution P(Y3 | do(x2)) (bottom panel). In (ii) the level x2 = 11.54 mcIU/ml was passively measured, whereas in (iii) the intervention do(X2 = 11.54) was performed. The central vertical black solid lines are drawn at the mean, and shaded areas cover 95% of the probability mass.
On the one hand, observing a blood insulin level of X 2 = 11.54 at the second measurement occasion leads to an expected value of 20.31 mg/dl for blood glucose at the third measurement occasion (i.e., E(Y 3 | X 2 = 11.54) = 1.76·11.54 = 20.31). Using the conditional variance V(Y 3 | X 2 = 11.54) = 353.99 to compute a 95% forecast interval yields P(Y 3 ∈ [−16.56, 57.19] | X 2 = 11.54) = .95, as indicated by the shaded area under the curve in the middle panel of Fig. 7.
On the other hand, setting the level of blood insulin to do(X 2 = 11.54) at the second occasion by an active intervention leads to an expected value of −6.92 mg/dl for blood glucose at the third measurement occasion (i.e., E(Y 3 | do(x 2 = 11.54)) = −0.60 · 11.54 = −6.92). Using the interventional variance V(Y 3 | do(11.54)) = 1096.39 to compute a 95% forecast interval yields P(Y 3 ∈ [−71.82, 57.97] | do(x 2 = 11.54)) = .95, as indicated by the shaded area under the curve in the bottom panel of Fig. 7.
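The seeing/doing contrast can be reproduced from the population values above; only the quoted means, variances, and the 1.96 normal quantile are assumed:

```python
import numpy as np

def interval_95(mu, var):
    """Central 95% interval of N(mu, var)."""
    half = 1.96 * np.sqrt(var)
    return mu - half, mu + half

x = 11.54
cond = interval_95(1.76 * x, 353.99)    # seeing: P(Y3 | X2 = x)
intv = interval_95(-0.6 * x, 1096.39)   # doing:  P(Y3 | do(X2 = x))
```

This reproduces, up to rounding, the forecast intervals [−16.56, 57.19] and [−71.82, 57.97] reported above, and makes plain that the interventional forecast is both shifted and substantially wider than the conditional one.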
Based on both the conditional and interventional distribution, valid statements about values of blood glucose can be made. A patient who measures a high level of insulin at time 2 in the absence of an intervention (e.g., self-measured monitoring of blood insulin; mode of seeing) will predict a high level of blood glucose at time 3 based on the conditional distribution. A physician who actively administers a high dose of insulin at time 2 (e.g., via an insulin injection; mode of doing) will forecast a low value of blood glucose at time 3 based on the interventional distribution.
Incorrect conclusions arise if the conditional distribution is used to forecast the effects of interventions or, the other way around, if the interventional distribution is used to predict future values of blood glucose in the absence of interventions. For example, a physician who correctly uses the interventional distribution to choose the optimal treatment level would administer do(X2 = −38.3), resulting in a 94% probability of treatment success (see Fig. 6). A physician who erroneously uses the conditional distribution to specify the optimal treatment level would administer do(X2 = 11.4). Such a non-optimal intervention would result in an 85% probability of treatment success. Thus, an incorrect decision results in an absolute decrease of 9 percentage points in the probability of treatment success (Gische et al., 2021).

Discussion
Graph-based causal models combine a priori assumptions about the causal structure of the data-generating mechanism (e.g., encoded in an ADMG) with observational data to draw inferences about the effects of (hypothetical) interventions. Causal quantities are defined via the do-operator and may comprise any feature of the interventional distribution (e.g., the mean vector, the covariance matrix, the pdf). This flexibility allows researchers to analyze effects of interventions beyond changes in the mean level. Causal effect functions map the parameters of the model implied joint distribution of observed variables onto causal quantities and therefore enable analyzing causal quantities using tools from the literature on traditional SEM. We propose an estimator for causal quantities and show that it is consistent and converges at a rate of N^(−1/2). In case of maximum likelihood estimation, the proposed estimator is asymptotically efficient.
In the remainder of the paper, we discuss several situations in which linear graph-based models are misspecified and how the proposed procedure can be extended to be applicable in such situations.

Causal Structure, Modularity, and Conditional Interventions
A researcher's beliefs about the causal structure are encoded in the graph. Based on the concept of d-separation, every ADMG implies a set of (conditional) independence relations between observable variables that can be tested parametrically (Chen, Tian, & Pearl, 2014; Shipley, 2003; Thoemmes, Rosseel, & Textor, 2018) or nonparametrically (Richardson, 2003; Tian & Pearl, 2002b). One drawback of these tests is that they only distinguish between equivalence classes of ADMGs and do not evaluate the validity of a single graph.
One way of dealing with this situation is to further analyze the equivalence class to which a specified model belongs (Richardson & Spirtes, 2002). Some authors have proposed methods to draw causal conclusions based on common features of an entire equivalence class instead of using a single model (Hauser & Bühlmann, 2015; Maathuis, Kalisch, & Bühlmann, 2009; Perkovic, 2020; Zhang, 2008). However, equivalence classes can be large, and their members might not overlap with respect to the causal effects of interest (He & Jia, 2015).
Another approach discussed in the literature is to complement the available observational data with experimental data. If these experiments are optimally chosen, the size of an equivalence class can be substantially reduced (Eberhardt, Glymour, & Scheines, 2005; Hyttinen, Eberhardt, & Hoyer, 2013). The idea of combining observational data and experimental data is theoretically appealing for many reasons, and it has stimulated the development of a variety of techniques (He & Geng, 2008; Peters, Bühlmann, & Meinshausen, 2016; Sontakke, Mehrjou, Itti, & Schölkopf, 2020). Most importantly, the combination of observational and interventional data allows differentiating causal models that cannot be distinguished solely based on observational data. Furthermore, the availability of experimental evidence enables (partly) testing further causal assumptions such as the assumption of modularity, which cannot be tested solely based on observational data. While modularity seems rather plausible if the mechanisms correspond to natural laws (e.g., chemical or biological processes, genetic laws, laws of physics), it needs additional reflection if the mechanisms describe human behavior. For example, humans might respond to an intervention by adjusting behavioral mechanisms different from the one that is intervened on. The proposed method can readily be adjusted to capture such violations of the modularity assumption if an intervention changes other mechanisms in a known way. However, if the ways in which humans adjust their behavior in response to an intervention are unknown, they need to be learned. Well-designed experiments may be particularly useful for this purpose.
Throughout the manuscript, we focus on specific do-type interventions that assign fixed values to the interventional variables according to an exogenous rule. However, in practical applications interventional values are often chosen conditionally on the values of other observed variables. In our illustrative example, the interventional insulin level at t = 2 might be chosen in response to the glucose level observed at t = 1. Such situations are discussed in the literature on conditional interventions (Pearl, 2009) and dynamic treatment plans (Pearl & Robins, 1995;Robins, Hernán, & Brumback, 2000). In principle, the proposed method can be extended to evaluate conditional interventions and effects of dynamic treatment plans. However, the derivation of the closed-form representations of parametrized causal quantities and the corresponding causal effect functions in these settings require further research.

Effect Modification and Heterogeneity
In this article, we have focused on situations in which direct causal effects are constant across value combinations of observed variables and error terms. In such situations, the use of linear models is justified. Statistical tests for linearity of the functional relations exist for both nested and non-nested models (Amemiya, 1985;Lee, 2007;Schumacker & Marcoulides, 1998). If these tests provide evidence against linearity, the assumption of constant direct effects is likely to be violated.
Theoretical considerations often suggest the existence of so-called effect modifiers (moderators), which can be modeled in parametrized graph-based models via nonlinear structural equations (Amemiya, 1985; Klein & Muthén, 2007). However, a closed-form representation of the entire interventional distribution in case of nonlinear structural relations cannot be derived via a direct application of the method proposed in this paper. The extent to which the proposed parametric method can be generalized to capture common types of nonlinearity (e.g., simple product terms that capture certain types of effect modification) is a focus of ongoing research. Preliminary results suggest that parametrized closed-form expressions of certain features of the interventional distribution (e.g., its moments) can be obtained (Kan, 2008; Wall & Amemiya, 2003), which in turn enables analyzing ATEs and other causal quantities.
Furthermore, we assumed that direct causal effects quantified by structural coefficients are equal across individuals in the population. However, (unobserved) heterogeneity in mean levels or direct effects might be present in many applied situations. A common procedure to capture specific types of unobserved heterogeneity is to include random intercepts or random coefficients in panel data models (Hamaker, Kuiper, & Grasman, 2015; Usami, Murayama, & Hamaker, 2019; Zyphur et al., 2019). Gische et al. (2021) apply the method proposed in this paper to linear cross-lagged panel models with additive person-specific random intercepts and show how absolute values of optimal treatment levels differ across individuals.
Even though additive random intercepts capture unobserved person-specific differences in the mean levels of the variables, these models still imply constant effects of changes in treatment level across persons. The latter implication might be overly restrictive in many applied situations in which treatment effects vary across individuals (e.g., different patients respond differently to variations in treatment level). An extension of the proposed methods to more complex dynamic panel data models (e.g., models including random slopes) requires further research. Several alternative approaches to modeling effect heterogeneity have been proposed, for example, within the social and behavioral sciences (Xie, Brand, & Jann, 2012), economics (Athey & Imbens, 2016), political science (Imai & Ratkovic, 2013), and computer science (Nie & Wager, 2020; Wager & Athey, 2018).

Measurement Error and Non-Normality
We assumed that variables are observed without measurement error. The proposed method can be extended to define, identify, and estimate causal effects among latent variables. In other words, measurement errors and measurement models can be included. The model implied joint distribution of observed variables in latent variable SEM is known (Bollen, 1989), and the derivation of the parametric expressions for causal quantities and causal effect functions in such models is subject to ongoing research.
However, measurement models for latent variables often can only mitigate measurement error issues (unless the true measurement model is known and everything is correctly specified). Furthermore, the degree to which interventions on certain types of latent constructs are feasible in practice needs further discussion (e.g., see Bollen, 2002; Borsboom, Mellenbergh, & van Heerden, 2003; van Bork, Rhemtulla, Sijtsma, & Borsboom, 2020).
Some population results derived in this paper rely on multivariate normally distributed error terms (e.g., Result 3), while others do not (e.g., the moments of the interventional distribution in Eqs. (6a) and (6b) or Theorem 8). For the former results, a systematic analytic inquiry of the consequences of incorrectly assuming multivariate normal error terms requires specific knowledge about the type of misspecification. If such knowledge is not available, one could attempt to assess the sensitivity of, for example, the interventional pdf, to misspecifications in the error term distribution via simulation studies.
Some estimation results derived in this paper rely on a known parametric distributional family of the error terms (e.g., Theorem 9 requires maximum-likelihood estimation), while others do not (e.g., Theorem 8 ensures consistency of the estimators of causal quantities for a broad class of estimators including ADF or WLS estimation of θ). Thus, inference about the interventional moments can be conducted in the absence of parametric assumptions on the error term distribution. Furthermore, it has been shown that ML-estimators in linear SEM are robust to certain types of distributional misspecification but sensitive to others (West, Finch, & Curran, 1995) and robust estimators have been developed for several types of distributional misspecifications (Satorra & Bentler, 1994;Yuan & Bentler, 1998).

Conclusion
Causal graphs (e.g., ADMGs) allow researchers to express their causal beliefs in a transparent way and provide a sound basis for the definition of causal effects using the do-operator. Causal effect functions enable analyzing causal quantities in parametrized models. They are a flexible tool that allows researchers to model causal effects beyond the mean and covariance structure and can thus be applied in a large variety of research situations. Consistent and asymptotically efficient estimators of parametric causal quantities are provided that yield precise estimates based on sample sizes commonly available in the social and behavioral sciences.

Acknowledgments
The first author thanks Stephen G. West for the careful editing and helpful comments; Bernd Droge and Grégoire Njacheun-Njanzoua for the insightful discussions on asymptotic inference; and the editor and reviewers for their helpful comments which significantly strengthened the manuscript.
Funding Open Access funding enabled and organized by Projekt DEAL.

Declarations
Disclosure statement The authors do not have any conflicts of interest to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The ordered set of non-interventional indices is given by N := {1, 2, ..., n} \ I = {j1, j2, ..., j_{n−Kx}}. For clarity of display (and without loss of generality), we assume j1 = 1 and j_{n−Kx} = n, that is, variables V1 and Vn are not subject to intervention. Due to the step structure of the matrix T2, with the rightmost nonzero element of each row equal to one, the matrix T2 has full row rank, that is, rank(T2) = n − Kx.
Sketch of proof of local identification of example model. Due to space restrictions and the necessity to state high-dimensional vectors and matrices explicitly, a detailed and fully reproducible version of the proof is given in the online supplementary material.
Let V = CV + ε be a linear graph-based model as defined in Eq. (2), where n = 6, and C and Ψ are given in Eqs. (19) and (20), respectively. Plugging these quantities into Eq. (3.3.6) from Bekker et al. (1994) yields the Jacobian matrix J̃. The (32 × 36) matrix R_Ψ and the (11 × 36) matrix R_C encode the zero restrictions and equality constraints imposed on the covariance matrix and the matrix of structural coefficients in Eqs. (20) and (19), respectively. The matrix K6 denotes the commutation matrix for 6 × 6 matrices (Magnus & Neudecker, 1979). Theorem 3.3.1 in Bekker et al. (1994) states that, under certain regularity conditions, the parameter vector θ is locally identified if and only if the Jacobian matrix J̃ has full column rank. We show that rank(J̃) = 36. The exact form of the restriction matrices R_Ψ and R_C, and the Mathematica (Wolfram Research Inc., 2018) code used to evaluate the rank of J̃, are provided in the online supplementary material.
Properties Required for Theorem 8:

Property A.1. (properties of causal effect functions g) Let γ be a causal quantity and g(θγ) the corresponding causal effect function. Let g(θγ) be continuously differentiable with respect to θγ in a neighborhood around the true population parameter value θγ* ∈ Θγ. The r × s matrix of partial derivatives is non-singular and denoted by ∂g(θγ)/∂θγ. If the causal effect function contains auxiliary variables, say g(θγ; x, vN), then non-singularity of the matrix of partial derivatives is supposed to hold for any fixed value combination of the auxiliary variables.

Property A.2. (statistical properties of θ̂γ) Let θ̂γ be an estimator of θγ that is consistent and asymptotically normally distributed, where θ* denotes the true population value and →p (→d) refers to convergence in probability (distribution) as the sample size N tends to infinity. The covariance matrix of the limiting distribution is denoted as AV(√N θ̂γ) and is assumed to be finite.

Proof of Corollary 11. We follow the definition of a matrix differential and a matrix derivative in Magnus and Neudecker (1999). To complete the proof, we make extensive use of results (a) from matrix differential calculus (Abadir & Magnus, 2005; Magnus & Neudecker, 1999) and (b) regarding the vec-operator and Kronecker products (e.g., see Lütkepohl, 1997, for an overview).

Proof of Equation (18a): Note that vec dC = (∂vecC/∂θ) dθ holds by definition and that each entry of the matrix C = C(θ) is either equal to a single element of θ or equal to zero. Thus, the n² × p Jacobian matrix ∂vecC/∂θ is a zero-one matrix.

Proof of Equation (18b): Here Kn denotes the commutation matrix for n × n matrices (Magnus & Neudecker, 1979). For simplicity of notation, we define the n² × n² matrices G_{2,C} and G_{2,Ψ}. Substituting G_{2,C} and G_{2,Ψ} into the expression for vec dV(V | do(x)) yields the stated result. Note that vec dΨ = (∂vecΨ/∂θ) dθ holds by definition and that each entry of the matrix Ψ = Ψ(θ) is either equal to a single element of θ or equal to zero. Thus, the n² × p Jacobian matrix ∂vecΨ/∂θ is a zero-one matrix. Since V(V | do(x)) is symmetric, one oftentimes works with the half-vectorized version, where Ln denotes the elimination matrix for n × n matrices (Magnus & Neudecker, 1980).

Proof of Equation (18c): We treat the interventional pdf f(vN | do(x)) as a function ϕ of the interventional mean vector and the interventional covariance matrix. Further, we treat ϕ as a product of two functions, that is, ϕ = ϕ1 · ϕ2, where both ϕ1 and ϕ2 are composite functions. The differentials of ϕ1 and ϕ2 are computed using Cauchy's invariance (Magnus & Neudecker, 1999). We start with ϕ1 and compute the differential of the innermost function f1(ΣN). Next, we obtain the differential of g1 with respect to f1. Plugging Eq. (A.13) into Eq. (A.14) yields the differential of ϕ1. For ϕ2, we start with the differentials of f21(μN) and f22(ΣN). Next, we obtain the total differential of g2 by applying the product rule twice. The last mapping applied in this chain is h2(g2), which is a scalar function of a scalar argument. We resubstitute the expressions for ϕ1, ϕ2, ϕ, f21, and f22 and introduce shorthand terms for simplicity of notation. From the equations stated in Eq. (A.8b), the result follows immediately.

Proof of Equation (18d): The general definition of g4 for a vector of outcome variables Y is given in Eq. (15). The following derivation is restricted to the case of a single (scalar) outcome variable Y, that is, |Y| = Ky = 1. The last equality in Eq. (A.27) follows from Leibniz's rule for partial differentiation of an integral (Dieudonné, 1969). The derivative under the integral sign (the first term after the last equality) is equal to zero, since the pdf of the standard normal φ(u) is functionally independent of θ. For simplicity of notation, we use μy and σ²y instead of μy(θ) and σ²y(θ) in the following. The two partial derivatives in the second line of Eq. (A.27) have the same structure ϕ3 = h3[f31(μy), f32(σ²y)] and differ only in the constants y_up and y_low. The functions are stated for y_up and are defined analogously for y_low (we do not state the latter explicitly). The differential of ϕ3 = h3[f31(μy(θ)), f32(σ²y(θ))] can be evaluated using the total differential, Cauchy's invariance, and the chain rule.
The last equation sign of Eq. (A.27) follows from Leibniz's rule for partial differentiation of an integral (Dieudonné, 1969). The derivative under the integral sign (first term after the last equation sign) is equal to zero since the pdf of the standard normal φ(u) is functionally independent of θ . For simplicity of notation, we use μ y and σ 2 y instead of μ y (θ) and σ 2 y (θ ) in the following. The two partial derivatives in the second line of Eq. (A.27) have the same structure ϕ 3 = h 3 [ f 31 (μ y ), f 32 (σ 2 y )] and differ only in the constants y up and y low . The functions below are stated for y up and are defined analogously for y low (we do not state the latter ones explicitly): The corresponding differentials and derivatives are given by: The differential of ϕ 3 = h 3 [ f 31 (μ y (θ )), f 32 (σ 2 y (θ))] can be evaluated as follows using the total differential, Cauchy's invariance and the chain rule: