Parameters not empirically identifiable or distinguishable, including correlation between Gaussian observations

Note: Accepted version, published in Statistical Papers, https://doi.org/10.1007/s00362-023-01414-3. It is shown that some theoretically identifiable parameters cannot be empirically identified, meaning that no consistent estimator of them can exist. An important example is a constant correlation between Gaussian observations (in presence of such correlation not even the mean can be empirically identified). Empirical identifiability and three versions of empirical distinguishability are defined. Two different constant correlations between Gaussian observations cannot even be empirically distinguished. A further example are cluster membership parameters in $k$-means clustering. Several existing results in the literature are connected to the new framework. General conditions are discussed under which independence can be distinguished from dependence.


Introduction
Meaningful statistical inference is only possible if the target of inference (parameter) is identifiable, meaning that if parameter values differ, the parameterised distributions should also differ. There are several versions of identifiability definitions, and many identifiability and non-identifiability results; see, e.g., Yakowitz and Spragins (1968); Rothenberg (1971); Prakasa Rao (1992); Ho and Rosen (2017).

Here situations are treated in which parameters are identifiable according to this classical definition, yet the parameters cannot be identified from observed data. Rothenberg (1971); Hsiao (1983); Prakasa Rao (1992) define identifiability with explicit reference to observable data but do not cover the issues that are treated here. Regarding the results in Rothenberg (1971), there is no difference between classical identifiability and identifiability from observations. Simple examples of classical identifiability issues are the non-identifiability of linear regression parameters in case of collinear explanatory variables, and identifiability of the parameters of mixture distributions, which can be guaranteed under certain assumptions (Yakowitz and Spragins (1968)), particularly ruling out label switching, but counterexamples exist (Prakasa Rao, 1992, Chapter 8). Hsiao (1983); Prakasa Rao (1992) also study situations in which issues occur because certain modelled random variables are unobservable, such as the true value of a variable in errors-in-variables models. Some examples in Section 4 are also of this kind, but there are further reasons why the observed data may not allow for identification of classically identifiable parameters, which are explored here. Some such situations have already appeared in the literature; see, e.g., Neyman and Scott (1948); Bahadur and Savage (1956); Donoho (1988); Spirtes et al (1993); Robins et al (2003); Molenberghs et al (2008); Almeida and Mouchart (2014). Section 4 gives more details on these works, and how they fit into the unified terminology introduced here.
It turns out that there are different possible levels of information about identifiable parameters in the data, and therefore various definitions are introduced. Consistent estimators may or may not exist ("empirical identifiability"). Sets that can distinguish two parameter values for a finite sample size may or may not exist ("empirical distinguishability", with weaker and stronger versions).
The concept of empirical identifiability is closely connected to the concept of estimability, which is also stronger than classical identifiability. Once more there are several versions around. That concept mostly focuses on what can be estimated with a given finite sample, and its connection with classical identifiability has been investigated; see, e.g., Bunke and Bunke (1974); Jacquez and Greif (1985); Maclaren and Nicholson (2020).
This work was motivated by the discovery that data hold no information about distinguishing i.i.d. Gaussian observations from Gaussian data with a constant correlation between any two observations. This will be used as a guiding example. Section 2 derives a key result regarding this situation. Section 3 presents the main definitions, some of their implications, and some more examples. Section 4 reviews results from the literature that fit into the framework of Section 3. Section 5 uses this framework to discuss the general problem of telling apart dependence and independence in situations in which potential dependence is not governed by the observation order or observable external information. This is relevant in many situations that require independence assumptions. Section 6 presents another example, cluster membership parameters in k-means clustering.

The following lemma shows that in model M01, the conditional distribution given the mean X̄n is the same as for i.i.d., therefore uncorrelated, Gaussian random variables. But the mean does not hold information about correlations (or rather, any information about correlations is confounded with the information about the true means), meaning that model M1 cannot be distinguished from model M0 based on the data alone.
which does not depend on µ.
Thus, conditionally on the mean, Xn will look like i.i.d. Gaussians with variance (1 − ρ)σ²; for ρ > 0 there is less variation of the Xi given their mean than their unconditional variance. On the other hand, X̄n has a larger variance than under independence (ρ = 0).
In fact, the model can equivalently be written as a model with a single realisation of a random effect Z:

Xi = µ + Z + εi, i = 1, . . ., n, Z ∼ N(0, τ1²), ε1, . . ., εn i.i.d. ∼ N(0, τ2²) independent of Z, (1)

with τ1² = ρσ² and τ2² = (1 − ρ)σ², which suggests that observed data look like i.i.d. N(µ*, τ2²) with µ* = µ + Z, where Z is unknown and unobservable. The definitions and results in Section 3 aim at making precise a general sense in which observations give no information about ρ and limited information about µ.

Empirical identifiability and distinguishability
Let X1, X2, . . ., Xn, . . . be random variables on a space X , for n ∈ IN: L(X1, . . ., Xn) = Pn;θ with parameter θ ∈ Θ. P∞;θ denotes the distribution of the whole sequence. The spaces X and Θ can be very general, but assume that Θ is a metric space with metric dΘ. The focus may be on the parameter θ in full, or it may be on g(θ), where g : Θ → Λ, Λ being a metric space with metric dΛ. No further conditions on Θ and Λ are required for the general definitions. It is generally assumed that the underlying σ-algebras are rich enough so that the sets required in the arguments are measurable. In the specific cases discussed here this is always fulfilled using standard (Borel) σ-algebras and parameter spaces. Sometimes but not always g(θ) = θ and Λ = Θ are considered. Other examples for g are a projection on a lower dimensional space, or an indicator function for a parameter subset (hypothesis) of interest.
Here, data generating mechanisms are treated that do not allow one to empirically identify parameters that are in fact identifiable in the traditional sense. Model M01 is an example. Obviously, distributions with different correlation parameters ρ1 ≠ ρ2 are different from each other, and ρ can be estimated consistently if the whole sequence of n observations is repeated independently. In this case, assuming equal correlation between any two components, the sequence of length n of observations becomes an n-variate Gaussian, and the sample correlation between any two of the n components will estimate ρ consistently, although a better estimator will of course use information from all components. The data generating mechanism modelled in Section 2 does not allow for independent repetition; all available observations are dependent on all other observations, and this makes consistent estimation of ρ impossible. Not only is the correlation ρ not empirically identifiable, the same holds for µ, meaning that in practice using an estimator different from X̄n does not help dealing with the potential existence of ρ > 0.
The proof of Theorem 3.4 relies on the random effects formulation (1) with only a single realisation of the random effect. A similar case can be made for a standard random effects model assuming that the number of realised values of the random effect is bounded even if the number of observations goes to infinity, as expressed in the following model M2:

Xij = µ + Zi + εij, Zi i.i.d. ∼ N(0, τ1²), εij i.i.d. ∼ N(0, τ2²) independent of the Zi, i = 1, . . ., m (group), j = 1, . . ., ni (within group observation), n = Σ_{i=1}^m ni, θ = (µ, τ1, τ2).

Let m be fixed, whereas n is allowed to grow. Let Xn be the vector collecting all Xij. Such a model could make sense for a random effects meta analysis with a low number m of studies, each of which is potentially large, but it does not allow one to empirically identify the random effects' variance, and neither the overall mean, unless m → ∞. Consequently, common advice in the meta analysis literature is to not use a random effects model if the number of studies is low (see, e.g., Kulinskaya et al (2008)).
In model M01, there is a difference between trying to estimate ρ on the one hand and µ on the other. While µ cannot be estimated consistently, in case ρ is small the data can give fairly precise information about its location, whereas there is no information in the data about ρ at all. The following definition aims at formalising this difference.
Definition 3.6. For n ∈ IN, α ≤ β ∈ (0, 1], an observable set A, i.e., any measurable set expressing an observable event, is an (α, β)-distinguishing set for λ1 and λ2 if

sup_{θ∈g⁻¹(λ1)} Pn;θ(A) ≤ α ≤ β ≤ inf_{θ∈g⁻¹(λ2)} Pn;θ(A). (2)

Obviously, this definition is symmetric in λ1, λ2 (passing to the complement of A swaps their roles). Before returning to the problem of constant correlation between Gaussian observations, empirical distinguishability is discussed in some more generality.
For ǫ > 0 and η0 in some metric space H with metric dH, define Bǫ(η0) = {η : dH(η, η0) ≤ ǫ}. If g(θ) = θ, empirical distinguishability follows from empirical identifiability, because there is a consistent estimator Tn of θ, and for ǫ < dΘ(θ1, θ2)/2 the set A = {Tn ∈ Bǫ(θ2)} fulfils Pn;θ1(A) → 0 and Pn;θ2(A) → 1. In general, empirical identifiability does not imply empirical distinguishability. If g(θ) specifies only a part of the information in θ, it may happen that no set A can distinguish g(θ1) = λ1 from g(θ2) = λ2 uniformly over the information in θ that is not in g(θ), even if g(θ) is empirically identifiable.
Example 3.8. Let Xi, i ∈ IN be independently distributed according to Bernoulli(q) for i < m and Bernoulli(p) for i ≥ m, where q is randomly drawn from U(0, 1). g(θ) = p is empirically identifiable, because X̄n is consistent for it. But any p1 ≠ p2 are not empirically distinguishable, because for any n < m, X1, . . ., Xn do not contain any information about p. In this situation, the data may carry information about whether n > m (namely when it can be observed that at some point in the past q may likely have changed to p), in which case they also carry information about p, but for m = 1 this can never happen, and for very small m this can hardly ever be diagnosed with any reliability.
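A simulation sketch of this example (with hypothetical values m = 50 and p = 0.3; the change point construction follows the description above):

```python
import numpy as np

def sample_ex38(n, m, p, rng):
    """Example 3.8 sketch: X_i ~ Bernoulli(q) for i < m (q ~ U(0,1) drawn
    once) and X_i ~ Bernoulli(p) for i >= m, all independent."""
    q = rng.uniform()
    probs = np.where(np.arange(1, n + 1) < m, q, p)
    return (rng.uniform(size=n) < probs).astype(int)

rng = np.random.default_rng(0)
# The sample mean is consistent for p: the first m-1 observations wash out.
x = sample_ex38(200000, m=50, p=0.3, rng=rng)
print(round(x.mean(), 2))  # close to p = 0.3
# For n < m, however, the distribution of (X_1, ..., X_n) does not
# depend on p at all, so no fixed set A can distinguish p_1 from p_2.
```

The contrast between the two comments is exactly the gap between empirical identifiability (consistency as n → ∞) and empirical distinguishability (a set that works for a fixed n).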
If λ = g(θ), in order to make λ1 and λ2 empirically distinguishable from having a consistent estimator (Tn)n∈IN of θ, in general g needs to be uniformly continuous, and (Tn)n∈IN needs to be uniformly consistent on C = g⁻¹(λ1) ∪ g⁻¹(λ2), i.e.,

∀ǫ > 0, α > 0 ∃n0 ∀n ≥ n0, θ ∈ C : Pn;θ(dΘ(Tn, θ) > ǫ) ≤ α.

The latter holds automatically in case that θ is empirically identifiable if |C| is finite, and in particular if g is bijective.
Lemma 3.9. If θ ∈ Θ is empirically identifiable, g is uniformly continuous on an open superset of C (requiring that such a set exists), and there is an estimator Tn of θ that is uniformly consistent on C, then λ1 ≠ λ2 ∈ Λ are empirically distinguishable.
Assuming only λ = g(θ) but not θ to be empirically identifiable, consistency of (Tn)n∈IN as an estimator of g(θ) needs to be uniform on C for making λ1 ≠ λ2 empirically distinguishable.
Consistent estimators are uniformly consistent in many situations. For example, the behaviour of affine equivariant multiple linear regression estimators is uniform over the whole parameter space. Therefore, if they are consistent for the full parameter vector, they are uniformly consistent, and will, according to Lemma 3.9, empirically distinguish subvectors and single coefficients of the regression parameter.
Remark 3.10. There are possible variants of Definition 3.7 that may be taken as different "grades" of empirical distinguishability.
In the situation of Example 3.8, events may be observed for large enough n that can distinguish p1 and p2, even though this is not guaranteed to happen. A concept that allows p1 and p2 to be seen as distinguishable in some sense (at least if too small m is excluded; m needs to be large enough that a "change point" after m observations can be diagnosed with nonzero probability regardless of p) is "potential distinguishability", see Definition 3.11.
Furthermore, it would make a difference to not allow the probabilities Pn;θ1(A), Pn;θ2(A) to be arbitrarily close. Consider a simple i.i.d. N(θ, 1) model. With the given definition, and, for given fixed θ0, g(θ) = 1(θ = θ0), 0 and 1 can be empirically distinguished (the rejection region of the standard two-sided test will do), i.e., it can be distinguished whether θ = θ0 or θ ≠ θ0. Using a definition that requires a "gap" of some β > 0 between Pn;θ1(A) and Pn;θ2(A), 0 cannot be distinguished from 1, because for θ → θ0, Pθ(A) for any A gets arbitrarily close to Pθ0(A). Both of these definitions can be seen as appropriate, from different points of view. It could be argued that Xn contains some, if not necessarily conclusive, information about whether θ = θ0, and it should therefore count as distinguishable from θ ≠ θ0, as achieved by Definition 3.7. But even with arbitrarily large n, X̄n will not be exactly equal to θ0, and will therefore be at least as compatible with some θ ≠ θ0 as with θ0, which could be used to argue that the two should not be defined as distinguishable. Choosing g as the identity, any fixed θ ≠ θ0 could still be distinguished from θ0; in this case there is a positive distance between θ and θ0, which makes X̄n a better fit for the closer parameter. This can be achieved by the concept of "empirical gap distinguishability", see Definition 3.11.
Potential distinguishability in particular implies ∀θ1 ∈ g⁻¹(λ1) : Pn;θ1(A) ≤ α, so that λ1 can be "rejected" by the indicator of A (keeping in mind that α cannot necessarily be chosen small), even though this test may be biased against some θ2 ∈ g⁻¹(λ2). Potential distinguishability is not symmetric in λ1 and λ2, but in Example 3.8, in fact p1 is potentially distinguishable from p2 (A can be chosen as the intersection of a set that rejects the null hypothesis of no change point in the binary sequence, see Worsley (1983), and the set where |X̄*n − p1| is large, where X̄*n is the mean after the estimated change point), and p2 is potentially distinguishable from p1 in the same way. The reason why there is symmetric potential distinguishability here but not standard distinguishability is that different distinguishing sets are required for the two directions, and that no set works uniformly over all θ ∈ g⁻¹(p1) ∪ g⁻¹(p2). See Example 4.3 for a genuinely asymmetric instance of potential distinguishability. Empirical gap distinguishability is stronger than empirical distinguishability, whereas potential distinguishability is weaker. Still, lack of empirical gap distinguishability means that in terms of the parameterised probabilities of observable sets, λ1 ≠ λ2 appear infinitesimally close, regardless of dΛ(λ1, λ2). Theorem 3.12 treats empirical distinguishability in model M01.
Remark 3.13. Apart from partial identifiability, there are further weaker versions of the classical identifiability concept. A parameter value is locally identifiable if in an open neighbourhood in the parameter space there is no other parameter that parameterises the same distribution (Rothenberg (1971)). Set identifiability (Ho and Rosen (2017)) means that sets of equivalent parameter values can be identified and potentially be estimated, as opposed to a precise parameter value. In other words, parameters from non-equivalent sets could be (empirically) distinguished, whereas equivalent parameters could not be distinguished. Empirical versions of these definitions are possible, but all the situations treated here that are not empirically identifiable would not be empirically locally or set identifiable either. All proofs of empirical non-identifiability in the Appendix rule out the existence of consistent estimators that can tell any two parameter values apart, so neither parameter sets nor parameter values in any neighbourhood of each other can be consistently told apart.

Examples from the literature
The concepts of empirical identifiability and distinguishability provide a framework that fits various existing results on the limitations of empirically identifying parameters or hypotheses that are identifiable according to the classical definition.
Example 4.1. The so-called "incidental parameter problem" was introduced by Neyman and Scott (1948); see Lancaster (2000) for a review. It refers to a situation in which there are observed units the number of which is allowed to go to infinity, and for each of these observed units there is a bounded finite number of observations, say xij, i = 1, . . ., n, j = 1, . . ., t, where n → ∞ but t is fixed. The model for the distribution of the corresponding random variables Xij involves some parameters αi, i = 1, . . ., n. For the estimation of each of these there are only t observations available, and the αi cannot be estimated consistently as n → ∞, so they are not empirically identifiable, although they may well be empirically distinguishable, depending on the specific model. The problem can be avoided in many situations by modelling the αi as random effects, so that they are governed by only one or few parameters, but in some situations, e.g., panel data in economics (Lancaster (2000)), researchers may be interested in inference about specific αi, and also standard distributional assumptions for the random effects distribution may not seem realistic.
Example 4.2. In a famous paper, Bahadur and Savage (1956) show the nonexistence of valid statistical inference for the problem of finding out about the true mean in a sufficiently large class F of distributions with existing mean (essentially requiring that for every P and Q also any mixture of them is in F, and that ∀µ ∈ IR ∃P ∈ F : EP(X) = µ).
Applying the terminology of the present paper, their Theorem 1 shows that any two means µ1 and µ2 are not empirically gap distinguishable. The reason is that with EP(X) = µ1, EQ(X) can be chosen so that for arbitrarily small ǫ > 0 and R = (1 − ǫ)P + ǫQ, ER(X) can take any value µ2 ≠ µ1. For arbitrarily large n and small enough ǫ, Pn(A) and Rn(A) are arbitrarily close.
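The effect can be made concrete with a small numeric sketch (hypothetical choices of ǫ, n, and the contamination point; Q is taken as a point mass for simplicity):

```python
# Sketch of the Bahadur-Savage construction: contaminate P = N(0, 1)
# with mass eps at a single huge point M, giving R = (1-eps)P + eps*delta_M.
eps = 1e-6
mu2 = 5.0
M = mu2 / eps          # then E_R(X) = (1-eps)*0 + eps*M = mu2
n = 10000

# Probability that a sample of size n from R contains no contaminated
# observation, i.e. looks exactly like a sample from P with mean 0:
p_identical = (1 - eps) ** n
print(round(p_identical, 3))  # about 0.99
```

So even with n = 10000 observations, a sample from R with mean 5 is, with probability about 0.99, literally a sample from P with mean 0; no set A can separate the two by a fixed gap.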
Example 4.3. Expanding the work of Bahadur and Savage (1956), Donoho (1988) considered functionals J of distributions, including the number of modes of the density, the Fisher information, any Lp-norm of any derivative of the density, the number of mixture components, and the negentropy. He showed that for a sufficiently rich nonparametric family P of distributions, the graph (i.e., the set of pairs (P, J(P)) with distribution P and functional value J(P)) is dense in the epigraph (the set of pairs (P, j) where j ≥ J(P)) using a "testing topology" induced by d(P, Q) = sup_{0≤ψ≤1} |∫ψdP − ∫ψdQ|. As ψ can be the indicator of a distinguishing set, two classes of distributions P and Q are not empirically gap distinguishable if inf_{P∈P,Q∈Q} d(P, Q) = 0. On the positive side, Donoho proves the existence of confidence sets for lower bounds of these functionals (except the negentropy); enough data can make it possible to identify that J(P) ≥ k fixed and given, provided that this is indeed the case for P. Thus J(P) = k1 can be potentially distinguished from J(P) = k2 < k1. Whereas distinguishability results are mostly negative, Donoho (1988) constructs a consistent estimator of the number of modes (Corollary of his Theorem 3.4) and shows therefore that the number of modes is identifiable from data, but consistency is not uniform, and for fixed n only a lower bound for J(P) can be given.
The key ingredient for Donoho's results (as well as the result of Bahadur and Savage (1956)) is the richness of P. If P is suitably constrained by assumptions, the functionals can be empirically identified; however, it cannot be empirically identified whether such assumptions hold.
Example 4.4. Spirtes et al (1993); Robins et al (2003) deal with the possibility of inferring the presence or absence of causal arrows in a graphical model. Chapter 4 of Spirtes et al (1993) is about "statistical indistinguishability". They define several indistinguishability concepts of different strengths for the problem of identifying the causal graph. Like classical identifiability, these concepts regard the model, not making reference to observable data, but empirical identifiability is also of key interest regarding the issue of how much can be inferred about the causal relationship between observed variables in the presence of an unobserved confounder. Robins et al (2003) show that in several situations "uniformly consistent tests" do not exist, whereas "pointwise consistent tests" exist; the latter, however, do not allow one to distinguish presence or absence of causal arrows for any fixed n uniformly over the possible parameters. "Consistent tests" are procedures that can have outcomes 0 ("accept absence of arrow"), 1 ("reject absence of arrow"), or 2 ("inconclusive"). Their Example 2 is very simple and most instructive. Assume observed binary random variables X and Y and a categorical unobserved confounder Z. The existence of a causal arrow between X and Y is operationalised by a parameter θ* encoding the strength of the causal effect, where θ* = 0 means that there is no causal arrow. There are eight different possible causal graphs encoding the possible conditional independence structures (Fig. 3 in Robins et al (2003)). The authors assume that the distribution is faithful to the graph, meaning that there are no more independence relationships in the distribution than encoded in the graph.
Adapting the terminology of the present work, the problem is to distinguish two classes of graphs. The first class C1 consists of those graphs that encode marginal independence between X and Y, which implies θ* = 0 under faithfulness. The second class C2 are the graphs that imply marginal dependence between X and Y, in which case a consistent test should give an inconclusive result, because there is a possible graph in which both X and Y are influenced by Z, which causes marginal dependence, despite X and Y being independent given Z, therefore θ* = 0. The results in Robins et al (2003) imply that within the second class, the existence of a causal arrow between X and Y is not empirically identifiable, as only dependence or independence between X and Y can be observed. There is a pointwise consistent test (i.e., a consistent estimator of the indicator variable, therefore empirical identifiability) that can tell apart the first and the second class. The two classes are not empirically gap distinguishable, but this is not very surprising as θ* can be arbitrarily close to 0 if a causal arrow exists. What is more remarkable is that the proof of their Theorem 1 (which states that no uniformly consistent test exists) implies that C1 cannot even be empirically gap distinguished from C2 ∩ {θ* = θ*0}, where θ*0 ≠ 0 is a fixed parameter value. This is because a graph in the second class that has causal arrows between each pair of X, Y, and Z encodes a model that allows for dependence between Z and X, and also between Z and Y, in such a way that X and Y can be arbitrarily close to marginally independent despite the existing causal effect θ*0.
Example 4.5. Regarding models for missing values, a key distinction is between MAR (missing at random) and MNAR (missing not at random) mechanisms. MAR holds if the distribution of the missingness indicator depends on the complete data (including missing values) only through the non-missing observations. This is a very convenient assumption for dealing with missing values, because it means that the non-missing data provide enough information to allow for unbiased inference. As acknowledged by the missing values literature, it is doubtful that this assumption is realistic, though, and it is also doubtful whether the data allow one to check this assumption, as key information for this is hidden in the missing values.
Molenberghs et al (2008) made this concern precise by showing that for every MAR model there exists an MNAR model that reproduces the same observed likelihood function, meaning in particular that the densities of the observed data, and therefore the probabilities of every observable set, are equal between these models. This translates into empirical indistinguishability (not even potential distinguishability is possible) of an indicator of MAR vs. MNAR, once more assuming a sufficiently rich class of models. The authors state that MAR and MNAR may be distinguishable under certain parametric assumptions that restrict the flexibility of the MNAR models to emulate the likelihoods of certain MAR models; but then it is not empirically distinguishable whether such a model holds or not.
Example 4.6. p-dimensional ordinal data are often modelled as generated by discretisation of latent Euclidean continuous variables. There is much work about dimension reduction, but assume here that there are p continuous latent variables, every one of which corresponds to an observed ordinal variable, i.e., if Yi is the ith latent variable, and the observed ordinal variable Xi takes the ordered categories j = 1, . . ., ki with probabilities π1, . . ., πki, with π0 = 0, then Xi = j if Yi is between the Σ_{l=0}^{j−1} πl- and Σ_{l=0}^{j} πl-quantiles of L(Yi). The most popular approach is to assume the latent variables to be multivariate Gaussian, see Muthén (1984). A mis-specification of the latent distribution can have consequences in practice, see Foldnes and Grønneberg (2020), who discuss tests of this assumption. In fact, all such tests only test the dependence structure of (Y1, . . ., Yp) rather than the shape of the marginal distributions L(Yi), which makes sense as the assignment of categories of Xi according to quantiles of L(Yi) obviously works for any continuous L(Yi). Following Sklar's famous theorem (Sklar (1959)), the joint distribution of (Y1, . . ., Yp) has a cumulative distribution function (CDF) H that can be written as H(y1, . . ., yp) = C(F1(y1), . . ., Fp(yp)), where F1, . . ., Fp are the marginal CDFs and C is a copula. This means that any dependence structure observed in ordinal data is compatible with any choice of the marginals. Indeed, for the latent variable model for such data, Proposition 2 of Almeida and Mouchart (2014) implies that F1, . . ., Fp are not empirically identifiable, and that any two vectors of marginal CDFs are not even potentially empirically distinguishable. The authors formulate this as a classical identifiability statement, but involving the (Y1, . . ., Yp) in the model, even though they are not observable, means that the model is identified, but not empirically, i.e., not from what is observable.

Distinguishing independence and dependence
The problem of identifying constant correlation between Gaussian observations is an instance of the more general problem of detecting dependence between the observations in a sample, particularly if they are meant to be analysed by methods that assume independent data. Here the focus is on i.i.d. data. This section does not present sophisticated results; the focus is on general ideas.
Existing tests such as the runs test (Wald and Wolfowitz (1940)) require additional information about the kind of dependence to be detected. Most of them test for dependence governed by the observation order, which is sensible if it can be suspected that closeness in observation order can give rise to dependence. This is often the case if observations are a time series, but also other meaningful orderings are conceivable, as well as dependence governed by "closeness" on external variables such as spatial location. Alternatively, there may be a known grouping of observations and possible within-group dependence, as modelled for example by random effects models.
In practice, the observation order is not always meaningful, an originally existing meaningful observation order may be unavailable to the data analyst, or dependence structures can be suspected that cannot be detected by examining relations between observations that are in some sense "close".Constant correlation between any two Gaussian observations as in model M1 is one example of such a structure.
The question of interest here is whether independence and dependence can be distinguished in case that the observation order carries no relevant information, and neither is there secondary information from additionally observed variables.This amounts to observing the empirical distribution of the data only.
For real-valued data X1, . . ., Xn, F̂n denotes the empirical distribution function. Assume that only F̂n is observed. The concept of empirical identifiability can be applied to binary "parameters", and particularly to a parameter that indicates, within an underlying model, whether X1, . . ., Xn are i.i.d. or not. Empirical identifiability involves asymptotics. For n → ∞ it needs to be assumed here that a new sequence X1, . . ., Xn is generated for each n, because observing a sequence F̂1, . . ., F̂n, F̂n+1, . . . based on the same sequence of observations would re-introduce observation order information.
At first sight the task of identifying dependence from F̂n may seem hopeless, given that F̂n is perfectly compatible with i.i.d. data generated from distributions with a "close" true CDF, or even from F̂n itself. As in other identifiability problems, information can come from restrictive assumptions.
Example 5.1. Consider model M2 and assume for the number of groups m ≥ 2, but only F̂n is observed, meaning that it cannot be observed to which group i an observation Xij belongs. Independence amounts to τ1² = 0, and the interest here is in g(θ) = 1(τ1² = 0), the indicator function for {τ1² = 0}. In case τ1² = 0, the underlying distribution of the Xij is i.i.d. Gaussian. In case τ1² > 0, the underlying distribution of the Xij partitions the data into different Gaussians for different i. Assuming that ni/n → πi > 0, F̂n will for large enough n look like a Gaussian mixture, which can be told apart from a single Gaussian. The order of a Gaussian mixture can be consistently estimated (James et al (2001)), and therefore 1(τ1² = 0), which is equal to the indicator of a single mixture component, is empirically identifiable. Note that the cited result is for i.i.d. data from a Gaussian mixture, which in model M2 would require the group memberships to be modelled i.i.d. multinomial(1, π1, . . ., πm), and then conditioning on the unknown values of Z1, . . ., Zm.
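The mechanism can be illustrated in simulation (a sketch with hypothetical parameter values; a routine normality test stands in for full mixture order estimation):

```python
import numpy as np
from scipy import stats

def pooled_m2(m, n_per, tau1, tau2=0.3, rng=None):
    """Pooled (ungrouped) sample from model M2 with mu = 0."""
    rng = rng or np.random.default_rng()
    z = rng.normal(0.0, tau1, size=m)
    return (z[:, None] + rng.normal(0.0, tau2, size=(m, n_per))).ravel()

rng = np.random.default_rng(0)
# Only the empirical distribution is used: the order is discarded by
# shuffling.  With tau1 = 0 the pooled sample is a single Gaussian;
# with tau1 = 3 it is a well-separated mixture, which non-normality
# of the empirical distribution reveals.
for tau1 in (0.0, 3.0):
    x = rng.permutation(pooled_m2(3, 2000, tau1, rng=rng))
    print(tau1, stats.normaltest(x).pvalue < 0.001)
```

The point is that the dependence (shared group effects) leaves a visible trace in the distributional shape, which survives even after the observation order is destroyed.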
The example illustrates that certain empirical distributions can indeed indicate dependence, if corresponding distributional shapes (here a Gaussian mixture with m ≥ 2 components) are assumed as impossible under independence but can occur under dependence.
Even if general marginal distributions are allowed, there are specific dependence structures that can be identified from the empirical distribution alone. In order to simplify matters, from now on consider binary data X1, . . ., Xn, for which observing the empirical distribution is equivalent to observing the number of ones, or X̄n = 1 − F̂n(0), and all marginal distributions are Bernoulli(p). Call the i.i.d. model M3. A problem of interest is whether there are dependence models with identical marginal distributions that can be told apart from M3 based on F̂n.
Such examples rely on the definition of a subset An of possible values of F̂n which, under the dependence model for large enough n, has a probability either higher than the maximum over p under M3, or smaller than the corresponding minimum. For checking dependence in practice, An needs to be specified in advance, meaning that the user needs to know a priori which values of F̂n can be suspected to indicate dependence even given the possibility under M3 that p = X̄n. Such information is rarely available in practice.
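As a concrete (hypothetical) instance of such a set An, consider a dependence model in which the observations come in antithetic pairs (X, 1 − X): all marginals are Bernoulli(1/2), but the number of ones is exactly n/2 with probability one, while under M3 this event has vanishing probability uniformly in p:

```python
from math import comb

# Under M3 (i.i.d. Bernoulli(p)), P(exactly n/2 ones) is maximised at
# p = 1/2, where it equals C(n, n/2) * 2^(-n) ~ sqrt(2/(pi*n)) -> 0.
# Under the antithetic-pairs model this probability is 1 for every
# even n, so A_n = {exactly n/2 ones} eventually separates the models.
for n in (10, 100, 1000):
    print(n, comb(n, n // 2) / 2 ** n)
```

Note that this only works because An was fixed in advance from knowledge of the suspected dependence structure, in line with the remark above.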
Summarising, dependence can be diagnosed from the data only if
• it is governed by the known order of observations or known external variables,
• or it favours (or avoids) specific events regarding the observed empirical distribution compared to the independence model of interest,
both in ways that the user has to specify in advance. It can be suspected that many existing dependence structures are not of this kind, meaning that only very limited aspects of independence between observations, regarding a sequence of such observations, can be checked. It is therefore very important to think through all background information about the data generating process in order to become aware of further potential issues with independence.
6 Cluster membership in k-means clustering

$k$-means clustering is probably the most popular cluster analysis method (Jain (2010)). It can be connected to a "fixed classification model" (Bock (1996)): Let $X_1, \ldots, X_n$, $X_i \in \mathbb{R}^p$, $i = 1, \ldots, n$, be independently distributed with
$$X_i \sim \mathcal{N}_p(\mu_{\gamma_i}, \sigma^2 I_p), \quad \gamma_i \in \{1, \ldots, k\}. \qquad (3)$$
This model can be interpreted as generating $k$ different Gaussian distributed clusters characterised by cluster means $\mu_1, \ldots, \mu_k \in \mathbb{R}^p$, all with the same spherical covariance matrix, where $\gamma_i$ indicates the true cluster membership of $X_i$. The $\gamma_i$ take discrete values, and their number converges to $\infty$ with $n$, so these are nonstandard parameters, but in many applications they are of practical interest.
The maximum likelihood (ML) estimator for $\theta = (\mu_1, \ldots, \mu_k, \gamma_1, \ldots, \gamma_n)$ in this model is given by $k$-means clustering (the ML estimator for $\sigma^2$ is easily derived, but is not relevant here) of the data $\mathcal{X}_n = (X_1, \ldots, X_n)$:
$$(m_{1n}, \ldots, m_{kn}, g_{1n}, \ldots, g_{nn}) = \mathop{\arg\min}_{m_1, \ldots, m_k, g_1, \ldots, g_n} W(m_1, \ldots, m_k, g_1, \ldots, g_n), \quad W(m_1, \ldots, m_k, g_1, \ldots, g_n) = \sum_{i=1}^n \|X_i - m_{g_i}\|^2,$$
with ties in the $\arg\min$ broken in an arbitrary way. For given $m_1, \ldots, m_k$, the $g_1, \ldots, g_n$ minimising $W$ are given by $g_i = \arg\min_{j \in \{1, \ldots, k\}} \|X_i - m_j\|^2$, and with these write $W(m_{1n}, \ldots, m_{kn}) = W(m_{1n}, \ldots, m_{kn}, g_{1n}, \ldots, g_{nn})$. For given $g_1, \ldots, g_n$, the cluster-wise mean vectors minimise $W$. As there are only finitely many values of $g_1, \ldots, g_n$, the ML estimator always exists, if not necessarily uniquely. There are two identifiability issues here: (i) the numbering of the clusters is arbitrary, and (ii) $\mu_q = \mu_r$ for $q \neq r$ means that it is not possible to distinguish between $\gamma_i = q$ and $\gamma_i = r$ for $i \in \mathbb{N}$. Therefore assume that the $\mu_1, \ldots, \mu_k$ are pairwise different and lexicographically ordered (i.e., with obvious notation, $\mu_{11} \le \ldots \le \mu_{k1}$ with ties broken by the second variable or, if there is still equality, by the third and so on; the same for $m_{1n}, \ldots, m_{kn}$). This makes the model identifiable according to the traditional definition.
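The alternating structure of this ML estimation (assign each observation to the closest current mean, then recompute cluster-wise means) is what the standard $k$-means (Lloyd) algorithm implements. A minimal sketch, with illustrative data, initial values and function names:

```python
import numpy as np

def kmeans_ml(x, init_means, n_iter=50):
    """Alternate the two minimisation steps of W: assign each X_i to the
    closest current mean (estimating gamma_i), then recompute cluster-wise
    means. Ties are broken by argmin (lowest index)."""
    means = np.asarray(init_means, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(x[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)          # g_i = argmin_j ||X_i - m_j||
        for j in range(len(means)):
            if np.any(labels == j):        # cluster-wise means minimise W
                means[j] = x[labels == j].mean(axis=0)
    return means, labels

rng = np.random.default_rng(1)
x = np.vstack([rng.normal(-3, 1, size=(100, 2)),   # cluster with mu_1 = (-3, -3)
               rng.normal(+3, 1, size=(100, 2))])  # cluster with mu_2 = (3, 3)
means, labels = kmeans_ml(x, init_means=[[-1.0, 0.0], [1.0, 0.0]])
```

The returned `labels` are the estimated cluster memberships; as discussed below, their consistency depends heavily on the model under which the data are generated.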
However, due to the nonstandard nature of the model parameters, the ML estimator is known to be inconsistent, even if only the estimation of the $k$ mean vectors is of interest (Bryant (1991)).
The cluster membership parameters are another example of parameters that are identifiable according to the classical definition (because $\gamma_i$ uniquely defines the distribution of $X_i$) but cannot be empirically identified.
Theorem 6.1. The parameters $\gamma_i$, $i \in \mathbb{N}$, in the fixed classification model defined in (3) are not empirically identifiable.
It may be suspected that this is a consequence of the fact that, for $i = 1, \ldots, n$, only $X_i$ holds information about the parameter $\gamma_i$, and the number of these parameters goes to $\infty$ with $n \to \infty$. But this is not quite true. More observations add information about the clusters, which can in turn be used to classify individual observations better. The problem here is rather the Gaussian distribution assumption, which implies that the marginal density of $X_i$ is everywhere nonzero, so that the single observation made of $X_i$ is not enough to determine with probability 1 to what cluster the observation belongs (the setup in which classical identifiability could be used to estimate this parameter would require a potentially infinite number of replicates of $X_i$), even if there is an infinite amount of information about the clusters. In fact, there is a different model setup in which the $\gamma_i$ are empirically identifiable. It requires that, where densities exist, the marginal density $f_{\theta^*,n}(X_i = x)$ is zero wherever $f_{\theta,n}(X_i = x) > 0$, where $\theta^*$ equals $\theta$ in all components except $\gamma_i$. Defining $W(m_1, \ldots, m_k; P) = \int \min_{j \in \{1, \ldots, k\}} \|x - m_j\|^2 \, dP(x)$, Pollard (1981) showed that for a distribution $P$ satisfying (4) and (5), the empirical $k$-means centres $T_n = (m_{1n}, \ldots, m_{kn})$ converge almost surely to the minimiser of $W(\cdot\,; P)$. For $\mathcal{L}(X) = P$ and $j = 1, \ldots, k$, define $P_j$ as $P$ constrained to the set $A_j$ of points that are closest to the mean $\mu_j$ ($A_1, \ldots, A_k$ form a Voronoi tessellation of $\mathbb{R}^p$), and
$$P = \sum_{j=1}^k \pi_j P_j, \quad \pi_j = P(A_j) \qquad (6)$$
(every distribution can be written as a mixture in this form; as a side remark, $P$ might be a Gaussian mixture, but in this case the $P_j$ are not its Gaussian components). Mixture models of this form can be derived from a model for outcomes $(G, X)$ with $G \in \{1, \ldots, k\}$ distributed according to a categorical distribution with probabilities $(\pi_1, \ldots, \pi_k)$ and $\mathcal{L}(X \mid G = j) = P_j$. Then $\mathcal{L}(X) = P$ (McLachlan and Peel (2000)).
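The decomposition of $P$ over the Voronoi cells $A_1, \ldots, A_k$ has a simple empirical analogue: assign each sampled point to its closest centre and estimate $\pi_j$ by the cell proportions. A sketch with an illustrative one-dimensional $P$, here a two-component Gaussian mixture; note that the centres chosen below are deliberately not the mixture's own component means:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample from some P (a two-component Gaussian mixture on the line).
x = np.concatenate([rng.normal(-2, 1, 7000), rng.normal(2, 1, 3000)])

mu = np.array([-1.5, 2.5])     # candidate centres (illustrative, not the component means)
cells = np.abs(x[:, None] - mu[None, :]).argmin(axis=1)   # Voronoi cell of each point

# Empirical analogue of P = sum_j pi_j P_j with pi_j = P(A_j):
pi_hat = np.bincount(cells, minlength=2) / len(x)
print(pi_hat)   # proportions of the two Voronoi cells
```

The restrictions of the sample to the two cells play the role of the $P_j$; they are truncated, hence not Gaussian, even though $P$ is a Gaussian mixture.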
Now consider $\mathcal{L}(\mathcal{X}_n) = P^*$ so that $X_1, \ldots, X_n$ are independently distributed with
$$X_i \sim P_{\gamma_i}, \quad \gamma_i \in \{1, \ldots, k\}, \ i = 1, \ldots, n. \qquad (7)$$
This defines a fixed classification model associated to the mixture $P$. Let $Q_P$ be an infinite i.i.d. product of categorical distributions on $\{1, \ldots, k\}$ with probabilities $(\pi_1, \ldots, \pi_k)$. Assume for given $P$ that $\gamma = (\gamma_1, \gamma_2, \ldots)$ fulfils (8). Because of the strong consistency of $T_n$, (8) holds with probability 1 under $Q_P$; but note that under (7), $\gamma$ is a fixed parameter and not a random variable, and the fact that (8) holds for $Q_P$-almost all $\gamma$ just means that (8) is not more restrictive than assuming a mixture with fixed proportions, although it will not allow for fully general $\gamma$.

Theorem 6.2. Assuming (4), (5), and (8), the parameters $\gamma_i$, $i \in \mathbb{N}$, in the fixed classification model defined by (7) are empirically identifiable.
Already from Pollard (1981) it is clear that $k$-means does not actually estimate the centres of the spherical Gaussians in (3), but rather the Voronoi tessellation resulting from $P$, and the resulting clusters are not necessarily spherical. Added here is the observation that one can define meaningful cluster indicators in this setup, and that these can be consistently estimated, even though there is one such indicator for every observation; this is not possible in (3). Furthermore, (6) interprets $P$ as a mixture, and thus shows that there is a mixture that $k$-means estimates consistently.
The reader may wonder about empirical distinguishability of the parameters $\gamma_i$, $i \in \mathbb{N}$, i.e., about whether two given values $j_1$ and $j_2$ of $\gamma_i$ for given $i$ can be distinguished. In the situation of Theorem 6.2, this follows from Lemma 3.9 (projections of discrete parameters are by definition uniformly continuous). The situation of Theorem 6.1 is less obvious. It is however clear that the data contain some information about $\gamma_i$ through $X_i$. If it were possible to empirically identify $\mu_1, \ldots, \mu_k$, the set $A = \{\|X_i - \mu_{j_1}\| > \|X_i - \mu_{j_2}\|\}$ could distinguish $j_1$ and $j_2$. A conjecture is that finding consistent estimators for $\mu_1, \ldots, \mu_k$ requires additional conditions on the sequence $(\gamma_i)_{i \in \mathbb{N}}$ that allow the use of a consistent estimator from an i.i.d. mixture, see Redner and Walker (1984); Bryant (1991).

Conclusion
There are various potential issues with the identification of parameters, and the four definitions given here (empirical identifiability, empirical distinguishability, empirical gap distinguishability, potential empirical distinguishability) may not cover all of them; using the definition of distinguishing sets, further definitions are possible, for example empirical set identifiability. What is already present, however, allows many examples to be dealt with.
Apart from the precise definitions, there are also different sources of identifiability and distinguishability problems. In some situations (Examples 4.5, 4.6) the problem is that modelled information is not observed, either because it regards missing values or latent variables. In some situations (Examples 4.2, 4.3), the issue is that the class of distributions to consider, even for a single parameter value of interest, is too rich, allowing for so much flexibility that the probability of any observable set cannot be sufficiently constrained. Identifiability and empirical distinguishability can be the result of model assumptions constraining this flexibility, see Example 5.1; in Example 2.1, constraining the variance $\sigma^2$ would in turn allow the data to be informative about $\rho$. These model assumptions cannot be justified from the data alone, though. In some further situations, the data carry information about the parameter of interest, but this information, or more precisely the growth of the information over $n$, is limited (Lemma 3.5, Example 4.1, Theorem 6.1; Example 2.1 is also of this kind, using (1)). Example 3.8 is constructed so that the distinguishing information may not occur at any finite $n$, and the parameterisation in Example 4.4 is arbitrarily close to a situation of classical non-identifiability, which is only avoided by the faithfulness assumption.
Some examples, such as model M01 and the problem in Section 6, are characterised by not allowing for i.i.d. repetition; the corresponding parameters could be identified if the whole sequence of observations were repeated i.i.d., and the lack of empirical identifiability is due to the assumed impossibility of doing this. It may be wondered whether such models have a valid frequentist interpretation, which seems to rely on i.i.d. replicability at least in principle. Frequentism needs to be interpreted in a rather "idealist" way to accommodate such situations, appealing to replication of an infinite sequence as a thought experiment, although this point can arguably be made regarding time series and other models as well; for more on this, see Hennig (2020).
The most relevant and unsettling implication of this work for practice regards the impossibility of checking certain model assumptions, particularly independence; sufficiently flexible models allowing for non-identical marginals can be impossible to detect as well, although this is not shown here.
The considerations in Section 5 regarding the requirements for detecting dependence do not only hold for data for which only the empirical distribution is observed; they hold in the same way for situations in which the observation order, even if known and potentially meaningful, is not informative about the dependence structure, and no external variables exist that hold such information either. This is probably a very common situation. The only way to justify independence then is knowledge about the subject matter and the data generation. Bayesians may think that the lack of information in the data about parameters such as $\rho$ in M01 could be compensated by a prior distribution, but the lack of empirical identifiability and distinguishability of $\rho$ raises the question where quantitative information to set up the prior should come from. A prior could only be obtained from existing qualitative information about the data generating process, and as there is no information in the data, the prior will determine the impact of $\rho$ without the possibility of being "corrected" by the data.

Appendix: Proofs
Proof of Theorem 3.3. Let $(R_n)_{n \in \mathbb{N}}$ with $R_n: \mathcal{X}^n \to [0, 1]$ be a consistent estimator of $\rho$ ($\rho = 1$ and $R_n = 1$ are assumed possible, although the following contradiction to consistency relies on $\rho < 1$; $R_n$ could be allowed to take negative values, even though $\rho \ge 0$ must be assumed, because otherwise the matrices $\Sigma_j$ below would fail to be positive semi-definite for large enough $n$).
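The positive semi-definiteness constraint mentioned in parentheses can be verified directly: an $n \times n$ equicorrelation matrix with off-diagonal entry $\rho$ has eigenvalues $1 - \rho$ (multiplicity $n - 1$) and $1 + (n - 1)\rho$, so any fixed $\rho < 0$ violates positive semi-definiteness once $n > 1 - 1/\rho$. A small numerical check (the values of $\rho$ and $n$ are illustrative):

```python
import numpy as np

def equicorr(n, rho):
    """n x n equicorrelation matrix: ones on the diagonal, rho off it."""
    return (1 - rho) * np.eye(n) + rho * np.ones((n, n))

rho = -0.1
for n in (5, 20):
    min_eig = np.linalg.eigvalsh(equicorr(n, rho)).min()
    # analytic smallest eigenvalue: min(1 - rho, 1 + (n - 1) * rho)
    print(n, min_eig)
```

For $\rho = -0.1$ the matrix is still positive semi-definite at $n = 5$ but not at $n = 20$, since $1 + 19 \cdot (-0.1) < 0$.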