Small-area estimation with missing data using a multivariate linear random effects model

In this article, small-area estimation with multivariate data that follow a monotone missing sample pattern is addressed. Random effects growth curve models with covariates are formulated, and a likelihood-based approach is proposed for estimating the unknown parameters. Moreover, the prediction of random effects and of small-area means is discussed.


Introduction
In survey analysis, estimation of characteristics of interest for subpopulations (also called domains or small areas) for which sample sizes are small is challenging. We adopt an approach where the survey estimates are improved via covariate information. Producing reliable estimates for small areas in surveys by utilizing covariates is known as the small-area estimation (SAE) problem (Pfeffermann 2002). Rao (2003) has given a comprehensive overview of the theory and methods of model-based SAE. Most surveys are conducted continuously in time based on cross-sectional repeated measures data. There are also some works related to time series and longitudinal surveys in small-area estimation; for example, one can refer to Consortium (2004), Ferrante and Pacei (2004), Nissinen (2009), Singh and Sisodia (2011) and Ngaruye et al. (2017). In Ngaruye et al. (2017), the authors proposed a multivariate linear model for repeated measures data in a SAE context. The model is a combination of the classical growth curve model (Potthoff and Roy 1964) and a random effects model. This model accounts for longitudinal surveys, i.e., units are sampled once and then followed in time, grouped (blocked) response units, and time-correlated random effects. It is common to obtain incomplete repeated measures data in longitudinal surveys. In this article, we extend the above-mentioned model to include a monotone missing observation structure. In particular, drop-outs from the survey can be handled, i.e., when it is planned to follow units in time but some units disappear before the end-point.
Missing data may be due to a number of limitations such as unexpected budget constraints, but it may also happen that units for which measurements were expected to be sampled over time disappeared from the survey for various reasons. The statistical analysis of data with missing values emerged early in the 1970s with the advancement of modern computer-based technology (Little and Rubin 1987). Since then, several methods for the analysis of missing data have been developed according to the missing data mechanism, which is either ignorable for inferences, including data missing at random and missing completely at random, or nonignorable. Many authors have dealt with the problem of missing data; we can refer to Little and Rubin (1987), Carriere (1999), Srivastava (2002), Kim and Timm (2006) and Longford (2006), for example. In particular, incomplete data in the classical growth curve model and in random effects growth curve models have been considered, for example, by Kleinbaum (1973), Woolson and Leeper (1980), Srivastava (1985), Liski (1985), Liski and Nummi (1990), and Nummi (1997).
In Sect. 2, we present the formulation of a multivariate linear model for repeated measures data. Thereafter, this model is extended to handle missing data. A "canonical" form of the model is considered in Sect. 3. In Sect. 4, the estimation of parameters and the prediction of random effects and small-area means are derived. Finally, in Sect. 5, we give a small simulation study to show the performance of the proposed estimation procedure.

Multivariate linear model for repeated measures data
We will in this section consider the multivariate linear regression model for repeated measurements with covariates at p timepoints suitable for discussing the SAE problem, which was defined by Ngaruye et al. (2017) when data are complete. It is supposed that the target population of size N, whose characteristic of interest is y, is divided into m subpopulations called small areas of sizes N_d, d = 1, ..., m, and that the units in all small areas are grouped into k different categories. Furthermore, we assume the mean growth of each unit in area d for each of the k groups to be, for example, a polynomial in time of degree q − 1, and we also suppose that we have covariate variables related to the characteristic of interest whose values are available for all units in the population. Out of the whole population N and the small areas N_d, n and n_d "units" are sampled according to some sampling scheme which, however, is technically of no interest in the present work. The model at small-area level for the sampled units is written as model (1), and combining all disjoint m small areas and all n sampled units, divided into k non-overlapping group units, yields

Y = ABHC + 1_p γ'X + UZ + E,    (2)

where Σ_u is an unknown arbitrary positive definite matrix and, without loss of generality, Σ_e = σ_e² I_p, which is assumed to be known. In practice, σ_e² is estimated from the survey and only depends on how many units are sampled from the total population N.
In model (2), Y: p × n is the data matrix, A: p × q, q ≤ p, is the within-individual design matrix indicating the time dependency within individuals, B: q × k is the unknown parameter matrix, C: mk × n with rank(C) + p ≤ n and p ≤ m is the between-individuals design matrix accounting for group effects, γ is an r-vector of fixed regression coefficients representing the effects of auxiliary variables, X: r × n is a known matrix taking the values of the covariates, U: p × m is a matrix of random effects whose columns are assumed to be independently distributed as a multivariate normal distribution with mean zero and positive definite dispersion matrix Σ_u, i.e., U ∼ N_{p,m}(0, Σ_u, I_m), Z: m × n is a design matrix for the random effects, and the columns of the error matrix E are assumed to be independently distributed as a p-variate normal distribution with mean zero and known covariance matrix Σ_e, i.e., E ∼ N_{p,n}(0, Σ_e, I_n). The matrix H = (I_k : I_k : ··· : I_k): k × mk captures all k group units by stacking the m data matrices of model (1) together as column blocks in a new matrix, and it is included in the model for technical reasons of estimation. More details about the model formulation and the estimation of model parameters can be found in Ngaruye et al. (2017).
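To make the structure of model (2) concrete, the following numpy sketch simulates data from a model of the form Y = ABHC + 1_p γ'X + UZ + E. All dimensions, design matrices and parameter values below are small illustrative choices, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

p, q, k, m, r = 4, 2, 2, 3, 2      # timepoints, polynomial terms, groups, areas, covariates
n_per = 5                           # sampled units per (area, group) cell
n = m * k * n_per                   # total sampled units

# Within-individual design: linear growth in time (q = 2)
t = np.arange(1, p + 1)
A = np.vstack([t**j for j in range(q)]).T            # p x q

B = rng.normal(size=(q, k))                          # unknown group growth parameters
H = np.hstack([np.eye(k)] * m)                       # k x mk, stacks the m area blocks

# Between-individual designs: unit j belongs to (area d, group g)
C = np.zeros((m * k, n))
Z = np.zeros((m, n))
for j in range(n):
    d, g = j // (k * n_per), (j // n_per) % k
    C[d * k + g, j] = 1.0
    Z[d, j] = 1.0

gamma = rng.normal(size=r)                           # fixed covariate effects
X = rng.normal(size=(r, n))                          # covariate values

Sigma_u = np.eye(p)                                  # random-effects dispersion
U = rng.multivariate_normal(np.zeros(p), Sigma_u, size=m).T   # p x m
E = 0.1 * rng.normal(size=(p, n))                    # Sigma_e = sigma_e^2 I_p

Y = A @ B @ H @ C + np.outer(np.ones(p), gamma @ X) + U @ Z + E
print(Y.shape)
```

Each term is p × n, so the five matrix products add up conformably; the loop filling C and Z mirrors the disjoint (area, group) cell structure described above.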

Incomplete data
Consider model (2) and suppose that there are missing values in such a way that the measurements taken at time t, t = 1, ..., p, on each unit are not all complete and the numbers of observations for the p different timepoints are n_1, ..., n_p, with n_1 ≥ n_2 ≥ ··· ≥ n_p > p. Such a pattern of missing observations follows a so-called monotone sample.
Let the sample observations be composed of h mutually disjoint sets according to the monotone pattern of missing data, where the ith set, i = 1, ..., h, is the sample data matrix Y_i: p_i × n_i; the units in the first set, i = 1, have complete measurements, while the units in the sets i = 2, ..., h dropped out after successive periods, with p_i ≤ p and p_1 + ··· + p_h = p. Such a data set is called an h-step monotone missing data pattern. For technical simplicity, in this paper we only study a three-step monotone missing structure with complete sample data for a given number of timepoints and incomplete sample data for the other timepoints. This means that we have a complete data set of n_1 observations of dimension p_1, and incomplete data sets of n_2 and n_3 observations of dimensions p_2 and p_3, where p_1 + p_2 + p_3 = p.
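The three-step drop-out structure can be encoded as an observation-indicator matrix. The sketch below uses hypothetical set sizes and one plausible reading of the pattern: one set is complete, one drops out after p_1 + p_2 timepoints, and one after p_1 timepoints.

```python
import numpy as np

# Hypothetical set sizes for a three-step monotone pattern
p1, p2, p3 = 3, 2, 1
p = p1 + p2 + p3
sets = {"complete": (10, p), "drop_late": (6, p1 + p2), "drop_early": (4, p1)}

def observed_mask(n_units, depth, p_total):
    # True where a unit is observed: the first `depth` timepoints only
    m = np.zeros((p_total, n_units), dtype=bool)
    m[:depth, :] = True
    return m

pattern = np.hstack([observed_mask(n, d, p) for n, d in sets.values()])

# In a monotone sample the number of observed units per timepoint,
# n_1 >= n_2 >= ... >= n_p, is non-increasing: a drop-out never returns.
counts = pattern.sum(axis=1)
print(counts)   # [20 20 20 16 16 10]
```

The non-increasing counts vector is exactly the sequence n_1 ≥ n_2 ≥ ··· ≥ n_p defining the monotone sample.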

The model which handles missing data
In this article, we will only present details for a three-step monotone pattern. We assume that the model defined in (2) holds together with a monotone missing structure. This extended model can be presented by three equations, one for each of Y_1, Y_2 and Y_3, where n_idg equals the number of observations for the response Y_i, the dth small area and the gth group, and X_i represents all covariates for the response Y_i. In particular, the construction of Z_i helps to derive a number of mathematical results, including column-space relations, where C(Q) stands for the column vector space generated by the columns of the matrix Q.

A canonical version of the model
The model defined through (3) will be transformed into a simpler model which will be utilized when estimating the unknown parameters. A couple of definitions will be necessary, but first it is noted that certain matrices built on C(Z_i') are idempotent. It is supposed that we have so many observations that the needed inverses exist. Therefore, there exists an orthogonal matrix Q, and we let Q^o be any matrix of full rank spanning C(Q)^⊥, the orthogonal complement of C(Q). Before we analyze the transformations above, we need a few technical relations concerning Z_i, i = 1, 2, 3. To some extent, the next lemma is our main contribution, because without it the mathematics would become very difficult to carry out. Note that the result depends on the definition of Z_i, i = 1, 2, 3, given in (4).
Lemma 3.1 Let Z_i, i = 1, 2, 3, be as in (4), and let R_ij, i = 1, 2, 3, j = 1, 2, be defined in (6). Then statements (i)–(iii) hold.

Proof Using (6), (5) and the definition of C_i1, it follows that the displayed matrix is the unique orthogonal projection on C(C_i), and thus statement (i) is established. Moreover, once again using (6) and the definition of C_i1, statement (ii) is verified. Statement (iii) can be shown in a similar way. □
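The projector algebra behind the proof of Lemma 3.1 can be checked numerically. The sketch below uses a hypothetical full-row-rank design matrix C standing in for C_i, and verifies the defining properties of the unique orthogonal projector onto C(C').

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.normal(size=(3, 8))                   # hypothetical design matrix, full row rank

# Orthogonal projector onto the column space C(C') of C'
P = C.T @ np.linalg.solve(C @ C.T, C)

assert np.allclose(P @ P, P)                  # idempotent
assert np.allclose(P, P.T)                    # symmetric
assert np.allclose(P @ C.T, C.T)              # acts as the identity on C(C')
print("projector properties verified")
```

Idempotency plus symmetry characterize the projector uniquely, which is what makes the simplifications in the lemma possible.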

The likelihood
We start by defining the covariance between two random matrices and the dispersion matrix. Let X and Y be two random matrices. The covariance between X and Y is defined by

cov(X, Y) = E[vec X vec'Y] − E[vec X] E[vec'Y],

and the dispersion matrix D[X] is defined by D[X] = cov(X, X), where vec is the usual columnwise vectorization operator and vec' is its transpose.
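This definition can be illustrated empirically. The sketch below, with hypothetical random matrices where Y = 2X + independent noise, estimates cov(X, Y) from the vectorized samples and checks it against 2·D[X], which is the population value under that construction.

```python
import numpy as np

def vec(M):
    # columnwise vectorization: stack the columns of M into one vector
    return M.reshape(-1, order="F")

rng = np.random.default_rng(2)
samples_X = rng.normal(size=(10000, 2, 3))
samples_Y = 2.0 * samples_X + rng.normal(size=(10000, 2, 3))

vX = np.array([vec(M) for M in samples_X])
vY = np.array([vec(M) for M in samples_Y])

# Empirical cov(X, Y) = E[vec X vec'Y] - E[vec X] E[vec'Y]
cov_XY = vX.T @ vY / len(vX) - np.outer(vX.mean(0), vY.mean(0))
D_X = vX.T @ vX / len(vX) - np.outer(vX.mean(0), vX.mean(0))   # D[X] = cov(X, X)

# With Y = 2X + independent noise, cov(X, Y) should be close to 2 D[X]
print(np.max(np.abs(cov_XY - 2 * D_X)))
```

Note that cov(X, Y) is a 6 × 6 matrix here: the vec operator turns each 2 × 3 matrix into a 6-vector, so matrix covariances are covariances between vectorized matrices.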
The transformation which has taken place in the previous section is one-to-one. Based on {V_ij}, i = 1, 2, 3, j = 0, 1, 2, we will set up the likelihood for all observations. However, first we present the marginal densities (likelihood functions) for {V_ij}, which of course are normally distributed. Thus, to determine the distributions, it is enough to present the means and dispersion matrices for i = 1, 2, 3. The matrix Σ_{u,ii} stands for the covariance matrix between the rows of the ith data matrix Y_i, i = 1, 2, 3. Concerning the simultaneous distribution of {V_ij}, i = 1, 2, 3, j = 0, 1, 2: the matrices V_i0 and V_i2, i = 1, 2, 3, are independently distributed, and these variables are also independent of {V_i1}. However, the elements in {V_i1} are not independently distributed. We have to pay attention to the likelihood of these variables, and {vec V_i1}, i = 1, 2, 3, will be considered.
Let L(V; Θ) denote the likelihood function for the random variable V with parameter Θ. We are going to discuss the factorization in (13), where a dot indicates that no parameters have been specified. The next result, which is obtained by straightforward calculations, will be used in the forthcoming presentation, where (·)_{i=1,2,3; j=1,2,3} denotes a block partitioned matrix. From the factorization of the likelihood in (13), it follows that we have to investigate L(vec V_31 | vec V_21, vec V_11; ·). Thus, we are interested in the conditional expectation and the conditional dispersion. The conditional mean equals the corresponding Gaussian regression expression, where the expectations of vec V_i1, i = 1, 2, 3, can be obtained from (11). Moreover, the conditional dispersion is given by

D[vec V_31 | vec V_11, vec V_21] = D[vec V_31] − (C[vec V_31, vec V_11] : C[vec V_31, vec V_21]) D[(vec'V_11 : vec'V_21)']^{−1} (C[vec V_31, vec V_11] : C[vec V_31, vec V_21])'.

The next lemma fills in the details of this relation and of the conditional mean, and indeed shows that relatively complicated expressions can be dramatically simplified using Lemma 3.1.
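The conditional dispersion above is a Schur complement of a partitioned dispersion matrix. The sketch below, with a hypothetical joint Gaussian in place of (vec V_11, vec V_21, vec V_31), verifies numerically that the Schur complement equals the residual covariance after the best linear prediction from the conditioning block.

```python
import numpy as np

rng = np.random.default_rng(3)
# A positive definite joint dispersion for a 5-dimensional vector (v1', v2')'
L = rng.normal(size=(5, 5))
S = L @ L.T + 5 * np.eye(5)
S11, S12, S22 = S[:2, :2], S[:2, 2:], S[2:, 2:]

# Conditional dispersion D[v1 | v2] = S11 - S12 S22^{-1} S12' (Schur complement)
S_cond = S11 - S12 @ np.linalg.solve(S22, S12.T)

# Monte Carlo check: regress v1 on v2 and inspect the residual covariance
z = rng.multivariate_normal(np.zeros(5), S, size=200000)
v1, v2 = z[:, :2], z[:, 2:]
beta = np.linalg.solve(np.cov(v2.T), np.cov(v2.T, v1.T)[:3, 3:])
resid = v1 - v2 @ beta
print(np.max(np.abs(np.cov(resid.T) - S_cond)))
```

For jointly normal variables the conditional dispersion does not depend on the conditioning value, which is why the single matrix S_cond describes the whole conditional law.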
Lemma 3.2 Let V_i1, i = 1, 2, 3, be defined in (8). Then statements (i)–(vi) hold.

Proof Statements (i), (ii), (iii) and (iv) follow directly from (14). In (v), the inverse of a partitioned matrix is utilized, and (vi) is obtained by straightforward matrix manipulations and an application of Lemma 3.1. □

Define W_{3·2}, where W_22 is given in (16); then the next theorem is directly established using Lemma 3.2.
Moreover, it follows from (13) that L(vec V_21 | vec V_11; ·) is needed. However, the calculations are the same as above and we only present the final result.

Estimation of parameters and prediction of small-area means
For the monotone missing value problem treated in the previous sections, it was shown that it is possible to present a model which is easy to utilize. The remaining part of the article consists of a relatively straightforward approach for predicting the small-area means, which is the concern of this article.

Estimation
To estimate the parameters, a restricted likelihood approach is proposed. For the likelihood given in Theorem 3.3, we start by estimating B and γ by maximizing the factor corresponding to V_11. From this part of the likelihood we cannot, in general, estimate B and γ; only specific linear combinations are estimable. However, B and γ can be expressed as linear functions of new unknown parameters, say Θ, which can be estimated together with Σ_{u,11} from L(V_11; γ̂(Θ), B̂(Θ), Σ_{u,11}), which is a likelihood from a MANOVA model. Furthermore, inserting these estimators in the remaining likelihood factors and thereafter maximizing the likelihoods with respect to the remaining unknown parameters produces estimators for Σ_{u,12} (using B_0 in Eq. (21)) and Σ_{u,22} (using W_22 in (16)). Inserting all the obtained estimators in L(vec V_31 | vec V_11, vec V_21; ·) and then maximizing the likelihood with respect to B_1, B_2 and W_{3·2} yields estimators for Σ_{u,31}, Σ_{u,32} and Σ_{u,33} (using (17), (18), (19) together with (15), (16) and (20)).

Prediction
To perform prediction of small-area means, we first have to predict U_1, U_2 and U_3 in the model given by (3). Following Henderson's prediction approach for linear mixed models (Henderson 1975), the prediction of v can be derived in two stages, where in the first stage Σ_u is supposed to be known. Thus, the idea is to maximize the joint density in (22) with respect to vec B and γ, which are included in μ, and v, which is also included in μ but also appears in the quadratic term involving the inverse dispersion matrix of v. Moreover, in (22), c is a known constant. The vector μ and the matrix Σ are the expectation and dispersion of y | v, respectively. Supposing Σ_u is known, and then using (22) together with standard results from linear models theory, we find estimators of the unknown parameters and of v as functions of Σ_u; thereafter, replacement of Σ_u by its estimator, which is obtained as described in Sect. 4.1, yields the predictor v̂.
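Henderson's two-stage idea can be sketched with his mixed model equations, which are what maximizing the joint density of (y, v) amounts to when the dispersion matrices are known. All dimensions and parameter values below are hypothetical, and the known dispersions G and R stand in for the model's Σ_u-based quantities.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p_fix, q_ran = 200, 3, 5
X = rng.normal(size=(n, p_fix))                        # fixed-effects design
Z = np.kron(np.eye(q_ran), np.ones((n // q_ran, 1)))   # random-effects design (5 groups)
G = 0.5 * np.eye(q_ran)                                # dispersion of v (assumed known)
R = np.eye(n)                                          # residual dispersion (assumed known)

beta = np.array([1.0, -2.0, 0.5])
v = rng.multivariate_normal(np.zeros(q_ran), G)
y = X @ beta + Z @ v + rng.multivariate_normal(np.zeros(n), R)

# Henderson's mixed model equations (Henderson 1975):
# [ X'R^-1 X     X'R^-1 Z        ] [beta_hat]   [X'R^-1 y]
# [ Z'R^-1 X     Z'R^-1 Z + G^-1 ] [v_hat   ] = [Z'R^-1 y]
Ri = np.linalg.inv(R)
lhs = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
                [Z.T @ Ri @ X, Z.T @ Ri @ Z + np.linalg.inv(G)]])
rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
sol = np.linalg.solve(lhs, rhs)
beta_hat, v_hat = sol[:p_fix], sol[p_fix:]
print(beta_hat)
```

In the second stage the assumed-known G and R are replaced by their estimates, mirroring the replacement of Σ_u by its estimator described above.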
The prediction of small-area means is performed in the sense that estimating the small-area means is equivalent to predicting the small-area means of the non-sampled values, given the sample data and auxiliary data. To this end, for the dth area and the gth group of units, we consider the means of the sample observations of the data matrices Y_1, Y_2 and Y_3 and predict the means of the non-sampled values. We use the superscripts s and r to indicate the partitions corresponding to observed sample data and non-observed data in the target population, Y_id^(s) and Y_id^(r), respectively. Similarly, we denote by X_id^(r): r × (N_d − n_id), C_id^(r): mk × (N_d − n_id) and z_id^(r): (N_d − n_id) × 1 the corresponding matrix of covariates, design matrix and design vector for the non-sampled units in the population, respectively. Then, the prediction of small-area means at each timepoint and for the different group units is presented in the next proposition.

Proposition 4.1 Consider repeated measures data with missing values on the variable of interest for the three-step monotone sample data described by the models in (3).
(i) The target small-area means at each timepoint are elements of the vectors μ̂_d, d = 1, ..., m.
(ii) The small-area means at each timepoint for each group of units, for the complete and incomplete data sets, are given by μ̂_dg, d = 1, ..., m; g = 1, ..., k.
Note that the predicted vector û_id is the dth column of the predicted matrix Û_i, i = 1, 2, 3, and b̂_g is the column of the estimated parameter matrix B̂ corresponding to group g.
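The survey-style combination behind Proposition 4.1 is that a small-area mean pools the observed sample values with model predictions for the N_d − n_id non-sampled units. A minimal sketch with hypothetical numbers (the flat predicted value stands in for the model-based prediction a'b̂_g + γ̂'x + û):

```python
import numpy as np

rng = np.random.default_rng(5)
N_d, n_d = 100, 12                              # area size and sample size (hypothetical)

y_s = rng.normal(5.0, 1.0, size=n_d)            # observed sample values at one timepoint
y_r_pred = np.full(N_d - n_d, 5.1)              # model predictions for non-sampled units

# Small-area mean: average of observed and predicted values over the whole area
mu_hat = (y_s.sum() + y_r_pred.sum()) / N_d
print(round(mu_hat, 3))
```

When n_d is small relative to N_d, the estimate is dominated by the model predictions, which is exactly why borrowing strength via covariates and random effects matters in SAE.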
A direct application of Proposition 4.1 is to find the target small-area means for each group across all timepoints, obtained as a linear combination of the μ̂_dg, depending on the type of the characteristic of interest.

Simulation study
In this section we give a small simulation study to show the performance of the estimation of the covariance matrix Σ_u. Assume we have m = 10, 25, 50, 100 and 200 small areas and k = 2 groups. Furthermore, let p = 6 with p_1 = 3, p_2 = 2, p_3 = 1 timepoints and q = 2. For simplicity, assume that we have equal numbers of observations n_idg, i = 1, 2, 3, for each small area d = 1, ..., m and group g = 1, 2, with n_1dg > n_2dg > n_3dg, i.e., we have equal numbers of drop-outs in each small area and each group. For example, if n_1dg = 5, n_2dg = 4 and n_3dg = 3, we have one drop-out in each small area and each group for every time period i = 1, 2, 3; see Fig. 1 for the incomplete monotone missing data pattern. In addition, in the simulations, let σ_i² = 0.01, i = 1, 2, 3,

B = (1 2; 3 4),  γ = (1, 2, 3)',  and  Σ_u = I_p.

As a result, we compare the Frobenius norm of the difference between the estimated covariance matrix Σ̂_u and the true value Σ_u = I_p, that is, ‖Σ̂_u − Σ_u‖_F, for different numbers of small areas m and different sample sizes n_idg. In Table 1 we see that, in general, we get a better estimate of the covariance matrix Σ_u for a larger number of small areas and larger sample sizes.
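The evaluation criterion ‖Σ̂_u − Σ_u‖_F can be sketched as follows. Here a plain sample covariance of Gaussian draws is a stand-in for the paper's likelihood-based estimator; the point is only how the Frobenius error behaves as the amount of data grows.

```python
import numpy as np

def frobenius_error(sigma_hat, sigma_true):
    # ||Sigma_hat - Sigma_true||_F: square root of the sum of squared
    # entrywise differences
    return np.linalg.norm(sigma_hat - sigma_true, ord="fro")

p = 6
Sigma_u = np.eye(p)
rng = np.random.default_rng(6)

# Mimic Table 1 qualitatively: the error shrinks as the sample grows
for n_rep in (50, 500, 5000):
    draws = rng.multivariate_normal(np.zeros(p), Sigma_u, size=n_rep)
    Sigma_hat = np.cov(draws.T)      # stand-in for the proposed estimator
    print(n_rep, round(frobenius_error(Sigma_hat, Sigma_u), 3))
```

The same monotone-decreasing error pattern is what Table 1 reports for increasing m and n_idg.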
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.