Small area estimation with mixed models: a review

Small area estimation is recognized as an important tool for producing reliable estimates under limited sample information. This paper reviews techniques of small area estimation based on mixed models, covering basic models as well as recently proposed advanced methods. We first introduce basic mixed models for small area estimation, and present several methods for computing mean squared errors and confidence intervals, which are important for measuring the uncertainty of small area estimators. We then review recent developments and techniques in small area estimation. This paper should be useful not only for researchers interested in the details of methodological research in small area estimation, but also for practitioners interested in applying both the basic and the new methods.


Introduction
The terms 'small area' and 'small domain' refer to a small geographical region such as a county, municipality or state, or to a small demographic group such as a specific age-sex-race group. In estimating a characteristic of such a small group, the direct estimate based only on the data from that group is likely to be unreliable, because only a small number of observations is available. The problem of small area estimation is how to produce a reliable estimate of the characteristic of the small group, and small area estimation has been actively and extensively studied from both theoretical and practical aspects, owing to an increasing demand for reliable small area estimates from the public and private sectors. The articles by Ghosh and Rao (1994) and Pfeffermann (2013) give good reviews and motivations, and the comprehensive book by Rao and Molina (2015) covers all the main developments in small area estimation. See also Demidenko (2004) for general mixed models and Pratesi (2016) for the analysis of poverty data by small area estimation. In this paper, we describe the details of classical methods and review recent developments, which will be helpful for readers interested in this topic.
To improve the accuracy of direct survey estimates, we make use of relevant supplementary information, such as data from other related areas and covariate data from other sources. Linear mixed models (LMM) enable us to 'borrow strength' from the relevant supplementary data, and the resulting model-based estimators, or best linear unbiased predictors (BLUP), provide reliable estimates of the small area characteristics. The BLUP shrinks the direct estimates in small areas toward a stable quantity constructed by pooling all the data, so that the BLUP is characterized by the effects of pooling and shrinkage. These two features come from the structure of linear mixed models, described as (observation) = (common parameters) + (random effects) + (error terms): the shrinkage effect arises from the random effects, and the pooling effect is due to the common parameters. While the BLUP was originally proposed by Henderson (1950), the empirical version of the BLUP (EBLUP) is related to the classical shrinkage estimator studied by Stein (1956), who established analytically that a shrinkage estimator of this type improves on the sample means when the number of small areas is greater than or equal to three. This fact shows not only that the EBLUP is more precise than the sample mean, but also that a similar concept emerged around the same time, from Henderson (1950) for practical use and from Stein (1956) for theoretical interest. Building on this historical background, a large number of methods have been proposed. In Sect. 2, we describe the details of classical approaches to small area estimation through the two basic linear mixed models: the Fay-Herriot (FH) model suggested by Fay and Herriot (1979) and the nested error regression (NER) model, which are used for the analysis of area-level and unit-level data, respectively. These models are classical, but helpful for understanding how to construct reliable estimates.
The Fay-Herriot model has been extended to the unmatched sampling and linking models for analyzing positive data, and to hierarchical models based on natural exponential families for analyzing area-level discrete data such as mortality counts. These extended models are also explained in Sect. 2.
In Sect. 3, we discuss the uncertainty quantification of small area estimators. Since the small area estimator is constructed through model-based approaches, it is quite important in practice to assess the variability or risk for the resulting estimates. We mainly focus on mean squared errors and prediction intervals, and describe several methods based on asymptotic calculations and simulation-based methods such as jackknife and bootstrap.
The classical and basic results in small area estimation are explained in Sects. 2 and 3. Owing to a growing demand for reliable small area statistics, however, small area estimation has been extensively studied in various further directions, from both theoretical and practical aspects. In Sect. 4, we give a review of recent developments. The topics treated there are adjusted likelihood methods for variance components estimation, estimation of general parameters such as poverty measures in a finite population, transformation of response values via parametric transformations, flexible modeling of heteroscedastic variances, modeling of random effects based on spike-and-slab and global-local shrinkage priors, skewed distributions for error terms and random effects, nonparametric and semiparametric modeling, robust methods, measurement error models, observed best prediction, and variable selection techniques. These explanations will be helpful for readers interested in small area estimation. This section shows that practical situations in real data analysis call for new methodologies, techniques and theories, and that small area estimation continues to develop from both practical and theoretical points of view. The paper concludes with some remarks in Sect. 5. Since the reference list of this paper mainly contains recent articles, see the references in Rao and Molina (2015) for earlier work.

Basic mixed models for small area estimation
In this section, we introduce some basic mixed models used in small area estimation. We start with the two most standard models, known as the Fay-Herriot model and the nested error regression model. We also describe some models for non-normal data and sub-area models.

Fay-Herriot model
Most public data are reported as aggregated statistics, such as sample means from counties and cities. The Fay-Herriot (FH) model introduced by Fay and Herriot (1979) is a mixed model for estimating the true areal means θ_1, …, θ_m based on area-level summary statistics y_1, …, y_m, where y_i is called a direct estimator of θ_i for i = 1, …, m. Note that y_i is a crude estimator of θ_i with large variance, because the sample size used to compute y_i may be small in practice. Let x_i be a vector of known area characteristics including an intercept term. The FH model is given by

y_i = x_i^t β + v_i + ε_i,  i = 1, …, m,  (2.1)

where β is a vector of regression coefficients, ε_i and v_i are sampling errors and random effects, respectively, and they are independently distributed as ε_i ∼ N(0, D_i) and v_i ∼ N(0, A). The target of estimation is θ_i = x_i^t β + v_i. Here D_i is the variance of y_i given θ_i, which is assumed to be known, and A is an unknown variance parameter. The assumption that D_i is known seems restrictive, because in practice it has to be estimated from data a priori. This issue will be addressed in Sect. 4.4.
The best predictor of θ_i under squared error loss is the conditional expectation given by

θ̃_i = γ_i y_i + (1 − γ_i) x_i^t β,  (2.2)

where γ_i = A/(A + D_i) is known as a shrinkage coefficient. Under model (2.1), the marginal distribution of y_i is N(x_i^t β, A + D_i), and for given A, β can be estimated by the generalized least squares (GLS) estimator

β̂_GLS = {∑_{i=1}^m x_i x_i^t/(A + D_i)}^{-1} ∑_{i=1}^m x_i y_i/(A + D_i).  (2.3)

By replacing β with β̂_GLS in (2.2), we obtain the following best linear unbiased predictor (BLUP):

θ̃_i = γ_i y_i + (1 − γ_i) x_i^t β̂_GLS.  (2.4)

Since β̂_GLS is constructed from all the data, the regression estimator x_i^t β̂_GLS is much more stable than the direct estimator y_i. The BLUP (2.4) can therefore be interpreted as a shrinkage estimator that shrinks the unstable direct estimator y_i toward the stable estimator x_i^t β̂_GLS, depending on the shrinkage coefficient γ_i. Note that if D_i is large compared with A, which means that y_i has large fluctuation, then γ_i is small, so that y_i is shrunk more strongly toward x_i^t β̂_GLS, and vice versa. In practice, the random effects variance A is unknown and should be replaced in γ_i and β̂_GLS by a sample estimate, which yields the empirical BLUP (EBLUP) in the frequentist framework, or the empirical Bayes estimator in the Bayesian framework. The standard way to estimate A is the maximum likelihood estimator based on the marginal distribution of y_i. Other options are the restricted maximum likelihood estimator and moment-type estimators as considered in Fay and Herriot (1979) and Prasad and Rao (1990). We will revisit this issue in Sect. 4.1. Alternatively, we may employ the hierarchical Bayes (HB) approach by assigning prior distributions to the unknown parameters β and A, and computing the posterior distribution of θ_i, which produces a point estimator as well as credible intervals. Owing to recent advances in computational techniques, HB approaches have now become standard in this context.
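As a concrete illustration, the EBLUP computation described above can be sketched in a few lines. The data below are simulated, A is estimated by a simple grid search over the profile likelihood (any of the ML, REML or moment estimators mentioned above could be substituted), and all variable names are ours, not from the literature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated area-level data from the FH model (2.1); all numbers are synthetic.
m = 30
x = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])
beta_true, A_true = np.array([1.0, 2.0]), 0.5
D = rng.uniform(0.3, 1.0, m)                 # known sampling variances D_i
theta = x @ beta_true + rng.normal(0, np.sqrt(A_true), m)
y = theta + rng.normal(0, np.sqrt(D), m)

def gls_beta(y, x, D, A):
    """GLS estimator (2.3) of beta for a given A."""
    w = 1.0 / (A + D)
    return np.linalg.solve(x.T @ (x * w[:, None]), x.T @ (w * y))

def neg_profile_loglik(A, y, x, D):
    """Negative profile log-likelihood of A from y_i ~ N(x_i' beta, A + D_i)."""
    r = y - x @ gls_beta(y, x, D, A)
    return 0.5 * np.sum(np.log(A + D) + r ** 2 / (A + D))

# ML estimate of A via a simple grid search (an optimizer would be used in practice)
grid = np.linspace(1e-6, 5.0, 2000)
A_hat = grid[np.argmin([neg_profile_loglik(a, y, x, D) for a in grid])]

b_hat = gls_beta(y, x, D, A_hat)
gamma = A_hat / (A_hat + D)                  # estimated shrinkage coefficients
eblup = gamma * y + (1 - gamma) * (x @ b_hat)   # plug-in version of the BLUP (2.4)
```

By construction each EBLUP lies between the direct estimator y_i and the regression estimator x_i^t β̂_GLS, with the weight determined by the estimated γ_i.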

Nested error regression model
When unit-level data are available, we can use a model for more in-depth analysis. Let y_i1, …, y_in_i be a unit-level sample from the ith area for i = 1, …, m, and let x_i1, …, x_in_i be fixed vectors of covariates with or without an intercept term. The nested error regression (NER) model (Battese et al. 1988) is described as

y_ij = x_ij^t β + v_i + ε_ij,  i = 1, …, m,  j = 1, …, n_i,  (2.5)

where v_i and ε_ij are random effects and error terms, respectively, and are mutually independently distributed as v_i ∼ N(0, τ²) and ε_ij ∼ N(0, σ²); β is a vector of unknown regression coefficients, and τ² and σ² are unknown variance parameters. Note that v_i is a random effect depending on the ith area and common to all the observations y_ij in the same area. This induces correlations among the y_ij, that is, Cov(y_ij, y_ij′) = τ² for j ≠ j′, while observations in different areas remain independent. Hence, the variances σ² and τ² are called the 'within' and 'between' components of variance, respectively.

The NER model is typically used in the framework of a finite population. We assume that the ith area includes N_i units in total, but only n_i units are sampled. For simplicity, we assume that the sampling mechanism is simple random sampling, so that we do not consider survey weights. For all the units, we assume the following super-population model:

Y_ij = x_ij^t β + v_i + ε_ij,  j = 1, …, N_i,

where Y_ij is the characteristic of the jth unit in the ith area. Without loss of generality, we assume that the first n_i characteristics Y_i1, …, Y_in_i are observed as y_i1, …, y_in_i, and the remaining characteristics Y_i,n_i+1, …, Y_iN_i are unobserved. Under this setting, the true area mean is defined as

Ȳ_i = N_i^{-1} ∑_{j=1}^{N_i} Y_ij = X̄_i^t β + v_i + N_i^{-1} ∑_{j=1}^{N_i} ε_ij,

where X̄_i = N_i^{-1} ∑_{j=1}^{N_i} x_ij. In practice, the total number of units N_i is very large although the number of sampled units n_i is not, so the final term is negligible, and we can define the mean parameter θ_i as θ_i = X̄_i^t β + v_i. Hence, we can estimate θ_i as long as we know the true mean vector X̄_i of the auxiliary information, which is often the case in practice.
Under the model (2.5), the best predictor of v_i is given by

ṽ_i = {n_i τ²/(σ² + n_i τ²)} (ȳ_i − x̄_i^t β),  (2.6)

where ȳ_i and x̄_i are the sample means of y_ij and x_ij in the ith area. Similar to the FH model, β can be estimated by the GLS estimator β̂_GLS based on all the sampled data, and the variance parameters τ² and σ² can be estimated via the marginal distribution of (y_i1, …, y_in_i), which is Gaussian under (2.5). The EBLUP v̂_i of v_i is then obtained, so that the EBLUP of θ_i is given by θ̂_i = X̄_i^t β̂_GLS + v̂_i. More general methods for predicting population means are discussed in Jiang and Lahiri (2006).
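The predictor (2.6) and the resulting EBLUP of the areal mean can be sketched as follows; to keep the focus on the prediction step, the variance parameters and β are treated as already estimated, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic unit-level data from the NER model (2.5); tau2_hat and sig2_hat
# play the role of variance estimates obtained in a preceding fitting step.
m, beta_hat = 5, np.array([1.0, 0.5])
n = np.array([4, 2, 6, 3, 5])                    # sampled units per area
tau2_hat, sig2_hat = 0.8, 1.5
Xbar_pop = np.column_stack([np.ones(m), rng.uniform(0, 2, m)])  # known pop. means of x

data = []
for i in range(m):
    xi = np.column_stack([np.ones(n[i]), rng.uniform(0, 2, n[i])])
    vi = rng.normal(0, np.sqrt(tau2_hat))
    yi = xi @ beta_hat + vi + rng.normal(0, np.sqrt(sig2_hat), n[i])
    data.append((xi, yi))

def eblup_area_means(data, Xbar_pop, beta, tau2, sig2):
    """Predictor (2.6) of v_i, then the EBLUP of theta_i = Xbar_i' beta + v_i."""
    theta_hat = np.empty(len(data))
    for i, (xi, yi) in enumerate(data):
        ni = len(yi)
        shrink = ni * tau2 / (sig2 + ni * tau2)   # weight on the area-level residual
        v_tilde = shrink * (yi.mean() - xi.mean(axis=0) @ beta)
        theta_hat[i] = Xbar_pop[i] @ beta + v_tilde
    return theta_hat

theta_hat = eblup_area_means(data, Xbar_pop, beta_hat, tau2_hat, sig2_hat)
```

Note how the shrinkage weight n_i τ²/(σ² + n_i τ²) grows with n_i: areas with more sampled units rely more heavily on their own data.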
As a famous application of the NER model, Battese et al. (1988) considered predicting areas under corn and soybeans for each of m = 12 counties in north-central Iowa. In their analysis, each county is divided into segments of about 250 hectares, and n_i segments are selected from the ith county. For the jth segment of the ith county, y_ij is the number of hectares of corn (or soybeans) in the (i, j)th segment reported by interviewing farm operators, and x_ij1 and x_ij2 are the numbers of pixels (of 0.45 hectare) classified as corn and soybeans, respectively, using LANDSAT satellite data. Since the n_i range from 1 to 5 with ∑_{i=1}^m n_i = 37, the sample mean ȳ_i fluctuates too much to reliably predict the mean crop hectares per segment θ_i. The NER model enables us to construct more reliable prediction procedures, not only by using the auxiliary LANDSAT information, but also by combining the data of the related areas.

Unmatched sampling and linking models
Response variables y_i for which the normality assumption is not suitable (e.g. income) are ubiquitous in real applications. In this case, we may use the following generalized version of the FH model (2.1):

y_i = θ_i + ε_i,  h(θ_i) = x_i^t β + v_i,  i = 1, …, m,  (2.7)

where h(⋅) is a known link function and the other variables are defined in the same way as in (2.1). A typical example for positive-valued y_i is h(x) = log x. Under the model (2.7), the marginal density of y_i is expressed as

f(y_i) = ∫ φ(y_i; θ_i, D_i) φ(h(θ_i); x_i^t β, A) h′(θ_i) dθ_i,

where h′ is the first-order derivative of h, and φ(⋅; a, b) denotes the density function of the normal distribution with mean a and variance b. The above density does not admit a closed-form expression for a general function h(⋅). The model (2.7) was originally proposed by You and Rao (2002), who considered a hierarchical Bayesian approach via a Markov chain Monte Carlo algorithm. Recently, Sugasawa et al. (2018) developed an empirical Bayes approach that uses a Monte Carlo method for computing the best predictor of θ_i and a Monte Carlo expectation-maximization algorithm for computing the maximum likelihood estimator.

Generalized linear mixed models
When the response variables are discrete, such as binary, count or multi-category data, discrete distributional models are required. Ghosh et al. (1998) discussed the use of generalized linear mixed models for small area estimation in this situation, and provided hierarchical Bayesian methods. On the other hand, Jiang and Lahiri (2001) discussed a frequentist approach to small area estimation with binary data.
The unit-level model for binary data is given by

y_ij | p_ij ∼ Ber(p_ij),  logit(p_ij) = x_ij^t β + u_i,  j = 1, …, n_i,  i = 1, …, m,

where u_i ∼ N(0, τ²) is a random effect. A typical purpose is to predict the true area proportion given by p̄_i = N_i^{-1} ∑_{j=1}^{N_i} Y_ij. The best predictor of the random effect u_i under squared loss is the conditional expectation of u_i given y_i = (y_i1, …, y_in_i)^t, which is expressed as

ũ_i = E_z[τ z η_i(τ z; β)] / E_z[η_i(τ z; β)],

where log η_i(z; β) = z ∑_{j=1}^{n_i} y_ij − ∑_{j=1}^{n_i} log{1 + exp(x_ij^t β + z)}, z ∼ N(0, 1), and E_z denotes the expectation with respect to z. The above expectations do not admit analytical expressions, so that numerical integration is required to compute the predictor. In general, under generalized linear mixed models, neither the best predictor of the random effect nor the marginal distribution of y_i has an analytical expression, which can be a drawback of generalized linear mixed models.

Models based on natural exponential families
One of the main drawbacks of generalized linear mixed models is the intractability of the conditional distributions of random effects as well as of the marginal likelihood. As alternative models for area-level discrete data, Ghosh and Maiti (2004) introduced models based on the natural exponential families. Let y_1, …, y_m be mutually independent random variables where the conditional distribution of y_i given θ_i and the marginal distribution of θ_i belong to the following natural exponential families:

f(y_i | θ_i) = c(y_i, n_i) exp{n_i(θ_i y_i − ψ(θ_i))},  π(θ_i) = C(m_i, ν) exp{ν(m_i θ_i − ψ(θ_i))},  (2.8)

where n_i is a known scalar and ν is an unknown scalar. Here c(⋅, ⋅) and C(⋅, ⋅) are normalizing constants and ψ(⋅) is the cumulant function. Moreover, m_i = ψ′(x_i^t β), where x_i and β are vectors of covariates and unknown regression coefficients, respectively, and ψ′(⋅) is the first-order derivative of ψ(⋅). Usually, n_i is related to the sample size in the ith area, so that n_i is not large in practice. The density f(y_i | θ_i) is a regular one-parameter exponential family and π(θ_i) is the conjugate prior distribution. Note that μ_i ≡ E[y_i | θ_i] = ψ′(θ_i), which is the true area mean, and E[μ_i] = m_i. Therefore, y_i is regarded as a crude estimator of μ_i and m_i is the prior mean of μ_i.
Owing to the conjugacy, the marginal likelihood can be obtained in closed form, which enables us to compute maximum likelihood estimators or moment-type estimators of the unknown parameters, as considered in Ghosh and Maiti (2004). Moreover, the conditional distribution of μ_i given y_i can also be obtained in analytical form, and the conditional expectation of μ_i with estimated parameters is given by

μ̂_i = (n_i y_i + ν̂ m̂_i)/(n_i + ν̂).

It is observed that μ̂_i is a weighted average of the direct estimator y_i and the estimated prior mean (regression estimator) m̂_i, which is stable since β̂ is estimated based on all the data. The model (2.8) includes some typical two-stage models for area-level data. For example, when ν = A^{-1}, n_i = D_i^{-1} and ψ(x) = x²/2, the model (2.8) reduces to the FH model (2.1). Also, when ψ(x) = exp(x) and ψ(x) = log(1 + exp(x)), the model (2.8) reduces, respectively, to the Poisson-gamma model (e.g. Clayton and Kaldor 1987) and the binomial-beta model (e.g. Williams 1975), which have been widely adopted in several fields such as disease mapping.
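For the conjugate models above, the weighted-average form of the posterior mean can be checked numerically; the values of n_i, y_i, m_i and ν below are illustrative only, not estimates from real data:

```python
import numpy as np

# Conjugate (e.g. Poisson-gamma) shrinkage: the posterior mean of the area
# mean mu_i is the weighted average (n_i * y_i + nu * m_i) / (n_i + nu).
# All inputs below are illustrative.
n = np.array([2.0, 10.0, 50.0])     # sample-size factors n_i
y = np.array([0.5, 0.8, 1.2])       # direct estimators y_i of mu_i
m_prior = np.array([1.0, 1.0, 1.0]) # prior means m_i (regression estimates)
nu = 5.0                            # prior precision parameter

mu_hat = (n * y + nu * m_prior) / (n + nu)

# Areas with small n_i are shrunk more strongly toward the prior mean m_i.
weight_on_direct = n / (n + nu)
```

The first area (n_i = 2) keeps only 2/7 of the weight on its direct estimator, while the third (n_i = 50) keeps 50/55, mirroring the behavior of the shrinkage coefficient γ_i in the FH model.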

Some remarks
There are several papers concerned with practical applications of the standard models described above. For example, Schmid et al. (2017) employed the FH model to estimate sociodemographic indicators, and Mauro et al. (2017) considered small area estimation for forest inventory. We also note that software for applying the standard mixed models for small area estimation has recently become available, e.g. the package 'sae' in the R language (Molina and Marhuenda 2015).

Measuring uncertainty of small area estimators
An important aspect of small area estimation is the assessment of the accuracy of the predictors. Under the frequentist approach, this is complicated by the additional fluctuation induced by estimating the unknown model parameters. We here focus on two approaches that are widely adopted in this context: estimators of the mean squared error (MSE) and confidence intervals.

Estimation of MSE
We first consider a general situation. Let θ_i (i = 1, …, m) be the true parameter, let θ̃_i = θ̃_i(φ) be the conditional expectation (or best predictor) of θ_i given y_i, which depends on the unknown model parameter φ, and let θ̂_i = θ̃_i(φ̂) be the empirical best predictor of θ_i. The MSE of θ̂_i is defined by MSE(θ̂_i) = E[(θ̂_i − θ_i)²]. Using the fact that θ̃_i is the conditional expectation of θ_i given the y_i, we have

MSE(θ̂_i) = E[(θ̃_i − θ_i)²] + E[(θ̂_i − θ̃_i)²] ≡ g_i1(φ) + g_i2(φ).  (3.1)

Note that g_i1(φ) reflects the variability of the best predictor for given φ, and is typically of order O(1); an analytical expression for g_i1(φ) is available in many cases. On the other hand, g_i2(φ) measures the additional variability that comes from the estimation of φ, so that g_i2(φ) = O(m^{-1}) in most cases, but it rarely admits a closed-form expression. Hence, we derive an approximation formula for g_i2(φ) up to second order.
Here we demonstrate the approximation under the FH model, as considered in Datta et al. (2005). Note that φ = (β^t, A)^t in this case. From (2.2), it follows that g_i1(φ) = AD_i/(A + D_i). Regarding g_i2(φ), it follows from Prasad and Rao (1990) and Datta and Lahiri (2000) that, up to O(m^{-1}),

g_i2(φ) ≈ (1 − γ_i)² x_i^t {∑_{j=1}^m x_j x_j^t/(A + D_j)}^{-1} x_i + D_i²(A + D_i)^{-3} Var(Â),

where Var(Â) is the asymptotic variance of Â. For more general linear mixed models, Datta and Lahiri (2000) provided the second-order approximation of MSE under linear mixed models that include the FH and NER models as special cases, and Das et al. (2004) extended the results of Datta and Lahiri (2000) by relaxing the independence assumption for random effects and error terms. Moreover, Lahiri and Rao (1995) derived an approximation of MSE under the FH model without assuming normality of the random effects. For non-normal models, Ghosh and Maiti (2004) derived an approximation formula of MSE under the natural exponential family model (2.8).
We now consider the estimation of MSE. In the context of small area estimation, second-order unbiased estimators of MSE, having bias of order o(m^{-1}), are widely adopted. To this end, there are mainly three methods: analytical methods (e.g. Prasad and Rao 1990; Das et al. 2004; Datta et al. 2005; Lahiri and Rao 1995), bootstrap methods (e.g. Butar and Lahiri 2003; Hall and Maiti 2006a, b), and jackknife methods (e.g. Jiang et al. 2002), which are described as follows.
(1) Analytical method We assume that the plug-in term admits the bias expansion E[g_i1(φ̂)] = g_i1(φ) + B_i(φ) + o(m^{-1}). Then we can define a bias-corrected estimator of g_i1, namely g_i1(φ̂) − B_i(φ̂), which is second-order unbiased under some regularity conditions. Therefore, an analytical estimator of MSE is given by

M̂SE_i = g_i1(φ̂) − B_i(φ̂) + g_i2(φ̂).  (3.2)

To derive the MSE estimator, we need the asymptotic bias of g_i1(φ̂), which typically requires tedious algebraic calculation. It is noted that the MSE estimator (3.2) does not necessarily produce positive values, because of the bias correction, although the true MSE is positive.
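A minimal sketch of the analytical route for the FH model, written in the familiar Prasad-Rao form g_i1 + g_i2 + 2g_i3: the variance formula for Â assumed below corresponds to the Prasad-Rao moment estimator, and other estimators of A would change the g_i3 (and bias) term:

```python
import numpy as np

def prasad_rao_mse(x, D, A_hat):
    """Second-order MSE estimator for the FH model in the Prasad-Rao form
    mse_i = g1_i + g2_i + 2*g3_i.  The Var(A_hat) formula is the one for
    the Prasad-Rao moment estimator of A (an assumption of this sketch)."""
    m = len(D)
    V = A_hat + D                          # marginal variances A + D_i
    g1 = A_hat * D / V                     # leading O(1) term
    XtVX_inv = np.linalg.inv(x.T @ (x / V[:, None]))
    g2 = (D / V) ** 2 * np.einsum('ij,jk,ik->i', x, XtVX_inv, x)
    var_A = 2.0 * np.sum(V ** 2) / m ** 2  # asymptotic Var(A_hat), PR estimator
    g3 = D ** 2 / V ** 3 * var_A
    return g1 + g2 + 2.0 * g3

# Illustrative synthetic inputs; A_hat would come from the estimation step.
rng = np.random.default_rng(2)
m = 20
x = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])
D = rng.uniform(0.3, 1.0, m)
mse = prasad_rao_mse(x, D, A_hat=0.6)
```

Unlike the generic bias-corrected estimator (3.2), this particular combination stays positive, since each g term is positive.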
(2) Bootstrap method There are several types of bootstrap methods for MSE. The most typical approach is the hybrid bootstrap method (Butar and Lahiri 2003), which estimates g_i1 and g_i2 separately via the parametric bootstrap. Here we explicitly write θ̃_i(y_i, φ) instead of θ̃_i(φ) in order to emphasize the dependence of the best predictor θ̃_i on the observation y_i. Given the estimate φ̂ = φ̂(y_1, …, y_m) of φ, the parametric bootstrap first generates the bth bootstrap sample y*_1(b), …, y*_m(b) from the assumed model with φ = φ̂, and then computes the bootstrap estimator φ̂*_b = φ̂(y*_1(b), …, y*_m(b)) of φ. The hybrid bootstrap estimator is then given by

M̂SE_i = 2g_i1(φ̂) − B^{-1} ∑_{b=1}^B g_i1(φ̂*_b) + B^{-1} ∑_{b=1}^B {θ̃_i(y*_i(b), φ̂*_b) − θ̃_i(y*_i(b), φ̂)}²,

where B is the number of bootstrap replications. Another bootstrap method is the double bootstrap (e.g. Hall and Maiti 2006b), in which the naive bootstrap estimator is defined as

M̂(y*) = B^{-1} ∑_{b=1}^B {θ̃_i(y*_i(b), φ̂*_b) − θ*_i(b)}²,

where θ*_i(b) is a value of θ_i generated from the estimated model and y* is the collection of the bootstrap samples. The double bootstrap then computes C second-level bootstrap MSE estimates {M̂(y*_1), …, M̂(y*_C)} and carries out a bias correction. Compared with the hybrid bootstrap estimator, the double bootstrap requires additional bootstrap replications, which can be computationally intensive in practice.
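The hybrid bootstrap estimator can be sketched as follows; the moment-type fitting step for (β, A) is deliberately crude (a simplified residual-based estimate of A, our own choice for this sketch), since the point of the snippet is the bootstrap bias correction and the spread term, not the fitting:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic FH data.
m = 25
x = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])
D = rng.uniform(0.4, 0.9, m)
beta0, A0 = np.array([1.0, 1.0]), 0.7
y = x @ beta0 + rng.normal(0, np.sqrt(A0), m) + rng.normal(0, np.sqrt(D), m)

def fit_fh(y):
    """Crude moment estimate of A, then GLS for beta (illustrative only)."""
    ols = np.linalg.lstsq(x, y, rcond=None)[0]
    r = y - x @ ols
    A = max((np.sum(r ** 2) - np.sum(D)) / (m - x.shape[1]), 1e-6)
    w = 1.0 / (A + D)
    beta = np.linalg.solve(x.T @ (x * w[:, None]), x.T @ (w * y))
    return beta, A

def predictor(y, beta, A):
    g = A / (A + D)
    return g * y + (1 - g) * (x @ beta)

beta_hat, A_hat = fit_fh(y)
g1_hat = A_hat * D / (A_hat + D)

B = 200
g1_boot = np.zeros(m)
spread = np.zeros(m)
for _ in range(B):
    # bootstrap sample from the fitted model
    y_star = x @ beta_hat + rng.normal(0, np.sqrt(A_hat), m) \
             + rng.normal(0, np.sqrt(D), m)
    b_s, A_s = fit_fh(y_star)
    g1_boot += A_s * D / (A_s + D) / B                       # bootstrap mean of g1
    spread += (predictor(y_star, b_s, A_s)
               - predictor(y_star, beta_hat, A_hat)) ** 2 / B  # g2-type spread term

mse_hybrid = 2 * g1_hat - g1_boot + spread   # hybrid bootstrap MSE estimator
```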
(3) Jackknife method Jiang et al. (2002) suggested the use of the jackknife method for estimating g_i1 and g_i2 separately. Let φ̂_{−ℓ} denote the estimator of φ based on all the data except for the ℓth area. Then the jackknife estimator of MSE is given by

M̂SE_i = g_i1(φ̂) − {(m − 1)/m} ∑_{ℓ=1}^m {g_i1(φ̂_{−ℓ}) − g_i1(φ̂)} + {(m − 1)/m} ∑_{ℓ=1}^m {θ̃_i(φ̂_{−ℓ}) − θ̃_i(φ̂)}².

Under some regularity conditions, this estimator is second-order unbiased.
Finally, we note that there is another type of MSE, called the conditional MSE, defined as E[(θ̂_i − θ_i)² | y_i], which measures the estimation variability for given y_i. Detailed investigations and comparisons with the standard (unconditional) MSE have been carried out in the literature (e.g. Datta et al. 2011; Torabi and Rao 2013). As noted in Booth and Hobert (1998) and Sugasawa and Kubokawa (2016), the difference between the conditional and unconditional MSEs under models based on normal distributions is negligible when the number of areas is large, whereas the difference can be significant under non-normal distributions. Unified jackknife methods for the conditional MSE were developed in Lohr and Rao (2009).

Confidence intervals
Another approach to measuring the uncertainty of the EBLUP is a confidence interval based on the EBLUP, and confidence intervals that attain the nominal confidence level with second-order accuracy are desirable. There are mainly two methods for constructing such intervals: analytical methods based on Taylor series expansions and parametric bootstrap methods. For simplicity, we suppose that θ_i | y_i ∼ N(θ̃_i(y_i, φ), s_i(φ)²), where θ̃_i and s_i² are the conditional expectation and variance of θ_i. This holds under the FH model (2.1) and a general class of linear mixed models. Then, for known φ, the confidence interval with coverage probability 1 − α is I_α(φ): θ̃_i(y_i, φ) ± z_{α/2} s_i(φ), with z_α being the upper 100α% quantile of the standard normal distribution, which satisfies P(θ_i ∈ I_α(φ)) = 1 − α. Since φ is unknown, the naive confidence interval is I_α(φ̂), whose coverage probability in general satisfies only P(θ_i ∈ I_α(φ̂)) = 1 − α + O(m^{-1}). Thus, a confidence interval with the second-order correction P(θ_i ∈ I_α(φ̂)) = 1 − α + O(m^{-3/2}) is desirable.
(1) Analytical method We begin by expanding the coverage probability of the naive interval. For example, under the FH model (2.1), we have

P(θ_i ∈ I_α(φ̂)) = 1 − α + c_i(z_{α/2}, A) + O(m^{-3/2}),

for a certain function c_i satisfying c_i(z, A) = O(m^{-1}) (e.g. Yoshimori and Lahiri 2014b). Then the confidence interval obtained by replacing z_{α/2} with z_{α/2}{1 + c_i(z_{α/2}, Â)} has the corrected coverage probability 1 − α + O(m^{-3/2}). Similar results are obtained in Datta et al. (2002) and Basu et al. (2003).
As another approach, we may construct confidence intervals based on the MSE estimators given in the previous section, which take the form

θ̂_i ± z_{α/2} h_i(φ̂) (M̂SE_i)^{1/2},

where M̂SE_i is a second-order unbiased MSE estimator and h_i(⋅) is an adjustment function chosen so that the coverage probability of the above interval is second-order accurate. Diao et al. (2014) adopted this idea to derive confidence intervals under the FH model, and Kubokawa (2009) derived confidence intervals of this type under the NER model.
(2) Bootstrap method We first describe a method using a pivotal statistic, given in Chatterjee et al. (2008). Define U_i(φ) = (θ_i − θ̃_i)/s_i(φ); then U_i(φ) ∼ N(0, 1) when φ is the true parameter. We approximate the distribution of U_i(φ̂) via the parametric bootstrap: we generate bootstrap samples θ*_i(b) and y*_i(b) from the estimated model and compute the bootstrap estimator φ̂*_(b) for b = 1, …, B. The distribution of U_i(φ̂) can then be approximated by the B bootstrap realizations U*_i(b) = (θ*_i(b) − θ̃_i(y*_i(b), φ̂*_(b)))/s_i(φ̂*_(b)). Letting z*_iu(α) and z*_il(α) be the upper and lower 100α/2% quantiles of the empirical distribution of the U*_i(b), the calibrated interval is given by (θ̃_i(y_i, φ̂) + z*_il(α) s_i(φ̂), θ̃_i(y_i, φ̂) + z*_iu(α) s_i(φ̂)). We next describe a general parametric bootstrap approach given in Hall and Maiti (2006b). Define I_α(φ) = (F_{α/2}(φ), F_{1−α/2}(φ)), where F_α(φ) is the α-quantile of the conditional distribution of θ_i, such as N(θ̃_i(y_i, φ), s_i(φ)²). Since the naive interval I_α(φ̂) does not satisfy P(θ_i ∈ I_α(φ̂)) = 1 − α exactly, we calibrate a suitable α via the parametric bootstrap. Denote by Î*_α(b) = I_α(φ̂*_(b)) the bootstrap interval based on the bth bootstrap sample, and let α̂ be the solution (in α′) of the equation

B^{-1} ∑_{b=1}^B I(θ*_i(b) ∈ Î*_α′(b)) = 1 − α.

Then I_α̂(φ̂) is the bootstrap-calibrated interval, which has a coverage probability with second-order accuracy.
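The pivot-based bootstrap interval of Chatterjee et al. (2008) can be sketched for a single area as follows; the fitting step is again a crude moment estimator of our own choosing, used only to keep the sketch self-contained:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic FH data; only area i = 0 is considered below.
m, i = 25, 0
x = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])
D = rng.uniform(0.4, 0.9, m)
beta0, A0 = np.array([0.5, 1.0]), 0.8
y = x @ beta0 + rng.normal(0, np.sqrt(A0), m) + rng.normal(0, np.sqrt(D), m)

def fit(y):
    """Crude moment estimate of A, then GLS for beta (illustrative only)."""
    ols = np.linalg.lstsq(x, y, rcond=None)[0]
    A = max((np.sum((y - x @ ols) ** 2) - np.sum(D)) / (m - 2), 1e-6)
    w = 1.0 / (A + D)
    return np.linalg.solve(x.T @ (x * w[:, None]), x.T @ (w * y)), A

def cond_moments(y, beta, A):
    """Conditional mean theta_tilde_i and sd s_i of theta_i given y_i."""
    g = A / (A + D[i])
    mean = g * y[i] + (1 - g) * (x[i] @ beta)
    sd = np.sqrt(A * D[i] / (A + D[i]))
    return mean, sd

beta_hat, A_hat = fit(y)
mean_hat, sd_hat = cond_moments(y, beta_hat, A_hat)

B = 500
U = np.zeros(B)
for b in range(B):
    v = rng.normal(0, np.sqrt(A_hat), m)
    theta_star = x @ beta_hat + v                       # bootstrap true parameters
    y_star = theta_star + rng.normal(0, np.sqrt(D), m)  # bootstrap data
    m_b, s_b = cond_moments(y_star, *fit(y_star))
    U[b] = (theta_star[i] - m_b) / s_b                  # bootstrap pivot realization

lo, hi = np.quantile(U, [0.025, 0.975])                 # calibrated pivot quantiles
interval = (mean_hat + lo * sd_hat, mean_hat + hi * sd_hat)
```

The calibrated quantiles (lo, hi) replace ±z_{α/2}, absorbing the extra variability from estimating (β, A).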

Recent developments in small area estimation
Small area estimation has been actively and extensively studied in various directions, from both theoretical and practical aspects. In this section, we give a review of recent developments, which will be helpful for readers interested in this topic.

Adjusted likelihood method
Recall that the random effects variance A determines the amount of shrinkage through γ_i in the BLUP (2.4). A practical problem in estimating A is that the maximum likelihood estimator may produce a zero estimate, under which the BLUP (2.4) reduces to θ̃_i = x_i^t β̂_GLS. This is not plausible, since heterogeneity among areas is then ignored. To overcome this difficulty, Li and Lahiri (2010) proposed maximizing the adjusted likelihood

L_ad(A) = h(A)L(A),  (4.1)

where L(A) is the likelihood of A and h(A) is an adjustment factor; the choice h(A) = A makes the resulting estimator of A strictly positive. Such adjusted likelihood methods have been adopted not only for avoiding zero estimates but also for MSE estimation and for constructing confidence intervals, since the adjustment changes the asymptotic properties of Â. Yoshimori and Lahiri (2014b) is concerned with constructing efficient confidence intervals with second-order accuracy via the adjusted likelihood method. Under the FH model (2.1), the authors focused on confidence intervals for θ_i of the form I_i(Â): θ̃_i(Â) ± z_{α/2} s_i(Â), where θ̃_i is given in (2.2) and z_α is the upper 100α% quantile of the standard normal distribution. Yoshimori and Lahiri (2014b) considered an area-specific adjustment function h_i(A), producing the area-specific estimator Â_i, and derived a condition on h_i(A) under which the confidence interval I_i(Â_i) has second-order accuracy. Moreover, Hirose (2017) improved the result so that the adjustment function does not depend on i. Regarding MSE estimation, Hirose (2019) employed the same idea to derive a second-order unbiased estimator of MSE which is always positive under the FH model (2.1); the author considered a certain class of adjustment factors and gave reasonable conditions to achieve the desirable properties of MSE estimation. Finally, Hirose and Lahiri (2018) proposed a new but simple area-wise adjustment factor h_i(A) = A + D_i that achieves several nice properties simultaneously.
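The effect of the adjustment factor h(A) = A can be seen numerically: on the log scale it adds log A, which diverges to −∞ as A → 0, so the adjusted maximizer is always strictly positive. The data below are simulated with a small true A so that the unadjusted maximizer can sit at (numerically) zero; the grid-search setup is a sketch of ours, not the estimation procedure of any particular paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# FH-type data with equal D_i, intercept-only regression and very small true A.
m = 10
D = np.full(m, 1.0)
y = rng.normal(0.0, np.sqrt(0.05 + D))       # y_i ~ N(0, A_true + D_i), A_true = 0.05

def log_profile_lik(A):
    """Log-likelihood of A with the common intercept profiled out by GLS."""
    w = 1.0 / (A + D)
    beta = np.sum(w * y) / np.sum(w)
    return -0.5 * np.sum(np.log(A + D) + w * (y - beta) ** 2)

grid = np.linspace(1e-8, 3.0, 3000)
ll = np.array([log_profile_lik(a) for a in grid])
A_ml = grid[np.argmax(ll)]                    # may sit at the boundary (near zero)
A_adj = grid[np.argmax(np.log(grid) + ll)]    # maximizer of log L_ad = log A + log L
```

Because log A + log L(A) is strictly increasing wherever L(A) is increasing, the adjusted estimate is never below the unadjusted one and never hits zero.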

Estimation of general parameters in a finite population
In Sect. 2.2, we demonstrated an application of the NER model to estimating areal means in the finite population framework. Molina and Rao (2010) extended the classical approach to the estimation of general parameters of the form

θ_i = N_i^{-1} ∑_{j=1}^{N_i} T(Y_ij),

where T(⋅) depends on the estimation problem of interest. For example, if we adopt T(x) = I(x < z) for a fixed threshold value z and Y_ij is a welfare measure, then θ_i can be interpreted as the poverty rate in the ith area. Molina and Rao (2010) used the FGT poverty measure (Foster et al. 1984) to estimate area-wise general poverty indicators, based on the following NER model:

H(Y_ij) = x_ij^t β + v_i + ε_ij,  (4.2)

where v_i and ε_ij are defined in the same way as in Sect. 2.2, and H(⋅) is a specified transformation function such as the logarithm. Here we again assume that the first n_i units are sampled without loss of generality, and let s_i = {1, …, n_i} and r_i = {n_i + 1, …, N_i}. Under the model (4.2), the conditional distribution of H(Y_ij) for j ∈ r_i given the sampled units is normal with mean x_ij^t β + ṽ_i, where ṽ_i is the same as (2.6) except that y_ij is replaced by H(y_ij). The best predictor of θ_i is the conditional expectation

θ̃_i = N_i^{-1} {∑_{j∈s_i} T(y_ij) + ∑_{j∈r_i} E[T(Y_ij) | y_i]}.

For general functions T(⋅) and H(⋅), the conditional expectation of T(Y_ij) cannot be obtained in analytical form, but it can be computed via Monte Carlo integration by generating random samples of Y_ij, where Y_ij can easily be simulated as H^{-1}(U_ij) with U_ij generated from N(x_ij^t β + ṽ_i, σ²). It should be noted that for a general function T(⋅), all the covariate values x_ij are required for computing the predictor of θ_i, whereas only the true mean vector of the x_ij is required for estimating the mean parameter as in Sect. 2.2. Molina and Rao (2010) considered the frequentist approach, whereas Molina et al. (2014) adopted the hierarchical Bayes approach and developed a Monte Carlo sampling algorithm for estimating θ_i as well as the model parameters.
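A sketch of the Monte Carlo computation for a poverty rate in one area, with H = log and T(x) = I(x < z); model parameters are treated as known (standing in for estimates) and the population values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

# One area: N_i population units, n_i sampled; log-transformed NER model.
Ni, ni = 500, 20
z = 1.0                                     # poverty line (threshold)
beta, tau2, sig2 = np.array([0.3, 0.4]), 0.25, 0.5
X_pop = np.column_stack([np.ones(Ni), rng.uniform(0, 1, Ni)])  # all x_ij known

# Observed (sampled) units: the first ni rows, on the log scale.
v_true = rng.normal(0, np.sqrt(tau2))
logY_obs = X_pop[:ni] @ beta + v_true + rng.normal(0, np.sqrt(sig2), ni)

# Predictor (2.6) of v_i computed from the transformed sample.
shrink = ni * tau2 / (sig2 + ni * tau2)
v_tilde = shrink * (logY_obs.mean() - X_pop[:ni].mean(axis=0) @ beta)

# Simulate the non-sampled units and average the poverty indicator.
R = 2000
sim = np.empty((R, Ni - ni))
for r in range(R):
    U = rng.normal(X_pop[ni:] @ beta + v_tilde, np.sqrt(sig2))
    sim[r] = np.exp(U)                      # Y_ij = H^{-1}(U_ij) with H = log

below_sampled = (np.exp(logY_obs) < z).sum()        # sampled units enter directly
theta_hat = (below_sampled + (sim < z).sum(axis=1).mean()) / Ni
```

The sampled units contribute their observed indicator values exactly; only the N_i − n_i non-sampled units require simulation.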

Parametric transformation for response variables
In practice, we often deal with positive response variables such as income, for which the normality assumption used in the FH and NER models may not be reasonable. To address this issue, the log-transformation is widely adopted owing to its simplicity, and many theoretical results are available (e.g. Slud and Maiti 2006; Molina and Martin 2018). However, the log-transformation is not always appropriate, and it is often preferable to use a parametric family of transformations and estimate the transformation parameter from the data. Under the FH model, Kubokawa (2015, 2017c) proposed the following transformed model:

H(y_i; λ) = x_i^t β + v_i + ε_i,  i = 1, …, m,

where H(⋅; λ) is a parametric transformation, λ is a transformation parameter, and the other quantities are defined in the same way as in the FH model. Under this model, the areal parameter is defined as θ_i = H^{-1}(x_i^t β + v_i; λ). The most famous parametric transformation for positive response variables is the Box-Cox transformation (Box and Cox 1964), but its inverse is not defined on the whole real line, so θ_i cannot be defined under the Box-Cox transformation. A possible alternative for positive responses is the dual power transformation (Yang 2006), which includes the log-transformation as a special case and is the one adopted in Kubokawa (2015, 2017c). These papers provide methods for estimating the unknown parameters including λ, derive empirical predictors of θ_i, and give asymptotic evaluations of the mean squared errors of the predictors.
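The dual power transformation and its inverse can be written down directly; the closed-form inverse below follows from solving a quadratic equation in u = y^λ, and it is defined for every real argument, unlike the Box-Cox inverse:

```python
import numpy as np

# Dual power transformation (Yang 2006): H(y; lam) = (y^lam - y^(-lam)) / (2*lam)
# for y > 0, lam > 0, with H(y; 0) = log(y) as the limiting case.

def dual_power(y, lam):
    """H(y; lam); reduces to log(y) in the limit lam -> 0."""
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - y ** (-lam)) / (2.0 * lam)

def dual_power_inv(x, lam):
    """Inverse of H: solving u - 1/u = 2*lam*x for u = y**lam gives the
    positive root u = lam*x + sqrt(lam^2 x^2 + 1), valid for every real x."""
    if lam == 0.0:
        return np.exp(x)
    u = lam * x + np.sqrt(lam ** 2 * x ** 2 + 1.0)
    return u ** (1.0 / lam)

y = np.array([0.2, 1.0, 5.0])
lam = 0.4
roundtrip = dual_power_inv(dual_power(y, lam), lam)   # recovers y
negatives = dual_power_inv(np.array([-10.0, 0.0, 10.0]), lam)  # still positive
```

This is exactly the property needed to define θ_i = H^{-1}(x_i^t β + v_i; λ): the linear predictor x_i^t β + v_i ranges over the whole real line, and the inverse always returns a positive value.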
The same idea can be incorporated into the NER model. Sugasawa and Kubokawa (2019) proposed the following transformed nested error regression model: H(y_ij; λ) = x_ij^t β + v_i + ε_ij, where H(y_ij; λ) denotes the transformed response variable, and the other quantities are defined in the same way as in the NER model. An important application of the transformed NER model is the estimation of general parameters in a finite population as considered in Sect. 4.2. Under the transformed NER model, the method of Molina and Rao (2010) can easily be modified by replacing the fixed transformation with the parametric transformation H(y_ij; λ). The log-likelihood function without irrelevant constants is given by

ℓ(β, τ², σ², λ) = −(1/2) Σ_i {log |Σ_i| + (H(y_i; λ) − X_i β)^t Σ_i^{−1} (H(y_i; λ) − X_i β)} + Σ_i Σ_j log H′(y_ij; λ),

where H′(⋅; λ) is the derivative of H(⋅; λ) (the Jacobian term) and Σ_i is the variance-covariance matrix of H(y_i; λ). For given λ, the maximization with respect to the other parameters is equivalent to maximum likelihood estimation of the NER model with response variable H(y_ij; λ), so the profile likelihood for λ can easily be obtained. Sugasawa and Kubokawa (2019) adopted the golden section method for maximizing the profile likelihood of λ. On the other hand, Sugasawa (2019b) developed a hierarchical Bayesian approach similar to Molina et al. (2014) for the model (4.3) with spatially correlated random effects v_i.
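The profiling step can be illustrated as follows. As a simplification (an assumption of this sketch, not the method itself), the inner NER maximization is replaced by an i.i.d. Gaussian fit of the transformed data; this keeps the structure visible: a closed-form inner fit plus the Jacobian term, with an outer golden section search over λ:

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.exp(rng.normal(1.0, 0.5, size=200))      # positive toy responses

def H(x, lam):                                   # dual power transformation
    return np.log(x) if lam == 0.0 else (x**lam - x**(-lam)) / (2.0 * lam)

def H_prime(x, lam):                             # derivative entering the Jacobian
    return 1.0 / x if lam == 0.0 else (x**(lam - 1.0) + x**(-lam - 1.0)) / 2.0

def profile_loglik(lam):
    """Profile log-likelihood of lam: the inner maximization (here a simple
    i.i.d. Gaussian fit standing in for the full NER fit) is closed form,
    and the log-Jacobian of the transformation is added."""
    h = H(y, lam)
    s2 = h.var()
    n = len(y)
    return -0.5 * n * (np.log(2.0 * np.pi * s2) + 1.0) + np.sum(np.log(H_prime(y, lam)))

def golden_section_max(f, a, b, tol=1e-5):
    """Golden section search for the maximizer of a unimodal f on [a, b]."""
    gr = (np.sqrt(5.0) - 1.0) / 2.0
    while b - a > tol:
        c, d = b - gr * (b - a), a + gr * (b - a)
        if f(c) >= f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

lam_hat = golden_section_max(profile_loglik, 0.01, 2.0)
```

With log-normal toy data the profile likelihood favors small λ; in the full method, each evaluation of `profile_loglik` would instead call the ML fit of the NER model with response H(y_ij; λ).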

Flexible modeling for variance parts
As mentioned in Sect. 2.1, the main criticism of the FH model (2.1) is the assumption that the sampling variances D_i are known, although they are in fact estimated from data. It has been revealed that this assumption may lead to several serious problems such as underestimation of risks (e.g. Wang and Fuller 2003). To take account of the variability of D_i, You and Chapman (2006) introduced a joint hierarchical model for y_i and D_i, (4.4), in which y_i | θ_i, σ_i² ~ N(θ_i, σ_i²), (n_i − 1)D_i/σ_i² follows the chi-squared distribution with n_i − 1 degrees of freedom, and σ_i² ~ π(⋅), where n_i is the sample size and π(⋅) is a prior distribution for σ_i². In the model (4.4), θ_i and σ_i² are the true mean and variance, and y_i and D_i are estimates of them, respectively. You and Chapman (2006) adopted Ga(a_i, b_i) for π(⋅), where a_i and b_i are fixed constants, and proposed a hierarchical Bayesian approach via Markov chain Monte Carlo.

In the classical NER model (2.5), the variance parameters of the random effects and the error terms are assumed to be constant over areas, but this could be unrealistic for some data, as pointed out by Jiang and Nguyen (2012). To address this issue, Jiang and Nguyen (2012) replaced the distributions of v_i and ε_ij in (2.5) with v_i ~ N(0, λσ_i²) and ε_ij ~ N(0, σ_i²), (4.5), where λ, σ_1², …, σ_m² are unknown parameters. Since the number of parameters σ_i² increases with the number of areas, σ_i² cannot be consistently estimated as long as n_i is finite. However, an important observation made by the authors is that the predictor of v_i of the form (2.6) depends on the two variance parameters τ² and σ² only through their ratio τ²/σ². Under (4.5), this ratio is λ, which can be consistently estimated, so the predictor of v_i or of the areal mean can be reasonably derived. The authors also pointed out that the MSE cannot be consistently estimated since it depends on σ_i². To improve the performance of the model (4.5), Kubokawa et al. (2016) introduced an additional structure for σ_i², namely σ_i² ~ Ga(α, β) with unknown parameters α and β.
The authors developed likelihood inference on the unknown parameters and proved consistency of the maximum likelihood estimator. On the other hand, Sugasawa and Kubokawa (2017b) proposed an alternative modeling strategy to address heteroscedasticity, given by Var(v_i) = τ² and Var(ε_ij) = ψ(z_ij^t γ), where z_ij is a sub-vector of x_ij, γ is a vector of unknown parameters and ψ(⋅) is a known positive-valued function such as exp(⋅). The authors developed moment-based methods to estimate the unknown parameters and derived predictors as well as an approximation of the MSE.
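A small numerical sketch (with made-up values) of the key observation above: the best predictor of v_i depends on the area-level variances only through the ratio λ = Var(v_i)/Var(ε_ij), so rescaling σ_i² while keeping λ fixed leaves the predictor unchanged:

```python
import numpy as np

def v_predictor_full(resid_mean, n_i, tau2, sig2):
    """Best predictor of v_i in an NER-type model with Var(v_i) = tau2 and
    Var(e_ij) = sig2: shrinkage factor tau2 / (tau2 + sig2/n_i) applied to
    the area mean residual."""
    return tau2 / (tau2 + sig2 / n_i) * resid_mean

def v_predictor_ratio(resid_mean, n_i, lam):
    """The same predictor written in terms of the ratio lam = tau2/sig2 only,
    as exploited by Jiang and Nguyen (2012) in the heteroscedastic model."""
    return lam * n_i / (1.0 + lam * n_i) * resid_mean

resid_mean, n_i, lam = 0.8, 10, 0.5
p_small = v_predictor_full(resid_mean, n_i, lam * 1.0, 1.0)   # sig2_i = 1
p_large = v_predictor_full(resid_mean, n_i, lam * 4.0, 4.0)   # sig2_i = 4, same ratio
```

Both calls return the same value, which is why λ (estimable) rather than the individual σ_i² (not estimable for finite n_i) is what the predictor actually needs.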

Flexible modeling for random effects
As discussed in Sect. 4.1, the random effects play important roles in small area estimation. However, when the true structure does not include random effects, the use of random effects models would increase the estimation variability, so the inclusion of random effects in the model should be carefully examined. To address this issue, Datta et al. (2011), among others, proposed testing the existence of random effects before applying mixed models. Datta and Mandal (2015) generalized this idea to mixture modeling of the random effects in the FH model (2.1), given by v_i | (s_i = 1) ~ N(0, A), v_i | (s_i = 0) ~ δ_0 and P(s_i = 1) = p, (4.6), where δ_0 is the one-point distribution at the origin, and s_i is a latent random variable indicating whether the random area effect is needed or not. The unknown parameter p in the model (4.6) can be interpreted as the prior probability of the existence of the random effect in the ith area. This random effects structure (4.6) is called 'uncertain random effects'. It is noted that the marginal distribution of v_i is the two-component mixture pN(0, A) + (1 − p)δ_0, which is very similar to the spike-and-slab prior (e.g. Ishwaran and Rao 2005) for variable selection. Datta and Mandal (2015) proposed a hierarchical Bayesian approach to estimate θ_i. Sugasawa and Kubokawa (2017a) adopted the idea (4.6) in the NER model and demonstrated its usefulness in the context of estimating parameters in a finite population framework. The idea of the uncertain random effects (4.6) has also been generalized to the area-level model based on the natural exponential family (2.8).
The new model for θ_i is a mixture of the conjugate prior and the one-point distribution at (ψ′)^{−1}(m_i), where m_i is the mean of the conjugate prior and (ψ′)^{−1} denotes the inverse function of ψ′ given in Sect. 2.5. Based on this model, an empirical Bayes method was developed for estimating μ_i = E[y_i | θ_i]. As a related method, Chakraborty et al. (2016) proposed a two-component mixture model pN(0, A_1) + (1 − p)N(0, A_2) with unknown A_1 and A_2 for v_i. A new direction for modeling random effects is the use of global-local shrinkage priors originally developed in the context of signal estimation (e.g. Carvalho et al. 2010). Tang et al. (2018) introduced global-local shrinkage priors for modeling the random effects in the FH model (2.1); their model puts v_i | u_i² ~ N(0, u_i²), with the local scale u_i² following its own prior. For example, the exponential prior for u_i² leads to the Laplace prior for v_i. Under this formulation, the conditional expectation of θ_i takes the form y_i − B_GL,i(y_i − x_i^t β) with a shrinkage coefficient B_GL,i, and Tang et al. (2018) showed that B_GL,i converges to 0 as |y_i − x_i^t β| → ∞, which means that the resulting estimator can prevent over-shrinkage, that is, it does not shrink a direct estimator that is very far from the regression part. For count responses, Hamura et al. (2019) developed a new global-local shrinkage prior under the Poisson-gamma model given in Sect. 2.5.
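Under the uncertain random effects structure (4.6) with known hyperparameters, the posterior mean of θ_i has a closed form: a latent-indicator weight times the usual FH shrinkage. A hedged sketch (all numerical values illustrative):

```python
import numpy as np

def norm_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def uncertain_re_mean(y, xb, A, D, p):
    """Posterior mean of theta_i = x_i'beta + v_i when the prior of v_i is
    the two-component mixture p*N(0, A) + (1-p)*delta_0 of (4.6), with all
    hyperparameters (A, D, p and the regression value xb) treated as known."""
    f_slab = norm_pdf(y, xb, A + D)      # marginal of y_i when s_i = 1
    f_spike = norm_pdf(y, xb, D)         # marginal of y_i when s_i = 0
    w = p * f_slab / (p * f_slab + (1.0 - p) * f_spike)  # P(s_i = 1 | y_i)
    B = A / (A + D)                      # usual FH shrinkage factor
    return xb + w * B * (y - xb)
```

When p = 1 this reduces to the usual FH predictor, when p = 0 it returns the regression estimate x_i^t β, and because the slab has heavier tails than the spike, w approaches 1 for large residuals, so extreme direct estimates are handled by the slab component.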

Skewed distributions for error terms and random effects
In the classical FH and NER models, the distributions of both the error terms and the random effects are assumed to be normal, which is not necessarily reasonable in practice. In the FH model (2.1), the normality assumption for the error term ε_i is based on central limit theorems, since y_i is typically a summary statistic. However, when the sample size for computing y_i is small, the normality assumption may be violated. In particular, empirical distributions of data such as income tend to be skewed. Ferraz and Moura (2012) thus adopted the skew-normal distribution (e.g. Azzalini 1985) for ε_i in the FH model. Ferrante and Pacei (2017) developed a multivariate FH model with both error terms and random effects following multivariate skew-normal distributions. In the context of the NER models, Diallo and Rao (2018) considered the situation where the distributions of the response variables are skewed. The authors proposed replacing the normality assumptions v_i ~ N(0, τ²) and ε_ij ~ N(0, σ²) with skew-normal ones, where SN(ξ, ω², λ) denotes a skew-normal distribution with density (2/ω)φ((x − ξ)/ω)Φ(λ(x − ξ)/ω), with φ(⋅) and Φ(⋅) being the density and distribution functions of the standard normal distribution. Note that non-zero location parameters are used in v_i and ε_ij to ensure that both have mean zero. However, it may be hard to estimate τ², σ², λ_v and λ_ε accurately. Tsujino and Kubokawa (2019) investigated a simplified version of the model in which only the error terms follow the skew-normal distribution while the random effects remain normal, and provided a simple expression for predicting the random effects.
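A small sketch of the skew-normal density and a numerical check of the mean formula E[SN(0, ω², λ)] = ωδ√(2/π) with δ = λ/√(1 + λ²), which is exactly the offset needed to center the skewed random effects and error terms at zero:

```python
import numpy as np
from scipy.stats import norm

def skew_normal_pdf(x, xi, omega, alpha):
    """Density of SN(xi, omega^2, alpha) (Azzalini 1985):
    f(x) = (2/omega) * phi(z) * Phi(alpha * z), with z = (x - xi)/omega."""
    z = (x - xi) / omega
    return 2.0 / omega * norm.pdf(z) * norm.cdf(alpha * z)

# numerical sanity check on a fine grid: the density integrates to 1 and
# its mean equals omega * delta * sqrt(2/pi)
x = np.linspace(-12.0, 12.0, 240001)
dx = x[1] - x[0]
f = skew_normal_pdf(x, xi=0.0, omega=1.0, alpha=3.0)
total = f.sum() * dx
mean = (x * f).sum() * dx
delta = 3.0 / np.sqrt(1.0 + 3.0 ** 2)
offset = np.sqrt(2.0 / np.pi) * delta
```

Setting the location parameter of v_i to −(offset scaled by its ω) therefore gives E[v_i] = 0, and similarly for ε_ij.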

Nonparametric and semiparametric modeling
In the standard mixed models in small area estimation, parametric models are typically adopted for simplicity. However, parametric models suffer from model misspecification, which could produce unreliable small area estimates. Instead, the use of nonparametric or semiparametric models has been considered in the literature. Opsomer et al. (2008) considered nonparametric estimation of the regression part in the linear mixed model by adopting the P-spline method. Specifically, the authors proposed the model y_ij = f(x_ij) + v_i + ε_ij, (4.7), where v_i and ε_ij are defined in the same way as in the original NER model (2.5), and f(⋅) is modeled by the P-spline f(x) = β_0 + β_1 x + ⋯ + β_q x^q + Σ_{k=1}^K γ_k (x − κ_k)_+^q, where q is the degree of the spline, (x)_+^q = x^q I(x > 0), κ_1 < ⋯ < κ_K is a set of fixed knots, and β = (β_0, …, β_q) and γ = (γ_1, …, γ_K) denote the coefficient vectors of the parametric and spline terms. Moreover, it is assumed that γ ~ N(0, αI_K), where I_K is the K × K identity matrix and α is an unknown parameter controlling the smoothness of the estimate of f(⋅). Collecting the polynomial terms into a design matrix X and the truncated power terms into Z, the nonparametric model (4.7) can be expressed as a linear mixed model, and the unknown parameters are estimated via the likelihood method. The P-spline has also been adopted for modeling regression terms elsewhere in the literature (e.g. Rao et al. 2014; Sugasawa et al. 2018). Misspecification of the link function in the unmatched sampling and linking model (2.7) is another concern: although the link function is typically specified by the user, the selected link function is subject to misspecification. To overcome this problem, the P-spline technique has been employed to estimate the link function and the small area parameters simultaneously, with prior distributions put on the unknown parameters and a hierarchical Bayesian method developed for estimating θ_i.
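The truncated power basis, and its split into a fixed-effects (polynomial) design matrix X and a random-effects (spline) design matrix Z used in the linear mixed model representation above, can be sketched as:

```python
import numpy as np

def pspline_design(x, knots, q=2):
    """Truncated power basis for the P-spline
    f(x) = b0 + b1*x + ... + bq*x^q + sum_k g_k * (x - kappa_k)_+^q:
    X holds the polynomial part (fixed effects) and Z the truncated power
    terms (treated as random effects with gamma ~ N(0, alpha * I_K))."""
    X = np.vander(x, q + 1, increasing=True)              # columns 1, x, ..., x^q
    Z = np.maximum(x[:, None] - knots[None, :], 0.0) ** q  # (x - kappa_k)_+^q
    return X, Z

x = np.linspace(0.0, 1.0, 50)
knots = np.linspace(0.1, 0.9, 9)   # illustrative fixed knots
X, Z = pspline_design(x, knots, q=2)
```

With these matrices, fitting y = Xβ + Zγ + (area effects) + e by (restricted) maximum likelihood estimates the smoothing parameter α as a variance component, which is the essential trick of the mixed-model formulation of P-splines.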
Finally, to avoid specification of the distributions of the random effects or error terms, Chaudhuri and Ghosh (2011) adopted an empirical likelihood approach for both area-level and unit-level models, and Polettini (2017) employed Dirichlet process mixture modeling for the random effects in the FH model.

Robust methods
In practice, outliers are often contained in data. Model assumptions such as normality would be violated for such outlying data, and the inclusion of outliers can significantly affect the estimation of model parameters. In the context of the FH model, Ghosh et al. (2008) used the influence function to investigate the effect of outliers on the BLUP, and proposed a robust version of the BLUP in which the residual y_i − x_i^t β is truncated through Huber's ψ-function ψ_K(u) = u min(1, K/|u|) with a tuning constant K > 0. Similarly, Sinha and Rao (2009) used Huber's ψ-function to robustify the estimating equations for the model parameters and suggested a robust predictor as a solution of the robustified equations. Moreover, Sugasawa (2019a) proposed the use of the density power divergence, in which the normal density φ(⋅; a, b), with mean a and variance b, enters the objective function together with a tuning parameter γ; the objective L_γ(β, A) reduces to the standard marginal likelihood function, without an irrelevant constant, as γ → 0. Using the fact that the Bayes predictor θ̃_i under the FH model (2.1) can be derived via the partial derivative of the marginal likelihood with respect to y_i, Sugasawa (2019a) defined a new robust predictor θ̃_i^R by replacing the marginal likelihood with the density power divergence. It should be noted that θ̃_i^R reduces to the original predictor θ̃_i as γ → 0. Compared with θ̃_i, the amount of shrinkage depends on y_i through its density, and since the adjustment term converges to 0 as |y_i| → ∞ under fixed parameters, the estimator θ̃_i^R reduces to y_i for an outlying observation y_i.
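Huber's ψ-function and a toy robustified estimating equation, solved by iteratively reweighted averaging. This is only an illustration of the mechanism (bounded influence of large residuals), not the full mixed-model estimating equations of Sinha and Rao (2009):

```python
import numpy as np

def huber_psi(u, K=1.345):
    """Huber's psi-function psi_K(u) = u * min(1, K/|u|): the identity near
    zero, truncated at +/- K in the tails so outliers have bounded influence."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= K, u, K * np.sign(u))

def robust_location(y, scale, K=1.345, n_iter=200):
    """Solve the toy robustified equation sum_i psi_K((y_i - mu)/scale) = 0
    by iteratively reweighted averaging with weights psi(r)/r."""
    mu = np.median(y)
    for _ in range(n_iter):
        r = (y - mu) / scale
        r_safe = np.where(r == 0.0, 1.0, r)      # avoid 0/0; psi(r)/r -> 1 at 0
        w = huber_psi(r_safe, K) / r_safe
        mu = np.sum(w * y) / np.sum(w)
    return mu

y = np.array([0.1, -0.2, 0.05, 0.3, 8.0])        # last observation is an outlier
mu_robust = robust_location(y, scale=1.0)
mu_plain = y.mean()
```

The outlier receives weight roughly K/|r| instead of 1, so `mu_robust` stays close to the bulk of the data while the plain mean is dragged toward the outlier.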
On the other hand, Datta and Lahiri (1995) replaced the normal distribution of the random effects with the Cauchy distribution for outlying areas, and provided some robustness properties of the resulting estimator. Moreover, Ghosh et al. (2018) studied the use of the t-distribution for the random effects under the FH model with joint modeling of means and variances.
In estimating population parameters in the finite population framework, outliers in the sampled data can invalidate the parametric assumptions of the NER model (2.5), which might impact the validity of model-based small area estimators. To solve this problem, we can again employ the idea of Sinha and Rao (2009) to obtain robust estimators β̂^R and v̂_i^R, which lead to a robust small area estimator of the mean in the ith area obtained by combining the sampled values with robust predictions for the non-sampled units, where s_i and r_i are the sets of indices of the sampled and non-sampled units, respectively. However, this estimator implicitly assumes that the non-sampled values in the population are drawn from a distribution with the same mean as the sampled non-outliers, which can be unrealistic. Dongmo Jiongo et al. (2013) and Chambers et al. (2014) addressed this issue and proposed improved versions of the robust estimator. Moreover, Schmid et al. (2016) extended the results of Chambers et al. (2014) to take spatial correlation into account.

Measurement error models
In the FH model (2.1), the covariate or auxiliary information x_i is treated as fixed. However, x_i could in practice be an estimate from another survey, so x_i might include estimation error, that is, x_i is measured with error. Ybarra and Lohr (2008) clearly pointed out the drawback of simply using an estimated x_i by investigating the MSE. Let x̂_i be the estimator of the true covariate x_i, and suppose that its MSE matrix MSE(x̂_i) = C_i is known. Note that the true areal parameter is θ_i = x_i^t β + v_i. Ybarra and Lohr (2008) showed that the MSE of the naive predictor, obtained by plugging x̂_i into the usual predictor, exceeds the MSE of the predictor θ̃_i computed with known x_i by a term involving β^t C_i β, so the measurement error increases the MSE of the standard predictor. More importantly, since the MSE of the direct estimator y_i is D_i, the naive predictor could even be worse than the direct estimator if β^t C_i β > A + D_i. Ybarra and Lohr (2008) derived an optimal predictor under measurement error, which is obtained by replacing the shrinkage coefficient γ_i with γ̃_i = (A + β^t C_i β)/(A + D_i + β^t C_i β). The unknown parameters can be estimated via the maximum likelihood method, which enables us to obtain the empirical version of the predictor. On the other hand, Arima et al. (2015) pointed out that this parameter estimation could be unstable, and developed a hierarchical Bayesian approach for the model of Ybarra and Lohr (2008). Ghosh et al. (2006) and Torabi et al. (2009) studied the NER model (2.5) with measurement error. The authors assume that the true covariate is x_i for all the units in the same area, and that different measurements x_ij are observed for different sampled units, namely x_ij = x_i + η_ij with x_i ~ N(μ_x, σ_x²) and η_ij ~ N(0, σ_η²). This formulation is called structural measurement error. Note that μ_x, σ_x² and σ_η² are additional unknown parameters, which can be estimated via the maximum likelihood method.
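A sketch of the Ybarra-Lohr shrinkage: the measurement-error term β^t C_i β enters both the numerator and denominator of the shrinkage coefficient, pulling the predictor toward the direct estimator y_i as C_i grows (all values below are illustrative):

```python
import numpy as np

def ybarra_lohr_predictor(y_i, xhat_i, beta, A, D_i, C_i):
    """Ybarra-Lohr (2008) predictor when x_i is replaced by an estimate
    xhat_i with MSE matrix C_i: the shrinkage factor A/(A + D_i) becomes
    (A + b'C_i b)/(A + D_i + b'C_i b)."""
    me = float(beta @ C_i @ beta)        # extra variability from measurement error
    gamma = (A + me) / (A + D_i + me)
    return gamma * y_i + (1.0 - gamma) * float(xhat_i @ beta)

beta = np.array([0.5])
xhat = np.array([1.0])
no_me = ybarra_lohr_predictor(2.0, xhat, beta, A=1.0, D_i=1.0,
                              C_i=np.zeros((1, 1)))       # reduces to the FH predictor
with_me = ybarra_lohr_predictor(2.0, xhat, beta, A=1.0, D_i=1.0,
                                C_i=4.0 * np.eye(1))      # noisier covariate
```

With C_i = 0 the usual FH predictor is recovered; a noisier covariate estimate shifts weight from the (now unreliable) regression part to the direct estimator.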

Observed best prediction
The classical FH model (2.1) implicitly assumes that the regression part x_i^t β is correctly specified, and the estimation of the model parameters, including β, as well as the prediction of θ_i, are carried out under the assumed model. However, any assumed model is subject to misspecification. Jiang et al. (2011) introduced the true model θ_i = μ_i + v_i with v_i ~ N(0, A), where μ_i is the true mean, which is not necessarily equal to x_i^t β. Note that the assumed model (2.1) is unchanged; the authors focused on a reasonable estimation method for the regression coefficients under possible model misspecification. They considered the total mean squared prediction error (MSPE) of the best predictor θ̃_i, namely E[Σ_i (θ̃_i − θ_i)²], where B_i = A/(A + D_i) is the shrinkage coefficient appearing in θ̃_i, and showed that Σ_i {(1 − B_i)²(y_i − x_i^t β)² + 2B_i D_i − D_i} is an unbiased estimator of the MSPE. This expression suggests a natural estimator of β as the minimizer of the expression inside the expectation, which is equivalent to minimizing Σ_i (1 − B_i)²(y_i − x_i^t β)². A closed form expression of the minimizer is then obtained as β̂ = (Σ_i (1 − B_i)² x_i x_i^t)^{−1} Σ_i (1 − B_i)² x_i y_i, (4.8), which is called the observed best predictive (OBP) estimator of β. Note that the expression (4.8) is different from the maximum likelihood estimator or the GLS estimator given in (2.3). Jiang et al. (2011) also developed general OBP estimators under linear mixed models and the asymptotic theory of OBP estimators. As follow-up works, misspecification of both the mean and variance parts in the NER model (2.5) has been studied, and Chen et al. (2015) extended the OBP theory to a situation with binary responses.
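For fixed A, the OBP estimator (4.8) is a weighted least squares fit with weights (1 − B_i)² = (D_i/(A + D_i))², in contrast with the GLS weights 1/(A + D_i). A sketch comparing the two on toy data (parameter values assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def beta_obp(X, y, A, D):
    """OBP estimator of beta for given A: weighted least squares with
    weights (1 - B_i)^2 = (D_i/(A + D_i))^2, i.e. areas with large sampling
    variance (little shrinkage toward the regression) get more weight."""
    w = (D / (A + D)) ** 2
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

def beta_gls(X, y, A, D):
    """Standard GLS/ML-type estimator with weights 1/(A + D_i)."""
    w = 1.0 / (A + D)
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)

m, A = 30, 1.0
X = np.column_stack([np.ones(m), rng.normal(size=m)])
D = rng.uniform(0.5, 2.0, size=m)
y = X @ np.array([1.0, 2.0]) + rng.normal(0.0, np.sqrt(A + D))
b_obp, b_gls = beta_obp(X, y, A, D), beta_gls(X, y, A, D)
```

When all D_i are equal, both weighting schemes are constant and the two estimators coincide with OLS; they differ precisely when the sampling variances are heterogeneous, which is when the choice of weights matters for predictive performance.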

Variable selection
Modeling of the regression part is an important issue in obtaining a good small area estimator, so variable selection problems arise naturally. In general statistical theory, variable selection has been recognized as an essential problem since the proposal of the well-known Akaike Information Criterion, or AIC (Akaike 1973, 1974). The Bayesian Information Criterion, or BIC (Schwarz 1978), is also a well-known criterion, but the use of AIC and BIC is not necessarily reasonable for variable selection when the model contains random effects. Vaida and Blanchard (2005) considered the general form of linear mixed models,

y = Xβ + Zv + e, v ~ N(0, G), e ~ N(0, R), (4.9)

where X and Z are design matrices for the fixed effects and random effects, respectively. Vaida and Blanchard (2005) pointed out that the classical AIC measures the prediction error of the predictor based on the marginal distribution y ~ N(Xβ, Σ) with Σ = ZGZ^t + R, so the classical AIC might be inappropriate for predicting quantities that include the random effects. Therefore, the authors proposed the conditional AIC as an asymptotically unbiased estimator of the conditional Akaike information (cAI) defined by

cAI = −2 ∫ E_{y,y*|v} [log φ(y*; Xβ̂ + Zv̂, R̂)] φ(v; 0, G) dv,

where E_{y,y*|v} denotes the expectation with respect to the conditional distribution of y and y* given v, namely y|v ~ N(Xβ + Zv, R) and y*|v ~ N(Xβ + Zv, R), respectively. Note that cAI is related to the Kullback-Leibler divergence between the estimated and the true conditional density of a future observation y*. Vaida and Blanchard (2005) derived an asymptotically unbiased estimator of the cAI of the form −2 log φ(y; Xβ̂ + Zv̂, R̂) + Δ with a bias correction term Δ, which is called the conditional AIC. Since both the FH and NER models are special cases of linear mixed models, the cAIC can be adopted for selecting suitable covariates. We refer to Müller et al. (2013) for more details and a review of variable selection techniques in linear mixed models.
One possible criticism of the derivation of the cAIC is that the same design matrix X is used for both the estimation and the prediction models. However, in the context of estimating population parameters via the NER model as described in Sect. 2.2, the two models have different covariates, since we are interested in prediction based on the population-level covariates while the NER model is fitted with the sampled covariates {x_ij}, j = 1, …, n_i, i = 1, …, m. Kawakubo et al. (2018) addressed this issue and developed a new information criterion for variable selection in this situation by modifying the derivation of the cAIC. We also note that minimizing the cAI is not necessarily equivalent to minimizing the mean squared error of the small area estimators. Hence, if we pursue variable selection that produces better predictors with smaller mean squared errors, it would be more reasonable to derive an information criterion based on mean squared errors; the OBP method given in Sect. 4.10 has been adopted to derive a criterion for variable selection based on an unbiased estimator of the mean squared prediction error.
As a more general strategy, Jiang et al. (2008) proposed a variable selection procedure called the fence method in generalized linear mixed models, and applied the method for selecting covariates in the FH and NER models. Moreover, Jiang et al. (2010) adopted the fence method for fitting the P-spline models described in Sect. 4.7, which requires selecting the degree of the spline, the number of knots, and a smoothing parameter.
Finally, it should be noted that variable selection and estimation are typically considered separately, which might lead to ignoring the variability due to the selection procedure. This issue was addressed by Jiang et al. (2018), who introduced a new jackknife procedure for estimating mean squared errors that takes account of the additional variability due to variable selection.

Other related topics
We here briefly review some related topics that were not covered in the previous sections. One of them is benchmarking, a method for adjusting the EBLUPs so that their total agrees with that of the direct estimates; several new methodologies have been developed recently (e.g. Kubokawa and Strawderman 2013; Kubokawa et al. 2014; Ghosh et al. 2015). There have also been several attempts to propose new methodologies for small area estimation of non-standard data such as grouped data (Kawakubo and Kobayashi 2019) and circular data (Hernandez-Stumpfhauser et al. 2016). Torabi and Rao (2013) proposed an extension of the FH model to the case where each area is divided into sub-areas. Finally, the use of spatial and time series data is becoming more and more important for producing reliable estimates by borrowing information from spatially and temporally close areas, and extensions of the standard mixed models such as the FH and NER models have been considered in the literature (e.g. Pratesi and Salvati 2008, 2009; Marhuenda et al. 2013).

Concluding remarks
In this paper, we reviewed both classical and recently developed methods for small area estimation with mixed models. Since this paper has focused on small area estimation based on mixed models, we did not cover methodologies that do not use mixed models, or more practical issues such as survey weights. See Rao and Molina (2015) for details of such approaches.