Best Linear Unbiased Prediction

  • Chapter in Linear Model Theory

Abstract

Suppose, as in Chap. 11, that the model for y is an Aitken model. In this chapter, however, rather than considering the problem of estimating \(\mathbf{c}^T\boldsymbol{\beta}\) under that model (which we have already dealt with), we consider the problem of estimating or, to state it more accurately, predicting \(\tau \equiv \mathbf{c}^T\boldsymbol{\beta} + u\), where u is a random variable satisfying

$$\displaystyle \mbox{E}(u) = 0, \quad \mbox{var}(u) = \sigma^2 h, \quad \mbox{and} \quad \mbox{cov}(\mathbf{y}, u) = \sigma^2\mathbf{k}. $$

Here h is a specified nonnegative scalar and k is a specified n-vector such that the matrix \(\left(\begin{array}{cc}\mathbf{W} & \mathbf{k}\\ \mathbf{k}^T & h\end{array}\right)\), which is equal to \(1/\sigma^2\) times the variance–covariance matrix of \(\left(\begin{array}{c}\mathbf{y}\\ u\end{array}\right)\), is nonnegative definite. We speak of “predicting τ” rather than “estimating τ” because τ is now a random variable rather than a parametric function (although one of the summands in its definition, namely \(\mathbf{c}^T\boldsymbol{\beta}\), is a parametric function). We refer to the joint model for y and u just described as the prediction-extended Aitken model, and to the inference problem as the general prediction problem (under that model). In the degenerate case in which u = 0 with probability one, the general prediction problem reduces to the estimation problem considered in Chap. 11.
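As a concrete illustration of the nonnegative-definiteness condition above, the following sketch builds the bordered matrix \(\left(\begin{array}{cc}\mathbf{W} & \mathbf{k}\\ \mathbf{k}^T & h\end{array}\right)\) for a small example and checks its eigenvalues. All numeric values of W, k, and h below are invented for illustration; they are not from the text.

```python
import numpy as np

# Illustrative prediction-extended Aitken model quantities (n = 3).
# W is the Aitken variance structure: var(y) = sigma^2 * W.
W = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
k = np.array([0.2, 0.5, 0.1])   # cov(y, u) = sigma^2 * k  (n-vector)
h = 0.8                          # var(u) = sigma^2 * h  (nonnegative scalar)

# The bordered matrix [[W, k], [k^T, h]] equals (1/sigma^2) times the
# variance-covariance matrix of (y, u) and must be nonnegative definite.
V = np.block([[W,          k[:, None]],
              [k[None, :], np.array([[h]])]])

# Check nonnegative definiteness via the (real, sorted) eigenvalues of
# the symmetric matrix V, allowing a small numerical tolerance.
eigvals = np.linalg.eigvalsh(V)
print(bool(np.all(eigvals >= -1e-12)))  # prints True
```

Equivalently, since W here is positive definite, the condition holds exactly when the Schur complement \(h - \mathbf{k}^T\mathbf{W}^{-1}\mathbf{k}\) is nonnegative (here 0.8 − 0.255 > 0).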




© 2020 Springer Nature Switzerland AG


Cite this chapter

Zimmerman, D.L. (2020). Best Linear Unbiased Prediction. In: Linear Model Theory. Springer, Cham. https://doi.org/10.1007/978-3-030-52063-2_13
