
Introduction

  • Ronald Christensen
Chapter
Part of the Springer Texts in Statistics book series (STS)

Abstract

This book is about linear models. Linear models are models that are linear in their parameters. A typical model considered is
$$ Y = X\beta + e, $$
where Y is an n × 1 vector of random observations, X is an n × p matrix of known constants called the model (or design) matrix, β is a p × 1 vector of unobservable fixed parameters, and e is an n × 1 vector of unobservable random errors. Both Y and e are random vectors. We assume that the errors have mean zero, a common variance, and are uncorrelated. In particular, E(e) = 0 and Cov(e) = σ²I, where σ² is some unknown parameter. (The operations E(·) and Cov(·) will be defined formally a bit later.) Our objective is to explore models that can be used to predict future observable events. Much of our effort will be devoted to drawing inferences, in the form of point estimates, tests, and confidence regions, about the parameters β and σ². In order to get tests and confidence regions, we will assume that e has an n-dimensional normal distribution with mean vector (0, 0, …, 0)′ and covariance matrix σ²I, i.e., e ~ N(0, σ²I).
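The model and estimation goal described above can be sketched numerically. The following is an illustrative simulation (not from the text, and with arbitrary values for n, p, β, and σ): it generates data from Y = Xβ + e with uncorrelated mean-zero errors, then recovers the least squares estimate of β and the usual unbiased estimate of σ².

```python
import numpy as np

# Sketch of the linear model Y = X beta + e with E(e) = 0, Cov(e) = sigma^2 I.
# All specific values below (n, p, beta, sigma) are arbitrary choices for
# illustration, not taken from the book.
rng = np.random.default_rng(0)

n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # n x p model matrix
beta = np.array([1.0, 2.0, -0.5])                               # p x 1 fixed parameters
sigma = 0.1
e = rng.normal(scale=sigma, size=n)                             # uncorrelated errors, mean zero
Y = X @ beta + e                                                # n x 1 vector of observations

# Point estimate of beta by least squares (solves min ||Y - X b||^2)
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Unbiased estimate of sigma^2 from the residuals, with n - p degrees of freedom
resid = Y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p)

print("beta_hat:", beta_hat)
print("sigma2_hat:", sigma2_hat)
```

With n much larger than p and a small error variance, the estimates land close to the true β and σ² used in the simulation.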

Keywords

Covariance Matrix, Generalized Linear Model, Quadratic Form, Random Vector, Multivariate Normal Distribution


Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Department of Mathematics and Statistics MSC01 1115, University of New Mexico, Albuquerque, USA
