We first deal with a general case and then apply it to the BL context. In general, assume the following linear relationship:
where $Q$ is a known/observable matrix; $y$ is an observable vector; $\beta$ is unobservable and needs to be estimated; and $\varepsilon$ is the error vector.
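As a minimal sketch (the symbols $y$, $\beta$ and $\varepsilon$ are our labels for the roles just described), the relationship reads:
\[ y = Q\beta + \varepsilon. \]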
Also, assume the following Gaussian distributions for regression errors and the prior:
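A sketch of these two distributions, taking $\Sigma$ for the error covariance and $b$ and $\Psi$ (our labels) for the prior mean and covariance of $\beta$:
\[ \varepsilon \sim N(0, \Sigma), \qquad \beta \sim N(b, \Psi). \]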
We therefore have, conditional on a realisation of $\beta$, the following distribution:
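Under the setup sketched above, this conditional distribution would read:
\[ y \mid \beta \sim N(Q\beta, \Sigma). \]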
In terms of probability density functions (pdfs), distributions (A3) and (A4) can be equivalently written as:
where $|\cdot|$ gives the determinant of the matrix it applies to; and $k$ and $n$ denote the dimensions of $\beta$ and $y$, respectively.
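In the assumed notation, the two pdfs take the standard multivariate normal form:
\[ f(\beta) = (2\pi)^{-k/2}\,|\Psi|^{-1/2}\exp\!\Big\{-\tfrac{1}{2}(\beta-b)'\Psi^{-1}(\beta-b)\Big\}, \]
\[ f(y \mid \beta) = (2\pi)^{-n/2}\,|\Sigma|^{-1/2}\exp\!\Big\{-\tfrac{1}{2}(y-Q\beta)'\Sigma^{-1}(y-Q\beta)\Big\}. \]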
We try to imply the probability distribution of $\beta$ from the joint pdf:
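Assuming, as is standard, that $\varepsilon$ is independent of $\beta$, the joint pdf is the product of the two densities above (a sketch):
\[ f(\beta, y) = f(\beta)\,f(y \mid \beta) = (2\pi)^{-(k+n)/2}\,|\Psi|^{-1/2}|\Sigma|^{-1/2}\exp\!\Big\{-\tfrac{1}{2}\big[(\beta-b)'\Psi^{-1}(\beta-b) + (y-Q\beta)'\Sigma^{-1}(y-Q\beta)\big]\Big\}. \]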
We hope, based on Bayes' Rule, that it is possible to express (A7) in terms of the distribution of $\beta$ conditional on $y$, as we are interested in the posterior estimation of $\beta$ given $y$. This translates into a need to replace the dependence of the mean estimates of $y$ on $\beta$ (as in (A4)) with a dependence of the mean estimates of $\beta$ on $y$. In other words, through some transformation, we need to get rid of $\beta$ in the lower part of the error vector in (A8), but allow $y$ to enter the upper part. To this end, we construct the following matrix (Hamilton, 1994, Ch. 12):
where we note that $|A| = 1$.
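One matrix with the required property, sketched in our notation (the exponent of the joint pdf is a quadratic form in the stacked error vector $w = \big((\beta-b)',\,(y-Q\beta)'\big)'$):
\[ A = \begin{bmatrix} I_k - \Psi Q'(Q\Psi Q' + \Sigma)^{-1}Q & -\Psi Q'(Q\Psi Q' + \Sigma)^{-1} \\ Q & I_n \end{bmatrix}, \]
which indeed satisfies $|A| = 1$.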
Using $A$ as the transform matrix, we define:
where $\hat{V}$, a function of $Q$, $\Sigma$ and the prior covariance, is the covariance matrix of the transformed upper block.
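A sketch of what the transformation yields, writing $S \equiv Q\Psi Q' + \Sigma$ (our shorthand): the transformed error vector is
\[ A\begin{bmatrix} \beta - b \\ y - Q\beta \end{bmatrix} = \begin{bmatrix} \beta - m(y) \\ y - Qb \end{bmatrix}, \qquad m(y) = b + \Psi Q' S^{-1}(y - Qb), \]
whose two blocks are uncorrelated, and the covariance of the upper block is
\[ \hat{V}(Q, \Sigma, \Psi) = \Psi - \Psi Q' S^{-1} Q\Psi = \big(\Psi^{-1} + Q'\Sigma^{-1}Q\big)^{-1}. \]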
Therefore, (A7) can be rearranged, noting $|A^{-1}| = |A| = 1$, as follows:
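Under this transformation the quadratic form in the exponent decomposes (a sketch, using the quantities defined above):
\[ (\beta-b)'\Psi^{-1}(\beta-b) + (y-Q\beta)'\Sigma^{-1}(y-Q\beta) = \big(\beta - m(y)\big)'\hat{V}^{-1}\big(\beta - m(y)\big) + (y - Qb)'S^{-1}(y - Qb), \]
so that $f(\beta, y)$ factors into a term in $\beta - m(y)$ and a term in $y - Qb$, with $|\Psi|\,|\Sigma| = |\hat{V}|\,|S|$.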
From (A1), (A2) and (A3), we have the unconditional distribution:
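In the notation above, this unconditional distribution would be:
\[ y \sim N\big(Qb,\; Q\Psi Q' + \Sigma\big). \]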
With (A15), it is easy to imply from (A14) the following:
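Combining the factorisation with the marginal distribution of $y$ gives the conditional distribution of $\beta$; in our sketched notation it reads:
\[ \beta \mid y \sim N\Big(\big(\Psi^{-1} + Q'\Sigma^{-1}Q\big)^{-1}\big(\Psi^{-1}b + Q'\Sigma^{-1}y\big),\; \big(\Psi^{-1} + Q'\Sigma^{-1}Q\big)^{-1}\Big). \]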
To assess the posterior in (A16), we use our best knowledge regarding the prior and the views.
Recall that in the model setting our best knowledge based on the public information G leads to the following prior belief:
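A sketch in common BL notation (the symbols are our labels): let $\mu$ denote the unknown expected-return vector, $\pi$ the prior (e.g. equilibrium-implied) mean, and $\Phi$ the prior covariance; then
\[ \mu \sim N(\pi, \Phi). \]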
After examining the private information H, we form the following updated views:
where $P$ is the view structure; $v$ is the view forecast vector; and we assume the view errors are zero-mean Gaussian, independent of the prior uncertainty.
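With $\Omega$ as our label for the view error covariance, the views can be sketched as:
\[ v = P\mu + \varepsilon_v, \qquad \varepsilon_v \sim N(0, \Omega). \]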
Therefore, conditional on a realisation of the expected-return vector $\mu$, we have:
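In parallel with (A4), the sketch is:
\[ v \mid \mu \sim N(P\mu, \Omega). \]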
Substituting these quantities for their general-case counterparts in (A16), we reach:
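Mapping $\beta \to \mu$, $y \to v$, $Q \to P$, $b \to \pi$, $\Psi \to \Phi$ and $\Sigma \to \Omega$ in (A16), our sketch of the result is:
\[ \mu \mid v \sim N\Big(\big(\Phi^{-1} + P'\Omega^{-1}P\big)^{-1}\big(\Phi^{-1}\pi + P'\Omega^{-1}v\big),\; \big(\Phi^{-1} + P'\Omega^{-1}P\big)^{-1}\Big). \]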
Finally, using our eventual conviction about the mean of the return distribution, we reach the following posterior belief:
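For reference, the posterior mean in the sketch above can equivalently be written in the more familiar BL form (an algebraic identity, not necessarily the paper's exact statement):
\[ E[\mu \mid v] = \pi + \Phi P'\big(P\Phi P' + \Omega\big)^{-1}\big(v - P\pi\big). \]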
This completes the proof. □