This chapter is concerned with two important elaborations. In Section 3.1 we consider 'extending the argument', an elaboration which can be used whenever You feel that a target probability would be easier to measure if You had some extra information. Bayes' theorem, which we derive in Section 3.2, can be seen as having the opposite effect: it is useful when You feel You could measure probabilities accurately if some specific information were excluded from H. Its primary function is to describe how Your probabilities are affected by gaining new information, and this role is explored in Section 3.3. Section 3.4 considers the effect on Bayes' theorem when one or more of its component probabilities are zero, and shows the dangers of the kind of prejudice that unreasonably asserts a proposition to be impossible. Finally, in Section 3.5 we examine a real application in some detail.
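The updating role of Bayes' theorem described above, and the hazard of a zero prior, can both be illustrated with a minimal numerical sketch. The urn-of-balls setting and the `bayes_update` helper below are illustrative assumptions, not taken from the chapter: two hypotheses about an urn's composition are entertained, a white ball is drawn, and prior probabilities are revised in proportion to the likelihood of that observation. Note that a hypothesis assigned prior probability zero remains at zero no matter what is observed.

```python
def bayes_update(priors, likelihoods):
    """Revise prior probabilities over hypotheses in the light of one
    observation, by Bayes' theorem: posterior is proportional to
    prior times likelihood, renormalised to sum to one."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical urns: A holds 3 white balls out of 4, B holds 1 out of 4.
# P(white | A) = 0.75, P(white | B) = 0.25.
likelihoods = [0.75, 0.25]

# With equal priors, drawing a white ball shifts belief towards A.
print(bayes_update([0.5, 0.5], likelihoods))   # A's posterior rises to 0.75

# A dogmatic prior of zero on A can never be revised upwards:
print(bayes_update([0.0, 1.0], likelihoods))   # A's posterior stays at 0.0
```

The second call shows the point of Section 3.4: once a proposition is declared impossible, Bayes' theorem can never resurrect it, however strongly the evidence favours it.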
Keywords: Sequential Learning, Prior Belief, Target Probability, True Probability, White Ball