Learning and Generalisation, pp. 285–310

# Learning Under an Intermediate Family of Probabilities


## Abstract

The two preceding chapters have addressed what may be thought of as the two “extreme” situations in learning, namely: the case where the learning samples are generated by a known fixed probability (Chapter 6), and the case where the learning samples are generated by a probability measure that is itself *completely* unknown (Chapter 7). In the present chapter, we study the intermediate situation, namely: the learning samples are generated by a probability measure *P* belonging to a family 𝒫 that is neither a singleton set, nor the set of *all* probability measures. If no assumptions at all are made regarding 𝒫 (what might be termed the “general” case), very few results are available. Some such results are summarized in Section 8.1. However, the situation is markedly different if some metric structure is imposed on 𝒫. For this purpose, we define a metric ρ on the set 𝒫* of all probability measures on a measurable space (*X*, 𝒮), as follows: if *P*, *Q* ∈ 𝒫*, then
$$ \rho(P, Q) := \sup_{A \in \mathcal{S}} \left| P(A) - Q(A) \right| $$
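As a concrete illustration (not part of the chapter), the metric ρ can be computed explicitly when *X* is a finite set and 𝒮 is its power set: the supremum over events is attained at the event A = {x : P(x) > Q(x)}, and ρ(P, Q) equals half the ℓ¹ distance between the two probability vectors. The sketch below assumes finite discrete measures given as probability vectors; the function names are hypothetical.

```python
def rho(p, q):
    """Total variation metric rho(P, Q) for finite discrete measures,
    given as probability vectors p, q over the same finite sample space.
    Uses the identity sup_A |P(A) - Q(A)| = (1/2) * sum_i |p_i - q_i|."""
    assert abs(sum(p) - 1.0) < 1e-12 and abs(sum(q) - 1.0) < 1e-12
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))


def rho_via_event(p, q):
    """Same metric, computed directly from the maximising event
    A = {i : p_i > q_i}, i.e. as P(A) - Q(A)."""
    return sum(pi - qi for pi, qi in zip(p, q) if pi > qi)
```

For example, with P = (0.5, 0.5) and Q = (0.9, 0.1), both functions return 0.4: no event can separate the two measures by more than that amount.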

### Keywords

Entropy · Hull

## Copyright information

© Springer-Verlag London 2003