
Abstract

We begin our discussion of probabilities with the definition of relative frequency, because this notion is very concrete and probabilities are, in a sense, idealizations of relative frequencies.
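
To make this idealization concrete, here is a minimal simulation sketch (ours, not part of the chapter; the function name `relative_frequency` is made up for illustration): it rolls a fair die repeatedly and prints the relative frequency of sixes, which settles near the probability 1/6 as the number of rolls grows.

```python
import random

def relative_frequency(trials: int, seed: int = 0) -> float:
    """Fraction of rolls of a fair die that come up six, out of `trials` rolls."""
    rng = random.Random(seed)
    sixes = sum(1 for _ in range(trials) if rng.randint(1, 6) == 6)
    return sixes / trials

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} rolls: relative frequency of a six = {relative_frequency(n):.4f}")
# As n grows, the printed values settle near the probability 1/6 ≈ 0.1667.
```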


Notes

  1. If S is a finite set, then the collection \(\mathcal{F}\) of events is taken to be the collection of all subsets of S. If S is infinite, then \(\mathcal{F}\) must be a so-called sigma-field, which we do not discuss here.

  2. It is customary to omit the braces in writing the probabilities of the elementary events, such as writing \(P(s_1)\) instead of the correct, but clumsy, \(P(\{s_1\})\).

  3. See https://en.wikipedia.org/wiki/Monty_Hall_problem (a brief simulation of the problem appears after these notes).

  4. Note that, terminology notwithstanding, it is the events together with their probabilities that are here defined to be independent, not just the events themselves. (A small numerical illustration follows these notes.)

  5. Named after one of the founders of the theory of probability, Jacob Bernoulli (1654–1705), the most prominent member of a Swiss family of at least six famous mathematicians.

  6. Note, however, that such a multiplication rule does hold for expected values: in this case, the expected number of double sixes in n throws is n times the expected number in one throw, as we shall see in Section 6.1. (A quick simulation check of this appears after these notes.)

  7. This theorem does not quite make \(P(A \mid B)\) for fixed B into a probability measure on B in place of S though, because in Definition 4.1.2 \(P(A)\) was defined for events \(A \subset S\), but in \(P(A \mid B)\) we do not need to have \(A \subset B\). See Corollary 4.4.1, however.

  8. Because of this theorem, a few authors use the notation \(P_B(A)\) for \(P(A \mid B)\) to emphasize the fact that \(P_B\) is a probability measure on S and that in \(P(A \mid B)\) we do not have a function of a conditional event \(A \mid B\) but a function of A. In other words, \(P(A \mid B) =\) (the probability of A) given B, and not the probability of (A given B). Conditional events have been defined but have not gained popularity. (A short verification that \(P_B\) is a probability measure appears after these notes.)

  9. We usually omit the braces or union signs around compound events when there are already parentheses there, and separate the components with commas. Thus we write \(P(CC, C\overline{C}, \overline{C}C)\) rather than \(P(\{CC, C\overline{C}, \overline{C}C\})\) or \(P(CC \cup C\overline{C} \cup \overline{C}C)\).

  10. Tversky’s Legacy Revisited, by Keith Devlin, www.maa.org/devlin/devlin_july.html, 1996.
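
For note 3, a rough simulation sketch of the Monty Hall problem (our own illustration, not code from the chapter; the function name `monty_hall` is made up): always switching wins about two thirds of the time, never switching about one third.

```python
import random

def monty_hall(trials: int, switch: bool, seed: int = 0) -> float:
    """Fraction of games won over `trials` plays when the player always (or never) switches."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car, pick = rng.choice(doors), rng.choice(doors)
        # The host opens a door that hides a goat and was not picked.
        opened = rng.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(100_000, switch=True))   # ≈ 0.667
print(monty_hall(100_000, switch=False))  # ≈ 0.333
```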
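
For note 4, a small numerical illustration (ours, with invented probability weights): the events A = "even" and B = "at most two" on one die are independent under the uniform assignment of probabilities but not under a biased one, so independence really is a joint property of the events and the probability measure.

```python
from fractions import Fraction

def prob(event, weights):
    """P(event) under the given probability weights on the outcomes 1..6."""
    return sum(weights[s] for s in event)

A = {2, 4, 6}   # "even"
B = {1, 2}      # "at most two"

uniform = {s: Fraction(1, 6) for s in range(1, 7)}
biased = {1: Fraction(1, 2), **{s: Fraction(1, 10) for s in range(2, 7)}}

for name, w in (("uniform", uniform), ("biased", biased)):
    independent = prob(A & B, w) == prob(A, w) * prob(B, w)
    print(f"{name}: P(AB) = {prob(A & B, w)}, P(A)P(B) = {prob(A, w) * prob(B, w)}, "
          f"{'independent' if independent else 'not independent'}")
```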
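
For note 6, a quick simulation check (ours; n = 24 throws is just an example value): the average number of double sixes observed in n throws of two dice comes out close to n/36, that is, n times the expected number in a single throw.

```python
import random

def average_double_sixes(n: int, reps: int = 20_000, seed: int = 0) -> float:
    """Average, over `reps` repetitions, of the number of double sixes seen in n throws of two dice."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        for _ in range(n):
            d1, d2 = rng.randint(1, 6), rng.randint(1, 6)
            total += (d1 == 6 and d2 == 6)  # count this throw if both dice show six
    return total / reps

n = 24
print(average_double_sixes(n))  # simulated average number of double sixes
print(n / 36)                   # expected number, n times 1/36 ≈ 0.667
```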
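
For notes 7 and 8, a short verification, using the standard definition \(P(A \mid B) = P(A \cap B)/P(B)\) for \(P(B) > 0\), that \(P_B(A) = P(A \mid B)\), viewed as a function of the event \(A \subset S\), satisfies the axioms of a probability measure on S (the sets \(A_i \cap B\) below are disjoint whenever the \(A_i\) are):

```latex
\begin{align*}
P_B(A) &= \frac{P(A \cap B)}{P(B)} \ge 0
  && \text{since } P(A \cap B) \ge 0 \text{ and } P(B) > 0,\\[2pt]
P_B(S) &= \frac{P(S \cap B)}{P(B)} = \frac{P(B)}{P(B)} = 1,\\[2pt]
P_B\Bigl(\bigcup_i A_i\Bigr)
  &= \frac{P\bigl(\bigcup_i (A_i \cap B)\bigr)}{P(B)}
   = \sum_i \frac{P(A_i \cap B)}{P(B)}
   = \sum_i P_B(A_i)
  && \text{for disjoint } A_1, A_2, \ldots
\end{align*}
```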


Copyright information

© 2016 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Schay, G. (2016). Probabilities. In: Introduction to Probability with Statistical Applications. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-30620-9_4
