Abstract
Sharp upper bounds are proved for the probability that a standardized random variable takes on a value outside a possibly asymmetric interval around 0. Six classes of distributions for the random variable are considered, namely the general class of ‘distributions’, the class of ‘symmetric distributions’, of ‘concave distributions’, of ‘unimodal distributions’, of ‘unimodal distributions with coinciding mode and mean’, and of ‘symmetric unimodal distributions’. In this way, results by Gauß (Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores 5:1–58, 1823), Bienaymé (C R Hebd Séance Acad Sci Paris 37:309–24, 1853), Chebyshev (Journal de mathématiques pures et appliqués (2) 12:177–184, 1867), and Cantelli (Atti del Congresso Internazionale dei Matematici 6:47–59, 1928) are generalized. For some of the known inequalities, such as the Gauß inequality, an alternative proof is given.
1 Introduction
Let X be a random variable with finite mean \(\mu \) and finite, positive variance \(\sigma ^2\). We are interested in sharp upper bounds on the tails of the distribution of X. Without loss of generality and in order to simplify notation, we will restrict attention to the standardized version of X, which is denoted by \(Z=(X-\mu )/\sigma .\) Hence, we are interested in sharp upper bounds on the probability that a standardized random variable falls outside an arbitrary interval containing the value zero, i.e. sharp upper bounds on
We may assume \(0<v \le u\). We also study the one-sided probability \(P(Z \ge v)\), which corresponds to (1) with \(u=\infty \).
Of course, upper bounds on probability (1) depend on the class of distributions assumed for Z. We shall focus on broad, nonparametric classes of distributions. The corresponding upper bounds are often applied in theoretical considerations and proofs, typically with \(u=v\). However, as Rougier et al. (2013), p.861, argue, they have practical value as well when one does not want strong model assumptions. Let us mention two examples, one with u equal to infinity and the other one with general u and v.
Clarkson et al. (2009) study the Receiver Operating Characteristic (ROC) curve, which they define as the power of a simple versus simple Neyman–Pearson hypothesis test viewed as a function of the significance level, more precisely
where F and G are the continuous distribution functions of the test statistic under the null hypothesis and the alternative, respectively. They are interested in the Area Under Curve (AUC), which equals
with X and Y independent random variables with continuous distribution functions F and G, respectively. For Gaussian X and Y with variance 1 and \(E(Y-X)= 2\sqrt{6}\), the AUC equals \(\Phi (2\sqrt{3})\approx 0.99973\) (and not 0.99966 as at page 468 of Clarkson et al. (2009)). If Gaussianity is weakened to unimodality of X and Y with at least one of them being strongly unimodal, then Clarkson et al. (2009) obtain 1/2 as a lower bound to the AUC; see their page 468, Corollary 6 and Remark 7. However, our bound (27) from Theorem 5.2 with \(v=2\sqrt{3}\) yields \(113/117 \approx 0.96581\) as a lower bound. This shows that at least some of the bounds presented here are strong enough to be valuable in applications.
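As a numerical sanity check of this example, the sketch below evaluates \(\Phi (2\sqrt{3})\) via the error function and the unimodal lower bound \(1-4/(9(1+v^2))\); the closed form of the bound is taken from Theorem 5.2 (its branch for \(v \ge \sqrt{5/3}\)), and the helper name `std_normal_cdf` is ours.

```python
from math import erf, sqrt

def std_normal_cdf(x: float) -> float:
    """Standard normal distribution function Phi, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# standardized difference: E(Y-X) = 2*sqrt(6), var(Y-X) = 2, so v = 2*sqrt(3)
v = 2.0 * sqrt(3.0)
auc_gaussian = std_normal_cdf(v)                 # Gaussian case: Phi(2*sqrt(3))
auc_lower = 1.0 - 4.0 / (9.0 * (1.0 + v * v))    # unimodal lower bound 113/117
```

The Gaussian AUC evaluates to roughly 0.99973 and the unimodal lower bound to 113/117, in line with the text.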
Sharp upper bounds on probability (1) are also relevant in statistical process control. Indeed, attainable upper bounds on (1) determine the maximum risk for producing products outside specification limits, as the distribution of the quality characteristic X is typically not (perfectly) centred within the specification limits. This kind of practical concern was our main motivation for studying upper bounds on (1); see for instance (Van den Heuvel and Ion 2003) and the PhD thesis (Ion 2001).
Traditionally, focus has been on upper bounds on the probability in (1) for symmetric intervals with \(u=v.\) The Bienaymé–Chebyshev inequality, which may be stated and proved by
was first obtained by Bienaymé (1853), cf. page 171 of the 1867 reprint, and later by Chebyshev (1867); see also Heyde and Seneta (1972), page 682. Note that in (2) equality holds if and only if \(|Z|\) takes on the values 0 and v only.
Amazingly, Gauß (1823), translation (Stewart 1995), had already proved the sharp inequality
for random variables Y with a unimodal distribution with mode at zero and with \(E(Y^2) = 1\). Sellke and Sellke (1997) describe the history of the Gauß inequality and refer to Pukelsheim (1994) for three proofs, namely Gauß’s one, a variation on it, and the one from exercise 4 on page 256 of Cramér (1946). A detailed description of Gauß’s proof is given also by Hooghiemstra and Van Mieghem (2015). We present a fourth proof of the Gauß inequality in Section 4. However, as the mean is typically known or estimated in the practice of statistical process control (and in general can be estimated more accurately than the mode), we shall focus on standardized random variables Z with \(EZ=0\). For the asymmetric case with \(u \ne v\) in (1), this mean zero condition complicates results and proofs considerably; see Theorems 6.1 and 6.3.
The upper bound in (2) follows from the classical Markov inequality \(P(|Z| \ge v) = E(\textbf{1}_{[|Z|^r/v^r \ge 1]}) \le E|Z|^r/v^r\) with \(r=2.\) Generalizations of the Gauß inequality in Markov style have been presented by Camp (1922), Meidell (1922), and Theil (1949). DasGupta (2000) has exploited these Markov–Gauß–Camp–Meidell inequalities to derive properties of, e.g. the zeta function. Sellke (1996) and Sellke and Sellke (1997) extend this line of research by determining sharp upper bounds on \(P(Z \ge v)\) in terms of \(Eg(|Z|)\) for unimodal random variables Z and nondecreasing functions g on \([0,\infty ).\) Bickel and Krieger (1992) derive improvements of (2) with Z an average and of (3) for symmetric unimodal densities with a bound on the derivative.
For symmetric distribution functions of X, an upper bound on the one-sided probability \(P(Z\ge v)\) is \(1/(2v^2)\), which can be obtained by the inequality in (2); see Theorem 3.1. Cantelli (1928) proved the upper bound \(1/(1+v^2)\) on this tail probability for the class of all distribution functions of X. Camp (1922) provided an upper bound when the distribution function of X is symmetric and unimodal.
Bhat and Kosuru (2022) presented bounds for linear combinations of tail probabilities. There are many generalizations of the Bienaymé–Chebyshev inequality to vector valued random variables; see, e.g. Ogasawara (2019) and Ogasawara (2020). Tail probabilities for sums (and other functions) of i.i.d. random variables are called concentration inequalities. The extensive literature on concentration inequalities is reviewed in the elegant books Boucheron et al. (2013) and Lugosi (2009).
In each of the subsequent Sections, we shall discuss sharp inequalities for a specific class of distributions of the standardized random variable Z. The only exception is Section 4, where we do not consider standardized random variables. There we shall present a one-sided, sharp version of Gauß’s inequality for concave distributions, more specifically, for distributions on the nonnegative half line with a mode at zero and a concave distribution function on this half line. As an immediate consequence of this one-sided version, the original Gauß inequality is also presented in this Section. Our proof is based on Khintchine’s representation of unimodal densities, Khintchine (1938), and Jensen’s inequality, Jensen (1906), as are our proofs of most results for unimodal distributions. A formulation and proof of Khintchine’s representation may be found in Lemma A.1 in the Appendix.
The results on upper bounds for the probability in (1) as discussed in this article are summarized in Table 1, where we mention only the most relevant cases of the inequalities. The complete versions of the inequalities with all special cases and proofs may be found in the text. Reference is made to the corresponding theorems that are proved in the present article. Theorems 3.1, 3.2, 4.1, 5.2, 5.3, 5.5, 6.1 and 6.3 seem to be new. For known results, as in Theorems 2.2, 4.2, 7.1 and 7.2, alternative proofs are given. It is also shown that all inequalities are sharp, by constructing random variables that attain the bounds. It should be noted that the results on probability (1) imply the results for upper bounds on the probability in (2).
2 All distributions
Here, we discuss the inequalities from the first row of Table 1. The standardized version \(Z=(X-\mu ) / \sigma \) of the random variable X has mean 0 and variance 1. Apart from this standardization, the distribution of Z is arbitrary within this section. We will start with the one-sided analogue of the Bienaymé–Chebyshev inequality, which is referred to as Cantelli’s inequality. It is given by formula (19) in Cantelli (1928), and it reads as follows.
Theorem 2.1
(Cantelli’s inequality) Let Z be a random variable with mean 0 and variance 1. For any \(v\ge 0\), the inequality
holds. This inequality is sharp and for positive v equality is uniquely attained by
Proof
According to the hint from Problem 1.5.5 of Billingsley (1995), the Bienaymé–Chebyshev inequality yields
In order to obtain equality in (6), the equality
has to hold almost surely, or equivalently, Z has to have support \(\{-1/v, v \}\) and hence, has to satisfy (5). \(\square \)
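The two-point law below is reconstructed from the support \(\{-1/v, v\}\) identified in the proof (display (5) itself is not reproduced here); the sketch checks, for one value of v, that this law is standardized and attains Cantelli’s bound \(1/(1+v^2)\).

```python
# Two-point law with mean 0 and variance 1 supported on {-1/v, v}:
# P(Z = v) = 1/(1+v^2),  P(Z = -1/v) = v^2/(1+v^2).
v = 2.0
p = 1.0 / (1.0 + v * v)        # P(Z = v)
q = v * v / (1.0 + v * v)      # P(Z = -1/v)

mean = p * v + q * (-1.0 / v)            # should be 0
second_moment = p * v * v + q / (v * v)  # should be 1
tail = p                                 # P(Z >= v)
cantelli_bound = 1.0 / (1.0 + v * v)
```

The checks below confirm standardization and equality in (4) for this law.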
The asymmetric two-sided analogue of the Bienaymé–Chebyshev inequality from (2) will be presented in full detail. Its simple proof is based on the Bienaymé–Chebyshev inequality itself.
Theorem 2.2
Let Z have mean 0 and variance 1, and assume \(0<v \le u\). Then,
holds. Under the additional condition \( u \le 1/v\), the trivial inequality
holds with equality for
In the case of
inequality (7) is sharp and equality holds for
In the case of \(v+2/v \le u\), the inequality
holds with equality for
Proof
By the Bienaymé–Chebyshev inequality, we obtain
Straightforward computations show that (11) yields a well-defined random variable provided that (10) holds, and that this random variable has mean zero and unit variance, and satisfies equality in (7).
Similarly, the random variable from (9) is well-defined and attains equality in the trivial inequality (8), in view of \(\sqrt{u/v} \ge u\) iff \(uv\le 1\) iff \(\sqrt{v/u} \ge v\).
For \(u \ge v+2/v\), the probability in (7) is bounded by \(P(Z\le -v-2/v\ \ \textrm{or}\ \ Z\ge v).\) Applying inequality (7) with u replaced by \(v+2/v\) results in (12). Finally, straightforward computations show that (13) defines a proper random variable with zero mean and unit variance that attains equality in (12). \(\Box \)
For v fixed, the minimum of the right hand side of (7) over u is attained at \(u=v+2/v\) and equals \(1/(1+v^2).\) Consequently, Cantelli’s inequality (4) is a special case of (7); cf. (12) and (13).
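Assuming the right hand side of (7) has the form \((4+(u-v)^2)/(u+v)^2\), as the Bienaymé–Chebyshev argument with the shifted variable \(Z + (u-v)/2\) and threshold \((u+v)/2\) suggests (the display itself is not reproduced here), a grid search illustrates that the minimum over u sits at \(u=v+2/v\) with value \(1/(1+v^2)\):

```python
# Reconstructed right hand side of (7), an assumption explained above:
# Bienaymé-Chebyshev applied to Z + (u-v)/2 with threshold (u+v)/2.
def bound7(u: float, v: float) -> float:
    return (4.0 + (u - v) ** 2) / (u + v) ** 2

v = 1.5
u_star = v + 2.0 / v                 # claimed minimizer
cantelli = 1.0 / (1.0 + v * v)       # claimed minimal value

# crude grid search over u > v to locate the minimum numerically
grid = [v + k * 1e-3 for k in range(1, 20001)]
u_min = min(grid, key=lambda u: bound7(u, v))
```

The grid minimizer agrees with \(u^* = v+2/v\) up to the grid spacing, and the value there matches Cantelli’s bound.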
Selberg (1940) seems to be the first to have formulated a version of Theorem 2.2. Ferentinos (1982) gives a proof of (7) that is less complicated than Selberg’s, but it is still more complicated than ours due to its cumbersome notation.
Note that (7) with \(u=v\) reduces to the famous Bienaymé–Chebyshev inequality. The inequalities in this section are all based on the inequality of Bienaymé–Chebyshev itself, as is the first one of the next section. However, this inequality does not always seem to be helpful if the class of distributions of Z (or X) is restricted.
3 Symmetric distributions
The inequalities for symmetric distributions from the second row of Table 1 will be discussed in this section. The symmetry implies \(P(Z \ge v) = P(|Z| \ge v)/2\), and hence, the Bienaymé–Chebyshev inequality yields the following result.
Theorem 3.1
Let Z be symmetric with mean 0 and variance 1. For \(v\ge 0\) with \(w=\max \{v,1\}\), the inequality
holds, with equality if
In view of \(2w^2=2(\max \{v,1\})^2 \ge 1+v^2\), this inequality improves the bound from Cantelli’s inequality from Theorem 2.1, as it should. For symmetric random variables, we obtain the following bound for asymmetric intervals.
Theorem 3.2
Let the standardized random variable Z be symmetric. Consider any positive u and v with \(v \le u\) and discern four cases.
For \(0 < v \le u \le 1\), the inequality
holds with equality if Z puts mass 1/2 at both 1 and \(-1\).
For \(0 < v \le u \le {\sqrt{2}}\, v, 1 \le u,\) the inequality
is valid with equality if
holds. For \(0< v \le 1 < u, {\sqrt{2}}\, v \le u,\) the inequality
is valid with equality if
holds. For \(1 \le v \le u, {\sqrt{2}}\, v \le u,\) the inequality
is valid with equality if
holds.
Note that any choice of (u, v) with \(0<v \le u\) belongs to at least one of the four cases in this Theorem.
Proof
To prove these inequalities, we determine the supremum of the left hand side of (14) over all symmetric random variables Z with mean 0 and variance at most 1. Let Z be such a random variable and define the symmetric random variable Y by
Note that Y is a discrete, symmetric random variable with probability mass at \({{\mathcal {V}}} = \{-u, -v, 0, v, u\}\) only and with \(\textrm{var}(Y) = E(Y^2) \le E(Z^2) = \textrm{var}(Z) \le 1\). Furthermore,
holds, and we may conclude that the supremum of (18) over Z is attained by a symmetric discrete random variable Y taking its values at \({\mathcal {V}}\) and with \(E(Y^2) \le 1\).
We introduce
and note that the supremum of (18) equals the maximum of
over the convex polygon
In this linear programming problem, the maximum is attained at one of the vertices of polygon \({\mathcal {Q}}\). We discern three cases.
\(\textbf{A}.\) \(\ 0 < v \le u \le 1\). Here, \({\mathcal {Q}}\) reduces to the triangle
$$\begin{aligned} {{\mathcal {Q}}} = \{(p,q) \mid p \ge 0 \ , \ q \ge 0 \ , \ p+q \le \frac{1}{2} \}, \end{aligned}$$the maximum 1 of the map \((p,q) \mapsto 2p+q\) on \({\mathcal {Q}}\) is attained at \((p,q) = (\tfrac{1}{2},0)\), and we get inequality (14).
\(\textbf{B}.\) \(\ 0< v \le 1 < u\). In this case, the polygon \({\mathcal {Q}}\) is a quadrangle with vertices
$$\begin{aligned} (0, 0), \left( \frac{1}{2u^2}, 0\right) , \left( 0, \tfrac{1}{2} \right) , \left( \frac{1-v^2}{2(u^2-v^2)}, \frac{u^2-1}{2(u^2-v^2)} \right) . \end{aligned}$$(19) The corresponding values of the function \((p,q) \mapsto 2p+q\) are
$$\begin{aligned} 0, \ \ \frac{1}{u^2}, \ \ \frac{1}{2}, \ \ \frac{1}{2} + \frac{1-v^2}{2(u^2-v^2)}. \end{aligned}$$Computation shows that the fourth value is larger than the second value and hence largest, iff \({\sqrt{2}}\, v \le u\) holds. Note that this yields inequality (16) and inequality (15) under the additional restriction \(v \le 1\).
\(\textbf{C}.\) \(\ 1 \le v \le u\). Polygon \({\mathcal {Q}}\) reduces to a triangle here with vertices
$$\begin{aligned} (0, 0), \left( 0, \frac{1}{2v^2}\right) , \left( \frac{1}{2u^2}, 0 \right) . \end{aligned}$$The corresponding values of the function \((p,q) \mapsto 2p+q\) are
$$\begin{aligned} 0, \ \ \frac{1}{2v^2}, \ \ \frac{1}{u^2}. \end{aligned}$$Computation shows that this implies inequality (17) and inequality (15) under the additional restriction \(v \ge 1\).
Straightforward computation shows that equalities are attained by the random variables mentioned in the Theorem. \(\Box \)
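The linear program of the proof can be checked mechanically by vertex enumeration; the polygon \({\mathcal {Q}}\) below is reconstructed from the vertices listed in cases A, B, and C as \(\{(p,q) \mid p,q \ge 0,\ p+q \le 1/2,\ u^2p+v^2q \le 1/2\}\), and `max_tail` is our own helper name.

```python
# Maximize 2p+q over the (reconstructed) polygon Q by enumerating the
# intersection points of all constraint pairs and keeping feasible ones.
from itertools import combinations

def max_tail(u: float, v: float) -> float:
    # each constraint written as a*p + b*q <= c
    cons = [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0),
            (1.0, 1.0, 0.5), (u * u, v * v, 0.5)]
    best = 0.0
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel constraint boundaries
        p = (c1 * b2 - c2 * b1) / det
        q = (a1 * c2 - a2 * c1) / det
        # keep only vertices satisfying every constraint
        if all(a * p + b * q <= c + 1e-9 for a, b, c in cons):
            best = max(best, 2.0 * p + q)
    return best
```

For \(u=v=0.9\) (case A) the maximum is 1, and for \(u=2, v=0.8\) (case B with \(\sqrt{2}\,v \le u\)) it equals \(1/2 + (1-v^2)/(2(u^2-v^2))\), matching the fourth vertex value.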
4 Concave distribution functions
The upper bounds from the preceding sections on the probability in (1) are rather large. It is to be expected that restriction of the class of completely unknown distributions and the class of symmetric distributions to smaller classes of distributions will yield smaller upper bounds. In the next three sections, we will obtain sharp upper bounds for (1) over the class of unimodal distributions, the class of unimodal distributions with mean and mode coinciding, and the class of symmetric unimodal distributions, respectively. This unimodality assumption is not unrealistic, as it is a very natural assumption in several practical applications, like statistical process control.
A distribution is unimodal with mode at M if its corresponding distribution function is convex on \((-\infty ,M)\) and concave on \([M,\infty ).\) Consequently, a unimodal distribution has at most one atom, which may occur only at the mode M. If a unimodal distribution is uniform on its support with an atom at one of its boundary points, we will call it a one-sided boundary-inflated uniform distribution; cf. Klaassen et al. (2000). We shall repeatedly use a representation theorem for unimodal distributions of Khintchine, Lemma A.1, Khintchine (1938). It characterizes unimodal distributions as mixtures of uniform distributions. The inequalities we will derive attain equality for mixtures of at most three uniforms, where often one of these uniforms is degenerate, i.e. a point mass. Unimodal distributions with their mode at \(M=0\) and all their mass on the nonnegative half line \([0, \infty )\) have a distribution function that is concave on \([0,\infty )\) and vanishes on \((-\infty ,0).\) They have a nonincreasing density on \((0,\infty ).\) This special class of distributions is considered in the present section. For this class, a one-sided version of the Gauß inequality holds. The Gauß inequality itself is an immediate consequence of it and will be presented also.
Theorem 4.1
(One-sided Gauß inequality) Let the random variable Y have second moment \(E(Y^2) =1\) and let its distribution function be concave on \([0,\infty )\) and 0 on \((-\infty ,0).\) For all nonnegative v, the inequality
is valid. For \(0 \le v \le 2/{\sqrt{3}}\), equality holds iff Y has a uniform distribution on \([0,\sqrt{3}).\) For \(2/{\sqrt{3}} \le v\), equality holds iff Y has a one-sided boundary-inflated uniform distribution on [0, 3v/2) with mass \(1-4/(3v^2)\) at 0.
Proof
By Khintchine’s representation from Lemma A.1, there exist a probability \(p_0\) and a distribution function F on the positive half line, such that \(P(Y=0)=p_0\) holds and the density of Y at y on the positive half line equals \((1-p_0)\int _0^\infty c^{-1}{} \textbf{1}_{(0,c)}(y)\, dF(c).\) By Fubini, it follows that
holds and that for positive v
is valid. Without loss of generality, we assume that F puts positive mass on \((v, \infty ).\) Let us write
As the map \(c \mapsto (1-v/c)\) is strictly concave on \([v,\infty )\), (21) implies by Jensen’s inequality
with equalities iff F is degenerate at \(c_v.\) This means that we may restrict attention to those Y with mass \(p_0\) at 0 for which F is degenerate at some \(c_v > v.\) For such Y equation (20) implies
which together with (22) yields
As this function of \(c_v\) attains its maximum at \(c_v = 3v/2 > v,\) we obtain
with equality iff Y has the one-sided boundary-inflated uniform distribution as described in the Theorem. However, for \(c_v = 3v/2\) equation (23) becomes \(1-p_0 = 4/(3v^2),\) which for \(v < 2/\sqrt{3}\) leads to an impossible, negative value of \(p_0.\) This means that for \(v < 2/\sqrt{3}\) the mass at 0 vanishes and that (22), (23), and (24) hold with \(c_v = \sqrt{3}.\) \(\Box \)
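The two equality distributions of Theorem 4.1 can be verified in closed form: for a uniform distribution on \([0,c)\) one has \(EY^2 = c^2/3\), and \(P(Y \ge v) = 1 - v/c\) for \(0 \le v \le c\). The sketch below does so numerically; the helper names are ours.

```python
from math import sqrt

# Small v: Y uniform on [0, sqrt(3)), so E(Y^2) = 3/3 = 1.
c = sqrt(3.0)
second_moment_small = c * c / 3.0

def tail_small(v: float) -> float:
    """P(Y >= v) for Y uniform on [0, sqrt(3)), 0 <= v <= sqrt(3)."""
    return 1.0 - v / c

# Large v (v >= 2/sqrt(3)): mass 1 - 4/(3 v^2) at 0, the remaining
# weight 4/(3 v^2) spread uniformly on [0, 3v/2).
def check_large(v: float):
    w = 4.0 / (3.0 * v * v)        # weight of the uniform part
    c_v = 1.5 * v                  # right endpoint 3v/2
    second_moment = w * c_v * c_v / 3.0   # should equal 1
    tail = w * (1.0 - v / c_v)            # should equal 4/(9 v^2)
    return second_moment, tail
```

For example, for \(v=2\) the boundary-inflated law has unit second moment and tail probability \(4/(9v^2)=1/9\).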
About two centuries ago, Johann Carl Friedrich Gauß presented and proved a sharp upper bound on the probability \(P(X \ge v)\) for unimodal random variables X with mode at 0 and finite second moment in Sections 9 and 10 of Gauß (1823); for a translation from Latin into English see Stewart (1995). His result precedes the famous Bienaymé–Chebyshev inequality (2) by three decades. The Gauß inequality for large values of v has been given in (3). The complete inequality is the following.
Theorem 4.2
(Original Gauß inequality) Let the random variable Y have a unimodal distribution with mode at 0 and second moment \(E(Y^2) =1\). For all nonnegative v, the inequality
is valid. For \(0 \le v \le 2/{\sqrt{3}}\), equality holds if Y has a uniform distribution on \((-\sqrt{3},\sqrt{3}).\) For \(2/{\sqrt{3}} \le v\), equality holds if Y has mass \(1-4/(3v^2)\) at 0 and the rest of its mass uniformly distributed on \((-3v/2, 3v/2)\).
Proof
As \(|Y|\) has a concave distribution function on \([0,\infty )\), the one-sided Gauß inequality proves the Theorem. \(\Box \)
Our proof of the Gauß inequality via the Khintchine representation and Jensen’s inequality differs from the three proofs as presented by Pukelsheim (1994). Observe that the bound in (25) can also be described as the minimum of the two functions in there. Also note that Gauß considered only densities and hence could not prove the second bound in (25) to be sharp.
5 Unimodal distributions
In the preceding section on concave distributions, we have already defined the related class of unimodal distributions, which we will study in the remaining sections. The factor 4/9 from the onesided Gauß inequality of Theorem 4.1 will play a role in all these sections. For the proof of our extension of the Cantelli inequality from Theorem 2.1 to unimodal distributions, we shall use the following powerful result for unimodal distributions, which also shows the factor 4/9.
Theorem 5.1
(Vysochanskiĭ and Petunin inequality) Any unimodal random variable W with finite second moment satisfies
Proof
The proof of Vysočanskiĭ and Petunin (1980) and Vysochanskiĭ and Petunin (1983) has been smoothed by Pukelsheim (1994) and invokes Gauß’s inequality presented in Theorem 4.2. \(\Box \)
Actually, for \(\sqrt{8/3} \le v\) inequality (26) implies the Gauß inequality (25). Observe that the bound in (26) can be described as the minimum of the three expressions at its right hand side.
Here is our analogue of Cantelli’s inequality from Theorem 2.1.
Theorem 5.2
Let the distribution of the standardized random variable Z be unimodal. For any \(v\ge 0\), the inequality
holds, with equality for \(0 < v \le \sqrt{5/3}\) if Z has mass \((3 - v^2)/(3(1+v^2))\) at v and the rest of its mass, \(4 v^2/(3(1+v^2)),\) uniformly distributed on the interval \([-(3+v^2)/(2v),v]\), and with equality for \(\sqrt{5/3} \le v\) if Z has mass \((3v^2-1)/(3(1+v^2))\) at \(-1/v\) and the rest of its mass, \(4/(3(1+v^2)),\) uniformly distributed on the interval \([-1/v, (1+3v^2)/(2v)].\)
Proof
Applying Theorem 5.1 with \(W=Z + 1/v\) and \(w=v + 1/v\), we obtain (27) after some computation. Additional computation shows that the random variables mentioned in the theorem attain equality. \(\Box \)
Comparing this inequality (27) to Cantelli’s inequality from Theorem 2.1, we note the extra factor 4/9 for larger values of v; see also Table 1. Furthermore, note that the bound in (27) can be viewed as the minimum of the two functions in there, and that these functions intersect at \(v=\sqrt{5/3}\).
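The second equality distribution of Theorem 5.2, for \(v \ge \sqrt{5/3}\), can be checked in closed form; the helper `check` is ours and uses \(EU^2 = (a^2+ab+b^2)/3\) for a uniform random variable U on [a, b].

```python
# Atom at -1/v plus a uniform component on [-1/v, (1+3v^2)/(2v)],
# with the weights stated in Theorem 5.2.
def check(v: float):
    p = (3.0 * v * v - 1.0) / (3.0 * (1.0 + v * v))    # atom at -1/v
    w = 4.0 / (3.0 * (1.0 + v * v))                    # uniform weight
    a, b = -1.0 / v, (1.0 + 3.0 * v * v) / (2.0 * v)   # uniform support
    mean = p * a + w * (a + b) / 2.0                   # should be 0
    second = p * a * a + w * (a * a + a * b + b * b) / 3.0  # should be 1
    tail = w * (b - v) / (b - a)                       # P(Z >= v)
    return mean, second, tail
```

For \(v=2\), the law is standardized and its tail probability equals \(4/(9(1+v^2)) = 4/45\), the bound of (27).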
Next, we turn to the general case of asymmetric intervals around 0. The Vysochanskiĭ and Petunin inequality from Theorem 5.1 implies the following result.
Theorem 5.3
For \(v \ge \sqrt{5/3},\ \max \{v,(11v - 4 \sqrt{6v^2-10})/5 \} \le u \le v+2/v\), and any standardized unimodal random variable Z, the inequality
holds with equality if \(Z=(v-u)/2 + UY\), U and Y independent random variables, U uniform on the unit interval, and Y the generalized Bernoulli random variable
Proof
As in the proof of Theorem 2.2, we note
Applying the third inequality of (26) from Theorem 5.1, we obtain (28). Computation shows that the random variable Y and hence \(Z = (vu)/2 + UY\) are well defined under the conditions on u and v, and that this Z attains the bound. \(\Box \)
An immediate consequence of this Theorem is the following one, which is the main content of Theorem 2 of Vysočanskiĭ and Petunin (1980).
Corollary 5.4
For \(\sqrt{8/3} \le v\) and any standardized unimodal random variable Z, the inequality
holds with equality for \(Z=UY\), U and Y independent random variables, U uniform on the unit interval, and Y the generalized Bernoulli random variable
Instead of applying the Vysochanskiĭ and Petunin inequality from Theorem 5.1, we could choose the approach via Khintchine’s characterization of unimodal distributions and Jensen’s inequality as in Section 4. This would yield an admittedly laborious proof of Cantelli’s inequality for unimodal distributions as given in Theorem 5.2. However, this Khintchine–Jensen approach also yields a partially improved version of Theorem 5.3, namely
Theorem 5.5
Assume \(\sqrt{3} \le v \le u\). For any standardized unimodal random variable Z, the inequality
holds. For \(v \le u \le v + \frac{2}{v}\), equality is attained if Z is defined as in (29). For \(v + \frac{2}{v} \le u\), equality is attained if Z has mass \((3v^2-1)/(3(1+v^2))\) at \(-1/v\) and the rest of its mass, \(4/(3(1+v^2)),\) uniformly distributed on the interval \([-1/v, (1+3v^2)/(2v)]\), as in Theorem 5.2.
The proof of this Theorem is given in Subsection 1 of the Appendix.
6 Unimodal distributions with coinciding mode and mean
When we restrict the class of distributions further to the class of unimodal distributions with coinciding mode and mean, then for the one-sided probability the factor 4/9 no longer plays the role it does in Theorem 5.2, the analogue of Cantelli’s inequality, Theorem 2.1.
Theorem 6.1
For any standardized unimodal random variable Z with mode at 0, the inequality
holds with equality if \(Z=UY\) holds with U and Y independent random variables, U uniform on the unit interval, and Y the Bernoulli variable
Proof
Let \({{\mathcal {Z}}}_0\) and \({{\mathcal {Y}}}_0\) be the classes of distributions as defined in Lemma A.2. By this lemma with \(u=\infty \), we obtain
with \(\psi \) defined by
With \(Y \in {{\mathcal {Y}}}_0\), we define the Bernoulli random variable \(Y_0\) by
with \(\mu _- = E(Y \mid Y<v)\) and \(\mu _+ = E(Y \mid Y\ge v)\). Note \(EY_0=EY=0\) and \(EY_0^2 \le EY^2=3.\) Since \(\psi \) is concave on \([v,\infty )\) and vanishes elsewhere, we have by Jensen’s inequality
By adding a positive amount to \(\mu _+\) and subtracting from \(\mu _-\), if necessary, we can force the Bernoulli random variable \(Y_0\) to have variance 3 while we maintain its mean at 0 and possibly increase \(E\psi (Y_0),\) as \(\psi \) is increasing on \([v, \infty ).\) We have shown that the supremum at the right hand side of (33) is attained by a Bernoulli random variable
with \(a< v \le b,\) and \(0\le p\le 1\). In view of \(EY_0 = 0\) and \(EY_0^2 = 3\), we obtain
Writing
we see that the suprema from (33) equal
Straightforward computation shows that the derivative with respect to x of the function in (34) is nonnegative if and only if
holds. Observe that the function \(x \mapsto 2x^3 - 3x^2\) is increasing for \(x \ge 1\) and negative for \(1 \le x < 3/2\). By Vieta’s method to tackle cubic equations, we substitute \(x=(w+1 + 1/w)/2 \ge 3/2, \ w > 0\) and obtain equality in (35) if and only if
holds. The positive roots of this quadratic equation in \(w^3\) yield \(w_1 = (\sqrt{3+v^2} + \sqrt{3})^{2/3} v^{2/3}\) and \(w_2 = (\sqrt{3+v^2}  \sqrt{3})^{2/3} v^{2/3}\). Note that \(w_1 w_2 =1\) and hence \(w_1 + 1/w_1 = w_2 + 1/w_2\) hold. Consequently, the only real root x of the cubic function in (35) equals the one given in (31). Combining (34) and (35) (with equality), we arrive at the inequality in (31).
Straightforward verification shows that \(Z=UY\) with Y as in (32) attains this bound. \(\Box \)
The sharp, restricted Gauß inequality for random variables with coinciding mean and mode is the same as the original one from Theorem 4.2, as the distributions that attain equality in (25) are symmetric and hence, have coinciding mean and mode.
Theorem 6.2
(Restricted Gaußinequality) For any standardized unimodal random variable Z with mode at 0, the inequality
holds with equality if the distribution of Z is the mixture of a uniform distribution on \([-((\frac{3}{2}v)\vee \sqrt{3})\ ,\ (\frac{3}{2}v)\vee \sqrt{3}]\) and a distribution degenerate at 0 such that the point mass at 0 equals \([1-4/(3v^2)]\vee 0\).
We extend Gauß’s inequality to asymmetric intervals as in (1) as follows.
Theorem 6.3
For \(\sqrt{3} \le u \le v \le u + 2/u\) or \(\sqrt{3} \le v \le u \le v + 2/v\) and for any standardized unimodal random variable Z with mode at 0, the inequality
holds with
Equality is attained in (37) for \(Z=UY\) with U and Y independent random variables, U uniform on the unit interval, and Y the generalized Bernoulli random variable
with
Proof
The proof of Theorem 5.5, given at the end of the Appendix, can be applied with \(M=0\) all the way up to and including the value of the upper bound in (67) with \(\gamma \) defined in (38) and satisfying
Note that this equation may be rewritten as
which implies
With the help of the last two equations, u may be eliminated from (67) resulting in the expression in (37).
In view of this, the random variable Y from (39) follows straightforwardly from (66) and (51) with \(M=0\). \(\Box \)
Remark 6.4
For \(u=v\), the value of \(\gamma \) from (38) becomes 1 and the upper bound in (37) takes on the value \(4/(9v^2)\), which is in line with Theorem 6.2.
7 Symmetric unimodal distributions
Under the extra assumption of unimodality, Theorem 3.1 for symmetric distributions may be sharpened too. Again we will encounter the extra factor 4/9. The resulting inequality for symmetric unimodal distributions has been obtained by Camp (1922); Meidell (1922), and Shewhart (1931). A different proof is given by Theil (1949). Still a different proof is given in Lemma 2 of Clarkson et al. (2009). However, our proof is shorter and simpler than theirs.
Theorem 7.1
If the distribution of the standardized random variable Z is symmetric and unimodal, then
holds. Equality is attained by the mixture of a uniform distribution on \([-((\frac{3}{2}v)\vee \sqrt{3})\ ,\ (\frac{3}{2}v)\vee \sqrt{3}]\) and a distribution degenerate at 0 such that the point mass at 0 equals \([1-4/(3v^2)]\vee 0\).
Proof
As Z is symmetric, \(P(Z \ge v) = P(|Z| \ge v)/2\) holds and hence, Theorem 6.2 yields a proof. \(\Box \)
Actually, this Theorem is equivalent to Theorem 6.2. Indeed, let Z be standardized with mode at 0 and let B be an independent Bernoulli random variable with \(P(B=1) =P(B=-1) = 1/2.\) As BZ is symmetric and hence \(P(|Z| \ge v) = P(|BZ| \ge v) = 2P(BZ \ge v)\) holds, Theorem 7.1 implies Theorem 6.2. For the class of symmetric unimodal distributions, these Theorems also imply that inequality (36) holds and is sharp.
Next, we consider the case of asymmetric intervals around 0.
Theorem 7.2
Assume the distribution of the standardized random variable Z is symmetric and unimodal and consider \(\sqrt{3} \le v \le u.\) For \(u \le (2 \sqrt{2} - 1)v\), the inequality
holds with equality if Z is uniform on \([-3(u+v)/4\,,\, 3(u+v)/4]\) with probability \(16/(3(u+v)^2)\) and has a point mass at 0 with probability \(1-16/(3(u+v)^2).\)
For \((2 \sqrt{2} - 1)v \le u\), the inequality
holds with equality if Z is uniform on \([-3v/2\,,\, 3v/2]\) with probability \(4/(3v^2)\) and has a point mass at 0 with probability \(1-4/(3v^2).\)
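Both candidate equality laws are zero-inflated symmetric uniforms, and whichever case applies, each must be standardized; the sketch below confirms this, using that a uniform on \([-c,c]\) carrying weight w contributes \(w c^2/3\) to \(EZ^2\) (mean 0 holds by symmetry). The helper name is ours.

```python
# Second moment of a symmetric law: point mass at 0 plus weight w
# spread uniformly on [-c, c].
def second_moment(w: float, c: float) -> float:
    return w * c * c / 3.0

u, v = 4.0, 3.0   # an admissible pair with sqrt(3) <= v <= u
# law of the first case: uniform on [-3(u+v)/4, 3(u+v)/4]
w1, c1 = 16.0 / (3.0 * (u + v) ** 2), 3.0 * (u + v) / 4.0
# law of the second case: uniform on [-3v/2, 3v/2]
w2, c2 = 4.0 / (3.0 * v * v), 1.5 * v
```

Both choices of (w, c) give second moment 1, as required of a standardized random variable.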
Remark 7.3
The Theorem in Sect. 3 of Semenikhin (2019) presents a complete version, for all positive u and v, of our Theorem 7.2 with his \(m=(v-u)/2\) and \(h=(u+v)/2\). Semenikhin (2019) uses a different approach to the proof than we do, although there are similarities between (44) below and his expression in the displayed formula above his (A.3).
Proof
By Lemma A.1, we have \(Z=UY\) with Y symmetric around 0 because of the symmetry of Z around 0. Along the lines of Lemma A.2, we obtain by Jensen’s inequality
With the notation \(a = \sqrt{E(Y^2 \mid v \le Y < u)},\, b = \sqrt{E(Y^2 \mid Y\ge u)}\),
\(p = P(v \le Y < u),\) and \(q = P(Y\ge u)\), this implies
By increasing a or b if necessary, we see that this supremum is attained at \(2a^2p + 2b^2q =3.\)
Fix a and b with \(v \le a \le u \le b\) and write \(\alpha = 1 - v/a \ge 0\) and \(\beta = 2 - (u+v)/b \ge 0.\) Consider
Now, \(b \ge a \ge v \ge \sqrt{3}\) and hence, \(p+q \le (a^2p + b^2q)/3 = 1/2\) hold. Consequently, we have
Studying the stationary points of the functions \(a \mapsto (1 - v/a)/a^2\) and \(b \mapsto (2 - (u+v)/b)/b^2\), we see with the help of (45) that the supremum in (44) equals
If \(3v < u\) holds, then we have
If \(3v \ge u \ge v\) holds, then
holds iff
holds. The last inequality is valid in view of
We conclude that the supremum in (44) is bounded by
Straightforward computation shows that equality holds in (42) and (43) for the indicated random variables Z. \(\Box \)
8 Discussion
Our bounds for the probability that (the absolute value of) a standardized random variable exceeds a given value are essential in quality control, where the probability of an out-of-specification (OOS) result is calculated or estimated. The probability that a quality characteristic is above or below the upper (USL) or lower (LSL) specification limit, respectively, can be formulated as the probability \(P(Z \le -u \mathrm{~or~} Z \ge v)\) in (1), assuming that the process mean \(\mu \) falls within the specification limits, \(\mu \in (\textrm{LSL},\textrm{USL})\).
With the introduction of Statistical Process Control (SPC) by Walter Shewhart in the 1920s, three-sigma limits (\(\mu \pm 3\sigma \)) were suggested for use in control charts to monitor the stability or predictability of quality characteristics of products and processes over time; see Di Bucchianico and Van den Heuvel (2015). On page 277 of Shewhart (1931), Shewhart mentioned the Bienaymé–Chebyshev inequality and stated “Experience indicates that \(v=3\) seems to be an acceptable economic value.”
In case the three-sigma control limits of a Shewhart control chart fall (just) within the specification limits, the process has a capability index \(C_{pk}\) of at least one, i.e. \(C_{pk} = \min \{(\mu - \textrm{LSL})/(3 \sigma ),(\textrm{USL} - \mu )/(3\sigma )\} \ge 1\). For such capable processes, the worst-case risk of an OOS result becomes \(P( Z \ge 3)\) for one-sided specification limits, \(P(|Z| \ge 3)\) for two-sided specification limits when the process is perfectly centred, i.e. \(\mu =(\textrm{LSL}+\textrm{USL})/2\), and \(P(Z \le -11/3 \mathrm{~or~} Z \ge 3)\) for non-centred processes with two-sided specification limits. This last probability follows from our Theorem 2.2, where we have chosen \(v=3,\ u=v+2/v=11/3\).
If u is at least 11/3, the OOS probability \(P(Z\le -11/3 \mathrm{~or~} Z\ge 3)\) for non-centred processes with two-sided specification limits and a \(C_{pk}\ge 1\) is the worst-case risk of producing products with a quality characteristic outside specification. When u is smaller than 11/3 (and larger than 3 to maintain a \(C_{pk} \ge 1\)), the OOS probability \(P(Z \ge 3)\) is considered the worst-case risk, while the OOS probability \(P(Z \le -11/3 \mathrm{~or~} Z\ge 3)\) is then considered the most favourable risk when the process has a capability of one, i.e. \(C_{pk}=1\).
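For orientation, the classical bounds at these values are easily computed. The snippet below evaluates the Bienaymé–Chebyshev bound \(1/v^2\) and the Cantelli bound \(1/(1+v^2)\) at \(v=3\); the sharp bounds of the present paper refine these for the narrower distribution classes:

```python
from fractions import Fraction

# Classical reference bounds at v = 3 (three-sigma limits):
# Bienaymé–Chebyshev: P(|Z| >= v) <= 1/v^2
# Cantelli:           P(Z >= v)   <= 1/(1+v^2)
v = Fraction(3)
chebyshev = 1 / v ** 2          # two-sided bound: 1/9
cantelli = 1 / (1 + v ** 2)     # one-sided bound: 1/10
assert chebyshev == Fraction(1, 9)
assert cantelli == Fraction(1, 10)
print(chebyshev, cantelli)
```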
The bounds for these three OOS probabilities under different assumptions are presented in Table 2 below.
We see that these probabilities for the specific distributions in Table 2 show a rather large discrepancy with their bounds, which is due to their large deviation in shape from the distributions for which the bounds are sharp. Note that the symmetrized Pareto density from the table is log-convex and that the Laplace, logistic and normal densities are log-concave and hence strongly unimodal. The strongly unimodal distributions constitute an important class of distributions; cf. Section 1.4 of Dharmadhikari and Joag-dev (1988). Therefore, it might be useful to prove sharp bounds for this class as well.
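To illustrate the size of such a discrepancy for one light-tailed case: for the standard normal, \(P(|Z| \ge 3) \approx 0.0027\), well below both the Gauß bound \(4/(9\cdot 3^2)=4/81\) for symmetric unimodal distributions and the Bienaymé–Chebyshev bound \(1/9\). A minimal check:

```python
import math

# Exact standard-normal tail P(|Z| >= 3) via the complementary error
# function, compared with the Gauss bound 4/(9 v^2) for symmetric
# unimodal distributions and the Chebyshev bound 1/v^2.
v = 3.0
p_normal = math.erfc(v / math.sqrt(2))   # = 2*(1 - Phi(3)), about 0.0027
gauss_bound = 4 / (9 * v ** 2)           # about 0.0494
chebyshev_bound = 1 / v ** 2             # about 0.1111

assert 0.0026 < p_normal < 0.0028
assert p_normal < gauss_bound < chebyshev_bound
```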
References
Bhat MA, Kosuru GSR (2022) Generalizations of some concentration inequalities. Stat Probab Lett 182:109298
Bickel PJ, Krieger AM (1992) Extensions of Chebychev’s inequality with applications. Probab Math Stat 13:293–310
Billingsley P (1995) Probability and measure. Wiley, New York
Bienaymé IJ (1853) Considérations à l’appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés. C R Hebd Séances Acad Sci Paris 37:309–24. Reprinted (1867) Journal de mathématiques pures et appliqués (2) 12:158–176
Boucheron S, Lugosi G, Massart P (2013) Concentration inequalities. Oxford University Press, Oxford
Camp BH (1922) A new generalization of Tchebycheff’s statistical inequality. Bull Am Math Soc 28:427–432
Cantelli FP (1928) Sui confini della probabilità. Atti del Congresso Internazionale dei Matematici 6:47–59
Chebyshev P (1867) (P. Tchébychef) Des valeurs moyennes. Journal de mathématiques pures et appliqués (2) 12:177–184
Clarkson E, Denny JL, Shepp L (2009) ROC and the bounds on tail probabilities via theorems of Dubins and F. Riesz. Ann Appl Probab 19:467–476
Cramér H (1946) Mathematical methods of statistics. Princeton University Press, Princeton
DasGupta A (2000) Best constants in Chebyshev’s inequalities with various applications. Metrika 51:185–200
Dharmadhikari SW, Joag-dev K (1988) Unimodality, convexity and applications. Academic Press, Boston
Di Bucchianico A, Van den Heuvel ER (2015) Shewhart’s idea of predictability and modern statistics. In: Knoth S, Schmid W (eds) Frontiers in statistical quality control, vol 11. Springer, Cham, pp 237–248
Feller W (1966) An introduction to probability theory and its applications. Wiley, New York
Ferentinos K (1982) On Tchebycheff’s type inequalities. Trabajos Estadíst Investigación Oper 33(1):125–132
Gauß CF (1823) Theoria combinationis observationum erroribus minimis obnoxiae, pars prior. Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores 5:1–58
Govindaraju K, Lai CD (2004) Run length variability and three sigma control limits. Econ Qual Control 19:175–184
Heyde CC, Seneta E (1972) Studies in the history of probability and statistics. XXXI. The simple branching process, a turning point test and a fundamental inequality: a historical note on I.J. Bienaymé. Biometrika 59:680–683
Hooghiemstra G, Van Mieghem PFA (2015) An inequality of Gauss. Nieuw Archief voor Wiskunde 5(16):123–126
Ion RA (2001) Nonparametric statistical process control. PhD thesis, University of Amsterdam, Amsterdam
Jensen JLWV (1906) Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Math 30:175–193
Khintchine A Ya (1938) On unimodal distributions. Izv Nauchno Issled Inst Mat Mech Tomsk Gos Univ 2:1–7
Klaassen CAJ, Mokveld PJ, van Es AJ (2000) Squared skewness minus kurtosis bounded by 186/125 for unimodal distributions. Stat Probab Lett 50:131–135
Lugosi G (2009) Concentration-of-measure inequalities: lecture notes by Gábor Lugosi. http://84.89.132.1/lugosi/anu.pdf
Meidell MB (1922) Sur un problème du calcul des probabilités et les statistiques mathématiques. Comptes Rendus 175:806–808
Ogasawara H (2019) The multiple Cantelli inequalities. Stat Methods Appl (J Italian Stat Soc) 28:495–506
Ogasawara H (2020) The multivariate Markov and multiple Chebyshev inequalities. Commun Stat Theory Methods 49:441–453
Pukelsheim F (1994) The three sigma rule. Am Stat 48:88–91
Rougier J, Goldstein M, House L (2013) Secondorder exchangeability analysis for multimodel ensembles. J Am Stat Assoc 108:852–863
Selberg HL (1940) Zwei Ungleichungen zur Ergänzung des Tchebycheffschen Lemmas. Skand Aktuarietidskr 1940:121–125
Sellke T (1996) Generalized Gauß–Chebyshev’s inequalities for unimodal distributions. Metrika 43:107–121
Sellke TM, Sellke SH (1997) Chebyshev inequalities for unimodal distributions. Am Stat 51:34–40
Semenikhin KV (2019) Twosided probability bound for a symmetric unimodal random variable. Autom Remote Control 80:474–489
Shewhart WA (1931) Economic control of quality of manufactured product. Van Nostrand, Princeton
Stewart GW (1995) Theory of the combination of observations least subject to error: part one, part two, supplement. Classics in applied mathematics, vol 11. Society for Industrial and Applied Mathematics, Philadelphia. Translation from Latin of Gauß (1823)
Theil H (1949) Over de ongelijkheid van Camp en Meidell. Statistica Neerlandica 3:201–208
Van den Heuvel ER, Ion RA (2003) A note on capability indices and the proportion of nonconforming items. Qual Eng 15:425–437
Vysočanskiĭ DF, Petunin JuI (1980) Justification of the \(3\sigma \)-rule for unimodal distributions. Theory Probab Math Stat 21:25–36
Vysochanskiĭ DF, Petunin YuI (1983) A remark on the paper ‘Justification of the \(3\sigma \)-rule for unimodal distributions’. Theory Probab Math Stat 27:27–29
Acknowledgements
The more tedious computations were partially checked with the help of Mathematica.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A Appendix
In this Appendix, we prove a lemma with Khintchine’s representation theorem, Khintchine (1938), and other lemmata that we need. Furthermore, the proof of Theorem 5.5 is given.
Lemma A.1
(Khintchine representation) If Z has a unimodal distribution, then there exist a constant M and independent random variables U and Y with U uniformly distributed on the unit interval, such that \(Z=M + UY\) holds. If Z is symmetric around M, then Y is symmetric around 0.
Proof
Let Z have its mode and possibly a point mass at M. Theorem V.9 of Feller (1966) and Theorem 1.3 of Dharmadhikari and Joag-dev (1988) yield the characterization \(Z=M + UY.\) As an alternative proof, cf. page 8 of Dharmadhikari and Joag-dev (1988), let the conditional distribution of \(Z-M\) given \(Z>M\) have density f and distribution function F on \((0,\infty ).\) Since f is nonincreasing, we may write
with
a distribution function on \([0, \infty ).\) It follows that f is the density of UY given Y positive. With a similar argument for negative values of \(Z-M\) and Y, we obtain Khintchine’s characterization. From this construction, it follows that Y is symmetric around 0, if Z is symmetric around M. \(\Box \)
For the proofs of Theorems 5.5, 6.1, 6.3, and 7.2, we need also the following result.
Lemma A.2
Let \({{\mathcal {Z}}}_M\) be the class of random variables Z that have a unimodal distribution with mean zero, unit variance, and mode at M. Let \({{\mathcal {Y}}}_M\) be the class of random variables Y with mean \(-2M\) and variance \(3-M^2.\) With \(-u \le M \le v\), the following holds true
with the function \(\Psi _M\) given by
Proof
By Lemma A.1, we may write \(Z=M+UY\) with U and Y independent, which implies \(0=EZ=M+ \tfrac{1}{2} EY\) and \(1=\textrm{var}\, Z = \textrm{var}(UY)= E(U^2Y^2) - (E(UY))^2 = \tfrac{1}{3} E(Y^2) - M^2.\) These equations yield \(EY=-2M,\, E(Y^2) = 3(1+M^2),\) and hence \(\textrm{var}\, Y = 3 - M^2,\) which shows \(M^2 \le 3.\) Consequently, we get
As a similar relation holds for \(P(Z \le u),\) we obtain the lemma. \(\Box \)
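The moment relations in this proof can be checked exactly. The sketch below, with the arbitrary admissible choice \(M=1/2\), confirms that \(EY=-2M\) and \(EY^2=3(1+M^2)\) make \(Z=M+UY\) standardized:

```python
from fractions import Fraction

# Moments of Z = M + U*Y with U ~ Uniform(0,1) independent of Y:
#   EZ   = M + EY/2
#   EZ^2 = M^2 + M*EY + EY^2/3    (since EU = 1/2 and EU^2 = 1/3)
M = Fraction(1, 2)               # arbitrary mode location with M^2 <= 3
EY = -2 * M
EY2 = 3 * (1 + M ** 2)

EZ = M + EY / 2
EZ2 = M ** 2 + M * EY + EY2 / 3
assert (EZ, EZ2) == (0, 1)       # Z is standardized
```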
In the proof of Theorem 5.5, we will need to solve the following cubic equation.
Lemma A.3
With \(r > 0\), the cubic equation
has exactly one positive root, namely
with, in particular, \(z_1 =1.\)
Proof
By Vieta’s method, Eq. (46) becomes
via the substitution \(z=w +(1+r)w^{-1} - 1.\) This leads to
and hence to
with k an integer and \(\alpha \) equal to
In view of
the claim has been proved once
has been shown. Writing \(\psi = \pi /6 - \arctan (\sqrt{r})/3\) with \(\psi \in (0, \pi /6)\), we have \(r = \cot ^2(3\psi )\) and we see that (47) holds if and only if
or equivalently
holds. Because of \(\sin (\psi ) \in (0, 1/2)\), this is the case. \(\Box \)
A.1 Proof of Theorem 5.5
Finally, we present our proof of Theorem 5.5 as given in Sect. 5.
By Khintchine’s Lemma A.1, the standardized unimodal random variable Z may be represented as \(Z=M + UY\) with U uniformly distributed on the unit interval and independent of the random variable Y. As Z is standardized, Y has to satisfy \(EY=-2M\) and \(EY^2=3(1+M^2)\) with M the location of the mode of Z. It follows that the variance of Y equals \(3-M^2,\) and hence that \(|M| \le \sqrt{3}\) holds. As u and v are both at least as large as \(\sqrt{3}\), we have \(-u \le M \le v\). Hence, Khintchine’s representation and Lemma A.2 yield
where the function \(\Psi _M\) is given by
We define the random variable \(Y_1\) by
with \(\mu _- = E(Y \mid Y< -u-M),\ \mu _0 = E(Y \mid -u-M \le Y \le v-M),\) and \(\mu _+ = E(Y \mid Y > v-M)\). Note \(EY_1=EY\) and \(EY_1^2 \le EY^2.\)
As \(\mu _- < -u-M \le \mu _0 \le v-M < \mu _+\) holds and \(\Psi _M\) is concave on \((-\infty , -u-M)\) and on \((v-M,\infty )\) and vanishes elsewhere, we have by Jensen’s inequality
If necessary, by subtracting a positive value from \(\mu _-\) (or \(\mu _0\) if the mass at \(\mu _-\) vanishes) and adding a positive value to \(\mu _+\) (or \(\mu _0\) if the mass at \(\mu _+\) vanishes), the random variable \(Y_1\) can be forced to satisfy \(E(Y_1^2)=3(1+M^2)\) while increasing \(E\Psi _M(Y_1)\) and maintaining \(EY_1=-2M.\) This mechanism does not work if \(Y_1\) is degenerate at \(-2M\). However, in view of \(-u-M \le -2M \le v-M\) we then have \(E\Psi _M(Y_1)=0\), which is not an upper bound to (48) whatever the values of u and v are.
We have shown that the supremum of the probability in (30) is attained by a random variable \(Z=M + UY\) as above, where Y is discrete with three atoms, namely
with \(a \le -u-M \le b \le v-M \le c,\ |M| \le \sqrt{3},\) and
The restrictions \(EZ=0\) and \(EZ^2=1\) imply
and hence
Writing
we note
and define the set
So far, we have seen that
holds. Note that \(u+M\) and \(v-M\) are positive on \({\mathcal {A}}\) in view of \(-u \le M \le v\).
We shall prove that for \(\sqrt{3} \le v \le u \le v+2/v\) this supremum is attained at a stationary point within \({\mathcal {A}}\) of the function in (54) and that for \(v + 2/v \le u\) it is attained at a point on the boundary of \({\mathcal {A}}\). To this end, we shall show first that at the boundary of \({\mathcal {A}}\) the function in (54) cannot attain a value larger than the second bound given in (30). With \(\bar{{\mathcal {A}}}\) denoting the closure of \({\mathcal {A}}\), we see that the boundary \(\partial {{\mathcal {A}}}\) of \({\mathcal {A}}\) is the union of the following sets
We treat these boundary subsets as follows.
 \({{\mathcal {A}}}_1\):

With \(a=-(u+M)\), we have
$$\begin{aligned} P(Z\le -u\ \ \textrm{or}\ \ Z \ge v) = \left( 1 - \frac{v-M}{c} \right) q = P(Z \ge v), \end{aligned}$$(55)which by Theorem 5.2 is bounded by \(4/(9(1+v^2))\) provided \(v^2 \ge 5/3\) holds. By differentiation, one observes that the function
$$\begin{aligned} u \mapsto \frac{4+(u-v)^2}{(u+v)^2} \end{aligned}$$is decreasing if and only if \(u \le v+2/v\) holds, and hence, it has minimum value \(1/(1+v^2)\). So, the first bound from (30) equals at least the bound \(4/(9(1+v^2))\) from Theorem 5.2.
 \({{\mathcal {A}}}_2\):

By symmetry, an analogous argument holds for \({{\mathcal {A}}}_2\) as for \({{\mathcal {A}}}_1\).
 \({{\mathcal {A}}}_3\):

By symmetry, an analogous argument holds for \({{\mathcal {A}}}_3\) as for \({{\mathcal {A}}}_4\).
 \({{\mathcal {A}}}_4\):

With \(p=0\), we have (55) and the same argument as for \({{\mathcal {A}}}_1\) holds here. Furthermore, the random variable Z that attains the second bound from (30) corresponds to \(M=-1/v\) and
$$\begin{aligned} Y = \left\{ \begin{array}{lll} 0 &{} &{} \frac{3v^2 - 1}{3(1+v^2)} \\ &{}\mathrm{with\, probability}&{} \\ \frac{3}{2} \left( v + \frac{1}{v} \right) &{} &{} \frac{4}{3(1+v^2)}, \end{array}\right. \end{aligned}$$which shows that this second bound is attained within \({{\mathcal {A}}}_4\).
 \({{\mathcal {A}}}_5\):

In view of \(EZ=0\) and \(EZ^2=1\), the definition of Y from (50) implies \(EY=-2M\) and \(EY^2=3(1+M^2)\), and hence, \(E(Y+M)^2 =3\). With \(p+q=1\), this yields
$$\begin{aligned} (a+M)^2 p + (M+c)^2 (1-p) =3, \end{aligned}$$which means that \(-(a+M)\) and \(M+c\) cannot simultaneously be larger than \(\sqrt{3}\). As both u and v equal at least \(\sqrt{3}\), this shows that either \(1+(u+M)/a \le 0\) or \(1-(v-M)/c \le 0\) holds. Together with (55), we conclude that \({{\mathcal {A}}}_5 \subset {{\mathcal {A}}}_1 \cup {{\mathcal {A}}}_2\) holds and that the second bound from (30) cannot be exceeded on \({{\mathcal {A}}}_5\).
 \({{\mathcal {A}}}_6\):

In case of \(|M| = \sqrt{3}\), the variance of Y from (50) vanishes, i.e. Y is degenerate, and hence, at least two of the restrictions \(p=0,\ q=0\), and \(p+q=1\) hold. Consequently, we have \({{\mathcal {A}}}_6 \subset {{\mathcal {A}}}_3 \cup {{\mathcal {A}}}_4\) and we see that the second bound from (30) cannot be exceeded on \({{\mathcal {A}}}_6\).
 \({{\mathcal {A}}}_7\):

If b equals \(a\), the random variable Y from (50) may be viewed as a Bernoulli random variable with \(p+q=1\), which implies \({{\mathcal {A}}}_7 \subset {{\mathcal {A}}}_5\).
 \({{\mathcal {A}}}_8\):

If b equals c, the random variable Y from (50) may be viewed as a Bernoulli random variable with \(p+q=1\), which implies \({{\mathcal {A}}}_8 \subset {{\mathcal {A}}}_5\).
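The extremal configuration exhibited under \({{\mathcal {A}}}_4\) can be verified exactly. The sketch below assumes the sign convention \(M=-1/v\) for the mode location (which makes EY positive) and checks, for the illustrative value \(v=2\), that \(Z=M+UY\) is standardized and attains \(P(Z\ge v) = 4/(9(1+v^2))\):

```python
from fractions import Fraction

# A_4 extremal case: M = -1/v; Y equals 0 with probability
# (3v^2-1)/(3(1+v^2)) and (3/2)(v + 1/v) otherwise.  v=2 is an
# arbitrary admissible choice (v >= sqrt(3)).
v = Fraction(2)
M = -1 / v
y = Fraction(3, 2) * (v + 1 / v)
q = 4 / (3 * (1 + v ** 2))             # P(Y = y)

assert q * y == -2 * M                 # EY   = -2M
assert q * y ** 2 == 3 * (1 + M ** 2)  # EY^2 = 3(1+M^2)

# On the branch Y = y, Z = M + U*y >= v iff U >= (v - M)/y.
tail = q * (1 - (v - M) / y)
assert tail == 4 / (9 * (1 + v ** 2))  # the second bound from (30)
print(tail)
```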
We conclude that for \(\sqrt{3} \le v \le u\)
holds. As we have shown that at the boundary of \({\mathcal {A}}\) the function from (54) cannot attain a value larger than the first bound given in (30), we focus on the interior of \({\mathcal {A}}\) and determine the stationary points of the function in (54). For the time being, we fix b and hence \(\zeta \) and \(\eta \) and note that the function to be maximized over a and c may be written as
“Some” computation shows that the stationary points of this function of a and c are solutions of the two equations
Ignoring the first factors, which correspond to the boundary conditions \(p=0\) and \(q=0\) treated under \({{\mathcal {A}}}_3\) and \({{\mathcal {A}}}_4\) above, we obtain
Dividing the second equality by \(v-M\), we obtain
Dividing this by \(c^3(c-b)(a+b)\) and writing \(a = -\gamma c\), we arrive at
which by Lemma A.3 has exactly one positive root, namely
By a slight abuse of notation, we shall denote this unique positive root by \(\gamma \) too. We conclude that a and c satisfy (58) and
holds with a and c depending on b and M, and with \(\gamma \) depending on M only.
Note that \(\zeta \) and \(\eta \) defined in (52) depend on b. Straightforward computation shows that the derivative of (57) with respect to b vanishes if
holds. If the second factor vanishes, (51) implies
which is the boundary case \({{\mathcal {A}}}_5\) treated above. So, ignoring the second factor and multiplying by ac we arrive at
The first equation from (58) can be rewritten as
Adding up (61) and (62) and dividing the result by \(c^2\), we obtain
and hence,
Substituting this into (61) with \(c=-a/\gamma \) and multiplying the result by \(a^{-3}\gamma ^3\), we get
With the help of (59), this may be rewritten as
One may verify that (64) can be factorized as follows
As \(a=-(u+M)\) is a boundary case, we conclude
Note that (59) may be reformulated as
and that hence
holds. Consequently also b, c, and the function to be maximized itself, as given in (57), can be expressed in terms of M and \(\gamma \). As \(\gamma \) is a complicated function of M, we shall write M in terms of \(\gamma \). To this end, we rewrite (59) as
and notice that this implies
and hence
As \(\gamma \) is positive, these values satisfy \(a<b<c\) and \(-u \le M \le v\) as prescribed by \({\mathcal {A}}\). Substituting them into (49)–(52), we arrive at
Eliminating \(\zeta \) and \(\eta \) from this expression, we obtain
which we shall denote as \(\psi (\gamma ; u,v)\). At \(\gamma =1\), the expression from (67) equals the first bound from (30), i.e.
We shall prove
The function
is quadratic in u with a positive coefficient for \(u^2\) and attains its minimum in u at
which equals at most v as the denominator minus the numerator in (69) equals
We see that
is nonnegative for \(\gamma >0\), if
holds, which is the case in view of the assumption \(v \ge \sqrt{3}\). Consequently, \(\chi (\gamma ; u,v) \ge 0\) and (68) hold. We have proved that the first bound from (30) is valid for \(\sqrt{3} \le v \le u\).
Substituting \(\gamma =1\) into (65) and (66), we arrive at the random variables Z and Y defined in Theorem 5.3. However, Y from this Theorem is not well defined if \(2+v(v-u)\) is negative, i.e. if \(u > v + 2/v\) holds. Put differently, for \(u > v + 2/v\) and \(\gamma =1\) the point (a, b, c, M) from (65) and (66) is not contained in \({\mathcal {A}}\).
Therefore, we have to consider the case \(\sqrt{3} \le v,\ v+2/v < u\) separately, as we do next. Lengthy computations show that the derivative of \(\psi (\gamma ;u,v)\) from (67) with respect to \(\gamma \) vanishes if and only if
holds. As \(\gamma =1\) corresponds with a point outside \({\mathcal {A}}\) in the present situation, we restrict attention to those \(\gamma \) for which the second factor vanishes. So, we may assume
Using (53), (65), and (66), we see that in terms of \(\gamma \) the nonnegativity of p is equivalent to
which by (71) becomes the inequality
In view of \(u>v+2/v\), this implies
and we conclude that \(0 \le \gamma <1\) holds for a \(\gamma \) satisfying (71). Since \(\gamma = \gamma _M\) from (60) is increasing in \((u+M)/(v-M)\) and equals 1 for \((u+M)/(v-M)=1\), we have \((u+M)/(v-M) < 1\) and hence,
Applying these inequalities to (71) with the first factor removed, we arrive at
and hence, in view of \(\sqrt{3} \le v\) and \(\gamma < 1\) at
This contradiction shows that the stationary points corresponding to the second factor in (71) do not belong to \({\mathcal {A}}\). It follows that for \(v + 2/v < u\) there are no stationary points within the interior of \({\mathcal {A}}\) and hence, the supremum in (54) is attained at the boundary \(\partial {{\mathcal {A}}}\) of \({\mathcal {A}}\). Consequently, (56) completes the proof, as computation shows that the random variables Z mentioned in the Theorem attain the bounds. \(\Box \)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ion, R.A., Klaassen, C.A.J. & Heuvel, E.R.v.d. Sharp inequalities of Bienaymé–Chebyshev and Gauß type for possibly asymmetric intervals around the mean. TEST (2023). https://doi.org/10.1007/s11749-022-00844-9
DOI: https://doi.org/10.1007/s11749-022-00844-9
Keywords
 Cantelli’s inequality
 Khintchine representation
 Jensen inequality
MSC Classification
 60E15
 60E05
 62E99
 62G99
 62P30