Sharp inequalities of Bienaymé–Chebyshev and Gauß type for possibly asymmetric intervals around the mean

Sharp upper bounds are proved for the probability that a standardized random variable takes on a value outside a possibly asymmetric interval around 0. Six classes of distributions for the random variable are considered, namely the general class of ‘distributions’, the class of ‘symmetric distributions’, of ‘concave distributions’, of ‘unimodal distributions’, of ‘unimodal distributions with coinciding mode and mean’, and of ‘symmetric unimodal distributions’. In this way, results by Gauß (Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores 5:1–58, 1823), Bienaymé (C R Hebd Séance Acad Sci Paris 37:309–24, 1853), Chebyshev (Journal de mathématiques pures et appliquées (2) 12:177–184, 1867), and Cantelli (Atti del Congresso Internazionale dei Matematici 6:47–59, 1928) are generalized. For some of the known inequalities, such as the Gauß inequality, an alternative proof is given.


Introduction
Let X be a random variable with finite mean μ and finite, positive variance σ². We are interested in sharp upper bounds on the tails of the distribution of X. Without loss of generality and in order to simplify notation, we will restrict attention to the standardized version of X, which is denoted by Z = (X − μ)/σ. Hence, we are interested in sharp upper bounds on the probability that a standardized random variable falls outside an arbitrary interval containing the value zero, i.e. sharp upper bounds on P(Z ≤ −u or Z ≥ v). (1) We may assume 0 < v ≤ u. We also study the one-sided probability P(Z ≥ v), which corresponds to (1) with u = ∞.
Of course, upper bounds on probability (1) depend on the class of distributions assumed for Z. We shall focus on broad, nonparametric classes of distributions. The corresponding upper bounds are often applied in theoretical considerations and proofs, typically with u = v. However, as Rougier et al. (2013), p.861, argue, they have practical value as well when one does not want strong model assumptions. Let us mention two examples, one with u equal to infinity and the other one with general u and v. Clarkson et al. (2009) study the Receiver Operating Characteristic (ROC) curve, which they define as the power of a simple versus simple Neyman-Pearson hypothesis test viewed as a function of the significance level, more precisely α ↦ 1 − G(F⁻¹(1 − α)), where F and G are the continuous distribution functions of the test statistic under the null hypothesis and the alternative, respectively. They are interested in the Area Under Curve (AUC), which equals ∫₀¹ {1 − G(F⁻¹(1 − α))} dα = P(X ≤ Y) with X and Y independent random variables with continuous distribution functions F and G, respectively. For Gaussian X and Y with variance 1 and E(Y − X) = 2√6, the AUC equals Φ(2√3) ≈ 0.99973 (and not 0.99966 as at page 468 of Clarkson et al. (2009)). If Gaussianity is weakened to unimodality of X and Y with at least one of them being strongly unimodal, then Clarkson et al. (2009) obtain 1/2 as a lower bound to the AUC; see their page 468, Corollary 6 and Remark 7. However, our bound (27) from Theorem 5.2 with v = 2√3 yields 113/117 ≈ 0.96581 as a lower bound. This shows that at least some of the bounds presented here are strong enough to be valuable in applications.
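As a numerical check of the AUC example above, the following sketch (Python, standard library only; the helper name Phi is ours) reproduces the normal-theory AUC value Φ(2√3) and the distribution-free lower bound 113/117.

```python
from math import erf, sqrt

# Standard normal distribution function via the error function.
def Phi(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# X, Y independent Gaussians with variance 1 and E(Y - X) = 2*sqrt(6),
# so Y - X is N(2*sqrt(6), 2) and AUC = P(X <= Y) = Phi(2*sqrt(6)/sqrt(2)).
auc_normal = Phi(2.0 * sqrt(6.0) / sqrt(2.0))   # = Phi(2*sqrt(3)), about 0.99973
auc_lower = 113.0 / 117.0                       # bound from (27) at v = 2*sqrt(3), about 0.96581
print(round(auc_normal, 5), round(auc_lower, 5))
```

The gap between the two numbers illustrates how much is lost when normality is weakened to unimodality.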
Sharp upper bounds on probability (1) are also relevant in statistical process control. Indeed, attainable upper bounds on (1) determine the maximum risk of producing products outside specification limits, as the distribution of the quality characteristic X is typically not (perfectly) centred within the specification limits. This kind of practical concern was our main motivation for studying upper bounds on (1); see for instance Van den Heuvel and Ion (2003) and the PhD thesis of Ion (2001).
Traditionally, focus has been on upper bounds on the probability in (1) for symmetric intervals with u = v. The Bienaymé-Chebyshev inequality, which may be stated as P(|Z| ≥ v) ≤ 1/v², v > 0, (2) was first obtained by Bienaymé (1853), cf. page 171 of the 1867 reprint, and later by Chebyshev (1867); see also Heyde and Seneta (1972), page 682. Note that in (2) equality holds if and only if |Z| takes on the values 0 and v only. Amazingly, Gauß (1823), translation (Stewart 1995), had already proved the sharp inequality P(|Y| ≥ v) ≤ 4/(9v²), v ≥ 2/√3, (3) for random variables Y with a unimodal distribution with mode at zero and with E(Y²) = 1. Sellke and Sellke (1997) describe the history of the Gauß inequality and refer to Pukelsheim (1994) for three proofs, namely Gauß's one, a variation on it, and the one from exercise 4 on page 256 of Cramér (1946). A detailed description of Gauß's proof is given also by Hooghiemstra and Van Mieghem (2015). We present a fourth proof of the Gauß inequality in Section 4. However, as the mean is typically known or estimated in the practice of statistical process control (and in general can be estimated more accurately than the mode), we shall focus on standardized random variables Z with EZ = 0. For the asymmetric case with u ≠ v in (1), this mean zero condition complicates results and proofs considerably; see Theorems 6.1 and 6.3. The upper bound in (2) follows from the classical Markov inequality P(|Z| ≥ v) = E(1[|Z|^r/v^r ≥ 1]) ≤ E|Z|^r/v^r with r = 2. Generalizations of the Gauß inequality in Markov style have been presented by Camp (1922), Meidell (1922), and Theil (1949). DasGupta (2000) has exploited these Markov-Gauß-Camp-Meidell inequalities to derive properties of, e.g. the zeta function. Sellke (1996) and Sellke and Sellke (1997) extend this line of research by determining sharp upper bounds on P(|Z| ≥ v) in terms of Eg(|Z|) for unimodal random variables Z and nondecreasing functions g on [0, ∞).
Bickel and Krieger (1992) derive improvements of (2) with Z an average and of (3) for symmetric unimodal densities with a bound on the derivative.
For symmetric distribution functions of X, an upper bound on the one-sided probability P(Z ≥ v) is 1/(2v²), which can be obtained from the inequality in (2); see Theorem 3.1. Cantelli (1928) proved the upper bound 1/(1 + v²) on this tail probability for the class of all distribution functions of X. Camp (1922) provided an upper bound when the distribution function of X is symmetric and unimodal. Bhat and Kosuru (2022) presented bounds for linear combinations of tail probabilities. There are many generalizations of the Bienaymé-Chebyshev inequality to vector valued random variables; see, e.g. Ogasawara (2019) and Ogasawara (2020). Tail probabilities for sums (and other functions) of i.i.d. random variables are called concentration inequalities. The extensive literature on concentration inequalities is reviewed in the elegant books Boucheron et al. (2013) and Lugosi (2009).
In each of the subsequent Sections, we shall discuss sharp inequalities for a specific class of distributions of the standardized random variable Z . The only exception is in Section 4, where we do not consider standardized random variables. There we shall present a one-sided, sharp version of Gauß's inequality for concave distributions, more specifically, for distributions on the nonnegative half line with a mode at zero and a concave distribution function on this half line. As an immediate consequence of this one-sided version, the original Gauß inequality is also presented in this Section. Our proof is based on Khintchine's representation of unimodal densities, Khintchine (1938), and Jensen's inequality, Jensen (1906), as are our proofs of most results for unimodal distributions. A formulation and proof of Khintchine's representation may be found in Lemma A.1 in the Appendix.
The results on upper bounds for the probability in (1) as discussed in this article are summarized in Table 1, where we mention only the most relevant cases of the inequalities. The complete versions of the inequalities with all special cases and proofs may be found in the text. Reference is made to the corresponding theorems that are proved in the present article. Theorems 3.1, 3.2, 4.1, 5.2, 5.3, 5.5, 6.1 and 6.3 seem to be new. For known results like those in Theorems 2.2, 4.2, 7.1 and 7.2, alternative proofs are given. It is shown that all inequalities are sharp, by constructing random variables attaining the bounds. It should be noted that the results on probability (1) imply the results for upper bounds on the probability in (2).

All distributions
Here, we discuss the inequalities from the first row of Table 1. The standardized version Z = (X − μ)/σ of the random variable X has mean 0 and variance 1. Apart from this standardization, the distribution of Z is arbitrary within this section. We will start with the one-sided analogue of the Bienaymé-Chebyshev inequality, which is referred to as Cantelli's inequality. It is given by formula (19) in Cantelli (1928), and it reads as follows.
Theorem 2.1 (Cantelli's inequality) Let Z be a random variable with mean 0 and variance 1. For any v ≥ 0, the inequality P(Z ≥ v) ≤ 1/(1 + v²) (4) holds. This inequality is sharp, and for positive v equality is uniquely attained by the two-point distribution with P(Z = v) = 1/(1 + v²), P(Z = −1/v) = v²/(1 + v²). (5) Proof According to the hint from Problem 1.5.5 of Billingsley (1995), the Bienaymé-Chebyshev inequality yields P(Z ≥ v) = P(Z + 1/v ≥ v + 1/v) ≤ E(Z + 1/v)²/(v + 1/v)² = (1 + 1/v²)/(v + 1/v)² = 1/(1 + v²). (6) In order to obtain equality in (6), equality has to hold almost surely, or equivalently, Z has to have support {−1/v, v} and hence has to satisfy (5). The asymmetric two-sided analogue of the Bienaymé-Chebyshev inequality from (2) will be presented in full detail. Its simple proof is based on the Bienaymé-Chebyshev inequality itself.
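The extremal two-point law can be checked directly; the following sketch verifies that it is standardized and that its upper tail equals the Cantelli bound 1/(1 + v²).

```python
# Check: the two-point distribution P(Z = v) = 1/(1+v^2),
# P(Z = -1/v) = v^2/(1+v^2) has mean 0, variance 1, and
# attains Cantelli's bound P(Z >= v) = 1/(1+v^2).
for v in (0.5, 1.0, 3.0):
    p = 1.0 / (1.0 + v**2)          # mass at v
    q = 1.0 - p                     # mass at -1/v
    mean = v * p - (1.0 / v) * q
    var = v**2 * p + (1.0 / v**2) * q - mean**2
    assert abs(mean) < 1e-12 and abs(var - 1.0) < 1e-12
    assert abs(p - 1.0 / (1.0 + v**2)) < 1e-12   # P(Z >= v) equals the bound
```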
Theorem 2.2 Let Z have mean 0 and variance 1, and assume 0 < v ≤ u. Then, P(Z ≤ −u or Z ≥ v) ≤ (4 + (u − v)²)/(u + v)² (7) holds. Under the additional condition u ≤ 1/v, the trivial inequality P(Z ≤ −u or Z ≥ v) ≤ 1 (8) is sharp, with equality attained by the two-point distribution from (5), which then puts no mass in (−u, v). (9) In the case of 1/v ≤ u ≤ v + 2/v, (10) inequality (7) is sharp and equality holds for P(Z = −u) = (4 + (u − v)²)/(2(u + v)²) − (u − v)/(2(u + v)), P(Z = v) = (4 + (u − v)²)/(2(u + v)²) + (u − v)/(2(u + v)), P(Z = (v − u)/2) = 1 − (4 + (u − v)²)/(u + v)². (11) In the case of v + 2/v ≤ u, the inequality P(Z ≤ −u or Z ≥ v) ≤ 1/(1 + v²) (12) holds with equality for the two-point distribution from (5). (13) Proof By the Bienaymé-Chebyshev inequality, we obtain P(Z ≤ −u or Z ≥ v) ≤ P(|Z − (v − u)/2| ≥ (u + v)/2) ≤ (1 + (u − v)²/4)/((u + v)/2)² = (4 + (u − v)²)/(u + v)². Straightforward computations show that (11) yields a well-defined random variable provided that (10) holds, and that this random variable has mean zero and unit variance, and satisfies equality in (7). Similarly, the random variable from (9) is well-defined and attains equality in the trivial inequality (8). For u ≥ v + 2/v, the probability in (7) is bounded by P(Z ≤ −v − 2/v or Z ≥ v). Applying inequality (7) with u replaced by v + 2/v results in (12). Finally, straightforward computations show that (13) defines a proper random variable with zero mean and unit variance that attains equality in (12).
For v fixed, the minimum of the right hand side of (7) over u is attained at u = v + 2/v and equals 1/(1 + v²). Consequently, Cantelli's inequality (4) is a special case of (7); cf. (12) and (13). Selberg (1940) seems to be the first to have formulated a version of Theorem 2.2. The proof of (7) given by Ferentinos (1982) is less complicated than Selberg's, but it is still more complicated than ours due to its cumbersome notation.
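The following sketch explores the two-sided bound numerically. The closed form (4 + (u − v)²)/(u + v)² used in the code is our reading of (7); it is consistent with the special cases u = v (giving 1/v²) and u = v + 2/v (giving 1/(1 + v²)) mentioned in the text.

```python
# Assumed form of the two-sided bound (7): (4 + (u - v)^2)/(u + v)^2.
def bound(u, v):
    return (4.0 + (u - v)**2) / (u + v)**2

v = 1.5
assert abs(bound(v, v) - 1.0 / v**2) < 1e-12          # reduces to Bienayme-Chebyshev
u_star = v + 2.0 / v
assert abs(bound(u_star, v) - 1.0 / (1.0 + v**2)) < 1e-12   # reduces to Cantelli

# The minimum over u is attained near u = v + 2/v, as stated in the text.
grid = [v + 0.001 * k for k in range(4000)]
u_min = min(grid, key=lambda u: bound(u, v))
assert abs(u_min - u_star) < 0.01
```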
Note that (7) with u = v reduces to the famous Bienaymé-Chebyshev inequality. The inequalities in this section are all based on the inequality of Bienaymé-Chebyshev itself, as is the first one of the next section. However, this inequality does not always seem to be helpful if the class of distributions of Z (or X ) is restricted.

Symmetric distributions
The inequalities for symmetric distributions from the second row of Table 1 will be discussed in this section. The symmetry implies P(Z ≥ v) = P(|Z | ≥ v)/2, and hence, the Bienaymé-Chebyshev inequality yields the following result.
Theorem 3.1 Let Z be symmetric with mean 0 and variance 1. For v ≥ 0 with w = max{v, 1}, the inequality P(Z ≥ v) ≤ 1/(2w²) holds. In view of 2w² = 2(max{v, 1})² ≥ 1 + v², this inequality improves the bound from Cantelli's inequality from Theorem 2.1, as it should. For symmetric random variables, we obtain the following bound for asymmetric intervals.
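A quick check of the bound 1/(2w²); the two-point laws used in the sketch for the case v ≥ 1 are the natural extremal candidates and are an assumption of ours, not quoted from the theorem.

```python
# Symmetric one-sided bound P(Z >= v) <= 1/(2*max(v,1)^2).
def sym_bound(v):
    return 1.0 / (2.0 * max(v, 1.0)**2)

for v in (0.5, 2.0):
    w = max(v, 1.0)
    assert 2.0 * w**2 >= 1.0 + v**2       # improves on Cantelli's 1/(1+v^2)

# Candidate extremal law for v >= 1: mass 1/(2v^2) at each of -v and v, rest at 0.
v = 2.0
p = 1.0 / (2.0 * v**2)
var = 2.0 * p * v**2                      # mean is 0 by symmetry
assert abs(var - 1.0) < 1e-12
assert abs(p - sym_bound(v)) < 1e-12      # P(Z >= v) attains the bound
```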
Theorem 3.2 Let the standardized random variable Z be symmetric. Consider any positive u and v with v ≤ u and discern four cases. For holds with equality if Z puts mass 1/2 at both 1 and −1.
is valid with equality for a suitable symmetric extremal distribution. Note that any choice of (u, v) with 0 < v ≤ u belongs to at least one of the four cases in this Theorem.
Proof To prove these inequalities, we determine the supremum of the left hand side of (14) over all symmetric random variables Z with mean 0 and variance at most 1. Let Z be such a random variable and define the symmetric random variable Y in terms of Z via indicators such as 1[u ≤ Z].
Note that Y is a discrete, symmetric random variable with probability mass at finitely many points, and we may conclude that the supremum of (18) over Z is attained by a symmetric discrete random variable Y taking its values at V and with E(Y²) ≤ 1. We introduce probabilities p and q accordingly and note that the supremum of (18) equals the maximum of 2p + q over the polygon Q. In this linear programming problem, the maximum is attained at one of the vertices of polygon Q. We discern three cases.
In this case, the polygon Q is a quadrangle. The corresponding values of the function (p, q) → 2p + q at its vertices are 0, 1/u², and two further values. Computation shows that the fourth value is larger than the second value, and hence largest, iff √2 v ≤ u holds. Note that this yields inequality (16) and inequality (15) under the additional restriction v ≤ 1.
Polygon Q reduces to a triangle here. The corresponding values of the function (p, q) → 2p + q at its vertices are 0, 1/(2v²), and a third value. Computation shows that this implies inequality (17) and inequality (15) under the additional restriction v ≥ 1.
Straightforward computation shows that equalities are attained by the random variables mentioned in the Theorem.

Concave distribution functions
The upper bounds from the preceding sections on the probability in (1) are rather large. It is to be expected that restricting the class of completely unknown distributions and the class of symmetric distributions to smaller classes will yield smaller upper bounds. In the next three sections, we will obtain sharp upper bounds for (1) over the class of unimodal distributions, the class of unimodal distributions with mean and mode coinciding, and the class of symmetric unimodal distributions, respectively. This unimodality assumption is not unrealistic, as it is a very natural assumption in several practical applications, like statistical process control.

A distribution is unimodal with mode at M if its corresponding distribution function is convex on (−∞, M) and concave on [M, ∞). Consequently, a unimodal distribution has at most one atom, which may occur only at the mode M. If a unimodal distribution is uniform on its support with an atom at one of its boundary points, we will call it a one-sided boundary-inflated uniform distribution; cf. Klaassen et al. (2000). We shall repeatedly use a representation theorem for unimodal distributions of Khintchine (1938), Lemma A.1. It characterizes unimodal distributions as mixtures of uniform distributions. The inequalities we will derive attain equality for mixtures of at most three uniforms, where often one of these uniforms is degenerate, i.e. a point mass.

Unimodal distributions with their mode at M = 0 and all their mass on the nonnegative half line [0, ∞) have a distribution function that is concave on [0, ∞) and vanishes on (−∞, 0). They have a nonincreasing density on (0, ∞). This special class of distributions is considered in the present section. For this class, a one-sided version of the Gauß inequality holds. The Gauß inequality itself is an immediate consequence of it and will be presented as well.
Proof By Khintchine's representation from Lemma A.1, there exist a probability p₀ and a distribution function F on the positive half line such that P(Y = 0) = p₀ holds and such that the density of Y on the positive half line is a mixture of uniform densities governed by F. Without loss of generality, we assume that F puts positive mass on (v, ∞). By Jensen's inequality, the resulting bounds hold with equalities iff F is degenerate at some point c_v. This means that we may restrict attention to those Y with mass p₀ at 0 for which F is degenerate at some c_v > v. For such Y, equation (20) implies, together with (22), an expression for the tail probability as a function of c_v. As this function of c_v attains its maximum at c_v = 3v/2, the bound follows, with equality iff Y has the one-sided boundary-inflated uniform distribution as described in the Theorem. However, for c_v = 3v/2 equation (23) becomes 1 − p₀ = 4/(3v²), which for v < 2/√3 leads to an impossible, negative value of p₀. This means that for v < 2/√3 the mass at 0 vanishes and that (22), (23), and (24) hold with c_v = √3.
About two centuries ago, Johann Carl Friedrich Gauß presented and proved a sharp upper bound on the probability P(|X | ≥ v) for unimodal random variables X with mode at 0 and finite second moment in Sections 9 and 10 of Gauß (1823); for a translation from Latin into English see Stewart (1995). His result precedes the famous Bienaymé-Chebyshev inequality (2) by three decades. The Gauß inequality for large values of v has been given in (3). The complete inequality is the following.

Theorem 4.2 (Original Gauß inequality) Let the random variable Y have a unimodal distribution with mode at 0 and second moment E(Y²) = 1. Then P(|Y| ≥ v) ≤ 4/(9v²) for v ≥ 2/√3, and P(|Y| ≥ v) ≤ 1 − v/√3 for 0 ≤ v ≤ 2/√3. (25) These inequalities are sharp. For v ≥ 2/√3, equality is attained by the distribution with point mass 1 − 4/(3v²) at 0 and the rest of its mass uniformly distributed on (−3v/2, 3v/2); for 0 ≤ v ≤ 2/√3, equality is attained by the uniform distribution on (−√3, √3).

Proof
As |Y | has a concave distribution function on [0, ∞), the one-sided Gauß inequality proves the Theorem.
Our proof of the Gauß inequality via the Khintchine representation and Jensen's inequality differs from the three proofs as presented by Pukelsheim (1994). Observe that the bound in (25) can also be described as the minimum of the two functions in there. Also note that Gauß considered only densities and hence could not prove the second bound in (25) to be sharp.
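The extremal mixture for the Gauß bound can be verified numerically. The sketch below checks both regimes; the identification of the small-v extremal law as the uniform distribution on (−√3, √3) follows the discussion of the proof above.

```python
from math import sqrt

# P(|Y| >= v) for a law with weight w uniform on (-c, c) and point mass 1 - w at 0.
def tail_uniform(c, w, v):
    return w * max(0.0, (c - v) / c)

# Regime v >= 2/sqrt(3): point mass 1 - 4/(3 v^2) at 0, rest uniform on (-3v/2, 3v/2).
v = 1.2                                   # v >= 2/sqrt(3) ~ 1.1547
c = 1.5 * v
w = 4.0 / (3.0 * v**2)                    # uniform weight
second_moment = w * c**2 / 3.0            # the point mass contributes nothing
assert abs(second_moment - 1.0) < 1e-12
assert abs(tail_uniform(c, w, v) - 4.0 / (9.0 * v**2)) < 1e-12

# Regime v < 2/sqrt(3): the extremal law is uniform on (-sqrt(3), sqrt(3)).
v, c, w = 0.8, sqrt(3.0), 1.0
assert abs(tail_uniform(c, w, v) - (1.0 - v / sqrt(3.0))) < 1e-12
```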

Unimodal distributions
In the preceding section on concave distributions, we have already defined the related class of unimodal distributions, which we will study in the remaining sections. The factor 4/9 from the one-sided Gauß inequality of Theorem 4.1 will play a role in all these sections. For the proof of our extension of the Cantelli inequality from Theorem 2.1 to unimodal distributions, we shall use the following powerful result for unimodal distributions, which also shows the factor 4/9.
Theorem 5.1 (Vysochanskiȋ and Petunin inequality) Any unimodal random variable W with finite second moment satisfies inequality (26). Proof The proof of Vysočanskiȋ and Petunin (1980) and Vysochanskiȋ and Petunin (1983) has been smoothed by Pukelsheim (1994) and invokes Gauß's inequality presented in Theorem 4.2.
Actually, for √(8/3) ≤ v inequality (26) implies the Gauß inequality (25). Observe that the bound in (26) can be described as the minimum of the three expressions at its right hand side.
Here is our analogue of Cantelli's inequality from Theorem 2.1.
Theorem 5.2 Let the distribution of the standardized random variable Z be unimodal. For any v ≥ 0, the inequality P(Z ≥ v) ≤ 4/(3(1 + v²)) − 1/3 for 0 ≤ v ≤ √(5/3), and P(Z ≥ v) ≤ 4/(9(1 + v²)) for v ≥ √(5/3), (27) holds. Proof Applying Theorem 5.1 with W = Z + 1/v and w = v + 1/v, we obtain (27) after some computation. Additional computation shows that the random variables mentioned in the theorem attain equality.
Comparing this inequality (27) to Cantelli's inequality from Theorem 2.1, we note the extra factor 4/9 for larger values of v; see also Table 1. Furthermore, note that the bound in (27) can be viewed as the minimum of the two functions in there, and that these functions intersect at v = √(5/3). Next, we turn to the general case of asymmetric intervals around 0. The Vysochanskiȋ and Petunin inequality from Theorem 5.1 implies the following result.
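The extremal law for large v (as described after Theorem 5.5 below: mass (3v² − 1)/(3(1 + v²)) at −1/v, the rest uniform on [−1/v, (1 + 3v²)/(2v)]) can be checked numerically; the large-v form 4/(9(1 + v²)) of the bound is consistent with the value 4/117 at v = 2√3 used in the Introduction.

```python
from math import sqrt

v = 2.0 * sqrt(3.0)
p = (3.0 * v**2 - 1.0) / (3.0 * (1.0 + v**2))     # point mass at -1/v
q = 4.0 / (3.0 * (1.0 + v**2))                    # weight of the uniform part
a, b = -1.0 / v, (1.0 + 3.0 * v**2) / (2.0 * v)   # support of the uniform part
assert abs(p + q - 1.0) < 1e-12

mean = p * a + q * (a + b) / 2.0
second = p * a**2 + q * (a * a + a * b + b * b) / 3.0
assert abs(mean) < 1e-12 and abs(second - 1.0) < 1e-12   # standardized

tail = q * (b - v) / (b - a)                      # P(Z >= v)
assert abs(tail - 4.0 / (9.0 * (1.0 + v**2))) < 1e-12    # equals 4/117 here
```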

Theorem 5.3
For v ≥ √(5/3), max{v, (11v − 4√(6v² − 10))/5} ≤ u ≤ v + 2/v, and any standardized unimodal random variable Z, the inequality (28) holds with equality if Z = (v − u)/2 + UY, with U and Y independent random variables, U uniform on the unit interval, and Y the generalized Bernoulli random variable given in the corresponding display. Proof As in the proof of Theorem 2.2, we note that P(Z ≤ −u or Z ≥ v) ≤ P(|Z − (v − u)/2| ≥ (u + v)/2). Applying the third inequality of (26) from Theorem 5.1, we obtain (28). Computation shows that the random variable Y, and hence Z = (v − u)/2 + UY, is well defined under the conditions on u and v, and that this Z attains the bound.
An immediate consequence of this Theorem is the following one, which is the main content of Theorem 2 of Vysočanskiȋ and Petunin (1980). Instead of applying the Vysochanskiȋ and Petunin inequality from Theorem 5.1, we could choose the approach via Khintchine's characterization of unimodal distributions and Jensen's inequality as in Section 4. This would yield an admittedly laborious proof of Cantelli's inequality for unimodal distributions as given in Theorem 5.2. However, this Khintchine-Jensen approach also yields a partially improved version of Theorem 5.3, namely the following. Theorem 5.5 For any standardized unimodal random variable Z, the inequality (29) holds. For v + 2/v ≤ u, equality is attained if Z has mass (3v² − 1)/(3(1 + v²)) at −1/v and the rest of its mass, 4/(3(1 + v²)), uniformly distributed on the interval [−1/v, (1 + 3v²)/(2v)], like in Theorem 5.2.
The proof of this Theorem is given in Subsection 1 of the Appendix.

Unimodal distributions with coinciding mode and mean
When we restrict the class of distributions further to the class of unimodal distributions with coinciding mode and mean, then for the one-sided probability the factor 4/9 no longer plays the role it does in Theorem 5.2, the analogue of Cantelli's inequality, Theorem 2.1.
Theorem 6.1 For any standardized unimodal random variable Z with mode at 0, the inequality (31) holds with equality if Z = UY holds with U and Y independent random variables, U uniform on the unit interval, and Y the Bernoulli variable given in (32). Proof Let Z_0 and Y_0 be the classes of distributions as defined in Lemma A.2. By this lemma with u = ∞, we obtain (33) with ψ defined there. With Y in the class Y_0, we define an associated Bernoulli random variable Y₀. Since ψ is concave on [v, ∞) and vanishes elsewhere, we have Eψ(Y) ≤ Eψ(Y₀) by Jensen's inequality. By adding a positive amount to μ₊ and subtracting one from μ₋, if necessary, we can force the Bernoulli random variable Y₀ to have variance 3 while we maintain its mean at 0 and possibly increase Eψ(Y₀), as ψ is increasing on [v, ∞). We have shown that the supremum at the right hand side of (33) is attained by a Bernoulli random variable with atoms a < v ≤ b and 0 ≤ p ≤ 1. In view of E(Y₀) = 0 and E(Y₀²) = 3, the suprema from (33) reduce to the maximization in (34). Straightforward computation shows that the derivative with respect to x of the function in (34) is nonnegative if and only if (35) holds. Observe that the function x → 2x³ − 3x² is increasing for x ≥ 1 and negative for 1 ≤ x < 3/2. By Vieta's method to tackle cubic equations, we substitute x = (w + 1 + 1/w)/2 ≥ 3/2, w > 0, and obtain equality in (35) if and only if v²w⁶ − 2(6 + v²)w³ + v² = 0 holds. The positive roots of this quadratic equation in w³ yield solutions w₁ and w₂. Note that w₁w₂ = 1 and hence w₁ + 1/w₁ = w₂ + 1/w₂ hold. Consequently, the only real root x of the cubic function in (35) equals the one given in (31). Combining (34) and (35) (with equality), we arrive at the inequality in (31).
Straightforward verification shows that Z = U Y with Y as in (32) attains this bound.
The sharp, restricted Gauß inequality for random variables with coinciding mean and mode is the same as the original one from Theorem 4.2, as the distributions that attain equality in (25) are symmetric and hence, have coinciding mean and mode.

Theorem 6.2 (Restricted Gauß inequality) For any standardized unimodal random variable Z with mode at 0, the inequality (36)
holds with equality if the distribution of Z is the mixture of a uniform distribution on [−(3v/2 ∨ √3), 3v/2 ∨ √3] and a distribution degenerate at 0 such that the point mass at 0 equals [1 − 4/(3v²)] ∨ 0.
We extend Gauß's inequality to asymmetric intervals as in (1) as follows.
Theorem 6.3 For √3 ≤ v ≤ u and for any standardized unimodal random variable Z with mode at 0, the inequality (37) holds with γ defined in (38). Equality is attained in (37) for Z = UY with U and Y independent random variables, U uniform on the unit interval, and Y the generalized Bernoulli random variable from (39). Proof The proof of Theorem 5.5, given at the end of the Appendix, can be applied with M = 0 all the way up to and including the value of the upper bound in (67), with γ defined in (38) and satisfying the equation displayed there. Note that this equation may be rewritten, and with the help of the last two equations u may be eliminated from (67), resulting in the expression in (37). In view of this, the random variable Y from (39) follows straightforwardly from (66) and (51) with M = 0.

Remark 6.4
For u = v, the value of γ from (38) becomes 1 and the upper bound in (37) takes on the value 4/(9v²), which is in line with Theorem 6.2.

Symmetric Unimodal distributions
Under the extra assumption of unimodality, Theorem 3.1 for symmetric distributions may be sharpened too. Again we will encounter the extra factor 4/9. The resulting inequality for symmetric unimodal distributions has been obtained by Camp (1922), Meidell (1922), and Shewhart (1931). A different proof is given by Theil (1949). Still a different proof is given in Lemma 2 of Clarkson et al. (2009). However, our proof is shorter and simpler than theirs.

Theorem 7.1 Let the standardized random variable Z be symmetric and unimodal. For v ≥ 0, the inequality P(Z ≥ v) ≤ 2/(9v²) for v ≥ 2/√3, and P(Z ≥ v) ≤ (1 − v/√3)/2 for 0 ≤ v ≤ 2/√3,
holds. Equality is attained by the mixture of a uniform distribution on [−(3v/2 ∨ √3), 3v/2 ∨ √3] and a distribution degenerate at 0 such that the point mass at 0 equals [1 − 4/(3v²)] ∨ 0.
Actually, this Theorem is equivalent to Theorem 6.2. Indeed, let Z be standardized with mode at 0 and let B be an independent Bernoulli random variable with P(B = −1) = P(B = 1) = 1/2. As BZ is symmetric and hence P(|Z| ≥ v) = P(|BZ| ≥ v) = 2P(BZ ≥ v) holds, Theorem 7.1 implies Theorem 6.2. For the class of symmetric unimodal distributions, these Theorems also imply that inequality (36) holds and is sharp.
Next, we consider the case of asymmetric intervals around 0.

holds with equality if Z is uniform on [−3(u + v)/4, 3(u + v)/4] with probability 16/(3(u + v)²) and has a point mass at 0 with probability 1 − 16/(3(u + v)²),
holds with equality if Z is uniform on [−3v/2, 3v/2] with probability 4/(3v²) and has a point mass at 0 with probability 1 − 4/(3v²). Proof By Lemma A.1, we have Z = UY with Y symmetric around 0 because of the symmetry of Z around 0. Along the lines of Lemma A.2, we obtain (44) by Jensen's inequality.

Remark 7.3 The Theorem in
By increasing a or b if necessary, we see that this supremum is attained at 2a²p + 2b²q = 3.
Fix a and b with v ≤ a ≤ u ≤ b and write α = 1 − v/a ≥ 0 and β = 2 − (u + v)/b ≥ 0. In view of 2a²p + 2b²q = 3, we have p + q ≤ (a²p + b²q)/3 = 1/2. Consequently, we obtain (45). Studying the stationary points of the functions a → (1 − v/a)/a² and b → (2 − (u + v)/b)/b², we see with the help of (45) how to bound the supremum in (44). If 3v < u holds, the last inequality remains valid, and we conclude that the supremum in (44) is bounded as claimed. Straightforward computation shows that equality holds in (42) and (43) for the indicated random variables Z.
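The extremal law of the first case can be checked numerically; the excursion probability 16/(9(u + v)²) in the code is computed here from the extremal law itself, not quoted from the theorem.

```python
# Extremal law: uniform on [-L, L], L = 3(u+v)/4, with weight 16/(3(u+v)^2),
# and a point mass 1 - w at 0. Requires u <= L, i.e. u <= 3v, and w <= 1.
u, v = 2.5, 1.5
L = 3.0 * (u + v) / 4.0
w = 16.0 / (3.0 * (u + v)**2)

second = w * L**2 / 3.0                   # second moment of the mixture
assert abs(second - 1.0) < 1e-12          # standardized

prob = w * ((L - u) + (L - v)) / (2.0 * L)    # P(Z <= -u or Z >= v)
assert abs(prob - 16.0 / (9.0 * (u + v)**2)) < 1e-12
```

For u = v this value reduces to 4/(9v²), the two-sided Camp-Meidell/Gauß value.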

Discussion
Our bounds for the probability that (the absolute value of) a standardized random variable exceeds a given value are essential in quality control, where the risk of an out-of-specification (OOS) result is calculated or estimated. The probability that a quality characteristic is above the upper (USL) or below the lower (LSL) specification limit can be formulated as the probability P(Z ≤ −u or Z ≥ v) in (1), assuming that the process mean μ falls within the specification limits, μ ∈ (LSL, USL).
With the introduction of Statistical Process Control (SPC) by Walter Shewhart in the 1920s, three-sigma limits (μ ± 3σ) were suggested for the use of control charts to monitor the stability or predictability of quality characteristics of products and processes over time; see Di Bucchianico and Van den Heuvel (2015). On page 277 of Shewhart (1931), Shewhart mentioned the Bienaymé-Chebyshev inequality and stated "Experience indicates that v = 3 seems to be an acceptable economic value." In case the three-sigma control limits of a Shewhart control chart fall (just) within the specification limits, the process has a capability index C_pk of at least one, i.e. C_pk = min{(μ − LSL)/(3σ), (USL − μ)/(3σ)} ≥ 1. For such capable processes, the worst case risk of an OOS result becomes P(Z ≥ 3) for one-sided specification limits, P(|Z| ≥ 3) for two-sided specification limits when the process is perfectly centred, i.e. μ = (LSL + USL)/2, and P(Z ≤ −11/3 or Z ≥ 3) for non-centred processes with two-sided specification limits. This last probability follows from our Theorem 2.2, where we have chosen v = 3, u = v + 2/v = 11/3.
If u is at least 11/3, the OOS probability P(Z ≤ −11/3 or Z ≥ 3) for non-centred processes with two-sided specification limits and C_pk ≥ 1 is the worst case risk for producing products with a quality characteristic outside specification. When u is smaller than 11/3 (and larger than 3, to maintain C_pk ≥ 1), the OOS probability P(|Z| ≥ 3) is considered the worst case risk, while the OOS probability P(Z ≤ −11/3 or Z ≥ 3) is then considered the most favourable risk when the process has a capability of one, i.e. C_pk = 1.
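The worst-case OOS bounds at v = 3 can be tabulated as follows; the two lines marked as assumed forms are our readings of the bounds (7) and (27), consistent with the special cases discussed earlier.

```python
# Worst-case OOS bounds at v = 3, u = 11/3 under the distribution classes above.
v, u = 3.0, 11.0 / 3.0
chebyshev_two_sided = 1.0 / v**2                  # P(|Z| >= 3)  <= 1/9
cantelli_one_sided = 1.0 / (1.0 + v**2)           # P(Z >= 3)    <= 1/10
asymmetric = (4.0 + (u - v)**2) / (u + v)**2      # assumed form of bound (7)
assert abs(asymmetric - cantelli_one_sided) < 1e-12   # since u = v + 2/v
unimodal_two_sided = 4.0 / (9.0 * v**2)           # Gauss/VP factor 4/9: 4/81
unimodal_one_sided = 4.0 / (9.0 * (1.0 + v**2))   # assumed large-v form of (27): 4/90
print(chebyshev_two_sided, cantelli_one_sided, unimodal_two_sided, unimodal_one_sided)
```

The unimodality assumption reduces the worst-case risk at three-sigma limits by the factor 4/9.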
The bounds for these three OOS probabilities under different assumptions are presented in Table 2 below.
We see that these probabilities for the specific distributions in Table 2 show a rather large discrepancy with their bounds, which is due to their large deviation in shape from the distributions for which the bounds are sharp. Note that the symmetrized Pareto density from the table is log-convex and that the Laplace, logistic and normal densities are log-concave and hence strongly unimodal. The strongly unimodal distributions constitute an important class of distributions; cf. Section 1.4 of Dharmadhikari and Joag-dev (1988). Therefore, it might be useful to prove sharp bounds for this class as well.

Declarations
Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Appendix
In this Appendix, we prove a lemma with Khintchine's representation theorem, Khintchine (1938), and other lemmata that we need. Furthermore, the proof of Theorem 5.5 is given.

Lemma A.1 (Khintchine representation) If Z has a unimodal distribution, then there exist a constant M and independent random variables U and Y with U uniformly distributed on the unit interval, such that Z = M + U Y holds. If Z is symmetric around M, then Y is symmetric around 0.
Proof Let Z have its mode and possibly a point mass at M. Theorem V.9 of Feller (1966) and Theorem 1.3 of Dharmadhikari and Joag-dev (1988) yield the characterization Z = M + UY. As an alternative proof, cf. page 8 of Dharmadhikari and Joag-dev (1988), let the conditional distribution of Z − M given Z > M have density f and distribution function F on (0, ∞). Since f is nonincreasing, it may be written as a mixture of uniform densities on intervals (0, z). It follows that f is the density of UY given that Y is positive. With a similar argument for negative values of Z − M and Y, we obtain Khintchine's characterization. From this construction, it follows that Y is symmetric around 0 if Z is symmetric around M.
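Khintchine's representation can be illustrated with a toy mixture: for Z = UY with Y a two-point distribution on positive values c₁, c₂, the distribution function F(t) = Σᵢ wᵢ min(t/cᵢ, 1) on [0, ∞) is concave, as the representation predicts for mode 0 (the atom values below are illustrative).

```python
# Distribution function of Z = U*Y, with U uniform(0,1) independent of Y,
# and Y a mixture of point masses at positive values c (weights w).
def F(t, atoms):                           # atoms: list of (weight, c)
    return sum(w * min(t / c, 1.0) for w, c in atoms)

atoms = [(0.3, 1.0), (0.7, 2.5)]
grid = [0.05 * k for k in range(1, 60)]
for t0, t1, t2 in zip(grid, grid[1:], grid[2:]):
    # Concavity on an equally spaced grid: midpoint value at least the chord value.
    assert 2.0 * F(t1, atoms) >= F(t0, atoms) + F(t2, atoms) - 1e-12
```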
For the proofs of Theorems 5.5, 6.1, 6.3, and 7.2, we need also the following result.

Lemma A.2 Let Z_M be the class of random variables Z that have a unimodal distribution with mean zero, unit variance, and mode at M. Let Y_M be the class of random variables Y with mean −2M and variance 3 − M². Then

sup_{Z ∈ Z_M} P(Z ≤ −u or Z ≥ v) = sup_{Y ∈ Y_M} E Ψ_M(Y)

holds, with the function Ψ_M given by

Ψ_M(y) = max{0, 1 − (u + M)/(−y)} for y < 0, Ψ_M(0) = 0, Ψ_M(y) = max{0, 1 − (v − M)/y} for y > 0.

Proof By Lemma A.1, we may write Z = M + UY with U and Y independent, which implies 0 = EZ = M + (1/2)EY and 1 = var Z = E(Z²) = M² + M EY + (1/3)E(Y²). These equations yield EY = −2M, E(Y²) = 3(1 + M²), and hence var Y = 3 − M², which shows M² ≤ 3. Consequently, we get

P(Z ≥ v) = P(M + UY ≥ v) = E( max{0, 1 − (v − M)/Y} 1{Y > 0} ).

As a similar relation holds for P(Z ≤ −u), we obtain the lemma.
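The moment computations in this proof are easy to verify numerically. A minimal sketch (Python; the value of M and the two-point distribution chosen for Y are arbitrary illustrations satisfying EY = −2M and E(Y²) = 3(1 + M²)):

```python
import math

M = 0.8  # arbitrary mode location with |M| <= sqrt(3)

# Two-point Y with E(Y) = -2M and E(Y^2) = 3(1 + M^2):
# Y = -2M +/- sqrt(3 - M^2), each with probability 1/2 (variance 3 - M^2).
d = math.sqrt(3 - M**2)
ys = [-2*M - d, -2*M + d]
ps = [0.5, 0.5]

EY  = sum(p * y     for p, y in zip(ps, ys))
EY2 = sum(p * y * y for p, y in zip(ps, ys))

# For Z = M + U*Y with U uniform on (0,1) independent of Y:
# E(U) = 1/2 and E(U^2) = 1/3, hence
EZ  = M + 0.5 * EY
EZ2 = M**2 + 2 * M * 0.5 * EY + (1/3) * EY2

print(abs(EZ) < 1e-12, abs(EZ2 - 1) < 1e-12)  # True True
```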
In the proof of Theorem 5.5, we will also need to solve a cubic equation; Lemma A.3 ensures that this equation has exactly one positive root.

A.1 Proof of Theorem 5.5
Finally, we present our proof of Theorem 5.5 as given in Sect. 5. By Khintchine's Lemma A.1, the standardized unimodal random variable Z may be represented as Z = M + UY with U uniformly distributed on the unit interval and independent of the random variable Y. As Z is standardized, Y has to satisfy EY = −2M and E(Y²) = 3(1 + M²), with M the location of the mode of Z. It follows that the variance of Y equals 3 − M², and hence that |M| ≤ √3 holds. As u and v are both at least as large as √3, we have −u ≤ M ≤ v. Hence, Khintchine's representation and Lemma A.2 yield

P(Z ≤ −u or Z ≥ v) = E Ψ_M(Y),   (48)

where the function Ψ_M is given by

Ψ_M(y) = max{0, 1 − (u + M)/(−y)} for y < 0, Ψ_M(0) = 0, Ψ_M(y) = max{0, 1 − (v − M)/y} for y > 0.

We define the random variable Y1 by concentrating the mass of Y on three atoms μ−, μ0, and μ+. If necessary, by subtracting a positive value from μ− (or from μ0 if the mass at μ− vanishes) and adding it to μ+ (or to μ0 if the mass at μ+ vanishes), the random variable Y1 can be forced to satisfy E(Y1²) = 3(1 + M²), while increasing E Ψ_M(Y1) and maintaining EY1 = −2M. This mechanism does not work if Y1 is degenerate at −2M. However, in view of −u − M ≤ −2M ≤ v − M, we then have E Ψ_M(Y1) = 0, which is not an upper bound to (48), whatever the values of u and v are.
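The way Ψ_M enters can be illustrated for a single positive atom of Y: for fixed y > v − M > 0, the set {u ∈ [0, 1] : M + uy ≥ v} is the interval [(v − M)/y, 1], so P(M + Uy ≥ v) = 1 − (v − M)/y. A quick numerical sketch (Python; the values of M, v, and y are arbitrary illustrations):

```python
M, v, y = 0.3, 1.9, 4.0  # arbitrary values with y > v - M > 0

# Exact tail probability from the length of the interval [(v - M)/y, 1]:
exact = 1 - (v - M) / y

# Brute-force check: fraction of a fine u-grid on (0,1) with M + u*y >= v.
n = 1_000_000
approx = sum(1 for k in range(n) if M + ((k + 0.5) / n) * y >= v) / n

print(abs(approx - exact) < 1e-5)  # True
```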
We have shown that the supremum of the probability in (30) is attained by a random variable Z = M + UY as above, where Y is discrete with three atoms, namely −a, b, and c, carrying probabilities p, 1 − p − q, and q, respectively; cf. (50). The restrictions EZ = 0 and EZ² = 1 translate into restrictions on a, b, c, p, q, and M; cf. (51). Writing ζ and η as in (52), we define the set A of admissible parameter values. So far, we have seen that the supremum of the probability in (30) equals the supremum over A of the function in (54). We shall prove that for √3 ≤ v ≤ u ≤ v + 2/v this supremum is attained at a stationary point within A of the function in (54), and that for v + 2/v ≤ u it is attained at a point on the boundary of A. To this end, we shall show first that at the boundary of A the function in (54) cannot attain a value larger than the second bound given in (30). With Ā denoting the closure of A, we see that the boundary ∂A of A is the union of the sets A1, …, A8 considered below. We treat these boundary subsets as follows.
A1: On A1, the function in (54) takes a form which by Theorem 5.2 is bounded by 4/(9(1 + v²)), provided v² ≥ 5/3 holds. By differentiation, one observes that the function is decreasing if and only if u ≤ v + 2/v holds, and hence it has minimum value 1/(1 + v²). So, the first bound from (30) equals at least the bound 4/(9(1 + v²)) from Theorem 5.2.
A2: By symmetry, an analogous argument holds for A2 as for A1.
A3: By symmetry, an analogous argument holds for A3 as for A4.
A4: With p = 0, we have (55), and the same argument as for A1 holds here. Furthermore, the random variable Z that attains the second bound from (30) corresponds to a point of A4, which shows that this second bound is attained within A4.
A5: In view of EZ = 0 and EZ² = 1, the definition of Y from (50) implies EY = −2M and E(Y²) = 3(1 + M²), and hence E(Y + M)² = 3. With p + q = 1, this yields p(a − M)² + q(M + c)² = 3, which means that a − M and M + c cannot simultaneously be larger than √3. As both u and v equal at least √3, this shows that either 1 − (u + M)/a ≤ 0 or 1 − (v − M)/c ≤ 0 holds. Together with (55), we conclude that A5 ⊂ A1 ∪ A2 holds and that the second bound from (30) cannot be exceeded on A5.
A6: In case of |M| = √3, the variance of Y from (50) vanishes, i.e. Y is degenerate, and hence at least two of the restrictions p = 0, q = 0, and p + q = 1 hold. Consequently, we have A6 ⊂ A3 ∪ A4, and we see that the second bound from (30) cannot be exceeded on A6.
A7: If b equals −a, the random variable Y from (50) may be viewed as a Bernoulli random variable with p + q = 1, which implies A7 ⊂ A5.
A8: If b equals c, the random variable Y from (50) may be viewed as a Bernoulli random variable with p + q = 1, which implies A8 ⊂ A5.
We conclude that on the boundary ∂A of A the bound stated in (56) holds. As we have shown that at the boundary of A the function from (54) cannot attain a value larger than the first bound given in (30), we focus on the interior of A and determine the stationary points of the function in (54). For the time being, we fix b, and hence ζ and η, and note that the function to be maximized over a and c may be written as in (57). Some computation shows that the stationary points of this function of a and c are solutions of two equations. Ignoring their first factors, which correspond to the boundary conditions p = 0 and q = 0 treated under A3 and A4 above, we obtain (58). Dividing the second equality of (58) by v − M, then dividing the result by c³(c − b)(a + b) and writing a = γc, we arrive at a cubic equation in γ, which by Lemma A.3 has exactly one positive root. By a slight abuse of notation, we shall denote this unique positive root by γ too. We conclude that a and c satisfy (58) and that a = γc holds, with a and c depending on b and M, and with γ depending on M only. Note that ζ and η defined in (52) depend on b. Straightforward computation shows that the derivative of (57) with respect to b vanishes if a product of two factors vanishes. If the second factor vanishes, (51) implies the boundary case A5 treated above. So, ignoring the second factor and multiplying by ac, we arrive at (61). The first equation from (58) can be rewritten as (62). Adding up (61) and (62) and dividing the result by c², we obtain (63). Substituting this into (61) with c = a/γ and multiplying the result by a⁻³γ³, we get an equation that, with the help of (59), may be rewritten as (64). One may verify that (64) can be factorized, with a − (u + M) as one of the factors. As a = u + M is a boundary case, we conclude that the remaining factor has to vanish. Note that (59) may be reformulated in terms of M and γ, and that hence also b, c, and the function to be maximized itself, as given in (57), can be expressed in terms of M and γ. As γ is a complicated function of M, we shall write M in terms of γ.
To this end, we rewrite (59) as

(u + M)/(v − M) = γ²(γ + 3)/(3γ + 1)

and notice that this implies

M = (γ²(γ + 3)v − (3γ + 1)u)/(γ + 1)³

and hence a = (3/8) … . As γ is positive, these values satisfy −a < b < c and −u ≤ M ≤ v, as prescribed by A. Substituting them into (49)–(52), we arrive at

256(γ + 1)⁵ / (27(γ + 3)³(3γ + 1)³(u + v)²) · ((γ + 3)(ζ + ηc) + (3γ + 1)(ζ − ηa)).
This contradiction shows that the stationary points corresponding to the second factor in (71) do not belong to A. It follows that for v + 2/v < u there are no stationary points within the interior of A, and hence the supremum in (54) is attained at the boundary ∂A of A. Consequently, (56) completes the proof, as computation shows that the random variables Z mentioned in the theorem attain the bounds.
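The relation (u + M)/(v − M) = γ²(γ + 3)/(3γ + 1) and the resulting expression M = (γ²(γ + 3)v − (3γ + 1)u)/(γ + 1)³ can be verified in exact rational arithmetic; in the sketch below (Python), the values of γ, u, and v are arbitrary rational illustrations, not values from the paper:

```python
from fractions import Fraction as F

g = F(3, 2)            # an arbitrary gamma > 0
u, v = F(9, 4), F(2)   # arbitrary values with u >= v

# M expressed in terms of gamma, u, and v:
M = (g**2 * (g + 3) * v - (3*g + 1) * u) / (g + 1)**3

# This choice of M satisfies the rewritten form of (59) exactly:
lhs = (u + M) / (v - M)
rhs = g**2 * (g + 3) / (3*g + 1)
print(lhs == rhs)  # True
```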