1 Introduction and the notation

Positive and negative dependence concepts play a very important role not only in mathematical statistics but also in applications of probability theory, in particular in mathematical physics. The definitions of positively and negatively quadrant dependent random variables (r.v.’s) were introduced by Lehmann in 1966 (cf. [5]) and soon after extended to the multivariate case by Esary et al. and Joag-Dev and Proschan, who introduced the notion of positive and negative association (cf. [3, 4]). Nowadays, a comprehensive study of this topic is contained in the monographs of Bulinski and Shashkin (cf. [2]), Oliveira (cf. [10]) and Prakasa Rao (cf. [11]).

Let us recall that the random variables \(X,Y\) are positively quadrant dependent (PQD) if

$$\begin{aligned} H_{X,Y}(t,s):=P(X\le t,Y\le s)-P(X\le t)P(Y\le s)\ge 0 \end{aligned}$$

for all \(t,s\in {\mathbb {R}}\) and \(X,Y\) are negatively quadrant dependent (NQD) if \(H_{X,Y}(t,s)\le 0\). It is well known that \(X,Y\) are NQD iff \(X,-Y\) are PQD. In view of this duality, we shall focus only on the PQD case later on.

Denote by

$$\begin{aligned} \mathop {\mathrm {Cov_H}}(X,Y)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty } H_{X,Y}(t,s) dtds \end{aligned}$$

the so-called Hoeffding covariance. It is always well defined (though possibly infinite) for PQD or NQD r.v.’s, and if the usual product moment covariance exists, it equals the Hoeffding covariance. From this fact it follows that uncorrelated PQD (or NQD) r.v.’s are independent. Therefore, in the study of limit theorems, covariance is usually used to "control" the dependence of r.v.’s. It is also well known that monotonic functions of positively (negatively) dependent r.v.’s inherit such properties. In particular, the indicators \({\mathbb {I}}_{( -\infty ,t\rangle }(X), {\mathbb {I}}_{( -\infty ,s\rangle }(Y)\) are PQD (NQD), provided \(X,Y\) are PQD (NQD). Here and in the sequel \({\mathbb {I}}_A(x)\) denotes the indicator function of a set \(A\).
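For integer-valued r.v.’s the Hoeffding integral reduces to a sum over unit lattice cells, which makes the equality of the two covariances easy to verify numerically. The following Python sketch does this for a toy joint distribution on \(\{0,1,2\}^2\); the distribution itself is our own illustrative assumption, not taken from the text.

```python
# Toy joint distribution on {0,1,2}^2 (an illustrative assumption).
p = {(0, 0): 0.2, (0, 1): 0.1, (1, 1): 0.3, (2, 0): 0.1, (2, 2): 0.3}

def P(event):
    """Probability of the event {(x, y) : event(x, y) is true}."""
    return sum(q for (x, y), q in p.items() if event(x, y))

def H(t, s):
    """H_{X,Y}(t, s) = P(X<=t, Y<=s) - P(X<=t) P(Y<=s)."""
    return (P(lambda x, y: x <= t and y <= s)
            - P(lambda x, y: x <= t) * P(lambda x, y: y <= s))

# Product moment covariance.
EX = sum(x * q for (x, y), q in p.items())
EY = sum(y * q for (x, y), q in p.items())
EXY = sum(x * y * q for (x, y), q in p.items())
cov = EXY - EX * EY

# Hoeffding covariance: H is constant on the unit cells [i, i+1) x [j, j+1)
# and vanishes outside [0, 2) x [0, 2), so the double integral is a sum.
cov_h = sum(H(i, j) for i in range(2) for j in range(2))

assert abs(cov - cov_h) < 1e-12
```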

It is easy to see that

$$\begin{aligned} \mathop {\mathrm {Cov}}\left( {\mathbb {I}}_{( -\infty ,t\rangle }(X),{\mathbb {I}}_{( -\infty ,s\rangle }(Y)\right) =H_{X,Y}(t,s), \end{aligned}$$

thus, in the study of limit theorems for empirical processes based on positively or negatively dependent observations, it is important to control \(H_{X,Y}(t,s)\). In this context, upper bounds for \(H_{X,Y}(t,s)\) in terms of \(\mathop {\mathrm {Cov_H}}(X,Y)\) are very useful. On the other hand, it is interesting to establish how far the random variables \(X,Y\) are from independent ones, in the sense of the difference between the joint distribution and the product of its marginals. Bounds for the covariance of the indicator functions in terms of the covariance of the random variables are called covariance inequalities.

The first inequalities of the form

$$\begin{aligned} \sup _{t,s\in {\mathbb {R}}} H_{X,Y}(t,s)\le C\cdot \phi (\mathop {\mathrm {Cov}}(X,Y)), \end{aligned}$$
(1.1)

where \(X,Y\) are associated and absolutely continuous and \(C\) depends on the densities of \(X,Y\), were obtained by Bagai and Prakasa Rao (cf. [1]) and Roussas (cf. [13]). These authors studied properties of the estimators of the survival function and kernel estimators of the density based on samples of associated r.v.’s. The inequalities (1.1) were intensively studied in [6, 7] and [8]. In particular, in [8] it was proved that if \(X,Y\) are absolutely continuous PQD r.v.’s with bounded densities \(f_X, f_Y\) (in the \(L^{\infty }\) norm), then

$$\begin{aligned} \sup _{t,s\in {\mathbb {R}}}H_{X,Y}(t,s)\le \left( \frac{3}{2}\left\| f_X\right\| _{\infty }\left\| f_Y\right\| _{\infty }\mathop {\mathrm {Cov_H}}(X,Y)\right) ^{1/3}. \end{aligned}$$
(1.2)

The discrete case was considered in [6] and it was proved that if \(X,Y\) are integer-valued PQD r.v.’s, then

$$\begin{aligned} \sup _{t,s\in {\mathbb {R}}}H_{X,Y}(t,s)\le \mathop {\mathrm {Cov_H}}(X,Y). \end{aligned}$$
(1.3)
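The bound (1.3) can be checked directly on a small example. In the Python sketch below, the symmetric \(2\times 2\) PQD distribution is an assumed toy choice; for it, (1.3) in fact holds with equality.

```python
# A symmetric 2x2 PQD distribution (an assumed toy example).
p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.3}

def P(event):
    return sum(q for (x, y), q in p.items() if event(x, y))

def H(t, s):
    return (P(lambda x, y: x <= t and y <= s)
            - P(lambda x, y: x <= t) * P(lambda x, y: y <= s))

# For {0,1}-valued X, Y the function H vanishes outside [0,1) x [0,1)
# and is constant there, so sup H = H(0,0) and Cov_H = H(0,0).
sup_H = H(0, 0)
cov_h = H(0, 0)

assert sup_H >= 0        # X, Y are PQD
assert sup_H <= cov_h    # inequality (1.3); here with equality
```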

We would also like to refer the reader to the monograph [2], where the covariance inequalities for Lipschitz functions of associated r.v.’s are studied.

In recent years, lively interest in positively and negatively dependent r.v.’s has led to fruitful results well beyond covariance inequalities. Several limit theorems have been proved under this kind of dependence, in particular: laws of large numbers, the CLT and the rate of convergence in the CLT, invariance principles, moment bounds, convergence of empirical processes, etc. (see [2, 10, 11], where further references are given).

Now let \(X,Y\) be any random variables and \(X',Y'\) independent copies of \(X\) and \(Y\), i.e. \(X'\) has the same distribution as \(X\), \(Y'\) has the same distribution as \(Y\), and \(X', Y'\) are independent. We may write

$$\begin{aligned} H_{X,Y}(t,s)=P\left( [X,Y]\in Q(t,s)\right) -P\left( [X',Y']\in Q(t,s)\right) , \end{aligned}$$

where \(Q(t,s)=(-\infty ,t\rangle \times (-\infty ,s\rangle \) is a quadrant. Let us introduce the following notation

$$\begin{aligned} H_{X,Y}(D)=P\left( [X,Y]\in D\right) -P\left( [X',Y']\in D\right) , \end{aligned}$$

for \(D\subset {\mathbb {R}}^2\). Because, for positively or negatively dependent r.v.’s, the notions of uncorrelatedness and independence coincide, the key assumption in limit theorems for such r.v.’s always involves their covariance structure. Indeed, covariance plays the role of a measure of dependence between r.v.’s. Taking this into account, it appears to be important to establish how, in fact, the covariance assesses the dependence. The answer may be given by finding bounds for \(H_{X,Y}(D)\) in terms of \(\mathop {\mathrm {Cov_H}}(X,Y)\) (we shall call these comparison inequalities from now on). This is the main goal of the paper.

The paper is organized as follows. In the second section we consider the comparison inequalities for integer-valued r.v.’s while the third one is devoted to the absolutely continuous random vectors. Section 4 presents the special case of Farlie-Gumbel-Morgenstern (FGM) r.v.’s.

2 Discrete case

With a view to stating the main result of this section we shall introduce the notion of a \(\delta \)–hull of a set \(D\subset {\mathbb {R}}^2\) as follows

$$\begin{aligned} D(\delta )=\{(x,y)\in {\mathbb {R}}^{2}:x=t+a,y=s+b, \\ \text {for some}\ (t,s)\in D\text { and}\ |a|\le \delta ,|b|\le \delta \}. \end{aligned}$$

Let \({\mathbb {Z}}\) denote the set of integers. For integer-valued r.v.’s we have the following general comparison theorem.

Theorem 2.1

Let \(X,Y\) be any random variables with values in \({\mathbb {Z}}\) and \(D\subset {\mathbb {R}}^2\) be any set, then

$$\begin{aligned} |H_{X,Y}(D)|\le 4 \underset{D(1/2)}{\int \int }\left| H_{X,Y}(t,s)\right| dtds. \end{aligned}$$
(2.1)

For the proof of this theorem and the results of the next section, we shall need the following identity of Newman (cf. (4.10) in [9]).

Lemma 2.2

Let \(g_1\) and \(g_2\) be absolutely continuous functions and \(X,Y\) random variables such that \(g_1(X)\) and \(g_2(Y)\) are square-integrable. Then

$$\begin{aligned}&\mathop {\mathrm {Cov}}(g_{1}(X),g_{2}(Y)) \\&\quad =\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }g_{1}^{\prime }(t)g_{2}^{\prime }(s)\left[ P(X>t,Y>s)-P(X>t)P(Y>s)\right] dtds \\&\quad =\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }g_{1}^{\prime }(t)g_{2}^{\prime }(s)H_{X,Y}(t,s)dtds. \end{aligned}$$


Proof of Theorem 2.1

For \(i\in {\mathbb {Z}}\) let us define the functions

$$\begin{aligned} f_i(x)=\max \left( 1-2|x-i|,0\right) , \end{aligned}$$

which are absolutely continuous and differentiable except at the points \(i-1/2,i,i+1/2\). We shall use these functions to approximate the indicators. In fact, for integer-valued \(X\) we have \(f_i(X)={\mathbb {I}}_{\{i\}}(X)\). Let us observe that for \((i,j)\in {\mathbb {Z}}^2\), we have

$$\begin{aligned} H_{X,Y}(D) =\sum _{(i,j)\in D}\left( P(X=i,Y=j)-P(X^{\prime }=i,Y^{\prime }=j)\right) =\sum _{(i,j)\in D}\mathop {\mathrm {Cov}}\left( f_{i}(X),f_{j}(Y)\right) \end{aligned}$$

and by Lemma 2.2, we get

$$\begin{aligned} H_{X,Y}(D)=\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }\sum _{(i,j)\in D}f_{i}^{\prime }(t)f_{j}^{\prime }(s)H_{X,Y}(t,s)dtds. \end{aligned}$$

It is easy to see that

$$\begin{aligned} \left| \sum _{(i,j)\in D}f_{i}^{\prime }(t)f_{j}^{\prime }(s)\right| \le 4\sum _{(i,j)\in D}{\mathbb {I}}_{\left\langle i-1/2,i+1/2\right\rangle \times \left\langle j-1/2,j+1/2\right\rangle }(t,s), \end{aligned}$$

thus

$$\begin{aligned} \left| H_{X,Y}(D)\right|&\le \int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }\left| \sum _{(i,j)\in D}f_{i}^{\prime }(t)f_{j}^{\prime }(s)\right| \left| H_{X,Y}(t,s)\right| dtds \\&\le 4\underset{D(1/2)}{\int \int }\left| H_{X,Y}(t,s)\right| dtds. \end{aligned}$$

\(\square \)
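The two properties of the triangular functions \(f_i\) used in the proof (interpolation of the indicator of \(\{i\}\) on the integers and slope \(\pm 2\) on \((i-1/2,i+1/2)\)) can be confirmed numerically; the following is only a sanity sketch.

```python
# Triangular functions f_i(x) = max(1 - 2|x - i|, 0) from the proof.
f = lambda x, i: max(1 - 2 * abs(x - i), 0.0)

# f_i interpolates the indicator of {i} on the integers.
for i in range(-3, 4):
    for k in range(-3, 4):
        assert f(k, i) == (1.0 if k == i else 0.0)

# |f_i'| = 2 on (i - 1/2, i + 1/2) and f_i' = 0 outside <i - 1/2, i + 1/2>:
# checked via symmetric difference quotients away from the kinks.
eps = 1e-6
df = lambda x, i: (f(x + eps, i) - f(x - eps, i)) / (2 * eps)
assert abs(df(0.25, 0) + 2.0) < 1e-6   # slope -2 on (0, 1/2)
assert abs(df(-0.25, 0) - 2.0) < 1e-6  # slope +2 on (-1/2, 0)
assert df(0.75, 0) == 0.0              # zero outside the support
```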

For PQD r.v.’s we immediately obtain the following corollary.

Corollary 2.3

Let \(X,Y\) be PQD r.v.’s with values in \({\mathbb {Z}}\) and \(D\subset {\mathbb {R}}^2\) be any set, then

$$\begin{aligned} |H_{X,Y}(D)|\le 4 \mathop {\mathrm {Cov_H}}(X,Y). \end{aligned}$$
(2.2)

The inequality (2.2) is optimal up to a constant, in the sense that the left- and the right-hand side may approach 0 at the same speed. Let us illustrate this with an example.

Example 2.4

Let \(X_n,Y_n\) have the following distribution: \(P(X_n=0,Y_n=0)=P(X_n=1,Y_n=1)=\frac{1}{4}+\frac{1}{n}, P(X_n=0,Y_n=1)=P(X_n=1,Y_n=0)=\frac{1}{4}-\frac{1}{n}, n\ge 5\). Then \(X_n,Y_n\) are PQD and \(P\left( [X_n,Y_n]\in \{(0,0);(1,1)\}\right) =\frac{1}{2}+\frac{2}{n}, P\left( [X'_n,Y'_n]\in \{(0,0);(1,1)\}\right) =\frac{1}{2}\). Thus \(H_{X_n,Y_n}\left( \{(0,0);(1,1)\}\right) =\frac{2}{n}\). Moreover \(\mathop {\mathrm {Cov}}(X_n,Y_n)=\frac{1}{n}\).
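The computations of Example 2.4 can be reproduced in exact rational arithmetic. The sketch below fixes \(n=5\) (an arbitrary admissible value) and also confirms inequality (2.2) for this distribution.

```python
from fractions import Fraction

# Example 2.4 with the assumed sample value n = 5.
n = 5
q = Fraction(1, 4)
p = {(0, 0): q + Fraction(1, n), (1, 1): q + Fraction(1, n),
     (0, 1): q - Fraction(1, n), (1, 0): q - Fraction(1, n)}

D = {(0, 0), (1, 1)}
P_D = sum(p[ij] for ij in D)

# Marginals (uniform on {0, 1}) and the independent-copies probability.
px = {0: p[(0, 0)] + p[(0, 1)], 1: p[(1, 0)] + p[(1, 1)]}
py = {0: p[(0, 0)] + p[(1, 0)], 1: p[(0, 1)] + p[(1, 1)]}
P_D_indep = sum(px[i] * py[j] for (i, j) in D)

H_D = P_D - P_D_indep
cov = (sum(i * j * p[(i, j)] for i in (0, 1) for j in (0, 1))
       - sum(i * px[i] for i in (0, 1)) * sum(j * py[j] for j in (0, 1)))

assert H_D == Fraction(2, n)     # as claimed in the example
assert cov == Fraction(1, n)
assert H_D <= 4 * cov            # inequality (2.2)
```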

The inequality (2.2) may also be easily extended to the case of r.v.’s taking values in a lattice. Let \(L(a,b)=\{ia+b,i\in {\mathbb {Z}}\}\), where \(a>0\), be a set of lattice points. If \(X,Y\) are PQD and take their values in \(L(a,b)\), then \(U=\frac{X-b}{a}\) and \(V=\frac{Y-b}{a}\) are PQD as well, with values in \({\mathbb {Z}}\). Furthermore, \(\mathop {\mathrm {Cov_H}}(U,V)=\frac{1}{a^2}\mathop {\mathrm {Cov_H}}(X,Y)\). Therefore we get another corollary.

Corollary 2.5

Let \(X,Y\) be PQD r.v.’s with values in the lattice \(L(a,b)\) and \(D\subset {\mathbb {R}}^2\) be any set, then

$$\begin{aligned} |H_{X,Y}(D)|\le \frac{4}{a^2} \mathop {\mathrm {Cov_H}}(X,Y). \end{aligned}$$
(2.3)
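The rescaling identity behind Corollary 2.5 may be checked on a toy lattice distribution; the values \(a=3, b=1\) and the law of \([U,V]\) below are our own illustrative assumptions.

```python
# Toy law of a PQD pair [U, V] on Z^2 and its image on the lattice L(a, b);
# a = 3, b = 1 are assumed values for illustration.
a, b = 3, 1
p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.3}

def cov(joint):
    ex = sum(x * q for (x, y), q in joint.items())
    ey = sum(y * q for (x, y), q in joint.items())
    exy = sum(x * y * q for (x, y), q in joint.items())
    return exy - ex * ey

# Law of [X, Y] = [aU + b, aV + b] with values in L(a, b).
p_lattice = {(a * u + b, a * v + b): q for (u, v), q in p.items()}

# For PQD pairs Cov_H equals the product moment covariance, so the identity
# Cov_H(U, V) = Cov_H(X, Y) / a^2 reduces to a covariance computation.
assert abs(cov(p_lattice) - a**2 * cov(p)) < 1e-12
```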

3 Absolutely continuous case

In this section we shall study absolutely continuous random vectors \([X,Y]\). Denote by \(f_{X,Y}(x,y)\) the joint density of \([X,Y]\) and by \(f_{X}(x), f_Y(y)\) the marginal densities of \(X\) and \(Y\) respectively. We shall assume that these densities are bounded in the essential supremum norm (\(L^{\infty }\) norm). We put

$$\begin{aligned} C_f:=\Vert f_{X,Y}\Vert _{\infty }+\Vert f_{X}\Vert _{\infty }\cdot \Vert f_Y\Vert _{\infty }. \end{aligned}$$

We will obtain bounds for \(H_{X,Y}(D)\), where \(D\subset {\mathbb {R}}^2\) is a compact set whose boundary is a Jordan curve \(\Gamma \) (continuous, piecewise \(C^1\), without self-intersections). The length of \(\Gamma \) will be denoted by \(L\) and the planar (Lebesgue) measure of \(D\) by \(\mu (D)\). Recall that, by the isoperimetric inequality, we have \(\mu (D)\le \frac{1}{4\pi }L^2\).

Theorem 3.1

Let the r.v.’s \(X,Y\) and the set \(D\) be as above, then

$$\begin{aligned} \left| H_{X,Y}(D)\right| \le C\cdot \left( \sup _{(t,s)\in D}\left| H_{X,Y}(t,s)\right| \right) ^{1/4}, \end{aligned}$$
(3.1)

where \(C =\left( 2\sqrt{2}\left( \frac{1}{4\pi }+1\right) \left( C_{f}+1\right) +8\sqrt{2} \right) \max (L^{2},1).\)

In the proof we shall use the following elementary lemma.

Lemma 3.2

Let \([X,Y]\) be a random vector with bounded density \(f_{X,Y}\). Let \(\varphi ,\psi :{\mathbb {R}}^{2}\rightarrow {\mathbb {R}}\) be measurable functions such that \(0\le \varphi ,\psi \le 1\). Let us also put

$$\begin{aligned}A=\left\{ (x,y)\in {\mathbb {R}}^{2}:\varphi (x,y)\ne \psi (x,y)\right\} , \end{aligned}$$

then

$$\begin{aligned} \left| E\varphi \left( X,Y\right) -E\psi \left( X,Y\right) \right| \le \mu (A)\cdot \left\| f_{X,Y}\right\| _{\infty }. \end{aligned}$$
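Lemma 3.2 can be illustrated numerically. In the sketch below, the uniform density on the unit square and the particular indicator-type functions \(\varphi ,\psi \) are our own choices; expectations are approximated by midpoint sums on a grid.

```python
# [X, Y] uniform on the unit square, so f_{X,Y} = 1; phi, psi are assumed
# indicator-type functions differing on the strip A = {0.5 < x <= 0.6}.
phi = lambda x, y: 1.0 if x <= 0.5 else 0.0
psi = lambda x, y: 1.0 if x <= 0.6 else 0.0

m = 200
pts = [(k + 0.5) / m for k in range(m)]
E = lambda g: sum(g(x, y) for x in pts for y in pts) / m**2  # midpoint sum

mu_A = 0.1    # planar measure of the set A where phi != psi
sup_f = 1.0   # essential supremum of the uniform density

assert abs(E(phi) - E(psi)) <= mu_A * sup_f + 1e-9  # Lemma 3.2 (with equality)
```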

Proof of Theorem 3.1

For \(i,j\in {\mathbb {Z}}, \delta >0\) and \(0<\eta \le \delta /2\) let us introduce the following notation. Consider a family of squares with vertices in the lattice \(L(\delta ,0)\) and disjoint interiors

$$\begin{aligned} S_{i,j,\delta }=\langle i\delta ,(i+1)\delta \rangle \times \langle j\delta ,(j+1)\delta \rangle . \end{aligned}$$

Define a family of "square-rings"

$$\begin{aligned} R_{i,j,\delta ,\eta }=S_{i,j,\delta }\setminus \left( i\delta +\eta ,(i+1)\delta -\eta \right) \times \left( j\delta +\eta ,(j+1)\delta -\eta \right) \end{aligned}$$

and define a family of "hat-like" functions

$$\begin{aligned} f_{i,\delta ,\eta }(x)=\min \left( \max \left( \frac{\delta }{2\eta }-\frac{ \left| x-\delta (i+1/2)\right| }{\eta },0\right) ,1\right) . \end{aligned}$$

These functions are absolutely continuous, equal to 0 for \(x\in (-\infty ,i\delta \rangle \cup \langle (i+1)\delta ,+\infty )\), equal to 1 for \(x\in \langle i\delta +\eta ,(i+1)\delta -\eta \rangle \), and linear otherwise. They are differentiable except at four points and \(|f'_{i,\delta ,\eta }(x)|\le 1/\eta \).
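The stated properties of \(f_{i,\delta ,\eta }\) are easy to confirm numerically; the parameter values in this sketch are arbitrary illustrative choices.

```python
# The hat-like function f_{i,delta,eta}; i = 2, delta = 1, eta = 0.25 are
# arbitrary illustrative parameters (eta <= delta / 2 as required).
def f(x, i, delta, eta):
    return min(max(delta / (2 * eta) - abs(x - delta * (i + 0.5)) / eta, 0.0), 1.0)

i, delta, eta = 2, 1.0, 0.25

assert f(i * delta, i, delta, eta) == 0.0            # 0 at the left endpoint
assert f((i + 1) * delta, i, delta, eta) == 0.0      # 0 at the right endpoint
assert f(i * delta + eta, i, delta, eta) == 1.0      # 1 on the plateau
assert f(delta * (i + 0.5), i, delta, eta) == 1.0    # 1 at the centre
assert abs(f(i * delta + eta / 2, i, delta, eta) - 0.5) < 1e-12  # linear ramp
```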

We begin with a comparison inequality for a single square \(S_{i,j,\delta }\). Let us observe that the product \(f_{i,\delta ,\eta }(x)f_{j,\delta ,\eta }(y)\) differs from the indicator \({\mathbb {I}}_{S_{i,j,\delta }}(x,y)\) only on the ring \(R_{i,j,\delta ,\eta }\), whose planar measure equals \(4(\delta \eta -\eta ^{2})\). Thus, by Lemma 3.2 applied to the vectors \([X,Y]\) and \([X^{\prime },Y^{\prime }]\),

$$\begin{aligned} \left| H_{X,Y}\left( S_{i,j,\delta }\right) -\mathop {\mathrm {Cov}}\left( f_{i,\delta ,\eta }(X),f_{j,\delta ,\eta }(Y)\right) \right| \le 4(\delta \eta -\eta ^{2})C_{f}, \end{aligned}$$

and since \(f_{i,\delta ,\eta }^{\prime }(t)f_{j,\delta ,\eta }^{\prime }(s)\) vanishes outside \(R_{i,j,\delta ,\eta }\) and is bounded by \(1/\eta ^{2}\) in absolute value, Lemma 2.2 gives

$$\begin{aligned} \left| H_{X,Y}\left( S_{i,j,\delta }\right) \right| \le 4(\delta \eta -\eta ^{2})C_{f}+\frac{1}{\eta ^{2}}\underset{R_{i,j,\delta ,\eta }}{\int \int }\left| H_{X,Y}(t,s)\right| dtds. \end{aligned}$$
(3.2)

Let \(\left\{ S_{i,j,\delta }\right\} _{(i,j)\in {\mathcal {I}}}\) be a cover of \(D\), i.e. \((i,j)\in {\mathcal {I}}\subset {\mathbb {Z}}^{2}\) iff \(S_{i,j,\delta }\cap D\ne \emptyset .\) Let us split the set of indices \({\mathcal {I}}\) into two disjoint parts \({\mathcal {I}}_{\Gamma }\) and \({\mathcal {I}}_{int}\). The family \(\left\{ S_{i,j,\delta }\right\} _{(i,j)\in {\mathcal {I}}_{\Gamma }}\) covers the boundary \(\Gamma \), i.e. \((i,j)\in {\mathcal {I}}_{\Gamma }\) iff \(S_{i,j,\delta }\cap \Gamma \ne \emptyset ,\) while the family \(\left\{ S_{i,j,\delta }\right\} _{(i,j)\in {\mathcal {I}}_{int}}\) fills the interior of \(D\), i.e. \((i,j)\in {\mathcal {I}}_{int}\) iff \(S_{i,j,\delta }\subset \text {int}\, D\). Let us observe that

$$\begin{aligned} \text {Card}\left( {\mathcal {I}}_{int }\right) \le \left\lceil \frac{\mu (D)}{\delta ^2}\right\rceil \le \frac{\mu (D)}{\delta ^2}+1\le \frac{1}{4\pi }\frac{L^2}{\delta ^2}+1. \end{aligned}$$
(3.3)

Further

$$\begin{aligned} \text {Card}\left( {\mathcal {I}}_{\Gamma }\right) \le 4\left\lceil \frac{L}{\delta }\right\rceil \le 4\left( \frac{L}{\delta }+1\right) , \end{aligned}$$
(3.4)

which follows from the fact that \(\Gamma \) may be divided into at most \(\lceil \frac{L}{\delta }\rceil \) consecutive parts of the length \(\delta \) (the last at most \(\delta \)), each may be covered by a square of the side \(\delta \) parallel to the axis, which in turn may be covered by at most four squares \(S_{i,j,\delta }\). Now, we have

$$\begin{aligned} \left| H_{X,Y}(D)\right| \le \sum _{(i,j)\in {\mathcal {I}}_{int}}\left| H_{X,Y}\left( S_{i,j,\delta }\right) \right| +\text {Card}\left( {\mathcal {I}}_{\Gamma }\right) \delta ^{2}C_{f}, \end{aligned}$$
(3.5)

since the part of \(D\) not covered by the interior squares is contained in the boundary squares, whose total measure does not exceed \(\text {Card}\left( {\mathcal {I}}_{\Gamma }\right) \delta ^{2}\), and, by Lemma 3.2 (applied with \(\psi =0\) to the vectors \([X,Y]\) and \([X^{\prime },Y^{\prime }]\)), \(\left| H_{X,Y}(A)\right| \le \mu (A)C_{f}\) for any measurable \(A\subset {\mathbb {R}}^{2}\).

Therefore, from (3.2), (3.3) and (3.4), it follows that

$$\begin{aligned} \left| H_{X,Y}(D)\right| \le 4\left( \frac{1}{4\pi }\frac{L^{2}}{\delta ^{2}}+1\right) (\delta \eta -\eta ^{2})C_{f}+\frac{1}{\eta ^{2}}\sum _{(i,j)\in {\mathcal {I}}_{int}}\underset{R_{i,j,\delta ,\eta }}{\int \int }\left| H_{X,Y}(t,s)\right| dtds+4\left( \frac{L}{\delta }+1\right) \delta ^{2}C_{f}. \end{aligned}$$
(3.6)

Let us put \(h=\sup _{(t,s)\in D}\left| H_{X,Y}(t,s)\right| \). If \(h=0,\) then it is easy to prove that \(\left| H_{X,Y}(D)\right| =0,\) so assume that \(h>0.\) Further, let \(\widetilde{L}=\max (L,1)\) and assume that \(\delta \le 1.\) Then

$$\begin{aligned} \left| H_{X,Y}(D)\right| \le \frac{4\eta }{\delta }\left( \frac{1}{4\pi }+1\right) \widetilde{L} ^{2}C_{f}+\frac{4}{\delta \eta }\left( \frac{1}{4\pi }+1\right) \widetilde{L} ^{2}h+8\delta \widetilde{L}^{2}C_{f}. \end{aligned}$$

We put \(\delta =\sqrt{2}\root 4 \of {h}\) and \(\eta =\sqrt{h}\) to obtain an optimal exponent at \(h\). Since \(h\le \frac{1}{4}\) we see that \(\delta \le 1\) and \(\eta \le \delta /2.\) Thus we get

$$\begin{aligned} \left| H_{X,Y}(D)\right| \le \root 4 \of {h}\widetilde{L}^{2}\left( 2\sqrt{2}\left( \frac{1}{4\pi } +1\right) (C_{f}+1)+8\sqrt{2}\right) \end{aligned}$$

and the proof is completed. \(\square \)

By direct application of (1.2) to Theorem 3.1, for PQD r.v.’s, we get the following inequality

$$\begin{aligned} \left| H_{X,Y}(D)\right| \le C\cdot \left( \mathop {\mathrm {Cov_H}}(X,Y)\right) ^{1/12}, \end{aligned}$$

where \(C =\left( 2\sqrt{2}\left( \frac{1}{4\pi }+1\right) (C_{f}+1)+8\sqrt{2} \right) \left( \frac{3}{2}C_{f}\right) ^{1/12}\max (L^{2},1).\)

A more careful study of the proof of Theorem 3.1, in the case of PQD r.v.’s, leads to the following result.

Theorem 3.3

Let the assumptions of Theorem 3.1 be satisfied and \(X,Y\) be PQD r.v.’s, then

$$\begin{aligned} \left| H_{X,Y}(D)\right| \le C\cdot \left( \mathop {\mathrm {Cov_H}}(X,Y)\right) ^{1/5}, \end{aligned}$$

where \(C=4+C_f\left( \frac{1}{4\pi }+10\right) \max (L^{2},1).\)

Proof

As in the proof of Theorem 3.1, from (3.6) we get

$$\begin{aligned} \left| H_{X,Y}(D)\right| \le \frac{4\eta }{\delta }\left( \frac{1}{4\pi }+1\right) \widetilde{L} ^{2}C_{f}+\frac{1}{\eta ^2 }\mathop {\mathrm {Cov_H}}(X,Y)+8\delta \widetilde{L}^{2}C_{f}. \end{aligned}$$

If \(\mathop {\mathrm {Cov_H}}(X,Y)\ge 1\) then the conclusion is a trivial inequality. Assume \(\mathop {\mathrm {Cov_H}}(X,Y)< 1\) and take \(\delta =\root 5 \of {\mathop {\mathrm {Cov_H}}(X,Y)}\) and \(\eta =\left( \mathop {\mathrm {Cov_H}}(X,Y)\right) ^{2/5}/2\) to optimize the exponent in \(\mathop {\mathrm {Cov_H}}(X,Y)\) and arrive at the conclusion. \(\square \)

4 FGM case

It is said that the r.v.’s \(X, Y\) have the joint FGM distribution function if

$$\begin{aligned} F_{X,Y}(x,y)=F_X(x)F_Y(y)+\rho F_X(x)(1-F_X(x))F_Y(y)(1-F_Y(y)), \end{aligned}$$

where \(\rho \in [-1, 1], F_{X, Y}, F_X, F_Y\) are the joint distribution function and marginal d.f.’s respectively. Denote by \(f_{X, Y}, f_X, f_Y\) their densities. The corresponding copula takes the form

$$\begin{aligned} C(u, v)=uv+\rho u(1-u)v(1-v) . \end{aligned}$$
(4.1)

(For details on copulas see [12]). In this section we shall consider a more general form of (4.1).
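The defining properties of the FGM copula (4.1) (uniform margins, the 2-increasing property, and PQD for nonnegative \(\rho \)) can be verified on a grid; \(\rho =0.7\) in the Python sketch below is an assumed sample value.

```python
# FGM copula C(u,v) = uv + rho*u(1-u)v(1-v); rho = 0.7 is an assumed value.
rho = 0.7
C = lambda u, v: u * v + rho * u * (1 - u) * v * (1 - v)

grid = [k / 20 for k in range(21)]
step = 1 / 20
for u in grid:
    # Uniform margins: C(u, 0) = 0 and C(u, 1) = u (and symmetrically).
    assert abs(C(u, 0)) < 1e-12 and abs(C(u, 1) - u) < 1e-12
    assert abs(C(0, u)) < 1e-12 and abs(C(1, u) - u) < 1e-12
    for v in grid:
        # PQD: H(u, v) = C(u, v) - uv >= 0 for rho >= 0.
        assert C(u, v) >= u * v - 1e-12

# 2-increasing property on the grid rectangles.
for u in grid[:-1]:
    for v in grid[:-1]:
        mass = C(u + step, v + step) - C(u + step, v) - C(u, v + step) + C(u, v)
        assert mass >= -1e-12
```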

Let \({\mathcal {H}}\) be a family of monotonically nonincreasing functions \(h:\langle 0, 1\rangle \rightarrow {\mathbb {R}}\) such that

(1) \(||h||_{\infty }\le 1\);

(2) \(\int _0^1h(t)dt=0\);

(3) \(\int _0^xh(t)dt\ge 0\) for \(x\in \langle 0, 1\rangle \).

Let

$$\begin{aligned} C(u,v)=uv+\rho H_1(u)H_2(v) , \end{aligned}$$
(4.2)

where \(H_1(u)=\int _0^uh_1(t)dt, H_2(v)=\int _0^vh_2(t)dt\) for some \(h_1, h_2\in {\mathcal {H}}\) and \(\rho \in (0, 1\rangle \). It is easy to see that \(C(u,v)\) is now a copula with the PQD property. Covariance inequalities for absolutely continuous r.v.’s \(X, Y\) with a copula of the form (4.2) were studied in [7], where only measurability of the functions \(h_1\) and \(h_2\) is demanded.

From the monotonicity of the functions \(h_{i}\in {\mathcal {H}}, i\in \{1,2\}\), we conclude that \(H_{i}(x)\) is a concave function; moreover, \(H_{i}(0)=H_{i}(1)=0\) by condition (2). As a result, we get

$$\begin{aligned} \int _{0}^{1}H_{i}(x)dx\ge \frac{1}{2}H_{i}(x),\quad \text {for any }x\in \left\langle 0,1\right\rangle \end{aligned}$$
(4.3)

which means that a triangle with unit base and height \(H_{i}(x)\) fits under the graph of \(H_{i}\).

Now, let us put \(x_{i}=\sup \left\{ x\in \left\langle 0,1\right\rangle :h_{i}(x)\ge 0\right\} .\) From \(\int _{0}^{1}h_{i}(t)dt=0\), we get \(\int _{0}^{x_{i}}h_{i}(t)dt=-\int _{x_{i}}^{1}h_{i}(t)dt.\) Thus, it is easy to see that

$$\begin{aligned} \int _{0}^{1}|h_{i}(t)|dt =\int _{0}^{x_{i}}h_{i}(t)dt-\int _{x_{i}}^{1}h_{i}(t)dt =2\int _{0}^{x_{i}}h_{i}(t)dt=2H_{i}(x_{i}). \end{aligned}$$
(4.4)

Formulas (4.3) and (4.4) lead to

$$\begin{aligned} \int _{0}^{1}|h_{i}(t)|dt\le 4\int _{0}^{1}H_{i}(x)dx. \end{aligned}$$
(4.5)

Inequality (4.5) enables us to prove the following comparison theorem for FGM distributed r.v.’s.
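As a quick sanity check of (4.3)–(4.5), one may take a concrete element of \({\mathcal {H}}\); the choice \(h(t)=1-2t\) below is ours, and the integrals are approximated by midpoint sums.

```python
# Sample element of H: h(t) = 1 - 2t (nonincreasing, |h| <= 1, zero mean),
# with H(x) = int_0^x h(t) dt = x - x^2; integrals via midpoint sums.
h = lambda t: 1 - 2 * t
H_fun = lambda x: x - x * x

n = 2000
mid = [(k + 0.5) / n for k in range(n)]
int_abs_h = sum(abs(h(t)) for t in mid) / n   # = 1/2
int_H = sum(H_fun(t) for t in mid) / n        # = 1/6

x0 = 0.5  # the point where h changes sign
assert abs(int_abs_h - 2 * H_fun(x0)) < 1e-6  # identity (4.4)
assert int_H >= H_fun(x0) / 2                 # concavity bound (4.3) at x0
assert int_abs_h <= 4 * int_H                 # inequality (4.5)
```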

Theorem 4.1

Let \(X,Y\) be absolutely continuous r.v.’s with the copula (4.2) and bounded densities. Then, for any Borel set \(B\subset {\mathbb {R}}^2\)

$$\begin{aligned} \left| H_{X,Y}(B)\right| \le 16\Vert f_X\Vert _{\infty }\Vert f_Y\Vert _{\infty }\mathop {\mathrm {Cov_H}}(X,Y). \end{aligned}$$
(4.6)

Proof

Under our assumptions

$$\begin{aligned} F_{X,Y}(x,y)=F_{X}(x)F_{Y}(y)+\rho H_{1}(F_{X}(x))H_{2}(F_{Y}(y)) \end{aligned}$$

and

$$\begin{aligned} f_{X,Y}(x,y)=f_{X}(x)f_{Y}(y)+\rho h_{1}(F_{X}(x))f_{X}(x)h_{2}(F_{Y}(y))f_{Y}(y). \end{aligned}$$

Thus, for any Borel set \(B\subset {\mathbb {R}}^{2}\) we get

$$\begin{aligned} \left| H_{X,Y}(B)\right|&=\rho \left| \underset{B}{\int \int }h_{1}(F_{X}(x))f_{X}(x)h_{2}(F_{Y}(y))f_{Y}(y)dxdy\right| \\&\le \rho \int _{0}^{1}\left| h_{1}(u)\right| du\int _{0}^{1}\left| h_{2}(v)\right| dv\le 16\rho \int _{0}^{1}H_{1}(u)du\int _{0}^{1}H_{2}(v)dv \\&\le 16\left\| f_{X}\right\| _{\infty }\left\| f_{Y}\right\| _{\infty }\mathop {\mathrm {Cov_H}}(X,Y), \end{aligned}$$

by applying the transformation \(u=F_{X}(x), v=F_{Y}(y)\) and inequality (4.5); in the last step we used \(\mathop {\mathrm {Cov_H}}(X,Y)=\rho \int _{-\infty }^{\infty }H_{1}(F_{X}(t))dt\int _{-\infty }^{\infty }H_{2}(F_{Y}(s))ds\ge \frac{\rho }{\left\| f_{X}\right\| _{\infty }\left\| f_{Y}\right\| _{\infty }}\int _{0}^{1}H_{1}(u)du\int _{0}^{1}H_{2}(v)dv\). \(\square \)

As in Example 2.4, it may be shown that (4.6) is optimal up to a constant, i.e. the left- and the right-hand side of this inequality may converge to 0 at the same rate.

Example 4.2

Let

$$\begin{aligned} h_{1}(t)=h_{2}(t)=\left\{ \begin{array}{l@{\quad }l} \frac{1}{n}, &{} t\in \left\langle 0,\frac{1}{2}\right\rangle \\ -\frac{1}{n}, &{} t\in \left( \frac{1}{2},1\right\rangle \end{array}\right. \end{aligned}$$

Consider a random vector \(\left[ X,Y\right] \) such that \(X\) and \(Y\) have uniform distributions on the interval \(\left\langle 0,1\right\rangle \), so that the joint distribution function of \(\left[ X,Y\right] \) coincides with its copula, given by (4.2) with \(\rho =1.\) Then

$$\begin{aligned} H_{X,Y}\left( \left\langle 0,\frac{1}{2}\right\rangle \times \left\langle 0, \frac{1}{2}\right\rangle \cup \left( \frac{1}{2},1\right\rangle \times \left( \frac{1}{2},1\right\rangle \right) =\frac{1}{2n^{2}} \end{aligned}$$

and

$$\begin{aligned} \mathop {\mathrm {Cov}}\left( X,Y\right) =\frac{1}{16n^{2}}. \end{aligned}$$
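The quantities of Example 4.2 can be recomputed numerically; \(n=4\) and the grid resolution in the Python sketch below are assumed illustrative values.

```python
# Numerical check of Example 4.2; n = 4 is an assumed sample value.
n = 4
h = lambda t: 1 / n if t <= 0.5 else -1 / n
H1 = lambda t: t / n if t <= 0.5 else (1 - t) / n   # H1 = H2 = int_0^t h

m = 200
pts = [(k + 0.5) / m for k in range(m)]

# D = <0,1/2> x <0,1/2>  union  (1/2,1> x (1/2,1>; the joint density of
# [X, Y] is 1 + h(x) h(y), while the independent copies have density 1.
in_D = lambda x, y: (x <= 0.5) == (y <= 0.5)
P_D = sum(1 + h(x) * h(y) for x in pts for y in pts if in_D(x, y)) / m**2
H_D = P_D - 0.5                       # P([X',Y'] in D) = 1/2

# Cov(X, Y) = int int H1(t) H1(s) dt ds  (Hoeffding's formula).
cov = sum(H1(t) for t in pts) / m
cov = cov * cov

assert abs(H_D - 1 / (2 * n**2)) < 1e-9
assert abs(cov - 1 / (16 * n**2)) < 1e-9
```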