1 Introduction

We first recall the definitions of some dependent sequences.

Definition 1.1 (Lehmann [1])

Two random variables X and Y are said to be negative quadrant dependent (NQD) if

$$P(X\le x,\ Y\le y)\le P(X\le x)P(Y\le y)\quad\text{for any } x,y\in\mathbb{R}.$$

A sequence of random variables $\{X_n, n\ge 1\}$ is said to be pairwise negatively quadrant dependent (PNQD) if every pair of random variables in the sequence is NQD.
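For instance (a simple check we add here, not taken from the source), let $U$ be uniform on $[0,1]$ and consider the antithetic pair $X=U$, $Y=1-U$. For $x,y\in[0,1]$,
$$P(X\le x,\ Y\le y)=P(1-y\le U\le x)=\max(0,\,x+y-1)\le xy=P(X\le x)P(Y\le y),$$
since $xy-(x+y-1)=(1-x)(1-y)\ge 0$; the remaining cases are trivial, so $(X,Y)$ is NQD.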

Definition 1.2 (Newman [2])

A sequence $\{X_n, n\ge 1\}$ of random variables is said to be linearly negative quadrant dependent (LNQD) if for any disjoint subsets $A, B\subset\mathbb{Z}^{+}$ and positive $r_j$'s, $\sum_{k\in A}r_kX_k$ and $\sum_{j\in B}r_jX_j$ are NQD.

Definition 1.3 (Joag-Dev and Proschan [3])

Random variables $X_1, X_2, \ldots, X_n$ are said to be negatively associated (NA) if for every pair of disjoint subsets $A_1$ and $A_2$ of $\{1,2,\ldots,n\}$,

$$\operatorname{Cov}\bigl(f_1(X_i;\, i\in A_1),\, f_2(X_j;\, j\in A_2)\bigr)\le 0,$$

where $f_1$ and $f_2$ are coordinatewise increasing (or coordinatewise decreasing) functions such that this covariance exists. An infinite sequence of random variables $\{X_n; n\ge 1\}$ is said to be NA if every finite subfamily is NA.

Remark 1.1 (i) If $\{X_n, n\ge 1\}$ is a sequence of LNQD random variables, then $\{aX_n+b, n\ge 1\}$ is still a sequence of LNQD random variables, where $a$ and $b$ are real numbers. (ii) NA implies LNQD by the definitions, but LNQD does not imply NA.

Because of their wide applications, LNQD random variables have received more and more attention recently. For example, Newman [2] established the central limit theorem for a strictly stationary LNQD process; Wang and Zhang [4] provided uniform rates of convergence in the central limit theorem for LNQD sequences; Ko et al. [5] obtained a Hoeffding-type inequality for LNQD sequences; Ko et al. [6] studied the strong convergence of weighted sums of LNQD arrays; Wang et al. [7] obtained some exponential inequalities for a linearly negative quadrant dependent sequence; Wu and Guan [8] obtained mean convergence theorems for weighted sums of dependent random variables. In addition, Remark 1.1 shows that LNQD is strictly weaker than NA and, a fortiori, than independence. It is therefore of interest to study some inequalities for LNQD sequences and their applications to regression function estimation.

The main results of this paper depend on the following lemmas.

Lemma 1.1 (Lehmann [1])

Let the random variables X and Y be NQD. Then

(i) $EXY\le EX\,EY$;

(ii) if $f$ and $g$ are both nondecreasing (or both nonincreasing) functions, then $f(X)$ and $g(Y)$ are NQD.

Lemma 1.2 (Zhang [4])

Suppose that $\{X_n; n\ge 1\}$ is a sequence of LNQD random variables with $EX_n=0$. Then for any $p>1$, there exists a positive constant $D$ such that

$$E\Bigl|\sum_{i=1}^{n}X_i\Bigr|^{p}\le D\,E\Bigl(\sum_{i=1}^{n}X_i^{2}\Bigr)^{p/2}.$$
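As a quick sanity check (not part of the lemma's proof), the case $p=2$ holds with $D=1$: singletons are disjoint subsets, so any two distinct $X_i$, $X_j$ are NQD by the definition of LNQD, and Lemma 1.1(i) gives $EX_iX_j\le EX_i\,EX_j=0$; hence
$$E\Bigl(\sum_{i=1}^{n}X_i\Bigr)^{2}=\sum_{i=1}^{n}EX_i^{2}+2\sum_{1\le i<j\le n}EX_iX_j\le\sum_{i=1}^{n}EX_i^{2}=E\Bigl(\sum_{i=1}^{n}X_i^{2}\Bigr).$$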

2 Main results

Now, we state our main results with their proofs.

Theorem 2.1 Let $X$ and $Y$ be NQD random variables with finite second moments. If $f$ and $g$ are complex-valued functions defined on $\mathbb{R}$ with bounded derivatives $f'$ and $g'$, then

$$\bigl|\operatorname{Cov}\bigl(f(X),g(Y)\bigr)\bigr|\le\|f'\|_{\infty}\,\|g'\|_{\infty}\,\bigl|\operatorname{Cov}(X,Y)\bigr|.$$

Proof The proof follows easily along the lines of the proof of Theorem 4.1 in Roussas [9], p.773. □

Using Theorem 2.1, we establish the following inequality for characteristic functions (c.f.'s).

Theorem 2.2 Let $X_1,\ldots,X_m$ be LNQD random variables with finite second moments, and let $\varphi_j(t_j)$ and $\varphi(t_1,\ldots,t_m)$ be the c.f.'s of $X_j$ and $(X_1,\ldots,X_m)$, respectively. Then for all nonnegative (or all nonpositive) real numbers $t_1,\ldots,t_m$,

$$\Bigl|\varphi(t_1,\ldots,t_m)-\prod_{j=1}^{m}\varphi_j(t_j)\Bigr|\le 4\sum_{1\le k<l\le m}|t_kt_l|\,\bigl|\operatorname{Cov}(X_k,X_l)\bigr|.$$

Proof Write

$$\Bigl|\varphi(t_1,\ldots,t_m)-\prod_{j=1}^{m}\varphi_j(t_j)\Bigr|\le\bigl|\varphi(t_1,\ldots,t_m)-\varphi(t_1,\ldots,t_{m-1})\varphi_m(t_m)\bigr|+\Bigl|\varphi(t_1,\ldots,t_{m-1})-\prod_{j=1}^{m-1}\varphi_j(t_j)\Bigr|=:I_1+I_2.$$
(2.1)

Further notice that $e^{ix}=\cos(x)+i\sin(x)$. Thus,

$$\begin{aligned}
I_1&=\Bigl|E\exp\Bigl(i\sum_{j=1}^{m}t_jX_j\Bigr)-E\exp\Bigl(i\sum_{j=1}^{m-1}t_jX_j\Bigr)E\exp(it_mX_m)\Bigr|\\
&\le\Bigl|\operatorname{Cov}\Bigl(\cos\Bigl(\sum_{j=1}^{m-1}t_jX_j\Bigr),\cos(t_mX_m)\Bigr)\Bigr|+\Bigl|\operatorname{Cov}\Bigl(\sin\Bigl(\sum_{j=1}^{m-1}t_jX_j\Bigr),\sin(t_mX_m)\Bigr)\Bigr|\\
&\quad+\Bigl|\operatorname{Cov}\Bigl(\sin\Bigl(\sum_{j=1}^{m-1}t_jX_j\Bigr),\cos(t_mX_m)\Bigr)\Bigr|+\Bigl|\operatorname{Cov}\Bigl(\cos\Bigl(\sum_{j=1}^{m-1}t_jX_j\Bigr),\sin(t_mX_m)\Bigr)\Bigr|\\
&=:I_{11}+I_{12}+I_{13}+I_{14}.
\end{aligned}$$
(2.2)

By the definition of LNQD, $t_mX_m$ and $\sum_{j=1}^{m-1}t_jX_j$ are NQD for $t_1,\ldots,t_m>0$. Since $\cos$ and $\sin$ have derivatives bounded by 1 in absolute value, Theorem 2.1 yields

$$I_{11}\le\Bigl|\operatorname{Cov}\Bigl(\sum_{j=1}^{m-1}t_jX_j,\ t_mX_m\Bigr)\Bigr|\le\sum_{j=1}^{m-1}t_jt_m\bigl|\operatorname{Cov}(X_j,X_m)\bigr|.$$
(2.3)

Similarly, we have

$$I_{1i}\le\sum_{j=1}^{m-1}t_jt_m\bigl|\operatorname{Cov}(X_j,X_m)\bigr|,\qquad i=2,3,4.$$
(2.4)

From (2.2) to (2.4), we obtain

$$I_1\le 4\sum_{j=1}^{m-1}t_jt_m\bigl|\operatorname{Cov}(X_j,X_m)\bigr|.$$
(2.5)

Therefore, in view of (2.1) and (2.5), we obtain that

$$\Bigl|\varphi(t_1,\ldots,t_m)-\prod_{j=1}^{m}\varphi_j(t_j)\Bigr|\le 4\sum_{j=1}^{m-1}t_jt_m\bigl|\operatorname{Cov}(X_j,X_m)\bigr|+I_2.$$
(2.6)

For $I_2$, using the same decomposition as in (2.1), we obtain

$$I_2\le\bigl|\varphi(t_1,\ldots,t_{m-1})-\varphi(t_1,\ldots,t_{m-2})\varphi_{m-1}(t_{m-1})\bigr|+\Bigl|\varphi(t_1,\ldots,t_{m-2})-\prod_{j=1}^{m-2}\varphi_j(t_j)\Bigr|=:I_{21}+I_{22}.$$

Similarly to the calculation of $I_1$, we get

$$I_2\le 4\sum_{j=1}^{m-2}t_jt_{m-1}\bigl|\operatorname{Cov}(X_j,X_{m-1})\bigr|+I_{22}.$$
(2.7)

Thus, from (2.6) and (2.7), repeatedly applying the above procedure, we get

$$\begin{aligned}
\Bigl|\varphi(t_1,\ldots,t_m)-\prod_{j=1}^{m}\varphi_j(t_j)\Bigr|
&\le 4\sum_{j=1}^{m-1}t_jt_m\bigl|\operatorname{Cov}(X_j,X_m)\bigr|+4\sum_{j=1}^{m-2}t_jt_{m-1}\bigl|\operatorname{Cov}(X_j,X_{m-1})\bigr|\\
&\quad+4\sum_{j=1}^{m-3}t_jt_{m-2}\bigl|\operatorname{Cov}(X_j,X_{m-2})\bigr|+\cdots+4t_1t_2\bigl|\operatorname{Cov}(X_1,X_2)\bigr|\\
&=4\sum_{l=2}^{m}\sum_{k=1}^{l-1}t_kt_l\bigl|\operatorname{Cov}(X_k,X_l)\bigr|=4\sum_{1\le k<l\le m}t_kt_l\bigl|\operatorname{Cov}(X_k,X_l)\bigr|.
\end{aligned}$$
(2.8)

Note that for $t_1,\ldots,t_m<0$, $t_mX_m$ and $\sum_{j=1}^{m-1}t_jX_j$ are again NQD by the definition of LNQD. Arguing as above, we obtain

$$\Bigl|\varphi(t_1,\ldots,t_m)-\prod_{j=1}^{m}\varphi_j(t_j)\Bigr|\le 4\sum_{1\le k<l\le m}|t_kt_l|\,\bigl|\operatorname{Cov}(X_k,X_l)\bigr|.$$

This result, along with (2.8), completes the proof of the theorem. □

Theorem 2.3 Let $X_1,\ldots,X_n$ be LNQD random variables, and let $t_1,\ldots,t_n$ all be nonnegative (or all nonpositive) real numbers. Then

$$E\Bigl[\exp\Bigl(\sum_{j=1}^{n}t_jX_j\Bigr)\Bigr]\le\prod_{j=1}^{n}E\bigl[\exp(t_jX_j)\bigr].$$

Remark 2.1 Taking $t_j=1$, $j\ge 1$, in Theorem 2.3, we recover Lemma 3.1 of Ko et al. [5]; taking $t_j=t>0$, $j\ge 1$, we recover Lemma 1.4 of Wang et al. [7]. Thus, Theorem 2.3 improves and extends Lemma 3.1 in Ko et al. [5] and Lemma 1.4 in Wang et al. [7].

Proof For $t_1,\ldots,t_n>0$, it is easy to see that $\sum_{j=1}^{i-1}t_jX_j$ and $t_iX_i$ are NQD by the definition of LNQD, which implies that $\exp(\sum_{j=1}^{i-1}t_jX_j)$ and $\exp(t_iX_i)$ are also NQD for $i=2,3,\ldots,n$ by Lemma 1.1(ii). Then by Lemma 1.1(i) and induction,

$$\begin{aligned}
E\Bigl[\exp\Bigl(\sum_{j=1}^{n}t_jX_j\Bigr)\Bigr]
&\le E\Bigl[\exp\Bigl(\sum_{j=1}^{n-1}t_jX_j\Bigr)\Bigr]E\bigl[\exp(t_nX_n)\bigr]\\
&=E\Bigl[\exp\Bigl(\sum_{j=1}^{n-2}t_jX_j\Bigr)\exp(t_{n-1}X_{n-1})\Bigr]E\bigl[\exp(t_nX_n)\bigr]\\
&\le E\Bigl[\exp\Bigl(\sum_{j=1}^{n-2}t_jX_j\Bigr)\Bigr]E\bigl[\exp(t_{n-1}X_{n-1})\bigr]E\bigl[\exp(t_nX_n)\bigr]\\
&\le\cdots\le\prod_{j=1}^{n}E\bigl[\exp(t_jX_j)\bigr].
\end{aligned}$$
(2.9)

For $t_1,\ldots,t_n<0$, it is easy to see that $-t_1,\ldots,-t_n>0$, so $-\sum_{j=1}^{i-1}t_jX_j$ and $-t_iX_i$ are NQD by the definition of LNQD, which implies that $\exp(\sum_{j=1}^{i-1}t_jX_j)=\exp(-(-\sum_{j=1}^{i-1}t_jX_j))$ and $\exp(t_iX_i)=\exp(-(-t_iX_i))$ are also NQD for $i=2,3,\ldots,n$ by Lemma 1.1(ii). Similarly to the proof of (2.9), we obtain

$$E\Bigl[\exp\Bigl(\sum_{j=1}^{n}t_jX_j\Bigr)\Bigr]=E\Bigl[\exp\Bigl(-\sum_{j=1}^{n}(-t_j)X_j\Bigr)\Bigr]\le\prod_{j=1}^{n}E\bigl\{\exp\bigl[-(-t_jX_j)\bigr]\bigr\}=\prod_{j=1}^{n}E\bigl[\exp(t_jX_j)\bigr].$$
(2.10)

Therefore, the proof is complete by (2.9) and (2.10). □
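As an illustration (a check we add here, not taken from the source), let $(X_1,\ldots,X_n)$ be multivariate normal with $EX_j=0$ and $\operatorname{Cov}(X_i,X_j)=\sigma_{ij}\le 0$ for $i\ne j$; such a vector is NA (cf. Joag-Dev and Proschan [3]) and hence LNQD by Remark 1.1(ii). For $t_1,\ldots,t_n\ge 0$, the moment generating function gives
$$E\Bigl[\exp\Bigl(\sum_{j=1}^{n}t_jX_j\Bigr)\Bigr]=\exp\Bigl(\tfrac12\sum_{j=1}^{n}t_j^{2}\sigma_{jj}+\sum_{1\le i<j\le n}t_it_j\sigma_{ij}\Bigr)\le\exp\Bigl(\tfrac12\sum_{j=1}^{n}t_j^{2}\sigma_{jj}\Bigr)=\prod_{j=1}^{n}E\bigl[\exp(t_jX_j)\bigr],$$
in agreement with Theorem 2.3.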

Theorem 2.4 Suppose that $\{X_j: j\ge 1\}$ is a sequence of LNQD random variables with zero mean and $|X_j|\le d_j$ a.s. ($j=1,2,\ldots$). Let $t>0$ and $t\max_{1\le j\le n}d_j\le 1$. Then for any $\varepsilon>0$,

$$P\Bigl(\Bigl|\sum_{j=1}^{n}X_j\Bigr|\ge\varepsilon\Bigr)\le 2\exp\Bigl\{-t\varepsilon+t^{2}\sum_{j=1}^{n}EX_j^{2}\Bigr\}.$$

Proof The result follows from the proof of Theorem 2.3 in Wang et al. [7]. □
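For instance (a routine optimization we add for illustration, not stated in the source), writing $B_n=\sum_{j=1}^{n}EX_j^{2}$ and choosing $t=\varepsilon/(2B_n)$, which is admissible whenever $\varepsilon\max_{1\le j\le n}d_j\le 2B_n$, Theorem 2.4 gives the Bernstein-type bound
$$P\Bigl(\Bigl|\sum_{j=1}^{n}X_j\Bigr|\ge\varepsilon\Bigr)\le 2\exp\Bigl\{-\frac{\varepsilon^{2}}{4B_n}\Bigr\}.$$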

Theorem 2.5 Let $\{X_j: j\ge 1\}$ be a sequence of LNQD random variables with zero mean and finite second moments, $\sup_{j\ge 1}E(X_j^{2})<\infty$. Assume that $\{a_j, j\ge 1\}$ is a sequence of real constants satisfying $a:=\sup_{j\ge 1}|a_j|<\infty$. Then for any $r>1$,
$$E\Bigl|\sum_{j=1}^{n}a_jX_j\Bigr|^{r}\le D\,a^{r}n^{r/2}.$$

Proof Let $a_i^{+}:=\max\{a_i,0\}$ and $a_i^{-}:=\max\{-a_i,0\}$. Notice that

$$E\Bigl|\sum_{j=1}^{n}a_jX_j\Bigr|^{r}\le C\Bigl\{E\Bigl|\sum_{j=1}^{n}a_j^{+}X_j\Bigr|^{r}+E\Bigl|\sum_{j=1}^{n}a_j^{-}X_j\Bigr|^{r}\Bigr\},\qquad E\Bigl|\sum_{j=1}^{n}a_j^{+}X_j\Bigr|^{r}=a^{r}E\Bigl|\sum_{j=1}^{n}a_j^{+}a^{-1}X_j\Bigr|^{r}.$$
(2.11)

Let $Y_j=a_j^{+}a^{-1}X_j$. Then $\{Y_n, n\ge 1\}$ is still a sequence of LNQD random variables with $EY_n=0$ by Remark 1.1. Note that $0\le a_j^{+}a^{-1}\le 1$. By Lemma 1.2, we obtain

$$E\Bigl|\sum_{j=1}^{n}Y_j\Bigr|^{r}\le D\,n^{r/2},\quad\text{which implies that}\quad E\Bigl|\sum_{j=1}^{n}a_j^{+}X_j\Bigr|^{r}\le D\,a^{r}n^{r/2}.$$
(2.12)

Similarly as above, we have

$$E\Bigl|\sum_{j=1}^{n}a_j^{-}X_j\Bigr|^{r}\le D\,a^{r}n^{r/2}.$$
(2.13)

Combining (2.11)-(2.13), we get the result of the theorem. □

3 Application

To illustrate the application of the inequalities of Section 2, in this section we discuss the asymptotic normality of a general linear estimator for the following regression model:

$$Y_{ni}=g(x_{ni})+\varepsilon_{ni},\qquad 1\le i\le n,$$
(3.1)

where the design points $x_{n1},\ldots,x_{nn}\in A$, a compact subset of $\mathbb{R}^{d}$, $g$ is a bounded real-valued function on $A$, and the $\{\varepsilon_{ni}\}$ are regression errors with zero mean and finite variance $\sigma^{2}$. As an estimate of $g(\cdot)$, we consider the following general linear smoother:

$$g_n(x)=\sum_{i=1}^{n}w_{ni}(x)Y_{ni},$$
(3.2)

where the weight functions $w_{ni}(x)$, $i=1,\ldots,n$, depend on the fixed design points $x_{n1},\ldots,x_{nn}$ and on the number of observations $n$.
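As a concrete illustration (our own sketch; the paper only requires the weights to satisfy the generic conditions in Assumption (A2) below), one may take normalized kernel weights $w_{ni}(x)=K((x-x_{ni})/h_n)/\sum_{j=1}^{n}K((x-x_{nj})/h_n)$ for a kernel $K$ and bandwidth $h_n$. A minimal Python sketch of the estimator (3.2) with this hypothetical choice of weights:

```python
import numpy as np

def linear_smoother(x, x_design, y, bandwidth):
    """Evaluate g_n(x) = sum_i w_ni(x) * Y_ni at a single point x.

    Hypothetical weight choice: normalized Gaussian-kernel weights; any
    weights satisfying (A2) could be substituted here.
    """
    k = np.exp(-0.5 * ((x - x_design) / bandwidth) ** 2)  # kernel values K((x - x_ni)/h)
    w = k / k.sum()                                       # weights w_ni(x), summing to 1
    return np.dot(w, y)                                   # g_n(x)

# Toy usage: fixed design on [0, 1], g(x) = sin(2*pi*x), i.i.d. noise for simplicity.
rng = np.random.default_rng(0)
x_design = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x_design) + 0.2 * rng.standard_normal(200)
print(linear_smoother(0.25, x_design, y, bandwidth=0.05))  # close to g(0.25) = 1
```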

Our purpose here is to use the inequalities of Section 2 to establish asymptotic normality of the estimator (3.2) under the LNQD condition. The results obtained generalize the results of Roussas et al. [10] and Yang [11], which were established for strongly mixing sequences, to LNQD sequences. Adopting the basic assumptions of Yang [11], we assume the following:

Assumption (A1) (i) $g: A\to\mathbb{R}$ is a bounded function defined on the compact subset $A$ of $\mathbb{R}^{d}$; (ii) $\{\xi_t: t=0,\pm 1,\ldots\}$ is a strictly stationary LNQD time series with $E\xi_1=0$, $\operatorname{Var}(\xi_1)=\sigma^{2}\in(0,\infty)$; (iii) for each $n$, the joint distribution of $\{\varepsilon_{ni}: 1\le i\le n\}$ is the same as that of $\{\xi_1,\ldots,\xi_n\}$.

Denote

$$w_n(x):=\max\bigl\{|w_{ni}(x)|: 1\le i\le n\bigr\},\qquad \sigma_n^{2}(x):=\operatorname{Var}\bigl(g_n(x)\bigr).$$
(3.3)

Assumption (A2) (i) $\sum_{i=1}^{n}|w_{ni}(x)|\le C$ for all $n\ge 1$; (ii) $w_n(x)=O(\sum_{i=1}^{n}w_{ni}^{2}(x))$; (iii) $\sum_{i=1}^{n}w_{ni}^{2}(x)=O(\sigma_n^{2}(x))$.

Assumption (A3) $E|\xi_1|^{r}<\infty$ for some $r>2$, and $u(1)<\infty$, where, for $q\ge 1$, $u(q):=\sup_{j\ge 1}\sum_{i:|i-j|\ge q}|\operatorname{Cov}(\xi_i,\xi_j)|$.

Assumption (A4) There exist positive integers $p:=p(n)$ and $q:=q(n)$ such that $p+q\le n$ for sufficiently large $n$ and, as $n\to\infty$,

$$\text{(i) } qp^{-1}\to 0;\qquad\text{(ii) } nqp^{-1}w_n\to 0;\qquad\text{(iii) } pw_n\to 0;\qquad\text{(iv) } np^{\frac{r}{2}-1}w_n^{\frac{r}{2}}\to 0.$$
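For orientation (an illustrative check we add, not taken from the source), suppose the weights also satisfy $w_n(x)=O(n^{-1})$ in addition to (A2). Then the choice $p=\lfloor n^{2/3}\rfloor$, $q=\lfloor n^{1/3}\rfloor$ fulfills (A4):
$$qp^{-1}\asymp n^{-1/3}\to 0,\qquad nqp^{-1}w_n=O(n^{-1/3})\to 0,\qquad pw_n=O(n^{-1/3})\to 0,\qquad np^{\frac r2-1}w_n^{\frac r2}=O\bigl(n^{\frac13-\frac r6}\bigr)\to 0,$$
where the last limit uses $r>2$.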

Here, we will prove the following result.

Theorem 3.1 Let Assumptions (A1)∼(A4) be satisfied. Then

$$\sigma_n^{-1}(x)\bigl\{g_n(x)-Eg_n(x)\bigr\}\xrightarrow{d}N(0,1).$$

Proof We first introduce some notation. For convenience, we omit everywhere the argument $x$ and set $S_n=\sigma_n^{-1}(g_n-Eg_n)$ and $Z_{ni}=\sigma_n^{-1}w_{ni}\varepsilon_{ni}$ for $i=1,\ldots,n$, so that $S_n=\sum_{i=1}^{n}Z_{ni}$. Let $k=[n/(p+q)]$. Then $S_n$ may be split as $S_n=S_n'+S_n''+S_n'''$, where

$$\begin{gathered}
S_n'=\sum_{m=1}^{k}y_{nm},\qquad S_n''=\sum_{m=1}^{k}y_{nm}',\qquad S_n'''=y_{n,k+1}',\\
y_{nm}=\sum_{i=k_m}^{k_m+p-1}Z_{ni},\qquad y_{nm}'=\sum_{i=l_m}^{l_m+q-1}Z_{ni},\qquad y_{n,k+1}'=\sum_{i=k(p+q)+1}^{n}Z_{ni},\\
k_m=(m-1)(p+q)+1,\qquad l_m=(m-1)(p+q)+p+1,\qquad m=1,\ldots,k.
\end{gathered}$$
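A small sketch (ours, not from the paper) of how these Bernstein-type big and small blocks partition the index set $\{1,\ldots,n\}$:

```python
def blocks(n, p, q):
    """Big/small/remainder index blocks used to split S_n (1-based indices).

    Big block m:   k_m, ..., k_m + p - 1,  with k_m = (m-1)(p+q) + 1
    Small block m: l_m, ..., l_m + q - 1,  with l_m = (m-1)(p+q) + p + 1
    Remainder:     k(p+q) + 1, ..., n,     with k = floor(n / (p+q))
    """
    k = n // (p + q)
    big = [list(range((m - 1) * (p + q) + 1, (m - 1) * (p + q) + p + 1))
           for m in range(1, k + 1)]
    small = [list(range((m - 1) * (p + q) + p + 1, m * (p + q) + 1))
             for m in range(1, k + 1)]
    rest = list(range(k * (p + q) + 1, n + 1))
    return big, small, rest

big, small, rest = blocks(n=20, p=4, q=2)
# big   -> [[1, 2, 3, 4], [7, 8, 9, 10], [13, 14, 15, 16]]
# small -> [[5, 6], [11, 12], [17, 18]]
# rest  -> [19, 20]
```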

Thus, to prove the theorem, it suffices to show that

$$E(S_n'')^{2}\to 0,\qquad E(S_n''')^{2}\to 0,$$
(3.4)
$$S_n'\xrightarrow{d}N(0,1).$$
(3.5)

By Theorem 2.5, Assumptions (A2)(ii)∼(iii) and (A4)(i)∼(iii), we have

$$\begin{aligned}
E(S_n'')^{2}&=E\Bigl(\sum_{m=1}^{k}\sum_{i=l_m}^{l_m+q-1}\sigma_n^{-1}w_{ni}\xi_i\Bigr)^{2}\le D\,kq\,\sigma_n^{-2}w_n^{2}\le C\bigl(1+qp^{-1}\bigr)^{-1}nqp^{-1}w_n\to 0,\\
E(S_n''')^{2}&=E\Bigl(\sum_{i=k(p+q)+1}^{n}\sigma_n^{-1}w_{ni}\xi_i\Bigr)^{2}\le D\bigl(n-k(p+q)\bigr)\sigma_n^{-2}w_n^{2}\le C\bigl(1+qp^{-1}\bigr)pw_n\to 0.
\end{aligned}$$

Thus (3.4) holds.

We now proceed with the proof of (3.5). Let $\Gamma_n=\sum_{1\le i<j\le k}\operatorname{Cov}(y_{ni},y_{nj})$ and $s_n^{2}=\sum_{m=1}^{k}\operatorname{Var}(y_{nm})$; then $s_n^{2}=E(S_n')^{2}-2\Gamma_n$. Since $ES_n^{2}=1$, relation (3.4) yields $E(S_n')^{2}\to 1$. This would also imply that $s_n^{2}\to 1$, provided we show that $\Gamma_n\to 0$.

Indeed, by Assumption (A3), $u(1)<\infty$ implies $u(q)\to 0$ as $q\to\infty$. Then by stationarity and Assumption (A2), it can be shown that

$$\begin{aligned}
|\Gamma_n|&\le\sum_{1\le i<j\le k}\sum_{\mu=k_i}^{k_i+p-1}\sum_{\nu=k_j}^{k_j+p-1}\bigl|\operatorname{Cov}(Z_{n\mu},Z_{n\nu})\bigr|\le\sum_{1\le i<j\le k}\sum_{\mu=k_i}^{k_i+p-1}\sum_{\nu=k_j}^{k_j+p-1}\sigma_n^{-2}|w_{n\mu}w_{n\nu}|\bigl|\operatorname{Cov}(\xi_\mu,\xi_\nu)\bigr|\\
&\le C\sigma_n^{-2}w_n\sum_{i=1}^{k-1}\sum_{\mu=k_i}^{k_i+p-1}|w_{n\mu}|\,\sup_{j\ge 1}\sum_{t:|t-j|\ge q}\bigl|\operatorname{Cov}(\xi_j,\xi_t)\bigr|\le Cu(q)\to 0.
\end{aligned}$$
(3.6)

Next, in order to establish asymptotic normality, let $\{\eta_{nm}: m=1,\ldots,k\}$ be independent random variables such that $\eta_{nm}$ has the same distribution as $y_{nm}$ for $m=1,\ldots,k$. Then $E\eta_{nm}=0$ and $\operatorname{Var}(\eta_{nm})=\operatorname{Var}(y_{nm})$. Let $T_{nm}=\eta_{nm}/s_n$, $m=1,\ldots,k$; then $\{T_{nm}, m=1,\ldots,k\}$ are independent random variables with $ET_{nm}=0$ and $\sum_{m=1}^{k}\operatorname{Var}(T_{nm})=1$. Let $\varphi_X(t)$ denote the characteristic function of $X$. Then

$$\begin{aligned}
\Bigl|\varphi_{\sum_{m=1}^{k}y_{nm}}(t)-e^{-t^{2}/2}\Bigr|
&\le\Bigl|E\exp\Bigl(it\sum_{m=1}^{k}y_{nm}\Bigr)-\prod_{m=1}^{k}E\exp(ity_{nm})\Bigr|+\Bigl|\prod_{m=1}^{k}E\exp(ity_{nm})-e^{-t^{2}/2}\Bigr|\\
&=\Bigl|E\exp\Bigl(it\sum_{m=1}^{k}y_{nm}\Bigr)-\prod_{m=1}^{k}E\exp(ity_{nm})\Bigr|+\Bigl|\prod_{m=1}^{k}E\exp(it\eta_{nm})-e^{-t^{2}/2}\Bigr|\\
&=:I_3+I_4.
\end{aligned}$$
(3.7)

By Theorem 2.2, relation (3.6) and Assumption (A2), we obtain that

$$I_3\le 4t^{2}\sum_{1\le i<j\le k}\sum_{\mu=k_i}^{k_i+p-1}\sum_{\nu=k_j}^{k_j+p-1}\bigl|\operatorname{Cov}(Z_{n\mu},Z_{n\nu})\bigr|\le Cu(q)\to 0.$$
(3.8)

Thus, it suffices to show that $\sum_{m=1}^{k}\eta_{nm}\xrightarrow{d}N(0,1)$, which, on account of $s_n^{2}\to 1$, will follow from the convergence $\sum_{m=1}^{k}T_{nm}\xrightarrow{d}N(0,1)$. By the Lyapunov condition, it suffices to show that for some $r>2$,

$$\frac{1}{s_n^{r}}\sum_{m=1}^{k}E|\eta_{nm}|^{r}\to 0.$$
(3.9)

Using Theorem 2.5 and Assumptions (A2) and (A4)(iv), we have

$$\sum_{m=1}^{k}E|\eta_{nm}|^{r}=\sum_{m=1}^{k}E|y_{nm}|^{r}=\sum_{m=1}^{k}E\Bigl|\sum_{i=k_m}^{k_m+p-1}\sigma_n^{-1}w_{ni}\xi_i\Bigr|^{r}\le D\,k\,\sigma_n^{-r}w_n^{r}p^{r/2}\le Cnp^{\frac{r}{2}-1}w_n^{\frac{r}{2}}\to 0.$$

So, (3.9) holds. Thus, the proof is complete. □