1 Introduction

Let $x$ be an arbitrary positive real number. One can easily see that the inequality

\[
\Bigl(x^{\frac{3}{2}}-1\Bigr)\bigl(x^{2}-1\bigr) \le \frac{6}{5}\Bigl(x^{\frac{5}{2}}-1\Bigr)(x-1),
\]

for instance, is reduced to a simple polynomial inequality by putting $t = x^{\frac{1}{2}}$. However, at least to the author, it does not seem easy to give an elementary proof of the inequality

\[
x^{\frac{2-\sqrt{2}+\sqrt{3}}{4}}\bigl(x^{\sqrt{2}}-1\bigr)\Bigl(x^{\frac{\sqrt{2}+\sqrt{3}}{2}}-1\Bigr) \le \frac{1}{\sqrt{2}}\bigl(x^{\sqrt{2}+\sqrt{3}}-1\bigr)(x-1),
\]

which has a form very similar to the preceding one, although the numerical constants involved are different.
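
For the first inequality, one possible elementary verification (the factorization below is supplied here only as an illustration and is not claimed to be the intended one) runs as follows: with $t = x^{\frac{1}{2}} \ge 0$,

\[
\frac{6}{5}\Bigl(x^{\frac{5}{2}}-1\Bigr)(x-1)-\Bigl(x^{\frac{3}{2}}-1\Bigr)\bigl(x^{2}-1\bigr)
= \frac{1}{5}\bigl\{6\bigl(t^{5}-1\bigr)\bigl(t^{2}-1\bigr)-5\bigl(t^{3}-1\bigr)\bigl(t^{4}-1\bigr)\bigr\}
= \frac{1}{5}(t-1)^{4}(t+1)\bigl(t^{2}+3t+1\bigr) \ge 0.
\]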

The purpose of this article is to show the following theorem.

Theorem 1.1 Let $0 \le p$, $1 \le q$ and $0 \le r$ with $p+r \le (1+r)q$. If $0 < x$, then

\[
x^{\frac{1+r-\frac{p+r}{q}}{2}}\bigl(x^{p}-1\bigr)\Bigl(x^{\frac{p+r}{q}}-1\Bigr) \le \frac{p}{q}\bigl(x^{p+r}-1\bigr)(x-1).
\]
(1)
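
For example, the two inequalities in the introduction appear to be the instances of (1) with $(p,q,r)=\bigl(\frac{3}{2},\frac{5}{4},1\bigr)$ and $(p,q,r)=(\sqrt{2},2,\sqrt{3})$, respectively; both triples satisfy $p+r \le (1+r)q$. In the first case the exponent $\frac{1+r-\frac{p+r}{q}}{2}$ vanishes and $\frac{p}{q}=\frac{6}{5}$, while in the second case

\[
\frac{1+r-\frac{p+r}{q}}{2}=\frac{2-\sqrt{2}+\sqrt{3}}{4},\qquad
\frac{p+r}{q}=\frac{\sqrt{2}+\sqrt{3}}{2},\qquad
\frac{p}{q}=\frac{1}{\sqrt{2}}.
\]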

An elementary approach to proving the inequality (1) might be to consider the power series expansion.

Put $t = x-1$, $c = \frac{1+r-\frac{p+r}{q}}{2}$ and

\[
f(t) = \frac{p}{q}\bigl((1+t)^{p+r}-1\bigr)t - (1+t)^{c}\bigl((1+t)^{p}-1\bigr)\Bigl((1+t)^{\frac{p+r}{q}}-1\Bigr).
\]

Then we can expand f(t) around t=0 as

\[
\begin{aligned}
f(t) ={}& \frac{p}{q}\Bigl\{(p+r)+\tbinom{p+r}{2}t+\tbinom{p+r}{3}t^{2}+\tbinom{p+r}{4}t^{3}+\tbinom{p+r}{5}t^{4}+\cdots\Bigr\}t^{2}\\
&-\Bigl\{1+ct+\tbinom{c}{2}t^{2}+\tbinom{c}{3}t^{3}+\tbinom{c}{4}t^{4}+\cdots\Bigr\}
\Bigl\{p+\tbinom{p}{2}t+\tbinom{p}{3}t^{2}+\tbinom{p}{4}t^{3}+\tbinom{p}{5}t^{4}+\cdots\Bigr\}\\
&\qquad\times\Bigl\{\tfrac{p+r}{q}+\tbinom{\frac{p+r}{q}}{2}t+\tbinom{\frac{p+r}{q}}{3}t^{2}+\tbinom{\frac{p+r}{q}}{4}t^{3}+\tbinom{\frac{p+r}{q}}{5}t^{4}+\cdots\Bigr\}t^{2}\\
={}& a_{4}t^{4}+a_{5}t^{5}+a_{6}t^{6}+\cdots.
\end{aligned}
\]

Thus, the constant term and the coefficients of $t$, $t^{2}$ and $t^{3}$ are $0$. Further, one can obtain

\[
a_{4} = \frac{p(p+r)}{24q}\biggl(r^{2}+2pr+1-\Bigl(\frac{p+r}{q}\Bigr)^{2}\biggr), \qquad
a_{5} = \frac{p(p+r)(p+r-3)}{48q}\biggl(r^{2}+2pr+1-\Bigl(\frac{p+r}{q}\Bigr)^{2}\biggr)
\]

and

\[
a_{6} = -\frac{p(p+r)}{5760q}\biggl\{3\Bigl(\frac{p+r}{q}\Bigr)^{4}
+10\Bigl(\frac{p+r}{q}\Bigr)^{2}\bigl\{3(p+r)(p+r-8)+p^{2}+41\bigr\}
-33(p+r)^{4}+240(p+r)^{3}+30(p+r)^{2}\bigl(p^{2}-15\bigr)
-240(p+r)\bigl(p^{2}-1\bigr)+\bigl(3p^{2}+413\bigr)\bigl(p^{2}-1\bigr)\biggr\}.
\]

Thus, if the assumption on the parameters $p$, $q$ and $r$ in Theorem 1.1 is satisfied, then we have $0 < a_{4}$. However, the signs of $a_{5}$ and $a_{6}$ depend on the parameters, and one cannot see any sign of a simple rule among the coefficients of the higher-order terms. Although $f(t)$ is non-negative on a sufficiently small neighborhood of $t = 0$, it seems difficult to show by such an argument that $f(t)$ is non-negative on the whole interval $-1 < t < \infty$.
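
As a concrete illustration (the parameter values are chosen here only for the sake of example), for $(p,q,r)=\bigl(\frac{3}{2},\frac{5}{4},1\bigr)$ the above formulae give

\[
a_{4}=\frac{1}{8},\qquad a_{5}=-\frac{1}{32},
\]

while for $(p,q,r)=(3,2,1)$, which also satisfies $p+r \le (1+r)q$, they give $a_{4}=1$ and $a_{5}=\frac{1}{2}$; thus $a_{5}$ indeed changes its sign within the admissible range of the parameters.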

Let us recall some fundamental concepts on related matrix inequalities. A capital letter denotes a matrix whose entries are complex numbers. A square matrix $T$ is said to be positive semidefinite (denoted by $0 \le T$) if $0 \le (Tx,x)$ for all vectors $x$. We write $0 < T$ if $T$ is positive semidefinite and invertible. For two selfadjoint matrices $T_{1}$ and $T_{2}$ of the same size, the matrix inequality $T_{1} \le T_{2}$ is defined by $0 \le T_{2}-T_{1}$.

The celebrated Löwner-Heinz theorem includes:

Theorem 1.2 [1, 2]

Let $0 \le p \le 1$. If $0 \le B \le A$, then $B^{p} \le A^{p}$.

For $1 < p$, $0 \le B \le A$ does not always ensure $B^{p} \le A^{p}$. Furuta obtained an epoch-making extension of the Löwner-Heinz inequality by using the Löwner-Heinz inequality itself.
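
For instance (this standard example is included here only for the reader's convenience), take

\[
A=\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix},\qquad
B=\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.
\]

Then $0 \le B \le A$ because $A-B=\bigl(\begin{smallmatrix}1 & 1\\ 1 & 1\end{smallmatrix}\bigr) \ge 0$, while $A^{2}-B^{2}=\bigl(\begin{smallmatrix}4 & 3\\ 3 & 2\end{smallmatrix}\bigr)$ has negative determinant, so $B^{2} \le A^{2}$ fails.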

Theorem 1.3 [3]

Let $0 \le p$, $1 \le q$ and $0 \le r$ with $p+r \le (1+r)q$. If $0 \le B \le A$, then

\[
\bigl(A^{\frac{r}{2}}B^{p}A^{\frac{r}{2}}\bigr)^{\frac{1}{q}} \le A^{\frac{p+r}{q}}.
\]
(2)
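
For instance, the case $r=0$, $q=1$ of Theorem 1.3 recovers Theorem 1.2 (this observation is included here only as a remark): the condition $p+r \le (1+r)q$ then reads $p \le 1$, and, with $A^{\frac{r}{2}}=A^{0}$ understood as the identity, the conclusion (2) becomes

\[
B^{p} \le A^{p}.
\]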

The following result by Tanahashi is a full description of the best possibility of the range

\[
p+r \le (1+r)q \quad\text{and}\quad 1 \le q
\]

as long as all the parameters are positive.

Theorem 1.4 [4]

Let $p$, $q$, $r$ be positive real numbers. If $(1+r)q < p+r$ or $0 < q < 1$, then there exist $2\times 2$ matrices $A$, $B$ with $0 < B \le A$ that do not satisfy the inequality

\[
\bigl(A^{\frac{r}{2}}B^{p}A^{\frac{r}{2}}\bigr)^{\frac{1}{q}} \le A^{\frac{p+r}{q}}.
\]

One notices that the assumptions on the parameters in Theorem 1.1 and Theorem 1.3 coincide. As a matter of fact, the inequality (1) is a particular consequence of the Furuta inequality. We should point out that Tanahashi's argument in [4] is almost sufficient to deduce the former from the latter. In the next section, we prove Theorem 1.1 by using Theorem 1.3 and Tanahashi's argument.

2 Proof of Theorem 1.1

As we mentioned above, a major part of our proof of Theorem 1.1 is parallel to [4]. Our matrix $A$ is a little different from that in [4]: we use a single variable $y$ instead of $\varepsilon$ and $\delta$. This simplifies the argument to some extent, though the improvement is not essential.

Throughout this paper, we assume that 1<a<b and 0<y. We will consider matrices

\[
A = \begin{pmatrix} a & \sqrt{(a-1)y} \\ \sqrt{(a-1)y} & b+y \end{pmatrix}
\]

and

\[
B = \begin{pmatrix} 1 & 0 \\ 0 & b \end{pmatrix}.
\]

Then we have $0 < B \le A$. The eigenvalues of $A$ are $\frac{a+b+y\pm\sqrt{d}}{2}$, where $d = a^{2}+b^{2}+y^{2}-2ab+2(a+b-2)y$.
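
Indeed (we include the short verification for completeness), $B=\operatorname{diag}(1,b) > 0$ and

\[
A-B=\begin{pmatrix} a-1 & \sqrt{(a-1)y} \\ \sqrt{(a-1)y} & y \end{pmatrix},\qquad
\operatorname{tr}(A-B)=a-1+y>0,\qquad
\det(A-B)=(a-1)y-(a-1)y=0,
\]

so $A-B$ is positive semidefinite.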

Lemma 2.1 $0 < d < (a+b+y)^{2}$ and $a-b-y-\sqrt{d} \ne 0$.

Proof Obviously,

\[
d = (a-b)^{2}+y\bigl(y+2(a+b-2)\bigr) > 0, \qquad
d = (a+b+y)^{2}-4(ab+y) < (a+b+y)^{2}.
\]

If $a-b-y-\sqrt{d} = 0$ held, then squaring $a-b-y = \sqrt{d}$ would give $(a-1)y = 0$, that is, $a = 1$ or $y = 0$, which is contrary to the assumption. □

Let

\[
c = \frac{-2\sqrt{(a-1)y}}{a-b-y-\sqrt{d}}
\]

and

\[
U = \frac{1}{\sqrt{c^{2}+1}}\begin{pmatrix} c & 1 \\ 1 & -c \end{pmatrix}.
\]

Then U is unitary and

\[
U^{*}AU = \frac{1}{2}\begin{pmatrix} d_{1} & 0 \\ 0 & d_{2} \end{pmatrix},
\]

where

\[
d_{1} = a+b+y+\sqrt{d}, \qquad d_{2} = a+b+y-\sqrt{d}.
\]
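
As a quick consistency check, which is also used in the computation of $A_{3}$ below, the trace and the determinant of $A$ give

\[
\frac{d_{1}+d_{2}}{2}=a+b+y=\operatorname{tr}A,\qquad
\frac{d_{1}d_{2}}{4}=\frac{(a+b+y)^{2}-d}{4}=ab+y=\det A.
\]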

By the assumption and Theorem 1.3, A and B satisfy the inequality (2). Then

\[
\bigl(U^{*}A^{\frac{r}{2}}U\,U^{*}B^{p}U\,U^{*}A^{\frac{r}{2}}U\bigr)^{\frac{1}{q}} \le U^{*}A^{\frac{p+r}{q}}U,
\]

hence we have

\[
\biggl\{\begin{pmatrix} d_{1}^{\frac{r}{2}} & 0 \\ 0 & d_{2}^{\frac{r}{2}} \end{pmatrix}
U^{*}\begin{pmatrix} 1 & 0 \\ 0 & b^{p} \end{pmatrix}U
\begin{pmatrix} d_{1}^{\frac{r}{2}} & 0 \\ 0 & d_{2}^{\frac{r}{2}} \end{pmatrix}\biggr\}^{\frac{1}{q}}
\le 2^{-\frac{p}{q}}\begin{pmatrix} d_{1}^{\frac{p+r}{q}} & 0 \\ 0 & d_{2}^{\frac{p+r}{q}} \end{pmatrix}.
\]
(3)

Denote

\[
\begin{pmatrix} d_{1}^{\frac{r}{2}} & 0 \\ 0 & d_{2}^{\frac{r}{2}} \end{pmatrix}
U^{*}\begin{pmatrix} 1 & 0 \\ 0 & b^{p} \end{pmatrix}U
\begin{pmatrix} d_{1}^{\frac{r}{2}} & 0 \\ 0 & d_{2}^{\frac{r}{2}} \end{pmatrix}
= \frac{1}{c^{2}+1}\begin{pmatrix} A_{1} & A_{3} \\ A_{3} & A_{2} \end{pmatrix},
\]

where

\[
\begin{aligned}
A_{1} &= d_{1}^{r}\bigl(c^{2}+b^{p}\bigr), \qquad A_{2} = d_{2}^{r}\bigl(1+c^{2}b^{p}\bigr),\\
A_{3} &= d_{1}^{\frac{r}{2}}d_{2}^{\frac{r}{2}}c\bigl(1-b^{p}\bigr)
= \bigl((a+b+y)^{2}-d\bigr)^{\frac{r}{2}}c\bigl(1-b^{p}\bigr)
= (4ab+4y)^{\frac{r}{2}}c\bigl(1-b^{p}\bigr).
\end{aligned}
\]
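
These formulae follow from the straightforward computation (written out here since it is omitted above)

\[
U^{*}\begin{pmatrix} 1 & 0 \\ 0 & b^{p} \end{pmatrix}U
=\frac{1}{c^{2}+1}\begin{pmatrix} c^{2}+b^{p} & c(1-b^{p}) \\ c(1-b^{p}) & 1+c^{2}b^{p} \end{pmatrix},
\]

sandwiched between the two copies of $\operatorname{diag}\bigl(d_{1}^{\frac{r}{2}},d_{2}^{\frac{r}{2}}\bigr)$.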

Lemma 2.2 Let $p$, $q$, $r$ be positive real numbers. Then $A_{2} < A_{1}$ and $A_{3} < 0$.

Proof Since $d_{2} < d_{1}$ and $0 < r$, we have $d_{2}^{r} < d_{1}^{r}$. Moreover,

\[
\bigl(c^{2}+b^{p}\bigr)-\bigl(1+c^{2}b^{p}\bigr) = \bigl(c^{2}-1\bigr)\bigl(1-b^{p}\bigr), \qquad 1-b^{p} < 0
\]

and

\[
c^{2}-1 = -\frac{2(a-b)^{2}+2y^{2}+4(b-a)y+2(b-a+y)\sqrt{d}}{\bigl(a-b-y-\sqrt{d}\bigr)^{2}} < 0,
\]

hence we have $1+c^{2}b^{p} < c^{2}+b^{p}$. Thus $A_{2} < A_{1}$.

It is obvious that $1-b^{p} < 0$ and $0 < c$, and hence $A_{3} < 0$. □

Let

\[
V = \frac{1}{\sqrt{A_{1}-A_{2}+2\varepsilon_{1}}}
\begin{pmatrix} \sqrt{A_{1}-A_{2}+\varepsilon_{1}} & \sqrt{\varepsilon_{1}} \\ -\sqrt{\varepsilon_{1}} & \sqrt{A_{1}-A_{2}+\varepsilon_{1}} \end{pmatrix},
\]

where

\[
2\varepsilon_{1} = -A_{1}+A_{2}+\sqrt{(A_{1}-A_{2})^{2}+4A_{3}^{2}}.
\]

Then it is easy to see that $A_{3} = -\sqrt{(A_{1}-A_{2}+\varepsilon_{1})\varepsilon_{1}}$, $V$ is unitary and

\[
V^{*}\begin{pmatrix} A_{1} & A_{3} \\ A_{3} & A_{2} \end{pmatrix}V
= \begin{pmatrix} A_{1}+\varepsilon_{1} & 0 \\ 0 & A_{2}-\varepsilon_{1} \end{pmatrix}.
\]
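
These identities can be checked directly (we write $\Delta = A_{1}-A_{2}$ only for brevity): the definition of $\varepsilon_{1}$ gives

\[
(\Delta+\varepsilon_{1})\varepsilon_{1}
=\frac{\bigl(\sqrt{\Delta^{2}+4A_{3}^{2}}+\Delta\bigr)\bigl(\sqrt{\Delta^{2}+4A_{3}^{2}}-\Delta\bigr)}{4}
=A_{3}^{2},
\]

and $A_{1}+\varepsilon_{1}$, $A_{2}-\varepsilon_{1}$ are exactly $\frac{A_{1}+A_{2}\pm\sqrt{(A_{1}-A_{2})^{2}+4A_{3}^{2}}}{2}$, the eigenvalues of the matrix $\bigl(\begin{smallmatrix} A_{1} & A_{3} \\ A_{3} & A_{2} \end{smallmatrix}\bigr)$.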

The following lemma is one of the most important points in Tanahashi's argument. Although its substance is contained in the proof of the theorem in [4], we restate and prove it in our context for the readers' convenience.

Lemma 2.3

\[
\varepsilon_{1}\Bigl\{\gamma d_{1}^{\frac{p+r}{q}}-(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\Bigr\}\Bigl\{(A_{1}+\varepsilon_{1})^{\frac{1}{q}}-\gamma d_{2}^{\frac{p+r}{q}}\Bigr\}
\le (A_{1}-A_{2}+\varepsilon_{1})\Bigl\{\gamma d_{1}^{\frac{p+r}{q}}-(A_{1}+\varepsilon_{1})^{\frac{1}{q}}\Bigr\}\Bigl\{\gamma d_{2}^{\frac{p+r}{q}}-(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\Bigr\},
\]
(4)

where $\gamma = \Bigl(\frac{c^{2}+1}{2^{p}}\Bigr)^{\frac{1}{q}}$.

Proof The formula (3) implies

\[
(c^{2}+1)^{-\frac{1}{q}}\,V\begin{pmatrix} (A_{1}+\varepsilon_{1})^{\frac{1}{q}} & 0 \\ 0 & (A_{2}-\varepsilon_{1})^{\frac{1}{q}} \end{pmatrix}V^{*}
\le 2^{-\frac{p}{q}}\begin{pmatrix} d_{1}^{\frac{p+r}{q}} & 0 \\ 0 & d_{2}^{\frac{p+r}{q}} \end{pmatrix}.
\]
(5)

Write the left-hand matrix as

\[
(c^{2}+1)^{-\frac{1}{q}}(A_{1}-A_{2}+2\varepsilon_{1})^{-1}\begin{pmatrix} B_{1} & B_{3} \\ B_{3} & B_{2} \end{pmatrix},
\]

where

\[
\begin{aligned}
B_{1} &= (A_{1}-A_{2}+\varepsilon_{1})(A_{1}+\varepsilon_{1})^{\frac{1}{q}}+\varepsilon_{1}(A_{2}-\varepsilon_{1})^{\frac{1}{q}},\\
B_{2} &= \varepsilon_{1}(A_{1}+\varepsilon_{1})^{\frac{1}{q}}+(A_{1}-A_{2}+\varepsilon_{1})(A_{2}-\varepsilon_{1})^{\frac{1}{q}},\\
B_{3} &= -\sqrt{(A_{1}-A_{2}+\varepsilon_{1})\varepsilon_{1}}\,\Bigl\{(A_{1}+\varepsilon_{1})^{\frac{1}{q}}-(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\Bigr\}.
\end{aligned}
\]

Then, by the formula (5), we have

\[
0 \le \begin{pmatrix} \gamma(A_{1}-A_{2}+2\varepsilon_{1})d_{1}^{\frac{p+r}{q}}-B_{1} & -B_{3} \\ -B_{3} & \gamma(A_{1}-A_{2}+2\varepsilon_{1})d_{2}^{\frac{p+r}{q}}-B_{2} \end{pmatrix}.
\]

So, its determinant is also non-negative. We expand it to obtain

\[
0 \le \gamma^{2}(A_{1}-A_{2}+2\varepsilon_{1})^{2}d_{1}^{\frac{p+r}{q}}d_{2}^{\frac{p+r}{q}}
- \gamma(A_{1}-A_{2}+2\varepsilon_{1})d_{1}^{\frac{p+r}{q}}B_{2}
- \gamma(A_{1}-A_{2}+2\varepsilon_{1})d_{2}^{\frac{p+r}{q}}B_{1}
+ B_{1}B_{2}-B_{3}^{2}.
\]
(6)

Now,

\[
\begin{aligned}
B_{1}B_{2}-B_{3}^{2}
={}& \bigl\{(A_{1}-A_{2}+\varepsilon_{1})(A_{1}+\varepsilon_{1})^{\frac{1}{q}}+\varepsilon_{1}(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\bigr\}
\bigl\{\varepsilon_{1}(A_{1}+\varepsilon_{1})^{\frac{1}{q}}+(A_{1}-A_{2}+\varepsilon_{1})(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\bigr\}\\
&-(A_{1}-A_{2}+\varepsilon_{1})\varepsilon_{1}\bigl\{(A_{1}+\varepsilon_{1})^{\frac{1}{q}}-(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\bigr\}^{2}\\
={}& (A_{1}-A_{2}+2\varepsilon_{1})^{2}(A_{1}+\varepsilon_{1})^{\frac{1}{q}}(A_{2}-\varepsilon_{1})^{\frac{1}{q}}.
\end{aligned}
\]

Hence, the formula (6) implies

\[
0 \le (A_{1}-A_{2}+2\varepsilon_{1})\bigl\{\gamma^{2}(A_{1}-A_{2}+2\varepsilon_{1})d_{1}^{\frac{p+r}{q}}d_{2}^{\frac{p+r}{q}}
- \gamma d_{1}^{\frac{p+r}{q}}B_{2} - \gamma d_{2}^{\frac{p+r}{q}}B_{1}\bigr\}
+ (A_{1}-A_{2}+2\varepsilon_{1})^{2}(A_{1}+\varepsilon_{1})^{\frac{1}{q}}(A_{2}-\varepsilon_{1})^{\frac{1}{q}}.
\]

Cancel the common positive factor $A_{1}-A_{2}+2\varepsilon_{1}$ and substitute the definitions for $B_{1}$ and $B_{2}$. Then a simple calculation shows that

\[
\begin{aligned}
&-\varepsilon_{1}\Bigl\{\gamma^{2}d_{1}^{\frac{p+r}{q}}d_{2}^{\frac{p+r}{q}}
-\gamma d_{1}^{\frac{p+r}{q}}(A_{1}+\varepsilon_{1})^{\frac{1}{q}}
-\gamma d_{2}^{\frac{p+r}{q}}(A_{2}-\varepsilon_{1})^{\frac{1}{q}}
+(A_{1}+\varepsilon_{1})^{\frac{1}{q}}(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\Bigr\}\\
&\quad\le (A_{1}-A_{2}+\varepsilon_{1})\Bigl\{\gamma^{2}d_{1}^{\frac{p+r}{q}}d_{2}^{\frac{p+r}{q}}
-\gamma d_{1}^{\frac{p+r}{q}}(A_{2}-\varepsilon_{1})^{\frac{1}{q}}
-\gamma d_{2}^{\frac{p+r}{q}}(A_{1}+\varepsilon_{1})^{\frac{1}{q}}
+(A_{1}+\varepsilon_{1})^{\frac{1}{q}}(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\Bigr\}.
\end{aligned}
\]

By factorizing, we have

\[
-\varepsilon_{1}\Bigl\{\gamma d_{1}^{\frac{p+r}{q}}-(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\Bigr\}\Bigl\{\gamma d_{2}^{\frac{p+r}{q}}-(A_{1}+\varepsilon_{1})^{\frac{1}{q}}\Bigr\}
\le (A_{1}-A_{2}+\varepsilon_{1})\Bigl\{\gamma d_{1}^{\frac{p+r}{q}}-(A_{1}+\varepsilon_{1})^{\frac{1}{q}}\Bigr\}\Bigl\{\gamma d_{2}^{\frac{p+r}{q}}-(A_{2}-\varepsilon_{1})^{\frac{1}{q}}\Bigr\},
\]

which is the inequality (4).

This completes the proof of Lemma 2.3. □

Now, we estimate each term of the inequality (4) as $y \to +0$. A key point in making use of the inequality (4) is that the estimates of both the factor $\varepsilon_{1}$ on the left-hand side and the factor $\gamma d_{1}^{\frac{p+r}{q}}-(A_{1}+\varepsilon_{1})^{\frac{1}{q}}$ on the right-hand side contain a common factor $y$. After the cancellation of this $y$, we derive the desired functional inequality by letting $y \to +0$, $a \to 1+0$ and applying l'Hôpital's rule. The terms in the other factors can be estimated roughly.

In the following, $o$ means $o(y)$, that is,

\[
\frac{o}{y} \to 0 \quad (y \to +0),
\]

and $o(1)$ denotes a term such that $o(1) \to 0$ as $y \to +0$.

One can establish the following formulae:

\[
\begin{aligned}
\sqrt{d} &= (b-a)\Bigl\{1+\frac{a+b-2}{(b-a)^{2}}y+o(y)\Bigr\},\\
d_{1}^{\frac{p+r}{q}} &= (2b)^{\frac{p+r}{q}}\Bigl\{1+\frac{p+r}{q}\cdot\frac{b-1}{b(b-a)}y+o(y)\Bigr\},\\
d_{2}^{\frac{p+r}{q}} &= (2a)^{\frac{p+r}{q}}\Bigl\{1-\frac{p+r}{q}\cdot\frac{a-1}{a(b-a)}y+o(y)\Bigr\},\\
c &= \frac{-2\sqrt{(a-1)y}}{a-b-y-\bigl(b-a+\frac{a+b-2}{b-a}y+o(y)\bigr)}
= \sqrt{y}\,\frac{\sqrt{a-1}}{b-a}\Bigl\{1-\frac{b-1}{(b-a)^{2}}y+o(y)\Bigr\},\\
c^{2}+1 &= 1+\frac{a-1}{(b-a)^{2}}y+o(y),\\
(c^{2}+1)^{\frac{1}{q}}d_{1}^{\frac{p+r}{q}} &= \Bigl\{1+\frac{a-1}{q(b-a)^{2}}y+o(y)\Bigr\}(2b)^{\frac{p+r}{q}}\Bigl\{1+\frac{p+r}{q}\cdot\frac{b-1}{b(b-a)}y+o(y)\Bigr\}\\
&= (2b)^{\frac{p+r}{q}}\Bigl\{1+\frac{1}{qb(b-a)^{2}}\bigl((a-1)b+(p+r)(b-1)(b-a)\bigr)y+o(y)\Bigr\},\\
(c^{2}+1)^{\frac{1}{q}}d_{2}^{\frac{p+r}{q}} &= (2a)^{\frac{p+r}{q}}\bigl(1+o(1)\bigr),\\
A_{1} &= (2b)^{r}\Bigl\{1+\frac{r(b-1)}{b(b-a)}y+o(y)\Bigr\}\Bigl\{b^{p}+\frac{a-1}{(b-a)^{2}}y+o(y)\Bigr\}\\
&= 2^{r}b^{p+r}\Bigl\{1+\frac{1}{b(b-a)^{2}}\bigl(r(b-1)(b-a)+b^{1-p}(a-1)\bigr)y+o(y)\Bigr\},\\
A_{2} &= (2a)^{r}\bigl(1+o(1)\bigr),\\
A_{3}^{2} &= (4ab+4y)^{r}\,y\,\frac{a-1}{(b-a)^{2}}\bigl(1+o(1)\bigr)\bigl(1-b^{p}\bigr)^{2}
= y\,4^{r}a^{r}b^{r}\,\frac{a-1}{(b-a)^{2}}\bigl(1-b^{p}\bigr)^{2}\bigl(1+o(1)\bigr),\\
\varepsilon_{1} &= \frac{1}{2}(A_{1}-A_{2})\biggl(-1+\sqrt{1+\frac{4A_{3}^{2}}{(A_{1}-A_{2})^{2}}}\,\biggr)
= \frac{A_{3}^{2}}{A_{1}-A_{2}}+o
= \frac{y\,4^{r}a^{r}b^{r}\frac{a-1}{(b-a)^{2}}(1-b^{p})^{2}\bigl(1+o(1)\bigr)}{2^{r}b^{p+r}\bigl(1+o(1)\bigr)-(2a)^{r}\bigl(1+o(1)\bigr)}+o\\
&= y\,\frac{2^{r}a^{r}b^{r}(a-1)(1-b^{p})^{2}}{(b-a)^{2}(b^{p+r}-a^{r})}\bigl(1+o(1)\bigr),\\
(A_{1}+\varepsilon_{1})^{\frac{1}{q}} &= \biggl(2^{r}b^{p+r}\Bigl\{1+\frac{1}{b(b-a)^{2}}\bigl(r(b-1)(b-a)+b^{1-p}(a-1)\bigr)y+o(y)\Bigr\}
+ y\,\frac{2^{r}a^{r}b^{r}(a-1)(1-b^{p})^{2}}{(b-a)^{2}(b^{p+r}-a^{r})}\bigl(1+o(1)\bigr)\biggr)^{\frac{1}{q}}\\
&= 2^{\frac{r}{q}}b^{\frac{p+r}{q}}\Bigl\{1+\frac{1}{qb(b-a)^{2}}\Bigl(r(b-1)(b-a)+b^{1-p}(a-1)+\frac{a^{r}b^{1-p}(a-1)(1-b^{p})^{2}}{b^{p+r}-a^{r}}\Bigr)y+o(y)\Bigr\},\\
(A_{2}-\varepsilon_{1})^{\frac{1}{q}} &= 2^{\frac{r}{q}}a^{\frac{r}{q}}\bigl(1+o(1)\bigr),\\
A_{1}-A_{2}+\varepsilon_{1} &= 2^{r}\bigl(b^{p+r}-a^{r}\bigr)\bigl(1+o(1)\bigr),\\
\gamma d_{1}^{\frac{p+r}{q}}-(A_{2}-\varepsilon_{1})^{\frac{1}{q}} &= 2^{\frac{r}{q}}\bigl(b^{\frac{p+r}{q}}-a^{\frac{r}{q}}\bigr)\bigl(1+o(1)\bigr),\\
\gamma d_{2}^{\frac{p+r}{q}}-(A_{1}+\varepsilon_{1})^{\frac{1}{q}} &= 2^{\frac{r}{q}}\bigl(a^{\frac{p+r}{q}}-b^{\frac{p+r}{q}}\bigr)\bigl(1+o(1)\bigr),\\
\gamma d_{2}^{\frac{p+r}{q}}-(A_{2}-\varepsilon_{1})^{\frac{1}{q}} &= 2^{\frac{r}{q}}\bigl(a^{\frac{p+r}{q}}-a^{\frac{r}{q}}\bigr)\bigl(1+o(1)\bigr).
\end{aligned}
\]
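
For example, the second formula follows from the first one via the intermediate step (spelled out here only as an illustration)

\[
d_{1}=a+b+y+\sqrt{d}
=2b+\Bigl(1+\frac{a+b-2}{b-a}\Bigr)y+o(y)
=2b\Bigl\{1+\frac{b-1}{b(b-a)}y+o(y)\Bigr\},
\]

and the remaining formulae are obtained in the same manner.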

Now we estimate the most delicate factor in the formula (4), whose constant term is canceled by the subtraction:

\[
\begin{aligned}
\gamma d_{1}^{\frac{p+r}{q}}-(A_{1}+\varepsilon_{1})^{\frac{1}{q}}
={}& 2^{-\frac{p}{q}}(2b)^{\frac{p+r}{q}}\Bigl\{1+\frac{1}{qb(b-a)^{2}}\bigl((a-1)b+(p+r)(b-1)(b-a)\bigr)y+o(y)\Bigr\}\\
&-2^{\frac{r}{q}}b^{\frac{p+r}{q}}\Bigl\{1+\frac{1}{qb(b-a)^{2}}\Bigl(r(b-1)(b-a)+b^{1-p}(a-1)+\frac{a^{r}b^{1-p}(a-1)(1-b^{p})^{2}}{b^{p+r}-a^{r}}\Bigr)y+o(y)\Bigr\}\\
={}& 2^{\frac{r}{q}}b^{\frac{p+r}{q}-1}\,\frac{1}{q(b-a)^{2}}\Bigl\{(a-1)b+p(b-1)(b-a)-b^{1-p}(a-1)-\frac{a^{r}b^{1-p}(a-1)(1-b^{p})^{2}}{b^{p+r}-a^{r}}\Bigr\}\,y\,\bigl(1+o(1)\bigr).
\end{aligned}
\]

Substitute these estimates into the inequality (4), cancel the positive factor $y$, and let $y \to +0$; then we have

\[
\begin{aligned}
&\frac{2^{r}a^{r}b^{r}(a-1)(1-b^{p})^{2}}{(b-a)^{2}(b^{p+r}-a^{r})}\cdot 2^{\frac{r}{q}}\bigl(b^{\frac{p+r}{q}}-a^{\frac{r}{q}}\bigr)\cdot 2^{\frac{r}{q}}\bigl(b^{\frac{p+r}{q}}-a^{\frac{p+r}{q}}\bigr)\\
&\quad\le 2^{r}\bigl(b^{p+r}-a^{r}\bigr)\cdot 2^{\frac{r}{q}}b^{\frac{p+r}{q}-1}\,\frac{1}{q(b-a)^{2}}\Bigl\{(a-1)b+p(b-1)(b-a)-b^{1-p}(a-1)-\frac{a^{r}b^{1-p}(a-1)(1-b^{p})^{2}}{b^{p+r}-a^{r}}\Bigr\}\cdot 2^{\frac{r}{q}}\bigl(a^{\frac{p+r}{q}}-a^{\frac{r}{q}}\bigr),
\end{aligned}
\]

and hence

\[
\frac{a^{r}b^{r}(1-b^{p})^{2}\bigl(b^{\frac{p+r}{q}}-a^{\frac{r}{q}}\bigr)\bigl(b^{\frac{p+r}{q}}-a^{\frac{p+r}{q}}\bigr)}{\bigl(b^{p+r}-a^{r}\bigr)^{2}}
\le \frac{b^{\frac{p+r}{q}-1}}{q}\Bigl\{(a-1)b+p(b-1)(b-a)-b^{1-p}(a-1)-\frac{a^{r}b^{1-p}(a-1)(1-b^{p})^{2}}{b^{p+r}-a^{r}}\Bigr\}\,\frac{a^{\frac{p+r}{q}}-a^{\frac{r}{q}}}{a-1}.
\]

Letting $a \to 1+0$ and applying l'Hôpital's rule, we have

\[
\frac{b^{r}(1-b^{p})^{2}\bigl(b^{\frac{p+r}{q}}-1\bigr)^{2}}{\bigl(b^{p+r}-1\bigr)^{2}}
\le b^{\frac{p+r}{q}-1}\,\frac{p^{2}}{q^{2}}\,(b-1)^{2}.
\]
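
Here, in the passage to the limit $a \to 1+0$, every term in the braces containing the factor $a-1$ vanishes, so the braces tend to $p(b-1)^{2}$, while l'Hôpital's rule gives

\[
\lim_{a \to 1+0}\frac{a^{\frac{p+r}{q}}-a^{\frac{r}{q}}}{a-1}=\frac{p+r}{q}-\frac{r}{q}=\frac{p}{q}.
\]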

This implies that, for arbitrary 1<b,

\[
b^{\frac{1+r-\frac{p+r}{q}}{2}}\bigl(b^{p}-1\bigr)\Bigl(b^{\frac{p+r}{q}}-1\Bigr) \le \frac{p}{q}\bigl(b^{p+r}-1\bigr)(b-1)
\]
(7)
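
In more detail, (7) is obtained from the last displayed inequality before it as follows: since $1 < b$, both sides there are non-negative, so taking square roots and multiplying by $b^{p+r}-1 > 0$ gives

\[
b^{\frac{r}{2}}\bigl(b^{p}-1\bigr)\Bigl(b^{\frac{p+r}{q}}-1\Bigr)
\le \frac{p}{q}\,b^{\frac{1}{2}\bigl(\frac{p+r}{q}-1\bigr)}\bigl(b^{p+r}-1\bigr)(b-1),
\]

and dividing by $b^{\frac{1}{2}(\frac{p+r}{q}-1)}$ and noting $\frac{r}{2}-\frac{1}{2}\bigl(\frac{p+r}{q}-1\bigr)=\frac{1+r-\frac{p+r}{q}}{2}$ yields (7).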

For arbitrary $0 < x < 1$, substitute $\frac{1}{x}$ for $b$ in (7) and multiply both sides by $x\,x^{p}\,x^{p+r}\,x^{\frac{p+r}{q}}$; after cancelling the common factor $x^{p+\frac{p+r}{q}}$, we obtain (1) for $0 < x < 1$. For $x = 1$, both sides of (1) vanish. This completes the proof of Theorem 1.1.