1 Introduction

Let S(X) (resp. B(X)) be the unit sphere (resp. the closed unit ball) of a real Banach space \((X,\left\| \cdot \right\| _{X}).\) The letters \({\mathbb {Z}},\) \({\mathbb {N}}\) and \({\mathbb {R}}\) stand for the sets of integers, positive integers and real numbers, respectively. For any subset \(A\subset X,\) denote \(A^{n}=\underset{n\text {-times}}{\underbrace{A\times \cdots \times A}}.\) Let \((\varOmega ,\varSigma ,\mu )\) be a measure space with a \(\sigma \)-finite, non-atomic and complete measure \(\mu .\) Denote by \(L^{p}(\mu )\) \((1\le p\le \infty )\) the Lebesgue space of real \(\varSigma \)-measurable functions defined on \(\varOmega .\) The symbol \(l_{m}^{p}\) \((1\le p\le \infty ,\) \(m\in {\mathbb {N}}\cup \{\infty \})\) stands for the m-dimensional Lebesgue sequence space. Clearly, \(l_{\infty }^{p}=l^{p}.\)

In 1937 Clarkson [5], on the basis of the famous paper [13] by Jordan and von Neumann, introduced the constant \(C_{NJ}(X)\) (called the von Neumann–Jordan constant or NJ-constant for short) as the smallest constant \(C\ge 1\) such that

$$\begin{aligned} \frac{1}{C}\le \frac{\left\| x+y\right\| _{X}^{2}+\left\| x-y\right\| _{X}^{2}}{2\left( \left\| x\right\| _{X}^{2}+\left\| y\right\| _{X}^{2}\right) }\le C \end{aligned}$$

holds for any \(x,y\in X\) with \(\left\| x\right\| _{X}^{2}+\left\| y\right\| _{X}^{2}>0.\) An equivalent and more convenient definition of the NJ-constant is given in [15] by the formula

$$\begin{aligned} C_{NJ}(X)=\sup \left\{ \frac{\left\| x+y\right\| _{X}^{2}+\left\| x-y\right\| _{X}^{2}}{2\left( \left\| x\right\| _{X}^{2}+\left\| y\right\| _{X}^{2}\right) }:x\in S(X),\text { }y\in B(X)\right\} . \end{aligned}$$
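Although the supremum in this formula runs over infinitely many pairs, it can be estimated from below numerically. The following is a minimal sketch of ours (not part of the original text), assuming \(X={\mathbb {R}}^{2}\) equipped with the \(l^{1}\)-norm, for which the exact value is \(C_{NJ}(l_{2}^{1})=2\) by Clarkson's result quoted below; random sampling over \(S(X)\times B(X)\) only yields a lower estimate of the supremum, and all function names are ours.

```python
import random

def norm1(v):
    # l^1-norm on R^2
    return abs(v[0]) + abs(v[1])

def nj_ratio(x, y, norm):
    # the ratio (||x+y||^2 + ||x-y||^2) / (2(||x||^2 + ||y||^2)) from the formula above
    s = norm((x[0] + y[0], x[1] + y[1])) ** 2 + norm((x[0] - y[0], x[1] - y[1])) ** 2
    return s / (2.0 * (norm(x) ** 2 + norm(y) ** 2))

def approx_C_NJ(norm, trials=200_000, seed=0):
    # Monte Carlo lower estimate of the supremum over x in S(X), y in B(X)
    rng = random.Random(seed)
    best = 1.0                                   # the ratio equals 1 for y = x
    for _ in range(trials):
        x = (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
        nx = norm(x)
        if nx == 0.0:
            continue
        x = (x[0] / nx, x[1] / nx)               # push x onto the unit sphere
        y = (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
        ny = norm(y)
        if ny > 1.0:
            y = (y[0] / ny, y[1] / ny)           # push y into the unit ball
        best = max(best, nj_ratio(x, y, norm))
    return best

print(approx_C_NJ(norm1))                        # close to (and below) the exact value 2
```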

The classical Jordan and von Neumann results [13] state that \(1\le C_{NJ}(X)\le 2\) for any Banach space X and that \(C_{NJ}(X)=1\) if and only if X is a Hilbert space. Clarkson [5] showed that if \(1\le p\le \infty \) and \(\dim L^{p}(\mu )\ge 2,\) then \(C_{NJ}(L^{p}(\mu ))=2^{2/\min \{p,q\}-1},\) where \(1/p+1/q=1.\) Kato and Takahashi [16] observed that \(C_{NJ}(X)=C_{NJ}(X^{*}).\) Moreover, they proved that if the Banach space X is uniformly convex, then \(C_{NJ}(X)<2,\) and if \(C_{NJ}(X)<2,\) then X admits an equivalent uniformly convex norm. The same authors [16] showed that a Banach space X is uniformly non-square if and only if \( C_{NJ}(X)<2.\) Some results concerning relationships between the von Neumann–Jordan constant and the so-called James constant have been obtained, among others, in [2, 15, 19, 21, 23, 25].

A similar constant

$$\begin{aligned} C_{NJ}^{^{\prime }}(X)=\sup \left\{ \frac{\left\| x+y\right\| _{X}^{2}+\left\| x-y\right\| _{X}^{2}}{4}:x,y\in S(X)\right\} \end{aligned}$$

was introduced in 2006 by Gao [7] and called the modified von Neumann–Jordan constant. It is clear that \(C_{NJ}^{^{\prime }}(X)\le C_{NJ}(X).\) It has been shown that \(C_{NJ}^{^{\prime }}(X)\) does not necessarily coincide with \(C_{NJ}(X)\) (see [2, 8]). These constants have been considered recently also in [22].

The von Neumann–Jordan constant has been generalized in many directions (see e.g. [17, 24, 26]).

To generalize the von Neumann–Jordan constant, denote

$$\begin{aligned} C^{(n)}(x_{1},x_{2},\ldots ,x_{n})=\frac{\sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{2}}{ 2^{n-1}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}} \end{aligned}$$

for any \(x_{1},x_{2},\ldots ,x_{n}\in X\) such that \(\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}>0.\)
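For finite-dimensional X the quantity \(C^{(n)}(x_{1},x_{2},\ldots ,x_{n})\) can be evaluated by direct enumeration of the signs \(\theta _{j}.\) Here is a small Python sketch of ours, assuming \(X={\mathbb {R}}^{d}\) with a user-supplied norm (all names are ours):

```python
from itertools import product

def C_n(vectors, norm):
    # C^{(n)}(x_1,...,x_n) as defined above: the signs theta_2,...,theta_n
    # run over {+1,-1}, while the coefficient of x_1 is fixed to +1
    n, d = len(vectors), len(vectors[0])
    denom = 2 ** (n - 1) * sum(norm(x) ** 2 for x in vectors)
    num = 0.0
    for thetas in product((1, -1), repeat=n - 1):
        signs = (1,) + thetas
        s = tuple(sum(t * x[i] for t, x in zip(signs, vectors)) for i in range(d))
        num += norm(s) ** 2
    return num / denom

# example: the first two canonical unit vectors of R^2 with the l^1-norm
norm1 = lambda v: sum(abs(t) for t in v)
print(C_n([(1, 0), (0, 1)], norm1))   # prints 2.0
```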

Definition 1

The smallest, resp., the largest constant \(C>0\) such that

$$\begin{aligned} C^{(n)}(x_{1},x_{2},\ldots ,x_{n})\le C,\text { resp., }C\le C^{(n)}(x_{1},x_{2},\ldots ,x_{n}) \end{aligned}$$
(1)

for all \(x_{j}\in X\) \((j=1,2,\ldots ,n\) and \(n\ge 2)\) with \( \sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}>0\) is called the upper, resp., lower n-th von Neumann–Jordan constant and denoted by \({\overline{C}}_{NJ}^{(n)}(X)\), resp., \({\underline{C}}_{NJ}^{(n)}(X).\) If the infimum, resp., supremum of constants C satisfying (1) is taken over all \(x_{j}\in S(X)\) \((j=1,2,\ldots ,n\) and \(n\ge 2),\) then it is called the upper, resp., lower modified n-th von Neumann–Jordan constant and denoted by \({\overline{C}}_{mNJ}^{(n)}(X),\) resp., \({\underline{C}}_{mNJ}^{(n)}(X).\)

It is well known that \({\overline{C}}_{NJ}^{(2)}(X)=\left[ {\underline{C}} _{NJ}^{(2)}(X)\right] ^{-1}=C_{NJ}(X)\) (see [20]). As is proved below, the equality \({\overline{C}}_{NJ}^{(n)}(X)=\left[ {\underline{C}} _{NJ}^{(n)}(X)\right] ^{-1}\) is not true in general for \(n>2.\) Moreover, \( {\overline{C}}_{mNJ}^{(2)}(X)=C_{NJ}^{\prime }(X),\) and \({\overline{C}} _{mNJ}^{(n)}(X)\) need not be equal to \(\left[ {\underline{C}}_{mNJ}^{(n)}(X) \right] ^{-1}\) even for \(n=2\) (see [7]). The n-th von Neumann–Jordan constant introduced and investigated by Kato, Takahashi and Hashimoto in [17] is exactly the upper n-th von Neumann–Jordan constant.

In 1964 James [12] introduced the notion of a uniformly non-\(l_{n}^{1}\) Banach space. Namely, a Banach space X is called uniformly non-\( l_{n}^{1}\) if there exists \(\delta >0\) such that for any n elements \(x_{1},x_{2},\ldots ,x_{n}\) of the unit ball B(X)

$$\begin{aligned} \min _{\theta _{j}=\pm 1}\left\| x_{1}+\sum \nolimits _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}\le n\left( 1-\delta \right) \end{aligned}$$

(see [10]). The definition remains the same if we replace the unit ball B(X) by the unit sphere S(X). If X is uniformly non-\(l_{n}^{1}\) for \( n=2,\) then it is called uniformly non-square. It is worth mentioning that uniform non-squareness plays a crucial role in fixed point theory, since any uniformly non-square Banach space has the fixed point property (for more details see [9]). In 1987 Kamińska and Turett [14] proved that a Banach space X is uniformly non-\(l_{n}^{1}\) if and only if there exists \(\delta >0\) such that for all \(x_{1},x_{2},\ldots ,x_{n}\) in X

$$\begin{aligned} \min _{\theta _{j}=\pm 1}\left\| x_{1}+\sum \nolimits _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}\le \left( 1-\frac{\delta n\min _{1\le i\le n}\left\| x_{i}\right\| _{X}}{\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}}\right) \sum _{j=1}^{n}\left\| x_{j}\right\| _{X}. \end{aligned}$$

Banach spaces that are uniformly non-\(l_{n}^{1}\) for some \(n\in {\mathbb {N}}\) have been studied by A. Beck [3]. Such spaces are said to be B-convex. Beck [3] proved that a Banach space X is B-convex if and only if a certain strong law of large numbers is valid for random variables with ranges in X. Moreover, B-convexity is a very important property in fixed point theory because every B-convex uniformly monotone Köthe space has the fixed point property (see [1]).

2 Basic Properties

Proposition 1

Let \(n\ge 2\) and X be a Banach space. The lower, upper, modified lower and modified upper n-th von Neumann–Jordan constants have the following properties:

  1. (a)

    \(1\le {\overline{C}}_{mNJ}^{(n)}(X)\le {\overline{C}} _{NJ}^{(n)}(X)\le n\) and \(1/n\le {\underline{C}}_{NJ}^{(n)}(X)\le {\underline{C}}_{mNJ}^{(n)}(X)\le 1;\)

  2. (b)

    \({\overline{C}}_{NJ}^{(n)}(X)\le {\overline{C}} _{NJ}^{(n+1)}(X)\) and \({\underline{C}}_{NJ}^{(n+1)}(X)\le {\underline{C}} _{NJ}^{(n)}(X);\)

  3. (c)

    \({\overline{C}}_{mNJ}^{(n)}(X)\le \frac{n+1}{n}{\overline{C}}_{mNJ}^{(n+1)}(X).\)

Proof

Let \((X,\left\| \cdot \right\| _{X})\) be a Banach space and \(n\ge 2.\)

(a) The estimate \({\overline{C}}_{NJ}^{(n)}(X)\le n\) is proved in [17], and \({\overline{C}}_{mNJ}^{(n)}(X)\le {\overline{C}}_{NJ}^{(n)}(X)\) follows directly from the definition. Taking \(x_{1}\in S(X)\) and \(x_{i}=x_{1}\) for \(i=2,3,\ldots ,n,\) we have

$$\begin{aligned} C^{(n)}(x_{1},x_{1},\ldots ,x_{1})= & {} \frac{1}{n2^{n-1}}\sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum \nolimits _{j=2}^{n}\theta _{j}x_{1}\right\| _{X}^{2} \\= & {} \frac{1}{n2^{n-1}}\sum \nolimits _{j=0}^{n-1}\left( {\begin{array}{c}n-1\\ j\end{array}}\right) \left\| (n-2j)x_{1}\right\| _{X}^{2} \\= & {} \frac{1}{n2^{n-1}}\sum \nolimits _{j=0}^{n-1}\left( {\begin{array}{c}n-1\\ j\end{array}}\right) (n-2j)^{2}=1. \end{aligned}$$
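The last equality above uses the identity \(\sum _{j=0}^{n-1}\left( {\begin{array}{c}n-1\\ j\end{array}}\right) (n-2j)^{2}=n2^{n-1},\) which follows by expanding \(\left( 1+\sum _{j=2}^{n}\theta _{j}\right) ^{2}\) and summing over all \(2^{n-1}\) choices of signs (the mixed terms cancel). A quick check of ours for small n:

```python
from math import comb

# the identity sum_{j=0}^{n-1} C(n-1,j)(n-2j)^2 = n 2^{n-1} used above
for n in range(2, 11):
    lhs = sum(comb(n - 1, j) * (n - 2 * j) ** 2 for j in range(n))
    assert lhs == n * 2 ** (n - 1)
print("identity holds for n = 2,...,10")
```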

Hence

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(X)=\sup \left\{ C^{(n)}(x_{1},x_{2},\ldots ,x_{n}):x_{i}\in S(X),\text { }i=1,2,\ldots ,n\right\} \ge 1 \end{aligned}$$

and

$$\begin{aligned} {\underline{C}}_{mNJ}^{(n)}(X)=\inf \left\{ C^{(n)}(x_{1},x_{2},\ldots ,x_{n}):x_{i}\in S(X),\text { }i=1,2,\ldots ,n\right\} \le 1. \end{aligned}$$

Obviously, \({\underline{C}}_{NJ}^{(n)}(X)\le {\underline{C}}_{mNJ}^{(n)}(X)\) by the definition. To prove that \(\frac{1}{n}\le {\underline{C}}_{NJ}^{(n)}(X),\) we use induction on n. For \(n=2\) we have

$$\begin{aligned} {\underline{C}}_{NJ}^{(2)}(X)=\frac{1}{{\overline{C}}_{NJ}^{(2)}(X)}\ge \frac{1 }{2}. \end{aligned}$$

Suppose that \({\underline{C}}_{NJ}^{(n-1)}(X)\ge \frac{1}{n-1}.\) Notice that

$$\begin{aligned} \left\| x+y\right\| _{X}^{2}+\left\| x-y\right\| _{X}^{2}\ge 2\left( \max \left\{ \left\| x\right\| _{X},\left\| y\right\| _{X}\right\} \right) ^{2}\ge \left\| x\right\| _{X}^{2}+\left\| y\right\| _{X}^{2} \end{aligned}$$
(2)

for any \(x,y\in X.\) Indeed, since

$$\begin{aligned} \left\| x+y\right\| _{X}+\left\| x-y\right\| _{X}\ge 2\max \left\{ \left\| x\right\| _{X},\left\| y\right\| _{X}\right\} =2m_{x,y}\ge \left\| x+y\right\| _{X}-\left\| x-y\right\| _{X} \end{aligned}$$

we have

$$\begin{aligned} \left\| x+y\right\| _{X}^{2}+\left\| x-y\right\| _{X}^{2}\ge & {} \left\| x+y\right\| _{X}^{2}+\left( 2m_{x,y}-\left\| x+y\right\| _{X}\right) ^{2} \\= & {} 2\left\| x+y\right\| _{X}^{2}-4m_{x,y}\left\| x+y\right\| _{X}+4\left( m_{x,y}\right) ^{2} \\= & {} 2\left[ \left( \left\| x+y\right\| _{X}-m_{x,y}\right) ^{2}+\left( m_{x,y}\right) ^{2}\right] \\\ge & {} 2\left( \max \left\{ \left\| x\right\| _{X},\left\| y\right\| _{X}\right\} \right) ^{2}\ge \left\| x\right\| _{X}^{2}+\left\| y\right\| _{X}^{2}. \end{aligned}$$

Hence

$$\begin{aligned} C^{(n)}(x_{1},x_{2},\ldots ,x_{n})= & {} \frac{\sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{2}}{ 2^{n-1}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}} \\= & {} \frac{\sum _{\theta _{j}=\pm 1}\left\| \sum _{j=1}^{n}\theta _{j}x_{j}\right\| _{X}^{2}}{2^{n}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}} \\= & {} \frac{\sum _{\theta _{j}=\pm 1}\left( \left\| \left( \sum _{j\ne i}\theta _{j}x_{j}\right) \!+\!x_{i}\right\| _{X}^{2}\!+\!\left\| \left( \sum _{j\ne i}\theta _{j}x_{j}\right) -x_{i}\right\| _{X}^{2}\right) }{ 2^{n}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}} \\\ge & {} \frac{\sum _{\theta _{j}=\pm 1}\left\| \sum _{j\ne i}\theta _{j}x_{j}\right\| _{X}^{2}+2^{n-1}\left\| x_{i}\right\| _{X}^{2}}{ 2^{n}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}} \\\ge & {} \frac{\frac{2^{n-1}}{n-1}\sum _{j\ne i}\left\| x_{j}\right\| _{X}^{2}+2^{n-1}\left\| x_{i}\right\| _{X}^{2}}{2^{n}\sum _{j=1}^{n} \left\| x_{j}\right\| _{X}^{2}} \\= & {} \frac{\frac{1}{n-1}\sum _{j\ne i}\left\| x_{j}\right\| _{X}^{2}+\left\| x_{i}\right\| _{X}^{2}}{2\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}} \\= & {} \frac{\frac{1}{n-1}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}+ \frac{n-2}{n-1}\left\| x_{i}\right\| _{X}^{2}}{2\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}} \\= & {} \frac{1}{2\left( n-1\right) }+\frac{(n-2)\left\| x_{i}\right\| _{X}^{2}}{2\left( n-1\right) \sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}} \end{aligned}$$

for any \(i=1,2,\ldots ,n.\) It follows that

$$\begin{aligned} C^{(n)}(x_{1},x_{2},\ldots ,x_{n})\ge & {} \frac{1}{n}\sum \limits _{i=1}^{n}\left( \frac{1}{2\left( n-1\right) }+\frac{(n-2)\left\| x_{i}\right\| _{X}^{2} }{2\left( n-1\right) \sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}} \right) \\= & {} \frac{1}{2\left( n-1\right) }+\frac{n-2}{2n\left( n-1\right) }=\frac{1}{n} \end{aligned}$$

and consequently \({\underline{C}}_{NJ}^{(n)}(X)\ge \frac{1}{n},\) which finishes the proof of (a).

(b) The inequality \({\overline{C}}_{NJ}^{(n)}(X)\le {\overline{C}} _{NJ}^{(n+1)}(X)\) is proved in [17]. To prove the second inequality it is enough to notice that

$$\begin{aligned} C^{(n)}(x_{1},x_{2},\ldots ,x_{n})=C^{(n+1)}(x_{1},x_{2},\ldots ,x_{n},0) \end{aligned}$$

for any elements \(x_{1},x_{2},\ldots ,x_{n}\in X.\) Hence

$$\begin{aligned} {\underline{C}}_{NJ}^{(n+1)}(X)= & {} \inf \left\{ C^{(n+1)}(x_{1},x_{2},\ldots ,x_{n},x_{n+1}):x_{1},x_{2},\ldots ,x_{n+1}\in X\right\} \\\le & {} \inf \left\{ C^{(n)}(x_{1},x_{2},\ldots ,x_{n}):x_{1},x_{2},\ldots ,x_{n}\in X\right\} ={\underline{C}}_{NJ}^{(n)}(X). \end{aligned}$$

(c) For any \(x_{1},x_{2},\ldots ,x_{n+1}\in S(X),\) by the inequality (2), we have

$$\begin{aligned} C^{(n+1)}(x_{1},x_{2},\ldots ,x_{n+1})= & {} \frac{\sum \limits _{\theta _{j}=\pm 1}\left( \left\| \left( \sum \limits _{j=1}^{n}\theta _{j}x_{j}\right) +x_{n+1}\right\| _{X}^{2}+\left\| \left( \sum \limits _{j=1}^{n}\theta _{j}x_{j}\right) -x_{n+1}\right\| _{X}^{2}\right) }{(n+1)2^{n+1}} \\\ge & {} \frac{1}{(n+1)2^{n}}\sum _{\theta _{j}=\pm 1}\max \left\{ \left\| \sum \nolimits _{j=1}^{n}\theta _{j}x_{j}\right\| _{X}^{2},\left\| x_{n+1}\right\| _{X}^{2}\right\} \\\ge & {} \frac{1}{(n+1)2^{n}}\sum _{\theta _{j}=\pm 1}\left\| \sum \nolimits _{j=1}^{n}\theta _{j}x_{j}\right\| _{X}^{2}=\frac{n}{n+1} C^{(n)}(x_{1},x_{2},\ldots ,x_{n}), \end{aligned}$$

whence \({\overline{C}}_{mNJ}^{(n)}(X)\le \frac{n+1}{n}{\overline{C}} _{mNJ}^{(n+1)}(X).\)

\(\square \)

Proposition 2

Let \(n\ge 2\) and X be a Banach space.

  1. (a)

    The following conditions are equivalent:

    1. (i)

      X is a Hilbert space;

    2. (ii)

      \({\overline{C}}_{NJ}^{(n)}(X)=1;\)

    3. (iii)

      \({\underline{C}}_{NJ}^{(n)}(X)=1.\)

  2. (b)

    If X is a Hilbert space, then \({\overline{C}} _{mNJ}^{(n)}(X)={\underline{C}}_{mNJ}^{(n)}(X)=1.\)

Proof

(a) By Theorem 5 (iii) in [17], conditions (i) and (ii) are equivalent. To prove the implication \(\mathrm{(i)}\Rightarrow \mathrm{(iii)}\) suppose that X is a Hilbert space. Expanding the norms in terms of the inner product and noting that all mixed terms cancel after summing over all choices of signs, we get that

$$\begin{aligned} C^{(n)}(x_{1},x_{2},\ldots ,x_{n})=1 \end{aligned}$$
(3)

for any elements \(x_{1},x_{2},\ldots ,x_{n}\in X,\) whence \({\underline{C}} _{NJ}^{(n)}(X)=1.\) Conversely, if \({\underline{C}}_{NJ}^{(n)}(X)=1,\) then, by Proposition 1 (a) and (b), we have

$$\begin{aligned} 1\ge \frac{1}{C_{NJ}(X)}={\underline{C}}_{NJ}^{(2)}(X)\ge {\underline{C}} _{NJ}^{(n)}(X)=1. \end{aligned}$$

Hence \(C_{NJ}(X)=1.\) Consequently, X is a Hilbert space (see [13]).

(b) follows immediately from (3). \(\square \)

In general, the explicit calculation of the various types of n-th von Neumann–Jordan constants is a rather hard problem. The next proposition can be helpful in this respect.

Proposition 3

Let \((X,\left\| \cdot \right\| _{X})\) be a Banach space and \(n\ge 2.\) Denote \(D_{1}=\left[ B(X)\right] ^{n}\setminus \{ {\mathbf {0}}\},\)\(D_{2}=B(l_{n}^{2}(X))\setminus \{{\mathbf {0}}\},\)\( D_{3}=S(l_{n}^{2}(X)),\) where \({\mathbf {0}}=(0,0,\ldots ,0).\) Then

$$\begin{aligned} {\overline{C}}_{NJ}^{(n)}(X)=\sup \left\{ C^{(n)}(x_{1},x_{2},\ldots ,x_{n}):\left( x_{1},x_{2},\ldots ,x_{n}\right) \in D_{j}\right\} \end{aligned}$$
(4)

and

$$\begin{aligned} {\underline{C}}_{NJ}^{(n)}(X)=\inf \left\{ C^{(n)}(x_{1},x_{2},\ldots ,x_{n}):\left( x_{1},x_{2},\ldots ,x_{n}\right) \in D_{j}\right\} \end{aligned}$$
(5)

for any \(j=1,2,3.\)

Proof

Since

$$\begin{aligned} S(l_{n}^{2}(X))\subset B(l_{n}^{2}(X))\setminus \{{\mathbf {0}}\}\subset \left[ B(X)\right] ^{n}\setminus \{{\mathbf {0}}\}\subset X^{n}\setminus \{{\mathbf {0}} \}, \end{aligned}$$

it follows that

$$\begin{aligned} \sup _{{\mathbf {x}}\in D_{3}}C^{(n)}({\mathbf {x}})\le \sup _{{\mathbf {x}}\in D_{2}}C^{(n)}({\mathbf {x}})\le \sup _{{\mathbf {x}}\in D_{1}}C^{(n)}({\mathbf {x}} )\le {\overline{C}}_{NJ}^{(n)}(X), \end{aligned}$$

where \({\mathbf {x}}=(x_{1},x_{2},\ldots ,x_{n}).\) To show (4), it remains to prove that \(\sup _{{\mathbf {x}}\in D_{3}}C^{(n)}({\mathbf {x}})\ge {\overline{C}} _{NJ}^{(n)}(X).\) Let \({\mathbf {x}}=\left( x_{1},x_{2},\ldots ,x_{n}\right) \in X^{n}\setminus \{{\mathbf {0}}\}.\) Define the sequence \({\mathbf {y}} =(y_{k})_{k=1}^{n}\) by

$$\begin{aligned} y_{k}=\frac{x_{k}}{\left( \sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}\right) ^{\frac{1}{2}}} \end{aligned}$$

for \(k=1,2,\ldots ,n\) and \(n\ge 2.\) Obviously, \({\mathbf {y}}\in S(l_{n}^{2}(X)).\) Hence

$$\begin{aligned} \sup _{{\mathbf {x}}\in D_{3}}C^{(n)}({\mathbf {x}})\ge & {} C^{(n)}({\mathbf {y}})= \frac{\sum _{\theta _{j}=\pm 1}\left\| y_{1}+\sum _{j=2}^{n}\theta _{j}y_{j}\right\| _{X}^{2}}{2^{n-1}} \\= & {} \frac{\sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{2}}{2^{n-1}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}}=C^{(n)}({\mathbf {x}}) \end{aligned}$$

for any elements \({\mathbf {x}}\in X^{n}\setminus \{{\mathbf {0}}\}.\) Therefore

$$\begin{aligned} \sup _{{\mathbf {x}}\in D_{3}}C^{(n)}({\mathbf {x}})\ge \sup \left\{ C^{(n)}( {\mathbf {x}}):{\mathbf {x}}\in X^{n}\setminus \{{\mathbf {0}}\}\right\} ={\overline{C}} _{NJ}^{(n)}(X) \end{aligned}$$

which finishes the proof of (4). The equality (5) can be proved similarly.

\(\square \)

Let \(n\ge 2.\) Define

$$\begin{aligned} A_{2}=\left[ \begin{array}{ll} 1 &{}\quad 1 \\ 1 &{}\quad -1 \end{array} \right] _{2\times 2} \end{aligned}$$

and, for each integer \(n>2,\)

$$\begin{aligned} A_{n}=\left[ \begin{array}{ll} A_{n-1} &{}\quad {\mathbf {1}} \\ A_{n-1} &{}\quad \mathbf {-1} \end{array} \right] _{2^{n-1}\times n}, \end{aligned}$$

where \({\mathbf {1}}\) denotes the \(2^{n-2}\)-by-1 column vector in which all the elements are equal to 1. The matrix \(A_{n}\) generates a linear operator

$$\begin{aligned} T_{n}:l_{n}^{2}(X)\rightarrow l_{2^{n-1}}^{2}(X) \end{aligned}$$

defined for any \(x\in l_{n}^{2}(X)\) by the formula

$$\begin{aligned} T_{n}(x)=A_{n}x. \end{aligned}$$
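For illustration, the matrices \(A_{n}\) can be generated recursively exactly as above; the sketch below is ours (not part of the original argument) and also checks the orthogonality relation \(A_{n}^{T}A_{n}=2^{n-1}I_{n}\) that is used in the proof of Corollary 2.

```python
def build_A(n):
    # the matrices A_n defined above, stored as lists of rows with entries +/-1
    A = [[1, 1], [1, -1]]                                  # A_2
    for _ in range(3, n + 1):
        A = [row + [1] for row in A] + [row + [-1] for row in A]
    return A

def gram(A):
    # A^T A for a matrix given as a list of rows
    cols = list(zip(*A))
    return [[sum(a * b for a, b in zip(c1, c2)) for c2 in cols] for c1 in cols]

n = 4
A = build_A(n)
assert len(A) == 2 ** (n - 1) and len(A[0]) == n            # A_n is 2^{n-1} x n
expected = [[2 ** (n - 1) if i == j else 0 for j in range(n)] for i in range(n)]
assert gram(A) == expected                                  # A_n^T A_n = 2^{n-1} I_n
print("A_%d built and checked" % n)
```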

The relationship between the upper n-th von Neumann–Jordan constant \( {\overline{C}}_{NJ}^{(n)}(X)\) and the norm of the operator \(T_{n}\) is given by the following result.

Corollary 1

Let \((X,\left\| \cdot \right\| _{X})\) be a Banach space and \( T_{n}:l_{n}^{2}(X)\rightarrow l_{2^{n-1}}^{2}(X)\) be the linear operator generated by the matrix \(A_{n}.\) Then

$$\begin{aligned} {\overline{C}}_{NJ}^{(n)}(X)=\frac{||T_{n}||^{2}}{2^{n-1}} \end{aligned}$$

for any integer \(n\ge 2.\)

Proof

Fix \(n\ge 2.\) Let \(x_{1},x_{2},\ldots ,x_{n}\in X\) and \( \sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}>0.\) Denote \( {\mathbf {x}}=(x_{1},x_{2},\ldots ,x_{n}).\) By Proposition 3, we have

$$\begin{aligned} {\overline{C}}_{NJ}^{(n)}(X)= & {} \sup \left\{ \frac{\sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{2}}{ 2^{n-1}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}}:{\mathbf {x}}\in S(l_{n}^{2}(X))\right\} \\= & {} \frac{1}{2^{n-1}}\sup \left\{ ||T_{n}x||_{l_{2^{n-1}}^{2}(X)}^{2}:{\mathbf {x}}\in S(l_{n}^{2}(X))\right\} =\frac{||T_{n}||^{2}}{2^{n-1}}. \end{aligned}$$

\(\square \)

Corollary 2

Let \(\left( X^{*},\left\| \cdot \right\| _{X^{*}}\right) \) be the dual space of the Banach space \((X,\left\| \cdot \right\| _{X}).\) Then

  1. (a)

    \({\underline{C}}_{NJ}^{(n)}(X^{*})\ge \frac{1}{ {\overline{C}}_{NJ}^{(n)}(X)}.\)

  2. (b)

    \({\underline{C}}_{NJ}^{(n)}(X)\ge \frac{1}{{\overline{C}} _{NJ}^{(n)}(X^{*})}.\)

Proof

(a) Define an operator \(T_{n}:l_{n}^{2}(X)\rightarrow l_{2^{n-1}}^{2}(X)\) as above. Let \(T_{n}^{*}\) be the adjoint of operator \(T_{n}.\) Obviously, \(T_{n}^{*}:l_{2^{n-1}}^{2}(X^{*})\rightarrow l_{n}^{2}(X^{*})\) is generated by the matrix \(A_{n}^{*}=A_{n}^{T}.\) Then, by Corollary 1, we get

$$\begin{aligned} \frac{1}{{\overline{C}}_{NJ}^{(n)}(X)}=\frac{2^{n-1}}{||T_{n}||^{2}}=\frac{ 2^{n-1}}{||T_{n}^{*}||^{2}}\le \frac{2^{n-1}\left\| y^{*}\right\| _{l_{2^{n-1}}^{2}(X^{*})}^{2}}{\left\| T_{n}^{*}y^{*}\right\| _{l_{n}^{2}(X^{*})}^{2}} \end{aligned}$$
(6)

for any \(y^{*}=(y_{1}^{*},y_{2}^{*},\ldots ,y_{2^{n-1}}^{*})\in l_{2^{n-1}}^{2}(X^{*})\setminus \{{\mathbf {0}}\}.\) Let \(S_{n}^{*}\) be the operator generated by the matrix \(\frac{1}{2^{n-1}}A_{n}.\) Then \(S_{n}^{*}:l_{n}^{2}(X^{*})\rightarrow l_{2^{n-1}}^{2}(X^{*}).\) Since

$$\begin{aligned} A_{n}^{*}\cdot \frac{1}{2^{n-1}}A_{n}=I_{n}, \end{aligned}$$

where \(I_{n}\) is the identity matrix of size \(n\times {n},\) it follows that

$$\begin{aligned} T_{n}^{*}\left( S_{n}^{*}x^{*}\right) =x^{*} \end{aligned}$$

for any \(x^{*}=(x_{1}^{*},x_{2}^{*},\ldots ,x_{n}^{*})\in l_{n}^{2}\left( X^{*}\right) .\) Hence,  by (6), we have

$$\begin{aligned} \frac{1}{{\overline{C}}_{NJ}^{(n)}(X)}\le & {} \frac{2^{n-1}\left\| S_{n}^{*}x^{*}\right\| _{l_{2^{n-1}}^{2}(X^{*})}^{2}}{ \left\| T_{n}^{*}\left( S_{n}^{*}x^{*}\right) \right\| _{l_{n}^{2}(X^{*})}^{2}}=\frac{\left\| A_{n}x^{*}\right\| _{l_{2^{n-1}}^{2}(X^{*})}^{2}}{2^{n-1}\left\| x^{*}\right\| _{l_{n}^{2}(X^{*})}^{2}} \\= & {} \frac{\sum _{\theta _{j}=\pm 1}\left\| x^{*}_{1}+\sum _{j=2}^{n}\theta _{j}x^{*}_{j}\right\| _{X^{*}}^{2}}{2^{n-1}\sum _{j=1}^{n}\left\| x^{*}_{j}\right\| _{X^{*}}^{2}}=C^{(n)}\left( x_{1}^{*},x_{2}^{*},\ldots ,x_{n}^{*}\right) \end{aligned}$$

for any \((x_{1}^{*},x_{2}^{*},\ldots ,x_{n}^{*})\in \left( X^{*}\right) ^{n}\setminus \{{\mathbf {0}}\}.\) By the definition of the lower n-th von Neumann–Jordan constant, we get the assertion of (a).

(b) Since X can be isometrically embedded into \(X^{**},\) it follows that \({\underline{C}}_{NJ}^{(n)}(X)\ge {\underline{C}} _{NJ}^{(n)}(X^{**}).\) Hence, by (a), we have

$$\begin{aligned} {\underline{C}}_{NJ}^{(n)}(X)\ge {\underline{C}}_{NJ}^{(n)}(X^{**})\ge \frac{1}{{\overline{C}}_{NJ}^{(n)}(X^{*})}. \end{aligned}$$

\(\square \)

The n-th von Neumann–Jordan constants of some classical Banach spaces can be calculated explicitly.

Proposition 4

Let \(n\ge 2.\)

  1. (a)

    If \(1\le p\le 2\) and \(n\le m\le \infty ,\) then \( {\overline{C}}_{NJ}^{(n)}(l_{m}^{p})={\overline{C}}_{mNJ}^{(n)}(l_{m}^{p})=n^{ \frac{2}{p}-1}.\)

  2. (b)

    If \(2^{n-1}\le m\le \infty ,\) then \({\overline{C}} _{NJ}^{(n)}(l_{m}^{\infty })={\overline{C}}_{mNJ}^{(n)}(l_{m}^{\infty })=n.\)

  3. (c)

    If \(n\le m\le \infty ,\) then \({\underline{C}} _{NJ}^{(n)}(l_{m}^{\infty })={\underline{C}}_{mNJ}^{(n)}(l_{m}^{\infty })= \frac{1}{n}.\)

  4. (d)

    Let \((\varOmega ,\varSigma ,\mu )\) be a measure space with non-atomic \(\sigma \)-finite and complete measure \(\mu .\) Then

    $$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(L^{1}(\mu ))={\overline{C}}_{NJ}^{(n)}(L^{1}(\mu ))= {\overline{C}}_{mNJ}^{(n)}(L^{\infty }(\mu ))={\overline{C}}_{NJ}^{(n)}(L^{\infty }(\mu ))=n \end{aligned}$$

    and

    $$\begin{aligned} {\underline{C}}_{mNJ}^{(n)}(L^{\infty }(\mu ))={\underline{C}} _{NJ}^{(n)}(L^{\infty }(\mu ))=\frac{1}{n}. \end{aligned}$$

Proof

(a) Let \(n\le m\le \infty .\) By Theorem 3 (ii) in [17], \( {\overline{C}}_{NJ}^{(n)}(l_{m}^{p})=n^{\frac{2}{p}-1}.\) By Proposition 1 (a), \({\overline{C}}_{mNJ}^{(n)}(l_{m}^{p})\le n^{\frac{2 }{p}-1}.\) Taking the canonical basis \(\left( e_{i}\right) _{i=1}^{m}\) \((n\le m\le \infty )\) in \(l_{m}^{p},\) we get

$$\begin{aligned} C^{(n)}\left( e_{1},e_{2},\ldots ,e_{n}\right)= & {} \frac{\sum _{\theta _{j}=\pm 1}\left\| e_{1}+\sum _{j=2}^{n}\theta _{j}e_{j}\right\| _{l^{p}}^{2}}{ 2^{n-1}\sum _{j=1}^{n}\left\| e_{j}\right\| _{l^{p}}^{2}} \\= & {} \frac{n^{\frac{2}{p}}2^{n-1}}{n2^{n-1}}=n^{\frac{2}{p}-1}. \end{aligned}$$

Since \(e_{i}\in S(l_{m}^{p})\) for \(i\in {\mathbb {N}}\cap [1,m],\) it follows that

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(l_{m}^{p})\ge C^{(n)}\left( e_{1},e_{2},\ldots ,e_{n}\right) =n^{\frac{2}{p}-1}. \end{aligned}$$

Hence \({\overline{C}}_{mNJ}^{(n)}(l_{m}^{p})=n^{\frac{2}{p}-1}.\)

(b) Let \(2^{n-1}\le m\le \infty .\) Then \(l_{m}^{\infty }\) is not uniformly non-\(l_{n}^{1}.\) By Theorem 5 (iv) in [17] and Proposition 1 (a), we conclude that \({\overline{C}} _{NJ}^{(n)}(l_{m}^{\infty })=n.\) To prove that \({\overline{C}}_{mNJ}^{(n)}(l_{m}^{\infty })=n,\) take the matrix \(A_{n}\) defined as above and denote column j of \(A_{n}\) by \({\mathbf {a}}_{j}^{(n)}.\) Then \({\mathbf {a}} _{j}^{(n)}=[1,a_{2j}^{(n)},\ldots ,a_{2^{n-1}j}^{(n)}],\) where \(a_{ij}^{(n)}=\pm 1\) for any \(1\le j\le n\) and \(2\le i\le 2^{n-1}.\) Define, for \(j=1,2,\dots ,n\),

$$\begin{aligned} z_{j}=\sum \limits _{i=1}^{2^{n-1}}a_{ij}^{(n)}e_{i}, \end{aligned}$$

where \(\left( e_{i}\right) _{i=1}^{m}\) \((2^{n-1}\le m\le \infty )\) is the canonical basis in \(l_{m}^{\infty }.\) Obviously, \(\left\| z_{j}\right\| _{l_{m}^{\infty }}=1.\) Let \((1,\theta _{2},\ldots ,\theta _{n})\) be an arbitrary sequence such that \(\theta _{j}=\pm 1\) for any \(2\le j\le n.\) Then there is exactly one row \(i_{0}\) of \(A_{n}\) such that

$$\begin{aligned} \left[ 1,\theta _{2},\ldots ,\theta _{n}\right] =\left[ a_{i_{0}1}^{(n)},a_{i_{0}2}^{(n)},\ldots ,a_{i_{0}n}^{(n)}\right] . \end{aligned}$$

Hence

$$\begin{aligned} 1+\sum _{j=2}^{n}\theta _{j}a_{i_{0}j}^{(n)}=n. \end{aligned}$$

Moreover

$$\begin{aligned} \left| 1+\sum _{j=2}^{n}\theta _{j}a_{ij}^{(n)}\right| <n \end{aligned}$$

for any \(i\not =i_{0}.\) Consequently,

$$\begin{aligned} \left\| z_{1}+\sum _{j=2}^{n}\theta _{j}z_{j}\right\| _{l^{\infty }}=\max _{1\le i\le 2^{n-1}}\left| 1+\sum _{j=2}^{n}\theta _{j}a_{ij}^{(n)}\right| =n, \end{aligned}$$

whence

$$\begin{aligned} C^{(n)}\left( z_{1},z_{2},\ldots ,z_{n}\right) =\frac{\sum _{\theta _{j}=\pm 1}\left\| z_{1}+\sum _{j=2}^{n}\theta _{j}z_{j}\right\| _{l^{\infty }}^{2}}{2^{n-1}\sum _{j=1}^{n}\left\| z_{j}\right\| _{l^{\infty }}^{2}}= \frac{n^{2}2^{n-1}}{n2^{n-1}}=n. \end{aligned}$$

Therefore

$$\begin{aligned} n\le {\overline{C}}_{mNJ}^{(n)}(l_{m}^{\infty })\le {\overline{C}} _{NJ}^{(n)}(l_{m}^{\infty })=n, \end{aligned}$$

which completes the proof of (b).

(c) Let \(n\le m\le \infty \) and \((e_{i})\) be the canonical basis in \(l_{m}^{\infty }.\) By Proposition 1 (a), we have

$$\begin{aligned} \frac{1}{n}\le & {} {\underline{C}}_{NJ}^{(n)}(l_{m}^{\infty })\le {\underline{C}} _{mNJ}^{(n)}(l_{m}^{\infty })\le C^{(n)}\left( e_{1},e_{2},\ldots ,e_{n}\right) \\= & {} \frac{\sum _{\theta _{j}=\pm 1}\left\| e_{1}+\sum _{j=2}^{n}\theta _{j}e_{j}\right\| _{l_{m}^{\infty }}^{2}}{2^{n-1}\sum _{j=1}^{n}\left\| e_{j}\right\| _{l_{m}^{\infty }}^{2}}=\frac{1}{n}. \end{aligned}$$

(d) Since \(L^{1}(\mu )\) contains an isometric copy of \(l^{1},\) applying Proposition 4 (a) for \(p=1,\) we get

$$\begin{aligned} n={\overline{C}}_{mNJ}^{(n)}(l^{1})\le {\overline{C}}_{mNJ}^{(n)}(L^{1}(\mu ))\le {\overline{C}}_{NJ}^{(n)}(L^{1}(\mu ))\le n. \end{aligned}$$

Hence \({\overline{C}}_{mNJ}^{(n)}(L^{1}(\mu ))={\overline{C}}_{NJ}^{(n)}(L^{1}( \mu ))=n.\) Using the same arguments, by Proposition 4 (b) we conclude that \({\overline{C}}_{NJ}^{(n)}(L^{\infty }(\mu ))={\overline{C}} _{mNJ}^{(n)}(L^{\infty }(\mu ))=n.\) Similarly, by Proposition 4 (c),  we obtain

$$\begin{aligned} \frac{1}{n}\le {\underline{C}}_{NJ}^{(n)}(L^{\infty }(\mu ))\le {\underline{C}} _{mNJ}^{(n)}(L^{\infty }(\mu ))\le {\underline{C}}_{mNJ}^{(n)}(l^{\infty })= \frac{1}{n}, \end{aligned}$$

which completes the proof. \(\square \)
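The extremal elements used in the proof of parts (b) and (c) can be checked numerically. Below is a small sketch of ours (with \(n=4\) as a sample value): it evaluates \(C^{(n)}\) at the columns of \(A_{n}\) viewed as elements of \(l_{2^{n-1}}^{\infty }\) and at the canonical basis of \(l_{n}^{\infty },\) recovering the values n and 1/n.

```python
from itertools import product

def sup_norm(v):
    return max(abs(t) for t in v)

def C_n(vectors, norm):
    # the quantity C^{(n)}(x_1,...,x_n) defined in Section 1
    n, d = len(vectors), len(vectors[0])
    denom = 2 ** (n - 1) * sum(norm(x) ** 2 for x in vectors)
    num = 0.0
    for thetas in product((1, -1), repeat=n - 1):
        signs = (1,) + thetas
        s = tuple(sum(t * x[i] for t, x in zip(signs, vectors)) for i in range(d))
        num += norm(s) ** 2
    return num / denom

n = 4
# columns of A_n viewed as elements z_1,...,z_n of l_{2^{n-1}}^infty (part (b))
A = [[1, 1], [1, -1]]
for _ in range(3, n + 1):
    A = [row + [1] for row in A] + [row + [-1] for row in A]
z = [tuple(col) for col in zip(*A)]
print(C_n(z, sup_norm))       # prints 4.0 = n

# canonical basis e_1,...,e_n of l_n^infty (part (c))
e = [tuple(1 if i == j else 0 for i in range(n)) for j in range(n)]
print(C_n(e, sup_norm))       # prints 0.25 = 1/n
```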

3 Uniformly non-\(l_{n}^{1}\) spaces

The next theorem gives some characterizations of the uniform non-\(l_{n}^{1}\) property for Banach spaces. Kato, Takahashi and Hashimoto proved in [17] that \((X,\left\| \cdot \right\| _{X})\) is uniformly non-\( l_{n}^{1}\) if and only if \({\overline{C}}_{NJ}^{(n)}(X)<n.\) We will extend their results.

Theorem 1

Let \((X,\left\| \cdot \right\| _{X})\) be a Banach space. Then the following conditions are equivalent:

  1. (a)

    \({\overline{C}}_{mNJ}^{(n)}(X)<n;\)

  2. (b)

    \((X,\left\| \cdot \right\| _{X})\) is uniformly non-\(l_{n}^{1};\)

  3. (c)

    There exists \(\delta \in (0,1)\) such that for any element \((x_{1},x_{2},\ldots ,x_{n})\in B\left( l_{n}^{2}\left( X\right) \right) ,\) we have

    $$\begin{aligned} \min _{\theta _{j}=\pm 1}\left\| x_{1}+\sum \nolimits _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}\le \sqrt{n}(1-\delta ); \end{aligned}$$
    (7)
  4. (d)

    There exists \(\delta \in (0,1)\) such that for any element \((x_{1},x_{2},\ldots ,x_{n})\in S\left( l_{n}^{2}\left( X\right) \right) ,\) the inequality (7) is satisfied.

Proof

\(\mathrm{(a)}\Rightarrow \mathrm{(b)}.\) Suppose that \({\overline{C}} _{mNJ}^{(n)}(X)<n.\) Then

$$\begin{aligned} \frac{1}{2^{n-1}}\sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum \nolimits _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{2}\le {}n{\overline{C}}_{mNJ}^{(n)}(X) \end{aligned}$$

for any \(x_{1},x_{2},\ldots ,x_{n}\in S\left( X\right) .\) Since the left-hand side is the arithmetic mean of \(2^{n-1}\) summands, there is at least one sequence \((1, {\overline{\theta }}_{2},\ldots ,{\overline{\theta }}_{n})\) such that

$$\begin{aligned} \left\| x_{1}+\sum \nolimits _{j=2}^{n}{\overline{\theta }} _{j}x_{j}\right\| _{X}^{2}\le n{\overline{C}}_{mNJ}^{(n)}(X). \end{aligned}$$

Hence

$$\begin{aligned} \min _{\theta _{j}=\pm 1}\left\| x_{1}+\sum \nolimits _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}\le n\sqrt{\frac{{\overline{C}}_{mNJ}^{(n)}(X)}{n}} =n\left( 1-\delta \right) , \end{aligned}$$

where \(\delta =\frac{\sqrt{n}-\sqrt{{\overline{C}}_{mNJ}^{(n)}(X)}}{\sqrt{n}}.\) Consequently, \(\left( X,\left\| \cdot \right\| _{X}\right) \) is uniformly non-\(l_{n}^{1}.\)

\(\mathrm{(b)}\Rightarrow \mathrm{(c)}.\) Assume that \(\left( X,\left\| \cdot \right\| _{X}\right) \) is uniformly non-\(l_{n}^{1}.\) Let \((x_{1},x_{2},\ldots ,x_{n})\in B\left( l_{n}^{2}\left( X\right) \right) .\) Since \(\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}\le 1,\) it follows that \(\min _{1\le j\le n}\left\| x_{j}\right\| _{X}\le \frac{1}{\sqrt{n}}.\) Moreover, by the Hölder inequality, we have

$$\begin{aligned} \sum _{j=1}^{n}\left\| x_{j}\right\| _{X}\le \sqrt{n}\left( \sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}\right) ^{1/2}\le \sqrt{n} . \end{aligned}$$
(8)

Case 1. Suppose that \(\frac{1}{2\sqrt{n}}<\min _{1\le j\le n}\left\| x_{j}\right\| _{X}\le \frac{1}{\sqrt{n}}.\) By the characterization of uniform non-\(l_{n}^{1}\) given in [14] and by the inequality (8), there is \(\delta _{1}>0\) such that

$$\begin{aligned} \min _{\theta _{j}=\pm 1}\left\| x_{1}+\sum \nolimits _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}\le & {} \left( 1-\frac{\delta _{1}n\min _{1\le i\le n}\left\| x_{i}\right\| _{X}}{\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}}\right) \sum _{j=1}^{n}\left\| x_{j}\right\| _{X} \\\le & {} \left( 1-\frac{\delta _{1}\sqrt{n}}{2\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}}\right) \sum _{j=1}^{n}\left\| x_{j}\right\| _{X} \\\le & {} \sqrt{n}\left( 1-\frac{\delta _{1}}{2}\right) . \end{aligned}$$

Case 2. Suppose that \(0\le \min _{1\le j\le n}\left\| x_{j}\right\| _{X}\le \frac{1}{2\sqrt{n}}.\) Let \(x_{k}\) be an element at which this minimum is attained. Then, by the Hölder inequality, we have

$$\begin{aligned} \left\| x_{1}\pm x_{2}\pm \cdots \pm x_{n}\right\| _{X}\le & {} \sum _{j=1,j\ne k}^{n}\left\| x_{j}\right\| _{X}+\left\| x_{k}\right\| _{X} \\\le & {} \sqrt{n-1}\sqrt{\sum _{j=1,j\ne k}^{n}\left\| x_{j}\right\| _{X}^{2}}+\left\| x_{k}\right\| _{X} \\\le & {} \sqrt{n-1}\sqrt{1-\left\| x_{k}\right\| _{X}^{2}}+\left\| x_{k}\right\| _{X} \end{aligned}$$

for any choice of signs. Define

$$\begin{aligned} f(t)=\sqrt{n-1}\sqrt{1-t^{2}}+t \end{aligned}$$

for any \(t\in \left[ 0,\frac{1}{2\sqrt{n}}\right] .\) By elementary calculus, we conclude that f is an increasing function on the interval \(\left[ 0, \frac{1}{2\sqrt{n}}\right] .\) Hence, the function f(t) takes its highest value on \(\left[ 0,\frac{1}{2\sqrt{n}}\right] \) at the point \(t=\frac{1}{2 \sqrt{n}}.\) Thus,

$$\begin{aligned} \left\| x_{1}\pm x_{2}\pm \cdots \pm x_{n}\right\| _{X}\le & {} \sqrt{n-1} \sqrt{1-\left( \frac{1}{2\sqrt{n}}\right) ^{2}}+\frac{1}{2\sqrt{n}} \\= & {} \frac{1}{2\sqrt{n}}\left( \sqrt{\left( 4n-1\right) \left( n-1\right) } +1\right) \\= & {} \sqrt{n}\left( 1-\frac{\left( 2n-1\right) -\sqrt{\left( 4n-1\right) \left( n-1\right) }}{2n}\right) \end{aligned}$$

for any choice of signs. Taking

$$\begin{aligned} \delta =\min \left\{ \frac{\delta _{1}}{2},\frac{\left( 2n-1\right) -\sqrt{ \left( 4n-1\right) \left( n-1\right) }}{2n}\right\} , \end{aligned}$$

we get (c).

\(\mathrm{(c)}\Rightarrow \mathrm{(d)}.\) It is obvious.

\(\mathrm{(d)}\Rightarrow \mathrm{(a)}.\) Let \((x_{1},x_{2},\ldots ,x_{n})\in S\left( l_{n}^{2}\left( X\right) \right) .\) By the assumption (d) there exists \( \delta \in \left( 0,1\right) \) such that

$$\begin{aligned} \left\| x_{1}\pm x_{2}\pm \cdots \pm x_{n}\right\| _{X}\le \sqrt{n}\left( 1-\delta \right) \end{aligned}$$

for some choice of signs. Moreover, by (8), \(\left\| x_{1}\pm x_{2}\pm \cdots \pm x_{n}\right\| _{X}\le \sqrt{n}\) for any choice of signs. Hence, we have

$$\begin{aligned} \frac{\sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{2}}{2^{n-1}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{2}}\le & {} \frac{n\left( 1-\delta \right) ^{2}+n\left( 2^{n-1}-1\right) }{2^{n-1}} \\= & {} n-\frac{\delta n\left( 2-\delta \right) }{2^{n-1}}. \end{aligned}$$

By the definition of the upper n-th von Neumann–Jordan constant \({\overline{C}}_{NJ}^{(n)}(X)\) and Proposition 3, we conclude

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(X)\le {\overline{C}}_{NJ}^{(n)}(X)\le n-\frac{ \delta n\left( 2-\delta \right) }{2^{n-1}}<n, \end{aligned}$$

which finishes the proof. \(\square \)

By Theorem 1 and the definition of B-convexity, we get immediately

Corollary 3

A Banach space \((X,\left\| \cdot \right\| _{X})\) is B-convex if and only if there is an integer \(n\ge 2\) such that \({\overline{C}} _{mNJ}^{(n)}(X)<n.\)

Notice that \({\overline{C}}_{mNJ}^{(n)}(X)\) is not equal to \({\overline{C}} _{NJ}^{(n)}(X)\) in general (for \(n=2\) see [18]).

Corollary 4

\({\overline{C}}_{mNJ}^{(n)}(X)=n\) if and only if \({\overline{C}} _{NJ}^{(n)}(X)=n.\)

Proof

Since \((X,\left\| \cdot \right\| _{X})\) is uniformly non-\(l_{n}^{1}\) if and only if \({\overline{C}}_{NJ}^{(n)}(X)<n,\) it follows, by Theorem 1, that \({\overline{C}}_{NJ}^{(n)}(X)<n\) if and only if \({\overline{C}} _{mNJ}^{(n)}(X)<n.\) Hence, by Proposition 1 (a), we get the assertion. \(\square \)

Remark 1

Let us notice that the above corollary can be reformulated equivalently as follows

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(X)<n \quad \text { if and only if}\quad {\overline{C}}_{NJ}^{(n)}(X)<n. \end{aligned}$$

4 Upper and lower n-th von Neumann–Jordan constant for \(L^{p}\)-spaces

Now we will calculate the modified upper n-th von Neumann–Jordan constant for the Lebesgue spaces \(L^{p}(\mu )\) and \(l_{m}^{p}\) \((1\le p<\infty \), \(1\le m\le \infty ).\) To prove the next lemma, we will apply the following result given by Figiel, Iwaniec and Pełczyński in [6]. Namely, for arbitrary scalars \( c_{1},c_{2},\ldots ,c_{n}\) and \(2<p<\infty \) we have

$$\begin{aligned} \int _{0}^{1}\left| \sum _{j=1}^{n}c_{j}r_{j}(t)\right| ^{p}dt\le n^{-1}\int _{0}^{1}\left| \sum _{j=1}^{n}r_{j}(t)\right| ^{p}dt\sum _{j=1}^{n}\left| c_{j}\right| ^{p}, \end{aligned}$$
(9)

where \(r_{1},r_{2},\ldots ,r_{n}\) are the Rademacher functions, that is, \(r_{j}(t)=\mathrm{sign}\left( \sin 2^{j}\pi t\right) \) for \(j=1,2,\ldots .\)

Let \(\left\lfloor \cdot \right\rfloor :{\mathbb {R}}\rightarrow {\mathbb {Z}}\) be the floor function, i.e. \(\left\lfloor x\right\rfloor =\max \left\{ k\in {\mathbb {Z}}:k\le x\right\} \) for any \(x\in {\mathbb {R}}.\)

Lemma 1

Let \(2<p<\infty \) and \(X=L^{p}(\mu )\) or \(X=l^{p}.\) Then

$$\begin{aligned} \sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{p}\le n^{-1}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{p} \end{aligned}$$

for any \(x_{1},x_{2},\ldots ,x_{n}\in X\) and any integer \(n\ge 1.\)

Proof

Fix an integer \(n\ge 1.\) Notice that

$$\begin{aligned} \int _{0}^{1}\left| \sum _{j=1}^{n}c_{j}r_{j}(t)\right| ^{p}dt=2^{1-n}\sum _{\theta _{j}=\pm 1}\left| c_{1}+\sum _{j=2}^{n}\theta _{j}c_{j}\right| ^{p} \end{aligned}$$

for any scalars \(c_{1},c_{2},\ldots ,c_{n}.\) On the other hand, it can be proved elementarily that

$$\begin{aligned} \int _{0}^{1}\left| \sum _{j=1}^{n}r_{j}(t)\right| ^{p}dt=2^{1-n}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}. \end{aligned}$$

Suppose that \(x_{k}=\left( t_{i}^{(k)}\right) _{i=1}^{\infty }\in l^{p}\) for \(k=1,2,\ldots ,n.\) By the inequality (9), we get

$$\begin{aligned} \sum _{\theta _{j}=\pm 1}\left| t_{i}^{(1)}+\sum _{j=2}^{n}\theta _{j}t_{i}^{(j)}\right| ^{p}\le n^{-1}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\sum _{j=1}^{n}\left| t_{i}^{(j)}\right| ^{p} \end{aligned}$$

for any \(i\in {\mathbb {N}}.\) Summing both sides over i from 1 to \(\infty \) and reversing the order of summation, we obtain the assertion for \(X=l^{p}.\) Similarly, for \( X=L^{p}(\mu )\) take \(x_{1},x_{2},\ldots ,x_{n}\in L^{p}(\mu ).\) Then, by the inequality (9), we get

$$\begin{aligned} \sum _{\theta _{j}=\pm 1}\left| x_{1}(t)+\sum _{j=2}^{n}\theta _{j}x_{j}(t)\right| ^{p}\le n^{-1}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\sum _{j=1}^{n}\left| x_{j}(t)\right| ^{p} \end{aligned}$$

for almost every \(t\in \varOmega .\) Integrating both sides of this inequality over \(\varOmega \) and reversing the order of summation and integration, we obtain the desired inequality. \(\square \)
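The elementary identity for \(\int _{0}^{1}\left| \sum _{j=1}^{n}r_{j}(t)\right| ^{p}dt\) invoked in the proof can be verified by enumerating sign patterns, since the integral equals the average of \(\left| \sum _{j=1}^{n}\theta _{j}\right| ^{p}\) over all \(2^{n}\) choices of signs. A short check of ours, with \(p=3\) as a sample exponent:

```python
from itertools import product
from math import comb

def rademacher_moment(n, p):
    # integral of |r_1 + ... + r_n|^p over [0,1], computed as the average of
    # |theta_1 + ... + theta_n|^p over all 2^n choices of signs
    return sum(abs(sum(eps)) ** p for eps in product((1, -1), repeat=n)) / 2 ** n

def closed_form(n, p):
    # 2^{1-n} sum_{k=0}^{floor(n/2)} C(n,k)(n-2k)^p, as used in the proof
    return 2.0 ** (1 - n) * sum(comb(n, k) * (n - 2 * k) ** p for k in range(n // 2 + 1))

p = 3
for n in range(1, 9):
    assert abs(rademacher_moment(n, p) - closed_form(n, p)) < 1e-9
print("closed form confirmed for n = 1,...,8 and p =", p)
```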

Theorem 2

Let \(1\le p<\infty \) and \(X=L^{p}(\mu )\) or \(X=l_{m}^{p}\)\((1\le m\le \infty ).\) Then

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(X)=\left\{ \begin{array}{lll} n^{\frac{2}{p}-1} &{} {\textit{if}} &{} 1\le p\le 2\text { and }m\ge n,\text { } \\ n^{-1}\left( 2^{1-n}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c} n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\right) ^{\frac{2}{p}}&{\textit{if}}&2<p<\infty \text { and }m\ge 2^{n-1}. \end{array} \right. \end{aligned}$$

Proof

Case 1. Let \(1\le p\le 2.\) By Theorem 3 from [17] and Proposition 1 (a), for all \(n\ge 2,\) we have

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(X)\le {\overline{C}}_{NJ}^{(n)}(X)=n^{\frac{2}{p} -1}, \end{aligned}$$

whenever \(X=L^{p}(\mu )\) or \(X=l_{m}^{p},\)\(m\in {\mathbb {N}}\cup \{\infty \}.\) The opposite inequality follows immediately from Proposition 4 (a) whenever \(X=l_{m}^{p}\) with \(m\ge n.\)

Now consider \(X=L^{p}(\mu ).\) Let \(A\subset \varOmega \) be a set of positive finite measure. Divide the set A into n pairwise disjoint subsets \( A_{1}, \)\(A_{2},\ldots ,A_{n}\) such that \(\bigcup \limits _{i=1}^{n}A_{i}=A\) and \( \mu (A_{i})=\frac{1}{n}\mu (A).\) Define \(z_{i}=\mu (A_{i})^{-1/p}\chi _{A_{i}}\) for \(i=1,2,\ldots ,n.\) Then

$$\begin{aligned} \left\| z_{i}\right\| _{L^{p}}=\left( \int \limits _{A_{i}}\left( \mu (A_{i})^{-1/p}\right) ^{p}d\mu \right) ^{\frac{1}{p}}=1 \end{aligned}$$

for any \(i\in \{1,2,\ldots ,n\}\) and

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(L^{p}(\mu ))\ge & {} C^{(n)}(z_{1},z_{2},\ldots ,z_{n}) \\= & {} \frac{1}{n2^{n-1}}\sum _{\theta _{j}=\pm 1}\left\| z_{1}+\sum _{j=2}^{n}\theta _{j}z_{j}\right\| _{L^{p}}^{2} \\= & {} \frac{1}{n2^{n-1}}\sum _{\theta _{j}=\pm 1}\left\| \mu (A_{1})^{-1/p}\chi _{A_{1}}+\sum _{j=2}^{n}\theta _{j}\mu (A_{j})^{-1/p}\chi _{A_{j}}\right\| _{L^{p}}^{2} \\= & {} \frac{1}{n2^{n-1}}2^{n-1}\left\| \left( \frac{1}{n}\mu (A)\right) ^{-1/p}\chi _{A}\right\| _{L^{p}}^{2} \\= & {} \frac{1}{n}\left[ \mu (A)\left( \frac{1}{n}\mu (A)\right) ^{-1}\right] ^{ \frac{2}{p}}=n^{\frac{2}{p}-1}. \end{aligned}$$

Hence \({\overline{C}}_{mNJ}^{(n)}(L^{p}(\mu ))={\overline{C}}_{NJ}^{(n)}(L^{p}(\mu ))=n^{ \frac{2}{p}-1},\) whenever \(1\le p\le 2.\)

Case 2. Let \(2<p<\infty \) and \(X=L^{p}(\mu )\) or \(X=l_{m}^{p}\)\((2^{n-1}\le m\le \infty )\). By the Hölder-Rogers inequality for \(p>2\) and by Lemma 1, we have

$$\begin{aligned}&\sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{2}\le 2^{2(n-1)\left( \frac{1}{2}-\frac{1}{p} \right) }\left( \sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{p}\right) ^{\frac{2}{p} } \nonumber \\&\quad \le 2^{(n-1)\left( \frac{p-2}{p}\right) }\left( n^{-1}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{p}\right) ^{ \frac{2}{p}} \end{aligned}$$
(10)

for any \(x_{1},x_{2},\ldots ,x_{n}\in X.\) Assume that \(x_{1},x_{2},\ldots ,x_{n}\in S(X).\) Then, by inequality (10), we have

$$\begin{aligned} C^{(n)}(x_{1},x_{2},\ldots ,x_{n})=&{} \frac{\sum _{\theta _{j}=\pm 1}\left\| x_{1}+\sum _{j=2}^{n}\theta _{j}x_{j}\right\| _{X}^{2}}{n2^{n-1}} \\ \le&{} \frac{1}{n2^{n-1}}2^{(n-1)\left( \frac{p-2}{p}\right) }\left( n^{-1}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\sum _{j=1}^{n}\left\| x_{j}\right\| _{X}^{p}\right) ^{ \frac{2}{p}} \\=&{} \frac{1}{n}2^{(n-1)\left( \frac{p-2}{p}-1\right) }\left( \sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\right) ^{\frac{2}{p}} \\=&{} n^{-1}\left( 2^{1-n}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\right) ^{\frac{2}{p}}, \end{aligned}$$

whence

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(X)\le n^{-1}\left( 2^{1-n}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\right) ^{\frac{2}{p}}. \end{aligned}$$
(11)

Let the matrix \(A_{n}\) be defined as in the proof of Proposition 4 (b). Denote by \(y_{i}\) column i of the matrix \(A_{n}\) \((i=1,2,\ldots ,n).\) For any \(i\in \left\{ 1,2,\ldots ,n\right\} \) define

$$\begin{aligned} z_{i}=\frac{1}{\left( 2^{n-1}\right) ^{1/p}}y_{i}^{T}, \end{aligned}$$

where \(y_{i}^{T}\) denotes the transpose of the column \(y_{i}.\) Then

$$\begin{aligned} \left\| z_{i}\right\| _{l_{2^{n-1}}^{p}}=\left( \sum _{m=1}^{2^{n-1}}\left( \frac{1}{\left( 2^{n-1}\right) ^{1/p}}\right) ^{p}\right) ^{1/p}=1 \end{aligned}$$

for any \(i\in \left\{ 1,2,\ldots ,n\right\} .\) Hence \(z_{1},z_{2},\ldots ,z_{n}\in S\left( l_{2^{n-1}}^{p}\right) .\) For any element \( x=(t_{1},t_{2},\ldots ,t_{2^{n-1}})\in l_{2^{n-1}}^{p}\) denote by \(x^{*}\) its non-increasing rearrangement, i.e. the non-increasing sequence obtained from \(\left\{ \left| t_{i}\right| \right\} _{i=1}^{2^{n-1}}\) by a suitable permutation of the indices. Notice that for all sequences \( (1,\theta _{2},\ldots ,\theta _{n})\) with \(\theta _{j}=\pm 1\) \( (j=2,3,\ldots ,n)\) the non-increasing rearrangements \(\left( z_{1}+\sum _{j=2}^{n}\theta _{j}z_{j}\right) ^{*}\) coincide, since the multiset of coordinates of \(z_{1}+\sum _{j=2}^{n}\theta _{j}z_{j}\) does not depend on the choice of signs. Denote by \( \left( v_{1},v_{2},\ldots ,v_{2^{n-1}}\right) \) this common non-increasing rearrangement, that is,

$$\begin{aligned} \left( z_{1}+\sum \nolimits _{j=2}^{n}\theta _{j}z_{j}\right) ^{*}=\left( v_{1},v_{2},\ldots ,v_{2^{n-1}}\right) \end{aligned}$$

for any such sequence \((1,\theta _{2},\ldots ,\theta _{n}).\) Hence

$$\begin{aligned} C^{(n)}(z_{1},z_{2},\ldots ,z_{n})= & {} \frac{\sum _{\theta _{j}=\pm 1}\left\| z_{1}+\sum _{j=2}^{n}\theta _{j}z_{j}\right\| _{l_{2^{n-1}}^{p}}^{2}}{ n2^{n-1}} \\= & {} \frac{1}{n}\left\| \left( v_{1},v_{2},\ldots ,v_{2^{n-1}}\right) \right\| _{l_{2^{n-1}}^{p}}^{2}. \end{aligned}$$

Notice that \(v_{1}=\frac{n}{\left( 2^{n-1}\right) ^{1/p}}\) and, for each \(k=1,2,\ldots ,\left\lfloor n/2\right\rfloor \) with \(n-2k>0,\) the value \(v_{l}=\frac{n-2k}{\left( 2^{n-1}\right) ^{1/p}}\) is taken for \(\left( {\begin{array}{c}n\\ k\end{array}}\right) \) subsequent integers l (for even n the remaining coordinates equal 0 and do not contribute to the norm below). Consequently,

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}\left( l_{2^{n-1}}^{p}\right)\ge & {} C^{(n)}(z_{1},z_{2},\ldots ,z_{n}) \\= & {} \frac{1}{n}\left( \sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor } \left( {\begin{array}{c}n\\ k\end{array}}\right) \left( \frac{n-2k}{\left( 2^{n-1}\right) ^{1/p}}\right) ^{p}\right) ^{2/p} \\= & {} n^{-1}\left( 2^{1-n}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor } \left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\right) ^{\frac{2}{p}}. \end{aligned}$$

Since \(l_{2^{n-1}}^{p}\) can be embedded isometrically in any \(l_{m}^{p}\) with \( m\ge 2^{n-1},\) by inequality (11) applied for \(X=l_{m}^{p}\), it follows that

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}\left( l_{2^{n-1}}^{p}\right) ={\overline{C}} _{mNJ}^{(n)}\left( l_{m}^{p}\right) =n^{-1}\left( 2^{1-n}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c}n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\right) ^{\frac{2}{p}} \end{aligned}$$

whenever \(2<p<\infty \) and \(m\ge 2^{n-1}.\)

Since \(L^{p}(\mu )\) contains an isometric copy of \(l_{2^{n-1}}^{p},\) by inequality (11), we obtain the assertion for \(X=L^{p}(\mu ),\) which completes the proof. \(\square \)
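A quick numerical cross-check of the closed form obtained in Case 2 is possible: evaluating \(C^{(n)}\) at the extremal vectors \(z_{1},\ldots ,z_{n}\) constructed in the proof reproduces the right-hand side of the formula. The following sketch is ours, with \(n=4\) and \(p=3.5\) as sample values.

```python
from itertools import product
from math import comb

def p_norm(v, p):
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

def C_n(vectors, norm):
    # the quantity C^{(n)}(x_1,...,x_n) from Section 1
    n, d = len(vectors), len(vectors[0])
    denom = 2 ** (n - 1) * sum(norm(x) ** 2 for x in vectors)
    num = 0.0
    for thetas in product((1, -1), repeat=n - 1):
        signs = (1,) + thetas
        s = tuple(sum(t * x[i] for t, x in zip(signs, vectors)) for i in range(d))
        num += norm(s) ** 2
    return num / denom

def theorem2_value(n, p):
    # the closed form of Theorem 2 for 2 < p < infinity
    s = sum(comb(n, k) * (n - 2 * k) ** p for k in range(n // 2 + 1))
    return (2.0 ** (1 - n) * s) ** (2.0 / p) / n

n, p = 4, 3.5
# normalized columns of A_n as elements of the unit sphere of l_{2^{n-1}}^p
A = [[1, 1], [1, -1]]
for _ in range(3, n + 1):
    A = [row + [1] for row in A] + [row + [-1] for row in A]
scale = (2 ** (n - 1)) ** (-1.0 / p)
z = [tuple(scale * a for a in col) for col in zip(*A)]
print(C_n(z, lambda v: p_norm(v, p)))   # both prints agree up to rounding
print(theorem2_value(n, p))
```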

Haagerup [11] proved that the best type (2, p) constant in the Khintchine inequality for \(2\le p<\infty \) is \(B_{p}=\sqrt{2}\left( \frac{1}{\sqrt{\pi }}\varGamma \left( \frac{p+1}{2} \right) \right) ^{\frac{1}{p}}.\) Kato, Takahashi and Hashimoto proved in [17] that \({\overline{C}} _{NJ}^{(n)}\left( X\right) \le \min \left\{ n^{\frac{2}{q} -1},B_{p}^{2}\right\} .\) Combining this result with Theorem 2, we get a two-sided estimate of the upper n-th von Neumann–Jordan constant for Lebesgue spaces with \(p\in (2,\infty ).\)

Corollary 5

Let \(2<p<\infty ,\) let \(q\) be conjugate to \(p,\) and let \(X=L^{p}(\mu )\) or \(X=l_{m}^{p}.\) If \(m\ge 2^{n-1},\) then

$$\begin{aligned} n^{-1}\left( 2^{1-n}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c} n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\right) ^{\frac{2}{p}}\le {\overline{C}} _{NJ}^{(n)}\left( X\right) \le \min \left\{ n^{\frac{2}{q} -1},B_{p}^{2}\right\} . \end{aligned}$$

Proof

The left hand side inequality follows immediately from Theorem 2. Namely,

$$\begin{aligned} n^{-1}\left( 2^{1-n}\sum \limits _{k=0}^{\left\lfloor n/2\right\rfloor }\left( {\begin{array}{c} n\\ k\end{array}}\right) \left( n-2k\right) ^{p}\right) ^{\frac{2}{p}}={\overline{C}} _{mNJ}^{(n)}\left( X\right) \le {\overline{C}}_{NJ}^{(n)}\left( X\right) , \end{aligned}$$

whenever \(2<p<\infty ,\)\(X=L^{p}(\mu )\) or \(X=l_{m}^{p}\) and \(m\ge 2^{n-1}. \) The right hand side inequality was proved in [17]. \(\square \)

Theorem 3

Let \(2<p<\infty .\) If \(X=L^{p}(\mu )\) or \(X=l^{p},\) then

$$\begin{aligned} \lim _{n\rightarrow \infty }{\overline{C}}_{mNJ}^{(n)}\left( X\right) =\lim _{n\rightarrow \infty }{\overline{C}}_{NJ}^{(n)}\left( X\right) =B_{p}^{2}. \end{aligned}$$

Proof

Assume that \(2<p<\infty .\) By Theorem 2, for every \(n\in {\mathbb {N}}\) , we get

$$\begin{aligned} {\overline{C}}_{mNJ}^{(2n)}\left( X\right)= & {} (2n)^{-1}\left( 2^{1-2n}\sum _{k=0}^{n}\left( {\begin{array}{c}2n\\ k\end{array}}\right) (2n-2k)^{p}\right) ^{2/p} \\= & {} \frac{1}{2n}\left( \frac{2^{p+1}}{2^{2n}}\sum _{k=0}^{n}\left( {\begin{array}{c}2n\\ k\end{array}}\right) (n-k)^{p}\right) ^{2/p} \\= & {} \frac{2^{1+2/p}}{n}\left( \frac{1}{2^{2n}}\sum _{j=0}^{n}\left( {\begin{array}{c}2n\\ n-j\end{array}}\right) j^{p}\right) ^{2/p}, \end{aligned}$$

where \(j=n-k.\) On the other hand, by an elementary asymptotic argument (see [4]), we have

$$\begin{aligned} \sum _{|j|<x\sqrt{n/2}}\frac{1}{2^{2n}}\left( {\begin{array}{c}2n\\ n-j\end{array}}\right) =(1+o(1))\sum _{|j|<x \sqrt{n/2}}\frac{1}{\sqrt{\pi {n}}}e^{-j^{2}/n} \end{aligned}$$

for any \(x>0.\) Hence, letting \(t_{j}=j\sqrt{2/n},\) we obtain

$$\begin{aligned} \sum _{|j|<x\sqrt{n/2}}\frac{1}{2^{2n}}\left( {\begin{array}{c}2n\\ n-j\end{array}}\right) =(1+o(1)) \sum _{|t_{j}|<x}\frac{1}{\sqrt{2\pi }}e^{-t_{j}^{2}/2}\varDelta {t_{j}} \end{aligned}$$

for any \(x>0.\) Passing to the limit gives the classic de Moivre–Laplace theorem (see [4]). In our case, we get

$$\begin{aligned} \sum _{j=0}^{n}\frac{1}{2^{2n}}\left( {\begin{array}{c}2n\\ n-j\end{array}}\right) j^{p}=(1+o(1))\left( \frac{n}{2} \right) ^{p/2}\frac{1}{\sqrt{2\pi }}\int _{0}^{\infty }e^{-t^{2}/2}t^{p}dt. \end{aligned}$$
(12)

Moreover, by the definition of the gamma function, it follows that

$$\begin{aligned} \frac{1}{\sqrt{2\pi }}\int _{0}^{\infty }e^{-t^{2}/2}t^{p}dt&=\frac{1}{\sqrt{ 2\pi }}\int _{0}^{\infty }e^{-u}(2u)^{(p-1)/2}du \\&=\frac{2^{p/2}}{2\sqrt{\pi }}\int _{0}^{\infty }e^{-u}u^{(p-1)/2}du \\&=\frac{2^{p/2}}{2\sqrt{\pi }}\int _{0}^{\infty }e^{-u}u^{\left( \frac{p+1}{2 }\right) -1}du \\&=\frac{2^{p/2}}{2\sqrt{\pi }}\varGamma \left( \frac{p+1}{2}\right) . \end{aligned}$$

Thus, by the equality (12), we obtain

$$\begin{aligned} {\overline{C}}_{mNJ}^{(2n)}\left( X\right) =&{} \frac{2^{1+2/p}}{n}\left( \frac{ 1}{2^{2n}}\sum _{j=0}^{n}\left( {\begin{array}{c}2n\\ n-j\end{array}}\right) j^{p}\right) ^{2/p} \\=&{} \frac{2^{1+2/p}}{n}\left( (1+o(1))\left( \frac{n}{2}\right) ^{p/2}\frac{1 }{\sqrt{2\pi }}\int _{0}^{\infty }e^{-t^{2}/2}t^{p}dt\right) ^{2/p} \\=&{} \frac{n2^{2/p}}{n}(1+o(1))^{2/p}\left( \frac{1}{\sqrt{2\pi }} \int _{0}^{\infty }e^{-t^{2}/2}t^{p}dt\right) ^{2/p} \\=&{} 2^{2/p}(1+o(1))^{2/p}\left( \frac{2^{p/2}}{2\sqrt{\pi }}\varGamma \left( \frac{p+1}{2}\right) \right) ^{2/p} \\=&{} 2(1+o(1))^{2/p}\left( \frac{1}{\sqrt{\pi }}\varGamma \left( \frac{p+1}{2} \right) \right) ^{2/p} \\=&{} (1+o(1))^{2/p}\left[ \sqrt{2}\left( \frac{1}{\sqrt{\pi }}\varGamma \left( \frac{p+1}{2}\right) \right) ^{1/p}\right] ^{2} \\=&{} (1+o(1))^{2/p}B_{p}^{2} \end{aligned}$$

for any \(n\in {\mathbb {N}}\). Hence

$$\begin{aligned} \lim _{n\rightarrow \infty }{\overline{C}}_{mNJ}^{(2n)}\left( X\right) =B_{p}^{2}. \end{aligned}$$

By Proposition 1 (a), (c) and Corollary 5, we have

$$\begin{aligned} {\overline{C}}_{mNJ}^{(2n)}\left( X\right) \le \frac{2n+1}{2n}{\overline{C}} _{mNJ}^{(2n+1)}\left( X\right) \le \frac{2n+1}{2n}{\overline{C}}_{NJ}^{(2n+1)}\left( X\right) \le \frac{2n+1}{2n}B_{p}^{2}. \end{aligned}$$

By the squeeze theorem, it follows that

$$\begin{aligned} \lim _{n\rightarrow \infty }{\overline{C}}_{mNJ}^{(2n)}\left( X\right) =\lim _{n\rightarrow \infty }{\overline{C}}_{mNJ}^{(2n+1)}\left( X\right) =\lim _{n\rightarrow \infty }{\overline{C}}_{NJ}^{(2n+1)}\left( X\right) =B_{p}^{2}, \end{aligned}$$

whence \(\lim _{n\rightarrow \infty }{\overline{C}}_{mNJ}^{(n)}\left( X\right) =B_{p}^{2}.\) Furthermore, \(\lim _{n\rightarrow \infty }{\overline{C}} _{NJ}^{(n)}\left( X\right) =B_{p}^{2}\) because, by Proposition 1 (b), the sequence \(\left( {\overline{C}}_{NJ}^{(n)}\left( X\right) \right) \) is non-decreasing and its subsequence \(\left( {\overline{C}}_{NJ}^{(2n+1)}\left( X\right) \right) \) converges to \(B_{p}^{2}.\) \(\square \)

The proof of Theorem 3 presented here is a modification of an idea given by Cecil Rousseau from The University of Memphis in a private communication.
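The convergence stated in Theorem 3 can also be observed numerically from the closed form of Theorem 2. The sketch below is ours, with \(p=3\) as a sample exponent.

```python
from math import comb, gamma, pi, sqrt

def modified_constant(n, p):
    # the value of C-bar_{mNJ}^{(n)} given by Theorem 2 for 2 < p < infinity
    s = sum(comb(n, k) * (n - 2 * k) ** p for k in range(n // 2 + 1))
    return (2.0 ** (1 - n) * s) ** (2.0 / p) / n

def B_p_squared(p):
    # the square of Haagerup's constant B_p
    return 2.0 * (gamma((p + 1) / 2.0) / sqrt(pi)) ** (2.0 / p)

p = 3.0
for n in (2, 5, 10, 50, 200):
    print(n, modified_constant(n, p))
print("B_p^2 =", B_p_squared(p))   # the values above tend to this limit
```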

Corollary 6

Let \(2\le p\le \infty \) and \(X=L^{p}(\mu )\) or \(X=l_{m}^{p}.\) If \(m\ge n,\) then \({\underline{C}}_{NJ}^{(n)}(X)=n^{\frac{2}{p}-1}.\)

Proof

Let \(2\le p\le \infty \) and let q denote the conjugate number of p. For \(p=\infty \) the estimate \({\underline{C}}_{NJ}^{(n)}(X)\ge \frac{1}{n}=n^{\frac{2}{p}-1}\) follows from Proposition 1 (a). For \(2\le p<\infty ,\) by Corollary 2 and Theorem 2, we have

$$\begin{aligned} {\underline{C}}_{NJ}^{(n)}(L^{p}(\mu ))\ge \frac{1}{{\overline{C}} _{NJ}^{(n)}(\left( L^{p}(\mu )\right) ^{*})}=\frac{1}{{\overline{C}} _{NJ}^{(n)}(L^{q}(\mu ))}=n^{1-\frac{2}{q}}=n^{\frac{2}{p}-1}. \end{aligned}$$

On the other hand, taking the canonical basis \(\left\{ e_{i}\right\} _{i=1}^{n}\) in \(l_{n}^{p}\) we have

$$\begin{aligned} C^{(n)}\left( e_{1},e_{2},\ldots ,e_{n}\right)= & {} \frac{\sum _{\theta _{j}=\pm 1}\left\| e_{1}+\sum _{j=2}^{n}\theta _{j}e_{j}\right\| _{l_{n}^{p}}^{2} }{2^{n-1}\sum _{j=1}^{n}\left\| e_{j}\right\| _{l_{n}^{p}}^{2}} \\= & {} \frac{2^{n-1}n^{\frac{2}{p}}}{2^{n-1}n}=n^{\frac{2}{p}-1}. \end{aligned}$$

Hence, by the definition of lower n-th von Neumann–Jordan constant, we get

$$\begin{aligned} {\underline{C}}_{NJ}^{(n)}(L^{p}(\mu ))\le {\underline{C}} _{NJ}^{(n)}(l_{m}^{p})\le {\underline{C}} _{NJ}^{(n)}(l_{n}^{p})\le n^{\frac{2}{p}-1} \end{aligned}$$

whenever \(m\ge n.\) Combining both inequalities, we get the assertion. \(\square \)

Remark 2

It is known that \(C_{NJ}(X)=C_{NJ}(X^{*})\) (see [16]), while in general \({\overline{C}}_{NJ}^{(n)}(X)\ne {\overline{C}}_{NJ}^{(n)}(X^{*})\) for \(n\ge 3\) (see [17]). Theorem 2 shows that \({\overline{C}} _{mNJ}^{(2)}(X)={\overline{C}}_{mNJ}^{(2)}(X^{*})\) whenever \(X=L^{p}(\mu )\) or \(X=l_{m}^{p}\) with \(m\ge 2^{n-1}.\) Indeed, fix \(1<p<2\) and consider \( X=L^{p}(\mu )\) or \(X=l_{m}^{p}\) with \(m\ge 2^{n-1}.\) Let q be conjugate to p. Then \(q>2\) and \(X^{*}=L^{q}(\mu )\) or \(X^{*}=l_{m}^{q}\) with \( m\ge 2^{n-1},\) respectively. Applying Theorem 2 for \( n=2,\) we have

$$\begin{aligned} {\overline{C}}_{mNJ}^{(2)}(X)=2^{\frac{2}{p}-1}. \end{aligned}$$

Since \(q=\frac{p}{p-1}>2,\) it follows from Theorem 2 that

$$\begin{aligned} {\overline{C}}_{mNJ}^{(2)}(X^{*})= & {} \frac{1}{2}\left( \frac{1}{2} \sum \limits _{k=0}^{1}\left( {\begin{array}{c}2\\ k\end{array}}\right) \left( 2-2k\right) ^{\frac{p}{p-1}}\right) ^{\frac{2(p-1)}{p}} \\= & {} \frac{1}{2}\left( 2^{\frac{p}{p-1}-1}\right) ^{\frac{2(p-1)}{p}}=2^{\frac{ 2}{p}-1}, \end{aligned}$$

whence \({\overline{C}}_{mNJ}^{(2)}(X)={\overline{C}}_{mNJ}^{(2)}(X^{*}).\)

The equality \({\overline{C}}_{mNJ}^{(n)}(X)={\overline{C}} _{mNJ}^{(n)}(X^{*})\) does not hold in general for \(n\ge 3.\) By Remark \( 9 \,(ii)\) in [17] and Theorem 2, we have

$$\begin{aligned} {\overline{C}}_{mNJ}^{(n)}(X^{*})\le {\overline{C}}_{NJ}^{(n)}(X^{*})< {\overline{C}}_{NJ}^{(n)}(X)=n^{\frac{2}{p}-1}={\overline{C}}_{mNJ}^{(n)}(X), \end{aligned}$$

whence \({\overline{C}}_{mNJ}^{(n)}(X)\not ={\overline{C}}_{mNJ}^{(n)}(X^{*})\) for \(n\ge 3.\)