1 Introduction and preliminaries

We start this section by recalling an interesting metric-type inequality due to Dragomir and Goşa [7]. Let us first fix some notation. We denote by \(\mathbb{N}\) the set of positive integers, that is, \(\mathbb{N}=\{1,2,\dots \}\). For \(n\in \mathbb{N}\), let

$$ \Pi _{n}= \Biggl\{ (p_{1},p_{2},\dots ,p_{n})\in \mathbb{R}^{n}:\, p_{i} \geq 0 \,(i=1,2,\dots ,n),\, \sum_{i=1}^{n}p_{i}=1 \Biggr\} . $$

Theorem 1.1

(Dragomir–Goşa [7])

Let \((X,d)\) be a metric space. Then, for all \(n\in \mathbb{N}\), \(n\geq 2\), \((p_{1},p_{2},\dots ,p_{n})\in \Pi _{n}\), and \(\{x_{i}\}_{i=1}^{n}\subset X\),

$$ \sum_{i=1}^{n-1}\sum _{j=i+1}^{n} p_{i}p_{j} d(x_{i},x_{j}) \leq \inf_{x\in X}\sum _{i=1}^{n}p_{i} d(x_{i},x). $$
(1.1)

Moreover, the inequality is optimal in the sense that the multiplicative coefficient \(C=1\) on the right-hand side of (1.1) (in front of inf) cannot be replaced by a smaller real number.

In the particular case where \(p_{i}=\frac{1}{n}\) (\(i=1,2,\dots ,n\)), (1.1) reduces to

$$ \sum_{i=1}^{n-1}\sum _{j=i+1}^{n} d(x_{i},x_{j}) \leq n \inf_{x\in X} \sum_{i=1}^{n} d(x_{i},x). $$

This inequality can be interpreted as follows. Let P be a polygon with n vertices in a metric space, and let x be an arbitrary point of the space. Then the sum of the lengths of all edges and diagonals of P is at most n times the sum of the distances from x to the vertices of P.
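
As a purely illustrative aside (not part of the original argument), the following Python sketch spot-checks inequality (1.1) for random points in the Euclidean plane with random weights; NumPy is assumed, the function name check_dragomir_gosa is ours, and the infimum over x is only probed at finitely many sampled candidate points.

```python
# Numerical sanity check of (1.1): X = R^2 with the Euclidean metric d.
import numpy as np

rng = np.random.default_rng(0)

def check_dragomir_gosa(n=6, trials=200, candidates=500):
    for _ in range(trials):
        xs = rng.normal(size=(n, 2))                    # points x_1, ..., x_n
        p = rng.random(n); p /= p.sum()                 # (p_1, ..., p_n) in Pi_n
        d = np.linalg.norm(xs[:, None] - xs[None, :], axis=-1)  # d(x_i, x_j)
        lhs = sum(p[i] * p[j] * d[i, j]
                  for i in range(n - 1) for j in range(i + 1, n))
        # (1.1) requires lhs <= sum_i p_i d(x_i, x) for every x; we test this
        # only at sampled candidate points standing in for the infimum.
        for x in rng.normal(size=(candidates, 2)):
            assert lhs <= (p * np.linalg.norm(xs - x, axis=1)).sum() + 1e-12
    return True

print(check_dragomir_gosa())
```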

In the same reference [7] the authors provided some interesting applications of inequality (1.1) to normed linear spaces and pre-Hilbert spaces. For more results on metric inequalities, we refer to [1, 6, 12] and the references therein.

In this paper, we derive new inequalities in 2-metric spaces and 2-normed linear spaces. In particular, we obtain an extension of Theorem 1.1 to the setting of 2-metric spaces and provide a geometric interpretation of the obtained inequality.

Before stating and proving our results, let us recall briefly some basic notions related to 2-metric spaces and 2-normed linear spaces.

In 1963, Gähler [10] introduced the notion of 2-metric spaces as follows. Let X be a nonempty set, and let \(D: X\times X\times X\to \mathbb{R}\). We say that D is a 2-metric on X if the following conditions are satisfied:

(\(D_{1}\)):

for all \(x,y\in X\) with \(x\neq y\), there exists \(z=z(x,y)\in X\) such that

$$ D(x,y,z)\neq 0; $$
(\(D_{2}\)):

\(D(x,y,z)=0\) when at least two elements of \(\{x,y,z\}\subset X\) are equal;

(\(D_{3}\)):

for all \(x,y,z\in X\),

$$ D(x,y,z)=D(x,z,y)=D(y,z,x); $$
(\(D_{4}\)):

for all \(x,y,z,u\in X\),

$$ D(x,y,z)\leq D(u,y,z)+D(x,u,z)+D(x,y,u). $$

In this case, the pair \((X,D)\) is called a 2-metric space.

Let us mention some remarks following from properties (\(D_{1}\))–(\(D_{4}\)).

  • Given \(x,y,z\in X\), we denote by \(\sigma (x,y,z)\) any permutation of the elements x, y, and z. By (\(D_{3}\)) we deduce that

    $$ D(x,y,z)=D\bigl(\sigma (x,y,z)\bigr),\quad x,y,z\in X. $$
  • Let \(x,y,z\in X\). By (\(D_{3}\)) and (\(D_{4}\)), for all \(u\in X\), we have

    $$ \begin{aligned} & D(x,y,z) \\ &\quad \leq D(u,y,z)+D(x,u,z)+D(x,y,u) \\ &\quad \leq D(x,y,z)+D(u,x,z)+D(u,y,x)+D(x,u,z)+D(x,y,u) \\ &\quad =D(x,y,z)+2D(u,x,z)+2D(u,y,x), \end{aligned} $$

    which yields

    $$ D(u,x,z)+D(u,y,x)\geq 0. $$

    Taking \(u=y\) in this inequality and using (\(D_{2}\)), we obtain

    $$ D(x,y,z)\geq 0,\quad x,y,z\in X. $$

Example 1.1

(see [10])

Let \(D: \mathbb{R}^{N}\times \mathbb{R}^{N}\times \mathbb{R}^{N}\to \mathbb{R}\), \(N\in \mathbb{N}\), \(N\geq 2\), be the mapping defined by

$$ D(A_{1},A_{2},A_{3}) = \frac{1}{2} \Vert \overrightarrow{A_{1}A_{2}} \times \overrightarrow{A_{1}A_{3}} \Vert _{2},\quad A_{1},A_{2},A_{3}\in \mathbb{R}^{N}, $$
(1.2)

where × denotes the cross product in \(\mathbb{R}^{N}\), and \(\|\cdot \|_{2}\) denotes the Euclidean norm in \(\mathbb{R}^{N}\). Then D is a 2-metric on \(X=\mathbb{R}^{N}\). Note that \(D(A_{1},A_{2},A_{3})\) is equal to the area of the triangle spanned by \(A_{1}\), \(A_{2}\), and \(A_{3}\).
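
As a concrete illustration (ours, not from [10]), the following Python sketch implements (1.2) for N = 3, where the cross product is available directly, and spot-checks properties (\(D_{3}\)) and (\(D_{4}\)) on random points; NumPy is assumed.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def D(a1, a2, a3):
    """Area of the triangle with vertices a1, a2, a3 in R^3, as in (1.2)."""
    return 0.5 * np.linalg.norm(np.cross(a2 - a1, a3 - a1))

for _ in range(1000):
    x, y, z, u = rng.normal(size=(4, 3))
    # (D3): D is invariant under permutations of its arguments.
    areas = [D(*perm) for perm in itertools.permutations((x, y, z))]
    assert max(areas) - min(areas) < 1e-9
    # (D4): the "tetrahedron" inequality.
    assert D(x, y, z) <= D(u, y, z) + D(x, u, z) + D(x, y, u) + 1e-9
print("(D3) and (D4) hold on the sampled points")
```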

In the same reference [10], Gähler introduced the notion of 2-normed linear spaces as follows. Let X be a linear space over \(\mathbb{R}\) of dimension \(1< L\leq \infty \). Let \(\|\cdot ,\cdot \|: X\times X\to \mathbb{R}\) be a given mapping. We say that \(\|\cdot ,\cdot \|\) is a 2-norm on X if the following conditions are satisfied for all \(x,y,z\in X\) and \(\lambda \in \mathbb{R}\):

(\(N_{1}\)):

\(\|x,y\|=0\) if and only if x and y are linearly dependent;

(\(N_{2}\)):

\(\|x,y\|=\|y,x\|\);

(\(N_{3}\)):

\(\|\lambda x,y\|=|\lambda |\|x,y\|\);

(\(N_{4}\)):

\(\|x,y+z\|\leq \|x,y\|+\|x,z\|\).

In this case, the pair \((X,\|\cdot ,\cdot \|)\) is said to be a 2-normed space.

We now give some remarks following from (\(N_{1}\))–(\(N_{4}\)):

  • By (\(N_{2}\)) and (\(N_{3}\)), for all \(x,y\in X\) and \(\lambda ,\mu \in \mathbb{R}\), we have

    $$ \Vert \lambda x,\mu y \Vert = \vert \lambda \vert \vert \mu \vert \Vert x,y \Vert = \Vert \mu x,\lambda y \Vert . $$
  • If \(\|\cdot ,\cdot \|\) is a 2-norm on X, then the mapping \(D: X\times X\times X\to \mathbb{R}\) defined by

    $$ D(x,y,z)= \Vert x-z,y-z \Vert ,\quad x,y,z\in X, $$
    (1.3)

    is a 2-metric on X. Note that if \(L=1\), then condition (\(D_{1}\)) is not satisfied by D. Namely, by (\(N_{1}\)), if \(X=\operatorname{span}\{a\}\), \(a\in X\), then for all \(x,y,z\in X\), there exist \(\lambda ,\mu ,\gamma \in \mathbb{R}\) such that

    $$ D(x,y,z)=D(\lambda a,\mu a,\gamma a)= \bigl\Vert (\lambda -\gamma )a,(\mu - \gamma )a \bigr\Vert = \bigl\vert (\lambda -\gamma ) (\mu -\gamma ) \bigr\vert \Vert a,a \Vert =0. $$
  • From the above remark and the positivity of D we deduce that

    $$ \Vert x,y \Vert \geq 0,\quad x,y\in X. $$
  • Let \(x,y,z\in X\) and \(\lambda _{1},\lambda _{2}\in \mathbb{R}\). By (\(N_{2}\))–(\(N_{4}\)) we have

    $$\begin{aligned} \Vert \lambda _{1}x+\lambda _{2}y,z \Vert =& \Vert z, \lambda _{1}x+\lambda _{2}y \Vert \\ \leq & \Vert z,\lambda _{1} x \Vert + \Vert z,\lambda _{2} y \Vert \\ =& \vert \lambda _{1} \vert \Vert x,z \Vert + \vert \lambda _{2} \vert \Vert y,z \Vert . \end{aligned}$$

    Hence by induction we deduce that if \(x_{i},z\in X\) and \(\lambda _{i}\in \mathbb{R}\), \(i=1,2,\dots ,m\), then

    $$ \Vert \lambda _{1}x_{1}+\lambda _{2}x_{2}+\cdots +\lambda _{m}x_{m},z \Vert \leq \sum_{i=1}^{m} \vert \lambda _{i} \vert \Vert x_{i},z \Vert . $$
    (1.4)
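
To illustrate (1.4) concretely, here is a short numerical check (ours) using the standard cross-product 2-norm \(\|x,y\|=\|x\times y\|_{2}\) on \(\mathbb{R}^{3}\); both the choice of this particular 2-norm and the use of NumPy are assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def two_norm(x, y):
    """The cross-product 2-norm ||x, y|| = ||x x y||_2 on R^3."""
    return np.linalg.norm(np.cross(x, y))

for _ in range(500):
    m = int(rng.integers(2, 6))
    xs = rng.normal(size=(m, 3))        # x_1, ..., x_m
    lam = rng.normal(size=m)            # lambda_1, ..., lambda_m
    z = rng.normal(size=3)
    lhs = two_norm((lam[:, None] * xs).sum(axis=0), z)
    rhs = sum(abs(lam[i]) * two_norm(xs[i], z) for i in range(m))
    assert lhs <= rhs + 1e-9            # inequality (1.4)
print("inequality (1.4) holds on the sampled data")
```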

For more details about 2-metric spaces and 2-normed linear spaces, see, for example, [2–5, 8, 9, 11, 13–17] and the references therein.

2 Results and proofs

In this section, we state and prove our main results and provide some interesting consequences.

Theorem 2.1

Let \((X,D)\) be a 2-metric space. Then, for all \(n\in \mathbb{N}\), \(n\geq 3\), \((p_{1},p_{2},\dots ,p_{n})\in \Pi _{n}\), and \(\{x_{i}\}_{i=1}^{n}\subset X\),

$$ \sum_{i=1}^{n-2}\sum _{j=i+1}^{n-1}\sum _{k=j+1}^{n} p_{i}p_{j} p_{k}D(x_{i},x_{j},x_{k}) \leq \inf_{x\in X}\sum_{i=1}^{n-1} \sum_{j=i+1}^{n}p_{i}p_{j} D(x,x_{i},x_{j}). $$
(2.1)

Moreover, the inequality is optimal in the sense that the multiplicative coefficient \(C=1\) on the right-hand side of (2.1) (in front of inf) cannot be replaced by a smaller real number.

Proof

Let \(n\in \mathbb{N}\), \(n\geq 3\), \((p_{1},p_{2},\dots ,p_{n})\in \Pi _{n}\), and \(\{x_{i}\}_{i=1}^{n}\subset X\). Let x be an arbitrary element of X. For all \(i,j,k\in \{1,2,\dots ,n\}\), we have

$$ D(x_{i},x_{j},x_{k})\leq D(x,x_{j},x_{k})+D(x_{i},x,x_{k})+D(x_{i},x_{j},x). $$

Multiplying this inequality by \(p_{i}p_{j}p_{k}\) and summing over i, j, and k from 1 to n, we obtain

$$ \sum_{i=1}^{n}\sum _{j=1}^{n}\sum _{k=1}^{n} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k}) \leq A+B+C, $$
(2.2)

where

$$ A= \sum_{i=1}^{n}\sum _{j=1}^{n}\sum_{k=1}^{n} p_{i}p_{j}p_{k} D(x,x_{j},x_{k}), \qquad B= \sum_{i=1}^{n}\sum _{j=1}^{n}\sum_{k=1}^{n} p_{i}p_{j}p_{k} D(x_{i},x,x_{k}) $$

and

$$ C= \sum_{i=1}^{n}\sum _{j=1}^{n}\sum_{k=1}^{n} p_{i}p_{j}p_{k} D(x_{i},x_{j},x). $$

Since \(\sum_{i=1}^{n} p_{i}=1\), by the symmetry of D we deduce that

$$ A=B=C=\sum_{i=1}^{n} \sum_{j=1}^{n} p_{i}p_{j} D(x,x_{i},x_{j}). $$
(2.3)

On the other hand, by (\(D_{2}\))–(\(D_{3}\)) we have

$$ \begin{aligned}\sum_{i=1}^{n} \sum_{j=1}^{n} p_{i}p_{j} D(x,x_{i},x_{j}) &=\sum_{i< j} p_{i}p_{j} D(x,x_{i},x_{j})+\sum_{j< i} p_{i}p_{j} D(x,x_{i},x_{j}) \\ & =2 \sum_{i< j} p_{i}p_{j} D(x,x_{i},x_{j}), \end{aligned} $$

that is,

$$ \sum_{i=1}^{n}\sum _{j=1}^{n} p_{i}p_{j} D(x,x_{i},x_{j})=2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} p_{i}p_{j}D(x,x_{i},x_{j}). $$
(2.4)

Similarly, we have

$$ \begin{aligned} &\sum_{i=1}^{n} \sum_{j=1}^{n}\sum _{k=1}^{n} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k}) \\ &\quad =\sum_{i< j< k} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k})+\sum _{i< k< j} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k})+\sum _{j< i< k} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k}) \\ &\qquad{} +\sum_{j< k< i} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k})+ \sum _{k< i< j} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k})+\sum _{k< j< i} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k}) \\ &\quad =6\sum_{i< j< k} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k}), \end{aligned} $$

that is,

$$ \sum_{i=1}^{n}\sum _{j=1}^{n}\sum _{k=1}^{n} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k})=6 \sum _{i=1}^{n-2}\sum_{j=i+1}^{n-1} \sum_{k=j+1}^{n} p_{i}p_{j}p_{k} D(x_{i},x_{j},x_{k}). $$
(2.5)

Hence, using (2.2), (2.3), (2.4), and (2.5), we obtain

$$ \sum_{i=1}^{n-2}\sum _{j=i+1}^{n-1}\sum_{k=j+1}^{n} p_{i}p_{j} p_{k}D(x_{i},x_{j},x_{k}) \leq \sum_{i=1}^{n-1}\sum _{j=i+1}^{n}p_{i}p_{j} D(x,x_{i},x_{j}). $$

Since this inequality holds for all \(x\in X\), we deduce (2.1).

Suppose now that there exists a constant \(C>0\) such that

$$ \sum_{i=1}^{n-2}\sum _{j=i+1}^{n-1}\sum _{k=j+1}^{n} p_{i}p_{j} p_{k}D(x_{i},x_{j},x_{k}) \leq C \inf_{x\in X}\sum_{i=1}^{n-1} \sum_{j=i+1}^{n}p_{i}p_{j} D(x,x_{i},x_{j}) $$
(2.6)

for all \(n\in \mathbb{N}\), \(n\geq 3\), \((p_{1},p_{2},\dots ,p_{n})\in \Pi _{n}\), and \(\{x_{i}\}_{i=1}^{n}\subset X\). Taking \(n=3\) in (2.6), we obtain

$$ p_{1}p_{2}p_{3} D(x_{1},x_{2},x_{3}) \leq C \bigl[p_{1}p_{2}D(x,x_{1},x_{2})+p_{1}p_{3}D(x,x_{1},x_{3})+p_{2}p_{3}D(x,x_{2},x_{3}) \bigr] $$

for all \((p_{1},p_{2},p_{3})\in \Pi _{3}\), \(\{x_{i}\}_{i=1}^{3}\subset X\), and \(x\in X\). Assume that X contains at least two distinct points. Then, by (\(D_{1}\)) and the nonnegativity of D, we may choose \(x_{1},x_{2},x_{3}\in X\) such that \(D(x_{1},x_{2},x_{3})>0\). In particular, for \(x=x_{1}\) and \((p_{1},p_{2},p_{3})=(2\varepsilon -1,1-\varepsilon ,1-\varepsilon )\), \(\frac{1}{2}<\varepsilon <1\), by (\(D_{2}\)) we obtain

$$ (2\varepsilon -1) (1-\varepsilon )^{2} D(x_{1},x_{2},x_{3}) \leq C (1- \varepsilon )^{2} D(x_{1},x_{2},x_{3}), $$

which, after dividing by \((1-\varepsilon )^{2}D(x_{1},x_{2},x_{3})>0\), yields

$$ 2\varepsilon -1\leq C,\quad \frac{1}{2}< \varepsilon < 1. $$

Passing to the limit as \(\varepsilon \to 1^{-}\), we get that \(C\geq 1\), which proves the sharpness of (2.1). □
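
The following Python sketch (ours) spot-checks (2.1) numerically, taking for D the triangle-area 2-metric of Example 1.1 with N = 3 and probing the infimum only at sampled points x; NumPy is assumed, and the check is illustrative rather than a proof.

```python
import numpy as np

rng = np.random.default_rng(3)

def D(a, b, c):
    """Triangle-area 2-metric of Example 1.1 on R^3."""
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

for _ in range(100):
    n = int(rng.integers(3, 7))
    xs = rng.normal(size=(n, 3))
    p = rng.random(n); p /= p.sum()
    lhs = sum(p[i] * p[j] * p[k] * D(xs[i], xs[j], xs[k])
              for i in range(n - 2) for j in range(i + 1, n - 1)
              for k in range(j + 1, n))
    for x in rng.normal(size=(50, 3)):        # samples standing in for inf_x
        rhs = sum(p[i] * p[j] * D(x, xs[i], xs[j])
                  for i in range(n - 1) for j in range(i + 1, n))
        assert lhs <= rhs + 1e-9              # inequality (2.1)
print("inequality (2.1) holds on the sampled data")
```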

Corollary 2.1

Let \((X,D)\) be a 2-metric space. Then, for all \(n\in \mathbb{N}\), \(n\geq 3\), and \(\{x_{i}\}_{i=1}^{n}\subset X\),

$$ \sum_{i=1}^{n-2}\sum _{j=i+1}^{n-1}\sum _{k=j+1}^{n} D(x_{i},x_{j},x_{k}) \leq n \inf_{x\in X}\sum_{i=1}^{n-1} \sum_{j=i+1}^{n} D(x,x_{i},x_{j}). $$
(2.7)

Proof

By (2.1) with

$$ p_{i}=\frac{1}{n},\quad i\in \{1,2,\dots ,n\}, $$

(2.7) follows. □

Corollary 2.1 has the following geometric interpretation.

Corollary 2.2

Let \(n\in \mathbb{N}\), \(n\geq 3\), and let \(A_{1},A_{2},\dots ,A_{n}, A\) be \(n+1\) points of \(\mathbb{R}^{N}\), \(N\geq 2\). Then the sum of the areas of all triangles with vertices in the set \(\{A_{i}:\, i=1,2,\dots ,n\}\) is at most n times the sum of the areas of all triangles having A as one vertex and the other two vertices in the set \(\{A_{i}:\, i=1,2,\dots ,n\}\).

Proof

The result follows immediately from Corollary 2.1 by taking \(X=\mathbb{R}^{N}\) and D, the 2-metric defined by (1.2). □
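
A hands-on way to see Corollary 2.2 at work is the following Python check (ours), with triangle areas computed in \(\mathbb{R}^{3}\); it merely samples random configurations and an arbitrary extra point A, so it is a sanity check rather than a proof.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)

def area(a, b, c):
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

for _ in range(200):
    n = int(rng.integers(3, 8))
    A_pts = rng.normal(size=(n, 3))     # A_1, ..., A_n
    A = rng.normal(size=3)              # the extra point A
    total = sum(area(*t) for t in itertools.combinations(A_pts, 3))
    anchored = sum(area(A, P, Q) for P, Q in itertools.combinations(A_pts, 2))
    assert total <= n * anchored + 1e-9   # Corollary 2.2 / inequality (2.7)
print("the sampled configurations satisfy Corollary 2.2")
```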

Corollary 2.3

Let \((X,D)\) be a 2-metric space, \(n\in \mathbb{N}\), \(n\geq 3\), \((p_{1},p_{2},\dots ,p_{n})\in \Pi _{n}\), and \(\{x_{i}\}_{i=1}^{n}\subset X\). Let \(x\in X\) be such that

$$ D(x,x_{i},x_{j})\leq r,\quad i,j\in \{1,2,\dots ,n\}, $$
(2.8)

for some \(r>0\). Then

$$ \sum_{i=1}^{n-2}\sum _{j=i+1}^{n-1}\sum _{k=j+1}^{n} p_{i}p_{j} p_{k}D(x_{i},x_{j},x_{k}) \leq \Biggl(\sum_{i=1}^{n-1}\sum _{j=i+1}^{n}p_{i}p_{j} \Biggr) r. $$
(2.9)

Proof

By (2.1) we have

$$ \sum_{i=1}^{n-2}\sum _{j=i+1}^{n-1}\sum _{k=j+1}^{n} p_{i}p_{j} p_{k}D(x_{i},x_{j},x_{k}) \leq \sum_{i=1}^{n-1}\sum _{j=i+1}^{n}p_{i}p_{j} D(x,x_{i},x_{j}). $$
(2.10)

On the other hand, using (2.8), we obtain

$$ \sum_{i=1}^{n-1}\sum _{j=i+1}^{n}p_{i}p_{j} D(x,x_{i},x_{j}) \leq r \sum _{i=1}^{n-1}\sum_{j=i+1}^{n}p_{i}p_{j}. $$
(2.11)

Combining (2.10) with (2.11), (2.9) follows. □

Corollary 2.4

Let X be a linear space over \(\mathbb{R}\) of dimension \(1< L\leq \infty \), and let \(\|\cdot ,\cdot \|\) be a 2-norm on X. Then, for all \(n\in \mathbb{N}\), \(n\geq 3\), \((p_{1},p_{2},\dots ,p_{n})\in \Pi _{n}\), and \(\{x_{i}\}_{i=1}^{n}\subset X\),

$$ \sum_{i=1}^{n-2}\sum _{j=i+1}^{n-1}\sum _{k=j+1}^{n} p_{i}p_{j} p_{k} \Vert x_{i}-x_{k},x_{j}-x_{k} \Vert \leq \inf_{x\in X}\sum_{i=1}^{n-1} \sum_{j=i+1}^{n}p_{i}p_{j} \Vert x-x_{j},x_{i}-x_{j} \Vert . $$
(2.12)

Moreover, the inequality is optimal in the sense that the multiplicative coefficient \(C=1\) on the right-hand side of (2.12) (in front of inf) cannot be replaced by a smaller real number.

Proof

Consider the 2-metric D on X defined by (1.3). Then (2.12) follows by (2.1). □

Theorem 2.2

Let X be a linear space over \(\mathbb{R}\) of dimension \(1< L\leq \infty \), and let \(\|\cdot ,\cdot \|\) be a 2-norm on X. Then, for all \(n\in \mathbb{N}\), \(n\geq 3\), \((p_{1},p_{2},\dots ,p_{n})\in \Pi _{n}\), and \(\{x_{i}\}_{i=1}^{n}\subset X\),

$$ \frac{1}{6}\sum_{i=1}^{n} \sum_{j=1}^{n} p_{i}p_{j} \Vert x_{p}-x_{i},x_{j}-x_{i} \Vert \leq \rho _{n} \leq \sum_{i=1}^{n-1} \sum_{j=i+1}^{n}p_{i}p_{j} \Vert x_{p}-x_{j},x_{i}-x_{j} \Vert , $$
(2.13)

where

$$ \rho _{n}=\sum_{i=1}^{n-2} \sum_{j=i+1}^{n-1}\sum _{k=j+1}^{n} p_{i}p_{j} p_{k} \Vert x_{i}-x_{k},x_{j}-x_{k} \Vert ,\quad x_{p}=\displaystyle \sum _{i=1}^{n} p_{i}x_{i}. $$

Proof

Using (2.12) with \(x=x_{p}\), we obtain

$$ \rho _{n} \leq \sum_{i=1}^{n-1} \sum_{j=i+1}^{n}p_{i}p_{j} \Vert x_{p}-x_{j},x_{i}-x_{j} \Vert . $$
(2.14)

By (2.5) we have

$$ \rho _{n}= \frac{1}{6} \sum _{i=1}^{n}\sum_{j=1}^{n} \sum_{k=1}^{n} p_{i}p_{j}p_{k} \Vert x_{i}-x_{k},x_{j}-x_{k} \Vert . $$
(2.15)

On the other hand, using (\(N_{2}\)) and (\(N_{3}\)), we obtain

$$ \sum_{i=1}^{n}\sum _{j=1}^{n}\sum _{k=1}^{n} p_{i}p_{j}p_{k} \Vert x_{i}-x_{k},x_{j}-x_{k} \Vert =\sum_{k=1}^{n} \sum _{i=1}^{n} p_{k}p_{i} \sum_{j=1}^{n} \bigl\Vert p_{j}(x_{j}-x_{k}),x_{i}-x_{k} \bigr\Vert . $$
(2.16)

Next, by (1.4) we have that

$$\begin{aligned} \sum_{j=1}^{n} \bigl\Vert p_{j}(x_{j}-x_{k}),x_{i}-x_{k} \bigr\Vert \geq & \Biggl\Vert \sum_{j=1}^{n} p_{j}(x_{j}-x_{k}),x_{i}-x_{k} \Biggr\Vert \\ =& \Vert x_{p}-x_{k},x_{i}-x_{k} \Vert . \end{aligned}$$
(2.17)

Hence it follows from (2.15), (2.16), and (2.17) that

$$ \rho _{n} \geq \frac{1}{6} \sum _{k=1}^{n} \sum_{i=1}^{n} p_{k}p_{i} \Vert x_{p}-x_{k},x_{i}-x_{k} \Vert =\frac{1}{6}\sum_{i=1}^{n} \sum_{j=1}^{n} p_{i}p_{j} \Vert x_{p}-x_{i},x_{j}-x_{i} \Vert . $$
(2.18)

Finally, (2.13) follows from (2.14) and (2.18). □
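
The two-sided bound (2.13) can be checked numerically as follows; this is our sketch, assuming NumPy and the cross-product 2-norm \(\|x,y\|=\|x\times y\|_{2}\) on \(\mathbb{R}^{3}\), with \(x_{p}\) computed as the weighted barycenter \(\sum_{i}p_{i}x_{i}\).

```python
import numpy as np

rng = np.random.default_rng(5)

def two_norm(x, y):
    return np.linalg.norm(np.cross(x, y))

for _ in range(200):
    n = int(rng.integers(3, 7))
    xs = rng.normal(size=(n, 3))
    p = rng.random(n); p /= p.sum()
    xp = (p[:, None] * xs).sum(axis=0)        # x_p = sum_i p_i x_i
    rho = sum(p[i] * p[j] * p[k] * two_norm(xs[i] - xs[k], xs[j] - xs[k])
              for i in range(n - 2) for j in range(i + 1, n - 1)
              for k in range(j + 1, n))
    lower = sum(p[i] * p[j] * two_norm(xp - xs[i], xs[j] - xs[i])
                for i in range(n) for j in range(n)) / 6
    upper = sum(p[i] * p[j] * two_norm(xp - xs[j], xs[i] - xs[j])
                for i in range(n - 1) for j in range(i + 1, n))
    assert lower - 1e-9 <= rho <= upper + 1e-9    # bounds (2.13)
print("the bounds (2.13) hold on the sampled data")
```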

For our next result, we need some notation.

Given three points \(A,B,C\in \mathbb{R}^{N}\), \(N\geq 2\), we denote by \(\bigtriangleup (A,B,C)\) the area of the triangle with vertices A, B, and C.

Let \(n\in \mathbb{N}\), \(n\geq 3\). For n points \(A_{1},A_{2},\dots , A_{n}\in \mathbb{R}^{N}\), let

$$ \mathcal{S}(A_{1},A_{2},\dots ,A_{n})=\sum _{i=1}^{n} \bigtriangleup (A_{i},A_{i+1},A_{i+2}), \quad A_{n+1}=A_{1},\quad A_{n+2}=A_{2}. $$

We introduce the set

$$ \Lambda _{n}= \bigl\{ \{A_{1},A_{2},\dots ,A_{n}\}\subset \mathbb{R}^{N}: \,\mathcal{S}(A_{1},A_{2}, \dots ,A_{n})=1 \bigr\} $$

and the quantity

$$ \alpha _{n}=\inf_{\{A_{1},A_{2},\dots ,A_{n}\}\in \Lambda _{n}} \sum _{i=1}^{n-2}\sum_{j=i+1}^{n-1} \sum_{k=j+1}^{n}\bigtriangleup (A_{i},A_{j},A_{k}). $$

Theorem 2.3

For all \(n\in \mathbb{N}\), \(n\geq 3\), we have that \(\alpha _{n} \geq \frac{n}{18}\).

Proof

First, for all \(A,B,C\in \mathbb{R}^{N}\), we have

$$ \bigtriangleup (A,B,C)=D(A,B,C), $$

where D is the 2-metric defined by (1.2). On the other hand, given \(\{A_{1},A_{2},\dots ,A_{n}\}\in \Lambda _{n}\), for all \(j\in \{1,2,\dots ,n\}\), by (\(D_{4}\)), we have

$$ D(A_{j},A_{j+1},A_{j+2})\leq D(P,A_{j+1},A_{j+2})+D(A_{j},P,A_{j+2})+D(A_{j},A_{j+1},P) $$

for all \(P\in \{A_{1},A_{2},\dots ,A_{n}\}\). Taking the sum over j from 1 to n, we get that

$$ \mathcal{S}(A_{1},A_{2},\dots ,A_{n})\leq \sum_{j=1}^{n} D(P,A_{j+1},A_{j+2})+ \sum_{j=1}^{n} D(A_{j},P,A_{j+2})+ \sum_{j=1}^{n} D(A_{j},A_{j+1},P), $$

that is,

$$ 1\leq \sum_{j=1}^{n} D(P,A_{j+1},A_{j+2})+\sum_{j=1}^{n} D(A_{j},P,A_{j+2})+ \sum_{j=1}^{n} D(A_{j},A_{j+1},P). $$
(2.19)

Notice that

$$\begin{aligned} \sum_{j=1}^{n} D(P,A_{j+1},A_{j+2}) =& \sum_{j=2}^{n+1} D(P,A_{j},A_{j+1}) \\ =& \sum_{j=1}^{n} D(P,A_{j},A_{j+1})-D(P,A_{1},A_{2})+D(P,A_{n+1},A_{n+2}) \\ =&\sum_{j=1}^{n} D(P,A_{j},A_{j+1})-D(P,A_{1},A_{2})+D(P,A_{1},A_{2}) \\ =&\sum_{j=1}^{n} D(P,A_{j},A_{j+1}). \end{aligned}$$

Hence by (2.19) we obtain

$$ 1\leq 2 \sum_{j=1}^{n} D(P,A_{j},A_{j+1})+\sum_{j=1}^{n} D(P,A_{j},A_{j+2}). $$
(2.20)

On the other hand, we have

$$ \sum_{j=1}^{n} D(P,A_{j},A_{j+1})\leq \sum_{j=1}^{n} \sum_{k=1}^{n} D(P,A_{j},A_{k}) $$
(2.21)

and

$$ \sum_{j=1}^{n} D(P,A_{j},A_{j+2}) \leq \sum _{j=1}^{n}\sum_{k=1}^{n} D(P,A_{j},A_{k}). $$
(2.22)

Therefore, using (2.20), (2.21), and (2.22), we get that

$$ 1\leq 3 \sum_{j=1}^{n}\sum _{k=1}^{n} D(P,A_{j},A_{k}). $$

Next, taking the sum over \(P\in \{A_{1},A_{2},\dots ,A_{n}\}\), we obtain

$$ n\leq 3 \sum_{i=1}^{n} \sum_{j=1}^{n}\sum _{k=1}^{n} D(A_{i},A_{j},A_{k}). $$
(2.23)

Notice that by (2.5) we have

$$ \sum_{i=1}^{n} \sum _{j=1}^{n}\sum _{k=1}^{n} D(A_{i},A_{j},A_{k})=6 \sum_{i=1}^{n-2}\sum _{j=i+1}^{n-1}\sum_{k=j+1}^{n} D(A_{i},A_{j},A_{k}). $$
(2.24)

Combining (2.23) with (2.24), we deduce that

$$ n\leq 18 \sum_{i=1}^{n-2}\sum _{j=i+1}^{n-1}\sum_{k=j+1}^{n} D(A_{i},A_{j},A_{k}), $$

which yields the desired estimate. □
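
The following Python sketch (ours) illustrates Theorem 2.3 in \(\mathbb{R}^{3}\): random configurations are rescaled so that \(\mathcal{S}(A_{1},\dots ,A_{n})=1\) (areas scale quadratically under dilation), and the total area over all triples is compared with \(n/18\). It only samples configurations and does not certify the infimum \(\alpha _{n}\); NumPy is assumed.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)

def area(a, b, c):
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def S(pts):
    n = len(pts)
    return sum(area(pts[i], pts[(i + 1) % n], pts[(i + 2) % n]) for i in range(n))

for _ in range(200):
    n = int(rng.integers(3, 9))
    pts = rng.normal(size=(n, 3))
    s = S(pts)
    if s < 1e-8:
        continue                      # skip (numerically) degenerate configurations
    pts = pts / np.sqrt(s)            # now S(pts) = 1, since areas scale like length^2
    total = sum(area(*t) for t in itertools.combinations(pts, 3))
    assert total >= n / 18 - 1e-9     # Theorem 2.3
print("the bound alpha_n >= n/18 is consistent with the sampled configurations")
```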

3 Conclusion

We obtained new inequalities in the setting of 2-metric spaces and 2-normed linear spaces. Namely, we first derived an analogue of Theorem 1.1 for 2-metric spaces (see Theorem 2.1) and provided a geometric interpretation of the obtained result (see Corollary 2.2). We also presented several consequences of Theorem 2.1. Finally, we considered a problem related to estimates of areas of triangles and derived a new inequality (see Theorem 2.3).