1 Introduction and preliminaries

It is a natural trend in fixed point theory to relax the standard metric structure to a weaker one. One of the interesting extensions of the notion of a metric space is the concept of a b-metric space, which was introduced by Czerwik [8].

Definition 1.1

([8])

Let X be a nonempty set and \(s\geq 1\) a given real number. A mapping \(d \colon X \times X\to [0, \infty )\) is said to be a b-metric if for all \(x, y, z \in X\) the following conditions are satisfied:

\((bM_{1})\) :

\(d(x, y) =0\) if and only if \(x = y\);

\((bM_{2})\) :

\(d(x, y) = d(y,x)\) (symmetry);

\((bM_{3})\) :

\(d(x, z)\leq s[d(x, y) + d(y, z)]\) (b-triangle inequality).

In this case, the pair \((X, d)\) is called a b-metric space (with constant s).

Clearly, any metric space is a b-metric space (with constant \(s=1\)).

Example 1.2

([10])

Let \(X= [ 0,1 ] \) and let \(d:X\times X\longrightarrow {}[ 0,\infty )\) be defined by \(d ( x,y ) = ( x-y ) ^{2}\). Then, clearly, \(( X,d ) \) is a b-metric space with \(s=2\).
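
Indeed, for all \(x,y,z\in X\), the elementary inequality \(( a+b ) ^{2}\leq 2 ( a^{2}+b^{2} ) \) gives

$$ d ( x,z ) = ( x-z ) ^{2}= \bigl( ( x-y ) + ( y-z ) \bigr) ^{2}\leq 2 \bigl[ ( x-y ) ^{2}+ ( y-z ) ^{2} \bigr] =2 \bigl[ d ( x,y ) +d ( y,z ) \bigr] . $$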

The following is another constructive example of b-metric.

Example 1.3

([1])

Let \(X=\{x_{i}: 1\leq i\leq M\}\) for some \(M \in \mathbb{N}\) and let \(s\geq 2\). Define \(d: X\times X\to [0,\infty )\) as

$$ d(x_{i},x_{j})= \textstyle\begin{cases} 0 & \text{if } i=j, \\ s & \text{if } (i,j)=(1,2) \text{ or } (i,j)=(2,1), \\ 1 & \text{otherwise.} \end{cases} $$

Consequently, one can verify that

$$ d(x_{i},x_{j}) \leq \frac{s}{2}\bigl[d(x_{i},x_{k}) +d(x_{k},x_{j}) \bigr], $$

for all \(i,j,k \in \{1,2,\dots ,M\}\). Thus, \((X,d)\) is a b-metric space with constant \(\frac{s}{2}\), and for \(s >2\) the ordinary triangle inequality does not hold.
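
For instance, if \(M\geq 3\) and \(s>2\), then

$$ d ( x_{1},x_{2} ) =s>2=d ( x_{1},x_{3} ) +d ( x_{3},x_{2} ) , $$

so the ordinary triangle inequality fails, whereas \(d ( x_{1},x_{2} ) =s=\frac{s}{2} [ d ( x_{1},x_{3} ) +d ( x_{3},x_{2} ) ] \) shows that the constant \(\frac{s}{2}\) in the b-triangle inequality is attained.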

For more examples of b-metrics, we refer, e.g., to [1,2,3,4,5,6,7, 9, 12] and the references therein.

Example 1.4

(see, e.g., [6])

The space \(L^{p}[0,1]\) (where \(0< p<1\)) of all real functions \(x(t)\), \(t\in [0,1]\) such that \(\int _{0}^{1} |x(t)|^{p} \,dt<\infty \), together with the functional

$$ d(x,y):= \biggl( \int _{0}^{1} \bigl\vert x(t)-y(t) \bigr\vert ^{p} \,dt \biggr)^{1/p}, \quad \text{for each } x,y\in L^{p}[0,1], $$

is a b-metric space. Notice that \(s=2^{1/p}\).
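
A short sketch of why this constant suffices: using the elementary inequalities \(( a+b ) ^{p}\leq a^{p}+b^{p}\) for \(a,b\geq 0\) and \(0< p<1\), and \(A+B\leq 2\max \{ A,B \} \), we obtain, for all \(x,y,z\in L^{p}[0,1]\),

$$ d ( x,z ) \leq \biggl( \int _{0}^{1} \bigl\vert x(t)-y(t) \bigr\vert ^{p} \,dt+ \int _{0}^{1} \bigl\vert y(t)-z(t) \bigr\vert ^{p} \,dt \biggr) ^{1/p}\leq 2^{1/p}\max \bigl\{ d ( x,y ) ,d ( y,z ) \bigr\} \leq 2^{1/p} \bigl[ d ( x,y ) +d ( y,z ) \bigr] . $$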

2 Main result

We start this section by recalling an interesting inequality that was proposed by Dragomir and Gosa in [11]. In what follows we investigate their inequality in the setting of a more general structure, namely that of b-metric spaces.

Theorem 2.1

Let \(( X,d ) \) be a b-metric space with constant \(s\geq 1\), and \(x_{i}\in X\), \(p_{i}\geq 0\) (\(i\in \{ 1,2, \dots ,n \} \)) with \(\sum_{i=1}^{n}p_{i}= \frac{1}{s}\). Then we have

$$ \sum_{1\leq i< j\leq n}p_{i}p_{j}d ( x_{i},x_{j} ) \leq \inf_{x\in X} \Biggl[ \sum_{i=1}^{n}p_{i}d ( x _{i},x ) \Biggr]. $$
(1)

The inequality is sharp in the sense that the constant \(c=1\) in front of the infimum cannot be replaced by a smaller constant.

Proof

Using the b-triangle inequality, for any \(x\in X\), \(i,j\in \{ 1,2,\dots ,n \} \) we have

$$ d ( x_{i},x_{j} ) \leq s \bigl[ d ( x_{i},x ) +d ( x,x_{j} ) \bigr]. $$
(2)

If we multiply (2) by \(p_{i}p_{j}\) and sum over i and j from 1 to n, we get

$$ \sum_{i,j=1}^{n}p_{i}p_{j}d ( x_{i},x_{j} ) \leq s \Biggl[ \sum _{i,j=1}^{n}p_{i}p_{j} \bigl[ d ( x_{i},x ) +d ( x,x_{j} ) \bigr] \Biggr]. $$

Note that by symmetry we have

$$ \sum_{i,j=1}^{n}p_{i}p_{j}d ( x_{i},x_{j} ) =2 \sum_{1\leq i< j\leq n}p_{i}p_{j}d ( x_{i},x_{j} ). $$
(3)

Now, using the condition \(\sum_{i=1}^{n}p_{i}=\frac{1}{s}\), we can easily deduce that

$$ \sum_{i,j=1}^{n}p_{i}p_{j} \bigl[ d ( x_{i},x ) +d ( x,x_{j} ) \bigr] =\frac{2}{s} \sum_{i=1}^{n}p _{i}d ( x_{i},x ). $$
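
Indeed, the double sum splits as

$$ \sum_{i,j=1}^{n}p_{i}p_{j} \bigl[ d ( x_{i},x ) +d ( x,x_{j} ) \bigr] = \Biggl( \sum_{j=1}^{n}p_{j} \Biggr) \sum_{i=1}^{n}p_{i}d ( x_{i},x ) + \Biggl( \sum_{i=1}^{n}p_{i} \Biggr) \sum_{j=1}^{n}p_{j}d ( x,x_{j} ) , $$

where each factor in parentheses equals \(\frac{1}{s}\) and the two remaining sums coincide by the symmetry \((bM_{2})\).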

So, from (3) we have

$$ \begin{aligned} \sum_{1\leq i< j\leq n}p_{i}p_{j}d ( x_{i},x_{j} ) &= \frac{1}{2}\sum _{i,j=1}^{n}p_{i}p_{j}d ( x_{i},x_{j} ) \\ &\leqslant \frac{s}{2} \Biggl[ \sum_{i,j=1}^{n}p_{i}p_{j} \bigl[ d ( x_{i},x ) +d ( x,x_{j} ) \bigr] \Biggr] \\ &=\sum_{i=1}^{n}p_{i}d ( x_{i},x ). \end{aligned} $$

Therefore,

$$ \sum_{1\leq i< j\leq n}p_{i}p_{j}d ( x_{i},x_{j} ) \leq \sum_{i=1}^{n}p_{i}d ( x_{i},x ) $$

for any \(x\in X\). Using the fact that the infimum is the greatest lower bound, we deduce (1).

Now, suppose that there exists \(c>0\) such that

$$ \sum_{1\leq i< j\leq n}p_{i}p_{j}d ( x_{i},x_{j} ) \leq c \inf_{x\in X} \Biggl[ \sum_{i=1}^{n}p_{i}d ( x_{i},x ) \Biggr]; $$

and choose \(s=1\) (recall that any metric space is a b-metric space with constant \(s=1\)), two distinct points \(x_{1}\neq x_{2}\), \(n=2\), \(p_{1}=p\) and \(p_{2}=1-p\) where \(p\in (0,1)\), so that \(p_{1}+p_{2}=\frac{1}{s}=1\). Then,

$$ p(1-p)d ( x_{1},x_{2} )\leq c \bigl[ pd ( x_{1},x )+(1-p)d ( x,x_{2} ) \bigr]. $$
(4)

If we let \(x=x_{1}\) in (4), we get

$$ p(1-p)d ( x_{1},x_{2} )\leq c(1-p)d ( x_{1},x_{2} ). $$

Since \(d ( x_{1},x_{2} )>0\) and \(1-p>0\), we obtain \(p\leq c\) for every \(p\in (0,1)\). Letting \(p\rightarrow 1^{-}\), that is, using the fact that the supremum is the least upper bound, we deduce that \(c\geq 1\). □
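
As an illustration only (and not part of the proof), the following short sketch numerically checks inequality (1) for the b-metric \(d ( x,y ) = ( x-y ) ^{2}\) of Example 1.2 on \(X=[0,1]\) with \(s=2\); the points, the weights, and the grid used to approximate the infimum are arbitrary choices made for this example.

```python
import random

# Numerical sanity check (illustration only) of inequality (1) for the
# b-metric d(x, y) = (x - y)^2 on X = [0, 1] from Example 1.2, where s = 2.

def d(x, y):
    return (x - y) ** 2

s = 2.0
random.seed(0)
n = 5
xs = [random.random() for _ in range(n)]         # points x_1, ..., x_n in [0, 1]
raw = [random.random() for _ in range(n)]
ps = [w / (s * sum(raw)) for w in raw]           # weights p_i >= 0 with sum p_i = 1/s

# Left-hand side of (1): sum over i < j of p_i * p_j * d(x_i, x_j).
lhs = sum(ps[i] * ps[j] * d(xs[i], xs[j])
          for i in range(n) for j in range(i + 1, n))

# Right-hand side of (1): inf over x in [0, 1] of sum_i p_i * d(x_i, x),
# approximated here by a grid search over 1001 equally spaced points.
rhs = min(sum(p * d(xi, k / 1000.0) for p, xi in zip(ps, xs))
          for k in range(1001))

print(lhs <= rhs)   # expected output: True
```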

The following corollary is a generalization of Corollary 1 in [11] to the case of a b-metric space.

Corollary 2.2

Let \(( X,d ) \) be a b-metric space with constant \(s\geq 1\) and let \(x_{i}\in X\), \(i\in \{ 1,2,\dots ,n \} \). Then

$$ \sum_{1\leq i< j\leq n}d ( x_{i},x_{j} ) \leq ns \inf_{x\in X} \Biggl[ \sum _{i=1}^{n}d ( x_{i},x ) \Biggr] . $$

The proof follows directly by taking \(p_{i}=\frac{1}{ns}\), \(i\in \{ 1,2,\dots ,n \} \) in the previous theorem.
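
Explicitly, with this choice inequality (1) becomes

$$ \frac{1}{n^{2}s^{2}}\sum_{1\leq i< j\leq n}d ( x_{i},x_{j} ) \leq \frac{1}{ns}\inf_{x\in X} \Biggl[ \sum_{i=1}^{n}d ( x_{i},x ) \Biggr] , $$

and multiplying both sides by \(n^{2}s^{2}\) gives the stated bound.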

The above corollary can be interpreted geometrically as follows: the sum of the lengths of all edges and diagonals of a polygon with n vertices in a b-metric space is less than or equal to \(ns\)-times the sum of the distances from an arbitrary point of the space to its vertices.

The next corollary is a generalization of Corollary 2 in [11] in the framework of b-metric spaces.

Corollary 2.3

Let \(( X,d ) \) be a b-metric space with constant s and \(x_{i}\in X\), \(i\in \{ 1,2,\dots ,n \} \). If there exist \(z\in X\) and \(r>0\) such that the closed ball \(\overline{B} ( z,r ) = \{ y\in X:d ( z,y ) \leq r \} \) contains all the points \(x_{i}\), then for any \(p_{i}\geq 0\) with \(\sum_{i=1}^{n}p_{i}=\frac{1}{s}\) we have

$$ \sum_{1\leq i< j\leq n}p_{i}p_{j}d ( x_{i},x_{j} ) \leq \frac{r}{s}. $$

Proof

Using (1) we have

$$ \begin{aligned} \sum_{1\leq i< j\leq n}p_{i}p_{j}d ( x_{i},x_{j} ) & \leq \inf_{x\in X} \Biggl[ \sum_{i=1}^{n}p_{i}d ( x _{i},x ) \Biggr] \\ & \leq \sum_{i=1}^{n}p_{i}d ( x_{i},z ) \\ &\leq \frac{r}{s}. \end{aligned} $$

 □

3 Applications

In this section we define a new notion of a b-normed space and study some of its properties.

Definition 3.1

Let X be a vector space over a field K and let \(s\geq 1\) be a constant. A function \(\Vert \cdot \Vert _{b}:X\longrightarrow {}[ 0,\infty )\) is said to be a b-norm if the following conditions hold for every \(x,y\in X\), \(c\in K\):

(Nb1):

\(\Vert x \Vert _{b}\geq 0\);

(Nb2):

\(\Vert x \Vert _{b}=0\Longleftrightarrow x=0\);

(Nb3):

\(\Vert cx \Vert _{b}=|c|^{\log _{2}s+1} \Vert x \Vert _{b}\) (b-homogeneity);

(Nb4):

\(\Vert x+y \Vert _{b}\leq s [ \Vert x \Vert _{b}+ \Vert y \Vert _{b} ] \) (b-norm triangle inequality).

In this case \(( X, \Vert \cdot \Vert _{b} ) \) is called a b-normed space with constant s.

Here we give an example of a b-normed space.

Example 3.2

Let \(X=\mathbb{R}\) and define \(\Vert \cdot \Vert _{b}:X \longrightarrow {}[ 0,\infty )\) by \(\Vert x \Vert _{b}=|x|^{p}\), where \(p\in (1,\infty )\). Then, using the relation \(( x+y ) ^{p}\leq 2^{p-1} ( x^{p}+y^{p} ) \) for \(x,y\geq 0\), we can easily deduce that \(( X, \Vert \cdot \Vert _{b} ) \) is a b-normed space with constant \(s=2^{p-1}\).
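
In particular, since \(\log _{2}s+1=(p-1)+1=p\), conditions (Nb3) and (Nb4) read

$$ \Vert cx \Vert _{b}= \vert cx \vert ^{p}= \vert c \vert ^{p} \vert x \vert ^{p}= \vert c \vert ^{\log _{2}s+1} \Vert x \Vert _{b} \quad \text{and}\quad \Vert x+y \Vert _{b}= \vert x+y \vert ^{p}\leq \bigl( \vert x \vert + \vert y \vert \bigr) ^{p}\leq 2^{p-1} \bigl( \vert x \vert ^{p}+ \vert y \vert ^{p} \bigr) =s \bigl[ \Vert x \Vert _{b}+ \Vert y \Vert _{b} \bigr] . $$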

Remark 3.3

Let \(( X, \Vert \cdot \Vert _{b} ) \) be a b-normed space with constant \(s\geq 1\), \(x_{i}\in X\), \(i\in \{ 1,\dots ,n \}\). Then it is easy to prove the following generalized b-triangle inequality:

$$ \Biggl\Vert \sum_{i=1}^{n}x_{i} \Biggr\Vert \leq \sum_{i=1}^{n}s^{i} \Vert x_{i} \Vert . $$
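
This follows by applying (Nb4) repeatedly:

$$ \Biggl\Vert \sum_{i=1}^{n}x_{i} \Biggr\Vert \leq s \Vert x_{1} \Vert +s \Biggl\Vert \sum_{i=2}^{n}x_{i} \Biggr\Vert \leq s \Vert x_{1} \Vert +s^{2} \Vert x_{2} \Vert +s^{2} \Biggl\Vert \sum_{i=3}^{n}x_{i} \Biggr\Vert \leq \cdots \leq \sum_{i=1}^{n}s^{i} \Vert x_{i} \Vert . $$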

Remark 3.4

Any b-norm with constant \(s\geq 1\) induces a b-metric, with the same constant, as follows:

$$ d ( x,y ) = \Vert x-y \Vert _{b}. $$
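
Indeed, \((bM_{1})\) follows from (Nb2), the symmetry \((bM_{2})\) follows from (Nb3) with \(c=-1\), and the b-triangle inequality follows from (Nb4):

$$ d ( x,z ) = \bigl\Vert ( x-y ) + ( y-z ) \bigr\Vert _{b}\leq s \bigl[ \Vert x-y \Vert _{b}+ \Vert y-z \Vert _{b} \bigr] =s \bigl[ d ( x,y ) +d ( y,z ) \bigr] . $$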

A natural question now arises: is every b-metric on a vector space induced by a b-norm? The following remark addresses this question.

Remark 3.5

Let X be a vector space over a field K. Any b-metric \(d:X\times X\longrightarrow {}[ 0,\infty )\) with constant \(s\geq 1\) induced from a b-norm must satisfy the following properties for each \(x,y,z\in X\), \(c\in K\):

  1. (i)

    \(d ( x+z,y+z ) =d ( x,y ) \) (translation invariance);

  2. (ii)

    \(d ( cx,cy ) =|c|^{\log _{2}s+1}d ( x,y ) \) (b-homogeneity).
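
Indeed, if \(d ( x,y ) = \Vert x-y \Vert _{b}\) for some b-norm \(\Vert \cdot \Vert _{b}\), then

$$ d ( x+z,y+z ) = \Vert x-y \Vert _{b}=d ( x,y ) \quad \text{and}\quad d ( cx,cy ) = \bigl\Vert c ( x-y ) \bigr\Vert _{b}= \vert c \vert ^{\log _{2}s+1} \Vert x-y \Vert _{b}= \vert c \vert ^{\log _{2}s+1}d ( x,y ) . $$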

Proposition 3.6

Conversely, a b-homogeneous, translation invariant b-metric \(d:X\times X\longrightarrow {}[ 0,\infty )\) with constant \(s\geq 1\) defines a b-norm \(\Vert \cdot \Vert _{b}:X\longrightarrow {}[ 0, \infty )\) as follows:

$$ \Vert x \Vert _{b}=d ( x,0 ) \quad \forall x\in X. $$

Proof

Clearly, (Nb1) and (Nb2) are satisfied.

As d is b-homogeneous, \(\Vert cx \Vert _{b}=d ( cx,0 ) =|c|^{\log _{2}s+1}d ( x,0 ) =|c|^{\log _{2}s+1} \Vert x \Vert _{b}\).

As d is translation invariant,

$$ \begin{aligned} \Vert x+y \Vert _{b}&=d ( x+y,0 ) \leq s \bigl[ d(x+y,x)+d(x,0) \bigr] \\ & =s \bigl[ d ( y,0 ) +d ( x,0 ) \bigr] \\ & =s \bigl[ \Vert x \Vert _{b}+ \Vert y \Vert _{b} \bigr] , \end{aligned} $$

These two computations prove (Nb3) and (Nb4), respectively. □

Now we rewrite inequality (1) in the setting of b-normed spaces and obtain some corollaries.

If \(( X, \Vert \cdot \Vert _{b} ) \) is a b-normed space with constant \(s\geq 1\), \(x_{i}\in X\), and \(p_{i}\geq 0\), \(i\in \{ 1,\dots ,n \} \), with \(\sum_{i=1}^{n}p_{i}=\frac{1}{s} \), then, applying (1) to the induced b-metric \(d ( x,y ) = \Vert x-y \Vert _{b}\) of Remark 3.4, we have

$$ \sum_{1\leq i< j\leq n}p_{i}p_{j} \Vert x_{i}-x_{j} \Vert \leq \inf_{x\in X} \Biggl[ \sum_{i=1}^{n}p_{i} \Vert x_{i}-x \Vert \Biggr] . $$
(5)

The following proposition is a generalization of Proposition 2 in [11] to the case of a b-normed space.

Proposition 3.7

Let \(( X, \Vert \cdot \Vert _{b} ) \) be a b-normed space with constant \(s\geq 1\), \(x_{i}\in X\) and \(p_{i} \geq 0\), \(i\in \{ 1,\dots ,n \} \) with \(\sum_{i=1} ^{n}p_{i}=\frac{1}{s}\). Set \(x_{p}=\sum_{i=1}^{n}p_{i}x_{i}\). Then

$$ \frac{1}{2}\sum_{i=1}^{n}p_{i} \Vert x_{i}-x_{p} \Vert \leq s^{n}\sum _{1\leq i< j\leq n}p_{i}p_{j} \Vert x_{i}-x _{j} \Vert \leq s^{n}\sum_{i=1}^{n}p_{i} \Vert x _{i}-x_{p} \Vert . $$
(6)

Proof

As the infimum in (5) is a lower bound (take \(x=x_{p}\) there), the second part of inequality (6) is immediate. For the first part, we use the generalized b-triangle inequality of Remark 3.3 as follows:

$$ \begin{aligned} \frac{1}{2}\sum_{i=1}^{n}p_{i} \Vert x_{i}-x_{p} \Vert &= \frac{1}{2}\sum _{i=1}^{n}p_{i} \Biggl\Vert x_{i}-\sum_{j=1}^{n}p_{j}x_{j} \Biggr\Vert \\ &=\frac{1}{2}\sum_{i=1}^{n}p_{i} \Biggl\Vert \sum_{j=1} ^{n} ( x_{i}-p_{j}x_{j} ) \Biggr\Vert \\ &\leq \frac{1}{2}\sum_{i,j=1}^{n}p_{i}s^{j} \Vert x_{i}-p _{j}x_{j} \Vert \\ &\leq \frac{s^{n}}{2}\sum_{i,j=1}^{n}p_{i}p_{j} \Vert x _{i}-x_{j} \Vert \\ &=s^{n}\sum_{1\leq i< j\leq n}p_{i}p_{j} \Vert x_{i}-x_{j} \Vert , \end{aligned} $$

which completes the proof. □

We have the following corollary, which has a nice geometric interpretation.

Corollary 3.8

Let \(( X, \Vert \cdot \Vert _{b} ) \) be a b-normed space with constant \(s\geq 1\) and \(x_{i}\in X\), \(i\in \{ 1,\dots ,n \} \). If \(\overline{x}=\frac{x_{1}+\cdots +x_{n}}{n}\) is the gravity center of the vectors \(\{ x_{1}, \dots ,x_{n} \} \), then we have

$$ \frac{n}{2}\sum_{i=1}^{n} \Vert x_{i}-\overline{x} \Vert \leq s^{n}\sum _{1\leq i< j\leq n} \Vert x_{i}-x_{j} \Vert \leq ns^{n}\sum_{i=1}^{n} \Vert x_{i}-\overline{x} \Vert . $$

Geometrically, the last corollary means that the sum of the lengths of the edges and diagonals of a polygon with n vertices in a b-normed space is at most n-times the sum of the distances from its gravity center to its vertices, and at least \(\frac{n}{2s^{n}}\)-times the same quantity.

4 Conclusion

In a similar manner, further inequalities known for metric and normed spaces can be generalized to the setting of b-metric and b-normed spaces.