1 Introduction and mathematical preliminaries

The program of the paper is to find best proximity pairs between two subsets of a metric space equipped with a partial ordering. Several works address this purpose through non-self mappings in the following manner. Let A and B be two non-intersecting subsets of a metric space \((X, d)\). A mapping \(S \colon A \longrightarrow B\) realizes the best proximity pair \((x, Sx)\) if \(d(x, Sx) = d(A, B)\). In that case the point x is called a best proximity point of S, and the problem of finding such a point is called the best proximity point problem. This area of research has attracted considerable attention in recent times, which has resulted in the publication of a good number of papers, for instance those noted in [1–16].

The problem has two aspects. Primarily, it is a global minimization problem in which the quantity \(d(x, Sx)\) is minimized over \(x\in A\), the minimum value sought being \(d(A, B)\). When this global minimum is attained at a point z, we have a best proximity point for which \(d(z, Sz) = d(A, B)\). Another aspect is that it is an extension of the idea of a fixed point, to which it reduces in the cases where \(A \cap B\) is nonempty. This is the reason that fixed point methodologies are applicable to this category of problems. More elaborately, the problem can be treated as that of finding a globally optimal approximate solution of the fixed point equation \(x = Sx\) even when an exact solution does not exist because \(A \cap B = \emptyset\), which is the case of interest here. We adopt the latter approach in this paper.

We use a generalized weak contraction in our results. Weak contractions were studied in partially ordered metric spaces by Harjani and Sadarangani [17]. In a recent work, Choudhury et al. [18] generalized the above result to a coincidence point theorem using three control functions. More specifically, here we utilize a generalized weak contraction, defined with the help of three control functions, for the purpose of obtaining the desired minimum distance. The mapping in question is assumed to be defined from one set A to the other set B. Then, under suitable conditions and by applying fixed point methodologies, we obtain a best proximity point of this mapping which realizes the minimum distance. Several metric and order-theoretic concepts are utilized in our results. The main result has four corollaries and an illustrative example. A separate order-theoretic condition is imposed to ensure the uniqueness of the best proximity point in the main result. It is also shown that the corollaries are properly contained in the main theorem.

The following are the requisite mathematical concepts for the discussions in this paper.

Throughout the paper \((X, d)\) denotes a metric space, ⪯ a partial order on X and \(A, B \subseteq X\). We use the following notations:

$$\begin{aligned}& d(A, B) = \operatorname{inf} \bigl\{ d(a, b) : a\in A \mbox{ and } b \in B\bigr\} , \\& A_{0} = \bigl\{ a\in A : d(a, b) = d(A, B) \mbox{ for some } b \in B \bigr\} , \\& B_{0} = \bigl\{ b\in B : d(a, b) = d(A, B) \mbox{ for some } a \in A \bigr\} . \end{aligned}$$

It is to be noted that if \((A, B)\) is a nonempty, weakly compact, and convex pair in a Banach space X, then \(A_{0}\) and \(B_{0}\) are nonempty [5, 11]. If a mapping \(S:A\cup B \longrightarrow A\cup B\) is a cyclic relatively nonexpansive mapping and \((A, B)\) is a nonempty, weakly compact, and convex pair in a Banach space X, then we further have \(S(A_{0})\subseteq B_{0}\).
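For intuition, the sets \(A_{0}\) and \(B_{0}\) can be approximated numerically when A and B are replaced by finite samples. The following Python sketch is only an illustration under assumptions of our own choosing (the Euclidean metric on \(\mathbb {R}^{2}\), two sampled parallel segments, and a tolerance `tol`); it is not part of the theory.

```python
import itertools

def dist(p, q):
    # Euclidean metric on R^2; any other metric could be substituted here.
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def proximal_subsets(A, B, tol=1e-9):
    """Return d(A, B), A_0 and B_0 for finite samples A and B."""
    dAB = min(dist(a, b) for a, b in itertools.product(A, B))
    A0 = [a for a in A if any(abs(dist(a, b) - dAB) <= tol for b in B)]
    B0 = [b for b in B if any(abs(dist(a, b) - dAB) <= tol for a in A)]
    return dAB, A0, B0

# Two parallel segments sampled by 11 points each.
A = [(0.0, t / 10) for t in range(11)]
B = [(1.0, t / 10) for t in range(11)]
dAB, A0, B0 = proximal_subsets(A, B)
print(dAB)      # 1.0
print(A0 == A)  # True: every sampled point of A has a partner in B at distance d(A, B)
```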

Definition 1.1

(P-property [16])

Let A and B be two nonempty subsets of a metric space \((X, d)\) with \(A_{0}\neq\emptyset\). Then the pair \((A, B)\) is said to have the P-property if, for any \(x_{1}, x_{2}\in A_{0}\) and \(y_{1}, y_{2}\in B_{0}\),

$$\begin{aligned} \left . \textstyle\begin{array}{@{}r@{}} d(x_{1}, y_{1})= d(A, B),\\ d(x_{2}, y_{2})= d(A, B) \end{array}\displaystyle \right \} \quad\Rightarrow\quad d(x_{1}, x_{2})=d(y_{1}, y_{2}). \end{aligned}$$

In [1], Abkar and Gabeleh showed that every nonempty, bounded, closed, and convex pair of subsets of a uniformly convex Banach space has the P-property. Some non-trivial examples of nonempty pairs of subsets which satisfy the P-property are also given in [1].
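Continuing the previous sketch (and reusing its `dist` and `proximal_subsets` helpers, which are assumptions of that illustration), the P-property can be checked directly on finite samples by examining all pairs that realize the distance \(d(A, B)\).

```python
def has_P_property(A, B, tol=1e-9):
    """Check the P-property over all distance-realizing pairs of the finite samples."""
    dAB, A0, B0 = proximal_subsets(A, B)
    pairs = [(a, b) for a in A0 for b in B0 if abs(dist(a, b) - dAB) <= tol]
    return all(abs(dist(x1, x2) - dist(y1, y2)) <= tol
               for (x1, y1), (x2, y2) in itertools.product(pairs, pairs))

print(has_P_property(A, B))  # True for the sampled pair of parallel segments above
```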

Lemma 1.1

([10])

Let \((A, B)\) be a pair of nonempty closed subsets of a complete metric space \((X, d)\) such that \(A_{0}\) is nonempty and \((A, B)\) has the P-property. Then, \((A_{0}, B_{0})\) is a closed pair of subsets of X.

Definition 1.2

A mapping \(S \colon A \longrightarrow A\) is said to be increasing if for all \(x, y\in A\),

$$ x \preceq y \quad\Longrightarrow\quad Sx \preceq Sy. $$

Definition 1.3

([4])

A mapping \(S \colon A \longrightarrow B\) is called proximally increasing if for all \(v_{1}, v_{2}, y_{1}, y_{2}\in A\),

$$ y_{1} \preceq y_{2},\quad d(v_{1}, Sy_{1})= d(A, B) \quad\mbox{and}\quad d(v_{2}, Sy_{2})= d(A, B) \quad\Longrightarrow\quad v_{1} \preceq v_{2}. $$

In the case of a self-mapping, the above definition reduces to that of an increasing mapping.

Definition 1.4

A mapping \(S \colon A \longrightarrow B\) is called proximally increasing on \(A_{0}\) if for all \(v_{1}, v_{2}, y_{1}, y_{2}\in A_{0}\),

$$ y_{1} \preceq y_{2},\quad d(v_{1}, Sy_{1})= d(A, B) \quad\mbox{and}\quad d(v_{2}, Sy_{2})= d(A, B) \quad\Longrightarrow \quad v_{1} \preceq v_{2}. $$

Definition 1.5

The partially ordered metric space \((X, d, \preceq)\) is called regular if it has the following properties:

  1. (i)

    if \(\{z_{n}\}\) is any nondecreasing sequence in X converging to z, then \(z_{n} \preceq z\) for any \(n \geq0\);

  2. (ii)

    if \(\{z_{n}\}\) is any nonincreasing sequence in X converging to z, then \(z_{n} \succeq z\) for any \(n \geq0\).

2 Main results

Let Γ and Λ denote the following classes of functions:

  • \(\Gamma=\{\eta\colon[0, \infty) \longrightarrow[0, \infty), \eta \mbox{ is continuous and monotonic increasing}\}\);

  • \(\Lambda=\{\xi\colon[0, \infty) \longrightarrow[0, \infty), \xi \mbox{ is bounded on any bounded interval in } [0, \infty)\}\).

Now we discuss some properties of certain special types of functions in Λ.

Let \(\Theta= \{\theta\in\Lambda: \underline{\lim}\, \theta(z_{n}) > 0 \mbox{ whenever } \{z_{n}\} \mbox{ is any sequence of nonnegative real numbers converging to } l > 0\}\).

We note that Θ is nonempty. For an illustration, we define \(\theta_{1}\) on \([0,\infty)\) by \(\theta_{1}(x)=e^{2x}\), \(x\in[0, \infty)\). Then \(\theta_{1}\in\Theta\). Here we observe that \(\theta_{1}(0)=1 > 0\). On the other hand, if \(\theta_{2}(x)=x^{3}\), \(x\in[0, \infty)\), then \(\theta_{2}\in\Theta\) and \(\theta_{2}(0)= 0\).

Also, for any \(\theta\in\Theta\), it is clear that \(\theta(x)> 0\) for \(x>0\); and \(\theta(0)\) need not be equal to 0.

Let \(\Upsilon= \{\varphi\in\Lambda: \overline{\lim}\, \varphi(z_{n}) < l \mbox{ whenever } \{z_{n}\} \mbox{ is any sequence of nonnegative real numbers converging to } l > 0\}\).

It follows from the definition that, for any \(\varphi\in\Upsilon\), \(\varphi(y) < y\) for all \(y > 0\).

Theorem 2.1

Let \((X, d)\) be a complete metric space and ⪯ be a partial order on X. Let \((A, B)\) be a pair of nonempty subsets of X such that \(A_{0}\) is nonempty and closed. Let \(S \colon A \longrightarrow B\) be a mapping with the properties that \(S(A_{0})\subseteq B_{0}\) and S is proximally increasing on \(A_{0}\). Assume that there exist \(\eta\in\Gamma\) and \(\xi, \theta\in\Lambda\) such that

  1. (i)

    for \(x, y\in[0, \infty)\), \(\eta(x)\leq\xi(y) \Longrightarrow x\leq y\),

  2. (ii)

    \(\eta(z) - \overline{\lim}\, \xi(z_{n}) + \underline{\lim}\, \theta(z_{n}) > 0\), whenever \(\{z_{n}\}\) is any sequence of nonnegative real numbers converging to \(z> 0\),

  3. (iii)

    for all \(x, y, u, v \in A_{0}\)

    $$\left . \textstyle\begin{array}{r@{}} x \preceq y, \\ d(u, Sx)= d(A, B),\\ d(v, Sy)= d(A, B) \end{array}\displaystyle \right \}\quad\Rightarrow\quad\eta \bigl(d(u, v)\bigr) \leq\xi\bigl(M(x, y, u, v)\bigr) - \theta \bigl(M(x, y, u, v) \bigr), $$

    where \(M(x, y, u, v) = \max \{d(x, y), \frac{d(x, u) + d(y, v)}{2}, \frac{d(y, u) + d(x, v)}{2} \}\).

Suppose either S is continuous or X is regular. Also, suppose that there exist elements \(x_{0}, x_{1} \in A_{0}\) for which \(d(x_{1}, Sx_{0}) = d(A, B)\) and \(x_{0} \preceq x_{1}\). Then S has a best proximity point in  \(A_{0}\).

Proof

It follows from the definition of \(A_{0}\) and \(B_{0}\) that for every \(x\in A_{0}\) there exists \(y\in B_{0}\) such that \(d(x, y) = d(A, B)\) and conversely, for every \(y'\in B_{0}\) there exists \(x'\in A_{0}\) such that \(d(x', y') = d(A, B)\). Since \(S(A_{0})\subseteq B_{0}\), for every \(x \in A_{0}\) there exists a \(y \in A_{0}\) such that \(d(y, Sx) = d(A, B)\).

By the hypothesis of the theorem there exist \(x_{0}, x_{1} \in A_{0}\) for which \(x_{0} \preceq x_{1}\) and

$$ d(x_{1}, Sx_{0}) = d(A, B). $$
(2.1)

Now, \(x_{1}\in A_{0}\) and \(S(A_{0}) \subseteq B_{0}\) imply the existence of a point \(x_{2} \in A_{0}\) such that

$$ d(x_{2}, Sx_{1}) = d(A, B). $$
(2.2)

As S is proximally increasing on \(A_{0}\), we get \(x_{1} \preceq x_{2}\). In this way we obtain a sequence \(\{x_{n}\}\) in \(A_{0}\) such that for all \(n \geq0\),

$$ x_{n} \preceq x_{n + 1} $$
(2.3)

and

$$ d(x_{n + 1}, Sx_{n}) = d(A, B). $$
(2.4)

By the hypothesis (iii), \(x_{n} \preceq x_{n + 1}\), \(d(x_{n + 1}, Sx_{n}) = d(A, B)\) and \(d(x_{n + 2}, Sx_{n + 1}) = d(A, B)\) imply that

$$ \eta\bigl(d(x_{n + 1}, x_{n + 2})\bigr) \leq\xi \bigl(M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2}) \bigr) - \theta\bigl(M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2})\bigr), $$
(2.5)

where

$$\begin{aligned} &M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2}) \\ &\quad= \max \biggl\{ d(x_{n}, x_{n + 1}), \frac{d(x_{n}, x_{n + 1}) + d(x_{n + 1}, x_{n + 2})}{2}, \frac{d(x_{n + 1}, x_{n + 1}) + d(x_{n}, x_{n + 2})}{2} \biggr\} \\ &\quad= \max \biggl\{ d(x_{n}, x_{n + 1}), \frac{d(x_{n}, x_{n + 1}) + d(x_{n + 1}, x_{n + 2})}{2}, \frac{ d(x_{n}, x_{n + 2})}{2} \biggr\} . \end{aligned}$$

By the triangle inequality, \(\frac{d(x_{n}, x_{n+2})}{2}\leq\frac{d(x_{n}, x_{n+1}) + d(x_{n+1}, x_{n+2})}{2}\). So it follows that

$$ M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2})=\max \biggl\{ d(x_{n}, x_{n+1}), \frac{d(x_{n}, x_{n+1}) + d(x_{n+1}, x_{n+2})}{2} \biggr\} . $$

Let \(U_{n} = d(x_{n}, x_{n + 1})\), for all \(n \geq0\).

Case 1: \(M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2}) = d(x_{n}, x_{n +1})\). Then by (2.5),

$$ \eta\bigl(d(x_{n + 1}, x_{n + 2})\bigr) \leq\xi \bigl(d(x_{n}, x_{n+1})\bigr) - \theta \bigl(d(x_{n}, x_{n+1})\bigr), $$

that is,

$$ \eta(U_{n+1})\leq\xi(U_{n})- \theta(U_{n}), $$
(2.6)

which implies that \(\eta(U_{n+1})\leq\xi(U_{n})\). Then it follows by the hypothesis (i) of the theorem that \(U_{n+1} \leq U_{n}\), for all \(n \geq0\).

Case 2: \(M(x_{n }, x_{n + 1}, x_{n + 1}, x_{n + 2}) = \frac{d(x_{n}, x_{n+1}) + d(x_{n+1}, x_{n+2})}{2} = \frac{U_{n} + U_{n + 1}}{2} = V_{n}\) (say). Then it follows from (2.5) that

$$ \eta(U_{n+1})\leq\xi(V_{n})- \theta(V_{n}), $$
(2.7)

which implies that \(\eta(U_{n+1})\leq\xi(V_{n})= \xi (\frac{U_{n+1} + U_{n}}{2} )\). Again, by the hypothesis (i) of the theorem, it follows that \(U_{n+1} \leq \frac{U_{n} + U_{n + 1}}{2}\), that is, \(U_{n+1} \leq U_{n}\), for all \(n \geq0\).

From Case 1 and Case 2, we conclude that \(\{U_{n}\}\) is a monotone decreasing sequence of nonnegative real numbers. As \(\{U_{n}\}\) is bounded below by zero, there exists a \(t \geq 0\) such that

$$ \lim_{n\rightarrow\infty} U_{n} = \lim_{n\rightarrow\infty} d(x_{n}, x_{n + 1}) = t. $$
(2.8)

Then it follows that

$$ \lim_{n\rightarrow\infty} V_{n} = \lim_{n\rightarrow\infty} \frac{d(x_{n}, x_{n+1}) + d(x_{n+1}, x_{n+2})}{2} = t. $$
(2.9)

Taking the limit supremum on both sides of the inequality (2.6), using (2.8), the continuity of η, and the properties of ξ and θ, we obtain

$$ \eta(t) \leq \overline{\lim}\, \xi(U_{n}) + \overline{\lim}\, \bigl(- \theta(U_{n})\bigr). $$

Since \(\overline{\lim}\, (- \theta(U_{n})) = - \underline{\lim}\, \theta(U_{n})\), it follows that

$$ \eta(t) \leq \overline{\lim}\, \xi(U_{n}) - \underline{\lim}\, \theta(U_{n}), $$

that is,

$$ \eta(t) - \overline{\lim}\, \xi(U_{n}) + \underline{\lim}\, \theta(U_{n}) \leq0, $$

which, by the hypothesis (ii) and (2.8), is a contradiction unless \(t = 0\).

Arguing similarly as above, from (2.7) and (2.8), we have

$$ \eta(t) - \overline{\lim}\, \xi(V_{n}) + \underline{\lim}\, \theta(V_{n}) \leq0, $$

which, by the hypothesis (ii) and (2.9), is a contradiction unless \(t = 0\). Hence

$$ U_{n} = d(x_{n}, x_{n + 1}) \longrightarrow0 \quad\mbox{as } n\longrightarrow\infty. $$
(2.10)

Next we show that \(\{x_{n}\}\) is a Cauchy sequence.

Suppose that \(\{x_{n}\}\) is not a Cauchy sequence. Then there exist \(\delta> 0\) and two sequences \(\{m(k)\}\) and \(\{n(k)\}\) of positive integers such that for all positive integers k, \(n(k) > m(k) > k\) and \(d(x_{m(k)}, x_{n(k)})\geq\delta\). Assuming that \(n(k)\) is the smallest such positive integer, we get

$$n(k) > m(k) > k,\quad d(x_{m(k)}, x_{n(k)})\geq\delta \quad\mbox{and} \quad d(x_{m(k)}, x_{n(k)-1})< \delta. $$

Now,

$$\delta\leq d(x_{m(k)}, x_{n(k)})\leq d(x_{m(k)}, x_{n(k)-1}) + d(x_{n(k)-1}, x_{n(k)}) < \delta+ d(x_{n(k)-1}, x_{n(k)}). $$

From the above inequality and (2.10), it follows that

$$ \lim_{k\rightarrow\infty} d(x_{m(k)}, x_{n(k)})= \delta. $$
(2.11)

Again,

$$d(x_{m(k)}, x_{n(k)})\leq d(x_{m(k)}, x_{m(k)+1}) + d(x_{m(k)+1}, x_{n(k)+1})+ d(x_{n(k)+1}, x_{n(k)}) $$

and

$$d(x_{m(k)+1}, x_{n(k)+1})\leq d(x_{m(k)+1}, x_{m(k)})+ d(x_{m(k)}, x_{n(k)}) + d(x_{n(k)}, x_{n(k)+1}). $$

The above two inequalities imply that

$$\begin{aligned} &d(x_{m(k)}, x_{n(k)}) - d(x_{m(k)}, x_{m(k)+1}) - d(x_{n(k)+1}, x_{n(k)})\\ &\quad \leq d(x_{m(k)+1}, x_{n(k)+1}) \leq d(x_{m(k)+1}, x_{m(k)})+ d(x_{m(k)}, x_{n(k)}) + d(x_{n(k)}, x_{n(k)+1}). \end{aligned}$$

From the above inequality, (2.10) and (2.11), we have

$$ \lim_{k\rightarrow\infty} d(x_{m(k)+1}, x_{n(k)+1}) = \delta. $$
(2.12)

Again,

$$d(x_{m(k)}, x_{n(k)})\leq d(x_{m(k)}, x_{n(k)+1})+ d(x_{n(k)+1}, x_{n(k)}) $$

and

$$d(x_{m(k)}, x_{n(k)+1})\leq d(x_{m(k)}, x_{n(k)}) + d(x_{n(k)}, x_{n(k)+1}). $$

The above two inequalities imply that

$$d(x_{m(k)}, x_{n(k)}) - d(x_{n(k)+1}, x_{n(k)}) \leq d(x_{m(k)}, x_{n(k)+1})\leq d(x_{m(k)}, x_{n(k)}) + d(x_{n(k)}, x_{n(k)+1}). $$

From the above inequality, (2.10) and (2.11), we have

$$ \lim_{k\rightarrow\infty} d(x_{m(k)}, x_{n(k)+1})= \delta. $$
(2.13)

Similarly, we can prove that

$$ \lim_{k\rightarrow\infty}d(x_{n(k)}, x_{m(k)+1}) = \delta. $$
(2.14)

By the construction of the sequence \(\{x_{n}\}\), we have

$$x_{m(k)} \preceq x_{n(k)},\quad d(x_{m(k) + 1}, Sx_{m(k)}) = d(A, B) \quad\mbox{and}\quad d(x_{n(k) + 1}, Sx_{n(k)}) = d(A, B), $$

which, by the hypothesis (iii), imply that

$$\begin{aligned} &\eta\bigl(d(x_{m(k)+1}, x_{n(k)+1})\bigr) \\ &\quad\leq\xi \bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}) \bigr)- \theta\bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\bigr), \end{aligned}$$
(2.15)

where

$$\begin{aligned} &M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\\ &\quad=\max \biggl\{ d(x_{m(k)}, x_{n(k)}), \frac{d(x_{m(k)}, x_{m(k)+1}) + d(x_{n(k)}, x_{n(k)+1})}{2},\\ &\qquad{} \frac{d(x_{n(k)}, x_{m(k)+1}) + d(x_{m(k)}, x_{n(k)+1})}{2} \biggr\} . \end{aligned}$$

From (2.10), (2.11), (2.13), and (2.14), it follows that

$$ \lim_{k\rightarrow\infty} M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})= \delta. $$
(2.16)

Taking the limit supremum on both sides of the inequality (2.15), using (2.12), (2.16), the continuity of η, and the properties of ξ and θ, we obtain

$$ \eta(\delta) \leq \overline{\lim}\, \xi\bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\bigr) + \overline{\lim }\, \bigl(- \theta \bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}) \bigr)\bigr). $$

As \(\overline{\lim}\, (- \theta(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}))) = - \underline{\lim}\, \theta(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}))\), it follows that

$$ \eta(\delta) \leq \overline{\lim}\, \xi\bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\bigr) - \underline{\lim}\, \theta \bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}) \bigr), $$

that is,

$$ \eta(\delta) - \overline{\lim}\, \xi\bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1})\bigr) + \underline{\lim}\, \theta \bigl(M(x_{m(k)}, x_{n(k)}, x_{m(k) + 1}, x_{n(k) + 1}) \bigr)\leq0, $$

which, by the hypothesis (ii) and (2.16), is a contradiction. Therefore, \(\{x_{n}\}\) is a Cauchy sequence in \(A_{0}\). Since \(A_{0}\) is a closed subset of the complete metric space \((X, d)\), there exists \(a \in A_{0}\) such that

$$ \lim_{n\rightarrow\infty} x_{n} = a; \quad \mbox{that is}, \lim_{n\rightarrow\infty} d( x_{n}, a) = 0. $$
(2.17)
  • Suppose that S is continuous.

Taking \(n\longrightarrow\infty\) in (2.4) and using the continuity of S, we have \(d(a, Sa) = d(A, B)\); that is, a is a best proximity point of S.

  • Next we suppose that X is regular.

By (2.3), (2.17), and the regularity of X, we have

$$ x_{n} \preceq a \quad\mbox{for all } n \geq0. $$
(2.18)

Now \(a \in A_{0}\) and \(S(A_{0}) \subseteq B_{0}\) imply the existence of a point \(p \in A_{0}\) for which

$$ d(p, Sa) = d(A, B). $$
(2.19)

By (2.4), (2.18) and (2.19), we have

$$ x_{n} \preceq a,\quad d(x_{n+1}, Sx_{n}) = d(A, B) \quad \mbox{and} \quad d(p, Sa) = d(A, B), $$

which, by the hypothesis (iii) of the theorem, imply that

$$ \eta\bigl(d(x_{n + 1}, p)\bigr) \leq\xi \bigl(M(x_{n}, a, x_{n + 1}, p)\bigr) - \theta \bigl(M(x_{n}, a, x_{n + 1}, p)\bigr), $$
(2.20)

where

$$ M(x_{n}, a, x_{n + 1}, p)=\max \biggl\{ d(x_{n}, a), \frac{d(x_{n}, x_{n + 1})+ d(a, p)}{2}, \frac{d(a, x_{n + 1}) + d(x_{n}, p)}{2} \biggr\} . $$

From (2.10) and (2.17), it follows that

$$ \lim_{n\rightarrow\infty} M(x_{n}, a, x_{n + 1}, p) = \frac{d(a, p)}{2}. $$
(2.21)

Taking the limit supremum on both sides of the inequality (2.20), using (2.17), (2.21), the continuity and monotonicity of η, and the properties of ξ and θ, we obtain

$$ \eta\biggl(\frac{d(a, p)}{2}\biggr)\leq\eta\bigl(d(a, p)\bigr) \leq \overline { \lim}\, \xi\bigl(M(x_{n}, a, x_{n + 1}, p)\bigr) + \overline{\lim }\, \bigl(- \theta\bigl(M(x_{n}, a, x_{n + 1}, p)\bigr)\bigr). $$

Arguing similarly as discussed above, we have

$$ \eta \biggl(\frac{d(a, p)}{2} \biggr) - \overline{\lim}\, \xi \bigl(M(x_{n}, a, x_{n + 1}, p)\bigr) + \underline{\lim}\, \theta \bigl(M(x_{n}, a, x_{n + 1}, p)\bigr)\leq0, $$

which, by the hypothesis (ii) and (2.21), is a contradiction unless \(d(a, p) = 0\); that is, \(p = a\). Then by (2.19) we have \(d(a, Sa) = d(A, B)\); that is, a is a best proximity point of S. □
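The proof of Theorem 2.1 is constructive: starting from \(x_{0}, x_{1}\) with \(d(x_{1}, Sx_{0}) = d(A, B)\), one repeatedly chooses \(x_{n+1}\in A_{0}\) with \(d(x_{n+1}, Sx_{n}) = d(A, B)\), and the resulting sequence converges to a best proximity point. The Python sketch below is only a schematic rendering of this scheme; the routine `proximal_point`, which returns some point of \(A_{0}\) at distance \(d(A, B)\) from a given point of \(S(A_{0})\subseteq B_{0}\), is a hypothetical, problem-specific oracle whose existence (but not a formula for it) is guaranteed by the hypothesis \(S(A_{0})\subseteq B_{0}\).

```python
def best_proximity_iteration(S, proximal_point, x0, d, tol=1e-9, max_iter=1000):
    """Schematic version of the iteration used in the proof of Theorem 2.1.

    S              : the mapping from A to B
    proximal_point : oracle returning some x in A_0 with d(x, y) = d(A, B) for y in S(A_0)
    x0             : starting point in A_0
    d              : the metric
    """
    x = x0
    for _ in range(max_iter):
        x_next = proximal_point(S(x))   # d(x_next, Sx) = d(A, B), as in (2.4)
        if d(x, x_next) <= tol:         # the sequence has numerically stabilized
            return x_next
        x = x_next
    return x
```

In Example 3.1 below this oracle sends \((x/2, -1)\) to \((x/2, 1)\), so the iteration halves the first coordinate at every step and converges to the best proximity point \((0, 1)\).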

Theorem 2.2

In addition to the hypotheses of Theorem 2.1, suppose that for every \(x, y \in A_{0}\) there exists \(u \in A_{0}\) such that u is comparable to x and y. Then S has a unique best proximity point.

Proof

By Theorem 2.1, the set of best proximity points of S is nonempty. Suppose \(x, y\in A_{0}\) are two best proximity points of S; that is,

$$ d(x, Sx) = d(A, B) \quad\mbox{and}\quad d(y, Sy) = d(A, B). $$
(2.22)

By the assumption, there exists \(u \in A_{0}\), which is comparable with x and y.

Put \(u_{0} = u\). Suppose that

$$ u_{0} \preceq x \quad\mbox{(in the other case the proof is similar)}. $$
(2.23)

\(S(A_{0}) \subseteq B_{0}\) and \(u_{0} = u\in A_{0}\) imply the existence of a point \(u_{1} \in A_{0}\) for which

$$ d(u_{1}, Su_{0}) = d(A, B). $$
(2.24)

Since S is proximally increasing on \(A_{0}\), from (2.22), (2.23), and (2.24) we have

$$ u_{1} \preceq x. $$
(2.25)

Following this process, we obtain a sequence \(\{u_{n}\}\) in \(A_{0}\) such that for all \(n \geq0\),

$$ d(u_{n+1}, Su_{n}) = d(A, B) \quad\mbox{and} \quad u_{n} \preceq x. $$
(2.26)

By (2.22) and (2.26), we have

$$ u_{n} \preceq x,\quad d(u_{n+1}, Su_{n}) = d(A, B) \quad\mbox{and}\quad d(x, Sx) = d(A, B), $$

which, by the hypothesis (iii) of Theorem 2.1, imply that

$$ \eta\bigl(d(u_{n+1}, x )\bigr) \leq\xi \bigl(M(u_{n}, x, u_{n + 1}, x)\bigr) - \theta \bigl(M(u_{n}, x, u_{n + 1}, x)\bigr), $$
(2.27)

where

$$\begin{aligned} M(u_{n}, x, u_{n + 1}, x) &= \max \biggl\{ d(u_{n}, x), \frac{d(u_{n}, u_{n + 1}) + d(x, x)}{2}, \frac{d(x, u_{n + 1}) + d(u_{n}, x)}{2} \biggr\} \\ & = \max \biggl\{ d(u_{n}, x), \frac{d(u_{n}, u_{n + 1})}{2}, \frac{d(x, u_{n + 1}) + d(u_{n}, x)}{2} \biggr\} . \end{aligned}$$

By the triangle inequality, \(\frac{d(u_{n}, u_{n+1})}{2}\leq\frac{d(x, u_{n + 1}) + d(u_{n}, x)}{2}\). Then it follows that

$$M(u_{n}, x, u_{n + 1}, x) =\max \biggl\{ d(u_{n}, x), \frac{d(x, u_{n + 1}) + d(u_{n}, x)}{2} \biggr\} . $$

Let \(Q_{n} = d(u_{n}, x)\), for all \(n \geq0\).

Arguing similarly as in the proof of Theorem 2.1 (Case 1 and Case 2), we can prove that \(\{Q_{n}\}\) is a monotone decreasing sequence of nonnegative real numbers and

$$ \lim_{n\rightarrow\infty}Q_{n} = \lim _{n\rightarrow\infty}d(u_{n}, x) = 0. $$
(2.28)

Similarly, we show that

$$ \lim_{n\rightarrow\infty}d(u_{n}, y) = 0. $$
(2.29)

By the triangle inequality, and using (2.28) and (2.29), we have

$$ 0 \leq d(x, y)\leq\bigl[d(x, u_{n}) + d(u_{n}, y)\bigr] \longrightarrow0 \quad\mbox{as } n\longrightarrow\infty, $$

which implies that \(d(x, y) = 0\); that is, \(x = y\); that is, the best proximity point of S is unique. □

With the help of the P-property, we have the following theorem, which is obtained by an application of Theorem 2.1.

Theorem 2.3

Let \((X, d)\) be a complete metric space and ⪯ be a partial order on X. Let \((A, B)\) be a pair of nonempty and closed subsets of X such that \(A_{0}\) is nonempty and \((A, B)\) satisfies the P-property. Let \(S \colon A \longrightarrow B\) be a mapping with the properties that \(S(A_{0})\subseteq B_{0}\) and S is proximally increasing on \(A_{0}\). Assume that there exist \(\eta\in \Gamma\) and \(\xi, \theta\in\Lambda\) such that

  1. (i)

    for \(x, y\in[0, \infty)\), \(\eta(x)\leq\xi(y) \Longrightarrow x\leq y\),

  2. (ii)

    \(\eta(z) - \overline{\lim}\, \xi(z_{n}) + \underline{\lim}\, \theta(z_{n}) > 0\), whenever \(\{z_{n}\}\) is any sequence of nonnegative real numbers converging to \(z> 0\),

  3. (iii)

    for all \(x, y, u, v \in A_{0}\)

    $$\left . \textstyle\begin{array}{r@{}} x \preceq y, \\ d(u, Sx)= d(A, B),\\ d(v, Sy)= d(A, B) \end{array}\displaystyle \right \}\quad\Rightarrow \quad\eta \bigl(d(Sx, Sy)\bigr) \leq\xi\bigl(M(x, y, u, v)\bigr) - \theta \bigl(M(x, y, u, v)\bigr), $$

    where \(M(x, y, u, v) = \max \{d(x, y), \frac{d(x, u) + d(y, v)}{2}, \frac{d(y, u) + d(x, v)}{2} \}\).

Suppose either S is continuous or X is regular. Also, suppose that there exist elements \(x_{0}, x_{1} \in A_{0}\) for which \(d(x_{1}, Sx_{0}) = d(A, B)\) and \(x_{0} \preceq x_{1}\). Then S has a best proximity point in  \(A_{0}\).

Proof

By Lemma 1.1, \(A_{0}\) is nonempty and closed. Since \((A, B)\) satisfies the P-property, \(d(u, Sx)= d(A, B)\) and \(d(v, Sy)= d(A, B)\) imply that \(d(u, v)= d(Sx, Sy)\). Then condition (iii) of the theorem reduces to condition (iii) of Theorem 2.1. Therefore, all the conditions of Theorem 2.1 are satisfied, and hence the conclusion follows. □

3 Corollaries and example

Corollary 3.1

Let \((X, d)\) be a complete metric space and ⪯ be a partial order on X. Let \((A, B)\) be a pair of nonempty subsets of X such that \(A_{0}\) is nonempty and closed. Let \(S \colon A \longrightarrow B\) be a mapping with the properties that \(S(A_{0})\subseteq B_{0}\) and S is proximally increasing on \(A_{0}\). Suppose that there exists \(\xi\in\Lambda\) such that \(\overline{\lim}\, \xi(z_{n}) < z\), whenever \(\{z_{n}\}\) is any sequence of nonnegative real numbers converging to \(z > 0\), and that for all \(x, y, u, v \in A_{0}\)

$$\left . \textstyle\begin{array}{r@{}} x \preceq y, \\ d(u, Sx)= d(A, B),\\ d(v, Sy)= d(A, B) \end{array}\displaystyle \right \}\quad\Rightarrow \quad d(u, v) \leq\xi\bigl(M(x, y, u, v)\bigr) , $$

where \(M(x, y, u, v)\) is the same as in Theorem 2.1. Suppose either S is continuous or X is regular. Also, suppose that there exist elements \(x_{0}, x_{1} \in A_{0}\) such that \(d(x_{1}, Sx_{0}) = d(A, B)\) and \(x_{0} \preceq x_{1}\). Then S has a best proximity point in \(A_{0}\).

Proof

Take η to be the identity mapping and \(\theta (t) = 0\) for all \(t \in[0, \infty)\) in Theorem 2.1. Then the required proof follows from that of Theorem 2.1. □

Corollary 3.2

Let \((X, d)\) be a complete metric space and ⪯ be a partial order on X. Let \((A, B)\) be a pair of nonempty subsets of X such that \(A_{0}\) is nonempty and closed. Let \(S \colon A \longrightarrow B\) be a mapping with the properties that \(S(A_{0})\subseteq B_{0}\) and S is proximally increasing on \(A_{0}\). Assume that there exist \(\eta\in\Gamma\) and \(\theta\in\Lambda\) such that for any sequence of nonnegative real numbers \(\{z_{n}\}\) with \(z_{n}\longrightarrow z > 0\), \(\underline{\lim}\, \theta(z_{n}) > 0\) and for all \(x, y, u, v \in A_{0}\)

$$\left . \textstyle\begin{array}{r@{}} x \preceq y, \\ d(u, Sx)= d(A, B),\\ d(v, Sy)= d(A, B) \end{array}\displaystyle \right \}\quad\Rightarrow \quad\eta \bigl(d(u, v)\bigr) \leq\eta\bigl(M(x, y, u, v)\bigr) - \theta \bigl(M(x, y, u, v) \bigr) , $$

where \(M(x, y, u, v)\) is the same as in Theorem 2.1. Suppose either S is continuous or X is regular. Also, suppose that there exist elements \(x_{0}, x_{1} \in A_{0}\) for which \(d(x_{1}, Sx_{0}) = d(A, B)\) and \(x_{0} \preceq x_{1}\). Then S has a best proximity point in \(A_{0}\).

Proof

The required proof is obtained by taking ξ to be the function η in Theorem 2.1. □

Corollary 3.3

Let \((X, d)\) be a complete metric space and ⪯ be a partial order on X. Let \((A, B)\) be a pair of nonempty subsets of X such that \(A_{0}\) is nonempty and closed. Let \(S \colon A \longrightarrow B\) be a mapping with the properties that \(S(A_{0})\subseteq B_{0}\) and S is proximally increasing on \(A_{0}\). Suppose that there exists \(\theta\in\Lambda\) such that \(\underline{\lim}\, \theta (z_{n}) > 0\), whenever \(\{z_{n}\}\) is any sequence of nonnegative real numbers converging to \(z > 0\), and that for all \(x, y, u, v \in A_{0}\)

$$\left . \textstyle\begin{array}{r@{}} x \preceq y, \\ d(u, Sx)= d(A, B),\\ d(v, Sy)= d(A, B) \end{array}\displaystyle \right \}\quad\Rightarrow\quad d(u, v) \leq M(x, y, u, v) - \theta\bigl(M(x, y, u, v)\bigr) , $$

where \(M(x, y, u, v)\) is the same as in Theorem 2.1. Suppose either S is continuous or X is regular. Also, suppose that there exist elements \(x_{0}, x_{1} \in A_{0}\) for which \(d(x_{1}, Sx_{0}) = d(A, B)\) and \(x_{0} \preceq x_{1}\). Then S has a best proximity point in \(A_{0}\).

Proof

Take η and ξ to be the identity mappings in Theorem 2.1. Then the required proof follows from that of Theorem 2.1. □

Corollary 3.4

Let \((X, d)\) be a complete metric space and ⪯ be a partial order on X. Let \((A, B)\) be a pair of nonempty subsets of X such that \(A_{0}\) is nonempty and closed. Let \(S \colon A \longrightarrow B\) be a mapping with the properties that \(S(A_{0})\subseteq B_{0}\) and S is proximally increasing on \(A_{0}\). Assume that there exists \(k\in[0, 1)\) such that for all \(x, y, u, v \in A_{0}\)

$$\left . \textstyle\begin{array}{r@{}} x \preceq y, \\ d(u, Sx)= d(A, B),\\ d(v, Sy)= d(A, B) \end{array}\displaystyle \right \}\quad\Rightarrow\quad d(u, v) \leq k M(x, y, u, v) , $$

where \(M(x, y, u, v)\) is the same as in Theorem 2.1. Suppose either S is continuous or X is regular. Also, suppose that there exist elements \(x_{0}, x_{1} \in A_{0}\) for which \(d(x_{1}, Sx_{0}) = d(A, B)\) and \(x_{0} \preceq x_{1}\). Then S has a best proximity point in \(A_{0}\).

Proof

We take η and ξ to be the identity mappings and \(\theta(t)=(1-k)t\), where \(0\leq k < 1\), in Theorem 2.1. With these choices condition (i) of Theorem 2.1 reduces to the trivial implication \(x \leq y \Longrightarrow x \leq y\), and for any sequence \(\{z_{n}\}\) of nonnegative real numbers converging to \(z > 0\) we have \(\eta(z) - \overline{\lim}\, \xi(z_{n}) + \underline{\lim}\, \theta(z_{n}) = z - z + (1-k)z = (1-k)z > 0\), so condition (ii) is also satisfied. Then the required proof follows from that of Theorem 2.1. □

Example 3.1

Let \(X = \mathbb {R}^{2}\) (\(\mathbb {R}\) denotes the set of all real numbers) and d be a metric on X defined as \(d(x, y) = | x_{1} - x_{2}|+ | y_{1} - y_{2}|\), for \(x = (x_{1}, y_{1}), y = (x_{2}, y_{2})\in X\). Define a partial order ⪯ on X such that \((x, y) \preceq(u, v)\) if and only if \(x \leq u\) and \(y \leq v\), for all \((x, y), (u, v) \in X\). Let

$$\begin{aligned}& A = \bigl\{ (x, 1) : 0 \leq x \leq1\bigr\} \cup\bigl\{ (0, x) : 1 \leq x < 2\bigr\} ,\\& B = \bigl\{ (x, - 1) : 0 \leq x \leq1\bigr\} \cup\bigl\{ (0, x) : -2 < x \leq-1\bigr\} ,\\& A_{0} = \bigl\{ (x, 1) : 0 \leq x \leq1\bigr\} \subseteq A \quad\mbox{and}\quad B_{0} = \bigl\{ (x, - 1) : 0 \leq x \leq1\bigr\} \subseteq B. \end{aligned}$$

Let \(S \colon A \rightarrow B\) be defined as

$$S(t) =\left \{ \textstyle\begin{array}{@{}l@{\quad}l} (\frac{x}{2}, - 1 ), &\mbox{if } t = (x, 1) \in A_{0}, \\ (0, -x), &\mbox{if } t = (0, x) \in\{(0, x) : 1 \leq x < 2\}, \end{array}\displaystyle \right . $$

and \(\eta, \xi, \theta\colon[0, \infty) \longrightarrow[0, \infty)\) be defined as follows:

$$\eta(x) =x^{2},\qquad \xi(x) = \frac{x^{2}}{2},\qquad \theta(x)=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0, &\mbox{if } 0 \leq x \leq1,\\ \frac{x^{2}}{4},& \mbox{otherwise}. \end{array}\displaystyle \right . $$

The mapping S satisfies all the postulates of Theorems 2.1 and 2.2. Hence, by applying Theorems 2.1 and 2.2 together, we conclude that S has a unique best proximity point, which can be seen here to be \((0, 1) \in A_{0}\).
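The contractive condition (iii) can also be checked numerically on a sample of \(A_{0}\), and the proximal iteration from the proof of Theorem 2.1 can be run explicitly. The Python sketch below is an informal verification only; the grid resolution and the number of iterations are arbitrary choices.

```python
def d(p, q):
    # the metric of Example 3.1
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def S0(p):
    # S restricted to A_0: (x, 1) -> (x/2, -1)
    return (p[0] / 2.0, -1.0)

def eta(t):
    return t ** 2

def xi(t):
    return t ** 2 / 2.0

def theta(t):
    return 0.0 if t <= 1 else t ** 2 / 4.0

grid = [i / 50.0 for i in range(51)]  # sample of A_0 = {(x, 1) : 0 <= x <= 1}
ok = True
for a in grid:
    for b in grid:
        if a <= b:  # x = (a, 1) precedes y = (b, 1)
            x, y = (a, 1.0), (b, 1.0)
            u, v = (a / 2.0, 1.0), (b / 2.0, 1.0)  # d(u, Sx) = d(v, Sy) = d(A, B) = 2
            M = max(d(x, y), (d(x, u) + d(y, v)) / 2.0, (d(y, u) + d(x, v)) / 2.0)
            ok = ok and eta(d(u, v)) <= xi(M) - theta(M) + 1e-12
print(ok)  # True: condition (iii) holds on the sample

# proximal iteration: x_{n+1} is the unique point of A_0 with d(x_{n+1}, S x_n) = 2
x = (1.0, 1.0)
for _ in range(60):
    x = (S0(x)[0], 1.0)
print(x)  # approximately (0.0, 1.0), the best proximity point
```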

Note

In this example A and B are not closed sets. This illustrates the fact that the closedness of A and B is not required in our theorem.

Remark 3.1

Corollaries 3.1, 3.2, 3.3, and 3.4 are not applicable to this example, and hence Theorem 2.1 is a proper extension of Corollaries 3.1, 3.2, 3.3, and 3.4.

4 Conclusions

The present paper is an application of weak inequalities satisfied by non-self mappings. Weak contractions are intermediate between contractions and nonexpansive mappings; they have been generalized in various ways and utilized in different types of problems. Here we make such an application for the purpose of finding a best proximity pair. The special feature of this paper is that the result is obtained in the general setting of a metric space without any special assumptions on the space.