1 Introduction

The equilibrium problem \((\operatorname{EP})\) provides a general framework for many problems, such as the optimization problem and the complementarity problem. In the past few decades, it has been studied extensively in linear spaces (e.g., Hadjisavvas et al. [1], Bianchi et al. [2, 3], Blum et al. [4]).

It is natural to extend these concepts and methods from linear spaces to Riemannian manifolds. By choosing a suitable Riemannian metric, a nonconvex optimization problem can be transformed into a convex one, and a constrained optimization problem can be transformed into an unconstrained one. Some classical algorithms have been extended from linear spaces to Riemannian manifolds, for example by Ferreira et al. [5, 6], Li et al. [7], and Tang et al. [8]. Related work on Hadamard manifolds can be found in Kristály [9], Li et al. [10, 11], Ceng et al. [12], Zhou et al. [13], and others.

In 2012, Colao et al. [14] studied the equilibrium problem on Hadamard manifolds. Let E be a nonempty closed convex subset of a Hadamard manifold \(\mathbb{M}\), and let \(S : E \times E \longrightarrow \mathbb{R}\) be a bifunction satisfying \(S(x, x)=0\) for all \(x \in E\). The equilibrium problem is to find \(x \in E\) such that

$$ S (x, y ) \geq 0, \quad \forall y \in E. $$
(EP)

We denote the solution set of the equilibrium problem (EP) by \(\operatorname{EP}(S; E)\), and we always assume that \(\operatorname{EP}(S; E)\neq \emptyset \). In the special case where \(S(x, y)=\langle V(x), \exp ^{-1}_{x}{y}\rangle \), with \(V : E\rightarrow T\mathbb{M}\) a vector field satisfying \(V(x) \in T_{x}\mathbb{M}\) for all \(x \in E\) and \(\exp ^{-1}\) the inverse of the exponential map, (EP) becomes the variational inequality problem: find \(x\in E\) such that

$$ \bigl\langle V (x ), \exp ^{-1}_{x}{y} \bigr\rangle \geq 0, \quad \forall y \in E . $$
(VI)

The solution set of the variational inequality problem (VI) is denoted by \(\operatorname{VI}(V,E)\).

It is well known that the KKM lemma is an important tool for studying the existence of solutions of equilibrium problems. Colao et al. [14] extended Fan’s KKM lemma [15] to Hadamard manifolds and obtained the existence of solutions of the equilibrium problem (EP) in this setting. For related results, see, for instance, Yang and Pu [16], Tang et al. [17], Chen et al. [18], Batista et al. [19], and Zhou et al. [20–22].

Furthermore, the existence of solutions of equilibrium problems and variational inequality problems on Riemannian manifolds has been established in several works. In particular, Li et al. [23] obtained existence and uniqueness results for variational inequality problems on Riemannian manifolds, and Li and Yao [24] provided existence theorems for variational inequalities for set-valued mappings on Riemannian manifolds. Very recently, Wang et al. [25] obtained the existence of solutions and convexity properties of the solution set for the equilibrium problem on Riemannian manifolds.

Many ideas and methods for solving equilibrium problems and variational inequality problems in linear spaces have been studied. For example, Korpelevich [26] first designed an extragradient method for solving the variational inequality problem, and Censor et al. [27] proposed the subgradient extragradient method inspired by the extragradient method in [26]. In 2019, Thong and Hieu [28] introduced an inertial subgradient extragradient algorithm based on the subgradient extragradient method in [27]. Ceng et al. [29] and Yao et al. [30] then obtained inertial algorithms for finding a common solution of the variational inequality problem and the fixed-point problem by using a subgradient approach. As for equilibrium problems, Quoc et al. [31] obtained an extragradient method for solving a pseudomonotone equilibrium problem, and Nguyen et al. [32] provided an iterative method for finding a common solution of an equilibrium problem and a fixed-point problem based on the extragradient method in [31]. In 2020, Yao et al. [33] improved and extended the main result of [32] to a more general setting.

In recent years, algorithms for solving the equilibrium problem (EP) on Hadamard manifolds have received much attention, for example from Colao et al. [14], Salahuddin [34], and Li et al. [35]. Recently, Cruz Neto et al. [36] extended the result of Nguyen et al. [32] and obtained an extragradient method for solving the equilibrium problem on Hadamard manifolds, which is described as follows: choose \(\lambda _{k}>0\) and compute

$$ \textstyle\begin{cases} y^{k}=\arg \min_{z \in E} \{ S (x^{k}, z )+ \frac{1}{2 \lambda _{k}} d^{2} (x^{k}, z ) \} , \\ x^{k+1}=\arg \min_{z \in E} \{ S (y^{k}, z )+ \frac{1}{2 \lambda _{k}} d^{2} (x^{k}, z ) \} , \end{cases} $$
(1)

where \(0<\lambda _{k}<\beta <\min \{ \alpha _{1}^{-1}, \alpha _{2}^{-1} \} \) and \(\alpha _{1}\), \(\alpha _{2}\) are constants related to the Lipschitz-type constants of the bifunction. It should be noted that the Lipschitz-type constants are unknown in general and are difficult to estimate, especially for complex nonlinear problems.

Recently, Hieu et al. [37, 38] and Yang et al. [39, 40] introduced some proximal-like algorithms in the linear setting. In these algorithms the stepsize is updated at each iteration from information at adjacent iterates, so knowledge of the Lipschitz constants is unnecessary.

Inspired by the work above, we present a new extragradient-like method for (EP) on Hadamard manifolds. Compared with [36], our algorithm does not require prior knowledge of the Lipschitz-type constants. Moreover, the values at adjacent iteration points determine the stepsize of the next iteration, which can effectively improve the efficiency of the iteration. We note that, if \(\mathbb{M} = \mathbb{R}\), then our algorithm is an improvement of the algorithm presented in Hieu et al. [38].

The paper is organized as follows. In Sect. 2, we present some basic facts on Riemannian manifolds which will be used in this paper; for more details, see [41, 42]. In Sect. 3, we introduce the extragradient-like algorithm and analyze its convergence. Finally, in Sect. 4, we present two experiments to verify the algorithms.

2 Preliminaries

Suppose \(\mathbb{M}\) is a simply connected n-dimensional Riemannian manifold, ∇ is the Levi-Civita connection, and γ is a smooth curve on \(\mathbb{M}\). Let V be the unique vector field satisfying \(\nabla _{\gamma '(t)}V=0\) for all \(t \in [a, b]\) and \(V(\gamma (a))=v\). Then the parallel transport \(\mathrm{P}_{\gamma , \gamma (b), \gamma (a)}: T_{\gamma (a)} \mathbb{M} \rightarrow T_{\gamma (b)} \mathbb{M}\) on the tangent bundle \(T\mathbb{M}\) along γ is defined by

$$\begin{aligned} \mathrm{P}_{\gamma , \gamma (b), \gamma (a)}(v)=V\bigl(\gamma (b)\bigr), \quad \forall a, b \in \mathbb{R} \text{ and } v \in T_{\gamma (a)} \mathbb{M}. \end{aligned}$$

If γ is a minimal geodesic joining p to q, then we use \(\mathrm{P}_{q, p}\) instead of \(\mathrm{P}_{\gamma , q ,p}\).

A Riemannian manifold \(\mathbb{M}\) is complete if, for any \(p\in \mathbb{M}\), all geodesics \(\gamma (t)\) emanating from p are defined for all \(t \in \mathbb{R}\).

Suppose \(\mathbb{M}\) is complete and let \(\gamma (\cdot ) = \gamma _{v}(\cdot , p)\) be the geodesic with \(\gamma (0)=p\) and \(\gamma '(0)=v\). The exponential map \(\exp _{p}: T_{p}\mathbb{M} \rightarrow \mathbb{M}\) at p is defined by \(\exp _{p}v = \gamma _{v}(1, p) \) for all \(v \in T_{p}\mathbb{M}\); then \(\exp _{p}tv = \gamma _{v}(t, p)\) for all \(t\in \mathbb{R}\). We note that, for every \(p \in \mathbb{M}\), \(\exp _{p}\) is differentiable on \(T_{p}\mathbb{M}\); on a Hadamard manifold, \(\exp _{p}: T_{p}\mathbb{M}\rightarrow \mathbb{M} \) is moreover a diffeomorphism (see Proposition 2.1 below).
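For orientation, in the Euclidean case \(\mathbb{M}=\mathbb{R}^{n}\) (a Hadamard manifold of zero curvature), geodesics are straight lines and these objects reduce to

$$ \exp _{p}v=p+v, \quad\quad \exp _{p}^{-1}q=q-p, \quad\quad d(p,q)= \Vert p-q \Vert , \quad\quad \mathrm{P}_{q, p}v=v; $$

a curved example on \(\mathbb{R}_{++}\) is given in Sect. 4, see (34).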

A complete, simply connected Riemannian manifold of nonpositive sectional curvature is called a Hadamard manifold. In this paper, we assume that \(\mathbb{M}\) is an n-dimensional Hadamard manifold.

Proposition 2.1

([43])

Let \(p\in \mathbb{M}\). Then \(\exp _{p}: T_{p}\mathbb{M}\rightarrow \mathbb{M} \) is a diffeomorphism, and for any \(p, q \in \mathbb{M}\), there exists a unique normalized geodesic \(\gamma _{q,p}\) joining p to q.

A geodesic triangle \(\triangle (p_{1}, p_{2}, p_{3})\) in a Riemannian manifold is a set consisting of three points \(p_{1}\), \(p_{2}\), \(p_{3}\) and three minimal geodesics joining these points.

Proposition 2.2

([42])

Let \(\triangle (p_{1}, p_{2}, p_{3} )\) be a geodesic triangle in a Hadamard manifold \(\mathbb{M}\). Then

$$\begin{aligned} &d^{2}(p_{1}, p_{2})+d^{2}(p_{2}, p_{3})-2\bigl\langle \exp _{p_{2}}^{-1} p_{1}, \exp _{p_{2}}^{-1} p_{3}\bigr\rangle \leq d^{2}(p_{3}, p_{1}), \end{aligned}$$
(2)

where \(\exp _{p_{2}}^{-1}\) is the inverse of \(\exp _{p_{2}}\).
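In the Euclidean case \(\mathbb{M}=\mathbb{R}^{n}\), inequality (2) holds with equality, since

$$ d^{2}(p_{1}, p_{2})+d^{2}(p_{2}, p_{3})-2\langle p_{1}-p_{2}, p_{3}-p_{2}\rangle = \bigl\Vert (p_{1}-p_{2})-(p_{3}-p_{2}) \bigr\Vert ^{2}=d^{2}(p_{3}, p_{1}); $$

the nonpositive curvature of \(\mathbb{M}\) turns this identity into the inequality (2).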

Proposition 2.3

([44])

Let \(\triangle (p_{1}, p_{2}, p_{3})\) be a geodesic triangle in \(\mathbb{M}\). Then there exist three points \(p_{1}'\), \(p_{2}'\), \(p_{3}'\) in \(\mathbb{R}^{2}\) (the vertices of a comparison triangle) such that

$$\begin{aligned} d(p_{1}, p_{2})= \bigl\Vert p_{1}^{\prime }-p_{2}^{\prime } \bigr\Vert , \quad\quad d(p_{2}, p_{3})= \bigl\Vert p_{2}^{\prime }-p_{3}^{\prime } \bigr\Vert , \quad \quad d(p_{3}, p_{1})= \bigl\Vert p_{1}^{\prime }-p_{3}^{\prime } \bigr\Vert . \end{aligned}$$

Lemma 2.4

([7])

Let \(\triangle (p_{1}, p_{2}, p_{3})\) be a geodesic triangle in \(\mathbb{M}\) and let \(\Delta (p_{1}', p_{2}', p_{3}')\) be its comparison triangle.

  1. (1)

    Let α, β, γ be the angles of \(\Delta (p_{1}, p_{2}, p_{3})\) at the vertices \(p_{1}\), \(p_{2}\), \(p_{3}\), and let \(\alpha '\), \(\beta '\), \(\gamma '\) be the angles of \(\Delta (p_{1}', p_{2}', p_{3}')\) at the vertices \(p_{1}'\), \(p_{2}'\), \(p_{3}'\). Then

    $$ \alpha ' \geq \alpha ,\quad\quad \beta ' \geq \beta ,\quad\quad \gamma ' \geq \gamma . $$
  2. (2)

    Let z be a point on the geodesic joining \(p_{1}\) to \(p_{2}\), and let \(z'\in [p_{1}', p_{2}']\) be the comparison point. If \(d(z, p_{1})= \Vert z'-p_{1}' \Vert \) and \(d(z, p_{2})= \Vert z'-p_{2}' \Vert \), then

    $$ d(z, p_{3}) \leq \bigl\Vert z'-p_{3}' \bigr\Vert . $$

Lemma 2.5

([45])

Let \(x_{0} \in \mathbb{M}\), \(\{x_{n}\}\subset \mathbb{M}\), and \(x_{n} \rightarrow x_{0}\). Then, for every \(y\in \mathbb{M}\),

$$ \exp _{x_{n}}^{-1} y \rightarrow \exp _{x_{0}}^{-1} y, \quad\quad \exp _{y}^{-1} x_{n} \rightarrow \exp _{y}^{-1} x_{0}. $$

Definition 2.6

([46])

A subset \(E \subset \mathbb{M}\) is said to be convex if for any \(p, q \in E\), the geodesic connecting p and q is still in E.

Definition 2.7

([41])

Let ω be a real-valued function on \(\mathbb{M}\). Then ω is said to be convex if, for any geodesic γ on \(\mathbb{M}\), the composition \(\omega \circ \gamma : [a,b] \rightarrow \mathbb{R}\) is convex.

Definition 2.8

([34])

Let \(\omega : \mathbb{M} \rightarrow \mathbb{R}\) be convex and \(z \in \mathbb{M} \). A vector \(u \in T_{z} \mathbb{M}\) is called a subgradient of ω at z if

$$ \omega (y) \geq \omega (z)+ \bigl\langle u, \exp _{z}^{-1} y \bigr\rangle , \quad \forall y \in \mathbb{M}. $$

The set of all subgradients of ω at z is called the subdifferential of ω at z and is denoted by \(\partial \omega (z)\); it is a closed and convex set. The domain of ∂ω is \(\mathcal{D}(\partial \omega )=\{z \in \mathbb{M} : \partial \omega (z) \neq \emptyset \}\).
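As a standard example, for a fixed \(q\in \mathbb{M}\), the function \(\omega (y)=\frac{1}{2}d^{2}(y, q)\) is convex on the Hadamard manifold \(\mathbb{M}\) and its subdifferential reduces to the single (gradient) vector

$$ \partial \omega (y)=\bigl\{ -\exp _{y}^{-1} q\bigr\} , \quad \forall y\in \mathbb{M}. $$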

Proposition 2.9

([6])

Let \(\mathbb{M}\) be a Hadamard manifold and let \(\omega : \mathbb{M} \rightarrow \mathbb{R}\) be convex. Then \(\partial \omega (p)\neq \emptyset \) for every \(p\in \mathbb{M}\), that is, \(\mathcal{D}(\partial \omega )=\mathbb{M}\).

Definition 2.10

([45])

Let \(\mathbb{M}\) be a Hadamard manifold and let \(\omega : \mathbb{M} \rightarrow \mathbb{R}\) be a proper, convex, lower semicontinuous function with \(\mathcal{D}(\omega )=\mathbb{M}\). The proximal mapping \(\operatorname{prox}_{\lambda \omega }: \mathbb{M} \rightarrow \mathbb{M}\) is defined as

$$ \operatorname{prox}_{\lambda \omega }(z):=\mathop{\operatorname{argmin}}_{y \in \mathbb{M}} \biggl\{ \omega (y)+\frac{1}{2 \lambda } d^{2}(z, y) \biggr\} , \quad \forall z \in \mathbb{M}, \lambda >0. $$

From [6, Lemma 4.2], \(\operatorname{prox}_{\lambda \omega }(\cdot )\) is single-valued with \(\mathcal{D}(\operatorname{prox}_{\lambda \omega })=\mathbb{M}\), and for each \(z \in \mathbb{M}\), the point \(p=\operatorname{prox}_{\lambda \omega }(z)\) is uniquely characterized by

$$ \exp _{p}^{-1} z \in \lambda \partial \omega (p). $$

Combining this and Definition 2.8, we have the following.

Lemma 2.11

Let ω be a proper, convex, lower semicontinuous function on a Hadamard manifold \(\mathbb{M}\), and let \(z, p \in \mathbb{M}\), \(\lambda >0\). If \(p=\operatorname{prox}_{\lambda \omega }(z)\), then, for all \(y \in \mathbb{M}\),

$$\begin{aligned} \bigl\langle \exp ^{-1}_{p}{y}, \exp ^{-1}_{p}{z} \bigr\rangle \leqslant \lambda \bigl(\omega (y)-\omega (p)\bigr). \end{aligned}$$

Remark 1

From Lemma 2.11, if \(z=\operatorname{prox}_{\lambda \omega }(z)\), then

$$\begin{aligned} z \in \operatorname{Argmin}\bigl\{ \omega (y) : y \in E\bigr\} := \Bigl\{ z \in E : \omega (z)=\min_{y \in E} \omega (y) \Bigr\} . \end{aligned}$$

For a closed and convex set \(E \subseteq \mathbb{M}\), the projection \(P_{E} : \mathbb{M} \rightarrow E\) is defined by \(P_{E}(z)=\operatorname{argmin}_{y\in E} d(z,y)\) for all \(z \in \mathbb{M}\).
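In the Euclidean case \(\mathbb{M}=\mathbb{R}^{n}\), \(d(z,y)= \Vert z-y \Vert \), so \(\operatorname{prox}_{\lambda \omega }\) is the classical Moreau proximal mapping, and \(P_{E}\) coincides with the proximal mapping of the indicator function \(\iota _{E}\) of E (\(\iota _{E}(y)=0\) if \(y\in E\), \(+\infty \) otherwise) for every \(\lambda >0\):

$$ \operatorname{prox}_{\lambda \omega }(z)=\mathop{\operatorname{argmin}}_{y \in \mathbb{R}^{n}} \biggl\{ \omega (y)+\frac{1}{2\lambda } \Vert z-y \Vert ^{2} \biggr\} , \quad\quad P_{E}(z)=\operatorname{prox}_{\lambda \iota _{E}}(z). $$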

Definition 2.12

([47])

For a bifunction \(S: E \times E \rightarrow \mathbb{R}\), \(\forall (z, y) \in E \times E\):

  1. (1)

    If \(S(z, y)+S(y, z) \leq 0\), then S is called monotone.

  2. (2)

    If \(S(z, y) \geq 0 \Rightarrow S(y, z) \leq 0\), then S is called pseudomonotone.

Definition 2.13

([48])

Let \(\mathbb{M}\) be a Hadamard manifold, \(E\subset \mathbb{M}\), and \(S : E \times E \rightarrow \mathbb{R}\). We say that S satisfies a Lipschitz-type condition if there exist \(k_{1}, k_{2} > 0\) such that

$$\begin{aligned} S(x, y)+S(y, z) \geq S(x, z)-k_{1}d^{2}(x,y)-k_{2}d^{2}(y,z), \quad \forall x, y,z \in E. \end{aligned}$$

Lemma 2.14

([49])

Let \(\{a_{n}\}_{n\in \mathbb{N}}\) and \(\{b_{n}\}_{n\in \mathbb{N}}\) be two sequences of positive real numbers, and suppose there exists \(N>0\) such that \(a_{n+1} \leq a_{n}-b_{n}\) for all \(n>N\). Then \(\{a_{n}\}_{n\in \mathbb{N}}\) is convergent and \(\lim_{n\rightarrow \infty } b_{n} = 0\).

In addition, by analogy with Definitions 2.12 and 2.13, for the variational inequality (VI) we have the following definitions. Let V be a single-valued vector field and let \(\mathcal{D}(V)\) be the domain of V.

Definition 2.15

([50])

If there exists a constant \(L > 0\) such that

$$ \bigl\Vert \mathrm{P}_{y, x} V(x)-V(y) \bigr\Vert \leq L d(x, y),\quad \forall x, y \in \mathbb{M}, $$

then V is called Lipschitz continuous.

Definition 2.16

([43])

If, for all x, y \(\in \mathcal{D}(V)\),

$$ \bigl\langle V(x), \exp _{x}^{-1} y\bigr\rangle \geq 0\quad \Rightarrow \quad \bigl\langle V(y), \exp _{y}^{-1} x\bigr\rangle \leq 0, $$

then V is called pseudomonotone.

3 Main result

In this section, inspired by the algorithms in Hieu et al. [37, 38] and Yang et al. [39, 40], we introduce an extragradient-like algorithm for solving the equilibrium problem (EP) and analyze the convergence of the sequences generated by it. Finally, we apply the algorithm to the variational inequality problem (VI) as a particular case.

Unless explicitly stated otherwise, E is a nonempty closed convex subset of \(\mathbb{M}\), and the bifunction S satisfies the following conditions:

\((A1)\):

For each \(z\in E\), S is pseudomonotone on E, i.e., \(S(z, y) \geq 0 \Rightarrow S(y, z) \leq 0\);

\((A2)\):

S satisfies the Lipschitz-type condition on E, i.e., \(S(x, y)+S(y, z) \geq S(x, z)-k_{1}d^{2}(x, y)-k_{2}d^{2}(y,z)\);

\((A3)\):

\(S (x, \cdot )\) is convex and subdifferentiable on E for each fixed \(x \in E\);

\((A4)\):

\(S(\cdot , y)\) is upper semicontinuous, \(\forall y \in E\).

To describe the new algorithm more conveniently, we write \([a]_{+} = \max \{0, a\}\) and adopt the conventions \(\frac{0}{0}=+\infty \) and \(\frac{1}{0}=+\infty \).

Algorithm 3.1

(Extragradient-like algorithm for solving (EP))

Initialization: :

Choose \(x_{0}, \overline{x}_{0}, \overline{x}_{1}\in E\), \(\lambda _{1}>0\), \(\delta \in (0, 1) \), \(\theta \in (0, 1] \), \(\alpha \in (0, 1) \), \(\varphi \in (1-\frac{1-\theta }{2-\theta }\alpha ,1)\).

Iterative Steps: :

Suppose \(x_{n-1}\), \(\overline{x}_{n-1}\), \(\overline{x}_{n}\) are obtained.

Step 1:

Calculate

$$ \textstyle\begin{cases} x_{n}=\gamma _{{x_{n-1}},{\overline{x}_{n}}}{(\varphi )}, \\ \overline{x}_{n+1}=\operatorname{prox}_{\lambda _{n} S(\overline{x}_{n},\cdot )}(x_{n}). \end{cases} $$

If \(\overline{x}_{n+1} = x_{n} = \overline{x}_{n}\), then stop: \(\overline{x}_{n} \) is a solution. Otherwise,

Step 2:

Compute

$$ \lambda _{n+1}= \min \biggl\{ {\lambda _{n}, \frac{\alpha \delta \theta }{4\varphi \varLambda } \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+ d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \bigr) } \biggr\} , $$

where \(\varLambda = [S(\overline{x}_{n-1},\overline{x}_{n+1}) -S( \overline{x}_{n-1},\overline{x}_{n})-S(\overline{x}_{n},\overline{x}_{n+1}) ]_{+}\). Set \(n := n + 1\) and return to Step 1.
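To illustrate the iteration, the following is a minimal sketch of Algorithm 3.1 in the flat case \(\mathbb{M}=\mathbb{R}^{m}\) with a box-shaped feasible set E, where the geodesic point of Step 1 is a convex combination and the proximal subproblem is solved by a generic bounded minimizer (assuming \(S(\overline{x}_{n},\cdot )\) is smooth enough for that solver). The function and parameter names below are our own illustrative choices; on a genuine Hadamard manifold the Euclidean operations must be replaced by exp, \(\exp ^{-1}\), and d.

```python
import numpy as np
from scipy.optimize import minimize


def extragradient_like_ep(S, bounds, x0, xbar0, xbar1, lam1=1.0,
                          delta=0.9, theta=0.9, alpha=0.95,
                          phi=None, eps=1e-10, max_iter=1000):
    """Sketch of Algorithm 3.1 in the Euclidean case M = R^m.

    S(x, y) : bifunction, convex in y for each fixed x (assumption (A3));
    bounds  : list of (lo, hi) pairs describing the box E.
    """
    if phi is None:
        # any phi in (1 - (1-theta)/(2-theta)*alpha, 1) is admissible
        phi = 1.0 - 0.5 * (1.0 - theta) / (2.0 - theta) * alpha
    x_prev, xbar_prev, xbar = (np.asarray(v, float) for v in (x0, xbar0, xbar1))
    lam = lam1
    for _ in range(max_iter):
        # Step 1: x_n = gamma_{x_{n-1}, xbar_n}(phi); Euclidean geodesics are segments
        x = xbar + phi * (x_prev - xbar)
        # xbar_{n+1} = prox_{lam * S(xbar_n, .)}(x_n), solved numerically over E
        obj = lambda z: S(xbar, z) + np.sum((x - z) ** 2) / (2.0 * lam)
        xbar_next = minimize(obj, x, bounds=bounds).x
        # termination criterion d^2(xbar_{n+1}, x_n) + d^2(xbar_n, x_n) <= eps
        if np.sum((xbar_next - x) ** 2) + np.sum((xbar - x) ** 2) <= eps:
            return xbar_next
        # Step 2: self-adaptive stepsize (no Lipschitz-type constants required)
        Lam = max(S(xbar_prev, xbar_next) - S(xbar_prev, xbar)
                  - S(xbar, xbar_next), 0.0)
        if Lam > 0.0:
            lam = min(lam, alpha * delta * theta
                      * (np.sum((xbar - xbar_prev) ** 2)
                         + np.sum((xbar_next - xbar) ** 2)) / (4.0 * phi * Lam))
        x_prev, xbar_prev, xbar = x, xbar, xbar_next
    return xbar
```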

Remark 2

If conditions \((A1)\)–\((A4)\) hold and \(\overline{x}_{n+1} = x_{n} = \overline{x}_{n}\), then by Lemma 2.11 we obtain

$$ S(\overline{x}_{n},y)\geq S(\overline{x}_{n}, \overline{x}_{n+1}) + \frac{1}{\lambda _{n}}\bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}}, \exp ^{-1}_{\overline{x}_{n+1}}{y} \bigr\rangle \geq 0,\quad \forall y\in E. $$

So \(\overline{x}_{n}=\overline{x}_{n+1}\in \operatorname{EP}(S; E)\), which justifies the stopping rule in Step 1.

Remark 3

By Definition 2.13, if the hypothesis \((A2)\) holds, then there exist \(k_{1}>0\), \(k_{2}>0\) such that

$$ \begin{aligned} S(\overline{x}_{n-1},\overline{x}_{n+1}) -S(\overline{x}_{n-1}, \overline{x}_{n}) -S( \overline{x}_{n},\overline{x}_{n+1}) &\leq k_{1}d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+k_{2}d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \\ &\leq \max \{ k_{1}, k_{2} \} \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+ d^{2}( \overline{x}_{n+1},\overline{x}_{n}) \bigr), \end{aligned} $$

then \(\{ \lambda _{n} \} \) is bounded from below by \(\min \{\lambda _{1}, \frac{\alpha \delta \theta }{4\varphi \max \{k_{1}, k_{2}\}}\}\). Moreover, \(\{ \lambda _{n} \} \) is monotonically nonincreasing. Thus, \(\lim_{n \rightarrow \infty } \lambda _{n}\) exists, i.e., \(\lim_{n \rightarrow \infty } \lambda _{n}=\lambda >0\). It should be noted that, if \(S(\overline{x}_{n-1},\overline{x}_{n+1}) -S(\overline{x}_{n-1}, \overline{x}_{n}) -S(\overline{x}_{n},\overline{x}_{n+1})\leq 0\), then \(\lambda _{n+1}:=\lambda _{n}\).

Remark 4

From \(x_{n}=\gamma _{{x_{n-1}}, {\overline{x}_{n}}}{(\varphi )}\), we have \(x_{n}=\exp _{\overline{x}_{n}}{\varphi \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}}}\), which implies that \(x_{n-1}\), \({\overline{x}_{n}}\), and \({x_{n}}\) lie on the same geodesic. From [51], we have

$$\begin{aligned} &\exp ^{-1}_{x_{n-1}}{\overline{x}_{n}}= \frac{1}{1-\varphi }\exp ^{-1}_{x_{n-1}}{x_{n}}, \end{aligned}$$
(3)
$$\begin{aligned} &\exp ^{-1}_{\overline{x}_{n}}{x_{n}}=\exp ^{-1}_{\overline{x}_{n}} \bigl( \exp _{\overline{x}_{n}}{\varphi \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}}} \bigr) =\varphi \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}}, \end{aligned}$$
(4)
$$\begin{aligned} &\exp ^{-1}_{x_{n}}{\overline{x}_{n}}= \frac{-\varphi }{1-\varphi }\exp ^{-1}_{x_{n}}{x_{n-1}}. \end{aligned}$$
(5)

By the definition of \(\overline{x}_{n+1}\) and Remark 2, we know that, if Algorithm 3.1 terminates after finite iterations, then \(\overline{x}_{n+1}\in \operatorname{EP}(S; E)\). Otherwise, we have Lemma 3.1 and Theorem 3.2.

Lemma 3.1

Suppose \((A1)\)–\((A4)\) hold and \(\operatorname{EP}(S; E)\neq \emptyset \), and let \(\{x_{n}\}\) and \(\{\overline{x}_{n}\}\) be the sequences generated by Algorithm 3.1. Then \(\{x_{n}\}\) and \(\{\overline{x}_{n}\}\) are bounded.

Proof

Since \(\overline{x}_{n+1}=\operatorname{prox}_{\lambda _{n} S(\overline{x}_{n},\cdot )}(x_{n})\), by Lemma 2.11, \(\forall z\in E\), we obtain

$$\begin{aligned} &\bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}},\exp ^{-1}_{ \overline{x}_{n+1}}{z}\bigr\rangle \leq \lambda _{n} \bigl(S(\overline{x}_{n},z)-S( \overline{x}_{n}, \overline{x}_{n+1})\bigr), \end{aligned}$$
(6)
$$\begin{aligned} &\bigl\langle \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}},\exp ^{-1}_{ \overline{x}_{n}}{z}\bigr\rangle \leq \lambda _{n-1} \bigl(S(\overline{x}_{n-1},z)-S( \overline{x}_{n-1}, \overline{x}_{n})\bigr). \end{aligned}$$
(7)

Let \(s\in \operatorname{EP}(S; E)\), substituting \(z:=s\) into (6) and \(z:=\overline{x}_{n+1}\) into (7), we have

$$\begin{aligned} &\bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}},\exp ^{-1}_{ \overline{x}_{n+1}}{s} \bigr\rangle \leq \lambda _{n} \bigl(S(\overline{x}_{n},s)-S( \overline{x}_{n}, \overline{x}_{n+1})\bigr), \end{aligned}$$
(8)
$$\begin{aligned} &\bigl\langle \exp ^{-1}_{\overline{x}_{n}}{x_{n-1}}, \exp ^{-1}_{ \overline{x}_{n}}{\overline{x}_{n+1}}\bigr\rangle \leq \lambda _{n-1}\bigl(S( \overline{x}_{n-1},\overline{x}_{n+1}) -S(\overline{x}_{n-1}, \overline{x}_{n})\bigr) . \end{aligned}$$
(9)

Since S is pseudomonotone, and \(s\in \operatorname{EP}(S; E)\), we obtain \(S(s,\overline{x}_{n})\geq 0\), so \(S(\overline{x}_{n},s)\leq 0\). From (8) and \(\lambda _{n}>0\), it follows that

$$\begin{aligned} \bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}}, \exp ^{-1}_{ \overline{x}_{n+1}}{s}\bigr\rangle \leq -\lambda _{n}S( \overline{x}_{n}, \overline{x}_{n+1}). \end{aligned}$$
(10)

Combining (9) and (4), we obtain for \(\lambda _{n}>0\) that

$$\begin{aligned} \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }\bigl\langle \exp ^{-1}_{ \overline{x}_{n}}{x_{n}}, \exp ^{-1}_{\overline{x}_{n}}{ \overline{x}_{n+1}} \bigr\rangle \leq \lambda _{n}\bigl(S( \overline{x}_{n-1},\overline{x}_{n+1})-S( \overline{x}_{n-1}, \overline{x}_{n})\bigr). \end{aligned}$$
(11)

On the other hand, applying inequality (2) in Proposition 2.2 gives

$$\begin{aligned} &2 \bigl\langle \exp _{\overline{x}_{n}}^{-1} x_{n}, \exp _{ \overline{x}_{n}}^{-1} \overline{x}_{n+1} \bigr\rangle \geq d^{2} (\overline{x}_{n}, x_{n} )+d^{2} (\overline{x}_{n}, \overline{x}_{n+1} )-d^{2} (x_{n}, \overline{x}_{n+1} ), \end{aligned}$$
(12)
$$\begin{aligned} &2 \bigl\langle \exp _{\overline{x}_{n+1}}^{-1} x_{n}, \exp _{ \overline{x}_{n+1}}^{-1} s \bigr\rangle \geq d^{2} ( \overline{x}_{n+1}, x_{n} )+d^{2} ( \overline{x}_{n+1}, s )-d^{2} (x_{n}, s ). \end{aligned}$$
(13)

Multiplying both sides of inequality (12) by \(\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }>0\) and adding the result to inequality (13), we get

$$\begin{aligned} &2 \bigl\langle \exp _{\overline{x}_{n+1}}^{-1} x_{n}, \exp _{ \overline{x}_{n+1}}^{-1} s \bigr\rangle +2 \frac{\lambda _{n}}{\lambda _{n-1}} \frac{1}{\varphi } \bigl\langle \exp _{\overline{x}_{n}}^{-1} x_{n}, \exp _{\overline{x}_{n}}^{-1} \overline{x}_{n+1} \bigr\rangle \\ &\quad \geq d^{2} (x_{n}, \overline{x}_{n+1} )+d^{2} ( \overline{x}_{n+1}, s )-d^{2} (x_{n}, s ) \\ &\quad\quad{} +\frac{\lambda _{n}}{\lambda _{n-1}} \frac{1}{\varphi } \bigl(d^{2} (x_{n}, \overline{x}_{n} )+d^{2} ( \overline{x}_{n+1}, \overline{x}_{n} )-d^{2} (x_{n}, \overline{x}_{n+1} ) \bigr) \\ &\quad =\biggl(1-\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }\biggr)d^{2}(x_{n}, \overline{x}_{n+1})-d^{2}(x_{n}, s) \\ &\quad\quad{} +\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi } \bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \bigr)+d^{2}( \overline{x}_{n+1}, s). \end{aligned}$$
(14)

Combining Eqs. (14), (10) and (11), we get for \(\lambda _{n}>0\)

$$\begin{aligned} d^{2}(\overline{x}_{n+1}, s) \leq{}&\biggl( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }-1\biggr)d^{2}(x_{n}, \overline{x}_{n+1})+d^{2}(x_{n}, s) \\ & {} +2\lambda _{n+1}\frac{\lambda _{n}}{\lambda _{n+1}} \bigl(S( \overline{x}_{n-1}, \overline{x}_{n+1})-S(\overline{x}_{n-1}, \overline{x}_{n})-S( \overline{x}_{n},\overline{x}_{n+1}) \bigr) \\ & {} -\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi } \bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \bigr). \end{aligned}$$
(15)

By the definition of \(\lambda _{n}\) and (15), we obtain

$$\begin{aligned} d^{2}(\overline{x}_{n+1}, s) \leq{}& \biggl( \frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }-1\biggr)d^{2}(x_{n}, \overline{x}_{n+1})+d^{2}(x_{n}, s) \\ & {} + \frac{1}{2}\delta \frac{\lambda _{n}}{\lambda _{n+1}} \frac{1}{\varphi }\alpha \theta \bigl(d^{2}(\overline{x}_{n}, \overline{x}_{n-1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})\bigr) \\ & {} -\frac{\lambda _{n}}{\lambda _{n-1}}\frac{1}{\varphi }\bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}(\overline{x}_{n+1}, \overline{x}_{n})\bigr). \end{aligned}$$
(16)

From Remark 3, \(\lambda _{n}\rightarrow \lambda >0\) and \(0<\delta <1\). Hence, there exists \(N\geq 0\) such that, for all \(n \geq N\), \(0<\lambda _{n} \frac{\delta }{\lambda _{n+1}}<1\) and \(\frac{\lambda _{n}}{\lambda _{n-1}} \frac{1}{\varphi }-1 \leq \frac{\lambda _{n-1}}{\lambda _{n-1}} \frac{1}{\varphi }-1= \frac{1}{\varphi }-1\). Thus, from (16), we have

$$\begin{aligned} d^{2}(\overline{x}_{n+1}, s) \leq{}& \biggl( \frac{1}{\varphi }-1\biggr)d^{2}(x_{n}, \overline{x}_{n+1})+d^{2}(x_{n}, s)-\frac{\alpha }{\varphi }\bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}(\overline{x}_{n+1}, \overline{x}_{n})\bigr) \\ & {} + \frac{1}{2}\delta \frac{1}{\varphi }\alpha \theta \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})\bigr), \quad \forall n \geq N. \end{aligned}$$
(17)

Now, we estimate the term \(d^{2}(\overline{x}_{n+1}, s)\) in (17). Fix \(n \geq 0\) and set \(p=\overline{x}_{n+1}\), \(q=x_{n}\) in the geodesic triangle \(\triangle (s, p, q)\). Then, using Proposition 2.3 with the comparison triangle \(\triangle (s', p', q')\), we have

$$ d(s, \overline{x}_{n+1})= d(s, p)= \bigl\Vert p'-s' \bigr\Vert , \quad\quad d(s, x_{n})= d(s, q)= \bigl\Vert q'-s' \bigr\Vert . $$

Recall from Algorithm 3.1 that \(x_{n+1}=\exp _{\overline{x}_{n+1}}\varphi \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}}\). The comparison point of \(x_{n+1}\) is \(x_{n+1}'=(1-\varphi )p'+\varphi q'\). Let β and \(\beta '\) denote the angles at s and \(s'\), respectively. From Lemma 2.4(1), we have \(\beta \leq \beta '\), thus \(\cos \beta ' \leq \cos \beta \). Then, from Lemma 2.4(2) we have

$$\begin{aligned} d^{2}(x_{n+1}, s) \leq{}& \bigl\Vert (1-\varphi )p'+\varphi q'-s' \bigr\Vert ^{2} \\ ={}& \bigl\Vert (1-\varphi ) \bigl(p'-s'\bigr)+ \varphi \bigl(q'-s'\bigr) \bigr\Vert ^{2} \\ ={}&(1-\varphi )^{2} \bigl\Vert p'-s' \bigr\Vert ^{2}+\varphi ^{2} \bigl\Vert q'-s' \bigr\Vert ^{2} +2\varphi (1- \varphi ) \bigl\Vert p'-s' \bigr\Vert \bigl\Vert q'-s' \bigr\Vert \cos \beta ' \\ \leq{}&(1-\varphi )^{2}d^{2}(p, s)+\varphi ^{2}d^{2}(q,s) +2\varphi (1- \varphi )d(p, s)d(q, s)\cos \beta \\ ={}&(1-\varphi )^{2}d^{2}(\overline{x}_{n+1}, s)+ \varphi ^{2}d^{2}(x_{n},s) +2\varphi (1-\varphi ) \bigl\langle \exp ^{-1}_{s}{\overline{x}_{n+1}}, \exp ^{-1}_{s}{x_{n}}\bigr\rangle . \end{aligned}$$
(18)

Using the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned} 2\bigl\langle \exp ^{-1}_{s}{x_{n}},\exp ^{-1}_{s}{\overline{x}_{n+1}} \bigr\rangle &\leq 2 \bigl\Vert \exp ^{-1}_{s}{x_{n}} \bigr\Vert \bigl\Vert \exp ^{-1}_{s}{\overline{x}_{n+1}} \bigr\Vert \\ &\leq d^{2}(s,x_{n})+d^{2}(s, \overline{x}_{n+1}). \end{aligned}$$
(19)

By substituting (19) into (18), we get

$$\begin{aligned} d^{2}(\overline{x}_{n+1}, s)\geq - \frac{\varphi }{1-\varphi }d^{2}(x_{n}, s)+\frac{1}{1-\varphi }d^{2}(x_{n+1},s). \end{aligned}$$
(20)

Consequently, combining (17) and (20), we have for all \(n\geq N\)

$$\begin{aligned} &\frac{1}{1-\varphi }d^{2}(x_{n+1}, s)-\frac{\varphi }{1-\varphi }d^{2}(x_{n}, s) \\ &\quad \leq d^{2}(x_{n}, s)+\biggl(\frac{1}{\varphi }-1 \biggr)d^{2}(x_{n}, \overline{x}_{n+1}) - \frac{\alpha }{\varphi }\bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}( \overline{x}_{n+1}, \overline{x}_{n})\bigr) \\ &\quad\quad{} +\frac{\alpha }{2\varphi }\theta \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})\bigr). \end{aligned}$$
(21)

Next, we need Lemma 2.14 to complete the proof. From (21) and (2), we get

$$\begin{aligned} &\frac{1}{1-\varphi }d^{2}(x_{n+1}, s) +\frac{\alpha \theta }{2\varphi }d^{2}( \overline{x}_{n+1}, \overline{x}_{n}) \\ &\quad \leq \frac{\varphi }{1-\varphi }d^{2}(x_{n}, s)+ \frac{\alpha \theta }{2\varphi }d^{2}(\overline{x}_{n+1}, \overline{x}_{n})+d^{2}(x_{n}, s)+\biggl( \frac{1}{\varphi }-1\biggr)d^{2}(x_{n}, \overline{x}_{n+1}) \\ &\quad\quad{} -\frac{\alpha }{\varphi }\bigl(d^{2}(x_{n}, \overline{x}_{n})+d^{2}( \overline{x}_{n+1}, \overline{x}_{n})\bigr) + \frac{\alpha \theta }{2\varphi }\bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})\bigr) \\ &\quad =\frac{1}{1-\varphi }d^{2}(x_{n}, s)+ \frac{\alpha \theta }{2\varphi }d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+ \frac{(\theta -1)\alpha }{\varphi } d^{2}( \overline{x}_{n+1}, \overline{x}_{n}) \\ &\quad\quad{} +\biggl(\frac{1}{\varphi }-1\biggr)d^{2}( \overline{x}_{n+1},x_{n})- \frac{\alpha }{\varphi }d^{2}(x_{n}, \overline{x}_{n}) \\ &\quad \leq \frac{1}{1-\varphi }d^{2}(x_{n},s)+ \frac{\alpha \theta }{2\varphi }d^{2}(\overline{x}_{n}, \overline{x}_{n-1}) +\frac{(\theta -1)\alpha }{\varphi } d^{2}( \overline{x}_{n+1}, \overline{x}_{n})+\biggl( \frac{1}{\varphi }-1\biggr)d^{2}(\overline{x}_{n+1},x_{n}) \\ &\quad\quad{} -\frac{\alpha }{\varphi } \bigl(d^{2}(x_{n}, \overline{x}_{n+1})+d^{2}( \overline{x}_{n}, \overline{x}_{n+1})-2\bigl\langle \exp ^{-1}_{ \overline{x}_{n+1}}{x_{n}}, \exp ^{-1}_{\overline{x}_{n+1}}{ \overline{x}_{n}}\bigr\rangle \bigr). \end{aligned}$$
(22)

Moreover, note that \(\theta \in (0, 1] \), then \(2-\theta >0\), and it follows from the Cauchy–Schwarz inequality that

$$\begin{aligned} &2\bigl\langle \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}},\exp ^{-1}_{ \overline{x}_{n+1}}{\overline{x}_{n}}\bigr\rangle \leq 2 \bigl\Vert \exp ^{-1}_{ \overline{x}_{n+1}}{x_{n}} \bigr\Vert \bigl\Vert \exp ^{-1}_{\overline{x}_{n+1}}{ \overline{x}_{n}} \bigr\Vert \\ &\quad \leq \frac{1}{2-\theta } \bigl\Vert \exp ^{-1}_{\overline{x}_{n+1}}{x_{n}} \bigr\Vert ^{2} +(2-\theta ) \bigl\Vert \exp ^{-1}_{\overline{x}_{n+1}}{ \overline{x}_{n}} \bigr\Vert ^{2} \\ &\quad =\frac{1}{2-\theta } d^{2}({\overline{x}_{n+1}}, {x_{n}})+(2-\theta )d^{2}({ \overline{x}_{n+1}}, { \overline{x}_{n}}). \end{aligned}$$
(23)

Equations (23) and (22) imply

$$\begin{aligned} \begin{aligned}[b] &\frac{1}{1-\varphi }d^{2}(x_{n+1}, s)+\frac{\alpha \theta }{2\varphi }d^{2}( \overline{x}_{n+1}, \overline{x}_{n}) \\ &\quad \leq \frac{1}{1-\varphi }d^{2}(x_{n}, s)+ \frac{\alpha \theta }{2\varphi }d^{2}(\overline{x}_{n}, \overline{x}_{n-1})+\biggl( \frac{1}{\varphi }-1-\frac{\alpha }{\varphi }+ \frac{\alpha }{\varphi (2-\theta )}\biggr)d^{2}(\overline{x}_{n+1},x_{n}). \end{aligned} \end{aligned}$$
(24)

Now we set

$$\begin{aligned} &a_{n}=\frac{1}{1-\varphi }d^{2}(x_{n}, s)+ \frac{\alpha \theta }{2\varphi }d^{2}(\overline{x}_{n}, \overline{x}_{n-1}), \\ &b_{n}=-\biggl(\frac{1}{\varphi }-1-\frac{\alpha }{\varphi }+ \frac{\alpha }{\varphi (2-\theta )}\biggr)d^{2}(\overline{x}_{n+1}, x_{n}). \end{aligned}$$

It follows from \(\varphi \in (1-\frac{1-\theta }{2-\theta }\alpha ,1)\) that \(b_{n}>0\). Then, from (24), we have \(a_{n+1}\leq a_{n}-b_{n} \) for all \(n\geq N\). By Lemma 2.14, \(\{a_{n}\}\) is bounded, \(\lim_{n\rightarrow \infty }a_{n}\) exists, \(\lim_{n\rightarrow \infty }b_{n}=0\), and hence \(\lim_{n\rightarrow \infty }d(\overline{x}_{n+1},x_{n})=0\).

Moreover, by using the triangle inequality, it follows that

$$ \begin{aligned} &d(\overline{x}_{n}, x_{n})+d(x_{n}, \overline{x}_{n-1})\geq d( \overline{x}_{n}, \overline{x}_{n-1}), \\ &d(x_{n}, x_{n-1})+d(x_{n-1}, \overline{x}_{n-1}) \geq d(x_{n}, \overline{x}_{n-1}). \end{aligned} $$

Combining this and Eqs. (4) and (5), we can obtain

$$\begin{aligned} \begin{gathered} \lim_{n\rightarrow \infty }d( \overline{x}_{n}, x_{n})=\lim_{n\rightarrow \infty }d(x_{n}, x_{n-1})=0, \\ \lim_{n\rightarrow \infty }d(\overline{x}_{n}, \overline{x}_{n-1})= \lim_{n\rightarrow \infty }d(\overline{x}_{n+1}, x_{n})=0, \\ \lim_{n\rightarrow \infty }a_{n}=\lim_{n\rightarrow \infty } \biggl(\frac{1}{1-\varphi }d^{2}(x_{n}, s)+ \frac{\alpha \theta }{2\varphi }d^{2}(\overline{x}_{n}, \overline{x}_{n-1})\biggr)>0. \end{gathered} \end{aligned}$$
(25)

Thus, we see that \(\{x_{n}\}\) and \(\{\overline{x}_{n}\}\) are bounded. □

Theorem 3.2

Assume that \((A1)\)–\((A4)\) hold and \(\operatorname{EP}(S; E)\neq \emptyset \). Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges to a solution of the equilibrium problem (EP).

Proof

By Lemma 3.1, we know that \(\{x_{n}\}\) and \(\{\overline{x}_{n}\}\) are bounded, and there exists a subsequence \(\{x_{l}\}\) of \(\{x_{n}\}\) that converges to \(x^{*}\in E\). It follows from (25) that

$$\begin{aligned} \lim_{l\rightarrow \infty }d(\overline{x}_{l}, x_{l})=\lim_{l\rightarrow \infty }d(x_{l}, \overline{x}_{l+1})= \lim_{l\rightarrow \infty }d( \overline{x}_{l}, \overline{x}_{l-1})=0. \end{aligned}$$
(26)

It follows from inequality (6) that

$$\begin{aligned} \lambda _{l} S(\overline{x}_{l}, z)\geq \lambda _{l}S(\overline{x}_{l}, \overline{x}_{l+1})+ \bigl\langle \exp ^{-1}_{\overline{x}_{l+1}}{x_{l}}, \exp ^{-1}_{\overline{x}_{l+1}}{z}\bigr\rangle , \quad \forall z\in E. \end{aligned}$$
(27)

On the other hand, since S satisfies the Lipschitz-type condition, we have

$$\begin{aligned} \lambda _{l}S(\overline{x}_{l},\overline{x}_{l+1}) \geq{}& \lambda _{l}\bigl(S( \overline{x}_{l-1}, \overline{x}_{l+1})-S(\overline{x}_{l-1}, \overline{x}_{l}) \bigr) \\ &{} -\lambda _{l}k_{1}d^{2}( \overline{x}_{l}, \overline{x}_{l-1})- \lambda _{l}k_{2}d^{2}(\overline{x}_{l}, \overline{x}_{l+1}). \end{aligned}$$
(28)

From Eqs. (11) and (28), it follows that

$$\begin{aligned} \lambda _{l}S(\overline{x}_{l},\overline{x}_{l+1}) \geq{}& \frac{\lambda _{l}}{\lambda _{l-1}}\frac{1}{\varphi } \bigl\langle \exp ^{-1}_{ \overline{x}_{l}}{x_{l}}, \exp ^{-1}_{\overline{x}_{l}}{ \overline{x}_{l+1}} \bigr\rangle \\ & {} -\lambda _{l}k_{1}d^{2}( \overline{x}_{l}, \overline{x}_{l-1})- \lambda _{l}k_{2}d^{2}(\overline{x}_{l}, \overline{x}_{l+1}). \end{aligned}$$
(29)

Now, combining (27) and (29), we get, for all \(z \in E\),

$$\begin{aligned} S(\overline{x}_{l},z)\geq{}& \frac{1}{\lambda _{l-1}}\frac{1}{\varphi } \bigl\langle \exp ^{-1}_{\overline{x}_{l}}{x_{l}},\exp ^{-1}_{\overline{x}_{l}}{ \overline{x}_{l+1}}\bigr\rangle + \frac{1}{\lambda _{l}}\bigl\langle \exp ^{-1}_{ \overline{x}_{l+1}}{x_{l}}, \exp ^{-1}_{\overline{x}_{l+1}}{z}\bigr\rangle \\ & {} -k_{1}d^{2}(\overline{x}_{l}, \overline{x}_{l-1})-k_{2}d^{2}( \overline{x}_{l}, \overline{x}_{l+1}). \end{aligned}$$
(30)

From Lemma 2.5, (26), (30), condition \((A4)\), the boundedness of \(\{x_{n}\}\), and \(\lim_{n\rightarrow \infty }\lambda _{n}=\lambda >0\), we obtain

$$\begin{aligned} S\bigl(x^{*},z\bigr)\geq 0, \quad \forall z\in E. \end{aligned}$$
(31)

So we obtain \(x^{*}\in \operatorname{EP}(S; E)\).

Next, we prove that \(\{{x_{n}}\}_{n\in \mathbb{N}}\) has a unique cluster point. Suppose, to the contrary, that \(\{{x_{n}}\}_{n\in \mathbb{N}}\) has at least two cluster points \(\hat{x}_{1}, \hat{x}_{2}\in \operatorname{EP}(S; E)\). Let \(\{{x_{n_{i}}}\}\) and \(\{{x_{n_{j}}}\}\) be subsequences such that \(x_{n_{i}}\rightarrow \hat{x}_{1}\) as \(i\rightarrow \infty \) and \(x_{n_{j}}\rightarrow \hat{x}_{2}\) as \(j\rightarrow \infty \). By Proposition 2.2, we have

$$\begin{aligned} \lim_{n\to \infty }d^{2}(x_{n}, \hat{x}_{2}) &= \lim_{i\to \infty }d^{2}(x_{n_{i}}, \hat{x}_{2}) \\ &\geq \lim_{i\to \infty } \bigl(d^{2}(x_{n_{i}}, \hat{x}_{1})+d^{2}( \hat{x}_{1}, \hat{x}_{2})-2 \bigl\langle \exp _{ \hat{x}_{1}}^{-1} x_{n_{i}}, \exp _{\hat{x}_{1}}^{-1} \hat{x}_{2} \bigr\rangle \bigr) \\ &= \lim_{n\to \infty } d^{2}(x_{n}, \hat{x}_{1})+d^{2}( \hat{x}_{1}, \hat{x}_{2}) \end{aligned}$$
(32)

and

$$\begin{aligned} \lim_{n\to \infty }d^{2}(x_{n}, \hat{x}_{1})&=\lim_{j\to \infty }d^{2}(x_{n_{j}}, \hat{x}_{1}) \\ &\geq \lim_{j\to \infty } \bigl(d^{2}(x_{n_{j}}, \hat{x}_{2})+d^{2}( \hat{x}_{2}, \hat{x}_{1})-2 \bigl\langle \exp _{\hat{x}_{2}}^{-1} x_{n_{j}}, \exp _{\hat{x}_{2}}^{-1} \hat{x}_{1} \bigr\rangle \bigr) \\ &=\lim_{n\to \infty } d^{2}(x_{n}, \hat{x}_{2})+d^{2}( \hat{x}_{2}, \hat{x}_{1}). \end{aligned}$$
(33)

Summing (32) and (33) yields \(d^{2}(\hat{x}_{1},\hat{x}_{2})\leq 0\), that is, \(\hat{x}_{1}=\hat{x}_{2}\). So \(\{{x_{n}}\}_{n\in \mathbb{N}}\) has a unique cluster point, and hence the whole sequence converges to a solution \(x^{*}\in \operatorname{EP}(S; E)\). □

Remark 5

From Algorithm 3.1, we can obtain a new method for solving the pseudomonotone variational inequality (VI). If the vector field V is Lipschitz continuous and pseudomonotone, then conditions \((A1)\)–\((A4)\) hold for the bifunction \(S(x,y)=\langle V(x), \exp ^{-1}_{x}{y}\rangle \) with \(k_{1}=k_{2}=\frac{L}{2}\). So we can obtain the following algorithm for solving (VI).
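In the Euclidean case this choice of constants can be verified directly: for \(S(x,y)=\langle V(x), y-x\rangle \) with V being L-Lipschitz,

$$\begin{aligned} S(x, y)+S(y, z)-S(x, z)&=\bigl\langle V(y)-V(x), z-y\bigr\rangle \\ &\geq -L \Vert y-x \Vert \Vert z-y \Vert \geq -\frac{L}{2} \Vert y-x \Vert ^{2}-\frac{L}{2} \Vert z-y \Vert ^{2}, \end{aligned}$$

so \((A2)\) holds with \(k_{1}=k_{2}=\frac{L}{2}\); the Riemannian case is handled analogously by means of parallel transport.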

Algorithm 3.2

(Extragradient-like algorithm for solving (VI))

Initialization: :

Choose \(x_{0}, \overline{x}_{0}, \overline{x}_{1}\in E\), \(\lambda _{1}>0\), \(\delta \in (0, 1) \), \(\theta \in (0, 1] \), \(\alpha \in (0, 1) \), \(\varphi \in (1-\frac{1-\theta }{2-\theta }\alpha ,1)\).

Iterative Steps: :

Suppose \(x_{n-1}\), \(\overline{x}_{n-1}\), \(\overline{x}_{n}\) are obtained.

Step 1:

Calculate

$$ \textstyle\begin{cases} x_{n}=\gamma _{{x_{n-1}},{\overline{x}_{n}}}{(\varphi )}, \\ \overline{x}_{n+1}= P_{E}(\exp _{x_{n}}{-\lambda _{n}V(\overline{x}_{n})}). \end{cases} $$

If \(\overline{x}_{n+1} = x_{n} = \overline{x}_{n}\), then stop: \(\overline{x}_{n} \) is a solution. Otherwise,

Step 2:

Compute

$$ \lambda _{n+1}= \min \biggl\{ \lambda _{n}, { \frac{\alpha \delta \theta }{4\varphi \varLambda } \bigl(d^{2}( \overline{x}_{n}, \overline{x}_{n-1})+ d^{2}(\overline{x}_{n+1}, \overline{x}_{n}) \bigr) } \biggr\} , $$

where \(\varLambda = [\langle \mathrm{P}_{\overline{x}_{n},\overline{x}_{n-1}}V( \overline{x}_{n-1})-V(\overline{x}_{n}), \exp ^{-1}_{\overline{x}_{n}}{ \overline{x}_{n+1}}\rangle ]_{+}\). Set \(n := n + 1\) and return to Step 1.
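For intuition, in the Euclidean case \(\mathbb{M}=\mathbb{R}^{m}\) (where \(\exp _{x}v=x+v\) and parallel transport is the identity) the steps of Algorithm 3.2 reduce to

$$ x_{n}=\overline{x}_{n}+\varphi (x_{n-1}-\overline{x}_{n}), \quad\quad \overline{x}_{n+1}=P_{E}\bigl(x_{n}-\lambda _{n} V(\overline{x}_{n})\bigr), \quad\quad \varLambda =\bigl[\bigl\langle V(\overline{x}_{n-1})-V(\overline{x}_{n}), \overline{x}_{n+1}-\overline{x}_{n}\bigr\rangle \bigr]_{+}, $$

that is, a projection-type step with the same self-adaptive stepsize rule as in Algorithm 3.1.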

As for the convergence of Algorithm 3.2: if Algorithm 3.2 terminates after finitely many iterations, then \(\overline{x}_{n+1} = x_{n} = \overline{x}_{n}\), so \(\overline{x}_{n}=P_{E}(\exp _{\overline{x}_{n}}{(-\lambda _{n} V(\overline{x}_{n}))})\), and \(\overline{x}_{n}\in \operatorname{VI}(V,E)\) follows directly from [43]; otherwise, the sequence \(\{x_{n}\}\) generated by Algorithm 3.2 converges to some \(x^{*}\in \operatorname{VI}(V,E)\) as \(n\rightarrow \infty \). The analysis is completely similar to that of Theorem 3.2 and is omitted here.

4 Numerical experiments

In this section, we perform two experiments to show the numerical behavior of the proposed algorithms. We take \(\mathbb{M}=\mathbb{R}^{m}_{++}=\{x \in \mathbb{R}^{m}: x_{i}>0, i=1,\ldots,m\}\) and present two experiments, Test 1 and Test 2, to verify the effectiveness of Algorithms 3.1 and 3.2, respectively.

We choose \(\alpha =0.95\), \(\delta =0.90\), \(\theta =0.5, 0.75, 0.90\), \(\varphi \in (1-\frac{1-\theta }{2-\theta }\alpha , 1)\) a random number, and \(\overline{x}_{1}\), \(x_{0}\), \(\overline{x}_{0}\) generated by the Matlab code 10*rand(m,1). The termination criterion is

$$\begin{aligned} \varepsilon \geq d^{2}(\overline{x}_{n+1},x_{n})+d^{2}( \overline{x}_{n}, x_{n}) . \end{aligned}$$

Example 4.1

Let \(\mathbb{R}_{++}=\{x \in \mathbb{R}: x>0\}\) and let \(\mathbb{M}_{1}= (\mathbb{R}_{++},\langle \cdot , \cdot \rangle )\) be the Riemannian manifold with the metric \(\langle u, v\rangle _{x} := \frac{1}{x^{2}}u v\) for all \(x\in \mathbb{R}_{++}\) and \(u,v\in T_{x}\mathbb{R}_{++}\cong \mathbb{R}\). It can be seen from Ref. [52] that the sectional curvature of \(\mathbb{M}_{1}\) is zero, so \(\mathbb{M}_{1}\) is a Hadamard manifold. Suppose that \(x, y\in \mathbb{M}_{1}\) and \(v\in T_{x}\mathbb{M}_{1}\) with \(\Vert v \Vert =1\); then

$$\begin{aligned} \textstyle\begin{cases} d(x, y):= \vert \ln (\frac{x}{y}) \vert , \\ \exp _{x} tv=x e^{ (v / x ) t}, \quad t\in (0,+\infty ), \\ \exp ^{-1}_{x}y=x \ln (\frac{y}{x} ). \end{cases}\displaystyle \end{aligned}$$
(34)

Let \(\mathbb{R}_{++}^{m}\) be the product space of m copies of \(\mathbb{R}_{++}\), that is, \(\mathbb{R}_{++}^{m}=\{(x_{1}, x_{2},\ldots,x_{m})^{T}: x_{i}\in \mathbb{R}_{++}, i=1,2,\ldots,m\}\), and let \(\mathbb{M}= (\mathbb{R}_{++}^{m},\langle \cdot , \cdot \rangle )\) be the corresponding m-dimensional Hadamard manifold equipped with the product metric. The distance between \(x=(x_{i})\) and \(y=(y_{i})\), \(i=1,2,\ldots, m\), is then \(d(x, y)= \Vert \ln (x)-\ln (y) \Vert =(\sum_{i=1}^{m}\ln ^{2}(x_{i} / y_{i}))^{1/2}\), and the exponential map and its inverse act componentwise as in (34).
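For the experiments below, the manifold operations on \(\mathbb{R}^{m}_{++}\) can be implemented componentwise from (34). The following helper functions (our own names) are one possible sketch; with them, the point \(x_{n}=\gamma _{x_{n-1},\overline{x}_{n}}(\varphi )\) of Algorithms 3.1 and 3.2 is geodesic(xbar_n, x_prev, phi).

```python
import numpy as np

def dist(x, y):
    """Riemannian distance on R^m_++ : || ln(x) - ln(y) ||."""
    return np.linalg.norm(np.log(x) - np.log(y))

def exp_map(x, v):
    """exp_x(v) = x * e^{v / x}, applied componentwise."""
    return x * np.exp(v / x)

def log_map(x, y):
    """exp_x^{-1}(y) = x * ln(y / x), applied componentwise."""
    return x * np.log(y / x)

def geodesic(x, y, t):
    """Point at parameter t on the geodesic from x toward y: exp_x(t * exp_x^{-1}(y))."""
    return exp_map(x, t * log_map(x, y))
```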

Test 1

In this test, we verify the effectiveness of Algorithm 3.1 in \(\mathbb{M}= (\mathbb{R}^{m}_{++},\langle \cdot , \cdot \rangle )\). We consider an extension of the Nash equilibrium model introduced in [53, 54], of the form

$$\begin{aligned} S(x,y) = \langle P_{1}x+P_{2}y+p, y-x \rangle , \end{aligned}$$

where the feasible set \(E\subset \mathbb{M}\) is given by

$$ E:=\bigl\{ x=(x_{1}, x_{2},\ldots,x_{m})^{T}: 1\leq x_{i}\leq 100, i=1,\ldots, m\bigr\} , $$

\(x,y\in E\), \(p=(p_{1},p_{2},\ldots,p_{m})^{T} \in \mathbb{R}^{m}\) is chosen randomly with its elements in \([1, m]\), and \(P_{1}\) and \(P_{2}\) are square matrices of order m such that \(P_{2}\) is symmetric positive semidefinite and \(P_{2}-P_{1}\) is negative semidefinite.

From [54], we know that S is pseudomonotone. Moreover, from [31, Lemma 6.2], S satisfies \((A2)\) with the Lipschitz-type constants \(k_{1} = k_{2} = \frac{ \Vert P_{2}-P_{1} \Vert }{2}\). Assumptions \((A3)\) and \((A4)\) are automatically fulfilled, so Algorithm 3.1 can be applied in this case.

For the numerical experiment, we take \(\lambda _{1}=\frac{1}{ \Vert P_{2}-P_{1} \Vert }\) and \(m = 20, 300, 500\). For each m, we generated two random samples with different choices of \(P_{1}\), \(P_{2}\), and p. The number of iterations (Iter.) and the computing time (Time), measured in seconds, are reported in Table 1.
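One simple way to draw random data with the required structure (our own construction, not necessarily the one used to produce Table 1) is to set \(P_{2}=Q^{T}Q\) and \(P_{1}=P_{2}+R^{T}R\) for random Q and R, so that \(P_{2}\) is symmetric positive semidefinite and \(P_{2}-P_{1}=-R^{T}R\) is negative semidefinite:

```python
import numpy as np

def random_test_problem(m, seed=None):
    """Random data for Test 1: P2 symmetric PSD and P2 - P1 negative semidefinite."""
    rng = np.random.default_rng(seed)
    Q = rng.standard_normal((m, m))
    R = rng.standard_normal((m, m))
    P2 = Q.T @ Q               # symmetric positive semidefinite
    P1 = P2 + R.T @ R          # then P2 - P1 = -R.T @ R is negative semidefinite
    p = rng.uniform(1, m, m)   # elements of p drawn from [1, m]
    S = lambda x, y: (P1 @ x + P2 @ y + p) @ (y - x)   # S(x, y) = <P1 x + P2 y + p, y - x>
    return S, P1, P2, p
```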

Table 1 Performance of Algorithm 3.1: number of iterations (Iter.) and computing time (Time) in seconds for \(m=20, 300, 500\)

Test 2

We consider the performance of Algorithm 3.2 in \(\mathbb{M}= (\mathbb{R}^{m}_{++},\langle \cdot , \cdot \rangle )\). Let the feasible set \(E:=\{x=(x_{1}, x_{2},\ldots,x_{m})^{T}: 1\leq x_{i}\leq 10, i=1,\ldots, m\}\) be a closed convex subset of \(\mathbb{R}^{m}_{++}\) and let \(V : E \rightarrow T\mathbb{M}\) be the single-valued vector field defined componentwise by

$$\begin{aligned} V(x):=(x_{1}\ln {x_{1}}, x_{2}\ln {x_{2}}, \ldots , x_{m}\ln {x_{m}})^{T}, \quad \forall x\in E. \end{aligned}$$

According to [55, Example 1], V is monotone and Lipschitz continuous. Therefore, conditions \((A1)\) and \((A2)\) hold, and \((A3)\) and \((A4)\) are automatically satisfied, so Algorithm 3.2 can be applied in this case.

For the numerical experiment, we take \(\lambda _{1}=0.4\), \(m=200,300,500\), and generate three random samples with different choices of initial points. The number of iterations (Iter.) and the computing time (Time), measured in seconds, are reported in Table 2.

Table 2 Performance of Algorithm 3.2: number of iterations (Iter.) and computing time (Time) in seconds for \(m=200, 300, 500\)

5 Conclusions

In this paper, a new algorithm for solving the equilibrium problem on Hadamard manifolds is presented, in which the bifunction satisfies a Lipschitz-type condition and is pseudomonotone. Compared with the existing algorithm, the advantage of the proposed algorithm is that the Lipschitz-type constants need not be known.