1 Introduction

Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let \(A:H\rightarrow H\) be a single-valued mapping. Let \(\langle \cdot , \cdot \rangle \) and \(\Vert \cdot \Vert \) denote the inner product and induced norm on H, respectively. The classical variational inequality (VI) problem, introduced independently by Fichera (1963) and Stampacchia (1968), is to find a point \(x^*\in C\) such that

$$\begin{aligned} \langle Ax^*, x-x^* \rangle \ge 0, \quad \forall x\in C. \end{aligned}$$
(1.1)

We denote the solution set of VI (1.1) by \(V_I,\) and we denote the trivial solution set and non-trivial solution set of the VI (1.1) by \(V_T\) and \(V_N,\) respectively, that is,

$$\begin{aligned}&V_T:=\{x^*\in C~|~ \langle Ax^*, x-x^* \rangle = 0 \quad \forall x\in C\}, \\&V_N:= V_I\setminus V_T. \end{aligned}$$

Over the years, variational inequality theory has proven to be a major area of research in mathematical analysis, and it has drawn the attention of several researchers due to its wide range of applications in diverse fields, such as optimal control, game theory, signal processing, linear programming, image recovery, etc. (see, for instance, Chen et al. 2022; Godwin et al. 2023a, c; Iiduka 2012; Kinderlehrer and Stampacchia 1980; Ogwo et al. 2022a; Wickramasinghe et al. 2023). The VI is also known to generalize several other problems in nonlinear analysis, such as fixed point problems, Nash equilibria, complementarity problems, etc. (see Alakoya et al. 2023; Liu et al. 2016; Taiwo et al. 2021 for details). For instance, if \(f:C\rightarrow \mathbb {R}\) is a convex differentiable function and \(A(x)=\nabla f(x)\), then the VI (1.1) reduces to the minimization problem (see Yin et al. 2022)

$$\begin{aligned} \min _{x\in C} f(x). \end{aligned}$$
(1.2)

In solving VIs, two approaches are generally known, namely the regularized method (RM) and the projection method (PM). Our focus in this work is the projection method. Several works on PMs have appeared over the years (see Gibali et al. 2017; Kraikaew and Saejung 2014; Maingé 2008). The simplest known projection method is the gradient method (GM), which requires only one projection onto the feasible set C. However, this method has a major drawback: it imposes the stringent strong monotonicity or inverse strong monotonicity condition on the cost operator (see Xiu and Zhang 2003). To tackle this setback, Korpelevich (1976) and Antipin (1976) proposed the extragradient method (EGM), which only requires the cost operator to be monotone. Although the EGM is an improvement over the earlier GM, it requires computing two projections onto the feasible set C per iteration, which is a major barrier to implementing the EGM. To overcome this bottleneck, several researchers have improved on the EGM by proposing new iterative methods that require only one projection onto the feasible set per iteration and avoid the stringent conditions of the GM. These new methods have proven to be more efficient and easier to implement than the EGM. One such improvement is the subgradient extragradient method (SEGM), also known as the modified extragradient method (MEGM), which was introduced by Censor et al. (2011). In the SEGM, the second projection onto the feasible set C is replaced by a projection onto a half-space, which can be calculated using an explicit formula. Another such method is Tseng's extragradient method (TEGM), also known as the forward-backward-forward algorithm, which was proposed by Tseng (2000). Yet another improvement over the EGM is the projection and contraction method (PCM) of He (1997). Like the other two methods, the PCM requires only one projection onto the feasible set per iteration, and is presented as follows.

$$\begin{aligned} {\left\{ \begin{array}{ll} x_0\in H, \\ y_n=P_C(x_n-\gamma Ax_n),\\ d(x_n,y_n):= (x_n-y_n)-\gamma (Ax_n-Ay_n),\\ x_{n+1}=x_n-\mu \beta _n d(x_n,y_n), \end{array}\right. } \end{aligned}$$
(1.3)

where \(P_C\) is the projection onto the feasible set C, \(\mu \in (0,2), ~\gamma \in (0,\frac{1}{L}),\) and \(\beta _n:=\frac{\alpha (x_n,y_n)}{\Vert d(x_n,y_n)\Vert ^2}, ~~ \alpha (x_n,y_n):=\langle x_n-y_n, d(x_n,y_n) \rangle , \quad \forall n\ge 0,\) where L is the Lipschitz constant of the cost operator. In recent years, researchers have developed new variants of the PCM and have obtained fascinating results (see Cholamjiak et al. 2020; Gibali et al. 2020).

We note that the PCM (1.3) depends on the Lipschitz constant L of the cost operator, which is a significant drawback, since L is often difficult to compute. One of our goals in this work is to overcome this drawback of the PCM.

Now, we define the dual variational inequality (DVI) problem of (1.1) as finding a point \(x^*\in C\) such that

$$\begin{aligned} \langle Ax, x-x^* \rangle \ge 0, \quad \quad \forall x\in C. \end{aligned}$$
(1.4)

We denote the solution set of the DVI (1.4) by \(V_D.\)

Remark 1.1

We note the following relationships between the VI (1.1) and the DVI (1.4) (see Yin et al. 2022; Cottle and Yao 1992; Ye and He 2015).

  1. i.

    If A is continuous and C is convex, then \(V_D \subseteq V_I.\)

  2. ii.

    If A is pseudomonotone and continuous, then \(V_I=V_D.\)

  3. iii.

    If A is quasimonotone and continuous, then the inclusion \(V_I\subseteq V_D\) may fail to hold, but \(V_N\subseteq V_D.\)

  4. iv.

    The condition \(V_I\subseteq V_D\) is a direct consequence of the pseudomonotonicity of A, that is, for any

    $$\begin{aligned} x^* \in V_I, ~~ ~~ \langle Ax, x-x^* \rangle \ge 0, \quad \quad \forall x\in C. \end{aligned}$$
    (1.5)

Most of the results obtained by researchers over the years have been based on conditions (i) and (ii) of Remark 1.1 (see Thong and Hieu 2018; Thong et al. 2021). In this study, we are concerned with solving the VI (1.1) in the case where A is quasimonotone, i.e., when possibly \(V_I\nsubseteq V_D\) (see Panyanak et al. 2023; Wang et al. 2023).

Recently, Liu and Yang (2020) proposed a new self-adaptive method for solving variational inequalities with a quasimonotone operator (or without monotonicity). Their algorithm is presented as follows.

Algorithm 1.2

Step 0.:

Take \(\gamma _0>0, ~~ x_0\in H, ~~ 0<\sigma <1.\) Choose a nonnegative real sequence \(\{\theta _n\}\) such that \(\sum _{n=0}^{\infty }\theta _n<+\infty .\)

Step 1.:

Given the current iterate \(x_n,\) compute

$$\begin{aligned} y_n=P_C(x_n-\gamma _n Ax_n). \end{aligned}$$

If \(x_n=y_n\) (or \(Ay_n=0\)), then stop: \(y_n\) is a solution. Otherwise,

Step 2.:

Compute

$$\begin{aligned} x_{n+1} = y_n + \gamma _n(Ax_n - Ay_n), \end{aligned}$$

and

$$\begin{aligned} \gamma _{n+1}={\left\{ \begin{array}{ll} \min \{\frac{\sigma \Vert x_n-y_n\Vert }{\Vert Ax_n-Ay_n\Vert }, \gamma _n + \theta _n\} &{}\text {if}~~ Ax_n-Ay_n\ne 0,\\ \gamma _n +\theta _n, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

The authors obtained a weak convergence result under the following assumptions.

Assumption 1.3

C1::

\(V_D\ne \emptyset .\)

C2::

The mapping A is quasimonotone on H.

C2’::

If \(x_n\rightharpoonup x^*\) and \(\limsup _{n\rightarrow \infty } \langle Ax_n,x_n \rangle \le \langle Ax^*, x^* \rangle ,\) then \(\lim _{n\rightarrow \infty }\langle Ax_n,x_n \rangle = \langle Ax^*, x^* \rangle .\)

C3::

The mapping A is Lipschitz-continuous with constant \(L>0.\)

C4::

The mapping A is sequentially weakly continuous on C,  i.e., for each sequence \(\{x_n\}\subset C, ~~ x_n\rightharpoonup x^*\) implies that \(Ax_n\rightharpoonup Ax^*.\)

C5::

The set \(\{d\in C: Ad=0\}\setminus V_D\) is finite.

C5’::

The set \(B=V_I\setminus V_D\) is a finite set.

Remark 1.4

We note that conditions C4–C5' are quite restrictive. One of our goals in this study is to relax these conditions. It is known that

$$\begin{aligned} V_D\ne \emptyset \Leftrightarrow \exists x^*\in V_I~~ \text {such that}~~ \langle Ax,x-x^* \rangle \ge 0, \quad \forall x\in C. \end{aligned}$$
(1.6)

So, it is clear that the condition \(V_D\ne \emptyset \) is weaker than condition (1.5) of Remark 1.1 (iv). Hence, \(V_I\ne \emptyset \) together with the pseudomonotonicity of A implies that \(V_D\ne \emptyset ,\) but the converse is not true (for more details on the condition \(V_D\ne \emptyset ,\) see Lemma 2.7).

Yin et al. (2022) proposed the following iterative algorithm for approximating a common solution of a fixed point problem with operator T and a quasimonotone variational inequality.

Algorithm 1.5

Step 0.:

Let \(x_0\in H\) be an initial guess. We set \(n=0.\)

Step 1.:

Let the nth iterate \(x_n\) be given. We compute

$$\begin{aligned} {\left\{ \begin{array}{ll} \hat{w}_n=(1-\rho _n)x_n+\rho _nT(x_n),\\ w_n=(1-\eta _n)x_n+\eta _nT(\hat{w}_n). \end{array}\right. } \end{aligned}$$
Step 2.:

Let the nth stepsize \(\gamma _n\) be known. We compute:

$$\begin{aligned} y_n=P_C(w_n-\gamma _n Aw_n), \end{aligned}$$

and

$$\begin{aligned} x_{n+1}=(1-\beta _n)w_n + \beta _ny_n+\beta _n\gamma _n(Aw_n-Ay_n). \end{aligned}$$
Step 3.:

We update the \((n+1)th\) stepsize as follows:

$$\begin{aligned} \gamma _{n+1}={\left\{ \begin{array}{ll} \min \{\gamma _n, \frac{\sigma \Vert w_n-y_n\Vert }{\Vert Aw_n-Ay_n\Vert }\} &{}\text {if}~~ Aw_n-Ay_n\ne 0,\\ \gamma _n, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Set \(n:=n+1\) and return to Step 1.

where T is a Lipschitz continuous pseudocontractive mapping. Moreover, the authors in Yin et al. (2022) were only able to obtain a weak convergence result under certain conditions (which include condition C5 of Assumption 1.3).

Yin and Hussain (2022) proposed a forward-backward-forward algorithm for solving quasimonotone variational inequalities as follows.

Algorithm 1.6

Step 0.:

Let \(\gamma _0>0\) and \(\sigma \in (0,1).\) Select the starting point \(x_0\in H\) and the sequence of relaxation parameters \(\{\beta _n\}_{n\ge 0}\subset (0,1]\) satisfying \(\liminf _{n\rightarrow \infty }\beta _n>0.\) Set \(n=0.\)

Step 1.:

Let \(x_n\) and \(\gamma _n\) be given. Compute

$$\begin{aligned} y_n=P_C(x_n-\gamma _n Ax_n). \end{aligned}$$

If \(x_n=y_n,\) then stop.

Step 2.:

Compute

$$\begin{aligned} x_{n+1} = \beta _n(y_n + \gamma _n(Ax_n - Ay_n))+(1-\beta _n)x_n, \end{aligned}$$

and

$$\begin{aligned} \gamma _{n+1}={\left\{ \begin{array}{ll} \min \{\frac{\sigma \Vert x_n-y_n\Vert }{\Vert Ax_n-Ay_n\Vert }, \gamma _n\} ~~ \text {if}~~ Ax_n-Ay_n\ne 0,\\ \gamma _n, \quad \quad \text {otherwise}. \end{array}\right. } \end{aligned}$$

Set \(n:=n+1\) and go to Step 1.

Under certain conditions, including condition C5 of Assumption 1.3, Yin and Hussain (2022) established weak convergence of Algorithm 1.6. Recently, Izuchukwu et al. (2022) also proposed an iterative method for solving quasimonotone variational inequality problems, obtaining only weak convergence under certain conditions, including C5.

Over the years, efforts have been made by researchers to speed up the convergence rate of algorithms. In 1964, Polyak (1964) introduced the inertial scheme, a two-step iteration which has proven to be a very efficient technique for improving the convergence rate of iterative methods. In recent times, fascinating works have been done via the use of the inertial technique (see Alakoya and Mewomo 2023; Alakoya et al. 2022; Ogwo et al. 2022b; Taiwo et al. 2021; Uzor et al. 2022a; Wickramasinghe et al. 2023).

In light of the above, the following natural question arises, and answering it is our foremost goal in this paper.

Is it possible to establish a strongly convergent method for solving the quasimonotone VI (and the VI without monotonicity) such that conditions C4–C5' of Assumption 1.3 are dispensed with?

By answering this question affirmatively, we improve on the works of Yin et al. (2022), Liu and Yang (2020), Yin and Hussain (2022), and Izuchukwu et al. (2022) by obtaining a strongly convergent method that dispenses with the stringent conditions C4–C5' of Assumption 1.3.

The outline of the paper is as follows. In Sect. 2, we state some existing lemmas and relevant definitions that will be useful in establishing our results. In Sect. 3, we present our proposed algorithm and its convergence analysis. In Sect. 4, we present some numerical experiments to showcase the performance of our method against some other methods in the literature. Finally, in Sect. 5, we give a brief summary of our results.

2 Preliminaries

Here, we state relevant definitions and lemmas which will be employed in our convergence analysis.

Recall that H is a real Hilbert space and C is a nonempty, closed and convex subset of H. Throughout this paper, we denote the weak and strong convergence of a sequence \(\{x_n\}\) to a point \(x^* \in H\) by \(x_n \rightharpoonup x^*\) and \(x_n \rightarrow x^*\), respectively. Let \(w_\omega (x_n)\) denote the set of weak limit points of \(\{x_n\},\) defined by

$$\begin{aligned} w_\omega (x_n):= \{x^*\in H: x_{n_j}\rightharpoonup x^*~ \text {for some subsequence}~ \{x_{n_j}\}~ \text {of} ~\{x_{n}\}\}. \end{aligned}$$

The metric projection \(P_C:H\rightarrow C\) assigns to each \(x\in H\) the unique element \(P_Cx\in C\) such that

$$\begin{aligned} \Vert x-P_Cx\Vert =\inf \{\Vert x-z\Vert :z\in C\}. \end{aligned}$$

It is well known that \(P_C\) is firmly nonexpansive (see Alakoya et al. 2022; Uzor et al. 2022b and Lemma 2.2 for more properties of \(P_C\)).

Lemma 2.1

Uzor et al. (2022a) Let H be a real Hilbert space. Then for all \(x,y\in H\) and \(\kappa \in \mathbb {R},\) the following results hold.

  1. (i)

    \(\Vert x + y\Vert ^2 \le \Vert x\Vert ^2 + 2\langle y, x + y \rangle ;\)

  2. (ii)

    \(\Vert x + y\Vert ^2 = \Vert x\Vert ^2 + 2\langle x, y \rangle + \Vert y\Vert ^2;\)

  3. (iii)

    \(\Vert \kappa x + (1-\kappa ) y\Vert ^2 = \kappa \Vert x\Vert ^2 + (1-\kappa )\Vert y\Vert ^2 -\kappa (1-\kappa )\Vert x-y\Vert ^2.\)

Lemma 2.2

Takahashi (2009); Kopecká and Reich (2012) Let C be a nonempty, closed and convex subset of a real Hilbert space H, and let I be the identity map on H. Then for any \(x\in H\) and \(y,z\in C\), the following results hold.

  1. (i)

    \(z = P_Cx \Longleftrightarrow \langle x - z, z - y\rangle \ge 0.\)

  2. (ii)

    \(\Vert y-P_Cx\Vert ^2+\Vert x-P_Cx\Vert ^2 \le \Vert x-y\Vert ^2.\)

  3. (iii)

    \(\langle x-y,P_Cx-P_Cy \rangle \ge \Vert P_Cx-P_Cy\Vert ^2.\)

  4. (iv)

    \(\langle (I-P_C)x-(I-P_C)y, x-y\rangle \ge \Vert (I-P_C)x-(I-P_C)y\Vert ^2.\)

Definition 2.3

Godwin et al. (2023b) Let H be a real Hilbert space. A mapping \(A: H\rightarrow H\) is said to be

  1. (1)

    L-Lipschitz continuous, where \(L>0,\) if

    $$\begin{aligned} \Vert Ax - Ay\Vert \le L\Vert x-y\Vert ,\quad \forall ~~x,y\in H. \end{aligned}$$

    If \(L\in [0,1),\) then A is said to be a contraction;

  2. (2)

    nonexpansive, if A is 1-Lipschitz continuous;

  3. (3)

    monotone, if

    $$\begin{aligned} \langle Ax - Ay, x-y\rangle \ge 0,\quad \forall ~~x,y\in H; \end{aligned}$$
  4. (4)

    pseudomonotone, if

    $$\begin{aligned} \langle Ay, x-y \rangle \ge 0 \Rightarrow \langle Ax, x-y \rangle \ge 0,\quad \forall x,y\in H; \end{aligned}$$
  5. (5)

    quasimonotone, if

    $$\begin{aligned} \langle Ay, x-y \rangle > 0 \Rightarrow \langle Ax, x-y \rangle \ge 0,\quad \forall x,y\in H. \end{aligned}$$

We observe that (3)\(\implies \) (4)\(\implies \) (5), but the converses are not generally true. Hence, the class of quasimonotone mappings is more general than the classes of monotone and pseudomonotone mappings (see Izuchukwu et al. (2022)).

Lemma 2.4

Maingé (2007) Let \(\{a_n\}, \{c_n\}\subset \mathbb {R_+}, \{\sigma _n\}\subset (0,1)\) and \(\{b_n\}\subset \mathbb {R}\) be sequences such that

$$\begin{aligned} a_{n+1}\le (1-\sigma _n)a_n + b_n + c_n~~ \text {for all}~ n\ge 0. \end{aligned}$$

Assume \(\sum _{n=0}^{\infty }|c_n|<\infty .\) Then the following results hold.

  1. (1)

    If \(b_n\le \beta \sigma _n\) for some \(\beta \ge 0,\) then \(\{a_n\}\) is a bounded sequence.

  2. (2)

    If we have

    $$\begin{aligned} \sum _{n=0}^\infty \sigma _n = \infty ~~ \text {and}~~ \limsup _{n\rightarrow \infty }\frac{b_n}{\sigma _n}\le 0, \end{aligned}$$

    then \(\lim _{n\rightarrow \infty }a_n =0.\)

Lemma 2.5

Saejung and Yotkaew (2012) Let \(\{a_n\}\) be a sequence of non-negative real numbers, \(\{\alpha _n\}\) be a sequence in (0, 1) with \(\sum _{n=1}^\infty \alpha _n = \infty \) and \(\{b_n\}\) be a sequence of real numbers. Assume that

$$\begin{aligned} a_{n+1}\le (1 - \alpha _n)a_n + \alpha _nb_n, ~~~ \text {for all}~~ n\ge 1. \end{aligned}$$

If \(\limsup _{k\rightarrow \infty }b_{n_k}\le 0\) for every subsequence \(\{a_{n_k}\}\) of \(\{a_n\}\) satisfying \(\liminf _{k\rightarrow \infty }(a_{n_{k+1}} - a_{n_k})\ge 0,\) then \(\lim _{n\rightarrow \infty }a_n =0.\)

Lemma 2.6

Tan and Xu (1993) Suppose \(\{\lambda _n\}\) and \(\{\phi _n\}\) are two nonnegative real sequences such that

$$\begin{aligned} \lambda _{n+1}\le \lambda _n + \phi _n,\quad \forall n\ge 1. \end{aligned}$$

If \(\sum _{n=1}^{\infty }\phi _n<+\infty ,\) then \(\lim \limits _{n\rightarrow \infty }\lambda _n\) exists.

Lemma 2.7

Ye and He (2015)

If either,

  1. (i)

    A is pseudomonotone on C and \(V_I\ne \emptyset ;\)

  2. (ii)

    A is the gradient of G, where G is a differentiable quasiconvex function on an open set \(K\supset C\) that attains its global minimum on C;

  3. (iii)

    A is quasimonotone on \(C,\) \(A\ne 0\) on C, and C is bounded;

  4. (iv)

    A is quasimonotone on \(C, ~~ A\ne 0\) on C and there exists a positive number r,  such that for every \(y\in C\) with \(\Vert y\Vert \ge r,\) there exists \(z\in C,\) such that \(\Vert z\Vert \le r\) and \(\langle Ay, z-y \rangle \le 0;\)

  5. (v)

    A is quasimonotone on \(C, ~~ \text {int}\,C\ne \emptyset \) and there exists \(x^*\in V_I\) such that \(Ax^*\ne 0,\)

then \(V_D\) is nonempty.

3 Main result

In this section, we present a new Mann-type projection and contraction method (MTPCM) for solving the quasimonotone variational inequality problem. We assume the following conditions for the convergence analysis of the proposed algorithm.

Assumption A

(A1):

\(V_D\ne \emptyset .\)

(A2):

The mapping \(A:H\rightarrow H\) is \(L\)-Lipschitz continuous.

(A3):

\(A:H\rightarrow H\) satisfies the following property: whenever \(\{x_n\}\subset C\) and \(x_n\rightharpoonup d,\) then \(\Vert Ad\Vert \le \liminf \limits _{n\rightarrow \infty }\Vert Ax_n\Vert .\)

(A4):

The mapping \(A:H\rightarrow H\) is quasimonotone.

(A3’):

A is sequentially weakly continuous.

(A4’):

If \(x_n\rightharpoonup x^*\) and \(\limsup _{n\rightarrow \infty }\langle Ax_n,x_n \rangle \le \langle Ax^*,x^* \rangle ,\) then \(\lim _{n\rightarrow \infty }\langle Ax_n,x_n \rangle = \langle Ax^*,x^* \rangle .\)

Assumption B

(B1):

Let \(\{\alpha _n\} \subset (0,1)\) be such that \(\lim _{n\rightarrow \infty }(1-\alpha _n)=0\) and \(\sum _{n=1}^\infty (1-\alpha _n) = +\infty ,\) and let \(\{\beta _n\} \subset [a,b]\subset (0,1).\)

(B2):

Let \(\delta >0,\) let \(\{\epsilon _n\}\) be a positive sequence such that \(\lim _{n\rightarrow \infty }\frac{\epsilon _n}{1-\alpha _n}=0,\) and let \(l\in (0,2)\) and \(\sigma \in (0,1).\)

(B3):

Let \(\gamma _0>0,\) and \(\{\theta _n\}\) be a nonnegative sequence such that \(\sum _{n=1}^\infty \theta _n<+\infty .\)

We present our algorithm as follows:

Algorithm 3.1

Step 0.:

Let \(x_0, x_1\in H\) be two arbitrary initial points and set \(n=1.\)

Step 1.:

Given the \((n-1)th\) and nth iterates, choose \(\delta _n\) such that \(0\le \delta _n\le \hat{\delta }_n\) with \(\hat{\delta }_n\) defined by

$$\begin{aligned} \hat{\delta }_n = {\left\{ \begin{array}{ll} \min \Big \{\delta ,~ \frac{\epsilon _n}{\Vert x_n - x_{n-1}\Vert }\Big \}, \quad \text {if}~ x_n \ne x_{n-1},\\ \delta , \hspace{95pt} \text {otherwise.} \end{array}\right. } \end{aligned}$$
(3.1)
Step 2.:

Compute

$$\begin{aligned}&w_n = x_n + \delta _n(x_n - x_{n-1}); \nonumber \\&y_n=P_C(w_n-\gamma _n Aw_n), \end{aligned}$$
(3.2)

If \(y_n=w_n\) (or \(Ay_n=0\)), then stop: \(y_n\) is a solution. Otherwise,

Step 3.:

Compute

$$\begin{aligned}&z_n = w_n-l\tau _nd_n, \nonumber \\&\text {where}\quad d_n:= w_n-y_n-\gamma _n(Aw_n-Ay_n), \quad \text {and} \nonumber \\&\tau _n= {\left\{ \begin{array}{ll} \frac{\langle w_n-y_n, d_n \rangle }{\Vert d_n\Vert ^2}, &{} \text {if}~~ d_n\ne 0, \\ 0, &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$
(3.3)
$$\begin{aligned}&\gamma _{n+1} = {\left\{ \begin{array}{ll} \min \{\frac{\sigma \Vert w_n-y_n\Vert }{\Vert Aw_n-Ay_n\Vert },~~ \gamma _n+\theta _n\},&{}\text {if}~~~ Aw_n\ne Ay_n,\\ \gamma _n+\theta _n, &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
(3.4)
Step 4.:

Compute

$$\begin{aligned} x_{n+1} = (1-\beta _n)(\alpha _nw_n)+\beta _nz_n. \end{aligned}$$
(3.5)

Set \(n:= n +1\) and return to Step 1.

Remark 3.2

  1. (i)

    We observe that our proposed Algorithm 3.1 requires only one projection onto the feasible set C per iteration, which makes our method computationally inexpensive.

  2. (ii)

    Our algorithm does not require any linesearch procedure. Rather, we employ a more efficient step size technique in (3.4) which generates a non-monotonic sequence of step sizes. This makes computation and implementation of the proposed method easier.

  3. (iii)

    Observe that condition (A3) is strictly weaker than the sequential weak continuity condition (A3') often used by researchers when solving pseudomonotone and quasimonotone VIs (see Cholamjiak et al. 2020; Yin et al. 2022; Liu and Yang 2020; Yin and Hussain 2022). In proving our first strong convergence theorem under the quasimonotonicity assumption, we do not require condition (A3'). We only require this condition when proving our second strong convergence theorem without monotonicity.

  4. (iv)

    Observe that the stringent conditions C4–C5’ of Assumption 1.3 employed in Yin et al. (2022); Liu and Yang (2020); Yin and Hussain (2022); Izuchukwu et al. (2022) are dispensed with in our proposed method.

  5. (v)

    If \(\theta _n\equiv 0\) in Step 3 of Algorithm 3.1, then the step size \(\gamma _n\) reduces to the ones in Uzor et al. (2022a); Yin and Hussain (2022); Yin et al. (2022).

Remark 3.3

  1. (i)

    By conditions (B1) and (B2), and since \(\delta _n\Vert x_n - x_{n-1}\Vert \le \epsilon _n\) whenever \(x_n \ne x_{n-1},\) we can easily see from (3.1) that

    $$\begin{aligned} \lim _{n\rightarrow \infty }\delta _n\Vert x_n - x_{n-1}\Vert = 0\quad \text {and}\quad \lim _{n\rightarrow \infty }\frac{\delta _n}{(1-\alpha _n)}\Vert x_n - x_{n-1}\Vert = 0. \end{aligned}$$
  2. (ii)

    The sequence \(\{\gamma _n\}\) generated by (3.4) is well defined and \(\lim \limits _{n\rightarrow \infty }\gamma _n=\gamma \in [\min \{\frac{\sigma }{L}, \gamma _1\}, \gamma _1+\Theta ],\) where \(\Theta =\sum _{n=1}^{\infty }\theta _n\) and L is the Lipschitz constant of A,  (see Alakoya et al. 2022; Liu and Yang 2020).

3.1 Convergence analysis

In this section, we first prove some lemmas which will be needed in establishing our strong convergence theorems. Next, we give the proofs of the strong convergence theorems for our proposed algorithm.

Lemma 3.4

Let \(\{x_n\}\) be a sequence generated by Algorithm 3.1 under assumptions \((A1)-(A4)\) and \((B1)-(B3)\). Then, \(\{x_n\}\) is bounded.

Proof

Let \(d\in V_D.\) From the definition of \(w_n,\) we have

$$\begin{aligned} \Vert w_n-d\Vert&=\Vert x_n+\delta _n(x_n-x_{n-1})-d\Vert \nonumber \\&\le \Vert x_n-d\Vert +\delta _n \Vert x_n-x_{n-1}\Vert \nonumber \\&= \Vert x_n-d\Vert + (1-\alpha _n)\frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert . \end{aligned}$$
(3.6)

By Remark 3.3, we have that \(\lim \limits _{n\rightarrow \infty }\frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert =0.\) So, there exists \(J_1>0,\) such that \(\frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert \le J_1,\) for all \(n\ge 1.\) Consequently, we have

$$\begin{aligned} \Vert w_n-d\Vert \le \Vert x_n-d\Vert +(1-\alpha _n)J_1. \end{aligned}$$
(3.7)

Next, since \(y_n=P_C(w_n-\gamma _nAw_n),\) we have from Lemma 2.2 that

$$\begin{aligned}&\langle y_n-w_n+\gamma _nAw_n, y_n-d \rangle \le 0 \nonumber \\&\implies \langle w_n-y_n-\gamma _nAw_n, y_n-d \rangle \ge 0. \end{aligned}$$
(3.8)

Since \(d\in V_D\) and \(y_n\in C,\) we have \(\langle Ay_n, y_n-d \rangle \ge 0, ~~ \forall n\ge 0.\) Again, since \(\gamma _n>0,\) we have

$$\begin{aligned} \gamma _n\langle Ay_n, y_n-d \rangle \ge 0. \end{aligned}$$
(3.9)

By summing up (3.8) and (3.9), we have

$$\begin{aligned}&\langle w_n-y_n-\gamma _nAw_n, y_n-d \rangle + \gamma _n\langle Ay_n, y_n-d \rangle \ge 0 \nonumber \\&\implies \langle w_n-y_n-\gamma _nAw_n+\gamma _nAy_n, y_n-d \rangle \ge 0 \nonumber \\&\implies \langle w_n-y_n-\gamma _n(Aw_n-Ay_n), y_n-d \rangle \ge 0. \end{aligned}$$
(3.10)

By the definition of \(d_n\) and (3.10), we obtain

$$\begin{aligned} \langle w_n-d, d_n \rangle&= \langle w_n-y_n, d_n \rangle +\langle y_n-d, d_n \rangle \nonumber \\&= \langle w_n-y_n, d_n \rangle +\langle w_n-y_n-\gamma _n(Aw_n-Ay_n),y_n- d \rangle \nonumber \\&\ge \langle w_n-y_n, d_n \rangle . \end{aligned}$$
(3.11)

From Step 3, by applying Lemma 2.1 and (3.11) together with the condition on l, we get

$$\begin{aligned} \Vert z_n-d\Vert ^2&=\Vert w_n-l\tau _nd_n-d\Vert ^2\nonumber \\&=\Vert w_n-d\Vert ^2+l^2\tau _n^2\Vert d_n\Vert ^2-2l\tau _n\langle w_n-d, d_n \rangle \nonumber \\&\le \Vert w_n-d\Vert ^2+l^2\tau _n^2\Vert d_n\Vert ^2-2l\tau _n\langle w_n-y_n, d_n \rangle \nonumber \\&=\Vert w_n-d\Vert ^2+l^2\tau _n^2\Vert d_n\Vert ^2-2l\tau _n (\tau _n)\Vert d_n\Vert ^2\nonumber \\&= \Vert w_n-d\Vert ^2-l\tau _n^2(2-l)\Vert d_n\Vert ^2 \nonumber \\&= \Vert w_n-d\Vert ^2-l^{-1}(2-l)\Vert z_n-w_n\Vert ^2\nonumber \\&\le \Vert w_n-d\Vert ^2. \end{aligned}$$
(3.12)

Using (3.7), (3.12) and the conditions on \(\alpha _n\) and \(\beta _n,\) we get

$$\begin{aligned} \Vert x_{n+1}-d\Vert&=\Vert (1-\beta _n)(\alpha _nw_n)+\beta _nz_n-d\Vert \nonumber \\&=\Vert \alpha _n(1-\beta _n)(w_n-d)+\beta _n(z_n-d)-(1-\beta _n)(1-\alpha _n)d\Vert \nonumber \\&\le \Vert \alpha _n(1-\beta _n)(w_n-d)+\beta _n(z_n-d)\Vert +(1-\beta _n)(1-\alpha _n)\Vert d\Vert . \end{aligned}$$
(3.13)

Then, by Lemma 2.1, (3.12) and (3.13), we obtain

$$\begin{aligned}&\Vert \alpha _n(1-\beta _n)(w_n-d)+\beta _n(z_n-d)\Vert ^2\nonumber \\&\quad =(\alpha _n(1-\beta _n))^2\Vert w_n-d\Vert ^2+\beta _n^2\Vert z_n-d\Vert ^2+2\alpha _n(1-\beta _n)\beta _n\langle z_n-d,w_n-d \rangle \nonumber \\&\quad \le (\alpha _n(1-\beta _n))^2\Vert w_n-d\Vert ^2+\beta _n^2[\Vert w_n-d\Vert ^2-l^{-1}(2-l)\Vert z_n-w_n\Vert ^2]\nonumber \\&\qquad +2\alpha _n(1-\beta _n)\beta _n\Vert z_n-d\Vert \Vert w_n-d\Vert \nonumber \\&\quad \le (\alpha _n(1-\beta _n))^2\Vert w_n-d\Vert ^2+\beta _n^2\Vert w_n-d\Vert ^2-l^{-1}(2-l)\beta _n^2\Vert z_n-w_n\Vert ^2\nonumber \\&\qquad +\alpha _n(1-\beta _n)\beta _n[\Vert z_n-d\Vert ^2+\Vert w_n-d\Vert ^2] \nonumber \\&\quad =(\alpha _n(1-\beta _n))^2\Vert w_n-d\Vert ^2+\beta _n^2\Vert w_n-d\Vert ^2-l^{-1}(2-l)\beta _n^2\Vert z_n-w_n\Vert ^2\nonumber \\&\qquad +\alpha _n(1-\beta _n)\beta _n\Vert z_n-d\Vert ^2\nonumber \\&\quad ~+\alpha _n(1-\beta _n)\beta _n\Vert w_n-d\Vert ^2 \nonumber \\&\quad \le (\alpha _n(1-\beta _n))^2\Vert w_n-d\Vert ^2+\beta _n^2\Vert w_n-d\Vert ^2-l^{-1}(2-l)\beta _n^2\Vert z_n-w_n\Vert ^2\nonumber \\&\quad +\alpha _n(1-\beta _n)\beta _n[\Vert w_n-d\Vert ^2-l^{-1}(2-l)\Vert z_n-w_n\Vert ^2]+\alpha _n(1-\beta _n)\beta _n\Vert w_n-d\Vert ^2 \nonumber \\&\quad = [(\alpha _n(1-\beta _n))^2+2\alpha _n(1-\beta _n)\beta _n+\beta _n^2]\Vert w_n-d\Vert ^2\nonumber \\&\qquad -[\alpha _n(1-\beta _n)\beta _n+\beta _n^2]l^{-1}(2-l)\Vert z_n-w_n\Vert ^2\nonumber \\&\quad = (\alpha _n(1-\beta _n)+\beta _n)^2\Vert w_n-d\Vert ^2-\beta _n[1-(1-\alpha _n)(1-\beta _n)]l^{-1}(2-l)\Vert z_n-w_n\Vert ^2\nonumber \\&\quad \le (\alpha _n(1-\beta _n)+\beta _n)^2\Vert w_n-d\Vert ^2. \end{aligned}$$
(3.14)

So, we have

$$\begin{aligned}&\Vert \alpha _n(1-\beta _n)(w_n-d)+\beta _n(z_n-d)\Vert \le (\alpha _n(1-\beta _n)+\beta _n)\Vert w_n-d\Vert \\&\quad = [1-(1-\alpha _n)(1-\beta _n)]\Vert w_n-d\Vert \\&\quad \le [1-(1-\alpha _n)(1-\beta _n)](\Vert x_n-d\Vert +(1-\alpha _n)J_1)\\&\quad = [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-d\Vert +(1-\alpha _n)J_1-(1-\alpha _n)^2(1-\beta _n)J_1\\&\quad \le [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-d\Vert +(1-\alpha _n)J_1 \end{aligned}$$

By applying the last inequality in (3.13), we get

$$\begin{aligned} \Vert x_{n+1}-d\Vert&\le [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-d\Vert +(1-\alpha _n)(1-\beta _n)\Bigg [\Vert d\Vert +\frac{J_1}{(1-\beta _n)}\Bigg ]\\&\le [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-d\Vert +(1-\alpha _n)(1-\beta _n)M^*, \end{aligned}$$

where \(M^*:=\sup _{n\in \mathbb {N}}\Big \{\Vert d\Vert +\frac{J_1}{(1-\beta _n)}\Big \}.\) Set \(a_n:=\Vert x_n-d\Vert ,~ b_n:=(1-\alpha _n)(1-\beta _n)M^*,~ c_n:=0,\) and \(\sigma _n:=(1-\alpha _n)(1-\beta _n).\) Then, by Lemma 2.4(1) together with the conditions on the control parameters, we have that \(\{\Vert x_n-d\Vert \}\) is bounded, which implies that \(\{x_n\}\) is bounded. Hence, \(\{w_n\},~ \{y_n\},~ \{z_n\},~ \{d_n\}\) are all bounded.

Lemma 3.5

Assume \(\{w_n\}\) and \(\{y_n\}\) are sequences generated by Algorithm 3.1, such that conditions (A1)–(A4) and (B1)–(B3) hold. If there exists a subsequence \(\{w_{n_k}\}\) of \(\{w_n\}\) that converges weakly to \(\hat{x}\in H\) such that \(\lim _{k\rightarrow \infty }\Vert w_{n_k}-y_{n_k}\Vert =0,\) then \(\hat{x}\in V_D\) or \(A\hat{x}=0.\)

Proof

Since \(\{w_n\}\) is bounded, then \(w_\omega (w_n)\) is not empty. We let \(\hat{x}\in w_\omega (w_n)\) be an arbitrary element. Then, there exists a subsequence \(\{w_{n_k}\}\) of \(\{w_n\}\) such that \(w_{n_k}\rightharpoonup \hat{x}\) as \(k\rightarrow \infty .\) It follows from the hypothesis of the lemma that \(y_{n_k}\rightharpoonup \hat{x}\in C.\) We consider the following two cases to complete the proof of the lemma.

Case 1: If \(\limsup _{k\rightarrow \infty }\Vert Ay_{n_k}\Vert =0,\) then it implies that \(\lim _{k\rightarrow \infty }\Vert Ay_{n_k}\Vert =\liminf _{k\rightarrow \infty }\Vert Ay_{n_k}\Vert =0.\)

Since \(y_{n_k}\rightharpoonup \hat{x}\in C,\) then by condition (A3),  it follows that

$$\begin{aligned} 0\le \Vert A\hat{x}\Vert \le \liminf _{k\rightarrow \infty }\Vert Ay_{n_k}\Vert =0. \end{aligned}$$

Therefore, we have that \(A\hat{x}=0.\)

Case 2: If \(\limsup _{k\rightarrow \infty }\Vert Ay_{n_k}\Vert >0.\) Without loss of generality, let \(\lim _{k\rightarrow \infty }\Vert Ay_{n_k}\Vert =L^*>0.\) Then, it follows that there exists \(K\in \mathbb {N}\) such that \(\Vert Ay_{n_k}\Vert >\frac{L^*}{2},\) for all \(k\ge K.\)

From (3.2), we have that \(y_{n_k}=P_C(w_{n_k}-\gamma _{n_k}Aw_{n_k}).\) Then, by Lemma 2.2 we have

$$\begin{aligned}&\langle y_{n_k}-w_{n_k}+\gamma _{n_k}Aw_{n_k}, z-y_{n_k} \rangle \ge 0,~~ \forall z\in C, \nonumber \\&\implies \langle w_{n_k}-y_{n_k}, z-y_{n_k} \rangle \le \gamma _{n_k} \langle Aw_{n_k}, z-y_{n_k} \rangle , ~~ \forall z\in C,\nonumber \\&\implies \frac{1}{\gamma _{n_k}}\langle w_{n_k}-y_{n_k}, z-y_{n_k} \rangle -\langle Aw_{n_k}-Ay_{n_k}, z-y_{n_k} \rangle \le \langle Ay_{n_k}, z-y_{n_k} \rangle , ~~ ~ \forall z\in C. \end{aligned}$$
(3.15)

Since \(\{y_{n_k}\}\) is bounded and \(\lim _{k\rightarrow \infty }\gamma _{n_k}=\gamma >0,\) then by applying \(\lim _{k\rightarrow \infty }\Vert y_{n_k}-w_{n_k}\Vert =0,\) the continuity of A and fixing \(z\in C,\) we obtain

$$\begin{aligned} 0\le \liminf _{k\rightarrow \infty }\langle Ay_{n_k}, z-y_{n_k} \rangle \le \limsup _{k\rightarrow \infty }\langle Ay_{n_k}, z-y_{n_k} \rangle <+\infty . \end{aligned}$$
(3.16)

If \(\limsup _{k\rightarrow \infty }\langle Ay_{n_k}, z-y_{n_k} \rangle >0,\) then there exists a subsequence \(\{y_{n_{k_j}}\}\) such that \(\lim _{j\rightarrow \infty } \langle Ay_{n_{k_j}}, z-y_{n_{k_j}} \rangle >0.\) Thus, there exists \(j_0\in \mathbb {N}\) such that

$$\begin{aligned} \langle Ay_{n_{k_j}}, z-y_{n_{k_j}} \rangle >0, ~~\forall j\ge j_0. \end{aligned}$$

By the quasimonotonicity of A,  we have that \(\forall j\ge j_0,\)

$$\begin{aligned} \langle Az, z-y_{n_{k_j}} \rangle \ge 0. \end{aligned}$$

Thus, letting \(j\rightarrow \infty \) and using \(y_{n_{k_j}}\rightharpoonup \hat{x},\) we obtain \(\langle Az, z-\hat{x} \rangle \ge 0.\) Since \(z\in C\) was arbitrary, we see that \(\hat{x}\in V_D.\)

If \(\limsup _{k\rightarrow \infty }\langle Ay_{n_k}, z-y_{n_k} \rangle =0,\) we can easily see from (3.16) that

$$\begin{aligned} \lim _{k\rightarrow \infty }\langle Ay_{n_k}, z-y_{n_k} \rangle = \liminf _{k\rightarrow \infty } \langle Ay_{n_k}, z-y_{n_k} \rangle =\limsup _{k\rightarrow \infty }\langle Ay_{n_k}, z-y_{n_k} \rangle =0. \end{aligned}$$

We set \(\eta _k=|\langle Ay_{n_k}, z-y_{n_k} \rangle |+\frac{1}{k+1}.\) Then, we have

$$\begin{aligned} \langle Ay_{n_k}, z-y_{n_k} \rangle +\eta _k>0. \end{aligned}$$
(3.17)

Next, we set \(\xi _{n_k}=\frac{Ay_{n_k}}{\Vert Ay_{n_k}\Vert ^2}\) for all \(k\ge K.\) Then, it follows that

$$\begin{aligned} \langle Ay_{n_k}, \xi _{n_k} \rangle =1. \end{aligned}$$
(3.18)

Then, from (3.17), we see that for all \(k\ge K,\)

$$\begin{aligned} \langle Ay_{n_k}, z+\eta _k\xi _{n_k}-y_{n_k} \rangle >0. \end{aligned}$$

Then, by the quasimonotonicity of A,  we have that for all \(k\ge K,\)

$$\begin{aligned} \langle A(z+\eta _k\xi _{n_k}), z+\eta _k\xi _{n_k}-y_{n_k} \rangle \ge 0. \end{aligned}$$

Also, by the Lipschitz continuity of A, we have that for all \(k\ge K,\)

$$\begin{aligned}&\langle Az, z+\eta _k\xi _{n_k}-y_{n_k} \rangle \nonumber \\&\quad = \langle Az-A(z+\eta _k\xi _{n_k}), z+\eta _k\xi _{n_k}-y_{n_k} \rangle +\langle A(z+\eta _k\xi _{n_k}), z+\eta _k\xi _{n_k}-y_{n_k} \rangle \nonumber \\&\ge \langle Az-A(z+\eta _k\xi _{n_k}), z+\eta _k\xi _{n_k}-y_{n_k} \rangle \nonumber \\&\ge - \Vert Az-A(z+\eta _k\xi _{n_k})\Vert \Vert z+\eta _k\xi _{n_k}-y_{n_k} \Vert \nonumber \\&{\ge -\eta _kL\Vert \xi _{n_k}\Vert \Vert z+\eta _k\xi _{n_k}-y_{n_k}\Vert } \nonumber \\&{= -\eta _k\frac{L}{\Vert Ay_{n_k}\Vert }\Vert z+\eta _k\xi _{n_k}-y_{n_k}\Vert } \nonumber \\&{\ge -\eta _k\frac{2L}{L^*}\Vert z+\eta _k\xi _{n_k}-y_{n_k}\Vert }. \end{aligned}$$
(3.19)

If we let \(k\rightarrow \infty \) in (3.19) and apply the fact that \(\lim _{k\rightarrow \infty }\eta _k=0\) together with the boundedness of \(\{\Vert z+\eta _k\xi _{n_k}-y_{n_k}\Vert \},\) we obtain

$$\begin{aligned} \langle Az,z-\hat{x} \rangle \ge 0, ~~\forall z\in C. \end{aligned}$$

It follows that \(\hat{x}\in V_D,\) which completes the proof.

Lemma 3.6

Suppose \(\{x_n\}\) is a sequence generated by Algorithm 3.1 and \(d\in V_D.\) Then, under the conditions (A1)–(A4) and (B1)–(B3), we have the following inequality for all \(n\in \mathbb {N}:\)

$$\begin{aligned} \Vert x_{n+1}-d\Vert ^2&\le [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-d\Vert ^2 \\&+ (1-\alpha _n)(1-\beta _n)\Big [\frac{3J_2}{(1-\beta _n)} \frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert \nonumber \\&~+ J_3\Vert x_{n+1}-d\Vert + 2 \langle d, d-x_{n+1}\rangle \Big ]\\&\quad -[1-(1-\alpha _n)(1-\beta _n)]^2\beta _n l^{-1}(2-l)\Vert z_n-w_n\Vert ^2. \end{aligned}$$

Proof

Let \(d\in V_D.\) By using Lemma 2.1 together with the Cauchy inequality, we have

$$\begin{aligned} \Vert w_n-d\Vert ^2&=\Vert x_n+\delta _n(x_n-x_{n-1})-d\Vert ^2\nonumber \\&= \Vert x_n-d\Vert ^2+\delta _n^2\Vert x_n-x_{n-1}\Vert ^2+2\delta _n\langle x_n-d, x_n-x_{n-1} \rangle \nonumber \\&\le \Vert x_n-d\Vert ^2+\delta _n^2\Vert x_n-x_{n-1}\Vert ^2+2\delta _n\Vert x_n-d\Vert \Vert x_n-x_{n-1} \Vert \nonumber \\&=\Vert x_n-d\Vert ^2+\delta _n\Vert x_n-x_{n-1}\Vert \big (\delta _n \Vert x_n-x_{n-1}\Vert +2\Vert x_n-d\Vert \big ) \nonumber \\&\le \Vert x_n-d\Vert ^2+3J_2\delta _n\Vert x_n-x_{n-1}\Vert \nonumber \\&=\Vert x_n-d\Vert ^2+3J_2(1-\alpha _n)\frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert , \end{aligned}$$
(3.20)

where \(J_2:=\sup _{n\in \mathbb {N}}\{\Vert x_n-d\Vert , \delta _n\Vert x_n-x_{n-1}\Vert \}>0.\) Now, let \(g_n=(1-\beta _n)w_n+\beta _nz_n.\) Then, by Lemma 2.1 and (3.12), we get

$$\begin{aligned} \Vert g_n-d\Vert ^2&=\Vert (1-\beta _n)w_n+\beta _nz_n-d\Vert ^2\nonumber \\&=\Vert (1-\beta _n)(w_n-d)+\beta _n(z_n-d)\Vert ^2\nonumber \\&= (1-\beta _n)^2\Vert w_n-d\Vert ^2+\beta _n^2\Vert z_n-d\Vert ^2+2(1-\beta _n)\beta _n\langle z_n-d, w_n-d \rangle \nonumber \\&\le (1-\beta _n)^2\Vert w_n-d\Vert ^2+\beta _n^2[\Vert w_n-d\Vert ^2-l^{-1}(2-l)\Vert z_n-w_n\Vert ^2]\nonumber \\&\quad +2(1-\beta _n)\beta _n\Vert z_n-d\Vert \Vert w_n-d\Vert \nonumber \\&\le (1-\beta _n)^2\Vert w_n-d\Vert ^2+\beta _n^2\Vert w_n-d\Vert ^2-l^{-1}(2-l)\beta _n^2\Vert z_n-w_n\Vert ^2\nonumber \\&\quad +(1-\beta _n)\beta _n[\Vert z_n-d\Vert ^2+\Vert w_n-d\Vert ^2] \nonumber \\&\le (1-\beta _n)^2\Vert w_n-d\Vert ^2+\beta _n^2\Vert w_n-d\Vert ^2-l^{-1}(2-l)\beta _n^2\Vert z_n-w_n\Vert ^2\nonumber \\&+(1-\beta _n)\beta _n[\Vert w_n-d\Vert ^2-l^{-1}(2-l)\Vert z_n-w_n\Vert ^2+\Vert w_n-d\Vert ^2]\nonumber \\&= [(1-\beta _n)^2+\beta _n^2+2(1-\beta _n)\beta _n]\Vert w_n-d\Vert ^2 -[\beta _n^2\nonumber \\&\quad +(1-\beta _n)\beta _n]l^{-1}(2-l)\Vert z_n-w_n\Vert ^2\nonumber \\&=\Vert w_n-d\Vert ^2-\beta _n l^{-1}(2-l)\Vert z_n-w_n\Vert ^2. \end{aligned}$$
(3.21)

Next, by applying (3.20), (3.21) and Lemma 2.1, we get

$$\begin{aligned} \Vert x_{n+1}-d\Vert ^2&=\Vert (g_n-d)-(1-\alpha _n)(1-\beta _n)w_n\Vert ^2\nonumber \\&= \Vert [1-(1-\alpha _n)(1-\beta _n)](g_n-d) + (1-\alpha _n)(1-\beta _n)(g_n-w_n)\nonumber \\&\quad -(1-\alpha _n)(1-\beta _n)d\Vert ^2\nonumber \\&= \Vert [1-(1-\alpha _n)(1-\beta _n)](g_n-d) + (1-\alpha _n)(1-\beta _n)\beta _n(z_n-w_n)\nonumber \\&\quad -(1-\alpha _n)(1-\beta _n)d\Vert ^2\nonumber \\&\le [1-(1-\alpha _n)(1-\beta _n)]^2\Vert g_n-d\Vert ^2 \nonumber \\&\quad + 2(1-\alpha _n)(1-\beta _n) \langle \beta _n(z_n-w_n)-d, x_{n+1}-d \rangle \nonumber \\&\le [1-(1-\alpha _n)(1-\beta _n)]^2\big [\Vert x_n-d\Vert ^2+3J_2(1-\alpha _n)\frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert \nonumber \\&~-\beta _n l^{-1}(2-l)\Vert z_n-w_n\Vert ^2\big ] + 2(1-\alpha _n)(1-\beta _n) \langle \beta _n(z_n-w_n), x_{n+1}-d \rangle \nonumber \\&~+ 2(1-\alpha _n)(1-\beta _n) \langle d, d-x_{n+1}\rangle \nonumber \\&\le [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-d\Vert ^2 +3J_2(1-\alpha _n)\frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert \nonumber \\&~-[1-(1-\alpha _n)(1-\beta _n)]^2\beta _n l^{-1}(2-l)\Vert z_n-w_n\Vert ^2 \nonumber \\&\quad + 2(1-\alpha _n)(1-\beta _n)\beta _n \Vert z_n-w_n\Vert \Vert x_{n+1}-d\Vert \nonumber \\&~ + 2(1-\alpha _n)(1-\beta _n) \langle d, d-x_{n+1}\rangle \nonumber \\&= [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-d\Vert ^2 \nonumber \\&\quad + (1-\alpha _n)(1-\beta _n)\Big [\frac{3J_2}{(1-\beta _n)} \frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert \nonumber \\&~+ 2\beta _n \Vert z_n-w_n\Vert \Vert x_{n+1}-d\Vert + 2 \langle d, d-x_{n+1}\rangle \Big ]\nonumber \\&\quad -[1-(1-\alpha _n)(1-\beta _n)]^2\beta _n l^{-1}(2-l)\Vert z_n-w_n\Vert ^2\\&\le [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-d\Vert ^2 \nonumber \\&\quad + (1-\alpha _n)(1-\beta _n)\Big [\frac{3J_2}{(1-\beta _n)} \frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert \nonumber \\&~+ J_3\Vert x_{n+1}-d\Vert + 2 \langle d, d-x_{n+1}\rangle \Big ]\nonumber \\&\quad -[1-(1-\alpha _n)(1-\beta _n)]^2\beta _n l^{-1}(2-l)\Vert z_n-w_n\Vert ^2\nonumber , \end{aligned}$$
(3.22)

where \(J_3:=\sup \limits _{n\in \mathbb {N}}\{2\beta _n \Vert z_n-w_n\Vert \}.\) This completes the proof.

Lemma 3.7

The following inequality holds for all \(d \in V_D\) and \(n\in \mathbb {N},\) under conditions (A1)–(A4) and (B1)–(B3):

$$\begin{aligned} \Vert y_n-w_n\Vert ^2\le J_4\Vert z_n-w_n\Vert , \end{aligned}$$

for some \(J_4>0.\)

Proof

We have from (3.4) that

$$\begin{aligned} \Vert Aw_n-Ay_n\Vert \le \frac{\sigma }{\gamma _{n+1}}\Vert w_n-y_n\Vert \quad \forall n\in \mathbb {N}, \end{aligned}$$

which is true both when \(Aw_n=Ay_n\) and when \(Aw_n\ne Ay_n.\)

Next, we see that

$$\begin{aligned} \langle w_n-y_n, d_n \rangle&= \langle w_n-y_n, w_n-y_n-\gamma _n(Aw_n-Ay_n) \rangle \nonumber \\&=\langle w_n-y_n, w_n-y_n \rangle - \gamma _n \langle w_n-y_n, Aw_n-Ay_n \rangle \nonumber \\&\ge \Vert w_n-y_n\Vert ^2 -\gamma _n\Vert w_n-y_n\Vert \Vert Aw_n-Ay_n\Vert \nonumber \\&\ge \Vert w_n-y_n\Vert ^2 - \sigma \frac{\gamma _n}{\gamma _{n+1}}\Vert w_n-y_n\Vert ^2\nonumber \\&= \Big (1-\sigma \frac{\gamma _n}{\gamma _{n+1}}\Big )\Vert w_n-y_n\Vert ^2. \end{aligned}$$
(3.23)

We can easily see from Remark 3.3 (ii) that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Big (\frac{\gamma _{n+1} +\sigma \gamma _n}{\gamma _{n+1} -\sigma \gamma _n}\Big ) = \Big (\frac{1 +\sigma }{1 -\sigma }\Big )>0. \end{aligned}$$

Hence, from (3.23), we obtain

$$\begin{aligned} \Vert w_n-y_n\Vert ^2&\le \frac{1}{\Big (1-\sigma \frac{\gamma _n}{\gamma _{n+1}}\Big )}\langle w_n-y_n, d_n \rangle \\&= \frac{1}{\Big (1-\sigma \frac{\gamma _n}{\gamma _{n+1}}\Big )}\tau _n\Vert d_n\Vert ^2\\&\le \frac{1}{\Big (1-\sigma \frac{\gamma _n}{\gamma _{n+1}}\Big )}\tau _n\Vert d_n\Vert \Big (\Vert w_n-y_n\Vert + \gamma _n\Vert Aw_n-Ay_n\Vert \Big )\\&\le \frac{1}{\Big (1-\sigma \frac{\gamma _n}{\gamma _{n+1}}\Big )}\tau _n\Vert d_n\Vert \Big (\Vert w_n-y_n\Vert + \sigma \frac{\gamma _n}{\gamma _{n+1}}\Vert w_n-y_n\Vert \Big )\\&= \frac{\Big (1+\sigma \frac{\gamma _n}{\gamma _{n+1}}\Big )}{\Big (1-\sigma \frac{\gamma _n}{\gamma _{n+1}}\Big )}\tau _n\Vert d_n\Vert \Vert w_n-y_n\Vert \\&=l^{-1}\Big (\frac{\gamma _{n+1} +\sigma \gamma _n}{\gamma _{n+1} -\sigma \gamma _n}\Big )\Vert z_n-w_n\Vert \Vert w_n-y_n\Vert \\&\le J_4\Vert z_n-w_n\Vert , \end{aligned}$$

where \(J_4=\sup _{n\in \mathbb {N}}\Big \{l^{-1}\Big (\frac{\gamma _{n+1} +\sigma \gamma _n}{\gamma _{n+1} -\sigma \gamma _n}\Big )\Vert w_n-y_n\Vert \Big \}.\) Observe that by (3.23) and the definition of \(z_n,\) the last inequality still holds if \(d_n=0.\) This completes the proof.

We now proceed to state and prove the first strong convergence theorem for our proposed Algorithm 3.1.

Theorem 3.8

Let \(\{x_n\}\) be a sequence generated by Algorithm 3.1 such that conditions \((A1)-(A4)\) and \((B1)-(B3)\) hold, and \(Ax\ne 0, ~~ \forall x\in C.\) Then, \(\{x_n\}\) converges strongly to an element \(x^*\in V_D\subset V_I,\) where \(\Vert x^*\Vert =\min \{\Vert p\Vert :p\in V_D \subset V_I\}.\)

Proof

Let \(\Vert x^*\Vert =\min \{\Vert p\Vert :p\in V_D \subset V_I\},\) then \(x^*=P_{V_D}(0).\) It follows that \(x^*\in V_D.\) Then, from (3.22) we obtain

$$\begin{aligned} \Vert x_{n+1}-x^*\Vert ^2&\le [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-x^*\Vert ^2\nonumber \\&\quad + (1-\alpha _n)(1-\beta _n)\Big [\frac{3J_2}{(1-\beta _n)} \frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert \nonumber \\&~+ 2\beta _n \Vert z_n-w_n\Vert \Vert x_{n+1}-x^*\Vert + 2 \langle x^*, x^*-x_{n+1}\rangle \Big ]\nonumber \\&= [1-(1-\alpha _n)(1-\beta _n)]\Vert x_n-x^*\Vert ^2 + (1-\alpha _n)(1-\beta _n)b_n, \end{aligned}$$
(3.24)

where \(b_n= \frac{3J_2}{(1-\beta _n)} \frac{\delta _n}{(1-\alpha _n)}\Vert x_n-x_{n-1}\Vert + 2\beta _n \Vert z_n-w_n\Vert \Vert x_{n+1}-x^*\Vert + 2 \langle x^*, x^*-x_{n+1}\rangle .\) We claim that the sequence \(\{\Vert x_n - x^*\Vert \}\) converges to zero. To establish this claim, it suffices to show by Lemma 2.5 that \(\limsup \limits _{k\rightarrow \infty }b_{n_k}\le 0\) for every subsequence \(\{\Vert x_{n_k} - x^*\Vert \}\) of \(\{\Vert x_n - x^*\Vert \}\) satisfying

$$\begin{aligned} \liminf _{k\rightarrow \infty }(\Vert x_{n_k+1} - x^*\Vert - \Vert x_{n_k} - x^*\Vert ) \ge 0. \end{aligned}$$
(3.25)

Suppose \(\{\Vert x_{n_k} - x^*\Vert \}\) is a subsequence of \(\{\Vert x_n - x^*\Vert \}\) such that (3.25) holds. From Lemma 3.6, we have

$$\begin{aligned}&[1-(1-\alpha _{n_k})(1-\beta _{n_k})]^2\beta _{n_k} l^{-1}(2-l)\Vert z_{n_k}-w_{n_k}\Vert ^2 \nonumber \\&\quad \le [1-(1-\alpha _{n_k})(1-\beta _{n_k})]\Vert x_{n_k}-x^*\Vert ^2 - \Vert x_{{n_k}+1}-x^*\Vert ^2\nonumber \\&\qquad +(1-\alpha _{n_k})(1-\beta _{n_k})\Big [\frac{3J_2}{(1-\beta _{n_k})}\frac{\delta _{n_k}}{(1-\alpha _{n_k})}\Vert x_{n_k}-x_{{n_k}-1}\Vert \nonumber \\&\qquad + J_3\Vert x_{{n_k}+1}-x^*\Vert + 2 \langle x^*, x^*-x_{{n_k}+1}\rangle \Big ]. \end{aligned}$$

By applying (3.25), the fact that \(\lim _{k\rightarrow \infty }(1-\alpha _{n_k})=0\) and Remark 3.3, we obtain

$$\begin{aligned}{}[1-(1-\alpha _{n_k})(1-\beta _{n_k})]^2\beta _{n_k} l^{-1}(2-l)\Vert z_{n_k}-w_{n_k}\Vert ^2\rightarrow 0, ~~ ~ k\rightarrow \infty . \end{aligned}$$

By the conditions on the control parameters, we obtain

$$\begin{aligned} \Vert z_{n_k}-w_{n_k}\Vert \rightarrow 0, ~~ ~ k\rightarrow \infty . \end{aligned}$$
(3.26)

Also, by Remark 3.3, we get

$$\begin{aligned} \Vert w_{n_k}-x_{n_k}\Vert = \delta _{n_k}\Vert x_{n_k}-x_{n_k-1}\Vert \rightarrow 0, ~~ ~ k\rightarrow \infty . \end{aligned}$$
(3.27)

Then, from (3.26) and (3.27), we obtain

$$\begin{aligned} \Vert z_{n_k}-x_{n_k}\Vert \le \Vert z_{n_k}-w_{n_k}\Vert + \Vert w_{n_k}-x_{n_k}\Vert \rightarrow 0, ~~ ~ k\rightarrow \infty . \end{aligned}$$
(3.28)

Also, by Lemma 3.7 and (3.26), we obtain

$$\begin{aligned} \Vert w_{n_k}-y_{n_k}\Vert \rightarrow 0, ~~ ~ k\rightarrow \infty . \end{aligned}$$
(3.29)

Moreover, from (3.27) and (3.29) we get

$$\begin{aligned} \Vert y_{n_k}-x_{n_k}\Vert \le \Vert y_{n_k}-w_{n_k}\Vert +\Vert w_{n_k}-x_{n_k}\Vert \rightarrow 0, ~~ ~ k\rightarrow \infty . \end{aligned}$$
(3.30)

Similarly, from (3.26) and (3.29), we obtain

$$\begin{aligned} \Vert z_{n_k}-y_{n_k}\Vert \le \Vert z_{n_k}-w_{n_k}\Vert +\Vert w_{n_k}-y_{n_k}\Vert \rightarrow 0, ~~ ~ k\rightarrow \infty . \end{aligned}$$
(3.31)

By the definition of \(x_{n+1}\) and using \(\lim _{k\rightarrow \infty }(1-\alpha _{n_k})=0,\) (3.27) together with (3.28), we have

$$\begin{aligned} \Vert x_{{n_k}+1}-x_{n_k}\Vert&=\Vert (1-\beta _{n_k})(\alpha _{n_k}w_{n_k})+\beta _{n_k}z_{n_k}-x_{n_k}\Vert \nonumber \\&=\Vert \alpha _{n_k}(1-\beta _{n_k})(w_{n_k}-x_{n_k})+\beta _{n_k}(z_{n_k}-x_{n_k})-(1-\alpha _{n_k})(1-\beta _{n_k})x_{n_k}\Vert \nonumber \\&\le \alpha _{n_k}(1-\beta _{n_k})\Vert w_{n_k}-x_{n_k}\Vert +\beta _{n_k}\Vert z_{n_k}-x_{n_k}\Vert \nonumber \\&\quad +(1-\alpha _{n_k})(1-\beta _{n_k})\Vert x_{n_k}\Vert \rightarrow 0,\quad k\rightarrow \infty . \end{aligned}$$
(3.32)

Now, we complete the proof by first showing that \(w_{\omega }(x_n)\subset V_D.\) Since \(\{x_n\}\) is bounded, then \(w_{\omega }(x_n)\ne \emptyset .\) Let \(\hat{x}\in w_{\omega }(x_n)\) be an arbitrary element. Then, from (3.27) and (3.30), we have that \(w_{\omega }(x_n)=w_{\omega }(w_n)=w_{\omega }(y_n).\) Since \(y_n\in C\) and C is weakly closed, we have \(\hat{x}\in C.\) So, by the assumption that \(Ax\ne 0, ~~ \forall x\in C\) we have \(A\hat{x}\ne 0.\) Thus, by (3.29) and Lemma 3.5 we have that \(\hat{x}\in V_D.\) Since \(\hat{x}\in w_{\omega }(x_n)\) was chosen arbitrarily, we obtain \(w_{\omega }(x_n)\subset V_D.\)

Next, since \(\{x_{n_k}\}\) is bounded, there exists a subsequence \(\{x_{n_{k_j}}\}\) of \(\{x_{n_k}\},\) such that \(x_{n_{k_j}}\rightharpoonup x^{\dagger },\) and

$$\begin{aligned} \lim _{j\rightarrow \infty }\langle x^*,x^*-x_{n_{k_j}} \rangle = \limsup _{k\rightarrow \infty }\langle x^*, x^*-x_{n_k} \rangle = \limsup _{k\rightarrow \infty }\langle x^*, x^*-y_{n_k} \rangle . \end{aligned}$$
(3.33)

Since \(x^*=P_{V_D}(0),\) it follows from (3.33) that

$$\begin{aligned} \limsup _{k\rightarrow \infty }\langle x^*, x^*-x_{n_k} \rangle = \lim _{j\rightarrow \infty }\langle x^*,x^*-x_{n_{k_j}} \rangle = \langle x^*, x^*-x^\dagger \rangle \le 0. \end{aligned}$$
(3.34)

From (3.32) and (3.34), we obtain

$$\begin{aligned} \limsup _{k\rightarrow \infty }\langle x^*, x^*- x_{n_k+1} \rangle = \limsup _{k\rightarrow \infty }\langle x^*, x^*- x_{n_k} \rangle = \langle x^*, x^*- x^\dagger \rangle \le 0. \end{aligned}$$
(3.35)

By Remark 3.3, (3.26) and (3.35), we have \(\limsup \limits _{k\rightarrow \infty }b_{n_k}\le 0.\) Thus, by appealing to Lemma 2.5, it follows from (3.24) that \(\lim \limits _{n\rightarrow \infty }\Vert x_n - x^*\Vert =0\) as required. Hence, the proof is complete.

Remark 3.9

We note that the quasimonotonicity of the mapping A was only employed in Case 2 of Lemma 3.5. Now, we proceed to prove the second strong convergence theorem for the proposed Algorithm 3.1 without recourse to the monotonicity property.

Lemma 3.10

Assume that \(\{w_n\}\) and \(\{y_n\}\) are sequences generated by Algorithm 3.1 such that conditions (A1)–(A2), (A3')–(A4') and (B1)–(B3) hold. Suppose there exists a subsequence \(\{w_{n_k}\}\) of \(\{w_n\}\) such that \(w_{n_k}\rightharpoonup x^*\in H\) and \(\Vert y_{n_k}-w_{n_k}\Vert \rightarrow 0\) as \(k\rightarrow \infty .\) Then, either \(x^*\in V_D\) or \(Ax^*=0.\)

Proof

From (3.16), following a similar argument as in Lemma 3.5 and fixing \(z\in C,\) we have that \(y_{n_k}\rightharpoonup x^*\in C\) and

$$\begin{aligned} \liminf _{k\rightarrow \infty }\langle Ay_{n_k}, z-y_{n_k} \rangle \ge 0. \end{aligned}$$

Next, we choose a positive sequence \(\eta _k\) such that \(\lim _{k\rightarrow \infty }\eta _k=0\) and

$$\begin{aligned} \langle Ay_{n_k}, z-y_{n_k} \rangle + \eta _k >0, ~~ ~ \forall k\in \mathbb {N}. \end{aligned}$$

Hence, we obtain

$$\begin{aligned} \langle Ay_{n_k}, z \rangle + \eta _k > \langle Ay_{n_k}, y_{n_k} \rangle , ~~ ~ \forall k\in \mathbb {N}. \end{aligned}$$
(3.36)

We set \(z=x^*\) in (3.36) to obtain

$$\begin{aligned} \langle Ay_{n_k}, x^* \rangle + \eta _k > \langle Ay_{n_k}, y_{n_k} \rangle , ~~ ~ \forall k\in \mathbb {N}. \end{aligned}$$

Then, as \(k\rightarrow \infty ,\) by applying condition \((A3')\) and the fact that \(y_{n_k}\rightharpoonup x^*,\) from the last inequality we have

$$\begin{aligned} \langle Ax^*, x^* \rangle \ge \limsup _{k\rightarrow \infty }\langle Ay_{n_k}, y_{n_k} \rangle . \end{aligned}$$

Then, by condition \((A4'),\) we obtain

$$\begin{aligned} \langle Ax^*, x^* \rangle = \lim _{k\rightarrow \infty }\langle Ay_{n_k}, y_{n_k} \rangle . \end{aligned}$$

From (3.36), we obtain

$$\begin{aligned} \langle Ax^*, z \rangle&=\lim _{k\rightarrow \infty }(\langle Ay_{n_k}, z \rangle +\eta _k) \\&\ge \liminf _{k\rightarrow \infty }\langle Ay_{n_k}, y_{n_k} \rangle \\&=\lim _{k\rightarrow \infty }\langle Ay_{n_k}, y_{n_k} \rangle \\&= \langle Ax^*, x^* \rangle . \end{aligned}$$

Thus, we have

$$\begin{aligned} \langle Ax^*, z-x^* \rangle \ge 0, ~~ ~ \forall z\in C. \end{aligned}$$

Since \(z\in C\) was arbitrary, it follows that \(x^*\in V_D.\) Hence, either \(x^*\in V_D\) or \(Ax^*=0,\) as required.

Theorem 3.11

Let \(\{x_n\}\) be a sequence generated by Algorithm 3.1 such that conditions (A1)–(A2), (A3’)–(A4’) and (B1)–(B3) hold, and \(Ax\ne 0, ~~ \forall x\in C.\) Then, \(\{x_n\}\) converges strongly to an element \(x^*\in V_D\subset V_I,\) where \(\Vert x^*\Vert =\min \{\Vert p\Vert :p\in V_D \subset V_I\}.\)

Proof

By following a similar argument as in the proof of Theorem 3.8 and applying Lemma 3.10, we obtain the required result.

4 Numerical examples

In this section, we carry out some numerical experiments to illustrate the efficiency of our proposed Algorithm 3.1 (Proposed Alg.) in comparison with Algorithm 1.2 (Liu & Yang Alg.), Algorithm 1.5 (Yin et al. Alg.), Algorithm 1.6 (Yin & Hussain Alg.), Appendix 5.1 (Izuchukwu et al. Alg.) and Appendix 5.2 (Alakoya et al. Alg.). In our experiments, we choose for each \(n\in \mathbb {N}:~ \alpha _n=\frac{n}{n+2}, ~ \beta _n=\frac{n}{2n+1}, ~\epsilon _n=\frac{2}{(n+2)^3},~ \delta =0.89, ~ \theta _n=\frac{1000}{(n+1)^{1.5}},~ \gamma _0=0.9,~ \sigma =0.95, ~l=0.89\) in Algorithm 3.1; \(\eta _n=0.25, \rho _n=0.30, Tx=\frac{x}{2}\) in Algorithm 1.5; and \(\gamma _1=0.75,\vartheta =0.2\) in Appendix 5.1.

We perform our experiments using MATLAB R2022b, as follows:

Example 4.1

We consider the following problem from Liu and Yang (2020). Let \(C:= [-1,1]\) and

$$\begin{aligned} Ax= {\left\{ \begin{array}{ll} 2x-1 \quad \quad x>1,\\ x^2 \quad \quad x\in [-1,1],\\ -2x-1 \quad \quad x<-1. \end{array}\right. } \end{aligned}$$

We see that A is quasimonotone and Lipschitz continuous. Also \(V_D=\{-1\}\) and \(V_I=\{-1,0\}\).

We use \(|x_{n+1}-x_n|< 10^{-4}\) as the stopping criterion and choose different starting points as follows:

Case a: \(x_0=0.5000,~x_1=0.0100;\)

Case b: \(x_0=0.4961,~x_1=0.0324;\)

Case c: \(x_0=0.6047,~x_1=0.0209;\)

Case d: \(x_0=0.5674,~x_1=0.0186.\)

The numerical results are reported in Figs. 1, 2, 3, 4 and Table 1.

Fig. 1 Example 4.1: Case I

Fig. 2 Example 4.1: Case II

Fig. 3 Example 4.1: Case III

Fig. 4 Example 4.1: Case IV

Example 4.2

See Izuchukwu et al. (2022). Let \(C:=[0,1]^m\) and \(Ax=(h_1x, h_2x,\ldots ,h_mx),\) where

$$\begin{aligned} h_ix=x_{i-1}^2+x_i^2+x_{i-1}x_i+x_ix_{i+1}-2x_{i-1}+4x_i+x_{i+1}-1, \quad i=1,2,\ldots ,m, \quad x_0=x_{m+1}=0. \end{aligned}$$

We consider the cases \(m = 5, m = 10, m = 20\) and \(m = 40\), while the starting points \(x_0\) and \(x_1\) are generated randomly. We use \(\Vert x_{n+1}-x_n\Vert < 10^{-3}\) as the stopping criterion. The numerical results are reported in Figs. 5, 6, 7, 8 and Table 2.

Table 1 Numerical results for Example 4.1

Fig. 5 Example 4.2 with \(m = 5\)

Fig. 6 Example 4.2 with \(m = 10\)

Fig. 7 Example 4.2 with \(m = 20\)

Fig. 8 Example 4.2 with \(m = 40\)

Fig. 9 Example 4.3: Case I

Fig. 10 Example 4.3: Case II

Fig. 11 Example 4.3: Case III

Fig. 12 Example 4.3: Case IV

Next, we present an example in an infinite-dimensional Hilbert space and compare our proposed Algorithm 3.1 with Appendix 5.2, which also establishes a strong convergence result.

Example 4.3

Let \(H=\ell _2:=\{x=(x_1,x_2,\ldots ,x_i,\ldots ):\sum _{i=1}^{\infty }|x_i|^2<+\infty \}\) and \(C:=\{x\in \ell _2:\Vert x\Vert \le 3\}.\) Let \(P:\ell _2\rightarrow \ell _2\) be defined by \(P(x_1,x_2,x_3,\ldots )=(x_1,0,0,\ldots ),\) and let \(h(x):=\mu (\langle y,x \rangle )\) with \(y=(1,0,0,\ldots )\in \ell _2\) and \(\mu (t):=e^{-t^2}.\) We define \(Ax:=h(x)P(x),\) that is, \(Ax= (x_1 e^{-x_{1}^2},0,0,\ldots )\) for \(x=(x_1,x_2,x_3,\ldots )\in \ell _2.\) Then A is quasimonotone, but not monotone on H (see Izuchukwu et al. 2022).

We use \(\Vert x_{n+1}-x_n\Vert < 10^{-4}\) as the stopping criterion and choose different starting points as follows:

Case a: \(x_0=(1,-0.1,0.01,\ldots ),~x_1=(-\frac{1}{3},\frac{1}{9},-\frac{1}{27},\ldots )\),

Case b: \(x_0=(3,1,\frac{1}{3},\ldots ),~x_1=(-\frac{1}{3},\frac{1}{9},-\frac{1}{27},\ldots )\),

Case c: \(x_0=(2,-1,\frac{1}{2},\ldots ),~x_1=(-\frac{1}{3},\frac{1}{9},-\frac{1}{27},\ldots ),\)

Table 2 Numerical results for Example 4.2

Case d: \(x_0=(2,0.2,0.02,\ldots ),~x_1=(-\frac{1}{3},\frac{1}{9},-\frac{1}{27},\ldots )\).

The numerical results are reported in Figs. 9, 10, 11, 12 and Table 3.

5 Conclusion

In this paper, we studied the class of quasimonotone variational inequality problems and the class of variational inequality problems without monotonicity. We proposed a new Mann-type inertial projection and contraction method for approximating the solutions of these two classes of variational inequality problems. We proved some strong convergence theorems for the proposed algorithm under more relaxed conditions and without the sequential weak continuity condition often assumed by authors. Finally, we presented some numerical experiments and compared our method with some existing methods. The numerical results given in Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 and Tables 1, 2, 3 show that our method performs better than these existing methods.

Table 3 Numerical results for Example 4.3