1 Introduction

Variational inequality theory, which was introduced by Stampacchia [173] and Fichera [38] independently, has emerged as an interesting and fascinating branch of applied mathematics with a wide range of applications in industry, finance, economics, and the social, pure and applied sciences. Variational inequalities may be viewed as a novel generalization of the variational principles, the origin of which can be traced back to Euler, Lagrange and the Bernoulli brothers. Variational principles have played a crucial role in the development of various fields of science and have appeared as a unifying force. The ideas and techniques of variational inequalities are being applied in a variety of diverse areas and prove to be productive and innovative. Variational inequalities have been extended and generalized in several directions using novel techniques. Finding the minimum of a differentiable convex function \(F\) on the convex set \(K\) is equivalent to finding \(u \in K \) such that

$$\begin{aligned} \langle F^{\prime }(u), v-u \rangle \geq 0, \quad \forall v\in K, \end{aligned}$$
(1)

which is known as the variational inequality. Here \(F^{\prime }(u) \) is the Fréchet differential of \(F\) at \(u\). Stampacchia [173] proved that potential problems associated with elliptic equations can be studied via the variational inequality (1). This simple fact inspired a great interest in variational inequalities. Lions and Stampacchia [54] studied the existence of a solution of variational inequalities using essentially the auxiliary principle technique coupled with the projection idea.

Lemke [62] considered the problem of finding \(u\in R^{n}_{+} \) such that

$$\begin{aligned} u \geq 0, \quad Au \geq 0, \quad \langle Au, u\rangle =0, \end{aligned}$$
(2)

which is called the linear complementarity problem. Here \(A \) is a linear operator. Lemke [62] proved that two-person game problems can be studied in the framework of the linear complementarity problem (2). See also Lemke and Howson, Jr. [63] and Cottle et al. [27] for the nonlinear complementarity problems.

It is worth mentioning that problems (1) and (2) are different and have been studied in infinite-dimensional and finite-dimensional spaces independently, using quite different techniques. However, Karamardian [57] established that problems (1) and (2) are equivalent if the underlying set \(K \) is a convex cone. This equivalent formulation played an important role in developing several techniques for solving these problems.

If the convex set \(K\) depends upon the solution explicitly or implicitly, then the variational inequality is called a quasi variational inequality. Quasi variational inequalities were introduced and investigated by Bensoussan and Lions [15] in control theory. In fact, for a given operator \(T: H \longrightarrow H\) and a point-to-set mapping \(K : u \longrightarrow K(u) \), which associates a closed convex set \(K(u)\) with any element \(u \) of \(H\), we consider the problem of finding \(u \in K(u)\) such that

$$\begin{aligned} \langle Tu, v-u \rangle \geq 0, \quad \forall v \in K(u), \end{aligned}$$
(3)

which is known as the quasi variational inequality. Chan and Pang [22] considered the generalized quasi variational inequalities for set-valued operators. Noor [84] established the equivalence between the quasi variational inequalities and the fixed point formulation and used this equivalence to suggest some iterative methods for solving (3). This equivalence was used to study the existence of a solution of quasi variational inequalities and develop numerical methods.

Related to the quasi variational inequality, we have the problem of finding \(u \in H \) such that

$$\begin{aligned} u\geq m(u), \quad Tu \geq 0, \quad \langle Tu, u-m(u) \rangle =0, \end{aligned}$$
(4)

which is called the implicit (quasi) complementarity problem, where \(m \) is a point-to-point mapping. Using the technique of Karamardian [57], Pang [155] and Noor [84] established the equivalence between the problems (3) and (4). Noor [85, 86] has used the change of variables technique to prove that the implicit complementarity problems are equivalent to the fixed point problem. This alternative formulation played an important part in the development of iterative methods for solving various types of complementarity problems and related optimization problems. It is an interesting problem to extend this technique for solving variational inequalities.

Motivated and inspired by the ongoing research in these fields, Noor [87] introduced and investigated a new class of variational inequalities involving two operators. For given nonlinear operators \(T,g \), consider the problem of finding \(u \in H: g(u) \in K \), such that

$$\begin{aligned} \langle Tu, g(v)-g(u) \rangle \geq 0, \quad \forall v\in H: g(v) \in K, \end{aligned}$$
(5)

which is known as the general (Noor) variational inequality. It turned out that odd-order and nonsymmetric obstacle, free, unilateral and moving boundary value problems arising in pure and applied sciences can be studied via the general variational inequalities, cf. [87,88,89,90,91].

If \(K\) is a convex cone, then the general variational inequality (5) is equivalent to finding \(u \in H \) such that

$$\begin{aligned} g(u)\in K, \quad Tu \in K^{*}, \quad \langle Tu, g(u) \rangle =0, \end{aligned}$$
(6)

which is known as the general complementarity problem, where \(K^{*} \) is the dual (polar) cone. We would like to point out that for appropriate and suitable choice of the operators \(T, g \) and the convex sets \(K \), one can obtain several known and new classes of variational inequalities and complementarity problems as special cases of the problem (5).

During the years that have elapsed since its discovery, a number of numerical methods, including the projection method and its variant forms, Wiener-Hopf equations, the auxiliary principle, and dynamical systems, have been developed for the solution of variational inequalities and related optimization problems. The projection method and its variant forms, including the Wiener-Hopf equations, represent important tools for finding the approximate solution of variational inequalities, the origin of which can be traced back to Lions and Stampacchia [66]. The main idea in this technique is to establish the equivalence between the variational inequalities and a fixed-point problem by using the concept of projection. This alternative formulation has played a significant part in developing various projection-type methods for solving variational inequalities. It is well known that the convergence of the projection methods requires that the operator be strongly monotone and Lipschitz continuous. Unfortunately these strict conditions rule out many applications of this method, a fact which motivated the modification of the projection method and the development of other methods. The extragradient-type methods overcome this difficulty by performing an additional forward step and an additional projection at each iteration, that is, by a double-projection scheme. These methods can be viewed as predictor-corrector methods. Their convergence requires only that a solution exists and that the monotone operator is Lipschitz continuous. When the operator is not Lipschitz continuous, or when the Lipschitz constant is not known, the extragradient method and its variant forms require an Armijo-like line search procedure to compute the step size, with a new projection needed for each trial, which leads to expensive computation. To overcome these difficulties, several modified projection and extragradient-type methods have been suggested and developed for solving variational inequalities. See [6,7,8,9,10, 17, 18, 24, 36, 37, 39, 42,43,44, 49, 55, 63, 68, 71, 79, 82, 83, 90, 92, 95, 100,101,102,103,104, 106, 107, 110,111,112,113,114, 116, 117, 119, 120, 133,134,135, 137, 138, 143,144,145,146, 150, 152, 153, 160] and the references therein.
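To make these schemes concrete, here is a minimal Python sketch (our own illustration, not taken from the cited works) contrasting the basic projection iteration with the extragradient predictor-corrector iteration for the classical variational inequality; the names `T`, `project` and the affine test problem are assumptions, and a fixed step size \(\rho \) is used instead of a line search.

```python
import numpy as np

def projection_method(T, project, u0, rho=0.1, tol=1e-8, max_iter=10000):
    """Basic projection iteration u <- P_K[u - rho*T(u)].

    Converges when T is strongly monotone and Lipschitz continuous."""
    u = u0
    for _ in range(max_iter):
        u_new = project(u - rho * T(u))
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u

def extragradient_method(T, project, u0, rho=0.1, tol=1e-8, max_iter=10000):
    """Extragradient (predictor-corrector) iteration: one extra forward
    step and one extra projection per iteration (double projection).

    Converges when a solution exists and T is monotone and Lipschitz."""
    u = u0
    for _ in range(max_iter):
        y = project(u - rho * T(u))       # predictor (forward step)
        u_new = project(u - rho * T(y))   # corrector
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u

# Toy test (our assumption): K = R^n_+, T affine and monotone;
# the exact solution is (0.25, 0.5).
A = np.array([[2.0, 1.0], [0.0, 2.0]])
b = np.array([-1.0, -1.0])
T = lambda u: A @ u + b
project = lambda z: np.maximum(z, 0.0)
print(projection_method(T, project, np.zeros(2)))
print(extragradient_method(T, project, np.zeros(2)))
```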

In Sect. 4, we present the concept of the general Wiener-Hopf equations, which was introduced by Noor [90]. As a special case, we obtain the original Wiener-Hopf equations, which were considered and studied by Shi [168] and Robinson [166] in conjunction with variational inequalities from different points of view. Using the projection technique, one usually establishes the equivalence between the variational inequalities and the Wiener-Hopf equations. It turns out that the Wiener-Hopf equations are more general and flexible. This approach has played an important part not only in developing various efficient projection-type methods, but also in studying the sensitivity analysis and dynamical systems, as well as other aspects of variational inequalities. Noor, Wang and Xiu [152, 153] and Noor and Rassias [146] have suggested and analyzed some predictor-corrector-type projection methods by modifying the Wiener-Hopf equations. These methods are also known as forward-backward methods, see Tseng [179, 180]. It has been shown that these predictor-corrector-type methods are efficient and robust. Some numerical examples are given to illustrate the efficiency and implementation of the proposed methods. Consequently, our results represent a refinement and improvement of the known results.

Section 5 is devoted to the concept of the projected dynamical system in the context of variational inequalities, which was introduced by Dupuis and Nagurney [35] by using the fixed-point formulation of the variational inequalities. For recent developments and applications of dynamical systems, see [13, 34, 35, 40, 41, 56, 75, 79, 109, 115, 116, 122]. In this technique, we reformulate the variational inequality problem as an initial value problem. By discretizing the dynamical system, we suggest some new iterative methods for solving the general variational inequalities.
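As a hedged sketch of this idea (ours; `T`, `project` and all parameters are assumptions), a forward-Euler discretization of the projected dynamical system \(\frac{du}{dt} = \lambda \{P_{K}[u-\rho Tu]-u\}\) yields an iterative method whose equilibria are exactly the solutions of the variational inequality:

```python
import numpy as np

def projected_dynamical_system(T, project, u0, rho=0.1, lam=1.0,
                               h=0.5, steps=2000):
    """Forward-Euler discretization of du/dt = lam*(P_K[u - rho*T(u)] - u).

    An equilibrium satisfies u = P_K[u - rho*T(u)], i.e. it solves the
    variational inequality; h*lam in (0, 1] keeps each step a convex
    combination of u and its projected point."""
    u = u0
    for _ in range(steps):
        u = u + h * lam * (project(u - rho * T(u)) - u)
    return u

# Toy test (our assumption): T(u) = u - c over K = R^n_+;
# the trajectory settles at the solution (1, 0).
c = np.array([1.0, -2.0])
print(projected_dynamical_system(lambda u: u - c,
                                 lambda z: np.maximum(z, 0.0), np.zeros(2)))
```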

It is a well-known fact that, in order to implement projection-type methods, one has to evaluate the projection, which is itself a difficult problem. Secondly, the projection and Wiener-Hopf equation techniques cannot be extended and generalized for some classes of variational inequalities involving nonlinear (non)differentiable functions, see [92, 94, 108]. These facts motivated the use of the auxiliary principle technique. This technique deals with formulating an auxiliary variational inequality and proving that the solution of the auxiliary problem is the solution of the original problem by using a fixed-point approach. It turns out that this technique can be used to find equivalent differentiable optimization problems, which enables us to construct gap (merit) functions. Glowinski et al. [47] used this technique to study the existence of a solution of mixed variational inequalities. Noor [93,94,95, 100, 101, 114, 121, 122] has used this technique to suggest some predictor-corrector methods for solving various classes of variational inequalities. It is well known that a substantial number of numerical methods can be obtained as special cases of this technique. We use this technique to suggest and analyze some explicit predictor-corrector methods for general variational inequalities. In this paper, we give the basic idea of the inertial proximal methods and show that the auxiliary principle technique can be used to construct gap (merit) functions. We use the gap function to consider an optimal control problem governed by the general variational inequalities. The control problem, as an optimization problem, is also referred to as a generalized bilevel programming problem or a mathematical program with equilibrium constraints. These results are mainly due to Dietrich [32, 33]. It is an open problem to compare the efficiency of the inertial methods with other methods, and this is another direction for future research.

In Sect. 7, we discuss the application of the penalty function method, which was introduced by Lewy and Stampacchia [64] to study the regularity of the solutions of variational inequalities. It is known that finite difference and similar numerical methods cannot be applied directly to find the approximate solutions of obstacle, free and moving boundary value problems, due to the presence of the obstacle and other constraint conditions. However, if the obstacle is known, then these obstacle and unilateral problems can be characterized by a system of differential equations, in conjunction with the general variational inequalities, using the penalty function technique. Al-Said [3], Noor and Al-Said [112], Noor and Tirmizi [148] and Al-Said et al. [4, 5] used this technique to develop some numerical methods for solving such systems of differential equations. The main advantage of this technique is its simple applicability for solving systems of differential equations. We present here only the main idea of this technique for solving odd-order obstacle and unilateral problems.

In recent years, much attention has been given to the study of equilibrium problems, which were considered and studied by Blum and Oettli [19] as well as Noor and Oettli [142]. It is known that equilibrium problems include variational inequalities and complementarity problems as special cases. It is remarked that there are very few iterative methods for solving equilibrium problems, since the projection method and its variant forms including the Wiener-Hopf equations cannot be extended for these problems. We use the auxiliary principle technique to suggest and analyze some iterative type methods for solving general equilibrium problems, which is considered in Sect. 8.

Hanson [50] introduced the concept of invex functions to study mathematical programming problems; these appeared to be a significant generalization of the convex functions. Ben-Israel and Mond [14] considered the concepts of invex sets and preinvex functions. They proved that differentiable preinvex functions are invex functions. Mohan and Neogy [74] proved that the converse is also true under certain conditions. Noor [93] proved that the optimality conditions can be characterized by a class of variational inequalities, which are called variational-like inequalities. Due to the inherent nonlinearity, one cannot use projection-type iterative methods to study the existence of solutions or to develop numerical methods for variational-like inequalities. However, one can use the auxiliary principle technique to study the existence theory and numerical methods for variational-like inequalities. Fulga and Preda [43] as well as Awan et al. [11] considered general invex sets and general preinvex functions involving an arbitrary function and studied their basic properties. We show that the minimum of a differentiable general preinvex function is characterized by a class of variational-like inequalities. This fact motivated us to introduce general variational-like inequalities and study their properties. We have used the auxiliary principle technique to analyze some iterative methods for solving the general variational-like inequalities. Several special cases are discussed as applications of the general variational-like inequalities. These aspects are discussed in Sect. 9.

In Sect. 10, we consider the concept of higher order strongly general convex functions involving an arbitrary function, which can be viewed as a novel and innovative extension of the strongly convex functions. Polyak [159] introduced strongly convex functions in 1966 in order to study optimization problems. Zhu and Marcotte [201] discussed the role of strongly convex functions in the analysis of iterative methods for solving variational inequalities. Mohsen et al. [75] introduced the higher order strongly convex functions involving a bifunction, which can be viewed as a significant refinement of the higher order strongly convex functions considered by Lin and Fukushima [65] in mathematical programming with equilibrium constraints. They have shown that parallelogram laws for Banach spaces can be obtained as applications of the higher order strongly convex functions. Parallelogram laws for Banach spaces were analyzed by Bynum [21] and Chen et al. [23,24,25], and are applied in prediction theory and information technology. We have investigated some basic properties of the higher order strongly general convex functions and have shown that the optimality conditions of the differentiable higher order strongly general convex functions can be expressed as higher order general variational inequalities.

Higher order general variational inequalities are introduced in Sect. 11. Some iterative methods are suggested and analyzed for solving higher order general variational inequalities. It is shown that general variational inequalities related to optimization problems can be obtained as applications.

Related to the convex functions, we have the concept of exponentially convex (concave) functions, which have important applications in information theory, big data analysis, machine learning and statistics. Exponentially convex functions have appeared as a significant generalization of the convex functions, the origin of which can be traced back to Bernstein [16]. Avriel [9, 10] introduced the concept of \(r\)-convex functions, from which one can deduce the exponentially convex functions. Antczak [2] considered the \((r, p)\)-convex functions and discussed their applications in mathematical programming and optimization theory. Alirazaie and Mahar [1] investigated the impact of exponentially concave functions in information theory. Zhao et al. [200] discussed some characterizations of \(r\)-convex functions. Awan et al. [5] also investigated some classes of exponentially convex functions. Noor and Noor [132,133,134,135,136,137,138] discussed the characterization of several classes of exponentially convex functions. In Sect. 12, we introduce the concept of strongly exponentially general convex functions and show that these enjoy some of the nice properties that convex functions have.

The theory of general variational inequalities is quite broad, so we shall limit ourselves here to giving the flavor of the ideas and techniques involved. The techniques used in the analysis of iterative methods and other results for general variational inequalities are a beautiful blend of ideas from the pure and applied mathematical sciences. In this paper, we have presented some results regarding the development of various algorithms, their convergence analysis and the penalty computational technique. Although this paper is expository in nature, our choice has been to consider some interesting aspects of general variational inequalities. The framework chosen should be seen as a model setting for more general results for other classes of variational inequalities and variational inclusions. One of the main purposes of this expository paper is to demonstrate the close connection among various classes of algorithms for the solution of general variational inequalities and to point out that researchers in different fields of variational inequalities and optimization have been following parallel paths. We would like to emphasize that the results obtained and discussed in this paper may motivate a large number of novel and innovative applications as well as extensions in these areas. The comparison of the proposed methods with other techniques needs further effort and is itself an interesting open problem. We have given only a brief introduction to the general variational inequalities in real Hilbert spaces. For some other aspects of the general variational inequalities, readers are referred to the articles of Noor [85,86,87, 89,90,91,92,93,94,95, 100,101,102,103,104,105, 110, 118,119,120,121,122, 124] and the references therein. The interested reader is advised to explore this field further and discover novel and fascinating applications of this theory in Banach and topological spaces.

It is perhaps part of the fascination of the subject that so many branches of pure and applied sciences are involved in variational inequality theory. The task of becoming conversant with a wide spectrum of knowledge is indeed a real challenge. The general theory is quite technical, so we shall limit ourselves here to giving the flavor of the main ideas involved. The techniques used to analyze the existence results and iterative algorithms for variational inequalities are a beautiful blend of ideas from different areas of the pure and applied mathematical sciences. The framework chosen should be seen as a model setting for more general results. Moreover, by relying on these special results, interesting problems arising in applications can be dealt with easily. Our main motivation in this paper is to give a summary account of the basic theory of variational inequalities set in the framework of nonlinear operators defined on convex sets in a real Hilbert space. We focus our attention on iterative methods for solving variational inequalities. The equivalence between the variational inequalities and the Wiener-Hopf equations has been used to suggest some new iterative methods. The auxiliary principle technique is applied to study the existence of the solution and to propose a novel and innovative general algorithm for the general variational inequalities, equilibrium problems and related optimization problems.

2 Preliminaries and Basic Concepts

Let \(H\) be a real Hilbert space, whose inner product and norm are denoted by \(\langle \cdot , \cdot \rangle \) and \(\| \cdot \|\) respectively.

Definition 1

The set \(K\) in \(H\) is said to be a convex set, if

$$\begin{aligned} u+t(v-u)\in K,\quad \forall u,v\in K, t\in [0,1]. \end{aligned}$$

Definition 2

A function \(F\) is said to be a convex function, if

$$\begin{aligned} F((1-t)u+tv) \leq (1-t)F(u)+ tF(v), \quad \forall u,v \in K, \quad t \in [0,1]. \end{aligned}$$

It is well known that a function \(F \) is a convex function, if and only if, it satisfies the inequality

$$\begin{aligned} F\left(\frac{a+b}{2}\right) \leq \frac{1}{b-a} \int ^{b}_{a} F(x)\, dx \leq \frac{F(a)+F(b)}{2}, \quad \forall a, b \in I, \ a < b, \end{aligned}$$

which is known as the Hermite-Hadamard inequality. Inequalities of this type provide us with upper and lower bounds for the mean value of the integral.
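As a quick numerical sanity check (our own illustration; the function and the interval are assumptions), the bounds can be verified for the convex function \(F(x)=x^{2}\) on \([0,2]\):

```python
import numpy as np

# Hermite-Hadamard check for F(x) = x**2 on [a, b]:
# F((a+b)/2) <= mean value of F on [a, b] <= (F(a)+F(b))/2.
F = lambda x: x**2
a, b = 0.0, 2.0
x = np.linspace(a, b, 100001)
mean_value = np.mean(F(x))            # approximates (1/(b-a)) * integral
print(F((a + b) / 2), mean_value, (F(a) + F(b)) / 2)      # 1.0  ~4/3  2.0
print(F((a + b) / 2) <= mean_value <= (F(a) + F(b)) / 2)  # True
```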

If the convex function \(F \) is differentiable, then \(u \in K \) is the minimum of \(F \), if and only if, \(u\in K \) satisfies the inequality

$$\begin{aligned} \langle F^{\prime }(u), v-u \rangle \geq 0, \quad \forall v\in K, \end{aligned}$$

which is called the variational inequality, introduced and studied by Stampacchia [173] in 1964. For applications, sensitivity, dynamical systems, generalizations, and other aspects of variational inequalities, see [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200] and references therein.

Of course, a set need not be a convex set. However, a set may be convex with respect to an arbitrary function. Motivated by this fact, Youness [197] introduced the concept of a general convex set involving an arbitrary function.

Definition 3

The set \(K \) in \(H\) is said to be a general convex set, if there exists an arbitrary function \(g \), such that

$$\begin{aligned} g(u)+t(g(v)-g(u))\in K,\quad \forall u,v\in H: g(u),g(v) \in K, \ t \in [0,1]. \end{aligned}$$

Note that, if \(g =I \), the identity operator, then the general convex set reduces to the classical convex set. Clearly every convex set is a general convex set, but the converse is not true.

For the sake of simplicity, we always assume that \(\forall u,v \in H: g(u), g(v) \in K \), unless otherwise stated.

Definition 4

A function \(F\) is said to be a general convex function, if there exists an arbitrary function \(g \) such that

$$\begin{aligned} F((1-t)g(u)+tg(v)) \leq (1-t)F(g(u)) +t F(g(v)), \quad \forall u, v\in H: g(u), g(v) \in K, \ t\in [0,1]. \end{aligned}$$

It is known that every convex function is a general convex function, but the converse is not true: with a suitable choice of the arbitrary function \(g \), a general convex function need not be convex.

We now define the general convex functions on \(I_{g}= [g(a),g(b)]\).

Definition 5

Let \(I_{g} =[g(a), g(b)]\). Then \(F\) is a general convex function, if and only if,

$$\begin{aligned} \left | \textstyle\begin{array}{c@{\quad }c@{\quad }c} 1&1&1 \\ g(a)& g(x)& g(b) \\ F(g(a))& F(g(x))&F(g(b)) \end{array}\displaystyle \right |\geq 0;\quad g(a)\leq g(x)\leq g(b). \end{aligned}$$

One can easily show that the following are equivalent:

  1. \(F\) is a general convex function.

  2. \(F(g(x))\leq F(g(a))+\frac{F(g(b))-F(g(a))}{g(b)-g(a)}(g(x)-g(a))\).

  3. \(\frac{F(g(x))-F(g(a))}{g(x)-g(a)}\leq \frac{F(g(b))-F(g(a))}{g(b)-g(a)}\).

  4. \((g(b)-g(x))F(g(a)) +(g(a)-g(b))F(g(x))+(g(x)-g(a))F(g(b))\geq 0\).

  5. \(\frac{F(g(a))}{(g(a)-g(x))(g(a)-g(b))}+ \frac{F(g(x))}{(g(x)-g(a))(g(x)-g(b))}+ \frac{F(g(b))}{(g(b)-g(a))(g(b)-g(x))}\geq 0\),

where \(g(x)= (1-t)g(a)+tg(b)\), \(t \in [0,1]\); a numerical spot check is given after the list.
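As a spot check (our own; the choices \(g(x)=x^{3}\) and \(F(y)=y^{2}\) are assumptions, chosen so that \(F\circ g\) is general convex), characterizations 2, 4 and 5 can be verified numerically:

```python
import numpy as np

g = lambda x: x**3                       # an arbitrary one-to-one g
F = lambda y: y**2                       # convex on the image of g
A, B = g(0.5), g(2.0)                    # g(a), g(b)
for t in np.linspace(0.05, 0.95, 10):
    X = (1 - t) * A + t * B              # g(x) = (1-t)g(a) + t*g(b)
    # 2: the chord lies above the function value
    assert F(X) <= F(A) + (F(B) - F(A)) / (B - A) * (X - A) + 1e-12
    # 4: the determinant form is nonnegative
    assert (B - X) * F(A) + (A - B) * F(X) + (X - A) * F(B) >= -1e-12
    # 5: the second divided difference is nonnegative
    dd = (F(A) / ((A - X) * (A - B)) + F(X) / ((X - A) * (X - B))
          + F(B) / ((B - A) * (B - X)))
    assert dd >= -1e-12
print("all characterizations hold")
```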

We now show that the minimum of a differentiable general convex function on \(K\) in \(H\) can be characterized by the general variational inequality. This result is mainly due to Noor [110].

Theorem 1

[110] Let \(F: K \longrightarrow H\) be a differentiable general convex function. Then

\(u \in H: g(u) \in K\) is the minimum of a differentiable general convex function \(F\) on \(K\), if and only if, \(u \in H: g(u) \in K\) satisfies the inequality

$$\begin{aligned} \langle F'(g(u)), g(v)-g(u)\rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$
(7)

where \(F'\) is the differential of \(F\) at \(g(u)\in K\) in the direction \(g(v)-g(u)\).

Proof

Let \(u \in H: g(u) \in K\) be a minimum of the general convex function \(F\) on \(K\). Then

$$\begin{aligned} F(g(u)) \leq F(g(v)), \quad \forall g(v) \in K. \end{aligned}$$
(8)

Since \(K\) is a general convex set, for all \(u,v \in H: g(u), g(v) \in K\) and \(t \in [0,1]\),

$$\begin{aligned} g(v_{t})=g(u)+t(g(v)-g(u)) \in K. \end{aligned}$$

Setting \(g(v)= g(v_{t})\) in (8), we have

$$\begin{aligned} F(g(u)) \leq F(g(u)+ t(g(v)-g(u))) \leq F(g(u))+t(F(g(v))-F(g(u))). \end{aligned}$$

Dividing the above inequality by \(t\) and taking \(t \longrightarrow 0\), we have

$$\begin{aligned} \langle F'(g(u)), g(v)-g(u) \rangle \geq 0, \end{aligned}$$

which is the required result (7).

Conversely, let \(u \in H, g(u) \in K\) satisfy the inequality (7). Since \(F\) is a general convex function, so \(\forall g(u),g(v) \in K, t \in [0,1], g(u)+t(g(v)-g(u)) \in K \) and

$$\begin{aligned} F(g(u)+t(g(v)-g(u))) \leq (1-t)F(g(u))+tF(g(v)), \end{aligned}$$

which implies that

$$\begin{aligned} F(g(v))-F(g(u)) \geq \frac{F(g(u)+t(g(v)-g(u)))-F(g(u))}{t}. \end{aligned}$$

Letting \(t \longrightarrow 0\), we have

$$\begin{aligned} F(g(v))-F(g(u)) \geq \langle F'(g(u)),g(v)-g(u)\rangle \geq 0, \quad \text{using (7),} \end{aligned}$$

which implies that

$$\begin{aligned} F(g(u)) \leq F(g(v)), \quad \forall g(v) \in K, \end{aligned}$$

showing that \(u \in H: g(u) \in K\) is the minimum of \(F\) on \(K\) in \(H\). □

Theorem 1 implies that general convex programming problems can be studied via the general variational inequality (9) with \(Tu = F'(g(u))\). In a similar way, one can show that the general variational inequality is the Fritz John condition of the inequality-constrained optimization problem.

In many applications, the general variational inequalities do not arise as the minimization of differentiable general convex functions. Also, it is known that the variational inequality introduced by Stampacchia [173] can only be used to study even-order boundary value problems. These facts motivated Noor [87] to introduce a more general variational inequality involving two distinct operators. General variational inequalities constitute a unified framework for studying such problems.

Let \(K\) be a closed convex set in \(H\) and \(T,g: H \longrightarrow H\) be nonlinear operators. We now consider the problem of finding \(u \in H,g(u) \in K\) such that

$$\begin{aligned} \langle Tu, g(v)-g(u)\rangle \geq 0, \quad \forall v\in H; g(v) \in K. \end{aligned}$$
(9)

Problem (9) is called the general variational inequality, which was introduced and studied by Noor [87] in 1988. It has been shown that a large class of unrelated odd-order and nonsymmetric obstacle, unilateral, contact, free, moving and equilibrium problems arising in the regional, physical, mathematical, engineering and applied sciences can be studied in the unified and general framework of the general variational inequalities (9). Luc and Noor [69] have studied the local uniqueness of the solution of the general variational inequality (9) by using the concept of the Fréchet approximate Jacobian.

We now discuss some special cases of the general variational inequality (9).

(I). For \(g \equiv I\), where \(I\) is the identity operator, problem (9) is equivalent to finding \(u \in K\) such that

$$\begin{aligned} \langle Tu, v-u \rangle \geq 0, \quad \forall v \in K, \end{aligned}$$
(10)

which is known as the classical variational inequality introduced and studied by Stampacchia [173] in 1964. For recent state-of-the-art results in this field, see [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200] and the references therein.

In the sequel, we assume that \(g\) is onto \(K\) unless otherwise specified.

(II). If \(N(u)= \{w \in H: \langle w,v-u \rangle \leq 0, \forall v \in K\}\) is a normal cone to the convex set \(K\) at \(u\), then the general variational inequality (9) is equivalent to finding \(u \in H, g(u) \in K\) such that

$$\begin{aligned} -T(u) \in N(g(u)), \end{aligned}$$

which are known as the generalized nonlinear equations, see [129, 130].

(III). If \(P^{tg}\) is the projection of \(-Tu\) at \(g(u) \in K\), then it has been shown that the general variational inequality problem (9) is equivalent to finding \(u \in H, g(u) \in K\) such that

$$\begin{aligned} P^{tg}[-Tu] := P^{tg}(u) = 0, \end{aligned}$$

which are known as the tangent projection equations. This equivalence has been used to discuss the local convergence analysis of a wide class of iterative methods for solving general variational inequalities (9).

(IV). If \(K^{*}= \{u \in H: \langle u, v \rangle \geq 0, \forall v \in K\}\) is a polar (dual) cone of a convex cone \(K\) in \(H\), then problem (9) is equivalent to finding \(u \in H\) such that

$$\begin{aligned} g(u) \in K, \quad Tu \in K^{*} \quad \text{and} \quad \langle Tu, g(u) \rangle = 0, \end{aligned}$$
(11)

which is known as the general complementarity problem, see Noor [87]. For

$$g(u)= u-m(u),$$

where \(m\) is a point-to-point mapping, problem (11) is called the implicit (quasi) complementarity problem. If \(g \equiv I\), then problem (11) is known as the generalized complementarity problem. Such problems have been studied extensively in recent years.

(V). If \(K=H \), then the general variational inequality (9) is equivalent to finding \(u\in H: g(u) \in H \) such that

$$\begin{aligned} \langle Tu, g(v) \rangle = 0, \quad \forall v\in H: g(v) \in H, \end{aligned}$$

which is called the weak formulation of the odd-order and nonsymmetric boundary value problems.

For suitable and appropriate choice of the operators and spaces, one can obtain several classes of variational inequalities and related optimization problems as special cases of the general variational inequalities (9).

We also need the following result, which plays a key role in the studies of variational inequalities and optimization theory.

Lemma 1

[59] For a given \(z \in H\), \(u \in K\) satisfies the inequality

$$\begin{aligned} \langle u - z, v -u \rangle \geq 0, \quad \forall v \in K, \end{aligned}$$
(12)

if and only if

$$ u = P_{K} z, $$

where \(P_{K}\) is the projection of \(H\) onto \(K\).

Also, the projection operator \(P_{K}\) is nonexpansive, that is,

$$\begin{aligned} \|P_{K}(u)-P_{K}(v)\| \leq \|u-v\|, \quad \forall u,v \in H, \end{aligned}$$

and satisfies the inequality

$$\begin{aligned} \|P_{K}z-u\|^{2} \leq \|z-u\|^{2} -\|z-P_{K} z\|^{2}, \quad \forall z \in H, \ u \in K. \end{aligned}$$
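For intuition, the following minimal sketch (ours, assuming \(K\) is a box so that \(P_{K}\) is componentwise clipping) checks the nonexpansiveness and the last inequality numerically:

```python
import numpy as np

lo, hi = -1.0, 1.0
P_K = lambda z: np.clip(z, lo, hi)       # projection onto the box [lo, hi]^n

rng = np.random.default_rng(0)
z, w = rng.normal(size=5), rng.normal(size=5)
u = P_K(rng.normal(size=5))              # an arbitrary point of K

# Nonexpansiveness: ||P_K z - P_K w|| <= ||z - w||.
assert np.linalg.norm(P_K(z) - P_K(w)) <= np.linalg.norm(z - w)

# ||P_K z - u||^2 <= ||z - u||^2 - ||z - P_K z||^2 for u in K.
assert (np.linalg.norm(P_K(z) - u) ** 2
        <= np.linalg.norm(z - u) ** 2 - np.linalg.norm(z - P_K(z)) ** 2
           + 1e-12)
print("projection properties verified")
```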

2.1 Applications

We now discuss some applications of the general variational inequalities (9). For this purpose, we consider the functional \(I[v] \), defined as

$$\begin{aligned} I[v]:= \langle Tv,g(v) \rangle -2 \langle f, g(v) \rangle , \quad \forall v \in H, \end{aligned}$$
(13)

which is called the general energy (potential, virtual work) functional. We remark that, if \(g \equiv I \), the identity operator, then the functional \(I[v] \) reduces to

$$\begin{aligned} J[v] = \langle Tv,v \rangle - 2 \langle f, v \rangle , \quad \forall v \in H, \end{aligned}$$

which is known as the standard energy function.

It is known that, if the operator \(T : H \longrightarrow H \) is linear, symmetric and positive, then the minimum of the functional \(J[v] \) on the closed and convex set \(K\) in \(H\) is equivalent to finding \(u \in K \) such that

$$\begin{aligned} \langle Tu,v-u \rangle \geq \langle f, v-u \rangle , \quad \forall v \in K. \end{aligned}$$
(14)

Inequalities of the type (14) are known as variational inequalities, which were introduced by Stampacchia [173] in the study of potential theory; see also Fichera [38]. It is clear that the symmetry and positivity of the operator \(T \) are essential here. On the other hand, there are many important problems which are nonsymmetric and non-positive. For nonsymmetric and odd-order problems, methods have been developed by several authors, including Filippov [39] and Tonti [177], to construct energy functionals of type (13) by introducing the concepts of \(g\)-symmetry and \(g\)-positivity of the operator \(T \). We now recall the following concepts.

Definition 6

[39, 177] \(\forall u,v \in H \), the operator \(T : H \longrightarrow H \) is said to be:

(a). \(g\)-symmetric, if and only if,

$$\begin{aligned} \langle Tu,g(v) \rangle = \langle g(u),Tv \rangle . \end{aligned}$$

(b). \(g\)-positive, if and only if,

$$\begin{aligned} \langle Tu, g(u) \rangle \geq 0. \end{aligned}$$

(c). \(g\)-coercive (\(g\)-elliptic), if there exists a constant \(\alpha > 0 \) such that

$$\begin{aligned} \langle Tu, g(u) \rangle \geq \alpha \|g(u)\|^{2}. \end{aligned}$$

Note that \(g\)-coercivity implies \(g\)-positivity, but the converse is not true. It is also worth mentioning that there are operators which are \(g\)-symmetric but not \(g\)-positive and, conversely, operators which are \(g\)-positive but not \(g\)-symmetric. Furthermore, it is well known [39, 177] that if, for a linear operator \(T \), there exists an inverse operator \(T^{-1} \) on \(R(T) \) with \(\overline{R(T)} = H \), then one can find an infinite set of auxiliary operators \(g\) such that the operator \(T \) is both \(g\)-symmetric and \(g\)-positive.

We now consider the problem of finding the minimum of the functional \(I[v] \), defined by (13), on the convex set \(K \) in \(H \) and this is the main motivation of our next result.

Theorem 2

Let the operator \(T : H \longrightarrow H \) be linear, \(g\)-symmetric and \(g\)-positive, and let the operator \(g: H \longrightarrow H \) be either linear or convex. Then \(u \in H: g(u) \in K \) minimizes the functional \(I[v] \) defined by (13) on the convex set \(K \) in \(H \) if and only if \(u \in H: g(u) \in K \) satisfies

$$\begin{aligned} \langle Tu, g(v)-g(u) \rangle \geq \langle f, g(v)-g(u) \rangle \quad \forall v\in H: g(v) \in K. \end{aligned}$$
(15)

Proof

Let \(u \in H, g(u) \in K \) satisfy (15). Then, using the \(g\)-positivity of the operator \(T \), we have

$$\begin{aligned} \langle Tv, g(v)-g(u) \rangle \geq & \langle f,g(v)-g(u) \rangle + \langle Tv-Tu,g(v)-g(u) \rangle \\ \geq & \langle f, g(v)-g(u) \rangle , \quad \forall g(v) \in K. \end{aligned}$$
(16)

Since \(K\) is a convex set, for all \(t \in [0,1]\) and \(u, w \in K\), \(v_{t} = u +t(w-u) \in K \). Taking \(v = v_{t} \) in (16) and using the fact that \(g\) is linear (or convex), we have

$$\begin{aligned} \langle Tv_{t}, g(w)-g(u) \rangle \geq \langle f, g(w)-g(u) \rangle . \end{aligned}$$
(17)

We now define the function

$$\begin{aligned} h(t) = &t \langle Tu,g(w)-g(u) \rangle +\frac{t^{2}}{2}\langle T(w-u),g(w)-g(u) \rangle \\ & -t\langle f, g(w)-g(u) \rangle , \end{aligned}$$

such that

$$\begin{aligned} h^{\prime }(t) = &\langle Tu,g(w)-g(u) \rangle +t \langle T(w-u),g(w)-g(u) \rangle - \langle f, g(w)-g(u) \rangle \\ \geq & 0, \quad \text{by (17).} \end{aligned}$$

Thus it follows that \(h(t) \) is an increasing function on \([0,1] \) and so \(h(0) \leq h(1) \) gives us

$$\begin{aligned} \langle Tu,g(u) \rangle -2 \langle f,g(u) \rangle \leq \langle Tw,g(w) \rangle -2 \langle f, g(w) \rangle , \end{aligned}$$

that is,

$$\begin{aligned} I[u] \leq I[w], \end{aligned}$$

which shows that \(u \in H \) minimizes the functional \(I[v] \), defined by (13), on the convex set \(K\) in \(H \).

Conversely, assume that \(u \in H \) is the minimum of \(I[v] \) on the convex set \(K \). Then

$$\begin{aligned} I[u] \leq I[v], \quad \forall v \in H : g(v) \in K. \end{aligned}$$
(18)

Taking \(v = v_{t} \equiv u +t(w-u) \in K , \forall u, w \in K \) and \(t\in [0,1] \) in (18), we have

$$\begin{aligned} I[u] \leq I[v_{t}]. \end{aligned}$$

Using (13) and the linearity (or convexity) of \(g \), we obtain

$$\begin{aligned} \langle Tu,g(w)-g(u) \rangle + \frac{t}{2}\langle T(w-u),g(w)-g(u) \rangle \geq \langle f,g(w)-g(u) \rangle , \end{aligned}$$

from which, as \(t \longrightarrow 0 \), we have

$$\begin{aligned} \langle Tu, g(w)-g(u) \rangle \geq \langle f, g(w)-g(u) \rangle , \quad \forall w\in H: g(w) \in K. \end{aligned}$$

This completes the proof. □

We remark that for \(g = I \), the identity operator, Theorem 2 reduces to the following well-known result in variational inequalities, which is due to Stampacchia [173].

Theorem 3

Let the operator \(T\) be linear, symmetric and positive. Then the minimum of the functional \(J[v]\) defined by (13) on the convex set \(K \) in \(H \) can be characterized by the variational inequality

$$\begin{aligned} \langle Tu, v-u \rangle \geq \langle f, v-u \rangle , \quad \forall v \in K. \end{aligned}$$

Proof

Its proof follows from Theorem 2. □

Example 1

We now show that a wide class of nonsymmetric and odd-order obstacle, unilateral, free, moving and general equilibrium problems arising in pure and applied sciences can be formulated in terms of (13). For simplicity, and to illustrate the applications, we consider the third-order obstacle boundary value problem: find \(u \) such that

$$\begin{aligned} \left . \textstyle\begin{array}{l@{\quad }l} -u^{\prime \prime \prime } \geq f, & \text{on}\ \Omega = [a,b], \\ u \geq \psi , & \text{on}\ \Omega = [a,b], \\ {[}-u^{\prime \prime \prime }-f][u - \psi ]= 0, & \text{on}\ \Omega = [a,b], \\ u(a) = 0, \quad u^{\prime }(a) = 0, \quad u^{\prime }(b) =0 \end{array}\displaystyle \right \} \end{aligned}$$
(19)

where \(\Omega = [a,b] \) is the domain and \(\psi (x) \) and \(f(x) \) are given functions. The function \(\psi \) is known as the obstacle function. The region where \(u(x) = \psi (x)\), \(x \in \Omega \), is called the contact region (set).

We note that problem (19) is a generalization of the third-order boundary value problem

$$\begin{aligned} -\frac{d^{3}u(x)}{dx^{3}} = f(x) \quad \quad x \in \Omega \end{aligned}$$

with boundary condition

$$\begin{aligned} u(a) = u^{\prime }(a) = u^{\prime }(b) = 0, \end{aligned}$$

which arises from a similarity solution of the so-called barotropic quasi-geostrophic potential vorticity equation for one layer ocean circulation. For the formulation of the equation, see [91] and the references therein.

To study the problem (19) in the general framework of the general variational inequality, we define

$$\begin{aligned} K = \{ u \in H^{2}_{0}(\Omega ): u(x) \geq \psi (x) \text{on } \Omega \}, \end{aligned}$$

which is a closed convex set in \(H^{2}_{0}(\Omega ) \). For the definition and properties of the spaces \(H^{m}_{0} (\Omega ) \), see [50].

Using the technique of [39, 177] and integrating by parts (the boundary terms vanish, since \(v^{\prime }(a)=v^{\prime }(b)=0\)), we can easily show that the energy functional associated with the problem (19) is

$$\begin{aligned} I[v] = & \int _{a}^{b} \left (-\frac{d^{3}v}{dx^{3}}\right )\left ( \frac{dv}{dx}\right )dx-2\int _{a}^{b}f(x)\left (\frac{dv}{dx}\right )dx, \quad \text{for all } \frac{dv}{dx} \in K \\ = & \int _{a}^{b}\left (\frac{d^{2}v}{dx^{2}}\right )\left ( \frac{d^{2}v}{dx^{2}}\right )dx-2\int _{a}^{b}f(x)\left ( \frac{dv}{dx}\right )dx \\ = & \langle Tv,g(v) \rangle -2\langle f, g(v) \rangle , \end{aligned}$$
(20)

where

$$\begin{aligned} \langle Tu, g(v) \rangle = \int _{a}^{b}\left (\frac{d^{2}u}{dx^{2}} \right )\left (\frac{d^{2}v}{dx^{2}}\right )dx, \end{aligned}$$
(21)

and

$$\begin{aligned} \langle f, g(v) \rangle = \int _{a}^{b}f(x)\left (\frac{dv}{dx} \right )dx. \end{aligned}$$

Here

$$ g(u) = \frac{du}{dx} \quad \text{and}\quad Tu= -\frac{d^{3}u}{dx^{3}} $$

are linear operators.

It is clear that the operator \(T \) defined by the relation (21) is linear, \(g\)-symmetric and \(g\)-positive. Also we note that the operator \(g = \frac{d}{dx} \) is linear. Consequently all the assumptions of Theorem 2 are satisfied. Thus it follows from Theorem 2 that the minimum of the functional \(I[v] \) defined by (20) is equivalent to finding \(u \in H \) such that \(g(u) \in K \) and the inequality (15) holds.

In fact, we conclude that the following problems are equivalent to (19):

The Variational Problem. Find \(u \in H^{2}_{0}(\Omega ) \), which gives the minimum value to the functional

$$\begin{aligned} I[v] = \langle Tv,g(v) \rangle - 2 \langle f,g(v) \rangle \quad \text{on the convex set } \quad K. \end{aligned}$$

The Variational Inequality (Weak) Problem. Find \(u \in H^{2}_{0}(\Omega )\) such that \(g(u) \in K \) and

$$\begin{aligned} \langle Tu, g(v)-g(u) \rangle \geq \langle f, g(v)-g(u) \rangle , \quad \forall \quad g(v) \in K. \end{aligned}$$
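To make the \(g\)-symmetry of \(T = -d^{3}/dx^{3}\) with \(g = d/dx\) concrete, here is a small numerical check (our own illustration; the test functions are assumptions chosen to satisfy the boundary conditions of (19)):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
u = x**2 * (1 - x)**2                 # u(0) = 0, u'(0) = u'(1) = 0
v = x**3 * (1 - x)**2                 # v(0) = 0, v'(0) = v'(1) = 0

d = lambda f: np.gradient(f, x)       # numerical d/dx
inner = lambda f, h: float(np.sum(f * h) * (x[1] - x[0]))

Tu, Tv = -d(d(d(u))), -d(d(d(v)))     # T = -d^3/dx^3
# <Tu, g(v)> and <g(u), Tv> agree up to discretization error:
# integrating by parts, both equal the integral of u'' * v''.
print(inner(Tu, d(v)), inner(d(u), Tv))
```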

2.2 Quasi Variational Inequalities

We now show that quasi variational inequalities are a special case of the general variational inequalities (9). If the convex set \(K\) depends upon the solution explicitly or implicitly, then the variational inequality problem is known as a quasi variational inequality. For a given operator \(T: H \longrightarrow H\) and a point-to-set mapping \(K : u \longrightarrow K(u) \), which associates a closed convex set \(K(u)\) with any element \(u \) of \(H\), we consider the problem of finding \(u \in K(u)\) such that

$$\begin{aligned} \langle Tu, v-u \rangle \geq 0, \quad \forall v \in K(u). \end{aligned}$$
(22)

The inequality of type (22) is called the quasi variational inequality. For the formulation, applications, numerical methods and sensitivity analysis of the quasi variational inequalities, see [15, 22, 84, 85, 102, 139, 147] and the references therein.

We can rewrite (22), for \(\rho > 0 \), as

$$\begin{aligned} 0 \leq &\langle \rho Tu +u-u, v- u \rangle \\ =& \langle u-(u-\rho Tu),v-u \rangle , \quad \forall v \in K(u), \end{aligned}$$

which is equivalent (using Lemma 1) to finding \(u \in K(u) \) such that

$$\begin{aligned} u = P_{K(u)}[u-\rho Tu]. \end{aligned}$$
(23)

In many important applications, the convex-valued set \(K(u)\) is of the form

$$\begin{aligned} K(u)= m(u) +K, \end{aligned}$$
(24)

where \(m \) is a point-to-point mapping and \(K\) is a closed convex set.

From (23) and (24), we see that problem (22) is equivalent to

$$\begin{aligned} u = &P_{K(u)}[u-\rho Tu] =P_{m(u)+K}[u-\rho Tu] \\ = & m(u)+P_{K}[u-m(u)-\rho Tu] \end{aligned}$$

which implies that

$$\begin{aligned} g(u)=P_{K}[g(u)-\rho Tu ] \quad \text{with} \quad g(u)= u-m(u), \end{aligned}$$

which is equivalent to the general variational inequality (9) by an application of Lemma 1. We have shown that the quasi variational inequalities (22) with the convex-valued set \(K(u)\) defined by (24) are equivalent to the general variational inequalities (9).
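A minimal sketch (ours; `T`, `m`, `project_K` and the test data are assumptions) of the resulting fixed-point iteration \(u = m(u)+P_{K}[u-m(u)-\rho Tu]\), i.e. \(g(u) = P_{K}[g(u)-\rho Tu]\) with \(g(u)=u-m(u)\):

```python
import numpy as np

def quasi_vi(T, m, project_K, u0, rho=0.5, tol=1e-12, max_iter=10000):
    """Fixed-point iteration for the QVI (22) with K(u) = m(u) + K:
    u <- m(u) + P_K[(u - m(u)) - rho*T(u)]."""
    u = u0
    for _ in range(max_iter):
        g_u = u - m(u)                              # g(u) = u - m(u)
        u_new = m(u) + project_K(g_u - rho * T(u))
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u

# Toy test: T(u) = u - c, m(u) = 0.1*u, K = R^n_+; converges to (1, 0).
c = np.array([1.0, -2.0])
print(quasi_vi(lambda u: u - c, lambda u: 0.1 * u,
               lambda z: np.maximum(z, 0.0), np.zeros(2)))
```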

We now recall some well-known concepts.

Definition 7

For all \(u,v \in H\), the operator \(T: H \longrightarrow H\) is said to be (i). \(g\)-monotone, if

$$ \langle Tu-Tv,g(u)-g(v)\rangle \geq 0. $$

(ii). \(g\)-pseudomonotone, if

$$ \langle Tu,g(v)-g(u)\rangle \geq 0 \quad \text{implies} \quad \langle Tv, g(v)-g(u)\rangle \geq 0. $$

For \(g \equiv I\), Definition 7 reduces to the usual definition of monotonicity and pseudomonotonicity of the operator \(T\). Note that monotonicity implies pseudomonotonicity but the converse is not true, see [35].

Definition 8

A function \(F \) is said to be strongly general convex on the general convex set \(K \) with modulus \(\mu > 0 \) if, for all \(u, v \in H: g(u),g(v) \in K\) and \(t \in [0,1] \),

$$\begin{aligned} F(g(u)+t(g(v)-g(u))) \leq (1-t)F(g(u)) +tF(g(v))-t(1-t)\mu \|g(v)-g(u) \|^{2}. \end{aligned}$$

For a differentiable strongly general convex function \(F\), the following statements are equivalent:

$$\begin{aligned} & 1.\quad F(g(v))-F(g(u)) \geq \langle F^{\prime }(g(u)), g(v)-g(u) \rangle + \mu \|g(v)-g(u)\|^{2}, \\ & 2. \quad \langle F^{\prime }(g(u))-F^{\prime }(g(v)), g(u)-g(v) \rangle \geq \mu \|g(v)-g(u)\|^{2}. \end{aligned}$$

It is well known that general convex functions need not be convex, but they have some of the nice properties that convex functions have. Note that, for \(g = I \), general convex functions reduce to convex functions, and Definition 8 reduces to the well-known definition of strongly convex functions.

3 Projection Methods

In this section, we use the fixed point formulation to suggest and analyze some new implicit methods for solving the general variational inequalities. Using Lemma 1, one can show that the general variational inequalities are equivalent to fixed point problems.

Lemma 2

[87] The function \(u\in H : g(u) \in K\) is a solution of the general variational inequality (9), if and only if, \(u\in H: g(u) \in K\) satisfies the relation

$$\begin{aligned} g(u)= P_{K}[g(u)-\rho Tu], \end{aligned}$$
(25)

where \(P_{K} \) is the projection operator and \(\rho >0\) is a constant.

Lemma 2 implies that the general variational inequality (9) is equivalent to the fixed point problem (25). This equivalent fixed point formulation has been used to suggest the following iterative methods for solving the general variational inequalities (9).

Algorithm 1

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} u_{n+1} = u_{n}- g(u_{n})+ P_{K}[g(u_{n})-\rho Tu_{n}], \quad n=0,1,2,\ldots \end{aligned}$$

which is known as the projection method and has been studied extensively.
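A minimal sketch of Algorithm 1 (ours; the operators and the test problem are assumptions, and convergence requires strong-monotonicity-type conditions on \(T\) and \(g\)):

```python
import numpy as np

def general_projection_method(T, g, project_K, u0, rho=0.5,
                              tol=1e-12, max_iter=10000):
    """Algorithm 1: u <- u - g(u) + P_K[g(u) - rho*T(u)]."""
    u = u0
    for _ in range(max_iter):
        u_new = u - g(u) + project_K(g(u) - rho * T(u))
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u

# Toy test with g(u) = 2u, T(u) = u - c, K = R^n_+.
c = np.array([1.0, -1.0])
u = general_projection_method(lambda u: u - c, lambda u: 2.0 * u,
                              lambda z: np.maximum(z, 0.0), np.zeros(2))
print(u)            # at a solution, g(u) = P_K[g(u) - rho*T(u)] holds
```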

Algorithm 2

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} u_{n+1} = u_{n} -g(u_{n})+ P_{K}[g(u_{n})-\rho Tg^{-1}P_{K}[g(u_{n})-\rho Tu_{n}]], \quad n=0,1,2,\ldots \end{aligned}$$

which can be viewed as the extragradient method of Korpelevich [60], originally suggested and analyzed for solving the classical variational inequalities. Noor [114] has proved the convergence of the extragradient method for pseudomonotone operators.

Algorithm 3

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} u_{n+1}= u_{n} -g(u_{n})+ P_{K}[g(u_{n+1})-\rho Tu_{n+1}], \quad n=0,1,2,\ldots \end{aligned}$$

which is known as the modified projection method and has been studied extensively, see Noor [103].

We can rewrite the equation (25) as:

$$\begin{aligned} g(u)= P_{K}\left[g\left(\frac{u+u}{2}\right) -\rho Tu\right]. \end{aligned}$$

This fixed point formulation was used to suggest the following implicit method for solving variational inequalities, which is due to Noor et al. [110, 116]. We use this equivalent formulation to suggest implicit methods for the general variational inequality (9).

Algorithm 4

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} u_{n+1} = u_{n}- g(u_{n})+ P_{K}\left[g\left (\frac{u_{n}+u_{n+1}}{2} \right)-\rho Tu_{n+1}\right], \quad n=0,1,2,\ldots \end{aligned}$$

For the implementation of this algorithm, one can use the predictor-corrector technique to suggest the following two-step iterative method for solving general variational inequalities.

Algorithm 5

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} g(y_{n}) = & P_{K}[g(u_{n})-\rho Tu_{n}] \\ u_{n+1} =& u_{n}-g(u_{n})+P_{K}\big[g\big(\frac{y_{n}+u_{n}}{2}\big)-\rho Ty_{n}\big], \quad n=0,1,2,\ldots, \end{aligned}$$

which is a two-step iterative method.

From the equation (25), we have

$$\begin{aligned} g(u) = P_{K}[g( u) -\rho T(\frac{u+u}{2})]. \end{aligned}$$

This fixed point formulation is used to suggest the following implicit method for solving the general variational inequalities.

Algorithm 6

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} u_{n+1} = u_{n}-g(u_{n})+ P_{K}[g(u_{n})-\rho T(\frac{u_{n}+u_{n+1}}{2})], \quad n=0,1,2,\ldots, \end{aligned}$$

which is another implicit method, see Noor et al. [149].

To implement this implicit method, one can use the predictor-corrector technique to rewrite Algorithm 6 as an equivalent two-step iterative method.

Algorithm 7

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} g(y_{n}) = & P_{K} [g(u_{n})-\rho Tu_{n}], \\ u_{n+1} = &u_{n}-g(u_{n})+ P_{K}[g(u_{n})-\rho T(\frac{u_{n}+y_{n}}{2})], \quad n=0,1,2,\ldots, \end{aligned}$$

which is known as the mid-point implicit method for solving general variational inequalities.

For the convergence analysis and other aspects of Algorithm 4, see Noor et al. [149].

It is obvious that Algorithm 4 and Algorithm 6 have been suggested using different variants of the fixed point formulation (25). It is natural to combine these fixed point formulations to suggest a hybrid implicit method for solving the general variational inequalities and related optimization problems, which is the main motivation of this paper.

One can rewrite the equation (25) as

$$\begin{aligned} g(u)= P_{K}[g \big(\frac{u+u}{2}\big) -\rho T(\frac{u+u}{2})]. \end{aligned}$$

This equivalent fixed point formulation enables us to suggest the following method for solving the general variational inequalities.

Algorithm 8

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} u_{n+1} =u_{n}-g(u_{n})+ P_{K}[g\big(\frac{u_{n}+u_{n+1}}{2}\big)-\rho T(\frac{u_{n}+ u_{n+1}}{2})], \quad n=0,1,2,\ldots, \end{aligned}$$

which is an implicit method.

We would like to emphasize that Algorithm 8 is an implicit method. To implement it, one uses the predictor-corrector technique: we use Algorithm 1 as the predictor and Algorithm 8 as the corrector. Thus, we obtain a new two-step method for solving general variational inequalities.

Algorithm 9

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} g(y_{n}) = & P_{K}[g(u_{n})-\rho Tu_{n}] \\ u_{n+1} =& u_{n}-g(u_{n})+P_{K}[g\big(\frac{y_{n}+u_{n}}{2}\big)-\rho T\big(\frac{y_{n}+u_{n}}{2}\big)], \quad n=0,1,2,\ldots \end{aligned}$$

which is a two-step method.
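A sketch of the two-step Algorithm 9 (ours; we take \(g(u)=2u\) so that the predictor equation \(g(y_{n})=P_{K}[g(u_{n})-\rho Tu_{n}]\) can be inverted in closed form; all test data are assumptions):

```python
import numpy as np

def two_step_method(T, project_K, u0, rho=0.5, tol=1e-12, max_iter=10000):
    """Algorithm 9: predictor g(y_n) = P_K[g(u_n) - rho*T(u_n)], then
    corrector u_{n+1} = u_n - g(u_n)
                        + P_K[g((y_n+u_n)/2) - rho*T((y_n+u_n)/2)]."""
    g = lambda u: 2.0 * u
    g_inv = lambda w: 0.5 * w
    u = u0
    for _ in range(max_iter):
        y = g_inv(project_K(g(u) - rho * T(u)))     # predictor
        mid = 0.5 * (y + u)
        u_new = u - g(u) + project_K(g(mid) - rho * T(mid))
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u

# Toy test: T(u) = u - c over K = R^n_+; converges to (1, 0).
c = np.array([1.0, -1.0])
print(two_step_method(lambda u: u - c, lambda z: np.maximum(z, 0.0),
                      np.zeros(2)))
```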

For constants \(\lambda , \xi \in [0,1] \), we can rewrite the equation (25) as:

$$\begin{aligned} g(u)= P_{K}\big[(1-\lambda )g(u)+ \lambda g(u) -\rho T((1-\xi )u+ \xi u)\big]. \end{aligned}$$

This equivalent fixed point formulation enables us to suggest the following method for solving the general variational inequalities.

Algorithm 10

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} g(u_{n+1})= P_{K}\big[(1-\lambda )g(u_{n})+ \lambda g(u_{n+1}) -\rho T((1-\xi )u_{n}+\xi u_{n+1})\big], \quad n=0,1,2,\ldots \end{aligned}$$

which is an implicit method.

Using the prediction-correction technique, Algorithm 10 can be written in the following form.

Algorithm 11

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme.

$$\begin{aligned} g(y_{n}) = & P_{K}[g(u_{n})-\rho Tu_{n}] \\ g(u_{n+1}) =& P_{K}\big[(1-\lambda )g(u_{n})+ \lambda g(y_{n}) -\rho T((1-\xi )u_{n}+\xi y_{n})\big], \quad n=0,1,2,\ldots \end{aligned}$$

which is a two-step method.

Remark 1

It is worth mentioning that Algorithm 11 is a unified one. For suitable and appropriate choices of the constants \(\lambda \) and \(\xi \), one can obtain a wide class of iterative methods for solving general variational inequalities and related optimization problems.
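The unified scheme can be sketched as follows (ours; again \(g(u)=2u\) for an explicit \(g^{-1}\), and the parameter choices shown are assumptions):

```python
import numpy as np

def unified_method(T, project_K, u0, rho=0.5, lambda_=0.5, xi=0.5,
                   tol=1e-12, max_iter=10000):
    """Algorithm 11 with parameters lambda_, xi in [0, 1]:
    g(y_n)     = P_K[g(u_n) - rho*T(u_n)]
    g(u_{n+1}) = P_K[(1-lambda_)*g(u_n) + lambda_*g(y_n)
                     - rho*T((1-xi)*u_n + xi*y_n)]."""
    g = lambda u: 2.0 * u
    g_inv = lambda w: 0.5 * w
    u = u0
    for _ in range(max_iter):
        y = g_inv(project_K(g(u) - rho * T(u)))
        w = project_K((1 - lambda_) * g(u) + lambda_ * g(y)
                      - rho * T((1 - xi) * u + xi * y))
        u_new = g_inv(w)
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u

# lambda_ = xi = 1 gives an extragradient-type step, while
# lambda_ = xi = 1/2 gives a midpoint-type variant (for linear g).
c = np.array([1.0, -1.0])
print(unified_method(lambda u: u - c, lambda z: np.maximum(z, 0.0),
                     np.zeros(2)))
```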

4 Wiener-Hopf Equations Technique

In this section, we consider the problem of the general Wiener-Hopf equations. To be more precise, let \(Q_{K}= I-P_{K}\), where \(I\) is the identity operator and \(P_{K}\) is the projection of \(H\) onto \(K\). For given nonlinear operators \(T,g: H \rightarrow H\), consider the problem of finding \(z \in H\) such that

$$\begin{aligned} \rho Tg^{-1}P_{K}z + Q_{K}z = 0, \end{aligned}$$
(26)

provided \(g^{-1} \) exists. Equations of the type (26) are called the general Wiener-Hopf equations; they were introduced and studied by Noor [90, 91]. For \(g = I\), we obtain the original Wiener-Hopf equations, which were introduced and studied by Shi [168] and Robinson [166] independently in different settings. Using the projection operator technique, one can show that the general variational inequalities are equivalent to the general Wiener-Hopf equations. This equivalent alternative formulation has played a fundamental and important role in studying various aspects of variational inequalities. It has been shown that the Wiener-Hopf equations are more flexible and provide a unified framework for developing some efficient and powerful numerical techniques for solving variational inequalities and related optimization problems.

Lemma 3

The element \(u\in H: g(u) \in K\) is a solution of the general variational inequality (9), if and only if \(z\in H\) satisfies the Wiener-Hopf equation (26), where

$$\begin{aligned} g(u) =&P_{K}z, \end{aligned}$$
(27)
$$\begin{aligned} z =&g(u)-\rho Tu, \end{aligned}$$
(28)

where \(\rho >0\) is a constant.

From Lemma 3, it follows that the variational inequalities (9) and the Wiener–Hopf equations (26) are equivalent. This alternative equivalent formulation is used to suggest and analyze a wide class of efficient and robust iterative methods for solving general variational inequalities and related optimization problems, see [90, 91, 99, 109] and the references therein.

We use the general Wiener-Hopf equations (26) to suggest some new iterative methods for solving the general variational inequalities. From (27) and (28),

$$\begin{aligned} z = &P_{K}z-\rho Tg^{-1}P_{K}z \\ =& P_{K}[g(u)-\rho Tu]-\rho Tg^{-1}P_{K}[g(u)-\rho Tu]. \end{aligned}$$

Thus, we have

$$\begin{aligned} g(u)= \rho Tu+P_{K}[g(u)-\rho Tu]-\rho Tg^{-1}P_{K}[g(u)-\rho Tu]. \end{aligned}$$

Consequently, for a constant \(\alpha >0 \), we have

$$\begin{aligned} g(u) =& (1-\alpha )g(u) + \alpha \big\{ P_{K}\big[P_{K}[g(u)-\rho Tu] -\rho Tg^{-1}P_{K}[g(u)-\rho Tu] \\ &+P_{K}[g(u)-\rho Tu]-(g(u)-\rho Tu)\big]\big\} \\ =& (1-\alpha )g(u)+ \alpha \big\{ P_{K}\big[g(y)-\rho Ty+g(y)-(g(u)-\rho Tu)\big]\big\} , \end{aligned}$$
(29)

where

$$\begin{aligned} g(y)= P_{K}[g(u)-\rho Tu]. \end{aligned}$$
(30)

Using (29) and (30), we can suggest the following new predictor-corrector method for solving variational inequalities.

Algorithm 12

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} g(y_{n}) = & P_{K}[g(u_{n})-\rho Tu_{n}] \\ g( u_{n+1}) =& (1-\alpha _{n})g(u_{n}) + \alpha _{n}\bigg\{ P_{K}[g(y_{n})-\rho Ty_{n}+g(y_{n}) -(g(u_{n})- \rho Tu_{n})]\bigg\} . \end{aligned}$$

Algorithm 12 can be rewritten in the following equivalent form:

Algorithm 13

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} g(u_{n+1}) =& (1-\alpha _{n})g(u_{n}) \\ &+ \alpha _{n}\big\{ P_{K}\big[P_{K}[g(u_{n})-\rho Tu_{n}]-\rho Tg^{-1}P_{K}[g(u_{n})-\rho Tu_{n}] \\ &+P_{K}[g(u_{n})-\rho Tu_{n}]-(g(u_{n})-\rho Tu_{n})\big]\big\} , \end{aligned}$$

which is an explicit iterative method that appears to be original.

If \(\alpha _{n} =1 \), then Algorithm 13 reduces to

Algorithm 14

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} g(y_{n}) = & P_{K}[g(u_{n})-\rho Tu_{n}] \\ g(u_{n+1}) =& P_{K}\big[g(y_{n})-\rho Ty_{n}+ g(y_{n})-(g(u_{n})-\rho Tu_{n})\big], \quad n=0, 1,2, \ldots , \end{aligned}$$

which appears to be original.
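As an illustration, the following Python sketch implements Algorithm 14 for the special case \(g=I\); the helper names and the residual-based stopping test are our assumptions, and \(\rho \) must be chosen in accordance with the convergence theory in [90, 91].

```python
import numpy as np

def algorithm_14(T, proj, u0, rho=0.1, tol=1e-7, max_iter=10000):
    """Sketch of Algorithm 14 (Wiener-Hopf based corrector) with g = I."""
    u = proj(np.asarray(u0, dtype=float))
    for k in range(max_iter):
        y = proj(u - rho * T(u))                             # predictor
        u_new = proj(y - rho * T(y) + y - (u - rho * T(u)))  # corrector of Algorithm 14
        if np.linalg.norm(u_new - u) <= tol:
            return u_new, k
        u = u_new
    return u, max_iter
```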

We now consider another algorithm for solving the general variational inequalities (9), together with some computational experiments for special cases. See [122, 152, 153] for further details.

Algorithm 15

For a given \(u_{0} \in K \), compute

$$\begin{aligned} g(z_{n} ):=P_{K}[g(u_{n}) - Tu_{n}]. \end{aligned}$$

If \(\|R(u_{n})\|=0\), where \(R(u_{n})= g(u_{n})-P_{K}[g(u_{n}) - Tu_{n}]\), stop; otherwise compute

$$\begin{aligned} g(y_{n}):=(1-\eta _{n})g(u_{n})+\eta _{n} g(z_{n}), \end{aligned}$$

where \(\eta _{n}=\gamma ^{m_{n}}\), with \(m_{n} \) the smallest nonnegative integer \(m\) satisfying

$$\begin{aligned} \langle T(u_{n})-T(u_{n}-\gamma ^{m}R(u_{n})),R(u_{n})\rangle \leq \sigma \|R(u_{n})\|^{2}. \end{aligned}$$

Compute

$$\begin{aligned} g(u_{n+1}) :=P_{K}[g(u_{n}) +\alpha _{n} d_{n}], \quad n =0,1,2, \ldots \end{aligned}$$

where

$$\begin{aligned} d_{n} = & -(\eta _{n}R(u_{n})-\eta _{n}T(u_{n})+T(y_{n})) \\ \alpha _{n} = & \frac{\eta _{n}\langle R(u_{n}),R(u_{n})- T(u_{n})+T(y_{n})\rangle }{\|d_{n}\|^{2}}. \end{aligned}$$
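The following Python sketch collects the steps of Algorithm 15 for the special case \(g=I\), using the basic step; the cap on the Armijo search and the helper names are our assumptions.

```python
import numpy as np

def algorithm_15(T, proj, u0, sigma=0.5, gamma=0.8, tol=1e-7, max_iter=1000):
    """Sketch of Algorithm 15 with g = I and the basic step alpha_n."""
    u = proj(np.asarray(u0, dtype=float))
    for k in range(max_iter):
        z = proj(u - T(u))            # z_n = P_K[u_n - T(u_n)]
        R = u - z                     # residue R(u_n)
        if np.linalg.norm(R) <= tol:
            return u, k
        m = 0                         # Armijo-type search for eta_n = gamma**m_n
        while np.dot(T(u) - T(u - gamma**m * R), R) > sigma * np.dot(R, R) and m < 60:
            m += 1
        eta = gamma**m
        y = (1 - eta) * u + eta * z                               # y_n
        d = -(eta * R - eta * T(u) + T(y))                        # direction d_n
        alpha = eta * np.dot(R, R - T(u) + T(y)) / np.dot(d, d)   # basic step alpha_n
        u = proj(u + alpha * d)
    return u, max_iter
```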

To obtain a larger decrease in the distance from the next iterate to the solution set, we consider the following optimization problem:

$$ \max \{ \phi _{n}(\alpha ): \alpha \geq 0 \}. $$

Following the technique of Wang et al. [182], one can show that the solution of the above optimization problem is attained at the root of \(\phi _{n}^{\prime }(\alpha )=0 \), denoted by \(\overline{\alpha }_{n} \).

If we choose \(\overline{\alpha }_{n}\) as the step size in Algorithm 15, then we obtain another convergent algorithm. Obviously, \(\overline{\alpha }_{n}\) guarantees a larger decrease in the distance between the new iterate and the solution set, so we call \(\alpha _{n}\) the basic step and \(\overline{\alpha }_{n}\) the optimal step. However, in practice, if \(K \) does not possess any special structure, it is very expensive to compute \(\overline{\alpha }_{n}\); that is, we need a simple way to compute the projection \(P_{K}[u_{n}+\overline{\alpha }_{n} d_{n}]\). Following the proof of Lemma 4.2 in [188], we can show that \(u_{n}(\overline{\alpha }_{n})=P_{K\cap H_{n}}[u_{n}+\overline{\alpha }_{n} d_{n}] \), where

$$ H_{n} =\{u \in R^{n}~|~\eta _{n}\langle R(u_{n}),R(u_{n})-T(u_{n})+T(y_{n}) \rangle +\langle u_{n}-u ,d_{n}\rangle =0\}. $$

Thus, we can obtain our improved double-projection method for solving general variational inequalities.

Algorithm 16

For a given \(u_{0} \in K \), compute

$$\begin{aligned} g(z_{n}):=P_{K}[g(u_{n})- T(u_{n})]. \end{aligned}$$

If \(\|R(u_{n})\|=0\), stop; otherwise compute

$$\begin{aligned} g(y_{n} ):=(1-\eta _{n})g(u_{n})+\eta _{n} g(z_{n}), \end{aligned}$$

where \(\eta _{n}=\gamma ^{m_{n}}\), with \(m_{n}\) the smallest nonnegative integer \(m\) satisfying

$$\begin{aligned} \langle T(u_{n})-T(u_{n}-\gamma ^{m}R(u_{n})),R(u_{n})\rangle \le \sigma \|R(u_{n})\|^{2}. \end{aligned}$$

Compute

$$\begin{aligned} g(u_{n+1}) =P_{H_{n}\cap K}[u_{n}+\alpha _{n} d_{n}], \quad n =0, 1,2, \ldots \end{aligned}$$

where

$$\begin{aligned} d_{n} = & -(\eta _{n}R(u_{n})-\eta _{n}T(u_{n})+T(y_{n}))\\ \alpha _{n} = &\frac{\eta _{n}\langle R(u_{n}),R(u_{n})-T(u_{n})+T(y_{n})\rangle }{\|d_{n}\|^{2}}. \end{aligned}$$

Notice that at each iteration of Algorithm 16, the latter projection region differs from the former. More precisely, the latter projection region is the intersection of the domain set \(K \) and a hyperplane, so it does not increase the computational cost when \(K\) is a polyhedron.

For \(g=I \), we now give some numerical experiments for Algorithms 15 and 16 and compare them with other double-projection methods. Throughout the computational experiments, the parameters are set as \(\sigma =0.5\), \(\gamma =0.8\), and we use \(\|R(u_{n})\| \leq 10^{-7}\) as the stopping criterion. All computations were carried out in MATLAB on a PC. We use the symbol \(e\) to denote the vector whose components are all ones.

Example 2

Consider the operator \(T: R^{4}\to R^{4}\) defined by

$$ T(x_{1},x_{2},x_{3},x_{4})=\left ( \textstyle\begin{array}{l} -x_{2}+x_{3}+x_{4} \\ x_{1}-(4.5x_{3}+2.7x_{4})/(x_{2}+1) \\ 5-x_{1}-(0.5x_{3}+0.3x_{4})/(x_{3}+1) \\ 3-x_{1} \end{array}\displaystyle \right ), $$

with the domain set

$$\begin{aligned} K =\{x\in R^{4}_{+}~|~e^{\top }x=1\}. \end{aligned}$$

Example 3

This example was tested by Sun [173, 174]. Let \(T(x)=Mx+q\), where

$$ M=\left ( \textstyle\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 4&-1&0&\cdots &0 \\ -1&4&-1&\cdots &0 \\ \vdots &\ddots &\ddots &\ddots &\vdots \\ 0&\cdots &-1&4&-1 \\ 0&\cdots &0&-1&4 \\ \end{array}\displaystyle \right ),~~~q=\left ( \textstyle\begin{array}{c} -1 \\ -1 \\ -1 \\ \vdots \\ -1 \end{array}\displaystyle \right ) $$

with the domain set

$$\begin{aligned} K=\{x\in R^{n}_{+}~|~x_{i}\le 1, i=1,2,\ldots ,n\}. \end{aligned}$$

It is easy to see that \(T\) is strongly monotone on \(R^{n}\).
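For instance, the data of Example 3 can be set up in a few lines of Python and fed to the `algorithm_15` sketch given above; the dimension \(n=50\) is an arbitrary choice for illustration.

```python
import numpy as np

n = 50
M = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal matrix of Example 3
q = -np.ones(n)
T = lambda x: M @ x + q
proj = lambda x: np.clip(x, 0.0, 1.0)                  # K = {x in R^n_+ : x_i <= 1}

u, iters = algorithm_15(T, proj, np.zeros(n))          # starting point u_0 = 0
# For this example the solution is interior: u is close to np.linalg.solve(M, np.ones(n)).
```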

Example 4

Define \(T(x)=Mx+q\), where

$$ M=\text{diag}(1/n,2/n,\ldots ,1),~~~q=(-1,-1,\ldots ,-1)^{\top }, $$

with the domain set \(K=\{x\in R^{n}_{+}~|~x_{i}\le 1, i=1,2,\ldots ,n\}\), see [137].

Again \(T\) is strongly monotone on \(K \). The corresponding strong monotonicity modulus depends on the dimension \(n\) and approaches zero as \(n\) tends to infinity. Obviously, \(x=e\) is its unique solution. We choose the starting point \(u_{0}=e\) for Example 2 and \(u_{0} =(0,\ldots ,0)^{\top }\) for Examples 3 and 4, for different dimensions \(n\). For double-projection methods [124, 139, 153], there always exist two step size rules, just as in Algorithms 15 and 16. In the following, we give a numerical comparison of these methods using the two different steps. The numerical results for double-projection methods using the basic step for Examples 2, 3 and 4 are listed in Table 1, and the numerical results for double-projection methods using the optimal step for Examples 2, 3 and 4 are listed in Table 2 (the symbol “∖” denotes that the number of iterations exceeds 1000).

Table 1 Numerical results for Algorithm 15
Table 2 Numerical results for Algorithm 16

Obviously, the optimal step \(\overline{\alpha }_{n}\) is better than the basic step \(\alpha _{n}\) for any direction. Compared with other double-projection methods, Algorithm 16 also shows better behavior. From Tables 1 and 2, it is clear that the new methods are as efficient as the methods of Solodov and Svaiter [171, 172, 174]. This shows that Algorithms 15 and 16 can be considered as practical alternatives to the extragradient and other modified projection methods. The comparison of the new methods developed in this paper with more recent methods is an interesting problem for future research.

5 Dynamical Systems Technique

In this section, we consider the projected dynamical systems associated with variational inequalities. We investigate the convergence of the resulting methods, which requires only the monotonicity of the operator.

We now define the residue vector \(R(u)\) by the relation

$$\begin{aligned} R(u)=g(u)-P_{K}[g(u)-\rho Tu]. \end{aligned}$$
(31)

Invoking Lemma 3, one can easily conclude that \(u\in H: g(u)\in K\) is a solution of (9), if and only if, \(u\in H: g(u) \in K\) is a zero of the equation

$$\begin{aligned} R(u)=0. \end{aligned}$$
(32)

We now consider a projected dynamical system associated with the variational inequalities. Using the equivalent formulation (32), we suggest a class of projected dynamical systems as

$$\begin{aligned} \frac{dg(u)}{dt}=\lambda \{P_{K}[g(u)-\rho Tu]-g(u)\},\quad u(t_{0})=u_{0} \in K, \end{aligned}$$
(33)

where \(\lambda \) is a parameter. A system of type (33) is called a projected dynamical system associated with the general variational inequalities (9). Here the right-hand side is related to the projection operator and is discontinuous on the boundary of \(K\). From the definition, it is clear that the solution of the dynamical system satisfies \(g(u(t)) \in K \) for all \(t\). This implies that qualitative results such as the existence, uniqueness and continuous dependence of the solution of (33) can be studied. Such projected dynamical systems associated with the general variational inequalities (9) have been studied extensively.
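Numerically, the trajectory of (33) can be followed by an explicit Euler scheme; the following Python sketch (for \(g=I\), with hypothetical step size and parameters) illustrates the idea.

```python
import numpy as np

def projected_dynamics(T, proj, u0, rho=0.1, lam=1.0, h=0.1, tol=1e-7, max_steps=100000):
    """Explicit Euler integration of du/dt = lam*(P_K[u - rho*T(u)] - u), a sketch for g = I."""
    u = proj(np.asarray(u0, dtype=float))
    for k in range(max_steps):
        du = lam * (proj(u - rho * T(u)) - u)   # right-hand side of (33)
        if np.linalg.norm(du) <= tol:           # equilibrium: R(u) = 0, so u solves (9)
            return u, k
        u = u + h * du                          # for h*lam <= 1 this is a convex combination,
                                                # so the iterate stays in K
    return u, max_steps
```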

We use the projected dynamical system (33) to suggest some iterative methods for solving the variational inequalities (9). These methods can be viewed in the sense of Korpelevich [60] and Noor [122], involving double projection operators.

For simplicity, we consider the dynamical system

$$\begin{aligned} \frac{dg(u)}{dt}+g(u)=P_{K}[g(u)-\rho Tu],\quad u(t_{0})=\alpha . \end{aligned}$$
(34)

We construct the implicit iterative method using the forward difference scheme. Discretizing the equation (34), we have

$$\begin{aligned} \frac{g(u_{n+1})-g(u_{n})}{h}+g(u_{n+1})=P_{K}[g(u_{n})-\rho Tu_{n+1}], \end{aligned}$$
(35)

where \(h\) is the step size. Now, we can suggest the following implicit iterative method for solving the variational inequality (9).

Algorithm 17

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} g(u_{n+1})=P_{K}\bigg[g(u_{n})-\rho Tu_{n+1}-\frac{g(u_{n+1})-g(u_{n})}{h}\bigg],\quad n=0,1,2,\ldots . \end{aligned}$$

This is an implicit method and is quite different from known implicit methods. Using Lemma 1, Algorithm 17 can be rewritten in the following equivalent form:

Algorithm 18

For a given \(u_{0}\in H\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} \langle \rho Tu_{n+1}+\frac{1+h}{h}(g(u_{n+1})-g(u_{n})),g(v)-g(u_{n+1})\rangle \geq 0, \quad \forall g(v) \in K. \end{aligned}$$
(36)

We now study the convergence analysis of Algorithm 18 under some mild conditions.

Theorem 4

Let \(u\in H: g(u) \in K\) be a solution of the general variational inequality (9). Let \(u_{n+1}\) be the approximate solution obtained from (36). If \(T\) is \(g\)-monotone, then

$$\begin{aligned} \|g(u)-g(u_{n+1})\|^{2}\leq \|g(u)-g(u_{n})\|^{2}-\|g(u_{n})-g(u_{n+1}) \|^{2}. \end{aligned}$$
(37)

Proof

Let \(u\in H: g(u) \in K \) be a solution of (9). Then

$$\begin{aligned} \langle Tv,g(v)-g(u) \rangle \geq 0,\quad \forall v\in H: g(v) \in K, \end{aligned}$$
(38)

since \(T\) is a \(g\)-monotone operator.

Set \(v=u_{n+1}\) in (38), to have

$$\begin{aligned} \langle Tu_{n+1},g(u_{n+1})-g(u) \rangle \geq 0. \end{aligned}$$
(39)

Taking \(v=u\) in (36), we have

$$\begin{aligned} \langle \rho Tu_{n+1}+\{\frac{(1+h)g(u_{n+1})-(1+h)g(u_{n})}{h}\},g(u)-g(u_{n+1}) \rangle \geq 0. \end{aligned}$$
(40)

From (39) and (40), we have

$$\begin{aligned} \langle (1+h)(g(u_{n+1})-g(u_{n})),g(u)-g(u_{n+1})\rangle \geq 0. \end{aligned}$$
(41)

From (41) and using \(2\langle a,b\rangle = \|a+b\|^{2}-\|a\|^{2}-\|b\|^{2}, \forall a,b \in H\), we obtain

$$\begin{aligned} \|g(u_{n+1})-g(u)\|^{2}\leq \|g(u)-g(u_{n})\|^{2}-\|g(u_{n+1})-g(u_{n}) \|^{2}, \end{aligned}$$
(42)

the required result. □

Theorem 5

Let \(u\in H: g(u) \in K\) be a solution of the general variational inequality (9) and let \(u_{n+1}\) be the approximate solution obtained from (36). If \(T\) is a \(g\)-monotone operator and \(g^{-1}\) exists, then \(u_{n+1}\) converges to \(\hat{u}\in H\) satisfying (9).

Proof

Let \(T\) be a \(g\)-monotone operator. Then, from (37), it follows that the sequence \(\{u_{i}\}^{\infty }_{i=1}\) is a bounded sequence and

$$\begin{aligned} \sum _{n=0}^{\infty }\|g(u_{n})-g(u_{n+1})\|^{2}\leq \|g(u)-g(u_{0})\|^{2}, \end{aligned}$$

which implies that

$$\begin{aligned} \lim _{n\rightarrow \infty }\|u_{n+1}-u_{n}\|^{2}=0, \end{aligned}$$
(43)

since \(g^{-1} \) exists.

Since the sequence \(\{u_{i}\}^{\infty }_{i=1}\) is bounded, there exists a cluster point \(\hat{u}\) to which a subsequence \(\{u_{n_{k}}\}^{\infty }_{k=1}\) converges. Taking the limit in (36) and using (43), it follows that \(\hat{u}\in H: g(\hat{u})\in K\) satisfies

$$\begin{aligned} \langle T\hat{u},g(v)-g(\hat{u})\rangle \geq 0,\quad \forall v\in H: g(v) \in K, \end{aligned}$$

and

$$\begin{aligned} \|g(u_{n+1})-g(u)\|^{2}\leq \|g(u)-g(u_{n})\|^{2}. \end{aligned}$$

Using this inequality, one can show that the cluster point \(\hat{u}\) is unique and

$$\begin{aligned} \lim _{n\rightarrow \infty }u_{n+1}=\hat{u}. \end{aligned}$$

 □

We now suggest another implicit iterative method for solving (9). Discretizing (34), we have

$$\begin{aligned} \frac{g(u_{n+1})-g(u_{n})}{h}+g(u_{n+1})=P_{K}[g(u_{n+1})-\rho Tu_{n+1}], \end{aligned}$$
(44)

where \(h\) is the step size.

This formulation enables us to suggest the following iterative method.

Algorithm 19

For a given \(u_{0}\in K\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} g(u_{n+1})=P_{K}\bigg[g(u_{n+1})-\rho Tu_{n+1}-\frac{g(u_{n+1})-g(u_{n})}{h}\bigg]. \end{aligned}$$

Using Lemma 1, Algorithm 19 can be rewritten in the following equivalent form:

Algorithm 20

For a given \(u_{0}\in K\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} \langle \rho Tu_{n+1}+\{\frac{g(u_{n+1})-g(u_{n})}{h}\},g(v)-g(u_{n+1})\rangle \geq 0, \quad \forall v\in H: g(v)\in K. \end{aligned}$$
(45)

Again using dynamical systems, we can suggest some iterative methods for solving the variational inequalities and related optimization problems.

Algorithm 21

For a given \(u_{0}\in K\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} u_{n+1}=P_{K}\bigg[\frac{(h+1)(g(u_{n})-g(u_{n+1}))}{h}-\rho Tu_{n}\bigg],\quad n=0,1,2,\ldots , \end{aligned}$$

which can be written in the equivalent form as:

Algorithm 22

For a given \(u_{0}\in K\), compute \({u_{n+1}}\) by the iterative scheme

$$\begin{aligned} \langle \rho Tu_{n}+\{\frac{h+1}{h}(g(u_{n+1})-g(u_{n}))\},v-u_{n+1}\rangle \geq 0, \quad \forall g(v)\in K. \end{aligned}$$
(46)

In a similar way, one can suggest a wide class of implicit iterative methods for solving variational inequalities and related optimization problems. The comparison of these methods with other methods is an interesting problem for future research.

6 Auxiliary Principle Technique

In the previous sections, we have considered and analyzed several projection-type methods for solving variational inequalities. It is well known that, to implement such methods, one has to evaluate the projection, which is itself a difficult problem. Moreover, the projection technique cannot be extended to some other classes of variational inequalities. These facts motivate us to consider other methods. One of these is known as the auxiliary principle technique, which is basically due to Lions and Stampacchia [54]. Glowinski et al. [47] used this technique to study the existence of a solution of mixed variational inequalities. Noor [93,94,95, 114, 121, 122] has used this technique to develop some predictor-corrector methods for solving variational inequalities. It has been shown that various classes of methods, including projection, Wiener-Hopf, decomposition and descent methods, can be obtained from this technique as special cases.

For a given \(u \in H, g(u) \in K \) satisfying (9), consider the problem of finding a unique \(w \in H, g(w) \in K \) such that

$$\begin{aligned} \langle \rho Tu + g(w)-g(u), g(v)-g(w) \rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$
(47)

where \(\rho > 0 \) is a constant.

Note that, if \(w = u \), then \(w \) is clearly a solution of the general variational inequality (9). This simple observation enables us to suggest and analyze the following predictor-corrector method.

Algorithm 23

For a given \(u_{0} \in H \), compute the approximate solution \(u_{n+1}\) by the iterative schemes

$$\begin{aligned} &\langle \mu Tu_{n}+g(y_{n})-g(u_{n}),g(v)-g(y_{n}) \rangle \geq 0, \quad \forall g(v) \in K, \\ & \langle \beta Ty_{n}+g(w_{n})-g(y_{n}),g(v)-g(w_{n})\rangle \geq 0, \quad \forall g(v) \in K, \\ & \langle \rho Tw_{n}+g(u_{n+1})-g(w_{n}),g(v)-g(u_{n+1}) \rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$

where \(\rho > 0, \beta > 0 \) and \(\mu > 0 \) are constants.

Algorithm 23 can be considered as a three-step predictor-corrector method, which was suggested and studied by Noor [110, 122].

If \(\mu = 0 \), then Algorithm 23 reduces to:

Algorithm 24

For a given \(u_{0} \in H \), compute the approximate solution \(u_{n+1}\) by the iterative schemes:

$$\begin{aligned} & \langle \beta Tu_{n}+g(w_{n})-g(u_{n}),g(v)-g(w_{n})\rangle \geq 0, \quad \forall g(v) \in K, \\ & \langle \rho Tw_{n}+g(u_{n+1})-g(w_{n}),g(v)-g(u_{n+1}) \rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$

which is known as the two-step predictor-corrector method, see [110, 122].

If \(\mu = 0 , \beta = 0 \), then Algorithm 23 becomes:

Algorithm 25

For a given \(u_{0} \in H \), compute \(u_{n+1}\) by the iterative scheme

$$\begin{aligned} \langle \rho Tu_{n}+g(u_{n+1})-g(u_{n}),g(v)-g(u_{n+1})\rangle \geq 0, \quad \forall g(v) \in K. \end{aligned}$$

Using the projection technique, Algorithm 23 can be written as

Algorithm 26

For a given \(u_{0} \in H \), compute \(u_{n+1}\) by the iterative schemes

$$\begin{aligned} g(y_{n}) = & P_{K}[g(u_{n})-\mu Tu_{n}] \\ g(w_{n}) = & P_{K}[g(y_{n})-\beta Ty_{n}] \\ g(u_{n+1}) = & P_{K}[g(w_{n})-\rho Tw_{n}], \quad n=0,1,2, \ldots \end{aligned}$$

or

$$\begin{aligned} g(u_{n+1}) = P_{K}[I-\rho Tg^{-1}]P_{K}[I-\beta Tg^{-1}]P_{K}[I-\mu Tg^{-1}]g(u_{n}), \quad n=0,1,2, \dots \end{aligned}$$

or

$$\begin{aligned} g(u_{n+1}) = & (I+\rho Tg^{-1})^{-1}\{P_{K}[I-\rho Tg^{-1}]P_{K}[I-\rho Tg^{-1}]P_{K}[I-\rho Tg^{-1}] \\ & + \rho Tg^{-1}\}g(u_{n}), \quad n=0,1,2,\ldots , \end{aligned}$$

which is a three-step forward-backward method. See also the two-step forward-backward splitting method of Tseng [179, 180] for solving classical variational inequalities.

Definition 9

For all \(u,v,z \in H \), the operator \(T:H \longrightarrow H \) is said to be:

(i). \(g\)-partially relaxed strongly monotone, if there exists a constant \(\alpha > 0 \) such that

$$\begin{aligned} \langle Tu-Tv,g(z)-g(v) \rangle \geq -\alpha \|g(z)-g(u)\|^{2}. \end{aligned}$$

(ii). \(g\)-cocoercive, if there exists a constant \(\mu > 0 \) such that

$$\begin{aligned} \langle Tu-Tv,g(u)-g(v) \rangle \geq \mu \|Tu-Tv\|^{2}. \end{aligned}$$

We remark that for \(z = u \), \(g\)-partially relaxed strong monotonicity reduces to \(g\)-monotonicity. For \(g = I \), Definition 9 reduces to the standard definitions of partially relaxed strong monotonicity and cocoercivity of the operator. We now show that \(g\)-cocoercivity implies \(g\)-partially relaxed strong monotonicity. This result is due to Noor [110, 122]. To convey an idea, we include its proof.

Lemma 4

If \(T \) is a \(g\)-cocoercive operator with constant \(\mu > 0 \), then \(T\) is a \(g\)-partially relaxed strongly monotone operator with constant \(\frac{1}{4\mu } \).

Proof

For all \(u,v, z \in H \), using the \(g\)-cocoercivity of \(T\) and the inequality \(\langle a,b \rangle \geq -\mu \|a\|^{2}-\frac{1}{4\mu }\|b\|^{2}, \forall a,b \in H \), we have

$$\begin{aligned} \langle Tu-Tv,g(z)-g(v) \rangle = & \langle Tu-Tv,g(u)-g(v) \rangle + \langle Tu-Tv, g(z)-g(u) \rangle \\ \geq & \mu \|Tu-Tv\|^{2} -\mu \|Tu-Tv\|^{2} - \frac{1}{4\mu }\|g(z)-g(u) \|^{2} \\ = & \frac{-1}{4\mu }\|g(z)-g(u)\|^{2}, \end{aligned}$$

which shows that \(T \) is a \(g\)-partially relaxed strongly monotone operator with constant \(\frac{1}{4\mu } \). □

One can easily show that the converse is not true. Thus we conclude that \(g\)-partially relaxed strong monotonicity is a weaker condition than \(g\)-cocoercivity.

One can study the convergence criteria of Algorithm 23 using the technique of Noor [104].

Remark 2

In the implementation of these algorithms, one does not have to evaluate the projection. Our convergence analysis is very simple compared with that of other methods. Following the technique of Tseng [179], one can obtain new parallel and decomposition algorithms for solving a number of problems arising in optimization and mathematical programming.

Remark 3

We note that, if the operator \(g \) is linear or convex, then the auxiliary problem (47) is equivalent to finding the minimum of the functional \(I[w] \) on the convex set \(K \), where

$$\begin{aligned} I[w] = & \frac{1}{2} \langle g(w)-g(u),g(w)-g(u) \rangle + \langle \rho Tu, g(w)-g(u) \rangle \\ = & \frac{1}{2}\|g(w)-(g(u)-\rho Tu)\|^{2}-\frac{1}{2}\|\rho Tu\|^{2}. \end{aligned}$$
(48)

It can be easily shown that the minimizer of (48), whose last term is independent of \(w \), is the projection of the point \(( g(u)-\rho Tu )\) onto the convex set \(K \), that is,

$$\begin{aligned} g(w(u)) = P_{K}[g(u)-\rho Tu], \end{aligned}$$
(49)

which is the fixed-point characterization of the general variational inequality (9).

Based on the above observations, one can show that the general variational inequality (9) is equivalent to finding the minimum of the functional \(N[u] \) on \(K \) in \(H \), where

$$\begin{aligned} N[u] = & -\langle \rho Tu,g(w(u))-g(u) \rangle -\frac{1}{2} \langle g(w(u))-g(u),g(w(u))-g(u) \rangle \\ = & \frac{1}{2}\{\|\rho Tu\|^{2}-\|g(w(u))-(g(u)-\rho Tu)\|^{2} \}, \end{aligned}$$
(50)

where \(g(w) = g(w(u)) \). The function \(N[u]\) defined by (50) is known as the gap (merit) function associated with the general variational inequality (9). This equivalence has been used to suggest and analyze a number of methods for solving variational inequalities and nonlinear programming, see, for example, Patriksson [156]. In this direction, we have:

Algorithm 27

For a given \(u_{0} \in H \), compute the sequence \(\{ u_{n}\} \) by the iterative scheme

$$\begin{aligned} g(u_{n+1}) = g(u_{n}) + t_{n}d_{n}, \quad n=0,1,2, \ldots , \end{aligned}$$

where \(d_{n} = g(w(u_{n}))-g(u_{n}) = P_{K}[g(u_{n})-\rho Tu_{n}]- g(u_{n}) \), and \(t_{n} \in [0,1] \) is determined by the Armijo-type rule

$$\begin{aligned} N[u_{n}+ \beta _{l}d_{n}] \leq N[u_{n}]- \alpha \beta _{l}\|d_{n}\|^{2}. \end{aligned}$$

It is worth noting that the sequence \(\{u_{n}\}\) generated by

$$\begin{aligned} g(u_{n+1}) = & (1-t_{n})g(u_{n}) + t_{n}P_{K}[g(u_{n})-\rho Tu_{n} ] \\ = & g(u_{n})-t_{n}R(u_{n}), \quad n=0,1,2,\ldots , \end{aligned}$$

is very much similar to that generated by the projection-type Algorithm 3. Based on the above observations and discussion, it is clear that the auxiliary principle approach is quite general and flexible. This approach can be used not only to study the existence theory but also to suggest and analyze various iterative methods for solving variational inequalities. Using the technique of Fukushima [42], one can easily study the convergence analysis of Algorithm 27.
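To illustrate Algorithm 27, the following Python sketch evaluates the gap function \(N[u]\) of (50) and performs the Armijo-type search, again for \(g=I\); the parameter values and the cap on the backtracking loop are our assumptions, and the scheme is meaningful only under conditions that make \(d_{n}\) a descent direction for \(N\).

```python
import numpy as np

def merit_N(T, proj, u, rho):
    """Gap (merit) function N[u] of (50) for g = I."""
    w = proj(u - rho * T(u))            # w(u) = P_K[u - rho*T(u)]
    r = w - (u - rho * T(u))
    return 0.5 * (np.dot(rho * T(u), rho * T(u)) - np.dot(r, r))

def algorithm_27(T, proj, u0, rho=0.5, beta=0.5, alpha=1e-4, tol=1e-7, max_iter=500):
    u = proj(np.asarray(u0, dtype=float))
    for k in range(max_iter):
        d = proj(u - rho * T(u)) - u    # d_n = -R(u_n)
        if np.linalg.norm(d) <= tol:
            return u, k
        t = 1.0
        for _ in range(50):             # Armijo-type rule on N
            if merit_N(T, proj, u + t * d, rho) <= merit_N(T, proj, u, rho) - alpha * t * np.dot(d, d):
                break
            t *= beta
        u = u + t * d
    return u, max_iter
```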

We have shown that the auxiliary principle technique can be used to construct gap (merit) functions for the general variational inequalities (9). We use the gap function to consider an optimal control problem governed by the general variational inequalities (9). The control problem is an optimization problem, which is also referred to as a generalized bilevel programming problem or a mathematical program with equilibrium constraints. It is known that the techniques of classical optimal control problems cannot be extended to variational inequalities, see Dietrich [32]. This has motivated the development of other techniques, including the notion of conical derivatives, the penalty method, and the formulation of the variational inequality as an operator equation with a set-valued operator. Furthermore, one can construct a so-called gap function associated with a variational inequality, so that the variational inequality is equivalent to a scalar equation for the gap function. Under suitable conditions, such a gap function is Frechet differentiable, and one may use a penalty method to approximate the optimal control problem, associate a regularized gap function in the sense of Fukushima [42] with the general variational inequality (9), and determine its Frechet derivative. Dietrich [32, 33] has developed similar results for the general variational inequalities. We give only the basic properties of the optimal control problem and the associated gap functions to convey an idea of the approach.

We now consider the following problem of optimal control for the general variational inequalities (9), that is, to find \(u \in H: g(u) \in K, z \in U \) such that

$$\begin{aligned} {\mathbf{P.}} \quad \min I(u,z), \quad \langle T(u,z),g(v)-g(u) \rangle \geq 0, \quad \forall v \in H:g(v) \in K, \end{aligned}$$

where \(H \) and \(U\) are Hilbert spaces and the sets \(K \) and \(E \) are closed convex sets in \(H \) and \(U \), respectively. Here \(H \) is the state space and \(K \subset H \) is the set of state constraints for the problem; \(U \) is the control space and the closed convex set \(E \subset U \) is the set of control constraints. \(T(.,.): H\times U \longrightarrow H \) is an operator which is Frechet differentiable. The functional \(I(.,.) : H \times U \longrightarrow R\cup \{+\infty \} \) is a proper, convex and lower-semicontinuous function. We also assume that the problem \({\mathbf{P}}\) has at least one optimal solution, denoted by \((u^{*},z^{*}) \in H\times U \).

Related to the optimization problem \(({\mathbf{P }})\), we consider the regularized gap (merit) function \(h_{\rho }(u, z):H\times U \longrightarrow R \) as

$$\begin{aligned} h_{\rho }(u,z) =& \sup _{v \in H:g(v) \in K }\big\{ \langle -\rho T(u,z),g(v)-g(u) \rangle -\frac{1}{2}\|g(v)-g(u)\|^{2} \big\} . \end{aligned}$$
(51)

We remark that the regularized function (51) is a natural generalization of the regularized gap function (50) for variational inequalities. It can be shown that the regularized gap function \(h _{\rho }(.,.) \) defined by (51) has the following properties. The analysis is in the spirit of Dietrich [33].

Theorem 6

The gap function \(h_{\rho }(.,.) \) defined by (51) is well-defined and

$$\begin{aligned} (i). &\quad \forall u \in H :g(u) \in K, z \in U, \quad h_{\rho }(u,z) \geq 0. \\ (ii). &\quad h_{\rho }(u,z)=\frac{1}{2}\big\{\rho ^{2}\|T(u,z)\|^{2}-d^{2}_{K}\big(g(u)- \rho T(u,z)\big)\big\}, \\ (iii). &\quad h_{\rho }(u,z)=-\rho \langle T(u,z),g(u_{K})-g(u)\rangle - \frac{1}{2}\|g(u_{K})-g(u)\|^{2}, \end{aligned}$$

where \(d_{K} \) is the distance to \(K \) and

$$\begin{aligned} g(u_{K})=P_{K}[g(u)-\rho T(u,z)]. \end{aligned}$$

Proof

It is well-known that

$$\begin{aligned} d^{2}_{K}\big(g(u)\big) = \min _{v \in H:g(v)\in K}\|g(v)-g(u)\|^{2} = \|g(u)-P_{K}[g(u)] \|^{2}. \end{aligned}$$

Take \(v = u \) in (51). Then clearly (i) is satisfied.

Let \((u,z) \in H\times U \). Then

$$\begin{aligned} h_{\rho }(u,z) = & \rho \langle T(u,z),g(u)\rangle -\frac{1}{2}\|g(u) \|^{2} \\ &+ \sup _{v\in H:g(v)\in K}\left [\langle -\rho T(u,z),g(v)\rangle - \frac{1}{2}\|g(v)\|^{2}+\langle g(u),g(v)\rangle \right ] \\ =& \rho \langle T(u,z),g(u)\rangle -\frac{1}{2}\|g(u)\|^{2} \\ &- \inf _{v \in H:g(v)\in K }\left [\frac{1}{2}\|g(v)\|^{2}-\langle g(u)- \rho T(u,z),g(v) \rangle \right ] \\ = &\rho \langle T(u,z),g(u)\rangle -\frac{1}{2}\|g(u)\|^{2} \\ &-\frac{1}{2}\inf _{v\in H:g(v) \in K}\|g(v)-(g(u)-\rho T(u,z))\|^{2} + \frac{1}{2}\|g(u)-\rho T(u,z) \|^{2} \\ = & \frac{\rho ^{2}}{2}\|T(u,z)\|^{2}-\frac{1}{2}d^{2}_{K}\big(g(u)- \rho T(u,z)\big). \end{aligned}$$

Setting \(g(u_{K}) = P_{K}[g(u)-\rho T(u,z)] \), we have

$$\begin{aligned} h_{\rho }(u,z) = & \frac{\rho ^{2}}{2}\|T(u,z)\|^{2}-\frac{1}{2}\|g(u)- \rho T(u,z)-g(u_{K})\|^{2} \\ = & -\rho \langle T(u,z),g(u_{K})-g(u)\rangle -\frac{1}{2}\|g(u_{K})-g(u) \|^{2}. \end{aligned}$$

 □

Theorem 7

If the set \(K\) is \(g\)-convex in \(H \), then the following are equivalent.

$$\begin{aligned} (i). \quad & h_{\rho }(u,z) = 0, \quad \textit{where } u\in H: g(u) \in K, z \in U, \\ (ii). \quad & \langle T(u,z),g(v)-g(u) \rangle \geq 0, \quad \forall v \in H: g(v) \in K, z \in U, \\ (iii). \quad & g(u)= P_{K}[g(u)-\rho T(u,z)]. \end{aligned}$$

Proof

We show that \((ii) \Longrightarrow (i)\).

Let \(u \in H \) and \(z \in U \) be a solution of

$$\begin{aligned} \langle T(u,z),g(v)-g(u) \rangle \geq 0, \quad \forall v \in H: g(v) \in K. \end{aligned}$$

Then, for all \(v \in H: g(v) \in K \),

$$\begin{aligned} -\rho \langle T(u,z),g(v)-g(u) \rangle -\frac{1}{2} \|g(v)-g(u)\|^{2} \leq 0, \end{aligned}$$

and taking the supremum over all such \(v\) in (51), we obtain

$$\begin{aligned} h_{\rho }(u,z) \leq 0. \end{aligned}$$

Also, taking \(v = u \) in (51), we know that

$$\begin{aligned} h_{\rho }(u,z) \geq 0. \end{aligned}$$

From the above inequalities, we have (i), that is, \(h_{\rho }(u,z) = 0 \).

Conversely, let (i) hold. Then

$$\begin{aligned} -\rho \langle T(u,z),g(v)-g(u) \rangle -\frac{1}{2}\|g(v)-g(u)\|^{2} \leq 0, \forall v \in H : g(v) \in K. \end{aligned}$$
(52)

Since \(K\) is a \(g\)-convex set, so for all \(g(w),g(u) \in K, t \in [0,1]\),

$$\begin{aligned} g(v_{t})= (1-t)g(u) +tg(w) \in K. \end{aligned}$$

Setting \(g(v)=g(v_{t}) \) in (52), we have

$$\begin{aligned} -\rho \langle T(u,z),g(w)-g(u) \rangle -\frac{t}{2}\|g(w)-g(u)\|^{2} \leq 0. \end{aligned}$$

Letting \(t \longrightarrow 0 \), we have

$$\begin{aligned} \langle T(u,z),g(w)-g(u) \rangle \geq 0, \quad \forall g(w) \in K, \end{aligned}$$

the required (ii). Thus we conclude that (i) and (ii) are equivalent. Applying Lemma 1, we see that (ii) and (iii) are equivalent. □

From Theorem 6 and Theorem 7, we conclude that the optimization problem \({\mathcal{P}}\) is equivalent to

$$\begin{aligned} \min I(u,z), \quad h_{\rho }(u,z) = 0, \quad \forall u \in H:g(u) \in K, z \in U, \end{aligned}$$

where \(h_{\rho }(u,z) \) is \({\mathcal{C}}^{1}\)-differentiable in the sense of Frechet, but is not convex.

If the operators \(T, g \) are Frechet differentiable, then the gap function \(h_{\rho }(u,z) \) defined by (51) is also Frechet differentiable. In fact,

$$\begin{aligned} h^{\prime }_{\rho }(u,z) = \rho ^{2}[T^{\prime }(u,z)]^{\ast }T(u,z)-([g^{ \prime }(u)]^{\ast }-\rho [T^{\prime }(u,z)]^{\ast })(I-P_{K})[g(u)- \rho T(u,z)], \end{aligned}$$

where \([T^{\prime }(u,z)]^{\ast } \) is the adjoint operator of \(T^{\prime }(u,z) \). In particular, at any point \((u_{1},z_{1}) \) with \(h_{\rho }(u_{1},z_{1}) = 0 \), that is, at a solution of the general variational inequality (9), this reduces to

$$\begin{aligned} h^{\prime }_{\rho }(u_{1},z_{1}) = \rho \cdot [g^{\prime }(u_{1})]^{ \ast }T(u_{1},z_{1}), \end{aligned}$$


For the optimal solution \((u^{*},z^{*})\) of problem \({\mathbf{P}}\), we have

$$\begin{aligned} h^{\prime }_{\rho }(u^{*},z^{*}) = \rho \cdot [g^{\prime }(u^{*})]^{ \ast }T(u^{*},z^{*}). \end{aligned}$$

We now consider a simple example of an optimal control problem to illustrate the above approach:

$$\begin{aligned} \min ({\mathbf{P}}_{1}) :&=\min \left \{ u^{2}+z^{2}\left | \textstyle\begin{array}{c} (u+z-1)(v^{2}-u-z^{2})\geq 0 \\ \forall v\in R:v^{2}\geq 1 \\ (u,z)\in R^{2}:u+z^{2}\geq 1 \end{array}\displaystyle \right . \right \} \\ T(u,z) =&u+z-1,\quad g(u)=u,\quad K=[1,+\infty ). \end{aligned}$$

First, we solve the general variational inequality (9)

$$\begin{aligned} \text{Case 1} :&T(u,z)=z+u-1=0 \\ \Longrightarrow &{\mathcal{L}}_{1}=\left \{ (u,z)=(1-z,z)\in R^{2}\left | \,\ z\in (-\infty ,0]\cup \lbrack 1,+\infty )\right . \right \} \\ \text{Case 2 } :&T(u,z)=z+u-1>0 \\ \Longrightarrow &{\mathcal{L}}_{2}=\left \{ (u,z)=(1-z^{2},z)\in R^{2} \left | \,\ z\in (0,1)\right . \right \} \\ \text{Case 3 } :&T(u,z)=z+u-1< 0 \\ \Longrightarrow &{\mathcal{L}}_{3}=\emptyset \\ {\mathcal{L}} =&\left \{ (u,z)\in R^{2}\left | \textstyle\begin{array}{c} u =1-z\quad \text{for}\quad\ z\in (-\infty ,0]\cup \lbrack 1,+ \infty ) \\ u=1-z^{2}\quad \text{for}\quad z\in (0,1) \end{array}\displaystyle \right . \right \} . \end{aligned}$$

We obtain as the unique optimal solution of \({\mathbf{P}}_{1}\) the pair \((u_{opt},z_{opt})=( \frac{1}{2},\frac{1}{2}\sqrt{2})\) with \(\min ( { \mathbf{P}}_{1})= \frac{3}{4}\).

Next, we calculate the gap function of the general variational inequality problem (9).

$$\begin{aligned} h_{1}(u,z) =&\frac{1}{2}\left ( z+u-1\right ) ^{2}-\frac{1}{2}\left [ \left ( I-P_{[1,+\infty )}\right ) \left ( z^{2}-z+1\right ) \right ] ^{2} \\ =&\left \{ \textstyle\begin{array}{c} \frac{1}{2}\left ( z+u-1\right ) ^{2}-\frac{1}{2}\left ( z^{2}-z \right ) ^{2}\quad \text{for}\quad z\in (0,1), \\ \frac{1}{2}\left ( z+u-1\right ) ^{2}\quad \text{for}\quad z\in (- \infty ,0]\cup \lbrack 1,+\infty ). \end{array}\displaystyle \right . \end{aligned}$$

This shows the equivalence between the following two problems:

$$ (u,z)\in R^{2}:u+z^{2}\geq 1\,\,\text{and}\,\ h_{1}(u,z)=0\,\,\Longleftrightarrow \ (u,z)\in H\times U: $$
$$ \langle T(u,z), g(v)-g(u) \rangle \geq 0, \quad \forall g(v) \in K. $$
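The optimal pair can be confirmed by a brute-force scan over the solution set \({\mathcal{L}}\); the following short Python check (the grid resolution is our choice) reproduces \((u_{opt},z_{opt})\approx (0.5,\, 0.7071)\) and \(\min ({\mathbf{P}}_{1})=0.75\).

```python
import numpy as np

# Scan the solution set L of the variational inequality and minimize u^2 + z^2 over it.
z = np.linspace(-2.0, 2.0, 2000001)
u = np.where((z > 0.0) & (z < 1.0), 1.0 - z**2, 1.0 - z)   # the two branches of L
vals = u**2 + z**2
i = np.argmin(vals)
print(u[i], z[i], vals[i])   # approximately 0.5, 0.70710678, 0.75
```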

7 Penalty Function Method

In this section, we consider a system of third-order boundary value problems, where the solution is required to satisfy some extra continuity conditions on the subintervals in addition to the usual boundary conditions. Such systems of boundary value problems arise in the study of obstacle, free, moving and unilateral problems and have important applications in various branches of pure and applied sciences. Despite their importance, little attention has been given to developing efficient numerical methods for solving these systems, except for special cases. In particular, it is known that if the obstacle function is known, then the general variational inequalities can be characterized by a system of odd-order boundary value problems via the penalty method. This technique is called the penalty function method and was used by Lewy and Stampacchia [64] to study the regularity of a solution of variational inequalities. The computational advantage of this technique is its simple applicability for solving systems of differential equations. This technique has been explored and developed by Noor et al. to solve the systems of differential equations associated with even and odd-order obstacle problems. Our approach is to treat these problems in a general manner and specialize them later on. To convey an idea of the technique involved, we first introduce two numerical schemes for solving a system of third-order boundary value problems using splines. An example involving an odd-order obstacle problem is given.

For simplicity, we consider a system of obstacle third-order boundary value problem of the type

$$\begin{aligned} u^{\prime \prime \prime } = \left \{ \textstyle\begin{array}{l} f(x), \quad \quad a \leq x \leq c, \\ p(x)u(x)+f(x)+r, \quad \quad c \leq x \leq d, \\ f(x), \quad \quad d \leq x \leq b, \end{array}\displaystyle \right . \end{aligned}$$
(53)

with the boundary conditions

$$\begin{aligned} u(a) = \alpha , \quad u^{\prime }(a) = \beta _{1} \quad \text{and} \quad u^{\prime }(b) = \beta _{2}, \end{aligned}$$
(54)

and the continuity conditions of \(u, u^{\prime } \) and \(u^{\prime \prime } \) at \(c\) and \(d \). Here \(f \) and \(p \) are continuous functions on \([a,b] \) and \([c,d]\), respectively. The parameters \(r, \alpha , \beta _{1} \) and \(\beta _{2}\) are real finite constants. Such systems arise in the study of obstacle, free, moving and unilateral boundary value problems and have important applications in other branches of pure and applied sciences. In general, it is not possible to obtain the analytical solution of (53) for arbitrary choices of \(f(x) \) and \(p(x) \); we usually resort to numerical methods for obtaining an approximate solution. Here we use cubic spline functions to derive some consistency relations, which are then used to develop a numerical technique for solving the system of third-order boundary value problems (53). Without loss of generality, we set

$$c =\frac{3a+b}{4} \ \text{and}\ d = \frac{a+3b}{4}$$

in order to derive a numerical method for approximating the solution of the system (53). For this purpose, we divide the interval \([a,b]\) into \(n+1\) equal subintervals using the grid points

$$\begin{aligned} x_{i}= a + ih, \quad i=0,1,2, \ldots , n+1, \end{aligned}$$

with

$$\begin{aligned} x_{0} = a, \quad x_{n+1} = b, \quad h = \frac{b-a}{n+1}, \end{aligned}$$

where \(n\) is a positive integer chosen such that both \(\frac{n+1}{4} \) and \(\frac{3(n+1)}{4}\) are also positive integers. Additionally, let \(u(x) \) be the exact solution of (53) and \(s_{i}\) be an approximation to \(u_{i} = u(x_{i}) \) obtained by the cubic \(P_{i}(x) \) passing through the points \((x_{i},s_{i})\) and \((x_{i+1},s_{i+1}) \). We write \(P_{i}(x) \) in the form

$$\begin{aligned} P_{i}(x) = a_{i}(x-x_{i})^{3} +b_{i}(x-x_{i})^{2} +c_{i}(x-x_{i}) + d_{i}, \end{aligned}$$
(55)

for \(i = 0,1,2, \ldots ,n-1 \). Then the cubic spline is defined by

$$\begin{aligned} s(x) = & P_{i}(x), \quad i = 0,1,2, \ldots ,n-1, \\ s(x) \in & C^{2}[a,b]. \end{aligned}$$
(56)

We now develop explicit expressions for the four coefficients in (55). To this end, we first set

$$\begin{aligned} P_{i}(x_{i}) = & s_{i}, \quad P_{i}(x_{i+1}) = s_{i+1}, \quad P^{\prime }_{i}(x_{i}) = D_{i}, \\ P_{i}^{\prime \prime \prime }(x_{i}) = &\frac{1}{2}[T_{i+1}+T_{i}], \quad \text{for} \quad i=0,1,2, \ldots ,n-1, \end{aligned}$$
(57)

and

$$\begin{aligned} T_{i} = \left \{ \textstyle\begin{array}{l} f_{i}, \quad \quad \text{for} \quad 0 \leq i \leq \frac{n+1}{4} \quad \text{and} \quad \frac{3(n+1)}{4} < i \leq n+1, \\ p_{i}s_{i}+ f_{i}+ r, \quad \quad \text{for} \quad \frac{n+1}{4} < i \leq \frac{3(n+1)}{4}, \end{array}\displaystyle \right . \end{aligned}$$
(58)

where \(f_{i} = f(x_{i}) \) and \(p_{i} = p(x_{i})\).

Using the above discussion, we obtain the following relations

$$\begin{aligned} a_{i} = & \frac{1}{12}[T_{i+1} + T_{i}], \\ b_{i} = & \frac{1}{h^{2}}[s_{i+1}-s_{i}]-\frac{1}{h}D_{i}-\frac{h}{12}[T_{i+1}+ T_{i}], \\ c_{i} = & D_{i}, \\ d_{i} = & s_{i}, \quad i=0,1,2,\ldots ,n-1. \end{aligned}$$
(59)

Now, from the continuity of the cubic spline \(s(x)\) and its derivatives up to order two at the point \((x_{i},s_{i}) \), where the two cubics \(P_{i-1}(x)\) and \(P_{i}(x) \) join, we have

$$\begin{aligned} P^{(m)}_{i-1}(x_{i}) = P_{i}^{(m)}(x_{i}), \quad \quad m=0,1,2. \end{aligned}$$
(60)

From the above relations, one can easily obtain the following consistency relations

$$\begin{aligned} h[D_{i} +D_{i-1}] = & 2[s_{i}-s_{i-1}] + \frac{h^{3}}{12}[T_{i} +T_{i-1}], \end{aligned}$$
(61)
$$\begin{aligned} h[D_{i} -D_{i-1}] = & s_{i+1}-2s_{i} + s_{i-1} -\frac{h^{3}}{12}[T_{i+1}+3T_{i}+2T_{i-1}]. \end{aligned}$$
(62)

From (61) and (62), we obtain

$$\begin{aligned} 2hD_{i} = s_{i+1}-s_{i-1}-\frac{h^{3}}{12}[T_{i+1}+ 2T_{i} + T_{i-1}]. \end{aligned}$$
(63)

Eliminating \(D_{i} \) from (63), (62) and (61), we have

$$\begin{aligned} -s_{i-2}+3s_{i-1}-3s_{i}+s_{i+1} = \frac{1}{12}h^{3}[T_{i-2}+5T_{i-1}+5T_{i} + T_{i+1}], \end{aligned}$$
(64)

for \(i = 2,3, \ldots ,n-1\). The recurrence relation (64) gives \(n-2\) linear equations in the unknowns \(s_{i}\), \(i = 1,2, \ldots ,n\); we need two more equations, one at each end of the range of integration. These two equations are:

$$\begin{aligned} 3s_{0} -4s_{1} + s_{2} = & -2hD_{0} + \frac{h^{3}}{12}[3T_{0} +4T_{1} +T_{2}], \quad i =1, \end{aligned}$$
(65)
$$\begin{aligned} -3s_{n-2} + 8s_{n-1} -5s_{n} = & -2hD_{n+1} + \frac{h^{3}}{12}[3T_{n-2} +10T_{n-1} + 31T_{n}], \quad i = n. \end{aligned}$$
(66)

The cubic spline solution of (53) is based on the linear equations given by (64)-(66). The local truncation errors \(t_{i} \), \(i =1,2, \ldots \), associated with the cubic spline method are given by

$$\begin{aligned} t_{i} = \left \{ \textstyle\begin{array}{l} -\frac{1}{10}h^{5}u^{(5)}(\zeta _{1}) + O(h^{6}), \quad a < \zeta _{1} < x_{2} \quad i =1, \\ -\frac{1}{6}h^{5}u^{(5)}(\zeta _{i}) + O(h^{6}), \quad x_{i-2} < \zeta _{i} < x_{i+1} \quad 2 \leq i \leq n -1, \\ -\frac{1}{10}h^{5}u^{(5)}(\zeta _{n}) + O(h^{6}), \quad x_{n-2} < \zeta _{n} < b, \end{array}\displaystyle \right . \end{aligned}$$

which indicates that this method is a second order convergent process.

To illustrate the applications of the numerical methods developed above, we consider the third-order obstacle boundary value problem (19). Following the penalty function technique of Lewy and Stampacchia [64], the variational inequality associated with (19) can be written as

$$\begin{aligned} \langle Tu, g(v) \rangle + \langle \nu \{u-\psi \}(u-\psi ),g(v) \rangle = \langle f, g(v) \rangle , \quad \text{for all } \quad g(v) \in H, \end{aligned}$$
(67)

where \(\nu \{t\} \) is the discontinuous function defined by

$$\begin{aligned} \nu \{t\} = \left \{ \textstyle\begin{array}{l} 1, \quad \quad \text{for} \quad t \geq 0 \\ 0, \quad \quad \text{for} \quad t < 0 \end{array}\displaystyle \right . \end{aligned}$$
(68)

is known as the penalty function, and \(\psi \), with \(\psi < 0 \) on the boundary, is called the obstacle function. It is clear that problem (19) can be written in the form

$$\begin{aligned} -u^{\prime \prime \prime } + \nu \{u-\psi \}(u-\psi ) = f, \quad \quad 0 < x < 1, \end{aligned}$$
(69)

with

$$\begin{aligned} u(0) = u^{\prime }(0) = u^{\prime }(1) = 0, \end{aligned}$$

where \(\nu \{t\}\) is defined by (68). If the obstacle function \(\psi \) is known and is given by the relation

$$\begin{aligned} \psi (x) = \left \{ \textstyle\begin{array}{l} -1, \quad \quad \text{for} \quad 0 \leq x \leq \frac{1}{4} \quad \text{and} \quad \frac{3}{4} \leq x \leq 1 \\ 1, \quad \quad \text{for} \quad \frac{1}{4} \leq x \leq \frac{3}{4},\end{array}\displaystyle \right . \end{aligned}$$
(70)

then problem (19) is equivalent to the following system of third-order differential equations

$$\begin{aligned} u^{\prime \prime \prime } = \left \{ \textstyle\begin{array}{l} f, \quad \quad \text{for} \quad 0 \leq x \leq \frac{1}{4} \quad \text{and} \quad \frac{3}{4} \leq x \leq 1, \\ u +f -1, \quad \quad \text{for}\quad \frac{1}{4} \leq x \leq \frac{3}{4}, \end{array}\displaystyle \right . \end{aligned}$$
(71)

with the boundary conditions

$$\begin{aligned} u(0) = u^{\prime } (0) = u^{\prime }(1) = 0 \end{aligned}$$
(72)

and the conditions of continuity of \(u, u^{\prime }\) and \(u^{\prime \prime } \) at \(x = \frac{1}{4} \) and \(\frac{3}{4}\). It is obvious that problem (71) is a special case of problem (53) with \(p(x) =1 \) and \(r =-1\).

Note that for \(f =0 \), the system of differential equations (71) reduces to

$$\begin{aligned} u^{\prime \prime \prime } = \left \{ \textstyle\begin{array}{l} 0, \quad \quad \text{for} \quad 0 \leq x \leq \frac{1}{4} \quad \text{and} \quad \frac{3}{4} \leq x \leq 1 \\ u-1, \quad \quad \text{for} \quad \frac{1}{4} \leq x \leq \frac{3}{4} \end{array}\displaystyle \right . \end{aligned}$$
(73)

with the boundary condition (72).

The analytical solution for this problem is

$$ u(x) = \left \{ \textstyle\begin{array}{l@{\quad }l} \frac{1}{2} a_{1} x^{2}, \quad & 0 \leq x \leq \frac{1}{4} \\ 1 + a_{2} e^{x} + e^{- \frac{x}{2}} [ a_{3} \cos \frac{\sqrt{3}}{2} x + a_{4} \sin \frac{\sqrt{3}}{2} x ], \quad & \frac{1}{4} \leq x \leq \frac{3}{4} \\ \frac{1}{2} a_{5} x(x-2) + a_{6}, \quad & \frac{3}{4} \leq x \leq 1. \end{array}\displaystyle \right . $$
(74)

To find the constants \(a_{i}, \; i = 1,2, \ldots , 6\), we apply the continuity conditions of \(u, u^{\prime }\) and \(u^{\prime \prime }\) at \(x = \frac{1}{4}\) and \(\frac{3}{4}\), which leads to the following system of linear equations

$$\begin{aligned} &\left [ \textstyle\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \frac{1}{32} & -S_{1} & -S_{2} CS_{1} & -S_{2} SC_{1} & 0 & 0 \\ \frac{1}{4} & -S_{1} & \frac{1}{2} S_{2} (\sqrt{3} SC_{1} + CS_{1}) & -\frac{1}{2} S_{2} (\sqrt{3} CS_{1} - SC_{1}) & 0 & 0 \\ 1 & -S_{1} & -\frac{1}{2} S_{2} (\sqrt{3} SC_{1} - CS_{1}) & \frac{1}{2} S_{2} (\sqrt{3} CS_{1} + SC_{1}) & 0 & 0 \\ 0 & S_{3} & S_{4} CS_{2} & S_{4} SC_{2} & \frac{15}{32} & -1 \\ 0 & S_{3} & -\frac{1}{2} S_{4} (\sqrt{3} SC_{2}+ CS_{2}) & \frac{1}{2} S_{4} (\sqrt{3} CS_{2} - SC_{2}) & \frac{1}{4} & 0 \\ 0 & S_{3} & \frac{1}{2} S_{4} (\sqrt{3} SC_{2} - CS_{2}) & \frac{1}{2} S_{4} (-\sqrt{3} CS_{2} - SC_{2}) & -1 & 0 \end{array}\displaystyle \right ] \left [ \textstyle\begin{array}{c} a_{1} \\ a_{2} \\ a_{3} \\ a_{4} \\ a_{5} \\ a_{6} \end{array}\displaystyle \right ] \\ &\quad = \left [ \textstyle\begin{array}{c} 1 \\ 0 \\ 0 \\ -1 \\ 0 \\ 0 \end{array}\displaystyle \right ], \end{aligned}$$

where

$$ S_{1} = \exp (\frac{1}{4}), \; S_{2} = \exp ( - \frac{1}{8}), \; S_{3} = \exp ( \frac{3}{4}), \; S_{4} = \exp (- \frac{3}{8}), $$
$$ CS_{1} = \cos \frac{\sqrt{3}}{8}, \; SC_{1} = \sin \frac{\sqrt{3}}{8}, \; CS_{2} = \cos \frac{3 \sqrt{3}}{8} \; \; \text{and} \; \; SC_{2} = \sin \frac{3 \sqrt{3}}{8}. $$

One can find the exact solution of this system of linear equations by using the Gaussian elimination method.
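In floating-point arithmetic, the same system can be assembled and solved directly; the following Python sketch, with the entries transcribed from the matrix above, is a convenient check on a hand computation.

```python
import numpy as np

S1, S2 = np.exp(1/4), np.exp(-1/8)
S3, S4 = np.exp(3/4), np.exp(-3/8)
CS1, SC1 = np.cos(np.sqrt(3)/8), np.sin(np.sqrt(3)/8)
CS2, SC2 = np.cos(3*np.sqrt(3)/8), np.sin(3*np.sqrt(3)/8)
r3 = np.sqrt(3)

A = np.array([
    [1/32, -S1, -S2*CS1,                -S2*SC1,                 0,     0],
    [1/4,  -S1,  S2*(r3*SC1 + CS1)/2,   -S2*(r3*CS1 - SC1)/2,    0,     0],
    [1,    -S1, -S2*(r3*SC1 - CS1)/2,    S2*(r3*CS1 + SC1)/2,    0,     0],
    [0,     S3,  S4*CS2,                 S4*SC2,                 15/32, -1],
    [0,     S3, -S4*(r3*SC2 + CS2)/2,    S4*(r3*CS2 - SC2)/2,    1/4,   0],
    [0,     S3,  S4*(r3*SC2 - CS2)/2,    S4*(-r3*CS2 - SC2)/2,  -1,     0],
])
b = np.array([1.0, 0.0, 0.0, -1.0, 0.0, 0.0])
a = np.linalg.solve(A, b)   # the constants a_1, ..., a_6 of (74)
```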

For various values of \(h\), the system of third-order boundary value problems defined by (71) and (72) was solved using the numerical method developed in this section. A detailed comparison is given in Table 3.

Table 3 Observed maximum errors \(\| {\mathbf{e}} \|\) for the problem (71)-(72)

From Table 3, it is clear that the quartic spline method gives better results than the cubic and quintic spline methods developed earlier for solving systems of third-order boundary value problems.

For more details regarding the solution of various classes of obstacle boundary value problems using the penalty technique, see [3,4,5, 114, 150] and the references therein. In recent years, homotopy (analysis) perturbation methods, Adomian decomposition, Laplace transformation and variational iteration techniques have been used to find analytical solutions of fractional unilateral and obstacle boundary value problems.

8 General Equilibrium Problems

In this section, we introduce and consider a class of equilibrium problems known as general equilibrium problems. It is known that equilibrium problems [19, 142] include variational and complementarity problems as special cases. We note that the projection technique and its variant forms, including the Wiener-Hopf equations, cannot be extended to equilibrium problems, since it is not possible to find the projection of the bifunction \(F (.,.)\). Noor [94, 95, 121] used the auxiliary principle technique to analyze some iterative methods for equilibrium problems. Here we study a class of equilibrium problems involving an arbitrary function, which is called the general equilibrium problem. We show that the auxiliary principle technique can be used to suggest and analyze some iterative methods for solving general equilibrium problems. We also study the convergence analysis of these iterative methods and discuss some special cases.

For a given nonlinear bifunction \(F(.,.) : H\times H \longrightarrow R\) and a nonlinear operator \(g : H \longrightarrow H \), we consider the problem of finding \(u \in H, g(u) \in K \) such that

$$\begin{aligned} F(u,g(v)) \geq 0, \quad \forall g(v) \in K, \end{aligned}$$
(75)

which is known as the general equilibrium problem.

We now discuss some special cases of the general equilibrium problem (75).

(I). For \(g \equiv I \), the identity operator, problem (75) is equivalent to finding \(u \in K \) such that

$$\begin{aligned} F(u,v ) \geq 0, \quad \forall v \in K, \end{aligned}$$
(76)

which is called the equilibrium problem, that was introduced and studied by Blum and Oettli [19]. For the recent applications and development, see [102, 109, 153] and the references therein.

(II). If \(F(u,g(v)) = \langle Tu, \eta (g(v),g(u)) \rangle \) and the set \(K_{g\eta } \) is an invex set in \(H \), then problem (75) is equivalent to finding \(u \in K_{g\eta } \) such that

$$\begin{aligned} \langle Tu, \eta (g(v),g(u)) \rangle \geq 0, \quad \forall g(v) \in K_{g\eta }. \end{aligned}$$
(77)

The inequality of type (77) is known as the general variational-like inequality, which arises as a minimum of general preinvex functions on the general invex set \(K_{g\eta } \).

(III). We note that, for \(F(u,g(v)) \equiv \langle Tu, g(v)-g(u) \rangle \), problem (75) reduces to problem (9), that is, finding \(u \in H, g(u) \in K \) such that

$$\begin{aligned} \langle Tu,g(v)-g(u) \rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$

which is exactly the general variational inequality (9). Thus we conclude that general equilibrium problems (75) are quite general and unifying.

We now use the auxiliary principle technique as developed in Sect. 6 to suggest and analyze some iterative methods for solving general equilibrium problems (75).

For a given \(u \in H, g(u) \in K \) satisfying (75), consider the auxiliary equilibrium problem of finding \(w \in H, g(w) \in K \) such that

$$\begin{aligned} \rho F(u,g(v) ) + \langle g(w)-g(u),g(v)-g(w) \rangle \geq 0, \quad \forall g(v) \in K. \end{aligned}$$
(78)

Obviously, if \(w = u \), then \(w \) is a solution of the general equilibrium problem (75). This fact allows us to suggest the following iterative method for solving (75).

Algorithm 28

For a given \(u_{0} \in H \), compute the approximate solution \(u_{n+1} \) by the iterative scheme:

$$\begin{aligned} \rho F(w_{n},g(v)) + \langle g(u_{n+1})-g(w_{n}),g(v)-g(u_{n+1}) \rangle \geq 0, \quad \forall g(v) \in K. \end{aligned}$$
(79)
$$\begin{aligned} \beta F(u_{n},g(v) ) + \langle g(w_{n})-g(u_{n}),g(v)-g(w_{n}) \rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$
(80)

where \(\rho > 0 \) and \(\beta > 0 \) are constants.

Algorithm 28 is called the predictor-corrector method for solving general equilibrium problem (75).

For \(g = I \), where \(I \) is the identity operator, Algorithm 28 reduces to:

Algorithm 29

For a given \(u_{0} \in H \), compute the approximate solution \(u_{n+1} \) by the iterative schemes

$$\begin{aligned} \rho F(w_{n}, v) + &\langle u_{n+1}-w_{n}, v-u_{n+1} \rangle \geq 0, \quad \forall v \in K, \\ \beta F(u_{n}, v) + & \langle w_{n}-u_{n}, v-w_{n} \rangle \geq 0, \quad \forall v \in K. \end{aligned}$$

Algorithm 29 is also a predictor-corrector method for solving equilibrium problem and appears to be original.

If \(F(u,g(v)) = \langle Tu, g(v)-g(u) \rangle \), then Algorithm 28 becomes:

Algorithm 30

For a given \(u_{0} \in H \), compute the approximate solution \(u_{n+1} \) by the iterative scheme

$$\begin{aligned} \langle \rho Tw_{n} + & g(u_{n+1})-g(w_{n}),g(v)-g(u_{n+1}) \rangle \geq 0, \quad \forall g(v) \in K, \\ \langle \beta Tu_{n} + & g(w_{n})-g(u_{n}),g(v)-g(w_{n}) \rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$

which is a two-step method for solving general variational inequalities (9).

In brief, for suitable and appropriate choice of the functions \(F(.,.) \) and the operators \(T, g \), one can obtain various algorithms developed in the previous sections.

For the convergence analysis of Algorithm 28, we need the following concepts.

Definition 10

The bifunction \(F(.,.): H \times H \longrightarrow R \) is said to be:

(i). \(g\)-monotone, if

$$\begin{aligned} F(u,g(v) ) + F(v,g(u)) \leq 0, \quad \forall u,v \in H. \end{aligned}$$

(ii). \(g\)-pseudomonotone, if

$$\begin{aligned} F(u,g(v) ) \leq 0, \quad \text{implies } \quad F(v,g(u)) \leq 0, \quad \forall u,v \in H. \end{aligned}$$

(iii). \(g\)-partially relaxed strongly monotone, if there exists a constant \(\alpha > 0 \) such that

$$\begin{aligned} F(u,g(v)) + F(v,g(z)) \leq \alpha \|g(z)-g(u)\|^{2}, \quad \forall u,v,z \in H. \end{aligned}$$

Note that for \(u = z \), \(g\)-partially relaxed strongly monotonicity reduces to \(g\)-monotonicity of \(F(.,.)\).

For \(g = I \), Definition 10 coincides with the standard definitions of monotonicity and pseudomonotonicity of the bifunction \(F(.,.) \).

We now consider the convergence analysis of Algorithm 28.

Theorem 8

Let \(\bar{u} \in H \) be a solution of (75) and let \(u_{n+1} \) be an approximate solution obtained from Algorithm 28. If the bifunction \(F(.,.) \) is \(g\)-partially relaxed strongly monotone with constant \(\alpha > 0 \), then

$$\begin{aligned} \|g(\bar{u})-g(u_{n+1})\|^{2} \leq & \|g(\bar{u})-g(w_{n}) \|^{2} -(1-2 \alpha \rho ) \|g(w_{n})-g(u_{n+1})\|^{2} \end{aligned}$$
(81)
$$\begin{aligned} \|g(\bar{u})-g(w_{n})\|^{2} \leq & \|g(\bar{u})-g(u_{n}) \|^{2} -(1-2\alpha \beta ) \|g(w_{n})-g(u_{n})\|^{2}. \end{aligned}$$
(82)

Proof

Let \(\bar{u} \in H: g(\bar{u}) \in K \) be a solution of (75). Then

$$\begin{aligned} \rho F(\bar{u}, g(v) ) \geq & 0, \quad \forall g(v) \in K. \end{aligned}$$
(83)
$$\begin{aligned} \beta F(\bar{u},g(v)) \geq & 0, \quad \forall g(v) \in K, \end{aligned}$$
(84)

where \(\rho > 0 \) and \(\beta > 0 \) are constants.

Now taking \(v= \bar{u} \) in (79) and \(v = u_{n+1} \) in (83), we have

$$\begin{aligned} \rho F(\bar{u}, g(u_{n+1})) \geq 0 \end{aligned}$$
(85)

and

$$\begin{aligned} \rho F(w_{n},g(\bar{u})) + \langle g(u_{n+1})- g(w_{n}),g(\bar{u})-g(u_{n+1}) \rangle \geq 0. \end{aligned}$$
(86)

Adding (85) and (86), we have

$$\begin{aligned} \langle g(u_{n+1})-g(w_{n}),g(\bar{u})-g(u_{n+1})\rangle \geq & -\rho \{F(w_{n},g(\bar{u}))+ F(\bar{u},g(u_{n+1}))\} \\ \geq & -\alpha \rho \|g(u_{n+1})-g(w_{n})\|^{2}, \end{aligned}$$
(87)

where we have used the fact that \(F(.,.) \) is \(g\)-partially relaxed strongly monotone with constant \(\alpha > 0 \). Using the inequality

$$ 2\langle a,b \rangle = \|a+b\|^{2}-\|a\|^{2}-\|b\|^{2}, \quad \forall a,b \in H, $$

we obtain

$$\begin{aligned} 2\langle g(u_{n+1})-g(w_{n}),g(\bar{u})- g(u_{n+1}) \rangle = & \|g( \bar{u})-g(w_{n})\|^{2} - \|g(\bar{u})-g(u_{n+1})\|^{2} \\ & -\|g(u_{n+1})-g(w_{n})\|^{2}. \end{aligned}$$
(88)

Combining (87) and (88), we have

$$\begin{aligned} \|g(\bar{u})-g(u_{n+1})\|^{2} \leq \|g(\bar{u})-g(w_{n})\|^{2} -(1-2 \rho \alpha )\|g(w_{n})-g(u_{n+1})\|^{2}, \end{aligned}$$
(89)

the required (81).

Taking \(v = \bar{u} \) in (80) and \(v = w_{n} \) in (84), we obtain

$$\begin{aligned} \beta F(\bar{u},g(w_{n}) ) \geq 0 \end{aligned}$$
(90)

and

$$\begin{aligned} \beta F(u_{n},g(\bar{u})) + \langle g(w_{n})-g(u_{n}), g(\bar{u})-g(w_{n}) \rangle \geq 0. \end{aligned}$$
(91)

Adding (90), (91) and rearranging the terms, we have

$$\begin{aligned} \langle g(w_{n})-g(u_{n}), g(\bar{u})-g(w_{n}) \rangle \geq -\alpha \beta \|g(u_{n})-g(w_{n})\|^{2}, \end{aligned}$$
(92)

since \(F(.,.) \) is \(g\)-partially relaxed strongly monotone with constant \(\alpha > 0 \).

Consequently, from (92), we have

$$\begin{aligned} \|g(\bar{u})-g(w_{n})\|^{2} \leq \|g(\bar{u})-g(u_{n})\|^{2} - (1-2 \alpha \beta )\|g(u_{n})-g(w_{n}) \|^{2} , \end{aligned}$$

the required (82). □

Theorem 9

Let \(H\) be a finite dimensional space and let \(0 < \rho < \frac{1}{2\alpha }\) and

\(0 < \beta < \frac{1}{2\alpha }\). If \(\bar{u} \in H :g(\bar{u}) \in K\) is a solution of (75) and \(u_{n+1}\) is an approximate solution obtained from Algorithm 28, then

$$ \lim _{n \longrightarrow \infty }u_{n} = \bar{u}. $$

Proof

Its proof is very much similar to that of Noor [122]. □

We again use the auxiliary principle technique to suggest an inertial proximal method for solving the general equilibrium problem (75). It is noted that the inertial proximal method includes the proximal method as a special case.

For given \(u, \bar{u} \in H \) with \(g(u), g(\bar{u}) \in K \), consider the auxiliary general equilibrium problem of finding \(w \in H, g(w) \in K \) such that

$$\begin{aligned} \rho F(w,g(v)) + \langle g(w)-g(u)-\alpha _{n} (g(u)-g(\bar{u})),g(v)-g(w) \rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$
(93)

where \(\rho > 0 \) and \(\alpha _{n} > 0 \) are constants.

It is clear that if \(w = u \), then \(w\) is a solution of the general equilibrium problem (75). This fact enables us to suggest an iterative method for solving (75) as:

Algorithm 31

For a given \(u_{0} \in H \), compute the approximate solution \(u_{n+1} \) by the iterative scheme

$$\begin{aligned} &\rho F(u_{n+1},g(v)) + \langle g(u_{n+1})-g(u_{n}) \\ &-\alpha _{n} (g(u_{n})-g(u_{n-1})),g(v)-g(u_{n+1}) \rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$

where \(\rho > 0 \) and \(\alpha _{n} > 0 \) are constants.

Algorithm 31 is called the inertial proximal point method. For \(\alpha _{n} =0 \), Algorithm 31 reduces to:

Algorithm 32

For a given \(u_{0} \in H \), find the approximate solution \(u_{n+1} \) by the iterative scheme

$$\begin{aligned} \rho F(u_{n+1},g(v)) + \langle g(u_{n+1})-g(u_{n}),g(v)-g(u_{n+1}) \rangle \geq 0, \quad \forall g(v) \in K, \end{aligned}$$

which is known as the proximal method and appears to be a new one. Note that for \(g \equiv I \), the identity operator, one can obtain an inertial proximal method for solving equilibrium problems (76). In a similar way, using the technique of Noor [122], one can suggest and analyze several new inertial type methods for solving general equilibrium problems. It is a challenging problem to compare the efficiency of these methods with other techniques for solving general equilibrium problems.

9 General Variational-Like Inequalities

It is well known that the minimum of the (non) differentiable preinvex functions on the invex set can be characterized by a class of variational inequalities, called variational-like inequalities. For the applications and numerical methods of variational-like inequalities, see [18, 93, 95, 106] and the references therein. In this section, we introduce the general variational-like inequalities with respect to an arbitrary function. Due to the structure of the general variational-like inequalities, the projection method and its variant forms cannot be used to study the existence of a solution. This implies that the variational-like inequalities are not equivalent to the projection (resolvent) fixed-point problems. We use the auxiliary principle technique to suggest and analyze some implicit and explicit iterative methods for solving variational-like inequalities. We also show that the general variational-like inequalities are equivalent to the optimization problems, which can be used to study the associated optimal control problem. Such problems have not been studied for general variational-like inequalities and this is another direction for future research.

We recall some known basic concepts and results.

Let \(F:K_{\eta } \rightarrow R\) be a continuous function and let \(\eta (.,.) :K_{\eta }\times K_{\eta } \rightarrow R\) be an arbitrary continuous bifunction. Let \(g(.)\) be a non-negative function.

Definition 11

[14] The set \(K_{\eta }\) in \(H\) is said to be invex set with respect to an arbitrary bifunction \(\eta (\cdot ,\cdot )\), if

$$\begin{aligned} u+t\eta (v,u)\in K_{\eta },\quad \quad \forall u,v\in K_{\eta }, t\in [0,1]. \end{aligned}$$

The invex set \(K_{\eta }\) is also called \(\eta \)-connected set. Note that the invex set with \(\eta (v,u)=v-u\) is a convex set, but the converse is not true.

In the sequel, \(K_{\eta }\) is a nonempty closed invex set in \(H\) with respect to the bifunction \(\eta (\cdot ,\cdot )\), unless otherwise specified.

Definition 12

[14] The set \(K_{g\eta }\) in \(H\) is said to be a general invex set with respect to an arbitrary bifunction \(\eta (\cdot ,\cdot )\) and the function \(g \), if

$$\begin{aligned} g(u)+t\eta (g(v),g(u))\in K_{g\eta },\quad \quad \forall g(u),g(v)\in K_{g \eta }, t\in [0,1]. \end{aligned}$$

The invex set \(K_{g\eta }\) is also called a \(g\eta \)-connected set. Note that the general invex set with \(\eta (g(v),g(u))=g(v)-g(u) \) is a general convex set, but the converse is not true. See Youness [197].

We now present the concept of the general preinvex function.

Definition 13

Let \(K_{g\eta } \subseteq H\) be a general invex set with respect to \(\eta (.,.): K_{g\eta } \times K_{g\eta } \longrightarrow H\) and \(g: H \longrightarrow H \). A function \(F: K_{g\eta } \longrightarrow R \) is said to be a general preinvex function, if

$$\begin{aligned} F(g(u)+t\eta (g(v),g(u))) \leq (1-t)F(g(u)) + t F(g(v)), \\ \quad \forall u,v \in H: g(u),g(v) \in K_{g\eta }, t \in [0,1]. \end{aligned}$$

Note that for \(g \equiv I\), the general preinvex functions are called the preinvex functions. For \(\eta (v,u) = g(v)-g(u)\), general preinvex functions are known as general convex functions. Every convex function is a general convex function and every general convex function is a general preinvex function, but the converse is not true, see [11, 43].

In the sequel, we assume that the set \(K_{g\eta } \) is a general invex set with respect to the functions \(\eta (.,.) : K_{g\eta } \times K_{g\eta } \longrightarrow H, g: K_{g \eta } \longrightarrow H \) , unless otherwise specified.

Definition 14

The function \(F \) is said to be general semi preinvex, if

$$\begin{aligned} F(g(u)+t\eta (g(v),g(u))) \leq (1-t)F(u)+tF(v), \\ \quad \forall u,v \in H: g(u),g(v) \in K_{g\eta }, \quad t\in [0,1]. \end{aligned}$$

For \(g \equiv I\), and \(t= 1\), we have

$$\begin{aligned} F(u+\eta (v,u)) \leq F(v), \quad \forall u,v \in K_{g\eta }. \end{aligned}$$

Definition 15

The function \(F \) is called general quasi preinvex, if

$$\begin{aligned} F(g(u)+t \eta (g(v),g(u))) \leq \max \{F(g(u)),F(g(v))\}, \\ \quad \forall u,v \in H: g(u),g(v) \in K_{g\eta }, \quad t \in [0,1]. \end{aligned}$$

The function \(F\) is called strictly general quasi preinvex, if strict inequality holds for all \(g(u),g(v) \in K_{g\eta }, g(u) \neq g(v)\). The function \(F\) is said to be general quasi preconcave, if and only if, \(-F\) is general quasi preinvex. A function which is both general quasi preinvex and general quasi preconcave is called the general quasimonotone function.

Definition 16

The function \(F \) is said to be a general logarithmic preinvex on the general invex set \(K_{g\eta }\) with respect to the bifunction \(\eta (.,.) \) and the function \(g\), if

$$\begin{aligned} F(g(u)+t\eta (g(v),g(u))) \leq (F(g(u)))^{1-t}(F(g(v)))^{t}, \\ \quad \forall u,v \in H; g(u),g(v) \in K_{g\eta }, \quad t \in [0,1], \end{aligned}$$

where \(F(.) > 0\).

Clearly for \(t = 1 \), and \(g = I \), we have

$$\begin{aligned} F(u + \eta (v,u)) \leq F(v), \quad \forall u,v \in K_{g\eta }. \end{aligned}$$

It follows that:

general logarithmic preinvexity ⟹ general preinvexity ⟹ general quasi preinvexity.

For appropriate and suitable choice of the operators and spaces, one can obtain several classes of generalized preinvexity.

In this section, we prove that the minimum of a differentiable general preinvex function on the general invex sets can be characterized by a class of variational-like inequalities, which is called the general variational-like inequality.

Theorem 10

Let \(F\) be a differentiable general preinvex function. Then \(u \in H: g(u) \in K_{g\eta } \) is a minimum of \(F\) on \(K_{g\eta }\), if and only if, \(u \in H: g(u)\in K_{g\eta }\) satisfies

$$\begin{aligned} \langle F'(g(u)), \eta (g(v),g(u)) \rangle \geq 0, \quad \forall v \in H: g(v) \in K_{g\eta }, \end{aligned}$$
(94)

where \(F'\) is the Frechet derivative of \(F\) at \(g(u) \in K_{g\eta }\).

Proof

Let \(u\in H:g(u) \in K_{g\eta } \) be a minimum of \(F \). Then

$$\begin{aligned} F(g(u)) \leq F(g(v)), \quad \forall v\in H: g(v) \in K_{g\eta }. \end{aligned}$$
(95)

Since the set \(K_{g\eta } \) is a general invex set, so \(\forall g(u), g(v) \in K_{g\eta } \), and \(t \in [0,1] \),

$$ g(v_{t})= g(u) +t\eta (g(v),g(u)) \in K_{g\eta }. $$

Setting \(g(v) = g(v_{t}) \) in (95), we have

$$\begin{aligned} F(g(u)) \leq F(g(u)+ t\eta (g(v),g(u))). \end{aligned}$$

Dividing the above inequality by \(t\) and taking the limit as \(t \longrightarrow 0 \), we have

$$\begin{aligned} \langle F^{\prime }(g(u)),\eta (g(v),g(u)) \rangle \geq 0, \quad \forall v\in H: g(v) \in K_{g\eta }, \end{aligned}$$

which is the required (94).

Conversely, let \(u \in H:g(u) \in K_{g\eta } \) satisfy the inequality (94). Then, using the fact that the function \(F \) is a general preinvex function, we have:

$$\begin{aligned} F(g(u)+t\eta (g(v),g(u)))- F(g(u)) \leq t\{F(g(v))-F(g(u))\}, \quad \forall g(u), g(v) \in K_{g\eta }. \end{aligned}$$

Dividing the above inequality by \(t\) and letting \(t \longrightarrow 0 \), we have

$$\begin{aligned} F(g(v))-F(g(u)) \geq & \langle F^{\prime }(g(u)),\eta (g(v),g(u)) \rangle \geq 0, \quad \text{using (94) } \end{aligned}$$

which implies that

$$\begin{aligned} F(g(u)) \leq F(g(v)), \end{aligned}$$

which shows that \(u \in H: g(u)\in K_{g\eta } \) is a minimum of the general preinvex function on the general invex set \(K_{g\eta }\) in \(H \). □

Inequalities of the type (94) are called the general variational-like inequalities. For \(g \equiv I\), where \(I\) is the identity operator, Theorem 10 is mainly due to Noor [93]. From Theorem 10 it follows that the general variational-like inequalities (94) arise naturally in connection with the minimum of general preinvex functions over general invex sets. In many applications, problems like (94) do not arise as a result of minimization. This fact motivated us to consider a problem of finding a solution of a more general variational-like inequality of which (94) is a special case.

Given a (nonlinear) operator \(T: H \longrightarrow H\), and \(\eta :K_{g\eta }\times K_{g\eta } \longrightarrow H\), where \(K_{g\eta } \) is a nonempty general invex set in \(H \), we consider the problem of finding \(u \in H: g(u)\in K_{g\eta }\) such that

$$\begin{aligned} \langle Tu, \eta (g(v),g(u)) \rangle \geq 0, \quad \forall v \in H: g(v) \in K_{g \eta }, \end{aligned}$$
(96)

which is known as the general variational-like inequality.

If \(\eta (v,u) = g(v)-g(u) \), then the general invex set \(K_{g\eta }\) becomes a general convex set \(K_{g}\). In this case, problem (96) is equivalent to finding \(u \in H : g(u) \in K_{g} \) such that

$$\begin{aligned} \langle Tu, g(v)-g(u) \rangle \geq 0, \quad \forall v\in H: g(v) \in K_{g}, \end{aligned}$$

which is exactly the general variational inequality (9). For formulation, numerical methods, sensitivity analysis, dynamical system and other aspects of general variational inequalities, see [109, 110, 122, 144] and the references therein.

For suitable and appropriate choice of the operators \(T\), \(\eta \), and the general invex set, one may derive a wide class of known and new variational inequalities as special cases of problem (96). It is clear that general variational-like inequalities provide us with a framework to study a wide class of unrelated problems in a unified setting.

We now use the auxiliary principle technique to suggest and analyze some iterative methods for general variational-like inequalities (96).

For a given \(u \in H; g(u) \in K_{g\eta }\) satisfying (96), consider the problem of finding a solution \(w \in H: g(w) \in K_{g\eta }\) satisfying the auxiliary variational-like inequality

$$\begin{aligned} \langle \rho Tw+E'(g(w))-E'(g(u)), \eta (g(v),g(w)) \rangle \geq 0, \quad \forall v\in H: g(v) \in K_{g\eta }, \end{aligned}$$
(97)

where \(\rho > 0 \) is a constant and \(E^{\prime } \) is the differential of a strongly general preinvex function \(E \). The inequality (97) is called the auxiliary general variational-like inequality.

Note that if \(w =u \), then clearly \(w\) is a solution of the general variational-like inequality (96). This observation enables us to suggest the following algorithm for solving (96).

Algorithm 33

For a given \(u_{0} \in H\), compute the approximate solution \(u_{n+1}\) by the iterative scheme

$$\begin{aligned} \langle \rho Tu_{n+1}+E'(g(u_{n+1}))-E'(g(u_{n})),\eta (g(v),g(u_{n+1}))\rangle \geq 0, \\ \quad \forall v\in H; g(v) \in K_{g\eta }. \end{aligned}$$
(98)

Algorithm 33 is called the proximal point algorithm for solving the general variational-like inequalities (96).

For \(\eta (g(v),g(u))= g(v)-g(u) \), the general preinvex function \(E\) reduces to a general convex function and the general invex set \(K_{g\eta } \) becomes the general convex set \(K_{g}\). Consequently Algorithm 33 reduces to:

Algorithm 34

For a given \(u_{0} \in K_{g}\), compute the approximate solution \(u_{n+1}\) by the iterative scheme

$$\begin{aligned} \langle \rho Tu_{n+1}+E'(g(u_{n+1}))-E'(g(u_{n})), g(v)- g(u_{n+1}) \rangle \geq 0, \quad \forall g(v) \in K_{g}, \end{aligned}$$

which is known as the proximal point algorithm for solving general variational inequalities.

Remark 4

The function

$$ B(g(w),g(u)) = E(g(w))-E(g(u))-\langle E'(g(u)),\eta (g(w),g(u)) \rangle $$

associated with the general preinvex function \(E \) is called the general Bregman function. We note that, if \(\eta (g(w),g(u)) = g(w)-g(u) \), then

$$ B(g(w),g(u))= E(g(w))-E(g(u))-\langle E'(g(u)),g(w)-g(u) \rangle $$

is the well known Bregman function. For the applications of the Bregman function in solving variational inequalities and related optimization problems, see [201].

We now study the convergence analysis of Algorithm 33. For this purpose, we recall the following concepts.

Definition 17

\(\forall u,v,z \in H\), an operator \(T: H \longrightarrow H \) is said, with respect to an arbitrary function \(g: H \longrightarrow H \), to be:

  1. (i).

    general \(g\eta \)-pseudomonotone, if

    $$\begin{aligned} \langle Tu, \eta (g(v),g(u)) \rangle \geq 0 \quad \Longrightarrow \quad \langle Tv, \eta (g(v),g(u)) \rangle \geq 0. \end{aligned}$$
  2. (ii).

    general \(g\eta \)-Lipschitz continuous, if there exists a constant \(\beta > 0 \) such that

$$\begin{aligned} \langle Tu-Tv, \eta (g(u),g(v)) \rangle \leq \beta \|g(u)-g(v)\|^{2}. \end{aligned}$$
  3. (iii).

    general \(g\eta \)-cocoercive, if there exists a constant \(\mu > 0 \) such that

    $$\begin{aligned} \langle Tu-Tv, \eta (g(u),g(v)) \rangle \geq \mu \|T(g(u))-T(g(v))\|^{2}. \end{aligned}$$
  4. (iv).

    general \(g\eta \)-partially relaxed strongly monotone, if there exists a constant \(\alpha > 0 \) such that

$$\begin{aligned} \langle Tu-Tv, \eta (g(z),g(v)) \rangle \geq -\alpha \|g(z)-g(u)\|^{2}. \end{aligned}$$

For \(\eta (g(v),g(u)) = g(v)-g(u) \), Definition 17 reduces to the definitions of general monotonicity, general Lipschitz continuity, general cocoercivity and general partially relaxed strong monotonicity of the operator \(T\). We note that, for \(g(z)=g(u) \), partially relaxed strong monotonicity reduces to monotonicity. One can easily show that general \(g\eta \)-cocoercivity implies general \(g\eta \)-partially relaxed strong monotonicity, but the converse is not true.

Definition 18

A function \(F \) is said to be a strongly general preinvex function on \(K_{g\eta } \) with respect to the function \(\eta (.,.)\) with modulus \(\mu > 0\) and the function \(g \), if, \(\forall u,v \in H: g(u), g(v) \in K_{g\eta } , t \in [0,1] \),

$$\begin{aligned} F(g(u)+t\eta (g(v),g(u)))\leq (1-t)F(g(u))+tF(g(v)) -t(1-t)\mu \| \eta (g(v),g(u))\|^{2}. \end{aligned}$$

We note that a differentiable strongly general preinvex function \(F \) is a strongly general invex function, that is,

$$\begin{aligned} F(g(v))-F(g(u)) \geq \langle F^{\prime }(g(u)), \eta (g(v),g(u)) \rangle + \mu \|\eta (g(v),g(u))\|^{2}, \end{aligned}$$

and the converse also holds under suitable conditions.

Assumption 1

\(\forall u,v,z \in H\), the operator \(\eta :H\times H \longrightarrow H \) and the function \(g \) satisfy the condition

$$\begin{aligned} \eta (g(u),g(v)) = \eta (g(u),g(z)) + \eta (g(z),g(v)). \end{aligned}$$

In particular, taking \(u = v = z \) and then \(v = u \) in Assumption 1, we obtain

$$ \eta (g(u),g(u)) = 0 $$

and

$$ \eta (g(u),g(v)) = - \eta (g(v),g(u)), \quad \forall g(u),g(v) \in H, $$

so that \(\eta (\cdot ,\cdot ) \) is skew-symmetric.

Assumption 1 has been used to study the existence of a solution of general variational-like inequalities.

Theorem 11

Let \(T\) be a general \(\eta \)-pseudomonotone operator. Let \(E\) be a differentiable strongly general preinvex function with modulus \(\beta \) and let Assumption 1 be satisfied. Then the approximate solution \(u_{n+1}\) obtained from Algorithm 33 converges to a solution of (96).

Proof

Since the function \(E\) is strongly general preinvex, the solution \(u_{n+1}\) is unique. Let \(u \in H: g(u) \in K_{g\eta } \) be a solution of the general variational-like inequality (96). Then

$$\begin{aligned} \langle Tu, \eta (g(v),g(u)) \rangle \geq 0, \quad \forall v\in H; g(v) \in K_{g\eta }, \end{aligned}$$

which implies that

$$\begin{aligned} \langle Tv, \eta (g(v),g(u)) \rangle \geq 0, \end{aligned}$$
(99)

since \(T\) is general \(\eta \)-pseudomonotone. Taking \(v = u_{n+1} \) in (99), we have

$$\begin{aligned} \langle Tu_{n+1},\eta (g(u_{n+1}), g(u) )\rangle \geq 0. \end{aligned}$$
(100)

We consider the Bregman function

$$\begin{aligned} B(g(u),g(w)) = & E(g(u))-E(g(w))-\langle E'(g(w)),\eta (g(u),g(w)) \rangle \\ \geq & \frac{\beta }{2}\|\eta (g(u),g(w))\|^{2}, \end{aligned}$$
(101)

where we have used the strong general preinvexity of \(E \).

Now

$$\begin{aligned} B(g(u),g(u_{n}))-B(g(u),g(u_{n+1})) = & E(g(u_{n+1}))-E(g(u_{n})) \\ &+ \langle E'(g(u_{n+1})),\eta (g(u),g(u_{n+1})) \rangle \\ & - \langle E'(g(u_{n})), \eta (g(u),g(u_{n})) \rangle \end{aligned}$$
(102)

Using Assumption 1, we have

$$\begin{aligned} \eta (g(u),g(u_{n})) = \eta (g(u),g(u_{n+1}))+\eta (g(u_{n+1}),g(u_{n}) ). \end{aligned}$$
(103)

Combining (100), (101), (102) and (103), we have

$$\begin{aligned} &B(g(u),g(u_{n}))-B(g(u),g(u_{n+1})) \\ = & E(g(u_{n+1}))-E(g(u_{n}))-\langle E'(g(u_{n})),\eta (g(u_{n+1}),g(u_{n})) \rangle \\ & + \langle E'(g(u_{n+1}))-E'(g(u_{n})), \eta (g(u),g(u_{n+1})) \rangle \\ \geq & \beta \|\eta (g(u_{n+1}),g(u_{n}))\|^{2} + \langle E'(g(u_{n+1}))-E'(g(u_{n})), \eta (g(u),g(u_{n+1})) \rangle \\ \geq & \beta \|\eta (g(u_{n+1}),g(u_{n}))\|^{2}+\langle \rho Tu_{n+1}, \eta (g(u_{n+1}),g(u)) \rangle \\ \geq & \beta \|\eta (g(u_{n+1}),g(u_{n}))\|^{2} , \quad \text{using (100).} \end{aligned}$$

If \(g(u_{n+1}) = g(u_{n}) \), then clearly \(g(u_{n})\) is a solution of the general variational-like inequality (96). Otherwise, it follows that \(B(g(u),g(u_{n}))-B(g(u),g(u_{n+1})) \) is nonnegative, and we must have

$$\begin{aligned} \lim _{n \rightarrow \infty }\|\eta (g(u_{n+1}),g(u_{n}))\| = 0. \end{aligned}$$

Now using the technique of Zhu and Marcotte [201], one can easily show that the entire sequence \(\{u_{n}\}\) converges to the cluster point \(\bar{u}\) satisfying the variational-like inequality (96). □

To implement the proximal method, one has to calculate the solution implicitly, which is in itself a difficult problem. We again use the auxiliary principle technique to suggest another iterative method, the convergence of which requires only the general \(g\eta \)-partially relaxed strong monotonicity.

For a given \(u \in H: g(u) \in K_{g\eta } \), satisfying (96), find a solution \(w \in H: g(w) \in K_{g\eta } \) such that

$$\begin{aligned} \langle \rho Tu +E^{\prime }(g(w))-E^{\prime }(g(u)), \eta (g(v),g(w)) \rangle \geq 0, \quad \forall v \in H: g(v) \in K_{g\eta }, \end{aligned}$$
(104)

which is called the auxiliary general variational-like inequality, where \(E(u) \) is a differentiable strongly general preinvex function. It is clear that, if \(w = u \), then \(w\) is a solution of the general variational-like inequality (96). This fact allows us to suggest and analyze the following iterative method for solving (96).

Algorithm 35

For a given \(u_{0} \in H \), compute the approximate solution \(u_{n+1} \) by the iterative scheme

$$\begin{aligned} \langle \rho Tu_{n}+ E^{\prime }(g(u_{n+1}))-E^{\prime }(g(u_{n})), \eta (g(v), g(u_{n+1})) \rangle \geq 0,\\ \quad \forall v \in H: g(v) \in K_{g\eta }. \end{aligned}$$
(105)

Note that for \(\eta (g(v),g(u) ) = g(v)-g(u) \), Algorithm 35 reduces to:

Algorithm 36

For a given \(u_{0} \in H \), find the approximate solution \(u_{n+1} \) by the iterative scheme

$$\begin{aligned} \langle \rho Tu_{n} +E^{\prime }(g(u_{n+1}))-E^{\prime }(g(u_{n})), g(v)- g(u_{n+1})\rangle \geq 0, \quad \forall v \in H: g(v) \in K_{g}. \end{aligned}$$

Algorithm 36 for solving general variational inequalities appears to be novel. In a similar way, one can obtain a number of new and known iterative methods for solving various classes of variational inequalities and complementarity problems.

We now study the convergence analysis of Algorithm 35. The analysis is in the spirit of Theorem 11. We only give the main points.

Theorem 12

Let \(T\) be a partially relaxed strongly general \(g\eta \)-monotone operator with a constant \(\alpha > 0 \). Let \(E\) be a differentiable strongly general preinvex function with modulus \(\beta \) and let Assumption 1 be satisfied. If \(0 < \rho < \frac{\beta }{\alpha }\), then the approximate solution \(u_{n+1}\) obtained from Algorithm 35 converges to a solution of (96).

Proof

Since the function \(E\) is strongly general preinvex, the approximate solution \(u_{n+1}\) is unique. Let \(u \in H: g(u) \in K_{g\eta }\) be a solution of the general variational-like inequality (96). Then

$$\begin{aligned} \langle Tu, \eta (g(v),g(u)) \rangle \geq 0, \quad \forall v\in H: g(v) \in K_{g\eta }. \end{aligned}$$

Taking \(v = u_{n+1} \) in the above inequality, we have

$$\begin{aligned} \langle \rho Tu,\eta (g(u_{n+1}),g(u) ) \rangle \geq 0. \end{aligned}$$
(106)

Combining (102), (105) and (106), we have

$$\begin{aligned} &B(g(u),g(u_{n}))-B(g(u),g(u_{n+1})) \\ = & E(g(u_{n+1}))-E(g(u_{n}))-\langle E'(g(u_{n})),\eta (g(u_{n+1}),g(u_{n})) \rangle \\ & + \langle E'(g(u_{n+1}))-E'(g(u_{n})), \eta (g(u),g(u_{n+1})) \rangle \\ \geq & \beta \|\eta (g(u_{n+1}),g(u_{n}))\|^{2} + \langle E'(g(u_{n+1}))-E'(g(u_{n})), \eta (g(u),g(u_{n+1})) \rangle \\ \geq & \beta \|\eta (g(u_{n+1}),g(u_{n}))\|^{2}+\langle \rho Tu_{n}, \eta (g(u_{n+1}),g(u)) \rangle \\ \geq & \beta \|\eta (g(u_{n+1}),g(u_{n}))\|^{2}+\langle \rho Tu_{n}- \rho Tu, \eta (g(u_{n+1}),g(u))\rangle \\ \geq & ( \beta -\rho \alpha )\|\eta (g(u_{n+1}),g(u_{n}))\|^{2} . \end{aligned}$$

If \(g(u_{n+1}) = g(u_{n}) \), then clearly \(g(u_{n})\) is a solution of the general variational-like inequality (96). Otherwise, the assumption \(0 < \rho < \frac{\beta }{\alpha } \) implies that the sequence

$$B(g(u),g(u_{n}))-B(g(u), g(u_{n+1}))$$

is nonnegative, and we must have

$$\begin{aligned} \lim _{n \longrightarrow \infty }\|\eta (g(u_{n+1}),g(u_{n})) \| = 0. \end{aligned}$$

Now by using the technique of Zhu and Marcotte [201], it can be shown that the entire sequence \(\{u_{n}\} \) converges to the cluster point \(\bar{u} \) satisfying the variational-like inequality (96). □

We now show that the solution of the auxiliary general variational-like inequality (104) is the minimum of the functional \(I[g(w)] \) on the general invex set \(K_{g\eta }\), where

$$\begin{aligned} I[g(w)] =& E(g(w))-E(g(u))- \langle E^{\prime }(g(u))-\rho Tu, \eta (g(w),g(u)) \rangle \\ =& B(g(w),g(u))+\rho \langle Tu, \eta (g(w),g(u)) \rangle , \end{aligned}$$
(107)

is known as the auxiliary energy functional associated with the auxiliary general variational-like inequality (104), where \(B(g(w),g(u)) \) is a general Bregman function. We now prove that the minimum of the functional \(I[w] \), defined by (107), can be characterized by the general variational-like inequality (104).

Theorem 13

Let \(E \) be a differentiable general preinvex function. If Assumption 1 holds and \(\eta (.,.)\) is prelinear in the first argument, then the minimum of \(I[w]\), defined by (107), can be characterized by the auxiliary general variational-like inequality (104).

Proof

Let \(w \in H: g(w) \in K_{g\eta }\) be the minimum of \(I[w]\) on \(K_{g\eta } \). Then

$$\begin{aligned} I[g(w)] \leq I[g(v)], \quad \forall v \in H: g(v) \in K_{g\eta }. \end{aligned}$$

Since \(K_{g\eta } \) is a general invex set, so for all

$$g(w),g(v) \in K_{g\eta }, t \in [0,1], g(v_{t}) = g(w)+t\eta (g(v),g(w)) \in K_{g\eta }.$$

Replacing \(g(v) \) by \(g(v_{t}) \) in the above inequality, we have

$$\begin{aligned} I[g(w)] \leq I[g(w)+t \eta (g(v),g(w))]. \end{aligned}$$
(108)

Since \(\eta (.,.)\) is prelinear in the first argument, so, from (107) and (108), we have

$$\begin{aligned} E(g(w))-E(g(u)) -&\langle E'(g(u)) - \rho Tu,\eta (g(w),g(u)) \rangle \\ \leq &E(g(v_{t}))-E(g(u)) -\langle E'(g(u))-\rho Tu,\eta (g(v_{t}),g(u)) \rangle \\ \leq & E(g(v_{t}))-E(g(u))-(1-t)\langle E'(g(u))-\rho Tu,\eta (g(w),g(u)) \rangle \\ &-t\langle E'(g(u))-\rho Tu, \eta (g(v),g(u)) \rangle , \end{aligned}$$

which implies that

$$\begin{aligned} E(g(w)+t\eta (g(v),g(w)))-E(g(w)) \geq & t\langle E'(g(u))-\rho Tu, \eta (g(v),g(u))\rangle \\ &-t\langle E'(g(u))-\rho Tu,\eta (g(w),g(u))\rangle . \end{aligned}$$
(109)

Now using Assumption 1, we have

$$\begin{aligned} \langle E'(g(u)),\eta (g(v),g(u))\rangle =& \langle E'(g(u)),\eta (g(v),g(w)) \rangle \\ &+\langle E'(g(u)), \eta (g(w),g(u))\rangle \end{aligned}$$
(110)
$$\begin{aligned} \langle Tu, \eta (g(v),g(u))\rangle = &\langle Tu,\eta (g(v),g(w)) \rangle + \langle Tu, \eta (g(w),g(u))\rangle . \end{aligned}$$
(111)

From (108), (109), (110) and (111), we obtain

$$\begin{aligned} E(g(w)+t\eta (g(v),g(w)))-E(g(w)) \geq t\langle E'(g(u))-\rho Tu, \eta (g(v),g(w))\rangle . \end{aligned}$$

Dividing both sides by \(t\) and letting \(t \rightarrow 0\), we have

$$\begin{aligned} \langle E'(g(w)), \eta (g(v),g(w))\rangle \geq \langle E'(g(u))-\rho Tu, \eta (g(v),g(w))\rangle , \end{aligned}$$

the required inequality (104).

Conversely, let \(w \in H: g(w) \in K_{g\eta } \) be a solution of (104). Then

$$\begin{aligned} I[g(w)]-I[g(v)] = & E(g(w))-E(g(v))-\langle E'(g(u))-\rho Tu,\eta (g(w),g(u)) \rangle \\ & + \langle E'(g(u))-\rho Tu,\eta (g(v),g(u))\rangle \\ \leq & -\langle E'(g(w)),\eta (g(v),g(w))\rangle \\ &+ \langle E'(g(u))-\rho Tu, \eta (g(v),g(u))- \eta (g(w),g(u))\rangle \\ = & -\langle E'(g(w)),\eta (g(v),g(w))\rangle + \langle E'(g(u))-\rho Tu, \eta (g(v),g(w))\rangle \\ \leq & 0, \end{aligned}$$

where we have used the general invexity of \(E \), Assumption 1 and (104).

Thus it follows that \(I[g(w)] \leq I[g(v)]\), showing that \(g(w) \in K_{g\eta } \) is the minimum of the functional \(I[g(w)]\) on \(K_{g\eta } \), which is the required result. □

10 Higher Order Strongly General Convex Functions

We would like to point out that strongly convex functions were introduced and studied by Polyak [159]. Such functions play an important role in optimization theory and related areas. For example, Karmardian [57] used strongly convex functions to discuss the unique existence of a solution of nonlinear complementarity problems. Strongly convex functions have also played an important role in the convergence analysis of iterative methods for solving variational inequalities and equilibrium problems, cf. Zhu and Marcotte [201]. Lin and Fukushima [65] introduced the concept of higher order strongly convex functions and used it in the study of mathematical programs with equilibrium constraints. These mathematical programs with equilibrium constraints are defined by a parametric variational inequality or complementarity system and play a crucial role in many fields such as engineering design, economic equilibrium and multilevel games. These facts and observations inspired Mohsen et al [75] to consider higher order strongly convex functions involving an arbitrary bifunction. Noor and Noor [139, 140] have introduced the higher order strongly general convex functions, which include the higher order strongly convex functions [65, 75] as special cases.

In this section, we introduce concepts of higher order strongly general convex functions. Several new concepts of monotonicity are introduced. Our results represent a refinement and improvement of the results of Lin and Fukushima [65]. Higher order strongly general convex functions are used to obtain new characterizations of uniformly convex Banach spaces by the parallelogram laws. It is worth mentioning that the parallelogram laws have been discussed in [21,22,23,24, 190].

We now define the concept of higher order strongly general convex functions, which have been investigated in [139, 140].

Definition 19

A function \(F\) on the convex set \(K\) is said to be higher order strongly general convex with respect to the function \(g\), if there exists a constant \(\mu >0 \) such that

$$\begin{aligned} &F(g(u)+t(g(v)-g(u)))\leq (1-t)F(g(u))+tF(g(v)) \\ &-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p}, \quad \forall g(u),g(v) \in K_{g}, t\in [0,1], p>1. \end{aligned}$$

A function \(F\) is said to be higher order strongly general concave, if and only if, \(-F\) is higher order strongly general convex.

If \(t=\frac{1}{2} \), then

$$\begin{aligned} F\bigg(\frac{g(u)+g(v)}{2}\bigg)\leq \frac{F(g(u))+F(g(v))}{2}- \mu \frac{1}{2^{p}}\| g(v)-g(u) \|^{p}, \quad \forall g(u),g(v)\in K_{g}, p>1. \end{aligned}$$

Such a function \(F\) is called a higher order strongly general \(J\)-convex function.

We now discuss some special cases.

I. If \(p=2\), then the higher order strongly general convex function becomes a strongly general convex function, that is,

$$\begin{aligned} F(g(u)+t(g(v)-g(u))) \leq & (1-t)F(g(u))+tF(g(v))-\mu t(1-t)\| g(v)-g(u) \|^{2}, \\ & \forall g(u),g(v)\in K_{g}, t\in [0,1]. \end{aligned}$$

For properties of strongly convex functions in variational inequalities and equilibrium problems, cf. Noor [95, 122, 129].

II. If \(g = I\), then Definition 19 reduces to:

Definition 20

A function \(F\) on the convex set \(K\) is said to be higher order strongly convex, if there exists a constant \(\mu >0 \), such that

$$\begin{aligned} F(u+t(v-u))\leq (1-t)F(u)+tF(v)-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|v-u\|^{p}, p>1, \\ \forall u,v\in K, t\in [0,1], \end{aligned}$$

which appears to be original.

For appropriate and suitable choice of the function \(g\) and \(p \), one can obtain various new and known classes of strongly convex functions. This shows that the higher order strongly convex function involving the function \(g \) is quite general and a unifying one. One can explore the applications of the higher order strongly general convex function, which constitutes another direction for further research.

Definition 21

A function \(F\) on the convex set \(K\) is said to be higher order strongly affine general convex with respect to the function \(g \), if there exists a constant \(\mu >0 \), such that

$$\begin{aligned} F(g(u) +&t(g(v)-g(u)))= (1-t)F(g(u))+tF(g(v)) \\ &-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p}, \forall g(u),g(v) \in K_{g}, t\in [0,1], p>1. \end{aligned}$$

Note that if a function is both higher order strongly convex and higher order strongly concave, then it is a higher order strongly affine convex function.

Definition 22

A function \(F\) is said to satisfy the higher order strongly quadratic equation with respect to the function \(g\), if there exists a constant \(\mu >0 \), such that

$$\begin{aligned} F\bigg(\frac{g(u)+g(v)}{2}\bigg) =& \frac{F(g(u))+F(g(v))}{2} \\ &-\mu \frac{1}{2^{p}}\|g(v)-g(u)\|^{p}, \quad \forall g(u),g(v)\in K_{g}, p>1. \end{aligned}$$

This function \(F\) is also called a higher order strongly affine general \(J\)-convex function.

Definition 23

A function \(F\) on the convex set \(K\) is said to be higher order strongly quasi convex, if there exists a constant \(\mu >0\) such that

$$\begin{aligned} F(g(u) +&t(g(v)-g(u)))\leq \max \{F(g(u)),F(g(v))\} \\ &-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p}, \forall g(u),g(v) \in K_{g}, t\in [0,1], p>1. \end{aligned}$$

Definition 24

A function \(F\) on the convex set \(K\) is said to be higher order strongly log-convex, if there exists a constant \(\mu >0\) such that

$$\begin{aligned} F(g(u) +&t(g(v)-g(u)))\leq (F(g(u)))^{1-t}(F(g(v)))^{t} \\ -&\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p}, \forall g(u),g(v) \in K_{g}, t\in [0,1], p>1, \end{aligned}$$

where \(F(\cdot )>0\).

From the above definitions, we have

$$\begin{aligned} F(g(u) +&t(g(v)-g(u)))\leq (F(g(u)))^{1-t}(F(g(v)))^{t} \\ &-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p} \\ \leq & (1-t)F(g(u))+tF(g(v))-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u) \|^{p} \\ \leq & \max \{F(g(u)),F(g(v))\}-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u) \|^{p}, \quad p>1. \end{aligned}$$

This shows that every higher order strongly general log-convex function is a higher order strongly general convex function and every higher order strongly general convex function is a higher order strongly general quasi-convex function. However, the converse is not true.

For an appropriate and suitable choice of the arbitrary function \(g \), one can obtain several new and known classes of strongly convex functions and their variant forms as special cases of higher order strongly general convex functions. This shows that the class of higher order strongly general convex functions is quite broad and unifying.

Definition 25

An operator \(T:K\rightarrow H\) is said to be:

(i) higher order strongly monotone, if and only if, there exists a constant \(\alpha >0\) such that

$$\begin{aligned} \langle Tu-Tv, g(u)-g(v)\rangle \geq \alpha \|g(u)-g(v)\|^{p}, \quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$

(ii) higher order strongly pseudomonotone, if and only if, there exists a constant \(\nu > 0\) such that

$$\begin{aligned} &\langle Tu,g(v)-g(u)\rangle +\nu \|g(v)-g(u)\|^{p} \geq 0 \\ & \Rightarrow \\ &\langle Tv,g(v)-g(u)\rangle \geq 0,\forall g(u),g(v)\in K_{g}. \end{aligned}$$

(iii) higher order strongly relaxed pseudomonotone, if and only if, there exists a constant \(\mu > 0\) such that

$$\begin{aligned} &\langle Tu, g(v)-g(u)\rangle \geq 0 \\ & \Rightarrow \\ & -\langle Tv, g(u)-g(v)\rangle \geq \mu \|g(v)-g(u)\|^{p}, \quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$

Definition 26

A differentiable function \(F\) on the convex set \(K_{g}\) is said to be higher order strongly pseudoconvex function, if and only if there exists a constant \(\mu >0\) such that

$$\begin{aligned} \langle F'(g(u)),g(v)-g(u)\rangle +\mu \|g(v)-g(u)\|^{p}\geq 0 \Rightarrow F(g(v))\geq F(g(u)), \quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$

We now consider some basic properties of higher order strongly general convex functions.

Theorem 14

Let \(F\) be a differentiable function on the convex set \(K_{g} \). Then the function \(F\) is a higher order strongly general convex function, if and only if

$$\begin{aligned} F(g(v))-F(g(u)) \geq & \langle F^{\prime }(g(u)), g(v)-g(u) \rangle + \mu \|g(v)-g(u)\|^{p} , \\ & \forall g(v),g(u)\in K_{g}. \end{aligned}$$
(112)

Proof

Let \(F\) be a higher order strongly general convex function on the convex set \(K_{g}\). Then

$$\begin{aligned} &F(g(u)+t(g(v)-g(u)))\leq (1-t)F(g(u))+tF(g(v)) \\ &-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u) \|^{p},\quad \forall g(u),g(v) \in K_{g}, t\in [0,1], \end{aligned}$$

which can be written as

$$\begin{aligned} F(g(v))-F(g(u))\geq \bigg\{\frac{F(g(u)+t(g(v)-g(u)))-F(g(u))}{t}\bigg\} \\ + \mu \{t^{p-1}(1-t)+(1-t)^{p}\}\|g(v)-g(u)\|^{p}. \end{aligned}$$

Taking the limit in the above inequality as \(t\rightarrow 0 \), we have

$$\begin{aligned} F(g(v))-F(g(u))\geq \langle F'(g(u)),g(v)-g(u)\rangle + \mu \|g(v)-g(u) \|^{p},\quad \forall g(u),g(v)\in K_{g}, \end{aligned}$$

which is (112), the required result.

Conversely, let (112) hold true. Then, \(\forall g(u),g(v)\in K_{g}, t\in [0,1]\),

\(g(v_{t})=g(u)+t(g(v)-g(u))\in K_{g} \), we have

$$\begin{aligned} F(g(v)) -&F(g(v_{t})) \geq \langle F'(g(v_{t})),g(v)-g(v_{t}) \rangle +\mu \|g(v)-g(v_{t})\|^{p} \\ =&(1-t)\langle F'(g(v_{t})),g(v)-g(u)\rangle + \mu (1-t)^{p}\|g(v)-g(u) \|^{p}. \end{aligned}$$
(113)

In a similar way, we have

$$\begin{aligned} F(g(u))-F(g(v_{t})) \geq & \langle F'(g(v_{t})),g(u)-g(v_{t}) \rangle +\mu \|g(u)-g(v_{t})\|^{p} \\ =&-t\langle F'(g(v_{t})),g(v)-g(u)\rangle + \mu t^{p}\|g(v)-g(u)\|^{p}. \end{aligned}$$
(114)

Multiplying (113) by \(t\) and (114) by \((1-t)\) and adding the resultants, we have

$$\begin{aligned} F(g(u) +&t(g(v)-g(u)))\leq (1-t)F(g(u))+tF(g(v)) \\ &- \mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p}, \forall g(u),g(v) \in K_{g}, \end{aligned}$$

showing that \(F\) is a higher order strongly general convex function. □

Theorem 15

Let \(F\) be a differentiable higher order strongly convex function on the convex set \(K_{g} \). Then

$$\begin{aligned} \langle F'(g(u))-F'(g(v)),g(u)-g(v)\rangle \geq 2\mu \|g(v)-g(u)\|^{p}, \quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$
(115)

Proof

Let \(F\) be a higher order strongly general convex function on the convex set \(K_{g}\). Then, from Theorem 14, we have

$$\begin{aligned} F(g(v))-F(g(u))\geq \langle F'(g(u)),g(v)-g(u)\rangle + \mu \|g(v)-g(u) \|^{p}, \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(116)

Interchanging \(u\) and \(v\) in (116), we have

$$\begin{aligned} F(g(u))-F(g(v))\geq \langle F'(g(v)),g(u)-g(v)\rangle + \mu \|g(u)-g(v) \|^{p}, \quad \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(117)

Adding (116) and (117), we have

$$\begin{aligned} \langle F'(g(u))-F'(g(v)),g(u)-g(v)\rangle \geq 2\mu \|g(v)-g(u)\|^{p}, \quad \forall g(u),g(v)\in K_{g}, \end{aligned}$$
(118)

which shows that \(F'(.)\) is a higher order strongly general monotone operator. □

We remark that the converse of Theorem 15 is not true. In this direction, we have the following result.

Theorem 16

If the differential operator \(F^{\prime }(.) \) of a differentiable higher order strongly general convex function \(F\) is a higher order strongly monotone operator, then

$$\begin{aligned} F(g(v))-F(g(u)) \geq \langle F'(g(u)),g(v)-g(u)\rangle +2\mu \frac{1}{p}\|g(v)-g(u)\|^{p}, \quad \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(119)

Proof

Let \(F'\) be a higher order strongly monotone operator. Then, from (118), we have

$$\begin{aligned} &\langle F'(g(v)),g(u)-g(v)\rangle \leq \langle F'(g(u)),g(u)-g(v) \rangle -2\mu \|g(v)-g(u)\|^{p}, \\ &\quad \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(120)

Since \(K_{g}\) is a general convex set,

$$\forall g(u),g(v) \in K_{g},\ t\in [0,1],\ g(v_{t})=g(u)+t(g(v)-g(u))\in K_{g}.$$

Setting \(g(v)= g(v_{t})\) in (120), we have

$$\begin{aligned} \langle F'(g(v_{t})),g(u)-g(v_{t})\rangle \leq & \langle F'(g(u)), g(u)-g(v_{t}) \rangle -2\mu \|g(v_{t})-g(u)\|^{p} \\ =&-t \langle F'(g(u)),g(v)-g(u)\rangle -2\mu t^{p} \|g(v)-g(u)\|^{p}, \end{aligned}$$

which implies that

$$\begin{aligned} \langle F'(g(v_{t})),g(v)-g(u)\rangle \geq \langle F'(g(u)),g(v)-g(u) \rangle +2 \mu t^{p-1} \|g(v)-g(u)\|^{p}. \end{aligned}$$
(121)

Consider the auxiliary function

$$\begin{aligned} \zeta (t)=F(g(u)+t(g(v)-g(u))), \quad \forall g(u),g(v) \in K_{g}, \end{aligned}$$
(122)

from which, we have

$$\begin{aligned} \zeta (1)= F(g(v)), \quad \zeta (0)= F(g(u)). \end{aligned}$$

Then, from (122), we have

$$\begin{aligned} \zeta '(t)=\langle F'(g(v_{t})), g(v)-g(u)\rangle \geq \langle F'(g(u)),g(v)-g(u) \rangle +2\mu t^{p-1} \|g(v)-g(u)\|^{p}. \end{aligned}$$
(123)

Integrating (123) between 0 and 1, we have

$$\begin{aligned} \zeta (1)-\zeta (0 ) =& \int ^{1}_{0}\zeta ^{\prime }(t)dt \\ & \geq \langle F'(g(u)),g(v)-g(u)\rangle +2\mu \frac{1}{p}\|g(v)-g(u) \|^{p}. \end{aligned}$$

Thus it follows that

$$\begin{aligned} F(g(v))-F(g(u)) \geq \langle F'(g(u)),g(v)-g(u)\rangle +2\mu \frac{1}{p}\|g(v)-g(u)\|^{p},\forall g(u),g(v)\in K_{g}, \end{aligned}$$

which is the required (119). □

We note that, if \(p=2 \), then Theorem 16 can be viewed as the converse of Theorem 15.

We now give a necessary condition for higher order strongly general pseudoconvex functions.

Theorem 17

Let \(F'(.)\) be a higher order strongly relaxed pseudomonotone operator. Then \(F\) is a higher order strongly pseudoconvex function.

Proof

Let \(F'(.)\) be a higher order strongly relaxed general pseudomonotone operator and let

$$\begin{aligned} \langle F'(g(u)),g(v)-g(u)\rangle \geq 0, \forall g(u),g(v)\in K_{g}, \end{aligned}$$

which implies that

$$\begin{aligned} \langle F'(g(v)),g(v)-g(u) \rangle \geq \mu \|g(v)-g(u)\|^{p}, \quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$
(124)

Since \(K_{g}\) is a convex set,

$$\forall g(u),g(v)\in K_{g},\ t \in [0,1],\ g(v_{t})=g(u)+t(g(v)-g(u))\in K_{g}.$$

Setting \(g(v)=g( v_{t})\) in (124), we have

$$\begin{aligned} \langle F'(g(v_{t})), g(v)-g(u)\rangle \geq \mu t^{p-1}\|g(v)-g(u)\|^{p}. \end{aligned}$$
(125)

Consider the auxiliary function

$$\begin{aligned} \zeta (t)=F(g(u)+t(g(v)-g(u)))= F(g(v_{t})),\quad \forall g(u),g(v)\in K_{g}, t\in [0,1], \end{aligned}$$
(126)

which is differentiable, since \(F\) is a differentiable function. Then, using (126), we obtain that

$$\begin{aligned} \zeta '(t)=\langle F'(g(v_{t})),g(v)-g(u)\rangle \geq \mu t^{p-1}\|g(v)-g(u) \|^{p}. \end{aligned}$$

Integrating the above relation from 0 to 1, we have

$$\begin{aligned} \zeta (1)-\zeta (0)= \int ^{1}_{0} \zeta ^{\prime }(t)dt \geq \frac{\mu }{p}\|g(v)-g(u)\|^{p}, \end{aligned}$$

that is,

$$\begin{aligned} F(g(v))-F(g(u))\geq \frac{\mu }{p}\|g(v)-g(u)\|^{p}, \quad \forall g(u),g(v) \in K_{g}, \end{aligned}$$

showing that \(F\) is a higher order strongly general pseudoconvex function. □

Definition 27

A function \(F\) is said to be sharply higher order strongly general pseudoconvex, if there exists a constant \(\mu >0\) such that

$$\begin{aligned} &\langle F'(g(u)),g(v)-g(u)\rangle \geq 0 \\ &\Rightarrow \\ &F(g(v))\geq F(g(v)+t(g(u)-g(v))) +\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u) \|^{p}, \\ &\quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$

Theorem 18

Let \(F\) be a sharply higher order strongly general pseudoconvex function on the general convex set \(K_{g}\) with a constant \(\mu >0\). Then

$$\begin{aligned} \langle F'(g(v)),g(v)-g(u)\rangle \geq \mu \|g(v)-g(u)\|^{p}, \forall g(u),g(v)\in K_{g}. \end{aligned}$$

Proof

Let \(F\) be a sharply higher order strongly general pseudoconvex function on the general convex set \(K_{g}\). Then

$$\begin{aligned} F(g(v))\geq F(g(v)+t(g(u)-g(v)))+ \mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u) \|^{p}, \\ \forall g(u),g(v)\in K_{g}, t\in [0,1], \end{aligned}$$

from which, we have

$$\begin{aligned} \left\{\frac{F(g(v))-F(g(v)+t(g(u)-g(v)))}{t}\right\}-\mu \{t^{p-1}(1-t)+(1-t)^{p} \}\|g(v)-g(u)\|^{p}\geq 0. \end{aligned}$$

Taking the limit in the above inequality, as \(t \rightarrow 0\), we have

$$\begin{aligned} \langle F'(g(v)),g(v)-g(u)\rangle \geq \mu \|g(v)-g(u)\|^{p}, \forall g(u),g(v)\in K_{g}, \end{aligned}$$

which is the required result. □

Definition 28

A function \(F\) is said to be a pseudoconvex function with respect to a strictly positive bifunction \(B(.,.)\), if

$$\begin{aligned} &F(v) < F(u) \\ &\Rightarrow \\ &F(u + t(v - u)) < F(u) + t(t - 1)B(v, u), \quad \forall u,v\in K, t\in [0,1]. \end{aligned}$$

Theorem 19

If the function \(F\) is a higher order strongly convex function such that

$$F(g(v)) < F(g(u)),$$

then the function \(F\) is higher order strongly pseudoconvex.

Proof

Since \(F(g(v))< F(g(u))\) and \(F\) is a higher order strongly convex function, then

\(\forall g(u),g(v)\in K_{g}, t\in [0,1] \), we have

$$\begin{aligned} &F(g(u) + t(g(v)- g(u)))\\ &\quad \leq F(g(u)) + t(F(g(v)) - F(g(u)))-\mu \{t^{p}(1-t)+t(1-t)^{p} \}\|g(v)-g(u)\|^{p} \\ &\quad < F(g(u)) + t(1 - t)(F(g(v)) - F(g(u)))-\mu \{t^{p}(1-t)+t(1-t)^{p} \}\|g(v)-g(u)\|^{p} \\ &\quad = F(g(u)) + t(t - 1)(F(g(u)) - F(g(v)))-\mu \{t^{p}(1-t)+t(1-t)^{p} \}\|g(v)-g(u)\|^{p} \\ &\quad \leq F(g(u)) + t(t - 1)B(g(u), g(v)) -\mu \{t^{p}(1-t)+t(1-t)^{p}\} \|g(v)-g(u)\|^{p}, \end{aligned}$$

where \(B(g(u), g(v)) = F(g(u)) - F(g(v)) > 0\), which is the required result. □

We now discuss the optimality for the differentiable strongly general convex functions, which is the main motivation of our next result.

Theorem 20

Let \(F \) be a differentiable higher order strongly general convex function with modulus \(\mu > 0 \). If \(u \in H: g(u)\in K_{g} \) is the minimum of the function \(F \), then

$$\begin{aligned} F(g(v))-F(g(u)) \geq \mu \|g(v)-g(u)\|^{p}, \quad \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(127)

Proof

Let \(u \in H: g(u)\in K_{g} \) be a minimum of the function \(F \). Then

$$\begin{aligned} F(g(u))\leq F(g(v)), \quad \forall v \in H: g(v) \in K_{g}. \end{aligned}$$
(128)

Since \(K_{g} \) is a general convex set, so, \(\forall g(u),g(v)\in K_{g}, t\in [0,1] \),

$$ g(v_{t})= (1-t)g(u)+ tg(v) \in K_{g}. $$

Setting \(g(v)= g(v_{t}) \) in (128), we have

$$\begin{aligned} 0 \leq \lim _{t \rightarrow 0}\{ \frac{F(g(u)+t(g(v)-g(u)))-F(g(u))}{t}\} = \langle F^{\prime }(g(u)), g(v)- g(u)\rangle . \end{aligned}$$
(129)

Since \(F \) is a differentiable higher order strongly general convex function, it follows that

$$\begin{aligned} F(g(u)+t(g(v)-g(u))) \leq & F(g(u))+ t(F(g(v))-F(g(u))) \\ &-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p}, \forall g(u),g(v) \in K_{g}, \end{aligned}$$

from which, using (129), we have

$$\begin{aligned} F(g(v))-F(g(u)) \geq & \lim _{t \rightarrow 0}\{ \frac{F(g(u)+t(g(v)-g(u)))-F(g(u))}{t}\} + \mu \|g(v)-g(u)\|^{p} \\ =& \langle F^{\prime }(g(u)), g(v)-g(u) \rangle + \mu \|g(v)-g(u)\|^{p}, \end{aligned}$$

which is the required result (127). □

Remark

We would like to mention that, if \(u \in H: g(u) \in K_{g} \) satisfies the inequality

$$\begin{aligned} \langle F^{\prime }(g(u)), g(v)-g(u) \rangle + \mu \|g(v)-g(u)\|^{p} \geq 0, \quad \forall g(v) \in K_{g}, \end{aligned}$$
(130)

then \(u \in K_{g} \) is the minimum of the function \(F \). The inequality of the type (130) is called the higher order general variational inequality.

Theorem 21

Let \(f\) be a higher order strongly affine general convex function. Then \(F\) is a higher order strongly general convex function, if and only if, \(H= F-f \) is a general convex function.

Proof

Let \(f \) be a higher order strongly affine general convex function. Then

$$\begin{aligned} f((1-t)g(u)+tg(v)) =& (1-t)f(g(u))+tf(g(v)) \\ &- \mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p}, \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(131)

From the higher order strongly general convexity of \(F \), we have

$$\begin{aligned} F((1-t)g(u)+tg(v)) \leq & (1-t)F(g(u))+tF(g(v)) \\ &-\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p}, \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(132)

From (131) and (132), we have

$$\begin{aligned} F((1-t)g(u)+tg(v))-f((1-t)g(u)+tg(v)) \leq (1-t)(F(g(u))-f(g(u))) \\ +t (F(g(v))-f(g(v))), \end{aligned}$$
(133)

from which it follows that

$$\begin{aligned} H((1-t)g(u)+tg(v)) =&F((1-t)g(u)+tg(v))-f((1-t)g(u)+tg(v)) \\ \leq & (1-t)F(g(u))+tF(g(v))-(1-t)f(g(u))-tf(g(v)) \\ = & (1-t)(F(g(u))-f(g(u)))+t (F(g(v))-f(g(v))), \end{aligned}$$

which shows that \(H= F-f \) is a general convex function. The converse implication is obvious. □

It is worth mentioning that the higher order strongly convex function is also a higher order strongly Wright general convex function. From the definition, we have

$$\begin{aligned} F(g(u) +&t(g(v)-g(u)))+ F(g(v)+t(g(u)-g(v)))\leq F(g(u))+F(g(v)) \\ -&2\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p},\forall g(u),g(v) \in K_{g}, t\in [0,1], \end{aligned}$$

which is called the higher order strongly Wright general convex function. One can study the properties and applications of higher order strongly Wright general convex functions in optimization and operations research.

Bynum [21] and Chen et al [22,23,24,25] have studied properties and applications of the parallelogram laws for Banach spaces. Xi [190] obtained new characteristics of \(p\)-uniform convexity and \(q\)-uniform smoothness of a Banach space via the functionals \(\|.\|^{p} \) and \(\|.\|^{q} \), respectively. These results can be obtained from the concepts of higher order strongly general convex (concave) functions, which can be viewed as novel applications. Setting \(F(u)= \|u\|^{p}\) in Definition 19, we have

$$\begin{aligned} \|g(u) +&t(g(v)-g(u))\|^{p} \leq (1-t)\|g(u)\|^{p}+t\|g(v)\|^{p} \\ -&\mu \{t^{p}(1-t)+t(1-t)^{p}\}\|g(v)-g(u)\|^{p}, \forall g(u),g(v) \in K_{g}, t\in [0,1]. \end{aligned}$$
(134)

Setting \(t=\frac{1}{2} \) in (134), we have

$$\begin{aligned} \left\|\frac{g(u)+g(v)}{2}\right\|^{p}+\mu \frac{1}{2^{p}}\|g(v)-g(u)\|^{p}\leq \frac{1}{2}\|g(u)\|^{p}+\frac{1}{2}\|g(v)\|^{p},\forall g(u),g(v)\in K_{g}, \end{aligned}$$
(135)

which implies that

$$\begin{aligned} \|g(u)+g(v)\|^{p}+\mu \|g(v)-g(u)\|^{p}\leq 2^{p-1}\{\|g(u)\|^{p}+\|g(v) \|^{p}\},\quad \forall g(u),g(v)\in K_{g}, \end{aligned}$$
(136)

which is known as the lower parallelogram law for \(l^{p}\)-spaces. In a similar way, one can obtain the upper parallelogram law as

$$\begin{aligned} \|g(u)+g(v)\|^{p}+\mu \|g(v)-g(u)\|^{p}\geq 2^{p-1}\{\|g(u)\|^{p}+\|g(v) \|^{p}\},\forall g(u),g(v)\in K_{g}. \end{aligned}$$
(137)

Similarly, from Definition 21, we have

$$\begin{aligned} \|g(u)+g(v)\|^{p}+\mu \|g(v)-g(u)\|^{p} = 2^{p-1}\{\|g(u)\|^{p}+\|g(v) \|^{p}\},\forall g(u),g(v)\in K_{g}, \end{aligned}$$
(138)

which is known as the parallelogram law for \(l^{p}\)-spaces. For the applications of the parallelogram laws for Banach spaces in prediction theory and applied sciences, see [21,22,23,24] and the references therein.

In this section, we have introduced and studied a new class of convex functions, which is called the higher order strongly convex function. We have improved the results of Lin and Fukushima [65]. It is shown that several new classes of strongly convex functions can be obtained as special cases of these higher order strongly general convex functions. We have studied the basic properties of these functions. We have also shown that one can derive the parallelogram laws in Banach spaces, which are applied to prediction theory and stochastic analysis. These parallelogram laws can be used to characterize the \(p\)-uniform convexity and \(q\)-uniform smoothness of Banach spaces. The interested reader may explore the applications and other properties for the higher order strongly convex functions in various fields of pure and applied sciences. This is an interesting direction for future research.

11 Higher Order General Variational Inequalities

In this section, we consider a more general variational inequality of which (130) is a special case.

For two given operators \(T,g \), we consider the problem of finding \(u\in H: g(u) \in K \), for a constant \(\mu > 0 \), such that

$$\begin{aligned} \langle Tu, g(v)-g(u) \rangle + \mu \|g(v)-g(u)\|^{p} \geq 0, \forall g(v)\in K, p>1, \end{aligned}$$
(139)

which is called the higher order general variational inequality, see [140].

We note that, if \(\mu =0\), then (139) is equivalent to finding \(u \in K \), such that

$$\begin{aligned} \langle Tu, g(v)-g(u) \rangle \geq 0, \forall g(v)\in K, \end{aligned}$$
(140)

which is exactly the general variational inequality (9), introduced and studied by Noor [87] in 2008.

For suitable and appropriate choice of the parameters \(\mu \) and \(p\), one can obtain several new and known classes of variational inequalities (cf. [87, 88, 90, 110, 122, 123] and the references therein). We note that the projection method and its variant forms cannot be used to study the higher order strongly general variational inequalities (139) due to their inherent structure. These facts motivated us to consider the auxiliary principle technique, which is mainly due to Glowinski et al [47] and Lions and Stampacchia [66], as developed by Noor [122]. We use this technique to suggest some iterative methods for solving the higher order general variational inequalities (139).

For given \(u\in K \) satisfying (139), consider the problem of finding \(w\in K \), such that

$$\begin{aligned} \langle \rho Tw, g(v) - g(w)\rangle + \langle w - u, v - w\rangle + \nu \|g(v) - g(w)\|^{p} \geq 0, \forall g(v)\in K, \end{aligned}$$
(141)

where \(\rho > 0 \) is a parameter. The problem (141) is called the auxiliary higher order strongly general variational inequality. It is clear that the relation (141) defines a mapping connecting the problems (139) and (141). We note that, if \(w= u \), then \(w\) constitutes a solution of the problem (139). This simple observation enables us to suggest an iterative method for solving (139).

Algorithm 37

For a given \(u_{0}\in K\), find the approximate solution \(u_{n+1} \) by the scheme

$$\begin{aligned} \langle \rho T u_{n+1}, g(v) - g(u_{n+1})\rangle + \langle u_{n+1} - u_{n}, v- u_{n+1}\rangle \\ + \nu \|g(v) - g(u_{n+1})\|^{p} \geq 0, \quad \forall g(v)\in K. \end{aligned}$$
(142)

Algorithm 37 is known as an implicit method. Such methods have been studied extensively for various classes of variational inequalities; see [11, 18, 19] and the references therein. If \(\nu =0 \), then Algorithm 37 reduces to:

Algorithm 38

For given \(u_{0}\in K \), find the approximate solution \(u_{n+1} \) by the scheme

$$\begin{aligned} \langle \rho Tu_{n+1}, g(v) - g(u_{n+1})\rangle + \langle u_{n+1}-u_{n}, v - u_{n+1} \rangle \geq 0,\\ \forall g(v) \in K, \end{aligned}$$

which appears to be novel even for solving the general variational inequalities (9).

To study the convergence analysis of Algorithm 37, we need the following concept.

Definition 29

The operator \(T\) is said to be pseudo \(g\)-monotone with respect to

$$\mu \|g(v)-g(u)\|^{p} ,$$

if

$$\begin{aligned} &\langle \rho Tu, g(v) - g(u) \rangle + \mu \|g(v) - g(u)\|^{p} \geq 0, \forall g(v) \in K, p>1, \\ &\Longrightarrow \\ & \langle \rho Tv, g(v) - g(u) \rangle - \mu \|g(u) - g(v)\|^{p} \geq 0, \forall g(v) \in K. \end{aligned}$$

If \(\mu =0 \), then Definition 29 reduces to:

Definition 30

The operator \(T\) is said to be pseudo \(g\)-monotone, if

$$\begin{aligned} &\langle \rho Tu, g(v) - g(u) \rangle \geq 0, \forall g(v) \in K \\ &\Longrightarrow \\ & \langle \rho Tv, g(v) - g(u) \rangle \geq 0, \forall g(v) \in K, \end{aligned}$$

which appears to be new.

We now study the convergence analysis of Algorithm 37.

Theorem 22

Let \(u\in K\) be a solution of (139) and \(u_{n+1} \) be the approximate solution obtained from Algorithm 37. If \(T\) is a pseudo \(g\)-monotone operator, then

$$\begin{aligned} \| u_{n+1}-u\|^{2} \leq \|u_{n}-u\|^{2} - \|u_{n+1}-u_{n}\|^{2}. \end{aligned}$$
(143)

Proof

Let \(u \in K \) be a solution of (139). Then

$$\begin{aligned} \langle \rho Tu, g(v)-g(u) \rangle + \mu \|g(v)-g(u)\|^{p} \geq 0, \quad \forall g(v) \in K, \end{aligned}$$

implies that

$$\begin{aligned} \langle \rho Tv, g(v)-g(u) \rangle - \mu \|g(v)-g(u)\|^{p} \geq 0, \quad \forall g(v) \in K. \end{aligned}$$
(144)

Now taking \(v = u_{n+1} \) in (144), we have

$$\begin{aligned} \langle \rho Tu_{n+1}, g(u_{n+1})-g(u)\rangle -\mu \|g(u_{n+1})-g(u)\|^{p} \geq 0. \end{aligned}$$
(145)

Taking \(v = u\) in (142), we have

$$\begin{aligned} \langle \rho T u_{n+1}, g(u) - g(u_{n+1})\rangle + \langle u_{n+1} - u_{n}, u- u_{n+1}\rangle \\ + \nu \|g(u) - g(u_{n+1})\|^{p} \geq 0. \end{aligned}$$
(146)

Combining (145) and (146), we have

$$\begin{aligned} \langle u_{n+1}-u_{n}, u-u_{n+1} \rangle \geq 0. \end{aligned}$$

Using the identity

$$ 2\langle a,b\rangle =\|a+b\|^{2}-\|a\|^{2}-\|b\|^{2}, \forall a,b \in H, $$

we obtain

$$\begin{aligned} \| u_{n+1}-u\|^{2} \leq \|u_{n}- u\|^{2}- \|u_{n+1}-u_{n}\|^{2}, \end{aligned}$$

which is the required result (143). □

Theorem 23

Let the operator \(T\) be pseudo \(g\)-monotone. If \(u_{n+1}\) is the approximate solution obtained from Algorithm 37 and \(u \in K \) is the exact solution of (139), then \(\lim _{n\rightarrow \infty }u_{n}=u \).

Proof

Let \(u\in K \) be a solution of (139). Then, from (143), it follows that the sequence \(\{\|u - u_{n}\|\}\) is nonincreasing and consequently \(\{u_{n}\}\) is bounded. From (143), we have

$$\begin{aligned} \sum ^{\infty }_{n=0} \|u_{n+1}-u_{n}\|^{2} \leq \|u_{0}-u\|^{2}, \end{aligned}$$

from which, it follows that

$$\begin{aligned} \lim _{ n \rightarrow \infty }\|u_{n+1}-u_{n}\|= 0. \end{aligned}$$
(147)

Let \(\hat{u} \) be a cluster point of \(\{u_{n}\}\) and let the subsequence \(\{u_{n_{j}}\} \) of the sequence \(\{u_{n}\}\) converge to \(\hat{u} \in H\). Replacing \(u_{n}\) by \(u_{n_{j}}\) in (142), taking the limit as \(n_{j} \rightarrow \infty \) and using (147), we have

$$\begin{aligned} \langle T\hat{u}, g(v)-g(\hat{u}) \rangle +\mu \|g(v)-g(\hat{u}) \|^{p} \geq 0, \quad \forall g(v) \in K. \end{aligned}$$

This implies that \(\hat{u} \in K \) solves the higher order general variational inequality (139) and

$$\begin{aligned} \|u_{n+1}-\hat{u} \|^{2} \leq \|u_{n}- \hat{u}\|^{2}. \end{aligned}$$

Thus it follows from the above inequality that the sequence \(\{u_{n}\}\) has exactly one cluster point \(\hat{u} \) and

$$ \lim _{n \rightarrow \infty }u_{n} = \hat{u}. $$

 □

In order to implement the implicit Algorithm 37, one uses the predictor-corrector technique. Consequently, Algorithm 37 is equivalent to the following iterative method for solving the higher order strongly general variational inequality (139).

Algorithm 39

For a given \(u_{0} \in K \), find the approximate solution \(u_{n+1} \) by the schemes

$$\begin{aligned} \langle \rho Tu_{n}, g(v) - g(y_{n}) \rangle +& \langle y_{n} - u_{n}, v - y_{n}\rangle + \mu \|g(v)-g(y_{n})\|^{p} \geq 0, \quad \forall g(v) \in K, \\ \langle \rho Ty_{n}, g(v) - g(u_{n+1})\rangle +& \langle u_{n+1}-y_{n}, v - u_{n+1}\rangle + \mu \|g(v) - g(u_{n+1})\|^{p} \geq 0, \quad \forall g(v)\in K. \end{aligned}$$

Algorithm 39 is called the two-step iterative method and appears to be new.

Using the auxiliary principle technique, one can suggest several iterative methods for solving higher order strongly general variational inequalities and related optimization problems. We have only given a glimpse of higher order strongly general variational inequalities. It is an interesting problem to explore the applications of this type of variational inequality in various fields of pure and applied sciences.

12 Strongly Exponentially General Convex Functions

Convexity theory describes a broad spectrum of very interesting developments establishing a link among various fields of mathematics, physics, economics and engineering sciences. The development of convexity theory can be viewed as the simultaneous pursuit of two different lines of research. On the one hand, it is related to integral inequalities. It has been shown that a function is a convex function, if and only if, it satisfies the Hermite-Hadamard type inequality. These inequalities help us to derive the upper and lower bounds of the integrals. On the other hand, the minimum of differentiable convex functions on the convex set can be characterized by variational inequalities, the origin of which can be traced back to the Bernoulli brothers, as well as Euler and Lagrange. Variational inequalities provide us a powerful tool to discuss the behavior of solutions (regarding existence, uniqueness and regularity) to important classes of problems. The theory of variational inequalities also enables us to develop highly efficient powerful new numerical methods to solve nonlinear problems. Recently various extensions and generalizations of convex functions and convex sets have been considered and studied using innovative ideas and techniques. It is known that more accurate inequalities can be obtained using logarithmically convex functions rather than convex functions. Closely related to log-convex functions, the concept of exponentially convex (concave) functions has important applications in information theory, big data analysis, machine learning and statistics. Exponentially convex functions have illustrated significant generalizations of convex functions, the origin of which can be traced back to Bernstein [16]. Avriel [9, 10] introduced the concept of \(r\)-convex functions, from which one can deduce exponentially convex functions. Antczak [2] considered the \((r, p)\) convex functions and discussed their applications in mathematical programming and optimization theory. It is worth mentioning that exponentially convex functions have important applications in information sciences, data mining and statistics, cf. [1, 2, 9, 10, 132,133,134,135,136,137,138, 154] and the references therein.

We would like to point out that general convex functions and exponentially general convex functions are two distinct generalizations of convex functions, which have played a crucial and significant role in the development of various branches of pure and applied sciences. It is natural to unify these concepts. Motivated by these facts and observations, we now introduce a new class of convex functions, called exponentially general convex functions, involving an arbitrary function. We discuss the basic properties of exponentially general convex functions. It has been shown that exponentially general convex (concave) functions enjoy the nice properties which convex functions possess. Several new concepts have been introduced and investigated. We prove that any local minimum of an exponentially general convex function is also a global minimum.

Noor and Noor [132,133,134,135,136,137,138, 154] studied some classes of strongly exponentially convex functions. Inspired by the work of Noor and Noor [138], we introduce some new classes of higher order strongly exponentially convex functions. We establish the relationship between these classes and derive some new results. We have also investigated the optimality conditions for the higher order strongly exponentially convex functions. It is shown that the difference of strongly exponentially convex functions and strongly exponentially affine functions is again an exponentially convex function. The optimality conditions of differentiable exponentially convex functions can be characterized by a class of variational inequalities, called the exponentially general variational inequality, which is itself an interesting problem.

We now define exponentially convex functions, which are mainly due to Noor and Noor [132,133,134,135,136,137,138, 154].

Definition 31

[132,133,134, 138] A function \(F\) is said to be an exponentially convex function, if

$$\begin{aligned} e^{F((1-t)u+tv)} \leq (1-t)e^{F(u)}+ te^{F(v)}, \quad \forall u,v \in K, \quad t\in [0,1]. \end{aligned}$$

It is worth mentioning that Avriel [9, 10] and Antczak [2] introduced the following concept:

Definition 32

[3, 4] A function \(F\) is said to be exponentially convex, if

$$\begin{aligned} F((1-t)a+tb)\leq \log [(1-t)e^{F(a)}+te^{F(b)}], \quad \forall a,b \in K,\quad t\in [0,1]. \end{aligned}$$

Avriel [9, 10] and Antczak [2] discussed the applications of these functions in mathematical programming. We note that Definitions 31 and 32 are equivalent: taking exponentials on both sides of the inequality in Definition 32 yields the inequality in Definition 31. A function \(f\) is called exponentially concave, if \(-f\) is an exponentially convex function.

For applications in communication theory and information theory, cf. Alirezaei and Mathar [1].

Example 5

[1] The error function

$$ erf(x)= \frac{2}{\sqrt{\pi}} \int ^{x}_{0} e^{-t^{2}}dt, $$

becomes an exponentially concave function in the form \(erf(\sqrt{x}), x \geq 0 \), which describes the bit/symbol error probability of communication systems depending on the square root of the underlying signal-to-noise ratio. This shows that exponentially concave functions can play an important role in communication theory and information theory.
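The claim in Example 5 can be probed numerically: with Definition 31, \(F\) is exponentially concave when \(e^{-F}\) is convex, and the sketch below samples this inequality for \(F(x)=erf(\sqrt{x})\) on an assumed grid of test points.

```python
# A numerical probe of Example 5: per Definition 31, F is exponentially
# concave when -F is exponentially convex, i.e. when
# e^{-F((1-t)u+tv)} <= (1-t)e^{-F(u)} + t e^{-F(v)}.  The grid of test
# points below is an assumed choice.
import math

F = lambda x: math.erf(math.sqrt(x))          # F(x) = erf(sqrt(x)), x >= 0

violations = 0
pts = [0.2 * k for k in range(1, 21)]         # u, v in (0, 4]
for u in pts:
    for v in pts:
        for t in (0.1, 0.25, 0.5, 0.75, 0.9):
            lhs = math.exp(-F((1 - t) * u + t * v))
            rhs = (1 - t) * math.exp(-F(u)) + t * math.exp(-F(v))
            if lhs > rhs + 1e-12:
                violations += 1
print("violations of exponential concavity:", violations)   # expected 0
```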

For properties, generalizations and applications of the various classes of exponentially convex functions, cf. [1, 2, 9, 10, 132,133,134,135,136,137,138, 154].

It is clear that exponentially convex functions and general convex functions are two distinct generalizations of convex functions. It is natural to unify these concepts. Motivated by this fact, Noor and Noor [138] introduced some new concepts of exponentially general convex functions. We include these results for the sake of completeness and for the convenience of the interested readers.

Definition 33

A function \(F\) is said to be exponentially general convex with respect to an arbitrary non-negative function \(g \), if

$$\begin{aligned} e^{F((1-t)g(u)+tg(v))}\leq (1-t)e^{F(g(u))}+te^{F(g(v))}, \quad \forall g(u),g(v) \in K_{g}, t\in [0,1]. \end{aligned}$$

Or equivalently

Definition 34

A function \(F\) is said to be an exponentially general convex function with respect to an arbitrary non-negative function \(g \), if

$$\begin{aligned} F((1-t)g(u)+tg(v))\leq \log [(1-t)e^{F(g(u))}+te^{F(g(v))}], \quad \forall g(u), g(v) \in K_{g}, t\in [0,1]. \end{aligned}$$

A function \(f\) is called exponentially general concave, if \(-f\) is an exponentially general convex function.

Definition 35

A function \(F\) is said to be exponentially affine general convex with respect to an arbitrary non-negative function \(g \), if

$$\begin{aligned} e^{F((1-t)g(u)+tg(v))}= (1-t)e^{F(g(u))}+te^{F(g(v))}, \quad \forall g(u),g(v) \in K_{g}, t\in [0,1]. \end{aligned}$$

If \(g= I \), the identity operator, then exponentially general convex functions reduce to exponentially convex functions.

Definition 36

The function \(F\) on the general convex set \(K_{g}\) is said to be exponentially general quasi-convex, if

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))}\leq \max \{e^{F(g(u))},e^{F(g(v))}\}, \quad \forall g(u),g(v)\in K_{g}, t\in [0,1]. \end{aligned}$$

Definition 37

The function \(F\) on the general convex set \(K_{g}\) is said to be exponentially general log-convex, if

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))}\leq (e^{F(g(u))})^{1-t} (e^{F(g(v))})^{t}, \quad \forall g(u),g(v)\in K_{g}, t\in [0,1], \end{aligned}$$

where \(F(\cdot )>0\).

From the above definitions, we have

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))} \leq & (e^{F(g(u))})^{1-t} (e^{F(g(v))})^{t} \\ \leq & (1-t)e^{F(g(u))}+te^{F(g(v))} \\ \leq & \max \{e^{F(g(u))},e^{F(g(v))}\}. \end{aligned}$$

This shows that every exponentially general log-convex function is an exponentially general convex function and every exponentially general convex function is an exponentially general quasi-convex function. However, the converse is not true.
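The failure of the converse can be seen already in one dimension with \(g = I\): the sketch below uses the assumed example \(F(x)=\log (1+x^{2})\), for which \(e^{F(x)}=1+x^{2}\) is convex, so \(F\) is exponentially general convex, while the log-convexity inequality of Definition 37 fails at the sampled points.

```python
# An assumed one-dimensional example (g = I) showing that the converse
# of the chain above fails: F(x) = log(1 + x^2) is exponentially convex
# since e^{F(x)} = 1 + x^2 is convex, yet it is not exponentially
# log-convex in the sense of Definition 37.
import math

F = lambda x: math.log(1.0 + x * x)
u, v, t = 1.0, 3.0, 0.5
mid = (1 - t) * u + t * v

convex_ineq = math.exp(F(mid)) <= (1 - t) * math.exp(F(u)) + t * math.exp(F(v))
log_convex_ineq = math.exp(F(mid)) <= math.exp(F(u)) ** (1 - t) * math.exp(F(v)) ** t
print(convex_ineq, log_convex_ineq)   # prints: True False
```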

Let \(K_{g} =I_{g}=[g(a),g(b)]\) be the interval. We now define the exponentially general convex functions on \(I_{g}\).

Definition 38

Let \(I_{g} =[g(a),g(b)]\). Then \(F\) is exponentially general convex, if and only if,

$$\begin{aligned} \left | \textstyle\begin{array}{c@{\quad }c@{\quad }c} 1&1&1 \\ g(a)& g(x)& g(b) \\ e^{F(g(a))}& e^{F(g(x))}&e^{F(g(b))} \end{array}\displaystyle \right |\geq 0;\quad g(a)\leq g(x)\leq g(b). \end{aligned}$$

One can easily show that the following are equivalent:

  1. \(F\) is an exponentially general convex function.

  2. \(e^{F(g(x))}\leq e^{F(g(a))}+ \frac{e^{F(g(b))}-e^{F(g(a))}}{g(b)-g(a)}(g(x)-g(a))\).

  3. \(\frac{e^{F(g(x))}-e^{F(g(a))}}{g(x)-g(a)}\leq \frac{e^{F(g(b))}-e^{F(g(a))}}{g(b)-g(a)}\).

  4. \((g(x)-g(b))e^{F(g(a))} +(g(b)-g(a))e^{F(g(x))}+(g(a)-g(x))e^{F(g(b))} \geq 0\).

  5. \(\frac{e^{F(g(a))}}{(g(b)-g(a))(g(a)-g(x))}+ \frac{e^{F(g(x))}}{(g(x)-g(b))(g(a)-g(x))}+ \frac{e^{F(g(b))}}{(g(b)-g(a))(g(x)-g(b))}\geq 0\),

where \(g(x)= (1-t)g(a)+tg(b) \in [g(a),g(b)]\).
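As a quick numerical illustration of Definition 38 and condition 2 above, the following sketch evaluates both the \(3\times 3\) determinant and the chord gap for the assumed data \(g(x)=x\) and \(F(x)=\log (1+x^{2})\), so that \(e^{F(g(x))}=1+x^{2}\); the two quantities differ only by the positive factor \(g(b)-g(a)\), so they are nonnegative together.

```python
# A sketch checking that the determinant test of Definition 38 agrees
# with the chord condition 2 above, for the assumed data g(x) = x and
# F(x) = log(1 + x^2), i.e. e^{F(g(x))} = 1 + x^2.
import numpy as np

g = lambda x: x
eF = lambda x: 1.0 + g(x) ** 2          # e^{F(g(x))}

a, b = -1.0, 2.0
for x in np.linspace(a, b, 7):
    det = np.linalg.det(np.array([
        [1.0,   1.0,   1.0],
        [g(a),  g(x),  g(b)],
        [eF(a), eF(x), eF(b)],
    ]))
    chord_gap = eF(a) + (eF(b) - eF(a)) * (g(x) - g(a)) / (g(b) - g(a)) - eF(x)
    # det = (g(b) - g(a)) * chord_gap, so both are nonnegative together
    print(f"x = {x:5.2f}   det = {det: .4f}   chord gap = {chord_gap: .4f}")
```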

Theorem 24

Let \(F\) be a strictly exponentially general convex function. Then, any local minimum of \(F \) is a global minimum.

Proof

Let the strictly exponentially general convex function \(F\) have a local minimum at \(g(u) \in K_{g} \). Assume the contrary, that is, \(F(g(v))< F(g(u)) \) for some \(g(v) \in K_{g}\). Since \(F\) is a strictly exponentially general convex function, it follows that

$$\begin{aligned} e^{F(g(u) + t(g(v)-g(u)))} < te^{F(g(v))} + (1 - t)e^{F(g(u))}, \quad \text{for } \quad 0 < t < 1. \end{aligned}$$

Thus

$$ e^{F(g(u) + t(g(v)-g(u)))} - e^{F(g(u))} < t[e^{F(g(v))} - e^{F(g(u))}] < 0, $$

from which it follows that

$$ e^{F(g(u) + t(g(v)-g(u)))} < e^{F(g(u))}, $$

for arbitrarily small \(t > 0 \), contradicting the local minimum. □

Theorem 25

If the function \(F \) on the general convex set \(K_{g} \) is exponentially general convex, then the level set

$$ L_{\alpha } = \{g(u) \in K_{g} : e^{F(g(u) )} \leq \alpha , \quad \alpha \in R \} $$

is a general convex set.

Proof

Let \(g(u), g(v) \in L_{\alpha } \). Then \(e^{F(g(u))} \leq \alpha \) and \(e^{F(g(v))} \leq \alpha \).

We have that

$$\forall t \in (0, 1), g(w) = g(v) + t(g(u)-g(v))\in K_{g} ,$$

since \(K_{g}\) is a general convex set. Thus, by the exponentially general convexity of \(F\), we obtain that

$$\begin{aligned} e^{F(g(v) + t(g(u)-g(v)))} \leq & (1 - t)e^{F(g(v))} + t e^{F(g(u))} \\ \leq & (1-t) \alpha + t\alpha =\alpha , \end{aligned}$$

from which it follows that \(g(v) + t(g(u)-g(v)) \in L_{\alpha } \). Hence \(L_{\alpha } \) is a general convex set. □

Theorem 26

The function \(F\) is an exponentially general convex function, if and only if,

$$ epi(F) = \{(g(u), \alpha ): g(u) \in K_{g} , e^{F(g(u))} \leq \alpha , \alpha \in R \} $$

is a general convex set.

Proof

Assume that \(F\) is an exponentially general convex function. Let

$$ (g(u), \alpha ), \quad (g(v),\beta ) \in epi(F). $$

Then it follows that \(e^{F(g(u))} \leq \alpha \) and \(e^{F(g(v))} \leq \beta \). Hence, we have

$$ e^{F(g(u) + t(g(v)-g(u)))} \leq (1 - t)e^{F(g(u))} + t e^{F(g(v))} \leq (1 - t)\alpha + t \beta , $$

which implies that

$$ ((1-t)g(u) + tg(v), (1 - t)\alpha + t\beta ) \in epi(F). $$

Thus \(epi(F)\) is a general convex set.

Conversely, let \(epi(F)\) be a general convex set. Let \(g(u), g(v) \in K_{g} \). Then

$$(g(u), e^{F(g(u))}) \in epi(F)\ \ \text{and}\ \ (g(v), e^{F(g(v))}) \in epi(F) .$$

Since \(epi(F) \) is a general convex set, we must have

$$ (g(u) + t(g(v)-g(u)), (1 - t)e^{F(g(u))} + te^{F(g(v))}) \in epi(F), $$

which implies that

$$ e^{F((1-t)g(u) + tg(v))} \leq (1 - t)e^{F(g(u))} + te^{F(g(v))}. $$

This shows that \(F\) is an exponentially general convex function. □

Theorem 27

The function \(F\) is exponentially general quasi-convex, if and only if, the level set

$$ L_{\alpha } = \{g(u)\in K_{g}, \alpha \in R: e^{F(g(u))} \leq \alpha \} $$

is a general convex set.

Proof

Let \(g(u), g(v) \in L_{\alpha } \). Then \(g(u), g(v) \in K_{g} \) and \(\max (e^{F(g(u))}, e^{F(g(v))}) \leq \alpha \).

Now for

$$t \in (0, 1), g(w) = g(u) + t(g(v)-g(u)) \in K_{g} .$$

We have to prove that \(g(u) + t(g(v)-g(u)) \in L_{\alpha } \). By the exponentially general quasi-convexity of \(F\), we have

$$ e^{F(g(u) + t(g(v)-g(u)))} \leq \max \{e^{F(g(u))}, e^{F(g(v))}\} \leq \alpha , $$

which implies that \(g(u) + t(g(v)-g(u)) \in L_{\alpha } \), showing that the level set \(L_{\alpha } \) is indeed a general convex set.

Conversely, assume that \(L_{\alpha } \) is a general convex set. Then,

$$\forall g(u), g(v) \in L_{\alpha } , t\in [0,1], g(u) + t(g(v)- g(u)) \in L_{\alpha } .$$

Let \(g(u), g(v) \in L_{\alpha } \) for

$$ \alpha = \max \{e^{F(g(u))}, e^{F(g(v))}\} \quad \text{and}\quad e^{F(g(v))} \leq e^{F(g(u))}. $$

Then, from the definition of the level set \(L_{\alpha } \), it follows that

$$ e^{F(g(u) + t(g(v)-g(u)))} \leq \max \{e^{F(g(u))}, e^{F(g(v))}\} \leq \alpha . $$

Thus \(F \) is an exponentially general quasi-convex function. This completes the proof. □

Theorem 28

Let \(F\) be an exponentially general convex function and let \(\mu =\inf _{g(u)\in K_{g}}e^{F(g(u))}\). Then the set

$$ E = \{g(u) \in K_{g} : e^{F(g(u))}= \mu \} $$

is a general convex set of \(K_{g}\). If \(F \) is a strictly exponentially general convex function, then \(E \) is a singleton.

Proof

Let \(g(u), g(v) \in E\). For \(0 < t < 1\), let \(g(w) = g(u) + t(g(v)-g(u))\). Since \(F\) is an exponentially general convex function, then

$$\begin{aligned} e^{F(g(w))} =& e^{F(g(u) + t(g(v)-g(u)))} \\ \leq & (1 - t)e^{F(g(u))} + te^{F(g(v))} = t \mu + (1 - t)\mu = \mu , \end{aligned}$$

which implies \(g(w) \in E \), and hence \(E\) is a general convex set. For the second part, assume to the contrary that \(e^{F(g(u))} = e^{F(g(v))} = \mu \) with \(g(u) \neq g(v)\). Since \(K_{g}\) is a general convex set, then for \(0 < t < 1\),

$$\begin{aligned} g(u) + t(g(v)-g(u)) \in K_{g} . \end{aligned}$$

Since \(F\) is strictly exponentially general convex, we have

$$\begin{aligned} e^{F(g(u) + t(g(v)-g(u)))} < & (1 - t)e^{F(g(u))} + te^{F(g(v))} = (1 - t)\mu + t\mu =\mu . \end{aligned}$$

This contradicts the fact that \(\mu = \inf _{g(u)\in K_{g} }e^{F(g(u))} \) and hence the result follows. □

We now introduce the concept of strongly exponentially general convex functions, which is the main motivation of this chapter.

Definition 39

A positive function \(F\) on the general convex set \(K_{g}\) is said to be strongly exponentially general convex with respect to an arbitrary non-negative function \(g \), if there exists a constant \(\mu >0 \), such that

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))} \leq &(1-t)e^{F(g(u))}+te^{F(g(v))} \\ &-\mu \{t(1-t)\}\|g(v)-g(u)\|^{2}, \quad \forall g(u),g(v)\in K_{g}, t\in [0,1]. \end{aligned}$$

The function \(F\) is said to be strongly exponentially general concave with respect to an arbitrary non-negative function \(g \), if and only if, \(-F\) is a strongly exponentially general convex function with respect to an arbitrary non-negative function \(g \).

If \(t=\frac{1}{2} \) and \(\mu =1 \), then

$$\begin{aligned} e^{F(\frac{g(u)+g(v)}{2})} \leq \frac{e^{F(g(u))}+e^{F(g(v))}}{2} - \frac{1}{4}\|g(v)-g(u)\|^{2}, \quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$

The function \(F\) is called strongly exponentially general \(J\)-convex with respect to an arbitrary non-negative function \(g \).
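As a sanity check on Definition 39, the following sketch samples the defining inequality for the assumed data \(g(x)=x\) and \(F(x)=x^{2}\) on \([-2,2]\); since \((e^{x^{2}})''\geq 2\) everywhere, the inequality should hold with modulus \(\mu = 1\).

```python
# A numerical sanity check of Definition 39 for the assumed data
# g(x) = x and F(x) = x^2: since (e^{x^2})'' >= 2 everywhere, the
# defining inequality should hold with modulus mu = 1.
import math, random

F, g, mu = (lambda x: x * x), (lambda x: x), 1.0

random.seed(0)
worst = float("inf")
for _ in range(10000):
    u, v, t = random.uniform(-2, 2), random.uniform(-2, 2), random.random()
    lhs = math.exp(F((1 - t) * g(u) + t * g(v)))
    rhs = ((1 - t) * math.exp(F(g(u))) + t * math.exp(F(g(v)))
           - mu * t * (1 - t) * (g(v) - g(u)) ** 2)
    worst = min(worst, rhs - lhs)
print("minimum of rhs - lhs over samples:", worst)   # expected >= 0
```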

Definition 40

A positive function \(F\) is said to be strongly exponentially affine general convex with respect to an arbitrary non-negative function \(g \), if there exists a constant \(\mu >0 \), such that

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))} = &(1-t)e^{F(g(u))}+te^{F(g(v))} \\ &-\mu \{t(1-t)\}\|g(v)-g(u)\|^{2}, \quad \forall g(u),g(v)\in K_{g}, t\in [0,1]. \end{aligned}$$

If \(t= \frac{1}{2} \), then

$$\begin{aligned} e^{F(\frac{g(u)+g(v)}{2})}= \frac{e^{F(g(u))}+e^{F(g(v))}}{2}- \frac{1}{4}\mu \|g(v)-g(u)\|^{2}, \quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$

We then say that the function \(F\) is strongly exponentially affine general \(J\)-convex with respect to an arbitrary non-negative function \(g \).

For properties of strongly exponentially general convex functions in optimization, inequalities and equilibrium problems, cf. [4,5,6,7,8, 10,11,12,13,14,15,16,17,18,19,20,21] and the references therein.

Definition 41

A positive function \(F\) on the convex set \(K_{g}\) is said to be strongly exponentially general quasi-convex, if there exists a constant \(\mu >0\) such that

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))}\leq \max \{e^{F(g(u))},e^{F(g(v))}\}-\mu \{t(1-t) \}\|g(v)-g(u)\|^{2}, \\ \forall g(u),g(v)\in K_{g}, t\in [0,1]. \end{aligned}$$

Definition 42

A positive function \(F\) on the general convex set \(K_{g}\) is said to be strongly exponentially general log-convex, if there exists a constant \(\mu >0\) such that

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))}\leq (e^{(F(g(u)))})^{1-t}(e^{(F(g(v)))})^{t}- \mu \{t(1-t)\}\|g(v)-g(u)\|^{2}, \\ \forall g(u),g(v)\in K_{g}, t\in [0,1], \end{aligned}$$

where \(F(\cdot )>0\).

From this Definition, we have

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))} \leq & (e^{F(g(u))})^{1-t}(e^{F(g(v))})^{t}- \mu \{t(1-t)\}\|g(v)-g(u)\|^{2} \\ \leq & (1-t)e^{F(g(u))}+ te^{F(g(v))}-\mu \{t(1-t)\}\|g(v)-g(u)\|^{2}. \end{aligned}$$

This shows that every strongly exponentially general log-convex function is a strongly exponentially general convex function, but the converse is not true.

From the above concepts, we have

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))} \leq & (e^{(F(g(u)))})^{1-t}(e^{(F(g(v)))})^{t}- \mu \{t(1-t)\}\|g(v)-g(u)\|^{2} \\ \leq & (1-t)e^{F(g(u))}+te^{F(g(v))}-\mu \{t(1-t)\}\|g(v)-g(u)\|^{2} \\ \leq & \max \{e^{F(g(u))},e^{F(g(v))}\}-\mu \{t(1-t)\}\|g(v)-g(u)\|^{2}. \end{aligned}$$

This shows that every strongly exponentially general log-convex function is a strongly exponentially general convex function and every strongly exponentially general convex function is a strongly exponentially general quasi-convex function. However, the converse is not true.

Definition 43

A differentiable function \(F\) on the convex set \(K_{g}\) is said to be a strongly exponentially general pseudoconvex function with respect to an arbitrary non-negative function \(g \), if and only if there exists a constant \(\mu >0\) such that

$$\begin{aligned} \langle e^{F(g(u))}F'(g(u)),g(v)-g(u)\rangle +\mu \|g(v)-g(u)\|^{2} \geq & 0 \\ \Rightarrow &\\ e^{F(g(v))}- e^{F(g(u))} \geq & 0,\quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$

Theorem 29

Let \(F\) be a differentiable function on the general convex set \(K_{g}\). Then the function \(F\) is a strongly exponentially general convex function, if and only if,

$$\begin{aligned} e^{F(g(v))}-e^{F(g(u))} \geq \langle e^{F(g(u))} F^{\prime }(g(u)), g(v)-g(u) \rangle +\mu \|g(v)-g(u)\|^{2}, \forall g(u),g(v)\in K_{g}. \end{aligned}$$
(148)

Proof

Let \(F\) be a strongly exponentially general convex function. Then

$$\begin{aligned} &e^{F(g(u)+t(g(v)-g(u)))}\leq (1-t)e^{F(g(u))}+te^{F(g(v))}- \mu t(1-t) \|g(v)-g(u)\|^{2},\\ &\quad \forall g(u),g(v)\in K_{g}, t\in [0,1] \end{aligned}$$

which can be written as

$$\begin{aligned} e^{F(g(v))}-e^{F(g(u))}\geq \{ \frac{e^{F(g(u)+t(g(v)-g(u)))}-e^{F(g(u))}}{t}\} + \mu (1-t)\|g(v)-g(u) \|^{2}. \end{aligned}$$

Taking the limit in the above inequality as \(t\rightarrow 0\), we have

$$\begin{aligned} e^{F(g(v))}-e^{F(g(u))}\geq \langle e^{F(g(u))} F'(g(u)),g(v)-g(u) \rangle + \mu \|g(v)-g(u)\|^{2}, \end{aligned}$$

which is (148), the required result.

Conversely, let (148) hold. Then

$$\forall g(u),g(v)\in K_{g}, t\in [0,1], g(v_{t}) =g(u)+t(g(v)-g(u))\in K_{g} ,$$

we have

$$\begin{aligned} e^{F(g(v))}-e^{F(g(v_{t}))} \geq & \langle e^{F(g(v_{t}))}F'(g(v_{t})),g(v)-g(v_{t}) \rangle +\mu \|g(v)-g(v_{t})\|^{2} \\ =&(1-t)\langle e^{F(g(v_{t}))}F'(g(v_{t})),g(v)-g(u)\rangle + \mu (1-t)^{2} \|g(v)-g(u)\|^{2}. \end{aligned}$$
(149)

In a similar way, we have

$$\begin{aligned} e^{F(g(u))}-e^{F(g(v_{t}))} \geq & \langle e^{F(g(v_{t}))}F'(g(v_{t})),g(u)-g(v_{t}) \rangle +\mu \|g(u)-g(v_{t})\|^{2} \\ =&-t\langle e^{F(g(v_{t}))}F'(g(v_{t})),g(v)-g(u)\rangle + \mu t^{2} \|g(v)-g(u)\|^{2}. \end{aligned}$$
(150)

Multiplying (149) by \(t\) and (150) by \((1-t)\) and adding the resultant, we have

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))}\leq (1-t)e^{F(g(u))}+te^{F(g(v))}- \mu t(1-t) \|g(v)-g(u)\|^{2}, \end{aligned}$$

showing that \(F\) is a strongly exponentially general convex function. □
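For the same assumed data \(F(x)=x^{2}\), \(g(x)=x\) and \(\mu =1\) used above, the gradient inequality (148) can be sampled directly; here \(e^{F(g(u))}F'(g(u))\) is just the derivative of \(e^{x^{2}}\) at \(u\).

```python
# A numerical check of the gradient inequality (148) for the assumed
# data F(x) = x^2, g(x) = x, mu = 1; e^{F(u)}F'(u) = (e^{x^2})'(u).
import math, random

dexpF = lambda x: math.exp(x * x) * 2 * x    # e^{F(x)} F'(x)

random.seed(1)
worst = float("inf")
for _ in range(10000):
    u, v = random.uniform(-2, 2), random.uniform(-2, 2)
    slack = (math.exp(v * v) - math.exp(u * u)
             - dexpF(u) * (v - u) - 1.0 * (v - u) ** 2)
    worst = min(worst, slack)
print("minimum slack in (148):", worst)      # expected >= 0
```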

Theorem 30

Let \(F\) be a differentiable strongly exponentially general convex function on the general convex set \(K_{g} \). Then

$$\begin{aligned} &\langle e^{F(g(u))}F'(g(u))-e^{F(g(v))}F'(g(v)),g(u)-g(v)\rangle \geq 2\mu \|g(v)-g(u)\|^{2}, \\ &\quad \forall g(u),g(v)\in K_{g}. \end{aligned}$$
(151)

Proof

Let \(F\) be a strongly exponentially general convex function. Then, from Theorem 29, we have

$$\begin{aligned} &e^{F(g(v))}-e^{F(g(u))}\geq \langle e^{F(g(u))} F'(g(u)),g(v)-g(u) \rangle + \mu \|g(v)-g(u)\|^{2}, \\ &\quad \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(152)

Interchanging \(u\) and \(v\) in (152), we have

$$\begin{aligned} &e^{F(g(u))}-e^{F(g(v))} \geq \langle e^{F(g(v))}F'(g(v)),g(u)-g(v) \rangle + \mu \|g(u)-g(v)\|^{2}, \\ &\quad \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(153)

Adding (153) and (152), we have

$$\begin{aligned} \langle e^{F(g(u))}F'(g(u))-e^{F(g(v))}F'(g(v)),g(u)-g(v)\rangle \geq 2\mu \|g(v)-g(u)\|^{2}, \end{aligned}$$

which is the required (151). □
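The strong monotonicity (151) can be sampled with the same assumed data; the map \(x \mapsto 2xe^{x^{2}}\) has slope at least 2 everywhere, which matches \(2\mu \) with \(\mu = 1\).

```python
# A numerical check of the strong monotonicity (151) for the assumed
# data F(x) = x^2, g(x) = x, mu = 1: the map x -> 2x e^{x^2} has
# derivative (2 + 4x^2)e^{x^2} >= 2 everywhere.
import math, random

dexpF = lambda x: 2 * x * math.exp(x * x)

random.seed(2)
worst = float("inf")
for _ in range(10000):
    u, v = random.uniform(-2, 2), random.uniform(-2, 2)
    worst = min(worst, (dexpF(u) - dexpF(v)) * (u - v) - 2.0 * (u - v) ** 2)
print("minimum slack in (151):", worst)      # expected >= 0
```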

We point out that the converse of Theorem 30 is not true except for \(p=2\). In fact, we have the following result.

Theorem 31

If the differential of a strongly exponentially general convex function satisfies

$$\begin{aligned} &\langle e^{F(g(u))}F'(g(u))-e^{F(g(v))}F'(g(v)),g(u)-g(v)\rangle \geq 2\mu \|g(v)-g(u)\|^{2}, \\ &\quad \forall g(u),g(v) \in K_{g}, \end{aligned}$$
(154)

then

$$\begin{aligned} e^{F(g(v))}-e^{F(g(u))} \geq \langle e^{F(g(u))}F'(g(u)),g(v)-g(u) \rangle +\mu \|g(v)-g(u)\|^{2}, \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(155)

Proof

Let \(F'(.)\) satisfy (154). Then

$$\begin{aligned} \langle e^{F(g(v))}F'(g(v)),g(u)-g(v)\rangle \leq \langle e^{F(g(u))}F'(g(u)),g(u)-g(v) \rangle - 2 \mu \|g(v)-g(u)\|^{2}. \end{aligned}$$
(156)

Since \(K_{g}\) is a general convex set,

$$\forall g(u),g(v) \in K_{g}, t\in [0,1], g(v_{t})= g(u)+t(g(v)-g(u))\in K_{g}.$$

Setting \(g(v)= g(v_{t})\) in (156), we have

$$\begin{aligned} &\langle e^{F(g(v_{t}))}F'(g(v_{t})), g(u)-g(v_{t})\rangle \\ &\quad \leq \langle e^{F(g(u))}F'(g(u)), g(u)-g(v_{t})\rangle -2\mu \|g(v_{t})-g(u) \|^{2} \\ &\quad =-t \langle e^{F(g(u))}F'(g(u)),g(v)-g(u)\rangle -2t^{2} \mu \|g(v)-g(u) \|^{2}, \end{aligned}$$

which implies that

$$\begin{aligned} \langle e^{F(g(v_{t}))}F'(g(v_{t})),g(v)-g(u)\rangle \geq \langle e^{F(g(u))} F'(g(u)),g(v)-g(u)\rangle +2t \mu \|g(v)-g(u)\|^{2}. \end{aligned}$$
(157)

Consider the auxiliary function

$$\begin{aligned} \xi (t)=e^{F(g(u)+t(g(v)-g(u)))}, \forall g(u),g(v) \in K_{g}, \end{aligned}$$
(158)

from which, we have

$$\begin{aligned} \xi (1)= e^{F(g(v))}, \quad \xi (0)= e^{F(g(u))}. \end{aligned}$$

Then, from (157) and (158), we have

$$\begin{aligned} \xi '(t) =&\langle e^{F(g(v_{t}))}F'(g(v_{t})), g(v)-g(u)\rangle \\ \geq & \langle e^{F(g(u))}F'(g(u)),g(v)-g(u)\rangle +2\mu t \|g(v)-g(u) \|^{2}. \end{aligned}$$
(159)

Integrating (159) between 0 and 1, we have

$$\begin{aligned} \xi (1)-\xi (0)= \int ^{1}_{0}\xi ^{\prime }(t)dt \geq \langle e^{F(g(u))}F'(g(u)),g(v)-g(u) \rangle +\mu \|g(v)-g(u)\|^{2}. \end{aligned}$$

Thus it follows that

$$\begin{aligned} e^{F(g(v))}-e^{F(g(u))} \geq \langle e^{F(g(u))}F'(g(u)),g(v)-g(u) \rangle +\mu \|g(v)-g(u)\|^{2}, \end{aligned}$$

which is the required (155). □

Theorem 29 and Theorem 30 enable us to introduce the following new concepts.

Definition 44

The differential \(F^{\prime }(.) \) of a strongly exponentially convex function is said to be strongly exponentially monotone, if there exists a constant \(\mu >0 \), such that

$$\begin{aligned} \langle e^{F(g(u))}F'(g(u))-e^{F(g(v))}F'(g(v)),g(u)-g(v)\rangle \geq \mu \|g(v)-g(u)\|^{2}, \forall u,v \in H. \end{aligned}$$

Definition 45

The differential \(F^{\prime }(.) \) of an exponentially convex function is said to be exponentially monotone, if

$$\begin{aligned} \langle e^{F(g(u))}F'(g(u))-e^{F(g(v))}F'(g(v)),g(u)-g(v)\rangle \geq 0, \forall u,v \in H. \end{aligned}$$

Definition 46

The differential \(F^{\prime }(.) \) of a strongly exponentially convex function is said to be strongly exponentially pseudomonotone, if

$$\begin{aligned} \langle e^{F(g(u))}F'(g(u)),g(v)-g(u)\rangle \geq 0, \end{aligned}$$

implies that

$$\begin{aligned} \langle e^{F(g(v))} F'(g(v)),g(v)-g(u) \rangle \geq \mu \|g(v)-g(u)\|^{2}, \quad \forall g(u),g(v) \in K_{g}. \end{aligned}$$

We now give a necessary condition for strongly exponentially pseudoconvex functions.

Theorem 32

Let \(F'\) be a strongly exponentially pseudomonotone operator. Then \(F\) is a strongly exponentially general pseudoconvex function.

Proof

Let \(F'\) be a strongly exponentially pseudomonotone operator. Then

$$\begin{aligned} \langle e^{F(g(u))}F'(g(u)),g(v)-g(u)\rangle \geq 0, \forall g(u),g(v) \in K_{g}, \end{aligned}$$

implies that

$$\begin{aligned} \langle e^{F(g(v))} F'(g(v)),g(v)-g(u) \rangle \geq \mu \|g(v)-g(u)\|^{2}. \end{aligned}$$
(160)

Since \(K_{g}\) is a general convex set,

$$\forall g(u),g(v) \in K_{g}, t \in [0,1], g(v_{t})= g(u)+t(g(v)-g(u))\in K_{g}.$$

Setting \(g(v)=g( v_{t})\) in (160), we have

$$\begin{aligned} \langle e^{F(g(v_{t}))} F'(g(v_{t})),g(v)-g(u)\rangle \geq t \mu \|g(v)-g(u) \|^{2}. \end{aligned}$$
(161)

Consider the auxiliary function

$$\begin{aligned} \xi (t)=e^{F(g(u)+t(g(v)-g(u)))}= e^{F(g(v_{t}))},\quad \forall g(u),g(v) \in K_{g}, t\in [0,1], \end{aligned}$$

which is differentiable, since \(F\) is a differentiable function. Thus, we have

$$\begin{aligned} \xi '(t)=\langle e^{F(g(v_{t}))}F'(g(v_{t})),g(v)-g(u)\rangle \geq t \mu \|g(v)-g(u)\|^{2}. \end{aligned}$$

Integrating the above relation between 0 and 1, we have

$$\begin{aligned} \xi (1)-\xi (0)= \int ^{1}_{0} \xi ^{\prime }(t)dt \geq \frac{\mu }{2}\|g(v)-g(u) \|^{2}, \end{aligned}$$

that is,

$$\begin{aligned} e^{F(g(v))}-e^{F(g(u))} \geq \frac{\mu }{2}\|g(v)-g(u)\|^{2}, \end{aligned}$$

showing that \(F\) is a strongly exponentially general pseudoconvex function. □

Definition 47

The function \(F\) is said to be sharply strongly exponentially pseudoconvex, if there exists a constant \(\mu >0 \), such that

$$\begin{aligned} \langle e^{F(g(u))}F'(g(u)),g(v)-g(u)\rangle \geq &0 \\ \Rightarrow & \\ e^{F(g(v))} \geq & e^{F(g(v)+t(g(u)-g(v)))}+\mu t(1-t)\|g(v)-g(u)\|^{2}, \\ &\quad \forall g(u),g(v) \in K_{g}, t \in [0,1]. \end{aligned}$$

Theorem 33

Let \(F\) be a sharply strongly exponentially pseudoconvex function with a constant \(\mu >0\). Then

$$\begin{aligned} \langle e^{F(g(v))}F'(g(v)),g(v)-g(u)\rangle \geq \mu \|g(v)-g(u)\|^{2}, \quad \forall g(u),g(v) \in K_{g}. \end{aligned}$$

Proof

Let \(F\) be a sharply strongly exponentially general pseudoconvex function. Then

$$\begin{aligned} e^{F(g(v))}\geq e^{F(g(v)+t(g(u)-g(v)))}+\mu t(1-t) \|g(v)-g(u)\|^{2}, \quad \forall g(u),g(v) \in K_{g}, t\in [0,1], \end{aligned}$$

from which we have

$$\begin{aligned} \left\{\frac{e^{F(g(v)+t(g(u)-g(v)))}-e^{F(g(v))}}{t}\right\}+\mu (1-t) \|g(v)-g(u) \|^{2}\leq 0. \end{aligned}$$

Taking the limit in the above inequality as \(t \rightarrow 0\), we have

$$\begin{aligned} \langle e^{F(g(v))}F'(g(v)),g(v)-g(u)\rangle \geq \mu \|g(v)-g(u)\|^{2}, \end{aligned}$$

the required result. □

We now discuss the optimality condition for differentiable strongly exponentially convex functions.

Theorem 34

Let \(F \) be a differentiable strongly exponentially general convex function. If \(u \in H: g(u)\in K_{g} \) is the minimum of the function \(F \), then

$$\begin{aligned} e^{F(g(v))}-e^{F(g(u))} \geq \mu \|g(v)-g(u)\|^{2}, \quad \forall g(u),g(v) \in K_{g}. \end{aligned}$$
(162)

Proof

Let \(u \in H: g(u)\in K_{g} \) be a minimum of the function \(F \). Then

$$\begin{aligned} F(g(u))\leq F(g(v)), \quad \forall g(v) \in K_{g}, \end{aligned}$$

from which, we have

$$\begin{aligned} e^{F(g(u))}\leq e^{F(g(v))}, \quad \forall g(v) \in K_{g}. \end{aligned}$$
(163)

Since \(K_{g} \) is a general convex set, \(\forall g(u),g(v) \in K_{g}, t\in [0,1] \),

$$ g(v_{t}) = (1-t)g(u)+ tg(v) \in K_{g}. $$

Setting \(g(v) = g(v_{t}) \) in (163), we have

$$\begin{aligned} 0 \leq \lim _{t \rightarrow 0}\{ \frac{e^{F(g(u)+t(g(v)-g(u)))}-e^{F(g(u))}}{t}\} = \langle e^{F(g(u))} F^{\prime }(g(u)), g(v)-g(u) \rangle . \end{aligned}$$
(164)

Since \(F \) is a differentiable strongly exponentially general convex function, it follows that

$$\begin{aligned} e^{F(g(u)+t(g(v)-g(u)))} \leq e^{F(g(u))}+ t(e^{F(g(v))}-e^{F(g(u))}) - \mu t(1-t)\|g(v)-g(u)\|^{2}, \\ \quad \forall g(u),g(v) \in K_{g}, t \in [0,1], \end{aligned}$$

from which, using (164), we have

$$\begin{aligned} e^{F(g(v))}-e^{F(g(u))} \geq & \lim _{t \rightarrow 0}\{ \frac{e^{F(g(u)+t(g(v)-g(u)))}-e^{F(g(u))}}{t} \}+ \mu \|g(v)-g(u)\|^{2} \\ =& \langle e^{F(g(u))}F^{\prime }(g(u)), g(v)- g(u)\rangle + \mu \|g(v)-g(u) \|^{2} \\ \geq& \mu \|g(v)-g(u)\|^{2}, \end{aligned}$$

which is the required result (162). □

Remark 5

We would like to mention that, if \(u \in H: g(u)\in K_{g} \) satisfies

$$\begin{aligned} \langle e^{F(g(u))}F^{\prime }(g(u)), g(v)-g(u) \rangle + \mu \|g(v)-g(u) \|^{2} \geq 0, \quad \forall g(u),g(v) \in K_{g}, \end{aligned}$$
(165)

then \(u \in H: g(u)\in K_{g} \) is the minimum of the function \(F \). The inequality of the type (165) is called the strongly exponentially variational inequality. It is an interesting problem to study the existence of a solution of the inequality (165) and to develop numerical methods for solving strongly exponentially variational inequalities.
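One natural route, sketched below under the assumption \(g = I\), is to note that a minimizer of \(h(u)=e^{F(u)}\) over \(K\) satisfies (165), so a projected-gradient iteration on \(h\) is a candidate method; \(F\), the interval \(K=[0,3]\) and the step size are illustrative assumptions.

```python
# A minimal sketch for (165) with g = I: a minimizer of h(u) = e^{F(u)}
# over K satisfies (165), so we run projected gradient on h.  The data
# F(u) = (u - 2)^2, K = [0, 3] and the step size are assumed examples.
import math

F = lambda u: (u - 2.0) ** 2
dF = lambda u: 2.0 * (u - 2.0)
P_K = lambda u: min(max(u, 0.0), 3.0)        # projection onto K = [0, 3]

u, step = 0.0, 0.05
for _ in range(2000):
    grad = math.exp(F(u)) * dF(u)            # h'(u) = e^{F(u)} F'(u)
    u = P_K(u - step * grad)
print("approximate solution of (165):", u)   # expected close to 2.0
```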

We would like to note that strongly exponentially general convex functions are also strongly Wright exponentially general convex functions. From Definition 39, we have

$$\begin{aligned} &e^{F((1-t)g(u)+tg(v))}+ e^{F(tg(u)+(1-t)g(v))} \\ &\quad \leq e^{F(g(u))}+e^{F(g(v))} -2\mu t(1-t) \|g(v)-g(u)\|^{2}, \quad \forall g(u),g(v) \in K_{g}, t\in [0,1], \end{aligned}$$

which is the defining inequality of strongly Wright exponentially general convex functions. It is an interesting problem to study the properties and applications of strongly Wright exponentially general convex functions.

13 Generalizations and Extensions

We would like to mention that some of the results obtained and presented in this paper can be extended to strongly general variational inequalities. To be more precise, for given nonlinear operators \(T,A,g \), consider the problem of finding \(u\in H: g(u) \in K \) such that

$$\begin{aligned} \langle Tu, g(v)-g(u) \rangle \geq \langle A(u), g(v)-g(u) \rangle + \mu \|g(v)-g(u)\|^{2},\quad \forall v\in H: g(v) \in K, \end{aligned}$$
(166)

which is called the strongly general variational inequality.

If \(\mu = 0 \), then problem (166) reduces to

$$\begin{aligned} \langle Tu, g(v)-g(u) \rangle \geq \langle A(u), g(v)-g(u) \rangle , \quad \forall v\in H: g(v) \in K, \end{aligned}$$
(167)

which is called the general strongly variational inequality.

We would like to mention that one can obtain various classes of general variational inequalities for appropriate and suitable choices of the operators \(T, A, g \).

Using Lemma 1, one can show that the problem (167) is equivalent to finding \(u\in H: g(u) \in K \) such that

$$\begin{aligned} g(u) = P_{K}[g(u)- \rho ( Tu-A(u))]. \end{aligned}$$
(168)

These alternative formulations can be used to suggest and analyze similar techniques for solving general strongly variational inequalities (167) as considered in this paper under certain extra conditions. A complete study of these algorithms for problem (167) will be the subject of subsequent research. The development of efficient and implementable algorithms for problems (167) requires further research efforts.
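For illustration, the fixed-point form (168) suggests the obvious iteration in the classical case \(g = I\), namely \(u_{k+1} = P_{K}[u_{k}- \rho (Tu_{k}-A(u_{k}))]\); the operators \(T\) and \(A\), the step \(\rho \) and the cone \(K\) below are assumed examples.

```python
# A sketch of the fixed-point iteration suggested by (168) with g = I:
# u_{k+1} = P_K[u_k - rho (T u_k - A(u_k))].  T, A, rho and the cone
# K (the nonnegative orthant) are assumed examples.
import numpy as np

P_K = lambda x: np.maximum(x, 0.0)            # projection onto K = R^2_+

M = np.array([[5.0, 1.0], [1.0, 4.0]])
q = np.array([-3.0, -6.0])
T = lambda u: M @ u + q                       # monotone affine operator
A = lambda u: 0.5 * u                         # an assumed operator A

u, rho = np.zeros(2), 0.1
for _ in range(3000):
    u_new = P_K(u - rho * (T(u) - A(u)))
    if np.linalg.norm(u_new - u) < 1e-12:
        break
    u = u_new
print("fixed point u =", u)                   # satisfies (168) with g = I
```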

(I). For given nonlinear operators \(T,A,g \), consider the problem of finding \(u\in H: g(u) \in K \) such that

$$\begin{aligned} \langle Tu, v-g(u) \rangle \geq \langle A(u), v-g(u) \rangle ,\quad \forall v\in K, \end{aligned}$$
(169)

which is also called the strongly general variational inequality.

(II). For given nonlinear operators \(T,A,g \), consider the problem of finding \(u\in H: g(u) \in K \) such that

$$\begin{aligned} \langle Tu, g(v)-u \rangle \geq \langle A(u), g(v)-u \rangle ,\quad \forall v\in H: g(v) \in K, \end{aligned}$$
(170)

which is also known as the strongly general variational inequality.

Remark 6

We would like to point out that the problems (167), (169) and (170) are quite different from each other and have significant applications in various branches of pure and applied sciences. They are open and interesting problems for future research. We would like to emphasize that the problems (167), (169) and (170) are equivalent in various ways and share basic and fundamental properties. In particular, they have the same equivalent fixed-point formulations. Consequently, most of the results obtained in this paper continue to hold for these problems with minor modifications.

(III). If \(K= H \), then the problem (166) is equivalent to finding \(u \in H: g(u) \in H \), such that

$$\begin{aligned} \langle Tu, g(v) \rangle = \langle A(u), g(v) \rangle , \quad \forall v\in H:g(v) \in H, \end{aligned}$$
(171)

which can be viewed as the representation theorem for nonlinear functions involving an arbitrary function \(g\). For more details, see Noor and Noor [141].

(IV). If \(A(u)= |u| \), then the problem (171) is equivalent to finding \(u \in H: g(u) \in H \), such that

$$\begin{aligned} \langle Tu, g(v) \rangle = \langle A|u|, g(v) \rangle , \quad \forall v\in H:g(v) \in H, \end{aligned}$$
(172)

which is known as the generalized absolute value equation. See Batool et al. [13] for more details.

The theory of general variational inequalities does not appear to have developed to an extent that it provides a complete framework for studying these problems. Much more research is needed in all of these areas to develop a sound basis for applications. We have not treated variational inequalities for time-dependent problems or the spectrum analysis of variational inequalities. In fact, this field has fostered and will continue to foster new, innovative and novel applications in various branches of pure and applied sciences. We have given only a brief introduction to this rapidly evolving field. The interested reader is advised to explore it further. It is our hope that this brief introduction may inspire and motivate the reader to discover new and interesting applications of general variational inequalities as well as related optimization problems in other areas of the sciences.