1 Introduction

In this paper, we propose a numerical method, based upon a generalization of the classical collage theorem, to solve an inverse problem associated with a certain system of variational equations. To this end, we first characterize, in terms of the existence of a scalar, the solvability of the following system:

$$\begin{aligned} \ \left\{ \begin{array}{lll} x^{*}_1(y_1-x_0) &{} \le &{} a_1(y_1,y_1-x_0)\\ x^{*}_2(y_2-x_0) &{} \le &{} a_2(y_2,y_2-x_0)\\ \; \; \; \; \; \; \; \; \; \; \vdots &{} \; \vdots &{} \; \; \; \; \; \; \; \; \; \; \vdots \\ x^{*}_N(y_N-x_0) &{} \le &{} a_N(y_N,y_N-x_0)\\ \end{array}, \right. \end{aligned}$$

with \(x^{*}_i\) being continuous and linear functionals on a real reflexive Banach space and \(a_i\) continuous bilinear forms on the same space.

Since Stampacchia’s results in the 1960s, the study of variational inequalities and systems of variational equations has sparked great interest, in part due to the fact that a wide range of optimization problems can be reformulated as variational problems. The concept of variational systems encompasses different types of problems: for example, in Pang (1985), the Nash equilibrium problem, the spatial equilibrium problem and the general equilibrium programming problem are modeled as a system of variational inequalities. In Garralda-Guillem and Ruiz Galán (2019), the variational system includes certain mixed variational formulations associated with some elliptic problems. Such systems also arise in connection with abstract economies (Ansari and Yao 1999).

Another specific type of problem associated with variational systems, and one that is related to our work, is the so-called common solutions to variational inequalities problem, which consists of finding common solutions to a system of variational inequalities. There are different approaches to this kind of inequality system: in Zhao et al. (2010) the definition domains of the functions of the system are closed convex sets of a Hausdorff topological vector space. In Kassay and Kolumbán (2000), the system problem is dealt with for only two inequalities, a treatment that is generalized in Censor et al. (2012).

In the study of a solution to variational equations or systems of variational equations, a wide range of techniques is used, including those of the minimax type (Bigi et al. 2019; Fan 1972; Park 1985), or those that use fixed point results and their associated iterative methods, such as those we detail next. In Ansari and Yao (1999), the authors prove the existence of a solution to certain variational systems by using a multivalued fixed point theorem. In Zhao et al. (2010) one can find a proof of the existence of a solution to a variational system via the Brouwer fixed point theorem, as well as the construction of an iterative algorithm to approximate the unique solution to the system and a discussion of its convergence analysis.

Here, we use a minimax technique to prove the existence of a solution to the system of variational inequalities, Theorem 2.3, which, unlike the different results that appear in the articles mentioned above, characterizes the existence of solutions in closed (not necessarily convex) subsets.

Once the conditions that ensure the existence of a solution to the system of variational equations have been established, we deal with the inverse problem: assuming that a model depending on several parameters has been fixed, and that empirical solutions have been obtained, we try to approximate the parameters for which the empirical solutions best approximate the solution of the theoretical model.

Among the different approaches to solving inverse problems proposed in the literature, we rely on that of the so-called Collage theorem, which treats the forward problem as a fixed point problem and deduces its analysis from Banach’s fixed point theorem.

The first use of the collage method to solve an inverse problem can be found in Kunze and Vrscay (1999), where the authors minimize the distance between the target solution obtained from the direct method and its image under the corresponding operator. In Kunze et al. (2006) we can observe the importance of applying generalizations of the Collage theorem, since they allow us to reduce complicated inverse problems for PDEs to accessible optimization problems.

We follow the line of several proposed generalizations of the Collage theorem, supported by different versions of the Lax-Milgram theorem and established, for example, to solve inverse problems associated with different families of integral or ordinary differential equations (Capasso et al. 2014; Kunze and Vrscay 1999; Kunze et al. 2004; Kunze and Gomes 2003), or of partial differential equations (Berenguer et al. 2016; Garralda-Guillem and Ruiz Galán 2019, 2014; Kunze et al. 2009, 2010, 2015; Levere et al. 2013).

The paper is organized as follows. The next section presents our minimax tool and the variational system. Theorem 2.3 is the central point of that section, providing a characterization of the solvability of the variational system; moreover, from this theorem we derive a result which implies Stampacchia’s theorem. The following section begins with a collage-type result that is used in the numerical treatment of the inverse problem for a concrete example. To this end, we first propose a numerical approximation of the solution of the forward problem, and after that we describe the numerical method for the inverse problem. We show in different tables and graphs the results of both numerical methods. Finally, we end our paper with some conclusions.

2 The forward variational problem

In this section we deal with a result, Theorem 2.3, which generalizes the classical Stampacchia theorem: it characterizes the existence of a solution to a system of variational equations in terms of the existence of a certain scalar. We should also mention that minimax inequalities are a widely used technique in variational analysis: (Aubin 1998) is a good example. In Simons (1998), we see the equivalence between minimax results and the Hahn-Banach theorem, and how these results are used as functional analytic tools.

The fundamental tool to establish this direct result is the following minimax inequality (Kassay and Kolumbán 1996), which involves a mild convexity condition characterizing the validity of the minimax identity (Ruiz Galán 2014, 2016). This notion of weak convexity is called infsup-convexity, and it appeared for the first time, under the name of affine weak convexlikeness, in Stefanescu (2004). Infsup-convexity arises in a natural way when we deal with equilibrium and minimax problems.

Definition 2.1

If X and Y are nonempty sets, a function \(g: X \times Y \longrightarrow \mathbb {R}\) is called infsup-convex on Y provided that

$$\begin{aligned} \inf _{y \in Y} \max _{x \in X} g(x,y) \le \max _{x \in X} \sum _{j=1}^m t_j g(x, y_j), \end{aligned}$$

whenever \(m \ge 1\), \(y_1,\dots ,y_m \in Y\) and \(t \in \Delta _m\), where \(\Delta _m\) is the probability simplex, \(\Delta _m:= \{ (t_1,\dots ,t_m) \in \mathbb {R}^m : \ t_1,\dots ,t_m \ge 0 \hbox { and } \displaystyle \sum _{j=1}^m t_j =1 \}\).
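For instance, if Y is a convex subset of a vector space and \(g(x,\cdot )\) is convex for each \(x \in X\), then g is infsup-convex on Y, since for \(m \ge 1\), \(y_1,\dots ,y_m \in Y\) and \(t \in \Delta _m\),

$$\begin{aligned} \max _{x \in X} \sum _{j=1}^m t_j g(x, y_j) \ge \max _{x \in X} g\left( x, \sum _{j=1}^m t_j y_j \right) \ge \inf _{y \in Y} \max _{x \in X} g(x,y). \end{aligned}$$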

Clearly, infsup-convexity extends the concept of convex function, but also other types of weak convexity, such as convexlikeness (Fan 1953). Let us recall that a function \(f: X \times Y \longrightarrow \mathbb {R}\) is convexlike (or Fan-convex) on Y when for any \(y_1,y_2\in Y\), there exist \(y \in Y\) and \(0<t<1\) such that

$$\begin{aligned} x \in X \ \Rightarrow \ f(x,y) \le tf(x,y_1)+(1-t)f(x,y_2). \end{aligned}$$

The concept of infsup-convexity is used in the following minimax result (Kassay and Kolumbán 1996).

Theorem 2.2

Assume that X is a nonempty, convex and compact subset of a real topological vector space, Y is a nonempty set and \(g: X \times Y \longrightarrow \mathbb {R}\) is continuous and concave on X. Then,

$$\begin{aligned} \inf _{y \in Y} \max _{x \in X} g(x,y) = \max _{x \in X} \inf _{y \in Y} g(x,y) \end{aligned}$$

if, and only if, g is infsup-convex on Y.

This minimax inequality is of the Hahn-Banach type, in the sense that it is equivalent to this central result of the functional analysis. In fact, the Hahn-Banach theorem and some of its generalizations have also been used to prove some variational results (Saint Raymond 2018; Simons 2007), even for some systems of variational equations (Garralda-Guillem and Ruiz Galán 2019, 2014) that include, as a particular case, those corresponding to the mixed variational formulation of the classical Babuška-Brezzi theory (Boffi et al. 2013; Gatica 2014).

Now, we introduce in a precise way the forward problem involving a system of variational equations (Tables 1, 2). Let E be a real and reflexive Banach space, let \(Y_{1}, \dots , Y_{N}\) be closed and nonempty subsets of E, \(\ Y :=\prod _{i=1}^N Y_i \), let \(x^{*}_1:E \longrightarrow \mathbb {R}, \dots , \, x^{*}_N:E \longrightarrow \mathbb {R}\) be continuous and linear functionals and let \(a_1:E \times E \longrightarrow \mathbb {R}, \dots , \, a_N:E \times E \longrightarrow \mathbb {R}\) be continuous bilinear forms. We consider the following variational problem: find an \(\ x_0 \in \overline{Y}:= \bigcap ^N_{i=1} Y_i\) such that

$$\begin{aligned} y \in Y \ \Rightarrow \ \ \left\{ \begin{array}{lll} x^{*}_1(y_1-x_0) &{} \le &{} a_1(y_1,y_1-x_0)\\ x^{*}_2(y_2-x_0) &{} \le &{} a_2(y_2,y_2-x_0)\\ \; \; \; \; \; \; \; \; \; \; \vdots &{} \; \vdots &{} \; \; \; \; \; \; \; \; \; \; \vdots \\ x^{*}_N(y_N-x_0) &{} \le &{} a_N(y_N,y_N-x_0)\\ \end{array}. \right. \end{aligned}$$
(1)

In order to study this system, we denote by \(x^{*}\) the linear and continuous functional defined on Y as

$$\begin{aligned} x^{*}(y):= x^{*}_1(y_1)+ \cdots +x^{*}_N(y_N), \qquad (y \in Y ), \end{aligned}$$

and let \(a:E^N \times E^N \rightarrow \mathbb {R}\) be the continuous and bilinear form

$$\begin{aligned} a(x,y):=a_1(x_1,y_1)+ \cdots + a_N(x_N,y_N), \qquad ((x,y) \in E^N \times E^N). \end{aligned}$$

First, let us verify that problem (1) is equivalent to finding an \(x_0 \in \overline{Y}\) fulfilling the following condition for each \(y \in \ Y \), where \(\overline{x}_0\) denotes the vector \((x_0,\dots ,x_0)\):

$$\begin{aligned} x^{*}(y-\overline{x}_0) \le a(y,y-\overline{x}_0). \end{aligned}$$
(2)

The fact that (1) implies (2) follows by summing the inequalities and using the definitions of \( x^{*}\) and a. For the opposite implication, it suffices to take \((y_1,x_0,\dots ,x_0)\) as an element of Y in (2) to obtain the first inequality of (1); we derive the other inequalities with the same reasoning.

We now present the characterization mentioned at the beginning of this section, which extends the single-equation case previously established in [29]:

Theorem 2.3

Assume that E is a real and reflexive Banach space, that \(Y_{1}, \dots , Y_{N}\) are nonempty and closed subsets of E, and that \(x^{*}_1:E \longrightarrow \mathbb {R}, \dots , \, x^{*}_N:E \longrightarrow \mathbb {R}\) are continuous and linear functionals, and define

$$\begin{aligned} x^{*}(y):= x^{*}_1(y_1)+ \cdots +x^{*}_N(y_N), \qquad \left( y \in E^N \right) . \end{aligned}$$

Let \(a_1:E \times E \longrightarrow \mathbb {R}, \dots , \, a_N:E \times E \longrightarrow \mathbb {R}\) be continuous bilinear forms and let

$$\begin{aligned} a(x,y):=a_1(x_1,y_1)+ \cdots + a_N(x_N,y_N), \qquad ((x,y) \in E^N \times E^N). \end{aligned}$$

Then, we have that there exists \(\ x_0 \in \overline{Y}=\bigcap ^N_{i=1} Y_i\) fulfilling the following system

$$\begin{aligned} y \in Y \ \Rightarrow \ \ \left\{ \begin{array}{lll} x^{*}_1(y_1-x_0) &{} \le &{} a_1(y_1,y_1-x_0)\\ x^{*}_2(y_2-x_0) &{} \le &{} a_2(y_2,y_2-x_0)\\ \; \; \; \; \; \; \; \; \; \; \vdots &{} \; \vdots &{} \; \; \; \; \; \; \; \; \; \; \vdots \\ x^{*}_N(y_N-x_0) &{} \le &{} a_N(y_N,y_N-x_0)\\ \end{array}, \right. \end{aligned}$$

if, and only if, for some \(\alpha \ge 0\), \(\overline{Y} \cap \alpha B_E \ne \emptyset \), and the next inequality holds:

$$\begin{aligned} \left. \begin{array}{c} m \ge 1, \ t \in \Delta _m \\ y_1, \dots ,y_m \in Y \end{array} \right\} \ \Rightarrow \ \sum _{j=1}^m t_j (x^*(y_j)-a(y_j,y_j)) \le \max _{x \in \overline{Y} \cap \alpha {B_E}} \left( \sum _{i=1}^N x_{i}^*(x)-a\left( \sum _{j=1}^m t_j y_j,\overline{x} \right) \right) , \end{aligned}$$
(3)

where \(\overline{x}=(x,\dots ,x)\).

Proof

We have that (1) \(\Rightarrow \) (3) just by taking \(\alpha :=\Vert x_0\Vert \).

For (3) \(\Rightarrow \) (1), let \(X:= \overline{Y} \cap \alpha B_E\) and \(Y=\displaystyle \prod _{i=1}^N Y_i\); choosing \(m=1\), from (3) we obtain

$$\begin{aligned} \begin{array}{rl} 0 &{} \le \displaystyle \inf _{y \in Y} \left( a(y,y)-x^*(y) + \max _{x \in X} \left( \sum _{i=1}^N x_{i}^*(x)-a(y,\overline{x})\right) \right) \\ &{} = \displaystyle \inf _{y \in Y} \max _{x \in X} (a(y,y-\overline{x})-x^*(y- \overline{x})). \end{array} \end{aligned}$$

Let

$$\begin{aligned} \mu := \displaystyle \inf _{y \in Y} \max _{x \in X} (a(y,y- \overline{x})-x^*(y- \overline{x})). \end{aligned}$$

If \(\mu =-\infty \) there is nothing to prove. Otherwise, let \(f:X \times Y \rightarrow \mathbb {R}\) be the function defined as

$$\begin{aligned} f(x,y):=a(y,y- \overline{x})-x^{*}(y- \overline{x})-\mu , \; \; (x \in X, \ y \in Y). \end{aligned}$$

From (3) it follows

$$\begin{aligned} \left. \begin{array}{c} m \ge 1, \ t \in \Delta _m \\ y_1, \dots ,y_m \in Y \end{array} \right\} \ \Rightarrow 0 \le \max _{x \in X} \sum _{j=1}^m t_j f(x,y_j), \end{aligned}$$

and as we have

$$\begin{aligned} 0=\displaystyle \inf _{y \in Y} \max _{x \in X} f(x,y), \end{aligned}$$

by Theorem 2.2 the maximum is attained, and as a consequence, there exists \(x_0 \in X\) satisfying

$$\begin{aligned} \inf _{y \in Y} \max _{x \in X} f(x,y) \le \inf _{y \in Y} f(x_0,y), \end{aligned}$$

which is equivalent to the inequality system (1). \(\square \)

Before stating our next result, we recall a technical lemma. Although it is proven in (Capatina 2014, Lemma 4.1) for Hilbert spaces, it remains valid for Banach spaces.

Lemma 2.4

Let E be a Banach space and let Y be a nonempty closed convex subset of E. Assume that \(a:E \times E \longrightarrow \mathbb {R}\) is a continuous bilinear form such that \(a(x,x) \ge 0\) for all \(x \in E\), and that \(x^{*}:E \longrightarrow \mathbb {R}\) is a continuous and linear functional. Then the problem of finding \(u \in Y\) such that

$$\begin{aligned} y \in Y \ \Rightarrow \ x^{*}(y-u) \le a(y,y-u), \end{aligned}$$

is equivalent to that of finding \(u \in Y\) satisfying

$$\begin{aligned} y \in Y \ \Rightarrow \ x^{*}(y-u) \le a(u,y-u). \end{aligned}$$

If we add certain more restrictive hypotheses to Theorem 2.3, we can express condition (3) equivalently in a simpler way, extending the system version of the Stampacchia theorem.

Corollary 2.5

Let E be a real and reflexive Banach space, let \(Y_{1}, \dots , Y_{N}\) be nonempty closed and convex subsets of E, let \(x^{*}_1:E \longrightarrow \mathbb {R}, \dots , \, x^{*}_N:E \longrightarrow \mathbb {R}\) be continuous and linear functionals and let \(a_1:E \times E \longrightarrow \mathbb {R}, \dots , \, a_N:E \times E \longrightarrow \mathbb {R}\) be continuous bilinear forms. Let

$$\begin{aligned} x^{*}(y):= x^{*}_1(y_1)+ \cdots +x^{*}_N(y_N), \qquad \left( y \in E^N \right) , \end{aligned}$$

and

$$\begin{aligned} a(x,y):=a_1(x_1,y_1)+ \cdots + a_N(x_N,y_N), \qquad ((x,y) \in E^N \times E^N), \end{aligned}$$

and suppose that

$$\begin{aligned} x \in E \ \Rightarrow \ a(x,x) \ge 0. \end{aligned}$$

Then, there exists \(\displaystyle x_0 \in \overline{Y}=\bigcap ^N_{i=1} Y_i\) such that

$$\begin{aligned} y \in Y:=\prod _{i=1}^N Y_i \; \Rightarrow \; \; x^{*}(y- \overline{x}_0) \le a(\overline{x}_0,y- \overline{x}_0) \end{aligned}$$
(4)

if, and only if, there exists \(\alpha \ge 0\) fulfilling \(\overline{Y} \cap \alpha B_E \ne \emptyset \) and

$$\begin{aligned} y \in Y \; \Rightarrow \; \; x^*(y)-a(y,y) \le \displaystyle \sup _{x \in \overline{Y} \cap \alpha B_E} (x^*(\overline{x})-a(y,\overline{x})). \end{aligned}$$
(5)

Proof

First, let us observe that conditions (4) and (2) are equivalent, thanks to Lemma 2.4 applied with \(u=(x_0,\dots ,x_0)\) and \(y=(y_1,\dots ,y_N)\). Therefore, we must prove that conditions (5) and (3) are equivalent.

On the one hand, if we take \(m=1\) in (3), then we obtain (5). On the other hand, suppose that (5) is valid and let \(m \ge 1\), \(t \in \Delta _m\) and \(y_1, \dots ,y_m \in Y\). Then

$$\begin{aligned} \begin{array}{rl} \displaystyle \sum _{j=1}^m t_j (x^*(y_j)-a(y_j,y_j)) &{} = \displaystyle \sum _{j=1}^m t_j x^*(y_j) - \sum _{j=1}^m t_j a(y_j,y_j) \\ &{} \le \displaystyle x^* \left( \sum _{j=1}^m t_j y_j \right) -a \left( \sum _{j=1}^m t_j y_j , \sum _{j=1}^m t_j y_j \right) \\ &{} \displaystyle \le \sup _{x \in \overline{Y} \cap \alpha {B_E}} \left( x^*(\overline{x})- a \left( \sum _{j=1}^m t_j y_j , \overline{x} \right) \right) , \end{array} \end{aligned}$$

where we have used the convexity of Y, the convexity of the quadratic form associated with the bilinear form a, and (5). \(\square \)

We conclude this section by proving that the system version of the classical Stampacchia theorem is a consequence of Corollary 2.5. Indeed, assume that E is a real Hilbert space, that \(Y_1,\dots ,Y_N\) are nonempty closed and convex subsets of E, that \(x^{*}_1:E \longrightarrow \mathbb {R}, \dots , \, x^{*}_N:E \longrightarrow \mathbb {R}\) are continuous and linear functionals and that \(a_1:E \times E \longrightarrow \mathbb {R}, \dots , \,a_N:E \times E \longrightarrow \mathbb {R}\) are bilinear, continuous and coercive forms. With the notation above, let \(x^{*}\) be the continuous and linear functional defined as

$$\begin{aligned} x^{*}(y):= x^{*}_1(y_1)+ \cdots +x^{*}_N(y_N), \qquad (y \in E^N), \end{aligned}$$

and let \(a:E^N \times E^N \rightarrow \mathbb {R}\) be the continuous and bilinear form

$$\begin{aligned} a(x,y):=a_1(x_1,y_1)+ \cdots + a_N(x_N,y_N), \qquad ((x,y) \in E^N \times E^N). \end{aligned}$$

Let us note that, given \(\alpha \ge 0\) and a vector \(x \in \alpha B_E\), we can select, without loss of generality, the norm of \(E^N\) appropriately (for instance, the maximum norm) so that \(\Vert \overline{x} \Vert = \Vert x \Vert \le \alpha \).

In addition, if \(\rho _1, \dots ,\rho _N\) are the coercivity constants of \(a_1, \dots , a_N\) and \(x \in E\), then we have \((\rho _1+ \cdots +\rho _N) \Vert x \Vert ^2 \le a(x,x)\).

Let \(\beta > 0\) be such that \(\overline{Y} \cap \beta B_E \ne \emptyset \) and let \(y \in Y:=\displaystyle \prod _{i=1}^N Y_i\). Then

$$\begin{aligned} \begin{array}{rl} \displaystyle \frac{x^*(y)-a(y,y)}{\Vert y \Vert }-\frac{\displaystyle \sup _{x \in \overline{Y} \cap \beta {B_E}}(x^*(\overline{x})-a(y,\overline{x}))}{\Vert y \Vert } &{} \le \Vert x^*\Vert -(\rho _1+ \cdots +\rho _N) \Vert y \Vert + \displaystyle \beta \frac{\Vert x^*-a(y,\cdot )\Vert }{\Vert y\Vert } \\ &{} \le \Vert x^* \Vert \left( 1 + \displaystyle \frac{\beta }{\Vert y \Vert } \right) + \beta \Vert a \Vert -(\rho _1+ \cdots +\rho _N) \Vert y \Vert . \end{array} \end{aligned}$$

Taking \(\alpha > \beta \) large enough, it follows that \(\overline{Y} \cap \alpha B_E \ne \emptyset \) and

$$\begin{aligned} y \in Y, \ \Vert y \Vert > \alpha \ \Rightarrow \ x^*(y)-a(y,y) \le \sup _{x \in \overline{Y} \cap \alpha {B_E}} (x^*( \overline{x} )-a(y, \overline{x} )), \end{aligned}$$

and we arrive at (5).

3 The inverse variational problem

To solve the inverse problem associated with the system of variational inequalities (1), we will make use of the following collage-type result, which can be proved as a consequence of Stampacchia’s theorem for a system of inequalities. In order to avoid expository complications, we first introduce the following notation: for a real Banach space E, \(E^*\) is its topological dual space. Moreover, if J is a nonempty set and, for each \(j \in J\), \({x_j^1}^*,\dots ,{x_j^N}^* \in E^*\) and \(a_j^1,\dots ,a_j^N: E \times E \longrightarrow \mathbb {R}\) are continuous bilinear forms, we denote by \(({x_j^1}^*,\dots ,{x_j^N}^*)\) and \((a_j^1,\dots ,a_j^N)\) the continuous and linear functional

$$\begin{aligned} {x_j^1}^*(y_1)+ \cdots +{x_j^N}^{*}(y_N), \qquad (y \in E^N), \end{aligned}$$

and the continuous bilinear form

$$\begin{aligned} a_j^1(x_1,y_1)+ \cdots + a_j^N(x_N,y_N), \qquad ((x,y) \in E^N \times E^N), \end{aligned}$$

respectively.

Theorem 3.1

Let J be a nonempty set and let \(Y_1,\dots ,Y_N\) be closed and convex nonempty subsets of the Hilbert space E. For each \(j \in J\) and \(i \in \left\{ 1, \dots , N \right\} \), let \({x_j^1}^*,\dots ,{x_j^N}^* \in E^*\) and let \(a_j^1,\dots ,a_j^N: E \times E \longrightarrow \mathbb {R}\) be continuous bilinear forms for which there exist positive constants \(\rho ^1_j,\dots ,\rho ^N_j\) such that

$$\begin{aligned} y \in E \Rightarrow \ \rho ^i_j \Vert y \Vert ^2 \le a^i_j(y,y). \end{aligned}$$

If \(x_j^*:=({x_j^1}^*,\dots , {x_j^N}^*)\), \(a_j:=(a^1_j,\dots ,a^N_j)\), \(Y:=Y_1 \times \cdots \times Y_N\) and \(\overline{x}_j\) is a solution of the system

$$\begin{aligned} y \in Y \ \Rightarrow \ x_j^*(y-\overline{x}_j) \le a_j(\overline{x}_j,y-\overline{x}_j), \end{aligned}$$

then,

$$\begin{aligned} y \in Y, \ j \in J \ \Rightarrow \ \Vert y-\overline{x}_j \Vert \le \frac{\Vert a_j(y,\cdot )-x_j^*\Vert }{(\rho ^1_j+\cdots +\rho ^N_j)}. \end{aligned}$$

Proof

Given \(y \in Y\) and \( j \in J\) and taking into account Corollary 2.5, we have

$$\begin{aligned} \begin{array}{rl} (\rho ^1_j+\cdots +\rho ^N_j) \Vert y-\overline{x}_j \Vert ^2 &{} \le a_j(y-\overline{x}_j,y-\overline{x}_j) \\ &{} = a_j(y,y-\overline{x}_j)-a_j(\overline{x}_j,y-\overline{x}_j) \\ &{} \le a_j(y,y-\overline{x}_j)-x_j^*(y-\overline{x}_j) \\ &{} = (a_j(y,\cdot )-x_j^*)(y-\overline{x}_j) \\ &{} \le \Vert a_j(y,\cdot )-x_j^*\Vert \Vert y-\overline{x}_j\Vert . \end{array} \end{aligned}$$

\(\square \)

In Berenguer et al. (2016), Kunze et al. (2009), Kunze et al. (2015) and Kunze et al. (2012), we can observe the idea that we use for the application of this result in the resolution of the inverse problem. This reasoning has previously been used in a similar way, with the Banach fixed point theorem, in Kunze and Gomes (2003).

We finish our work with the following example, which consists of two clearly defined parts. The first deals with solving the forward problem using Galerkin’s method. To this end, we will work with a certain Schauder basis, a very versatile tool whose use can be observed in differential and integral problems (Berenguer et al. 2016, 2004; Gámez et al. 2005, 2009; Garralda-Guillem and Ruiz Galán 2019, 2014). The second part of this example, and also its main objective, is the numerical treatment of the inverse problem, where the target functions are obtained thanks to the Galerkin method previously described (see Figs. 1, 2, 3, 4).

Example 3.2

Let \(E:=H^1(0,1)\), let \(\lambda _1, \lambda _2\) be positive reals, let \(\alpha _1,\alpha _2,\beta _1,\beta _2 \in \mathbb {R}\) and let \(f,g \in L^\infty (0,1)\). We introduce the boundary value problem:

$$\begin{aligned} \left\{ \begin{array}{lll} -u''(x)+ \lambda _1 u(x)=f(x) \quad \hbox {on } (0,1)\\ -v''(x)+ \lambda _2 v(x)=g(x) \quad \hbox {on } (0,1)\\ u(0)=\alpha _1, v(0)=\alpha _2, u(1)=\beta _1, v(1)=\beta _2 \end{array} . \right. \end{aligned}$$

Considering the convex set

$$\begin{aligned} Y:=\left\{ (w_1,w_2) \in E^2: w_1(0)=\alpha _1,w_2(0)=\alpha _2, w_1(1)=\beta _1, w_2(1)=\beta _2 \right\} , \end{aligned}$$

and using a standard argument, we obtain the variational formulation of the previous system. Namely, for all \((w_1,w_2) \in Y\) it is satisfied that

$$\begin{aligned} \begin{array}{rl} \displaystyle \int ^1_0 u'(w_1-u)'+\int ^1_0 v'(w_2-v)'+ \lambda _1 \int ^1_0 u(w_1-u) + \lambda _2 \int ^1_0 v(w_2-v) &{} \ge \\ \displaystyle \int ^1_0 f(w_1-u)+g(w_2-v). \end{array} \end{aligned}$$
(6)

We define the bilinear, coercive and continuous form \(a:E^2 \times E^2 \rightarrow \mathbb {R}\)

$$\begin{aligned} a((x_1,x_2),(y_1,y_2)):= \int ^1_0 (x'_1 y'_1 +x'_2 y'_2) + \lambda _1 \int ^1_0 x_1 y_1 + \lambda _2 \int ^1_0 x_2 y_2, \; \; \; \; ((x_1,x_2),(y_1,y_2)) \in E^2 \times E^2, \end{aligned}$$

and the functional \(x^{*}:E^2 \longrightarrow \mathbb {R}\)

$$\begin{aligned} x^{*}(x,y):=\int ^1_0 (fx + gy) , \; \; ((x,y) \in E^2). \end{aligned}$$

The vectorial version of Stampacchia’s theorem ensures the existence of a solution to the variational inequality (6).

To show an example of the forward problem by using our numerical method, we define

$$\begin{aligned} f(x):= \left( e - \dfrac{1}{5} \right) e^{\frac{x^2}{10}} - \frac{x^2}{25}e^{\frac{x^2}{10}}, \;\; (x \in \left[ 0,1\right] ), \end{aligned}$$

and

$$\begin{aligned} g(x):=-2 \cos (x+1)^2+\frac{\pi }{2}\sin (x+1)^2+4(1+x)^2 \sin (x+1)^2 , \;\; (x \in \left[ 0,1\right] ). \end{aligned}$$

We choose \((\lambda _1,\lambda _2)=(e,\frac{\pi }{2})\). In order to use an appropriate Galerkin method, we take \(z_1=w_1-u\) and \(z_2=w_2-v\) in (6), and we obtain for each \((z_1,z_2) \in H^1_0 (0,1) \times H^1_0 (0,1)\)

$$\begin{aligned} \int ^1_0 u'z'_1+\int ^1_0 v'z'_2+ \lambda _1 \int ^1_0 u z_1 + \lambda _2 \int ^1_0 v z_2 \ge \int ^1_0 fz_1+gz_2. \end{aligned}$$
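Although the text does not state them explicitly, with the choice \((\lambda _1,\lambda _2)=(e,\frac{\pi }{2})\) the data f and g above are consistent with the exact solutions \(u(x)=e^{x^2/10}\) and \(v(x)=\sin (x+1)^2\) (our own inference, matching the boundary values used later for the inverse problem). A quick symbolic check, sketched with sympy:

```python
import sympy as sp

x = sp.symbols('x')
lam1, lam2 = sp.E, sp.pi / 2

# Candidate exact solutions (inferred from f and g, not stated explicitly in the text):
u = sp.exp(x**2 / 10)
v = sp.sin((x + 1)**2)

f = (sp.E - sp.Rational(1, 5)) * sp.exp(x**2 / 10) - (x**2 / 25) * sp.exp(x**2 / 10)
g = (-2 * sp.cos((x + 1)**2) + (sp.pi / 2) * sp.sin((x + 1)**2)
     + 4 * (1 + x)**2 * sp.sin((x + 1)**2))

# Residuals of -w'' + lam*w - rhs must vanish identically on (0,1)
res_u = sp.simplify(-sp.diff(u, x, 2) + lam1 * u - f)
res_v = sp.simplify(-sp.diff(v, x, 2) + lam2 * v - g)
print(res_u, res_v)  # 0 0
```

One also checks \(u(0)=1\), \(u(1)=e^{1/10}\), \(v(0)=\sin 1\) and \(v(1)=\sin 4\), in agreement with the boundary values \(\alpha _1,\alpha _2,\beta _1,\beta _2\) imposed in the inverse problem below.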

To design the Galerkin method, we consider the Haar system \(\left\{ h_k \right\} _{k \ge 1} \) in \(L^2(0,1)\). We define

$$\begin{aligned} g_1(x):=1, \;\; (x \in [0,1]), \end{aligned}$$

and for \(k\ge 2\)

$$\begin{aligned} g_k(x):=\int _0^x h_{k-1} (t) dt \;\; (x \in [0,1]). \end{aligned}$$

As a Schauder basis of \(H^1(0,1)\) we use \(\left\{ g_k \right\} _{k \ge 1}\), and for \(H^1_0(0,1)\) we take \(\left\{ g_{k+2} \right\} _{k \ge 1}\).
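A minimal numerical sketch of these integrated-Haar basis functions (our own, using the usual dyadic ordering \(h_1 \equiv 1\), \(h_{1+2^j+i}\) for the wavelet at level j, position i):

```python
import numpy as np

def haar(k, x):
    """k-th Haar function on [0,1] (k >= 1): h_1 = 1; for k >= 2 write
    k - 1 = 2**j + i (j >= 0, 0 <= i < 2**j) to get the (j, i) wavelet."""
    x = np.asarray(x, dtype=float)
    if k == 1:
        return np.ones_like(x)
    j = int(np.log2(k - 1))
    i = k - 1 - 2**j
    a, m, b = i / 2**j, (i + 0.5) / 2**j, (i + 1) / 2**j
    out = np.zeros_like(x)
    out[(a <= x) & (x < m)] = 2.0**(j / 2)
    out[(m <= x) & (x < b)] = -2.0**(j / 2)
    return out

def schauder(k, x):
    """g_1 = 1 and g_k(x) = int_0^x h_{k-1}(t) dt for k >= 2; for k >= 3
    these are tent functions vanishing at 0 and 1 (a basis of H^1_0(0,1))."""
    x = np.asarray(x, dtype=float)
    if k == 1:
        return np.ones_like(x)
    if k == 2:
        return x.copy()                     # integral of h_1 = 1
    j = int(np.log2(k - 2))                 # integrate the (k-1)-th Haar function
    i = k - 2 - 2**j
    a, m, b = i / 2**j, (i + 0.5) / 2**j, (i + 1) / 2**j
    h = 2.0**(j / 2)
    out = np.zeros_like(x)
    up, down = (a <= x) & (x < m), (m <= x) & (x < b)
    out[up] = h * (x[up] - a)
    out[down] = h * (b - x[down])
    return out
```

One checks that \(g_k(0)=g_k(1)=0\) for \(k \ge 3\), as required for the basis \(\left\{ g_{k+2} \right\} _{k \ge 1}\) of \(H^1_0(0,1)\).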

We have made use of Galerkin’s method to solve the m-dimensional variational problem in the subspaces generated by \(\left\{ g_3,\dots ,g_{m+2} \right\} \). In Table 1 we show the behavior of the approximation in terms of the errors made in the corresponding spaces, where \((u_m, v_m)\) is the solution obtained for the m-dimensional problem. We also present some graphs that illustrate the exact solutions, their approximations and the differences between these functions.
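In outline, each m-dimensional problem reduces to a linear system for the coefficients of the basis expansion. The following sketch (entirely ours) illustrates the idea for a single scalar equation \(-u''+\lambda u = f\); for brevity it uses standard piecewise-linear hat functions instead of the Haar-integrated Schauder basis, and lifts the boundary data with an affine function:

```python
import numpy as np

def galerkin_1d(f, lam, alpha, beta, m, nq=4000):
    """Galerkin approximation of -u'' + lam*u = f on (0,1), u(0)=alpha,
    u(1)=beta, in the affine space w0 + span of m interior hat functions."""
    xq = np.linspace(0.0, 1.0, nq + 1)              # fine quadrature grid
    dx = xq[1] - xq[0]
    nodes = np.linspace(0.0, 1.0, m + 2)
    h = nodes[1] - nodes[0]
    phi = np.array([np.maximum(0.0, 1.0 - np.abs(xq - nodes[i]) / h)
                    for i in range(1, m + 1)])      # hats vanish at 0 and 1
    dphi = np.gradient(phi, dx, axis=1)             # approximate derivatives
    w0 = alpha + (beta - alpha) * xq                # affine boundary lift
    dw0 = np.full_like(xq, beta - alpha)
    W = np.full_like(xq, dx); W[0] *= 0.5; W[-1] *= 0.5   # trapezoid weights
    # Stiffness + mass: A_kl = int phi_k' phi_l' + lam * int phi_k phi_l
    A = (dphi * W) @ dphi.T + lam * (phi * W) @ phi.T
    # Load: b_k = int f phi_k - int w0' phi_k' - lam * int w0 phi_k
    b = (phi * W) @ f(xq) - (dphi * W) @ dw0 - lam * (phi * W) @ w0
    c = np.linalg.solve(A, b)
    return xq, w0 + c @ phi
```

With f and \(\lambda _1 = e\) as above, and boundary values consistent with our inferred reference solution \(e^{x^2/10}\), the computed approximation approaches that function as m grows.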

Fig. 1 (a) Exact solution u and its approximations for \(m=3,15\) and 63. (b) Difference between u and \(u_{3}\), \(u_{15}\), \(u_{63}\)

Fig. 2 (a) Exact solution \(u'\) and its approximations for \(m=3,15\) and 63. (b) Difference between \(u'\) and \(u'_{3}\), \(u'_{15}\), \(u'_{63}\)

Fig. 3 (a) Exact solution v and its approximations for \(m=3,15\) and 63. (b) Difference between v and \(v_{3}\), \(v_{15}\), \(v_{63}\)

Fig. 4 (a) Exact solution \(v'\) and its approximations for \(m=3,15\) and 63. (b) Difference between \(v'\) and \(v'_{3}\), \(v'_{15}\), \(v'_{63}\)

Table 1 Errors of the forward method

Finally, we present the treatment of the inverse problem, with the notation of Theorem 3.1. The method we follow is this: for each \(j \in J\) we obtain a solution of the forward problem, which we write as \(\overline{x}_j\). In the inverse problem, we try to find, whenever possible, a \(j_0 \in J\) such that

$$\begin{aligned} \Vert y-\overline{x}_{j_0}\Vert =\inf _{j \in J} \Vert y-\overline{x}_j\Vert . \end{aligned}$$

Let us take into account that if Y is a closed affine subset of E and \(y \in Y\) is a target element, then, under the condition \(\displaystyle \inf _{j \in J}(\rho ^1_j+\cdots +\rho ^N_j)>0\), we can solve our problem if we are able to solve

$$\begin{aligned} \inf _{j \in J} \Vert (a_j(y,\cdot )-x_j^*)_{| (Y-Y)}\Vert . \end{aligned}$$
(7)

To show a concrete example, we take the variational inequality discussed above, with \(f(x):=\left( e - \dfrac{1}{5} \right) e^{\frac{x^2}{10}} - \frac{x^2}{25}e^{\frac{x^2}{10}}\), \(g(x):=-2 \cos (x+1)^2+\frac{\pi }{2} \sin (x+1)^2+4(1+x)^2 \sin (x+1)^2\), \((\lambda _1,\lambda _2) \in [0.5,3]\times [0.5,3]\) and \(\alpha _1=1,\alpha _2=\sin 1,\beta _1=e^{\frac{1}{10}},\beta _2=\sin 4 \), which trivially satisfy the hypotheses of Theorem 3.1.

Now, taking \(\lambda _1 = e\) and \(\lambda _2 = \frac{\pi }{2}\) we obtain, using the forward method described previously, the approximate solutions \((u_m,v_m)\) for different values of m. We consider these approximate solutions as targets for solving the inverse problem. For our example, we can write (7) as

$$\begin{aligned} \inf _{(\lambda _1,\lambda _2) \in [0.5,3]\times [0.5,3]} \sup _{{\begin{array}{c}\omega \in H^1_0(0,1)\\ \Vert \omega \Vert =1\end{array}}} |a_{j}((u_m,v_m),(\omega ,\omega ))-x^*(\omega ,\omega )|. \end{aligned}$$

We proceed to the discretization of our example: we write the element \(\omega \in H^1_0 (0,1)\) as a combination of the first n elements of the Schauder basis \(\left\{ g_{k+2}\right\} _{k \ge 1}\) of \(H^1_0(0,1)\) to obtain the minimization problem:

$$\begin{aligned} \inf _{(\lambda _1,\lambda _2) \in [0.5,3]\times [0.5,3]} \left| \sum _{k=1}^{n}\left( a_{j}((u_m,v_m),(g_{k+2},g_{k+2})) -x^*(g_{k+2},g_{k+2})\right) \right| . \end{aligned}$$

In Table 2 we present the different approximations of \((\lambda _1,\lambda _2)\) obtained by taking \(n=7\) in the above expression and considering different targets \((u_m,v_m)\).
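The grid-search minimization just described can be sketched in a self-contained way. The simplified version below (entirely ours) recovers only \(\lambda _1\) from the first equation, uses the exact u as target, and replaces the \(g_{k+2}\) by hat test functions; with exact data the collage residual is linear in \(\lambda _1\) and vanishes near \(\lambda _1=e\):

```python
import numpy as np

# Collage-type inverse sketch (ours): recover lam1 in -u'' + lam1*u = f from a
# target u by grid search on the weak-form residual against n test hats.
xq = np.linspace(0.0, 1.0, 4001)
dx = xq[1] - xq[0]
W = np.full_like(xq, dx); W[0] *= 0.5; W[-1] *= 0.5    # trapezoid weights

u = np.exp(xq**2 / 10)                                  # target (exact for lam1 = e)
du = (xq / 5) * np.exp(xq**2 / 10)
f = (np.e - 0.2) * np.exp(xq**2 / 10) - (xq**2 / 25) * np.exp(xq**2 / 10)

n = 7
nodes = np.linspace(0.0, 1.0, n + 2)
h = nodes[1] - nodes[0]
phi = np.array([np.maximum(0.0, 1.0 - np.abs(xq - nodes[i]) / h)
                for i in range(1, n + 1)])
dphi = np.gradient(phi, dx, axis=1)

def residual(lam):
    # sum over test functions of the collage residual a_lam(u, phi_k) - x*(phi_k)
    terms = (dphi * W) @ du + lam * (phi * W) @ u - (phi * W) @ f
    return abs(terms.sum())

lams = np.arange(0.5, 3.0, 0.001)
lam_hat = lams[np.argmin([residual(l) for l in lams])]
print(lam_hat)   # close to e = 2.71828...
```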

Table 2 Numerical results for the inverse problem

4 Conclusions

In this paper we have presented a numerical method to solve the inverse problem associated with a system of variational inequalities. To do this, we first used a minimax equality, Theorem 2.2, to prove a result, Theorem 2.3, which characterizes the existence of a solution to the system of inequalities.

To solve the inverse problem derived from the system of variational inequalities (1), we have used Theorem 3.1, a collage-type result which is a consequence of the version of the Stampacchia theorem for a system of variational inequalities.

Finally, we have illustrated these results by means of a numerical example, Example 3.2. First, we dealt with the forward problem by using a certain Schauder basis in the Galerkin method. In the programming we used the usual Schauder basis for the sake of simplicity. We could consider another basis with more regularity in order to improve the convergence.

Lastly, we have developed the numerical treatment of the inverse problem, showing in some graphs and tables the errors of the forward method and the approximations that we have obtained.