1 Introduction

The history of quadratic stochastic operators can be traced back to Bernstein’s work [1]. Nowadays, scientists are interested in these operators, since they have many applications, especially in population genetics [2, 3]. Moreover, quadratic stochastic operators have also served as a crucial tool for the study of dynamical properties and modeling in many different fields, such as biology [4–8], physics [9, 10], economics, and mathematics [3, 10–12].

The time evolution of species in biology can be comprehended by the following situation. Let \(I = \{1,2,\dots,n\}\) index the n types of species (or traits) in a population, let \(x^{(0)} = (x_{1}^{(0)},\dots,x_{n}^{(0)})\) denote the probability distribution of the species in an early state of that population, and let \(P_{ij,k}\) be the probability that individuals of the ith and jth species cross-fertilize and produce an individual of the kth species (trait). Given \(x^{(0)} = (x_{1}^{(0)},\dots,x_{n}^{(0)})\), we can find the probability distribution of the first generation, \(x^{(1)} = (x_{1}^{(1)},\dots,x_{n}^{(1)})\), by using the total probability formula, i.e.

$$\begin{aligned} x_{k}^{(1)}=\sum_{i,j=1}^{n} P_{ij,k}x_{i}^{(0)}x_{j}^{(0)}, \quad k\in\{1,\dots,n\}. \end{aligned}$$

This operator is denoted by the symbol V and is called a quadratic stochastic operator (q.s.o.). Starting from an arbitrary initial probability distribution of the population, \(x^{(0)}\), the population evolves to the probability distribution of the first generation, \(x^{(1)} = V(x^{(0)})\), then of the second generation, \(x^{(2)} = V(x^{(1)}) = V(V(x^{(0)}))=V^{(2)}(x^{(0)})\), and so on. Thus, the states (probability distributions) of the population can be described by the following dynamical system:

$$\begin{aligned} x^{(0)},\qquad x^{(1)} = V\bigl(x^{(0)}\bigr),\qquad {x}^{(2)} = V^{(2)}\bigl(x^{(0)}\bigr),\qquad {x}^{(3)} = V^{(3)}\bigl(x^{(0)}\bigr), \qquad \ldots. \end{aligned}$$
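For a computational view of this evolution, the following Python sketch (our own illustration; names such as `qso_step` are not from the literature) iterates a q.s.o. given the cubic array of heredity coefficients \(P_{ij,k}\):

```python
import numpy as np

def qso_step(P, x):
    """One generation of the q.s.o.: x'_k = sum_{i,j} P[i, j, k] * x_i * x_j."""
    # einsum contracts both population indices of P against the distribution x.
    return np.einsum('ijk,i,j->k', P, x, x)

def trajectory(P, x0, generations):
    """Return the orbit x^(0), V(x^(0)), ..., V^(generations)(x^(0))."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(generations):
        xs.append(qso_step(P, xs[-1]))
    return xs
```

Provided the coefficients are nonnegative, symmetric in i and j, and sum to 1 over k (these conditions are stated as (2.4) below), each step maps the simplex into itself.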

In other words, each q.s.o. describes the sequence of generations in terms of probability distributions, provided the values of \(P_{ij,k}\) and the distribution of the current generation are given. In [13], a self-contained exposition of recent achievements and open problems in the theory of q.s.o. is given. The main problem in nonlinear operator theory is to study the behavior of nonlinear operators. At present, only a small number of dynamical phenomena in higher dimensional systems are well understood, even though such systems are very important. In the case of a q.s.o., the difficulty of the problem depends on the given cubic matrix \((P_{ij,k})^{n}_{i,j,k=1}\). The asymptotic behavior of a q.s.o. is complicated even on small dimensional simplices [12, 14–17].

The concept of majorization was established in 1929 [18], even though the idea was introduced much earlier by Lorenz [19], Dalton [20], and Schur [21]. The theory was very important from an economic point of view, arising from the study of gaps in the income or wealth distribution in society; it later led to the ideas of the Lorenz curve and the principle of transfers. Moreover, Schur’s work on Hadamard’s determinant inequality also contributed to the development of majorization [21].

The idea of majorization kept occurring in other fields, such as chemistry and physics, but it appeared under different names, such as ‘x is more mixed than y’, ‘x is more chaotic than y’, and ‘x is more disordered than y’. One such example is given in [22].

Further, Parker and Ram [23] introduced a new order, also called majorization, referring to the majorization popularized by Hardy, Littlewood, and Pólya as classical majorization. This new order opened a path toward generalizing the majorization theory of Hardy et al. [18]. The new majorization has an advantage over the classical one, since it can be defined as a partial order on sequences, whereas classical majorization is not antisymmetric (any sequence is majorized by each of its permutations) and is therefore only a preorder [23].

Furthermore, one of the best known methods for solving optimization problems is the greedy method. This method is preferred because of its space- and time-efficiency, it covers crucial classes of optimization problems, and it usually provides a good approximation to the optimal solution. Many studies of greedy methods [24–27] introduced special classes of optimization problems and provided algorithms for them. Matroids and greedoids were used in the past to model greedy-solvable problems; unfortunately, not many problems could be treated this way. In [23], it was proven that the concept of majorization has a direct relation to the greedy method. Moreover, the same authors provided good examples of solving greedy-solvable problems such as the continuous knapsack, storage of files on tape, and job sequencing [23]. Note that Parker and Ram focused on describing the order’s classes defined on linear systems only. Hence, we are interested in investigating the quadratic case.

In [28] a definition of a bistochastic q.s.o. was proposed in terms of classical majorization (see [29]). Namely, a q.s.o. is called bistochastic (also called doubly stochastic) if \(V({\mathbf{x}}) \prec {\mathbf{x}}\) for all x taken from the \((n-1)\)-dimensional simplex. In [28, 30], necessary and sufficient conditions were given for the bistochasticity of a q.s.o. In general, a full description of such operators is still an open problem.

Therefore, in the present paper, we are motivated to use the majorization introduced in [23] to define a new class of bistochastic q.s.o. In order to differentiate between the two notions, we call the majorization of [23] the b-order, while classical majorization is simply called majorization. The main goal of this paper is to describe all such operators on the two dimensional simplex.

2 b-Order and b-bistochastic operators

Throughout this paper we consider the simplex

$$\begin{aligned} S^{n-1} = \Biggl\{ \mathbf{x}=(x_{1},x_{2}, \ldots,x_{n})\in\mathbb{R}^{n} \bigg|x_{i}\geq0, \sum _{i=1}^{n}x_{i} = 1 \Biggr\} . \end{aligned}$$
(2.1)

Define functionals \(\mathcal{U}_{k} : \mathbb{R}^{n} \rightarrow \mathbb{R}\) by

$$\begin{aligned} \mathcal{U}_{k} (x_{1},\dots, x_{n}) = \sum _{i=1}^{k}x_{i} \quad \mbox{where } k=\overline{1,n-1}. \end{aligned}$$
(2.2)

Let \({\mathbf{x}},{\mathbf{y}}\in S^{n-1}\). We say that x is b-ordered by y (\({\mathbf{x}}\leq^{b} {\mathbf{y}}\)) if and only if \(\mathcal{U}_{k}({\mathbf{x}}) \leq\mathcal{U}_{k}({\mathbf{y}})\), for all \(k \in\{1,\dots, n-1\}\).
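Computationally, the b-order is just a comparison of cumulative sums. A minimal sketch in Python (our naming; the tolerance guards against floating-point error):

```python
import numpy as np

def b_leq(x, y, tol=1e-12):
    """Return True iff x <=^b y, i.e. U_k(x) <= U_k(y) for k = 1, ..., n-1."""
    cx, cy = np.cumsum(x)[:-1], np.cumsum(y)[:-1]  # the functionals U_1, ..., U_{n-1}
    return bool(np.all(cx <= cy + tol))
```

For instance, b_leq([0.2, 0.3, 0.5], [0.5, 0.3, 0.2]) returns True, since (0.2, 0.5) lies componentwise below (0.5, 0.8).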

The introduced relation is indeed an order, i.e. it satisfies the following for any \({\mathbf{x}},{\mathbf{y}},{\mathbf{z}}\in S^{n-1}\):

(i) \({\mathbf{x}}\leq^{b} {\mathbf{x}}\);

(ii) \({\mathbf{x}}\leq^{b} {\mathbf{y}}\), \({\mathbf{y}}\leq^{b} {\mathbf{x}}\Longrightarrow {\mathbf{x}}= {\mathbf{y}}\);

(iii) \({\mathbf{x}}\leq^{b} {\mathbf{y}}\), \({\mathbf{y}}\leq^{b} {\mathbf{z}}\Longrightarrow {\mathbf{x}}\leq^{b} {\mathbf{z}}\).

Moreover, it has the following properties:

(i) One has \({\mathbf{x}}\leq^{b} {\mathbf{y}}\) if and only if \(\lambda {\mathbf{x}}\leq^{b} \lambda {\mathbf{y}}\) for any \(\lambda> 0\).

(ii) If \({\mathbf{x}}\leq^{b} {\mathbf{y}}\) and \(0\leq\lambda\leq\mu\), then \(\lambda {\mathbf{x}}\leq^{b} \mu {\mathbf{y}}\).

Using the defined order, one can define majorization [29]. First, recall that for any \({\mathbf{x}}= (x_{1}, x_{2}, \dots, x_{n}) \in S^{n-1}\), we define \({\mathbf{x}}_{[\downarrow]} = (x_{[1]}, x_{[2]}, \dots, x_{[n]})\), where

$$x_{[1]} \geq x_{[2]} \geq\cdots\geq x_{[n]} $$

is the nonincreasing rearrangement of x. Take two elements x and y in \(S^{n-1}\); then x is said to be majorized by y (or y majorizes x), denoted \({\mathbf{x}}\prec {\mathbf{y}}\) (or \({\mathbf{y}}\succ {\mathbf{x}}\)), if \({\mathbf{x}}_{[\downarrow]}\leq^{b}{\mathbf{y}}_{[\downarrow]}\). We refer the reader to [29] for more information on this topic. One sees that the b-order, unlike majorization, does not require passing to the nonincreasing rearrangement.
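Accordingly, majorization can be tested by sorting both vectors in nonincreasing order and comparing them in the b-order; a sketch reusing b_leq from above:

```python
import numpy as np

def majorized(x, y, tol=1e-12):
    """Return True iff x is majorized by y, i.e. x_[down] <=^b y_[down]."""
    return b_leq(np.sort(x)[::-1], np.sort(y)[::-1], tol)
```

For example, x = (0.5, 0.3, 0.2) and y = (0.2, 0.3, 0.5) majorize each other (they are permutations of one another), while y ≤b x holds but x ≤b y fails; this illustrates why classical majorization is only a preorder.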

Any operator V with \(V(S^{n-1})\subset S^{n-1}\) is called stochastic, and if V satisfies \(V({\mathbf{x}})\leq^{b}{\mathbf{x}}\) for all \({\mathbf{x}}\in S^{n-1}\), then it is called b-bistochastic. In what follows, we discuss nonlinear b-bistochastic operators. The simplest nonlinear operators are quadratic ones; therefore, we restrict ourselves to this class.

A stochastic operator \(V:S^{n-1} \rightarrow S^{n-1}\) is called a quadratic stochastic operator (q.s.o.) if V has the following form:

$$\begin{aligned} V({\mathbf{x}})_{k}= \sum_{i,j=1}^{n}P_{ij,k}x_{i}x_{j} ,\quad k = 1,2,\dots,n , \ {\mathbf{x}}=(x_{1},x_{2}, \dots,x_{n})\in S^{n-1}, \end{aligned}$$
(2.3)

where \(P_{ij,k}\) are coefficients with the following properties:

$$\begin{aligned} P_{ij,k}\geq0, \qquad P_{ij,k}=P_{ji,k},\qquad \sum _{k=1}^{n}P_{ij,k} = 1, \quad i,j,k=1,2,\ldots,n . \end{aligned}$$
(2.4)

Remark 2.1

In [28] a q.s.o. was introduced and studied with the property \(V({\mathbf{x}}) \prec {\mathbf{x}}\) for all \({\mathbf{x}}\in S^{n-1}\). Such an operator is called bistochastic. In our definition, we are taking the b-order instead of the majorization. Note that if one takes absolute continuity instead of the b-order, then the b-bistochastic operator reduces to a Volterra q.s.o. [3134].

Let us recall some preliminaries.

Remark 2.2

Let \(g(x)=mx+c \), then \(g(x) \leq0\) (respectively, \(g(x)\geq0\)) for all \(x \in[0,1]\) if and only if \(c \leq0\) (respectively, \(c \geq0\)) and \(m+c \leq0\) (respectively, \(m+c\geq0\)).

Let \(F : \mathbb{R}^{n} \rightarrow\mathbb{R}^{n}\) be a given mapping by

$$F({\mathbf{x}}) = \bigl(f_{1}({\mathbf{x}}),f_{2}({\mathbf{x}}),\ldots,f_{n}( {\mathbf{x}})\bigr), \quad {\mathbf{x}}\in {\mathbb{R}}^{n} . $$

If F is differentiable, then the Jacobian of F at a point x is defined by

$$J\bigl(F({\mathbf{x}})\bigr)= \begin{bmatrix} \frac{\partial f_{1}}{\partial x_{1}} & \ldots& \frac{\partial f_{1}}{\partial x_{n}} \\ \vdots& \ddots& \vdots\\ \frac{\partial f_{n}}{\partial x_{1}} & \ldots& \frac{\partial f_{n}}{\partial x_{n}} \end{bmatrix} . $$

A point \({\mathbf{x}}_{0}\) is called a fixed point of F if one has \(F({\mathbf{x}}_{0})={\mathbf{x}}_{0}\).

Definition 2.3

A fixed point \({\mathbf{x}}_{0}\) is called hyperbolic if the absolute value of every eigenvalue λ of the Jacobian at \({\mathbf{x}}_{0}\) satisfies \(|\lambda |\neq1\). Let \({\mathbf{x}}_{0}\) be a hyperbolic fixed point, then

1. \({\mathbf{x}}_{0}\) is called attractive if every eigenvalue λ of the Jacobian at \({\mathbf{x}}_{0}\) satisfies \(|\lambda |<1\);

2. \({\mathbf{x}}_{0}\) is called repelling if every eigenvalue λ of the Jacobian at \({\mathbf{x}}_{0}\) satisfies \(|\lambda |>1\).

Let us define

(i) \(\overline{{\mathbf{x}}}_{k} = (\underbrace{0,0,\dots,0}_{k},\frac{1}{n-k}, \frac{1}{n-k}, \dots, \frac{1}{n-k})\), where \(k\in\{0,\dots,n-1\}\);

(ii) \({\mathbf{e}}_{m} = (\underbrace{0,0, \dots, 0, 1}_{m}, 0,\dots, 0) \), \(m\in\{1,\dots,n\}\).

3 Properties of b-bistochastic q.s.o.

In this section, we provide some basic properties of a b-bistochastic q.s.o. in a general setting. In the following, we need an auxiliary result.

Lemma 3.1

The inequality

$$\begin{aligned} A_{1}x_{1}+\cdots+A_{n}x_{n}+C\leq0 \end{aligned}$$
(3.1)

holds under the condition \(0\leq x_{1}+\cdots+x_{n}\leq1\), \(x_{k}\geq0\), \(k\in\{1,\dots,n\}\) if and only if

(i) \(C\leq0\) and

(ii) \(A_{k}+C\leq0\), \(k=\overline{1,n}\).

The proof is straightforward: the left-hand side of (3.1) is affine in \((x_{1},\dots,x_{n})\), so its maximum over the given domain is attained either at the origin or at a vertex \({\mathbf{e}}_{k}\), which yields (i) and (ii), respectively.

Now we are ready to formulate several properties of a b-bistochastic q.s.o.

Theorem 3.2

Let V be a b-bistochastic q.s.o. defined on \(S^{n-1}\), then the following statements hold:

(i) \(\sum_{m=1}^{k}\sum_{i,j=1}^{n}P_{ij,m} \leq kn\), \(k \in\{1,\dots,n \}\);

(ii) \(P_{ij,k} =0\) for all \(i,j\in\{k+1,\dots,n\}\), where \(k\in\{1,\dots,n-1\}\);

(iii) \(P_{nn,n}=1\);

(iv) for every \({\mathbf{x}}\in S^{n-1}\) one has

$$\begin{aligned}& V({\mathbf{x}})_{k} = \sum_{l=1}^{k}P_{ll,k}x^{2}_{l} + 2 \sum_{l=1}^{k}\sum _{j=l+1}^{n}P_{lj,k}x_{l}x_{j} \quad \textit{where } k = \overline{1,n-1}, \\& V({\mathbf{x}})_{n} = x_{n}^{2} + \sum _{l=1}^{n-1}P_{ll,n}x^{2}_{l} + 2 \sum_{l=1}^{n-1}\sum _{j=l+1}^{n}P_{lj,n}x_{l}x_{j}; \end{aligned}$$

(v) \(P_{lj,l} \leq\frac{1}{2}\) for all \(j\geq l+1\), \(l\in\{1,\dots,n-1\}\).

Proof

(i) Consider the element \(\overline{{\mathbf{x}}}_{0} = (\frac{1}{n}, \frac{1}{n}, \dots, \frac{1}{n})\); then, due to the b-bistochasticity of V, we have \(\mathcal{U}_{k} (V(\overline{{\mathbf{x}}}_{0}))\leq\mathcal{U}_{k} (\overline {{\mathbf{x}}}_{0})\) for every \(k=\overline{1,n}\). Taking into account

$$V(\overline{{\mathbf{x}}}_{0})_{m} = \frac{1}{n^{2}}\sum _{i,j=1}^{n} P_{ij,m} \quad \mbox{for } m= \overline{1,n}, $$

one gets

$$\sum_{m=1}^{k}\frac{1}{n^{2}} \sum _{i,j=1}^{n}P_{ij,m}\leq \frac{k}{n} . $$

This implies

$$\sum_{m=1}^{k} \sum _{i,j=1}^{n}P_{ij,m}\leq kn . $$

(ii) Now, take \(\overline{{\mathbf{x}}}_{k}\). Then from the fact \(V(\overline{{\mathbf{x}}}_{k}) \leq^{b} \overline{{\mathbf{x}}}_{k}\), one finds \(V(\overline{{\mathbf{x}}}_{k})_{m} =0 \) for all \(m = \overline{1,k}\). This implies that

$$\frac{1}{(n-k)^{2}} \sum_{i,j= k+1}^{n}P_{ij,k} \leq0 . $$

Hence, \(P_{ij,k} =0\), for all \(i,j\in\{k+1,\dots,n\}\), where \(k=\overline{1,n-1}\).

The proof of (iii) is evident.

(iv) By using property (ii) for each \(k\in\{1,\dots,n-1\}\) we have

$$V({\mathbf{x}})_{k} = P_{11,k}x_{1}^{2} + \sum _{j=2}^{n}P_{1j,k}x_{1}x_{j}+ \sum_{i=2}^{n}P_{i1,k}x_{i}x_{1} + \cdots+ P_{kk,k}x_{k}^{2} + \sum _{j=k+1}^{n}P_{kj,k}x_{k}x_{j}+ \sum_{i=k+1}^{n}P_{ik,k}x_{i}x_{k} . $$

From \(P_{ij,k} = P_{ji,k}\), one finds

$$V({\mathbf{x}})_{k} = \sum_{l=1}^{k}P_{ll,k}x_{l}^{2}+2 \sum_{l=1}^{k}\sum _{j=l+1}^{n}P_{lj,k}x_{l}x_{j},\quad k = \overline {1,n-1 } . $$

If \(k=n\), then by the same argument, we get

$$V({\mathbf{x}})_{n} = x_{n}^{2} + \sum _{l=1}^{n-1}P_{ll,n}x^{2}_{l} + 2 \sum_{l=1}^{n-1}\sum _{j=l+1}^{n}P_{lj,n}x_{l}x_{j} . $$

(v) For each \(l\in\{1,\dots,n-1\}\), let us consider the vector \({\mathbf{y}}_{l}= (\underbrace{0,0,\ldots,0}_{l-1},y_{l},\dots, y_{n})\) belonging to \(S^{n-1}\). The b-bistochasticity implies that \(V({\mathbf{y}}_{l})_{k} = 0\) for all \(k \in\{1,\dots,l-1\}\), and hence \(V({\mathbf{y}}_{l})_{l}\leq y_{l}\). Due to property (iv), the last inequality reduces to

$$y_{l} \Biggl(P_{ll,l}y_{l}+2\sum _{j=l+1}^{n}P_{lj,l}y_{j}-1 \Biggr) \leq0, $$

which yields

$$ P_{ll,l}y_{l}+2\sum _{j=l+1}^{n}P_{lj,l}y_{j}-1 \leq0 . $$
(3.2)

Since \({\mathbf{y}}_{l}\in S^{n-1}\), we have \(y_{l} = 1- \sum_{j=l+1}^{n}y_{j}\) and (3.2) becomes

$$\sum_{j=l+1}^{n} (2P_{lj,l} - P_{ll,l})y_{j}+P_{ll,l}-1 \leq0 .$$

Now, by Lemma 3.1 one finds that \(P_{lj,l} \leq \frac{1}{2}\).

This completes the proof. □
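The necessary conditions just proved are easy to test numerically for a given cubic array of coefficients. The sketch below (our naming; the entry P[i, j, k] plays the role of \(P_{i+1\,j+1,k+1}\) in 0-based indexing) checks (i), (ii), (iii), and (v); a False result rules out b-bistochasticity, while True does not yet guarantee it:

```python
import numpy as np

def necessary_conditions(P, tol=1e-12):
    """Check conditions (i), (ii), (iii), (v) of Theorem 3.2 for an (n, n, n) array P."""
    n = P.shape[0]
    # (i): sum_{m=1}^{k} sum_{i,j} P_{ij,m} <= k*n for every k.
    for k in range(1, n + 1):
        if P[:, :, :k].sum() > k * n + tol:
            return False
    # (ii): P_{ij,k} = 0 whenever i, j >= k+1 (1-based), for k = 1, ..., n-1.
    for k in range(n - 1):
        if np.abs(P[k + 1:, k + 1:, k]).max() > tol:
            return False
    # (iii): P_{nn,n} = 1.
    if abs(P[-1, -1, -1] - 1.0) > tol:
        return False
    # (v): P_{lj,l} <= 1/2 for all j >= l+1.
    for l in range(n - 1):
        if P[l, l + 1:, l].max() > 0.5 + tol:
            return False
    return True
```

For \(n=3\), sufficiency is settled by Theorem 5.3 below.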

By \(\mathcal{V}_{b}\) we denote the set of all b-bistochastic q.s.o.

Proposition 3.3

The set \(\mathcal{V}_{b}\) is convex.

Proof

Take any \(\lambda \in[0,1]\) and \(V_{1},V_{2}\in \mathcal{V}_{b}\). Then from the b-bistochasticity of \(V_{1}\) and \(V_{2}\) one finds that

$$\begin{aligned}& \lambda\sum_{m=1}^{k}V_{1}( {\mathbf{x}})_{m} \leq\lambda\sum_{m=1}^{k}x_{m} , \qquad (1-\lambda) \sum_{m=1}^{k}V_{2}( {\mathbf{x}})_{m} \leq(1-\lambda) \sum_{m=1}^{k}x_{m},\\& \quad {\mathbf{x}}= (x_{1},\dots,x_{n})\in S^{n-1}. \end{aligned}$$

The last expression yields

$$\lambda\sum_{m=1}^{k}V_{1}( {\mathbf{x}})_{m} + (1-\lambda) \sum_{m=1}^{k}V_{2}( {\mathbf{x}})_{m} \leq \sum_{m=1}^{k}x_{m}, $$

which means that \(\lambda V_{1}+(1-\lambda )V_{2}\in \mathcal{V}_{b}\). The proof is completed. □

4 Limiting behavior of b-bistochastic q.s.o.

In this section, we are going to study the limiting behavior of a b-bistochastic q.s.o.

Theorem 4.1

Let V be a b-bistochastic q.s.o. defined on \(S^{n-1}\), then for every \({\mathbf{x}}\in S^{n-1}\) the limit \(\lim_{m \rightarrow\infty} V^{(m)}({\mathbf{x}})\) exists.

Proof

It is clear that \(V^{(m)}({\mathbf{x}})=(V^{(m)}({\mathbf{x}})_{1}, \dots, V^{(m)}({\mathbf{x}})_{n})\); therefore, it is enough to show that the limit of \(V^{(m)}({\mathbf{x}})_{k}\) exists for each \(k\in\{1,\dots,n\}\). We proceed by induction. First, consider \(k=1\). Then, by the definition of a b-bistochastic q.s.o., we obtain

$$\mathcal{U}_{1}\bigl(V^{(m+1)}({\mathbf{x}})\bigr) \leq \mathcal{U}_{1}\bigl(V^{(m)}({\mathbf{x}})\bigr) \quad \mbox{for all } m \in {\mathbb{N}}. $$

This implies that the sequence \(\{\mathcal{U}_{1}(V^{(m)}({\mathbf{x}}))\}\) is monotone decreasing. Due to its boundedness, we conclude that the limit \(\lim_{m \rightarrow\infty}\mathcal{U}_{1}(V^{(m)}({\mathbf{x}}))\) exists. This implies that

$$\lim_{m \rightarrow\infty }V^{(m)}({\mathbf{x}})_{1} $$

exists.

Next, assume that the limits \(\lim_{m \rightarrow\infty} V^{(m)} ({\mathbf{x}})_{i}\) exist for every \(i\in\{1,\dots,k\}\). We now prove that the limit \(\lim_{m \rightarrow\infty }V^{(m)}({\mathbf{x}})_{k+1}\) exists as well. Again, from the b-bistochasticity we infer that the sequence \(\{\mathcal{U}_{k+1}(V^{(m)}({\mathbf{x}}))\}\) is monotone decreasing and bounded. Therefore,

$$\lim_{m \rightarrow\infty}\sum_{i=1}^{k+1}V^{(m)}( {\mathbf{x}})_{i} $$

exists. From the assumption, one concludes the existence of the limit

$$\lim_{m \rightarrow\infty}V^{(m)}({\mathbf{x}})_{k+1}. $$

This completes the proof. □
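The monotonicity used in this proof can also be observed numerically: along any trajectory of a b-bistochastic q.s.o., every partial sum \(\mathcal{U}_{k}\) is nonincreasing. A sketch reusing qso_step and b_leq from the earlier snippets:

```python
def monotone_orbit(P, x0, steps=50):
    """Iterate the q.s.o. and assert that each step moves down in the b-order."""
    x = x0
    for _ in range(steps):
        x_next = qso_step(P, x)
        assert b_leq(x_next, x), "V is not b-bistochastic along this orbit"
        x = x_next
    return x  # numerically close to the limit point guaranteed by Theorem 4.1
```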

Corollary 4.2

Let V be a b-bistochastic q.s.o. on \(S^{n-1}\), and let \(\lim_{m \rightarrow\infty} V^{(m)}({\mathbf{x}})= \overline{{\mathbf{x}}}\); then \(\overline{{\mathbf{x}}}\) is a fixed point of V.

Proposition 4.3

Let V be a b-bistochastic q.s.o., then \({\mathbf{x}}=(0,0,\dots,1)\) is its fixed point.

Proof

Let \({\mathbf{x}}=(0,0,\dots,1)\); then, by Theorem 3.2(ii),

$$V({\mathbf{x}})_{m} = P_{nn,m} = 0 \quad \mbox{for } m=\overline{1,n-1}. $$

Hence one has \(V({\mathbf{x}})_{n} =1\). Consequently, \(V(0,0,\dots, 1) = (0,0,\dots, 1)\). This proves the proposition. □

In order to study the dynamics of a b-bistochastic q.s.o., it is important to investigate the behavior of the fixed point \((0,\dots,0,1)\). Using the substitution

$${\mathbf{x}}= (x_{1},\dots,x_{n}) \quad \Longrightarrow \quad {\mathbf{x}}= \bigl(x_{1},\dots,1-(x_{1}+\cdots+x_{n-1})\bigr) $$

we restrict ourselves to consider the first \(n-1\) coordinates of V. In this case, the fixed point found above is reduced to \({\mathbf{x}}=(0,0,\dots,0)\). Moreover, using property (iv) in Theorem 3.2 and replacing \(x_{n} = 1-(x_{1}+\cdots+x_{n-1})\) one can find

$$\begin{aligned} V({\mathbf{x}})_{k} = \sum_{l=1}^{k}P_{ll,k}x^{2}_{l} + 2 \sum_{l=1}^{k}\sum _{j=l+1}^{n-1}P_{lj,k}x_{l}x_{j} + 2\sum_{i=1}^{k}P_{in,k}x_{i} - 2\sum_{j=1}^{n-1}\sum _{i=1}^{k}P_{in,k}x_{i}x_{j}, \quad k = \overline{1,n-1} . \end{aligned}$$

From the last expression, we immediately get the following lemma.

Lemma 4.4

Let V be the b-bistochastic q.s.o. given by (2.3). If \(m\leq k\), then

$$\frac{\partial V({\mathbf{x}})_{k}}{\partial x_{m}} = 2\sum_{i=1}^{ m}P_{im,k}x_{i} + 2 \sum_{j=m+1}^{n-1}P_{mj,k}x_{j} + 2P_{mn,k} - 2\sum_{j=1}^{n-1}P_{mn,k}x_{j} - 2\sum_{i=1}^{k}P_{in,k}x_{i}, $$

if \(m>k\), then

$$\frac{\partial V({\mathbf{x}})_{k}}{\partial x_{m}} = 2 \Biggl[ \sum_{i=1}^{k}(P_{im,k} - P_{in,k})x_{i} \Biggr]. $$

Next, let us compute the Jacobian at the fixed point

$$J \bigl[V(0,0,\dots,0) \bigr] = \begin{bmatrix} 2P_{1n,1} & 0 & \ldots & 0 \\ 2P_{1n,2} & 2P_{2n,2} & \ldots & 0\\ \vdots& \vdots& \ddots& \vdots\\ 2P_{1n,n-1}& 2P_{2n,n-1} & \ldots & 2P_{n-1n,n-1} \end{bmatrix}. $$

Thus, the eigenvalues of \(J(V(0,0,\dots,0))\) are \(\{2P_{kn,k}\}_{k=1}^{n-1}\). This observation leads to the following result.
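Since the Jacobian above is lower triangular, its spectrum can be read off the diagonal. The following sketch (our naming) assembles the matrix from the coefficient array, and a numerical eigensolver confirms the diagonal reading:

```python
import numpy as np

def jacobian_at_origin(P):
    """Lower-triangular Jacobian of the reduced map at (0, ..., 0):
    row k (0-based) contains 2*P_{1n,k+1}, ..., 2*P_{k+1 n,k+1}."""
    n = P.shape[0]
    J = np.zeros((n - 1, n - 1))
    for k in range(n - 1):
        for m in range(k + 1):
            J[k, m] = 2.0 * P[m, n - 1, k]
    return J

# The eigenvalues are exactly the diagonal entries 2*P_{kn,k}:
# sorted(np.linalg.eigvals(J)) agrees with sorted(np.diag(J)).
```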

Theorem 4.5

The fixed point \((0,0,\dots,0)\) is not repelling.

Proof

Due to property (v) of Theorem 3.2, we have

$$P_{kn,k}\leq\frac{1}{2} \quad\mbox{for } k=\overline{1,n-1}. $$

This proves that all the eigenvalues of \(J [V(0,0,\dots,0) ]\) are less than or equal to 1, so \((0,0,\dots,0)\) is not repelling. □

Corollary 4.6

If \(P_{lj,l} < \frac{1}{2}\) for all \(l\in\{1,\dots,n-1\}\), \(j\geq l+1\), then the fixed point \({\mathbf{x}}= (0,0,\dots,0)\) is attracting.

From the results, it is natural to ask the following question: Is there a trajectory of a b-bistochastic q.s.o. which converges to a fixed point different from \((0,0,\dots,1)\)?

We want to consider this question in a one dimensional setting.

Let us denote by \(\operatorname{Fix}(V)\) the set of all fixed points of V belonging to the simplex \(S^{n-1}\).

Let V be a b-bistochastic q.s.o. defined on the one dimensional simplex; then, by Theorem 3.2, one gets

$$P_{22,1} = 0, \qquad P_{22,2} = 1, \quad\mbox{and}\quad P_{12,1} \leq \frac{1}{2}, $$

and we denote

$$P_{11,1} = a \quad \mbox{and} \quad P_{12,1} = b. $$

Note that V can be reduced to \(V({\mathbf{x}})_{1}= ax^{2}+2bx(1-x)\), \(x\in[0,1]\).
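The one dimensional dynamics are easy to explore directly. In the sketch below (the parameter values are our own, chosen to respect \(a\leq1\) and \(b\leq\frac{1}{2}\)), the first call contracts to 0 and the second leaves the point fixed, matching the theorem that follows:

```python
def iterate_1d(a, b, x, steps=200):
    """Iterate V(x)_1 = a*x**2 + 2*b*x*(1 - x) on [0, 1]."""
    for _ in range(steps):
        x = a * x * x + 2.0 * b * x * (1.0 - x)
    return x

print(iterate_1d(0.7, 0.4, 0.9))  # a != 1: the orbit tends to the fixed point 0
print(iterate_1d(1.0, 0.5, 0.3))  # a = 1, b = 1/2: V is the identity, stays at 0.3
```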

The following theorem describes the limiting behavior of a b-bistochastic q.s.o. in the one dimensional setting.

Theorem 4.7

Let V be a b-bistochastic q.s.o. defined on \(S^{1}\) and let \({\mathbf{x}}=(x,y) \in S^{1} \), then one has

$$\operatorname{Fix}(V) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \{0 \} & \textit{if } a \neq1, \\ \{0,1\} & \textit{if } a=1, b\neq\frac{1}{2}, \\ \{x\} : x\in[0,1] & \textit{if } a=1, b = \frac{1}{2}. \end{array}\displaystyle \right . $$

Moreover,

$$\lim_{m \rightarrow\infty} V^{(m)}({\mathbf{x}}) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \{0\} & \textit{if } a \neq1 \textit{ or } \{a=1, b\neq\frac {1}{2} \} \textit{ for } x\neq1, \\ \{x\} & \textit{if } a =1, b = \frac{1}{2}. \end{array}\displaystyle \right . $$

Proof

The fixed points of V, i.e. the solutions of \(V({\mathbf{x}})={\mathbf{x}}\), are \(x_{1} =0\) and \(x_{2} =\frac{1-2b}{a-2b}\). From this, it is easy to see that

$$\operatorname{Fix} (V) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \{0 \} & \mbox{if } a \neq1, \\ \{0, 1\} & \mbox{if } a=1, b \neq\frac{1}{2}. \end{array}\displaystyle \right . $$

Next, if \(a=1\) and \(b = \frac{1}{2}\), then \(V({\mathbf{x}}) = {\mathbf{x}}\); thus all points of \([0,1]\) are fixed, i.e.

$$\operatorname{Fix}(V) = \bigl\{ x : x\in[0,1] \bigr\} = [0,1] \quad \mbox{if } a=1, b = \frac{1}{2} . $$

Furthermore, by Corollary 4.2, every trajectory of a b-bistochastic q.s.o. converges to a fixed point; thus we need to consider several cases.

Case 1. If \(a \neq1\), then one has a unique fixed point. Therefore, due to Corollary 4.2 one finds

$$\lim_{m \rightarrow\infty} V^{(m)}({\mathbf{x}})_{1} = 0. $$

Case 2. If \(a=1\) and \(b \neq\frac{1}{2}\), then we have two possible fixed points. We need to compute the derivative of \(V({\mathbf{x}})_{1}\) and find its modulus at each fixed point. Namely,

$$\bigl\vert V'(0)_{1}\bigr\vert = 2b < 1 \quad \mbox{and} \quad \bigl\vert V'(1)_{1}\bigr\vert = 2(1-b) > 1. $$

This implies that the fixed point 0 is attracting, while 1 is repelling. Hence,

$$\lim_{m \rightarrow\infty} V^{(m)}({\mathbf{x}})_{1} = 0 \quad \mbox{if } a \neq1 \quad\mbox{or}\quad \biggl\{ a=1, b\neq\frac{1}{2} \biggr\} \quad \mbox{for } x\neq1. $$

Case 3. If \(a=1\) and \(b = \frac{1}{2}\), then one obviously gets \(\lim_{m \rightarrow\infty} V^{(m)}({\mathbf{x}}) ={\mathbf{x}}\).

This completes the proof. □

5 Description of b-bistochastic q.s.o. on 2D simplex

In this section we are going to describe all b-bistochastic q.s.o. defined on the two dimensional simplex. Before doing that, we need the following auxiliary facts.

Lemma 5.1

Let \(f(x)=ax^{2}+bx+c \), then \(f(x) \leq0 \) for all \(x \in [0,1]\) if and only if \(c\leq0\), \(a+b+c \leq0\), and one of the following conditions is satisfied:

(I) \(a\geq0 \);

(II) \(a<0 \) and one of the following is satisfied:

  (i) \(b \leq0\);

  (ii) \(b \geq-2a\);

  (iii) \(b^{2}-4ac\leq0\).

Lemma 5.2

Let \(f(x,y) = Ax^{2}+By^{2}+Cxy+Dx+Ey\) and \({\mathbb{D}}= \{(x,y) | x\geq0, y\geq0, x+y\leq1\}\). Assume that \(f(x,y)\leq0\) on the boundaries of \({\mathbb{D}}\) (i.e. for \(x=0\), \(y=0\), and \(y=1-x\)). Then the following statements hold:

(I) Let the critical point \((x_{0},y_{0})\) of f belong to \({\mathbb{D}}\); then

  (i) if \((x_{0},y_{0})\) is a maximum point and \(f(x_{0},y_{0}) \leq 0\), then \(f(x,y)\leq0\) for all \((x,y)\in {\mathbb{D}}\);

  (ii) if \((x_{0},y_{0})\) is a saddle point and \(f(x_{0},y_{0}) \leq0\), then \(f(x,y)\leq0\) for all \((x,y)\in {\mathbb{D}}\);

  (iii) if \((x_{0},y_{0})\) is a minimum point, then \(f(x,y)\leq0\) for all \((x,y)\in {\mathbb{D}}\).

(II) Let \((x_{0},y_{0})\notin {\mathbb{D}}\); then one has \(f(x,y)\leq0\) for all \((x,y)\in {\mathbb{D}}\).

The proof immediately follows from the fact that the graph of f is either a paraboloid or a saddle surface. Note that in case (II) f attains its maximum on the boundaries.

Now let us consider a q.s.o. V defined on \(S^{2}\). For the sake of simplicity we denote

$$ \begin{aligned} &P_{11,1} = A_{1}, \qquad P_{13,1} = C_{1} ,\qquad P_{23,1} = E_{1}, \\ &P_{11,2} = A_{2}, \qquad P_{13,2} = C_{2} , \qquad P_{23,2} = E_{2} , \\ &P_{12,1} = B_{1}, \qquad P_{22,1} = D_{1} , \qquad P_{33,1} = F_{1} , \\ &P_{12,2} = B_{2}, \qquad P_{22,2} = D_{2} , \qquad P_{33,2} = F_{2} , \end{aligned} $$
(5.1)

and

$$\begin{aligned}& M = 1- 2C_{1} - 2C_{2}, \qquad N = D_{2} -2E_{2}, \qquad P=1-2E_{2}, \\& Q = B_{1}+B_{2}-C_{1}-C_{2}-E_{2}, \qquad R = (A_{1}+A_{2}-2C_{1} - 2C_{2}),\\& K = 2\bigl(RN - Q^{2}\bigr), \qquad a = A_{1}+A_{2}+D_{2}-2B_{1}-2B_{2}, \\& b = 2B_{1}+2B_{2}-2D_{2}, \qquad c = D_{2}-1. \end{aligned}$$

The main result of this paper is the following theorem.

Theorem 5.3

Let \(V : S^{2} \rightarrow S^{2}\) be a q.s.o.; then V is b-bistochastic if and only if

(i) \(F_{1} = E_{1} = D_{1} = F_{2} = 0\);

(ii) \(B_{1} \leq \frac{1}{2}\), \(C_{1} \leq\frac{1}{2}\), \(E_{2} \leq \frac{1}{2}\);

(iii) \(C_{1}+C_{2} \leq\frac{1}{2}\),

and one of the following is satisfied:

(I) \(a\geq0 \);

(II) \(a<0 \) and one of the following holds:

  (1) \(b \leq0\);

  (2) \(b \geq-2a\);

  (3) \(b^{2}-4ac\leq0\).

Proof

One can see that the b-bistochasticity of V implies

$$\begin{aligned}& V(\mathbf{x})_{1} \leq x_{1}; \end{aligned}$$
(5.2)
$$\begin{aligned}& V(\mathbf{x})_{1}+V(\mathbf{x})_{2} \leq x_{1} + x_{2}, \end{aligned}$$
(5.3)

for all \(\mathbf{x}=(x_{1},x_{2},x_{3}) \in S^{2}\).

Conditions (i) and (ii) immediately follow from Theorem 3.2 and are equivalent to (5.2).

Next, we let

$$\begin{aligned} g(x_{1},x_{2}) =& x_{1}^{2}(A_{1}+A_{2}-2C_{1}-2C_{2}) + x_{2}^{2}(D_{2}-2E_{2}) + x_{1}(2C_{1} +2C_{2}- 1) \\ &{}+ x_{2}(2E_{2} - 1) + 2x_{1}x_{2}(B_{1}+B_{2}-C_{1}-C_{2}-E_{2}). \end{aligned}$$

One can see that (5.3) is equivalent to

$$\begin{aligned} g(x_{1},x_{2}) \leq0 \end{aligned}$$
(5.4)

for all \((x_{1},x_{2})\) with \(0\leq x_{1}+x_{2}\leq1\). Here we have used \(x_{3}=1-x_{1}-x_{2}\).

Since \(g(x_{1},x_{2})\) is not linear, we investigate its extrema first on the boundaries (Side 1: \(x_{1} = 0\); Side 2: \(x_{2} = 0\); Side 3: \(x_{2}=1-x_{1}\)) and then in the internal region. We study the function g on each side one by one.

Side 1. Let \(x_{1} = 0\), then \(g(0,x_{2}) = x_{2}^{2}(D_{2}-2E_{2}) + x_{2}(2E_{2} -1)\). Therefore, (5.4) reduces to

$$x_{2}(D_{2}-2E_{2}) + 2E_{2} -1 \leq0, $$

which is obviously true (see conditions (i) and (ii)).

Side 2. In this case, \(x_{2}=0\). Here, \(g(x_{1},0) = x_{1}^{2}(A_{1}+A_{2}-2C_{1}-2C_{2}) + x_{1}(2C_{1} + 2C_{2} - 1)\). Clearly, (5.4) becomes

$$x_{1}(A_{1}+A_{2}-2C_{1}-2C_{2}) + (2C_{1} + 2C_{2} - 1) \leq0. $$

Hence, from Remark 2.2 one finds

$$C_{1}+C_{2} \leq\frac{1}{2}, $$

which implies the condition (iii).

Side 3. Consider the boundary \(x_{2} = 1-x_{1}\), thus

$$g(x_{1},1-x_{1}) = x_{1}^{2}(A_{1}+A_{2}+D_{2}- 2B_{1} - 2B_{2})+x_{1}(2B_{1} + 2B_{2} - 2D_{2}) + D_{2} -1. $$

For \(x_{1}=0\) and \(x_{1}=1\) one immediately gets \(D_{2}-1 \leq0\) and \(A_{1}+A_{2} \leq1\), respectively, which are evidently true. Moreover, g can be written as follows:

$$g(x_{1}) = ax_{1}^{2}+bx_{1}+c. $$

By Lemma 5.1, one infers that one of the following conditions must be satisfied:

(I) \(a\geq0 \);

(II) \(a<0 \) and one of the following is satisfied to meet (5.4):

  (1) \(b \leq0\);

  (2) \(b \geq-2a\);

  (3) \(b^{2}-4ac\leq0\).

Now we consider the internal region, i.e. \(\mathbb{D} = \{(x_{1},x_{2})| x_{1}\geq0, x_{2}\geq0, x_{1}+x_{2}\leq1\}\).

Internal region. In this case, \(g(x_{1},x_{2})\) can be written as follows:

$$g(x_{1},x_{2}) = x_{1}^{2}R + x_{2}^{2}N - x_{1}M - x_{2}P + 2x_{1}x_{2}Q. $$

First, let us compute its partial derivatives

$$ \begin{aligned} & g_{x_{1}} = 2x_{1}R -M+2x_{2}Q,\qquad g_{x_{2}} = 2x_{2}N - P +2x_{1}Q, \\ &g_{x_{1}x_{1}} = 2 R, \qquad g_{x_{2}x_{2}} = 2 N,\qquad g_{x_{1}x_{2}} = 2Q. \end{aligned} $$
(5.5)

It is clear that the critical point (i.e. a solution of \(g_{x_{1}} = 0\), \(g_{x_{2}} =0\)) is the following one:

$$\begin{aligned} (x_{1},x_{2}) = \biggl(\frac{MN - PQ}{K},\frac{PR - MQ}{K} \biggr), \end{aligned}$$

by assuming \(K = 2(RN - Q^{2}) \neq0\).

Recall that the critical point is

(a) a maximum point if \(g_{x_{1}x_{1}} < 0 \) and \(g_{x_{1}x_{1}}g_{x_{2}x_{2}} - (g_{x_{1}x_{2}})^{2}>0\);

(b) a minimum point if \(g_{x_{1}x_{1}} > 0 \) and \(g_{x_{1}x_{1}}g_{x_{2}x_{2}} - (g_{x_{1}x_{2}})^{2}>0\);

(c) a saddle point if \(g_{x_{1}x_{1}}g_{x_{2}x_{2}} - (g_{x_{1}x_{2}})^{2}<0\).

Furthermore, in order to cover all possible values that R, N, and Q can take, we examine several cases:

Case I: \(R > 0\) and \(RN - Q^{2} > 0\);

Case II: \(R < 0\) and \(RN - Q^{2} > 0\);

Case III: \(RN < 0\);

Case IV: \(RN>0\) and \(RN-Q^{2} < 0\);

Case V: \(R = 0\);

Case VI: \(N=0\);

Case VII: \(Q=0\);

Case VIII: \(RN-Q^{2} = 0 \).

We want to highlight that the values of P and M are nonnegative due to (ii) and (iii). The investigation of each case is done separately.

Case I. Assume that \(R>0\) and \(RN-Q^{2} > 0\); then the critical point is a minimum. Due to Lemma 5.2, we get \(g \leq0\). Correspondingly, (5.4) is true in this case.

Case II. Let \(R<0\) and \(RN-Q^{2} > 0\). Consequently, \(N<0\) and the critical point is a maximum. Here, one should consider two subcases: the critical point is either outside or inside the region \(\mathbb{D}\). If it is outside, then Lemma 5.2 implies (5.4). Now, suppose the critical point is inside the region \(\mathbb{D}\), which means

$$0< MN-PQ< K, \qquad 0< PR-MQ< K. $$

Note that \(K=2(RN-Q^{2}) > 0\). Since \(M\geq0\) and \(N<0\) give \(MN\leq0\), the inequality \(MN-PQ>0\) forces \(PQ<0\); hence \(P>0\) and \(Q<0\). Furthermore, by substituting the maximum point into the function g, we obtain

$$\begin{aligned} g(x_{1},x_{2}) =& {\frac{ ( MN-PQ ) ^{2}R}{K^{2}}}+{ \frac{ ( PR-MQ ) ^{2}N}{K^{2}}}-{\frac{ ( MN-PQ ) M}{K}} \\ &{}-{\frac { ( PR-MQ ) P}{K}}+2 {\frac{ ( MN-PQ ) ( PR-MQ ) Q}{K^{2}}}. \end{aligned}$$
(5.6)

Observe that at the maximum point one has \(g = -\frac{1}{2}(Mx_{1}+Px_{2})\leq0\), since \(M,P\geq0\) and \(x_{1},x_{2}\geq0\) inside \({\mathbb{D}}\). Therefore, from Lemma 5.2, it follows that \(g(x_{1}, x_{2}) \leq 0\) for all \((x_{1},x_{2})\in {\mathbb{D}}\).

Case III. Consider \(RN<0\). This means that \(RN-Q^{2}< 0\), and hence the critical point is a saddle point. Using the same argument as in Case II, it is enough to consider the case when the saddle point is inside the region. Consequently, one has

$$\begin{aligned}& K< MN-PQ< 0 , \end{aligned}$$
(5.7)
$$\begin{aligned}& K< PR-MQ< 0. \end{aligned}$$
(5.8)

Without loss of generality, one may assume that \(R>0\) and \(N<0\). Moreover, (5.8) implies that \(Q>0 \). Next, let us compute the value of g at the saddle point, given by (5.6). Taking into account \(K=2(RN - Q^{2})\), we get

$$\begin{aligned} g(x_{1},x_{2}) =\frac {M^{2}N+P^{2}R-2MPQ}{-4(RN-Q^{2})}. \end{aligned}$$

In fact, \(PR-2MQ<PR-MQ<0\) and \(M^{2}N\leq0\), so the numerator \(M^{2}N+P^{2}R-2MPQ = M^{2}N + P(PR-2MQ)\leq0\), while the denominator is positive; hence \(g(x_{1},x_{2})\leq0\). This means \(g(x_{1}, x_{2}) \leq0\) for all \((x_{1},x_{2})\in {\mathbb{D}}\), which proves (5.4) (see Lemma 5.2).

Case IV. Let \(RN>0\) and \(RN-Q^{2} < 0\). By a similar method to Case III, one proves that (5.4) holds.

Case V. Let \(R=0\), then we have three subcases, which are \(N=0\), \(N>0\), and \(N<0\). First, we may assume \(N=0\), then g reduces to \(g(x_{1},x_{2}) = -x_{1}M-x_{2}P+2x_{1}x_{2}Q\). From (5.5) one can see that a unique solution of \(g_{x_{1}} = 0\) and \(g_{x_{2}} = 0\) is

$$(x_{1},x_{2}) = \biggl(\frac{P}{2Q}, \frac{M}{2Q} \biggr) \quad\mbox{for } Q \neq0. $$

In this case, the critical point is a saddle point, since \(g_{x_{1}x_{1}}g_{x_{2}x_{2}} - (g_{x_{1}x_{2}})^{2} = -4Q^{2}<0\). By a similar argument to Case III, it is sufficient to check the case when the critical point is inside the region \({\mathbb{D}}\). This means \(0< P<2Q\) and \(0< M<2Q \). Consequently, one gets \(Q>0\) and finds

$$g \biggl(\frac{P}{2Q}, \frac{M}{2Q} \biggr) = \frac{-PM}{2Q} < 0. $$

Thus, we obtain (5.4) (see the argument in Case III). Note that if \(Q=0 \), then it is clear \(g(x_{1},x_{2}) \leq0 \) for all \((x_{1},x_{2}) \in {\mathbb{D}}\).

The second subcase is \(N>0\). Evidently \(g(x_{1},x_{2}) = x_{2}^{2}N - x_{1}M - x_{2}P + 2x_{1}x_{2}Q\). According to (5.5), the critical point is

$$(x_{1},x_{2}) = \biggl(\frac{PQ-MN}{2Q^{2}}, \frac{M}{2Q} \biggr) \quad \mbox{for } Q \neq0. $$

In fact, \(g_{x_{1}x_{1}}g_{x_{2}x_{2}} - (g_{x_{1}x_{2}})^{2} = -4Q^{2}<0\); thus the critical point is a saddle point, and one may continue the proof as in the first subcase. Moreover, if \(Q = 0\) (recall \(R = 0\)), then \(g_{x_{1}} = -M\), so the equation \(g_{x_{1}} =0 \) forces \(M=0 \). Hence, from \(g_{x_{2}} = 0 \), one gets \(x_{2} = \frac {P}{2N} \), and substituting it into \(g(x_{1},x_{2}) \), we find

$$g \biggl( x_{1}, \frac{P}{2N} \biggr) = \frac{P^{2}}{4N} - \frac {P^{2}}{2N} \leq0 \quad\mbox{for any } (x_{1},x_{2}) \in {\mathbb{D}}. $$

The last subcase, \(N<0\), can be treated in a similar manner.

Case VI. We let \(N=0\) and the proof is analogous to Case V.

Case VII. If \(Q=0\), then one finds \(g(x_{1},x_{2}) = x_{1}^{2}R+ x_{2}^{2}N - x_{1}M - x_{2}P\). Correspondingly, the critical point is

$$(x_{1},x_{2}) = \biggl(\frac{M}{2R}, \frac{P}{2N} \biggr) \quad\mbox{for } R\neq0, N\neq0. $$

Positivity of \(x_{1}\) and \(x_{2}\) implies \(R>0\) and \(N>0\), respectively. Consequently, \(g_{x_{1}x_{1}} = 2R>0\) and \(g_{x_{1}x_{1}}g_{x_{2}x_{2}} - (g_{x_{1}x_{2}})^{2} = 4RN>0\), which means that the critical point is a minimum, so by Lemma 5.2 the inequality (5.4) follows at once. One can also evaluate g directly when the critical point lies inside the region \({\mathbb{D}}\):

$$g \biggl(\frac{M}{2R}, \frac{P}{2N} \biggr) = \frac {M^{2}N+P^{2}R}{-4RN}< 0. $$

Again, one concludes that (5.4) holds. In addition, whenever \(R=0 \) or \(N=0 \) (or both), the proof follows by the same method as in Case V (subcase \(Q = 0 \)).

Case VIII. In the last case, we let \(RN-Q^{2} = 0 \). It is clear that we have \(2x_{2}(Q^{2} - NR) = MQ - PR \) and \(2x_{1} (RN - Q^{2}) = MN - PQ \). Hence either \(MQ - PR = 0\) and \(MN - PQ = 0 \), or the equations \(g_{x_{1}} =0 \) and \(g_{x_{2}} =0\) have no solution (i.e. there is no critical point). In the latter case, the maximum is reached on the boundaries, which gives \(g(x_{1},x_{2}) \leq 0\) for all \((x_{1},x_{2}) \in {\mathbb{D}}\) (see the investigations of Side 1, Side 2, and Side 3).

On the other hand, if \(MQ - PR = 0\) and \(MN - PQ = 0 \), then one has infinitely many solutions. Using the fact that \(M \geq0 \) and \(P \geq0 \), we have two possible subcases: (i) \(N <0\), \(Q<0\), \(R< 0 \), and (ii) \(N>0\), \(Q>0\), \(R>0 \). In the first subcase, the argument of Case II shows that \(g(x_{1},x_{2}) \leq0\) for all \((x_{1},x_{2}) \in {\mathbb{D}}\).

(ii) Next, since \(RN-Q^{2} = 0 \), the equations \(g_{x_{1}} =0 \) and \(g_{x_{2}} = 0 \) are linearly dependent, so it is enough to consider \(g_{x_{1}} =0 \), which gives \(x_{1} = \frac{M-2x_{2}Q}{ 2R } \). Substituting \(x_{1} \) into \(g(x_{1},x_{2}) \), we obtain

$$g \biggl( \frac{M-2x_{2}Q}{ 2R }, x_{2} \biggr) = \frac{-M^{2}}{ 4R} + \frac{(MQ - PR)x_{2}}{R} + \frac{ (RN - Q^{2} )x_{2}^{2}}{R} = \frac{-M^{2}}{ 4R} . $$

This shows that \(g(x_{1},x_{2}) \leq0\) for all \((x_{1},x_{2}) \in {\mathbb{D}}\).

Summarizing, in all cases of the internal region we have shown that \(g(x_{1},x_{2})\leq0\) for all \((x_{1},x_{2})\in {\mathbb{D}}\). The converse implication can be proved in the same way. This completes the proof. □
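As a sanity check of the theorem, its conditions can be compared against a direct numerical test of the inequalities (5.2)-(5.3) on a grid over \(S^{2}\). The sketch below uses our own naming and the labeling (5.1); it is an illustration, not part of the proof:

```python
import numpy as np

def theorem_5_3_conditions(P, tol=1e-12):
    """Conditions (i)-(iii) and (I)/(II) of Theorem 5.3 for a 3x3x3 array P,
    where P[i, j, k] stands for P_{i+1 j+1, k+1} (0-based indexing)."""
    A1, B1, C1 = P[0, 0, 0], P[0, 1, 0], P[0, 2, 0]
    D1, E1, F1 = P[1, 1, 0], P[1, 2, 0], P[2, 2, 0]
    A2, B2, C2 = P[0, 0, 1], P[0, 1, 1], P[0, 2, 1]
    D2, E2, F2 = P[1, 1, 1], P[1, 2, 1], P[2, 2, 1]
    a = A1 + A2 + D2 - 2 * B1 - 2 * B2
    b = 2 * B1 + 2 * B2 - 2 * D2
    c = D2 - 1.0
    cond_i = max(abs(F1), abs(E1), abs(D1), abs(F2)) <= tol
    cond_ii = B1 <= 0.5 + tol and C1 <= 0.5 + tol and E2 <= 0.5 + tol
    cond_iii = C1 + C2 <= 0.5 + tol
    # (I) a >= 0, or (II) a < 0 together with (1), (2), or (3).
    tail = a >= -tol or b <= tol or b >= -2 * a - tol or b * b - 4 * a * c <= tol
    return cond_i and cond_ii and cond_iii and tail

def b_bistochastic_on_grid(V, m=60, tol=1e-9):
    """Directly test (5.2)-(5.3), i.e. V(x) <=^b x, at grid points of S^2."""
    for i in range(m + 1):
        for j in range(m + 1 - i):
            x = np.array([i, j, m - i - j], dtype=float) / m
            v = np.asarray(V(x))
            if v[0] > x[0] + tol or v[0] + v[1] > x[0] + x[1] + tol:
                return False
    return True
```

For a q.s.o. V assembled from a coefficient array satisfying (2.4), the two tests should agree up to the grid resolution.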

Remark 5.4

Note that such a description of bistochastic q.s.o. is not known in the literature. Our result fully describes all b-bistochastic q.s.o. in the two dimensional setting. The theorem proved allows one to find all extreme points of the set of b-bistochastic q.s.o. on the 2D simplex, which can be considered as a direction of future study. Moreover, it gives insight and preliminary information toward the higher dimensional setting.