1 Introduction

Infectious diseases are among the most dangerous threats to human society. When an infectious disease becomes an epidemic, it can cause great loss of human life and damage the economy on a large scale. Epidemic infectious diseases are also dangerous in the sense that they spread very rapidly to a large number of people in a given population within a limited period of time. Factors contributing to epidemic infectious diseases include climate change, genetic change, globalization, and urbanization, and most of these factors are to some extent caused by humans. People from many different fields have contributed to the detection of epidemic sources and the control of epidemic spreading. Mathematicians have also played a vital role in the modeling of epidemic spreading.

Contagion processes are among the most widely studied dynamic processes on real-life complex networks of public interest [11, 12, 22, 24]. To model epidemic spreading, epidemiologists frequently use compartmental models such as the SIR [17], SIS [16], and SEIR [20] models. These models are very useful for explicitly modeling and estimating the numbers of susceptible and infected individuals in a population at risk.

Epidemiologists have obtained many models for epidemic source detection by imposing restrictions on the network structure, on the spreading process of compartmental models (SIR, SIS), or on both [12–14, 23, 25, 28]. Epidemiologists also analyze virus genetic evolution [15, 26] and detect the epidemic source or perform backtracking from the given data [10]. Zhu et al. [28] initiated a model in which they established that the source node minimizes the maximum distance to the infected nodes on infinite trees. Altarelli et al. [8] estimated the epidemic source by using the message-passing method, replacing the independence assumption with that of a tree-like contact network. Lokhov et al. [21] estimated the probability that a given node produces the observed snapshot by considering the SIR model and using a message-passing algorithm. Antulov-Fantulin et al. [9] proposed a model to analyze source probability estimators: they dropped the independence assumptions on nodes and the restrictions on the network structure and analyzed source probability estimators for general compartmental models. The soft margin estimator for the model proposed by Antulov-Fantulin et al. [9] is given by

$$ \hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta =\theta )= \frac{1}{n}\sum_{i=1}^{n} \exp \biggl( -\frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr), $$
(1)

where \(\overrightarrow{R_{\theta }}\) is a binary vector that indicates the random outcomes of the epidemic process, \(\{\overrightarrow{r}_{\theta,1},\overrightarrow{r}_{\theta,2}, \ldots,\overrightarrow{r}_{\theta,n}\}\) are the sample vectors representing the n independent outcomes of the epidemic process with source θ, \(\varphi:{\mathbb{R}^{n}}\times {\mathbb{R}^{n}}\rightarrow [0,1]\) is the Jaccard similarity function, obtained by dividing the cardinality of the intersection of the sets of infected nodes in \(\overrightarrow{r}_{1}\) and \(\overrightarrow{r}_{2}\) by the cardinality of their union, \(\varphi (\overrightarrow{r_{*}},\overrightarrow{r}_{\theta,i})\) is a random variable that measures the similarity between the fixed realization vector \(\overrightarrow{r_{*}}\) and the random realization vector \(\overrightarrow{r}_{\theta,i}\), and \(\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is the Gaussian weighting function with \(a>0\).
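
To make the estimator concrete, the following is a minimal Python sketch of (1); the helper names `jaccard` and `soft_margin_estimate` and the sample data are illustrative assumptions, not part of the cited model:

```python
# Minimal sketch of the soft margin estimator in (1); names and data are illustrative.
import numpy as np

def jaccard(r1, r2):
    """Jaccard similarity of two binary infection indicator vectors."""
    r1, r2 = np.asarray(r1, dtype=bool), np.asarray(r2, dtype=bool)
    union = np.logical_or(r1, r2).sum()
    return np.logical_and(r1, r2).sum() / union if union else 1.0

def soft_margin_estimate(r_star, realizations, a=np.sqrt(2)):
    """Average Gaussian weight of the Jaccard similarities, as in (1)."""
    sims = np.array([jaccard(r_star, r) for r in realizations])
    return np.mean(np.exp(-(sims - 1.0) ** 2 / a ** 2))

# Observed snapshot r_* and n = 3 simulated outcomes for one candidate source.
r_star = [1, 1, 0, 1, 0]
samples = [[1, 1, 0, 0, 0], [1, 1, 1, 1, 0], [0, 1, 0, 1, 0]]
print(soft_margin_estimate(r_star, samples))
```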

We will use the following hypothesis for the construction of our results throughout the paper.

\(\mathbf{H:}\) Let \(\overrightarrow{R_{\theta }}\) be a binary vector, \(\{\overrightarrow{r}_{\theta,1},\overrightarrow{r}_{\theta,2}, \ldots,\overrightarrow{r}_{\theta,n}\}\) be n independent outcome vectors, \(\overrightarrow{r}_{*}\) be a fixed realization vector, a be a positive real number, \(\varphi:\mathbb{R}^{n}\times \mathbb{R}^{n}\mapsto [0,1]\) be the Jaccard similarity function, and \(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta =\theta )\) be the soft margin estimator given in (1).

In the remainder of this section, we briefly discuss convexity and concavity.

The notion of convex and concave functions plays an important role in all fields of science, especially in mathematics, because of its notable properties. Therefore many generalizations, interesting results, and applications of convex and concave functions have been obtained [17, 18, 19, 27].

Now, the formal definition of convex and concave functions is stated as follows.

Definition 1

Let I be an arbitrary interval in \(\mathbb{R}\). Then the function \(\Psi:I\rightarrow \mathbb{R}\) is convex if the inequality

$$ \Psi \bigl(\lambda x+(1-\lambda )y\bigr)\leq \lambda \Psi (x)+(1- \lambda )\Psi (y) $$
(2)

holds for all \(x,y\in I\) and \(\lambda \in [0,1]\).

If inequality (2) holds in the reverse direction, then the function \(\Psi:I\rightarrow \mathbb{R}\) is said to be concave.

Many inequalities have been proved for convex and concave functions. Among them, one of the most prominent is the well-known Jensen inequality. Jensen's inequality is fundamental in the sense that many other inequalities can be deduced from it. Its formal statement reads as follows.

Theorem 1

Let I be an interval in \(\mathbb{R}\), \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})\) be an n-tuple such that \(x_{i}\in {I}\) for all \(i\in \{1,2,\ldots,n\}\), and \(\mathbf{p}=(p_{1},p_{2},\ldots,p_{n})\) be a positive n-tuple of real entries with \(P_{n}=\sum_{i=1}^{n}p_{i}\). If the function \(\Psi:{I}\rightarrow \mathbb{R}\) is convex, then

$$ \Psi \Biggl(\frac{1}{P_{n}}\sum_{i=1}^{n}p_{i}x_{i} \Biggr)\leq \frac{1}{P_{n}}\sum_{i=1}^{n}p_{i} \Psi (x_{i}). $$
(3)

If the function \(\Psi:{I}\rightarrow \mathbb{R}\) is concave, then inequality (3) holds in the reverse direction.
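
As a quick numerical illustration of the concave case, which is the direction used throughout this paper, the sketch below checks the reverse of (3) with equal weights \(p_{i}=\frac{1}{n}\) for the Gaussian weighting function from (1) with \(a=\sqrt{2}\) (its concavity on \([0,1]\) is established in Lemma 1 below); the data points are arbitrary illustrations:

```python
# Reverse Jensen inequality for a concave function with equal weights p_i = 1/n.
import numpy as np

a = np.sqrt(2)
psi = lambda t: np.exp(-(t - 1.0) ** 2 / a ** 2)   # concave on [0, 1] for a >= sqrt(2)

x = np.array([0.2, 0.5, 0.9])                      # arbitrary points in [0, 1]
print(psi(x.mean()) >= psi(x).mean())              # True: (3) reverses for concave Psi
```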

In this paper, we derive bounds for the soft margin estimator given in (1) by employing the existing notion of a concave function. To obtain these bounds, we use the concavity of the Gaussian weighting function together with Jensen's inequality. To obtain more general and explicit bounds for the soft margin estimator, we use general functions defined on rectangles that are monotonic with respect to the first variable. We also utilize the behavior of the Jaccard similarity function to obtain the desired bounds for the soft margin estimator.

2 Main results

In order to build our results, we first establish the following lemma, which will be used in the proofs of our main results.

Lemma 1

The Gaussian weighting function \(\Psi:[0,1]\rightarrow \mathbb{R}\) defined by

$$ \Psi (x)=\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) $$

is concave for all \(a\in [\sqrt{2},\infty )\).

Proof

To show the concavity of the Gaussian function \(\Psi (x)\), we use the second derivative test. Differentiating \(\Psi (x)\) twice with respect to x, we get

$$ \Psi ^{\prime \prime }(x)=\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) \biggl[ \frac{4(x-1)^{2}-2a^{2}}{a^{4}} \biggr]. $$

Since

$$ \exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr)>0 \quad\text{and}\quad a^{4}>0, $$

it suffices to show that

$$ 4(x-1)^{2}-2a^{2}\leq 0. $$

As

$$ 4(x-1)^{2}\leq 4 \quad\text{for all } x\in [0,1] $$
(4)

and

$$ -2a^{2}\leq -4 \quad\text{for all } a\in [\sqrt{2}, \infty ). $$
(5)

Now, adding (4) and (5), we obtain

$$ 4(x-1)^{2}-2a^{2}\leq 0. $$

Hence

$$ \Psi ^{\prime \prime }(x)=\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) \biggl[ \frac{4(x-1)^{2}-2a^{2}}{a^{4}} \biggr]\leq 0 $$

for all \(x\in [0,1]\) and \(a\in [\sqrt{2},\infty )\).

Consequently,

$$ \Psi (x)=\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) $$

is a concave function for all \(x\in [0,1]\) and \(a\in [\sqrt{2},\infty )\). □
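
The following short numerical check of Lemma 1 evaluates the second derivative obtained in the proof on a grid over \([0,1]\) for a few admissible values of a (the grid and the chosen values of a are illustrative):

```python
# Grid check that Psi''(x) <= 0 on [0, 1] whenever a >= sqrt(2).
import numpy as np

def psi_second(x, a):
    # Second derivative of Psi from the proof of Lemma 1.
    return np.exp(-(x - 1.0) ** 2 / a ** 2) * (4 * (x - 1.0) ** 2 - 2 * a ** 2) / a ** 4

xs = np.linspace(0.0, 1.0, 101)
for a in (np.sqrt(2), 2.0, 5.0):
    assert np.all(psi_second(xs, a) <= 1e-12), a
print("Psi'' <= 0 on [0,1] for the sampled values of a")
```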

In the following result, we obtain bounds for the soft margin estimator by using the concavity of the Gaussian function.

Theorem 2

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Then

$$\begin{aligned} &\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \\ &\quad \geq \hat{P}( \overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ) \\ &\quad \geq \Biggl(1-\frac{1}{n}\sum_{i=1}^{n} \varphi ( \overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i}) \Biggr) \exp \biggl(-\frac{1}{a^{2}} \biggr) +\frac{1}{n}\sum _{i=1}^{n} \varphi (\overrightarrow{r}_{*}, \overrightarrow{r}_{\theta,i}). \end{aligned}$$
(6)

Proof

By Lemma 1, the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave on \([0,1]\) for \(a\in [\sqrt{2},\infty )\). Therefore

$$\begin{aligned} &\Psi (x)=\Psi \bigl((1-x)0+(x-0)1\bigr) \geq (1-x)\Psi (0)+x \Psi (1) \\ &\quad\Rightarrow \quad \exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) \geq (1-x)\exp \biggl(-\frac{1}{a^{2}} \biggr)+x\exp (0)=(1-x)\exp \biggl(- \frac{1}{a^{2}} \biggr)+x. \end{aligned}$$
(7)

Now, putting \(x=\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})\) in (7), we obtain

$$\begin{aligned} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr) \geq \bigl(1- \varphi (\overrightarrow{r}_{*}, \overrightarrow{r}_{\theta,i}) \bigr)\exp \biggl(-\frac{1}{a^{2}} \biggr) + \varphi (\overrightarrow{r}_{*}, \overrightarrow{r}_{\theta,i}). \end{aligned}$$
(8)

Multiplying both sides of (8) by \(\frac{1}{n}\) and taking summation over i, we get

$$\begin{aligned} &\frac{1}{n}\sum_{i=1}^{n} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr) \\ &\quad\geq \Biggl(1-\frac{1}{n}\sum_{i=1}^{n} \varphi ( \overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i}) \Biggr) \exp \biggl(-\frac{1}{a^{2}} \biggr) +\frac{1}{n}\sum _{i=1}^{n} \varphi (\overrightarrow{r}_{*}, \overrightarrow{r}_{\theta,i}). \end{aligned}$$
(9)

Since, by Lemma 1, the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave, applying Theorem 1 in the reverse (concave) direction with \(p_{i}=\frac{1}{n}\) gives

$$\begin{aligned} \exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \geq \frac{1}{n}\sum_{i=1}^{n} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(10)

Now, comparing (9) and (10), we obtain (6). □
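
A small numerical sanity check of the two-sided bound (6) is sketched below; the similarity values stand in for \(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})\) and are illustrative:

```python
# Check that the soft margin estimate lies between the two bounds in (6).
import numpy as np

a = np.sqrt(2)
sims = np.array([0.4, 0.7, 0.9, 0.55])                 # illustrative Jaccard similarities
phat = np.mean(np.exp(-(sims - 1.0) ** 2 / a ** 2))    # soft margin estimate (1)
m = sims.mean()

upper = np.exp(-(m - 1.0) ** 2 / a ** 2)               # left-hand side of (6)
lower = (1.0 - m) * np.exp(-1.0 / a ** 2) + m          # right-hand side of (6)
print(lower <= phat <= upper)                          # True
```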

In the following theorem, we obtain sharper bounds for the soft margin estimator by imposing a restriction on the Jaccard similarity function.

Theorem 3

Let all the hypotheses of Theorem 2 hold. If \(0< d\leq \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{ \theta,i})\leq D<1\), then

$$\begin{aligned} &\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \\ &\quad \geq \hat{P}( \overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ) \\ &\quad \geq \frac{D-\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})}{D-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+ \frac{\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(11)

Proof

By Lemma 1, for \(a\in [\sqrt{2},\infty )\) and \(x\in [d,D]\), the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave. Therefore

$$\begin{aligned} & \Psi (x)=\Psi \biggl(\frac{(D-x)d+(x-d)D}{D-d} \biggr) \geq \frac{D-x}{D-d}\Psi (d)+ \frac{x-d}{D-d}\Psi (D) \\ &\quad \Rightarrow \quad\exp \biggl(-\frac{(x-1)^{2}}{a^{2}} \biggr) \geq \frac{D-x}{D-d}\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) + \frac{x-d}{D-d}\exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(12)

Now, substituting \(x=\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})\) in (12), multiplying by \(\frac{1}{n}\), and taking the summation over i, we obtain

$$\begin{aligned} &\frac{1}{n}\sum_{i=1}^{n} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr) \\ &\quad \geq \frac{D-\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})}{D-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+ \frac{\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(13)

By Lemma 1, the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave. Therefore, using Theorem 1, we have

$$\begin{aligned} \exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \geq \frac{1}{n}\sum_{i=1}^{n} \exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(14)

Now, comparing (13) and (14), we achieve

$$\begin{aligned} & \exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr) \\ &\quad \geq \frac{1}{n}\sum_{i=1}^{n}\exp \biggl(- \frac{(\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1)^{2}}{a^{2}} \biggr) \\ &\quad \geq \frac{D-\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})}{D-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+ \frac{\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr), \end{aligned}$$
(15)

which is equivalent to (11). □
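
The refined bound (11) can be checked numerically in the same way; here d and D are taken as the minimum and maximum of illustrative similarity values, and the last line also shows that on this example the chord bound over \([d,D]\) is at least as tight as the lower bound from (6):

```python
# Check the refined lower bound (11) and compare it with the bound from (6).
import numpy as np

a = np.sqrt(2)
sims = np.array([0.4, 0.7, 0.9, 0.55])
d, D = sims.min(), sims.max()
m = sims.mean()
phat = np.mean(np.exp(-(sims - 1.0) ** 2 / a ** 2))

upper = np.exp(-(m - 1.0) ** 2 / a ** 2)
lower = ((D - m) / (D - d)) * np.exp(-(d - 1.0) ** 2 / a ** 2) \
      + ((m - d) / (D - d)) * np.exp(-(D - 1.0) ** 2 / a ** 2)
loose = (1.0 - m) * np.exp(-1.0 / a ** 2) + m          # lower bound from (6)
print(loose <= lower <= phat <= upper)                 # True
```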

In the following theorem, we obtain general bounds for the soft margin estimator by considering a general function defined on a rectangle that is increasing with respect to the first variable.

Theorem 4

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Also assume that ϒ is an interval in \(\mathbb{R}\), \(F:\Upsilon \times \Upsilon \rightarrow \mathbb{R}\) is an increasing function with respect to the first variable, and \(\phi:[0,1]\rightarrow \Upsilon \) is an arbitrary function. Then

$$\begin{aligned} & F \biggl(\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \geq F \bigl(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ),\phi (y) \bigr) \\ &\quad \geq \min_{x,y\in [0,1]} F \biggl( (1-x ) \exp \biggl(- \frac{1}{a^{2}} \biggr) +x, \phi (y) \biggr). \end{aligned}$$
(16)

Proof

By utilizing inequality (6) and the fact that F is increasing with respect to the first variable, we get (16). Indeed, writing \(\bar{\varphi }=\frac{1}{n}\sum_{i=1}^{n}\varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})\in [0,1]\), inequality (6) gives \(\exp (-\frac{(\bar{\varphi }-1)^{2}}{a^{2}} )\geq \hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta =\theta )\geq (1-\bar{\varphi })\exp (-\frac{1}{a^{2}} )+\bar{\varphi }\), and the monotonicity of F in the first variable yields both inequalities in (16). □
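
As an illustration of Theorem 4, the sketch below uses the concrete increasing choice \(F(u,v)=u e^{v}\) and \(\phi (y)=y\) (these choices, and the data, are illustrative assumptions) and checks (16) with a grid minimum:

```python
# Check (16) for the increasing choice F(u, v) = u * exp(v) and phi(y) = y.
import numpy as np

a = np.sqrt(2)
sims = np.array([0.4, 0.7, 0.9, 0.55])
m = sims.mean()
phat = np.mean(np.exp(-(sims - 1.0) ** 2 / a ** 2))

F = lambda u, v: u * np.exp(v)        # increasing in the first argument
phi = lambda y: y
y = 0.3                               # any fixed y in [0, 1]

upper = F(np.exp(-(m - 1.0) ** 2 / a ** 2), phi(y))
middle = F(phat, phi(y))
grid = np.linspace(0.0, 1.0, 101)
lower = min(F((1.0 - x) * np.exp(-1.0 / a ** 2) + x, phi(yy))
            for x in grid for yy in grid)
print(lower <= middle <= upper)       # True
```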

In the following result, we obtain more general bounds for the soft margin estimator by using a general function defined on a rectangle and imposing a restriction on the Jaccard function.

Theorem 5

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Also assume that ϒ is an interval in \(\mathbb{R}\) and \(F:\Upsilon \times \Upsilon \rightarrow \mathbb{R}\) is an increasing function with respect to the first variable. If \(0< d\leq \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{ \theta,i})\leq D<1\) and \(\phi:[d,D]\rightarrow \Upsilon \) is an arbitrary function, then

$$\begin{aligned} &F \biggl(\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \geq F \bigl(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ),\phi (y) \bigr) \\ &\quad \geq \min_{x,y\in [d,D]} F \biggl(\frac{D-x}{D-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+\frac{x-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr). \end{aligned}$$
(17)

Furthermore, the right-hand side of (17) is a decreasing function of D and an increasing function of d.

Proof

By utilizing inequality (11) and the fact that F is increasing with respect to the first variable, we obtain (17).

Now, we show that the right-hand side of (17) is a decreasing function of D.

Let \(d\leq k_{1}< k_{2}\leq D\). By Lemma 1, the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave for \(a\in [\sqrt{2},\infty )\). Therefore, the first-order divided difference of \(\Psi (x)\) is decreasing, that is,

$$\begin{aligned} \frac{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{1}-d} \geq \frac{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{2}-d}. \end{aligned}$$
(18)

Multiplying both sides of (18) by \(x-d\geq 0\) and then adding \(\exp (-\frac{(d-1)^{2}}{a^{2}} )\), we get

$$\begin{aligned} &\frac{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{1}-d}(x-d) +\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad \geq \frac{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{2}-d}(x-d) +\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\quad \Rightarrow\quad \frac{ \{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} ) \} (x-d) +\exp (-\frac{(d-1)^{2}}{a^{2}} )(k_{1}-d)}{k_{1}-d} \\ & \phantom{\quad \Rightarrow\quad}\quad\geq \frac{ \{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} ) \} (x-d) +\exp (-\frac{(d-1)^{2}}{a^{2}} )(k_{2}-d)}{k_{2}-d} \\ &\quad \Rightarrow \quad \frac{k_{1}-x}{k_{1}-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{1}-d}\exp \biggl(- \frac{(k_{1}-1)^{2}}{a^{2}} \biggr) \\ &\phantom{\quad \Rightarrow\quad}\quad \geq \frac{k_{2}-x}{k_{2}-d}\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) + \frac{x-d}{k_{2}-d}\exp \biggl(-\frac{(k_{2}-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(19)

By utilizing (19), the fact that \([d,k_{1}]\subseteq [d,k_{2}]\), and the assumption that F is increasing with respect to the first variable, we obtain

$$\begin{aligned} &\min_{x,y\in [d,k_{1}]}F \biggl(\frac{k_{1}-x}{k_{1}-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{1}-d}\exp \biggl(- \frac{(k_{1}-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \geq \min_{x,y\in [d,k_{2}]}F \biggl( \frac{k_{2}-x}{k_{2}-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{2}-d}\exp \biggl(-\frac{(k_{2}-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr). \end{aligned}$$
(20)

Hence, (20) proves that the right-hand side of (17) is a decreasing function of D.

Similarly, we can prove that the right-hand side of (17) is an increasing function of d. □
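
The monotonicity claim at the end of Theorem 5 can also be illustrated numerically; the choices \(F(u,v)=u+v\) and \(\phi (y)=-y\) below are illustrative (F is increasing in the first variable), and the minimum over the rectangle is approximated on a grid:

```python
# Right-hand side of (17): decreasing in D and increasing in d for sample F, phi.
import numpy as np

a = np.sqrt(2)
F = lambda u, v: u + v                # increasing in the first argument
phi = lambda y: -y                    # arbitrary function on [d, D]

def rhs(d, D, num=201):
    xs = np.linspace(d, D, num)
    ys = np.linspace(d, D, num)
    chord = ((D - xs) / (D - d)) * np.exp(-(d - 1.0) ** 2 / a ** 2) \
          + ((xs - d) / (D - d)) * np.exp(-(D - 1.0) ** 2 / a ** 2)
    return min(F(c, phi(y)) for c in chord for y in ys)

print(rhs(0.3, 0.8) >= rhs(0.3, 0.9))   # True: decreasing in D
print(rhs(0.4, 0.9) >= rhs(0.3, 0.9))   # True: increasing in d
```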

In the following theorem, we obtain general bounds for the soft margin estimator by taking a general function defined on a rectangle that is decreasing with respect to the first variable.

Theorem 6

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Also assume that ϒ is an interval in \(\mathbb{R}\), \(F:\Upsilon \times \Upsilon \rightarrow \mathbb{R}\) is a decreasing function with respect to the first variable, and \(\phi:[0,1]\rightarrow \Upsilon \) is an arbitrary function. Then

$$\begin{aligned} &F \biggl(\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \leq F \bigl(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ),\phi (y) \bigr) \\ &\quad \leq \max_{x,y\in [0,1]} F \biggl( (1-x ) \exp \biggl(- \frac{1}{a^{2}} \biggr) +x, \phi (y) \biggr). \end{aligned}$$
(21)

Proof

By utilizing inequality (6) and the fact that F is decreasing with respect to the first variable, we get (21). □

In the next result, we obtain more general bounds for the soft margin estimator by using a general function defined on a rectangle that is decreasing with respect to the first variable and by imposing a restriction on the Jaccard function.

Theorem 7

Let hypothesis H hold with \(a\in [\sqrt{2},\infty )\). Also assume that ϒ is an interval in \(\mathbb{R}\) and \(F:\Upsilon \times \Upsilon \rightarrow \mathbb{R}\) is a decreasing function with respect to the first variable. If \(0< d\leq \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{ \theta,i})\leq D<1\) and \(\phi:[d,D]\rightarrow \Upsilon \) is an arbitrary function, then

$$\begin{aligned} &F \biggl(\exp \biggl(- \frac{ (\frac{1}{n}\sum_{i=1}^{n} \varphi (\overrightarrow{r}_{*},\overrightarrow{r}_{\theta,i})-1 )^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad \leq F \bigl(\hat{P}(\overrightarrow{R}=\overrightarrow{r}_{*}|\Theta = \theta ),\phi (y) \bigr) \\ &\quad \leq \max_{x,y\in [d,D]} F \biggl(\frac{D-x}{D-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad{}+\frac{x-d}{D-d} \exp \biggl(-\frac{(D-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr). \end{aligned}$$
(22)

Furthermore, the right-hand side of (22) is an increasing function of D and a decreasing function of d.

Proof

By using inequality (11) and the fact that F is decreasing with respect to the first variable, we get (22).

Now, we show that the right-hand side of (22) is an increasing function of D.

Let \(d\leq k_{1}< k_{2}\leq D\). By Lemma 1, the Gaussian function \(\Psi (x)=\exp (-\frac{(x-1)^{2}}{a^{2}} )\) is concave for \(a\in [\sqrt{2},\infty )\). Therefore, the first-order divided difference of \(\Psi (x)\) is decreasing, that is,

$$\begin{aligned} \frac{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{1}-d} \geq \frac{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{2}-d}. \end{aligned}$$
(23)

Multiplying both sides of (23) by \(x-d\geq 0\) and then adding \(\exp (-\frac{(d-1)^{2}}{a^{2}} )\), we get

$$\begin{aligned} &\frac{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{1}-d}(x-d) +\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\qquad\geq \frac{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} )}{k_{2}-d}(x-d) +\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) \\ &\quad \Rightarrow\quad \frac{ \{\exp (-\frac{(k_{1}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} ) \} (x-d) +\exp (-\frac{(d-1)^{2}}{a^{2}} )(k_{1}-d)}{k_{1}-d} \\ &\phantom{\quad \Rightarrow\quad}\quad \geq \frac{ \{\exp (-\frac{(k_{2}-1)^{2}}{a^{2}} )-\exp (-\frac{(d-1)^{2}}{a^{2}} ) \} (x-d) +\exp (-\frac{(d-1)^{2}}{a^{2}} )(k_{2}-d)}{k_{2}-d} \\ &\quad \Rightarrow\quad \frac{k_{1}-x}{k_{1}-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{1}-d}\exp \biggl(- \frac{(k_{1}-1)^{2}}{a^{2}} \biggr) \\ &\phantom{\quad \Rightarrow\quad}\quad \geq \frac{k_{2}-x}{k_{2}-d}\exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) + \frac{x-d}{k_{2}-d}\exp \biggl(-\frac{(k_{2}-1)^{2}}{a^{2}} \biggr). \end{aligned}$$
(24)

By utilizing (24) and the fact that \([d,k_{1}]\subseteq [d,k_{2}]\) and F is decreasing with respect to the first variable, we obtain

$$\begin{aligned} &\max_{x,y\in [d,k_{1}]}F \biggl(\frac{k_{1}-x}{k_{1}-d}\exp \biggl(- \frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{1}-d}\exp \biggl(- \frac{(k_{1}-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr) \\ &\quad\leq \max_{x,y\in [d,k_{2}]}F \biggl( \frac{k_{2}-x}{k_{2}-d} \exp \biggl(-\frac{(d-1)^{2}}{a^{2}} \biggr) +\frac{x-d}{k_{2}-d}\exp \biggl(-\frac{(k_{2}-1)^{2}}{a^{2}} \biggr),\phi (y) \biggr). \end{aligned}$$
(25)

Hence (25) confirms that the right-hand side of (22) is an increasing function of D.

Similarly, we can prove that the right-hand side of (22) is a decreasing function of d. □

3 Conclusion

In this paper, we derived useful bounds for the soft margin estimator given in (1) with the help of the notion of concavity. In obtaining these bounds, we exploited the properties of the Jaccard similarity function. To obtain more general bounds for the soft margin estimator, we considered general functions defined on rectangles that are monotonic with respect to the first variable.