1 Introduction

The zero utility principle, introduced by Bühlmann [2], is a method of insurance contract pricing based on a utility function. According to the principle, the insurance premium for a risk X, represented by a non-negative essentially bounded random variable on a given probability space, is defined implicitly as the unique real number \(H_{u}(X)\) satisfying the equation

$$\begin{aligned} E[u(H_{u}(X)-X)]=0, \end{aligned}$$
(1)

where \(u:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a continuous strictly increasing utility function such that \(u(0)=0\). For more details concerning the properties of the principle defined by (1) we refer e.g. to [1, 2, 4, 10, 14]. In this paper we deal with the zero utility principle under the Cumulative Prospect Theory, one of the behavioral models of decision-making under risk. The principle in this setting was introduced by Kałuszka and Krzeszowiec [11, 12]. In their approach a premium for a risk X is defined through the equation

$$\begin{aligned} E_{gh}[u(H_{(u,g,h)}(X)-X)]=0 \end{aligned}$$
(2)

where, for any essentially bounded random variable Y,

$$\begin{aligned} E_{gh}[Y]=\int _0^{\infty }g(P(Y>t))\;dt-\int _{-\infty }^{0}h(P(Y<t))\;dt \end{aligned}$$
(3)

is the Choquet integral with respect to the probability weighting functions g for gains and h for losses, that is, non-decreasing functions mapping [0, 1] into [0, 1] and satisfying \(g(0)=h(0)=0\) and \(g(1)=h(1)=1\). It was proved in [3] that, if g and h are continuous, then the premium is uniquely determined by (2) for every risk X if and only if

$$\begin{aligned} h(p)+g(1-p)>0 \;\;\; \text{ for } \;\;\; p\in [0,1]. \end{aligned}$$
(4)
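For intuition, the functional (3) can be evaluated numerically for a discrete risk. The following sketch (the distribution and the weighting functions in the example are hypothetical illustrations, not taken from the paper) approximates both integrals in (3) by midpoint Riemann sums; with \(g=h=\mathrm{id}_{[0,1]}\) it reduces to the ordinary expectation E[Y].

```python
# Numerical sketch of the Choquet integral (3) for a discrete random
# variable Y with outcomes `vals` and probabilities `probs`.
def choquet(vals, probs, g, h, n=50_000):
    hi = max(max(vals), 0.0)   # upper integration limit for the gains part
    lo = min(min(vals), 0.0)   # lower integration limit for the losses part
    P_gt = lambda t: sum(p for v, p in zip(vals, probs) if v > t)  # P(Y > t)
    P_lt = lambda t: sum(p for v, p in zip(vals, probs) if v < t)  # P(Y < t)
    # int_0^hi g(P(Y > t)) dt and int_lo^0 h(P(Y < t)) dt, midpoint rule
    gain = sum(g(P_gt(hi * (i + 0.5) / n)) for i in range(n)) * hi / n if hi > 0 else 0.0
    loss = sum(h(P_lt(lo * (i + 0.5) / n)) for i in range(n)) * -lo / n if lo < 0 else 0.0
    return gain - loss

ident = lambda p: p
# Y = -1 w.p. 0.3 and Y = 2 w.p. 0.7, so E[Y] = 0.7*2 - 0.3*1 = 1.1
print(choquet([-1.0, 2.0], [0.3, 0.7], ident, ident))  # ≈ 1.1
```

With distorted weights, e.g. \(g(p)=h(p)=p^2\), the same risk is valued at \(0.49\cdot 2-0.09\cdot 1=0.89\), illustrating how g and h reweight gains and losses separately.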

In [5, 7] and [11, 12] a series of results characterizing some important properties of the premium defined by (2) was established. Furthermore, in [6] and [8], the extension problem for such a premium was investigated. In particular, it was proved in [6] that, if g and h are strictly increasing and continuous, then the zero utility principle defined by (2) can be uniquely extended from the family of all ternary risks. However (cf. e.g. the case \(w=0\) in [6, Example 3.5] and [8, Example 1]), the extension from the family of all binary risks need not be unique. The aim of this paper is to establish a characterization of the zero utility principles coinciding on the family of all binary risks. A crucial role in our considerations is played by the solutions of the multiplicative Pexider equation on an open and connected subset of \((0, \infty )^2\).

In the next section we present basic facts concerning the zero utility principle for binary risks. The main result is formulated and proved in the third section. In the last section some consequences of the main result are discussed.

2 Preliminary results

Assume that \((\Omega , {\mathcal {F}},P)\) is a non-atomic probability space, \(g,h:[0,1]\rightarrow [0,1]\) are continuous probability weighting functions satisfying (4) and \(u:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing continuous utility function such that \(u(0)=0\). Let \({\mathcal {X}}\) be the family of all risks, that is, of all non-negative essentially bounded random variables on \((\Omega , {\mathcal {F}},P)\). By

$$\begin{aligned} {\mathcal {X}}^{(2)}:=\{x\cdot 1_{A}:A\in {\mathcal {F}}, \; x\in [0,\infty )\} \end{aligned}$$

we denote the family of all binary risks on the space \((\Omega , {\mathcal {F}},P)\). According to (3), for any \(X=x\cdot 1_{A}\in {\mathcal {X}}^{(2)}\) Eq. (2) becomes

$$\begin{aligned} h(P(A))u(H_{(u,g,h)}(X)-x)+g(1-P(A))u(H_{(u,g,h)}(X))=0. \end{aligned}$$
(5)

Thus, we get

$$\begin{aligned} 0\le H_{(u,g,h)}(X)\le x \;\;\; \text{ for } \;\;\; X=x\cdot 1_{A}\in {\mathcal {X}}^{(2)}. \end{aligned}$$
(6)

Moreover, it follows from (5) that the premium for a risk \(X\in {\mathcal {X}}^{(2)}\) depends only on the distribution of X. Note also that, as the probability space \((\Omega , {\mathcal {F}},P)\) is non-atomic, we have \(\{P(A):A\in {\mathcal {F}}\}=[0,1]\). Therefore, in view of (5), for every \(x\in [0,\infty )\) and \(p\in [0,1]\), we obtain

$$\begin{aligned} h(p)u(H_{(u,g,h)}(x;p)-x)+g(1-p)u(H_{(u,g,h)}(x;p))=0 \end{aligned}$$
(7)

where \(H_{(u,g,h)}(x;p)\) denotes the premium for any risk \(x\cdot 1_{A}\in {\mathcal {X}}^{(2)}\) such that \(P(A)=p\). In view of (6), we have

$$\begin{aligned} 0\le H_{(u,g,h)}(x;p)\le x \;\;\; \text{ for } \;\;\; x\in [0,\infty ), \; p\in [0,1]. \end{aligned}$$
(8)
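Since u is strictly increasing and, by (4), \(h(p)\) and \(g(1-p)\) are non-negative and not both zero, the left-hand side of (7) is strictly increasing in the premium; together with the bounds (8), this makes bisection on \([0,x]\) a natural way to compute \(H_{(u,g,h)}(x;p)\) numerically. A minimal sketch (the concrete u, g, h in the example are hypothetical illustrations):

```python
def premium(x, p, u, g, h, tol=1e-12):
    # Solve Eq. (7): h(p)*u(H - x) + g(1-p)*u(H) = 0 for H in [0, x].
    # F is strictly increasing with F(0) <= 0 <= F(x), so bisection converges
    # to the unique root.
    F = lambda H: h(p) * u(H - x) + g(1 - p) * u(H)
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

ident = lambda t: t
# Expected Utility case g = h = id with u = id: the premium is p*x
print(premium(10.0, 0.25, ident, ident, ident))  # ≈ 2.5
```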

Remark 2.1

Assume that, for \(i\in \{1,2\}\), \(g_i\) and \(h_i\) are continuous probability weighting functions satisfying the condition

$$\begin{aligned} h_i(p)+g_i(1-p)>0 \;\;\; \text{ for } \;\;\; p\in [0,1], \end{aligned}$$
(9)

and \(u_i:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing continuous function with \(u_i(0)=0\). If

$$\begin{aligned} H(X):=H_{(u_1,g_1,h_1)}(X)=H_{(u_2,g_2,h_2)}(X) \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}^{(2)}, \end{aligned}$$
(10)

then according to (7), for \(i\in \{1,2\}\), we get

$$\begin{aligned} h_i(p)u_i(H(x;p)-x)+g_i(1-p)u_i(H(x;p))=0 \;\;\; \text{ for } \;\;\; x\in [0,\infty ), \; p\in [0,1].\nonumber \\ \end{aligned}$$
(11)

Lemma 2.2

Assume that, for \(i\in \{1,2\}\), \(g_i\) and \(h_i\) are continuous probability weighting functions satisfying (9) and \(u_i:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing continuous function such that \(u_i(0)=0\). Let

$$\begin{aligned} p_{g_i}:=\max \{p\in [0,1]: g_i(p)=0\} \;\;\; \text{ for } \;\;\; i\in \{1,2\} \end{aligned}$$

and

$$\begin{aligned} p_{h_i}:=\max \{p\in [0,1]: h_i(p)=0\} \;\;\; \text{ for } \;\;\; i\in \{1,2\}. \end{aligned}$$

If (10) holds then

$$\begin{aligned}{} & {} p_g:=p_{g_1}=p_{g_2},\end{aligned}$$
(12)
$$\begin{aligned}{} & {} p_h:=p_{h_1}=p_{h_2},\end{aligned}$$
(13)
$$\begin{aligned}{} & {} p_g+p_h<1 \end{aligned}$$
(14)

and

$$\begin{aligned} h_i(p)g_i(1-p)>0 \;\;\; \text{ for } \;\;\; p\in (p_h,1-p_g), \; i\in \{1,2\}. \end{aligned}$$
(15)

Proof

Assume that (10) holds. Then, according to Remark 2.1, (11) is valid for \(i\in \{1,2\}\). Suppose that \(p_{g_1}\ne p_{g_2}\), say \(p_{g_1}<p_{g_2}\) and fix a \(p\in (1-p_{g_2},1-p_{g_1})\). Then \(1-p\in (p_{g_1},p_{g_2})\) and so \(g_2(1-p)=0<g_1(1-p)\). Hence, making use of (11) for \(i=2\) and \(x=1\), we get \(h_2(p)u_2(H(1;p)-1)=0\). Furthermore, in view of (9), we have \(h_2(p)>0\). Thus, \(u_2(H(1;p)-1)=0\) and so \(H(1;p)=1\). Therefore, applying (11) again, this time for \(i=1\) and \(x=1\), we obtain

$$\begin{aligned} 0<g_1(1-p)u_1(1)=h_1(p)u_1(H(1;p)-1)+g_1(1-p)u_1(H(1;p))=0, \end{aligned}$$

which yields a contradiction. In this way we have proved (12). The proof of (13) is similar. Note that, according to (9) and (12)–(13), we get

$$\begin{aligned} g_1(1-p_h)=g_1(1-p_{h_1})=h_1(p_{h_1})+g_1(1-p_{h_1})>0 \end{aligned}$$

and so

$$\begin{aligned} 1-p_h>p_{g_1}=p_g, \end{aligned}$$

which implies (14). Finally, (12)–(13) imply (15). \(\square \)

Lemma 2.3

Assume that, for \(i\in \{1,2\}\), \(g_i\) and \(h_i\) are continuous probability weighting functions satisfying (9) and \(u_i:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing continuous function with \(u_i(0)=0\). Let \(\psi _i:(p_h,1-p_g)\rightarrow (0,1)\) for \(i\in \{1,2\}\) be given by

$$\begin{aligned} \psi _i(p)=\frac{g_i(1-p)}{h_i(p)+g_i(1-p)} \;\;\; \text{ for } \;\;\; p\in (p_h,1-p_g). \end{aligned}$$
(16)

If (10) holds then, for every \(x\in [0,\infty )\), \(p\in (p_h,1-p_g)\) and \(i\in \{1,2\}\), we have

$$\begin{aligned} (1-\psi _i(p))u_i(H(x;p)-x)+\psi _i(p)u_i(H(x;p))=0. \end{aligned}$$
(17)

Moreover, for every \(p,q\in (p_h,1-p_g)\) the following conditions are pairwise equivalent:

  1. (i)

    \(\psi _1(p)=\psi _1(q)\);

  2. (ii)

    \(\psi _2(p)=\psi _2(q)\);

  3. (iii)

    \(H(x;p)=H(x;q) \;\;\; \text{ for } \;\;\; x\in [0,\infty )\).

Proof

Assume that (10) holds. Then it follows from Remark 2.1 that (11) is valid for \(i\in \{1,2\}\). Dividing both sides of (11) by \(h_i(p)+g_i(1-p)\), in view of (9) and (16), we conclude that (17) holds for every \(x\in [0,\infty )\), \(p\in (p_h,1-p_g)\) and \(i\in \{1,2\}\).

Let \(p,q\in (p_h,1-p_g)\). If (i) holds then, applying (17) with \(i=1\), for every \(x\in (0,\infty )\), we get

$$\begin{aligned}{} & {} (1-\psi _1(p))(u_1(H(x;p)-x)\\{} & {} -u_1(H(x;q)-x))+\psi _1(p) (u_1(H(x;p))-u_1(H(x;q)))=0. \end{aligned}$$

Since \(u_1\) is strictly increasing, this implies (iii). Conversely, if (iii) holds then, making use of (17), we obtain

$$\begin{aligned} (\psi _1(p)-\psi _1(q))(u_1(H(x;p)-x)-u_1(H(x;p)))=0. \end{aligned}$$

Hence, as \(u_1\) is strictly increasing, we have (i). This proves the equivalence of (i) and (iii). In a similar way one can show that (ii) and (iii) are equivalent.\(\square \)

Lemma 2.4

Assume that, for \(i\in \{1,2\}\), \(g_i\) and \(h_i\) are continuous probability weighting functions satisfying (9), \(u_i:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing continuous function with \(u_i(0)=0\) and \(\psi _i:(p_h,1-p_g)\rightarrow (0,1)\) is defined by (16). Let \({\overline{\psi }}:(0,1)\rightarrow {\mathbb {R}}\) and \({\overline{H}}:(0,\infty )\times (0,1)\rightarrow {\mathbb {R}}\) be given by

$$\begin{aligned} {\overline{\psi }}(p)=\psi _2(\min \{q\in (p_h,1-p_g):\psi _1(q)=p\}) \;\;\; \text{ for } \;\;\; p\in (0,1) \end{aligned}$$
(18)

and

$$\begin{aligned} {\overline{H}}(x,p)=H(x;\min \{q\in (p_h,1-p_g):\psi _1(q)=p\}) \;\;\; \text{ for } \;\;\; x\in (0,\infty ), \; p\in (0,1),\nonumber \\ \end{aligned}$$
(19)

respectively. If (10) holds then

$$\begin{aligned}{} & {} \psi _2(p)=({\overline{\psi }}\circ \psi _1)(p) \;\;\; \text{ for } \;\;\; p\in (p_h,1-p_g),\end{aligned}$$
(20)
$$\begin{aligned}{} & {} (1-p)u_1({\overline{H}}(x,p)-x)+pu_1({\overline{H}}(x,p))=0 \;\;\; \text{ for } \;\;\; x\in (0,\infty ), \; p\in (0,1)\nonumber \\ \end{aligned}$$
(21)

and

$$\begin{aligned} (1-{\overline{\psi }}(p))u_2({\overline{H}}(x,p)-x) +{\overline{\psi }}(p)u_2({\overline{H}}(x,p))=0 \;\;\; \text{ for } \; x\in (0,\infty ), \; p\in (0,1).\nonumber \\ \end{aligned}$$
(22)

Furthermore, for every \(p\in (0,1)\), the function

$$\begin{aligned} (0,\infty )\ni x \mapsto {\overline{H}}(x,p) \end{aligned}$$
(23)

is strictly increasing and continuous, with

$$\begin{aligned} \lim _{x\rightarrow 0^+}{\overline{H}}(x,p)=0. \end{aligned}$$
(24)

Proof

Assume that (10) holds. Applying Lemma 2.3 and making use of (18), for every \(p\in (p_h,1-p_g)\), we get

$$\begin{aligned} {\overline{\psi }}(\psi _1(p))= & {} \psi _2(\min \{q\in (p_h,1-p_g):\psi _1(q)=\psi _1(p)\})\\= & {} \psi _2(\min \{q\in (p_h,1-p_g):\psi _2(q)=\psi _2(p)\})=\psi _2(p), \end{aligned}$$

which gives (20). Furthermore, taking Lemma 2.3 into account, in view of (19), for every \(x\in (0,\infty )\) and \(p\in (p_h,1-p_g)\), we obtain

$$\begin{aligned} {\overline{H}}(x,\psi _1(p))= & {} H(x;\min \{q\in (p_h,1-p_g):\psi _1(q)=\psi _1(p)\})\\= & {} H(x;\min \{q\in (p_h,1-p_g):H(x;q)=H(x;p)\})=H(x;p). \end{aligned}$$

Thus, making use of (17) and (20), for every \(x\in (0,\infty )\) and \(p\in (p_h,1-p_g)\), we get

$$\begin{aligned} (1-\psi _1(p))u_1({\overline{H}}(x,\psi _1(p))-x) +\psi _1(p)u_1({\overline{H}}(x,\psi _1(p)))=0 \end{aligned}$$
(25)

and

$$\begin{aligned} (1-{\overline{\psi }}(\psi _1(p)))u_2({\overline{H}}(x,\psi _1(p)) -x)+{\overline{\psi }}(\psi _1(p))u_2({\overline{H}}(x,\psi _1(p)))=0. \end{aligned}$$
(26)

Moreover, it follows from (16) that \(\psi _1\) is continuous, with

$$\begin{aligned} \lim _{p\rightarrow (1-p_g)^-}\psi _1(p)=\frac{g_1(p_g)}{h_1(1-p_g)+g_1(p_g)}=0 \end{aligned}$$

and

$$\begin{aligned} \lim _{p\rightarrow p_h^+}\psi _1(p)=\frac{g_1(1-p_h)}{h_1(p_h)+g_1(1-p_h)}=1. \end{aligned}$$

Hence \(\{\psi _1(p):p\in (p_h,1-p_g)\}=(0,1)\) and so, from (25) and (26) we derive (21) and (22), respectively.

Fix a \(p\in (0,1)\) and suppose that the function given by (23) is not strictly increasing. Then there exist \(x,y\in (0,\infty )\) such that \(x<y\) and \({\overline{H}}(y,p)\le {\overline{H}}(x,p)\). Hence \({\overline{H}}(y,p)-y<{\overline{H}}(x,p)-x\) and so, making use of (21), we get

$$\begin{aligned}{} & {} 0=(1-p)u_1({\overline{H}}(y,p)-y)+pu_1({\overline{H}}(y,p))\\{} & {} \quad < (1-p)u_1({\overline{H}}(x,p)-x)+pu_1({\overline{H}}(x,p))=0, \end{aligned}$$

which yields a contradiction. Therefore, the function defined by (23) is strictly increasing. Let \(z\in (0,\infty )\). Then the limits \(l(z):=\lim _{x\rightarrow z^-}{\overline{H}}(x,p)\) and \(r(z):=\lim _{x\rightarrow z^+}{\overline{H}}(x,p)\) exist and they are finite. Furthermore, as \(u_1\) is continuous, passing in (21) to the limit with \(x\rightarrow z^-\) and then with \(x\rightarrow z^+\), we obtain

$$\begin{aligned} (1-p)u_1(l(z)-z)+pu_1(l(z))=(1-p)u_1(r(z)-z)+pu_1(r(z)). \end{aligned}$$

Hence

$$\begin{aligned} (1-p)(u_1(l(z)-z)-u_1(r(z)-z))+p(u_1(l(z))-u_1(r(z)))=0. \end{aligned}$$

Since \(u_1\) is strictly increasing, this implies that \(l(z)=r(z)\) and proves the continuity of the function given by (23). Finally note that, letting in (21) \(x\rightarrow 0^+\), we get (24). \(\square \)

Remark 2.5

It follows from Lemma 2.4 that, for every \(p\in (0,1)\), there exists a limit

$$\begin{aligned} c(p):=\lim _{x\rightarrow \infty }{\overline{H}}(x,p)\in (0,\infty ] \end{aligned}$$
(27)

and

$$\begin{aligned} \{{\overline{H}}(x,p):x\in (0,\infty )\}=(0,c(p)). \end{aligned}$$
(28)

Let \(u_1(-\infty ):=\lim _{x\rightarrow -\infty }u_1(x)\) and \(u_1(\infty ):=\lim _{x\rightarrow \infty }u_1(x)\). Then, letting in (21) \(x\rightarrow \infty \), we conclude that either \(u_1(-\infty )=-\infty \) and \(c(p)=\infty \) for \(p\in (0,1)\), or \(-\infty< u_1(-\infty )< 0\) and

$$\begin{aligned} u_1(c(p))=\frac{p-1}{p}u_1(-\infty ) \;\; \text{ for } \text{ every } \;\; p\in (0,1) \;\; \text{ with } \;\; c(p)<\infty . \end{aligned}$$

Thus, in view of (28), for every \(p\in (0,1)\), we have

$$\begin{aligned} \left( 0,\min \left\{ \frac{p-1}{p}u_1(-\infty ),u_1(\infty )\right\} \right) \subseteq \{u_1({\overline{H}}(x,p)):x\in (0,\infty )\}, \end{aligned}$$
(29)

with the convention \(a\cdot (-\infty )=\infty \) for \(a\in (-\infty ,0)\).
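The dichotomy in Remark 2.5 can be observed numerically. The sketch below uses hypothetical choices: \(g_1=h_1=\mathrm{id}_{[0,1]}\), so that \({\overline{H}}(\cdot ,p)\) solves (21) directly, and the bounded-below utility \(u_1(t)=t\) for \(t\ge 0\), \(u_1(t)=t/(1-t)\) for \(t<0\), for which \(u_1(-\infty )=-1\); then (27) gives \(u_1(c(p))=(1-p)/p\), that is, \(c(p)=(1-p)/p\).

```python
def premium_bar(x, p, u, tol=1e-12):
    # Bisection on [0, x] for Eq. (21): (1-p)*u(H - x) + p*u(H) = 0.
    F = lambda H: (1 - p) * u(H - x) + p * u(H)
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

# strictly increasing, continuous, u1(0) = 0, bounded below by -1
u1 = lambda t: t if t >= 0 else t / (1 - t)

for p in (0.5, 0.25):
    approx = premium_bar(1e6, p, u1)   # large x approximates the limit c(p)
    print(p, approx, (1 - p) / p)      # approx approaches c(p) = (1-p)/p
```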

3 Main result

The following theorem is the main result of the paper.

Theorem 3.1

Assume that, for \(i\in \{1,2\}\), \(g_i\) and \(h_i\) are continuous probability weighting functions satisfying (9) and \(u_i:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing continuous function with \(u_i(0)=0\). Then (10) holds if and only if there exist \(\alpha ,\beta ,r\in (0,\infty )\) such that

$$\begin{aligned} \alpha h_2(p)g_1(1-p)^r-\beta h_1(p)^rg_2(1-p)=0 \;\;\; \text{ for } \;\;\; p\in [0,1] \end{aligned}$$
(30)

and

$$\begin{aligned} u_2(x)=\left\{ \begin{array}{ll} -\alpha (-u_1(x))^r &{}\quad \text{ for } x \in (-\infty ,0), \\ \beta u_1(x)^r &{} \quad \text{ for } x \in [0,\infty ). \end{array}\right. \end{aligned}$$
(31)

Proof

Assume that (10) holds. Then, making use of (21) and (22), we get

$$\begin{aligned} {\overline{H}}(x,p)-x=u_1^{-1}\left( \frac{p}{p-1}u_1({\overline{H}}(x,p))\right) \;\;\; \text{ for } \;\;\; x\in (0,\infty ), \; p\in (0,1) \end{aligned}$$

and

$$\begin{aligned} {\overline{H}}(x,p)-x=u_2^{-1}\left( \frac{{\overline{\psi }}(p)}{{\overline{\psi }}(p)-1}u_2({\overline{H}}(x,p))\right) \;\;\; \text{ for } \;\;\; x\in (0,\infty ), \; p\in (0,1), \end{aligned}$$

respectively. Hence, for every \(x\in (0,\infty )\) and \(p\in (0,1)\), we have

$$\begin{aligned} u_1^{-1}\left( \frac{p}{p-1}u_1({\overline{H}}(x,p))\right) =u_2^{-1} \left( \frac{{\overline{\psi }}(p)}{{\overline{\psi }}(p)-1}u_2({\overline{H}}(x,p))\right) \end{aligned}$$

and so, taking

$$\begin{aligned} f:=u_2\circ u_1^{-1}, \end{aligned}$$
(32)

we obtain

$$\begin{aligned} f\left( \frac{p}{p-1}y\right) = \frac{{\overline{\psi }}(p)}{{\overline{\psi }}(p)-1}f(y) \;\; \text{ for } \;\; p\in (0,1), \; y\in \{u_1({\overline{H}}(x,p)):x\in (0,\infty )\}. \end{aligned}$$

Replacing in this equality p by \(\frac{s}{s+1}\), for every \(s\in (0,\infty )\) and \(y\in \left\{ u_1\left( {\overline{H}}\left( x,\frac{s}{s+1}\right) \right) :x\in (0,\infty )\right\} \), we get

$$\begin{aligned} -f(-sy)=\frac{{\overline{\psi }}\left( \frac{s}{s+1}\right) }{1-{\overline{\psi }}\left( \frac{s}{s+1}\right) }f(y). \end{aligned}$$

Furthermore, in view of (29), for every \(s\in (0,\infty )\), we have

$$\begin{aligned} \left( 0,\min \left\{ \frac{-u_1(-\infty )}{s},u_1(\infty )\right\} \right) \subseteq \left\{ u_1\left( {\overline{H}}\left( x,\frac{s}{s+1}\right) \right) :x\in (0,\infty )\right\} . \end{aligned}$$

Therefore, we get

$$\begin{aligned} -f(-sy)=\frac{{\overline{\psi }}(\frac{s}{s+1})}{1-{\overline{\psi }}(\frac{s}{s+1})}f(y) \;\;\; \text{ for } \;\;\; (s,y)\in D, \end{aligned}$$

where

$$\begin{aligned} D:=\left\{ (s,y)\in {\mathbb {R}}^2:s\in (0,\infty ), \; y\in \left( 0,\min \left\{ -\frac{u_1(-\infty )}{s},u_1(\infty )\right\} \right) \right\} . \end{aligned}$$

Note that D is a non-empty, open and connected subset of \((0,\infty )^2\), with

$$\begin{aligned} D_1&:=\{s\in {\mathbb {R}}:(s,y)\in D \; \text{ for } \text{ some } \; y\in {\mathbb {R}}\}=(0,\infty ),\\ D_2&:=\{y\in {\mathbb {R}}:(s,y)\in D \; \text{ for } \text{ some } \; s\in {\mathbb {R}}\}=(0,u_1(\infty )) \end{aligned}$$

and

$$\begin{aligned} D_+:=\{sy:(s,y)\in D\}=(0,-u_1(-\infty )). \end{aligned}$$

Moreover, in view of (32), f is a strictly increasing continuous function. Hence, applying [13, Theorem 13.1.6, p. 349] and [15, Corollary 2], we obtain that there exist \(a,b,r\in (0,\infty )\) such that

$$\begin{aligned}{} & {} \frac{{\overline{\psi }}(\frac{s}{s+1})}{1-{\overline{\psi }}(\frac{s}{s+1})}=as^r \;\;\; \text{ for } \;\;\; s\in (0,\infty ),\end{aligned}$$
(33)
$$\begin{aligned}{} & {} f(x)=bx^r \;\;\; \text{ for } \;\;\; x\in (0,u_1(\infty )) \end{aligned}$$
(34)

and

$$\begin{aligned} -f(-x)=abx^r \;\;\; \text{ for } \;\;\; x\in (0,-u_1(-\infty )). \end{aligned}$$
(35)

It follows from (33) that

$$\begin{aligned} {\overline{\psi }}(p)=\frac{ap^r}{ap^r+(1-p)^r} \;\;\; \text{ for } \;\;\; p\in (0,1). \end{aligned}$$

Hence, making use of (16) and (20), we get

$$\begin{aligned} a h_2(p)g_1(1-p)^r-h_1(p)^rg_2(1-p)=0 \;\;\; \text{ for } \;\;\; p\in (p_h,1-p_g). \end{aligned}$$

Furthermore, in view of (12)–(13), the last equality trivially holds also for \(p\in [0,1]\setminus (p_h,1-p_g)\). Therefore, taking (32) and (34)–(35) into account, we obtain (30)–(31) with \(\alpha :=ab\) and \(\beta :=b\).

In order to prove the converse implication, assume that (30)–(31) hold with some \(\alpha ,\beta ,r\in (0,\infty )\). Fix an \(x\in (0,\infty )\) and a \(p\in [0,1]\). If \(g_1(1-p)=0\) then from (9) and (30) we deduce that \(g_2(1-p)=0\) and \(h_1(p)h_2(p)>0\). Hence, as \(u_i\) for \(i\in \{1,2\}\) is strictly increasing, with \(u_i(0)=0\), in view of (7), we get

$$\begin{aligned} H_{(u_1,g_1,h_1)}(x;p)=H_{(u_2,g_2,h_2)}(x;p)=x. \end{aligned}$$

If \(g_1(1-p)>0\), then applying (6) and (30)–(31), we obtain

$$\begin{aligned}{} & {} h_2(p)u_2(H_{(u_1,g_1,h_1)}(x;p)-x)+g_2(1-p)u_2(H_{(u_1,g_1,h_1)}(x;p))\\{} & {} \quad =\frac{\beta g_2(1-p)}{g_1(1-p)^r}((g_1(1-p)u_1 (H_{(u_1,g_1,h_1)}(x;p)))^r\\{} & {} \qquad -(-h_1(p)u_1(H_{(u_1,g_1,h_1)}(x;p)-x))^r). \end{aligned}$$

On the other hand, according to (7), we have

$$\begin{aligned} g_1(1-p)u_1(H_{(u_1,g_1,h_1)}(x;p))=-h_1(p)u_1(H_{(u_1,g_1,h_1)}(x;p)-x). \end{aligned}$$

Thus

$$\begin{aligned} h_2(p)u_2(H_{(u_1,g_1,h_1)}(x;p)-x)+g_2(1-p)u_2(H_{(u_1,g_1,h_1)}(x;p))=0, \end{aligned}$$

which, in view of (7), implies that \(H_{(u_1,g_1,h_1)}(x;p)=H_{(u_2,g_2,h_2)}(x;p)\). Hence, (10) holds. \(\square \)
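The sufficiency direction of Theorem 3.1 can be checked on a concrete example. All choices below are hypothetical: starting from \(u_1=\mathrm{id}\), \(g_1=h_1=\mathrm{id}_{[0,1]}\), with \(\alpha =\beta =1\) and \(r=2\), conditions (30)–(31) produce \(u_2(t)=t^2\) for \(t\ge 0\), \(u_2(t)=-t^2\) for \(t<0\) and \(g_2(p)=h_2(p)=p^2\); by the theorem, both triples must price every binary risk identically.

```python
# Sketch verifying the sufficiency direction of Theorem 3.1 on the
# hypothetical example above: both triples satisfy (30)-(31), so their
# binary premiums should coincide.

def premium(x, p, u, g, h, tol=1e-10):
    # Bisection on [0, x] for Eq. (7): h(p)*u(H-x) + g(1-p)*u(H) = 0.
    F = lambda H: h(p) * u(H - x) + g(1 - p) * u(H)
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

u1 = g1 = h1 = lambda t: t
u2 = lambda t: t * t if t >= 0 else -t * t   # Eq. (31) with alpha = beta = 1, r = 2
g2 = h2 = lambda p: p * p                    # Eq. (30) then holds identically

for x in (1.0, 5.0, 12.5):
    for p in (0.1, 0.5, 0.9):
        H1 = premium(x, p, u1, g1, h1)
        H2 = premium(x, p, u2, g2, h2)
        assert abs(H1 - H2) < 1e-6, (x, p, H1, H2)
print("premiums coincide")  # in this example both equal p*x
```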

4 Applications

In this section we show that some known results concerning the zero utility principle in various settings can be directly derived from Theorem 3.1. We begin with the following two simple observations.

Remark 4.1

Let g and h be continuous probability weighting functions satisfying (4) and let \(u:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a strictly increasing continuous function such that \(u(0)=0\). Assume that the zero utility principle defined by (2) coincides on the family of all binary risks with the net premium, that is

$$\begin{aligned} H_{(u,g,h)}(X)=E[X] \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}^{(2)}. \end{aligned}$$
(36)

Let \(g_0,h_0:[0,1]\rightarrow [0,1]\) be given by \(g_0(p)=h_0(p)=p\) for \(p\in [0,1]\) and let \(u_0:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be of the form \(u_0(x)=x\) for \(x\in {\mathbb {R}}\). Then, for any \(X=x\cdot 1_{A}\in {\mathcal {X}}^{(2)}\), we have

$$\begin{aligned}{} & {} h_0(P(A))u_0(E[X]-x)+g_0(1-P(A))u_0(E[X])\\{} & {} \quad =P(A)(P(A)x-x)+(1-P(A))P(A)x=0. \end{aligned}$$

Thus, taking (5) into account, we conclude that \(H_{(u_0,g_0,h_0)}(X)=E[X]\) for \(X\in {\mathcal {X}}^{(2)}\). Hence, in view of (36), we get

$$\begin{aligned} H_{(u,g,h)}(X)=H_{(u_0,g_0,h_0)}(X) \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}^{(2)}. \end{aligned}$$
(37)

Therefore, applying Theorem 3.1, we recover [5, Theorem 3.1], which says that (36) holds if and only if there exist \(\alpha ,\beta ,r\in (0,\infty )\) such that

$$\begin{aligned} \alpha h(p)(1-p)^r-\beta g(1-p)p^r=0 \;\;\; \text{ for } \;\;\; p\in [0,1] \end{aligned}$$
(38)

and

$$\begin{aligned} u(x)=\left\{ \begin{array}{ll} -\alpha (-x)^r &{} \quad \text{ for } x \in (-\infty ,0), \\ \beta x^r &{} \quad \text{ for } x \in [0,\infty ). \end{array}\right. \end{aligned}$$

Remark 4.2

Assume that g and h are continuous probability weighting functions satisfying (4), \(u:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing continuous function with \(u(0)=0\) and the zero utility principle defined by (2) coincides on the family of all binary risks with the exponential premium, i.e.

$$\begin{aligned} H_{(u,g,h)}(X)=\frac{1}{c}\ln E[e^{cX}] \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}^{(2)}, \end{aligned}$$
(39)

where \(c\in (0,\infty )\) is fixed. Let \(g_0\) and \(h_0\) be as in Remark 4.1 and let \(u_0:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be given by \(u_0(x)=1-e^{-cx}\) for \(x\in {\mathbb {R}}\). Then, for every \(X=x\cdot 1_{A}\in {\mathcal {X}}^{(2)}\), we obtain

$$\begin{aligned}{} & {} h_0(P(A))u_0\left( \frac{1}{c}\ln E[e^{cX}]-x\right) +g_0(1-P(A))u_0\left( \frac{1}{c}\ln E[e^{cX}]\right) \\{} & {} \quad =P(A)u_0\left( \frac{1}{c}\ln \frac{ E[e^{cX}]}{e^{cx}}\right) +(1-P(A))u_0\left( \frac{1}{c}\ln E[e^{cX}]\right) \\{} & {} \quad =P(A)\left( 1-\frac{e^{cx}}{ E[e^{cX}]}\right) +(1-P(A))\left( 1-\frac{1}{E[e^{cX}]}\right) \\{} & {} \quad =1-\frac{P(A)e^{cx}}{ P(A)e^{cx}+1-P(A)}-\frac{1-P(A)}{P(A)e^{cx}+1-P(A)}=0. \end{aligned}$$

Hence, in view of (5), we have \(H_{(u_0,g_0,h_0)}(X)=\frac{1}{c}\ln E[e^{cX}]\) for \(X\in {\mathcal {X}}^{(2)}\) and so making use of (39), we get (37). Thus, applying Theorem 3.1, we obtain that (39) holds if and only if there exist \(\alpha ,\beta ,r\in (0,\infty )\) such that (38) is valid and

$$\begin{aligned} u(x)=\left\{ \begin{array}{ll} -\alpha (e^{-cx}-1)^r &{}\quad \text {for } x \in (-\infty ,0), \\ \beta (1-e^{-cx})^r &{}\quad \text {for } x \in [0,\infty ). \end{array}\right. \end{aligned}$$

Therefore, [5, Theorem 4.1] is a particular case of Theorem 3.1.
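The computation in Remark 4.2 can be cross-checked numerically: with \(u_0(x)=1-e^{-cx}\) and \(g_0=h_0=\mathrm{id}_{[0,1]}\), the zero utility premium of a binary risk should coincide with the exponential premium \(\frac{1}{c}\ln E[e^{cX}]\). A sketch (c, x and p below are hypothetical parameters):

```python
import math

def premium(x, p, u, g, h, tol=1e-12):
    # Bisection on [0, x] for Eq. (7).
    F = lambda H: h(p) * u(H - x) + g(1 - p) * u(H)
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

c = 0.5
u0 = lambda t: 1 - math.exp(-c * t)
ident = lambda t: t

x, p = 10.0, 0.25
# For X = x*1_A with P(A) = p: E[e^{cX}] = p*e^{cx} + 1 - p
exponential = math.log(p * math.exp(c * x) + 1 - p) / c
zero_utility = premium(x, p, u0, ident, ident)
print(abs(zero_utility - exponential) < 1e-6)  # True
```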

The next result shows that, if the probability weighting functions are fixed, then the zero utility principle can be uniquely extended from the family of all binary risks onto the family of all risks.

Corollary 4.3

Assume that g and h are continuous probability weighting functions satisfying (4) and \(u_i:{\mathbb {R}}\rightarrow {\mathbb {R}}\) for \(i\in \{1,2\}\) is a strictly increasing continuous function with \(u_i(0)=0\). If

$$\begin{aligned} H_{(u_1,g,h)}(X)=H_{(u_2,g,h)}(X) \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}^{(2)}, \end{aligned}$$
(40)

then there exists an \(\alpha \in (0,\infty )\) such that

$$\begin{aligned} u_2(x)=\alpha u_1(x) \;\;\; \text{ for } \;\;\; x \in {\mathbb {R}}. \end{aligned}$$
(41)

Conversely, if (41) holds with some \(\alpha \in (0,\infty )\), then

$$\begin{aligned} H_{(u_1,g,h)}(X)=H_{(u_2,g,h)}(X) \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}. \end{aligned}$$
(42)

Proof

Assume that (40) is valid. Then, according to Theorem 3.1, there exist \(\alpha ,\beta ,r\in (0,\infty )\) such that (31) holds and

$$\begin{aligned} h(p)g(1-p)(\alpha g(1-p)^{r-1}-\beta h(p)^{r-1})=0 \;\;\; \text{ for } \;\;\; p\in [0,1]. \end{aligned}$$

Thus

$$\begin{aligned} \alpha g(1-p)^{r-1}=\beta h(p)^{r-1} \;\;\; \text{ for } \;\;\; p\in (p_h,1-p_g). \end{aligned}$$
(43)

Suppose that \(r\ne 1\). Then from (43) we derive that

$$\begin{aligned} \alpha ^{\frac{1}{r-1}} g(1-p)=\beta ^{\frac{1}{r-1}} h(p) \;\;\; \text{ for } \;\;\; p\in (p_h,1-p_g). \end{aligned}$$

Since g and h are continuous, letting in this equality \(p\rightarrow p_h^+\), we obtain \(g(1-p_h)=0\). Hence \(h(p_h)+g(1-p_h)=0\), which contradicts (4). Therefore, \(r=1\) and so, in view of (43), we get \(\alpha =\beta \). Thus, (31) becomes (41).

If (41) is valid with some \(\alpha \in (0,\infty )\), then applying the positive homogeneity of the Choquet integral and making use of (2), for every \(X\in {\mathcal {X}}\), we obtain

$$\begin{aligned} E_{gh}[u_2(H_{(u_1,g,h)}(X)-X)]=\alpha E_{gh}[u_1(H_{(u_1,g,h)}(X)-X)]=0, \end{aligned}$$

which implies (42). \(\square \)

Remark 4.4

Recall that the zero utility principle under the Expected Utility model is defined by (1), which is a particular case of (2) with \(g=h=id_{[0,1]}\). Hence, Corollary 4.3 is a generalization of [4, Theorem 6].

It is worth mentioning that Heilpern [9] introduced and studied the zero utility principle under the Rank-Dependent Utility model. In this setting, the premium for \(X\in {\mathcal {X}}\) is defined as a solution of the equation

$$\begin{aligned} E_g[u(w+H_{(w,u,g)}(X)-X)]=u(w), \end{aligned}$$
(44)

where w is a fixed real number, \(u:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing continuous function such that \(u(0)=0\) and, for any essentially bounded random variable Y,

$$\begin{aligned} E_g[Y]=\int _{-\infty }^{0}(g(P(Y>t))-1)\;dt+\int _0^{\infty }g(P(Y>t))\;dt \end{aligned}$$
(45)

is the Choquet integral with respect to a probability weighting function g. The following result, being a consequence of Theorem 3.1, characterizes those zero utility principles defined by (44) which coincide on the family of all binary risks.

Corollary 4.5

Let \(w\in {\mathbb {R}}\). Assume that, for \(i\in \{1,2\}\), \(g_i\) is a continuous probability weighting function and \(u_i:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a strictly increasing continuous function with \(u_i(0)=0\). Then

$$\begin{aligned} H_{(w,u_1,g_1)}(X)=H_{(w,u_2,g_2)}(X) \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}^{(2)}, \end{aligned}$$
(46)

if and only if there exist \(\alpha ,\beta ,r\in (0,\infty )\) such that

$$\begin{aligned} g_2(p)=\frac{\alpha g_1(p)^r}{\alpha g_1(p)^r+\beta (1-g_1(p))^r} \;\;\; \text{ for } \;\;\; p\in [0,1] \end{aligned}$$

and

$$\begin{aligned} u_2(x)=\left\{ \begin{array}{ll} \gamma -\alpha (u_1(w)-u_1(x))^r &{} \quad \text{ for } x \in (-\infty ,w), \\ \gamma +\beta (u_1(x)-u_1(w))^r &{} \quad \text{ for } x \in [w,\infty ), \end{array}\right. \end{aligned}$$

where

$$\begin{aligned} \gamma =\left\{ \begin{array}{ll} -\beta (-u_1(w))^r&{}\quad \text{ whenever } w<0, \\ 0 &{} \quad \text{ whenever } w=0, \\ \alpha u_1(w)^r &{} \quad \text{ whenever } w>0. \end{array}\right. \end{aligned}$$

Proof

According to (44), we have

$$\begin{aligned} E_{g_i}[u_i(w+H_{(w,u_i,g_i)}(X)-X)]=u_i(w) \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}, \; i\in \{1,2\}. \end{aligned}$$
(47)

Since the Choquet integral given by (45) is translative, in view of (3), Eq. (47) can be rewritten in the following form

$$\begin{aligned} E_{g_ih_i}[v_i(H_{(w,u_i,g_i)}(X)-X)]=0 \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}, \; i\in \{1,2\}, \end{aligned}$$

where \(v_i:{\mathbb {R}}\rightarrow {\mathbb {R}}\) and \(h_i:[0,1]\rightarrow [0,1]\) for \(i\in \{1,2\}\) are given by

$$\begin{aligned} v_i(x)=u_i(x+w)-u_i(w) \;\;\; \text{ for } \;\;\; x\in {\mathbb {R}}, \end{aligned}$$
(48)

and

$$\begin{aligned} h_i(p)=1-g_i(1-p) \;\;\; \text{ for } \;\;\; p\in [0,1], \end{aligned}$$
(49)

respectively. Thus

$$\begin{aligned} H_{(w,u_i,g_i)}(X)=H_{(v_i,g_i,h_i)}(X) \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}, \; i\in \{1,2\} \end{aligned}$$

and so, (46) is equivalent to

$$\begin{aligned} H_{(v_1,g_1,h_1)}(X)=H_{(v_2,g_2,h_2)}(X) \;\;\; \text{ for } \;\;\; X\in {\mathcal {X}}^{(2)}. \end{aligned}$$

Note that, for \(i\in \{1,2\}\), \(h_i\) is a continuous probability weighting function and \(v_i\) is strictly increasing and continuous, with \(v_i(0)=0\). Therefore, applying Theorem 3.1, we obtain that (46) holds if and only if there exist \(\alpha ,\beta ,r\in (0,\infty )\) such that (30) is valid and

$$\begin{aligned} v_2(x)=\left\{ \begin{array}{ll} -\alpha (-v_1(x))^r &{} \quad \text{ for } x \in (-\infty ,0), \\ \beta v_1(x)^r &{} \quad \text{ for } x \in [0,\infty ). \end{array}\right. \end{aligned}$$

Hence, taking (48)–(49) into account and using the fact that \(u_i(0)=0\) for \(i\in \{1,2\}\), after a standard computation we get the assertion. \(\square \)
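The reduction used in the proof can also be checked numerically: for a binary risk, the Choquet integral (45) of the resulting two-point outcome collapses to \(y_{\min }+g(P(Y>y_{\min }))(y_{\max }-y_{\min })\), so (44) can be solved directly and compared with the premium of the transformed triple \((v,g,h)\) from (48)–(49). A sketch (w, c and the weighting function are hypothetical choices):

```python
import math

def rdu_premium(x, p, w, u, g, tol=1e-10):
    # Solve E_g[u(w + H - X)] = u(w) for binary X (Eq. (44)); for a
    # two-point outcome the Choquet integral (45) collapses to
    # y_low + g(P(Y > y_low)) * (y_high - y_low).
    F = lambda H: u(w + H - x) + g(1 - p) * (u(w + H) - u(w + H - x)) - u(w)
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

def cpt_premium(x, p, v, g, h, tol=1e-10):
    # Eq. (7) for the transformed triple (v, g, h).
    F = lambda H: h(p) * v(H - x) + g(1 - p) * v(H)
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

w, c = 1.5, 0.7                      # hypothetical wealth and risk-aversion parameter
u = lambda t: 1 - math.exp(-c * t)   # strictly increasing, u(0) = 0
g = lambda q: q ** 2
v = lambda t: u(t + w) - u(w)        # Eq. (48)
h = lambda q: 1 - g(1 - q)           # Eq. (49)

for x in (2.0, 7.0):
    for p in (0.2, 0.6):
        assert abs(rdu_premium(x, p, w, u, g) - cpt_premium(x, p, v, g, h)) < 1e-6
print("RDU and transformed CPT premiums coincide")
```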

Remark 4.6

Corollary 4.5 is a generalization of [8, Theorem 3], where the case \(w\in [0,\infty )\) was considered. Let us note that the strict monotonicity of g, although not explicitly assumed, was used in the proof of [8, Theorem 3]. Moreover, there is a misprint in the characterization presented in [8, Theorem 3]. Namely, in formula (30) of [8], on the right-hand side, in the denominator, instead of \(a(1-g(p)^r)\) there should be \(a(1-g(p))^r\).