1 Introduction

Probability weighting functions play an important role in non-expected utility theories, including prospect theory and rank-dependent models, and so there has been consistent interest in them (see, e.g. Abdellaoui et al. 2008; Kahneman and Tversky 2013; Lattimore et al. 1992; Ostaszewski et al. 1998; Prelec 1998; Tversky and Kahneman 1992; Wakker 2010; Wakker and Yang 2019). These functions describe the phenomenon that people tend to overreact to events that occur with a low probability and underreact to events that occur with a high probability (see, e.g. Bleichrodt 2001; Camerer 2007; Chateauneuf et al. 2007; Köbberling and Wakker 2003; Koszegi and Rabin 2006; Loomes et al. 2002). Thus, in line with the empirical estimates, probability weighting functions are regressive (first lying above the identity function, then below the diagonal), inverse S-shaped (first concave, then convex) and asymmetric (intersecting the diagonal at about one third) (see, e.g. Levy 1992; Li et al. 2017; Offerman et al. 2009; Prelec 1998).

In this study, we present a novel methodology that can be used to generate parametric probability weighting functions by making use of the Dombi modifier operator of continuous-valued logic (Dombi 2012a). This operator is defined as follows.

Definition 1

The modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}:[0,1] \rightarrow [0,1]\) is given by

$$\begin{aligned} m_{\nu ,\nu _0}^{(\lambda )}(x) = f^{-1}\left( f(\nu _0) \left( \frac{f(x)}{f(\nu )}\right) ^{\lambda }\right) , \end{aligned}$$
(1)

where \(\nu ,\nu _0 \in (0,1)\), \(\lambda \in \mathbb {R}\) and \(f: [0,1] \rightarrow [0,\infty ]\) is a strictly decreasing (or increasing) continuous function, with the inverse function \(f^{-1}:[0,\infty ] \rightarrow [0,1]\), such that

  1. (a)

    if f is strictly increasing, then \(f(0)=0\) and \(\lim _{x \rightarrow 1}f(x) = \infty \)

  2. (b)

    if f is strictly decreasing, then \(f(1)=0\) and \(\lim _{x \rightarrow 0}f(x) = \infty \).

Here, function f is called the generator function of the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\).
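As a computational illustration, the operator in Eq. (1) can be implemented in a few lines. The following sketch (Python; the strictly decreasing generator \(f(x) = \frac{1-x}{x}\), discussed later in the paper, is assumed here for concreteness) evaluates Eq. (1) and checks the property \(m_{\nu ,\nu _0}^{(\lambda )}(\nu ) = \nu _0\).

```python
def modifier(x, nu, nu0, lam, f, f_inv):
    """The modifier operator m_{nu,nu0}^{(lambda)}(x) of Eq. (1)."""
    return f_inv(f(nu0) * (f(x) / f(nu)) ** lam)

# Example generator: f(x) = (1 - x) / x, strictly decreasing on (0, 1],
# with inverse f^{-1}(t) = 1 / (1 + t).
f = lambda x: (1.0 - x) / x
f_inv = lambda t: 1.0 / (1.0 + t)

nu, nu0, lam = 0.4, 0.6, 0.65
y = modifier(nu, nu, nu0, lam, f, f_inv)   # m(nu) = nu0
```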

We will show that if \(\lambda >0\), then \(m_{\nu ,\nu _0}^{(\lambda )}\) is a probability weighting function. It is worth mentioning that in continuous-valued logic, the generator function f is closely connected with the conjunction and disjunction operators, and the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\) is also known as the kappa function. Next, we will demonstrate that the application of the modifier operator in Eq. (1) may be treated as a general approach for creating probability weighting functions, including the well-known ones. We will show that the Prelec probability weighting function family (see Prelec 1998) can be induced by the generator function \(f(x) = - \ln (x)\), where \(x \in (0,1]\), by applying the modifier operator in Eq. (1) with \(\nu _0 = \nu \). Also, the Ostaszewski, Green and Myerson probability weighting function family (see Ostaszewski et al. 1998), introduced independently by Lattimore, Baker and Witte (see Lattimore et al. 1992), can be generated from the generator function \(f(x) = \frac{1-x}{x}\), where \(x \in (0,1]\), by making use of the modifier operator in Eq. (1) with \(\nu _0 = \nu \). Note that the function \(f(x) = \frac{1-x}{x}\) is the generator function of the Dombi conjunction and disjunction operators (Dombi 1982). In previous papers of ours (see Dombi and Jónás 2018; Dombi et al. 2018), we introduced the epsilon function, which can be used to approximate the exponential function. Here, we will show that the asymptotic probability weighting function induced by the inverse of the epsilon function via Eq. (1) with \(\nu _0 = \nu \) is just the Prelec probability weighting function. Furthermore, we will prove that, by using the modifier operator in Eq. (1), other probability weighting functions can be generated from the so-called dual generator functions and from transformed generator functions. 
Lastly, we will show how the modifier operator can be used to generate strictly convex (or concave) probability weighting functions and demonstrate how a generated probability weighting function can be fitted to empirical data.

The rest of this paper is structured as follows. In Sect. 2, we will introduce the modifier operator, which we will use later on, and briefly describe the role of its parameters. In Sect. 3, we will show how the modifier operator can be utilized to generate probability weighting functions and discuss how the modifier operator can be utilized in practical regression problems. Lastly, in Sect. 4, we will provide a short summary of our findings and highlight our future research plans.

2 Modifier operators in continuous-valued logic

In fuzzy logic, linguistic modifiers like ‘very’, ‘more or less’, ‘somewhat’, ‘rather’ and ‘quite’, applied to fuzzy sets that have strictly monotonically increasing or decreasing membership functions, can be modeled by modifier operators. In Dombi’s pliant system (Dombi 2008, 2012a), the general form of the modifier operator is given by Definition 1.

Later on, we will show that the value of parameter \(\lambda \) is closely related to the slope of function \(m_{\nu ,\nu _0}^{(\lambda )}\) at \(x=\nu \). Notice that

$$\begin{aligned} m_{\nu ,\nu _0}^{(\lambda )}(\nu ) = \nu _0 \end{aligned}$$

immediately follows from Eq. (1). Hence, if \(\lambda \ne 1\) and \(\nu _0 = \nu \), then \(\nu \) is the fixed point of the transformation \(x \longmapsto m_{\nu ,\nu _0}^{(\lambda )}(x)\), where \(x \in (0,1)\).

3 Generating probability weighting functions

In prospect theory, the probability weighting functions are defined as follows (Wakker 2010).

Definition 2

The function \(w: [0,1] \rightarrow [0,1]\) is said to be a probability weighting function, if w satisfies the following requirements:

  1. (1)

    w is strictly increasing;

  2. (2)

    \(w(0)=0\) and \(w(1)=1\).

Note that although the continuity of w is not required in general, we will generate probability weighting functions that are continuous on the interval [0, 1]. It should also be added that in prospect theory, the argument of a probability weighting function is traditionally denoted by p, indicating that the probability weighting function is a transformation on a probability measure. Here, we will use the notation x for the argument of the function w. In this section, we will show how the modifier operator in Eq. (1) can be applied to generate probability weighting functions.

Proposition 1

If \(\lambda >0\), then the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\) given by Definition 1 satisfies the requirements for a probability weighting function in Definition 2.

Proof

By noting the properties of the generator function f (see Definition 1), the proof is straightforward. \(\square \)

Proposition 1 lays the foundations for generating probability weighting functions derived from appropriately chosen generator functions. Here, we will utilize the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\) with the parameter settings \(\nu _0 = \nu \) and \(\lambda >0\). This allows us to characterize the generated probability weighting function by its fixed point \(\nu \) and by its sharpness parameter \(\lambda \). That is, the generated probability weighting function w will always have the form

$$\begin{aligned} w(x) = m_{\nu }^{(\lambda )}(x), \end{aligned}$$

where

$$\begin{aligned} m_{\nu }^{(\lambda )}(x) = f^{-1}\left( f(\nu ) \left( \frac{f(x)}{f(\nu )}\right) ^{\lambda }\right) , \end{aligned}$$
(2)

\(x \in [0,1]\), \(\nu \in (0,1)\) and \(\lambda >0\).

Remark 1

One can easily see that the inverse function \(\left[ m_{\nu }^{(\lambda )}\right] ^{-1}\) of the modifier operator \(m_{\nu }^{(\lambda )}\) given in Eq. (2) is

$$\begin{aligned} \left[ m_{\nu }^{(\lambda )}\right] ^{-1}(x) = f^{-1}\left( f(\nu ) \left( \frac{f(x)}{f(\nu )}\right) ^{1/\lambda }\right) , \end{aligned}$$

and so \(\left[ m_{\nu }^{(\lambda )}\right] ^{-1}\) is a probability weighting function as well.
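Remark 1 can be verified numerically: applying Eq. (2) with the exponent \(1/\lambda \) inverts the operator. A minimal sketch (Python; the generator \(f(x) = -\ln (x)\) is assumed here for illustration):

```python
import math

f = lambda x: -math.log(x)                 # strictly decreasing generator
f_inv = lambda t: math.exp(-t)

def m(x, nu, lam):
    """m_nu^{(lambda)}(x) of Eq. (2), i.e. Eq. (1) with nu0 = nu."""
    return f_inv(f(nu) * (f(x) / f(nu)) ** lam)

nu, lam = 0.32, 0.65
x = 0.1
roundtrip = m(m(x, nu, lam), nu, 1.0 / lam)   # Eq. (2) with 1/lambda inverts m
```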

The following proposition tells us about the shape of the probability weighting function generated by the modifier operator \(m_{\nu }^{(\lambda )}\).

Proposition 2

If \(\lambda >0\), \(\nu \in (0,1)\) and the probability weighting function \(w: [0,1] \rightarrow [0,1]\) is generated by the modifier operator \(m_{\nu }^{(\lambda )}: [0,1] \rightarrow [0,1]\), then

  1. (1)

    If \(0< \lambda <1\), then w(x) is concave in \((0,\nu ]\) and w(x) is convex in \([\nu ,1)\);

  2. (2)

    If \(\lambda =1\), then \(w(x)=x\) for any \(x \in [0,1]\);

  3. (3)

    If \(1 < \lambda \), then w(x) is convex in \((0,\nu ]\) and w(x) is concave in \([\nu ,1)\).

Proof

Let \(w(x) = m_{\nu }^{(\lambda )}(x)\) for any \(x \in [0,1]\), where \(\nu \in (0,1)\) and \(\lambda >0\). Furthermore, let g(x) be defined by

$$\begin{aligned} g(x) = \left( f(\nu ) \right) ^{1-\lambda } \left( f(x) \right) ^{\lambda }. \end{aligned}$$
(3)

Then, w(x) can be written as

$$\begin{aligned} w(x) = f^{-1}\left( g(x) \right) . \end{aligned}$$
(4)

Here, we will distinguish two cases: (a) f is strictly increasing, (b) f is strictly decreasing.

  1. (a)

    In this case, f is a strictly increasing function and so \(f^{-1}\) is a strictly increasing function as well. By noting the properties of f (see Definition 1) and using Eq. (3) and Eq. (4), we readily get that

    • if \(x \in (0,\nu ]\) and \(0< \lambda <1\), then \(f^{-1}\left( g(x) \right) \ge x\)

    • if \(x \in [\nu ,1)\) and \(0< \lambda <1\), then \(f^{-1}\left( g(x) \right) \le x\),

    from which (1) follows. Similarly,

    • if \(x \in (0,\nu ]\) and \(1< \lambda \), then \(f^{-1}\left( g(x) \right) \le x\)

    • if \(x \in [\nu ,1)\) and \(1< \lambda \), then \(f^{-1}\left( g(x) \right) \ge x\),

    from which we have (3).

  2. (b)

    In this case, f is a strictly decreasing function and so \(f^{-1}\) is a strictly decreasing function as well. Using this fact, the proof can be obtained in a way similar to that of case (a).

If \(\lambda = 1\), then \(w(x)=x\) trivially holds for any \(x \in [0,1]\). \(\square \)

Suppose that the probability weighting function \(w:[0,1] \rightarrow [0,1]\) is generated by the modifier operator \(m_{\nu }^{(\lambda )}:[0,1] \rightarrow [0,1]\), \(\lambda >0\) and \(\nu \in (0,1)\). Then, the effect of the parameters \(\lambda \) and \(\nu \) on the shape of the probability weighting function w can be summarized as follows.

  • Parameter \(\lambda \) determines the sharpness and the shape of w. The more the value of \(\lambda \) differs from 1, the more the shape of w differs from that of the identity function. If \(0<\lambda <1\), then w is inverse S-shaped, and if \(1<\lambda \), then w is S-shaped.

  • Parameter \(\nu \) determines the point where w intersects the diagonal line; that is, parameter \(\nu \) may be viewed as the elevation parameter of w.
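These parameter roles can be checked without plotting. The sketch below (Python; the generator \(f(x) = \frac{1-x}{x}\) is assumed for illustration) confirms that for \(0< \lambda <1\) the generated w lies above the diagonal on \((0,\nu )\) and below it on \((\nu ,1)\), with fixed point \(\nu \):

```python
f = lambda x: (1.0 - x) / x
f_inv = lambda t: 1.0 / (1.0 + t)

def w(x, nu, lam):
    """m_nu^{(lambda)}(x) of Eq. (2)."""
    return f_inv(f(nu) * (f(x) / f(nu)) ** lam)

nu, lam = 1.0 / 3.0, 0.65                      # inverse S-shape, fixed point 1/3
above = all(w(x, nu, lam) > x for x in [0.05, 0.1, 0.2, 0.3])
below = all(w(x, nu, lam) < x for x in [0.4, 0.6, 0.8, 0.95])
fixed = w(nu, nu, lam)                         # equals nu
```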

Fig. 1

The role of parameters \(\nu \) and \(\lambda \)

Figure 1 shows the effect of the values of parameters \(\lambda \) and \(\nu \) on the shape of the probability weighting function w that was generated from the same generator function via the modifier operator \(m_{\nu }^{(\lambda )}\).

In the following subsections, we will show that the application of the modifier operator in Eq. (2) can be treated as a general approach for creating probability weighting functions including the most important ones.

3.1 The Prelec probability weighting function family

The Prelec probability weighting function family (Prelec 1998) \(w_{P}\) is given by

$$\begin{aligned} w_{P}(x) = \left( \mathrm {e}^{- \left( -\ln (x) \right) ^{a}} \right) ^{b}, \end{aligned}$$
(5)

where \(0<a<1\), \(b>0\) and \(x \in (0,1]\). The following proposition shows how the Prelec probability weighting function family can be generated by the modifier operator \(m_{\nu }^{(\lambda )}\).

Proposition 3

Let \(0<a<1\), \(b>0\) and let the generator function f be given by \(f(x) = - \ln (x)\), where \(x \in (0,1]\). If

$$\begin{aligned} \lambda =a \quad \mathrm{and} \quad \nu = \mathrm {e}^{-b^{\frac{1}{1-a}}}, \end{aligned}$$
(6)

then \(m_{\nu }^{(\lambda )}(x) = w_{P}(x)\) for any \(x \in (0,1]\).

Proof

After direct calculation, we get

$$\begin{aligned} m_{\nu }^{(\lambda )}(x) = \left( \mathrm {e}^{- \left( -\ln (x) \right) ^{\lambda }} \right) ^{\left( -\ln (\nu ) \right) ^{1-\lambda }}. \end{aligned}$$
(7)

Next, by noting Eq. (6), we immediately get that \( m_{\nu }^{(\lambda )}(x) = w_{P}(x)\) for any \(x \in (0,1]\). \(\square \)

Remark 2

It is worth mentioning that in continuous-valued logic, the generator function \(f(x) = -\ln (x)\) induces the product conjunction operator that is also called the probabilistic t-norm (Klement et al. 2004; Yager et al. 2000).

It should be added that the parameters in Eq. (7) give a more natural characterization of the probability weighting function than those in Eq. (5). For example, Wakker (2010) suggests that the values \(a=0.65\) and \(b=1.0467\), which give an intersection with the diagonal at 0.32, are good parameter choices for gains. If we wish the function w in Eq. (7) to have its fixed point at 0.32, then we simply need to set its parameter \(\nu \) to 0.32. In prospect theory, the parameters a and b of the Prelec probability weighting function family are interpreted as the index of likelihood insensitivity and the index of pessimism, respectively (Wakker 2010). Thus, in terms of the parameters of the generated Prelec probability weighting function in Eq. (7), the index of likelihood insensitivity is \(\lambda \), and the index of pessimism is \(\left( -\ln (\nu ) \right) ^{1-\lambda }\).
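Proposition 3 admits a direct numerical check. The sketch below (Python) uses the values \(a=0.65\) and \(b=1.0467\) suggested for gains and confirms that the fixed point \(\nu \) computed from Eq. (6) is close to 0.32:

```python
import math

a, b = 0.65, 1.0467                         # values suggested for gains
lam = a
nu = math.exp(-b ** (1.0 / (1.0 - a)))      # Eq. (6)

f = lambda x: -math.log(x)                  # generator of Sect. 3.1
f_inv = lambda t: math.exp(-t)

def m(x):
    return f_inv(f(nu) * (f(x) / f(nu)) ** lam)          # Eq. (2)

def w_prelec(x):
    return math.exp(-((-math.log(x)) ** a)) ** b         # Eq. (5)

max_gap = max(abs(m(x) - w_prelec(x)) for x in [0.01, 0.1, 0.32, 0.5, 0.9, 0.99])
```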

3.2 The Ostaszewski, Green and Myerson probability weighting function family

The Ostaszewski, Green and Myerson probability weighting function family (Ostaszewski et al. 1998) \(w_{\textit{OGM}}\) is given by

$$\begin{aligned} w_{\textit{OGM}}(x) = \frac{b x^{a}}{b x^{a} + (1-x)^{a}}, \end{aligned}$$
(8)

where \(0<a<1\), \(b>0\) and \(x \in [0,1]\). Note that this probability weighting function family was introduced independently by Lattimore, Baker and Witte as well in 1992 (see Lattimore et al. 1992). Here, we will show that the Ostaszewski, Green and Myerson probability weighting function family can be generated by the modifier operator \(m_{\nu }^{(\lambda )}\).

Proposition 4

Let \(0<a<1\), \(b>0\) and let the generator function f be given by \(f(x) = \frac{1-x}{x}\), where \(x \in (0,1]\). If

$$\begin{aligned} \lambda =a \quad \mathrm{and} \quad \nu = \frac{1}{1+\left( \frac{1}{b}\right) ^{\frac{1}{1-a}}}, \end{aligned}$$
(9)

then \(m_{\nu }^{(\lambda )}(x) = w_{\textit{OGM}}(x)\) for any \(x \in (0,1]\).

Proof

By using Eq. (2) with \(f(x) = \frac{1-x}{x}\), where \(x \in (0,1]\), we get

$$\begin{aligned} m_{\nu }^{(\lambda )}(x) = \frac{1}{1+\frac{1-\nu }{\nu } \left( \frac{1-x}{x} \frac{\nu }{1-\nu }\right) ^{\lambda }}. \end{aligned}$$
(10)

Next, by taking into account Eq. (9), from Eq. (10) we can readily see that \(m_{\nu }^{(\lambda )}(x) = w_{\textit{OGM}}(x)\) for any \(x \in (0,1]\). \(\square \)

It should be mentioned here that in prospect theory, the parameters a and b of the Ostaszewski, Green and Myerson probability weighting function family can also be interpreted as the index of likelihood insensitivity and the index of pessimism, respectively (Wakker 2010). Hence, in terms of the parameters of the probability weighting function in Eq. (10), the index of likelihood insensitivity is \(\lambda \), and the index of pessimism is \(\left( \frac{\nu }{1-\nu }\right) ^{1-\lambda }\).
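Proposition 4 can be checked in the same way (Python sketch; the parameter values are arbitrary admissible choices for illustration):

```python
a, b = 0.6, 0.8                             # arbitrary admissible parameters
lam = a
nu = 1.0 / (1.0 + (1.0 / b) ** (1.0 / (1.0 - a)))        # Eq. (9)

f = lambda x: (1.0 - x) / x
f_inv = lambda t: 1.0 / (1.0 + t)

def m(x):
    return f_inv(f(nu) * (f(x) / f(nu)) ** lam)          # Eq. (10)

def w_ogm(x):
    return b * x ** a / (b * x ** a + (1.0 - x) ** a)    # Eq. (8)

max_gap = max(abs(m(x) - w_ogm(x)) for x in [0.01, 0.2, 0.5, 0.8, 0.99])
```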

It is worth mentioning that in continuous-valued logic, the Dombi conjunction and disjunction operators (Dombi 2008) are induced by the following generator function \(f_{\alpha }:(0,1) \rightarrow (0,\infty ]\):

$$\begin{aligned} f_{\alpha }(x) = \left( \frac{1-x}{x}\right) ^{\alpha }, \end{aligned}$$

where \(\alpha \ne 0\). This means that the generator function \(f(x) = \frac{1-x}{x}\) is a special case of the generator function of the Dombi conjunction and disjunction operators. It is also worth noting that the function in Eq. (10) is called the kappa function in Dombi and Jónás (2018, 2020).

It can be shown that the function \(m_{\nu }^{(\lambda )}\) in Eq. (10) is the solution of the differential equation

$$\begin{aligned} \frac{\mathrm {d} \kappa (x) }{\mathrm {d}x} = \lambda \frac{\kappa (x) \left( 1- \kappa (x)\right) }{x \left( 1-x \right) }, \end{aligned}$$

with the initial condition \(\kappa (\nu ) = \nu \), see Dombi (2012a) and Dombi and Jónás (2020). Thus, we have

$$\begin{aligned} \left. \frac{\mathrm {d} \kappa (x) }{\mathrm {d}x} \right| _{x = \nu } = \lambda . \end{aligned}$$

Therefore, the derivative of function \(m_{\nu }^{(\lambda )}\) in Eq. (10) at \(x=\nu \) is equal to the value of parameter \(\lambda \). This means that the parameters in Eq. (10) give a more natural characterization of the probability weighting function than those in Eq. (8).
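This slope property can be illustrated with a central finite difference (Python sketch; the step size is an arbitrary small value):

```python
nu, lam = 1.0 / 3.0, 0.65

def kappa(x):
    """m_nu^{(lambda)}(x) of Eq. (10), generator f(x) = (1 - x) / x."""
    t = ((1.0 - nu) / nu) * ((1.0 - x) / x * nu / (1.0 - nu)) ** lam
    return 1.0 / (1.0 + t)

h = 1e-6
slope_at_nu = (kappa(nu + h) - kappa(nu - h)) / (2.0 * h)   # approximately lambda
```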

Now, let t be the tangent line of function \(m_{\nu }^{(\lambda )}\) given by Eq. (10) at \(x=\nu \). Then, t is given by the equation \(t(x) = \lambda x + \nu (1-\lambda )\), \(x \in [0,1]\). In prospect theory, the neo-additive probability weighting function \(w_{neo}:[0,1] \rightarrow [0,1]\) is given by

$$\begin{aligned} w_{neo}(x) = b + ax, \end{aligned}$$

where \(a,b >0\) and \(a+b<1\) (see Wakker 2010). The parameters a and b of the neo-additive probability weighting function \(w_{neo}\) can be interpreted as follows (see Wakker 2010):

  • a is an index of likelihood sensitivity (‘curvature’ or ‘inverse S-shape’)

  • \(\frac{2b+a}{2}\) is an index of optimism (elevation).

Now, if \(w_{neo}(x) = t(x)\) for any \(x \in [0,1]\), then \(a = \lambda \) and \(b = \nu (1-\lambda )\). In this case, the index of sensitivity is \(\lambda \) and the index of optimism is \(\nu (1-\lambda )+\frac{\lambda }{2}\). Figure 2 shows a plot of function \(m_{\nu }^{(\lambda )}\) given by Eq. (10) and function t, and the geometric interpretation of the parameters \(a,b,\lambda \) and \(\nu \).

Fig. 2

Plot of \(m_{\nu }^{(\lambda )}\) given by Eq. (10) and its tangent line at \(x=\nu \)

It should be added here that the probability weighting function \(m_{\nu }^{(\lambda )}\), which coincides with the kappa function, can be fitted to empirical data by applying our kappa regression method described in Dombi and Jónás (2020).

3.3 An approximation to the Prelec probability weighting function family

In Dombi and Jónás (2018), we introduced the epsilon function.

Definition 3

The epsilon function \(\varepsilon ^{(\alpha )}_{d}:(-d,+d) \rightarrow [0,\infty ]\) is given by

$$\begin{aligned} \varepsilon ^{(\alpha )}_{d}(x) = \left( \frac{d+x}{d-x} \right) ^{\alpha \frac{d}{2}}, \end{aligned}$$
(11)

where \(\alpha \in \mathbb {R}\), \(\alpha \ne 0\), \(d \in \mathbb {R}\), \(d>0\), \(x \in (-d,+d)\).

If \(\alpha =-1\), then from Eq. (11) we obtain the function \(\varepsilon _d: (-d,+d) \rightarrow [0,\infty ]\) given by

$$\begin{aligned} \varepsilon _{d}(x) = \left( \frac{d+x}{d-x} \right) ^{-\frac{d}{2}}, \end{aligned}$$

where \(d \in \mathbb {R}\), \(d>0\). The next proposition states a key property of the epsilon function \(\varepsilon _d\).

Proposition 5

For any \(x \in (-d,+d)\)

$$\begin{aligned} \lim \limits _{d \rightarrow \infty } \varepsilon _{d}(x) = \mathrm {e}^{-x}. \end{aligned}$$

Proof

See the proof of Theorem 1 in Dombi and Jónás (2018). \(\square \)

Since the epsilon function \(\varepsilon _{d}\) is strictly decreasing, it follows from Proposition 5 that

$$\begin{aligned} \lim \limits _{d \rightarrow \infty } \varepsilon ^{-1}_{d}(x) = -\ln (x) \end{aligned}$$

holds for any \(x \in (0,\infty )\), where \(\varepsilon ^{-1}_{d}\) is the inverse function of \(\varepsilon _{d}\). The practical implication of Proposition 5 is that if the value of parameter d is sufficiently large, then \(\varepsilon _{d}(x) \approx \mathrm {e}^{-x}\) and \(\varepsilon ^{-1}_{d}(x) \approx -\ln (x)\) for any \(x \in (-d,+d)\) and for any \(x \in (0,\infty )\), respectively. Note that the approximation is quite good already for \(d = 10\).
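The quality of this approximation for \(d=10\) can be inspected directly (Python sketch; on the test points below the relative error stays under three per cent):

```python
import math

def epsilon(x, d):
    """epsilon_d(x) = ((d + x) / (d - x)) ** (-d / 2), for x in (-d, d)."""
    return ((d + x) / (d - x)) ** (-d / 2.0)

d = 10.0
rel_err = max(abs(epsilon(x, d) - math.exp(-x)) / math.exp(-x)
              for x in [-2.0, -0.5, 0.0, 0.5, 2.0])
```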

Now, let the generator function f be given by

$$\begin{aligned} f(x) = \varepsilon ^{-1}_{d}(x) = d \frac{x^{-\frac{2}{d}}-1}{x^{-\frac{2}{d}}+1}, \end{aligned}$$

where \(x \in (0,1]\) and \(d > 0\). Then the inverse function \(f^{-1}\) is

$$\begin{aligned} f^{-1}(x) = \varepsilon _{d}(x), \end{aligned}$$

and by making use of the modifier operator in Eq. (2) with \(\lambda >0\), we get the probability weighting function

$$\begin{aligned} w(x) = \left( \frac{1+q(x)}{1-q(x)}\right) ^{-\frac{d}{2}}, \end{aligned}$$

where

$$\begin{aligned} q(x) = \frac{\nu ^{-\frac{2}{d}}-1}{\nu ^{-\frac{2}{d}}+1} \left( \frac{x^{-\frac{2}{d}}-1}{x^{-\frac{2}{d}}+1} \frac{\nu ^{-\frac{2}{d}}+1}{\nu ^{-\frac{2}{d}}-1} \right) ^{\lambda }, \end{aligned}$$
(12)

\(\nu \in (0,1)\), \(\lambda >0\), \(d > 0\) and \(x \in (0,1]\). It immediately follows from Proposition 5 that for any \(x \in (0,1]\)

$$\begin{aligned} \lim \limits _{d \rightarrow \infty } w(x) = m_{\nu }^{(\lambda )}(x), \end{aligned}$$

where \(m_{\nu }^{(\lambda )}\) is given by Eq. (7), \(\nu \in (0,1)\) and \(\lambda >0\). Furthermore, if \(a = \lambda \) and \(b = \left( -\ln (\nu ) \right) ^{1-\lambda }\), then the asymptotic probability weighting function w is just the Prelec probability weighting function:

$$\begin{aligned} \lim \limits _{d \rightarrow \infty } w(x) = \left( \mathrm {e}^{- \left( -\ln (x) \right) ^{a}} \right) ^{b}. \end{aligned}$$
(13)
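This convergence can also be observed numerically. The sketch below (Python) evaluates the function w built from q(x) of Eq. (12) for a moderate and a large value of d and compares it with the Prelec form; the gap shrinks as d grows:

```python
import math

nu, lam = 0.32, 0.65
a, b = lam, (-math.log(nu)) ** (1.0 - lam)     # Prelec parameters as in the text

def w_eps(x, d):
    """w of Sect. 3.3: Eq. (2) with the generator f = epsilon_d^{-1}."""
    f = lambda u: d * (u ** (-2.0 / d) - 1.0) / (u ** (-2.0 / d) + 1.0)
    q = (f(nu) / d) * ((f(x) / d) * (d / f(nu))) ** lam   # q(x) of Eq. (12)
    return ((1.0 + q) / (1.0 - q)) ** (-d / 2.0)

def w_prelec(x):
    return math.exp(-b * (-math.log(x)) ** a)             # Eq. (5)

pts = [0.05, 0.32, 0.7]
gap_small_d = max(abs(w_eps(x, 10.0) - w_prelec(x)) for x in pts)
gap_large_d = max(abs(w_eps(x, 1e4) - w_prelec(x)) for x in pts)
```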

3.4 Dual probability weighting functions

We will use the concept of the dual generator function.

Definition 4

The dual of the generator function \(f:[0,1] \rightarrow [0,\infty ]\) is the function \(\hat{f}: [0,1] \rightarrow [0,\infty ]\), which is given by

$$\begin{aligned} \hat{f}(x) = f(1-x) \end{aligned}$$
(14)

for any \(x \in [0,1]\).

Obviously, \(\hat{f}\) is a generator function as well. We will also make use of the concept of the dual modifier operator.

Definition 5

The dual modifier operator \(\hat{m}_{\nu ,\nu _0}^{(\lambda )}:[0,1] \rightarrow [0,1]\) of the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\) is given by

$$\begin{aligned} \hat{m}_{\nu ,\nu _0}^{(\lambda )}(x) = \hat{f}^{-1}\left( \hat{f}(\nu _0) \left( \frac{\hat{f}(x)}{\hat{f}(\nu )}\right) ^{\lambda }\right) , \end{aligned}$$
(15)

where \(\hat{f}:[0,1] \rightarrow [0,\infty ]\) is the dual of the generator function f, \(\nu , \nu _0 \in (0,1)\) and \(\lambda \in \mathbb {R}\).

The generator functions f and \(\hat{f}\) are said to be a dual pair of generator functions, and the corresponding modifier operators \(m_{\nu ,\nu _0}^{(\lambda )}\) and \(\hat{m}_{\nu ,\nu _0}^{(\lambda )}\) are said to be the dual pair of the modifier operators induced by f and \(\hat{f}\), respectively. The following corollary allows us to generate additional probability weighting functions by using the dual generator functions.

Corollary 1

If \(\lambda >0\), then the dual modifier operator \(\hat{m}_{\nu ,\nu _0}^{(\lambda )}\) of the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\) satisfies the requirements for a probability weighting function.

Proof

The corollary immediately follows from Definition 5 and Proposition 1. \(\square \)

Here, we will utilize the dual modifier operator \(\hat{m}_{\nu ,\nu _0}^{(\lambda )}\) with the parameter settings \(\nu _0 = \nu \). That is, the generated dual probability weighting function \(\hat{w}\) will always have the form

$$\begin{aligned} \hat{w}(x) = \hat{m}_{\nu }^{(\lambda )}(x), \end{aligned}$$

where

$$\begin{aligned} \hat{m}_{\nu }^{(\lambda )}(x) = \hat{f}^{-1}\left( \hat{f}(\nu ) \left( \frac{\hat{f}(x)}{\hat{f}(\nu )}\right) ^{\lambda }\right) , \end{aligned}$$

\(\nu \in (0,1)\) and \(\lambda >0\).
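As an example, taking \(f(x) = -\ln (x)\) from Sect. 3.1 gives the dual generator \(\hat{f}(x) = -\ln (1-x)\). The sketch below (Python) evaluates the corresponding dual probability weighting function and checks that it is increasing with fixed point \(\nu \):

```python
import math

nu, lam = 0.32, 0.65

f_hat = lambda x: -math.log(1.0 - x)           # dual of f(x) = -ln(x), Eq. (14)
f_hat_inv = lambda t: 1.0 - math.exp(-t)

def w_hat(x):
    """Dual probability weighting function, Eq. (15) with nu0 = nu."""
    return f_hat_inv(f_hat(nu) * (f_hat(x) / f_hat(nu)) ** lam)

fixed = w_hat(nu)                              # the dual operator also fixes nu
xs = [0.1, 0.3, 0.5, 0.7, 0.9]
increasing = all(w_hat(u) < w_hat(v) for u, v in zip(xs, xs[1:]))
```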

Table 1 summarizes the probability weighting functions induced by the generator functions presented in Sect. 3.1, 3.2 and 3.3 and the probability weighting functions induced by the corresponding dual generator functions.

Table 1 Generated probability weighting functions

Note that we have the requirement \(\nu \in (0,1)\) and \(\lambda >0\) for all the probability weighting functions in Table 1. Moreover, we have the requirement \(d> 0\) for the functions \(f_3\), \(\hat{f}_3\), \(w_3\) and \(\hat{w}_3\). The expressions for q(x) and r(x) in Table 1 are given by Eq. (12) and by

$$\begin{aligned} r(x) = \frac{(1-\nu )^{-\frac{2}{d}}-1}{(1-\nu )^{-\frac{2}{d}}+1} \left( \frac{(1-x)^{-\frac{2}{d}}-1}{(1-x)^{-\frac{2}{d}}+1} \frac{(1-\nu )^{-\frac{2}{d}}+1}{(1-\nu )^{-\frac{2}{d}}-1} \right) ^{\lambda }, \end{aligned}$$

respectively, where \(x \in [0,1)\).

Remark 3

Also, notice that \(w_{2}\) and \(\hat{w}_2\) are in fact identical. However, a deeper study of the pliant logic system (Dombi 2012b) is needed to answer the question of when a probability weighting function induced by a generator function f coincides with the probability weighting function induced by the dual of the generator function f.

Due to its simplicity and its useful properties, we recommend the use of the \(w_2\) function as a general model of probability weighting functions.

Figure 3 shows sample plots of the probability weighting functions that are listed in Table 1. These plots tell us that the probability weighting function induced by the inverse of the epsilon function (\(w_3\)) and by its dual function (\(\hat{w}_3\)) almost coincide with the Prelec probability weighting function (\(w_1\)) and its dual function (\(\hat{w}_1\)), respectively. These observations are in line with the finding given in Eq. (13); that is, \(w_3\) and \(w_1\) are asymptotically identical as the value of parameter d tends to infinity. Similarly, \(\hat{w}_3\) is asymptotically identical to \(\hat{w}_1\) when d tends to infinity.

Fig. 3

Sample plots of the probability weighting functions in Table 1 with the parameter values \(\nu =1/3\) and \(\lambda =0.65\)

3.5 Additional probability weighting functions induced from transformed generator functions

Suppose that we have the generator function \(f:[0,1] \rightarrow [0,\infty ]\) and the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}:[0,1] \rightarrow [0,1]\) given in Eq. (1), \(\nu , \nu _0 \in (0,1)\) and \(\lambda >0\). We have shown (see Proposition 1) that \(m_{\nu , \nu _0}^{(\lambda )}\) is a probability weighting function induced by the function f. The following proposition allows us to generate additional probability weighting functions from transformed generator functions by making use of the modifier operator \(m_{\nu , \nu _0}^{(\lambda )}\).

Proposition 6

Let f be the generator function of the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\) given in Definition 1. Furthermore, let \(g:[0,1] \rightarrow [0,1]\) and \(h:[0,\infty ] \rightarrow [0,\infty ]\) be two strictly monotonic functions. Then, the function \(\widetilde{f}: [0,1] \rightarrow [0,\infty ]\), which is given by

$$\begin{aligned} \widetilde{f} = h \circ (f \circ g) \end{aligned}$$
(16)

is also a generator function. If \(\lambda >0\), then the modifier operator \(\widetilde{m}_{\nu ,\nu _0}^{(\lambda )}:[0,1] \rightarrow [0,1]\) that is given by

$$\begin{aligned} \widetilde{m}_{\nu ,\nu _0}^{(\lambda )}(x) = \widetilde{f}^{-1}\left( \widetilde{f}(\nu _0) \left( \frac{\widetilde{f}(x)}{\widetilde{f}(\nu )}\right) ^{\lambda }\right) \end{aligned}$$

is a probability weighting function.

Proof

If f is a generator function, then f is strictly monotonic in [0, 1]. Since \(g:[0,1] \rightarrow [0,1]\) and \(h:[0,\infty ] \rightarrow [0,\infty ]\) are two strictly monotonic functions, \(\widetilde{f} = h \circ (f \circ g)\) is a strictly monotonic function and \(\widetilde{f}:[0,1] \rightarrow [0, \infty ]\). Therefore, the function \(\widetilde{f}\) is a generator function as well, and by noting Proposition 1, we see that \(\widetilde{m}_{\nu ,\nu _0}^{(\lambda )}\) is a probability weighting function. \(\square \)

Obviously, if both g and h are the identity functions, then the generator function \(\widetilde{f} = h \circ (f \circ g)\) is identical to the generator function f and the modifier operator \(\widetilde{m}_{\nu ,\nu _0}\) induced by \(\widetilde{f}\) coincides with the modifier operator \(m_{\nu ,\nu _0}\) given in Eq. (1).

Also notice that if \(g:[0,1] \rightarrow [0,1]\) and \(h:[0,\infty ] \rightarrow [0,\infty ]\) are given by

$$\begin{aligned} g(x) = 1-x \quad \text {and} \quad h(x) = x, \end{aligned}$$

then the generator function \(\widetilde{f} = h \circ (f \circ g)\) is identical to the dual generator function \(\hat{f}\) given in Eq. (14) and the modifier operator \(\widetilde{m}_{\nu ,\nu _0}\) induced by \(\widetilde{f}\) coincides with the dual modifier operator \(\hat{m}_{\nu ,\nu _0}\) given in Eq. (15).

Now, for example, let \(g:[0,1] \rightarrow [0,1]\) and \(h:[0,\infty ] \rightarrow [0,\infty ]\) be given by

$$\begin{aligned} g(x) = x^{\beta } \quad \text {and} \quad h(x) = \ln (1+\gamma x^{\alpha }), \end{aligned}$$

where \(\alpha \in \mathbb {R}\), \(\alpha \ne 0\), \(\beta \in \mathbb {R}\), \(\beta \ne 0\) and \(\gamma \in (0,\infty )\). Then, the generator function \(\widetilde{f} = h \circ (f \circ g)\) is \(\widetilde{f}:[0,1] \rightarrow [0,\infty ]\), which is given by

$$\begin{aligned} \widetilde{f}(x) = \ln \left( 1 + \gamma f^{\alpha }(x^{\beta }) \right) . \end{aligned}$$
(17)

In this case, the inverse function of \(\widetilde{f}\) is \(\widetilde{f}^{-1}:[0,\infty ] \rightarrow [0,1]\), which is given by

$$\begin{aligned} \widetilde{f}^{-1}(x) = \left( f^{-1} \left( \left( \frac{1}{\gamma } \left( \mathrm {e}^{x} - 1 \right) \right) ^{\frac{1}{\alpha }}\right) \right) ^{\frac{1}{\beta }}. \end{aligned}$$

Notice that a wide range of probability weighting functions can be induced from the generator function \(\widetilde{f}\) by utilizing the modifier operator \(m_{\nu ,\nu _0}\) given in Eq. (1). It is worth mentioning that in continuous-valued logic, a special form of the transformation given in Eq. (17) is used to generate the generalized Dombi operators (Dombi 2008).

3.6 Generating strictly convex (concave) probability weighting functions

By noting the result that the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\) is a probability weighting function (see Proposition 1) and the fact that \(m_{\nu ,\nu _0}^{(\lambda )}(\nu ) = \nu _0\), we have that

  1. (1)

    \(m_{\nu ,\nu _0}^{(\lambda )}\) is strictly convex if \(\nu > \nu _0\); and

  2. (2)

    \(m_{\nu ,\nu _0}^{(\lambda )}\) is strictly concave if \(\nu < \nu _0\),

where \(\nu , \nu _0 \in (0,1)\) and \(\lambda = 1\). This key property of the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\) allows us to generate strictly convex or strictly concave probability weighting functions.
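This property can be illustrated with a numerical second-difference test (Python sketch; the generator \(f(x) = \frac{1-x}{x}\) and the parameter values are assumptions for illustration):

```python
f = lambda x: (1.0 - x) / x
f_inv = lambda t: 1.0 / (1.0 + t)

def m(x, nu, nu0):
    """m_{nu,nu0}^{(1)}(x): the operator of Eq. (1) with lambda = 1."""
    return f_inv(f(nu0) * f(x) / f(nu))

grid = [i / 20.0 for i in range(1, 20)]        # interior points of (0, 1)

def second_diffs(nu, nu0, h=1e-4):
    return [m(x + h, nu, nu0) - 2.0 * m(x, nu, nu0) + m(x - h, nu, nu0)
            for x in grid]

convex = all(d > 0 for d in second_diffs(0.6, 0.3))    # nu > nu0: strictly convex
concave = all(d < 0 for d in second_diffs(0.3, 0.6))   # nu < nu0: strictly concave
```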

3.7 Fitting generated probability weighting functions to empirical data

Suppose that the probability weighting function \(w:(0,1) \rightarrow (0,1)\) is induced from the generator function \(f:(0,1) \rightarrow (0,\infty ]\) by the modifier operator \(m_{\nu }^{(\lambda )}\) given in Eq. (2), where \(\nu \in (0,1)\) and \(\lambda >0\). That is, we have \(w(x) = m_{\nu }^{(\lambda )}(x)\) for any \(x \in (0,1)\). Now, we will show how the probability weighting function w can be fitted to empirical data. Let \(y=w(x)\). Then we also have \(f(y) = f(w(x))\), and since both sides of this equation are positive, taking the logarithm of both sides gives

$$\begin{aligned} \ln \left( f(y) \right) = \ln \left( f(\nu ) \right) + \lambda \ln \left( f(x) \right) - \lambda \ln \left( f(\nu ) \right) . \end{aligned}$$
(18)

The last equation can be written in the form

$$\begin{aligned} Y = A X + B, \end{aligned}$$
(19)

where \(X = \ln \left( f(x) \right) \), \(Y = \ln \left( f(y) \right) \), \(A= \lambda \) and \(B = \ln \left( f(\nu ) \right) (1-\lambda )\). Hence, the values of parameters A and B can be obtained by applying a linear regression. Once we have the estimated values of \(\hat{A}\) and \(\hat{B}\) for the parameters A and B, respectively, the estimates \(\hat{\lambda }\) and \(\hat{\nu }\) of the parameters \(\lambda \) and \(\nu \) are

$$\begin{aligned} \hat{\lambda }&= \hat{A} \nonumber \\ \hat{\nu }&= f^{-1} \left( \mathrm {e}^{\frac{\hat{B}}{1-\hat{A}}}\right) , \end{aligned}$$
(20)

respectively.

Suppose that we have the probability weighting function \(w(x) = m_{\nu }^{(\lambda )}(x)\) for any \(x \in (0,1)\), where \(m_{\nu }^{(\lambda )}\) is given by Eq. (2) with a fixed generator function f, and with unknown parameters \(\nu \in (0,1)\) and \(\lambda >0\). Furthermore, suppose that we have the observation pairs \((x_i,y_i)\), where \(y_i\) is the empirical value of the probability weighting function w at \(x_i\), \(i=1,2, \ldots , n\) and \(n \ge 2\). Then, following the line of thinking presented above, the unknown parameters \(\lambda \) and \(\nu \) of the function w can be estimated by fitting the linear regression model \(Y = A X + B\) to the transformed data pairs \((X_i,Y_i)\), where

$$\begin{aligned} X_i&= \ln \left( f(x_i) \right) \\ Y_i&= \ln \left( f(y_i) \right) , \end{aligned}$$

and then applying Eq. (20) with the estimates \(\hat{A}\) and \(\hat{B}\) of the parameters A and B, respectively.
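The regression-based estimation procedure above can be sketched in a few lines of code. This is a minimal illustration, assuming the generator \(f(x) = \frac{1-x}{x}\) (used in the demonstrative example of this paper); the observation pairs here are synthetic, generated exactly from a known \(w = m_{\nu }^{(\lambda )}\), so the estimator should recover the true parameters.

```python
import numpy as np

# Illustrative generator f(x) = (1 - x) / x and its inverse.
def f(x):
    return (1.0 - x) / x

def f_inv(u):
    return 1.0 / (1.0 + u)

def fit_weighting(xs, ys):
    """Estimate (lambda, nu) from pairs (x_i, y_i) via Eqs. (18)-(20)."""
    X = np.log(f(np.asarray(xs)))
    Y = np.log(f(np.asarray(ys)))
    A, B = np.polyfit(X, Y, 1)                  # fit Y = A X + B
    lam_hat = A                                 # lambda-hat = A-hat
    nu_hat = f_inv(np.exp(B / (1.0 - A)))       # nu-hat = f^{-1}(e^{B/(1-A)})
    return lam_hat, nu_hat

# Synthetic check: data generated exactly from w = m_nu^(lambda)
# with lambda = 0.7 and nu = 0.3 should be recovered.
lam_true, nu_true = 0.7, 0.3
xs = np.linspace(0.05, 0.95, 9)
ys = f_inv(f(nu_true) * (f(xs) / f(nu_true)) ** lam_true)
lam_hat, nu_hat = fit_weighting(xs, ys)
print(round(lam_hat, 3), round(nu_hat, 3))     # 0.7 0.3
```

For noisy empirical data the recovery is of course only approximate, which is why the resulting estimates serve as initial values for the maximum likelihood refinement described next.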

3.7.1 A demonstrative example

In a survey, 1000 participants were asked in 9 runs whether, in their opinion, an uncertain event would happen or not. The known likelihood \(x_i\) of the event was different in each run, where i is the run index. The survey results are shown in Table 2. In this table, column \(k_i\) contains the number of survey participants who thought that the event would happen, while column \(n_i-k_i\) indicates the number of those who believed that the event would not happen (\(n_i=1000\) is the number of survey participants, \(i=1,2, \ldots , 9\)). The \(y_i = \frac{k_i}{n_i}\) values are the estimates of the probability perceived by the survey participants when the actual known probability is \(x_i\).

Table 2 Empirical data

Let x denote the known probability, and let y be the perceived value of the probability x. In Table 2, the \((x_i,y_i)\) pairs are observations on the \((x,y)\) pair. Here, we model the relationship between x and y by using the probability weighting function w that is induced from the generator function \(f(x) = \frac{1-x}{x}\) by the modifier operator in Eq. (2), where \(x \in (0,1]\). That is, w(x) has the form of \(m_{\nu }^{(\lambda )}\) given by Eq. (10), where \(\nu \in (0,1)\), \(\lambda >0\) and \(x \in (0,1]\). Now, by using Eqs. (18) and (19) and the transformed values \(\ln \left( f(x_i) \right) \) and \(\ln \left( f(y_i) \right) \) in Table 2, we can estimate the model parameters A and B in Eq. (19) by applying linear regression. The estimated parameter values \(\hat{A}\) and \(\hat{B}\) are

$$\begin{aligned} \hat{A} = 0.6972 \quad \text {and} \quad \hat{B} = 0.2572, \end{aligned}$$

from which, using Eq. (20), we get the following estimates \(\hat{\lambda }_0\) and \(\hat{\nu }_0\) of the parameters \(\lambda \) and \(\nu \), respectively:

$$\begin{aligned} \hat{\lambda }_0 = 0.6972 \quad \text {and} \quad \hat{\nu }_0 = 0.2996. \end{aligned}$$

Although these parameter estimates are not necessarily optimal, they can be used as initial values for the numerical maximum likelihood estimation of the parameters. We demonstrate this below.

Let the random variable \(\xi \) be the indicator of the studied event; that is,

$$\begin{aligned} \xi = {\left\{ \begin{array}{ll} 1, &{} \text {if the event happens} \\ 0, &{} \text {if the event does not happen}. \end{array}\right. } \end{aligned}$$

Then \(P_p(\xi =1 \vert x)\) denotes the perceived probability of the studied event given that its probability is equal to x. Here, we have 9000 observations (summarized in Table 2) on the \((x,\xi )\) pair; that is, we have the sample \((x^{*}_{1}, \xi _{1}), (x^{*}_{2},\xi _{2}), \ldots , (x^{*}_{n}, \xi _{n})\), where \(x^{*}_{j} \in \lbrace x_1, x_2, \ldots , x_9 \rbrace \), \(\xi _{j} \in \lbrace 0,1 \rbrace \), \(j =1, 2, \ldots , n\) and \(n=9000\). We wish to model the perceived conditional probability \(P_p(\xi =1 \vert x)\) by the probability weighting function given in Eq. (10). The estimates of the model parameters \(\lambda \) and \(\nu \) can be obtained by maximizing the likelihood function

$$\begin{aligned} L(\lambda , \nu )&= \prod _{j=1}^{n} P_p(\xi =\xi _{j} \vert x^{*}_{j}) \nonumber \\&\quad = \prod _{j=1}^{n} w^{\xi _{j}}(x^{*}_{j}; \nu , \lambda ) \left( 1- w(x^{*}_{j}; \nu , \lambda ) \right) ^{1-\xi _{j}}, \end{aligned}$$
(21)

where \(w(x^{*}_{j}; \nu , \lambda ) = w(x^{*}_j)\), \(j=1,2, \ldots , n\). By using the data in Table 2, the log-likelihood function obtained from Eq. (21) can be written as

$$\begin{aligned} l(\lambda , \nu )&= \sum _{i=1}^{9} k_i \ln \left( w(x_{i}; \nu , \lambda ) \right) \nonumber \\&\quad + \sum _{i=1}^{9} (n_i - k_i)\ln \left( 1-w(x_{i}; \nu , \lambda ) \right) . \end{aligned}$$
(22)

The maximum of the log-likelihood function in Eq. (22) can be determined by applying gradient descent methods to the negative log-likelihood function (see Dombi and Jónás 2020). In the optimization procedure, the initial values of the parameters \(\lambda \) and \(\nu \) can be set to those determined by the linear regression above; that is, \(\lambda = \hat{\lambda }_0\) and \(\nu = \hat{\nu }_0\). This choice of initial values speeds up the convergence of the optimization method. Following this procedure, the maximum likelihood parameter estimates are

$$\begin{aligned} \hat{\lambda } = 0.6535 \quad \text {and} \quad \hat{\nu } = 0.3085. \end{aligned}$$
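The maximum likelihood refinement can be sketched as follows. This is an illustration only: it again assumes the generator \(f(x) = \frac{1-x}{x}\), the grouped counts \((x_i, k_i, n_i)\) are hypothetical (not the Table 2 data), and for simplicity it uses a derivative-free Nelder-Mead search on the negative log-likelihood instead of the gradient descent mentioned in the text.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative generator f(x) = (1 - x) / x and its inverse.
def f(x):
    return (1.0 - x) / x

def f_inv(u):
    return 1.0 / (1.0 + u)

def w(x, nu, lam):
    """Probability weighting function w(x) = m_nu^(lambda)(x)."""
    return f_inv(f(nu) * (f(x) / f(nu)) ** lam)

def neg_log_likelihood(params, xs, ks, ns):
    """Negative of the grouped log-likelihood l(lambda, nu) in Eq. (22)."""
    lam, nu = params
    if lam <= 0 or not (0.0 < nu < 1.0):
        return np.inf                  # keep the search inside the domain
    p = w(xs, nu, lam)
    return -np.sum(ks * np.log(p) + (ns - ks) * np.log(1.0 - p))

# Hypothetical grouped survey counts (x_i, k_i, n_i); NOT the paper's Table 2.
xs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
ns = np.full(5, 1000)
ks = np.array([180, 350, 470, 620, 850])

# Initial values taken from the linear-regression estimates, as suggested above.
res = minimize(neg_log_likelihood, x0=[0.7, 0.3], args=(xs, ks, ns),
               method='Nelder-Mead')
lam_hat, nu_hat = res.x
print(round(lam_hat, 4), round(nu_hat, 4))
```

With real data, one would replace the hypothetical counts with the observed \((x_i, k_i, n_i)\) triples and the starting point with \((\hat{\lambda }_0, \hat{\nu }_0)\) obtained from the regression step.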

Figure 4 shows the plots of the probability weighting functions fitted to the empirical data.

Fig. 4
figure 4

Fitting probability weighting functions to empirical data

4 Summary and future plans

In this study, we presented a novel methodology that can be utilized to generate parametric probability weighting functions by making use of the Dombi modifier operator of continuous-valued logic. The key findings of this paper can be summarized as follows:

  (1) We showed that the modifier operator \(m_{\nu ,\nu _0}^{(\lambda )}\) given in Eq. (1) is a probability weighting function.

  (2) We demonstrated that the application of the modifier operator \(m_{\nu }^{(\lambda )}\) [see Eq. (2)] can be interpreted as a general approach for generating probability weighting functions, including the well-known ones.

  (3) We pointed out that the Prelec probability weighting function family can be induced from the generator function \(f(x) = - \ln (x)\) by applying the modifier operator \(m_{\nu }^{(\lambda )}\). Also, the Ostaszewski, Green and Myerson (Lattimore, Baker and Witte) probability weighting function family may be generated from the generator function \(f(x) = \frac{1-x}{x}\) by using the modifier operator \(m_{\nu }^{(\lambda )}\).

  (4) In previous papers of ours (see Dombi and Jónás 2018; Dombi et al. 2018), we introduced the epsilon function that can be used to approximate the exponential function. Here, we showed that the asymptotic probability weighting function induced from the inverse of the epsilon function by using the modifier operator is none other than the Prelec probability weighting function.

  (5) We also showed that even more probability weighting functions can be generated from the so-called dual generator functions and from transformed generator functions.

  (6) Finally, we discussed how the modifier operator can be used to generate strictly convex (or concave) probability weighting functions and introduced a method for fitting generated probability weighting functions to empirical data.

In the future, we would like to develop new numerical methods for fitting the generated probability weighting functions, which are listed in Table 1, to empirical data. Since the generic form of the modifier operator given in Eq. (1) can produce strictly convex (concave) functions, we will investigate how it can be used in probabilistic sophistication.