1 Introduction

Dombi et al. [3] introduced the omega function and the omega probability distribution founded on this function. This probability distribution can be used in reliability engineering to model both monotonic and bathtub-shaped hazard rates. Furthermore, special cases of the omega probability distribution can be applied in various areas of science, including engineering, economics and the business sciences (see [1, 9]). In this study, we present a generalization of the omega function and point out some of its applications. The omega function is defined as follows (see Definition 1 in [3]):

Definition 1.1

The omega function \(\omega ^{(\alpha ,\beta )}_{d}(x)\) is given by

$$\begin{aligned} \omega ^{(\alpha ,\beta )}_{d}(x) = \left( \frac{d^{\beta }+x^{\beta }}{d^{\beta }-x^{\beta }} \right) ^{\frac{\alpha d^{\beta }}{2}}, \end{aligned}$$
(1.1)

where \(\alpha ,\beta ,d \in \mathbb {R}\), \(\beta ,d>0\), \(x \in (0,d)\).

The following result was also proven in [3]:

Theorem 1.2

For any \(x \in (0,d)\) and \(\beta >0\),

$$\begin{aligned} \lim _{d \rightarrow \infty } \omega ^{(\alpha ,\beta )}_{d}(x) = \mathrm {e}^{\alpha x^{\beta }}. \end{aligned}$$

Exploiting Definition 1.1 and Theorem 1.2, we see that the cumulative distribution function (CDF) of the omega probability distribution, which is given by

$$\begin{aligned} F^{(\alpha ,\beta )}_{d}(x) = {\left\{ \begin{array}{ll} 0, &{} \text {if }x \le 0 \\ 1 - \omega ^{(-\alpha ,\beta )}_{d}(x), &{} \text {if }0< x <d \\ 1, &{} \text {if }x \ge d, \end{array}\right. } \end{aligned}$$

where \(\alpha , \beta , d>0\), may be treated as an alternative to the Weibull CDF (see [3]). Later, Okorie and Nadarajah presented more properties of the omega probability distribution [6]. It is worth mentioning that the omega function in Eq. (1.1) with \(\beta =1\) is the so-called epsilon function that was introduced by Dombi et al. [2]. Using this function, the CDF of the exponential probability distribution can be approximated quite efficiently (see [2]).
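As a quick numerical illustration of Theorem 1.2 (our own sketch, not part of [3]), the following Python snippet compares the omega CDF with the Weibull CDF \(1-\mathrm {e}^{-\alpha x^{\beta }}\) for a moderate value of d; the parameter values and the function name `omega` are ours, and NumPy is assumed to be available.

```python
import numpy as np

def omega(x, alpha, beta, d):
    """Omega function of Eq. (1.1), valid for 0 < x < d."""
    return ((d**beta + x**beta) / (d**beta - x**beta)) ** (alpha * d**beta / 2.0)

alpha, beta, d = 0.5, 1.5, 50.0           # illustrative values
x = np.linspace(0.01, 5.0, 200)           # stay well inside (0, d)

omega_cdf = 1.0 - omega(x, -alpha, beta, d)     # omega CDF on (0, d)
weibull_cdf = 1.0 - np.exp(-alpha * x**beta)    # Weibull CDF

print(np.max(np.abs(omega_cdf - weibull_cdf)))  # small already for d = 50
```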

We will use the common notation \(\mathbb {R}\) for the real line and \(\overline{\mathbb {R}}\) for the extended real line, i.e., \(\overline{\mathbb {R}} = [-\infty , \infty ]\). Also, \(\overline{\mathbb {R}}_{+}\) will denote the non-negative extended real line, i.e., \(\overline{\mathbb {R}}_{+}=[0,\infty ]\). We will consider the arithmetic operations on the extended real line according to Klement et al. [5] and Grabisch et al. [4]. The notation \({\mathcal {D}}_f\) will stand for the domain of function f. Here, we utilize the following classes of functions.

Definition 1.3

Let \({\mathcal {G}}\) be the set of all functions \(g:[0,1] \rightarrow \overline{\mathbb {R}}_{+}\) that are

(a) either strictly increasing with \(g(0)=0\), or strictly decreasing with \(g(1)=0\);

(b) differentiable on (0, 1) with \(g^{\prime }\left( \frac{1}{2} \right) \ne 0\), where \(g^{\prime }\) denotes the first derivative of g, and \(g^{\prime }\) is continuous on (0, 1).

If \(g \in {\mathcal {G}}\) is strictly increasing and \(\lim _{x \rightarrow 1} g(x) = \infty \), then \(g(1)=\infty \) is understood as this limit. Similarly, if \(g \in {\mathcal {G}}\) is strictly decreasing and \(\lim _{x \rightarrow 0} g(x) = \infty \), then \(g(0)=\infty \) is understood as this limit.

Definition 1.4

Let \({\mathcal {H}}\) be the set of all functions h with domain \({\mathcal {D}}_h= \overline{\mathbb {R}}\) or \({\mathcal {D}}_h= \overline{\mathbb {R}}_{+}\) that satisfy the following requirements:

(a) For any \(d>0\), \(h(d) \ne 0\) and \(\left| h(d) \right| < \infty \), i.e., h(d) is finite.

(b) \(h(0)=0\) and either \(\lim _{x \rightarrow \infty } h(x) = \infty \) or \(\lim _{x \rightarrow \infty } h(x) = -\infty .\)

(c) If \({\mathcal {D}}_h = \overline{\mathbb {R}}\), then h is differentiable on \(\mathbb {R}\) and exactly one of the following cases holds:

  • h is strictly decreasing on \([-d,d],\)

  • h is strictly decreasing on \([-d,0)\) and h is strictly increasing on [0, d], 

  • h is strictly increasing on \([-d,d],\)

  • h is strictly increasing on \([-d,0)\) and h is strictly decreasing on [0, d].

(d) If \({\mathcal {D}}_h = \overline{\mathbb {R}}_{+}\), then h is differentiable on \([0,\infty )\) and either h is strictly decreasing on [0, d] or h is strictly increasing on [0, d].

2 The Generalized Omega Function

Now, we introduce the so-called generalized omega function, which we can utilize to approximate \(\mathrm {e}^{\lambda h(\cdot )}\)-like functions.

Definition 2.1

Let \(g \in {\mathcal {G}}\), \(h \in {\mathcal {H}}\), \(\lambda \in \mathbb {R} {\setminus } \lbrace 0 \rbrace \) and \(d>0\). We say that the function \(\omega ^{(\lambda )}_{d,g,h}:{\mathcal {D}}_h \cap [-d,d] \rightarrow \overline{\mathbb {R}}_{+}\), which is given by

$$\begin{aligned} \omega ^{(\lambda )}_{d,g,h}(x) = \left( \frac{g\left( \frac{h(x)+h(d)}{2h(d)}\right) }{g\left( \frac{1}{2} \right) } \right) ^{2 \lambda h(d) \frac{g\left( \frac{1}{2}\right) }{g^{\prime }\left( \frac{1}{2}\right) }}, \end{aligned}$$
(2.1)

is a generalized omega function (GOF) with the parameters \(\lambda \) and d induced by the generator function g with the core function h.

Later, we will show that the omega function, which was first introduced by Dombi et al. [3], is just a special case of the function \(\omega ^{(\lambda )}_{d,g,h}\). That is, \(\omega ^{(\lambda )}_{d,g,h}\) may be viewed as a generalization of the omega function.
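To make Definition 2.1 concrete, here is a minimal Python sketch of Eq. (2.1) (ours, not part of the paper); the generator g, the value \(g^{\prime }(1/2)\) and the core function h are passed in explicitly, the helper name `gof` is ours, and no domain checking is performed. The usage example recovers the omega function of Eq. (1.1) from the generator \(g(x)=\frac{1-x}{x}\) and the core function \(h(x)=x^{\beta }\).

```python
import numpy as np

def gof(x, d, lam, g, dg_half, h):
    """Generalized omega function of Eq. (2.1).

    g       -- generator function on [0, 1]
    dg_half -- the value g'(1/2), assumed nonzero
    h       -- core function with h(0) = 0 and h(d) != 0
    """
    hd = h(d)
    base = g((h(x) + hd) / (2.0 * hd)) / g(0.5)
    exponent = 2.0 * lam * hd * g(0.5) / dg_half
    return base ** exponent

# Recovering the omega function of Eq. (1.1) with alpha = lam and beta = 2:
g = lambda u: (1.0 - u) / u               # g(1/2) = 1, g'(1/2) = -4
h = lambda x: x**2
d, lam, x = 10.0, 1.0, 3.0
print(gof(x, d, lam, g, -4.0, h))
print(((d**2 + x**2) / (d**2 - x**2)) ** (lam * d**2 / 2.0))   # same value
```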

Since the GOF is a generator function-based mapping, it is natural to ask whether, for a given core function h, two different generators can induce the same GOF. The following proposition gives a sufficient condition for the equality of two generalized omega functions induced by two generator functions with the same core function.

Proposition 2.2

For any \(\alpha >0\) and \(\beta \in \mathbb {R} {\setminus } \lbrace 0 \rbrace \), the GOF is invariant under the transformation

$$\begin{aligned} t(x) = \alpha x^{\beta }, \quad x \in \overline{\mathbb {R}}_{+} \end{aligned}$$
(2.2)

applied to its generator function.

Proof

Let \(\lambda \in \mathbb {R} {\setminus } \lbrace 0 \rbrace \), \(d>0\) and let the GOF \(\omega ^{(\lambda )}_{d,g,h}\) be induced by the generator function \(g \in {\mathcal {G}}\) with the core function \(h \in {\mathcal {H}}\). Furthermore, let \(\alpha >0\) and \(\beta \in \mathbb {R} {\setminus } \lbrace 0 \rbrace \). Now, let the function \(t:\overline{\mathbb {R}}_{+} \rightarrow \overline{\mathbb {R}}_{+}\) be given by Eq. (2.2), and let \(f(x) = t(g(x))\) for any \(x \in [0,1]\). Then, \(f \in {\mathcal {G}}\) and for any \(x \in {\mathcal {D}}_h \cap [-d,d]\), the GOF induced by f with the core function h can be written as

$$\begin{aligned} \omega ^{(\lambda )}_{d,f,h}(x)= & {} \left( \frac{f\left( \frac{h(x)+h(d)}{2h(d)} \right) }{f\left( \frac{1}{2} \right) } \right) ^{2 \lambda h(d) \frac{f\left( \frac{1}{2}\right) }{f^{\prime }\left( \frac{1}{2}\right) }} \\= & {} \left( \frac{\alpha g^{\beta } \left( \frac{h(x)+h(d)}{2h(d)} \right) }{\alpha g^{\beta }\left( \frac{1}{2} \right) } \right) ^{2 \lambda h(d) \frac{\alpha g^{\beta } \left( \frac{1}{2}\right) }{\alpha \beta g^{\beta -1}\left( \frac{1}{2}\right) g^{\prime }\left( \frac{1}{2}\right) }} \\= & {} \left( \frac{g\left( \frac{h(x)+h(d)}{2h(d)} \right) }{g\left( \frac{1}{2} \right) } \right) ^{2 \lambda h(d) \frac{g\left( \frac{1}{2}\right) }{g^{\prime }\left( \frac{1}{2}\right) }} = \omega ^{(\lambda )}_{d,g,h}(x). \end{aligned}$$

\(\square \)
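Proposition 2.2 can also be checked numerically. The sketch below (our illustration, not from the paper) compares the GOF induced by \(g(x)=\frac{1-x}{x}\) with the one induced by the transformed generator \(f=\alpha g^{\beta }\), using the core function \(h(x)=x\); all parameter values are illustrative.

```python
import numpy as np

alpha, beta = 3.0, 2.5
g = lambda u: (1.0 - u) / u                 # g(1/2) = 1, g'(1/2) = -4
f = lambda u: alpha * g(u) ** beta          # transformed generator f = alpha * g**beta
dg = -4.0                                   # g'(1/2)
df = alpha * beta * g(0.5) ** (beta - 1.0) * dg   # f'(1/2) by the chain rule

h = lambda x: x                             # core function
d, lam = 20.0, 0.7

def gof(x, gen, dgen_half):
    """GOF of Eq. (2.1) for a given generator and its derivative at 1/2."""
    base = gen((h(x) + h(d)) / (2.0 * h(d))) / gen(0.5)
    return base ** (2.0 * lam * h(d) * gen(0.5) / dgen_half)

for x in np.linspace(-15.0, 15.0, 7):
    assert np.isclose(gof(x, g, dg), gof(x, f, df))   # identical GOF values
```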

The following proposition demonstrates that the GOF fits the function \(\mathrm {e}^{\lambda h(\cdot )}\) well near zero. More precisely, we show that the function \(\omega ^{(\lambda )}_{d,g,h}(x)\) and the function \(\mathrm {e}^{\lambda h(x)}\) are identical to first order at \(x=0\).

Proposition 2.3

Let \(g \in {\mathcal {G}}\), \(h \in {\mathcal {H}}\), \(\lambda \in \mathbb {R} {\setminus } \lbrace 0 \rbrace \) and \(d>0\). Let the function \(\omega ^{(\lambda )}_{d,g,h}:{\mathcal {D}}_h \cap [-d,d] \rightarrow \overline{\mathbb {R}}_{+}\) be a GOF induced by g with the core function h according to Eq. (2.1). The function \(\omega ^{(\lambda )}_{d,g,h}(x)\) and the function \(\mathrm {e}^{\lambda h(x)}\) are identical to first order at \(x=0\).

Proof

Noting that \(h(0)=0\) and \(h(d) \ne 0\), we immediately get that

$$\begin{aligned} \omega ^{(\lambda )}_{d,g,h}(0) = \mathrm {e}^{\lambda h(0)} = 1. \end{aligned}$$
(2.3)

The first derivatives of \(\omega ^{(\lambda )}_{d,g,h}(x)\) and \(\mathrm {e}^{\lambda h(x)}\) are

$$\begin{aligned} \left( \omega ^{(\lambda )}_{d,g,h}(x) \right) ^{\prime } = \lambda h^{\prime }(x) \frac{g^{\prime }\left( \frac{h(x)+h(d)}{2h(d)}\right) }{g^{\prime }\left( \frac{1}{2} \right) } \left( \frac{g\left( \frac{h(x)+h(d)}{2h(d)}\right) }{g\left( \frac{1}{2} \right) } \right) ^{2 \lambda h(d) \frac{g\left( \frac{1}{2}\right) }{g^{\prime }\left( \frac{1}{2}\right) }-1} \end{aligned}$$

and

$$\begin{aligned} \left( \mathrm {e}^{\lambda h(x)} \right) ^{\prime } = \lambda h^{\prime }(x) \mathrm {e}^{\lambda h(x)}, \end{aligned}$$

respectively. Taking into account that \(h(0)=0\) and \(h(d) \ne 0\), we get that

$$\begin{aligned} \left( \omega ^{(\lambda )}_{d,g,h}(x) \right) ^{\prime } \Bigg \vert _{x=0} = \left( \mathrm {e}^{\lambda h(x)} \right) ^{\prime } \Bigg \vert _{x=0} = \lambda h^{\prime }(0). \end{aligned}$$
(2.4)

Therefore, based on Eqs. (2.3) and (2.4), \(\omega ^{(\lambda )}_{d,g,h}(x)\) and \(\mathrm {e}^{\lambda h(x)}\) are identical to first order at \(x=0\). \(\square \)
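As a small numerical sanity check of Proposition 2.3 (our sketch, with illustrative parameter values), one can verify the common value 1 and the common derivative \(\lambda h^{\prime }(0)\) at zero by a central finite difference; for \(h(x)=x\) the derivative should be approximately \(\lambda \).

```python
import numpy as np

lam, d = 1.5, 30.0
g = lambda u: (1.0 - u) / u        # g(1/2) = 1, g'(1/2) = -4
h = lambda x: x                    # h'(0) = 1

def gof(x):
    base = g((h(x) + h(d)) / (2.0 * h(d))) / g(0.5)
    return base ** (2.0 * lam * h(d) * g(0.5) / (-4.0))

eps = 1e-6
print(gof(0.0), np.exp(lam * h(0.0)))                    # both equal 1
print((gof(eps) - gof(-eps)) / (2.0 * eps), lam * 1.0)   # both approx lam = lam * h'(0)
```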

3 The Generalized Omega Function as an Approximation to the Exponential Function

Here, we demonstrate an important connection between the generalized omega function given in Definition 2.1 and the exponential function.

Theorem 3.1

Let \(g \in {\mathcal {G}}\), \(h \in {\mathcal {H}}\), \(\lambda \in \mathbb {R} {\setminus } \lbrace 0 \rbrace \) and \(d>0\). Let the function \(\omega ^{(\lambda )}_{d,g,h}:{\mathcal {D}}_h \cap [-d,d] \rightarrow \overline{\mathbb {R}}_{+}\) be a GOF induced by g with the core function h according to Eq. (2.1). Then, for any \(x \in {\mathcal {D}}_h \cap (-d,d)\),

$$\begin{aligned} \lim _{d \rightarrow \infty } \omega ^{(\lambda )}_{d,g,h}(x) = \, \mathrm {e}^{\lambda h(x)}. \end{aligned}$$
(3.1)

Proof

Let \(x \in {\mathcal {D}}_h \cap (-d,d)\) be fixed. By the definition of \(\omega ^{(\lambda )}_{d,g,h}\), Eq. (3.1) is equivalent to

$$\begin{aligned} \lim _{d \rightarrow \infty } \left( 2 \lambda h(d) \frac{g\left( \frac{1}{2}\right) }{g^{\prime }\left( \frac{1}{2}\right) } \ln \left( \frac{g\left( \frac{h(x)+h(d)}{2h(d)} \right) }{g\left( \frac{1}{2} \right) } \right) \right) = \lambda h(x). \end{aligned}$$
(3.2)

The left hand side of Eq. (3.2) can be written as

$$\begin{aligned} 2 \lambda \frac{g\left( \frac{1}{2}\right) }{g^{\prime }\left( \frac{1}{2}\right) } \lim _{d \rightarrow \infty } \left( \frac{ \ln \left( \frac{g\left( \frac{h(x)+h(d)}{2h(d)} \right) }{g\left( \frac{1}{2} \right) } \right) }{ \frac{1}{h(d)}} \right) . \end{aligned}$$
(3.3)

Since either \(\lim _{d \rightarrow \infty } h(d) = \infty \) or \(\lim _{d \rightarrow \infty } h(d) = -\infty \), and g is continuous on (0, 1), we have

$$\begin{aligned} \lim _{d \rightarrow \infty } g \left( \frac{h(x)+h(d)}{2h(d)} \right) = g \left( \frac{1}{2} \right) . \end{aligned}$$

Therefore, we can apply L’Hospital’s rule to compute the limit in Eq. (3.3). Taking into account that g is differentiable on (0, 1), \(g^{\prime }\) is continuous on (0, 1), and h is differentiable on \((-\infty ,\infty )\) or on \([0,\infty )\), a direct calculation gives

$$\begin{aligned}&2 \lambda \frac{g\left( \frac{1}{2}\right) }{g^{\prime }\left( \frac{1}{2}\right) } \lim _{d \rightarrow \infty } \left( \frac{ \ln \left( \frac{g\left( \frac{h(x)+h(d)}{2h(d)} \right) }{g\left( \frac{1}{2} \right) } \right) }{ \frac{1}{h(d)}} \right) \\&\quad = 2 \lambda \frac{g\left( \frac{1}{2}\right) }{g^{\prime }\left( \frac{1}{2}\right) } \lim _{d \rightarrow \infty } \left( \frac{\left( \ln \left( \frac{g\left( \frac{h(x)+h(d)}{2h(d)} \right) }{g\left( \frac{1}{2} \right) } \right) \right) ^{\prime }}{\left( \frac{1}{h(d)} \right) ^{\prime }} \right) \\&\quad = 2 \lambda \frac{g\left( \frac{1}{2}\right) }{g^{\prime }\left( \frac{1}{2}\right) } \lim _{d \rightarrow \infty } \left( \frac{\frac{g^{\prime }\left( \frac{h(x)+h(d)}{2h(d)} \right) }{g\left( \frac{h(x)+h(d)}{2h(d)} \right) } \frac{h(x)}{2} \left( - \frac{1}{h^{2}(d)} \right) h^{\prime }(d) }{\left( - \frac{1}{h^{2}(d)} \right) h^{\prime }(d)}\right) = \lambda h(x). \end{aligned}$$

\(\square \)

The following example describes how an effective approximation to the exponential function \(\mathrm {e}^{\lambda x}\) can be derived using the GOF.

Example

Let \(g(x)= \frac{1-x}{x}\), \(x \in [0,1]\), and let \(h(x)=x\), \(x \in \overline{\mathbb {R}}.\) Then, \(g\left( \frac{1}{2}\right) = 1\), \(g^{\prime }\left( \frac{1}{2} \right) = -4,\) and via direct calculation we get that the GOF induced by g with the core function h is

$$\begin{aligned} \omega ^{(\lambda )}_{d,g,h}(x) = \left( \frac{d+x}{d-x} \right) ^{\lambda \frac{d}{2}}, \end{aligned}$$
(3.4)

where \(d>0\) and \(x \in [-d, d]\). Note that the GOF in Eq. (3.4) is identical to the epsilon function introduced in [2]. Based on Theorem 3.1, the function \(\omega ^{(\lambda )}_{d,g,h}(x)\) in Eq. (3.4) is an approximation of the exponential function \(\mathrm {e}^{\lambda x}\) on the domain \([-d,d]\). Table 1 shows the maximum absolute relative errors, i.e.,

$$\begin{aligned} \max _{x \in [-\Delta ,\Delta ]} \left| \frac{\omega ^{(\lambda )}_{d,g,h}(x)-\mathrm {e}^{\lambda x}}{\mathrm {e}^{\lambda x}}\right| , \end{aligned}$$

of this approximation for various values of d and \(\Delta \), where \(\lambda =1\).

Table 1 The maximum absolute relative errors of the approximations of \(\mathrm {e}^{\lambda h(x)}\) using \(\omega ^{(\lambda )}_{d,g,h}(x)\), for the Example above

The values of the maximum absolute relative errors indicate that the approximation is quite accurate around zero even for \(d=100\). This empirical observation is in line with the result of Proposition 2.3.
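The maximum absolute relative errors in Table 1 can be reproduced along the following lines; this Python sketch is ours, and the d and \(\Delta \) values below are illustrative rather than the exact grid used in the table.

```python
import numpy as np

lam = 1.0
eps_fun = lambda x, d: ((d + x) / (d - x)) ** (lam * d / 2.0)   # the GOF of Eq. (3.4)

for d in (100.0, 1000.0):
    for delta in (1.0, 5.0, 10.0):                # requires delta < d
        x = np.linspace(-delta, delta, 10001)
        rel_err = np.abs((eps_fun(x, d) - np.exp(lam * x)) / np.exp(lam * x))
        print(f"d = {d:6.0f}, Delta = {delta:4.1f}: max rel. error = {rel_err.max():.2e}")
```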

3.1 Connections with Some Probability Distributions

Here, we show how the GOF can be utilized to construct CDFs that can approximate the Weibull and exponential CDFs. Then, we demonstrate that the p-exponential distribution can be derived from a special case of the generalized omega function. Also, using the GOF, we present an approximation to the density function of the standard normal probability distribution.

3.1.1 Approximations to the Weibull Probability Distribution

Suppose that \(g \in {\mathcal {G}}\), \(h_{\beta }(x)=x^{\beta }\), \(x \in \overline{\mathbb {R}}_+\), \(\beta >0\), \(\lambda >0\) and \(d>0\). It can be verified that the function \(h_{\beta }\) satisfies the criteria for a core function given in Definition 1.4. Now, let the function \(F^{(\lambda ,\beta )}_{d,g}:\mathbb {R} \rightarrow [0,1]\) be given by

$$\begin{aligned} F^{(\lambda ,\beta )}_{d,g}(x) = {\left\{ \begin{array}{ll} 0, &{} \text {if }x \le 0 \\ 1 - \omega ^{(-\lambda )}_{d,g,h_{\beta }}(x), &{} \text {if }0< x <d \\ 1, &{} \text {if }x \ge d, \end{array}\right. } \end{aligned}$$

where \(\omega ^{(-\lambda )}_{d,g,h_{\beta }}\) is a GOF induced by the generator g with the core function \(h_{\beta }\). Clearly, the function \(F^{(\lambda ,\beta )}_{d,g}\) is a CDF of a continuous random variable. We shall call this CDF the generalized omega CDF. Exploiting the result of Theorem 3.1, we readily get that for any \(x \in (0,d)\),

$$\begin{aligned} \lim _{d \rightarrow \infty } F^{(\lambda ,\beta )}_{d,g}(x) = 1-\mathrm {e}^{-\lambda x^{\beta }}. \end{aligned}$$

This means that the well-known Weibull CDF is an asymptotic generalized omega CDF as \(d \rightarrow \infty \). That is, the generalized omega CDF may be treated as an alternative to the Weibull CDF. In the special case where the function g is given by \(g(x)= \frac{1-x}{x}\), \(x \in [0,1]\), the generalized omega CDF becomes

$$\begin{aligned} F^{(\lambda ,\beta )}_{d,g}(x) = {\left\{ \begin{array}{ll} 0, &{} \text {if }x \le 0 \\ 1 - \left( \frac{d^{\beta }+x^{\beta }}{d^{\beta }-x^{\beta }} \right) ^{-\lambda \frac{d^{\beta }}{2}}, &{} \text {if }0< x <d \\ 1, &{} \text {if }x \ge d. \end{array}\right. } \end{aligned}$$
(3.5)

This function is known as the omega CDF; it was introduced and studied in Dombi et al. [3] and Okorie and Nadarajah [6]. Note that with \(\beta =1\), the function in Eq. (3.5) becomes the epsilon CDF [2]. (Note that the GOF \(\omega ^{(\lambda )}_{d,g,h_{\beta }}\) with the generator \(g(x)= \frac{1-x}{x}\) is the omega function given in Eq. (1.1).) Figure 1 shows example plots of the Weibull CDF \(F_{\lambda ,\beta }(x) = 1-\mathrm {e}^{-\lambda x^{\beta }}\), \(x \ge 0\), for various parameter values, together with their approximations by the generalized omega CDF given in Eq. (3.5). We can see that these approximations are already quite good for \(d=50\). Note that for \(\beta =1\), the Weibull CDF coincides with the exponential CDF. It is worth noting that [3] contains more details on how the omega function can be fitted to real-life data in reliability analysis.

Fig. 1 Example plots of Weibull CDFs and their approximations using the generalized omega CDF, for \(d=50\), \(g(x)=\frac{1-x}{x}\) and \(h_{\beta }(x)= x^{\beta }\)
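The comparison behind Fig. 1 can be reproduced with a sketch like the one below (ours); the function name `gen_omega_cdf` and the parameter values are illustrative, not those used in the figure.

```python
import numpy as np

def gen_omega_cdf(x, lam, beta, d):
    """Generalized omega CDF of Eq. (3.5), i.e., the generator g(x) = (1 - x)/x."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)                      # value 1 for x >= d
    out[x <= 0.0] = 0.0                        # value 0 for x <= 0
    inside = (x > 0.0) & (x < d)
    xb, db = x[inside] ** beta, d ** beta
    out[inside] = 1.0 - ((db + xb) / (db - xb)) ** (-lam * db / 2.0)
    return out

lam, beta, d = 0.5, 2.0, 50.0
x = np.linspace(0.0, 6.0, 301)
weibull = 1.0 - np.exp(-lam * x**beta)
print(np.max(np.abs(gen_omega_cdf(x, lam, beta, d) - weibull)))   # small for d = 50
```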

3.1.2 The p-Exponential Probability Distribution and the Generalized Omega Function

Suppose \(g(x)= x\), \(x \in [0,1]\), \(h(x)=x\), \(x \in \overline{\mathbb {R}}\) and \(\lambda =1\). After direct calculation, we get that the GOF induced by the generator g with the core function h is

$$\begin{aligned} \omega _{d}(x) = \left( 1+\frac{x}{d} \right) ^{d}, \end{aligned}$$

where \(d>0\) and \(x \in [-d, d]\). The q-exponential function \(e_{q}\) was introduced by Tsallis [8] as follows:

$$\begin{aligned} e_{q}(x) = \left( 1+(1-q)x \right) ^{\frac{1}{1-q}}, \end{aligned}$$

where \(q <1\) and \(x \in \left[ -\frac{1}{1-q}, \frac{1}{1-q} \right] \). With the bijection \(q=\frac{d-1}{d}\) from \((0,\infty )\) to \((-\infty ,1)\), we get that \(\omega _{d}(x) = e_{q}(x)\) for any \(x \in [-d,d]\) (or equivalently, for any \(x \in \left[ -\frac{1}{1-q}, \frac{1}{1-q} \right] \)). This means that the q-exponential function is a particular case of the GOF. Furthermore, based on Theorem 3.1, we readily find that, for any fixed \(x \in \mathbb {R}\),

$$\begin{aligned} \lim _{q \rightarrow 1} e_{q}(x) = \mathrm {e}^{x}. \end{aligned}$$

The p-exponential CDF \(F_{p}:\mathbb {R} \rightarrow [0,1]\) is defined as follows:

$$\begin{aligned} F_{p}(x) = {\left\{ \begin{array}{ll} 0, &{} \text {if }x \le 0 \\ 1 - \left( 1- \frac{x}{p+1} \right) ^{p}, &{} \text {if }x \in (0,p+1) \\ 1, &{} \text {if }x \ge p+1, \end{array}\right. } \end{aligned}$$

where \(p>0\) (see Sinner et al. [7]). This CDF can be expressed in terms of the GOF \(\omega _{p+1}(x)=\left( 1+\frac{x}{p+1} \right) ^{p+1}\). Namely, we can write:

$$\begin{aligned} F_{p}(x) = {\left\{ \begin{array}{ll} 0, &{} \text {if }x \le 0 \\ 1 - \left( 1- \frac{x}{p+1} \right) ^{-1} \omega _{p+1}(-x), &{} \text {if }x \in (0,p+1) \\ 1, &{} \text {if }x \ge p+1. \end{array}\right. } \end{aligned}$$

Note that \(F_{p}(x)\) is an approximation of the function \(F(x) = 1-\mathrm {e}^{-x}\), which is the CDF of an exponential random variable with unit rate parameter.
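The identity expressing \(F_{p}\) in terms of \(\omega _{p+1}\) can be verified numerically; the following sketch is ours, with p chosen arbitrarily.

```python
import numpy as np

p = 4.0
omega_gof = lambda x, d: (1.0 + x / d) ** d      # GOF with g(x) = x, h(x) = x, lambda = 1

x = np.linspace(0.01, p + 0.99, 200)             # points inside (0, p + 1)
direct = 1.0 - (1.0 - x / (p + 1.0)) ** p                              # p-exponential CDF
via_gof = 1.0 - (1.0 - x / (p + 1.0)) ** (-1.0) * omega_gof(-x, p + 1.0)
assert np.allclose(direct, via_gof)              # the two formulas agree
```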

3.1.3 Approximation to the Density Function of the Standard Normal Probability Distribution

Let \(g(x)= \frac{1-x}{x}\), \(x \in [0,1]\), and \(h(x)=-\frac{x^{2}}{2}\), \(x \in \overline{\mathbb {R}}\), \(\lambda >0\) and \(d>0\). It can be verified that the function h satisfies the criteria for a core function given in Definition 1.4. Now, \(g\left( \frac{1}{2}\right) = 1\), \(g^{\prime }\left( \frac{1}{2} \right) = -4\) and the GOF induced by g with the core function h is

$$\begin{aligned} \omega ^{(\lambda )}_{d,g,h}(x) = \left( \frac{d^{2}-x^{2}}{d^{2}+x^{2}} \right) ^{\lambda \frac{d^{2}}{4}}, \end{aligned}$$

where \(d>0\) and \(x \in [-d, d]\).

Taking into account Theorem 3.1 with \(\lambda =1\), we readily find that for any \(x \in (-d,d)\),

$$\begin{aligned} \lim _{d \rightarrow \infty } \left( \frac{d^{2}-x^{2}}{d^{2}+x^{2}} \right) ^{\frac{d^{2}}{4}} = \mathrm {e}^{-\frac{x^2}{2}}. \end{aligned}$$

This means that the function

$$\begin{aligned} \varphi _d(x) = \frac{1}{\sqrt{2 \pi }} \left( \frac{d^{2}-x^{2}}{d^{2}+x^{2}} \right) ^{\frac{d^{2}}{4}}, \quad x \in [-d,d] \end{aligned}$$

may be treated as an approximation to the density function of the standard normal probability distribution on the bounded domain \([-d,d]\). This approximation is quite good even for small values of d. For example, if \(d=10\), then

$$\begin{aligned} \max _{x \in (-6,6)} \left| \varphi (x) - \varphi _d(x) \right| < 7.78 \times 10^{-5}, \end{aligned}$$

where \(\varphi \) is the density function of the standard normal probability distribution. In most practical applications, \(\varphi (x)\) is treated as zero for \(|x|>6\), since \(\varphi (6) = 6.08 \times 10^{-9}\). At the same time, for \(d=10\), \(\varphi _{d}(6) = 2.62 \times 10^{-9}\) and \(\varphi _{d}(10) =0\). Figure 2 shows how well \(\varphi _d(x)\) fits \(\varphi (x)\) for \(x \in (-6,6)\).

Fig. 2 The standard normal density function and its approximation using the GOF with \(d=10\), \(g(x)=\frac{1-x}{x}\) and \(h(x)= -\frac{x^{2}}{2}\)
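The error bound quoted above for \(d=10\) can be checked numerically with a sketch like the following (ours); a dense grid over \((-6,6)\) is used in place of a true maximization.

```python
import numpy as np

d = 10.0
phi = lambda x: np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)       # standard normal density
phi_d = lambda x: ((d**2 - x**2) / (d**2 + x**2)) ** (d**2 / 4.0) / np.sqrt(2.0 * np.pi)

x = np.linspace(-5.9999, 5.9999, 100001)
print(np.max(np.abs(phi(x) - phi_d(x))))       # should stay below about 7.78e-5
print(phi(6.0), phi_d(6.0), phi_d(10.0))       # approx 6.08e-9, 2.62e-9, 0.0
```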

4 Conclusions and Future Research Plans

Based on the results presented above, the generalized omega function may be viewed as a bounded alternative to \(\mathrm {e}^{\lambda h(\cdot )}\)-like functions, and so this new function has considerable application potential in many areas of science. As part of our future research, we aim to study the approach to the p-exponential distribution presented above from the perspective of distribution theory. Furthermore, from the same point of view, we would like to analyze the presented approximation to the density function of the standard normal random variable.