Introduction

The need for skew distributions arises in all fields of science. Univariate skew-symmetric models have been considered by several authors. Ferreira and Steel [7] defined Y as a new random variable with the probability density function (pdf)

$$\begin{aligned} f_{Y}(y) = f_{X}(y)w(F_{X}(y)),\quad y\in R, \end{aligned}$$
(1)

where w(.) is a pdf on (0, 1) and X is a symmetric random variable about zero with pdf \(f_{X}(.)\) and cumulative distribution function (cdf) \(F_{X}(.)\). Then, Y is a skew version of the symmetric random variable X. By changing w(.) in (1), many of the known families of skew distributions are obtained. For example, if \(w(x)= 2F_{X}(\lambda F_{X}^{-1}(x))\), then (1) becomes

$$\begin{aligned} f_{Y}(y) =2 f_{X}(y)F_{X}(\lambda y),\quad y\in R, \;\;\; \lambda \in R, \end{aligned}$$
(2)

which is the family of Azzalini skew distributions ([2, 3]). Special cases of (2) include the class of skew-normal distributions, \(f_{Y}(y) =2 \varphi (y)\phi (\lambda y)\), where \(\varphi\) and \(\phi\) are the standard normal pdf and cdf, respectively, as well as the skew-t and skew-Cauchy distributions.

The logistic distribution (LD) has been the center of attention in several areas of scientific research. It is applied as an alternative to the normal distribution in many practical situations. Let \(L(\mu , \sigma )\) denote the logistic distribution with parameters \(\mu\) and \(\sigma\), with the following pdf and cdf, respectively,

$$\begin{aligned} f(x) &= \exp (-(x-\mu )/ \sigma )/[1+\exp (-(x-\mu )/\sigma )]^{2},\\&\quad x,\mu \in R,~ \sigma>0,\\ F(x)&= 1/[1+\exp (-(x-\mu )/\sigma )],\quad x, \mu \in R, \;\; \sigma >0. \end{aligned}$$

Furthermore, L(0, 1) denotes the standard logistic distribution with \(\mu =0\) and \(\sigma =1\). Some properties and applications of the LD are available in Balakrishnan [4]. In analogy with the skew-normal distribution of Azzalini [2], Wahed and Ali [15] introduced the skew-logistic distribution (SLD) through the family of Azzalini skew distributions; Nadarajah [11] and Gupta and Kundu [8] derived some of its properties. An important drawback of family (1) is that inference about the skewness parameter can be complicated. For example, Pewsey [13] proved that the MLE of the skewness parameter in the Azzalini skew-normal family does not always exist. Nevertheless, the SLD has been extended by many authors and has received widespread attention. Gupta and Kundu [8] also discussed another generalization of the logistic distribution based on the idea of the proportional reversed hazard family. They named this distribution the proportional reversed hazard logistic distribution (PRHLD), or type-I generalized logistic distribution, and showed that it has several advantages over the SLD. Chakraborty et al. [5] and Hazarika and Chakraborty [9] considered a new skew-logistic distribution (NSLD) and the alpha skew-logistic distribution (ASLD), respectively; the ASLD is flexible enough to adequately model both unimodal and bimodal data in the presence of positive or negative skewness. Asgharzadeh et al. [1] proposed a generalized skew-logistic distribution (GSLD) using the type-III generalized logistic distribution, and a generalized version of the ASLD was introduced by Hazarika and Chakraborty [10]. Satheesh Kumar and Manju [14] proposed the modified skew-logistic distribution (MSLD). The pdfs of some of these skew-logistic distributions are given in Table 1.

Table 1 Different skew-logistic distributions

Nadarajah et al. [12] introduced a new family of skew distributions, naming it the truncated-exponential skew-symmetric (TESS) family. A TESS distribution is a member of the exponential family; therefore, the skewness parameter can be estimated more easily.

In this study, a new generalized logistic distribution based on the TESS family is introduced. It is referred to as the truncated-exponential skew-logistic (TESL) distribution and is compared with other skew-logistic distributions; some of its properties are also derived. For the other skew-logistic distributions based on (2), it is generally difficult, or almost impossible, to estimate the skewness parameter. The aim of this study is to introduce a new skew-logistic distribution that is a member of the exponential family and inherits the good properties of that family, so that estimation of the skewness parameter is expected to be easier and more precise.

Truncated-exponential skew-logistic distribution

A random variable Y has the truncated-exponential skew-symmetric distribution, denoted by TESS\((\lambda )\), if its pdf is given by

$$\begin{aligned} f_{Y}(y) =c f_{X} (y) \exp (-\lambda F_{X}(y)),\quad y, \lambda \in R, \end{aligned}$$
(3)

where \(f_{X}(.)\) and \(F_{X}(.)\) are, respectively, the pdf and cdf of a symmetric random variable X about zero, \(\lambda\) is the shape parameter and \(c=\frac{\lambda }{1-\exp (-\lambda )}\). The cdf of Y is obtained as

$$\begin{aligned} F_{Y}(y) =\frac{c}{\lambda }\{1-\exp (-\lambda F_{X}(y))\},\quad y, \lambda \in R. \end{aligned}$$

Note that Eq. (3) is a particular case of Eq. (1) for \(w(x) = \frac{\lambda \exp (-\lambda x)}{1-\exp (-\lambda )}\).

Definition 2.1

A random variable Y has the truncated-exponential skew-logistic distribution with parameters \(\mu \in R, \sigma >0\) and \(\lambda \in R\), denoted by TESL\((\mu , \sigma , \lambda )\), if its pdf is given by

$$\begin{aligned} f_{Y}(y)&= \frac{\lambda \exp [-(y-\mu )/\sigma ]}{\sigma [1-\exp (-\lambda )][1+\exp [-(y-\mu )/\sigma ]]^{2}}\nonumber \\&\quad \exp \left( \frac{-\lambda }{1+\exp [-(y-\mu )/\sigma ]}\right) ,\quad y \in R. \end{aligned}$$
(4)

Further, the cdf of Y is

$$\begin{aligned} F_{Y}(y) =\left[ \frac{1}{1-\exp (-\lambda )}\right] \left[ {1- \exp \left( \frac{-\lambda }{1+\exp [-(y-\mu )/\sigma ]}\right) }\right] ,\quad y \in R. \end{aligned}$$

If \(\mu =0\) and \(\sigma =1\), we write \(Y \sim {\rm TESL}(\lambda )\).
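The pdf (4) and the cdf above are straightforward to evaluate numerically. The following Python sketch implements both (the function names are ours, chosen for illustration, and are not part of the paper):

```python
import math

def tesl_pdf(y, mu=0.0, sigma=1.0, lam=1.0):
    """Density (4) of the TESL(mu, sigma, lambda) distribution."""
    z = (y - mu) / sigma
    c = lam / (1.0 - math.exp(-lam))                  # normalising constant
    fx = math.exp(-z) / (1.0 + math.exp(-z)) ** 2     # standard logistic pdf at z
    Fx = 1.0 / (1.0 + math.exp(-z))                   # standard logistic cdf at z
    return (c / sigma) * fx * math.exp(-lam * Fx)

def tesl_cdf(y, mu=0.0, sigma=1.0, lam=1.0):
    """Distribution function of TESL(mu, sigma, lambda)."""
    Fx = 1.0 / (1.0 + math.exp(-(y - mu) / sigma))
    return (1.0 - math.exp(-lam * Fx)) / (1.0 - math.exp(-lam))
```

Both formulas are valid for positive and negative \(\lambda\); a quick check is that a central difference of `tesl_cdf` reproduces `tesl_pdf`.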

Note that the TESL pdf in (4) can be written as \(f(y)=h(y)c(\lambda ) \exp \{ w(\lambda )t(y)\}\), where \(t(y)=F_{X}((y-\mu )/\sigma )=1/(1+\exp (-(y-\mu )/\sigma ))\), \(c(\lambda )=\lambda /[\sigma (1-\exp (-\lambda ))]\), \(h(y)=f_{X}((y-\mu )/\sigma )=\exp (-(y-\mu )/\sigma )/[1+\exp (-(y-\mu )/\sigma )]^{2}\) and \(w(\lambda )=-\lambda\). Therefore, (4) belongs to the exponential family and \(D({\underline{y}})=\sum _{i=1}^{n}F_{X}((y_{i}-\mu )/\sigma )=\sum _{i=1}^{n}[1/(1+\exp (-(y_{i}-\mu )/\sigma ))]\) is a complete sufficient statistic for \(\lambda\), if \(\mu\) and \(\sigma\) are assumed known. Also \(E(D({\underline{Y}}))=n\{(1/\lambda )-[\exp (-\lambda )/(1-\exp (-\lambda ))]\}\).

The inverse cdf of TESL\(( \lambda )\) is

$$\begin{aligned} F^{-1}_{Y}(y) =\ln \left\{ \frac{-\ln [1-y(1-\exp (-\lambda ))]}{\lambda +\ln [1-y(1-\exp (-\lambda ))]} \right\} ,\quad 0<y<1. \end{aligned}$$
(5)
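The inverse cdf (5) makes inverse-transform sampling immediate: evaluate it at uniform random numbers. A minimal Python sketch (function names are ours) is:

```python
import math
import random

def tesl_quantile(u, lam):
    """Inverse cdf (5) of TESL(lambda), valid for 0 < u < 1."""
    t = math.log(1.0 - u * (1.0 - math.exp(-lam)))   # equals -lambda * F_X(y)
    return math.log(-t / (lam + t))

def tesl_sample(n, lam, seed=0):
    """Draw n variates from TESL(lambda) by inverse-transform sampling."""
    rng = random.Random(seed)
    return [tesl_quantile(max(rng.random(), 1e-16), lam) for _ in range(n)]
```

A round-trip check (apply the cdf to a quantile and recover the input probability) confirms the formula for both signs of \(\lambda\).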

The hazard rate function of TESL\((\mu , \sigma , \lambda )\) is

$$\begin{aligned}h_{Y}(y)=\frac{{ \lambda \exp (-(y-\mu )/\sigma )\exp \left( \frac{-\lambda }{1+\exp (-(y-\mu )/\sigma )}\right) }}{{\sigma [1+\exp (-(y-\mu )/\sigma )]^{2}} \left( \exp \left( \frac{-\lambda }{1+\exp (-(y-\mu )/\sigma )}\right) -\exp (-\lambda )\right) },\quad y \in R. \end{aligned}$$

Figure 1 shows the pdf and hazard rate function of TESL\((0, \sigma , \lambda )\) for different values of \(\sigma\) and \(\lambda\).

Fig. 1
figure 1

Plots of pdf(left) and hazard rate function(right) for TESL\((0, \sigma , \lambda )\)

Properties

In this section, some mathematical properties of TESL distribution are derived. In the sequel, the following notations and definitions are required.

Let \(X_{i:n}\) denote the ith order statistic for a random sample of size n from L(0, 1), then the moment generating function (mgf) of \(X_{i:n}\) is

$$\begin{aligned} M_{X_{i:n}}(t) = \frac{\Gamma (i+t) \Gamma (n-i+1-t)}{\Gamma (i) \Gamma (n-i+1)},\quad 1~\le i\le n. \end{aligned}$$

Further, \(E(X_{i:n})=\psi (i)-\psi (n-i+1)\) and \({\rm Var}(X_{i:n})=\psi ^{\prime }(i)+\psi ^{\prime }(n-i+1)\), where \(\Gamma (.)\), \(\psi (.)\) and \(\psi ^{\prime }(.)\) are gamma, digamma and trigamma functions, respectively.

Let \(Y \sim {\rm TESL}(\lambda )\). It follows from Nadarajah et al. [12] that the mgf of Y is given by

$$\begin{aligned} M_{Y}(t)&= c\sum _{k=0}^{\infty }\frac{(-\lambda )^{k}}{(k+1)!} M_{X_{k+1:k+1}}(t),\\&= c \sum _{k=0}^{\infty }\frac{(-\lambda )^{k}}{(k+1)!} \frac{\Gamma (k+1+t) \Gamma (1-t)}{\Gamma (k+1) }. \end{aligned}$$

where \(c=\frac{\lambda }{1-\exp (-\lambda )}\).

By Theorem 1 in Nadarajah et al. [12], if \(Y \sim {\rm TESS}(\lambda )\) and \(E|X|^{r}\) exists for some \(r > 0\), then \(E|Y|^{r}\) exists. Hence, the rth moment of \(Y \sim {\rm TESL}(\lambda )\) is

$$\begin{aligned} E(Y^{r})&= c \sum _{k=0}^{\infty }\frac{(-\lambda )^{k}}{(k+1)!} E(X_{k+1:k+1}^r),\\&= c \sum _{k=0}^{\infty }\frac{(-\lambda )^{k}}{(k+1)!} \frac{{\rm d}^{(r)}}{{\rm d}t^{r}} \left\{ \frac{\Gamma (k+1+t) \Gamma (1-t)}{\Gamma (k+1)} \right\} \Bigg\vert _{t=0}. \end{aligned}$$

In particular

$$\begin{aligned} E(Y) &= c \sum _{k=0}^{\infty }\frac{(-\lambda )^{k}}{(k+1)!}\{\psi (k+1)-\psi (1)\} \end{aligned}$$
(6)
$$\begin{aligned} E(Y^2) &= c \sum _{k=0}^{\infty }\frac{(-\lambda )^{k}}{(k+1)!} \left\{ \left[ \psi \left( k+1 \right) -\psi \left( 1\right) \right] ^{2}\right. \nonumber \\&\quad\left. +\,\left[ \psi ^{\prime }\left( k+1 \right) +\psi ^{\prime }\left( 1 \right) \right] \right\} \end{aligned}$$
(7)
$$\begin{aligned} E(Y^{3})&= c \sum _{k=0}^{\infty }\frac{(-\lambda )^{k}}{(k+1)!} \left\{ [\psi (k+1)-\psi (1)]^{3} \right. \nonumber \\&\quad +\, 3[\psi ^{\prime }(k+1)+\psi ^{\prime }(1)] [\psi (k+1)-\psi (1)]\nonumber \\&\quad \left. +\,[\psi ^{\prime \prime }(k+1)-\psi ^{\prime \prime } (1)] \right\} \end{aligned}$$
(8)
$$\begin{aligned} E(Y^{4})&= c \sum _{k=0}^{\infty }\frac{(-\lambda )^{k}}{(k+1)!} \left\{ [\psi (k+1)-\psi (1)]^{4} \nonumber \right. \\&\quad +\, 6[\psi ^{\prime }(k+1)+\psi ^{\prime }(1)] [\psi (k+1)-\psi (1)]^{2} \nonumber \\&\quad +\, 4[\psi ^{\prime \prime }(k+1)-\psi ^{\prime \prime }(1)] [\psi (k+1)-\psi (1)] \nonumber \\&\quad \left. +\, 3[\psi ^{\prime }(k+1)+\psi ^{\prime }(1)]^{2} +[\psi ^{\prime \prime \prime }(k+1)+\psi ^{\prime \prime \prime }(1)]\right\} \end{aligned}$$
(9)
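Since the series (6)-(9) involve the digamma and trigamma functions only at positive integers, they can be evaluated with elementary sums: \(\psi (k)=-\gamma +\sum _{m=1}^{k-1}1/m\) and \(\psi ^{\prime }(k)=\pi ^{2}/6-\sum _{m=1}^{k-1}1/m^{2}\). A Python sketch of (6) and (7) (function names and truncation length are ours):

```python
import math

EULER = 0.5772156649015329  # Euler-Mascheroni constant

def psi_int(k):
    """Digamma at a positive integer: psi(k) = -gamma + H_{k-1}."""
    return -EULER + sum(1.0 / m for m in range(1, k))

def psi1_int(k):
    """Trigamma at a positive integer: psi'(k) = pi^2/6 - sum_{m<k} 1/m^2."""
    return math.pi ** 2 / 6.0 - sum(1.0 / m ** 2 for m in range(1, k))

def tesl_mean(lam, terms=120):
    """E(Y) for Y ~ TESL(lambda), truncating the series (6)."""
    c = lam / (1.0 - math.exp(-lam))
    return c * sum((-lam) ** k / math.factorial(k + 1) * (psi_int(k + 1) - psi_int(1))
                   for k in range(terms))

def tesl_second_moment(lam, terms=120):
    """E(Y^2) for Y ~ TESL(lambda), truncating the series (7)."""
    c = lam / (1.0 - math.exp(-lam))
    total = 0.0
    for k in range(terms):
        d = psi_int(k + 1) - psi_int(1)
        total += (-lam) ** k / math.factorial(k + 1) * (d * d + psi1_int(k + 1) + psi1_int(1))
    return c * total
```

As \(\lambda \rightarrow 0\) the TESL reduces to the standard logistic, so the mean tends to 0 and the second moment to \(\pi ^{2}/3\); the series can also be cross-checked against direct numerical integration of \(y f_{Y}(y)\).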

Let \(Y\sim {\rm TESL}(\lambda )\); then the rth L-moment of Y is given by

$$\begin{aligned} \lambda _{r} = \sum _{j=0}^{r-1}\frac{(-1)^{r-1-j}}{r+1}\binom{r-1}{j}\binom{r-1+j}{j}\left( \psi (r+1)-\psi (1)\right) \end{aligned}$$
Fig. 2
figure 2

Plot of E(X), Var(X), Sk(X) and Ku(X) for different values of \(\lambda\)

Table 2 Moments of X for different values of \(\lambda\)
Table 3 MLEs, log-likelihood, AIC and BIC

Some moments are given in Table 2. Using the above moments, we can calculate the four measures E(X), Var(X), Skewness(X) and Kurtosis(X). Figure 2 illustrates the behavior of these four measures for different values of \(\lambda\). Based on Fig. 2, it is clear that:

  1. E(X) and Skewness(X) decrease with increasing \(\lambda\);

  2. Var(X) decreases with increasing \(\mid \lambda \mid\);

  3. Kurtosis(X) increases with increasing \(\mid \lambda \mid\).

Let \({\mathcal {R}}_{X}\left( \gamma \right)\) and \({\mathcal {R}}_{Y}\left( \gamma \right)\) denote Rényi entropies of X and Y, respectively. To derive \({\mathcal {R}}_{Y}\left( \gamma \right)\), we have

$$\begin{aligned}&\int _{-\infty }^{\infty }\left( f_{Y}\left( y\right) \right) ^{\gamma }{\rm d}y\\&\quad = \left\{ c^{\gamma } \int _{-\infty }^{\infty }\left( f_{X}(y)\right) ^{\gamma }\exp \left( -\lambda \gamma F_{X}(y)\right) {\rm d}y \right\} \\&\quad = \left\{ c^{\gamma } \int _{-\infty }^{\infty }\left( f_{X}(y)\right) ^{\gamma }\sum _{k=0}^{\infty }\frac{(-1)^{k}\left( \lambda \gamma \right) ^{k}\left( F_{X}(y)\right) ^{k} }{k!} {\rm d}y \right\} \\&\quad = \left\{ c^{\gamma } \sum _{k=0}^{\infty }\frac{(-1)^{k}\left( \lambda \gamma \right) ^{k} }{k!}\int _{-\infty }^{\infty }\left( f_{X}(y)\right) ^{\gamma }\left( F_{X}(y)\right) ^{k} {\rm d}y \right\} \\&\quad = \left\{ c^{\gamma } \left[ \int _{-\infty }^{\infty }\left( f_{X}(y) \right) ^{\gamma }{\rm d}y + \sum _{k=1}^{\infty }\frac{(-1)^{k}\left( \lambda \gamma \right) ^{k} }{k!}\int _{-\infty }^{\infty }\left( f_{X}(y)\right) ^{\gamma }\left( F_{X}(y)\right) ^{k} {\rm d}y\right] \right\} \\&\quad = \left\{ c^{\gamma } \left[ \exp \{(1-\gamma ){\mathcal {R}}_{X}\left( \gamma \right) \}+ \sum _{k=1}^{\infty }\frac{(-1)^{k}\left( \lambda \gamma \right) ^{k} }{k!}I_{k}(y)\right] \right\} \end{aligned}$$

where, using the substitution \(u=1+\exp (-y)\),

$$\begin{aligned} I_{k}(y)= & {} \int _{-\infty }^{\infty }\left( f_{X}(y)\right) ^{\gamma }\left( F_{X}(y)\right) ^{k} {\rm d}y \\= & {} \int _{-\infty }^{ \infty }\left( \frac{\exp (-y)}{\left( 1+\exp (-y)\right) ^{2} } \right) ^{\gamma }\left( \frac{1}{1+\exp (-y)} \right) ^{k}{\rm d}y \\= & {} \int _{-\infty }^{ \infty }\frac{\exp (-\gamma y)}{\left( 1+\exp (-y)\right) ^{2\gamma +k} }{\rm d}y \\= & {} \int _{1}^{\infty }\frac{\left( u-1\right) ^{\gamma -1} }{u^{2\gamma +k} }{\rm d}u \\= & {} \int _{1}^{\infty }\sum _{j=0}^{\infty }\frac{{{\gamma -1} \atopwithdelims (){j}} \left( -1 \right) ^{j}u^{\gamma -1-j} }{u^{2\gamma +k} }{\rm d}u \\= & {} \sum _{j=0}^{\infty }{{\gamma -1} \atopwithdelims (){j}} \left( -1 \right) ^{j}\int _{1}^{\infty }u^{-\gamma -j-k-1}{\rm d}u \\= & {} \sum _{j=0}^{\infty }{{\gamma -1} \atopwithdelims (){j}} \left( -1 \right) ^{j}\frac{1}{\gamma +j+k},\quad \gamma >-\left( j+k\right) . \end{aligned}$$

Then, the Rényi entropy corresponding to \(f_{Y}(.)\) is given by

$$\begin{aligned} {\mathcal {R}}_{Y}\left( \gamma \right)&= \frac{1}{1-\gamma }\ln \left\{ c^{\gamma } \left[ \exp \{(1-\gamma ){\mathcal {R}}_{X}\left( \gamma \right) \}\right. \right. \\&\quad \left. \left. +\, \sum _{k=1}^{\infty }\sum _{j=0}^{\infty }{{\gamma -1} \atopwithdelims (){j}}\frac{(-1)^{k+j}\left( \lambda \gamma \right) ^{k} }{k! \left( \gamma +j+k\right) }\right] \right\} . \end{aligned}$$
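The double series above can be checked against direct quadrature of \(\int f_{Y}^{\gamma }{\rm d}y\). A Python sketch (function names, grid and truncation lengths are ours; for integer \(\gamma\) the j-sum terminates because \(\binom{\gamma -1}{j}=0\) for \(j\ge \gamma\)):

```python
import math

def renyi_integral_series(lam, gamma=2.0, kmax=80, jmax=60):
    """Double-series form of int f_Y^gamma dy for TESL(lambda), as derived above."""
    c = lam / (1.0 - math.exp(-lam))

    def binom(a, j):  # generalised binomial coefficient C(a, j)
        out = 1.0
        for m in range(j):
            out *= (a - m) / (m + 1)
        return out

    total = 0.0
    for k in range(kmax):
        for j in range(jmax):
            total += (binom(gamma - 1.0, j) * (-1.0) ** (k + j)
                      * (lam * gamma) ** k / (math.factorial(k) * (gamma + j + k)))
    return c ** gamma * total

def renyi_integral_numeric(lam, gamma=2.0):
    """Direct quadrature of int f_Y^gamma dy, for cross-checking the series."""
    c = lam / (1.0 - math.exp(-lam))
    dy, total = 0.01, 0.0
    for i in range(6001):
        y = -30.0 + i * dy
        f = (c * math.exp(-y) / (1.0 + math.exp(-y)) ** 2
             * math.exp(-lam / (1.0 + math.exp(-y))))
        total += f ** gamma * dy
    return total
```

The Rényi entropy then follows as \({\mathcal {R}}_{Y}(\gamma )=\ln \{\int f_{Y}^{\gamma }{\rm d}y\}/(1-\gamma )\).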

Estimations

Let \(y_{1}, y_{2},\ldots , y_{n}\) be a random sample from (4). To estimate \(\varvec{\theta }=(\mu , \sigma , \lambda )^{\prime }\), we derive the moment and maximum likelihood estimators. Let \(m_{k}=(1/n)\sum _{i=1}^{n}y_{i}^{k}\), for \(k=1, 2, 3\). Equating the theoretical moments in Eqs. (6)–(9) with the sample moments yields the moment estimators.

The likelihood function of \(\varvec{\theta }=(\mu , \sigma , \lambda )^{\prime }\) is

$$\begin{aligned} L(\mu , \sigma , \lambda ) &= \left( \frac{\lambda }{\sigma (1-\exp (-\lambda ))}\right) ^{n} \left\{ \prod _{i=1}^{n}\frac{\exp (-z_{i})}{(1+\exp (-z_{i}))^{2}}\right\} \exp \left\{ -\lambda \sum _{i=1}^{n} \frac{1}{1+\exp (-z_{i})}\right\} \end{aligned}$$

where \(z_{i}=(y_{i}-\mu )/\sigma\). The log-likelihood function is given by

$$\begin{aligned} \log L(\mu , \sigma , \lambda ) &= n\ln (\lambda )-n\ln (\sigma )-n\ln (1-\exp (-\lambda ))\\&\quad +\,\sum _{i=1}^{n}[-z_{i}-2\ln (1+\exp (-z_{i})) ]\\&\quad -\,\lambda \sum _{i=1}^{n} \frac{1}{1+\exp (-z_{i})} \end{aligned}$$

So, the maximum likelihood (ML) estimators can be obtained by solving the following equations

$$\begin{aligned} \frac{\partial Log L}{\partial \mu }= & {} \sum _{i=1}^{n}\left\{ \frac{1}{\sigma }-\frac{2\exp (-z_{i})}{\sigma (1+\exp (-z_{i}))}\right\} \nonumber \\&+\,\lambda \sum _{i=1}^{n}\left\{ \frac{\exp (-z_{i})}{\sigma (1+\exp (-z_{i}))^{2}}\right\} =0\nonumber \\ \frac{\partial Log L}{\partial \sigma }= & {} \frac{-n}{\sigma }+\sum _{i=1}^{n}\left\{ \frac{z_{i}}{\sigma } -\frac{2z_{i}\exp (-z_{i})}{\sigma (1+\exp (-z_{i}))}\right\} \nonumber \\&+\,\lambda \sum _{i=1}^{n}\left\{ \frac{z_{i}\exp (-z_{i})}{\sigma (1+\exp (-z_{i}))^{2}}\right\} =0\nonumber \\ \frac{\partial Log L}{\partial \lambda }= & {} \frac{n}{\lambda }-\frac{n\exp (-\lambda )}{1-\exp (-\lambda )}\nonumber \\&-\,\sum _{i=1}^{n}\frac{1}{1+\exp (-z_{i})}=0. \end{aligned}$$
(10)

These equations can be solved numerically for \(\mu\), \(\sigma\) and \(\lambda\). By Theorem 3 in Nadarajah et al. [12], if \(\mu\) and \(\sigma\) are assumed known, then the ML estimator of \(\lambda\) for the pdf (4) always exists and is unique.
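With \(\mu\) and \(\sigma\) known, the third equation in (10) reduces to matching \(1/\lambda -1/(e^{\lambda }-1)\) to the sample mean of the logistic cdf values; since the left-hand side is strictly decreasing in \(\lambda\), bisection finds the unique root. A Python sketch (function names and the bracketing interval are ours):

```python
import math

def score_mean(lam):
    """1/lam - 1/(e^lam - 1): the value of (1/n) E(D) at lambda, continuous at 0."""
    if abs(lam) < 1e-8:
        return 0.5 - lam / 12.0          # Taylor expansion around lam = 0
    return 1.0 / lam - 1.0 / math.expm1(lam)

def mle_lambda(sample, mu=0.0, sigma=1.0, lo=-50.0, hi=50.0):
    """Bisection solution of the lambda-equation in (10), mu and sigma known."""
    n = len(sample)
    d_bar = sum(1.0 / (1.0 + math.exp(-(y - mu) / sigma)) for y in sample) / n
    for _ in range(200):                 # score_mean is strictly decreasing
        mid = 0.5 * (lo + hi)
        if score_mean(mid) > d_bar:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the equation is monotone, no starting value or derivative is needed, which reflects the existence and uniqueness result quoted above.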

In the following, expressions for the Fisher information matrix are derived. The elements of \({\varvec{I}}_{n}(\varvec{\theta })=(I_{ij}),\; i=1, 2, 3,\; j=1, 2, 3,\) are given by

$$\begin{aligned}I_{33}&=\frac{n}{\lambda ^{2}}-\frac{n \exp \lambda }{(\exp \lambda -1)^{2} } \\I_{31}&=I_{13}=-\frac{n}{\sigma }E\left( \frac{\exp \left( -\frac{x-\mu }{\sigma }\right) }{\left( 1+\exp \left( -\frac{x-\mu }{\sigma }\right) \right) ^{2} }\right) \\I_{32}&=I_{23}=-\frac{n}{\sigma }E\left( \frac{x-\mu }{\sigma } \frac{\exp \left( -\frac{x-\mu }{\sigma }\right) }{\left( 1+\exp \left( -\frac{x-\mu }{\sigma }\right) \right) ^{2} }\right) \\I_{11}&=-\frac{n}{\sigma ^{2}}E\left( \frac{1-4\exp \left( -\frac{x-\mu }{\sigma }\right) +\exp \left( \frac{-2(x-\mu )}{\sigma }\right) }{\left( 1+\exp \left( -\frac{x-\mu }{\sigma }\right) \right) ^{2}} \right) +\,\frac{n}{\sigma ^{2}}E\left( \frac{\exp \left( -\frac{x-\mu }{\sigma }\right) -1}{\exp \left( -\frac{x-\mu }{\sigma }\right) +1}\right) ^{2} +\,\frac{n\lambda }{\sigma ^{2}}E\left( \frac{\exp \left( \frac{-2(x-\mu )}{\sigma } \right) -\exp \left( -\frac{x-\mu }{\sigma }\right) }{\left( \exp \left( -\frac{x-\mu }{\sigma }\right) +1\right) ^{2}}\right) \\I_{12}&=I_{21}=-\frac{n}{\sigma ^{2}}E\left( \frac{\exp \left( -\frac{x-\mu }{\sigma }\right) -1 }{\exp \left( -\frac{x-\mu }{\sigma }\right) +1}\right) -\,\frac{n}{\sigma ^2}E \left( \frac{x-\mu }{\sigma }\frac{1-4\exp \left( -\frac{x-\mu }{\sigma }\right) +\exp \left( \frac{-2(x-\mu )}{\sigma }\right) }{\left( \exp \left( -\frac{x-\mu }{\sigma }\right) +1\right) ^{2} }\right) \\&\quad +\,\frac{n}{\sigma ^{2}}E\left( \frac{x-\mu }{\sigma } \left( \frac{\exp \left( -\frac{x-\mu }{\sigma }\right) -1}{\exp \left( -\frac{x-\mu }{\sigma }\right) +1}\right) ^{2}\right) +\,\frac{n\lambda }{\sigma ^{2}}E\left( \frac{\exp \left( -\frac{x-\mu }{\sigma }\right) }{\exp \left( -\frac{x-\mu }{\sigma }\right) +1}\right) \\I_{22}&=-\frac{n}{\sigma ^2}-\frac{2n}{\sigma ^{2}}E\left( \frac{x-\mu }{\sigma } \frac{\exp \left( -\frac{x-\mu }{\sigma }\right) -1}{\exp \left( -\frac{x-\mu }{\sigma }\right) +1}\right) -\,\frac{n}{\sigma ^{2}}E\left( \left( \frac{x-\mu }{\sigma 
}\right) ^{2} \frac{1-4\exp \left( -\frac{x-\mu }{\sigma }\right) +\exp \left( \frac{-2(x-\mu )}{\sigma } \right) }{\left( \exp \left( -\frac{x-\mu }{\sigma }\right) +1\right) ^{2} }\right) \\&\quad +\,\frac{n}{\sigma ^2}E\left( \left( \frac{x-\mu }{\sigma }\right) ^{2} \left( \frac{\exp \left( -\frac{x-\mu }{\sigma }\right) -1}{\exp \left( -\frac{x-\mu }{\sigma }\right) +1}\right) ^{2}\right) +\,\frac{2n\lambda }{\sigma ^{2}}E\left( \frac{x-\mu }{\sigma }\frac{\exp \left( -\frac{x-\mu }{\sigma }\right) }{\left( \exp \left( -\frac{x-\mu }{\sigma }\right) +1\right) ^{2} }\right) \\&\quad +\,\frac{n\lambda }{\sigma ^{2}}E\left( \left( \frac{x-\mu }{\sigma }\right) ^{2}\frac{\exp \left( \frac{-2(x-\mu )}{\sigma }\right) -\exp \left( -\frac{x-\mu }{\sigma }\right) }{\left( \exp \left( -\frac{x-\mu }{\sigma }\right) +1\right) ^{3} }\right) \end{aligned}$$

These expectations must be computed numerically. In the next section, they are computed for a real data set by the Monte Carlo method in order to provide asymptotic confidence intervals for \(\varvec{\theta }=(\mu , \sigma , \lambda )^{\prime }\).

In general, \({\widehat{\varvec{\theta }}}\) is an asymptotically normal and asymptotically efficient estimator of \(\varvec{\theta }\); that is, \({\widehat{\varvec{\theta }}}\) is approximately \(N_3(\varvec{\theta }, {\varvec{I}}_{n}^{-1}(\varvec{\theta }))\) for large n, where \({\varvec{I}}_{n}^{-1}(\varvec{\theta })\) is the inverse of the Fisher information matrix; see Farbod [6].

Application

In this section, we fit the TESL distribution to a real data set and compare the fit with other skew-logistic distributions. The data set consists of 202 observations of hematocrit percentage, collected in a study of how various blood characteristics vary with sport, body size and sex of the athlete (see the ais data set in R). We use this data set to show that the TESL distribution can be a better model than the SLD, PRHLD, ASLD, MSLD, NSLD and skew-t distribution (STD). To compare the models, we used the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Table 3 lists the MLEs of the parameters of the fitted models together with the AIC and BIC values. Based on these statistics, the TESL distribution fits this data set better than the other models. Figure 3 confirms that the TESL distribution gives a good fit for these data.

Fig. 3
figure 3

Plot of fitted pdfs, fitted cdfs and pp plot
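The model-selection criteria used above are the standard ones: AIC \(=2k-2\log L\) and BIC \(=k\ln n-2\log L\) for \(k\) parameters and \(n\) observations, computed from each maximised log-likelihood. A one-line Python helper (the function name is ours):

```python
import math

def aic_bic(loglik, n_params, n_obs):
    """Standard AIC and BIC from a maximised log-likelihood."""
    return 2 * n_params - 2 * loglik, n_params * math.log(n_obs) - 2 * loglik
```

Smaller values of either criterion indicate a better trade-off between fit and model complexity; with \(n=202\) observations, BIC penalises extra parameters more heavily than AIC.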

In the following, we compute the inverse of the Fisher information matrix numerically. The estimated \({\varvec{I}}^{-1}_{n}\) is

$$\begin{aligned} {\varvec{I}}_{n}^{-1} &= \left( \begin{array}{lll} 0.0017 &\quad -\,0.0001 &\quad 0.0003\\ -\,0.0001 &\quad 0.0002 &\quad -\,0.0000\\ 0.0003 &\quad -\,0.0000 &\quad 0.0403\\ \end{array}\right) \end{aligned}$$

Thus, approximate 95 percent confidence intervals for \(\mu\), \(\sigma\) and \(\lambda\) are \(\left( 48.6256, 48.7935\right)\), \(\left( 2.5690, 2.6411\right)\) and \(\left( 5.6767, 6.6440\right)\), respectively.

Table 4 Moments estimation for the TESLD
Table 5 Maximum likelihood for the TESLD

Simulation

In this section, we evaluate the performance of the moment and maximum likelihood estimation methods proposed in the “Estimations” section. For this purpose, samples of size n = 100 are generated using (5), for \(\lambda = -1, 1, 2, 5\), \(\mu = -1, 0, 1, 5\) and \(\sigma =1, 2, 5, 10\). For each sample, the moment and maximum likelihood estimates are computed following the procedures described in the “Estimations” section. We repeat this process 1000 times and compute the average of the estimates (AE), the average bias (AB) and the mean squared error (MSE). The results are reported in Tables 4 and 5.
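A reduced version of this Monte Carlo study (only \(\lambda\), with \(\mu =0\) and \(\sigma =1\) known, and fewer replications) can be sketched in Python as follows; the sample sizes, seed and function names here are ours, chosen for illustration:

```python
import math
import random

def tesl_quantile(u, lam):
    """Inverse cdf (5) of TESL(lambda)."""
    t = math.log(1.0 - u * (1.0 - math.exp(-lam)))
    return math.log(-t / (lam + t))

def mle_lambda(sample, lo=-50.0, hi=50.0):
    """Bisection MLE of lambda from Eq. (10), with mu = 0 and sigma = 1 known."""
    d_bar = sum(1.0 / (1.0 + math.exp(-y)) for y in sample) / len(sample)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        m = 0.5 - mid / 12.0 if abs(mid) < 1e-8 else 1.0 / mid - 1.0 / math.expm1(mid)
        if m > d_bar:        # the score mean is strictly decreasing in lambda
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def simulate(lam0=2.0, n=100, reps=200, seed=1):
    """Average estimate (AE), average bias (AB) and MSE over `reps` samples."""
    rng = random.Random(seed)
    ests = [mle_lambda([tesl_quantile(max(rng.random(), 1e-16), lam0)
                        for _ in range(n)]) for _ in range(reps)]
    ae = sum(ests) / reps
    mse = sum((e - lam0) ** 2 for e in ests) / reps
    return ae, ae - lam0, mse
```

With n = 100 the bias is small and the MSE is of the order of the inverse Fisher information, consistent with the asymptotics discussed in the “Estimations” section.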

Conclusion

In the present paper, we introduced a new skew-logistic distribution (TESL) that is useful for analyzing and modeling unimodal data with skewness. Various mathematical properties of the TESL distribution, such as moments, Rényi entropy and the moment generating function, were derived. The TESL distribution belongs to the exponential family and has closed-form expressions for the pdf, cdf and quantiles. We then fitted the TESL distribution to a real data set and compared it with other skew-logistic distributions. Finally, we compared the performance of the moment and maximum likelihood estimation methods in the “Simulation” section. Throughout the paper, some advantages of the TESL distribution over Azzalini's distributions were established.