1 Introduction

Over the last few decades, subordinators and their stopping times have attracted the attention of many researchers. In general, a subordinator is a non-decreasing Lévy process [54]. Some well-known examples include the gamma, lower incomplete gamma, inverse Gaussian, inverse gamma, \(\alpha \)-stable, tempered stable, geometric stable, iterated geometric stable, and Dickman subordinators [9, 13, 15, 16, 33, 59]. Subordinators play a crucial role in defining a new class of stochastic processes, called subordinated processes, which have applications in finance, in the modelling of anomalous diffusion in statistical physics, and in fractional partial differential equations. Further, these processes have interesting connections to the scaling limits of continuous time random walks [42, 43, 62]. The Dickman subordinator, with Lévy measure proportional to \(\frac{1}{x}\mathbb {1}_{(0, 1)}(x) dx\), can be considered a truncated 0-stable subordinator, by analogy with the \( \alpha \)-stable subordinator whose Lévy measure is [13]

$$\begin{aligned} \nu _\alpha (dx) = \frac{c}{x^{1+\alpha }}\mathbb {1}_{(0, \infty )}(x) dx, \;c>0,\;\alpha \in (0,1), \end{aligned}$$

where

$$\begin{aligned} \mathbb {1}_{(a,b)}(x) = {\left\{ \begin{array}{ll} 0 ,\; &{} x \notin (a,b), \\ 1 , \; &{} x \in (a, b). \end{array}\right. } \end{aligned}$$

The marginal density of the Dickman subordinator, related to the Dickman function, arises in many areas of mathematics, ranging from number theory and combinatorics to population genetics and the theory of random trees; see e.g. [20, 45, 47, 48]. The Dickman function was introduced by Karl Dickman in 1930 [20] and later studied by de Bruijn [18]; it appears even earlier in an unpublished paper of Ramanujan (see [51]). Applications of Dickman subordinators to marginally relevant disordered systems are given in [13]. For a recent survey on the Dickman distribution see [45]. For the infinite divisibility of two widely studied probability distributions associated with the Dickman function see [23]; interestingly, one of these is infinitely divisible and the other is not. Size-biased sampling from the Dickman distribution is explored in [28], and the simulation of Dickman random numbers is discussed in [19].

The first passage times of subordinators, also called inverse subordinators, are of interest due to their connections with fractional derivatives. First passage times are stopping-time processes which arise naturally in areas such as finance, insurance, process control and survival analysis. Inverse subordinators are used as time changes for Brownian motion and the Poisson process; see e.g. [1, 7, 26, 35, 36, 58]. The governing equations for Poisson processes time-changed by a general inverse subordinator were studied by Buchak and Sakhno in [12]. In papers by Kochubei [30] and Toaldo [58], new types of differential operators related to Bernstein functions are introduced, which generalize the classical Caputo-Djrbashian and Riemann-Liouville fractional derivatives closely associated with the inverse stable subordinator; see e.g. [44]. The concept of general fractional calculus (GFC) was first discussed in [30]. In the papers [30, 31] the general fractional integral (GFI) and general fractional derivative (GFD) are defined; this approach is based on Sonin kernels and their associated kernels. The most important results regarding GFC are discussed in [37,38,39], which include the general fundamental theorems for the GFI and GFD and extend the theory to GFC of arbitrary order. In [37], an operational calculus for equations with GFDs is proposed.

In this article, we introduce the inverse Dickman subordinator, the right-continuous inverse of the Dickman subordinator. Closed-form representations of the probability density functions of the Dickman subordinator and its inverse are given. As an application, the Dickman subordinator and its inverse are used as time changes to define different kinds of subordinated Poisson processes. The rest of the paper is organized as follows. In Section 2, generalized fractional derivatives are defined, and the main properties of the Dickman subordinator are discussed. The inverse Dickman subordinator and its distributional properties are established in Section 3. Section 4 deals with the space-fractional Poisson-Dickman, time-fractional Poisson-Dickman and fractional compound Poisson-Dickman processes. Further, in Section 4 we introduce time-changed versions of the Poisson process, the Poisson process of order k, the Skellam process and the non-homogeneous Poisson process, using the Dickman subordinator and its inverse as time changes. The last section concludes.

2 Dickman subordinator and its properties

In this section, we provide some basic definitions and results to be used in this paper. Further, we introduce the Dickman subordinator and discuss its distributional properties. The governing fractional partial differential equation and the asymptotic form of the fractional-order moments are given.

2.1 Fractional and generalized fractional derivatives

The Caputo-Djrbashian (CD) fractional derivative of a function \(g(t),\;t\ge 0\) of order \(\alpha \in (0,1],\) is defined as

$$\begin{aligned} \mathcal {D}_{t}^{\alpha }g(t)=\frac{1}{\varGamma {(1-\alpha )}}\int _{0}^{t} \frac{dg(\tau )}{d{\tau }}\frac{d{\tau }}{(t-\tau )^{\alpha }},\;\alpha \in (0,1]. \end{aligned}$$

Note that the class of functions for which the CD derivative is well defined is discussed in [43] (Sec. 2.2, 2.3). For a comprehensive reference on fractional derivatives and applications see [49]. The Riemann-Liouville tempered fractional derivative is defined by (see [2])

$$\begin{aligned} \mathbb {D}_{t}^{\alpha ,\zeta }g(t)=e^{-\zeta t}\mathbb {D}_{t}^{\alpha }[e^{\zeta t}g(t)]-\zeta ^{\alpha }g(t),\;\zeta >0,\;\alpha \in (0,1], \end{aligned}$$

where

$$\begin{aligned} \mathbb {D}_{t}^{\alpha }g(t)=\frac{1}{\varGamma (1-\alpha )}\frac{d}{dt} \int _{0}^{t}\frac{g(u)du}{(t-u)^{\alpha }}, \end{aligned}$$

is the usual Riemann-Liouville fractional derivative of order \(\alpha \in (0,1)\). Let f be a Bernstein function with an integral representation

$$\begin{aligned} f(s)=a+bs+\int _{0}^{\infty }(1-e^{-xs})\nu (dx),\;\;s>0, \end{aligned}$$

where the non-negative Lévy measure \(\nu \) on \(\mathbb {R}_{+}\cup \{0\}\) satisfies

$$\begin{aligned} \int _{0}^{\infty }(x\wedge 1)\nu (dx)<\infty ,\;\;\nu ([0,\infty ))=\infty . \end{aligned}$$

We will use the generalized Caputo-Djrbashian derivative with respect to the Bernstein function f, which is defined on the space of absolutely continuous functions as follows (see [58], Def. 2.4)

$$\begin{aligned} \mathcal {D}_{t}^{f}u(t)=b\frac{d}{dt}u(t)+\int _{0}^{t} \frac{\partial }{\partial t} u(t-s)\bar{\nu }(s)ds, \end{aligned}$$
(2.1)

where \(\bar{\nu }(s)=a+\nu (s,\infty )\) is the tail of the Lévy measure. The generalized Riemann-Liouville derivative associated with the Bernstein function f is given by (see [30] and [58], Def. 2.1)

$$\begin{aligned} \mathbb {D}_{t}^{f}u(t)=b\frac{d}{dt}u(t)+\frac{d}{dt}\int _{0}^{t} u(t-s)\bar{\nu } (s)ds. \end{aligned}$$
(2.2)

The relation between \(\mathbb {D}_{t}^{f}\) and \(\mathcal {D}_{t}^{f}\) is (see [30] and [58], Prop. 2.7)

$$\begin{aligned} \mathbb {D}_{t}^{f}u(t)=\mathcal {D}_{t}^{f}u(t)+\bar{\nu }(t)u(0). \end{aligned}$$
(2.3)

For more details regarding the generalization of CD derivatives and some historical comments, see [6] and [52].

2.2 Dickman subordinator

In this subsection, the \(\alpha \)-stable and the Dickman subordinators are discussed. Let \(D_{\alpha }(t)\) be the one-sided stable Lévy process, also known as the \(\alpha \)-stable subordinator, with Laplace transform (LT) (see e.g. [53])

$$\begin{aligned} \mathbb {E}(e^{-s D_{\alpha }(t)}) = e^{-t s^{\alpha }},\;s>0,\;\alpha \in (0,1), \end{aligned}$$

with Lévy measure

$$\begin{aligned} \nu _\alpha (du)=\frac{\alpha }{\varGamma (1-\alpha )}u^{-1-\alpha }du. \end{aligned}$$

The right tail of the \(\alpha \)-stable subordinator behaves like

$$\begin{aligned} \mathbb {P}\left( D_{\alpha }(t) > \tau \right) \sim \frac{t\, \tau ^{-\alpha }}{\varGamma {(1-\alpha )}},\;\tau \rightarrow \infty . \end{aligned}$$

Note that for the \(\alpha \)-stable subordinator, the derivative defined in eq.(2.1) becomes the Caputo-Djrbashian (CD) fractional derivative \(\mathcal {D}^{\alpha }_{t}\) of order \(\alpha \in (0,1)\):

$$\begin{aligned} \mathcal {D}^{\alpha }_{t}u(t)=\frac{1}{\varGamma (1-\alpha )}\int _{0}^{t}\frac{\partial u(s)}{\partial s}\frac{ds}{(t-s)^{\alpha }} =\mathbb {D}^{\alpha }_{t}u(t)- \frac{u(0)}{\varGamma (1-\alpha )\, t^{\alpha }}, \end{aligned}$$

where \(\mathbb {D}^{\alpha }_{t}\) is the Riemann-Liouville fractional derivative of order \(\alpha \in (0,1)\).

Next, we introduce the Dickman subordinator (DS). Let D(t) be the DS with probability density function f(x, t). The DS is an increasing Lévy process with LT (see [13])

$$\begin{aligned} \mathcal {L}_{x}\{f(x,t)\}=\tilde{f}(s,t)&=\mathbb {E}(e^{-sD(t)}) \nonumber \\&=\exp \left( -t\theta \int _{0}^{1}\frac{(1-\text {e}^{-su})}{u}du\right) =\text {e}^{-t\theta \text {Ein}(s)}, \end{aligned}$$
(2.4)

where \(s > 0\) and \(\text {Ein}(\cdot )\) is the modified exponential integral defined as (see [41])

$$\begin{aligned} \text {Ein}(z)=\int _{0}^{z}\frac{1-e^{-u}}{u}du=\sum _{k=1}^{\infty }\frac{ (-1)^{k+1}z^{k}}{k\cdot k!}. \end{aligned}$$
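For numerical work it is convenient to note that \(\text {Ein}(z)=\gamma +\log z+E_{1}(z)\) for \(z>0\), where \(E_1\) is the exponential integral and \(\gamma \) is the Euler-Mascheroni constant. A minimal Python sketch (function names and truncation are our own choices) cross-checking the series against this identity and evaluating the LT in eq.(2.4):

```python
# A small sketch: Ein via its power series versus Ein(z) = gamma + log(z) + E_1(z),
# and the DS Laplace transform of eq. (2.4). Naming and truncation are our choices.
import math
from scipy.special import exp1  # the exponential integral E_1(z)

EULER_GAMMA = 0.5772156649015329

def ein_series(z, terms=60):
    """Ein(z) = sum_{k>=1} (-1)^{k+1} z^k / (k * k!)."""
    return sum((-1) ** (k + 1) * z**k / (k * math.factorial(k))
               for k in range(1, terms + 1))

def dickman_lt(s, t, theta=1.0):
    """E[exp(-s D(t))] = exp(-t * theta * Ein(s)), eq. (2.4)."""
    return math.exp(-t * theta * ein_series(s))

z = 2.0
print(ein_series(z), EULER_GAMMA + math.log(z) + exp1(z))  # the two should agree
print(dickman_lt(2.0, 1.0))
```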

It is a “truncated 0-stable subordinator”, by analogy with the well-known \( \alpha \)-stable subordinator: here \(\alpha =0,\) and to satisfy the integrability condition \(\int _{0}^{\infty }\min {(1,x)}\,\nu (dx)<\infty \), the Lévy measure is restricted to (0, 1) and is given by

$$\begin{aligned} \nu _{D}(du)=\frac{\theta }{u}\mathbb {1}_{(0,1)}(u)du,\; \theta >0. \end{aligned}$$

The characteristic function of the Dickman distribution is

$$\begin{aligned} \mathbb {E}(\text {e}^{izD(1)})=\exp \left( -\theta \int _{0}^{1}\frac{(1-\text {e}^{izu})}{u} du\right) ,\;z\in \mathbb {R}. \end{aligned}$$

Note that the DS is infinitely divisible and self-decomposable (see [13, 19]). Indeed, it is well known that the class of self-decomposable distributions is a subclass of the infinitely divisible distributions; see e.g. [54]. The process has strictly increasing sample paths.

2.3 The density of the Dickman subordinator

The density of the DS is derived in [13, 17, 25, 27] using different approaches. Following [13], the density is

$$\begin{aligned} f(x,t) = {\left\{ \begin{array}{ll} {\displaystyle \frac{ \text {e}^{-\gamma t \theta }x^{\theta t - 1}}{ \varGamma (\theta t) }}, \; &{} x \in (0,1] \\[2ex] {\displaystyle \frac{ \text {e}^{-\gamma t \theta }x^{\theta t - 1}}{ \varGamma (\theta t) } - t\theta x^{\theta t - 1} \int _0^{x-1} \frac{f(a,t)}{ (1+a)^{t\theta }} \, da }, \; &{} x \in (1,\infty ) . \end{array}\right. } \end{aligned}$$
(2.5)

where \(\gamma = -\int _{0}^{\infty }(\log u)\, \text {e}^{-u}\, du \approx 0.577\) is the Euler-Mascheroni constant. In [17], Covo provided the cumulative distribution function of the DS as follows

(2.6)

where

$$\begin{aligned} C_{1;m}(x)=\{u \in \mathbb {R}^m : 1<u_1< \cdots < u_m,\; u_1+u_2+\cdots +u_m \le x\} , \end{aligned}$$

and then the density f(x, t) of the DS as

$$\begin{aligned} f(x,t) = \frac{\text {e}^{-\gamma t \theta }}{\varGamma (t\theta )}\left[ x^{t\theta -1} + \sum _{m=1}^{\infty } (-t\theta )^{m} \int _{C_{1;m}(x)} \Big ( x-\sum _{j=1}^{m} u_j\Big ) ^{t\theta -1} \prod _{j=1}^{m}\frac{du_j}{u_j}\right] . \end{aligned}$$
(2.7)

In [25], Griffiths provided a series representation of f(x, t) with an algorithm for the calculation of the series coefficients. Next, we provide an alternative series representation for f(x, t).

Proposition 1

The density f(x, t) of the DS can also be written as

$$\begin{aligned} f(x,t) = \frac{\text {e}^{-\gamma t \theta }\, x^{t\theta - 1}}{\varGamma (t\theta )} \sum _{k=0}^{\infty } (-t\theta )^{k}\, \varphi _k(x,t) , \end{aligned}$$
(2.8)

where \(\varphi _k(x,t)\) is given by

$$\begin{aligned} \varphi _0(x,t) = 1, \end{aligned}$$

and

$$\begin{aligned} \varphi _k(x,t) = {\left\{ \begin{array}{ll} 0 , \quad &{} x \le k , \\[1ex] \displaystyle \int _{k-1}^{x-1} (1+a_k)^{-t\theta } a_k^{t\theta -1} \varphi _{k-1}(a_k,t)\, da_k , \quad &{} x > k , \end{array}\right. } \end{aligned}$$
(2.9)

for \(k = 1, 2,3,\ldots \).

Proof

See Appendix 6. \(\square \)

It is interesting to note that using eq.(2.9) recursively, we can write \(\varphi _k(x,t)\) as an iterated integral, namely

$$\begin{aligned} \begin{aligned} \varphi _k(x,t) =&\int _{k-1}^{x-1} \int _{k-2}^{a_k-1}\cdots \int _1^{a_3-1}\int _0^{a_2-1} (a_1 a_2\cdots a_{k-1}a_k)^{t\theta -1} \\&\cdot \frac{da_1}{(1+a_1)^{t\theta }} \frac{da_2}{(1+a_2)^{t\theta }} \cdots \frac{ da_{k-1}}{(1+a_{k-1})^{t\theta }} \frac{da_k}{(1+a_k)^{t\theta }} , \end{aligned} \end{aligned}$$
(2.10)

and changing the integration variables from \((a_1,\ldots ,a_k)\) to \( (u_1,\ldots ,u_k)\) given by

$$\begin{aligned} a_i = \frac{1}{u_i}\left( x - \sum _{j=i}^k u_j \right) \qquad (i=1,\ldots ,k) , \end{aligned}$$
(2.11)

and following the steps of [22] we obtain eq.(2.7). Indeed, from eq.(2.11) we see that

$$\begin{aligned} \frac{\partial a_m}{\partial u_j} = 0 , \quad j = 1,2,\ldots ,m-1, \quad m = 2,3,\ldots , k , \end{aligned}$$

and consequently

$$\begin{aligned} J = \frac{\partial (a_1,a_2,\ldots ,a_k)}{\partial (u_1,u_2,\ldots ,u_k)} = \frac{\partial a_1}{\partial u_1}\frac{\partial a_2}{\partial u_2}\cdots \frac{\partial a_k}{\partial u_k} = (-1)^k \frac{(1+a_1)}{u_1}\frac{(1+a_2)}{u_2} \cdots \frac{(1+a_k)}{u_k} . \end{aligned}$$

The intervals of integration

$$\begin{aligned} k-1\le \; a_k\le & {} x-1 ,\\ k -(j+1)\le & {} \; a_{k-j} \le a_{k-(j-1)}-1 , \; (j=1,\ldots , k-1), \end{aligned}$$

in eq.(2.10), in terms of \((u_1,u_2,\ldots ,u_k)\), become

$$\begin{aligned} \begin{aligned} 1\le \;&u_k \le x/k , \\ u_{k-(j-1)} \le \;&u_{k-j} \le \frac{1}{k-j}\left( x-\sum _{i=0}^{j-1} u_{k-i}\right) , \; ( j = 1,\ldots ,k-1) . \end{aligned} \end{aligned}$$

Collecting the above expressions in eq.(2.10) we obtain

$$\begin{aligned} \begin{aligned} \varphi _k(x,t) = x^{1-t\theta }&\int _1^{x/k}\cdots \int _{u_{k-(j-1)}}^{\left[ x-\sum _{i=0}^{j-1}u_{k-i}\right] /(k-j)} \cdots \int _{u_2}^{x-(u_k+\cdots +u_2)} \\&\left[ x-(u_1 + u_2 + \cdots + u_k)\right] ^{t\theta -1} \frac{du_1}{u_1}\cdots \frac{d u_{k-j}}{u_{k-j}}\cdots \frac{d u_k}{u_k} , \end{aligned} \end{aligned}$$

which, upon substitution into eq.(2.8), gives eq.(2.7) (for more details, see [22]).

Proposition 2

The function \(\varphi _k(x,t)\) in Proposition 1 can be written as

$$\begin{aligned} \varphi _k(x,t) = \sum _{n=0}^\infty C_n^k(t\theta ) x_{(k)}^{n+t\theta +k-1} , \quad (k=1,2,\ldots ) , \end{aligned}$$
(2.12)

with

$$\begin{aligned} x_{(k)} = \frac{(x-k)_+}{x-k+1} , \qquad (k=1,2,\ldots ) \end{aligned}$$
(2.13)

and

$$\begin{aligned} (x-k)_+ = {\left\{ \begin{array}{ll} x-k , \; &{} x > k , \\ 0 , \; &{} x \le k , \end{array}\right. } \end{aligned}$$

and \(C_n^k(t\theta )\) is defined recursively as

$$\begin{aligned} C_n^1(t\theta ) = \frac{1}{t\theta + n} , \end{aligned}$$

and

$$\begin{aligned} C_n^k(t\theta ) = \frac{k[(k-1)/k]^{t\theta }}{t\theta +k-1+n} \sum _{j=0}^n \frac{\sigma _{n-j}^{k-1}(t\theta ) (t\theta )_j}{j!\, [k(k-1)]^{j+1}} {}_2{F}{}_{1}(-j,-j;1-t\theta -j;k(2-k)) , \end{aligned}$$

for \(k = 2, 3,\ldots \), where \((a)_j = \varGamma (a+j)/\varGamma (a)\) is the Pochhammer symbol, \({}_{2}{F}{}_{1}(\cdot )\) is the Gauss hypergeometric function, and we used the notation

$$\begin{aligned} \sigma _{n-j}^{k-1}(t\theta ) = \sum _{m=0}^{n-j} C_m^{k-1}(t\theta ) . \end{aligned}$$

Proof

See Appendix 7. \(\square \)

We should note that although the series in eq.(2.12) has the same form as the series obtained in [25], the algorithms for calculating the series coefficients are different. In our opinion, the expression in eq.(2.12) is more computationally friendly. In Figure 1 we plot f(x, t) for some fixed values of x and t, and in Figure 2 we give the 3D plot of f(x, t). These plots were made with Mathematica 13.3 using the series representation of \(\varphi _k(x,t)\) as in eq.(2.12).

Fig. 1 Plots of f(x, t) for fixed \(t \in \{1/2, 1, 3/2\}\) (left) and for fixed \(x \in \{1/2, 1, 3/2\}\) (right).

Fig. 2 3D-plot of f(x, t) for \(x \in (0,4]\) and \(t \in (0,4].\)
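For reproducibility, a Python sketch of this computation is given below. It implements the recursion of Proposition 2 for \(C_n^k(t\theta )\), with the terminating \({}_2F_1\) polynomial evaluated as a finite sum, and combines eqs.(2.8), (2.12) and (2.13) to evaluate f(x, t). All function names and truncation orders are our own choices, not part of the original algorithm.

```python
# A minimal sketch of the series evaluation of f(x, t), assuming the forms of
# eqs. (2.8), (2.12), (2.13); truncation orders and naming are our choices.
import math

def poch(a, r):
    """Pochhammer symbol (a)_r = a (a+1) ... (a+r-1)."""
    out = 1.0
    for i in range(r):
        out *= a + i
    return out

def hyp2f1_poly(j, c, z):
    """2F1(-j, -j; c; z) as a terminating sum (j a non-negative integer)."""
    return sum(poch(-j, r) ** 2 / (poch(c, r) * math.factorial(r)) * z**r
               for r in range(j + 1))

def C(n, k, a, _cache={}):
    """C_n^k(a), a = t*theta, via the recursion of Proposition 2."""
    key = (n, k, a)
    if key not in _cache:
        if k == 1:
            _cache[key] = 1.0 / (a + n)
        else:
            s = 0.0
            for j in range(n + 1):
                sigma = sum(C(m, k - 1, a) for m in range(n - j + 1))
                s += (sigma * poch(a, j) / (math.factorial(j) * (k * (k - 1)) ** (j + 1))
                      * hyp2f1_poly(j, 1 - a - j, k * (2 - k)))
            _cache[key] = k * ((k - 1) / k) ** a / (a + k - 1 + n) * s
    return _cache[key]

def phi(k, x, a, nmax=60):
    """phi_k(x, t) via eq. (2.12), truncated at nmax terms."""
    if k == 0:
        return 1.0
    if x <= k:
        return 0.0
    xk = (x - k) / (x - k + 1)  # x_(k), eq. (2.13)
    return sum(C(n, k, a) * xk ** (n + a + k - 1) for n in range(nmax + 1))

def dickman_density(x, t, theta=1.0):
    """f(x, t) via the series of eq. (2.8); only k < x contributes."""
    a = t * theta
    series = sum((-a) ** k * phi(k, x, a) for k in range(int(x) + 1))
    return math.exp(-0.5772156649015329 * a) * x ** (a - 1) / math.gamma(a) * series

# On (0, 1] this reduces to the first branch of eq. (2.5):
print(dickman_density(0.5, 1.0), math.exp(-0.5772156649015329))
```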

Next, the convolution-type fractional derivatives, or non-local operators, corresponding to the DS are defined. The tail of the Lévy measure of the DS is

$$\begin{aligned} \bar{\nu }_D(s)= \nu _D(s,\infty )= \int _{s}^{\infty }\frac{\theta }{u}\mathbb {1} _{(0, 1)}(u) du = \left\{ \begin{array}{ll} -\theta \log s, &{} s \in (0, 1] \\ 0, &{} s \in (1, \infty ). \end{array} \right. \end{aligned}$$

Using eq.(2.1), the generalized Caputo fractional derivative corresponding to the DS is of the form:

$$\begin{aligned} \mathcal {D}^{\theta }_t u(t)= \left\{ \begin{array}{ll} -\theta \frac{d}{dt}\int _{0}^{t} \ u(s) \log (|t-s|) ds + u(0) \log (t^{\theta }), &{} \; t \in (0, 1] \\[1ex] 0, &{} \; t \in (1, \infty ). \end{array} \right. \end{aligned}$$
(2.14)

Further, the generalized Riemann-Liouville fractional derivative corresponding to DS is

$$\begin{aligned} \mathbb {D}^{\theta }_t u(t)= \left\{ \begin{array}{ll} -\theta \frac{d}{dt}\int _{0}^{t} \ u(s) \log (|t-s|) ds, &{} \; t \in (0, 1] \\[1ex] 0, &{} \; t \in (1, \infty ). \end{array} \right. \end{aligned}$$
(2.15)

Now, taking the LT on both sides of eq.(2.15) yields

$$\begin{aligned} \begin{aligned} \mathcal {L}_{t}\{\mathbb {D}^{\theta }_t u(t)\}&=\mathcal {L}_{t}\left\{ -\theta \frac{d}{dt}\int _{0}^{t} u(s) \log (|t-s|) ds\right\} \\&= -\theta y \mathcal {L}_{t}\{\log t \} \mathcal {L}_{t}\{u(t)\}\;\; \text {(see}\,\, [24], \text {entry 8.367 (12), p. 906)} \\&= \tilde{u}(y) \theta \int _{0}^{1}\frac{(1-e^{-yt})}{t}dt. \end{aligned} \end{aligned}$$

Similarly, the LT of eq.(2.14) is

$$\begin{aligned} \mathcal {L}_{t}\{\mathcal {D}^{\theta }_t u(t)\}= \tilde{u}(y) \theta \int _{0}^{1}\frac{(1-e^{-yt})}{t}dt-u(0)\frac{\theta }{y}\int _{0}^{1}\frac{ (1-e^{-yt})}{t}dt. \end{aligned}$$
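Behind these computations is the identity \(\mathcal {L}_{t}\{\bar{\nu }_D\}(y)=\theta \,\text {Ein}(y)/y\) (equivalently, \(\phi (y)=y\,\mathcal {L}_{t}\{\bar{\nu }_D\}(y)\)), which is easy to confirm numerically; a short Python sketch (our naming and choice of values):

```python
# Numerical check that L{nu_bar_D}(y) = theta * Ein(y) / y (a sketch).
import math
from scipy.integrate import quad
from scipy.special import exp1

def ein(y):
    return 0.5772156649015329 + math.log(y) + exp1(y)  # Ein(y) for y > 0

theta, y = 1.0, 3.0
lhs, _ = quad(lambda s: -theta * math.log(s) * math.exp(-y * s), 0.0, 1.0)
print(lhs, theta * ein(y) / y)  # the two values should agree
```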

Proposition 3

The probability density function f(x, t) of D(t) satisfies the following fractional differential equation

$$\begin{aligned} \left\{ \begin{array}{lll} \frac{\partial }{\partial t}f(x,t)= -\mathbb {D}^{\theta }_{x}f(x,t) &{} x>0,\; t>0, &{} \\[1ex] f(x,0)=\delta (x), &{} &{} \\[1ex] f(0,t)=0. &{} &{} \end{array} \right. \end{aligned}$$

Proof

We follow the same arguments as in Theorem 4.1 of [58]. Taking the LT on both sides of the above equation with respect to x leads to

$$\begin{aligned} \frac{\partial }{\partial t}\tilde{f}(s,t)=-\theta \int _{0}^{1}\frac{(1-e^{-su})}{u}du \; \tilde{f}(s,t). \end{aligned}$$

Further, using the LT with respect to the time variable t and \(\tilde{f}(s,0)=1\), it follows

$$\begin{aligned} \tilde{\tilde{f}}(s,y)=\frac{1}{y+\theta \int _{0}^{1}\frac{(1-e^{-su})}{u}du}. \end{aligned}$$
(2.16)

The result follows by comparing eq.(2.16) and the LT of eq.(2.4) with respect to t.

\(\square \)

Next, we discuss the asymptotic behavior of the q-th order moments \(\mathbb {E}[D(t)^q]\), \(0<q<1\), of the DS.

Proposition 4

For \(0<q<1\), the asymptotic behavior of the q-th order moments of DS is given by

$$\begin{aligned} \mathbb {E}[D(t)^q]\sim \theta ^q t^q,\;&\text{ as } t\rightarrow \infty . \end{aligned}$$

Proof

Using the result in [34],

$$\begin{aligned} \mathbb {E}[D(t)^q]&=\frac{(-1)}{\varGamma (1-q)}\int _{0}^{\infty }{\frac{d}{ds} \exp \left( -t\theta \int _{0}^{1}\frac{(1-e^{-su})}{u}du \right) s^{-q}}ds\\&= \frac{t \theta }{\varGamma (1-q)}\int _{0}^{\infty } s^{-q-1}\exp \left( -t\theta \int _{0}^{1}\frac{(1-e^{-su})}{u}du \right) (1-e^{-s})ds. \end{aligned}$$

By choosing \(f(s)=\theta \int _{0}^{1}\frac{(1-e^{-su})}{u}du \) and \(g(s)=s^{-q-1}(1-e^{-s})\), it follows

$$\begin{aligned} f(s) = \theta \sum _{k=0}^{\infty }\frac{(-1)^{k}s^{k+1}}{(k+1) (k+1)!} =f(0)+\sum _{k=0}^{\infty }{a_{k}s^{k+\beta }}, \end{aligned}$$

where \(f(0)=0\), \(a_{0}=\frac{\theta }{1\cdot 1!}=\theta \) and \(\beta =1\). Further,

$$\begin{aligned} g(s)=\sum _{k=0}^{\infty }\frac{(-1)^{k} s^{k-q}}{(k+1)!}=\sum _{k=0}^{\infty }{b_{k}s^{k+\gamma -1}}, \end{aligned}$$

where \(b_{0}=1\), \(b_{1}=-1/2!\) and \(\gamma =1-q>0\). Using the Laplace-Erdélyi theorem (see [21]), we have

$$\begin{aligned} \mathbb {E}[{D(t)}^q] \sim \frac{t \theta }{\varGamma (1-q)}\sum _{k=0}^{\infty }{\varGamma (k+1-q)\frac{C_{k}}{t^{k+1-q}}}, \end{aligned}$$
(2.17)

where \(C_{k}\) in term of coefficients \(a_{k}\) and \(b_{k}\) is given by

$$\begin{aligned} C_{k}=\frac{1}{a_{0}^{(k+\gamma )/ \beta }}\sum _{j=0}^{k}{b_{k-j}\sum _{i=0}^{j} \left( {\begin{array}{c}-\frac{k+\gamma }{\beta }\\ i\end{array}}\right) \frac{1}{a_{0}^{i}}{\overline{B}}_{j,i}(a_{1},a_{2},...a_{j-i+1})}, \end{aligned}$$

where \({\overline{B}}_{j,i}\) are the partial ordinary Bell polynomials (see e.g. [3]). The Bell polynomials arise naturally from differentiating a composite function n times and have important applications in combinatorics, statistics, the numerical solution of non-linear differential equations and other fields; they are given by (see e.g. [3, 8, 14])

$$\begin{aligned} \overline{B}_{j,i}(a_{1},a_{2},...a_{j-i+1})=\sum \frac{i!}{m_1!m_2!\cdots m_{j-i+1}!}{a_1}^{m_1}{a_2}^{m_2}\cdots {a_{j-i+1}}^{m_{j-i+1}}, \end{aligned}$$

where the sum is taken over the sequences satisfying

$$\begin{aligned} m_1+m_2+\cdots +m_{j-i+1}=i, m_1+2m_2+\cdots +(j-i+1)m_{j-i+1}=j, \end{aligned}$$

where \(m_1,m_2,\ldots ,m_{j-i+1} \ge 0.\) For large t, the dominant term in eq.(2.17) is the \(k=0\) term, which implies

$$\begin{aligned} \mathbb {E}[{D(t)}^q] \sim t^{q} \theta ^q,\;\text {as}\;t\rightarrow \infty . \end{aligned}$$

\(\square \)
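As a quick numerical illustration of Proposition 4, one can use the fact that D(t) has a generalized Dickman distribution with parameter \(\theta t\) and sample it through the truncated product series \(D(t)\overset{d}{=}\sum _{j}\prod _{i\le j} U_i^{1/(\theta t)}\) (cf. the simulation algorithm in Section 3). The sketch below, with sample size and truncation depth chosen by us, compares the empirical q-th moment with the asymptote \(\theta ^q t^q\):

```python
# Monte Carlo check of Proposition 4 (a sketch; depth and sample size are our choices).
import numpy as np

rng = np.random.default_rng(0)

def dickman_samples(a, size, depth=400):
    """Samples of D(t) ~ GD(a), a = theta*t, via the truncated product series."""
    total, prod = np.zeros(size), np.ones(size)
    for _ in range(depth):
        prod *= rng.random(size) ** (1.0 / a)
        total += prod
    return total

theta, q, t = 1.5, 0.5, 10.0
emp = np.mean(dickman_samples(theta * t, 100_000) ** q)
print(emp, (theta * t) ** q)  # empirical q-th moment vs theta^q t^q
```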

3 Inverse Dickman subordinator and its properties

Let \(E^{D}(t)\) be the right-continuous inverse of the DS D(t), defined by

$$\begin{aligned} E^{D}(t)= \inf \{u>0:\; D(u)>t\}, \; t>0. \end{aligned}$$

The process \(E^{D}(t)\) is called the inverse of the DS (IDS). It is non-decreasing and its sample paths are almost surely continuous [10]. Let h(x, t) be the density of the IDS. The LT of h(x, t) with respect to t is ([42])

$$\begin{aligned} \mathcal {L}_{t}\{h(x,t)\}=\tilde{h}(x,y)=\int _{0}^{\infty } e^{-yt}h(x,t)dt= \frac{\phi (y)}{y}e^{-x \phi (y)}, \end{aligned}$$

where \(\phi (y)= \theta \int _{0}^{1}\frac{(1-e^{-yu})}{u}du\). Again taking the LT with respect to x:

$$\begin{aligned} \mathcal {L}_{x}[\mathcal {L}_{t}\{h(x,t)\} ]=\tilde{\tilde{h}}(s,y)=\mathcal {L }_{x}\{\tilde{h}(x,y)\}= \frac{\phi (y)}{y(s+\phi (y))}. \end{aligned}$$
(3.1)

Note that \(E^{D}(t)\) is non-Markovian with non-stationary increments. The tail probability of the IDS is given by

$$\begin{aligned} \begin{aligned} \mathbb {P}\{E^{D}(t) > x\}= \mathbb {P}\{D(x) \le t\}&= \mathbb {P} \{e^{-uD(x)} \ge e^{-ut}\} \quad \text {(Markov inequality)} \\&\le \exp {\left[ ut- x\theta \sum _{k=1}^{\infty }\frac{(-1)^{k+1}u^k}{k \cdot k!} \right] } = \exp \left[ ut - x\theta \,\text {Ein}(u)\right] ,\; u>0, \end{aligned} \end{aligned}$$
(3.2)

which implies that all moments of \(E^{D}(t)\) are finite, i.e., \(\mathbb {E} [E^{D}(t)^k] < \infty \) for any \(k > 0\).
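Numerically, the bound in eq.(3.2) can be optimized over the free parameter u; a sketch (our naming and choice of values) illustrating the fast decay in x:

```python
# Optimizing the Chernoff-type bound (3.2): inf_u exp(u*t - x*theta*Ein(u)) (a sketch).
import math
from scipy.optimize import minimize_scalar
from scipy.special import exp1

def ein(u):
    return 0.5772156649015329 + math.log(u) + exp1(u)

def tail_bound(x, t, theta=1.0):
    """Upper bound for P(E^D(t) > x), minimized over u."""
    res = minimize_scalar(lambda u: u * t - x * theta * ein(u),
                          bounds=(1e-8, 500.0), method="bounded")
    return math.exp(res.fun)

for x in (2, 5, 10, 20):
    print(x, tail_bound(x, t=1.0))  # decays very fast in x, so all moments are finite
```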

3.1 The density of the inverse Dickman subordinator

Proposition 5

The density h(x, t) of the IDS \(E^{D}(t)\) is given by

$$\begin{aligned} h(x,t) = -\theta \,\frac{\text {e}^{-\gamma x\theta }}{\varGamma (x\theta )} \sum _{k=0}^{\infty } (-x\theta )^{k}\, \psi _k(x,t) , \end{aligned}$$
(3.3)

where

$$\begin{aligned} \psi _0(x,t) = \int _0^t H(1-t+\tau )\log {(t-\tau )} \tau ^{x\theta -1} \, d\tau , \end{aligned}$$

or

$$\begin{aligned} \psi _0(x,t) = \int _{\text {max}(t-1,0)}^t \log {(t-\tau )} \tau ^{x\theta -1}\, d\tau , \qquad (t \ge 0) , \end{aligned}$$

and

$$\begin{aligned} \psi _k(x,t) = \sum _{n=0}^\infty C_n^k(x\theta ) \mathcal {K}_{k,n}(x,t) \end{aligned}$$
(3.4)

with

$$\begin{aligned} \mathcal {K}_{k,n}(x,t) = \int _0^t H(1-t+\tau )\log {(t-\tau )} \tau ^{x\theta -1} \tau _{(k)}^{x\theta +k+n-1} \, d\tau , \end{aligned}$$
(3.5)

where \(H(\cdot )\) is the Heaviside step function, or

$$\begin{aligned} \mathcal {K}_{k,n}(x,t) = \int _{\text {max}(t-1,k)}^t \log {(t-\tau )} \tau ^{x\theta -1} \tau _{(k)}^{x\theta +k+n-1} \, d\tau , \qquad (t \ge k), \end{aligned}$$

where we used the fact that \(\tau _{(k)}\) is such that \(\tau _{(k)} = 0\) for \( \tau \le k\), and thus \(\mathcal {K}_{k,n}(x,t) = 0\) for \(t \le k\).

Proof

Let us consider the LT

$$\begin{aligned} H(x,s) = \mathcal {L}_t[h(x,t);s] . \end{aligned}$$

Recalling that

$$\begin{aligned} F(k,t) = \mathcal {L}_x [f(x,t);k] = \text {e}^{-t\theta \text {Ein}(k)} , \end{aligned}$$

where \(\text {Ein}(k)\) is the modified exponential integral, we have

$$\begin{aligned} H(x,s) = \frac{\theta \text {Ein}(s)}{s} \text {e}^{-x\theta \text {Ein}(s)} . \end{aligned}$$
(3.6)

From the fact that

$$\begin{aligned} H(x,s) = H(0,s) F(s,x) , \end{aligned}$$

we can write the density of the IDS as the convolution

$$\begin{aligned} h(x,t) = h(0,t)*f(t,x) = \int _0^t h(0,t-\tau ) f(\tau ,x)\, d\tau . \end{aligned}$$
(3.7)

The expression for h(0, t) can be evaluated explicitly (see Appendix 8) and the result is

$$\begin{aligned} h(0,t) = - \theta H(1-t) \log {t} . \end{aligned}$$
(3.8)

Using eq.(2.8) and eq.(3.8) in eq.(3.7) completes the proof. \(\square \)

Proposition 6

The functions \(\psi _0(x,t)\) and \(\psi _k(x,t)\) in Prop. 5 can be written, respectively, as

$$\begin{aligned} \psi _0(x,t) = -H_{x\theta }+\log {t} -\text {B}_{t_{(1)}}(1+x\theta ,0) , \end{aligned}$$
(3.9)

where \(H_r\) denotes the harmonic number for \(r \in \mathbb {R}\) and \(\text {B} _z(\cdot ,\cdot )\) is the upper incomplete beta function, and

$$\begin{aligned} \psi _k(x,t) = \sum _{n=0}^\infty M_n^k(x\theta ,t)\left( t_{(k)}^{n+x\theta +k}-t_{(k+1)}^{n+x\theta +k}\right) , \quad (k=1,2,\ldots ), \end{aligned}$$
(3.10)

where we defined

$$\begin{aligned} M_n^k(x\theta ,t) = \frac{k^{x\theta -1}}{x\theta +k+n}\sum _{j=0}^n \frac{ C_{n-j}^k(x\theta ) U_j^k(x\theta ,t)}{k^j \, j!} , \end{aligned}$$

with

$$\begin{aligned} \begin{aligned} U_j^k(x\theta ,t) =&\sum _{r=0}^j \frac{(-j)_r (-1-j)_r (k-1)^r}{r!} \bigg [ (1+x\theta )_{j-r}\log {t} \\&- (2+x\theta )_{j-r}\,\, {}_3{F}{}_{2}\big ( 1,1,2+x\theta +j-r;2,2+x\theta ;k/t\big ) \frac{k}{t} \bigg ], \end{aligned} \end{aligned}$$

where \({}_3{F}{}_{2}\) is the (3, 2)-generalized hypergeometric function with three upper parameters and two lower parameters.

Proof

See Appendix 9. \(\square \)

Next, we discuss an alternative form of the density of the IDS.

Proposition 7

The density h(x, t) of \(E^D(t)\) is given by

where

$$\begin{aligned} C_{1;m}(t)=\{u \in \mathbb {R}^m : 1<u_1< \cdots < u_m,\; u_1+u_2+\cdots +u_m \le t\}. \end{aligned}$$

Proof

We have

$$\begin{aligned} h(x,t)= \frac{d}{dx}\mathbb {P}(E^D(t)\le x)= - \frac{d}{dx}\mathbb {P}(D(x)< t), \end{aligned}$$

from eq.(2.6), we have

Taking the derivative with respect to x then gives the desired result. \(\square \)

In Figure 3 we plot h(x, t) for some fixed values of x and t, and in Figure 4 we give the 3D plot of h(x, t). These plots were made with Mathematica 13.3 using the expression for h(x, t) given in eq.(3.3), with \(\psi _0(x,t)\) given by eq.(3.9) and \(\psi _k(x,t)\) given by eq.(3.4), along with numerical integration of the integral in eq.(3.5).

Fig. 3 Plots of h(x, t) for fixed \(t \in \{1/2, 1, 3/2\}\) (left) and for fixed \(x \in \{1/2, 1, 3/2\}\) (right).

Fig. 4 3D-plot of h(x, t) for \(x \in (0,4]\) and \(t \in (0,4].\)
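For \(t \in (0,1]\) only the \(k=0\) term of eq.(3.3) survives (recall \(\psi _k(x,t)=0\) for \(t\le k\)), so h(x, t) reduces to a single quadrature. A self-contained Python sketch of this special case (our naming; the integrable endpoint singularities are left to the quadrature routine):

```python
# h(x, t) for t in (0, 1], where eq. (3.3) reduces to its k = 0 term (a sketch).
import math
from scipy.integrate import quad

EULER_GAMMA = 0.5772156649015329

def ids_density_small_t(x, t, theta=1.0):
    """h(x, t) = -theta * exp(-gamma*x*theta) / Gamma(x*theta) * psi_0(x, t)."""
    a = x * theta
    psi0, _ = quad(lambda tau: math.log(t - tau) * tau ** (a - 1), 0.0, t)
    return -theta * math.exp(-EULER_GAMMA * a) / math.gamma(a) * psi0

print(ids_density_small_t(0.5, 0.8))  # log(t - tau) < 0 here, so the density is positive
```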

Proposition 8

The probability density function h(x, t) of \(E^{D}(t)\) satisfies the following fractional differential equation

$$\begin{aligned} \left\{ \begin{array}{lll} \mathbb {D}^{\theta }_{t}h(x,t)= -\frac{\partial }{\partial x}h(x,t), &{} x>0,\; t \in (0, 1], &{} \\[1ex] h(x,0)=\delta (x), &{} &{} \\[1ex] h(0,t)=\left\{ \begin{array}{ll} -\theta \log t, &{} t \in (0, 1], \\[1ex] 0, &{} t \in (1, \infty ). \end{array} \right.&\,&\end{array} \right. \end{aligned}$$
(3.11)

Proof

A similar approach as in Prop. 3 is used here. Taking the Laplace-Laplace transform of eq.(3.11) with respect to t and x leads to the same Laplace-Laplace transform as given in eq.(3.1). \(\square \)

Further, we discuss the asymptotic behavior of q-th order moments \(M_q(t)=\mathbb {E}(E^{D}(t))^{q},\; q>0,\) of \(E^{D}(t)\). The LT of \( M_q(t)\) is given by \(\overline{M}_q(s)=\frac{\varGamma (1+q)}{s(\phi (s))^{q}}\) (see e.g. [34, 59, 60]), where \(\phi (s)\) is the Laplace exponent given in eq.(2.4). We have,

$$\begin{aligned} \overline{M}_q(s)&= \frac{\varGamma (1+q)}{s\theta ^q\left( \sum _{k=1}^{\infty } \frac{(-1)^{k+1}s^k}{k\, k!}\right) ^{q}}, \\ \overline{M}_q(s)&\sim \frac{\varGamma (1+q)}{\theta ^q s ^{q+1}},\;\text{ as } s\rightarrow 0. \end{aligned}$$

Applying the Tauberian theorem [9], we have the following asymptotic behavior of \(M_q(t)\)

$$\begin{aligned} M_q(t) \sim \frac{t^q}{\theta ^q},\; \text{ as } t\rightarrow \infty . \end{aligned}$$

Simulation of the DS and the inverse DS: The algorithm for generating the sample trajectories of the DS shown in Fig. 5 is as follows (a Python sketch implementing these steps is given after the list):

  1.

    fix a value of the parameter \(\theta \) and generate iid random numbers \(U_i\sim \) Uniform[0, 1];

  2.

    generate the increments of the DS D(t) (see [46]) using the relationship \(D(t+dt) - D(t) \overset{d}{=}\ D(dt) \overset{d}{=}\ U^{1/(\theta \, dt)}(1+D(dt))\), i.e.

    $$\begin{aligned} D(dt) \overset{d}{= }\sum _{j=1}^{\infty }\left( \prod _{i=1}^{j} U_{i}^{\frac{1 }{\theta dt }}\right) ; \end{aligned}$$
  3.

    the cumulative sum of the increments gives the sample trajectories of the DS D(t); the inverse DS sample trajectories are obtained by interchanging the axes.
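A Python sketch of these three steps (grid size, truncation depth and the value of \(\theta \) are our choices; the plot mimics Fig. 5):

```python
# Sample paths of the DS and its inverse via the increment scheme above (a sketch).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
theta, T, n = 1.0, 5.0, 1000
dt = T / n

def dickman_increment(a, depth=200):
    """One sample of D(dt) =d sum_j prod_{i<=j} U_i^{1/a}, a = theta*dt (truncated)."""
    return np.cumprod(rng.random(depth) ** (1.0 / a)).sum()

t_grid = np.linspace(0.0, T, n + 1)
D = np.concatenate(([0.0], np.cumsum([dickman_increment(theta * dt) for _ in range(n)])))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.step(t_grid, D, where="post"); ax1.set(xlabel="t", ylabel="D(t)")
# Step 3: the inverse DS is obtained by interchanging the axes.
ax2.step(D, t_grid, where="post"); ax2.set(xlabel="t", ylabel="$E^D(t)$")
plt.show()
```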

Fig. 5 Dickman (left) and inverse Dickman (right) subordinator sample trajectories.

4 Applications of DS and IDS as time changes

In this section, we introduce time-changed versions of the Poisson process and of the Poisson process of order k by considering the DS and the IDS as time changes. We also define the space-fractional Skellam-Dickman and non-homogeneous Poisson-Dickman processes by time-changing the Skellam process and the non-homogeneous Poisson process. We discuss the governing fractional differential equations of these time-changed processes.

The homogeneous Poisson process \(N(t),\; t\ge 0\), with parameter \( \lambda >0\) is defined as,

$$\begin{aligned} N(t)= \max \{n: T_1+T_2+ \ldots +T_n \le t\},\; t\ge 0, \end{aligned}$$

where the inter-arrival times \(T_{1},T_2,\ldots \) are iid exponential random variables with mean \(1/\lambda \). The probability mass function (PMF) \( P_n^N(t)= \mathbb {P}(N(t)=n)\) is given by

$$\begin{aligned} P_n^N(t) = \frac{e^{-\lambda t}(\lambda t)^n}{n!},\;n=0,1,\ldots \end{aligned}$$

The PMF of the Poisson process obeys the differential-difference equation of the form

$$\begin{aligned} \frac{d}{dt}P_n^N(t)= -\lambda (P_n^N(t) - P_{n-1}^N(t)), \end{aligned}$$

with initial condition

$$\begin{aligned} P_n^N(0) =\delta _{n,0}= {\left\{ \begin{array}{ll} 0, &{} n\ne 0, \\ 1, &{} n=0. \end{array}\right. } \end{aligned}$$

Further, \(P_{n}^N(t)=0, \; n\le -1\). Next, we introduce and study space- and time-fractional Poisson-Dickman processes and discuss their main properties.

4.1 Space-fractional Poisson-Dickman process (SFPDP)

Here, we introduce the space-fractional Poisson-Dickman process \(N^{D}(t)\) by subordinating the homogeneous Poisson process with the DS, defined by

$$\begin{aligned} N^{D}(t)=N(D(t)),\; t\ge 0. \end{aligned}$$

The characteristic function of SFPDP is

$$\begin{aligned} \mathbb {E}[e^{i z N^{D}(t)}] = \exp \left[ {-t \theta \int _{0}^{1}\frac{ 1-e^{-u\lambda (1-e^{iz})}}{u} du}\right] . \end{aligned}$$
(4.1)

The mean, variance and covariance of SFPDP can be easily calculated by using the characteristic function,

$$\begin{aligned} \mathbb {E}[N^{D}(t)]&=\theta \lambda t; \\ \textrm{Var}[N^{D}(t)]&=\theta \lambda t (1+\lambda /2); \\ \textrm{Cov}[N^{D}(t),N^{D}(s)]&= \lambda \theta s+\frac{\lambda ^2 \theta s }{2}, \;\; s<t. \end{aligned}$$
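These moment formulas are easy to verify by simulation, composing a Poisson sampler with Dickman samples of the random time; a sketch (sample size and truncation depth are our choices):

```python
# Monte Carlo check of the SFPDP mean and variance (a sketch).
import numpy as np

rng = np.random.default_rng(2)
theta, lam, t = 1.0, 2.0, 3.0
a, size, depth = theta * t, 200_000, 200

total, prod = np.zeros(size), np.ones(size)
for _ in range(depth):                      # D(t) via the truncated product series
    prod *= rng.random(size) ** (1.0 / a)
    total += prod
N = rng.poisson(lam * total)                # N^D(t) = N(D(t))

print(N.mean(), theta * lam * t)                  # both close to 6.0
print(N.var(), theta * lam * t * (1 + lam / 2))   # both close to 12.0
```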

Proposition 9

The Lévy density \(\nu _{N^{D}}(x)\) of SFPDP is given by

$$\begin{aligned} \nu _{N^{D}}(x) = \theta \sum ^{\infty }_{k=1}\frac{(-1)^k\lambda ^k }{k\cdot k!} \sum _{n=0}^{k}(-1)^n {\left( {\begin{array}{c}k \\ n\end{array}}\right) } \delta _{n}(x). \end{aligned}$$

Proof

Using the Lévy-Khintchine formula (see [54]), it follows

$$\begin{aligned} \int _{\mathbb {R} \backslash \{0 \} }&(e^{i sx}-1) \theta \sum ^{\infty }_{k=1}\frac{(-1)^k\lambda ^k }{k \cdot k!}\sum _{n=0}^{k}(-1)^n \left( {\begin{array}{c}k\\ n\end{array}}\right) \delta _{n}(x) dx \\&=\theta \left[ \sum ^{\infty }_{k=1}\frac{(-1)^k\lambda ^k }{k \cdot k!}\sum _{n=0}^{k}(-1)^n \left( {\begin{array}{c}k\\ n\end{array}}\right) (e^{isn}-1) \right] \\&=\theta \sum ^{\infty }_{k=1}\frac{(-1)^k\lambda ^k }{k \cdot k!}(1-e^{is})^k, \end{aligned}$$

which is the characteristic exponent of SFPDP given in eq.(4.1). \(\square \)

4.2 Time-fractional Poisson-Dickman process (TFPDP)

Next, we define the time-fractional Poisson-Dickman stochastic process:

$$\begin{aligned} N_{E}(t)=N(E^{D}(t)),\; t\ge 0, \end{aligned}$$
(4.2)

where N(t) is a homogeneous Poisson process with parameter \(\lambda \) and the IDS \(E^{D}(t)\) is independent of N(t). Note that this process is non-Markovian, since the time change \(E^{D}(t)\) is not a Lévy process. The state probability \(\mathbb {P}[N_{E}(t)=n]=P^{E}_{n}(t)\) is given by

$$\begin{aligned} P^{E}_{n}(t)=\int _{0}^{\infty }\frac{e^{-\lambda y} (\lambda y)^n}{n!} h(y,t)dy =\int _{0}^t \frac{\lambda ^n}{n!}h(0, t-\tau )\left( \int _{0}^{\infty }e^{-\lambda y} y^n f(\tau , y)dy\right) d\tau , \end{aligned}$$

for \(n = 0,1,2,\ldots \).

Proposition 10

The state probability \(P^{E}_{n}(t)\) of \(N_{E}(t)\) satisfies

$$\begin{aligned} \mathcal {D}^{\theta }_{t}P^{E}_{n}(t)= -\lambda (P^{E}_{n}(t) - P^{E}_{n-1}(t)) \end{aligned}$$

with initial condition

$$\begin{aligned} P^{E}_{n}(0) = \delta _{n,0} . \end{aligned}$$

Proof

Note that,

$$\begin{aligned} P^{E}_{n}(t) = \int _{0}^{\infty }P_n(y)h(y,t)dy. \end{aligned}$$

Taking the generalized Riemann-Liouville derivative given in eq.(2.2) and using Prop. 8, we get

$$\begin{aligned} \mathbb {D}_{t}^\theta P^{E}_{n}(t)&= \int _{0}^{\infty }P_n(y)\mathbb {D}_{t}^{\theta }h(y,t)dy\\&= -\int _{0}^{\infty }P_n(y)\frac{\partial }{\partial y}h(y,t)dy\\&= - P_n(y)h(y,t)|_{y=0}^{y=\infty } + \int _{0}^{\infty }\frac{d}{dy}P_n(y)h(y,t)dy \\&= \int _{0}^{\infty }\frac{d}{dy}P_n(y)h(y,t)dy + P_n(0)h(0,t)\\&= -\lambda (P^{E}_{n}(t) - P^{E}_{n-1}(t)) + P_n(0)h(0,t). \end{aligned}$$

Using eq.(2.3), together with \(h(0,t)=\bar{\nu }_D(t)\) and \(P^E_{n}(0)=P_n(0)=\delta _{n,0}\), completes the proof. \(\square \)

The process \(N_E(t)\) is a renewal process such that

$$\begin{aligned} N_E(t)=\max \{n: T_1+T_2+\cdots +T_n \le t\}, \end{aligned}$$

where the \(T_i\) are the iid inter-arrival times of \(N_E(t)\), which satisfy

$$\begin{aligned} \mathbb {P}(T_n > t)= \mathbb {E}[e^{-\lambda E^D(t)}]= \int _{0}^{\infty } e^{-\lambda y}h(y, t) dy. \end{aligned}$$

The renewal function of the process \(N_E(t)\) is

$$\begin{aligned} U(t)=\mathbb {E}[E^D(t)]=\int _{0}^{\infty } x h(x, t) dx. \end{aligned}$$

Then, we can obtain the covariance function of the process \(N_E(t)\), that is (see [35])

$$\begin{aligned} \textrm{Cov}[N_{E}(t),N_{E}(s)]= \int _{0}^{\min (t, s)}(U(t-\tau )+U(s-\tau ))dU(\tau )-U(t)U(s), \; s, t \ge 0. \end{aligned}$$

The next statement is the analogue of the celebrated Watanabe martingale characterization of the homogeneous Poisson process and of fractional Poisson processes, see [1, 61].

Proposition 11

Let \(\{M(t), \; t\ge 0\}\) be an \(\{ \mathscr {F}_{t}\}_{t\ge 0}\)-adapted simple locally finite point process and let D(t) be a strictly increasing subordinator such that \(E^D(t) = \inf \{u\ge 0 : D(u) >t\}\) is the inverse of D(t). Suppose \(E^D(t)\) is \(L^p\) bounded for some \(p>1\). The process \(\{M(t)- \lambda E^{D}(t)\}\) is a right-continuous martingale with respect to the filtration \(\mathscr {F}_{t}= \sigma (M(s), s\le t) \vee \sigma (E^{D}(s), s\le t)\) for some \(\lambda >0\) iff M(t) is a TFPDP \(N_{E}(t)\).

Proof

The proof follows using similar steps as discussed in [1, 26]. Let \(\{M(t)- \lambda E^D(t)\}\) be an \(\mathscr {F}_{t}\)-martingale and let \(D(t)= \inf \{x\ge 0 : E^D(x) \ge t\}\) be the collection of stopping times of \(E^D(t)\) for \(t \ge 0\). Then, by the optional sampling theorem (see Theorem 6.29 of [29]), \(\{M(D(t))- \lambda E^D(D(t))\}\) is a martingale, i.e. \(\{M(D(t))- \lambda t\}\) is a martingale, since \(E^D(D(t))= t\) for all t. By the Watanabe characterization (see [11, 61]), it follows that \(M(D(t)) = N(t) = N(E^{D}(D(t)))\) is a homogeneous Poisson process with intensity \(\lambda >0\). Thus M(t) is the TFPDP \(N_E(t)\).

Conversely, if \(M(t)= N_E(t),\) then we have to show that \(\{N_E(t)- \lambda E^D(t)\}\) is an \(\mathscr {F}_{t}\)-martingale. Since \(M\ge 0\) and \(E^D(t)\) is bounded in \(L^{p},\;p>1\), by eq.(3.2), the family \(\{N_E(t)- \lambda E^D(t), 0\le t\le T\}\) is uniformly integrable. The result follows using Doob's optional sampling theorem (see e.g. Theorem 6.29 in [29]), because \(N(t)-\lambda t\) is a martingale and \(E^D(t)\) is a stopping time. \(\square \)

4.3 Time-fractional Poisson-Dickman process of order k (TFPDPoK)

Here, we introduce the time-fractional Poisson-Dickman process of order k based on the jump distribution of the \(X_i\). Let \(X_i,\;i=1,2,\ldots ,\) be discrete uniform random variables with \(\mathbb {P}(X_i = j) = \frac{1}{k},\; j=1,2,\ldots , k,\) and let N(t) be a Poisson process with parameter \(k\lambda \). Then the process defined by

$$\begin{aligned} Z_E^k(t) = \sum _{i=1}^{N_{E}(t)} X_i, \end{aligned}$$

is called the time-fractional Poisson-Dickman process of order k (TFPDPoK). The marginals of \(Z_E^k(t)\) can be obtained by time-changing the Poisson process of order k, denoted by \(N^{k}(t)\) (see [32]), with an independent IDS \(E^{D}(t)\), such that

$$\begin{aligned} Z^{k}_{E}(t)\overset{d}{=}\ N^{k}(E^{D}(t)). \end{aligned}$$

Further, we define the time-fractional compound Poisson-Dickman process,

$$\begin{aligned} Z_{E}^{\eta }(t) = \sum _{i=1}^{N_{E}(t)} X_i, \end{aligned}$$

where the \(X_i,\;i=1,2,\ldots ,\) are exponentially distributed with parameter \(\eta \) and the underlying Poisson process has parameter \(\lambda \). The process \(Z_{E}^{\eta }(t)\) can be written as \(Z(E^{D}(t))\), the time change of the compound Poisson process (CPP) \(Z(t)=\sum _{i=1}^{N(t)} X_i\) by the IDS \(E^{D}(t)\). The LT of the density \(P_x^{Z,\eta }(t)\) is given by

$$\begin{aligned} \mathcal {L}_x[P_x^{Z,\eta }(t)] = \mathbb {E}[e^{-sZ_{E}^{\eta }(t)}] = \mathbb {E}[e^{-\lambda E^{D}(t) \frac{s}{s+\eta }}]. \end{aligned}$$

Proposition 12

The density \(P_x^{Z,\eta }(t)\) of \(Z_{E}^{\eta }(t)\) satisfies the following fractional differential equation

$$\begin{aligned} \eta \mathcal {D}^{\theta }_{t} P_x^{Z,\eta }(t)&= -\left[ \lambda +\mathcal {D} ^{\theta }_{t}\right] \frac{\partial }{\partial x} P_x^{Z,\eta }(t),\; \end{aligned}$$

with conditions

$$\begin{aligned} P_x^{Z,\eta }(0)=0, \; \; \int _{0}^{\infty } P_x^{Z,\eta }(t) dx= 1-\mathbb {E} [e^{-\lambda E^D(t)}]. \end{aligned}$$

Proof

Note that

$$\begin{aligned} P_x^{Z,\eta }(t)=\int _{0}^{\infty } P_{x}(y) h(y,t) dy. \end{aligned}$$

Taking generalized Riemann-Liouville derivative given in eq.(2.2) and using Prop. 8, we get

$$\begin{aligned} \mathbb {D}^{\theta }_{t} P_x^{Z,\eta }(t)&= \int _{0}^{\infty } P_{x}(y)\mathbb {D}^{\theta }_{t}h(y,t) dy\\&=-\int _{0}^{\infty } P_{x}(y)\frac{\partial }{\partial y}h(y,t)dy\\&= - P_{x}(y)h(y,t)|_{0}^{\infty } + \int _{0}^{\infty } \frac{\partial }{\partial y} P_{x}(y)h(y,t) dy. \end{aligned}$$

The density \(P_{x}(y)\) of the CPP satisfies the following equation (see [7]),

$$\begin{aligned} \eta \frac{\partial }{\partial t}P_{x}(t)=-\left[ \lambda + \frac{\partial }{\partial t} \right] \frac{\partial }{\partial x}P_{x}(t). \end{aligned}$$

Using eq.(2.3) completes the proof. \(\square \)

4.4 Space-fractional Skellam Dickman stochastic process (SFSDP)

Let S(t) be a Skellam process, such that

$$\begin{aligned} S(t)= N_{1}(t)-N_{2}(t), \; t\ge 0, \end{aligned}$$

where \(N_{1}(t)\) and \(N_{2}(t)\) are two independent homogeneous Poisson processes with intensities \(\lambda _{1} >0\) and \(\lambda _2>0,\) respectively. This process is symmetric only when \(\lambda _1= \lambda _2\). The PMF \( P_{n}^S(t)=\mathbb {P}(S(t)=n)\) of S(t) is given by

$$\begin{aligned} P_{n}^S(t)=e^{-t(\lambda _1+\lambda _2)}{\left( \frac{ \lambda _1}{\lambda _2}\right) }^{n/2}I_{|n|}(2t\sqrt{\lambda _1 \lambda _2}),\; n\in \mathbb {Z}, \end{aligned}$$

where \(I_n\) is the modified Bessel function of the first kind,

$$\begin{aligned} I_{n}(z)=\sum _{i=0}^{\infty }\frac{{(z/2)}^{2i+n}}{i!(n+i)!}. \end{aligned}$$

The PMF \(P_{n}^S(t)\) satisfies the following differential-difference equation

$$\begin{aligned} \frac{d}{dt}P_{n}^S(t)= \lambda _{1}(P_{n-1}^S(t)-P_{n}^S(t))-\lambda _{2}(P_{n}^S(t)-P_{n+1}^S(t)),\; \; n\in \mathbb {Z}, \end{aligned}$$

with initial conditions \(P_{0}^S(0)=1\) and \(P_{n}^S(0)=0, \;n\ne 0\). Next, we introduce the space-fractional Skellam-Dickman process (SFSDP) \(S^{D}(t)\) by subordinating the Skellam process S(t) with the DS D(t), defined by

$$\begin{aligned} S^{D}(t)=S(D(t)),\; t\ge 0. \end{aligned}$$

The LT of the SFSDP is

$$\begin{aligned} \mathbb {E}[e^{-sS^{D}(t)}] = \exp \left[ -t \theta \int _{0}^{1}\frac{ 1-e^{-u(\lambda _1 (1-e^{-s})+\lambda _2 (1-e^{s}))}}{u} du\right] . \end{aligned}$$

The mean and covariance of the SFSDP can be easily calculated by using the LT,

$$\begin{aligned} \mathbb {E}[S^{D}(t)]&=\theta (\lambda _1-\lambda _2) t; \\ \textrm{Cov}[S^{D}(t),S^{D}(s)]&= (\lambda _1+\lambda _2) \theta s+\frac{ (\lambda _1-\lambda _2)^2 \theta s}{2}, \;\; s<t. \end{aligned}$$

Proposition 13

The Lévy density \(\nu _{S^{D}}(x)\) of the SFSDP is given by

$$\begin{aligned} \begin{aligned} \nu _{S^{D}}(x) = \theta&\bigg [\sum ^{\infty }_{i=1}\frac{(-1)^i\lambda _1^i }{ i\cdot i!}\sum _{n_1=0}^{i}(-1)^{n_1} {\left( {\begin{array}{c}i \\ n_1\end{array}}\right) } \delta _{n_1}(x) \\&+\sum ^{ \infty }_{j=1}\frac{(-1)^j\lambda _2^j }{j \cdot j!}\sum _{n_2=0}^{j}(-1)^{n_2} { \left( {\begin{array}{c}j \\ n_2\end{array}}\right) } \delta _{-n_2}(x)\bigg ]. \end{aligned} \end{aligned}$$

Proof

Since \(S^{D}(t)=N_{1}(D(t))-N_{2}(D(t))\) with independent components, the Lévy densities \(\nu _{N_{1}^{D}}(x)\) and \(\nu _{N_2^{D}}(x)\) of the Poisson-Dickman processes \(N_{1}^{D}(t)\) and \(N_{2}^{D}(t)\), with parameters \(\lambda _1>0\) and \(\lambda _2>0\) respectively, add (with the second reflected onto the negative integers), so we obtain

$$\begin{aligned} \nu _{S^{D}}(x)=\nu _{N_{1}^{D}}(x)+\nu _{N_2^{D}}(x), \end{aligned}$$

which gives the desired result. \(\square \)

Further, we consider the Skellam process S(t) time-changed by the independent IDS \(E^D(t)\). The time-changed Skellam process is defined by

$$\begin{aligned} S_{E}(t) = S(E^D(t)),\;\; t\ge 0. \end{aligned}$$
(4.3)

Proposition 14

The state probability \(\mathbb {P}[S_{E}(t)=n]=Q^{S}_{n}(t)\) of \(S_{E}(t)\) satisfies

$$\begin{aligned} \mathcal {D}^{\theta }_{t}Q^{S}_{n}(t)= -\lambda _1(Q^{S}_{n}(t) - Q^{S}_{n-1}(t)) -\lambda _2(Q^{S}_{n}(t) - Q^{S}_{n+1}(t)) \end{aligned}$$

with initial condition

$$\begin{aligned} Q^{S}_{n}(0) =\delta _{n,0}. \end{aligned}$$

Proof

The result follows using the steps of Prop. 10. \(\square \)

Next, we simulate the sample paths of the Poisson-Dickman process \(N^{D}(t)=N(D(t))\) and the Skellam-Dickman process \(S^{D}(t)=S(D(t))\). Composing the Poisson process N(t) with the independent subordinator D(t) gives the trajectories of the Poisson-Dickman process; similarly, the sample trajectories of the Skellam-Dickman process are obtained by composing the Skellam process with the DS, see Fig. 6. A Python sketch of this construction follows the figure caption.

Fig. 6 Dickman-Poisson process (left) and Skellam-Dickman process (right) sample trajectories.
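A sketch of this construction (parameters, grid and truncation are our choices; the DS increments are generated as in Section 3):

```python
# Sample paths of N(D(t)) and S(D(t)) = N_1(D(t)) - N_2(D(t)) (a sketch).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
theta, lam, lam1, lam2, T, n = 1.0, 2.0, 2.0, 1.0, 5.0, 1000
dt = T / n

def dickman_increment(a, depth=200):
    return np.cumprod(rng.random(depth) ** (1.0 / a)).sum()

t = np.linspace(0.0, T, n + 1)
D = np.concatenate(([0.0], np.cumsum([dickman_increment(theta * dt) for _ in range(n)])))

# Time change: conditionally on D, the increments are Poisson with mean lam * dD.
dD = np.diff(D)
ND = np.concatenate(([0], np.cumsum(rng.poisson(lam * dD))))
SD = np.concatenate(([0], np.cumsum(rng.poisson(lam1 * dD) - rng.poisson(lam2 * dD))))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.step(t, ND, where="post"); ax1.set(xlabel="t", ylabel="$N^D(t)$")
ax2.step(t, SD, where="post"); ax2.set(xlabel="t", ylabel="$S^D(t)$")
plt.show()
```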

4.5 Non-homogeneous Poisson-Dickman process

Consider a non-homogeneous Poisson process (NHPP) \(H(t)= N(\varLambda (t)), t \ge 0,\) with intensity function

$$\begin{aligned} \lambda (t): [0, \infty ) \rightarrow [0, \infty ). \end{aligned}$$

We denote for \(0 \le s<t\)

$$\begin{aligned} \varLambda (s,t)=\int _{s}^{t}\lambda (u)du, \end{aligned}$$

where the function \(\varLambda (t) = \varLambda (0, t)\) is known as the rate function or cumulative rate function. Thus, H is a stochastic process with independent, but not necessarily stationary, increments. The probability mass function for \(0\le \mu <t\) is (see [36])

$$\begin{aligned} P_{n}^{H}(t, \mu )=\mathbb {P}[H(t+\mu )-H(\mu )=n]= \frac{e^{-\varLambda (\mu , t+\mu )}(\varLambda (\mu , t+\mu ))^{n}}{n!}, \; n=0,1,2\ldots . \end{aligned}$$

The distribution \(P_{n}^{H}(t, \mu )\) satisfies the difference-differential equation

$$\begin{aligned} \frac{d}{dt}P_{n}^{H}(t, \mu )= -\lambda (t+\mu ) (P_{n}^{H}(t, \mu ) - P_{n-1}^{H}(t, \mu )) \end{aligned}$$

with initial condition \(P_{n}^{H}(0, \mu )=\delta _{n,0}\) and \(P_{-1}^{H}(t, \mu )=0\). We now introduce the non-homogeneous Poisson-Dickman stochastic process (NHPDSP). A time-changed representation of the NHPDSP can be written as

$$\begin{aligned} N^{H}_{D}(t)= H(E^{D}(t))= N(\varLambda (E^{D}(t))), \; t>0, \end{aligned}$$

where the IDS \(E^{D}(t)\) is independent of the NHPP H(t). We consider the stochastic process

$$\begin{aligned} I(t, \mu )= N(\varLambda (t+\mu ))-N(\varLambda (\mu )), \end{aligned}$$

and

$$\begin{aligned} I^{D}(t, \mu )= I(E^{D}(t), \mu )=N(\varLambda (E^{D}(t)+\mu ))-N(\varLambda (\mu )). \end{aligned}$$

The marginal distribution of \(I^{D}(t, \mu )\) can be written as follows:

$$\begin{aligned} \begin{aligned} Q^H_n(t, \mu )&= \mathbb {P}[N(\varLambda (E^{D}(t)+\mu ))-N(\varLambda (\mu ))=n] \\&= \int _{0}^{\infty }\frac{e^{-\varLambda (\mu , y+\mu )}(\varLambda (\mu , y+\mu ))^{n}}{n!} h(y, t)dy. \end{aligned} \end{aligned}$$

Proposition 15

The PMF \(Q^H_n(t, \mu )\) satisfies the following fractional differential-difference integral equation:

$$\begin{aligned} \mathcal {D}^{\theta }_{t}Q^H_n(t, \mu )= -\int _{0}^{\infty }\lambda (\mu +y) (P^H_n(y, \mu ) -P^H_{n-1}(y, \mu ))h(y, t)dy, \end{aligned}$$

with the usual initial condition.

Proof

The proof follows on similar lines as discussed in Theorem 1 in [12]. \(\square \)

5 Conclusions

The Dickman distribution, also known as the Dickman-Goncharov distribution, has its origin in analytic number theory. It is infinitely divisible and supported on the positive real line; hence one can define a continuous-time Lévy process with non-decreasing sample paths, the DS, whose marginals follow the Dickman distribution. Inverse subordinators, the right-continuous inverses of subordinators, play a crucial role in defining time-changed stochastic processes, obtained by changing the clock of the original process to an inverse subordinator. The DS has been discussed in the literature, but, to the best of our knowledge, its inverse has not. With the help of the DS and the IDS, new classes of time-changed Poisson processes, namely the space-fractional Poisson-Dickman and time-fractional Poisson-Dickman processes, are discussed here. Note that the space-fractional and time-fractional Poisson-Dickman processes can be viewed as generalizations of the space-fractional and time-fractional Poisson processes, respectively, which have been extensively studied in the literature in recent years.