Abstract
In this article, convolution-type fractional derivatives generated by the Dickman subordinator and the inverse Dickman subordinator are discussed. The Dickman subordinator and its inverse are generalizations of the stable and inverse stable subordinators, respectively. Series representations of the densities of the Dickman subordinator and the inverse Dickman subordinator are also obtained, which could be helpful for computational purposes. Moreover, the space- and time-fractional Poisson-Dickman processes, the space-fractional Skellam Dickman process and the non-homogeneous Poisson-Dickman process are introduced and their main properties are studied.
1 Introduction
Over the last few decades, subordinators and their stopping times have attracted the attention of many researchers. In general, a subordinator is a non-decreasing Lévy process [54]. Some well-known examples of subordinators include the gamma, lower incomplete gamma, inverse Gaussian, inverse gamma, \(\alpha \)-stable, tempered stable, geometric stable, iterated geometric stable, and Dickman subordinators [9, 13, 15, 16, 33, 59]. Subordinators play a crucial role in defining a new class of stochastic processes, called subordinated stochastic processes, which have applications in finance, in statistical physics for the modelling of anomalous diffusion, and in fractional partial differential equations. Further, these processes have interesting connections to the scaling limits of continuous time random walks [42, 43, 62]. The Dickman subordinator, with Lévy measure proportional to \(\frac{1}{x}\mathbb {1}_{(0, 1)}(x) dx\), can be considered a truncated 0-stable subordinator, by analogy with the \( \alpha \)-stable subordinator whose Lévy measure is [13]
where
The marginal density of the Dickman subordinator, related to the Dickman function, arises in many areas of mathematics ranging from number theory to combinatorics, population genetics, and the theory of random trees; see e.g. [20, 45, 47, 48]. The Dickman function was introduced by Karl Dickman in 1930 [20] and later studied by de Bruijn [18]. It appears even earlier in Ramanujan’s unpublished papers (see [51]). Applications of Dickman subordinators to marginally relevant disordered systems are given in [13]. For a recent survey article on the Dickman distribution see [45]. For the infinite divisibility of two widely studied probability distributions associated with the Dickman function see [23]; interestingly, one of these is infinitely divisible and the other is not. Size-biased sampling from the Dickman distribution is explored in [28] and simulations of Dickman random numbers are given in [19].
The first passage times of subordinators, also called inverse subordinators, are of interest due to their connections with fractional derivatives. First passage times are stopping times which arise naturally in areas such as finance, insurance, process control and survival analysis. Inverse subordinators are used as time-changes for Brownian motion and the Poisson process; see e.g. [1, 7, 26, 35, 36, 58]. The governing equations for Poisson processes time-changed by a general inverse subordinator have been studied by Buchak and Sakhno in [12]. In recent papers by Kochubei [30] and Toaldo [58], new types of differential operators are introduced which are related to Bernstein functions and generalize the classical Caputo-Djrbashian and Riemann-Liouville fractional derivatives, which are closely associated with the inverse stable subordinator; see e.g. [44]. The concept of general fractional calculus (GFC) was first discussed in [30]. In the papers [30, 31] the general fractional integral (GFI) and general fractional derivative (GFD) are defined. This approach is entirely based on the concept of Sonin kernels and associated kernels. The most important results regarding GFC are discussed in [37,38,39], which include the general fundamental theorems for the GFI and GFD, and the work is further extended to GFC of arbitrary order. In [37], an operational calculus for equations with the GFD is proposed.
In this article, we introduce the inverse Dickman subordinator, which is the right-continuous inverse of the Dickman subordinator. Closed-form representations of the probability density functions of the Dickman subordinator and its inverse are given. As an application, the Dickman subordinator and its inverse are used as time-changes to define different kinds of subordinated Poisson processes. The rest of the paper is organized as follows. In Section 2, generalized fractional derivatives are defined and the main properties of the Dickman subordinator are discussed. The inverse Dickman subordinator and its distributional properties are established in Section 3. Section 4 deals with the space-fractional Poisson-Dickman, time-fractional Poisson-Dickman and fractional compound Poisson-Dickman processes. Further in Section 4, we introduce the time-changed Poisson process, Poisson process of order k, Skellam process and non-homogeneous Poisson process obtained by considering the Dickman subordinator and its inverse as time-changes. The last section concludes.
2 Dickman subordinator and its properties
In this section, we provide some basic definitions and results to be used in this paper. Further, we introduce the Dickman subordinator and discuss its distributional properties. The governing fractional partial differential equation and the asymptotic form of the fractional order moments are given.
2.1 Fractional and generalized fractional derivatives
The Caputo-Djrbashian (CD) fractional derivative of a function \(g(t),\;t\ge 0\) of order \(\alpha \in (0,1],\) is defined as
Note that the class of functions for which the CD derivative is well defined is discussed in [43] (Sec. 2.2, 2.3). For a comprehensive reference on fractional derivatives and applications see [49]. The Riemann-Liouville tempered fractional derivative is defined by (see [2])
where
is the usual Riemann-Liouville fractional derivative of order \(\alpha \in (0,1)\). Let f be a Bernstein function with an integral representation
where the non-negative Lévy measure \(\nu \) on \(\mathbb {R}_{+}\cup \{0\}\) satisfies
We will use the generalized Caputo-Djrbashian derivative with respect to the Bernstein function f, which is defined on the space of absolutely continuous functions as follows (see [58], Def. 2.4)
where \(\bar{\nu }(s)=a+\nu (s,\infty )\) is the tail of the Lévy measure. The generalized Riemann-Liouville derivative according to the Bernstein function f is given by (see [30] and [58], Def. 2.1)
The relation between \(\mathbb {D}_{t}^{f}\) and \(\mathcal {D}_{t}^{f}\) is (see [30] and [58], Prop. 2.7)
For more details regarding the generalization of CD derivatives and some historical comments, see [6] and [52].
2.2 Dickman subordinator
In this subsection, the \(\alpha \)-stable and the Dickman subordinators are discussed. Let \(D_{\alpha }(t)\) be the one-sided stable Lévy process also known as \(\alpha \)-stable subordinator with the Laplace transform (LT) (see e.g. [53])
with Lévy measure
The right tail of the \(\alpha \)-stable subordinator behaves like
Note that for the \(\alpha \)-stable subordinator, the derivative defined in eq.(2.1) becomes the Caputo-Djrbashian (CD) fractional derivative \(\mathcal {D}^{\alpha }_{t}\) of order \(\alpha \in (0,1)\):
where \(\mathbb {D}^{\alpha }_{t}\) is the Riemann-Liouville fractional derivative of order \(\alpha \in (0,1)\).
Next, we introduce the Dickman subordinator (DS). Let D(t) be the DS with probability density function f(x, t). The DS is an increasing Lévy process with LT (see [13])
where \(s > 0\) and \(\text {Ein}(\cdot )\) is the modified exponential integral defined as (see [41])
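Since \(\text {Ein}(s)=\int _0^s \frac{1-e^{-u}}{u}\,du\) and the Laplace exponent of the DS is \(\theta \,\text {Ein}(s)\), the LT \(\mathbb {E}[e^{-sD(t)}]=e^{-\theta t \,\text {Ein}(s)}\) is easy to evaluate numerically. A minimal Python sketch (the function names and quadrature scheme are our choices, not from the paper):

```python
import math

def ein(s, n=10_000):
    """Numerically evaluate Ein(s) = int_0^s (1 - e^{-u})/u du by the
    midpoint rule; the integrand extends continuously to 1 at u = 0,
    so no special handling of the lower endpoint is needed."""
    h = s / n
    return sum((1.0 - math.exp(-(i + 0.5) * h)) / ((i + 0.5) * h)
               for i in range(n)) * h

def ds_laplace_transform(s, t, theta):
    """E[exp(-s D(t))] = exp(-theta * t * Ein(s)) for the DS."""
    return math.exp(-theta * t * ein(s))
```

As a sanity check, the identity \(\text {Ein}(s)=\gamma +\log s+E_1(s)\) gives \(\text {Ein}(1)\approx 0.7966\), which the quadrature reproduces.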
It is a “truncated 0-stable subordinator”, by analogy with the well-known \( \alpha \)-stable subordinator. In this case \(\alpha =0\), and due to the Lévy-measure condition \(\int _{0}^{\infty }\min {(1,x)}\nu _{\alpha }(dx)<\infty \), its Lévy measure is restricted to (0, 1) and is given by
The characteristic function of Dickman distribution is
Note that the DS is infinitely divisible and self-decomposable (see [13, 19]). Further, it is well known that the class of self-decomposable distributions is a subclass of the infinitely divisible distributions, see e.g. [54]. The process has strictly increasing sample paths.
2.3 The density of the Dickman subordinator
The density forms of DS are discussed in [13, 17, 25, 27] by using different approaches. The density form of DS (see [13]) is
where \(\gamma = -\int _{0}^{\infty }\log u\, \text {e}^{-u} du \approx 0.577\) is the Euler-Mascheroni constant. In [17], Covo provided the cumulative distribution function of the DS as follows
where
and then the density f(x, t) of the DS as
In [25], Griffiths provided a series representation of f(x, t) with an algorithm for calculating the series coefficients. Next, we provide an alternative series representation for f(x, t).
Proposition 1
The density f(x, t) of DS can also be written as
where \(\varphi _k(x,t)\) is given by
and
for \(k = 1, 2,3,\ldots \).
Proof
See Appendix 6. \(\square \)
It is interesting to note that using eq.(2.9) recursively, we can write \(\varphi _k(x,t)\) as an iterated integral, namely
and changing the integration variables from \((a_1,\ldots ,a_k)\) to \( (u_1,\ldots ,u_k)\) given by
and following the steps of [22] we obtain eq.(2.7). Indeed, from eq.(2.11) we see that
and consequently
The intervals of integration
in eq.(2.10), in terms of \((u_1,u_2,\ldots ,u_k)\), become
Collecting the above expressions in eq.(2.10) we obtain
which by substituting in eq.(2.8) gives eq.(2.7) (for more details, see [22]).
Proposition 2
The function \(\varphi _k(x,t)\) in Proposition 1 can be written as
with
and
and \(C_n^k(t\theta )\) is defined recursively as
and
for \(k = 2, 3,\ldots \), where \((a)_j = \varGamma (a+j)/\varGamma (a)\) is the Pochhammer symbol, \({}_{2}{F}{}_{1}(\cdot )\) is the Gauss hypergeometric function, and we used the notation
Proof
See Appendix 7. \(\square \)
We should note that although the series in eq.(2.12) has the same form as the series obtained in [25], the algorithms for calculating the coefficients of the series are different. In our opinion, the expression in eq.(2.12) is more computationally friendly. In Figure 1 we have the plots of f(x, t) for some fixed values of x and t and in Figure 2 we have the 3D plot of f(x, t). These plots were made with Mathematica 13.3 using the series representation of \(\varphi _k(x,t)\) as in eq.(2.12).
Next, the convolution-type fractional derivatives, or non-local operators, corresponding to the DS are defined. The Lévy measure of the DS is
Using eq.(2.1), the generalized Caputo fractional derivative corresponding to the DS is of the form:
Further, the generalized Riemann-Liouville fractional derivative corresponding to DS is
Now, taking the LT of both sides of eq.(2.15) yields
Similarly, the LT of eq.(2.14) is
Proposition 3
The probability density function f(x, t) of D(t) satisfies the following fractional differential equation
Proof
We follow the same arguments as discussed in Theorem 4.1 of [58]. Taking the LT of both sides of eq.(3) with respect to x leads to
Further, using the LT with respect to the time variable t and \(\tilde{f}(s,0)=1\), it follows
The result follows by comparing eq.(2.16) and the LT of eq.(2.4) with respect to t.
\(\square \)
Next, we discuss the asymptotic behavior of the q-th order moments \(\mathbb {E}[D(t)^q]\), \(0<q<1\), of DS.
Proposition 4
For \(0<q<1\), the asymptotic behavior of the q-th order moments of DS is given by
Proof
Using the result in [34],
By choosing \(f(s)=\theta \int _{0}^{1}\frac{(1-e^{-su})}{u}du \) and \(g(s)=s^{-q-1}(1-e^{-s})\), it follows
where \(f(0)=0, a_{0}=\frac{\theta }{1.1!}\) and \(\beta =1\). Further,
where \(b_{0}=-1\), \(b_{1}=1/2!\) and \(\gamma =1-q>0\). Using the Laplace-Erdélyi theorem (see [21]), we have
where \(C_{k}\), in terms of the coefficients \(a_{k}\) and \(b_{k}\), is given by
where \({\overline{B}}_{j,i}\) are partial ordinary Bell polynomials (see e.g. [3]). The Bell polynomials arise naturally from differentiating a composite function n times and have important applications in combinatorics, statistics, the numerical solution of non-linear differential equations and other fields; they are given by (see e.g. [3, 8, 14])
where the sum is taken over the sequences satisfying
where \(m_1,m_2,\ldots ,m_{j-i+1} \ge 0.\) For large t the dominating term in eq.(2.17) is the first one, which implies
\(\square \)
3 Inverse Dickman subordinator and its properties
Let \(E^{D}(t)\) be the right continuous inverse of DS D(t), defined by
The process \(E^{D}(t)\) is called the inverse of the DS (IDS). It is non-decreasing and its sample paths are almost surely continuous [10]. Let h(x, t) be the density of the IDS. The LT of the IDS with respect to t is ([42])
where \(\phi (y)= \theta \int _{0}^{1}\frac{(1-e^{-yu})}{u}du\). Again taking the LT with respect to x:
Note that \(E^{D}(t)\) is non-Markovian with non-stationary increments. The tail probability of the IDS is given by
which implies that all moments of \(E^{D}(t)\) are finite, i.e., \(\mathbb {E} [E^{D}(t)^k] < \infty \) for any \(k > 0\).
3.1 The density of the inverse Dickman subordinator
Proposition 5
The density h(x, t) of the IDS \(E^{D}(t)\) is given by
where
or
and
with
where \(H(\cdot )\) is the Heaviside step function, or
where we used the fact that \(\tau _{(k)}\) is such that \(\tau _{(k)} = 0\) for \( \tau \le k\), and thus \(\mathcal {K}_{k,n}(x,t) = 0\) for \(t \le k\).
Proof
Let us consider the LT
Recalling that
where \(\text {Ein}(k)\) is the modified exponential integral, we have
From the fact that
we can write the density of the IDS as the convolution
The expression for h(0, t) can be evaluated explicitly (see Appendix 8) and the result is
Using eq.(2.8) and eq.(3.8) in eq.(3.7) completes the proof. \(\square \)
Proposition 6
The functions \(\psi _0(x,t)\) and \(\psi _k(x,t)\) in Prop. 5 can be written, respectively, as
where \(H_r\) denotes the harmonic number for \(r \in \mathbb {R}\) and \(\text {B} _z(\cdot ,\cdot )\) is the upper incomplete beta function, and
where we defined
with
where \({}_3{F}{}_{2}\) is the (3, 2)-generalized hypergeometric function with three upper parameters and two lower parameters.
Proof
See Appendix 9. \(\square \)
Next, we discuss an alternative form of the density of the IDS.
Proposition 7
The density h(x, t) of \(E^D(t)\) is given by
where
Proof
We have
from eq.(2.6), we have
Now, taking the derivative with respect to x gives the desired result. \(\square \)
In Figure 3 we have the plots of h(x, t) for some fixed values of x and t and in Figure 4 we have the 3D plot of h(x, t). These plots were made with Mathematica 13.3 using the expression for h(x, t) given in eq.(3.3) with \(\psi _0(x,t)\) given by eq.(3.9) and \(\psi _k(x,t)\) given by eq.(3.4) along with numerical integration of the integral in eq.(3.5).
Proposition 8
The probability density function h(x, t) of \(E^{D}(t)\) governs the following fractional differential equation
Proof
A similar approach as in Prop. 3 is used here. Taking the Laplace-Laplace transform of eq.(3.11) with respect to t and x leads to the same Laplace-Laplace transform as given in eq.(3.1). \(\square \)
Further, we discuss the asymptotic behavior of q-th order moments \(M_q(t)=\mathbb {E}(E^{D}(t))^{q},\; q>0,\) of \(E^{D}(t)\). The LT of \( M_q(t)\) is given by \(\overline{M}_q(s)=\frac{\varGamma (1+q)}{s(\phi (s))^{q}}\) (see e.g. [34, 59, 60]), where \(\phi (s)\) is the Laplace exponent given in eq.(2.4). We have,
Applying Tauberian theorem [9], we have the following asymptotic behavior for \(M_q(t)\)
Simulation of the DS and inverse DS: The algorithm for generating the sample trajectories of the DS shown in Fig. 5 is as follows:
1. take a particular value of the parameter \(\theta \) and generate iid random numbers \(U\sim \) Uniform[0, 1];
2. generate the increments of the DS D(t), see [46], using the relationship \(D(t+dt) - D(t) \overset{d}{= }D(dt) \overset{d}{= }\ U^{1/dt \theta }(1+D(dt))\), i.e.
$$\begin{aligned} D(dt) \overset{d}{= }\sum _{j=1}^{\infty }\left( \prod _{i=1}^{j} U_{i}^{\frac{1 }{\theta dt }}\right) ; \end{aligned}$$
3. the cumulative sum of the increments gives the sample trajectories of the DS D(t). The inverse DS sample trajectories are obtained by reversing the axes.
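The steps above can be sketched in Python. The helper names, the truncation level of the infinite series, and the path-inversion routine (which evaluates \(E^{D}(x)=\inf \{u\ge 0: D(u)>x\}\) directly, equivalent to reversing the axes) are our choices:

```python
import random

def dickman_increment(theta, dt, n_terms=100):
    """One increment D(dt), via the distributional identity
    D(dt) =_d sum_{j>=1} prod_{i<=j} U_i^{1/(theta*dt)}; the series is
    truncated at n_terms, since the partial products decay
    geometrically in expectation."""
    total, prod = 0.0, 1.0
    for _ in range(n_terms):
        prod *= random.random() ** (1.0 / (theta * dt))
        total += prod
    return total

def dickman_path(theta, t_max, n_steps):
    """Cumulative sum of iid increments: a sample path of D(t)."""
    dt = t_max / n_steps
    path, level = [0.0], 0.0
    for _ in range(n_steps):
        level += dickman_increment(theta, dt)
        path.append(level)
    return path

def inverse_at(path, dt, x):
    """E^D(x) = inf{u >= 0 : D(u) > x}, read from the sampled path."""
    for k, v in enumerate(path):
        if v > x:
            return k * dt
    return len(path) * dt  # x lies beyond the sampled horizon

random.seed(1)
p = dickman_path(theta=1.0, t_max=1.0, n_steps=100)
```

The resulting path is non-decreasing by construction, and evaluating `inverse_at` on a grid of x-values traces the IDS trajectory.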
4 Applications of DS and IDS as time changes
In this section, we introduce a time-changed Poisson process and Poisson process of order k by considering DS and IDS as time-changes. We also define space-fractional Skellam Dickman and non-homogeneous Poisson Dickman processes by subordinating Skellam process and non-homogeneous Poisson process by DS. We discuss the governing fractional differential equations of these time-changed processes.
The homogeneous Poisson process \(N(t),\; t\ge 0\), with parameter \( \lambda >0\) is defined as,
where the inter-arrival times \(T_{1},T_2,\ldots \) are iid exponential random variables with mean \(1/\lambda \). The probability mass function (PMF) \( P_n^N(t)= \mathbb {P}(N(t)=n)\) is given by
The PMF of the Poisson process obeys the differential-difference equation of the form
with initial condition
Further, \(P_{n}^N(t)=0, \; n\le -1\). Next, we introduce and study space- and time-fractional Poisson-Dickman processes and discuss their main properties.
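Before introducing the fractional variants, the classical PMF and the differential-difference equation \(\frac{d}{dt}P_n^N(t)=-\lambda P_n^N(t)+\lambda P_{n-1}^N(t)\) above can be verified numerically. A Python sketch (the helper names are ours):

```python
import math

def poisson_pmf(n, t, lam):
    """P_n(t) = e^{-lam t} (lam t)^n / n!, with P_n = 0 for n <= -1."""
    if n < 0:
        return 0.0
    return math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)

def equation_residual(n, t, lam, h=1e-6):
    """|dP_n/dt - (-lam P_n + lam P_{n-1})|, with the time derivative
    approximated by a central difference."""
    lhs = (poisson_pmf(n, t + h, lam) - poisson_pmf(n, t - h, lam)) / (2 * h)
    rhs = -lam * poisson_pmf(n, t, lam) + lam * poisson_pmf(n - 1, t, lam)
    return abs(lhs - rhs)
```

The residual is of the order of the finite-difference error for any n, t and \(\lambda \), as the equation holds exactly for the Poisson PMF.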
4.1 Space-fractional Poisson-Dickman process (SFPDP)
Here, we introduce the space-fractional Poisson-Dickman process \(N^{D}(t)\) by subordinating the homogeneous Poisson process with the DS, defined by
The characteristic function of SFPDP is
The mean, variance and covariance of SFPDP can be easily calculated by using the characteristic function,
Proposition 9
The Lévy density \(\nu _{N^{D}}(x)\) of SFPDP is given by
Proof
Using the Lévy-Khintchine formula (see [54]), it follows
which is the characteristic exponent of SFPDP given in eq.(4.1). \(\square \)
4.2 Time-fractional Poisson-Dickman process (TFPDP)
Next, we define the time-fractional Poisson-Dickman stochastic process:
where N(t) is the homogeneous Poisson process with parameter \(\lambda \) and the IDS \(E^{D}(t)\) is independent of N(t). Note that this process is non-Markovian due to the time-change \(E^{D}(t),\) which is not a Lévy process. The state probability \(\mathbb {P}[N_{E}(t)=n]=P^{E}_{n}(t)\) is given by
for \(n = 0,1,2,\ldots \).
Proposition 10
The state probability \(P^{E}_{n}(t)\) of \(N_{E}(t)\) satisfies
with initial condition
Proof
Note that,
Taking generalized Riemann-Liouville derivative given in eq.(2.2) and using Prop. 8
Using eq.(2.3) and the fact that \( P^E_{n}(0)=P_n(0)=1\) completes the proof. \(\square \)
The process \(N_E(t)\) is a renewal process such that
where the \(T_i\)'s are the iid inter-arrival times of \(N_E(t)\), which satisfy
The renewal function of the process \(N_E(t)\) is
Then, we can obtain the covariance function of the process \(N_E(t)\), that is (see [35])
The next statement is the analogue of the celebrated Watanabe martingale characterization of homogeneous Poisson processes and fractional Poisson processes; see [1, 61].
Proposition 11
Let \(\{M(t), \; t\ge 0\}\) be a \(\{ \mathscr {F}_{t}\}_{t\ge 0}\)-adapted simple locally finite point process and let D(t) be a strictly increasing subordinator such that \(E^D(t) = \inf \{u\ge 0 : D(u) >t\}\) is the inverse of D(t). Suppose \(E^D(t)\) is \(L^p\)-bounded for some \(p>1\). The process \(\{M(t)- \lambda E^{D}(t)\}\) is a right-continuous martingale with respect to the filtration \(\mathscr {F}_{t}= \sigma (M(s), s\le t) \vee \sigma (E^{D}(s), s\le t)\) for some \(\lambda >0\) iff M(t) is a TFPDP \( N_{E}(t)\).
Proof
The proof follows using similar steps as discussed in [1, 26]. Let \(\{M(t)- \lambda E^D(t)\}\) be an \(\mathscr {F}_{t}\)-martingale and let \(D(t)= \inf \{x\ge 0 : E^D(x) \ge t\}\) be the collection of stopping times of \(E^D(t)\) for \(t \ge 0\). Then by the optional sampling theorem (see Theorem 6.29 in [29]), \(\{M(D(t))- \lambda E^D(D(t))\}\) is a martingale, i.e. \(\{M(D(t))- \lambda t\}\) is a martingale, since the composition satisfies \(E^D(D(t))= t\) for all t. By the Watanabe characterization (see [11, 61]), it follows that \(M(D(t)) = N(t) = N(E^{D}(D(t)))\) is a homogeneous Poisson process with intensity \(\lambda >0\). Thus M(t) is the TFPDP denoted by \(N_E(t)\).
Conversely, if \(M(t)= N_E(t),\) then we have to show that \(\{N_E(t)- \lambda E^D(t)\}\) is an \(\mathscr {F}_{t}\)-martingale. Since \(M\ge 0\) and \(E^D(t)\) is bounded in \(L^{p},\;p>1,\) by eq.(3.2), the process \(\{N_E(t)- \lambda E^D(t), 0\le t\le T\}\) is uniformly integrable. The result follows using Doob's optional sampling theorem (see e.g. Theorem 6.29 in [29]), because \(N(t)-\lambda t\) is a martingale and \(E^D(t)\) is a stopping time. \(\square \)
4.3 Time-fractional Poisson-Dickman process of order k (TFPDPoK)
Here, we introduce the time-fractional Poisson-Dickman process of order k based on the jump distribution of the \(X_i\). Let \( X_i,\;i=1,2,\cdots ,\) be discrete uniform random variables such that \( \mathbb {P}(X_i = j) = \frac{1}{k},\; j=1,2,\cdots , k,\) and let N(t) be the Poisson process with parameter \(k\lambda \). Then the process defined by
is called the time-fractional Poisson-Dickman process of order k (TFPDPoK). The marginals of \(Z_E^k(t)\) can be obtained by time-changing the Poisson process of order k, denoted by \(N^{k}(t)\) (see [32]), with an independent IDS \(E^{D}(t)\), such that
Further, we define the time-fractional compound Poisson-Dickman process,
under the assumption that the \(X_i,\;i=1,2,\cdots ,\) are exponentially distributed with parameter \(\eta \) and the Poisson process has parameter \(\lambda \). The process \(Z_{E}^{\eta }(t)\) can be written as \(Z(E^{D}(t))\), i.e. the time-change of the CPP \(Z(t)=\sum _{i=1}^{N(t)} X_i\), where \(X_i, i=1,2,\ldots \) are exponentially distributed random variables, by the IDS \(E^{D}(t)\). The LT of the density \(P_x^{Z,\eta }(t)\) is given by
Proposition 12
The density \(P_x^{Z,\eta }(t)\) of \(Z_{E}^{\eta }(t)\) satisfies the following fractional differential equation
with conditions
Proof
Note that
Taking generalized Riemann-Liouville derivative given in eq.(2.2) and using Prop. 8, we get
The density \(P_{x}(y)\) of the CPP satisfies the following equation, with the corresponding condition (see [7]),
Using eq.(2.3), it completes the proof. \(\square \)
4.4 Space-fractional Skellam Dickman stochastic process (SFSDP)
Let S(t) be a Skellam process, such that
where \(N_{1}(t)\) and \(N_{2}(t)\) are two independent homogeneous Poisson processes with intensity \(\lambda _{1} >0\) and \(\lambda _2>0,\) respectively. This process is symmetric only when \(\lambda _1= \lambda _2\). The PMF \( P_{n}^S(t)=\mathbb {P}(S(t)=n)\) of S(t) is given by
where \(I_n\) is the modified Bessel function of the first kind,
The PMF \(P_{n}^S(t)\) satisfies the following differential-difference equation
with initial conditions \(P_{0}^S(0)=1\) and \(P_{n}^S(0)=0, \;n\ne 0\). Next, we introduce the space-fractional Skellam Dickman process (SFSDP) \(S^{D}(t)\) by subordinating the Skellam process S(t) with the DS D(t), defined by
The LT of the SFSDP is
The mean and covariance of the SFSDP can be easily calculated by using the LT,
Proposition 13
The Lévy density \(\nu _{S^{D}}(x)\) of the SFSDP is given by
Proof
Substituting the Lévy densities \(\nu _{N_{1}^{D}}(x)\) and \(\nu _{N_2^{D}}(x)\) of the Poisson-Dickman processes \(N_{1}^{D}(t)\) and \(N_{2}^{D}(t)\) with parameters \(\lambda _1>0\) and \( \lambda _2>0\) respectively, we obtain
which gives the desired result. \(\square \)
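For numerical work with the Skellam distribution, the PMF can also be computed by convolving the two independent Poisson marginals, avoiding the Bessel-function form altogether. A Python sketch (the helper names, log-space evaluation and truncation level are our choices):

```python
import math

def poisson_pmf(k, mu):
    """Poisson(mu) PMF, zero for negative k; evaluated in log space
    via lgamma to avoid overflow for large k."""
    if k < 0:
        return 0.0
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

def skellam_pmf(n, t, lam1, lam2, k_max=150):
    """P(S(t) = n) for S(t) = N1(t) - N2(t), obtained by convolving
    the two Poisson marginals: sum_k P(N1(t) = n + k) P(N2(t) = k)."""
    return sum(poisson_pmf(n + k, lam1 * t) * poisson_pmf(k, lam2 * t)
               for k in range(k_max))
```

This agrees with the Bessel-function form \(e^{-(\lambda _1+\lambda _2)t}(\lambda _1/\lambda _2)^{n/2} I_{|n|}(2t\sqrt{\lambda _1\lambda _2})\), and is symmetric in n only when \(\lambda _1=\lambda _2\).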
Further, we consider the Skellam process S(t) time-changed by the independent IDS \(E^D(t)\). The time-changed Skellam process is defined by
Proposition 14
The state probability \(\mathbb {P}[S_{E}(t)=n]=Q^{S}_{n}(t)\) of \(S_{E}(t)\) satisfies
with initial condition
Proof
The result follows using the steps of Prop. 10. \(\square \)
Next, we simulate the sample paths of the Poisson-Dickman process \( N^{D}(t)\) and the Skellam Dickman process \(S^{D}(t)\) defined in eq.(4.2) and eq.(4.3), respectively. By composing the Poisson process N(t) with the independent subordinator D(t), we obtain the trajectories of the Poisson-Dickman process. Similarly, the sample trajectories of the Skellam Dickman process are obtained by composing the Skellam process with the DS; see Fig. 6.
4.5 Non-homogeneous Poisson-Dickman process
Consider a non-homogeneous Poisson process (NHPP) \(H(t)= N(\varLambda (t)), t \ge 0,\) with intensity function
We denote for \(0 \le s<t\)
where the function \(\varLambda (t) = \varLambda (0, t)\) is known as the rate function or cumulative rate function. Thus, H is a stochastic process with independent, but not necessarily stationary, increments. The probability mass function for \(0\le \mu <t\) is (see [36])
The distribution \(P_{n}^{H}(t)\) satisfies the difference-differential equation
with initial condition \(P_{n}^{H}(0, \mu )=\delta _{n,0}\) and \(P_{-1}^{H}(t, \mu )=0\). We introduce the non-homogeneous Poisson-Dickman stochastic process (NHPDSP). A time-changed representation of the NHPDSP can be written as
where \(E^{D}(t)\) is IDS and independent of NHPP H(t). We consider a stochastic process
and
Their marginal distributions can be written as follows:
Proposition 15
The PMF \(Q^H_n(t, \mu )\) satisfies the following fractional differential-difference integral equations:
with the usual initial condition.
Proof
The proof follows on similar lines as discussed in Theorem 1 in [12]. \(\square \)
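The increment distribution of the underlying NHPP, \(\mathbb {P}(H(t)-H(s)=n)=e^{-\varLambda (s,t)}\varLambda (s,t)^n/n!\), is straightforward to evaluate numerically. A Python sketch with an illustrative intensity of our own choosing (the function names and quadrature are not from the paper):

```python
import math

def cumulative_rate(rate, s, t, n=10_000):
    """Lambda(s, t) = int_s^t rate(u) du, via the midpoint rule."""
    h = (t - s) / n
    return sum(rate(s + (i + 0.5) * h) for i in range(n)) * h

def nhpp_increment_pmf(n, rate, s, t):
    """P(H(t) - H(s) = n) = e^{-Lambda(s,t)} Lambda(s,t)^n / n!."""
    lam = cumulative_rate(rate, s, t)
    return math.exp(-lam) * lam ** n / math.factorial(n)

# Illustrative intensity (our choice): a periodically modulated rate.
rate = lambda u: 1.0 + 0.5 * math.sin(u)
```

With a constant intensity the formula reduces to the homogeneous Poisson PMF, and for any intensity the increment probabilities sum to one.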
5 Conclusions
The Dickman distribution, also known as the Dickman-Goncharov distribution, has its origin in analytic number theory. The Dickman distribution is infinitely divisible and supported on the positive real line. Hence, one can define a continuous-time Lévy process with non-decreasing sample paths, known as the DS, whose marginals follow the Dickman distribution. Inverse subordinators, which are the right-continuous inverses of subordinators, play a crucial role in defining time-changed stochastic processes by changing the clock of the original process. To the best of our knowledge, the DS is discussed in the literature with no discussion of the inverse DS. With the help of the DS and IDS, a new class of time-changed Poisson processes, namely the space-fractional Poisson-Dickman and time-fractional Poisson-Dickman processes, is discussed here. Note that these processes can be viewed as generalizations of the space-fractional Poisson and time-fractional Poisson processes, respectively, which have been extensively studied in the literature in recent years.
References
Aletti, G., Leonenko, N., Merzbach, E.: Fractional Poisson fields and martingales. J. Stat. Phys. 170, 700–730 (2018)
Alrawashdeh, M.S., Kelly, J.F., Meerschaert, M.M., Scheffler, H.P.: Applications of inverse tempered stable subordinators. Comput. Math. Appl. 73(6), 892–905 (2017)
Andrews, G.E.: The Theory of Partitions. Cambridge University Press, Cambridge (1998)
Andrews, G.E., Askey, R., Roy, R.: Special Functions. Cambridge University Press, Cambridge (1999)
Appell, P., Kampé de Fériet, J.: Fonctions Hypergéométriques et Hypersphériques. Gauthier-Villars, Paris (1926)
Bazhlekova, E.: Subordination in a class of generalized time-fractional diffusion-wave equations. Fract. Calc. Appl. Anal. 21(4), 869–900 (2018). https://doi.org/10.1515/fca-2018-0048
Beghin, L., Macci, C.: Alternative forms of compound fractional Poisson processes. Abstr. Appl. Anal. 2012, Article ID 747503, 30 pages (2012)
Bell, E.T.: Exponential polynomials. Ann. Math. 35, 258–277 (1934)
Bertoin, J.: Lévy Processes. Cambridge University Press, Cambridge (1996)
Bertoin, J.: Subordinators: examples and applications. In: Lectures on Probability Theory and Statistics, pp. 1–91. Springer, New York (1999)
Brémaud, P.: Point Processes and Queues. Springer, New York (1981)
Buchak, K., Sakhno, L.: On the governing equations for Poisson and Skellam processes time-changed by inverse subordinators. Theory Probab. Math. Stat. 98, 91–104 (2019)
Caravenna, F., Sun, R., Zygouras, N.: The Dickman subordinator, renewal theorems, and disordered systems. Electron. J. Probab. 24, 1–40 (2019)
Comtet, L.: Advanced Combinatorics: The Art of Finite and Infinite Expansions. Springer, Dordrecht (1974)
Cont, R., Tankov, P.: Financial Modeling with Jump Processes. Chapman & Hall/CRC Press, London/Boca Raton (2004)
Covo, S.: On approximations of small jumps of subordinators with particular emphasis on a Dickman-type limit. J. Appl. Probab. 46, 732–755 (2009)
Covo, S.: One-dimensional distributions of subordinators with upper truncated Lévy measure, and applications. Adv. Appl. Probab. 41, 367–392 (2009)
de Bruijn, N.G.: On the number of positive integers \(\le x \) and free of prime factors \(> y\). Proc. Koninklijke Nederlandse Akademie van Wetenschappen Ser. A Math. Sci. 54, 50–60 (1951)
Devroye, L., Fawzi, O.: Simulating the Dickman distribution. Stat. Probab. Lett. 80, 242–247 (2010)
Dickman, K.: On the frequency of numbers containing prime factors of a certain relative magnitude. Arkiv Matematik Astronomi Och Fysik 22, A-10 (1930)
Erdélyi, A.: Asymptotic Expansions. Dover, New York (1956)
Franze, C.S.: A family of multiple integrals connected with relatives of the Dickman function. J. Number Theory 179, 33–49 (2017)
Grabchak, M., Molchanov, S., Panov, V.: Around the infinite divisibility of the Dickman distribution and related topics. J. Math. Sci. 515, 91–120 (2022)
Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products. Academic Press, Amsterdam (2014)
Griffiths, R.C.: On the distribution of points in a Poisson Dirichlet process. J. Appl. Probab. 25, 336–345 (1988)
Gupta, N., Kumar, A.: Fractional Poisson processes of order \(k\) and beyond. J. Theor. Probab. 36, 2165–2191 (2023)
Holst, L.A.R.S.: The Poisson-Dirichlet Distribution and Its Relatives Revisited. Preprint of the Royal Institute of Technology, Stockholm (2001)
Ipsen, Y.F., Maller, R.A., Shemehsavar, S.: Size-biased sampling from the Dickman subordinator. Stoch. Process. Appl. 130, 6880–6900 (2020)
Kallenberg, O.: Foundations of Modern Probability. Springer, New York (1997)
Kochubei, A.N.: General fractional calculus, evolution equations, and renewal processes. Integr. Eqn. Oper. Theory 71, 583–600 (2011)
Kochubei, A., Luchko, Y., Tarasov, V.E., Petráš, I. (eds.): Handbook of Fractional Calculus with Applications, vol. 1. De Gruyter, Berlin (2019)
Kostadinova, K.Y.: On the Poisson process of order \(k\) space. Pliska Studia Mathematica Bulgarica 22, 117–128 (2013)
Kumar, A., Vellaisamy, P.: Inverse tempered stable subordinators. Stat. Prob. Lett. 103, 134–141 (2015)
Kumar, A., Gajda, J., Wyomaska, A., Pooczaski, R.: Fractional Brownian motion delayed by tempered and inverse tempered stable subordinators. Methodol. Comput. Appl. Probab. 21, 185–202 (2019)
Leonenko, N.N., Meerschaert, M.M., Schilling, R.L., Sikorskii, A.: Correlation structure of time-changed Lévy processes. Commun. Appl. Ind. Math. 6, e-483 (2014)
Leonenko, N., Scalas, E., Trinh, M.: The fractional non-homogeneous Poisson process. Stat. Probab. Lett. 120, 147–156 (2017)
Luchko, Y.: Operational calculus for the general fractional derivative and its applications. Fract. Calc. Appl. Anal. 24, 338–375 (2021). https://doi.org/10.1515/fca-2021-0016
Luchko, Y.: General fractional integrals and derivatives of arbitrary order. Symmetry 13(5), 755 (2021)
Luchko, Y.: Convolution series and the generalized convolution Taylor formula. Fract. Calc. Appl. Anal. 25, 207–228 (2022). https://doi.org/10.1007/s13540-021-00009-9
Magnus, W., Bateman, H., Erdélyi, A., Oberhettinger, F., Tricomi, F.G.: Tables of Integral Transforms. McGraw-Hill, New York (1954)
Mainardi, F., Masina, E.: On modifications of the exponential integral with the Mittag-Leffler function. Fract. Calc. Appl. Anal. 21, 1156–1169 (2018). https://doi.org/10.1515/fca-2018-0063
Meerschaert, M.M., Scheffler, H.P.: Triangular array limits for continuous time random walks. Stoch. Process. Appl. 118, 1606–1633 (2008)
Meerschaert, M.M., Sikorskii, A.: Stochastic Models for Fractional Calculus. de Gruyter, Berlin (2012)
Meerschaert, M.M., Straka, P.: Inverse stable subordinators. Math. Model. Nat. Phenom. 8, 1–16 (2013)
Molchanov, S.A., Panov, V.A.: The Dickman-Goncharov distribution. Russ. Math. Surv. 75, 1089 (2020)
Penrose, M.D., Wade, A.R.: Random minimal directed spanning trees and Dickman-type distributions. Adv. Appl. Probab. 36, 691–714 (2004)
Pinsky, R.G.: A natural probabilistic model on the integers and its relation to Dickman-type distributions and Buchstab’s function. (2016)
Pinsky, R.G.: On the strange domain of attraction to generalized Dickman distributions for sums of independent random variables. Electron. J. Probab. 23, 1–17 (2018)
Podlubny, I.: Fractional Differential Equations. Academic Press, New York (1999)
Prudnikov, A.P., Brychkov, Yu.A., Marichev, O.I.: Integrals and Series, vol. 4: Direct Laplace Transforms. Gordon & Breach Science Publishers, New York (1992)
Ramanujan, S.: The Lost Notebook and Other Unpublished Papers. With an introduction by G.E. Andrews. Narosa Publishing House, New Delhi (1988)
Rogosin, S., Dubatovskaya, M.: Mkhitar Djrbashian and his contribution to fractional calculus. Fract. Calc. Appl. Anal. 23, 1797–1809 (2020). https://doi.org/10.1515/fca-2020-0089
Samorodnitsky, G., Taqqu, M.S.: Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Chapman & Hall, New York (1994)
Sato, K.-i.: Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge (1999)
The Wolfram Functions Site: https://functions.wolfram.com/06.16.07.0001.01
The Wolfram Functions Site: https://functions.wolfram.com/06.16.06.0016.01
The Wolfram Functions Site: https://functions.wolfram.com/06.19.26.0005.01
Toaldo, B.: Convolution-type derivatives, hitting-times of subordinators and time-changed \(C_0\)-semigroups. Potential Anal. 42, 115–140 (2015)
Veillette, M., Taqqu, M.S.: Numerical computation of first-passage times of increasing Lévy processes. Methodol. Comput. Appl. Probab. 12, 695–729 (2010)
Veillette, M., Taqqu, M.S.: Using differential equations to obtain joint moments of first-passage times of increasing Lévy processes. Statist. Probab. Lett. 80, 697–705 (2010)
Watanabe, S.: On discontinuous additive functionals and Lévy measures of a Markov process. Jpn. J. Math. 34, 53–70 (1964)
Whitt, W.: Stochastic-Process Limits. Springer, New York (2002)
Acknowledgements
Nikolai Leonenko (NL) would like to thank the Isaac Newton Institute (INI) for Mathematical Sciences for support and hospitality during the programme "Fractional Differential Equations" (FDE2) and the programme "Uncertainty Quantification of Modeling of Material". NL would also like to thank Prof. S.A. Molchanov for discussions at INI about the Dickman distribution and beyond. NL was partially supported under the Australian Research Council's Discovery Projects funding scheme (project number DP220101680), LMS grant 42997 (UK), and FAPESP (Brazil) grant \(22/09207-8\). Further, N.G. would like to express her gratitude to the Indian Statistical Institute (ISI), Delhi, India for support through its post-doctoral program. J.V. is grateful to FAPESP for financial support (process \(22/09207-8\)).
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
Appendices
Appendix
In this appendix, we prove some results, stated in Sections 2 and 3, for the densities of the DS and the IDS.
Proof of Proposition 1
To simplify the expressions, given f(x, t) as in eq.(2.5), let us define g(x, t) as
Then eq.(2.5) can be rewritten in terms of g(x, t) as
Let us denote by \(g_{(\alpha ,\beta ]}(x,t)\) the function g(x, t) for \(x\in (\alpha ,\beta ]\). Thus
and
We can write a single expression for \(x \in (0,2]\) as
with the definition
and
For \(g_{(2,3]}(x,t)\) we have
and using eq.(6.1),
where in the last term we used the fact that \(\varphi _1(a_2,t) = 0\) for \(a_2 \le 1\) in order to replace the lower endpoint \(a_2=0\) of the integral by \( a_2 = 1\). Thus we can write a single expression for \(x \in (0,3]\) as
with the definition
Using induction it is not difficult to see that the general expression for g(x, t) is
with \(\varphi _k(x,t)\) as in eq.(2.9). Note that the series in eq.(6.3) stops at \(k = n\) for \(x \in (n,n+1]\).
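The piecewise construction above parallels that of the classical Dickman function \(\rho \), which equals 1 on [0, 1] and satisfies the delay differential equation \(u\rho ^\prime (u) = -\rho (u-1)\) for \(u > 1\). As a purely illustrative numerical sketch (not part of the proof; all function names are ours), \(\rho \) can be integrated on a uniform grid, with the exact value \(\rho (u) = 1 - \log u\) on [1, 2] serving as a check:

```python
import math

def dickman_rho(u_max=3.0, h=1e-3):
    """Integrate the Dickman function rho on [0, u_max]:
    rho(u) = 1 on [0, 1] and u * rho'(u) = -rho(u - 1) for u > 1,
    i.e. rho'(u) = -rho(u - 1) / u, stepped with the trapezoidal rule."""
    m = round(1.0 / h)                      # grid points per unit interval
    n = round(u_max / h)
    rho = [1.0] * (m + 1)                   # rho = 1 on [0, 1]
    for i in range(m + 1, n + 1):
        u_prev, u_cur = (i - 1) * h, i * h
        incr = 0.5 * h * (rho[i - 1 - m] / u_prev + rho[i - m] / u_cur)
        rho.append(rho[i - 1] - incr)
    return rho

rho = dickman_rho()
# exact value on [1, 2] is rho(u) = 1 - log(u); compare at u = 2
print(rho[2000], 1 - math.log(2))
```

The step size \(h\) trades accuracy for work; with \(h = 10^{-3}\) the trapezoidal error at \(u = 2\) is far below \(10^{-5}\).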
Proof of Proposition 2
Let us start by evaluating \(\varphi _1(x,t)\) as in eq.(6.2). Using the change of variable
we rewrite \(\varphi _1(x,t)\) as
where we used the notation in eq.(2.13). Using the geometric series for \((1-u)^{-1}\) we obtain
The integral in eq.(2.9) can be conveniently rewritten with the change of variable
which gives
We will look for an expression for \(\varphi _k(x,t)\) similar to eq.(7.1), that is,
Using this expression in eq.(7.2), the geometric series for \( (1-u)^{-1}\), and the change of variable \(u = x_{(k)} y\), we obtain
where
For \(k = 2,3, \ldots \) we have
The above integral is of the form
with \(\mu \ne \nu \). This integral is related to Appell \({F}_{1}\) function, defined as [5]
for \(|x|< 1\) and \(|y| < 1\), and which can be represented by the Picard integral
for \(\text {Re}\alpha > 0\) and \(\text {Re}(\gamma -\alpha ) > 0\), where \( \text {B}(\cdot ,\cdot )\) is the beta function. Thus we have
where we used \(\text {B}(\alpha ,1) = 1/\alpha \). When \(y = t x\) we have the relation [5]
and using it we obtain
where we have used \((\alpha )_m/(\alpha +1)_m = \alpha /(\alpha +m)\). Moreover, using the identity [4]
we can also write
Since \((-N)_j = 0\) for \(j = N +1, N+2,\ldots \), the hypergeometric function \( {}_2{F}{}_{1}\) in eq.(7.6) or eq.(7.7) is a polynomial of degree m in \(\nu /\mu \) or \( \nu /(\nu -\mu )\), respectively, due to the parameter \(-m\). However, there is also the parameter \(-1-\beta -m\) in \({}_2{F}{}_{1}\). If \(\beta \) is not an integer, then \((1-\beta -m)_j\) does not vanish. If \(\beta \) is an integer, then for \(\beta = 1,2,\ldots \) we still have a polynomial of degree m in \(\nu /\mu \) or \(\nu /(\nu -\mu )\), but for \( \beta = 0,-1,-2,\ldots \) the expression for \({}_2{F}{}_{1}\) diverges. So we assume \(\beta > 0\) in eq.(7.6) or eq.(7.7).
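The truncation caused by the parameter \(-m\) is easy to verify numerically. The following sketch (illustrative only; the helper names are ours) evaluates \({}_2{F}{}_{1}(-m, b; c; z)\) as the degree-\(m\) polynomial obtained because \((-m)_j = 0\) for \(j > m\), and checks it against the classical identity \({}_2{F}{}_{1}(-m, b; b; z) = (1-z)^m\):

```python
import math

def pochhammer(a, n):
    """Rising factorial (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1."""
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

def hyp2f1_poly(m, b, c, z):
    """2F1(-m, b; c; z): the series truncates at j = m since (-m)_j = 0 for j > m."""
    return sum(pochhammer(-m, j) * pochhammer(b, j)
               / (pochhammer(c, j) * math.factorial(j)) * z**j
               for j in range(m + 1))

# sanity check against 2F1(-m, b; b; z) = (1 - z)**m
print(hyp2f1_poly(3, 2.5, 2.5, 0.4), (1 - 0.4)**3)
```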
For the integral in eq.(7.4) we have \(\alpha = t\theta +k+n-1\) , \(\beta = t\theta \), \(\beta ^\prime = 1-t\theta \), \(\mu = (k-1)/k\) and \(\nu = (k-2)/(k-1)\), and so
or, using the hypergeometric function,
Finally, using eq.(7.8) in eq.(7.3) and using the summation index \(n + j = r\) gives eq.(2.12) (with \(r \rightarrow n\)).
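The Appell \(F_1\) function used in this proof can be cross-checked numerically by comparing its double power series with the Picard integral representation. The sketch below is illustrative only (all function names and the sample parameters are ours); the parameters are chosen so that \(\text {Re}\,\alpha > 0\), \(\text {Re}(\gamma -\alpha ) > 0\) and the integrand vanishes at both endpoints, so a composite Simpson rule applies directly:

```python
import math

def poch(a, n):
    """Rising factorial (a)_n."""
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

def f1_series(a, b1, b2, c, x, y, terms=50):
    """Appell F1 via its double power series (valid for |x| < 1 and |y| < 1)."""
    return sum(poch(a, m + n) * poch(b1, m) * poch(b2, n)
               / (poch(c, m + n) * math.factorial(m) * math.factorial(n))
               * x**m * y**n
               for m in range(terms) for n in range(terms))

def f1_picard(a, b1, b2, c, x, y, n=2000):
    """Appell F1 via the Picard integral (needs Re a > 0, Re(c - a) > 0);
    composite Simpson rule on [0, 1] divided by B(a, c - a)."""
    def g(t):
        return (t**(a - 1) * (1 - t)**(c - a - 1)
                * (1 - x * t)**(-b1) * (1 - y * t)**(-b2))
    h = 1.0 / n
    s = g(0.0) + g(1.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(k * h)
    beta = math.gamma(a) * math.gamma(c - a) / math.gamma(c)
    return s * h / 3 / beta

# with a = 2, c = 4 the integrand vanishes at t = 0 and t = 1
print(f1_series(2.0, 0.5, 0.3, 4.0, 0.2, -0.3))
print(f1_picard(2.0, 0.5, 0.3, 4.0, 0.2, -0.3))
```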
Proof of eq.(3.8)
Let us recall that
where \(\text {Ein}(\cdot )\) is the modified exponential integral function, defined as
It is related to the exponential integral \(\text {E}_{1}(\cdot )\), i.e.,
through the relation
It is known (see, for example, [40], page 267) that
It is also known (see, for example, [50], page 100) that
From eq.(3.6) we have
Using eq.(8.1) we have
Thus, using eq.(8.2) and eq.(8.3), we obtain
which gives eq.(3.8).
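The relation \(\text {E}_{1}(x) = -\gamma - \log x + \text {Ein}(x)\) used above can be verified numerically: compute \(\text {Ein}\) from its everywhere-convergent series \(\sum _{k\ge 1} (-1)^{k+1} x^k/(k \cdot k!)\), and \(\text {E}_{1}\) by direct quadrature of \(\int _x^\infty e^{-t} t^{-1}\, dt\). An illustrative sketch (function names are ours):

```python
import math

def Ein(x, terms=40):
    """Modified exponential integral: Ein(x) = sum_{k>=1} (-1)^(k+1) x^k / (k * k!)."""
    return sum((-1)**(k + 1) * x**k / (k * math.factorial(k))
               for k in range(1, terms + 1))

def E1_numeric(x, n=20000, cutoff=40.0):
    """E1(x) = int_x^infty exp(-t)/t dt, Simpson rule on [x, x + cutoff];
    the truncated tail is below 1e-17 for cutoff = 40."""
    a, b = x, x + cutoff
    h = (b - a) / n
    s = math.exp(-a) / a + math.exp(-b) / b
    for k in range(1, n):
        t = a + k * h
        s += (4 if k % 2 else 2) * math.exp(-t) / t
    return s * h / 3

euler_gamma = 0.5772156649015329  # Euler-Mascheroni constant
x = 1.0
# E1(x) = -gamma - log(x) + Ein(x), so the two printed values should agree
print(Ein(x), E1_numeric(x) + euler_gamma + math.log(x))
```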
Proof of Proposition 6
Evaluation of eq.(3.9): Note that, due to the presence of \(H(1-t+\tau )\), the integral \(\psi _0(x,t)\) is such that
Thus we have an integration over [0, t] for \(0\le t \le 1\) or over \( [t-1,t]\) for \(t \ge 1\). Using the Taylor series for \(\log (1-z)\) we obtain for \(z = \tau /t\) that
We will analyse the integration along [0, t] and \([0,t-1]\) separately. Thus for the integration along [0, t] with \(t \ge 0\) we have
where \(H_r\) denotes the harmonic number for \(r \in \mathbb {R}\) defined as [55]
where \(\varPsi (\cdot )\) is the digamma function, which can be written as the series [56]
For the integration along [0, a] with \(a < t\) we use
in eq.(9.1), and then
The series can be written as
where \(B_z(\cdot ,\cdot )\) is the upper incomplete beta function and in the last equality we used its representation in terms of the hypergeometric function \({}_2{F}{}_{1}\) as given in [57]. Thus for \(a = t-1\) we have
Finally we have
If we use the notation introduced in the previous appendix as \(x_{(k)}\) now for the variable t, that is, \(t_{(1)} = 0 \) for \(t < 1\) and \(t_{(1)} = (t-1)/t\) for \(t \ge 1\), we can write the above expressions as
which is eq.(3.9).
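The harmonic numbers \(H_r\) for real \(r\), defined through the digamma function as above, can also be computed directly from the series \(H_r = \sum _{n\ge 1} r/(n(n+r))\), which is equivalent to the digamma series cited above. A small illustrative sketch (function name is ours; the series converges like \(r/N\), so many terms are needed for high accuracy):

```python
import math

def harmonic(r, n_terms=10**6):
    """Generalized harmonic number H_r = sum_{n>=1} r / (n (n + r)),
    valid for real r that is not a negative integer; for positive
    integer r it reduces to 1 + 1/2 + ... + 1/r."""
    return sum(r / (n * (n + r)) for n in range(1, n_terms + 1))

# integer case: matches the usual finite sum
print(harmonic(5.0), sum(1.0 / k for k in range(1, 6)))   # both ~ 2.28333
# non-integer case: H_{1/2} = 2 - 2 log 2
print(harmonic(0.5), 2 - 2 * math.log(2))
```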
Evaluation of eq.(3.10): In order to evaluate \(\psi _k(x,t)\) we need to evaluate eq.(3.5). Due to the presence of \(H(1-t+\tau )\), the lower limit of integration in the integral \(\mathcal {K}_{k,n}(x,t)\) is \(\text {max}(t-1,0)\). Furthermore, we recall that \(\tau _{(k)}\) is such that \(\tau _{(k)} = 0\) for \(\tau \le k\). So, if \(t \le k\), we have
For \(t > k\), we have
So we have an integration along [k, t] for \(k \le t \le k+1\) and an integration along \([t-1,t]\) for \(t \ge k+1\). Let us start with the integral along [k, t]. Using the Taylor series for \(\log {(1-z)}\) we have
Both integrals are of the form
In terms of the variable \(u = (\tau -k)/(\tau -k+1)\) we have
where \(t_{(k)} = (t-k)_+/(t-k+1)\). Taking \(u = t_{(k)}z\) we obtain
where we used eq.(7.5). So we have
Using eq.(9.4) in eq.(9.2) we obtain
where
The last term in the above expression can be written, using the series representation of \({}_2{F}{}_{1}\), as
Using the identities
we can write
where \({}_3{F}{}_{2}\) is the generalized hypergeometric function with three upper parameters and two lower parameters. Plugging this last expression in eq.(9.6), using again the definition of \({}_{2}{F}{}_{1}\) and identities involving the Pochhammer symbol, we obtain
Let us recall that eq.(9.5) holds for \(k\le t \le k+1\).
For \(t \ge k+1\) we need to evaluate
We can take advantage of the above computations by writing the integral along \([t-1,t]\) as the integral along [0, t] minus the integral along \( [0,t-1]\). Replacing t by \(t-1\) in eq.(9.3) and repeating the same steps, we obtain
where we used the fact that
Thus we obtain
for \(t \ge k+1\). Note that, since by definition – see eq.(2.13) – we have \(t_{(k+1)} = 0\) for \(t < k+1\), we can use eq.(9.7) for both situations \(k \le t \le k+1\) and \(t \ge k+1\).
Finally, using eq.(9.7) in eq.(3.4) and using \(n+j = s\) as summation index we obtain eq.(3.10) (with \(s \rightarrow n\)).
Rights and permissions
About this article
Cite this article
Gupta, N., Kumar, A., Leonenko, N. et al. Generalized fractional derivatives generated by Dickman subordinator and related stochastic processes. Fract Calc Appl Anal (2024). https://doi.org/10.1007/s13540-024-00289-x
Keywords
- Dickman subordinator
- convolution type fractional derivatives
- inverse Dickman subordinators
- Poisson-Dickman process