Introduction

Fractional differential equations have a history of more than 300 years. Many mathematicians, such as Euler, Laplace, Abel, Liouville, Riemann, Grünwald, Letnikov and Riesz, have worked in this field of mathematics. In 1974, the first conference on fractional calculus and its applications was held [1]. Podlubny [2] wrote a book that provides the basic theory of fractional differentiation, fractional differential equations and methods for their solution. Models based on partial differential equations and the calculus of variations have also been generalized to fractional derivatives. For instance, fractional-order partial differential equation-based formulations have been applied to multi-scale nonlocal contrast enhancement with texture preservation [3] and to iterative learning control with high-order internal models [4]. In image processing, fractional calculus is exploited in image denoising using the diffusion equation [5,6,7,8] and in image segmentation with active contours using a fractional derivative within the energy functional [9]. Mathieu et al. [10] applied fractional differentiation to edge detection and also discussed texture enhancement with a multi-scale fractional mask.

Zhang et al. [11] proposed a fractional differential mask based on the Riemann–Liouville definition. For fractional orders between 1 and 2, they enhanced texture and edges at multiple scales by controlling the fractional order. For image denoising, Pu et al. [12] applied fractional calculus based on the Riemann–Liouville definition. Gao et al. [13] applied an improved fractional differential operator based on a piecewise quaternion for image enhancement. Furthermore, in [14], a generalized fractional image denoising algorithm based on the Srivastava–Owa fractional differential operator is introduced. The Grünwald–Letnikov derivative is also used for image enhancement in [15, 16]. Gao et al. [17], by developing the real fractional derivative and its applications in signal processing, extended the quaternion fractional differential (QFD) based on the Grünwald–Letnikov definition and applied it to edge detection of color images. He et al. [18] proposed a model based on the Grünwald–Letnikov fractional differential operator that improves the denoising operator mask. The total coefficient of this mask is not equal to zero, which means that its response value is not zero in flat areas of the image. In 2017, Jalab et al. proposed a new contrast enhancement technique for medical images based on image entropy; their method enhances edges accurately while preserving smooth textures [19]. We aim to redefine the Grünwald–Letnikov derivative in order to better represent the rate of change in image processing. In this paper, we highlight the defects of the Grünwald–Letnikov derivative in image processing and, based on them, present a new and very flexible definition of the Grünwald–Letnikov derivative.

Preliminaries

In this section, we introduce some basic concepts that are essential to our discussion in the following sections. Let us recall that the nth-order derivative of a function f is defined by:

$$\begin{aligned} f^{(n)}(x)=\dfrac{d^nf}{dx^n}=\lim \limits _{h\rightarrow 0}\dfrac{1}{h^n}\sum \limits _{r=0}^n(-1)^r{n\atopwithdelims ()r} f(x-rh). \end{aligned}$$

Accordingly, the Grünwald–Letnikov fractional derivative for one variable function f is defined as follows [2]:

$$\begin{aligned} D^\alpha _{G-L}f(x)=\lim \limits _{h\rightarrow 0}\dfrac{1}{h^\alpha }\sum \limits _{r=0} ^{\left[ \frac{x-a}{h}\right] }(-1)^r{\alpha \atopwithdelims ()r} f(x-rh), \end{aligned}$$

where

$$\begin{aligned} {\alpha \atopwithdelims ()r}=\dfrac{\varGamma (\alpha +1)}{\varGamma (r+1)\varGamma (\alpha -r+1)}, \end{aligned}$$

and \(\varGamma \) is the gamma function.
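
For completeness, the generalized binomial coefficients and the resulting Grünwald–Letnikov weights can be computed directly from the gamma function. The following minimal Python sketch (the function names are ours, chosen only for illustration) reproduces the first three weights 1, \(-\alpha\) and \(\alpha(\alpha-1)/2\) that appear in the two-neighbor mask used below.

```python
import math

def gl_binom(alpha, r):
    """Generalized binomial coefficient C(alpha, r) via the gamma function.
    Assumes alpha - r + 1 is not a non-positive integer (math.gamma is undefined there)."""
    return math.gamma(alpha + 1) / (math.gamma(r + 1) * math.gamma(alpha - r + 1))

def gl_weights(alpha, k):
    """Weights (-1)^r C(alpha, r), r = 0, ..., k, of the Gruenwald-Letnikov sum."""
    return [(-1) ** r * gl_binom(alpha, r) for r in range(k + 1)]

print(gl_weights(0.5, 2))   # [1.0, -0.5, -0.125] = [1, -alpha, alpha*(alpha-1)/2] for alpha = 0.5
```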

Usually, an image can be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates. The value of f(x, y) is called the color intensity of the image at the point (x, y). In the field of image processing, the Grünwald–Letnikov derivative in two dimensions in the x-direction can be defined as follows [15, 20]:

$$\begin{aligned} D^\alpha _{G-L}f_x(x,y)=f(x,y)-\alpha f(x-1,y)+\frac{\alpha (\alpha -1)}{2}f(x-2,y). \end{aligned}$$
(1)

Similarly, the Grünwald–Letnikov derivative is defined in y-direction. Hence, the Grünwald–Letnikov fractional derivative can be defined by

$$\begin{aligned} D^\alpha _{G-L}f(x,y)=\sqrt{(D^\alpha _{G-L}f_x(x,y))^2+(D^\alpha _{G-L}f_y(x,y))^2}, \end{aligned}$$
(2)

or

$$\begin{aligned} D^\alpha _{G-L}f(x,y)\approx |D^\alpha _{G-L}f_x(x,y)|+|D^\alpha _{G-L}f_y(x,y)|. \end{aligned}$$
(3)
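
A direct pixel-wise implementation of Eqs. (1)–(3) can be sketched as follows. The edge handling at the image border (replication) and the NumPy-based layout are our own assumptions for illustration, not part of the definitions above.

```python
import numpy as np

def gl_x(f, alpha):
    """Gruenwald-Letnikov derivative in the x-direction, Eq. (1):
    f(x,y) - alpha*f(x-1,y) + alpha*(alpha-1)/2 * f(x-2,y), with x along axis 0.
    Border pixels reuse the first rows (edge replication, an assumption)."""
    f = np.asarray(f, dtype=float)
    fp = np.pad(f, ((2, 0), (0, 0)), mode='edge')
    return fp[2:, :] - alpha * fp[1:-1, :] + 0.5 * alpha * (alpha - 1) * fp[:-2, :]

def gl_y(f, alpha):
    """The same mask applied along the y-direction (axis 1)."""
    return gl_x(np.asarray(f).T, alpha).T

def gl_gradient(f, alpha, use_abs=False):
    """Fractional gradient magnitude, Eq. (2), or the |.| + |.| form, Eq. (3)."""
    dx, dy = gl_x(f, alpha), gl_y(f, alpha)
    return np.abs(dx) + np.abs(dy) if use_abs else np.sqrt(dx ** 2 + dy ** 2)
```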

The similarities and differences between the regular derivative and the Grünwald–Letnikov fractional derivative can be summarized as follows:

  1.

    For a region of an image I whose color intensities are all the same, the gradient of I is zero inside the region (not at the edge points), whereas the Grünwald–Letnikov derivative is nonzero there. Furthermore, the closer the intensity is to white (255), the larger the Grünwald–Letnikov derivative becomes.

  2.

    At edge pixels where the gradient is positive (negative), the Grünwald–Letnikov derivative is also positive (negative). However, the absolute value of the Grünwald–Letnikov derivative is usually larger than that of the regular gradient.

By presenting some examples, we show that this definition of the Grünwald–Letnikov derivative leads to inconsistencies when the derivative is applied in image processing. In the following examples, for simplicity, we consider \(0<\alpha \le 1\) and study the Grünwald–Letnikov derivative in the x-direction.

Example 1

Let \(f(x-1,y)=f(x-2,y)=f(x,y)=250\). By (1) we get

$$\begin{aligned} D^\alpha _{G-L}f_x(x,y)=250-\alpha 250+\frac{\alpha (\alpha -1)}{2}250=(1-\alpha )(2-\alpha )125, \end{aligned}$$

that implies

$$\begin{aligned} 0\le D^\alpha _{G-L}f_x(x,y)<250. \end{aligned}$$

In the special case \(\alpha =1/2\), we have \( D^\alpha _{G-L}f_x(x,y)=93.75\).

Example 2

Let \(f(x-2,y)=f(x-1,y)=f(x,y)=1\). We have

$$\begin{aligned} D^\alpha _{G-L}f_x(x,y)=1-\alpha +\frac{\alpha (\alpha -1)}{2}=(1-\alpha )(2-\alpha )/2. \end{aligned}$$

Again, for \(0<\alpha \le 1\), we have

$$\begin{aligned} 0\le D^\alpha _{G-L}f_x(x,y)<1. \end{aligned}$$

In Examples 1 and 2, the value of f(x, y) is constant in its x-neighborhood; hence, we expect no change, or only a small change, in the (fractional) derivative of f in the x-direction. However, we see that the value of \( D^\alpha _{G-L}f_x(x,y)\) depends strongly on the intensity of f rather than on the difference between f and its x-neighbors.

Example 3

Let \(f(x-2,y)=f(x-1,y)=1\) and \(f(x,y)=250\). By computing the Grünwald–Letnikov derivative, we obtain

$$\begin{aligned} D^\alpha _{G-L}f_x(x,y)=250 - \alpha +\dfrac{\alpha (\alpha -1)}{2}=250-\alpha (3-\alpha )/2. \end{aligned}$$

For \(0<\alpha \le 1\), we have

$$\begin{aligned} 249 \le D^\alpha _{G-L}f_x(x,y)<250, \end{aligned}$$

and for \(\alpha =1/2\), we have \(D^\alpha _{G-L}f_x(x,y)=249.3750\).

Example 4

Let \(f(x-2,y)=f(x-1,y)=250\) and \(f(x,y)=1\). We get

$$\begin{aligned} D^\alpha _{G-L}f_x(x,y)=1 - 250\alpha +\dfrac{\alpha (\alpha -1)250}{2}=1-125\alpha (3-\alpha ), \end{aligned}$$

so that, for \(0<\alpha \le 1\), we have

$$\begin{aligned} -249\le D^\alpha _{G-L}f_x(x,y)<1. \end{aligned}$$

For \(\alpha =1/2\), we have \(D^\alpha _{G-L}f_x(x,y)=-155.2500\).

In Examples 3 and 4, the difference between f(x, y) and its x-neighbors has the same magnitude; however, the Grünwald–Letnikov derivatives of f(x, y) in the x-direction are very different. These examples show that the Grünwald–Letnikov derivative is sensitive to the intensity of the pixels rather than to the difference between the intensities.
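
The four values above follow directly from Eq. (1); a minimal Python check (the helper name is ours) reproduces them:

```python
def gl_x_pixel(f0, f1, f2, alpha):
    """Eq. (1) at a single pixel: f(x,y) - alpha*f(x-1,y) + alpha*(alpha-1)/2 * f(x-2,y)."""
    return f0 - alpha * f1 + 0.5 * alpha * (alpha - 1) * f2

alpha = 0.5
print(gl_x_pixel(250, 250, 250, alpha))   # Example 1:   93.75
print(gl_x_pixel(  1,   1,   1, alpha))   # Example 2:    0.375
print(gl_x_pixel(250,   1,   1, alpha))   # Example 3:  249.375
print(gl_x_pixel(  1, 250, 250, alpha))   # Example 4: -155.25
```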

According to these examples, the definition of the Grünwald–Letnikov derivative should be modified so that the derivative better represents the rate of change of the image intensity.

Modified Grünwald–Letnikov derivative

In this section, we present a modified definition of the Grünwald–Letnikov derivative. To this end, we first set

$$\begin{aligned} M(x,y)=\dfrac{1}{s^n}\min \{f(x,y),f(x-1,y),f(x-2,y)\}, \end{aligned}$$

where \(s\ge 255\) is an integer and \(0\le n\le 1\) is a real number. The equation of the line passing through the two points (0, M(x, y)) and (s, 0) is

$$\begin{aligned} Y(x,y)=M(x,y)\left( \dfrac{s-X(x,y)}{s}\right) . \end{aligned}$$

By substituting

$$\begin{aligned} X(x,y)=\vert f(x,y)-\alpha f(x-1,y)+\dfrac{\alpha (\alpha -1)}{2}f(x-2,y)\vert , \end{aligned}$$

into the above equation, the value of Y(x, y) is obtained. Now, we define the modified Grünwald–Letnikov derivative in the x-direction as follows:

$$\begin{aligned} {_{m}}D^\alpha _{G-L}f_x(x,y)=\dfrac{f(x,y)-\alpha f(x-1,y)+\dfrac{\alpha (\alpha -1)}{2}f(x-2,y)}{Y(x,y)+1}. \end{aligned}$$
(4)

In Eq. (4), the value 1 is added to Y(x, y) to prevent the denominator from vanishing. The coefficient \(\frac{1}{Y(x,y)+1}\) can be thought of as the modifier of the Grünwald–Letnikov derivative. Moreover, it is important to note that for \(0<n\le 1\),

$$\begin{aligned} \lim \limits _{s\rightarrow \infty }Y(x,y)=0. \end{aligned}$$

This yields the following lemma:

Lemma 1

The modified Grünwald–Letnikov derivative defined by (4) tends to the Grünwald–Letnikov derivative defined by (1) as \(s\rightarrow +\infty \).

By (4), we get

$$\begin{aligned} {_{m}}D^\alpha _{G-L}f_x(x,y)=\dfrac{s^{n+1}A}{\theta (s-|A|)+s^{n+1}}, \end{aligned}$$
(5)

where \(\theta =\min \{f(x,y),f(x-1,y),f(x-2,y)\}\) and \(A=D^\alpha _{G-L}f_x(x,y)\).

Furthermore, by (5), it is seen that if \(s=\vert A \vert \), then the original and modified Grünwald–Letnikov derivatives coincide. The parameters s and n provide two degrees of freedom. In fact, the modified Grünwald–Letnikov derivative generally behaves between the regular derivative and the Grünwald–Letnikov fractional derivative. Analogously, one can define the modified Grünwald–Letnikov derivative in the y-direction. Hence, the modified Grünwald–Letnikov fractional derivative can be defined by

$$\begin{aligned} {_{m}}D^\alpha _{G-L}f(x,y)=\sqrt{({_{m}}D^\alpha _{G-L}f_x(x,y))^2+({_{m}}D^\alpha _{G-L}f_y(x,y))^2}, \end{aligned}$$
(6)

or

$$\begin{aligned} {_{m}}D^\alpha _{G-L}f(x,y)\approx |{_{m}}D^\alpha _{G-L}f_x(x,y)|+|{_{m}}D^\alpha _{G-L}f_y(x,y)|. \end{aligned}$$
(7)
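
The whole construction of Eqs. (4)–(7) can be sketched in a few lines of NumPy. The border handling (edge replication) and the default parameter values are our own illustrative choices; only the formulas themselves come from the definitions above.

```python
import numpy as np

def mgl_x(f, alpha, s=255, n=0.5):
    """Modified Gruenwald-Letnikov derivative in the x-direction, Eq. (4).
    s >= 255 and 0 <= n <= 1 are the free parameters of the definition;
    border handling by edge replication is our own assumption."""
    f = np.asarray(f, dtype=float)
    fp = np.pad(f, ((2, 0), (0, 0)), mode='edge')
    f0, f1, f2 = fp[2:, :], fp[1:-1, :], fp[:-2, :]
    A = f0 - alpha * f1 + 0.5 * alpha * (alpha - 1) * f2      # Eq. (1)
    M = np.minimum(np.minimum(f0, f1), f2) / s ** n           # M(x, y)
    Y = M * (s - np.abs(A)) / s                               # line through (0, M) and (s, 0) at X = |A|
    return A / (Y + 1.0)                                      # Eq. (4)

def mgl_gradient(f, alpha, s=255, n=0.5, use_abs=False):
    """Modified fractional gradient, Eq. (6), or the |.| + |.| form, Eq. (7)."""
    dx = mgl_x(f, alpha, s, n)
    dy = mgl_x(np.asarray(f).T, alpha, s, n).T
    return np.abs(dx) + np.abs(dy) if use_abs else np.sqrt(dx ** 2 + dy ** 2)
```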

Now, we compute the modified Grünwald–Letnikov derivative for the preceding examples. By (5), for Example 1, we have

$$\begin{aligned} 0\le {_{m}}D^\alpha _{G-L}f_x(x,y)<\dfrac{250 s^{n+1}}{250(s-250)+s^{n+1}}, \end{aligned}$$

in which \(0<\alpha \le 1\). The special case \(\alpha =1/2, s=255\) and \(n=1\) yields \({_{m}}D^\alpha _{G-L}f_x(x,y)=57.8725\). For Example 2,

$$\begin{aligned} 0\le {_{m}}D^\alpha _{G-L}f_x(x,y)<\dfrac{ s^{n+1}}{(s-1)+s^{n+1}}<1, \end{aligned}$$

in which \(0<\alpha \le 1\). For Example 3,

$$\begin{aligned} 0<\dfrac{ 249\,s^{n+1}}{(s-249)+s^{n+1}}\le {_{m}}D^\alpha _{G-L}f_x(x,y)<\dfrac{ 250\,s^{n+1}}{(s-250)+s^{n+1}}, \end{aligned}$$

in which \(0<\alpha \le 1\). Finally, for Example 4, we have

$$\begin{aligned} \dfrac{ -249s^{n+1}}{(s-249)+s^{n+1}}\le {_{m}}D^\alpha _{G-L}f_x(x,y)<\dfrac{ s^{n+1}}{(s-1)+s^{n+1}}<1, \end{aligned}$$

in which \(0<\alpha \le 1\). The special case \(\alpha =1/2, s=255\) and \(n=1\) yields

$$\begin{aligned} {_{m}}D^\alpha _{G-L}f_x(x,y)=-155.01, \end{aligned}$$

which is approximately equal to the value of the usual Grünwald–Letnikov derivative.

We observe that the multiplier \(\frac{1}{Y(x,y)+1}\) in the modified Grünwald–Letnikov derivative moderates the value of the derivative.
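
These values, together with the limiting behavior stated in Lemma 1 and the case \(s=|A|\), can be checked numerically with Eq. (5). The helper below is ours and is meant only as a quick verification.

```python
def mgl_pixel(f0, f1, f2, alpha, s=255, n=1.0):
    """Eq. (5) at a single pixel."""
    A = f0 - alpha * f1 + 0.5 * alpha * (alpha - 1) * f2
    theta = min(f0, f1, f2)
    return s ** (n + 1) * A / (theta * (s - abs(A)) + s ** (n + 1))

alpha = 0.5
print(mgl_pixel(250, 250, 250, alpha, s=255, n=1.0))   # Example 1: about  57.87
print(mgl_pixel(  1, 250, 250, alpha, s=255, n=1.0))   # Example 4: about -155.01

# Lemma 1: for 0 < n <= 1 the modified value tends to A = 93.75 as s grows.
for s in (255, 10**4, 10**6, 10**12):
    print(s, mgl_pixel(250, 250, 250, alpha, s=s, n=0.5))

# Algebraic check of the case s = |A| (outside the range s >= 255 used in practice):
print(mgl_pixel(250, 250, 250, alpha, s=93.75, n=0.5))  # exactly 93.75 = A
```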

Numerical examples

In this section, we aim to demonstrate that the modified Grünwald–Letnikov fractional derivative can be applied efficiently to edge detection and image enhancement. Moreover, we present a comparison between the modified and original Grünwald–Letnikov derivatives for two prototype images.

Fig. 1

a The original image; b its Grünwald–Letnikov derivative; c its modified Grünwald–Letnikov derivative

Example 5

(Edge detection). Consider Fig. 1a as the original image. Figure 1b shows its Grünwald–Letnikov derivative defined by (3), and Fig. 1c shows its modified Grünwald–Letnikov derivative defined by (7). In both Fig. 1b, c, we set \(\alpha =0.5\); for the modified Grünwald–Letnikov derivative, \(s=255\) and \(n=0.5\) are selected. As can be seen, the modified Grünwald–Letnikov derivative shows only the edges of the main figure, while the Grünwald–Letnikov derivative shows the whole figure with low intensity. By Lemma 1, as s tends to infinity, the Grünwald–Letnikov derivative and its modification become the same.
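
As an illustration of how such a comparison can be produced, the sketch below applies the two gradients of Eqs. (3) and (7) to a gray-scale image, reusing the helper functions sketched earlier; the file names and the imageio loader are assumptions made only for the sake of the example.

```python
import numpy as np
import imageio.v3 as iio    # any image I/O library would do; assumed available

img = iio.imread('input.png').astype(float)   # hypothetical gray-scale test image
if img.ndim == 3:
    img = img.mean(axis=2)                    # collapse RGB to a single channel

edges_gl  = gl_gradient(img, alpha=0.5, use_abs=True)                   # Eq. (3)
edges_mgl = mgl_gradient(img, alpha=0.5, s=255, n=0.5, use_abs=True)    # Eq. (7)

# rescale to [0, 255] and save for visual comparison
for name, e in (('edges_gl.png', edges_gl), ('edges_mgl.png', edges_mgl)):
    iio.imwrite(name, np.clip(255 * e / e.max(), 0, 255).astype(np.uint8))
```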

Example 6

(Image enhancement). Figure 2 shows a gray-scale image of an infant. Figures 3 and 4 show the images of Fig. 2 enhanced by the Grünwald–Letnikov derivative and by the modified Grünwald–Letnikov derivative, respectively, with \(\alpha =0.2, 0.4, 0.6\) and \(0.8\). We used \(s=255\) and \(n=0.5\) for enhancement by the modified Grünwald–Letnikov derivative. As can be seen, the modified Grünwald–Letnikov derivative gives better quality than the usual Grünwald–Letnikov derivative.

Fig. 2

The original image of an infant

Fig. 3

The Grünwald–Letnikov derivative of Fig. 2 with different values of \(\alpha \)

Fig. 4

The modified Grünwald–Letnikov derivative of Fig. 2 with different values of \(\alpha \)

Example 7

Figure 5a shows the original image of Lena, and Fig. 5b shows its regular derivative, computed as \(\sqrt{(\frac{\partial u}{\partial x})^2+(\frac{\partial u}{\partial y})^2}\), where u is the image of Lena. Figures 6 and 7 show the effect of the Grünwald–Letnikov derivative and of its modification for \(\alpha =0.2, 0.4, 0.6\) and \(0.8\), respectively. For the modified \(G-L\) derivative, we took \(s=255\) and \(n=0.5\). By Lemma 1, the modified \(G-L\) derivative tends to the original \(G-L\) derivative as s tends to infinity.
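
The regular gradient magnitude used for Fig. 5b can be approximated with simple first-order differences; the sketch below uses NumPy's built-in gradient, which is our own choice of discretization.

```python
import numpy as np

def regular_gradient(u):
    """Classical gradient magnitude sqrt(u_x^2 + u_y^2) via first-order differences."""
    ux, uy = np.gradient(np.asarray(u, dtype=float))
    return np.sqrt(ux ** 2 + uy ** 2)
```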

Fig. 5

a The original image of “Lena”; b its regular derivative

Fig. 6

The Grünwald–Letnikov derivative of image “Lena” with different values of \(\alpha \)

Fig. 7

The modified Grünwald–Letnikov derivative of image “Lena” with different values of \(\alpha \)

Conclusion

In order to better represent the rate of change in image processing, the Grünwald–Letnikov fractional derivative needs to be redefined. We highlighted the defects of the Grünwald–Letnikov derivative in image processing and then presented a new, very flexible definition of the Grünwald–Letnikov fractional derivative. The proposed modified Grünwald–Letnikov derivative can be efficiently employed in different areas of image processing such as image enhancement, edge detection and medical diagnostics.