1 Introduction

Images now have countless applications in scientific studies and must be of good quality to be useful [1, 2]. Image denoising therefore plays an important role in image processing and computer vision, preparing cleaner and sharper images for downstream analysis. Partial differential equations (PDEs) are widely used in many areas of image processing, such as filtering, restoration, segmentation, edge enhancement, detection [3], and especially denoising. Several PDE models are applied to image and signal denoising, such as the diffusion equation, the geometric curvature equation, and TV flow [4, 5]. In models based on the diffusion equation, a nonlinear anisotropic diffusion enhances the quality of an image by removing noise while preserving details and edges [6]. In this paper, we focus on the time-fractional Perona–Malik model (FPMM), in which the temporal derivative is the Caputo fractional derivative, denoted by \({}_{0}^{C}\mathcal{D}_{t}^{\alpha}\). The FPMM is defined as follows:

$$ \begin{aligned} &{}_{0}^{C} \mathcal{D}_{t}^{\alpha}u(x,y,t)=\operatorname{div} \bigl(g \bigl( \bigl\Vert \nabla G_{\sigma}*u(x,y,t) \bigr\Vert \bigr)\nabla u(x,y,t) \bigr), \\ &\quad (x,y,t) \in \Omega \times I, 0< \alpha < 1, \\ &\frac{\partial u}{\partial \mathbf{n}}=0, \quad (x,y,t) \in \partial \Omega \times I, \\ &u(x,y,0)=u_{0},\quad (x,y) \in \Omega , \end{aligned} $$
(1.1)

where \(\Omega \subseteq \mathbb{R}^{2}\), \(I=(0,T)\) is the scaling time interval, α is the order of the time-fractional derivative, and n is the unit outward normal to the boundary of Ω. The Caputo fractional derivative of order α is defined as

$$ _{0}^{C}\mathcal{D}_{t}^{\alpha}u(x,y,t)= \frac{1}{\Gamma (1-\alpha )} \int _{0}^{t} \frac{1}{(t-\tau )^{\alpha}} \frac{\partial u(x,y,\tau )}{\partial \tau}\,d\tau , $$
(1.2)

in which \(\Gamma (z)=\int _{0}^{\infty}e^{-s}s^{z-1}\,ds \) is the Gamma function, which satisfies

$$ \Gamma (z+1)=z\Gamma (z). $$

Nonlinear anisotropic diffusion filtering has been widely used in image processing and has performed remarkably well [7–9]. Koenderink established a relationship between the solution of a heat equation with a noisy initial image and its convolution with a Gaussian function at each scale [10]. In fact, the denoising process can be described by the solution of the following linear diffusion equation:

$$ \frac{\partial u(x,y,t)}{\partial t}=\Delta u(x,y,t), \qquad u(x,y,0)=u_{0}. $$
(1.3)

The solution of this equation can be obtained by the following integral convolution:

$$ u(x,y,t)=G_{t}*f, $$
(1.4)

where \(f=u_{0}\) is the initial noisy image and \(G_{t}\) is the Gaussian kernel with standard deviation \(\sqrt{2t}\). In this model, the noise in \(u(x,y,0)\) is progressively removed as t grows. However, this model also blurs the edges, destroying essential image features. Hence, in 1990, Perona and Malik proposed a nonlinear diffusion model, called the Perona–Malik model (PMM), to overcome this problem [11]. They fed the gradient of the evolving image \(u(x,y,t)\) back into the diffusion process and introduced the following anisotropic diffusion equation with a zero Neumann boundary condition:
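
Equation (1.4) is straightforward to exercise in code: smoothing with a Gaussian of standard deviation \(\sqrt{2t}\) reproduces the heat-equation solution at time t. A minimal NumPy/SciPy sketch, with an illustrative random image standing in for \(u_{0}\):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_diffusion(u0, t):
    """Solution of Eq. (1.3) at time t via the convolution (1.4):
    a Gaussian blur with standard deviation sqrt(2*t)."""
    return gaussian_filter(np.asarray(u0, dtype=float), sigma=np.sqrt(2.0 * t))

u0 = np.random.rand(256, 256)             # stand-in for a noisy image
u_smooth = linear_diffusion(u0, t=2.0)    # larger t => smoother but blurrier
```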

$$ \frac{\partial u(x,y,t)}{\partial t}=\operatorname{div} \bigl(g \bigl( \bigl\Vert \nabla u(x,y,t) \bigr\Vert \bigr)\nabla u(x,y,t) \bigr), $$
(1.5)

where \(g(\cdot )\) is a smooth decreasing function of \(\Vert \nabla u(x,y,t)\Vert \), which controls the diffusion strength and is called the diffusion coefficient. It is chosen so that it is close to 1 inside homogeneous regions and tends to zero near the edges. A typical choice is [12]

$$ g \bigl( \bigl\Vert \nabla u(x,y,t) \bigr\Vert \bigr)= \frac{1}{1+ (\frac{ \Vert \nabla u(x,y,t) \Vert }{K} )^{2}}, $$
(1.6)

where K is an edge-detection threshold parameter that controls the amount of diffusion that takes place.
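
For concreteness, Eq. (1.6) can be evaluated pixelwise from finite-difference gradients; a short sketch (K is the user-chosen threshold):

```python
import numpy as np

def diffusivity(u, K):
    """Perona-Malik diffusion coefficient g(||grad u||) of Eq. (1.6)."""
    uy, ux = np.gradient(u)   # central-difference gradients along y and x
    return 1.0 / (1.0 + (ux**2 + uy**2) / K**2)
```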

However, PMM is an ill-posed problem, since it is a degenerate, backward diffusion equation. In particular, if the signal-to-noise ratio (SNR) is very low, most of the noise remains on the edges. Therefore, Catté et al. proposed the regularized PMM to overcome this drawback [13]. The model is defined as follows:

$$ \frac{\partial u(x,y,t)}{\partial t}=\operatorname{div} \bigl(g \bigl( \bigl\Vert \nabla G_{\sigma}*u(x,y,t) \bigr\Vert \bigr)\nabla u(x,y,t) \bigr), $$
(1.7)

where \(G_{\sigma} \in C^{\infty}(\mathbb{R}^{2})\) is the Gaussian kernel with variance σ, and we have

$$ \Vert \nabla G_{\sigma}*u \Vert = \biggl( \biggl( \frac{\partial G_{\sigma}}{\partial x}*\tilde{u} \biggr)^{2}+ \biggl( \frac{\partial G_{\sigma}}{\partial y}*\tilde{u} \biggr)^{2} \biggr)^{ \frac{1}{2}}, $$
(1.8)

in which ∗ represents the standard convolution and ũ is the extension of u to \(\mathbb{R}^{2}\) obtained by periodic reflection across the boundary of the problem domain.
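
To fix ideas, one explicit finite-difference time step of Eq. (1.7) can be sketched as below. This is only a naive reference implementation for orientation, not the splitting/CS–RBF scheme developed in this paper, and dt, K, and sigma are illustrative values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_pmm_step(u, dt=0.1, K=10.0, sigma=1.0):
    """One explicit step of Eq. (1.7):
    u_t = div( g(||grad G_sigma * u||) grad u )."""
    us = gaussian_filter(u, sigma)            # pre-smoothed image G_sigma * u
    gy, gx = np.gradient(us)
    g = 1.0 / (1.0 + (gx**2 + gy**2) / K**2)  # diffusivity (1.6)
    uy, ux = np.gradient(u)
    div = np.gradient(g * uy, axis=0) + np.gradient(g * ux, axis=1)
    return u + dt * div
```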

PMM is a powerful model that has been utilized in nonlinear data filtering, image restoration, noise removal, and edge detection [11, 13, 14]. For instance, it has been used in optical coherence tomography [15], radiography image processing [16], etc. Moreover, solving PMM numerically is a challenging problem, and researchers have been developing suitable approaches to simulate its solution. In [17], a domain decomposition approach combined with the finite difference method was proposed to solve nonlinear problems in image denoising. Kamranian and Dehghan developed a meshfree finite point method to solve PMM [12]. Gu, in 2020, proposed a finite element approach for the Perona–Malik and You–Kaveh models and compared the results with the finite difference method [4]. Hjouji et al. proposed a mixed finite element method for the bivariate PMM [18]. Also, Sidi Ammi and Jamiai applied finite difference and Legendre spectral methods to a time-fractional diffusion–convection equation with application to image processing [3].

Fractional integration and fractional differentiation generalize their integer-order counterparts and have many applications in modeling phenomena in different fields of science [3, 19–22]; scientists have been developing various numerical approaches for solving fractional differential equations [23–28]. Fractional calculus has a long history in mathematics [29, 30], and various definitions and operators express the fractional derivative, including those of Grünwald–Letnikov [31], Marchaud [31], Riemann–Liouville [31, 32], Caputo [33, 34], Riesz [35, 36], etc. [37–39]. Recently, fractional calculus has also been used in many image processing applications, such as image restoration, segmentation, texture enhancement, edge detection, image encryption, and image denoising, to improve their performance [19, 40–42].

FPMM is an extension of the classical model in which the temporal derivative is taken in the Caputo sense, which enhances the model's applicability. It is worth mentioning that for \(\alpha =1\) the classical model is recovered. Compared with the classical model, the fractional model has an extra parameter α, which increases its flexibility and can improve its performance.

In recent decades, meshless algorithms have been among the most powerful numerical techniques for approximating solutions of complex differential equations. Unlike methods that need mesh generation, meshless methods work with a scattered set of collocation points, without any prior information on the connectivity of the domain. This property drastically reduces the computational cost compared to mesh-based numerical methods [43]. In addition, using radial basis functions (RBFs) in meshless methods offers advantages such as spectral convergence for smooth functions even in complex geometries, ease of implementation, and suitability for high-dimensional problems [44]. RBFs can be divided into two categories, global and local. Global RBFs (e.g., Gaussian, multiquadric) have a severe drawback: the coefficient matrix of the resulting linear system is dense and ill-conditioned [45–47]. On the other hand, compactly supported radial basis functions (CS–RBFs), such as Wendland's functions, are defined on local subdomains [48–51], which makes them more stable. The CS–RBF method owes this stability to the sparsity of the resulting coefficient matrix, although it is not as accurate as global RBFs. CS–RBFs were first introduced by Wu and Wendland for scattered data interpolation and have since been used broadly in numerical simulations [52].

Since FPMM is applied to images, the domain is usually large, which makes numerical simulation difficult. In the current paper, with the aid of the operator splitting (OS) method, FPMM is divided into two partial differential equations. The CS–RBF approach is then applied to the resulting PDEs, and the subproblems reduce to linear algebraic systems of small dimension that can be solved efficiently by a standard algorithm such as LU factorization. Finally, we evaluate the proposed method on several image denoising examples. To summarize, we propose a numerical method based on the OS and CS–RBF schemes to remove noise via FPMM. This approach has several significant advantages. First, the model considered here, FPMM, has a fractional-order derivative that improves its noise-removal ability compared to the classical PMM. Second, the proposed approach converts a complex nonlinear system into simple linear ones and dramatically decreases the computational cost. Finally, since we use a local RBF method, the resulting linear systems are sparse and well-conditioned compared with global approaches and can be solved by standard solvers. Moreover, we report numerical results for a proper value of α on different test cases and compare them with PMM under different metrics, which confirms the high accuracy and effectiveness of the proposed algorithm.

The organization of this paper is as follows: In Sect. 2, the basic properties of the algorithm presented in this work, such as the details about fractional derivative, OS, and CS–RBF collocation approaches, are briefly described. The numerical results obtained by the proposed method are presented and discussed in Sect. 3, and some concluding remarks are given in Sect. 4.

2 Proposed approach

In this section, the Trotter OS scheme [53] is applied to Eq. (1.1), converting it into simpler PDEs. The time dimension is then discretized by a backward Euler approach. Finally, the resulting equations are solved by the CS–RBF method.

2.1 Temporal discretization

In this part, we apply a divide-and-conquer approach, known as the OS method, to split Eq. (1.1) into two subproblems in separate directions. The Caputo time derivative is then discretized in each subproblem.

2.1.1 Trotter splitting scheme

First, to reduce the complexity of problem (1.1), a first-order OS method, the Trotter splitting approach, is employed on Eq. (1.1) to solve FPMM along separate directions. This scheme reduces the size of the resulting matrices and thus substantially reduces the computational complexity. Consider the time interval I divided into M equal subintervals by the nodes \(t_{0},t_{1},\dots ,t_{M}\), where \(t_{k}=k\Delta t\), \(k=0,1,\dots ,M\), and \(\Delta t=\frac{T}{M}\). Equation (1.1) can be written as follows:

$$ _{0}^{C}\mathcal{D}_{t}^{\alpha}u= \mathcal{L}_{x}(u)+\mathcal{L}_{y}(u), $$
(2.1)

where \(\mathcal{L}_{x}\) and \(\mathcal{L}_{y}\) are suboperators on function u as follows:

$$\begin{aligned}& \mathcal{L}_{x}(u)=\frac{\partial}{\partial x}g \bigl( \Vert \nabla G_{ \sigma}*u \Vert \bigr)\frac{\partial}{\partial x}u+g \bigl( \Vert \nabla G_{\sigma}*u \Vert \bigr)\frac{\partial ^{2} }{\partial x^{2}}u, \end{aligned}$$
(2.2)
$$\begin{aligned}& \mathcal{L}_{y}(u)=\frac{\partial}{\partial y}g \bigl( \Vert \nabla G_{ \sigma}*u \Vert \bigr)\frac{\partial}{\partial y}u+g \bigl( \Vert \nabla G_{\sigma}*u \Vert \bigr)\frac{\partial ^{2}}{\partial y^{2}}u. \end{aligned}$$
(2.3)

By employing the Trotter splitting method, the approximate solution \(u(x,y,t_{k+1})\), written \(u^{k+1}\) for short, is obtained as \(u^{k+1}=[\mathcal{S}_{x}^{\Delta t} \mathcal{S}_{y}^{\Delta t}]u^{k}\), in which \(\mathcal{S}_{x}^{\Delta t}\) and \(\mathcal{S}_{y}^{\Delta t}\) denote the solution operators of \({}_{0}^{C}\mathcal{D}_{t}^{\alpha}u^{*}=\mathcal{L}_{x}(u^{*})\) and \({}_{0}^{C}\mathcal{D}_{t}^{\alpha}u^{**}=\mathcal{L}_{y}(u^{**})\), respectively. Finally, we have \(u^{k+1}=u^{**^{k+1}}\). Interested readers may consult [43, 54–56] for more details on OS methods.
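
Schematically, one Trotter step composes the two one-dimensional solves. In the sketch below, solve_x and solve_y are placeholders for the sub-solvers of the two fractional subproblems constructed in the rest of this section; the whole solution history is passed along because the Caputo derivative is nonlocal in time:

```python
def trotter_step(hist, solve_x, solve_y):
    """One step u^{k+1} = [S_x^{dt} S_y^{dt}] u^k of the Trotter splitting.
    solve_x / solve_y advance the x- and y-subproblems one step, given the
    full solution history."""
    u_star = solve_x(hist)           # S_x^{dt}: D_t^alpha u*  = L_x(u*)
    u_next = solve_y(hist, u_star)   # S_y^{dt}: D_t^alpha u** = L_y(u**)
    hist.append(u_next)              # u^{k+1} = u**^{k+1}
    return u_next
```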

Now, a proper scheme should be proposed to discretize the Caputo time derivative operator \({}_{0}^{C}\mathcal{D}_{t}^{\alpha}\) in the subproblems.

2.1.2 Discretization of Caputo derivative

At each time step, using the backward Euler approach, the time-fractional derivative at \(t_{k+1}\), \(k=0,1,\dots ,M-1\), is approximated as follows:

$$ {}_{0}^{C}\mathcal{D}_{t}^{\alpha}u(x,y,t_{k+1})= \frac{1}{\Gamma (1-\alpha )}\sum_{j=0}^{k} \int _{t_{j}}^{t_{j+1}} \frac{1}{(t_{k+1}-\tau )^{\alpha}} \frac{\partial}{\partial \tau}u(x,y, \tau )\,d\tau . $$
(2.4)

So, we have

$$ \begin{aligned} &{}_{0}^{C} \mathcal{D}_{t}^{\alpha}u(x,y,t_{k+1}) \\ &\quad =\frac{1}{\Gamma (1-\alpha )}\sum_{j=0}^{k} \frac{u(x,y,t_{j+1})-u(x,y,t_{j})}{\Delta t} \int _{t_{j}}^{t_{j+1}} \frac{d\tau}{(t_{k+1}-\tau )^{\alpha}}+R_{k+1}, \end{aligned} $$
(2.5)

in which \(R_{k+1}\) is the truncation error and we have

$$ \vert R_{k+1} \vert \leq \mathcal{C} \Biggl[ \Biggl\vert \frac{1}{\Gamma (1-\alpha )}\sum _{j=0}^{k} \int _{t_{j}}^{t_{j+1}} \frac{t_{j+1}+t_{j}-2\tau}{(t_{k+1}-\tau )^{\alpha}}\,d\tau \Biggr\vert + \mathcal{O} \bigl(\Delta t^{2} \bigr) \Biggr], $$
(2.6)

where \(\mathcal{C}\) is a constant that depends only on u. Based on Lemma 3.1 in [57] and the fact that \(\frac{1}{\Gamma (2-\alpha )}\leq 2\) for all \(\alpha \in [0,1]\), it can be concluded that

$$ \Biggl\vert \frac{1}{\Gamma (1-\alpha )}\sum_{j=0}^{k} \int _{t_{j}}^{t_{j+1}} \frac{t_{j+1}+t_{j}-2\tau}{(t_{k+1}-\tau )^{\alpha}}\,d\tau \Biggr\vert \leq 2\Delta t^{2-\alpha}. $$

So

$$ \vert R_{k+1} \vert \leq \mathcal{C}\Delta t^{2-\alpha}. $$
(2.7)

Moreover, the sum appearing in Eq. (2.5) can be rewritten as follows:

$$ \begin{aligned} &\frac{1}{\Gamma (1-\alpha )}\sum _{j=0}^{k} \frac{u(x,y,t_{j+1})-u(x,y,t_{j})}{\Delta t} \int _{t_{j}}^{t_{j+1}} \frac{d\tau}{(t_{k+1}-\tau )^{\alpha}} \\ &\quad =\frac{1}{\Gamma (1-\alpha )}\sum_{j=0}^{k} \frac{u(x,y,t_{j+1})-u(x,y,t_{j})}{\Delta t} \int _{t_{k-j}}^{t_{k+1-j}} \frac{dt}{t^{\alpha}} \\ &\quad =\frac{1}{\Gamma (1-\alpha )}\sum_{j=0}^{k} \frac{u(x,y,t_{k+1-j})-u(x,y,t_{k-j})}{\Delta t} \int _{t_{j}}^{t_{j+1}} \frac{dt}{t^{\alpha}} \\ &\quad =\frac{1}{\Gamma (2-\alpha )}\sum_{j=0}^{k} \frac{u(x,y,t_{k+1-j})-u(x,y,t_{k-j})}{\Delta t^{\alpha}} \bigl((j+1)^{1- \alpha}-j^{1-\alpha} \bigr). \end{aligned} $$
(2.8)

Accordingly, the discrete fractional differential operator \(D^{\alpha}_{t}\) is defined by

$$ D^{\alpha}_{t}u(x,y,t_{k+1})=\frac{1}{\Gamma (2-\alpha )}\sum _{j=0}^{k} \frac{u(x,y,t_{k+1-j})-u(x,y,t_{k-j})}{\Delta t^{\alpha}} \bigl((j+1)^{1- \alpha}-j^{1-\alpha} \bigr). $$
(2.9)

Thus, we can write

$$ _{0}^{C}\mathcal{D}_{t}^{\alpha}u(x,y,t_{k+1})=D^{\alpha}_{t}u(x,y,t_{k+1})+R_{k+1}. $$
(2.10)

So \(D^{\alpha}_{t}\) is an approximation of \({}_{0}^{C}\mathcal{D}_{t}^{\alpha}\), and we have

$$ \begin{aligned} &{}_{0}^{C} \mathcal{D}_{t}^{\alpha}u(x,y,t_{k+1}) \\ &\quad \approx \frac{1}{\Gamma (2-\alpha )(\Delta t)^{\alpha}}\sum_{j=0}^{k} \bigl((j+1)^{1-\alpha}-j^{1-\alpha} \bigr) \bigl(u(x,y,t_{k-j+1})-u(x,y,t_{k-j}) \bigr). \end{aligned} $$
(2.11)

For the sake of simplicity, we define \(\mathcal{C}_{\Delta t}^{\alpha}= \frac{1}{\Gamma (2-\alpha )(\Delta t)^{\alpha}}\), \(\mathcal{A}_{j}^{\alpha}=(j+1)^{1-\alpha}-j^{1-\alpha}\), and \(u(x,y,t_{k})=u^{k}\).
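These quantities are cheap to precompute. As a sanity check, a direct transcription of (2.11) reproduces the Caputo derivative of \(u(t)=t\), which is \(t^{1-\alpha}/\Gamma (2-\alpha )\), exactly (a small Python sketch):

```python
import math

def caputo_l1(u_hist, alpha, dt):
    """L1 approximation (2.11) of the Caputo derivative at t_{k+1},
    given the values u_hist = [u^0, u^1, ..., u^{k+1}] on t_j = j*dt."""
    k = len(u_hist) - 2
    C = 1.0 / (math.gamma(2.0 - alpha) * dt**alpha)           # C_{dt}^alpha
    A = [(j + 1)**(1.0 - alpha) - j**(1.0 - alpha) for j in range(k + 1)]
    return C * sum(A[j] * (u_hist[k + 1 - j] - u_hist[k - j])
                   for j in range(k + 1))

# For u(t) = t the formula is exact:
# caputo_l1([j * 0.1 for j in range(6)], 0.9, 0.1)
# equals 0.5**0.1 / math.gamma(1.1).
```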

Remark 2.1

From Theorems 3.1 and 3.2 in [57], it can be deduced that the proposed temporal discretization scheme is unconditionally stable for all \(\Delta t>0\) and has global accuracy of order \(2-\alpha \) for \(0<\alpha <1\).

Now, by using Eq. (2.11), the subproblems are discretized along the temporal dimension as follows:

$$\begin{aligned} & \begin{aligned} \mathcal{C}_{\Delta t}^{\alpha} \Biggl[u^{*^{k+1}}-u^{k}+\sum_{j=1}^{k} \bigl(u^{{k+1-j}}-u^{{k-j}} \bigr)\mathcal{A}_{j}^{\alpha} \Biggr]&= \frac{\partial}{\partial x}g \bigl( \bigl\Vert \nabla G_{\sigma}*u^{k} \bigr\Vert \bigr) \frac{\partial}{\partial x}u^{*^{k+1}} \\ &\quad {} +g \bigl( \bigl\Vert \nabla G_{\sigma}*u^{k} \bigr\Vert \bigr) \frac{\partial ^{2}}{\partial x^{2}}u^{*^{k+1}}, \end{aligned} \end{aligned}$$
(2.12)
$$\begin{aligned} &\begin{aligned} \mathcal{C}_{\Delta t}^{\alpha} \Biggl[u^{**^{k+1}}-u^{*^{k}}+\sum_{j=1}^{k} \bigl(u^{*^{k+1-j}}-u^{*^{k-j}} \bigr)\mathcal{A}_{j}^{\alpha} \Biggr]&= \frac{\partial}{\partial y}g \bigl( \bigl\Vert \nabla G_{\sigma}*u^{*^{k}} \bigr\Vert \bigr) \frac{\partial}{\partial y}u^{**^{k+1}} \\ &\quad {} +g \bigl( \bigl\Vert \nabla G_{\sigma}*u^{*^{k}} \bigr\Vert \bigr) \frac{\partial ^{2}}{\partial y^{2}}u^{**^{k+1}}. \end{aligned} \end{aligned}$$
(2.13)

2.2 Spatial discretization

Before discretizing problems (2.12) and (2.13) in space, we address the convolution terms \(G_{\sigma}*u^{k}\) and \(G_{\sigma}*u^{*^{k}}\). Since the kernel \(G_{\sigma}\) is smooth, each term can be replaced by the solution of a heat equation with initial condition \(u^{k}\) (respectively, \(u^{*^{k}}\)). These heat equations are solved by the OS method in conjunction with the Crank–Nicolson and CS–RBF approaches. The resulting solutions, substituted into Eqs. (2.12) and (2.13), are denoted by \(u^{c}\) and \(u^{*^{c}}\), respectively.
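
Recall that convolution with a Gaussian of standard deviation s coincides with integrating the heat equation \(u_{t}=\Delta u\) up to time \(t=s^{2}/2\), so either route produces \(u^{c}\). A sketch under that standard identification, with direct Gaussian filtering standing in for the Crank–Nicolson/CS–RBF heat solve used by the authors:

```python
from scipy.ndimage import gaussian_filter

def presmooth(u_k, sigma):
    """u^c = G_sigma * u^k, i.e., the heat equation started from u^k
    and integrated up to time sigma**2 / 2."""
    return gaussian_filter(u_k, sigma=sigma)
```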

After substituting the solutions of the heat equations, Eqs. (2.12) and (2.13) can be written in the following forms:

$$\begin{aligned} &\begin{aligned} &\frac{\partial}{\partial x}g \bigl( \bigl\Vert \nabla u^{c} \bigr\Vert \bigr) \frac{\partial}{\partial x}u^{*^{k+1}}+g \bigl( \bigl\Vert \nabla u^{c} \bigr\Vert \bigr) \frac{\partial ^{2}}{\partial x^{2}}u^{*^{k+1}}- \mathcal{C}_{\Delta t}^{ \alpha}u^{*^{k+1}} \\ &\quad =\mathcal{C}_{\Delta t}^{\alpha}\sum_{j=1}^{k} \bigl(u^{{k+1-j}}-u^{{k-j}} \bigr)\mathcal{A}_{j}^{\alpha}- \mathcal{C}_{\Delta t}^{\alpha}u^{k}, \end{aligned} \end{aligned}$$
(2.14)
$$\begin{aligned} &\begin{aligned} &\frac{\partial}{\partial y}g \bigl( \bigl\Vert \nabla u^{*^{c}} \bigr\Vert \bigr) \frac{\partial}{\partial y}u^{**^{k+1}}+g \bigl( \bigl\Vert \nabla u^{*^{c}} \bigr\Vert \bigr)\frac{\partial ^{2}}{\partial y^{2}}u^{**^{k+1}}- \mathcal{C}_{ \Delta t}^{\alpha}u^{**^{k+1}} \\ &\quad =\mathcal{C}_{\Delta t}^{\alpha}\sum_{j=1}^{k} \bigl(u^{*^{k+1-j}}-u^{*^{k-j}} \bigr)\mathcal{A}_{j}^{\alpha}- \mathcal{C}_{\Delta t}^{\alpha}u^{*^{k}}. \end{aligned} \end{aligned}$$
(2.15)

Once \(u^{c}\) and \(u^{*^{c}}\) are obtained, the functions \(g(\Vert \nabla u^{c}\Vert )\) and \(g(\Vert \nabla u^{*^{c}}\Vert )\) are computed as follows:

$$ g \bigl( \bigl\Vert \nabla u^{c} \bigr\Vert \bigr)= \frac{1}{1+\frac{(\frac{\partial u^{c}}{\partial x})^{2}+(\frac{\partial u^{c}}{\partial y})^{2}}{K^{2}}}, \qquad g \bigl( \bigl\Vert \nabla u^{*^{c}} \bigr\Vert \bigr)= \frac{1}{1+\frac{(\frac{\partial u^{*^{c}}}{\partial x})^{2}+(\frac{\partial u^{*^{c}}}{\partial y})^{2}}{K^{2}}}.$$

Moreover, \(\frac{\partial g(\Vert \nabla u^{c}\Vert )}{\partial x}\) and \(\frac{\partial g(\Vert \nabla u^{*^{c}}\Vert )}{\partial y}\) can be obtained by the chain rule.
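Explicitly, applying the chain rule to (1.6) gives

$$ \frac{\partial}{\partial x}g \bigl( \bigl\Vert \nabla u^{c} \bigr\Vert \bigr)=- \frac{2}{K^{2}}\,g^{2} \bigl( \bigl\Vert \nabla u^{c} \bigr\Vert \bigr) \biggl( \frac{\partial u^{c}}{\partial x} \frac{\partial ^{2} u^{c}}{\partial x^{2}}+ \frac{\partial u^{c}}{\partial y} \frac{\partial ^{2} u^{c}}{\partial x\,\partial y} \biggr), $$

and analogously for \(\frac{\partial g(\Vert \nabla u^{*^{c}}\Vert )}{\partial y}\) with the roles of x and y exchanged.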

Now, the CS–RBF method is applied to Eqs. (2.14) and (2.15), which has two main benefits. First, since no mesh generation is needed to construct the shape functions, the CS–RBF scheme is truly meshless, and the resulting coefficient matrix is well-conditioned and can be solved easily by standard methods. Second, local RBFs such as Wendland's CS–RBFs have no shape parameter and are more stable than global ones.

In the first step of the space discretization, the nodal points over the domain Ω are selected at each time step \(t_{k}\). We take the pixels of the image, i.e., \(X\times Y\) with \(X=\lbrace x_{1},\dots ,x_{N_{x}}\rbrace \) and \(Y=\lbrace y_{1},\dots ,y_{N_{y}}\rbrace \), as the collocation points on the boundary and inside the domain. To approximate \(u^{k}\) for all \(k\geq 0\), \(u^{0}\) is computed by interpolation, and \(u^{k}\) for \(k>0\) is obtained by solving Eq. (2.12) with the CS–RBF scheme. In addition, CS–RBFs are radially symmetric about their center points, in the sense of the following definition:

Definition 2.2

([45])

A function \(\phi :\mathbb{R}^{s}\rightarrow \mathbb{R}\) is called radial provided there exists a univariate function \(\varphi :[0,\infty )\rightarrow \mathbb{R}\) such that

$$ \phi (\mathbf{x})=\varphi (r),\quad \text{where } r= \Vert \mathbf{x} \Vert $$
(2.16)

and \(\Vert \cdot \Vert \) is some norm on \(\mathbb{R}^{s}\), usually the Euclidean norm.

Among the many available radial basis functions, in this paper Wendland's compactly supported radial basis functions (WCS–RBFs) with \(C^{2}\) smoothness are selected, because they contain no shape parameter, which makes them easier to use. These functions are defined as follows:

$$ \phi _{i}(x)=(1-\delta r_{i})^{4}_{+}(1+4 \delta r_{i}), $$
(2.17)

in which \(r_{i}=\Vert x-x_{i}\Vert \) is the distance from node \(x_{i}\) to x, and δ sets the support of the radial function \(\phi _{i}(x)\): the function vanishes for \(\delta r_{i}\geq 1\), so the support radius is \(1/\delta \). The truncated power \((1-\delta r_{i})^{4}_{+}\) equals \((1-\delta r_{i})^{4}\) for \(0\leq \delta r_{i} <1\) and zero otherwise.
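
The collocation matrices below also need the first and second derivatives of (2.17). Differentiating in one dimension gives \(\phi _{i}'(x)=-20\delta ^{2}(x-x_{i})(1-\delta r_{i})^{3}_{+}\) and \(\phi _{i}''(x)=-20\delta ^{2}(1-\delta r_{i})^{2}_{+}(1-4\delta r_{i})\); a vectorized sketch:

```python
import numpy as np

def wendland_c2(x, xi, delta):
    """Wendland C^2 basis (2.17), centered at xi, and its first two
    x-derivatives; all three vanish for |x - xi| >= 1/delta."""
    d = np.asarray(x, dtype=float) - xi
    s = delta * np.abs(d)
    w = np.maximum(1.0 - s, 0.0)                 # (1 - delta*r)_+
    phi = w**4 * (1.0 + 4.0 * s)
    dphi = -20.0 * delta**2 * d * w**3
    ddphi = -20.0 * delta**2 * w**2 * (1.0 - 4.0 * s)
    return phi, dphi, ddphi
```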

To approximate functions \(u^{k}\), \(u^{*^{k+1}}\), and \(u^{**^{k+1}}\) in Ω over center points \((x_{i},y_{j})\), \(i=1,2,\dots ,N_{x}\), \(j=1,2,\dots ,N_{y}\), the CS–RBF interpolation approximations \(\hat{u}^{k}\), \(\hat{u}^{*^{k+1}}\), and \(\hat{u}^{**^{k+1}}\) are defined as

$$\begin{aligned}& \hat{u}^{k}(x,y_{j}) =\sum _{i=1}^{N_{x}}\theta _{ij}^{k} \phi _{i}(x),\quad j= 1, 2, \dots ,N_{y}, \end{aligned}$$
(2.18)
$$\begin{aligned}& \hat{u}^{*^{k+1}}(x,y_{j}) =\sum _{i=1}^{N_{x}}\lambda _{ij}^{k+1} \phi _{i}(x),\quad j= 1, 2, \dots ,N_{y}, \end{aligned}$$
(2.19)
$$\begin{aligned}& \hat{u}^{**^{k+1}}(x_{i},y) =\sum _{j=1}^{N_{y}}\gamma _{ij}^{k+1} \phi _{j}(y), \quad i= 1, 2, \dots ,N_{x}, \end{aligned}$$
(2.20)

in which \(\phi _{i}\) and \(\phi _{j}\) are RBFs centered at \(\lbrace x_{i}\rbrace _{i=1}^{N_{x}}\) and \(\lbrace y_{j}\rbrace _{j=1}^{N_{y}}\), respectively. The coefficients \(\theta _{ij}^{k}\) are known from the previous step, while \(\lambda _{ij}\) and \(\gamma _{ij}\) are unknown coefficients to be determined.

Finally, substituting the approximation (2.19) into Eq. (2.14), associated with the x-direction, for all nodes in Ω, the matrix form of the discrete equations is obtained as follows:

$$ \mathbf{A}_{j}\Lambda ^{k+1}_{j}= \mathbf{B}_{j},\quad j=1, 2, \dots , N_{y}, $$
(2.21)

where \(\Lambda ^{k+1}_{j}= [ \lambda _{1j}^{k+1},\lambda _{2j}^{k+1}, \dots ,\lambda _{N_{x}j}^{k+1} ]^{T}\), and \(\mathbf{A}_{j}\) and \(\mathbf{B}_{j}\) are an \(N_{x} \times N_{x}\) matrix and an \(N_{x} \times 1\) vector, respectively, defined as

$$\begin{aligned}& \mathbf{A}_{j} =\mathbf{D} \bigl(\mathbf{G}_{x}^{j} \Upsilon _{x}+ \mathbf{G}^{j}\Psi _{x} - \mathcal{C}_{\Delta t}^{\alpha}\Phi _{x} \bigr)+ \mathbf{H}_{x}, \end{aligned}$$
(2.22)
$$\begin{aligned}& \mathbf{B}_{j} =\mathbf{D} \Biggl(\mathcal{C}_{\Delta t}^{\alpha} \sum_{p=1}^{k} \mathcal{A}_{p}^{\alpha} \bigl(\Phi _{x}\Theta ^{k+1-p}_{j}-\Phi _{x} \Theta ^{k-p}_{j} \bigr)- \mathcal{C}_{\Delta t}^{\alpha}\Phi _{x} \Theta ^{k}_{j} \Biggr), \end{aligned}$$
(2.23)

where

$$\begin{aligned}& \Phi _{x} = \begin{bmatrix} \phi _{1}(x_{1}) & \phi _{2}(x_{1}) & \ldots & \phi _{N_{x}-1}(x_{1})& \phi _{N_{x}}(x_{1}) \\ \phi _{1}(x_{2})& \phi _{2}(x_{2}) & \ldots & \phi _{N_{x}-1}(x_{2})& \phi _{N_{x}}(x_{2}) \\ \vdots & \vdots &\ddots &\vdots & \vdots \\ \phi _{1}(x_{N_{x}-1})& \phi _{2}(x_{N_{x}-1}) & \ldots & \phi _{N_{x}-1}(x_{N_{x}-1})& \phi _{N_{x}}(x_{N_{x}-1}) \\ \phi _{1}(x_{N_{x}})& \phi _{2}(x_{N_{x}}) & \ldots & \phi _{N_{x}-1}(x_{N_{x}})& \phi _{N_{x}}(x_{N_{x}}) \end{bmatrix}, \\& \Upsilon _{x} = \begin{bmatrix} \phi '_{1}(x_{1}) & \phi '_{2}(x_{1}) & \ldots & \phi '_{N_{x}-1}(x_{1})& \phi '_{N_{x}}(x_{1}) \\ \phi '_{1}(x_{2})& \phi '_{2}(x_{2}) & \ldots & \phi '_{N_{x}-1}(x_{2})& \phi '_{N_{x}}(x_{2}) \\ \vdots & \vdots &\ddots &\vdots & \vdots \\ \phi '_{1}(x_{N_{x}-1})& \phi '_{2}(x_{N_{x}-1}) & \ldots & \phi '_{N_{x}-1}(x_{N_{x}-1})& \phi '_{N_{x}}(x_{N_{x}-1}) \\ \phi '_{1}(x_{N_{x}})& \phi '_{2}(x_{N_{x}}) & \ldots & \phi '_{N_{x}-1}(x_{N_{x}})& \phi '_{N_{x}}(x_{N_{x}}) \end{bmatrix}, \\& \Psi _{x} = \begin{bmatrix} \phi ''_{1}(x_{1}) & \phi ''_{2}(x_{1}) & \ldots & \phi ''_{N_{x}-1}(x_{1})& \phi ''_{N_{x}}(x_{1}) \\ \phi ''_{1}(x_{2})& \phi ''_{2}(x_{2}) & \ldots & \phi ''_{N_{x}-1}(x_{2})& \phi ''_{N_{x}}(x_{2}) \\ \vdots & \vdots &\ddots &\vdots & \vdots \\ \phi ''_{1}(x_{N_{x}-1})& \phi ''_{2}(x_{N_{x}-1}) & \ldots & \phi ''_{N_{x}-1}(x_{N_{x}-1})& \phi ''_{N_{x}}(x_{N_{x}-1}) \\ \phi ''_{1}(x_{N_{x}})& \phi ''_{2}(x_{N_{x}}) & \ldots & \phi ''_{N_{x}-1}(x_{N_{x}})& \phi ''_{N_{x}}(x_{N_{x}}) \end{bmatrix}, \\& \mathbf{D}=\operatorname{diag}(0,1,1,1,\dots ,1,0), \\& \begin{aligned} \mathbf{G}^{j}={}&\operatorname{diag} \bigl(g \bigl( \bigl\Vert \nabla u^{c}(x_{1},y_{j}) \bigr\Vert \bigr),g \bigl( \bigl\Vert \nabla u^{c}(x_{2},y_{j}) \bigr\Vert \bigr), \\ & \dots ,g \bigl( \bigl\Vert \nabla u^{c}(x_{N_{x}-1},y_{j}) \bigr\Vert \bigr),g \bigl( \bigl\Vert \nabla u^{c}(x_{N_{x}},y_{j}) \bigr\Vert \bigr) \bigr), \end{aligned} \\& \begin{aligned} \mathbf{G}_{x}^{j}={}&\operatorname{diag} \biggl( \frac{\partial}{\partial x}g \bigl( \bigl\Vert \nabla u^{c}(x_{1},y_{j}) \bigr\Vert \bigr),\frac{\partial}{\partial x}g \bigl( \bigl\Vert \nabla u^{c}(x_{2},y_{j}) \bigr\Vert \bigr), \\ & \dots ,\frac{\partial}{\partial x}g \bigl( \bigl\Vert \nabla u^{c}(x_{N_{x}-1},y_{j}) \bigr\Vert \bigr),\frac{\partial}{\partial x}g \bigl( \bigl\Vert \nabla u^{c}(x_{N_{x}},y_{j}) \bigr\Vert \bigr) \biggr), \end{aligned} \end{aligned}$$

\(\mathbf{H}_{x}\) is an \(N_{x}\times N_{x}\) matrix whose first and last rows are the first and last rows of \(\Upsilon _{x}\), while its other rows contain only zero elements; these rows impose the homogeneous Neumann boundary condition. Moreover,

$$ \Theta ^{m}_{j}= \bigl(\theta _{1j}^{m}, \theta _{2j}^{m},\dots ,\theta _{N_{x}j}^{m} \bigr)^{T}, $$

whose entries are known coefficients computed in previous iterations. Indeed, there are \(N_{y}\) linear systems of \(N_{x}\) algebraic equations each, which are well-posed and can be solved by a standard approach such as LU factorization. Solving these systems yields the coefficient vectors \(\Lambda _{j}^{k+1}\), and the approximation \(\hat{u}^{*^{k+1}}\) is computed as follows:

$$ \mathbf{U}^{*^{k+1}}=\Phi \bigl[\Lambda _{1}^{k+1}, \Lambda _{2}^{k+1}, \dots ,\Lambda _{N_{y}}^{k+1} \bigr], $$
(2.24)

in which

$$ \mathbf{U}^{*^{k+1}}= \begin{bmatrix} u^{*^{k+1}}(x_{1},y_{1}) & u^{*^{k+1}}(x_{1},y_{2}) & \ldots & u^{*^{k+1}}(x_{1},y_{N_{y}-1})& u^{*^{k+1}}(x_{1},y_{N_{y}}) \\ u^{*^{k+1}}(x_{2},y_{1}) & u^{*^{k+1}}(x_{2},y_{2}) & \ldots & u^{*^{k+1}}(x_{2},y_{N_{y}-1})& u^{*^{k+1}}(x_{2},y_{N_{y}}) \\ \vdots & \vdots &\ddots &\vdots & \vdots \\ u^{*^{k+1}}(x_{N_{x}-1},y_{1}) & u^{*^{k+1}}(x_{N_{x}-1},y_{2}) & \ldots & u^{*^{k+1}}(x_{N_{x}-1},y_{N_{y}-1})& u^{*^{k+1}}(x_{N_{x}-1},y_{N_{y}}) \\ u^{*^{k+1}}(x_{N_{x}},y_{1}) & u^{*^{k+1}}(x_{N_{x}},y_{2}) & \ldots & u^{*^{k+1}}(x_{N_{x}},y_{N_{y}-1})& u^{*^{k+1}}(x_{N_{x}},y_{N_{y}}) \end{bmatrix} . $$
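
For orientation, the x-sweep (2.21)–(2.24) can be condensed into a few lines of Python; wendland_c2 is the helper sketched above, the diagonal vectors G[j] and Gx[j] are assumed precomputed from \(u^{c}\), and rhs_hist[j] denotes the bracketed history vector of (2.23) before multiplication by D. Dense assembly is used for clarity; an actual implementation would exploit the banded structure of the CS–RBF matrices:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def x_sweep(x_nodes, G, Gx, C_alpha, rhs_hist, delta):
    """Solve the N_y systems A_j Lambda_j = B_j of Eq. (2.21)."""
    nx = len(x_nodes)
    Phi, Ups, Psi = np.empty((nx, nx)), np.empty((nx, nx)), np.empty((nx, nx))
    for i in range(nx):   # column i: phi_i and its derivatives at all nodes
        Phi[:, i], Ups[:, i], Psi[:, i] = wendland_c2(x_nodes, x_nodes[i], delta)
    D = np.eye(nx); D[0, 0] = D[-1, -1] = 0.0          # drop boundary rows
    H = np.zeros((nx, nx)); H[[0, -1]] = Ups[[0, -1]]  # Neumann rows (2.22)
    cols = []
    for j in range(len(G)):                            # one system per row y_j
        A = D @ (np.diag(Gx[j]) @ Ups + np.diag(G[j]) @ Psi
                 - C_alpha * Phi) + H
        cols.append(lu_solve(lu_factor(A), D @ rhs_hist[j]))   # Eq. (2.21)
    return Phi @ np.array(cols).T                      # U^{*k+1}, Eq. (2.24)
```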

The same procedure is applied to Eq. (2.15); as a result, the following system is obtained:

$$ \mathbf{A}_{i}\Gamma ^{k+1}_{i}= \mathbf{B}_{i}, \quad i=1, 2, \dots , N_{x}, $$
(2.25)

where \(\Gamma ^{k+1}_{i}= [ \gamma _{i1}^{k+1},\gamma _{i2}^{k+1},\dots , \gamma _{iN_{y}}^{k+1} ]^{T}\), and \(\mathbf{A}_{i}\) and \(\mathbf{B}_{i}\) are an \(N_{y} \times N_{y}\) matrix and an \(N_{y} \times 1\) vector, respectively, defined as follows:

$$\begin{aligned}& \mathbf{A}_{i} =\mathbf{D} \bigl(\tilde{\mathbf{G}}_{y}^{i} \Upsilon _{y}+ \tilde{\mathbf{G}}^{i}\Psi _{y} -\mathcal{C}_{\Delta t}^{\alpha}\Phi _{y} \bigr)+\mathbf{H}_{y}, \end{aligned}$$
(2.26)
$$\begin{aligned}& \mathbf{B}_{i} =\mathbf{D} \Biggl(\mathcal{C}_{\Delta t}^{\alpha} \sum_{p=1}^{k} \mathcal{A}_{p}^{\alpha} \bigl(\Phi _{y}\overline{\Lambda}^{k+1-p}_{i}- \Phi _{y}\overline{\Lambda}^{k-p}_{i} \bigr)-\mathcal{C}_{\Delta t}^{ \alpha}\Phi _{y} \overline{\Lambda}^{k}_{i} \Biggr). \end{aligned}$$
(2.27)

The vector \(\overline{\Lambda}^{m}_{i}\) is calculated by multiplying \(\Phi ^{-1}\) by the ith column of \((\mathbf{U}^{*^{m}})^{T}\); \(\Phi _{y}\), \(\Upsilon _{y}\), and \(\Psi _{y}\) are \(N_{y}\times N_{y}\) matrices constructed in the same way as \(\Phi _{x}\), \(\Upsilon _{x}\), and \(\Psi _{x}\), respectively, but with the nodes \(\lbrace y_{1},y_{2},\dots ,y_{N_{y}}\rbrace \). Moreover, \(\mathbf{H}_{y}\) is an \(N_{y} \times N_{y}\) matrix analogous to \(\mathbf{H}_{x}\), whose first and last rows equal the first and last rows of \(\Upsilon _{y}\); \(\tilde{\mathbf{G}}^{i}\) and \(\tilde{\mathbf{G}}_{y}^{i}\) have the following structures:

$$\begin{aligned}& \begin{aligned} \tilde{\mathbf{G}}^{i}={}& \operatorname{diag} \bigl(g \bigl( \bigl\Vert \nabla u^{*^{c}}(x_{i},y_{1}) \bigr\Vert \bigr),g \bigl( \bigl\Vert \nabla u^{*^{c}}(x_{i},y_{2}) \bigr\Vert \bigr), \\ & \dots ,g \bigl( \bigl\Vert \nabla u^{*^{c}}(x_{i},y_{N_{y}-1}) \bigr\Vert \bigr),g \bigl( \bigl\Vert \nabla u^{*^{c}}(x_{i},y_{N_{y}}) \bigr\Vert \bigr) \bigr), \end{aligned} \\& \begin{aligned} \tilde{\mathbf{G}}_{y}^{i}={}& \operatorname{diag} \biggl( \frac{\partial}{\partial y}g \bigl( \bigl\Vert \nabla u^{*^{c}}(x_{i},y_{1}) \bigr\Vert \bigr), \frac{\partial}{\partial y}g \bigl( \bigl\Vert \nabla u^{*^{c}}(x_{i},y_{2}) \bigr\Vert \bigr), \\ & \dots ,\frac{\partial}{\partial y}g \bigl( \bigl\Vert \nabla u^{*^{c}}(x_{i},y_{N_{y}-1}) \bigr\Vert \bigr), \frac{\partial}{\partial y}g \bigl( \bigl\Vert \nabla u^{*^{c}}(x_{i},y_{N_{y}}) \bigr\Vert \bigr) \biggr). \end{aligned} \end{aligned}$$

By solving the \(N_{x}\) systems (2.25), \(\hat{u}^{**^{k+1}}\) is calculated as

$$ \mathbf{U}^{**^{k+1}}=\Phi \bigl[\Gamma _{1}^{k+1}, \Gamma _{2}^{k+1}, \dots ,\Gamma _{N_{x}}^{k+1} \bigr], $$
(2.28)

in which

$$ \mathbf{U}^{**^{k+1}}= \begin{bmatrix} u^{**^{k+1}}(x_{1},y_{1}) & u^{**^{k+1}}(x_{1},y_{2}) & \ldots & u^{**^{k+1}}(x_{1},y_{N_{y}-1})& u^{**^{k+1}}(x_{1},y_{N_{y}}) \\ u^{**^{k+1}}(x_{2},y_{1}) & u^{**^{k+1}}(x_{2},y_{2}) & \ldots & u^{**^{k+1}}(x_{2},y_{N_{y}-1})& u^{**^{k+1}}(x_{2},y_{N_{y}}) \\ \vdots & \vdots &\ddots &\vdots & \vdots \\ u^{**^{k+1}}(x_{N_{x}-1},y_{1}) & u^{**^{k+1}}(x_{N_{x}-1},y_{2}) & \ldots & u^{**^{k+1}}(x_{N_{x}-1},y_{N_{y}-1})& u^{**^{k+1}}(x_{N_{x}-1},y_{N_{y}}) \\ u^{**^{k+1}}(x_{N_{x}},y_{1}) & u^{**^{k+1}}(x_{N_{x}},y_{2}) & \ldots & u^{**^{k+1}}(x_{N_{x}},y_{N_{y}-1})& u^{**^{k+1}}(x_{N_{x}},y_{N_{y}}) \end{bmatrix} . $$

Finally, \(u^{k+1}(x_{i},y_{j})\), \(i=1,2,\dots ,N_{x}\), \(j=1,2,\ldots ,N_{y}\) are obtained from \((\mathbf{U}^{**^{k+1}} )^{T}\).

We summarize the proposed approach in Algorithm 1.

Algorithm 1

The developed numerical algorithm to solve FPMM
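
Since the algorithm listing itself is rendered as a figure, the following skeleton restates its main steps as implied by Sects. 2.1–2.2 (a reconstruction; presmooth, x_solve, and y_solve stand for the sub-solvers sketched earlier, and the PSNR-based stopping rule of Sect. 3 is omitted for brevity):

```python
def fpmm_denoise(u0, M, presmooth, x_solve, y_solve):
    """Skeleton of Algorithm 1: FPMM denoising by Trotter OS + CS-RBF."""
    hist, hist_star = [u0], [u0]     # Caputo derivative needs all past steps
    for k in range(M):
        uc = presmooth(hist[-1])                  # u^c      (Sect. 2.2)
        u_star = x_solve(hist, uc)                # solve Eq. (2.14)
        hist_star.append(u_star)
        uc_star = presmooth(u_star)               # u^{*c}
        u_next = y_solve(hist_star, uc_star)      # solve Eq. (2.15)
        hist.append(u_next)                       # u^{k+1} = u^{**k+1}
    return hist[-1]
```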

3 Numerical experiments

This section is devoted to evaluating the performance of the proposed algorithm on several examples: a \(512\times 512\) image (Lena) and two \(256\times 256\) images (Cameraman and Racoon). We consider the Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Mean Squared Error (MSE) as four metrics to measure the denoising performance of FPMM and PMM; they are defined as follows:

$$\begin{aligned}& \mathrm{SNR} =10\log _{10} \biggl( \frac{\sum_{i=1}^{N}\sum_{j=1}^{M}\hat{u}^{2}(x_{i},y_{j})}{\sum_{i=1}^{N}\sum_{j=1}^{M} (\hat{u}(x_{i},y_{j})-u(x_{i},y_{j}) )^{2}} \biggr), \end{aligned}$$
(3.1)
$$\begin{aligned}& \mathrm{PSNR} =10\log _{10} \biggl( \frac{{MN}\kappa ^{2}}{\sum_{i=1}^{N}\sum_{j=1}^{M} (\hat{u}(x_{i},y_{j})-u(x_{i},y_{j}) )^{2}} \biggr), \end{aligned}$$
(3.2)
$$\begin{aligned}& \mathrm{MSE} =\frac{1}{{MN}}\sum_{i=1}^{N} \sum_{j=1}^{M} \bigl( \hat{u}(x_{i},y_{j})-u(x_{i},y_{j}) \bigr)^{2}, \end{aligned}$$
(3.3)
$$\begin{aligned}& \mathrm{SSIM} = \frac{(2\mu _{u}\mu _{\hat{u}}+c_{1})(2\sigma _{u\hat{u}}+c_{2})}{(\mu _{u}^{2}+\mu _{\hat{u}}^{2}+c_{1})(\sigma _{u}^{2}+\sigma _{\hat{u}}^{2}+c_{2})}, \end{aligned}$$
(3.4)

in which u is the uncorrupted image, κ is its maximum intensity value, û is the reconstructed image, and \(M \times N\) is the size of the image. Moreover, \(\mu _{u}\) and \(\mu _{\hat{u}}\) are the means of the pixel values of the two images, \(\sigma _{u}^{2}\) and \(\sigma _{\hat{u}}^{2}\) are the variances of the pixel values, and \(\sigma _{u\hat{u}}\) is the covariance of u and û. The coefficients \(c_{1}\) and \(c_{2}\) are defined as follows:

$$ c_{1}=(k_{1}L)^{2}, \qquad c_{2}=(k_{2}L)^{2}, $$

where \(k_{1}, k_{2}\ll 1\) are constant coefficients and L is the dynamic range of pixel values.
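
For reference, the first three metrics are one-liners in NumPy (the kappa default below assumes 8-bit images; for SSIM an off-the-shelf routine such as skimage.metrics.structural_similarity can be used):

```python
import numpy as np

def mse(u, u_hat):
    """Eq. (3.3)."""
    return np.mean((np.asarray(u_hat, float) - u) ** 2)

def psnr(u, u_hat, kappa=255.0):
    """Eq. (3.2); kappa is the maximum intensity value of u."""
    return 10.0 * np.log10(kappa**2 / mse(u, u_hat))

def snr(u, u_hat):
    """Eq. (3.1)."""
    return 10.0 * np.log10(np.sum(np.asarray(u_hat, float) ** 2)
                           / (mse(u, u_hat) * u.size))
```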

To demonstrate the validity of the proposed method, different kinds of noise, i.e., Gaussian, Poisson, and speckle, are added to the images, and the denoising results of FPMM and PMM are reported for each example. The appropriate value of α in FPMM is selected according to the SNR and PSNR indicators, and the denoising process is stopped when the PSNR is maximized. All implementations were done in MATLAB 2017a on a computer with an Intel Core i7 CPU and 16 GB of RAM.

3.1 Test case 1

As the first example, we consider the Lena image of size \(512 \times 512\) pixels. We set \(\Delta t=10^{-3}\) and \(\delta =5\). With this value of δ, the coefficient matrices are well-conditioned, with condition numbers of roughly \(10^{7}\). According to Fig. 1, \(\alpha =0.9\) is a good choice for this example. Figure 2 shows the results of FPMM with \(\alpha =0.9\) and of PMM for different input noises; the figure indicates that FPMM is overall better than the classical model and is applicable to all types of noise. Table 1 reports the results for different Gaussian noise variances: as the noise variance increases, FPMM increasingly outperforms PMM.

Figure 1

PSNR and SNR for different values of α in test case 1

Figure 2

Test case 1: (a) the original image Lena, (b) noisy image with Gaussian noise (variance 0.02), (c) denoised image of the Gaussian noise by FPMM (\(\alpha =0.9\)), (d) denoised image of the Gaussian noise by PMM, (e) noisy image with Poisson noise, (f) denoised image of the Poisson noise by FPMM (\(\alpha =0.9\)), (g) denoised image of the Poisson noise by PMM, (h) noisy image with speckle noise, (i) denoised image of the speckle noise by FPMM (\(\alpha =0.9\)), and (j) denoised image of the speckle noise by PMM

Table 1 Quantitative analysis of obtained results of FPMM and PMM for denoising the test case 1 with the Gaussian noise

3.2 Test case 2

In this case, we consider the Cameraman image of size \(256 \times 256\). The parameters are \(\Delta t=10^{-3}\), \(\delta =5\), and \(\alpha =0.9\). Figure 3 shows the effect of α on the accuracy of the denoising process; from this figure, \(\alpha =0.9\) is a suitable choice for the order of the fractional derivative. Figure 4 depicts the denoising results of FPMM (\(\alpha =0.9\)) and PMM for Gaussian (variance 0.02), Poisson, and speckle noises. In addition, Table 2 lists the performance metrics of FPMM and PMM for different Gaussian noise variances. We conclude that FPMM is stronger than PMM and that the fractional derivative lets us denoise images more effectively.

Figure 3

PSNR and SNR for different values of α in test case 2

Figure 4

Test case 2: (a) the original image of a cameraman, (b) noisy image with Gaussian noise (variance 0.02), (c) denoised image of the Gaussian noise by FPMM (\(\alpha =0.9\)), (d) denoised image of the Gaussian noise by PMM, (e) noisy image with Poisson noise, (f) denoised image of the Poisson noise by FPMM (\(\alpha =0.9\)), (g) denoised image of the Poisson noise by PMM, (h) noisy image with speckle noise, (i) denoised image of the speckle noise by FPMM (\(\alpha =0.9\)), and (j) denoised image of the speckle noise by PMM

Table 2 Quantitative analysis of obtained results of FPMM and PMM for denoising the test case 2 with the Gaussian noise

3.3 Test case 3

In the last example, the Racoon image of size \(256\times 256\) is considered. The numerical approach is applied with \(\Delta t=10^{-2}\), \(\alpha =0.8\), and \(\delta =5\). As in the previous examples, \(\alpha =0.8\) is selected based on the values of PSNR and SNR in Fig. 5. Additionally, Table 3 analyzes the results of FPMM and PMM: both models can eliminate the noise, but FPMM is much better at higher noise variances. For instance, at variance 0.01 the MSE of PMM is lower than that of FPMM, but as the noise intensity increases, FPMM attains the lower MSE. In Fig. 6, we show the performance of FPMM (\(\alpha =0.8\)) and PMM in eliminating various kinds of noise for the Racoon example. In summary, the superiority of FPMM with \(\alpha =0.8\) is evident in this test case.

Figure 5

PSNR and SNR for different values of α in test case 3

Figure 6

Test case 3: (a) the original image Racoon, (b) noisy image with Gaussian noise (variance 0.02), (c) denoised image of the Gaussian noise by FPMM (\(\alpha =0.8\)), (d) denoised image of the Gaussian noise by PMM, (e) noisy image with Poisson noise, (f) denoised image of the Poisson noise by FPMM (\(\alpha =0.8\)), (g) denoised image of the Poisson noise by PMM, (h) noisy image with speckle noise, (i) denoised image of the speckle noise by FPMM (\(\alpha =0.8\)), and (j) denoised image of the speckle noise by PMM

Table 3 Quantitative analysis of obtained results of FPMM and PMM for denoising the test case 3 with the Gaussian noise

4 Conclusion

In this study, we proposed a practical and accurate numerical approach, based on the combination of the Trotter splitting and compactly supported radial basis function methods, for the time-fractional Perona–Malik model (FPMM), a powerful model for image denoising. The operator splitting approach allowed us to divide the main problem, posed on a large domain, into simpler subproblems, significantly decreasing the computational cost. The proposed scheme reduces the model to sparse, well-conditioned linear algebraic systems that can be solved by classical algorithms such as LU factorization. Various test cases were examined under different metrics, namely SNR, PSNR, MSE, and SSIM, to demonstrate the efficiency and accuracy of the proposed method. We showed that, by choosing an appropriate value of the fractional derivative order α, FPMM can outperform the classical model. In this paper, the proper value of α was selected so that the PSNR and SNR values were maximized.