1 Introduction

Image restoration, which mainly includes image deblurring and image denoising, is one of the most fundamental problems in imaging science. It plays an important role in many mid-level and high-level image-processing areas such as medical imaging, remote sensing, machine identification, and astronomy [1,2,3,4]. The image restoration problem can usually be expressed in the following form:

$$ \textstyle\begin{array}{l} g=Hf+\eta, \end{array} $$
(1.1)

where \(f\in R^{n^{2}}\) is the original \(n\times n\) image, \(H\in R^{n ^{2}\times n^{2}}\) is a blurring operator, \(\eta\in R^{n^{2}}\) is the white Gaussian noise, and \(g\in R^{n^{2}}\) is a degraded image.

It is well known that the image restoration problem is usually ill-posed. An efficient way to overcome ill-posedness is to add regularization terms to the objective function, which is known as a regularization method. There are two famous regularization methods: the Tikhonov regularization [5] and the total variation (TV) regularization [6]. The Tikhonov regularization method has a well-known drawback: it tends to make images overly smooth and often fails to adequately preserve important image attributes such as sharp edges. The total variation regularization method, first introduced by Rudin et al. [6], preserves edges well while removing noise at the same time:

$$ \min_{f} \Vert {Hf - g} \Vert _{2}^{2} + \alpha { \Vert f \Vert _{\mathrm{TV}}}, $$
(1.2)

where \({ \Vert \cdot \Vert _{2}}\) denotes the Euclidean norm, \({ \Vert \cdot \Vert _{\mathrm{TV}}}\) is the discrete total variation regularization term, and α is a positive regularization parameter that controls the tradeoff between these two terms. To define the discrete TV norm, we first introduce the discrete gradient ∇f:

$$ {(\nabla f)_{i,j}} = \bigl((\nabla f)_{i,j}^{x},( \nabla f)_{i,j}^{y} \bigr) $$

with

$$\begin{aligned} (\nabla f)_{i,j}^{x}=\textstyle\begin{cases} f_{i+1,j}-f_{i,j} & \text{if } i < n, \\ f_{1,j}-f_{n,j} & \text{if } i = n, \end{cases}\displaystyle \qquad (\nabla f)_{i,j}^{y}=\textstyle\begin{cases} f_{i,j+1}-f_{i,j} & \text{if } j < n, \\ f_{i,1}-f_{i,n} & \text{if } j = n, \end{cases}\displaystyle \end{aligned}$$

for \(i,j =1,\ldots,n\). Here \(\nabla: R^{n^{2}}\rightarrow R^{2n^{2}}\) denotes the discrete gradient operator, and \(f_{i,j}\) refers to the \(((j-1)n+i)\)th entry of the vector f, which is the \((i,j)\)th pixel location of the image; see [7]. Then the discrete TV of f is defined by

$$\begin{aligned} { \Vert f \Vert _{\mathrm{TV}}} = \sum_{1 \le i,j \le n} { \sqrt{ {{ \bigl\vert {(\nabla f)_{i,j}^{x}} \bigr\vert }^{2}} + {{ \bigl\vert {(\nabla f)_{i,j} ^{y}} \bigr\vert }^{2}}} }. \end{aligned}$$
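For intuition, the periodic forward differences and the isotropic TV norm above can be sketched in a few lines of NumPy (a sketch only; the function names are ours, and the image is treated as an \(n\times n\) array rather than a stacked vector):

```python
import numpy as np

def discrete_gradient(f):
    """Forward differences with periodic boundary conditions.

    f is an n-by-n image; returns (gx, gy) with
    gx[i, j] = f[i+1, j] - f[i, j] (wrapping at i = n) and
    gy[i, j] = f[i, j+1] - f[i, j] (wrapping at j = n).
    """
    gx = np.roll(f, -1, axis=0) - f
    gy = np.roll(f, -1, axis=1) - f
    return gx, gy

def tv_norm(f):
    """Isotropic discrete TV norm: sum over pixels of sqrt(gx^2 + gy^2)."""
    gx, gy = discrete_gradient(f)
    return np.sum(np.sqrt(gx**2 + gy**2))
```

A constant image has zero TV, which is why the TV term penalizes oscillations (noise) but not flat regions.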

Due to the nonlinearity and nondifferentiability of the total variation function, model (1.2) is difficult to solve. To solve this problem more effectively, many methods have been proposed for total-variation-based image restoration in recent years [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27]. Among these methods, Rudin et al. [6] proposed a time-marching scheme, and Vogel et al. [7] put forward a fixed-point iteration method. The time-marching scheme converges slowly, especially when the iterate is close to the solution set. The fixed-point iteration method also becomes very difficult to solve as the blurring kernel gets larger. Based on the dual formulation, Chambolle [15] proposed a gradient algorithm for the total variation denoising problem. Later, based on variable separation and penalty techniques, Wang et al. [16] proposed the fast total variation deconvolution (FTVd) method. By introducing an auxiliary variable to replace the nondifferentiable part of model (1.2), the TV model (1.2) can be rewritten as the following minimization problem:

$$ \min_{f,\omega} \Vert {Hf - g} \Vert _{2}^{2} + \alpha{ \Vert \omega \Vert _{2}} + \frac{\beta}{2} \Vert { \omega- \nabla f} \Vert _{2}^{2}, $$

where β is a penalty parameter. Experimental results verify the effectiveness of the FTVd method, but in the computation the penalty parameter β needs to approach infinity, which creates numerical instability. To avoid driving the penalty parameter to infinity, Chan et al. [28] proposed the alternating direction method of multipliers (ADMM) to solve model (1.2). By defining the augmented Lagrangian function, the image restoration model (1.2) can be translated into the following form:

$$ \min_{f,\omega} \Vert {Hf - g} \Vert _{2}^{2} + \alpha{ \Vert \omega \Vert _{2}} + \langle {\lambda,\omega - \nabla f} \rangle + \frac{\beta}{2} \Vert {\omega- \nabla f} \Vert _{2}^{2}, $$

where λ is a Lagrange multiplier. The experimental results show that the ADMM method is robust and fast and has a good restoration effect.

More recently, to overcome the shortcoming of the TV norm of f in model (1.2), Huang et al. [29] proposed a fast total variation minimization method for image restoration as follows:

$$ \min_{f,u} \Vert {Hf - g} \Vert _{2}^{2} + {\alpha_{1}} \Vert {f - u} \Vert _{2}^{2} + {\alpha_{2}} { \Vert u \Vert _{\mathrm{TV}}}, $$
(1.3)

where \(\alpha_{1}\) and \(\alpha_{2}\) are positive regularization parameters. Compared with model (1.2), model (1.3) adds the term \(\| f-u\|_{2}^{2}\). The experimental results show that the modified TV minimization model can preserve edges very well in the image restoration process. Based on model (1.3), Liu et al. [30] proposed the following minimization model:

$$ \min_{f,u} \Vert Hf-g \Vert _{2}^{2}+\alpha_{1} \Vert f-u \Vert _{2}^{2}+ \alpha_{2} \Vert f \Vert _{\mathrm{TV}}+\alpha_{3} \Vert u \Vert _{\mathrm{TV}}, $$
(1.4)

where \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) are positive regularization parameters. Liu et al. [30] adopted the split Bregman method and Chambolle projection algorithm to solve the minimization model (1.4). Numerical results illustrated the effectiveness of their model.

Although the total variation regularization can preserve sharp edges very well, it also causes some staircase effects [31, 32]. To overcome this kind of staircase effect, some high-order total variational models [33,34,35,36,37,38,39] and fractional-order total variation models [40,41,42,43,44] are introduced. It has been proved that the high-order TV norm can remove the staircase effect and preserve the edges well in the process of image restoration.

To eliminate the staircase effect better and preserve edges very well in image processing, we combine the TV norm and second-order TV norm and introduce a new hybrid variational model as follows:

$$ \min_{f,u} \Vert Hf-g \Vert _{2}^{2}+\alpha_{1} \Vert f-u \Vert _{2}^{2}+ \alpha_{2} \bigl\Vert \nabla^{2}f \bigr\Vert _{2}+\alpha_{3} \Vert \nabla u \Vert _{2}, $$
(1.5)

where \(\alpha_{1}\), \(\alpha_{2}\), and \(\alpha_{3}\) are positive regularization parameters, \(\|\nabla u\|_{2}\) is the TV norm of u, and \(\|\nabla^{2}f\|_{2}\) is the second-order TV norm of f. The definition of the second-order TV norm is similar to that of the TV norm:

$$\begin{aligned}& \bigl(\nabla^{2}f \bigr)_{i,j}= \bigl((\nabla f)_{i,j}^{x,x}, (\nabla f)_{i,j}^{x,y},( \nabla f)_{i,j}^{y,x},(\nabla f)_{i,j}^{y,y} \bigr), \\& \bigl\Vert \nabla^{2}f \bigr\Vert _{2}=\sum_{1 \le i,j \le n}{\sqrt{ \bigl\vert (\nabla f)_{i,j}^{x,x} \bigr\vert ^{2} + \bigl\vert (\nabla f)_{i,j}^{x,y} \bigr\vert ^{2}+ \bigl\vert (\nabla f)_{i,j}^{y,x} \bigr\vert ^{2}+ \bigl\vert (\nabla f)_{i,j}^{y,y} \bigr\vert ^{2}} }, \end{aligned}$$

where \((\nabla f)_{i,j}^{x,x}\), \((\nabla f)_{i,j}^{x,y}\), \((\nabla f)_{i,j}^{y,x}\), \((\nabla f)_{i,j}^{y,y}\) denote the second-order differences of the \(((j-1)n+i)\)th entry of the vector f. For more detail about the second-order differences, we refer to [45]. Because both the second-order TV regularization and the TV regularization are used, the edges in the restored image can be preserved quite well, and the staircase effect is reduced simultaneously.
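In the same spirit as the first-order case, the second-order TV norm can be sketched by composing periodic forward differences twice (an illustration under that assumption; the exact stencil follows [45], and the function name is ours):

```python
import numpy as np

def second_order_tv(f):
    """Discrete second-order TV of an n-by-n image f:
    sum over pixels of sqrt(fxx^2 + fxy^2 + fyx^2 + fyy^2),
    with all differences taken under periodic boundary conditions."""
    dx = lambda a: np.roll(a, -1, axis=0) - a   # forward difference in x
    dy = lambda a: np.roll(a, -1, axis=1) - a   # forward difference in y
    fx, fy = dx(f), dy(f)
    fxx, fxy = dx(fx), dy(fx)                    # second-order differences
    fyx, fyy = dx(fy), dy(fy)
    return np.sum(np.sqrt(fxx**2 + fxy**2 + fyx**2 + fyy**2))
```

Note that, unlike the first-order TV, this term also vanishes on any image whose differences are constant, which is the mechanism by which the second-order term suppresses staircasing.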

The rest of this paper is organized as follows. In Sect. 2, we propose our alternating iterative algorithm to solve model (1.5). In Sect. 3, we give some numerical results to demonstrate the effectiveness of the proposed algorithm. Finally, concluding remarks are given in Sect. 4.

2 The alternating iterative algorithm

In this section, we use an alternating iterative algorithm to solve (1.5). Based on the variable separation technique [16], the minimization problem (1.5) can be divided into a deblurring step and a denoising step, which are decoupled in the image restoration process. The deblurring step is defined as

$$ \arg\min_{f} \Vert Hf-g \Vert _{2}^{2}+ \alpha_{1} \Vert f-u \Vert _{2}^{2}+ \alpha_{2} \bigl\Vert \nabla^{2}f \bigr\Vert _{2}. $$
(2.1)

The denoising step is defined as

$$ \arg\min_{u} \alpha_{1} \Vert u-f \Vert _{2}^{2}+ \alpha _{3} \Vert \nabla u \Vert _{2}. $$
(2.2)

We adopt the alternating direction method of multipliers to solve these two subproblems.

2.1 The deblurring step

The ADMM method is notably stable, converges quickly, and avoids driving the penalty parameter to infinity. We therefore employ the alternating direction method of multipliers to solve the minimization problem (2.1). Because the objective function of (2.1) is nondifferentiable, by introducing an auxiliary variable ω, the unconstrained optimization problem (2.1) can be transformed into the following equivalent constrained optimization problem:

$$ \arg\min_{f,\omega} \Vert Hf-g \Vert _{2}^{2}+ \alpha _{1} \Vert f-u \Vert _{2}^{2}+ \alpha_{2} \Vert \omega \Vert _{2} \quad \text{s.t. } \omega= \nabla^{2}f. $$
(2.3)

For the constrained optimization problem (2.3), the augmented Lagrangian function is defined by

$$\begin{aligned} {L_{\mathrm{A}}}(f,\omega,\lambda_{1}) =& \Vert Hf-g \Vert _{2}^{2}+{\alpha_{1}} \Vert {f - u} \Vert _{2}^{2} + {\alpha_{2}} { \Vert \omega \Vert _{2}} \\ &{}+ \bigl\langle {\lambda_{1} ,\omega- \nabla^{2}f} \bigr\rangle + \frac{\beta_{1} }{2} \bigl\Vert {\omega- \nabla^{2}f} \bigr\Vert _{2}^{2}, \end{aligned}$$
(2.4)

where \(\lambda_{1}\) is a Lagrange multiplier, which makes it unnecessary for the positive penalty parameter to go to infinity, and \(\beta_{1}\) is a positive penalty parameter. Then the alternating minimization method for problem (2.4) can be expressed as follows:

$$ \textstyle\begin{cases} (f^{k+1},\omega^{k+1})=\arg\min_{f,\omega}L_{A}(f,\omega, \lambda_{1}^{k}), \\ \lambda_{1}^{k+1}=\lambda_{1}^{k}+\beta_{1}(\omega^{k+1}-\nabla^{2}f ^{k+1}). \end{cases} $$
(2.5)

Based on the classical ADMM, starting at \(u = u^{k}\), \(\omega= \omega^{k}\), \(\lambda_{1}=\lambda_{1}^{k}\), the iterative scheme is implemented via the following subproblems:

[Subproblems (2.6)–(2.8) of the ADMM iterative scheme: the minimization with respect to f, the minimization with respect to ω, and the multiplier update.]

Based on the optimal conditions, the solution of (2.6) is given by the equation

$$ \bigl(2H^{T}H+\beta_{1}\nabla^{2^{T}} \nabla^{2}+2\alpha_{1}I \bigr)f=2H^{T}g+2 \alpha_{1}u^{k}+\beta_{1}\nabla^{2^{T}} \biggl(\omega^{k}+\frac{\lambda_{1} ^{k}}{\beta_{1}} \biggr), $$
(2.9)

where \(\nabla^{2^{T}}\) is the conjugate operator of \(\nabla^{2}\). Under the periodic boundary condition, \(H^{T}H\) and \(\nabla^{2^{T}}\nabla ^{2}\) are block circulant matrices [46, 47], so \(H^{T}H\) and \(\nabla^{2^{T}}\nabla^{2}\) can be diagonalized by the Fourier transform. The Fourier transform of f is denoted by \(\mathcal{F}(f)\), and \(\mathcal{F}^{-1}(f)\) is the inverse Fourier transform of f. By using the Fourier transform the solution of f can be given as follows:

$$ {f^{k + 1}} = {\mathcal{F}^{ - 1}}(\gamma), $$

where

$$ \gamma= \frac{{\mathcal{F}(2H^{T}g+2{\alpha_{1}}{u^{k}} + \beta_{1} {\nabla^{2^{T}}}({\omega^{k}} + \frac{\lambda_{1}^{k}}{\beta_{1}}))}}{ {\mathcal{F}(2{\alpha_{1}}I + \beta_{1} \nabla^{2^{T}}\nabla^{2}+2H ^{T}H)}}. $$
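As an illustration, the Fourier-domain update for f can be sketched as follows. For simplicity we represent the circulant operators H and \(\nabla^{2}\) by 2-D transfer functions `H_hat` and `L_hat` (a single-channel simplification of \(\nabla^{2}\); in the actual algorithm the four difference components are handled analogously, and all names here are ours):

```python
import numpy as np

def solve_f_subproblem(g, u, omega, lam1, H_hat, L_hat, alpha1, beta1):
    """Fourier-domain solve of (2.9):
    (2 H^T H + beta1 (nabla^2)^T nabla^2 + 2 alpha1 I) f = rhs.

    H_hat and L_hat are the 2-D Fourier transfer functions of the
    circulant operators; applying the transposes corresponds to
    multiplying by their complex conjugates in the Fourier domain.
    """
    num = (2 * np.conj(H_hat) * np.fft.fft2(g)
           + 2 * alpha1 * np.fft.fft2(u)
           + np.conj(L_hat) * np.fft.fft2(beta1 * omega + lam1))
    den = 2 * np.abs(H_hat) ** 2 + beta1 * np.abs(L_hat) ** 2 + 2 * alpha1
    return np.real(np.fft.ifft2(num / den))
```

The division is elementwise because both operators are diagonalized by the 2-D FFT, so the cost per iteration is dominated by a few FFTs.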

The subproblem for ω can be written as

$$ \omega^{k+1}=\arg\min_{\omega} \biggl\{ \alpha_{2} \Vert \omega \Vert _{2}+ \frac{\beta_{1}}{2} \biggl\Vert \omega- \biggl(\nabla^{2}f^{k+1}- \frac{ \lambda_{1}^{k}}{\beta_{1}} \biggr) \biggr\Vert _{2}^{2} \biggr\} , $$

and the solution can be explicitly obtained using the following two-dimensional shrinkage operator [16, 48]:

$$ {\omega^{k + 1}} = \max \biggl\{ { \biggl\Vert { \nabla^{2}{f^{k + 1}} - \frac{ {{\lambda_{1}^{k}}}}{\beta_{1} }} \biggr\Vert _{2} - \frac{{{\alpha_{2}}}}{ \beta_{1} },0} \biggr\} \frac{{\nabla^{2}{f^{k + 1}} - \frac{{{\lambda _{1}^{k}}}}{\beta_{1} }}}{{ \Vert {\nabla^{2}{f^{k + 1}} - \frac{ {{\lambda_{1}^{k}}}}{\beta_{1} }} \Vert _{2}}}, $$
(2.10)

where we follow the convention that \(0\cdot(0/0)=0\).
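The two-dimensional shrinkage (2.10) can be sketched as a pixel-wise operation (a sketch; the channel layout is our assumption, with the difference components stacked in the last axis of t):

```python
import numpy as np

def shrink2d(t, kappa):
    """Isotropic shrinkage of (2.10):
    max(||t|| - kappa, 0) * t / ||t||, with the convention 0*(0/0) = 0.
    The norm is taken pixel-wise over the last axis of t."""
    norm = np.sqrt(np.sum(t**2, axis=-1, keepdims=True))
    scale = np.maximum(norm - kappa, 0.0) / np.where(norm > 0, norm, 1.0)
    return scale * t
```

In the deblurring step this is applied with \(t = \nabla^{2}f^{k+1} - \lambda_{1}^{k}/\beta_{1}\) and \(\kappa = \alpha_{2}/\beta_{1}\).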

Finally, we update \(\lambda_{1}\) by

$$ \lambda_{1}^{k+1}=\lambda_{1}^{k}+ \eta\beta_{1} \bigl(\omega^{k+1}-\nabla ^{2}f^{k+1} \bigr), $$
(2.11)

where η is a relaxation parameter, and \(\eta\in(0,(\sqrt{5} + 1)/2)\).

The algorithm of the deblurring step is summarized in Algorithm 1.

Algorithm 1

Alternating direction minimization method for solving subproblem (2.1)

2.2 The denoising step

Subproblem (2.2) is a classical TV regularization process for image denoising, which can be solved by the Chambolle projection algorithm. However, it is well known that the Chambolle projection algorithm requires a large amount of computation in practice and can cause numerical instability. To overcome these disadvantages, in this paper we adopt the alternating direction method of multipliers to solve subproblem (2.2).

The solution process for subproblem (2.2) is the same as that for subproblem (2.1). First, introducing an auxiliary variable v, problem (2.2) can be transformed into the following constrained minimization problem:

$$ \min_{u,v} {\alpha_{1}} \bigl\Vert {u - f^{k+1}} \bigr\Vert _{2}^{2} + { \alpha_{3}} { \Vert v \Vert _{2}} \quad \text{s.t. } v= \nabla u. $$
(2.12)

Second, to apply the alternating direction method of multipliers to model (2.12), we define its augmented Lagrangian function

$$ {L_{\mathrm{A}}}(u,v,\lambda_{2}) = { \alpha_{1}} \bigl\Vert {u - f^{k+1}} \bigr\Vert _{2}^{2} + {\alpha_{3}} { \Vert v \Vert _{2}} + \langle {\lambda_{2},v - \nabla u} \rangle + \frac{\beta_{2} }{2} \Vert {v - \nabla u} \Vert _{2}^{2}, $$
(2.13)

where \(\beta_{2}\) is a positive penalty parameter, and \(\lambda_{2}\) is a Lagrange multiplier.

The variables u and v are coupled together, so we separate this problem into two subproblems and adopt the alternating minimization method. The two subproblems are given as follows.

The “u-subproblem” for v fixed:

$$ \min_{u}\alpha_{1} \bigl\Vert u-f^{k+1} \bigr\Vert _{2}^{2}+\frac{\beta_{2}}{2} \bigl\Vert v ^{k}-\nabla u \bigr\Vert _{2}^{2}- \bigl\langle \lambda_{2}^{k},\nabla u \bigr\rangle . $$
(2.14)

The “v-subproblem” for u fixed:

$$ \min_{v}\frac{\beta_{2}}{2} \bigl\Vert v-\nabla u^{k} \bigr\Vert _{2}^{2}+ \bigl\langle \lambda_{2}^{k},v \bigr\rangle +\alpha_{3} \Vert v \Vert _{2}. $$
(2.15)

Subproblem (2.14) can be rewritten, up to a constant independent of u, as

$$ \min_{u}\alpha_{1} \bigl\Vert u-f^{k+1} \bigr\Vert _{2}^{2}+\frac{\beta_{2}}{2} \biggl\Vert v ^{k}- \biggl(\nabla u-\frac{\lambda_{2}^{k}}{\beta_{2}} \biggr) \biggr\Vert _{2}^{2}, $$
(2.16)

and the minimizer of (2.16) satisfies the following equation:

$$ \bigl(2{\alpha_{1}}I + \beta_{2} { \nabla^{T}}\nabla \bigr)u = 2{\alpha_{1}} {f^{k + 1}} + \beta_{2} {\nabla^{T}} {v ^{k}} + { \nabla^{T}} {\lambda _{2}^{k}}. $$
(2.17)

Under periodic boundary conditions, \(\nabla^{T}\nabla\) is a block circulant matrix, so it can be diagonalized by the two-dimensional discrete Fourier transform.

Next, the minimization of (2.15) with respect to v is equivalent to the minimization problem

$$ \min_{v}\frac{\beta_{2}}{2} \biggl\Vert v- \biggl(\nabla u^{k+1}-\frac{ \lambda_{2}^{k}}{\beta_{2}} \biggr) \biggr\Vert _{2}^{2}+\alpha_{3} \Vert v \Vert _{2}, $$
(2.18)

and the solution of (2.18) can be explicitly obtained by the two-dimensional shrinkage:

$$ v^{k+1}=\max \biggl\{ \biggl\Vert \nabla u^{k+1}- \frac{\lambda_{2}^{k}}{\beta _{2}} \biggr\Vert _{2}-\frac{\alpha_{3}}{\beta_{2}},0 \biggr\} \frac{\nabla u ^{k+1}-\frac{\lambda_{2}^{k}}{\beta_{2}}}{ \Vert \nabla u^{k+1}-\frac{ \lambda_{2}^{k}}{\beta_{2}} \Vert _{2}}. $$
(2.19)

The Lagrange multiplier \(\lambda_{2}\) is updated as follows:

$$ {\lambda_{2}^{k + 1}} = {\lambda_{2}^{k}} + \eta\beta_{2} \bigl({v^{k + 1}} - \nabla{u^{k + 1}} \bigr), $$
(2.20)

where η is a relaxation parameter.
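Putting the three updates (2.17), (2.19), and (2.20) together, the denoising step can be sketched as a compact loop (a simplified sketch under our assumptions: periodic boundaries, the Fourier symbol of \(\nabla^{T}\nabla\) computed in closed form, and a fixed iteration count; the function and parameter names are ours):

```python
import numpy as np

def denoise_tv_admm(f, alpha1, alpha3, beta2, eta=1.0, iters=50):
    """Sketch of the ADMM denoising step (2.2):
    min_u alpha1*||u - f||^2 + alpha3*||grad u||,
    for an n-by-n image f with periodic boundary conditions."""
    n = f.shape[0]
    grad = lambda a: np.stack([np.roll(a, -1, 0) - a,
                               np.roll(a, -1, 1) - a], axis=-1)
    gradT = lambda p: (np.roll(p[..., 0], 1, 0) - p[..., 0]
                       + np.roll(p[..., 1], 1, 1) - p[..., 1])
    # Fourier symbol of grad^T grad (periodic Laplacian stencil)
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    symbol = 4 - 2 * np.cos(2 * np.pi * ii / n) - 2 * np.cos(2 * np.pi * jj / n)
    u, v, lam = f.copy(), grad(f), np.zeros(f.shape + (2,))
    for _ in range(iters):
        # u-subproblem (2.17): (2*alpha1*I + beta2*grad^T grad) u = rhs
        rhs = 2 * alpha1 * f + gradT(beta2 * v + lam)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (2 * alpha1 + beta2 * symbol)))
        # v-subproblem (2.19): pixel-wise two-dimensional shrinkage
        t = grad(u) - lam / beta2
        norm = np.sqrt(np.sum(t ** 2, axis=-1, keepdims=True))
        v = np.maximum(norm - alpha3 / beta2, 0.0) * t / np.where(norm > 0, norm, 1.0)
        # multiplier update (2.20)
        lam = lam + eta * beta2 * (v - grad(u))
    return u
```

In practice the loop would terminate on the relative-change criterion used in Sect. 3 rather than a fixed iteration count.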

The algorithm of the denoising step is written in Algorithm 2.

Algorithm 2

Alternating direction minimization method for solving the subproblem (2.2)

3 Numerical experiments

This section presents some numerical examples that demonstrate the performance of our proposed algorithm in solving image restoration problems. In the following experiments, we compare our proposed method (HTV) with the FastTV [29] and FNDTV [30] methods. All experiments are performed under Windows 7 and MATLAB 2012a running on a desktop with a Core i5 central processing unit at 2.50 GHz and 4 GB memory. The quality of the restoration results of the different methods is compared quantitatively using the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM). Suppose g, \(f^{0}\), and u are the observed image, the ideal image, and the restored image, respectively. Then the BSNR, MSE, PSNR, and SSIM are defined as follows:

$$\begin{aligned}& \mathrm{BSNR} = 20\log_{10}\frac{ \Vert g \Vert _{2}}{ \Vert \eta \Vert _{2}}, \\& \mathrm{MSE} =\frac{1}{{{n^{2}}}}\sum_{i = 0}^{{n^{2}} - 1} {{{ \bigl({f ^{0}}(i) - u(i) \bigr)}^{2}}}, \\& \mathrm{PSNR} = 20\log_{10}\frac{\mathrm{MAX}_{{f^{0}}}}{{\sqrt{\mathrm{MSE}} }}, \\& \mathrm{SSIM} =\frac{(2\mu_{f^{0}}\mu_{u}+c_{1})(2\sigma_{f^{0}u}+c_{2})}{( \mu_{f^{0}}^{2}+\mu_{u}^{2}+c_{1})(\sigma_{f^{0}}^{2}+\sigma_{u}^{2}+c _{2})}, \end{aligned}$$

where η is the additive noise vector, \(n^{2}\) is the number of pixels of the image, \(\mathrm{MAX}_{f^{0}}\) is the maximum possible pixel value of \(f^{0}\), \(\mu_{f^{0}}\) and \(\mu_{u}\) are the mean intensity values of \(f^{0}\) and u, respectively, \(\sigma_{f^{0}}^{2}\) and \(\sigma_{u}^{2}\) are the variances of \(f^{0}\) and u, respectively, \(\sigma_{f^{0}u}\) is the covariance of \(f^{0}\) and u, and \(c_{1}\) and \(c_{2}\) are stabilizing constants for near-zero denominator values. We also use the SSIM index map to reveal areas of high/low similarity between two images; the whiter the SSIM index map, the closer the two images. Further details on SSIM can be found in the pioneering work [49].
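The BSNR, MSE, and PSNR formulas above translate directly into code (a sketch with our function names; \(\mathrm{MAX}_{f^{0}}\) is passed as `max_val`):

```python
import numpy as np

def psnr(f0, u, max_val=255.0):
    """PSNR in dB between the ideal image f0 and the restored image u;
    the MSE is the mean squared pixel difference."""
    mse = np.mean((np.asarray(f0, float) - np.asarray(u, float)) ** 2)
    return 20 * np.log10(max_val / np.sqrt(mse))

def bsnr(g, eta):
    """Blurred signal-to-noise ratio of the degraded image g
    with additive noise vector eta."""
    return 20 * np.log10(np.linalg.norm(g) / np.linalg.norm(eta))
```

Higher PSNR and SSIM values indicate a restored image closer to the ideal one.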

Four test images, “Cameraman”, “Lena”, “Baboon”, and “Man”, which are commonly used in the literature, are shown in Fig. 1. We test three kinds of blur: Gaussian blur, average blur, and motion blur. These blurring kernels can be built with the function “fspecial” in MATLAB. The additive noise is Gaussian in all experiments. In all tests, we add Gaussian white noise of different BSNR levels to the blurred images. In our experiments, the stopping criterion is that the relative difference between successive iterations of the restored image satisfies the inequality

$$ \frac{{ \Vert {{f^{k + 1}} - {f^{k}}} \Vert _{2}} }{{ \Vert {{f^{k}}} \Vert _{2}}} \le1 \times{10^{ - 4}}, $$

where \(f^{k}\) is the computed image at the kth iteration of the tested method. In the following experiments, for our proposed method, we fixed \(\alpha_{2}=1.3e{-}2\) for all experiments and set \(\alpha_{1}=1e{-}4\) and \(\alpha_{3}=1e{-}4\) for Gaussian and average blur, and \(\alpha_{1}=3e{-}4\) and \(\alpha_{3}=2e{-}4\) for motion blur. For the parameters of FastTV and FNDTV, we refer to [29, 30]. The parameters for every compared method were tuned over many experiments until the best PSNR values were obtained.
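The stopping rule above is a one-line check (a sketch with our function name):

```python
import numpy as np

def converged(f_new, f_old, tol=1e-4):
    """Relative-change stopping criterion used in the experiments:
    ||f^{k+1} - f^k|| / ||f^k|| <= tol."""
    return np.linalg.norm(f_new - f_old) / np.linalg.norm(f_old) <= tol
```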

Figure 1

Test images

Figure 2 shows the experiment for Gaussian blur. We select the “Cameraman” image \((256\times256)\) as the test image, shown in Fig. 1(a). The “Cameraman” image degraded by Gaussian blur with a \(9\times9\) kernel and noise with \(\mathrm{BSNR}=35\) is shown in Fig. 2(a). The images recovered by FastTV, FNDTV, and our method are shown in Fig. 2(b)–(d). To demonstrate the effectiveness of our method more intuitively, we enlarge part of each restored image; the enlarged parts are shown in Fig. 2(e)–(h). We also show the SSIM index maps of the restored images recovered by the three methods in Fig. 2(i)–(l). The SSIM map of the image restored by the proposed method is slightly whiter than the SSIM maps of FastTV and FNDTV. The PSNR and SSIM values of these methods are shown in Table 1. We see that both the PSNR and SSIM values of the image restored by our proposed method are higher than those of FastTV and FNDTV. We also plot the curve of SSIM versus iterations for the three methods in Fig. 3. It is not difficult to see that our method achieves a higher SSIM than the other two methods within a few iterations. In addition, the restoration results for the other images are reported in terms of PSNR and SSIM in Table 1; again, both the PSNR and SSIM of the images restored by our method are higher than those obtained by FastTV and FNDTV.

Figure 2

Results of different methods when restoring blurred and noisy image “Cameraman” degraded by Gaussian blur with a \(9\times9\) Gaussian kernel and noise with \(\mathrm{BSNR}=35\): (a) blurred and noisy image; (b) restored image by FastTV; (c) restored image by FNDTV; (d) restored image by our method; (e) zoomed part of (a); (f) zoomed part of (b); (g) zoomed part of (c); (h) zoomed part of (d); (i) SSIM index map of the corrupted image; (j) SSIM index map of the recovered image by FastTV; (k) SSIM index map of the recovered image by FNDTV; (l) SSIM index map of the recovered image by our method

Figure 3

Changes of SSIM value versus iteration number for the three methods about Gaussian blur

Table 1 Experimental results for different images and different blur kernels, \(\mathrm{BSNR}=35\)

Figure 4 shows the experiment on the “Lenna” image of size \(256\times256\) degraded by average blur with length 9 and noise with \(\mathrm{BSNR}=35\). The degraded “Lenna” image is shown in Fig. 4(a). The images recovered by FastTV, FNDTV, and our method are shown in Fig. 4(b)–(d). More precisely, Fig. 4(e)–(h) displays the same regions of special interest, zoomed in to compare the performance of the three methods. It is not difficult to observe that the proposed method alleviates the staircase phenomenon better. In addition, the SSIM maps of the images restored by the three methods are shown in Fig. 4(i)–(l). It is easy to see that the SSIM map obtained by the proposed method is slightly whiter than the maps of the other two methods. In Fig. 5, we plot the SSIM value versus the iteration number for the three methods. The relationship between SSIM values and iteration numbers also shows that our method requires fewer iterations and achieves values superior to the other two methods. These experiments demonstrate the outstanding performance of our proposed method in overcoming blocky artifacts while preserving edge details. We also report the PSNR and SSIM values of these methods in Table 1. The PSNR and SSIM values of the image restored by our proposed method are higher than those of FastTV and FNDTV.

Figure 4

Results of different methods when restoring blurred and noisy image “Lenna” degraded by average blur with length 9 and a noise with \(\mathrm{BSNR}=35\): (a) blurred and noisy image; (b) restored image by Fast-TV; (c) restored image by FNDTV; (d) restored image by our method; (e) zoomed part of (a); (f) zoomed part of (b); (g) zoomed part of (c); (h) zoomed part of (d); (i) SSIM index map of the corrupted image; (j) SSIM index map of the recovered image by FastTV; (k) SSIM index map of the recovered image by FNDTV; (l) SSIM index map of the recovered image by our method

Figure 5

Changes of SSIM value versus iteration number for the three methods about average blur

The experiments on motion blur are shown in Figs. 6 and 8. For motion blur, we carry out two groups of experiments with different values of theta: one group with a severe degree of blur, shown in Fig. 8, and the other with a slight degree of blur, shown in Fig. 6. The images recovered by the three methods are shown in Figs. 6 and 8(b)–(d), and the enlarged parts are shown in Figs. 6 and 8(e)–(h). We also show the SSIM index maps of the restored images recovered by the three methods in Figs. 6 and 8(j)–(l). It is easy to see that the SSIM map of the image restored by the proposed method is slightly whiter than the SSIM maps of FastTV and FNDTV. In Figs. 7 and 9, we plot the SSIM versus the iteration number for the FastTV and FNDTV methods and our method. From Figs. 7 and 9 we can see that our method achieves higher visual quality and recovers more details than the FastTV and FNDTV methods. The PSNR and SSIM values are listed in Tables 1 and 2. We see that both the PSNR and SSIM values of the image restored by the proposed method are much better than those provided by FastTV and FNDTV.

Figure 6

Results of different methods when restoring blurred and noisy image “Man” degraded by motion blur with \(\mathrm{len}=20\) and \(\mathrm{theta}=20\) and a noise with \(\mathrm{BSNR}=35\): (a) blurred and noisy image; (b) restored image by Fast-TV; (c) restored image by FNDTV; (d) restored image by our method; (e) zoomed part of (a); (f) zoomed part of (b); (g) zoomed part of (c); (h) zoomed part of (d); (i) SSIM index map of the corrupted image; (j) SSIM index map of the recovered image by FastTV; (k) SSIM index map of the recovered image by FNDTV; (l) SSIM index map of the recovered image by our method

Figure 7

Changes of SSIM value versus iteration number for the three methods about motion blur with \(\mathrm{theta}=20\)

Figure 8

Results of different methods when restoring blurred and noisy image “Baboon” degraded by motion blur with \(\mathrm{len}=10\) and \(\mathrm{theta} =100\) and a noise with \(\mathrm{BSNR}=35\): (a) blurred and noisy image; (b) restored image by FastTV; (c) restored image by FNDTV; (d) restored image by our method; (e) zoomed part of (a); (f) zoomed part of (b); (g) zoomed part of (c); (h) zoomed part of (d); (i) SSIM index map of the corrupted image; (j) SSIM index map of the recovered image by FastTV; (k) SSIM index map of the recovered image by FNDTV; (l) SSIM index map of the recovered image by our method

Figure 9

Changes of SSIM value versus iteration number for the three methods about motion blur with \(\mathrm{theta}=100\)

Table 2 Experimental results for different images and different blur kernels, \(\mathrm{BSNR}=40\)

The numerical results of the three methods in terms of PSNR and SSIM are summarized in Tables 1 and 2, from which it is not difficult to see that the PSNR and SSIM of the images restored by our proposed method are higher than those obtained by FastTV and FNDTV.

4 Conclusion

In this paper, we propose a hybrid total variation model and employ the alternating direction method of multipliers to solve it. Experimental results demonstrate that the proposed model obtains better results than some existing restoration methods and achieves better visual quality than the other two compared methods.