Global motion-based video super-resolution reconstruction using discrete wavelet transform
Abstract
Different from the existing super-resolution (SR) reconstruction approaches operating in either the frequency domain or the spatial domain, this paper proposes an improved video SR approach based on both the frequency and spatial domains to improve the spatial resolution and recover the noiseless high-frequency components of observed noisy low-resolution video sequences with global motion. An iterative planar motion estimation algorithm, followed by a structure-adaptive normalised convolution reconstruction method, is applied to produce the estimated low-frequency subband. The discrete wavelet transform is employed to decompose the input low-resolution reference frame into four subbands, and the new edge-directed interpolation method is then used to interpolate each of the high-frequency subbands. One novelty of this algorithm is the introduction and integration of a nonlinear soft-thresholding process to filter the estimated high-frequency subbands in order to better preserve the edges and remove potential noise. Another novelty is its flexibility with respect to motion levels, noise levels, wavelet functions, and the number of low-resolution frames used. The performance of the proposed method has been tested on three well-known videos. Both visual and quantitative results demonstrate the high performance and improved flexibility of the proposed technique over conventional interpolation and state-of-the-art video SR techniques in the wavelet domain.
Keywords
Image enhancement · Image registration · Image reconstruction
1 Introduction
High-resolution (HR) images and videos are highly desirable and in strong demand for most electronic imaging applications, not only for better visualisation but also for extracting additional information. However, HR images are not always available, since high-resolution imaging setups can be expensive, especially given the inherent physical limitations of the sensors, the optics manufacturing technology, the data storage and the sensor’s communication bandwidth. Therefore, it is essential to find an effective image processing approach that increases the resolution at low cost, without replacing the existing imaging system. To address this challenge, the concept of super-resolution (SR) has been pursued. This technique aims to produce a single HR image, or HR video, from a set of successive low-resolution (LR) images captured from the same scene, in order to overcome the limitations and/or possibly ill-posed conditions of the imaging system [53]. SR has been an active area of research over the last two decades for a variety of applications, such as satellite imaging [10, 21], medical imaging [19, 46], forensic imaging [29, 47] and video surveillance systems [20, 67].
Most SR methods consist of two main parts: (I) image registration and (II) image reconstruction. Image registration estimates the motion between the LR images, while image reconstruction combines the registered images to reconstruct the HR image. In image registration, the motion between the reference image and its neighbouring LR images must be estimated accurately to reconstruct the super-resolved image [45, 57]. When the camera is moving and the scene is stationary, global motion occurs. Conversely, when the camera is fixed and the scene is moving, non-global (local) motion occurs. This paper primarily focuses on the first scenario.
1.1 Image registration
Image registration methods can operate either in the spatial domain or in the frequency domain. Frequency-domain methods are usually limited to global motion models, whereas spatial-domain methods usually allow more general motion models. In the frequency domain, Vandewalle et al. [57] presented an image registration algorithm to accurately register a series of aliased images based on their low-frequency, aliasing-free part. They used a planar motion model to estimate the shift and rotation between the images, particularly for the scenario where a set of images is captured in a short period of time with small camera motion. Vandewalle’s method performs better than other frequency-domain registration methods, such as those of Marcel et al. [39] and Lucchese and Cortelazzo [36]. Its advantage is that it discards the high-frequency components, where aliasing may have occurred, in order to be more robust. In the spatial domain, Keren et al. [30] developed an iterative planar motion estimation algorithm that uses different downsampled versions of the images to estimate the shift and rotation parameters based on Taylor series expansions. The goal of this pyramidal scheme is to increase the accuracy of estimating large motion parameters. Keren’s method and Vandewalle’s method have been well accepted for tackling global motion [57]. However, the existing subpixel registration methods become inaccurate when the motion is non-global. Several recent approaches deal with general motion estimation in video SR. For example, Liu and Sun [35] used optical flow techniques to register multiple images with subpixel accuracy, whereas Liao et al. [34] used an ensemble of optical flow models to reconstruct the original HR frames with rich high-frequency details.
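As an illustration of the Taylor-series linearisation underlying Keren-style registration, the following sketch estimates a pure translation by least squares. It is a simplification under stated assumptions: Keren's full method also estimates rotation and uses a pyramidal scheme, and the integer `np.roll` warp here stands in for proper subpixel interpolation.

```python
import numpy as np

def estimate_shift(ref, moved, n_iter=5):
    """Translation-only sketch of a Keren-style estimator: linearise
    ref(x, y) ~ moved(x, y) + dx*Ix + dy*Iy and solve the resulting
    least-squares problem, iterating to refine the estimate."""
    ref = ref.astype(float)
    dx = dy = 0.0
    for _ in range(n_iter):
        # undo the current shift estimate (integer part only, for brevity)
        shifted = np.roll(moved, (-int(round(dy)), -int(round(dx))),
                          axis=(0, 1)).astype(float)
        Iy, Ix = np.gradient(shifted)        # spatial derivatives
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
        b = (ref - shifted).ravel()          # temporal difference
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        dx += sol[0]
        dy += sol[1]
    return dx, dy
```

The first-order Taylor expansion is accurate only for shifts small relative to the image features, which is why Keren's method applies it on a coarse-to-fine pyramid for larger motions.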
1.2 Image reconstruction in the spatial and frequency domains
Image reconstruction methods can also be classified into frequency-domain-based and spatial-domain-based approaches. The first frequency-domain-based SR approach was proposed by Tsai and Huang [55]; it formulates the system equations that relate the HR image to the observed LR images by estimating the relative shifts between a sequence of downsampled, aliased and noise-free LR images. This method was extended by Kim et al. [31], who proposed a weighted least-squares solution based on the assumption that the blur and noise characteristics are the same for all LR images. A major advantage of frequency-domain-based SR methods is that they are usually theoretically simple and computationally inexpensive. However, these methods are insufficient for real-world applications, as they are limited to global translational motion and linear space-invariant blur during the image acquisition process.
Among the spatial-domain-based SR approaches, the non-uniform interpolation method [40, 56] is one of the most intuitive, with relatively low computational complexity; however, its degradation models are applicable only if all LR images have the same blur and noise characteristics. The iterative back-projection (IBP) method [22, 43] can accommodate both global translational and rotational motions; however, the solution might not be unique due to the ill-posed nature of the SR problem, and the selection of some parameters is usually difficult. The projection onto convex sets (POCS) method [17, 42] benefits from an efficient observation model and proper a priori information; its disadvantages, on the other hand, are the lack of a unique solution, a slow convergence rate and a high computational cost. Regularisation-based SR methods include the maximum likelihood (ML) method [54] and the maximum a posteriori (MAP) method [48, 49]. The ML method considers only the relationship between the observed LR images and the original HR image, without a priori information, while the MAP method considers both. An extension of this approach, the hybrid ML/MAP-POCS method [16], was proposed to guarantee a single optimal solution. Spatial-domain-based SR methods handle real-world applications better because they can accommodate both global and non-global motion models, linear space-variant blur and noise during the image acquisition process.
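As a toy illustration of the simulate-and-correct loop at the heart of IBP (real IBP fuses several registered LR frames through a back-projection kernel; this single-frame version only shows the structure):

```python
import numpy as np

def ibp(lr, scale=2, n_iter=10):
    """Toy single-frame iterative back-projection: start from a guess,
    simulate the LR observation (block-average degradation assumed),
    and back-project the observation error onto the HR estimate."""
    hr = np.zeros((lr.shape[0] * scale, lr.shape[1] * scale))
    for _ in range(n_iter):
        # simulate the LR frame the current HR estimate would produce
        sim = hr.reshape(lr.shape[0], scale, lr.shape[1], scale).mean((1, 3))
        err = lr - sim
        # back-project the error (nearest-neighbour upscale as the kernel)
        hr = hr + np.kron(err, np.ones((scale, scale)))
    return hr
```

On convergence the degraded HR estimate reproduces the LR observation, which is exactly the consistency constraint IBP enforces.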
1.3 Waveletbased image reconstruction
In addition to the frequency-domain and spatial-domain efforts, work has also been carried out in the wavelet domain. The wavelet-domain-based SR reconstruction approach is able to exploit both the spatial and frequency domains, integrating properties of both to reconstruct an HR image from observed LR images. The wavelet transform (WT) is an effective tool that divides an image into low- and high-frequency subbands, each of which is examined independently with a resolution matched to its scale [9]. The mechanism behind WT is that features of the image at different scales can be separated, analysed and manipulated: global features can be examined at coarse scales, while local features can be analysed at fine scales [40]. The attractive properties of WT, such as locality, multiresolution and compression, make it effective for analysing real-world signals [7]. The discrete wavelet transform (DWT) is widely employed as a powerful tool in many image and video processing applications to isolate and preserve the high-frequency components of the image. DWT decomposes the given image into one low-frequency subband and three high-frequency subbands using dilations and translations of a single wavelet function called the mother wavelet [18].
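A minimal sketch of this one-level decomposition, using unnormalised Haar average/difference pairs as a simple stand-in for the longer Daubechies filters the paper uses (subband naming conventions vary between authors):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT (assumes even height and width)."""
    a = (img[0::2] + img[1::2]) / 2.0      # low-pass along the vertical axis
    d = (img[0::2] - img[1::2]) / 2.0      # high-pass along the vertical axis
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0   # coarse approximation
    HL = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    LH = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return LL, LH, HL, HH
```

Each subband is half the input size per dimension; for a smooth region almost all energy lands in LL, while edges and noise concentrate in LH, HL and HH, which is what the subsequent thresholding exploits.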
One of the challenges in SR is to preserve or recover the true edges of objects while suppressing noise, which is usually difficult to achieve simultaneously using frequency-based methods because edges and noise have similar responses in the frequency bands. WT offers an alternative that analyses true edges and noise separately. Manipulating the wavelet coefficients in the subbands containing high-pass spatial information is the essential target of wavelet-based methods for the SR reconstruction problem. A common assumption of WT-based methods is that the LR image is the low-pass filtered subband produced by the WT of the HR image [52]. The existing literature on WT-based methods covers both the single-frame case and the multi-frame (video) case. For the multi-frame case, Izadpanahi et al. [25] presented an SR technique using DWT and bicubic interpolation. They applied an illumination enhancement method based on singular value decomposition before the registration of the LR frames to reduce illumination inconsistencies between the frames. Anbarjafari et al. [3] proposed an SR technique for LR video sequences using DWT and the stationary wavelet transform (SWT). However, these methods have limited performance across a variety of noise levels, motion levels, wavelet functions and numbers of frames used.
1.4 Other types of SR approaches
Recently, learning-based SR methods have emerged to further boost the efficiency of SR. These methods consist of two main parts: learning and recovering. In the learning part, a dictionary containing a large number of LR and HR patch pairs is constructed. In the recovering part, the LR frame is divided into overlapping patches, and each patch is matched to its most similar LR patch in the dictionary. The HR frame is obtained by incorporating the corresponding HR patches into the LR frame. Takeda et al. [51] introduced a method based on extending the steering kernel regression framework to 3-D signals for video denoising, spatiotemporal upscaling and SR, without the need for explicit subpixel-accuracy motion estimation; multidimensional kernel regression was applied to generate better results. Yang et al. [65] proposed a sparse-coding method in which the LR and HR patch pairs in the dictionary share the same sparse representation, so the sparse representation of an LR patch can be applied to the HR dictionary to obtain the HR patch. Li et al. [33] introduced an adaptive superpixel-guided autoregressive (AR) model in which keyframes are upsampled by sparse regression while non-keyframes are super-resolved by simultaneously exploiting spatiotemporal correlations. Deep-learning-based SR approaches [6, 8, 13, 14, 27, 28, 37, 38, 50, 59, 61, 62, 63, 64] have been developed in recent years to improve SR results and to better model complex image contents and details. For example, Dong et al. [14] proposed an SR convolutional neural network (SRCNN) to perform sparse reconstruction. Jiang et al. [28] addressed the problem of learning the mapping functions (i.e. projection matrices) by introducing non-local self-similarity and local geometry priors of the training data for fast SR. However, these methods are usually computationally costly and require a large amount of training data.
1.5 Focus of this study
This paper proposes a robust video super-resolution approach based on a combination of the discrete wavelet transform, new edge-directed interpolation and soft-thresholding for increasing the spatial resolution and recovering the noiseless high-frequency details of observed noisy LR video frames with global motion. It integrates the merits of image registration and reconstruction methods in both the frequency domain and the spatial domain. The proposed SR technique is particularly useful when the camera is moving and the observed scene is stationary. One of its motivations is to provide flexibility across a variety of motion levels, noise levels, wavelet functions and numbers of LR frames used, since the existing wavelet-based SR methods have limited performance for these factors and this potential has not yet been fully explored. The performance of the approach is tested on three well-known videos, and its robustness is then evaluated through empirical tests with various motion levels, noise levels, wavelet functions and numbers of frames used; most of the existing wavelet-based SR methods offer limited discussion of these factors.
2 Methods
2.1 Observation model
2.2 Proposed video super resolution technique
Recovering the missing high-frequency details of the given LR frames is the fundamental target of video SR methods. The first step is subpixel image registration, which aims to estimate the motion parameters between the reference frame and each of the neighbouring LR frames. When the camera is moving and the scene is stationary, global motion occurs, including translation and rotation. In this work, Keren’s method [30], one of the most accurate methods for subpixel image registration in the spatial domain, is selected for global motion estimation.
Equation (3) is chosen in the proposed method considering the prospect of automation and the successful application of this equation in similar studies [66].
The rationale for including this thresholding process is that the energy of a signal is often concentrated in a few coefficients, while the energy of noise is spread among all coefficients in the wavelet domain. The nonlinear soft-thresholding therefore tends to retain the few larger coefficients representing the signal and reduces the noise coefficients to zero. The universal threshold is intuitively expected to remove the noise uniformly, since Gaussian noise retains the same variance over different scales in the transform domain [66]. On the other hand, in the spatial domain, once the LR frames are precisely registered by Keren’s method, the registered frames can be combined to reconstruct the missing high-frequency information and produce the low-frequency subband. In this work, the structure-adaptive normalised convolution (SANC) reconstruction method [44] is applied with half the scale factor, α/2. This algorithm fuses the irregularly sampled LR frames to recover the high-frequency details and generate the estimated LL subband, since the LL subband produced by the DWT does not contain any high-frequency information. Finally, the inverse DWT (IDWT) is applied to achieve a super-resolved frame by combining the estimated LL subband and the processed high-frequency subbands.
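The soft-thresholding rule and Donoho's universal threshold described above can be sketched as follows. The paper's Eq. (3) is not reproduced here; the median-absolute-deviation noise estimate below is a common choice from the denoising literature [15], not necessarily the authors' exact formulation:

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Nonlinear soft-thresholding: shrink every coefficient toward zero
    by tau, zeroing those whose magnitude falls below tau."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

def universal_threshold(subband):
    """Universal threshold tau = sigma * sqrt(2 ln N), with the noise
    level sigma estimated from the median absolute deviation of the
    subband coefficients (0.6745 is the Gaussian MAD scaling constant)."""
    sigma = np.median(np.abs(subband)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(subband.size))
```

Because the threshold grows only logarithmically with the number of coefficients, large signal coefficients survive almost unchanged while the many small noise coefficients are suppressed to zero.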
1. Consider four consecutive frames from the LR video;
2. Estimate the motion parameters between the reference frame and each of the other LR frames using the global motion estimation algorithm proposed by Keren;
3. Apply one-level DWT to decompose the input LR reference frame into four frequency subbands;
4. Apply the NEDI method to the LH, HL and HH high-frequency subbands with the scale factor of α;
5. Calculate the threshold τ for each high-frequency subband;
6. Apply the nonlinear soft-thresholding process to each high-frequency subband to create the estimated \( \widehat{\mathrm{LH}} \), \( \widehat{\mathrm{HL}} \) and \( \widehat{\mathrm{HH}} \);
7. In the spatial domain, employ SANC with half the scale factor, α/2, to create the estimated \( \widehat{\mathrm{LL}} \);
8. Apply IDWT using \( \left(\widehat{\mathrm{LL}},\widehat{\mathrm{LH}},\widehat{\mathrm{HL}},\widehat{\mathrm{HH}}\right) \) to produce the output super-resolved frame.
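Steps 3 to 8 can be sketched end-to-end as below. This is only a structural skeleton under stated stand-ins: a Haar transform replaces the paper's wavelet filters, a nearest-neighbour upscale replaces NEDI, and the `ll_estimate` argument stands in for the SANC fusion of the registered frames, so the numerical behaviour is illustrative rather than the authors' method.

```python
import numpy as np

def dwt2(x):
    """One-level Haar analysis (average/difference; even dims assumed)."""
    a, d = (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (d[:, 0::2] + d[:, 1::2]) / 2.0,
            (a[:, 0::2] - a[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def idwt2(LL, LH, HL, HH):
    """Exact inverse of dwt2."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + HL, LL - HL
    d[:, 0::2], d[:, 1::2] = LH + HH, LH - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def upsample2(band):
    """Placeholder for NEDI: plain nearest-neighbour 2x upscale."""
    return np.kron(band, np.ones((2, 2)))

def super_resolve(lr_ref, ll_estimate):
    """Steps 3-8: decompose the LR reference, upscale and soft-threshold
    the detail subbands, substitute the fused LL estimate, inverse DWT."""
    _, LH, HL, HH = dwt2(lr_ref)                               # step 3
    processed = []
    for band in (LH, HL, HH):
        up = upsample2(band)                                   # step 4
        tau = np.median(np.abs(up)) / 0.6745 * np.sqrt(2.0 * np.log(up.size))
        processed.append(soft(up, tau))                        # steps 5-6
    return idwt2(ll_estimate, *processed)                      # steps 7-8
```

Note how the output is twice the size of the subbands fed to `idwt2`: the resolution gain comes from enlarging every subband before the inverse transform, exactly as in steps 4 and 7.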
3 Results
The proposed super-resolution technique was tested on three well-known video sequences, namely “Mother & Daughter”, “Akiyo” and “Foreman”. The files were downloaded from the public database Xiph.org. The proposed algorithm and the other methods for comparison were implemented in Matlab 2015. The original high-resolution test videos were resized to 512 × 512 pixels and are considered the ground truth for evaluating the performance of the proposed approach. The frame rate of the test videos is 30 frames per second, and each video sequence has 100 frames. Based on the observation model, the input LR video frames of size 128 × 128 pixels were created as follows. Each original HR video frame is (1) blurred by a low-pass filter, (2) downsampled in both the vertical and the horizontal directions by a scale factor of 1/4, and (3) corrupted by white Gaussian noise at a given signal-to-noise ratio (SNR).
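A sketch of this degradation pipeline, with a simple box filter standing in for the unspecified low-pass blur (so blurring and 1/4 decimation collapse into one block-averaging step):

```python
import numpy as np

def degrade(hr, scale=4, snr_db=30.0, rng=None):
    """Observation model: low-pass filter the HR frame (box blur assumed),
    decimate by `scale` in both directions, and add white Gaussian noise
    at the requested SNR in dB."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = hr.shape
    # box blur + decimation in one step: average over scale x scale blocks
    lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    signal_power = np.mean(lr ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return lr + rng.normal(0.0, np.sqrt(noise_power), lr.shape)
```

With a 512 × 512 input and `scale=4` this yields the 128 × 128 noisy LR frames used throughout the experiments.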
3.1 Visual and quantitative performance evaluation
This example evaluates the overall performance of the proposed technique with a typical selection of parameters against other methods. Four shifted and rotated LR frames for each original HR frame were generated and downsampled, and Gaussian noise was then added with an SNR value of 30 dB. The motion vectors were randomly produced with a standard deviation (STD) of 2 for shift and 1 for rotation. The wavelet function was chosen as db.9/7.
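The evaluation metrics reported in the tables can be computed as below; `global_ssim` is a single-window simplification of the standard locally windowed SSIM, included only to illustrate the formula:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between ground truth and result."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(x, y, peak=255.0):
    """Single-window SSIM over the whole frame (the standard metric
    averages this expression over small local windows)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2)) /
            ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

The tables report both metrics averaged over all frames of each sequence.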
The averaged PSNR and SSIM values of 100 frames produced from different methods for three tested videos
SR methods         Mother & Daughter    Akiyo           Foreman
                   PSNR    SSIM         PSNR    SSIM    PSNR    SSIM
Nearest            24.80   0.69         24.54   0.75    20.89   0.61
Bicubic            25.92   0.77         25.77   0.82    22.02   0.72
NEDI [32]          24.88   0.75         24.89   0.81    21.19   0.71
DASR [2]           23.76   0.62         23.83   0.69    20.16   0.55
DWT-Dif [12]       22.57   0.55         22.58   0.62    18.97   0.47
DWT-SWT [11]       23.10   0.57         23.09   0.65    19.83   0.50
Vandewalle-SANC    22.05   0.75         25.18   0.81    19.96   0.72
Keren-SANC         27.14   0.82         27.51   0.84    20.41   0.70
Proposed method    31.48   0.90         30.57   0.91    23.88   0.84
3.2 Performance for a variety of noise levels
To demonstrate the robustness of the proposed method against noise, benefiting from the adaptive thresholding process, four shifted and rotated LR frames for each original HR frame were generated and downsampled, and the motion vectors were randomly produced with a standard deviation of 2 for shift and 1 for rotation. The wavelet function was chosen as db.9/7. The SNR was decreased from 50 dB to 20 dB in 5 dB steps. The first 10 frames of the Akiyo video were tested by the proposed method and the other methods, and the results were averaged.
The averaged PSNR results of 10 frames from the Akiyo test video for each noise level, ranging from 20 dB to 50 dB in 5 dB steps
SNR    Nearest  Bicubic  Keren-SANC  Proposed method  Increment
50 dB  25.20  26.50  28.18  31.13  10.47% 
45 dB  23.86  24.85  28.46  30.82  8.29% 
40 dB  24.98  26.29  30.05  31.64  5.29% 
35 dB  24.18  25.20  30.49  31.30  2.65% 
30 dB  23.84  24.86  26.50  30.37  14.60% 
25 dB  23.69  24.85  24.80  30.08  21.29% 
20 dB  23.42  24.93  24.31  28.63  17.78% 
3.3 Performance for a variety of wavelet functions
The averaged PSNR and SSIM values of 10 frames produced by the proposed technique for different wavelet functions
Wavelet functions  PSNR   SSIM
Db1  28.18  0.88 
Db2  29.95  0.90 
Sym16  30.83  0.92 
Sym20  30.89  0.92 
Coif1  30.39  0.91 
Coif2  26.78  0.85 
Bior4.4  30.72  0.92 
Bior5.5  30.20  0.90 
Bior6.8  30.81  0.92 
3.4 Performance for a variety of motion levels
This section discusses the effect of the motion level (shift and rotation) on the performance of the proposed algorithm. In this experiment, the shift in both the horizontal and vertical directions and the rotation angle were randomly selected, with the standard deviation (STD) changing from 1 to 4 when generating the input LR frames from an HR frame. Four shifted and rotated LR frames for each original HR frame were generated and downsampled. The wavelet function was chosen as db.9/7, and the noise level was fixed at 30 dB.
The averaged PSNR and SSIM values of 10 frames produced by the proposed technique with different motion levels
STD of Shift   STD of Rotation = 1   STD of Rotation = 2   STD of Rotation = 4
               PSNR    SSIM          PSNR    SSIM          PSNR    SSIM
1              30.74   0.92          29.44   0.90          28.09   0.87
2              29.16   0.90          29.00   0.89          27.97   0.88
4              27.35   0.86          27.13   0.86          26.90   0.85
3.5 Performance for different numbers of frames
The averaged PSNR and SSIM values of 10 frames produced by the proposed technique with different numbers of sampled frames
Number of sampled frames  PSNR   SSIM
4  29.81  0.91 
8  30.06  0.92 
16  29.98  0.92 
32  30.07  0.92 
4 Conclusions
A robust video super-resolution reconstruction approach combining the discrete wavelet transform, new edge-directed interpolation and nonlinear soft-thresholding has been proposed in this paper for noisy LR video sequences with global motion, to recover the noiseless high-frequency details and increase the spatial resolution; it integrates properties from image registration and reconstruction methods. Firstly, the iterative planar motion estimation algorithm by Keren is used to estimate the motion parameters between a reference frame and its neighbouring LR frames in the spatial domain. The registered frames are combined by the SANC reconstruction method to output the estimated low-frequency subband. Secondly, the DWT is employed to decompose each input LR reference frame into four frequency subbands in the frequency domain. The NEDI is employed to process each of the three high-frequency subbands, which are then filtered using the adaptive thresholding process to preserve the true edges and reduce the noise in the estimated high-frequency subbands. Finally, by combining the estimated low-frequency subband and the three high-frequency subbands, a super-resolved frame is recovered through the inverse DWT.
Subjective results show that this approach better preserves the edges and removes potential noise in the estimated high-frequency subbands, since a direct interpolation would blur the areas around edges. Three well-known videos (100 frames each) have been tested, and the quantitative results show the superior performance of the proposed method. The proposed method tops the averaged PSNR and SSIM values for the three videos (PSNR of 31.48 dB, 30.57 dB and 23.88 dB, respectively; SSIM of 0.90, 0.91 and 0.84), and the averaged increment over Keren-SANC is 16%, 11% and 17%, respectively. The performance against noise has also been analysed. Analysis of the contribution of each component clearly demonstrates that the proposed method improves the quality of both the background and the true edges, whereas other methods usually achieve only one of the two.

The proposed technique produced averaged PSNR increments of 10%, 8%, 5% and 3% for images corrupted by low-level noise with SNR values of 50 dB, 45 dB, 40 dB and 35 dB, respectively. It produced averaged PSNR increments of 15%, 21% and 18% for images corrupted by high-level noise with SNR values of 30 dB, 25 dB and 20 dB, respectively.

The proposed technique performs well using other wavelet functions apart from db.9/7, sometimes even better than db.9/7, although the difference between them is not significant.

The performance of the proposed method is affected by the level of motion. From the smallest to the largest motions considered, decreases of 12% and 7% in the PSNR and SSIM values, respectively, were observed.

If the motion is simple, increasing the number of sampled frames yields limited improvement because little extra information is available. If the motion is complex and corrupted by a high level of noise, a significant improvement is expected when using more frames.
A limitation of this method is that it can only be applied to video sequences with global motion. However, it could be extended to local motion by dividing the video frame into multiple blocks and applying the method to each block.
References
 1.Allebach J, Wong PW Edgedirected interpolation. Proceedings of 3rd IEEE International Conference on Image Processing 3:707–710Google Scholar
 2.Anbarjafari G, Demirel H (2010) Image super resolution based on interpolation of wavelet domain high frequency subbands and the spatial domain input image. ETRI J 32(3):390–394CrossRefGoogle Scholar
 3.Anbarjafari G, Izadpanahi S, Demirel H (2015) Video resolution enhancement by using discrete and stationary wavelet transforms with illumination compensation. Signal, Image Video Process 9:87–92CrossRefGoogle Scholar
 4.Antonini M, Barlaud M, Mathieu P, Daubechies I (1992) Image coding using wavelet transform. IEEE Trans Image Process 1(2):205–220CrossRefGoogle Scholar
 5.Bhandari AK, Soni V, Kumar A, Singh GK (Jul. 2014) Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT–SVD. ISA Trans 53(4):1286–1296CrossRefGoogle Scholar
 6.Chen Y, Pock T (2017) Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Trans Pattern Anal Mach Intell 39(6):1256–1272CrossRefGoogle Scholar
 7.Crouse MS, Nowak RD, Baraniuk RG (1998) Waveletbased statistical signal processing using hidden markov. IEEE Trans Signal Process 46(4):886–902MathSciNetCrossRefGoogle Scholar
 8.Cui Z, Chang H, Shan S, Zhong B, Chen X (2014) Deep network cascade for image superresolution. ECCV 5:49–64Google Scholar
 9.Daubechies I (1992) Ten lectures on wavelets. Society for Industrial Applied Mathematics, PhiladelphiaCrossRefMATHGoogle Scholar
 10.Demirel H, Anbarjafari G (2010) Satellite image resolution enhancement using complex wavelet transform. IEEE Geosci Remote Sens Lett 7(1):123–126CrossRefGoogle Scholar
 11.Demirel H, Anbarjafari G (2011) IMAGE resolution enhancement by using discrete and stationary wavelet decomposition. IEEE Trans Image Process 20(5):1458–1460MathSciNetCrossRefMATHGoogle Scholar
 12.Demirel H, Anbarjafari G (2011) Discrete wavelet transformbased satellite image resolution enhancement. IEEE Trans Geosci Remote Sens 49(6):1997–2004CrossRefMATHGoogle Scholar
 13.Dong C, Loy CC, He K, Tang X (2014) Learning a deep convolutional network for image superresolution. ECCV 4:184–199Google Scholar
 14.Dong C, Loy CC, He K, Tang X (2016) Image superresolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell 38(2):295–307CrossRefGoogle Scholar
 15.Donoho DL (1995) Denoising by softthresholding. IEEE Trans Inf Theory 41(3):613–627MathSciNetCrossRefMATHGoogle Scholar
 16.Elad M, Feuer A (1997) Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans Image Process 6(12):1646–1658CrossRefGoogle Scholar
 17.Eren PE, Sezan MI, Tekalp AM (1997) Robust, objectbased highresolution image reconstruction from lowresolution video. IEEE Trans Image Process 6(10):1446–1451CrossRefGoogle Scholar
 18.Gonzalez RC, Woods RE (2007) Digital image processing. Prentice Hall, Englewood CliffsGoogle Scholar
 19.Greenspan H (2008) Superresolution in medical imaging. Comput J 52(1):43–63CrossRefGoogle Scholar
 20.Huang SC (2011) An advanced motion detection algorithm with video quality analysis for video surveillance images. IEEE Transactions on Circuits and Systems for video Technology 20(1):1–13CrossRefGoogle Scholar
 21.Iqbal MZ, Ghafoor A, Siddiqui AM (2013) Satellite image resolution enhancement using dualtree complex wavelet transform and nonlocal means. IEEE Geosci Remote Sens Lett 10(3):451–455CrossRefGoogle Scholar
 22.Irani M, Peleg S (1991) Improving resolution by image registration. CVGIP Graph Model Image Process 53(3):231–239CrossRefGoogle Scholar
 23.Izadpanahi S, Demirel H (2012) Multiframe super resolution using edge directed interpolation and complex wavelet transform. In: IET Conference on Image Processing (IPR 2012), pp A9–A9. https://doi.org/10.1049/cp.2012.0447
 24.Izadpanahi S, Demirel H (2013) Motion based video super resolution using edge directed interpolation and complex wavelet transform. Signal Process 93(7):2076–2086CrossRefGoogle Scholar
 25.Izadpanahi S, Ozcinar C (2013) DWT based resolution enhancement of video sequences. no. 2000Google Scholar
 26.Jagadeesh P, Pragatheeswaran J (2011) Image resolution enhancement based on edge directed interpolation using dual tree — Complex wavelet transform. In: 2011 International Conference on Recent Trends in Information Technology (ICRTIT), pp 759–763. https://doi.org/10.1109/ICRTIT.2011.5972260
 27.Jiang J, Hu R, Han Z, Lu T (2014) Efficient single image superresolution via graphconstrained least squares regression. Multimed Tools Appl 72(3):2573–2596. https://doi.org/10.1007/s1104201315679
 28.Jiang J, Ma X, Chen C, Lu T, Wang Z, Ma J (2017) Single Image SuperResolution via Locally Regularized Anchored Neighborhood Regression and Nonlocal Means. IEEE Trans Multimed 19(1):15–26CrossRefGoogle Scholar
 29.Kamenicky J, Bartos M, Flusser J, Mahdian B, Kotera J, Novozamsky A et al (2016) PIZZARO: Forensic analysis and restoration of image and video data. Forensic Sci Int 246:153–166CrossRefGoogle Scholar
 30.Keren D, Peleg S, Brada R (1988) Image sequence enhancement using subpixel displacements. In: Proceedings CVPR ’88: The Computer Society Conference on Computer Vision and Pattern Recognition, pp 742–746. https://doi.org/10.1109/CVPR.1988.196317
 31.Kim SP, Bose NK, Valenzuela HM (1990) Recursive reconstruction of high resolution image from noisy undersampled multiframes. IEEE Trans Acoust 38(6):1013–1027CrossRefGoogle Scholar
 32.Li X, Orchard MT (2001) New edgedirected interpolation. IEEE Trans Image Process 10(10):1521–1527CrossRefGoogle Scholar
 33.Li K, Zhu Y, Yang J, Jiang J (2016) Video superresolution using an adaptive superpixelguided autoregressive model. Pattern Recogn 51:59–71CrossRefGoogle Scholar
 34.Liao R, Tao X, Li R, Ma Z, Jia J (2015) Video SuperResolution via Deep DraftEnsemble Learning. In: 2015 I.E. International Conference on Computer Vision (ICCV), pp 531–539. https://doi.org/10.1109/ICCV.2015.68
 35.Liu C, Sun D (2014) On Bayesian adaptive video super resolution. IEEE Trans Pattern Anal Mach Intell 36(2):346–360CrossRefGoogle Scholar
 36.Lucchese L, Cortelazzo GM (2000) A noiserobust frequency domain technique for estimating planar rototranslations. IEEE Trans Signal Process 48(6):1769–1786CrossRefGoogle Scholar
 37.Ma J, Zhao J, Tian J, Yuille AL, Tu Z (2014) Robust point matching via vector field consensus. IEEE Trans Image Process 23(4):1706–1721MathSciNetCrossRefMATHGoogle Scholar
 38.Ma J, Qiu W, Zhao J, Ma Y, Yuille AL, Tu Z (2015) Robust L2E estimation of transformation for nonrigid registration. IEEE Trans Signal Process 63(5):1115–1129MathSciNetCrossRefGoogle Scholar
 39.Marcel B, Briot M, Murrieta R (1997) Calcul de Translation et Rotation par la Transformation de Fourier. Traitement du Signal 14(2):135–149MATHGoogle Scholar
40. Nguyen N, Milanfar P (2000) A wavelet-based interpolation-restoration method for superresolution (wavelet superresolution). Circuits Syst Signal Process 19(4):321–338
41. Park SC, Park MK, Kang MG (2003) Super-resolution image reconstruction: a technical overview. IEEE Signal Process Mag 20(3):21–36
42. Patti AJ, Sezan MI, Tekalp AM (1997) Superresolution video reconstruction with arbitrary sampling lattices and nonzero aperture time. IEEE Trans Image Process 6(8):1064–1076
43. Peleg S, Keren D, Schweitzer L (1987) Improving image resolution using subpixel motion. Pattern Recogn Lett 5(3):223–226
44. Pham TQ, Van Vliet LJ, Schutte K (2006) Robust fusion of irregularly sampled data using adaptive normalized convolution. EURASIP J Appl Signal Process 2006:1–12
45. Protter M, Elad M, Takeda H, Milanfar P (2009) Generalizing the non-local-means to super-resolution reconstruction. IEEE Trans Image Process 18(1):1958–1975
46. Robinson FS, Chiu SJ, Lo JY, Toth CA, Izatt JA (2010) Novel applications of super-resolution in medical imaging. In: Milanfar P (ed) Super-Resolution Imaging. CRC Press, Boca Raton, pp 383–412
47. Satiro J, Nasrollahi K, Correia PL, Moeslund TB (2015) Super-resolution of facial images in forensics scenarios. In: 2015 International Conference on Image Processing Theory, Tools and Applications (IPTA), pp 55–60. https://doi.org/10.1109/IPTA.2015.7367096
48. Schultz RR, Stevenson RL (1996) Extraction of high-resolution frames from video sequences. IEEE Trans Image Process 5(6):996–1011
49. Schultz RR, Stevenson RL (1994) A Bayesian approach to image expansion for improved definition. IEEE Trans Image Process 3(3):233–242
50. Shi W, Caballero J, Huszar F, Totz J, Aitken AP, Bishop R, Rueckert D, Wang Z (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 1874–1883. https://doi.org/10.1109/CVPR.2016.207
51. Takeda H, Milanfar P, Protter M, Elad M (2009) Super-resolution without explicit subpixel motion estimation. IEEE Trans Image Process 18(9):1958–1975
52. Temizel A (2007) Image resolution enhancement using wavelet domain hidden Markov tree and coefficient sign estimation. Proc Int Conf Image Process 5:V-381–V-384
53. Tian J, Ma KK (2011) A survey on super-resolution imaging. Signal Image Video Process 5(3):329–342
54. Tom BC, Katsaggelos AK, Galatsanos NP (1994) Reconstruction of a high resolution image from registration and restoration of low resolution images. In: Proceedings International Conference on Image Processing (ICIP), vol 3, pp 553–557. https://doi.org/10.1109/ICIP.1994.413745
55. Tsai RY, Huang TS (1984) Multiframe image restoration and registration. In: Advances in Computer Vision and Image Processing, vol 1. JAI Press, London, pp 317–339
56. Ur H, Gross D (1992) Improved resolution from subpixel shifted pictures. CVGIP Graph Model Image Process 54(2):181–186
57. Vandewalle P, Süsstrunk S, Vetterli M (2006) A frequency domain approach to registration of aliased images with application to super-resolution. EURASIP J Appl Signal Process 2006:1–14
58. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612. https://doi.org/10.1109/TIP.2003.819861
59. Wang Z, Yang Y, Wang Z, Chang S, Han W, Yang J, Huang TS (2015) Self-tuned deep super resolution. In: CVPR Workshops, pp 1–8
60. Witwit W, Zhao Y, Jenkins K, Zhao Y (2016) An optimal factor analysis approach to improve the wavelet-based image resolution enhancement techniques. Global J Comp Sci Technol 16(3F):1–11
61. Yan C, Zhang Y, Xu J, Dai F, Li L, Dai Q, Wu F (2014) A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process Lett 21(5):573–576
62. Yan C, Zhang Y, Xu J, Dai F, Zhang J, Dai Q, Wu F (2014) Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans Circuits Syst Video Technol 24(12):2077–2089
63. Yan C, Xie H, Yang D, Yin J, Zhang Y, Dai Q (2018) Supervised hash coding with deep neural network for environment perception of intelligent vehicles. IEEE Trans Intell Transp Syst 19(1):284–295. https://doi.org/10.1109/TITS.2017.2749965
64. Yan C, Xie H, Liu S, Yin J, Zhang Y, Dai Q (2018) Effective Uyghur language text detection in complex background images for traffic prompt identification. IEEE Trans Intell Transp Syst 19(1):220–229. https://doi.org/10.1109/TITS.2017.2749977
65. Yang J, Wright J, Huang TS, Ma Y (2010) Image super-resolution via sparse representation. IEEE Trans Image Process 19(11):2861–2873
66. Zhang XP (2001) Thresholding neural network for adaptive noise reduction. IEEE Trans Neural Netw 12(3):567–584
67. Zhang L, Zhang H, Shen H, Li P (2010) A super-resolution reconstruction algorithm for surveillance images. Signal Process 90(3):848–859
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.