
Rain Streaks and Snowflakes Removal for Video Sequences via Motion Compensation and Matrix Completion

Abstract

Image and video deraining aims to reconstruct the original scene so that human vision and computer vision systems can better identify objects and details in images and video sequences. This paper proposes a three-step method to detect and remove rain streaks, and even snowflakes, from the great majority of video sequences, using motion compensation and low-rank matrix completion. First, we adopt optical flow estimation between consecutive frames to detect the motion of rain streaks. We then employ online dictionary learning for sparse representation and an SVM classifier to eliminate parts that are not rain streaks. Finally, we reconstruct the video sequence using low-rank matrix completion techniques. In particular, by introducing an image dehazing network (GCANet) into our proposed method, the dense rain accumulation and blur caused by heavy rain can be handled well. The experimental results demonstrate that the proposed algorithm performs qualitatively and quantitatively better on several image quality metrics, boosting the best published PSNR by 4.47% and 6.05% on two static video sequences and by 12.13% on a more challenging dynamic video sequence. In addition, to demonstrate the generality of the proposed method, we further apply it to two challenging tasks, on which it also achieves state-of-the-art performance.

Introduction

Outdoor vision-based systems rely increasingly on computer vision and are widely applied in intelligent transportation, public safety, sports performance analysis, etc. However, images and videos are vulnerable to adverse weather conditions [18], which not only greatly reduce image quality but also interfere with nearby pixels [24] and degrade the effectiveness of computer vision algorithms in navigation [5], object detection [16], object tracking [10, 57], object recognition [40], scene analysis [23, 45], person re-identification [3], event detection [58], video surveillance [5] and other fields.

Based on their physical characteristics, adverse weather conditions can be generally classified into steady systems (fog, sandstorms, haze, etc.) and dynamic systems (rain, snow, hail, etc.) [41]. In steady weather conditions, droplets are extremely tiny \((1{-}10\,\,\upmu \hbox {m})\) and cannot be detected individually by the camera. In contrast, rain streaks consist of particles that are 1000 times larger and relatively fast-moving, so the impact of dynamic weather conditions is much more complex [18]. Since rain and snow are prevalent in daily life, image and video deraining have become necessary tasks in signal and image processing.

Fig. 1

Different intensity appearances produced by falling raindrops at different distance ranges and imaging configurations, and our results

As illustrated in Fig. 1, the appearance of rain streaks and snowflakes may differ significantly depending on scene distance and imaging configuration [17]. Based on their visual effects, Chen et al. [9] separate rain streaks into three types: occluding rain (streaks that fall within the depth of field of the camera and occlude the background); veiling rain (streaks distributed nearer than the current depth of field, which cover a larger field of view and on which the camera cannot focus); and accumulated rain (streaks distributed farther than the current depth of field, on which the camera cannot focus and whose accumulation resembles haze or mist). In this paper, we propose a rain and snow removal method that handles not only the visual degradation of occluding rain, veiling rain and snowflakes, but even the haze- and mist-like effects caused by accumulated rain.

For each pixel, the temporal relationship of the corresponding scene content is the only reliable information over time. However, camera motion and moving objects may influence the experimental results [9]. Earlier works have attempted to remove rain streaks from video sequences under static or moving backgrounds with static and/or moving objects [52]:

  • These video sequences can be captured with a static camera [16, 30, 37, 51, 57] or a moving camera, using physical characteristics [5, 17] or global frame alignment [47, 50]. Several previous works rely on the following model:

    $$\begin{aligned}{ \varvec{\mathcal {V}}}={\varvec{\mathcal {B}}}+{\varvec{\mathcal {R}}} \end{aligned}$$
    (1)

    where \({\varvec{\mathcal {V}}}\) is the input rainy video frame layer, \({\varvec{\mathcal {B}}}\) is the original static or moving background layer, and \({\varvec{\mathcal {R}}}\) is the linear-shaped rain streaks layer.

  • Earlier works test various properties of candidate rain pixels and focus on physical characteristics, e.g., photometric appearance [5, 16, 17], brightness [18], chromatic consistency [37, 61], and spatio-temporal configurations of rain streaks [51]. With moving objects, the rainy video frame \({\varvec{\mathcal {V}}}\) can be modeled relying on accurate segmentation of foreground regions:

    $$\begin{aligned} {\varvec{\mathcal {V}}}={\varvec{\mathcal {B}}}+{\varvec{\mathcal {O}}}+{\varvec{\mathcal {R}}} \end{aligned}$$
    (2)

    where \({\varvec{\mathcal {O}}}\) is the moving objects layer which can be separated from the original background.

This paper considers that both a moving camera and moving objects affect the temporal content, and proposes a method that uses motion compensation to align scene content against misalignment and a matrix completion algorithm to remove rain streaks and snow from video sequences. Figure 1 also shows three example results of our proposed method on rainy video sequences in three different situations.

To summarize, the main contributions of our work are as follows:

  1.

    Due to illumination variation between consecutive frames or incorrect detection, the initial rain map obtained by rain streak detection includes some outliers. We adopt the sparse representation technique and an SVM classifier to separate rain streaks from outliers. Furthermore, we create structuring elements of rain streaks to dilate the refined initial rain map so that rain streaks are detected more clearly.

  2.

    Instead of the traditional image-based approach, this work takes advantage of the temporal information of video sequences. A video sequence may contain several moving objects or be captured by a dynamic camera. Thus, we combine predictions from forward and backward frames to find similar blocks using a block matching method, and adopt low-rank matrix completion techniques to model the background and restore rainy video sequences.

A shorter version of this work was presented in [63]. This paper is an extended and revised journal version that presents more technical details, including additional experiments and quantitative performance comparisons to verify the effectiveness of our proposed method.

The remainder of this paper is organized as follows. Section “Related Works” surveys recent literature on rain streak removal. In section “Proposed Method”, we introduce our proposed method in detail. Experimental results and comprehensive performance evaluations of our method against previous methods are presented in section “Experiments Results and Analysis”. Section “Conclusions and Future Directions” concludes the paper with future research directions.

Related Works

In recent years, researchers have focused increasingly on the visibility impact of adverse weather conditions on images and video sequences, and have proposed several methods as pre-processing to increase the accuracy of computer vision algorithms. Based on the type of input, rain removal methods can be classified into image-based and video-based methods. In this section, we briefly review the numerous methods proposed to improve the visibility of images and videos captured under rainy and snowy weather conditions.

Image-Based Methods

Image-based methods rely only on information from the current frame and can usually be modeled as a layer separation problem as in Eq. (1).

Kang et al. [26] separated single rainy images into low-frequency and high-frequency parts using a bilateral filter, and decomposed the high-frequency elements into rain streak components and non-rain components based on morphological component analysis (MCA). Luo et al. [38] proposed a dictionary learning based single image rain removal method that sparsely approximates patches of the rainy image layer and de-rained image layer using highly discriminative codes over a learned dictionary with strong mutual exclusivity. Li et al. [32, 33] modeled the rain streak layer and background layer using Gaussian mixture models (GMM) learned on small patches, which can accommodate complex backgrounds and multiple scales of rain streaks.

Recently, research on rain streak removal has mainly focused on image-based approaches that exploit deep neural networks. Each of these networks requires a collection of paired rainy and rain-free images to learn its parameters.

Eigen et al. [11] first utilized a CNN to learn to map corrupted rainy image patches to clean ones, targeting the characteristic appearance of dirt or raindrops on window glass or camera lenses. Wang et al. [54] constructed a large-scale dataset of rain/rain-free image pairs covering a wide range of natural rain scenes and proposed a SPatial Attentive Network (SPANet) that removes rain streaks in a local-to-global manner. Since most deraining works concentrate on occluding and veiling rain streaks and cannot adequately handle accumulated rain, Li et al. [31] proposed a two-stage network: stage I is based on the physical characteristics of the atmosphere, and stage II adopts a depth-guided GAN to restore the background details that stage I fails to retrieve.

Though these image-based methods can handle rain removal for video sequences frame by frame, they forgo temporal information, such as the relationship between different positions within the same frame or the same position across different frames, which generally makes video-based methods work better than image-based ones [30]. Furthermore, these deep neural network architectures still show limitations for opaque, thick, and accumulated heavy rain streaks.

Fig. 2

An overview of the rain streaks removal algorithm

Video-Based Methods

Video-based methods can take advantage of the differences between consecutive frames in a video sequence. In the past few years, many studies have attempted to remove rain streaks from video sequences. These methods can handle rainy video sequences with static and/or moving backgrounds, captured with either a static camera [16, 30, 37, 51, 57] or a moving camera [5, 17].

Garg et al. [16] first proposed a simple and efficient algorithm for detecting and removing rain streaks from video sequences, based on the observation that rain streak pixels tend to be much brighter than the background pixels in every frame. Tripathi et al. [51] utilized spatio-temporal properties to separate rain pixels from non-rain pixels. Wei et al. [57] proposed a method modeled as Eq. (2) that takes object motion into account and formulates rain as a patch-based mixture of Gaussians (P-MoG). Jiang et al. [24] proposed FastDeRain, which fully considers the discriminative characteristics of rain streaks and the gradient-domain differences between clean and rainy videos.

Deep learning based methods have also been investigated for video rain removal in recent years. Chen et al. [9] proposed a convolutional neural network (CNN) framework for rain streak removal in video sequences, which can handle rain occlusion interference and fast camera motion. Liu et al. [35] proposed a Dynamic Routing Residue Recurrent Network (D3R-Net) that extracts spatial features for motion segmentation and describes rain streaks and occluded parts. Moreover, they [36] considered the low transmittance of rain streaks and constructed a Joint Recurrent Rain Removal and Reconstruction Network (J4R-Net) based on the temporal coherence of background detail reconstruction.

Nevertheless, works on rain streak removal for video sequences with moving backgrounds and/or moving objects that exploit the temporal relationship between consecutive frames are still limited. Deep learning based methods must collect adequate training and testing samples in advance, which consumes a large amount of manpower, material resources, and time. Moreover, most of these methods are trained on synthetic rainy images, making it challenging to handle real-world video sequences efficiently. Furthermore, removing raindrops adhered to window glass, and the mist and water splashes caused by torrential rain, remain significant challenges.

Proposed Method

We propose a method that uses motion compensation and a matrix completion algorithm to remove rain streaks and snowflakes from video sequences. Unlike the previous models in Eqs. (1) and (2), we consider a more complete model, named patch-based block matrix completion (P-BMC):

$$\begin{aligned} {\varvec{\mathcal {V}}}={\widetilde{\varvec{{\mathcal {R}}}}} + {\widetilde{\varvec{{\mathcal {B}}}}} \end{aligned}$$
(3)

where \({\varvec{\mathcal {V}}}\) denotes the input rainy video frame layer, \({\widetilde{\varvec{\mathcal {R}}}}\) is an estimated refined rain streak layer, reviewed in detail in section “Rain Map Refinement”, and \({\widetilde{\varvec{\mathcal {B}}}}\) is an estimated background layer composed of the resultant filled-in matrix, reviewed in detail in section “Original Scene Reconstruction”. To ensure scene consistency across consecutive frames, we apply temporal correlation alignment to video sequences captured by a moving camera and/or containing moving objects. Thus, it is not necessary to separate a moving objects layer as in Eq. (2). Furthermore, a state-of-the-art dehazing and deraining method [8] supplements our proposed method [63] for the challenge of pristine scene restoration, especially for accumulated rain. Figure 2 shows an overview of the proposed algorithm.

Rain Streaks Detection

Previous work [17] observed that rain streaks appear randomly and that a pixel becomes brighter than its original value when a streak passes through it between consecutive frames. Thus we can detect rain streak pixels as those with larger values in the current frame than in the adjacent frames.
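
As a minimal sketch of this brightness test (assuming three temporally aligned grayscale frames as NumPy arrays; the margin `min_delta` is an illustrative parameter not specified in the paper):

```python
import numpy as np

def candidate_rain_pixels(prev_f, cur_f, next_f, min_delta=3.0):
    """Flag pixels that are brighter in the current frame than in both
    temporal neighbors, a property rain streak pixels tend to satisfy."""
    prev_f, cur_f, next_f = (a.astype(np.float32) for a in (prev_f, cur_f, next_f))
    return (cur_f - prev_f > min_delta) & (cur_f - next_f > min_delta)
```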

Fig. 3

Initial rain map detection on the 7th frame in the “Waterfall” video sequence

A variety of moving object detection models have been developed to estimate motion in video sequences for rain streak detection by subtracting corresponding pixel values of adjacent frames. However, in complex real-world scenes, motion blur, occlusion, shadows, reflections, and compression artifacts lead to missed detections, and both camera motion and moving objects cause scene content to be misaligned, corrupting the temporal information [9]. Outdoor surveillance video sequences, in particular, often contain dynamic objects in the background or are captured by moving cameras, while general background extraction algorithms can only be applied to still cameras. Previous works have researched modeling and estimating dense optical flow fields. Thus, to compensate for these divergences between consecutive frames, we adopt Liu’s layer-wise optical flow estimation method [34], based on the Horn-Schunck algorithm [21] and a coarse-to-fine warping process, to estimate a dense optical flow field between frames by warping the previous frame onto the current frame.

Figure 3a and b show the current 8th frame and the previous 7th frame of the captured “Waterfall” rainy video sequence; (c) is the warped previous frame \({\widetilde{I}_{7}}\) obtained by motion compensation over the optical flow field; and (d) is the difference image between the current frame and the warped previous frame, called the initial rain map layer \({\varvec{\mathcal {R}}}\), which roughly estimates the positions of rain streaks. To compare with the optical flow estimation, we also extract the absolute differences between the pixels of the current 8th frame and the previous 7th frame using the Absdiff function, which extracts just the pixels of moving objects; the result is shown in (e). Clearly, (d) extracts rain streaks more effectively than (e). Since pixels containing rain streaks have larger luminance values than pixels without them, we use only the luminance components of rain streaks in the initial rain map layer \({\varvec{\mathcal {R}}}\).
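
The paper adopts Liu’s layer-wise estimator; the sketch below substitutes OpenCV’s Farneback flow as a stand-in to illustrate the same warp-and-subtract procedure, assuming 8-bit grayscale frames (all numeric parameters are illustrative):

```python
import cv2
import numpy as np

def initial_rain_map(prev_gray, cur_gray):
    """Warp the previous frame onto the current one and keep the positive
    luminance differences as the initial rain map."""
    # Backward flow (current -> previous) so each current-frame pixel knows
    # where to sample the previous frame.
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped_prev = cv2.remap(prev_gray, map_x, map_y, cv2.INTER_LINEAR)
    # Rain pixels are brighter than the motion-compensated background,
    # so keep only the positive differences.
    diff = cur_gray.astype(np.float32) - warped_prev.astype(np.float32)
    return np.clip(diff, 0, None)
```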

Rain Map Refinement

Some areas contain outliers that are not rain streaks, or overlapping areas, as shown in Fig. 3d, caused by warping errors or abrupt brightness changes between adjacent frames. Since rain streaks and outliers of different sizes overlap each other, not all rain streaks can be effectively detected. We utilize the directionality and shape features of rain streaks to refine the initial rain map layer \({\varvec{\mathcal {R}}}\). Rain streaks generally have elliptical or linear shapes, and their direction is mainly vertical or slightly deviated from vertical [4]. Conversely, erroneously detected outliers typically have arbitrary shapes and random directions of movement. Thus, we can separate overlapped outliers from valid rain streaks, and further detect outliers by comparing the inclination angle of detected ellipses with the vertical direction.

To divide outliers from valid rain streaks, Kang et al. [26] decomposed a single rainy image into low-frequency and high-frequency basis vectors, and applied a Morphological Component Analysis (MCA) based dictionary learning model to extract the rain streak layer in the high-frequency region. Following this idea, we decompose outliers and valid rain streaks into their building blocks, considering that there is morphological diversity between outlier and rain streak components, which can be sparsely represented by different dictionaries. We adopt MCA [15, 19] to decompose the rain streaks into several basis vectors and reconstruct the signal by employing the rain streak basis vectors only, which can separate features in images that present different morphological aspects [27].

Let \(d_{1},d_{2},\ldots ,d_{x}\in {\mathbb {R}^{y}}\) be x basis vectors and the matrix \({\varvec{D}}\in {\mathbb {R}^{y\times x}}\;(y<x)\) be the over-complete dictionary whose columns are the basis vectors. \({\varvec{R}}\in {\mathbb {R}^{y\times m}}\) represents m patches of the initial rain map, with y pixels in each column.

$$\begin{aligned} {\varvec{R}} = {\varvec{D}}{\varvec{\alpha }} \end{aligned}$$
(4)

where \({\varvec{\alpha }}\) is the coefficient matrix over \({\varvec{D}}\). Equation (4) is an under-determined linear problem, and the sparse representation method demands that the obtained solution be sparse [62].

To separate overlapped outliers (marked with blue boxes in Fig. 4a) from valid rain streaks, we assume that \({\varvec{\alpha }}\) is sparse. The following sparse representation formulation [12], an \(l_{1}\)-norm minimization written as an unconstrained optimization problem via a Lagrangian multiplier, is solved for each patch of the initial rain map to find the optimal coefficient matrix \({\varvec{\hat{\alpha }}}\):

$$\begin{aligned} {\varvec{\hat{\alpha }}} = \mathop {\arg \min }_{{\varvec{\alpha }}\in {\mathbb {R}^{x\times m}}}{\frac{1}{2}}||{\varvec{R}}-{\varvec{D}}{\varvec{\alpha }}||_{2}^{2}+\lambda ||{\varvec{\alpha }}||_{1} \end{aligned}$$
(5)

where \(\lambda\) is a regularization parameter set to 0.1. We employ K-SVD for dictionary learning and orthogonal matching pursuit (OMP) for sparse coding to find \({\varvec{\hat{\alpha }}}\). The OMP algorithm provides good approximations of basis vectors and more stable results than other matching pursuit methods [44].
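
As a rough sketch of this step, using scikit-learn’s mini-batch dictionary learner in place of K-SVD (the atom count and sparsity level are illustrative choices, not values from the paper):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

def learn_dictionary_and_codes(R, n_atoms=512, n_nonzero=8):
    """R: (y, m) matrix with one vectorized 16x16 rain-map patch per column."""
    # scikit-learn expects samples as rows, hence the transposes; its
    # learned atoms are unit-norm, as orthogonal_mp assumes.
    learner = MiniBatchDictionaryLearning(n_components=n_atoms)
    learner.fit(R.T)
    D = learner.components_.T                               # (y, n_atoms) dictionary
    alpha = orthogonal_mp(D, R, n_nonzero_coefs=n_nonzero)  # (n_atoms, m) sparse codes
    return D, alpha
```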

The established dictionary can be separated into rain streak basis vectors and outlier basis vectors; we use only the rain streak ones to reconstruct non-rainy images. To reconstruct the basis vector dictionaries, we obtain each patch’s structure and update the best matching kernel in each patch using the Singular Value Decomposition (SVD) method [13]. In this work, we divide rain streak vectors from outliers using the deviation angle relative to the vertical direction and the two eigenvalues of the elliptical kernel, its scales along the major and minor axes.

Figure 4a shows basis vectors in the dictionary learned from the initial rain map of the 7th frame in the “Waterfall” video sequence. The basis vectors, which have diverse directional structures, are visualized as \(16 \times 16\) patches.

Fig. 4

Visualization of basis vectors for the 7th frame in the “Waterfall” video sequence: a the basis vectors of the initial rain map, b the basis vectors of the effective rain streaks in the initial rain map [63]

Supervised Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) are both widely used for image classification [25]. However, an ANN frequently converges to a local minimum rather than the global one, and it is prone to overfitting when training runs too long. To avoid these problems, we use an SVM classifier, taking as features the deviation angle between the basis vector and the horizontal direction and the kernel scales along the major and minor axes. Note that it would also be possible to simply define ranges of the deviation angle and the major- and minor-axis kernel scales to separate rain streaks from outliers. The SVM training is done offline and performed only once. In addition, compared with kernel discriminant analysis, the SVM algorithm offers better robustness and sparseness.

To train the SVM classifier, we use 3000 positive samples of valid rain streak vectors and 3000 negative samples of outlier vectors. Positive samples are selected from several synthetic rainy video sequences, and negative samples are generated by taking the difference between the warped frame and the current frame in rain-free video sequences.
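
A minimal training sketch, assuming each basis vector has already been reduced to a hypothetical 3-D feature (deviation angle plus major- and minor-axis kernel scales); the RBF kernel is our assumption, not stated in the paper:

```python
import numpy as np
from sklearn.svm import SVC

def train_rain_vector_classifier(rain_feats, outlier_feats):
    """rain_feats, outlier_feats: (n, 3) arrays of (angle, major, minor)."""
    X = np.vstack([rain_feats, outlier_feats])
    y = np.hstack([np.ones(len(rain_feats)),       # 1 = valid rain streak
                   np.zeros(len(outlier_feats))])  # 0 = outlier
    clf = SVC(kernel='rbf', C=1.0, gamma='scale')  # trained offline, once
    return clf.fit(X, y)
```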

Fig. 5

SVM refinement of the initial binary rain map of the 7th frame in the “Waterfall” video sequence: a the initial binary rain map before processing, b the initial binary rain map after processing

After SVM classification, we replace the outlier vectors with zero vectors, marked with red slashes in Fig. 4b, to obtain a new dictionary \({\varvec{\widetilde{D}}}\). We use the new dictionary \({\varvec{\widetilde{D}}}\) and the optimal coefficient matrix \({\varvec{\hat{\alpha }}}\) to reconstruct each patch of the initial rain map matrix as \({\varvec{\widetilde{R}}}\),

$$\begin{aligned} {\varvec{\widetilde{R}}} = {\varvec{\widetilde{D}}}{\varvec{\hat{\alpha }}} \end{aligned}$$
(6)

Then, we put each reconstructed patch back into the image at its corresponding location; that is, we replace the initial rain streak layer \({\varvec{\mathcal {R}}}\) with a refined layer \({\widetilde{\varvec{\mathcal {R}}}}\) to make the reconstructed image more consistent. Finally, we adopt an adaptive thresholding method that chooses the threshold automatically, instead of setting it manually, to obtain the binary initial rain map. Figure 5 illustrates the initial binary rain map and the result of the rain map refinement.

Additionally, some tiny rain streaks are not detected in the initial rain map, and some thick rain streak boundaries are distorted. Thus, we create structuring elements of rain streaks and dilate the initial rain map [20], as sketched below. The improved parts are highlighted in orange and blue in Fig. 6a and b.
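
A sketch of the binarization and dilation, assuming Otsu’s method as the adaptive threshold and an illustrative 3x7 near-vertical elliptical structuring element (neither choice is specified in the paper):

```python
import cv2
import numpy as np

def binarize_and_dilate(refined_rain_map):
    # Otsu's method picks the binarization threshold automatically.
    rain_u8 = cv2.normalize(refined_rain_map, None, 0, 255,
                            cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(rain_u8, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # A thin, tall ellipse roughly mimics the streak shape.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 7))
    return cv2.dilate(binary, kernel, iterations=1)
```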

Fig. 6

Dilation of the refined initial binary rain map of the 7th frame in the “Waterfall” video sequence after SVM processing: a refined binary initial rain map, b dilated refined binary initial rain map

Original Scene Reconstruction

Because video content changes constantly in space and time, especially when captured by a camera moving in a certain direction, the modeled background scene gradually deviates from the real scene, which causes blur after the algorithm has run for a while. Block matching (BM) algorithms are the most popular video coding technique because of their effectiveness and simplicity in both software and hardware implementations. Since adjacent frames of a video sequence are usually highly similar, block matching approaches treat the motion of rectangular blocks, arbitrarily shaped patches, or even pixels within a defined region of the current frame as a reconstruction from pixels in the previous frame. The detected rain streak pixel values can therefore be restored using the spatio-temporal correlation of adjacent frames. Under these conditions, rain streak reconstruction can be formulated as a low-rank matrix completion problem over adjacent frames [1].

We first divide each current frame \(I_{k}\) into l disjoint blocks. For each block \({\varvec{b}}\), we search for the 4 most similar blocks from the two frames before (denoted \(I_{k-2},I_{k-1}\)) and after (denoted \(I_{k+1},I_{k+2}\)) the current frame:

$$\begin{aligned} {\widetilde{\varvec{b_{l}}}} = \left[ {\varvec{b_{k-2}}},{\varvec{b_{k-1}}},{\varvec{b_{k+1}}},{\varvec{b_{k+2}}}\right] \end{aligned}$$
(7)

To measure the similarity between two blocks while reducing computation, we compute the sum of squared differences (SSD) only over pixels without rain streaks; a code sketch of this masked search follows Eq. (9) below. We then define a block-matching matrix \({\varvec{B}}\) over all l blocks as:

$$\begin{aligned} {\varvec{B}} = \left[ {\widetilde{\varvec{b_{1}}}},{\widetilde{\varvec{b_{2}}}},{\widetilde{\varvec{b_{3}}}},\ldots ,{\widetilde{\varvec{b_{l}}}}\right] \end{aligned}$$
(8)

where each column represents one block. A corresponding rain map matrix \({\varvec{M}}\) is defined for \({\varvec{B}}\) in the same way:

$$\begin{aligned} {\varvec{M}} = \left[ {\widetilde{m_{1}}},{\widetilde{m_{2}}},{\widetilde{m_{3}}},\ldots ,{\widetilde{m_{l}}}\right] \end{aligned}$$
(9)
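
As noted above, the SSD is computed only over non-rain pixels. A minimal sketch of this masked block search follows, where the search radius and normalization are illustrative choices not specified in the paper (masks use 1 for rain pixels, as in \({\varvec{M}}\)):

```python
import numpy as np

def masked_ssd(block_a, block_b, mask_a, mask_b):
    """SSD over pixels that neither block flags as rain."""
    valid = (mask_a == 0) & (mask_b == 0)
    diff = (block_a.astype(np.float32) - block_b.astype(np.float32)) ** 2
    return diff[valid].sum() / max(int(valid.sum()), 1)

def best_match(block, mask, frame, frame_mask, top_left, radius=8):
    """Search a neighboring frame around `top_left` for the most similar block."""
    bh, bw = block.shape
    y0, x0 = top_left
    best = (np.inf, top_left)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= frame.shape[0] - bh and 0 <= x <= frame.shape[1] - bw:
                score = masked_ssd(block, frame[y:y + bh, x:x + bw],
                                   mask, frame_mask[y:y + bh, x:x + bw])
                if score < best[0]:
                    best = (score, (y, x))
    return best[1], best[0]
```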

We apply the low-rank matrix completion algorithm to reconstruct a filled-in matrix \({\varvec{L}}\) from the block matrix \({\varvec{B}}\), under two constraints. The first constraint is:

$$\begin{aligned} ({\varvec{1}}-{\varvec{M}}) \odot {\varvec{L}}=({\varvec{1}}-{\varvec{M}})\odot {\varvec{B}} \end{aligned}$$
(10)

which requires the filled-in matrix \({\varvec{L}}\) to preserve the original pixel values of \({\varvec{B}}\) at the non-rain pixels; “\({\varvec{\odot }}\)” denotes element-wise multiplication. The second constraint is:

$$\begin{aligned} {\varvec{M}}\odot {\varvec{L}}\le {\varvec{M}}\odot {\varvec{B}} \end{aligned}$$
(11)

where \({\varvec{1}}\) is the matrix whose elements are all 1. We also assume that pixel values containing rain streaks are larger than pixel values without them [16], so the magnitude of the reconstructed pixel values should be equal to or less than that of the original pixel values. Consequently, we formulate a low-rank matrix completion problem [7] as:

$$\begin{aligned} \begin{aligned} \text {minimize}&\quad ||{\varvec{L}}||_{*}\\ \text {subject to}&\quad ({\varvec{1}}-{\varvec{M}}) \odot {\varvec{L}}=({\varvec{1}}-{\varvec{M}})\odot {\varvec{B}}\\ &\quad {\varvec{M}}\odot {\varvec{L}}\le {\varvec{M}}\odot {\varvec{B}} \end{aligned} \end{aligned}$$
(12)

In this work, we employ the expectation-maximization (EM) algorithm [49] to solve this constrained matrix completion problem iteratively. The EM algorithm alternates two steps, expectation and maximization:

Expectation step: take the elements of the input matrix \({\varvec{B}}\) at pixels without rain streaks, and the elements of the current estimate at rain pixels, to obtain the filled matrix \({\varvec{L^{(t)}}}\):

$$\begin{aligned} {\varvec{L^{(t)}}}=({\varvec{1}}-{\varvec{M}})\odot {\varvec{B}}+{\varvec{M}}\odot ({\varvec{R^{(t)}}}\wedge {\varvec{B}}) \end{aligned}$$
(13)

Note that \({\varvec{R^{(t)}}}\) represents the current estimate; we use \(({\varvec{R^{(t)}}}\wedge {\varvec{B}})\) instead of \({\varvec{B}}\) because pixel values containing rain streaks are larger than pixel values without them. The initial \({\varvec{R^{(0)}}}\) is equal to \({\varvec{B}}\).

Definition 1

“\({\varvec{\wedge }}\)” denotes the element-wise minimum operator, which returns the minimum of two elements.

Maximization step: update the current estimate \({\varvec{R^{(t)}}}\) to a low-rank approximation of the filled matrix \({\varvec{L^{(t)}}}\). To minimize the sum of the singular values of \({\varvec{L}}\), the nuclear norm \(||{\varvec{L}}||_{*}\) in Eq. (12), we consider the singular value decomposition (SVD) [6] of a matrix \({\varvec{L}}\) of rank r:

$$\begin{aligned} {\varvec{L^{(t)}}} = {\varvec{U\Sigma V}^{T}},\quad {\varvec{\Sigma }} = \mathrm {diag}(\sigma _{i})\quad (1 \le i \le r) \end{aligned}$$
(14)

where \(\varvec{U}\) and \(\varvec{V}\) are matrices with orthonormal columns, and the singular values \(\sigma _{i}\) are positive. For each \(\tau \ge 0\), we introduce the soft-thresholding operator \({D_{\tau }}\) and update the rain streak estimate \({\varvec{R^{(t+1)}}}\) as follows:

$$\begin{aligned} {\varvec{R^{(t+1)}}}={\varvec{U}}D_{\tau }({\varvec{\Sigma}}){\varvec{V}^{T}} \end{aligned}$$
(15)

Definition 2

“\(D_{\tau }({\varvec{\Sigma }})\)” is an operator that applies a soft-thresholding rule, shrinking the singular values by the threshold \((\tau =2000)\) and setting those below it to zero. The maximum number of iterations \({t_{max}}\) is set to 200.

We repeat the expectation step, Eq. (13), and the maximization step, Eqs. (14) and (15), until the number of iterations exceeds 200. Finally, we replace the rain streak pixel values in the block matrix \({\varvec{B}}\) with the corresponding elements of the final filled matrix \({\varvec{\widetilde{L}}}\) to obtain a reconstructed background layer \({\widetilde{\varvec{\mathcal {B}}}}\). The procedure is summarized in Algorithm 1.

Algorithm 1
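
A compact NumPy sketch of this EM iteration over one block matrix, following Eqs. (13)-(15) directly with the stated \(\tau = 2000\) and \(t_{max} = 200\) (in \({\varvec{M}}\), 1 marks rain pixels):

```python
import numpy as np

def complete_block_matrix(B, M, tau=2000.0, t_max=200):
    """EM-style low-rank completion of block matrix B under rain mask M."""
    B = B.astype(np.float64)
    R = B.copy()                                   # R^(0) = B
    for _ in range(t_max):
        # Expectation (Eq. 13): keep clean pixels of B; at rain pixels, cap
        # the estimate by B, since rainy pixels are only ever brighter.
        L = (1 - M) * B + M * np.minimum(R, B)
        # Maximization (Eqs. 14-15): soft-threshold the singular values.
        U, s, Vt = np.linalg.svd(L, full_matrices=False)
        R = (U * np.maximum(s - tau, 0.0)) @ Vt
    return (1 - M) * B + M * np.minimum(R, B)      # final filled-in matrix
```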

Post-processing (For Accumulated Rain Streaks Removal Task)

As mentioned in section “Introduction”, rain streaks distributed farther than the camera’s depth of field appear heavy, and their accumulation resembles haze and mist. Various dehazing techniques can be applied to remove such haze or mist from single images and video sequences [2, 8, 43, 48], even in specific situations such as underwater [14, 29] and at night [28, 59]. As far as we know, current rain streak removal methods cannot remove accumulated rain streaks well. To improve the visual contrast of our results, we attach a state-of-the-art dehazing method [8]Footnote 1 after the rain streak reconstruction step as post-processing; this method also outperforms previous state-of-the-art image deraining methods, demonstrating its generality.

When the input video sequence contains accumulated rain, our method runs this dehazing algorithm after the original scene reconstruction step as post-processing; otherwise, the result is output directly.

Experiments Results and Analysis

In this section, we present experimental results and evaluate the performance of the proposed method on synthetic and real video sequences, both quantitatively and qualitatively. Several state-of-the-art rain/snowflake removal methods are implemented for comparison, including Wei et al. [57]Footnote 2 (denoted P-MoG) and Li et al. [30]Footnote 3 (denoted MS-CSC) for video sequences, and Chen et al. [8]1 (denoted GCANet), Li et al. [31]Footnote 4 (denoted HR-GAN) and Ren et al. [46]Footnote 5 (denoted PReNet) for single images. Note that the P-MoG method cannot solve the rain removal problem with a moving background [57]. Furthermore, the proposed method proves effective at removing linear raindrops on window glass and splashes caused by accumulated raindrops in real video sequences; we compare raindrop removal performance with Qian’s method [42]Footnote 6, which specializes in removing raindrops on window glass. In addition, by introducing a state-of-the-art dehazing method (GCANet) into our proposed P-BMC model, denoted P-BMC+Dehaze, the method can also resolve the haze and mist caused by accumulated heavy rain. For a comprehensive comparison, more video demonstrations of the results obtained by the competing methods are available as supplementary material on YouTubeFootnote 7.

Fig. 7

Rain streak removal performance of different methods on the 29th frame in the “Golf” video sequence

Fig. 8

Rain streak removal performance of different methods on the 19th frame in the “Tennis” video sequence

Fig. 9

Rain streak removal performance of different methods on the 52nd frame in the “City” video sequence

Table 1 Quantitative performance comparison of all competing methods on synthetic rainy video sequences
Fig. 10

Rain streak removal performance of different methods on the 14th and 59th frames in the “Street” video sequence

Experiments on Synthetic Video Sequences

First, we present experimental results on three video sequences with synthetic rain streaks: two with static backgrounds (Figs. 7, 8) and one with a dynamic background (Fig. 9).

Figures 7 and 8 illustrate the rain removal performance of all compared methods on typical light-rain scenes that also contain some random thick rain streaks and complex moving objects, including moving athletes, walking pedestrians, and even tiny balls. Both the “Golf” and “Tennis” video sequences have static backgrounds, i.e., the camera is not moving. The P-MoG, MS-CSC, HR-GAN, and PReNet methods remove the tiny rain streaks but leave some thick ones. Moreover, the HR-GAN method does not restore the original image saturation well, the PReNet method destroys some linear details, such as the tennis net and tennis court, as shown in Fig. 8f, and the GCANet method does not remove the synthetic rain streaks effectively. In comparison, the proposed method removes all kinds of rain streaks cleanly and preserves almost all details, except for some extremely tiny objects (like the tennis ball), which is discussed further in section “Discussion”.

The deraining results for the dynamic video sequence are shown in Fig. 9. The PReNet and HR-GAN methods cannot fully remove the rain streaks and do not preserve the building textures well, as shown in Fig. 9d and e. The MS-CSC and GCANet methods leave some rain streaks while preserving the original scene and texture details. In comparison, our proposed P-BMC method achieves better performance in both respects.

Fig. 11

Snowflake removal performance of different methods on the 20th and 42nd frames in the “Indoor” video sequence

Fig. 12

Snowflake removal performance of different methods on the 14th and 52nd frames in the “Woman” video sequence

Quantitative comparisons are given in Table 1. We employ five Image Quality Assessment (IQA) metrics to evaluate all competing methods: PSNR (Peak Signal-to-Noise Ratio) [22], SSIM (Structural Similarity) [56], UQI (Universal Quality Image Index) [55], FSIM (Feature Similarity) [60] and NIQE (Naturalness Image Quality Evaluator) [39]. As the table shows, compared with the other five competing methods, our proposed P-BMC method achieves the best or second-best results on all five metrics.
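
For reference, the PSNR and SSIM of one frame can be computed with recent scikit-image as below, assuming 8-bit RGB frames; the other metrics follow their cited definitions:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_quality(clean, derained):
    """PSNR and SSIM of one derained frame against rain-free ground truth."""
    psnr = peak_signal_noise_ratio(clean, derained, data_range=255)
    ssim = structural_similarity(clean, derained, channel_axis=-1,
                                 data_range=255)
    return psnr, ssim
```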

Fig. 13

Snowflake removal performance of different methods on the 33rd and 74th frames in the “School” video sequence

Experiments on Real-World Video Sequences

We further evaluate the performance of different methods on four real-world video sequences with real rain streaks or snowflakes: two captured with static backgrounds, shown in Fig. 10 (rain streaks) and Fig. 11 (snowflakes), and two captured with dynamic backgrounds, shown in Fig. 12 (snowflakes) and Fig. 13 (rain streaks). Figure 10 shows visual comparisons of rain streak removal on a real street scene captured by outdoor surveillance equipment, containing randomly moving rain streaks and leaves. Figure 11 shows snowflake removal results on a real video sequence captured by an indoor camera with poor visibility, containing several kinds of dynamic snowflakes, impurities attached to the glass, and some moving objects, such as vehicles and flags. As displayed, the P-MoG, MS-CSC, and HR-GAN methods remove most of the rain streaks but distort the pixel values; the GCANet method leaves many rain streaks in the recovered frames; and PReNet cannot recover the texture information, as shown in the orange boxes in Fig. 10 and the green boxes in Figs. 10 and 11. By contrast, the proposed P-BMC method removes all the rain streaks, and even the water splashes on the ground, while preserving the details and texture of the original scene; further results on the water splash removal task are shown in section “Accumulated Rain and Water Splashes and Mist Removal”.

Fig. 14

Several uncertain shapes of raindrops on window glass [63]

Fig. 15

Linear raindrop removal results of the proposed rain streak removal algorithm on the “Raindrop” video sequence

Figure 12 presents the desnowing results on a video sequence captured by a moving camera, containing dynamic snowflakes and a moving person. The MS-CSC and GCANet methods leave many snowflakes unremoved, the HR-GAN method removes most of the snowflakes but severely distorts human skin tones, and the PReNet method may cause blur and a loss of image texture, as shown in Fig. 12e.

Figure 13 shows a hard example of rain streak removal on a real video sequence captured by a moving camera at night, containing several kinds of dynamic rain streaks, flags fluttering in the wind, and strong light interference. Most of the competing methods degrade on this sequence, especially in the bright light area, as shown for the MS-CSC method in Fig. 13b. The HR-GAN method causes over-saturation and destroys the original scene, while the PReNet method preserves the original saturation well but lacks some texture details. Comparatively, our method performs well even in this difficult situation.

Challenging Tasks

Note that raindrops or impurities adhering to window glass, mist/haze, and water splashes caused by accumulated heavy rain/snow greatly affect the experimental results, as shown in Fig. 11. Thus, we experiment on these two different challenging tasks with real video sequences, where our proposed method also presents state-of-the-art results.

Fig. 16

Accumulated rain, water splash and mist removal results of the proposed rain streak removal algorithm on the “Heavy Rain” video sequence

Table 2 Performance of all competing methods on object detection

Linear Raindrops on Window Glasses Removal

Raindrops adhering to camera lenses or flowing down window glass can seriously hinder the visibility of the background scene and degrade image quality for computer vision systems [42]. Since raindrops are transparent, attach to the window glass, and take several uncertain shapes, removing them is a great challenge. Raindrops on window glass can be classified into three types according to the direction of the wind and the severity of the rain. Figure 14a shows the shape most similar to rain streaks, so we attempt to use the proposed method to remove rain streaks and linear raindrops on window glass together. The real difference between this task and the original rain removal task is that raindrops are attached to the glass and move with the camera instead of being randomly distributed in space. Thus it cannot be addressed as a global image transformation problem by simply dividing video sequences into a clear background layer, a moving objects layer, and a rain streaks layer [63].

In Fig. 15, we present results on two frames randomly selected from the “Raindrop” sequence, showing our proposed method and Qian et al. [42] only. Our method removes not only the rain streaks outside the window but also most of the linear raindrops attached to the window glass. Conversely, Qian’s method performs better than ours on round raindrops, because it is trained for raindrops of that shape, and it preserves the original resolution of the image better. The comparison results are highlighted in yellow, red, and green. However, some rain streaks, highlighted in gray, remain because they are excluded by the SVM classifier due to their non-linear shapes, or because they are modeled as background due to having the same motion as the window glass.

Accumulated Rain and Water Splashes and Mist Removal

Most rain streak removal methods cannot deal adequately with heavy rain images [31]. Figure 16 illustrates the results of our proposed method, which removes rain streaks, enhances visibility, and achieves considerable performance on real video sequences with heavy rain and water splashes caused by raindrops.

We assume that heavy rain streaks falling into puddles cause water splashes with the same motion and linear shape as rain streaks, randomly distributed in space. Our proposed method efficiently applies motion compensation and temporal information to remove accumulated rain streaks and water splashes. To handle rain streak accumulation, which resembles mist or haze, we utilize GCANet [8] for further dehazing. Although HR-GAN can remove most of the rain streaks, it also changes the original saturation of the images and makes the results unnatural. As can be seen, our method removes not only heavy rain streaks and water splashes but also mist/haze faithfully, bringing the contrast and visibility of the results close to those of the real rain-free scene.

Performance Evaluation

Furthermore, we examine whether our rain streak removal results benefit computer vision systems by using the Google Vision APIFootnote 8. Two metrics are used for the evaluation: (a) the average score of identifying the main object in the input images and results (denoted Avg.Score), and (b) the number of object labels recognized (denoted Obj.LBL). The results are listed in Table 2. As seen, our rain/snow removal results produce better recognition and detection accuracy than most of the competing methods. In particular, for the heavy rain removal task, our proposed method (P-BMC+Dehaze) attains better results than the state-of-the-art HR-GAN method [31].
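
As a minimal sketch of one such request with the google-cloud-vision Python client (the aggregation into Avg.Score and Obj.LBL is omitted):

```python
from google.cloud import vision

def label_frame(path):
    """Request object labels for one frame via the Google Vision API."""
    client = vision.ImageAnnotatorClient()
    with open(path, 'rb') as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Each annotation carries a description and a confidence score in [0, 1].
    return [(a.description, a.score) for a in response.label_annotations]
```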

Execution Efficiency

Table 3 The average execution time (sec) comparison of all competing methods on a single frame

To show the efficiency of the rain streak removal methods, we list and compare the average per-frame execution times of all competing methods in Table 3. Note that the average times of the neural-network-based methods in Table 3 do not include training time. From the table, the PReNet method is faster than the other competing methods. Since our proposed method is based on dictionary learning and block matching, the complex optimizations for original scene reconstruction take more execution time as the input image size increases.

Discussion

  • Because rain streaks and background texture may overlap, most of the competing methods cannot preserve texture details well (orange boxes in Fig. 10, yellow boxes in Fig. 11, and light blue boxes in Fig. 13) and tend to cause over-smoothing or blur. Our proposed method removes rain streaks and snow better while retaining more details of the original scene.

  • Because the imaging process of rain in natural scenes is very complex, most existing deraining methods cannot effectively resolve the mist or haze formed by accumulated rain while removing rain streaks. Thus, we supplement a dehazing method as post-processing to solve this challenging task. In contrast, the HR-GAN method bumps up the intensity of all colors in the shot; it removes rain streaks, snowflakes, and mist, but it may over-saturate certain colors, causing a loss of detail in those areas (Fig. 16b) and over-saturated skin tones that look too orange and unnatural (Fig. 12d).

  • In general, single-image rain removal methods based on deep neural networks are much more efficient at inference thanks to their pre-trained models. However, video-based deraining methods do not require the time-consuming collection of large numbers of training samples.

  • As illustrated in Fig. 17b, the tiny tennis ball is falsely detected as a rain streak because of its high speed and small size, and it is therefore also removed. The second row shows the deraining results of GCANet and PReNet, and the third row presents the results of MS-CSC and our method. From the discussion above, we conclude that existing video-based methods can remove almost all rain streaks faithfully, but they may also affect some tiny objects. Image-based methods may not be affected by small objects, but they cannot handle all types of rain streaks well or preserve the original texture and scene.

Fig. 17

Results of the proposed rain streak removal algorithm and some of the competing methods on the 5th frame in the “Tennis” video sequence

Conclusions and Future Directions

In this work, we propose a rain streak and snowflake detection and removal method that exploits motion compensation of consecutive frames in a video sequence based on low-rank matrix completion. The proposed algorithm first obtains an initial rain map using an optical flow method, then refines it using the sparse representation technique, and finally reconstructs the original scene using the low-rank matrix completion method. For the accumulated rain and water splash removal task in particular, we supplement a dehazing algorithm as post-processing. Consequently, rain streaks can be successfully removed from video sequences while preserving most of the original image details.

To evaluate the performance of the proposed method, we conducted experiments on seven rain datasets and two snow datasets, and compared the proposed method with five recent state-of-the-art rain streak removal models for single images (GCANet, HR-GAN, PReNet) and video sequences (P-MoG, MS-CSC). Extensive experimental results demonstrate that our method can reliably detect and remove rain streaks, snowflakes, and even linear raindrops on window glass and accumulated rain, while preserving the original scene.

Throughout this paper, we have investigated various classical and novel rain streak removal algorithms for images and video sequences and compared them with our proposed rain streak and snowflake detection and removal method [63]. We also analyzed object detection accuracy and execution time as quantitative performance comparisons to verify the effectiveness of our proposed method. Furthermore, we fully utilized a dehazing network as post-processing for the task of restoring video sequences with accumulated rain.

For video sequences especially, online rain streak removal techniques must fulfill persistence (processing video sequences in real time), low time and space complexity, and universality (applicability to complex video scenes) [53].

For future work, we plan to further improve the real-time performance of the algorithm by employing more efficient block matching algorithms for rain streak reconstruction and to apply it to other applications. In addition, rain streak/snowflake removal mainly serves as a pre-processing step for specific computer vision systems, such as autonomous driving, outdoor monitoring equipment, and underwater photography. Therefore, we intend to apply the proposed algorithm to underwater datasets with fast horizontal movement and obvious illumination variation, aiming to eliminate impurities in the water and the sand kicked up by underwater robots in operation; this performance enhancement may also be worth exploring in future research.

Notes

  1. https://github.com/cddlyf/GCANet.

  2. https://github.com/wwzjer/RainRemoval_ICCV2017.

  3. https://github.com/MinghanLi/MS-CSC-Rain-Streak-Removal.

  4. https://github.com/liruoteng/HeavyRainRemoval.

  5. https://github.com/csdwren/PReNet.

  6. https://github.com/rui1996/DeRaindrop.

  7. https://youtu.be/5Jht7tqTbe8

  8. https://cloud.google.com/vision/.

References

  1. 1.

    Abdel-Hakim AE. A novel approach for rain removal from videos using low-rank recovery. In: 2014 5th international conference on intelligent systems, modelling and simulation, 2014; pp. 351–356. IEEE.

  2. 2.

    Ancuti CO, Ancuti C, Sbert M, Timofte R. Dense-haze: a benchmark for image dehazing with dense-haze and haze-free images. In: 2019 IEEE international conference on image processing (ICIP), 2019; pp. 1014–1018. IEEE.

  3. 3.

    Bai X, Yang M, Huang T, Dou Z, Yu R, Xu Y. Deep-person: learning discriminative deep features for person re-identification. Pattern Recogn. 2020;98:107036.

    Article  Google Scholar 

  4. 4.

    Barnum PC, Narasimhan SG, Kanade T. Analysis of rain and snow in frequency space. Int J Comput Vis. 2008;86:256–74.

    Article  Google Scholar 

  5. 5.

    Bossu J, Hautière N, Tarel JP. Rain or snow detection in image sequences through use of a histogram of orientation of streaks. Int J Comput Vis. 2011;93(3):348–67.

    Article  Google Scholar 

  6. 6.

    Cai JF, Candès EJ, Shen Z. A singular value thresholding algorithm for matrix completion. SIAM J Optim. 2010;20(4):1956–82.

    MathSciNet  MATH  Article  Google Scholar 

  7. 7.

    Candès EJ, Recht B. Exact matrix completion via convex optimization. Found Comput Math. 2009;9(6):717.

    MathSciNet  MATH  Article  Google Scholar 

  8. 8.

    Chen D, He M, Fan Q, Liao J, Zhang L, Hou D, Yuan L, Hua G. Gated context aggregation network for image dehazing and deraining. In: 2019 IEEE winter conference on applications of computer vision (WACV), 2019; pp. 1375–1383. IEEE.

  9. 9.

    Chen J, Tan CH, Hou J, Chau LP, Li H. Robust video content alignment and compensation for clear vision through the rain. 2018. arXiv:1804.09555.

  10. 10.

    Comaniciu D, Ramesh V, Meer P. Kernel-based object tracking. IEEE Trans Pattern Anal Mach Intell. 2003;25(5):564–77.

    Article  Google Scholar 

  11. 11.

    Eigen D, Krishnan D, Fergus R. Restoring an image taken through a window covered with dirt or rain. In: Proceedings of the IEEE international conference on computer vision, 2013. pp. 633–640.

  12. 12.

    Elad M. Sparse and redundant representations: from theory to applications in signal and image processing. Berlin: Springer Science & Business Media; 2010.

    MATH  Book  Google Scholar 

  13. 13.

    Elad M, Aharon M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans Image Process. 2006;15(12):3736–45.

    MathSciNet  Article  Google Scholar 

  14. 14.

    Fabbri C, Islam MJ, Sattar J. Enhancing underwater imagery using generative adversarial networks. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018;7159–7165. IEEE.

  15. 15.

    Fadili MJ, Starck JL, Bobin J, Moudden Y. Image decomposition and separation using sparse representations: an overview. Proc IEEE. 2009;98(6):983–94.

    Article  Google Scholar 

  16. 16.

    Garg K, Nayar SK. Detection and removal of rain from videos. In: Proceedings of the 2004 IEEE computer society conference on computer vision and pattern recognition, 2004. CVPR 2004. 2004;1:I–I.

  17. 17.

    Garg K, Nayar SK. When does a camera see rain? In: Tenth IEEE international conference on computer vision (ICCV’05) Volume 1, 2005; vol. 2, pp. 1067–1074.

  18. 18.

    Garg K, Nayar SK. Vision and rain. Int J Comput Vis. 2006;75:3–27.

    Article  Google Scholar 

  19. 19.

    George J, Bhavani S, Jaya J. Certain explorations on removal of rain streaks using morphological component analysis. Int J Eng Res Technol (IJERT). 2013;2(2):2278–81.

    Google Scholar 

  20. 20.

    Gonzalez RC, Woods RE, Eddins SL. Digital image processing using MATLAB. Pearson Education India, 2004.

  21. 21.

    Horn BK, Schunck BG. Determining optical flow. In: Techniques and Applications of Image Understanding, 1981;281:319–331. International Society for Optics and Photonics

  22. 22.

    Huynh-Thu Q, Ghanbari M. Scope of validity of psnr in image/video quality assessment. Electron Lett. 2008;44(13):800–1.

    Article  Google Scholar 

  23. 23.

    Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Mach Intell. 1998;20(11):1254–9.

    Article  Google Scholar 

  24. 24.

    Jiang TX, Huang TZ, Zhao XL, Deng LJ, Wang Y. Fastderain: a novel video rain streak removal method using directional gradient priors. IEEE Trans Image Process. 2018;28(4):2089–102.

    MathSciNet  Article  Google Scholar 

  25. 25.

    Joshi KD, Chauhan V, Surgenor B. A flexible machine vision system for small part inspection based on a hybrid svm/ann approach. J Intell Manuf. 2020;31(1):103–25.

    Article  Google Scholar 

  26. 26.

    Kang LW, Lin CW, Fu YH. Automatic single-image-based rain streaks removal via image decomposition. IEEE Trans Image Process. 2011;21(4):1742–55.

    MathSciNet  MATH  Article  Google Scholar 

  27. 27.

    Kim JH, Sim JY, Kim CS. Video deraining and desnowing using temporal correlation and low-rank matrix completion. IEEE Trans Image Process. 2015;24:2658–70.

    MathSciNet  MATH  Article  Google Scholar 

  28. 28.

    Kuanar S, Rao K, Mahapatra D, Bilas M. Night time haze and glow removal using deep dilated convolutional network, 2019. arXiv:1902.00855.

  29. 29.

    Li J, Skinner KA, Eustice RM, Johnson-Roberson M. Watergan: unsupervised generative network to enable real-time color correction of monocular underwater images. IEEE Robot Autom Lett. 2017;3(1):387–94.

    Google Scholar 

  30. 30.

    Li M, Xie Q, Zhao Q, Wei W, Gu S, Tao J, Meng D. Video rain streak removal by multiscale convolutional sparse coding. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition 2018; pp. 6644–6653.

  31. 31.

    Li R, Cheong LF, Tan RT. Heavy rain image restoration: Integrating physics model and conditional adversarial learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019; pp. 1633–1642.

  32.

    Li Y, Tan RT, Guo X, Lu J, Brown MS. Rain streak removal using layer priors. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016; pp. 2736–2744.

  33.

    Li Y, Tan RT, Guo X, Lu J, Brown MS. Single image rain streak decomposition using layer priors. IEEE Trans Image Process. 2017;26(8):3874–85.

  34.

    Liu C, et al. Beyond pixels: exploring new representations and applications for motion analysis. Ph.D. thesis, Massachusetts Institute of Technology, 2009.

  35.

    Liu J, Yang W, Yang S, Guo Z. D3R-Net: dynamic routing residue recurrent network for video rain removal. IEEE Trans Image Process. 2018;28(2):699–712.

  36.

    Liu J, Yang W, Yang S, Guo Z. Erase or fill? Deep joint recurrent rain removal and reconstruction in videos. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2018; pp. 3233–3242.

  37.

    Liu P, Xu J, Liu J, Tang X. Pixel based temporal analysis using chromatic property for removing rain from videos. Comput Inf Sci. 2009;2:53–60.

  38.

    Luo Y, Xu Y, Ji H. Removing rain from a single image via discriminative sparse coding. In: Proceedings of the IEEE international conference on computer vision, 2015; pp. 3397–3405.

  39.

    Mittal A, Soundararajan R, Bovik AC. Making a completely blind image quality analyzer. IEEE Signal Process Lett. 2012;20(3):209–12.

  40.

    Mukhopadhyay S, Tripathi AK. Combating bad weather part I: rain removal from video. Synth Lect Image Video Multimed Process. 2014;7(2):1–93.

  41.

    Narasimhan SG, Nayar SK. Vision and the atmosphere. Int J Comput Vis. 2002;48(3):233–54.

  42.

    Qian R, Tan RT, Yang W, Su J, Liu J. Attentive generative adversarial network for raindrop removal from a single image. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018; pp. 2482–2491.

  43.

    Qin X, Wang Z, Bai Y, Xie X, Jia H. FFA-Net: feature fusion attention network for single image dehazing, 2019. arXiv:1911.07559.

  44.

    Ramya C, Rani SS. Rain removal in image sequence using sparse coding. In: International conference on intelligent robotics, automation, and manufacturing, 2012; pp. 361–370. Springer.

  45.

    Rao SR, Ni KY, Owechko Y. Video scene analysis system for situational awareness. US Patent 10,528,818, 2020.

  46.

    Ren D, Zuo W, Hu Q, Zhu P, Meng D. Progressive image deraining networks: a better and simpler baseline. In: IEEE Conference on Computer Vision and Pattern Recognition, 2019.

  47.

    Ren W, Tian J, Han Z, Chan A, Tang Y. Video desnowing and deraining based on matrix decomposition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017; pp. 4210–4219.

  48.

    Ren W, Zhang J, Xu X, Ma L, Cao X, Meng G, Liu W. Deep video dehazing with semantic segmentation. IEEE Trans Image Process. 2018;28(4):1895–908.

  49.

    Srebro N, Jaakkola T. Weighted low-rank approximations. In: Proceedings of the 20th International Conference on Machine Learning (ICML-03), 2003; pp. 720–727.

  50.

    Tan CH, Chen J, Chau LP. Dynamic scene rain removal for moving cameras. In: 2014 19th International Conference on Digital Signal Processing, 2014; pp. 372–376. IEEE.

  51.

    Tripathi A, Mukhopadhyay S. Video post processing: low-latency spatiotemporal approach for detection and removal of rain. IET Image Proc. 2012;6(2):181–96.

  52.

    Tripathi AK, Mukhopadhyay S. Removal of rain from videos: a review. SIViP. 2014;8:1421–30.

  53.

    Wang H, Li M, Wu Y, Zhao Q, Meng D. A survey on rain removal from video and single image, 2019. arXiv:1909.08326.

  54.

    Wang T, Yang X, Xu K, Chen S, Zhang Q, Lau RW. Spatial attentive single-image deraining with a high quality real rain dataset. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

  55.

    Wang Z, Bovik AC. A universal image quality index. IEEE Signal Process Lett. 2002;9(3):81–4.

  56.

    Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004;13(4):600–12.

  57.

    Wei W, Yi L, Xie Q, Zhao Q, Meng D, Xu Z. Should we encode rain streaks in video as deterministic or stochastic? In: 2017 IEEE International Conference on Computer Vision (ICCV), 2017; pp. 2535–2544.

  58.

    Xu H, Li B, Ramanishka V, Sigal L, Saenko K. Joint event detection and description in continuous video streams. In: 2019 IEEE winter conference on applications of computer vision (WACV), 2019; pp. 396–405. IEEE.

  59.

    Zhang J, Cao Y, Fang S, Kang Y, Chen CW. Fast haze removal for nighttime image using maximum reflectance prior. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017; pp. 7418–7426.

  60.

    Zhang L, Zhang L, Mou X, Zhang D. FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process. 2011;20(8):2378–86.

  61.

    Zhang X, Li H, Qi Y, Leow WK, Ng TK. Rain removal in video by combining temporal and chromatic properties. In: 2006 IEEE international conference on multimedia and expo, 2006; pp. 461–464. IEEE.

  62.

    Zhang Z, Xu Y, Yang J, Li X, Zhang D. A survey of sparse representation: algorithms and applications. IEEE Access. 2015;3:490–530.

  63.

    Zhou Y, Shimada N. Using motion compensation and matrix completion algorithm to remove rain streaks and snow for video sequences. In: Pattern Recognition, 2020; pp. 91–104. Springer International Publishing, Cham.


Author information


Corresponding author

Correspondence to Yutong Zhou.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Machine Learning in Pattern Analysis” guest edited by Reinhard Klette, Brendan McCane, Gabriella Sanniti di Baja, Palaiahnakote Shivakumara and Liang Wang.


About this article


Cite this article

Zhou, Y., Shimada, N. Rain Streaks and Snowflakes Removal for Video Sequences via Motion Compensation and Matrix Completion. SN COMPUT. SCI. 1, 328 (2020). https://doi.org/10.1007/s42979-020-00333-6


Keywords

  • Rain streaks and snowflakes removal
  • Motion compensation
  • Sparse representation technique
  • SVM
  • Block matching estimation