Fast Blind Deblurring of QR Code Images Based on Adaptive Scale Control

Abstract

With the development of 5G technology, the low-latency requirements of commercial applications and the large amounts of data involved are changing our daily lives. Against this background, this paper proposes a fast blind deblurring algorithm for QR code images, which achieves adaptive scale control by introducing an evaluation mechanism. Its main purpose is to remove the out-of-focus blur caused by lens shake, inaccurate focus, and optical noise by speeding up the latent-image estimation in the multi-scale iterative deblurring process. The algorithm improves efficiency under the guidance of collaborative computing, based on characteristics of QR codes such as their gradient and intensity features. In the evaluation step, the Tenengrad method is used to assess image quality, and the evaluation value is compared with an empirical value obtained from experimental data. Combined with the error correction capability of QR codes, recognizable QR code images are output. In addition, we introduce a scale control parameter to study the relationship between recognition rate and restoration time. Theoretical analysis and experimental results show that the proposed algorithm achieves high recovery efficiency and good restoration quality, and can be effectively applied in industrial settings.

Introduction

As collaborative computing and general technologies such as the Internet of Things (IoT) evolve, they provide an environment where people can share information without being restricted by space and time. Through the Internet, it is possible to collaborate effectively with anyone, anytime, anywhere [1]. As a main entrance to the mobile Internet, two-dimensional barcodes are widely used in commodity payment, public security, financial insurance, and other fields because of their large storage capacity, wide application range, and strong sharing capabilities. The Quick Response (QR) code, the most common two-dimensional barcode, has the advantages of low cost, easy production, and durability [2,3,4]. However, in real life, out-of-focus blurred QR code images are often captured because of imaging problems such as lens shake, inaccurate focus, and optical noise. This ubiquitous image degradation usually makes QR code extraction and identification difficult. In the commercial era, especially driven by 5G technology, people demand ever faster information acquisition. This is why fast deblurring of QR code images has become a research focus, especially in industrial applications [5] such as the traceability of salt, food, and medicine shown in Fig. 1. When an out-of-focus blurred QR code image is acquired, a deblurring algorithm is applied. Its main purpose is to restore the damaged image as far as possible by improving the signal-to-noise ratio through a series of processing steps on the damaged or degraded image. Traditional image restoration assumes that the image degradation function is given, that is, that the blur kernel is known. In real life, however, the blur kernel is usually unknown, and the image can only be restored with the help of image priors, which is the goal of blind deblurring.

Fig. 1
figure1

Some examples of industrial applications of QR codes: (a) Salt, (b) Chocolate, (c) Medicine, (d) Beetroot. Source: picture (c) is a snapshot of a real situation in the clinical lab of a local hospital, from Jiang [6]

In recent years, a series of studies on blind deblurring have been carried out at home and abroad. For the blind deblurring of binary images, which are similar to QR code images, Zoran et al. [7] presented a general framework for computing MAP estimates, showing how to derive an appropriate cost function, how to optimize it, and how to use it to restore the entire image. This framework has made a major contribution to the field of blind deblurring; it performs better the more complex the image is, but because it concentrates on details, it struggles to eliminate ringing artifacts. Pan et al. [8, 9] proposed a simple but effective L0-regularized deblurring algorithm for text images and beyond, based on intensity and gradient priors. Compared with existing blind deblurring algorithms, it needs no complicated edge-selection filtering strategy and can also be applied to deblurring low-illumination images. It is easy to understand and implement, but the multi-scale iterative estimation it adopts is redundant and time-consuming. Building on Pan's algorithm, Liu and Du et al. [10] exploited the binary nature of QR codes to construct a new constraint term, achieving double-layer constraints on QR images with excellent performance on common blur kernels and multiple kinds of noise. This algorithm is more efficient than traditional algorithms but still does not solve the speed problem of blind deblurring. Nah et al. [11] achieved dynamic-scene deblurring with a multi-scale loss function on a multi-scale convolutional neural network; however, the large number of training samples and the long training time are its drawbacks. Bai et al. [12] proposed a graph-based blind image deblurring algorithm that interprets an image patch as a signal on a weighted graph. It handles various blind deblurring scenarios, and its reconstructed sharp results are visually and numerically better than state-of-the-art methods, but it is less effective on heavily blurred images. Also building on Pan's work, Wen et al. [13] proposed a novel algorithm with a simplified sparsity prior of local minimal pixels (PMP) rather than directly applying the half-quadratic splitting algorithm. It substantially improves practical stability and computational efficiency, with a certain gain in speed.

We note that blind image deblurring is an ill-posed problem, so it is difficult to find an effective and universal solution. Fortunately, however, the main steps of existing blind deblurring algorithms based on L0 regularization are multi-scale iteration, latent-image estimation, and blur kernel estimation, which can mitigate this problem. In this paper, we propose a fast blind deblurring algorithm for QR code images, motivated by the slowness of the multi-scale iterations in existing blind deblurring algorithms. A large number of experiments show that our method implements adaptive scale control effectively, which greatly improves the speed of blind deblurring. The remainder of this paper is organized as follows. Section 2 briefly reviews related work on image deblurring and image quality evaluation. Section 3 presents the proposed algorithm by analysing the evaluation mechanism and the idea of adaptive scale control. Section 4 describes the overall procedure of the algorithm in three main parts: multi-scale division, image estimation, and blur kernel estimation. Section 5 reports the experimental results, and Section 6 concludes the paper.

Related work

Generally, image restoration technologies can be coarsely divided into two categories. One is non-blind deconvolution, which performs deconvolution based on the image blur mechanism and the point spread function (PSF). The other is blind deconvolution: when the PSF is unknown, the PSF and the original clear image must be estimated from prior knowledge of the blurred image, which is an ill-posed problem. For image quality evaluation, depending on whether reference images are available, full-reference, semi-reference, and no-reference methods are the main approaches to objective assessment. The full-reference method works best, but it presupposes that the original reference image is available, which is usually not the case. The semi-reference method is used when the complete reference image cannot be obtained and only partial information about it can be extracted; it typically evaluates image clarity by extracting and comparing features. This paper instead uses a no-reference method, which estimates image clarity directly from certain characteristics of the image, without any reference. It is not as accurate as the full- and semi-reference approaches, but it matches real-life conditions and is more practical. In the following, we briefly review only the previous work closely related to out-of-focus image deblurring and image quality evaluation.

Image Deblurring

Many previous methods introduce complex image priors and treat blind image restoration as a joint optimization problem, penalizing image blurriness while promoting image sharpness. Examples include L0-norm based priors [8,9,10], the dark channel prior [14], the low-rank prior [15], the graph-based prior [12], the multi-scale latent structure prior [16], the local maximum gradient prior [17], and deep priors [18]. These methods can effectively reduce the ill-posedness of blind image restoration, but they are usually computationally expensive: evaluating a non-convex prior requires heavy computation and leads to complex optimization algorithms. Moreover, parametric assumptions on blur kernels can greatly improve the robustness of blind image deblurring [19, 20], as can principled algorithms within the maximum a posteriori framework that tackle image restoration with a partially known or inaccurate degradation model [21]. Deep learning and other self-supervised methods rely on large sets of real images and training data [22,23,24,25,26]; the characteristics of the training data strongly affect the performance of the whole model, and in many cases constructing and processing such data sets is very expensive and of low feasibility.

Image quality evaluation

When an image is blurred or out of focus, it loses edge detail. Consequently, a no-reference image sharpness evaluation method with high sensitivity to gradients is needed in practice. Common no-reference sharpness evaluation methods include the image histogram method, variance method, Tenengrad method, Laplacian method, fast Fourier transform method, grey variance method, histogram entropy method, and local histogram method [27]. In recent years, the Kang & Park method [28], X-ray tomography [29], StyleGAN [30], etc., have been used to analyse and improve image sharpness. The disadvantage of these high-sharpness approaches is usually cost, because they require more expensive systems and longer scan times. Therefore, the traditional Tenengrad method is adopted in this paper. This method not only reduces the occurrence of local extrema once a suitable threshold is selected [31], but also has good sensitivity and accuracy [32, 33]. It is a common gradient-based image clarity evaluation function. In image processing, the edge pixel grey levels of an ideal QR code image change rapidly, i.e., the gradient function takes large values, while the edge pixel grey levels of a blurred QR code image change smoothly and the gradient is small.

The proposed algorithm

In this paper, a classic statistical method is used. Under the guidance of collaborative computing, an evaluation mechanism is introduced: the effective boundary values at different scales, as well as the recognition rate and average computation time under different scale controls, are calculated and analysed so as to realize adaptive scale control, the fast blind deblurring idea developed in Section 4. This section explains the evaluation mechanism and how adaptive scale control is implemented, based on theoretical and experimental analysis.

Evaluation mechanism

To study adaptive scale control for shortening the running time of the blind deblurring algorithm for QR code images, the image clarity value plays a very important role in the whole algorithm. In this paper, the Tenengrad method is used for image edge detection, applying the horizontal and vertical Sobel operators to find the gradient and intensity of the image [27] as follows:

$$ g_x=\frac{1}{4}\begin{bmatrix}-1 & 0 & 1\\ -2 & 0 & 2\\ -1 & 0 & 1\end{bmatrix}\qquad g_y=\frac{1}{4}\begin{bmatrix}1 & 2 & 1\\ 0 & 0 & 0\\ -1 & -2 & -1\end{bmatrix} $$
(1)

The gradient boundary value of the image is defined as

$$ Q=\frac{1}{n}\sum_x\sum_y S\left(x,y\right)^{2}\quad \left(S>T\right) $$
(2)

In Eq. (2), \( S=\sqrt{g_x^2+g_y^2} \), T is the given edge detection threshold, and n is the total number of pixels. After the image is processed by the Sobel operator, the larger the average value, the sharper the image. This paper uses the Zxing software package to evaluate the clarity of each QR code before deblurring. The results are shown in Fig. 2. It can be seen that the Tenengrad method performs well for QR code image clarity evaluation: the larger the defocus blur radius, the more blurred the image. The method represents images with sharp edges well, and for QR code images with different degrees of blur, recognizability can be distinguished according to the boundary value.
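As a minimal sketch, the Tenengrad score of Eqs. (1)–(2) can be computed as below. The function name and the use of `scipy.ndimage.sobel` are our own illustration (the paper's experiments used Matlab); the 1/4 factor matches the normalized Sobel kernels of Eq. (1).

```python
import numpy as np
from scipy.ndimage import sobel

def tenengrad(img, threshold=0.0):
    """Tenengrad sharpness (Eq. 2): mean of squared gradient
    magnitudes S(x, y) over pixels where S exceeds the threshold T."""
    img = img.astype(np.float64)
    gx = sobel(img, axis=1) / 4.0   # horizontal Sobel, scaled as in Eq. (1)
    gy = sobel(img, axis=0) / 4.0   # vertical Sobel
    s = np.sqrt(gx ** 2 + gy ** 2)  # gradient magnitude S
    mask = s > threshold            # edge-detection threshold T
    return (s[mask] ** 2).sum() / img.size
```

A sharper image yields a larger score, so comparing the score against the empirical boundary value decides recognizability.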

Fig. 2
figure2

Recognition range of different versions of QR code images (before deblurring): (a) The first version of QR code images, (b) The second version of QR code images, (c) The third version of QR code images, (d) The fourth version of QR code images, (e) The fifth version of QR code images

To obtain the boundary value accurately, we select QR code images with three different contents for each version and simulate 300 blurred images with blur radii in [0.05, 15]. As Fig. 2 shows, there is a clear relationship between the recognizability of a QR code and its Tenengrad value, which can therefore serve as an indicator of whether a QR code image can be recognized. According to the statistics of the experimental results, QR code images of the same version but different contents have similar recognition boundaries. The boundary defocus radii B of versions 1–5 are about B1 = 9.1, B2 = 8.9, B3 = 6.75, B4 = 5.5, and B5 = 4.9. To further improve the accuracy of the boundary value, we refine the intervals and deblur the images in steps of ±0.02 defocus radius. We thus obtain the effective boundary values (Q) at different scales for different versions, as shown in Table 1. From Q we compute the average boundary value (Qmax) for each version, which constitutes the evaluation mechanism proposed in this paper, as shown in Table 2 (the averages of Table 1). Table 2 shows that the evaluation value increases with scale and, relatively, with version.

Table 1 Effective boundary values (Q) at different scales under different versions
Table 2 Average boundary values (Qmax) at different scales under different versions

As is well known, QR codes have a certain error correction capability, and the higher the version, the more content it stores. In addition, the chosen error correction level and the type of encoded data influence this capability [2]. Extensive experiments show that the deblurring intervals of versions 1–5 can be roughly set as I1 ∈ [9.1, 12.2], I2 ∈ [8.9, 11.8], I3 ∈ [6.75, 11.35], I4 ∈ [5.5, 11.1], and I5 ∈ [4.9, 9.95]. Since image deblurring requires six scale iterations, we introduce a scale control parameter p (p ∈ [0, 5]) to improve the accuracy of the algorithm: each blurred image skips the comparisons at the first p scales during the scale iteration process, after which we compare the recognition rates of the QR codes. We then randomly generate QR code sample libraries in these intervals and obtain the recognition rate R and deblurring time T of the identifiable images, as shown in Table 3. From Table 3, the relationship between R and T for the same p can be observed: as p increases, R and T also increase, and the larger p is, the longer the time required between adjacent scales. That is, while improving the deblurring speed of QR code images, a certain recognition rate is lost.

Table 3 The recognition rate and average operation time of QR code images under different scale control parameters p

Adaptive scale control

QR codes can be read at high speed and support error correction. That is, when a QR code image is normalized and multi-scale iterative deblurring is performed, the image may become recognizable before the next scale iteration. Based on this feature of QR code images, this paper makes a judgment at each scale (see Figs. 3, 4, 5, 6, 7 and 8). Fig. 3 is the collection of input images, and Figs. 4, 5, 6, 7 and 8 show the effect of each iteration on the QR code images under the existing blind deblurring algorithms [8, 34, 35]. It can be clearly seen that as the number of iterations increases, the QR codes gradually become clearer, but the blurred QR code image is already recognizable before the iterations end. This is because a QR code has a certain error correction rate and can be recognized even when the image is not optimal.

Fig. 3
figure3

The 300 × 300 blurred QR code images: (a) The first version of QR code image, (b) The second version of QR code image, (c) The third version of QR code image

Fig. 4
figure4

The first version of the blind deblurring algorithm for QR code image: (a) Latent image with the scale of 52 × 52, (b) Latent image with the scale of 74 × 74, (c) Latent image with the scale of 105 × 105, (d) Latent image with the scale of 148 × 148, (e) Latent image with the scale of 211 × 211, (f) Latent image with the scale of 300 × 300

Fig. 5
figure5

The second version of the blind deblurring algorithm for QR code image: (a) Latent image with the scale of 52 × 52, (b) Latent image with the scale of 74 × 74, (c) Latent image with the scale of 105 × 105, (d) Latent image with the scale of 148 × 148, (e) Latent image with the scale of 211 × 211, (f) Latent image with the scale of 300 × 300

Fig. 6
figure6

The third version of the blind deblurring algorithm for QR code image: (a) Latent image with the scale of 52 × 52, (b) Latent image with the scale of 74 × 74, (c) Latent image with the scale of 105 × 105, (d) Latent image with the scale of 148 × 148, (e) Latent image with the scale of 211 × 211, (f) Latent image with the scale of 300 × 300

Fig. 7
figure7

The fourth version of the blind deblurring algorithm for QR code image: (a) Latent image with the scale of 52 × 52, (b) Latent image with the scale of 74 × 74, (c) Latent image with the scale of 105 × 105, (d) Latent image with the scale of 148 × 148, (e) Latent image with the scale of 211 × 211, (f) Latent image with the scale of 300 × 300

Fig. 8
figure8

The fifth version of the blind deblurring algorithm for QR code image: (a) Latent image with the scale of 52 × 52, (b) Latent image with the scale of 74 × 74, (c) Latent image with the scale of 105 × 105, (d) Latent image with the scale of 148 × 148, (e) Latent image with the scale of 211 × 211, (f) Latent image with the scale of 300 × 300

Based on this property of QR codes, this paper introduces the Tenengrad method into the existing blind deblurring algorithms to realize adaptive scale control of QR code images, thereby improving the speed of blind deblurring.

Fast blind deblurring algorithm

Based on the existing blind deblurring algorithms [8, 9, 34,35,36,37] and the evaluation mechanism [9, 27, 28, 30, 31, 38], this paper proposes a fast blind deblurring algorithm for QR code images based on adaptive scale control. First, we uniformly normalize the input QR code images. The images are then processed by the multi-scale division, image estimation, and blur kernel estimation algorithms. Finally, the Tenengrad evaluation function is introduced to obtain the empirical value. Image normalization scales the input into a 300 × 300 matrix without losing any image information. Multi-scale division splits the deblurring operation across multiple scales: the blur kernel estimated at each scale is up-sampled to the next scale and used as its input, eventually yielding a clear QR code image. Image estimation deconvolves the blur kernel from the blurred QR code image to predict a clearer QR code image. Blur kernel estimation takes the clearer QR code image and the blurred image obtained in the previous step as input and produces a more accurate blur kernel. When the maximum number of iterations is reached, the Tenengrad method is applied. We use this method because the original reference image is unavailable; it estimates image clarity directly from the gradient features of the image. The method first obtains empirical values for QR code images from a large number of experimental statistics. Each test image is then evaluated, and the evaluation value Q is compared with the empirical value Qmax before the deblurring operation at each scale. If Q > Qmax, the QR code at this scale can already be identified and no further operation is required, which achieves the adaptive scale effect of blind deblurring. The overall flow of the algorithm is shown in Fig. 9.
The proposed algorithm is analysed in three main parts: multi-scale division, image estimation, and blur kernel estimation.

Fig. 9
figure9

Overall flow chart of the algorithm in this paper
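The overall flow can be sketched as a control loop. Everything below is our own illustration: the function names and arguments are hypothetical, and the per-scale estimators, resampling routines, and sharpness function are passed in as placeholders, since the actual optimizers are described in the following subsections.

```python
def adaptive_deblur(y, scales, q_max, estimate_latent, estimate_kernel,
                    sharpness, downsample, upsample_kernel, p=0, inner_iters=5):
    """Coarse-to-fine deblurring with adaptive scale control (early exit).

    y: blurred input image; scales: list of (image_size, kernel_size)
    pairs, coarse to fine; q_max: empirical Tenengrad boundary (Qmax) for
    this QR version; p: number of coarsest scales to skip when evaluating.
    Returns the latent estimate and the scale index at which we stopped.
    """
    x, k = None, None
    for i, (img_size, k_size) in enumerate(scales):
        y_l = downsample(y, img_size)        # down-sample the blurred input
        k = upsample_kernel(k, k_size)       # init / up-sample the kernel
        for _ in range(inner_iters):         # alternate x- and k-updates
            x = estimate_latent(y_l, k)
            k = estimate_kernel(x, y_l)
        # adaptive scale control: skip the p coarsest scales, then stop
        # as soon as the evaluation value clears the empirical boundary
        if i >= p and sharpness(x) > q_max:
            return x, i                      # early exit: already decodable
    return x, len(scales) - 1
```

The early exit is what saves the remaining fine-scale iterations when the intermediate latent image is already recognizable.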

Multi-scale division

The multi-scale division algorithm first unifies the input blurred QR code image y into a 300 × 300 matrix and the blur kernel k into a 31 × 31 matrix. Following Cho's coarse-to-fine scheme [36] for estimating the latent image and blur kernel, the algorithm initializes k and down-samples y at the coarsest scale, and then estimates the image and blur kernel. Each scale performs five iterative optimizations of k. The estimated k is then up-sampled to the next scale by bilinear interpolation, the estimation is executed again, and the blurred image y is down-sampled accordingly. The scale division formulas are as follows (see (3) and (4)):

$$ d=\frac{\lg \left(5/{k}_d\right)}{\lg (r)}+1 $$
(3)
$$ {k}_l={r}^{d-1} $$
(4)

In Eqs. (3) and (4), \( r=\sqrt{0.5} \), d is the number of scale divisions, kd is the estimated blur kernel size, and kl is the divided blur kernel scale. For a 31 × 31 blur kernel and a 300 × 300 QR code image, multi-scale division with rounding down yields 6 scales. The blur kernel sizes are 7 × 7, 9 × 9, 11 × 11, 17 × 17, 23 × 23, and 31 × 31 from coarse to fine, and the image is correspondingly down-sampled to 52 × 52, 74 × 74, 105 × 105, 148 × 148, 211 × 211, and 300 × 300. yl denotes the image after division.
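As a sketch, the scale count of Eq. (3) and a geometric coarse-to-fine size ladder with ratio \( r=\sqrt{0.5} \) can be computed as follows. The exact per-scale sizes depend on rounding conventions (the paper reports 52 × 52 … 300 × 300 and 7 × 7 … 31 × 31, which differ slightly from plain rounding), so the function name and the odd-size rounding rule here are our own assumptions.

```python
import math

def scale_schedule(k_d=31, img_size=300, r=math.sqrt(0.5)):
    """Number of scales from Eq. (3) plus geometric kernel/image sizes."""
    d = int(math.log(5 / k_d) / math.log(r) + 1)   # Eq. (3), rounded down
    kernels, images = [], []
    for level in range(d):                          # level 0 = finest scale
        scale = r ** level                          # geometric ratio r^level
        k = int(round(k_d * scale))
        if k % 2 == 0:
            k += 1                                  # keep kernel sizes odd
        kernels.append(k)
        images.append(int(round(img_size * scale)))
    return d, kernels[::-1], images[::-1]           # return coarse-to-fine
```

For kd = 31 this gives d = 6, matching the six scales used in the paper.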

Image estimation

Taking the first version as an example, a QR code image is a binary image with strong contrast: its background is white and its modules are black. Compared with a text image, each module of a QR code image is a whole square. For a clear image, the pixel intensities concentrate at 0 and 255, with many zero-valued pixels (see Fig. 10(b)). In contrast, a blurred image contains a large number of intermediate non-zero values, with very few pixels of zero intensity (see Fig. 11(b)). In terms of image gradient, the pixel gradient feature suppresses artifacts in QR codes well: in non-edge regions the gradient is zero, while between black and white modules zero gradients are relatively rare (see Fig. 10(c) and Fig. 11(c)). QR code images with different degrees of blur have similar intensity and gradient distributions. Therefore, the prior for QR code image blind deblurring can be defined as:

Fig. 10
figure10

Features of clear QR code image: (a) Clear QR code sample, (b) Strength feature, (c) Gradient feature

Fig. 11
figure11

Features of blurred QR code image: (a) Blurred QR code sample, (b) Strength feature, (c) Gradient feature

$$ P(x)=\sigma {P}_t(x)+{P}_t\left(\nabla x\right) $$
(5)

In Eq. (5), σ is a weight. Pt(∇x) describes the QR code image gradient, with Pt(∇x) = ||∇x||0; Pt(x) describes the intensity characteristics of the QR code image, with Pt(x) = ||x||0, where || · ||0 denotes the L0 norm.
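Since the L0 "norm" simply counts non-zero entries, the prior of Eq. (5) can be sketched directly; the function name and the use of one-sided finite differences for ∇x are our own illustrative choices.

```python
import numpy as np

def qr_prior(x, sigma):
    """Eq. (5): P(x) = sigma * ||x||_0 + ||grad x||_0 (non-zero counts)."""
    p_intensity = np.count_nonzero(x)            # ||x||_0: non-zero pixels
    gx = np.diff(x, axis=1)                      # horizontal differences
    gy = np.diff(x, axis=0)                      # vertical differences
    p_gradient = np.count_nonzero(gx) + np.count_nonzero(gy)  # ||grad x||_0
    return sigma * p_intensity + p_gradient
```

A blurred QR code has many intermediate grey values and smeared edges, so both counts, and hence the prior, are larger than for a clear binary image.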

Based on this prior of the QR code image, this paper adopts the algorithmic framework of Pan [8, 9]. Assuming the blur kernel is known or initialized, a non-blind deconvolution method is used to obtain a clear image, that is, the L0 regularization of the intensity and gradient of the image is used as the constraint below:

$$ \hat{x}=\underset{x}{\min }{\left\Vert x\bigotimes k-y\right\Vert}_2^2+\lambda P(x) $$
(6)

In Eq. (6), x is the clear image, k denotes the blur kernel, y is the blurred image, λ is a weight parameter, P(x) is the image prior, ⊗ is the convolution operator, and \( {\left\Vert \cdot \right\Vert}_2^2 \) is the regularization constraint term. We introduce u and g = (gh, gv)T as auxiliary variables: u approximates x, and g approximates ∇x, giving:

$$ \kern0.5em \hat{x}=\underset{x,u,g}{\min }{\left|\left|x\bigotimes k-y\right|\right|}_2^2+\beta {\left|\left|x-u\right|\right|}_2^2+\mu {\left|\left|\nabla x-g\right|\right|}_2^2+\lambda \left(\sigma {\left|\left|u\right|\right|}_0+{\left|\left|g\right|\right|}_0\right) $$
(7)

In Eq. (7), β and μ are regularization parameters. As their values approach ∞, Eq. (7) becomes equivalent to Eq. (6). We use an alternating minimization method, fixing the other variables while minimizing over x. The auxiliary variables u and g are initialized to zero. In each iteration, the solution for x is obtained as follows.

$$ x=\underset{x}{\min }{\left|\left|x\bigotimes k-y\right|\right|}_2^2+\beta {\left|\left|x-u\right|\right|}_2^2+\mu {\left\Vert \nabla x-g\right\Vert}_2^2 $$
(8)

The closed-form solution of this minimization problem is obtained by setting the derivative of Eq. (8) to zero, and the fast Fourier transform converts the convolution in the spatial domain into a point-wise multiplication in the frequency domain:

$$ x={F}^{-1}\left(\frac{\overline{F(k)}F(y)+\beta F(u)+\mu {F}_G}{\overline{F(k)}F(k)+\beta +\mu \left(\overline{F\left({\nabla}_h\right)}F\left({\nabla}_h\right)+\overline{F\left({\nabla}_v\right)}F\left({\nabla}_v\right)\right)}\right) $$
(9)

In Eq. (9), F( ) and F−1( ) denote the fast Fourier transform (FFT) and inverse FFT, respectively, \( \overline{F\left(\ \right)} \) is the complex conjugate operator, and \( {F}_G=\overline{F\left({\nabla}_h\right)}F\left({g}_h\right)+\overline{F\left({\nabla}_v\right)}F\left({g}_v\right) \), where ∇h and ∇v denote the horizontal and vertical differential operators. After solving for x, we compute u and g (see (10) and (11)):

$$ \kern0.5em u=\underset{u}{\min}\beta {\left|\left|x-u\right|\right|}_2^2+\lambda \sigma {\left|\left|u\right|\right|}_0 $$
(10)
$$ \kern0.5em g=\underset{g}{\min}\mu {\left|\left|\nabla x-g\right|\right|}_2^2+\lambda {\left|\left|g\right|\right|}_0 $$
(11)

Eqs. (10) and (11) are pixel-based minimization problems. The solutions of u and g are obtained (see (12) and (13)) based on Liu [10].

$$ u=\left\{\begin{array}{c}x,{\left|x\right|}^2\ge \frac{\lambda \sigma}{\beta}\kern0.5em \\ {}0,\kern2.25em else\kern0.75em \end{array}\right. $$
(12)
$$ g=\left\{\begin{array}{c}\nabla x,{\left|\nabla x\right|}^2\ge \frac{\lambda }{\mu}\kern0.5em \\ {}0,\kern2.25em else\kern1.25em \end{array}\right. $$
(13)
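The alternating updates above can be sketched in a few lines: the x-update of Eq. (9) is a closed form in the frequency domain, and the u- and g-updates of Eqs. (12)–(13) are element-wise hard thresholds. This is our own illustration, assuming circular boundary conditions; the `psf2otf` helper (pad the kernel and centre it at the origin) is a standard trick, not part of the paper.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad the PSF to `shape` and circularly centre it at (0, 0)."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        otf = np.roll(otf, -(size // 2), axis=axis)
    return np.fft.fft2(otf)

def solve_x(y, k, u, gh, gv, beta, mu):
    """x-update of Eq. (9): closed-form solution in the frequency domain."""
    Fk = psf2otf(k, y.shape)
    Fdh = psf2otf(np.array([[1.0, -1.0]]), y.shape)   # horizontal difference
    Fdv = psf2otf(np.array([[1.0], [-1.0]]), y.shape) # vertical difference
    FG = np.conj(Fdh) * np.fft.fft2(gh) + np.conj(Fdv) * np.fft.fft2(gv)
    num = np.conj(Fk) * np.fft.fft2(y) + beta * np.fft.fft2(u) + mu * FG
    den = (np.conj(Fk) * Fk + beta
           + mu * (np.conj(Fdh) * Fdh + np.conj(Fdv) * Fdv))
    return np.real(np.fft.ifft2(num / den))

def update_u(x, lam, sigma, beta):
    """u-update of Eq. (12): keep x where |x|^2 >= lambda*sigma/beta."""
    return np.where(x ** 2 >= lam * sigma / beta, x, 0.0)

def update_g(grad_x, lam, mu):
    """g-update of Eq. (13): keep gradients with |grad x|^2 >= lambda/mu."""
    return np.where(grad_x ** 2 >= lam / mu, grad_x, 0.0)
```

One inner iteration then consists of `solve_x` followed by `update_u` and `update_g` applied to x and its gradients.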

Blurred kernel estimation

For a given image x, this paper follows Cho [36] in using predicted gradients to further estimate the blur kernel by minimizing the energy function below (see (14)):

$$ k=\underset{k}{\min }{\left\Vert \nabla x\bigotimes k-\nabla y\right\Vert}_2^2+\gamma {\left|\left|k\right|\right|}_2^2 $$
(14)

Similarly, for efficient computation, we differentiate Eq. (14) and use the fast Fourier transform to turn the convolution in the spatial domain into a point-wise multiplication in the frequency domain, obtaining (see (15)):

$$ \frac{\partial f(k)}{\partial k}={F}^{-1}\left(2\overline{F\left(\nabla x\right)}F\left(\nabla x\right)F(k)+2\gamma F(k)-2\overline{F\left(\nabla x\right)}F\left(\nabla y\right)\right) $$
(15)

In Eq. (15), the gradient of f(k) must be evaluated multiple times during the minimization. F and F−1 again denote the Fourier transform and its inverse, and \( \overline{F\left(\ \right)} \) is the complex conjugate of the Fourier transform. F(∇x) and F(∇y) are precomputed before the conjugate gradient iterations start, reducing the number of FFT computations per iteration.
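For illustration, the frequency-domain gradient of Eq. (14) can be sketched as follows; note that the ridge term γ||k||² contributes +2γF(k) to the derivative. For simplicity this sketch keeps all arrays at one common shape (in practice the kernel is much smaller and must be padded), and the function name is our own.

```python
import numpy as np

def kernel_gradient(k, grad_x, grad_y, gamma):
    """Gradient of Eq. (14) w.r.t. k, evaluated per Eq. (15) via FFTs.

    grad_x, grad_y: gradients of the latent and blurred images; here all
    arrays share one shape so the FFTs line up without explicit padding.
    """
    Fx = np.fft.fft2(grad_x)
    Fy = np.fft.fft2(grad_y)
    Fk = np.fft.fft2(k)
    # 2*conj(F(grad x))*F(grad x)*F(k) + 2*gamma*F(k) - 2*conj(F(grad x))*F(grad y)
    g = 2 * np.conj(Fx) * Fx * Fk + 2 * gamma * Fk - 2 * np.conj(Fx) * Fy
    return np.real(np.fft.ifft2(g))
```

At the true kernel (and γ = 0) the data term is exactly fit, so the gradient vanishes, which is the stopping condition the conjugate gradient solver drives toward.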

Experimental results and analysis

The error correction level of the QR code images is H. A series of experiments showed that, compared with other QR code generators, the Caoliao QR code website produces QR codes whose Tenengrad values vary little across different error correction rates. Therefore, this paper uses the Caoliao website to generate the corresponding versions of QR codes and the Zxing 3.4.0 software package to decode them. The input images are uniformly initialized to 300 × 300 matrices. The experimental environment is an Intel i7 processor with a 3.20 GHz CPU and Matlab 2018a. We compare the deblurring effects of different algorithms in simulation experiments, and in the second subsection analyse the advantages of the algorithms along two dimensions: deblurring time and post-deblurring recognition rate.

Simulation experiment

This section carries out simulation experiments: for each version, six clear QR code images are generated at random, and 20 blurred QR code images are randomly generated as sample libraries within the corresponding defocus intervals I1, I2, I3, I4, and I5. We then select one sample and 20 samples for each QR code version, respectively. The effect comparison for a single sample is shown in Figs. 12, 13, 14, 15 and 16, and the time comparison in Table 4. Table 5 gives the mean deblurring time of the different algorithms and different control parameters p over multiple samples. At the same degree of blur, Pan's algorithm and Wen's algorithm have good deblurring capability, but Zoran's algorithm shows an obvious ringing effect, which hampers the recognition of QR code images. Looking at the different scale control parameters p, even when p is small the output image remains recognizable. In terms of time, when the scale control parameter p of our algorithm is small, the deblurring speed is improved by about an order of magnitude compared with the other algorithms.

Fig. 12
figure12

Version 1 - comparison of restoration effects under different algorithms and adaptive scale p: (a) Blurred image, (b) Zoran et al., (c) Pan et al., (d) Wen et al., (e) p = 0, (f) p = 1, (g) p = 2, (h) p = 3, (i) p = 4, (j) p = 5, (k) Ground truth

Fig. 13
figure13

Version 2 - comparison of restoration effects under different algorithms and adaptive scale p: (a) Blurred image, (b) Zoran et al., (c) Pan et al., (d) Wen et al., (e) p = 0, (f) p = 1, (g) p = 2, (h) p = 3, (i) p = 4, (j) p = 5, (k) Ground truth

Fig. 14
figure14

Version 3 - comparison of restoration effects under different algorithms and adaptive scale p: (a) Blurred image, (b) Zoran et al., (c) Pan et al., (d) Wen et al., (e) p = 0, (f) p = 1, (g) p = 2, (h) p = 3, (i) p = 4, (j) p = 5, (k) Ground truth

Fig. 15
figure15

Version 4 - comparison of restoration effects under different algorithms and adaptive scale p: (a) Blurred image, (b) Zoran et al., (c) Pan et al., (d) Wen et al., (e) p = 0, (f) p = 1, (g) p = 2, (h) p = 3, (i) p = 4, (j) p = 5, (k) Ground truth

Fig. 16
figure16

Version 5 - comparison of restoration effects under different algorithms and adaptive scale p: (a) Blurred image, (b) Zoran et al., (c) Pan et al., (d) Wen et al., (e) p = 0, (f) p = 1, (g) p = 2, (h) p = 3, (i) p = 4, (j) p = 5, (k) Ground truth

Table 4 Deblurring time (s) for the five figures above
Table 5 Mean deblurring time (s) of different restoration algorithms and different control parameters p

Deblurring time and recognition rate

Combining Tables 3, 4 and 5 with Figs. 12, 13, 14, 15 and 16, the most appropriate range of p for versions 1–3 is [0, 2], and for versions 4 and 5 it is [2, 4]. The size of the QR code output image is determined by the defocus radius of the input image and the identifiability of the QR code at the current scale, so uniform iteration down to the last scale is not required, which saves a large amount of iteration time. QR code identifiability uses the evaluation mechanism proposed in this paper, i.e., the Tenengrad method, to compare the generated evaluation value. The recognizability of the output images is shown in Figs. 17, 18, 19, 20 and 21. In these five figures, each dot represents the output image of a different blurred input: a green dot means the image can be recognized, while a red dot means it cannot. We chose the ranges of the scale control parameter with large variation in recognition rate, namely p ∈ [0, 2] for QR code versions 1–3 and p ∈ [2, 4] for versions 4–5. As can be seen from Figs. 17, 18, 19, 20 and 21, for versions 1 and 2, the recognition rate is highest when p = 2, reaching 99% and 96.3% respectively. For version 3, the recognition rate is highest when p = 0 or p = 1, reaching 97.67%, while for versions 4 and 5, the restoration effect is good when p = 4, with a recognition rate of up to 99.67%. From these data, it can be concluded that our method has a high recognition rate and that, under normal circumstances, the recognition rate increases as p increases.
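The early-stopping logic described above can be sketched as follows. This is a schematic reconstruction, not the authors' implementation: `estimate_latent`, `try_decode`, and `threshold` are placeholders for the latent-image estimation step, a QR decoder such as ZXing, and the empirically calibrated Tenengrad value, and we assume for illustration that p simply caps how many coarse-to-fine scales are visited.

```python
def adaptive_deblur(blurred, estimate_latent, sharpness, try_decode,
                    threshold, n_scales, p):
    """Coarse-to-fine deblurring with adaptive scale control (schematic).

    Iterates from the coarsest scale toward the finest, but returns as
    soon as the restored image is sharp enough (score >= threshold) and
    actually decodable, instead of always running every scale.
    """
    latent = blurred
    for scale in range(min(n_scales, p + 1)):  # p caps the scales visited
        latent = estimate_latent(latent, scale)
        if sharpness(latent) >= threshold and try_decode(latent):
            break  # already recognizable: skip the remaining finer scales
    return latent
```

With such a rule, a slightly blurred input exits after one or two scales, which is where the order-of-magnitude speedup for small p would come from.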

Fig. 17
figure17

Recognition rate of 300 random QR code pictures of the first version in the defocus interval I1 at the adaptive scale (p ∈ [0,2]), (a) p = 0, (b) p = 1, (c) p = 2

Fig. 18
figure18

Recognition rate of 300 random QR code pictures of the second version in the defocus interval I2 at the adaptive scale (p ∈ [0,2]), (a) p = 0, (b) p = 1, (c) p = 2

Fig. 19
figure19

Recognition rate of 300 random QR code pictures of the third version in the defocus interval I3 at the adaptive scale (p ∈ [0,2]), (a) p = 0, (b) p = 1, (c) p = 2

Fig. 20
figure20

Recognition rate of 300 random QR code pictures of the fourth version in the defocus interval I4 at the adaptive scale (p ∈ [2, 4]), (a) p = 2, (b) p = 3, (c) p = 4

Fig. 21
figure21

Recognition rate of 300 random QR code pictures of the fifth version in the defocus interval I5 at the adaptive scale (p ∈ [2, 4]), (a) p = 2, (b) p = 3, (c) p = 4

Last but not least, deblurring time and recognition rate are compared between the proposed algorithm and other blind deblurring algorithms, as shown in Figs. 22 and 23 respectively. The analysis of these figures reveals that the running speed of our algorithm improves greatly, and its recognition rate is higher than those of Zoran’s and Wen’s algorithms, though lower than that of Pan’s algorithm. For slightly blurred images, it can easily be determined that the QR code is already recognizable at a small scale, so the deblurring process terminates before the next scale and avoids redundant computation. The results also show that, within the recognizable range after deblurring, the running time of our algorithm gradually increases with the degree of blur, and likewise increases with the version number and the scale control parameter. Overall, the proposed algorithm has more practical application value than the existing algorithms in the field of blind deblurring of QR codes.

Fig. 22
figure22

Deblurring time of 1–5 versions of QR codes between the proposed algorithm and other blind deblurring algorithms

Fig. 23
figure23

Recognition rate of 1–5 versions of QR codes between the proposed algorithm and other blind deblurring algorithms

Discussion and conclusion

In this paper, we propose a simple and fast algorithm for blind deblurring of QR code images based on adaptive scale control, which has wide applications in e-commerce, web and mobile computing, and the Internet of Things. Building on existing blind deblurring algorithms, an evaluation mechanism is presented to realize image scale adaptation. In addition, we introduce a scale control parameter to measure the relationship between the parameter and the recognition rate and deblurring time. The experimental results show that the proposed algorithm can quickly identify QR codes of the unified versions generated in various industrial fields, such as the traceability of salt, seeds, and medicines. In comparison with the existing blind deblurring algorithms, our approach has advantages in both speed and efficacy.

There are still some limitations to the proposed algorithm. The first is that it currently applies only to uniformly blurred QR code images; for unevenly blurred QR code images, its recovery effect is less satisfactory. The second is that QR images may easily be corrupted or attacked by so-called adversarial perturbations. For future work, we will continue to address these limitations and study a more universal deblurring method. We will also explore combining state-of-the-art techniques to further improve performance. In practice, a customized two-dimensional barcode [5] can effectively prevent the forgery of physical labels while maintaining good anti-counterfeiting aesthetics, a unified gradient regularization family [39] can counter the adversarial perturbations that QR images may suffer, and spectral-domain feature segmentation [40] can extract QR code images in complex environments.

References

  1. Ai Y, Wang L, Han Z, Zhang P, Hanzo L (2018) Social networking and caching aided collaborative computing for the internet of things. IEEE Commun Mag 56(12):149–155. https://doi.org/10.1109/MCOM.2018.1701089

  2. Kieseberg P, Leithner M, Mulazzani M, Munroe L, Schrittwieser S, Sinha M, Weippl E (2010) QR code security. In: Proceedings of the 8th International Conference on Advances in Mobile Computing and Multimedia (ACM), pp 430–435. https://doi.org/10.1145/1971519.1971593

  3. Focardi R, Luccio FL, Wahsheh HA (2019) Usable security for QR code. J Inf Secur Appl 48:102369. https://doi.org/10.1016/j.jisa.2019.102369

  4. Chen C (2017) QR code authentication with embedded message authentication code. Mobile Netw Appl 22(3):383–394. https://doi.org/10.1007/s11036-016-0772-y

  5. Chen R, Yu Y, Chen J, Zhong Y, Zhao H, Tan HZ (2020) Customized 2D barcode sensing for anti-counterfeiting application in smart IoT with fast encoding and information hiding. Sensors 20(17):4926. https://doi.org/10.3390/s20174926

  6. Jiang B, Ji Y, Tian X, Wang X (2019) Batch reading densely arranged QR codes. In: IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, pp 1216–1224. https://doi.org/10.1109/INFOCOM.2019.8737440

  7. Zoran D, Weiss Y (2011) From learning models of natural image patches to whole image restoration. In: 2011 International Conference on Computer Vision (ICCV), pp 479–486. https://doi.org/10.1109/ICCV.2011.6126278

  8. Pan J, Hu Z, Su Z, Yang MH (2014) Deblurring text images via L0-regularized intensity and gradient prior. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 2901–2908. https://doi.org/10.1109/CVPR.2014.371

  9. Pan J, Hu Z, Su Z, Yang MH (2016) L0-regularized intensity and gradient prior for deblurring text images and beyond. IEEE Trans Pattern Anal Mach Intell 39(2):342–355. https://doi.org/10.1109/TPAMI.2016.2551244

  10. Liu N, Du Y, Xu Y (2018) QR codes blind deconvolution algorithm based on binary characteristic and L0 norm minimization. Pattern Recogn Lett 111:117–123. https://doi.org/10.1016/j.patrec.2018.04.036

  11. Nah S, Hyun Kim T, Mu Lee K (2017) Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 3883–3891. https://doi.org/10.1109/CVPR.2017.35

  12. Bai Y, Cheung G, Liu X, Gao W (2018) Graph-based blind image deblurring from a single photograph. IEEE Trans Image Process 28(3):1404–1418. https://doi.org/10.1109/TIP.2018.2874290

  13. Wen F, Ying R, Liu Y, Liu P, Truong TK (2020) A simple local minimal intensity prior and an improved algorithm for blind image deblurring. IEEE Trans Circuits Syst Video Technol. https://doi.org/10.1109/TCSVT.2020.3034137

  14. Pan J, Sun D, Pfister H, Yang MH (2016) Blind image deblurring using dark channel prior. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 1628–1636. https://doi.org/10.1109/CVPR.2016.180

  15. Ren W, Cao X, Pan J, Guo X, Zuo W, Yang MH (2016) Image deblurring via enhanced low-rank prior. IEEE Trans Image Process 25(7):3426–3437. https://doi.org/10.1109/TIP.2016.2571062

  16. Bai Y, Jia H, Jiang M, Liu X, Xie X, Gao W (2020) Single image blind deblurring using multi-scale latent structure prior. IEEE Trans Circuits Syst Video Technol 30(7):2033–2045. https://doi.org/10.1109/TCSVT.2019.2919159

  17. Chen L, Fang F, Wang T, Zhang G (2019) Blind image deblurring with local maximum gradient prior. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 1742–1750. https://doi.org/10.1109/CVPR.2019.00184

  18. Ren D, Zhang K, Wang Q, Hu Q, Zuo W (2020) Neural blind deconvolution using deep priors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 3341–3350. https://doi.org/10.1109/CVPR42600.2020.00340

  19. Liu YQ, Du X, Shen HL, Chen SJ (2021) Estimating generalized Gaussian blur kernels for out-of-focus image deblurring. IEEE Trans Circuits Syst Video Technol 31(3):829–843. https://doi.org/10.1109/TCSVT.2020.2990623

  20. Nan Y, Ji H (2020) Deep learning for handling kernel/model uncertainty in image deconvolution. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 2388–2397. https://doi.org/10.1109/CVPR42600.2020.00246

  21. Ren D, Zuo W, Zhang D, Zhang L, Yang MH (2019) Simultaneous fidelity and regularization learning for image restoration. IEEE Trans Pattern Anal Mach Intell 43:284–299. https://doi.org/10.1109/TPAMI.2019.2926357

  22. Zhang J, Ghanem B (2018) ISTA-Net: interpretable optimization-inspired deep network for image compressive sensing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 1828–1837. https://doi.org/10.1109/CVPR.2018.00196

  23. Shi W, Jiang F, Liu S, Zhao D (2019) Scalable convolutional neural network for image compressed sensing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 12290–12299. https://doi.org/10.1109/CVPR.2019.01257

  24. Nan Y, Quan Y, Ji H (2020) Variational-EM-based deep learning for noise-blind image deblurring. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 3626–3635. https://doi.org/10.1109/CVPR42600.2020.00368

  25. Quan Y, Chen M, Pang T, Ji H (2020) Self2Self with dropout: learning self-supervised denoising from single image. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 1890–1898. https://doi.org/10.1109/CVPR42600.2020.00196

  26. Ding Q, Chen G, Zhang X, Huang Q, Ji H, Gao H (2020) Low-dose CT with deep learning regularization via proximal forward backward splitting. Phys Med Biol 65(12):125009. https://doi.org/10.1088/1361-6560/ab831a

  27. Chern NNK, Neow PA, Ang MH (2001) Practical issues in pixel-based autofocusing for machine vision. In: Proceedings 2001 ICRA, IEEE International Conference on Robotics and Automation, pp 2791–2796. https://doi.org/10.1109/ROBOT.2001.933045

  28. Llano EG, Vázquez MSG, Vargas JMC, Fuentes LMZ, Acosta AAR (2018) Optimized robust multi-sensor scheme for simultaneous video and image iris recognition. Pattern Recogn Lett 101:44–51. https://doi.org/10.1016/j.patrec.2017.11.012

  29. du Plessis A, Tshibalanganda M, le Roux SG (2020) Not all scans are equal: X-ray tomography image quality evaluation. Mater Today Commun 22:100792. https://doi.org/10.1016/j.mtcomm.2019.100792

  30. Karras T, Laine S, Aittala M, Hellsten J, Lehtinen J, Aila T (2020) Analyzing and improving the image quality of StyleGAN. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 8110–8119. https://doi.org/10.1109/CVPR42600.2020.00813

  31. Her L, Yang X (2019) Research of image sharpness assessment algorithm for autofocus. In: 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC), pp 93–98. https://doi.org/10.1109/ICIVC47709.2019.8980980

  32. Yang C, Chen M, Zhou F, Li W, Peng Z (2020) Accurate and rapid auto-focus methods based on image quality assessment for telescope observation. Appl Sci 10(2):658. https://doi.org/10.3390/app10020658

  33. Hu S, Li Z, Wang S, Ai M, Hu Q (2020) A texture selection approach for cultural artifact 3D reconstruction considering both geometry and radiation quality. Remote Sens 12(16):2521. https://doi.org/10.3390/rs12162521

  34. Pan J, Dong J, Tai YW, Su Z, Yang MH (2017) Learning discriminative data fitting functions for blind image deblurring. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp 1068–1076. https://doi.org/10.1109/ICCV.2017.122

  35. Pan J, Ren W, Hu Z, Yang MH (2018) Learning to deblur images with exemplars. IEEE Trans Pattern Anal Mach Intell 41(6):1412–1425. https://doi.org/10.1109/TPAMI.2018.2832125

  36. Cho S, Lee S (2009) Fast motion deblurring. ACM Trans Graph 28(5):1–8. https://doi.org/10.1145/1661412.1618491

  37. Whyte O, Sivic J, Zisserman A (2014) Deblurring shaken and partially saturated images. Int J Comput Vis 110(2):185–201. https://doi.org/10.1007/s11263-014-0727-3

  38. Qu Z, Huang X, Chen K, Liu L (2019) Algorithm of multiexposure image fusion with detail enhancement and ghosting removal. J Electron Imaging 28(1):13–22. https://doi.org/10.1117/1.JEI.28.1.013022

  39. Lyu C, Huang K, Liang HN (2015) A unified gradient regularization family for adversarial examples. In: 2015 IEEE International Conference on Data Mining (ICDM), pp 301–309. https://doi.org/10.1109/ICDM.2015.84

  40. Zabalza J, Ren J, Zheng J, Han J, Zhao H, Li S, Marshall S (2015) Novel two-dimensional singular spectrum analysis for effective feature extraction and data classification in hyperspectral imaging. IEEE Trans Geosci Remote Sensing 53(8):4418–4433. https://doi.org/10.1109/TGRS.2015.2398468


Acknowledgements

This work was supported by the Project for Distinctive Innovation of General Universities of Guangdong Province (NO.2018KTSCX120), the National Natural Science Foundation of China (NO.62072122), the Innovation Team Project of the Education Department of Guangdong Province (NO.2017KCXTD021), the Ph.D. Start-up Fund of Natural Science Foundation of Guangdong Province (NO.2016A030310335), the Education Department of Guangdong Province (NO.2019KSYS009), the Scientific and Technological Planning Projects of Guangdong Province (NO.2021A0505030074), and the Guangdong Colleges and Universities Young Innovative Talents Projects (NO.2018KQNCX138).

Author information

Corresponding authors

Correspondence to Huimin Zhao or Jinchang Ren.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chen, R., Zheng, Z., Pan, J. et al. Fast Blind Deblurring of QR Code Images Based on Adaptive Scale Control. Mobile Netw Appl (2021). https://doi.org/10.1007/s11036-021-01780-y


Keywords

  • QR code
  • Blind deblurring
  • Tenengrad method
  • Adaptive scale control