1 Introduction

With the rapid development of digital vision technology, higher image quality is required when images are transmitted and acquired [1, 2]. It is therefore necessary to embed an image quality monitoring module at display terminals. Such a module can adjust the quality of displayed images and videos in real time so that they satisfy the demands of human viewing.

No-reference image quality assessment has been proposed to monitor the image quality of processing systems and to overcome the difficulty that reference images are absent [3]. The technique has been studied intensively by many scholars [4,5,6,7,8]. In no-reference image quality assessment, quality features are the key factors extracted to represent the properties of image quality [9]. Models are built on these quality features to represent image quality, and such models can produce evaluation results that are consistent with the responses of the human visual system. Models based on structures [10] and models based on natural scene statistics (NSS) [11,12,13] have obtained good assessment results. Besides modelling the visual nerve responses directly, other effective assessment methods have been proposed to represent the characteristics of visual responses. Wang et al. [14] proposed a structure-based method that evaluates image quality by using structural information as quality features, exploiting the structural sensitivity of the human visual system. He et al. [15] proposed a sparsity-based method with satisfactory assessment results, which considers the sparsity of the human visual system in processing information. In particular, NSS has provided the basis for the development of natural image quality assessment [16,17,18,19], and NSS-based methods generally achieve high consistency with subjective perception. Machine learning has also been introduced to increase the accuracy of no-reference image quality assessment, and several methods have achieved results that agree well with human perception. Although image quality assessment has achieved results consistent with subjective perception, studies on evaluating image quality in real time are still lacking. No-reference image quality assessment approaches should monitor image quality instantaneously; they should be more efficient, and their results should remain consistent with human subjective perception.

In this paper, modelling the pixel-pair statistics [20] is applied to extract quality patches that represent the image quality, which speeds up the algorithm and improves the real-time ability of quality evaluation. The spatial correlation of pixel pairs is used to find the quality patches. The selected quality patches are processed by FastICA to obtain their independent components. The no-reference assessment result is then obtained efficiently and accurately by matching the Hamming distance between two binary codes: one code is calculated from a tested image quality patch, and the other is the address code of a Hash lookup table. The image quality is expressed as a weighted DMOS (differential mean opinion score) value stored in the indexed unit of the Hash table, where the stored DMOS values come from the published database LIVE2. The main operating procedure of the proposed no-reference image quality assessment method is shown in Fig. 1.

Fig. 1 The main operating procedure of the real-time no-reference image quality assessment based on quality patches

The rest of the paper is organized as follows. The method of generating image quality features is elucidated in Section 2. The proposed PIH-IQA algorithm based on image quality patches is described in Section 3. Experiments and discussions are presented with charts and tables in Section 4. Finally, Section 5 concludes the paper.

2 Generating image quality features

The operating scheme is shown in Fig. 1, and the method of selecting image quality patches is illustrated in this section. Two neighboring pixels sampled from a natural image are strongly correlated, and the range over which the dots in a pixel-pair scatter plot spread represents how dispersed the pixel values of the image are. The value difference is relatively large between pixels lying on edges, so the more edges an image contains, the wider the dots in its pixel-pair scatter plot are distributed. In other words, a wider dot distribution indicates more edges and structures, to which the human visual system is particularly sensitive. Accordingly, the distribution width of the dots in the scatter plot is used to find the image quality patches, which speeds up the algorithm.

2.1 Extracting image quality patches

Given a pair of neighboring pixels at positions (m, n) and (m, n + 1), the spatial dependency of pixel pairs is defined as the frequency of the grey-scale pair [G1, G2], where G1 is the grey-scale value of the pixel at (m, n) and G2 is the grey-scale value of the pixel at (m, n + 1). The scatter plot representing this spatial dependency is shown in Fig. 2. In the plot, the distribution width of the dots is found by calculating the maximum vertical distance from the non-zero dots to the diagonal line of the plot. According to the preceding study [20], the patches with the largest standard deviation have the largest distribution width and the largest grey-scale deviation. Hence, the more edges and structures an image contains, the larger the calculated distribution width.
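As an illustration, the following is a minimal sketch (in Python with NumPy, not the authors' code) of how the pixel-pair scatter plot and its distribution width could be computed from an 8-bit grey-scale image; the horizontal neighbour offset follows the definition above.

```python
import numpy as np

def pair_scatter(gray):
    """Frequency of each grey-scale pair [G1, G2] of horizontal neighbours (m, n) and (m, n + 1)."""
    g1 = gray[:, :-1].ravel().astype(np.intp)
    g2 = gray[:, 1:].ravel().astype(np.intp)
    scatter = np.zeros((256, 256), dtype=np.int64)
    np.add.at(scatter, (g1, g2), 1)                   # accumulate pair counts
    return scatter

def distribution_width(gray):
    """Maximum vertical distance |G2 - G1| from a non-zero dot to the diagonal of the scatter plot."""
    g1, g2 = np.nonzero(pair_scatter(gray))
    return int(np.abs(g2 - g1).max()) if g1.size else 0
```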

Fig. 2 The original image and its scatter plot of the spatial dependency of pixel pairs

To speed up the image quality assessment, it is necessary to neglect the smooth parts of an image: they carry little information, and their distortions have little effect on subjective perception. According to the human visual system, the salient regions of an image should be extracted and given particular attention when evaluating image quality. Since the target of this paper is a real-time method, the image quality is evaluated from a small number of image patches that represent it. These selected patches are called "image quality patches" in this paper, and they are selected by calculating the distribution width of the dots in the scatter plot that models the spatial dependency of pixel pairs. The image quality patches selected in Lena are shown in Fig. 3.

Fig. 3 The selected image quality patches in Lena

Image quality patches are selected by the following steps:

(1) Generate a scatter plot of the image being evaluated.

(2) Calculate the distribution width Iw of the dots in the scatter plot of the input image.

(3) Split the input image into patches of size a × b (32 × 32 in this paper) and calculate the distribution width Ipw of each image patch.

(4) Calculate the ratio of Ipw to Iw and record it as RWID. Here, RWID is defined as the ratio of the dot distribution width of the scatter plot of an image patch to that of the entire image.

(5) Output image quality patches according to the following rules:

1) If \( RWID=\frac{I_{\mathrm{pw}}}{I_{\mathrm{w}}}=1 \), output image quality patches of the first class and record the value RWID1(i), where i is the order number.

2) If \( 0.95< RWID=\frac{I_{\mathrm{pw}}}{I_{\mathrm{w}}}<1 \), output image quality patches of the second class and record the value RWID2(i).

3) If \( 0.9< RWID=\frac{I_{\mathrm{pw}}}{I_{\mathrm{w}}}<0.95 \), output image quality patches of the third class and record the value RWID3(i).

4) If \( 0.85< RWID=\frac{I_{\mathrm{pw}}}{I_{\mathrm{w}}}<0.9 \), output image quality patches of the fourth class and record the value RWID4(i).

5) If \( 0.8< RWID=\frac{I_{\mathrm{pw}}}{I_{\mathrm{w}}}<0.85 \), output image quality patches of the fifth class and record the value RWID5(i).

In this paper, it is almost impossible to obtain an analytical solution for the classifying parameters because of the nonlinearity of RWID. Instead, we construct a validation set and apply a brute-force search to obtain suboptimal values for classifying the image quality patches.
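Continuing the sketch above, the following hypothetical helper splits an image into 32 × 32 patches, computes the RWID ratio of each patch with the distribution_width function from the previous sketch, and bins the patches into the five classes listed above.

```python
def select_quality_patches(gray, patch=32):
    """Return a dict class index -> list of (patch, RWID) according to rules 1)-5) above."""
    iw = distribution_width(gray)                     # width of the scatter plot of the whole image
    classes = {k: [] for k in range(1, 6)}
    bounds = [1.00, 0.95, 0.90, 0.85, 0.80]           # class boundaries of the RWID rules
    h, w = gray.shape
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            blk = gray[r:r + patch, c:c + patch]
            rwid = distribution_width(blk) / iw if iw else 0.0
            if rwid == 1.0:
                classes[1].append((blk, rwid))
            else:
                for k in range(1, 5):                 # classes 2..5 cover (0.95, 1), ..., (0.80, 0.85)
                    if bounds[k] < rwid < bounds[k - 1]:
                        classes[k + 1].append((blk, rwid))
                        break
    return classes
```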

2.2 Generating image quality features

Independent component analysis (ICA) is applied to generate the features of the image quality patches in this paper. ICA is a non-orthogonal linear transformation whose target is to make the variables independent after the transformation.

A pixel grey-scale value in an image quality patch is denoted by I(x,y), and the patch is generated as a linear superposition of features Ai in ICA:

$$ I\left(x,y\right)=\sum \limits_{i=1}^m{A}_i\left(x,y\right){S}_i $$
(1)

where (x, y) is the coordinate of a pixel in an image quality patch, i is the order number of the features Ai, the Ai are the same for all patches, and the Si are coefficients that take different values for different image quality patches.

Three assumptions must be satisfied in the ICA calculation: (1) the Si are statistically independent, (2) the distributions of the Si are non-Gaussian, and (3) the linear system defined by the Ai is invertible. An inverse matrix Wi = Ai−1 can then be found based on the above assumptions, and Si can be calculated by the following equation:

$$ {S}_i=\sum \limits_{x,y}{W}_i\left(x,y\right)I\left(x,y\right) $$
(2)

Two preprocessing steps, centering and whitening, must be carried out to produce uncorrelated components in ICA.

Centering means subtracting the mean value of an image quality patch so that the patch matrix has zero mean. The whitening matrix is obtained from the eigen-decomposition of the covariance matrix of the image quality patches, and after whitening each element is uncorrelated and normalized.
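The following is a minimal sketch of the two preprocessing steps, assuming the image quality patches have been flattened into the columns of a data matrix; the per-patch mean removal and the eigen-decomposition whitening follow the description above.

```python
import numpy as np

def center_and_whiten(X):
    """X: (dim, n_patches) matrix whose columns are flattened image quality patches."""
    X = X - X.mean(axis=0, keepdims=True)        # centering: subtract the mean of each patch
    cov = np.cov(X)                              # covariance matrix across the patch dimensions
    d, E = np.linalg.eigh(cov)                   # eigen-decomposition of the covariance
    d = np.clip(d, 1e-12, None)                  # guard against vanishing eigenvalues
    V = E @ np.diag(d ** -0.5) @ E.T             # whitening matrix V = E D^{-1/2} E^T
    return V @ X, V                              # whitened data has uncorrelated, unit-variance rows
```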

There are many algorithms for computing ICA. The FastICA algorithm was proposed and developed by the Finnish scholar Hyvärinen [21]. It is based on the principle of non-Gaussianity maximization and takes the maximized negentropy as its objective function. The algorithm separates out one independent component at a time.

The basic form of the FastICA algorithm is as follows (a code sketch of one iteration is given after the list):

(1) Center and whiten the image quality patches I(x,y) to obtain the data X.

(2) Choose an initial (e.g., random) weight vector Wi.

(3) Let \( {W}_i=E\left\{ Xg\left({W}_i^TX\right)\right\}-E\left\{{g}^{\prime}\left({W}_i^TX\right)\right\}{W}_i \), where \( g(x)=\tanh \left({\alpha}_1x\right) \) and \( {g}^{\prime }(x)={\alpha}_1\left[1-{\tanh}^2\left({\alpha}_1x\right)\right] \).

(4) Normalize Wi by \( {W}_i={W}_i/\left\Vert {W}_i\right\Vert \).

(5) If Wi has not converged, return to step (3).
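The following is a minimal sketch of the one-unit iteration above (in Python rather than the MATLAB used later in the experiments); the tanh nonlinearity follows step (3), while the convergence tolerance, iteration cap, and random initialization are assumptions.

```python
import numpy as np

def fastica_one_unit(Z, alpha=1.0, tol=1e-6, max_iter=200, seed=None):
    """Estimate one feature detector w from whitened data Z of shape (dim, n_samples)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)                               # step (2): random unit-norm start
    for _ in range(max_iter):
        wx = w @ Z                                       # projections w^T x over all samples
        g = np.tanh(alpha * wx)                          # g(u) = tanh(alpha * u)
        g_prime = alpha * (1.0 - g ** 2)                 # g'(u) = alpha * (1 - tanh^2(alpha * u))
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w   # step (3)
        w_new /= np.linalg.norm(w_new)                   # step (4)
        if abs(abs(w_new @ w) - 1.0) < tol:              # step (5): direction no longer changes
            return w_new
        w = w_new
    return w
```

In practice, a deflation scheme (or an off-the-shelf implementation such as sklearn.decomposition.FastICA) would be run repeatedly to obtain all 36 detectors Wi.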

In this paper, 600 images randomly selected from the databases LIVE2 [22], CSIQ [23], and TID2013 [24] are used as training samples. Image quality patches were extracted from the sample images according to the steps above, and each patch was arranged as a column of the sample matrix. A group of statistically independent Wi, called feature detectors, was then obtained by the FastICA algorithm. The obtained Wi are shown in Fig. 4.

Fig. 4 The 36 Wi obtained by the FastICA algorithm

3 Method—no-reference image quality assessment

A Hash indexing method [25, 26] is applied to build the matching relation between the independent component coefficients Si and the address codes of the Hash table. DMOS values and RWID values are stored in the Hash table so that the evaluated image quality can be obtained quickly. The DMOS values are used so that the assessment results are consistent with subjective perception.

3.1 Building the Hash lookup table

The independent components Si are obtained by multiplying the detectors Wi with the image patch I(x,y), respectively. The absolute values of the independent component coefficients Si can be described as 36 values between 0 and 255. The binarization of Si is shown in Fig. 5.

Fig. 5 The binarization process of the independent component coefficients Si

First, calculate the absolute value of the difference between successive Si (i = 1, …, 36) by Eq. (3). The absolute difference SDi is defined as:

$$ {SD}_i=\left\{\begin{array}{cc}\left|{S}_{i+1}-{S}_i\right|,& if\kern1em i<36\\ {}\left|{S}_i-{S}_{36}\right|,& otherwise\end{array}\right. $$
(3)

Then, calculate the binarization of SDi by Eq. (4), where M is trained and taken as 17 in Fig. 5. The value of M is chosen as the minimum among the mean values of the columns, and it can be adjusted according to the binarization results.

$$ {b}_i=\left\{\begin{array}{cc}1,& if\left({SD}_i\le M\right)\\ {}0,& otherwise\end{array}\right. $$
(4)
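As an illustration, the following is a minimal sketch of Eqs. (3) and (4), assuming the 36 coefficients Si of a patch are already available as a vector; the last entry follows Eq. (3) literally, and the threshold M is supplied by the caller (trained as described above).

```python
import numpy as np

def binarize_coefficients(S, M):
    """S: 36 independent-component coefficients of a patch; returns the 36-bit code of Eqs. (3)-(4)."""
    S = np.abs(np.asarray(S, dtype=float))    # coefficients described as values between 0 and 255
    SD = np.empty_like(S)
    SD[:-1] = np.abs(S[1:] - S[:-1])          # |S_{i+1} - S_i| for i < 36
    SD[-1] = np.abs(S[-1] - S[-1])            # the otherwise branch of Eq. (3), taken literally
    return (SD <= M).astype(np.uint8)         # b_i = 1 if SD_i <= M, else 0
```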

A Hash lookup table is applied to quickly index similar independent components, and a Hash function is used to point to positions in the Hash lookup table. Here, a 36-bit binary code is transformed by the Hash function. Because the Hash lookup table is built from the 600 sampled images, a large number of image quality patches is collected, and every image quality patch is assigned a unit in the Hash lookup table. Considering the inevitable conflicts, a binary address of at least 12 bits must be generated by the Hash function.

Let \( {N}_i={b}_{3i+1}\times {2}^0+{b}_{3i+2}\times {2}^1+{b}_{3i+3}\times {2}^2 \) (i = 0, ⋯, 11) and N12 = N0, then

$$ {h}_i=\left\{\begin{array}{cc}1,& if\;{N}_i\le {N}_{i+1}\\ {}0,& \mathrm{otherwise}\end{array}.\right. $$
(5)

The Hash function is defined as:

$$ H\left({h}_0,\cdots, {h}_{11}\right)={h}_0\times {2}^0+\cdots +{h}_{11}\times {2}^{11}. $$
(6)

The binary code of the independent components, together with its DMOS value and RWID value, is placed in the specific unit pointed to by the address from the Hash function.
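The following is a minimal sketch of Eqs. (5) and (6): the 36-bit code is grouped into twelve 3-bit integers Ni (using the 1-based bit indices of the formula above), compared cyclically to produce the bits hi, and packed into a 12-bit address.

```python
def hash_address(b):
    """b: 36-bit binary code (sequence of 0/1); returns the 12-bit Hash table address of Eq. (6)."""
    # N_i = b_{3i+1}*2^0 + b_{3i+2}*2^1 + b_{3i+3}*2^2, written with 0-based indexing here
    N = [b[3 * i] + 2 * b[3 * i + 1] + 4 * b[3 * i + 2] for i in range(12)]
    N.append(N[0])                                          # N_12 = N_0 closes the ring
    h = [1 if N[i] <= N[i + 1] else 0 for i in range(12)]   # Eq. (5)
    return sum(h[i] << i for i in range(12))                # Eq. (6): H = h_0*2^0 + ... + h_11*2^11
```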

3.2 Indexing the quality features

The independent component coefficients Sip of the tested image are obtained by calculating the independent components of its image quality patches. The binarized values of Sip are then transformed by the Hash function, and the DMOS and RWID values can be indexed from the unit at the obtained address.

The independent component coefficients Sip of an image quality patch are converted to Hash address codes according to Eqs. (3)–(6). The units pointed to by the address codes in the Hash lookup table are then found, and the Hamming distances between the binary codes stored in those units and the code of Sip are calculated. If a distance is smaller than a threshold Th, the number of similar patches and the corresponding DMOS and RWID values are recorded and stored together.
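The following is a sketch of this look-up step, reusing the hash_address and binarize_coefficients helpers from the previous sketches; the layout of the table (a dict from address to stored records) and the default threshold Th are assumptions.

```python
import numpy as np

def index_quality_features(S_p, M, table, Th=4):
    """S_p: coefficients of a tested quality patch; table: dict address -> list of (code, DMOS, RWID)."""
    code = binarize_coefficients(S_p, M)                # Eqs. (3)-(4)
    address = hash_address(code)                        # Eqs. (5)-(6)
    matches = []
    for stored_code, dmos, rwid in table.get(address, []):
        hamming = int(np.count_nonzero(code != np.asarray(stored_code)))
        if hamming < Th:                                # a similar patch: keep its DMOS and RWID
            matches.append((dmos, rwid))
    return matches
```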

3.3 Evaluating image quality

The final no-reference image quality assessment score PIH-IQA (simply called "the method" in this paper) is calculated by Eq. (7), where DMOSi and RWIDi are obtained by indexing the Hash lookup table, and RWIDn(i) are the ratios of the distribution widths obtained from the scatter plots of the spatial dependency of pixel pairs of the tested image.

$$ PIH\hbox{-} IQA=\frac{1}{N}\sum \limits_{i=1}^N{DMOS}_i\ast \frac{RWID_n(i)}{RWID_i} $$
(7)
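The following is a sketch of Eq. (7), assuming the indexed DMOSi and RWIDi of the matched patches and the measured RWIDn(i) of the tested patches have been collected into parallel lists.

```python
def pih_iqa_score(dmos, rwid_indexed, rwid_measured):
    """Eq. (7): weighted average of the indexed DMOS values over the N matched quality patches."""
    n = len(dmos)
    if n == 0:
        return None                      # no similar patch was found in the Hash lookup table
    return sum(d * rm / ri for d, rm, ri in zip(dmos, rwid_measured, rwid_indexed)) / n
```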

4 Experimental results and discussions

Three groups of experiments are used to analyze the effectiveness of PIH-IQA in evaluating image quality without references: experiments on consistency with subjective perception, comparisons with other methods, and comparisons of the results on different distortion types.

The consistency between the values evaluated with the PIH-IQA method and the DMOS values in the databases is shown by a scatter plot in the subjective consistency experiments. Three metrics are used to compare the image quality assessment methods and distortion types: the Pearson linear correlation coefficient (PCC) for the prediction accuracy of the results, the Spearman rank-order correlation (SRC) for the monotonicity of the results, and the root mean square error (RMSE) for the consistency of the results with subjective scores.
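For reference, the three metrics can be computed directly between the predicted scores and the DMOS values, for example as in the following sketch using SciPy (any nonlinear mapping that is sometimes fitted before computing PCC and RMSE is omitted).

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def iqa_metrics(predicted, dmos):
    """PCC (prediction accuracy), SRC (monotonicity), RMSE (consistency with subjective scores)."""
    predicted = np.asarray(predicted, dtype=float)
    dmos = np.asarray(dmos, dtype=float)
    pcc = pearsonr(predicted, dmos)[0]
    src = spearmanr(predicted, dmos)[0]
    rmse = float(np.sqrt(np.mean((predicted - dmos) ** 2)))
    return pcc, src, rmse
```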

4.1 Comparing with subjective perception

Subjective consistency is the main criterion for whether an image quality assessment method is effective, in other words, whether the evaluated scores are consistent with human subjective perception. The scatter plot of the subjective scores and the evaluated results on the database LIVE2 is shown in Fig. 6, where the horizontal axis is the score calculated by the PIH-IQA method and the vertical axis is the DMOS value of the subjective score. Except for a few outliers, the dots are mainly scattered around the diagonal line, so the results of PIH-IQA keep good consistency with subjective perception.

Fig. 6 The scatter plot of the subjective consistency of the PIH-IQA method on the database LIVE2

4.2 Comparing with other evaluated methods

The results of the proposed PIH-IQA method are compared with other widely used image quality assessment methods, including the full-reference methods PSNR, IFC [27], MSSSIM [14], SSIM [10], and VIF [28], and the no-reference methods CBIQ [29], SRNSS [15], QAC [30], NSS-GS [13], BLINDS [31], BLINDS-II, GMLOG [32], CORNIA [33], and PATCHIQ [9]. The comparison is made by calculating the three metrics PCC, SRC, and RMSE, and the results are listed in Table 1, where the top three scores for each metric are highlighted. On the database LIVE2, the PIH-IQA method is among the top three methods for all three metrics and obtains the better no-reference assessment results.

Table 1 Performance of the proposed PIH-IQA and the other competing models

Table 1 shows that the proposed PIH-IQA has the largest number of scores highlighted in italics. Judging from its top-three scores, the performance of PIH-IQA is superior to that of the other IQA models in terms of SRC and RMSE. Having the largest number of italicized values, the proposed PIH-IQA can be considered superior.

4.3 Comparing different distortions

An image quality assessment method should be suitable for evaluating images with different distortions. Images with the 5 distortion types in the database LIVE2 are evaluated, and the assessment results are compared separately. The comparisons are made by calculating the three metrics PCC, SRC, and RMSE, and the results are listed in Table 2. The results in Table 2 show that the PIH-IQA method can be used to assess images with different distortion types.

Table 2 Performance comparison of the IQA models on each individual distortion type in the database LIVE 2

The proposed PIH-IQA is the third best in terms of the number of scores highlighted in italics. As in Table 1, the results in Table 2 show a similar tendency in terms of SRC and RMSE, although the proposed method still has room to improve its PCC values. The conclusion can again be drawn that the proposed PIH-IQA performs well.

Table 3 shows the running time of the 11 IQA models on an image of size 512 × 512. All algorithms were run on a ThinkPad X220 notebook with an Intel Core i5-2450M CPU @ 2.5 GHz and 4 GB RAM. The software platform was MATLAB R2012b (8.0.0.783). The MATLAB source codes of the other IQA methods were obtained from the original authors. (It should be noted that whether the code is optimized may affect the running time of an algorithm.) Clearly, PSNR is the fastest, followed by SSIM and MS-SSIM; among the no-reference IQA metrics, and especially among the training-based IQA models, the proposed PIH-IQA is much faster than NSRCIQ.

Table 3 Running time of the IQA models

4.4 Evaluating aerial images in different weathers

The quality of aerial images is mainly affected by the weather, and bad weather can cause subsequent processing to fail. Images were collected from the Internet, including 25 images in sunny weather, 11 in cloudy weather, 17 in foggy weather, 13 in rainy weather, and 18 in snowy weather, and were labeled with the weather before the experiments. The quality of these images is evaluated by the proposed method, and the results are shown in Fig. 7, where the horizontal axis denotes the weather: 1 for sunny, 2 for cloudy, 3 for foggy, 4 for rainy, and 5 for snowy. The PIH-IQA scores in the different weathers fall in different ranges. The quality scores of the images in foggy, rainy, and snowy weather are almost all greater than 10, with only a few exceptions. Although foggy, rainy, and snowy weather all influence aerial image quality, foggy weather has the most serious impact.

Fig. 7 The scatter plot of the relation between aerial image quality and weather obtained by the PIH-IQA method

5 Conclusion

The proposed real-time no-reference image quality assessment method obtains excellent results in the comparative experiments. The first strong point is the selection of image quality patches based on the spatial correlation of pixel pairs. Although the spatial correlation is not a good quality index by itself, it effectively extracts image patches with abundant information as image quality patches, which provide important information for evaluating image quality in the following procedures. The second strong point is that 36 features are applied to extract the independent components of the image quality patches; although the number of features is small, the assessment results show that they are sufficient to evaluate the image quality. The last point, which speeds up the evaluation, is the Hash lookup table, which greatly increases the speed of the proposed method. In summary, both good features representing image quality and a fast evaluation scheme are important and indispensable.