
Distortion-specific feature selection algorithm for universal blind image quality assessment

  • Imran Fareed Nizami
  • Muhammad Majid
  • Waleed Manzoor
  • Khawar Khurshid
  • Byeungwoo Jeon
Open Access
Research

Abstract

Blind image quality assessment (BIQA) aims to use objective measures to predict the quality score of distorted images without any prior information about the reference image. Several BIQA techniques proposed in the literature use a two-step approach, i.e., feature extraction for distortion classification and regression for predicting the quality score. In this paper, a three-step approach is proposed that aims to improve the performance of BIQA techniques. In the first step, feature extraction is performed using existing BIQA techniques to determine the distortion type. Secondly, features are selected for each distortion type based on the mean values of the Spearman rank ordered correlation constant (SROCC) and linear correlation constant (LCC). Lastly, the distortion-specific features are used by a regression model to predict the quality score. Experimental results show that the quality score predicted using distortion-specific features correlates strongly with the subjective quality score, improves the overall performance of existing BIQA techniques, and reduces the processing time.

Keywords

Blind image quality assessment · Feature extraction · Feature selection · Classification · Support vector regression

Abbreviations

BIQA

Blind image quality assessment

BLIINDS II

Blind image integrity notator based on DCT statistics II

BRISQUE

Blind/referenceless image spatial quality evaluator

CurveletQA

Curvelet Quality Assessment

DCT

Discrete cosine transform

DIIVINE

Distortion Identification-based Image Verity and IntegratioN Evaluation

FF

Fast fading

FR

Full reference

GB

Gaussian blur

GC

Global contrast

GM

Gradient magnitude

GM-LOG

Gradient magnitude and Laplacian of Gaussian-based IQA

IQA

Image quality assessment

JP2KC

JPEG2000 compression

JPEG

JPEG compression

KCC

Kendall correlation constant

LCC

Linear correlation constant

LOG

Laplacian of Gaussian

MOS

Mean observer score

NSS

Natural scene statistics

PN

Pink noise

RMSE

Root mean squared error

RR

Reduced reference

SROCC

Spearman’s rank ordered correlation constant

SSEQ

Spatial-spectral entropy-based quality

SVC

Support vector classification

SVR

Support vector regression

WN

White noise

1 Introduction

In recent years, multimedia content has become a significant part of our lives. Delivery of images at the highest quality to the end user is an essential requirement for many modern imaging applications. Therefore, estimation of perceived image quality by humans, also known as subjective evaluation, has gained importance. Subjective evaluation is used as a benchmark for image quality assessment (IQA), but the constraint of time and the tedious nature of the task make it unsuitable for many applications [1]. IQA techniques aim to replicate the behavior of the human visual system to evaluate the quality score of images using objective parameters or measures. Objective IQA is divided into full reference (FR), reduced reference (RR), and blind IQA techniques. FR-IQA techniques require the pristine version of the image to predict the quality score [2, 3, 4, 5, 6, 7, 8, 9, 10]. RR-IQA techniques do not require the whole reference image but only some information extracted from it [11, 12, 13, 14, 15, 16, 17]. Techniques that evaluate the image quality score without any prior information about the reference image are called blind image quality assessment (BIQA) techniques [1]. BIQA techniques usually follow a two-step approach: firstly, the distortion type affecting the image is determined using extracted features, and then regression is applied to predict the quality score of the image.

Most BIQA techniques extract features that are altered in the presence of distortion [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]. Features are extracted in the spatial domain, wavelet domain, or discrete cosine transform (DCT) domain, or using edge information of the image. Discrete wavelet transform and complex wavelet transform have been used in [38] and [24], respectively, to extract statistical features from distorted images for determining the distortion type and predicting the quality score. The ability of wavelet transforms to extract high frequencies can be exploited for edge analysis and IQA. In [31], the curvelet transform, which carries rich information about scale and orientation in the image, is utilized to extract statistical features for BIQA. The blind image integrity notator [39], an extended version of [40], uses DCT-based features along with a Bayesian inference model to predict the quality score of the image. It requires no statistical model and utilizes sampled DCT coefficients for IQA. Natural scene statistics (NSS)-based features have been extracted in the spatial domain for the evaluation of image quality score using luminance information [41]. This approach is simple and computationally inexpensive since no transform has to be computed. In [42], a collection of quality-aware features in the spatial domain is utilized for BIQA, measuring the deviation in statistical properties between distorted and natural images. In [43], two-dimensional features called atoms are introduced, which use a sparse representation of coefficients in the feature set to assess the quality of images. The shearlet transform has been used in [44] to model the NSS characteristics of images; natural undistorted parts of the image are compared with the distorted parts to assess the quality score. Laplacian of Gaussian and gradient magnitude features along with support vector regression (SVR) are used in [30] for predicting the quality score of images using edge information. In [45], spatial and spectral entropies, which capture information over different scales, are used to compute features for the evaluation of quality score; since entropy-based features showed promising results for FR-IQA, entropy was also utilized as a feature for BIQA. Recently, BIQA has been performed on multiply distorted images by augmenting the features extracted using the blind image quality index (BIQI) [46], the blind/referenceless image spatial quality evaluator (BRISQUE) [41], and sparse representation for natural scene statistics (SRNSS) [47], and selecting the top three features based on the average values of the Spearman rank ordered correlation constant (SROCC) and linear correlation constant (LCC) [48]. However, the feature selection in [48] is limited to the LIVE multiply-distorted image database and the features extracted in [42, 46, 47]; its performance cannot be generalized to other BIQA techniques.

All of the aforementioned BIQA techniques employ the same set of features for each distortion type to evaluate the quality score of images. Each distortion type affects an individual BIQA feature in a distinct manner because every type of distortion exhibits different characteristics, e.g., Gaussian blur affects the edge information in an image, whereas JPEG distortion introduces blockiness. Therefore, the same set of features used for every distortion type will not yield optimum results. This paper introduces a distortion-specific feature selection algorithm based on Spearman rank ordered correlation constant (SROCC) and linear correlation constant (LCC) scores. All features whose SROCC and LCC scores, computed over individual features, are greater than the mean values of SROCC and LCC are selected for the specific distortion type. The major contributions of this work are as follows:
  1. A new SROCC- and LCC-based feature selection algorithm is proposed, which can be utilized with any two-step BIQA framework.

  2. The proposed algorithm improves the performance of BIQA techniques in terms of better correlation of the predicted quality score with the mean observer score (MOS) and reduces the processing time.

  3. The proposed three-step approach is robust, database independent, and applicable in real-time scenarios.

The rest of the paper is organized as follows. Section 2 explains the proposed methodology along with distortion-specific feature selection algorithm. Section 3 presents the experimental results for six different BIQA techniques followed by the conclusion in Section 4.

2 Proposed methodology

The proposed methodology for BIQA is shown in Fig. 1. It follows a three-step approach, i.e., feature extraction, distortion-specific feature selection, and support vector regression, in contrast to the traditional two-step approach of feature extraction and regression. The details of each step are as follows.
Fig. 1

Block diagram of the proposed three-step BIQA approach

2.1 Feature extraction for BIQA

In the first step, N features F={f1,···,fN} are extracted using existing BIQA techniques. Since noise usually disrupts high-frequency information in the image, such as edges and corners, established BIQA techniques extract features in spatial and transform domains that model the deviation in characteristics of distorted images from those of natural images. To validate the performance of the proposed feature selection algorithm over features extracted in different domains, six BIQA techniques are selected that extract features in the spatial, DCT, wavelet, curvelet, and spectral domains. All the selected BIQA techniques follow a two-step approach, i.e., feature extraction and distortion classification, followed by the computation of the quality score using SVR. The six BIQA techniques are BRISQUE [41], gradient magnitude and Laplacian of Gaussian-based IQA (GM-LOG) [30], blind image integrity notator based on DCT statistics II (BLIINDS II) [39], spatial-spectral entropy-based quality (SSEQ) [45], distortion identification-based image verity and integration evaluation (DIIVINE) [38], and curvelet quality assessment (CurveletQA) [31]. The details of the BIQA techniques used for feature extraction are as follows:

2.1.1 Blind/referenceless image spatial quality evaluator (BRISQUE)

BRISQUE [41] extracts features in the spatial domain by utilizing locally normalized luminance coefficients and their pairwise products over two scales. Local mean displacements are removed to normalize the local variance of log contrast, which has de-correlating properties. Eighteen features are extracted at each scale: the shape and variance of the normalized coefficients, and the shape, mean, left variance, and right variance parameters of the horizontal, vertical, and two diagonal pairwise products. BRISQUE uses a total of 36 features for the evaluation of quality score.
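A minimal sketch of this normalization, assuming an 8-bit grayscale image and illustrative values for the Gaussian window scale and the stabilizing constant C (not necessarily the exact settings of [41]), is:

```python
# Sketch of BRISQUE-style mean-subtracted contrast-normalized (MSCN) coefficients.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, C=1.0):
    """Locally normalized luminance coefficients of a grayscale image."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                # local mean
    var = gaussian_filter(image ** 2, sigma) - mu ** 2
    sigma_map = np.sqrt(np.abs(var))                  # local standard deviation
    return (image - mu) / (sigma_map + C)             # C avoids division by zero

def pairwise_products(mscn):
    """Neighboring-coefficient products along four orientations."""
    return {
        "horizontal": mscn[:, :-1] * mscn[:, 1:],
        "vertical": mscn[:-1, :] * mscn[1:, :],
        "main_diagonal": mscn[:-1, :-1] * mscn[1:, 1:],
        "secondary_diagonal": mscn[:-1, 1:] * mscn[1:, :-1],
    }
```

Distribution parameters fitted to the MSCN coefficients and to each product map then yield the 18 features per scale described above.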

2.1.2 Gradient magnitude and Laplacian of Gaussian-based IQA (GM-LOG)

GM-LOG [30] uses the joint statistical relationship between the local contrast features of the Laplacian of Gaussian (LoG) and the gradient magnitude (GM) for BIQA. An adaptive procedure called joint adaptive normalization, based on gain control and divisive normalization models over the local neighborhood, is used to remove the spatial redundancies of the GM and LoG coefficients. The technique follows a two-step approach, i.e., identification of distortion type and quality score prediction. A total of 40 features are extracted, which describe the structural information of the images for assessing the quality using SVR.
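The two response maps that GM-LOG builds its statistics on can be sketched as follows; the smoothing scale sigma is an assumed value, and the joint adaptive normalization step of [30] is omitted:

```python
# Sketch of the gradient magnitude (GM) and Laplacian of Gaussian (LoG) maps.
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude, gaussian_laplace

def gm_log_maps(image, sigma=0.5):
    image = image.astype(np.float64)
    gm = gaussian_gradient_magnitude(image, sigma)   # edge strength
    log = gaussian_laplace(image, sigma)             # second-derivative response
    return gm, log
```

Marginal and conditional statistics of the jointly normalized GM and LoG responses then form the 40 features.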

2.1.3 Blind image integrity notator based on DCT statistics II (BLIINDS II)

BLIINDS II [39] extracts NSS features in the DCT domain on 17×17 image patches. Each DCT block is divided into three directional regions, and Gaussian fitting is performed on each region. BLIINDS II extracts four types of features, namely coefficients of frequency variation, the generalized energy subband ratio measure, Gaussian model shape parameters, and orientation model-based features, to obtain 24 features. These features are utilized with a Bayesian inference model for predicting the image quality.
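As an illustration of the DCT modeling step, the sketch below partitions the image into patches, applies a 2-D DCT, and estimates a generalized Gaussian shape parameter by standard moment matching; the exact feature definitions of [39] differ in detail:

```python
# Sketch of patch-wise DCT coefficients and a moment-matching GGD shape estimate.
import numpy as np
from scipy.fft import dctn
from scipy.special import gamma

def ggd_shape(coeffs, shapes=np.arange(0.05, 10.0, 0.001)):
    """Estimate the generalized Gaussian shape by matching E[x^2] / (E|x|)^2."""
    rho = np.mean(coeffs ** 2) / (np.mean(np.abs(coeffs)) ** 2 + 1e-12)
    candidates = gamma(1 / shapes) * gamma(3 / shapes) / gamma(2 / shapes) ** 2
    return shapes[np.argmin(np.abs(candidates - rho))]

def patch_shape_parameters(image, patch=17):
    image = image.astype(np.float64)
    h, w = image.shape
    shapes = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = dctn(image[i:i + patch, j:j + patch], norm="ortho")
            shapes.append(ggd_shape(block.ravel()[1:]))   # drop the DC coefficient
    return np.array(shapes)
```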

2.1.4 Spatial-spectral entropy-based quality (SSEQ)

SSEQ [45] extracts features at three scales of image resolution. To avoid aliasing, bi-cubic interpolation is used during downsampling. Each image is partitioned into subregions of 8×8 pixels, and the spatial entropy and the spectral entropy in the DCT domain are computed for each patch. The spatial and spectral entropies are sorted in ascending order, and 60% of the central elements are selected, constituting a feature vector of length 12. These features are used as input to a pre-trained support vector classification (SVC) model to identify the distortion type affecting the image and are then given as input to an SVR model for predicting the quality score.
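A sketch of the per-block entropies, assuming 8-bit grayscale input and following the 8×8 block size and 60% central pooling described above (the three-scale decomposition is omitted), is:

```python
# Sketch of SSEQ-style spatial and spectral entropies per 8x8 block.
import numpy as np
from scipy.fft import dctn

def block_entropies(image, block=8):
    image = image.astype(np.float64)
    spatial, spectral = [], []
    h, w = image.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i + block, j:j + block]
            counts, _ = np.histogram(patch, bins=256, range=(0, 256))
            p = counts / counts.sum()                       # pixel-value probabilities
            p = p[p > 0]
            spatial.append(-np.sum(p * np.log2(p)))
            c = dctn(patch, norm="ortho").ravel()[1:] ** 2  # squared AC coefficients
            q = c / (c.sum() + 1e-12)
            q = q[q > 0]
            spectral.append(-np.sum(q * np.log2(q)))
    return np.sort(spatial), np.sort(spectral)

def central_pool(values, keep=0.6):
    """Keep the central fraction of the sorted entropy values."""
    n = len(values)
    lo = int(n * (1 - keep) / 2)
    return values[lo:n - lo]
```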

2.1.5 Distortion Identification-based Image Verity and IntegratioN Evaluation (DIIVINE)

DIIVINE [38] uses an over-complete wavelet decomposition realized through steerable pyramids to compute five groups of statistical features over two scales and six orientations by statistical distribution curve fitting. The five groups of features, namely scale and orientation selective features, orientation selective statistics, correlation across scales, spatial correlation, and across-orientation statistics, constitute a feature vector of length 88. The feature vector is given as input to the SVC for the determination of distortion type, and SVR is utilized for the computation of the image quality score.

2.1.6 Curvelet Quality Assessment (CurveletQA)

CurveletQA [31] extracts three types of features from curvelet subbands. Four NSS features are computed on the finest scale of the curvelet subbands using an asymmetric Gaussian distribution; two features are extracted on the finest detail layer using the mean value of kurtosis and the ratio of the sample mean to the standard deviation of the non-cardinal orientation energies; and six features are computed for the energy distribution across scales by taking the difference of the mean values of the logarithmic magnitudes of subbands at adjacent layers. These 12 features are used for the prediction of distortion type using SVC and the prediction of quality score using SVR, respectively.

The features extracted using the BIQA techniques are given as input to the SVC to determine the distortion type D.
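A minimal sketch of this classification stage, with scikit-learn's SVC standing in for the pre-trained classifiers of the respective techniques (the feature matrices, label encoding, and RBF parameters are illustrative), is:

```python
# Sketch of the distortion-type classification step.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X_train: (num_images, N) matrix of BIQA features; y_train: distortion labels,
# e.g., 0=FF, 1=GB, 2=JP2KC, 3=JPEG, 4=WN for the LIVE database.
def train_distortion_classifier(X_train, y_train):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
    clf.fit(X_train, y_train)
    return clf

# D = train_distortion_classifier(X_train, y_train).predict(X_test)
```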

2.2 Distortion-specific feature selection

Natural images are highly structured and possess properties that are affected when distortion is introduced into an image. These properties are known as natural scene statistics (NSS) [22]. BIQA techniques that utilize NSS assess the image quality based on the deviation between the NSS properties of distorted and natural images. Features that effectively represent this deviation show better performance and can predict the image quality more accurately. The main objective of this work is to introduce a generalized approach for distortion-specific feature selection that improves the performance of existing BIQA techniques by using, for each particular distortion type, the features that best represent the deviation of a distorted image's characteristics from those of a natural image, which is also validated by Fig. 2. Therefore, the second step involves the selection of distortion-specific features based on SROCC and LCC scores, represented as FD={f1,···,fM}, where M ≤ N. Generally, SROCC and LCC are utilized to assess the similarity between the MOS and the quality score predicted using BIQA techniques; a value close to 1 indicates superior performance. Therefore, we select those features whose individual SROCC and LCC scores are greater than the mean SROCC and mean LCC computed over individual features. The selected features have SROCC and LCC values closer to 1, resulting in an enhanced prediction score. The SROCC and LCC scores are computed for each individual feature as
$$ \begin{aligned} SROCC=1-\frac{6\sum\limits_{i=1}^{T} d_{i}^{2}}{T\left(T^{2}-1\right)}, \end{aligned} $$
(1)
Fig. 2

Normalized histogram of features values averaged over all the distortion type for selected BIQA techniques. a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using original, all features, and after proposed feature selection algorithm

where di is the difference between paired ranks and T is the total number of samples, and
$$ \begin{aligned} LCC = \frac{\sum\limits_{i=1}^{T}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum\limits_{i=1}^{T}(x_{i}-\bar{x})^{2}}\sqrt{\sum\limits_{i=1}^{T}(y_{i}-\bar{y})^{2}}}, \end{aligned} $$
(2)

where xi and yi are the ith instances in the first and second datasets, respectively, and \(\bar {x}\) and \(\bar {y}\) are the mean values of datasets x and y, respectively. To perform feature selection, SROCCi and LCCi are computed for each individual feature Fi belonging to a BIQA technique. The quality score predicted using each individual feature is calculated with a pre-trained SVR model. Eighty percent of the images in the dataset are used to train the SVR model, and testing is performed on the remaining 20%. The mean SROCC score (μS) and mean LCC score (μL), computed over 1000 iterations, are utilized for selecting the distortion-specific features FD. The proposed feature selection approach for BIQA is presented as Algorithm 1.
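A simplified sketch of Algorithm 1 is given below: each feature is evaluated in isolation with a single-feature SVR over repeated random 80/20 splits, and the features whose per-feature SROCC and LCC exceed the mean values μS and μL are retained. The iteration count is reduced for brevity, and the random split ignores content overlap between images sharing a reference, so this is an illustration rather than the authors' exact protocol:

```python
# Sketch of the proposed distortion-specific feature selection (Algorithm 1).
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def select_features(X, mos, iterations=100):
    """X: (num_images, N) features for one distortion type; mos: subjective scores."""
    N = X.shape[1]
    srocc, lcc = np.zeros(N), np.zeros(N)
    for _ in range(iterations):
        X_tr, X_te, y_tr, y_te = train_test_split(X, mos, test_size=0.2)
        for i in range(N):
            # single-feature SVR prediction on the held-out images
            pred = SVR(kernel="rbf").fit(X_tr[:, [i]], y_tr).predict(X_te[:, [i]])
            srocc[i] += spearmanr(pred, y_te).correlation
            lcc[i] += pearsonr(pred, y_te)[0]
    srocc /= iterations
    lcc /= iterations
    # keep features whose individual scores exceed both mean scores
    return np.where((srocc > srocc.mean()) & (lcc > lcc.mean()))[0]
```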

2.3 Support vector regression

In the third step, the features FD selected by Algorithm 1 are given as input to the SVR for the prediction of quality score. Each distortion-specific regression model has a different set of features as input. The SVR is given as
$$ \begin{aligned} \psi(F_{D})=\alpha\beta(F_{D})+b, \end{aligned} $$
(3)

where FD is the input feature vector, α is the weight constant, β(·) is the feature-space mapping, and b is the bias value.
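A minimal sketch of this step, assuming scikit-learn's SVR with default RBF parameters (the paper instead reuses the c and γ settings of each BIQA technique), is:

```python
# Sketch of per-distortion SVR training and quality prediction.
import numpy as np
from sklearn.svm import SVR

def train_distortion_models(X, y, labels, selected):
    """selected: dict mapping distortion type D to its feature indices F_D."""
    models = {}
    for D, idx in selected.items():
        mask = labels == D
        models[D] = SVR(kernel="rbf").fit(X[mask][:, idx], y[mask])
    return models

def predict_quality(x, D, models, selected):
    """Predict the quality score of one image with feature vector x of type D."""
    return models[D].predict(x[selected[D]].reshape(1, -1))[0]
```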

3 Experimental results and discussion

The proposed methodology is evaluated on four IQA databases, i.e., LIVE [49], CSIQ [50], TID2013 [51], and the LIVE in the wild image quality challenge database [52]. The LIVE database contains 779 images and five distortion types, namely fast fading (FF), Gaussian blur (GB), JPEG2000 compression (JP2KC), JPEG compression (JPEG), and white noise (WN). The CSIQ database consists of 900 images and six distortion types, i.e., GB, JP2KC, JPEG, WN, global contrast (GC), and pink noise (PN). The TID2013 database consists of 3000 images and 24 distortion types.

Figure 2 shows the normalized histograms of features averaged over all the distortion types and three databases, i.e., LIVE, TID2013, and CSIQ. Most BIQA techniques assess the quality of a distorted image by measuring the deviation of its characteristics from those of non-distorted images. Therefore, BIQA techniques should perform well if this deviation is well represented by the extracted features. It can be observed that the deviation of the feature characteristics of distorted images from those of the non-distorted image increases when the proposed feature selection is performed, as compared to using all the features.

3.1 Performance comparison

For the estimation of results, the support vector machine requires pre-trained models for determining the distortion type and predicting the quality score. Therefore, we divide each dataset into two disjoint sets for training and testing. Eighty percent of the images are selected for training, whereas 20% are used for testing. Training and testing are repeated 1000 times with random disjoint sets of images to predict the quality score. The SVR parameters c and γ used in this paper are the same as those mentioned by the respective BIQA techniques.

Median scores of SROCC, LCC, the Kendall correlation constant (KCC), and the root mean squared error (RMSE) are reported for the performance evaluation of the proposed approach. The SROCC, LCC, and KCC scores measure the similarity between the mean observer score and the predicted quality score, whereas RMSE measures the error.
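These metrics can be computed as in the following sketch, which assumes scipy's correlation routines; the median is then taken over the repeated train/test splits:

```python
# Sketch of the evaluation metrics between predicted scores and MOS.
import numpy as np
from scipy.stats import kendalltau, pearsonr, spearmanr

def evaluate(pred, mos):
    pred, mos = np.asarray(pred), np.asarray(mos)
    return {
        "SROCC": spearmanr(pred, mos).correlation,
        "LCC": pearsonr(pred, mos)[0],
        "KCC": kendalltau(pred, mos).correlation,
        "RMSE": float(np.sqrt(np.mean((pred - mos) ** 2))),
    }

# scores = [evaluate(run_once(), mos_test) for _ in range(1000)]
# median_srocc = np.median([s["SROCC"] for s in scores])
```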

Figures 3, 4, and 5 show the performance of the proposed distortion-specific feature selection algorithm for the selected BIQA techniques over each distortion type for the TID2013, LIVE, and CSIQ IQA databases, respectively. The horizontal axis in Fig. 3 represents the distortion type label as given in the TID2013 database. It is evident from the results that the distortion-specific feature selection algorithm improves the SROCC score for the majority of distortion types on the selected BIQA techniques. It can be observed from Fig. 3 that, on the TID2013 database, the proposed algorithm shows better or at-par performance on 15, 18, 14, 15, 14, and 17 out of a total of 24 distortion types, as compared to using all the features, for BRISQUE [41], BLIINDS II [39], GM-LOG [30], SSEQ [45], DIIVINE [38], and CurveletQA [31], respectively. Similarly, the proposed technique shows better or at-par performance on 4, 4, 4, 4, 3, and 4 out of a total of five distortion types on the LIVE database, and on 6, 3, 4, 4, 6, and 3 out of a total of six distortion types on the CSIQ database, for the same respective techniques.
Fig. 3

Performance comparison of proposed algorithm for each distortion on TID2013 database (median SROCC), a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using original, all features, and after proposed feature selection algorithm

Fig. 4

Performance comparison of proposed algorithm for each distortion on LIVE database (median SROCC), a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using original, all features, and after proposed feature selection algorithm

Fig. 5

Performance comparison of proposed algorithm for each distortion on CSIQ database (median SROCC), a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using original, all features and after proposed feature selection algorithm

Figures 6, 7, 8, and 9 show the overall performance comparison of each BIQA technique along with the proposed distortion-specific feature selection algorithm and the feature selection algorithm of [48] on four IQA databases, i.e., LIVE [49], CSIQ [50], TID2013 [51], and the LIVE in the wild image quality challenge database [52], respectively. It can be observed that the proposed distortion-specific feature selection algorithm improves the overall performance of all six state-of-the-art BIQA techniques as compared to using all the features, whereas the feature selection algorithm of [48] performs worse than the original BIQA techniques. The proposed algorithm also improves the performance of BIQA techniques on real images. The performance on the LIVE in the wild image quality challenge database shows that the proposed algorithm can be used in real-time scenarios with real images taken in daylight and nighttime conditions.
Fig. 6

Overall performance comparison of proposed algorithm on different BIQA techniques for LIVE database, a SROCC, b LCC, c KCC, d RMSE

Fig. 7

Overall performance comparison of proposed algorithm on different BIQA techniques for CSIQ database, a SROCC, b LCC, c KCC, d RMSE

Fig. 8

Overall performance comparison of proposed algorithm on different BIQA techniques for TID2013 database, a SROCC, b LCC, c KCC, d RMSE

Fig. 9

Overall performance comparison of proposed algorithm on different BIQA techniques for the LIVE in the wild challenge database, a SROCC, b LCC, c KCC, d RMSE

The performance of the proposed algorithm can be further validated by Fig. 10, which compares the performance using all features against the proposed feature selection algorithm in terms of box plots for each BIQA technique. Box plots depict the dispersion of the data through a five-number summary: the minimum value, first quartile (Q1), median, third quartile (Q3), and maximum value of the samples. The interquartile range is the difference between Q3 and Q1. Q1 denotes the 25th percentile of the SROCC values, i.e., 25% of the SROCC values lie below Q1, and Q3 denotes the 75th percentile, i.e., 75% of the SROCC values lie below Q3. The box plots are computed for SROCC scores over 1000 runs averaged over all the IQA databases. It can be observed that the quality score predicted by the BIQA techniques shows higher correlation with MOS when feature selection is performed. The interquartile range of the SROCC box plot is reduced when feature selection is applied, which reflects a reduction in the standard deviation of the quality score prediction for the BIQA techniques.
Fig. 10

Box plots of SROCC score for different BIQA techniques, a BRISQUE, b GM-LOG, c BLIINDS II, d SSEQ, e DIIVINE, and f CurveletQA, using all features and after features selection algorithm

Table 1 shows the overall performance of the proposed feature selection algorithm under cross-database validation, where training is performed on one database and testing is performed on the other two databases. Four common distortion types, i.e., GB, JP2KC, JPEG, and WN, are considered for cross-database evaluation. It can be observed that the proposed feature selection algorithm performs better than using all the features for all the BIQA techniques considered in this work. The cross-database evaluation results show that the proposed feature selection algorithm is database independent and improves the overall performance of BIQA techniques irrespective of the database.
Table 1

Overall performance comparison of proposed algorithm for cross-database validation

Training | Testing | BIQA technique | All features (SROCC / LCC / KCC / RMSE) | After feature selection algorithm (SROCC / LCC / KCC / RMSE)

CSIQ | LIVE | BRISQUE [41] | 0.9311 / 0.9095 / 0.7677 / 0.2519 | 0.9650 / 0.9363 / 0.8173 / 0.2382
CSIQ | LIVE | BLIINDS II [39] | 0.9365 / 0.8799 / 0.7434 / 0.2311 | 0.9791 / 0.9768 / 0.7884 / 0.2062
CSIQ | LIVE | GM-LOG [30] | 0.9495 / 0.9542 / 0.7581 / 0.2006 | 0.9627 / 0.9642 / 0.7716 / 0.1981
CSIQ | LIVE | SSEQ [45] | 0.7012 / 0.6827 / 0.5112 / 0.1940 | 0.7068 / 0.6868 / 0.5183 / 0.2188
CSIQ | LIVE | DIIVINE [38] | 0.8475 / 0.8799 / 0.7722 / 0.6524 | 0.9713 / 0.9577 / 0.8379 / 0.4789
CSIQ | LIVE | CurveletQA [31] | 0.6170 / 0.5828 / 0.4279 / 0.2184 | 0.6516 / 0.6303 / 0.4869 / 0.1923
CSIQ | TID2013 | BRISQUE [41] | 0.8986 / 0.8997 / 0.7106 / 0.1791 | 0.9225 / 0.9293 / 0.7258 / 0.1580
CSIQ | TID2013 | BLIINDS II [39] | 0.8005 / 0.7671 / 0.6068 / 0.2338 | 0.8056 / 0.7951 / 0.6089 / 0.2187
CSIQ | TID2013 | GM-LOG [30] | 0.9051 / 0.9260 / 0.7000 / 0.1711 | 0.9074 / 0.9285 / 0.7055 / 0.1642
CSIQ | TID2013 | SSEQ [45] | 0.7728 / 0.7775 / 0.5650 / 0.1842 | 0.7819 / 0.7825 / 0.5692 / 0.1831
CSIQ | TID2013 | DIIVINE [38] | 0.8627 / 0.8610 / 0.8271 / 0.4781 | 0.9279 / 0.9669 / 0.8812 / 0.4286
CSIQ | TID2013 | CurveletQA [31] | 0.7763 / 0.7773 / 0.5708 / 0.1980 | 0.7869 / 0.7846 / 0.5810 / 0.1933
LIVE | CSIQ | BRISQUE [41] | 0.8998 / 0.9075 / 0.7073 / 0.2244 | 0.9133 / 0.9116 / 0.7244 / 0.2181
LIVE | CSIQ | BLIINDS II [39] | 0.9017 / 0.8928 / 0.6882 / 0.2634 | 0.9561 / 0.9754 / 0.7421 / 0.2293
LIVE | CSIQ | GM-LOG [30] | 0.9108 / 0.9000 / 0.7043 / 0.2426 | 0.9433 / 0.9618 / 0.7368 / 0.2238
LIVE | CSIQ | SSEQ [45] | 0.6884 / 0.6874 / 0.4875 / 0.2360 | 0.7266 / 0.7521 / 0.5283 / 0.2257
LIVE | CSIQ | DIIVINE [38] | 0.8714 / 0.8431 / 0.6673 / 0.4746 | 0.8817 / 0.8953 / 0.7272 / 0.4344
LIVE | CSIQ | CurveletQA [31] | 0.7122 / 0.7154 / 0.5284 / 0.2759 | 0.7350 / 0.7222 / 0.5440 / 0.2678
LIVE | TID2013 | BRISQUE [41] | 0.9072 / 0.8778 / 0.7185 / 0.2388 | 0.9100 / 0.8808 / 0.7235 / 0.2357
LIVE | TID2013 | BLIINDS II [39] | 0.9056 / 0.8384 / 0.6940 / 0.2467 | 0.9253 / 0.8919 / 0.7182 / 0.2348
LIVE | TID2013 | GM-LOG [30] | 0.9204 / 0.9106 / 0.7118 / 0.2234 | 0.9214 / 0.9180 / 0.7206 / 0.2188
LIVE | TID2013 | SSEQ [45] | 0.8501 / 0.8481 / 0.6317 / 0.2123 | 0.8527 / 0.8489 / 0.6538 / 0.2085
LIVE | TID2013 | DIIVINE [38] | 0.8672 / 0.8590 / 0.7209 / 0.2724 | 0.8876 / 0.8669 / 0.7264 / 0.2480
LIVE | TID2013 | CurveletQA [31] | 0.8417 / 0.8342 / 0.6467 / 0.2256 | 0.8537 / 0.8352 / 0.6692 / 0.2321
TID2013 | CSIQ | BRISQUE [41] | 0.8665 / 0.8741 / 0.7196 / 0.2174 | 0.9238 / 0.9149 / 0.7617 / 0.1996
TID2013 | CSIQ | BLIINDS II [39] | 0.8747 / 0.8326 / 0.6535 / 0.2689 | 0.8946 / 0.8878 / 0.6719 / 0.2544
TID2013 | CSIQ | GM-LOG [30] | 0.8393 / 0.8366 / 0.6360 / 0.1989 | 0.8582 / 0.8602 / 0.6579 / 0.1834
TID2013 | CSIQ | SSEQ [45] | 0.7281 / 0.7168 / 0.5116 / 0.2236 | 0.7282 / 0.7372 / 0.5187 / 0.2165
TID2013 | CSIQ | DIIVINE [38] | 0.8481 / 0.7940 / 0.6175 / 0.2904 | 0.8790 / 0.8824 / 0.6672 / 0.2007
TID2013 | CSIQ | CurveletQA [31] | 0.7723 / 0.7604 / 0.5634 / 0.2202 | 0.7743 / 0.7665 / 0.5711 / 0.2153
TID2013 | LIVE | BRISQUE [41] | 0.9288 / 0.9256 / 0.7481 / 0.2536 | 0.9426 / 0.9272 / 0.7618 / 0.2342
TID2013 | LIVE | BLIINDS II [39] | 0.9389 / 0.8917 / 0.7451 / 0.2279 | 0.9513 / 0.9228 / 0.7540 / 0.2071
TID2013 | LIVE | GM-LOG [30] | 0.9336 / 0.9286 / 0.7349 / 0.2109 | 0.9582 / 0.9373 / 0.7596 / 0.2043
TID2013 | LIVE | SSEQ [45] | 0.7046 / 0.6641 / 0.4958 / 0.2079 | 0.7268 / 0.7021 / 0.5210 / 0.1970
TID2013 | LIVE | DIIVINE [38] | 0.8657 / 0.8243 / 0.6464 / 0.2919 | 0.9026 / 0.8628 / 0.7280 / 0.2332
TID2013 | LIVE | CurveletQA [31] | 0.8121 / 0.7937 / 0.6122 / 0.2648 | 0.8209 / 0.8041 / 0.6252 / 0.2563

The italic values signify better performance when using all the features or proposed feature selection algorithm for a particular BIQA technique

Table 2 compares the proposed feature selection algorithm with the BIQA techniques in terms of number of features and total processing time, measured on a Core i7 processor with 8 GB of RAM operating at 2.3 GHz. It can be observed that the proposed distortion-specific feature selection algorithm reduces the number of features while improving the performance of the existing BIQA techniques. The proposed feature selection also shows a slight reduction in total processing time, since a smaller number of features leads to reduced training and testing time for the SVR. The time taken to compute the SROCC and LCC scores over individual features is not included, as it is performed only once to determine which features are selected; it would therefore be unfair to add it to the processing time for quality score prediction each time a test image is given as input. The largest reduction in processing time, 2.94%, is obtained for GM-LOG, and the smallest, 0.013%, for the BLIINDS II IQA technique.
Table 2

Reduction in number of features by using proposed algorithm for different BIQA techniques on LIVE database

BIQA technique | Total number of features | Proposed feature selection (FF / GB / JP2KC / JPEG / WN) | Total processing time, all features | Total processing time, proposed feature selection

BRISQUE [41] | 36 | 22 / 23 / 17 / 19 / 24 | 0.2215 | 0.2184
BLIINDS II [39] | 24 | 12 / 14 / 13 / 12 / 15 | 70.459 | 70.450
GM-LOG [30] | 40 | 22 / 20 / 24 / 24 / 25 | 0.2246 | 0.2180
SSEQ [45] | 12 | 5 / 5 / 4 / 6 / 6 | 2.0634 | 2.0627
DIIVINE [38] | 88 | 50 / 39 / 38 / 33 / 57 | 23.083 | 23.038
CurveletQA [31] | 12 | 6 / 6 / 6 / 6 / 10 | 2.8965 | 2.8938

The italic values signify the least execution time for a particular BIQA technique when all features are used or proposed feature selection is applied

4 Conclusion

BIQA techniques proposed in the literature use the same set of features for all distortion types to evaluate the quality score of images. Each distortion type affects an individual BIQA feature in a distinct manner because each type of distortion exhibits different characteristics. Therefore, using the same set of features for all distortion types will not yield optimum results. This paper presents a distortion-specific feature selection algorithm for blind image quality assessment based on the mean values of SROCC and LCC scores. All features whose individual SROCC and LCC scores are greater than the mean SROCC and LCC values computed over all features are selected for the specific distortion type. The proposed algorithm is tested on six BIQA techniques and over the four most commonly used IQA databases. The experimental results show that the proposed approach not only improves the performance of existing BIQA techniques but also reduces the number of features, which results in reduced processing time. The proposed distortion-specific feature selection algorithm can be used with any BIQA technique that follows a two-step approach. Cross-database evaluation results show that the proposed algorithm is robust and database independent. Furthermore, experimental results on the LIVE in the wild image quality challenge database show that the proposed algorithm is also valid for real images.


Acknowledgements

There are no acknowledgements.

Funding

No funding is available for this work.

Availability of data and materials

Please contact the author for data requests.

Authors’ contributions

All authors have contributed equally towards this paper. IFN and MM came up with the research idea for this work. IFN and WM performed the simulations. MM, KK, and BJ analyzed the results. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. W. Hou, X. Gao, D. Tao, X. Li, Blind image quality assessment via deep learning. IEEE Trans. Neural Netw. Learn. Syst. 26(6), 1275–1286 (2015)
  2. M. Oszust, Full-reference image quality assessment with linear combination of genetically selected quality measures. PLoS ONE 11(6), e0158333 (2016)
  3. H. Khosravi, M. H. Hassanpour, Model-based full reference image blurriness assessment. Multimed. Tools Appl. 76(2), 2733–2747 (2017)
  4. Z. Chen, J. Lin, N. Liao, C. W. Chen, Full reference quality assessment for image retargeting based on natural scene statistics modeling and bi-directional saliency similarity. IEEE Trans. Image Process. (2017)
  5. A. Saha, Q. J. Wu, Full-reference image quality assessment by combining global and local distortion measures. Signal Process. 128, 186–197 (2016)
  6. Y. Ding, S. Wang, D. Zhang, Full-reference image quality assessment using statistical local correlation. Electron. Lett. 50(2), 79–81 (2014)
  7. S. Rezazadeh, S. Coulombe, A novel discrete wavelet transform framework for full reference image quality assessment. Signal Image Video Process. 7(3), 559–573 (2013)
  8. H. Z. Nafchi, A. Shahkolaei, R. Hedjam, M. Cheriet, Mean deviation similarity index: efficient and reliable full-reference image quality evaluator. IEEE Access 4, 5579–5590 (2016)
  9. J. Yang, Y. Lin, B. Ou, X. Zhao, Image decomposition-based structural similarity index for image quality assessment. EURASIP J. Image Video Process. 2016(1), 31 (2016)
  10. G. Yang, D. Li, F. Lu, Y. Liao, W. Yang, RVSIM: a feature similarity method for full-reference image quality assessment. EURASIP J. Image Video Process. 2018(1), 6 (2018)
  11. Y. Liu, G. Zhai, K. Gu, X. Liu, D. Zhao, W. Gao, Reduced-reference image quality assessment in free-energy principle and sparse representation. IEEE Trans. Multimed. 20, 379–391 (2017)
  12. D. Liu, F. Li, H. Song, Regularity of spectral residual for reduced reference image quality assessment. IET Image Process. 11, 1135–1141 (2017)
  13. S. Golestaneh, L. J. Karam, Reduced-reference quality assessment based on the entropy of DWT coefficients of locally weighted gradient magnitudes. IEEE Trans. Image Process. 25(11), 5293–5303 (2016)
  14. J. Wu, W. Lin, Y. Fang, L. Li, G. Shi, I. Niwas, Visual structural degradation based reduced-reference image quality assessment. Signal Process. Image Commun. 47, 16–27 (2016)
  15. J. Wu, W. Lin, G. Shi, L. Li, Y. Fang, Orientation selectivity based visual pattern for reduced-reference image quality assessment. Inf. Sci. 351, 18–29 (2016)
  16. S. Bosse, Q. Chen, M. Siekmann, W. Samek, T. Wiegand, Shearlet-based reduced reference image quality assessment, in 2016 IEEE International Conference on Image Processing (ICIP) (IEEE, Piscataway, 2016), pp. 2052–2056
  17. Y. Zhang, T. D. Phan, D. M. Chandler, Reduced-reference image quality assessment based on distortion families of local perceived sharpness. Signal Process. Image Commun. 55, 130–145 (2017)
  18. Q. Wu, H. Li, F. Meng, K. N. Ngan, B. Luo, C. Huang, B. Zeng, Blind image quality assessment based on multichannel feature fusion and label transfer. IEEE Trans. Circ. Syst. Video Technol. 26(3), 425–440 (2016)
  19. Q. Li, W. Lin, J. Xu, Y. Fang, Blind image quality assessment using statistical structural and luminance features. IEEE Trans. Multimed. 18(12), 2457–2469 (2016)
  20. W. Lu, T. Xu, Y. Ren, L. He, Statistical modeling in the shearlet domain for blind image quality assessment. Multimed. Tools Appl. 75(22), 14417–14431 (2016)
  21. Y. Zhang, J. Wu, X. Xie, L. Li, G. Shi, Blind image quality assessment with improved natural scene statistics model. Digit. Signal Process. 57, 56–65 (2016)
  22. I. F. Nizami, M. Majid, H. Afzal, K. Khurshid, Impact of feature selection algorithms on blind image quality assessment. Arab. J. Sci. Eng. 43, 1–14 (2017)
  23. S. Du, Y. Yan, Y. Ma, Blind image quality assessment with the histogram sequences of high-order local derivative patterns. Digit. Signal Process. 55, 1–12 (2016)
  24. Y. Zhang, A. K. Moorthy, D. M. Chandler, A. C. Bovik, C-DIIVINE: no-reference image quality assessment based on local magnitude and phase statistics of natural scenes. Signal Process. Image Commun. 29(7), 725–747 (2014)
  25. G. Yang, Y. Liao, Q. Zhang, D. Li, W. Yang, No-reference quality assessment of noise-distorted images based on frequency mapping. IEEE Access 5, 23146–23156 (2017)
  26. I. F. Nizami, M. Majid, K. Khurshid, Efficient feature selection for blind image quality assessment based on natural scene statistics, in 2017 14th International Bhurban Conference on Applied Sciences and Technology (IBCAST) (IEEE, Piscataway, 2017), pp. 318–322
  27. L. Li, Y. Yan, Z. Lu, J. Wu, K. Gu, S. Wang, No-reference quality assessment of deblurred images based on natural scene statistics. IEEE Access 5, 2163–2171 (2017)
  28. K. Panetta, A. Samani, S. Agaian, A robust no-reference, no-parameter, transform domain image quality metric for evaluating the quality of color images (IEEE, Piscataway, 2018)
  29. H. R. Sheikh, A. C. Bovik, L. Cormack, No-reference quality assessment using natural scene statistics: JPEG2000. IEEE Trans. Image Process. 14(11), 1918–1927 (2005)
  30. W. Xue, X. Mou, L. Zhang, A. C. Bovik, X. Feng, Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Trans. Image Process. 23(11), 4850–4862 (2014)
  31. L. Liu, H. Dong, H. Huang, A. C. Bovik, No-reference image quality assessment in curvelet domain. Signal Process. Image Commun. 29(4), 494–505 (2014)
  32. D. Ghadiyaram, A. C. Bovik, Perceptual quality prediction on authentically distorted images using a bag of features approach. J. Vis. 17(1), 32 (2017)
  33. E. Siahaan, A. Hanjalic, J. A. Redi, Semantic-aware blind image quality assessment. Signal Process. Image Commun. 60, 237–252 (2018)
  34. B. Appina, S. Khan, S. S. Channappayya, No-reference stereoscopic image quality assessment using natural scene statistics. Signal Process. Image Commun. 43, 1–14 (2016)
  35. W. Hachicha, M. Kaaniche, A. Beghdadi, F. A. Cheikh, No-reference stereo image quality assessment based on joint wavelet decomposition and statistical models. Signal Process. Image Commun. 54, 107–117 (2017)
  36. T. Zhu, L. Karam, A no-reference objective image quality metric based on perceptually weighted local noise. EURASIP J. Image Video Process. 2014(1), 5 (2014)
  37. M. Shahid, A. Rossholm, B. Lövström, H.-J. Zepernick, No-reference image and video quality assessment: a classification and review of recent approaches. EURASIP J. Image Video Process. 2014(1), 40 (2014)
  38. A. K. Moorthy, A. C. Bovik, Blind image quality assessment: from natural scene statistics to perceptual quality. IEEE Trans. Image Process. 20(12), 3350–3364 (2011)
  39. M. A. Saad, A. C. Bovik, C. Charrier, Blind image quality assessment: a natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 21(8), 3339–3352 (2012)
  40. M. A. Saad, A. C. Bovik, C. Charrier, A DCT statistics-based blind image quality index. IEEE Signal Process. Lett. 17(6), 583–586 (2010)
  41. A. Mittal, A. K. Moorthy, A. C. Bovik, No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 21(12), 4695–4708 (2012)
  42. A. Mittal, R. Soundararajan, A. C. Bovik, Making a "completely blind" image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2013)
  43. C. Zhang, J. Pan, S. Chen, T. Wang, D. Sun, No reference image quality assessment using sparse feature representation in two dimensions spatial correlation. Neurocomputing 173, 462–470 (2016)
  44. Y. Li, L.-M. Po, X. Xu, L. Feng, No-reference image quality assessment using statistical characterization in the shearlet domain. Signal Process. Image Commun. 29(7), 748–759 (2014)
  45. L. Liu, B. Liu, H. Huang, A. C. Bovik, No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun. 29(8), 856–863 (2014)
  46. A. K. Moorthy, A. C. Bovik, A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 17(5), 513–516 (2010)
  47. L. He, D. Tao, X. Li, X. Gao, Sparse representation for blind image quality assessment, in 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, Piscataway, 2012), pp. 1146–1153
  48. Y. Lu, F. Xie, T. Liu, Z. Jiang, D. Tao, No reference quality assessment for multiply-distorted images based on an improved bag-of-words model. IEEE Signal Process. Lett. 22(10), 1811–1815 (2015)
  49. H. R. Sheikh, M. F. Sabir, A. C. Bovik, A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 15(11), 3440–3451 (2006)
  50. E. C. Larson, D. M. Chandler, Most apparent distortion: full-reference image quality assessment and the role of strategy. J. Electron. Imaging 19(1), 011006 (2010)
  51. N. Ponomarenko, L. Jin, O. Ieremeiev, V. Lukin, K. Egiazarian, J. Astola, B. Vozel, K. Chehdi, M. Carli, F. Battisti, et al., Image database TID2013: peculiarities, results and perspectives. Signal Process. Image Commun. 30, 57–77 (2015)
  52. D. Ghadiyaram, A. C. Bovik, Massive online crowdsourced study of subjective and objective picture quality. IEEE Trans. Image Process. 25(1), 372–387 (2016)

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan
  2. Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
  3. Department of Computer Engineering, Bahria University, Islamabad, Pakistan
  4. College of Information and Communication Engineering, Sungkyunkwan University, Seoul, South Korea
