Image Features and Feature Processing

Chapter in: Visual Quality Assessment by Machine Learning

Part of the book series: SpringerBriefs in Electrical and Computer Engineering ((BRIEFSSIGNAL))

Abstract

Image representation is an elementary problem in any image processing application. The most straightforward method is to represent an image point by point. For biological image-processing tasks such as recognition, retrieval, tracking, and categorization, this method would be very uneconomical: neighboring points in natural images are highly correlated, so natural images contain a large amount of redundancy. Biological image processing should remove as much of this redundancy as possible, which significantly benefits subsequent classification, recognition, or retrieval tasks. To achieve this goal, pictorial information should be processed so that the highest possible proportion of redundant information is filtered out. In this chapter, we first summarize state-of-the-art approaches to image representation by arranging them into basic and advanced processing categories, which yield basic features and advanced features, respectively. In addition, feature learning is investigated as a way to generate more efficient features for biological image-processing tasks; feature selection and feature extraction techniques are used in feature learning.
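
To make the redundancy and feature-extraction points concrete, the following minimal sketch (our illustration, not taken from the chapter) uses plain NumPy: it gathers 8x8 patches from a synthetic, spatially correlated image and applies PCA, one standard feature-extraction technique, showing that a handful of components captures most of the patch variance. The synthetic image, patch size, and the choice of PCA are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image": a cumulative-sum random field, so neighboring pixels are
# strongly correlated, mimicking the redundancy of natural images.
img = rng.standard_normal((128, 128))
img = np.cumsum(np.cumsum(img, axis=0), axis=1)
img = (img - img.mean()) / img.std()

# Collect non-overlapping 8x8 patches as 64-dimensional feature vectors.
patches = np.array([
    img[i:i + 8, j:j + 8].ravel()
    for i in range(0, 128, 8)
    for j in range(0, 128, 8)
])  # shape (256, 64)

# Feature extraction by PCA: center the patches and keep the top-k directions.
mean = patches.mean(axis=0)
centered = patches - mean
_, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 8                               # keep only 8 of 64 dimensions
codes = centered @ vt[:k].T         # compressed feature vectors
recon = codes @ vt[:k] + mean       # approximate patch reconstruction

explained = (s[:k] ** 2).sum() / (s ** 2).sum()
rel_err = np.linalg.norm(recon - patches) / np.linalg.norm(patches)
print(f"variance captured by {k}/64 components: {explained:.3f}")
print(f"relative reconstruction error: {rel_err:.3f}")
```

Because adjacent pixels covary strongly, the reported variance fraction is typically close to one even with an eightfold reduction in dimensionality, which is the sense in which a learned feature basis is more economical than point-by-point representation.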

Author information

Corresponding author: Long Xu

Copyright information

© 2015 The Author(s)

Cite this chapter

Xu, L., Lin, W., Kuo, C.-C.J. (2015). Image Features and Feature Processing. In: Visual Quality Assessment by Machine Learning. SpringerBriefs in Electrical and Computer Engineering. Springer, Singapore. https://doi.org/10.1007/978-981-287-468-9_3

  • DOI: https://doi.org/10.1007/978-981-287-468-9_3

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-287-467-2

  • Online ISBN: 978-981-287-468-9

  • eBook Packages: Engineering, Engineering (R0)
