Convolutional Neural Networks and Texture Classification

Image Texture Analysis

Abstract

The convolutional neural network (CNN) is an instrumental computational model not only in computer vision but also in many image and video applications. Similar to the Cognitron and Neocognitron, a CNN can automatically learn features from data through the multiple layers of neurons in the network. Several different versions of the CNN have been reported in the literature. If an original texture image is fed into the CNN, the network is called an image-based CNN. A major problem with image-based CNNs is that a very large number of training images is required for good generalization because of rotation and scaling changes in the images. An alternative method is to divide an image into many small patches for CNN training; these patches are very similar to those used in the K-views model. In this chapter, we briefly explain the image-based CNN and the patch-based CNN for image texture classification. The LeNet-5 architecture is used as the basic CNN model. The CNN is useful not only for image recognition but also for textural feature representation. Texture features that are automatically learned and extracted from a massive number of images using a CNN have become the focus of developing feature extraction methods.
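
As a concrete illustration of the ideas sketched above, the following is a minimal LeNet-5-style network for patch-based texture classification, written in PyTorch. It is only a sketch, not the implementation described in this chapter: the 32x32 grayscale patch size, the 16-pixel stride of the hypothetical extract_patches helper, and the number of texture classes are assumptions chosen for the example. The features method returns the penultimate (84-dimensional) activation, which can serve as a learned texture-feature vector.

```python
# Minimal sketch (not the authors' implementation) of a LeNet-5-style CNN
# for patch-based texture classification. The 32x32 patch size, stride,
# and class count are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LeNet5Texture(nn.Module):
    """LeNet-5-style network: two conv/pool stages followed by three
    fully connected layers, as in the classic LeNet-5 description."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)    # 32x32 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)   # 14x14 -> 10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def features(self, x: torch.Tensor) -> torch.Tensor:
        """Return the 84-dimensional penultimate activation, usable as a
        learned texture-feature vector."""
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)      # -> 6 x 14 x 14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)      # -> 16 x 5 x 5
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        return F.relu(self.fc2(x))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc3(self.features(x))               # per-patch class scores


def extract_patches(image: torch.Tensor, patch: int = 32, stride: int = 16) -> torch.Tensor:
    """Split a single-channel image (1 x H x W) into overlapping patches
    for patch-based CNN training; returns N x 1 x patch x patch."""
    windows = image.unfold(1, patch, stride).unfold(2, patch, stride)
    return windows.reshape(1, -1, patch, patch).permute(1, 0, 2, 3)


if __name__ == "__main__":
    texture = torch.rand(1, 128, 128)        # synthetic texture image
    batch = extract_patches(texture)         # 49 patches of 32x32
    model = LeNet5Texture(num_classes=10)
    scores = model(batch)                    # per-patch class scores
    feats = model.features(batch)            # per-patch 84-D texture features
    print(batch.shape, scores.shape, feats.shape)
```

Training on many small patches drawn from each image increases the number of training samples available to the network, which is the motivation for the patch-based CNN discussed in this chapter.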

I have just three things to teach: simplicity, patience, compassion. These three are your greatest treasures.

— Lao Tzu

Author information

Correspondence to Chih-Cheng Hung.

Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Hung, CC., Song, E., Lan, Y. (2019). Convolutional Neural Networks and Texture Classification. In: Image Texture Analysis. Springer, Cham. https://doi.org/10.1007/978-3-030-13773-1_10

  • DOI: https://doi.org/10.1007/978-3-030-13773-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-13772-4

  • Online ISBN: 978-3-030-13773-1

  • eBook Packages: Computer Science, Computer Science (R0)
