Abstract
The convolutional neural network (CNN) is an instrumental computational model not only in computer vision but also in many image and video applications. Like the Cognitron and Neocognitron, a CNN automatically learns features from data through the multiple layers of neurons in the network. Several variants of the CNN have been reported in the literature. If an original image texture is fed into the CNN, it is called an image-based CNN. A major problem with image-based CNNs is that good generalization demands a very large number of training images, owing to rotation and scaling changes in the images. An alternative is to divide an image into many small patches for CNN training, much as patches are used in the K-views model. In this chapter, we briefly explain the image-based CNN and the patch-based CNN for image texture classification. The LeNet-5 architecture is used as the basic CNN model. The CNN is useful not only for image recognition but also for textural feature representation. Texture features that are automatically learned and extracted from massive collections of images by CNNs have become the focus of feature-extraction research.
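The patch-based approach mentioned above can be sketched as a simple sliding-window extraction: each square patch cut from a texture image becomes an independent training sample carrying the class label of its source image. The function name and parameters below are illustrative, not taken from the chapter:

```python
def extract_patches(image, patch_size, stride):
    """Slide a square window over a 2-D texture (a list of pixel rows)
    and collect every patch that fits entirely inside the image."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            patches.append([row[c:c + patch_size]
                            for row in image[r:r + patch_size]])
    return patches

# Example: a 64x64 texture split into non-overlapping 16x16 patches.
# In patch-based training, each patch would be fed to the CNN as a
# separate sample labeled with its source texture's class.
texture = [[0] * 64 for _ in range(64)]
patches = extract_patches(texture, patch_size=16, stride=16)
print(len(patches))  # 16 patches (a 4x4 grid)
```

Using a stride smaller than the patch size yields overlapping patches, which further multiplies the effective number of training samples per image.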
I have just three things to teach: simplicity, patience, compassion. These three are your greatest treasures.
— Lao Tzu
References
Brodatz P (1999) Textures: a photographic album for artists and designers. Dover Publications. ISBN 0486406997
Dana KJ, van Ginneken B, Nayar SK, Koenderink JJ (1999) Reflectance and texture of real-world surfaces. ACM Trans Graph 18(1):1–34
Davies ER (2018) Computer vision: principles, algorithms, applications, and learning, 5th edn. Academic, New York
Fukushima K, Miyake S, Ito T (1983) Neocognitron: a neural network model for a mechanism of visual pattern recognition. IEEE Trans Syst Man Cybern SMC-13(5):826–834
Gonzalez RC, Woods RE (2002) Digital image processing, 2nd edn. Prentice Hall, Englewood Cliffs
Goodfellow I, Bengio Y, Courville A (2016) Deep learning. The MIT Press, Cambridge
Hafemann LG (2014) An analysis of deep neural networks for texture classification. M.S. thesis, Universidade Federal do Paraná
Hayman E, Caputo B, Fritz M, Eklundh J (2004) On the significance of real-world conditions for material classification. In: European conference on computer vision, vol 4. pp 253–266
Heaton J (2015) Artificial intelligence for humans: deep learning and neural networks, vol 3. Heaton Research, Inc.
Huang Z, Pan Z, Lei B (2017) Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data. Remote Sens 9:907. https://doi.org/10.3390/rs9090907
Jarrett K, Kavukcuoglu K, Ranzato M, LeCun Y (2009) What is the best multi-stage architecture for object recognition? In: IEEE 12th international conference on computer vision, Kyoto, Japan, 29 Sept−2 Oct 2009, pp 2146–2153
Karpathy A, Li F-F (2015) Deep visual-semantic alignments for generating image descriptions, CVPR
Krizhevsky A, Sutskever I, Hinton GE (2017) ImageNet classification with deep convolutional neural networks. Commun ACM 60(6):84–90
Lazebnik S, Schmid C, Ponce J (2005) A sparse texture representation using local affine regions. IEEE Trans Pattern Anal Mach Intell 27(8):1265–1278
LeCun Y, Boser B, Denker JS, Henderson D, Howard RE, Hubbard W, Jackel LD (1989) Backpropagation applied to handwritten zip code recognition. Neural Comput 1(4):541–551
LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324
Liu L, Fieguth P, Clausi D, Kuang G (2012) Sorted random projections for robust rotation-invariant texture classification. Pattern Recogn 45(6):2405–2418
Liu L, Chen J, Fieguth P, Zhao G, Chellappa R, Pietikainen M (2018) BoW meets CNN: two decades of texture representation. Int J Comput Vis 1–26. https://doi.org/10.1007/s11263-018-1125-z
Oquab M, Bottou L, Laptev I, Sivic J (2013) Learning and transferring mid-level image representations using convolutional neural networks, INRIA, Technical report, HAL-00911179
Pan SJ, Yang Q (2010) A survey on transfer learning. IEEE Trans Knowl Data Eng 22(10):1345–1359
Snyder WE, Qi H (2004) Machine vision. Cambridge University Press, Cambridge
Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition, ICLR
Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15:1929–1958
Sun X (2014) Robust texture classification based on machine learning, Ph.D. thesis, Deakin University
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR), Boston, MA, USA, 7–12 June 2015
Van Den Oord A, Dieleman S, Zen H, Simonyan K, Vinyals O, Graves A, Kavukcuoglu K (2016) WaveNet: a generative model for raw audio. In: SSW, p 125
Varma M, Zisserman A (2005) A statistical approach to texture classification from single images. Int J Comput Vis 62(1):61–81
Varma M, Zisserman A (2009) A statistical approach to material classification using image patch exemplars. IEEE Trans Pattern Anal Mach Intell 31(11):2032–2047
Wong SC, Gatt A, Stamatescu V, McDonnell MD (2016) Understanding data augmentation for classification: when to warp? In: International conference on digital image computing techniques and applications (DICTA), Gold Coast, QLD, Australia, 30 Nov–2 Dec 2016
Yosinski J, Clune J, Bengio Y, Lipson H (2014) How transferable are features in deep neural networks? In: Advances in neural information processing systems (NIPS 2014), vol 27
Zeiler MD, Fergus R (2014) Visualizing and understanding convolutional networks. In: Fleet D et al (eds) ECCV 2014, Part I, LNCS, vol 8689. Springer International Publishing, Switzerland, pp 818–833
Zhang J, Marszalek M, Lazebnik S, Schmid C (2007) Local features and kernels for classification of texture and object categories: a comprehensive study. Int J Comput Vis 73(2):213–238
Microsoft Research, Project Catapult. https://www.microsoft.com/en-us/research/project/project-catapult/
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this chapter
Hung, CC., Song, E., Lan, Y. (2019). Convolutional Neural Networks and Texture Classification. In: Image Texture Analysis. Springer, Cham. https://doi.org/10.1007/978-3-030-13773-1_10
Print ISBN: 978-3-030-13772-4
Online ISBN: 978-3-030-13773-1