An Approach of Transferring Pre-trained Deep Convolutional Neural Networks for Aerial Scene Classification
Feature selection and feature extraction play a vital role in image classification tasks. Since the advent of deep learning methods, researchers have made significant efforts to obtain an optimal feature set of images for improving classification performance. Although several deep architectures of Convolutional Neural Networks (CNNs) have been successfully designed, training such deep architectures on small datasets such as aerial scenes often leads to overfitting, which degrades classification accuracy. To tackle this issue, past works have adopted pre-trained CNNs as feature extractors, where the extracted features are transferred directly and only the classification layer is trained to classify images on the target dataset. In this work, a feature extraction approach is proposed in which both “multi-layer” and “multi-model” features are extracted from pre-trained CNNs. “Multi-layer” features are the concatenation of features from multiple layers within the same CNN, and “multi-model” features are the concatenation of features from different CNN models. The concatenated features are then reduced with a dimensionality-reduction method to obtain an optimal feature set.
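The fusion scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer names, feature dimensions, and random features stand in for activations that would actually be extracted from pre-trained CNNs, and PCA via SVD is used here as one possible choice for the reduction step the abstract leaves unspecified.

```python
import numpy as np

# Hypothetical feature matrices for N images, standing in for activations
# extracted from pre-trained CNNs (names and dimensions are illustrative,
# not taken from the paper).
rng = np.random.default_rng(0)
n_images = 50
feats_fc6     = rng.normal(size=(n_images, 4096))  # one fc layer of model A
feats_fc7     = rng.normal(size=(n_images, 4096))  # a deeper layer of model A
feats_model_b = rng.normal(size=(n_images, 2048))  # pooled features of model B

# "Multi-layer" fusion: concatenate features from layers of the same CNN.
multi_layer = np.concatenate([feats_fc6, feats_fc7], axis=1)

# "Multi-model" fusion: additionally concatenate features from another CNN.
multi_model = np.concatenate([multi_layer, feats_model_b], axis=1)

def pca_reduce(X, k):
    """Project X onto its top-k principal components (plain SVD-based PCA,
    chosen here as an example reduction method)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

reduced = pca_reduce(multi_model, k=32)
print(multi_model.shape, reduced.shape)  # (50, 10240) (50, 32)
```

The concatenated vector grows quickly (8192 dimensions from two layers alone), which is why the reduction step matters before training a classifier on a small aerial-scene dataset.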
Keywords: Convolutional neural network · Feature extraction · Transfer learning