AggregationNet: Identifying Multiple Changes Based on Convolutional Neural Network in Bitemporal Optical Remote Sensing Images
The detection of multiple changes (i.e., different change types) in bitemporal remote sensing images is a challenging task. Numerous methods focus on detecting the locations of change while the detailed “from-to” change types are neglected. This paper presents a supervised framework named AggregationNet to identify the specific “from-to” change types. AggregationNet takes two image patches as input and directly outputs the change type. It comprises a feature extraction part and a feature aggregation part. Deep “from-to” features are extracted by the feature extraction part, a two-branch convolutional neural network. The feature aggregation part is adopted to explore the temporal correlation of the bitemporal image patches. A one-hot label map is proposed to facilitate AggregationNet: exactly one element in the label map is set to 1 and all others are set to 0, and different change types are represented by different locations of the 1. To verify the effectiveness of the proposed framework, we perform experiments on general optical remote sensing image classification datasets as well as a change detection dataset. Extensive experimental results demonstrate the effectiveness of the proposed method.
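The one-hot label map described above can be sketched as follows. This is a minimal illustration, assuming K land-cover classes so that there are K × K possible “from-to” change types; the function names and the row-major index layout are our own assumptions, not taken from the paper.

```python
import numpy as np

def from_to_one_hot(from_class: int, to_class: int, num_classes: int) -> np.ndarray:
    """Encode a 'from-to' change type as a one-hot label map.

    With num_classes land-cover classes there are num_classes**2 possible
    change types; the position of the single 1 identifies the type.
    (Illustrative sketch; index layout is an assumption.)
    """
    label = np.zeros(num_classes * num_classes, dtype=np.float32)
    label[from_class * num_classes + to_class] = 1.0  # location of the 1 encodes the type
    return label

def decode_change_type(label: np.ndarray, num_classes: int) -> tuple:
    """Recover the (from, to) class pair from the one-hot label map."""
    idx = int(np.argmax(label))
    return idx // num_classes, idx % num_classes
```

For example, with 4 classes the change type “class 1 to class 2” is encoded as a 16-dimensional vector whose single 1 sits at index 6, and `decode_change_type` inverts the encoding.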
Keywords: Multiple change detection · Remote sensing · Aggregation network
This study was partly supported by the National Science and Technology Major Project (21-Y20A06-9001-17/18), the National Key Research and Development Program of China (No. 2018YFB0505000), the National Natural Science Foundation of China (No. 41571402), and the Science Fund for Creative Research Groups of the National Natural Science Foundation of China (No. 61221003).