Abstract
This paper introduces a novel method for satellite colour-imagery retrieval, based on an adaptive Gaussian–Markov random field (AGMRF) model with a Bayes-driven deep convolutional neural network (AGMRF–BDCNN). The input imagery is decomposed into structure, microstructure, and texture components; AGMRF-driven features and statistical features are extracted from these components and formulated as the feature vector of the query imagery. Cosine-direction and Bhattacharyya distance measures are deployed to match this feature vector against the entries of the feature-vector database. When the query features match an entry, the corresponding reference imagery in the database is marked, indexed, and retrieved. Three benchmark data sets, SceneSat, PatternNet, and UC Merced, have been used to validate the proposed AGMRF–BDCNN method, which achieves an ANMRR of 0.2319 and a mAP of 0.7156 on SceneSat, an ANMRR of 0.2316 and a mAP of 0.7816 on UC Merced, and an ANMRR of 0.2405 and a mAP of 0.6979 on PatternNet. The obtained results are comparable to those of state-of-the-art methods.
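As a concrete illustration of the two matching measures named above, the following Python sketch computes the cosine similarity between feature vectors and the Bhattacharyya distance between Gaussian-modelled query and reference features, using the mean vectors \(m_q, m_r\), covariance matrices \(W_q, W_r\), and pooled covariance \(W\) from the notation list. The function names and the Gaussian form of the distance are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cosine_similarity(fv_q, fv_r):
    """Cosine of the angle between query and reference feature vectors."""
    return float(np.dot(fv_q, fv_r) / (np.linalg.norm(fv_q) * np.linalg.norm(fv_r)))

def bhattacharyya_distance(m_q, W_q, m_r, W_r):
    """Bhattacharyya distance between two Gaussian feature distributions.

    m_q, m_r : mean vectors of query and reference imagery features.
    W_q, W_r : their covariance matrices.
    Uses the pooled covariance W = (W_q + W_r) / 2.
    """
    W = (W_q + W_r) / 2.0
    diff = m_q - m_r
    # Mahalanobis-type term on the mean difference
    term1 = 0.125 * diff @ np.linalg.solve(W, diff)
    # Term penalising mismatch between the two covariances
    term2 = 0.5 * np.log(np.linalg.det(W) / np.sqrt(np.linalg.det(W_q) * np.linalg.det(W_r)))
    return float(term1 + term2)
```

Identical distributions give a distance of zero, and identical feature vectors give a cosine similarity of one, which is why a small distance and a large cosine value together indicate a match.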
Abbreviations
- Pr: Probability
- \(X(k,l)\): Centre pixel of the imagery at location \((k,l)\)
- \(X(k+p,\,l+q)\): Neighbouring pixels of the centre pixel of the imagery
- \(p,q\): Indices of the neighbouring pixels relative to the centre pixel
- \(w(L,L)\): Window of size \(L \times L\)
- \(w(k,l)\): Centre pixel of the window
- \(F(M,N)\): Whole imagery of size \(M \times N\)
- \(\hat{F}(M,N)\): Predicted/reconstructed imagery
- \(T_{c}(M,N)\): Texture component imagery
- \(S_{c}(M,N)\): Structure component imagery
- \(MS_{c}(M,N)\): Microstructure component imagery
- \(FT(M,N,I)\): Feature-tensor matrix
- \(\varepsilon(k,l)\): Error/noise term
- \(\alpha, \theta, \varphi, K\): AGMRF model parameters
- \(\hat{\alpha}, \hat{\theta}, \hat{\varphi}, \hat{K}\): Estimated values of the parameters
- \(T(r,s)\) or \(T_{rs}\): Covariance matrix of order \(r \times s\) in parameter estimation
- \(T(r,r)\) or \(T_{rr}\): Diagonal elements of the covariance matrix, i.e. the variances
- Co: Components
- \(W_{q}, W_{r}\): Covariance matrices of the query and reference imageries (Bhattacharyya distance, BD)
- \(E(\cdot)\): Estimate of
- \(W\): Pooled covariance matrix of the query and reference imageries (BD)
- \(m_{q}, m_{r}\): Mean vectors of the query and reference imageries (BD)
- \(\overline{w}\): Mean value of the window
- \(\Gamma(k,l)\): Filter matrix of size 5 × 5 with model coefficients
- \(\Gamma\): AGMRF model coefficient
- \(\overrightarrow{FV_{q}}, \overrightarrow{FV_{r}}\): Feature vectors of the query and reference imageries
- \(m\overrightarrow{FV_{q}}\): Median of the feature vectors of the query imageries
- \(m\overrightarrow{FV_{r}}\): Median of the feature vectors of the reference imageries
- \(m\overrightarrow{FV_{db}}\): Median (index) of the feature vectors of the feature-vector database
- \(\sigma_{S_{c}}\): Standard deviation of the structure component
- \(\overline{S}\): Mean value of the structure component
- \(\Theta\): Parameter set
- \(\overline{f}\): Mean value of the pixels in a sub-imagery
- \(\Delta_{i}\): Parameter-updating factor for the \(i\)th iteration
Funding
This study received no funding from any agency.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The author K. Seetharaman does not have any conflict of interest.
Human and animal rights
As this study involved only mathematical and computational work, there was no possibility of using humans or animals for experimental purposes. This article does not contain any studies with human participants or animals performed by any of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A
The joint probability density function of the pixels of the imagery is defined as in Eq. (A1):
where
\(Q = T_{00} + K^{2}\sum_{r=1}^{N} S_{r}^{2} T_{rr} + 2K^{2}\sum_{\substack{r,s=1 \\ r<s}}^{N} S_{r} S_{s} T_{rs} - 2K\sum_{r=1}^{N} S_{r} T_{0r}\), subject to \(K \in \mathbb{R}\), \(\alpha > 1\), \(0 < \theta < \pi\), \(0 < \varphi < \pi/2\), and \(\sigma^{2} > 0\).
Each parameter of the parameter set Θ follows its own distribution: α follows a displaced exponential distribution, while θ, ϕ, and K follow uniform distributions. The prior distributions of the parameters are assigned as follows.
1. α follows a displaced exponential distribution with parameter β, that is
2. \(\sigma^{2}\) follows an inverted Gamma distribution with parameters ν and δ, that is
3. K, θ, and ϕ are uniformly distributed over their respective domains, that is
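The displayed prior densities (A2)–(A4) are not reproduced above; as a sketch, the standard forms consistent with the stated constraints (\(\alpha > 1\), \(0 < \theta < \pi\), \(0 < \varphi < \pi/2\)) would be the following, where the exact parameterisations are assumptions:

```latex
% Assumed standard parameterisations; (A2)-(A4) themselves are not shown above.
p(\alpha \mid \beta) = \beta\, e^{-\beta(\alpha - 1)}, \quad \alpha > 1
\qquad \text{(displaced exponential)}

p(\sigma^{2} \mid \nu, \delta) \propto (\sigma^{2})^{-(\nu + 1)}\, e^{-\delta/\sigma^{2}}, \quad \sigma^{2} > 0
\qquad \text{(inverted Gamma)}

p(\theta) = \tfrac{1}{\pi},\ 0 < \theta < \pi; \qquad
p(\varphi) = \tfrac{2}{\pi},\ 0 < \varphi < \tfrac{\pi}{2}
\qquad \text{(uniform)}
```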
The joint prior density function of the parameter set Θ is given in Eq. (A5):
Using Eqs. (A1), (A2), and Bayes' rule, the joint posterior density of α, θ, and ϕ is obtained as
Integrating Eq. (A6) with respect to \(\sigma^{2}\), the joint posterior density of K, α, θ, and ϕ is obtained as
where
That is
where
\(a_{1} = \frac{a}{C}\), \(b_{1} = \frac{b}{a}\), \(a = \sum_{r=1}^{N} S_{r}^{2} T_{rr} + 2\sum_{\substack{r,s=1 \\ r<s}}^{N} S_{r} S_{s} T_{rs}\), \(b = \sum_{r=1}^{N} S_{r}^{2} T_{rr}\), \(T_{rs} = \sum_{t=1}^{N} X_{t-r} X_{t-s}\), \(r,s = 0,1,2,\ldots,N\), and \(S_{r} = \frac{\sin(r\theta)\cos(r\varphi)}{\alpha^{r}}\).
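The quantities \(S_r\) and \(T_{rs}\) above can be computed directly. The following Python sketch does so for a 1-D pixel sequence; the sum range of \(T_{rs}\) is truncated so that all lagged indices stay in bounds, which is a boundary-handling assumption, since the appendix does not specify it:

```python
import numpy as np

def S(r, alpha, theta, phi):
    # S_r = sin(r*theta) * cos(r*phi) / alpha**r, as defined in Appendix A
    return np.sin(r * theta) * np.cos(r * phi) / alpha ** r

def T(X, r, s):
    # T_rs = sum_t X[t-r] * X[t-s]; t starts at max(r, s) so that
    # both lagged indices t-r and t-s are valid (boundary assumption)
    X = np.asarray(X, dtype=float)
    N = len(X)
    t0 = max(r, s)
    return float(np.sum(X[t0 - r:N - r or None] * X[t0 - s:N - s or None]))
```

For example, `T([1, 2, 3, 4], 0, 1)` sums the products of adjacent pixels: 2·1 + 3·2 + 4·3 = 20.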
Proper Bayesian inference on the parameters α, θ, and ϕ can be obtained from their respective posterior density functions given in (A7):
The point estimates of the parameters α, θ, and ϕ are taken as the means of their respective marginal posterior distributions, i.e., the posterior means. To minimise computational effort, the posterior mean of α is computed first. Then, with α fixed at its posterior mean, the conditional means of θ and ϕ are evaluated. Finally, with α, θ, and ϕ fixed at their posterior means, the conditional mean of K is evaluated. Thus, the estimates are
This Bayesian parameter-estimation scheme is the reason the proposed method is termed a Bayes-driven deep CNN.
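The sequential fixing scheme described above (posterior mean of α first, then conditional means of θ, ϕ, and K) can be sketched generically in Python. The density callables below stand in for the posterior densities of (A7) and are assumptions, since those densities are not reproduced here:

```python
import numpy as np

def posterior_mean(density, grid):
    """Numerical posterior mean of a 1-D (possibly unnormalised) density on a grid."""
    w = density(grid)
    return float(np.sum(grid * w) / np.sum(w))

def sequential_estimates(p_alpha, p_theta_given, p_phi_given, p_K_given,
                         alpha_grid, theta_grid, phi_grid, K_grid):
    """Coordinate-wise scheme of Appendix A: fix each parameter at its
    (conditional) posterior mean before estimating the next one."""
    a_hat = posterior_mean(p_alpha, alpha_grid)
    t_hat = posterior_mean(lambda t: p_theta_given(t, a_hat), theta_grid)
    f_hat = posterior_mean(lambda f: p_phi_given(f, a_hat, t_hat), phi_grid)
    K_hat = posterior_mean(lambda K: p_K_given(K, a_hat, t_hat, f_hat), K_grid)
    return a_hat, t_hat, f_hat, K_hat
```

Evaluating each one-dimensional mean on a grid avoids integrating over the full joint posterior, which is the computational saving the sequential scheme aims at.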
Appendix B
Similarly, \(\Gamma_{3}\), \(\Gamma_{4}\), and \(\Gamma_{5}\) are computed.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Seetharaman, K., Vasanthi, M. Satellite imagery retrieval based on adaptive Gaussian–Markov random field model with Bayes deep convolutional neural network. Soft Comput 28, 661–684 (2024). https://doi.org/10.1007/s00500-023-09418-9
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1007/s00500-023-09418-9