Remote Sensing Image Matching Based on Improved ORB in the NSCT Domain
Abstract
Aiming at the problems that the ORB algorithm lacks scale invariance and yields low matching accuracy in image matching, an improved ORB algorithm is proposed on the basis of the SURF algorithm. Exploiting the flexibility of NSCT in image decomposition and the effectiveness of the improved ORB algorithm in remote sensing image matching, an improved ORB algorithm in the NSCT domain is proposed for remote sensing image matching. The image to be matched and the reference image are each decomposed by NSCT, yielding two corresponding low-frequency images. To reduce the influence of high-frequency noise on the matching results, the two low-frequency images are fed into the improved ORB algorithm to obtain initial matches. The RANSAC algorithm is then adopted to eliminate mismatched points and complete the image matching. The experimental results show that the algorithm compensates for the ORB algorithm's lack of scale invariance and effectively improves matching speed and accuracy under scale and rotation changes between two images. Moreover, the algorithm is more robust than classical methods in many complex situations, such as image blur, field-of-view change, and noise interference.
Keywords
Remote sensing image matching; ORB algorithm; SURF algorithm; NSCT
Introduction
Image matching is a basic component of machine vision and is now widely used in many fields, such as medicine, agriculture, remote sensing, machinery, and artificial intelligence (Uchiyama et al. 2015; Schmid et al. 2000; Reese et al. 2015; Sedaghat and Ebadi 2015; Lee et al. 2016; Ye et al. 2017). Image matching methods are mainly divided into gray-based matching and feature-based matching (Moon and Loh 2015). Matching based on image gray information requires a large amount of computation and is easily affected by illumination and noise. Matching based on image features overcomes these shortcomings: it adapts better to gray-scale changes, image deformation, and image occlusion. Therefore, feature-based matching is more widely used in practical applications. The common feature-based approach is to use local feature descriptors, on which a great deal of research has been done. Lowe (2002) proposed the scale-invariant feature transform (SIFT) algorithm, which is robust but computationally expensive; scholars have therefore studied and improved SIFT. Ke and Sukthankar (2004) proposed PCA-SIFT to reduce the dimension of the descriptor and the computational complexity. The speeded-up robust features (SURF) algorithm, first presented by Bay et al. (2006), is faster and more robust than SIFT. However, these algorithms still do not meet real-time requirements. Rublee et al. (2011) proposed the Oriented FAST and Rotated BRIEF (ORB) algorithm in 2011. Although ORB is fast, it lacks scale invariance, so its matching performance is poor when the image scale changes. Thus, research on and improvement of ORB has great practical significance. In recent years, there have been many improvements to the SIFT and SURF algorithms (Chen et al. 2011; Liu 2016; Yu et al. 2015; Ma 2015), but little improvement has been made to ORB. Gao and Zhu (2016) used an improved ORB algorithm to detect moving objects for a driving-assistance system, improving matching accuracy by establishing a matching-optimization strategy to eliminate mismatched pairs; Li et al. (2016) combined Hamming-distance and cosine-similarity measurements to describe the binary eigenvectors, effectively reducing the false-match rate; Zou (2015) combined the ORB algorithm with nearest-neighbor (NN) search, replacing the Hamming distance with an NN search and thereby improving matching speed. However, none of these improved algorithms solves the problem that the ORB algorithm lacks scale invariance.
In summary, this paper improves the ORB algorithm. The scale invariance of the SURF algorithm is used to obtain a new ORB-SURF algorithm (abbreviated as OR-SURF), which not only retains the speed advantage of the ORB algorithm but also gives it scale invariance; the non-subsampled contourlet transform (NSCT) is used to reduce the impact of high-frequency noise on the matching results; and the random sample consensus (RANSAC) algorithm is applied to eliminate false matching points and complete image matching. The improved algorithm effectively solves the problem that the ORB algorithm lacks scale invariance and improves the accuracy, speed, and robustness of remote sensing image matching.
The Theory of NSCT
Improved ORB Algorithm
SURF Feature Point Detection
To ensure that the image-matching process is scale invariant, a scale space is constructed and feature points are extracted at different scales. When building the scale space, the size of the original image is not changed; instead, the integral image of the original image is filtered with box filters of increasing size to form the scale space. To determine SURF feature points, a Hessian threshold H is set and non-maximum suppression in a 3 × 3 × 3 neighborhood is applied: a point is taken as a feature point when its Hessian response is larger than the threshold H and larger than the responses of the 26 adjacent points. Finally, interpolation is performed to obtain stable, sub-pixel-accurate feature points.
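The thresholding and 3 × 3 × 3 non-maximum suppression described above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the function name, the shape of the response stack, and the strict-maximum tie handling are assumptions.

```python
import numpy as np

def scale_space_maxima(response, hessian_thresh):
    """Find candidate feature points in a scale-space response stack.

    A point (s, y, x) is kept when its Hessian response exceeds the
    threshold H and is the strict maximum of its 3x3x3 neighborhood,
    i.e. larger than all 26 adjacent responses.

    response: array of shape (scales, height, width).
    Returns a list of (scale, y, x) tuples.
    """
    S, H, W = response.shape
    maxima = []
    for s in range(1, S - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                v = response[s, y, x]
                if v <= hessian_thresh:
                    continue
                nbhd = response[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                # Strict maximum: equals the neighborhood max exactly once
                if v >= nbhd.max() and (nbhd == v).sum() == 1:
                    maxima.append((s, y, x))
    return maxima
```

In a full SURF pipeline each surviving candidate would then be refined by interpolating the response around the maximum, as the text notes.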
Generating OR-SURF Feature Descriptor
Feature Point Matching
After the OR-SURF algorithm has produced the feature points, the next step is matching them. Because the k-dimensional tree (KD-tree) nearest-neighbor search performs poorly on high-dimensional data, the improved KD-tree nearest-neighbor search algorithm Best Bin First (BBF) is used in this paper. Compared with the plain KD-tree search, BBF adds a priority queue to the backtracking stage, so that backtracking proceeds in priority order; BBF also sets a running-time limit, which improves search efficiency. The similarity between feature points is measured by the Euclidean distance between their descriptors, and a 64-dimensional KD-tree is built. The BBF algorithm then finds, for each feature point of the basic image, the two points in the matching image with the smallest Euclidean distances. If the ratio of the nearest to the second-nearest distance is less than a given threshold (0.55 in this paper), the match is accepted as correct. The threshold is generally chosen in the range 0.4–0.6 (Jia et al. 2016): below 0.4, too few matching points survive; above 0.6, many false matches are accepted. Therefore, 0.55 is chosen as the threshold.
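The ratio test above can be sketched as follows. For simplicity this illustrative version substitutes a brute-force two-nearest-neighbor search for the BBF/KD-tree search described in the text; the function name and toy descriptors are assumptions.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.55):
    """Match descriptor sets by the nearest/second-nearest ratio test.

    For each row of desc_a, find its two nearest neighbors in desc_b by
    Euclidean distance and accept the match only if the nearest distance
    is below `ratio` times the second-nearest distance (0.55 as in the
    paper).  Brute force replaces BBF here purely for illustration.
    Returns (indices_a, indices_b) of the accepted pairs.
    """
    # (len_a, len_b) matrix of pairwise Euclidean distances
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    nearest, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = dists[rows, nearest] < ratio * dists[rows, second]
    return rows[keep], nearest[keep]
```

A distinctive descriptor passes because its best match is far closer than any alternative; an ambiguous one, with two comparably close candidates, is rejected, which is exactly why a larger threshold admits more false matches.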
Principle of Improved Remote Sensing Image Matching Algorithm

Step 1: The reference image and image to be matched are decomposed by NSCT.

Step 2: The low-frequency components of the two images decomposed by NSCT are taken as the input images of the improved ORB algorithm.

Step 3: For each feature point, the two nearest-neighbor points are found using the improved KD-tree nearest-neighbor search algorithm BBF, and the preliminary matches are obtained.

Step 4: The RANSAC algorithm is used to filter the matching points, completing the image matching.
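Step 4 relies on RANSAC to discard preliminary matches inconsistent with the dominant geometric transform. The paper does not specify the transformation model, so the sketch below fits a 2-D affine transform as an assumption; the iteration count, inlier threshold, and function names are likewise illustrative, not the authors' implementation.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine fit: dst ~ A @ [x, y, 1]^T.

    src, dst: (N, 2) corresponding points.  Returns A of shape (2, 3).
    """
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src   # x-equations: a11*x + a12*y + tx
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src   # y-equations: a21*x + a22*y + ty
    M[1::2, 5] = 1.0
    p, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    return p.reshape(2, 3)

def ransac_affine(src, dst, n_iters=200, thresh=3.0, seed=0):
    """Minimal RANSAC: repeatedly fit an affine model to 3 random
    correspondences, score it by inliers within `thresh` pixels, keep the
    best consensus set, and refit on all its inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 3, replace=False)
        A = estimate_affine(src[idx], dst[idx])
        pred = src @ A[:, :2].T + A[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    A = estimate_affine(src[best], dst[best])
    return A, best
```

The returned inlier mask plays the role of the filtered match set in Step 4: mismatched pairs fall outside the consensus transform and are dropped.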
Results and Discussion
To compare the proposed algorithm with the SIFT, SURF, and Harris-SURF algorithms (Wang et al. 2015) in terms of matching performance and anti-interference capability, a large number of experiments were conducted on actual remote sensing images, followed by quantitative evaluation and analysis. The evaluation covers feature-point extraction performance, matching performance, anti-rotation capability, anti-scale-variation capability, and anti-noise capability.
Feature Point Extraction Performance Analysis
Table 1 Comparisons in feature-point extraction performance

Algorithm     Number of feature points    Detection time/ms
SIFT          3024                        1804.7
SURF          1089                        342.4
Harris-SURF   1567                        2334.4
OR-SURF       924                         78.734
As Table 1 shows, the OR-SURF algorithm detects fewer feature points than the other algorithms, extracting the image information effectively while highlighting image details; the larger numbers of feature points detected by SIFT, SURF, and Harris-SURF lead to redundancy of effective information and increase the computational cost. In detection time, OR-SURF is much faster than SIFT, SURF, and Harris-SURF. Overall, OR-SURF detects the image information in a short time and highlights the details of the image, demonstrating both the speed of its feature-point detection and the validity of its feature-point selection.
Matching Performance Analysis
Table 2 Comparisons in matching performance

Algorithm            Matching accuracy/%    Match time/ms
SIFT                 93.67                  12,539.79
SURF                 85.13                  3332.43
Harris-SURF          87.90                  4572.00
The proposed method  95.48                  516.07
Table 2 shows that, of the four algorithms, SURF greatly reduces the matching time compared with SIFT but performs poorly in matching accuracy, producing more mismatches; Harris-SURF is slightly better than SURF in matching performance; and the algorithm proposed in this paper has the fastest matching speed, the highest matching accuracy, and the best overall performance.
Anti-rotation Capability Analysis
To verify the rotation invariance of the algorithm, the images were rotated and then matched. Four classical digital images were selected as experimental samples, and each image was rotated from 0° to 180° at intervals of 30°. The SIFT, SURF, and Harris-SURF algorithms and the proposed matching algorithm were run on the rotated images, and the matching accuracy and matching time were recorded.
Table 3 Comparisons in anti-rotation capability

Image    Algorithm            Matching accuracy/%    Match time/ms
Image 1  SIFT                 98.62                  11,688.86
         SURF                 75.65                  3896.28
         Harris-SURF          76.73                  3428.43
         The proposed method  97.60                  578.33
Image 2  SIFT                 96.80                  12,701.65
         SURF                 76.41                  4939.04
         Harris-SURF          75.36                  4898.20
         The proposed method  96.61                  616.23
Image 3  SIFT                 97.97                  11,892.61
         SURF                 74.84                  3520.58
         Harris-SURF          76.96                  3774.61
         The proposed method  97.54                  664.79
Image 4  SIFT                 98.22                  11,947.12
         SURF                 78.46                  3330.86
         Harris-SURF          78.74                  2854.56
         The proposed method  96.50                  609.54
Anti-scale Variation Capability Analysis
Table 4 Comparisons in anti-scale variation capability

                     50% of original size            200% of original size
Algorithm            Accuracy/%   Match time/ms      Accuracy/%   Match time/ms
SIFT                 97.73        13,031.1           96.22        10,936
SURF                 96.23        3437.16            95.56        3122.87
Harris-SURF          96.4         2921.59            94.33        3354.44
ORB                  18.32        303.21             20.21        378.93
The proposed method  97.65        836.21             96.79        1023.48
Table 4 shows that SIFT, SURF, Harris-SURF, and the proposed algorithm all perform very well under image scale changes, but the proposed algorithm has an obvious advantage in running time. Because ORB lacks scale invariance, it runs fast but matches poorly. Considering matching quality and matching time together, the proposed algorithm, an improvement on the ORB and SURF algorithms, retains the speed advantage and accuracy of ORB while compensating for its lack of scale invariance, and obtains better results.
Anti-noise Capability Analysis
As shown in Fig. 5, without noise the matching rate of Harris-SURF is higher than that of SURF, though both are lower than that of SIFT. However, as the Gaussian noise variance or the salt-and-pepper noise density increases, the matching rate of SIFT drops dramatically compared with SURF and Harris-SURF. The proposed algorithm performs best, its matching rate remaining above those of the other three algorithms. This is because NSCT is used to decompose the image and only the low-frequency component is matched, which significantly reduces the effect of noise and other fine detail, thus accelerating the matching and improving its precision.
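The noise-suppression effect attributed above to matching only the low-frequency sub-band can be illustrated numerically. NSCT itself is not sketched here (it requires non-subsampled pyramid and directional filter banks); as a rough stand-in, a simple box (mean) low-pass filter shows how keeping only low frequencies attenuates additive noise. The filter, kernel size, and test setup are assumptions for illustration only.

```python
import numpy as np

def box_lowpass(img, k=3):
    """Crude k x k mean filter as a stand-in for a low-frequency sub-band.

    Pads by edge replication, then averages the k*k shifted copies of the
    image.  Additive zero-mean noise is attenuated roughly by a factor of
    k in standard deviation, which is the effect that makes matching on
    low-frequency components more noise-robust.
    """
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    H, W = img.shape
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + H, dx:dx + W]
    return out / (k * k)
```

Descriptor distances computed on such a smoothed image vary far less under noise than those computed on the raw image, which is consistent with the matching-rate curves discussed above.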
Conclusions
This paper has proposed a remote sensing image matching algorithm that combines an improved ORB algorithm with NSCT. The experimental results show that: (1) the algorithm compensates for ORB's lack of scale invariance and effectively improves matching speed and accuracy under image scale changes; (2) the algorithm is more robust in complex situations, performing best among the compared methods in both speed and matching quality.
Notes
Acknowledgements
This work was supported by the National Natural Science Foundation of China (No. 61561048) and the National Natural Science Foundation of China (No. U1803261).
Author contributions
The authors declare equal contribution. All authors read and approved the final manuscript.
Compliance with Ethical Standards
Conflict of interest
The authors declare that they have no competing interests.
References
Bay, H., Tuytelaars, T., & Gool, L. V. (2006). SURF: Speeded up robust features. In Computer vision & image understanding (pp. 404–417).
Chen, W., Zhao, Y., & Xie, W. (2011). An improved SIFT algorithm for image feature matching. In International conference on multimedia technology, Hangzhou, China (pp. 197–200).
Da Cunha, A. L., Zhou, J., & Do, M. N. (2006). The nonsubsampled contourlet transform: Theory, design, and applications. IEEE Transactions on Image Processing, 15(10), 3089–3101.
Gao, J., & Zhu, H. (2016). Moving object detection for driving assistance system based on improved ORB feature matching. In International conference on internet and distributed computing systems, China (pp. 446–457).
Jia, X., Wang, X., & Dong, Z. (2016). Image matching method based on improved SURF algorithm. In IEEE international conference on computer and communications, Chengdu, China (pp. 142–145).
Ke, Y., & Sukthankar, R. (2004). PCA-SIFT: A more distinctive representation for local image descriptors. In IEEE computer society conference on computer vision & pattern recognition, Washington, DC, USA (pp. 506–513).
Lee, H., Rhee, H., & Oh, J. H. (2016). Measurement of 3D vibrational motion by dynamic photogrammetry using least-square image matching for sub-pixel targeting to improve accuracy. Sensors, 16, 359.
Li, L., Wu, L., & Gao, Y. (2016). Improved image matching method based on ORB. In IEEE/ACIS international conference on software engineering, artificial intelligence, networking and parallel/distributed computing (pp. 465–468).
Liu, J. (2016). Feature matching of fuzzy multimedia image based on improved SIFT matching. Recent Advances in Electrical & Electronic Engineering, 9, 34–38.
Lowe, D. G. (2002). Object recognition from local scale-invariant features. In IEEE international conference on computer vision (p. 1150).
Ma, Y. L. S. (2015). Research on image based on improved SURF feature matching. In Seventh international symposium on computational intelligence and design, China (pp. 581–584).
Moon, Y. S., & Loh, W. K. (2015). Triangular inequality-based rotation-invariant boundary image matching for smart devices. Multimedia Systems, 21, 15–28.
Reese, H., Nordkvist, K., & Nyström, M. (2015). Combining point clouds from image matching with SPOT 5 multispectral data for mountain vegetation classification. International Journal of Remote Sensing, 36, 403–416.
Rublee, E., Rabaud, V., & Konolige, K. (2011). ORB: An efficient alternative to SIFT or SURF. In IEEE international conference on computer vision, Barcelona, Spain (pp. 2564–2571).
Schmid, C., Mohr, R., & Bauckhage, C. (2000). Evaluation of interest point detectors. International Journal of Computer Vision, 37, 151–172.
Sedaghat, A., & Ebadi, H. (2015). Remote sensing image matching based on adaptive binning SIFT descriptor. IEEE Transactions on Geoscience & Remote Sensing, 53, 5283–5293.
Uchiyama, Y., Abe, A., & Muramatsu, C. (2015). Eigenspace template matching for detection of lacunar infarcts on MR images. Journal of Digital Imaging, 28, 116–122.
Wang, W., Cao, T., & Sheng, L. (2015). Remote sensing image automatic registration on multi-scale Harris–Laplacian. Journal of the Indian Society of Remote Sensing, 43, 501–511.
Ye, Y., Shen, L., & Hao, M. (2017). Robust optical-to-SAR image matching based on shape properties. IEEE Geoscience & Remote Sensing Letters, 14, 1–5.
Yu, D., Yang, F., & Yang, C. (2015). Fast rotation-free feature-based image registration using improved N-SIFT and GMM-based parallel optimization. IEEE Transactions on Biomedical Engineering, 63, 1653–1664.
Zou, J. (2015). A nearest neighbor search method for image matching based on ORB. Journal of Information & Computational Science, 12, 2691–2700.
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.