Automatic Geolocation Correction of Satellite Imagery


Modern satellites tag their images with geolocation information using GPS and star-tracking systems. Depending on the quality of the geopositioning equipment, errors may range from a few meters to tens of meters on the ground. At the current state of the art, there is no established method to automatically correct these errors, which limits the large-scale joint utilization of cross-platform satellite images. In this paper, an automatic geolocation correction framework that corrects images from multiple satellites simultaneously is presented. As a result of the proposed correction process, all the images are effectively registered to the same absolute geodetic coordinate frame. The usability and quality of the correction framework are demonstrated through a 3-D surface reconstruction application: the 3-D surface models produced from the original satellite geopositioning metadata and from the corrected metadata are compared, and the quality difference is measured with an entropy-based metric applied to the orthographic height maps derived from the 3-D surface models. Measuring the absolute accuracy of the framework is harder due to the lack of publicly available high-precision ground surveys. However, the geolocations of images from exemplar satellites in different parts of the globe are corrected, and the road networks given by OpenStreetMap are projected onto the images using the original and the corrected metadata to demonstrate the improved quality of alignment.




  1.

    For the experiments in this paper, the satellite with the best geolocation accuracy is GeoEye-1.

  2.

    Note that in this paper, ASTER DEM tiles (30 m resolution) are also used, although only to set reasonable estimates for the minimum and maximum elevations in a given area and to provide a rough ground plane to constrain the search for registration.

  3.

    A viable range for Z values, \([Z_{\min}, Z_{\max}]\), is retrieved from the image metadata.

  4.

    The height of the scene volume is denoted as 80+ m in Fig. 7a as it changes with respect to the terrain.

  5.

    Note that the orthographic height maps have 1 m GSD, so the patches are 5 pixels by 5 pixels in the map image. To compute the entropy of a patch, the 25 height values, quantized to the range [0, 255], are collected in a histogram with 16 bins. The entropy of the patch histogram is computed as \(H=-\sum _{i=0}^{15}p_{i}\log p_{i}\), where \(p_{i}\) is the normalized bin count.
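The entropy metric of footnote 5 can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation; the function names and the convention of averaging over non-overlapping patches are illustrative assumptions, and height values are assumed to already be quantized to [0, 255].

```python
import numpy as np

def patch_entropy(patch, bins=16, value_range=(0, 255)):
    """Shannon entropy of one patch's height histogram.

    Mirrors footnote 5: the patch's height values (quantized to
    [0, 255]) are binned into 16 bins, and H = -sum p_i log p_i
    is computed over the normalized bin counts (0 log 0 := 0).
    """
    counts, _ = np.histogram(patch, bins=bins, range=value_range)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins so log() is well defined
    return float(-(p * np.log(p)).sum())

def mean_map_entropy(height_map, patch=5):
    """Average patch entropy over non-overlapping patch x patch tiles
    of an orthographic height map (5 x 5 pixels at 1 m GSD here)."""
    h, w = height_map.shape
    ents = [patch_entropy(height_map[r:r + patch, c:c + patch])
            for r in range(0, h - patch + 1, patch)
            for c in range(0, w - patch + 1, patch)]
    return float(np.mean(ents))
```

A flat patch yields zero entropy, while a noisy patch (as produced by misregistered cameras blurring the reconstructed surface) spreads its heights across many bins and yields high entropy, which is why lower mean entropy indicates better-aligned metadata.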




Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory (AFRL), contract FA8650-12-C-7211. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.

Author information



Corresponding author

Correspondence to Ozge C. Ozcanli.

Ethics declarations


The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, AFRL, or the U.S. government.

Additional information

Communicated by Riad I. Hammoud, Josef Sivic, Larry S. Davis, Marc Pollefeys.



Cite this article

Ozcanli, O.C., Dong, Y., Mundy, J.L. et al. Automatic Geolocation Correction of Satellite Imagery. Int J Comput Vis 116, 263–277 (2016).



Keywords

  • Georegistration
  • Satellite imagery
  • 3-D modeling
  • RPC camera model
  • Bias correction