1 Introduction

Accurate topographic products of the Moon’s surface furnish detailed information about visible features and hazards, e.g., steep slopes, rocks, and craters. There have been several attempts to produce elevation models of the Moon to monitor its surface processes. Most of these investigations are based on data collected by recent missions and instruments such as LOLA (Rosenburg et al. 2011; Kreslavsky et al. 2013; Barnouin et al. 2010), Chandrayaan-1 (Loncaric et al. 2011; Radhadevi et al. 2011; Sivakumar et al. 2012), Chang’E-1’s laser altimeter (Hu et al. 2013; Li et al. 2011), Chang’E-1’s stereo images (Wan et al. 2012), and Kaguya (Araki et al. 2010). Data captured by these instruments have several limitations. For example, the Chang’E-1 LAM data had spacing resolutions of 1.4 km in the along-track direction and 7 km in the cross-track direction at the equator (Wu et al. 2013). For LOLA, Smith et al. (2010) and Smith et al. (2011) reported kilometer-range separation in the across-track direction at the equator; its absolute horizontal accuracy is about 300 m (Zuber et al. 2010). Scientists at the Astrogeology Science Center produced DEMs with a cell size of about 100 m from data collected between July 2009 and July 2013, although gaps were still present (Astrogeology Science Center 2015). For the WAC camera onboard LRO, Scholten et al. (2012) produced a DEM with a pixel spacing of 100 m and a vertical accuracy of about 15 m. Tran et al. (2010) reported differences ranging between −88 and +245 m when they compared an unedited DTM generated from the NAC images with LOLA data. For both the WAC and the NAC images, the absolute horizontal accuracy is 300 m, the same as LOLA. These results justify the use of data captured by the Apollo metric camera, for which we obtained a horizontal accuracy of 8.5 m and a vertical uncertainty of about 30 m for the unedited DEM. The findings show that the data are a valuable imagery source for mapping craters, mountains, and lunar lobate scarps, and for geological, geophysical, and morphological analysis of the lunar surface. In addition, these images are a valuable reference for studies of the changes that are shaping the surface of the Moon, including the formation of new small craters and landslides, particularly those larger than the spatial resolution of the DEMs created from such images.

Therefore, it is necessary to create high-quality elevation models from these images. The Apollo program achieved its main purpose, landing a man on the Moon and returning him safely to Earth, in 1969. NASA then sent further Apollo spacecraft, including Apollo 15, 16, and 17. The goals of these missions were to conduct longer and more science-oriented lunar expeditions. Onboard these missions was a set of cameras housed in the Scientific Instrument Module: a Metric Camera, a Panoramic Camera, and a Stellar Mapping Camera (Wu 1988). Images captured by these sensors have been used for several scientific studies, such as estimating the optimum illumination conditions for lunar photogrammetric mapping, quantifying structural deformation of the lunar surface, characterizing spatial variations of crater geometry and lunar landforms, determining the rheological properties of lunar flows, and measuring the detailed roughness of the Moon’s surface (Wu and Moore 1980a).

The metric camera was a Fairchild aerial camera with a 76 mm focal-length lens and a 127 mm frame size (Wu and Moore 1980a). At the nominal orbital altitude, measured by a laser altimeter, the camera acquired images with 25 m ground resolution, a 165 km swath width, 78 % overlap between successive images, and 57 % overlap between alternate frames (Cameron et al. 1974). Films captured by these cameras were kept in the Command Module during the spacecraft’s trip back to Earth. At that time, most topographic maps of the Moon were compiled with analytical stereoplotters (Wu and Moore 1980b). In June 2007, Arizona State University (ASU) started scanning and creating an online digital archive of all the original Apollo flight films. The project was a collaborative effort between NASA’s Johnson Space Center and ASU to give researchers and the general public access to the original images. The new format allows photogrammetrists to carry out conventional photogrammetric processes on PCs and produce Digital Terrain Models (DTMs) faster.

The Intelligent Robotics Group at the NASA Ames Research Center has produced several surface models of the lunar surface from these images (Broxton et al. 2009). Researchers at the center developed the Ames Stereo Pipeline (Moratto et al. 2010; Zachary et al. 2010) for processing orbital stereo imagery and producing surface models from it. However, its image matching and correlation are implemented in a pairwise mode (Kim et al. 2011). Although a single stereo pair is adequate to determine the 3D position of a feature visible in both images, it is insufficient to reconstruct the entire surface because of hidden features that are not projected into both images of the pair. Moreover, most of the available software packages generate DTMs through automatic image matching algorithms in pairwise modes (Joglekar and Gedam 2012; Gruen 2012; Zhang et al. 2014).

One of the advantages of photogrammetry is the ability to perform image matching across several views at the same time (Wiman 1998; Elaksher 2008; Noh et al. 2012). Such an approach accounts for the depth discontinuities, occlusions, and image-signal noise that prevent stereo-matching algorithms from producing precise and reliable DTMs. In this article, we propose to take advantage of the large overlaps, the multiple viewing geometry, and the high ground resolution of the Apollo metric camera images to generate a more accurate and reliable surface model of the Moon. The overlaps between these images guarantee that each point on the surface appears in at least four frames. We started by computing the relative positions and orientations of the exposure stations and then carried out correlation-based matching. The final DEM was then compared with a LOLA-based model that has superior vertical accuracy but coarser spatial resolution.

2 Methodology

After the images are downloaded, they must be accurately oriented before photogrammetric mapping and image-based measurements can be performed. Our procedure for creating an elevation model from Apollo images is outlined in Fig. 1.

Fig. 1

Outline of image orientation and surface reconstruction approach

2.1 Camera Calibration

Camera calibration is necessary for extracting accurate 3D information from images. The aim of camera calibration is to determine the so-called inner orientation parameters, i.e., the focal length, the principal-point offsets, and the lens distortions. The Apollo 15, 16, and 17 missions were equipped with a high-resolution Mapping Camera (MC), a 76 mm Fairchild metric camera. The images were recorded on 5-inch-wide film with an angular coverage of 74° × 74° at a resolution of 200 lines/mm. To correct each frame for positional and film distortions, a square array of 121 reseau marks and eight fiducial marks etched on a glass plate was imaged with each exposure (Light 1972). Table 1 lists the calibration data for the MC of the Apollo 16 mission (Wu and Moore 1980b). These parameters include the calibrated focal length and the offsets of the principal point from the center of the fiducial marks (Fig. 2); a minimal sketch of the corresponding coordinate reduction is given after Fig. 2. The lens distortions for these cameras were less than 50 µm (Light 1972). The original films were archived at the Johnson Space Center (JSC) and, as noted above, were scanned at ASU using a Leica DSW 700 photogrammetric scanner at 200 pixels/mm. Since the lunar surface is a very high-contrast subject, the images were scanned at 14-bit grayscale, which preserved the wide grayscale range captured by the original films (Lawrence et al. 2008).

Table 1 Inner orientation parameters for Apollo 16 MC
Fig. 2

Fiducial marks
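
The fragment below illustrates this reduction from scanned pixel coordinates to calibrated photo coordinates. It is a minimal sketch, not our production code: the scan resolution comes from the text above, while the principal-point offsets are placeholder values, not the actual Table 1 entries, and the reseau-based film-distortion correction is omitted.

```cpp
#include <cstdio>

struct PhotoXY { double x_mm; double y_mm; };

// Convert a scanned pixel coordinate to a calibrated photo coordinate.
PhotoXY pixelToPhoto(double col, double row, double nCols, double nRows) {
    const double pxPerMm = 200.0;          // Leica DSW 700 scan resolution (Sect. 2.1)
    const double x0 = 0.006, y0 = -0.002;  // principal-point offsets [mm]; hypothetical, not Table 1
    // Move the origin to the frame centre, flip the row axis so y points up,
    // convert to millimetres, then reduce to the principal point.
    double x = (col - 0.5 * nCols) / pxPerMm - x0;
    double y = (0.5 * nRows - row) / pxPerMm - y0;
    return {x, y};
}

int main() {
    // A 127 mm frame scanned at 200 px/mm is roughly 25,400 x 25,400 pixels.
    PhotoXY p = pixelToPhoto(12700.0, 12700.0, 25400.0, 25400.0);
    std::printf("x = %.4f mm, y = %.4f mm\n", p.x_mm, p.y_mm);
    return 0;
}
```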

2.2 Extraction and Matching of Conjugate Points

To form the image block, we started by selecting the first pair of images in the orbit. Feature points are then detected and matched by applying the Scale Invariant Feature Transform (SIFT) technique (Lowe 2004). This technique transforms an image into a large collection of local regions. Each of these regions, i.e., keypoints, is invariant to scaling, rotation, and translation of the image. The keypoints are extracted from the scale-space extrema of a difference-of-Gaussians (DoG) pyramid. The Gaussian pyramid is built by recursively smoothing and subsampling the input image, and the difference-of-Gaussians pyramid is calculated from the differences between successive levels of the Gaussian pyramid. Keypoints are the points whose DoG values are extrema with respect to both their image coordinates and the corresponding pyramid level. Keypoint descriptors are then derived from the gradient magnitudes and orientations within a 16 × 16 array around each point. Specifically, the gradient orientations populate eight-bin orientation histograms over a 4 × 4 grid of subregions, yielding a 128-element descriptor vector for each keypoint. Conjugate keypoints are then located in pairs of overlapping images based on the Euclidean distances between their descriptor vectors.
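
As an illustration, the fragment below reproduces this detect-describe-match step with OpenCV’s SIFT implementation. It is a sketch rather than our actual program; the file names and the 0.8 ratio-test threshold are assumptions, not values from this study.

```cpp
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main() {
    // Two overlapping metric-camera frames (hypothetical file names).
    cv::Mat img1 = cv::imread("AS16-M-0001.tif", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("AS16-M-0002.tif", cv::IMREAD_GRAYSCALE);

    // DoG keypoints with 128-element descriptors (Sect. 2.2).
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kps1, kps2;
    cv::Mat desc1, desc2;
    sift->detectAndCompute(img1, cv::noArray(), kps1, desc1);
    sift->detectAndCompute(img2, cv::noArray(), kps2, desc2);

    // Match on Euclidean descriptor distance; Lowe's ratio test discards
    // ambiguous candidates before the L1-norm blunder screening (Sect. 2.3).
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);

    std::vector<cv::DMatch> conjugates;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.8f * m[1].distance)
            conjugates.push_back(m[0]);
    return 0;
}
```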

2.3 Relative Orientation

The coplanarity condition (Mikhail et al. 2001) is a fundamental equation in photogrammetry for constructing the relative orientation between two images. It forces two conjugate image points, their corresponding ground point, and the perspective centers of the two cameras to lie in a single plane. At least five common points are required to determine the relative orientation parameters of the two images. Since the SIFT algorithm provides many more points than this, we carried out the process through a least-squares estimation model. The model takes the image coordinates of the SIFT conjugate points as inputs and outputs the camera parameters of the second image with respect to the first one. To account for mismatches generated by the SIFT algorithm, we first applied L1-norm minimization (Marshall and Bethel 1996) to eliminate gross errors. The L1-norm procedure was applied iteratively until no blunders persisted within the set of corresponding points. The only type of mismatch that can survive this step occurs when two points with similar attributes lie on the same epipolar line; such mismatches are removed readily in the bundle adjustment step, as described later.
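
For reference, one common determinant form of the coplanarity condition, with base vector (b_x, b_y, b_z) between the two perspective centers and the two image rays expressed in a common frame, is

$$ \begin{vmatrix} b_x & b_y & b_z \\ u_1 & v_1 & w_1 \\ u_2 & v_2 & w_2 \end{vmatrix} = 0 $$

where (u_i, v_i, w_i) is the direction of the ray through the conjugate point on image i. Each conjugate pair contributes one such equation to the least-squares model.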

2.4 Bundle Adjustment of Free Network

The collinearity equations (Mikhail et al. 2001) are extensively utilized in photogrammetry to place the ground point, its image point, and the perspective center of the photograph on a single ray. The intersection of two rays from two different exposure stations defines the location of the ground point in space. With the aid of ground control points of superior quality to the photogrammetric products, the exterior orientation parameters of the cameras could be determined. However, the available control networks, such as the Unified Lunar Control Network (ULCN) and the Clementine Lunar Control Network (CLCN) (Davies and Colvin 2000), are of insufficient accuracy to constrain the images taken by the metric camera; therefore, the image block was adjusted as a free network (Granshaw 1980). In a free network solution, seven constraint equations are imposed to overcome the datum deficiency resulting from the lack of ground control points (Dermanis 1994).
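
In one common form, with calibrated focal length f, principal point (x_0, y_0), rotation matrix elements r_{ij}, and perspective center (X_c, Y_c, Z_c), the collinearity equations read

$$ x = x_0 - f\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}, \qquad y = y_0 - f\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)} $$

Each image point thus contributes two equations relating its photo coordinates (x, y) to the ground point (X, Y, Z) and the exposure parameters.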

A unified least-squares adjustment (Mikhail and Ackermann 1976) is then employed to solve for the orientation parameters of every exposure. In this approach, all unknowns are treated as observations. Genuine observations, i.e., image coordinates, are given different weights than pseudo-observations, i.e., orientation parameters and ground coordinates. This is realized through the covariance matrix of each group: observations assigned low variances are effectively held fixed during the adjustment, while observations assigned high variances are free to adjust.
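
In linearized form, this amounts to augmenting the normal equations with the pseudo-observation weights. Writing A for the design matrix, P for the weight matrix of the image coordinates, l for the reduced observation vector, and P_x for the weights attached to the approximate parameter values x_0, the estimate is

$$ \hat{x} = \left( A^{T} P A + P_x \right)^{-1} \left( A^{T} P\, l + P_x\, x_0 \right) $$

so a large P_x pins a parameter to its approximate value, while P_x approaching zero lets it adjust freely.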

2.5 Surface Reconstruction

Photogrammetry offers powerful tools for matching features across multiple images simultaneously. The lunar surface was reconstructed using the following automated image matching process. First, we interpolated an initial surface from the feature points generated by the bundle adjustment algorithm using Kriging. For each cell in the surface, the intensity values of sub-image patches are compared through correlation-based matching. If high similarity between the patches is found, the algorithm accepts this elevation as the true height of the point. Otherwise, different elevations at equal steps above and below the initial estimate are tested, and the similarity between the image intensities is estimated for each case. These similarities are stored in an array and, after all heights are evaluated, the height with the largest correlation is taken to correspond to the true elevation of the point. The process is executed in a coarse-to-fine hierarchical approach (Zhang and Fraser 2008), starting with a low-resolution replica of each image and progressing toward the full-size image. At the same time, we densify the surface grid, reduce the extent of the elevation search, and shorten the step size. Since each point appears in more than one stereo pair, multiple correlation values are generated; in fact, n images produce n(n − 1)/2 correlation values. These values are combined into a weighted mean, where the weight for each image pair is inversely proportional to the distance between their exposure stations. If the two images are close to each other, a high degree of similarity will exist between the pixel values when we are at the correct elevation. Otherwise, if the two images are far apart, the geometric relationship with the illumination source differs and leads to dissimilar intensities. Throughout the matching process, occluded image patches were detected and excluded (Fig. 3); a sketch of the pairwise scoring is given after the figure.

Fig. 3

Excluded (red) and included (green) image signals. (Color figure online)
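
To make the scoring concrete, the following fragment sketches how one candidate elevation could be evaluated. It assumes the gray values of the patch back-projected into each image are already resampled into equal-length vectors; the helper names and data layout are ours, not the original implementation.

```cpp
#include <cmath>
#include <vector>

// Normalized cross-correlation of two equal-size gray-value patches.
double ncc(const std::vector<double>& a, const std::vector<double>& b) {
    double meanA = 0.0, meanB = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) { meanA += a[i]; meanB += b[i]; }
    meanA /= a.size();
    meanB /= b.size();
    double num = 0.0, varA = 0.0, varB = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        num  += (a[i] - meanA) * (b[i] - meanB);
        varA += (a[i] - meanA) * (a[i] - meanA);
        varB += (b[i] - meanB) * (b[i] - meanB);
    }
    return num / std::sqrt(varA * varB);
}

// Weighted mean of the n(n-1)/2 pairwise scores; each pair is weighted by
// the inverse of the distance between its two exposure stations (Sect. 2.5).
double combinedScore(const std::vector<std::vector<double>>& patches,
                     const std::vector<std::vector<double>>& stationDist) {
    double sum = 0.0, weightSum = 0.0;
    for (std::size_t i = 0; i < patches.size(); ++i) {
        for (std::size_t j = i + 1; j < patches.size(); ++j) {
            const double w = 1.0 / stationDist[i][j];
            sum       += w * ncc(patches[i], patches[j]);
            weightSum += w;
        }
    }
    return sum / weightSum;
}
```

The candidate elevation whose combined score is largest over the search range is accepted as the cell height.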

2.6 Experimental Results and Analysis

The developed system was tested with several blocks of the MC images acquired by the Apollo 16 mission. The inner orientation parameters of the images were held fixed at the values introduced earlier. The SIFT algorithm was applied in a pairwise mode, and the extracted points were matched using their descriptors. Several thresholds were examined, and the results did not change significantly. Figure 4 shows the results of executing this algorithm for one image pair; the endpoints of each line identify corresponding points. The robustness of the algorithm was tested by analyzing the number of outliers in the L1-norm minimization. For most cases, we had a success rate of more than 95 % with very few incorrect matches, and these were detected and eliminated in the relative orientation process. For the relative orientation process, the average Root Mean Square Error (RMSE) for the image tie points was 0.9 pixel and the maximum value was 2.2 pixels; after removing outliers, the average went down to 0.6 pixel and the maximum to 1.6 pixels. For the bundle adjustment, the average RMSE was 0.5 pixel and the maximum was 1.4 pixels. These metrics measure the internal precision of the bundle adjustment system. The approximate surface model interpolated from these points is shown in Fig. 5, while that produced by the correlation-based matching is presented in Fig. 6. Differences between the generated DEM and a surface model created from the LOLA mission are disclosed in Fig. 7. The average of the absolute discrepancies between the two surfaces was 36.5 m, with a maximum difference of +170 m and a minimum of −56 m. Theoretically, the standard accuracies of horizontal locations and ground elevations estimated using photogrammetric techniques are determined as introduced in Eq. (1a, b) (Moffit and Mikhail 1980).

$$ \sigma_X = \sigma_Y = \sigma_p \times S $$
(1a)
$$ \sigma_Z = \frac{\sigma_p \times S \times H}{B} $$
(1b)

where H is the flying height, B is the distance between the centers of two successive images (the base), S is the image scale number, and σp is the parallax accuracy, estimated as √2 σi, where σi is the standard error of the image coordinate measurements in the two photos.

Fig. 4

Results of the SIFT algorithm

Fig. 5

DEM surface interpolated from tie points

Fig. 6

Correlation-based DEM surface (m)

Fig. 7

Differences between Apollo DEM and LOLA-based DEM

For this study, the flying height (H) was about 100 km above the average surface, the distance between the centers of two successive images (B) was approximately 32 km, and the image scale (S) was 1:1,315,790. The physical pixel size is 5 µm; assuming a one-pixel uncertainty in the image coordinates gives σi = 5 µm. This provides σX and σY of 8.5 m and a σZ of about 30 m. Although the vertical accuracy does not reach that of LOLA, the data have a much finer spatial resolution, as the laser altimetry models have substantial longitudinal data gaps at mid and especially equatorial latitudes, and they provide visual information that is vital in many applications. The Apollo data also serve as a benchmark for studying historical changes of the Moon’s surface.
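
As a rough check of this arithmetic (our own substitution of the figures above, with rounding),

$$ \sigma_p = \sqrt{2} \times 5\ \mu\text{m} \approx 7.1\ \mu\text{m}, \qquad \sigma_X = \sigma_Y \approx 7.1\ \mu\text{m} \times 1{,}315{,}790 \approx 9\ \text{m}, \qquad \sigma_Z \approx 9\ \text{m} \times \frac{100\ \text{km}}{32\ \text{km}} \approx 29\ \text{m} $$

which is consistent in magnitude with the reported 8.5 m and 30 m values.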

The results shown are for a subset of eight overlapping images that cover an area of 10° by 5°. The entire process took about 45 min on a standard desktop PC with an Intel Core i5 2.53 GHz processor and 4 GB RAM. The algorithm was implemented as a series of programs written in C++. SIFT matching took about 10 min, while the DEM was generated in roughly 20 min. The final DEM cell size is 15 m.

3 Conclusion

High-accuracy topographic products for planets are essential to NASA’s studies and analyses of planetary surfaces. A key tool in providing such products is the photogrammetric processing of historical, current, and future remotely sensed data. Photogrammetry is a potent, accurate, and flexible technique for producing precise, high-quality surface models. In this article we explored creating elevation surface models from images acquired by the Apollo 16 metric camera. We developed an image orientation approach to automatically form image blocks and determine their orientation parameters in a relative mode. We then created elevation surface models through a hierarchical correlation-based matching approach. Matching was performed simultaneously between all images to determine point correspondences among all views. Intensities of obscured points were eliminated from the matching model. Absolute differences between the generated DEM and a LOLA-based surface of the Moon revealed an average of 36.5 m. Such accuracy is very satisfactory for many scientific applications, including studies of the geomorphological, geological, and geophysical characteristics of the Moon.