
1 Introduction

Efficient, accurate, and real-time depth estimation is essential for a wide variety of scene understanding applications in domains such as virtual/mixed reality, autonomous vehicles, and robotics. Currently, a consumer-grade Kinect v2 depth sensor consumes \({\sim }\)15 W of power, only works indoors at a limited range of \({\sim }4.5\) m, and degrades under increased ambient light [8]. For reference, a future VR/MR head-mounted depth camera would need to consume 1/100th the power and have a range of 1–80 m (indoors and outdoors) at the full FOV and resolution of an RGB camera. Such requirements present an opportunity to jointly develop energy-efficient depth hardware and depth estimation models. Our work begins to address depth estimation from this perspective.

Due to its intrinsic scale ambiguity, monocular depth estimation is a challenging problem, with state-of-the-art models [4, 17] still producing \({>}12\%\) mean absolute relative error on the popular large-scale NYUv2 indoor dataset [24]. Such errors are prohibitive for applications such as 3D reconstruction or tracking, and fall very short of depth sensors such as the Kinect that boast relative depth error on the order of \({\sim }1\%\) [14, 25] indoors.

Fig. 1. From Sparse to Dense Depth. An RGB image and a very sparse depth map are input into a deep neural network. We obtain a high-quality dense depth prediction as our final output.

Acknowledging the limitations of monocular depth estimation, we provide our depth model with a sparse amount of measured depth along with an RGB image (see Fig. 1) in order to estimate the full depth map. Such sparse depth resolves the depth scale ambiguity, and could be obtained from, e.g., a sparser illumination pattern in Time-of-Flight sensors [8], confident stereo matches, LiDAR-like sensors, or a custom-designed sparse sensor. We show that the resultant model can provide performance comparable to that of a modern depth sensor, despite only observing a small fraction of the depth map. We believe our results can thus motivate the design of smaller and more energy-efficient depth sensor hardware. As the objective is now to densify a sparse depth map (with additional cues from an RGB image), we call our model Deep Depth Densification, or D\(^3\).

One advantage of our D\(^3\) model is that it accommodates arbitrary sparse depth input patterns, each of which may correspond to a relevant physical system. A regular grid of sparse depth may come from a lower-power depth sensor, while certain interest-point sparse patterns such as ORB [27] or SIFT [21] could be output by modern SLAM systems [23]. In the main body of this work, we will focus on regular grid patterns due to their ease of interpretation and immediate relevance to existing depth sensor hardware, although we detail experiments on ORB sparse patterns in the Supplementary Materials.

Our contributions to the field of depth estimation are as follows:

  1. A deep network model for dense scene depth estimation that achieves accuracies comparable to conventional depth sensors.

  2. A depth estimation model which works simultaneously for indoor and outdoor scenes and is robust to common measurement errors.

  3. A flexible, invertible method of parametrizing sparse depth inputs that can accommodate arbitrary sparse input patterns during training and testing.

2 Related Work

Depth estimation has been tackled in computer vision well before the advent of deep learning [28, 29]; however, the popularization of encoder-decoder deep net architectures [1, 20], which produce full-resolution pixel-wise prediction maps, makes deep neural networks particularly well-suited for this task. Such advances have spurred a flurry of research into deep methods for depth estimation, whether through fusing CRFs with deep nets [37], leveraging geometry and stereo consistency [5, 16], or exploring novel deep architectures [17].

Depth in computer vision is often used as a component for performing other perception tasks. One of the first approaches to deep depth estimation also simultaneously estimates surface normals and segmentation in a multitask architecture [4]. Other multitask vision networks [3, 12, 34] also commonly use depth as a complementary output to benefit overall network performance. Using depth as an explicit input is also common in computer vision, with plentiful applications in tracking [30, 33], SLAM systems [13, 36], and 3D reconstruction/detection [7, 19]. There is clearly a pressing demand for high-quality depth maps, but current depth hardware solutions are power-hungry and have severe range limitations [8], while current depth estimation methods [4, 17] fail to achieve the accuracies necessary to supersede such hardware.

Such challenges naturally lead to depth densification, a middle ground that combines the power of deep learning with energy-efficient sparse depth sensors. Depth densification is related to depth superresolution [10, 31], but superresolution generally uses a bilinear or bicubic downsampled depth map as input, and thus still implicitly contains information from all pixels in the low-resolution map. This additional information would not be accessible to a true sparse sensor, and tends to make the estimation problem easier (see Supplementary Material). Work in [22, 23] follows the more difficult densification paradigm where only a few pixels of measured depth are provided. We will show that our densification network outperforms the methods in both [22, 23].

3 Methodology

3.1 Input Parametrization for Sparse Depth Inputs

We desire a parametrization of the sparse depth input that can accommodate arbitrary sparse input patterns. This should allow for varying such patterns not only across different deep models but even within the same model during training and testing. Therefore, rather than directly feeding a highly discontinuous sparse depth map into our deep depth densification (D\(^3\)) model (as in Fig. 1), we propose a more flexible parametrization of the sparse depth inputs.

At each training step, the inputs to our parametrization are:

  1. I(x, y) and D(x, y): RGB vector-valued image I and ground truth depth D. Both maps have dimensions H \(\times \) W. Invalid values in D are encoded as zero.

  2. M(x, y): Binary pattern mask of dimensions H \(\times \) W, where \(M(x,y)=1\) defines the (x, y) locations of our desired depth samples. M(x, y) is preprocessed so that all points where \(M(x,y)=1\) correspond to valid depth points (\(D(x,y)>0\)) (see Algorithm 1).

From I, D, and M, we form two maps for the sparse depth input, \(\mathcal {S}_1(x,y)\) and \(\mathcal {S}_2(x,y)\). Both maps have dimension H \(\times \) W (see Fig. 2 for examples).

  • \(\mathcal {S}_1(x,y)\) is an NN (nearest-neighbor) fill of the sparse depth \(M(x,y)*D(x,y)\).

  • \(\mathcal {S}_2(x,y)\) is the Euclidean distance transform of M(x, y), i.e. the L\(_2\) distance between (x, y) and the closest point \((x',y')\) where \(M(x',y')=1\).

The final parametrization of the sparse depth input is the concatenation of \(\mathcal {S}_1(x,y)\) and \(\mathcal {S}_2(x,y)\), with total dimension H \(\times \) W \(\times \) 2. This process is described in Algorithm 1. The parametrization is fast and involves at most two Euclidean transforms. The resultant NN map \(\mathcal {S}_1\) is nonzero everywhere, allowing us to treat the densification problem as a residual prediction with respect to \(\mathcal {S}_1\). The distance map \(\mathcal {S}_2\) informs the model about the pattern mask M(x, y) and acts as a prior on the residual magnitudes the model should output (i.e. points farther from a pixel with known depth tend to incur higher residuals). Inclusion of \(\mathcal {S}_2\) can substantially improve model performance and training stability, especially when multiple sparse patterns are used during training (see Sect. 5.3).
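As a concrete illustration, the following minimal sketch builds \([\mathcal {S}_1, \mathcal {S}_2]\) with SciPy's Euclidean distance transform. It assumes the mask has already been preprocessed (per Algorithm 1) so that every sampled pixel carries valid depth; the function name is ours and illustrative only.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def parametrize_sparse_depth(depth, mask):
    """Build the sparse-input maps [S1, S2] from a GT depth map D (H x W,
    zeros at invalid pixels) and a binary pattern mask M (1 at sampled
    pixels, assumed to lie on valid depth).

    S1: nearest-neighbor fill of the sparse depth M * D.
    S2: Euclidean distance from each pixel to the nearest sampled pixel.
    """
    # For every pixel: the L2 distance to, and the coordinates of, the
    # closest pixel where mask == 1.
    s2, nn_idx = distance_transform_edt(mask == 0, return_indices=True)
    # Nearest-neighbor fill: read the depth value at that closest sample.
    s1 = depth[nn_idx[0], nn_idx[1]]
    return np.stack([s1, s2], axis=-1).astype(np.float32)  # H x W x 2
```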

Algorithm 1. Construction of the sparse depth parametrization \([\mathcal {S}_1, \mathcal {S}_2]\).
Fig. 2. Various Sparse Patterns. NN fill maps \(\mathcal {S}_1\) (top row) and the sampling-pattern Euclidean distance transforms \(\mathcal {S}_2\) (bottom row) are shown for both regular and irregular sparse patterns. Dark points in \(\mathcal {S}_2\) correspond to the pixels where we have access to depth information.

In this work, we primarily focus on regular grid patterns, as they are high-coverage sparse maps that enable straightforward comparisons to prior work (e.g. [22]), which often assumes a grid-like sparse pattern; our methods nevertheless generalize fully to other patterns such as ORB (see Supplementary Materials).

3.2 Sparse Pattern Selection

For regular grid patterns, we try to ensure minimal spatial bias when choosing the pattern mask M(x, y) by enforcing equal spacing between adjacent pattern points in both the x and y directions. This results in a checkerboard pattern of square regions in the sparse depth map \(\mathcal {S}_1\) (see Fig. 2). Such a strategy is convenient when one deep model must accommodate images of different resolutions, as we can simply extend the square pattern in M(x, y) from one resolution to the next. For ease of interpretation, we will always use sparse patterns close to an integer level of downsampling; for a downsampling factor of \(A\times A\), we take \({\sim }H*W/A^2\) depth values as the sparse input. For example, for 24 \(\times \) 24 downsampling on a 480 \(\times \) 640 image, this amounts to 0.18\(\%\) of the total pixels.
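For illustration, a regular grid mask of this kind could be generated as follows (a sketch with an arbitrarily chosen half-stride offset; sampled points would still need to be snapped onto valid depth as in Algorithm 1):

```python
import numpy as np

def regular_grid_mask(height, width, stride):
    """Regular sparse pattern M: one sampled pixel per stride x stride
    cell, i.e. ~H*W/stride^2 depth samples in total."""
    mask = np.zeros((height, width), dtype=np.uint8)
    rows = np.arange(stride // 2, height, stride)
    cols = np.arange(stride // 2, width, stride)
    mask[np.ix_(rows, cols)] = 1
    return mask

# 24 x 24 downsampling of a 480 x 640 image samples ~0.18% of the pixels.
mask = regular_grid_mask(480, 640, 24)
print(100 * mask.mean())  # ~0.18
```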

Empirically, we observed that it is beneficial to vary the sparse pattern M(x, y) during training. For a desired final pattern of N sparse points, we employ a slow-decay schedule following \(N_{\text {sparse}}(t) = \lfloor 5Ne^{-0.0003t} + N\rfloor \) for training step \(0\le t\le 80000\). Such a schedule begins training at six times the desired sparse pattern density and smoothly decays towards the final density as training progresses. Compared to a static sparse pattern, we see a relative decrease of \({\sim } 3\%\) in the training L\(_2\) loss and also in the mean relative error when using this decay schedule. We can also train with randomly varying sampling densities at each training step; as we show in Sect. 5.3, this results in a deep model which performs well simultaneously at different sampling densities.
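Written out, the sampling-density schedule is simply (names are ours):

```python
import math

def n_sparse(step, n_final):
    """Slow-decay sampling schedule: ~6x the target number of sparse
    points at step 0, decaying toward n_final by step 80000."""
    return math.floor(5 * n_final * math.exp(-0.0003 * step) + n_final)

# e.g. for a target of 540 points (24 x 24 grid on a 480 x 640 image):
# n_sparse(0, 540) == 3240, n_sparse(80000, 540) == 540
```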

Fig. 3. D\(^3\) Network Architecture. Our proposed multi-scale deep network takes an RGB image concatenated with \(\mathcal {S}_1\) and \(\mathcal {S}_2\) as input. The first and last computational blocks are simple 3\(\,\times \,\)3 stride-2 convolutions, while all other blocks are DenseNet modules [9] (see inset). All convolutional layers in the network are batch normalized [11] and ReLU activated. The network outputs a residual that is added to the sparse depth map \(\mathcal {S}_1\) to produce the final dense depth prediction.

4 Experimental Setup

4.1 Architecture

We base our network architecture (see Fig. 3) on the network used in [2] but with DenseNet [9] blocks in place of Inception [32] blocks. We empirically found it critical for our proposed model to carry the sparse depth information throughout the deep network, and the residual nature of DenseNet is well-suited for this requirement. For optimal results, our architecture retains feature maps at multiple resolutions for addition back into the network during the decoding phase.

Each block in Fig. 3 represents a DenseNet Module (see Fig. 3 inset for a precise module schematic) except for the first and last blocks, which are simple 3\(\,\times \,\)3 stride-2 convolutional layers. A copy of the sparse input \([\mathcal {S}_1,\mathcal {S}_2]\) is presented as an additional input to each module, downsampled to the appropriate resolution. Each DenseNet module consists of 2L layers and k feature maps per layer; we use \(L=5\) and \(k=12\). At downsample/upsample blocks, the final convolution has stride 2. The (residual) output of the network is added to the sparse input map \(\mathcal {S}_1\) to obtain the final depth map estimate.
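As a rough sketch (the exact wiring follows the Fig. 3 inset, which we do not reproduce here), a DenseNet-style module with \(2L=10\) convolutional layers and \(k=12\) feature maps per layer could look like:

```python
import tensorflow as tf

def dense_module(x, num_layers=10, growth_rate=12):
    """Generic DenseNet-style block: each 3x3 convolution sees the
    concatenation of the module input and all previous layer outputs,
    followed by batch normalization and ReLU."""
    features = [x]
    for _ in range(num_layers):
        h = tf.concat(features, axis=-1)
        h = tf.keras.layers.Conv2D(growth_rate, 3, padding='same')(h)
        h = tf.keras.layers.BatchNormalization()(h)
        h = tf.nn.relu(h)
        features.append(h)
    return tf.concat(features, axis=-1)
```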

4.2 Datasets

We experiment extensively with both indoor and outdoor scenes. For indoor scenes, we use the NYUv2 dataset [24], which provides high-quality 480 \(\times \) 640 depth data taken with a Kinect v1 sensor with a range of up to 10 m. Missing depth values are filled using a standard approach [18]. We use the official split of 249/215 train/validation scenes and sample 26331 images from the training scenes. We further augment the training set with horizontal flips. We test on the standard validation set of 654 images to compare against other methods.

For outdoor scenes, we use the KITTI road scenes dataset [35], which has a depth range up to \({\sim }\)85 m. KITTI provides over 80000 images for training, which we further augment with horizontal flips. We test on the full validation set (\({\sim }\)10% of the size of the training set). KITTI images have resolution 1392 \(\times \) 512, but we take random 480 \(\times \) 640 crops during training to enable joint training with NYUv2 data. The 640 horizontal pixels are sampled randomly, while the 480 vertical pixels are the bottom 480 pixels of the image (as KITTI only provides LiDAR GT depth towards ground level). The LiDAR projections used in KITTI result in very sparse depth maps (with only \({\sim }\)10% of depths labeled per image), and we only evaluate our models on points with GT depth.
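A sketch of this cropping rule (a hypothetical helper; the frame size is read from the image rather than hard-coded):

```python
import numpy as np

def kitti_training_crop(image, depth, crop_h=480, crop_w=640):
    """Random 480 x 640 training crop of a KITTI frame: the horizontal
    offset is sampled uniformly, while the crop is anchored to the bottom
    of the image, where LiDAR GT depth is available."""
    h, w = image.shape[:2]
    x0 = np.random.randint(0, w - crop_w + 1)
    y0 = h - crop_h
    return (image[y0:y0 + crop_h, x0:x0 + crop_w],
            depth[y0:y0 + crop_h, x0:x0 + crop_w])
```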

4.3 General Training Characteristics and Performance Metrics

In all our experiments we train with a batch size of 8 across 4 Maxwell Titan X GPUs using TensorFlow 1.2.1. We train for 80000 batches, starting with a learning rate of 1e−3 and decaying it by a factor of 0.2 every 25000 steps. We use Adam [15] as our optimizer and a standard pixel-wise \(L_2\) loss.
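Assuming the decay is multiplicative (the learning rate is scaled by 0.2 at each decay point), the schedule can be written as:

```python
def learning_rate(step, base_lr=1e-3, decay=0.2, decay_every=25000):
    """Piecewise-constant schedule: 1e-3 for the first 25000 steps, then
    scaled by 0.2 every further 25000 steps (training stops at 80000)."""
    return base_lr * decay ** (step // decay_every)
```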

Standard metrics are used [4, 23] to evaluate our depth estimation model against valid GT depth values. Let \(\hat{y}\) be the predicted depth and y the GT depth for N pixels in the dataset. We measure: (1) Root Mean Square Error (RMSE): \(\sqrt{\frac{1}{N}\sum [\hat{y}-y]^2} \), (2) Mean Absolute Relative Error (MRE): \(\frac{100}{N}\sum \left( \frac{|\hat{y}-y|}{y}\right) \), and (3) Delta Thresholds (\(\delta _i\)): \(\frac{|\{\hat{y}|\text {max}(\frac{y}{\hat{y}},\frac{\hat{y}}{y})<1.25^i\}|}{|\{\hat{y}\}|}\). \(\delta _i\) is the percentage of pixels with relative error under a threshold controlled by the constant i.
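In code, these metrics over the valid GT pixels look roughly as follows (a sketch; it returns the \(\delta _i\) as fractions rather than percentages):

```python
import numpy as np

def depth_metrics(pred, gt):
    """RMSE, mean absolute relative error (in percent), and the three
    delta thresholds, computed over pixels with valid (positive) GT depth."""
    valid = gt > 0
    pred, gt = pred[valid], gt[valid]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    mre = 100.0 * np.mean(np.abs(pred - gt) / gt)
    ratio = np.maximum(gt / pred, pred / gt)
    deltas = [np.mean(ratio < 1.25 ** i) for i in (1, 2, 3)]
    return rmse, mre, deltas
```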

5 Results and Analysis

Here we present results and analysis of the D\(^3\) model for both indoor (NYUv2) and outdoor (KITTI) datasets. We further demonstrate that D\(^3\) is robust to input errors and also generalizes to multiple sparse input patterns.

5.1 Indoor Scenes from NYUv2

Table 1. D\(^3\) Performance on NYUv2. Lower RMSE and MRE are better, while higher \(\delta _i\) is better. NN Fill corresponds to using the sparse map \(\mathcal {S}_1\) as our final prediction. If no sparse depth is provided, the D\(^3\) model falls short of [4, 17], but even at 0.01% points sampled the D\(^3\) model offers significant improvements over state-of-the-art non-sparse methods. D\(^3\) additionally performs best among sparse depth methods at all input sparsities.
Fig. 4. Performance on the NYUv2 Dataset. RMSE and MRE are plotted on the left (lower is better), while the \(\delta _i\) are plotted on the right (higher is better). Our D\(^3\) models achieve the best performance at all sparsities, while joint training on outdoor data (D\(^3\) mixed) only incurs a minor performance loss.

Fig. 5. NYUv2 MRE performance at different depths. (a) MRE at different depths for varying levels of sparsity. At 0.39% sparsity the average MRE is less than 1%, which is comparable to depth sensors. (b) Histogram of GT depths in the validation dataset; higher relative errors correspond to rarer depth values.

Fig. 6. Visualization of D\(^3\) Predictions on NYUv2. Left column: sample RGB and GT depths. Middle columns: sparse \(\mathcal {S}_1\) map on top and D\(^3\) network prediction on bottom for different sparsities. Vanilla network denotes the case with no sparse input (monocular depth estimation). Final column: D\(^3\) residual predictions (summed with \(\mathcal {S}_1\) to obtain the final prediction) and error maps of the final estimate with respect to GT. Errors are larger at farther distances. Residuals are plotted in grayscale and capped at \(|\delta |\le 1\) for better visualization; they exhibit sharp features similar to those of \(\mathcal {S}_1\), showing how a D\(^3\) model cancels out the non-smoothness of \(\mathcal {S}_1\).

From Table 1 we see that, at all pattern sparsities, the D\(^3\) network offers superior performance on all metrics compared to the results in [23] and [22]. The accuracy metrics for the D\(^3\) mixed network represent the NYUv2 results for a network that has been simultaneously trained on the NYUv2 (indoors) and KITTI (outdoors) datasets (more details in Sect. 5.4). We see that incorporating an outdoor dataset with significantly different semantics only incurs a mild degradation in accuracy. Figure 4 shows comparative results for additional sparsities, and once again demonstrates that our trained models are more accurate than other recent approaches.

At 16 \(\times \) 16 downsampling, our mean absolute relative error falls below 1\(\%\) (to 0.99%). At this point, the error of our D\(^3\) model becomes comparable to the error of consumer-grade depth sensors [8]. Figure 5(a) presents a more detailed plot of relative error at different values of GT depth. Our model performs well at the most common indoor depths (around 2–4 m), as can be assessed from the histogram in Fig. 5(b). At farther depths the MRE deteriorates, but these depth values are rarer in the dataset. This suggests that using a more balanced dataset could improve those MRE values as well.

Table 2. Timing for D\(^3\) and other architectures. Models are evaluated assuming 0.18% sparsity on a single Maxwell Titan X GPU. The D\(^3\) network achieves the lowest RMSE compared to other well-known efficient network architectures. A slim version of D\(^3\) runs at a near real-time 16 fps at VGA resolution.

Visualizations of our network predictions on the NYUv2 dataset are shown in Fig. 6. At a highly sparse 48 \(\times \) 48 downsampling, our D\(^3\) network already shows a dramatic improvement over a vanilla network without any sparse input. We note that although network outputs are added as residuals to a sparse map with many first-order discontinuities, the final predictions appear smooth and relatively free of sharp edge artifacts. Indeed, in the final column of Fig. 6, we can see how the direct residual predictions produced by our networks also contain sharp features which cancel out the non-smoothness of the sparse maps.

5.2 Computational Analysis

In Table 2 we show the forward pass time and accuracy for a variety of models at 0.18% points sampled. Our standard D\(^3\) model with \(L=5\) and \(k=12\) achieves the lowest error and takes 0.11 s per forward pass on a VGA frame. Slimmer versions of the D\(^3\) network incur mild accuracy degradation but still outperform other well-known efficient architectures [1, 26]. The baseline speed of our D\(^3\) networks can thus approach real-time rates for full-resolution 480 \(\times \) 640 inputs, and we expect these speeds can be further improved by weight quantization and other optimization methods for deep networks [6]. Operating at half resolution would trivially allow our slimmer D\(^3\) networks to run at a real-time speed of over 60 fps. This speed is important for many application areas where depth is a critical component of scene understanding.

5.3 Generalizing D\(^3\) to Multiple Patterns and the Effect of \(\mathcal {S}_2\)

We train a D\(^3\) network with a different input sparsity (sampled uniformly between 0.065% and 0.98% points) for each batch. Figure 7(a) shows how this multi-sparsity D\(^3\) network performs relative to the 0.18% and 0.39% sparsity models. The single-sparsity trained D\(^3\) networks predictably perform the best near the sparsity they were tuned for. However, the multi-sparsity D\(^3\) network only performs slightly worse at those sparsities, and dramatically outperforms the single-sparsity networks away from their training sparsity value. Evidently, a random sampling schedule effectively regularizes our model to work simultaneously at all sparsities within a wide range. This robustness may be useful in scenarios where different measurement modes are used in the same device.

Fig. 7. Multi-sparsity D\(^3\) models. (a) The Random Sampling network was trained with a different sparse pattern (between 0.065% and 0.98% points sampled) on every iteration, and performs well at all sparsity levels while being only mildly surpassed by single-density networks at their specialized sparsities. (b) Validation loss curves (for 0.18% points sampled) for D\(^3\) models trained with and without inclusion of the distance map \(\mathcal {S}_2\). \(\mathcal {S}_2\) is clearly crucial for stability and performance, especially when training with complex pattern schedules.

Inclusion of the distance map \(\mathcal {S}_2\) gives our network spatial information about the sparse pattern, which is especially important when the sparse pattern changes during training. Figure 7(b) shows validation L\(_2\) loss curves for D\(^3\) networks trained with and without \(\mathcal {S}_2\). \(\mathcal {S}_2\) improves relative L\(_2\) validation loss by 34.4% and greatly stabilizes training when the sparse pattern is varied randomly during training. For a slow-decay sampling schedule (i.e. what is used for the majority of our D\(^3\) networks), the improvement is 8.8%, and even for a static sampling schedule (bottom of Fig. 7(b)) there is a 2.8% improvement. The inclusion of the distance map is thus clearly essential for training our model well.

5.4 Generalizing D\(^3\) to Outdoor Scenes

We extend our model to the challenging outdoor KITTI dataset [35]. All our KITTI D\(^3\) models are initialized from a pre-trained NYUv2 model. We then train either with only KITTI data (KITTI-exclusive) or with a 50/50 mix of NYUv2 and KITTI data in each batch (mixed model). Since NYUv2 images have a max depth of 10 m, depth values are scaled by 0.1 for the KITTI-exclusive model. For the mixed model we use a scene-agnostic scaling rule: we scale all images down to have a max depth of \(\le \)10 m, and invert this scaling at inference. Our state-of-the-art results are shown in Table 3. Importantly, as for NYUv2, our mixed model only performs slightly worse than the KITTI-exclusive network. More results for additional sparsities are presented in the Supplementary Material.
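A sketch of the scene-agnostic scaling rule (a hypothetical helper; we assume the scale factor is derived from the sparse depth input, since GT depth is unavailable at inference):

```python
def scale_to_nyu_range(sparse_depth, max_depth=10.0):
    """Scale a scene so that its (sparse) depth fits within the NYUv2
    range of <= 10 m; the returned factor is inverted after the forward
    pass to recover metric depth."""
    factor = min(1.0, max_depth / float(sparse_depth.max()))
    return sparse_depth * factor, factor

# At inference: metric_prediction = dense_prediction / factor
```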

Table 3. D\(^3\) Model Performance on the KITTI Dataset. Lower values of RMSE and MRE are better, while higher values of \(\delta _i\) are better. For competing methods we show results at the closest sparsity. The performance of our models, including the mixed models, is superior by a large margin.
Fig. 8. Joint Predictions on NYUv2 and KITTI. The RGB, Depth GT, and Sparse Input \(\mathcal {S}_1\) are given in the first three rows. Predictions by three models on both indoor and outdoor scenes are given in the final three rows, with the second-to-last row showing the mixed model trained on both datasets simultaneously. All sparse maps have a density of \(0.18\%\) (\({\sim }\)24 \(\times \) 24 downsampling).

Visualizations of our model outputs are shown in Fig. 8. The highlight here is that the mixed model produces high-quality depth maps for both NYUv2 and KITTI. Interestingly, even the KITTI-exclusive model (bottom row of Fig. 8) produces good qualitative results on the NYUv2 dataset. Perhaps more strikingly, even an NYUv2-pretrained model with no KITTI training data (third-to-last row of Fig. 8) produces reasonable results on KITTI. This suggests that our D\(^3\) models intrinsically possess some level of cross-domain generalizability.

5.5 Robustness Tests

Thus far, we have sampled depth from high-quality Kinect and LiDAR depth maps, but in practice sparse depth inputs may come from less reliable sources. We now demonstrate how our D\(^3\) network performs given the following common errors within the sparse depth input:

  1. Spatial misregistration between the RGB camera and depth sensor.

  2. Random Gaussian error.

  3. Random holes (dropout), e.g. due to shadows, specular reflection, etc.

Fig. 9. Potential Errors in Sparse Depth. The three sparse depth maps on the right all exhibit significant errors that are common in real sensors.

In Fig. 9 we show examples of each of these potential sources of error, and in Fig. 10 we show how D\(^3\) performs when trained with such errors in the sparse depth input (see Supplementary Material for tabulated metrics). The D\(^3\) network degrades gracefully under all sources of error, with most models still outperforming the other baselines in Table 1 (none of which were subject to input errors). It is especially encouraging that the network performs robustly under constant misregistration error, a very common issue when multiple imaging sensors are active in the same visual system. The network effectively learns to fix the calibration between the different visual inputs. Predictably, the error is much higher when the misregistration varies randomly per image.

5.6 Discussion

Through our experiments, we have shown that the D\(^3\) model performs very well at taking sparse depth measurements in a variety of settings and turning them into dense depth maps. Most notably, our model can simultaneously perform well on both indoor and outdoor scenes. We attribute the overall performance of the model to a number of factors. As can be gathered from Table 2, the design of our multi-scale architecture, in which the sparse inputs are ingested at various scales and outputs are treated as residuals with respect to \(\mathcal {S}_1\), is important for optimizing performance. Our proposed sparse input parametrization clearly allows for better and more stable training, as seen in Fig. 7. Finally, the design of the training curriculum, in which we use varying sparsities in the depth input during training, also plays an important role. Such a strategy makes the model robust to test-time variations in sparsity (see Fig. 7) and reduces overall errors.

Fig. 10. Accuracy of D\(^3\) Networks under Various Sparse Depth Errors. Under all potential error sources (with the exception of the unlikely random spatial misregistration error), the D\(^3\) network exhibits graceful error degradation. This error degradation is almost negligible for constant spatial misregistration.

6 Conclusions

We have demonstrated that a trained deep depth densification (D\(^3\)) network can use sparse depth information and a registered RGB image to produce high-quality, dense depth maps. Our flexible parametrization of the sparse depth information leads to models that generalize readily to multiple scene types (working simultaneously on indoor and outdoor images, from depths of 1 m to 80 m) and to diverse sparse input patterns. Even at fairly aggressive sparsities for indoor scenes, we achieve a mean absolute relative error of under 1\(\%\), comparable to the performance of consumer-grade depth sensor hardware. We also found that our model is fairly robust to various input errors.

We have thus shown that sparse depth measurements can be sufficient for applications that require an RGBD input, whether indoors or outdoors. A natural next step in our line of inquiry would be to evaluate how densified depth maps perform in 3D reconstruction algorithms, tracking systems, or perception models for related vision tasks such as surface normal prediction. We hope that our work motivates additional research into uses for sparse depth from both the software and hardware perspectives.