Learning 3D Shape Completion under Weak Supervision

We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: data-driven approaches rely on a shape model whose parameters are optimized to fit the observations; learning-based approaches, in contrast, avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully-supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks, resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet and ModelNet as well as on real robotics data from KITTI and Kinect, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with recent fully-supervised baselines and outperforms data-driven approaches, while requiring less supervision and being significantly faster.

Fig. 1: 3D Shape Completion. Results for cars on ShapeNet (Chang et al, 2015) and KITTI (Geiger et al, 2012) and for chairs and tables on ModelNet (Wu et al, 2015) and Kinect (Yang et al, 2018). Learning shape completion on real-world data is challenging due to sparse and noisy observations and missing ground truth. Occupancy grids (bottom) or meshes from signed distance functions (SDFs, top) at various resolutions in beige and point cloud observations in red.
Keywords 3D shape completion · 3D reconstruction · weakly-supervised learning · amortized inference · benchmark

1 Introduction

3D shape perception is a long-standing and fundamental problem both in human and computer vision (Pizlo, 2007, 2010; Furukawa and Hernandez, 2013) with many applications to robotics. A large body of work focuses on 3D reconstruction, e.g., reconstructing objects or scenes from one or multiple views, which is an inherently ill-posed inverse problem where many configurations of shape, color, texture and lighting may result in the very same image. While the primary goal of human vision is to understand how the human visual system accomplishes such tasks, research in computer vision and robotics is focused on the task of devising 3D reconstruction systems. Generally, work by Pizlo (2010) suggests that the constraints and priors used for 3D perception are innate and not learned. Similarly, in computer vision, cues and priors are commonly built into 3D reconstruction pipelines through explicit assumptions. Recently, however - leveraging the success of deep learning - researchers started to learn shape models from large collections of data, as for example ShapeNet (Chang et al, 2015). Predominantly, generative models have been used to learn how to generate, manipulate and reason about 3D shapes (Girdhar et al, 2016; Brock et al, 2016; Sharma et al, 2016; Wu et al, 2015, 2016b).

Fig. 2: (1) We train a denoising variational auto-encoder (DVAE) (Kingma and Welling, 2014; Im et al, 2017) as shape prior on ShapeNet using occupancy grids and signed distance functions (SDFs) to represent shapes. (2) The fixed generative model, i.e., decoder, then allows to learn shape completion using an unsupervised maximum likelihood (ML) loss by training a new recognition model, i.e., encoder. The retained generative model constrains the space of possible shapes while the ML loss aligns the predicted shape with the observations.

In this paper, we focus on the specific problem of inferring and completing 3D shapes based on sparse and noisy 3D point observations as illustrated in Fig. 1. This problem occurs when only a single view of an individual object is provided or large parts of the object are occluded, as common in robotic applications. For example, autonomous vehicles are commonly equipped with LiDAR scanners providing a 360 degree point cloud of the surrounding environment in real-time. This point cloud is inherently incomplete: back and bottom of objects are typically occluded and - depending on material properties - the observations are sparse and noisy, see Fig. 1 for an illustration. Similarly, indoor robots are generally equipped with low-cost, real-time RGB-D sensors providing noisy point clouds of the observed scene. In order to make informed decisions (e.g., for path planning and navigation), it is of utmost importance to efficiently establish a representation of the environment which is as complete as possible.
Existing approaches to 3D shape completion can be categorized into data-driven and learning-based methods. The former usually rely on learned shape priors and formulate shape completion as an optimization problem over the corresponding (lower-dimensional) latent space (Rock et al, 2015; Haene et al, 2014; Li et al, 2015; Engelmann et al, 2016; Nan et al, 2012; Bao et al, 2013; Dame et al, 2013; Nguyen et al, 2016). These approaches have demonstrated good performance on real data, e.g., on KITTI (Geiger et al, 2012), but are often slow in practice.
Learning-based approaches, in contrast, assume a fully supervised setting in order to directly learn shape completion on synthetic data (Riegler et al, 2017a; Smith and Meger, 2017; Dai et al, 2017; Sharma et al, 2016; Fan et al, 2017; Rezende et al, 2016; Yang et al, 2018; Wang et al, 2017; Varley et al, 2017; Han et al, 2017). They offer advantages in terms of efficiency as prediction can be performed in a single forward pass; however, they require full supervision during training. Unfortunately, even multiple, aggregated observations (e.g., from multiple views) will not be fully complete due to occlusion, sparse sampling of views and noise, see Fig. 14 (right column) for an example.
In this paper, we propose an amortized maximum likelihood approach for 3D shape completion (cf. Fig. 2), avoiding the slow optimization of data-driven approaches and the full supervision required by learning-based approaches. Specifically, we first learn a shape prior on synthetic shapes using a (denoising) variational auto-encoder (Im et al, 2017; Kingma and Welling, 2014). Subsequently, 3D shape completion can be formulated as a maximum likelihood problem. However, instead of maximizing the likelihood independently for distinct observations, we follow the idea of amortized inference (Gershman and Goodman, 2014) and learn to predict the maximum likelihood solutions directly. Towards this goal, we train a new encoder which embeds the observations in the same latent space using an unsupervised maximum likelihood loss. This allows us to learn 3D shape completion in challenging real-world situations, e.g., on KITTI, and to obtain sub-voxel accurate results using signed distance functions at resolutions up to 64³ voxels. For experimental evaluation, we introduce two novel, synthetic shape completion benchmarks based on ShapeNet and ModelNet (Wu et al, 2015). We compare our approach to the data-driven approach by Engelmann et al (2016), a baseline inspired by Gupta et al (2015), and the fully-supervised learning-based approach by Dai et al (2017); we additionally present experiments on real data from KITTI and Kinect (Yang et al, 2018). Experiments show that our approach outperforms data-driven techniques and rivals learning-based techniques while significantly reducing inference time and using only a fraction of the supervision.
A preliminary version of this work has been published at CVPR'18 (Stutz and Geiger, 2018). However, we improved the proposed shape completion method as well as the constructed datasets, and we present more extensive experiments. In particular, we extended our weakly-supervised amortized maximum likelihood approach to enforce more variety and increase visual quality significantly. On ShapeNet and ModelNet, we use volumetric fusion to obtain more detailed, watertight meshes and manually selected 220 high-quality models per object category to synthesize challenging observations. We additionally increased the spatial resolution and consider two additional baselines (Dai et al, 2017; Gupta et al, 2015). Our code and datasets will be made publicly available¹.
The paper is structured as follows: We discuss related work in Section 2. In Section 3, we introduce the weakly-supervised shape completion problem and describe the proposed amortized maximum likelihood approach. Subsequently, we introduce our synthetic shape completion benchmarks and discuss the data preparation for KITTI and Kinect in Section 4.1. Next, we discuss evaluation in Section 4.2, our training procedure in Section 4.3, and the evaluated baselines in Section 4.4. Finally, we present experimental results in Section 4.5 and conclude in Section 5.

¹ https://avg.is.tuebingen.mpg.de/research_projects/3d-shape-completion

2 Related Work

2.1 3D Shape Completion and Single-View 3D Reconstruction
In general, 3D shape completion is a special case of single-view 3D reconstruction where we assume point cloud observations to be available, e.g., from laser-based sensors as on KITTI (Geiger et al, 2012).
3D Shape Completion: Following Sung et al (2015), classical shape completion approaches can roughly be categorized into symmetry-based methods and data-driven methods. The former leverage observed symmetry to complete shapes; representative works include (Thrun and Wegbreit, 2005; Pauly et al, 2008; Zheng et al, 2010; Kroemer et al, 2012; Law and Aliaga, 2011). Data-driven approaches, in contrast, as pioneered by Pauly et al (2005), pose shape completion as a retrieval and alignment problem. While Pauly et al (2005) allow shape deformations, Gupta et al (2015) use the iterative closest point (ICP) algorithm (Besl and McKay, 1992) for fitting rigid shapes. Subsequent work usually avoids explicit shape retrieval by learning a latent space of shapes (Rock et al, 2015; Haene et al, 2014; Li et al, 2015; Engelmann et al, 2016; Nan et al, 2012; Bao et al, 2013; Dame et al, 2013; Nguyen et al, 2016). Alignment is then formulated as an optimization problem over the learned, low-dimensional latent space. For example, Bao et al (2013) parameterize the shape prior through anchor points with respect to a mean shape, while Engelmann et al (2016) and Dame et al (2013) directly learn the latent space using principal component analysis and Gaussian process latent variable models (Prisacariu and Reid, 2011), respectively. In these cases, shapes are usually represented by signed distance functions (SDFs). Nguyen et al (2016) use 3DShapeNets (Wu et al, 2015), a deep belief network trained on occupancy grids, as shape prior. In general, data-driven approaches are applicable to real data assuming knowledge about the object category. However, inference involves a possibly complex optimization problem, which we avoid by amortizing, i.e., learning, the inference procedure. Additionally, we also consider multiple object categories.
With the recent success of deep learning, several learning-based approaches have been proposed (Firman et al, 2016; Smith and Meger, 2017; Dai et al, 2017; Sharma et al, 2016; Rezende et al, 2016; Fan et al, 2017; Riegler et al, 2017a; Han et al, 2017; Yang et al, 2017, 2018). Strictly speaking, these are data-driven as well; however, shape retrieval and fitting are both avoided by directly learning shape completion end-to-end, under full supervision - usually on synthetic data from ShapeNet (Chang et al, 2015) or ModelNet (Wu et al, 2015). Riegler et al (2017a) additionally leverage octrees to predict higher-resolution shapes; most other approaches use low-resolution occupancy grids (e.g., 32³ voxels). Instead, Han et al (2017) use a patch-based approach to obtain high-resolution results. In practice, however, full supervision is often not available; thus, existing models are primarily evaluated on synthetic datasets. In order to learn shape completion without full supervision, we utilize a learned shape prior to constrain the space of possible shapes. In addition, we use SDFs to obtain sub-voxel accuracy at higher resolutions (up to 48×108×48 or 64³ voxels) without using patch-based refinement or octrees. We also consider significantly sparser observations.
Single-View 3D Reconstruction: Single-view 3D reconstruction has received considerable attention over the last years; we refer to (Oswald et al, 2013) for an overview and focus on recent deep learning approaches instead. Following Tulsiani et al (2018), these can be categorized by the level of supervision. For example, (Girdhar et al, 2016; Choy et al, 2016; Wu et al, 2016b; Häne et al, 2017) require full supervision, i.e., pairs of images and ground truth 3D shapes, which are generally derived synthetically. More recent work (Yan et al, 2016; Tulsiani et al, 2017, 2018; Kato et al, 2017; Lin et al, 2017; Fan et al, 2017; Tatarchenko et al, 2017; Wu et al, 2016a), in contrast, self-supervises the problem by enforcing consistency across multiple input views. Tulsiani et al (2018), for example, use a differentiable ray consistency loss; and in (Yan et al, 2016; Kato et al, 2017; Lin et al, 2017), differentiable rendering allows to define reconstruction losses on the images directly. While most of these approaches utilize occupancy grids, Fan et al (2017) and Lin et al (2017) predict point clouds instead. Tatarchenko et al (2017) use octrees to predict higher-resolution shapes. In contrast to these approaches, we do not assume any additional views as weak supervision; knowledge about the object category is sufficient. In this context, the concurrent work by Gwak et al (2017) is most related to ours: a set of reference shapes implicitly defines a prior over shapes which is enforced using an adversarial loss. In contrast, we use a denoising variational auto-encoder (DVAE) (Kingma and Welling, 2014; Im et al, 2017) to explicitly learn a prior for 3D shapes.

2.2 Shape Models
Shape models and priors have found application in a wide variety of tasks. In 3D reconstruction, shape priors are commonly used to resolve ambiguities or specularities (Dame et al, 2013; Güney and Geiger, 2015; Kar et al, 2015). Furthermore, pose estimation (Sandhu et al, 2009, 2011; Aubry et al, 2014), tracking (Ma and Sibley, 2014; Leotta and Mundy, 2009), segmentation (Sandhu et al, 2009, 2011), object detection (Zia et al, 2013, 2014; Pepik et al, 2015; Song and Xiao, 2014; Zheng et al, 2015) and recognition (Lin et al, 2014) - to name just a few - have been shown to benefit from shape models. While most of these works use hand-crafted shape models, for example based on anchor points or part annotations (Zia et al, 2013, 2014; Pepik et al, 2015; Lin et al, 2014), recent work (Liu et al, 2017; Sharma et al, 2016; Girdhar et al, 2016; Wu et al, 2015, 2016b; Smith and Meger, 2017; Nash and Williams, 2017) has shown that generative models such as VAEs (Kingma and Welling, 2014) or generative adversarial networks (GANs) (Goodfellow et al, 2014) allow to efficiently generate, manipulate and reason about 3D shapes. We use these more expressive models to obtain high-quality shape priors for various object categories.

2.3 Amortized Inference
To the best of our knowledge, the notion of amortized inference was introduced by Gershman and Goodman (2014) and has been picked up repeatedly in different contexts (Rezende and Mohamed, 2015; Wang et al, 2016; Ritchie et al, 2016). Generally, it describes the idea of learning to infer (or learning to sample). We refer to (Wang et al, 2016) for a broader discussion of related work. In our context, a VAE can be seen as a specific example of learned variational inference (Kingma and Welling, 2014; Rezende and Mohamed, 2015). Besides using a VAE as shape prior, we also amortize the maximum likelihood problem corresponding to our 3D shape completion task.

3 Method
In the following, we introduce the mathematical formulation of the weakly-supervised 3D shape completion problem. Subsequently, we briefly discuss denoising variational auto-encoders (DVAEs) (Kingma and Welling, 2014; Im et al, 2017), which we use to learn a strong shape prior that embeds a set of reference shapes in a low-dimensional latent space. Then, we formally derive our proposed amortized maximum likelihood (AML) approach. Here, we use maximum likelihood to learn an embedding of the observations within the same latent space - thereby allowing to perform shape completion. The overall approach is also illustrated in Fig. 2.

Fig. 3: Given reference shapes Y and incomplete observations X, we want to learn a mapping x_n → ỹ(x_n) such that the predicted shape ỹ(x_n) matches the unknown ground truth shape y*_n as closely as possible. The observations x_n are split into free space (i.e., x_{n,i} = 0, right) and point observations (i.e., x_{n,i} = 1, left). Shapes are shown in beige and observations in red.

3.1 Problem Formulation
In a supervised setting, the task of 3D shape completion can be described as follows: Given a set of incomplete observations X = {x_n}_{n=1}^N ⊆ ℝ^R and corresponding ground truth shapes Y* = {y*_n}_{n=1}^N ⊆ ℝ^R, learn a mapping x_n → y*_n that is able to generalize to previously unseen observations and possibly across object categories. We assume ℝ^R to be a suitable representation of observations and shapes; in practice, we resort to occupancy grids and signed distance functions (SDFs) defined on regular grids, i.e., x_n, y*_n ∈ ℝ^{H×W×D} ≃ ℝ^R. Specifically, occupancy grids indicate occupied space, i.e., voxel y*_{n,i} = 1 if and only if the voxel lies on or inside the shape's surface. To represent shapes with sub-voxel accuracy, SDFs hold the distance of each voxel's center to the surface; for voxels inside the shape's surface, we use negative sign. Finally, for the (incomplete) observations, we write x_n ∈ {0, 1, ⊥}^R to make missing information explicit; in particular, x_{n,i} = ⊥ corresponds to unobserved voxels, while x_{n,i} = 1 and x_{n,i} = 0 correspond to occupied and unoccupied voxels, respectively.
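For illustration, the following minimal sketch (in NumPy; grid sizes and variable names are ours, not part of the pipeline described in Section 4.1) shows how observations with states {0, 1, ⊥} can be represented as two binary masks:

```python
import numpy as np

H, W, D = 24, 54, 24  # low-resolution grid as used for ShapeNet and KITTI

# Observations x_n take values in {0, 1, ⊥}; we store them as two masks:
# `observed` marks x_i != ⊥, `occupied` holds the binary value where observed.
observed = np.zeros((H, W, D), dtype=bool)
occupied = np.zeros((H, W, D), dtype=bool)

# A re-projected point marks its voxel as occupied (x_i = 1) ...
observed[12, 27, 12] = True
occupied[12, 27, 12] = True
# ... and the voxels between it and the camera as free space (x_i = 0);
# here we assume, purely for illustration, a ray along the depth axis.
observed[12, 27, :12] = True

# The losses in Section 3.3 only sum over observed voxels, i.e., {i : x_i != ⊥}.
num_observed = int(observed.sum())
```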
On real data, e.g., KITTI (Geiger et al, 2012), supervised learning is often not possible as obtaining ground truth annotations is labor intensive, cf. (Menze and Geiger, 2015; Xie et al, 2016). Therefore, we target a weakly-supervised variant of the problem instead: Given observations X and reference shapes Y = {y_m}_{m=1}^M ⊆ ℝ^R, both of the same, known object category, learn a mapping x_n → ỹ(x_n) such that the predicted shape ỹ(x_n) matches the unknown ground truth shape y*_n as closely as possible - or, in practice, the sparse observation x_n, while being plausible considering the set of reference shapes, cf. Fig. 3. Here, supervision is provided in the form of the known object category. Alternatively, the reference shapes Y can also include multiple object categories, resulting in an even weaker notion of supervision as the correspondence between observations and object categories is unknown. Except for the object categories, however, the set of reference shapes Y, and its size M, is completely independent of the set of observations X, and its size N, as also highlighted in Fig. 2. On real data, e.g., KITTI, we additionally assume the object locations to be given in the form of 3D bounding boxes in order to extract the corresponding observations X. In practice, the reference shapes Y are derived from watertight, triangular meshes, e.g., from ShapeNet (Chang et al, 2015) or ModelNet (Wu et al, 2015).

3.2 Shape Prior
We approach the weakly-supervised shape completion problem by first learning a shape prior using a denoising variational auto-encoder (DVAE). Later, this prior constrains shape inference (see Section 3.3) to predict reasonable shapes. In the following, we briefly discuss the standard variational auto-encoder (VAE), as introduced by Kingma and Welling (2014), as well as its denoising extension, as proposed by Im et al (2017).
Variational Auto-Encoder (VAE): We propose to use the provided reference shapes Y to learn a generative model of possible 3D shapes over a low-dimensional latent space Z = ℝ^Q, i.e., Q ≪ R. In the framework of VAEs, the joint distribution p(y, z) of shapes y and latent codes z decomposes into p(y|z)p(z) with p(z) being a unit Gaussian, i.e., p(z) = N(z; 0, I_Q), where I_Q ∈ ℝ^{Q×Q} is the identity matrix. This decomposition allows to sample z ∼ p(z) and y ∼ p(y|z) to generate random shapes. For training, however, we additionally need to approximate the posterior p(z|y). To this end, the so-called recognition model q(z|y) ≈ p(z|y) takes the form

q(z|y) = N(z; µ(y), diag(σ²(y))),    (1)

where µ(y), σ²(y) ∈ ℝ^Q are predicted using the encoder neural network. The generative model p(y|z) decomposes over voxels y_i; the corresponding probabilities p(y_i|z) are represented using Bernoulli distributions for occupancy grids or Gaussian distributions for SDFs:

p(y_i|z) = Ber(y_i; θ_i(z))  or  p(y_i|z) = N(y_i; µ_i(z), σ²).    (2)

In both cases, the parameters, i.e., θ_i(z) or µ_i(z), are predicted using the decoder neural network. For SDFs, we explicitly set σ² to be constant (see Section 4.3). Then, σ² merely scales the corresponding loss, thereby implicitly defining the importance of accurate SDFs relative to occupancy grids as described below.
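As an illustration, the recognition and generative models of Eq. (1) and (2) can be realized as follows - a PyTorch sketch with invented fully-connected layers, used for illustration only; the actual convolutional architecture is described in Section 4.3 and our implementation uses Torch (Collobert et al, 2011):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Recognition model q(z|y) = N(z; mu(y), diag(sigma^2(y))), Eq. (1)."""
    def __init__(self, in_features, q=10):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.Linear(in_features, 128), nn.ReLU())
        self.mu = nn.Linear(128, q)       # predicts mu(y)
        self.logvar = nn.Linear(128, q)   # predicts log sigma^2(y) for numerical stability

    def forward(self, y):
        h = self.features(y)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Generative model p(y_i|z), Eq. (2): Bernoulli logits (occupancy) or means (SDFs)."""
    def __init__(self, out_features, q=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(q, 128), nn.ReLU(), nn.Linear(128, out_features))

    def forward(self, z):
        return self.net(z)  # interpreted as theta_i(z) logits or mu_i(z)
```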
In the framework of variational inference, the parameters of the encoder and decoder neural networks are found by maximizing the likelihood p(y). In practice, the likelihood is usually intractable and the evidence lower bound is maximized instead, see (Kingma and Welling, 2014; Blei et al, 2016). This results in the following loss to be minimized:

L_VAE(w) = −E_{q(z|y)}[ln p(y|z)] + KL(q(z|y) ‖ p(z)).    (3)

Here, w are the weights of the encoder and decoder hidden in the recognition model q(z|y) and the generative model p(y|z), respectively. The Kullback-Leibler divergence KL can be computed analytically as described in the appendix of (Kingma and Welling, 2014). The negative log-likelihood −ln p(y|z) corresponds to a binary cross-entropy error for occupancy grids and a scaled sum-of-squared error for SDFs. The loss L_VAE is minimized using stochastic gradient descent (SGD) by approximating the expectation using samples:

L_VAE(w) ≈ −(1/L) Σ_{l=1}^{L} ln p(y|z^(l)) + KL(q(z|y) ‖ p(z)).    (4)

The required samples z^(l) ∼ q(z|y) are computed using the so-called reparameterization trick,

z^(l) = µ(y) + σ(y) ⊙ ε^(l),  ε^(l) ∼ N(0, I_Q),    (5)

in order to make L_VAE, specifically the sampling process, differentiable. In practice, we found L = 1 samples to be sufficient - which conforms with results by Kingma and Welling (2014). At test time, the sampling process z ∼ q(z|y) is replaced by the predicted mean µ(y). Overall, the standard VAE allows us to embed the reference shapes in a low-dimensional latent space. In practice, however, the learned prior might still include unreasonable shapes.
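For occupancy grids, Eq. (3)-(5) translate into a few lines; a sketch assuming the `Encoder`/`Decoder` above, with L = 1 samples and the analytic Gaussian KL term:

```python
import torch
import torch.nn.functional as F

def vae_loss(y, encoder, decoder):
    """L_VAE, Eq. (3)-(4): reconstruction error plus KL divergence.

    y: float occupancy grid of shape (batch, H, W, D); shapes are assumed
    to be compatible with the encoder/decoder sketched above.
    """
    mu, logvar = encoder(y)
    eps = torch.randn_like(mu)               # reparameterization trick, Eq. (5)
    z = mu + torch.exp(0.5 * logvar) * eps   # z ~ q(z|y), L = 1 sample
    logits = decoder(z)
    # -ln p(y|z): binary cross-entropy for occupancy grids.
    recon = F.binary_cross_entropy_with_logits(logits, y.flatten(1), reduction='sum')
    # KL(q(z|y) || N(0, I_Q)) in closed form (Kingma and Welling, 2014).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```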
Denoising VAE (DVAE): In order to avoid inappropriate shapes being included in our shape prior, we consider a denoising variant of the VAE, allowing to obtain a tighter bound on the likelihood p(y). More specifically, a corruption process y′ ∼ p(y′|y) is considered and the corresponding evidence lower bound results in the following loss:

L_DVAE(w) = −E_{p(y′|y)}[E_{q(z|y′)}[ln p(y|z)]] + E_{p(y′|y)}[KL(q(z|y′) ‖ p(z))].    (6)

Note that the reconstruction error −ln p(y|z) is still computed with respect to the uncorrupted shape y, while z, in contrast to Eq. (3), is sampled conditioned on the corrupted shape y′. In practice, the corruption process p(y′|y) is modeled using Bernoulli noise for occupancy grids and Gaussian noise for SDFs. In experiments, we found DVAEs to learn more robust latent spaces - meaning the prior is less likely to contain unreasonable shapes. In the following, we always use DVAEs as shape priors.
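The corruption process p(y′|y) is straightforward to implement; a sketch with the noise types used in Section 4.3 (Bernoulli flips with probability 0.1, Gaussian noise with variance 0.05):

```python
import torch

def corrupt(occ, sdf, p_flip=0.1, var=0.05):
    """Sample y' ~ p(y'|y): Bernoulli noise on occupancy, Gaussian noise on SDFs."""
    flip = torch.bernoulli(torch.full_like(occ, p_flip))
    occ_noisy = (1.0 - flip) * occ + flip * (1.0 - occ)   # flip with probability p_flip
    sdf_noisy = sdf + var ** 0.5 * torch.randn_like(sdf)  # additive Gaussian noise
    return occ_noisy, sdf_noisy
```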

3.3 Shape Inference
After learning the shape prior, which defines the joint distribution p(y, z) of shapes y and latent codes z as the product of generative model p(y|z) and prior p(z), shape completion can be formulated as a maximum likelihood (ML) problem for p(y, z) over the lower-dimensional latent space Z = ℝ^Q. The corresponding negative log-likelihood −ln p(y, z) to be minimized can be written as

−ln p(y, z) = −Σ_{x_i ≠ ⊥} ln p(y_i = x_i | z) − ln p(z).    (7)

As the prior p(z) is Gaussian, the negative log-probability −ln p(z) is proportional to ‖z‖²₂ and constrains the problem to likely, i.e., reasonable, shapes with respect to the shape prior. As before, the generative model p(y|z) decomposes over voxels; here, we can only consider actually observed voxels, i.e., x_i ≠ ⊥. We assume that the learned shape prior can complete the remaining, unobserved voxels x_i = ⊥. Instead of solving Eq. (7) for each observation x ∈ X independently, however, we follow the idea of amortized inference (Gershman and Goodman, 2014) and train a new encoder z(x; w) that learns to predict the ML solution directly. To this end, we keep the generative model p(y|z) fixed and train only the weights w of the new encoder z(x; w) using the ML objective as loss:

L(w) = −Σ_{x_i ≠ ⊥} ln p(y_i = x_i | z(x; w)) − λ ln p(z(x; w)).    (8)

Here, λ controls the importance of the shape prior. The exact form of the probabilities p(y_i = x_i|z) depends on the used shape representation. For occupancy grids, this term results in a cross-entropy error, as both the predicted voxels y_i and the observations x_i are, for x_i ≠ ⊥, binary. For SDFs, however, the term is not well-defined as p(y_i|z) is modeled with a continuous Gaussian distribution, while the observations x_i are binary. As a solution, we could compute (signed) distance values along the rays corresponding to observed points (e.g., following (Steinbrucker et al, 2013)) in order to obtain continuous observations x_i ∈ ℝ for x_i ≠ ⊥. However, as illustrated in Fig. 4, noisy observations cause the distance values along the whole ray to be invalid. This can partly be avoided when relying only on occupancy to represent the observations; in this case, free space (cf. Fig. 3) observations are partly correct even though observed points may lie within the corresponding shapes.
For making SDFs tractable (i.e., to predict sub-voxel accurate, visually smooth and appealing shapes, see Section 4.5) while using binary observations, we propose to define p(y_i = x_i|z) through a simple transformation. In particular, as p(y_i|z) is modeled using a Gaussian distribution N(y_i; µ_i(z), σ²), where µ_i(z) is predicted using the fixed decoder (σ² is constant), and x_i is binary (for x_i ≠ ⊥), we introduce a mapping θ_i(µ_i(z)) transforming the predicted mean SDF value to an occupancy probability:

p(y_i = x_i | z) = Ber(x_i; θ_i(µ_i(z))).    (9)

As, by construction (see Section 3.1), occupied voxels have negative sign or value zero in the SDF, we can derive the occupancy probability θ_i(µ_i(z)) as the probability of a non-positive distance:

θ_i(µ_i(z)) = N(y_i ≤ 0; µ_i(z), σ²)    (10)
            = 1/2 (1 + erf((−µ_i(z)) / (σ√2))).    (11)

Here, erf is the error function which, in practice, can be approximated following (Abramowitz, 1974). Eq. (11) is illustrated in Fig. 4, where the occupancy probability θ_i(µ_i(z)) is computed as the area under the Gaussian bell curve for y_i ≤ 0. This per-voxel transformation can easily be implemented as a non-linear layer, and its derivative w.r.t. µ_i(z) is, by construction, a Gaussian. Note that the transformation is correct, not approximate, based on our model assumptions and the definitions in Section 3.1. Overall, this transformation allows us to easily minimize Eq. (8) for both occupancy grids and SDFs using binary observations. The obtained encoder embeds the observations in the latent shape space to perform shape completion.

Fig. 4: When representing noisy observations as SDFs, the distance values along the whole ray become invalid (cf. (b)). When using occupancy only, in contrast, only the voxels behind the surface are assigned invalid occupancy states (marked red); the remaining voxels are labeled correctly (marked green; cf. (c)). We also illustrate the transformation discussed in Section 3.3, allowing to use the binary observations x_i (for x_i ≠ ⊥) to supervise the SDF predictions: the predicted Gaussian distribution is transformed to a Bernoulli distribution with occupancy probability θ_i(µ_i(z)) = p(y_i ≤ 0) (blue area).
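As noted above, Eq. (11) can be implemented as a simple non-linear layer; a minimal sketch (σ² is the constant variance from Section 3.2):

```python
import math
import torch

def occupancy_probability(mu, sigma2):
    """Eq. (11): theta_i(mu_i(z)) = p(y_i <= 0) for y_i ~ N(mu_i(z), sigma^2).

    This is the Gaussian CDF evaluated at zero, expressed via the error
    function; its derivative w.r.t. mu is, by construction, a Gaussian.
    """
    return 0.5 * (1.0 + torch.erf(-mu / math.sqrt(2.0 * sigma2)))
```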

3.4 Practical Considerations
Encouraging Variety: So far, our AML formulation assumes a deterministic encoder z(x; w) which predicts, given the observation x, a single code z corresponding to a completed shape. A closer look at Eq. (8), however, reveals an unwanted problem: the data term scales with the number of observations, i.e., |{x_i ≠ ⊥}|, while the regularization term stays constant. With fewer observations, the regularizer gains in importance, leading to limited variety in the predicted shapes because z(x; w) tends towards zero. In order to encourage variety, we draw inspiration from the VAE shape prior. Specifically, we use a probabilistic recognition model (cf. Eq. (1)) and replace the negative log-likelihood −ln p(z) with the corresponding Kullback-Leibler divergence KL(q(z|x) ‖ p(z)) with p(z) = N(z; 0, I_Q). Intuitively, this makes sure that the encoder's predictions "cover" the prior distribution - thereby enforcing variety. Mathematically, the resulting loss, i.e.,

L_AML(w) = −E_{q(z|x)}[Σ_{x_i ≠ ⊥} ln p(y_i = x_i | z)] + λ KL(q(z|x) ‖ p(z)),    (13)

can be interpreted as the result of maximizing the evidence lower bound of a model with observation process p(x|y) (analogously to the corruption process p(y′|y) for DVAEs in (Im et al, 2017) and Section 3.2). The expectation is approximated using samples (following the reparameterization trick in Eq. (5)) and, during testing, the sampling process z ∼ q(z|x) is replaced by the mean prediction µ(x). In practice, we find that Eq. (13) improves the visual quality of the completed shapes. We compare this AML model to its deterministic variant dAML in Section 4.5.
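Putting the pieces together, Eq. (13) can be sketched as follows; this is a sketch under our assumptions, with the Eq. (11) transformation inlined, `x_obs` holding the binary observations, `mask` marking x_i ≠ ⊥, shapes of observations and decoder output assumed compatible, and the decoder weights frozen:

```python
import math
import torch

def aml_loss(x_obs, mask, encoder, decoder, sigma2, lam):
    """L_AML, Eq. (13): masked reconstruction plus lambda-weighted KL term."""
    mu_z, logvar_z = encoder(x_obs * mask)             # probabilistic recognition model
    z = mu_z + torch.exp(0.5 * logvar_z) * torch.randn_like(mu_z)
    mu_sdf = decoder(z)                                # fixed generative model p(y|z)
    theta = 0.5 * (1.0 + torch.erf(-mu_sdf / math.sqrt(2.0 * sigma2)))  # Eq. (11)
    # Bernoulli negative log-likelihood, restricted to observed voxels.
    nll = -(x_obs * torch.log(theta + 1e-8)
            + (1.0 - x_obs) * torch.log(1.0 - theta + 1e-8))
    recon = (nll * mask).sum()
    kl = -0.5 * torch.sum(1 + logvar_z - mu_z.pow(2) - logvar_z.exp())
    return recon + lam * kl
```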
Handling Noise: Another problem of our AML formulation concerns noise. On KITTI, for example, specular or transparent surfaces cause invalid observations: laser rays traversing through these surfaces either produce observations lying within shapes or do not get reflected at all. Our AML framework, however, assumes deterministic, i.e., trustworthy, observations - as can be seen in the reconstruction error in Eq. (13). Therefore, we introduce per-voxel weights κ_i computed using the reference shapes Y = {y_m}_{m=1}^M:

κ_i = 1 − (1/M) Σ_{m=1}^M y_{m,i},    (14)

where y_{m,i} = 1 if and only if the corresponding voxel is occupied. Applied to free space observations, i.e., x_i = 0, these observations are trusted less if they are unlikely under the shape prior. Note that for point observations, i.e., x_i = 1, this is not necessary as we explicitly consider "filled" shapes (see Section 4.1). This can also be interpreted as imposing an additional mean shape prior on the predicted shapes with respect to the observed free space. In addition, we use a corruption process p(x′|x) consisting of Bernoulli and Gaussian noise during training (analogously to the DVAE shape prior).
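Eq. (14) amounts to averaging the reference shapes; a one-line sketch (assuming our reconstruction of Eq. (14) above; Y is a stack of filled occupancy grids):

```python
import numpy as np

def free_space_weights(reference_occupancy):
    """Eq. (14): kappa_i = 1 - (1/M) * sum_m y_{m,i}.

    reference_occupancy: array of shape (M, H, W, D) with y_{m,i} in {0, 1}.
    Free space observations (x_i = 0) at voxels that are frequently occupied
    under the prior receive low weight, i.e., are trusted less.
    """
    return 1.0 - reference_occupancy.mean(axis=0)
```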
4 Experiments

4.1 Data

ShapeNet: We utilize the truncated SDF (TSDF) fusion approach of Riegler et al (2017a) to obtain watertight versions of the provided car shapes, allowing to reliably and efficiently compute occupancy grids and SDFs. Specifically, we use 100 depth maps of 640×640 pixels resolution, distributed uniformly on the sphere around each shape, and perform TSDF fusion at a resolution of 256³ voxels. Detailed watertight meshes, without inner structures, can then be extracted using marching cubes (Lorensen and Cline, 1987) and simplified to 5k faces using MeshLab's quadratic simplification algorithm (Cignoni et al, 2008), see Fig. 5a to c. Finally, we manually selected 220 shapes from this collection, removing exotic cars, unwanted configurations, or shapes with large holes (e.g., missing floors or open windows). The shapes are split into |Y| = 100 reference shapes, |Y*| = 100 shapes for training the inference model, and 20 test shapes. We randomly perturb rotation and scaling to obtain 5 variants of each shape, voxelize them using triangle-voxel intersections and subsequently "fill" the obtained volumes using a connected components algorithm (Jones et al, 2001). For computing SDFs we use SDFGen². We use three different resolutions: H×W×D = 24×54×24, 32×72×32 and 48×108×48 voxels. Examples are shown in Fig. 5d to f. Finally, we use the OpenGL renderer of Güney and Geiger (2015) to obtain 10 depth maps per shape. The incomplete observations X are obtained by re-projecting the depth maps into 3D, marking voxels with at least one point as occupied, and marking voxels between occupied voxels and the camera center as free space. We obtain denser point clouds using depth maps of 48×64 pixels resolution and sparser point clouds using depth maps of 24×32 pixels resolution. For the latter, more challenging case, we also add exponentially distributed noise (with rate parameter 70) to the depth values, or randomly (with probability 0.075) set them to the maximum depth, to simulate the deficiencies of point clouds captured with real sensors, e.g., on KITTI. These two variants are denoted SN-clean and SN-noisy. The obtained observations are illustrated in Fig. 5e.

Fig. 6: For KITTI, we show observed points in red and the accumulated, partial ground truth in green. Note that for the first example, ground truth is not available due to missing past/future observations. For Kinect, we show observations in red and ElasticFusion (Whelan et al, 2015) ground truth in beige. Note that the objects are rotated and not aligned as in ModelNet (cf. Fig. 5).

² https://github.com/christopherbatty/SDFGen
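As a simplified sketch of this observation generation (the actual pipeline uses the OpenGL renderer of Güney and Geiger (2015); the camera model, names and the straight-line sampling below are our own simplifications):

```python
import numpy as np

def points_to_observation(points, camera, shape, voxel_size, steps=50):
    """Voxelize re-projected points and trace free space towards the camera.

    Returns a grid with 1 for occupied, 0 for free space, -1 for unobserved,
    i.e., the states {1, 0, ⊥} of Section 3.1.
    """
    grid = np.full(shape, -1, dtype=np.int8)
    for p in points:
        idx = tuple((p / voxel_size).astype(int))
        if all(0 <= i < s for i, s in zip(idx, shape)):
            grid[idx] = 1  # a voxel containing at least one point is occupied
        # Sample voxels between the point and the camera center as free space.
        for t in np.linspace(0.0, 1.0, steps, endpoint=False)[1:]:
            q = p + t * (camera - p)
            jdx = tuple((q / voxel_size).astype(int))
            if all(0 <= i < s for i, s in zip(jdx, shape)) and grid[jdx] != 1:
                grid[jdx] = 0
    return grid
```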

KITTI: We extract observations from KITTI's Velodyne point clouds using the provided ground truth 3D bounding boxes to avoid the inaccuracies of 3D object detectors (train/test split by Chen et al (2016)). As the 3D bounding boxes in KITTI fit very tightly, we first padded them by a factor of 0.25 on all sides; afterwards, the observed points are voxelized into grids of size H×W×D = 24×54×24, 32×72×32 and 48×108×48 voxels. To avoid taking points from the street, nearby walls, vegetation or other objects into account, we only consider those points lying within the original (i.e., not padded) bounding box. Finally, free space is computed using ray tracing as described above. We filter the observations, keeping only those containing a minimum of 50 observed points. For the bounding boxes in the test set, we additionally generated partial ground truth by accumulating the 3D point clouds of the 10 future and 10 past frames around each observation. Examples are shown in Fig. 6.

Table 1: We report the number of (rotated and scaled) meshes used as reference shapes, and the resulting number of observations (i.e., views, 10 per shape). We also report the average fraction of observed voxels, i.e., |{x_i ≠ ⊥}| / (H·W·D). For ModelNet, we exemplarily report statistics for chairs; for Kinect, we report statistics for tables.
ModelNet: We use ModelNet10, comprising 10 popular object categories (bathtub, bed, chair, desk, dresser, monitor, night stand, sofa, table, toilet), and select, for each category, the first 200 and 20 shapes from the provided training and test sets, respectively. Then, we follow the pipeline outlined in Fig. 5, as on ShapeNet, using 10 random variants per shape. Due to thin structures, however, SDF computation does not work well (especially at low resolution, e.g., 32³ voxels). Therefore, we approximate the SDFs using a 3D distance transform on the occupancy grids. Our experiments are conducted at resolutions of H×W×D = 32³, 48³ and 64³ voxels. Given the increased difficulty, we use resolutions of 64², 96² and 128² pixels for the depth maps generating the observations.
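This approximation reduces to two Euclidean distance transforms; a minimal sketch using SciPy:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def approximate_sdf(occupancy):
    """Approximate a signed distance function from a binary occupancy grid.

    Distances are in voxel units; negative inside the shape, matching the
    sign convention of Section 3.1.
    """
    inside = distance_transform_edt(occupancy)        # distance to free space
    outside = distance_transform_edt(1 - occupancy)   # distance to occupied space
    return outside - inside
```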
In our experiments, we consider bathtubs, chairs, desks and tables individually, as well as all 10 categories together (resulting in 100k views overall). For Kinect, we additionally use a dataset of rotated chairs and tables aligned with Kinect's ground plane.

Fig. 7: Network Architectures. We use different resolutions for ShapeNet and KITTI as well as ModelNet and Kinect (bottom and top, respectively). In both cases, architectures for higher resolutions employ one additional stage in the en- and decoder (in gray). Each convolutional layer is followed by ReLU activations and batch normalization (Ioffe and Szegedy, 2015); the window sizes for max pooling and nearest-neighbor upsampling can be derived from the context; the numbers of channels are given in parentheses.
Kinect: Yang et al (2018) provide Kinect scans of various chairs and tables, including both single-view observations and ground truth from ElasticFusion (Whelan et al, 2015) as occupancy grids. However, the ground truth is not fully accurate, and only 40 views are provided per object category. Still, the objects have been segmented to remove clutter and are appropriate for experiments in conjunction with ModelNet10. Unfortunately, Yang et al do not provide SDFs; again, we use 3D distance transforms as approximation. Additionally, the observations do not indicate free space, and we had to guess an appropriate ground plane. For our experiments, we use 30 views for training and 10 views for testing; see Fig. 6 for examples.

4.2 Evaluation
For occupancy grids, we use the Hamming distance (Ham) and intersection-over-union (IoU) between the (thresholded) predictions and the ground truth; note that lower Ham is better, while higher IoU is better. For SDFs, we consider a mesh-to-mesh distance on ShapeNet and a mesh-to-point distance on KITTI. We follow (Jensen et al, 2014) and consider accuracy (Acc) and completeness (Comp). To measure Acc, we uniformly sample roughly 10k points on the reconstructed mesh and average their distance to the target mesh. Analogously, Comp is the distance from the target mesh (or the ground truth points on KITTI) to the reconstructed mesh. Note that for both Acc and Comp, lower is better.
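For reference, the occupancy metrics can be computed as follows (a sketch; that Ham is normalized by the grid size and predictions are thresholded at 0.5 are our assumptions):

```python
import numpy as np

def hamming(pred, gt):
    """Hamming distance between binary grids, normalized by grid size (lower is better)."""
    return float(np.mean(pred != gt))

def iou(pred, gt):
    """Intersection-over-union between binary grids (higher is better)."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection) / max(int(union), 1)
```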

4.3 Architectures and Training
As depicted in Fig. 7, our network architectures are kept simple and shallow. Considering a resolution of 24×54×24 voxels on ShapeNet and KITTI, the encoder comprises three stages, each consisting of two convolutional layers (followed by ReLU activations and batch normalization (Ioffe and Szegedy, 2015)) and max pooling; the decoder mirrors the encoder, replacing max pooling by nearest-neighbor upsampling. We consistently use 3³ convolutional kernels. We use a latent space of size Q = 10 and predict occupancy using Sigmoid activations. We found that the shape representation has a significant impact on training. Specifically, learning both occupancy grids and SDFs works better compared to training on SDFs only. Additionally, following prior art in single-image depth prediction (Eigen and Fergus, 2015; Eigen et al, 2014; Laina et al, 2016), we consider log-transformed, truncated SDFs (logTSDFs) for training: given a signed distance y_i, we compute sign(y_i) · log(1 + min(5, |y_i|)) as the corresponding log-transformed, truncated signed distance. TSDFs are commonly used in the literature (Newcombe et al, 2011; Riegler et al, 2017a; Dai et al, 2017; Engelmann et al, 2016; Curless and Levoy, 1996) and the logarithmic transformation additionally increases the relative importance of values around the surfaces (i.e., around the zero crossing).

Fig. 9: Comparison of AML and dAML. Our deterministic variant, dAML, produces inferior results. Predicted shapes in beige and observations in red at low resolution (24×54×24 voxels).
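The logTSDF is a simple per-voxel transformation; a NumPy sketch:

```python
import numpy as np

def log_tsdf(sdf, truncation=5.0):
    """sign(y_i) * log(1 + min(truncation, |y_i|)): truncates large distances and
    boosts the relative weight of values near the surface (zero crossing)."""
    return np.sign(sdf) * np.log1p(np.minimum(truncation, np.abs(sdf)))
```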
For training, we combine occupancy grids and logTSDFs in separate feature channels and randomly translate both by up to 3 voxels per axis. Additionally, we use Bernoulli noise (probability 0.1) and Gaussian noise (variance 0.05). We use Adam (Kingma and Ba, 2015), a batch size of 16 and the initialization scheme by Glorot and Bengio (2010). The shape prior is trained for 3000 to 4000 epochs with an initial learning rate of 10⁻⁴, which is decayed by 0.925 every 215 iterations until a minimum of 10⁻¹⁶ is reached. In addition, weight decay (10⁻⁴) is applied. For shape inference, training takes 30 to 50 epochs, and an initial learning rate of 10⁻⁴ is decayed by 0.9 every 215 iterations. For our learning-based baselines (see Section 4.4), we require between 300 and 400 epochs using the same training procedure as for the shape prior. On the Kinect dataset, where only 30 training examples are available, we use 5000 epochs. We use log σ² = −2 as an empirically found trade-off between accuracy of the reconstructed SDFs and ease of training; significantly lower log σ² may lead to difficulties during training, including divergence. On ShapeNet, ModelNet and Kinect, the weight λ of the Kullback-Leibler divergence (for both DVAE and (d)AML) was empirically determined as λ = 2, 2.5 and 3 for low, medium and high resolution, respectively. On KITTI, we use λ = 1 for all resolutions. In practice, λ controls the trade-off between diversity (low λ) and quality (high λ) of the completed shapes. In addition, we reduce the weight in free space areas to one fourth on SN-noisy and KITTI to balance between occupied and free space. We implemented our networks in Torch (Collobert et al, 2011).

Fig. 10: The plots illustrate that the DVAE is able to separate the ten object categories. In (c) and (d), we show a t-SNE visualization and a projection of the latent space corresponding to our learned AML model on SN-clean. We randomly picked 10 ground truth shapes ("x") and the corresponding observations (10 per shape, shown as points; gray pixels indicate the remaining shapes/observations). The plots illustrate that AML is able to associate observations with the corresponding ground truth shapes under weak supervision.

4.4 Baselines
Data-Driven Approaches: We consider the works by Engelmann et al (2016) and Gupta et al (2015) as data-driven baselines. Additionally, we consider regular maximum likelihood (ML). Engelmann et al (2016) - referred to as Eng16 - use a principal component analysis shape prior trained on a manually selected set of car models³. Shape completion is posed as an optimization problem considering both shape and pose. The pre-trained shape prior provided by Engelmann et al assumes a ground plane which is, according to KITTI's LiDAR data, fixed at 1m height. Thus, we do not need to optimize pose on KITTI as we use the ground truth bounding boxes; on ShapeNet, in contrast, we need to optimize both pose and shape to deal with the random rotations in SN-clean and SN-noisy.

Table 2: Quantitative Results on ShapeNet and KITTI. We consider Hamming distance (Ham) and intersection-over-union (IoU) for occupancy grids as well as accuracy (Acc) and completeness (Comp) for meshes on SN-clean, SN-noisy and KITTI. For Ham, Acc and Comp, lower is better; for IoU, higher is better. The unit of Acc and Comp is voxels (voxel length at 24×54×24 voxels) or meters. Note that the DVAE shape prior (in gray) is only reported as reference (i.e., a bound on (d)AML). We indicate the level of supervision in percentage, relative to the corresponding resolution (see Table 1), and mark the best results under full supervision in red and under weak supervision in green.

Inspired by the work by Gupta et al (2015), we also consider a shape retrieval and fitting baseline, referred to as ICP. Specifically, we perform iterative closest point (ICP) (Besl and McKay, 1992) fitting on all training shapes and subsequently select the best-fitting one. To this end, we uniformly sample 1 million points on the training shapes and perform point-to-point ICP⁴ for a maximum of 100 iterations, using R = I₃ and t = 0 as initialization. On the training set, we verified that this approach is always able to retrieve the perfect shape.

⁴ http://www.cvlibs.net/software/libicp/
Finally, we consider a simple ML baseline, iteratively minimizing Eq. (7) using stochastic gradient descent (SGD). This baseline is similar to the work by Engelmann et al; however, like our approach, it is bound to the voxel grid. Per example, we allow a maximum of 5000 iterations, starting with latent code z = 0, learning rate 0.05 and momentum 0.5 (decayed every 50 iterations at rates 0.85 and 1.0 until 10⁻⁵ and 0.9 are reached, respectively).
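A condensed sketch of this baseline, using PyTorch autograd for illustration (the decay schedule is omitted; `decoder` is the fixed generative model, `x_obs` holds binary observations and `mask` marks observed voxels):

```python
import math
import torch

def ml_baseline(x_obs, mask, decoder, sigma2, steps=5000, lr=0.05, momentum=0.5):
    """Minimize Eq. (7) over the latent code z via SGD, keeping the decoder fixed."""
    z = torch.zeros(1, 10, requires_grad=True)  # latent code, Q = 10, initialized to 0
    optimizer = torch.optim.SGD([z], lr=lr, momentum=momentum)
    for _ in range(steps):
        optimizer.zero_grad()
        mu_sdf = decoder(z)
        theta = 0.5 * (1.0 + torch.erf(-mu_sdf / math.sqrt(2.0 * sigma2)))  # Eq. (11)
        nll = -(x_obs * torch.log(theta + 1e-8)
                + (1.0 - x_obs) * torch.log(1.0 - theta + 1e-8))
        prior = 0.5 * z.pow(2).sum()  # -ln p(z) of the unit Gaussian, cf. Eq. (7)
        loss = (nll * mask).sum() + prior
        loss.backward()
        optimizer.step()
    return z.detach()
```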
Learning-Based Approaches: Learning-based approaches usually employ an encoder-decoder architecture to directly learn a mapping from observations x_n to ground truth shapes y*_n in a fully supervised setting (Varley et al, 2017; Yang et al, 2017, 2018; Dai et al, 2017). While existing architectures differ slightly, they usually rely on a U-net architecture (Ronneberger et al, 2015; Cicek et al, 2016). In this paper, we use the approach of Dai et al (2017)⁵ - referred to as Dai17 - as a representative baseline for this class of approaches. In addition, we consider a custom learning-based baseline - referred to as Sup - which uses the architecture of our DVAE shape prior, cf. Fig. 7. In contrast to (Dai et al, 2017), this baseline is also limited by the low-dimensional (Q = 10) bottleneck as it does not use skip connections.

4.5 Experimental Evaluation
Quantitative results are summarized in Table 2 (ShapeNet and KITTI) and Table 3 (ModelNet). Qualitative results for the shape prior are shown in Fig. 9 and 10; shape completion results are shown in Fig. 11 (ShapeNet and ModelNet) and Fig. 14 (KITTI and Kinect).

⁵ We use https://github.com/angeladai/cnncomplete. On ModelNet, we added one convolutional stage in the en- and decoder for larger resolutions; on ShapeNet and KITTI, we needed to adapt the convolutional strides to fit the corresponding resolutions.
Latent Space Dimensionality: Regarding our DVAE shape prior, we found the dimensionality Q to be of crucial importance, as it defines the trade-off between reconstruction accuracy and random sample quality (i.e., the quality of the generative model). A higher-dimensional latent space usually results in higher-quality reconstructions but also imposes the difficulty of randomly generating meaningful shapes. Across all datasets, we found Q = 10 to be suitable - which is significantly smaller compared to related work: 35 in (Liu et al, 2017), 6912 in (Sharma et al, 2016), 200 in (Wu et al, 2016b; Smith and Meger, 2017) or 64 in (Girdhar et al, 2016). Still, we are able to obtain visually appealing results. Finally, in Fig. 9 we show qualitative results, illustrating good reconstruction performance and reasonable random samples across resolutions. Fig. 10 shows a t-SNE (van der Maaten and Hinton, 2008) visualization as well as a projection of the Q = 10 dimensional latent space, color-coding the 10 object categories of ModelNet10. The DVAE clusters the object categories within the support region of the unit Gaussian. In the t-SNE visualization, we additionally see ambiguities arising in ModelNet10: e.g., night stands and dressers often look indistinguishable, while monitors are very dissimilar to all other categories. Overall, these findings support our decision to use a DVAE with Q = 10 as shape prior.

Table 3: Quantitative Results on ModelNet. Results for bathtubs, chairs, desks, tables and all ten categories combined (ModelNet10). As the ground truth SDFs are merely approximations (cf. Section 4.1), we concentrate on Hamming distance (Ham; lower is better) and intersection-over-union (IoU; higher is better). Only for chairs, we report accuracy (Acc) and completeness (Comp) in voxels (voxel length at 32³ voxels). We also indicate the level of supervision (see Table 1). Again, we report the DVAE shape prior as reference and mark the best weakly-supervised approach in green and the best fully-supervised approach in red.
Ablation Study: In Table 2, we show quantitative results of our model on SN-clean and SN-noisy. First, we report the reconstruction quality of the DVAE shape prior as reference. Then, we consider the DVAE shape prior (Naïve) and its mean prediction (Mean) as simple baselines. The poor performance of both illustrates the difficulty of the benchmark. For AML, we also consider its deterministic variant, dAML (see Section 3.4). Quantitatively, there is essentially no difference; however, Fig. 9 demonstrates that AML is able to predict more detailed shapes. We also found that using both occupancy and SDFs is necessary to obtain good performance - as is using both point observations and free space.
Considering Fig. 10, we additionally demonstrate that the embedding learned by AML, i.e., the embedding of incomplete observations within the latent shape space, is able to associate observations with corresponding shapes even under weak supervision. In particular, we show a t-SNE visualization and a projection of the latent space for AML trained on SN-clean. We color-code 10 randomly chosen ground truth shapes, resulting in 100 observations (10 views per shape). AML is usually able to embed observations near the corresponding ground truth shapes without explicit supervision (e.g., for violet, pink, blue or teal, the observations (points) are close to the corresponding ground truth shapes ("x")). Additionally, AML also matches the unit Gaussian prior distribution reasonably well.
Comparison to Baselines on Synthetic Data: For ShapeNet, Table 2 demonstrates that AML outperforms data-driven approaches such as Eng16, ICP and ML, and is able to compete with the fully-supervised approaches, Dai17 and Sup, while using only 8% supervision or less. We also note that AML outperforms ML, illustrating that amortized inference is beneficial. Furthermore, Dai17 outperforms Sup, illustrating the advantage of propagating low-level information (through skip connections) without a bottleneck. Most importantly, the performance gap between AML and Dai17 is rather small considering the difference in supervision (more than 92%), and on SN-noisy, the drop in performance for Dai17 and Sup is larger than for AML, suggesting that AML handles noise and sparsity more robustly. Fig. 11 shows that these conclusions also hold visually, where AML performs on par with Dai17.
For ModelNet, in Table 3, we mostly focus on occupancy grids (as the derived SDFs are approximate, cf. Section 4.1) and show that chairs, desks and tables are more difficult. However, AML is still able to predict high-quality shapes, outperforming data-driven approaches. Additionally, in comparison to ShapeNet, the gap between AML and the fully-supervised approaches (Dai17 and Sup) is surprisingly small - not reflecting the difference in supervision. This means that even under full supervision, these object categories are difficult to complete. In terms of accuracy (Acc) and completeness (Comp), e.g., for chairs, AML outperforms ICP and ML; Dai17 and Sup, on the other hand, outperform AML. Still, considering Fig. 11, AML predicts visually appealing meshes although the reference shape SDFs on ModelNet are merely approximate. Qualitatively, AML also outperforms its data-driven rivals; only Dai17 predicts shapes slightly closer to the ground truth.

Fig. 12: While AML is designed for especially sparse observations, it also performs well in a multi-view setting. Additionally, higher resolutions allow to predict more detailed shapes. Shapes, occupancy grids or meshes, in beige and observations in red.
Multiple Views and Higher Resolutions: In Table 2, we consider multiple (k ∈ {2, 3, 5}) randomly fused observations (from the 10 views per shape). Generally, additional observations are beneficial (also cf. Fig. 12); however, fully-supervised approaches such as Dai17 benefit more significantly than AML. Intuitively, especially on SN-noisy, k = 5 noisy observations seem to impose contradictory constraints that cannot be resolved under weak supervision. We also show that higher resolution allows both AML and Dai17 to predict more detailed shapes, see Fig. 12; for AML this is significant as, e.g., on SN-noisy, the level of supervision reduces to less than 1%. Also note that AML is able to handle the slightly asymmetric desks in Fig. 12 due to the strong shape prior, which itself includes symmetric and less symmetric shapes.

Fig. 13: Category-Agnostic Results on ModelNet10. AML is able to recover detailed shapes of the correct object category even without category supervision (as provided to Dai17). Shapes (occupancy grids and meshes) in beige and observations in red at low resolution (32³ voxels).

Multiple Object Categories: We also investigate the category-agnostic case, considering all ten ModelNet10 object categories; here, we train a single DVAE shape prior (as well as a single model for Dai17 and Sup) across all ten object categories. As can be seen in Table 3, the gap between AML and the fully-supervised approaches, Dai17 and Sup, further shrinks; even fully-supervised methods have difficulties distinguishing object categories based on sparse observations. Fig. 13 shows that AML is able to not only predict reasonable shapes, but also identify the correct object category. In contrast to Dai17, which predicts slightly more detailed shapes, this is significant as AML does not have access to object category information during training.
Comparison on Real Data: On KITTI, considering Fig. 14, we illustrate that AML consistently predicts detailed shapes regardless of the noise and sparsity of the inputs. Our qualitative results suggest that AML is able to predict more detailed shapes compared to Dai17 and Eng16; additionally, Eng16 is distracted by sparse and noisy observations. Quantitatively, instead, Dai17 and Sup outperform AML. However, this is mainly due to two factors: first, the ground truth collected on KITTI rarely covers the full car; and second, we put significant effort into faithfully modeling KITTI's noise statistics in SN-noisy, allowing Dai17 and Sup to generalize very well. The latter effort, especially, can be avoided by using our weakly-supervised approach, AML. On Kinect, also considering Fig. 14, only 30 observations are available for training. It can be seen that AML predicts reasonable shapes for tables. We find it interesting that AML is able to generalize from only 30 training examples. In this sense, AML functions similar to ML, in that the objective is trained to overfit to few samples. This, however, cannot work in all cases, as demonstrated by the chairs, where AML tries to predict a suitable chair but does not fit the observations as well. Another problem witnessed on Kinect is that the shape prior training samples need to be aligned to the observations (with respect to the viewing angles). For the chairs, we were not able to guess the viewing trajectory correctly (cf. (Yang et al, 2018)).
Failure Cases: AML and Dai17 often face similar problems, as illustrated in Fig. 15, suggesting that these problems are inherent to the used shape representations or the learning approach, independent of the level of supervision. For example, both AML and Dai17 have problems with fine, thin structures that are hard to reconstruct properly at any resolution. Furthermore, identifying the correct object category on ModelNet10 from sparse observations is difficult for both AML and Sup. Finally, AML additionally has difficulties with exotic objects that are not well represented in the latent shape space, e.g., designer chairs.
Runtime: At low resolution, AML as well as the fully-supervised approaches Dai17 and Sup are particularly fast, requiring up to 2ms on an NVIDIA GeForce GTX TITAN using Torch (Collobert et al, 2011). Data-driven approaches (e.g., Eng16, ICP and ML), on the other hand, take considerably longer. Eng16, for instance, requires 168ms on average for completing the shape of a sparse LiDAR observation from KITTI using an Intel Xeon E5-2690 @ 2.6GHz and the multi-threaded Ceres solver (Agarwal et al, 2012). ICP and ML take longest, requiring up to 38s and 75s (not taking into account the point sampling process for the shapes), respectively. Except for Eng16 and ICP, all approaches scale with the used resolution and the employed architecture.

5 Conclusion
In this paper, we presented a novel, weakly-supervised learning-based approach to 3D shape completion from sparse and noisy point cloud observations. We used a (denoising) variational auto-encoder (Im et al, 2017; Kingma and Welling, 2014) to learn a latent space of shapes for one or multiple object categories using synthetic data from ShapeNet (Chang et al, 2015) or ModelNet (Wu et al, 2015). Based on the learned generative model, i.e., decoder, we formulated 3D shape completion as a maximum likelihood problem. In a second step, we then fixed the learned generative model and trained a new recognition model, i.e., encoder, to amortize, i.e., learn, the maximum likelihood problem. Thus, our Amortized Maximum Likelihood (AML) approach to 3D shape completion can be trained in a weakly-supervised fashion. Compared to related data-driven approaches, e.g., (Rock et al, 2015; Haene et al, 2014; Li et al, 2015; Engelmann et al, 2016, 2017; Nan et al, 2012; Bao et al, 2013; Dame et al, 2013; Nguyen et al, 2016), our approach offers fast inference at test time; in contrast to other learning-based approaches, e.g., (Riegler et al, 2017a; Smith and Meger, 2017; Dai et al, 2017; Sharma et al, 2016; Fan et al, 2017; Rezende et al, 2016; Yang et al, 2018; Wang et al, 2017; Varley et al, 2017; Han et al, 2017), we do not require full supervision during training. Both characteristics render our approach useful for robotic scenarios where full supervision is often not available, such as in autonomous driving, e.g., on KITTI (Geiger et al, 2012), or indoor robotics, e.g., on Kinect (Yang et al, 2018).
On two newly created synthetic shape completion benchmarks, derived from ShapeNet's cars and ModelNet10, as well as on real data from KITTI and Kinect, we demonstrated that AML outperforms related data-driven approaches (Engelmann et al, 2016; Gupta et al, 2015) while being significantly faster. We further showed that AML is able to compete with fully-supervised approaches (Dai et al, 2017), both quantitatively and qualitatively, while using only 3-10% supervision or less. In contrast to (Rock et al, 2015; Haene et al, 2014; Li et al, 2015; Engelmann et al, 2016, 2017; Nan et al, 2012; Bao et al, 2013; Dame et al, 2013), we additionally showed that AML is able to generalize across object categories without category supervision during training. On Kinect, we also demonstrated that our AML approach is able to generalize from very few training examples. In contrast to (Girdhar et al, 2016; Liu et al, 2017; Sharma et al, 2016; Wu et al, 2015; Dai et al, 2017; Firman et al, 2016; Han et al, 2017; Fan et al, 2017), we considered resolutions up to 48×108×48 and 64³ voxels as well as significantly sparser observations. Overall, our experiments demonstrate two key advantages of the proposed approach: significantly reduced runtime and increased performance compared to data-driven approaches, showing that amortizing inference is highly effective.
In future work, we would like to address several aspects of our AML approach. First, the shape prior is essential for weakly-supervised shape completion, as also noted by Gwak et al (2017); however, training expressive generative models in 3D is still difficult. Second, larger resolutions imply significantly longer training times; alternative shape representations and data structures such as point clouds (Qi et al, 2017a,b; Fan et al, 2017) or octrees (Riegler et al, 2017a,b; Häne et al, 2017) might be beneficial. Finally, jointly tackling pose estimation and shape completion seems promising (Engelmann et al, 2016).