
PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation

  • Conference paper
  • Published in: Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13663)

Abstract

This paper introduces a data-driven shape completion approach that focuses on completing geometric details of missing regions of 3D shapes. We observe that existing generative methods lack the training data and representation capacity to synthesize plausible, fine-grained details with complex geometry and topology. Our key insight is to copy and deform patches from the partial input to complete missing regions. This enables us to preserve the style of local geometric features, even if it drastically differs from the training data. Our fully automatic approach proceeds in two stages. First, we learn to retrieve candidate patches from the input shape. Second, we select and deform some of the retrieved candidates to seamlessly blend them into the complete shape. This method combines the advantages of the two most common completion methods: similarity-based single-instance completion, and completion by learning a shape space. We leverage repeating patterns by retrieving patches from the partial input, and learn global structural priors by using a neural network to guide the retrieval and deformation steps. Experimental results show our approach considerably outperforms baselines across multiple datasets and shape categories. Code and data are available at https://github.com/GitBoSun/PatchRD.
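To make the two-stage pipeline concrete, the sketch below illustrates the retrieve-and-deform idea in Python/NumPy. It is not the authors' implementation: PatchRD learns the retrieval metric, patch selection, and deformation/blending with neural networks, whereas this sketch uses hand-crafted stand-ins (L2 nearest-patch retrieval and hard pasting). The helper names, patch size, and grid resolution are assumptions made only for illustration.

```python
# Minimal illustrative sketch of retrieve-and-deform shape completion.
# NOT the authors' code: all names and parameters here are assumptions.
import numpy as np

PATCH = 8  # patch edge length in voxels (assumed)

def extract_patches(vox, stride=4):
    """Slide a window over a cubic occupancy grid; keep non-empty patches."""
    patches, coords = [], []
    d = vox.shape[0]
    for x in range(0, d - PATCH + 1, stride):
        for y in range(0, d - PATCH + 1, stride):
            for z in range(0, d - PATCH + 1, stride):
                p = vox[x:x + PATCH, y:y + PATCH, z:z + PATCH]
                if p.any():
                    patches.append(p.astype(np.float32))
                    coords.append((x, y, z))
    return patches, coords

def retrieve(query, candidates):
    """Return the candidate closest to the query (stand-in for learned retrieval)."""
    if not candidates:
        return np.zeros_like(query)
    dists = [float(np.sum((query - c) ** 2)) for c in candidates]
    return candidates[int(np.argmin(dists))]

def paste(patch, coord, out):
    """Hard-paste a retrieved patch; the real method instead predicts a
    deformation and blending weights so patches merge seamlessly."""
    x, y, z = coord
    out[x:x + PATCH, y:y + PATCH, z:z + PATCH] = np.maximum(
        out[x:x + PATCH, y:y + PATCH, z:z + PATCH], patch)

def complete(partial, coarse):
    """partial: observed occupancy grid; coarse: rough full-shape guess
    (in the paper this comes from a learned coarse completion stage)."""
    out = partial.astype(np.float32).copy()
    candidates, _ = extract_patches(partial)         # stage 1: patch bank from the input
    queries, query_coords = extract_patches(coarse)  # regions to fill with detail
    for q, c in zip(queries, query_coords):
        paste(retrieve(q, candidates), c, out)       # stage 2: retrieve + place
    return out

if __name__ == "__main__":
    grid = np.zeros((32, 32, 32), dtype=np.float32)
    grid[8:24, 8:24, 8:24] = 1.0                      # toy "complete" shape
    partial = grid.copy(); partial[:, :, 16:] = 0.0   # remove half of it
    print(complete(partial, grid).sum())
```

The point of the sketch is only the control flow: patches are retrieved from the observed part of the shape, so local style is preserved, while a coarse global estimate decides where they go.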



Author information

Corresponding author: Bo Sun.


Electronic supplementary material

Supplementary material 1 (PDF 9075 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sun, B., Kim, V.G., Aigerman, N., Huang, Q., Chaudhuri, S. (2022). PatchRD: Detail-Preserving Shape Completion by Learning Patch Retrieval and Deformation. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13663. Springer, Cham. https://doi.org/10.1007/978-3-031-20062-5_29


  • DOI: https://doi.org/10.1007/978-3-031-20062-5_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20061-8

  • Online ISBN: 978-3-031-20062-5

  • eBook Packages: Computer Science, Computer Science (R0)
