
Learning Continuous Implicit Representation for Near-Periodic Patterns

  • Conference paper
  • Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Near-Periodic Patterns (NPP) are ubiquitous in man-made scenes and are composed of tiled motifs with appearance differences caused by lighting, defects, or design elements. A good NPP representation is useful for many applications, including image completion, segmentation, and geometric remapping. But representing NPP is challenging because it must maintain global consistency (the tiled motif layout) while preserving local variations (appearance differences). Methods trained on general scenes with a large dataset or via single-image optimization struggle to satisfy these constraints, while methods that explicitly model periodicity are not robust to periodicity-detection errors. To address these challenges, we learn a neural implicit representation using a coordinate-based MLP with single-image optimization. We design an input feature warping module and a periodicity-guided patch loss to handle both global consistency and local variations. To further improve robustness, we introduce a periodicity proposal module that searches for and uses multiple candidate periodicities in our pipeline. We demonstrate the effectiveness of our method on more than 500 images of building facades, friezes, wallpapers, ground, and Mondrian patterns in single- and multi-planar scenes.
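To make the abstract's pipeline concrete, below is a minimal, hypothetical sketch assuming PyTorch and a fixed list of candidate periodicities. None of it is the authors' implementation: the names NPPField and warp_coordinates, the sine/cosine phase warping, and the plain MSE reconstruction loss are illustrative stand-ins for the paper's input feature warping module, periodicity proposal module, and periodicity-guided patch loss.

```python
# Hypothetical sketch (not the authors' code): a coordinate-based MLP whose input
# coordinates are warped into periodic phase features before entering the network,
# then fit to a single image by reconstruction.
import math
import torch
import torch.nn as nn


def warp_coordinates(xy, periods):
    """Map 2D pixel coordinates onto the phase of each candidate periodicity.

    xy:      (N, 2) pixel coordinates.
    periods: (K, 2) candidate period lengths along x and y (illustrative).
    Returns (N, 4K) sine/cosine features that repeat with each candidate period.
    """
    feats = []
    for p in periods:                        # one candidate periodicity at a time
        phase = 2.0 * math.pi * xy / p       # (N, 2); wraps around every p pixels
        feats.append(torch.sin(phase))
        feats.append(torch.cos(phase))
    return torch.cat(feats, dim=-1)


class NPPField(nn.Module):
    """Coordinate-based MLP: warped periodic features -> RGB."""

    def __init__(self, in_dim, hidden=256, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
            d = hidden
        layers += [nn.Linear(d, 3), nn.Sigmoid()]    # RGB in [0, 1]
        self.net = nn.Sequential(*layers)

    def forward(self, xy, periods):
        return self.net(warp_coordinates(xy, periods))


# Single-image optimization loop (illustrative): reconstruct the observed pixels
# with an MSE loss, standing in for the paper's periodicity-guided patch loss.
if __name__ == "__main__":
    H, W = 64, 64
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).float().reshape(-1, 2)
    target = torch.rand(H * W, 3)                          # stand-in for the image
    periods = torch.tensor([[16.0, 16.0], [32.0, 32.0]])   # candidate periodicities

    model = NPPField(in_dim=4 * len(periods))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(coords, periods), target)
        loss.backward()
        opt.step()
```

In this sketch the warped features repeat with each candidate period, so the MLP can reuse one motif across the image while its remaining capacity absorbs local appearance variations.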

Notes

  1. Figure 3 is generated based on the final pipeline explained in Sect. 3.4.

Acknowledgement

This work was supported by a gift from Zillow Group, USA, and NSF Grants #CNS-2038612, #IIS-1900821.

Author information

Corresponding author

Correspondence to Bowei Chen.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 9392 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Chen, B., Zhi, T., Hebert, M., Narasimhan, S.G. (2022). Learning Continuous Implicit Representation for Near-Periodic Patterns. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13675. Springer, Cham. https://doi.org/10.1007/978-3-031-19784-0_31

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-19784-0_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19783-3

  • Online ISBN: 978-3-031-19784-0

  • eBook Packages: Computer Science, Computer Science (R0)
