
Frequency-importance gaussian splatting for real-time lightweight radiance field rendering

Published in Multimedia Tools and Applications

Abstract

Recently, there have been significant developments in novel view synthesis based on radiance fields. By incorporating the splatting technique, a new approach named Gaussian Splatting has achieved superior rendering quality and real-time performance. However, its training process incurs significant performance overhead, and the resulting model is very large. To address these challenges, we improve Gaussian Splatting and propose Frequency-Importance Gaussian Splatting. Our method reduces the performance overhead by extracting the frequency features of the scene. First, we analyze the advantages and limitations of the spatial sampling strategy of the Gaussian Splatting method from the perspective of sampling theory. Second, we design the Enhanced Gaussian to express high-frequency information more effectively while reducing the performance overhead. Third, we construct a frequency-sensitive loss function to enhance the network’s ability to perceive the frequency domain and to optimize the spatial structure of the scene. Finally, we propose a Dynamically Adaptive Density Control Strategy based on the degree of reconstruction of the scene background, which adapts the spatial sample point generation strategy dynamically according to the training results and prevents the generation of redundant data in the model. We conducted experiments on several commonly used datasets, and the results show that our method has significant advantages over the original method in terms of memory overhead and storage usage while maintaining its image quality.
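The frequency-sensitive loss is described only at a high level in this abstract. As a rough, hypothetical illustration of the general idea (not the authors' implementation), a spectrum-weighted reconstruction loss in PyTorch could be written as follows; the radial weighting scheme, the function name, and the high_freq_weight parameter are all assumptions made for this sketch.

```python
# Hypothetical sketch of a frequency-weighted image loss (illustrative only;
# not the implementation used in the paper). Assumes PyTorch tensors in [0, 1].
import torch

def frequency_weighted_loss(rendered, target, high_freq_weight=2.0):
    """L1 loss on the image spectrum, re-weighted so that high-frequency
    discrepancies contribute more strongly to the gradient.

    rendered, target: (B, C, H, W) tensors.
    high_freq_weight: emphasis on the highest frequencies (assumed value).
    """
    # 2D FFT of both images; compare their complex spectra.
    f_rendered = torch.fft.fft2(rendered, norm="ortho")
    f_target = torch.fft.fft2(target, norm="ortho")

    # Radial weight growing from 1 at DC to high_freq_weight at the band limit.
    h, w = rendered.shape[-2:]
    fy = torch.fft.fftfreq(h, device=rendered.device).abs().view(-1, 1)
    fx = torch.fft.fftfreq(w, device=rendered.device).abs().view(1, -1)
    radius = torch.sqrt(fx ** 2 + fy ** 2)
    weight = 1.0 + (high_freq_weight - 1.0) * radius / radius.max()

    # Weighted L1 on the magnitude of the spectral difference.
    return (weight * (f_rendered - f_target).abs()).mean()
```

In practice such a term would be combined with an ordinary image-space loss (e.g., L1 plus D-SSIM, as in the original Gaussian Splatting training) so that low-frequency content is still reconstructed faithfully.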


Data Availability

The datasets used in this study include the Mip-NeRF360 dataset, available at http://storage.googleapis.com/gresearch/refraw360/360_v2.zip, the Deep Blending dataset, accessible at http://www-sop.inria.fr/reves/publis/2018/HPPFDB18/datasets.html, the Tanks & Temples dataset, found at https://drive.google.com/file/d/11KRfN91W1AxAW6lOFs4EeYDbeoQZCi87/view?usp=sharing, and the Synthetic NeRF dataset, located in the nerf_synthetic folder at https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1. Original experimental data for this study can be obtained from the corresponding author upon reasonable request after the project’s completion.


Author information


Corresponding author

Correspondence to Xingquan Cai.

Ethics declarations

Conflicts of interest

The authors declare no financial or personal conflicts of interest related to this research. The publication of this study aims to disseminate scientific knowledge, and all results are based on objective experiments and analyses. We affirm the integrity and transparency of the research and writing process of this paper, which adheres to ethical guidelines.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chen, L., Hu, Y., Zhang, Y. et al. Frequency-importance gaussian splatting for real-time lightweight radiance field rendering. Multimed Tools Appl (2024). https://doi.org/10.1007/s11042-024-18679-x


  • DOI: https://doi.org/10.1007/s11042-024-18679-x
