Pixel-GS: Density Control with Pixel-Aware Gradient for 3D Gaussian Splatting

Conference paper · Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis results while achieving real-time rendering performance. However, its effectiveness heavily relies on the quality of the initial point cloud, as poor initialization results in blurring and needle-like artifacts. This issue stems mainly from the point cloud growth condition, which considers only the average gradient magnitude of points across observable views and therefore fails to grow large Gaussians that are observable from many viewpoints yet cover only boundary pixels in most of them. To address this, we introduce Pixel-GS, which takes the area covered by the Gaussian in each view into account when computing the growth condition. The covered area adaptively weights the gradients from different views, facilitating the growth of large Gaussians. Consequently, Gaussians in regions with insufficient initial points grow more effectively, leading to a more accurate and detailed reconstruction. In addition, we propose a simple yet effective strategy to suppress floaters near the camera by scaling the gradient field according to the distance to the camera. Extensive qualitative and quantitative experiments validate that our method achieves state-of-the-art rendering quality while maintaining real-time rendering on challenging datasets such as Mip-NeRF 360 and Tanks & Temples. Code and demo are available at: https://pixelgs.github.io.
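
As a rough illustration of the pixel-aware growth condition described above, the sketch below contrasts the uniform view averaging of vanilla 3DGS with an average weighted by the number of pixels each Gaussian covers per view. This is a minimal PyTorch sketch, not the authors' implementation: the function name and the dense (N, V) tensor layout are illustrative assumptions (real 3DGS codebases accumulate these statistics incrementally inside the CUDA rasterizer across training iterations), while 2e-4 is the default densification threshold from 3DGS.

```python
import torch

def pixel_aware_densify_mask(grad_norms: torch.Tensor,
                             pixel_counts: torch.Tensor,
                             tau: float = 2e-4) -> torch.Tensor:
    """Pixel-aware densification test (illustrative sketch).

    grad_norms   -- (N, V) norm of the view-space positional gradient of
                    each of N Gaussians in each of V views (0 if unseen).
    pixel_counts -- (N, V) number of pixels each Gaussian covers in each
                    view (0 if unseen).
    tau          -- densification threshold (3DGS default: 2e-4).
    Returns a boolean (N,) mask of Gaussians to clone or split.
    """
    # Vanilla 3DGS: uniform mean of grad_norms over visible views.
    # Pixel-GS: weight each view by the covered pixel area, so views that
    # see only the Gaussian's boundary contribute proportionally little,
    # letting large, widely observed Gaussians pass the threshold.
    weighted_sum = (grad_norms * pixel_counts).sum(dim=1)
    total_pixels = pixel_counts.sum(dim=1).clamp(min=1.0)  # guard against /0
    return weighted_sum / total_pixels > tau
```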
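
The floater-suppression strategy is stated only at the level of "scaling the gradient field according to the distance to the camera". A plausible reading, sketched below under that assumption, is to damp the positional gradients of Gaussians close to the camera with a ramp that saturates at 1 beyond a reference distance, in the spirit of gradient scaling for radiance fields (Philip and Deschaintre, EGSR 2023). The function name, the quadratic ramp, and the reference distance d0 are all hypothetical.

```python
import torch

def scale_grad_by_depth(pos_grad: torch.Tensor,
                        depth: torch.Tensor,
                        d0: float = 1.0) -> torch.Tensor:
    """Damp positional gradients of Gaussians near the camera (sketch).

    pos_grad -- (N, 3) positional gradients for the current view.
    depth    -- (N,) distance of each Gaussian center to the camera.
    d0       -- hypothetical reference distance; gradients of Gaussians
                farther than d0 are left untouched.
    """
    # Quadratic ramp saturating at 1: near-camera Gaussians receive small
    # gradients, so they neither move much nor trigger densification,
    # which suppresses floaters hovering in front of the camera.
    scale = torch.clamp((depth / d0) ** 2, max=1.0)
    return pos_grad * scale.unsqueeze(-1)
```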


Acknowledgement

This work is supported by the National Natural Science Foundation of China (No. 62201484), HKU Startup Fund, and HKU Seed Fund for Basic Research.

Author information

Corresponding author

Correspondence to Wenbo Hu.

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zhang, Z., Hu, W., Lao, Y., He, T., Zhao, H. (2025). Pixel-GS: Density Control with Pixel-Aware Gradient for 3D Gaussian Splatting. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15077. Springer, Cham. https://doi.org/10.1007/978-3-031-72655-2_19

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-72655-2_19

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72654-5

  • Online ISBN: 978-3-031-72655-2

  • eBook Packages: Computer Science, Computer Science (R0)
