Abstract
3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis results while achieving real-time rendering performance. However, the effectiveness of 3DGS heavily relies on the quality of the initial point cloud, as poor initialization can result in blurring and needle-like artifacts. This issue stems mainly from the point cloud growth condition, which considers only the average gradient magnitude of points across their observable views, and therefore fails to grow large Gaussians that are observable from many viewpoints but covered only at their boundaries in most of them. To address this, we introduce Pixel-GS, which takes the area covered by a Gaussian in each view into account when computing the growth condition. The covered area adaptively weights the gradients from different views, facilitating the growth of large Gaussians. Consequently, Gaussians in regions with insufficient initial points can grow more effectively, leading to a more accurate and detailed reconstruction. In addition, we propose a simple yet effective strategy to suppress floaters near the camera by scaling the gradient field according to the distance to the camera. Extensive qualitative and quantitative experiments on challenging datasets such as Mip-NeRF 360 and Tanks & Temples validate that our method achieves state-of-the-art rendering quality while maintaining real-time rendering. Code and demo are available at: https://pixelgs.github.io.
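To make the two ideas in the abstract concrete, the sketch below shows, in PyTorch-style Python, how a pixel-weighted growth condition and a depth-dependent gradient scale might look. This is a minimal illustration under our own assumptions: the function names, the dense (N, V) per-view buffers, the 0.0002 gradient threshold (the densification default in 3DGS), and the linear near/far ramp are ours, not the authors' released implementation; consult the code at https://pixelgs.github.io for the actual method.

```python
import torch

# --- Pixel-aware growth condition (Pixel-GS) --------------------------------
# Vanilla 3DGS averages the screen-space position gradient uniformly over the
# views in which a Gaussian is visible.  Pixel-GS instead weights each view's
# gradient by the number of pixels the Gaussian covers in that view, so large
# Gaussians that most views see only at their boundaries still accumulate
# enough signal to be cloned or split.

def pixel_aware_densify_mask(view_grads: torch.Tensor,
                             view_pixels: torch.Tensor,
                             grad_threshold: float = 0.0002) -> torch.Tensor:
    """view_grads:  (N, V) gradient magnitude of each Gaussian in each view
                    (0 where the Gaussian is not visible).
       view_pixels: (N, V) number of pixels each Gaussian covers per view.
       Returns a boolean (N,) mask of Gaussians selected for densification."""
    weighted_sum = (view_grads * view_pixels).sum(dim=1)
    total_pixels = view_pixels.sum(dim=1).clamp(min=1.0)  # avoid divide-by-zero
    avg_grad = weighted_sum / total_pixels                # pixel-weighted average
    return avg_grad > grad_threshold


# --- Depth-dependent gradient scaling (floater suppression) -----------------
# Gradients of Gaussians close to the camera are down-weighted so near-camera
# floaters receive less densification pressure; the linear schedule here is an
# assumption of this sketch.

def scale_grad_by_depth(grad: torch.Tensor, depth: torch.Tensor,
                        near: float = 0.2, far: float = 2.0) -> torch.Tensor:
    scale = ((depth - near) / (far - near)).clamp(0.0, 1.0)
    return grad * scale


if __name__ == "__main__":
    N, V = 1000, 8                                  # 1000 Gaussians, 8 views
    grads = torch.rand(N, V) * 1e-3                 # fake per-view gradients
    pixels = torch.randint(0, 500, (N, V)).float()  # fake coverage counts
    mask = pixel_aware_densify_mask(grads, pixels)
    print(f"{mask.sum().item()} / {N} Gaussians selected for growth")
```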
Acknowledgement
This work was supported by the National Natural Science Foundation of China (No. 62201484), the HKU Startup Fund, and the HKU Seed Fund for Basic Research.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, Z., Hu, W., Lao, Y., He, T., Zhao, H. (2025). Pixel-GS: Density Control with Pixel-Aware Gradient for 3D Gaussian Splatting. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15077. Springer, Cham. https://doi.org/10.1007/978-3-031-72655-2_19
DOI: https://doi.org/10.1007/978-3-031-72655-2_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72654-5
Online ISBN: 978-3-031-72655-2
eBook Packages: Computer Science, Computer Science (R0)