GaussianPOP: Principled Simplification Framework for Compact 3D Gaussian Splatting via Error Quantification

Soonbin Lee1, Yeong-Gyu Kim2, Simon Sasse1, Tomás M. Borges1, Yago Sánchez1
Eun-Seok Ryu2, Thomas Schierl1, Cornelius Hellge1
1Fraunhofer Heinrich-Hertz-Institute (HHI), Germany
2Sungkyunkwan University (SKKU), South Korea

TL;DR

We present GaussianPOP, a new simplification framework based on analytical Gaussian error quantification. Our key contribution is an error criterion, derived directly from the 3DGS rendering equation, that precisely measures each Gaussian's contribution to the rendered image. The framework is both accurate and flexible, supporting (1) on-training pruning as well as (2) post-training simplification via iterative error re-quantification for improved stability.
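The post-training mode re-quantifies errors between pruning rounds rather than cutting aggressively in a single pass. A minimal sketch of such a loop, assuming a user-supplied `score_fn`; the names `iterative_prune` and `score_fn` and the equal multiplicative schedule are illustrative, not the paper's exact procedure:

```python
import numpy as np

def iterative_prune(gaussians, score_fn, keep_ratio, rounds=4):
    """Keep `keep_ratio` of the primitives over several rounds,
    re-computing the error scores on the surviving set each round."""
    per_round = keep_ratio ** (1.0 / rounds)      # equal multiplicative cut per round
    for _ in range(rounds):
        scores = score_fn(gaussians)              # re-quantify errors on the current set
        k = max(1, int(round(len(gaussians) * per_round)))
        keep = np.argsort(scores)[::-1][:k]       # keep the highest-error (most visible) splats
        gaussians = [gaussians[i] for i in keep]
    return gaussians
```

Re-scoring after each round means the error of each remaining Gaussian is measured on the already-pruned model, which is what gives the iterative variant its stability at deep pruning ratios.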

Comparative analysis of pruning methods

The figure compares our method against state-of-the-art approaches in two key scenarios. The (1) on-training scenario integrates pruning into the training process from scratch, whereas the (2) post-training scenario applies simplification to a pre-trained model.

Key Insight

  1. Rendering Error Formulation: We derive an analytical error criterion directly from the 3DGS rendering equation to precisely quantify each Gaussian's visual contribution to the rendered image. Unlike heuristic importance scores, this metric provides a direct measure of the visual error induced by removing a specific primitive.
  2. Versatile Pruning Score: GaussianPOP consistently outperforms state-of-the-art importance-based methods (LightGaussian, MiniSplatting) in both on-training and post-training scenarios. By accurately identifying visually redundant information, it achieves a superior trade-off between model compactness and high rendering quality across diverse datasets.
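For a single ray, the 3DGS composite is C = Σᵢ cᵢ αᵢ Πⱼ₍ⱼ<ᵢ₎ (1 − αⱼ), so the exact color error induced by deleting one Gaussian can be obtained by re-compositing without it. A minimal per-ray NumPy sketch of this idea, ignoring the sorted-splat bookkeeping of a real rasterizer; function names are illustrative:

```python
import numpy as np

def composite(colors, alphas):
    """Front-to-back alpha compositing: C = sum_i c_i * alpha_i * prod_{j<i}(1 - alpha_j)."""
    T = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))  # transmittance before each splat
    return np.sum(colors * (alphas * T)[:, None], axis=0)

def removal_error(colors, alphas, i):
    """Exact per-ray L1 color error induced by dropping Gaussian i from the blend."""
    full = composite(colors, alphas)
    pruned = composite(np.delete(colors, i, axis=0), np.delete(alphas, i))
    return np.abs(full - pruned).sum()
```

Accumulating this removal error over training rays yields a pruning score that directly reflects visual impact, in contrast to opacity- or hit-count-based heuristics.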

Error distribution plots A and B

Post-training pruning results starting from pre-trained models, evaluated against various importance scores. The plots show PSNR versus pruning ratio for our method both without fine-tuning (post-prune) and with fine-tuning (5,000 iterations). The results confirm that our error quantification maintains significantly higher fidelity even without retraining, highlighting its accuracy in identifying visually redundant Gaussians.

Visual Comparisons

Ours, 0.19M
3DGS, 3.26M
Ours, 0.39M
3DGS, 4.42M

On-training pruning results starting from scratch with our pruning metric (40k total iterations). The visual comparisons show that our method maintains significantly higher fidelity even at extreme pruning ratios.

Ours, 80% Pruned+FT
LightGS, 80% Pruned+FT
Ours, 80% Pruned+FT
LightGS, 80% Pruned+FT

Post-training pruning results starting from a pre-trained .ply file, reusing an existing model, evaluated against LightGS (5k total fine-tuning iterations).

BibTeX

@misc{lee2026gaussianpopprincipledsimplificationframework,
      title={GaussianPOP: Principled Simplification Framework for Compact 3D Gaussian Splatting via Error Quantification}, 
      author={Soonbin Lee and Yeong-Gyu Kim and Simon Sasse and Tomas M. Borges and Yago Sanchez and Eun-Seok Ryu and Thomas Schierl and Cornelius Hellge},
      year={2026},
      eprint={2602.06830},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.06830}, 
}