We present GaussianPOP, a new simplification framework based on analytical Gaussian error quantification. Our key contribution is an error criterion, derived directly from the 3DGS rendering equation, that precisely measures each Gaussian's contribution to the rendered image. The framework is both accurate and flexible, supporting (1) on-training pruning as well as (2) post-training simplification via iterative error re-quantification for improved stability.
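To make the pruning step concrete, here is a minimal sketch of score-based Gaussian pruning. The actual error criterion is derived in the paper from the 3DGS rendering equation; the function below only illustrates the generic final step of any such method, i.e. dropping the Gaussians with the smallest per-Gaussian scores. The function name and the toy scores are hypothetical.

```python
import numpy as np

def prune_by_error_score(scores: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Return a boolean keep-mask that drops the lowest-scoring Gaussians.

    `scores` holds one error/importance value per Gaussian (higher means a
    larger contribution to the rendered image); `prune_ratio` is the
    fraction of Gaussians to remove.
    """
    n = scores.shape[0]
    n_prune = int(n * prune_ratio)
    keep = np.ones(n, dtype=bool)
    if n_prune == 0:
        return keep
    # argpartition places the n_prune smallest scores first; those are pruned.
    prune_idx = np.argpartition(scores, n_prune)[:n_prune]
    keep[prune_idx] = False
    return keep

# Toy usage: 8 Gaussians, prune the 50% with the lowest scores.
scores = np.array([0.9, 0.1, 0.5, 0.05, 0.8, 0.3, 0.7, 0.2])
keep = prune_by_error_score(scores, 0.5)
```

In practice the mask would be applied to the model's position, covariance, opacity, and color tensors, and (for post-training simplification) the scores would be re-quantified between pruning rounds.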
On-training pruning results, trained from scratch with our pruning criterion (40k total iterations). The visual comparisons show that our method maintains significantly higher fidelity even at extreme pruning ratios.
Post-training pruning results, starting from a pre-trained PLY file to reuse existing models, evaluated against LightGS (5k total fine-tuning iterations).
@misc{lee2026gaussianpopprincipledsimplificationframework,
title={GaussianPOP: Principled Simplification Framework for Compact 3D Gaussian Splatting via Error Quantification},
author={Soonbin Lee and Yeong-Gyu Kim and Simon Sasse and Tomas M. Borges and Yago Sanchez and Eun-Seok Ryu and Thomas Schierl and Cornelius Hellge},
year={2026},
eprint={2602.06830},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2602.06830},
}