city-super / Octree-GS
- Wednesday, April 3, 2024 at 00:00:05
Octree-GS: Towards Consistent Real-time Rendering with LOD-Structured 3D Gaussians
Kerui Ren*, Lihan Jiang*, Tao Lu, Mulin Yu, Linning Xu, Zhangkai Ni, Bo Dai ✉
[2024.04.01] The viewer for Octree-GS is available now.
[2024.04.01] We release the code.
Inspired by Level-of-Detail (LOD) techniques, we introduce Octree-GS, an LOD-structured 3D Gaussian approach that supports level-of-detail decomposition of the scene representation used for rendering. Our model dynamically selects the appropriate level from a set of multi-resolution anchor points, ensuring consistent rendering performance through adaptive LOD adjustments while maintaining high-fidelity results.

We tested on a server configured with Ubuntu 18.04, CUDA 11.6 and GCC 9.4.0. Other similar configurations should also work, but we have not verified each one individually.
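The distance-based level selection described above can be sketched as follows. This is a conceptual illustration only: `select_lod_level`, `dist_max`, and the log2 rule are simplified stand-ins of ours, not the repository's exact implementation.

```python
import math

def select_lod_level(cam_dist: float, dist_max: float, num_levels: int) -> int:
    """Pick a finer LOD level for anchors closer to the camera (sketch).

    cam_dist:   distance from the camera to an anchor point
    dist_max:   distance at (or beyond) which the coarsest level 0 is used
    num_levels: number of LOD levels in the octree
    """
    if cam_dist <= 0:
        return num_levels - 1  # degenerate case: use the finest level
    level = math.floor(math.log2(dist_max / cam_dist))
    return max(0, min(num_levels - 1, level))
```

Halving the camera distance raises the selected level by one, so nearby regions draw from denser anchor sets while distant regions stay coarse.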
git clone https://github.com/city-super/Octree-GS --recursive
cd Octree-GS
SET DISTUTILS_USE_SDK=1 # Windows only
conda env create --file environment.yml
conda activate octree_gs
First, create a data/ folder inside the project path:
mkdir data
The data structure will be organised as follows:
data/
├── dataset_name
│   ├── scene1/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   └── ...
│   │   └── sparse/
│   │       └── 0/
│   ├── scene2/
│   │   ├── images
│   │   │   ├── IMG_0.jpg
│   │   │   ├── IMG_1.jpg
│   │   │   └── ...
│   │   └── sparse/
│   │       └── 0/
...
The MatrixCity dataset can be downloaded from Hugging Face / Openxlab / Baidu Netdisk [access code: hqnn].
The BungeeNeRF dataset is available on Google Drive / Baidu Netdisk [access code: 4whv]. The MipNeRF360 scenes are provided by the paper authors here. The SfM datasets for Tanks&Temples and Deep Blending are hosted by 3D-Gaussian-Splatting here. Download and uncompress them into the data/ folder.
For custom data, process the image sequences with COLMAP to obtain the SfM points and camera poses, then place the results in the data/ folder.
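As a hedged sketch of that COLMAP pass (the path is a placeholder; the flags are standard COLMAP CLI options), the following produces the images/ plus sparse/0/ layout expected above. The `command -v` guard simply skips reconstruction on machines where COLMAP is not installed.

```shell
SCENE=data/dataset_name/scene1   # placeholder: point this at your scene folder
mkdir -p "$SCENE/sparse"
if command -v colmap >/dev/null 2>&1; then
    # Detect and match features, then run incremental SfM mapping.
    colmap feature_extractor --database_path "$SCENE/database.db" --image_path "$SCENE/images"
    colmap exhaustive_matcher --database_path "$SCENE/database.db"
    colmap mapper --database_path "$SCENE/database.db" --image_path "$SCENE/images" --output_path "$SCENE/sparse"
fi
```

On success, COLMAP writes the reconstruction into $SCENE/sparse/0/, matching the data layout above.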
To train multiple scenes in parallel, we provide batch training scripts:
train_tandt.sh
train_mipnerf360.sh
train_bungeenerf.sh
train_db.sh

Run them with
bash train_xxx.sh
Notice 1: Make sure you have enough GPUs and GPU memory to run these scenes at the same time.
Notice 2: Each process occupies many CPU cores, which may slow down training. Set torch.set_num_threads(32) accordingly in train.py to alleviate this.
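A guarded version of that tweak is sketched below; the thread count and the try/except (so the snippet also runs where torch is absent) are our additions.

```python
NUM_THREADS = 32  # rough guide: total CPU cores / number of parallel trainings

try:
    import torch
    torch.set_num_threads(NUM_THREADS)  # cap intra-op CPU threads per process
    n_threads = torch.get_num_threads()
except ImportError:
    n_threads = NUM_THREADS  # torch unavailable in this environment; nothing to cap
```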
For training a single scene, modify the path and configurations in single_train.sh accordingly and run it:
bash single_train.sh
The data path should be dataset_name/scene_name/ or scene_name/. For these public datasets, the configurations of 'voxel_size' and 'fork' can follow the batch training scripts above.
This script automatically stores the log (together with a snapshot of the code used at run time) in outputs/dataset_name/scene_name/exp_name/cur_time.
We've integrated the rendering and metrics calculation into the training code, so when training completes, the rendering results, FPS and quality metrics are printed automatically, and the rendering results are saved in the log dir. Note that the FPS is roughly estimated by
torch.cuda.synchronize(); t_start = time.time()
# rendering...
torch.cuda.synchronize(); t_end = time.time()
which may differ somewhat from the original 3D-GS measurement, but this does not affect the analysis.
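The timing pattern above amounts to the helper below; `estimate_fps`, `render_fn`, and the pass-through `synchronize` default are our placeholders (pass `torch.cuda.synchronize` when timing actual GPU rendering).

```python
import time

def estimate_fps(render_fn, n_frames=100, synchronize=lambda: None):
    """Rough FPS estimate: synchronize, time n_frames renders, synchronize again."""
    synchronize()            # flush pending GPU work before starting the clock
    t_start = time.time()
    for _ in range(n_frames):
        render_fn()
    synchronize()            # wait for the last frame to finish before stopping
    t_end = time.time()
    return n_frames / (t_end - t_start)
```

Without the synchronize calls, asynchronous CUDA kernel launches would make the measured interval shorter than the actual rendering time.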
Meanwhile, we keep a manual rendering function whose usage is similar to its counterpart in 3D-GS; you can run it with
python render.py -m <path to trained model> # Generate renderings
python metrics.py -m <path to trained model> # Compute error metrics on renderings
Please follow the LICENSE of 3D-GS.
We thank all authors of 3D-GS and Scaffold-GS for presenting such excellent work.