"src/git@developer.sourcefind.cn:renzhc/diffusers_dcu.git" did not exist on "8e35ef0142cb8445c608105d06c53594085f8aed"
Unverified Commit 62e70d39 authored by Ruilong Li (李瑞龙), committed by GitHub

update ngp perf. (#68)

parent 76c0f981
@@ -12,8 +12,8 @@ Using NerfAcc,
- The `vanilla NeRF` model with 8-layer MLPs can be trained to *better quality* (+~0.5 PSNR)
  in *1 hour* rather than *days* as in the paper.
- The `Instant-NGP NeRF` model can be trained to *equal quality* in *4.5 minutes*,
  compared to the official pure-CUDA implementation.
- The `D-NeRF` model for *dynamic* objects can also be trained in *1 hour*
  rather than *2 days* as in the paper, and with *better quality* (+~2.5 PSNR).
- Both *bounded* and *unbounded* scenes are supported.
@@ -100,9 +100,9 @@ Before running those example scripts, please check the script about which dataset is needed, and download
the dataset first.
``` bash
# Instant-NGP NeRF in 4.5 minutes with reproduced performance!
# See results here: https://www.nerfacc.com/en/latest/examples/ngp.html
python examples/train_ngp_nerf.py --train_split train --scene lego
```
``` bash
...
```
@@ -32,5 +32,5 @@ single NVIDIA TITAN RTX GPU. The training memory footprint is about 11GB.
.. _`D-Nerf`: https://arxiv.org/abs/2011.13961
.. _`D-Nerf dataset`: https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0
.. _`github repository`: https://github.com/KAIR-BAIR/nerfacc/tree/76c0f9817da4c9c8b5ccf827eb069ee2ce854b75
@@ -7,29 +7,34 @@ See code `examples/train_ngp_nerf.py` at our `github repository`_ for details.
Benchmarks
------------

*updated on 2022-10-12*
Here we trained an `Instant-NGP Nerf`_ model on the `Nerf-Synthetic dataset`_. We follow the same
settings as the Instant-NGP paper, which uses the train split for training and the test split for
evaluation. All experiments are conducted on a single NVIDIA TITAN RTX GPU. The training
memory footprint is about 3GB.
.. note::

   The Instant-NGP paper makes use of the alpha channel in the images to apply random background
   augmentation during training. For a fair comparison, we reran their code with a constant white
   background during both training and testing. It is also worth mentioning that we did not strictly
   follow the training recipe in the Instant-NGP paper (e.g., the learning rate schedule), as
   the purpose of this benchmark is to showcase the acceleration rather than reproduce the paper.
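The white-background compositing mentioned in the note above can be sketched as follows. This is a minimal NumPy illustration, not code from the nerfacc repository: `composite_white` is a hypothetical helper, and it assumes float RGBA images with values in [0, 1].

``` python
import numpy as np

def composite_white(rgba: np.ndarray) -> np.ndarray:
    """Composite an RGBA image onto a constant white background.

    Hypothetical helper illustrating the constant-white-background setup
    (used here instead of the random-background augmentation from the
    Instant-NGP paper). Expects float RGBA in [0, 1], shape (H, W, 4).
    """
    rgb, alpha = rgba[..., :3], rgba[..., 3:4]
    # Standard alpha-over compositing: out = rgb * a + bg * (1 - a), with bg = 1.
    return rgb * alpha + (1.0 - alpha)
```

A fully transparent pixel becomes pure white, while a fully opaque pixel keeps its original color, so train and test renderings share the same background convention.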
+-----------------------+-------+-------+---------+-------+-------+-------+-------+-------+-------+
| PSNR                  | Lego  | Mic   |Materials| Chair |Hotdog | Ficus | Drums | Ship  | MEAN  |
|                       |       |       |         |       |       |       |       |       |       |
+=======================+=======+=======+=========+=======+=======+=======+=======+=======+=======+
|Instant-NGP 35k steps  | 35.87 | 36.22 | 29.08   | 35.10 | 37.48 | 30.61 | 23.85 | 30.62 | 32.35 |
+-----------------------+-------+-------+---------+-------+-------+-------+-------+-------+-------+
|(training time)        | 309s  | 258s  | 256s    | 316s  | 292s  | 207s  | 218s  | 250s  | 263s  |
+-----------------------+-------+-------+---------+-------+-------+-------+-------+-------+-------+
|Ours 20k steps         | 35.50 | 36.16 | 29.14   | 35.23 | 37.15 | 31.71 | 24.88 | 29.91 | 32.46 |
+-----------------------+-------+-------+---------+-------+-------+-------+-------+-------+-------+
|(training time)        | 287s  | 274s  | 269s    | 317s  | 269s  | 244s  | 249s  | 257s  | 271s  |
+-----------------------+-------+-------+---------+-------+-------+-------+-------+-------+-------+
.. _`Instant-NGP Nerf`: https://github.com/NVlabs/instant-ngp/tree/51e4107edf48338e9ab0316d56a222e0adf87143
.. _`github repository`: https://github.com/KAIR-BAIR/nerfacc/tree/76c0f9817da4c9c8b5ccf827eb069ee2ce854b75
.. _`Nerf-Synthetic dataset`: https://drive.google.com/drive/folders/1JDdLGDruGNXWnM1eqY1FNL9PlStjaKWi
@@ -40,4 +40,4 @@ that takes from `MipNerf360`_.
.. _`Instant-NGP Nerf`: https://arxiv.org/abs/2201.05989
.. _`MipNerf360`: https://arxiv.org/abs/2111.12077
.. _`Nerf++`: https://arxiv.org/abs/2010.07492
.. _`github repository`: https://github.com/KAIR-BAIR/nerfacc/tree/76c0f9817da4c9c8b5ccf827eb069ee2ce854b75
@@ -29,5 +29,5 @@ conducted on a single NVIDIA TITAN RTX GPU. The training memory footprint is about
| Ours (Training time)| 58min | 53min | 46min | 62min | 56min | 42min | 52min | 49min | 52min |
+----------------------+-------+-------+---------+-------+-------+-------+-------+-------+-------+
.. _`github repository`: https://github.com/KAIR-BAIR/nerfacc/tree/76c0f9817da4c9c8b5ccf827eb069ee2ce854b75
.. _`vanilla Nerf`: https://arxiv.org/abs/2003.08934
@@ -8,8 +8,8 @@ Using NerfAcc,
- The `vanilla Nerf`_ model with 8-layer MLPs can be trained to *better quality* (+~0.5 PSNR) \
  in *1 hour* rather than *1~2 days* as in the paper.
- The `Instant-NGP Nerf`_ model can be trained to *equal quality* in *4.5 minutes*, \
  compared to the official pure-CUDA implementation.
- The `D-Nerf`_ model for *dynamic* objects can also be trained in *1 hour* \
  rather than *2 days* as in the paper, and with *better quality* (+~2.5 PSNR).
- Both *bounded* and *unbounded* scenes are supported.
...