nerfacc: Commit 2ff8229a (unverified)
Authored Sep 28, 2022 by Ruilong Li (李瑞龙); committed by GitHub on Sep 28, 2022

Docs (#36)

* update doc and bump version
* update docs with final scores

Parent: c0cb5e22
Showing 11 changed files with 86 additions and 35 deletions (+86 -35)
- README.md (+4 -3)
- docs/source/apis/grid.rst (+2 -0)
- docs/source/conf.py (+1 -1)
- docs/source/examples/dnerf.rst (+13 -1)
- docs/source/examples/ngp.rst (+9 -5)
- docs/source/examples/unbounded.rst (+32 -10)
- docs/source/examples/vanilla.rst (+13 -2)
- docs/source/index.rst (+7 -8)
- examples/datasets/nerf_360_v2.py (+2 -2)
- examples/radiance_fields/mlp.py (+1 -1)
- nerfacc/contraction.py (+2 -2)
README.md (view file @ 2ff8229a)

...
@@ -2,22 +2,23 @@

[ ](https://github.com/KAIR-BAIR/nerfacc/actions/workflows/code_checks.yml)
[ ](https://plenoptix-nerfacc.readthedocs-hosted.com/en/latest/?badge=latest)

This is a **tiny** toolbox for **accelerating** NeRF training & rendering using PyTorch CUDA extensions. Plug-and-play for most of the NeRFs!

## Examples:

```bash
# Instant-NGP Nerf
python examples/train_ngp_nerf.py --train_split trainval --scene lego
```

```bash
# Vanilla MLP Nerf
python examples/train_mlp_nerf.py --train_split train --scene lego
```

```bash
# MLP Nerf on Dynamic objects (D-Nerf)
python examples/train_mlp_dnerf.py --train_split train --scene lego
```
...
docs/source/apis/grid.rst (view file @ 2ff8229a)
.. _`Occupancy Grid`:
Occupancy Grid
===================================
...
...
docs/source/conf.py (view file @ 2ff8229a)

...
@@ -8,7 +8,7 @@ project = "nerfacc"

copyright = "2022, Ruilong"
author = "Ruilong"
release = "0.1.1"
version = "0.1.1"

# -- General configuration
...
docs/source/examples/dnerf.rst (view file @ 2ff8229a)

Dynamic Scene
====================

Here we trained an 8-layer MLP for the radiance field and a 4-layer MLP for the warping field
(similar to the T-Nerf model in the `D-Nerf`_ paper) on the `D-Nerf dataset`_. We used the train
split for training and the test split for evaluation. Our experiments are conducted on a
single NVIDIA TITAN RTX GPU.

Note:
    The :ref:`Occupancy Grid` used in this example is shared by all the frames. In other words,
    instead of using it to indicate the opacity of an area at a single timestamp, here we use it
    to indicate the `maximum` opacity of this area `over all the timestamps`. It is not optimal
    but still makes the rendering very efficient.

+----------------------+----------+---------+-------+---------+-------+--------+---------+-------+-------+
|                      | bouncing | hell    | hook  | jumping | lego  | mutant | standup | trex  | AVG   |
|                      | balls    | warrior |       | jacks   |       |        |         |       |       |
+======================+==========+=========+=======+=========+=======+========+=========+=======+=======+
| D-Nerf (PSNR: ~2day) | 38.93    | 25.02   | 29.25 | 32.80   | 21.64 | 31.29  | 32.79   | 31.75 | 30.43 |
+----------------------+----------+---------+-------+---------+-------+--------+---------+-------+-------+
| Ours (PSNR: ~50min)  | 39.60    | 22.41   | 30.64 | 29.79   | 24.75 | 35.20  | 34.50   | 31.83 | 31.09 |
+----------------------+----------+---------+-------+---------+-------+--------+---------+-------+-------+
| Ours (Training time) | 45min    | 49min   | 51min | 46min   | 53min | 57min  | 49min   | 46min | 50min |
+----------------------+----------+---------+-------+---------+-------+--------+---------+-------+-------+

.. _`D-Nerf`: https://arxiv.org/abs/2104.00677
.. _`D-Nerf dataset`: https://www.dropbox.com/s/0bf6fl0ye2vz3vr/data.zip?dl=0
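The shared-grid idea in the note above can be sketched in a few lines: keep a single grid whose cell values are the running maximum of opacity over every timestamp, so a cell is skipped only if it is empty at all times. The helper name and toy numbers below are illustrative, not nerfacc's actual API:

```python
# Sketch: one occupancy grid shared across all timestamps of a dynamic
# scene. Each cell stores the MAXIMUM opacity seen at any timestamp, so
# a cell is skipped only if it is empty at *every* time (conservative).
# Illustration only -- not nerfacc's real API.

def update_shared_grid(grid, opacities_at_t):
    """Fold one timestamp's per-cell opacities into the shared grid."""
    return [max(g, o) for g, o in zip(grid, opacities_at_t)]

# Toy 4-cell grid observed at three timestamps.
grid = [0.0, 0.0, 0.0, 0.0]
for opacities in ([0.0, 0.9, 0.0, 0.0],   # t=0: surface in cell 1
                  [0.0, 0.0, 0.8, 0.0],   # t=1: surface moved to cell 2
                  [0.0, 0.1, 0.0, 0.0]):  # t=2: back near cell 1
    grid = update_shared_grid(grid, opacities)

occupied = [o > 0.01 for o in grid]
print(grid)      # [0.0, 0.9, 0.8, 0.0]
print(occupied)  # [False, True, True, False] -- cells 0 and 3 are always skippable
```

Cells 1 and 2 stay marked occupied even at timestamps where they are empty, which is exactly the "not optimal but still efficient" trade-off the note describes.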
docs/source/examples/ngp.rst (view file @ 2ff8229a)

.. _`Instant-NGP Example`:

Instant-NGP
====================

...
@@ -6,25 +8,27 @@ See code `examples/train_ngp_nerf.py` at our `github repository`_ for details.

Benchmarks
------------

Here we trained an `Instant-NGP Nerf`_ model on the `Nerf-Synthetic dataset`_. We follow the
same settings as the Instant-NGP paper, which uses the trainval split for training and the test
split for evaluation. Our experiments are conducted on a single NVIDIA TITAN RTX GPU. The
training memory footprint is about 3GB.

.. note::
    The Instant-NGP paper makes use of the alpha channel in the images to apply random background
    augmentation during training, yet we only use RGB values with a constant white background.

+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+--------+
|                      | Lego  | Mic   | Materials  | Chair | Hotdog | Ficus  | Drums  | Ship   | AVG    |
|                      |       |       |            |       |        |        |        |        |        |
+======================+=======+=======+============+=======+========+========+========+========+========+
|Instant-NGP(PSNR:5min)| 36.39 | 36.22 | 29.78      | 35.00 | 37.40  | 33.51  | 26.02  | 31.10  | 33.18  |
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+--------+
| Ours (PSNR:4.5min)   | 36.71 | 36.78 | 29.06      | 36.10 | 37.88  | 32.07  | 25.83  | 31.39  | 33.23  |
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+--------+
| Ours (Training time) | 286s  | 251s  | 250s       | 311s  | 275s   | 254s   | 249s   | 255s   | 266s   |
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+--------+

.. _`Instant-NGP Nerf`: https://arxiv.org/abs/2103.13497
.. _`github repository`: https://github.com/KAIR-BAIR/nerfacc/
.. _`Nerf-Synthetic dataset`: https://drive.google.com/drive/folders/1JDdLGDruGNXWnM1eqY1FNL9PlStjaKWi
docs/source/examples/unbounded.rst (view file @ 2ff8229a)

Unbounded Scene
====================

Here we trained an `Instant-NGP Nerf`_ model on the `MipNerf360`_ dataset. We used the train
split for training and the test split for evaluation. Our experiments are conducted on a
single NVIDIA TITAN RTX GPU.

The main difference between working with unbounded scenes and bounded scenes is that
a contraction method is needed to map the infinite space into a finite :ref:`Occupancy Grid`.
We provide different options for this (see :ref:`Occupancy Grid`). The experiments here are
basically the Instant-NGP experiments (see :ref:`Instant-NGP Example`) with a contraction
method taken from `MipNerf360`_.

.. note::
    Even though we are comparing with `Nerf++`_ and `MipNerf360`_, our model and setup are
    entirely different from theirs. There are plenty of ideas from those papers that would be
    very helpful for the performance, but we didn't adopt them, as this is just a simple
    example to show how to use the library and we didn't want to make it too complicated.

+----------------------+-------+-------+------------+-------+--------+--------+--------+
|                      |Garden |Bicycle| Bonsai     |Counter|Kitchen | Room   | Stump  |
|                      |       |       |            |       |        |        |        |
+======================+=======+=======+============+=======+========+========+========+
|Nerf++(PSNR:~days)    | 24.32 | 22.64 | 29.15      | 26.38 | 27.80  | 28.87  | 24.34  |
+----------------------+-------+-------+------------+-------+--------+--------+--------+
|MipNerf360(PSNR:~days)| 26.98 | 24.37 | 33.46      | 29.55 | 32.23  | 31.63  | 28.65  |
+----------------------+-------+-------+------------+-------+--------+--------+--------+
| Ours (PSNR:~1hr)     | 25.41 | 22.89 | 27.35      | 23.15 | 27.74  | 30.66  | 21.83  |
+----------------------+-------+-------+------------+-------+--------+--------+--------+
| Ours (Training time) | 40min | 35min | 47min      | 39min | 60min  | 41min  | 28min  |
+----------------------+-------+-------+------------+-------+--------+--------+--------+

.. _`Instant-NGP Nerf`: https://arxiv.org/abs/2103.13497
.. _`MipNerf360`: https://arxiv.org/abs/2111.12077
.. _`Nerf++`: https://arxiv.org/abs/2010.07492
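The contraction idea above can be sketched concretely. The MipNerf360 paper contracts points outside the unit ball so that all of space lands inside a ball of radius 2, which a finite grid can then cover. This is a minimal standalone sketch of that formula, not nerfacc's implementation (see ``nerfacc.ContractionType`` for the library's actual options):

```python
import math

# Sketch of a MipNerf360-style contraction: points inside the unit ball
# are left untouched; points outside are squashed so the whole infinite
# space lands inside a ball of radius 2. Illustration only.

def contract(x):
    """contract(x) = x                        if ||x|| <= 1
                   = (2 - 1/||x||) * x/||x||  otherwise"""
    norm = math.sqrt(sum(v * v for v in x))
    if norm <= 1.0:
        return list(x)
    scale = (2.0 - 1.0 / norm) / norm
    return [v * scale for v in x]

print(contract([0.5, 0.0, 0.0]))   # unchanged: [0.5, 0.0, 0.0]
print(contract([10.0, 0.0, 0.0]))  # ~[1.9, 0.0, 0.0], approaching radius 2
```

A point 10 units out maps to radius 1.9, a point 1000 units out to radius 1.999; the grid therefore only ever needs to cover the radius-2 ball.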
docs/source/examples/vanilla.rst (view file @ 2ff8229a)

...
@@ -6,16 +6,27 @@ See code `examples/train_mlp_nerf.py` at our `github repository`_ for details.

Benchmarks
------------

Here we trained an 8-layer MLP for the radiance field as in the `vanilla Nerf`_. We used the
train split for training and the test split for evaluation, as in the Nerf paper. Our
experiments are conducted on a single NVIDIA TITAN RTX GPU.

.. note::
    The vanilla Nerf paper uses two MLPs for coarse-to-fine sampling. Instead, here we only
    use a single MLP with more samples (1024). Both ways share the same spirit of dense
    sampling around the surface. Our fast rendering inherently skips samples away from the
    surface, so we can simply increase the number of samples with a single MLP to achieve the
    same goal as coarse-to-fine sampling, without runtime or memory issues.

+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+--------+
|                      | Lego  | Mic   | Materials  | Chair | Hotdog | Ficus  | Drums  | Ship   | AVG    |
|                      |       |       |            |       |        |        |        |        |        |
+======================+=======+=======+============+=======+========+========+========+========+========+
| NeRF (PSNR: ~ days)  | 32.54 | 32.91 | 29.62      | 33.00 | 36.18  | 30.13  | 25.01  | 28.65  | 31.00  |
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+--------+
| Ours (PSNR: ~50min)  | 33.69 | 33.76 | 29.73      | 33.32 | 35.80  | 32.52  | 25.39  | 28.18  | 31.55  |
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+--------+
| Ours (Training time) | 58min | 53min | 46min      | 62min | 56min  | 42min  | 52min  | 49min  | 52min  |
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+--------+

.. _`github repository`: https://github.com/KAIR-BAIR/nerfacc/
.. _`vanilla Nerf`: https://arxiv.org/abs/2003.08934
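The note's argument, that 1024 uniform samples are affordable because samples in empty space are skipped, can be made concrete with a toy ray march. The helper and the occupancy pattern below are illustrative, not nerfacc's API:

```python
# Toy sketch: of 1024 uniform samples along a ray, only those landing in
# occupied grid cells reach the MLP; the rest are skipped up front.
# Helper name, cell count, and occupancy pattern are illustrative only.

def keep_samples(ts, occupied, num_cells, near=0.0, far=1.0):
    """Keep only sample positions that land in occupied cells."""
    kept = []
    for t in ts:
        cell = min(int((t - near) / (far - near) * num_cells), num_cells - 1)
        if occupied[cell]:
            kept.append(t)
    return kept

num_cells = 8
# A thin surface occupying 2 of 8 cells along the ray.
occupied = [False, False, True, True, False, False, False, False]
ts = [i / 1024 for i in range(1024)]  # 1024 uniform samples
kept = keep_samples(ts, occupied, num_cells)
print(len(kept))  # 256 -- only a quarter of the samples reach the MLP
```

With most of the ray in empty space, the single MLP evaluates only a fraction of the 1024 nominal samples, which is why the dense uniform strategy stays within the runtime and memory budget of two-stage coarse-to-fine sampling.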
docs/source/index.rst (view file @ 2ff8229a)

...
@@ -5,14 +5,13 @@ NerfAcc is a PyTorch Nerf acceleration toolbox for both training and inference.

Using NerfAcc,

- The `vanilla Nerf`_ model with 8-layer MLPs can be trained to *better quality* (+~0.5 PSNR)
  in *1 hour* rather than *1~2 days* as in the paper.
- The `Instant-NGP Nerf`_ model can be trained to *equal quality* with *9/10th* of the
  training time (4.5 minutes) compared to the official pure-CUDA implementation.
- The `D-Nerf`_ model for *dynamic* objects can also be trained in *1 hour* rather than
  *2 days* as in the paper, and with *better quality* (+~0.5 PSNR).
- *Unbounded scenes* from `MipNerf360`_ can also be trained in *~1 hour* and get comparable
  quality to the paper.
- Both the *bounded* and *unbounded* scenes are supported.

*And it is a pure Python interface with flexible APIs!*

...
@@ -50,8 +49,8 @@ Installation:

    NeRFactory <https://plenoptix-nerfactory.readthedocs-hosted.com/>

.. _`vanilla Nerf`: https://arxiv.org/abs/2003.08934
.. _`Instant-NGP Nerf`: https://arxiv.org/abs/2103.13497
.. _`D-Nerf`: https://arxiv.org/abs/2104.00677
.. _`MipNerf360`: https://arxiv.org/abs/2111.12077
.. _`pixel-Nerf`: https://arxiv.org/abs/2012.02190
examples/datasets/nerf_360_v2.py (view file @ 2ff8229a)

...
@@ -53,7 +53,7 @@ def _load_colmap(root_fp: str, subject_id: str, split: str, factor: int = 1):
    # image names anymore.
    image_names = [imdata[k].name for k in imdata]
    # # Switch from COLMAP (right, down, fwd) to Nerf (right, up, back) frame.
    # poses = poses @ np.diag([1, -1, -1, 1])

    # Get distortion parameters.
...
@@ -96,7 +96,7 @@ def _load_colmap(root_fp: str, subject_id: str, split: str, factor: int = 1):
    assert params is None, "Only support pinhole camera model."

    # Previous Nerf results were generated with images sorted by filename,
    # ensure metrics are reported on the same test set.
    inds = np.argsort(image_names)
    image_names = [image_names[i] for i in inds]
...
examples/radiance_fields/mlp.py (view file @ 2ff8229a)

...
@@ -163,7 +163,7 @@ class NerfMLP(nn.Module):
class SinusoidalEncoder(nn.Module):
    """Sinusoidal Positional Encoder used in Nerf."""

    def __init__(self, x_dim, min_deg, max_deg, use_identity: bool = True):
        super().__init__()
...
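The class above is truncated by the diff, but the encoding it implements is Nerf's standard sinusoidal positional encoding: each coordinate x maps to sin(2^k x) and cos(2^k x) for k in [min_deg, max_deg), optionally keeping the raw coordinate. A minimal standalone sketch of that idea (not the repo's code, which operates on torch tensors):

```python
import math

# Minimal sketch of Nerf's sinusoidal positional encoding, the same
# idea as SinusoidalEncoder above but in pure Python. Each coordinate
# x contributes sin(2^k * x) and cos(2^k * x) for k in [min_deg,
# max_deg), optionally alongside the raw coordinate ("identity").

def sinusoidal_encode(x, min_deg, max_deg, use_identity=True):
    out = list(x) if use_identity else []
    for k in range(min_deg, max_deg):
        for v in x:
            out.append(math.sin((2.0 ** k) * v))
        for v in x:
            out.append(math.cos((2.0 ** k) * v))
    return out

enc = sinusoidal_encode([0.0, 1.0], min_deg=0, max_deg=4)
# Output dim = x_dim * (1 + 2 * (max_deg - min_deg)) with identity:
print(len(enc))  # 2 * (1 + 2 * 4) = 18
```

The frequency doubling per degree is what lets the downstream MLP represent high-frequency detail from low-dimensional coordinates.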
nerfacc/contraction.py (view file @ 2ff8229a)

...
@@ -27,7 +27,7 @@ class ContractionType(Enum):
    .. math:: f(x) = \\frac{1}{2}(tanh(\\frac{x - x_0}{x_1 - x_0} - \\frac{1}{2}) + 1)

    UN_BOUNDED_SPHERE: Contract an unbounded space into a unit sphere. Used in
        `Mip-Nerf 360: Unbounded Anti-Aliased Neural Radiance Fields`_.

    .. math::
        f(x) =
...
@@ -39,7 +39,7 @@ class ContractionType(Enum):
    .. math::
        z(x) = \\frac{x - x_0}{x_1 - x_0} * 2 - 1

    .. _Mip-Nerf 360\: Unbounded Anti-Aliased Neural Radiance Fields:
        https://arxiv.org/abs/2111.12077
    """
...
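A numeric sketch of the two formulas visible in the docstring above, with illustrative bounds x0 and x1 (this is not nerfacc's implementation, just the math evaluated directly):

```python
import math

# Numeric sketch of the two mappings from the docstring:
#   tanh contraction: f(x) = 0.5 * (tanh((x - x0)/(x1 - x0) - 0.5) + 1)
#   linear mapping:   z(x) = (x - x0)/(x1 - x0) * 2 - 1
# x0, x1 are the scene bounds along one axis; values here are illustrative.

x0, x1 = -1.0, 1.0

def f_tanh(x):
    return 0.5 * (math.tanh((x - x0) / (x1 - x0) - 0.5) + 1.0)

def z_linear(x):
    return (x - x0) / (x1 - x0) * 2.0 - 1.0

print(f_tanh(0.0))                    # 0.5 -- the box center maps to the middle
print(z_linear(-1.0), z_linear(1.0))  # -1.0 1.0 -- the box maps onto [-1, 1]
print(f_tanh(5.0) < 1.0)              # True -- points outside stay inside (0, 1)
```

The linear map only normalizes a bounded box, while the tanh contraction additionally squashes points far outside the box into (0, 1), which is what makes it suitable for unbounded scenes.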