Unverified commit f140e483 authored by Ruilong Li (李瑞龙), committed by GitHub

Docs (#33)

* ngp doc

* fix doc title

* benchmark ngp is done.

* finer training schedule for mlp-based nerfs

* slight doc update

* update index
parent b2a0170a
@@ -14,17 +14,17 @@ memory footprint is about 3GB.
 .. note::
     The paper makes use of the alpha channel in the images to apply random background
-    augmentation during training. Yet we only uses a constant white background.
+    augmentation during training. Yet we only use RGB values with a constant white background.
-+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+
-|                      | Lego  | Mic   | Materials  | Chair | Hotdog | Ficus  | Drums  | Ship   |
-+======================+=======+=======+============+=======+========+========+========+========+
-| Paper (PSNR: 5min)   | 36.39 | 36.22 | 29.78      | 35.00 | 37.40  | 33.51  | 26.02  | 31.10  |
-+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+
-| Ours (PSNR)          | 36.61 | 37.45 | 30.15      | 36.10 | 37.88  | 32.07  | 25.83  |        |
-+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+
-| Ours (Training time) | 300s  | 272s  | 258s       | 311s  | 275s   | 254s   | 249s   |        |
-+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+
++----------------------+-------+-------+------------+-------+--------+--------+--------+--------+-------+
+|                      | Lego  | Mic   | Materials  | Chair | Hotdog | Ficus  | Drums  | Ship   | AVG   |
++======================+=======+=======+============+=======+========+========+========+========+=======+
+| Paper (PSNR: 5min)   | 36.39 | 36.22 | 29.78      | 35.00 | 37.40  | 33.51  | 26.02  | 31.10  | 33.18 |
++----------------------+-------+-------+------------+-------+--------+--------+--------+--------+-------+
+| Ours (PSNR: 4.5min)  | 36.71 | 36.78 | 29.06      | 36.10 | 37.88  | 32.07  | 25.83  | 31.39  | 33.23 |
++----------------------+-------+-------+------------+-------+--------+--------+--------+--------+-------+
+| Ours (Training time) | 286s  | 251s  | 250s       | 311s  | 275s   | 254s   | 249s   | 255s   | 266s  |
++----------------------+-------+-------+------------+-------+--------+--------+--------+--------+-------+
 .. _`github repository`: https://github.com/KAIR-BAIR/nerfacc/
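As a quick sanity check on the AVG column added in the updated table, the averages can be recomputed from the per-scene values (a plain-Python sketch of ours; the values are copied from the new side of the table):

```python
# Per-scene values from the updated instant-ngp benchmark table.
psnrs = [36.71, 36.78, 29.06, 36.10, 37.88, 32.07, 25.83, 31.39]
times = [286, 251, 250, 311, 275, 254, 249, 255]  # training time in seconds

avg_psnr = sum(psnrs) / len(psnrs)
avg_time = sum(times) / len(times)
print(round(avg_psnr, 2))  # 33.23, matching the AVG column
print(round(avg_time))     # 266, matching the AVG column
```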
Vanilla Nerf
====================
See code `examples/train_mlp_nerf.py` at our `github repository`_ for details.
Benchmarks
------------
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+
| | Lego | Mic | Materials |Chair |Hotdog | Ficus | Drums | Ship |
| | | | | | | | | |
+======================+=======+=======+============+=======+========+========+========+========+
| Paper (PSNR: 5min) | 32.54 | 32.91 | 29.62 | 33.00 | 36.18 | 30.13 | 25.01 | 28.65 |
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+
| Ours (PSNR) | XX.XX | XX.XX | XX.XX | XX.XX | XX.XX | XX.XX | XX.XX | XX.XX |
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+
| Ours (Training time)| XXmin | XXmin | XXmin | 45min | XXmin | XXmin | 41min | XXmin |
+----------------------+-------+-------+------------+-------+--------+--------+--------+--------+
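The tables above report PSNR, which for images normalized to [0, 1] is a simple function of the mean squared error. A minimal sketch (the `psnr` helper is illustrative, not part of the repository):

```python
import math

def psnr(mse: float, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) for a given mean squared error."""
    return 10.0 * math.log10(max_val ** 2 / mse)

print(psnr(1e-3))  # ~30 dB; lower MSE gives higher PSNR
```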
.. _`github repository`: https://github.com/KAIR-BAIR/nerfacc/
@@ -7,8 +7,8 @@ Using NerfAcc,
 - The `vanilla Nerf model`_ with 8-layer MLPs can be trained to *better quality* (+~1.0 PSNR) \
   in *45 minutes* rather than *1~2 days* as in the paper.
-- The `instant-ngp Nerf model`_ can be trained to *better quality* (+~1.0 PSNR) \
-  in *5 minutes* compared to the paper.
+- The `instant-ngp Nerf model`_ can be trained to *equal quality* in *9/10th* of the training time \
+  compared to the official pure-CUDA implementation.
 - The `D-Nerf model`_ for *dynamic* objects can also be trained in *45 minutes* \
   rather than *2 days* as in the paper, and with *better quality* (+~2.0 PSNR).
 - *Unbounded scenes* from `MipNerf360`_ can also be trained in \
......
@@ -72,16 +72,20 @@ if __name__ == "__main__":
     ).item()
 
     # setup the radiance field we want to train.
-    max_steps = 40000
+    max_steps = 50000
     grad_scaler = torch.cuda.amp.GradScaler(1)
     radiance_field = DNeRFRadianceField().to(device)
     optimizer = torch.optim.Adam(radiance_field.parameters(), lr=5e-4)
     scheduler = torch.optim.lr_scheduler.MultiStepLR(
         optimizer,
-        milestones=[max_steps // 2, max_steps * 3 // 4, max_steps * 9 // 10],
+        milestones=[
+            max_steps // 2,
+            max_steps * 3 // 4,
+            max_steps * 5 // 6,
+            max_steps * 9 // 10,
+        ],
         gamma=0.33,
     )
 
     # setup the dataset
     data_root_fp = "/home/ruilongli/data/dnerf/"
     target_sample_batch_size = 1 << 16
......
@@ -87,13 +87,18 @@ if __name__ == "__main__":
     ).item()
 
     # setup the radiance field we want to train.
-    max_steps = 40000
+    max_steps = 50000
     grad_scaler = torch.cuda.amp.GradScaler(1)
     radiance_field = VanillaNeRFRadianceField().to(device)
     optimizer = torch.optim.Adam(radiance_field.parameters(), lr=5e-4)
     scheduler = torch.optim.lr_scheduler.MultiStepLR(
         optimizer,
-        milestones=[max_steps // 2, max_steps * 3 // 4, max_steps * 9 // 10],
+        milestones=[
+            max_steps // 2,
+            max_steps * 3 // 4,
+            max_steps * 5 // 6,
+            max_steps * 9 // 10,
+        ],
         gamma=0.33,
     )
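The MultiStepLR schedule in these scripts multiplies the learning rate by `gamma` at each milestone, and the commit inserts an extra milestone at `max_steps * 5 // 6` for a finer decay. A torch-free sketch of the resulting step-to-lr mapping (the `lr_at` helper is ours and only mirrors `torch.optim.lr_scheduler.MultiStepLR` semantics):

```python
import bisect

def lr_at(step: int, base_lr: float = 5e-4, gamma: float = 0.33,
          max_steps: int = 50000) -> float:
    """Learning rate after `step` scheduler steps under the milestone schedule."""
    milestones = [
        max_steps // 2,       # 25000
        max_steps * 3 // 4,   # 37500
        max_steps * 5 // 6,   # 41666, the newly inserted finer stage
        max_steps * 9 // 10,  # 45000
    ]
    # Each milestone passed so far multiplies the learning rate by gamma once.
    return base_lr * gamma ** bisect.bisect_right(milestones, step)

print(lr_at(0))      # 0.0005 (the initial lr)
print(lr_at(40000))  # two decays applied: 5e-4 * 0.33**2
```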
......