"projects/vscode:/vscode.git/clone" did not exist on "cdd2142dd531b4f983468bd9b158f4085ee57dd8"
README.md 5.38 KB
Newer Older
1
# NerfAcc
[![Core Tests.](https://github.com/KAIR-BAIR/nerfacc/actions/workflows/code_checks.yml/badge.svg)](https://github.com/KAIR-BAIR/nerfacc/actions/workflows/code_checks.yml)
[![Documentation Status](https://readthedocs.com/projects/plenoptix-nerfacc/badge/?version=latest)](https://www.nerfacc.com/en/latest/?badge=latest)

https://www.nerfacc.com/

NerfAcc is a PyTorch NeRF acceleration toolbox for both training and inference. It focuses on
efficient volumetric rendering of radiance fields, and it is universal and plug-and-play for most NeRFs.

Using NerfAcc, 

- The `vanilla NeRF` model with 8-layer MLPs can be trained to *better quality* (+~0.5 PSNR)
  in *1 hour* rather than *days* as in the paper.
- The `Instant-NGP NeRF` model can be trained to *better quality* (+~0.7 PSNR) with *9/10th* of
  the training time (4.5 minutes) compared to the official pure-CUDA implementation.
- The `D-NeRF` model for *dynamic* objects can also be trained in *1 hour*
  rather than *2 days* as in the paper, and with *better quality* (+~2.5 PSNR).
- Both *bounded* and *unbounded* scenes are supported.

**And it is a pure Python interface with flexible APIs!**

## Installation

``` bash
pip install nerfacc
```
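
To quickly check that the installation works, importing the package is enough. The snippet below is only a sanity-check sketch; it assumes the package exposes `__version__`, hence the `getattr` fallback.

``` python
# Sanity check: confirm nerfacc imports and CUDA is visible to PyTorch.
# `__version__` is assumed to exist; if not, the import alone is the check.
import torch
import nerfacc

print("nerfacc version:", getattr(nerfacc, "__version__", "unknown"))
print("CUDA available:", torch.cuda.is_available())
```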

## Usage

The idea of NerfAcc is to perform efficient ray marching and volumetric rendering, so NerfAcc can work with any user-defined radiance field. To plug the NerfAcc rendering pipeline into your code and enjoy the acceleration, you only need to define two functions for your radiance field.
- `sigma_fn`: Compute density at each sample. It will be used by `nerfacc.ray_marching()` to skip the empty and occluded space during ray marching, which is where the major speedup comes from. 
- `rgb_sigma_fn`: Compute color and density at each sample. It will be used by `nerfacc.rendering()` to conduct differentiable volumetric rendering. This function will receive gradients to update your network.

A simple example looks like this:

``` python
import torch
import torch.nn.functional as F
from typing import Tuple
from torch import Tensor
import nerfacc

radiance_field = ...  # network: a NeRF model
rays_o: Tensor = ...  # ray origins. (n_rays, 3)
rays_d: Tensor = ...  # ray normalized directions. (n_rays, 3)
optimizer = ...  # optimizer

def sigma_fn(
    t_starts: Tensor, t_ends: Tensor, ray_indices: Tensor
) -> Tensor:
    """ Query density values from a user-defined radiance field.
    :params t_starts: Start of the sample interval along the ray. (n_samples, 1).
    :params t_ends: End of the sample interval along the ray. (n_samples, 1).
    :params ray_indices: Ray indices that each sample belongs to. (n_samples,).
    :returns The post-activation density values. (n_samples, 1).
    """
    t_origins = rays_o[ray_indices]  # (n_samples, 3)
    t_dirs = rays_d[ray_indices]  # (n_samples, 3)
    positions = t_origins + t_dirs * (t_starts + t_ends) / 2.0
    sigmas = radiance_field.query_density(positions) 
    return sigmas  # (n_samples, 1)

def rgb_sigma_fn(
    t_starts: Tensor, t_ends: Tensor, ray_indices: Tensor
) -> Tuple[Tensor, Tensor]:
    """ Query rgb and density values from a user-defined radiance field.
    :params t_starts: Start of the sample interval along the ray. (n_samples, 1).
    :params t_ends: End of the sample interval along the ray. (n_samples, 1).
    :params ray_indices: Ray indices that each sample belongs to. (n_samples,).
    :returns The post-activation rgb and density values. 
        (n_samples, 3), (n_samples, 1).
    """
    t_origins = rays_o[ray_indices]  # (n_samples, 3)
    t_dirs = rays_d[ray_indices]  # (n_samples, 3)
    positions = t_origins + t_dirs * (t_starts + t_ends) / 2.0
    rgbs, sigmas = radiance_field(positions, condition=t_dirs)  
    return rgbs, sigmas  # (n_samples, 3), (n_samples, 1)

# Efficient Raymarching: Skip empty and occluded space, pack samples from all rays.
# packed_info: (n_rays, 2). t_starts: (n_samples, 1). t_ends: (n_samples, 1).
with torch.no_grad():
    packed_info, t_starts, t_ends = nerfacc.ray_marching(
        rays_o, rays_d, sigma_fn=sigma_fn, near_plane=0.2, far_plane=1.0, 
        early_stop_eps=1e-4, alpha_thre=1e-2, 
    )

# Differentiable Volumetric Rendering.
# colors: (n_rays, 3). opacity: (n_rays, 1). depth: (n_rays, 1).
color, opacity, depth = nerfacc.rendering(rgb_sigma_fn, packed_info, t_starts, t_ends)

# Optimize: Both the network and rays will receive gradients
optimizer.zero_grad()
loss = F.mse_loss(color, color_gt)
loss.backward()
optimizer.step()
```
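
The same two callbacks can be reused outside the training loop. Below is a minimal evaluation sketch (not part of the original example) that runs the identical ray marching and rendering calls under `torch.no_grad()` and reports PSNR; it assumes `rays_o`, `rays_d`, `sigma_fn`, `rgb_sigma_fn`, and a ground-truth `color_gt` tensor are defined as in the snippet above.

``` python
# Evaluation sketch: reuse the same pipeline without gradients.
# Assumes rays_o, rays_d, sigma_fn, rgb_sigma_fn, and color_gt (n_rays, 3)
# are defined exactly as in the training example above.
import torch
import torch.nn.functional as F
import nerfacc

with torch.no_grad():
    packed_info, t_starts, t_ends = nerfacc.ray_marching(
        rays_o, rays_d, sigma_fn=sigma_fn, near_plane=0.2, far_plane=1.0,
        early_stop_eps=1e-4, alpha_thre=1e-2,
    )
    color, opacity, depth = nerfacc.rendering(
        rgb_sigma_fn, packed_info, t_starts, t_ends
    )
    mse = F.mse_loss(color, color_gt)
    psnr = -10.0 * torch.log10(mse)  # PSNR for colors in [0, 1]
    print(f"PSNR: {psnr.item():.2f} dB")
```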

## Examples

Before running these example scripts, please check each script to see which dataset it requires, and
download that dataset first.

``` bash
# Instant-NGP NeRF in 4.5 minutes with better performance!
# See results here: https://www.nerfacc.com/en/latest/examples/ngp.html
python examples/train_ngp_nerf.py --train_split trainval --scene lego
```

``` bash
# Vanilla MLP NeRF in 1 hour with better performance!
# See results here: https://www.nerfacc.com/en/latest/examples/vanilla.html
python examples/train_mlp_nerf.py --train_split train --scene lego
```

```bash
# D-NeRF for Dynamic objects in 1 hour with better performance!
# See results here: https://www.nerfacc.com/en/latest/examples/dnerf.html
python examples/train_mlp_dnerf.py --train_split train --scene lego
```

```bash
# Instant-NGP on unbounded scenes in 20 minutes!
# See results here: https://www.nerfacc.com/en/latest/examples/unbounded.html
python examples/train_ngp_nerf.py --train_split train --scene garden --auto_aabb --unbounded --cone_angle=0.004
```