OpenDAS / nerfacc · Commits

Commit a4414f1a, authored Sep 08, 2022 by Ruilong Li
example of use
parent f92d1910
Showing 3 changed files with 101 additions and 7 deletions (+101 −7)
.readthedocs.yaml       +12 −0
docs/source/index.rst    +3 −7
docs/source/usage.rst   +86 −0
.readthedocs.yaml  (new file, 0 → 100644)

version: 2

python:
  version: 3.9
  install:
    - requirements: requirements.txt
    - requirements: docs/requirements.txt
    - method: pip
      path: .

sphinx:
  fail_on_warning: true
(no newline at end of file)
docs/source/index.rst  (+3 −7)

-Welcome to nerfacc's documentation!
+Welcome to the nerfacc documentation!
 ===================================

-**Lumache** (/lu'make/) is a Python library for cooks and food lovers
-that creates recipes mixing random ingredients.
-It pulls data from the `Open Food Facts database <https://world.openfoodfacts.org/>`_
-and offers a *simple* and *intuitive* API.
+This is a fast differentiable volume rendering toolbox as a PyTorch extension for NeRF.

 Check out the :doc:`usage` section for further information, including
 how to :ref:`installation` the project.

...

@@ -18,5 +15,4 @@ Contents
 .. toctree::

-   usage
-   api
\ No newline at end of file
+   usage
\ No newline at end of file
docs/source/usage.rst  (new file, 0 → 100644)

Usage
=====

.. _installation:

Installation
------------

To use nerfacc, first install it using pip:

.. code-block:: console

   (.venv) $ pip install git+https://github.com/liruilong940607/nerfacc

Example of use
--------------

.. code-block:: python

   import math

   import torch
   import torch.nn.functional as F

   from nerfacc import OccupancyField, volumetric_rendering

   # Set up the scene bounding box.
   scene_aabb = torch.tensor([-1.5, -1.5, -1.5, 1.5, 1.5, 1.5]).cuda()

   # Set up the scene radiance field. Assume you have a NeRF model with
   # the following methods:
   #   - query_density(): {x} -> {density}
   #   - forward(): {x, dirs} -> {rgb, density}
   radiance_field = ...

   # Set up some rendering settings.
   render_n_samples = 1024
   render_bkgd = torch.ones(3).cuda()
   render_step_size = (
       (scene_aabb[3:] - scene_aabb[:3]).max() * math.sqrt(3) / render_n_samples
   )

   # Set up the occupancy field with an evaluation function.
   def occ_eval_fn(x: torch.Tensor) -> torch.Tensor:
       """Evaluate occupancy given positions.

       Args:
           x: positions with shape (N, 3).

       Returns:
           occupancy values with shape (N, 1).
       """
       density_after_activation = radiance_field.query_density(x)
       occupancy = density_after_activation * render_step_size
       return occupancy

   occ_field = OccupancyField(
       occ_eval_fn=occ_eval_fn, aabb=scene_aabb, resolution=128
   )

   # Training loop.
   for step in range(10_000):
       # Generate rays from data, along with the ground-truth pixel colors.
       rays = ...
       pixels = ...

       # Update the occupancy grid.
       occ_field.every_n_step(step)

       # Rendering.
       (
           accumulated_color,
           accumulated_depth,
           accumulated_weight,
           _,
       ) = volumetric_rendering(
           query_fn=radiance_field.forward,  # {x, dir} -> {rgb, density}
           rays_o=rays.origins,
           rays_d=rays.viewdirs,
           scene_aabb=scene_aabb,
           scene_occ_binary=occ_field.occ_grid_binary,
           scene_resolution=occ_field.resolution,
           render_bkgd=render_bkgd,
           render_n_samples=render_n_samples,
           # ... other kwargs for `query_fn` can be passed here.
       )

       # Compute the loss.
       loss = F.mse_loss(accumulated_color, pixels)
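The arithmetic behind the example's `render_step_size` and `occ_eval_fn` can be sanity-checked with plain Python. The compositing loop below is a schematic of the standard emission-absorption model that volumetric renderers implement, not nerfacc's actual implementation; the density value and per-sample alphas are made up for illustration:

```python
import math

# Step size from the example: longest AABB extent times sqrt(3),
# divided by the number of samples per ray.
aabb_min, aabb_max = -1.5, 1.5
render_n_samples = 1024
render_step_size = (aabb_max - aabb_min) * math.sqrt(3) / render_n_samples
# render_step_size is about 0.005074

# occ_eval_fn uses the linearization alpha = 1 - exp(-density * step)
# ≈ density * step, which is accurate when density * step is small.
density = 5.0  # hypothetical post-activation density
alpha_exact = 1.0 - math.exp(-density * render_step_size)
alpha_approx = density * render_step_size
assert abs(alpha_exact - alpha_approx) < 1e-3

# Standard alpha compositing along one ray: each sample's weight is
# the remaining transmittance times its alpha, and weights plus the
# leftover transmittance always sum to one.
alphas = [0.1, 0.5, 0.3]  # hypothetical per-sample alphas
transmittance = 1.0
weights = []
for a in alphas:
    weights.append(transmittance * a)
    transmittance *= 1.0 - a
assert abs(sum(weights) + transmittance - 1.0) < 1e-9
```

Because the weights sum to at most one, any remaining transmittance is what lets `render_bkgd` show through in the accumulated color.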