Unverified commit d022177a, authored by Ruilong Li (李瑞龙), committed by GitHub

Docs (#18)

* update doc title

* update to pip install

* cleanup docs
.. _api:

API
============

.. automodule:: nerfacc
   :members:
   :show-inheritance:
OccupancyField
===================================

.. currentmodule:: nerfacc

.. autoclass:: OccupancyField
   :members:
   :show-inheritance:
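To make the role of ``OccupancyField`` concrete, here is a minimal sketch of the idea it implements: evaluate an occupancy function at grid-cell centers and threshold the result into a binary skip grid. This is plain NumPy for illustration only; the function name, threshold, and update scheme here are assumptions, not the class's actual internals.

```python
import numpy as np

def update_occ_grid(occ_eval_fn, aabb, resolution, threshold=0.01):
    """Threshold occ_eval_fn at grid-cell centers into a binary grid.

    Illustrative sketch of the idea behind OccupancyField, not its
    actual implementation. aabb is [xmin, ymin, zmin, xmax, ymax, zmax].
    """
    mins = np.asarray(aabb[:3], dtype=float)
    maxs = np.asarray(aabb[3:], dtype=float)
    cell = (maxs - mins) / resolution
    # Cell-center coordinates along each axis.
    axes = [mins[i] + cell[i] * (np.arange(resolution) + 0.5) for i in range(3)]
    centers = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # (R, R, R, 3)
    occ = occ_eval_fn(centers.reshape(-1, 3))
    return occ.reshape(resolution, resolution, resolution) > threshold
```

During ray marching, samples that fall in cells marked ``False`` can be skipped without querying the radiance field, which is where the speedup comes from.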
Volumetric Ray Marching
=========================

.. currentmodule:: nerfacc

.. autofunction:: ray_aabb_intersect

.. autofunction:: volumetric_marching
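As background for what ``ray_aabb_intersect`` computes, here is the standard slab method in NumPy. This is an illustrative reference only; the helper name and return convention are assumptions, not nerfacc's CUDA kernel or its exact signature.

```python
import numpy as np

def ray_aabb_intersect_ref(rays_o, rays_d, aabb):
    """Slab-method ray/AABB intersection (illustrative reference only).

    Args:
        rays_o: (N, 3) ray origins.
        rays_d: (N, 3) ray directions.
        aabb: (6,) [xmin, ymin, zmin, xmax, ymax, zmax].

    Returns:
        (t_min, t_max): (N,) entry/exit distances; t_min > t_max means a miss.
    """
    aabb = np.asarray(aabb, dtype=float)
    with np.errstate(divide="ignore"):
        inv_d = 1.0 / rays_d  # axis-aligned directions yield +/-inf, which is fine
    t0 = (aabb[:3] - rays_o) * inv_d
    t1 = (aabb[3:] - rays_o) * inv_d
    t_min = np.minimum(t0, t1).max(axis=-1)  # last entry across the 3 slabs
    t_max = np.maximum(t0, t1).min(axis=-1)  # first exit across the 3 slabs
    return np.clip(t_min, 0.0, None), t_max  # ignore hits behind the origin
```

Marching then only needs to place samples in the interval ``[t_min, t_max]`` for each ray.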
Volumetric Rendering
======================

.. currentmodule:: nerfacc

.. autofunction:: volumetric_rendering_steps

.. autofunction:: volumetric_rendering_weights

.. autofunction:: volumetric_rendering_accumulate
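These functions factor the classic volume-rendering quadrature into steps, weights, and accumulation. A minimal sketch of the weight computation, in plain NumPy for illustration (the function name and exact conventions are assumptions, not nerfacc's implementation):

```python
import numpy as np

def render_weights_ref(sigmas, step_size):
    """Per-sample compositing weights w_i = T_i * (1 - exp(-sigma_i * dt))
    for one ray, where T_i is the transmittance accumulated before sample i.
    Illustrative reference only.
    """
    alphas = 1.0 - np.exp(-sigmas * step_size)   # per-step opacity
    trans = np.cumprod(1.0 - alphas)             # survival after each step
    trans = np.concatenate([[1.0], trans[:-1]])  # shift so that T_0 = 1
    return trans * alphas

# Accumulating a quantity (e.g. color) is then a weighted sum over samples:
# color = (weights[:, None] * rgbs).sum(axis=0) + (1 - weights.sum()) * bkgd
```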
Instant-NGP
====================
See the code in our GitHub repository: https://github.com/KAIR-BAIR/nerfacc/examples/
Benchmarks
------------
We trained on the NeRF-Synthetic trainval set using a TITAN RTX and evaluated on the test set.
Note that Instant-NGP's results are taken from the paper, in which the model is trained on an
Nvidia 3090 with the random-background trick using the alpha channel.
+----------------------+--------+--------+-----------+--------+--------+
|                      | Lego   | Mic    | Materials | Chair  | Hotdog |
+======================+========+========+===========+========+========+
| Paper (PSNR: 5 min)  | 36.39  | 36.22  | 29.78     | 35.00  | 37.40  |
+----------------------+--------+--------+-----------+--------+--------+
| Ours (PSNR)          | 36.61  | 37.45  | 30.15     | 36.06  | 38.17  |
+----------------------+--------+--------+-----------+--------+--------+
| Ours (training time) | 300s   | 272s   | 258s      | 331s   | 287s   |
+----------------------+--------+--------+-----------+--------+--------+
:github_url: https://github.com/KAIR-BAIR/nerfacc

.. image:: _static/images/logo.png
   :width: 400
   :align: center
   :alt: nerfacc

NeRFacc Documentation
===================================

NeRFacc is a PyTorch NeRF acceleration toolbox for both training and inference.

.. note::
   This project is under active development.

Installation:
-------------

.. code-block:: console

   $ pip install nerfacc

.. toctree::
   :glob:
   :maxdepth: 1
   :caption: Example Usages

   examples/*

.. toctree::
   :glob:
   :maxdepth: 1
   :caption: Python API

   apis/*

.. toctree::
   :maxdepth: 1
   :caption: Projects

   NeRFactory <https://plenoptix-nerfactory.readthedocs-hosted.com/>
Usage
=====

.. _installation:

Installation
------------

To use nerfacc, first install it using pip:

.. code-block:: console

   (.venv) $ pip install git+https://github.com/liruilong940607/nerfacc
Example of use
----------------

.. code-block:: python

   import math

   import torch
   import torch.nn.functional as F

   from nerfacc import OccupancyField, volumetric_rendering

   # Set up the scene bounding box.
   scene_aabb = torch.tensor([-1.5, -1.5, -1.5, 1.5, 1.5, 1.5]).cuda()

   # Set up the scene radiance field. Assume you have a NeRF model with
   # the following methods:
   # - query_density(): {x} -> {density}
   # - forward(): {x, dirs} -> {rgb, density}
   radiance_field = ...

   # Set up some rendering settings.
   render_n_samples = 1024
   render_bkgd = torch.ones(3).cuda()
   render_step_size = (
       (scene_aabb[3:] - scene_aabb[:3]).max() * math.sqrt(3) / render_n_samples
   )

   # Set up the occupancy field with an evaluation function.
   def occ_eval_fn(x: torch.Tensor) -> torch.Tensor:
       """Evaluate occupancy given positions.

       Args:
           x: positions with shape (N, 3).

       Returns:
           occupancy values with shape (N, 1).
       """
       density_after_activation = radiance_field.query_density(x)
       occupancy = density_after_activation * render_step_size
       return occupancy

   occ_field = OccupancyField(
       occ_eval_fn=occ_eval_fn, aabb=scene_aabb, resolution=128
   )

   # Training loop.
   for step in range(10_000):
       # Generate rays from data, along with the ground-truth pixel colors.
       rays = ...
       pixels = ...

       # Update the occupancy grid.
       occ_field.every_n_step(step)

       # Rendering.
       (
           accumulated_color,
           accumulated_depth,
           accumulated_weight,
           _,
       ) = volumetric_rendering(
           query_fn=radiance_field.forward,  # {x, dir} -> {rgb, density}
           rays_o=rays.origins,
           rays_d=rays.viewdirs,
           scene_aabb=scene_aabb,
           scene_occ_binary=occ_field.occ_grid_binary,
           scene_resolution=occ_field.resolution,
           render_bkgd=render_bkgd,
           render_n_samples=render_n_samples,
           # other kwargs for `query_fn`
           ...,
       )

       # Compute the loss.
       loss = F.mse_loss(accumulated_color, pixels)