# nerfacc

This is a **tiny** toolbox for **accelerating** NeRF training & rendering using PyTorch CUDA extensions. Plug-and-play for most NeRFs!

## Instant-NGP example

```
python examples/trainval.py
```

## Performance Reference

Ours on TITAN RTX:

| trainval | Lego | Mic | Materials | Chair | Hotdog |
| - | - | - | - | - | - |
| Time | 300s | 272s | 258s | 331s | 287s |
| PSNR | 36.61 | 37.45 | 30.15 | 36.06 | 38.17 |
| FPS | 11.49 | 21.48 | 8.86 | 15.61 | 7.38 |

Instant-NGP paper (5 min training) on a 3090 (w/ mask):

| trainval | Lego | Mic | Materials | Chair | Hotdog |
| - | - | - | - | - | - |
| PSNR | 36.39 | 36.22 | 29.78 | 35.00 | 37.40 |

## Tips:

1. Sampling rays over all images per iteration (`batch_over_images=True`) is better: `PSNR 33.31 -> 33.75` (sketch below).
2. Using a learning-rate scheduler (`MultiStepLR(optimizer, milestones=[20000, 30000], gamma=0.1)`) gives: `PSNR 33.75 -> 34.40` (sketch below).
3. Increasing the chunk size (`chunk: 8192 -> 81920`) during inference gives a speedup: `FPS 4.x -> 6.2` (sketch below).
4. A random background color (`color_bkgd_aug="random"`) for the `Lego` scene actually hurts: `PSNR 35.42 -> 34.38` (sketch below).
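
For tip 1, a minimal sketch of what "sampling rays over all images per iteration" means. The function name, tensor layout, and return values here are assumptions for illustration, not the repo's actual dataloader:

```python
import torch

def sample_rays(images, num_rays, batch_over_images=True):
    """Pick `num_rays` pixels for one training step.

    `images`: (N, H, W, 3) tensor of training images (assumed layout).
    Returns the image index, (x, y) pixel coordinate, and ground-truth
    color of each sampled ray.
    """
    num_imgs, height, width, _ = images.shape
    device = images.device
    if batch_over_images:
        # Each ray comes from a randomly chosen training image.
        img_ids = torch.randint(0, num_imgs, (num_rays,), device=device)
    else:
        # All rays come from a single randomly chosen image.
        img_ids = torch.randint(0, num_imgs, (1,), device=device).expand(num_rays)
    ys = torch.randint(0, height, (num_rays,), device=device)
    xs = torch.randint(0, width, (num_rays,), device=device)
    rgbs = images[img_ids, ys, xs]  # (num_rays, 3) ground-truth colors
    return img_ids, xs, ys, rgbs
```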
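
Tip 2 uses the stock PyTorch `MultiStepLR` scheduler with the milestones quoted above. A minimal sketch, where the model, optimizer, and dummy loss are placeholders rather than the training script's actual setup:

```python
import torch

# Placeholder model; the real training script builds its own radiance field.
model = torch.nn.Linear(3, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
# Drop the learning rate by 10x at 20k and 30k iterations, as in tip 2.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[20000, 30000], gamma=0.1
)

for step in range(40000):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 3)).square().mean()  # dummy loss
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the LR schedule once per iteration
```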
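
For tip 3, `chunk` controls how many rays are pushed through the model per forward pass at test time. A sketch of the idea under assumed names (`render_rays_fn` is any user-supplied callable, not nerfacc's API):

```python
import torch

@torch.no_grad()
def render_image_in_chunks(render_rays_fn, rays_o, rays_d, chunk=81920):
    """Render rays in fixed-size chunks; a larger `chunk` batches more
    rays per CUDA launch, which is where the FPS gain in tip 3 comes from."""
    colors = []
    for i in range(0, rays_o.shape[0], chunk):
        colors.append(render_rays_fn(rays_o[i : i + chunk], rays_d[i : i + chunk]))
    return torch.cat(colors, dim=0)
```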
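
Tip 4's `color_bkgd_aug="random"` composites both the prediction and the ground truth over a freshly drawn random background color each step. Roughly, with assumed tensor names and shapes:

```python
import torch

def composite_with_bkgd(pred_rgb, pred_opacity, gt_rgba, color_bkgd_aug="random"):
    """Blend prediction and ground truth over the same background color.

    `pred_rgb`/`pred_opacity`: (R, 3) and (R, 1) from volume rendering.
    `gt_rgba`: (R, 4) ground-truth color with alpha (assumed layout).
    """
    if color_bkgd_aug == "random":
        bkgd = torch.rand(3, device=pred_rgb.device)  # new random color each step
    else:
        bkgd = torch.ones(3, device=pred_rgb.device)  # plain white background
    pred = pred_rgb + (1.0 - pred_opacity) * bkgd
    gt = gt_rgba[:, :3] * gt_rgba[:, 3:] + (1.0 - gt_rgba[:, 3:]) * bkgd
    return pred, gt
```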