- 03 Aug, 2022 1 commit
-
Darijan Gudelj authored
Summary: Made the config system call open_dict when it calls the tweak function. Reviewed By: shapovalov Differential Revision: D38315334 fbshipit-source-id: 5924a92d8d0bf399bbf3788247f81fc990e265e7
-
- 02 Aug, 2022 8 commits
-
David Novotny authored
Summary: Stats are logically connected to the training loop, not to the model. Hence, moving to the training loop. Also removing resume_epoch from OptimizerFactory in favor of a single place - ModelFactory. This removes the need for config consistency checks etc. Reviewed By: kjchalup Differential Revision: D38313475 fbshipit-source-id: a1d188a63e28459df381ff98ad8acdcdb14887b7
-
Krzysztof Chalupka authored
Summary: Blender data doesn't have depths or crops. Reviewed By: shapovalov Differential Revision: D38345583 fbshipit-source-id: a19300daf666bbfd799d0038aeefa14641c559d7
-
Jeremy Reizenstein authored
Summary: Simple DataLoaderMapProvider instance Reviewed By: davnov134 Differential Revision: D38326719 fbshipit-source-id: 58556833e76fae5790d25a59bea0aac4ce046bf1
-
Darijan Gudelj authored
Summary: fix to the D38275943 (https://github.com/facebookresearch/pytorch3d/commit/597e0259dc43bf4903e9c99f5d61410c1ad75b78). Reviewed By: bottler Differential Revision: D38355683 fbshipit-source-id: f326f45279fafa57f24b9211ebd3fda18a518937
-
Krzysztof Chalupka authored
Summary: Before this diff, train_stats.py would not be created by default, EXCEPT when resuming training. This makes them appear from the start. Reviewed By: shapovalov Differential Revision: D38320341 fbshipit-source-id: 8ea5b99ec81c377ae129f58e78dc2eaff94821ad
-
Jeremy Reizenstein authored
Summary: Remove the dataset's need to provide the task type. Reviewed By: davnov134, kjchalup Differential Revision: D38314000 fbshipit-source-id: 3805d885b5d4528abdc78c0da03247edb9abf3f7
-
Darijan Gudelj authored
Summary: Added _NEED_CONTROL to JsonIndexDatasetMapProviderV2 and made dataset_tweak_args use it. Reviewed By: bottler Differential Revision: D38313914 fbshipit-source-id: 529847571065dfba995b609a66737bd91e002cfe
-
Jeremy Reizenstein authored
Summary: Only import it if you ask for it. Reviewed By: kjchalup Differential Revision: D38327167 fbshipit-source-id: 3f05231f26eda582a63afc71b669996342b0c6f9
-
- 01 Aug, 2022 5 commits
-
David Novotny authored
Summary: <see title> Reviewed By: bottler Differential Revision: D38314727 fbshipit-source-id: 7178b816a22b06af938a35c5f7bb88404fb1b1c4
-
Darijan Gudelj authored
Summary: Made eval_batches be set in the call to `__init__`, not after construction as before. Reviewed By: bottler Differential Revision: D38275943 fbshipit-source-id: 32737401d1ddd16c284e1851b7a91f8b041c406f
-
David Novotny authored
Summary: Currently, seeds are set only inside the train loop. But this does not ensure that the model weights are initialized the same way everywhere, which makes all experiments irreproducible. This diff fixes it. Reviewed By: bottler Differential Revision: D38315840 fbshipit-source-id: 3d2ecebbc36072c2b68dd3cd8c5e30708e7dd808
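The reproducibility point above can be sketched in plain Python: seeding before model construction, not only inside the train loop, makes the initial "weights" identical across runs. The function names are illustrative, not Implicitron's API, and `random` stands in for torch/numpy seeding.

```python
import random

def seed_all(seed: int) -> None:
    # In the real trainer this would also seed torch and numpy;
    # plain `random` stands in for them here.
    random.seed(seed)

def build_model():
    # Stand-in for weight initialization: draws from the global RNG.
    return [random.random() for _ in range(4)]

# Seeding *before* construction makes the "weights" reproducible.
seed_all(42)
weights_a = build_model()
seed_all(42)
weights_b = build_model()
assert weights_a == weights_b
```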
-
David Novotny authored
Summary: Fixes the MC rasterization bug Reviewed By: bottler Differential Revision: D38312234 fbshipit-source-id: 910cf809ef3faff3de7a8d905b0821f395a52edf
-
Jeremy Reizenstein authored
Summary: Make a dummy single-scene dataset using the code from generate_cow_renders (used in existing NeRF tutorials) Reviewed By: kjchalup Differential Revision: D38116910 fbshipit-source-id: 8db6df7098aa221c81d392e5cd21b0e67f65bd70
-
- 30 Jul, 2022 1 commit
-
Krzysztof Chalupka authored
Summary: This large diff rewrites a significant portion of Implicitron's config hierarchy. The new hierarchy, and some of the default implementation classes, are as follows:
```
Experiment
    data_source: ImplicitronDataSource
        dataset_map_provider
        data_loader_map_provider
    model_factory: ImplicitronModelFactory
        model: GenericModel
    optimizer_factory: ImplicitronOptimizerFactory
    training_loop: ImplicitronTrainingLoop
        evaluator: ImplicitronEvaluator
```
1) Experiment (used to be ExperimentConfig) is now a top-level Configurable and contains as members mainly (mostly new) high-level factory Configurables.
2) Experiment's job is to run factories, do some accelerate setup and then pass the results to the main training loop.
3) ImplicitronOptimizerFactory and ImplicitronModelFactory are new high-level factories that create the optimizer, scheduler, model, and stats objects.
4) TrainingLoop is a new configurable that runs the main training loop and the inner train-validate step.
5) Evaluator is a new configurable that TrainingLoop uses to run validation/test steps.
6) GenericModel is not the only model choice anymore. Instead, ImplicitronModelBase (by default instantiated with GenericModel) is a member of Experiment and can be easily replaced by a custom implementation by the user.
All the new Configurables are children of ReplaceableBase, and can be easily replaced with custom implementations. In addition, I added support for the exponential LR schedule, updated the config files and the test, as well as added a config file that reproduces NERF results and a test to run the repro experiment. Reviewed By: bottler Differential Revision: D37723227 fbshipit-source-id: b36bee880d6aa53efdd2abfaae4489d8ab1e8a27
-
- 28 Jul, 2022 1 commit
-
Jeremy Reizenstein authored
Summary: This is an internal change in the config system. It allows redefining a pluggable implementation with new default values. This is useful in notebooks / interactive use. For example, this now works:
```
class A(ReplaceableBase):
    pass

@registry.register
class B(A):
    i: int = 4

class C(Configurable):
    a: A
    a_class_type: str = "B"

    def __post_init__(self):
        run_auto_creation(self)

expand_args_fields(C)

@registry.register
class B(A):
    i: int = 5

c = C()
assert c.a.i == 5
```
Reviewed By: shapovalov Differential Revision: D38219371 fbshipit-source-id: 72911a9bd3426d3359cf8802cc016fc7f6d7713b
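The behaviour the commit enables can be mimicked with a minimal dict-based registry, independent of pytorch3d's actual config machinery; `register` and the class names below are illustrative stand-ins.

```python
# Minimal dict-based stand-in for a class registry: later registrations
# under the same name overwrite earlier ones, so a redefined class (as in
# a notebook session) is what instantiation picks up.
registry = {}

def register(cls):
    registry[cls.__name__] = cls  # last registration wins
    return cls

@register
class B:
    i = 4

@register
class B:  # redefined with a new default value
    i = 5

# Instantiation goes through the registry, so it sees the newest default.
b = registry["B"]()
assert b.i == 5
```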
-
- 22 Jul, 2022 3 commits
-
Krzysztof Chalupka authored
Summary: Adding MeshRasterizerOpenGL, a faster alternative to MeshRasterizer. The new rasterizer follows the ideas from "Differentiable Surface Rendering via Non-Differentiable Sampling". It is 20x faster on a 2M-face mesh (try pose optimization on Nefertiti from https://www.cs.cmu.edu/~kmcrane/Projects/ModelRepository/!). The larger the mesh, the larger the speedup. There are two main disadvantages:
* The new rasterizer works with an OpenGL backend, so it requires pycuda.gl and pyopengl to be installed (though we avoided writing any C++ code, everything is in Python!)
* The new rasterizer is non-differentiable. However, you can still differentiate the rendering function if you use it with the new SplatterPhongShader which we recently added to PyTorch3D (see the original paper cited above).
Reviewed By: patricklabatut, jcjohnson Differential Revision: D37698816 fbshipit-source-id: 54d120639d3cb001f096237807e54aced0acda25
-
Krzysztof Chalupka authored
Summary: Needed to properly change devices during OpenGL rasterization. Reviewed By: jcjohnson Differential Revision: D37698568 fbshipit-source-id: 38968149d577322e662d3b5d04880204b0a7be29
-
Krzysztof Chalupka authored
Summary: EGLContext is a utility to render with OpenGL without an attached display (that is, without a monitor). DeviceContextManager allows us to avoid unnecessary context creations and releases. See docstrings for more info. Reviewed By: jcjohnson Differential Revision: D36562551 fbshipit-source-id: eb0d2a2f85555ee110e203d435a44ad243281d2c
-
- 21 Jul, 2022 4 commits
-
Jeremy Reizenstein authored
Summary: Error when sending an unbatched FrameData through GM. Reviewed By: shapovalov Differential Revision: D38036286 fbshipit-source-id: b8d280c61fbbefdc112c57ccd630ab3ccce7b44e
-
Jeremy Reizenstein authored
Summary: Avoid calculating all_train_cameras before it is needed, because it is slow in some datasets. Reviewed By: shapovalov Differential Revision: D38037157 fbshipit-source-id: 95461226655cde2626b680661951ab17ebb0ec75
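The deferral described above is a standard lazy-evaluation pattern; a sketch using `functools.cached_property`, with a counter standing in for the slow dataset scan (Implicitron's own implementation may differ):

```python
from functools import cached_property

class DataSource:
    """Sketch of deferring an expensive attribute until first use.

    `all_train_cameras` is a stand-in name; the expensive computation
    is simulated with a call counter.
    """
    def __init__(self):
        self.compute_calls = 0

    @cached_property
    def all_train_cameras(self):
        self.compute_calls += 1  # the slow dataset scan would happen here
        return ["cam0", "cam1"]

ds = DataSource()
assert ds.compute_calls == 0      # nothing computed at construction time
_ = ds.all_train_cameras
_ = ds.all_train_cameras
assert ds.compute_calls == 1      # computed once on first access, then cached
```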
-
Jeremy Reizenstein authored
Summary: lint issues (mostly flake) in implicitron Reviewed By: patricklabatut Differential Revision: D37920948 fbshipit-source-id: 8cb3c2a2838d111c80a211c98a404c210d4649ed
-
Jeremy Reizenstein authored
Summary: We especially need omegaconf when testing implicitron. Reviewed By: patricklabatut Differential Revision: D37921440 fbshipit-source-id: 4e66fde35aa29f60eabd92bf459cd584cfd7e5ca
-
- 19 Jul, 2022 1 commit
-
Jeremy Reizenstein authored
Summary: X-link: https://github.com/fairinternal/pytorch3d/pull/39 Blender and LLFF cameras were sending screen-space focal length and principal point to a camera init function expecting NDC. Reviewed By: shapovalov Differential Revision: D37788686 fbshipit-source-id: 2ddf7436248bc0d174eceb04c288b93858138582
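The unit mismatch fixed above comes down to a scale conversion. A sketch of one common convention, where NDC spans [-1, 1] along the shorter image side; the exact convention should be checked against the camera class in use, and the function name here is illustrative:

```python
def focal_screen_to_ndc(f_screen: float, image_w: int, image_h: int) -> float:
    """Convert a screen-space focal length (in pixels) to NDC units.

    Assumes NDC spans [-1, 1] along the shorter image side, i.e. one NDC
    unit equals min(W, H) / 2 pixels.
    """
    half_min_side = min(image_w, image_h) / 2.0
    return f_screen / half_min_side

# A 400-pixel focal length on an 800x600 image: the shorter side is 600,
# so one NDC unit is 300 pixels.
assert focal_screen_to_ndc(400.0, 800, 600) == 400.0 / 300.0
```

Passing `f_screen` directly where an NDC value is expected silently scales the camera by hundreds of times, which is the kind of bug the commit describes.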
-
- 18 Jul, 2022 1 commit
-
Jeremy Reizenstein authored
Summary: Add the conditioning types to the repro yaml files. In particular, this fixes test_conditioning_type. Reviewed By: shapovalov Differential Revision: D37914537 fbshipit-source-id: 621390f329d9da662d915eb3b7bc709206a20552
-
- 17 Jul, 2022 1 commit
-
Jeremy Reizenstein authored
Summary: For debugging, introduce PYTORCH3D_NO_ACCELERATE env var. Reviewed By: shapovalov Differential Revision: D37885393 fbshipit-source-id: de080080c0aa4b6d874028937083a0113bb97c23
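Reading such a debug flag is a one-liner; a sketch below, where the variable name comes from the commit but the parsing rule (any non-empty value means "disabled") is an assumption:

```python
import os

def accelerate_disabled() -> bool:
    # Treat any non-empty value of the env var as "disabled".
    # The exact semantics in the trainer may differ.
    return bool(os.environ.get("PYTORCH3D_NO_ACCELERATE"))

os.environ.pop("PYTORCH3D_NO_ACCELERATE", None)
assert not accelerate_disabled()

os.environ["PYTORCH3D_NO_ACCELERATE"] = "1"
assert accelerate_disabled()
```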
-
- 15 Jul, 2022 1 commit
-
Iurii Makarov authored
Summary: I tried to run `experiment.py` and `pytorch3d_implicitron_runner` and faced the failure with this traceback: https://www.internalfb.com/phabricator/paste/view/P515734086 It seems to be due to the new release of OmegaConf (version=2.2.2) which requires different typing. This fix helped to overcome it. Reviewed By: bottler Differential Revision: D37881644 fbshipit-source-id: be0cd4ced0526f8382cea5bdca9b340e93a2fba2
-
- 14 Jul, 2022 3 commits
-
Jiali Duan authored
Summary: EPnP fails the test when the number of points is below 6. As suggested, the quadratic option can in theory deal with as few as 4 points (so num_pts_thresh=3 is set). And when num_pts > num_pts_thresh=4, skip_q is False. To avoid bumping num_pts_thresh while passing all the original tests, check_output is set to False when num_pts < 6, similar to the logic in Lines 123-127. It makes sure that the algo doesn't crash. Reviewed By: shapovalov Differential Revision: D37804438 fbshipit-source-id: 74576d63a9553e25e3ec344677edb6912b5f9354
-
Jeremy Reizenstein authored
Summary: New linter warning is complaining about `raise` inside `except`. Reviewed By: kjchalup Differential Revision: D37819264 fbshipit-source-id: 56ad5d0558ea39e1125f3c76b43b7376aea2bc7c
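The linter pattern in question is re-raising a new exception inside `except` without chaining; `raise ... from err` preserves the original traceback as `__cause__`. A small sketch (the function and message are illustrative):

```python
def load_config(text: str) -> dict:
    """Parse a single 'key=value' line, chaining the parse error if any."""
    try:
        key, value = text.split("=")
    except ValueError as err:
        # `from err` keeps the original ValueError attached as __cause__,
        # which is what the linter asks for when raising inside `except`.
        raise RuntimeError(f"bad config line: {text!r}") from err
    return {key: value}

assert load_config("lr=0.01") == {"lr": "0.01"}

try:
    load_config("no-equals-sign")
except RuntimeError as exc:
    # The chained original error survives for debugging.
    assert isinstance(exc.__cause__, ValueError)
```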
-
David Novotny authored
Summary: Removing 1 from the crop mask does not seem sensible. Reviewed By: bottler, shapovalov Differential Revision: D37843680 fbshipit-source-id: 70cec80f9ea26deac63312da62b9c8af27d2a010
-
- 13 Jul, 2022 4 commits
-
Roman Shapovalov authored
Summary: 1. Random sampling of num batches without replacement is not supported. 2. Providers should implement the interface for the training loop to work. Reviewed By: bottler, davnov134 Differential Revision: D37815388 fbshipit-source-id: 8a2795b524e733f07346ffdb20a9c0eb1a2b8190
-
Jeremy Reizenstein authored
Summary: Fixing comments on D37592429 (https://github.com/facebookresearch/pytorch3d/commit/0dce883241ae638b9fa824f34fca9590d5f0782c) Reviewed By: shapovalov Differential Revision: D37752367 fbshipit-source-id: 40aa7ee4dc0c5b8b7a84a09d13a3933a9e3afedd
-
Jeremy Reizenstein authored
Summary: Accelerate is an additional implicitron dependency, so document it. Reviewed By: shapovalov Differential Revision: D37786933 fbshipit-source-id: 11024fe604107881f8ca29e17cb5cbfe492fc7f9
-
Roman Shapovalov authored
Summary: 1. Respecting the `visdom_show_preds` parameter when it is False. 2. Clipping the images pre-visualisation, which is important for methods like SRN that are not aware of the pixel value range. Reviewed By: bottler Differential Revision: D37786439 fbshipit-source-id: 8dbb5104290bcc5c2829716b663cae17edc911bd
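The clipping in point 2 is a plain clamp into the displayable range; a pure-Python stand-in for the tensor clamp (the real code presumably operates on torch tensors):

```python
def clip_image(pixels):
    """Clamp predicted pixel values into [0, 1] before visualisation.

    Methods like SRN can predict values outside the displayable range,
    which corrupts visualisations unless clamped first.
    """
    return [min(max(p, 0.0), 1.0) for p in pixels]

assert clip_image([-0.2, 0.5, 1.7]) == [0.0, 0.5, 1.0]
```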
-
- 12 Jul, 2022 5 commits
-
David Novotny authored
Summary: one more bugfix in JsonIndexDataset Reviewed By: bottler Differential Revision: D37789138 fbshipit-source-id: 2fb2bda7448674091ff6b279175f0bbd16ff7a62
-
Jeremy Reizenstein authored
Summary: After recent accelerate change D37543870 (https://github.com/facebookresearch/pytorch3d/commit/aa8b03f31dc2a178f8d7da457df28f19b5917009), update interactive trainer test. Reviewed By: shapovalov Differential Revision: D37785932 fbshipit-source-id: 9211374323b6cfd80f6c5ff3a4fc1c0ca04b54ba
-
Tristan Rice authored
Summary: This fixes an indexing bug in HardDepthShader and adds proper unit tests for both of the DepthShaders. This bug was introduced when updating the shader sizes and discovered when I switched my local model onto pytorch3d trunk instead of the patched copy. Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1252 Test Plan: Unit test + custom model code
```
pytest tests/test_shader.py
```
Reviewed By: bottler Differential Revision: D37775767 Pulled By: d4l3k fbshipit-source-id: 5f001903985976d7067d1fa0a3102d602790e3e8
-
Tristan Rice authored
renderer: add support for rendering high dimensional textures for classification/segmentation use cases (#1248) Summary: For 3D segmentation problems it's really useful to be able to train the models from multiple viewpoints using Pytorch3D as the renderer. Currently due to hardcoded assumptions in a few spots the mesh renderer only supports rendering RGB (3 dimensional) data. You can encode the classification information as 3 channel data but if you have more than 3 classes you're out of luck. This relaxes the assumptions to make rendering semantic classes work with `HardFlatShader` and `AmbientLights` with no diffusion/specular. The other shaders/lights don't make any sense for classification since they mutate the texture values in some way. This only requires changes in `Materials` and `AmbientLights`. The bulk of the code is the unit test. Pull Request resolved: https://github.com/facebookresearch/pytorch3d/pull/1248 Test Plan: Added unit test that renders a 5 dimensional texture and compare dimensions 2-5 to a stored picture. Reviewed By: bottler Differential Revision: D37764610 Pulled By: d4l3k fbshipit-source-id: 031895724d9318a6f6bab5b31055bb3f438176a5
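The encoding this PR makes renderable is essentially a K-channel one-hot texture. A renderer-independent sketch of that encoding (function name and shapes are illustrative; the real code would build a pytorch3d texture from such a tensor):

```python
def one_hot_texture(class_ids, num_classes):
    """Encode per-vertex class ids as a K-channel 'texture'.

    With more than 3 classes an RGB texture cannot hold the labels,
    which is the limitation the PR lifts; this shows the K-channel
    encoding itself, independent of the renderer.
    """
    return [
        [1.0 if c == k else 0.0 for k in range(num_classes)]
        for c in class_ids
    ]

tex = one_hot_texture([0, 3, 4], num_classes=5)
assert tex[1] == [0.0, 0.0, 0.0, 1.0, 0.0]
assert all(len(channels) == 5 for channels in tex)
```

After rendering such a texture with a flat shader and ambient-only lighting (so channel values are not mutated), an argmax over the channel dimension recovers per-pixel class predictions.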
-
Nikhila Ravi authored
Summary:
## Changes
- Added the Accelerate library and refactored experiment.py to use it
- Needed to move `init_optimizer` and `ExperimentConfig` to a separate file to be compatible with submitit/hydra
- Needed to make some modifications to data loaders etc. to work well with the accelerate DDP wrappers
- Loading/saving checkpoints incorporates an unwrapping step to remove the DDP-wrapped model

## Tests
Tested with both `torchrun` and `submitit/hydra` on two GPUs locally. Here are the commands:

**Torchrun** Modules loaded:
```sh
1) anaconda3/2021.05
2) cuda/11.3
3) NCCL/2.9.8-3-cuda.11.3
4) gcc/5.2.0 (but unload gcc when using submit)
```
```sh
torchrun --nnodes=1 --nproc_per_node=2 experiment.py --config-path ./configs --config-name repro_singleseq_nerf_test
```

**Submitit/Hydra local test**
```sh
~/pytorch3d/projects/implicitron_trainer$ HYDRA_FULL_ERROR=1 python3.9 experiment.py --config-name repro_singleseq_nerf_test --multirun --config-path ./configs hydra/launcher=submitit_local hydra.launcher.gpus_per_node=2 hydra.launcher.tasks_per_node=2 hydra.launcher.nodes=1
```

**Submitit/Hydra distributed test**
```sh
~/implicitron/pytorch3d$ python3.9 experiment.py --config-name repro_singleseq_nerf_test --multirun --config-path ./configs hydra/launcher=submitit_slurm hydra.launcher.gpus_per_node=8 hydra.launcher.tasks_per_node=8 hydra.launcher.nodes=1 hydra.launcher.partition=learnlab hydra.launcher.timeout_min=4320
```

## TODOs
- Fix distributed evaluation: currently this doesn't work as the input format to the evaluation function is not suitable for gathering across GPUs (it needs to be nested lists/tuples/dicts of objects that satisfy `is_torch_tensor`) and currently `frame_data` contains the `Cameras` type.
- Refactor the `accelerator` object to be accessible by all functions instead of needing to pass it around everywhere? Maybe have a `Trainer` class and add it as a method?
- Update the readme with installation instructions for accelerate and also commands for running jobs with torchrun and submitit/hydra

X-link: https://github.com/fairinternal/pytorch3d/pull/37 Reviewed By: davnov134, kjchalup Differential Revision: D37543870 Pulled By: bottler fbshipit-source-id: be9eb4e91244d4fe3740d87dafec622ae1e0cf76
-
- 11 Jul, 2022 1 commit
-
Jeremy Reizenstein authored
Summary: remove erroneous RandomDataLoaderMapProvider Reviewed By: davnov134 Differential Revision: D37751116 fbshipit-source-id: cf3b555dc1e6304425914d1522b4f70407b498bf
-