  1. 07 Nov, 2022 1 commit
    • allow dots in param_groups · 7be49bf4
      Jeremy Reizenstein authored
      Summary:
      Allow a module's `param_groups` member to specify overrides to the param groups of its members or of their members.
      Also adds logging for param-group assignments.
      
      This allows defining `params.basis_matrix` in the param_groups of a voxel_grid.
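
      For illustration, a minimal sketch (the group name "slow" is made up) of the dotted override this enables:

      ```python
      # Hypothetical: a dotted key in a voxel_grid's param_groups now reaches
      # into a member's parameter.
      param_groups = {"params.basis_matrix": "slow"}
      ```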
      
      Reviewed By: shapovalov
      
      Differential Revision: D41080667
      
      fbshipit-source-id: 49f3b0e5b36e496f78701db0699cbb8a7e20c51e
  2. 18 Oct, 2022 1 commit
    • different learning rate for different parts · fe5bdb2f
      Jeremy Reizenstein authored
      Summary:
      Adds the ability to have different learning rates for different parts of the model. The trainable parts of Implicitron have a new member:
      
             param_groups: dictionary where keys are names of individual parameters
                  or of a module's members, and values are the parameter group that
                  the parameter/member will be assigned to. The "self" key denotes
                  the parameter group of the module itself. No key, including
                  "self", has to be defined. By default, all parameters are put
                  into the "default" parameter group and have the learning rate
                  defined in the optimizer; this can be overridden at the:
                      - module level, with the "self" key: all the parameters and
                          child modules' parameters will be put into that
                          parameter group
                      - member level, which is the same as if the `param_groups`
                          of that member had key="self" and value equal to that
                          parameter group. This is useful if members do not have
                          `param_groups`, for example torch.nn.Linear.
                      - parameter level: a parameter with the same name as the key
                          will be put into that parameter group.
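
      A minimal sketch (module and group names are made up) of how the three
      override levels combine in such a `param_groups` member:

      ```python
      param_groups = {
          "self": "decoder",        # module level: everything here defaults to "decoder"
          "mlp": "decoder_mlp",     # member level: child module `mlp` goes to "decoder_mlp"
          "basis_matrix": "basis",  # parameter level: this one parameter goes to "basis"
      }
      ```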
      
      In the optimizer factory, parameters and their learning rates are then gathered recursively.
      
      Reviewed By: shapovalov
      
      Differential Revision: D40145802
      
      fbshipit-source-id: 631c02b8d79ee1c0eb4c31e6e42dbd3d2882078a
  3. 03 Oct, 2022 1 commit
    • load whole dataset in train loop · 37bd280d
      Darijan Gudelj authored
      Summary: Loads the whole dataset, moves it to the device, and sends it for sampling, to enable heterogeneous raysampling over the full dataset.
      
      Reviewed By: bottler
      
      Differential Revision: D39263009
      
      fbshipit-source-id: c527537dfc5f50116849656c9e171e868f6845b1
  4. 22 Sep, 2022 1 commit
    • foreach optimizers · 209c160a
      Jeremy Reizenstein authored
      Summary: Allow using the new `foreach` option on optimizers.
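
      A minimal sketch of the underlying torch option this exposes (the exact
      Implicitron wiring may differ):

      ```python
      import torch

      model = torch.nn.Linear(10, 10)
      # foreach=True updates all parameters of a group with multi-tensor
      # kernels instead of a Python loop over individual parameters.
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, foreach=True)
      ```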
      
      Reviewed By: shapovalov
      
      Differential Revision: D39694843
      
      fbshipit-source-id: 97109c245b669bc6edff0f246893f95b7ae71f90
  5. 01 Sep, 2022 1 commit
  6. 30 Aug, 2022 1 commit
    • CO3Dv2 trainer configs · 1163eaab
      David Novotny authored
      Summary:
      Adds yaml configs to train selected methods on CO3Dv2.
      
      A few more updates:
      1) moved some fields to base classes so that we can check is_multisequence in experiment.py
      2) skip loading all train cameras for multisequence datasets; without this, co3d-fewview is untrainable
      3) fixed a bug in the json index dataset provider v2
      
      Reviewed By: kjchalup
      
      Differential Revision: D38952755
      
      fbshipit-source-id: 3edac6fc8e20775aa70400bd73a0e6d52b091e0c
  7. 10 Aug, 2022 1 commit
    • LinearExponential LR · a39cad40
      Jeremy Reizenstein authored
      Summary: Linear followed by exponential LR progression. Needed to make Blender scenes converge.
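
      A minimal sketch (hyperparameters made up; not the exact Implicitron
      scheduler) of a linear ramp followed by exponential decay, via LambdaLR:

      ```python
      import torch

      def linear_exponential(step: int, warmup_steps: int = 1000, gamma: float = 0.9995) -> float:
          # Multiplicative factor applied to the optimizer's base LR.
          if step < warmup_steps:
              return (step + 1) / warmup_steps   # linear ramp up
          return gamma ** (step - warmup_steps)  # exponential decay afterwards

      optimizer = torch.optim.Adam([torch.nn.Parameter(torch.zeros(1))], lr=1e-3)
      scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=linear_exponential)
      ```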
      
      Reviewed By: kjchalup
      
      Differential Revision: D38557007
      
      fbshipit-source-id: ad630dbc5b8fabcb33eeb5bdeed5e4f31360bac2
  8. 09 Aug, 2022 1 commit
    • Mods and bugfixes for LLFF and Blender repros · c83ec355
      Krzysztof Chalupka authored
      Summary:
      LLFF (and most, if not all, non-synthetic datasets) has no background/foreground distinction. Add support for data with no foreground mask.
      
      Also, we had a bug in stats loading, like this:
        * Load stats
        * One of the stats has a history of length 0
        * That's fine; e.g. maybe it's fg_error but the dataset has no notion of fg/bg, so leave it at length 0
        * Check whether all the stats have the same history length as an arbitrarily chosen "reference stat"
        * Oops, the reference stat happened to be the stat with length 0
        * assert (legit_stat_len == reference_stat_len (= 0)) ---> failed assert
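
      A sketch of the fix (variable and field names hypothetical; the actual
      code may differ): choose the reference length among the stats with
      non-empty histories, so legitimately empty histories pass the check:

      ```python
      # Hypothetical stat records with histories of varying length:
      stats = {"loss": [0.9, 0.8], "sec/it": [1.2, 1.1], "fg_error": []}
      lengths = [len(history) for history in stats.values()]
      nonzero = [n for n in lengths if n > 0]
      reference_len = nonzero[0] if nonzero else 0
      # Empty histories (e.g. fg_error on a dataset without fg masks) are legal.
      assert all(n in (0, reference_len) for n in lengths)
      ```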
      
      Also some minor fixes (from Jeremy's other diff) to support LLFF
      
      Reviewed By: davnov134
      
      Differential Revision: D38475272
      
      fbshipit-source-id: 5b35ac86d1d5239759f537621f41a3aa4eb3bd68
  9. 02 Aug, 2022 3 commits
    • Move load_stats to TrainingLoop · c3f8dad5
      David Novotny authored
      Summary:
      Stats are logically connected to the training loop, not to the model; hence, they are moved to the training loop.
      
      Also removes resume_epoch from OptimizerFactory in favor of a single place: ModelFactory. This removes the need for config consistency checks etc.
      
      Reviewed By: kjchalup
      
      Differential Revision: D38313475
      
      fbshipit-source-id: a1d188a63e28459df381ff98ad8acdcdb14887b7
    • Fix train_stats.pdf: they now work by default · b7b188bf
      Krzysztof Chalupka authored
      Summary: Before this diff, train_stats.pdf would not be created by default, except when resuming training. This diff makes it appear from the start.
      
      Reviewed By: shapovalov
      
      Differential Revision: D38320341
      
      fbshipit-source-id: 8ea5b99ec81c377ae129f58e78dc2eaff94821ad
    • remove get_task · f8bf5280
      Jeremy Reizenstein authored
      Summary: Remove the dataset's need to provide the task type.
      
      Reviewed By: davnov134, kjchalup
      
      Differential Revision: D38314000
      
      fbshipit-source-id: 3805d885b5d4528abdc78c0da03247edb9abf3f7
  10. 01 Aug, 2022 1 commit
    • Better seeding of random engines · 80fc0ee0
      David Novotny authored
      Summary: Currently, seeds are set only inside the train loop, but this does not ensure that the model weights are initialized the same way everywhere, which makes experiments irreproducible. This diff fixes that.
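
      A sketch of the usual global-seeding pattern (the helper name is
      hypothetical): seed every random engine once, before the model is built,
      rather than only inside the train loop:

      ```python
      import random

      import numpy as np
      import torch

      def seed_all(seed: int) -> None:
          random.seed(seed)
          np.random.seed(seed)
          torch.manual_seed(seed)  # also seeds all CUDA devices
      ```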
      
      Reviewed By: bottler
      
      Differential Revision: D38315840
      
      fbshipit-source-id: 3d2ecebbc36072c2b68dd3cd8c5e30708e7dd808
  11. 30 Jul, 2022 1 commit
    • Replace pluggable components to create a proper Configurable hierarchy. · 1b0584f7
      Krzysztof Chalupka authored
      Summary:
      This large diff rewrites a significant portion of Implicitron's config hierarchy. The new hierarchy, and some of the default implementation classes, are as follows:
      ```
      Experiment
          data_source: ImplicitronDataSource
              dataset_map_provider
              data_loader_map_provider
          model_factory: ImplicitronModelFactory
              model: GenericModel
          optimizer_factory: ImplicitronOptimizerFactory
          training_loop: ImplicitronTrainingLoop
              evaluator: ImplicitronEvaluator
      ```
      
      1) Experiment (which used to be ExperimentConfig) is now a top-level Configurable whose members are mainly (mostly new) high-level factory Configurables.
      2) Experiment's job is to run the factories, do some Accelerate setup, and then pass the results to the main training loop.
      3) ImplicitronOptimizerFactory and ImplicitronModelFactory are new high-level factories that create the optimizer, scheduler, model, and stats objects.
      4) TrainingLoop is a new Configurable that runs the main training loop and the inner train-validate step.
      5) Evaluator is a new Configurable that TrainingLoop uses to run validation/test steps.
      6) GenericModel is no longer the only model choice. Instead, ImplicitronModelBase (by default instantiated with GenericModel) is a member of Experiment and can easily be replaced by a custom user implementation.
      
      All the new Configurables are children of ReplaceableBase, and can be easily replaced with custom implementations.
      
      In addition, I added support for the exponential LR schedule, updated the config files and the test, and added a config file that reproduces NeRF results along with a test to run the repro experiment.
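
      Since the new components are children of ReplaceableBase, a custom
      implementation can be plugged in through the registry. A minimal sketch
      (class names are made up):

      ```python
      from pytorch3d.implicitron.tools.config import ReplaceableBase, registry

      class MyEvaluatorBase(ReplaceableBase):  # hypothetical replaceable base
          def run(self) -> None:
              raise NotImplementedError

      @registry.register
      class MyEvaluator(MyEvaluatorBase):
          def run(self) -> None:
              print("custom evaluation")
      ```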
      
      Reviewed By: bottler
      
      Differential Revision: D37723227
      
      fbshipit-source-id: b36bee880d6aa53efdd2abfaae4489d8ab1e8a27
  12. 15 Jul, 2022 1 commit
  13. 12 Jul, 2022 1 commit
    • Updates to support Accelerate and multigpu training (#37) · aa8b03f3
      Nikhila Ravi authored
      Summary:
      ## Changes:
      - Added the Accelerate library and refactored experiment.py to use it
      - Needed to move `init_optimizer` and `ExperimentConfig` to a separate file to be compatible with submitit/hydra
      - Needed to make some modifications to data loaders etc. to work well with the Accelerate DDP wrappers
      - Loading/saving checkpoints incorporates an unwrapping step to remove the DDP-wrapped model (see the sketch below)
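
      A sketch of the wrap/unwrap pattern (not the exact trainer code):

      ```python
      import torch
      from accelerate import Accelerator

      model = torch.nn.Linear(8, 8)
      optimizer = torch.optim.Adam(model.parameters())

      accelerator = Accelerator()
      # prepare() wraps the model for DDP when run under a distributed launcher.
      model, optimizer = accelerator.prepare(model, optimizer)

      # Strip the DDP wrapper before checkpointing.
      accelerator.save(accelerator.unwrap_model(model).state_dict(), "checkpoint.pth")
      ```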
      
      ## Tests
      
      Tested with both `torchrun` and `submitit/hydra` on two GPUs locally. Here are the commands:
      
      **Torchrun**
      
      Modules loaded:
      ```sh
      1) anaconda3/2021.05   2) cuda/11.3   3) NCCL/2.9.8-3-cuda.11.3   4) gcc/5.2.0. (but unload gcc when using submit)
      ```
      
      ```sh
      torchrun --nnodes=1 --nproc_per_node=2 experiment.py --config-path ./configs --config-name repro_singleseq_nerf_test
      ```
      
      **Submitit/Hydra Local test**
      
      ```sh
      ~/pytorch3d/projects/implicitron_trainer$ HYDRA_FULL_ERROR=1 python3.9 experiment.py --config-name repro_singleseq_nerf_test --multirun --config-path ./configs  hydra/launcher=submitit_local hydra.launcher.gpus_per_node=2 hydra.launcher.tasks_per_node=2 hydra.launcher.nodes=1
      ```
      
      **Submitit/Hydra distributed test**
      
      ```sh
      ~/implicitron/pytorch3d$ python3.9 experiment.py --config-name repro_singleseq_nerf_test --multirun --config-path ./configs  hydra/launcher=submitit_slurm hydra.launcher.gpus_per_node=8 hydra.launcher.tasks_per_node=8 hydra.launcher.nodes=1 hydra.launcher.partition=learnlab hydra.launcher.timeout_min=4320
      ```
      
      ## TODOS:
      - Fix distributed evaluation: currently this doesn't work, as the input format to the evaluation function is not suitable for gathering across GPUs (it needs to be nested lists/tuples/dicts of objects that satisfy `is_torch_tensor`), and `frame_data` currently contains the `Cameras` type.
      - Refactor the `accelerator` object to be accessible by all functions instead of needing to pass it around everywhere? Maybe have a `Trainer` class and add it as a method?
      - Update readme with installation instructions for accelerate and also commands for running jobs with torchrun and submitit/hydra
      
      X-link: https://github.com/fairinternal/pytorch3d/pull/37
      
      Reviewed By: davnov134, kjchalup
      
      Differential Revision: D37543870
      
      Pulled By: bottler
      
      fbshipit-source-id: be9eb4e91244d4fe3740d87dafec622ae1e0cf76