1. 27 May, 2021 1 commit
    • add an option to set the number of test images · 73f0f05f
      Tao Xu authored
      Summary: Add an option to set the number of test images. Thus, during finetuning, we can set a small number of test images (for visualization purposes only) to save time during evaluation.
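      A minimal sketch of how such an option might be consumed, assuming a hypothetical config key `D2GO_DATA.TEST.MAX_IMAGES` and a plain list-of-dicts test dataset (names are illustrative, not necessarily what this diff adds):

      ```python
      def maybe_limit_test_dataset(dataset_dicts, cfg):
          """Keep only the first N test images when a limit is configured (illustrative)."""
          max_images = cfg.D2GO_DATA.TEST.MAX_IMAGES  # hypothetical config key
          if max_images and max_images > 0:
              return dataset_dicts[:max_images]
          return dataset_dicts
      ```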
      
      Reviewed By: leehomyc
      
      Differential Revision: D28720086
      
      fbshipit-source-id: 8085be6a0f4f8742784e3dafe255716f3ae02acb
  2. 25 May, 2021 3 commits
    • fix for checking device type · bf395ce5
      Kai Zhang authored
      Summary: Currently we check whether MODEL.DEVICE is "gpu", but DEVICE could also be "cuda". This diff checks whether the device is "cpu" instead.
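      An illustrative, standalone version of the check (not the exact diff):

      ```python
      def resolve_device(cfg_device: str) -> str:
          # Check for "cpu" rather than matching "gpu", since the accelerator
          # may be spelled either "gpu" or "cuda" in MODEL.DEVICE.
          return "cpu" if cfg_device == "cpu" else cfg_device
      ```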
      
      Reviewed By: wat3rBro
      
      Differential Revision: D28689547
      
      fbshipit-source-id: 7512d32b7c08b0dcdc6487c6c2f1703655e64b19
    • update RCNN model test base · 0ab6d3f1
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/75
      
      Refactor the base test case:
      - make test_dir valid throughout the test (rather than only inside a local context), so individual tests can load back the exported model (see the sketch after this list)
      - refactor `custom_setup_test` for easier overriding
      - move parameterized into the base class to avoid copying the naming function
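      A hedged sketch of the kind of base class this describes (class and method names are illustrative, not the actual d2go test base):

      ```python
      import tempfile
      import unittest


      class RCNNBaseTestCase(unittest.TestCase):
          """Illustrative base test case: test_dir lives for the whole test."""

          def setUp(self):
              # Created in setUp so it stays valid throughout the test, not only
              # inside a `with` block; individual tests can re-open exported models.
              self._tmp = tempfile.TemporaryDirectory()
              self.test_dir = self._tmp.name
              self.custom_setup_test()

          def tearDown(self):
              self._tmp.cleanup()

          def custom_setup_test(self):
              # Subclasses override this to register datasets, build configs, etc.
              pass
      ```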
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D28651067
      
      fbshipit-source-id: c59a311564f6114039e20ed3a23e5dd9c84f4ae4
    • Read number of processes from dist_config · 29b57165
      Kai Zhang authored
      Summary: Currently, when launching a training flow, we read the number of processes from resources.num_gpus. To be backward compatible with the existing D2Go training config, this diff changes it to read dist_config.num_processes_per_machine instead.
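      Roughly the lookup described above, as a standalone sketch (`launch_cfg` and the fallback behavior are assumptions; the attribute names come from the summary):

      ```python
      def get_num_processes(launch_cfg):
          """Sketch only: prefer dist_config, fall back to the older resources field."""
          dist_config = getattr(launch_cfg, "dist_config", None)
          if dist_config is not None and getattr(dist_config, "num_processes_per_machine", None):
              return dist_config.num_processes_per_machine
          return launch_cfg.resources.num_gpus
      ```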
      
      Reviewed By: wat3rBro
      
      Differential Revision: D28630334
      
      fbshipit-source-id: 3c684cd56e5d2e247c7b82e1d1eeff0f39e59ee4
  3. 24 May, 2021 1 commit
  4. 22 May, 2021 2 commits
  5. 21 May, 2021 3 commits
  6. 17 May, 2021 2 commits
    • add dataset visualization · 536e9d25
      Kai Zhang authored
      Summary: Add dataset visualization so that we can visualize test results in TensorBoard.
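      The writing side of this uses the standard TensorBoard image API; a minimal, self-contained sketch (the hook into the actual evaluation loop is not shown):

      ```python
      import numpy as np
      from torch.utils.tensorboard import SummaryWriter

      writer = SummaryWriter(log_dir="tb_logs")
      # `vis_image` stands in for an HWC uint8 image produced by a detection visualizer.
      vis_image = np.zeros((480, 640, 3), dtype=np.uint8)
      writer.add_image("test/visualization", vis_image, global_step=0, dataformats="HWC")
      writer.close()
      ```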
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D28457363
      
      fbshipit-source-id: 4c2fd9dce349c6fb9e1cec51c9138cf0abb45d7b
    • Remove run_on_bundled_input · fdd64119
      Jacob Szwejbka authored
      Summary:
      Pull Request resolved: https://github.com/pytorch/pytorch/pull/58344
      
      Remove a helper function that's more trouble than it's worth.
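      Without the helper, bundled inputs can still be executed directly through the methods that `augment_model_with_bundled_inputs` attaches, e.g.:

      ```python
      # `model` is assumed to be a scripted module previously augmented with bundled inputs.
      for args in model.get_all_bundled_inputs():
          model(*args)
      ```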
      
      ghstack-source-id: 129131889
      
      Reviewed By: dhruvbird
      
      Differential Revision: D28460607
      
      fbshipit-source-id: 31bd6c1cc169785bb360e3113d258b612cad47fc
  7. 16 May, 2021 1 commit
    • create CfgNode with consistent type · cbd695ac
      Zhicheng Yan authored
      Summary: Create a new CfgNode whose type is consistent with the parent node.
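      One way to keep a child node's type consistent with its parent, shown as a generic sketch (not necessarily the exact change):

      ```python
      def new_child_node(parent_cfg):
          # Use the parent's own class instead of a hard-coded CfgNode, so that
          # subclasses of CfgNode get children of the same type.
          return type(parent_cfg)()
      ```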
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D28318466
      
      fbshipit-source-id: 38cb84de6bdfec2b283c4d9a1090cad47c118c9c
  8. 14 May, 2021 1 commit
  9. 13 May, 2021 2 commits
    • Auto scale config for multi-node training · e87ed5f0
      Kai Zhang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/62
      
      The Lightning trainer sets max steps to cfg.SOLVER.MAX_ITER. However, this is the max iteration across all nodes; in multi-node training we need to scale it down, along with the eval period and other configs.
      This diff calls `auto_scale_world_size` before passing the config to the trainer.
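      Schematically, the scaling amounts to something like the following (an illustration of the idea only; the real `auto_scale_world_size` handles more config keys):

      ```python
      def scale_config_for_world_size(cfg, num_machines: int, num_processes_per_machine: int):
          """Divide per-run iteration counts by the total number of processes (sketch)."""
          world_size = num_machines * num_processes_per_machine
          cfg.SOLVER.MAX_ITER = max(1, cfg.SOLVER.MAX_ITER // world_size)
          cfg.TEST.EVAL_PERIOD = max(1, cfg.TEST.EVAL_PERIOD // world_size)
          return cfg
      ```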
      
      Reviewed By: wat3rBro
      
      Differential Revision: D28140877
      
      fbshipit-source-id: 2639ae58773a4ec2a0cc59dfefb2f5d9b1afe1a8
    • remove adet's default config from base runner · f3d05021
      Yanghan Wang authored
      Reviewed By: zhanghang1989
      
      Differential Revision: D28346653
      
      fbshipit-source-id: d80a1f824b097c05029edb171739a4928e47e4d8
  10. 12 May, 2021 1 commit
    • Synchronize PyTorchLightning/pytorch-lightning (revision 7b283e3c@master) to... · 0848c589
      Luis Perez authored
      Synchronize PyTorchLightning/pytorch-lightning (revision 7b283e3c@master) to github/third-party/PyTorchLightning/pytorch-lightning
      
      Summary:
      # Manual
      - remove FIXMEs in `model_checkpoint.py`, `parameter_monitor.py`, `test_quantization.py`, and `speed_monitor.py` now that `Trainer` is properly annotated.
      - update `test_quantization.py` to use `trainer.train_loop.global_step` instead of `trainer.global_step`, which is read-only.
      - update `loop_callback.py` to read `batch_idx` from `train_loop` (it is no longer available where it was read before); see the illustrative snippet after this list.
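      For reference, the call-site updates look roughly like this (illustrative only; the exact attributes depend on the Lightning revision being synced):

      ```python
      # Write the step through the train loop; `trainer.global_step` is read-only.
      trainer.train_loop.global_step = expected_step
      # Read the batch index from the train loop instead of the old trainer attribute.
      batch_idx = trainer.train_loop.batch_idx
      ```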
      
      # Automatic
      ### New commit log messages
        7b283e3c Bugfix/Multiple dataloaders (#7433)
        d7c44cc6 Docs: sync chlog 1.3.1 (#7478)
        fdf50a5e Mark certain Trainer APIs as protected (#7420)
        ad9118f0 remove trainer hidden state | sanity refactor [1 / n] (#7437)
        4a1134db Log epoch metrics before firing the `on_evaluation_end` hook (#7272)
        b65ae794 Automatically check `DataModule.has_{setup,teardown,prepare_data}` [2/2] (#7238)
        8660d8cf [pre-commit.ci] pre-commit autoupdate (#7475)
        f6fe715e Fix Sphinx argument deprecation (#7464)
      
      Reviewed By: shuyingsunshine21
      
      Differential Revision: D28353491
      
      fbshipit-source-id: 98b87d99e2f09b47b07270858fcbdb5d5299730b
  11. 10 May, 2021 2 commits
  12. 07 May, 2021 2 commits
  13. 06 May, 2021 3 commits
  14. 05 May, 2021 2 commits
    • force contiguous when calling `augment_model_with_bundled_inputs` · 04bbc81f
      Yanghan Wang authored
      Summary: `augment_model_with_bundled_inputs` can compress a tensor when its values are constant; however, it requires a contiguous layout, and `zero_like` can return non-contiguous tensors.
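      A runnable sketch of the workaround using the public bundled-inputs API (the toy model and input shape are made up):

      ```python
      import torch
      from torch.utils.bundled_inputs import augment_model_with_bundled_inputs


      class TinyModel(torch.nn.Module):
          def forward(self, x):
              return x * 2


      scripted = torch.jit.script(TinyModel())
      # zeros_like on a non-contiguous tensor can preserve its strides, so force a
      # contiguous layout before bundling to let constant inputs be compressed.
      reference = torch.empty(1, 3, 8, 8).permute(0, 1, 3, 2)   # non-contiguous layout
      example_input = torch.zeros_like(reference).contiguous()
      augment_model_with_bundled_inputs(scripted, [(example_input,)])
      print(len(scripted.get_all_bundled_inputs()))
      ```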
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D28224987
      
      fbshipit-source-id: 32b13728ff8fadd53412dbf2d59c4b46e92af04a
    • add enlarge bounding box manipulation · e1961ad4
      Sam Tsai authored
      Summary: Add a bounding box manipulation tool to pad bounding box data.
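      The operation itself is straightforward; a hypothetical standalone version (not the actual tool added in this diff) could look like:

      ```python
      import torch


      def enlarge_boxes(boxes: torch.Tensor, scale: float = 1.2) -> torch.Tensor:
          """Enlarge XYXY boxes around their centers by `scale` (illustrative helper)."""
          cx = (boxes[:, 0] + boxes[:, 2]) / 2
          cy = (boxes[:, 1] + boxes[:, 3]) / 2
          half_w = (boxes[:, 2] - boxes[:, 0]) * scale / 2
          half_h = (boxes[:, 3] - boxes[:, 1]) * scale / 2
          return torch.stack([cx - half_w, cy - half_h, cx + half_w, cy + half_h], dim=1)
      ```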
      
      Reviewed By: newstzpz
      
      Differential Revision: D28082071
      
      fbshipit-source-id: f168cae48672c4fa5c4ec98697c57ed7833787ab
  15. 04 May, 2021 2 commits
    • OSS build mask head using fbnet builder · 477ab964
      Hang Zhang authored
      Summary:
      [WIP] Will add pretrained weights and update the model URL & scores.

      Build the mask head using the fbnet builder and retrain weights.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D27992340
      
      fbshipit-source-id: a216a99954eb3784438d595cd09cbb19e70ec3c3
    • move some of `test_meta_arch_rcnn.py` to oss · e84d3414
      Yanghan Wang authored
      Reviewed By: newstzpz
      
      Differential Revision: D27747996
      
      fbshipit-source-id: 6ae3b89c3944098828e246e5a4a89209b8e171a1
  16. 30 Apr, 2021 1 commit
    • add keypoints metadata registry · 77ebe09f
      Sam Tsai authored
      Summary:
      1. Add a keypoint metadata registry for registering different keypoint metadata (see the registry sketch after this list)
      2. Add an option to inject_coco_dataset for adding keypoint metadata
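      A bare-bones illustration of the registry pattern being added (the decorator name and metadata fields here are hypothetical):

      ```python
      KEYPOINT_METADATA_REGISTRY = {}


      def register_keypoint_metadata(name):
          """Decorator that registers a function returning keypoint metadata."""
          def deco(fn):
              KEYPOINT_METADATA_REGISTRY[name] = fn
              return fn
          return deco


      @register_keypoint_metadata("toy_2kp")
      def toy_keypoints():
          return {
              "keypoint_names": ["left_eye", "right_eye"],
              "keypoint_flip_map": [("left_eye", "right_eye")],
          }
      ```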
      
      Reviewed By: newstzpz
      
      Differential Revision: D27730541
      
      fbshipit-source-id: c6ba97f60664fce4dcbb0de80222df7490bc6d5d
  17. 29 Apr, 2021 2 commits
  18. 28 Apr, 2021 2 commits
    • Patch for Quantizing PointRend model · 3e243c1a
      Hang Zhang authored
      Summary: The PointRend mask head doesn't work with quantization. Add a patch to disable it.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D27800349
      
      fbshipit-source-id: ae0268ee78b000245ebdb2edbfc679a62c85a59a
    • Synchronize PyTorchLightning/pytorch-lightning (revision 7fe8d184@master) to... · a95c7983
      Ananth Subramaniam authored
      Synchronize PyTorchLightning/pytorch-lightning (revision 7fe8d184@master) to github/third-party/PyTorchLightning/pytorch-lightning
      
      Summary:
      ### New commit log messages
        7fe8d184 Do not `shuffle` in `LightningDataModule.from_datasets` for `IterableDataset` (#7053)
        bab72255 [fix] Add barriers before and after setup hook is run (#7202)
        f920ba29 [bugfix] Metric not logged properly in manual optimization (#7228)
        e147127c [feat] Add better support for predict + ddp 2/3 (#7215)
        ca6c87ff Add back `clip_gradients(model)` (#7231)
        3b36d81c Fixed `num_sanity_val_steps` affecting reproducibility of training data shuffling (#7014)
        5cf9afa1 Add fairscale install msg for Sharded Plugins (#7213)
        52a5cee0 Set smarter default for DDP sharded for performance optimization (#6937)
        dd5ec75e Deprecate save_function from model checkpoint callback (#7201)
        ac7d6a35 Fix `NeptuneLogger.log_text(step=None)` (#7194)
        6be0a859 Update teardown for TPU acc (#7211)
        bc3f08b0 [fix] Add barrier to accelerator's teardown (#6814)
        68eac4d9 Enforce Lightning module as source of truth for automatic optimization (#7130)
        44d775fc Update Error message for ProfileConnector (#7204)
        31fcd7d0 Deprecate write_predictions on the LightningModule (#7066)
        591b9cee make bug_report_model minimal (#7191)
        b3fe8366 Move metrics_to_scalars to a dedicated utilities file (#7180)
        f58865aa Properly set `LightningModule.device` after model replacement (#7188)
        8439aead Update FairScale on CI (#7017)
        92af3632 Fix `lr_finder` suggesting too high learning rates (#7076)
        d534e53e add missing predict docs (#7150)
      
      Reviewed By: kazhang
      
      Differential Revision: D28032962
      
      fbshipit-source-id: 18cd01e8ecc13fe25f0890ac0f4b20c3c3e1fed3
  19. 27 Apr, 2021 1 commit
    • Remove methods_to_optimize from script · c04ef895
      Jacob Szwejbka authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/54
      
      This arg is being deprecated; its use case was really only for modules that use functions besides forward for inference. The new plan is simply to optimize every function. Since this script was just created, I'm hoping I can edit it without throwing lots of stuff out of whack.
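      For context, a hedged sketch of scripting a module with an extra exported inference method and optimizing it for mobile without a per-method allowlist (these are real PyTorch APIs, but whether this matches the script's new behavior is an assumption):

      ```python
      import torch
      from torch.utils.mobile_optimizer import optimize_for_mobile


      class Predictor(torch.nn.Module):
          def forward(self, x):
              return x.relu()

          @torch.jit.export
          def preprocess(self, x):
              return x / 255.0


      scripted = torch.jit.script(Predictor())
      # No methods_to_optimize-style arg: optimize the whole scripted module,
      # preserving the extra exported method so it survives optimization.
      optimized = optimize_for_mobile(scripted, preserved_methods=["preprocess"])
      ```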
      
      Reviewed By: wat3rBro
      
      Differential Revision: D27954176
      
      fbshipit-source-id: fbe178fcc0404e5d2524d8edb4052e2cd17f43ba
  20. 23 Apr, 2021 3 commits
  21. 22 Apr, 2021 1 commit
  22. 21 Apr, 2021 2 commits