1. 09 Nov, 2022 1 commit
  2. 08 Nov, 2022 1 commit
  3. 03 Nov, 2022 2 commits
    • use SharedList as offload backend of DatasetFromList by default · 01c351bc
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/405
      
      - Use the non-hacky way (added in D40818736, https://github.com/facebookresearch/detectron2/pull/4626) to customize the offload backend of `DatasetFromList`.
      - In D2Go, switch to `SharedList` (added in D40789062, https://github.com/facebookresearch/mobile-vision/pull/120) by default to save RAM, and optionally use `DiskCachedList` to save even more; the sketch below illustrates the offload pattern.
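      For reference, the hook from the detectron2 PR above lets callers pass a serialization backend to `DatasetFromList`. Below is a minimal, illustrative sketch of the general offload pattern such backends follow (the class name here is made up; the real `SharedList` / `DiskCachedList` implementations live in mobile-vision and d2go):
      ```python
      import pickle

      import numpy as np


      class SerializedListSketch:
          """Pickle every item into one flat uint8 buffer so per-item Python objects
          are not duplicated in every data loader worker."""

          def __init__(self, items):
              blobs = [np.frombuffer(pickle.dumps(x, protocol=-1), dtype=np.uint8) for x in items]
              self._addr = np.cumsum([len(b) for b in blobs])  # end offset of each item
              self._data = np.concatenate(blobs)  # single contiguous buffer

          def __len__(self):
              return len(self._addr)

          def __getitem__(self, idx):
              start = 0 if idx == 0 else int(self._addr[idx - 1])
              end = int(self._addr[idx])
              return pickle.loads(self._data[start:end].tobytes())


      # Hypothetical usage with detectron2 (the `serialize=` hook is what this diff relies on):
      # dataset = DatasetFromList(dataset_dicts, serialize=SerializedListSketch)
      ```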
      
      Local benchmark results in dev mode, using a ~2.4 GiB dataset:
      | RAM usage (RES, SHR) | No-dataset | Naive | NumpySerializedList | SharedList | DiskCachedList |
      | -- | -- | -- | -- | -- | -- |
      | Master GPU worker | 8.0g, 2.8g | 21.4g, 2.8g | 11.6g, 2.8g | 11.5g, 5.2g | -- |
      | Non-master GPU worker | 7.5g, 2.8g | 21.0g, 2.8g | 11.5g, 2.8g | 8.0g, 2.8g | -- |
      | Per data loader worker | 2.0g, 1.0g | 14.0g, 1.0g | 4.4g, 1.0g | 2.1g, 1.0g | -- |
      
      - The memory usage (RES, SHR) is read from the `top` command: `RES` is the total memory used by a process; `SHR` is the portion of `RES` that can be shared with other processes.
      - Experiments use 2 GPUs and 2 data loader workers per GPU, so there are 6 processes in total; the **numbers are per process**.
      - `No-dataset`: the same job run with a tiny dataset (only 4.47 MiB after serialization); since its RAM usage is negligible, this column shows the floor RAM usage.
      - The other experiments use a dataset of **2413.57 MiB** after serialization.
        - `Naive`: the vanilla version, where the dataset is not offloaded to any other storage.
        - `NumpySerializedList`: this optimization was added long ago in D19896490. I recall that the RAM was indeed shared across data loader workers, but there seems to have been a regression: now essentially every process holds its own copy of the data.
        - `SharedList`: enabled in this diff. Only the master GPU worker needs extra RAM. Interestingly, it uses 3.5 GB more RAM than the other ranks while the data itself is 2.4 GB; I'm not sure whether that is overhead of the storage itself or overhead from sharing it with other processes. Since a non-master GPU worker using `NumpySerializedList` also uses 11.5 GB of RAM, we probably don't need to worry too much about it.
        - `DiskCachedList`: not benchmarked; it should add no extra RAM usage.
      
      Plugging the numbers above into a typical 8-GPU, 4-worker training, and assuming the OS and other programs take 20-30 GB of RAM, the current training uses `11.6g * 8 + 4.4g * 8*4 = 233.6g` of RAM, which is on the edge of OOM for a 256 GB machine; this matches our experience that it supports a ~2 GB dataset. After this change, the training uses only `(11.5g * 7 + 8.0g) + 2.1g * 8*4 = 155.7g` of RAM, which leaves much more headroom, so we can train with a much larger dataset (e.g. 20 GB) or use more data loader workers (e.g. 8).
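      Restating that budget as a quick calculation (this only reproduces the arithmetic written in the paragraph above, it is not a new measurement):
      ```python
      # Per-process numbers (GiB) from the benchmark table, for a hypothetical
      # 8-GPU, 4-worker-per-GPU training.
      gpus, workers_per_gpu = 8, 4

      before = 11.6 * gpus + 4.4 * gpus * workers_per_gpu                # NumpySerializedList
      after = (11.5 * (gpus - 1) + 8.0) + 2.1 * gpus * workers_per_gpu   # SharedList, as written above

      print(f"before: {before:.1f}g, after: {after:.1f}g")  # before: 233.6g, after: 155.7g
      ```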
      
      Reviewed By: sstsai-adl
      
      Differential Revision: D40819959
      
      fbshipit-source-id: fbdc9d2d1d440e14ae8496be65979a09f3ed3638
    • replace torch.testing.assert_allclose with torch.testing.assert_close · c6666d33
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/409
      
      `assert_close` is preferred over `assert_allclose`: https://github.com/pytorch/pytorch/issues/61844
      
      `assert_allclose` was removed yesterday in https://github.com/pytorch/pytorch/pull/87974, causing tests to fail, e.g. https://github.com/facebookresearch/d2go/actions/runs/3389194553/jobs/5632021291
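      The mechanical replacement looks like the following (the tolerance keywords are shown only for illustration; `assert_close` picks sensible defaults based on dtype):
      ```python
      import torch

      actual = torch.tensor([1.0, 2.0])
      expected = torch.tensor([1.0, 2.0])

      # Before (removed from PyTorch in pytorch/pytorch#87974):
      # torch.testing.assert_allclose(actual, expected)

      # After:
      torch.testing.assert_close(actual, expected)
      torch.testing.assert_close(actual, expected, rtol=1e-4, atol=1e-4)  # explicit tolerances
      ```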
      
      Reviewed By: sstsai-adl
      
      Differential Revision: D41000306
      
      fbshipit-source-id: 7bd1cb9d5edf0a4609a909e2283df411bcabdf13
  4. 28 Oct, 2022 2 commits
  5. 26 Oct, 2022 4 commits
  6. 20 Oct, 2022 1 commit
  7. 06 Oct, 2022 1 commit
  8. 03 Oct, 2022 1 commit
  9. 01 Oct, 2022 1 commit
  10. 28 Sep, 2022 1 commit
    • support pytorch checkpoint as teacher model using config · dc176d58
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/371
      
      In a previous iteration of this diff, we were specifying the teacher model in the same config as the student model, something like:
      ```
      # config.py
      MODEL:
        FBNET_V2:
        ...
      DISTILLATION:
        TEACHER:
          MODEL:
            FBNET_V2:
            ...
            WEIGHTS: /path/to/teacher/weights
      ...
      ```
      
      This leads to some oddities in the code, e.g. we need a default config that adds all the required keys for the distillation teacher model.
      
      In this diff, we instead let the user supply a teacher config (and optionally a runner_name and overwrite opts) and use the supplied runner to build the model:
      ```
      # new_config.py
      MODEL:
        FBNET_V2:
      ...
      DISTILLATION:
        TEACHER:
          CONFIG_FNAME: /path/to/teacher/config
          RUNNER_NAME:
      ...
      ```
      
      This should make it very easy to specify the teacher, since the user can potentially just reuse the trained_config generated by d2go.
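      A rough sketch of what building the teacher from its own config could look like (hedged: the exact d2go entry points and signatures used by this diff may differ):
      ```python
      from d2go.runner import create_runner
      from detectron2.checkpoint import DetectionCheckpointer


      def build_teacher_sketch(config_fname, runner_name="d2go.runner.GeneralizedRCNNRunner"):
          # Resolve the runner from its (str) name, load the teacher's own config,
          # build the model with that runner, then load the teacher weights.
          runner = create_runner(runner_name)
          cfg = runner.get_default_cfg()
          cfg.merge_from_file(config_fname)
          model = runner.build_model(cfg, eval_only=True)
          DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
          return model
      ```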
      
      Reviewed By: newstzpz
      
      Differential Revision: D37640041
      
      fbshipit-source-id: 088a636c96f98279c9a04e32d1674f703451aec3
  11. 31 Aug, 2022 2 commits
  12. 12 Aug, 2022 1 commit
  13. 27 Jul, 2022 2 commits
  14. 26 Jul, 2022 1 commit
  15. 19 Jul, 2022 1 commit
  16. 14 Jul, 2022 1 commit
  17. 13 Jul, 2022 1 commit
  18. 08 Jul, 2022 1 commit
    • prepare_for_quant_convert -> custom_convert_fx · 97904ba4
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/325
      
      `prepare_for_quant_convert` is a confusing name because it only does `convert`; there is no "prepare" step in it. It also applies to FX mode only, since eager mode always calls `torch.quantization.convert` and cannot be customized. So we just call it `custom_convert_fx`. The new name is also unique in fbcode, which makes a later codemod easy.
      
      This diff simply does the renaming by biggrep + replace.
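      Illustrative only (the exact hook signature d2go expects is not shown in this message, so `custom_convert_fx(self, cfg)` is an assumption): after the rename, a model-level FX conversion override looks roughly like this:
      ```python
      import torch
      from torch.ao.quantization.quantize_fx import convert_fx


      class MyQuantModel(torch.nn.Module):
          # Before this diff the hook was named `prepare_for_quant_convert`.
          def custom_convert_fx(self, cfg):
              # Only used in FX mode; eager mode still goes through torch.quantization.convert.
              self.backbone = convert_fx(self.backbone)  # e.g. convert just the prepared backbone
              return self
      ```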
      
      Reviewed By: jerryzh168
      
      Differential Revision: D37676717
      
      fbshipit-source-id: e7d05eaafddc383dd432986267c945c8ebf94df4
  19. 06 Jul, 2022 1 commit
  20. 02 Jul, 2022 1 commit
  21. 29 Jun, 2022 1 commit
  22. 24 Jun, 2022 1 commit
    • use runner class instead of instance outside of main · 8051775c
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/312
      
      As discussed, we decided not to use a runner instance outside of `main`. Previous diffs already resolved the prerequisites; this diff mainly does the renaming:
      - Use the runner name (str) in the FBLearner ML pipeline.
      - Use the runner name (str) in the FBL operator, MAST, and the binary operator.
      - Use the runner class as the interface of `main`; it can be either the name of the class (str) or the actual class. The primary usage should be a `str`, so that importing the class happens inside `main`. However, it is also common to import a runner class and call `main` directly for ad-hoc scripts or tests; supporting the actual class makes those cases easier to handle (e.g. some local test classes don't have an importable name, so a runner name is not feasible). A minimal sketch of this resolution is shown below.
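      A minimal sketch of the str-or-class resolution described above (not the d2go implementation):
      ```python
      import importlib


      def resolve_runner_class(runner):
          """Accept either a dotted class name (preferred, keeps the import inside main)
          or an already-imported class (handy for ad-hoc scripts and tests)."""
          if isinstance(runner, str):
              module_name, _, class_name = runner.rpartition(".")
              return getattr(importlib.import_module(module_name), class_name)
          return runner


      # resolve_runner_class("d2go.runner.GeneralizedRCNNRunner")  # typical pipeline usage
      # resolve_runner_class(MyLocalTestRunner)                    # local class without a stable name
      ```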
      
      Reviewed By: newstzpz
      
      Differential Revision: D37060338
      
      fbshipit-source-id: 879852d41902b87d6db6cb9d7b3e8dc55dc4b976
  23. 21 Jun, 2022 1 commit
  24. 20 Jun, 2022 1 commit
  25. 17 Jun, 2022 1 commit
  26. 16 Jun, 2022 1 commit
    • add modeling hook algo and helper · f3fc01aa
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/299
      
      This implements the first iteration of generalized distillation in D2Go. The functionality is separated into the following:
      
      => Adding distillation functionality without the user changing their meta architecture:
      
      class DistillationModelingHook
      * This is an implementation detail that we hide from the user.
      * We use a dynamic mixin to add functionality to the user's model. In this way, the original (student) model retains all of its attributes, while the mixin class overrides `forward` (and provides more functionality such as teacher updates); see the sketch after this summary.
      * Building the teacher currently only supports loading a TorchScript model; PyTorch compatibility will come in later diffs.
      
      => Implementing distillation methods
      
      class DistillationAlgorithm
      * The user can use a default algorithm (e.g., LabelDistillation) or create their own. This is where we specify the overridden forward function of the model and any other distillation requirements (e.g. updating the weights of the teacher model).
      * The basic LabelDistillation lets a user use a teacher model during training to relabel the ground truth.
      
      => User customization
      
      class DistillationHelper
      * This is what we expect the user to customize. For example, the user probably needs to write their own pseudo_labeler that takes batched_inputs and relabels them with the teacher.
      
      Both DistillationHelper and DistillationAlgorithm use a registry, so users can add customizations in their own code and enable them by specifying them in the config.
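      A toy illustration of the dynamic-mixin idea (not the actual DistillationModelingHook code): swap the instance's class for a subclass that overrides `forward`, so the student model keeps all of its attributes:
      ```python
      import torch
      import torch.nn as nn


      class DistillationMixinSketch:
          def forward(self, x):
              with torch.no_grad():
                  pseudo_label = self._teacher(x)  # relabel the batch with the teacher
              return super().forward(x), pseudo_label  # the original student forward still runs


      def apply_distillation_sketch(student: nn.Module, teacher: nn.Module) -> nn.Module:
          student._teacher = teacher
          student.__class__ = type(
              "Distillation" + type(student).__name__,
              (DistillationMixinSketch, type(student)),
              {},
          )
          return student
      ```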
      
      Reviewed By: newstzpz
      
      Differential Revision: D36708227
      
      fbshipit-source-id: bc427c5d42d0c7ff4d839bf10782efac24dea107
  27. 10 Jun, 2022 1 commit
    • apply new formatting config · 0900aeba
      John Reese authored
      Summary:
      pyfmt now specifies a target Python version of 3.8 when formatting
      with black. With this new config, black adds trailing commas to all
      multiline function calls. This applies the new formatting as part
      of rolling out the linttool-integration for pyfmt.
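      In effect, the new config means black formats exploded multiline calls like this (toy example, not taken from the diff):
      ```python
      def some_function(a, b, c):
          return (a, b, c)


      arg_one = arg_two = arg_three = 0

      # Short calls stay on one line, unchanged:
      result = some_function(arg_one, arg_two, arg_three)

      # Calls that black breaks across lines now get a trailing comma after the last argument:
      result = some_function(
          arg_one,
          arg_two,
          arg_three,
      )
      ```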
      
      paintitblack
      
      Reviewed By: zertosh, lisroach
      
      Differential Revision: D37084377
      
      fbshipit-source-id: 781a1b883a381a172e54d6e447137657977876b4
  28. 09 Jun, 2022 1 commit
  29. 05 Jun, 2022 2 commits
  30. 02 Jun, 2022 2 commits
  31. 28 May, 2022 1 commit