1. 17 Nov, 2022 2 commits
    • add class to keep track of loss metadata and function to compute losses · 0316fed4
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/419
      
      This diff adds a metadata class `LayerLossMetadata` to help keep track of the losses we want to compute over layers. The class contains the type of loss, loss name, and layer names.
      
      This diff adds a helper function to iterate over a list of `LayerLossMetadata` and return a dict containing the computed losses.
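
      A minimal sketch of what the metadata class and loss helper could look like; the field names and the two-cache signature here are illustrative assumptions rather than the exact d2go API:

      ```python
      from dataclasses import dataclass
      from typing import Dict, List

      import torch
      from torch import nn


      @dataclass
      class LayerLossMetadata:
          # Illustrative fields: the loss module to run, a name for this loss,
          # and the names of the two layers whose cached outputs it consumes.
          loss: nn.Module
          name: str
          layer0: str
          layer1: str


      def compute_layer_losses(
          layer_losses: List[LayerLossMetadata],
          layer0_cache: Dict[str, torch.Tensor],
          layer1_cache: Dict[str, torch.Tensor],
      ) -> Dict[str, torch.Tensor]:
          """Iterate over the metadata and return a dict of named loss values."""
          return {
              ll.name: ll.loss(layer0_cache[ll.layer0], layer1_cache[ll.layer1])
              for ll in layer_losses
          }
      ```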
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40286564
      
      fbshipit-source-id: b269dc63cc90a437ca279379d759c3106016327c
    • add a helper to record layers in a model · 53c4c2c1
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/418
      
      This diff adds a function that can be used to add `CachedLayer`s to a model. The function iterates over named modules and dynamically mixes `CachedLayer` into the target modules.
      
      This diff adds a function to remove the cached layers.
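
      A rough, self-contained sketch of the idea: iterate over `named_modules()`, swap each target module's class for a dynamically created caching subclass, and return the dict the outputs land in. The helper names are illustrative, and the forward-wrapping subclass stands in for the real `CachedLayer` mixin described in the commit below:

      ```python
      from typing import Dict, Set

      import torch
      from torch import nn


      def record_layers(model: nn.Module, layer_names: Set[str]) -> Dict[str, torch.Tensor]:
          """Swap each target module's class for a caching subclass; return the
          shared dict that the cached layers write their outputs into."""
          cache: Dict[str, torch.Tensor] = {}
          for name, module in model.named_modules():
              if name not in layer_names:
                  continue
              original_cls = type(module)
              original_forward = module.forward  # still bound to the original class

              def make_forward(key, orig_fwd):
                  def forward(self, *args, **kwargs):
                      output = orig_fwd(*args, **kwargs)
                      if isinstance(output, torch.Tensor):
                          # clone() keeps the autograd graph so losses can backprop
                          cache[key] = output.clone()
                      return output
                  return forward

              module.__class__ = type(
                  f"Cached{original_cls.__name__}",
                  (original_cls,),
                  {"forward": make_forward(name, original_forward),
                   "_original_cls": original_cls},
              )
          return cache


      def unrecord_layers(model: nn.Module) -> None:
          """Undo record_layers by restoring each module's original class."""
          for _, module in model.named_modules():
              original_cls = getattr(type(module), "_original_cls", None)
              if original_cls is not None:
                  module.__class__ = original_cls
      ```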
      
      Reviewed By: Minione
      
      Differential Revision: D40285806
      
      fbshipit-source-id: 3137d19927d8fb9ec924a77c9085aea29fe94d5e
  2. 16 Nov, 2022 2 commits
    • support a layer that saves outputs · 120b463c
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/417
      
      This diff adds a layer `CachedLayer`, which is meant to be used with dynamic mixin. The layer runs the original module and clones the output into a dictionary provided by the user.
      
      The main use case is distillation, where we dynamically mix these layers into the layers on which the user wants to compute various losses.
      
      See subsequent diffs for the integration with distillation.
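
      A toy, self-contained illustration of the dynamic-mixin idea on a single module; it assumes the layer output is a single tensor, and the `cache`/`cache_key` attribute names are hypothetical:

      ```python
      import torch
      from torch import nn


      class CachedLayer(nn.Module):
          """Illustrative mixin: run the original module, then clone the output
          into a user-provided dictionary keyed by this layer's name."""

          def forward(self, *args, **kwargs):
              # super().forward resolves to the original module's forward once
              # this class is dynamically mixed in front of it.
              output = super().forward(*args, **kwargs)
              if isinstance(output, torch.Tensor):
                  # clone() so later in-place changes to the live output don't
                  # corrupt the cached copy used for loss computation.
                  self.cache[self.cache_key] = output.clone()
              return output


      # Dynamic-mixin usage on a single module:
      conv = nn.Conv2d(3, 8, kernel_size=3)
      conv.cache, conv.cache_key = {}, "backbone.conv1"  # hypothetical attributes
      conv.__class__ = type("CachedConv2d", (CachedLayer, nn.Conv2d), {})

      _ = conv(torch.randn(1, 3, 16, 16))
      assert "backbone.conv1" in conv.cache  # the output was captured
      ```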
      
      Reviewed By: Minione
      
      Differential Revision: D40285573
      
      fbshipit-source-id: 2058deff8b96f63aebd1e9b9933a5352b5197111
    • update teacher to support models where device is a property · 0f27e90f
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/416
      
      Distillation assumes the teacher model has an attribute `device`. Sometimes this attribute is actually a property (e.g., GeneralizedRCNN), but there is no guarantee that it exists. We add a helper function to move the model to the device and add this attribute if needed.
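
      A hedged sketch of what such a helper could do; the function name and exact behavior are assumptions for illustration, not the real d2go helper:

      ```python
      import torch
      from torch import nn


      def prepare_teacher(model: nn.Module, device: torch.device) -> nn.Module:
          """Hypothetical helper: move the model to `device` and make sure
          `model.device` exists, without clobbering an existing property
          (e.g. GeneralizedRCNN already exposes `device` as a property)."""
          model = model.to(device)
          if not hasattr(model, "device"):
              # The class defines no `device` property, so a plain instance
              # attribute is enough for distillation code that reads it.
              model.device = device
          return model
      ```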
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40283954
      
      fbshipit-source-id: 42921653eac8a79499e22edac29aa6aeac016e8a
  3. 15 Nov, 2022 1 commit
  4. 14 Nov, 2022 1 commit
  5. 11 Nov, 2022 1 commit
  6. 09 Nov, 2022 1 commit
  7. 08 Nov, 2022 1 commit
  8. 07 Nov, 2022 1 commit
  9. 03 Nov, 2022 2 commits
    • use SharedList as offload backend of DatasetFromList by default · 01c351bc
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/405
      
      - Use the non-hacky way (added in D40818736, https://github.com/facebookresearch/detectron2/pull/4626) to customize the offload backend of DatasetFromList.
      - In D2Go (https://github.com/facebookresearch/d2go/commit/87374efb134e539090e0b5c476809dc35bf6aedb), switch to `SharedList` (added in D40789062, https://github.com/facebookresearch/mobile-vision/pull/120) by default to save RAM, and optionally use `DiskCachedList` to further save RAM.
      
      Local benchmarking results (using a ~2.4 GiB dataset) in dev mode:
      | Process (RAM: RES, SHR) | No-dataset | Naive | NumpySerializedList | SharedList | DiskCachedList |
      | -- | -- | -- | -- | -- | -- |
      | Master GPU worker | 8.0g, 2.8g | 21.4g, 2.8g | 11.6g, 2.8g | 11.5g, 5.2g | -- |
      | Non-master GPU worker | 7.5g, 2.8g | 21.0g, 2.8g | 11.5g, 2.8g | 8.0g, 2.8g | -- |
      | Per data loader worker | 2.0g, 1.0g | 14.0g, 1.0g | 4.4g, 1.0g | 2.1g, 1.0g | -- |
      
      - The memory usage (RES, SHR) is taken from the `top` command. `RES` is the total memory used per process; `SHR` shows how much of that RAM can be shared.
      - Experiments use 2 GPUs and 2 data loader workers per GPU, so there are 6 processes in total; the **numbers are per-process**.
      - `No-dataset`: the same job run with a tiny dataset (only 4.47 MiB after serialization); since its RAM usage should be negligible, it shows the floor RAM usage.
      - The other experiments use a dataset of **2413.57 MiB** after serialization.
        - `Naive`: the vanilla version where we don't offload the dataset to other storage.
        - `NumpySerializedList`: this optimization was added a long time ago in D19896490. I recall that the RAM was indeed shared across data loader workers, but it seems there was a regression: now basically every process has its own copy of the data.
        - `SharedList`: enabled in this diff. It shows that only the master GPU worker needs extra RAM. Interestingly, it uses 3.5 GB more RAM than the other ranks, while the data itself is 2.4 GB; I'm not sure whether that is overhead of the storage itself or of sharing it with other processes. Since the non-master GPU worker under `NumpySerializedList` also uses 11.5g of RAM, we probably don't need to worry too much about it.
        - `DiskCachedList`: not benchmarked; it should have no extra RAM usage.
      
      Applying the numbers above to a typical 8-GPU, 4-worker training, and assuming the OS and other programs take 20-30 GB RAM, the current training uses `11.6g * 8 + 4.4g * 8*4 = 233.6g` RAM, on the edge of causing OOM on a 256 GB machine. This aligns with our experience that it supports a ~2 GB dataset. After the change, the training will use only `(11.5g * 7 + 8.0g) + 2.1g * 8*4 = 155.7g` RAM, which gives much more headroom; we can thus train with a much larger dataset (e.g. 20 GB) or use more data loader workers (e.g. 8 per GPU).
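
      For intuition, here is a simplified, self-contained illustration of the general technique behind such offload backends: pickle every dataset dict into one flat buffer kept in shared memory, so worker processes map the same pages instead of each holding its own copy of the Python objects. This is a sketch of the idea, not the actual `SharedList` implementation:

      ```python
      import pickle
      from typing import Any, List

      import numpy as np
      import torch


      class SharedSerializedList:
          """Simplified illustration of a shared, serialized dataset list."""

          def __init__(self, lst: List[Any]):
              # Pickle each element into raw bytes and record cumulative offsets.
              data = [np.frombuffer(pickle.dumps(x, protocol=-1), dtype=np.uint8) for x in lst]
              addr = np.cumsum([len(d) for d in data])
              # Keep the flat buffer and offsets as tensors in shared memory so
              # forked data loader workers reuse the same physical pages.
              self._addr = torch.from_numpy(addr).share_memory_()
              self._data = torch.from_numpy(np.concatenate(data)).share_memory_()

          def __len__(self) -> int:
              return len(self._addr)

          def __getitem__(self, idx: int) -> Any:
              start = 0 if idx == 0 else int(self._addr[idx - 1])
              end = int(self._addr[idx])
              return pickle.loads(self._data[start:end].numpy().tobytes())
      ```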
      
      Reviewed By: sstsai-adl
      
      Differential Revision: D40819959
      
      fbshipit-source-id: fbdc9d2d1d440e14ae8496be65979a09f3ed3638
    • replace torch.testing.assert_allclose with torch.testing.assert_close · c6666d33
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/409
      
      `assert_close` is preferred over `assert_allclose`: https://github.com/pytorch/pytorch/issues/61844
      
      `assert_allclose` was removed yesterday in https://github.com/pytorch/pytorch/pull/87974, causing tests to fail, e.g. https://github.com/facebookresearch/d2go/actions/runs/3389194553/jobs/5632021291
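
      The migration is mechanical; a minimal example:

      ```python
      import torch

      a = torch.tensor([1.0, 2.0, 3.0])
      b = a + 1e-7

      # Before (removed from PyTorch):
      # torch.testing.assert_allclose(a, b)

      # After:
      torch.testing.assert_close(a, b)
      ```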
      
      Reviewed By: sstsai-adl
      
      Differential Revision: D41000306
      
      fbshipit-source-id: 7bd1cb9d5edf0a4609a909e2283df411bcabdf13
  10. 01 Nov, 2022 1 commit
  11. 31 Oct, 2022 2 commits
  12. 28 Oct, 2022 2 commits
  13. 27 Oct, 2022 2 commits
  14. 26 Oct, 2022 5 commits
  15. 24 Oct, 2022 1 commit
  16. 23 Oct, 2022 1 commit
  17. 20 Oct, 2022 2 commits
  18. 14 Oct, 2022 1 commit
  19. 06 Oct, 2022 1 commit
  20. 05 Oct, 2022 2 commits
  21. 03 Oct, 2022 1 commit
  22. 01 Oct, 2022 2 commits
  23. 29 Sep, 2022 1 commit
  24. 28 Sep, 2022 4 commits