1. 19 Dec, 2022 1 commit
    • Fix WeightedSampler to also work with adhoc datasets · ab49d0b6
      Anton Rigner authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/437
      
      # Problem
      - We use `TRAIN_CATEGORIES` to override the classes for convenient experimentation, so we don't have to re-map the JSON file
      - However, it's not possible to use the `WeightedTrainingSampler` with specified repeat factors (`DATASETS.TRAIN_REPEAT_FACTOR`) when also overriding the classes used for training (ad-hoc datasets), because the underlying ad-hoc dataset name doesn't match the dataset names specified in the `TRAIN_REPEAT_FACTOR` pairs (a mapping of <dataset_name, repeat_factor>)
      
      # Fix
      
      - Update the dataset names in the repeat-factor mapping as well when the `WeightedTrainingSampler` is enabled and ad-hoc datasets are used (see the sketch below).
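      A minimal sketch of the name remapping this fix implies, assuming an ad-hoc dataset renames `"my_dataset"` to a derived name such as `"my_dataset@2classes"`; the helper and the derived name are illustrative, not the actual d2go implementation:
      ```python
      # Hypothetical helper: rewrite <dataset_name, repeat_factor> pairs so they refer to
      # the ad-hoc dataset names created by TRAIN_CATEGORIES. All names here are illustrative.
      def update_repeat_factor_names(repeat_factors, adhoc_name_map):
          return [[adhoc_name_map.get(name, name), factor] for name, factor in repeat_factors]

      repeat_factors = [["my_dataset", 2.0]]                  # cfg.DATASETS.TRAIN_REPEAT_FACTOR
      adhoc_name_map = {"my_dataset": "my_dataset@2classes"}  # assumed ad-hoc renaming
      print(update_repeat_factor_names(repeat_factors, adhoc_name_map))
      # [['my_dataset@2classes', 2.0]]
      ```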
      
      Reviewed By: wat3rBro
      
      Differential Revision: D41765638
      
      fbshipit-source-id: 51dad484e4d715d2de900b5d0b7c7caa19903fb7
  2. 16 Dec, 2022 1 commit
  3. 12 Dec, 2022 2 commits
  4. 09 Dec, 2022 1 commit
  5. 08 Dec, 2022 1 commit
  6. 30 Nov, 2022 6 commits
    • support caching tuples · dece58ba
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/432
      
      We support caching of tuples since they behave similarly to lists
      
      Reviewed By: XiaoliangDai
      
      Differential Revision: D41483876
      
      fbshipit-source-id: 9d741074f8e2335ddd737ae3f1bdb288910f5564
    • algorithm · 150db2d1
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/431
      
      Add a generic domain adaptation algorithm. This algorithm:
      * gets domain0 data out of the dataloader
      * runs the domain0 data through the model and saves the target layer output
      * gets domain1 data out of the dataloader
      * runs the domain1 data through the model and saves the target layer output
      * runs the domain adaptation loss on the domain0 and domain1 outputs
      * combines these losses with the losses from the model's training iteration
      
      This diff adds `get_preprocess_domain0_input` and `get_preprocess_domain1_input` to the distillation helper. These are functions that the user can use to convert the dataloader output into something that will be used by the model (e.g., pull the domain0 or domain1 key out of a dataloader that returns a dict); a hedged sketch follows.
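      A minimal sketch of such a helper, assuming the dataloader returns a dict with `"domain0"` and `"domain1"` keys; the base class name is an assumption, only the two hook names come from this summary:
      ```python
      # Illustrative only: the base class and the batch keys are assumptions, not the d2go API.
      class MyDomainAdaptationHelper(BaseDistillationHelper):  # assumed d2go base class
          def get_preprocess_domain0_input(self):
              # Pull the domain0 entry out of the dict returned by the dataloader.
              return lambda batch: batch["domain0"]

          def get_preprocess_domain1_input(self):
              return lambda batch: batch["domain1"]
      ```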
      
      Differential Revision: D40970724
      
      fbshipit-source-id: fff050fbe864654fa6cb0df927f6843855ec1c14
    • support registering layer losses to model · c4860c5b
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/430
      
      We add losses in distillation by instantiating them in the distillation algorithm's init and then running them during the forward pass.
      
      However this has some issues:
      * the losses are not registered as modules in the model, since we organize them as a list of `LayerLossMetadata` => this means that things like AMP do not behave as expected
      * the losses are not on the same device as the rest of the model, since they are potentially created after the model is moved to a new device

      This diff solves both of these issues with a helper function that registers the losses on the model and moves them to the model's device: `register_layer_losses_and_to_device` takes a `List[LayerLossMetadata]`, moves the losses to the same device as the model, and then registers these losses on the model (see the usage sketch below).
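      A hedged usage sketch; `register_layer_losses_and_to_device` comes from this summary, while `model` and the `LayerLossMetadata` field names are assumptions:
      ```python
      import torch.nn as nn

      # Illustrative usage; `model` is assumed to be an already-built model and the
      # LayerLossMetadata field names are assumptions based on these summaries.
      layer_losses = [
          LayerLossMetadata(loss=nn.MSELoss(), name="feat_mse", layer0="backbone", layer1="backbone"),
      ]
      # Per the summary: moves each loss to the model's device and registers it as a submodule.
      register_layer_losses_and_to_device(model, layer_losses)
      ```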
      
      Differential Revision: D41296932
      
      fbshipit-source-id: ae7ae0847bce1b5cc481d838b9cae69cea424f25
    • support ignoring teacher · 909de50d
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/429
      
      Add a teacher type called `no_teacher`, which the user can specify when they want to ignore the teacher (e.g., domain adaptation). Building the teacher then just returns a no-op (`nn.Identity`)
      
      Differential Revision: D40971788
      
      fbshipit-source-id: fc49ac44224c92806a7be253eefb8454305814eb
    • add an augmentation to pad image to square. · c2d7dbab
      Peizhao Zhang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/428
      
      add an augmentation to pad image to square.
      * For example, an image with shape (10, 7, 3) will become (10, 10, 3), padded with the value specified by `pad_value` (see the sketch below).
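      An illustrative sketch of the described behavior (not the d2go augmentation class itself; whether the padding is applied on the bottom/right or centered is an assumption here):
      ```python
      import numpy as np

      def pad_to_square(image: np.ndarray, pad_value: float = 0.0) -> np.ndarray:
          # Pad the shorter spatial side up to the longer one, keeping channels untouched.
          h, w = image.shape[:2]
          size = max(h, w)
          padded = np.full((size, size) + image.shape[2:], pad_value, dtype=image.dtype)
          padded[:h, :w] = image  # original content kept in the top-left corner (assumed layout)
          return padded

      assert pad_to_square(np.zeros((10, 7, 3))).shape == (10, 10, 3)
      ```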
      
      Reviewed By: tax313, wat3rBro
      
      Differential Revision: D41545182
      
      fbshipit-source-id: 6d5fd9d16984a9904d44f22386920cdf130edda7
    • set cache in recorded layers · 30ac5858
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/433
      
      Distillation uses a module called `CachedLayer` to record the outputs of a layer to an internal dict. This dict is typically initialized by the object itself and any value is overwritten every time the model runs.
      
      However, sometimes we need the outputs from more than one run of the layer (e.g., domain adaptation => we run the model on real, then synthetic data and need to use both outputs).

      This diff adds a helper to externally set the cache dict of a model. In other words, we can call `set_cache_dict` on a model to change the dict used by every `CachedLayer` in the model. This allows us to run the model and record some outputs, then swap the cache dict and rerun the model to save a different set of outputs (see the sketch below).
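      A hedged sketch of that workflow; `set_cache_dict` comes from this summary, while the model and batch names are illustrative:
      ```python
      # Run the model twice, recording CachedLayer outputs into two separate dicts.
      domain0_cache, domain1_cache = {}, {}

      set_cache_dict(model, domain0_cache)   # CachedLayers now write into domain0_cache
      model(domain0_batch)

      set_cache_dict(model, domain1_cache)   # swap the cache dict and rerun on the other data
      model(domain1_batch)

      # domain0_cache / domain1_cache now hold the recorded layer outputs from each run.
      ```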
      
      Differential Revision: D40970577
      
      fbshipit-source-id: 49cb851af49ae193d0c8ac9218e02fdaf4e6587b
  7. 28 Nov, 2022 1 commit
  8. 23 Nov, 2022 2 commits
  9. 22 Nov, 2022 1 commit
    • add default layer losses and loss combiner · 419974bb
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/421
      
      Add some reasonable defaults when running knowledge distillation
      * get_default_kd_image_classification_layer_losses => returns a cross-entropy loss between the output of the student classification layer and the teacher output (this is what the ImageNet distillation uses)
      * DefaultLossCombiner => a simple function that multiplies the losses by some weights (a minimal sketch follows below)
      
      Unsure if these should go in `distillation.py` or a separate place (e.g., defaults or classification)
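      A minimal sketch of the loss-combination behavior described above; the loss names and weight values are assumed for illustration:
      ```python
      import torch

      # Illustrative: multiply each named loss by a user-provided weight, which is the behavior
      # this summary describes for DefaultLossCombiner (names and weight values are assumed).
      loss_dict = {"cls_loss": torch.tensor(1.2), "kd_loss": torch.tensor(0.8)}
      weights = {"cls_loss": 1.0, "kd_loss": 0.5}
      combined = {name: value * weights[name] for name, value in loss_dict.items()}
      print(combined)  # {'cls_loss': tensor(1.2000), 'kd_loss': tensor(0.4000)}
      ```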
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40330718
      
      fbshipit-source-id: 5887566d88e3a96d01aca133c51041126b2692cc
  10. 21 Nov, 2022 1 commit
    • add configure_dataset_creation for lightning · 0ea6bc1b
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/423
      
      The `DefaultTask` has forked the implementation of some runner methods from `Detectron2GoRunner`, which is not necessary since there should be no differences. This might cause issues where we update the one in `Detectron2GoRunner` but forget about `DefaultTask`.
      
      Reviewed By: chihyaoma
      
      Differential Revision: D41350485
      
      fbshipit-source-id: 38a1764a7cc77dc13939ac7d59f35584bf9dab9b
  11. 19 Nov, 2022 1 commit
    • kd algorithm · 9ec4f2bf
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/420
      
      Adds knowledge distillation as a generic algorithm that can be used by various projects.
      
      In eval mode, the algorithm just returns the result of the student model.

      If training, the algorithm feeds the input into both the student and teacher models. The user provides a list of `LayerLossMetadata` that specifies the layers and the losses to run on those layers. The algorithm uses dynamic mixin to record the outputs of the relevant layers and computes the losses after both models have run.

      We provide student and teacher preprocessing as a placeholder until we support a more generic dataloader that can provide different inputs to the student and teacher (e.g., as of now, if you want to provide the teacher with a larger input, the dataloader should return the large input and the student preprocessing can downsample it).

      We add the following functions as part of the user-customizable distillation helper (a hedged sketch follows the list):
      * get_teacher => return a teacher that can be used directly by the KD algorithm
      * get_layer_losses => return a list of `LayerLossMetadata` that provides the layers and losses
      * get_preprocess_student_input => manipulate the output of the dataloader before passing to the student
      * get_preprocess_teacher_input => manipulate the output of the dataloader before passing to the teacher
      * get_combine_losses => since we may want to weight the student and distillation losses, return a function that can manipulate the loss_dict
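      A hedged sketch of such a helper; only the hook names above come from this summary, while the base class, builders, layer names, and loss weights are illustrative assumptions:
      ```python
      import torch.nn as nn

      # Illustrative only: base class, LayerLossMetadata field names, builders, and layer
      # names are assumptions; only the hook names come from this summary.
      class MyKDHelper(BaseDistillationHelper):              # assumed base class
          def get_teacher(self):
              return build_pretrained_teacher()              # hypothetical teacher builder

          def get_layer_losses(self, model=None):
              return [LayerLossMetadata(loss=nn.CrossEntropyLoss(), name="kd_ce",
                                        layer0="classifier", layer1="classifier")]

          def get_preprocess_student_input(self):
              return lambda batch: downsample(batch)         # hypothetical downsample step

          def get_preprocess_teacher_input(self):
              return lambda batch: batch                     # teacher sees the raw (larger) input

          def get_combine_losses(self):
              # Weight the distillation loss relative to the student losses (weights assumed).
              return lambda loss_dict: {
                  k: v * (0.5 if k == "kd_ce" else 1.0) for k, v in loss_dict.items()
              }
      ```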
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40326412
      
      fbshipit-source-id: 2fb0e818a7d5b120d62fb7aba314ff96cc7e10c5
  12. 18 Nov, 2022 1 commit
  13. 17 Nov, 2022 3 commits
    • Integrate PyTorch Fully Sharded Data Parallel (FSDP) · 02625ff8
      Anthony Chen authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/396
      
      Integrate PyTorch FSDP, which supports two sharding modes: 1. gradient + optimizer state sharding; 2. full model sharding (parameters + gradients + optimizer state). This feature is enabled in the train_net.py code path (a minimal sketch of the underlying FSDP API follows the API changes below).
      
      Sources
      * Integration follows this tutorial: https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html
      
      API changes
      * Add new config keys to support the new feature. Refer to mobile-vision/d2go/d2go/trainer/fsdp.py for the full list of config options
      * Add `FSDPCheckpointer` as a subclass of `QATCheckpointer` to support the special loading/saving logic for FSDP models
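      For background, a minimal sketch of the upstream PyTorch FSDP API that the two sharding modes map onto; this is the tutorial-style API, not the d2go config interface, and `build_model` is a hypothetical builder:
      ```python
      import torch.distributed as dist
      from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy

      dist.init_process_group("nccl")                        # assumes a distributed launcher
      model = FSDP(
          build_model(),                                     # hypothetical model builder
          sharding_strategy=ShardingStrategy.SHARD_GRAD_OP,  # mode 1: gradient + optimizer sharding
          # ShardingStrategy.FULL_SHARD also shards parameters (mode 2: full model sharding)
      )
      ```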
      
      Reviewed By: wat3rBro
      
      Differential Revision: D39228316
      
      fbshipit-source-id: 342ecb3bcbce748453c3fba2d6e1b7b7e478473c
    • add class to keep track of loss metadata and function to compute losses · 0316fed4
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/419
      
      This diff adds a metadata class `LayerLossMetadata` to help keep track of the losses we want to compute over layers. The class contains the type of loss, loss name, and layer names.
      
      This diff adds a helper function to iterate over a list of `LayerLossMetadata` and return a dict containing the results.
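      A hedged sketch of what such a metadata class and helper might look like; the field and function names here are assumptions reconstructed from this summary, not the actual d2go definitions:
      ```python
      from dataclasses import dataclass

      import torch.nn as nn

      @dataclass
      class LayerLossMetadataSketch:
          loss: nn.Module   # loss module to run on the recorded layer outputs
          name: str         # key used for this loss in the returned loss dict
          layer0: str       # recorded layer in the first model (e.g., student)
          layer1: str       # recorded layer in the second model (e.g., teacher)

      def compute_layer_losses(layer_losses, cache0, cache1):
          # Iterate over the metadata list and return a dict of named loss values.
          return {ll.name: ll.loss(cache0[ll.layer0], cache1[ll.layer1]) for ll in layer_losses}
      ```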
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40286564
      
      fbshipit-source-id: b269dc63cc90a437ca279379d759c3106016327c
    • add a helper to record layers in a model · 53c4c2c1
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/418
      
      This diff adds a function that can be used to add `CachedLayer`s to a model. The function iterates over named modules and dynamically mixes `CachedLayer` into the target modules.
      
      This diff adds a function to remove the cached layers.
      
      Reviewed By: Minione
      
      Differential Revision: D40285806
      
      fbshipit-source-id: 3137d19927d8fb9ec924a77c9085aea29fe94d5e
  14. 16 Nov, 2022 2 commits
    • support a layer that saves outputs · 120b463c
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/417
      
      This diff adds a layer `CachedLayer` which is meant to be used with dynamic mixin. This layer runs the original module and clones the output into a dictionary provided by the user.
      
      The main use case is distillation, where we dynamically mix these layers into the layers on which the user wants to compute various losses (see the conceptual sketch below).
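      A conceptual sketch of the behavior described above (not the d2go `CachedLayer` class itself, and it assumes tensor outputs): run the wrapped module as usual, but record a copy of its output into a user-provided dict:
      ```python
      import torch.nn as nn

      class CachedLayerSketch(nn.Module):
          """Wrap a module and record a copy of its (tensor) output into a shared dict."""

          def __init__(self, module: nn.Module, label: str, cache: dict):
              super().__init__()
              self.module, self.label, self.cache = module, label, cache

          def forward(self, *args, **kwargs):
              out = self.module(*args, **kwargs)
              # Clone so later mutation of `out` doesn't change the recorded value; kept in the
              # autograd graph so losses computed on it can still backpropagate.
              self.cache[self.label] = out.clone()
              return out
      ```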
      
      See subsequent diffs to get integration with distillation.
      
      Reviewed By: Minione
      
      Differential Revision: D40285573
      
      fbshipit-source-id: 2058deff8b96f63aebd1e9b9933a5352b5197111
    • update teacher to support models where device is a property · 0f27e90f
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/416
      
      Distillation assumes the teacher model has an attribute `device`. Sometimes this attribute is actually a property (e.g., GeneralizedRCNN), but there is no guarantee that it exists. We add a helper function to move the model to the device and add this attribute if needed.
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40283954
      
      fbshipit-source-id: 42921653eac8a79499e22edac29aa6aeac016e8a
  15. 15 Nov, 2022 1 commit
  16. 14 Nov, 2022 1 commit
  17. 11 Nov, 2022 1 commit
  18. 09 Nov, 2022 1 commit
  19. 08 Nov, 2022 1 commit
  20. 07 Nov, 2022 1 commit
  21. 03 Nov, 2022 2 commits
    • use SharedList as offload backend of DatasetFromList by default · 01c351bc
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/405
      
      - Use the non-hacky way (added in D40818736, https://github.com/facebookresearch/detectron2/pull/4626) to customize the offload backend for DatasetFromList.
      - In D2Go, switch to using `SharedList` (added in D40789062, https://github.com/facebookresearch/mobile-vision/pull/120) by default to save RAM, and optionally use `DiskCachedList` to save even more RAM.
      
      Local benchmarking results (with a ~2.4 GiB dataset) in dev mode:
      | RAM usage (RES, SHR) | No-dataset | Naive | NumpySerializedList | SharedList | DiskCachedList |
      | -- | -- | -- | -- | -- | -- |
      | Master GPU worker | 8.0g, 2.8g | 21.4g, 2.8g | 11.6g, 2.8g | 11.5g, 5.2g | -- |
      | Non-master GPU worker | 7.5g, 2.8g | 21.0g, 2.8g | 11.5g, 2.8g | 8.0g, 2.8g | -- |
      | Per data loader worker | 2.0g, 1.0g | 14.0g, 1.0g | 4.4g, 1.0g | 2.1g, 1.0g | -- |
      
      - The memory usage (RES, SHR) is taken from the `top` command. `RES` is the total memory used per process; `SHR` shows how much RAM inside `RES` can be shared.
      - Experiments are done using 2 GPUs and 2 data loader workers per GPU, so there are 6 processes in total; the **numbers are per-process**.
      - `No-dataset`: running the same job with a tiny dataset (only 4.47 MiB after serialization); since its RAM usage should be negligible, it shows the floor RAM usage.
      - The other experiments use a dataset of **2413.57 MiB** after serialization.
        - `Naive`: vanilla version that doesn't offload the dataset to other storage.
        - `NumpySerializedList`: this optimization was added a long time ago in D19896490. I recall that the RAM was indeed shared across data loader workers, but it seems there was a regression; now basically every process has its own copy of the data.
        - `SharedList`: enabled in this diff. It shows that only the master GPU worker needs extra RAM. Interestingly, it uses 3.5 GB more RAM than the other ranks while the data itself is 2.4 GB; I'm not sure whether that's overhead of the storage itself or of sharing it with other processes, but since a non-master GPU worker using `NumpySerializedList` also uses 11.5g of RAM, we probably don't need to worry too much about it.
        - `DiskCachedList`: didn't benchmark, should have no extra RAM usage.
      
      Using the above numbers for a typical 8-GPU, 4-worker training, and assuming the OS and other programs take 20-30 GB of RAM, the current training uses `11.6g * 8 + 4.4g * 8*4 = 233.6g` of RAM, which is on the edge of causing OOM on a 256 GB machine. This aligns with our experience that it supports a ~2 GB dataset. After the change, training will use only `(11.5g * 7 + 8.0g) + 2.1g * 8*4 = 155.7g` of RAM, which gives much more headroom, so we can train with a much larger dataset (e.g. 20 GB) or use more DL workers (e.g. 8 workers).
      
      Reviewed By: sstsai-adl
      
      Differential Revision: D40819959
      
      fbshipit-source-id: fbdc9d2d1d440e14ae8496be65979a09f3ed3638
    • replace torch.testing.assert_allclose with torch.testing.assert_close · c6666d33
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/409
      
      `assert_close` is preferred over `assert_allclose`: https://github.com/pytorch/pytorch/issues/61844
      
      `assert_allclose` was removed yesterday in https://github.com/pytorch/pytorch/pull/87974, causing tests to fail, e.g. https://github.com/facebookresearch/d2go/actions/runs/3389194553/jobs/5632021291
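      At call sites the change looks like this (note that `assert_close` has stricter defaults, such as dtype checks):
      ```python
      import torch

      actual = torch.tensor([1.0, 2.0])
      expected = torch.tensor([1.0, 2.0])

      # before (removed upstream): torch.testing.assert_allclose(actual, expected)
      torch.testing.assert_close(actual, expected)
      ```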
      
      Reviewed By: sstsai-adl
      
      Differential Revision: D41000306
      
      fbshipit-source-id: 7bd1cb9d5edf0a4609a909e2283df411bcabdf13
  22. 01 Nov, 2022 1 commit
  23. 31 Oct, 2022 2 commits
  24. 28 Oct, 2022 2 commits
  25. 27 Oct, 2022 2 commits
  26. 26 Oct, 2022 1 commit
    • swap the order of qat and layer freezing to preserve checkpoint values · 13b2fe71
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/399
      
      Freezing the model before running quantization causes an issue with loading a saved checkpoint, because fusing does not support FrozenBatchNorm2d (which means that the checkpoint could have a fused weight `conv.bn.weight` whereas the model would have an unfused weight `bn.weight`). The longer-term solution is to add FrozenBatchNorm2d to the fusing support, but there are some subtle issues there that will take time to fix:
      * we need to move FrozenBatchNorm2d out of D2 and into the mobile_cv lib
      * the current fuser has options to add new BN ops (e.g., FrozenBatchNorm2d), which we use with ops like SyncBN, but this is currently only tested for inference, so we need to write some additional checks for training
      
      The swap makes freezing compatible with QAT and should still work with standard models. One subtle potential issue is that the current BN swap assumes BN is a leaf node. If a user runs QAT without fusing BN, the BN will no longer be a leaf node, since it gains an activation_post_process module to record its output; the result is that BN will not be frozen in this specific case. This should not happen in practice, as BN is usually fused. A small adjustment to the BN swap would be to swap the BN regardless of whether it is a leaf node (but we would have to check whether the activation_post_process module is retained). Another long-term consideration is moving both freezing and quantization to modeling hooks so the user can decide the order.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D40496052
      
      fbshipit-source-id: 0d7e467b833821f7952cd2fce459ae1f76e1fa3b