"vscode:/vscode.git/clone" did not exist on "c2bc59d2b12b5ef02f49ca4af32aea1003893053"
  1. 22 Nov, 2022 1 commit
    • add default layer losses and loss combiner · 419974bb
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/421
      
      Add some reasonable defaults when running knowledge distillation
      * get_default_kd_image_classification_layer_losses => returns a cross-entropy loss between the output of the student classification layer and the teacher output (this is what the ImageNet distillation uses)
      * DefaultLossCombiner => a simple callable that multiplies the losses by user-supplied weights (a sketch follows this list)
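
      A minimal sketch of the loss-combiner idea, assuming the constructor takes a name-to-weight mapping (the exact signature in `distillation.py` may differ); `LayerLossMetadata`, which the default layer losses build on, is sketched under the D40286564 commit below:
      ```
      from typing import Dict

      import torch


      class DefaultLossCombiner:
          """Sketch: scale each entry of the loss dict by a configured weight."""

          def __init__(self, name_weight: Dict[str, float]):
              self.name_weight = name_weight

          def __call__(self, losses: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
              # Loss names without a configured weight keep a weight of 1.0.
              return {k: v * self.name_weight.get(k, 1.0) for k, v in losses.items()}


      # Example: down-weight the distillation loss relative to the student loss.
      combiner = DefaultLossCombiner({"loss_cls": 1.0, "kd_loss": 0.5})
      ```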
      
      Unsure if these should go in `distillation.py` or a separate place (e.g., defaults or classification)
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40330718
      
      fbshipit-source-id: 5887566d88e3a96d01aca133c51041126b2692cc
  2. 19 Nov, 2022 1 commit
    • kd algorithm · 9ec4f2bf
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/420
      
      Adds knowledge distillation as a generic algorithm that can be used by various projects.
      
      In eval mode, the algorithm just returns the result of the student model.
      
      In training mode, the algorithm feeds the input into both the student and the teacher model. The user provides a list of `LayerLossMetadata` that specifies the layers and the losses to run on those layers. The algorithm uses dynamic mixin to record the outputs of the relevant layers and computes the losses after both models have run.
      
      We provide student and teacher preprocessing as a placeholder until we support a more generic dataloader that can provide different inputs to the student and the teacher (e.g., as of now, if you want to give the teacher a larger input, the dataloader should return the large input and the student preprocessing can downsample it).
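
      A simplified sketch of the training/eval control flow described above. The real implementation attaches this behavior to the student via dynamic mixin rather than a wrapper module, and all names below are illustrative:
      ```
      import torch
      import torch.nn as nn


      class KDSketch(nn.Module):
          """Wrapper-style sketch of the KD forward logic (not the actual mixin)."""

          def __init__(self, student, teacher, preprocess_student, preprocess_teacher,
                       compute_layer_losses, combine_losses):
              super().__init__()
              self.student = student
              self.teacher = teacher
              self.preprocess_student = preprocess_student
              self.preprocess_teacher = preprocess_teacher
              self.compute_layer_losses = compute_layer_losses
              self.combine_losses = combine_losses

          def forward(self, batched_inputs):
              if not self.training:
                  # Eval: only the student runs and its output is returned as-is.
                  return self.student(self.preprocess_student(batched_inputs))
              # Training: run the teacher without gradients, then the student.
              # Cached layer outputs (recorded via dynamic mixin in the real
              # implementation) feed the per-layer distillation losses.
              with torch.no_grad():
                  self.teacher(self.preprocess_teacher(batched_inputs))
              student_losses = self.student(self.preprocess_student(batched_inputs))
              distill_losses = self.compute_layer_losses()
              # Assumes the student returns a loss dict in training mode.
              return self.combine_losses({**student_losses, **distill_losses})
      ```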
      
      We add the following functions as part of the user-customizable distillation helper (a sketch follows the list):
      * get_teacher => return a teacher that can be used directly by the KD algorithm
      * get_layer_losses => return a list of `LayerLossMetadata` that provides the layers and losses
      * get_preprocess_student_input => manipulate the output of the dataloader before passing to the student
      * get_preprocess_teacher_input => manipulate the output of the dataloader before passing to the teacher
      * get_combine_losses => since we may want to weight the student and distillation losses, return a function that can manipulate the loss_dict
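
      A sketch of a custom helper overriding these hooks. The base class, method contracts, and stand-in return values below are assumptions for illustration rather than the exact d2go API:
      ```
      import torch.nn as nn


      class MyDistillationHelper:  # stands in for the d2go base helper class
          def get_teacher(self) -> nn.Module:
              # Placeholder: real code would load or build the teacher model.
              return nn.Identity()

          def get_layer_losses(self):
              # Return a list of LayerLossMetadata pairing student/teacher layers
              # with losses (see the D40286564 commit below).
              return []

          def get_preprocess_student_input(self):
              # Identity: pass the dataloader output straight to the student.
              return lambda batched_inputs: batched_inputs

          def get_preprocess_teacher_input(self):
              # E.g., hand the teacher the full-resolution input unchanged.
              return lambda batched_inputs: batched_inputs

          def get_combine_losses(self):
              # Down-weight distillation losses relative to the student losses.
              def combine(loss_dict):
                  return {k: v * (0.5 if k.startswith("kd_") else 1.0)
                          for k, v in loss_dict.items()}

              return combine
      ```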
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40326412
      
      fbshipit-source-id: 2fb0e818a7d5b120d62fb7aba314ff96cc7e10c5
  3. 17 Nov, 2022 2 commits
    • add class to keep track of loss metadata and function to compute losses · 0316fed4
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/419
      
      This diff adds a metadata class `LayerLossMetadata` to help keep track of the losses we want to compute over layers. The class contains the loss type, the loss name, and the layer names.
      
      It also adds a helper function that iterates over a list of `LayerLossMetadata` and returns a dict containing the computed losses.
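
      A sketch of the metadata class and the loss-computation helper described here; field and argument names are inferred from the summary and may not match the source exactly:
      ```
      from dataclasses import dataclass
      from typing import Dict, List

      import torch
      import torch.nn as nn


      @dataclass
      class LayerLossMetadata:
          loss: nn.Module   # the loss module to run
          name: str         # key for this loss in the returned dict
          layer0: str       # name of the student layer
          layer1: str       # name of the teacher layer


      def compute_layer_losses(
          layer_losses: List[LayerLossMetadata],
          layer0_cache: Dict[str, torch.Tensor],
          layer1_cache: Dict[str, torch.Tensor],
      ) -> Dict[str, torch.Tensor]:
          # Evaluate each loss on the cached outputs of its student/teacher layers.
          return {
              ll.name: ll.loss(layer0_cache[ll.layer0], layer1_cache[ll.layer1])
              for ll in layer_losses
          }
      ```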
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40286564
      
      fbshipit-source-id: b269dc63cc90a437ca279379d759c3106016327c
    • add a helper to record layers in a model · 53c4c2c1
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/418
      
      This diff adds a function that can be used to add `CachedLayer`s to a model. The function iterates over named modules and dynamically mixes `CachedLayer` into the target modules.
      
      It also adds a function to remove the cached layers.
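
      A self-contained sketch of the record/remove idea. The real code mixes in the `CachedLayer` class (see the D40285573 entry below); here the cached subclass is built inline, and the helper names are illustrative:
      ```
      from typing import Dict, List

      import torch
      import torch.nn as nn


      def record_layers(model: nn.Module, layer_names: List[str]) -> Dict[str, torch.Tensor]:
          """Dynamically subclass the named modules so their outputs are cached."""
          cache: Dict[str, torch.Tensor] = {}
          for name, module in model.named_modules():
              if name not in layer_names:
                  continue
              original_cls = module.__class__

              def forward(self, *args, _name=name, _orig=original_cls, **kwargs):
                  output = _orig.forward(self, *args, **kwargs)
                  cache[_name] = output.clone()  # assumes tensor outputs
                  return output

              module.__class__ = type(
                  f"Cached{original_cls.__name__}", (original_cls,), {"forward": forward}
              )
          return cache


      def unrecord_layers(model: nn.Module, layer_names: List[str]) -> None:
          """Restore the original class of every module that was swapped."""
          for name, module in model.named_modules():
              if name in layer_names and module.__class__.__name__.startswith("Cached"):
                  module.__class__ = module.__class__.__bases__[0]
      ```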
      
      Reviewed By: Minione
      
      Differential Revision: D40285806
      
      fbshipit-source-id: 3137d19927d8fb9ec924a77c9085aea29fe94d5e
  4. 16 Nov, 2022 2 commits
    • support a layer that saves outputs · 120b463c
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/417
      
      This diff adds a layer, `CachedLayer`, which is meant to be used with dynamic mixin. The layer runs the original module and clones the output into a dictionary provided by the user.
      
      The main use case is distillation, where we dynamically mix these layers into the layers on which the user wants to compute various losses.
      
      See subsequent diffs for the integration with distillation.
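
      A minimal sketch of a `CachedLayer`-style mixin; the method name `init_cache` and the example wiring are illustrative:
      ```
      from typing import Dict

      import torch
      import torch.nn as nn


      class CachedLayer(nn.Module):
          """Runs the wrapped module's forward and clones the output into a dict."""

          def init_cache(self, label: str, cache: Dict[str, torch.Tensor]) -> None:
              self.cache_label = label
              self.cache = cache

          def forward(self, *args, **kwargs):
              # super() resolves to the original module class once this mixin is
              # placed before it in the dynamically created subclass's MRO.
              output = super().forward(*args, **kwargs)
              self.cache[self.cache_label] = output.clone()  # assumes tensor outputs
              return output


      # Example: dynamically mix CachedLayer into a single nn.Linear instance.
      layer = nn.Linear(4, 2)
      layer.__class__ = type("CachedLinear", (CachedLayer, nn.Linear), {})
      shared_cache: Dict[str, torch.Tensor] = {}
      layer.init_cache("fc", shared_cache)
      out = layer(torch.randn(1, 4))
      assert torch.equal(shared_cache["fc"], out)
      ```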
      
      Reviewed By: Minione
      
      Differential Revision: D40285573
      
      fbshipit-source-id: 2058deff8b96f63aebd1e9b9933a5352b5197111
    • update teacher to support models where device is a property · 0f27e90f
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/416
      
      Distillation assumes the teacher model has an attribute `device`. Sometimes this attribute is actually a property (e.g., GeneralizedRCNN), but there is no guarantee that it exists. We add a helper function that moves the model to the device and adds this attribute if needed.
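
      A minimal sketch of such a helper, assuming the name `_set_device` for illustration:
      ```
      import torch
      import torch.nn as nn


      def _set_device(model: nn.Module, device: torch.device) -> nn.Module:
          model = model.to(device)
          if not hasattr(model, "device"):
              # Only add the attribute when the model does not already expose
              # `device` (e.g., GeneralizedRCNN defines it as a property).
              model.device = device
          return model
      ```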
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40283954
      
      fbshipit-source-id: 42921653eac8a79499e22edac29aa6aeac016e8a
  5. 15 Nov, 2022 1 commit
  6. 28 Sep, 2022 1 commit
    • support pytorch checkpoint as teacher model using config · dc176d58
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/371
      
      In a previous iteration of this diff, we were specifying the teacher model in the same config as the student model, something like:
      ```
      # config.py
      MODEL:
        FBNET_V2:
        ...
      DISTILLATION:
        TEACHER:
          MODEL:
            FBNET_V2:
            ...
            WEIGHTS: /path/to/teacher/weights
      ...
      ```
      
      This leads to some oddities in the code, like having to maintain a default config that adds all the required keys for the distillation teacher model.
      
      In this diff, we just let the user supply a teacher config (and optionally runner_name and overwrite opts) and use the supplied runner to build the model:
      ```
      # new_config.py
      MODEL:
        FBNET_V2:
      ...
      DISTILLATION:
        TEACHER:
          CONFIG_FNAME: /path/to/teacher/config
          RUNNER_NAME:
      ...
      ```
      
      This should make it very easy to specify the teacher, since the user can potentially just reuse the trained_config generated by d2go.
      
      Reviewed By: newstzpz
      
      Differential Revision: D37640041
      
      fbshipit-source-id: 088a636c96f98279c9a04e32d1674f703451aec3
  7. 29 Jun, 2022 1 commit
  8. 16 Jun, 2022 1 commit
    • add modeling hook algo and helper · f3fc01aa
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/299
      
      This implements the first iteration of generalized distillation in D2Go (https://github.com/facebookresearch/d2go/commit/87374efb134e539090e0b5c476809dc35bf6aedb). The functionality is separated into the following:
      
      => Adding distillation functionality without the user changing their meta architecture:
      
      class DistillationModelingHook
      * This is an implementation detail that we hide from the user.
      * We use dynamic mixin to add functionality to the user's model. In this way, the original (student) model retains all of its attributes, but the mixin class overrides the forward (and provides more functionality such as teacher updates).
      * Building the teacher currently only supports loading a TorchScript model; PyTorch compatibility comes in later diffs
      
      => Implementing distillation methods
      
      class DistillationAlgorithm
      * The user can use a default algorithm (e.g., LabelDistillation) or create their own. This is where we specify the overridden forward func of the model and any other distillation requirements (e.g., updating the weights of the teacher model).
      * The basic LabelDistillation allows a user to use a teacher model during training to relabel the ground truth
      
      => User customization
      
      class DistillationHelper
      * This is what we expect the user to customize. For example, the user probably needs to write their own pseudo_labeler that takes batched_inputs and relabels them with the teacher
      
      Both DistillationHelper and DistillationAlgorithm use a registry so that users can add their customizations in their own code and enable them by specifying them in the config.
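
      A sketch of the registry pattern described above; the registry objects, base class, and method names are illustrative rather than the exact d2go API:
      ```
      from typing import Callable, Dict, List

      DISTILLATION_HELPER_REGISTRY: Dict[str, type] = {}


      def register_distillation_helper(cls: type) -> type:
          """Register a helper so it can be selected by name from the config."""
          DISTILLATION_HELPER_REGISTRY[cls.__name__] = cls
          return cls


      class BaseDistillationHelper:
          def get_pseudo_labeler(self) -> Callable:
              raise NotImplementedError


      @register_distillation_helper
      class MyTaskDistillationHelper(BaseDistillationHelper):
          def get_pseudo_labeler(self) -> Callable:
              def pseudo_labeler(teacher, batched_inputs: List[dict]) -> List[dict]:
                  # Relabel the ground truth in place with the teacher's predictions.
                  for inp, pred in zip(batched_inputs, teacher(batched_inputs)):
                      inp["instances"] = pred
                  return batched_inputs

              return pseudo_labeler


      # The config would then reference the helper by its registered name, and
      # the framework would look it up here at build time.
      helper_cls = DISTILLATION_HELPER_REGISTRY["MyTaskDistillationHelper"]
      ```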
      
      Reviewed By: newstzpz
      
      Differential Revision: D36708227
      
      fbshipit-source-id: bc427c5d42d0c7ff4d839bf10782efac24dea107