  1. 25 Feb, 2022 3 commits
  2. 23 Feb, 2022 4 commits
  3. 15 Feb, 2022 1 commit
  4. 12 Feb, 2022 1 commit
  5. 11 Feb, 2022 1 commit
  6. 10 Feb, 2022 1 commit
  7. 07 Feb, 2022 1 commit
  8. 04 Feb, 2022 1 commit
  9. 01 Feb, 2022 2 commits
    • 
      Add the permutation related support as the extension for asp lib. (#1194) · 89edb819
      ChongyuNVIDIA authored
      * Add the permutation related support as the extension for asp lib.
      
      * [Fix] Track the permutation sequence for progressive channel swap strategy.
      
      * Fix the corner case that one layer is not sparse, but need to apply permutation due to its siblings.
      
      * Fix the deprecated functions in ASP unit tests.
      
      * Fix the sparsity info typo in ASP lib.
      
      * [Enhancement] Set the identical random seed for all GPUs to make sure the same results generated in permutation search.
      
      * Update the README.md with identical random seed setting and NeurIPS info.
      
      * Integrate the Pybind11 enhancement of permutation search into ASP lib.
      89edb819
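The identical-seed change above is meant to make every GPU's permutation search deterministic and mutually consistent. A minimal sketch of the idea, assuming a plain `random`-based search (the real ASP lib seeds torch/numpy on each GPU; `SEED` and `search_permutation` are illustrative names, not ASP's API):

```python
import random

SEED = 1234  # illustrative value, not ASP's default


def search_permutation(n):
    # Seed identically on every worker so each GPU's search explores the
    # same random sequence and therefore lands on the same permutation.
    random.seed(SEED)
    perm = list(range(n))
    random.shuffle(perm)
    return perm
```

Because the seed is fixed before each search, two workers calling `search_permutation(64)` independently get byte-identical results, which is the property the commit relies on.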
    • 
      transformer: Allows for custom sync context in no pipelining forward backward function (#1281) · 79c01877
      Masaki Kozuki authored
      * add kwarg of `custom_sync_context_handler`
      
      * add kwargs to ignore `custom_sync_context_handler` which was mistakenly passed to fwd/bwd funcs
      79c01877
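The `custom_sync_context_handler` kwarg above lets callers substitute their own gradient-sync context in the no-pipelining forward-backward loop. A hedged sketch of the pattern, assuming a `nullcontext` fallback when no handler is given (`forward_backward_no_pipelining` and its body here are illustrative stand-ins, not apex's actual implementation):

```python
from contextlib import contextmanager, nullcontext


def forward_backward_no_pipelining(num_steps, custom_sync_context_handler=None):
    # Fall back to a no-op context when the caller supplies no handler.
    handler = custom_sync_context_handler or nullcontext
    losses = []
    with handler():
        for step in range(num_steps):
            losses.append(float(step))  # stand-in for forward/backward work
    return losses


# A custom handler can, e.g., defer gradient all-reduce until exit.
events = []


@contextmanager
def my_sync_context():
    events.append("enter")  # e.g. disable grad sync here
    yield
    events.append("exit")   # e.g. trigger deferred all-reduce here
```

Calling `forward_backward_no_pipelining(2, my_sync_context)` runs all steps inside the user's context, entering once before the first step and exiting once after the last.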
  10. 31 Jan, 2022 2 commits
  11. 29 Jan, 2022 1 commit
  12. 28 Jan, 2022 2 commits
    • 
      small changes in test and logger format (#1278) · b1c75f6f
      Masaki Kozuki authored
      * cosmetic refactor in test
      
      * log with PID
      
      * log more info: rank, pid, filename, lineNo
      b1c75f6f
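The logger-format change above adds rank, PID, filename, and line number to each record. A minimal sketch of such a format, assuming the rank is read from a `RANK` environment variable rather than `torch.distributed` (the logger name and format string are illustrative, not apex's exact format):

```python
import logging
import os

# Assumption: rank comes from the RANK env var; a real setup would use
# torch.distributed.get_rank() after process-group initialization.
rank = int(os.environ.get("RANK", 0))

# %(process)d is the PID; %(filename)s and %(lineno)d locate the call site.
formatter = logging.Formatter(
    fmt=f"rank={rank} pid=%(process)d %(filename)s:%(lineno)d %(levelname)s %(message)s"
)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger = logging.getLogger("pipeline_parallel")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("pipeline stage ready")
```

Each line then identifies exactly which rank, process, and source location emitted it, which is what makes multi-process pipeline logs readable.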
    • 
      allow for `None` batch (#1280) · a960fe8c
      Masaki Kozuki authored
      * have get_kth_microbatch deal with None batch
      
      * broadcast based on tensor parallel rank
      
      * dtype
      
      * remove unnecessary .cuda()
      
      Processes with tensor parallel rank != 0 don't need to prepare one or more `torch.utils.data.DataLoader` instances, which means the `batch` argument of the `get_kth_microbatch` function can be `None`, but the current implementation doesn't allow for it.
      a960fe8c
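The fix described above makes the microbatch slicer tolerate `batch=None` on ranks that own no `DataLoader`. A hedged sketch of the shape of such a function, assuming a batch is a list of per-field sequences (the signature and slicing are illustrative, not apex's exact code):

```python
def get_kth_microbatch(batch, k, micro_batch_size):
    # Ranks with tensor parallel rank != 0 have no DataLoader and pass
    # batch=None; short-circuit instead of trying to slice it.
    if batch is None:
        return None
    start = k * micro_batch_size
    # Slice each field of the batch to the k-th microbatch window.
    return [x[start:start + micro_batch_size] for x in batch]
```

With the guard in place, rank-0 slices real data while the other tensor-parallel ranks simply pass `None` through and receive their inputs via broadcast.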
  13. 21 Jan, 2022 2 commits
  14. 19 Jan, 2022 1 commit
  15. 13 Jan, 2022 1 commit
  16. 17 Dec, 2021 1 commit
    • 
      Add an argument of `dtype` to forward_backward functions to specify the dtype... · b88c507e
      Masaki Kozuki authored
      Add an argument of `dtype` to forward_backward functions to specify the dtype used in p2p comm (#1249)
      
      * let users specify dtype for p2p comm taking the possibility of O2 style AMP into account
      
      * add `dtype` argument to forward_backward functions
      
      * fix
      
      * better message
      
      * add docstring of dtype
      
      * add a link to dtype logic of p2p comm
      b88c507e
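The `dtype` argument above lets callers pin the tensor dtype used for point-to-point communication instead of inferring it. A hedged sketch of the resolution order, assuming an explicit `dtype` wins over AMP flags (`resolve_p2p_dtype` and the string dtype names are illustrative stand-ins for apex's torch-dtype logic):

```python
def resolve_p2p_dtype(dtype=None, fp16=False, bf16=False):
    # An explicit user-supplied dtype takes precedence.
    if dtype is not None:
        return dtype
    # O2-style AMP keeps activations in half precision, so p2p buffers
    # should match; otherwise fall back to full precision.
    if fp16:
        return "float16"
    if bf16:
        return "bfloat16"
    return "float32"
```

Making the dtype explicit matters because sender and receiver stages must allocate p2p buffers of identical dtype, and guessing wrong silently corrupts the handoff.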
  17. 16 Dec, 2021 2 commits
  18. 15 Dec, 2021 2 commits
  19. 14 Dec, 2021 2 commits
    • 
      Faster `--fast_multihead_attn` build (#1245) · 7ec8ed67
      Masaki Kozuki authored
      * merge .so files
      
      * odr
      
      * fix build
      
      * update import
      
      * apply psf/black with max line length of 120
      
      * update
      
      * fix
      
      * update
      
      * build fixed again but undefined symbol again
      
      * fix 2, still layer norm grad is undefined
      
      * remove unused cpp files
      
      * without layer_norm.cuh, import works
      
      * import fast_multihead_attn works...
      
      but why? Was the unnecessary `#include "layer_norm.cuh"` the culprit
      causing the shared objects to fail to link `HostApplyLayerNorm` and
      `HostLayerNormGradient`?
      
      * clean up layer norm
      7ec8ed67
    • 
      check size in kth microbatch (#1247) · ed94d0bb
      eqy authored
      ed94d0bb
  20. 10 Dec, 2021 2 commits
    • 
      Cherry-pick Megatron-LM's changes in pipeline model parallel for T5 (#1232) · 0e25fcc4
      Masaki Kozuki authored
      * update parallel_state
      
      * update pipeline common funcs - forward_step and backward_step
      
      * update pipelining w/o interleaving
      
      * type hint
      
      * merge utils into without_interleaving
      
      Motivation: functions in utils are only used by
      forward_backward_pipelining_without_interleaving
      
      * fix handling of `model_type`
      
      * fix import of DDP
      
      * update set_input_tensor method
      
      * fix
      
      * cosmetic
      
      * update model
      
      * refactor pipeline test scripts
      0e25fcc4
    • 
      Minimal gpt pipeline parallel (builds off of minimal_bert_pipeline_parallel)... · ab7af058
      Rishi Puri authored
      
      Minimal gpt pipeline parallel (builds off of minimal_bert_pipeline_parallel) including cpu-offloading (#1222)
      
      * minimal bert pipeline parallel test
      
      * first draft of gpt minimal test
      
      * framework to scale up the gpt2 test for variety of distributed setups
      
      * adding gpt_minimal_test to list of multigpu tests
      Co-authored-by: Eddie Yan <eddiey@nvidia.com>
      Co-authored-by: riship <riship@nvidia.com>
      ab7af058
  21. 09 Dec, 2021 2 commits
  22. 19 Nov, 2021 3 commits
  23. 10 Nov, 2021 2 commits