1. 27 Oct, 2021 1 commit
    • Pipeline Model Parallel (#1202) · 63d5dd63
      Masaki Kozuki authored
      
      
      * Init apex.ppu (pipeline model parallel utility)
      
      Reference commit:
      
      ```
      commit 5ab646376d67831601d5552c193241d017f1b35c (HEAD -> main, internal/main)
      Merge: 14f2c684 7b293d9b
      Author: Mohammad Shoeybi <mshoeybi@nvidia.com>
      Date:   Wed Sep 22 22:57:54 2021 -0700
      
          Merge branch 'add_BOS' into 'main'
      
          Add Beginning of Sentence token option and adding semaphore while multi-threading to prevent crashes and hangs due to connection keep-alives
      
          See merge request ADLR/megatron-lm!328
      ```
      
      * removing get_args and replace import - phase 1
      
      * removing get_args and replace import - phase 2
      
      * move ppu to apex.transformer.pipeline_parallel
      
      * update two __init__.py
      
      * update READMEs
      
      * mpu -> parallel_state & tensor_parallel
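
      A hedged sketch of initialization after this rename (keyword spellings are
      assumptions based on the Megatron-LM lineage):

      ```
      import torch
      from apex.transformer import parallel_state

      # Assumed usage: process-group setup now lives in parallel_state
      # instead of a monolithic mpu module.
      torch.distributed.init_process_group(backend="nccl")
      parallel_state.initialize_model_parallel(
          tensor_model_parallel_size_=2,
          pipeline_model_parallel_size_=2,
      )
      ```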
      
      * fix
      
      * remove non-pipeline files
      
      * separate schedules.py - phase 1
      
      * dissect schedules.py
      
      * data_iterators -> batch
      
      * remove optimizer from forward_backward_step funcs
      
      * init test
      
      * Apply 2 suggestion(s) to 2 file(s)
      
      * fix cyclic import
      
      * fix syntax of Callable
      
      * fix - 1
      
      * move directory, as `testing` is used for pipeline-parallel tests as well
      
      * add some functions for num microbatches calculator
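
      A minimal sketch of the arithmetic such a calculator performs (names here
      are illustrative, not the actual apex functions):

      ```
      # Hypothetical helper: the global batch is first split across
      # data-parallel ranks, then each rank's share is split into
      # micro batches.
      def get_num_microbatches(global_batch_size, micro_batch_size,
                               data_parallel_size):
          per_rank = global_batch_size // data_parallel_size
          if per_rank % micro_batch_size != 0:
              raise ValueError(
                  "global batch size must be divisible by "
                  "micro batch size * data parallel size")
          return per_rank // micro_batch_size
      ```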
      
      * model is a list in pipeline parallel
      
      * skip build num microbatch calculator
      
      * fix test
      
      * assert -> raise
      
      * skip args printing
      
      * specify tensor shape everywhere even if None - phase 1
      
      * private timers
      
      * passing tensor shape & dtype around
      
      * update dtype handling by introducing helper func
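
      A sketch of what such a helper can look like (the name and arguments are
      assumptions, not the actual apex API):

      ```
      import torch

      # Hypothetical helper: resolve the dtype used for p2p activation
      # transfers in one place instead of scattering the logic around.
      def get_p2p_dtype(params_dtype, fp32_residual_connection=False):
          if fp32_residual_connection:
              return torch.float32
          return params_dtype
      ```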
      
      * write helper func to reduce cyclomatic complexity
      
      * remove duplicate
      
      * update
      
      * move split_tensor_into_1d_equal_chunks to avoid cyclic import
      
      * tmp
      
      * cosmetic
      
      * move gather_split_1d_tensor to avoid cyclic imports
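
      Assumed semantics of the two helpers moved in the commits above: split a
      tensor into equal 1-D chunks across a process group, and all-gather the
      local chunks back into one flat tensor. A rough sketch, not the actual
      apex implementation:

      ```
      import torch
      import torch.distributed as dist

      def split_tensor_into_1d_equal_chunks(tensor, group=None):
          # each rank keeps one contiguous 1-D slice of the flattened tensor
          chunk = tensor.numel() // dist.get_world_size(group=group)
          start = dist.get_rank(group=group) * chunk
          return tensor.view(-1)[start:start + chunk]

      def gather_split_1d_tensor(tensor, group=None):
          # inverse of the split: all-gather every rank's chunk into one buffer
          world = dist.get_world_size(group=group)
          out = torch.empty(tensor.numel() * world, dtype=tensor.dtype,
                            device=tensor.device)
          dist.all_gather(list(out.chunk(world)), tensor.contiguous(), group=group)
          return out
      ```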
      
      * remove debug print
      
      * add outer loop
      
      * early return if possible
      
      * cosmetic
      
      * passing around tensor shape
      
      * refactor test
      
      * add script to learn batch sampler behavior
      
      * update
      
      * minibatch splitter
      
      * add minibatch splitter
      
      * split minibatch into microbatches
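
      A minimal sketch of the splitter's job (the function name is illustrative):

      ```
      # Slice every tensor in a minibatch along the batch dimension into
      # micro-batch-sized pieces.
      def split_into_microbatches(batch, micro_batch_size):
          any_value = next(iter(batch.values()))
          num = any_value.size(0) // micro_batch_size
          return [
              {key: value[i * micro_batch_size:(i + 1) * micro_batch_size]
               for key, value in batch.items()}
              for i in range(num)
          ]
      ```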
      
      * minor changes
      
      * uncomment split batch for test sake
      
      * set as attribute
      
      * study the behavior of no pipelining
      
      * debug 1
      
      * reflect test util namespace change
      
      * update readme
      
      * cosmetic in test
      
      * add model build helper func for interleaving sched
      
      * adding model builder from megatron
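
      The builder's contract, sketched with assumed argument names: under an
      interleaved (virtual pipeline) schedule each rank owns several model
      chunks, so the builder returns a list of modules. A hypothetical
      illustration, not the actual apex implementation:

      ```
      from typing import Callable, List, Optional

      import torch

      def build_model(
          model_provider_func: Callable[..., torch.nn.Module],
          virtual_pipeline_model_parallel_size: Optional[int] = None,
      ) -> List[torch.nn.Module]:
          if virtual_pipeline_model_parallel_size is None:
              return [model_provider_func()]
          # one chunk per virtual pipeline stage owned by this rank
          return [model_provider_func()
                  for _ in range(virtual_pipeline_model_parallel_size)]
      ```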
      
      * possible cyclic import
      
      * fix
      
      * enable interleaving test, but it fails even in forward-only mode
      
      * fix batch preparation
      
      * add explanation
      
      * print data parallel size
      
      * fix typo
      
      * Add Megatron-style GPT model by Rishi
      Co-authored-by: Rishi Puri <riship@nvidia.com>
      
      * update
      
      * type hint for jit
      
      * fix forward_backward_no_pipelining test
      
      * pipeline forward/backward seems to hang when not forward-only
      
      * fix typo
      
      * debug
      
      * add p2p test
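
      The shape of such a test, as a minimal send/recv sketch between adjacent
      pipeline stages (plain torch.distributed, not the apex p2p module):

      ```
      import torch
      import torch.distributed as dist

      def exchange_forward(tensor_shape, dtype, rank, world_size):
          # every stage except the last sends its activations downstream
          if rank != world_size - 1:
              dist.send(torch.ones(tensor_shape, dtype=dtype).cuda(), dst=rank + 1)
          # every stage except the first receives from the previous stage
          if rank != 0:
              buf = torch.empty(tensor_shape, dtype=dtype).cuda()
              dist.recv(buf, src=rank - 1)
              return buf
      ```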
      
      * simplify
      
      * fix
      
      * tentative
      
      * set both tensor (tmp) and pipeline (pmp) model parallel sizes to 1
      
      * init
      
      * fix typo
      
      * fix
      
      * fix path of divide
      
      * set seed for tensor model parallel (tmp)
      
      * update upon Eddie comment
      
      * fix typo
      
      * adding failing data loader test
      
      * fix
      
      * megatron still failing
      
      * check in
      
      * with the new nested-loop order, interleaving seems fine
      
      * cosmetic change
      
      * make `forward_backward_pipelining_with_interleaving` private
      
      * warn users that interleaving sched is unstable
      
      * move noop handler to no pipelining
      
      * comment out rank_print
      
      * make `build_model` more flexible
      
      * skip megatron test tentatively
      
      * correctly comment out rank_print
      
      * correctly comment out rank_print
      
      * correctly comment out rank_print
      
      * skip appropriately
      
      * remove wip p2p comm test
      
      * update type hint of model_provider_func
      
      * disable tf32 in each test script
      
      * skip interleaving w/ backward
      
      * rename as mpu is the old name
      
      * remove broken case
      
      * expose build_model func
      
      * delete `dist.ring_exchange` func call and `use_ring_exchange` argument
      
      * nit fixes
      
      * check in
      
      * remove unused file
      
      * update the list
      
      * update tensor shape
      
      * remove mixed dtype case
      
      * use torch.distributed.run
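
      (e.g. `python -m torch.distributed.run --nproc_per_node=2 <test_script.py>`,
      with the script path depending on the test being launched)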
      
      * 2020 -> 2021
      
      * another 2020 -> 2021
      
      * docstring & type hint
      
      * fix teardown
      
      * update
      
      * change to experimental
      
      * check if warned
      Co-authored-by: Rishi Puri <riship@nvidia.com>
      Co-authored-by: Eddie Yan <eddiey@nvidia.com>
  2. 23 Oct, 2021 1 commit
  3. 08 Oct, 2021 1 commit
  4. 06 Oct, 2021 1 commit
  5. 02 Oct, 2021 1 commit
  6. 15 Apr, 2021 1 commit
    • Add unit tests for Fused NovoGrad (#1065) · 59d2f7ac
      Sudhakar Singh authored
      * Add unit tests for fused-novograd
      
      * Fix: tensors should reside on the same device
      
      * Fix: the CUDA stream should be obtained on the same device on which the tensors reside. Found this while debugging the fused NovoGrad multi-device unit test
      
      * fixed issues mentioned in the comments
  7. 01 Dec, 2020 1 commit
  8. 05 Aug, 2020 1 commit
  9. 23 Jun, 2020 3 commits
  10. 14 May, 2020 1 commit
  11. 30 Apr, 2020 1 commit
    • Improvements to apex.mlp (#804) · 31aceeaa
      Deyu Fu authored
      * update fused bias relu backward kernel
      
      * add support for not requiring first-layer dgrad
      
      * fix bug: wrong layer in requires grad
      
      * add infrastructure for optional bias and activation; currently only supports no bias and no relu
      
      * make bias and relu optional separately
      
      * add sigmoid activation option
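
      A hedged usage sketch of the options added above; the exact keyword names
      here (`bias`, `activation`) are assumptions:

      ```
      import torch
      from apex.mlp import MLP

      # assumed keywords: bias and the activation are now chosen independently
      mlp = MLP([512, 1024, 512], bias=False, activation="sigmoid").cuda()
      x = torch.randn(64, 512, device="cuda", requires_grad=True)
      mlp(x).sum().backward()
      ```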
  12. 22 Apr, 2020 2 commits
    • Deyu Fu
    • Fix LARC with mixed precision (#793) · 2ec84ebd
      Vinicius Reis authored
      The LARC optimizer wraps an underlying optimizer and then needs to be passed
      to amp.initialize for mixed precision. There were three different crashes
      happening in this situation; this change fixes all of them and adds a unit test.
      
      I don't know if the 'LARC' in sys.modules check ever worked. In my setup, the
      entry in sys.modules is 'apex.parallel.LARC'. Checking if the variable is
      defined seems more reliable though.
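
      The setup being fixed, as a minimal sketch:

      ```
      import torch
      from apex import amp
      from apex.parallel.LARC import LARC

      model = torch.nn.Linear(16, 16).cuda()
      base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
      optimizer = LARC(base_optimizer)  # LARC wraps the underlying optimizer
      # the wrapped optimizer is then handed to amp for mixed precision
      model, optimizer = amp.initialize(model, optimizer, opt_level="O2")
      ```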
  13. 31 Mar, 2020 1 commit
  14. 27 Feb, 2020 1 commit
  15. 03 Oct, 2019 1 commit
  16. 03 Sep, 2019 1 commit
    • Fix issues in fused_adam (#469) · 7fa74925
      Deyu Fu authored
      * move import of amp_C to __init__()
      
      * make fp16/32 separate lists to support mixed param types, disable double test
      
      * make zero_grad consistent between adam/novograd/lamb
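
      The "separate lists" idea, as an illustrative sketch (not the actual apex
      code): group parameters by dtype so the fused multi-tensor kernels see
      homogeneous lists.

      ```
      import torch

      def split_params_by_dtype(params):
          fp16, fp32 = [], []
          for p in params:
              (fp16 if p.dtype == torch.float16 else fp32).append(p)
          return fp16, fp32
      ```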
  17. 27 Aug, 2019 1 commit
    • Enable Checkpointing (#420) · dec4fdd6
      ptrblck authored
      * add state_dict, load_state_dict
      
      * add test_restoring, test_loss_scale_decrease
      
      * disable amp outputs for checkpoint tests
      
      * add test for amp.state_dict, cleanup
      
      * add state_dict patch, add test
      
      * fixed testing, cleanup
      
      * add readme for checkpointing
      
      * add docs to source/amp
      
      * add review changes to doc
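
      The resulting checkpointing pattern (this is the pattern the amp README
      documents; the toy model and file name below are placeholders):

      ```
      import torch
      from apex import amp

      model = torch.nn.Linear(10, 10).cuda()
      optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
      model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

      # save amp state alongside the model and optimizer...
      checkpoint = {
          "model": model.state_dict(),
          "optimizer": optimizer.state_dict(),
          "amp": amp.state_dict(),
      }
      torch.save(checkpoint, "amp_checkpoint.pt")

      # ...and restore all three after a fresh amp.initialize
      checkpoint = torch.load("amp_checkpoint.pt")
      model.load_state_dict(checkpoint["model"])
      optimizer.load_state_dict(checkpoint["optimizer"])
      amp.load_state_dict(checkpoint["amp"])
      ```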
  18. 17 Aug, 2019 1 commit
  19. 15 Aug, 2019 1 commit
  20. 13 Aug, 2019 2 commits
  21. 12 Aug, 2019 1 commit
  22. 08 Aug, 2019 1 commit
  23. 06 Aug, 2019 1 commit
    • Clean up layer norm tests (#418) · 3ef01fae
      ngimel authored
      * Bug fix for non-affine layer-norm + add backward unit test
      
      * clean up tests and add tests for a large batch
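
      The case the bug fix covers, as a short sketch: a non-affine fused layer
      norm (no learnable weight or bias) run through backward.

      ```
      import torch
      from apex.normalization import FusedLayerNorm

      ln = FusedLayerNorm(1024, elementwise_affine=False).cuda()
      x = torch.randn(32, 1024, device="cuda", requires_grad=True)
      ln(x).sum().backward()
      ```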
  24. 03 Jul, 2019 1 commit
  25. 31 May, 2019 1 commit
  26. 27 May, 2019 2 commits
  27. 16 May, 2019 1 commit
  28. 02 May, 2019 1 commit
  29. 10 Apr, 2019 3 commits
  30. 04 Apr, 2019 1 commit
    • WIP: Handle arbitrary combinations of optimizers/models/losses (#232) · 3f87614f
      mcarilli authored
      * Refactor to allow more flexible treatment of multiple optimizers/models/losses
      
      * Adding _process_optimizers.py
      
      * Created L0 tests (now passing).
      
      * fix: minor print typo (#234)
      
      * make L1 results easier to read
      
      * L0 multiple model/optimizer/loss test fleshed out
      
      * Adding test that master params remain synced across distributed processes
      
      * Docstring updates
      
      * Docstring updates
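
      The resulting API shape: amp.initialize accepts lists of models and
      optimizers, and each loss gets its own loss_id under scale_loss. A short
      sketch (toy models for illustration):

      ```
      import torch
      from apex import amp

      model0 = torch.nn.Linear(8, 8).cuda()
      model1 = torch.nn.Linear(8, 8).cuda()
      opt0 = torch.optim.SGD(model0.parameters(), lr=0.1)
      opt1 = torch.optim.SGD(model1.parameters(), lr=0.1)

      # lists in, lists out; num_losses sizes the internal loss scalers
      [model0, model1], [opt0, opt1] = amp.initialize(
          [model0, model1], [opt0, opt1], opt_level="O1", num_losses=2)

      x = torch.randn(4, 8, device="cuda")
      with amp.scale_loss(model0(x).sum(), opt0, loss_id=0) as scaled_loss:
          scaled_loss.backward()
      ```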
  31. 19 Mar, 2019 1 commit
  32. 13 Mar, 2019 1 commit
  33. 10 Mar, 2019 1 commit