- 27 Oct, 2021 1 commit
Masaki Kozuki authored
* Init apex.ppu (pipeline model parallel utility). Reference commit:
  ```
  commit 5ab646376d67831601d5552c193241d017f1b35c (HEAD -> main, internal/main)
  Merge: 14f2c684 7b293d9b
  Author: Mohammad Shoeybi <mshoeybi@nvidia.com>
  Date:   Wed Sep 22 22:57:54 2021 -0700

      Merge branch 'add_BOS' into 'main'

      Add Beginning of Sentence token option and adding semaphore while
      multi-threading to prevent crashes and hangs due to connection keep-alives

      See merge request ADLR/megatron-lm!328
  ```
* Remove get_args and replace imports - phase 1
* Remove get_args and replace imports - phase 2
* Move ppu to apex.transformer.pipeline_parallel
* Update two __init__.py
* Update READMEs
* mpu -> parallel_state & tensor_parallel
* Fix
* Remove non-pipeline files
* Separate schedules.py - phase 1
* Dissect schedules.py
* data_iterators -> batch
* Remove optimizer from forward_backward_step funcs
* Init test
* Apply 2 suggestion(s) to 2 file(s)
* Fix cyclic import
* Fix syntax of Callable
* Fix - 1
* Move directory, as testing is used for the pipeline-parallel test as well
* Add some functions for the num-microbatches calculator
* Model is a list in pipeline parallel
* Skip building the num-microbatches calculator
* Fix test
* assert -> raise
* Skip args printing
* Specify tensor shape everywhere, even if None - phase 1
* Make timers private
* Pass tensor shape & dtype around
* Update dtype handling by introducing a helper func
* Write a helper func to reduce cyclomatic complexity
* Remove duplicate
* Update
* Move split_tensor_into_1d_equal_chunks to avoid cyclic import
* tmp
* Cosmetic
* Move gather_split_1d_tensor to avoid cyclic imports
* Remove debug print
* Add outer loop
* Early return if possible
* Cosmetic
* Pass tensor shape around
* Refactor test
* Add script to study batch sampler behavior
* Update
* Minibatch splitter
* Add minibatch splitter
* Split minibatch into microbatches
* Minor changes
* Uncomment split batch for the test's sake
* Set as attribute
* Study the behavior of no pipelining
* Debug 1
* Reflect test util namespace change
* Update readme
* Cosmetic change in test
* Add model-build helper func for the interleaving sched
* Add model builder from Megatron
* Can be a cyclic import
* Fix
* Enable interleaving test, but it fails even if forward-only
* Fix batch preparation
* Add explanation
* Print data-parallel size
* Fix typo
* Add Megatron-style GPT model by Rishi (Co-authored-by: Rishi Puri <riship@nvidia.com>)
* Update
* Type hint for JIT
* Fix forward_backward_no_pipelining test
* Pipeline forward/backward seems to hang if not forward-only
* Fix typo
* Debug
* Add p2p test
* Simplify
* Fix
* Tentative
* Set both tmp and pmp to 1
* Init
* Fix typo
* Fix
* Fix path of divide
* Set seed for tmp
* Update upon Eddie's comment
* Fix typo
* Add failing data loader test
* Fix
* Megatron still failing
* Check in
* With the new nested-loop order, interleaving seems fine
* Cosmetic change
* Make `forward_backward_pipelining_with_interleaving` private
* Warn users that the interleaving sched is unstable
* Move noop handler to no pipelining
* Comment out rank_print
* Make `build_model` more flexible
* Skip Megatron test tentatively
* Correctly comment out rank_print
* Correctly comment out rank_print
* Correctly comment out rank_print
* Skip appropriately
* Remove WIP p2p comm test
* Update type hint of model_provider_func
* Disable TF32 in each test script
* Skip interleaving w/ backward
* Rename, as mpu is the old name
* Remove broken case
* Expose build_model func
* Delete `dist.ring_exchange` func call and `use_ring_exchange` argument
* Nit fixes
* Check in
* Remove unused file
* Update the list
* Update tensor shape
* Remove mixed dtype case
* Use torch.distributed.run
* 2020 -> 2021
* Another 2020 -> 2021
* Docstring & type hint
* Fix teardown
* Update
* Change to experimental
* Check if warned

Co-authored-by: Rishi Puri <riship@nvidia.com>
Co-authored-by: Eddie Yan <eddiey@nvidia.com>
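For context, a minimal sketch of how the pieces named in these commits fit together. The entry points (parallel_state, build_model) are taken from the commit messages above, but the exact import paths, signatures, and the model_provider_func below are assumptions, not the definitive apex API:

```
# Hypothetical wiring of the apex.transformer pipeline-parallel utilities;
# signatures are assumptions based on the commit messages above.
import torch
from apex.transformer import parallel_state
from apex.transformer.pipeline_parallel import build_model

def model_provider_func(*args, **kwargs):
    # Placeholder: each call returns one stage (chunk) of the model.
    return torch.nn.Linear(16, 16)

torch.distributed.init_process_group(backend="nccl")
# Tensor-model-parallel size 1, pipeline-model-parallel size 2 (illustrative).
parallel_state.initialize_model_parallel(1, 2)
# "Model is a list in pipeline parallel": with the interleaved schedule a
# rank may own several chunks, so build_model returns a list of modules.
model = build_model(model_provider_func, wrap_with_ddp=False)
```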
-
- 23 Oct, 2021 1 commit
Masaki Kozuki authored
* Switch from clone to out-of-place subtract
* Update apex/mpu/cross_entropy.py
* Apply 1 suggestion(s) to 1 file(s)

Co-authored-by: Eddie Yan <eddiey@nvidia.com>
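The clone-to-out-of-place change boils down to the pattern below; `logits`/`logits_max` are illustrative names, not necessarily the ones used in apex/mpu/cross_entropy.py:

```
import torch

logits = torch.randn(4, 8)
logits_max = logits.max(dim=-1, keepdim=True).values

# Before (sketch): clone, then mutate the copy in place.
shifted = logits.clone()
shifted.sub_(logits_max)

# After (sketch): a single out-of-place subtract, no explicit clone.
shifted = logits - logits_max
```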
-
- 08 Oct, 2021 1 commit
Masaki Kozuki authored
* Run backward
* Remove custom_fwd/custom_bwd
-
- 06 Oct, 2021 1 commit
Masaki Kozuki authored
* [ColumnParallelLinear] Test behavior in autocast
* Fix test
* Cast manually to autocast dtype
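A sketch of the kind of comparison this test performs, assuming the behavior described above: running the layer under autocast should match manually casting inputs and parameters to the autocast dtype. The plain Linear stand-in and the harness are illustrative placeholders, not the test's actual code:

```
import torch

layer = torch.nn.Linear(8, 8).cuda()   # stand-in for ColumnParallelLinear
x = torch.randn(4, 8, device="cuda")
autocast_dtype = torch.half

with torch.cuda.amp.autocast(dtype=autocast_dtype):
    out_autocast = layer(x)

# Manually cast the input and parameters to the autocast dtype.
out_manual = layer.to(autocast_dtype)(x.to(autocast_dtype))
torch.testing.assert_close(out_autocast.float(), out_manual.float())
```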
-
- 02 Oct, 2021 1 commit
Masaki Kozuki authored
Co-authored-by: Piotr Bialecki <pbialecki@nvidia.com>
Co-authored-by: Eddie Yan <eddiey@nvidia.com>
Co-authored-by: Rishi Puri <riship@nvidia.com>
Co-authored-by: Sangkug Lym <slym@nvidia.com>
-
- 15 Apr, 2021 1 commit
Sudhakar Singh authored
* Add unit tests for fused NovoGrad
* Fix: tensors should reside on the same device
* Fix: the CUDA stream should be queried on the same device the tensors reside on. Found this while debugging the fused NovoGrad multi-device unit test
* Fix issues mentioned in the comments
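The multi-device pitfall described in the stream fix is sketched below with an illustrative helper (not the actual apex code): query the stream for the device the tensors live on, rather than the ambient current device:

```
import torch

def stream_for(tensor):
    # Query the current stream on the tensor's own device, not on whatever
    # device happens to be current at call time.
    return torch.cuda.current_stream(tensor.device)
```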
-
- 01 Dec, 2020 1 commit
Kexin Yu authored
DistributedFusedAdam Model Parallelism Support (Megatron)

Co-authored-by: Kexin Yu <kexiny@nvidia.com>
Co-authored-by: Kexin Yu <kexinznzn@gmail.com>
-
- 05 Aug, 2020 1 commit
ngimel authored
* Add device guards to the optimizers
* Add untracked file
* Set device guard in multi_tensor_apply
* Address review comments; fix LAMB
* Fix indentation
* Fix typo
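The device-guard pattern being added is roughly the following; the helper name and optimizer step body are hypothetical, while the real change sets the guard inside the optimizers and multi_tensor_apply:

```
import torch

def step_under_device_guard(params, apply_update):
    # Make the params' device current before launching fused kernels, so
    # multi-GPU runs don't launch on the wrong device (illustrative helper).
    with torch.cuda.device(params[0].device):
        apply_update(params)
```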
-
- 23 Jun, 2020 3 commits
- 14 May, 2020 1 commit
Andrew Tulloch authored
-
- 30 Apr, 2020 1 commit
Deyu Fu authored
* Update fused bias-ReLU backward kernel
* Add support for not requiring first-layer dgrad
* Fix bug: wrong layer in requires_grad
* Add infrastructure for optional bias and activation; currently only supports no bias and no ReLU
* Make bias and ReLU optional separately
* Add sigmoid activation option
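As a reference for what the fused kernel computes, here is the unfused bias+ReLU backward in plain PyTorch; making bias or the activation optional corresponds to dropping the respective term. Names and shapes are illustrative:

```
import torch

def bias_relu_backward(grad_output, relu_output):
    # ReLU backward: gradient flows only where the forward output was > 0.
    grad_input = grad_output * (relu_output > 0)
    # Bias backward: reduce over the batch dim (for a 2D activation).
    grad_bias = grad_input.sum(dim=0)
    return grad_input, grad_bias
```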
-
- 22 Apr, 2020 2 commits
Deyu Fu authored
-
Vinicius Reis authored
The LARC optimizer wraps an underlying optimizer, and the wrapper then needs to be passed to amp.initialize for mixed precision. Three different crashes occurred in this situation; this change fixes all of them and adds a unit test. I don't know if the 'LARC' in sys.modules check ever worked: in my setup, the entry in sys.modules is 'apex.parallel.LARC'. Checking whether the variable is defined seems more reliable.
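The usage pattern this fix targets looks roughly like the following; the model and hyperparameters are placeholders, while the LARC and amp entry points follow apex's documented API:

```
import torch
from apex import amp
from apex.parallel.LARC import LARC

model = torch.nn.Linear(16, 16).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
optimizer = LARC(optimizer)  # wrap the underlying optimizer
# The LARC wrapper, not the inner SGD, is what goes to amp.initialize.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
```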
-
- 31 Mar, 2020 1 commit
Jeff Bowles authored
-
- 27 Feb, 2020 1 commit
mcarilli authored
* NHWC support for multi-tensor apply
* Compilation fix for PyTorch versions <= 1.4
-
- 03 Oct, 2019 1 commit
ptrblck authored
* Increase atol for Half-Float comparison to 1.5e-4
* Disable tests for different opt_levels
* Reset atol
* Add bitwise-accurate comparison
-
- 03 Sep, 2019 1 commit
Deyu Fu authored
* Move import of amp_C to __init__()
* Make fp16/32 separate lists to support mixed param types; disable double test
* Make zero_grad consistent between Adam/NovoGrad/LAMB
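The dtype bucketing described in the second bullet amounts to something like the sketch below (hypothetical helper, not the actual apex code): split params so each fused multi-tensor kernel sees a homogeneous dtype.

```
import torch

def split_params_by_dtype(params):
    # Group params by dtype so mixed fp16/fp32 param sets can each be
    # handed to the fused kernels as a uniform list.
    fp16 = [p for p in params if p.dtype == torch.float16]
    fp32 = [p for p in params if p.dtype == torch.float32]
    return fp16, fp32
```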
-
- 27 Aug, 2019 1 commit
ptrblck authored
* Add state_dict, load_state_dict
* Add test_restoring, test_loss_scale_decrease
* Disable amp outputs for checkpoint tests
* Add test for amp.state_dict; cleanup
* Add state_dict patch; add test
* Fix testing; cleanup
* Add README for checkpointing
* Add docs to source/amp
* Apply review changes to docs
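The workflow these commits document follows apex's amp checkpointing pattern: save amp.state_dict() alongside the model and optimizer state, then restore all three after re-running amp.initialize with the same opt_level. A condensed sketch with a placeholder model:

```
import torch
from apex import amp

model = torch.nn.Linear(16, 16).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

checkpoint = {
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "amp": amp.state_dict(),
}
torch.save(checkpoint, "amp_checkpoint.pt")

# Later, after re-running amp.initialize with the same opt_level:
checkpoint = torch.load("amp_checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
amp.load_state_dict(checkpoint["amp"])
```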
-
- 17 Aug, 2019 1 commit
Deyu Fu authored
-
- 15 Aug, 2019 1 commit
Christian Clauss authored
-
- 13 Aug, 2019 2 commits
Deyu Fu authored
FusedSGD now works as before.
FusedAdam now works with O1/O2 and no longer fuses scaling and casting.
Removed special backend handling for FusedAdam.
Moved and updated the FusedAdam test into run_optimizers.
Removed legacy tests for optimizers.FP16_Optimizer and FusedAdam in run_mixed_adam.
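After this rework, FusedAdam is meant to be used like any other optimizer under amp; a minimal sketch with a placeholder model and hyperparameters:

```
import torch
from apex import amp
from apex.optimizers import FusedAdam

model = torch.nn.Linear(16, 16).cuda()
optimizer = FusedAdam(model.parameters(), lr=1e-3)
# Scaling/casting now lives in amp rather than in the optimizer, so
# both O1 and O2 work with FusedAdam.
model, optimizer = amp.initialize(model, optimizer, opt_level="O2")
```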
-
Marek Kolodziej authored
Co-authored-by: Aditya Agrawal <aditya.iitb@gmail.com>
Co-authored-by: Marek Kolodziej <mkolod@gmail.com>
-
- 12 Aug, 2019 1 commit
Deyu Fu authored
-
- 08 Aug, 2019 1 commit
Deyu Fu authored
-
- 06 Aug, 2019 1 commit
ngimel authored
* Bug fix for non-affine layer norm + add backward unit test
* Clean up tests and add tests for a large batch
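A hedged sketch of the kind of backward check this adds, comparing the fused non-affine layer norm against the PyTorch reference; shapes and tolerances are illustrative, not the test's actual values:

```
import torch
from apex.normalization import FusedLayerNorm

x1 = torch.randn(32, 64, device="cuda", requires_grad=True)
x2 = x1.detach().clone().requires_grad_()

fused = FusedLayerNorm(64, elementwise_affine=False).cuda()
ref = torch.nn.LayerNorm(64, elementwise_affine=False).cuda()

fused(x1).sum().backward()
ref(x2).sum().backward()
torch.testing.assert_close(x1.grad, x2.grad, rtol=1e-3, atol=1e-3)
```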
-
- 03 Jul, 2019 1 commit
Michael Carilli authored
-
- 31 May, 2019 1 commit
mcarilli authored
* Existing tests passing; still need to add per-tensor tests
* Test is passing; still need to measure performance
* ILP for L2-norm functor
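The per-tensor entry point being added is invoked through multi_tensor_applier; the call below follows how downstream code (e.g. gradient clipping) typically uses amp_C.multi_tensor_l2norm, though the exact argument layout should be treated as an assumption:

```
import torch
import amp_C
from apex.multi_tensor_apply import multi_tensor_applier

tensors = [torch.randn(1024, device="cuda") for _ in range(4)]
overflow_buf = torch.zeros(1, dtype=torch.int, device="cuda")
# Returns the global L2 norm and, with the final flag True, one norm
# per input tensor.
total_norm, per_tensor_norms = multi_tensor_applier(
    amp_C.multi_tensor_l2norm, overflow_buf, [tensors], True
)
```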
-
- 27 May, 2019 2 commits
Michael Carilli authored
-
Michael Carilli authored
-
- 16 May, 2019 1 commit
mcarilli authored
* Support add_param_group
* Fix syntax
* Test added and passing
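What this enables, sketched with placeholder modules: param groups can be added to an amp-managed optimizer after amp.initialize has run:

```
import torch
from apex import amp

model = torch.nn.Linear(16, 16).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = amp.initialize(model, optimizer, opt_level="O2")

# Adding a param group after initialization is the case this commit covers.
extra = torch.nn.Linear(16, 16).cuda()
optimizer.add_param_group({"params": extra.parameters(), "lr": 0.01})
```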
-
- 02 May, 2019 1 commit
Michael Carilli authored
-
- 10 Apr, 2019 3 commits
Lam Dang authored
-
Lam Dang authored
-
Michael Carilli authored
-
- 04 Apr, 2019 1 commit
mcarilli authored
* Refactor to allow more flexible treatment of multiple optimizers/models/losses
* Add _process_optimizers.py
* Create L0 tests (now passing)
* Fix minor print typo (#234)
* Make L1 results easier to read
* Flesh out the L0 multiple model/optimizer/loss test
* Add test that master params remain synced across distributed processes
* Docstring updates
* Docstring updates
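The resulting API, as later documented for amp: lists of models and optimizers go into amp.initialize, num_losses declares how many loss scalers to keep, and each backward is tagged with a loss_id. Models and losses below are placeholders:

```
import torch
from apex import amp

model0 = torch.nn.Linear(8, 8).cuda()
model1 = torch.nn.Linear(8, 8).cuda()
optimizer0 = torch.optim.SGD(model0.parameters(), lr=0.1)
optimizer1 = torch.optim.SGD(model1.parameters(), lr=0.1)

[model0, model1], [optimizer0, optimizer1] = amp.initialize(
    [model0, model1], [optimizer0, optimizer1],
    opt_level="O1", num_losses=2)

x = torch.randn(4, 8, device="cuda")
loss0 = model0(x).sum()
loss1 = model1(x).sum()
# Each loss gets its own scaler, selected via loss_id.
with amp.scale_loss(loss0, optimizer0, loss_id=0) as scaled_loss:
    scaled_loss.backward()
with amp.scale_loss(loss1, optimizer1, loss_id=1) as scaled_loss:
    scaled_loss.backward()
```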
-
- 19 Mar, 2019 1 commit
Michael Carilli authored
-
- 13 Mar, 2019 1 commit
Wil Kong authored
-
- 10 Mar, 2019 1 commit
Michael Carilli authored
-