- 13 Dec, 2021 1 commit
Hubert Lu authored
- 09 Dec, 2021 1 commit
Kevin Stephano authored
* Add fused mixed precision LAMB optimizer.
* Fix device usage in constructor.
* Fix sending param_group tensor state to device.
* Remove unneeded device set.
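The new optimizer follows the usual torch.optim interface, like apex's existing FusedLAMB. A minimal usage sketch; the class name and import path here are assumptions inferred from the commit title, not verified against the tree:

```python
# Hedged sketch: FusedMixedPrecisionLamb's exact name/import path is
# inferred from the commit title above.
import torch
from apex.optimizers import FusedMixedPrecisionLamb  # assumed export

model = torch.nn.Linear(1024, 1024).cuda().half()
# Per the commit, the constructor now places its param_group tensor
# state on the parameters' device, so no manual .to(device) is needed.
opt = FusedMixedPrecisionLamb(model.parameters(), lr=1e-3, weight_decay=0.01)

loss = model(torch.randn(8, 1024, device="cuda", dtype=torch.half)).sum()
loss.backward()
opt.step()
```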
- 17 Nov, 2021 1 commit
Masaki Kozuki authored
- 27 Oct, 2021 1 commit
Masaki Kozuki authored
* Init apex.ppu (pipeline model parallel utility). Reference commit:
```
commit 5ab646376d67831601d5552c193241d017f1b35c (HEAD -> main, internal/main)
Merge: 14f2c684 7b293d9b
Author: Mohammad Shoeybi <mshoeybi@nvidia.com>
Date: Wed Sep 22 22:57:54 2021 -0700

    Merge branch 'add_BOS' into 'main'

    Add Beginning of Sentence token option and adding semaphore while
    multi-threading to prevent crashes and hangs due to connection keep-alives

    See merge request ADLR/megatron-lm!328
```
* removing get_args and replace import - phase 1
* removing get_args and replace import - phase 2
* move ppu to apex.transformer.pipeline_parallel
* update two __init__.py
* update READMEs
* mpu -> parallel_state & tensor_parallel
* fix
* remove non-pipeline files
* separate schedules.py - phase 1
* dissect schedules.py
* data_iterators -> batch
* remove optimizer from forward_backward_step funcs
* init test
* Apply 2 suggestion(s...
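After this refactor the utilities live under apex.transformer. A minimal initialization sketch, assuming a NCCL process group is already launched (one rank per GPU) and that parallel_state keeps the Megatron-style initialize_model_parallel entry point it was ported from:

```python
# Hedged sketch mirroring the Megatron-LM mpu API this code derives from.
import torch
import torch.distributed as dist
from apex.transformer import parallel_state

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

# Split the world into 2-way tensor parallelism x 2-way pipeline parallelism.
parallel_state.initialize_model_parallel(2, 2)

print("tp rank:", parallel_state.get_tensor_model_parallel_rank(),
      "pp rank:", parallel_state.get_pipeline_model_parallel_rank())
```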
- 19 Oct, 2021 1 commit
Hubert Lu authored
- 08 Oct, 2021 1 commit
eqy authored
- 07 Oct, 2021 1 commit
eqy authored
- 04 Oct, 2021 1 commit
Jeff Daily authored
- 02 Oct, 2021 1 commit
Masaki Kozuki authored
Co-authored-by: Piotr Bialecki <pbialecki@nvidia.com>
Co-authored-by: Eddie Yan <eddiey@nvidia.com>
Co-authored-by: Rishi Puri <riship@nvidia.com>
Co-authored-by: Sangkug Lym <slym@nvidia.com>
- 24 Sep, 2021 1 commit
Masaki Kozuki authored
- 04 Sep, 2021 1 commit
Burc Eryilmaz authored
* support for fused dense layer with cublasLt, fusion in both fprop and bprop
* fix typo causing syntax error
* add fused GEMM+GELU+GEMM module
* fix typo for workspace size
* update cublas check for 11600
* add tests for fused dense layer
* fix CUDA 10.x path
* safer guard around CUBLAS constants, remove unreferenced variable
* more guard changes
* guard against cublas version instead of cuda
Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
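A short sketch of the fused layers this series introduces; the class names are assumptions based on the commit description, with the cublasLt fusion happening inside both forward and backward:

```python
# Hedged sketch: FusedDense / FusedDenseGeluDense names assumed from the
# commit's "fused dense" and "GEMM+GELU+GEMM" descriptions.
import torch
from apex.fused_dense import FusedDense, FusedDenseGeluDense

dense = FusedDense(1024, 4096).cuda().half()               # GEMM(+bias) via cublasLt
mlp = FusedDenseGeluDense(1024, 4096, 1024).cuda().half()  # GEMM + GELU + GEMM

x = torch.randn(8, 1024, device="cuda", dtype=torch.half, requires_grad=True)
mlp(x).sum().backward()  # bprop GEMMs go through cublasLt as well
```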
- 01 Sep, 2021 2 commits
Burc Eryilmaz authored
* fuse norm into scale
* add fused norm into dlamb
Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
Burc Eryilmaz authored
* support for fused dense layer with cublasLt, fusion in both fprop and bprop
* fix typo causing syntax error
* add fused GEMM+GELU+GEMM module
* fix typo for workspace size
* update cublas check for 11600
* add tests for fused dense layer
* fix CUDA 10.x path
Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
- 17 May, 2021 1 commit
Burc Eryilmaz authored
Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
- 19 Apr, 2021 1 commit
Burc Eryilmaz authored
* don't create cublasLt handle, fix zero block size case
* cleanup
- 17 Apr, 2021 1 commit
Burc Eryilmaz authored
* initial cublasLt support
* 64-bit input
* add license headers
* cleanup
* remove license
Co-authored-by: pbialecki <pbialecki@nvidia.com>
- 15 Apr, 2021 1 commit
Sudhakar Singh authored
* Add unit tests for fused NovoGrad
* Fix: tensors should reside on the same device
* Fix: the CUDA stream should be obtained on the same device the tensors reside on; found during debugging the fused NovoGrad multi-device unit test
* fixed issues mentioned in the comments
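The device fixes matter once parameters live on a non-default GPU. A sketch of the scenario the new test exercises, using apex's FusedNovoGrad (the test layout itself is assumed):

```python
# Hedged sketch: params, grads, optimizer state, and the launch stream
# must all belong to cuda:1, which is what the fixes above guarantee.
import torch
from apex.optimizers import FusedNovoGrad

device = torch.device("cuda:1")  # assumes a second GPU is present
model = torch.nn.Linear(256, 256).to(device)
opt = FusedNovoGrad(model.parameters(), lr=1e-3)

model(torch.randn(4, 256, device=device)).sum().backward()
opt.step()  # fused kernels should launch on the tensors' own device
```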
- 25 Feb, 2021 1 commit
Jeff Daily authored
This reverts commit bdd481d1.
- 25 Jan, 2021 1 commit
Jeff Daily authored
- incorrect use of __shfl_down
- fix warp size assumptions
- update unit tests to exit on failure
- 21 Jan, 2021 1 commit
Jeff Daily authored
use __launch_bounds__(1024) for multi_tensor_apply, re-enable skipped tests
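For context, multi_tensor_apply batches one elementwise op across a whole list of tensors into a handful of kernel launches. A minimal sketch of the Python-side entry point, assuming the amp_C extension is built:

```python
# Sketch: scale four gradients with a single fused-apply call.
import torch
import amp_C
from apex.multi_tensor_apply import multi_tensor_applier

grads = [torch.randn(1024, device="cuda") for _ in range(4)]
out = [torch.empty_like(g) for g in grads]
# Set nonzero by the kernel if any input contains inf/NaN.
overflow_buf = torch.zeros(1, dtype=torch.int, device="cuda")

multi_tensor_applier(amp_C.multi_tensor_scale, overflow_buf,
                     [grads, out], 1.0 / 65536.0)
```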
- 18 Jan, 2021 1 commit
Jeff Daily authored
- 15 Jan, 2021 1 commit
Sarunya Pumma authored
- 04 Nov, 2020 1 commit
Ashish Farmer authored
* fix warp size in WARP_SHFL* in layernorm
* enable fused_layer_norm tests on ROCm
- 19 Oct, 2020 1 commit
lly-zero-one authored
This PR mainly optimizes the performance of SyncBatchNorm and fixes one potential issue in the welford_parallel kernel implementation. For the performance improvement, we batch the mean/var/count all_gather communication together and send it once in the forward path, and we likewise batch the all_reduce in the backward path. We also add a contiguous() call on the input of the welford_parallel kernel. If there is any standard perf benchmark, I would be happy to run it.
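The batching idea is to pack mean, var, and count into one buffer so a single all_gather replaces three. A sketch of that pattern in plain torch.distributed; names and shapes are illustrative, not the kernel's actual layout:

```python
# Hedged sketch of the batched-communication trick described above.
import torch
import torch.distributed as dist

def gather_stats(mean, var, count):
    # mean/var: [C], count: [1]. One contiguous buffer -> one all_gather.
    packed = torch.cat([mean, var, count])
    out = [torch.empty_like(packed) for _ in range(dist.get_world_size())]
    dist.all_gather(out, packed)
    c = mean.numel()
    stacked = torch.stack(out)  # [world_size, 2*C + 1]
    return stacked[:, :c], stacked[:, c:2 * c], stacked[:, 2 * c:]
```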
- 05 Aug, 2020 2 commits
Chaitanya Sri Krishna Lolla authored
* enable mlp cuda
* add setup changes and tests
* skip the unit tests
* updated conditions for empty array
* removed hip platform conditions
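The module in question is apex's fused MLP. A usage sketch; the constructor signature (a list of layer sizes) is an assumption based on upstream apex.mlp:

```python
# Hedged sketch: apex.mlp.MLP fuses the linear(+bias, +activation) stack
# into single kernels; the exact constructor arguments are assumed.
import torch
from apex.mlp import MLP

mlp = MLP([512, 1024, 1024, 256]).cuda()  # in, hidden..., out sizes
x = torch.randn(32, 512, device="cuda", requires_grad=True)
mlp(x).sum().backward()
```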
ngimel authored
* add device guards to the optimizers
* add untracked file
* set deviceGuard in multi_tensor_apply
* address review comments; fix lamb
* indent
* typo
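The guard pattern behind these fixes: every CUDA call an optimizer issues must target the device its parameters actually live on. In Python terms the pattern looks roughly like this (the real fix uses the equivalent C++ guard inside multi_tensor_apply):

```python
# Hedged sketch of the device-guard pattern, not the patched C++ itself.
import torch

def guarded_step(params, lr=1e-3):
    for p in params:  # assumes all params are CUDA tensors
        # Make p's device current before any kernel launch, so state
        # tensors and streams match the parameter's device.
        with torch.cuda.device(p.device):
            p.data.add_(p.grad, alpha=-lr)  # stand-in for the real update
```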
- 10 Jul, 2020 1 commit
Chaitanya Sri Krishna Lolla authored
* Enable sync batchnorm
* enable syncbn properly
* update the unit tests
* update tests
* update conditions for welford_merge_element
* updated conditions based on comments.
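Enabling it end to end usually goes through apex's converter, which swaps torch.nn.BatchNorm* modules for the synchronized version; a minimal sketch using the long-standing apex.parallel entry point:

```python
# Sketch: convert BatchNorm layers, then train under torch.distributed so
# batch statistics are reduced across the whole process group.
import torch
from apex.parallel import convert_syncbn_model

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3),
    torch.nn.BatchNorm2d(16),
    torch.nn.ReLU(),
).cuda()

model = convert_syncbn_model(model)  # BatchNorm2d -> apex SyncBatchNorm
```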
- 06 Jul, 2020 1 commit
jjsjann123 authored
* [sync BN] support non-uniform batch size across process group; TODO: test should be added once cleaned up
* updating unit tests
* new unit tests for different inputs
* cleaning
- 22 Jun, 2020 1 commit
ashishfarmer authored
- 15 Jun, 2020 1 commit
rohithkrn authored
- 26 May, 2020 1 commit
rohithkrn authored
- 23 May, 2020 1 commit
Kexin Yu authored
- 22 May, 2020 5 commits
- 21 May, 2020 2 commits
Kexin Yu authored
Jeff Daily authored
- 20 May, 2020 1 commit
lcskrishna authored