- 14 May, 2020 1 commit
  Andrew Tulloch authored
- 12 May, 2020 2 commits
  Chaitanya Sri Krishna Lolla authored

  rohithkrn authored
- 07 May, 2020 2 commits
  Chaitanya Sri Krishna Lolla authored

  Chaitanya Sri Krishna Lolla authored
    * fix dropout scaling from p to 1/(1-p) (#816)
      Co-authored-by: Sukru Eryilmaz <seryilmaz@computelab-dgx1v-32.nvidia.com>
    * Improvements to apex.mlp (#804)
      - update fused bias relu backward kernel
      - add support for not requiring first-layer dgrad
      - fix bug: wrong layer in requires_grad
      - add infrastructure for optional bias and activation; currently only supports no bias and no relu
      - make bias and relu optional separately
      - add sigmoid activation option
    * enable wider load/store for multi_tensor_apply kernels (#763)
      - modify MTA axpby for wider load/store
      - make scale/axpby/l2/adam/lamb multi_tensor use wider loads
    * Changes to make xentropysoftmax load/store vectorized when possible (#725)
      - increase the default ILP so each thread handles 16 bytes of data per step
      - make each thread load/store the longest vector possible
      - make the unroll case handle adjacent data instead of strided...
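The dropout fix above changes the scale factor applied to surviving activations: with inverted dropout the survivors must be multiplied by 1/(1-p), not p, so the expected activation is unchanged between training and inference. A minimal pure-Python sketch of that scaling (illustrative only; the actual fix is in apex's CUDA kernel, and the function name here is hypothetical):

```python
import random

def inverted_dropout(xs, p, rng):
    """Zero each element with probability p; scale survivors by 1/(1-p)
    so the expected value of each output equals its input."""
    if not 0.0 <= p < 1.0:
        raise ValueError("p must be in [0, 1)")
    scale = 1.0 / (1.0 - p)  # the corrected factor: 1/(1-p), not p
    return [0.0 if rng.random() < p else x * scale for x in xs]

rng = random.Random(0)
p = 0.5
xs = [1.0] * 100_000
ys = inverted_dropout(xs, p, rng)
mean = sum(ys) / len(ys)  # close to 1.0: scaling preserves the expectation
```

With the old factor p the surviving values would have been halved instead of doubled, shrinking the expected activation to p*(1-p) of its input.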
- 30 Apr, 2020 2 commits
  Deyu Fu authored
    * modify MTA axpby for wider load/store
    * make scale/axpby/l2/adam/lamb multi_tensor use wider loads
  Deyu Fu authored
    * update fused bias relu backward kernel
    * add support for not requiring first-layer dgrad
    * fix bug: wrong layer in requires_grad
    * add infrastructure for optional bias and activation; currently only supports no bias and no relu
    * make bias and relu optional separately
    * add sigmoid activation option
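The fused bias-ReLU backward kernel referenced above computes, in one pass, the gradient through the ReLU and the bias gradient (a reduction of the incoming gradient over the batch dimension). A scalar pure-Python sketch of the math it fuses (illustrative; the real kernel operates on CUDA tensors and the function name is hypothetical):

```python
def bias_relu_backward(grad_out, pre_act):
    """Given dL/dy for y = relu(x + b) and the pre-activation x + b,
    return (dL/d(x+b), dL/db). ReLU passes gradient only where the
    pre-activation was positive; dbias sums the gradient over rows."""
    grad_in = [[g if z > 0 else 0.0 for g, z in zip(grow, zrow)]
               for grow, zrow in zip(grad_out, pre_act)]
    dbias = [sum(col) for col in zip(*grad_in)]  # reduce over batch dim
    return grad_in, dbias

grad_out = [[1.0, 2.0], [3.0, 4.0]]   # dL/dy, shape (batch=2, features=2)
pre_act  = [[0.5, -1.0], [2.0, 3.0]]  # x + b saved from the forward pass
grad_in, dbias = bias_relu_backward(grad_out, pre_act)
# grad_in = [[1.0, 0.0], [3.0, 4.0]], dbias = [4.0, 4.0]
```

Fusing these two steps avoids writing the masked gradient to memory and reading it back for the bias reduction.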
- 28 Apr, 2020 1 commit
  Chaitanya Sri Krishna Lolla authored
    * Initial commit to hipify all CUDA code
    * enable multi_tensor_apply extension
    * added generatedFileCleaner to handle nested hip files
- 22 Apr, 2020 1 commit
  Deyu Fu authored
- 10 Apr, 2020 1 commit
  Thor Johnsen authored
- 27 Feb, 2020 1 commit
  mcarilli authored
    * NHWC support for multi tensor apply
    * compilation fix for version <= 1.4
- 04 Oct, 2019 1 commit
  Deyu Fu authored
    * move previous fused_adam and fp16_optimizer to contrib
    * make building contrib.fused_adam optional
    * change build option name
    * remove unnecessary try/except around import
- 06 Sep, 2019 1 commit
  mcarilli authored
    * Pushing for build tests
    * Contrib files
    * Removing deprecated checks
- 20 Aug, 2019 1 commit
  Deyu Fu authored
- 17 Aug, 2019 1 commit
  Deyu Fu authored
- 16 Aug, 2019 2 commits
  Deyu Fu authored
    * correctly do not apply bias correction to epsilon (same as recent upstream change)
    * correctly do not apply bias correction to weight decay (consistent with upstream AdamW)
    * add adam_w_mode for FusedAdam/LAMB, to do L2 regularization or weight decay (Adam vs. AdamW)
    * correct documentation: reg_inside_moment differs from adam_w_mode in FusedNovoGrad
    * remove legacy eps_mode from FusedAdam
    * make the internal math type float across fused optimizers
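The adam_w_mode flag above selects between classic L2 regularization (decay folded into the gradient, so it passes through the Adam moments) and decoupled weight decay (AdamW, decay applied directly to the weight after the adaptive step). A single-step pure-Python sketch of the difference on a scalar weight (illustrative only, not apex's fused kernel; the function and defaults here are assumptions):

```python
import math

def adam_step(w, g, m, v, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, wd=0.01, adam_w_mode=True, step=1):
    """One Adam update on a scalar weight.
    adam_w_mode=False: L2 mode, decay enters the gradient and moments.
    adam_w_mode=True : AdamW, decay applied directly to the weight."""
    if not adam_w_mode:
        g = g + wd * w                 # L2: decay goes through the moments
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** step)    # bias correction (not applied to eps)
    v_hat = v / (1 - beta2 ** step)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    if adam_w_mode:
        w = w - lr * wd * w            # AdamW: decoupled weight decay
    return w, m, v

w_l2, _, _ = adam_step(1.0, 0.1, 0.0, 0.0, adam_w_mode=False)
w_aw, _, _ = adam_step(1.0, 0.1, 0.0, 0.0, adam_w_mode=True)
```

In L2 mode the decay term is rescaled by the adaptive denominator, so the two modes take measurably different steps even after one iteration.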
  Deyu Fu authored
- 08 Aug, 2019 1 commit
  Deyu Fu authored
- 06 Aug, 2019 1 commit
  ngimel authored
    * Bug fix for non-affine layer norm + add backward unit test
    * clean up tests and add tests for a large batch
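The fix above concerns layer norm without affine parameters (no learned gamma/beta). For reference, the non-affine forward simply normalizes each feature vector to zero mean and unit variance; a pure-Python sketch of that math (illustrative only, not apex's fused CUDA kernel):

```python
import math

def layer_norm(row, eps=1e-5):
    """Normalize one feature vector to zero mean / unit variance.
    Non-affine: no gamma/beta, so backward has no weight gradients."""
    mu = sum(row) / len(row)
    var = sum((x - mu) ** 2 for x in row) / len(row)
    inv_std = 1.0 / math.sqrt(var + eps)
    return [(x - mu) * inv_std for x in row]

out = layer_norm([1.0, 2.0, 3.0, 4.0])
mean = sum(out) / len(out)                # ~0 after normalization
var = sum(x * x for x in out) / len(out)  # ~1 (slightly under, due to eps)
```

In the affine case the output would additionally be scaled and shifted by gamma and beta, and the backward pass would produce gradients for them; the non-affine path skips both.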
- 01 Aug, 2019 1 commit
  Natalia Gimelshein authored
- 26 Jul, 2019 1 commit
  Edward Z. Yang authored
    Signed-off-by: Edward Z. Yang <ezyang@fb.com>
- 12 Jul, 2019 1 commit
  Edward Z. Yang authored
    Signed-off-by: Edward Z. Yang <ezyang@fb.com>
- 03 Jul, 2019 4 commits
  Michael Carilli authored

  Michael Carilli authored

  Michael Carilli authored

  Michael Carilli authored
- 28 Jun, 2019 1 commit
  Thor Johnsen authored
- 14 Jun, 2019 1 commit
  Thor Johnsen authored
- 11 Jun, 2019 1 commit
  Michael Carilli authored
- 31 May, 2019 2 commits
  Thor Johnsen authored
    * First draft, for discussion
    * Fix mistakes in LAMB equations
    * Add loop over chunk
    * Bug fix
    * Bug fix
    * Bug fix
    * Undo bug fix
    * Bug fix
    * Add multi tensor LAMB optimizer to setup.py
    * Rename step_size to learning_rate
    * Fix compilation errors
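The LAMB equations mentioned above combine an Adam-style update with a per-layer trust ratio: the step for each tensor is rescaled by ||w|| / ||update||, so layers with large weights and small updates take proportionally larger steps. A hedged pure-Python sketch of one LAMB step on a single weight vector (illustrative; apex's version is a fused multi-tensor CUDA kernel, and the exact clipping/normalization details may differ):

```python
import math

def lamb_step(w, g, m, v, lr=0.01, beta1=0.9, beta2=0.999,
              eps=1e-6, wd=0.01, step=1):
    """One LAMB update on a weight vector (plain lists of floats)."""
    m = [beta1 * mi + (1 - beta1) * gi for mi, gi in zip(m, g)]
    v = [beta2 * vi + (1 - beta2) * gi * gi for vi, gi in zip(v, g)]
    m_hat = [mi / (1 - beta1 ** step) for mi in m]
    v_hat = [vi / (1 - beta2 ** step) for vi in v]
    # Adam-style direction plus decoupled weight decay
    u = [mh / (math.sqrt(vh) + eps) + wd * wi
         for mh, vh, wi in zip(m_hat, v_hat, w)]
    w_norm = math.sqrt(sum(x * x for x in w))
    u_norm = math.sqrt(sum(x * x for x in u))
    # per-layer trust ratio: scale the step by ||w|| / ||u||
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    w = [wi - lr * trust * ui for wi, ui in zip(w, u)]
    return w, m, v

w, m, v = lamb_step([1.0, -2.0], [0.1, 0.3], [0.0, 0.0], [0.0, 0.0])
```

The trust ratio is what makes LAMB stable at the very large batch sizes it was designed for: the effective learning rate adapts to each layer's weight scale.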
  mcarilli authored
    * Existing tests passing, still need to add per-tensor tests
    * Test is passing, still need to measure performance
    * ILP for l2norm functor
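The l2norm functor above computes the L2 norm over a list of tensors, optionally returning per-tensor norms alongside the global one (the "per-tensor" mode the first bullet refers to). The reduction it performs, in pure Python (illustrative; the CUDA version chunks each tensor and reduces across thread blocks with ILP):

```python
import math

def multi_tensor_l2norm(tensors, per_tensor=False):
    """Global L2 norm over a list of flat tensors (lists of floats);
    optionally also return each tensor's own norm."""
    sq = [sum(x * x for x in t) for t in tensors]  # per-tensor sum of squares
    total = math.sqrt(sum(sq))
    if per_tensor:
        return total, [math.sqrt(s) for s in sq]
    return total

total, per = multi_tensor_l2norm([[3.0, 4.0], [5.0, 12.0]], per_tensor=True)
# per = [5.0, 13.0]; total = sqrt(25 + 169) = sqrt(194)
```

Computing both in one pass matters for optimizers like LAMB, which need a norm per parameter tensor, and for global gradient clipping, which needs the combined norm.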
- 27 May, 2019 1 commit
  Michael Carilli authored
- 10 May, 2019 1 commit
  Michael Carilli authored
- 03 May, 2019 1 commit
  Michael Carilli authored
- 27 Apr, 2019 1 commit
  jjsjann123 authored
    * Persistent group batchnorm added
      Added persistent grouped batch norm for performance on the strong-scaling case. Currently only supports:
      1. NHWC layout
      2. fp16
      3. synchronization only within a node
      An environment variable is used to tune LAUNCH_MARGIN, which limits CTA usage by the persistent kernel. Documentation and examples will follow.
    * update type().scalarType() to scalar_type()
    * move launch margin to be defined at layer creation; add a knob to cap max CTAs per SM
    * fix the CTA computation
    * review comments: set device_id through cudaGetDevice(); move cudaMemset to cudaMemsetAsync; update __threadfence() to __threadfence_system() for inter-device writes
- 26 Apr, 2019 5 commits
  Michael Carilli authored

  Michael Carilli authored

  ptrblck authored
    * change .type().ScalarType() to .scalar_type(), and at::ScalarType::X to at::kX
    * revert scalar_type() to type() for AT_DISPATCH_FLOATING_TYPES_AND_HALF
    * revert scalar_type() to type() in AT_DISPATCH_FLOATING_TYPES
    * revert scalar_type() to type() for AT_DISPATCH_FLOATING_TYPES_AND_HALF in welford.cu
    * revert scalar_type() to type() in layer_norm_cuda_kernel.cu
    * revert at::kType to at::ScalarType::Type
    * use DISPATCH_FLOAT_AND_HALF to get rid of warnings
    * add dispatch mechanisms for double+float and double+float+half
  Michael Carilli authored

  Michael Carilli authored