Gemm layernorm welford (#413)
* Add device op of gemm layernorm
* [What] Rename F to H
[Why] Reserve F and G for the welford tensors
* Add gridwise gemm + welford (see the Welford sketch after this list)
* Extract template parameter
* Rename kernel; prepare to add the second kernel
* Extract var
* Add second kernel for gemm+layernorm
* Move to the gemm_layernorm folder
* Rename F and G to mean and var
* Do not use snakeCurved, since it makes determining the padding for welford difficult
* Rewrite the device interface and rename some variables
* Add welford count
* Update interface
* Sync code, prepare to test on MI200
* Clean the code
* Implement layernorm
* Add comment to mention hipFree
* Write out the E tensor for debugging.
This could be removed and H used instead
* 1. Allocate the mean, var and count buffers via SetWorkSpacePointer (see the workspace sketch after this list).
2. Add GetWorkSpaceSize to calculate the workspace size
* Add gemm layernorm host code
* Use reference layernorm
* Fix bug in blockwise welford for the first kernel
* Fix bug in mean/var padding for layernorm
* Use SGPR for shuffleM_index
* Add padding for GemmMeanVarCountGridDescriptor_M_NBlock
* Add layout parameter
* Check argument for gemm
* Calculate max count for the tail block (see the tail-block sketch after this list)
* Share E and H memory in device op
* Hard code the vector dim
* Refine the MakeDescriptor
* 1. Remove the E parameter, because E is inside the device op
2. Check the vector size
* [What] Rename MakeMeanVarDescriptor_M_N
[Why] Prepare to add a count version of the descriptor maker
* Use 1D global memory for count
* Prevent redundant IO
* Update parameter
* Add pipeline v1/v2 selector
* Rename the example
* Add base class for gemm layernorm
* Refine naming to distinguish naive and welford
* Add comment to explain in detail
* We don't need to pad the N dimension in gemm for mean/var/count; set NPerTile to 1
* Rewrite the 2nd kernel: use multiple blocks along the N dimension in the layernorm kernel
* Share the vector size
* Refine var name
* [What] Force LayernormThreadSliceSize_N = vector size (see the coalescing sketch after this list).
[Why] Memory coalescing
* Add comment
* Extract the divisor out of the loop in reference layernorm (see the reference layernorm sketch after this list)
* Pad E and H by different sizes in the layernorm kernel according to their block tiles
* Refine naming
* Refine naming
* Prevent implicit cast
* [What] Use ck::math::sqrt instead of __builtin_amdgcn_sqrtf (see the sqrt sketch after this list)
[Why] __builtin_amdgcn_sqrtf only supports float, so double would cause casting
* Cast only the constant
* Change the post-shuffle thread descriptor
* Add EMeanVarDataType parameter.
* Merge the mean and var threadwise copy
* Add missing index
* Fix Typo
* Sync the variable with the previous if
* 1. Declare e inside host_gemm_layernorm()
2. Prevent implicit casts in the reference code
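
For reference, a minimal host-side sketch of the Welford update and merge that the gridwise gemm + welford and blockwise welford commits build on. This is plain C++ rather than the actual CK threadwise/blockwise operators, and the struct/function names are illustrative only:

```cpp
#include <cstdint>

// Running Welford state: element count, mean, and M2 (sum of squared deviations).
struct WelfordState
{
    int32_t count = 0;
    float mean    = 0.f;
    float m2      = 0.f;
};

// Per-element update: what each thread does while accumulating its slice.
inline void welford_update(WelfordState& s, float x)
{
    s.count += 1;
    const float delta = x - s.mean;
    s.mean += delta / s.count;
    s.m2 += delta * (x - s.mean);
}

// Merge two partial states: what the blockwise reduction and the second kernel
// do across thread/block partials, which is why the count must be carried along.
inline WelfordState welford_merge(const WelfordState& a, const WelfordState& b)
{
    WelfordState out;
    out.count = a.count + b.count;
    if(out.count == 0)
        return out;
    const float delta = b.mean - a.mean;
    out.mean = a.mean + delta * b.count / out.count;
    out.m2   = a.m2 + b.m2 + delta * delta * a.count * b.count / out.count;
    return out;
}

// Biased variance, which is what layernorm normalizes with.
inline float welford_variance(const WelfordState& s)
{
    return s.count > 0 ? s.m2 / s.count : 0.f;
}
```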
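
A hedged sketch of how the SetWorkSpacePointer / GetWorkSpaceSize commits are meant to be used from the host, and why the hipFree comment matters: the caller owns one allocation that holds the intermediate mean/var/count buffers. The DeviceOp/Argument stand-ins and the byte counts below are placeholders, not the real CK types:

```cpp
#include <hip/hip_runtime.h>
#include <cstddef>
#include <cstdio>

// Hypothetical stand-ins: the real CK device op and argument types expose
// GetWorkSpaceSize()/SetWorkSpacePointer() with their own signatures.
struct Argument { void* p_workspace = nullptr; };

struct DeviceOp
{
    // Total bytes for the intermediate mean / var / count buffers that the
    // gemm+welford kernel writes and the layernorm kernel reads.
    std::size_t GetWorkSpaceSize(const Argument&) const
    {
        return mean_bytes + var_bytes + count_bytes;
    }
    void SetWorkSpacePointer(Argument& arg, void* p) const { arg.p_workspace = p; }

    std::size_t mean_bytes = 1024, var_bytes = 1024, count_bytes = 256; // example sizes
};

int main()
{
    DeviceOp op;
    Argument arg;

    void* p_workspace        = nullptr;
    std::size_t workspace_sz = op.GetWorkSpaceSize(arg);
    (void)hipMalloc(&p_workspace, workspace_sz); // caller allocates the workspace once
    op.SetWorkSpacePointer(arg, p_workspace);

    // ... run the gemm+welford kernel, then the layernorm kernel ...

    (void)hipFree(p_workspace); // workspace is caller-owned, so the caller frees it
    std::printf("workspace bytes: %zu\n", workspace_sz);
    return 0;
}
```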
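
The "calculate max count for tail block" commit boils down to clamping the welford count when N is not a multiple of the block tile. A small sketch with illustrative names:

```cpp
#include <algorithm>
#include <cassert>

// Number of valid (non-padded) elements a block starting at n_block_start sees
// along N when N is not a multiple of the per-block tile size.
inline int max_count_for_block(int N, int NPerBlock, int n_block_start)
{
    assert(n_block_start < N);
    return std::min(NPerBlock, N - n_block_start);
}

// Example: N = 1000, NPerBlock = 256 -> blocks see 256, 256, 256, 232 elements,
// so the tail block contributes a welford count of 232, not 256.
```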
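
Why forcing LayernormThreadSliceSize_N to the vector size helps coalescing, sketched with assumed names (VectorSize and n_offset are not the kernel's actual identifiers):

```cpp
// With ThreadSliceSize_N == VectorSize, each thread loads exactly one vector and
// consecutive threads load consecutive vectors, so a wavefront's reads cover one
// contiguous stretch of the row and coalesce into few global memory transactions.
constexpr int VectorSize        = 8;
constexpr int ThreadSliceSize_N = VectorSize;
static_assert(ThreadSliceSize_N == VectorSize, "slice size tied to vector size");

inline int n_offset(int n_block_start, int thread_id)
{
    return n_block_start + thread_id * VectorSize;
}
```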
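
A plain host sketch of the row-wise layernorm that the example verifies against, with the divisor hoisted out of the loops as in the "extract divisor" commit. It assumes the usual affine (gamma/beta) form and is not the actual CK reference operator:

```cpp
#include <cmath>
#include <vector>

// y[m][n] = (x[m][n] - mean[m]) / sqrt(var[m] + eps) * gamma[n] + beta[n]
void reference_layernorm(const std::vector<float>& x,
                         const std::vector<float>& gamma,
                         const std::vector<float>& beta,
                         std::vector<float>& y,
                         int M,
                         int N,
                         float eps = 1e-5f)
{
    const float inv_N = 1.f / N; // divisor extracted out of the per-element loops

    for(int m = 0; m < M; ++m)
    {
        float mean = 0.f, var = 0.f;
        for(int n = 0; n < N; ++n)
            mean += x[m * N + n];
        mean *= inv_N;

        for(int n = 0; n < N; ++n)
        {
            const float d = x[m * N + n] - mean;
            var += d * d;
        }
        var *= inv_N;

        const float inv_std = 1.f / std::sqrt(var + eps);
        for(int n = 0; n < N; ++n)
            y[m * N + n] = (x[m * N + n] - mean) * inv_std * gamma[n] + beta[n];
    }
}
```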
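
The sqrt change in a nutshell: a type-generic sqrt keeps double math in double, whereas __builtin_amdgcn_sqrtf would force a float cast first. A minimal illustration using std::sqrt as a stand-in for ck::math::sqrt:

```cpp
#include <cmath>

template <typename T>
T inv_std_from_var(T var, T eps) // illustrative helper, not a CK function
{
    // std::sqrt has float and double overloads, so var + eps stays in T;
    // __builtin_amdgcn_sqrtf(float) would narrow a double argument to float.
    return static_cast<T>(1) / std::sqrt(var + eps);
}

// inv_std_from_var(1e-8,  1e-5)  -> evaluated in double
// inv_std_from_var(1e-8f, 1e-5f) -> evaluated in float
```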
Co-authored-by: Po Yen Chen <PoYen.Chen@amd.com>