Commit 43ab0b52 authored by Tri Dao's avatar Tri Dao

Mention that some CUDA extensions have only been tested on A100s

parent e4d3013e
@@ -5,6 +5,9 @@ We make it work for bfloat16.
For best performance, you should use CUDA >= 11.8. CuBLAS versions before
this don't have the best matmul + bias + gelu performance for bfloat16.
It has only been tested on A100s.
```sh
cd csrc/fused_dense_lib && pip install .
```
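For reference, the three operations this extension fuses into one kernel are a matmul, a bias add, and a GELU activation. A minimal NumPy sketch of the unfused computation (the tanh approximation of GELU is shown here as an assumption; the exact GELU variant used by the kernel may differ):

```python
import numpy as np

def gelu_tanh(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def dense_bias_gelu(x, weight, bias):
    # The three ops the CUDA extension fuses into a single kernel:
    # matmul + bias add + GELU activation.
    return gelu_tanh(x @ weight.T + bias)

x = np.random.randn(4, 8).astype(np.float32)
W = np.random.randn(16, 8).astype(np.float32)
b = np.zeros(16, dtype=np.float32)
out = dense_bias_gelu(x, W, b)
```

Fusing these avoids writing the large matmul output to global memory twice, which is where the pre-11.8 cuBLAS versions lose performance for bfloat16.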
This CUDA extension implements fused dropout + residual + LayerNorm, based on
Apex's [FastLayerNorm](https://github.com/NVIDIA/apex/tree/master/apex/contrib/layer_norm).
We add dropout and residual, and make it work for both pre-norm and post-norm architectures.
It has only been tested on A100s.
```sh
cd csrc/layer_norm && pip install .
```
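A NumPy sketch of the pattern being fused (a sketch under the assumption that dropout is applied to the sublayer output, added to the residual stream, then normalized; function names here are illustrative, not the extension's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def dropout(x, p, training=True):
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)  # inverted dropout scaling

def dropout_add_layer_norm(x, residual, p=0.1, training=True):
    # The pattern the extension fuses into one kernel:
    # dropout on the sublayer output, residual add, then LayerNorm.
    return layer_norm(dropout(x, p, training) + residual)

x = rng.standard_normal((2, 16)).astype(np.float32)
res = rng.standard_normal((2, 16)).astype(np.float32)
out = dropout_add_layer_norm(x, res, p=0.1, training=False)
```

In a pre-norm block this output feeds the next sublayer while the un-normalized sum stays on the residual stream; in a post-norm block the normalized output itself becomes the residual stream.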
This CUDA extension implements optimized cross-entropy loss, adapted from Apex's
[Xentropy](https://github.com/NVIDIA/apex/tree/master/apex/contrib/xentropy).
We make it work for bfloat16 and support in-place backward to save memory.
It has only been tested on A100s.
```sh
cd csrc/xentropy && pip install .
```
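A NumPy sketch of the computation and of why an in-place backward saves memory: the gradient of softmax cross-entropy has the same (large, vocab-sized) shape as the logits, so it can be written back into the logits buffer instead of allocating a second tensor. This is an illustrative sketch, not the extension's API:

```python
import numpy as np

def cross_entropy_with_inplace_grad(logits, target):
    # Numerically stable softmax cross-entropy (mean reduction).
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=-1, keepdims=True)
    n = logits.shape[0]
    loss = -np.log(probs[np.arange(n), target]).mean()
    # In-place backward: overwrite the logits buffer with the
    # gradient (softmax(logits) - one_hot) / n, so no extra
    # vocab-sized tensor is allocated.
    logits[:] = probs / n
    logits[np.arange(n), target] -= 1.0 / n
    return loss

logits = np.random.randn(4, 10)
target = np.array([1, 3, 5, 7])
loss = cross_entropy_with_inplace_grad(logits, target)
```

After the call, `logits` no longer holds the inputs but the gradient with respect to them; each gradient row sums to zero since softmax probabilities sum to one.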