Commit b784ed73 authored by Tri Dao

[Docs] Clarify OpenFold speedup

parent d9021ae4
@@ -89,7 +89,8 @@ yields the fastest BERT training on cloud instances in MLPerf training 2.0 (June
 memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold 2. With
 FlashAttention as one of its
 [components](https://twitter.com/gahdritz/status/1595420944880779266), it is
-up to 3x faster than AlphaFold2, and can predict 2x longer structures.
+up to 3x faster than AlphaFold2 to run inference on short sequences, and can
+predict 2x longer structures.
 ## Different implementations
...
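
For context on the FlashAttention component referenced in the diff, below is a minimal sketch of calling the library directly. It assumes the `flash_attn_func` interface from recent `flash-attn` releases, with illustrative shapes and random inputs; it is not the specific code path OpenFold uses.

```python
import torch
from flash_attn import flash_attn_func  # assumes the flash-attn package is installed

# Illustrative shapes; FlashAttention expects fp16/bf16 CUDA tensors of
# shape (batch, seqlen, nheads, headdim).
batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Fused attention: computes softmax(Q K^T / sqrt(headdim)) V without
# materializing the seqlen x seqlen attention matrix in GPU memory.
out = flash_attn_func(q, k, v, causal=False)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```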