Commit 04fd6d34 authored by Gustaf Ahdritz

Add note about FlashAttention

parent 4b3a102f
@@ -176,8 +176,8 @@ For large-scale batch inference, we offer an optional tracing mode, which
massively improves runtimes at the cost of a lengthy model compilation process.
To enable it, add `--trace_model` to the inference command.
-To get a speedup during inference, [FlashAttention](https://github.com/HazyResearch/flash-attention)
-enable FlashAttention in the config.
+To get a speedup during inference, enable [FlashAttention](https://github.com/HazyResearch/flash-attention)
+in the config. Note that it appears to work best for sequences with < 1000 residues.
Input FASTA files containing multiple sequences are treated as complexes. In
this case, the inference script runs AlphaFold-Gap, a hack proposed
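The hunk above says to enable FlashAttention "in the config". As a minimal sketch of what that toggle might look like, assuming the config exposes a boolean flag named `use_flash` under a `globals` section (both names are assumptions for illustration, not confirmed by this diff):

```python
from types import SimpleNamespace

def make_config(use_flash=False):
    # Stand-in for the real config constructor; the actual config object
    # and the "use_flash" flag name are assumptions, not the confirmed API.
    return SimpleNamespace(globals=SimpleNamespace(use_flash=use_flash))

config = make_config()
# Flip the flag before building the model; per the note in the diff,
# FlashAttention appears to work best for sequences with < 1000 residues.
config.globals.use_flash = True
print(config.globals.use_flash)
```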