Unverified Commit bfa6c7d0 authored by Gustaf Ahdritz, committed by GitHub

Revise FlashAttention section of README

parent 3c93fc40
@@ -177,9 +177,8 @@
 For large-scale batch inference, we offer an optional tracing mode, which
 massively improves runtimes at the cost of a lengthy model compilation process.
 To enable it, add `--trace_model` to the inference command.
-By default, [FlashAttention](https://github.com/HazyResearch/flash-attention)
-is enabled in the config. This speeds up computation of the evoformer's MSA
-column attention module.
+To get a speedup during inference, enable
+[FlashAttention](https://github.com/HazyResearch/flash-attention) in the config.
 Input FASTA files containing multiple sequences are treated as complexes. In
 this case, the inference script runs AlphaFold-Gap, a hack proposed
......
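For readers unfamiliar with what the FlashAttention toggle accelerates: FlashAttention computes standard softmax attention exactly, but processes the keys and values in blocks with an online softmax so the full score matrix is never materialized. The following NumPy sketch is illustrative only; it is not OpenFold or FlashAttention code, and the function names are invented for this example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Reference softmax attention: the quantity FlashAttention computes
    # exactly, but without building the full (n_q x n_k) score matrix.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def tiled_attention(q, k, v, block=2):
    # Online-softmax tiling over key/value blocks, the core idea behind
    # FlashAttention (simplified: no GPU memory hierarchy modeled here).
    n, d = q.shape
    out = np.zeros((n, v.shape[1]))  # running un-normalized output
    m = np.full(n, -np.inf)          # running max score per query row
    l = np.zeros(n)                  # running softmax denominator
    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        s = q @ kb.T / np.sqrt(d)
        m_new = np.maximum(m, s.max(axis=1))
        scale = np.exp(m - m_new)          # rescale previous partial sums
        p = np.exp(s - m_new[:, None])     # block-local exponentials
        out = out * scale[:, None] + p @ vb
        l = l * scale + p.sum(axis=1)
        m = m_new
    return out / l[:, None]
```

Both functions return the same values up to floating-point error; the tiled version only ever holds a `block`-wide slice of the scores, which is why the real kernel saves memory and bandwidth on long MSA/sequence axes.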