vice versa (see `scripts/convert_of_weights_to_jax.py`).
...
OpenFold has the following advantages over the reference implementation:
- **Faster inference** on GPU, sometimes by as much as 2x.
- **Inference on extremely long chains**, made possible by our implementation of low-memory attention
([Rabe & Staats 2021](https://arxiv.org/pdf/2112.05682.pdf)). OpenFold can predict the structures of
sequences with more than 4000 residues on a single A100, and even longer ones with CPU offloading.
...
kernels support in-place attention during inference and training. They use
4x and 5x less GPU memory than equivalent FastFold and stock PyTorch
implementations, respectively.
- **Efficient alignment scripts** using the original AlphaFold HHblits/JackHMMER pipeline or [ColabFold](https://github.com/sokrypton/ColabFold)'s, which uses the faster MMseqs2 instead. We've used them to generate millions of alignments.
- **FlashAttention** support greatly speeds up MSA attention.
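The low-memory attention referenced above can be sketched as a chunked "online softmax": key/value chunks are processed one at a time and partial softmax sums are combined, so the full quadratic attention matrix is never materialized. This is an illustrative NumPy sketch of the idea from Rabe & Staats 2021, not OpenFold's actual kernel; all function names here are hypothetical.

```python
import numpy as np

def chunked_attention(q, k, v, chunk=128):
    """Memory-efficient attention (Rabe & Staats 2021 style, illustrative).

    Iterates over key/value chunks, keeping only running numerator,
    denominator, and row-max statistics so peak memory scales with the
    chunk size instead of the full sequence length.
    """
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)
    num = np.zeros((n, v.shape[1]))      # running softmax numerator
    den = np.zeros((n, 1))               # running softmax denominator
    m = np.full((n, 1), -np.inf)         # running row-wise max, for stability
    for start in range(0, k.shape[0], chunk):
        kc, vc = k[start:start + chunk], v[start:start + chunk]
        s = (q @ kc.T) * scale                            # scores for this chunk
        m_new = np.maximum(m, s.max(axis=1, keepdims=True))
        p = np.exp(s - m_new)
        correction = np.exp(m - m_new)                    # rescale old partials
        num = num * correction + p @ vc
        den = den * correction + p.sum(axis=1, keepdims=True)
        m = m_new
    return num / den

def dense_attention(q, k, v):
    """Reference dense attention, materializing the full score matrix."""
    s = (q @ k.T) / np.sqrt(q.shape[1])
    s = s - s.max(axis=1, keepdims=True)
    p = np.exp(s)
    return (p / p.sum(axis=1, keepdims=True)) @ v
```

The chunked and dense versions produce the same output; the chunked one trades a small amount of recomputation for a memory footprint independent of the key sequence length.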