Unverified commit 237e26c4, authored by Gustaf Ahdritz, committed by GitHub

Bullet README

parent c4a4df22
@@ -14,26 +14,20 @@
cases where the *Nature* paper differs from the source, we always defer to the
latter.

OpenFold is built to support inference with official AlphaFold parameters. Try it out for yourself with
our [Colab notebook](https://colab.research.google.com/github/aqlaboratory/openfold/blob/main/notebooks/OpenFold.ipynb).

Additionally, OpenFold has the following advantages over the reference implementation:

- OpenFold is **trainable** in full precision or `bfloat16` half-precision, with or without [DeepSpeed](https://github.com/microsoft/deepspeed).
- **Faster inference** on GPU.
- **Inference on extremely long chains**, made possible by our implementation of low-memory attention ([Rabe & Staats 2021](https://arxiv.org/pdf/2112.05682.pdf)).
- **Custom CUDA attention kernels**, modified from [FastFold](https://github.com/hpcaitech/FastFold)'s, that support in-place attention during inference and training. They use 4x and 5x less GPU memory than equivalent FastFold and stock PyTorch implementations, respectively.
- **Efficient alignment scripts** using the original AlphaFold HHblits/JackHMMER pipeline or [ColabFold](https://github.com/sokrypton/ColabFold)'s, which uses the faster MMseqs2 instead. We've used them to generate millions of alignments that will be released alongside original OpenFold weights, trained from scratch using our code (more on that soon).
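To illustrate the idea behind the low-memory attention referenced above, here is a minimal, hypothetical NumPy sketch of chunked attention in the spirit of Rabe & Staats 2021. It is not OpenFold's actual implementation (which operates on batched PyTorch tensors with custom kernels); it only shows how attention over very long sequences can be computed without ever materializing the full attention matrix, by streaming over key/value chunks with a running, numerically stable softmax.

```python
# Hypothetical sketch of chunked (low-memory) attention, after
# Rabe & Staats 2021. Illustrative only -- not OpenFold's implementation.
import numpy as np

def chunked_attention(q, k, v, chunk_size=1024):
    """Compute softmax(q @ k.T / sqrt(d)) @ v without materializing
    the full [n_q, n_k] score matrix.

    q: [n_q, d], k: [n_k, d], v: [n_k, d_v]
    """
    scale = q.shape[-1] ** -0.5
    out = np.zeros((q.shape[0], v.shape[-1]))
    # Running per-query max and normalizer for a streamed, stable softmax.
    running_max = np.full((q.shape[0], 1), -np.inf)
    running_sum = np.zeros((q.shape[0], 1))
    for start in range(0, k.shape[0], chunk_size):
        k_c = k[start:start + chunk_size]
        v_c = v[start:start + chunk_size]
        scores = (q @ k_c.T) * scale                      # [n_q, chunk]
        chunk_max = scores.max(axis=-1, keepdims=True)
        new_max = np.maximum(running_max, chunk_max)
        # Rescale previous accumulators to the new max before adding.
        corr = np.exp(running_max - new_max)
        weights = np.exp(scores - new_max)
        running_sum = running_sum * corr + weights.sum(axis=-1, keepdims=True)
        out = out * corr + weights @ v_c
        running_max = new_max
    return out / running_sum
```

Peak memory per step is `O(n_q * chunk_size)` rather than `O(n_q * n_k)`, which is what makes inference on extremely long chains tractable.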
## Installation (Linux)
...