Commit f38f346d authored by Gustaf Ahdritz's avatar Gustaf Ahdritz

Add another recommendation to README

parent 143ba486
@@ -171,7 +171,7 @@ run inference with AlphaFold-Multimer, use the (experimental) `multimer` branch
instead.
To minimize memory usage during inference on long sequences, consider the
following changes:
- As noted in the AlphaFold-Multimer paper, the AlphaFold/OpenFold template
  stack is a major memory bottleneck for inference on long sequences. OpenFold
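The memory trade-off behind the template stack can be illustrated with a toy sketch: holding embeddings for every template at once costs memory linear in the template count, while streaming them one at a time keeps only a single embedding alive. Everything below (the `embed` stand-in, the shapes) is illustrative, not OpenFold's actual code:

```python
import numpy as np

def template_embeddings_stacked(templates, embed):
    # Naive approach: embed every template and hold all of them in
    # memory at once -- peak memory grows linearly with template count.
    stack = np.stack([embed(t) for t in templates])
    return stack.mean(axis=0)

def template_embeddings_streamed(templates, embed):
    # One-at-a-time alternative: accumulate a running sum so only a
    # single template embedding is alive at any moment.
    acc = None
    for t in templates:
        e = embed(t)
        acc = e if acc is None else acc + e
    return acc / len(templates)

rng = np.random.default_rng(1)
templates = [rng.standard_normal((8, 8)) for _ in range(5)]

def embed(t):
    return t * 2.0  # stand-in for a real embedding network

# Both strategies produce the same result; only peak memory differs.
assert np.allclose(template_embeddings_stacked(templates, embed),
                   template_embeddings_streamed(templates, embed))
```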
@@ -194,13 +194,15 @@ These represent a favorable tradeoff in most memory-constrained cases.
Power users can choose to tweak these settings in
`openfold/model/primitives.py`. For more information on the LMA algorithm,
see the aforementioned Staats & Rabe preprint.
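The core of the LMA scheme can be sketched in a few lines: attention is computed over key/value chunks while carrying running softmax statistics, so the full n-by-n score matrix is never materialized. The following is a minimal NumPy illustration of the Rabe & Staats idea, not OpenFold's actual kernel:

```python
import numpy as np

def attention_full(q, k, v):
    # Reference: standard softmax attention, O(n^2) memory.
    s = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def attention_lma(q, k, v, q_chunk=4, kv_chunk=4):
    # Chunked attention: keys/values are consumed in pieces, with
    # running max/normalizer statistics for a numerically stable
    # streaming softmax, so peak memory scales with the chunk sizes.
    d = q.shape[-1]
    out = np.zeros_like(q)
    for qs in range(0, q.shape[0], q_chunk):
        qc = q[qs:qs + q_chunk]
        m = np.full((qc.shape[0], 1), -np.inf)   # running max
        l = np.zeros((qc.shape[0], 1))           # running normalizer
        acc = np.zeros((qc.shape[0], v.shape[-1]))
        for ks in range(0, k.shape[0], kv_chunk):
            kc, vc = k[ks:ks + kv_chunk], v[ks:ks + kv_chunk]
            s = qc @ kc.T / np.sqrt(d)
            m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
            p = np.exp(s - m_new)
            scale = np.exp(m - m_new)
            l = l * scale + p.sum(axis=-1, keepdims=True)
            acc = acc * scale + p @ vc
            m = m_new
        out[qs:qs + q_chunk] = acc / l
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((10, 8)) for _ in range(3))
assert np.allclose(attention_lma(q, k, v), attention_full(q, k, v))
```

Smaller chunk sizes lower peak memory at the cost of more Python-level iterations, which is exactly the tradeoff the default settings balance.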
- Disable `tune_chunk_size` for long sequences. Past a certain point, it only
wastes time.
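To see why chunk-size tuning stops paying off, note that a tuner has to run the chunked computation once per candidate size before inference even starts. The sketch below is a hypothetical stand-in (`tune_chunk_size` and `dummy_layer` are illustrative, not OpenFold's implementation):

```python
import time

def tune_chunk_size(fn, seq_len, candidates=(4, 8, 16, 32, 64)):
    # Hypothetical auto-tuner: run the chunked computation once per
    # candidate chunk size and keep the fastest. The tuning pass itself
    # repeats the full computation len(candidates) times -- overhead
    # that grows with sequence length and eventually wastes more time
    # than a well-chosen fixed chunk size saves.
    best, best_time = None, float("inf")
    for chunk in candidates:
        start = time.perf_counter()
        fn(seq_len, chunk)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best, best_time = chunk, elapsed
    return best

def dummy_layer(seq_len, chunk):
    # Stand-in workload: touch the sequence one chunk at a time.
    for start in range(0, seq_len, chunk):
        _ = sum(range(start, min(start + chunk, seq_len)))

best = tune_chunk_size(dummy_layer, 2048)
assert best in (4, 8, 16, 32, 64)
```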
- As a last resort, consider enabling `offload_inference`. This enables more
  extensive CPU offloading at various bottlenecks throughout the model.
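The offloading pattern itself amounts to bookkeeping: park large intermediates in host memory and bring them back only while a stage needs them. A toy sketch of that pattern, with plain dicts standing in for device and host storage (the names here are illustrative, not OpenFold's API):

```python
import numpy as np

class OffloadBuffer:
    # Toy sketch of CPU offloading: large intermediates are parked in
    # host ("cpu") storage and returned to the small working
    # ("device") set only while a stage actually needs them.
    def __init__(self):
        self.host = {}      # offloaded tensors
        self.device = {}    # tensors currently resident on the accelerator

    def offload(self, name):
        self.host[name] = self.device.pop(name)

    def fetch(self, name):
        self.device[name] = self.host.pop(name)

buf = OffloadBuffer()
buf.device["pair_repr"] = np.zeros((4600, 4600), dtype=np.float32)
buf.offload("pair_repr")   # free accelerator memory between stages
assert "pair_repr" not in buf.device
buf.fetch("pair_repr")     # bring it back for the next stage
assert buf.device["pair_repr"].shape == (4600, 4600)
```

The real cost of this mode is the host-device transfer at each bottleneck, which is why it is recommended only as a last resort.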
Using the most conservative settings, we were able to run inference on a
4600-residue complex with a single A100. Compared to AlphaFold's own memory
offloading mode, ours is considerably faster: the same complex takes the more
efficient AlphaFold-Multimer more than double the time.
### Training
...