OpenDAS / OpenFold · Commit f38f346d

Commit f38f346d authored Jun 21, 2022 by Gustaf Ahdritz

Add another recommendation to README

parent 143ba486

Showing 1 changed file with 7 additions and 5 deletions
README.md (+7, -5)
@@ -171,7 +171,7 @@ run inference with AlphaFold-Multimer, use the (experimental) `multimer` branch
 instead.
 To minimize memory usage during inference on long sequences, consider the
-following options:
+following changes:
 - As noted in the AlphaFold-Multimer paper, the AlphaFold/OpenFold template
 stack is a major memory bottleneck for inference on long sequences. OpenFold
@@ -194,13 +194,15 @@ These represent a favorable tradeoff in most memory-constrained cases.
 Powerusers can choose to tweak these settings in
 `openfold/model/primitives.py`. For more information on the LMA algorithm,
 see the aforementioned Staats & Rabe preprint.
+- Disable `tune_chunk_size` for long sequences. Past a certain point, it only
+wastes time.
 - As a last resort, consider enabling `offload_inference`. This enables more
 extensive CPU offloading at various bottlenecks throughout the model.
-With all of these enabled, we were able to run inference on a 4600-residue
-complex with a single A100. Compared to AlphaFold's own memory offloading mode,
-ours is considerably faster: the same complex took the more efficient
-AlphaFold-Multimer more than double the time.
+Using the most conservative settings, we were able to run inference on a
+4600-residue complex with a single A100. Compared to AlphaFold's own memory
+offloading mode, ours is considerably faster: the same complex takes the more
+efficient AlphaFold-Multimer more than double the time.
 ### Training
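The chunk-size and low-memory-attention (LMA) recommendations in this diff both reduce peak memory the same way: by computing attention over blocks of queries instead of materializing the full pairwise score matrix at once. The following is a minimal NumPy sketch of that idea only, not OpenFold's actual implementation; all function names here are invented for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_full(q, k, v):
    # Standard attention: materializes the full [n_q, n_k] score matrix.
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return scores @ v

def attention_chunked(q, k, v, chunk_size=64):
    # Chunked attention: processes queries in blocks of `chunk_size`, so the
    # largest intermediate is [chunk_size, n_k] rather than [n_q, n_k].
    out = np.empty_like(q)
    for start in range(0, q.shape[0], chunk_size):
        block = q[start:start + chunk_size]
        out[start:start + chunk_size] = attention_full(block, k, v)
    return out

rng = np.random.default_rng(0)
n, d = 256, 32
q, k, v = rng.standard_normal((3, n, d))
# Chunking changes memory use, not the result.
assert np.allclose(attention_full(q, k, v), attention_chunked(q, k, v))
```

Peak intermediate memory drops from O(n²) to O(chunk_size · n) at the cost of looping over blocks, which is presumably the speed/memory tradeoff that `tune_chunk_size` searches over, and why tuning it stops paying off for very long sequences.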