OpenDAS / OpenFold

Commit ca2e07bf, authored Jun 13, 2022 by Gustaf Ahdritz

Add new inference features to README

Parent: 578541c8
Changes: 2 changed files, with 26 additions and 0 deletions

- README.md: +23 −0
- openfold/config.py: +3 −0
README.md (view file @ ca2e07bf)

@@ -156,6 +156,29 @@ the specified stock AlphaFold/OpenFold parameters (NOT AlphaFold-Multimer). To

run inference with AlphaFold-Multimer, use the (experimental) `multimer` branch
instead.
By default, OpenFold will attempt to automatically tune the inference-time
`chunk_size` hyperparameter controlling a memory/runtime tradeoff in certain
modules during inference. The chunk size specified in the config is only
considered a minimum. This feature ensures consistently fast runtimes
regardless of input sequence length, but it also introduces some runtime
variability, which may be undesirable for certain users. To disable this
feature, set the `tune_chunk_size` option in the config to `False`.
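The tradeoff `chunk_size` controls can be illustrated with a toy sketch (plain Python, not OpenFold code): a module's input is processed in fixed-size slices, so the result is unchanged while peak intermediate memory scales with the slice size.

```python
def chunked_apply(fn, xs, chunk_size):
    """Apply fn to xs in slices of chunk_size and concatenate the results."""
    out = []
    for i in range(0, len(xs), chunk_size):
        # Larger chunks: fewer, bigger intermediate buffers (faster, more memory).
        # Smaller chunks: more, smaller buffers (slower, less memory).
        out.extend(fn(xs[i:i + chunk_size]))
    return out

squares = chunked_apply(lambda chunk: [x * x for x in chunk], list(range(10)), 4)
assert squares == [x * x for x in range(10)]
```

Auto-tuning amounts to picking the largest such slice size that still fits in memory for the given input length, which is why the configured value is treated only as a minimum.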
As noted in the AlphaFold-Multimer paper, the AlphaFold/OpenFold template
stack is a major memory bottleneck for inference on long sequences. OpenFold
supports two mutually exclusive inference modes to address this issue. One,
`average_templates` in the `template` section of the config, is similar to the
solution offered by AlphaFold-Multimer, which is simply to average individual
template representations. Our version is modified slightly to accommodate
weights trained using the standard template algorithm. Using said weights, we
notice no significant difference in performance between the averaged template
embeddings and the standard ones. The second, `offload_templates`, temporarily
offloads individual template embeddings into CPU memory. The former is an
approximation while the latter is slightly slower; both allow the model to
utilize arbitrarily many templates across sequence lengths. Both are disabled
by default, and it is up to the user to determine which best suits their needs,
if either.
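The averaging idea can be sketched in isolation (a toy illustration, not the OpenFold implementation): collapsing N per-template embedding vectors into a single elementwise mean removes the dependence of downstream memory on the number of templates.

```python
def average_templates(embeddings):
    # embeddings: a list of equal-length per-template embedding vectors.
    # Returns their elementwise mean, so downstream modules see one vector
    # regardless of how many templates were supplied.
    n = len(embeddings)
    return [sum(col) / n for col in zip(*embeddings)]

avg = average_templates([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
assert avg == [3.0, 4.0]
```

Offloading, by contrast, keeps every individual embedding but parks them in CPU memory between uses, which is exact but pays a host-device transfer cost.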
### Training

To train the model, you will first need to precompute protein alignments.

...
openfold/config.py (view file @ ca2e07bf)

```diff
@@ -143,6 +143,7 @@ tm_enabled = mlc.FieldReference(False, field_type=bool)
 eps = mlc.FieldReference(1e-8, field_type=float)
 templates_enabled = mlc.FieldReference(True, field_type=bool)
 embed_template_torsion_angles = mlc.FieldReference(True, field_type=bool)
+tune_chunk_size = mlc.FieldReference(True, field_type=bool)
 NUM_RES = "num residues placeholder"
 NUM_MSA_SEQ = "msa placeholder"
@@ -409,6 +410,7 @@ config = mlc.ConfigDict(
 "msa_dropout": 0.15,
 "pair_dropout": 0.25,
 "clear_cache_between_blocks": True,
+"tune_chunk_size": tune_chunk_size,
 "inf": 1e9,
 "eps": eps,  # 1e-10,
 "ckpt": blocks_per_ckpt is not None,
@@ -431,6 +433,7 @@ config = mlc.ConfigDict(
 "pair_dropout": 0.25,
 "blocks_per_ckpt": blocks_per_ckpt,
 "clear_cache_between_blocks": False,
+"tune_chunk_size": tune_chunk_size,
 "inf": 1e9,
 "eps": eps,  # 1e-10,
 },
```