Commit 0333b7a3 authored by Gustaf Ahdritz

Fix sidechain loss, touch up README

parent 68ba77e5
@@ -10,7 +10,8 @@ OpenFold also depends on `openmm==7.5.1` and `pdbfixer`, which are only
 available via `conda`.
 For convenience, we provide a script that installs Miniconda locally, creates a
-`conda` virtual environment, and installs all Python dependencies. Run:
+`conda` virtual environment, installs all Python dependencies, and downloads
+useful resources (including DeepMind's pretrained parameters). Run:
 ```bash
 scripts/install_third_party_dependencies.sh
@@ -22,7 +23,7 @@ To activate the environment, run:
 scripts/activate_conda_venv.sh
 ```
-To deactivate it, run
+To deactivate it, run:
 ```bash
 scripts/deactivate_conda_venv.sh
@@ -30,11 +31,18 @@ scripts/deactivate_conda_venv.sh
 ## Features
-OpenFold reproduces AlphaFold's inference and data processing pipelines. With
-the exception of model ensembling, which fared poorly in DeepMind's ablation
-testing and is therefore omitted here, OpenFold supports all features of the
-original required for inference. It is even capable of importing AlphaFold's
-original pretrained weights.
+OpenFold supports all features of the original AlphaFold required for inference
+with the sole exception of model ensembling, which fared poorly in DeepMind's
+ablation testing. It is even capable of importing AlphaFold's original
+pretrained model parameters.
+Future versions will support multi-GPU training with
+[DeepSpeed](https://github.com/microsoft/DeepSpeed).
+## Copyright notice
+While AlphaFold's and, by extension, OpenFold's source code is licensed under
+the permissive Apache Licence, Version 2.0, DeepMind's pretrained parameters
+remain under the more restrictive CC BY-NC 4.0 license, a copy of which is
+downloaded to `openfold/resources/params` by the installation script. They are
+thereby made unavailable for commercial use.
@@ -167,19 +167,42 @@ def sidechain_loss(
     **kwargs,
 ) -> torch.Tensor:
     renamed_gt_frames = (
-        (1. - alt_naming_is_better[..., None, None, None, None]) *
+        (1. - alt_naming_is_better[..., None, None, None]) *
         rigidgroups_gt_frames +
-        alt_naming_is_better[..., None, None, None, None] *
+        alt_naming_is_better[..., None, None, None] *
         rigidgroups_alt_gt_frames
     )
+
+    batch_dims = sidechain_frames.shape[:-5]
+
+    # Steamroll the inputs
+    sidechain_frames = sidechain_frames[-1]
+    sidechain_frames = sidechain_frames.view(
+        *batch_dims, -1, 4, 4
+    )
+    sidechain_frames = T.from_4x4(sidechain_frames)
+    renamed_gt_frames = renamed_gt_frames.view(
+        *batch_dims, -1, 4, 4
+    )
+    renamed_gt_frames = T.from_4x4(renamed_gt_frames)
+    rigidgroups_gt_exists = rigidgroups_gt_exists.reshape(
+        *batch_dims, -1
+    )
+    sidechain_atom_pos = sidechain_atom_pos[-1]
+    sidechain_atom_pos = sidechain_atom_pos.view(
+        *batch_dims, -1, 3
+    )
+    renamed_atom14_gt_positions = renamed_atom14_gt_positions.view(
+        *batch_dims, -1, 3
+    )
+    renamed_atom14_gt_exists = renamed_atom14_gt_exists.view(
+        *batch_dims, -1
+    )
+
     fape = compute_fape(
         sidechain_frames,
         renamed_gt_frames,
-        gt_exists,
+        rigidgroups_gt_exists,
         sidechain_atom_pos,
         renamed_atom14_gt_positions,
         renamed_atom14_gt_exists,
......
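Why three `None`s rather than four: `alt_naming_is_better` carries one flag per residue, while the rigid-group frame tensors carry shape `[*, N_res, 8, 4, 4]`. With four `None`s the mask broadcasts silently to the wrong shape instead of raising an error, which is what made the loss incorrect. A minimal NumPy sketch (the shapes here are assumptions based on AlphaFold's rigid-group frame layout):

```python
import numpy as np

n_res = 5  # hypothetical residue count
alt_naming_is_better = np.zeros(n_res)              # [N_res], one flag per residue
rigidgroups_gt_frames = np.zeros((n_res, 8, 4, 4))  # [N_res, 8 rigid groups, 4x4 frames]

# Buggy: four Nones yield [N_res, 1, 1, 1, 1], which NumPy right-aligns
# against [N_res, 8, 4, 4] and broadcasts to [N_res, N_res, 8, 4, 4].
buggy = alt_naming_is_better[..., None, None, None, None] * rigidgroups_gt_frames
assert buggy.shape == (n_res, n_res, 8, 4, 4)  # silently wrong, no error raised

# Fixed: three Nones yield [N_res, 1, 1, 1], which broadcasts to the
# frames' own shape, masking each residue's frames as intended.
fixed = alt_naming_is_better[..., None, None, None] * rigidgroups_gt_frames
assert fixed.shape == (n_res, 8, 4, 4)
```

Because broadcasting pads missing leading dimensions rather than failing, the extra `None` produced a plausible-looking tensor with a spurious residue axis, so the bug surfaced as a wrong loss value rather than a shape error.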
@@ -59,11 +59,11 @@ def random_affine_vectors(dim):
     for d in dim:
         prod_dim *= d
-    affines = np.zeros((prod_dim, 7))
+    affines = np.zeros((prod_dim, 7)).astype(np.float32)
     for i in range(prod_dim):
         affines[i, :4] = Rotation.random(random_state=42).as_quat()
-        affines[i, 4:] = np.random.rand(3,)
+        affines[i, 4:] = np.random.rand(3,).astype(np.float32)
     return affines.reshape(*dim, 7)
@@ -73,11 +73,11 @@ def random_affine_4x4s(dim):
     for d in dim:
         prod_dim *= d
-    affines = np.zeros((prod_dim, 4, 4))
+    affines = np.zeros((prod_dim, 4, 4)).astype(np.float32)
     for i in range(prod_dim):
         affines[i, :3, :3] = Rotation.random(random_state=42).as_matrix()
-        affines[i, :3, 3] = np.random.rand(3,)
+        affines[i, :3, 3] = np.random.rand(3,).astype(np.float32)
     affines[:, 3, 3] = 1
......
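The `astype(np.float32)` casts in these test helpers matter because `np.zeros` and `np.random.rand` both default to `float64`, while PyTorch tensors default to `float32`; a fixture built in float64 triggers dtype mismatches when converted to a tensor and compared against float32 model output. A small NumPy-only sketch of the behavior (the PyTorch interaction is the assumed motivation):

```python
import numpy as np

# np.zeros and np.random.rand produce float64 by default ...
affines = np.zeros((10, 7))
assert affines.dtype == np.float64

# ... so casting once at construction keeps the whole fixture in float32,
# matching PyTorch's default tensor dtype.
affines32 = np.zeros((10, 7)).astype(np.float32)
assert affines32.dtype == np.float32

# Assigning float64 values into a float32 array downcasts automatically,
# so the extra per-element .astype(np.float32) calls are defensive.
affines32[0, 4:] = np.random.rand(3,)
assert affines32.dtype == np.float32
```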