# OpenFold

A faithful PyTorch reproduction of DeepMind's 
[AlphaFold 2](https://github.com/deepmind/alphafold).

## Features

OpenFold carefully reproduces (almost) all of the features of the original open
source inference code. The sole exception is model ensembling, which fared
poorly in DeepMind's own ablation testing and is being phased out in future
DeepMind experiments. It is omitted here for the sake of reducing clutter. In 
cases where the *Nature* paper differs from the source, we always defer to the 
latter. 

OpenFold is built to support inference with AlphaFold's original JAX weights.
Try it out with our [Colab notebook](https://colab.research.google.com/github/aqlaboratory/openfold/blob/main/notebooks/OpenFold.ipynb)
(not yet visible from Colab because the repo is still private).

Unlike DeepMind's public code, OpenFold is also trainable. It can be trained 
with [DeepSpeed](https://github.com/microsoft/deepspeed) and with mixed 
precision. `bfloat16` training is not currently supported, but will be in the 
future.

## Installation (Linux)

Python dependencies available through `pip` are provided in `requirements.txt`. 
OpenFold depends on `openmm==7.5.1` and `pdbfixer`, which are only available 
via `conda`. For producing sequence alignments, you'll also need `jackhmmer`, 
`kalign`, and the [HH-suite](https://github.com/soedinglab/hh-suite) installed 
on your system. Finally, some download scripts require `aria2c`.
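
On Debian/Ubuntu, for example, most of these system tools are available from the
distribution's repositories (the package names below are our assumption for those
distributions; the HH-suite can be installed with the script described below):

```bash
# Assumed Debian/Ubuntu package names: `hmmer` provides jackhmmer and
# `aria2` provides aria2c. Adjust for your distribution's package manager.
sudo apt-get install hmmer kalign aria2
```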

Note that the required version of PyTorch Lightning is 1.5.0, which has not
yet been released. Install that package from the nightly build.
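
Until 1.5.0 lands on PyPI, one way to get a nightly build (a sketch, assuming the
nightly tracks the `master` branch) is:

```bash
# Install PyTorch Lightning from the master branch (pre-1.5.0 nightly)
pip install git+https://github.com/PyTorchLightning/pytorch-lightning.git@master
```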

For convenience, we provide a script that installs Miniconda locally, creates a 
`conda` virtual environment, installs all Python dependencies, and downloads
useful resources (including DeepMind's pretrained parameters). Run:

```bash
scripts/install_third_party_dependencies.sh
```

To activate the environment, run:

```bash
source scripts/activate_conda_env.sh
```

To deactivate it, run:

```bash
source scripts/deactivate_conda_env.sh
```

To install the HH-suite to `/usr/bin`, run (as root):

```bash
# scripts/install_hh_suite.sh
```

## Usage

To download DeepMind's pretrained parameters and common ground truth data, run:

```bash
scripts/download_data.sh data/
```

You have two choices for downloading protein databases, depending on whether 
you want to use DeepMind's MSA generation pipeline (with HMMER & HHblits) or 
[ColabFold](https://github.com/sokrypton/ColabFold)'s, which uses the faster
[MMseqs2](https://github.com/soedinglab/mmseqs2) instead. For the former, run:

```bash
scripts/download_alphafold_databases.sh data/
```

For the latter, run:

```bash
scripts/download_mmseqs_databases.sh data/    # downloads .tar files
scripts/prep_mmseqs_databases.sh data/        # unpacks and preps the databases
```

Make sure to run the latter command on the machine that will be used for MSA
generation (the script estimates how the precomputed database index used by
MMseqs2 should be split according to the memory available on the system).

Alternatively, you can use raw MSAs from 
[ProteinNet](https://github.com/aqlaboratory/proteinnet). After downloading
the database, use `scripts/prepare_proteinnet_msas.py` to convert the data into
a format recognized by the OpenFold parser. The resulting directory becomes the
`alignment_dir` used in subsequent steps.
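
A hypothetical invocation (the arguments here are placeholders of our own;
consult the script's `--help` for its actual interface):

```bash
# Hypothetical arguments -- verify with:
#   python3 scripts/prepare_proteinnet_msas.py --help
python3 scripts/prepare_proteinnet_msas.py proteinnet_dir/ alignment_dir/
```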

### Inference

To run inference on a sequence using a set of DeepMind's pretrained parameters, 
run, e.g.:

```bash
python3 run_pretrained_openfold.py \
    target.fasta \
    data/uniref90/uniref90.fasta \
    data/mgnify/mgy_clusters_2018_12.fa \
    data/pdb70/pdb70 \
    data/pdb_mmcif/mmcif_files/ \
    data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --output_dir ./ \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --device cuda:1 \
    --jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
    --hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
    --hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
    --kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign
```

where `data` is the same directory as in the previous step. If `jackhmmer`, 
`hhblits`, `hhsearch` and `kalign` are available at the default path of 
`/usr/bin`, their `binary_path` command-line arguments can be dropped.
If you've already computed alignments for the query (see "Training"), you have 
the option to circumvent the expensive alignment computation here.
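
As a sketch, reusing alignments already stored in `alignment_dir/` might look
like the following (the flag name is our assumption; verify it with the
script's `--help`):

```bash
# Assumed flag for reusing precomputed alignments -- verify with --help.
# Database and binary arguments from the full command above are elided.
python3 run_pretrained_openfold.py target.fasta \
    --use_precomputed_alignments alignment_dir/ \
    --output_dir ./
```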

### Training

After activating the OpenFold environment with 
`source scripts/activate_conda_env.sh`, install OpenFold by running

```bash
python setup.py install
```

To train the model, you will first need to precompute protein alignments. 

You have two options. You can use the same procedure DeepMind used by running
the following:

```bash
python3 scripts/precompute_alignments.py mmcif_dir/ alignment_dir/ \
    data/uniref90/uniref90.fasta \
    data/mgnify/mgy_clusters_2018_12.fa \
    data/pdb70/pdb70 \
    data/pdb_mmcif/mmcif_files/ \
    data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --cpus 16 \
    --jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
    --hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
    --hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
    --kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign
```

As noted before, you can skip the `binary_path` arguments if these binaries are at `/usr/bin`.
Expect this step to take a very long time, even for small numbers of proteins.

Alternatively, you can generate MSAs with the ColabFold pipeline (and templates
with HHsearch) with:

```bash
python3 scripts/precompute_alignments_mmseqs.py input.fasta \
    data/mmseqs_dbs \
    uniref30_2103_db \
    alignment_dir \
    ~/MMseqs2/build/bin/mmseqs \
    /usr/bin/hhsearch \
    --env_db colabfold_envdb_202108_db \
    --pdb70 data/pdb70/pdb70
```

where `input.fasta` is a FASTA file containing one or more query sequences. To 
generate an input FASTA from a directory of mmCIF files, we provide
`scripts/mmcif_dir_to_fasta.py`.
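
A hypothetical usage (the argument order is our assumption; see the script's
`--help`):

```bash
# Hypothetical arguments -- verify with:
#   python3 scripts/mmcif_dir_to_fasta.py --help
python3 scripts/mmcif_dir_to_fasta.py mmcif_dir/ input.fasta
```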

Next, generate a cache of certain datapoints in the mmCIF files:

```bash
python3 scripts/generate_mmcif_cache.py \
    mmcif_dir/ \
    mmcif_cache.json \
    --no_workers 16
```

This cache is used to minimize the number of mmCIF parses performed during 
training-time data preprocessing. Finally, call the training script:

```bash
# Note: in multi-GPU settings, the seed must be specified
python3 train_openfold.py mmcif_dir/ alignment_dir/ template_mmcif_dir/ \
    2021-10-10 \
    --template_release_dates_cache_path mmcif_cache.json \
    --precision 16 \
    --gpus 8 --replace_sampler_ddp=True \
    --seed 42 \
    --deepspeed_config_path deepspeed_config.json \
    --resume_from_ckpt ckpt_dir/
```

where `--template_release_dates_cache_path` is a path to the `.json` file
generated in the previous step. A suitable DeepSpeed configuration file can be 
generated with `scripts/build_deepspeed_config.py`. The training script is 
written with [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) 
and supports the full range of training options that entails, including 
multi-node distributed training. For more information, consult PyTorch 
Lightning documentation and the `--help` flag of the training script.
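
As a sketch, the DeepSpeed config consumed by the command above might be
generated like so (the output-path argument is our assumption; see the
script's `--help`):

```bash
# Hypothetical invocation -- verify with:
#   python3 scripts/build_deepspeed_config.py --help
python3 scripts/build_deepspeed_config.py deepspeed_config.json
```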

## Testing

To run unit tests, use

```bash
scripts/run_unit_tests.sh
```

The script is a thin wrapper around Python's `unittest` suite and accepts
`unittest` arguments. E.g., to run a specific test verbosely:

```bash
scripts/run_unit_tests.sh -v tests.test_model
```

Certain tests require that AlphaFold be installed in the same Python
environment. These run components of AlphaFold and OpenFold side by side and
ensure that output activations are adequately similar. For most modules, we
target a maximum difference of `1e-4`.

## Copyright notice

While AlphaFold's and, by extension, OpenFold's source code is licensed under
the permissive Apache License, Version 2.0, DeepMind's pretrained parameters 
remain under the more restrictive CC BY-NC 4.0 license, a copy of which is 
downloaded to `openfold/resources/params` by the installation script. The
parameters are thereby unavailable for commercial use.

## Contributing

If you encounter problems using OpenFold, feel free to create an issue!