![header](imgs/OpenFold_viz_banner.jpg)

# OpenFold

A faithful PyTorch reproduction of DeepMind's 
[AlphaFold 2](https://github.com/deepmind/alphafold).

## Features

OpenFold carefully reproduces (almost) all of the features of the original open
source inference code (v2.0.1). The sole exception is model ensembling, which 
fared poorly in DeepMind's own ablation testing and is being phased out in future
DeepMind experiments. It is omitted here for the sake of reducing clutter. In 
cases where the *Nature* paper differs from the source, we always defer to the 
latter. 

OpenFold is built to support inference with AlphaFold's original JAX weights.
It's also faster than the official code on GPU. Try it out for yourself with 
our [Colab notebook](https://colab.research.google.com/github/aqlaboratory/openfold/blob/main/notebooks/OpenFold.ipynb).

Unlike DeepMind's public code, OpenFold is also trainable. It can be trained 
with [DeepSpeed](https://github.com/microsoft/deepspeed) and with either `fp16`
or `bfloat16` half-precision.

OpenFold is equipped with an implementation of low-memory attention 
([Rabe & Staats 2021](https://arxiv.org/pdf/2112.05682.pdf)), which 
enables inference on extremely long chains.

We've modified FastFold's custom CUDA kernels to support in-place attention
during inference and training. These use 4x and 5x less GPU memory than 
equivalent FastFold and stock PyTorch implementations, respectively.

We also make available efficient scripts for generating alignments. We've
used them to generate millions of alignments, which will be released alongside
original OpenFold weights trained from scratch using our code (more on that soon).

## Installation (Linux)

All Python dependencies are specified in `environment.yml`. For producing sequence 
alignments, you'll also need `kalign`, the [HH-suite](https://github.com/soedinglab/hh-suite), 
and one of {`jackhmmer`, [MMseqs2](https://github.com/soedinglab/mmseqs2) (nightly build)} 
installed on your system. Finally, some download scripts require `aria2c`.
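
As a quick, optional sanity check (this is our suggestion, not part of any
OpenFold script), you can confirm that the external tools are visible on your
`PATH`:

```bash
# Report the location of each external tool, or flag it as missing.
for tool in kalign hhblits hhsearch jackhmmer aria2c; do
    command -v "$tool" >/dev/null \
        && echo "$tool: $(command -v "$tool")" \
        || echo "$tool: NOT FOUND"
done
```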

For convenience, we provide a script that installs Miniconda locally, creates a 
`conda` virtual environment, installs all Python dependencies, and downloads
useful resources (including DeepMind's pretrained parameters). Run:

```bash
scripts/install_third_party_dependencies.sh
```

To activate the environment, run:

```bash
source scripts/activate_conda_env.sh
```

To deactivate it, run:

```bash
source scripts/deactivate_conda_env.sh
```

With the environment active, compile OpenFold's CUDA kernels with:

```bash
python3 setup.py install
```

To install the HH-suite to `/usr/bin`, run (with root privileges):

```bash
scripts/install_hh_suite.sh
```

## Usage

To download DeepMind's pretrained parameters and common ground truth data, run:

```bash
bash scripts/download_data.sh data/
```

You have two choices for downloading protein databases, depending on whether 
you want to use DeepMind's MSA generation pipeline (with HMMER & HHblits) or 
[ColabFold](https://github.com/sokrypton/ColabFold)'s, which uses the faster
MMseqs2 instead. For the former, run:

```bash
bash scripts/download_alphafold_dbs.sh data/
```

For the latter, run:

```bash
bash scripts/download_mmseqs_dbs.sh data/    # downloads .tar files
bash scripts/prep_mmseqs_dbs.sh data/        # unpacks and preps the databases
```

Make sure to run the latter command on the machine that will be used for MSA
generation (the script estimates how the precomputed database index used by
MMseqs2 should be split according to the memory available on the system).

Alternatively, you can use raw MSAs from 
[ProteinNet](https://github.com/aqlaboratory/proteinnet). After downloading
the database, use `scripts/prep_proteinnet_msas.py` to convert the data into
a format recognized by the OpenFold parser. The resulting directory becomes the
`alignment_dir` used in subsequent steps. Use `scripts/unpack_proteinnet.py` to
extract `.core` files from ProteinNet text files.
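
A minimal sketch of that two-step flow (the argument layouts here are
assumptions; consult each script's `--help` for the real interface):

```bash
# Hypothetical paths: extract .core files from the ProteinNet text records,
# then convert the raw MSAs into a directory usable as alignment_dir.
python3 scripts/unpack_proteinnet.py proteinnet_text_dir/ core_dir/
python3 scripts/prep_proteinnet_msas.py proteinnet_msa_dir/ alignment_dir/
```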

For both inference and training, the model's hyperparameters can be tuned from
`openfold/config.py`. Of course, if you plan to perform inference using 
DeepMind's pretrained parameters, you will only be able to make changes that
do not affect the shapes of model parameters. For an example of initializing
the model, consult `run_pretrained_openfold.py`.
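
For reference, a minimal initialization sketch (assuming the `model_config`
and `AlphaFold` interfaces used by `run_pretrained_openfold.py`; the preset
name is just an example):

```bash
python3 - <<'PY'
# Minimal sketch mirroring run_pretrained_openfold.py.
from openfold.config import model_config
from openfold.model.model import AlphaFold

config = model_config("model_1")  # adjust config values here before building
model = AlphaFold(config)
model.eval()                      # inference mode
PY
```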

### Inference

To run inference on one or more sequences using a set of DeepMind's 
pretrained parameters, run e.g.:

```bash
python3 run_pretrained_openfold.py \
    target.fasta \
    data/pdb_mmcif/mmcif_files/ \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --output_dir ./ \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --model_device cuda:1 \
    --jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
    --hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
    --hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
    --kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign
```

where `data` is the same directory as in the previous step. If `jackhmmer`, 
`hhblits`, `hhsearch` and `kalign` are available at the default path of 
`/usr/bin`, their `binary_path` command-line arguments can be dropped.
If you've already computed alignments for the query, you have the option to 
skip the expensive alignment computation here.
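
For example, assuming your version exposes the `--use_precomputed_alignments`
flag (check `run_pretrained_openfold.py --help` to confirm), such a run might
look like:

```bash
# Sketch only: alignments for the query are assumed to live in alignment_dir/.
# Depending on your version, template database paths may still be required.
python3 run_pretrained_openfold.py \
    target.fasta \
    data/pdb_mmcif/mmcif_files/ \
    --use_precomputed_alignments alignment_dir/ \
    --output_dir ./ \
    --model_device cuda:1
```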

Note that chunking (as defined in section 1.11.8 of the AlphaFold 2 supplement)
is enabled by default in inference mode. To disable it, set `globals.chunk_size`
to `None` in the config.
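
For instance, a quick programmatic way to disable it (assuming the
`model_config` helper in `openfold/config.py`):

```bash
python3 - <<'PY'
# Sketch: clear globals.chunk_size to disable chunking at inference time.
from openfold.config import model_config

config = model_config("model_1")   # example preset
config.globals.chunk_size = None   # None disables chunking
PY
```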

### Training

To train the model, you will first need to precompute protein alignments. 

You have two options. You can use the same procedure DeepMind used by running
the following:

```bash
python3 scripts/precompute_alignments.py mmcif_dir/ alignment_dir/ \
    data/uniref90/uniref90.fasta \
    data/mgnify/mgy_clusters_2018_12.fa \
    data/pdb70/pdb70 \
    data/pdb_mmcif/mmcif_files/ \
    data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --cpus 16 \
    --jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
    --hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
    --hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
    --kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign
```

As noted before, you can skip the `binary_path` arguments if these binaries are 
at `/usr/bin`. Expect this step to take a very long time, even for small 
numbers of proteins.

Alternatively, you can generate MSAs with the ColabFold pipeline (and templates
with HHsearch) with:

```bash
python3 scripts/precompute_alignments_mmseqs.py input.fasta \
    data/mmseqs_dbs \
    uniref30_2103_db \
    alignment_dir \
    ~/MMseqs2/build/bin/mmseqs \
    /usr/bin/hhsearch \
    --env_db colabfold_envdb_202108_db \
    --pdb70 data/pdb70/pdb70
```

where `input.fasta` is a FASTA file containing one or more query sequences. To 
generate an input FASTA from a directory of mmCIF and/or ProteinNet `.core` 
files, we provide `scripts/data_dir_to_fasta.py`.
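
A hypothetical invocation (the argument layout is an assumption; check the
script's `--help`):

```bash
# Illustrative paths only.
python3 scripts/data_dir_to_fasta.py data_dir/ input.fasta
```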

Next, generate a cache of certain datapoints in the template mmCIF files:

```bash
python3 scripts/generate_mmcif_cache.py \
    mmcif_dir/ \
    mmcif_cache.json \
    --no_workers 16
```

This cache is used to pre-filter templates. 

Next, generate a separate chain-level cache with data used for training-time 
data filtering:

```bash
python3 scripts/generate_chain_data_cache.py \
    mmcif_dir/ \
    chain_data_cache.json \
    --cluster_file clusters-by-entity-40.txt \
    --no_workers 16
```

where the `cluster_file` argument is a file of chain clusters, one cluster
per line (e.g. [PDB40](https://cdn.rcsb.org/resources/sequence/clusters/clusters-by-entity-40.txt)).
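
The PDB40 clustering linked above can be fetched directly, e.g. with `aria2c`:

```bash
aria2c https://cdn.rcsb.org/resources/sequence/clusters/clusters-by-entity-40.txt
```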

Finally, call the training script:

```bash
python3 train_openfold.py mmcif_dir/ alignment_dir/ template_mmcif_dir/ \
    2021-10-10 \
    --template_release_dates_cache_path mmcif_cache.json \
    --precision 16 \
    --gpus 8 --replace_sampler_ddp=True \
    --seed 42 \
    --deepspeed_config_path deepspeed_config.json \
    --checkpoint_every_epoch \
    --resume_from_ckpt ckpt_dir/ \
    --train_chain_data_cache_path chain_data_cache.json
```

where `--template_release_dates_cache_path` is a path to the mmCIF cache and, 
in multi-GPU settings, the seed must be specified explicitly. A suitable 
DeepSpeed configuration file can be generated with 
`scripts/build_deepspeed_config.py`. The training script is 
written with [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) 
and supports the full range of training options that entails, including 
multi-node distributed training. For more information, consult PyTorch 
Lightning documentation and the `--help` flag of the training script.

Note that the data directory can also contain PDB files previously output by
the model. These are treated as members of the self-distillation set and are
subjected to distillation-set-only preprocessing steps.

## Testing

To run unit tests, use

```bash
scripts/run_unit_tests.sh
```

The script is a thin wrapper around Python's `unittest` suite, and recognizes
`unittest` arguments. E.g., to run a specific test verbosely:

```bash
scripts/run_unit_tests.sh -v tests.test_model
```

Certain tests require that AlphaFold (v2.0.1) be installed in the same Python
environment. These run components of AlphaFold and OpenFold side by side and
ensure that output activations are adequately similar. For most modules, we
target a maximum pointwise difference of `1e-4`.

## Building and using the Docker container

### Building the Docker image

OpenFold can be built as a Docker container using the included Dockerfile. To build it, run the following command from the root of this repository:

```bash
docker build -t openfold .
```

### Running the Docker container

The built container includes both `run_pretrained_openfold.py` and `train_openfold.py`, as well as all necessary software dependencies. It does not contain the model parameters or the sequence and structural databases; download these to the host machine following the instructions in the Usage section above.

The Docker container installs all conda components to the base conda environment in `/opt/conda` and installs OpenFold itself in `/opt/openfold`.

Before running the container, you can verify that your Docker installation can properly communicate with your GPU by running:

```bash
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```

Note the `--gpus all` option passed to `docker run`. It is required for the container to access the GPUs on the host machine.

To run the inference code under Docker, use a command like the one below. In this example, the AlphaFold parameters and databases are located at `/mnt/alphafold_database` on the host machine, and the input files are in the current working directory. Adjust the volume mounts as needed to reflect the locations of your data.

```bash
docker run \
--gpus all \
-v $PWD/:/data \
-v /mnt/alphafold_database/:/database \
-ti openfold:latest \
python3 /opt/openfold/run_pretrained_openfold.py \
/data/input.fasta \
/database/pdb_mmcif/mmcif_files/ \
--uniref90_database_path /database/uniref90/uniref90.fasta \
--mgnify_database_path /database/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path /database/pdb70/pdb70 \
--uniclust30_database_path /database/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--output_dir /data \
--bfd_database_path /database/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--model_device cuda:0 \
--jackhmmer_binary_path /opt/conda/bin/jackhmmer \
--hhblits_binary_path /opt/conda/bin/hhblits \
--hhsearch_binary_path /opt/conda/bin/hhsearch \
--kalign_binary_path /opt/conda/bin/kalign \
--param_path /database/params/params_model_1.npz
```

## Copyright notice

While AlphaFold's and, by extension, OpenFold's source code is licensed under
the permissive Apache License, Version 2.0, DeepMind's pretrained parameters 
fall under the CC BY 4.0 license, a copy of which is downloaded to 
`openfold/resources/params` by the installation script. Note that the latter
replaces the original, more restrictive CC BY-NC 4.0 license as of January 2022.

## Contributing

If you encounter problems using OpenFold, feel free to create an issue! We also
welcome pull requests from the community.

## Citing this work

For now, cite OpenFold as follows:

```bibtex
@software{Ahdritz_OpenFold_2021,
  author = {Ahdritz, Gustaf and Bouatta, Nazim and Kadyan, Sachin and Xia, Qinghui and Gerecke, William and AlQuraishi, Mohammed},
  doi = {10.5281/zenodo.5709539},
  month = {11},
  title = {{OpenFold}},
  url = {https://github.com/aqlaboratory/openfold},
  year = {2021}
}
```

Any work that cites OpenFold should also cite AlphaFold.