![header](imgs/OpenFold_viz_banner.jpg)

# OpenFold

A faithful PyTorch reproduction of DeepMind's 
[AlphaFold 2](https://github.com/deepmind/alphafold).

## Features

OpenFold carefully reproduces (almost) all of the features of the original open
source inference code (v2.0.1). The sole exception is model ensembling, which 
fared poorly in DeepMind's own ablation testing and is being phased out in future
DeepMind experiments. It is omitted here for the sake of reducing clutter. In 
cases where the *Nature* paper differs from the source, we always defer to the 
latter.

OpenFold is built to support inference with AlphaFold's official parameters. Try it out for yourself with 
our [Colab notebook](https://colab.research.google.com/github/aqlaboratory/openfold/blob/main/notebooks/OpenFold.ipynb).

Additionally, OpenFold has the following advantages over the reference implementation:

- OpenFold is **trainable** in full precision or `bfloat16` half-precision, with or without [DeepSpeed](https://github.com/microsoft/deepspeed).
- **Faster inference** on GPU.
- **Inference on extremely long chains**, made possible by our implementation of low-memory attention 
([Rabe & Staats 2021](https://arxiv.org/pdf/2112.05682.pdf)).
- **Custom CUDA attention kernels** modified from [FastFold](https://github.com/hpcaitech/FastFold)'s 
kernels support in-place attention during inference and training. They use 
4x and 5x less GPU memory than equivalent FastFold and stock PyTorch 
implementations, respectively.
- **Efficient alignment scripts** using the original AlphaFold HHblits/JackHMMER pipeline or [ColabFold](https://github.com/sokrypton/ColabFold)'s, which uses the faster MMseqs2 instead. We've used them to generate millions of alignments that will be released alongside original OpenFold weights, trained from scratch using our code (more on that soon).

## Installation (Linux)

All Python dependencies are specified in `environment.yml`. For producing sequence 
alignments, you'll also need `kalign`, the [HH-suite](https://github.com/soedinglab/hh-suite), 
and one of {`jackhmmer`, [MMseqs2](https://github.com/soedinglab/mmseqs2) (nightly build)} 
installed on your system. Finally, some download scripts require `aria2c`.

For convenience, we provide a script that installs Miniconda locally, creates a 
`conda` virtual environment, installs all Python dependencies, and downloads
useful resources (including DeepMind's pretrained parameters). Run:

```bash
scripts/install_third_party_dependencies.sh
```

To activate the environment, run:

```bash
source scripts/activate_conda_env.sh
```

To deactivate it, run:

```bash
source scripts/deactivate_conda_env.sh
```

With the environment active, compile OpenFold's CUDA kernels with

```bash
python3 setup.py install
```

To install the HH-suite to `/usr/bin`, run (as root):

```bash
scripts/install_hh_suite.sh
```

## Usage

To download DeepMind's pretrained parameters and common ground truth data, run:

```bash
bash scripts/download_data.sh data/
```

You have two choices for downloading protein databases, depending on whether 
you want to use DeepMind's MSA generation pipeline (with JackHMMER & HHblits) or 
[ColabFold](https://github.com/sokrypton/ColabFold)'s, which uses the faster
MMseqs2 instead. For the former, run:

```bash
bash scripts/download_alphafold_dbs.sh data/
```

For the latter, run:

```bash
bash scripts/download_mmseqs_dbs.sh data/    # downloads .tar files
bash scripts/prep_mmseqs_dbs.sh data/        # unpacks and preps the databases
```

Make sure to run the latter command on the machine that will be used for MSA
generation (the script estimates how the precomputed database index used by
MMseqs2 should be split according to the memory available on the system).

Alternatively, you can use raw MSAs from 
[ProteinNet](https://github.com/aqlaboratory/proteinnet). After downloading
the database, use `scripts/prep_proteinnet_msas.py` to convert the data into
a format recognized by the OpenFold parser. The resulting directory becomes the
`alignment_dir` used in subsequent steps. Use `scripts/unpack_proteinnet.py` to
extract `.core` files from ProteinNet text files.

For both inference and training, the model's hyperparameters can be tuned from
`openfold/config.py`. Of course, if you plan to perform inference using 
DeepMind's pretrained parameters, you will only be able to make changes that
do not affect the shapes of model parameters. For an example of initializing
the model, consult `run_pretrained_openfold.py`.
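
As a minimal sketch of model initialization (the names below mirror the
codebase; treat `run_pretrained_openfold.py` as the authoritative version):

```python
from openfold.config import model_config
from openfold.model.model import AlphaFold
from openfold.utils.import_weights import import_jax_weights_

# Build the config preset for one of the pretrained models. Only settings
# that leave parameter shapes unchanged are safe to modify here.
config = model_config("model_1")

model = AlphaFold(config)
model = model.eval()

# Load DeepMind's pretrained parameters (downloaded by the install script).
import_jax_weights_(model, "openfold/resources/params/params_model_1.npz")
```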

### Inference

To run inference on a sequence or multiple sequences using a set of DeepMind's 
pretrained parameters, run e.g.:

```bash
python3 run_pretrained_openfold.py \
    fasta_dir \
    data/pdb_mmcif/mmcif_files/ \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --output_dir ./ \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --model_device cuda:1 \
    --jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
    --hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
    --hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
    --kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign
```

where `data` is the same directory as in the previous step. If `jackhmmer`, 
`hhblits`, `hhsearch` and `kalign` are available at the default path of 
`/usr/bin`, their `binary_path` command-line arguments can be dropped.
If you've already computed alignments for the query, you can skip the expensive 
alignment computation by passing the directory that contains them to the 
`--use_precomputed_alignments` flag.

Note that chunking (as defined in section 1.11.8 of the AlphaFold 2 supplement)
is enabled by default in inference mode. To disable it, set `globals.chunk_size`
to `None` in the config.
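
For example, a minimal sketch using the `model_config` helper from 
`openfold/config.py`:

```python
from openfold.config import model_config

# Presets follow the AlphaFold model names ("model_1", ..., "model_5").
config = model_config("model_1")
config.globals.chunk_size = None  # disable chunking (raises peak memory)
```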

Inference-time low-memory attention (LMA) can be enabled in the model config.
This setting trades off speed for vastly improved memory usage. By default,
LMA is run with query and key chunk sizes of 1024 and 4096, respectively.
These represent a favorable tradeoff in most memory-constrained cases.
Power users can choose to tweak these settings in 
`openfold/model/primitives.py`. For more information on the LMA algorithm,
see the aforementioned Rabe & Staats preprint.
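
For illustration, enabling LMA might look like the following (the `use_lma` 
attribute name is an assumption; check `openfold/config.py` for the exact 
flag in your version):

```python
from openfold.config import model_config

config = model_config("model_1")
config.globals.use_lma = True  # assumed flag name: slower, but far less memory
```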

### Training

To train the model, you will first need to precompute protein alignments. 

You have two options. You can use the same procedure DeepMind used by running
the following:

```bash
python3 scripts/precompute_alignments.py mmcif_dir/ alignment_dir/ \
    data/uniref90/uniref90.fasta \
    data/mgnify/mgy_clusters_2018_12.fa \
    data/pdb70/pdb70 \
    data/pdb_mmcif/mmcif_files/ \
    data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --cpus 16 \
    --jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
    --hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
    --hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
    --kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign
```

As noted before, you can skip the `binary_path` arguments if these binaries are 
at `/usr/bin`. Expect this step to take a very long time, even for small 
numbers of proteins.

Alternatively, you can generate MSAs with the ColabFold pipeline (and templates
with HHsearch) with:

```bash
python3 scripts/precompute_alignments_mmseqs.py input.fasta \
    data/mmseqs_dbs \
    uniref30_2103_db \
    alignment_dir \
    ~/MMseqs2/build/bin/mmseqs \
    /usr/bin/hhsearch \
    --env_db colabfold_envdb_202108_db \
    --pdb70 data/pdb70/pdb70
```

where `input.fasta` is a FASTA file containing one or more query sequences. To 
generate an input FASTA from a directory of mmCIF and/or ProteinNet .core 
files, we provide `scripts/data_dir_to_fasta.py`.
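
For reference, a minimal `input.fasta` with two hypothetical query sequences 
looks like:

```
>query_1
MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ
>query_2
GSHMSLFDFFKNKGSAATATDRLKLILAKER
```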

Next, generate a cache of certain datapoints in the template mmCIF files:

```bash
python3 scripts/generate_mmcif_cache.py \
    mmcif_dir/ \
    mmcif_cache.json \
    --no_workers 16
```

This cache is used to pre-filter templates. 

Next, generate a separate chain-level cache with data used for training-time 
data filtering:

```bash
python3 scripts/generate_chain_data_cache.py \
    mmcif_dir/ \
    chain_data_cache.json \
    --cluster_file clusters-by-entity-40.txt \
    --no_workers 16
```

where the `cluster_file` argument is a file of chain clusters, one cluster
per line (e.g. [PDB40](https://cdn.rcsb.org/resources/sequence/clusters/clusters-by-entity-40.txt)).
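
For illustration, each line of that file lists the members of one cluster, 
whitespace-separated (hypothetical IDs shown):

```
101M_1 102M_1 1MBA_1
1ABC_1 2DEF_1
```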

Finally, call the training script:

```bash
python3 train_openfold.py mmcif_dir/ alignment_dir/ template_mmcif_dir/ \
    2021-10-10 \
    --template_release_dates_cache_path mmcif_cache.json \
    --precision 16 \
    --gpus 8 --replace_sampler_ddp=True \
    --seed 42 \
    --deepspeed_config_path deepspeed_config.json \
    --checkpoint_every_epoch \
    --resume_from_ckpt ckpt_dir/ \
    --train_chain_data_cache_path chain_data_cache.json
```

where `--template_release_dates_cache_path` is a path to the mmCIF cache 
generated above. Note that in multi-GPU settings, the `--seed` argument must 
be specified. A suitable DeepSpeed configuration file can be generated with 
`scripts/build_deepspeed_config.py`. The training script is 
written with [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) 
and supports the full range of training options that entails, including 
multi-node distributed training. For more information, consult PyTorch 
Lightning documentation and the `--help` flag of the training script.

Note that the data directory can also contain PDB files previously output by
the model. These are treated as members of the self-distillation set and are
subjected to distillation-set-only preprocessing steps.

## Testing

To run unit tests, use

```bash
scripts/run_unit_tests.sh
```

The script is a thin wrapper around Python's `unittest` suite, and recognizes
`unittest` arguments. E.g., to run a specific test verbosely:

```bash
scripts/run_unit_tests.sh -v tests.test_model
```

Certain tests require that AlphaFold (v2.0.1) be installed in the same Python
environment. These run components of AlphaFold and OpenFold side by side and
ensure that output activations are adequately similar. For most modules, we
target a maximum pointwise difference of `1e-4`.
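
For intuition, the criterion amounts to a check like the following (the 
tensors here are hypothetical stand-ins for a pair of matching activations, 
not the actual test fixtures):

```python
import torch

# Stand-ins for matching activations from OpenFold and AlphaFold.
openfold_act = torch.randn(128, 256)
alphafold_act = openfold_act + 5e-5 * torch.rand(128, 256)  # bounded noise

max_diff = torch.max(torch.abs(openfold_act - alphafold_act)).item()
assert max_diff < 1e-4, f"max pointwise difference {max_diff:.2e} exceeds 1e-4"
```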

## Building and using the Docker container

### Building the Docker image

OpenFold can be built as a Docker container using the included Dockerfile. To build it, run the following command from the root of this repository:

```bash
docker build -t openfold .
```

### Running the Docker container

The built container contains both `run_pretrained_openfold.py` and `train_openfold.py`, as well as all necessary software dependencies. It does not contain the model parameters or the sequence and structural databases; these should be downloaded to the host machine following the instructions in the Usage section above.

The Docker container installs all `conda` components to the base conda environment in `/opt/conda` and installs OpenFold itself in `/opt/openfold`.

Before running the Docker container, you can verify that your Docker installation is able to communicate properly with your GPU by running the following command:


```bash
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```

Note the `--gpus all` option passed to `docker run`; it is necessary for the container to use the GPUs on the host machine.

To run the inference code under Docker, you can use a command like the one below. In this example, the parameters and sequence databases from the AlphaFold download are located at `/mnt/alphafold_database` on the host machine, and the input files are in the current working directory. Adjust the volume mount locations as needed to reflect the locations of your data.

```bash
docker run \
--gpus all \
-v $PWD/:/data \
-v /mnt/alphafold_database/:/database \
-ti openfold:latest \
python3 /opt/openfold/run_pretrained_openfold.py \
/data/input.fasta \
/database/pdb_mmcif/mmcif_files/ \
--uniref90_database_path /database/uniref90/uniref90.fasta \
--mgnify_database_path /database/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path /database/pdb70/pdb70 \
--uniclust30_database_path /database/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--output_dir /data \
--bfd_database_path /database/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--model_device cuda:0 \
--jackhmmer_binary_path /opt/conda/bin/jackhmmer \
--hhblits_binary_path /opt/conda/bin/hhblits \
--hhsearch_binary_path /opt/conda/bin/hhsearch \
--kalign_binary_path /opt/conda/bin/kalign \
--param_path /database/params/params_model_1.npz
```

## Copyright notice

While AlphaFold's and, by extension, OpenFold's source code is licensed under
the permissive Apache License, Version 2.0, DeepMind's pretrained parameters 
fall under the CC BY 4.0 license, a copy of which is downloaded to 
`openfold/resources/params` by the installation script. Note that the latter
replaces the original, more restrictive CC BY-NC 4.0 license as of January 2022.

## Contributing

If you encounter problems using OpenFold, feel free to create an issue! We also
welcome pull requests from the community.

## Citing this work

For now, cite OpenFold as follows:

```bibtex
@software{Ahdritz_OpenFold_2021,
  author = {Ahdritz, Gustaf and Bouatta, Nazim and Kadyan, Sachin and Xia, Qinghui and Gerecke, William and AlQuraishi, Mohammed},
  doi = {10.5281/zenodo.5709539},
  month = {11},
  title = {{OpenFold}},
  url = {https://github.com/aqlaboratory/openfold},
  year = {2021}
}
```

Any work that cites OpenFold should also cite AlphaFold.