![header](imgs/of_banner.png)

# OpenFold

A faithful PyTorch reproduction of DeepMind's 
[AlphaFold 2](https://github.com/deepmind/alphafold).

## Features

OpenFold carefully reproduces (almost) all of the features of the original open
source inference code (v2.0.1). The sole exception is model ensembling, which 
fared poorly in DeepMind's own ablation testing and is being phased out in future
DeepMind experiments. It is omitted here for the sake of reducing clutter. In 
cases where the *Nature* paper differs from the source, we always defer to the 
latter.

OpenFold is trainable, and we've trained it from scratch, matching AlphaFold's
performance. We've publicly released model weights and our training data (some
400,000 MSAs) under a permissive license. Model weights are available from this
repository, while the MSAs are hosted by [RODA](https://registry.opendata.aws/openfold).

OpenFold is built to support inference with AlphaFold's official parameters. Try it out for yourself with 
our [Colab notebook](https://colab.research.google.com/github/aqlaboratory/openfold/blob/main/notebooks/OpenFold.ipynb).

Additionally, OpenFold has the following advantages over the reference implementation:

- **Trainable** in full precision or `bfloat16` half-precision, with or without [DeepSpeed](https://github.com/microsoft/deepspeed).
- **Faster inference** on GPU for chains with < 1500 residues. 
- **Inference on extremely long chains**, made possible by our implementation of low-memory attention 
([Rabe & Staats 2021](https://arxiv.org/pdf/2112.05682.pdf)). OpenFold can predict the structures of
  sequences with more than 4000 residues on a single A100, and even more with offloading.
- **Custom CUDA attention kernels**, modified from [FastFold](https://github.com/hpcaitech/FastFold)'s, 
support in-place attention during inference and training. They use 
4x and 5x less GPU memory than equivalent FastFold and stock PyTorch 
implementations, respectively.
- **Efficient alignment scripts** using the original AlphaFold HHblits/JackHMMER pipeline or [ColabFold](https://github.com/sokrypton/ColabFold)'s, which uses the faster MMseqs2 instead. We've used them to generate millions of alignments.

## Installation (Linux)

All Python dependencies are specified in `environment.yml`. For producing sequence 
alignments, you'll also need `kalign`, the [HH-suite](https://github.com/soedinglab/hh-suite), 
and one of {`jackhmmer`, [MMseqs2](https://github.com/soedinglab/mmseqs2) (nightly build)} 
installed on your system. Finally, some download scripts require `aria2c`.

For convenience, we provide a script that installs Miniconda locally, creates a 
`conda` virtual environment, installs all Python dependencies, and downloads
useful resources (including DeepMind's pretrained parameters). Run:

```bash
scripts/install_third_party_dependencies.sh
```

To activate the environment, run:

```bash
source scripts/activate_conda_env.sh
```

To deactivate it, run:

```bash
source scripts/deactivate_conda_env.sh
```

With the environment active, compile OpenFold's CUDA kernels with

```bash
python3 setup.py install
```

To install the HH-suite to `/usr/bin`, run

```bash
# scripts/install_hh_suite.sh
```

## Usage

To download our original OpenFold weights, DeepMind's pretrained parameters, 
and common ground truth data, run:

```bash
bash scripts/download_data.sh data/
```

You have two choices for downloading protein databases, depending on whether 
you want to use DeepMind's MSA generation pipeline (with JackHMMER & HHblits) or 
[ColabFold](https://github.com/sokrypton/ColabFold)'s, which uses the faster
MMseqs2 instead. For the former, run:

```bash
bash scripts/download_alphafold_dbs.sh data/
```

For the latter, run:

```bash
bash scripts/download_mmseqs_dbs.sh data/    # downloads .tar files
bash scripts/prep_mmseqs_dbs.sh data/        # unpacks and preps the databases
```

Make sure to run the latter command on the machine that will be used for MSA
generation (the script estimates how the precomputed database index used by
MMseqs2 should be split according to the memory available on the system).

Alternatively, you can use raw MSAs from 
[ProteinNet](https://github.com/aqlaboratory/proteinnet). After downloading
the database, use `scripts/prep_proteinnet_msas.py` to convert the data into
a format recognized by the OpenFold parser. The resulting directory becomes the
`alignment_dir` used in subsequent steps. Use `scripts/unpack_proteinnet.py` to
extract `.core` files from ProteinNet text files.

For both inference and training, the model's hyperparameters can be tuned from
`openfold/config.py`. Of course, if you plan to perform inference using 
DeepMind's pretrained parameters, you will only be able to make changes that
do not affect the shapes of model parameters. For an example of initializing
the model, consult `run_pretrained_openfold.py`.
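
For instance, a minimal sketch of loading a preset and tweaking an
inference-time setting might look like the following (import paths mirror those
used in `run_pretrained_openfold.py`; exact names may vary between versions):

```python
# Hedged sketch: build the model from a config preset.
# Import paths are assumptions based on run_pretrained_openfold.py.
import torch

from openfold.config import model_config
from openfold.model.model import AlphaFold

config = model_config("model_1")   # preset compatible with DeepMind's "model_1" parameters
config.globals.chunk_size = 4      # example tweak that does not change parameter shapes
model = AlphaFold(config).eval()
model = model.to("cuda:0" if torch.cuda.is_available() else "cpu")
```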

### Inference

To run inference on a sequence or multiple sequences using a set of DeepMind's 
pretrained parameters, run e.g.:

```bash
python3 run_pretrained_openfold.py \
    fasta_dir \
    data/pdb_mmcif/mmcif_files/ \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --output_dir ./ \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --model_device cuda:1 \
    --jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
    --hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
    --hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
    --kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign \
    --openfold_param_path openfold/openfold_params/finetuning_1.pt
```

where `data` is the same directory as in the previous step. If `jackhmmer`, 
`hhblits`, `hhsearch` and `kalign` are available at the default path of 
`/usr/bin`, their `binary_path` command-line arguments can be dropped.
If you've already computed alignments for the query, you have the option to 
skip the expensive alignment computation here with 
`--use_precomputed_alignments`.

Exactly one of `--openfold_param_path` or `--jax_param_path` must be specified
to run the inference script. These accept OpenFold `.pt`/DeepSpeed checkpoints
and AlphaFold's `.npz` JAX parameter files, respectively. For a breakdown of the
differences between the different parameter files, see the README in 
`openfold/resources/openfold_params/`.

Note that chunking (as defined in section 1.11.8 of the AlphaFold 2 supplement)
is enabled by default in inference mode. To disable it, set `globals.chunk_size`
to `None` in the config.
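
As a concrete illustration, the same setting can be flipped programmatically
before constructing the model (a hedged sketch; `model_config` is assumed to
come from `openfold/config.py`, as in `run_pretrained_openfold.py`):

```python
# Hedged sketch: disable chunking by clearing globals.chunk_size in the config.
from openfold.config import model_config

config = model_config("model_1")
config.globals.chunk_size = None   # evaluate the affected modules in a single pass (higher peak memory)
```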

Inference-time low-memory attention (LMA) can be enabled in the model config.
This setting trades off speed for vastly improved memory usage. By default,
LMA is run with query and key chunk sizes of 1024 and 4096, respectively.
These represent a favorable tradeoff in most memory-constrained cases.
Power users can choose to tweak these settings in 
`openfold/model/primitives.py`. For more information on the LMA algorithm,
see the aforementioned Rabe & Staats preprint.
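
A hypothetical example of switching LMA on (the flag name and its location are
assumptions; check your version of `openfold/config.py`):

```python
# Hedged sketch: enable inference-time low-memory attention (LMA).
# The use_lma flag name is an assumption; confirm it in openfold/config.py.
from openfold.config import model_config

config = model_config("model_1")
config.globals.use_lma = True   # trade some speed for much lower attention memory
```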

Input FASTA files containing multiple sequences are treated as complexes. In
this case, the inference script runs AlphaFold-Gap, a hack proposed
[here](https://twitter.com/minkbaek/status/1417538291709071362?lang=en), using
the specified stock AlphaFold/OpenFold parameters (NOT AlphaFold-Multimer). To
run inference with AlphaFold-Multimer, use the (experimental) `multimer` branch 
instead.

By default, OpenFold will attempt to automatically tune the `chunk_size` 
hyperparameter, which controls a memory/runtime tradeoff in certain modules 
during inference. The chunk size specified in the config is treated only as a 
minimum. This feature ensures consistently fast runtimes regardless of input 
sequence length, but it also introduces some runtime variability, which may be 
undesirable for certain users. To disable it, set the `tune_chunk_size` option 
in the config to `False`.
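
For example (the attribute location is an assumption; search
`openfold/config.py` for `tune_chunk_size` in your checkout):

```python
# Hedged sketch: turn off automatic chunk-size tuning for more deterministic runtimes.
# The exact location of tune_chunk_size in the config is assumed here.
from openfold.config import model_config

config = model_config("model_1")
config.globals.tune_chunk_size = False
```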

As noted in the AlphaFold-Multimer paper, the AlphaFold/OpenFold template
stack is a major memory bottleneck for inference on long sequences. OpenFold
supports two mutually exclusive inference modes to address this issue. One,
`average_templates` in the `template` section of the config, is similar to the
solution offered by AlphaFold-Multimer, which is simply to average individual
template representations. Our version is modified slightly to accommodate 
weights trained using the standard template algorithm. Using said weights, we
notice no significant difference in performance between our averaged template 
embeddings and the standard ones. The second, `offload_templates`, temporarily 
offloads individual template embeddings into CPU memory. The former is an 
approximation while the latter is slightly slower; both are memory-efficient 
and allow the model to utilize arbitrarily many templates across sequence 
lengths. Both are disabled by default, and it is up to the user to determine 
which best suits their needs, if either.
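
A hedged sketch of enabling one of the two modes (the config path is assumed to
be the `template` section under `model`; verify it against `openfold/config.py`):

```python
# Hedged sketch: enable exactly one of the two template memory-saving modes.
from openfold.config import model_config

config = model_config("model_1")
config.model.template.average_templates = True    # approximate: average individual template embeddings
# ...or, mutually exclusively:
# config.model.template.offload_templates = True  # exact but slightly slower: offload embeddings to CPU memory
```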

### Training

To train the model, you will first need to precompute protein alignments. 

You have two options. You can use the same procedure DeepMind used by running
the following:

```bash
python3 scripts/precompute_alignments.py mmcif_dir/ alignment_dir/ \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --cpus 16 \
    --jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
    --hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
    --hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
    --kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign
```

As noted before, you can skip the `binary_path` arguments if these binaries are 
at `/usr/bin`. Expect this step to take a very long time, even for small 
numbers of proteins.

Alternatively, you can generate MSAs with the ColabFold pipeline (and templates
with HHsearch) with:

```bash
python3 scripts/precompute_alignments_mmseqs.py input.fasta \
    data/mmseqs_dbs \
    uniref30_2103_db \
    alignment_dir \
    ~/MMseqs2/build/bin/mmseqs \
    /usr/bin/hhsearch \
    --env_db colabfold_envdb_202108_db \
    --pdb70 data/pdb70/pdb70
```

where `input.fasta` is a FASTA file containing one or more query sequences. To 
generate an input FASTA from a directory of mmCIF and/or ProteinNet .core 
files, we provide `scripts/data_dir_to_fasta.py`.

Next, generate a cache of certain datapoints in the template mmCIF files:

```bash
python3 scripts/generate_mmcif_cache.py \
    mmcif_dir/ \
    mmcif_cache.json \
    --no_workers 16
```

This cache is used to pre-filter templates. 

Next, generate a separate chain-level cache with data used for training-time 
data filtering:

```bash
python3 scripts/generate_chain_data_cache.py \
    mmcif_dir/ \
    chain_data_cache.json \
    --cluster_file clusters-by-entity-40.txt \
    --no_workers 16
```

where the `cluster_file` argument is a file of chain clusters, one cluster
per line (e.g. [PDB40](https://cdn.rcsb.org/resources/sequence/clusters/clusters-by-entity-40.txt)).

Finally, call the training script:

```bash
# N.B.: in multi-GPU settings, the seed must be specified.
python3 train_openfold.py mmcif_dir/ alignment_dir/ template_mmcif_dir/ \
    2021-10-10 \
    --template_release_dates_cache_path mmcif_cache.json \
    --precision 16 \
    --gpus 8 --replace_sampler_ddp=True \
    --seed 42 \
    --deepspeed_config_path deepspeed_config.json \
    --checkpoint_every_epoch \
    --resume_from_ckpt ckpt_dir/ \
    --train_chain_data_cache_path chain_data_cache.json
```

where `--template_release_dates_cache_path` is a path to the mmCIF cache. 
Note that `template_mmcif_dir` can be the same as `mmcif_dir`, the directory 
containing the training targets. A suitable DeepSpeed configuration file can be 
generated with 
`scripts/build_deepspeed_config.py`. The training script is 
written with [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) 
and supports the full range of training options that entails, including 
multi-node distributed training, validation, and so on. For more information, 
consult PyTorch Lightning documentation and the `--help` flag of the training 
script.

Note that, despite its variable name, `mmcif_dir` can also contain PDB files 
or even ProteinNet .core files. To emulate the AlphaFold training procedure, 
which uses a self-distillation set subject to special preprocessing steps, use
the family of `--distillation` flags.

## Testing

To run unit tests, use

```bash
scripts/run_unit_tests.sh
```

The script is a thin wrapper around Python's `unittest` suite, and recognizes
`unittest` arguments. E.g., to run a specific test verbosely:

```bash
scripts/run_unit_tests.sh -v tests.test_model
```

Certain tests require that AlphaFold (v2.0.1) be installed in the same Python
environment. These run components of AlphaFold and OpenFold side by side and
ensure that output activations are adequately similar. For most modules, we
target a maximum pointwise difference of `1e-4`.
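
Illustratively, the comparison boils down to a check of the following kind
(the tensor names here are placeholders, not the actual test fixtures):

```python
# Hedged sketch of the style of check used in the side-by-side tests.
import torch

def assert_activations_close(openfold_act: torch.Tensor,
                             alphafold_act: torch.Tensor,
                             tol: float = 1e-4) -> None:
    """Fail if the maximum pointwise difference exceeds the tolerance."""
    max_diff = (openfold_act - alphafold_act).abs().max().item()
    assert max_diff <= tol, f"max pointwise difference {max_diff:.2e} > {tol:.0e}"
```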

## Building and using the Docker container

### Building the Docker image

OpenFold can be built as a Docker container using the included Dockerfile. To build it, run the following command from the root of this repository:

```bash
docker build -t openfold .
```

### Running the Docker container

The built container contains both `run_pretrained_openfold.py` and `train_openfold.py` as well as all necessary software dependencies. It does not contain the model parameters, sequence, or structural databases. These should be downloaded to the host machine following the instructions in the Usage section above. 

The Docker container installs all `conda` components to the base `conda` environment in `/opt/conda`, and installs OpenFold itself in `/opt/openfold`.

Before running the Docker container, you can verify that your Docker installation is able to properly communicate with your GPU by running the following command:


```bash
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```

Note the `--gpus all` option passed to `docker run`. This option is necessary for the container to use the GPUs on the host machine.

To run the inference code under Docker, you can use a command like the one below. In this example, the AlphaFold parameters and sequence databases are located at `/mnt/alphafold_database` on the host machine, and the input files are located in the current working directory. You can adjust the volume mount locations as needed to reflect the locations of your data.

```bash
docker run \
--gpus all \
-v $PWD/:/data \
-v /mnt/alphafold_database/:/database \
-ti openfold:latest \
python3 /opt/openfold/run_pretrained_openfold.py \
/data/fasta_dir \
/database/pdb_mmcif/mmcif_files/ \
--uniref90_database_path /database/uniref90/uniref90.fasta \
--mgnify_database_path /database/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path /database/pdb70/pdb70 \
--uniclust30_database_path /database/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--output_dir /data \
--bfd_database_path /database/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--model_device cuda:0 \
--jackhmmer_binary_path /opt/conda/bin/jackhmmer \
--hhblits_binary_path /opt/conda/bin/hhblits \
--hhsearch_binary_path /opt/conda/bin/hhsearch \
--kalign_binary_path /opt/conda/bin/kalign \
--openfold_param_path /database/openfold_params/finetuning_1.pt
```

## Copyright notice

While AlphaFold's and, by extension, OpenFold's source code is licensed under
the permissive Apache License, Version 2.0, DeepMind's pretrained parameters 
fall under the CC BY 4.0 license, a copy of which is downloaded to 
`openfold/resources/params` by the installation script. Note that the latter
replaces the original, more restrictive CC BY-NC 4.0 license as of January 2022.

## Contributing

If you encounter problems using OpenFold, feel free to create an issue! We also
welcome pull requests from the community.

## Citing this work

For now, cite OpenFold as follows:

```bibtex
@software{Ahdritz_OpenFold_2021,
  author = {Ahdritz, Gustaf and Bouatta, Nazim and Kadyan, Sachin and Xia, Qinghui and Gerecke, William and AlQuraishi, Mohammed},
  doi = {10.5281/zenodo.5709539},
  month = {11},
  title = {{OpenFold}},
  url = {https://github.com/aqlaboratory/openfold},
  year = {2021}
}
```

Any work that cites OpenFold should also cite AlphaFold.