"tests/vscode:/vscode.git/clone" did not exist on "fb02f39ad8736da962951ecf54658dd1881b902f"
Unverified commit f434a278, authored by Jennifer Wei, committed by GitHub

Merge pull request #439 from aqlaboratory/setup-improvements

Adds Documentation and minor quality of life fixes
parents 3eef7caa d8117ce3
@@ -11,5 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Cleanup # https://github.com/actions/virtual-environments/issues/2840
run: sudo rm -rf /usr/share/dotnet && sudo rm -rf /opt/ghc && sudo rm -rf "/usr/local/share/boost" && sudo rm -rf "$AGENT_TOOLSDIRECTORY"
- name: Build the Docker image
run: docker build . --file Dockerfile --tag openfold:$(date +%s)
version: 2
# Set the OS, Python version and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "mambaforge-4.10"
# Build documentation in the "docs/" directory with Sphinx
sphinx:
configuration: docs/source/conf.py
conda:
environment: docs/environment.yml
@@ -187,7 +187,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Copyright 2024 AlQuraishi Laboratory
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
name: openfold-docs
channels:
- conda-forge
dependencies:
- sphinx=7
- myst-parser=3
- furo
# Auxiliary Sequence Files for OpenFold Training
The training dataset of OpenFold is very large. The `pdb` directory alone contains 185,000 mmCIFs, and each chain has multiple sequence alignment files in addition to its mmCIF file.
OpenFold introduces a few new file structures for faster access to alignments and mmcif data.
This documentation will explain the benefits of having the condensed file structure, and explain the contents of each of the files.
## Default alignment file structure
One way to store mmcifs and alignments files would be to have a directory for each mmcif chain.
For example, consider two proteins as a case study:
```
- OpenProteinSet
└── mmcifs
├── 3lrm.cif
└── 6kwc.cif
...
```
In the `alignments` directory, [PDB:6KWC](https://www.rcsb.org/structure/6KWC) is a monomer with one chain and thus would have one alignment directory. [PDB:3LRM](https://www.rcsb.org/structure/3lrm), a homotetramer, would have one alignment directory for each of its four chains.
```
- OpenProteinSet
└── alignments
└── 3lrm_A
├── bfd_uniclust_hits.a3m
├── mgnify_hits.a3m
├── pdb70_hits.hhr
└── uniref90_hits.a3m
└── 3lrm_B
├── bfd_uniclust_hits.a3m
├── mgnify_hits.a3m
├── pdb70_hits.hhr
└── uniref90_hits.a3m
└── 3lrm_C
├── bfd_uniclust_hits.a3m
├── mgnify_hits.a3m
├── pdb70_hits.hhr
└── uniref90_hits.a3m
└── 3lrm_D
├── bfd_uniclust_hits.a3m
├── mgnify_hits.a3m
├── pdb70_hits.hhr
└── uniref90_hits.a3m
└── 6kwc_A
├── bfd_uniclust_hits.a3m
├── mgnify_hits.a3m
├── pdb70_hits.hhr
└── uniref90_hits.a3m
...
```
In practice, the IO overhead of having one directory per protein chain makes accessing the alignments slow.
## OpenFold DB file structure
Here we describe an alternative file structure that OpenFold can use for more efficient access to alignment and index file contents.
Altogether, the directory would look like:
```
- OpenProteinSet
├── duplicate_pdb_chains.txt
└── pdb
├── mmcif_cache.json
└── mmcifs
├── 3lrm.cif
└── 6kwc.cif
└── alignment_db
├── alignment_db_0.db
├── alignment_db_1.db
...
├── alignment_db_9.db
└── alignment_db.index
```
We will describe each of the file types here.
### Alignments db files and index files
To speed up access to MSAs, OpenFold offers an alternate alignment storage scheme. Instead of storing dedicated files for each individual alignment, we consolidate large sets of alignments into single files referred to as _alignment_dbs_. This reduces I/O overhead, and in practice we recommend using around 10 `alignment_db_x.db` files to store the full OpenFold training set. During training, OpenFold can access each alignment using byte-index pointers that are stored in a separate index file (`alignment_db.index`). The alignments for the `3LRM` and `6KWC` examples would be recorded in the index file as follows:
```alignment_db.index
{
...
"3lrm_A": {
"db": "alignment_db_0.db",
"files": [
["bfd_uniclust_hits.a3m", 212896478938, 1680200],
["mgnify_hits.a3m", 212893696883, 2782055],
["pdb70_hits.hhr", 212898159138, 614978],
["uniref90_hits.a3m", 212898774116, 6165789]
]
},
"6kwc_A": {
"db": "alignment_db_1.db",
"files": [
["bfd_uniclust_hits.a3m", 415618723280, 380289],
["mgnify_hits.a3m", 415618556077, 167203],
["pdb70_hits.hhr", 415619103569, 148672],
["uniref90_hits.a3m", 415617547852, 1008225]
]
}
...
}
```
For each entry, the corresponding `alignment_db` file and the byte start location and number of bytes to read the respective alignments are given. For example, the alignment information in `bfd_uniclust_hits.a3m` for chain `3lrm_A` can be found in the database file `alignment_db_0.db`, starting at byte location `212896478938` and reading in the next `1680200` bytes.
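For example, here is a minimal Python sketch (the paths and chain ID are illustrative) that reads one alignment back out of a database file using these index entries:
```python
import json

# Minimal sketch: retrieve an alignment from an alignment_db file using the
# byte offsets stored in the index. Paths and the chain ID are illustrative.
with open("alignment_db.index") as f:
    index = json.load(f)

entry = index["3lrm_A"]
with open(entry["db"], "rb") as db:          # e.g. alignment_db_0.db
    for name, start, length in entry["files"]:
        db.seek(start)                       # jump to the alignment's byte offset
        data = db.read(length).decode()      # read exactly `length` bytes
        if name == "uniref90_hits.a3m":
            print(data.splitlines()[0])      # first line of the a3m alignment
```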
### Chain cache files and mmCIF cache files
Information from the mmcif files can be parsed in advance to create a `chain_cache.json` or a `mmcif_cache.json`. For OpenFold, the `chain_cache.json` is used to sample chains for training, and the `mmcif_cache.json` is used to prefilter templates.
Here's what the `chain_cache.json` entries look like for our examples:
```chain_cache.json
{
...
"3lrm_A": {
"release_date": "2010-06-30",
"seq": "MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"resolution": 2.7,
"cluster_size": 6
},
"3lrm_B": {
"release_date": "2010-06-30",
"seq": "MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"resolution": 2.7,
"cluster_size": 6
},
"3lrm_C": {
"release_date": "2010-06-30",
"seq": "MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"resolution": 2.7,
"cluster_size": 6
},
"3lrm_D": {
"release_date": "2010-06-30",
"seq": "MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"resolution": 2.7,
"cluster_size": 6
},
"6kwc_A": {
"release_date": "2021-01-27",
"seq": "GSTIQPGTGYNNGYFYSYWNDGHGGVTYTNGPGGQFSVNWSNSGEFVGGKGWQPGTKNKVINFSGSYNPNGNSYLSVYGWSRNPLIEYYIVENFGTYNPSTGATKLGEVTSDGSVYDIYRTQRVNQPSIIGTATFYQYWSVRRNHRSSGSVNTANHFNAWAQQGLTLGTMDYQIVAVQGYFSSGSASITVS",
"resolution": 1.297,
"cluster_size": 195
},
...
}
```
The `mmcif_cache.json` file contains similar information, but condensed by mmCIF ID, e.g.:
```mmcif_cache.json
{
"3lrm": {
"release_date": "2010-06-30",
"chain_ids": [
"A",
"B",
"C",
"D"
],
"seqs": [
"MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK"
],
"no_chains": 4,
"resolution": 2.7
},
"6kwc": {
"release_date": "2021-01-27",
"chain_ids": [
"A"
],
"seqs": [
"GSTIQPGTGYNNGYFYSYWNDGHGGVTYTNGPGGQFSVNWSNSGEFVGGKGWQPGTKNKVINFSGSYNPNGNSYLSVYGWSRNPLIEYYIVENFGTYNPSTGATKLGEVTSDGSVYDIYRTQRVNQPSIIGTATFYQYWSVRRNHRSSGSVNTANHFNAWAQQGLTLGTMDYQIVAVQGYFSSGSASITVS"
],
"no_chains": 1,
"resolution": 1.297
},
...
}
```
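For illustration, here is a minimal Python sketch of how the mmCIF cache might be used to prefilter templates by release date; the file path and cutoff date are placeholders, not OpenFold defaults:
```python
import json
from datetime import date

# Sketch: prefilter template candidates by release date using mmcif_cache.json.
# The path and the cutoff date below are placeholders.
MAX_TEMPLATE_DATE = date(2021, 10, 10)

with open("pdb/mmcif_cache.json") as f:
    mmcif_cache = json.load(f)

allowed_templates = {
    pdb_id: meta
    for pdb_id, meta in mmcif_cache.items()
    if date.fromisoformat(meta["release_date"]) <= MAX_TEMPLATE_DATE
}
print(f"{len(allowed_templates)} of {len(mmcif_cache)} entries pass the date cutoff")
```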
### Duplicate pdb chain files
Duplicate chains occur across PDB entries. Some of these chains are the homomeric units of a multimer; others are subunits that are shared across different proteins.
To reduce the storage overhead of creating and storing identical data for duplicate entries, we use a duplicate chain file. Each line lists all chains that are identical. Our `6kwc` and `3lrm` examples would be stored as follows.
```duplicate_pdb_chains.txt
...
6kwc_A
3lrm_A 3lrm_B 3lrm_C 3lrm_D
...
```
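A minimal Python sketch for parsing this file into a chain-to-representative mapping (assuming the first chain on each line is the stored representative):
```python
# Sketch: map every chain to its stored representative using
# duplicate_pdb_chains.txt (one group of identical chains per line).
# We assume the first chain on each line is the representative.
representative_of = {}
with open("duplicate_pdb_chains.txt") as f:
    for line in f:
        chains = line.split()
        if not chains:
            continue
        for chain in chains:
            representative_of[chain] = chains[0]

print(representative_of.get("3lrm_C"))  # -> "3lrm_A" for the example above
```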
# FAQ
Frequently asked questions or encountered issues when running OpenFold.
## Setup
- When running unit tests (e.g. [`./scripts/run_unit_tests.sh`](https://github.com/aqlaboratory/openfold/blob/main/scripts/run_unit_tests.sh)), I see an error such as
```
ImportError: version GLIBCXX_3.4.30 not found
```
> Solution: Make sure that the `$LD_LIBRARY_PATH` environment variable has been set to include the conda path, e.g. `export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH`
- I see a CUDA mismatch error, e.g.
```
The detected CUDA version (11.8) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.
```
> Solution: Ensure that your system's CUDA driver and toolkit match your intended OpenFold installation (CUDA 11 by default). You can check the CUDA driver version with a command such as `nvidia-smi`.
- I get an error involving `fatal error: cuda_runtime.h: No such file or directory` and/or `ninja: build stopped: subcommand failed.`.
> Solution: Something went wrong with setting up some of the custom kernels. Try running `install_third_party_dependencies.sh` again or try `python3 setup.py install` from inside the OpenFold folder. Make sure to prepend the conda environment as described above before running this.
## Training
- My model training is hanging on the data loading step:
> Solution: While each system is different, a few general suggestions:
- Check your `$KMP_AFFINITY` environment setting and see if it is suitable for your system.
- Adjust the number of data workers used to prepare data with the `--num_workers` setting. Increasing the number could help with dataset processing speed; however, too many workers could cause an OOM issue.
- When I reload my pretrained model weights or checkpoints, I get `RuntimeError: Error(s) in loading state_dict for OpenFoldWrapper: Unexpected key(s) in state_dict:`
> Solution: This suggests that your checkpoint / model weights are in OpenFold v1 format with outdated model layer names. Convert your weights/checkpoints following [this guide](convert_of_v1_weights.md).
# OpenFold Inference
In this guide, we will cover how to use OpenFold to make structure predictions.
## Background
We currently offer three modes of inference prediction:
- Monomer
- Multimer
- Single Sequence (Soloseq)
This guide focuses on monomer prediction; the following sections describe [Multimer](Multimer_Inference.md) and [Single Sequence](Single_Sequence_Inference.md) prediction.
### Pre-requisites:
- OpenFold Conda Environment. See [OpenFold Installation](Installation.md) for instructions on how to build this environment.
- Downloading sequence databases for performing multiple sequence alignments. We provide a script to download the AlphaFold databases [here](https://github.com/aqlaboratory/openfold/blob/main/scripts/download_alphafold_dbs.sh).
## Running AlphaFold Model Inference
The script [`run_pretrained_openfold.py`](https://github.com/aqlaboratory/openfold/blob/main/run_pretrained_openfold.py) performs model inference. We will go through the steps of how to use this script.
An example directory for performing inference on [PDB:6KWC](https://www.rcsb.org/structure/6KWC) is provided [here](https://github.com/aqlaboratory/openfold/tree/main/examples/monomer). We refer to this example directory in the examples below.
### Download Model Parameters
For monomer inference, you may use either the model parameters provided by DeepMind or the OpenFold-trained parameters. Both should give similar performance; please see [our main paper](https://www.biorxiv.org/content/10.1101/2022.11.20.517210v3) for further reference.
The model parameters provided by DeepMind can be downloaded with the following script located in this repository's `scripts/` directory:
```
$ bash scripts/download_alphafold_params.sh $PARAMS_DIR
```
To use the OpenFold-trained parameters, you can use the following script:
```
$ bash scripts/download_openfold_params.sh $PARAMS_DIR
```
We recommend selecting `openfold/resources` as the params directory, as this is the default directory used by `run_pretrained_openfold.py` to locate parameters.
If you choose to use a different directory, you may make a symlink to the `openfold/resources` directory, or specify an alternate parameter path with the command line argument `--jax_param_path` for AlphaFold parameters or `--openfold_checkpoint_path` for OpenFold parameters.
### Model Inference
The input to [`run_pretrained_openfold.py`](https://github.com/aqlaboratory/openfold/blob/main/run_pretrained_openfold.py) is a directory of FASTA files. AlphaFold-style models also require a sequence alignment to perform inference.
If you do not have sequence alignments for your input sequences, you can compute them with the inference script directly by following the instructions in the [inference without pre-computed alignments](#model-inference-without-pre-computed-alignments) section.
Otherwise, if you already have alignments for your input FASTA sequences, skip ahead to the [inference with pre-computed alignments](#model-inference-with-pre-computed-alignments) section.
#### Model inference without pre-computed alignments
The following command performs a sequence alignment against the OpenProteinSet databases and performs model inference.
```
python3 run_pretrained_openfold.py \
$INPUT_FASTA_DIR \
$TEMPLATE_MMCIF_DIR \
--output_dir $OUTPUT_DIR \
--config_preset model_1_ptm \
--uniref90_database_path $BASE_DATA_DIR/uniref90 \
--mgnify_database_path $BASE_DATA_DIR/mgnify/mgy_clusters_2018_12.fa \
--pdb70_database_path $BASE_DATA_DIR/pdb70 \
--uniclust30_database_path $BASE_DATA_DIR/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
--bfd_database_path $BASE_DATA_DIR/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--model_device "cuda:0"
```
**Required arguments:**
- `--output_dir`: specify the output directory
- `$INPUT_FASTA_DIR`: Directory of query FASTA files, one sequence per file, e.g. `examples/monomer/fasta_dir`
- `$TEMPLATE_MMCIF_DIR`: mmCIF files to use for template matching. This directory is required even when running template-free inference.
- `*_database_path`: Paths to sequence databases for sequence alignment.
- `--model_device`: Specify a GPU device if one is available, e.g. `"cuda:0"`.
#### Model inference with pre-computed alignments
To perform model inference with pre-computed alignments, use the following command
```
python3 run_pretrained_openfold.py ${INPUT_FASTA_DIR} \
$TEMPLATE_MMCIF_DIR \
--output_dir $OUTPUT_DIR \
--use_precomputed_alignments $PRECOMPUTED_ALIGNMENTS \
--config_preset model_1_ptm \
--model_device "cuda:0" \
```
where `$PRECOMPUTED_ALIGNMENTS` is a directory that contains alignments. A sample alignments directory structure for a single query is:
```
alignments
└── 6KWC_1
   ├── bfd_uniclust_hits.a3m
   ├── hhsearch_output.hhr
   ├── mgnify_hits.sto
   └── uniref90_hits.sto
```
`bfd_uniclust_hits.a3m`, `mgnify_hits.sto`, and `uniref90_hits.sto` are alignments of the query against the BFD, MGnify, and UniRef90 databases, respectively. `hhsearch_output.hhr` contains hits against the PDB70 database used for template matching. The example directory `examples/monomer/alignments` shows examples of expected directories.
#### Configuration settings for template modeling / pTM scoring
There are a few configuration settings available for template based and template-free modeling, and for the option to estimate a predicted template modeling score (pTM).
This table provides guidance on which setting to use for each set of predictions, as well as the parameters to select for each preset.
| Setting | `config_preset` | AlphaFold params (match config name) | OpenFold params (any are allowed) |
| -------------------------: | ----------------------------------------: | :-------------------------------------------------------------------------------- | :--------------------------------- |
| With template, no ptm | model_1<br>model_2 | `params_model_1.npz`<br>`params_model_2.npz` | `finetuning_[2-5].pt` |
| With template, with ptm | model_1_ptm<br>model_2_ptm | `params_model_1_ptm.npz`<br>`params_model_2_ptm.npz` | `finetuning_ptm_[1-2].pt` |
| Without template, no ptm | model_3<br>model_4<br>model_5 | `params_model_3.npz`<br>`params_model_4.npz`<br>`params_model_5.npz` | `finetuning_no_templ_[1-2].pt` |
| Without template, with ptm | model_3_ptm<br>model_4_ptm<br>model_5_ptm | `params_model_3_ptm.npz`<br>`params_model_4_ptm.npz`<br>`params_model_5_ptm.npz` | `finetuning_no_templ_ptm_1.pt` |
If you use AlphaFold parameters and they are located in the default parameter directory (e.g. `openfold/resources`), the parameters that match the `--config_preset` will be selected.
The full set of configurations available for all 5 AlphaFold model presets can be viewed in [`config.py`](https://github.com/aqlaboratory/openfold/blob/main/openfold/config.py#L105). The [OpenFold Parameters](OpenFold_Parameters.md) page contains more information about the individual OpenFold parameter files.
#### Model outputs
The expected output contents are as follows (a minimal sketch for inspecting them follows this list):
- `alignments`: Directory of alignments. One directory is made per query sequence, and each directory contains alignments against each of the databases used.
- `predictions`: PDB files for predicted structures
- `timings.json`: Json with timings for inference and relaxation, if specified
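For example, a minimal Python sketch for inspecting an output directory laid out as above (replace the path with your `--output_dir`):
```python
import json
from pathlib import Path

# Sketch: list predicted structures and timings in an OpenFold output directory.
output_dir = Path("./")  # replace with your --output_dir

for pdb_file in sorted((output_dir / "predictions").glob("*.pdb")):
    print("prediction:", pdb_file.name)

timings_path = output_dir / "timings.json"
if timings_path.exists():
    print(json.dumps(json.loads(timings_path.read_text()), indent=2))
```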
### Optional Flags
Some commonly used command line flags are here. A full list of flags can be viewed from the `--help` menu
- `--config_preset`: Specify a different model configuration. There are 5 available model presets, some of which support template-based modeling while others support template-free modeling. The default is `model_1`. More details can be found above in the [configuration settings for template modeling / pTM scoring](#configuration-settings-for-template-modeling--ptm-scoring) section.
- `--hmmsearch_binary_path`, `--hmmbuild_binary_path`, etc.: HMMER, HH-suite, and kalign are required to run alignments. `run_pretrained_openfold.py` will search for these packages in the `bin/` directory of your conda environment. If needed, you can specify a different binary directory with these arguments.
- `--openfold_checkpoint_path`: Use a specific checkpoint or parameter file. Expected formats are DeepSpeed checkpoints or `.pt` files. Make sure your selected checkpoint file matches the configuration chosen in `--config_preset`.
- `--data_random_seed`: Specifies a random seed to use.
- `--save_outputs`: Saves a copy of all outputs from the model, e.g. the outputs of the MSA track and pTM heads.
- `--experiment_config_json`: Specify configuration settings using a JSON file. For example, passing a JSON with `{"globals.relax.max_iterations": 10}` specifies 10 as the maximum number of relaxation iterations. See [`openfold/config.py`](https://github.com/aqlaboratory/openfold/blob/main/openfold/config.py#L283) for the full dictionary of configuration settings. Any parameters that are not set in this JSON fall back to the defaults specified by your `config_preset`.
### Advanced Options for Increasing Efficiency
#### Speeding up inference
The **DeepSpeed DS4Sci_EvoformerAttention kernel** is a memory-efficient attention kernel developed as part of a collaboration between OpenFold and the DeepSpeed4Science initiative.
If your system supports DeepSpeed, using it generally leads to an inference speedup of 2-3x without significant additional memory use. You may enable this option with the `--use_deepspeed_inference` argument.
If DeepSpeed is unavailable for your system, you may also try using [FlashAttention](https://github.com/HazyResearch/flash-attention) by adding `globals.use_flash = True` to the `--experiment_config_json`. Note that FlashAttention appears to work best for sequences with < 1000 residues.
#### Large-scale batch inference
For large-scale batch inference, we offer an optional tracing mode, which massively improves runtimes at the cost of a lengthy model compilation process. To enable it, add `--trace_model` to the inference command.
#### Configuring the chunk size for inference
Note that chunking (as defined in section 1.11.8 of the AlphaFold 2 supplement) is enabled by default in inference mode. To disable it, set `globals.chunk_size` to `None` in the config. If a value is specified, OpenFold will attempt to dynamically tune it, considering the chunk size specified in the config as a minimum. This tuning process automatically ensures consistently fast runtimes regardless of input sequence length, but it also introduces some runtime variability, which may be undesirable for certain users. It is also recommended to disable this feature for very long chains (see below). To do so, set the `tune_chunk_size` option in the config to `False`.
#### Long sequence inference
To minimize memory usage during inference on long sequences, consider the following changes:
- As noted in the AlphaFold-Multimer paper, the AlphaFold/OpenFold template stack is a major memory bottleneck for inference on long sequences. OpenFold supports two mutually exclusive inference modes to address this issue. One, `average_templates` in the `template` section of the config, is similar to the solution offered by AlphaFold-Multimer, which is simply to average individual template representations. Our version is modified slightly to accommodate weights trained using the standard template algorithm. Using said weights, we notice no significant difference in performance between our averaged template embeddings and the standard ones. The second, `offload_templates`, temporarily offloads individual template embeddings into CPU memory. The former is an approximation while the latter is slightly slower; both are memory-efficient and allow the model to utilize arbitrarily many templates across sequence lengths. Both are disabled by default, and it is up to the user to determine which best suits their needs, if either.
- Inference-time low-memory attention (LMA) can be enabled in the model config. This setting trades off speed for vastly improved memory usage. By default, LMA is run with query and key chunk sizes of 1024 and 4096, respectively. These represent a favorable tradeoff in most memory-constrained cases. Powerusers can choose to tweak these settings in `openfold/model/primitives.py`. For more information on the LMA algorithm, see the aforementioned Staats & Rabe preprint.
- Disable `tune_chunk_size` for long sequences. Past a certain point, it only wastes time.
- As a last resort, consider enabling `offload_inference`. This enables more extensive CPU offloading at various bottlenecks throughout the model.
- Disable FlashAttention, which seems unstable on long sequences.
Using the most conservative settings, we were able to run inference on a 4600-residue complex with a single A100. Compared to AlphaFold's own memory offloading mode, ours is considerably faster; the same complex takes the more efficient AlphaFold-Multimer more than double the time. Use the `long_sequence_inference` config option to enable all of these interventions at once. The `run_pretrained_openfold.py` script can enable this config option with the `--long_sequence_inference` command line option.
Input FASTA files containing multiple sequences are treated as complexes. In this case, the inference script runs AlphaFold-Gap, a hack proposed [here](https://twitter.com/minkbaek/status/1417538291709071362?lang=en), using the specified stock AlphaFold/OpenFold parameters (NOT AlphaFold-Multimer).
# Multimer Inference
To run inference on a complex or multiple complexes using a set of DeepMind's pretrained parameters, you will need:
- AlphaFold Multimer v2.3 parameters
- Updated sequence databases, including the UniProt and PDB SeqRes databases.
## Upgrade from a previous OpenFold Installation
If you have previously downloaded OpenFold parameters and/or AlphaFold databases, you will need to download updated versions. Here are instructions for upgrading from an existing OpenFold installation.
### Download AlphaFold-Multimer v2.3 Model Parameters
1. Re-download the AlphaFold parameters to get the latest AlphaFold-Multimer v2.3 weights:
```bash
bash scripts/download_alphafold_params.sh openfold/resources
```
### Download AlphaFold Databases for Multimer
1. Download the [UniProt](https://www.uniprot.org/uniprotkb/)
and [PDB SeqRes](https://www.rcsb.org/) databases:
```bash
bash scripts/download_uniprot.sh data/
```
The PDB SeqRes and PDB databases must be from the same date to avoid potential
errors during template searching. Remove the existing `data/pdb_mmcif` directory
and download both databases:
```bash
bash scripts/download_pdb_mmcif.sh data/
bash scripts/download_pdb_seqres.sh data/
```
1. Additionally, AlphaFold-Multimer uses upgraded versions of the [MGnify](https://www.ebi.ac.uk/metagenomics)
and [UniRef30](https://uniclust.mmseqs.com/) (previously UniClust30) databases. To download the upgraded databases, run:
```bash
bash scripts/download_uniref30.sh data/
bash scripts/download_mgnify.sh data/
```
```{note}
Multimer inference can also run with the older database versions if desired.
```
## Running Multimer Inference
The [`run_pretrained_openfold.py`](https://github.com/aqlaboratory/openfold/blob/main/run_pretrained_openfold.py) script can be used to run multimer inference with the following command.
```bash
python3 run_pretrained_openfold.py \
fasta_dir \
data/pdb_mmcif/mmcif_files/ \
--uniref90_database_path data/uniref90/uniref90.fasta \
--mgnify_database_path data/mgnify/mgy_clusters_2022_05.fa \
--pdb_seqres_database_path data/pdb_seqres/pdb_seqres.txt \
--uniref30_database_path data/uniref30/UniRef30_2021_03 \
--uniprot_database_path data/uniprot/uniprot.fasta \
--bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
--jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
--hhblits_binary_path lib/conda/envs/openfold_venv/bin/hhblits \
--hmmsearch_binary_path lib/conda/envs/openfold_venv/bin/hmmsearch \
--hmmbuild_binary_path lib/conda/envs/openfold_venv/bin/hmmbuild \
--kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign \
--config_preset "model_1_multimer_v3" \
--model_device "cuda:0" \
--output_dir ./
```
**Notes:**
- Template searching in the multimer pipeline uses HMMSearch with the PDB SeqRes database, replacing HHSearch and PDB70 used in the monomer pipeline.
- As with monomer inference, if you've already computed alignments for the query, you can use the `--use_precomputed_alignments` option.
- At this time, only AlphaFold parameter weights are available for multimer mode.
# Notes on OpenFold Training and Parameters
These notes describe the OpenFold model parameters, v. 06_22.
## Training details
OpenFold was trained on 44 A100s using the training schedule from Table 4 in
the AlphaFold supplement. AlphaFold was used as the pre-distillation model.
Training data is hosted publicly in the "OpenFold Training Data" RODA repository.
To improve model diversity, we forked training after the initial training phase
and finetuned an additional branch without templates.
## Parameter files
Parameter files fall into the following categories:

- `initial_training.pt`: OpenFold at the end of the initial training phase.
- `finetuning_x.pt`: Checkpoints in chronological order corresponding to peaks in the validation LDDT-Ca during the finetuning phase. Roughly evenly spaced across the 45 finetuning epochs. NOTE: `finetuning_1.pt`, which was included in a previous release, has been deprecated.
- `finetuning_no_templ_x.pt`: Checkpoints in chronological order corresponding to peaks during an additional finetuning phase also starting from the `initial_training.pt` checkpoint but with templates disabled.
- `finetuning_no_templ_ptm_x.pt`: Checkpoints in chronological order corresponding to peaks during the pTM training phase of the `no_templ` branch. Models in this category include the pTM module and comprise the most recent of the checkpoints in said branch.
- `finetuning_ptm_x.pt`: Checkpoints in chronological order corresponding to peaks in the pTM training phase of the mainline branch. Models in this category include the pTM module and comprise the most recent of the checkpoints in said branch.
Average validation LDDT-Ca scores for each of the checkpoints are listed below.
The validation set contains approximately 180 chains drawn from CAMEO over a
three-month period at the end of 2021.
| Checkpoint | LDDT-Ca |
| --- | --- |
| `initial_training` | 0.9088 |
| `finetuning_2` | 0.9061 |
| `finetuning_3` | 0.9075 |
| `finetuning_4` | 0.9059 |
| `finetuning_5` | 0.9054 |
| `finetuning_no_templ_1` | 0.9014 |
| `finetuning_no_templ_2` | 0.9032 |
| `finetuning_no_templ_ptm_1` | 0.9025 |
| `finetuning_ptm_1` | 0.9075 |
| `finetuning_ptm_2` | 0.9097 |
# Setting up the OpenFold PDB training set from RODA
The multiple sequence alignments of OpenProteinSet and mmCIF structure files required to train OpenFold are freely available at the [Registry of Open Data on AWS (RODA)](https://registry.opendata.aws/openfold/). Additionally, OpenFold requires some postprocessing and [auxiliary files](Aux_seq_files.md) for training that need to be generated from the AWS data manually. This documentation is intended to give a full overview of those steps starting from the data download.
### Pre-Requisites:
- OpenFold conda environment. See [OpenFold Installation](Installation.md) for instructions on how to build this environment.
- In particular, the [AWS CLI](https://aws.amazon.com/cli/) is used to download data from RODA.
- For this guide, we assume that the OpenFold codebase is located at `$OF_DIR`.
## 1. Downloading alignments and structure files
To fetch all the alignments corresponding to the original PDB training set of OpenFold alongside their mmCIF 3D structures, you can run the following commands:
```bash
mkdir -p alignment_data/alignment_dir_roda
aws s3 cp s3://openfold/pdb/ alignment_data/alignment_dir_roda/ --recursive --no-sign-request
mkdir pdb_data
aws s3 cp s3://openfold/pdb_mmcif.zip pdb_data/ --no-sign-request
aws s3 cp s3://openfold/duplicate_pdb_chains.txt . --no-sign-request
unzip pdb_mmcif.zip -d pdb_data
```
The nested alignment directory structure is not yet exactly what OpenFold expects, so you can run the `flatten_roda.sh` script to convert it to the correct format:
```bash
bash $OF_DIR/scripts/flatten_roda.sh alignment_data/alignment_dir_roda alignment_data/
```
Afterwards, the old directory can be safely removed:
```bash
rm -r alignment_data/alignment_dir_roda
```
## 2. Creating alignment DBs (optional)
As further explained in [Auxiliary Sequence Files in OpenFold](Aux_seq_files.md), OpenFold supports an alternate format for storing alignments that can increase training performance in I/O bottlenecked systems. These so-called `alignment_db` files can be generated with the following script:
```bash
python $OF_DIR/scripts/alignment_db_scripts/create_alignment_db_sharded.py \
alignment_data/alignments \
alignment_data/alignment_dbs \
alignment_db \
--n_shards 10 \
--duplicate_chains_file pdb_data/duplicate_pdb_chains.txt
```
We recommend creating 10 total `alignment_db` files (= "shards") for better
filesystem health and fast preprocessing, but note that this script will only run
optimally if the number of CPUs on your machine is at least as big as the number
of shards you are creating.
As an optional check, you can run the following command which should return $634,434$:
```bash
grep "files" alignment_data/alignment_dbs/alignment_db.index | wc -l
```
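Equivalently, since the index is a single JSON object keyed by chain ID (see [Auxiliary Sequence Files](Aux_seq_files.md)), a short Python check can count the entries directly:
```python
import json

# Sketch: count the chains recorded in the alignment_db index.
# Expect roughly 634,434 entries for the full PDB training set.
with open("alignment_data/alignment_dbs/alignment_db.index") as f:
    index = json.load(f)

print(len(index))
```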
## 3. Adding duplicate chains to alignments (skip if step 2 was used)
To save space, the OpenProteinSet alignment database is stored without duplicates, meaning that only one representative alignment is stored for all chains with identical sequences in the PDB, and duplicate instances are tracked with a [`duplicate_chains.txt`](Aux_seq_files.md#duplicate-pdb-chain-files) file. As OpenFold selects chains during training based on the chains present in the alignment directory (or `alignment_db`), we need to add those duplicate chains back in to train on the full conformational diversity of chains in the PDB.
If you've followed the optional Step 2, the `.index` file of your `alignment_db` files will have already been adjusted for duplicates and you can proceed to the next step. Otherwise, the standard alignment directory can be expanded to accommodate duplicates by inserting symlinked directories for the duplicate chains that point to their representative alignments:
```bash
python $OF_DIR/scripts/expand_alignment_duplicates.py \
alignment_data/alignments \
pdb_data/duplicate_pdb_chains.txt
```
As an optional check, the following command should return $634,434$:
```bash
ls alignment_data/alignments/ | wc -l
```
## 4. Generating cluster-files
The AlphaFold dataloader adjusts the sampling probability of chains by their inverse cluster size, so we need to generate these sequence clusters for our training set.
As a first step, we'll need a `.fasta` file of all sequences in the training set. This can be generated with the following scripts, depending on how you set up your alignment data in the previous steps:
**Use this if you set up the duplicate-expanded alignment directory (faster):**
```bash
python $OF_DIR/scripts/alignment_data_to_fasta.py \
alignment_data/all-seqs.fasta \
--alignment_dir alignment_data/alignments
```
**Use this if you set up the `alignment_db` files:**
```bash
python $OF_DIR/scripts/alignment_data_to_fasta.py \
alignment_data/all-seqs.fasta \
--alignment_db_index alignment_data/alignment_dbs/alignment_db.index
```
Next, we need to generate a cluster file at 40% sequence identity, which will contain all chains in a particular cluster on the same line. You'll need [MMSeqs2](https://github.com/soedinglab/MMseqs2?tab=readme-ov-file#installation) for this as well, which can be set up either in a conda environment or as a binary.
```bash
python $OF_DIR/scripts/fasta_to_clusterfile.py \
alignment_data/all-seqs.fasta \
alignment_data/all-seqs_clusters-40.txt \
/path/to/mmseqs \
--seq-id 0.4
```
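The resulting cluster file can be used to derive the inverse-cluster-size weights mentioned above. A minimal Python sketch, for illustration only (OpenFold's dataloader derives its sampling probabilities from the chain data cache generated in the next step):
```python
# Sketch: inverse-cluster-size weights from the cluster file generated above
# (one cluster of chain IDs per line). Illustrative only.
weights = {}
with open("alignment_data/all-seqs_clusters-40.txt") as f:
    for line in f:
        cluster = line.split()
        if not cluster:
            continue
        for chain in cluster:
            weights[chain] = 1.0 / len(cluster)
```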
## 5. Generating cache files
As a last step, OpenFold requires ["cache" files](Aux_seq_files.md#chain-cache-files-and-mmcif-cache-files) with metadata information for each chain that are used for choosing templates and samples during training.
The data caches for OpenProteinSet can be downloaded from RODA with the following:
```bash
aws s3 cp s3://openfold/data_caches/ pdb_data/ --recursive --no-sign-request
```
If you wish to create data caches for your own datasets, the steps to generate the cache are as follows:
```bash
mkdir pdb_data/data_caches
python $OF_DIR/scripts/generate_mmcif_cache.py \
pdb_data/mmcif_files \
pdb_data/data_caches/mmcif_cache.json \
--no_workers 16
```
The chain-data-cache is used for filtering training samples and adjusting per-chain sampling probabilities and can be generated with the following script:
```bash
python $OF_DIR/scripts/generate_chain_data_cache.py \
pdb_data/mmcif_files \
pdb_data/data_caches/chain_data_cache.json \
--cluster_file alignment_data/all-seqs_clusters-40.txt \
--no_workers 16
```
# SoloSeq Inference
SoloSeq performs MSA-free sequence-to-structure prediction using [ESM-1b](https://github.com/facebookresearch/esm) embeddings.
To run inference for a sequence using the SoloSeq single-sequence model, you can either precompute ESM-1b embeddings in bulk, or you can generate them during inference.
For generating ESM-1b embeddings in bulk, use the provided script: [`scripts/precompute_embeddings.py`](https://github.com/aqlaboratory/openfold/blob/main/scripts/precompute_embeddings.py). The script takes a directory of FASTA files (one sequence per file) and generates ESM-1b embeddings in the same format and directory structure as required by SoloSeq. Following is an example command to use the script:
```shell
python scripts/precompute_embeddings.py fasta_dir/ embeddings_output_dir/
```
In the same per-label subdirectories inside `embeddings_output_dir`, you can also place `*.hhr` files (outputs from HHSearch), which contain details about the structures that you want to use as templates. If you do not place any such file, templates will not be used and only the ESM-1b embeddings will be used to predict the structure. If you want to use templates, you need to pass the PDB mmCIF dataset to the command.
Then download the SoloSeq model weights, e.g.:
```shell
bash scripts/download_openfold_soloseq_params.sh openfold/resources
```
Now, you are ready to run inference:
```shell
python run_pretrained_openfold.py \
fasta_dir \
data/pdb_mmcif/mmcif_files/ \
--use_precomputed_alignments embeddings_output_dir \
--output_dir ./ \
--model_device "cuda:0" \
--config_preset "seq_model_esm1b_ptm" \
--openfold_checkpoint_path openfold/resources/openfold_soloseq_params/seq_model_esm1b_ptm.pt
```
For generating the embeddings during inference, skip the `--use_precomputed_alignments` argument. The `*.hhr` files will be generated as well if you pass the paths to the relevant databases and tools, as specified in the command below. If you skip the database and tool arguments, HHSearch will not be used to find templates and only generated ESM-1b embeddings will be used to predict the structure.
```shell
python3 run_pretrained_openfold.py \
fasta_dir \
data/pdb_mmcif/mmcif_files/ \
--output_dir ./ \
--model_device "cuda:0" \
--config_preset "seq_model_esm1b_ptm" \
--openfold_checkpoint_path openfold/resources/openfold_soloseq_params/seq_model_esm1b_ptm.pt \
--uniref90_database_path data/uniref90/uniref90.fasta \
--pdb70_database_path data/pdb70/pdb70 \
--jackhmmer_binary_path lib/conda/envs/openfold_venv/bin/jackhmmer \
--hhsearch_binary_path lib/conda/envs/openfold_venv/bin/hhsearch \
--kalign_binary_path lib/conda/envs/openfold_venv/bin/kalign
```
For generating template information, you will need the UniRef90 and PDB70 databases and the JackHmmer and HHSearch binaries.
SoloSeq allows you to use the same flags and optimizations as the MSA-based OpenFold. For example, you can skip relaxation using `--skip_relaxation`, save all model outputs using `--save_outputs`, and generate output files in MMCIF format using `--cif_output`.
```{note}
Due to the nature of the ESM-1b embeddings, the sequence length for inference using the SoloSeq model is limited to 1022 residues. Sequences longer than that will be truncated.
```
# Training OpenFold
## Background
This guide covers how to train an OpenFold model for monomers. Some additional instructions are provided at the end for fine-tuning your model.
### Pre-requisites:
This guide requires the following:
- [Installation of OpenFold and dependencies](Installation.md) (including the jackhmmer and hhblits dependencies)
- A preprocessed dataset:
- For this guide, we will use the original OpenFold dataset which is available on RODA, processed with [these instructions](OpenFold_Training_Setup.md).
- GPUs configured with CUDA. Training OpenFold with CPUs only is not supported.
## Training a new OpenFold model
#### Basic command
For a dataset that has the default alignment file structure, e.g.
```
- $DATA_DIR
    ├── pdb_data
    │   ├── mmcifs
    │   │   ├── 3lrm.cif
    │   │   ├── 6kwc.cif
    │   │   └── ...
    │   ├── obsolete.dat
    │   ├── duplicate_pdb_chains.txt
    │   └── data_caches
    └── alignment_data
        └── alignments
            ├── 3lrm_A/
            ├── 3lrm_B/
            ├── 6kwc_A/
            └── ...
```
The basic command to train a new OpenFold model is:
```
python3 train_openfold.py $DATA_DIR/pdb_data/mmcifs $DATA_DIR/alignment_data/alignments $TEMPLATE_MMCIF_DIR $OUTPUT_DIR \
--max_template_date 2021-10-10 \
--train_chain_data_cache_path $DATA_DIR/pdb_data/data_caches/chain_data_cache.json \
--template_release_dates_cache_path $DATA_DIR/pdb_data/data_caches/mmcif_cache.json \
--config_preset initial_training \
--seed 42 \
--obsolete_pdbs_file_path $DATA_DIR/pdb_data/obsolete.dat \
--num_nodes 1 \
--gpus 4 \
--num_workers 4
```
The required arguments are:
- `mmcif_dir`: mmCIF files for the training set.
- `alignments_dir`: Alignments for the sequences in `mmcif_dir`; see the expected directory structure above.
- `template_mmcif_dir`: Template mmCIF files with structures, which can be the same directory as `mmcif_dir`. The `max_template_date` and `template_release_dates_cache_path` arguments specify which templates are allowed based on a date cutoff.
- `output_dir`: Where model checkpoint files and other outputs will be saved.
Commonly used flags include:
- `config_preset`: Specifies which selection of hyperparameters should be used for initial model training. Commonly used configs are defined in [`openfold/config.py`](https://github.com/aqlaboratory/openfold)
- `num_nodes` and `gpus`: Specifies number of nodes and GPUs available to train OpenFold.
- `seed`: Specifies the random seed
- `num_workers`: Number of CPU workers to assign for creating dataset examples
- `obsolete_pdbs_file_path`: Specifies obsolete pdb IDs that should be excluded from training.
- `val_data_dir` and `val_alignment_dir`: Specifies data directory and alignments for validation dataset.
```{note}
Note that `--seed` must be specified to correctly configure training examples on multi-GPU training runs
```
#### Train with OpenFold Dataset Configuration
If the [OpenFold alignment database](OpenFold_Training_Setup.md#2-creating-alignment-dbs-optional) setup is used, resulting in a data directory such as:
```
- $DATA_DIR
├── duplicate_pdb_chains.txt
├── pdb_data
└── mmcifs
├── 3lrm.cif
└── 6kwc.cif
└── alignment_data
└── alignment_db
├── alignment_db_0.db
├── alignment_db_1.db
...
├── alignment_db_9.db
└── alignment_db.index
```
The training command uses the `--alignment_index_path` argument to specify the `alignment_db.index` file, e.g.:
```
python3 train_openfold.py $DATA_DIR/pdb_data/mmcifs $DATA_DIR/alignment_data/alignment_db $TEMPLATE_MMCIF_DIR $OUTPUT_DIR \
--max_template_date 2021-10-10 \
--train_chain_data_cache_path $DATA_DIR/pdb_data/data_caches/chain_data_cache.json \
--template_release_dates_cache_path $DATA_DIR/pdb_data/data_caches/mmcif_cache.json \
--alignment_index_path $DATA_DIR/alignment_data/alignment_db/alignment_db.index \
--config_preset initial_training \
--seed 42 \
--obsolete_pdbs_file_path $DATA_DIR/pdb_data/obsolete.dat \
--num_nodes 1 \
--gpus 4 \
--num_workers 4
```
#### Additional command line flag options:
Here we provide brief descriptions for customizing your training run of OpenFold. A full description of all flags can be accessed by using the `--help` option of the script.
- **Use the DeepSpeed acceleration strategy:** `--deepspeed_config`. This option configures OpenFold to use custom DeepSpeed kernels. It requires a `deepspeed_config.json`; you can create your own or use the one in the OpenFold directory.
- **Use a validation dataset:** Specify validation database paths with `--val_data_dir` + `--val_alignment_dir`. Validation metrics will be evaluated on these datasets.
- **Use a self-distillation dataset:** Specify paths with `--distillation_data_dir` and `--distillation_alignment_dir` flags
- **Change specific parameters in the model or data setup:** `--experiment_config_json`. These parameters must be defined in the [`openfold/config.py`](https://github.com/aqlaboratory/openfold/blob/main/openfold/config.py). For example to change the crop size for training a model, you can write the following json:
```cropsize.json
{
"data.train.crop_size": 128
}
```
- **Configure training settings with PyTorch Lightning**
Some flags, e.g. `--precision` and `--max_epochs`, configure training behavior. See the PyTorch Lightning Trainer args section in the `--help` menu for more information and consult the [PyTorch Lightning documentation](https://lightning.ai/docs/pytorch/stable/).
- Precision: On A100s, OpenFold training works best with bfloat16 precision (e.g. `--precision bf16-mixed`)
- **Restart training from an existing checkpoint:** Use the `--resume_from_ckpt` flag to restart training from an existing checkpoint.
## Advanced Training Configurations
### Fine tuning from existing model weights
If you have existing model weights, you can fine tune the model by specifying a checkpoint path with `--resume_from_ckpt` and `--resume_model_weights_only` arguments, e.g.
```
python3 train_openfold.py $DATA_DIR/mmcifs $DATA_DIR/alignment.db $TEMPLATE_MMCIF_DIR $OUTPUT_DIR \
--max_template_date 2021-10-10 \
--train_chain_data_cache_path chain_data_cache.json \
--template_release_dates_cache_path mmcif_cache.json \
--config_preset finetuning \
--alignment_index_path $DATA_DIR/pdb/alignment_db.index \
--seed 4242022 \
--obsolete_pdbs_file_path obsolete.dat \
--num_nodes 1 \
--gpus 4 \
--num_workers 4 \
--resume_from_ckpt $CHECKPOINT_PATH \
--resume_model_weights_only
```
If you have model parameters from OpenFold v1.x, you may need to convert your checkpoint file or parameter. See [Converting OpenFold v1 Weights](convert_of_v1_weights.md) for more details.
### Using MPI
If MPI is configured on your system, and you would like to use MPI to train OpenFold models, you may do so with the following step:
1. Add the `mpi4py` package, which is available through pip and conda. Please see the [mpi4py documentation](https://pypi.org/project/mpi4py/) for more instructions on installation.
2. Add the `--mpi_plugin` flag to your training command.
### Training Multimer models
```{note}
Coming soon.
```
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = 'OpenFold'
copyright = '2024, OpenFold Team'
author = 'OpenFold Team'
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = [
'myst_parser',
]
templates_path = ['_templates']
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'furo'
html_static_path = ['_static']
myst_enable_extensions = ["colon_fence", "dollarmath", "amsmath"]
## Weights Renaming
As part of the [OpenFold v2 update](https://github.com/aqlaboratory/openfold/releases/tag/v2.0.0) with the integration of multimer prediction, certain model layers of the AlphaFold model were renamed. For example,
`module.model.template_angle_embedder.*` is now referred to as
`module.model.template_embedder.template_single_embedder.*`
If you have some checkpoints that were trained using OpenFold v1 or older, and now want to resume training on OpenFold v2, you may need to convert your checkpoints.
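For illustration, the conversion amounts to renaming keys in the checkpoint's state dict. The sketch below shows the idea for the single prefix mentioned above and assumes a Lightning-style checkpoint with a `state_dict` entry; the full mapping is handled by `scripts/convert_v1_to_v2_weights.py` (see below):
```python
import torch

# Sketch: rename OpenFold v1 layer names to their v2 equivalents.
# Only the single prefix mentioned above is shown; file paths are placeholders.
RENAMED_PREFIXES = {
    "module.model.template_angle_embedder.":
        "module.model.template_embedder.template_single_embedder.",
}

def rename_keys(state_dict):
    renamed = {}
    for key, value in state_dict.items():
        for old, new in RENAMED_PREFIXES.items():
            if key.startswith(old):
                key = new + key[len(old):]
                break
        renamed[key] = value
    return renamed

ckpt = torch.load("checkpoints/openfold_v1.ckpt", map_location="cpu")
ckpt["state_dict"] = rename_keys(ckpt["state_dict"])
torch.save(ckpt, "checkpoints/openfold_v1.converted.ckpt")
```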
## FAQ
### Do I need to convert my checkpoints / model weights?
If you want to run inference or resume training from a checkpoint that was trained with OpenFold V1, you will need to convert your checkpoint.
If you want to load model weights only, without starting from a specific time step, then you should not need to convert your checkpoints. Training will begin from `step=0` in this case. To do so, you'll need both the `--resume_from_ckpt` and `--resume_model_weights_only` flags. This example allows you to train starting from the pre-trained OpenFold weights:
```bash
$ python3 $OPENFOLD_DIR/train_openfold.py test_data_epoch/mmcifs test_data_epoch/alignments test_data_epoch/template_mmcifs $OUTPUT_DIR 2021-09-30 \
...
--resume_from_ckpt openfold/resources/openfold_params/finetuning_2.pt \
--resume_model_weights_only
```
### How do I convert my checkpoints?
Use [`scripts/convert_v1_to_v2_weights.py`](https://github.com/aqlaboratory/openfold/blob/main/scripts/convert_v1_to_v2_weights.py) e.g.
`python scripts/convert_v1_to_v2_weights.py checkpoints/6-209.ckpt checkpoints/6-209.ckpt.converted`
Then, to resume training, set the following flags:
```bash
$ python3 $OPENFOLD_DIR/train_openfold.py test_data_epoch/mmcifs test_data_epoch/alignments test_data_epoch/template_mmcifs $OUTPUT_DIR 2021-09-30 \
...
--resume_from_ckpt checkpoints/6-209.ckpt.converted
```
# OpenFold
```{figure} ../imgs/of_banner.png
:width: 900px
:align: center
:alt: Comparison of OpenFold and AlphaFold2 predictions to the experimental structure of PDB 7KDX, chain B.
```
Welcome to the Documentation for OpenFold, the fully open source, trainable, PyTorch-based reproduction of DeepMind's
[AlphaFold 2](https://github.com/deepmind/alphafold).
Here, you will find guides and documentation for:
- [Getting started with OpenFold](Installation.md)!
- Learn how to [run inference with OpenFold](Inference.md)
- [Train your own OpenFold models](Training_OpenFold.md)
- Find guidance for setup and running OpenFold in the [FAQ](FAQ.md).
We also have a [Colab notebook](https://colab.research.google.com/github/aqlaboratory/openfold/blob/main/notebooks/OpenFold.ipynb) that can be used for single structure / multimer prediction.
Some portions of the documentation are still under migration from the original README, which can be found [here](original_readme.md).
# Features
OpenFold carefully reproduces (almost) all of the features of the original open
source monomer (v2.0.1) and multimer (v2.3.2) inference code. The sole exception is
model ensembling, which fared poorly in DeepMind's own ablation testing and is being
phased out in future DeepMind experiments. It is omitted here for the sake of reducing
clutter. In cases where the *Nature* paper differs from the source, we always defer to the
latter.
OpenFold is trainable in full precision, half precision, or `bfloat16` with or without DeepSpeed,
and we've trained it from scratch, matching the performance of the original.
We've publicly released model weights and our training data &mdash; some 400,000
MSAs and PDB70 template hit files &mdash; under a permissive license. Model weights
are available via scripts in this repository while the MSAs are hosted by the
[Registry of Open Data on AWS (RODA)](https://registry.opendata.aws/openfold).
Try out running inference for yourself with our [Colab notebook](https://colab.research.google.com/github/aqlaboratory/openfold/blob/main/notebooks/OpenFold.ipynb).
OpenFold also supports inference using AlphaFold's official parameters, and
vice versa (see `scripts/convert_of_weights_to_jax.py`).
OpenFold has the following advantages over the reference implementation:
- **Faster inference** on GPU, sometimes by as much as 2x. The greatest speedups are achieved on Ampere or higher architecture GPUs.
- **Inference on extremely long chains**, made possible by our implementation of low-memory attention
([Rabe & Staats 2021](https://arxiv.org/pdf/2112.05682.pdf)). OpenFold can predict the structures of
sequences with more than 4000 residues on a single A100, and even longer ones with CPU offloading.
- **Custom CUDA attention kernels** modified from [FastFold](https://github.com/hpcaitech/FastFold)'s
kernels support in-place attention during inference and training. They use
4x and 5x less GPU memory than equivalent FastFold and stock PyTorch
implementations, respectively.
- **Efficient alignment scripts** using the original AlphaFold HHblits/JackHMMER pipeline or [ColabFold](https://github.com/sokrypton/ColabFold)'s, which uses the faster MMseqs2 instead. We've used them to generate millions of alignments.
- **FlashAttention** support greatly speeds up MSA attention.
- **DeepSpeed DS4Sci_EvoformerAttention kernel** is a memory-efficient attention kernel developed as part of a collaboration between OpenFold and the DeepSpeed4Science initiative. The kernel provides substantial speedups for training and inference, and significantly reduces the model's peak device memory requirement by 13X. The model is 15% faster during the initial training and finetuning stages, and up to 4x faster during inference.
# Copyright Notice
While AlphaFold's and, by extension, OpenFold's source code is licensed under
the permissive Apache Licence, Version 2.0, DeepMind's pretrained parameters
fall under the CC BY 4.0 license, a copy of which is downloaded to
`openfold/resources/params` by the installation script. Note that the latter
replaces the original, more restrictive CC BY-NC 4.0 license as of January 2022.
## Contributing
If you encounter problems using OpenFold, feel free to create an issue! We also
welcome pull requests from the community.
## Citing this Work
Please cite our paper:
```bibtex
@article {Ahdritz2022.11.20.517210,
author = {Ahdritz, Gustaf and Bouatta, Nazim and Floristean, Christina and Kadyan, Sachin and Xia, Qinghui and Gerecke, William and O{\textquoteright}Donnell, Timothy J and Berenberg, Daniel and Fisk, Ian and Zanichelli, Niccolò and Zhang, Bo and Nowaczynski, Arkadiusz and Wang, Bei and Stepniewska-Dziubinska, Marta M and Zhang, Shang and Ojewole, Adegoke and Guney, Murat Efe and Biderman, Stella and Watkins, Andrew M and Ra, Stephen and Lorenzo, Pablo Ribalta and Nivon, Lucas and Weitzner, Brian and Ban, Yih-En Andrew and Sorger, Peter K and Mostaque, Emad and Zhang, Zhao and Bonneau, Richard and AlQuraishi, Mohammed},
title = {{O}pen{F}old: {R}etraining {A}lpha{F}old2 yields new insights into its learning mechanisms and capacity for generalization},
elocation-id = {2022.11.20.517210},
year = {2022},
doi = {10.1101/2022.11.20.517210},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/10.1101/2022.11.20.517210},
eprint = {https://www.biorxiv.org/content/early/2022/11/22/2022.11.20.517210.full.pdf},
journal = {bioRxiv}
}
```
If you use OpenProteinSet, please also cite:
```bibtex
@misc{ahdritz2023openproteinset,
title={{O}pen{P}rotein{S}et: {T}raining data for structural biology at scale},
author={Gustaf Ahdritz and Nazim Bouatta and Sachin Kadyan and Lukas Jarosch and Daniel Berenberg and Ian Fisk and Andrew M. Watkins and Stephen Ra and Richard Bonneau and Mohammed AlQuraishi},
year={2023},
eprint={2308.05326},
archivePrefix={arXiv},
primaryClass={q-bio.BM}
}
```
Any work that cites OpenFold should also cite [AlphaFold](https://www.nature.com/articles/s41586-021-03819-2) and [AlphaFold-Multimer](https://www.biorxiv.org/content/10.1101/2021.10.04.463034v1) if applicable.
```{toctree}
:hidden:
:caption: Guides
Installation.md
Inference.md
Single_Sequence_Inference.md
Multimer_Inference.md
OpenFold_Training_Setup.md
Training_OpenFold.md
```
```{toctree}
:hidden:
:caption: Reference
Aux_seq_files.md
OpenFold_Parameters.md
FAQ.md
original_readme.md
```
# Setting Up OpenFold
In this guide, we will install OpenFold and its dependencies.
**Pre-requisites**
This package is currently supported for CUDA 11 and PyTorch 1.12. All dependencies are listed in the [`environment.yml`](https://github.com/aqlaboratory/openfold/blob/main/environment.yml) file.
At this time, only Linux systems are supported.
## Instructions
### Installation:
1. Clone the repository, e.g. `git clone https://github.com/aqlaboratory/openfold.git`
1. From the `openfold` repo:
- Create a [Mamba]("https://github.com/conda-forge/miniforge/releases/latest/download/) environment, e.g.
`mamba env create -n openfold_env -f environment.yml`
Mamba is recommended as the dependencies required by OpenFold are quite large and mamba can speed up the process.
- Activate the environment, e.g. `conda activate openfold_env`
1. Run the setup script to configure kernels and folding resources.
> scripts/install_third_party_dependencies.sh
1. Prepend the conda environment to `$LD_LIBRARY_PATH`, e.g.
`export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH`. You may optionally set this as a conda environment variable according to the [conda docs](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#saving-environment-variables) so that it is set each time the environment is activated.
1. Download parameters. We recommend using `openfold/resources` as the destination directory, as our unit tests will look for the weights there.
- For AlphaFold2 weights, use
> ./scripts/download_alphafold_params.sh <dest>
- For OpenFold weights, use :
> ./scripts/download_openfold_params.sh <dest>
- For OpenFold SoloSeq weights, use:
> ./scripts/download_openfold_soloseq_params.sh <dest>
### Checking your build with unit tests:
To test your installation, you can run OpenFold unit tests. Make sure that the OpenFold and AlphaFold parameters have been downloaded and that they are located (or symlinked) in the directory `openfold/resources`.
Run with the following script:
> scripts/run_unit_tests.sh
The script is a thin wrapper around Python's `unittest` suite, and recognizes `unittest` arguments. E.g., to run a specific test verbosely:
> scripts/run_unit_tests.sh -v tests.test_model
**AlphaFold comparison tests:**
Certain tests perform equivalence comparisons with the AlphaFold implementation. Running these tests requires an environment with both AlphaFold 2.0.1 and OpenFold installed, which is not covered in this guide. These tests are skipped by default if no installation of AlphaFold is found.
## Environment specific modifications
### CUDA 12
To use OpenFold in a CUDA 12 environment rather than CUDA 11, use the [`pl_upgrades`](https://github.com/aqlaboratory/openfold/tree/pl_upgrades) branch rather than the main branch in step 1, i.e. replace the URL in step 1 with https://github.com/aqlaboratory/openfold/tree/pl_upgrades.
Then follow the rest of the steps of the [Installation](#installation) guide.
### MPI
To use OpenFold with MPI support, you will need to add the package [`mpi4py`](https://pypi.org/project/mpi4py/). This can be done with pip in your OpenFold environment, e.g. `$ pip install mpi4py`.
### Install OpenFold parameters without aws
If you don't have access to `aws` on your system, you can use a different download source:
- HuggingFace (requires `git-lfs`): `scripts/download_openfold_params_huggingface.sh`
- Google Drive: `scripts/download_openfold_params_gdrive.sh`
### Docker setup
A `Dockerfile` is provided to build an OpenFold Docker image. Additional notes for setting up a Docker container for OpenFold and running inference can be found [here](original_readme.md#building-and-using-the-docker-container).